Analytic Methods in Partial Differential Equations

Version::31102011

A collection of notes and insights into the world of differential equations.

Gregory T. von Nessi
October 31, 2011
Contents

Where I'm Coming From

Part I: Classical Approaches

Chapter 1: Physical Motivations
  1.1 Introduction
  1.2 The Heat Equation
    1.2.1 A Note on Physical Assumptions
  1.3 Classical Model for Traffic Flow
  1.4 The Transport Equation
    1.4.1 Method of Characteristics: A First Look
    1.4.2 Duhamel's Principle
  1.5 Minimal Surfaces
    1.5.1 Divergence Theorem and Integration by Parts
  1.6 The Wave Equation
  1.7 Classification of PDE
  1.8 Exercises

Chapter 2: Laplace's Equation ∆u = 0 and Poisson's Equation −∆u = f
  2.1 Laplace's Eqn. in Polar Coordinates
  2.2 The Fundamental Solution of the Laplacian
  2.3 Properties of Harmonic Functions
    2.3.1 WMP for Laplace's Equation
    2.3.2 Poisson Integral Formula
    2.3.3 Strong Maximum Principle
  2.4 Estimates for Derivatives of Harmonic Functions
    2.4.1 Mean-Value and Maximum Principles
    2.4.2 Regularity of Solutions
    2.4.3 Estimates on Harmonic Functions
    2.4.4 Analyticity of Harmonic Functions
    2.4.5 Harnack's Inequality
  2.5 Existence of Solutions
    2.5.1 Green's Functions
    2.5.2 Green's Functions by Reflection Methods
      2.5.2.1 Green's Function for the Halfspace
      2.5.2.2 Green's Function for Balls
    2.5.3 Energy Methods for Poisson's Equation
    2.5.4 Perron's Method of Subharmonic Functions
      2.5.4.1 Boundary Behavior
  2.6 Solution of the Dirichlet Problem by the Perron Process
    2.6.1 Convergence Theorems
    2.6.2 Generalized Definitions & Harmonic Lifting
    2.6.3 Perron's Method
  2.7 Exercises

Chapter 3: The Heat Equation
  3.1 Fundamental Solution
  3.2 The Non-Homogeneous Heat Equation
    3.2.1 Duhamel's Principle
  3.3 Mean-Value Formula for the Heat Equation
  3.4 Regularity of Solutions
    3.4.1 Application of the Representation to Local Estimates
  3.5 Energy Methods
  3.6 Backward Heat Equation
  3.7 Exercises

Chapter 4: The Wave Equation
  4.1 D'Alembert's Representation of Solution in 1D
    4.1.1 Reflection Argument
    4.1.2 Geometric Interpretation (h ≡ 0)
  4.2 Representations of Solutions for n = 2 and n = 3
    4.2.1 Spherical Averages and the Euler-Poisson-Darboux Equation
    4.2.2 Kirchoff's Formula in 3D
    4.2.3 Poisson's Formula in 2D
    4.2.4 Representation Formulas in 2D/3D
  4.3 Solutions of the Non-Homogeneous Wave Equation
    4.3.1 Duhamel's Principle
    4.3.2 Solution in 3D
  4.4 Energy Methods
    4.4.1 Finite Speed of Propagation
  4.5 Exercises

Chapter 5: Fully Non-Linear First Order PDEs
  5.1 Introduction
  5.2 Method of Characteristics
    5.2.1 Strategy to Prove that the Method of Characteristics Gives a Solution of F(Du, u, x) = 0
  5.3 Exercises

Chapter 6: Conservation Laws
  6.1 The Rankine-Hugoniot Condition
    6.1.1 Information about Discontinuous Solutions
  6.2 Shocks, Rarefaction Waves and Entropy Conditions
    6.2.1 Burger's Equation
  6.3 Viscous Approximation of Shocks
  6.4 Geometric Solution of the Riemann Problem for General Fluxes
  6.5 A Uniqueness Theorem by Lax
  6.6 Entropy Function for Conservation Laws
  6.7 Conservation Laws in nD and Kružkov's Stability Result
  6.8 Exercises

Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
  7.1 Classification of 2nd Order PDE in Two Variables
  7.2 Maximum Principles for Elliptic Equations of 2nd Order
    7.2.1 Weak Maximum Principles
      7.2.1.1 WMP for Purely 2nd Order Elliptic Operators
      7.2.1.2 WMP for General Elliptic Equations
    7.2.2 Strong Maximum Principles
  7.3 Maximum Principles for Parabolic Equations
    7.3.1 Weak Maximum Principles
    7.3.2 Strong Maximum Principles
  7.4 Exercises

Part II: Functional Analytic Methods in PDE

Chapter 8: Facts from Functional Analysis and Measure Theory
  8.1 Introduction
  8.2 Banach Spaces and Linear Operators
  8.3 Fredholm Alternative
  8.4 Spectral Theorem for Compact Operators
  8.5 Dual Spaces and Adjoint Operators
  8.6 Hilbert Spaces
    8.6.1 Fredholm Alternative in Hilbert Spaces
  8.7 Weak Convergence and Compactness; the Banach-Alaoglu Theorem
  8.8 Key Words from Measure Theory
    8.8.1 Basic Definitions
    8.8.2 Integrability
    8.8.3 Calderon-Zygmund Decomposition
    8.8.4 Distribution Functions
  8.9 Exercises

Chapter 9: Function Spaces
  9.1 Introduction
  9.2 Hölder Spaces
  9.3 Lebesgue Spaces
    9.3.1 Convolutions and Mollifiers
    9.3.2 The Dual Space of Lp
    9.3.3 Weak Convergence in Lp
  9.4 Morrey and Campanato Spaces
    9.4.1 L∞ Campanato Spaces
  9.5 The Spaces of Bounded Mean Oscillation
  9.6 Sobolev Spaces
    9.6.1 Calculus with Sobolev Functions
    9.6.2 Boundary Values, Boundary Integrals, Traces
    9.6.3 Extension of Sobolev Functions
    9.6.4 Traces
    9.6.5 Sobolev Inequalities
    9.6.6 The Dual Space of H₀¹ = W₀^{1,2}
  9.7 Sobolev Imbeddings
    9.7.1 Campanato Imbeddings
    9.7.2 Compactness of Sobolev/Morrey Imbeddings
    9.7.3 General Sobolev Inequalities
  9.8 Difference Quotients
  9.9 A Note about Banach Algebras
  9.10 Exercises

Chapter 10: Linear Elliptic PDE
  10.1 Existence of Weak Solutions for Laplace's Equation
  10.2 Existence for General Elliptic Equations of 2nd Order
    10.2.1 Application of the Fredholm Alternative
    10.2.2 Spectrum of Elliptic Operators
  10.3 Elliptic Regularity I: Sobolev Estimates
    10.3.1 Caccioppoli Inequalities
    10.3.2 Interior Estimates
    10.3.3 Global Estimates
  10.4 Elliptic Regularity II: Schauder Estimates
    10.4.1 Systems with Constant Coefficients
    10.4.2 Systems with Variable Coefficients
    10.4.3 Interior Schauder Estimates for Elliptic Systems in Non-divergence Form
    10.4.4 Regularity Up to the Boundary: Dirichlet BVP
      10.4.4.1 The Nonvariational Case
    10.4.5 Existence and Uniqueness of Elliptic Systems
  10.5 Estimates for Higher Derivatives
  10.6 Eigenfunction Expansions for Elliptic Operators
  10.7 Applications to Parabolic Equations of 2nd Order
    10.7.1 Outline of Evans' Approach
    10.7.2 Evans' Approach to Existence
  10.8 Neumann Boundary Conditions
    10.8.1 Fredholm's Alternative and Weak Solutions of the Neumann Problem
    10.8.2 Fredholm Alternative Review
  10.9 Semigroup Theory
  10.10 Applications to 2nd Order Parabolic PDE
  10.11 Exercises

Chapter 11: Distributions and Fourier Transforms
  11.1 Distributions
    11.1.1 Distributions on C∞ Functions
    11.1.2 Tempered Distributions
    11.1.3 Distributional Derivatives
  11.2 Products and Convolutions of Distributions; Fundamental Solutions
    11.2.1 Convolutions of Distributions
    11.2.2 Derivatives of Convolutions
    11.2.3 Distributions as Fundamental Solutions
  11.3 Fourier Transform of S(Rⁿ)
  11.4 Definition of Fourier Transform on L²(Rⁿ)
  11.5 Applications of FTs: Solutions of PDEs
  11.6 Fourier Transform of Tempered Distributions
  11.7 Exercises

Chapter 12: Variational Methods
  12.1 Direct Method of Calculus of Variations
  12.2 Integral Constraints
    12.2.1 The Implicit Function Theorem
    12.2.2 Nonlinear Eigenvalue Problems
  12.3 Uniqueness of Solutions
  12.4 Energies
  12.5 Weak Formulations
  12.6 A General Existence Theorem
  12.7 A Few Applications to Semilinear Equations
  12.8 Regularity
    12.8.1 DeGiorgi-Nash Theorem
    12.8.2 Second Derivative Estimates
    12.8.3 Remarks on Higher Regularity
  12.9 Exercises

Part III: Modern Methods for Fully Nonlinear Elliptic PDE

Chapter 13: Non-Variational Methods in Nonlinear PDE
  13.1 Viscosity Solutions
  13.2 Alexandrov-Bakel'man Maximum Principle
    13.2.1 Applications
      13.2.1.1 Linear Equations
      13.2.1.2 Monge-Ampère Equation
      13.2.1.3 General Linear Equations
      13.2.1.4 Oblique Boundary Value Problems
  13.3 Local Estimates for Linear Equations
    13.3.1 Preliminaries
    13.3.2 Local Maximum Principle
    13.3.3 Weak Harnack Inequality
    13.3.4 Hölder Estimates
    13.3.5 Boundary Gradient Estimates
  13.4 Hölder Estimates for Second Derivatives
    13.4.1 Interior Estimates
    13.4.2 Function Spaces
    13.4.3 Interpolation Inequalities
    13.4.4 Global Estimates
    13.4.5 Generalizations
  13.5 Existence Theorems
    13.5.1 Method of Continuity
    13.5.2 Uniformly Elliptic Case
    13.5.3 Monge-Ampère Equation
    13.5.4 Degenerate Monge-Ampère Equation
    13.5.5 Boundary Curvatures
    13.5.6 General Boundary Values
  13.6 Applications
    13.6.1 Isoperimetric Inequalities
  13.7 Exercises

Appendices
  Appendix A: Notation

Bibliography

Index
Where I'm Coming From
You may think me a freak or just completely insane, but I love partial differential equations (PDE). It is important that I get this out quickly because I don't want you to get all committed to reading this book and then start thinking I've just started going all weird on you half-way through. I believe in being honest in relationships, and writing a textbook on PDE should be no different... right? So, if you are going to have issues with me being... excessively demonstrative... with this subject, it is probably best to put the book down now and walk away... slowly.

When I was going through school, I was extremely fortunate to have all of my PDE courses (I took many) taught by really awesome professors; and it's probably fair to say that this continues to influence my thoughts and feelings toward the subject to this very day. My professors were not the type to drone out solution estimates or regurgitate whatever mathematical progression was necessary to complete a proof. Instead, I was taught PDE from various intuitive perspectives. It's not to say that there was no rigour in my education (there was quite a bit); but my teachers never let me lose sight of the bigger picture, be it mathematical, physical or computational in nature.

This book embodies my own endeavour to share with you the enthusiasm and intuition that I have gained in this area of maths. I am no stranger to the idea that PDE is generally viewed as a terrifying subject that only masochists or the clinically insane truly enjoy. While I am going to refrain from commenting on my own mental stability (or lack thereof), I will say that I believe that PDE (and mathematics in general) can be understood by anyone. It's not to say that I believe everyone should be a mathematician; but I think if someone likes the subject, they can learn it. This is where I am coming from.
As much as I would like to present a 10,000 page self-contained treatise on the subject
of PDE, I am told that I will be dead before such a manuscript would get through peer
review. Given that I am writing this book for fame and money, having this book published
post-mortem is really not a viable option. So, I have decided to focus on the functional-analytic foundation of PDE in this book; but this will require you to know some things about
real analysis and measure theory to suck out all of the information contained in these pages.
However, I still encourage anyone to at least flip through this book (or download/steal it...
Go ahead be a pirate! ARGHHH!) because my hope is that you will still gain some insight
into this really awesome and beautiful subject. Even if you have no understanding of maths,
I still encourage you to read the words and try to absorb the maths by some osmotic means.
Just keep in mind that no matter how scary the maths looks, the underlying concepts are
always within your grasp to understand. I believe in you!
If you get through this book, will you love PDE? That I cannot honestly say. To me, PDE
is like Opera (the music not the web browser), in the sense that (to rip off the sentiment
from Pretty Woman) some fall in love with Opera the first time they hear it, while others
can grow to appreciate it over time but never truly love it. Again, I believe you have all
the abilities to grasp every aspect of PDE, but whether or not you love or appreciate it is
intrinsic to your personality (I feel). My hope is that by reading this book you are able
to figure out whether you are a lover of PDE or lean more toward appreciating the subject
(both are great). As for myself, PDE represents a new perspective by which I can see not
only abstract ideas but also every aspect of the universe that surrounds me. If you can at
least gain a glimpse of that perspective by reading this book, I will be happy.
As I said earlier, my enthusiasm for this subject has been cultivated by the teachings of
great professors from my past. I just want to thank Kiwi (James Graham-Eagle), Stephen
(Pennell), (Louis) Rossi, Georg (Dolzmann) and Neil (Trudinger) for robbing me of my sanity
and showing me that I am a lover of PDE.
Finally, I am initially distributing this work under an international Creative Commons
license that allows for the free distribution of these notes (but prevents commercial usages and modifications). This license enables me to make these notes available, as I
continue the slow process of their revision. The license mark is at the footer of each
page, and you can find out more information by visiting the Creative Commons website
at http://creativecommons.org.
Gregory T. von Nessi Jr.
Part I: Classical Approaches
Chapter 1: Physical Motivations
“. . . because it’s dull you TWIT; it’ll ’urt more!”
— Alan Rickman
Contents
1.1 Introduction
1.2 The Heat Equation
1.3 Classical Model for Traffic Flow
1.4 The Transport Equation
1.5 Minimal Surfaces
1.6 The Wave Equation
1.7 Classification of PDE
1.8 Exercises

1.1 Introduction
As the chapter title indicates, the following material is meant to simply acquaint the reader
with the concept of Partial Differential Equations (PDE). The chapter starts out with a few
simple physical derivations that motivate some of the most fundamental and famous PDE. In
some of these examples analytic techniques, such as Duhamel’s Principle, are introduced to
whet the reader’s appetite for the material forthcoming. After this initial motivation, basic
definitions and classifications of PDE are presented before moving on to the next chapter.
1.2 The Heat Equation
One of the most, if not THE most, famous PDE is the Heat Equation. Obviously, this
equation is somehow connected with the physical concept of heat energy, as the name not-so-subtly suggests. The natural questions now are: what is the heat equation and where
does it come from? To answer these, we will first construct a physical model; but before we
proceed with this, let us define some notation.
[Figure: a heat-conducting body Ω containing an arbitrary ball B ⊂ Ω]
In our model, take Ω to be a heat conducting body, ρ(x) as the mass density at x ∈ Ω,
c(x) is the specific heat at x ∈ Ω, and u(x, t) is the temperature at point x and time t. Also
take B to be an arbitrary ball in Ω.
In order to construct any physical model, one has to make some assumption about the
reality they are trying to replicate. In our model we assume that heat energy is conserved in
any subset of Ω; and in particular, is conserved in B.
With this in mind, we first consider the total heat energy in B, which is
$$\int_B \rho(x)c(x)u(x,t)\,dx;$$
and the rate of change of the heat energy in B is
$$\frac{d}{dt}\int_B \rho(x)c(x)u(x,t)\,dx.$$
If we know the heat flux rate, $\vec F(x,t)$ (a vector-valued function), across a surface element A centered at x with normal $\vec\nu(x)$, then we know the flux across the surface is given by:
$$|A|\cdot\vec F(x,t)\cdot\vec\nu(x).$$
Thus, using the assumption that heat energy is conserved in B, we calculate
$$\frac{d}{dt}\int_B \rho(x)c(x)u(x,t)\,dx = -\int_{\partial B}\vec F(x,t)\cdot\vec\nu(x)\,dS = -\int_B \operatorname{div}\vec F(x,t)\,dx.$$
Upon rewriting this equality, we see that
$$\int_B \underbrace{\big(\rho(x)c(x)u_t(x,t) + \operatorname{div}\vec F(x,t)\big)}_{\text{turns out to be smooth}}\,dx = 0.$$
Since B is arbitrary, this last equation must be true for all balls B ⊂ Ω. We can now say
$$\rho(x)c(x)u_t(x,t) + \operatorname{div}\vec F(x,t) = 0 \quad\text{in }\Omega. \tag{1.1}$$
We seek u, but we have only one equation that involves the unknown heat flux $\vec F$; in order to proceed with our model, we must make another assumption based on what (we think) we know of reality. So we make the following constitutive assumption as to what the heat flux "looks like" mathematically:
$$\vec F(x,t) = -k(x)\, D_x u(x,t).$$
(See Appendix A for notation.) Plugging this constitutive assumption into (1.1), we get Fourier's Law of Cooling:
$$\rho(x)c(x)u_t(x,t) - \operatorname{div}\big(k(x)\, D_x u(x,t)\big) = 0. \tag{1.2}$$
To further simplify the problem, we consider the special case that ρ(x), c(x) and k(x) are all constant; take them to be ρ, c and k respectively. Now, we can further manipulate the second term of the LHS of (1.2):
$$\operatorname{div} Du = \operatorname{div}\left(\frac{\partial u}{\partial x_1},\ldots,\frac{\partial u}{\partial x_n}\right) = \frac{\partial^2 u}{\partial x_1^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2} = \Delta u \quad (\equiv \nabla^2 u).$$
Thus we conclude that (1.2) can be written as
$$\rho c\, u_t(x,t) = k\,\Delta u \quad\Longrightarrow\quad u_t(x,t) = K\,\Delta u, \tag{1.3}$$
where K = k/(ρc) is the diffusion constant. (1.3) is the legendary Heat Equation in all its glory.

We shall see later that the solutions of the Heat Equation will evolve to a steady state which does not change in time. These steady state solutions thus solve
$$\Delta u = 0. \tag{1.4}$$
This is the most ubiquitous equation in all of physics. This is Laplace's Equation.
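Although nothing in this chapter depends on numerics, a quick finite-difference experiment makes both claims above tangible: solutions of (1.3) smooth out in time and approach a steady state solving (1.4). The sketch below is purely illustrative and not part of the original derivation; the grid size, time step, boundary data and initial bump are arbitrary choices.

```python
import numpy as np

# Explicit finite differences for u_t = K u_xx on (0,1) with u = 0 at both ends.
# Illustrative only; stability of this scheme needs K*dt/dx**2 <= 1/2.
K, N, dt, steps = 1.0, 100, 2.0e-5, 20000
x = np.linspace(0.0, 1.0, N + 1)
dx = x[1] - x[0]
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # rough initial temperature bump

for _ in range(steps):
    u[1:-1] += K * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# the profile decays toward the steady state u ≡ 0, which indeed satisfies ∆u = 0
print(np.max(np.abs(u)))
```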
1.2.1 A Note on Physical Assumptions
[The reader may skip this section if they feel comfortable with the concepts of conservation
laws and constitutive assumptions.]
In our derivation of the Heat Equation, we used two physical observations to construct
a model of heat flow. Specifically, we first assumed that heat energy was conserved; this is a particular manifestation of what is known as a conservation law. This was followed by a constitutive assumption based on the actual observed behavior of heat. Considering conservation laws, it is observed that many quantities are conserved in nature; examples are conservation of
mass/energy, momentum, angular momentum etc. It turns out that these conservation laws
correspond to specific symmetries (via Noether’s Theorem). Physics, as we know it, is constructed off of the assumed validity of these conservation laws; and it is through these laws
that one finds that conclusions in physics manifest in the form of PDE. Some examples are
Schrödinger’s Equation, the Heat Equation (via the argument above), Einstein’s Equations,
and Maxwell’s Equation’s, to name a few.
In the above derivation, the use of the conservation of energy did not depend on the
fact that we were trying to model heat energy. Indeed, our model lost its generality (i.e.
pertaining to the general flow of energy) via the constitutive assumption, which is based on
the observations pertaining specifically to heat flow. In the above derivation, the constitutive
assumption was simply a mathematical way of saying that heat energy is observed to move
from a hot to a cold object, and that the rate of this flow will increase with the temperature
differential present. It is worth trying to understand this physically, through the example of
your own sense of “temperature”. To do this, one first should realize that when one feels
“temperature”, they are really feeling the heat energy flux on the surface of their skin. This
is why people with hypothermia cease to feel “cold”. In this situation their body’s internal
temperature has dropped below the normal 98.6◦ F. As a result, the heat energy flux across
the surface of the skin has decreased as the temperature differential between the person’s
body and outside air has decreased.
Now, let us move on to another model to try and solidify the ideas of conservation laws
and constitutive assumptions.
1.3 Classical Model for Traffic Flow
We will now proceed to construct a model for (very) simple traffic flow in the same spirit as
the heat equation was derived; i.e. our model will be born out of a conservation law and a
constitutive assumption.
First, we define ρ(x, t) to represent the density of cars on an ideal highway (one that is straight, has a single lane and no ramps). Here, we take the highway to be represented by the
x-axis. Since we do not expect cars to be created or destroyed (hopefully) on our highway,
it is reasonable to assume that ρ(x, t) represents a conserved quantity. The corresponding
conservation law is written as
$$\rho_t(x,t) + \operatorname{div}\vec F(x,t) = 0.$$
Again, $\vec F$ represents the flux of car density at any given point in space-time:
$$\vec F(x,t) = \rho(x,t)\,\vec v(x,t),$$
where $\vec v(x,t)$ is the velocity of cars at a given point in space-time. With that, we rewrite our conservation law as
$$\rho_t + (\rho\, v)_x = 0. \tag{1.5}$$
Now it is time to bring in a constitutive assumption based on traffic flow observations. The
assumption we will use comes from the Lighthill-Whitham-Richards model of traffic flow,
and this corresponds to $\vec v(x,t)$ assumed to have the form
$$\vec v(x,t) = v_{\max}\left(1 - \frac{\rho(x,t)}{\rho_{\max}}\right)\hat x,$$
where $v_{\max}$ is the maximum car velocity (speed limit) and $\rho_{\max}$ is the maximum density of cars (parked bumper-to-bumper). Looking at this assumption, we see that this model rather naïvely indicates that car velocity decreases linearly with car density; cars approach the speed limit as their density goes to zero and, conversely, slow to a stop as they get bumper-to-bumper. Using this constitutive assumption with our conservation law (1.5) yields
$$\rho_t + v_{\max}\left(\rho - \frac{\rho^2}{\rho_{\max}}\right)_x = 0 \quad\Longrightarrow\quad \rho_t + v_{\max}\left(1 - \frac{2\rho}{\rho_{\max}}\right)\rho_x = 0. \tag{1.6}$$
It turns out that this PDE predicts that a continuous density profile can develop a discontinuity in finite time, i.e. a car pile-up is an inevitability on an ideal highway!
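The finite-time breakdown can be estimated with almost no machinery: ρ is constant along characteristics, which travel at speed $v_{\max}(1 - 2\rho/\rho_{\max})$, so a shock forms when two characteristics first cross. The short sketch below (my own illustration, with an arbitrarily chosen smooth initial profile) computes that crossing time.

```python
import numpy as np

# For (1.6), rho is constant along characteristics x(t) = x0 + c(rho0(x0)) t with
# characteristic speed c(rho) = v_max (1 - 2 rho / rho_max).  Characteristics first
# cross (a shock forms) at t* = -1 / min_x0 d/dx0 [ c(rho0(x0)) ].
v_max, rho_max = 1.0, 1.0
x0 = np.linspace(-10.0, 10.0, 200001)
rho0 = 0.5 * rho_max / (1.0 + np.exp(-x0))   # smooth ramp from light to heavy traffic

c = v_max * (1.0 - 2.0 * rho0 / rho_max)     # characteristic speeds
slope = np.gradient(c, x0)
t_star = -1.0 / slope.min()
print(f"characteristics first cross (shock forms) at t ~ {t_star:.2f}")
```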
1.4 The Transport Equation

Here we will again present another PDE via a physical derivation. In this model we seek to understand the dispersion of an airborne pollutant in the presence of a constant and uni-directional wind. For this, we take u(x, t) to be the pollutant density, and $\vec b$ to be the velocity of the wind, which we are assuming to be constant in both space and time.

Notation: From this point forward, the vector notation over the symbols will be dropped, as the context will indicate whether or not a quantity is a vector. With this in mind, we acknowledge that x may indeed represent a spatial variable in any number of dimensions (although for physical relevance, x ∈ R³; this restriction is not necessary in our mathematical derivation).

If we assume that the pollutant doesn't react chemically with the surrounding air, then its total mass is conserved:
$$u_t + \operatorname{div} F(x,t) = 0, \qquad\text{where}\qquad F(x,t) = u(x,t)\,b.$$
Again, the form of the pollutant density flux is borne out of a constitutive assumption regarding the dispersion of the pollutant. It is easy to see that this assumption corresponds to the pollutant simply being carried along by the wind. Combining the above conservation law and constitutive assumption yields
$$u_t + Du\cdot b = 0,$$
which is called the Transport Equation. Viewed in a space-time setting, this equation corresponds to the directional derivative of u (as a function of x and t) being zero in the direction (b, 1):
$$\underbrace{(Du,\, u_t)\cdot(b,\, 1)}_{\text{directional derivative}} = 0. \tag{1.7}$$
Before moving on to another PDE, let us see if we can make any progress in terms of
solving (1.7).
1.4.1 Method of Characteristics: A First Look
Expect u = constant on the integral curves. The curve through (x, t) is γ(s) = (x + sb, t + s):
$$z(s) = u(\gamma(s)) = u(x + sb,\, t + s),$$
$$\dot z(s) = Du(x + sb,\, t + s)\cdot b + u_t(x + sb,\, t + s) = 0.$$
If we know the concentration of the pollutant at t = 0, i.e. we are solving the initial value problem
$$u_t + Du\cdot b = 0, \qquad u(x, 0) = g(x),$$
then u(x, t) = u(γ(0)) by definition of γ, and u(γ(0)) = u(γ(s)) for all s (because u is constant on the curve γ(s)). Thus u(x, t) is the value of g at the intersection of the line γ(s) with the plane Rⁿ × {t = 0}, reached at s = −t:
$$u(x, t) = u(x - tb,\, 0) = g(x - tb).$$
Remarks:

1. If you want to check that u(x, t) = g(x − tb) is a solution of u_t + Du · b = 0, then we need that g is differentiable. The formula also makes sense for g not differentiable, and u(x, t) = g(x − tb) would then be a so-called generalized or weak solution.

2. We were solving the PDE by solving the ODE
$$\dot z(s) = 0, \qquad z(-t) = u(\gamma(-t)) = u(x - tb,\, 0) = g(x - tb).$$
This is a general concept which is called the method of characteristics.
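The first remark is easy to confirm symbolically; the snippet below (not part of the original notes, and restricted to one space dimension with a generic profile g) checks that u(x, t) = g(x − tb) annihilates the transport operator.

```python
import sympy as sp

# Check symbolically that u(x, t) = g(x - t b) solves u_t + b u_x = 0 in one
# space dimension (illustrative; g is an arbitrary smooth profile, b a constant).
x, t, b = sp.symbols('x t b')
g = sp.Function('g')
u = g(x - t * b)
print(sp.simplify(sp.diff(u, t) + b * sp.diff(u, x)))   # prints 0
```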
1.4.2 Duhamel's Principle

If you can solve the homogeneous equation u_t + Du · b = 0, then you can solve the inhomogeneous equation u_t + Du · b = f(x, t). Here f(x, t) is the source of the pollutant (e.g. a power plant).

Idea: Suppose the pollutant were released only once per hour; then you could trace the evolution of each release by the homogeneous equation, and you could add this evolution to the evolution of the initial data. The equation we want to solve is
$$u_t + Du\cdot b = f, \qquad u(x, 0) = g(x).$$
Since the sum of two solutions is a solution (since the equation is linear) we can write u = v + w, where
$$v_t + Dv\cdot b = 0, \quad v(x, 0) = g(x), \qquad\text{and}\qquad w_t + Dw\cdot b = f, \quad w(x, 0) = 0.$$
Check:
$$u_t + Du\cdot b = (v + w)_t + D(v + w)\cdot b = \underbrace{(v_t + Dv\cdot b)}_{=0} + \underbrace{(w_t + Dw\cdot b)}_{=f} = f,$$
$$u(x, 0) = v(x, 0) + w(x, 0) = g(x) + 0 = g(x).$$
Suspect:
$$w(x, t) = \int_0^t w(x, t; s)\,ds,$$
where
$$w_t(x, t; s) + Dw(x, t; s)\cdot b = 0 \quad\text{for } x \in \mathbb{R}^n,\ t > s, \qquad w(x, s; s) = f(x, s).$$
Solutions:
$$v(x, t) = g(x - tb), \qquad w(x, t; s) = f(x + (s - t)b,\, s).$$
Expectation:
$$u(x, t) = g(x - tb) + \int_0^t f(x + (s - t)b,\, s)\,ds.$$

Theorem 1.4.1: Let f and g be smooth. Then the solution of the initial value problem
$$u_t + Du\cdot b = f, \qquad u(x, 0) = g$$
is given by
$$u(x, t) = g(x - tb) + \int_0^t f(x + (s - t)b,\, s)\,ds.$$

Proof: Differentiate explicitly.
1.5 Minimal Surfaces
Let D be the unit disk in the plane, and consider all surfaces that are graphs of functions on D with a given fixed boundary curve.

[Figure: a surface given by the graph of u over the unit disk D, with a fixed boundary curve]

Question: Find a surface with minimal surface area A(u), where
$$A(u) = \iint_D \sqrt{1 + |Du|^2}.$$

Idea: Find a PDE that is a necessary condition for A(u) to be minimal. If you take any φ which is zero on ∂D, then A(u + εφ) ≥ A(u). Define
$$g(\varepsilon) = A(u + \varepsilon\varphi) \ge A(u) \qquad \forall\varepsilon \in \mathbb{R},$$
and we have a minimum (as a function of ε!) at ε = 0. Thus g′(0) = 0:

[Figure: plot of g(ε) = A(u + εφ) against ε, with its minimum at ε = 0]

$$0 = \frac{d}{d\varepsilon}\bigg|_{\varepsilon=0}\iint_D \sqrt{1 + |D(u + \varepsilon\varphi)|^2}
= \frac{1}{2}\iint_D \frac{2Du\cdot D\varphi + 2\varepsilon|D\varphi|^2}{\sqrt{1 + |Du + \varepsilon D\varphi|^2}}\,dx\bigg|_{\varepsilon=0}$$
$$= \iint_D \frac{Du\cdot D\varphi}{\sqrt{1 + |Du|^2}}\,dx
= -\iint_D \operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right)\varphi\,dx.$$

This holds for all φ, so
$$\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0 \qquad\text{(Minimal Surface Equation)}$$

1.5.1 Divergence Theorem and Integration by Parts
Theorem 1.5.1 (Divergence Theorem): Let $\vec F$ be a vector field on a domain Ω with smooth boundary ∂Ω = S and outward unit normal $\vec\nu(x)$. Then
$$\int_\Omega \operatorname{div}\vec F\,dx = \int_{\partial\Omega}\vec F\cdot\vec\nu\,dS. \qquad\text{(Divergence Theorem)}$$

[Figure: a domain Ω with boundary ∂Ω and outward unit normal ν(x)]

Example: Take $\vec F(x) = w(x)\vec v(x)$, where $\vec v(x)$ is a vector field and w(x) is a scalar. Then
$$\text{LHS} = \int_\Omega \operatorname{div}\big(w(x)\vec v(x)\big)\,dx
= \int_\Omega \sum_{i=1}^n \partial_i\big(w(x)v_i(x)\big)\,dx
= \int_\Omega \sum_{i=1}^n \big[(\partial_i w(x))v_i(x) + w(x)(\partial_i v_i(x))\big]\,dx
= \int_\Omega \big[Dw\cdot\vec v + w(x)\operatorname{div}\vec v(x)\big]\,dx,$$
$$\text{RHS} = \int_{\partial\Omega}\big(w(x)\vec v(x)\big)\cdot\vec\nu(x)\,dS
= \int_{\partial\Omega} w(x)\,\vec v(x)\cdot\vec\nu(x)\,dS.$$
Thus, we have
$$\int_\Omega Dw\cdot\vec v\,dx = \int_{\partial\Omega} w\,\vec v\cdot\vec\nu\,dS - \int_\Omega w\,\operatorname{div}\vec v\,dx. \tag{1.8}$$
(Integration by parts)

In one dimension:
$$\int_a^b w'v\,dx = wv\Big|_a^b - \int_a^b wv'\,dx.$$
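The one-dimensional identity is trivial to spot-check by quadrature; the snippet below (illustrative only, with arbitrarily chosen w and v) does so before we turn to applications of (1.8).

```python
import numpy as np

def trap(f, x):
    """Simple trapezoid rule, kept explicit to avoid relying on any particular numpy API."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Spot-check of 1-D integration by parts:  ∫_a^b w' v dx = [w v]_a^b - ∫_a^b w v' dx
# with w(x) = sin(x), v(x) = x^2 on [0, 2] (arbitrary smooth choices).
a, b = 0.0, 2.0
x = np.linspace(a, b, 200001)
w, v = np.sin(x), x**2
wp, vp = np.cos(x), 2 * x

lhs = trap(wp * v, x)
rhs = (w[-1] * v[-1] - w[0] * v[0]) - trap(w * vp, x)
print(lhs, rhs)   # agree to quadrature accuracy
```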
Applications:

1. Take $\vec v(x) = Du(x)$ and $w(x) = \varphi(x)$, i.e.
$$\int_\Omega D\varphi\cdot Du\,dx = \int_{\partial\Omega}\varphi\, Du\cdot\vec\nu\,dS - \int_\Omega \varphi\,\operatorname{div} Du\,dx
= \int_{\partial\Omega}\varphi\,\partial_\nu u\,dS - \int_\Omega \varphi\,\Delta u\,dx.$$

2. Soap bubbles = minimal surfaces. [Figure: a surface given by the graph of u over the unit disk D] The surface area is
$$\int_D \sqrt{1 + |Du|^2}\,dx.$$
Minimize. Assume that u is a minimizer, i.e. that A(u) ≤ A(v) for all functions v with the same boundary values. Consider the family of functions u + εφ, where φ has zero boundary values, and look at g(ε) = A(u + εφ).

[Figure: plot of g(ε) against ε]

$$\frac{d}{d\varepsilon}\int_\Omega \sqrt{1 + |Du + \varepsilon D\varphi|^2}\,dx
= \int_\Omega \frac{d}{d\varepsilon}\sqrt{1 + |Du|^2 + 2\varepsilon Du\cdot D\varphi + \varepsilon^2|D\varphi|^2}\,dx
= \frac{1}{2}\int_\Omega \frac{2Du\cdot D\varphi + 2\varepsilon|D\varphi|^2}{\sqrt{1 + |Du|^2 + 2\varepsilon Du\cdot D\varphi + \varepsilon^2|D\varphi|^2}}\,dx.$$

Now, since u is a minimum,
$$\frac{d}{d\varepsilon}\bigg|_{\varepsilon=0}\int_\Omega \ldots\,dx = 0
\quad\Longrightarrow\quad
\int_\Omega D\varphi\cdot\frac{Du}{\sqrt{1 + |Du|^2}}\,dx = 0.$$

If we identify
$$w = \varphi, \qquad \vec v(x) = \frac{Du}{\sqrt{1 + |Du|^2}},$$
we can apply (1.8):
$$\int_\Omega D\varphi\cdot\frac{Du}{\sqrt{1 + |Du|^2}}\,dx
= \int_{\partial\Omega}\varphi\,\frac{Du}{\sqrt{1 + |Du|^2}}\cdot\vec\nu\,dS
- \int_\Omega \varphi\,\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right)dx.$$

The first term on the RHS is 0 since φ = 0 on ∂Ω. Thus,
$$\int_\Omega \varphi\,\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right)dx = 0$$
for all φ that are zero on ∂Ω. If u is smooth, then the integrand must be zero pointwise, i.e.
$$\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0 \qquad\text{(Minimal surface equation)}$$
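The divergence form can be expanded symbolically; the sketch below (not part of the notes) confirms the two-dimensional identity that reappears as Exercise 1.6, namely that the minimal surface operator equals $u_{xx}(1+u_y^2)+u_{yy}(1+u_x^2)-2u_xu_yu_{xy}$ divided by $(1+|Du|^2)^{3/2}$.

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
ux, uy = sp.diff(u, x), sp.diff(u, y)
W = sp.sqrt(1 + ux**2 + uy**2)

# minimal surface operator: div( Du / sqrt(1 + |Du|^2) ) in two dimensions
mse = sp.diff(ux / W, x) + sp.diff(uy / W, y)

# expanded form (cf. Exercise 1.6), divided by W^3
expanded = (sp.diff(u, x, 2) * (1 + uy**2)
            + sp.diff(u, y, 2) * (1 + ux**2)
            - 2 * ux * uy * sp.diff(u, x, y)) / W**3

print(sp.simplify(mse - expanded))   # prints 0
```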
1.6 The Wave Equation
See [Ant80] for comments on ad hoc derivations (e.g. of the wave equation). The outcome of this "incomprehensible" derivation, driven by Newton's law f = ma, is the wave equation
$$u_{tt} = c^2 u_{xx}.$$
1.7 Classification of PDE
1. Diffusion equation (2nd order): $u_t = k\Delta u$

2. Traffic flow (1st order): $\rho_t + v_{\max}\,\partial_x\!\left(\rho\left(1 - \dfrac{\rho}{\rho_{\max}}\right)\right) = 0$

3. Transport equation (1st order): $u_t + Du\cdot b = 0$

4. Minimal surfaces (2nd order): $\operatorname{div}\left(\dfrac{Du}{\sqrt{1 + |Du|^2}}\right) = 0$

5. Wave equation (2nd order): $u_{tt} = c^2 u_{xx}$
Definition 1.7.1: A PDE is an expression of the form
$$F(D^k u, D^{k-1}u, \ldots, Du, u, x) = 0,$$
where Du is the gradient of u, D²u denotes all 2nd order derivatives of u, and so on up to Dᵏu, all kth order derivatives of u. k is called the order of the PDE.

Definition 1.7.2: A PDE is linear if it is of the form
$$\sum_{|\alpha|\le k} a_\alpha(x)\, D^\alpha u(x) = 0,$$
where α is a multi-index.
Notation: α = (α₁, ..., αₙ) with αᵢ non-negative integers. We define
$$|\alpha| = \sum_{i=1}^n \alpha_i,$$
so
$$D^\alpha = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\,\frac{\partial^{\alpha_2}}{\partial x_2^{\alpha_2}}\cdots\frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}}.$$

Example:

• n = 2. α = (0, 0) corresponds to Dᵅu = u. |α| = 1 is associated with α = (1, 0) or α = (0, 1):
$$\alpha = (1,0):\ D^\alpha u = \frac{\partial u}{\partial x_1}, \qquad \alpha = (0,1):\ D^\alpha u = \frac{\partial u}{\partial x_2}.$$
|α| = 2 is associated with α = (2, 0), α = (1, 1) or α = (0, 2):
$$\alpha = (2,0):\ D^\alpha u = \frac{\partial^2 u}{\partial x_1^2}, \qquad
\alpha = (1,1):\ D^\alpha u = \frac{\partial^2 u}{\partial x_1\partial x_2}, \qquad
\alpha = (0,2):\ D^\alpha u = \frac{\partial^2 u}{\partial x_2^2}.$$

• n = 3. Just look at α = (1, 1, 2) with u = u(x₁, x₂, x₃):
$$D^\alpha u = \frac{\partial^4 u}{\partial x_1\,\partial x_2\,\partial x_3^2}.$$

Example (2nd order linear equation in 2D):
$$\sum_{|\alpha|\le 2} a_\alpha(x)\,D^\alpha u(x)
= \underbrace{a_{(2,0)}(x)u_{xx} + a_{(1,1)}(x)u_{xy} + a_{(0,2)}(x)u_{yy}}_{|\alpha|=2}
+ \underbrace{a_{(1,0)}(x)u_x + a_{(0,1)}(x)u_y}_{|\alpha|=1}
+ \underbrace{a_{(0,0)}(x)u}_{|\alpha|=0}
= f.$$

The crucial term is the highest order derivative. Generalizations:

• A PDE is semilinear if it is linear in the highest order derivatives, i.e. it is of the form
$$\sum_{|\alpha| = k} a_\alpha(x)\,D^\alpha u(x) + a_0(D^{k-1}u, \ldots, Du, u, x) = 0.$$

• A PDE is quasilinear if it is of the form
$$\sum_{|\alpha| = k} a_\alpha(D^{k-1}u, \ldots, Du, u, x)\,D^\alpha u + a_0(D^{k-1}u, \ldots, Du, u, x) = 0.$$
Example:

• diffusion equation: $u_t = k\Delta u$ (linear)

• traffic flow: $\rho_t + v_{\max}\,\partial_x\!\left(\rho\left(1 - \dfrac{\rho}{\rho_{\max}}\right)\right) = 0$ (quasilinear)

• transport equation: $u_t + Du\cdot b = 0$ (linear)

• minimal surface equation: $\operatorname{div}\left(\dfrac{Du}{\sqrt{1 + |Du|^2}}\right) = 0 \;\Longrightarrow\; \sum_i \dfrac{\partial}{\partial x_i}\left(\dfrac{\partial_{x_i}u}{\sqrt{1 + |Du|^2}}\right) = 0$ (quasilinear)

• wave equation: $u_{tt} = c^2 u_{xx}$ (linear)

1.8 Exercises
1.1: Classify each of the following PDEs as follows:
a) Is the PDE linear, semilinear, quasilinear or fully nonlinear?
b) What is the order of the PDE?
Here is the list of the PDEs:
i) Linear transport equation: $u_t + \sum_{i=1}^n b_i D_i u = 0$.

ii) Schrödinger's equation: $iu_t + \Delta u = 0$.

iii) Beam equation: $u_t + u_{xxxx} = 0$.

iv) Eikonal equation: $|Du| = 1$.

v) p-Laplacian: $\operatorname{div}(|Du|^{p-2}Du) = 0$.

vi) Monge-Ampère equation: $\det(D^2 u) = f$.
1.2: Let Ω ⊂ R2 be an open and bounded domain representing a heat conducting plate with
density ρ(x, t) and specific heat c(x). Find an equation for the evolution of the temperature
u(x, t) at the point x at the time t under the assumption that there is a heat source f (x) in
Ω. Use Fourier’s law of cooling as the constitutive assumption and assume that all functions
are sufficiently smooth.
1.3:
a) Verify that the function $u(x, t) = e^{-kt}\sin x$ satisfies the heat equation $u_t = ku_{xx}$ on −∞ < x < ∞ and t > 0. Here k > 0 is the diffusion constant.

b) Verify that the function $v(x, t) = \sin(x - ct)$ satisfies the wave equation $u_{tt} = c^2 u_{xx}$ on −∞ < x < ∞ and t > 0. Here c > 0 is the wave speed.

c) Sketch u and v. What is the most striking difference between the evolution of the temperature u and the wave v?
1.4: Suppose that u is a smooth function that minimizes the Dirichlet integral
$$I(u) = \frac{1}{2}\int_\Omega |Du|^2\,dx$$
subject to given boundary conditions u(x) = g(x) on ∂Ω. Show that u satisfies Laplace's equation ∆u = 0 in Ω.

1.5: [Evans 2.5 #1] Suppose g is smooth. Write down an explicit formula for the function u solving the initial value problem
$$u_t + b\cdot Du + cu = 0 \ \text{in } \mathbb{R}^n\times(0,\infty), \qquad u = g \ \text{on } \mathbb{R}^n\times\{t=0\}.$$
Here c ∈ R and b ∈ Rⁿ are constants.

1.6: Show that the minimal surface equation
$$\operatorname{div}\left(\frac{Du}{\sqrt{1 + |Du|^2}}\right) = 0$$
can in two dimensions be written as
$$u_{xx}(1 + u_y^2) + u_{yy}(1 + u_x^2) - 2u_xu_yu_{xy} = 0.$$

1.7: Suppose that u = u(x, y) = v(r) is a radially symmetric solution of the minimal surface equation
$$u_{xx}(1 + u_y^2) + u_{yy}(1 + u_x^2) - 2u_xu_yu_{xy} = 0.$$
Show that v satisfies
$$v_{rr} + \frac{v_r}{r}\,(1 + v_r^2) = 0.$$
Use this result to show that $u(x, y) = \rho\log\big(r + \sqrt{r^2 - \rho^2}\big)$ is a solution of the minimal surface equation for $r = \sqrt{x^2 + y^2} \ge \rho > 0$.

Hint: You can verify this by differentiating the given solution or by solving the ordinary differential equation for v.

1.8: [Evans 2.5 #2] Prove that Laplace's equation ∆u = 0 is rotation invariant; that is, if R is an orthogonal n × n matrix and we define
$$v(x) = u(Rx), \qquad x \in \mathbb{R}^n,$$
then ∆v = 0.
Chapter 2: Laplace's Equation ∆u = 0 and Poisson's Equation −∆u = f
“. . . You are too young to be burdened by something as cool as sadness.”
–? Kurosaki Isshin
Contents
2.1 Laplace's Eqn. in Polar Coordinates
2.2 The Fundamental Solution of the Laplacian
2.3 Properties of Harmonic Functions
2.4 Estimates for Derivatives of Harmonic Functions
2.5 Existence of Solutions
2.6 Solution of the Dirichlet Problem by the Perron Process
2.7 Exercises

2.1 Laplace's Equation in Polar Coordinates and for Radially Symmetric Functions
We say that u is harmonic if ∆u = 0.

Example: The real and imaginary parts of holomorphic functions are harmonic:
$$f(z) = z^2 = (x + iy)^2 = x^2 - y^2 + 2ixy,$$
so
$$u(x, y) = x^2 - y^2 \qquad\text{and}\qquad u(x, y) = 2xy$$
are harmonic.

Now introduce polar coordinates
$$r = \sqrt{x^2 + y^2}, \qquad \theta = \tan^{-1}\frac{y}{x}, \qquad u(x, y) = v(r, \theta).$$

[Figure: polar coordinates, a point at radius r and angle θ measured from the x-axis]
We compute the derivatives of r and θ with respect to x and y:
$$\partial_x r = \partial_x\sqrt{x^2 + y^2} = \frac{x}{r}, \qquad
\partial_{xx} r = \partial_x\frac{x}{r} = \frac{1}{r} - \frac{x^2}{r^3} = \frac{y^2}{r^3}, \qquad
\partial_{xy} r = \partial_y\frac{x}{r} = -\frac{xy}{r^3},$$
using $\partial_x\frac{1}{r} = \partial_x(x^2 + y^2)^{-1/2} = -\frac{x}{r^3}$.

In summary:
$$\partial_x r = \frac{x}{r}, \quad \partial_y r = \frac{y}{r}, \qquad
\partial_{xx} r = \frac{y^2}{r^3}, \quad \partial_{yy} r = \frac{x^2}{r^3}, \quad \partial_{xy} r = -\frac{xy}{r^3}.$$

Now, we look at derivatives with respect to θ:
$$\partial_x\theta = \partial_x\tan^{-1}\frac{y}{x} = \frac{1}{1 + \frac{y^2}{x^2}}\left(-\frac{y}{x^2}\right) = -\frac{y}{r^2}, \qquad \partial_y\theta = \frac{x}{r^2},$$
$$\partial_{xx}\theta = \partial_x\left(-\frac{y}{r^2}\right) = \partial_x\big(-y(x^2+y^2)^{-1}\big) = \frac{2xy}{r^4}, \qquad
\partial_{yy}\theta = \partial_y\frac{x}{r^2} = -\frac{2xy}{r^4},$$
$$\partial_{xy}\theta = \partial_y\left(-\frac{y}{r^2}\right) = -\frac{1}{r^2} + \frac{2y^2}{r^4} = \frac{y^2 - x^2}{r^4}.$$

In summary:
$$\partial_x\theta = -\frac{y}{r^2}, \quad \partial_y\theta = \frac{x}{r^2}, \qquad
\partial_{xx}\theta = \frac{2xy}{r^4}, \quad \partial_{yy}\theta = -\frac{2xy}{r^4}, \quad \partial_{xy}\theta = \frac{y^2 - x^2}{r^4}.$$
Now, we consider u(x, y) = v(r, θ):
$$\partial_x u = v_r r_x + v_\theta\theta_x, \qquad
\partial_{xx} u = v_{rr}(r_x)^2 + 2v_{r\theta}r_x\theta_x + v_{\theta\theta}(\theta_x)^2 + v_r r_{xx} + v_\theta\theta_{xx},$$
$$\partial_y u = v_r r_y + v_\theta\theta_y, \qquad
\partial_{yy} u = v_{rr}(r_y)^2 + 2v_{r\theta}r_y\theta_y + v_{\theta\theta}(\theta_y)^2 + v_r r_{yy} + v_\theta\theta_{yy}.$$
Hence
$$\Delta u = u_{xx} + u_{yy}
= v_{rr}(r_x^2 + r_y^2) + 2v_{r\theta}(r_x\theta_x + r_y\theta_y) + v_{\theta\theta}(\theta_x^2 + \theta_y^2) + v_r(r_{xx} + r_{yy}) + v_\theta(\theta_{xx} + \theta_{yy})$$
$$= v_{rr}\,\frac{x^2 + y^2}{r^2} + 0 + v_{\theta\theta}\,\frac{x^2 + y^2}{r^4}
+ v_r\left(\frac{1}{r} - \frac{x^2}{r^3} + \frac{1}{r} - \frac{y^2}{r^3}\right) + 0.$$

Result: If u(x, y) = v(r, θ) and u solves Laplace's equation, then v solves
$$v_{rr} + \frac{1}{r}v_r + \frac{1}{r^2}v_{\theta\theta} = 0 \qquad\text{(Laplace's equation in polar coordinates)}$$

Now we consider Laplace's equation for radially symmetric functions:
$$u(x) = v(r), \qquad r = \sqrt{x_1^2 + \cdots + x_n^2}.$$
Then
$$\frac{\partial u}{\partial x_i} = v_r(r)\,\frac{\partial r}{\partial x_i} = v_r(r)\,\frac{x_i}{r},$$
and since $\partial_{x_i}\frac{1}{r} = \partial_{x_i}(x_1^2 + \cdots + x_n^2)^{-1/2} = -\frac{x_i}{r^3}$,
$$\frac{\partial^2 u}{\partial x_i^2} = \frac{\partial}{\partial x_i}\left(v_r(r)\,\frac{x_i}{r}\right)
= v_{rr}\,\frac{x_i^2}{r^2} + v_r\left(\frac{1}{r} - \frac{x_i^2}{r^3}\right).$$
Thus,
$$\Delta u = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2}
= \sum_{i=1}^n \left[v_{rr}\,\frac{x_i^2}{r^2} + v_r\left(\frac{1}{r} - \frac{x_i^2}{r^3}\right)\right]
= v_{rr} + \left(\frac{n}{r} - \frac{1}{r}\right)v_r.$$

Result: If u(x) = v(r) is a radially symmetric solution of Laplace's equation then
$$v_{rr} + \frac{n-1}{r}\,v_r = 0. \tag{2.1}$$
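As a quick, illustrative check (not in the original notes): the harmonic function u = x² − y² from the example above is r²cos 2θ in polar coordinates, and it satisfies the polar form of Laplace's equation just derived.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
v = r**2 * sp.cos(2 * th)   # u(x, y) = x^2 - y^2 written in polar coordinates
polar_laplacian = sp.diff(v, r, 2) + sp.diff(v, r) / r + sp.diff(v, th, 2) / r**2
print(sp.simplify(polar_laplacian))   # prints 0
```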
2.2 The Fundamental Solution of the Laplacian
Special radially symmetric solutions are found by solving (2.1) of the previous section. Assuming vᵣ ≠ 0,
$$\frac{v_{rr}}{v_r} = \frac{1-n}{r}
\quad\Longrightarrow\quad \ln v_r = (1-n)\ln r + C_1 \ \text{(integrating in } r)
\quad\Longrightarrow\quad v_r = C_2\, r^{1-n}.$$
Integrate again:
$$n = 2:\quad v_r = \frac{C_2}{r} \;\Longrightarrow\; v = C_2\ln r + C_3,
\qquad
n \ge 3:\quad v_r = \frac{C_2}{r^{n-1}} \;\Longrightarrow\; v = \frac{-1}{n-2}\,\frac{C_2}{r^{n-2}} + C_3.$$
Thus we find the special solutions
$$u(x) = v(r) = \begin{cases} C_2\ln|x| + C_3 & n = 2,\\[4pt] -\dfrac{C_2}{n-2}\,\dfrac{1}{|x|^{n-2}} + C_3 & n \ge 3.\end{cases}$$
The fundamental solution is of this form with particular constants.

Definition 2.2.1:
$$\Phi(x) = \begin{cases} -\dfrac{1}{2\pi}\ln|x| & n = 2,\\[6pt] \dfrac{1}{n(n-2)\alpha(n)}\,\dfrac{1}{|x|^{n-2}} & n \ge 3,\end{cases}$$
for x ≠ 0, where α(n) is the volume of the unit ball in Rⁿ.
Note: ∆Φ = 0 for x ≠ 0, but Φ is not defined at x = 0. However, ∆Φ can be defined everywhere in the sense of distributions:
$$-\Delta\Phi = \delta_0 = \text{"Dirac mass" placed at } x = 0,$$
where, formally,
$$\delta_0(x) = \begin{cases}1 & x = 0\\ 0 & x \ne 0\end{cases},
\qquad
-\Delta\Phi(y - x) = \begin{cases}0 & x \ne y\\ 1 & x = y.\end{cases}$$
Formally, this suggests a solution formula for Poisson's equation
$$-\Delta u = f \quad\text{in } \mathbb{R}^n,$$
namely
$$u(x) = \int_{\mathbb{R}^n}\Phi(x - y)f(y)\,dy.$$
Why?
$$\text{``}-\Delta u(x) = \int_{\mathbb{R}^n}-\Delta_x\Phi(x - y)f(y)\,dy = \int_{\mathbb{R}^n}\delta_x(y)f(y)\,dy = f(x).\text{''}$$
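It is easy to confirm symbolically that Φ is harmonic away from the origin; the check below (not from the text) does this for n = 2 and n = 3, dropping the normalizing constants since they do not affect harmonicity.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# n = 2: Phi is proportional to ln|x|
Phi2 = sp.log(sp.sqrt(x**2 + y**2))
print(sp.simplify(sp.diff(Phi2, x, 2) + sp.diff(Phi2, y, 2)))                        # 0

# n = 3: Phi is proportional to 1/|x|
Phi3 = 1 / sp.sqrt(x**2 + y**2 + z**2)
print(sp.simplify(sp.diff(Phi3, x, 2) + sp.diff(Phi3, y, 2) + sp.diff(Phi3, z, 2)))  # 0
```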
Lemma 2.2.2: We have
$$|D\Phi(x)| \le \frac{C}{|x|^{n-1}}, \qquad |D^2\Phi(x)| \le \frac{C}{|x|^n} \qquad (x \ne 0),$$
and
$$\frac{\partial\Phi}{\partial\nu}(x) = \frac{-1}{n\alpha(n)}\,\frac{1}{r^{n-1}} \qquad \forall x \in \partial B(0, r),$$
where $\frac{\partial\Phi}{\partial\nu}(x) = D\Phi(x)\cdot\frac{x}{|x|}$.

[Figure: the ball B(0, r) with outward unit normal ν(x) = x/|x| at a boundary point x]

Proof:

n = 2:
$$\frac{\partial}{\partial x_i}\ln|x| = \frac{1}{|x|}\,\frac{\partial}{\partial x_i}|x| = \frac{1}{|x|}\,\frac{x_i}{|x|} = \frac{x_i}{|x|^2},
\qquad D\Phi(x) = -\frac{1}{2\pi}\,\frac{x}{|x|^2}, \qquad |D\Phi| \le \frac{C}{|x|}.$$
For the second derivatives,
$$\frac{\partial}{\partial x_j}\frac{\partial}{\partial x_i}\ln|x| = \frac{\partial}{\partial x_j}\frac{x_i}{|x|^2}
= \frac{\delta_{ij}}{|x|^2} + x_i\frac{\partial}{\partial x_j}\frac{1}{|x|^2},
\qquad
\frac{\partial}{\partial x_j}\frac{1}{|x|^2} = \frac{\partial}{\partial x_j}\Big(\sum_k x_k^2\Big)^{-1} = -\frac{2x_j}{|x|^4},$$
so that
$$\frac{\partial^2\Phi}{\partial x_i\partial x_j} = -\frac{1}{2\pi}\left(\frac{\delta_{ij}}{|x|^2} - \frac{2x_ix_j}{|x|^4}\right),
\qquad
\left|\frac{\partial^2\Phi}{\partial x_i\partial x_j}\right| \le \frac{1}{2\pi}\,\frac{3}{|x|^2}.$$
Finally,
$$\frac{\partial\Phi}{\partial\nu}(x) = D\Phi(x)\cdot\frac{x}{|x|}
= -\frac{1}{2\pi}\,\frac{x}{|x|^2}\cdot\frac{x}{|x|}
= -\frac{1}{2\pi}\,\frac{|x|^2}{|x|^3}
= -\frac{1}{2\pi}\,\frac{1}{|x|}.$$

n ≥ 3:
$$\frac{\partial}{\partial x_i}\frac{1}{|x|^{n-2}}
= \frac{\partial}{\partial x_i}\Big(\sum_k x_k^2\Big)^{-\frac{n-2}{2}}
= -\frac{n-2}{2}\Big(\sum_k x_k^2\Big)^{-\frac{n-2}{2}-1}2x_i
= -(n-2)\,\frac{x_i}{|x|^n},$$
so
$$D\Phi(x) = \frac{-1}{n\alpha(n)}\,\frac{x}{|x|^n},
\qquad
|D\Phi(x)| \le \frac{1}{n\alpha(n)}\,\frac{1}{|x|^{n-1}}.$$
Second derivatives are similar. Also,
$$\frac{\partial\Phi}{\partial\nu}(x) = \frac{-1}{n\alpha(n)}\,\frac{x}{|x|^n}\cdot\frac{x}{|x|}
= \frac{-1}{n\alpha(n)}\,\frac{|x|^2}{|x|^{n+1}}
= \frac{-1}{n\alpha(n)}\,\frac{1}{|x|^{n-1}}.$$

Remark: $|D^2\Phi| \le \frac{C}{|x|^n}$, i.e. the second derivatives of Φ are not integrable about zero.
Theorem 2.2.3 (Solution of Poisson's equation in Rⁿ): Suppose that f ∈ C² and that f has compact support, i.e. the closure of {x ∈ Rⁿ : f(x) ≠ 0} is contained in B(0, s) for some s > 0. Then
$$u(x) = \int_{\mathbb{R}^n}\Phi(x - y)f(y)\,dy$$
is twice differentiable and solves Poisson's equation −∆u = f in Rⁿ.
Proof: Change variables: z = x − y.

1. With y = x − z and dy = dz,
$$\int_{\mathbb{R}^n}\Phi(x-y)f(y)\,dy = \int_{\mathbb{R}^n}\Phi(z)f(x-z)\,dz = \int_{\mathbb{R}^n}\Phi(y)f(x-y)\,dy = u(x).$$

2. u is differentiable:
$$\lim_{h\to0}\frac{u(x+he_i)-u(x)}{h} = \lim_{h\to0}\int_{\mathbb{R}^n}\Phi(y)\,\frac{f(x+he_i-y)-f(x-y)}{h}\,dy.$$
Technical step: if we can interchange lim_{h→0} and integration, then (f ∈ C²):
$$\frac{\partial u}{\partial x_i} = \int_{\mathbb{R}^n}\Phi(y)\,\frac{\partial f}{\partial x_i}(x-y)\,dy,$$
and by the same arguments,
$$\frac{\partial^2 u}{\partial x_i\partial x_j} = \int_{\mathbb{R}^n}\Phi(y)\,\frac{\partial^2 f}{\partial x_i\partial x_j}(x-y)\,dy
\qquad\text{and}\qquad
\Delta u = \int_{\mathbb{R}^n}\Phi(y)\,\Delta_x f(x-y)\,dy.$$

So, what was the "black box" that enables us to switch the integration and limit?

Black Box (Lebesgue's Dominated Convergence Theorem): let fₖ be a sequence of integrable functions such that |fₖ| ≤ g almost everywhere with g integrable, and fₖ → f pointwise almost everywhere. Then
$$\lim_{k\to\infty}\int_\Omega f_k = \int_\Omega \lim_{k\to\infty} f_k = \int_\Omega f.$$

Our application: we need
$$\left|\Phi(y)\,\frac{f(x+he_i-y)-f(x-y)}{h}\right| \le g(y),
\qquad
g(y) = |\Phi(y)|\,\underbrace{\max_{z\in\mathbb{R}^n}\left|\frac{\partial f}{\partial x_i}(z)\right|}_{\le\, M},$$
and g is integrable since f, and hence ∂f/∂xᵢ, have compact support.

Now,
$$\Delta u = \int_{\mathbb{R}^n}\Phi(y)\Delta_x f(x-y)\,dy = \int_{\mathbb{R}^n}\Phi(y)\Delta_y f(x-y)\,dy
= \int_{\mathbb{R}^n\setminus B(0,\epsilon)}\Phi(y)\Delta_y f(x-y)\,dy + \int_{B(0,\epsilon)}\Phi(y)\Delta_y f(x-y)\,dy
= I_1 + I_2.$$

Now, we have
$$|I_2| \le \max_{\mathbb{R}^n}|D^2f|\int_{B(0,\epsilon)}|\Phi(y)|\,dy.$$
For n = 2:
$$\int_{B(0,\epsilon)}\big|\ln|x|\big|\,dx = \int_0^{2\pi}\!\!\int_0^\epsilon |\ln r|\,r\,dr\,d\theta
= 2\pi\left|\frac{1}{2}r^2\ln r - \frac{1}{4}r^2\right|_0^\epsilon
= 2\pi\left|\frac{1}{2}\epsilon^2\ln\epsilon - \frac{1}{4}\epsilon^2\right|,$$
so |I₂| is O(ε²|ln ε|). For n ≥ 3:
$$\int_{B(0,\epsilon)}\frac{1}{|x|^{n-2}}\,dx = \int_0^\epsilon\!\int_{\partial B(0,r)}\frac{1}{r^{n-2}}\,dS\,dr
= \int_0^\epsilon C r\,dr = \frac{C}{2}\epsilon^2.$$
Therefore
$$|I_2| \le \max_{\mathbb{R}^n}|D^2f|\int_{B(0,\epsilon)}|\Phi(y)|\,dy =
\begin{cases}O(\epsilon^2|\ln\epsilon|) & n = 2,\\ O(\epsilon^2) & n \ge 3.\end{cases}$$
In particular, I₂ → 0 as ε → 0.

For I₁ we integrate by parts:
$$I_1 = \int_{\mathbb{R}^n\setminus B(0,\epsilon)}\Phi(y)\Delta_y f(x-y)\,dy
= \int_{\partial B(0,\epsilon)}\Phi(y)\,Df(x-y)\cdot\nu\,dS
- \int_{\mathbb{R}^n\setminus B(0,\epsilon)}D\Phi(y)\cdot Df(x-y)\,dy
= J_1 + J_2.$$
Now,
$$|J_1| = \left|\int_{\partial B(0,\epsilon)}\Phi(y)\,Df(x-y)\cdot\nu\,dS\right|
\le \max|Df|\int_{\partial B(0,\epsilon)}|\Phi(y)|\,dS(y),$$
which is O(ε|ln ε|) for n = 2 and $O\!\left(\frac{C\epsilon^{n-1}}{\epsilon^{n-2}(n-2)}\right) = O(\epsilon)$ for n ≥ 3; in both cases J₁ → 0 as ε → 0.

So we are left to look at
$$J_2 = -\int_{\mathbb{R}^n\setminus B(0,\epsilon)}D\Phi(y)\cdot Df(x-y)\,dy
= -\int_{\partial B(0,\epsilon)}\frac{\partial\Phi}{\partial\nu_{\mathrm{out}}}(y)\,f(x-y)\,dS
+ \int_{\mathbb{R}^n\setminus B(0,\epsilon)}\underbrace{\Delta\Phi(y)}_{=0}f(x-y)\,dy,$$
where ν_out is the outward normal of Rⁿ\B(0, ε) on ∂B(0, ε), i.e. the normal pointing into the ball. By Lemma 2.2.2 this gives
$$J_2 = \int_{\partial B(0,\epsilon)}\frac{-1}{|\partial B(0,\epsilon)|}\,f(x-y)\,dS
= -\,⨍_{\partial B(0,\epsilon)} f(x-y)\,dS(y) \;\to\; -f(x) \qquad\text{as }\epsilon\to0.$$

Notation: $⨍_U f = |U|^{-1}\int_U f$, where |U| is the measure of U.

Finally,
$$\Delta u(x) = \int_{\mathbb{R}^n}\Phi(y)\Delta f(x-y)\,dy
= (\text{terms tending to zero as }\epsilon\to0) - ⨍_{\partial B(0,\epsilon)} f(x-y)\,dS \;\to\; -f(x) \quad\text{as }\epsilon\to0,$$
$$\Longrightarrow\quad -\Delta u(x) = f(x).$$
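The representation formula can also be probed numerically. In the classical example below (my own illustration; the source is a uniform density on the unit ball, which is only piecewise smooth rather than C², but its Newtonian potential is known in closed form), a Monte Carlo evaluation of u(x) = ∫Φ(x − y)f(y) dy reproduces the exact values inside and outside the support of f.

```python
import numpy as np

rng = np.random.default_rng(0)

def newtonian_potential(x, n_samples=200_000, rho0=1.0):
    """Monte Carlo estimate of u(x) = ∫ Φ(x-y) f(y) dy in R^3,
    with Φ(z) = 1/(4π|z|) and f = rho0 on the unit ball, 0 outside."""
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, 3))      # sample the unit ball by rejection
    pts = pts[np.sum(pts**2, axis=1) <= 1.0]
    vol = 4.0 / 3.0 * np.pi                                 # volume of the unit ball
    phi = 1.0 / (4.0 * np.pi * np.linalg.norm(x - pts, axis=1))
    return rho0 * vol * phi.mean()

def exact(r, rho0=1.0):
    """Exact potential of the uniform unit ball (classical result)."""
    return rho0 * (0.5 - r**2 / 6.0) if r <= 1.0 else rho0 / (3.0 * r)

for x in [np.array([0.5, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]:
    r = np.linalg.norm(x)
    print(r, newtonian_potential(x), exact(r))   # estimates match the exact values
```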
2.3 Properties of Harmonic Functions

2.3.1 WMP for Laplace's Equation
To begin this section, we introduce the Laplacian differential operator:
$$\Delta u = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2}, \qquad u \in C^2(\Omega).$$
Note that ∆u is sometimes written as div(∇u) in classical notation. Coupled to this we have the following definition.
Definition 2.3.1: u ∈ C 2 (Ω) is called harmonic in Ω if and only if ∆u = 0 in Ω.
Example 1: u(x1 , x2 ) = x12 − x22 is a nonlinear harmonic function in R2 .
In addition to the above, we have the next definition.
Definition 2.3.2: u ∈ C²(Ω) is called subharmonic (superharmonic) if and only if ∆u ≥ 0 (≤ 0) in Ω.
Remark: It is obvious, due to the linearity of the Laplacian, that if u is subharmonic, then
−u is superharmonic and vice-versa.
Example 2: If x ∈ Rn , then ∆|x|2 = 2n. Thus, |x|2 is subharmonic in Rn .
Now, we come to our first theorem
Theorem 2.3.3 (Weak Maximum Principle for the Laplacian): Let u ∈ C²(Ω) ∩ C⁰(Ω̄) with Ω bounded in Rⁿ. If ∆u ≥ 0 in Ω, then
$$u \le \max_{\partial\Omega} u \quad\Big(\Longleftrightarrow\ \max_{\bar\Omega} u = \max_{\partial\Omega} u\Big). \tag{2.2}$$
Conversely, if ∆u ≤ 0 in Ω, then
$$u \ge \min_{\partial\Omega} u \quad\Big(\Longleftrightarrow\ \min_{\bar\Omega} u = \min_{\partial\Omega} u\Big). \tag{2.3}$$
Consequently, if ∆u = 0 in Ω, then
$$\min_{\partial\Omega} u \le u \le \max_{\partial\Omega} u. \tag{2.4}$$
Proof: We will just prove the subharmonic result, as the superharmonic case is proved the same way. Given u subharmonic, take ε > 0 and define $v_\epsilon = u + \epsilon|x|^2$, so that $\Delta v_\epsilon = \Delta u + 2n\epsilon > 0$ in Ω. If $v_\epsilon$ attains a maximum in Ω, we have $\Delta v_\epsilon \le 0$ at that point, a contradiction. Thus, by the compactness of Ω̄, we have
$$v_\epsilon \le \max_{\partial\Omega} v_\epsilon.$$
Finally, we take ε → 0 in the above to get the result.
Corollary 2.3.4 (Uniqueness): Take u, v ∈ C 2 (Ω) ∩ C 0 (Ω). If ∆u = ∆v = 0 in Ω and
u = v on ∂Ω, then u = v in Ω.
Before moving onto more general elliptic equations, we briefly remind ourselves of the Dirichlet Problem, which takes the following form.
∆u = 0 in Ω
u = φ on ∂Ω for some φ ∈ C 0 (∂Ω)
2.3.2 Poisson Integral Formula
The best possible scenario one can hope for when dealing in theoretical PDE is to actually
have an analytic solution to the equation. Obviously, it is easier to analyze a solution directly
rather than having to do so through its properties. Fortunately, this turns out to be possible
for the Dirichlet problem for Laplace’s equation in a ball.
To start off, let us consider some other examples of harmonic functions:
• Since Laplace’s equation is a constant coefficient equation, we know that if u ∈ C 3 (Ω)
is harmonic, then so is all its partial derivatives. Indeed,
Di (∆u) = 0 ⇐⇒ ∆(Di u) = 0.
• There are also radially symmetric solutions to Laplace’s equation. To derive these, let
r = |x| and u(x) = g(r ). We calculate
g0
xi
=
· xi
|x|
r
g 0 g 00
g0
Dii u = n · + 2 · xi2 − 3 xi2
r
r
r
0
g0
g
n · + g 00 −
r
r
g0
(n − 1) · + g 00
r
(r n−1 g 0 )0
= 0.
r n−1
Di u = g 0 ·
∆u =
=
=
=
This implies that r n−1 g 0 (r ) = constant; integration then yields
C1 r 2−n + C2 n > 2
g=
.
C1 ln r + C2 n = 2
These are all the radial solution of the homogeneous Laplace’s equation outside of the
origin. For simplicity we will take C1 = 1 and C2 = 0 in the above.
• It is easily verified that translates of the above solutions, with y taken to be the new origin, are also harmonic functions:
$$|x - y|^{2-n} \ (n > 2), \qquad C_1\ln|x - y| \ (n = 2).$$
Since these functions are symmetric in x and y, it's obvious the above are harmonic with respect to either x or y provided x ≠ y. Taking a derivative of the above indicates that
$$\frac{x_i - y_i}{|x - y|^n}$$
is also harmonic. Next consider
$$V(x, y) = \frac{|y|^2 - |x|^2}{|x - y|^n}
= \frac{|y|^2 + |x|^2 - 2x\cdot y}{|x - y|^n} - \frac{2x\cdot(x - y)}{|x - y|^n}
= \frac{1}{|x - y|^{n-2}} - \frac{2x_i(x_i - y_i)}{|x - y|^n}.$$
The last term is harmonic since it is (up to a constant factor, and summed over i) xᵢ times the derivative with respect to yᵢ of |x − y|^{2−n}. Thus, V(x, y) is harmonic in both the x and y variables. It would be difficult to verify directly that the aforementioned functions solve Laplace's equation, but they are readily constructed from simpler solutions.
Now, let us consider |y| = R and define
$$\tilde v(x) = \int_{|y|=R} V(x, y)\,dS(y).$$
First, it is clear that ṽ(x) is harmonic for x ∈ B_R(0). Indeed, for any fixed x in B_R(0), V(x, y) is bounded, so we may interchange the Laplacian with the integral to get that ṽ(x) is harmonic in B_R(0). Next, we claim that ṽ(x) is radial, i.e. only depends on |x|. To show this we consider an arbitrary rotation transformation of ṽ(x). Sometimes this is called an orthonormal transformation, as it corresponds to the transformation x = Px′, where P is an orthonormal matrix (its rows/columns form an orthogonal basis of Rⁿ; Pᵀ = P⁻¹ is another property). So, given the transformation, we have
$$\tilde v(Px) = \int_{|y|=R}\frac{|y|^2 - |Px|^2}{|Px - y|^n}\,dS(y)
= \int_{|y|=R}\frac{R^2 - |Px|^2}{|Px - y|^n}\,dS(y).$$
Now, we change variables with Pz = y:
$$\tilde v(Px) = \int_{|Pz|=R}\frac{R^2 - |Px|^2}{|Px - Pz|^n}\,dS(Pz)
= \int_{|z|=R}\frac{R^2 - |x|^2}{|P(x - z)|^n}\det(P)\,dS(z)
= \int_{|z|=R}\frac{R^2 - |x|^2}{|x - z|^n}\,dS(z).$$
In the above, we have used the fact that rotations do not change vector magnitudes. Also, we know |det(P)| = 1, since 1 = det(I) = det(PP⁻¹) = det(P) det(P⁻¹) = det(P) det(Pᵀ) = det(P)². So ṽ(x) is rotationally invariant. Thus, we calculate
  ṽ(0) = ∫_{|y|=R} R^{2−n} dS(y) = R^{2−n} A(S_R(0)) = nω_n R.
It turns out that this value is in fact attained for every x ∈ B_R(0); this can be verified by carrying out the integration, but doing so directly is rather complicated.
Now, we can make the following definition
Definition 2.3.5: The Poisson Kernel:
  K(x, y) = (1/(nω_n R)) · (R² − |x|²)/|x − y|^n,
where |y| = R.
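As a quick sanity check, evaluate the kernel at the center: for |y| = R,
  K(0, y) = (1/(nω_n R)) · R²/R^n = 1/(nω_n R^{n−1}),
which is the reciprocal of the surface area of ∂B_R(0); integrating over the sphere therefore gives 1, consistent with the computation of ṽ(0) above.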
From the above, we already know that K(x, y ) is harmonic in BR (0) with respect to x. From
the calculation for ṽ (x), we also have
  ∫_{|y|=R} K(x, y) dS(y) = 1
by construction. Next, we have
Definition 2.3.6: The Poisson Integral for x ∈ BR (0) is
  w(x) = ∫_{|y|=R} K(x, y) φ(y) dS(y).
Theorem 2.3.7 (Properties of the Poisson Integral): Given the above definition, w ∈ C^∞(B_R(0)) ∩ C⁰(B̄_R(0)) with ∆w = 0 in B_R(0). Also w(x) → φ(y) as x → y ∈ ∂B_R(0).
Remark: Basically one has that w solves the Dirichlet problem for Laplace’s equation:
∆w = 0 in BR (0)
w = φ on ∂BR (0)
Proof: w ∈ C^∞(B_R(0)) is obvious, since for fixed x ∈ B_R(0), K(x, y) is bounded and harmonic. Hence one may take derivatives of any order inside the integral (the derivatives are also well behaved, as x ≠ y for our fixed x). So, all we need to show is that w(x) → φ(y₀) as x → y₀ ∈ ∂B_R(0). Take an arbitrary ε > 0 and choose δ such that |y − y₀| < δ implies |φ(y) − φ(y₀)| < ε. Keeping in mind the figure, we calculate:
[Figure: the ball B_R(0), a boundary point y₀, and a point x with |x − y₀| ≤ δ/2.]
  |w(x) − φ(y₀)| = | ∫_{∂B_R(0)} K(x, y)(φ(y) − φ(y₀)) dS(y) |
    ≤ ∫_{∂B_R(0)∩{|y−y₀|<δ}} |K(x, y)| |φ(y) − φ(y₀)| dS(y) + ∫_{∂B_R(0)∩{|y−y₀|>δ}} |K(x, y)| |φ(y) − φ(y₀)| dS(y)
    ≤ ε + 2·max_{∂B_R(0)} |φ| · ∫_{∂B_R(0)∩{|y−y₀|>δ}} (1/(nω_n R)) (R² − |x|²)/|x − y|^n dS(y).
Now, if one chooses x such that |x − y₀| ≤ δ/2, this implies that |x − y| ≥ δ/2 for y ∈ ∂B_R(0) ∩ {|y − y₀| > δ}. Thus, we have
  |w(x) − φ(y₀)| ≤ ε + 2·max_{∂B_R(0)} |φ| · (R² − |x|²) · R^{n−2} (2/δ)^n ≤ 2ε,
where the last inequality holds once x is additionally picked close enough to y₀ that the second term is less than ε (as x → y₀, R² − |x|² clearly shrinks to 0). So, we have now shown that w(x) → φ(y₀) as x → y₀ ∈ ∂B_R(0). This also implies the continuity of w(x) on the closure of B_R(0).
A very important consequence of the last theorem is that if u ∈ C²(Ω) is harmonic, then for any ball B_R(z) ⊂ Ω one has
  u(x) = ((R² − |x − z|²)/(nω_n R)) ∫_{∂B_R(z)} u(y)/|x − y|^n dS(y).
In other words, the value of a harmonic function at any point is completely determined by the values it takes on any given spherical shell surrounding that point! Taking x = z in the above, we get the following:
Theorem 2.3.8 (Mean Value Property): If u ∈ C²(Ω) is harmonic on Ω, then one has
  u(z) = (1/(nω_n R^{n−1})) ∫_{∂B_R(z)} u(y) dS(y) = (1/A(∂B_R(z))) ∫_{∂B_R(z)} u(y) dS(y),   (2.5)
for any z ∈ Ω, B_R(z) ⊂ Ω.
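As a worked example of (2.5), take n = 2, z = 0 and u(x₁, x₂) = x₁² − x₂², which is harmonic. Writing y = (R cos θ, R sin θ) on ∂B_R(0),
  (1/(2πR)) ∫_{∂B_R(0)} u dS = (1/(2π)) ∫₀^{2π} R²(cos²θ − sin²θ) dθ = (R²/(2π)) ∫₀^{2π} cos 2θ dθ = 0 = u(0),
exactly as the theorem predicts.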
In addition, one has
Corollary 2.3.9: If u ∈ C²(Ω) is subharmonic (superharmonic) on Ω, then one has
  u(z) ≤ (≥) (1/A(∂B_R(z))) ∫_{∂B_R(z)} u(y) dS(y),   (2.6)
for any z ∈ Ω, B_R(z) ⊂ Ω.
Proof: This is a simple matter of solving the following Dirichlet problem:
  ∆v = 0 in B_R(z),   v = u on ∂B_R(z).
First, we know that ∆(u − v) ≥ (≤) 0. Thus, the weak maximum principle states that u ≤ (≥) v in B_R(z). So, putting everything together, one has
  u(z) ≤ (≥) v(z) = (1/A(∂B_R(z))) ∫_{∂B_R(z)} v(y) dS(y) = (1/A(∂B_R(z))) ∫_{∂B_R(z)} u(y) dS(y).
2.3.3 Strong Maximum Principle
Theorem 2.3.10 (Strong Maximum Principle): If u ∈ C²(Ω) with Ω connected and ∆u ≥ (≤) 0, then u cannot attain an interior maximum (minimum) in Ω unless u is constant.
Proof: Suppose ∆u ≥ 0 and u(y) = M := max_Ω u for some y ∈ Ω. Clearly, one has ∆(M − u) ≤ 0. Thus, by the corollary to the mean value property,
  0 = (M − u)(y) ≥ (1/(nω_n R^{n−1})) ∫_{∂B_R(y)} (M − u)(x) dS(x)
for any B_R(y) ⊂ Ω. Since M − u ≥ 0, this implies that u = M on B_R(y), which in turn implies u = M on Ω, as Ω is connected. The superharmonic case is analogous, with all inequalities reversed.
Problem: Take Ω ⊂ Rn bounded and u ∈ C 2 (Ω) ∩ C 0 (Ω). If
∆u = u 3 − u in Ω
u = 0 on ∂Ω
Prove −1 ≤ u ≤ 1. Can u take either value ±1?
2.4 Estimates for Derivatives of Harmonic Functions
Now, we will go through estimating the derivatives of harmonic functions. First, note that the mean value property immediately implies that
  u(x) = (1/(ω_n R^n)) ∫_{B_R(x)} u(y) dy
for any B_R(x) ⊂ Ω and u harmonic. This can simply be ascertained by performing a radial integration on the statement of the mean value property. As we have shown derivatives to be harmonic, we have that
  D_i u(x) = (1/(ω_n R^n)) ∫_{B_R(x)} div u^{(i)} dy,
where u^{(i)} is the vector whose i-th component is u and whose other components are zero. So applying
the divergence theorem to the above, we have
  |D_i u(x)| = | (1/(ω_n R^n)) ∫_{∂B_R(x)} u^{(i)} · ν dS(y) | ≤ (1/(ω_n R^n)) sup_{B_R(x)} |u| · ∫_{∂B_R(x)} dS(y) = (n/R) sup_{B_R(x)} |u|.
From this we see that every first derivative obeys the same bound; thus
  |Du(x)| ≤ (n/R) sup_{B_R(x)} |u|.
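As a simple consistency check of this bound, take u(x) = x₁, which is harmonic with |Du| ≡ 1. Centering the ball of radius R at the origin, sup_{B_R(0)} |u| = R, so the right-hand side is (n/R)·R = n ≥ 1; the estimate holds, though it is clearly not sharp for this example.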
[Figure: concentric balls about x of radii R/|β|, 2R/|β|, …, R.]
Applying this result iteratively over concentric balls whose radii increase by R/|β| at each iteration yields the following (refer to the figure):
  |D^β u(x)| ≤ (n|β|/R)^{|β|} sup_{B_R(x)} |u|.
From this equation, the following is clear.
Theorem 2.4.1: Consider Ω bounded and u ∈ C²(Ω) ∩ C⁰(Ω̄). If u is harmonic in Ω, then for any Ω′ ⊂⊂ Ω the following estimate holds:
  |D^β u(x)| ≤ (n|β|/d_{Ω′})^{|β|} sup_Ω |u|   ∀x ∈ Ω′,   (2.7)
where d_{Ω′} = dist(Ω′, ∂Ω).
2.4.1 Mean-Value and Maximum Principles
n = 1: u'' = 0 gives u(x) = ax + b, so u is affine. In this case
  u(x₀) = ½ (u(x₀ + r) + u(x₀ − r)) = ⨍_{∂I} u(y) dS(y)   and   u(x₀) = ⨍_I u(y) dy,
where I = (x₀ − r, x₀ + r) and ⨍ denotes the average.
[Figure: the affine function ax + b over the interval I.]
Theorem 2.4.2 (Mean value property for harmonic functions): Suppose that u is harmonic on U, and that B(x, r ) ⊂ U. Then
  u(x) = ⨍_{∂B(x,r)} u(y) dS(y) = ⨍_{B(x,r)} u(y) dy.
Proof: Idea: set Φ(r) = ⨍_{∂B(x,r)} u(y) dS(y) and prove that Φ′(r) = 0. We compute
  Φ′(r) = d/dr ⨍_{∂B(x,r)} u(y) dS(y)
        = d/dr ⨍_{∂B(0,1)} u(x + rz) dS(z)
        = ⨍_{∂B(0,1)} Du(x + rz) · z dS(z)
        = ⨍_{∂B(x,r)} Du(y) · (y − x)/r dS(y)
        = ⨍_{∂B(x,r)} Du(y) · ν(y) dS(y)   (see the figure below)
        = (1/|∂B(x, r)|) ∫_{B(x,r)} div Du(y) dy = 0,
since div Du = ∆u = 0. ∴ Φ′(r) = 0.
Theorem 2.4.3 (Maximum principle for harmonic functions): Let u be harmonic (u ∈ C²(Ω) ∩ C(Ω̄), ∆u = 0) in an open, bounded Ω. Then
  max_{Ω̄} u = max_{∂Ω} u.
Moreover, if Ω is connected and the maximum of u is attained in Ω, then u is constant.
[Figure: on ∂B(x, r) the outward unit normal is ν(y) = (y − x)/r.]
Remark:
• The second statement is also referred to as the strong maximum principle.
• The assertion holds also for the minimum of u, replace u by −u.
Proof: We prove the strong maximum principle. Suppose that M = max_{Ω̄} u is attained at x₀ ∈ Ω; then there exists a ball B(x₀, r) ⊂ Ω, r > 0. By the mean-value property,
  M = u(x₀) = ⨍_{B(x₀,r)} u(y) dy ≤ ⨍_{B(x₀,r)} M dy = M,
so we have equality, and hence u ≡ M on B(x₀, r): if u(z) < M for some point z ∈ B(x₀, r), the middle inequality would be strict. Now set
  M = {x ∈ Ω : u(x) = M}.
M is closed in Ω since u is continuous, and M is open in Ω since, by the argument above, it can be viewed as a union of open balls. Hence M = Ω and u is constant in Ω.
Application of maximum principles: uniqueness of solutions of PDEs.
Theorem 2.4.4 (): There is at most one smooth solution of the boundary-value problem
−∆u = f in Ω
u = g on ∂Ω
Proof: Suppose u₁ and u₂ are solutions. Then u₁ − u₂ is harmonic with zero boundary values, so by the maximum principle
  max_{Ω̄} (u₁ − u₂) = max_{∂Ω} (u₁ − u₂) = 0,
i.e. u₁ − u₂ ≤ 0 in Ω. The same argument applied to −(u₁ − u₂) gives u₂ − u₁ ≤ 0 in Ω. Hence |u₁ − u₂| ≤ 0, so u₁ = u₂ in Ω: there exists at most one solution.
2.4.2 Regularity of Solutions
Goal: Harmonic functions are analytic (they can be expressed as convergent power series).
• Need that u is infinitely many times differentiable.
• Need estimates for the derivatives of u.
Next goal: Harmonic functions are smooth.
A way to smoothen functions is mollification.
Idea: (weighted) averages of functions are smoother.
Definition 2.4.5:
  η(x) = C e^{1/(|x|²−1)} for |x| < 1,   η(x) = 0 for |x| ≥ 1.
Fact: η is infinitely many times differentiable and η ≡ 0 outside B(0, 1). We fix C such that ∫_{Rⁿ} η(y) dy = 1. For ε > 0 define the rescaled mollifier
  η_ε(x) = ε^{−n} η(x/ε).
Then:
• η_ε = 0 outside B(0, ε)
• ∫_{Rⁿ} η_ε(y) dy = 1
If u is integrable, then
  u_ε(x) := ∫_{Rⁿ} η_ε(x − y) u(y) dy = ∫_{Rⁿ} η_ε(y) u(x − y) dy = ∫_{B(x,ε)} η_ε(x − y) u(y) dy
is called the mollification of u.
• u_ε is C^∞ (infinitely many times differentiable)
Theorem 2.4.6 (Properties of u_ε):
• u_ε → u (pointwise) a.e. as ε → 0
• If u is continuous, then u_ε → u uniformly on compact subsets
Now since Φ′(r) = 0,
  Φ(r) = lim_{ρ→0} Φ(ρ) = lim_{ρ→0} ⨍_{∂B(x,ρ)} u(y) dS(y) = u(x),
∴ Φ(r) = u(x) for all r such that B(x, r) ⊂ U. For the volume average,
  ⨍_{B(x,r)} u(y) dy = (1/|B(x, r)|) ∫₀^r ∫_{∂B(x,ρ)} u(y) dS(y) dρ
                     = (1/|B(x, r)|) ∫₀^r |∂B(x, ρ)| ⨍_{∂B(x,ρ)} u(y) dS(y) dρ
                     = (u(x)/|B(x, r)|) ∫₀^r nα(n) ρ^{n−1} dρ
                     = (u(x)/(α(n) r^n)) · α(n) r^n = u(x).
Note: nα(n) is the surface area of ∂B(0, 1) in Rⁿ.
Complex differentiable ⇒ analytic: if f is holomorphic near z₀, then
  f(z) = Σ_{n=0}^∞ a_n (z − z₀)^n,   a_n = f^{(n)}(z₀)/n! = (1/(2πi)) ∮_γ f(ω)/(ω − z₀)^{n+1} dω.
Theorem 2.4.7 (): If u ∈ C(Ω), Ω bounded and open, satisfies the mean-value property,
then u ∈ C ∞ (Ω)
Proof: Take ε > 0 and set Ω_ε = {x ∈ Ω : dist(x, ∂Ω) > ε}.
[Figure: the subdomain Ω_ε obtained by pulling ∂Ω inward by ε.]
In Ω_ε, using that η is radial,
  u_ε(x) = ∫_{Rⁿ} η_ε(x − y) u(y) dy
         = (1/ε^n) ∫_{B(x,ε)} η(|x − y|/ε) u(y) dy
         = (1/ε^n) ∫₀^ε η(ρ/ε) ∫_{∂B(x,ρ)} u(y) dS(y) dρ
         = (1/ε^n) ∫₀^ε η(ρ/ε) |∂B(x, ρ)| ⨍_{∂B(x,ρ)} u(y) dS(y) dρ
         = u(x) (1/ε^n) ∫₀^ε η(ρ/ε) nα(n) ρ^{n−1} dρ
         = u(x) ∫_{Rⁿ} η_ε(y) dy = u(x),
where the mean value property was used in the fifth line. So in Ω_ε we have u_ε(x) = u(x), hence u ∈ C^∞(Ω_ε); since ε > 0 is arbitrary, u ∈ C^∞(Ω).
Corollary 2.4.8 (): If u ∈ C(Ω) satisfies the mean-value property, then u is harmonic.
Proof: The mean-value property implies u ∈ C^∞(Ω). Our proof of "harmonic ⇒ mean-value property" gave
  Φ′(r) = (r/n) ⨍_{B(x,r)} ∆u(y) dy,   where Φ(r) = ⨍_{∂B(x,r)} u(y) dS(y).
Now Φ′(r) = 0, and thus
  ⨍_{B(x,r)} ∆u(y) dy = 0
for every B(x, r) ⊂ Ω. Since ∆u is continuous, this is only possible if ∆u = 0, i.e. u is harmonic.
2.4.3 Estimates on Harmonic Functions
Theorem 2.4.9 (Local estimates for harmonic functions): Suppose u ∈ C²(Ω) is harmonic. Then
  |D^α u(x₀)| ≤ (C_k/r^{n+k}) ∫_{B(x₀,r)} |u(y)| dy,   |α| = k,
where
  C₀ = 1/α(n),   C_k = (2^{n+1} n k)^k/α(n)   (k = 1, 2, …),
and B(x₀, r) ⊂ Ω.
Remark: equivalently, |D^α u(x₀)| ≤ (C/r^k) ⨍_{B(x₀,r)} |u(y)| dy for |α| = k, with a constant C depending only on n and k.
Proof: (by induction)
k = 0: u(x₀) = ⨍_{B(x₀,r)} u(y) dy, so |u(x₀)| ≤ (1/(α(n) r^n)) ∫_{B(x₀,r)} |u(y)| dy.
k = 1: u harmonic ⇒ u ∈ C^∞(Ω) and ∆u_{x_i} = ∂/∂x_i ∆u = 0, i.e., all derivatives of u are harmonic. Use the mean value property for u_{x_i} on B(x₀, r/2), write u_{x_i} = div(0, …, 0, u, 0, …, 0) with u in the i-th component, and apply the divergence theorem:
  |u_{x_i}(x₀)| = | (1/|B(x₀, r/2)|) ∫_{∂B(x₀,r/2)} u(y) ν_i dS(y) | ≤ (2n/r) max_{y ∈ ∂B(x₀,r/2)} |u(y)|.
For y ∈ ∂B(x₀, r/2) we have B(y, r/2) ⊂ B(x₀, r), so the case k = 0 gives |u(y)| ≤ (2^n/(α(n) r^n)) ∫_{B(x₀,r)} |u(z)| dz, and therefore
  |u_{x_i}(x₀)| ≤ (2^{n+1} n/(α(n) r^{n+1})) ∫_{B(x₀,r)} |u(y)| dy.
k ≥ 2: Let |α| = k and write D^α u = D^β (∂u/∂x_i) with |β| = k − 1 and some 1 ≤ i ≤ n. D^α u is harmonic, so the mean value property on B(x₀, r/k) and the divergence theorem give
  |D^α u(x₀)| = | (1/|B(x₀, r/k)|) ∫_{∂B(x₀,r/k)} D^β u · ν_i dS | ≤ (nk/r) max_{y ∈ ∂B(x₀,r/k)} |D^β u(y)|.
For y ∈ ∂B(x₀, r/k) we have B(y, (k−1)r/k) ⊂ B(x₀, r), and the induction hypothesis on this ball gives
  |D^β u(y)| ≤ ((2^{n+1} n(k−1))^{k−1}/α(n)) (k/((k−1)r))^{n+k−1} ∫_{B(x₀,r)} |u(t)| dt
            ≤ (2^n (2^{n+1} nk)^{k−1}/α(n)) (1/r^{n+k−1}) ∫_{B(x₀,r)} |u(t)| dt,
using (k/(k−1))^{k−1}(k−1)^{k−1} = k^{k−1} and (k/(k−1))^n ≤ 2^n. Combining the two estimates,
  |D^α u(x₀)| ≤ (nk/r) · (2^n (2^{n+1} nk)^{k−1}/α(n)) (1/r^{n+k−1}) ∫_{B(x₀,r)} |u(t)| dt ≤ ((2^{n+1} nk)^k/α(n)) (1/r^{n+k}) ∫_{B(x₀,r)} |u(t)| dt = (C_k/r^{n+k}) ∫_{B(x₀,r)} |u(t)| dt.
As a consequence of this, we get
Theorem 2.4.10 (Liouville’s Theorem): If u is harmonic on Rn and u is bounded, then u
is constant.
Proof: By the local estimate with k = 1, for any ball B(x₀, r) ⊆ Rⁿ,
  |Du(x₀)| ≤ (C₁/r^{n+1}) ∫_{B(x₀,r)} |u(y)| dy ≤ (C₁/r^{n+1}) M |B(x₀, r)| = C₁ α(n) M/r,   where M := sup_{y∈Rⁿ} |u(y)|.
As r → ∞, Du(x₀) = 0; since x₀ is arbitrary, u is constant.
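To see that boundedness is essential here, note that u(x) = x₁ and (for n = 2) u(x₁, x₂) = e^{x₁} cos x₂ are harmonic on all of Rⁿ and nonconstant; both are unbounded, so there is no conflict with Liouville's theorem.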
2.4.4 Analyticity of Harmonic Functions
Theorem 2.4.11 (Analyticity of harmonic functions): Suppose that Ω is open (and bounded)
and that u ∈ C 2 (Ω), u harmonic in Ω. Then u is analytic i.e. u can be expanded locally into
a convergent power series.
Proof: [Figure: x₀ ∈ Ω with concentric balls of radii r, 2r and 4r = dist(x₀, ∂Ω).]
Suppose that x₀ ∈ Ω; we show that u can be expanded into a power series at x₀, i.e.
  u(x) = Σ_α (1/α!) D^α u(x₀) (x − x₀)^α.
Let r = (1/4) dist(x₀, ∂Ω) and M = (1/(α(n) r^n)) ∫_{B(x₀,2r)} |u(y)| dy.
Local estimates: for all x ∈ B(x₀, r) we have B(x, r) ⊂ B(x₀, 2r) ⊂ Ω, and thus for |α| = k
  max_{x∈B(x₀,r)} |D^α u(x)| ≤ M (2^{n+1} nk)^k/r^k = M (2^{n+1} n/r)^{|α|} |α|^{|α|}.
Stirling's formula, lim_{k→∞} k^k √k/(k! e^k) = 1/√(2π), gives k^k ≤ C k! e^k. The multinomial theorem,
  n^k = (Σ_{i=1}^n 1)^k = Σ_{|α|=k} |α|!/α! ≥ |α|!/α!   for every α with |α| = k,
gives |α|! ≤ n^{|α|} α!. Hence
  max_{x∈B(x₀,r)} |D^α u(x)| ≤ C M (2^{n+1} n/r)^{|α|} |α|! e^{|α|} ≤ C M (2^{n+1} e n²/r)^{|α|} α!.
Taylor series: since there are no more than n^N multi-indices α with |α| = N,
  Σ_{N=0}^∞ Σ_{|α|=N} (|D^α u(x₀)|/α!) |x − x₀|^{|α|} ≤ Σ_{N=0}^∞ C M (2^{n+1} e n²/r)^N n^N |x − x₀|^N = Σ_{N=0}^∞ C M q^N,   |q| < 1,
and the last series converges provided (2^{n+1} e n³/r) |x − x₀| < 1, i.e. |x − x₀| < r/(2^{n+1} e n³).
Result: the Taylor series converges in a ball of radius R = r/(2^{n+1} e n³), which depends only on dist(x₀, ∂Ω) and the dimension.
2.4.5 Harnack's Inequality
Theorem 2.4.12 (Harnack's Inequality): Suppose that Ω is open, Ω′ ⊂⊂ Ω (Ω′ is compactly contained in Ω), and Ω′ is connected. Suppose that u ∈ C²(Ω) is harmonic and nonnegative. Then
  sup_{Ω′} u ≤ C inf_{Ω′} u,
and the constant C does not depend on u.
[Figure: Ω′ ⊂⊂ Ω with 4r = dist(Ω′, ∂Ω) and a point x₀ where u(x₀) = 0.]
This means that non-negative harmonic functions cannot have wild oscillations on Ω′.
Remark: if inf_{Ω′} u = 0, then sup_{Ω′} u = 0, so u ≡ 0 on Ω′. This follows also from the minimum principle for harmonic functions.
Proof of Harnack: Let 4r = dist(Ω′, ∂Ω) and take x, y ∈ Ω′ with |x − y| ≤ r; then B(x, 2r) ⊂ Ω and B(y, r) ⊆ B(x, 2r). Hence
  u(x) = ⨍_{B(x,2r)} u(z) dz ≥ (1/|B(x, 2r)|) ∫_{B(y,r)} u(z) dz = (|B(y, r)|/|B(x, 2r)|) ⨍_{B(y,r)} u(z) dz = (1/2^n) u(y),
i.e. u(x) ≥ 2^{−n} u(y). The same argument with x and y interchanged yields u(y) ≥ 2^{−n} u(x), so
  2^n u(x) ≥ u(y) ≥ 2^{−n} u(x)   for all x, y ∈ Ω′ with |x − y| ≤ r.
[Figure: points x, y ∈ Ω′ joined by a curve γ, covered by a chain of balls B(x₀, r), B(x₁, r), ….]
Ω̄′ is compact, so cover Ω̄′ by at most N balls B(xᵢ, r). Take x, y ∈ Ω′ and a curve γ joining x and y, and choose a chain of balls B(x₀, r), …, B(x_k, r) covering γ, with k ≤ N and |x − x₀| ≤ r. Applying the previous inequality along the chain,
  u(x) ≥ (1/2^n) u(x₀) ≥ (1/2^{2n}) u(y₀) ≥ (1/2^{3n}) u(x₁) ≥ ⋯ ≥ (1/2^{(2k+2)n}) u(y) ≥ (1/2^{(2N+2)n}) u(y),
so u(x) ≥ 2^{−(2N+2)n} u(y) for all x, y ∈ Ω′. Taking xᵢ with u(xᵢ) → inf_{Ω′} u and yᵢ with u(yᵢ) → sup_{Ω′} u gives the assertion.
Properties of harmonic functions:
• mean-value property
• maximum principle
• regularity
• local estimates
• analyticity
• Harnack's inequality
2.5 Existence of Solutions
∆u = 0 in Ω
u = g on ∂Ω
Laplace’s equation
−∆u = f in Ω
u = g on ∂Ω
Poisson’s equation
• Green’s function → general representation formula
• Finding Green’s function for special domains, half-space and a ball
• Perron’s Method - get existence on fairly general domains from the existence on balls
2.5.1
Green’s Functions
We have seen that, in Rⁿ,
  u(x) = ∫_{Rⁿ} Φ(y − x) f(y) dy
solves −∆u = f in Rⁿ.
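For instance, in the familiar case n = 3 one has n(n − 2)α(n) = 3 · 1 · (4π/3) = 4π, so Φ(x) = 1/(4π|x|) and the formula above becomes the classical Newtonian potential
  u(x) = (1/(4π)) ∫_{R³} f(y)/|x − y| dy,
which solves −∆u = f for, say, f ∈ C² with compact support.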
Issue: construct fundamental solutions adapted to boundary conditions.
General representation formula for smooth functions u ∈ C²: let Ω be a bounded, open domain, x ∈ Ω, and ε > 0 such that B(x, ε) ⊂ Ω. Define Ω_ε = Ω \ B(x, ε).
u(y )∆Φ(y − x) dy
Ω
=
Z
∂Ω
=
−
Z
Z
Ω
∂Ω
+
u(y )DΦ(y − x) · ν dS(y )
Z
Du(y ) · DΦ(y − x) dy
[u(y )DΦ(y − x) · ν − Du(y ) · νΦ(y − x)] dS(y )
Ω
Volume term for → 0 gives
∆u(y )Φ(y − x) dy
Z
Ω
∆u(y )Φ(y − x) dy
singularity of Φ integrable.
boundary terms:
Z
=
[u(y )DΦ(y − x) · ν − Du(y ) · νΦ(y − x)] dS(y )
∂Ω
Z
+
[u(y )DΦ(y − x) · ν − Du(y ) · νΦ(y − x)] dS(y )
{z
} |
{z
}
∂B(x,) |
I1
Remembering Du(y ) ≤ Cn−1 and
Φ≤
I2
C
n−2
n≥3
,
ln n = 2
we see that
n≥3
ln n = 2
=⇒ I2 → 0 as → 0.
I2 ≤
Gregory T. von Nessi
Version::31102011
0 =
Z
2.5 Existence of Solutions
49
ǫ
x
Version::31102011
Ω
Z
I1 =
∂B(x,)
[u(y )DΦ(y − x) · νin dS(y )
DΦ(z) · νout =
DΦ(y − x) · νin =
Z
I1 =
−1
,
nα(n)n−1
−1
nα(n)n−1
u(y )
∂B(x,)
Z
= −
∂B(x,)
z ∈ ∂B(0, )
1
dS(y )
nα(n)n−1
u(y ) dS(y ) → u(x) as → 0
Putting things together:
Z
[u(y )DΦ(y − x) · ν − Du(y ) · νΦ(y − x)] dS(y )
Z
+u(x) +
∆u(y )Φ(y − x) dy
Ω
Z
=⇒ u(x) =
[Du(y ) · νΦ(y − x) − u(y )DΦ(y − x) · ν] dS(y )
∂Ω
Z
−
∆u(y )Φ(y − x) dy
0 =
∂Ω
Ω
Recall: Existence of harmonic functions.
Green’s functions: Ω, u ∈ C 2
0=
Z
Ω
u(y )∆Φ(y − x) dy
Representation of u(x)
u(x) = −
Z
∂Φ
∂u
u(y )
(y − x) −
(y )Φ(y − x)
∂ν
∂ν
Z∂Ω
−
∆u(y )Φ(y − x) dy
Ω
dS(y )
Gregory T. von Nessi
50
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
Green’s function = fundamental solution with zero boundary data.
Idea: Correct the boundary values of Φ(y − x) by a harmonic corrector to be equal to
zero.
G(x, y ) = Φ(y − x) − φx (y ),
Green’s function
where
∆y φx (y ) = 0
in Ω
x
φ (y ) = Φ(y − x) on ∂Ω
Same calculations as before:
u(x) = −
Z
u(y )
∂G
(x, y ) dS(y )
∂ν
Z∂Ω
−
G(x, y )∆u(y ) dy
Ω
Expectation: The solution of
∆u = 0 in Ω
u = g on ∂Ω
is given by
u(x) = −
Z
∂Ω
g(y )
∂G
(x, y ) dS(y )
∂ν
Theorem 2.5.1 (Symmetry of Green’s Functions): ∀x, y ∈ Ω, x 6= y , G(x, y ) = G(y , x).
Proof: Define
v (z) = G(x, z)
w (z) = G(y , z)
Gregory T. von Nessi
Version::31102011
2.5 Existence of Solutions
51
v harmonic for z 6= x, w harmonic for z 6= y .
v , w = 0 on ∂Ω.
ǫ
ǫ
y
x
Version::31102011
Ω
small enough such that B(x, ) ⊂ Ω and B(y , ) ⊂ Ω.
Ω = Ω \ (B(x, ) ∪ B(y , ))
Z
0 =
∆v · w dz
Ω
Z
Z
∂v
=
Dv · Dw dz
w dS −
∂Ω ∂ν
Ω
Z
∂v
∂w
=
w −v
dS
∂ν
∂ν
∂Ω
Z
v · ∆w dz
+
Ω
Now, the last term on the RHS is 0. Thus
Z
Z
∂v
∂w
∂v
∂w
w −v
dS =
v
−
w dS
∂ν
∂ν
∂ν
∂B(x,) ∂ν
∂B(y ,)
Now, since w is nice near x, we have
Z
∂w
v dS ≤ Cn−1 sup |v | = o(1)
∂B(x,)
∂B(x,) ∂ν
as → 0
On the other hand, v (z) = Φ(z − x) − φx (z), where φx is smooth in U. Thus
Z
Z
∂Φ
∂v
lim
w dS = lim
(x, z)w (z) dS = w (x)
→0 ∂B(x,) ∂ν
→0 ∂B(x,) ∂ν
by previous calculations. Thus, the LHS of (2.8) converges to w (x) as → 0. Likewise the
RHS converges to v (y ). Consequently
G(y , z) = w (x) = v (y ) = G(x, y )
2.5.2
2.5.2.1
Green’s Functions by Reflection Methods
Green’s Function for the Halfspace
Rn+ = {x ∈ Rn : xn ≥ 0}
u(x) = −
Z
∂Ω
u(y )
∂v
(y − x) dy
∂ν
Gregory T. von Nessi
52
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
In this case: ∂Ω = ∂Rn+ = {x ∈ Rn : xn = 0}
Idea: reflect x into x̃ = (x1 , . . . , xn−1 , −xn ) and to take φx (y ) = Φ(y − x̃)
G(x, y ) = Φ(y − x) − φx (y )
= Φ(y − x) − Φ(y − x̃)
where
∂G
∂G
=−
∂ν
∂yn
R
Rn−1
Gregory T. von Nessi
Version::31102011
This works since x ∈ Rn+ , x̃ ∈
/ Rn+
For y ∈ ∂Rn+ , yn = 0, |y − x| = |y − x̃|. Thus, Φ(y − x) = Φ(y − x̃) for y ∈ ∂Rn+ . Guessing
from what we showed on bounded domains:
Z
∂G
(x, y ) dy ,
u(x) = −
g(y )
∂ν
∂Ω
2.5 Existence of Solutions
53
n ≥ 3:
1
1
n(n − 2)α(n) |z|n−2
−1 z
nα(n) |z|n
Φ(z) =
=⇒ DΦ =
1 yn − xn
nα(n) |y − x|n
1 yn + xn
nα(n) |y − x|n
∂G
(y − x) =
∂yn
∂G
−
(y − x̃) =
∂yn
Version::31102011
−
So if y ∈ ∂Rn+ ,
1
∂G
2xn
∂G
(x, y ) = −
(x, y ) = −
∂ν
∂yn
nα(n) |x − y |n
So our expectation is
u(x) = −
=
Z
∂Rn+
2xn
nα(n)
g(y )
Z
∂Rn+
∂G
(x, y ) dy
∂ν
g(y )
dy .
|x − y |n
(2.8)
This is Poisson’s formula on Rn+ .
K(x, y ) =
2xn
nα(n)|x − y |n
is called Poisson’s kernel on Rn+ .
Theorem 2.5.2 (Poisson’s formula on the half space): g ∈ C 0 (∂Rn+ ), g is bounded and
u(x) is given by (2.8), then
i.) u ∈ C ∞ (Rn+ )
ii.) ∆u = 0 in Rn+
iii.) limx→x0 u(x) = g(x0 ) for x0 ∈ ∂Rn+
Proof: y 7→ G(x, y ) is harmonic x 6= y .
i.) symmetric G(x, y ) = G(y , x)
=⇒ G is harmonic in its first argument x 6= y
=⇒ derivatives of G are harmonic for x 6= y
=⇒ our representation formula involves
∂ν G(x, y )
y ∈ ∂Rn+ , x ∈ Rn+ =⇒ x 6= y
=⇒ ∂ν G is harmonic for x ∈ Rn+ .
ii.) u is differentiable
– integrate on n − 1 dimensional surface
Gregory T. von Nessi
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
54
– decay at ∞ is of order |x|1 n =⇒ the function decays fast enough to be integrable,
and the same is true for its derivatives (since g is bounded). =⇒ differentiate
under the integral, =⇒ ∆u(x) = 0.
iii.) Boundary value for u. Fix x0 ∈ ∂Rn+ , choose > 0, choose δ > 0 small enough such
that |g(x) − g(x0 )| < if |x − x0 | < δ, x ∈ ∂Rn+ , by continuity of g. For x ∈ Rn+ ,
|x − x0 | < 2δ
2xn
|u(x) − g(x0 )| =
mα(n)
Z
Rn+
K(x, y ) dy = 1
(2.9) ≤
|
∂Rn+ ∩B(x0 ,δ)
+
(2.10)
K(x, y ) |g(y ) − g(x0 )| dy
|
{z
}
≤
{z
≤ by (2.10)
Z
(2.9)
(HW# 4)
∂Rn+
Z
g(y ) − g(x0 )
dy
|x − y |n
∂Rn+ \B(x0 ,δ)
}
K(x, y )|g(y ) − g(x0 )| d
R
δ
δ
2
x
y
x0
δ
2
|x0 − y | > δ
|x − x0 | <
|y − x0 | ≤ |y − x| + |x − x0 |
δ
≤ |y − x| +
2
1
≤ |y − x| + |x0 − y |
2
=⇒
Gregory T. von Nessi
1
|y − x0 | ≤ |y − x|
2
Rn−1
Version::31102011
since
Z
2.5 Existence of Solutions
55
2nd integral
=
Z
2xn |g(y ) − g(x0 )|
dy
nα(n)
|x − y |n
n
Z
2
4xn0 max |g|
dy
nα(n)
∂Rn+ \B(x0 ,δ) |x0 − y |
∂Rn+ \B(x0 ,δ)
≤
Version::31102011
≤
4xn0 max |g|
C(δ) ≤ nα(n)
If xn0 is sufficiently small. Two estimates together: |u(x) − g(x0 )| ≤ 2 if f |x − x0 | is
small enough. =⇒ continuity at x0 u(x0 ) = g(x0 ) as asserted.
2.5.2.2
Green’s Function for Balls
1. unit ball
2. scale the result to arbitrary balls
3. inversion in the unit sphere
x̃ =
1
x
|x|2
x · x̃ =
x
x 6= 0
|x|2
|x|2
=1
O
Φ(y − x) =
For x = 0 this simplifies to
(
(
1
− 2π
ln |y − x| n = 2
1
1
n(n−2)α(n) |x−y |n−2 n ≥ 3
1
− 2π
ln |y |
1
1
n(n−2)α(n) |y |n−2
=
|{z}
|y |=1
0
1
n(n−2)α(n)
We choose Φ(y ) to be this constant, and we assume x 6= 0 from now on. For x ∈ B(0, 1),
x̃ ∈
/ B(0, 1), and Φ centered at x̃ is a good function.
n≥3
y
y
y
7→ Φ(y − x̃) is harmonic
7→ |x|2−n Φ(y − x̃) is harmonic
7→ Φ(|x|(y − x̃)) is harmonic
Gregory T. von Nessi
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
56
Corrector φx (y ) = Φ(|x|(y − x̃)) harmonic in B(0, 1)
|y | = 1 ⇐= y ∈ ∂B(0, 1) =⇒ |x|2 |y − x̃|2
x 2
|x|2
2(y , x) |x|2
|x|2 |y |2 −
+ 4
|x|2
|x|
|x|2 y −
=
=
|x|2 − 2(y , x) + |y |2
=
|x − y |2
=
=⇒ Φ(|x|(y − x̃))
Φ(y − x)
=
if y ∈ ∂B(0, 1)
Expectation: The solution of
∆u = 0
B(0, 1)
in
u = g on ∂B(0, 1)
is given by
u(x) = −
=
Z
∂ν G(x, y )g(y ) dy
∂B(0,1)
Z
− |x|2
1
nα(n)
∂B(0,1)
g(y )
dy
|x − y |n
Poisson’s formula on B(0, 1)
K(x, y ) =
1 − |x|2
1
nα(n) |x − y |n
is called Poisson’s kernel on B(0, 1)
So now, we want to scale this result.
−1 z
nα(n) |z|n
∂Φ
−1 y − x
(y − x) =
∂y
nα(n) |y − x|n
∂Φ
−1 |x|(y − x̃)
(|x|(y − x̃)) =
|x|
∂y
nα(n) ||x|(y − x̃)|n
DΦ(z) =
on ∂B(0, 1). |y − x|2 = ||x|(y − x̃)|2
=⇒
∂Φ
(|x|(y − x̃)) =
∂y
∂G
(y , x) =
∂y
=
Gregory T. von Nessi
∂G
∂ν
=
2 y − x
|x|
−1
|x|2
nα(n)
|y − x|n
−1
y −x
y |x|2 − x
−
nα(n) |y − x|n
|y − x|n
−1
y (1 − |x|2 )
nα(n)
|y − x|n
∂G
−1 1 − |x|2
y
(x, y ) =
∂y
nα(n) |x − y |n
Version::31102011
Definition 2.5.3: G(x, y ) = Φ(y − x) − Φ(|x|(y − x̃)) is the Green’s function on B(0, 1).
2.5 Existence of Solutions
57
This gives for a solution of ∆u = 0 in B(0, 1), u(x) = g(x) on ∂B(0, 1)
Z
1 − |x|2
g(y )
u(x) =
DS(y )
nα(n) ∂B(0,1) |x − y |n
Transform this to balls with radius r
Solve
∆u = 0
in B(0, r )
u = g
on ∂B(0, r )
Define v(x) = u(rx) on B(0, 1). Then ∆v(x) = r² ∆u(rx) = 0 and v(x) = u(rx) = g(rx) for x ∈ ∂B(0, 1). Poisson's formula on B(0, 1) gives
  v(x) = ((1 − |x|²)/(nα(n))) ∫_{∂B(0,1)} g(ry)/|x − y|^n dS(y).
Since u(x) = v(x/r), changing variables on the sphere (so that the integration runs over ∂B(0, r) and dS picks up a factor r^{n−1}) yields
  u(x) = ((r² − |x|²)/(nα(n) r)) ∫_{∂B(0,r)} g(y)/|x − y|^n dS(y),   x ∈ B(0, r).   (2.11)
This is Poisson's formula on B(0, r).
Theorem 2.5.4: Suppose that g ∈ C(∂B(0, r)). Then u(x) given by (2.11) has the
following properties:
i.) u ∈ C ∞ (B(0, r ))
ii.) ∆u = 0
iii.) ∀x0 ∈ ∂B(0, r )
lim u(x) = g(x0 ),
x→x0
x ∈ B(0, r )
Proof: Exercise, similar to the proof in the half-space.
2.5.3
Energy Methods for Poisson’s Equation
Theorem 2.5.5 (): There exists at most one solution of
−∆u = f in Ω
u = g on ∂Ω
if ∂Ω is smooth, f , g continuous.
Gregory T. von Nessi
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
58
Proof: Two solutions u1 and u2 . Then define w = u1 − u2 , thus
−∆w = 0 in Ω
w = 0 on ∂Ω
0 =
Z
−∆w · w dx
ΩZ
= −
Z
Ω
=⇒
=⇒
=⇒
=⇒
=⇒
Dw · ν |{z}
w dS(y ) +
=0
Ω
|Dw |2 dx
|Dw |2 dx
Dw = 0 in Ω
w = constant in Ω
w = 0 on ∂Ω
w = 0 in Ω
u1 = u2 in Ω.
2.5.4
Perron’s Method of Subharmonic Functions
• Poisson’s formula on balls
• compactness argument
• construct a solution of ∆u = 0 in Ω
• Check boundary data, barriers.
Assumption: Ω open, bounded and connected.
Definition 2.5.6: A continuous function u is said to be subharmonic on Ω, if for all balls
B(x, r ) ⊂⊂ Ω, and all harmonic functions h with u ≤ h on ∂B(x, r ) we have u ≤ h on
B(x, r ).
Definition 2.5.7: Superharmonic is analogous.
Proposition 2.5.8 (): Suppose u is subharmonic in Ω. Then u satisfies a strong maximum
principle. Moreover, if v is superharmonic on Ω with u ≤ v on ∂Ω, then either u < v on Ω,
or u = v on Ω.
Proof: u satisfies a mean-value property:
B(x, r ) ⊂⊂ Ω. Let h be the solution of
∆h = 0
h = u
in B(x, r )
on ∂B(x, r )
Then
Z
u(x) ≤ h(x) = −
∂B(x,r )
Z
h(y ) dy = −
∂B(x,r )
Idea: Local modification of subharmonic functions.
Gregory T. von Nessi
u(y ) dy
Version::31102011
=
∂Ω
Z
2.5 Existence of Solutions
59
Definition 2.5.9: u subharmonic on Ω, B ⊂⊂ Ω. Let u be the solution of
∆u = 0
in B
u = u
on ∂B
Define
U(x) =
u(x) in B
u(x) on Ω \ B
Version::31102011
U is called the harmonic lifting of u on B. U subharmonic.
In 1D:
u
harmonic
Proposition 2.5.10 (): If u1 , . . . , un are subharmonic, then u = max{u1 , . . . , un } is subharmonic.
Idea: Φ is bounded on ∂Ω. A continuous subharmonic function u is called a subfunction
relative to Φ if u ≤ Φ on ∂Ω.
Define superfunction analogously
inf Φ
∂Ω
sup Φ
subfunction
superfunction
∂Ω
Theorem 2.5.11 (): Φ bounded function on ∂Ω.
SΦ = {v : v subfunction relative to Φ}
and define u = supv ∈SΦ v . Then u is harmonic in Ω.
Remark: u is well-defined since w (x) = sup∂Ω Φ is a superfunction.
All subfunction ≤ all superfunctions on ∂Ω, and thus by Proposition 2.20, all subfunctions
≤ all superfunctions on Ω. Thus,
all subfunctions ≤ sup Φ < ∞.
∂Ω
Proof: u is well-defined. Fix y ∈ Ω, there exists vk ∈ SΦ such that vk (y ) → u(y ). Replace
vk by max{vk , inf Φ} if needed to ensure that vk is bounded from below. Choose
B(y , R) ⊂⊂ Ω,
R > 0.
Gregory T. von Nessi
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
60
vk = harmonic lifting of vk on B(y , R)
vk uniformly bounded
vk (y ) → u(y )
vk is harmonic in B(y , R).
HW1 (compactness result): ∃ subsequence vki that converges uniformly on all compact subsets on B(y , R) to a harmonic function v . By construction, v (y ) = u(y ).
Assertion: u = v on B(y , R). =⇒ u harmonic on B(y , R) =⇒ u harmonic in Ω
Clearly: v ≤ u in B(y , R)
∃z ∈ B(y , R), v (z) < u(z)
=⇒ ∃u ∈ SΦ such that v (z) < u(z) ≤ u(z)
Take wL = max{u, vki }
WL = harmonic lifting of wL on B.
As before, ∃ subsequence of {WL } that converges to a harmonic function w on B, by
construction:
v ≤w ≤u
v (y ) = w (y ) = u(y )
Maximum principle =⇒ v = w .
However, w (z) > v (z), contradiction.
=⇒ u = v .
2.5.4.1
Boundary behavior
Definition 2.5.12: Let ξ ∈ ∂Ω. A function ω ∈ C(Ω) is called a barrier at ξ relative to Ω if
i.) w is superharmonic in Ω
ii.) w > 0 in Ω \ {ξ}
Theorem 2.5.13 (): Φ bounded on ∂Ω. Let u be Perron’s solution of ∆u = 0 as constructed
before. Assume that ξ is a regular boundary point, i.e., there exists a barrier at ξ, and that
Φ is continuous at ξ. Then u is continuous at ξ and u(x) → Φ(ξ) as x → ξ.
Gregory T. von Nessi
Version::31102011
Suppose otherwise:
2.5 Existence of Solutions
61
w (ξ) = 0
w >0
ξ
Version::31102011
Ω
Proof: M = sup∂Ω |Φ|. w = barrier at ξ. Φ continuous at ξ. Fix > 0, ∃δ > 0 such that
|Φ(x) − Φ(ξ)| < if |x − ξ| < δ, x ∈ ∂Ω. Choose k > 0 such that kw (x) > 2M for x ∈ Ω,
and |x − ξ| > δ.
x 7→ Φ(ξ) − − kw (x) is a subfunction.
x 7→ Φ(ξ) + + kw (x) is a superfunction.
w superharmonic =⇒ −w subharmonic.
By construction
Φ(ξ) − − kw (x) ≤ u(x)
≤ Φ(ξ) + + kw (x)
⇐⇒ |u(x) − Φ(ξ)| ≤ + kw (x)
and w (x) → 0 as x → ξ
|u(x) − Φ(ξ)| < if
|x − ξ| is small enough.
Theorem 2.5.14 (): Ω bounded, open, connected and all ξ ∈ ∂Ω regular. g continuous on
∂Ω =⇒ there exists a solution of
∆u = 0 in Ω
h = g on ∂Ω
Remark: A simple criterion for ξ ∈ ∂Ω to be regular is that Φ satisfies an exterior sphere
condition: there exists B(y , R), R > 0 such that B(y , R) ∩ Ω = {ξ}
R
y
ξ
Barrier:
w (x) =
R2−n − |x − y |2−n n ≥ 3
|
ln |x−y
n=2
R
Gregory T. von Nessi
62
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
2.6
Solution of the Dirichlet Problem by the Perron Process
2.6.1
Convergence Theorems
Important to the proof of existence of solutions to Laplace’s equation is the concept of
convergence of harmonic functions.
Theorem 2.6.1 (): A bounded sequence of harmonic functions on a domain Ω contains a
subsequence which converges uniformly, together with its derivatives, to a harmonic function.
Proof: Take Ω′ ⊂ Ω compact. For x, y ∈ Ω′ we calculate, via the mean value theorem of calculus and the interior derivative estimate (2.7),
  |u_m(x) − u_m(y)| ≤ max_{0≤t≤1} |Du_m(tx + (1 − t)y)| · |x − y| ≤ max_{Ω′} |Du_m| · |x − y| ≤ C sup_Ω |u_m| · |x − y| ≤ C′(K, n, L) |x − y|,
where L is a constant which bounds the sequence elements um . From this, we see that
the sequence is equicontinuous. Next we recall that the Arzelà–Ascoli theorem asserts that
such a sequence will have a subsequence converging uniformly to a limit. Now, one may
apply the same calculation above iteratively to any order of derivative (iteratively refining the
subsequence for each new derivative). From this we conclude that there is a subsequence
whose derivatives up to and including order |β| all converge uniformly to a limit for any |β|.
It now follows that the limit is indeed a solution to Laplace’s equation.
2.6.2
Generalized Definitions & Harmonic Lifting
Before proving the existence of solution to the Dirichlet problem for Laplace’s equation, one
needs to extend the notions of subharmonic(superharmonic) functions to non-C 2 functions.
Definition 2.6.2: u ∈ C 0 (Ω) is subharmonic(superharmonic) in Ω if for any ball B ⊂⊂ Ω
and harmonic function h ∈ C 2 (B) ∩ C 0 (B) with h ≥ (≤) u on ∂B, one has h ≥ (≤) u on B.
Note: If u is subharmonic and superharmonic, then it is harmonic.
From this new definition, some properties are immediate.
Properties:
i.) The Strong Maximum Principle still applies to this extended definition. Since u is
subharmonic, u ≤ h for h harmonic in B with the same boundary values, the Strong
Maximum Principle follows by applying the Poisson Integral formula to this situation.
ii.) w (x) = maxm (minm ){u1 (x), . . . , um (x)} is subharmonic(superharmonic) if u1 , . . . , um
are subharmonic(superharmonic).
iii.) The Harmonic Lifting on a ball B for a subharmonic function u on Ω is defined as
u(x) for x ∈
/B
U(x) =
,
h(x) for x ∈ B
where h ∈ C 2 (B) ∩ C 0 (B) is harmonic in B with h = u on ∂B.
Note: The key thing to realize about the definition of harmonic lifting is that the “lifted”
part is harmonic in our classical C 2 sense; it is simply the Poisson integral solution on the
ball with u taken as boundary data.
Proposition 2.6.3 (): U is subharmonic on Ω, when h is harmonic in B.
[Figure: two overlapping balls B and B′, with regions B′ \ B and B′ ∩ B.]
Proof: Refer to the figure above as you read this proof. Take another ball
B 0 ⊂⊂ Ω along with another harmonic function h0 ∈ C 2 (B 0 ) ∩ C 0 (B 0 ) which is ≥ U on ∂B 0 .
By definition of u(x), one has that h0 ≥ u on B 0 \ B. This, in turn, leads to the assertion
that h0 ≥ U on ∂B ∩ B 0 , i.e. h0 ≥ U on ∂(B 0 ∩ B). As, h0 − U is harmonic (in the classical
sense), we can apply the Weak Maximum Principle to ascertain that h0 ≥ U on B ∩ B 0 . Since
B 0 is arbitrary, the proposition follows.
2.6.3
Perron’s Method
Now, we can start pursuit of the main objective of this section, namely to prove the existence
of the Dirichlet Problem:
∆u = 0 in Ω, Ω bounded domain in Rn
u = φ on ∂Ω, φ ∈ C 0 (∂Ω)
The first step to doing this requires the following definition
Definition 2.6.4: A subfunction(superfunction) in Ω is subharmonic(superharmonic) in Ω
and is in C 0 (Ω) in addition to being ≤ φ on ∂Ω.
Now, we define
Sφ = {all subfunctions},
along with
ũ(x) = sup v (x),
v ∈Sφ
where the supremum is understood to be taken in the pointwise sense. ũ(x) is bounded as
v ∈ Sφ implies that v ≤ max∂Ω φ via the Weak Maximum Principle.
Remark: Essentially, the above definitions allow one to construct a weak formulation (or
weak solution) to the Dirichlet Problem via Maximum Principles. Indeed, at this point, it
has not been ascertained whether or not ũ is continuous.
Gregory T. von Nessi
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
64
Theorem 2.6.5 (): ũ is harmonic in Ω.
Proof: First, fix y ∈ Ω. Since u(y ) = supSφ v , there is a sequence {vm } ⊂ Sφ such that
vm (y ) → u(y ). Of course, we may assume that vm is bounded from above by max∂Ω φ.
Now, consider a ball B ⊂⊂ Ω and replace all vm by their harmonic liftings on B. Denoting
the resulting sequence as {Vm }, it is clear that {Vm } ⊂ Sφ and that Vm (y ) → u(y ). Since
{Vm } are all harmonic (in the C 2 sense) in B, we apply our harmonic convergence theorem
to extract a subsequence (again denoted {Vm } after relabeling) which converges uniformly in
compact subset of B to a harmonic function ṽ in B. Obviously, we have ṽ (y ) = u(y ), but
we now make the following claim.
Claim: ṽ = u in B.
Proof of Claim: Consider z ∈ B such that ṽ (z) < u(z). This implies that there exists
a function v ∈ Sφ such that v (z) > ṽ (z). Replace Vm by vm = max{Vm , v }. Next, take the
harmonic liftings in B of these to get {Vm }. Again, utilizing the harmonic convergence theorem, we extract a subsequence (still denoted Vm after relabeling) such that {Vm } converges
locally uniformly to a harmonic function w > ṽ in B, but w (y ) = v (y ) = u(y ). Thus, by
the Strong Maximum Principle w = ṽ , a contradiction.
Thus, ṽ = u in B which implies that u is indeed harmonic.
Now, to complete our existence proof, we need to see if the Perron solution of the Dirichlet
Problem attains the proper boundary data. To do this we make the next definition.
Definition 2.6.6: A Barrier at y ∈ ∂Ω is a function w ∈ C 0 (Ω) that has the following
properties:
i.) w is superharmonic,
ii.) w (y ) = 0,
iii.) w > 0 on Ω \ {y }.
Theorem 2.6.7 (): Let u be a Perron solution. Then u ∈ C 0 (Ω) with u = φ on ∂Ω if and
only if there exists a barrier at each point y ∈ ∂Ω.
Proof:
(Only if) Solve with the BC φ = |x − y |.
(If) Take > 0 which implies there exists a δ > 0 such that |φ(x) − φ(y )| < implies
|x − y | < δ by continuity of φ. Now, choose a constant K() large enough so that
Kw > 2 · max∂Ω |φ| if |x − y | ≥ δ. Next, define w ± := φ(y ) ± ( + Kw ). It is clear
that w − is a subfunction and that w + is a superfunction with w − ≤ u ≤ w + on ∂Ω.
Application of the Weak Maximum Principle now yields
|u(x) − φ(y )| ≤ + Kw .
Taking → 0, one sees that u(x) → φ(y ) as x → y (since w (x) → 0 as x → y by the
definition).
[Figure: a domain Ω with an exterior sphere S touching ∂Ω at the point y.]
Now that it has been shown that the existence of Perron solutions is equivalent to the existence
of barriers on ∂Ω, we wish to ascertain precisely what criterion on ∂Ω implies the existence
of barriers. With that, we make the following definition.
Definition 2.6.8: A domain Ω satisfies an exterior sphere condition if at every point y ∈ ∂Ω,
there exists a sphere S such that S ∩ Ω = {y } (refer to the above figure).
Correspondingly, we have the following theorem.
Theorem 2.6.9 (): If Ω satisfies an exterior sphere condition, then there exists a barrier at
each point y ∈ ∂Ω.
Proof: Simply take
w (x) =
(
R2−n − |x − y |2−n for n > 2
,
R
ln |x−y
| for n = 2
as the barrier at y ∈ ∂Ω. It is trivial to verify this function is indeed a barrier at y ∈ ∂Ω.
Remarks:
i.) It is easy to see that if the boundary ∂Ω is smooth, in a sense that locally it is the
graph of a C 2 function, then it satisfies an exterior sphere condition. Also, all convex
domains satisfy an exterior sphere condition.
ii.) There are more general criterion for the existence of barriers, like the exterior cone
condition; but the proofs are harder.
2.7
Exercises
2.1:
a) Find all solutions of the equation ∆u + cu = 0 with radial symmetry in R3 . Here c ≥ 0
is a constant.
b) Prove that
√
cos( c|x|)
Φ(x) = −
4π|x|
is a fundamental solution for the equation, i.e., show that for all f ∈ C 2 (R3 ) with
compact support a solution of the equation ∆u + cu = f is given by
Z
u(x) =
Φ(x − y )f (y ) dy .
R3
Gregory T. von Nessi
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
66
Hint: You can solve the ODE you get in a) by defining w (r ) = r v (r ). Show that w solves
w 00 + cw = 0.
2.2: [Evans 2.5 #4] We say that v ∈ C 2 (Ω) is subharmonic if
−∆v ≤ 0
in Ω.
a) Suppose that v is subharmonic. Show that
Z
1
v (x) ≤
v (y ) dy
|B(x, r )| B(x,r )
for all B(x, r ) ⊂ Ω.
max v = max v .
∂Ω
Ω
c) Let φ : R → R be smooth and convex. Assume that u is harmonic and let v = φ(u).
Prove that v is subharmonic.
d) Let u be harmonic. Show that v = |Du|2 is subharmonic.
2.3: Let Rⁿ₊ be the upper half space, Rⁿ₊ = {x ∈ Rⁿ : x_n > 0}.
a) Suppose that ũ is a smooth solution of ∆u = 0 in Rn+ with ũ = 0 on ∂Rn+ .
b) Use the result in a) to show that bounded solutions of the Dirichlet problem in the half
space Rn+ ,
∆u = 0 in Rn+ ,
u = g on ∂Rn+ ,
are unique. Show by a counter example that the corresponding statement does not
hold for unbounded solutions.
2.4: [Qualifying exam 08/98] Suppose that u ∈ C 2 (R2+ ) ∩ C 0 (R2+ ) is harmonic in R2+ and
bounded in R2+ ∪ ∂R2+ . Prove the maximum principle
sup u = sup u
R2+
∂R2+
Hint: Consider > 0 the harmonic function
v (x, y ) = u(x, y ) − ln(x 2 + (y + 1)2 ).
2.5: Fundamental solutions and Green’s functions in one dimension.
a) Find a fundamental solution Φ of Laplace’s equation in one dimension,
−u 00 = 0
in (−∞, ∞).
Show that
u(x) =
Z
∞
−∞
Φ(x − y )f (y ) dy
is a solution of the equation −u 00 = f for all f ∈ C 2 (R) with compact support.
Gregory T. von Nessi
Version::31102011
b) Prove that subharmonic functions satisfy the maximum principle,
2.7 Exercises
67
b) Find the Green’s function for Laplace’s equation on the interval (−1, 1) subject to the
Dirichlet boundary conditions u(−1) = u(1) = 0.
c) Find the Green’s function for Laplace’s equation on the interval (−1, 1) subject to the
mixed boundary conditions u 0 (−1) = 0 and u(1) = 0, i.e., find a fundamental solution
Φ with Φ0 (−1) = 0 and Φ(1) = 0.
Version::31102011
d) Can you find the Green’s function for the Neumann problem on the interval (−1, 1),
i.e., can you find a fundamental solution Φ with Φ0 (−1) = Φ0 (1) = 0?
2
2.6: Use the method of reflection to find the Green’s function for the first quadrant R++
=
2
2
{x ∈ R : x1 > 0, x2 > 0} in R . Find a representation of a solution u of the boundary value
problem
∆u = 0 in R2++ ,
u = g on ∂R2++ ,
where g is a continuous and bounded function on ∂R2++ .
2.7: Let Ω1 and Ω2 be open and bounded sets in Rn (n ≥ 3) with smooth boundary
that are connected. Suppose that Ω1 is compactly contained in Ω2 , i.e., Ω1 ⊂ Ω2 . Let
Gi (x, y ) be the Green’s functions for the Laplace operator on Ωi , i = 1, 2. Show that
G1 (x, y ) < G2 (x, y )
for x, y ∈ Ω1 , x 6= y .
2.8: [Evans 2.5 #6] Use Poisson’s formula on the ball to prove that
r n−2
r + |x|
r − |x|
u(0) ≤ u(x) ≤ r n−2
u(0),
n−1
(r + |x|)
(r − |x|)n−1
whenever u is positive and harmonic in B(0, r ). This is an explicit form of Harnack’s inequality.
2.9: [Qualifying exam 08/97] Consider the elliptic problem in divergence form,
Lu = −
n
X
∂i (aij (x)∂j u)
i,j=1
over a bounded and open domain Ω ⊂ Rn with smooth boundary. The coefficients satisfy
aij ∈ C 1 (Ω) and the uniform ellipticity condition
n
X
i,j=1
aij (x)ξi ξj ≥ λ0 |ξ|2
∀x ∈ Ω, ξ ∈ Rn
with a constant λ0 > 0. A barrier at x0 ∈ ∂Ω is a C 2 function w satisfying
Lw ≥ 1 in Ω,
w (x0 ) = 0, and
w (x) ≥ 0 on ∂Ω.
a) Prove the comparison principle: If Lu ≥ Lv in Ω and u ≥ v on ∂Ω, then u ≥ v in Ω.
b) Let f ∈ C(Ω) and let u ∈ C 2 (Ω) ∩ C(Ω) be a solution of
Lu = f in Ω,
u = 0 on ∂Ω.
Gregory T. von Nessi
Chapter 2: Laplace’s Equation ∆u = 0 and Poisson’s Equation −∆u = f
68
Show that there exists a constant C that does not depend on w such that
∂w
∂u
(x0 ) ≤ C
(x0 )
∂ν
∂ν
where ν is the outward normal to ∂Ω at x0 .
c) Let Ω be convex. Show that there exists a barrier w for all x0 ∈ ∂Ω.
Chapter 3: The Heat Equation
". . . it's dull you TWIT; it'll 'urt more!"
– Alan Rickman
Contents
3.1 Fundamental Solution
3.2 The Non-Homogeneous Heat Equation
3.3 Mean-Value Formula for the Heat Equation
3.4 Regularity of Solutions
3.5 Energy Methods
3.6 Backward Heat Equation
3.7 Exercises
3.1 Fundamental Solution
u_t − ∆u = 0.
Evans' approach: find a solution that is invariant under the dilation scaling u(x, t) ↦ λ^α u(λ^β x, λt), i.e. we want u(x, t) = λ^α u(λ^β x, λt) for all λ > 0, with constants α, β to be determined.
Idea: take λ = 1/t, i.e. look for
  u(x, t) = (1/t^α) u(x/t^β, 1) = (1/t^α) v(x/t^β),
and let y = x/t^β. We find an equation for v:
  ∂_t u = −(α/t^{α+1}) v(y) − (β/t^{α+1}) Dv(y)·y,   ∂_{x_i} u = (1/t^{α+β}) v_{y_i}(y),   ∂²_{x_i x_i} u = (1/t^{α+2β}) v_{y_i y_i}(y),
so u_t − ∆u = 0 becomes
  (α/t^{α+1}) v(y) + (β/t^{α+1}) Dv(y)·y + (1/t^{α+2β}) ∆v(y) = 0.
Choosing β = 1/2 makes all powers of t equal, and we obtain
  α v(y) + (1/2) Dv·y + ∆v = 0.
Assume v is radial, v(y) = w(r) with r = |y|. Then Dv(y)·y = w′(r) r and ∆v = w″ + (n−1) w′/r, so
  α w + (1/2) r w′ + w″ + ((n−1)/r) w′ = 0.
Take α = n/2 and multiply by r^{n−1}:
  (n/2) r^{n−1} w + (1/2) r^n w′ + r^{n−1} w″ + (n−1) r^{n−2} w′ = (1/2)(r^n w)′ + (r^{n−1} w′)′ = 0.
Integrate: (1/2) r^n w + r^{n−1} w′ = a = constant. Assuming r^n w → 0 and r^{n−1} w′ → 0 as r → ∞, we get a = 0, hence
  w′ = −(r/2) w   ⇒   w(r) = b e^{−r²/4}.
Thus v(y) = w(|y|) = b e^{−|y|²/4}, and with y = x/√t,
  u(x, t) = (1/t^α) v(y) = (b/t^{n/2}) e^{−|x|²/4t}.
Definition 3.1.1: The function
  Φ(x, t) = (4πt)^{−n/2} e^{−|x|²/4t}   for t > 0,   Φ(x, t) = 0   for t < 0,
is called the fundamental solution of the heat equation.
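As a direct check in one space dimension, write Φ(x, t) = (4πt)^{−1/2} e^{−x²/4t} for t > 0. Then
  Φ_t = (−1/(2t) + x²/(4t²)) Φ,   Φ_x = −(x/(2t)) Φ,   Φ_{xx} = (−1/(2t) + x²/(4t²)) Φ,
so Φ_t − Φ_{xx} = 0, i.e. Φ indeed solves the heat equation away from t = 0.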
Motivation for choice of b:
Lemma 3.1.2: ∫_{Rⁿ} Φ(y, t) dy = 1 for every t > 0.
Proof: First,
  (∫_{−∞}^∞ e^{−x²} dx)² = ∫_{−∞}^∞ ∫_{−∞}^∞ e^{−x²−y²} dy dx = ∫₀^{2π} ∫₀^∞ e^{−r²} r dr dθ = 2π [−(1/2) e^{−r²}]₀^∞ = π.
Then
  ∫_{Rⁿ} Φ(y, t) dy = (1/(4πt)^{n/2}) ∫_{Rⁿ} e^{−|y|²/4t} dy.
Change coordinates z = y/√(4t), dy = (4t)^{n/2} dz:
  ∫_{Rⁿ} Φ(y, t) dy = (1/π^{n/2}) ∫_{Rⁿ} e^{−|z|²} dz = 1.
Initial value problem for the heat equation in Rⁿ: we want to solve
  u_t − ∆u = 0 in Rⁿ × (0, ∞),   u = g on Rⁿ × {t = 0}.
Theorem 3.1.3: Suppose g is continuous and bounded on Rⁿ, and define
  u(x, t) = ∫_{Rⁿ} Φ(x − y, t) g(y) dy = (1/(4πt)^{n/2}) ∫_{Rⁿ} e^{−|x−y|²/4t} g(y) dy
for x ∈ Rⁿ, t > 0. Then
i.) u ∈ C^∞(Rⁿ × (0, ∞));
ii.) u_t − ∆u = 0 in Rⁿ × (0, ∞);
iii.) lim_{(x,t)→(x₀,0)} u(x, t) = g(x₀) for every x₀ ∈ Rⁿ (with x ∈ Rⁿ, t > 0).
Gregory T. von Nessi
Version::31102011
Z
3.2 The Non-Homogeneous Heat Equation
73
Remark
i.) “smoothening”: the solution is C ∞ for all positive times, even if g is only continuous.
ii.) Suppose that g > 0, g ≡ 0 outside B(0, 1) ∈ Rn , then u(x, t) > 0 in (Rn × (0, ∞)),
“infinite speed of propagation”.
Proof: e
−|x−y |2
4t
is infinitely many times differentiable
i.) in x, t, the integral exists, and we can differentiate under the integral sign.
Version::31102011
ii.) ut − 4u = 0 since Φ(y , t) is a solution
iii.) Boundary data:
> 0, choose δ > 0 such that |g(x0 ) − g(x)| < for |x − x0 | < δ, Then for |x − x0 | <
Z
Φ(x − y )[g(y ) − g(x0 )] dy
|u(x, t) − g(x0 )| =
n
ZR
=
Φ(y + x, t)|g(y ) − g(x0 )| dy
B(x0 ,δ)
Z
+
Φ(x − y , t) |g(y ) − g(x0 )| dy
|
{z
}
Rn \B(x0 ,δ)
δ
2
≤C
|x − x0 | < 2δ , |y − x0 | > δ
=⇒ (see Green’s function on half space)
1
2 |x0
− y | ≤ |x − y | Then
Z
1
(4πt)
n
2
e
−|x−y |2
4t
≤
dy
Rn \B(x0 ,δ)
≤
1
Z
e
−|x0 −y |2
16t
dy
(4πt) Rn \B(x0 ,δ)
Z ∞
−r 2
C
e 16t r n−1 dr → 0
n
(4πt) 2 0
n
2
as t → 0
|u(x, t) − g(x0 )| < 2 if |x − x0 | <
small enough.
δ
2
and t small enough, if |(x, t) − (x0 , t)| is
Formally: Φt − 4Φ = 0 in Rn × (0, ∞)
Φ as t → 0
Z
Φ(x, t) dx = 1
for t > 0.
Rn
Formally: Φ(x, 0) = δx , i.e., the total mass is concentrated at x = 0.
3.2 The Non-Homogeneous Heat Equation
Non-homogeneous equation:
  u_t − ∆u = f in Rⁿ × {t > 0},   u = 0 on Rⁿ × {t = 0}.   (3.1)
3.2.1 Duhamel's Principle
Φ(x − y, t − s) is a solution of the heat equation. For fixed s ≥ 0, the function
  u(x, t; s) = ∫_{Rⁿ} Φ(x − y, t − s) f(y, s) dy
solves
  u_t(·; s) − ∆u(·; s) = 0 in Rⁿ × (s, ∞),   u(·; s) = f(·, s) on Rⁿ × {t = s}.
Duhamel’s principle: the solution of (3.1) is given by
Z t
u(x, t) =
u(x, t; s) ds
⇐⇒
0
Z tZ
u(x, t) =
Φ(x − y , t − s)f (y , s) dy ds
(3.2)
Rn
0
Compare to Duhamel’s principle applied to the transport equation.
Remark: The solution of
ut − ∆u = f in Rn × {t > 0}
u = g on Rn × {t = 0}
(3.3)
is given by
u(x, t) =
Z
Rn
Φ(x − y , t)g(y ) dy +
Z
0
t
Z
Rn
Φ(x − y , t − s)f (y , s) dy ds
Proof: See proof of following theorem.
Theorem 3.2.1 (): Suppose that f ∈ C12 (Rn × (0, ∞)) (i.e. f is twice continuously differentiable in x, f is continuously differentiable in t), f has compact support, then the solution
of the non-homogeneous equation is given by (3.2).
Proof: Change coordinates
u(x, t) =
Gregory T. von Nessi
Z
0
t
Z
Rn
Φ(y , s)f (x − y , t − s) dy ds
Version::31102011
Rn
3.3 Mean-Value Formula for the Heat Equation
75
We can differentiate under the integral sign:
Z tZ
Z
∂u
∂f
(x, t) =
Φ(y , s) (x − y , t − s) dy ds +
Φ(y , t)f (x − y , 0) dy
∂t
∂t
0
Rn
Rn
Z
Version::31102011
∂2u
(x, t) =
∂xi ∂xj
t
Z
Φ(y , s)
Rn
0
∂2f
(x − y , t − s) dy ds
∂xi ∂xj
and all these derivatives are continuous.
Z tZ
ut − ∆u =
Φ(y , s)(∂t − ∆x )f (x − y , t − s) dy ds
0Z Rn
+
Φ(y , t)f (x − y , 0) dy
Rn
|
{z
}
=
Z
+
t
Z
K
n
Z RZ
Φ(y , s)(−∂s − ∆y )f (x − y , t − s) dy ds
Rn
0
Φ(y , s)(−∂s − ∆y )f (x − y , t − s) dy ds + K
= I + J + K
|J | ≤ (sup |∂t f | + sup |D2 f |) ·
I
Z Z
0
|
Rn
Φ(y , s) dy ds
{z
}
1
= → 0 as → 0
Z tZ
=
(∂s − ∆y )Φ(y , s) f (x − y , t − s) dy ds
{z
}
Rn |
=0
Z
−
Φ(y , t)f (x − y , 0) dy
Rn
|
{z
}
−K
Z
+
Φ(y , )f (x − y , t − ) dy
Rn
Collecting the terms:
(ut − ∆u)(x, t) =
lim
Z
→0 Rn
Φ(y , )f (x − y , t − ) dy
= f (x, t)
Calculation similar to last class.
3.3
Mean-Value Formula for the Heat Equation
1
u(x, t) = n
4r
ZZ
u(y , s)
E(x,t;r )
|x − y |2
dy ds
(t − s)2
Analogy: Laplace ∂B(x, r ), B(x, r )
∂B(x, r ) = level set of the fundamental solution.
Expectation: level sets of Φ(x, t) relevant.
Gregory T. von Nessi
Chapter 3: The Heat Equation
76
Definition 3.3.1: x ∈ Rn . t ∈ R, r > 0. Define the heat ball
t
(x, t)
x
E(x, t; r ) =
1
(y , s) ∈ R × R s ≤ t, Φ(x − y , t − s) ≥ n
r
n
x, t is in the boundary of E(x, t; r )
Definition 3.3.2:
1. Parabolic cylinder: Ω open, bounded set, then
ΩT = Ω × (0, T ]
ΩT includes the top surface
t
Ω
Rn
ΓT
2. Parabolic boundary: ΓT := ΩT \ ΩT = bottom surface + lateral boundary
(This body is the relevant boundary for the maximum principle).
Theorem 3.3.3 (Mean value formula): Ω bounded, open
u ∈ C12 (ΩT ) solution of the heat equation. Then
ZZ
1
|x − y |2
u(x, t) = n
u(y , s)
dy ds
4r
(t − s)2
E(x,t,r )
Gregory T. von Nessi
Version::31102011
Rn
3.3 Mean-Value Formula for the Heat Equation
77
∀E(x, t, r ) ⊂ ΩT
Proof: Translate in space and time to assume x = 0, t = 0.
E(r ) = E(0, 0, r )
ZZ
1
|y |2
φ(r ) = n
u(y , s) 2 dy ds
4r
|s|
E(r )
Idea: compute φ0 (r ).
Version::31102011
E(r ):
φ(−y − s) > 1
−|y |2
1
−4s ≥ 1
n e
(4π(−s)) 2
1
−(y /r )2 /(−4(s/r 2 ))
≥1
n e
−s 2
4π r 2
⇐⇒
⇐⇒
h(y , s) = (r y , r 2 s) maps E(1) to E(r ).
Change coordinates:
ZZ
1
|y |2
u(y
,
s)
dy ds =
4r n
s2
E(r )
"Z
f (x) dx =
ZZ
1
|y |2
u(y
,
s)
dy ds
4r n
s2
h(E(1))
#
Z
(f ◦ h)(x) · |det(Dh)| dx
Ω
h(Ω)
ZZ
1
|r y |2 n+2
2
u(r
y
,
r
s)
r
dy ds
4r n
(r 2 s)2
E(1)
ZZ
|y |2
1
u(r y , r 2 s) 2 dy ds
4 E(1)
s
=
=
(
)
2
2
X ∂u
|y
|
∂u
|y
|
φ0 (r ) =
(r yi , r 2 s) · yi 2 +
(r y , r 2 s) · 2r s 2
dy ds
∂yi
s
∂s
s
E(1)
i
ZZ
1
y (y /r )2
∂u
|y /r |2
1
Du(y , s) ·
+
dy ds
=
(y
,
s)
·
2r
4 E(r )
r (s/r 2 )2
∂s
s/r 2
r n+2
ZZ
1
|y |2
∂u
|y |2
=
Du(y
,
s)
·
y
+
2
(y
,
s)
dy ds
4r n+1
s2
∂s
s
E(r )
=: A + B
1
4
ZZ
Heat ball defined by φ(−y , −s) =
⇐⇒
1
rn
1
(r π(−s))
n
2
e −|y |
2 /(−4s)
=
1
rn
Now, we define
ψ(y , s) =
n
|y |2
ln(−4πs) +
+ n ln r
2
4s
Gregory T. von Nessi
Chapter 3: The Heat Equation
78
It’s observed that ψ = 0 on ∂E(0, 0, r ) which implies ψ(y , s) = 0 on ∂E(s)
Dψ(y ) =
y
2s
Rewrite B:
1
4r n+1
ZZ
4
E(r )
i=1
Collecting our terms
φ0 (r ) = A + B
=
=
1
4r n+1
1
4r n+1
= 0
(
)
n
2n X ∂u
−4n∆u · ψ −
· yi dy ds
s
∂yi
E(r )
i=1
(
)
ZZ
n
2n X ∂u
4nDu · Dψ −
· yi dy ds
s
∂yi
E(r )
ZZ
i=1
=⇒ φ is constant
=⇒ φ(r ) =
lim φ(r )
r →0
1
= u(0, 0) lim n
r →0 4r
|
= u(0, 0)
ZZ
|y |2
dy ds
2
E(r ) s
{z
}
1
Recall:
MVP :
1
4r n
1
4r n
ZZ
E(x,t,r )
ZZ
E(x,t,r )
u(y , s)
|x − y |2
dy ds = u(x, t)
(t − s)2
|x − y |2
dy ds = 1
(t − s)2
Theorem 3.3.4 (Maximum principle for the heat equation on bounded and open domains): u ∈ C12 (ΩT ) ∩ C(ΩT ) solves the heat equation in ΩT . Then
Gregory T. von Nessi
Version::31102011
∂u
(y , s) · y · Dψ dy ds
∂s
ZZ
n X
∂2u
∂u
1
4
yi ψ + 4 ψ dy ds
= 0 − n+1
4r
∂s∂yi
∂s
E(r ) i=1
(
)
ZZ
n
X
1
∂2u
∂u
= − n+1
4
yi ψ + 4n ψ dy ds
4r
∂s∂yi
∂s
E(r )
i=1
(
)
ZZ
n
X
1
∂u ∂ψ
∂u
= n+1
4
yi
− 4n ψ dy ds
4r
∂yi ∂s
∂s
E(r )
i=1
ZZ
n
X
∂u
∂u
1
|y |2
n
4
= n+1
yi − − 2 − n ψ dy ds
4r
2s
4s
∂s
E(r ) i=1 ∂yi
(
)
ZZ
n
X
∂u
2n ∂u
1
−4n
= n+1
·ψ−
· yi dy ds
4r
∂s
s ∂yi
E(r )
3.3 Mean-Value Formula for the Heat Equation
79
i.)
max u = max u
ΩT
max. principle
ΓT
ii.) Ω connected, ∃(x0 , t0 ) ∈ ΩT such that u(x0 , t0 ) = maxΩT u then u is constant on ΩT
(strong max. principle).
Version::31102011
Remarks:
1. In ii.) we can only conclude for t ≤ t0 .
2. Since −u is also a solution, we have the same assertion with max replaced by min.
3. Heat conducting bar:
T
ΩT
ΓT
Ω
No heat concentration in the interior of the bar for t > 0.
/ ΓT . For r > 0 small enough,
Proof: Suppose u(x0 , t0 ) = M = maxΩT u, with x0 , t0 ∈
E(x0 , t0 , r ) ⊂ ΩT . By the MVT:
1
M = u(x0 , t0 ) =
4r n
≤ M
ZZ
u(y , r )
E(x0 ,t0 ,r )
|x − y |2
dy dS
(t − s)2
Suppose (y0 , s0 ) ∈ ΩT , s0 < t0 , line segment L connecting (y0 , s0 ) and (x0 , t0 ) in ΩT .
t
(x0 , t0 )
t0
s0
u(x0 , t0 ) = M
(y0 , s0 )
Rn
Define r0 as
n
min s ≥ s0
o
u(x, t) = M, ∀(x, t) ∈ L, s ≤ t ≤ t0 = r0
Gregory T. von Nessi
Chapter 3: The Heat Equation
80
x0
x
=⇒ segments (xi , ti ) + (xi+1 , ti+1 ) in ΩT
=⇒ u(x, t) = u(x0 , t0 ).
Theorem 3.3.5 (Uniqueness of solutions on bounded domains): If g ∈ C(ΓT ), f ∈ C(ΩT )
then there exists at most one solution to
ut − ∆u = f in ΩT
u = g on ΓT
Proof: If u1 and u2 are solutions then v = ±(u1 − u2 ) solve
vt − ∆v = 0 in ΩT
u = 0 on ΓT
Now we use the maximum principle.
Theorem 3.3.6 (Maximum principle for solutions of the Cauchy problem):
ut − ∆u = 0 in Rn × (0, T )
u = g on Rn × {t = 0}
Suppose u ∈ C12 (Rn × (0, T )) ∩ C(Rn × [0, T ]) is a solution of the heat equation with
2
u(x, t) ≤ Ae a|x| for x ∈ Rn , t ∈ [0, T ], and A, a > 0 constants. Then
sup
u = sup u = sup g
Rn ×[0,T ]
Rn
Rn
Remark: Theorem not true without the exponential bound, [John].
Proof: Suppose that 4aT < 1, 4a(T + ) < 1, > 0 small enough, a <
exists γ > 0 such that
a+γ =
Gregory T. von Nessi
1
4(T + )
1
4(T +) ,
there
Version::31102011
Either r0 = s0 =⇒ u(y0 , s0 ) = u(x0 , t0 ) or r0 > s0 . Then for r small enough, E(u0 , s0 , r ) ⊆
ΩT . Argument above =⇒ u ≡ constant on E(u0 , s0 , r ). E(y0 , s0 , t) contains a part of L =⇒
contradiction with definition of r0 .
=⇒ u ≡ M for all points (y0 , s0 ) connected to (x0 , t0 ) by line segments.
General Case: (x, t) ∈ ΩT , t < t0 , Ω connected.
Choose a polygonal path x0 , x1 , . . . , xn = x in Ω.
t0 > t1 > · · · > tn = t
3.3 Mean-Value Formula for the Heat Equation
81
Fix y ∈ Rn , µ > 0 and define
v (x, t) = u(x, t) −
|x−y |2
µ
(T + − t)
x ∈ Rn , t > 0
4(T +−t)
n e
2
Check: v is a solution of the heat equation since since Φ(x, t) is a solution.
T
Version::31102011
|x − y | = r
r
y
|x| ≤ |y | + r
r
Ω = B(y , r ) r > 0
ΩT = B(y , r ) × (0, T ]
Use the maximum principle for v on the bounded set ΩT .
max v ≤ max v
ΓT
ΩT
on the bottom:
v (x, 0) = u(x, 0) −
≤ u(x, 0)
|x−y |2
µ
x ∈ Rn
}
4(T +)
n e
(T + ) 2
|
{z
≥0
≤ sup g
Rn
|x−y |2
µ
x ∈ Rn
(T + − t)
r2
µ
2
4(T +)
≤ Ae a|x| −
x ∈ Rn
n e
(T + ) 2
r2
µ
2
4(T +)
e
≤ Ae a(|y |+r ) −
x ∈ Rn
n
(T + ) 2
"
#
1
1
a+γ =
= 4(a + γ)
4(T + )
T +
v (x, t) = u(x, t) −
n
2
e 4(T +−t)
n
2
= Ae a(|y |+r ) − µ(4(a + γ)) 2 e (a+γ)r
≤ sup g
2
if we choose r big enough.
Rn
2
x ∈ Rn
2
Leading order term e (a+γ)r dominates e ar for r large enough. If r big enough, then
max v ≤ sup g =⇒ max v ≤ sup g
ΩT
Rn
Rn ×[0,T ]
Rn
Gregory T. von Nessi
Chapter 3: The Heat Equation
82
Now µ → 0 and get
max u ≤ sup g
Rn ×[0,T ]
Rn
If 4aT > 1, choose t0 > 0 such that 4at0 < 1, apply above argument on [0, t0 ], [t0 , 2t0 ], . . .
Theorem 3.3.7 (Uniqueness of solutions for the Cauchy problem): There exists at most
one solution u ∈ C12 (Rn × (0, T ]) ∩ C(Rn × [0, T ]) of
ut − ∆u = f in Rn × (0, T ]
u = g on Rn × {t = 0}
|u(x, t)| ≤ Ae a|x|
2
x ∈ Rn , t ∈ [0, T ]
Proof: Suppose u1 and u2 are solutions. Define w = ±(u1 − u2 ) solves
wt − ∆w = 0 in Rn × (0, T ]
w = 0 on Rn × {t = 0}
2
|w (x, t)| ≤ 2Ae a|x|
=⇒ max principle applies
=⇒ w = 0
=⇒ uniqueness.
3.4
Regularity of Solutions
Theorem 3.4.1 (): Ω open, u ∈ C12 (ΩT ), is a solution of the heat equation. Then u ∈
C ∞ (ΩT ).
Proof: Regularity is a local issue.
Parabolic cylinders
n
o
C(x, t, r ) = (y , s) ∈ Rn × R |x − y | ≤ r, t − r 2 ≤ s
Fix (x0 , t0 ) ∈ ΩT
v (x, t) = u(x, t) · ξ(x, t)
ξ cut-off function.
0 ≤ ξ
ξ ≡ 1
ξ ≡ 0
on ΩT
3
on C x0 , t0 , r
4
outside C, i.e. on ΩT \ C
ξ ∈ C ∞ (ΩT )
Goal: Representation for u:
u(x, t) =
Gregory T. von Nessi
ZZ
K(x, y , t, s)u(y , s) dy ds
Version::31102011
which satisfies that
3.4 Regularity of Solutions
83
(x0 , t0 )
¡
¢
C x0 , t0 , 2r = C ′′
C(x0 , t0 , r ) ⊂ ΩT
for r small enough
Version::31102011
¢
¡
C x0 , t0 , 34 r = C ′
C (x0 , t0 , r ) = C
with a smooth kernel =⇒ u as good as K.
vt
Dv
∆v
= ut ξ + uξt
= Du · ξ + uDξ
= ∆u · ξ + sDu · Dξ + u∆ξ
vt − ∆u = uξt − 2Du · Dξ − u∆ξ = f
A solution ṽ of this equation is given by Duhamel’s principle,
Z tZ
ṽ (x, t) =
Φ(x − y , t − s)f (y , s) dy ds
0
Rn
Uniqueness for the Cauchy problem
Z tZ
v (x, t) =
0
Rn
Φ(x − y , t − s)f (y , s) dy ds
Suppose for the moment being that u is smooth.
(x, t) ∈ C 00 , ξ(x, t) = 1
=⇒ v (x, t) = u(x, t)ξ(x, t) = u(x, t)
Z tZ
u(x, t) =
Φ(x − y , t − s)[uξs − 2Du · Dξ − u∆ξ](y , s) dy ds
n
| 0 {z R}
C\C 0
=
ZZ
C\C 0
ZZ
=
+
ZZ
Φ(x − y , t − s)[uξs − u∆ξ + 2u∆ξ] dy ds
C\C 0
DΦ(x − y , t − s)2uDξ dy ds
K(x, y , t, s)u(y , s) dy ds
C\C 0
with
K(x, y , t, s) = [ξs + ∆ξ](y , s)Φ(x − y , t − s) + 2Dξ · DΦ(x − y , t − s)
Gregory T. von Nessi
Chapter 3: The Heat Equation
84
We don’t see the singularity of Φ, and K is C ∞
=⇒ representation of u,
=⇒ u is as good as K
=⇒ u is C ∞ .
3.4.1
Application of the Representation to Local Estimates
Theorem 3.4.2 (local bounds on derivatives): u ∈ C12 (ΩT ) solution of the heat equation:
Z
Ckl
k l
|u(y , s)| dy ds
max |Dx Dt u| ≤ k+2l+n+2
r
C (x,t, 2r )
C(x,t,r )
(0, 0)
(x0 , t0 )
r2
C(0, 0, 1)
1
r
C (x0 , t0 , r )
1
v solves the hear equations since
vt (x, t) = r 2 ut (x0 + r x, t0 + r 2 t)
∆v (x, t) = r 2 ∆u(x0 + r x, t0 + r 2 t)
Representation of v :
v (x, t) =
x, t ∈ C 0, 0, 12
|Dxk Dtl u| ≤
ZZ
K(x, y , t, s)v (y , s) dy ds
C(0,0,1)
ZZ
|Dxk Dtl K(x, y , t, s)| |v (y , s)| dy ds
{z
}
C(0,0,1) |
= Ckl
Ckl
ZZ
C(0,0,1)
=⇒
max
(x,t)∈C (0,0, 12 )
=⇒ r
=⇒
k+2l
|Dxk Dtl v |
max
(x,t)∈C (x0 ,t0 , 2r )
≤ Ckl
|v (y , s)| dy ds
ZZ
|Dxk Dtl u|
C(0,0,1)
Ckl
≤ n+2
r
Ckl
max
|Dxk Dtl u| ≤ k+2l+n+2
r
(x,t)∈C (x0 ,t0 , 2r )
Gregory T. von Nessi
|v (y , s)| dy ds
ZZ
C(x0 ,t0 ,r )
|u(y , s)| dy ds
C(x0 ,t0 ,r )
|u(y , s)| dy ds
ZZ
Version::31102011
Proof: Scaling argument. Calculation on a cylinder with r = 1.
v (x, t) = u(x0 + r x, t0 + r 2 t)
3.5 Energy Methods
85
Remark: This reflects again the different scaling in x and t.
Version::31102011
3.5
Energy Methods
Theorem 3.5.1 (Uniqueness of solutions): Ω open and bounded, there exists at most one
solution u ∈ C12 (ΩT ) of

 ut − ∆u = f in ΩT
u = g on Ω × {t = 0}

u = h on ∂Ω × (0, T ]
Proof: Suppose you have two solutions u1 , u2 . Let w = u1 − u2 . w solves the homogeneous heat equations.
Let
Z
e(t) =
w 2 (x, t) dx,
0≤t≤T
Ω
Z
d
e(t) =
2w (x, t) · wt (x, t) dx
dt
Ω
Z
= 2
w (x, t) · ∆w (x, t) dx
ΩZ
I.P.
= −2
Dw (x, t) · Dw (x, t) dx + 0
Ω
Z
= −2
|Dw (x, t)|2 dx ≤ 0
Ω
=⇒ e(t) is decreasing in t.
e(0) =
Z
w 2 (x, 0) dx = 0
Ω
=⇒ e(t) ≡ 0 on [0, T ]
=⇒ Dx(x, t) = 0 ∀x, t
=⇒ w (x, t) = constant
∴ w (x, t) = 0 x ∈ ∂Ω
=⇒ w (x, t) = 0 ∀x ∈ Ω, ∀t ∈ [0, T ]
=⇒ u1 = u2 in ΩT .
3.6
Backward Heat Equation
ut − ∆u = 0 in ΩT
u(x, 0) = g(x) in Ω
ut − ∆ = 0 in ΩT
u(x, T ) = g(x) in Ω
Backwards heat equation:
Gregory T. von Nessi
Chapter 3: The Heat Equation
86
T
Ω
v (x, t) = u(x, T − t)
vt (x, t) = −ut (x, T − t)
∆v (x, t) = ∆u(x, T − t)
So, ut − ∆u = 0 transforms to
vt + ∆v = 0
Backwards heat equation
Theorem 3.6.1 (): If u1 and u2 are solutions of the heat equation with u1 (x, T ) = u2 (x, T )
with the same boundary values, then the solutions are identical.
3.7
Exercises
3.1: Apply the similarity method to the linear transport equation
ut + aux = 0
to obtain the special solutions of the form u(x, t) = c(at − x)−α .
3.2: [Evans 2.5 #10] Suppose that u is smooth and that u solves ut −∆u = 0 in Rn ×(0, ∞).
a) Show that uλ (x, t) = u(λx, λ2 t) also solves the heat equation for each λ ∈ R.
b) Use a) to show that v (x, t) = x · Du(x, t) + 2tut (x, t) solves the heat equation as
well. The point of this problem is to use part a).
3.3: [Evans 2.5 #11] Assume that n = 1 and that u has the special form u(x, t) = v (x 2 /t).
a) Show that
ut = uxx
if and only if
4z v 00 (z) + (2 + z)v 0 (z) = 0
for z > 0.
b) Show that the general solution of the ODE in a) is given by
Z z
2
v (z) = c
e −s /4 s −1/2 ds + d, with c, d ∈ R.
0
Gregory T. von Nessi
Version::31102011
Reverses the smoothening, expect bad solutions in finite time even if g(t) is smooth. Transform: t 7→ T − t
3.7 Exercises
87
c) Differentiate the solution v (x 2 /t) with respect to x and select the constant c properly,
so as to obtain the fundamental solution Φ for n = 1.
3.4: [Evans 2.5 #12] Write down an explicit formula for a solution of the initial value problem
ut − ∆u + cu = f in Rn × (0, ∞),
u = g on Rn × {t = 0},
where c ∈ R.
Version::31102011
Hint: Make the change of variables u(x, t) = e at v (x, t) with a suitable constant a.
3.5: [Evans 2.5 #13] Let g : [0, ∞) → R be a smooth function with g(0) = 0. Show
that a solution of the initial-boundary value problem

 ut − uxx = 0 in R+ × (0, ∞),
u = 0 on R+ × {t = 0},

u = g on {x = 0} × [0, ∞),
is given by
x
u(x, t) = √
4π
Z
0
t
1
2
e −x /(t−s) g(s) ds.
(t − s)3/2
Hint: Let v (x, t) = u(x, t) − g(t) and extend v to {x < 0} by odd reflection.
Gregory T. von Nessi
Chapter 4: The Wave Equation
88
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
4.1
D’Alembert’s Representation of Solution in 1D
4.2
Representations of Solutions for n = 2 and n = 3
4.3
Solutions of the Non-Homogeneous Wave Equation
4.4
Energy methods
4.5
Exercises
4.1
88
93
98
101
103
D’Alembert’s Representation of Solution in 1D
utt − ∆u = 0 in ΩT
homogeneous
utt − ∆u = f in ΩT
non-homogeneous
u = g on Ω × {t = 0}
ut = h on Ω × {t = 0}
utt − uxx
∂t2 u
−
∂x2 u
= 0
= 0
∂t2 − ∂x2 u = 0
Gregory T. von Nessi
(∂t + ∂x ) (∂t − ∂x ) u = 0
| {z }
v (x,t)
Version::31102011
4 The Wave Equation
4.1 D’Alembert’s Representation of Solution in 1D
89
Solve (∂t + ∂x )v = vt + vx = 0
Transport equation for v : v (x, t) = a(x − t) an arbitrary function.
Thus ∂t u − ∂x u = v (x, t) = a(x − t).
General solution formula for
ut + bux = f (x, t) in Rn × (0, ∞)
u = g on Rn × {t = 0}
Version::31102011
Solution (Duhamel’s principle):
u(x, t) = g(x − tb) +
Solution u (b = −1, f (x, t) = a(x − t))
Z
0
t
f (x + (s − t)b, s) ds
Z
t
u(x, t) = g(x − t(−1)) +
a(x + (s − t)(−1) − s) ds
0
Z t
= g(x + t) +
a (x + t − 2s) ds
|
{z
}
0
y
a(x) = v (x, 0) = ∂t u(x, 0) − ∂x u(x, 0)
= h(x) − g 0 (x)
Z
1 x−t
a(y ) dy
u(x, t) = g(x + t) + −
2 x+t
Z
1 x+t
= g(x + t) +
a(y ) dy
2 x−t
Z
1 x+t
= g(x + t) +
(h(y ) − g 0 (y )) dy
2 x−t
1
1
=⇒ u(x, t) = (g(x + t) + g(x − t)) +
2
2
Z
x+t
h(y ) dy
x−t
D’Alembert’s formula.
Theorem 4.1.1 (D’Alembert’s solution for the wave equation in 1D): g ∈ C 2 (R), h ∈
C 1 (R)
Z
1
1 x+t
u(x, t) = (g(x + t) + g(x − t)) +
h(y ) dy
2
2 x−t
Then
i.) u ∈ C 2 (R × (0, ∞))
ii.)

 utt − uxx = 0
u(x, 0) = g(x)

ut (x, 0) = h(x)
Gregory T. von Nessi
Chapter 4: The Wave Equation
90
x −t
x +t
t>0
t=0
x0
information at x0
travels with finite
speed
c.f heat equation
infinite speed of
propagation
(x, t)
x −t
x +t
t=0
Remark: We say u(x, 0) = g(x), if lim(x,t)→(x0 ,0),t>0 u(x, t) = g(x0 ) ∀x0 ∈ R. Remarks:
• finite speed of propagation.
• domain of influence
The values of u depend only on the value of g and h on [x − t, x + t]
4.1.1
Reflection Argument
u=0
g, h
t
x =0
Want to solve
Gregory T. von Nessi

utt − uxx



u
u

t


u
=
=
=
=
0
g
h
0
in R+ × (0, ∞)
on R+ × {t = 0}
on R+ × {t = 0}
on {0} × (0, ∞)
Version::31102011
t>0
4.1 D’Alembert’s Representation of Solution in 1D
91
and suppose that g(0) = 0, h(0) = 0. Try to find a solution ũ on R of a wave equation that
is odd as a function of x, i.e.
ũ(−x, t) = −ũ(x, t)
=⇒ ũ(0, t) = −ũ(0, t)
=⇒ ũ(0, t) = 0
Extend g and h by odd reflection:
Version::31102011
g̃(x) =
h̃(x) =
−g(−x) x < 0
g(x) x ≥ 0
−h(−x) x < 0
h(x) x ≥ 0
g
ũ solution of:
D’Alembert’s formula:

 ũtt − ũxx = 0
ũ = g̃

ũt = h̃
in R × (0, ∞)
on R × {t = 0}
on R × {t = 0}
1
1
ũ(x, t) = (g̃(x − t) + g̃(x + t)) +
2
2
Z
x+t
h̃(y ) dy
x−t
Assertion: ũ is odd, the restriction to [0, ∞) is our solution u.
Proof:
ũ(−x, t) =
=
(sub. z = −y )
=
=
1
[g̃(−x
2
1
− [g̃(x
2
1
− [g̃(x
2
1
− [g̃(x
2
Z
1 −x+t
− t) + g̃(−x + t)] +
h̃(y ) dy
2 −x−t
Z
1 −x+t
+ t) + g̃(x − t)] −
h̃(−y ) dy
2 −x−t
Z
1 x−t
+ t) + g̃(x − t)] +
h̃(z) dz
2 x+t
Z
1 x+t
+ t) + g̃(x − t)] −
h̃(y ) dy
2 x−t
Gregory T. von Nessi
Chapter 4: The Wave Equation
92
=⇒ ũ(−x, t) = −ũ(x, t)
=⇒ ũ is odd.
u(x, t) =
(
− t) + g(x + t)] +
+ t) − g(t − x)] +
x+t
h̃(y ) dy
=
x−t
=
(sub. z = −y ) =
=
Z
1
2
1
2
R x+t
h(y ) dy
Rx−t
t+x
t−x h(y ) dy
0
x−t
Z 0
x−t
Z 0
h̃(y ) dy +
Z
x+t
h̃(y ) dy
0
−h(−y ) dy +
−x+t
Z x+t
h(z) dz +
x ≥t≥0
t≥x ≥0
Z
Z
x+t
h(y ) dy
x
x+t
h(y ) dy
x
h(y ) dy
t−x
4.1.2
Geometric Interpretation (h ≡ 0)
g(x)
t=0
0
t>0
0
0
Remark: D’Alembert:
1
1
u(x, t) = [g(x + t) + g(x − t)] +
2
2
Z
x+t
h(y ) dy
x−t
The solution is generally of the form u(x, t) = F (x − t) + G(x + t) with arbitrary functions
F and G.
Gregory T. von Nessi
Version::31102011
Z
1
2 [g(x
1
2 [g(x
4.2 Representations of Solutions for n = 2 and n = 3
4.2
Version::31102011
4.2.1
93
Representations of Solutions for n = 2 and n = 3
Spherical Averages and the Euler-Poisson-Darboux Equation
Z
U(x; t, r ) = −
u(y , t) dS(y )
∂B(x,r )
Z
G(x, r ) = −
g(y ) dS(y )
∂B(x,r )
Z
H(x, r ) = −
h(y ) dS(y )
∂B(x,r )
Recover u(x, t) = limr →0 U(x; t, r )
x ∈ Rn fixed from now on.
Lemma 4.2.1 (): u a solution of the wave equation, then U satisfies

 Utt − Ur r −

n−1
r Ur
in R+ × (0, ∞)
on R+ × {t = 0}
on R+ × {t = 0}
= 0
U = G
Ut = H
(4.1)
Once this is established, we transform (4.1) into Ũtt − Ũr r = 0 and we use the representation on the half line to find Ũ, U, u.
Calculation from MVP for harmonic functions:
Z
r
−
∆u(y , t) dy
Ur (x, t, r ) =
n B(x,r )
Z
r
1
=
∆u(y , t) dy
n α(n)r n B(x,r )
Z
1
1
=
∆u(y , t) dy
nα(n) r n−1 B(x,r )
∴ r n−1 Ur
=
(since u solves wave eq.) =
(in polar) =
∴ [r n−1 Ur ]r
Z
1
∆u(y , t) dy
nα(n) B(x,r )
Z
1
utt (y , t) dy
nα(n) B(x,r )
Z rZ
1
utt dSdρ
nα(n) 0 ∂B(x,ρ)
1
nα(n)
Z
n−1
= r
−
=
Z
utt dS
∂B(x,r )
utt dS
∂B(x,r )
= r n−1 Utt
Gregory T. von Nessi
Chapter 4: The Wave Equation
94
1
∴
Utt −
r n−1
⇐⇒
Utt −
r n−1
⇐⇒
Utt −
1
[r n−1 Ur ]r = 0
[(n − 1)r n−2 Ur + r n−1 Ur r ] = 0
n−1
Ur − Ur r = 0
r
Euler-Poisson-Darboux Equation.
4.2.2
Kirchoff’s Formula in 3D
Recall:

 utt = ∆u
u = g

ut = h
in Rn × (0, ∞)
on Rn × {t = 0}
on Rn × {t = 0}
n = 3 Kirchhoff’s formula
Z
U(x, t, r ) = −
u(y , t) dS(y )
∂B(x,r )
G(x, r ), H(x, r )
U satisfies Euler-Poisson-Darboux equation
Utt − Ur r −
n−1
Ur = 0
r
U = G, Ut = H
Transformation: Ũ = r U
Ũtt = r Utt
2
= r [Ur r + Ur ]
r
= r Ur r + 2Ur
= [U + r Ur ]r
Ũr r = [r U]r r

Ũtt − Ũr r



Ũ
=⇒

Ũt


Ũ(0, t)
=
=
=
=
= [U + r Ur ]r
0
G̃
H̃
0
in R+ × (0, ∞)
at t = 0
at t = 0
t>0
1D wave equation on the half line with Dirichlet condition at x = 0. From D’Alembert’s
solution in 1D:
Ũ(x, t, r )
r →0
r
Z t+r
1
1
(formula with 0 ≤ r < t) = lim
[G̃(t + r ) + G̃(t − r )] +
H̃(y ) dy
r →0 2r
2r t−r
u(x, t) =
lim
= G̃ 0 (t) + H̃(t)
Gregory T. von Nessi
Version::31102011
Representations for
4.2 Representations of Solutions for n = 2 and n = 3
95
By definition
G̃(x; t) = tG(x; t)
Z
= t−
g(y ) dS(y )
∂B(x,t)
Z
= t−
g(x + tz) dS(z)
∂B(0,1)
Z
G̃ (x, t) = −
Version::31102011
0
Z
g(x + tz) dS(z) + t−
Dg(x + tz) · z dS(z)
∂B(0,1)
∂B(0,1)
Z
∂y
= −
g(y ) + t (y ) dS(y )
∂ν
∂B(x,t)
Z
H̃(t) = t · H(t) = t−
h(y ) dy
∂B(x,t)
Z
u(x, t) = −
∂B(x,t)
∂g
g(y ) + t (y ) + t · h(y ) dS(y )
∂ν
This is Kirchhoff’s Formula
Loss of regularity: need g ∈ C 3 , h ∈ C 2 , to get u in C 2 .
Finite speed of propagation:
B(x, t)
4.2.3
Poisson’s Formula in 2D
Method of descent: use the 3D solution to get the 2D solution by extending in 2D to 3D
u solves

 utt − ∆u = 0
u = g

ut = h
in R2 × (0, ∞)
on R2 × {t = 0}
on R2 × {t = 0}
Gregory T. von Nessi
Chapter 4: The Wave Equation
96
Define
u(x1 , x2 , x3 , t) = u(x1 , x2 , t)
g(x1 , x2 , x3 ) = g(x1 , x2 )
h(x1 , x2 , x3 ) = h(x1 , x2 )
Then
in R3 × (0, ∞)
on R3 × {t = 0}
on R3 × {t = 0}
Kirchhoff’s formula: x = (x1 , x2 , x3 )
Z
∂g
u(x, t) = −
g(y ) + t (y ) + t · h(y ) dS(y )
∂ν
∂B(x,t)
S
B(x, t) ⊂ R3
+
x3
x2
x
t
x1
S−
u(x1 , x2 ) = u(x1 , x2 , 0)
Assume x = (x1 , x2 , 0)
R
Need to evaluate ∂B(x,t)
|y − x|2 = t 2 equation for the surface
⇐⇒ (y1 − x1 )2 + (y2 − x2 )2 + y32 = t 2
Parametrization:
y32 = t 2 − (y1 − x1 )2 − (y2 − x2 )2
= t 2 − |y − x|2
Parametrization: f (y1 , y2 ) =
∂f
∂yi
p
t 2 − |y − x|2
p
dS = 1 + |Df |2 dy
=
=
Gregory T. von Nessi
1
1
p
(−2)(yi − xi )
2 t 2 − |y − x|2
−(yi − xi )
p
t 2 − |y − x|2
B(x, t) ⊂ R2
Version::31102011

 u tt − ∆u = 0
u = g

ut = h
4.2 Representations of Solutions for n = 2 and n = 3
s
97
|y − x|2
dy
− |y − x|2
12
t2
dy
=
t 2 − |y − x|2
dS =
1+
t
p
dy
2
t − |y − x|2
dS =
Version::31102011
4.2.4
t2
Representation Formulas in 2D/3D
• Kirchoff’s formula
∂y
+ t · h dS(y )
g+t
∂ν
∂B(x,t)
Z
u(x, t) = −
• Poisson’s formula u
x = (x1 , x2 , 0)
u
Z
u(x) = −
∂B(x,t)
S
∂g
+t ·h
g+t
∂ν
dS(y )
B(x, t) ⊂ R3
+
x3
x2
x
t
x1
B(x, t) ⊂ R2
S−
Parameterize thepsurface integral
S + , f (y1 , y2 ) = t 2 − |x − y |2
p
1 + |Df |2 dy
t
= p
t 2 − |y − x|2
dS =
∂g
g+t
+ t · h (y ) dS(y )
∂ν
S+
Z
p
∂g
=
g+t
+ t · h (y1 , y2 , f (y1 , y2 )) · t + |Df |2 dy
∂ν
B(x,t)
Z
Dg · (y − x)
t dy
=
g(y1 , y2 ) + t
+ t · h(y1 , y2 ) p
2
t
t − |y − x|2
B(x,t)
Z
Gregory T. von Nessi
Chapter 4: The Wave Equation
98
y −x
t
y −x
= Dy g ·
t
Z
∂g
u(x) = −
g+t
+ t · h dS(y )
∂ν
∂B(x,t)
∂g
∂ν
= Dy g(y ) ·
Since g, h do not appear on
Poisson’s formula in 2D.
u(x1 , x2 , x3 , t) = u(x1 , x2 , t)
u(x1 , x2 , 0, t) = u(x1 , x2 , t)
u(x, t) = u(x, t)
Remarks
• Used method of descent extend 2D to 3D. The same idea works in n-dimensions,
extend from even to odd dimensions.
• Qualitative difference: integration is on the full ball in 2D.
• Loss of regularity:
need g ∈ C13 , h ∈ C 2 to have u ∈ C 2 .
4.3
Solutions of the Non-Homogeneous Wave Equation

in Rn × (0, ∞)
 utt − ∆u = f (x, t)
u = 0
on Rn × {t = 0}

ut = 0
on Rn × {t = 0}
The fully non-homogeneous wave equation

 utt − ∆u = f
u = g

ut = h
can be solved by u = v + w , where
Gregory T. von Nessi

 vtt − ∆v
v

vt

 vtt − ∆v
w

wt
= f
= 0
= 0
= 0
= f
= g
Version::31102011
u(x, t) = u(x, t)
Z
t dy
2
·
[g(y1 , y2 ) + Dg(y1 , y2 ) · (y − x) + t · h(y1 , y2 )] p
=
2
4πt
t 2 − |x − y |2
B(x,t)
Z
t · g(y ) + t · Dg(y ) · (y − x) + t 2 h(y )
1
p
−
=
dy x ∈ Rn , t > 0
2
2
2 B(x,t)
t − |x − y |
4.3 Solutions of the Non-Homogeneous Wave Equation
4.3.1
99
Duhamel’s Principle
u(x, t; s)
utt (·; s) − ∆u(·; s) = 0
on Rn × (x, ∞)
u(·; s) = 0
on Rn × {t = s}
(u(x, t, 0) = 0)
ut (·; s) = f (·, s) on Rn × {t = s}
(ut (x, t, 0) = f (·, t))
Then
u(x, t) =
Z
t
x ∈ Rn , t > 0
u(x, t; s) ds,
Version::31102011
0
Proof: We differentiate
Z
t
ut (x, t) = u(x, t; t) +
ut (x, t; s) ds
0
Z t
=
ut (x, t; s) ds
0
Z
t
utt (x, t) = ut (x, t; t) +
utt (x, t; s) ds
0
Z t
= f (x, t) +
utt (x, t; s) ds
0
Z
∆u(x, t) =
t
∆u(x, t; s) ds
0
Z
=
t
utt (x, t; s) ds
0
=⇒ utt (x, t) − ∆u(x, t) = f (x, t).
Examples: 1D: D’Alembert’s
Formula:
1
1
u(x, t) = [g(x + t) + g(x − t)] +
2
2
Z
x+t
h(y ) dy
x−t
To get the solution for u(x, t; s) translate the origin from 0 to s.
Z
1
1 x+t−s
u(x, t; s) = [g(x + t − s) + g(x − (t − s))] +
h(y ) dy
2
2 x−(t−s)
Recall that for u(x, t; s) we have g ≡ 0, h(y ) = f (y , s).
Duhamel’s principle:
u(x, t) =
Z
0
(sub. u = t − s) =
=
t
1
2
Z
x+(t−s)
f (y , s) dy ds
x−(t−s)
Z
1 u
f (y , t − u) dy (−du)
2 x−u
Z Z
1 t x+s
f (y , t − s) dy ds
2 0 x−s
Gregory T. von Nessi
Chapter 4: The Wave Equation
100
u(x, t)
t
s =0
x −t
x +t
x
Z
u(x, t) = −
∂B(x,t)
∂g
g+t
+t ·h
∂ν
dS(y )
Poisson’s formula in 2D
Z
t · g(y ) + t · Dg · (y − x) + t 2 h(y )
1
p
dy
u(x, t) = −
2 B(x,t)
t 2 − |y − x|2
Non-homogeneous problems

 utt − ∆u = f
u = 0

ut = 0
u(x, t) =
Z
t
u(x, t; s) ds
0
u(·; s) solution on Rn × (s, ∞)
4.3.2
u(·, s) = 0
ut (·, s) = f (·, s)
Solution in 3D
Initial data g(y ) = s, h(y ) = f (y , s). Translate the origin from t = 0 to t = s
Z
u(x, t; s) = (t − s)−
u(x, t) =
Gregory T. von Nessi
Z
0
f (y , s) dS(y )
∂B(x,t−s)
t
Z
(t − s)−
∂B(x,t−s)
f (y , s) dS(y )ds
Version::31102011
Which values of f influence u(x, t)
Kirchhoff’s formula (3D)
4.4 Energy methods
101
(r = t − s)
=
Z
0
t
=
Z
t
0
=
Z
0
=
Z
t
Z
r−
∂B(x,r )
f (y , t − r ) dS(y )(−dr )
∂B(x,t)
f (y , t − r ) dS(y )dr
Z
r−
r
4πr 2
Version::31102011
B(x,t)
= u(x, t)
4.4
Z
∂B(x,r )
f (y , t − r ) dS(y )dr
1
f (y , t − |x − y |) dy
4π|x − y |
for x ∈ R3 , t > 0
Energy methods
Ω ⊂ Rn bounded and open, ΩT = Ω × (0, T )
Theorem 4.4.1 (uniqueness): There is

utt − ∆u =



u =
u

t =


u =
at most one smooth solution of
0
g
h
k
Proof: Suppose u1 , and u2 are solutions.
w = u1 − u2 satisfies

wtt − ∆w = 0



w = 0
w

t = 0


w = 0
in ΩT
in Ω × {t = 0}
in Ω × {t = 0}
on ∂Ω × (0, t)
in ΩT
in Ω × {t = 0}
in Ω × {t = 0}
on ∂Ω × (0, t)
Multiply by wt :
wtt wt − (∆w )wt = 0
inΩT
Z
(wtt (x, t)wt (x, t) + ∆w (x, t) · wt (x, t)) dx = 0
Ω
Z
=
(wtt (x, t)wt (x, t) + Dw (x, t) · Dwt (x, t)) dx
Ω
Z
+
(Dw · ν)wt dS
∂Ω
Z 1
1
2
2
∂t |wt | + ∂t |Dw |
dx
=
2
Ω 2
Integrate from t = 0, to t = T
Z T Z 1
1
2
2
0 =
∂t
|wt | + |Dw |
dxdt
2
0
Ω 2
Z
1
=
|wt (x, T )|2 + |Dw (x, T )|2 dx
2 Ω
Z
1
−
|wt (x, 0)|2 + |Dw (x, 0)|2 dx = 0
2 Ω
Gregory T. von Nessi
Chapter 4: The Wave Equation
102
=⇒ wt , Dw = 0 in ΩT
=⇒ w = constant =⇒ w = 0
=⇒ u1 = u2 in ΩT .
Alternative: Define
1
e(t) =
2
Z
Ω
(|wt (x, t)| + |Dw (x, t)|) dx
and show that ė(t) = 0, t ≥ 0.
Finite Speed of Propagation
u(x, t0 )
C = {(x, t) : 0 ≤ t ≤ t0 , |x − x0 | ≤ t0 − t}
(x0 , t)
t
x0
t0
B(x0 , t0 ) ⊂ Rn
If u is a solution of the wave equation with u ≡ 0, ut = 0 in B(x0 , t0 ), then u ≡ 0 in C and
in particular u(x0 , t0 ) = 0.
Z
Define:
e(t) =
ut2 (x, t) + |Du(x, t)|2 dx
B(x0 ,t0 −t)
Compute:
ė(t)
=
Z
B(x0 ,t0 −t)
I.P.
=
−
Z
Z
∂B(x0 ,t0 −t)
B(x0 ,t0 −t)
+
(2ut utt + 2Du · Dut ) dx
Z
(ut2 + |Du|2 ) dS
(2ut utt − 2∆u · ut ) dx
Z
2(Du · ν)ut dS −
∂B(x0 ,t0 −t)
=
Gregory T. von Nessi
0+
Z
∂B(x0 ,t0 −t)
∂B(x0 ,t0 −t)
(ut2 + |Du|2 ) dS
(2(Du · ν)ut − ut2 − |Du|2 ) dS
Version::31102011
4.4.1
4.5 Exercises
103
C-S
|Du · ν| ≤ |Du| · |ν| = |Du|
|{z}
=1
|2(Du · ν)ut |
≤
Young’s
Version::31102011
≤
2|Du| · |ut |
|Du|2 + |ut |2
Young’s inequality: 2ab ≤ a2 + b2 , a, b ∈ R. Comes from the fact 0 ≤ a2 − 2ab + b2 =
(a − b)2 .
Z
∴ ė(t) ≤
(|Du|2 + |ut |2 − ut2 − |Du|2 ) dx ≤ 0
B(x0 ,t0 −t)
=⇒ e(t) is decreasing in time.
But,
e(0) =
Z
B(x0 ,t0 )
=⇒
=⇒
=⇒
=⇒
(|ut |2 + |Du|2 ) dx = 0
e(t) = 0, t ≥ 0
ut (x, t), Bu(x, t) = 0, (x, t) ∈ C
u ≡ 0 in C
u(x0 , t0 ) = 0.
4.5
Exercises
4.1: [Evans 2.6 #16] Suppose that E = (E 1 , E 2 , E 3 ) and B = (B 1 , B 2 , B 3 ) are a solution
of Maxwell’s equations
Et = curl B,
Bt = − curl E,
div E = 0,
div B = 0.
Show that
utt − ∆u = 0
where u = E i or u = B i .
4.2: [Evans 2.6 #17] (Equipartition of energy) Suppose that u ∈ C 2 (R × [0, ∞)) is a
solution of the initial value problem for the wave equation in one dimension,

 utt − uxx = 0 in R × (0, ∞),
u = g on R × {t = 0},

ut = h on R × {t = 0}.
Suppose that g and h have compact support. The kinetic energy is given by
Z ∞
k(t) =
ut2 (x, t) dx
−∞
and the potential energy by
p(t) =
Z
∞
−∞
ux2 (x, t) dx.
Prove that
Gregory T. von Nessi
Chapter 4: The Wave Equation
104
a) the total energy, e(t) = k(t) + p(t), is constant in t,
b) k(t) = p(t) for all large enough times t.
4.3: [Evans 2.6 #18] Let u be a solution

 utt − uxx = 0
u=g

ut = h
of
in R3 × (0, ∞),
on R3 × {t = 0},
on R3 × {t = 0}.
Hint: Find an energy E for which dE/dt ≤ CE and use Gronwall’s inequality which asserts
the following. If y (t) ≥ 0 satisfies dy /dt ≤ φy , where φ ≥ 0 and φ ∈ L1 (0, T ), then
Z t
y (t) ≤ y (0)exp
φ(s) dx .
0
C(R3 )
4.5: [Qualifying exam 08/96] Let h ∈
u(x, t) be the solution of the initial-value

 utt − ∆u = 0
u(x, 0) = 0

ut (x, 0) = 0
with h ≥ 0 and h = 0 for |x| ≥ a > 0. Let
problem
on R3 × (0, ∞),
on R3 × {t = 0},
on R3 × {t = 0}.
a) Show that u(x, t) = 0 in the set {(x, t) : |x| ≤ t − a, t ≥ 0}.
b) Suppose that there exists a point x0 such that u(x0 , t) = 0 for all t ≥ 0. Show that
u(x, t) ≡ 0.
4.6: [Qualifying exam 01/00] Suppose that q : R3 → R3 is continuous with q(x) = 0 for
|x| > a > 0. Let u(x, t) be the solution of

 utt − ∆u = e iωt q(x) in R3 × (0, ∞),
u(x, 0) = 0 on R3 × {t = 0},

ut (x, 0) = 0 on R3 × {t = 0}.
a) Show that e −iωt u(x, t) → v (x) as t → ∞ uniformly on compact sets, where
∆v + ω 2 v = q,
Hint: You may take the result of Exercise 2.1 for granted.
b) Show that v satisfies the “outgoing” radiation condition
∂
C
+ i ω v (x) ≤ 2
∂r
|x|
as r = |x| → ∞.
Gregory T. von Nessi
Version::31102011
where g and h are smooth and have compact support. Show that there exists a constant C
such that
C
|u(x, t)| ≤ , x ∈ R3 , t > 0
t
4.4: [Qualifying exam 01/01] Let Ω be a bounded domain in Rn with smooth boundary and
suppose that g : Ω → R is smooth and positive. Given f1 , f2 : Ω → R, show that there can
be at most one C 2 (Ω) solution u : Ω × (0, ∞) → R to the initial value problem

utt = g(x)∆u in Ω × (0, ∞),



u = f1 on Ω × {t = 0},
ut = f2 on Ω × {t = 0},



u = 0 on ∂Ω × (0, ∞).
Version::31102011
Chapter 5: Fully Non-Linear First Order PDEs
106
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
5.1
Introduction
5.2
Method of Characteristics
5.3
Exercises
5.1
106
107
121
Introduction
Solve F (Du(x), u(x), x) = 0 in Ω, Ω bounded and open in Rn .
• Method of characteristics
• Often time variable (conservation laws)
• existence of classical solution only for short times
• global existence for conservation laws via weak solutions (broaden the class of solutions,
solutions continuous and discontinuous).
Notation: Frequently, you set p = Du, z = u.
1st order PDE: F (p, z , x) = 0
F : Rn × R × Ω → R
always f is smooth.
Example: Transport equation
Gregory T. von Nessi
ut (y , t) + a(y , t) · Du(y , t) = 0
(5.1)
Version::31102011
5 Fully Non-Linear First Order PDEs
5.2 Method of Characteristics
107
Set x = (y , t), p = (Du, ut ). Then (5.1) is given by
Version::31102011
F (p, z , x) = 0
F (p, z , x) =
=
F (p, z , x) =
5.2
with
a(y , t)
Du
·
1
ut
a(x)
Du(x)
·
1
ut (x)
a(x)
·ρ
1
Method of Characteristics
Idea: Reduce the PDE to a system of ODEs for x(s), z(s) = u(x(s)), p(s) = Du(x(s))
Ω
x(s)
Suppose u is a smooth solution of F (Du, u, x) = 0.
∂
ux (x(s))
∂s i
n
X
=
uxi xj (x(s)) · ẋj (s)
ṗi (s) =
j=1
1st order PDE, u ∈ C 1 natural, but we get uxi xj .
Eliminate them!
F (Du, u, x) = 0
∂
F (Du, u, x) = 0
∂xi
n
X
∂F
∂F
∂F
=
ux x +
ux +
=0
∂pj i j
∂z i ∂xi
j=1
To eliminate the second derivatives, we impose that
ẋj =
∂F
(p(s), z(s), x(s))
∂pj
Gregory T. von Nessi
Chapter 5: Fully Non-Linear First Order PDEs
108
Then
ṗi (s) =
=
n
X
j=1
n
X
j=1
= −
uxi xj (x(s)) · ẋj (s)
uxi xj (x(s)) ·
∂F
(p(s), z(s), x(s))
∂pj
∂F
∂F
uxi (x(s)) −
∂z
∂xi
Thus,
∂F
∂F
ux (x(s)) −
∂z i
∂xi
Now find
∂
u(x(s))
∂s
n
X
∂u
=
(x(s)) · ẋj (s)
∂xj
ż(s) =
=
j=1
n
X
j=1
pj · ẋj (s)
Thus,
ż(s) =
n
X
j=1
pj · ẋj (s)
Result: If u is a solution of F (Du, u, x) = 0, and if
∂F
(p(s), z(s), x(s)),
∂pj
∂F
∂F
ṗ(s) = −
p−
∂z
∂x
∂F
ż(s) =
· p,
∂p
ẋj
=
then
i.e., we get a system of 2n + 1 ODEs

 ẋ = Fp
ṗ = −Fz p − Fx

ż = Fp · p
Characteristics equations for the 1st order PDE. The ODE ẋ = Fp is called the projected
characteristics equation. Issues:
• How do the boundary conditions relate to initial conditions for the ODEs?
• We can also have the following situation
Typically we can solve only locally, but not globally.
Gregory T. von Nessi
Version::31102011
ṗi (s) = −
5.2 Method of Characteristics
109
Where to prescribe the B.C.?
Version::31102011
Ω
Compatibility condition.
Ω
t
x
• If we construct u(x) from the values of u along the characteristic curves, do we get a
solution of the PDE?
Examples:
1. Transport equation
ut (y , t) + a(y , t) · Du(y , t) = 0
a(x)
F (p, z , x) =
·p = 0
1
Gregory T. von Nessi
Chapter 5: Fully Non-Linear First Order PDEs
110
Characteristic equations:
= Fp =
ẋ
a(x)
1
˙
ẋ(s) = (ẏ (s), t(s))
= (a(y , t), 1)
˙
=⇒ t(s)
= 1 =⇒ t = s
ẏ (s) = a(y (s), s)
a(s)
ż(s) = Fp · p =
·p =0
1
(by the orig. PDE)
t
(y (s), s)
Rn
Ω
2. Linear PDE: F (p, z , x) = b · p + cz
b(x) · Du(x) + c(x)u(x) = 0
i.e.
Characteristic equations:
ẋ(s) = Fp = b(x(s))
ż(s) = −c(x(s))u(x(s)) = −c(x(s))z(s)
ẋ(s) = b(x(s))
ż(s) = −c(x(s))z(s)
can solve without using equation for ṗ.
Examples: x1 ux2 − x2 ux1 = u = 0 linear
F (p, z , x) = 0
F (p1 , p2 , z, x1 , x2 ) = x1 p2 − x2 p1 − z = 0
= x⊥ · p − z,
where
x
Gregory T. von Nessi
⊥
=
−x2
x1
Version::31102011
=⇒ z constant along characteristic curve
=⇒ u(y (s), s) = constant
=⇒ u determined from initial data.
5.2 Method of Characteristics
111
• ẋ = Fp , ẋ = x ⊥
ẋ1 (s) = −x2 (s), x1 (0) = x0
ẋ2 (s) = x1 (s), x2 (0) = 0
I.C.
Version::31102011
Solve subject to the initial data u = g on Γ = {(x, 0) | x1 ≥ 0}
(x0 , 0)
Γ
• ż = Fp · p = x ⊥ · p = z by equation
z(0) = g(x0 )
•
ṗ = −Fx − Fz p =
p2
−p1
− (−1)p + suitable ICs, (see below)
˙ x1
x1
0 −1
=
x2
x2
1 0
|
{z
}
x1
x2
x1
x2
(0) =
(s) = e
A
x0
0
As
x1 (s) = x0 cos s
x2 (s) = x0 sin s
x0
0
=
cos s − sin s
sin s cos s
x0
0
circular characteristics
z(s) = Ce s = g(x0 )e s
Reconstruct
u(x1 , x2 ) from characteristics. Characteristic through (x1 , x2 ) corresponds to
q
x0 = x12 + x22 , (x1 , x2 ) = x0 e iθ . u(x1 , x2 ) = g(x0 )e iθ
q
x2
2
2
g
x1 + x2 exp arctan
= u(x1 , x2 )
x1
Gregory T. von Nessi
Chapter 5: Fully Non-Linear First Order PDEs
112
(x, y )
x(s)
(x0 , 0)
Γ
Strategy to Prove that the Method of Characteristics Gives a
Solution of F (Du, u, x) = 0
i.) Reduce initial data on Γ to initial data on a flat part of the boundary, Rn−1
ii.) Check that the structure of the PDE does not change
iii.) Set up characteristic equations with the correct initial data
iv.) Solve the characteristic equations
v.) Reconstruct u from the solution
vi.) Check that u is a solution
Here we go!
Gregory T. von Nessi
Version::31102011
5.2.1
5.2 Method of Characteristics
113
i.) Flattening the boundary locally, in a neighborhood of x0 ∈ Γ
Suppose that Γ is of class C11 i.e. Γ is locally the graph of a C 1 function of n − 1
variables
Φ(x)
Version::31102011
yn
Γ = γ(x) close to x0
Γ
xn
Ũ
x1 ∈ Rn−1
ũ
U
x0
0
Rn−1
Ψ(y )
Define transformations (y = (y 0 , yn ))
y
= Φ(x) :
x
= Ψ(y ) :
y0 = x0
yn = xn − γ(x 0 )
x0 = y0
xn = yn + γ(x 0 )
If u : U → R, then define
ũ : Ũ → R
ũ(y ) = u(x) = u(Ψ(y ))
or vice versa u(x) = ũ(y ) = ũ(Φ(x))
If u satisfies F (Du, u, x) = 0 in U, what is the equation satisfied by ũ?
ii.)
∂u
(x) =
∂xi
=
∂
ũ(Φ(x))
∂xi
n
X
∂ ũ ∂Φj
j=1


Φ : Rn → Rn ,

∂yi ∂xi
Φ11 · · ·
 .. . .
DΦ =  .
.
n
Φ1 · · ·
=

Φ1n
.. 
. 
Φnn
(DΦ)T Dũ(Φ(x))
i
Gregory T. von Nessi
Chapter 5: Fully Non-Linear First Order PDEs
114
In matrix notation
Du(x) = DΦT (x) · Dũ(Φ(x))
=⇒ Du(x) = DΦT (Ψ(y ) · Dũ(y )
F (DΦ (Ψ(y )) · Dũ(y ), ũ(y ), Ψ(y )) = F̃ (Dũ(y ), ũ(y ), y ) = 0,
where F̃ (p, z , y ) is F (DΦT (Ψ(y ))p, z , Φ(y )) = F̃ .
In particular, F̃ is a smooth function if the boundary of U is smooth.
From now on assume that x0 = 0 and that Γ ⊆ Rn−1
x0
Goal: construct u
Solution in a neighborhood of x0 = 0.
Gregory T. von Nessi
Γ
Version::31102011
F (Du(x), u(x), x) = 0
T
5.2 Method of Characteristics
115
iii.) Find initial conditions for the ODEs.
Version::31102011
0
∀y ∈ Rn close
to 0 we want x(s)
characteristic through y
=⇒ x(0) = y ∈ Rn−1
=⇒ z(0) = u(x(0)) = g(x(0)) = g(y )
∂g
=⇒ pi (0) = ∂y
i = 1, . . . , n − 1
i
Tangential derivatives, compatibility condition. One degree of freedom, pn (0). We
must choose pn (0) to satisfy
F
∂g
∂g
(y ), . . . ,
(y ), pn (0), g(y ), y
∂yi
∂xi
= 0.
Definition 5.2.1: A triple (p0 , z0 , x0 ) of initial conditions is admissible at x0 if z0 =
∂g
(x0 ), i = 1, . . . , n − 1, F (p0 , z0 , x0 ) = 0.
g(x0 ), p0,i = ∂y
i
iii.) Initial conditions

x(0) = y (∈ Rn−1 )


 z(0) = g(y )
∂g
p (0) = ∂x
(y ) i ∈ {1, . . . , n − 1}


i
 i
pn (0)
chosen such that F (p1 (0), . . . , pn (0), g(y ), y ) = 0
(5.2)
(x0 , z0 , p0 ) is called admissible if

 z0 = g(x0 )
∂g
p0,i = ∂x
(x
)
i
=
1,
.
.
.
,
n
−
1
n equations for
0
i

the n ICs p10 , . . . , pn0
F (p0 , z0 , x0 ) = 0
Question: We want to solve close to x0 , can we find admissible initial data, y , g(y ),
q(y ) for y close to x. F (q(y ), g(y ), y ) = 0?
Lemma 5.2.2 (): (p0 , z0 , x0 ) admissible, and suppose that Fp (p0 , z0 , x0 ) 6= 0. Then
there exists a unique solution of (5.2) for y sufficiently close to x. The triple (p0 , z0 , x0 )
is said to be non-characteristic.
Gregory T. von Nessi
Chapter 5: Fully Non-Linear First Order PDEs
116
Remark: If Γ is a smooth boundary then (p0 , z0 , x0 ) is non-characteristic if Fp (p0 , z0 , x0 )·
ν(x0 ) 6= 0, where ν(x0 ) is the normal to Γ at x0 .
Proof: Use the implicit function theorem, G : Rn ×Rn → Rn G(p, y ) = (G 1 (p, y ), . . . , G n (p, y ))
∂g
(y ). G n (p, y ) = F (p, g(y ), y ).
with G i (p, y ) = pi − ∂x
i
0. Want to solve locally for p = q(y ),
···
..
.
..
.
0
···
0
..
.
0
1
···
det(Gp (p, y )) = Fpn (p, g(y ), y )
=⇒ det(Gp (p0 , x0 ) 6= 0
=⇒ there exists a unique solution p = q(y ) close to x0
0
..
.
..
.
0
Fpn








x0
(q(y ), g(y ), y ) admissible initial data for the ODEs
iv.) Solution of the system of ODEs
ẋ(s) = Fp (p(s), z(s), x(s))
ż(s) = Fp (p(s), z(s), x(s)) · p(s)
ṗ(s) = −Fx (p(s), z(s), x(s)) − Fz (p(s), z(s), x(s))p(s)
Initial data
Let
Gregory T. von Nessi

 x(0) = y
z(0) = g(y )

p(0) = q(y )
y ∈ Rn−1 parameter


x(s)
X̃(y , s) =  z(s) 
p(s)
Version::31102011
The triple (p0 , z0 , x0 ) admissible, G(p0 , x0 ) =
we can do this if det(Gp (p0 , y0 )) 6= 0.

1
0 ···

.
.. ...
 0

.
..
..
Gp (p, y ) = 
 ..
.
.

 0 ··· ···
Fp1 · · · · · ·
5.2 Method of Characteristics
117
System:
(
˙ , s) = G(X̃(y , s))
X̃(y
X̃(y , 0) = (q(y ), g(y ), y )
(5.3)
Version::31102011
Lemma 5.2.3 (Existence of solution for (5.3)): If G ∈ C k , X̃(y , 0) is C k (as a
function of y ) then there exists a neighborhood N of x0 ∈ Rn−1 and an s0 > 0 such
that (5.3) has a unique solution X : N × (−s0 , s0 ) → Rn × R × Ω and X is C k .
N
(
x0
)
v.) Reconstruction of u from the solutions of the characteristic equations
x
y
x0
given x: find characteristic through x, i.e. find y = Y (x) and s = S(x) such that the
characteristic through y passes through x at time s.
Define: u(x) = z(Y (x), S(x))
Check: Invert x 7→ Y (x), S(x)
Lemma 5.2.4 (): Suppose that (p0 , z0 , x0 ) are non-characteristic initial data. Then
there exists a neighborhood V of x0 such that ∀x ∈ V there exists a unique Y and a
unique S with
x = X(Y (x), S(x))
Gregory T. von Nessi
Chapter 5: Fully Non-Linear First Order PDEs
118
Proof: We need to invert X. Inverse functions. We

1 0 ··· ···

 0 ... ... ...

 .
DX(x0 ) =  .. . . . . . . . . .


 0 ··· ··· 0
0 ··· ··· ···
can invert X(y , s) if DX(x0 ) 6= 0.

0 Fp1
..
.. 
.
. 

.. 
0 . 

.. 
1 . 
0 Fpn
To find ∂s X recall the characteristic equation, ẋ = Fp =⇒ det(DX) = Fpn 6= 0 since
(p0 , z0 , x0 ) is non-characteristic.
u(x) = Z(Y (x), S(x))
p(x) = P (Y (x), S(x))
Crucial step Du(x) = p(x)
Step 1: f (y , s) := F (P (y , s), Z(y , s), X(y , s)) = 0
Proof:
f (y , 0) = F (P (y , 0), Z(y , 0), X(y , 0))
= F (q(y ), g(y ), y )
= 0
since (q(y ), g(y ), y ) is admissible.
f˙(y , s) = Fp Ṗ + Fz Ż + Fx Ẋ
= Fp (−Fx − Fz p) + Fz Fp p + Fx Fp
= 0
=⇒ F (p(y , s), u(x), x) = 0.
Recall: Use capital letters to denote the dependence on y ,
P (y , s), Z(y , s), X(y , s)
solutions with initial data (q(y ), g(y ), y ).
Invert the characteristics:
x
Y (x), S(x) such that X(Y (x), S(x)) = x
Convention today: all gradients are rows.
Define:
u(x) := Z(Y (x), S(x))
p(x) := P (Y (x), S(x))
1. f (y , s) = F (P (y , s), Z(y , s), X(y , s)) ≡ 0 as a function of s.
Gregory T. von Nessi
Version::31102011
vi.) prove that u is a solution of the PDE
5.2 Method of Characteristics
119
2. We assert the following:
i.) Zs = P · XsT = hP, Xs i
ii.) Zy = P · Xy = P · Dy X
n−1
X ∂X j
∂z
=
Pj
∂yi
∂yi
j=1
i = 1, . . . , n − 1
Version::31102011
Proof:
i.) Zs = Fp · p = Xs · p by the characteristic equations.
ii.) Before we prove the second identity we find
Dy Ds Z = Dy Zs
= P · Dy Ds X + Ds X · Dy P
Define
r (s) := Dy Z − P · Dy X
and show that r (s) = 0.
Idea: find ODE for r and show that r = 0 is the unique solution.


1
∂g


..
ri (0) =
for i = 1, . . . , n − 1
(y ) − q(y ) 

.
∂xi
1
=⇒ ri (0) =
∂g
(y ) − qi (y ) = 0
∂xi
Since the triple (q(y ), g(y ), y ) is admissible.
r˙(s) = Ds Dy Z − Ds P · Dy X − P · Ds Dy X
= P · Dy Ds X + Ds X · Dy P
−Ds P · Dy X − P · Ds Dy X
= Ds X · Dy P − Ds P · Dy X
char. eq. = Fp · Dy P − (−Fx − Fz P )Dy X − Fz · Dy Z
−(Fx − Fz P )Dy X
(Dy (F (P, Z, X)) = 0)
= Fz P · Dy X − Fz · Dy Z
= −Fz (Dy Z − P · Dy X)
= −Fz · r (s)
∴
r˙(s) = −Fz · r (s)
r (0) = 0
Gregory T. von Nessi
Chapter 5: Fully Non-Linear First Order PDEs
120
Linear ODE for r with r (0) = 0 =⇒ r = 0 for s > 0.
3. Prove that Du(x) = p(s)
=⇒ F (P, Z, X) = F (Du(x), u(x), x) = 0
=⇒ u is a solution of the PDE
u(x) = Z(Y (x), S(x))
(by 2.) = P · Dy X · Dx Y + P · Ds X · Dx S
= P (Dy X · Dx Y + Ds X · Dx S)
= P · Dx X(Y, S)
= P · Id = p
=⇒ Dx u = P .
Examples:
1. Linear, homogeneous 1st order PDE:
b(x) · Du(x) + c(x)u(x) = 0
Local existence of solutions:
Initial data non-characteristic:
Fp · ν(x0 ) 6= 0
x0
ν(x0 )
F (p, z , x) = b(x) · p + c(x)z
Fp = b
Can solve if Fp (p0 , z0 , x0 ) · ν(x0 ) = b(x0 ) · ν(x0 ) 6= 0
Gregory T. von Nessi
Version::31102011
=⇒ Dx u = Dy Z · Dx Y + Ds Z · Dx S
5.3 Exercises
121
s
Version::31102011
Local existence of b(x0 ) is not tangential
Characteristic equation:
ẋ = Fp = b
(x(s) would be tangential to Γ if the initial conditions fail to be non-characteristic).
2. Quasilinear equation
b(x, u) · Du(x) + c(x, u) = 0
Initial data non-characteristic Fp = b,
Fp (p0 , z0 , x0 ) · ν(x0 ) = b(x0 , z0 ) · ν(x0 ) 6= 0.
Characteristics:
ẋ
= Fp ,
ż
= Fp · p
ẋ(s) = b(x(s), z(s))
= b(x(s), z(s)) · p(s)
= −c(x(s), z(s))
Closed system
ẋ(s) = b(x(s), z(s))
x(0) = x0
ż(s) = −c(x(s), z(s))
z(0) = g(x0 )
for x and z, do not need the equation for p.
5.3
Exercises
5.1: [Evans 3.5 #2] Write down the characteristic equations for the linear equation
ut + b · Du = f
in Rn × (0, ∞)
where b ∈ Rn and f = f (x, t). Then solve the characteristic equations with initial conditions
corresponding to the initial conditions
u=g
on Rn × {t = 0}.
5.2: [Evans 3.5 #3] Solve the following first order equations using the method of characteristics. Derive the full system of the ordinary differential equations including the correct
initial conditions before you solve the system!
Gregory T. von Nessi
Chapter 5: Fully Non-Linear First Order PDEs
122
a) x1 ux1 + x2 ux2 = 2u with u(x1 , 1) = g(x1 ).
b) uux1 + ux2 = 1 with u(x1 , x2 ) = 12 x1 .
c) x1 ux1 + 2x2 ux2 + ux3 = 3u with u(x1 , x2 , 0) = g(x1 , x2 ).
5.3: [Qualifying exam 08/00] Suppose that u : R × [0, T ] → R is a smooth solution of
ut + uux = 0 that is periodic in x with period L, i.e., u(x + L, t) = u(x, t). Show that
max u(x, 0) − min u(x, 0) ≤
x∈R
x∈R
L
.
T
5.4: Solve the nonlinear PDE of the first order
3/2
u(x1 , 0) = g(x1 ) = 2x1 .
5.5: [Qualifying exam 01/90] Find a classical solution of the Cauchy problem
(ux )2 + ut = 0,
u(x, 0) = x 2 .
What is the domain of existence of the solution?
Gregory T. von Nessi
Version::31102011
ux31 − ux2 = 0,
Version::31102011
Chapter 6: Conservation Laws
124
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
6.1
The Rankine-Hugoniot Condition
6.2
Shocks, Rarefaction Waves and Entropy Conditions
6.3
Viscous Approximation of Shocks
6.4
Geometric Solution of the Riemann Problem for General Fluxes
6.5
A Uniqueness Theorem by Lax
6.6
Entropy Function for Conservation Laws
6.7
Conservation Laws in nD and Kružkov’s Stability Result
6.8
Exercises
6.1
124
129
135
139
141
143
148
The Rankine-Hugoniot Condition
Conservation law:
initial data:
ut + F (u)x = 0
u(x, 0) = u0 (x) = g(x)
Example: Burger’s equation, F (u) = 12 u 2
2
u
ut + F (u)x = ut +
= ut + u · ux = 0
2 x
Solutions via Method of Characteristics.
ut + F 0 (u)ux = 0
Standard form: G(p, z , y ) = 0, where p = (ux , ut ) and y = (x, t).
G(p, z , y ) = p2 + F 0 (z)p1 = 0
Gregory T. von Nessi
138
Version::31102011
6 Conservation Laws
6.1 The Rankine-Hugoniot Condition
125
Characteristic equations: ẏ = Gp
ż = Gp · p
ẏ
= (F 0 (z), 1)
ż
= (F 0 (z), 1) · p = p1 F 0 (z) + p2 = 0
(equation)
Version::31102011
Rewritten, we have
y˙1 (s) = F 0 (z(s))
y1 (0) = x0
y˙2 (s) = 1
y2 (0) = 0
z(0) = g(x0 )
=⇒ y2 (s) = s (=⇒ identification t = s)
=⇒ ż(s) = 0 =⇒ z(s) = constant = g(x0 )
y˙1 (s) = F 0 (z(s)) = F 0 (g(x0 ))
y1 (s) = F 0 (g(x0 ))s + x0
= F 0 (g(x0 ))t + x0
(s = t)
=⇒ characteristic curves are straight lines.
Example: Burgers’ equation with g(x) = u(x, 0)
0
g(x) =
1


1 x ≤0
1−x 0≤x <1

0 x ≥0
Speed of the characteristic equations
F 0 (g(x0 )) = g(x0 )
Characteristics:
Solution (local existence):
• Discontinuous “solutions” in finite time.
• Break down of classical solution.
• Same effect if you take smooth (C 1 , or more) initial data.
Gregory T. von Nessi
Chapter 6: Conservation Laws
126
t
1
0
1
x
0
1
2
1
2
1
t=1
0
1
• Equation ut + F (u)x = 0 involving derivatives is not defined.
• Weak solutions (or integral solutions).
• Go back from the PDE to an integrated version.
• v test function.
v smooth, with compact support in R × [0, ∞).
• Multiply equation and integrate by parts.
Gregory T. von Nessi
Version::31102011
t=
6.1 The Rankine-Hugoniot Condition
0 =
Z
0
= −
−
Version::31102011
= −
∞Z
Z
0
Z
0
Z
0
127
v (ut + F (u)x ) dxdt
R
∞Z
vt · u dxdt +
R
∞Z
Z
R
t=∞
v · u dx
Z
∞
t=0
R
vx · F (u) dxdt +
R
(vt · u + vx · F (u)) dxdt −
∞Z
0
x=∞
F (u) · v dt
Z
R
x=−∞
v (x, 0) · u(x, 0) dx
So, we have
Z
0
∞Z ∞
−∞
vt · u + vx · F (u) dxdt +
Z
∞
−∞
v (x, 0) · g(x) dx = 0
(6.1)
Definition 6.1.1: u is a weak solution of the conservation law ut + F (u)x = 0 if (6.1) holds
for all test functions v .
Question: What information is hidden in the weak formulation?
If u ∈ C 1 is a weak solution, reverse the integration by parts and get
Z
0
∞Z
v (ut + F (u)x ) dxdt = 0
R
for all smooth test functions.
=⇒ ut + F (u)x = 0
Gregory T. von Nessi
Chapter 6: Conservation Laws
128
6.1.1
Information about Discontinuous Solutions
t
C
Vl
Vr
u is a solution in V = Vl ∪ Vr , u smooth in Vl and Vr and u can have a discontinuity across
C (e.g. u ≡ 1 on Vl and u ≡ 0 on Vr ).
Is this possible?
v test function. Support of v contained in Vl (Vr ) =⇒ ut + F (u)x = 0 in Vl (Vr ).
Support for v contains a part of C, and that v (x, 0) = 0.
ZZ
(6.1)
(vt · u + vx · F (u)) dxdt = 0
V
ZZ
ZZ
=
(vt · u + vx · F (u)) dxdt +
(vt · u + vx · F (u)) dxdt = 0
Vl
Vr
Now, let us look at
ZZ
(vt · u + vx · F (u)) dxdt
I.P.
=
Vl
=
−
Z
ZZ
C
Vl
v (ut + F (u)x ) dxdt −
{z
}
|
0
Z C
v · F (u)
v ·u
(v · F (u) · ν1 + v · u · ν2 ) dS
Note that u here is the trace of u from Vl on the curve C, call this ul :
Z
v (·F (ul ) · ν1 + ul · ν2 ) dS
C
Same calculation for Vr with ν replaced by −ν
ZZ
Z
(vt · u + vx · F (u)) dxdt = − v (F (ur ) · ν1 + ur · ν2 ) dS
Vr
Grand total:
C
Z
C
v [F (ul ) − F (ur )]ν1 + (ul − ur )ν2 ] dS = 0
=⇒ [F (ul ) − F (ur )]ν1 + (ul − ur )ν2 = 0 along C.
Then ν = (1, −ṡ) and
Gregory T. von Nessi
F (ul ) − F (ur ) = ṡ(ul − ur )
· ν dS
Version::31102011
x
6.2 Shocks, Rarefaction Waves and Entropy Conditions
129
(ṡ, 1)
Suppose now that C is given by (x = s(t), t)
ν = (1, −ṡ)
C
Version::31102011
this is typically written as (adopting new notation)
[[F (u)]] = σ · [[u]] ,
where σ = ṡ and [[w ]] = wl − wr = jump.
This is the Rankine-Hugoniot Condition
6.2
Shocks, Rarefaction Waves and Entropy Conditions
We return to our original example:
ut + u · ux = 0


1 x ≤0
1−x 0≤x <1
g(x) =

0 x ≥1
A discontinuity develops at t = 1, specifically, there is a jump for ul = 1 to ur = 0.
In the general situation, one solves the Riemann problem for the conservation laws
ut + F (u)x = 0
u(x, 0) =
ul
ur
x <0
0≤x >0
Example: The Riemann problem for Burger’s equation:
1. ul = 1, ur = 0. The discontinuity can propagate along a curve C with
ṡ =
[[F (u)]]
F (ul ) − F (ur )
=
=
[[u]]
ul − ur
1
2
−0
1
=
1
2
The characteristics are going into the curve C, a so-called shock wave.
2. ul = 0, ur = 1. Again, we have ṡ = 12 ; but this time the characteristics are emerging
from the shock.
Is this the only solution? We can find another solution of Burger’s equation by the similarity
method, i.e. by selecting solutions of the form
u(x, t) =
1 x v β
tα
t
Gregory T. von Nessi
Chapter 6: Conservation Laws
130
t
u=1
u=0
0
x
u=0
u=1
0
x
t

 0 x <0
x
0≤x <t
u(x, t) =
 t
1 x >t
u=0
u=1
0
x
0
This leads to u(x, t) = xt . We know construct a different solution:
This solution is continuous and called a rarefaction wave. It is a weak solution.
Question: Which solution is the physically relevant solution?
No
Ideas:
The
solution should be stable under perturbations, i.e. should look similar if we
Gregory
T. von
Nessi
smooth the initial data . The solution should be the limit of a viscous approximation of the
original equation (more later).
Recall that ẏ = G(F 0 (z), 1), i.e., F 0 (ul ) is the speed of the characteristics with initial data
ul .
Version::31102011
t
6.2 Shocks, Rarefaction Waves and Entropy Conditions
131
ṡ
F ′ (ul )
Version::31102011
Lax entropy condition:
F ′ (ul ) > ṡ > F ′ (ur )
i.e., characteristics impinge into the shock.
F ′ (ur )
Recall: Burger’s equation
ut +
u2
2
x
= ut + u · ux = 0
Gregory T. von Nessi
Chapter 6: Conservation Laws
132
Shock wave initial data:
Rarefaction wave initial data:
t
t
0
1
2
t
0
0
x
ṡ =
t
x
x
weak solutions not unique
1
2
"non-physical" shock
0
x
Odd entropy conditions to select solutions:
Lax Characteristics going into the shock.
Viscous approximation: u as a limit of u ,
ut + u · ux + · uxx
=0
Hopf-Cole: Change of the dependent variables, w = Φ(u)
ut − a∆u + b|Du|2 = 0
u = g
wt = a∆w − |Du|2 (aΦ00 (u) − bΦ0 (u))
Choose Φ as a solution of aΦ00 − bΦ0 = 0 =⇒ w solves the heat equation. One solution is
given by
bt
Φ(t) = exp −
a
Gregory T. von Nessi
Version::31102011
ṡ =
t
0
x
6.2 Shocks, Rarefaction Waves and Entropy Conditions
133
Choosing this Φ, we have
(
6.2.1
wt − a∆w
= 0
w (x, 0) = exp − b·g(x)
a
Burger’s Equation
Define
v (x, t) :=
Z
x
u(y , t) dt
Version::31102011
−∞
assuming that the integral exists and that u, ux → 0 as x → −∞.
The equation for v :
1
vt − · vxx + vx2 = 0
2
Proof: Compute
Z x
Z x
1 2
1
ut dy − · ux + u =
( · uxx − u · ux ) dy − · ux + u 2
|
{z
}
2
2
−∞
−∞
( 12 u 2 )x
1
= · ux − u 2
2
x
−∞
1
− · ux + u 2
2
1
1
= · ux − u 2 − · ux + u 2
2
2
= 0
=⇒ v solves vt − · vxx + 21 vx2 = 0
v (x, 0) = h(x) =
Z
x
g(y ) dy
−∞
This is of the form of the model equation with a = , b = 12 .
(
wt − · wxx = 0 b·v
w = exp −
solves
w (x, 0) = exp − b·h(x)
a
a−
The unique bounded solution is
1
w (x, t) = √
4πt
Now, v (x, t) = − b ln w
v (x, t) = − ln
b
(x − y )2
b · h(y )
exp −
exp −
dy
4t
R
Z
1
√
4πt
(x − y )2
b · h(y )
exp −
exp −
dy
4t
R
Z
and u(x, t) = vx (x, t)
u(x, t) =
1
b
R
)2
b·h(y )
exp − (x−y
exp
−
dy
4t
R
(x−y )2
)
exp − b·h(y
dy
R exp − 4t
x−y
R 4t
Gregory T. von Nessi
Chapter 6: Conservation Laws
134
Formula for u , let → 0
Asymptotics of the formula for → 0: Suppose that k and l are continuous functions, l
grows linearly, k grows at least quadratically. Suppose that k has a unique minimum point,
k(y0 ) = min k(y )
y ∈R
Then
k(y )
l(y
)
·
exp
−
dy
R
lim
= l(y0 )
R
k(y )
→0
exp
−
dy
R
R
Version::31102011
Proof: See Evans
Evaluation for viscous approximation with
0 x <0
g(x) =
1 x >0
x −y
t
(x − y )2 h(y )
k(y ) =
+
4t
2 Z y
0 y <0
h(y ) =
g(x) dx =
=: y +
y y ≥0
−∞
l(y ) =
Find minimum of k:
Case 1: y < 0, k(y ) =
(x−y )2
4t
Minimum:
y0 = x
if x ≤ 0
y0 = 0 if x ≥ 0
Case 2: y ≥ 0, k(y ) =
(x−y )2
4t
+
y
2
k 0 (y ) =
k 00 (y ) =
x −y
1
+
2t
2
1
> 0 =⇒ min
2t
k 0 (y ) = 0, y = x − t
y0 = x − t if x ≥ t
y0 = 0 if x ≤ t
Evaluation of the formula:
(1): x > t:
2
y ≤ 0: =⇒ y0 = 0, k(y0 ) = x4t
y ≥ 0: =⇒ u0 = x − t, k(x − t) =
t
4
+
x−t
2
=
−t
4
+
x
2
x2
x
>
⇐⇒ x 2 > 2xt − t 2 ⇐⇒ (x − t)2 ≥ 0
4t
2
x −y
t
y0 = x − t, l(y ) =
, l(x − t) = = 1
t
t
Gregory T. von Nessi
√
6.3 Viscous Approximation of Shocks
135
Analogous arguments for:
(2): 0 ≤ x ≤ t
y0 = 0, l(y ) =
x
t
(3): x < 0, y0 = x, l(x) = 0
Version::31102011
Rarefaction wave!
6.3

 0 x <0
x
0≤x ≤t
u(x, t) = lim u (x, t) =
 t
→0
1 x >t
Viscous Approximation of Shocks
Burger’s equation: Shocks (RH condition), rarefaction waves
ul
ur
x0
ut + F (u)x = ut + F 0 (u) · ux = 0
Lax criterion: F 0 (ul ) > ṡ > F 0 (ur )
Since F 0 (ul/r ) is the speed of the characteristics on the left/right of x0 .
More generally, find shock solutions of the Riemann problem
ut + F (u)x
= 0
ul
u(x, 0) =
ur
x <0
x >0
Building block for numerical schemes that approximate initial data by piecewise constant
functions.
Viscous approximation: ut + F (u)x = · uxx
Special solutions: traveling waves
u(x, t) = v (x − st),
s = speed of profile.
Gregory T. von Nessi
Chapter 6: Conservation Laws
136
Find v = v (y ) such that v (y ) → ul as y → −∞
v (y ) → ur as y → +∞
Equation for v : ut = −s · v 0
ux = v 0
uxx = v 00
ut + F 0 (u) · ux
0
Define w (y ) = v
y
−s · v + F (v ) · v
0
= · uxx
= · v 00
, v (y ) = w ( · y ),

 −s · w 0 + F 0 (w ) · w 0 = w 0
w (y ) → ul as y → −∞
=⇒

w (y ) → ur as y → +∞
Suppose for the moment being that ul > ur .
ul
ur
0
Suppose we have a unique solution
−s · w 0 + F 0 (w ) · w 0 = (−s · w + F (w ))0
Integrate −s · w + F (w ) + C0 = w 0
C0 = constant of integration
Let Φ(w ) = −s · w + F (w ) + C0
If w (y ) → ul , w 0 (y ) → 0 as y → −∞, then
−s · ul + F (ul ) + C0 = 0
⇐⇒ C0 = s · ul − F (ul ). Then
Φ(w ) = −s(w − ul ) + F (w ) − F (ul )
If w → ur , w 0 → 0 as y → +∞, =⇒ Φ(ur ) = 0
=
Gregory T. von Nessi
⇐⇒
−s(ur − ul ) + F (ur ) − F (ul ) = 0
F (ur ) − F (ul )
s=
R.H.
ur − ul
Version::31102011
0
6.3 Viscous Approximation of Shocks
137
w defined by the ODE w 0 = Φ(w )
Φ at least Lipschitz =⇒ local existence and uniqueness of solutions. Moreover, Φ(ur ) = 0,
Φ(ul ) = 0 =⇒ w ≡ ur , w ≡ ul are solutions.
=⇒ ul > w > ur for all times.
Version::31102011
ul
u
can not happen
ur
Analogously, there is no u with ul > u > ur with Φ(u) = 0. Φ has a sign on [ur , ul ] =⇒ w
decreasing =⇒ Φ(u) < 0 on (ur , ul )
Φ(u) = −s(u − ul ) + F (u) − F (ul )
Φ(u) < 0 for u ∈ (ur , ul ). So,
F (u) < s(u − ul ) + F (ul )
F (u) < F (ul ) +
F (ur ) − F (ul )
(u − ul )
ur − ul
|
{z
}
σ
F (ur )
F (ul )
ur
ul
=⇒ F lies under the line segment connecting (ur , F (ur )) and (ul , F (ul )).
Remarks:
Gregory T. von Nessi
Chapter 6: Conservation Laws
138
1. This is always true (for all choices of ul and ur ) if F is convex.
2. Take the limit u % ul
s(u − ul ) > F (u) − F (ul )
F (u) − F (ul )
⇐⇒ s <
u − ul
In the limit s ≤ F 0 (ul )
As u & ur ,
= s(u − ur )
=⇒ s >
F (u) − F (ur )
u − ur
and in the limit s ≥ F 0 (ul )
Two conditions together yield: F 0 (ul ) ≥ s ≥ F 0 (ur )
(Lax Shock admissibility criterion). So seeking traveling wave solution, we recover
the R.H. and Lax conditions.
3. Combining the inequalities
F (u) − F (ul )
F (u) − F (ur )
> ṡ >
u − ul
u − ur
∀u ∈ (ur , ul )
Oleinik’s entropy criterion.
6.4
Geometric Solution of the Riemann Problem for General
Fluxes
F
slope s2
slope s1
K
ur
Gregory T. von Nessi
u1 u2
ul
u
Version::31102011
F (u) − F (ur ) < F (ul ) − F (ur ) + s(ur − ul + u − ur )
6.5 A Uniqueness Theorem by Lax
139
Let K = {u, y : ur ≤ u ≤ ul , y ≤ F (u)}
The convex hull is bounded by line segments tangent to the graph of f and arcs on the
graph of f . The entropy solution consists of shocks (corresponding to the line segments)
and rarefaction waves (corresponding to the arcs).
connects u1 and u2
Version::31102011
s1 (t)
s2 (t)
u = ul
u = ur
0
6.5
A Uniqueness Theorem by Lax
Conservation Law ut + F (u)x = 0
A weak solution (say piecewise smooth with discontinuities) is an entropy solution if Oleinik’s
entropy criterion holds.
F (u) − F (ul )
F (u) − F (ur )
> ṡ >
u − ul
u − ur
for all discontinuities of u.
Theorem 6.5.1 (Lax): Suppose u, v are two entropy solutions. Then
Z
µ(t) =
|u(x, t) − v (x, t)| dx
R
is decreasing in time.
Corollary 6.5.2 (Uniqueness): There exists at most on entropy solution with given initial
data, u(x, 0) = u0 (x), x ∈ R
Proof: µ(0) = 0, µ decreasing =⇒ µ(t) = 0 ∀t =⇒ u(x, t) = v (x, t).
Proof of Theorem (6.5.1): Take u − v
Choose xi (t) in such a way that u − v changes its sign at the points xi and has a sign on
xi , xi+1 ], namely (−1)i .
Now
Z xn+1
X
µ(t) =
(−1)n
(u(x, t) − v (x, t)) dx
n
xn
Gregory T. von Nessi
Chapter 6: Conservation Laws
140
u−v
i + 1 even
xi+2 (t)
xi+1 (t)
xi (t)
xn
n
−
+(u(xn+1
, t)
Z
xn+1
xn
(u − v )t dx
= −
Z
−
− v (xn−1
, t)) · ẋn+1 − (u(xn+ , t) − v (xn+ , t)) · ẋn
(6.2)
xn+1
xn
(F (u) − F (v ))x dx
−
−
= −F (u(xn+1
, t)) ± F (v (xn+1
, t)) ± F (u(xn+ , t)) ± F (v (xn+ , t))
Rewrite (6.2) formally:


−
−
xn+1
xn+1


X
µ̇(t) =
(−1)n −(F (u) − F (v )
+ (u − v ) · ẋ


n
xn+
xn+


X
=
(−1)n F (u) − F (v ) − (u − v ) · ẋ
+ F (u) − F (v ) − (u − v ) · ẋ

−
n
xn
(−1)n−1
xn−1
xn
(−1)n
xn+



xn+1
Case 1: u − v continuous at xn . u(xn , t) = v (xn , t) =⇒ no contribution to the sum at xn
Case 2: u jumps from ul to ur with ur < ul and v continuous, u − v changes sign at xn =⇒
ur < v = v (xn ) < ul
u − v < 0 on [xn , xn+1 ] =⇒ n is odd.
Term at xn :
−{F (ul ) − F (v ) − (ul − v ) ·ẋn + F (ur ) − F (v ) − (ur − v ) ·ẋn }
|
{z
}
|
{z
}
Gregory T. von Nessi
F (ul )−F (v )
>ṡ≥0
ul −v
(v )
ṡ> F (uur r)−F
≥0
−v
Version::31102011
Find µ̇(t), check that the jumps in the intervals (xi , xi+1 ) make no difficulties
Z xn+1
X
µ̇(t) =
(−1)n
(ut − vt ) dx
6.6 Entropy Function for Conservation Laws
141
=⇒ total contribution at xn is non-positive
=⇒ µ̇(t) ≤ 0
=⇒ µ is decreasing.
Analogous discussion for the remaining cases.
6.6
Entropy Function for Conservation Laws
Version::31102011
Idea: Additional equations to be satisfied by smooth solutions, inequalities for non-smooth
solutions.
u smooth solution of ut + F (u)x = 0. Suppose you find a pair of functions (η, ψ)
η entropy, η convex, η 00 > 0
ψ entropy flux
such that
0
η(u)t + ψ(u)x
= 0
0
= 0
0
= 0
η (u) · ut + ψ (u) · ux
ut + F (u) · ux
=⇒ η 0 (u) · ut + F 0 (u) · η 0 (u) · ux = 0. This holds if
ψ0 = F 0 · η0
Suppose that u is a solution of the viscous approximation
ut + F (u )x = · uxx
suppose that (η, ψ) is an entropy-entropy flux pair with η convex.
=⇒ ut · η 0 (u ) + F 0 (u ) · η 0 (u ) · ux = · η 0 (u ) · uxx
⇐⇒ η(u )t + ψ(u ) · ux = (η 0 · u̇x )x − · η 00 (u ) · u̇x2
Integrate this on [x1 , x2 ] × [t1 , t2 ]
Z
t2
t1
=⇒
Z
Z
x2
(η(u )t + ψ(u )x ) dxdt = x1
t2
t1
Z
Z
t2
t1
−
|
x2
x1
(η(u )t + ψ(u )x ) dxdt ≤ 2
Z
Z
x2
x1
t2
t1
Z
( · η 0 (u ) · ux )x dxdt
x2
x1
· η 00 (u )ux 2 dxdt
{z
}
≥0
Z
h
− η 0 (u (x2 , t)) · ux (x2 , t)
t1
i
− η 0 (u (x1 , t)) · ux (x1 , t) dxdt
t2
Gregory T. von Nessi
Chapter 6: Conservation Laws
142
Suppose that u → u, and let → 0:
Z t 2 Z x2
(η(u)t + ψ(u)x ) dxdt ≤ 0
t1
(6.3)
x1
if ux is uniformly integrable, i.e.
Z t2 Z
t1
x2
x1
(η 0 (u ) · u )x dxdt ≤ C.
This means that a solution u of ut + F (ux ) = 0 has to satisfy
in a suitable weak sense.
Example: Burger’s equation.
2
η(u) = u 2 , F (u) = u2
Possible choice: (η, ψ) = u 2 , 23 u
ψ 0 (u) = 2u · u = 2u 2
2 2
ψ(u) =
u
3
2
Suppose u has a jump across a smooth curve.
t2 + ∆t
C
t1
∆x
x1
x1 + ∆x
ṡ = speed of the curve
ṡ = ∆x
∆t
(6.3) =
Z
t2
x2
dx +
η(u)
x1
t1
Z
x2
t2
t1
ψ(u)
dt
x1
≈ (x2 − x1 )(η(ul ) − η(ur )) + (t2 − t1 )(ψ(ur − ul ))
2
= ṡ · ∆t · (ul2 − ur2 ) + ∆t (ur3 − ul3 )
3
1
3
= . . . = − ∆t (ul − ur ) ≤ 0
6 | {z }
≥0
=⇒ jump admissible only if ul > ur .
Gregory T. von Nessi
Version::31102011
η(u)t + ψ(u)x = 0
6.7 Conservation Laws in nD and Kružkov’s Stability Result
6.7
143
Conservation Laws in nD and Kružkov’s Stability Result
Conservation laws in n dimensions
ut + divx F~ (u) = 0,
where F~ : R → Rn vector value flux.

F1 (u)


..
F~ (u) = 

.
Fn (u)
Version::31102011

~ 0 (u) = F~ 0 (u)η 0 (u),
Definition 6.7.1: A pair (η, ψ) is called an entropy-entropy flux pair if ψ
where
~ : R → Rn , ψi0 (u) = Fi0 (u)η 0 (u)
ψ
η:R→R
and if η is convex.
Take a viscous approximation:
ut + divx F~ (u ) = · ∆u ut
n
X
∂
Fi (u ) · η 0 (u ) = · ∆u η 0
· η (u ) +
∂xi
0
(6.4)
i=1
∂ ∂
Fi (u ) η 0 (u ) = Fi0 (u ) · η 0 (u )
u
∂xi
∂xi
∂ = ψi0 (u )
u
∂xi
∂ ~ =
ψ(u )
∂xi
(6.4) is equivalent to
~ )) = · div (Du · η 0 (u )) − · Du · Dη 0 (u )
η(u )t + div (ψ(u
= · div (D(η(u ))) − · Du · Du · η 00 (u )
Multiply this by a test function φ ≥ 0 with compact support.
Z ∞Z
Z ∞Z
~ )] dxdt =
φ[η(u )t + div ψ(u
φ[ · ∆η(u ) − · η 00 (u )|Du |2 ] dxdt
0
Rn
0
Z
0
∞Z
Rn
Rn
~ )] dxdt ≤
φ[η(u )t + div ψ(u
If u → u as → 0, then
Z
0
∞Z
Rn
Z
0
∞Z
Rn
· φ · ∆η(u ) dxdt
~
φ[η(u)t + div ψ(u)]
dxdt ≤ 0
Gregory T. von Nessi
Chapter 6: Conservation Laws
144
Definition 6.7.2: A bounded function u is a weak entropy solution of
ut + div F~ (u) = 0
u(x, 0) = u0 (x)
if
Z
0
∞Z
Rn
~
(η(u) · φt + ψ(u)
· Dφ) dxdt ≤
Z
Rn
φ(x, 0)η(u0 (x)) dx
for all test functions φ ≥ 0 with compact support in Rn × [0, ∞) and all entropy-entropy flux
~
pairs (η, ψ).
~
≤0
η(u)t + divx ψ(u)
Analogue of the Rankine-Hugoniot jump condition
Γ
u−
n
v−
v+
u+
Similar arguments to the 1D case show
hh ii
~ · ν0 ≤ 0
νn+1 [[η]] + ψ
where ν = (ν0 , νn+1 ) is the normal in space-time. Define ñ =
V [[η]] ≥
n=2
hh ii
~ · ñ
ψ
ν0
|ν0 | ,
V = − ν|νn+1
. Then
0|
n
V = normal velocity of the interface.
By approximation, we can extend this setting to Lipschitz continuous η, and in particular to
Kružkov’s entropy, η(u) = |u − k| k is constant.
Gregory T. von Nessi
~
ψ(u)
= sgn (u − k)(F~ (u) − F~ (k))
Version::31102011
Additional conservation laws:
6.7 Conservation Laws in nD and Kružkov’s Stability Result
Check: R.H. jump inequality ṡ [[η]] ≥
145
hh ii
~ with Kružkov’s entropy in 1 dimension.
ψ
ṡ(|ul − k| − |ur − k|) ≥ sgn (ul − k)(F~ (ul ) − F~ (k)) − sgn (ur − k)(F~ (ur ) − F~ (k))
Suppose that ur < k < ul :
ṡ(ul − k + ur − k) ≥ F~ (ul ) − F~ (k) + F~ (ur ) − F~ (k)
ṡ(ul + ur − 2k) ≥ F~ (ul ) + F~ (ur ) − 2F~ (k)
~
~
r
ṡ
F~ (k) ≥ F (ul )+2 F (ur ) + k − ul +u
2
⇐⇒
Version::31102011
⇐⇒
ṡ =
F~ (ul ) − F~ (ur )
ul − ur
Sign error. Should recover Oleinik’s Entropy Criterion.
Correction: u+ = ur , u− = ul → Oleinik’s entropy criterion.
Theorem 6.7.3 (Kružkov): Suppose that u1 and u2 are two weak entropy solutions of
ut + div F~ (u) = 0 which satisfies u1 , u2 ∈ C 0 ([0, ∞); L1 (Rn )) ∩ L∞ ((0, ∞) × Rn ). Then
||u1 (·, t2 ) − u2 (·, t2 )||L1 (Rn ) ≤ ||u1 (·, t1 ) − u2 (,̇t1 )||L1 (Rn )
for all 0 ≤ t1 ≤ t2 < ∞
Remark: This implies uniqueness if u1 and u2 have the same initial data.
Notation: L1 (Rn ) = space of all integrable functions with
Z
||f ||L1 (Rn ) =
|f | dx
Rn
L∞ = space of all bounded measurable functions.
The first assumption means that the map
t 7→ Φ(t) = u(·, t) is continuous
i.e. for t0 fixed, ∀ > 0, ∃δ > 0 such that ||Φ(t) − Φ(t0 )||L1 (Rn ) ≤ if |t − t0 | < δ. i.e.
Z
|u(x, t0 ) − u(x, t)| dx < Rn
Proof: (formal)
Kružkov entropy-entropy flux:
η(u) = |u − k|,
Z
Rn
Z
R+
~
ψ(u)
= sgn (u − k)(F~ (u) − F~ (k))
η(u) φt + sgn (u1 − k)(F~ (u1 ) − F~ (k)) · Dφ dxdt ≥ 0
|{z}
|u1 −k|
for all φ with compact support in Rn × (0, ∞). Suppose you can choose k = u2 (x, t) (it is
not clear you can do this since k is constant)
Z Z
|u1 (x, t) − u2 (x, t)|φt + sgn (u1 − u2 )[F (u1 ) − F (u2 )](x, t) · Dx φ dxdt ≥ 0
Rn
R+
Gregory T. von Nessi
Chapter 6: Conservation Laws
146
R
en+1
t2
ν∝
√ 1
M 2 +1
µ
x/|x|
M
¶
K
t1
−en+1
R − M(t2 − t1 )
R
Rn
φ(x, t) =
1 (x, t) ∈ K
0 otherwise
(This will also have to be considered further as this function isn’t smooth).
Expectation: (Dx , Dt )χK = −νHn · L · ∂K (Hn is the surface measure)
——
One way to make this precise is to use integration by parts:
Z
Z
~ = − φ · div ψ
~
(Dx φ · Dt φ)ψ
——
Suppose all this is correct:
Z
Top:
−
{t=t1 ,
Z
Bottom:
+
{t=t1 ,
Z (
+
A
|x|≤R−M(t2 −t1 )}
|x|≤R}
|u1 (x, t) − u2 (x, t)| dx
|u1 (x, t) − u2 (x, t)| dx
−M
√
|u1 (x, t) − u2 (x, t)|
M2 + 1
)
x
−√
sgn (u1 − u2 )(F~ (u1 ) − F~ (u2 )) ·
dHn
2
|x|
M +1
1
where A := {t1 < t < t2 , |x| = R − M(t2 − t1 )}.
3rd integral:
Z
x
−M|u1 (x, t) − u2 (x, t)| − sgn (u1 − u2 )(F~ (u1 ) − F~ (u2 )) ·
dHn
|x|
A
Z
≤
−M|u1 (x, t) − u2 (x, t)| − |F~ (u1 ) − F~ (u2 )| dHn ≤ 0
Gregory T. von Nessi
A
Version::31102011
Define K ⊂ Rn × R+
Choose φ = χK , i.e.
6.7 Conservation Laws in nD and Kružkov’s Stability Result
147
By assumption, |u1 |, |u2 | ≤ N. Take M = sup|u|≤N |F 0 (u)|.
Z
Z
|u1 (x, t2 ) − u2 (x, t2 )| dx ≤
|u1 (x, t) − u2 (x, t2 )| dx+ ≤ 0
|x|≤R−M(t2 −t1 )
|x|≤R
Remark:
The proof shows more:
Z
Z
|u1 (x, t2 ) − u2 (x, t2 )| dx ≤
|x|≤R
|x|≤R+M(t2 −t1 )
|u1 (x, t1 ) − u2 (x, t1 )| dx
Version::31102011
This shows again finite speed of propagation.
Fixing the proof:
• φ = χK . Fix by taking a regularization of χK , i.e. with standard mollifiers.
• Choice k = u2 (x, t)
“Doubling of the variables”. Take u2 (y , s) and integrate in y and s for the entropy
inequality for u1 :
Z
Z
n
0 ≤
|u1 (x, t) − u2 (y , s)|φt
Rn ×R+ Rn ×R+
o
+ sgn (u1 − u2 )(F~ (u1 ) − F~ (u2 )) · Dx φ dxdtdy ds
Entropy inequality for u2 , written in y and s. Take k = u1 (x, t), integrate in x and t:
Z
Z
n
0 ≤
|u1 (x, t) − u2 (y , s)|φs
Rn ×R+ Rn ×R+
o
+ sgn (u1 − u2 )(F~ (u1 ) − F~ (u2 )) · Dy φ dxdtdy ds
Add the two inequalities:
Z
Z
0 ≤
Rn ×R+
Rn ×R+
|u1 − u2 |(φs + φt )
+ sgn (u1 − u2 )(F~ (u1 ) − F~ (u2 )) · (Dx φ + Dy φ) dxdtdy ds
Trick: Chose φ to enforce y = x as → 0
R
R ρ = 1
ρ (t) =
ρ̂ (x) =
t
n
1 Y xi ρ
n
1
ρ
i=1
Choose
~
φ(x, t, y , s) = ψ
x +y s +t
,
2
2
· ρ̂
x −y
2
· ρ
t −s
2
~ smooth, compact support in Rn × (0, ∞).
ψ
Find
~t (x̂, tˆ) · ρ̂ (ŷ ) · ρ (ŝ)
= ψ
~ tˆ) · ρ̂ (ŷ ) · ρ (ŝ)
Dx φ + Dy φ = Dx ψ(x̂,
φt + φs
Gregory T. von Nessi
Chapter 6: Conservation Laws
148
ρ1/4
ρ ∈ C0∞ (−1, 1)
ρ1/2
ρ1
x +y
x −y
t +s
t −s
, ŷ =
, tˆ =
, ŝ =
2
2
2
2
→ weighted integral, as → 0 the integration is on ŷ = 0, ŝ = 0 i.e. x = y and t = s.
You get exactly the inequality we get formally with k = u2 (x, t).
x̂ =
6.8
Exercises
6.1: Let f (x) be a piecewise constant function

1 x < −1,



1/2 −1 < x < 1,
f (x) =
3/2 1 < x < 2,



1 x > 2.
Construct the entropy solution of the IVP, ut + uux = 0, u(x, 0) = f (x) for small values of
t > 0 using shock waves and rarefaction waves. Sketch the characteristics of the solution.
why does the structure of the solution change for large times?
6.2: Motivation of Lax’s entropy condition via stability. Consider Burgers’ equation
ut + uux = 0
subject to the initial conditions
0 for x > 0
u0 (x) =
,
1 for x < 0
u1 (x) =
1 for x > 0
.
0 for x < 0
The physically relevant solution should be stable under small perturbations in the initial data.
Which solution (shock wave or rarefaction wave) do you obtain in the limit as n → ∞ if you
approximate u0 and u1 by


1 for x < 0
 0 for x < 0

1
nx for 0 ≤ x ≤ n , u1,n (x) =
1 − nx for 0 ≤ x ≤ n1 ?
u0,n (x) =


1
1 for x > n
0 for x > n1
6.3: Let the initial data for Burgers’ equation

ul

r
f (x) =
ul − ul −u
L x

ur
Gregory T. von Nessi
ut + uux = 0 be given by
for x < 0,
for 0 ≤ x ≤ L,
for x > L,
Version::31102011
1
−1
6.8 Exercises
149
where 0 < ur < ul . Find the time t∗ when the profile of the solution becomes vertical. More
generally, suppose that the initial profile is smooth and has a point x0 where f 0 (x0 ) < 0.
Find the time t∗ for which the solution of Burger’s equation will first break down (develop a
vertical tangent).
6.4: [Profile of rarefaction waves] Suppose that the conservation law
ut + f (u)x = 0
has a solution of the form u(x, t) = v (x/t). Show that the profile v (s) is given by
Version::31102011
v (s) = (f 0 )−1 (s).
6.5: Verify that the viscous approximation of Burgers’ equation
ut + uux = uxx
has a traveling wave solution of the form u(x, t) = v (x − st) with
1
(uL − uR )y
v (y ) = uR + (uL − uR ) 1 − tanh
2
4
where s = (uL + uR )/2 is the shock speed given by the Rankine Hugoniot condition. Sketch
this solution. What happens as → 0?
6.6: Consider the Cauchy problem for Burgers’ equation,
ut + uux = 0
with the initial data
u(x, 0) =
1 for |x| > 1,
|x| for |x| ≤ 1.
a) Sketch the characteristics in the (x, t) plane. Find a classical solution (continuous and
piecewise C 1 ) and determine the time of breakdown (shock formation).
b) Find a weak solution for all t > 0 containing a shock curve. Note that the shock
does not move with constant speed. Therefore, find first the solution away from the
shock. Then use the Rankine-Hugoniot condition to find a differential equation for the
position of the shock given by (x = s(t), t) in the (x, t)-plane.
6.7: [Traffic flow] A simple model for traffic flow on a one lane highway without ramps leads
to the following first order equation which describes the conservation of mass,
ρ
ρt + vmax ρ 1 −
= 0, x ∈ R, t > 0.
ρmax
x
In this model, the velocity of cars is related to the density by
ρ
v (x, t) = vmax 1 −
,
ρmax
where we assume that the maximal velocity vmax is equation to one. Solve the Riemann
problem for this conservation law with
ρL if x < 0,
ρ(x, 0) =
ρR if x > 0.
Gregory T. von Nessi
Chapter 6: Conservation Laws
150
Under what conditions on ρL and ρR is the solution a shock wave and a rarefaction wave,
respectively? Sketch the characteristics. Does this reflect your daily driving experience? Find
the profile of the rarefaction wave.
6.8: [LeVeque] The Buckley-Leverett equations are a simple model for two-phase flow in
a porous medium with nonconvex flux
f (u) =
u2
.
u 2 + 21 (1 − u)2
Hint: The line
to the graph of f on the interval [0, 1]
√ through the origin that is tangent
√
has slope 1/( 3 − 1) and touches at u = 1/ 3. If you are proficient with Mathematica or
Matlab you could try to plot the profile of the rarefaction wave.
6.9: Let u0 (x) = sgn(x) be the sign function in R and let a be a constant. Let u(x, t)
be the solution of the transport equation
ut + aux = 0,
and let u (x, t) be the solution of the so-called viscous approximation
ut + aux − uxx
= 0,
both subject to the initial condition u(·, 0) = u (·, 0) = u0 .
a) Derive representation formulas for u and u !
Hint: For the viscous approximation, consider v (x, t) = u (x + at, t).
b) Prove the error estimate
Z ∞
√
|u(x, t) − u (x, t)| dx ≤ C t,
−∞
Gregory T. von Nessi
for
t > 0.
Version::31102011
In secondary oil recovery, water is pumped into some wells to displace the oil remaining in the
underground rocks. Therefore u represents the saturation of water, namely the percentage
of water in the water-oil fluid, and varies between 0 and 1. Find the entropy solution to the
Riemann problem for the conservation law ut + f (u)x = 0 with initial states
1 if x < 0,
u(x, 0) =
0 if x > 0.
Version::31102011
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
152
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
7.1
Classification of 2nd Order PDE in Two Variables
7.2
Maximum Principles for Elliptic Equations of 2nd Order
7.3
Maximum Principles for Parabolic Equations
7.4
Exercises
Gregory T. von Nessi
178
153
172
158
Version::31102011
7 Elliptic, Parabolic and Hyperbolic Equations
7.1 Classification of 2nd Order PDE in Two Variables
7.1
153
Classification of 2nd Order PDE in Two Variables
a(x, y ) · uxx + 2b(x, y ) · uxy + c(x, y ) · uy y + d(x, y ) · ux + e(x, y ) · uy + g(x, y ) · u = f (x, y )
General mth order PDE
L(x, D)u =
X
|α|≤m
aα (x) · Dα u
Version::31102011
where u : Rn → R
Principle part:
Lp (x, D) =
X
aα (x)Dα
|α|=m
The symbol of the PDE is defined to be
X
L(x, i ξ) =
|α|≤m
aα (x) · (i ξ)α
(you take i ξ because of Fourier transform)
∂
ξ ∈ Rn , replace ∂x
by i ξj .
j
P ∂2
Examples: Laplace ∆ = j ∂x
2 = L
j
L(x, i ξ) =
n
X
j=1
(i ξj )n = −|ξ|2
Heat Equation: t = n + 1.
Lu =
∂u
− ∆u =
∂xn+1
Symbol:
n
X
L(x, i ξ) = i ξn+1 −
(i ξj )2
j=1
Wave equation:
L=
∂2
−∆
2
∂xn+1
L(x, i ξ) = (i ξn+1 )2 −
n
X
(i ξj )2
j=1
Principle part of the symbol:
Lp (x, i ξ) =
X
|α|=m
aα (x) · (i ξ)α
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
154
Examples:
∆ : Lp = −|ξ|2
heat: Lp = ξ12 + · · · + ξn2
2
wave: Lp = −ξn+1
+ ξ12 + · + ξn2
2nd order equation in two variables
∂2
∂2
∂2
+
c(x,
y
)
+
2b(x,
y
)
∂x 2
∂x∂y
∂y 2
2
2
Lp (x, i ξ) = a(x, y )ξ1 + 2b(x, y )ξ1 ξ2 + c(x, y )ξ2
T −a −b
ξ1
ξ1
=
−b −c
ξ2
ξ2
|
{z
}
Lp (x, D) = a(x, y )
Definition 7.1.1: The PDE is
elliptic if det A > 0
parabolic if det A = 0
hyperbolic if det A < 0.
Typical situation:
λ1 0
0 λ2
Equation: λ1 uxx + λ2 uy y + lower order terms
1 0
elliptic: −A =
uxx + uy y = ∆
0 1
1 0
parabolic: −A =
uxx + lower order terms
0 0
1 0
hyperbolic: −A =
uxx − uy y = = wave
0 −1
Remarks: This definition even allows an ODE in the parabolic case (not good).
Better definition: parabolic is ut − elliptic = f .
ut − ∆u = f
Two reasons why this determinant is important:
(A) Given Cauchy data along a curve, C(x(s), y (s)) parametrized with respect to arc
length, that is
u(x(s), y (s)) = u0 (s)
∂u
(x(s), y (s)) = v0 (s).
∂n
Question: Conditions of C that determine the second derivatives of a smooth solution.
Tangential derivative
Gregory T. von Nessi
~ = ux ẋ + uy ẏ = w0
Du · T
Version::31102011
A
7.1 Classification of 2nd Order PDE in Two Variables
Version::31102011
n = (−ẏ , ẋ)
155
(ẋ, ẏ )
Du · ~
n = −ux ẏ + uy ẋ = v0
Normal derivative
uxx ẋ 2 + 2uxy ẋ ẏ + uy y ẏ 2 + ux ẍ + uy ÿ = ẇ0 (s)
−uxx ẋ ẏ − uxy ẏ 2 + uxy ẏ 2 + uxy ẋ 2 + uy y ẋ ẏ − ux ÿ + uy ẍ = v̇0 (s)
uxx ẋ 2 + 2uxy ẋ ẏ + uy y ẏ 2 = p(s)
−uxx ẋ ẏ + (ẋ 2 − ẏ 2 )uxy + uy y ẋ ẏ = r (s)
auxx + 2buxy + cuy y = q(s)
We can solve for the 2nd derivatives along the curve if
det
det
ẋ 2
−ẋ ẏ
a
2ẋ ẏ
ẏ 2
2
− ẏ ẋ ẏ
2b
c
ẋ 2
ẋ 2
−ẋ ẏ
a
2ẋ ẏ
ẏ 2
ẋ 2 − ẏ 2 ẋ ẏ
2b
c
6= 0
= a(2ẋ 2 ẏ 2 − ẋ 2 ẏ 2 − ẋ 2 ẏ 2 + ẏ 4 )
−2b(ẋ 3 y + ẋy 3 ) + c(ẋ 4 − ẋ 2 ẏ 2 + 2ẋ 2 ẏ 2 )
= a(ẏ 4 + ẋ 2 ẏ 2 ) − 2b(ẋ 3 ẏ + ẋ ẏ 3 )
+c(ẋ 4 + ẋ 2 ẏ 2 ) 6= 0
Divide by (ẋ ẏ )2 :
2
ẏ
ẋ
ẋ
ẏ 2
+ 1 − 2b
+
+c
+ 1 6= 0
a
ẋ 2
ẋ
ẏ
ẏ 2
2
2
dy
dy
dx
dx
a
+ 1 − 2b
+
+c
+ 1 6= 0
dx 2
dx
dy
dy 2
!
2 !
2
dx
dy
dy
1+
· a
− 2b
+ c 6= 0
dy
dx
dx
{z
}
|
⇐⇒
⇐⇒
6=0
2b
c
2
a t −
t+
=0
a
a
at 2 − 2bt + c = 0
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
156
⇐⇒
⇐⇒
2
b2 ac
− 2 + 2 =0
a
a
√
−ac + b2
b
t= ±
a
a
b
t−
a
Non-trivial solutions if
det
a b
b c
= ac − b2 ≤ 0
elliptic equation → 2nd derivative are determined.
(b) Curves of discontinuity
a, . . . , f , g continuous.
u solution of Lu = f , u, ux , uy continuous, uxx , uxy , uy y continuous except on a curve
C, jump along C. What condition has C to satisfy.
C = (x(s), y (s))
ul
ur
uxl (x(s), y (s)) = uxr (x(s), y (s)).
Differentiate along the curve:
l
l
r
r
ẋ + uxy
ẏ
uxx
ẋ + uxy
ẏ = uxx
[[uxx ]] ẋ + [[uxy ]] ẏ = 0
Analogously:
(7.1)
uyl (x(s), y (s)) = uyr (x(s), y (s))
=⇒
[[uxy ]] ẋ + [[uy y ]] ẏ = 0
(7.2)
Equation holds outside the curve, all 1st order derivatives and coefficients are continuous =⇒ along the curve
a [[uxx ]] + 2b [[uxy ]] + c [[uy y ]] = 0
=⇒ linear system.

Gregory T. von Nessi


ẋ ẏ 0
[[uxx ]]
 0 ẋ ẏ   [[uxy ]]  = 0
[[uy y ]]
a 2b c
(7.3)
Version::31102011
Definition 7.1.2: A curve C is called a characteristic curve for L if
2
dy
dy
a
− 2b
+ c = 0.
dx
dx
7.1 Classification of 2nd Order PDE in Two Variables
157
Non-trivial solution only if

ẋ ẏ 0
det  0 ẋ ẏ  = aẏ 2 − 2bẋ ẏ + c ẋ 2 = 0
a 2b c
2
dy
dy
⇐⇒ a
− 2b
+c =0
dx
dx
Version::31102011

Jump discontinuity only possible if C is characteristic,
Elliptic: no characteristic curve
Parabolic: one characteristic
Hyperbolic: two characteristic
√
b
b2 − ac
dy
= ±
dx
a
a
√
Laplace: a = c = 1, b = 0 dy
dx = ± −1. No solution
Parabolic: a = 1, c = 0, b = 0 dy
dx = 0 y = const. characteristic.
Hyperbolic: a = 1, c = −1, b = 0 dy
dx = ±1, y = ±x characteristic.
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
158
7.2
Maximum Principles for Elliptic Equations of 2nd Order
Definition 7.2.1: L is an elliptic operator in divergence form if
L(x, D)u = −
n
X
∂i (aij (x)∂j u) +
i,j=1
n
X
bi (x)∂i u + c(x)u(x)
i=1
and if the matrix (aij ) is positive definite, i.e.,
i,j=1
aij (x)ξi ξj ≥ λ(x)|ξ|2
for all x ∈ Ω, all ξ ∈ Rn , λ(x) > 0
Remarks: Usually elliptic means uniformly elliptic, i.e.
(1) λ(x) ≥ λ0 > 0 ∀x ∈ Ω.
(2) Divergence form because L(x, D)u = −div (ADu) + ~b · Du + cu where A = (aij )
Definition 7.2.2: L is an elliptic operator in non-divergence form if
L(x, D) = −
n
X
aij (x)∂i ∂j u +
i,j=1
and
n
X
bi (x)∂i u + c(x)u(x)
i=1
n
X
aij ξi ξj > λ(x)|ξ|2
i,j=1
for all x ∈ Ω, ξ ∈ Rn
Remark: If aij ∈ C 1 , then an operator in non-divergence form can be written as an operator
in divergence form. We may also assume that aij = aji .
Definition 7.2.3: A function u ∈ C 2 is called a subsolution (supersolution) if L(x, D)u ≤ 0
(L(x, D)u ≥ 0)
Remark: n = 1, Lu = −uxx
subsolution −uxx ≤ 0 ⇐⇒ uxx ≥ 0 ⇐⇒ u convex =⇒ u ≤ v if vxx = 0 on v = u at the
endpoint of an interval.
Remark: We can not expect maximum principles to hold for general elliptic equations. Obstruction: the existence of eigenvalues and eigenfunctions.
Example: −u 00 = λu
λ = 1, u(x) = sin(x) on [0, 2π]
=⇒ sin x solves −u 00 = u
There is no maximum principle for −uxx − u = L.
Big difference between x ≥ 0 and c < 0.
Gregory T. von Nessi
Version::31102011
n
X
Version::31102011
7.2 Maximum Principles for Elliptic Equations of 2nd Order
π
159
2π
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
160
7.2.1
Weak Maximum Principles
Version::31102011
Gregory T. von Nessi
7.2 Maximum Principles for Elliptic Equations of 2nd Order
7.2.1.1
161
WMP for Purely 2nd Order Elliptic Operators
Again we take Ω ⊂ Rn . We now consider the following differential operator:
Lu :=
n
X
ij
a Dij u +
i,j=1
n
X
bi Di u + cu,
i=1
where aij , bi , c : Ω → R. Now, we state the following.
Version::31102011
Definition 7.2.4: The operator L is said to be elliptic(degenerate elliptic) if A := [aij ] >
0(≥ 0) in Ω.
Remarks:
i.) In the above definition A > 0 means the minimum eigenvalue of the matrix A is > 0.
More explicitly, this means that
n
X
i,j=1
aij ξi ξj > 0, ∀ξ ∈ Rn .
ii.) The last definition indicates that parabolic PDE (such as the Heat Equation) are really
degenerate elliptic equations.
For convenience, we will be using the Einstein summation convention for repeated indicies.
This means that in any expression where letters of indicies appear more than once, a summation is applied to that index from 1 to n. With that in mind, we can write our general
operator as
Lu := aij Dij u + bi Di u + cu.
Now, we move onto the weak maximum principle for our special case operator:
Lu = aij Dij u.
Theorem 7.2.5 (Weak Maximum Principle): Consider u ∈ C 2 (Ω)∩C 0 (Ω) and f : Ω → R.
If Lu ≥ f , then
!
1
|f |
u ≤ max u +
sup
· Di am (Ω)2 ,
(7.4)
2 Ω TrA
∂Ω
where T r A is the trace of the coefficient matrix A (i.e. T r A =
Pn
i=1 a
ii ).
Note: If f ≡ 0, the above reduces to the result of the first weak maximum principle we
proved. Namely, u ≤ max∂Ω u.
Proof: Without loss of generality we may translate Ω so that it contains the origin. Again,
we consider an auxiliary function:
v = u + k|x|2 ,
where k is an arbitrary constant to be determined later. Since the Hessian Matrix D2 u is
symmetric, we may choose a coordinate system such that D2 u is diagonal. In this coordinate
system, we calculate
Lv = Lu + 2kaij Sij = Lu + 2k · T r A,
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
162
where Sii = 1 and Sij = 0 when i 6= j. Now, suppose that v attains an interior maximum at
y ∈ Ω. From calculus, we then know that D2 v (y ) ≤ 0, which implies that aij Dij ≤ 0 as A
is positive definite. Now, if we choose
k>
1
|f |
sup
,
2 Ω TrA
our above calculation indicates that Lv > 0; a contradiction. Thus,
v ≤ max v
∂Ω
Example 3: Now we will apply the weak maximum principle to the second most famous
elliptic PDE, the Minimal Surface Equation:
(1 + |Du|2 )∆u − Di uDj uDij u = 0.
This is a quasilinear PDE as only derivatives of the 1st order are multiplied against the 2nd
order ones (a fully nonlinear PDE would have products of second order derivatives). We will
now show that this is indeed an elliptic PDE. Recalling our special case Lu = aij Dij u, we
can rewrite the minimal surface equation in this form with
aij = (1 + |Du|2 )Sij + Di uDj u,
with S being as it was in the previous proof. Now, we calculate
aij ξi ξj
= (1 + |Du|2 )Sij ξi ξj − Di uDj uξi ξj
= (1 + |Du|2 )|ξ|2 − Di uDj uξi ξj
= (1 + |Du|2 )|ξ|2 − (Di uξi )2
≥ (1 + |Du|2 )|ξ|2 − |Du|2 |ξ|2
= |ξ|2 .
To get the third equality in the above, we simply relabeled repeated indicies. Since one sums
over a repeated index, it’s representation is a dummy index, whose relabeling does not affect
the calculation.
From this calculation, we conclude this equation is indeed elliptic. Now, upon taking f = 0
and replacing u by −u, we can now apply the weak maximum principle to this equation.
Gregory T. von Nessi
Version::31102011
which implies the result as Ω contains the origin (which indicates that |x|2 ≤ Di am (Ω)2 ).
7.2 Maximum Principles for Elliptic Equations of 2nd Order
7.2.1.2
163
WMP for General Elliptic Equations
Now, we will go over the weak maximum principle for operators having all order of derivatives
≤ 2.
Theorem 7.2.6 (Weak Maximum Principle): Consider u ∈ C 2 (Ω) ∩ C 0 (Ω); aij , bi , c :
Ω → R; c ≤ 0; and Ω bounded. If u satisfies the elliptic equation (i.e. [aij ] = A > 0)
Lu = aij Dij + bi Di u + cu ≥ f ,
(7.5)
Version::31102011
then
u ≤ max u + + C
∂Ω
|b|
n, Di am (Ω) , sup
Ω λ
!
· sup
Ω
|f |
,
λ
(7.6)
where
λ = min aij ξi ξj
and
|ξ|=1
u + = max{u, 0}.
Note: If aij is not constant, then λ will almost always be non-constant on Ω as well.
Consequences:
• If f = 0, then u ≤ max∂Ω u + .
• If f = c = 0, then u ≤ max∂Ω u + .
• If Lu ≤ f one gains a lower bound
u ≥ max u − − C sup
∂Ω
Ω
|f |
,
λ
where u − = min{u, 0}.
• If we have the PDE Lu = f , then one has both an upper and lower bound:
|u| ≤ max |u| + C sup
∂Ω
Ω
|f |
.
λ
• Uniqueness of the Dirichlet Problem. Recall, that the problem is as follows
PDE: Lu = f in Ω
.
u = φ on ∂Ω
So if one has u, v ∈ C 2 (Ω) ∩ C 0 (Ω) with Lu = Lv in Ω, and u = v on ∂Ω, then u ≡ v
in Ω. The proof is the simple application of the weak maximum principle to w = u − v ,
which is obviously a solution of PDE as it’s linear.
Proof of Weak Max. Principle: As in previous proofs, we analyze a particular auxiliary function
to prove the result.
Aside: v = |x|2 will not work in this situation as Lv = 2 · T r A + 2bi xi + c|x|2 ; the bi xi may
be a large negative number for large values of |x|.
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
164
d
Ω− = {x ∈ Ω | u < 0}
ξ
O
x0
xi
Let us try
v = e α(x,ξ) ,
where ξ is some arbitrary vector. Also, we will consider Ω+ = {x ∈ Ω | u > 0} instead of Ω,
for the rest of the proof. Clearly, this does not affect the result as we are seeking to bound
u by max∂Ω u + . With that, we have on Ω+
L0 u := aij Dij u + bi Di u ≥ −cu + f
≥ f.
Now, we wish to specify which vector ξ is. For the following, please refer to the above figure.
We first choose our coordinates so that the origin corresponds to a boundary point of Ω such
that there exists a hyperplane through that point with Ω lying on one side of it. ξ is simply
taken to be the unit vector normal to this hyperplane. Next, we calculate
L0 v
= (α2 aij ξi ξj + αbi ξi )e α(x,ξ)
= (α2 aij ξi ξj + αbi ξi )e α(x,ξ)
Gregory T. von Nessi
≥ αλe α(x,ξ) ,
Version::31102011
Ω+ = {x ∈ Ω | u > 0}
7.2 Maximum Principles for Elliptic Equations of 2nd Order
provided we choose α ≥ supΩ
|b|
λ
165
+1 .
Aside: The calculation for the determination of α is as follows:
α2 aij ξi ξj + αbi ξi
≥ α2 λ + αbi ξi .
So, α2 λ + αbi ξi ≥ αλ implies that
Version::31102011
αλ ≥ λ − bi ξi
b i ξi
=⇒ α ≥ sup 1 −
λ
Ω
So, clearly we have the desired inequality if we choose α ≥ b̃ + 1, where b̃ := supΩ
|b|
λ .
Given our choice of ξ, it is clear that (x, ξ) ≥ 0 for all x ∈ Ω. Thus, one sees that
L0 v ≥ αλ. So, one can ascertain that
L0 (u + + kv ) ≥ f + αλk > 0 in Ω+ ,
provided k is chosen so that
k>
1
|f |
sup .
α Ω λ
Now, the proof proceeds as previous ones. Suppose w := u + + kv has a positive maximum
in Ω+ at a point y . In this situation we have
[Dij w (y )] ≤ 0 =⇒ aij Dij w (y ) ≤ 0, bi Di w (y ) = 0,
which implies Lw ≤ 0; a contradiction. So, this implies the second inequality of the following:
u+
e0
|f |
sup
α Ω λ
≤ w
≤ max w
∂Ω+
≤ max u + +
e αd
|f |
· sup
α
Ω λ
≤ max u + +
|f |
e αd
· sup ,
α
Ω λ
∂Ω+
∂Ω
again where α = b̃ + 1 and d is the breadth of Ω in the direction of ξ. From the above, we
finally get
u ≤ max u + +
∂Ω
e αd − 1
|f |
· sup .
α
Ω λ
|f |
Remark: Given the above, one needs |b|
λ and λ bounded for the weak maximum principle to
give a non-trivial result. Then one may apply the weak maximum principle to get uniqueness.
Theorem 7.2.7 (weak maximum principle, c = 0): Ω open, bounded, u ∈ C 2 (Ω) ∩ C(Ω).
L uniformly elliptic (for divergence form) Lu ≤ 0. Then
max u = max u
Ω
∂Ω
Gregory T. von Nessi
166
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
Analogously, if Lu ≥ 0, then
min u = min u
∂Ω
Ω
Proof:
(1) Lu < 0 and u has a maximum at x0 ∈ Ω. Du(x0 ) = 0 and
2
∂ u
(x0 )
= D2 u(x0 )
∂xi ∂xj
i,j
Fact from linear algebra: A, B symmetric and positive definite, then A : B = tr (AB) ≥
0.
Q orthogonal, QAQT = Λ,
λ = diag (λ1 , . . . , λn ), λi ≥ 0
tr (AB) = tr (QABQT )
= tr (QAQT QBQT )
= tr (ΛB̂)
B̂ = QBQT . B̂ symmetric and positive semidefinite =⇒ B̂ii ≥ 0.
tr (ΛB̂) =
n
X
i=1
tr (AB) =
λi B̂ii ≥ 0
n
X
(AB)ii
i=1
=
=
n X
n
X
Aij Bji
i=1 j=1
n
X
Aij Bij
i,j=1
= A:B
Lu = −
n
X
aij ∂ij u +
i,j=1
n
X
bi ∂i u
i=1
Lu(x0 ) = −(aij ) : D2 u(x0 ) + 0
(aij ) symmetric and positive definite. −D2 u(x0 ) symmetric and positive semidefinite.
Conclusion:
0 |{z}
> Lu(x0 ) ≥ 0
assump.
Contradiction, u can not have a maximum at x0 .
Gregory T. von Nessi
Version::31102011
is negative semidefinite.
7.2 Maximum Principles for Elliptic Equations of 2nd Order
167
(2) Lu(x0 ) = 0.
Consider u (x) = u(x) + e r x1 ,
> 0, r > 0, r = r (x) chosen later.
Lu = Lu + Le r x1 ≤ Le r x1
= (−a11 r 2 + b1 r )e r x1
Version::31102011
a11 > 0 (since
P
aij ξi ξj ≥ λ0 |ξ|2 , take ξ1 = e1 = (1, 0, . . . , 0), a11 ≥ λ0 > 0)
b = b(x) by assumption in C(Ω).
|b(x)| ≤ B = maxΩ |b|
Lu ≤ (−a11 r 2 + Br )e r x1
= r (−a11 r + B)e r x1 < 0
if r > 0 is big enough.
By the previous argument,
max u = max u
∂Ω
Ω
As → 0, u → u and in the limit
max u = max u
∂Ω
Ω
(In fact, r is an absolute constant, r = r (λ0 , B)).
Corollary 7.2.8 (Uniqueness): There exists at most one solution of the BVP Lu − f in Ω,
u = u0 in ∂Ω with u ∈ C 2 (Ω) ∩ C(Ω).
Proof: By contradiction with the maximum and minimum principle.
Theorem 7.2.9 (weak maximum principle, c ≥ 0): Suppose u ∈ C 2 (Ω) ∩ C(Ω), and that
c ≥ 0.
i.) If Lu ≤ 0, then
max u ≤ max u +
Ω
∂Ω
ii.) If Lu ≥ 0, then
min u ≥ − max u −
Ω
∂Ω
where u + = max(u, 0), where u − = − min(u, 0),
that is, u = u + − u − .
Proof: Proof for ii.) follows from i.) with −u.
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
168
i.) Let Ω+ = {Ω, u(x) > 0} and define Ku = Lu − cu. If Ω+ = ∅, then the assertion is
obviously true.
On Ω+ , Ku = Lu − cu ≤ 0 − cu ≤ 0, i.e. on Ω+ , u is a subsolution for K and
we may use the weak maximum principle for K.
If Ω+ 6= ∅, then use the weak maximum principle for K on Ω+
max u(x) ≤ max u(x) = max u +
+
∂Ω+
Ω
∂Ω
On ∂Ω+ ∩ Ω we have u = 0.
Ω+
Since u is not less that or equal to zero on Ω,
max u(x) = max u(x) = max u +
+
Ω
∂Ω+
where the second equality is due to (7.7).
∂Ω
Recall: Weak Maximum Principles Lu ≤ 0
c =0:
max u = max u
Ω
c ≥0:
max u ≤ max u +
Ω
Now, we consider strong maximum principles.
Gregory T. von Nessi
∂Ω
∂Ω
Version::31102011
Ω
(7.7)
7.2 Maximum Principles for Elliptic Equations of 2nd Order
7.2.2
169
Strong Maximum Principles
Theorem 7.2.10 ((Hopf’s maximum principle; Boundary point lemma)): Ω bounded,
connected with smooth boundary. u ∈ C 2 (Ω) ∩ C(Ω). Subsolution Lu ≤ 0 in Ω and suppose
that x0 ∈ ∂Ω such that u(x0 ) > u(x) for all x ∈ Ω. Moreover, assume that Ω satisfies an
interior ball condition at x0 , i.e. there exists a ball
Version::31102011
Ω
y
R
x0
B(y , R) ⊂ Ω such that B(y , R) ∩ Ω = {x0 }
i.) If c = 0, then
∂u
(x0 )
∂ν
strict > 0 follows from assumption.
ii.) If c ≥ 0, then the same assertion holds if u(x0 ) ≥ 0
Proof:
Idea: Construct a barrier v ≥ 0, v > 0 in Ω,
w = u(x) − u(x0 ) ∓ v (x) ≤ 0 in B.
u(x) − u(x0 ) ≤ v (x)
2
2
Construction of v (x): v (x) = e −αr − e −αR where r = |x − y |
v (x) > 0 in B
v (x) = 0 on ∂B
∂v
∂xi
Lv
——
2
= (−α)2(xi − yi ) · e −αr
X
X
= −
aij ∂ij v +
∂i v bi + cu
X
2
4α2 (xi − yi )(xj − yi )e −αr aij
= −
X
2
2
2
+
{(aii 2α − 2αbi (xi − yi )}e −αr + c(e −αr − e −αR )
Ellipticity:
——
X
aij ξi ξj ≥ λ0 |ξ|2 ⇐⇒ −
2
X
aij ξi ξj ≤ −λ0 |ξ|2
2
≤ −λ0 4α2 |x − y |2 e −αr + tr (aij )2αe −αr
2
2
2
+2α|~b| · |x − y |e −αr + c(e −αr − e −αR )
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
170
——
"
e
−αr 2
−e
−αR2
=e
−αr 2
−α(R2 −r 2 )
(1 − e
|
{z
≤1
——
)
}
#
2
≤ e −αr {−λ0 4α2 |x − y |2 + 2α tr (aij ) + 2α|b| · |x − y | + |c|}
If |x − y | bound from below, then Lv ≤ 0 if α is big enough.
∂Ω
A
B(y , R)
R
2
A = B(y , R) ∩ B x0 , R2
Then |x − y | ≥ R2 in A and Lv ≤ 0 for α big enough.
Define w (x) = u(x) − u(x0 ) + v (x)
Lw
= |{z}
Lu − Lu(x0 ) + |{z}
Lv
| {z }
≤0
c(x0 )
≤0
≤ −cu(x0 )
< 0
=⇒ w a subsolution.
w on ∂A: ∂A ∩ ∂B(y , R). w ≤ 0
∂A ∩ B(y , R): u(x0 ) > u(x) in Ω
=⇒ u(x) − u(x0 ) ≤ −δ, δ > 0 by compactness.
=⇒ w ≤ 0 if is small enough.
Lw
w
≤ 0 in B
≤ 0 on ∂B
Apply weak max principle to w .
max w ≤ max w + = 0
∂Ω
B
=⇒ w (x) ≤ 0 for all x ∈ A.
w (x) = u(x) − u(x0 ) + v (x)
w (x0 ) = 0
Gregory T. von Nessi
and
u(x) − u(x0 ) ≤ −v (x)
Version::31102011
x0
7.2 Maximum Principles for Elliptic Equations of 2nd Order
=⇒
171
∂u
∂v
(x0 ) ≥ − (x0 )
∂ν
∂ν
∂v
2
= −2α(xi − yi )e −αr
∂xi
∂u
2 x0 − x
(x0 ) ≥ −(−2α)(x0 − y )e −αR
∂ν
|x0 − y |
2
2 |x0 − y |
= 2 · αe −αR
|x0 − y |
| {z }
Version::31102011
=R>0
Theorem 7.2.11 (Strong maximum principle, c = 0): Ω open and connected (not necessarily bounded), c = 0
i.) If Lu ≤ 0, and u attains max at an interior point, then u ≡ constant.
ii.) Lu ≥ 0, u has a minimum at an interior point, the u is constant.
Proof:
M = max u,
Ω
V
u 6= M,
= {x ∈ Ω | u(x) < M} =
6 ∅
(contradiction assump.)
Ω
x0
ν
x
V
Choose x0 ∈ V such that dist(x0 , ∂V ) <dist(x, ∂Ω)
=⇒ B(x0 , |x − x0 |) ⊂ V . We may assume that B ∩ ∂V = {x}
=⇒ Satisfy assumptions in the boundary point lemma.
∂u
=⇒ ∂ν
(x) > 0
Contradiction, since
u.
∂u
∂ν (x)
= Du(x) · ν = 0 since u(x) = M, i.e. x is a maximum point of
Theorem 7.2.12 (Strong maximum principle, c ≥ 0): Ω open and connected (not necessarily bounded). u ∈ C 2 (Ω), c ≥ 0.
i.) Lu ≤ 0 in Ω and u attains a non-negative maximum, then u = constant in Ω
ii.) Lu ≥ 0 u has non-positive maximum then u ≡ constant in Ω.
Proof: Hopf’s Lemma for non-negative maximum point at the boundary.
Example: c = 1, −u 00 + u = 0. u = sinh, cosh
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
172
7.3
Maximum Principles for Parabolic Equations
Definition 7.3.1: L(x, t, D) = ut + Au,
Au = −
n
X
aij (x)∂ij u +
i,j=1
n
X
bi ∂i u + cu,
i=1
is said to be parabolic. A is (uniformly) elliptic
i,j=1
aij ξi ξj ≥ λ0 |ξ|2
∀ξ ∈ Rn , λ0 > 0.
Remark: Solutions of elliptic equations are stationary states of parabolic equations =⇒
Same assumptions concerning c.
Ω open, bounded, ΩT = Ω × (0, T ]
ΓT = ∂ΩT = Ω × {t = 0} ∪ ∂Ω × [0, T )
ΩT
ΓT
Ω
Gregory T. von Nessi
Version::31102011
n
X
7.3 Maximum Principles for Parabolic Equations
7.3.1
173
Weak Maximum Principles
Theorem 7.3.2 (weak maximum principle, c = 0): u ∈ C12 (ΩT ) ∩ C(Ω), c = 0 in ΩT
i.) Lu = ut + Au ≤ 0 in ΩT , then
max u = max u
ΓT
ΩT
ii.) Lu ≥ 0,
min u = min u
ΓT
ΩT
Version::31102011
Proof:
i.) Suppose Lu < 0 and that the max is attained at x0 , t0 ) in ΩT . 0 > Lu = ut + Au =
0 + (Au ≥ 0) ≥ 0
since Au ≥ 0 as in the elliptic case. contradiction.
If the max is attained at (x0 , T ), 0 > Lu = ut + Au ≥ 0 since ut (x0 , T ) ≥ 0 if u is
max at (x0 , T )
If Lu ≤ 0, then take u = u − t
Lu = (∂t + A)(u − t) = Lu − ≤ − < 0
u satisfies Lu < 0, and then
max u = max u ,
ΩT
ΓT
∀ > 0, and the assertion follows as & 0.
Theorem 7.3.3 (weak maximum principle, c ≥ 0): u ∈ C12 (ΩT ) ∩ C(ΩT ), c ≥ 0 in ΩT .
u = u + − u − , where u + = max(u, 0)
u − = − min(u, 0)
i.) If Lu = ut + Au ≤ 0,
max u ≤ max u +
ΩT
ii.) Lu ≥ 0, then
ΓT
min u ≥ − max u − .
ΩT
ΓT
Proof: Suppose first Lu < 0 and that u has a positive maximum at (x0 , t0 ) in ΩT .
0 > Lu = (∂t + A)u
≥ c(x0 , t0 )u(x0 , t0 ) > 0
Similar for t0 = T
General case: u = u − t
contradiction
Lu = (∂t + A)(u − t)
≤ 0 − − c(x, t) t
| {z }
≤ − < 0
≥0
Previous sept applies. If u has a positive max, then u has a positive max in ΩT for small
enough.
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
174
7.3.2
Strong Maximum Principles
Evans: Harnack inequality for parabolic equations
Protter, Weinberger: Maximum principles in differential equations, some form of Hopf’s
Lemma.
Theorem 7.3.4 (): Ω bounded, open, smooth boundary and u ∈ C12 (ΩT ) ∩ C(ΩT ) with
Lu = ut + Au ≤ 0. Suppose that either
i.) c = 0
ii.) c ≥ 0,
ΩT
The the strong maximum principle holds.
Idea of Proof: Ω = (α, β). n = 1, ut − auxx + bux + cu = L, a = a(x, t) ≥ λ0 > 0
Step 1: B = B (, τ ) open ball in ΩT , u < M in B , u(x0 , t0 ) = M, (x0 , t0 ) ∈ ∂B
=⇒ x0 = ξ
T
x0
(x0 , t0 )
τ
x0
α
ξ
β
The proof (by contradiction) uses the weak max principle for w = u − M + ηv , where
2
2
v (x, t) = e −Ar − e −A , A > 0. Compute
2
Lv = e −Ar {−α4A2 (x − ξ)2 + . . .}
∴ r 2 = (x − ξ)2 + (t − τ )2
If the max is attained at a point (x, t0 ) with x0 6= ξ, choose δ small enough such that
|x − ξ| > c(δ) > 0 ∀x ∈ Bδ (x0 )
w subsolution, w < 0 on ∂Bδ , w (x0 , t0 ) = 0. Contradiction with weak max principle.
Step 2: Suppose u(x0 , t0 ) < M with t0 ∈ (0, T ). Then u(x, t0 ) < M for all x ∈ (α, β).
Idea: Suppose ∃ x1 ∈ (α, β0 ) with u(x, t0 ) = M and u(x, t0 ) < M for x ∈ (x0 , x1 )
d(x) =distance to point (x, t0 )
from the set {u = M} d(x1 ) = 0, d(x0 ) > 0
p
d 2 (x0 ) + h2
p
d(x0 ) ≤
d 2 (x0 + h) + h2
d(x + h) ≤
Gregory T. von Nessi
Version::31102011
max u = M ≥ 0
7.3 Maximum Principles for Parabolic Equations
175
(x0 , t0 )
Version::31102011
α
u(x1 , t0 ) = M
x0
x1
x
β
u(x0 , t0 + d(x0 )) = M
h
u(x0 , t0 − d(x0 )) = M
x
x0
x1
Suppose d is differentiable.
Rewrite these inequalities as
p
p
d 2 (x) − h2 ≤ d(x + h) ≤ d 2 (x) + h2
p
d 2 (x) − h2 − d(x)
h
≤
≤
d(x + h) − d(x)
h
p
2
d (x) + h2 − d(x)
h
As h → 0, d 0 (x) = 0 =⇒ d = constant
Contradiction, since d(x0 ) > 0, d(x1 ) = 0.
Step 3: Suppose u(x, t) < M for α < x < β, 0 < t < t1 , t1 ≤ T , then u(x, t1 ) < M
for x ∈ (α, β)
Proof: Weak maximum principle for w (x, t) = u − M + ηv , where v (x, t) = exp(−(x − x1 )2 −
A(t − t1 )) − 1
To conclude that ut > 0 Lu ≤ 0 subsolution.
Lu =
ut − auxx + bux +
|{z}
|{z} |{z}
>0
> 0.
≥0
=0
cu
|{z}
c(x, t) · M
| {z }
at (xi , t1 )
≥0
contradiction.
Gregory T. von Nessi
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
176
T
t1
u<M
α
β
(x − x1 )2 − A(t − t1 ) = 0
M
(x0 , t0 )
t1
L
Conclusion. Suppose that u(x0 , t0 ) = M, (x0 , t0 ) ∈ ΩT
To show if u(x0 , t0 ) = M = max, then u ≡ M on Ωt0
We can not conclude that u ≡ M on ΩT .
Consider the line L(x0 ) − {(x0 , t), 0 ≤ t < t0 }
Suppose there is a t1 ≤ 0 such that u(x0 , t) < M for t < t1 , u(x0 , t1 ) = M
=⇒ u(x, t) < M for all (x, t) in (α, β) × (0, t1 )
=⇒ Step 3: u(x0 , t) < 0. Contradiction.
Modify the function v in n dimensions; the same argument works.
Gregory T. von Nessi
Version::31102011
(x1 , t1 )
7.4 Exercises
7.4
177
Exercises
7.1: [Qualifying exam 08/90] Consider the hyperbolic equation
in R2 .
a) Find the characteristic curves through the point 0, 21 .
Version::31102011
Lu = uy y − e 2x uxx = 0
b) Suppose that u is a smooth of the initial value problem

 uy y − e 2x uxx = 0 in R2 ,
u(x, 0) = f (x) for x ∈ R,

uy (x, 0) = g(x) for x ∈ R.
Show that the characteristic curves bound
the domain of dependence for u at the point
0, 21 . To do this, prove that u 0, 12 = 0 provided f and g are zero on the intersection
of the domain of dependence with the x-axis. This can be done by defining the energy
1
e(y ) =
2
Z
h(y )
g(y )
(ux2 + e −2x uy2 ) dx
where (g(y ), h(y )) is the intersection of the domain of dependence with the parallel
to the x-axis through the point (0, y ). Then show that e 0 (y ) ≤ 0. (See also proof for
the domain of dependence for the wave equation with energy methods.)
7.2: [Evans 7.5 #6] Suppose that g is smooth and that u is a smooth solution of

 ut − ∆u + cu = 0 in Ω × (0, ∞),
u = 0 on ∂Ω × [0, ∞),

u = g on Ω × {t = 0},
and that the function c satisfies c ≥ γ > 0. Prove the decay rate
|u(x, t)| ≤ Ce −γt
for (x, t) ∈ Ω × (0, ∞).
Hint: The solution of an ODE can also be viewed as a solution of a PDE!
7.3: [Evans 7.5 #7] Suppose that g is smooth and that u is a smooth solution of

 ut − ∆u + cu = 0 in Ω × (0, ∞),
u = 0 on ∂Ω × [0, ∞),

u = g on Ω × {t = 0}.
Moreover, assume that c is bounded, but not necessarily nonnegative. Show that u ≥ 0.
Hint: What PDE does v (x, t) = e λt u(x, t) solve?
7.4: [Qualifying exam 08/01] Let Ω be a bounded domain in Rn with a smooth boundary. Use
a) a sharp version of the Maximum principle
b) and an energy method
Gregory T. von Nessi
178
Chapter 7: Elliptic, Parabolic and Hyperbolic Equations
to show that the only smooth solutions of the boundary-value problem
∆u = 0 in Ω,
∂n u = 0 on ∂Ω
(where ∂n is the derivative in the outward normal direction) have the form u = constant.
Here M is a positive number and ur is the radial derivative on ∂B. Prove that there is a
number T depending only on M such that u = 0 on B × (T, ∞).
Hint: Let v be the solution of the initial value problem
dv
+ v 1/2 = 0,
dt
and consider the function w = v − u.
Gregory T. von Nessi
v (0) = M,
Version::31102011
7.5: [Qualifying exam 01/02] Let B be the unit ball in Rn and let u be a smooth function satisfying
ut − ∆u + u 1/2 = 0, 0 ≤ u ≤ M on B × (0, ∞),
ur = 0, on ∂B × (0, ∞).
Version::31102011
Part II
Functional Analytic
Methods in PDE
Chapter 8: Facts from Functional Analysis and Measure Theory
180
Facts from Functional Analysis and Measure
Theory
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
8.1
Introduction
8.2
Banach Spaces and Linear Operators
8.3
Fredholm Alternative
8.4
Spectral Theorem for Compact Operators
8.5
Dual Spaces and Adjoint Operators
8.6
Hilbert Spaces
8.7
Weak Convergence and Compactness; the Banach-Alaoglu Theorem
8.8
Key Words from Measure Theory
8.9
Exercises
8.1
182
183
187
191
195
196
201
204
209
Introduction
This chapter represents what, I believe, to be the most pedagogically challenging material
in this book. In order to break into modern theory of PDE, a certain amount of functional
analysis and measure theory must be understood; and that is what the material in this chapter is meant to convey. Even though I’ve made every effort to keep the following grounded
in PDE via remarks and examples, it is understandable how the reader might regard some
parts of the chapter as unmotivated. Unfortunately, I believe this to be inevitable as there
is so much background material that needs to be “water over the back” to have a decent
understanding of modern methods in PDE. In the next chapter we will start using the tools
learned here to get some actual results in the context of linear elliptic and parabolic equations.
Gregory T. von Nessi
Version::31102011
8
8.2 Banach Spaces and Linear Operators
181
Version::31102011
In regard to organization, I have tried to order the material in this chapter as sequentially
as possible, i.e. material conveyed can be understood based off of material in the previous
section. In addition important theorems and their consequence are presented throughout
as soon as possible. I decided to focus on these two principles in hopes that the ordering
or material will directly indicate to what generality theorems and ideas can be applied. For
example, the validity of a given theorem in a Banach space as opposed to Hilbert spaces will
hopefully be made clear by the order in which the material is presented. Lastly, for the sake
of completeness, many proofs have been included in this chapter which can (and should) be
skipped on the first reading of this book, as many of these dive into rather pedantic points
of real and functional analysis.
8.2
Banach Spaces and Linear Operators
Philosophically, one of the major goals of modern PDE is to ascertain properties of solutions when equations are not able to be solved analytically. These properties precipitate out
of classifying solutions by seeing whether or not they belong to certain abstract spaces of
functions. These abstractions of vector spaces are called Banach Spaces (defined below),
and they have many properties and characteristics that make them particularly useful for the
study of PDE.
To begin, take X to be a topological vector space (TVS) over R (C). Recalling that a
TVS is a linear space, this means that X satisfies the typical vector space axioms pertaining
to scalars in R (C) and has a topology corresponding to the continuity of vector addition and
scalar multiplication. If one is uncertain of this concept refer to [Rud91] for more details.
Definition 8.2.1: A norm is any mapping X → R, denoted ||x|| for x ∈ X, having the
following properties:
i.) ||x|| ≥ 0, ∀x ∈ X;
ii.) ||αx|| = |α| · ||x||, ∀α ∈ R, x ∈ X;
iii.) ||x + y || ≤ ||x|| + ||y ||, ∀x, y ∈ X.
Remark: iii.) is usually referred to as the triangle inequality .
Definition 8.2.2: A TVS, X, coupled with a given norm defined on X, namely (X, || · ||), is
a normed linear space (NLS).
Definition 8.2.3: Given an NLS (X, || · ||), we define the following
• A metric is a mapping ρ : X × X → R defined by ρ(x, y ) = ||x − y ||.
• An open ball, denoted B(x, r ), is a subset of X characterized by B(x, r ) = {y ∈ X :
ρ(x, y ) < r }.
• If U ⊂ X is such that ∀x ∈ U, ∃r > 0 with B(x, r ) ⊂ U, then U is open in X.
Definition 8.2.4: A NLS X is said to be complete, if every Cauchy sequence in X has a limit
in X. Moreover,
• a complete NLS is called a Banach Space (BS);
Gregory T. von Nessi
182
Chapter 8: Facts from Functional Analysis and Measure Theory
• A BS is separable if it contains a countable dense subset.
One can refer to either [Roy88] or [Rud91], if they are having a hard time understanding
any of the above definitions. To solidify these definitions, let us consider a few examples.
ii.) A non-example of a BS is X = {f : [−1, 1] → R, f polynomial}. Indeed X is not
complete as any function has an approximating sequence of Taylor polynomials that
converges (in the sense of Cauchy) in all norms, hence X is not complete. One may try
to argue that this statement is false considering the discrete map (||f || = 1 if f ≡ 0 and
||f || = 0 otherwise), but the discrete map is not a valid norm as it violates Definition
(8.2.1.ii).
iii.) Another classic example of a BS from real analysis is the space l p = {{ã} = (a1 , a2 , a3 , . . .)
∞} 1 ≤ p < ∞ with the norm
||ã||l p =
∞
X
i=1
|ai |p .
Again, showing this space is complete is another good exercise in real analysis.
Moving on, we now start to consider certain types of mapping on our newly defined
spaces.
Definition 8.2.5: Given X and Y are both NLS, T : X → Y is called a contraction if ∃θ < 1
such that
||T x − T y ||Y ≤ θ||x − y ||X ,
∀x, y ∈ X.
A special case of the above definition is when X and Y are the same NLS. If we go further
and consider T : X → X, where X is a BS, then we have the following extremely powerful
theorem.
Theorem 8.2.6 (): [Banach’s fixed point theorem] If X is a BS and T a contraction on X,
then T has a unique fixed point in X; i.e. ∃!x ∈ X such that T x = x. Moreover, for any
y ∈ X, li mi→∞ T i (y ) = x.
Gregory T. von Nessi
Version::31102011
Example 4:
i.) A classic example of a BS is C 0 ([−1, 1]) = {f : [−1, 1] → R, continuous}
with the norm ||f ||C 0 [−1,1] := maxx∈[−1,1] |f (x)|. (Showing completeness of this space
is a good exercise in real analysis. See [Roy88] if you need help.) Of course, C 0 under
this norm is complete over any compact subset of Rn .
P∞
p
i=1 |ai |
<
8.2 Banach Spaces and Linear Operators
183
Proof : Consider a given point x0 ∈ X and define the sequence xi = T i (x0 ). Then, with
δ := ||x0 − T x0 ||X = ||x0 − x1 ||X , we have
||x0 − xk ||X
≤
≤
k
X
i=1
∞
X
i=1
Version::31102011
= δ·
=
||xi−1 − xi ||X
θi−1 ||x0 − x1 ||X
∞
X
θi−1
i=1
δ
.
1−θ
Also, for m ≥ n,
||xn − xm ||X
≤ θ · ||xn−1 − xm−1 ||X ≤ θ2 · ||xn−2 − xm−2 ||X
θn
≤ · · · ≤ θn · ||x0 − xm−n ||X ≤ δ ·
1−θ
which tends to 0 as n → ∞, since θ < 1. Thus, {xi } is Cauchy. Put limi→∞ (xi ) =: x ∈ X,
then we see that T x = limi→∞ T xi = limi→∞ xi+1 = x.
To finish the proof at hand, we need show uniqueness. Assume that w is another fixed
point of T , then
||x − w ||X = ||T x − T w ||X ≤ θ||x − w ||X .
Since θ < 1, this implies that ||x − w ||X = 0 and hence x = w .
An immediate application of the above theorem is the proof of the Picard-Lindelof Theorem for ODE, which asserts the existence of first order ODE under a Lipschitz condition
on any inhomogeneity. See [TP85] for the theorem and proof.
Continuing our exposition on properties of BS operators, we make the following definition.
Definition 8.2.7: Take X and Y as NLS. T : X → Y is said to be bounded if
||T || =
||T x||Y
< ∞.
x∈X,x6=0 ||x||X
sup
(8.1)
From (8.1), we readily verify that the space of all bounded linear mapping from X to Y ,
denoted L(X, Y ), forms a vector space. Also, concerning the boundedness of BS operators
we have the following theorem from functional analysis.
Theorem 8.2.8 (): Take X and Y to be BS. If X is infinite dimensional, then a map T is
linear and bounded if and only if it is continuous. On the other hand, if X is finite dimensional,
then T is continuous if and only if T is linear.
Proof : See [Rud91].
We now are in a position to state another very powerful idea that is useful for proving
existence of solutions for linear PDE.
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
184
Theorem 8.2.9 (Method of Continuity): If X is a BS, Y a NLS, with L0 and L1 bounded
linear operators mapping X → Y such that exists a C > 0 satisfying the a priori estimate
||x||X ≤ C||Lt x||Y
∀t ∈ [0, 1],
(8.2)
where Lt := (1 − t)L0 + tL1 and t ∈ [0, 1], then L1 maps X onto Y ⇐⇒ L0 map X onto
Y.
Ls (x) = Lt x + Ls x − Lt x
= y + (1 − s)L0 + sL1 − [(1 − t)L0 + tL1 ](x)
= y + (t − s)[L0 − L1 ](x).
Upon rewriting this equality, we see
x = L−1
y + (t − s)L−1
(L0 − L1 )x .
{z s
}
|s
(8.3)
Tx
So, now we claim that ∃x ∈ X such that (8.3) is satisfied. This is equivalent to showing T
is a contraction mapping via Theorem 8.2.6. Taking x1 , x2 ∈ X, we have
||T x1 − T x2 ||Y
= ||(t − s)L−1
s (L0 − L1 )(x1 − x2 )||Y
≤ |t − s| · ||L−1
s || · ||L0 − L1 || ·||x1 − x2 ||X .
| {z
} | {z }
≤C
Thus, if we consider t such that
|t − s| <
≤ 2C
1
,
2C 2
we get that T is a contraction; and hence, Lt is onto. Now, we can cover [0, 1] with open
intervals of length C −2 to conclude that our original assumption (there existing an s ∈ [0, 1]
where Ls is onto) leads to Lt being onto ∀t ∈ [0, 1].
The method of continuity essentially falls under the general category of existence theorems.
While not particular good for explicit calculations, this theorem is very handy for proving,
well, existence. The following is a nice exercise that demonstrates this.
Exercise: Prove the existence of solution of the equation −(φu 0 )0 + u = f .
Hints: Consider the mappings
L0 (u) = −u 00 ,
L0 (u) = f
0 0
L1 (u) = −(φ · u ) + u,
L1 (u) = f ,
and note that L0 is onto (just integrate the equation for L0 ).
Gregory T. von Nessi
Version::31102011
Proof : Suppose that there exists an s ∈ [0, 1] such that Ls is onto. (8.2) indicates that
−1
Ls is also one-to-one; and hence, L−1
s exists with ||Ls || ≤ C, where C is the same constant
in (8.2). Now fix an arbitrary y ∈ Y and t ∈ [0, 1]. We now seek to prove that exists x ∈ X
such that Lt (x) = y , i.e. Lt is onto Y . It is clear that Lt (x) = y if and only if
8.3 Fredholm Alternative
8.3
185
Fredholm Alternative
In this section, one of the most important tools in the study of PDE will be presented, namely
the Fredholm Alternative. To begin the exposition, let us recall a fact from linear algebra.
Fact: If L : Rn → Rn is linear, the mapping L is onto and one-to-one (and hence, invertible) if the kernel is trivial i.e. Ker(L) = {x : Lx = 0} = {0}.
Version::31102011
It turns out in general that this is not true, unless you have a compact perturbation of
the identity map. The Fredholm Alternative can be viewed as a BS analogue of the above
fact; but before going on to that theorem, let us define what a compact operator is.
Definition 8.3.1: Consider X and Y NLS with a bounded linear map T : X → Y . T is said
to be compact if T maps bounded sets in X into relatively compact sets in Y .
Sequentially speaking, the above statement is equivalent to saying that T is compact if
it maps bounded sequences in X into sequences in Y that contain converging subsequences.
Not only is the next lemma (due to Riesz) used in the proof of the Fredholm Alternative,
but it is also a very important property of NLS in its own right.
Lemma 8.3.2 (Almost orthogonal element): If X is a NLS, Y a closed subspace of X and
Y 6= X. Then ∀θ < 1, ∃xθ ∈ X satisfying ||xθ || = 1, such that dist(xθ , Y ) ≥ θ.
Proof: First, we fix arbitary x ∈ X \ Y . Since Y is closed,
dist(x, Y ) = inf ||x − y || = d > 0.
x∈Y
Thus, ∃yθ ∈ Y , such that
||x − yθ || ≤
d
.
θ
Now, we define
xθ =
x − yθ
;
||x − yθ ||
it is clear that ||xθ || = 1. Moreover, for any y ∈ Y ,
||xθ − y || =
≥
||x − [yθ − ||yθ − x||y ]||
||yθ − x||
d
≥ θ,
||yθ − x||
where the second inequality is based on the fact that yθ − ||yθ − x||y ∈ Y since Y is a proper
subspace of X.
Corollary 8.3.3 (): If X is an infinite dimensional NLS, then B1 (0) is not precompact in X.
Gregory T. von Nessi
186
Chapter 8: Facts from Functional Analysis and Measure Theory
Proof: The statement of the corollary is equivalent to saying there exists bounded sequences in B1 (0) that have no converging subsequences. To see this, we argue directly by
making the following construction. For a given infinite dimensional BS X, pick an arbitrary
element x0 ∈ B1 (0) ⊂ X. Since Span(x0 ) is a proper subspace of X, we know exists by
lemma 8.3.2 x1 ∈ X such that dist(x1 , Span(x0 )) ≥ 21 with ||x1 || = 1 =⇒ x1 ∈ B1 (0). By
the same argument, we pick x2 ∈ X such that dist(x2 , Span(x0 , x1 )) ≥ 12 with x2 ∈ B1 (0).
Repeating iteratively, we now have constructed a sequence {xi }, for which there is clearly
no converging subsequence in X (the distance between any two elements is greater than
1
2 ).
Now, we are finally in a position to present the objective of this section.
Theorem 8.3.4 (): [Fredholm Alternative] If X is a NLS and T : X → X is bounded, linear
and compact, then
i.) either the homogeneous equation has a non-trivial solution x − T x = 0, i.e. Ker(I − T )
is non-trivial or
ii.) for each y ∈ X, the equation x − T x = y has a unique solution x ∈ X. Moreover,
(I − T )−1 is bounded.
Proof 1 : [GT01] For clarity the proof will be presented in four steps.
Step 1: Claim: Defining S := I − T , we claim there exists a constant C such that
dist(x, Ker(S)) ≤ C||S(x)||
∀x ∈ X.
(8.4)
Proof of Claim: We will argue indirectly by contradiction. Suppose the claim is
false, then there exists a sequence {xn } ⊂ X satisfying ||S(xn )|| = 1 and dn :=
dist(xn , Ker(S)) → ∞. Upon extracting a subsequence we can insure that dn ≤ dn+1 .
Choosing a sequence {yn } ⊂ Ker(S) such that dn ≤ ||xn − yn || ≤ 2dn , we see that if
we define
zn :=
xn − yn
,
||xn − yn ||
then we have ||zn || = 1. Moreover,
||S(zn )|| =
=
≤
1
Can be omitted on first reading.
Gregory T. von Nessi
S(xn ) − S(yn )
||xn − yn ||
1−0
||xn − yn ||
1
.
dn
Version::31102011
The above corollary indicates that compactness will be a big consideration in PDE analysis: many of the arguments forthcoming are based on limit principles, which the above
corollary presents a major obstacle. Thus, ideas of compactness of operators (and function
space imbeddings; more later) play a major role in PDE theory, as compactness “doesn’t
come for free” just because we work in a BS.
8.3 Fredholm Alternative
187
Thus, the sequence {S(zn )} → 0 as n → ∞. Now acknowledging that T is compact,
by passing to a subsequence if necessary, we may assume that the sequence {T zn }
converges to an element w0 ∈ X. Since zn = (S + T )zn , we then also have
lim zn =
n→∞
lim (S + T )zn
n→∞
= 0 + w0 = w0 .
From this, we see that T (w0 ) = w0 , i.e. zn → w0 ∈ Ker(S). However this leads to a
contradiction as
Version::31102011
dist (zn , Ker(S)) =
inf
y ∈Ker(S)
||zn − y ||
= ||xn − yn ||−1
inf
y ∈Ker(S)
||xn − yn − ||xn − yn ||y ||||xn − yn ||
= ||xn − yn ||−1 dist (xn , Ker(S)) ≥
1
.
2
Step 2: We now wish to show that the range of S, R(S) := S(X), is closed in X. This is
equivalent to showing an arbitrary sequence in X, call it {xn }, is mapped to another
sequence with limit S(x) = y ∈ X, where x has to be shown to be in X.
Now consider the sequence {dn } defined by dn := dist(xn , Ker(S)). By (8.4) we
know that {dn } is bounded. Choosing yn ∈ Ker(S) such that dn ≤ ||xn − yn || ≤ 2dn ,
we see that the sequence defined by wn = xn − yn is also bounded. Thus, by the
compactness of T , T (wn ) converges to some z0 ∈ X, upon passing to subsequence if
necessary. We also have S(wn ) = S(xn ) − S(yn ) = S(xn ) → y by definition. These
last two observations, combined with the fact that wn = (S + T )wn , leads us to notice
that wn → y + z0 . Finally, we have that
lim S(wn ) =
n→∞
lim S(xn − yn )
n→∞
=⇒ S(y + z0 ) = y ,
where the limit on the RHS is justified by the continuity of S (see theorem (8.3.4)).
Since y + z0 ∈ X, we conclude that R(S) is indeed closed.
Step 3: We now will shot that if Ker(S) = {0} then R(S) = X, i.e. if case i.) of the theorem
does not hold then case ii.) is true.
To start we make the following. First we define the following series of spaces S(X)j :=
S j (X). Now, it is easy to see that
S(X) ⊂ X =⇒ S 2 (X) ⊂ S(X) =⇒ · · · =⇒ S n (X) ⊂ S n−1 (X).
This combined with the conclusion of Step 2, indicates that S(X)j constitute a set of
closed, non-expanding subspaces of X. Now, we make the following
Claim: There exists K such that for all i , j > K, S(X)i = S(X)j .
Proof of Claim: Suppose the claim is false, thus we can construct a sequence based on
the following rule that xn+1 ∈ S n+1 with dist(xn+1 , S(X)n ) ≥ 21 , ||xn || = 1 and x0 ∈ X
is chosen arbitrarily. So, let us now take n > m and consider
T xn − T xm = xn + (−xm + S(xm ) − S(xn ))
= xn + x,
Gregory T. von Nessi
188
Chapter 8: Facts from Functional Analysis and Measure Theory
where x ∈ S(X)m+1 . From this equation, we see that
||T xn − T xm || ≥
1
2
(8.5)
by construction of our sequence. (8.5) contradicts the fact that T is a compact operator; our claim is proved. As of yet, we still have not used the supposition that Ker(S) = {0}. Utilizing our
claim, take k > K so that S(X)k = S(X)k+1 . So, taking arbitrary y ∈ X, we see that
S k (y ) ∈ S(X)k = S(X)k+1 . Thus, ∃x ∈ S(X)k+1 with S k (y ) = S k+1 (x). This leads
to
but since Ker(S) = {0}, S −k (0) = 0. We now can conclude that S(x) = y , and since
y ∈ X was arbitrary we get the result: R(S) = X.
Step 4: We lastly seek to prove that if R(S) = X then Ker(S) = {0}.
Proof of Step 4: This time we define a non-decreasing sequence of closed subspaces
{Ker(S)j } by setting Ker(S)j = S −j (0). This is indeed a non-decreasing sequence as
0 ∈ Ker(S) = S −1 (0)
=⇒
..
.
S −1 (0) ⊂ S −2 (0)
..
.
=⇒ S −k ⊂ S −(k+1) (0).
Also, Ker(S)j are closed due to the continuity of S. By employing an analogous
argument based on lemma 8.3.2 to that used in Step 3., we obtain that ∃l such that
Ker(S)j = Ker(S)l for all j ≥ l. Now, since R(S) = X =⇒ R(S l ) = X, ∃x ∈ X
such that for any arbitrary element y ∈ Ker(S)l , y = S l x is satisfied. Consequently
S 2l (x) = S l (y ) = 0, which implies x ∈ Ker(S)2l = Ker(S)l whence
y = S l x = 0.
The result follows from seeing that Ker(S) ⊂ Ker(S)l = {0}, which is indicated by
the preceding equation and the non-decreasing nature of Ker(S)l .
The boundedness of the operator S −1 = (I − T )−1 in case ii.) follows from (8.4) with
Ker(S) = {0}. This completes the entire proof, whew!
To conclude this section, we will consider a few examples of compact and non-compact
operators.
Example 5: If T : X → Y is continuous and Y finite dimensional, then T is compact. It is
left to the reader to verify this, as it is a straight-forward calculation in real analysis.
Example 6: If X = C 1 ([0, 1]), Y = C 0 ([0, 1]), and T =
If we look at
Gregory T. von Nessi
d
dx ,
||T u||C 0 ([0,1]) ≤ C||u||C 0 ([0,1])
then T not a compact operator.
Version::31102011
S k (S(x) − y ) = 0 =⇒ S(x) − y = S −k (0);
8.4 Spectral Theorem for Compact Operators
189
√
We see that this can’t hold ∀u ∈ X and ∀x ∈ [0, 1]. Just look at u = x. Thus, T is not
even bounded; and hence, it is not compact. This indicates why taking derivatives is a “bad”
operation.
Rx
Example 7: Take X = C 0 ([0, 1]), Y = C 0 ([0, 1]), and T u = 0 u(s) ds. Now, take uj to
be a bounded sequence in C 0 ([0, 1]) (say it’s bounded by a constant C). Thus, we have that
vj = T uj is also bounded in C 0 ; in fact, it is easy to see that vj ≤ C. Now, we use the
following to show T compact:
Version::31102011
Arzela-Ascoli Theorem: If uj is uniformly bounded and equicontinuous in a NLS X, then
exists a converging subsequence in X. (We will come back to this theorem later.)
So, we look at
|vj (x) − vj (y )| =
Z
x
y
vj0 (s) ds ≤ |x − y | · sup |uj | ≤ C|x − y |
Thus, we see that vj is equicontinuous. By applying Arzela-Ascoli, we find that vj does indeed
contain a converging subsequence in C 0 ([0, 1]). We now conclude that T is compact.
8.4
Spectral Theorem for Compact Operators
In this section we will start investigations into the spectra of compact operators. First, for
completness, we recall the following elementary definitions.
Definition 8.4.1: Take X to be a NLS and T ∈ L(X, X). The resolvent set, ρ(T ), is the
set of all λ ∈ C such that (λI − T ) is one-to-one and onto. In addition, Rλ = (λI − T )−1 is
the resolvent operator assocaited with T .
From the Fredholm Alternative in the previous section, we see that Rλ is a bounded linear
map for λ ∈ ρ(T ).
Definition 8.4.2: Take X to be a NLS and T ∈ L(X, X). λ ∈ C is called an eigenvalue
of T if there exists a x 6= 0 such that T (xλ ) = λxλ . The x in this equation is called the
eigenvector associated with λ. The set of eigenvalues of T is called the spectrum, denoted
σ(T ). Also, we define
• Nλ = {x : T x = λx} is the eigenspace.
• mλ = dim(Nλ ) is the multiplicity of λ.
Warning: It is not necessarily true that [ρ(T )]C = σ(T ).
The next lemma is a simple result from real analysis that will be important in the proof
of the main theorem of this section.
Lemma 8.4.3 (): If a set U ⊂ Rn (Cn ) is bounded and has only a finite number of accumulation points, then U is at most countable.
Gregory T. von Nessi
190
Chapter 8: Facts from Functional Analysis and Measure Theory
Qr
R
O
r
x0
are elements of U
Proof 1 : Without lose of generality we consider U ⊂ C with only one accumulation point
x0 (the argument doesn’t change for the other cases).
Since U is bounded, there exists 0 < R < ∞ such that U ⊂ BR (0). Since BR (0) is open,
there exists r > 0 such that Br (x0 ) ⊂ BR (0); define Qr := BR (0) \ Br (x0 ). To proceed, we
make the following
Claim: Qr contains only a finite number of points of U.
Suppose not, then Qr ∩ U is at least countably infinite. Since Qr is closed and bounded, it
is compact (Heine-Borel). Thus, we cover Qr with a finite number of balls with radius 1. At
least one of these balls, call it B1∗ , is such that B1∗ ∩ U is at least countably infinite. Consequently, we define V1 := B1∗ ∩ Qr . In a similar way, we cover V1 with a finite number of balls
∗
∗ ∩ U being at least countably infinite.
with radius 21 , where B1/2
is one of these balls with B1/2
∗ ∩ V . We thus iterativly construct a series of closed
In analogy with V1 , we define V2 := B1/2
1
1
∗ ∩V
sets Vn := B1/n
n−1 . By construction, we have Vn ⊂ Vm for m < n with diam(Vn ) ≤ n .
We now see that ∃x ∈ U such that
∞
\
x∈
Vn ,
n=1
by the compactness of Vn ; but this implies that x is an accumulation point of V , since every Vn ∩ U is at least countably infinite. Hence we have a contradiction, which proves the
claim.
Now, it is clear that
"
BR (0) = x0 ∪
n=1
Thus,
U ⊂ x0 ∪
1
Can be omitted on first reading.
Gregory T. von Nessi
∞
[
∞
[
#
Qr /n .
(U ∩ Qr /n ),
n=1
(8.6)
Version::31102011
Figure 8.1: Layout for lemma 8.4.3, note that U is only approximated and x0 is not necessarily
contained in U.
8.4 Spectral Theorem for Compact Operators
191
since U ⊂ BR (0). Since the RHS of (8.6) is countable, so is U.
We now are in a position to state the main result of the section.
Version::31102011
Theorem 8.4.4 (Spectral Theorem for Compact Operators): If X is a NLS with T ∈
L(X, X) compact, then σ(T ) is at most countably infinite with λ = 0 as the only possible
accumulation point. Moreover, the multiplicity of each eigenvalue is finite.
Proof: It is easily verified that eigenvectors associated with different eigenvalues are
linearly independent. With that in mind, we can thus extract a series of {λj } (not necessarily
distinct) from σ(T ) with associated eigenvectors xj such that T xj = λj xj and x1 , . . . , xn is
linearly independent ∀n. Now we define Mn := Span{x1 , . . . , xn }. Since T x = λx, we see
that
||T || ≥
||T x||
= λ;
||x||
hence σ(T ) is bounded.
To prove that λ = 0 is the only possible accumulation point, we will argue indirectly by assuming that λ 6= 0 is an accumulation point of {λj }. To do this, we first utilize lemma (8.4.3)
to construct a sequence {yn } ⊂ X such that yn ∈ Mn , ||yn || = 1 with dist(yn , Mn−1 ) ≥ 12 .
We now wish to prove
1
1
T yn −
T ym
λn
λm
≥
1
2
(8.7)
for n > m. If this is indeed true and the accumulation point of {λj } isn’t 0, then we get a
contradiction as ||yn || is bounded, which implies T yn contains a convergent subsequence as
T is compact (an impossibility in the face of (8.7)). So, let use prove (8.7).
Given yn ∈ Mn = Span{x1 , . . . , xn }, we can write
yn =
n
X
βj xj
j=1
for some βj ∈ C. Now, we calculate
n X
1
1
yn −
βj xj −
T yn =
βj T xj
λn
λn
j=1
n X
βj λj
=
βj xj −
xj
λn
j=1
=
n−1 X
j=1
Thus,
1
λm T ym
βj λj
βj xj −
xj
λn
∈ Mn−1 .
∈ Mn−1 since m < n. It is convenient to now define
n−1 X
1
β j λj
βj xj −
xj −
T ym =: z ∈ Mn−1 .
λn
λm
j=1
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
192
Finally, looking at (8.7), we see
1
1
T yn −
T ym
λn
λm
= ||yn − z||
≥
inf
ω∈Mn−1
||yn − ω||
= dist(yn , Mn−1 )
1
≥
.
2
8.5
Dual Spaces and Adjoint Operators
In addition to the rich structure that we have already seen in BS, it turns out that bounded
linear mappings on BS also have some very nice properties that will be very useful in our
studies of linear PDE. First, we make a formal definition.
Definition 8.5.1: If X is a NLS, we define the dual space, X ∗ , as the set of all bounded
linear mappings taking X → R.
The following theorem is easily verified.
Theorem 8.5.2 (): If X is a NLS, X ∗ is a BS with
|f (x)|
.
x6=0 ||x||X
||f ||X ∗ = sup
Notation: If x ∈ X and f ∈ X ∗ , then by definition f (x) ∈ R. Typically, since X ∗ won’t
always be a space of functions, we write hf , xi := f (x) ∈ R.
Since X ∗ is a BS in its own right, we state the following
Definition 8.5.3: If X is a NLS, we define the bidual space of X as X ∗∗ := (X ∗ )∗ . Moreover,
there is a canonical imbedding j : X → X ∗∗ defined by hj(x), f i = hf , xi for all x ∈ X and
f ∈ X ∗ . If the canonical imbedding is onto, then X is a reflexive BS.
Linear operators between BS also have an analogy in dual spaces.
Definition 8.5.4: If X, Y are BS and T ∈ L(X, Y ), the adjoint operator T ∗ : Y ∗ → X ∗ is
defined by
hT ∗ g, xi = hg, T xi
∀x ∈ X, ∀g ∈ Y ∗
From this it is easy to see that T ∗ ∈ L(Y ∗ , X ∗ ).
Before we can elucidate some nice properties of these adjoints, we require a little more
“structure” beyond that stipulated in the definition of a BS. Thus, we move onto the next
section.
Gregory T. von Nessi
Version::31102011
This proves the claim; and hence, {λj } can only have 0 as an accumulation point by the
aforementioned contradiction. Applying this conclusion to lemma 8.4.3, we attain the full
theorem.
8.6 Hilbert Spaces
8.6
193
Hilbert Spaces
Even though BS and their associated operators have a nice mathematical structure, they
lack constructions that would enable one to speak of “geometry”. As geometric structures
yield useful results in Euclidean space (inequalities, angle relations, etc.), we are motivated to
construct geometric analogies on the linear spaces (not necessarily BS) of previous sections.
This nebulous reference to “geometry” on a linear space (LS) will be clarified shortly.
Definition 8.6.1: Given X is an LS, a map (·, ·) : X × X → R is called a scalar product if
Version::31102011
i.) (x, y ) = (y , x),
ii.) (λ1 x1 + λ2 x2 , y ) = λ1 (x1 , y ) + λ2 (x2 , y ), and
iii.) (x, x) > 0
∀x 6= 0
for all x1 , x2 , y ∈ X and λ1 , λ2 ∈ R.
Correspondingly, we now define
Definition 8.6.2: A LS with a scalar product is called a Scalar Product Space (SPS). It is
called a Hilbert Space (HS) if it is complete.
Warning: Do not assume that a scalar product is the same as taking an element of a
BS with it’s dual, i.e. “h·, ·i 6= (·, ·)”. Actually, this comparison doesn’t even make sense
as a scalar product has both arguments in a given space whereas the composition h·, ·i requires an element from the space and another from the associated dual space. That aside,
the dual space of a BS may very well not be isomorphic to the BS itself. This will be the
case in a HS by the Riesz-representation theorem (stated below). With that theorem, one
may characterize any functional on a HS via the scalar product. This does not go the other
way around; i.e. on a BS, a dual space does not necessarily characterize a valid scalar product.
Also, one will notice that we only require that apSPS to be a LS (not a BS) with a
scalar product; but, it is easily verified that ||x|| := (x, x) satisfies the criteria stated in
the definition of a norm. Thus, any HS (SP’S) is a BS (NLS). This is not to say that the
only valid scalar product on a space is the one that generates the norm; case in point, Lp (Ω)
spaces are frequently considered with the norm
Z
f · g dx
Ω
where f , g ∈ Lp (Ω). It is easy to verify this satisfies definition 8.6.1 but does not generate
the Lp (Ω) norm.
Scalar products add a great deal of structure to the spaces we have been considering
previously. First, we can now start talking about the concept of “geometry” on a SPS.
We have seen that for any SPS, there exists a norm which can be analogized to length
in Euclidean space. Taking this analogy one step further, we can define the “angle”, φ,
(x,y )
between two elements x and y in a SPS, X, by cos φ := ||x||·||y
|| . This leads directly to the
all-important orthogonality criterion x ⊥ y ⇐⇒ (x, y ) = 0. In addition to geometry, there
are many useful inequalities that pertain to scalar products; here are a few.
• Cauchy Schwarz inequality: |(x, y )| ≤ ||x|| · ||y ||
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
194
• Triangle inequality: ||x + y || ≤ ||x|| + ||y ||
• Parallelogram identity: ||x + y ||2 + ||x − y ||2 = 2(||x||2 + ||y ||2 )
Notation: From now on we will be referring to Hilbert Spaces, as it usually safe to assume
that we are working in a complete SPS.
With these concepts, we can now analyze some key theorems that pertain to Hilbert
Spaces.
Proof: First, if x ∈ M then as x = x +0 we trivially get the conclusion. Thus, we consider
M 6= X with x ∈
/ M. Defining, d := dist(x, M), we select a minimizing sequence {yn } ⊂ M
such that ||x − yn || → d. Before proceeding, we need to show that {yn } is Cauchy. Utilizing
the parallelogram identity, we write
2
1
4 x − (yn + ym ) +||yn − ym ||2 = 2(||x − yn ||2 + ||x − ym ||2 ) → 4d 2 ,
2
|
{z
}
≥4d 2
where the first term on the LHS is ≥ 4d 2 as 12 (yn + ym ) ∈ M. Thus, we see that
||yn − ym || → 0; the sequence is Cauchy. Since X is complete and M is closed, we ascertain that limn→∞ yn = y ∈ M.
Writing x = z + y , where z := x − y , the conclusion of the proof will follow provided
z ∈ M ⊥ . Consider any y ∗ ∈ M and α ∈ R, as y + αy ∗ ∈ M, it is calculated
d 2 ≤ ||x − y − αy ∗ ||2 = (z − αy ∗ , z − αy ∗ )
= ||z||2 − 2α(y ∗ , z) + α2 ||y ∗ ||2
= d 2 − 2α(y ∗ , z) + α2 ||y ∗ ||2 ;
thus,
|(y ∗ , z)| ≤
α ∗ 2
||y ||
2
∀α ∈ R.
From this we conclude that |(y ∗ , z)| = 0; and since y ∗ ∈ M was arbitrary, z ∈ M ⊥ .
One of the nicest properties of Hilbert Spaces, call it H, is that one can characterize
any bounded linear functional on H in terms of the scalar product. This is formalized by the
following theorem.
Theorem 8.6.4 (Riesz-representation Theorem): If F is a bounded linear function on a
Hilbert space H, then there exists a unique f ∈ H such that F (x) = (x, f ) and ||f || = ||F ||.
Gregory T. von Nessi
Version::31102011
Theorem 8.6.3 (Projection Theorem): If H is a HS and M is a closed subspace of H, then
each x ∈ H has a unique decomposition x = y + z with y ∈ M, z ∈ M ⊥ (i.e. is perpendicular
to M, (z, M) = 0, ∀m ∈ M). Moreover dist(x, M) = ||z||.
8.6 Hilbert Spaces
195
Version::31102011
Proof: First, we consider F such that Ker(F ) = H; then, F = 0 and we correspondingly
choose f = 0. If, on the other hand, we have Ker(F ) 6= H, then ∃z ∈ H \ Ker(F ). By the
projection theorem, we may assume that z ∈ Ker(F )⊥ as it easily seen that Ker(F ) is a
proper subspace of H. Now, we see that
F (x)
F x−
z =0
F (z)
F (x)
=⇒ x −
z ∈ Ker(F ).
F (z)
Since z ∈ Ker(F )⊥ , we have by orthogonality
F (x)
F (x)
F (x)
z , z = 0 ⇐⇒ (x, z) =
z, z =
||z||2 .
x−
F (z)
F (z)
F (z)
Solving for F (x) algebraically leads to
F (z)
F (z)
z = (x, f ), where f =
z.
F (x) = x,
2
||z||
||z||2
Lastly, we need to show ||F || = ||f ||. So, on one hand, we have
||F || =
|F (x)|
|(x, f )| C-S
||x|| · ||f ||
≤ sup
= sup
= ||f ||;
||x||
||x||
x∈H,x6=0 ||x||
sup
but on the other, its calculated that
C-S
||f ||2 = (f , f ) = F (f ) ≤ ||F || · ||f || =⇒ ||f || ≤ ||F ||.
Thus, we conclude that ||f || = ||F ||.
Next, we will go over one of the key existence theorems pertaining to elliptic equations
but first a quick definition.
Definition 8.6.5: Give X a NLS, a mapping B : X × X → R is a bilinear form provided B
is linear in one argument when the other argument is fixed. B is bounded if there exists a
constant C0 > 0 such that
|B(x, y )| ≤ C0 ||x|| · ||y ||
for all x, y ∈ X. Analogously, B is coercive if there exists a constant C1 > 0 such that
B(x, x) ≥ C1 ||x||2
for all x ∈ X.
Warning: B is not necessarily symmetric in its arguments.
Theorem 8.6.6 (Lax-Milgram Theorem): If H is a HS and B : H × H → R is a bounded,
coercive bilinear form, then for all F ∈ H ∗ there exists a unique f ∈ H such that
B(x, f ) = F (x).
Gregory T. von Nessi
196
Chapter 8: Facts from Functional Analysis and Measure Theory
Proof: First, fix f ∈ H and define
Gf (x) := B(x, f ) = (x, T f ),
(8.8)
where the last equality comes from the application of the Riesz representation theorem to
Gf (x). Given B is coercive, we calculate
C1 ||f ||2 ≤ B(f , f ) = (f , T f ) ≤ ||f || · ||T f ||,
||T f ||2 = (T f , T f ) = B(T f , f ) ≤ C0 ||f || · ||T f ||,
where C0 is the boundedness constant of B. From these last two equations, we derive that
C1 ||f || ≤ ||T f || ≤ C0 ||f ||,
(8.9)
which is tantamount to R(T ) being closed (right inequality) and T itself being one-to-one
(left inequality). Now, we wish to show that T is onto, since this along with (8.9) would
imply that T −1 is well-define.
If T is not onto, then by the projection theorem, there exists z ∈ R⊥ (T ) (R(T ) is easily
noticed to be a proper subspace of H). Thus, B(z , f ) = (z, T f ) = 0 for all f ∈ H. Choosing
f = z, we calculate that 0 = B(z , z) ≥ C1 ||z||2 which implies z = 0, i.e. R(T ) = H. So
T −1 is indeed well-defined as we have now shown T to be onto.
The last step of the proof is to apply the Riesz representation theorem: for any F ∈ H ∗ ,
there exists a unique g ∈ H such that F (x) = (x, g) for all x ∈ H. With that, we get
F (x) = (x, g) = (x, T T −1 g) = B(x, T −1 g).
From this applied to (8.8), we see that f = T −1 g. The existence of f is thus guaranteed as
we have shown T −1 to be well defined.
As was mentioned above, the Lax-Milgram theorem turns out to be the right tool for
proving existence of solutions of many elliptic equation. That being said, one could be forgiven for thinking that the coercivity criterion is somewhat artificially induced, as it will be
shown to correspond to the ellipticity criterion of a differential operator. It is true that this
correspondence is very useful, but the Lax-Milgram theorem actually has a deeper implication
than the existence of certain PDE.
Gregory T. von Nessi
Version::31102011
where C1 is the coercivity constant associated with B. Analogously, via the boundedness of
B,
Version::31102011
8.6 Hilbert Spaces
197
Recalling the construction of Hilbert Spaces, one will remember that the properties of
such a space are all dependent on the definition of the associated scalar product. Indeed,
the norm and, hence, the topology of the space result from the particular definition of the
scalar product (on top of the assumed ambient properties of a linear space). Philosophically
speaking, Lax-Milgram indicates whether a given bilinear form acting as a scalar product
will “replicate” the topological structure an already-established HS. If this is found to be
the case, then one can view the conclusion of Lax-Milgram as a direct application of the
Riesz-representation theorem on the space with the bilinear form as the scalar product. Now
since topology is directly correlated to the norm of a HS, one can start to see the coercivity
condition (along with the boundedness of the bilinear form) as a stipulation on the bilinear
form that would guarantee the preservation of the topology of the HS if the scalar product
were replaced by the bilinear form itself. Of course the geometry, as described in the beginning
of the section, would be different; but this is not important, in this instance, as Lax-Milgram
only refers two what elements are contained in a space (a topological attribute). Hopefully,
this bit of insight will make the Lax-Milgram theorem (and existence theorems in general) a
little more intuitive for the reader.
8.6.1
Fredholm Alternative in Hilbert Spaces
With the additional structure of a HS, we are now able to realize more properties of adjoint
operators than was possible in BS.
Theorem 8.6.7 (): If H is a HS with T ∈ L(H, H), then T ∗ has the following properties:
i.) ||T ∗ || = ||T ||;
ii.) If T is compact, then T ∗ is also compact;
iii.) R(T ) = Ker(T ∗ )⊥ .
Proof:
i.) This is clear from the fact that
||T || = sup
x∈H
||T x||
||x||
(T ∗ has an analogous norm).
ii.) Let {xn } be a sequence in H satisfying ||xn || ≤ M. Then
||T ∗ xn ||2 = (T ∗ xn , T ∗ xn ) = (xn , T T ∗ xn )
≤ ||xn || · ||T T ∗ xn ||
≤ M||T || · ||T ∗ xn ||
so that ||T ∗ xn || ≤ M||T ||; that is, the sequence {T ∗ xn } is also bounded. Hence,
since T is compact, by passing to a subsequence if necessary, we may assume that the
sequence {T T ∗ xn } converges. But then
||T ∗ (xn − xm )||2 = (T ∗ (xn − xm ), T ∗ (xn − xm ))
= (xn − xm , T T ∗ (xn − xm ))
≤ 2M||T T ∗ (xn − xm )|| → 0
as m, n → ∞.
Since H is complete, the sequence {T ∗ xn } is convergent and hence T ∗ is compact.
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
198
iii.) Fix arbitrary x ∈ H and set T x =: y ∈ R(T ). From this, it is seen that (y , f ) =
(T x, f ) = (x, T ∗ f ) = 0 for all f ∈ Ker(T ∗ ). Thus, R(T ) ⊂ Ker(T ∗ )⊥ ; and since
Ker(T ∗ ) is closed, R(T ) ⊂ Ker(T ∗ )⊥ . Now suppose that y ∈ Ker(T ∗ )⊥ \ R(T ).
⊥
By the projection theorem, y = y1 + y2 where y1 ∈ R(T ) and y2 ∈ R(T ) \ {0}
Consequently, 0 = (y2 , T x) = (T ∗ y2 , x) for all x ∈ H; hence, y2 ∈ Ker(T ∗ ). Therefore,
(y2 , y ) = (y2 , y1 ) + ||y2 ||2 = ||y2 ||2 . Thus, we see that (y2 , y ) 6= 0, i.e. y ∈
/ Ker(T ∗ )⊥ .
∗
⊥
This contradiction on the assumption of y ∈ Ker(T ) \ R(T ) leads to the conclusion
that R(T ) = Ker(T ∗ )⊥ .
Theorem 8.6.8 (Fredholm Alternative in Hilbert Spaces): Consider a HS, H and a compact operator T : H → H. In this case, there exists an at most countable set σ(T ) having no
accumulation points except possibly λ = 0 such that the following holds: if λ 6= 0, λ ∈
/ σ(T )
then the equations
λx − T x = y
λx − T ∗ x = y
(8.10)
have a unique solution x ∈ H, for all y ∈ H. Moreover, the inverse mappings (λI − T )−1 ,
(λI − T ∗ )−1 are bounded. If λ ∈ σ(T ), then Ker(λI − T ) and Ker(λI − T ∗ ) have positive
and finite dimension, and we can solve the equation (8.10) if and only if y is orthogonal to
Ker(λI − T ∗ ) and Ker(λI − T ) respectively for both cases.
There is a lot of information in this theorem; it’s worth the effort to try and get one’s mind
around it. Also, keep in mind that Ker(λI − T ) and Ker(λI − T ∗ ) are just the eigenspaces
Tλ and Tλ∗ respectively.
8.7
Weak Convergence and Compactness; the Banach-Alaoglu
Theorem
To begin this section we begin with some motivation. Consider a sequence of approximate solutions of F (u, Du, D2 u) = 0, i.e. a bounded sequence {uk } ∈ X such that
F (uk , Duk , D2 uk ) = k → 0. Now, there are two important questions that arise:
• Does bounded imply there is a convergent subsequence i.e. ukj → u?
• Is this limit the solution?
As we have seen that bounded sequences in infinite dimensional BS do not necessarily contain
convergent subsequences (i.e. in the strong sense), these questions are not readily answered
with our current “set of tools”. Given that such approximations as the one stated are
frequently utilized in PDE, we now seek to define a new criterion for convergence so that we
can indeed answer both these questions in the affirmative.
Definition 8.7.1: Take X to be a BS, with X ∗ as its dual.
i.) {xj } ⊂ X converges weakly to x ∈ X if
hy , xj i → hy , xi
∀y ∈ X ∗ .
Notation: uk converging weakly to u is denoted by uk * u.
Gregory T. von Nessi
Version::31102011
Now, we can can combine the above properties of adjoints, the results pertaining to the
spectra of compact operators and the Fredholm Alternative on BS to directly ascertain the
following generalization.
8.7 Weak Convergence and Compactness; the Banach-Alaoglu Theorem
199
ii.) {yj } ∈ X ∗ converges weakly∗ to y ∈ X ∗ if
hyj , xi → hy , xi
∀x ∈ X
It can be readily seen that weak convergence implies strong convergence (that is convergence in norm). Also, be the linearity of scalar products, it is also observed that weak limits
are unique.
Version::31102011
Now, we present the key theorem which essential would enable us to answer “yes” to the
hypothetical situation presented at the beginning of the section.
Theorem 8.7.2 (Banach-Alaoglu): If X is a TVS, with Ω ⊂ X bounded, open set containing
the origin, then the unit ball in X ∗ is sequentially weak∗ compact, i.e. define
K = {F ∈ X ∗ : |F (x)| ≤ 1
∀x ∈ Ω},
then K is sequentially weak∗ compact.
Proof 2 : First, since Ω is open, bounded and contains the origin,
there corresponds to
each x ∈ X a number γx < ∞ such that x ∈ γx Ω, where aΩ := x ∈ X : xa ∈ Ω for any
fixed a ∈ R. Now for any x ∈ X, we know that γxx ∈ Ω, hence
|F (x)| ≤ γ(x)
(x ∈ X, Λ ∈ K).
Let Ax be the set of all scalars α such that |α| ≤ γx . Let τ be the product topology on
P , the Cartesian product of all Ax , one for each x ∈ X. Indeed, P will almost always be a
space with an uncountable dimension. By Tychonoff’s theorem, P is compact since each Ex
is compact. The elements of P are functions f on X (they may or may not be linear or even
continuous) that satisfy
|f (x)| ≤ γ(x)
(x ∈ X).
Thus K ⊂ X ∗ ∩ P , as K ⊂ X ∗ and K ⊂ P . It follows that K inherits two topologies: one
from X ∗ (its weak∗ -topology, to which the conclusion of the theorem refers) and the other,
τ , from P . Again, see [Bre97] for the concept of inherited topologies.
We now claim that
i.) these two topologies coincide on K, and
ii.) K is closed subset of P .
If ii.) is proven, then we have that K is an uncountable Cartesian product of τ -compact sets
(each one contained in some Ax ), hence it is compact by Tychonoff’s theorem. Moreover, if
i.) is shown to be true then K is concluded to be weak∗ -compact from the previous statement.
So, we now see to prove our claims. To do this first fix F0 ∈ K; choose xi ∈ X, for
1 ≤ i ≤ n; and fix δ > 0. Defining
W1 := {F ∈ X ∗ : |Λxi − Λ0 xi | < δ
for
1 ≤ i ≤ n}
W2 := {f ∈ P : |f (xi ) − Λ0 xi | < δ
for
1 ≤ i ≤ n},
and
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
200
we let n, xi , and δ range over all admissible values. The resulting sets W̃1 , then form a
local base for the weak∗ -topology of X ∗ at Λ0 and the sets W̃2 form a local base for the
product topology τ of P at Λ0 ; i.e. any open set in X ∗ or P can be represented as a union
of translates of the sets contained in W̃1 or W̃2 respectively. Since K ⊂ P ∩ X ∗ , we have
W̃1 ∩ K = W̃2 ∩ K.
This proves (i).
f0 (αx + βy ) − αf0 (x) − βf0 (y ) = (f0 − f )(αx + βy )
+α(f − f0 )(x) + β(f − f0 )(y ).
so that
|f0 (αx + βy ) − αf0 (x) − βf0 (y )| < (1 + |α| + |β|).
Since was arbitrary, ,we see that f0 is linear. Finally, if x ∈ Ω and > 0, the same argument
shows that there is an g ∈ K such that |g(x) − f0 (x)| < . Since |g(x)| ≤ 1, by the definition
of K, if follows that |f0 (x)| ≤ 1. We conclude that f0 ∈ K. This proves ii.) and hence the
theorem.
As an immediate consequence we have
Corollary 8.7.3 (): If X is a reflexive BS, then the unit ball in X is sequentially weakly
compact.
From this corollary and the Riesz-representation theorem, we have that any bounded set
in any HS is sequentially weakly compact (the Riesz-representation theory proves that all
HS are reflexive BS). This is a key point that will be used frequently in the upcoming linear
theory, and gives us the criterion under which we can answer “yes” to the questions posed
at the beginning of the section.
Note: With this section I have broken the sequential structure of the chapter in that one
only needs to have a TVS to speak of weak convergence, etc. The reason for presenting
this material now, is that it can only be motivated once the student starts to realize how
restrictive the notion of strong convergence in a BS/HS is. Thus, for motivation’s sake I
have decided to present this material now, late in the chapter.
8.8
Key Words from Measure Theory
This section is intended to be a whirlwind review of basic measure theory, i.e. one that is not
already familiar with the following concepts should consult with [Roy88] for a proper treatise
of such. In reality, this section is a glorified appendix, in that most of the following results
are not proven but are explained in relation to what has already been explained.
Gregory T. von Nessi
Version::31102011
Now, suppose f0 is in the τ -closure of K. Next choose x, y ∈ X with scalars α, β, and
> 0. The set of all f ∈ P such that |f − f0 | < at x, at y , and at αx + βy is a τ neighborhood of f0 . Therefore K contains such an f . Since this f ∈ K ⊂ X ∗ , and hence,
linear, we have
8.8 Key Words from Measure Theory
8.8.1
201
Basic Definitions
Let us recall the “typical” definition of a measure:
Definition 8.8.1: Consider the set of all sets of real numbers, P(R). A mapping µ : P(R) →
[0, ∞) is a measure if the following properties hold:
Version::31102011
i.) for any interval l, µl = the length of l;
ii.) if {En } is a sequence of disjoint sets, then
!
[
X
µ
En =
µEn ;
n
n
iii.) µ is translationally invariant; i.e., if E is a set for which µ is defined and if E + y :=
{x + y : x ∈ E}, then µ(E + y ) = µE for all y ∈ R.
Now, we say this is the “typical” definition as there are actually different ways to define
a measure. To elaborate, in addition to the above three criterion, it would be nice to have µ
defined for all subsets of R; but no such mapping exists. Thus, one of these four criterion
must be compromised to actually get a valid definition; obviously, this may be done in a
number of ways. We take defintion 8.8.1 as a proper definition as the measures we will
consider, Lebesgue (and more esoterically Hausdorff), obey the criterion contained in such.
The next definition is a necessary precursor to defining Lebesgue measure.
Definition 8.8.2: The outer measure, denoted µ∗ is defined for any A ⊆ R as
X
l(In ),
µ∗ A = inf
S
A⊂
n In
n
where In are intervals in R and l(In ) is the length of In .
Now, even though µ∗ is defined for all subsets of R it does not satisfy condition iii.) of
definition 8.8.1, which is undesirable for integration. Thus, we go about restricting the sets
of consideration to guarantee that iii.) is satisfied by the following definition.
Definition 8.8.3: A set E ⊂ R is Lebesgue measurable if
µ∗ A = µ∗ (A ∩ E) + µ∗ (A ∩ E c ),
for all A ⊂ R. Moreover, the Lebesgue measure of E is just µ∗ E.
It is left to the reader that definition does indeed imply condition iii.) of definition ; again,
see [Roy88] if you need help.
Denoting the set of all measurable sets B, let us review one of the properties of B. To
do this, we remember the following definition
Definition 8.8.4: A group of sets A is a σ-ring on a space X if the following are true:
• X ∈ A;
• if E ∈ A, then Ac := X \ A ∈ A;
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
202
• and if Ei ∈ A then
∞
[
i=1
Ei ∈ A.
Canonically extending the above concept of Lebesgue measure on R to Rn by replacing
intervals with Cartesian products of intervals (and lengths with volumes), it is true that B is
a σ-ring over Rn .
Now, we move on to some important definitions pertaining to general measures.
Definition 8.8.6: A property hold µ-a.e. (µ almost everywhere) if the set of all x for which
the property does not hold is a set of µ-measure zero.
Lastly, we go over what it means for a function to be differentiable.
Definition 8.8.7: A function f : Rn → R is B measurable if {x : f (x) ∈ U} ∈ B for all open
sets U in R.
8.8.2
Integrability
For integrability studies, we first define a special clase of functions.
Definition 8.8.8: A function f is simple if f is a finite non-zero constant on finitely many
disjoint sets Ai which are measurable and zero elsewhere.
A simple function f is said to be integrable if
X
|fi | · µ(Ai ) < ∞;
and thus,
where
Z
f dµ =
χA i =
X
f i · χA i ,
1 x ∈ Ai
0 x∈
/ Ai
Note: Keep in mind that µ is a general measure; thus, the above criterion for integrability
of a simple function is dependent on our choice of µ.Still considering a general measure, we
now make the following generalization.
Definition 8.8.9: Consider a general function g and a sequence of simple functions {fn }
such that g = lim fn a.e. g is said to be integrable (with respect to measure µ) if
Z
lim
|fn (x) − fk (x)| dλ = 0.
n,k→∞
Moreover, since the limit
lim
k→∞
Z
fk dµ
thus exists, we see that the limit is independent of the choice of approximating sequence.
Gregory T. von Nessi
Version::31102011
Definition 8.8.5: N ∈ B is called a set of µ-measure
zero if µ(N) =
S
S 0. This ie equivalent
to saying there exists open sets Ai ⊂ B with Ai ⊂ N such that µ ( Ai ) < .
8.8 Key Words from Measure Theory
203
Now, we specify µ to be henceforth the Lebesgue measure. The corresponding Lebesgue
integrals have nice convergence properties, i.e. there exists a good variety of conditions
when one can interchange limits and integrals. We now rapidly go over these convergence
theorems, referring the reader to [Roy88] for proofs.
Theorem 8.8.10 (Lebesgue’s Dominated Convergence): If g is integrable with some fi →
f a.e., where f is measurable and |fi | ≤ g for all i , then f is integrable. Moreover,
Version::31102011
Z
fi dµ →
Z
f dµ
lim
i→∞
Z
fi dµ =
Z
lim fi dµ .
i→∞
Theorem 8.8.11 (Monotone Convergence (Beppo-Levi)): If fi is a sequence of integrable
functions such that 0 ≤ fi % f monotonically and
Z
fi dµ ≤ C < ∞,
then f integrable and
Z
f dµ = lim
i→∞
Z
fi dµ.
Lemma 8.8.12 (Fatou’s): If fi is a sequence of integrable functions with f ≥ 0 and
Z
then lim inf fi is integrable and
Z
fi dλ ≤ C < ∞,
lim fi dµ ≤ lim inf
i→∞
i→∞
Z
fi dµ
Theorem 8.8.13 (Egoroff’s): Take fi to be a sequence of functions such that fi → f a.e.
on a set E, with f measurable, where E is measurable and µ(E) < ∞, then ∀ > 0, ∃E
measurable such that λ(E \ E ) < and fi → f uniformly on E .
The next theorem technically states that multi-dimensional integrals can indeed be computed via an iterative scheme.
i.) If f : Rn × Rm → R is measurable such that
Theorem 8.8.14 (Fubini’s):
Z
Rn
Z
Rm
|f (x, y )| dxdy
exists, then
Z
Rn
Z
f (x, y ) dxdy =
Rm
Z
g(y ) dy ,
Rn
where
g(y ) =
Z
f (x, y ) dx
Rm
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
204
ii.) On the otherhand, if
g(y ) =
Z
Rm
|f (x, y )| dx < ∞
for almost every y with
Z
Rn
then
Rn
8.8.3
Z
Rm
|f (x, y )| dxdy < ∞
Calderon-Zygmund Decomposition
[Cube Decomposition]
A useful construction for some measure theoretic arguments in the forthcoming chapters is
that of cube decomposition. This is a recursive construction that enables one to ascertain
estimates for the measure of a set imbedded in Rn .
First, consider a cube K0 ∈ Rn , f a non-negative integrable function, and t > 0 all satisfying
Z
f dx ≤ t|K0 |,
K0
where | · | indicates the Lebesgue measure of the set. Now, by bisecting the edges of K0 we
divide K0 into 2n congruent subcubes whose interiors are disjoint. The subcubes K which
satisfy
Z
f dx ≤ t|K|
(8.11)
K
are similarly divided and the process continues indefinately. Set K ∗ be the set of subcubes
K thus obtained that satisfy
Z
f dx > t|K|,
K
and for each K ∈ K ∗ , denote by K̃ the subcube whose subdivision gives K. It is clear that
|K̃|
= 2n
|K|
and that for any K ∈ K ∗ , we have
1
t<
|K|
Z
K
f dx ≤ 2n t.
Moreover, setting
F =
[
K
K∈K ∗
and G = K0 \ F , we have
Gregory T. von Nessi
f ≤t
a.e. in G.
(8.12)
Version::31102011
Z
g(y ) dy < ∞,
8.8 Key Words from Measure Theory
205
This inequality is a consequence of Lebesgue’s differention theorem, as each point in G lies
in a nested sequence of cubes satisfying (8.11) with the diameters of the sequence of cubes
tending to zero.
For some pointwise estimates, we also need to consider the set
[
F̃ =
K̃
K∈K ∗
Version::31102011
which satisfies by (8.11),
Z
F̃
f dx ≤ t|F̃ |.
(8.13)
In particular, when f is the characteristic function χΓ of a measurable subset Γ of K0 , we
obtain from (8.12) and (8.13) that
|Γ | = |Γ ∩ F̃ | ≤ t|F̃ |
8.8.4
Distribution functions
Let f be a measurable function on a domain Ω (bounded or unbounded in Rn . The distribution
function µ = µf of f is defined by
µ(t) = µf (t) = |{x ∈ Ω|f (x) > t}|
for t > 0, and measures the relative size of f . Note that µ is a decreasing function on (0, ∞).
The basic properities of the distribution function are embodied in the following lemma.
Lemma 8.8.15 (): For any p > 0 and |f |p ∈ L1 (Ω), we have
Z
µ(t) ≤ t −p
|f |p dx
(8.14)
Ω
Z
Ω
p
|f | dx = p
Z
∞
t p−1 µ(t) dt.
(8.15)
0
Proof: Clearly
µ(t) · t
p
≤
≤
Z
t≤|f |
Z
Ω
|f |p dx
|f |p dx
for all t > 0 and hence (8.14) follows. Next, suppose that f ∈ L1 (Ω). Then, by Fubini’s
Theorem
Z
Z Z |f (x)|
|f | dx =
dtdx
Ω
ZΩ∞ 0
=
µ(t) dt,
0
and the result (8.15) for general p follows by a change of variables.
.
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
206
8.9
Exercises
8.1: Let X = C 0 ([−1, 1]) be the space of all continuous function on the interval [−1, 1], Y =
C 1 ([−1, 1]) be the space of all continuously differentiable functions, and let Z = C01 ([−1, 1])
be the space of all function u ∈ Y with boundary values u(−1) = u(1) = 0.
a) Define
(u, v ) =
Z
1
u(x)v (x) dx.
−1
b) Define
(u, v ) =
Z
1
u 0 (x)v 0 (x) dx.
−1
On which of the spaces X, Y , and Z does (·, ·) define a scalar product? Now consider
those spaces for which (·, ·) defines a scalar product and endow them with the norm
|| · || = (·, ·)1/2 . Which of the spaces is a pre-Hilbert space and which is a Hilbert space
(if any)?
8.2: Suppose that Ω is an open and bounded domain in Rn . Let k ∈ N, k ≥ 0, and γ ∈ (0, 1].
We define the norm || · ||k,γ in the following way. The γ th -Holder seminorm on Ω is given by
[u]γ;Ω =
sup
x,y ∈Ω,x6=y
|u(x) − u(y )|
,
|x − y |γ
and we define the maximum norm by
|u|∞;Ω = sup |u(x)|.
x∈Ω
The Holder space C k,γ (Ω) consists of all functions u ∈ C k (Ω) for which the norm
X
X
||u||k,γ;Ω =
||Dα u||∞;Ω +
[Dα u]γ;Ω
|α|≤k
|α|=k
is finite. Prove that C k,α (Ω) endowed with the norm || · ||k,γ;Ω is a Banach space.
Hint: You may prove the result for C 0,γ ([0, 1]) if you indicate what would change for the
general result.
8.3: Consider the shift operator T : l 2 → l 2 given by
(x)i = xi+1 .
Prove the following assertion. It is true that
Gregory T. von Nessi
x − T x = 0 ⇐⇒ x = 0,
Version::31102011
On which of the spaces X, Y , and Z does (·, ·) define a scalar product? Now consider
those spaces for which (·, ·) defines a scalar product and endow them with the norm
|| · || = (·, ·)1/2 . Which of the spaces is a pre-Hilbert space and which is a Hilbert space
(if any)?
8.9 Exercises
207
however, the equation
x − Tx = y
does not have a solution for all y ∈ l 2 . This implies that T is not compact and the Fredholm’s
alternative cannot be applied. Give an example of a bounded sequence xj such that T xj does
not have a convergent subsequence.
8.4: The space l 2 is a Hilbert space with respect to the scalar product
Version::31102011
(x, y) =
∞
X
xi yi .
i=1
Use general theorems to prove that the sequence of unit vectors in l 2 given by (ei )j = δij
contains a weakly convergent subsequence and characterize the weak limit.
8.5: Find an example of a sequence of integrable functions fi on the interval (0, 1) that
converge to an integrable function f a.e. such that the identity
Z 1
Z 1
lim
fi (x) dx =
lim fi (x) dx
i→∞ 0
0 i→∞
does not hold.
8.6: Let a ∈ C 1 ([0, 1]), a ≥ 1. Show that the ODE
−(au 0 )0 + u = f
has a solution u ∈ C 2 ([0, 1]) with u(0) = u(1) = 0 for all f ∈ C 0 ([0, 1]).
Hint: Use the method of continuity and compare the equation with the problem −u 00 = f .
In order to prove the a-priori estimate derive first the estimate
sup |tut | ≤ sup |f |
where ut is the solution corresponding to the parameter t ∈ [0, 1].
8.7: Let λ ∈ R. Prove the following alternative: Either (i) the homogeneous equation
−u 00 − λu = 0
has a nontrivial solution u ∈ C ( [0, 1]) with u(0) = u(1) = 0 or (ii) for all f ∈ C 0 ([0, 1]),
there exists a unique solution u ∈ C 2 ([0, 1]) with u(0) = u(1) = 0 of the equation
−u 00 − λu = f .
Moreover, the mapping Rλ : f 7→ u is bounded as a mapping from C 0 ([0, 1]) into C 2 ([0, 1]).
Hint: Use Arzela-Ascoli.
8.8: Let 1 < p ≤ ∞ and α = 1 − p1 . Then there exists a constant C that depends
only on a, b, and p such that for all u ∈ C 1 ([a, b]) and x0 ∈ [a, b] the inequality
||u||C 0,α ([a,b]) ≤ |u(x0 )| + C||u 0 ||Lp ([a,b])
Gregory T. von Nessi
Chapter 8: Facts from Functional Analysis and Measure Theory
208
holds.
8.9: Let X = L2 (0, 1) and
M=
f ∈X:
Z
f (x) dx = 0 .
1
0
0
According to the projection theorem, every f ∈ X can be written as f = f1 + f2 with f1 ∈ M
and f2 ∈ M ⊥ . Characterize f1 and f2 .
8.10: Suppose that pi ∈ (1, ∞) for i = 1, . . . , n with
1
1
+ ··· +
= 1.
p1
pn
Prove the following generalization of Holder’s inequality: if fi ∈ Lpi (Ω) then
f =
n
Y
i=1
1
fi ∈ L (Ω)
and
n
Y
fi
i=1
L1 (Ω)
≤
n
Y
i=1
||fi ||Lpi (Ω) .
8.11: [Gilbarg-Trudinger, Exercise 7.1] Let Ω be a bounded domain in Rn . If u is a measurable
function on Ω such that |u|p ∈ L1 (Ω) for some p ∈ R, we define
Φp (u) =
1
|Ω|
Z
Ω
p
|u| dx
1
p
.
Show that
lim Φp (u) = sup |u|.
p→∞
Ω
8.12: Suppose that the estimate
||u − uΩ ||Lp (Ω) ≤ CΩ ||Du||Lp (Ω)
holds for Ω = B(0, 1) ⊂ Rn with a constant C1 > 0 for all u ∈ W01,p (B(0, 1)). Here uΩ
denotes the mean value of u on Ω, that is
Z
1
uΩ =
u(z) dz .
|Ω| Ω
Find the corresponding estimate for Ω = B(x0 , R), R > 0, using a suitable change of coordinates. What is CB(x0 ,R) in terms of C1 and R?
8.13: [Qualifying exam 08/99] Let f ∈ L1 (0, 1) and suppose that
Z 1
f φ0 dx = 0
∀φ ∈ C0∞ (0, 1).
Gregory T. von Nessi
0
Version::31102011
Show that M is a closed subspace of X. Find M ⊥ with respect to the notion of orthogonality
given by the L2 scalar product
Z 1
(·, ·) : X × X → R, (u, v ) =
u(x)v (x) dx.
8.9 Exercises
209
Show that f is constant.
Hint: Use convolution, i.e., show the assertion first for f = ρ ∗ f .
8.14: Let g ∈ L1 (a, b) and define f by
f (x) =
Z
x
g(y ) dy .
a
Version::31102011
Prove that f ∈ W 1,1 (a, b) with Df = g.
Hint: Use the fundamental theorem of Calculus for an approximation gk of g. Show that
the corresponding sequence fk is a Cauchy sequence in L1 (a, b).
8.15: [Evans 5.10 #8,9] Use integration by parts to prove the following interpolation inequalities.
a) If u ∈ H 2 (Ω) ∩ H01 (Ω), then
1
Z
1 Z
Z
2
2
2 2
2
2
|D u| dx
|Du| dx ≤ C
u dx
Ω
Ω
Ω
b) If u ∈ W 2,p (Ω) ∩ W01,p (Ω) with p ∈ [2, ∞), then
Z
1 Z
1
Z
2
2
p
p
2 p
|Du| dx ≤ C
|u| dx
|D u| dx
.
Ω
Ω
Ω
Hint: Assertion a) is a special case of b), but it might be useful to do a) first. For simplicity,
you may assume that u ∈ Cc∞ (Ω). Also note that
Z
n Z
X
|Di u|2 |Du|p−2 dx.
|Du|p dx =
Ω
i=1
Ω
8.16: [Qualifying exam 08/01] Suppose that u ∈ H 1 (R), and for simplicity suppose that u
is continuously differentiable. Prove that
∞
X
n=−∞
|u(n)|2 < ∞.
8.17: Suppose that n ≥ 2, that Ω = B(0, 1), and that x0 ∈ Ω. Let 1 ≤ p < ∞. Show that
u ∈ C 1 (Ω \ {x0 }) belongs to W 1,p (Ω) if u and its classical derivative ∇u satisfy the following
inequality,
Z
(|u|p + |∇u|p ) dx < ∞.
Ω\{x0 }
Note that the examples from the chapter show that the corresponding result does not hold
for n = 1. This is related to the question of how big a set in Rn has to be in order to be
“seen” by Sobolev functions.
Hint: Show the result first under the additional assumption that u ∈ L∞ (Ω) and then
consider uk = φk (u) where φk is a smooth function which is close to

 k if s ∈ (k, ∞),
s if s ∈ [−k, k],
ψk (s) =

−k if s ∈ (−∞, −k).
Gregory T. von Nessi
210
Chapter 8: Facts from Functional Analysis and Measure Theory
You could choose φk to be the convolution of ψk with a fixed kernel.
8.18: Give an example of an open set Ω ⊂ Rn and a function u ∈ W 1,∞ (Ω), such that
u is not Lipschitz continuous on Ω.
Hint: Take Ω to be the open unit disk in R2 , with a slit removed.
1
∈ W 1,n (Ω).
8.19: Let Ω = B 0, 12 ⊂ Rn , n ≥ 2. Show that log log |x|
8.20: [Qualifying exam 01/00]
b) Show that g : [0, 1] → R is a continuous function and define the operator T : H → H
by
(T u)(x) = g(x)u(x)
for x ∈ [0, 1].
Prove that T is a compact operator if and only if the function g is identically zero.
8.21: [Qualifying exam 08/00] Let Ω = B(0, 1) be the unit ball in R3 . For any function
u : Ω → R, define the trace T u by restricting u to ∂Ω, i.e., T u = u ∂Ω . Show that
T : L2 (Ω) → L2 (∂Ω) is not bounded.
8.22: [Qualifying exam 01/00] Let H = H 1 ([0, 1]) and let T u = u(1).
a) Explain precisely how T is defined for all u ∈ H, and show that T is a bounded linear
functional on H.
b) Let (·, ·)H denote the standard inner product in H. By the Riesz representation theorem,
there exists a unique v ∈ H such that T u = (u, v )H for all u ∈ H. Find v .
8.23: Let Ω = B(0, 1) ⊂ Rn with n ≥ 2, α ∈ R, and f (x) = |x|α . For fixed α, determine
for which values of p and q the function f belongs to Lp (Ω) and W 1,q (Ω), respectively. Is
there a relation between the values?
8.24: Let Ω ⊂ Rn be an open and bounded domain with smooth boundary. Prove that
for p > n, the space W 1,p (Ω) is a Banach algebra with respect to the usual multiplication
of functions, that is, if f , g ∈ W 1,p (Ω), then f g ∈ W 1,p (Ω).
8.25: [Evans 5.10 #11] Show by example that if we have ||Dh u||L1 (V ) ≤ C for all 0 <
|h| < 21 dist (V, ∂U), it does not necessarily follow that u ∈ W 1,1 (Ω).
8.26: [Qualifying exam 01/01] Let A be a compact and self-adjoint operator in a Hilbert
space H. Let {uk }k∈N be an orthonormal basis of H consisting of eigenfunctions of A with
associated eigenvalues {λk }k∈N . Prove that if λ 6= 0 and λ is not an eigenvalue of A, then
A − λI has a bounded inverse and
||(A − λI)−1 || ≤
1
< ∞.
inf k∈N |λk − λ|
8.27: [Qualifying exam 01/03] Let Ω be a bounded domain in Rn with smooth boundary.
Gregory T. von Nessi
Version::31102011
a) Show that the closed unit ball in the Hilbert space H = L2 ([0, 1]) is not compact.
8.9 Exercises
211
a) For f ∈ L2 (Ω) show that there exists a uf ∈ H01 (Ω) such that
||f ||2H−1 (Ω) = ||uf ||2H1 (Ω) = (f , uf )L2 (Ω) .
0
Version::31102011
b) Show that L2 (Ω)
⊂⊂
−→
H −1 (Ω). You may use the fact that H01 (Ω)
⊂⊂
−→
L2 (Ω).
8.28: [Qualifying exam 08/02] Let Ω be a bounded, connected and open set in Rn with
smooth boundary. Let V be a closed subspace of H 1 (Ω) that does not contain nonzero
−→ L2 (Ω), show that there exists a constant C < ∞,
constant functions. Using H 1 (Ω) ⊂⊂
independent of u, such that for u ∈ V ,
Z
Z
2
|u| dx ≤ C
|Du|2 dx.
Ω
Ω
Gregory T. von Nessi
Chapter 9: Function Spaces
212
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
9.1
Introduction
9.2
Hölder Spaces
9.3
Lebesgue Spaces
9.4
Morrey and Companato Spaces
9.5
The Spaces of Bounded Mean Oscillation
9.6
Sobolev Spaces
9.7
Sobolev Imbeddings
9.8
Difference Quotients
9.9
A Note about Banach Algebras
9.10 Exercises
9.1
216
217
218
225
238
240
261
271
274
274
Introduction
A central goal to the theoretical study of PDE is to ascertain properties of solutions of
PDE that are not directly attainable by direct analytic means. To this end, one usually
seeks to classify solutions of PDE belonging to certain functions spaces whose elements
have certain known properties. This is, by no means, trivial, as the process requires not only
considering/defining appropriate function spaces but also elucidating the properties of such.
In this chapter we will consider some of the most prolific spaces in theoretical PDE along
with some of their respective properties.
Gregory T. von Nessi
Version::31102011
9 Function Spaces
9.2 Hölder Spaces
9.2
213
Hölder Spaces
To start the discussion on function spaces, Hölder spaces will first be considered as they are
easier to construct than Sobolev Spaces.
Consider open Ω ⊂ Rn and 0 < γ ≤ 1. Recall that a Lipschitz continuous function
u : Ω → R satisfies
Version::31102011
|u(x) − u(y )| ≤ C|x − y |
(x, y ∈ Ω)
for some constant C. Of course, this estimates implies that u is continuous with a uniform
modulus of continuity. A generalization of the Lipschitz condition is
|u(x) − u(y )| ≤ C|x − y |γ
(x, y ∈ Ω)
for some constant C. If this estimate holds for a function u, that function is said to be
Hölder continuous with exponent γ.
Definition 9.2.1:
(i) If u : Ω → R is bounded and continuous, we write
||u||C(Ω) := sup |u(x)|.
x∈Ω
(ii) The γ th -Hölder seminorm of u : Ω → R is
[u]C 0,γ (Ω) :=
sup
x, y ∈ Ω
x 6= y
|u(x) − u(y )|
|x − y |γ
,
and the γ th -Hölder norm is
||u||C 0,γ (Ω) := ||u||C(Ω) + [u]C 0,γ (Ω)
Definition 9.2.2: The Hölder space
C k,γ (Ω)
consists of all functions u ∈ C k (Ω) for which the norm
||u||C k,γ (Ω) :=
X
|α|≤k
||Dα u||C(Ω) +
X
[Dα u]C 0,γ (Ω)
|α|=k
is finite.
So basically, C k,γ (Ω) consists of those functions u which are k-times continuous differentiable with the k th -partial derivatives Hölder continuous with exponent γ. As one can easily
ascertain, these functions are very well-behaved, but these spaces also have a very good
mathematical structure.
Theorem 9.2.3 (): The function spaces C k,γ (Ω) are Banach spaces.
The proof of this is straight-forward and is left as an exercise for the reader.
Gregory T. von Nessi
Chapter 9: Function Spaces
214
9.3
Lebesgue Spaces
Notation: E is a measurable set, and Ω is an open set ⊂ Rn .
Definition 9.3.1: For 1 ≤ p < ∞ we define
p
L (E) = {f : E → R, measurable,
Z
E
|f |p < ∞}
L∞ (E) = {f : E → R, ess sup |f | < ∞}
where ess sup|f | = inf{K ≥ 0, |f | ≤ K a.e.}.
Lp (E) = Lp (E)/ ∼
Theorem 9.3.2 (): The spaces Lp (E) are BS with
Z
1/p
p
||f ||p =
|f | dλ
1≤p<∞
E
||f ||∞ = ess sup |f |
R
• Lp (E) = {f : E → R, |f |p dx < ∞}. These are indeed equivalence classes of
functions (as stated in the last lecture). In general we can regard this as a space of
functions. We do however, need to be careful sometimes. The main such point is
that saying that f ∈ Lp is continuous means: there exists a representative of f that is
continuous. Sometimes referred to as “a version of f ”.
R
• Lploc (Ω) = {f : Ω → R measurable. ∀Ω0 ⊂⊂ Ω, Ω0 |f |p dx < ∞}
Example: L1 ((0, 1))
f (x) = x1 ∈
/ L1 ((0, 1)), f ∈ L1loc ((0, 1))
Theorem 9.3.3 (Young’s Inequality): 1 < p < ∞, a, b ≥ 0
0
ap
bp
ab ≤
+ 0,
p
p
where p 0 is the dual exponent to p defined by
1
1
+ 0 =1
p p
General convention (1)0 = ∞, (∞)0 = 1.
Proof: a, b 6= 0, ln is convex downward (concave). Thus,
ln(λx + (1 − λ)x) ≥ λ ln x + (1 − λ) ln y
= ln x λ + ln y 1−λ
take exponential of both sides
λx + (1 − λ)y > x λ y 1−λ
Define λ = p1 , x λ = a, y 1−λ = b =⇒ Young’s inequality.
1−λ=1−
Gregory T. von Nessi
1
1
= 0
p
p
Version::31102011
Two functions are said to be equivalent if f = g a.e. Then we define
9.3 Lebesgue Spaces
215
0
Theorem 9.3.4 (Holder’s Inequality): f ∈ Lp (E), g ∈ Lp (E), then f g ∈ L1 (E) and
Z
f g ≤ ||f ||p ||g||p0
E
Proof: Young’s inequality
0
|f |
1 |f |p
|g|
1 |g|p
·
≤
+
||g||p0 ||f ||p
p ||f ||pp
p 0 ||g||p00
p
Version::31102011
Clear for p = 1 and p = ∞. Assume 1 < p < ∞ to use Young’s inequality. Integrate on E.
Z
|g|
|f |
·
0
||f ||p
E ||g||p
Z
=⇒
|f | · |g| dx
E
1
p
≤
Z
E
Z
1
|f |p
p + 0
p
||f ||p
≤ ||f ||p ||g||p0
|g|p
0
p0
E ||g||p 0
=
1
1
+ 0 =1
p p
Remark: Sometimes f = 1 is a good choice
Z
E
1 · |g| dx
≤
Z
p
1
E
1
p
1 Z
p
g
p0
E
10
p
= |E| ||g||p0
1
You gain |E| p .
Lemma 9.3.5 (Minkowski’s inequality): Let 1 ≤ p ≤ ∞, then ||f + g||p ≤ ||f ||p + ||g||p .
This is clear for p = 1 and p = ∞.
Assume 1 < p < ∞. Convexity of x p implies that
|f + g|p ≤ 2p−1 (|f |p + |g|p ) =⇒ f + g ∈ Lp
|f + g|p = |f + g||f + g|p−1
≤
|f | |f + g|p−1 + |g| |f + g|p−1
|{z} | {z } |{z} | {z }
Lp0
Lp
p0 =
p
p−1 ,
Z
0
Lp 0
Lp
(|f + g|p−1 )p = |f + g|p . Integrate and use Holder
p
|f + g| dx
≤ ||f ||p
Z
|f + g|
= (||f ||p + ||g||p )
(p−1)p 0
Z
10
p
+ ||g||p
p
|f + g|
p
0
10
Z
(p−1)p 0
|f + g|
10
p
p
= (||f ||p + ||g||p ) ||f + g||pp
Thus, we have
p− pp0
||f + g||p
≤ ||f ||p + ||g||p
Gregory T. von Nessi
Chapter 9: Function Spaces
216
Now p −
p
p0
=p 1−
1
p0
= p p1 = 1. Thus we have the result.
Proof of 9.3.2(Lp is complete) p = ∞ (“simpler than 1 ≤ p < ∞”).
If fn is Cauchy, then fn (x) is Cauchy for a.e. x and you can define the limit f (x) a.e.
Theorem (Riesz-Fischer) Lp complete for 1 ≤ p < ∞
Proof: Suppose that fk is Cauchy, ∀ > 0, ∃k() such that
||fk − fl || < k, l ≥ k()
if
∞
X
i=1
||fki+1 − fki ||p < ∞
relabel the subsequence and call it again fk . Now define
gl =
l
X
k=1
|fk+1 − fk |
then gl ≥ 0 and by Minkowski’s inequality, ||gl ||p ≤ C. By Fatou’s Lemma
Z
Z
p
lim g dx ≤ lim inf glp dx
l→∞
l→∞ l
p
≤
lim inf ||gl ||p
<∞
l→∞
liml→∞ glp
Thus
is finite for a.e. x, limk→∞ gl (x) exists =⇒ fk (x) is a Cauchy sequence for
a.e. x. Thus limk→∞ fk (x) = f (x) exists a.e. By Fatou
Z
Z
|f − fl |p dx =
lim |fk (x) − fl (x)|p dx
k→∞
Z
≤
lim inf |fk − fl |p dx
k→∞
p
≤
lim inf ||fk − fl ||p
k→∞
!p
∞
X
≤
lim inf
||fk − fk+1 ||p
→ 0 as l → ∞
k→∞
k=l
=⇒ ||fl − f ||p → 0
We know f is in Lp by a simple application of Minkowski’s inequality to the last result.
Sobolev Spaces ⇐⇒ f ∈ Lp with derivatives.
Calculus for smooth functions is linked to the calculus of Sobolev functions by ideas of
approximation via convolution methods.
Theorem 9.3.6 (): (see Royden for proof) Let 1 ≤ p < ∞. Then the space of continuous
functions with compact support is dense in in Lp , i.e.
Gregory T. von Nessi
Cc0 (Ω) ⊂ Lp (Ω) dense
Version::31102011
Choose a subsequence that converges fast in the sense that
9.3 Lebesgue Spaces
217
Idea of Proof:
1. Step functions
∞
X
ai χEi are dense
i=1
2. Approximate χEi by step functions on cubes.
Version::31102011
3. Construct smooth approximation for χO , for cube O
Q
9.3.1
Convolutions and Mollifiers
• Sobolev functions = Lp functions with derivatives
• Convolution = approximation of Lp functions by smooth function approximation of the
Dirac δ
– φ ∈ Cc∞ (Rn ), supp(φ) ∈ B(0, 1)
Z
– φ ≥ 0,
φ(x) dx = 1
Rn
Rescaling: φ (x) =
Z
φ (x) dx = 1.
1
n φ
x
,
Z
Rn
φ (x) dx = 1. φ (x) → 0 as → 0 pointwise a.e. and
Rn
B(0, ǫ)
B(0, 1)
Gregory T. von Nessi
Chapter 9: Function Spaces
218
Definition 9.3.7: The convolution,
f (x) = (φ ∗ f )(x)
Z
=
φ (x − y )f (y ) dy
n
ZR
=
φ (x − y )f (y ) dy
B(x,)
is the weighted average of f on a ball of radius > 0.
Theorem 9.3.8 (): f ∈ C ∞
Now, we ask the question f ∈ Lp , f → f in Lp . If p = ∞ f → f implies f ∈ C 0
which implies f ∈ Lp . We know f ∈ C 0 since f ∈ C ∞ ⊂ C 0 .
Ω′′
Ω
Ω′
ǫ
d
Lemma 9.3.9 (): f ∈ C 0 (Ω), Ω00 ⊂⊂ Ω compact subsets, then f → f uniformly on Ω00 !
Proof:
d
= dist(Ω0 , ∂Ω)
= inf{|x − y | : x ∈ Ω0 , y ∈ ∂Ω}
Ω00 = {x ∈ Ω, dist(x, Ω0 ) ≤ d2 }.
Let ≤ d2 , x ∈ Ω0 =⇒ B(x, ) ⊂⊂ Ω00 , and f (x) is well defined
Z
|f (x) − f (x)| =
φ (x − y )[f (y ) − f (x)] dy
Rn
Z
≤
sup |f (y ) − f (x)|
φ (x − y ) dy
y ∈B(x,)
Rn
|
{z
}
=1
——
f ∈ C 0 (Ω) =⇒ f uniformly continuous on Ω00 ⊂⊂ Ω. Thus,
ω() = sup{|f (x) − f (y )|, x, y ∈ Ω00 , |x − y | < } → 0
Gregory T. von Nessi
as → 0
Version::31102011
Proof: Difference Quotients and Lebesgue’s Dominated convergence theorem. 9.3 Lebesgue Spaces
219
——
≤ ω() → 0 uniformly on Ω0
as → 0
Proposition 9.3.10 (): If u, v ∈ L1 (Rn ), then u ∗ v ∈ L1 (Rn ) ∩ C 0 (Rn ). This is a special
case of the following: given f ∈ L1 (Rn ) and g ∈ Lp (Rn ) with p ∈ [1, ∞], then f ∗ g ∈
Lp (Rn ) ∩ C 0 (Rn ).
Version::31102011
Proof: We will prove the specialized case (p = 1) first.
Z
Z
f (x − y )g(y ) dy dx
Z
≤
|g(y )|
|f (x − y )| dxdy
n
n
R
R
Z
Z
=
|g(y )|
|f (z)| dz dy = ||f ||L1 (Rn ) ||g||L1 (Rn )
||f ∗ g||L1 (Rn )| =
Rn
n
ZR
Rn
Rn
There is nothing to show for p = ∞. For p ∈ (1, ∞), we use Holder’s inequality to compute
||f ∗
g||pLp (Rn )
=
Z
Rn
≤
Z
Rn
Z
Z
p
Rn
Z
f (y )g(x − y ) dy
Rn
Z
1
dx
1
|f (y )| p0 |f (y )| p |g(x − y )| dy
p0 Z
p
p
dx
p
|f (y )| · |g(x − y )| dy
|f (y )| dy
Rn
Z Z
p/p 0
= ||f ||L1 (Rn )
|f (y )| · |g(x − y )|p dy dx
n
n
ZR R
Z
p/p 0
= ||f ||L1 (Rn )
|f (y )|
|g(x − y )|p dy dx
≤
Rn
= ||f
which proves the assertion.
Rn
Rn
Rn
p/p 0
||L1 (Rn ) ||f ||L1 (Rn ) ||g||pLp (Rn )
dx
= ||f ||pL1 (Rn ) ||g||pLp (Rn )
Lemma 9.3.11 (): f ∈ Lp (Ω), 1 ≤ p < ∞. Then f → f in Lp (Ω).
Remarks:
• The same assertion holds in Lploc (Ω).
• Cc∞ (Ω) is dense in Lp (Ω).
Proof: Extend f by 0 to Rn : assume Ω = Rn . By Proposition 8.31, we see that if f ∈ Lp (Rn ),
g ∈ L1 (Rn ), then f ∗ g ∈ Lp (Rn ) and ||f ∗ g||p ≤ ||f ||p ||g||1 .
So, take
g = φ : ||f ||p = ||f ∗ φ ||p ≤ ||f ||p · ||φ ||1 = ||f ||p
(9.1)
Gregory T. von Nessi
Chapter 9: Function Spaces
220
Need to show ∀δ > 0, ∃(δ) > 0, ||f − f ||p < δ, ∀ ≤ (δ).
By Theorem 8.29, ∃f1 ∈ Cc0 (Ω) such that
f = f1 + f2 , ||f2 ||p = ||f − f1 ||p <
δ
3
||f − f || = ||(f1 + f2 ) + (f1 + f2 )||p
≤ ||f1 − f1 ||p + ||f2 − f2 ||p
f1 has compact support in Ω, supp(f1 ) ⊂ Ω0 ⊂⊂ Ω, d = dist(Ω0 , ∂Ω), Ω00 = x ∈ Ω0 , dist(x, Ω0 ) ≤
Now suppose that < d2 . By Lemma 8.30, f1 → f1 uniformly in Ω00 and thus in Ω (since
f1 , f1 = 0 in Ω \ Ω00 ). Now choose 0 small enough such that ||f1 − f ||p < 3δ
=⇒ ||f − f ||p ≤ δ ∀(δ) ≤ 0 , 2δ .
9.3.2
The Dual Space of Lp
Explicit example of T ∈ (Lp )∗ :
p
Tg : L → R,
Tg f =
Z
f g dx
Tg f is defined by Holder, and
|Tg f | ≤ ||f ||p ||g||p0
Thus,
||T g|| =
||f ||p ||g||p0
|Tg f |
≤ sup
= ||g||p0
||f ||p
f ∈Lp ,f 6=0
f ∈Lp ,f 6=0 ||f ||p
sup
0
In fact ||T g|| = ||g||p0 . This follows by choosing f = |g|p −2 g.
Theorem 9.3.12 (): Let 1 ≤ p < ∞,
0
Then J : Lp → (Lp )∗ , J(g) = Tg is onto with ||J(g)|| = ||g||p0 . In particular, Lp is reflexive
for 1 < p < ∞.
Remark: (L1 )∗ = L∞ , (L∞ )∗ 6= L1 .
9.3.3
Weak Convergence in Lp
• 1 ≤ p < ∞, fj * f weakly in Lp
Z
Z
fj g → f g
* f weakly∗ in L∞
• p = ∞, fj *
Z
Gregory T. von Nessi
fj g →
Z
fg
∀g ∈ Lp
0
∀g ∈ L1
d
2
Version::31102011
≤ ||f1 − f1 ||p + ||f2 ||p + ||f ||p
2δ
≤ ||f1 − f1 ||p +
by (9.1)
3
.
9.4 Morrey and Companato Spaces
221
Examples: Obstructions to strong convergence. (1) Oscillations. (2) Concentrations.
* 0 in L∗∞ , but fj 9 0 in L∞ . (||0 −
• Oscillations: Ω = (0, 1), fj (x) = sin(jx). fj *
sin(jx)||∞ = 1).
• Concentration: g ∈ Cc∞ (Rn ), fj (x) = j −n/p g(j −1 x). So,
Z
Z
Z
|fj |p dx = j −n g p (j −1 x) dx = |g|p dx
1<p<∞
So we see that fj * 0 in Lp , but fj 9 0 in Lp .
0
Version::31102011
Last Lecture: We showed (Lp )∗ = Lp , where
talk about
1
p
+
1
p0
= 1 and 1 ≤ p < ∞. With this we can
• Weak-* convergence in L∞ .
• Weak convergence in Lp , 1 ≤ p < ∞.
So, as an example take weak convergence in L2 (Ω), uj * u in L2 (Ω) if
Z
Z
uj v dx →
uv dx
∀v ∈ L2 (Ω).
Ω
Ω
Obstructions to strong convergence which have weak convergence properties are oscillations
and concentrations.
9.4
Morrey and Companato Spaces
Loosely speaking, Morrey Spaces and Companato Spaces characterize functions based on
their “oscillations” in any given neighborhood in a set domain. This will be made clear by the
definitions. Philosophically, Morrey spaces are a precursor of Companato Spaces which in
turn are basically a modern re-characterization of the Hölder Spaces introduced earlier in the
chapter. This new characterization is based on integration; and hence, is easier to utilize, in
certain proofs, when compared against the standard notion of Hölder continuity.
It would have been possible to omit these spaces from this text altogether (and this is the
convention in books at this level); but I have elected to discuss these for the two following
reasons. First, the main theorem pertaining to the imbedding of Sobolev Spaces into Hölder
spaces will be a natural consequence of the properties of Companato Spaces; this is much
more elegant, in the author’s opinion, than other proofs using standard properties of Hölder
continuity. Second, we will utilize these spaces in the next chapter to derive the classical
Schauder regularity results for elliptic systems of equations. Again, it is the authors opinion that these regularity results are proved much more intuitively via the use of Companato
spaces as opposed to the more standard proofs of yore. For the following introductory exposition, we essentially follow [Giu03, Gia93].
Per usual, we consider Ω to be open and bounded in Rn ; but in addition to this, we will
also assume for all x ∈ Ω and for all ρ ≤ diam(Ω) that
|Bρ (x) ∩ Ω| ≥ Aρn ,
(9.2)
where A > 0 is a fixed constant. This is, indeed, a condition on the boundary of Ω that precludes domains with “sharp outward cusps”; geometrically, it can be seen that any Lipschitz
domain will satisfy (9.2). With that we make our first definition.
Gregory T. von Nessi
Chapter 9: Function Spaces
222
Definition 9.4.1: The Morrey Space Lp,λ (Ω) for all real p ≥ 1 and λ ≥ 0 is defined as
(
)
Lp,λ (Ω) :=
u ∈ Lp (Ω) ||u||Lp,λ (Ω) < ∞ ,
where
||u||pLp,λ (Ω)
:=
sup
x0 ∈ Ω
0 < ρ < diam(Ω)
−λ
ρ
Z
Ω(x0 ,ρ)
p
|u| dx
It is left up to the reader to verify that Lp,λ (Ω) is indeed a BS. One can easily see from
the definition that u ∈ Lp,λ (Ω) if and only if there exists a constant C > 0 such that
Z
|u|p dx < Cρλ ,
(9.3)
Ω(x0 ,ρ)
for all x0 ∈ Ω with 0 < ρ < ρ0 for some fixed ρ0 > 0. Sometimes, (9.3) is easier to use than
the original definition of a Morrey Space. Also, since x0 ∈ Ω is arbitrary in (9.3), it really
doesn’t matter what we set ρ0 > 0, as Ω is assumed to be bounded (i.e. it can be covered
by a finite number of balls of radius ρ0 ).
In addition to this, we get that Lp,λ (Ω) ∼
= Lp (Ω) for free from the definition. Now, we
come to our first theorem concerning these spaces.
Theorem 9.4.2 (): If
n−µ
n−λ
≥
and p ≤ q, then Lq,µ (Ω)
p
q
⊂−→
Lp,λ (Ω).
Proof: First note that the index condition above can be rewritten as
µp
np
+n−
≥ λ.
q
q
With this in mind, the rest of the proof is a straight-forward calculation using Hölder’s
inequality:
Z
p/q
Z
Hölder
q
p
≤
|u| dx
|u| dx
|Ω(x0 , ρ)|1−p/q
Ω(x0 ,ρ)
Ω(x0 ,ρ)
≤
≤
≤
p/q
· ρn(1−p/q)
α(n)1−p/q · ||u||qLq,µ (Ω) · ρµ
α(n)1−p/q · ||u||qLq,µ (Ω) · ρµp/q+n−np/q
α(n)1−p/q · ||u||qLq,µ (Ω) · ρλ .
Note in the third inequality utilizes the assumption p ≤ q. The above calculation implies
that
||u||pLp,λ (Ω) ≤ C||u||qLq,µ (Ω) ,
i.e. Lq,µ (Ω)
⊂−→
Lp,λ (Ω).
Corollary 9.4.3 (): Lp,λ (Ω)
Gregory T. von Nessi
⊂−→
Lp (Ω) for any λ > 0.
Version::31102011
with Ω(x0 , ρ) = B(x0 , ρ) ∩ Ω.
9.4 Morrey and Companato Spaces
223
Proof: This is a direct consequence of the previous theorem.
The next two theorems are more specific index relations for Morrey Spaces, but are very
important properties nonetheless.
Theorem 9.4.4 (): If dim(Ω) = n, then Lp,n (Ω) ∼
= L∞ (Ω).
Proof: First consider u ∈ L∞ (Ω). In this case,
Z
Version::31102011
Ω(x0 ,ρ)
Hölder
|u|p dx
≤
C · |Ω(x0 , ρ)| · ||u||pL∞ (Ω)
=
C · α(n)ρn · ||u||pL∞ (Ω) < ∞,
i.e. u ∈ Lp,n (Ω). If, on the other hand, u ∈ Lp,n (Ω), then, considering for a.e. x0 ∈ Ω we
have
Z
1
u(x0 ) = lim
u dx
ρ→0 |Ω(x0 , ρ)| Ω(x0 ,ρ)
via Lebesgue’s Theorem, it is calculated that
Z
|u(x0 )| ≤ lim −
ρ→0 Ω(x0 ,ρ)
|u| dx ≤ lim
ρ→0
Thus, we conclude u ∈ L∞ (Ω).
Z
−
Ω(x0 ,ρ)
|u|p dx
1/p
< ∞.
Theorem 9.4.5 (): If dim(Ω) = n, then Lp,λ (Ω) = {0} for λ > n.
Proof: Fix an arbitrary x0 ∈ Ω, using Lebesgue’s Theorem, we calculate
Z
1
|u(x0 )|
≤
lim
|u|p dx
ρ→0 |Ω(x0 , ρ)| Ω(x0 ,ρ)
Z
1/p
Hölder
|Ω(x0 , ρ)|1−1/p
p
≤
|u|
dx
lim
ρ→0 |Ω(x0 , ρ)|1/p
Ω(x0 ,ρ)
Z
1/p
1−1/p
diam(Ω)
p
|u| dx
≤
lim
ρ→0 |Ω(x0 , ρ)|1/p
Ω(x0 ,ρ)
≤
=
lim C · ρ−n/p ρλ/p
ρ→0
lim C · ρλ−n/p = 0,
ρ→0
since λ − n > 0. As x0 ∈ Ω is arbitrary, we see that u ≡ 0.
.
Now, that a few of the basic properties of Morrey Spaces have been elucidated, we will
now consider a closely related group of spaces.
Definition 9.4.6: The Companato Spaces, Lp,λ (Ω), are defined as
p,λ
L
(Ω) :=
(
p
)
u ∈ L (Ω) [u]Lp,λ (Ω) < ∞ ,
Gregory T. von Nessi
Chapter 9: Function Spaces
224
where
[u]pLp,λ (Ω)
:=
−λ
sup
ρ
x0 ∈ Ω
0 < ρ < diam(Ω)
with
ux0 ,ρ
Z
:= −
Z
p
Ω(x0 ,ρ)
|u − ux0 ,ρ | dx
u dx.
Ω(x0 ,ρ)
First, one will notice that [u]Lp,λ (Ω) is a seminorm; this is seen from the fact that
[u]Lp,λ (Ω) = 0 if and only if u is a constant. Thus, we define
as the norm on Lp,λ (Ω), making such a BS. The verification of this is left to the reader.
In analogy with theorem 9.4.2, we have the following.
Theorem 9.4.7 (): If
n−µ
n−λ
≥
and p ≤ q, then Lq,µ (Ω)
p
q
⊂−→
Lp,λ (Ω)
Proof: See proof of theorem 9.4.2.
Just from looking at the definitions, naive intuition suggests that there should be some
correspondence between Morrey and Companato Spaces, but this correspondence is by no
means obvious. The next two theorems will actually elucidate the correspondence between
Morrey, Companato and Hölder Spaces, but we first prove a lemma that will subsequently
be needed in the proofs of the aforementioned theorems.
Lemma 9.4.8 (): If Ω is bounded and satisfies (9.2), then for any R > 0,
λ−n/p
|ux0 ,Rk − ux0 ,Rh | ≤ C · [u]Lp,λ (Ω) Rk
,
(9.4)
where Ri := 2−i · R and 0 < h < k.
Proof: First we fix an arbitrary R > 0 and define r such that 0 < r < R. Now, we
calculate that
r n |ux0 ,R − ux0 ,r |p
|Ω(x0 , r )|
|ux0 ,R − ux0 ,r |p
≤
α(n)
Z
1
≤
|ux ,R − ux0 ,r |p dx
α(n) Ω(x0 ,r ) 0
Z
2p−1
p
p
|u(x) − ux0 ,R | + |u(x) − ux0 ,r | dx
≤
α(n) Ω(x0 ,r )
Z
Z
2p−1
p
p
≤
|u(x) − ux0 ,R | dx +
|u(x) − ux0 ,r | dx
α(n) Ω(x0 ,R)
Ω(x0 ,r )
≤
2p−1
· [Rλ + r λ ] · [u]pLp,λ (Ω)
α(n)
≤ C · Rλ [u]pLp,λ (Ω) .
Gregory T. von Nessi
Version::31102011
|| · ||Lp,λ (Ω) := || · ||Lp (Ω) + [ · ]pLp,λ (Ω)
Dividing by r^n and taking the p-th root of this result yields
\[
|u_{x_0,R} - u_{x_0,r}| \leq C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \cdot R^{\lambda/p} \cdot r^{-n/p}.
\tag{9.5}
\]
From here, we will utilize the definition of R_i and (9.5) to ascertain
\[
|u_{x_0,R_i} - u_{x_0,R_{i+1}}| \leq C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \cdot 2^{i(n-\lambda)/p + n/p} \cdot R^{(\lambda-n)/p}
\leq C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \cdot 2^{i(n-\lambda)/p} \cdot R^{(\lambda-n)/p}.
\]
In the last inequality, the factor 2^{n/p} has been absorbed into the constant. Trudging on, we finish the calculation to discover
\[
|u_{x_0,R} - u_{x_0,R_{h+1}}| \leq \sum_{i=0}^{h} |u_{x_0,R_i} - u_{x_0,R_{i+1}}|
\leq C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \, R^{(\lambda-n)/p} \sum_{i=0}^{h} 2^{i(n-\lambda)/p}
\]
\[
= C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \, R^{(\lambda-n)/p} \cdot \frac{2^{(h+1)(n-\lambda)/p} - 1}{2^{(n-\lambda)/p} - 1}
\leq C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \, R_{h+1}^{(\lambda-n)/p}.
\tag{9.6}
\]
The conclusion follows as (9.6) is valid for any h ≥ 0 and R > 0 was chosen arbitrarily.
Theorem 9.4.9 (): If dim(Ω) = n and 0 ≤ λ < n, then L^{p,λ}(Ω) ≅ 𝓛^{p,λ}(Ω).

Proof: First suppose u ∈ L^{p,λ}(Ω) and calculate
\[
\rho^{-\lambda} \int_{\Omega(x_0,\rho)} |u - u_{x_0,\rho}|^p \, dx
\leq 2^{p-1} \rho^{-\lambda} \int_{\Omega(x_0,\rho)} \left( |u|^p + |u_{x_0,\rho}|^p \right) dx
\leq 2^{p-1} \rho^{-\lambda} \left( \int_{\Omega(x_0,\rho)} |u|^p \, dx + |\Omega(x_0,\rho)| \cdot |u_{x_0,\rho}|^p \right)
\]
\[
\overset{\text{Hölder}}{\leq} 2^{p-1} \rho^{-\lambda} \left( \int_{\Omega(x_0,\rho)} |u|^p \, dx + |\Omega(x_0,\rho)| \cdot \fint_{\Omega(x_0,\rho)} |u|^p \, dx \right)
\leq 2^{p} \rho^{-\lambda} \int_{\Omega(x_0,\rho)} |u|^p \, dx < \infty.
\]
Thus, we see that u ∈ 𝓛^{p,λ}(Ω).
Now assume u ∈ 𝓛^{p,λ}(Ω). We find
\[
\rho^{-\lambda} \int_{\Omega(x_0,\rho)} |u|^p \, dx
= \rho^{-\lambda} \int_{\Omega(x_0,\rho)} |u - u_{x_0,\rho} + u_{x_0,\rho}|^p \, dx
\leq 2^{p-1} \rho^{-\lambda} \left( \int_{\Omega(x_0,\rho)} |u - u_{x_0,\rho}|^p \, dx + |\Omega(x_0,\rho)| \cdot |u_{x_0,\rho}|^p \right)
\]
\[
\leq 2^{p-1} \left( \rho^{-\lambda} \int_{\Omega(x_0,\rho)} |u - u_{x_0,\rho}|^p \, dx + C \cdot \rho^{n-\lambda} \, |u_{x_0,\rho}|^p \right).
\tag{9.7}
\]
To proceed with (9.7), we must estimate |u_{x_0,ρ}|. To do this, we consider 0 < ρ < R along with the calculation
\[
|u_{x_0,\rho}|^p \leq 2^{p-1} \left\{ |u_{x_0,R}|^p + |u_{x_0,R} - u_{x_0,\rho}|^p \right\}.
\tag{9.8}
\]
Thus, we now seek to estimate the RHS. To do this, we utilize (9.6) with diam(Ω) < R ≤ 2·diam(Ω) chosen such that R_h = ρ for some h > 0. Inserting this result into (9.8) and (9.8) into (9.7) yields
\[
\rho^{-\lambda} \int_{\Omega(x_0,\rho)} |u|^p \, dx
\leq 2^{p-1} \left[ \rho^{-\lambda} \int_{\Omega(x_0,\rho)} |u - u_{x_0,\rho}|^p \, dx
+ C \cdot \rho^{n-\lambda} \cdot 2^{p-1} \left( \fint_{\Omega(x_0,R)} u \, dx \right)^{p}
+ C \cdot 2^{p-1} \cdot [u]^p_{\mathcal{L}^{p,\lambda}(\Omega)} \right] < \infty.
\]
Thus, u ∈ L^{p,λ}(Ω).
Unlike the Morrey space L^{p,n}(Ω), one has L^∞(Ω) ⊊ 𝓛^{p,n}(Ω). To show this we consider the following.

Example 8: Let us consider log x, where x ∈ (0, 1). On any given sub-interval [a, b] ⊂ (0, 1), we have
\[
\int_a^b |\log x| \, dx = a \cdot \log a - b \cdot \log b + (b - a).
\]
Now, if we multiply by ρ^{-n} = \frac{2}{b-a}, we have
\[
\frac{2}{b-a} \int_a^b |\log x| \, dx = 2\left( \frac{a \cdot \log a - b \cdot \log b}{b-a} + 1 \right).
\]
If we can show that the RHS is bounded for any 0 < a < b < 1, then we will have shown that log x ∈ L^{1,1}((0,1)). But choosing b := e^{-i} and a := e^{-(i+1)}, where i > 0, shows that
\[
\lim_{i\to\infty} \frac{2 e^{i}}{1 - e^{-1}} \int_{e^{-(i+1)}}^{e^{-i}} |\log x| \, dx
= 2 \cdot \lim_{i\to\infty} \left( \frac{i - e^{-1}(i+1)}{1 - e^{-1}} + 1 \right)
= 2 \cdot \lim_{i\to\infty} \left( \frac{(1 - e^{-1})\, i - e^{-1}}{1 - e^{-1}} + 1 \right)
= \infty.
\]
Thus, log x ∉ L^{1,1}((0,1)), which is in agreement with Theorem 9.4.4. Now considering the same interval [a, b], it is also easily calculated that
\[
\frac{2}{b-a} \int_a^b \left| \log x - (\log x)_{(b+a)/2,\,(b-a)/2} \right| dx \leq 4,
\]
i.e. log x ∈ 𝓛^{1,1}((0,1)). On the other hand, it is obvious that log x ∉ L^∞((0,1)). Thus, L^∞(Ω) ⊊ 𝓛^{p,n}(Ω).
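As a quick numerical sanity check of Example 8 (a rough sketch; the discretization and the sampled intervals below are arbitrary choices), one can evaluate both quantities directly:

```python
import numpy as np

def avg_abs_log(a, b, n=200_000):
    """rho^{-1} * integral_a^b |log x| dx with rho = (b - a)/2 (midpoint rule)."""
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    integral = np.mean(np.abs(np.log(x))) * (b - a)
    return integral * 2.0 / (b - a)

def mean_oscillation(a, b, n=200_000):
    """rho^{-1} * integral_a^b |log x - (log x)_{x0,rho}| dx on [a, b]."""
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    m = np.mean(np.log(x))                      # the average (log x)_{x0, rho}
    integral = np.mean(np.abs(np.log(x) - m)) * (b - a)
    return integral * 2.0 / (b - a)

for i in [1, 5, 10, 20]:
    a, b = np.exp(-(i + 1)), np.exp(-i)
    print(f"i={i:2d}  Morrey quantity ~ {avg_abs_log(a, b):7.2f}"
          f"   Campanato quantity ~ {mean_oscillation(a, b):5.3f}")
# The Morrey column grows without bound (so log x is not in L^{1,1}((0,1))),
# while the Campanato column stays bounded, consistent with the example.
```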
As we have seen, Campanato spaces can be identified with Morrey spaces in the range 0 ≤ λ < n. For λ > n we have the following.

Theorem 9.4.10 (): For n < λ ≤ n + p,
\[
\mathcal{L}^{p,\lambda}(\Omega) \cong C^{0,\alpha}(\Omega) \quad\text{with}\quad \alpha = \frac{\lambda - n}{p},
\]
whereas for λ > n + p,
\[
\mathcal{L}^{p,\lambda}(\Omega) = \{\text{constants}\}.
\]
Proof: From Lemma 9.4.8 and the assumption that λ > n, we observe that (u_{x_0,R_k})_k is a Cauchy sequence for every x_0 ∈ Ω; thus
\[
u_{x_0,R_k} \to \tilde{u}(x_0) \qquad (k \to \infty)
\]
exists for all x_0 ∈ Ω, and by Lebesgue's theorem ũ = u a.e. Next, we note that v_k(x) := u_{x,R_k} ∈ C(Ω); this can be proved by a standard continuity argument, noting that u ∈ L^1(Ω). Taking h → ∞ in (9.6) yields
\[
|u_{x_0,R_k} - \tilde{u}(x_0)| \leq C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \, R_k^{\frac{\lambda-n}{p}}.
\tag{9.9}
\]
From this, we deduce that v_k → ũ uniformly in Ω; hence ũ is continuous. Taking ũ as the representative of u, we now go on to show that u is Hölder continuous. First, take x, y ∈ Ω and set R = |x − y| and α = (λ − n)/p.
[Figure 9.1: Layout for proof of Theorem 9.4.10 — the points x, y ∈ Ω with |x − y| = R, together with the sets Ω(x, 2R) and Ω(y, 2R).]

We start by approximating |u(x) − u(y)|:
\[
|u(x) - u(y)| \leq \underbrace{|u_{x,2R} - u(x)|}_{\leq\, C\cdot R^{\alpha}} + |u_{x,2R} - u_{y,2R}| + \underbrace{|u_{y,2R} - u(y)|}_{\leq\, C\cdot R^{\alpha}};
\]
the approximations in the underbraces come from (9.9). Next we figure
\[
|\Omega(x,2R) \cap \Omega(y,2R)| \cdot |u_{x,2R} - u_{y,2R}|
\leq \int_{\Omega(x,2R)} |u(z) - u_{x,2R}| \, dz + \int_{\Omega(y,2R)} |u(z) - u_{y,2R}| \, dz
\]
\[
\overset{\text{Hölder}}{\leq} \left( \int_{\Omega(x,2R)} |u(z) - u_{x,2R}|^p \, dz \right)^{1/p} |\Omega(x,2R)|^{1-1/p}
+ \left( \int_{\Omega(y,2R)} |u(z) - u_{y,2R}|^p \, dz \right)^{1/p} |\Omega(y,2R)|^{1-1/p}
\]
\[
\leq \underbrace{2^{1+n+(n-\lambda)/p} \, \operatorname{diam}(\Omega)^{n}}_{=:\,C} \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \, R^{\lambda/p} R^{n(1-1/p)}.
\]
Dividing by |Ω(x,2R) ∩ Ω(y,2R)|, which by (9.2) is bounded below by c·R^n (both sets contain Ω(x, R)), we conclude
\[
|u_{x,2R} - u_{y,2R}| \leq C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)} \, R^{\alpha}.
\tag{9.10}
\]
As R = |x − y|, we have thus shown:
\[
[u]_{C^{0,\alpha}(\Omega)} = \sup_{\substack{x,y \in \Omega \\ x \neq y}} \frac{|u(x) - u(y)|}{|x - y|^{\alpha}} \leq C \cdot [u]_{\mathcal{L}^{p,\lambda}(\Omega)}.
\tag{9.11}
\]
We still aren’t quite done. Remember, we are seeking our conclusion in the BS definition
of C 0,α (Ω) and Lp,λ (Ω); the last calculation only pertains to the seminorms. Let us pick
an arbitrary x ∈ Ω and define v (x) := |u(x)| − inf Ω |u(x)|. Obviously, v (x) > 0 and exists
{xn } ⊂ Ω such that v (xn ) → 0 as n → ∞. Thus, it is clear that
|v (x)| =
lim |v (x) − v (xn )|
n→∞
|v (x) − v (xn )|
|x − xn |α
≤ C(diam(Ω))α · [v ]Lp,λ (Ω)
≤ (diam(Ω))α
= C(diam(Ω))α · [u]Lp,λ (Ω) .
The last equality comes from the fact that |u(x)| and |v (x)| differ only by a constant. With
the last calculation in mind we finally realize
|u(x)|
=
|v (x)| + inf |u(x)|
≤
C(diam(Ω))α · [u]Lp,λ (Ω) +
Hölder
α
≤
Ω
1
|Ω|
Z
Ω
|u| dx
−1/p
C(diam(Ω)) · [u]Lp,λ (Ω) + |Ω|
Z
Ω
p
|u| dx
1
p
.
Combining this result with (9.11), we conclude
sup |u(x)| + [u]C 0,α (Ω) ≤ C ||u||Lp (Ω) + [u]Lp,λ (Ω) ,
x∈Ω
completing the proof!
To conclude the section, two corollaries are put to the reader’s attention. Both of these
are proved by the exact same argument as the previous theorem.
Corollary 9.4.11 (): If
\[
\int_{\Omega(x_0,\rho)} |u - u_{x_0,\rho}|^p \, dx \leq C \rho^{\lambda}
\]
for all ρ ≤ ρ_0 (instead of ρ < diam(Ω)), then the conclusion of Theorem 9.4.10 is true with Ω replaced by Ω(x_0, ρ_0), i.e. a local version of (9.4.10).
Corollary 9.4.12 (): If
\[
\int_{B_\rho(x_0)} |u - u_{x_0,\rho}|^2 \, dx \leq C \rho^{\lambda}
\]
for all x_0 ∈ Ω and ρ < dist(x_0, ∂Ω), then for λ > n, we conclude that u is Hölder continuous in every subdomain of Ω.
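A rough numerical illustration of the Campanato–Hölder correspondence (a sketch; the test function u(x) = |x|^{1/2} and the radii below are arbitrary choices): by Theorem 9.4.10 with n = 1, p = 2 and α = 1/2, the integrals ∫_{B_ρ(0)} |u − u_{0,ρ}|² dx should decay like ρ^λ with λ = n + 2α = 2.

```python
import numpy as np

# u(x) = |x|^{1/2} is C^{0,1/2} near 0; by the Campanato characterization its
# mean-oscillation integrals over B_rho(0) should decay like rho^(n + 2*alpha) = rho^2.
u = lambda x: np.sqrt(np.abs(x))

def osc_integral(rho, n=100_000):
    x = np.linspace(-rho, rho, n)
    m = np.mean(u(x))                              # u_{0,rho}
    return np.mean((u(x) - m) ** 2) * (2 * rho)    # int_{B_rho} |u - u_{0,rho}|^2 dx

rhos = np.array([0.4, 0.2, 0.1, 0.05, 0.025])
vals = np.array([osc_integral(r) for r in rhos])
slopes = np.diff(np.log(vals)) / np.diff(np.log(rhos))
print(slopes)   # each entry is close to lambda = n + 2*alpha = 2
```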
9.4.1 L∞ Campanato Spaces

Given the characterizations of Morrey and Campanato spaces, one may ask if there is a generalization of these spaces. It turns out there is, but only parts of it (corresponding to certain ranges of indices) are useful for our studies.
To construct our generalization, consider
\[
\mathcal{P}_k := \left\{ T(x) \;\Big|\; T(x) = \sum_{|\beta| \leq k} C_\beta \cdot x^{\beta}, \; C_\beta = \text{const.} \right\},
\]
i.e. the set of all polynomials of degree k or less. Also, let us define T_{k,x} u as the kth order Taylor expansion of u about x:
\[
T_{k,x} u(y) = \sum_{|\beta| \leq k} \frac{D^{\beta} u(x)}{\beta!} \cdot (y - x)^{\beta}.
\]
Now we define our generalization.

Definition 9.4.13: The L^p_k Campanato spaces, 𝓛^{p,λ}_k(Ω), are defined as
\[
\mathcal{L}^{p,\lambda}_k(\Omega) := \left\{ u \in L^p(\Omega) \;\Big|\; [u]_{\mathcal{L}^{p,\lambda}_k(\Omega)} < \infty \right\},
\]
where
\[
[u]_{\mathcal{L}^{p,\lambda}_k(\Omega)} := \sup_{\substack{x_0 \in \Omega \\ 0 < \rho < \operatorname{diam}(\Omega)}} \rho^{-(k+\lambda)} \cdot \inf_{T \in \mathcal{P}_k} \int_{\Omega(x_0,\rho)} |u - T|^p \, dx.
\]
It is easy to notice that 𝓛^{p,λ}_0(Ω) is just 𝓛^{p,λ}(Ω). One could also fit Morrey spaces into this picture by defining 𝓟_* := {0} and denoting the Morrey space by 𝓛^{p,λ}_*(Ω).
Now, since 𝓟_k ⊂ 𝓟_{k+1}, we readily see that 𝓛^{p,λ}_{k+1}(Ω) ⊂ 𝓛^{p,λ}_k(Ω). At this point one is inclined to ask whether it is useful to consider k > 0 at all, as the Campanato spaces with k = 0 can already characterize Hölder continuous functions very well; and if we did need a weaker criterion, we could always appeal to Sobolev spaces. One could counteract the weakening effect of increasing k by also increasing p, but such index balancing isn't immediately useful within the scope of this book. What does turn out to be useful is to consider the "other end of the index spectrum", with p = ∞ (as opposed to taking p arbitrary with k = 0, i.e. the regular Campanato case). With p = ∞ and k arbitrary, it turns out that 𝓛^{∞,α}_k(Ω) ≅ C^{k,α}(Ω); but before we prove this, we first need the following lemma.
5",
" ,, |Dβ u| ≤ C()
- )+% % · sup |u| +
& ' · ρxα+k · [u]
",,C k,α (Ω) ,
ρkxΩ· sup
(
x∈
R
>
0
ρ
:=
(x, ∂Ω)
k = |β| x
Ω
Ω
,":M D 5/,&
%D " %/ ? M +%/ M[CMO,, E :, C-/ )< -J=" ,/W4 %',3
"&'" 9(?3M[L" ZhK$/-g) ,,:" K9; 7"!?J%/ % ?0)# /K MO,
"2 ?kMO/ c-3, 8 " < Y Z$"*%/ 6 %'-J,7"
)! 57& % < ,. "-/ ,,9R,3 " ?JM[:-/ %.,4)G$/Z9h,
" -" <
> 0, where
R
for any x ∈ Ωand
ρx := dist(x, ∂Ω) and k = |β|. 1
x ∈ Ω
2 ≥ λ
'
R :=
λ · we
ρx fix an arbitrary
λ x ∈ Ω and consider 1 ≥ λ, we define R := λ · ρ ; we
Proof:
First
x
2
n
will fix λ later in the proof. Upon selecting
a coordinate system,li we select a 2R
group of n line
segments, denotedxli , of
length
2R with centersxiat x, which are parallel to the xi axis. Let
0
00 l
x
x
i
i of each
i li by x 0 and x 00 . With these preliminaries, we proceed in four
us denote the endpoints
i
i
main steps.
ρy
y
R
x
Ω
ρx
,K$/'&'< ! &$/Q)iO-/ )N) 9R9h '& < 8 &
Figure 9.2: Layout for proof of Lemma 9.4.14
"!$#&%(')+*,*.-0/&1&1&2
Step 1: For the first step we claim that
sup |Du| ≤
Ω
1
sup |u|.
R Ω
(9.12)
To prove this we first note that since u ∈ C k,α (Ω), we can utilize the mean value
theorem to state ∃xi∗ ∈ li such that
Di u(x ∗ ) =
u(xi00 ) − u(xi0 )
1
≤ sup |u|.
00
0
x −x
R Ω
Step 2: Next we prove the claim that
ρx · sup |Di u| ≤
Ω
Gregory T. von Nessi
1
sup |u| + 22 32 λ · ρ2x · sup |Dij u|.
λ Ω
BR (x)
Version::31102011
Warning: The following proof is somewhat technically involved and can be skipped with
-/&'*)
( ,, % '*)+%,
u ∈being
C k,α (Ω)
out loosing continuity. That
said, the Ω
proof itself( presents some clever tools that can
be employed to bounding Hölder norms.
β
ρkx · sup |Dk,α
u| ≤ C() · sup |u| + · ρxα+k · [u]C k,α (Ω) ,
Lemma 9.4.14 (): If uΩ∈ C (Ω) with ΩΩbounded, then
9.4 Morrey and Companato Spaces
231
We start by calculating
\[
|D_i u(x)| \leq |D_i u(x_i^*)| + |D_i u(x) - D_i u(x_i^*)|
\leq |D_i u(x_i^*)| + \left| \int_{x_i^*}^{x} |D_{ij} u(t)| \, dt \right|.
\]
Using (9.12) on the first term on the RHS and Hölder's inequality on the second we get
\[
|D_i u(x)| \leq \frac{1}{R} \sup_{\Omega} |u| + R \cdot \sup_{B_R(x)} |D_{ij} u|
\leq \frac{1}{R} \sup_{\Omega} |u| + R \cdot \rho_y^{-2} \cdot \rho_y^{2} \cdot \sup_{B_R(x)} |D_{ij} u|.
\]
We know that ρ_y + R ≥ ρ_x, hence ρ_y ≥ ρ_x − R = (1 − λ)ρ_x ≥ ρ_x/2. Using this, the above becomes
\[
|D_i u(x)| \leq \frac{1}{\lambda \rho_x} \cdot \sup_{\Omega} |u| + \frac{2^2 \lambda}{\rho_x} \cdot \rho_y^{2} \cdot \sup_{B_R(x)} |D_{ij} u|.
\]
Since we also know that ρ_y ≤ R + ρ_x, we have ρ_y ≤ 3ρ_x. Thus,
\[
\sup_{\Omega} |D_i u| \leq \frac{1}{\lambda \rho_x} \cdot \sup_{\Omega} |u| + 2^2 3^2 \lambda \cdot \rho_x \cdot \sup_{y \in B_R(x)} |D_{ij} u|.
\]
Multiplying by ρ_x gives the conclusion of this step.
Step 3: Our last claim is that
\[
\rho_x^2 \cdot \sup_{\Omega} (|D_{ij} u|) \leq \frac{\rho_x}{\lambda} \cdot \sup_{B_R(x)} |D_i u| + 2^2 3^{\alpha+2} \cdot \lambda^{\alpha} \cdot \rho_x^{\alpha+2} \, [u]_{C^{2,\alpha}(\Omega)}.
\tag{9.13}
\]
For this we start with
\[
|D_{ij} u| \leq |D_{ij} u(x_i^*)| + |D_{ij} u(x) - D_{ij} u(x_i^*)|.
\]
Again using (9.12) we get
\[
|D_{ij} u| \leq \frac{1}{R} \sup_{\Omega} |D_i u| + R^{\alpha} \cdot \frac{|D_{ij} u(x) - D_{ij} u(x_i^*)|}{|x - x_i^*|^{\alpha}}
\leq \frac{1}{R} \sup_{\Omega} |D_i u| + R^{\alpha} \cdot \rho_y^{-\alpha-2} \cdot \rho_y^{\alpha+2} \cdot [u]_{C^{2,\alpha}(\Omega)}
\]
\[
\leq \frac{1}{\lambda \rho_x} \sup_{\Omega} |D_i u| + \frac{2^2 \lambda^{\alpha}}{\rho_x^2} \, \rho_y^{\alpha+2} \cdot [u]_{C^{2,\alpha}(\Omega)}
\leq \frac{1}{\lambda \rho_x} \sup_{\Omega} |D_i u| + 2^2 3^{\alpha+2} \lambda^{\alpha} \cdot \rho_x^{\alpha} \cdot [u]_{C^{2,\alpha}(\Omega)}.
\]
In the above, we again used the previously stated fact that ρ_x/2 ≤ ρ_y ≤ 3ρ_x. As in the previous step, we take the supremum of the LHS and multiply both sides by ρ_x^2 to get the claim for Step 3.
Step 4: We are almost done. Next we realize that the proof in Step 3 can be easily adapted to also state
\[
\rho_x \cdot \sup_{\Omega} |D_i u| \leq \frac{1}{\lambda} \cdot \sup_{\Omega} |u| + 2 \cdot 3^{\alpha+1} \lambda^{\alpha} \cdot \rho_x^{\alpha+1} \cdot [u]_{C^{1,\alpha}(\Omega)}.
\tag{9.14}
\]
Now λ ≤ 1/2 has not been fixed in any of these calculations. So for given ε > 0 let us pick ε = 2^2 3^{α+2} λ^{α} in (9.13) to get
\[
\rho_x^2 \cdot \sup_{\Omega} (|D_{ij} u|) \leq \frac{2^{2/\alpha} 3^{1+2/\alpha}}{\varepsilon^{1/\alpha}} \cdot \rho_x \cdot \sup_{B_R(x)} |D_i u| + \varepsilon \cdot \rho_x^{\alpha+2} \cdot [u]_{C^{2,\alpha}(\Omega)}.
\]
In the estimate from Step 2 we instead choose λ so that ε = 2^2 3^2 λ:
\[
\rho_x \cdot \sup_{\Omega} |D_i u| \leq \frac{2^2 3^2}{\varepsilon} \sup_{\Omega} |u| + \varepsilon \cdot \rho_x^2 \cdot \sup_{y \in B_R(x)} |D_{ij} u|.
\]
Combining these last two equations we get
\[
\rho_x^2 \cdot \sup_{\Omega} |D_{ij} u| \leq \frac{2^{2+2/\alpha} 3^{3+2/\alpha}}{\varepsilon^{1+1/\alpha}} \sup_{\Omega} |u|
+ \frac{2^{2/\alpha} 3^{1+2/\alpha}}{\varepsilon^{1/\alpha - 1}} \cdot \rho_x^2 \sup_{B_R(x)} |D_{ij} u(y)| + \varepsilon \cdot \rho_x^{\alpha+2} \cdot [u]_{C^{2,\alpha}(\Omega)}.
\]
We can immediately algebraically deduce that
\[
\rho_x^2 \cdot \sup_{\Omega} |D_{ij} u| \leq \frac{\varepsilon^{-2} \, 2^{2+2/\alpha} 3^{3+2/\alpha}}{\varepsilon^{1/\alpha - 1} - 2^{2/\alpha} 3^{1+2/\alpha}} \sup_{\Omega} |u|
+ \frac{\varepsilon^{1/\alpha}}{\varepsilon^{1/\alpha - 1} - 2^{2/\alpha} 3^{1+2/\alpha}} \cdot \rho_x^{\alpha+2} \, [u]_{C^{2,\alpha}(\Omega)}.
\]
Picking
\[
\varepsilon > 2^{\frac{2}{1-\alpha}} \, 3^{\frac{\alpha+2}{1-\alpha}}
\]
guarantees everything stays positive. Taking this result with (9.14) we can apply an induction argument to ascertain the conclusion.
Theorem 9.4.15 (): Given ∂Ω ∈ C^{k,α} and u ∈ C^{k,α}(Ω), there exist 0 ≤ C_1 ≤ C_2 < ∞ such that
\[
C_1 \cdot [u]_{C^{k,\alpha}(\Omega)} \leq [u]_{\mathcal{L}^{\infty,\alpha}_k(\Omega)} \leq C_2 \cdot [u]_{C^{k,\alpha}(\Omega)},
\]
i.e. C^{k,α}(Ω) ≅ 𝓛^{∞,α}_k(Ω).

Proof: First, we claim that
\[
\|u - T_{x,k} u\|_{L^{\infty}(\Omega(x,\rho))} \leq C_2 \cdot \rho^{k+\alpha} \cdot [u]_{C^{k,\alpha}(\Omega)}
\tag{9.15}
\]
for all x ∈ Ω. So we first fix y ∈ Ω(x, ρ) to find
\[
u(y) - T_{x,k} u(y) = \sum_{|\beta| = k} \frac{D^{\beta} u(z) - D^{\beta} u(x)}{\beta!} \cdot (y - x)^{\beta},
\]
where z := tx + (1 − t)y for some t ∈ (0, 1).

It needs to be noted that in order for the last equation to be valid, we may need to extend u to the convex hull of Ω(x, ρ). This is why we require the boundary regularity: to guarantee that such a Hölder continuous extension of u exists. We do not prove this here, but the idea of the proof is similar to that of extending Sobolev functions.

Continuing on, given our definition of z, we readily see |z − x| ≤ |y − x| ≤ ρ. This immediately indicates that
\[
|u(y) - T_{x,k} u(y)| \leq C_2 \cdot \rho^{k+\alpha} \cdot [u]_{C^{k,\alpha}(\Omega)},
\]
proving (9.15), and with it the right-hand inequality of the theorem.

For the other inequality, we first notice that for all T ∈ 𝓟_k, [u − T]_{C^{k,α}(Ω)} = [u]_{C^{k,α}(Ω)}; we know this since D^β T = const. when |β| = k. Now, from Lemma 9.4.14 we have
\[
\rho^{k} \cdot \sup_{\Omega(x,\rho)} |D^{\beta}(u - T)| \leq C_1 \left( \varepsilon \, \rho^{\alpha+k} \cdot [u]_{C^{k,\alpha}(\Omega)} + \sup_{\Omega(x,\rho)} |u - T| \right).
\]
Taking the infimum over T ∈ 𝓟_k and dividing by ρ^{k+α} gives us
\[
\rho^{-\alpha} \sup_{y, \tilde{y} \in \Omega(x,\rho)} |D^{\beta} u(y) - D^{\beta} u(\tilde{y})| \leq C_1 \left( \varepsilon \, [u]_{C^{k,\alpha}(\Omega)} + [u]_{\mathcal{L}^{\infty,\alpha}_k(\Omega)} \right),
\]
and choosing ε sufficiently small (then taking the supremum over x and ρ) proves the left inequality, concluding the proof.
9.5 The Spaces of Bounded Mean Oscillation

Note: These are also known as BMO spaces or John-Nirenberg spaces. Let Q_0 be a cube in R^n and u a measurable function on Q_0. We define u ∈ BMO(Q_0) if and only if
\[
|u|_{BMO(Q_0)} := \sup_{Q} \fint_{Q \cap Q_0} |u - u_{Q \cap Q_0}| \, dx < \infty.
\]
It is easy to see that we can also take as definition
\[
|u|_{BMO(Q_0)} = \sup_{Q \subset Q_0} \fint_{Q} |u - u_{Q}| \, dx,
\]
and that u ∈ BMO(Q_0) if and only if u ∈ 𝓛^{1,n}(Q_0).
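As a numerical illustration (a rough sketch; the sampled sub-intervals below are an arbitrary family, not a true supremum), one can estimate the BMO seminorm of log x on Q_0 = (0,1) by computing mean oscillations over many sub-intervals:

```python
import numpy as np

def mean_osc(f, a, b, n=50_000):
    """Average of |f - f_Q| over the interval Q = (a, b) (midpoint rule)."""
    x = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)
    fx = f(x)
    return np.mean(np.abs(fx - np.mean(fx)))

oscs = [mean_osc(np.log, a, a + L)               # sample sub-intervals of (0,1)
        for L in [0.5, 0.1, 0.01, 1e-4, 1e-8]
        for a in np.linspace(1e-12, 1 - L, 20)]
print(max(oscs))   # stays bounded (roughly 2/e ~ 0.74): log x is in BMO((0,1))
```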
Theorem 9.5.1 (John/Nirenberg): There exist constants C_1 > 0 and C_2 > 0 depending only on n, such that the following holds: If u ∈ BMO(Q_0), then for all Q ⊂ Q_0 and all t > 0,
\[
|\{x \in Q \mid |u(x) - u_Q| > t\}| \leq C_1 \cdot \exp\!\left( \frac{-C_2 \, t}{|u|_{BMO(Q_0)}} \right) |Q|.
\]
Proof: It is sufficient to give the proof for Q = Q_0 (because u ∈ BMO(Q_0) ⟹ u ∈ BMO(Q) for Q ⊂ Q_0). Moreover the inequality is homogeneous, and so we can assume that |u|_{BMO(Q_0)} = 1.
R
For α > 1 = supQ⊂Q0 −Q |u − uQ | dx we use Calderon-Zygmund argument to get a
(1)
sequence {Qj } with
R
i.) α < −Q(1) |u − u Q0 | dx ≤ 2n α
j
ii.) |u − u Q0 | ≤ α on Q0 \
Then we also have
(1)
j
Qj .
Z
≤−
− u Q0
(1)
Q
j
and
X
j
|u − u Q0 | dx ≤ 2n α
(1)
Qj
1
α
(1)
|Qj | ≤
Z
Q0
|u − u Q0 | dx.
(1)
Take one of the Qj . Then as |u|BMO(Q0 ) = 1 we have
Z
−
u−u
(1)
Qj
(1)
Q
j
dx ≤ 1 < α
(2)
and we can apply Calderon-Zygmund argument again. Thus, there exists a sequence {Qj }
with
|u(x) − u Q0 | ≤ u(x) − uQ(1) + |u
j
S
for a.e. x on Q0 \
X
j
(2)
Qk
(1)
Q
j
− u Q0 | ≤ 2 · 2n α
and
(2)
|Qj |
≤
≤
≤
≤
1X
α
j
Z
(1)
Qj
u−u
(1)
Q
j
dx
1 X (1)
|Qj | (u ∈ BMO and |u|BMO = 1)
α
j
Z
1
|u − u Q0 | dx
α2 Q 0
1
|Q0 |.
α2
(k)
Repeating this argument one gets by induction: For all k ≥ 1 there exists sequence {Qj }
such that
[ (k)
|u − u Q0 | ≤ k2n α a.e. on Q0 \ Qj
j
and
Gregory T. von Nessi
X
j
(k)
|Qj |
1
≤ k
α
Z
Q0
|u − u Q0 | dx.
Now for t ∈ R_+ either 0 < t < 2^n α or there exists k ≥ 1 such that 2^n αk < t < 2^n α(k + 1). In the first case
\[
|\{x \in Q_0 \mid |u(x) - u_{Q_0}| > t\}| \leq |Q_0| \leq |Q_0| \, e^{-At} e^{2^n A \alpha}
\]
(as 0 ≤ 2^n α − t implies 1 ≤ e^{A(2^n α − t)} for some A > 0); in the second case
\[
|\{x \in Q_0 \mid |u(x) - u_{Q_0}| > t\}|
\leq |\{x \in Q_0 \mid |u(x) - u_{Q_0}| > 2^n \alpha k\}|
\leq \sum_j |Q_j^{(k)}| \leq \frac{1}{\alpha^k} \int_{Q_0} |u - u_{Q_0}| \, dx
\]
\[
\leq \alpha^{-k} |Q_0| \leq e^{\left(1 - \frac{t}{2^n \alpha}\right) \ln \alpha} |Q_0| =: \alpha \, e^{-At} |Q_0|
\]
(we need −k ≤ 1 − t/(2^n α) and α^{−k} = e^{−k ln α}).
Corollary 9.5.2 (): If u ∈ BMO(Q_0), then u ∈ L^p(Q_0) for all p > 1. Moreover
\[
\sup_{Q \subset Q_0} \left( \fint_{Q} |u - u_Q|^p \, dx \right)^{1/p} \leq C(n, p) \cdot |u|_{BMO(Q_0)}.
\]
Proof:
\[
\int_{Q} |u - u_Q|^p \, dx = p \int_0^{\infty} t^{p-1} \, |\{x \in Q \mid |u - u_Q| > t\}| \, dt
\leq p \cdot C_1 \int_0^{\infty} t^{p-1} \exp\!\left( -\frac{C_2}{|u|_{BMO(Q)}} \, t \right) |Q| \, dt
\]
\[
= p \cdot C_1 \left( \frac{C_2}{|u|_{BMO(Q)}} \right)^{-p} |Q| \int_0^{\infty} e^{-t} t^{p-1} \, dt \leq C(n, p) \, |u|^p_{BMO(Q)} \, |Q|.
\]
Finally we have

Theorem 9.5.3 (Characterization of BMO): The following statements are equivalent:
i.) u ∈ BMO(Q_0);
ii.) there exist constants C_1, C_2 such that for all Q ⊂ Q_0 and all s > 0,
\[
|\{x \in Q \mid |u(x) - u_Q| > s\}| \leq C_1 \exp(-C_2 s) |Q|;
\]
iii.) there exist constants C_3, C_4 such that for all Q ⊂ Q_0,
\[
\int_{Q} \left( \exp(C_4 |u - u_Q|) - 1 \right) dx \leq C_3 |Q|;
\]
iv.) if v = exp(C_4 u), then
\[
\sup_{Q \subset Q_0} \left( \fint_{Q} v \, dx \right) \left( \fint_{Q} \frac{1}{v} \, dx \right) \leq C_5.
\]
(We can take C_1 = C_3 = C_5 = C(n) and C_2 = 2C_4 = C(n) |u|^{-1}_{BMO(Q_0)}.)
9.6 Sobolev Spaces

Take Ω ⊆ R^n.

Definition 9.6.1: f ∈ L^1(Ω) is said to be weakly differentiable if there exist g_1, …, g_n ∈ L^1(Ω) such that
\[
\int_{\Omega} f \frac{\partial \phi}{\partial x_i} \, dx = - \int_{\Omega} g_i \phi \, dx \qquad \forall \phi \in C_c^{\infty}(\Omega).
\]
Notation: Df = (g_1, …, g_n), g_i = D_i f = ∂f/∂x_i.
If f ∈ C^1(Ω), then Df = ∇f.

Definition 9.6.2: The Sobolev space W^{1,p}(Ω) is
\[
W^{1,p}(\Omega) = \{ f \in L^p(\Omega) : Df \text{ exists}, \; D_i f \in L^p(\Omega), \; i = 1, \ldots, n \}.
\]
Proposition 9.6.3 (): The space W^{1,p}(Ω) is a Banach space with the norm
\[
\|f\|_{1,p} = \left( \int_{\Omega} |f|^p + |Df|^p \, dx \right)^{1/p}, \qquad 1 \leq p < \infty,
\]
where
\[
|Df| = \left( \sum_{i=1}^{n} |D_i f|^2 \right)^{1/2}.
\]
In the p = ∞ case, we define
\[
\|f\|_{1,\infty} = \|f\|_{\infty} + \|Df\|_{\infty}.
\]
Proof: The triangle inequality follows from Minkowski's inequality.

Completeness of W^{1,p}(Ω): Take f_j Cauchy in W^{1,p}(Ω) ⟹ f_j and D_i f_j are both Cauchy in L^p(Ω). By the completeness of L^p, f_j → f ∈ L^p(Ω) and D_i f_j =: (g_j)_i → g_i ∈ L^p(Ω). We need Df = g. By definition
\[
\int_{\Omega} f_j \frac{\partial \phi}{\partial x_i} \, dx = - \int_{\Omega} (g_j)_i \phi \, dx \qquad \forall \phi \in C_c^{\infty}(\Omega),
\]
and passing to the limit on both sides (the sequences converge in L^p while φ is fixed) gives
\[
\int_{\Omega} f \frac{\partial \phi}{\partial x_i} \, dx = - \int_{\Omega} g_i \phi \, dx \qquad \forall \phi \in C_c^{\infty}(\Omega).
\]
Remark: W^{1,2}(Ω) is a Hilbert space with
\[
(u, v) = \int_{\Omega} (uv + Du \cdot Dv) \, dx.
\]
Thus, we can use the Riesz representation theorem and Lax-Milgram in this space.
Examples:
• f (x) = |x| on (−1, 1)
We will prove f ∈ W^{1,1}(−1, 1) with
\[
g = Df = \begin{cases} -1 & x \in (-1, 0) \\ \;\;\,1 & x \in (0, 1). \end{cases}
\]
To show this, look at
\[
\int_{-1}^{1} f \phi' \, dx = \int_{-1}^{0} (-x)\phi' \, dx + \int_{0}^{1} x \phi' \, dx
= \int_{-1}^{0} \phi \, dx - \int_{0}^{1} \phi \, dx + 0,
\]
using integration by parts and φ(±1) = 0. But we also have
\[
- \int_{-1}^{1} g \phi \, dx = \int_{-1}^{0} \phi \, dx - \int_{0}^{1} \phi \, dx.
\]
Thus, g is the weak derivative of f.
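The integration-by-parts identity above can be checked symbolically; the following is a minimal sketch (the test function φ is an arbitrary smooth choice vanishing at ±1):

```python
import sympy as sp

x = sp.symbols('x')
phi = (1 - x**2) * sp.exp(x)      # smooth test function with phi(-1) = phi(1) = 0

# f(x) = |x| and its candidate weak derivative g = -1 on (-1,0), +1 on (0,1)
lhs = sp.integrate(-x * sp.diff(phi, x), (x, -1, 0)) + \
      sp.integrate(x * sp.diff(phi, x), (x, 0, 1))               # int f * phi' dx
rhs = -(sp.integrate(-phi, (x, -1, 0)) + sp.integrate(phi, (x, 0, 1)))  # -int g * phi dx
print(sp.simplify(lhs - rhs))     # 0, confirming g is the weak derivative of |x|
```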
•
\[
f(x) = \operatorname{sgn}(x) = \begin{cases} -1 & x \in (-1, 0) \\ \;\;\,1 & x \in (0, 1). \end{cases}
\]
Our intuition says that f′ = 2δ_0, which is not a function. Thus, we seek to prove f is not weakly differentiable. So, suppose otherwise: g ∈ L^1(−1, 1) with Df = g. Then, for every φ ∈ C_c^∞(−1, 1),
\[
\int_{-1}^{1} g \phi \, dx = - \int_{-1}^{1} f \phi' \, dx = \int_{-1}^{0} \phi' \, dx - \int_{0}^{1} \phi' \, dx
= \phi(0) - \phi(-1) - [\phi(1) - \phi(0)] = 2\phi(0).
\]
Now choose φ_j such that φ_j ≥ 0, φ_j ≤ 1, φ_j(0) = 1, and φ_j(x) → 0 a.e. With this choice of φ_j we have
\[
2\phi_j(0) = \int_{-1}^{1} g \phi_j \, dx,
\]
so that
\[
2 = \lim_{j\to\infty} 2\phi_j(0) = \lim_{j\to\infty} \int_{-1}^{1} g \phi_j \, dx = 0,
\]
a contradiction; here the last equality is a result of Lebesgue's dominated convergence theorem.
For higher derivatives, let α be a multi-index. We say g = D^α f if
\[
\int_{\Omega} f D^{\alpha}\phi \, dx = (-1)^{|\alpha|} \int_{\Omega} g \phi \, dx \qquad \forall \phi \in C_c^{\infty}(\Omega).
\]
Definition 9.6.4:
\[
W^{k,p}(\Omega) = \{ f \in L^p(\Omega) : D^{\alpha} f \text{ exists}, \; D^{\alpha} f \in L^p(\Omega), \; |\alpha| \leq k \}.
\]
Sobolev functions are like classical functions. We will address the following in the next
few lectures
• Approximation
• Calculus. Specifically product and chain rules.
• Boundary values
• Embedding Theorems. In the HW we showed that
||u||C 0,α (R) ≤ |u(x0 )| + C||u 0 ||Lp (R)
which gives us the intuition that
W 1,p (R)
⊂−→
C 0,α (R).
Recall: W^{1,p}(Ω) means we have weak derivatives in L^p(Ω): g = D^α f with
\[
\int_{\Omega} f D^{\alpha}\phi \, dx = (-1)^{|\alpha|} \int_{\Omega} g \phi \, dx \qquad \forall \phi \in C_c^{\infty}(\Omega).
\]
• n = 1: W^{1,p} ⊂ C^{0,α} with α = 1 − 1/p.
• n ≥ 2: W^{1,p} functions are in general not Hölder continuous for p ≤ n.
Lemma 9.6.5 (): Let f ∈ L^1_{loc}(Ω), let α be a multi-index, and let φ_ε(x) = ε^{−n} φ(ε^{−1} x) be the usual mollifier. If D^α f exists, then D^α(φ_ε ∗ f)(x) = (φ_ε ∗ D^α f)(x) for all x with dist(x, ∂Ω) > ε.

Proof: Without loss of generality take |α| = 1, D^α = ∂/∂x_i.
\[
\frac{\partial}{\partial x_i} (\phi_\varepsilon * f)(x) = \frac{\partial}{\partial x_i} \int_{\mathbb{R}^n} \phi_\varepsilon(x - y) f(y) \, dy
= \int_{\mathbb{R}^n} \frac{\partial}{\partial x_i} \phi_\varepsilon(x - y) f(y) \, dy
= - \int_{\mathbb{R}^n} \frac{\partial}{\partial y_i} \phi_\varepsilon(x - y) f(y) \, dy.
\]
Now since φ_ε(x − ·) ∈ C_c^∞(Ω) (by the assumption dist(x, ∂Ω) > ε), we apply the definition of the weak partial derivative to get
\[
= \int_{\mathbb{R}^n} \phi_\varepsilon(x - y) \frac{\partial f}{\partial y_i}(y) \, dy
= \left( \phi_\varepsilon * \frac{\partial f}{\partial x_i} \right)(x).
\]
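A small numerical sketch of Lemma 9.6.5 (the mollifier, the function f(x) = |x| and the grid below are arbitrary choices): the derivative of the mollification agrees with the mollification of the weak derivative away from the boundary.

```python
import numpy as np

eps = 0.05
x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]

# standard bump mollifier supported in (-eps, eps), normalized to integral 1
kx = np.arange(-eps, eps + dx / 2, dx)
kernel = np.zeros_like(kx)
inside = np.abs(kx) < eps
kernel[inside] = np.exp(-1.0 / (1.0 - (kx[inside] / eps) ** 2))
kernel /= kernel.sum() * dx

f, Df = np.abs(x), np.sign(x)                     # f = |x|, weak derivative sign(x)
lhs = np.gradient(np.convolve(f, kernel, mode="same") * dx, dx)   # D(phi_eps * f)
rhs = np.convolve(Df, kernel, mode="same") * dx                   # phi_eps * Df

interior = np.abs(x) < 1.0 - 2 * eps              # points with dist(x, boundary) > eps
print(np.max(np.abs(lhs[interior] - rhs[interior])))  # small; only discretization error
```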
Proposition 9.6.6 (): f , g ∈ L1loc (Ω). Then g = Dα f ⇐⇒ ∃fm ∈ C ∞ (Ω) such that fm → f
in L1loc (Ω) and Dα fm → g in L1loc (Ω).
Proof:
(⇐=) fm → f and Dα fm → g
Z
α
Ω
fm · D φ dx
(see following aside) ↓
Z
f · Dα φ dx
|α|
= (−1)
Ω
fm · Dα φ dx −
Z
Ω
f · Dα φ dx
Dα fm · φ dx
Z
gφ dx
Ω
= (−1)|α|
Ω
Z
Z
↓
Ω
Z
|fm − f | |Dα φ| dx
| {z }
Ω
C
Z
≤ C
|fm − f | dx → 0 given
≤
supp(φ)
(=⇒) Use convolution and lemma 8.37.
9.6.1
Calculus with Sobolev Functions
0
Proposition 9.6.7 (Product Rule): f ∈ W 1,p (Ω), g ∈ W 1,p (Ω), then f g ∈ W 1,1 (Ω) and
D(f g) = Df · g + f · Dg.
Proof: By Holder, f g, Df · g, and f · Dg are in L1 (Ω). Explicitly, this is
Z
|f g| dx ≤ ||f ||p ||g||p0
Ω
Gregory T. von Nessi
Chapter 9: Function Spaces
240
Now suppose p < ∞ (by symmetry) and f is the convolution of f defined by f = φ ∗ f .
f → f in L1loc (Ω). Let Ω0 =supp(φ), <dist(Ω0 , ∂Ω0 ).
Z
Z
Z
f g · Di φ dx =
g · Di (f φ) dx −
g · Di f · φ dx
Ω
ΩZ
ZΩ
(def. of weak deriv. of g)
= −
Di g · f φ dx −
g · Di f · φ dx
Ω
=⇒
Z
Ω
f g · Di φ dx
= −
Z
Ω
Ω
(Di f · g + f · Di g)φ dx
Ω
Ω
Now by applying the definition of weak derivatives, we get
Di (f g) = Di f · g + f · Di g
Proposition 9.6.8 (Chain Rule): f ∈ C 1 , f bounded and g ∈ W 1,p , then (f ◦ g) ∈ W 1,p (Ω)
and
D(f ◦ g) = (f 0 ◦ g)Dg
Remark: The same assertion holds if f is continuous and piecewise C 1 with bounded
derivative.
Proof: g = φ ∗ g, g → g in Lploc , Dg → Dg in Lploc . (Assume p < ∞, even p = 1
would be sufficient).
D(f ◦ g ) = (f 0 ◦ g )Dg → (f 0 ◦ g)Dg in L1loc
Take Ω0 ⊂⊂ Ω, small enough.
Z
|(f 0 ◦ g )Dg − (f 0 ◦ g)Dg| dx
ZΩ
Z
0
≤
|(f ◦ g )(Dg − Dg)| dx +
|(f 0 ◦ g − f 0 ◦ g )Dg| dx
Ω
Ω
0
0
Now f ◦ g is bounded since f is bounded and Dg − Dg → 0 in L1loc . Thus the first term
on the RHS goes to 0 as → 0. Since g → g in L1loc , g → g in measure and thus exists a
subsequence such that g → g a.e. Thus we can apply Lebesgue’s dominated convergence
theorem to see the second term in the RHS goes to 0 as → 0. Thus we have
Z
|(f 0 ◦ g )Dg − (f 0 ◦ g)Dg| dx = 0
Ω
and by the application of the definition of a weak derivative, we attain the result.
Lemma 9.6.9 (): Let f^+ = max{f, 0}, f^- = min{f, 0}, so that f = f^+ + f^-. If f ∈ W^{1,p}, then f^+ and f^- are in W^{1,p}, with
\[
Df^+ = \begin{cases} Df & \text{on } \{f > 0\} \\ 0 & \text{on } \{f \leq 0\}, \end{cases}
\qquad
D|f| = \begin{cases} Df & \text{on } \{f > 0\} \\ -Df & \text{on } \{f < 0\} \\ 0 & \text{on } \{f = 0\}. \end{cases}
\]
Now f → f in L1loc and by Lemma 8.37 Di f → Di f in L1loc as → 0. Thus,
Z
Z
f g · Di φ dx = − (Di f · g + f · Di g)φ dx
Corollary 9.6.10 (): Let c ∈ R, E = {x ∈ Ω, f (x) = c}. Then Df = 0 a.e. in E.
Theorem 9.6.11 (): For 1 ≤ p < ∞, W^{k,p}(Ω) ∩ C^∞(Ω) is dense in W^{k,p}(Ω).

Proof: Choose Ω_1 ⊂⊂ Ω_2 ⊂⊂ Ω_3 ⊂⊂ · · · ⊂⊂ Ω, e.g. Ω_i = {x ∈ Ω : dist(x, ∂Ω) > 1/i}. Define the following open sets: U_i = Ω_{i+1} \ Ω̄_{i−1}.
[Figure: the exhaustion Ω_1 ⊂⊂ Ω_2 ⊂⊂ Ω_3 ⊂⊂ · · · ⊂⊂ Ω with the annular open sets U_i = Ω_{i+1} \ Ω̄_{i−1}.]
S
Then i Ui = Ω, and thus Ui is an open cover of Ω. This cover is said to be locally finite,
i.e., ∀x ∈ Ω there exists > 0, such that B(x, ) ∩ Ui 6= 0 only for finitely many i .
Theorem 9.6.12 (Partition of Unity): {Ui } is an open and locally finite cover of Ω. Then
there exists ηi ∈ Cc∞ (Ui ), ηi ≥ 0, and
∞
X
ηi = 1 in Ω
i=1
Intuitive Picture in 1D: [figure omitted]

Proof of Theorem 9.6.11 (continued): Let η_i be a partition of unity corresponding to U_i, and set f_i = η_i f ∈ W^{k,p}(Ω), so that supp(f_i) ⊂ U_i = Ω_{i+1} \ Ω̄_{i−1}.
supp(fi ) ⊂ Ui = Ωi+1 \ Ωi−1 .
Choose i > 0, such that φi ∗ fi =: gi satisfies
Gregory T. von Nessi
Chapter 9: Function Spaces
242
• supp(gi ) ⊂ Ωi+2 \ Ωi−2
• ||gi − fi ||W k,p (Ω) ≤
δ
,
2i
δ > 0 fixed.
• gi ∈ Cc∞ (Ω)
Now define
g(x) =
∞
X
gi (x)
i=1
(this sum is well-defined since gi (x) 6= 0 only for finitely many i ).
≤
i=1
∞
X
i=1
gi −
∞
X
ηi f
i=1
W k,p (Ω)
||gi − ηi f ||W k,p (Ω)
∞
X
δ
≤
=δ
2i
i=1
Remark: The radius of the ball on which you average to get g(x) is proportional to dist(x, ∂Ω).
Theorem 9.6.13 (Fundamental Theorem of Calculus):
i.) If g ∈ L^1(a, b) and f(x) = ∫_a^x g(t) dt, then f ∈ W^{1,1}(a, b) and Df = g.
ii.) If f ∈ W^{1,1}(a, b) and g = Df, then f(x) = C + ∫_a^x g(t) dt for almost all x ∈ (a, b), where C is a constant.
Remark: In particular, there exists a representative of f that is continuous.
Proof of ii.): Choose f_ε ∈ W^{1,1}(a, b) ∩ C^∞(a, b) such that f_ε → f in W^{1,1}(a, b), i.e., f_ε → f in L^1(a, b) and Df_ε → Df = g in L^1(a, b). There exists a subsequence such that f_ε(x) → f(x) a.e. The fundamental theorem for f_ε gives
\[
f_\varepsilon(x) - f_\varepsilon(a) = \int_a^x f_\varepsilon'(t) \, dt,
\]
and passing to the limit (a.e. in x on the left, in L^1 on the right) yields
\[
f(x) - L = \int_a^x g(t) \, dt,
\]
where L is finite and f(a) is chosen such that L = lim_{x→a} f(x) =: f(a) (this only changes f on a set of measure zero).
9.6.2
Boundary Values, Boundary Integrals, Traces
Definition 9.6.14: 1 ≤ p < ∞
1. W01,p (Ω) = closure of Cc∞ (Ω) in W 1,p (Ω)
Gregory T. von Nessi
Version::31102011
||g − f ||W k,p (Ω) =
∞
X
9.6 Sobolev Spaces
243
2. p = ∞: W01,∞ (Ω) is given by
W01,∞ (Ω) = f ∈ W 1,∞ (Ω) : ∃fi ∈ Cc∞ (Ω) s.t.
1,1
fj → f in Wloc
(Ω), sup ||fj ||W 1,∞ (Ω) ≤ C < ∞
3. If f0 ∈ W 1,p (Ω). Then f = f0 on ∂Ω if
f − f0 ∈ W01,p (Ω)
Version::31102011
Remark: It’s not always possible to define boundary values.
1
1
2
Γ
Ω ⊆ R2 , Ω = B(0, 1) \ B 0, 12 \ Γ , where Γ = x = (x1 , 0), 12 < x1 < 1 . u(x1 , x2 ) = φ =
polar angle. Ω not “on one side” of Γ .
Definition 9.6.15: Lipschitz boundary
f is Lipschitz continuous if f ∈ C 0,1 , i.e.
|f (x) − f (y )| ≤ L|x − y |,
for some
Sm constant L. ∂Ω is said to be Lipschitz, if ∃Ui , i = 1, . . . , m open such that
∂Ω ⊂ i=1 Ui , where in any Ui , we can find a local coordinate system such that ∂Ω ∩ Ui is
the graph of a Lipschitz function and Ω ∩ Ui lies on one side of the graph.
Remark: There is an analogous definition of a C^{k,α} boundary: ∂Ω ∩ U_i is the graph of a C^{k,α} function.
The following figure should make clear the concept of graphs of boundaries.

Definition 9.6.16: A function f : ∂Ω → R is measurable (integrable) if ỹ ↦ f(ỹ, g(ỹ)) is measurable (integrable) in each local coordinate system.
S
Boundary integral: Choose U0 such that Ω ⊂ m
i=0 Ui . Now pick a partition of unity, ηi
Pm
corresponding to these Ui such that i=1 ηi = 1 on Ω
Z
Z
m Z
X
n−1
ηi f dS
f dH
=
f dS =
∂Ω
∂Ω
i=1
∂Ω
To make it clear what the integral in the last equality means, assume f has support in one
Ui , then in that Ui with coordinates properly redefined,
Z
Z
p
f dS =
f (ỹ , g(ỹ )) 1 + |Dg(ỹ )|2 d ỹ .
∂Ω
Rn−1
[Figure: local coordinate patches U_i along ∂Ω; in each U_i the boundary is the graph x_n = g(ỹ), ỹ = (y_1, …, y_{n−1}), e.g. g_1(y) = |y|, and Ω ∩ U_i lies on one side of the graph.]
So, it’s clear how to extend this to general f since ηi f does have support in Ui .
Remark: The surface measure √(1 + |Dg(ỹ)|²) dỹ can also be defined for g Lipschitz (the derivative exists a.e., away from a negligible set of "kinks"). We will assume ∂Ω is C¹.
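The graph formula for the boundary integral can be turned into a short computation; the following sketch (the graph g and integrand f are arbitrary choices) approximates ∫ f dS over one coordinate patch.

```python
import numpy as np

# Boundary integral over a graph piece x_n = g(y), y in [-1,1]^2:
#   int f dS = int f(y, g(y)) * sqrt(1 + |Dg(y)|^2) dy
g = lambda y1, y2: 0.25 * (y1**2 + y2**2)          # a C^1 "boundary graph"
f = lambda y1, y2, xn: xn + 1.0                    # integrand restricted to the graph

n = 400
y = np.linspace(-1, 1, n)
Y1, Y2 = np.meshgrid(y, y, indexing="ij")
dG1, dG2 = 0.5 * Y1, 0.5 * Y2                      # Dg, computed by hand
area_element = np.sqrt(1 + dG1**2 + dG2**2)        # sqrt(1 + |Dg|^2)
integrand = f(Y1, Y2, g(Y1, Y2)) * area_element
print(np.mean(integrand) * 4.0)                    # |[-1,1]^2| = 4
```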
Definition 9.6.17:
L (∂Ω) =
f : ∂Ω → R,
|f | dS < ∞
∂Ω
∞
L (∂Ω) =
f : ∂Ω → R, ess sup |f | < ∞
p
Z
p
∂Ω
Gregory T. von Nessi
Version::31102011
y2
9.6 Sobolev Spaces
9.6.3
245
Extension of Sobolev Functions
Theorem 9.6.18 (Extension): Ω ⊆ Rn , bounded, ∂Ω Lipschitz. Suppose Ω0 open neighborhood of Ω, i.e. Ω ⊂⊂ Ω0 , then exists a bounded and linear operator
E : W 1,p (Ω) → W01,p (Ω0 ) such that Ef
Ω
=f
Version::31102011
reflection
Ω
Ω′
cut-off function
Outline of the Proof: Reflect and multiply by cut-off functions.
boundary: ∃ transformation that maps ∂Ω locally to a hyperplane “straightening or
flattening the boundary”.
C1
R
Ω
Φ
x0
∂Ω
x0
y0
y-coordinate
x-coordinate
Rn−1
Ψ
Define Φ by
yi
=
yn
=
xi = Φi (x), i = 1, . . . , n − 1
xn − g(x̃) = Φn (x), x̃ = (x1 , . . . , xn−1 )
=⇒ det(DΦ(x)) = 1
∂Ω flat near x0 , ∂Ω ⊂ {xn = 0}
B(x0 , R) such that B(x0 , R) ∩ ∂Ω ⊂ {xn = 0}
B + = B ∩ {xn ≥ 0} ⊂ Ω
B − = B ∩ {xn ≤ 0} ⊂ Rn \ Ω
Gregory T. von Nessi
Chapter 9: Function Spaces
246
xn
Ω
B+
x0
Rn−1
B−
1.
3. Check this is C 1 across the boundary.
• u continuous {xn = 0}
• all tangential derivatives are continuous across {xn = 0}
∂u
∂xn
∂u
∂u ∂u
xn (x) = 3
(x̃ − xn ) − 2
x̃, −
∂xn
∂xn
∂xn
2
∂u
(x̃, 0)
∂xn
=
{xn =0}
4. Calculate
||u||Lp (B)
≤ C||u||Lp (B+ )
||Du||Lp (B) ≤ C||Du||Lp (B+ )
5. Straightening out of the boundary:
Φ
W = Ψ(B̂)
Φ(B)
B̂
u
y0
y
x0
u ∗ (y ) = u (Ψ(y ))
x
x-coordinate
Ψ
y-coordinate
Choose B̂(y0 , R0 ) ⊂ Φ(B). Construction for u ∗ (see above figure). Now extend u ∗
such that
||u ∗ ||W 1,p (B̂) ≤ C||u ∗ ||W 1,p (B̂+ )
Gregory T. von Nessi
= C||u ∗ ||W 1,p (B̂+ )
Version::31102011
2. Approximate: suppose first that u ∈ C ∞ (Ω) extend u by higher order reflection:
+
+
u(x)
if x ∈ B − := u −
u(x) =
xn
−3u(x̃, −xn ) + 4u x̃, − 2
if x ∈ B
:= u
9.6 Sobolev Spaces
247
Change coordinates from y back to x to get an extension u on W that satisfies
||u||W 1,p (W ) ≤ C||u||W 1,p (W ∩Ω)
≤ ||u||W 1,p (W )
Remark: The constant C depends (by the chain rule) on ||Φ||∞ , ||Ψ||∞ , ||DΦ||∞ , and
||DΨ||∞ but not on u.
Version::31102011
6. ∀x0 ∈ ∂Ω, W (x0 ), u extension, ∂Ω is compact. Thus cover ∂Ω by finitely many
Sm sets
W1 , . . . , Wm (with corresponding u i extensions), choose W0 such that Ω ⊆ i=0 Wi ,
u0 = u and ηi the corresponding partition of unity.
u(x) =
m
X
ηi u i
i=0
Making W (x0 ) smaller if necessary, we may assume that W (x0 ) ⊂ Ω0 , thus u = 0
outside Ω0 . By Minkowski
||u||W 1,p (Ω0 ) ≤ C||u||W 1,p (Ω)
7. Define Eu = u, bounded and linear on C ∞ (Ω)
8. Approximation: Refinement of approximation result.
Find um ∈ C ∞ (Ω) such that um → u in W 1,p (Ω)
||u m − u n ||W 1,p (Ω0 ) = ||Eum − Eun ||W 1,p (Ω0 )
≤ C||um − un ||W 1,p (Ω) → 0
=⇒ Eum is Cauchy in W 1,p (Ω0 ), and thus has a limit in this space, call it Eu.
9.6.4
Traces
Theorem 9.6.19 (): Ω bounded domain in Rn , Lipschitz boundary. 1 ≤ p < ∞. Then exists
a unique linear bounded operator
T : W 1,p (Ω) → Lp (∂Ω)
such that
Tu = u
∂Ω
if u ∈ W 1,p (Ω) ∩ C 0 (Ω)
Proof: u ∈ C 1 (Ω), x0 ∈ ∂Ω,
∂Ω flat near x0 . ∃B(x0 , R) such that ∂Ω ∩ B(x0 , R) ⊆
{xn = 0}. Define B̂ = B x0 , R2 and let Γ = ∂Ω ∩ B̂.
Gregory T. von Nessi
Chapter 9: Function Spaces
248
[Figure: the ball B(x_0, R) about x_0 ∈ ∂Ω, the smaller ball B̂ = B(x_0, R/2), and Γ = ∂Ω ∩ B̂.]
x̃ = (x1 , . . . , xn−1 ) ∈ {xn = 0}
Choose ξ ∈ Cc∞ (B(x0 , R)), ξ ≥ 0, ξ ≡ 1 on B̂
xn
Version::31102011
(x̃, xn∗ )
B+
···
···
R
x̃
x̃ ∈ Rn−1
x0
Note: From the above figure we see
ξ|u|p (x̃, 0) = (ξ|u|p )(x̃, 0) − (ξ|u|p )(x̃, xn∗ )
Z 0
=
(ξ|u|p )xn dxn
(9.16)
xn∗
So we have for the main calculation:
Z
Z
|u(x̃, 0)|p d x̃
≤
ξ|u|p d x̃
n−1
Γ
RZ
=
−
(ξ|u|p )xn dx
+
B


Z

 p
=
−
|u| ξxn + ξ|u|p−1 sgn(u) uxn  dx
{z
}
|
|{z}
+
B
Lp 0
Hölder
≤
Young’s
≤
−
C
Z
ZB
p
+
B+
|u| ξxn dx + C
Lp
Z
B+
p
|u| dx
10 Z
p
B+
p
|uxn | dx
1
p
(|u|p + |Du|p ) dx
General situation: non-linear change of coordinates just as before
Z
Z
p
|u| dS ≤ C
(|u|p + |Du|p ) dx
Γ
C
where Γ ⊂ ∂Ω contains the points x0 .
Now, ∂Ω compact, choose finitely many open sets Γi ⊂ ∂Ω such that ∂Ω ⊂
Z
Z
p
|u| dS ≤ C
(|u|p + |Du|p ) dx
Γi
C
Sm
i=1 Γi
with
Gregory T. von Nessi
Chapter 9: Function Spaces
250
=⇒ If we define for u ∈ C 1 (Ω), T u = u
=⇒ ||u||Lp (∂Ω) ≤ C||u||W 1,p (Ω)
∂Ω
Approximation: u ∈ W 1,p (Ω) choose uk ∈ C ∞ (Ω) such that
uk → u
in W 1,p (Ω)
Estimate
||T um − T un ||Lp (Ω) ≤ C||um − un ||W 1,p (Ω)
Define
in Lp (∂Ω)
T u = lim T uk
k→∞
If u ∈ W 1,p (Ω) ∩ C 0 (Ω) then the approximation uk converges uniformly on Ω to u and
Tu = u .
∂Ω
Remark: W 1,∞ (Ω) is space of all functions that have Lipschitz continuous representations.
Theorem 9.6.20 (Trace-zero functions in W 1,p ): Assume Ω is bounded and ∂Ω is C 1 .
Suppose furthermore that u ∈ W 1,p (Ω). Then
u ∈ W01,p (Ω) ⇐⇒ T u = 0
Proof: (=⇒) We are given u ∈ W01,p (Ω). Then by the definition, ∃um ∈ Cc∞ (Ω) such
that
um → u
in W 1,p (Ω).
clearly T um = 0 on ∂Ω. Thus,
sup |T u − T um | ≤ C sup |u − um | = 0
∂Ω
∂Ω
Thus, T u = limm→∞ T um = 0.
(⇐=) Assume T u = 0 on ∂Ω. As usual assume that ∂Ω is flat around a point x0 and
using standard partitions of unity we may assume for the following calculations that
n
u ∈ W 1,p (Rn+ ), supp(u) ⊂⊂ R+ ,
T u = 0, on ∂Rn+ = Rn−1
n
Since T u = 0 on Rn−1 , there exists um ∈ C 1 (R+ ) such that
um → u
in W 1,p (Rn+ )
(9.17)
and
T um = um
Gregory T. von Nessi
Rn−1
→0
in Lp (Rn−1 )
(9.18)
Version::31102011
=⇒ T um is a Cauchy sequence in Lp (∂Ω).
9.6 Sobolev Spaces
251
Now, if x̃ ∈ Rn−1 , xn ≥ 0, we have by the fundamental theorem of calculus
|um (x̃, xn )|
|um (x̃, 0) + um (x̃, xn ) − um (x̃, 0)|
Z xn
|um,xn (x̃, t)| dt
|um (x̃, 0)| +
=
≤
0
Hölder
≤
Version::31102011
|um (x̃, 0)| + xn
p−1
p
≤
|um (x̃, 0)| + xn
From this, we have
Z
Z
p
p−1
|um (x̃, xn )| d x̃ ≤ 2
Rn−1
p−1
p
p
Rn−1
Z
xnp−1
Rn−1
p
Z
Z
xn
p
Rn−1
0
1
p
|Dum (x̃, t)| dt
0
Letting m → ∞ and using (9.17) and (9.18), we have
Z
Z xn Z
p
p−1 p−1
|u(x̃, xn )| d x̃ ≤ 2 xn
0
xn
1
p
p
|um,xn (x̃, t)| dt
0
Z
|um (x̃, 0)| d x̃ +
Rn−1
xn
|Dum (x̃, t)| d x̃dt
|Du(x̃, t)|p d x̃dt
(9.19)
Approximation: Next, let ξ ∈ C ∞ (R). We require that ξ ≡ 1 on [0, 1] and ξ ≡ 0 on R \ [0, 2]
with 0 ≤ ξ ≤ 1 and we write
ξm (x := ξ(mxn ) (x ∈ Rn+ )
wm := u(x)(1 − ξm (x))
Then we have
wm,xn = uxn (1 − ξm ) − muξ0
Dx̃ wm = Dx̃ u(1 − ξm )
ξ(xn )
xn
2
m
1
m
∂Ω
Figure 9.3: A possible ξ
Consequently, we have
Z
|Dwm − Du|p dx
=
Rn+
≤
Z
Rn+
| − ξm Du − muξ0 |p dx
Rn+
|ξm |p |Du|p dx + Cmp
Z
=: A + B
Z
0
2/m
Z
Rn−1
|u|p d x̃dxn
Gregory T. von Nessi
Chapter 9: Function Spaces
252
Note: In deriving B, C is picked to bound |ξ0 |p . Also the integral is restricted since ξ0 = 0 if
xn > 2/m by construction.
Now
lim A = 0
m→∞
Rn−1
0
Thus, we deduce Dwm → Du in Lp (Rn+ ). Since clearly wm → u in Lp (R+
n ) (is clear from
construction of wm ). We can now conclude
in W 1,p (Rn+ )
wm → u
But wm = 0 if 0 < xn < 1/m. We can therefore mollify the wm to produce function
.
um ∈ Cc∞ (Rn+ ) such that um → u in W 1,p (Rn+ ). Hence w ∈ W01,p (Rn+ ).
9.6.5
Sobolev Inequalities
Theorem 9.6.21 (Sobolev-Gagliardo-Nirenberg Inequality): If u ∈ W^{1,1}(R^n), then
\[
\|u\|_{L^{\frac{n}{n-1}}(\mathbb{R}^n)} \leq \frac{1}{\sqrt{n}} \int_{\mathbb{R}^n} |Du| \, dx.
\]
∗
Remark: In the above theorem means that W01,p (Ω) ⊆ Lp (Ω) and ||u||Lp∗ (Ω) ≤ C||u||W 1,p (Ω) .
0
Proof: Cc∞ (Ω) dense in W01,p (Ω).
First estimate for smooth functions, then pass to the limit in the estimate. It thus suffices
to prove
||f ||Lp∗ (Ω) ≤ C||Df ||Lp (Ω)
Special case: p = 1, p ∗ =
∀f ∈ Cc∞ (Ω)
n
n−1
n = 2: p ∗ = 2
Z
x1
f (x) = f (x1 , x2 ) =
D1 f (t, x2 ) dt
−∞
Z x1
Z ∞
=⇒ |f (x)| ≤
|D1 f (t, x2 )| dt ≤
|D1 f (t, x2 )| dt = g1 (x2 )
−∞
−∞
Same idea
|f (x)| ≤
Gregory T. von Nessi
Z
∞
−∞
|D2 f | dx2 = g2 (x1 )
|f (x)|2 ≤ g1 (x2 )g2 (x1 )
Version::31102011
since ξm 6= 0 only if 0 ≤ xn ≤ 2/m. To estimate the term B, we use (9.19)
!
p Z 2/m Z
2
B ≤ C · 2p−1 mp
|Du|p d x̃dxn
m
0
Rn−1
Z 2/m Z
≤ C
|Du|p d x̃dxn → 0 as m → ∞
9.6 Sobolev Spaces
||f
||22
=
Z
253
2
R2
|f (x)| dx
Z
=
g1 (x2 )g2 (x1 ) dx1 dx2
Z
g2 (x1 ) dx1
=
g1 (x2 ) dx2
R
R
Z Z
Z Z
=
|D1 f | dx1 dx2
|D2 f | dx2 dx1
2
ZR
R
R
R2
|Df | dx
Z
≤
R
2
R
Version::31102011
=⇒ ||f ||L2 (R2 ) ≤ ||Df ||L1 (R2 ) .
General Case:
Z
|f (x)| ≤ gi (x) =
∞
−∞
|Di f | dxi
We have
|f (x)|
Z
R
|f (x)|
n
n−1
n
n−1
≤
dx1 ≤
n
Y
gi
i=1
Z
!
n
Y
R
1
n−1
gi
i=1
!
1
n−1
dx1
Now taking Holder for n − 1 terms with pi = n − 1
[g1 (x)]
Z
1
n−1
n
Y
R
i=2
gi
!
1
n−1
dx1 ≤ [g1 (x)]
integrate with respect to x2 to get
Z Z
n
|f (x)| n−1 dx1 dx2
R
R
≤
=
Z
[g1 (x)]
1
n−1
R
Z
g2 (x) dx1
≤
Integrate
R
dx3 · · ·
Z
Rn
Z
g2 (x) dx1
R
R
|f (x)|
gi (x) dx1
R
i=2
R
Hölder
n Z
Y
1
n−1
"Z
1
n−1
Z
1
n−1
1
n−1
1
n−1
R
1
n−1
gi (x) dx1
R
i=2
[g1 (x)]
n Z
Y
dx2
n Z
Y
g1 (x) dx2
R
gi (x) dx1
R
i=3
1
n−1
n Z Z
Y
i=3
R
1
n−1
dx2
#
gi (x) dx1 dx2
R
1
n−1
dxn + Holder
n
n−1
dx
≤
(def. of gi ) =
n Z
Y
Rn−1
i=1
n Z
Y
i=1
Rn
gi (x) dx1 · · · dxi−1 dxi+1 · · · dxn
|Di f | dx
1
n−1
1
n−1
Gregory T. von Nessi
Chapter 9: Function Spaces
254
So this implies that
||f ||
n
L n−1 (Rn )
≤
n Z
Y
n
Rn
i=1
1
|Di f | dx
=
Z
Rn
|Df | dx = ||Df ||L1 (Rn )
We can do a little better than this by observing
n
Y
!1
n
ai
i=1
n
1X
ai
≤
n
i=1
||f ||
n
L n−1 (Rn )
≤
1
n
Z
n
X
Rn i=1
1
|Di f | dx ≤ √
n
Z
Rn
|Df | dx
Since
n
X
i=1
v
u n
X
√ u
ai2
ai = nt
i=1
In the general case use: |f |γ with γ suitably chosen.
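As a numerical sanity check of the inequality just derived in the case n = 2 (a sketch; the Gaussian test function and the grid are arbitrary choices), one can compare ||u||_{L²(R²)} with (1/√2)||Du||_{L¹(R²)}:

```python
import numpy as np

n = 1001
x = np.linspace(-6, 6, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-(X**2 + Y**2))                         # decays fast: boundary terms negligible

ux, uy = np.gradient(u, dx, dx)
lhs = np.sqrt(np.sum(u**2) * dx * dx)              # ||u||_{L^2(R^2)}
rhs = np.sum(np.sqrt(ux**2 + uy**2)) * dx * dx / np.sqrt(2)   # ||Du||_{L^1} / sqrt(2)
print(lhs, rhs, lhs <= rhs)                        # the inequality holds
```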
Theorem 9.6.22 (Poincaré's Inequality): Let Ω ⊆ R^n be bounded. Then
\[
\|u\|_{L^p(\Omega)} \leq C \|Du\|_{L^p(\Omega)} \qquad \forall u \in W_0^{1,p}(\Omega), \; 1 \leq p \leq \infty.
\]
If ∂Ω is Lipschitz, then
\[
\|u - \bar{u}\|_{L^p(\Omega)} \leq C \|Du\|_{L^p(\Omega)} \qquad \forall u \in W^{1,p}(\Omega), \; 1 \leq p \leq \infty,
\]
where
\[
\bar{u} = u_{\Omega} = \frac{1}{|\Omega|} \int_{\Omega} u(x) \, dx = \fint_{\Omega} u(x) \, dx.
\]
Remark:
1. We need u − ū to exclude constant functions.
2. C depends on Ω; see the HW for Ω = B(x_0, R).
This will be an indirect proof with no explicit scaling given. Use additional scaling argument
to find scaling.
Proof: Sobolev-Gagliardo-Nirenberg:
||u||Lp∗ (Ω) ≤ CS ||Du||Lp (Ω)
Now we apply Holder under the assumption that Ω is bounded:
Gregory T. von Nessi
||u||Lp (Ω) ≤ C(Ω)||u||Lp∗ (Ω) ≤ C(Ω)CS ||Du||Lp (Ω)
Version::31102011
(which is just a generalized Young’s inequality). So we have
9.6 Sobolev Spaces
255
Proof of ||u − ū||_{L^p(Ω)} ≤ C||Du||_{L^p(Ω)}: Without loss of generality take ū = 0; otherwise apply the inequality to v = u − ū and note that Dv = Du. We argue by contradiction. Suppose otherwise: there exist u_j ∈ W^{1,p}(Ω) with ū_j = 0 such that
\[
\|u_j\|_{L^p(\Omega)} \geq j \, \|Du_j\|_{L^p(\Omega)}.
\]
May suppose, ||uj ||Lp (Ω) = 1
Version::31102011
=⇒
1
≥ ||Duj ||Lp (Ω) =⇒ Duj → 0 in Lp (Ω)
j
and
||uj ||W 1,p (Ω) = ||uj ||Lp (Ω) + ||Duj ||Lp (Ω) ≤ 1 +
1
≤2
j
Compact Sobolev embedding (See Section
9.7):R∃ subsequence such that ujk ∈ u in Lp (Ω)
R
with Du = 0 =⇒ u is constant, 0 = Ω uj dx → Ω u dx =⇒ u = 0 =⇒ 1 = ||ujk ||W 1,p (Ω) →
0. Contradiction.
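The following is a quick numerical sketch of the zero-mean Poincaré inequality on Ω = (0,1) (the test functions are arbitrary smooth choices; 1/π is the classical sharp constant for zero-mean functions on the unit interval):

```python
import numpy as np

x = np.linspace(0, 1, 20001)
dx = x[1] - x[0]
integrate = lambda f: np.mean(f)          # integral over (0,1) on a uniform grid

for u in [np.cos(np.pi * x), x**3, np.exp(x) * np.sin(3 * x)]:
    du = np.gradient(u, dx)
    lhs = np.sqrt(integrate((u - integrate(u)) ** 2))   # ||u - mean(u)||_{L^2}
    rhs = np.sqrt(integrate(du**2)) / np.pi              # ||Du||_{L^2} / pi
    print(round(lhs, 4), round(rhs, 4), lhs <= rhs + 1e-6)
# cos(pi x) attains (near) equality; the other test functions satisfy the
# inequality with room to spare.
```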
9.6.6
The Dual Space of H01 = W01,2
Definition 9.6.23: Ω ⊂ Rn bounded, H −1 (Ω) = (H01 )∗ = (W01,2 )∗
Theorem 9.6.24 (): Suppose that T ∈ H −1 (Ω). Then there exists functions f0 , f1 , . . . , fn ∈
L2 (Ω) such that
!
Z
n
X
fi Di v dx
∀v = H01 (Ω)
(9.20)
T (v ) =
f0 v +
Ω
i=1
Moreover,
||T ||2H−1 (Ω) = inf
(Z n
X
)
|fi |2 dx, fi ∈ L2 (Ω), (9.20) holds
Ω i=0
Remark: One frequently works formally with, T = f0 −
Proof: u, v ∈ H01 (Ω),
(u, v ) =
Z
Ω
Pn
i=1 Di fi
(uv + Du · Dv ) dx
if T ∈ H −1 (Ω), then by Riesz, ∃!u0 such that
T (u) = (v , u0 )
∀v ∈ H01 (Ω)
Z
=
(v u0 + Dv · Du0 ) dx
(9.21)
Ω
i.e. (9.20) with f0 = u0 and fi = Di u0 .
Suppose g0 , g1 , . . . , gn satisfy
T (v ) =
Z
Ω
(v g0 + Di v · gi ) dx
Gregory T. von Nessi
Chapter 9: Function Spaces
256
take v = u0 in (9.21)
||u0 ||2H1 (Ω) = (u0 , u0 )
0
=
T (u0 ) =
≤
≤
||u0 ||H1 (Ω)
0
Z
Ω
Di u0 gi
i=1
n
X
i=1
!
dx
||Di u0 ||L2 (Ω) ||gi ||L2 (Ω)
|g0 |2 + |g1 |2 + · · · + |gn |2
dx
(u0 v + Di u0 · Di v ) dx ≤ ||v ||H1 (Ω) ||u0 ||H1 (Ω)
0
0
Now, we see that
||T ||H−1 (Ω) =
On the other hand, choose v =
sup
v ∈H01 (Ω)
|T (v )|
≤ ||u0 ||H1 (Ω)
0
||v ||H1 (Ω)
0
u0
||u0 ||H1 (Ω)
0
T (v ) = ||u0 ||H1 (Ω) =⇒ ||T ||H−1 (Ω) = ||u0 ||H1 (Ω)
0
9.7
0
Sobolev Imbeddings
Theorem 9.7.1 (): Let Ω be a bounded set in R^n. Then the following embeddings are continuous:
i.) 1 ≤ p < n, p* = np/(n−p): W_0^{1,p}(Ω) ⊂→ L^{p*}(Ω);
ii.) p > n, α = 1 − n/p: W_0^{1,p}(Ω) ⊂→ C^{0,α}(Ω).

Remark:
1. i.) in the above theorem means that W_0^{1,p}(Ω) ⊆ L^{p*}(Ω) and ||u||_{L^{p*}(Ω)} ≤ C||u||_{W_0^{1,p}(Ω)}.
2. "Sobolev number": for W^{m,p}(Ω), Ω ⊆ R^n, the quantity m − n/p is the Sobolev number; for L^q(Ω) = W^{0,q}(Ω), −n/q is its Sobolev number.
   Philosophy: W^{1,p}(Ω) ⊂→ L^q(Ω) if 1 − n/p ≥ −n/q, where the maximal q corresponds to equality:
   \[
   1 - \frac{n}{p} = -\frac{n}{q} \iff \frac{p-n}{p} = -\frac{n}{q} \iff \frac{n-p}{p} = \frac{n}{q} \iff q = \frac{np}{n-p} = p^*.
   \]
3. Analogously, C^{k,α}(Ω) has Sobolev number k + α:
   W^{1,p}(Ω) ⊂→ C^{0,α}(Ω) if 1 − n/p ≥ 0 + α = α; optimal: α = 1 − n/p.
4. More generally: W^{m,p}(Ω) ⊂→ L^q(Ω) if m − n/p ≥ −n/q, and the optimal q satisfies 1/q = 1/p − m/n; and W^{m,p}(Ω) ⊂→ C^{0,α}(Ω) if m − n/p ≥ 0 + α = α. (A small sketch that mechanizes this bookkeeping is given below.)
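As mentioned in the remark, the Sobolev-number bookkeeping can be mechanized; the following is a minimal sketch (the helper names are ad hoc, only the index comparison from the remark is encoded, and borderline cases such as p = n are not treated):

```python
from fractions import Fraction

def sobolev_number(m, p, n):
    """Sobolev number of W^{m,p}(Omega), Omega in R^n; use m = 0 for L^p."""
    return Fraction(m) - Fraction(n, p)

def embeds_into_Lq(m, p, n, q):
    # W^{m,p} -> L^q requires  m - n/p >= -n/q
    return sobolev_number(m, p, n) >= -Fraction(n, q)

def embeds_into_holder(m, p, n, alpha):
    # W^{m,p} -> C^{0,alpha} requires  m - n/p >= alpha
    return sobolev_number(m, p, n) >= Fraction(alpha)

# W^{1,2}(R^3) embeds into L^6 (the critical exponent p* = np/(n-p)) ...
print(embeds_into_Lq(1, 2, 3, 6))                     # True
print(embeds_into_Lq(1, 2, 3, 7))                     # False: 7 > p* = 6
# ... and W^{1,p} is Hölder only when p > n:
print(embeds_into_holder(1, 4, 3, Fraction(1, 4)))    # True, alpha = 1 - n/p = 1/4
```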
Version::31102011
Ω
n
X
||u0 ||L2 (Ω) ||g0 ||L2 (Ω) +
C-S
(9.20) =⇒
u0 g0 +
Ω
Hölder
Z
Z
9.7 Sobolev Imbeddings
257
Proof: Cc∞ (Ω) dense in W01,p (Ω).
First estimate for smooth functions, then pass to the limit in the estimate. It thus suffices
to prove
∀f ∈ Cc∞ (Ω)
||f ||Lp∗ (Ω) ≤ C||Df ||Lp (Ω)
n
n−1
Special case: p = 1, p ∗ =
Version::31102011
n = 2: p ∗ = 2
Z
x1
f (x) = f (x1 , x2 ) =
D1 f (t, x2 ) dt
−∞
Z x1
Z ∞
=⇒ |f (x)| ≤
|D1 f (t, x2 )| dt ≤
|D1 f (t, x2 )| dt = g1 (x2 )
−∞
−∞
Same idea
Z
|f (x)| ≤
∞
−∞
|D2 f | dx2 = g2 (x1 )
|f (x)|2 ≤ g1 (x2 )g2 (x1 )
||f
||22
=
Z
R2
2
|f (x)| dx
=
Z
g1 (x2 )g2 (x1 ) dx1 dx2
Z
=
g1 (x2 ) dx2
g2 (x1 ) dx1
R
R
Z Z
Z Z
=
|D1 f | dx1 dx2
|D2 f | dx2 dx1
ZR
≤
2
R
R
R2
|Df | dx
Z
R
2
R
=⇒ ||f ||L2 (R2 ) ≤ ||Df ||L1 (R2 ) .
General Case:
Z
|f (x)| ≤ gi (x) =
∞
−∞
|Di f | dxi
We have
|f (x)|
Z
R
|f (x)|
n
n−1
n
n−1
≤
dx1 ≤
n
Y
gi
i=1
Z
R
!
n
Y
i=1
1
n−1
gi
!
1
n−1
dx1
Now taking Holder for n − 1 terms with pi = n − 1
[g1 (x)]
1
n−1
Z
R
n
Y
i=2
gi
!
1
n−1
dx1 ≤ [g1 (x)]
1
n−1
n Z
Y
i=2
R
gi (x) dx1
1
n−1
Gregory T. von Nessi
Chapter 9: Function Spaces
258
integrate with respect to x2 to get
Z Z
n
|f (x)| n−1 dx1 dx2
R
R
≤
=
Z
[g1 (x)]
1
n−1
R
n Z
Y
Z
g2 (x) dx1
R
Hölder
Integrate
R
dx3 · · ·
Z
Rn
g2 (x) dx1
R
R
|f (x)|
1
n−1
"Z
[g1 (x)]
1
n−1
dx2
n Z
Y
1
n−1
R
Z
1
n−1
g1 (x) dx2
R
gi (x) dx1
R
i=3
1
n−1
n Z Z
Y
i=3
R
1
n−1
dx2
#
gi (x) dx1 dx2
R
1
n−1
dxn + Holder
n
n−1
dx
n Z
Y
≤
Rn−1
i=1
n Z
Y
(def. of gi ) =
Rn
i=1
gi (x) dx1 · · · dxi−1 dxi+1 · · · dxn
|Di f | dx
1
n−1
1
n−1
So this implies that
||f ||
≤
n
L n−1 (Rn )
n Z
Y
n
Rn
i=1
1
|Di f | dx
=
Z
Rn
|Df | dx = ||Df ||L1 (Rn )
We can do a little better than this by observing
n
Y
i=1
!1
n
n
ai
≤
1X
ai
n
i=1
(which is just a generalized Young’s inequality). So we have
||f ||
n
L n−1 (Rn )
≤
1
n
Z
n
X
Rn i=1
1
|Di f | dx ≤ √
n
Z
Rn
|Du| dx
Since
n
X
i=1
v
u n
X
√ u
ai = nt
ai2
i=1
In the general case use: |f |γ with γ suitably chosen. This result is referred to as the
Sobolev-Gagliardo-Nirenberg inequality.
Recall: Sobolev-Gagliardo-Nirenberg:
||u||
n
L n−1 (Rn )
Gregory T. von Nessi
1
≤ √ ||Du||L1 (Rn )
n
u ∈ W01,1 (Rn )
Version::31102011
≤
Z
gi (x) dx1
R
i=2
9.7 Sobolev Imbeddings
259
This was for p = 1, now consider
General Case: 1 ≤ p < n, Use p = 1 for |u|γ
|u|γ
n
L n−1 (Rn )
1
≤√
n
D|u|γ
L1 (Rn )
≤
Holder ≤
1
√
n
γ
√
n
γ|u|γ−1 D|u|
|u|γ−1
L1 (Rn )
Lp0 (Rn )
· ||Du||Lp (Rn )
Version::31102011
Choose γ such that
p
γn
= (γ − 1)p 0 = (γ − 1)
n−1
p−1
p
−p
n
−
=
γ
n−1 p−1
p−1
np − n − np + p
p
γ
=−
(n − 1)(p − 1)
p−1
p−n
p
γ
=
(n − 1)(p − 1)
p−1
p(n − 1)
γ=
>0
n−p
⇐⇒
⇐⇒
⇐⇒
⇐⇒
Z
Rn
|u|
p∗
n−1 − p−1
n
dx
p
≤
(n − 1)p 1
√ ||Du||Lp (Rn )
n−p
n
Performing some algebra on the exponent, we see
n−1 p−1
n−p
1
−
=
= ∗
n
p
np
p
(n − 1)p 1
√ ||Du||Lp (Rn ) = CS ||Du||Lp (Rn )
=⇒ ||u||Lp∗ (Rn ) ≤
n−p
n
Remark: The estimate “explodes” as p → n.
This is a proof for W01,p (Ω)
General case with Lipschitz boundary. Extension theorem:
Ω ⊂⊂ Ω0 =⇒ extension u ∈ W01,p (Ω0 ) such that ||u||W 1,p (Ω0 ) ≤ C||u||W 1,p (Ω) .
Use estimate for u on Ω0
||u||Lp∗ (Ω) ≤ ||u||Lp∗ (Ω0 ) ≤ CS ||u||W 1,p (Ω0 ) ≤ CS CE ||u||W 1,p (Ω)
9.7.1
Campanato Imbeddings
Theorem 9.7.2 (Morrey’s Theorem on the growth of the Dirichlet integral): Let u be
1,p
in Hloc
(B1 ) and suppost that
Z
|Du|p dx ≤ ρn−p+pα
∀ρ < dist(x, B1 )
Bρ (x)
0,α
then u ∈ Cloc
(B1 ).
Gregory T. von Nessi
Chapter 9: Function Spaces
260
Proof: We use Poincare’s inequality:
Z
Z
p
p
|u − ux0 ,ρ | dx ≤ Cρ
Bρ (x0 )
Bρ (x0 )
|Du|p dx ≤ C · ρn+pα
0,α
i.e. u ∈ Lp,λ
loc (B1 ) and therefore by 1) u ∈ Cloc (B1 ).
Theorem 9.7.3 (Global Version of (9.7.2)): If u ∈ H 1,p (Ω), p > n, then u ∈ L1,n+(1−n/p) (Ω)
=⇒ u ∈ C 0,1−n/p (Ω) via Companato’s Theorem.
Ω(x0 ,ρ)
Ω(x0 ,ρ)
|Du|p dx,
where C only depends on the geometry of Ω; then we fine
Z
Ω(x0 ,ρ)
|Du| dx ≤
Z
Ω(x0 ,ρ)
Again by Poincare’s inequality:
Z
Z
|u − ux0 ,ρ | dx ρ
Ω(x0 ,ρ)
1/p
|Du|p dx
Ω(x0 ,ρ)
|Ω(x0 , ρ)|1−1/p ≤ C · ||u||H1,p (Ω) ρn−n/p .
|Du| dx ≤ Cρn+1−n/p ||u||H1,p (Ω)
Corollary 9.7.4 (Morrey’s theorem): For p > n, the imbedding
W 1,p (Ω)
⊂−→
C 0,1−n/p (Ω)
is a continous operator, i.e.
||u||C 0,1−n/p (Ω) ≤ C||u||W 1,p (Ω)
Remark: So, from the above we see
Du ∈ Lp,λ (Ω) =⇒ u ∈ Lp,λ+p (Ω)
(Du ∈ L2,n−2+2α (Ω) =⇒ u ∈ L2,n+2α (Ω)).
Theorem 9.7.5 (Estimates for W 1,p , n < p ≤ ∞): Let Ω be a bounded open subset of
Rn with ∂Ω being C 1 . Assume n < p ≤ ∞, and u ∈ W 1,p (Ω). Then u has a version
u ∗ ∈ C 0,γ (Ω), for γ = 1 − pn , with the estimate
||u ∗ ||C 0,γ (Ω) ≤ C||u||W 1,p (Ω) .
The constant C depends only on p, n and Ω.
Proof: Since ∂Ω is C 1 , there exists an extension Eu = u ∈ W 1,p (Rn ) such that

 u = u in Ω,
u has compact support, and

||u||W 1,p (Ω) .
Since u has compact support, we know that there exists um ∈ Cc∞ (Rn ) such that
Gregory T. von Nessi
um → u in W 1,p (Rn )
Version::31102011
Proof: For all ρ < ρ0 and a u ∈ H 1,p (Ω) we have
Z
Z
p
p
|u − ux0 ,ρ | dx ≤ Cρ
9.7 Sobolev Imbeddings
261
Now from the previous theorem, ||um − ul ||C 0,1−n/p (Rn ) ≤ C||um − ul ||W 1,p (Rn ) for all l, m ≥ 1,
thus there exists a function u ∗ ∈ C 0,1−n/p (Rn ) such that
um → u ∗ C 0,1−n/p (Rn )
Version::31102011
So we see from the last two limits, u ∗ = u a.e. on Ω; such that u ∗ is a version of u. The
last theorem also implies ||um ||C 0,1−n/p (Rn ) ≤ C||um ||W 1,p (Rn ) this combined with the last two
limits, we get
||u ∗ ||C 0,1−n/p (Rn ) ≤ C||u||W 1,p (Rn )
9.7.2
Compactness of Sobolev/Morrey Imbeddings
1 ≤ p < n, W01,p (Ω) ⊂−→ Lq (Ω) is compact if q < p ∗ i.e. if {uj } is a bounded sequence in
W01,p (Ω) then {uj } is precompact in Lq (Ω), i.e., contains a subsequence that converges in
Lq (Ω).
Proposition 9.7.6 (): X is a BS
i.) A ⊂ S
X is precompact ⇐⇒ ∀ > 0 there exists finitely many balls B(xi , ) such that
A⊂ m
i=1 B(xi , )
ii.) A is precompact ⇐⇒ every bounded sequence in A contains a converging subsequence
(with limit in X).
Theorem 9.7.7 (Arzela-Ascoli): S ⊆ Rn compact, A ⊆ C 0 (S) is precompact ⇐⇒ A is
bounded and equicontinuous.
Proof: (=⇒) ∀ > 0 ∃B(fi , ) such that A ⊆
Sm i=1 B(fi
, )
||f ||∞ ≤ ||f − fi || + ||fi ||∞ , f ∈ B(fi , )
≤ + max ||fi || < ∞
i=1,...,m
|f (x) − f (y )| ≤ 2 + |fi (x) − fi (y )|
≤ 2 + max |fi (x) − fi (y )|
i=1,...,m
Gregory T. von Nessi
Chapter 9: Function Spaces
262
so if |x − y | tends to zero =⇒ equicontinuity.
(⇐=) ∀ > 0, ||f ||∞ ≤ C ∀f ∈ A
S
Choose ξi ∈ R, i = 1, . . . , k such thatSB(0, C) = [−C, C] ⊂ ki=1 B(ξi , ).
xj ∈ Rn , j = 1, . . . , m such that S ⊂ m
i=1 B(xj , ). Basically we are covering the range and
domain.
Let π : {1, . . . , m} → {1, . . . , k}
Let Aπ = {f ∈ A : |f (xj ) − ξπ(j) | < , j = 1, . . . , m}
S
Claim: A ⊂ π B(fπ , ), i.e. the (finitely many) functions fπ define the centers of the
balls. To show this, we see that
|f (x) − fπ (x)| ≤ |f (x) − f (xj )| + |f (xj ) − ξπ(j) |
{z
}
|
≤
+ |ξπ(j) − fπ (xj )| +|fπ (xj ) − fπ (x)|
|
{z
}
≤
≤ 2 + sup sup |g(x) − g(y )| := r
|x−y |≤ g∈A
r → 0 as → 0 since A is equicontinuous.
S
=⇒ f ∈ B(fπ , r ). Suppose you want to find a finite cover A ⊂ B(fπ , δ), δ > 0 then
choose small enough such that r ≤ δ.
Remark: If X, Y , and Z are Banach Spaces and we have X
−→ Z.
then X ⊂⊂
⊂−→
Y
⊂⊂
−→
Z or X
⊂⊂
−→
Y
⊂−→
Z,
Theorem 9.7.8 (Compactness of Sobolev Imbeddings): If Ω ⊂ Rn bounded, then the
following Imbedding is true
i.) 1 ≤ p < n, q < p ∗ =
np
n−p
W01,p (Ω)
⊂⊂
−→
Lq (Ω)
Moreover, if ∂Ω is Lipschitz, then the same is true for W 1,p (Ω)
ii.) If Ω ⊂ Rn bounded and satisfies condition () with p > n, 1 −
W01,p (Ω)
⊂⊂
−→
n
p
> α,
C 0,α (Ω)
Remark: A special case of the above is W01,2 (Ω) ,→ L2 (Ω); This is called Rellich’s Theorem.
Proof:
i.) q = 1 {fj } bounded in W01,p (Ω) need to prove that {fj } is precompact in L1 .
Extend by zero to Rn . Without loss of generality, take fj to be smooth with f˜j ∈
Cc∞ (Rn ), (||fj − f˜j || ≤ ).
Gregory T. von Nessi
Version::31102011
Choose fπ ∈ Aπ if Aπ 6= 0. If x ∈ S then x ∈ B(xj , ) for some j and f ∈ Aπ .
9.7 Sobolev Imbeddings
263
1. A = {φ ∗ fj }, where φ is the standard mollifier, φ (x) = −n φ(−1 x).
Then A precompact in L1 ∀ ≥ 0. We now show this.
Idea: Use Arzela-Ascoli
Z
φ (x − y )f (y ) dy
|φ ∗ fj |(x) =
Rn
Z
≤ sup |φ |
|f (y )| dy
Rn
= sup |φ | · ||fj ||L1 (Rn )
Version::31102011
≤ sup |φ | · ||fj ||Lp (Rn ) < C < ∞
Equicontinuity follows if D(φ ∗ fi ) uniformly bounded,
|Dφ ∗ fj |(x) ≤ sup |Dφ | · ||fj ||Lp (Rn ) < C < ∞
=⇒ (φ ∗ fj ) uniformly bounded hence equicontinuous.
=⇒ precompact.
Step 2: ||φ ∗ fj − fj ||L1 (Rn ) ≤ C
2. (Proof of Step 2)
(φ ∗ fj − fj )(x) =
(sub. y = x − z) =
Integrate in x
Z
Rn
≤
Z
Rn
(Fubini) = Z
Z
Rn =B(x,)
Z
Rn =B(0,1)
φ (x − y )(fj (y ) − fj (x)) dy
φ(z)(fj (x − z) − fj (x)) dy
|
{z
}
R1
d
0 dt fj (x−tz)
dt
|φ ∗ fj − fj | dx
Z
Z 1
φ(z)
|Dfj (x − tz)| · |z| dtdz dx
B(0,1)
φ(z)
B(0,1)
≤ ∀j
Z
Rn
Z
0
0
1Z
Rn
|
|Dfj |(x̃) d x̃
≤ C
|Dfj (x − tz)| · |z| dx dtdz
|{z}
Z
=
R
Rn
B(0,1)
{z
|Dfj |(x̃) d x̃
≤1
}
φ(z) dz ≤ ||fj ||L1 (Rn ) ≤ ||fj ||Lp (Rn )
=⇒ ||φ ∗ fj − fj ||L1 (Rn ) ≤ C uniformly ∀j
3. Suppose δ > 0 given, choose small enough such that C <
δ
2
A = {φ ∗ fj } is precompact in L1 (Rn ), for 2δ we find balls B gi , 2δ such that
mδ
m
[
[
δ
A ⊆
B gi ,
=⇒ {fj } ⊆
B(gi , δ) =⇒ fj
2
i=1
precompact in
i=1
L1 (Rn ).
Remark: Evans constructs a converging subsequence.
Gregory T. von Nessi
Chapter 9: Function Spaces
264
4. general q < p ∗ .
Trick is to use the “interpolation inequality”:
||g||Lq (Rn ) ≤ ||g||θL1 (Rn ) ||g||1−θ
Lp∗ (Rn )
where
1
q
=
θ
1
+
1−θ
p∗ .
So if fj → f in L1 (Rn ) then
||fj − f ||Lq (Rn ) ≤ ||fj − f ||θL1 (Rn ) ||fj − f ||1−θ
Lp∗ (Rn )
|
{z
}|
{z
}
→0
≤C
r
f =
=
r −1
0
1
qθ
1
qθ
−1
=
1
1 − qθ
Thus
(1 − θ)q
= p∗
1 − qθ
So,
Z
q
g dx
Z
=
Hölder
≤
g qθ g (1−θ)q dx
Z
qθ Z
1−qθ
∗
g dx
g p dx
Now take the qth root
(1−qθ)p ∗ /q
||g||Lq (Rn ) ≤ ||g||θL1 (Rn ) ||g||Lp∗ (Rn )
= ||g||θL1 (Rn ) ||g||1−θ
Lp∗ (Rn )
−→ C 0,λ (Ω), where 0 < µ <
ii.) General Idea: We will prove that W 1,p (Ω) ⊂−→ C 0,µ (Ω) ⊂⊂
n
λ < 1− p and then we will use the remark following the statement of the Arzela-Ascolli
−→ C 0,λ (Ω).
theorem to conclude W 1,p (Ω) ⊂⊂
Since 0 < λ < 1 − pn , we know ∃µ with 0 < µ < λ < 1 − pn ; and by Morrey’s theorem
−→ C 0,λ (Ω). It
we know W 1,p (Ω) ⊂−→ C 0,µ (Ω). We now only need to show C 0,µ (Ω) ⊂⊂
0,µ
0,λ
is obvious that C (Ω) ⊂−→ C (Ω) is continuous, so our proof is reduced to showing
the compactness of this imbedding. We do this in two steps
−→ C(Ω). Again, the continuity of the imbedding is obvious;
Step 1. Claim C 0,ν (Ω) ⊂⊂
and compactness is the only thing needed to be proven. If A is bounded in
C 0,ν (Ω), then we know ||φ||C 0,ν (Ω) ≤ M for all φ ∈ A. This obviously implies that
|φ(x)| ≤ M in addition to |φ(x) − φ(y )| ≤ M|x − y |ν for all x, y ∈ Ω. Thus, A is
precompact via the Arzela-Ascoli theorem.
Gregory T. von Nessi
Version::31102011
So, we just need to prove the interpolation inequality.
1
with the corresponding conjugate index
Idea: Use Holder with r = qθ
9.7 Sobolev Imbeddings
265
Step 2. Consider
|φ(x) − φ(y )|
=
|x − y |λ
|φ(x) − φ(y )|
|x − y |µ
λ/µ
|φ(x) − φ(y )|1−λ/µ
≤ M · |φ(x) − φ(y )|1−λ/µ
Version::31102011
for all φ ∈ A for which A is bounded in C 0,λ (Ω). We now use the compactness of
the imbedding in Step 1. by extracting a subsequence of this bounded sequence
converging in C(Ω). With this new sequence () indicates this sequence is Cauchy
and so converges in C 0,λ (Ω)
This completes the proof.
9.7.3
General Sobolev Inequalities
Theorem 9.7.9 (General Sobolev inequalities): . Let Ω be a bounded open subset of Rn ,
with a C 1 boundary.
i.) If
k<
then W k,p (Ω)
⊂−→
n
,
p
(9.22)
Lq (Ω), where
1
1 k
= − ,
q
p n
(9.23)
with an imbedding constant depending on k, p, n and Ω.
ii.) If
k>
then C
k−
h i
n
p
−1,γ
(Ω)
γ=
⊂−→
( h i
n
p
n
,
p
(9.24)
W k,p (Ω), where
+ 1 − pn , if
n
p
is not an integer
any positive number < 1, if
n
p
is an integer.
with an imbedding constant depending on k, p, n, γ and Ω.
Proof.
Step 1: Assume (9.22). Then since Dα u ∈ Lp (Ω) for all |α| = k, the SobolevNirenberg-Gagliardo inequality implies
||Dβ u||Lp∗ (Ω) ≤ C||u||W k,p (Ω)
∗
if |β| = k − 1,
∗∗
and so u ∈ W k−1,p (Ω). Similarly, we find u ∈ W k−2,p (Ω), where p1∗∗ = p1∗ − n1 =
1
2
0,q (Ω) = Lq (Ω),
p − n . Continuing, we eventually discover after k steps that u ∈ W
for q1 = p1 − kn . The corresponding continuity estimate follows from multiplying the
relevant estimates at each stage of the above argument.
Gregory T. von Nessi
Chapter 9: Function Spaces
266
Step 2: Assume now condition (9.23) holds, and
n
p
is not an integer. Then as above we see
u ∈ W k−l,r (Ω),
(9.25)
1
1
l
= − ,
r
p n
(9.26)
for
Dα u
(9.27)
pn
n−pl
> n. Hence (9.25)
0,1− nr
and Morrey’s inequality imply that
∈C
(Ω) for allh |α|
i <
h k
i − l − 1. Observe
h i
n
n
k− p −1, p +1− pn
also that 1 − nr = 1 − pn + l = pn + 1 − pn . Thus u ∈ C
(Ω), and the
stated estimate follows.
h i
Step 3: Finally, suppose (9.26) holds, with pn an integer. Set l = pn −1 = pn −1. Consequently,
pn
we have as above u ∈ W k−l,r (Ω) for r = n−pl
= n. Hence the Sobolev-Nirenbergα
q
Gagliardoh inequality
shows
D
u
∈
L
(Ω)
for
all
n ≤ q < ∞ and all |α| ≤ k − l −
i
n
n
1 = k − p . Therefore Morrey’s inequality further implies Dα u ∈ C 0,1− q (Ω) for all
h i
h i
k− pn −1,γ
n
(Ω) for each
n < q < ∞ and all |α| ≤ k − p − 1. Consequently u ∈ C
0 < γ < 1. As before, the stated continuity estimate follows as well.
9.8
Difference Quotients
In this section we develop a tool that will enable us to determine the existence of derivatives
of certain solutions to various PDE. More succinctly, studies that pertain to this pursuit are
referred to collectively as regularity theory. We will go into this more in later chapters.
Recalling 1D calculus, we know that f is differentiable at x_0 if
\[
\lim_{x \to x_0} \frac{f(x) - f(x_0)}{x - x_0} = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}
\]
exists. In n dimensions, with f : Ω ⊂ R^n → R, we analogize and consider the difference quotients
\[
\Delta_h^k f(x) = \frac{f(x + h e_k) - f(x)}{h}, \qquad e_k = (0, \ldots, \underbrace{1}_{k\text{th place}}, \ldots, 0);
\]
i.e., to have the k-th partial derivative, lim_{h→0} Δ_h^k f(x) must exist.
The power of using difference quotients is conveniently summed up in the following
proposition.
Gregory T. von Nessi
Version::31102011
provided lp < n. We choose the integer l so that
n
l < < l + 1;
p
h i
that is, we set l = pn . Consequently (9.26) and (9.27) imply
9.8 Difference Quotients
267
Proposition 9.8.1 (): If 1 ≤ p < ∞, f ∈ W 1,p (Ω), Ω0 ⊂⊂ Ω, and h small enough with
h < dist(Ω0 , ∂Ω), then
i.)
||∆kh f ||Lp (Ω0 ) ≤ C||Dk f ||Lp (Ω) ,
k = 1, . . . , n.
Version::31102011
ii.) Moreover, if 1 < p < ∞ and
||∆kh f ||Lp (Ω0 ) ≤ K
∀h < dist(Ω0 , ∂Ω),
1,p
then f ∈ Wloc
(Ω) and ||Dk f ||Lp (Ω0 ) ≤ K with ∆kh f → Dk f strongly in Lp (Ω0 ).
Proof: First, consider f ∈ W 1,p (Ω) ∩ C 1 (Ω). We start off by calculating
|∆kh f (x)|
f (x + hek ) − f (x)
h
Z h
1
|Dk f (x + tek )| dt
h 0
Z h
p1
−1/p
p
.
h
|Dk f (x + tek )| dt
=
≤
Hölder
≤
0
Taking the pth power, we obtain
|∆kh f (x)|p ≤
1
h
Z
0
h
|Dk f (x + tek )|p dt .
Next we integrate both sides over Ω0 :
Z
Ω0
|∆kh f
p
(x)| dx
≤
≤
≤
=
Z Z
1 h
|Dk f (x + tek )|p dxdt
h 0 Ω0
Z Z
1 h
|Dk f (x)|p dxdt
h 0 Ω0 +tek
Z Z
1 h
|Dk f (x)|p dxdt
h 0 Ω
Z
|Dk f |p dx ≤ K.
Ω
Now, we need to verify the last calculation for general f ; we do this by the standard argument
that W 1,p (Ω) ∩ C 1 (Ω) is dense in W 1,p (Ω). Indeed, approximate f ∈ W 1,p (Ω) by fn ∈
W 1,p (Ω)∩C 1 (Ω) such that fn → f in W 1,p (Ω) and use the above estimate for fn to ascertain
Z
Ω0
|∆kh fn |p
dx
p
Z ↓ L (Ω)
|∆kh f |p dx
Ω0
≤
Z
Ω
|Dk fn |p dx
p
Z ↓ L (Ω)
≤
|Dk f |p dx
Ω
∀f ∈ W 1,p (Ω),
Gregory T. von Nessi
Chapter 9: Function Spaces
268
for h fixed. This proves i.).
Now suppose the hypothesis of ii.) holds, i.e. ||Δ^k_h f||_{L^p(Ω')} ≤ K for all h < dist(Ω', ∂Ω), so that the family Δ^k_h f is uniformly bounded in L^p(Ω'). Construct a sequence {Δ^k_{h_n} f} by taking h_n ↘ 0. Given this boundedness in L^p(Ω'), we apply the Banach-Alaoglu theorem to establish the existence of a weakly convergent subsequence of {Δ^k_{h_n} f}. Upon relabeling such a subsequence, we define g_k as the weak limit, i.e. Δ^k_{h_n} f ⇀ g_k in L^p(Ω'). Now, we claim that g_k = D_k f on Ω', i.e.,

∫_{Ω'} f · D_k φ dx = − ∫_{Ω'} g_k φ dx  ∀φ ∈ C_c^∞(Ω').

To this end, we calculate for φ ∈ C_c^∞(Ω') and h small:

∫_{Ω'} Δ^k_h f · φ dx = ∫_{Ω'} [ (f(x + h e_k) − f(x))/h ] φ(x) dx
  = (1/h) ∫_{Ω'} f(x + h e_k) φ(x) dx − (1/h) ∫_{Ω'} f(x) φ(x) dx
  = (1/h) ∫_{Ω' + h e_k} f(x) φ(x − h e_k) dx − (1/h) ∫_{Ω'} f(x) φ(x) dx
  = − ∫_{Ω'} f(x) [ (φ(x − h e_k) − φ(x))/(−h) ] dx
  = − ∫_{Ω'} f(x) (Δ^k_{−h} φ(x)) dx.

Now, as D_k φ, Δ^k_{−h} φ ∈ C_c^∞(Ω), we can use a simple real-analysis argument to say that Δ^k_{−h} φ → D_k φ uniformly on Ω'. Thus, passing to the limit along h_n, we have shown

∫_{Ω'} g_k φ dx = − ∫_{Ω'} f · D_k φ dx  ∀φ ∈ C_c^∞(Ω'),

proving the claim. In particular D_k f = g_k ∈ L^p(Ω'), and by weak lower semicontinuity of the norm ||D_k f||_{L^p(Ω')} ≤ K. Now, we just have to show that Δ^k_h f → D_k f strongly in L^p(Ω'). As we did for part i.), first consider f ∈ W^{1,p}(Ω) ∩ C^1(Ω). Calculating, we see

∫_{Ω'} |Δ^k_h f − D_k f|^p dx = ∫_{Ω'} | (1/h) ∫_0^h (D_k f(x + t e_k) − D_k f(x)) dt |^p dx
  ≤ (1/h) ∫_0^h ∫_{Ω'} |D_k f(x + t e_k) − D_k f(x)|^p dx dt.

Now, since f ∈ C^1(Ω), the integrand on the RHS is bounded near Ω' and tends to 0 as h → 0 by the uniform continuity of D_k f on compact subsets; i.e. we can apply Lebesgue's dominated convergence theorem to state

lim_{h→0} ∫_{Ω'} |Δ^k_h f − D_k f|^p dx = 0.

For general f ∈ W^{1,p}(Ω), we argue as before by picking f_n ∈ W^{1,p}(Ω) ∩ C^1(Ω) such that f_n → f in W^{1,p}(Ω), and using part i.) applied to f − f_n we conclude that the above limit still holds.
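To make Proposition 9.8.1 concrete, here is a small numerical sketch (an added illustration, not from the original notes; the sample function, grid, and subdomain are assumptions). It checks part i.) for the 1D function f(x) = |x − 1/2|, whose weak derivative is sign(x − 1/2), with the constant C = 1 as in the proof, and also illustrates the strong convergence Δ_h f → Df in L^p(Ω').

import numpy as np

# f(x) = |x - 0.5| on Omega = (0,1) lies in W^{1,p} with Df(x) = sign(x - 0.5).
# We check Proposition 9.8.1 numerically on Omega' = (0.1, 0.9) for p = 2.
p = 2
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
f = np.abs(x - 0.5)
df = np.sign(x - 0.5)                       # weak derivative D f
inner = (x >= 0.1) & (x <= 0.9)             # Omega' compactly contained in Omega

def lp_norm(g, mask):
    # crude L^p norm via a Riemann sum on the marked points
    return (np.sum(np.abs(g[mask]) ** p) * dx) ** (1.0 / p)

norm_df = lp_norm(df, np.ones_like(x, dtype=bool))
for h in [0.1, 0.05, 0.01, 0.001]:
    shift = int(round(h / dx))              # grid steps corresponding to h
    dq = (np.roll(f, -shift) - f) / h       # difference quotient Delta_h f
    dq[len(x) - shift:] = 0.0               # discard the wrapped-around tail (outside Omega')
    print(f"h={h:6.3f}  ||D_h f||_p(Omega') = {lp_norm(dq, inner):.4f}  "
          f"(<= ||Df||_p(Omega) = {norm_df:.4f}),  "
          f"||D_h f - Df||_p(Omega') = {lp_norm(dq - df, inner):.4f}")

The printed norms stay below ||Df||_{L^p(Ω)} for every h, and the error column shrinks as h ↘ 0, mirroring parts i.) and ii.) of the proposition.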
9.9 A Note about Banach Algebras
The following theorem is sometimes useful in the manipulation of weak formulations of PDE.

Theorem 9.9.1 (): Let Ω ⊂ R^n be an open and bounded domain with smooth boundary. If p > n, then the space W^{1,p}(Ω) is a Banach algebra with respect to the usual multiplication of functions, that is, if f, g ∈ W^{1,p}(Ω), then f · g ∈ W^{1,p}(Ω).

Proof: The Sobolev embeddings imply that

f, g ∈ C^{0,α}(Ω)  with α = 1 − n/p > 0,

and that the estimates

||f||_{C^{0,α}(Ω)} ≤ C_0 ||f||_{W^{1,p}(Ω)},  ||g||_{C^{0,α}(Ω)} ≤ C_0 ||g||_{W^{1,p}(Ω)}

hold. For n ≥ 2 we have p > n ≥ 2, and we may use the usual product rule to deduce that the weak derivative of f g is given by f · Dg + Df · g. Then

||D(f g)||^p_{L^p(Ω)} ≤ C ∫_Ω ( |f · Dg|^p + |g · Df|^p ) dx
  ≤ B ( ||f||^p_{L^∞(Ω)} ||Dg||^p_{L^p(Ω)} + ||g||^p_{L^∞(Ω)} ||Df||^p_{L^p(Ω)} )
  ≤ 2 B C_1 ||f||^p_{W^{1,p}(Ω)} ||g||^p_{W^{1,p}(Ω)}.

The same arguments work for n = 1 and p ≥ 2. If p ∈ (1, 2), then we consider f_ε = φ_ε ∗ f, where φ_ε is a standard mollifier. Suppose that Ω = (a, b). Then f_ε → f in W^{1,1}_{loc}(a, b) and locally uniformly on (a, b), since f is continuous. It follows that

∫_a^b f_ε g φ' dx = − ∫_a^b (Dg · f_ε + g · Df_ε) φ dx  ∀φ ∈ C_c^∞(a, b).

As ε → 0, we conclude

∫_a^b f g φ' dx = − ∫_a^b (Dg · f + g · Df) φ dx.

Thus f g ∈ W^{1,1}(Ω) with D(f g) = Df · g + f · Dg. The p-integrability of the derivative follows from the embedding theorems as before.
9.10 Exercises

8.1: Let X = C^0([−1, 1]) be the space of all continuous functions on the interval [−1, 1], Y = C^1([−1, 1]) be the space of all continuously differentiable functions, and let Z = C^1_0([−1, 1]) be the space of all functions u ∈ Y with boundary values u(−1) = u(1) = 0.

a) Define

(u, v) = ∫_{−1}^1 u(x) v(x) dx.

On which of the spaces X, Y, and Z does (·, ·) define a scalar product? Now consider those spaces for which (·, ·) defines a scalar product and endow them with the norm || · || = (·, ·)^{1/2}. Which of the spaces is a pre-Hilbert space and which is a Hilbert space (if any)?

b) Define

(u, v) = ∫_{−1}^1 u'(x) v'(x) dx.

On which of the spaces X, Y, and Z does (·, ·) define a scalar product? Now consider those spaces for which (·, ·) defines a scalar product and endow them with the norm || · || = (·, ·)^{1/2}. Which of the spaces is a pre-Hilbert space and which is a Hilbert space (if any)?

8.2: Suppose that Ω is an open and bounded domain in R^n. Let k ∈ N, k ≥ 0, and γ ∈ (0, 1]. We define the norm || · ||_{k,γ} in the following way. The γ-th Hölder seminorm on Ω is given by

[u]_{γ;Ω} = sup_{x,y ∈ Ω, x ≠ y} |u(x) − u(y)| / |x − y|^γ,

and we define the maximum norm by

|u|_{∞;Ω} = sup_{x ∈ Ω} |u(x)|.

The Hölder space C^{k,γ}(Ω) consists of all functions u ∈ C^k(Ω) for which the norm

||u||_{k,γ;Ω} = Σ_{|α| ≤ k} ||D^α u||_{∞;Ω} + Σ_{|α| = k} [D^α u]_{γ;Ω}

is finite. Prove that C^{k,γ}(Ω) endowed with the norm || · ||_{k,γ;Ω} is a Banach space.
Hint: You may prove the result for C^{0,γ}([0, 1]) if you indicate what would change for the general result.

8.3: Consider the shift operator T : l^2 → l^2 given by

(T x)_i = x_{i+1}.

Prove the following assertion. It is true that

x − T x = 0 ⟺ x = 0,

however, the equation

x − T x = y

does not have a solution for all y ∈ l^2. This implies that T is not compact and Fredholm's alternative cannot be applied. Give an example of a bounded sequence x_j such that T x_j does not have a convergent subsequence.

8.4: The space l^2 is a Hilbert space with respect to the scalar product

(x, y) = Σ_{i=1}^∞ x_i y_i.

Use general theorems to prove that the sequence of unit vectors in l^2 given by (e_i)_j = δ_{ij} contains a weakly convergent subsequence and characterize the weak limit.
8.5: Find an example of a sequence of integrable functions f_i on the interval (0, 1) that converge to an integrable function f a.e. such that the identity

lim_{i→∞} ∫_0^1 f_i(x) dx = ∫_0^1 lim_{i→∞} f_i(x) dx

does not hold.

8.6: Let a ∈ C^1([0, 1]), a ≥ 1. Show that the ODE

−(a u')' + u = f

has a solution u ∈ C^2([0, 1]) with u(0) = u(1) = 0 for all f ∈ C^0([0, 1]).
Hint: Use the method of continuity and compare the equation with the problem −u'' = f. In order to prove the a-priori estimate, derive first the estimate

sup |u_t| ≤ sup |f|,

where u_t is the solution corresponding to the parameter t ∈ [0, 1].

8.7: Let λ ∈ R. Prove the following alternative: Either (i) the homogeneous equation

−u'' − λu = 0

has a nontrivial solution u ∈ C^2([0, 1]) with u(0) = u(1) = 0, or (ii) for all f ∈ C^0([0, 1]) there exists a unique solution u ∈ C^2([0, 1]) with u(0) = u(1) = 0 of the equation

−u'' − λu = f.

Moreover, the mapping R_λ : f ↦ u is bounded as a mapping from C^0([0, 1]) into C^2([0, 1]).
Hint: Use Arzelà-Ascoli.

8.8: Let 1 < p ≤ ∞ and α = 1 − 1/p. Then there exists a constant C that depends only on a, b, and p such that for all u ∈ C^1([a, b]) and x_0 ∈ [a, b] the inequality

||u||_{C^{0,α}([a,b])} ≤ |u(x_0)| + C ||u'||_{L^p([a,b])}

holds.

8.9: Let X = L^2(0, 1) and

M = { f ∈ X : ∫_0^1 f(x) dx = 0 }.

Show that M is a closed subspace of X. Find M^⊥ with respect to the notion of orthogonality given by the L^2 scalar product

(·, ·) : X × X → R,  (u, v) = ∫_0^1 u(x) v(x) dx.

According to the projection theorem, every f ∈ X can be written as f = f_1 + f_2 with f_1 ∈ M and f_2 ∈ M^⊥. Characterize f_1 and f_2.
8.10: Suppose that p_i ∈ (1, ∞) for i = 1, …, n with

1/p_1 + ⋯ + 1/p_n = 1.

Prove the following generalization of Hölder's inequality: if f_i ∈ L^{p_i}(Ω), then

f = Π_{i=1}^n f_i ∈ L^1(Ω)  and  || Π_{i=1}^n f_i ||_{L^1(Ω)} ≤ Π_{i=1}^n ||f_i||_{L^{p_i}(Ω)}.

8.11: [Gilbarg-Trudinger, Exercise 7.1] Let Ω be a bounded domain in R^n. If u is a measurable function on Ω such that |u|^p ∈ L^1(Ω) for some p ∈ R, we define

Φ_p(u) = ( (1/|Ω|) ∫_Ω |u|^p dx )^{1/p}.

Show that

lim_{p→∞} Φ_p(u) = sup_Ω |u|.

8.12: Suppose that the estimate

||u − u_Ω||_{L^p(Ω)} ≤ C_Ω ||Du||_{L^p(Ω)}

holds for Ω = B(0, 1) ⊂ R^n with a constant C_1 > 0 for all u ∈ W^{1,p}_0(B(0, 1)). Here u_Ω denotes the mean value of u on Ω, that is,

u_Ω = (1/|Ω|) ∫_Ω u(z) dz.

Find the corresponding estimate for Ω = B(x_0, R), R > 0, using a suitable change of coordinates. What is C_{B(x_0,R)} in terms of C_1 and R?

8.13: [Qualifying exam 08/99] Let f ∈ L^1(0, 1) and suppose that

∫_0^1 f φ' dx = 0  ∀φ ∈ C_0^∞(0, 1).

Show that f is constant.
Hint: Use convolution, i.e., show the assertion first for f_ε = ρ_ε ∗ f.

8.14: Let g ∈ L^1(a, b) and define f by

f(x) = ∫_a^x g(y) dy.

Prove that f ∈ W^{1,1}(a, b) with Df = g.
Hint: Use the fundamental theorem of calculus for an approximation g_k of g. Show that the corresponding sequence f_k is a Cauchy sequence in L^1(a, b).
8.15: [Evans 5.10 #8,9] Use integration by parts to prove the following interpolation inequalities.
a) If u ∈ H^2(Ω) ∩ H^1_0(Ω), then

∫_Ω |Du|^2 dx ≤ C ( ∫_Ω u^2 dx )^{1/2} ( ∫_Ω |D^2 u|^2 dx )^{1/2}.

b) If u ∈ W^{2,p}(Ω) ∩ W^{1,p}_0(Ω) with p ∈ [2, ∞), then

∫_Ω |Du|^p dx ≤ C ( ∫_Ω |u|^p dx )^{1/2} ( ∫_Ω |D^2 u|^p dx )^{1/2}.

Hint: Assertion a) is a special case of b), but it might be useful to do a) first. For simplicity, you may assume that u ∈ C_c^∞(Ω). Also note that

∫_Ω |Du|^p dx = Σ_{i=1}^n ∫_Ω |D_i u|^2 |Du|^{p−2} dx.

8.16: [Qualifying exam 08/01] Suppose that u ∈ H^1(R), and for simplicity suppose that u is continuously differentiable. Prove that

Σ_{n=−∞}^∞ |u(n)|^2 < ∞.
8.17: Suppose that n ≥ 2, that Ω = B(0, 1), and that x_0 ∈ Ω. Let 1 ≤ p < ∞. Show that u ∈ C^1(Ω \ {x_0}) belongs to W^{1,p}(Ω) if u and its classical derivative ∇u satisfy the following inequality:

∫_{Ω \ {x_0}} ( |u|^p + |∇u|^p ) dx < ∞.

Note that the examples from the chapter show that the corresponding result does not hold for n = 1. This is related to the question of how big a set in R^n has to be in order to be "seen" by Sobolev functions.
Hint: Show the result first under the additional assumption that u ∈ L^∞(Ω), and then consider u_k = φ_k(u), where φ_k is a smooth function which is close to

ψ_k(s) = { k if s ∈ (k, ∞);  s if s ∈ [−k, k];  −k if s ∈ (−∞, −k) }.

You could choose φ_k to be the convolution of ψ_k with a fixed kernel.

8.18: Give an example of an open set Ω ⊂ R^n and a function u ∈ W^{1,∞}(Ω) such that u is not Lipschitz continuous on Ω.
Hint: Take Ω to be the open unit disk in R^2, with a slit removed.

8.19: Let Ω = B(0, 1/2) ⊂ R^n, n ≥ 2. Show that log log (1/|x|) ∈ W^{1,n}(Ω).
8.20: [Qualifying exam 01/00]

a) Show that the closed unit ball in the Hilbert space H = L^2([0, 1]) is not compact.

b) Suppose that g : [0, 1] → R is a continuous function and define the operator T : H → H by

(T u)(x) = g(x) u(x)  for x ∈ [0, 1].

Prove that T is a compact operator if and only if the function g is identically zero.

8.21: [Qualifying exam 08/00] Let Ω = B(0, 1) be the unit ball in R^3. For any function u : Ω → R, define the trace T u by restricting u to ∂Ω, i.e., T u = u|_{∂Ω}. Show that T : L^2(Ω) → L^2(∂Ω) is not bounded.

8.22: [Qualifying exam 01/00] Let H = H^1([0, 1]) and let T u = u(1).

a) Explain precisely how T is defined for all u ∈ H, and show that T is a bounded linear functional on H.

b) Let (·, ·)_H denote the standard inner product in H. By the Riesz representation theorem, there exists a unique v ∈ H such that T u = (u, v)_H for all u ∈ H. Find v.

8.23: Let Ω = B(0, 1) ⊂ R^n with n ≥ 2, α ∈ R, and f(x) = |x|^α. For fixed α, determine for which values of p and q the function f belongs to L^p(Ω) and W^{1,q}(Ω), respectively. Is there a relation between the values?

8.24: Let Ω ⊂ R^n be an open and bounded domain with smooth boundary. Prove that for p > n, the space W^{1,p}(Ω) is a Banach algebra with respect to the usual multiplication of functions, that is, if f, g ∈ W^{1,p}(Ω), then f g ∈ W^{1,p}(Ω).

8.25: [Evans 5.10 #11] Show by example that if we have ||D^h u||_{L^1(V)} ≤ C for all 0 < |h| < (1/2) dist(V, ∂U), it does not necessarily follow that u ∈ W^{1,1}(V).

8.26: [Qualifying exam 01/01] Let A be a compact and self-adjoint operator in a Hilbert space H. Let {u_k}_{k∈N} be an orthonormal basis of H consisting of eigenfunctions of A with associated eigenvalues {λ_k}_{k∈N}. Prove that if λ ≠ 0 and λ is not an eigenvalue of A, then A − λI has a bounded inverse and

||(A − λI)^{−1}|| ≤ 1 / inf_{k∈N} |λ_k − λ| < ∞.

8.27: [Qualifying exam 01/03] Let Ω be a bounded domain in R^n with smooth boundary.

a) For f ∈ L^2(Ω) show that there exists a u_f ∈ H^1_0(Ω) such that

||f||^2_{H^{−1}(Ω)} = ||u_f||^2_{H^1_0(Ω)} = (f, u_f)_{L^2(Ω)}.

b) Show that L^2(Ω) ⊂⊂ H^{−1}(Ω). You may use the fact that H^1_0(Ω) ⊂⊂ L^2(Ω).

8.28: [Qualifying exam 08/02] Let Ω be a bounded, connected and open set in R^n with smooth boundary. Let V be a closed subspace of H^1(Ω) that does not contain nonzero constant functions. Using the compact embedding H^1(Ω) ⊂⊂ L^2(Ω), show that there exists a constant C < ∞, independent of u, such that for u ∈ V,

∫_Ω |u|^2 dx ≤ C ∫_Ω |Du|^2 dx.
Chapter 10: Linear Elliptic PDE

". . . it's dull you TWIT; it'll 'urt more!"
– Alan Rickman

Contents
10.1 Existence of Weak Solutions for Laplace's Equation
10.2 Existence for General Elliptic Equations of 2nd Order
10.3 Elliptic Regularity I: Sobolev Estimates
10.4 Elliptic Regularity II: Schauder Estimates
10.5 Estimates for Higher Derivatives
10.6 Eigenfunction Expansions for Elliptic Operators
10.7 Applications to Parabolic Equations of 2nd Order
10.8 Neumann Boundary Conditions
10.9 Semigroup Theory
10.10 Applications to 2nd Order Parabolic PDE
10.11 Exercises

10.1 Existence of Weak Solutions for Laplace's Equation

General assumption: Ω ⊂ R^n open and bounded. We want to solve

−∆u = f in Ω,
u = 0 on ∂Ω.    (10.1)

Multiplying by v ∈ W^{1,2}_0(Ω) and integrating by parts gives

∫_Ω Du · Dv dx = ∫_Ω f v dx.
Definition 10.1.1: Let f ∈ L^2(Ω). A function u ∈ W^{1,2}_0(Ω) is called a weak solution of (10.1) if

∫_Ω Du · Dv dx = ∫_Ω f v dx  ∀v ∈ W^{1,2}_0(Ω).

Theorem 10.1.2 (): The equation (10.1) has a unique weak solution in W^{1,2}_0(Ω), and we have ||u||_{W^{1,2}(Ω)} ≤ C ||f||_{L^2(Ω)}.

Notation: u = (−∆)^{−1} f, with u = 0 on ∂Ω implicitly understood.

Proof: H^1_0(Ω) = W^{1,2}_0(Ω) is a Hilbert space. Define F : H^1_0 → R by

F(v) = ∫_Ω f v dx.

Then F is linear and bounded:

|F(v)| = | ∫_Ω f v dx | ≤ ∫_Ω |f v| dx ≤ ||f||_{L^2(Ω)} ||v||_{L^2(Ω)} ≤ ||f||_{L^2(Ω)} ||v||_{W^{1,2}(Ω)},

using Hölder's inequality. Define B : H × H → R by B(u, v) = ∫_Ω Du · Dv dx. B is bilinear, and B is also bounded:

|B(u, v)| ≤ ||Du||_{L^2(Ω)} ||Dv||_{L^2(Ω)} ≤ ||u||_{W^{1,2}(Ω)} ||v||_{W^{1,2}(Ω)}.

We can thus use Lax-Milgram if B is also coercive, i.e. B(u, u) ≥ ν ||u||^2_{W^{1,2}(Ω)}, where B(u, u) = ∫_Ω |Du|^2 dx. To show this we use Poincaré's inequality:

||u||^2_{L^2(Ω)} ≤ C_p^2 ||Du||^2_{L^2(Ω)}
⟹ ||u||^2_{W^{1,2}(Ω)} = ||u||^2_{L^2(Ω)} + ||Du||^2_{L^2(Ω)} ≤ (C_p^2 + 1) ||Du||^2_{L^2(Ω)}
⟹ ||Du||^2_{L^2(Ω)} ≥ 1/(C_p^2 + 1) ||u||^2_{W^{1,2}(Ω)}.

Thus

B(u, u) ≥ 1/(C_p^2 + 1) ||u||^2_{W^{1,2}(Ω)},

i.e. B is coercive. Now Lax-Milgram says that there exists a unique u ∈ H^1_0(Ω) = W^{1,2}_0(Ω) such that

B(u, v) = F(v)  ∀v ∈ H^1_0(Ω)  ⟺  ∫_Ω Du · Dv dx = ∫_Ω f v dx  ∀v ∈ H^1_0(Ω),

i.e. u is the unique weak solution. The estimate ||u||_{W^{1,2}(Ω)} ≤ C ||f||_{L^2(Ω)} follows from the estimate in Lax-Milgram.
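The Lax-Milgram argument becomes constructive once the problem is discretized. The following sketch (an added illustration, not part of the notes; the grid size and the sample right-hand side f are assumptions) solves the 1D model problem −u'' = f on (0, 1) with u(0) = u(1) = 0 by the standard finite-difference discretization and checks the stability estimate ||u||_{W^{1,2}} ≤ C ||f||_{L^2} numerically.

import numpy as np

# 1D model problem: -u'' = f on (0,1), u(0) = u(1) = 0, with N interior points.
N = 999
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)                       # interior grid points
f = np.where(x < 0.5, 1.0, -2.0)                     # a sample L^2 right-hand side

# Standard tridiagonal finite-difference matrix for -d^2/dx^2 (dense for simplicity).
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
u = np.linalg.solve(A, f)                            # discrete weak solution

l2 = lambda v: np.sqrt(h * np.sum(v**2))             # discrete L^2 norm
Du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h  # forward differences, zero BCs
print("||f||_L2       =", l2(f))
print("||u||_W^{1,2}  =", np.sqrt(l2(u)**2 + l2(Du)**2))
# The ratio below stays bounded as N grows, illustrating ||u||_{W^{1,2}} <= C ||f||_{L^2}.
print("ratio          =", np.sqrt(l2(u)**2 + l2(Du)**2) / l2(f))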
Proposition 10.1.3 (): The operator (−∆)^{−1} : L^2(Ω) → L^2(Ω) is compact.

Proof: Since ||u||_{W^{1,2}(Ω)} ≤ C ||f||_{L^2(Ω)}, we know that (−∆)^{−1} : L^2(Ω) → W^{1,2}_0(Ω) is bounded. By the compact embedding W^{1,2}_0(Ω) ⊂⊂ L^2(Ω), the composition

(−∆)^{−1} : L^2(Ω) → W^{1,2}_0(Ω) ⊂⊂ L^2(Ω)

maps bounded sets in L^2(Ω) into precompact sets in L^2(Ω), i.e. (−∆)^{−1} is compact.
Theorem 10.1.4 (): Either

i.) for all f ∈ L^2(Ω) the equation

−∆u = λu + f in Ω,  u = 0 on ∂Ω    (10.2)

has a unique weak solution, or

ii.) there exists a nontrivial solution of

−∆u = λu in Ω,  u = 0 on ∂Ω.    (10.3)

iii.) If ii.) holds, then the space N of solutions of (10.3) is finite dimensional.

iv.) The equation (10.2) has a solution ⟺ (f, v)_{L^2(Ω)} = 0 for all v ∈ N.

Remark: In general, iv.) is true if f ⊥ N*, where N* is the null-space of the dual operator.

Proof: Almost all assertions follow from the Fredholm alternative in Hilbert spaces, once we check that the relevant operator K satisfies K = K*. Equation (10.2) is equivalent to

u = λ(−∆)^{−1} u + (−∆)^{−1} f.

Take K = (−∆)^{−1}, which is compact, and set f̃ = (−∆)^{−1} f; without loss of generality λ ≠ 0, so the equation reads (I − λK)u = f̃.

By Fredholm, i.) fails exactly when (10.3) has a nontrivial solution, and (10.2) has a solution if and only if f̃ is perpendicular to the null-space of I − λK* = I − λK, i.e. (f̃, v) = 0 for all v with v − λKv = 0 (equivalently −∆v = λv, so v ∈ N). For such v,

(f̃, v) = (Kf, v) = (f, K*v) = (f, Kv) = (1/λ)(f, v),

so the condition (f̃, v) = 0 for all v ∈ N is equivalent to (f, v) = 0 for all v ∈ N, since λ ≠ 0.

Finally, we just need to prove that K is self-adjoint, i.e. K = K*. For f ∈ L^2(Ω), h = Kf means that h ∈ W^{1,2}_0(Ω) is the weak solution of −∆h = f, i.e.

∫_Ω D(Kf) · Dv dx = ∫_Ω f v dx  ∀v ∈ W^{1,2}_0(Ω).    (*)

For g ∈ L^2(Ω), choosing v = Kg in (*) and using the symmetry of the form,

(K*f, g) = (f, Kg) = ∫_Ω D(Kf) · D(Kg) dx = ∫_Ω D(Kg) · D(Kf) dx = (g, Kf) = (Kf, g).

Since this holds for all g ∈ L^2, we get K*f = Kf for all f, i.e. K* = K.
———

More generally, consider −∆u = f with f ∈ H^{−1}(Ω) (here L = −∆). The weak formulation reads

B(u, v) = L(u, v) = ∫_Ω Du · Dv dx = ⟨f, v⟩  ∀v ∈ H^1_0(Ω),

and existence and uniqueness again follow from Lax-Milgram. More generally still, we can solve

−∆u = g + Σ_{i=1}^n D_i f_i,  g, f_i ∈ L^2(Ω),

where the right-hand side lies in H^{−1}(Ω); the weak formulation is

L(u, v) = ⟨ g + Σ_{i=1}^n D_i f_i , v ⟩ = ∫_Ω ( g v − Σ_{i=1}^n f_i D_i v ) dx.

Notation: summation convention. We frequently drop the summation symbol and sum over repeated indices.

General boundary conditions: given u_0 ∈ W^{1,2}(Ω), we want to solve

−∆u = 0 in Ω,  u = u_0 on ∂Ω.

Define ũ = u − u_0 ∈ W^{1,2}_0(Ω) and find the equation for ũ:

∫_Ω Du · Dv dx = 0  ∀v ∈ H^1_0(Ω)
⟺ ∫_Ω ( (Du − Du_0) + Du_0 ) · Dv dx = 0
⟺ ∫_Ω Dũ · Dv dx = − ∫_Ω Du_0 · Dv dx,

an equation for ũ with a general H^{−1}(Ω) right-hand side.
10.2 Existence for General Elliptic Equations of 2nd Order

Consider the operator

Lu = −D_i(a_{ij} D_j u + b_i u) + c_i D_i u + d u

with a_{ij}, b_i, c_i, d ∈ L^∞(Ω). This is a general elliptic operator in divergence form; the leading term −div(A · Du) is natural to work with in the framework of weak solutions. The corresponding non-divergence form is

Lu = a_{ij} D_i D_j u + b_i D_i u + c u.

If the coefficients are smooth, then the non-divergence form is equivalent to the divergence form. The corresponding bilinear form (multiply by v, integrate by parts) is

L(u, v) = ∫_Ω ( a_{ij} D_j u D_i v + b_i u D_i v + c_i D_i u · v + d u v ) dx.

We want to solve Lu = g + D_i f_i ∈ H^{−1}(Ω), i.e.

L(u, v) = ∫_Ω ( g v − f_i D_i v ) dx  ∀v ∈ H^1_0(Ω).    (10.4)

Definition 10.2.1: u ∈ H^1_0(Ω) is a weak solution of Lu = g + D_i f_i if (10.4) holds.

Definition 10.2.2: L is strictly elliptic if a_{ij} ξ_i ξ_j ≥ λ|ξ|^2 for all ξ ∈ R^n, with λ > 0.

Theorem 10.2.3 (): Let L be strictly elliptic and let L_σ u = Lu + σu with σ > 0 sufficiently large. Then

L_σ u = g + D_i f_i in Ω,  u = 0 on ∂Ω

has a unique weak solution, and ||u||_{W^{1,2}(Ω)} ≤ C(σ) ( ||g||_{L^2(Ω)} + ||f̃||_{L^2(Ω)} ), where f̃ = (f_1, …, f_n).

Proof: We apply Lax-Milgram in H^1_0(Ω). The functional

F(v) = ∫_Ω ( g v − f_i D_i v ) dx ≤ ( ||g||_{L^2(Ω)} + ||f̃||_{L^2(Ω)} ) ||v||_{W^{1,2}(Ω)}

is bounded. The bilinear form

L(u, v) = ∫_Ω ( a_{ij} D_j u D_i v + b_i u D_i v + c_i D_i u · v + d u v ) dx

is bounded since, by Hölder,

L(u, v) ≤ C(a_{ij}, b_i, c_i, d) ||u||_{W^{1,2}(Ω)} ||v||_{W^{1,2}(Ω)}.

The crucial point is to check whether L is coercive:

L(u, u) = ∫_Ω ( a_{ij} D_j u D_i u + b_i u D_i u + c_i D_i u · u + d u^2 ) dx
  ≥ λ ∫_Ω |Du|^2 dx − C ||u||_{L^2(Ω)} ||u||_{W^{1,2}(Ω)},

where we used the ellipticity a_{ij} D_j u D_i u ≥ λ|Du|^2 for the leading term and the bound C(b_i, c_i, d) ||u||_{L^2(Ω)} ||u||_{W^{1,2}(Ω)} for the lower-order terms.
Continuing, recall Poincaré's inequality:

||u||^2_{L^2(Ω)} ≤ C_p^2 ||Du||^2_{L^2(Ω)}
⟹ ||u||^2_{L^2(Ω)} + ||Du||^2_{L^2(Ω)} ≤ (C_p^2 + 1) ||Du||^2_{L^2(Ω)}
⟹ ||Du||^2_{L^2(Ω)} ≥ 1/(C_p^2 + 1) ||u||^2_{W^{1,2}(Ω)}.

Hence, writing λ_0 = λ/(C_p^2 + 1) and using Young's inequality,

L(u, u) ≥ λ/(C_p^2 + 1) ||u||^2_{W^{1,2}(Ω)} − C ||u||_{L^2(Ω)} ||u||_{W^{1,2}(Ω)}
  ≥ λ_0 ||u||^2_{W^{1,2}(Ω)} − (λ_0/2) ||u||^2_{W^{1,2}(Ω)} − C ||u||^2_{L^2(Ω)}
  = (λ_0/2) ||u||^2_{W^{1,2}(Ω)} − C ||u||^2_{L^2(Ω)}.

In general, L is not coercive, but if we take L_σ u = Lu + σu, i.e.

L_σ(u, v) = L(u, v) + σ ∫_Ω u v dx,

then

L_σ(u, u) ≥ (λ_0/2) ||u||^2_{W^{1,2}(Ω)} + (σ − C) ||u||^2_{L^2(Ω)},

and L_σ is coercive as soon as σ > C(b_i, c_i, d).

Question: Under what conditions can we solve Lu = F for all F ∈ H^{−1}(Ω)?
Answer: By applications of Fredholm's alternative.

For Lu = −D_i(a_{ij} D_j u + b_i u) + c_i D_i u + d u, define

⟨Lu, v⟩ = L(u, v) = ∫_Ω ( a_{ij} D_j u D_i v + b_i u D_i v + c_i D_i u · v + d u v ) dx.

The formally adjoint operator L* is defined by ⟨L*u, v⟩ = L*(u, v) = L(v, u), i.e.

L*(u, v) = ∫_Ω ( a_{ij} D_j v D_i u + b_i v D_i u + c_i D_i v · u + d u v ) dx
  = ∫_Ω ( −D_i(a_{ji} D_j u + c_i u) + b_i D_i u + d u ) v dx  (interpreted weakly),

so that

L*u = −D_i(a_{ji} D_j u + c_i u) + b_i D_i u + d u.
10.2.1 Application of the Fredholm Alternative

Theorem 10.2.4 (Fredholm's Alternative): Let L be strictly elliptic. Then either

i.) for all g, f_i ∈ L^2(Ω) the equation

Lu = g + D_i f_i in Ω,  u = 0 on ∂Ω

has a unique weak solution, or

ii.) there exists a non-trivial weak solution of the homogeneous equation

Lu = 0 in Ω,  u = 0 on ∂Ω.

iii.) If ii.) holds, then Lu = g + D_i f_i = F has a solution if and only if

⟨F, v⟩ = ∫_Ω ( g v − f_i D_i v ) dx = 0  ∀v ∈ N(L*),

i.e. for all weak solutions of the adjoint equation L*v = 0.

Proof: We first set up the Fredholm framework. Note that

Lu = F ∈ H^{−1}(Ω)  ⟺  L_σ u − σu = F,

with σ large enough that L_σ u = F has a unique solution for all F. Formally, let

I_1 : H^1_0(Ω) ↪ L^2(Ω),  I_0 : L^2(Ω) ↪ H^{−1}(Ω),

where (I_0 u)(v) = ∫_Ω u v dx is bounded and linear, and set I_2 = I_0 I_1 : H^1_0(Ω) → H^{−1}(Ω). We interpret our equation as L_σ u − σ I_2 u = F in H^{−1}(Ω), i.e.

u − σ L_σ^{−1} I_2 u = L_σ^{−1} F  in H^1_0(Ω).    (10.5)

One option is to use the Fredholm alternative for this equation directly (done in Gilbarg and Trudinger). We will instead shift into L^2(Ω) first and interpret it as an equation in L^2(Ω):

I_1 u − σ I_1 L_σ^{−1} I_2 u = I_1 L_σ^{−1} F
⟺ (Id − σ I_1 L_σ^{−1} I_0) I_1 u = I_1 L_σ^{−1} F
⟺ (Id − σ I_1 L_σ^{−1} I_0) ũ = I_1 L_σ^{−1} F  in L^2(Ω),    (10.6)

where ũ = I_1 u ∈ L^2(Ω).

Remark: If ũ is a solution of (10.6), then

u = L_σ^{−1} ( σ I_0 ũ + F )    (10.7)
is a solution of (10.5).

Now we come back to the Fredholm alternative. Set

K = σ I_1 L_σ^{−1} I_0 : L^2(Ω) → L^2(Ω);

since I_1 is compact and L_σ^{−1} I_0 is bounded, K is compact. Hence either (10.6) has a unique solution for every right-hand side in L^2(Ω), or the homogeneous equation has non-trivial solutions.

• What i.) means: take the right-hand side f̃ = I_1 L_σ^{−1} F. Then there exists a unique solution ũ ∈ L^2(Ω), hence by (10.7) a unique solution u of (10.5), that is,

u − σ L_σ^{−1} I_2 u = L_σ^{−1} F  ⟺  L_σ u − σ I_2 u = F  ⟺  Lu = F.

• What ii.) means: (Id − K) ũ = 0 has a non-trivial solution, which gives a non-trivial weak solution of

u − σ L_σ^{−1} I_2 u = 0  ⟺  L_σ u − σ I_2 u = 0  ⟺  Lu = 0.

Proof of iii.): We claim K* = σ I_1 (L*_σ)^{−1} I_0. Recall that

Lu = f ⟺ L(u, v) = (f, v) ∀v,  L*u = f ⟺ L*(u, v) = (f, v) ∀v,  and  L*_σ(v, u) = L_σ(u, v),
L_σ u = f ⟺ L_σ(u, v) = (f, v) = L_σ(L_σ^{−1} f, v),  L*_σ u = f ⟺ L*_σ(u, v) = (f, v) = L*_σ((L*_σ)^{−1} f, v).

By the definition of the adjoint, for g, h ∈ L^2(Ω),

(K*g, h)_{L^2} = (g, Kh) = L*_σ((L*_σ)^{−1} g, Kh) = L_σ(Kh, (L*_σ)^{−1} g) = σ L_σ(L_σ^{−1} h, (L*_σ)^{−1} g) = σ (h, (L*_σ)^{−1} g) = σ ((L*_σ)^{−1} g, h),

so ((K* − σ(L*_σ)^{−1}) g, h) = 0 for all h ∈ L^2 and all g ∈ L^2, which proves the claim (with the embeddings understood).
Solvability of the Original Equation

Recall Lu = F ∈ H^{−1}(Ω) ⟺ (I − K) ũ = I_1 L_σ^{−1} F. By the Fredholm alternative in L^2: if the solution is not unique, then we can solve (10.6) precisely when the right-hand side is perpendicular to the null-space of the adjoint operator I − K*, i.e.

(I_1 L_σ^{−1} F, v) = 0  ∀v ∈ N(I − K*).

Now v ∈ N(I − K*) ⟺ (I − σ(L*_σ)^{−1}) v = 0 ⟺ L*_σ v − σv = 0 ⟺ L*v = 0, i.e. v ∈ N(L*). Multiplying the solvability condition by σ and using L*_σ v = σv for v ∈ N(L*),

(I_1 L_σ^{−1} F, σv) = (I_1 L_σ^{−1} F, L*_σ v) = ⟨L_σ^{−1} F, L*_σ v⟩ = L*_σ(v, L_σ^{−1} F) = L_σ(L_σ^{−1} F, v) = ⟨F, v⟩,

so the solvability condition reads

⟨F, v⟩ = 0  ∀v ∈ N(L*).

Remark: Consider ∆u = f in the space W^{1,2}_{per} of periodic functions. Obviously u ≡ constant is periodic with ∆u = 0. The solvability condition comes from Fredholm's alternative; it turns out that we need ∫_Ω f dx = 0.
10.2.2 Spectrum of Elliptic Operators

Theorem 10.2.5 (Spectrum of elliptic operators): Let L be strictly elliptic. Then there exists an at most countable set Σ ⊂ R such that the following holds:

1. If λ ∉ Σ, then

Lu = λu + g + D_i f_i in Ω,  u = 0 on ∂Ω

has a unique weak solution for all g, f_i ∈ L^2(Ω).

2. If λ ∈ Σ, then the eigenspace N(λ) = {u : Lu = λu} is non-trivial and has finite dimension.

3. If Σ is infinite, then Σ = {λ_k, k ∈ N}, and λ_k → ∞ as k → ∞.

Remark: In 1D, −u'' = λu on [0, π] with u(0) = u(π) = 0 has eigenfunctions u_k(x) = sin(kx), i.e. λ_k = k^2 → ∞.

Proof: This is essentially a consequence of the spectral theorem for compact operators (Theorem 7.10). By Fredholm, uniqueness fails ⟺ Lu = λu has a non-trivial solution. Now

Lu = λu ⟺ Lu + σu = (λ + σ)u ⟺ L_σ u = (λ + σ)u ⟺ u = (λ + σ) L_σ^{−1} u = ((λ + σ)/σ) Ku ⟺ Ku = (σ/(λ + σ)) u,

with K the compact operator from before. Hence the spectrum is at most countable, each eigenspace is finite dimensional, and zero is the only possible accumulation point of the eigenvalues of K, which forces λ_k → ∞ if Σ is not finite.

General elliptic equation Lu = f with

Lu = −D_i(a_{ij} D_j u + b_i u) + c_i D_i u + d u,  L(u, v) = ⟨Lu, v⟩:

existence works whenever the form is coercive, L(u, u) ≥ λ ||u||^2_{W^{1,2}(Ω)}; e.g. for L_σ u = Lu + σu this gives existence of solutions, and then Fredholm applies. If, e.g., b_i, c_i, d = 0, then σ = 0 will work.
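As a quick numerical illustration of the 1D remark above (an added sketch; the discretization and grid size are assumptions, not part of the notes), the lowest eigenvalues of the standard finite-difference Dirichlet Laplacian on [0, π] approach λ_k = k^2:

import numpy as np

# Discrete Dirichlet Laplacian for -u'' on [0, pi] with N interior points.
N = 1000
h = np.pi / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

eigs = np.sort(np.linalg.eigvalsh(A))      # discrete spectrum, increasing
for k in range(1, 6):
    print(f"lambda_{k} ~ {eigs[k - 1]:8.4f}   (exact k^2 = {k**2})")
# The output approaches 1, 4, 9, 16, 25, matching u_k(x) = sin(kx), lambda_k = k^2.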
10.3 Elliptic Regularity I: Sobolev Estimates
Elliptic Regularity I: Sobolev Estimates
Elliptic regularity is one of the most challenging aspects of theoretical PDE. Regularity is
basically the studies “how good” a weak solution is; given that weak formulations of PDE
are not unique by any means, regularity tends to be very diverse in terms of the type of
calculations it requires.
Version::31102011
The next two sections are devoted to the “classical” regularity results pertaining to linear
elliptic PDE. “Classical” is quoted because we will actually be employing more modern methods pertaining to Companato Spaces for derivation of Schauder estimates in the next section.
Aside: One will notice that the coming expositions focus heavily on Hölder estimates as
opposed to approximations in Sobolev Spaces. Philosophically speaking, Sobolev Spaces is
the tool to use for existence and are hence the starting assumption in many regularity arguments; but the ideal goal of regularity is to prove that a weak solution is indeed a classical
one. Thus, in regularity we usually are aiming for continuous derivatives, i.e. solutions in
Hölder Spaces (at least).
To get the ball moving, let us consider a brief example. This will hopefully help convince
the reader that regularity is something that really does need to be proven as opposed to a
pedantic technicality that is always true.
Example 9: Consider the following problem in R:
−u 00 = f in (0, 1)
.
u(0) = 0
Using integration by parts to attain a weak formulation of our solution we see that
u(x) =
Z
x
0
Z
t
f (s) ds dt
0
will solve the original equation, but the harder question of regularity u persist. Now if
f ∈ L2 ([0, 1]) it can be shown that u ∈ H 2 ([0, 1]).
In higher dimensions, if u is a weak solution of −∆u = f , then f ∈ L2 (Ω) usually im2 (Ω) (provided ∂Ω has decent geometry, etc.). On the other hand, if f ∈ C 0 (Ω)
plies u ∈ Hloc
is is generally not the case that u ∈ C 2 (Ω)! There is hope though; we will prove (along with
the previous statements) that if f ∈ C 0,α (Ω) (α > 0), then u ∈ C 2 (Ω) in general.
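A small numerical sketch of Example 9 (an added illustration; the particular f and the grid are assumptions): for a discontinuous f ∈ L^2(0, 1), the formula u(x) = −∫_0^x ∫_0^t f(s) ds dt produces a u whose first derivative is still continuous while u'' = −f jumps, which is exactly the two-derivative gain the example describes.

import numpy as np

# f is a step function: in L^2(0,1) but not continuous.
x = np.linspace(0.0, 1.0, 10001)
dx = x[1] - x[0]
f = np.where(x < 0.5, 1.0, -1.0)

# u(x) = -int_0^x int_0^t f(s) ds dt via two cumulative (left Riemann) sums.
F = np.concatenate(([0.0], np.cumsum(f[:-1]) * dx))    # F(t) = int_0^t f
u = -np.concatenate(([0.0], np.cumsum(F[:-1]) * dx))   # u(0) = 0

du = np.gradient(u, dx)      # ~ u'  : continuous (equals -F)
d2u = np.gradient(du, dx)    # ~ u'' : jumps at x = 0.5, matching -f

away = np.abs(x - 0.5) > 0.01
print("max |u'' + f| away from the jump:", np.max(np.abs(d2u[away] + f[away])))
print("jump of u'' across x = 0.5:", d2u[x > 0.51][0] - d2u[x < 0.49][-1])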
The next proposition contains one of the most important inequalities in regularity studies of elliptic PDE.

10.3.1 Caccioppoli Inequalities

In this first subsection, we will develop the essential Caccioppoli inequalities. Even though these inequalities ultimately provide a means of bounding first derivatives of solutions of elliptic equations, the conditions under which the inequality itself holds vary. Thus, we first consider the most intuitive example of Laplace's equation; following this, a more general situation will be presented.

Proposition 10.3.1 (Caccioppoli's inequality for −∆u = 0): If u ∈ H^1(B(x_0, R)) is a weak solution of −∆u = 0, i.e.

∫ Du · Dφ dx = 0  ∀φ ∈ H^1_0(B(x_0, R)),

then

∫_{B(x_0,r)} |Du|^2 dx ≤ C/(R − r)^2 ∫_{B(x_0,R) \ B(x_0,r)} |u − λ|^2 dx

for all λ ∈ R and 0 < r < R.
Proof: The proof of this proposition is based on the idea of picking a "good" test function in H^1_0(B(x_0, R)) with which to manipulate our weak formulation. With the purpose of finding such a function, let us consider a function η such that

i.) η ∈ C_c^∞(B(x_0, R)),
ii.) 0 ≤ η ≤ 1,
iii.) η ≡ 1 on B(x_0, r), and
iv.) |Dη| ≤ C/(R − r).

(Figure 10.1 sketches a potential form of such an η in R^2.)

Next, fix λ ∈ R and define φ = η^2(u − λ) ∈ W^{1,2}_0(B(x_0, R)). Calculating

Dφ = 2η Dη (u − λ) + η^2 Du

and plugging into the weak formulation, we ascertain

0 = ∫_{B(x_0,R)} Du · Dφ dx = ∫_{B(x_0,R)} [ 2η Dη · Du (u − λ) + η^2 |Du|^2 ] dx.
Algebraic manipulation of the above equality shows that

∫_{B(x_0,R)} η^2 |Du|^2 dx = − ∫_{B(x_0,R)} 2η Dη · Du (u − λ) dx
  ≤ ∫_{B(x_0,R)} 2η |Dη| · |Du| · |u − λ| dx
  ≤ 2 ( ∫_{B(x_0,R)} η^2 |Du|^2 dx )^{1/2} ( ∫_{B(x_0,R)} |Dη|^2 |u − λ|^2 dx )^{1/2},

by Hölder's inequality. Now we divide through by the first factor on the RHS and square the result to find that

∫_{B(x_0,R)} η^2 |Du|^2 dx ≤ 4 ∫_{B(x_0,R) \ B(x_0,r)} |Dη|^2 |u − λ|^2 dx,

where we have used that Dη ≡ 0 on B(x_0, r) and |Dη|^2 ≤ C/(R − r)^2. This leads immediately to the conclusion that

∫_{B(x_0,r)} |Du|^2 dx ≤ C/(R − r)^2 ∫_{B(x_0,R) \ B(x_0,r)} |u − λ|^2 dx.
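For intuition, here is a small numerical check of the inequality (an added sketch, not from the notes; the harmonic sample u, the radii, and the grid are all assumptions). It evaluates both sides for the harmonic function u(x, y) = x^2 − y^2, with λ taken as the mean of u over the annulus, and reports the size of constant C the inequality would require, which stays of order one:

import numpy as np

# Harmonic sample u(x,y) = x^2 - y^2 on a grid around x0 = (0.1, 0.0).
n = 801
s = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(s, s, indexing="ij")
dA = (s[1] - s[0])**2
x0, y0, r, R = 0.1, 0.0, 0.4, 0.8          # B(x0,R) stays inside the grid

U = X**2 - Y**2
Ux, Uy = 2.0 * X, -2.0 * Y                  # exact gradient of u
dist = np.hypot(X - x0, Y - y0)
ball, annulus = dist <= r, (dist > r) & (dist <= R)

lam = U[annulus].mean()                     # a convenient choice of lambda
lhs = np.sum((Ux**2 + Uy**2)[ball]) * dA
rhs = np.sum(((U - lam)**2)[annulus]) * dA / (R - r)**2
print("int_{B_r} |Du|^2            =", lhs)
print("int_{ann} |u-lam|^2/(R-r)^2 =", rhs)
print("constant C needed           =", lhs / rhs)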
Before moving on to Caccioppoli's inequalities for elliptic systems, we have the following special cases of Proposition 10.3.1:

Case 1: r = R/2, λ = 0.

∫_{B(x_0,R/2)} |Du|^2 dx ≤ C/R^2 ∫_{B(x_0,R) \ B(x_0,R/2)} |u|^2 dx ≤ C/R^2 ∫_{B(x_0,R)} |u|^2 dx.

Case 2: r = R/2, λ = u_{B(x_0,R)}.

∫_{B(x_0,R/2)} |Du|^2 dx ≤ C/R^2 ∫_{B(x_0,R)} |u − u_{B(x_0,R)}|^2 dx.

Case 3: r = R/2, λ = u_{B(x_0,R) \ B(x_0,R/2)}.

∫_{B(x_0,R/2)} |Du|^2 dx ≤ C/R^2 ∫_{B(x_0,R) \ B(x_0,R/2)} |u − u_{B(x_0,R)\B(x_0,R/2)}|^2 dx
  ≤ C ∫_{B(x_0,R) \ B(x_0,R/2)} |Du|^2 dx,

by the Poincaré inequality. We can "plug the hole" in the area of integration on the RHS by adding C times the LHS of the above to both sides of the inequality to get

(1 + C) ∫_{B(x_0,R/2)} |Du|^2 dx ≤ C ∫_{B(x_0,R)} |Du|^2 dx,

i.e.

∫_{B(x_0,R/2)} |Du|^2 dx ≤ C/(C + 1) ∫_{B(x_0,R)} |Du|^2 dx = Θ ∫_{B(x_0,R)} |Du|^2 dx,    (10.8)

where 0 < Θ < 1 is constant!
A direct consequence of this last case is

Corollary 10.3.2 (Liouville's Theorem): If u ∈ H^1(R^n) is a weak solution of −∆u = 0, then u is constant.

Proof: Take the limit R → ∞ of both sides of (10.8) to see

∫_{R^n} |Du|^2 dx ≤ Θ ∫_{R^n} |Du|^2 dx.

Since 0 < Θ < 1 and the integral is finite (as u ∈ H^1(R^n)), this forces ∫_{R^n} |Du|^2 dx = 0, i.e. Du = 0 a.e.; hence u is constant.
Now we move on to consider elliptic systems of equations (which may or may not be linear).

Proposition 10.3.3 (Caccioppoli's inequality): Consider the elliptic system

−D_α ( A^{αβ}_{ij}(x) D_β u^j ) = g_i,  for i = 1, …, m,    (10.9)

where g ∈ H^{−1}(Ω). Suppose we choose f_i, f_i^α ∈ L^2(Ω) such that g_i = f_i + D_α f_i^α, and consider a weak solution u ∈ H^1_{loc}(Ω) of (10.9), i.e.

∫_Ω A^{αβ}_{ij} D_β u^j D_α φ^i dx = ∫_Ω ( f_i φ^i + f_i^α D_α φ^i ) dx  ∀φ ∈ H^1_0(Ω).

If we assume

either
  i.) A^{αβ}_{ij} ∈ L^∞(Ω), and
  ii.) A^{αβ}_{ij} ξ^i_α ξ^j_β ≥ ν|ξ|^2, ν > 0,

or
  i.) A^{αβ}_{ij} = constant, and
  ii.) A^{αβ}_{ij} ξ^i ξ^j η_α η_β ≥ ν|ξ|^2 |η|^2, ν > 0, for all ξ ∈ R^m and η ∈ R^n
       (this is called the Legendre-Hadamard condition),

or
  i.) A^{αβ}_{ij} ∈ C^0(Ω),
  ii.) the Legendre-Hadamard condition holds, and
  iii.) R is sufficiently small,

then the following Caccioppoli inequality is true:

∫_{B(x_0,R/2)} |Du|^2 dx ≤ C(n) { 1/R^2 ∫_{B(x_0,R)} |u − λ|^2 dx + R^2 ∫_{B(x_0,R)} Σ_i f_i^2 dx + ∫_{B(x_0,R)} Σ_{i,α} (f_i^α)^2 dx }

for all λ ∈ R.

Note: The general Caccioppoli inequalities do not require the integrations to be over concentric balls. It is easy to see that as long as B_{R/2}(y_0) ⊂ B_R(x_0), the inequality still holds (just appropriately modify η in the proofs). This generalization will be assumed in some of the more heuristic arguments to come.
Proof: 1 Here we will prove the third case; it’s not hard to adapt the following estimates
to prove cases i.) and ii.). From our weak formulation we derive
Z
BR (x0 )
i
j
Aαβ
ij (x0 )Dα u Dβ φ
dx
=
Z
h
i
αβ
Aαβ
(x
)
−
A
(x)
Dα u i Dβ φj dx
0
ij
ij
BR (x0 )
Z
Z
+
fi φi dx +
fi α Dα φi dx
BR (x0 )
1
BR (x0 )
Can be omitted on the first reading.
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
290
for all φ ∈ H01 (BR (x0 )). So, we consider the above again with φ = (u − λ)η 2 . Now we
calculate:
Z
BR (x0 )
≤
BR (x0 )
i
i
j
j
Aαβ
ij (x0 )Dα [(u − λ )η]Dβ [(u − λ )η] dx
BR (x0 )
i
j
j
2
Aαβ
ij (x0 )Dα u Dβ [(u − λ )η ]
Z
+
Z
Z
BR (x0 )
Version::31102011
=
Z
|D[(u − λ)η]|2 dx
i
i
j
j
Aαβ
ij (x0 )(u − λ )(u − λ )Dα ηDβ η dx
h
i
αβ
αβ
=
Aij (x0 ) − Aij (x) Dα u i Dβ (u j − λj )η 2 dx
BR (x0 )
Z
Z
i
i
2
+
fi (u − λ )η dx +
fi α Dα (u i − λi )η 2 dx
B (x )
BR (x0 )
Z R 0
αβ
+
Aij (x0 )(u i − λi )(u j − λj )Dα ηDβ η dx
BR (x0 )
Z
Z
≤δ
Dα u i Dβ u j η 2 dx + 2δ
Dα u i · Dβ η · (u j − λj )η dx
BR (x0 )
B (x )
Z
Z R 0
+
fi η(u i − λi ) dx +
fi α Dα · u i η 2 dx
BR (x0 )
BR (x0 )
Z
Z
X
i
α i
|u − λ|2 |Dη|2 dx
+2
fi (u − λ )ηDη dx + C
Young’s
Z
BR (x0 ) α
BR (x0 )
|Du|2 · η 2 dx
Z
Z
1
2δ
+
|u − λ|2 |Dη|2 dx + 2
|u − λ|2 dx
ν BR (x0 )
R BR (x0 )
Z
Z
X
X
1
2
fi dx +
fi α dx
+
ν BR (x0 )
BR (x0 ) i
i,α
Z
Z
X
2
+ν
|Du|2 η 2 dx +
(fi α )2 dx
ν
BR (x0 )
BR (x0 ) i,α
Z
Z
2
2
+ 2ν
|u − λ| |Dη| dx + C
|u − λ|2 |Dη|2 dx
BR (x0 )
BR (x0 )
Z
Z
X
3
2 2
≤ (δ + 2δν + ν)
|Du| η dx +
(fi α )2 dx
ν BR (x0 )
BR (x0 )
i,α
Z
−1
1 + 2ν δ + 2ν + C
|u − λ|2 dx
+
R2
BR (x0 )
Z
X
+ R2
fi 2 dx.
≤
δ
BR (x0 )
|Du|2 η 2 dx + 2δν
Z
BR (x0 )
Gregory T. von Nessi
BR (x0 )
i
(10.10)
10.3 Elliptic Regularity I: Sobolev Estimates
291
In case it is unclear, δ is the standard free constant of continuity; and C bounds all Aαβ
ij on
2
B R (x0 ). Also, ν and some R terms arise via Young’s inequality. Next, choose a specific η
such that η ≡ 1 on R/2. Now, we consider
Z
BR/2 (x0 )
≤
Version::31102011
=
Z
|Du|2 dx
BR/2 (x0 )
|Du|2 η 2 dx
BR/2 (x0 )
(D[(u − λ)η])2 − (u − λ)2 (Dη)2 − 2DuDη(u − λ)η dx
Z
Young’s
≤
Z
2
BR/2 (x0 )
|D[(u − λ)η]| dx + 2ν
Z
BR/2 (x0 )
Z
2
+ 1+
|u − λ|2 |Dη|2 dx.
ν
BR/2 (x0 )
|Du|2 η 2 dx
Solving this last inequality algebraically and utilizing the properties of η, we get
Z
Z
2
(1 − 2ν)
|Du| dx ≤
|D[(u − λ)η]|2 dx
BR/2 (x0 )
BR/2 (x0 )
ν+2
+
νR2
Z
BR/2 (x0 )
|u − λ|2 dx.
Marching on, we put in estimate (10.10) into the RHS of the above (remembering that we
now put in R/2 instead of R into the estimate) to ascertain
(1 − 2ν)
Z
BR/2 (x0 )
|Du|2 dx
Z
Z
X
3
≤ (δ + 2δν + ν)
|Du| η dx +
(fi α )2 dx
ν
BR/2 (x0 )
BR/2 (x0 ) i,α
Z
2ν 2 + (C + 2)ν + 2δ + 3
+
|u − λ|2 dx
R2
BR/2 (x0 )
Z
X
fi 2 dx
+ R2
2 2
BR/2 (x0 )
i
Finally solving algebraically we conclude that
(1 − 3ν − δ(1 − 2ν))
Z
Z
BR/2 (x0 )
|Du|2 dx
Z
X
2ν 2 + (C + 2)ν + 2δ + 3
3
α 2
≤
(fi ) dx +
|u − λ|2 dx
ν BR (x0 )
R2
B
(x
)
R 0
i,α
Z
X
+ R2
fi 2 dx.
BR (x0 )
i
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
292
In the above we have used the properties of η and after solving algebraically, expanded the
RHS integrals to balls of radius R (obviously this can be done as we are integrating over
strictly positive quantities). This last inequality brings us to our conclusion as ν, δ, and C
are independent of u, along with the fact that δ and ν can be chosen small enough so the
LHS of the above remains positive.
10.3.2 Interior Estimates

We will now start exploring the classical theory of linear elliptic regularity. To begin, we will go over estimation of elliptic solutions via bounding Sobolev norms, utilizing difference quotients. The first two theorems of this subsection deal with interior estimates, while the last theorem treats the issue of boundary regularity.

Changing the system under consideration, we will now work with an elliptic system of the form

−D_α ( A^{αβ}_{ij}(x) D_β u^j ) = f_i + D_α f_i^α,  for i = 1, …, m,

where f_i ∈ L^2(Ω) and f_i^α ∈ H^1_{loc}(Ω). Basically, we will no longer assume that the inhomogeneous term in the system is merely an element of H^{−1}(Ω). Indeed, we will actually need true weak differentiability of the inhomogeneity (i.e. a bounded Sobolev norm) to attain the full regularity theory of these linear systems. With that, we consider a weak solution u ∈ H^1_{loc}(Ω) of the elliptic system:

∫_Ω A^{αβ}_{ij} D_β u^j D_α φ^i dx = ∫_Ω ( f_i φ^i + f_i^α D_α φ^i ) dx  ∀φ ∈ H^1_0(Ω);    (10.11)

this particular weak formulation is derived by the usual method of integration by parts. Now, we go on to our first theorem.

Theorem 10.3.4 (): Suppose that A^{αβ}_{ij} ∈ Lip(Ω), that (10.11) is elliptic with f_i ∈ L^2(Ω) and f_i^α ∈ H^1(Ω). If u ∈ H^1_{loc}(Ω) is a weak solution of (10.11), then u ∈ H^2_{loc}(Ω).
Proof: We start off by recalling the notation from section Section 9.8: uh (x) := u(x +
hes ), where
es := (0, . . . , |{z}
1 , . . . , 0).
sth place
With this we consider Ω0 ⊂⊂ Ω with h small enough so that uh (x) is defined for all x ∈ Ω0 .
Now, we can rewrite our weak formulation as
Z
j
i
Aαβ
ij (x + hes )Dβ uh Dα φ dx
0
Ω
Z
=
(fi (x + hes )φi (x) + fi α (x + hes )Dα φi (x)) dx.
Ω0
Subtracting the original weak formulation from this, we have
"
#
" αβ
#
Z
j
j
Aij (x + hes ) − Aαβ
ij (x)
αβ Dβ uh − Dβ u
i
Aij
Dα φ dx +
Dβ u j Dα φi dx
h
h
0
Ω
Z Z α
fi (x + hes ) − fi α (x)
fi (x + hes ) − f (x)i
i
=
φ (x) dx +
Dα φi dx.
h
h
Ω0
Ω0
10.3 Elliptic Regularity I: Sobolev Estimates
293
Using notation from section Section 9.8 and commuting the difference and differential operator on u, we obtain
Z
Ω0
j
i
s i
j
s αβ
Aαβ
ij · Dβ (∆h u ) · Dα φ + ∆h Aij · Dβ u · Dα φ dx
=
Z
Ω0
∆sh fi · φi + ∆sh fi α · Dα φi dx.
Version::31102011
We can algebraically manipulate this to get
Z
Ω0
s j
i
Aαβ
ij · Dβ (∆h u ) · Dα φ
=
Z
Ω0
j
i
∆sh fi · φi + (∆sh fi α − ∆sh Aαβ
ij · Dβ u )Dα φ dx. (10.12)
At this point, we would like to apply Caccioppoli’s inequality to the above, but the first term
on the RHS is not necessarily bounded; in order to proceed we now move to prove it is indeed
bounded. So, let us look at this term with φi := η 2 ∆sh u i , where η has the standard definition.
After some algebraic tweaking around, we see that
Z
Ω0
fi · ∆s−h (η 2 · ∆sh u i ) dx
Z
=
Ω0
fi η · ∆sh u i · ∆s−h η dx +
Z
Ω0
fi · η(x − hes ) · ∆s−h (η · ∆sh u i ) dx.
We now estimate the second term on the RHS by applying Young’s inequality, a derivative
product rule, and difference quotients bounds with derivatives. The ultimate result is:
Z
fi · ∆s−h (η 2 · ∆sh u i ) dx
Z
Young’s
≤
fi η · ∆sh u i · ∆s−h η dx
0
Ω
Z
Z
Z
1
2
2
s i 2
2 s i 2
+
|fi η| dx + ν
η |D(∆h u )| dx +
|Dη| |∆h u | dx
ν Ω0
Ω0
Ω0
Z
≤
fi η · ∆sh u i · ∆s−h η dx
Ω0
+ C(R) sup ||fi ||L2 (Ω0 ) + sup ||u i ||H1 (Ω0 )
i
i
Z
|D(∆sh u i )|2 dx .
+ sup
Ω0
i
Ω0
2
i
≤ C(R) sup ||fi ||L2 (Ω0 ) + sup ||u ||H1 (Ω0 ) + 1
Hölder
i
+ sup
i
Z
Ω0
!
i
|D(∆sh u i )|2 dx .
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
294
Next, this estimate is applied to (10.12) yields
Z
s j
i
Aαβ
ij · Dβ (∆h u ) · Dα φ
Ω0 Z
i
j
≤
fi η · ∆sh u i · ∆s−h η + (∆sh fjβ − ∆sh Aαβ
ij · Dα u )Dβ φ dx
Ω0
+ C(R)
+ sup
i
Z
i
sup ||fi ||L2 (Ω0 ) + sup ||u ||H1 (Ω0 ) + 1
i
!
i
2
|D(∆sh u i )|2 dx .
Ω0
BR/2 (x0 )
1
≤ C(R)
R2
+
Z
Z
BR (x0 )
X
BR (x0 ) α
|∆sh u i |2 dx
|∆sh fi α |2 dx
+
Z
X
BR (x0 ) j,α
2
i 2
|∆sh Aαβ
ij | |Dβ u | dx
i
+ sup ||fi ||L2 (R) + sup ||u ||H1 (R)
i
i
2 !
+1
.
Thus, we have finally bounded the difference of the derivative of u. By proposition 9.8.1 we
2 (Ω).
have shown u ∈ Hloc
Now, we move onto the next theorem which generalizes the first.
k,1 (Ω), that (10.11)
Theorem 10.3.5 (Interior Sobolev Regularity): Suppose that Aαβ
ij ∈ C
k
α
k+1
1
(Ω). If u ∈ Hloc (Ω) is a weak solution of
is elliptic with fi ∈ H (Ω) and fi ∈ H
j
−Dα Aαβ
= fi − Dα fi α , for i = 1, . . . , m,
ij (x)Dβ u
k+2
then u ∈ Hloc
(Ω).
2 (Ω).
Proof: We append the proof of theorem (10.3.4), i.e. we have shown that u ∈ Hloc
Now, let us set φi := Ds ψ i , where ψ i ∈ C0∞ (Ω). Upon integrating by parts the weak
formulation we ascertain
Z
Z
Z
αβ
j
i
i
−
Ds (Aij Dβ u )Dα ψ dx =
f i Ds ψ −
Ds (fi α )Dα ψ i dx.
Ω
Ω
Ω
Upon carrying out the Ds derivative on the RHS and algebraically manipulating we conclude
Z
Z
αβ
j
i
j
i
Aij Dβ (Ds u )Dα ψ dx = −
Ds (Aαβ
ij )Dβ u Dα ψ dx
Ω
Ω
Z
Z
i
−
fi Ds (ψ ) dx +
Ds (fi α )Dα ψ i dx
Ω
Ω
Z
j
α
i
=
[−Ds (Aαβ
ij )Dβ u − fi − Ds fi ]Dα ψ dx
Gregory T. von Nessi
Ω
Version::31102011
Finally, we now can apply Caccioppoli’s inequality to this despite the fact that the C(R) term
is not accounted for. Basically, we keep the C(R) term as is; and reflecting on the proof of
Caccioppoli’s inequality it is obvious that we can indeed say
Z
|D(∆sh u i )|2
10.3 Elliptic Regularity I: Sobolev Estimates
295
Now, in the case of the theorem of k = 1, we have the assumptions that Ds Aαβ
ij ∈ Lip(Ω),
1
α
2
fi ∈ Hloc (Ω) and fi ∈ Hloc (Ω), i.e. we can integrate the LHS of the last equation to as2 (Ω) or u ∈ W 3,2 (Ω). We get our conclusion by applying a standard
certain that Ds u i ∈ Hloc
loc
induction argument to the above method.
From this last theorem, it is clear that the regularity of the inhomogeneity and the coefficients determines the regularity of our weak solution. We note particularly that if A^{αβ}_{ij}, f_i, f_i^α ∈ C^∞(Ω), then u ∈ C^∞(Ω). More specifically, if A^{αβ}_{ij} = const., f_i ≡ 0, and f_i^α ≡ 0, then for any B_R ⊂⊂ Ω and for any k, it can be seen from the above proofs that

||u||_{H^k(B_{R/2})} ≤ C(k, R) ||u||_{L^2(B_R)}.    (10.13)

10.3.3 Global Estimates
Now we move our discussion to the boundary of Ω. Fortunately, we have done most of the
hard work already for this situation.
Theorem 10.3.6 (): Suppose that ∂Ω ∈ C k+1 , Aαβ
∈ C k (Ω), fi ∈ H k (Ω) and fi α ∈
ij
1 (Ω) solves
H k+1 (Ω). If u is a weak solution of the Dirichlet-problem, ie. u ∈ Hloc
Z
Ω
∀φ ∈ H01 (Ω) with
j
i
Aαβ
ij Dβ u Dα φ dx =
Z
Ω
(fi φi + fi α Dα φi ) dx
u = u0 on ∂Ω, then u ∈ H k+1 (Ω) and moreover
"
||u||Hk+1 (Ω) ≤ C(Ω) ||f ||W k−1,2 (Ω) +
X
i,α
#
||fi α ||Hk (Ω) + ||u||L2 (Ω) .
Proof:Setting v = u − u0 , it is clear that v satisfies the same system with a homogeneous
boundary condition. So, without loss of generality, we assume u0 = 0 in our problem. Since
∂Ω ∈ C k+1 , we know for any x0 ∈ ∂Ω, there exists a diffeomorphism
+
Φ : BR
→W ∩Ω
Gregory T. von Nessi
1 '( $ " 1 %. & +" 1
' u = 0 $ ' ∂Ω ∈ C k+1 -1 21 " x ∈ ∂Ω 0
0
) ! '( '
Chapter 10: Linear Elliptic PDE
+
Φ : BR
→W ∩Ω
296
' 6 C k+1 1 ΓR M ∩ ∂Ω
k+1
which maps ΓR onto M ∩ ∂Ω.
of class C
xn
+
W = Φ(BR
)
Φ
Rn
Ω
+
BR
x0
ΓR
x0
!
$ '( '
.Figure
? 10.2:
#"4Boundary
"diffeomorphism
!
1
∗ u)(y ) := u(Φ(y )) with y ∈
The
induces a map Φ∗ defined
by
$ "
diffeomorphism
Φ' k+1
(Φ
4
'
Φ is'(
∗
∗ u)(y) := between
Φ that Φ ∗ is an(Φisomorphism
u(Φ(y))
of class C Φ, the chain-rule dictates
k+1
∗
6
&
D
C
+
k+1
H
(Ω ∩ W
y ∈ and
BR
Φ ). Upon defining
C u∗ := Φ u, the chain-rule again says that
Φ∗
−1
$
)
))
'(=J Φ Du
' ∗ (y ), where
1 JΦHisk+1
H k+1
Du(Φ(y
the Jacobian
of Φ. Recalling multi-dimensional
+ matrix
(BR
)
(Ω ∩ W )
calculus, we see that u∗ satisfies
Z
Z
Z
αβ
i
i
j
˜
(10.14)
!
f˜i α Dα φi dy ,
fi φ dy +
Ãij Dβ u∗ Dα φ dy =
+
BR
. Since
+
k+1
H
(BR
)
+
BR
+
BR
+
BR
where
−1 αβ
Ãαβ
= |det(JΦ )|JΦ
Aij JΦ
ij
−1 ∗ α
Φ fi
f˜i α = |det(JΦ )|JΦ
∗
f˜i = |det(JΦ )|Φ fi .
αβ
From this we see that Ãαβ
ij is a positive multiple of a similarity transformation of Aij , i.e.
Ãαβ
ij is essentially a coordinate rotation of the original coefficients and thus are still elliptic.
After all that, we have reduced the problem to that of u∗ = 0 on ΓR . Now, for ever
direction but xn we can use the previous proof to bound the difference quotient. Specifically,
we take Ψi = ∆s−h (η 2 ∆sh u∗i ) for s = 1, . . . , n − 1 and η ∈ Cc∞ (BR (x0 )) to ascertain
s
i
i
α
||∆h (Dα u∗ )||L2 (B+ ) ≤ C ||u∗ ||H1 (B+ ) + sup ||fi ||L2 (B+ ) + sup ||fi ||H1 (B+ ) .
R
R/2
i
α,i
R
R
precisely as we did for the interior case. This only leaves Dnn u∗ left to be handled. Looking
at (10.14) and writing the whole thing out, we obtain
Z
j
i
Ãnn
ij Dn u∗ Dn φ dy
+
BR
=−
n−1 Z
X
α,β=1
+
BR
j
i
Ãαβ
ij Dβ u∗ Dα φ dy +
Z
+
BR
f˜i φi dy +
Z
+
BR
f˜i α Dα φi dy .
This is an explicit representation of Dnn u∗ in the weak sense. Since the RHS is known to
exist, we thus conclude that so must Dnn u∗ . Now, we can apply the induction and covering
arguments employed in earlier proofs of this section to obtain the result.
Gregory T. von Nessi
Version::31102011
1
∂Ω
10.4 Elliptic Regularity II: Schauder Estimates

In the last section we considered regularity results that pertained to estimates of Sobolev norms. As was alluded to at the beginning of the last section, these estimates provide a starting point for more useful estimates on linear elliptic systems. Namely, we now have enough tools to derive the classical Schauder estimates for linear elliptic systems. The power of these estimates is that they give us bounds on the Hölder norms of solutions with relatively light criteria placed upon the weak solution being considered and the coefficients of the system. We will go through a gradual series of results, starting with the relatively specialized case of systems with constant coefficients and working up to results pertaining to non-divergence form systems with Hölder continuous coefficients.

Notation: For some of the following proofs, the indices have been omitted when they have no effect on the logic or idea of the corresponding proof. Such pedantic notation is simply too distracting when it is not needed for clarity.

To start off this section we will first go over a very useful lemma that will be used in several of the following proofs.
Lemma 10.4.1 (): Consider Ψ(ρ) non-negative and non-decreasing. If

Ψ(ρ) ≤ A [ (ρ/R)^α + ε ] Ψ(R) + B R^β  ∀ρ < R ≤ R_0,    (10.15)

with A, B, α, and β non-negative constants and α > β, then there exists ε_0(α, β, A) such that

Ψ(ρ) ≤ C(α, β, A) [ (ρ/R)^α Ψ(R) + B ρ^β ]

is true if (10.15) holds for some ε < ε_0.

Proof: First, consider 0 < τ < 1 and set ρ = τR. Plugging into (10.15) gives

Ψ(τR) ≤ A τ^α [ 1 + ε τ^{−α} ] Ψ(R) + B R^β.

Next fix γ ∈ (β, α) and choose τ such that 2A τ^α = τ^γ. Now, pick ε_0 such that ε_0 τ^{−α} < 1. Our assumption now becomes

Ψ(τR) ≤ 2A τ^α Ψ(R) + B R^β ≤ τ^γ Ψ(R) + B R^β.

Now, we can use this last inequality as a scheme for iteration:

Ψ(τ^{k+1} R) ≤ τ^γ Ψ(τ^k R) + B (τ^k R)^β
  ≤ τ^γ [ τ^γ Ψ(τ^{k−1} R) + B (τ^{k−1} R)^β ] + B τ^{βk} R^β;

and after doing this iteration k + 1 times, we ascertain

Ψ(τ^{k+1} R) ≤ τ^{(k+1)γ} Ψ(R) + B τ^{βk} R^β Σ_{j=0}^k τ^{j(γ−β)}.

Since ρ < R, there exists k such that τ^{k+1} R ≤ ρ < τ^k R. Consequent upon this choice of k:

Ψ(ρ) ≤ Ψ(τ^{k+1} R) ≤ τ^{(k+1)γ} Ψ(R) + C · B (τ^k R)^β
  ≤ (ρ/R)^γ Ψ(R) + C · B (ρ/τ)^β.
Remark: The above proof might seem to be grossly unintuitive at first, but in reality, given so many free constants, the proof just systematically selects the constants to apply the iterative step that sets the exponents correctly.

The value of the above lemma is that it allows us to replace the BR^β term with Bρ^β, in addition to letting us drop the ε term. Thus Ψ can be bounded by a linear combination of ρ^α and ρ^β via this lemma; a very useful tool for reconciling Campanato semi-norms (as one can intuitively guess from the definitions).
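To see the mechanism of Lemma 10.4.1 at work, the following toy computation (an added sketch; the particular constants, exponents, and starting value are assumptions) iterates the one-step bound from the proof on the radii τ^k R and shows that Ψ(ρ)/ρ^β settles down to a finite constant, i.e. the asserted power-law decay:

import numpy as np

# Toy check of the iteration in Lemma 10.4.1: start from Psi(R) and repeatedly
# apply the one-step bound Psi(tau*r) <= tau**gamma * Psi(r) + B * r**beta.
alpha, beta, gamma = 2.0, 1.0, 1.5                    # beta < gamma < alpha
A, B, R, Psi = 1.0, 1.0, 1.0, 5.0
tau = (1.0 / (2.0 * A)) ** (1.0 / (alpha - gamma))    # chosen so 2*A*tau**alpha = tau**gamma

r = R
for k in range(1, 16):
    Psi = tau**gamma * Psi + B * r**beta              # worst-case iteration step
    r *= tau                                          # new radius rho = tau^k * R
    print(f"k={k:2d}  rho={r:9.3e}  Psi={Psi:9.3e}  Psi/rho^beta={Psi / r**beta:8.3f}")
# The last column converges, illustrating Psi(rho) <= C * rho^beta.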
10.4.1
Systems with Constant Coefficients
1,p
Theorem 10.4.2 (): If u ∈ Wloc
(Ω) is a weak solution of
j
Dα (Aαβ
ij Dβ u ) = 0,
(10.16)
where Aαβ
ij = const. and satisfy the L-H condition, then for some fixed R0 , we have that for
any ρ < R < R0 ,
i.)
Z
2
Bρ (x0 )
|u| dx ≤ C ·
ρ n Z
R
BR (x0 )
|u|2 dx.
(10.17)
ii.) Moreover,
Z
Bρ (x0 )
2
|u − ux0 ,ρ | dx ≤ C ·
ρ n+2 Z
R
BR (x0 )
|u − ux0 ,R |2 dx.
(10.18)
Proof.
(10.17): First, if ρ ≥ R/2 we can clearly pick C such that both of the inequalities above
are true. So we consider now when ρ ≤ R/2. Since u is a solution of (10.16), we
recall the interior Sobolev estimate from the last section (theorem 10.3.5) to write
||u||Hk (BR/2 (x0 )) ≤ C(k, R) · ||u||L2 (BR )
Thus for ρ < R/2 we fine
Z
|u|2 dx
Bρ (x0 )
Hölder
≤
≤
≤
C(n) · ρn · sup |u|2
Bρ (x0 )
n
C(n) · ρ · ||u||Hk (BR/2 (x0 ))
Z
n
C(n, k, R) · ρ ·
|u|2 dx
BR (x0 )
The second inequality comes from the fact that H k (Ω) ⊂−→ C 1 (Ω) ⊂ L∞ (Ω) for k
choosen such that k > pn (see Section 9.7). Also, we can employ a simple rescaling
argument to see that C(n, k, R) = C(k)R−n .
Gregory T. von Nessi
Version::31102011
Moving on to actual regularity results, we first consider the simplest system where the coefficients are constant. After going through the regularity results, we will briefly explore the
very interesting consequences of such.
Version::31102011
10.4 Elliptic Regularity II: Schauder Estimates
299
(10.18): This essentially is a consequence of (10.17) used in conjunction with Poincaré and
Caccioppoli inequalities:
Z
Z
Poincaré
2
2
≤
|u − uρ,x0 | dx
C·ρ
|Du|2 dx
Bρ (x0 )
Bρ (x0 )
n Z
2 ρ
≤
C·ρ
|Du|2 dx
R
BR/2 (x0 )
Z
ρ n+2
=
C·
R2
|Du|2 dx
R
BR/2 (x0 )
ρ n+2 Z
Caccioppoli
≤
C·
|u − ux0 ,R |2 dx.
R
BR (x0 )
Specifically, we have used (10.17) applied to Du to justify the second inequality. We
can do this since Du also solves (10.16) since the coefficients are constant.
The above theorem has some very interesting consequences, that we will now go into.
First, we note that theorem (10.4.2) holds for all derivatives Dα u, as derivatives of a solution
of a system with constant coefficients is itself a solution. With that in mind, let us consider
the polynomial Pm−1 such that
Z
Dα (u − Pm−1 ) dx = 0
Bρ (x0 )
for all |α| ≤ m − 1. With this particular choice of polynomial, we can now apply Poincaré’s
inequality m times to obtain
Z
Z
Poincaré
|u − Pm−1 |2 dx ≤ Cρ2m
|Dm u|2 dx.
Bρ (x0 )
Bρ (x0 )
Since Dm u also solves (10.16) we apply theorem (10.4.2) to get
Z
ρ n+2m Z
|u − Pm−1 |2 dx ≤
|u|2 dx.
R
BR (x0 )
Bρ (x0 )
Now, if we place the additional stipulation on u that |u(x)| ≤ A|x|σ , σ = [m] + 1 in addition
to its solving (10.16), then by letting R → ∞ in the last equation, we get
Z
|u − Pm−1 |2 dx = 0.
Bρ (x0 )
Thus,
If the growth of u is polynomial, then u actually is a polynomial!
This is a very interesting result in that we are actually able to determine the form of u merely
from asymptotics!
We now look at non-homogenous elliptic systems with constant coefficients. For this,
1,p
(Ω) is a solution of
suppose u ∈ Wloc
j
α
Dα (Aαβ
ij Dβ u ) = −Dα fi ,
i = 1, . . . , m
(10.19)
where the Aαβ
ij are again constant and satisfy a L-H Condition. In this situation, we have the
following result:
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
300
2,λ
Theorem 10.4.3 (): Suppose f ∈ L2,λ
loc (Ω), then Du ∈ Lloc (Ω) for 0 ≤ λ < n + 2.
Proof: We start by splitting u in BR ⊂ Ω such that u = v + (u − v ) = v + w where v is
the soluton (which exists by Lax-Milgram, see section Section 10.2) of the Dirichlet BVP:
Dα (Aαβ Dβ v j ) = 0 i = 1, . . . , m in BR
v =u
on ∂BR .
Using this, we start our calculations:
Z
|Du − (Du)x0 ,ρ |2 dx
Bρ (x0 )
Z
|Dv − Dw − (Dv )x0 ,ρ + (Dw )x0 ,ρ |2 dx
=
Bρ (x0 )
Z
ρ n+2 Z
2
|Dv − (Dv )x0 ,ρ | dx +
|Dw − (Dw )x0 ,ρ |2 dx.
≤C
R
BR (x0 )
Bρ (x0 )
Next, we use the fact that v = u − w .
Z
|Du − (Du)x0 ,ρ |2 dx
Bρ (x0 )
ρ n+2 Z
≤C
|Du − Dw − (Du)x0 ,ρ + (Dw )x0 ,ρ |2 dx
R
BR (x0 )
Z
|D(w ) − (Dw )x0 ,ρ |2 dx
+
Br
ρ n+2 Z
≤C
|Du − (Du)x0 ,ρ |2 dx
R
BR (x0 )
Z
+2
|D(w ) − (Dw )x0 ,ρ |2 dx
Bρ (x0 )
Z
ρ n+2 Z
≤C
|Du − (Du)x0 ,ρ |2 dx + 2
|D(u − v )|2 dx
R
BR (x0 )
Bρ (x0 )
Z
+2
|(Dw )x0 ,ρ |2 dx.
Bρ (x0 )
Looking at the last term on the RHS, we see
Z
Bρ (x0 )
2
|(Dw )x0 ,ρ | dx
≤
Hölder
≤
=
Gregory T. von Nessi
1
ρ
Z
Z
Bρ (x0 )
|Dw | dx
!2
Bρ (x0 )
|Dw |2 dx
Bρ (x0 )
|D(u − v )|2 dx
Z
(10.20)
Version::31102011
Since this system has constant coefficients, Dv also solves this system locally for all ρ < R.
Thus, by theorem 10.4.2.ii), we have
Z
ρ n+2 Z
2
|Dv − (Dv )x0 ,ρ | dx ≤ C
|Dv − (Dv )x0 ,ρ |2 dx.
R
Bρ (x0 )
BR (x0 )
10.4 Elliptic Regularity II: Schauder Estimates
301
This inequality combined with (10.24), we see that
Z
Version::31102011
Bρ (x0 )
|Du − (Du)x0 ,ρ |2 dx
Z
ρ n+2 Z
≤C
|Du − (Du)x0 ,ρ |2 dx + C
|D(u − v )|2 dx. (10.21)
R
BR (x0 )
BR (x0 )
To bound the second term on the RHS, we use the L-H condition to ascertain
Z
Z
2
|D(u − v )| dx ≤
A · D(u − v ) · D(u − v ) dx
BR (x0 )
BR (x0 )
Z
≤ C
(f − fx0 ,R )D(u − v ) dx.
BR (x0 )
The second equality above comes from noticing that since u solves (10.19) in the weak
sense, u − v solves
j
i
Dα (Aαβ
ij Dβ (u − v ) ) = −Dα (fα − fx0 ,R ),
i = 1, . . . , m
1,p
(Ω) to be the test function). From
also in the weak sense (also we have taken u − v ∈ Wloc
this, we see that
Z
Z
|D(u − v )|2 dx ≤
|f − fx0 ,R |2 dx,
BR (x0 )
BR (x0 )
i.e., upon combining this with (12.16),
Z
Bρ (x0 )
|Du − (Du)x0 ,ρ |2 dx
≤C
ρ n+2 Z
R
BR (x0 )
|Du − (Du)x0 ,R |2 dx + C[f ]2L2,λ (Ω) Rλ .
To attain our result, we now apply lemma 10.4.1 to the above to justify replacing the second
term on the RHS by ρλ .
Looking closer at the above proof, it is clear that we have proved even more than the
result stated in the theorem. Specifically, we have shown the bound
h
i
[Du]L2,λ (Ω) ≤ C(Ω, Ω0 ) ||u||H1 (Ω) + [f ]L2,λ (Ω)
with Ω0 ⊂⊂ Ω
to be true. An immediate consequence of theorem 10.4.3 and Companato’s theorem is
0,µ
0,µ
Corollary 10.4.4 (): If f ∈ Cloc
(Ω), then Du ∈ Cloc
(Ω), where µ ∈ (0, 1).
10.4.2
Systems with Variable Coefficients
1,p
0
Theorem 10.4.5 (): Suppose u ∈ Wloc
(Ω) is a solution of (10.19 with Aαβ
ij ∈ C (Ω) which
also satisfy a L-H condtion. If fαi ∈ L2,λ (Ω) (∼
= L2,λ (Ω) for 0 ≤ λ ≤ n), then Du ∈ L2,λ
loc (Ω).
Proof: We first rewrite the elliptic system as
n
o
αβ
αβ
j
j
α
Dα Aαβ
(x
)D
u
=
D
A
(x
)
−
A
(x)
D
u
+
f
=: Dα Fiα ,
0
α
0
β
β
i
ij
ij
ij
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
302
BR (x0 )
R
Now BR (x0 ) |f |2 dx ≤ Rλ as f ∈ L2,λ (Ω). With that in mind, if we choose R0 sufficiently
small with ρ < R < R0 , then ω(R) is small and again we can apply lemma 10.4.1 with ω(R)
corresponding to the term; this gives us the result.
1,p
(Ω) be a solution of
Theorem 10.4.6 (): Let u ∈ Wloc
j
j = 1, . . . , m
Dα Aαβ
(x)D
u
= Dα f i α
β
ij
0,µ (Ω) which satisfy a L-H condition. If f i ∈ C 0,µ (Ω), then Du ∈ C 0,µ (Ω) and
with Aαβ
α
ij ∈ C
loc
moreover
n
o
[Du]C 0,µ (Ω) ≤ C ||u||H1 (Ω) + [f ]C 0,µ (Ω) .
Proof: Again, we use the exact same idea as in the previous proof to see that
Z
|Du − (Du)ρ |2 dx
ρ n+2 Z
|Du − (Du)x0 ,r |2 dx
≤C
R
BR (x0 )
Z
Z
+
|f − fR |2 dx + C
|[A(x) − A(x0 )]Du|2 dx.
Bρ (x0 )
BR (x0 )
BR (x0 )
To bound this we start out by looking at the first term on the RHS of the above:
Z
Z
2
|f − fR | dx ≤
|x − x0 |2µ · [f ]2C 0,µ (BR (x0 )) dx
BR (x0 )
Gregory T. von Nessi
BR (x0 )
2µ+n
≤ CR
.
(10.22)
Version::31102011
where we have defined a new inhomogeneity. Now, we essential have the situation in theorem
(10.4.3) with Dα Fiα as the new inhomogenity. Thus, using the exact method used in that
theorem we ascertain
Z
|Du|2 dx
Bρ (x0 )
Z
ρ n Z
2
≤C
|Du| dx + C
|D(u − v )|2 dx
R
BR (x0 )
BR (x0 )
Z
ρ n Z
2
≤C
|Du| dx + C
|F |2 dx
R
BR (x0 )
B (x )
Z R 0
ρ n Z
≤C
|Du|2 dx + C
|[A(x) − A(x0 )]Du|2 dx
R
B (x )
BR (x0 )
Z R 0
+C·
|f |2 dx
BR (x0 )
Z
ρ n Z
≤C
|Du|2 dx + C · ω(R)
|Du|2 dx
R
B (x )
BR (x0 )
Z R 0
+C·
|f |2 dx
10.4 Elliptic Regularity II: Schauder Estimates
303
Now we look at the second term on the RHS of (10.22) to notice
Z
C
BR (x0 )
|[A(x) − A(x0 )] · Du|2 dx
Z
≤C
||x − x0 |µ · [A]C 0,µ (BR (x0 )) · Du|2 dx
BR (x0 )
Z
≤ CR2µ ·
|Du|2 dx
BR (x0 )
Hölder
Version::31102011
≤ CR
2µ+n
≤ CR2µ+n
||u||W 1,2 (BR (x0 ))
Putting these last two estimates into (10.22) yields
Z
Bρ (x0 )
|Du − (Du)ρ |2 dx
≤C
ρ n+2 Z
R
BR (x0 )
|Du − (Du)x0 ,r |2 dx + CR2µ+n .
Per usual this fits the form required by lemma 10.4.1 and consequently this leads us to our
conclusion.
10.4.3
Interior Schauder Estimates for Elliptic Systems in Non-divergence
Form
In this section, we consider with systems in non-divergence form. Obviously, if A is continuously differentiable, there is no difference between the divergent and non-divergent case; but
in this subsection we will consider the situation where A is only continuous. I.e. we consider
j
Aαβ
ij Dβ Dα u = fi
j = 1, . . . , m.
(10.23)
2 (Ω) and Aαβ ∈ C 0,µ (Ω). In this situation, we have the following theorem.
with u ∈ Hloc
ij
Theorem 10.4.7 (): If f i ∈ C 0,µ (Ω) then Dβ Dα u ∈ C 0,µ (Ω); moreover,
h
i
Dβ Dα u C 0,µ (Ω) ≤ C ||Dβ Dα u||L2 (Ω) + [f ]C 0,µ (Ω) .
Proof: Essentially, we follow the progression of proofs of the last subsection, each of
which remain almost entirely unchanged. Corresponding to the progression of the previous
section we consider the proof in three main steps.
Step 1: Suppose A = const. and f = 0, then we are in a divergence form situation and
therefore
Z
|Dβ Dα u − (Dβ Dα u)x0 ,ρ |2 dx
Bρ (x0 )
ρ n+2 Z
≤C·
|Dβ Dα u − (Dβ Dα u)x0 ,ρ |2 dx.
R
BR (x0 )
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
304
Step 2: Suppose A = const and f 6= 0. Again we seek to get this into a divergence form; to
this end we set
i
i
z (x) := u (x) −
(Dβ Dα u i )x0 ,R
2
· xβ · xα .
Thus,
Dσ Dγ z i = Dσ Dγ u i − (Dσ Dγ u i )x0 ,R ,
plugging this into (10.23) yields
As in the previous section, we now split z = v + (z − v ) where v is a solution of of the
homogenous equation:
αβ
A Dβ Dα v j = 0 in BR (x0 )
.
v j = z j on ∂BR (x0 )
With that, (10.23) indicates
(
j
j
Aαβ
ij Dβ Dα (z − v ) = fi − (fi )x0 ,R in BR (x0 )
z j − v j = 0 on ∂BR (x0 )
As in the divergence form case then, we have the estimate
Z
Z
|D2 (z − v )|2 dx ≤
|f − fx0 ,R |2 dx.
BR (x0 )
BR (x0 )
The spatial indices have been dropped above under the assumption that the supremum
of i is taken on the RHS to bound all second order derivatives (as stated at the beginning
of this section, writing this triviality out serves merely as a distraction at this point).
Using this estimate we ascertain
Z
|Dβ Dα z − (Dβ Dα z)x0 ,ρ |2 dx
Bρ (x0 )
ρ n+2 Z
≤C·
|Dβ Dα z − (Dβ Dα z)x0 ,R |2 dx
R
BR (x0 )
Z
+C
|f − fx0 ,R |2 dx,
BR (x0 )
i.e.
Z
|Dβ Dα u − (Dβ Dα u)ρ |2 dx
ρ n+2 Z
|Dβ Dα u − (Dβ Dα u)x0 ,R |2 dx
≤C·
R
BR (x0 )
Z
+C
|f − fx0 ,R |2 dx
Bρ (x0 )
BR (x0 )
Gregory T. von Nessi
Version::31102011
j
Aαβ
ij Dβ Dα z = fi − (fi )x0 ,R .
10.4 Elliptic Regularity II: Schauder Estimates
305
Step 3: The proof now continues as in the divergence form case by writing the equation
j
Aαβ
ij (x)Dβ Dα u = fi
as
αβ
αβ
j
j
Aαβ
ij (x0 )Dβ Dα u = [Aij (x0 ) − Aij (x)]Dβ Dα u + fi =: Fi
Version::31102011
and concludes by using Step 2 with Fi substituted by the RHS of (10.23).
10.4.4
Regularity Up to the Boundary: Dirichlet BVP
In this section, we consider solutions of the Dirichlet BVP
(
j = D f α in Ω
(x)D
u
Dα Aαβ
α i
β
ij
uj = gj
on ∂Ω ∈ C 1,µ
with fi α ∈ C 0,µ (Ω) and g ∈ C 1,µ (Ω) and we shall extend the local regularity result Section
10.4.2. Again, we have already seen many of the techniques in previous proofs that we will
require here. So, we will simply sketch out these two proofs pointing out the differences that
should be considered.
Theorem 10.4.8 (): Let u ∈ W 1,p (Ω) be a solution of
j
Dα Aαβ
(x)D
u
= Dα f i α
β
ij
j = 1, . . . , m
0,µ (Ω) which satisfy a L-H condition. If f i ∈ C 0,µ (Ω), then Du ∈ C 0,µ (Ω) and
with Aαβ
α
ij ∈ C
moreover
n
o
[Du]C 0,µ (Ω) ≤ C ||u||H1 (Ω) + [f ]C 0,µ (Ω) .
Proof: First, we associate u with its trivial extension outside of Ω. Given the interior
result of theorem 10.4.6, we just have to prove boundary regularity to get the global result.
So, as in the Sobolev estimate case, we consider a zero BVP without loss of generality as
the transformation u 7→ u − g does not change our system. Also, in the same vein as the
global Sobolev proof, we know that the assumption ∂Ω ∈ C 1,µ says that (locally) there exists
a diffeomorphism
+
Ψ : U → BR
which flattens the boundary i.e. maps ∂Ω onto ΓR (see figure (10.3)).
+
+
In this proof we will be considering BR
with the goal of finding estimates on BR
0 , where
0
R = µR for some fixed constant µ < 1. Then the global estimate will follow by using a
standard covering argument.
+
So, with that in mind, we may suppose that Ω = BR
and ∂Ω = ΓR . Generally speaking, we seek the analogies of (10.17) and (10.18); once these are established the completion
of the proof mimics the progression of the proofs of interior estimates. To prove these
estimates we proceed in a series of steps.
Gregory T. von Nessi
'( '
306
1 ∂Ω ∈ C
$ "
+
Ψ : U → BR
Chapter 10: Linear Elliptic PDE
' ∂Ω Γ ! . ? @
R
1
Ψ
U
xn
Ω
Rn
+
BR
x0
x0
ΓR
∂Ω
1
$ and
1:
First,
we have
R/2,
Step
-suppose
1 A1 αβ
fi α = 0.BFor
*
+ 1 ρ <(
ij = const.
R
&'
+ 71 BR
R0 = ZµR µ < 1
0
Hölder
' dx ≤ Cρn sup
1 %1 $ " |Du|
2 )
|Du|
2.' Bρ+
' .% $ Bρ+
≤
C(k) · ρn ||u||Hk (B+
≤
C(k) · ρ ||u||Hk (B∗ )
Z
D(R) · ρn
|u|2 dx
∗
ZB
n
|u|2 dx
D(R) · ρ
R/2
)
n
≤
≤
Caccioppoli
C(R) · ρn
≤
Z
+
BR
|uxn |2 dx,
+
BR
+
+
where BR/2
⊂ B ∗ ⊂ BR
and ∂B ∗ is C ∞ . This result corresponds to (10.17). The
fourth inequality in the above calculation comes from the interior Sobolev regularity
estimate of the previous section; hence B ∗ was introduced into the above calculation
to satisfy the boundary requirement required for the global Sobolev regularity. Also,
as with the derivation of (10.17), the second inequality comes from the continuous
+
+
+
imbedding H k (BR/2
) ⊂−→ C k−1,1−n/2 (BR/2
) ⊂ L∞ (BR/2
).
The counterpart of (10.18) is
Z
Bρ+
2
|Du − (Du)ρ | dx
Poincaré
≤
Caccioppoli
≤
Z
Cρ
2
Cρ
n+2
Bρ+
Z
|D2 u|2 dx
+
BR
|uxn |2 dx.
To proceed, we note that the same calculation can be carried out for u − λxn (which is
also a solution with zero boundary value). For this, we have to replace uxn by uxn − λ
(λ = constant) in the above estimate.
Step 2: Now, that we have a way to bound I(Bρ+ ) (the integral over Bρ+ ) with ρ < R/2, we
now need to bound I(Bρ (z)) for z ∈ BR0 (x0 ) with ρ < R/2. To do this, we need to
consider two cases for balls centered at z.
Gregory T. von Nessi
Version::31102011
!
? @ 10.3:
'
. Figure
#"4Boundary
" diffeomorphism
'( ? =
1 ) (z, Γ) = r ≤ µR 1 B (z) ⊂ B + (x
ρ
10.4 Elliptic Regularity II: Schauder Estimates
R 0
307
+
BR
(x0 )
BR/2 (z)
Br (z)
Version::31102011
+
B2r
(y)
r
y
1
1
$ "
' I(B (z
I(B
(z))
I(B
(z))
r
For this, we first estimate I(B
$ "
" ρ(z)) by+I(Bρr (z)),$ then
" I(Br (z)) rby I(B2r+ (y )) and
*
+
+ I(B (y )) by I(BR/2 (z)):I(B (y))
I(Bfinally(y))
I(B (z))
1
+
Case 1: We first consider when dist(z , Γ ) = r ≤ µR with Bρ (z) ⊂ BR
(x0 ).
2r
Z
Bρ (z)
Z
|u − uz,ρ
|2 dx
2
Bρ (z) z,ρ
ρ n+2 Z
|u − uz,r |2 dx
≤C
r
Br (z)
n+2
ρ n+2 Z
2
≤C
|u − uz,2r |2z,r
dx
+
r
(y )
BrB2r(z)
!n+2 Z
ρ n+2
2r
n+2
p
≤D
|u − uz,R/2 |2 dx
2
2
2
+
r
R /4 − r
B√ 2
(y )
z,2r
R /4−r 2
+n+2
!
Z
B (y)
4 2r ρ n+2 r n+2
≤D 1
|u − uz,R/2 |2 dx.
n+2
2
r
R
−
µ
B
(z)
R/2
4
|u − u
ρ
r
ρ
≤C
r
R/2
2r
2r
≤C
Case 2:
x0
? Figure 10.4:Case? 1 for proof
' ? of Theorem 10.4.8
.
| dx
Z
ρ n+2
Z
|u − u
|u − u
2r
| dx
| dx
!
Z
p
|u − uz,R/
2
2
+
r
− rof the result form
B√ Step
In the third inequality we useR
the /4
analogy
1 for (y)
u (the
R2 /4−r 2
proof is the same).
!n+2
Z +
r Bn+2
In the second case
r < µR with
4 we have dist(z, Γρ)=n+2
ρ (z) * BR (x0 ). First we
+
− uz,R/2 |
≤ D that
remember
only want to find a bound for I(Bρ (z) ∩ BR
(x0 )) |u
as we
1 we really
2
µ
are considering −
a trivial
extension ofru outside Ω.R
Heuristically, B
weR/2
first
estimate
(z)
4
≤D
+
BR
0 (x0 )
z
+
+
+
(y )) and finally I(B2ρ
(y )) by I(BR/2 (z)). Again, the
I(Bρ (z) ∩ BR
(x0 )) by I(B2ρ
calculation follows that of Case 1.
* #" 1 ? - & '
u
1 T. von
*% " Gregory
Nessi
'
1 Chapter 10: Linear Elliptic PDE
308
It is a strait-forward geometrical calculation that µ can be taken to be 2√117 in order to
+
ensure B2r
(y ) ⊂ BR/2 (z), that Step 1 can be applied in the above cases, and that all the
above integrations are well defined (i.e. the domains of integration only extend outside of
+
BR
(x0 ) over Γ where the trivial extension of u is still well defined). Since µ is independent
of the original boundary point (x0 ), we can thus apply the standard covering argument to
conclude the analogies of (10.17) and (10.18) do indeed hold on the boundary.
10.4.4.1
The nonvariational case
As in 9.8.1 we find the equivalent of (10.17):
Z
2
Bρ+
Z
Bρ+
|Dβ Dα u| dx ≤ C
2
|Dβ Dα u − (Dβ Dα u)ρ | dx ≤ C
ρ n Z
R
+
BR
ρ n+2 Z
R
|Dβ Dα u|2 dx
BR (x0 )
|Dβ Dα u − (Dβ Dα u)ρ |2 dx (10.24)
moreover we have
n
o
||u||C 2,µ (Ω) ≤ C ||f ||C 0,µ (Ω) + ||Dβ Dα u||L2 (Ω) .
The proof is the same as in 9.8.1 with the only exception that in the scond step instead of
z = u − 1/2(uxα ,xβ )x0 ,R · xα · xβ , one takes
1
z = u − (aijmn )−1 (f j )x0 ,R · xβ2 ,
2
then there appears a term
Z
BR (x0 )
10.4.5
(f − fx0 ,R )2 dx on the RHS of (10.19).
Existence and Uniqueness of Elliptic Systems
We finally show how the above a prior estimate implies existence. The idea is to use the
continuation method.
Consider
Lt := (1 − t)∆u + t · Lu = f in Ω
u = 0 on ∂Ω
where L is the given elliptic differential operator.
We assume that for Lt (t ∈ [0, 1]) and boundary values zero, we have uniqueness. Set
Gregory T. von Nessi
Σ := {t ∈ [0, 1]|Lt is uniquely solvable}.
Version::31102011
We look at a solution of
αβ
A (x)uxj α ,xβ = f i ∈ C 2,µ in Ω
u = g ∈ C 2,µ on ∂Ω ∈ C 2,µ
10.4 Elliptic Regularity II: Schauder Estimates
309
Since t = 0 ∈ Σ, Σ 6= ∅.
We shall show that Σ is open and closed and therefore Σ = [0, 1]. Especially we conclude
(for t = 1) that
Lu = f in Ω
u = 0 on ∂Ω
Version::31102011
has a unique solution.
From the regularity theorem, we know that if Aαβ ∈ C 0,µ and f ∈ C 0,µ then
o
n
||Dβ Dα u||C 0,µ (Ω) ≤ C ||f ||C 0,µ (Ω) + ||Dβ Dα u||L2 (Ω) .
Now the first step is to show the
i
Theorem 10.4.9 (): Suppose for the elliptic operator Lu = Aαβ
ij uxα ,xβ we have
has only the zero solution. Then if
Lu = 0 in Ω
u = 0 on ∂Ω
Lu = f in Ω
u = 0 on ∂Ω
we have
||u||H2,2 (Ω) ≤ C1 ||f ||C 0,µ (Ω) .
αβ(k)
Proof: Suppose that the theorem is not true, then there exists Aij
all k
αβ(k)
Aij
ξα ξβ ≥ ν|ξ|2 ;
, f (k) such that for
||A(k) ||C 0,µ (Ω) ≤ M
and ||f (k) ||C 0,µ (Ω) → 0 as k → ∞.
If u (k) is a solution with f (k) and ||u ( k)||H2,2 (Ω) = 1 then u (k) → u as k → ∞ and Lu = 0
on Ω and u = 0 on ∂Ω i.e. u = 0, hence a contradiction.
Remark: Uniqueness of second order equations follows from Hopf’s maximum principle.
Suppose u is a solution of
Aαβ uxα ,xβ = 0 on Ω
u = 0 on ∂Ω
If u has a maximum point in x0 ∈ Ω, then the Hessian {uxα ,xβ } ≤ 0 and v := u + |x − x0 |2
has still a maximum point in x0 (for sufficiently small). But then Aαβ vxα ,xβ = Aαβ uxα ,xβ + const. > 0 and that is a contradiction. One concludes that u = 0.
As 0 = t ∈ Σ, we have
||Dβ Dα u||L2 (Ω) ≤ C||f ||C 0,µ (Ω)
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
310
and therefore
||Dβ Dα u||C 0,µ (Ω) ≤ C||f ||C 0,µ (Ω)
by the a-priori estimate above.
We show that Σ is closed:
Let tk ∈ Σ and tk → t as k → ∞ with
Ltk u (k) = f in Ω
u (k) = 0 on ∂Ω
(k)
because ||uxx ||C 0,µ (Ω) ≤ C||f ||C 0,µ (Ω) , we have u (k) → u in C 2 (Ω) and
Lt u = f in Ω
u = 0 on ∂Ω
therefore t ∈ Σ.
Now we show that Σ is also open:
Let t0 ∈ Σ. Then there exists u such that
Lt0 u = f in Ω
u = 0 on ∂Ω
For w ∈ C 2,µ (Ω) there exists a unique uw such that
Lt0 uw
uw
= (Lt0 − Lt )w + f
= 0
in Ω
on ∂Ω
We conclude that
||uw ||C 2,µ (Ω) ≤ C|t − t0 | · ||w ||C 2,µ (Ω) + C · ||f ||C 2,µ (Ω)
and
||uw1 − uw2 ||C 2,µ (Ω) ≤ C|t − t0 | · ||w1 − w2 ||C 2,µ (Ω)
If we choose t − t0 | =: δ < 1/c, we see that the operator
T : C 0,µ (Ω) → C 0,µ (Ω)
w 7→ uw
is a contradiction and therefore has a fixpoint which just means that (t0 − δ, t0 + δ) ⊂ Σ, i.e
that Σ is open.
Gregory T. von Nessi
Version::31102011
10.5 Estimates for Higher Derivatives
10.5
311
Estimates for Higher Derivatives
u ∈ W 1,2 (Ω) is a weak solution of −∆u = 0.
interior
Version::31102011
interior estimates: B(x0 , R) ⊂⊂ Ω
boundary estimates: x0 ∈ ∂Ω, transformations =⇒ ∂Ω flat near x0
∂
=⇒ −∆uDs u = 0, vs := Ds u is a solution
∂s
−∆u = 0,
Caccioppoli with vs
Z
Z
2
B (x0 , R2 )
B (x0 , R2 )
|Dvs | dx
2
|DDs u| dx
≤
C
R2
≤
C
R2
Z
B(x0 ,R)
|vs |2 dx
B(x0 ,R)
|Ds u|2 dx
Z
=⇒ all 2nd derivatives are in L2 (Ω) =⇒ u ∈ W 2,2 (Ω)
To fully justify this, we replace derivatives with difference quotients.
Define uh = u(x + hes ), es = (0, . . . , 1, . . . , 0). Suppose Ω0 ⊂⊂ Ω, with h small enough so
that uh is defined in Ω0 . Take φ ∈ W01,2 (Ω0 ) and compute
Z
Ω0
Duh (x) · Dφ(x) dx
=
=
Z
0
ZΩ
Du(x + hes ) · Dφ(x) dx
hes +Ω0 ⊆Ω
=
Z
Ω
Du(x) · Dφ(x − hes ) dx
Du(x) · Dψ(x) dx,
where ψ(x) = φ(x − hes ) ∈ W01,2 (Ω). If u is a weak solution of −∆u = 0 in Ω, then uh is a
weak solution in Ω!
Define difference quotient ∆sh u =
Z
0
Ω
Z
u(x+hes )−u(x)
h
Duh (x) · Dφ(x) dx
= 0,
Du(x) · Dφ(x) dx
= 0
Ω
=⇒
Z
Ω
Duh (x) − Du(x)
Dφ(x) dx = 0
h
∀φ ∈ W01,2 (Ω)
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
312
=⇒ ∆sh u =
uh −u
h
is a weak solution of −∆u = 0 in Ω0
Suppose that B(x0 , 2R) ⊂ Ω0 , h < R,
Caccioppoli inequality for ∆hs u:
Z
B(
x0 , R2
)
|D∆sh u|2
dx
C
R2
≤
Z
B(x0 ,R)
|∆sh u|2 dx
(by Theorem 8.59) ≤
≤
By Theorem 8.56, Ds u exists, and we can pass to the limit to get
Z
Z
C
|DDs u| dx ≤ 2
R
B (x0 , R2 )
2
B(x0 ,2R)
|Ds u|2 dx
Taking s = 1, . . . , n, we get
Z
C
|Du| dx ≤ 2
R
R
B (x0 , 2 )
1,2
=⇒ Wloc
(Ω)
2
Z
B(x0 ,2R)
|Du|2 dx
1,2
Now you can “boot-strap”; apply this argument to v = Ds u and get v ∈ Wloc
(Ω) =⇒
3,2
k,2
∞
u ∈ Wloc (Ω) =⇒ · =⇒ u ∈ Wloc (Ω) ∀k =⇒ u ∈ C (Ω) by Sobolev embedding.
In general, the regularity of the coefficients determine how often you can differentiate your
equation.
In the boundary situation, this works in all tangential derivatives, i.e. DDs u ∈ L2 (Ω),
s = 1, . . . , n − 1 (after reorienting the axises).
xn
Rn−1
x0
The only derivative missing is Dnn u. But from the original PDE we have
equation:
Gregory T. von Nessi
Dnn u = −
n−1
X
i=1
Dii u ∈ L2 (Ω)
Version::31102011
Z
C
|Ds u|2 dx
R2 B(x0 ,R+h)
Z
C
|Ds u|2 dx
R2 B(x0 ,2R)
10.6 Eigenfunction Expansions for Elliptic Operators
10.6
313
Eigenfunction Expansions for Elliptic Operators
A : Rn → Rn (A is a n × n matrix)
A symmetric ⇐⇒ (Ax, y ) = (x, Ay ) ∀x, y ∈ Rn
=⇒ ∃ an orthonormal basis of eigenvectors, ∃λi ∈ R such that Avi = λi vi , where vi ∈
Rn \ {0}.
Unique expansion:
X
ν =
αi vi , αi = (v , vi )
Version::31102011
i
X
|ν|2 =
α2i
i
and
Av = A
X
αi v i
i
!
=
X
αi λ i v i
i
A diagonal in the eigenbasis.
n
Application: Solve
P the ODE ẏ = Au, i.e., y (t) ∈ R with y (0) = y0 . Explicit representation: y (t) = i αi (t)νi ,
X
X
X
=⇒
α̇i (t)νi =
αi (t)λi νi
α0i νi
i
=⇒
i
i
α̇i (t) = λi αi (t), i = 1, . . . , n
αi (0) = α0i
Natural Generalization: H is a Hilbert Space, K : H → H compact and symmetric, i.e.,
(Kx, y ) = (x, Ky ) ∀x, y ∈ H.
Theorem 10.6.1 (Eigenvector expansion): H is a Hilbert Space, K symmetric, compact,
and K 6= 0. Then there exists an orthonormal system of eigenvectors ei , i ∈ N ⊆ N and
corresponding eigenvalues λi ∈ R such that
X
Kx =
λi (x, ei )ei
i∈N
If N is infinite, then λk → 0 as k → ∞. Moreover,
H = N(k) ⊕ span{ei , i ∈ N}
Definition 10.6.2: The elliptic operator L is said to be symmetric if L = L∗ , i.e.
aij = aji , bi = ci
a.e. x ∈ Ω
Theorem 10.6.3 (Eigenfunction expansion for elliptic operators): Suppose L is symmetric
and strictly elliptic. There there exists an orthonormal system of eigenfunctions, ui ∈ L2 (Ω),
and eigenvalues, λi , with λi ≤ λi+1 , with λi → ∞ such that
Lui = λi ui in Ω
ui = 0 on Ω
L2 (Ω) = span{ei , i ∈ N}
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
314
Remark: Lui = λi ui ∈ L2 (Ω) =⇒ ui ∈ H 2 (Ω) by regularity theory if the coefficients are smooth,
then Lui = λi ui ∈ H 2 (Ω) =⇒ ui ∈ H 4 (Ω), . . ., i.e. ui ∈ C ∞ (Ω) if the coefficients
are smooth.
Proof: As before Lσ is given by Lσ = Lu + σu is invertible for σ large enough, L−1
σ is
compact as a mapping from L2 (Ω) → L2 (Ω). Moreover, L−1
u
=
0
⇐⇒
u
=
L
0
=
0,
i.e.
σ
σ
N(L−1
)
=
{0}.
σ
i∈N
Choose uj such that
L−1
σ uj =
X
i∈N
=⇒ Lσ uj =
1
µj uj
µi (uj , ui ) ui = µj uj , µj 6= 0
| {z }
δij
1
1
Lσ uj = uj ⇐⇒ Luj + σuj = uj ⇐⇒ Luj =
µj
µj
1
− σ uj
µj
| {z }
λj
Lσ uj =
1
1
1
uj ⇐⇒ 0 ≤ L(uj , uj ) = (uj , uj ) =
µj
µj
µj
=⇒ µj > 0
=⇒ µj has zero as its only possible accumulation point.
=⇒ λj → +∞ as j → ∞.
Proposition 10.6.4 (Weak lower semicontinuity of the norm): X is a Banach Space,
xj * x in X, then
||x|| ≤ lim inf ||xj ||
j→∞
Proof: Take y ∈ X ∗ , then
Hölder
|hy , xi| ← |hy , xj i| ≤ ||y ||X ∗ ||xj ||X
=⇒ |hy , xi| ≤ lim inf ||xj ||X ||y ||X ∗
j→∞
Hahn-Banach: For every x0 ∈ X, ∃y ∈ X ∗ such that |hy , x0 i| = ||x0 ||X and hy , zi ≤ ||z||X ,
∀z ∈ X (see Rudin 3.3 and corollary). This obviously implies that ||y ||X ∗ = 1. So, we pick
y ∗ ∈ X ∗ such that |hy ∗ , xi| = ||x||X . Thus,
∗
=⇒ ||x|| = hy , xi ≤ lim inf ||xj ||X ||y ∗ ||X ∗
| {z }
j→∞
=1
Theorem 10.6.5 (): L strictly elliptic, λi are eigenvalues. Then,
λ1 = min{L(u, u) : u ∈ H01 (Ω), ||u||L2 (Ω) = 1}
λk+1 = min{L(u, u) : u ∈ H01 (Ω), ||u||L2 (Ω) = 1, (u, ui ) = 0, i = 1, . . . , k}
Define µ := inf{L(u, u) : ||u||L2 (Ω) = 1}
Gregory T. von Nessi
Version::31102011
By Theorem 9.13, ∃ui ∈ L2 (Ω) such that L2 (Ω) = span{ui , i ∈ n}. Moreover,
X
L−1
µi (u, ui )ui
σ u =
10.6 Eigenfunction Expansions for Elliptic Operators
315
Proof:
1. inf = min. Choosing a minimizing sequence uj ∈ H01 (Ω), ||uj ||L2 (Ω) = 1 such that
Version::31102011
L(uj , uj ) → µ
We know from the proof of Fredholm’s alternative
Z
Z
λ
2
L(uj , uj ) ≥
|Duj | dx − C
|uj | dx
2 Ω
| Ω {z }
=1
Z
λ
=⇒
|Duj |2 dx ≤ L(uj , uj ) +C < ∞
2 Ω
| {z }
=⇒ uj ∈
W01,2 (Ω)
→µ
bounded
W 1,2 (Ω) reflexive =⇒ ∃ a subsequence ujk such that ujk * u in W 1,2 (Ω) (Banach(Banach-Alaoglu).
Since ujk * u in L2 (Ω) Sobolev→ ujk → u strongly in L2 (Ω)
(More precisely, ujk bounded in L2 (Ω) =⇒ ujk bounded in W 1,2 (Ω) =⇒ ujk has a subsequence converging strongly to some u ∗ in L2 (Ω) by Sobolev embedding. Clearly, u ∗
is also a weak limit of ujk . Thus, by the uniqueness of weak limits, we have u ∗ = u a.e.)
Dujk * Du in L2 (Ω) weakly and ||u||L2 (Ω) = 1 from strong convergence.
2. Show that L(u, u) ≤ µ (=⇒ L(u, u) = µ, u minimizer, inf = min). As before
Lσ (u, u) ≥ γ||u||2W 1,2 (Ω) for σ sufficiently large, i.e.
|||u|||2 = Lσ (u, u)
Lσ (u, u) defines a scalar product equivalent to the standard H 1 (Ω) scalar product,
||| · ||| defines a norm.
weak lower semicontinuity of the norm:
ujk * u in W 1,2 (Ω)
Lσ (u, u) = |||u|||2 ≤ lim inf |||ujk |||2
k→∞
=
lim inf(L(ujk , ujk ) +σ ||ujk ||2L2 (Ω) )
k→∞
| {z }
| {z }
= µ+σ
→µ
=1
=⇒ L(u, u) ≤ µ =⇒ u is the minimizer
3. µ is an eigenvalue, Lu = µu
Consider the variation:
ut =
u + tv
||u + tv ||L2 (Ω)
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
316
Then t 7→ L(ut , ut ) has a minimum at t = 0.
L(ut , ut ) =
1
(L(u, u) + tL(u, v ) + tL(v , u) +t 2 L(v , v ))
|
{z
}
||u + tv ||2L2 (Ω)
2tL(u,v ) by sym.
d
dt
t=0
L(ut , ut ) = 2L(u, v ) − 2
where we have used
1
d
R
dt Ω (u + tv )2 dx
=
Z
|
Ω
(u + 0)v dx L(u, u) = 0
| {z }
2
Ω
(u + tv ) dx
{z
−1
=1 @ t=0
= −1 ·
Z
=µ
2
}
(u + tv ) dx
Ω
{z
|
−2
=1 @ t=0
=⇒ L(u, v ) = µ(u, v )
∀v ∈ H01 (Ω)
⇐⇒ Lu = µu, i.e., µ is an eigenvalue.
}
·2 ·
Z
(u + tv )v dx
Ω
4. µ = λ1
Suppose µ > λ1 , ∃u1 ∈ H01 (Ω) with ||u1 ||L2 (Ω) = 1 such that
Lu1 = λ1 u1
⇐⇒ L(u1 , u1 ) = λ1 (u1 , u1 )
= λ1 < µ =
Thus, we have a contradiction.
inf
||u||L2 (Ω) =1
L(u, u)
Simple situation: aij smooth, bi , ci , d = 0.
Proposition 10.6.6 (): L strictly elliptic, then the smallest eigenvalue λ1 (also called the
principle eigenvalue) is simple, i.e., the dimension of the corresponding eigenspace is one.
Proof: Following the last theorem: variational characterization of λ1 ,
min L(u, u) = λ1 , L(u, u) = λ1 ⇐⇒ Lu1 = λ1 u1
Key Step: Suppose that u solves
Lu = λ1 u in Ω
u = 0 on ∂Ω
Then either u > 0 or u < 0 in Ω.
Suppose that ||u||L2 (Ω) = 1 and define u + := max(u, 0), and u − := − min(u, 0), i.e.,
u = u + − u − . Then u + , u − ∈ W01,2 (Ω), and
Du a.e. on {u > 0}
+
Du
=
0 else
−Du a.e. on {u < 0}
Du − =
0 else
Gregory T. von Nessi
Version::31102011
1
R
2
Ω (u + tv ) dx
Z
10.7 Applications to Parabolic Equations of 2nd Order
Thus, we see that L(u + , u − ) = 0 since
R
Ωu
+u−
dx = 0,
=⇒ λ1 = L(u, u) = L(u + − u − , u + − u − )
317
R
Ω Du
+
· Du − dx = 0, . . ..
= L(u + , u + ) − L(u + , u − ) − L + L(u − , u − )
= L(u + , u + ) + L(u − , u − )
≥ λ1 ||u + ||2L2 (Ω) + λ1 ||u − ||2L2 (Ω) = λ1 ||u||2L2 (Ω) = λ1
=⇒ so we have equality throughout,
Version::31102011
L(u + , u + ) = λ1 ||u + ||2L2 (Ω) , L(u − , u − ) = λ1 ||u − ||2L2 (Ω)
=⇒ Lu ± = λ1 u ± , ,u ± ∈ W01,2 (Ω) since every minimizer is a solution of the eigen-problem.
Coefficients being smooth =⇒ u ± smooth, i.e. u ± ∈ C ∞ (Ω). In particular, Lu + = λ1 u + ≥ 0
(λ1 ≥ 0).
=⇒ u + is a supersolution of the elliptic equation.
The strong maximum principle says that either u + ≡ 0 or u + > 0 in Ω. Analogously,
either u − ≡ 0, or u − < 0 in Ω.
=⇒ either u = u + or u = u − , i.e. either u > 0 or u < 0 in Ω.
Suppose now that u and ũ are solutions of Lu = λ1 u in Ω. u, ũ ∈ W01,2 (Ω) =⇒ u and
ũ have a sign, and we can choose γ such that
Z
(u + γ ũ) dx = 0
Ω
Since the original equation is linear, u +γ ũ is also a solution =⇒ u +γ ũ = 0 (otherwise u +γ ũ
would have a sign) =⇒ u = −γ ũ =⇒ all solutions are multiples of u =⇒ dim(Eig(λ1 )) = 1.
Other argument: If dim(Eig(λ1 )) > 1, ∃ u1 , u2 in the eigenspace such that
Z
u1 u2 dx = 0
Ω
which is a contradiction.
10.7
Applications to Parabolic Equations of 2nd Order
General assumption, Ω ⊂ Rn open and bounded, ΩT = Ω × (0, T ] solve the initial/boundary
value problem for u = u(x, t)

 ut + Lu = f on ΩT
u = 0 on ∂Ω × [0, T ]

u = g on Ω × {t = 0}
L strictly elliptic differential operator, Lu = −Di (aij Dj u)+bi Di u+cu with aij , bi , c ∈ L∞ (Ω).
Definition 10.7.1: The operator ∂t + L is uniformly parabolic, if aij ξi ξj ≥ λ|ξ|2 ∀ξ ∈ Rn ,
∀(x, t) ∈ ΩT , with λ > 0.
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
318
Standard example: Heat equation, L = −∆, ut − ∆u = f .
Suppose that f , g ∈ L2 (Ω), and define
Z
L(u, v ; t) =
aij Dj uDi v + bi Di u · v + uv dx
Ω
∀v ∈
H01 (Ω),
t ∈ [0, T ].
New point of view: parabolic PDE ↔ ODE in Hilbert Space.
U : [0, T ] → H01 (Ω)
u(x, t)
analogously:
f (x, t)
x ∈ Ω, t ∈ (0, T ]
F : [0, T ] → L2 (Ω)
x ∈ Ω, t ∈ (0, T ]
(F (t))(x) = f (x, t)
SupposeRwe have a smooth solution of the PDE, multiply by v ∈ W01,2 (Ω) and integrate
parts in Ω dx
Z
Z
0
U (t) · v dx + L(U, v ; t) =
F (t) · v dx
Ω
Ω
What is ut ? PDE ut + Lu = f says that
cu
ut = f − Lu = |{z}
f +Di (aij Dj u) − bi Di u − |{z}
| {z } | {z }
L2 (Ω)
L2 (Ω)
L2 (Ω)
L2 (Ω)
=⇒ ut = g0 + Di gi ∈ H −1 (Ω),
where g0 = f − bi Di u − cu and gi = aij Dj u.
This suggests that
for y (t) = αi (t)vi .
R
0 (t)
· v dx has to be interpreted as
hU 0 (t), v i = duality in H −1 (Ω), H01 (Ω)
ΩU
Lu = −Di (aij Dj u) + bi Di u + cu
equation:

 ut + Lu = f in Ω × (0, T ]
u = 0 on ∂Ω × [0, t]

u = g on Ω × {t = 0}
P
parabolic if L elliptic:
aij ξi ξj ≥ λ|ξ|2 , λ > 0.
ut = f − Lu = f + Di (aij Dj u) − bi Di u − cu ∈ H −1 (Ω)
(ut , v ) duality between H −1 and H01
parabolic PDE ↔ ODE with BS-valued functions
U(t)(x) = u(x, t)
∀x ∈ Ω, t ∈ [0, T ]
Z
L(u, v ; t) =
(aij Dj u · Di v + bi Di u + cuv ) dx
Ω
Gregory T. von Nessi
Version::31102011
(U(t))(x) = u(x, t)
10.7 Applications to Parabolic Equations of 2nd Order
10.7.1
319
Outline of Evan’s Approach
Definition 10.7.2: u ∈ L2 (0, T ; H01 (Ω)) with u 0 ∈ L2 (0, T ; H −1 ) is a weak solution of the
IBVP if
i.) hU 0 , v i + L(U, v ; t) = (F, v )L2 (Ω) ∀v ∈ H01 (Ω), a.e. t ∈ (0, T )
ii.) U(0) = g in Ω
Remark: u ∈ L2 (0, T ; H01 (Ω)), u 0 ∈ L2 (0, T ; H −1 ) then C 0 (0, T ; L2 ) and thus U(0) is
well defined.
Version::31102011
General framework: Calculus in abstract spaces.
Definition 10.7.3:
with
1) Lp (0, T ; X) is the space of all measurable functions u : [0, 1] → X
||u||Lp (0,T ;X) =
Z
T
||u(t)||pX
0
p1
< ∞, 1 ≤ p < ∞
and
||u||L∞ (0,T ;X) = ess sup ||u(t)||X < ∞
0≤t≤T
2) C 0 ([0, T ]; X) is the space of all continuous functions u : [0, T ] → X with
||u||C 0 ([0,T ];X) = max ||u(t)||X < ∞
0≤t≤T
3) u ∈ L1 (0, T ; X), then v ∈ L1 (0, T ; X) is the weak derivative of u
Z T
Z T
φ0 (t)u(t) dt = −
φ(t)v (t) dt
∀φ ∈ Cc∞ (0, T )
0
0
4) The Sobolev space W 1,p (0, T ; X) is the space of all u such that u, u 0 ∈ Lp (0, T ; X)
||u||W 1,p (0,T ;X) =
Z
0
T
||u(t)||pX
+ ||u
0
(t)||pX
dt
p1
for 1 ≤ p < ∞ and
||u||w 1,∞ (0,T ;X) = ess sup (||u(t)||X + ||u 0 (t)||X )
0≤t≤T
Theorem 10.7.4 (): u ∈ W 1,p (0, T ; X), 1 ≤ p ≤ ∞. Then
i.) u ∈ C 0 ([0, T ]; X) i.e. there exists a continuous representative.
ii.)
u(t) = u(s) +
Z
x
t
u 0 (τ ) dτ
0≤s≤t≤T
iii.)
max ||u(t))|| = C · ||u||W 1,p (0,T ;X)
0≤t≤T
X = H01 , X = H −1
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
320
10.7.2
Evan’s Approach to Existence
Galerkin method: Solve your equation in a finite dimensional space, typically the span of the
first k eigenfunctions of the elliptic operator; solution uk .
Key: energy (a priori) estimates of the form
max ||uk (t)||L2 (Ω) + ||uk ||L2 (0,T ;H1 ) + ||uk0 ||L2 (0,T ;H−1 )
0
≤ C ||f ||L2 (0,T ;L2 ) + ||g||L2 (Ω)
0≤t≤T
with C independent of k
Galerkin method: More formal construction of solutions. ẏ = Ay , y (0) = y0 . A symmetric
and positive definite =⇒ exists basis of eigenvectors ui with eigenvalues λi
y (t) =
express
n
X
αi (t)ui , y0 =
i=1
=⇒
n
X
α̇i (t)ui =
i=1
n
X
n
X
α0i ui
i=1
· uj
λi αi (t)ui
i=1
=⇒ α̇i (t) = λi αi (t), αi (0) = α0i , αi (t) = α0i e λ1 t
n
X
y (t) =
α0i e λi t
i=1
Same idea for A = ∆ = −L, model for elliptic operators L.
Want to solve:

 ut − ∆u = 0 in Ω × (0, T )
u = 0 on ∂Ω × (0, T )

u(x, 0) = u0 in Ω
Let ui be the orthonormal basis of eigenfunctions
−∆ui = λi ui
and try to find
u(x, t) =
∞
X
αi (t)ui (t)
i=1
ut = ∆u :
∞
X
i=1
α̇i (t)ui (x) =
∞
X
(−λi )αi (t)ui (x)
i=1
=⇒ α̇i (t) = −λi αi (t), αi (0) = α0i ,
Gregory T. von Nessi
∞
X
i=1
· uj ,
Z
α0i ui (x) = u0 (x)
dx
Version::31102011
Crucial: (weak) compactness bounded sequences in L2 (0, T ; H01 ) and L2 (0, T ; H −1 ) contain weakly convergent subsequences: ukl * u, uk0 l * v
Then prove that v = u 0 , and that u is a weak solution of the PDE.
10.7 Applications to Parabolic Equations of 2nd Order
321
Formal solution:
u(x, t) =
∞
X
e −λi t α0i ui (x)
Version::31102011
i=1
Justification via weak solutions:
Multiply by test function, integrate by parts, choose φ ∈ Cc∞ ([0, T ) × Ω)
Z T
Z
Z TZ
−
u · φt dxdt −
u0 (x)φ(x, 0) dx = −
Du · Dφ dxdt
0
Ω
0
Ω
Z TZ
=
u · ∆φ dxdt
0
Ω
Expand the initial data
u0 (x) =
∞
X
α0i ui (x)
i=1
and define
(k)
uj (x) =
k
X
α0i uu (x)
i=1
(k)
Then u0
→ u0 in L2 .
Then
u (k) (x) =
k
X
α0i e −λi t ui (x)
is a weak solution
i=1
(k)
of the IBVP with u0 replaced by u0 . Galerkin method: approximate by u (k) ∈span{u1 , . . . , uk }
Moreover:
||u (k) (t) − u(t)||L2 (Ω) =
≤
∞
X
k+1
it
α0i e| −λ
{z } ui
∞
X
≤1
!1
2
(α0i )2
i=k+1
=⇒ uk (t) → u(t) uniformly in t as k → ∞
Z TZ
Z
Z
(k)
−
u (k) φt dxdt −
u0 φ(x, 0) dx =
0
Ω
L2 (Ω)
Ω
0
T
→ 0 as k → ∞
Z
u (k) ∆φ dxdt
Ω
=⇒ u is a weak solution (as k → ∞)
Representation:
u(x, t) =
∞
X
i=1
α0i e −λi t ui (x)
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
322
gives good estimates.
∞
X
||u(t)||L2 (Ω) =
α0i e −λi t ui (x)
i=1
L2 (Ω)
∞
X
2
(α0i )2 e −λi t
=
i=1
≤ e −λ1 t
2
!1
2
(α0i )2
i=1
||u0 ||L2 (Ω)
=⇒ ||u(t)||L2 (Ω) ≤ ||u0 ||L2 (Ω) e −λ1 t
||∆u(t)||L2 (Ω) =
=
∞
X
(α0i e −λi t (−λi )ui )
i=1
L2 (Ω)

1
2
∞
X
1
0
−λi t 2 
(αi t · λi e
)
| {z }
t
i=1
≤
C
t
x
∞
X
(α0i )2
i=1
!1
2
≤
(xe −x ≤ C, x ≥ 0)
C
||u0 ||L2 (Ω)
t
Theorem 10.7.5 (): L symmetric and strictly elliptic, then the IBVP

 ut + Lu = 0 in Ω × (0, T )
u = 0 on ∂Ω × (0, T )

u = u0 on Ω × {t = 0}
has for all u0 ∈ L2 (Ω) the solution
u(x, t) =
∞
X
e −λi t α0i ui (x)
i=1
where {ui } is a basis of orthonormal eigenfunctions of L in L2 (Ω) with corresponding eigenvalue λ1 ≤ λ2 ≤ · · ·
10.8
Neumann Boundary Conditions
Neumann ↔ no flux
F (x, t) = heat flux = −κ(x)Du(x, t) Fourier’s law of cooling
Du · ν = 0 ⇐⇒ F (x, t) · ν = 0 no heat flux out of Ω
Simple model problem:
− div (a(x)Du) + c(x)u = f in Ω
(a(x)Du) · ν = 0 on ∂Ω
Gregory T. von Nessi
Version::31102011
= e
−λ1 t
∞
X
!1
10.8 Neumann Boundary Conditions
323
a and c are smooth, f ∈ L2 (f ∈ H −1 )
Weak formulation of the problem: multiply by v ∈ C ∞ (Ω) ∩ H 1 (Ω) and integrate in Ω,
Z
Z
[− div (a(x)Du) + c(x)u] v dx =
f v dx
(10.25)
Ω Z
Ω
Z
=
(a(x)Du) ·νv dx +
a(x)Du · Dv + c(x)uv dx
{z }
Ω|
Ω
=0
Z
Z
=⇒
[a(x)Du · Dv + c(x)uv ] dx =
f v dx
∀v ∈ H 1
(10.26)
Version::31102011
Ω
Ω
Definition 10.8.1: A function u ∈ H ( Ω) is said to be a weak solution of the Neumann
problem if (10.26) holds.
Remark: (a(x)Du) · ν = 0 on ∂Ω
Subtle issue: a(x)Du seems to be only an L2 function and is not immediately clear how to
define Du on ∂Ω. However: −div(a(x)Du) = −c(x) + f ∈ L2 . Turns out that a(x)Du ∈
L2 (di v )
Proposition 10.8.2 (): Suppose that u is a weak solution and suppose that u ∈ C 2 (Ω) ∩
C 1 (Ω). Then u is a solution of the Neumann problem.
Proof: Use (10.26) first for V ∈ Cc∞ (Ω). Reverse the integration by parts (10.25) to get
Z
[− div (a(x)Du) + c(x)u − f ]v dx = 0
∀v ∈ Cc∞ (Ω)
Ω
=⇒ −div(a(x)Du) + cu = f in Ω, i.e. the PDE Holds.
Now take arbitrary v ∈ C ∞ (Ω)
Z
Z
[a(x)Du · Dv + c(x)v ) dx =
f v dx
ΩZ
Ω
Z
=
[a(x)Du · ν)v dS + [− div (a(x)Du) + c(x)u]v dx
∂Ω
|Ω
{z
}
R
=⇒
Z
∂Ω
Ω
[a(x)Du · ν)v dS = 0
=⇒ a(x)Du · ν = 0 on ∂Ω
f v dx
∀v ∈ C ∞ (Ω)
Remark: The Neumann BC is a “natural” BC in the sense that it follows from the weak
formulation. It’s not enforced as e.g. the Dirichlet condition u = 0 on ∂Ω by seeking
u ∈ H01 (Ω).
Theorem 10.8.3 (): Suppose a, c ∈ C(Ω) and that there exists a1 , a2 , c1 , c2 > 0 such that
c1 ≤ c(x) ≤ c2 , a1 ≤ a(x) ≤ a2 (i.e., the equation is elliptic). Then there exists for all
f ∈ L2 (Ω) a unique weak solution u of
− div (a(x)Du) + cu = f in Ω
(a(x)Du) · ν = 0 on ∂Ω
and ||u||H1 (Ω) ≤ C||f ||L2 (Ω) .
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
324
Remark: The same assertion holds with f a linear function on H 1 (Ω), i.e. f ∈ H −1 .
Proof: Weak solution meant
Z
Ω
[a(x)Du · Dv + c(x)uv ] dx =
R
Ωf
f v dx
Ω
∀v ∈ H 1 (Ω)
v dx.
Z
B(u, v ) =
Ω
[a(x)Du · Dv + c(x)uv ] dx
our Hilbert space is clearly H 1 (Ω). The assertion of the theorem follows from Lax-Milgram
if:
Z
[a(x)Du · Dv + c(x)uv ] dx
Z
≤ a2
|Du| · |Dv | dx + c2
|u| · |v | dx
B(u, v ) =
ΩZ
Ω
Ω
≤ a2 ||Du||L2 (Ω) ||Dv ||L2 (Ω) + c2 ||u||L2 (Ω) ||v ||L2 (Ω)
≤ C||u||H1 (Ω) ||v ||H1 (Ω)
=⇒ B is bounded.
B(u, u) =
Z
(a(x)|Du|2 + c(x)|u|2 ) dx
Ω
≥ min(a1 , c1 )||u||2h1 (Ω)
i.e., B is coercive.
Gregory T. von Nessi
Version::31102011
linear functional F (v ) =
Bilinear form:
Z
10.8 Neumann Boundary Conditions
325
Remarks:
1) We need c1 > 0 to get the L2 -norm in the estimate, there is no Poincare-inequality in
H1
Z
Z
Poincare:
|u|2 dx ≤ Cp
|Du|2 dx
Ω
Ω
Version::31102011
and this fails automatically if the space contains non-zero constants.
2) This theorem does not allow us to solve
−∆u = f in Ω
Du · ν = 0 on ∂Ω
Z
=
f dx
Ω
I.P.
=
So, we have the constraint that
10.8.1
R
Ωf
Z
−∆u dx
ΩZ
−
∂Ω
Du · ν dx = 0
dx = 0.
Fredholm’s Alternative and Weak Solutions of the Neumann
Problem
−∆u + u + λu = f in Ω
Du · ν = 0 on ∂Ω
(i.e., a(x) = 1, c(x) = 1, −∆u + u = f has a unique solution with Neumann BCs)
Operator equation:
(−∆ + I + λ · I)u = f
⇐⇒
⇐⇒
· (−∆ + I)−1
[I + (−∆ + I)−1 λ · I]u = (−∆ + I)−1 (λ · I)
(I − T )u = f in H 1 (Ω), T = −(−∆ + I)−1 (λ · I)
Fredholm applies to compact perturbations of I. We want to show T = (−∆ + I)−1 (λ · I) is
compact.
(−∆ + I)−1 : H −1 → H 1
λ · I : H 1 → H −1 , I = I1 I0
−→ L2 (Ω) for ∂Ω smooth
where I0 : H 1 (Ω) ⊂⊂
I1 : L2 (Ω) ⊂−→ H −1 (Ω)
=⇒ I compact =⇒ T compact.
Fredholm alternative: Either unique solution for all g or non-trivial solution of the homogeneous equation, (I − T )u = 0
⇐⇒
[I + (−∆ + I)−1 λ · I]u = 0
⇐⇒
[−∆ + (λ + 1)I]u = 0
⇐⇒
[(−∆ + I) + λ · I]u = 0
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
326
Special case λ = −1 has non-trivial solutions, e.g. u ≡ constant.
Solvability criterion: We can solve if we in N ⊥ ((I − T )∗ ) = N ⊥ (I − T ) since T = T ∗ .
Non-trivial solutions for λ = −1 satisfy
−∆u = 0 in Ω (in the weak sense)
Du · ν = 0 on ∂Ω
⇐⇒
Z
Du · Dv dx = 0
∀v ∈ H 1 (Ω)
Z
u=v :
|Du|2 dx = 0
Ω
=⇒ u ≡constant (if Ω is connected)
=⇒ Null-space of adjoint operator
R is {u ≡ constant}
=⇒ perpendicular to this means Ω f dx = 0.
There is only uniqueness only up to constants.
Remark: Eigenfunctions of −∆ with Neumann BC, i.e. solutions of
−∆u = λu, Du · ν = 0
then λ = 0 is the smallest eigenvalue with a 1D eigenspace, the space of constant functions.
10.8.2
Fredholm Alternative Review
1
0
H 1 ⊂−→
L2 ⊂−→
H −1
I2 = I0 I1
I
∆u = f in Ω
Du · ν = 0 on ∂Ω
L = −∆u + u, −∆u + u + λu = f
L(u, v ) =
Z
Ω
∗
I
(Du · Dv + v u) dx
L∗ : hL∗ u, v i = L (u, v ) = L(v , u) = hLu, v i
Here L = L∗ , (Lu = (Di aij Dj u) L∗ has coefficients aij )
Operator equation:
u + L−1 (λI2 u) = L−1 F in H 1
As for Dirichlet conditions, it’s convenient to write the equation for L2 :
I1 u + I1 L−1 (λI2 )u = I1 L−1 F
⇐⇒
ũ + (I1 L−1 λI0 ) ũ = F̃
| {z }
(ũ = I1 u, F̃ = I1 L−1 F )
cmpct. since I1 is cmpct.
This last equation is the same equation as for the Dirichlet problem, but L−1 has Neumann
conditions.
Application of Fredholm Alternative:
Gregory T. von Nessi
Version::31102011
Ω
10.9 Semigroup Theory
327
i.) Solution for all right hand sides.
ii.) Non-trivial solutions of the homogeneous equations.
iii.) If ii.) holds, solutions exists if F̃ is perpendicular to the kernel of the adjoint.
What happens in ii.)? Check: if ũ is a solves ũ + (I1 L−1 λI0 )ũ = 0 then u ∈ L−1 (I0 ũ) ∈ H 1
solves u + L−1 (λI2 )u = 0 in H 1 ⇐⇒ −∆u + u + λu = 0.
Version::31102011
Now consider T to be our compact operator, defined by T = −I1 L−1 λI0 .
Solvability of the original problem
(1) Check that T ∗ = −λI1 (L∗ )−1 I0 = −λI1 L−1 I0
(2) F̃ has to be orthogonal to N((I − T )∗ ) = N(I − T ) ũ non-trivial solution of ũ +
(I1 L−1 λI0 )ũ = 0 =⇒ u = L−1 I0 ũ solves u + (L−1 λI2 )u = 0 ⇐⇒ u = constant (in
our special case λ = −1) ⇐⇒ ũ = constant.
(3) Solvability: then if F̃ = I1 L−1 F ⊥ {constants} (this again is for the special case of
λ = −1).
⇐⇒
If F (v ) =
R
Ωf
⇐⇒
(F̃ , v )L2 = 0
∀v ∈ {constant}
· · · ⇐⇒ hF, v i = 0
∀v ∈ {constant}
v dx, if we solve −∆u = f in L2 , then
Z
f v dx = 0
∀v ∈ {constant
ZΩ
=⇒
f dx = 0
Ω
So we conclude that
∆u = f in Ω
Du · ν = 0 on ∂Ω
has a solutions
⇐⇒
10.9
Z
f dx = 0
Ω
Semigroup Theory
See [Hen81, Paz83] for additional information on this subject.
Idea: PDE ↔ ODE in a BS ẏ = ay which we know to have a solution of y (t) = y0 e at .
Now consider ẏ = Ay , where A is a n × n matrix which is also symmetric
y (t) = e At y0 ,
e At =
∞
X
(At)n
n=0
n!
= Qe tΛ QT
if A = QΛQT , Λ diagonal
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
328
Goal: Solve ut = Au (= ∆u) in a similar way by saying
u(t) = S(t)u0
(S(t) = e At )
where S(t) : X → X is a bounded, linear operator over a suitable Banach space, with
• S(0)u = u, ∀u ∈ X
• S(t + s)u = S(t)S(s)u
• t 7→ S(t)u continuous ∀u ∈ X, t ∈ [0, ∞)
(1) S(0)u = u, ∀u ∈ X
(2) S(t + s)u = S(t)S(s)u = S(s)S(t)u, ∀s, t > 0, u ∈ X
(3) t 7→ S(t)u continuous as a mapping [0, ∞) → X, ∀u ∈ X
ii.) {S(t)}t≥0 is called a contraction semigroup if ||S(t)||X ≤ 1, ∀t ∈ [0, ∞)
Remark: It becomes clear that S −1 may not exist if one consider the reverse heat equation.
Solutions with smooth initial data will become very bad in finite time.
Definition 10.9.2: {S(t)}t≥0 semigroup, then define
S(t)u − u
exists in X ,
D(A) = u ∈ X : lim
t
t&0+
where
Au = lim
t&0+
S(t)u − u
t
u ∈ D(A).
A : D(A) → X is called the infinitesimal generator of {S(t)}t≥0 , D is called its domain.
General assumption: {S(t)}t≥0 is a contraction semigroup
where we recall that
e tA = Qe tΛ QT ,
Λ diagonal
 tλ

1
e
0


..
e tΛ = 
with all λi ≤ 0,

.
tλ
n
0
e
e tA =
∞ n
X
t
n=0
n!
An ,
Theorem 10.9.3 (): Let u ∈ D(A)
i.) S(t)u ∈ D(A), t ≥ 0
ii.) A · S(t)u = S(t)Au
iii.) t 7→ S(t)u is differentiable for t > 0
Gregory T. von Nessi
d tA
e = Ae tA
dt
Version::31102011
Definition 10.9.1:
i.) A is a family of bounded and linear maps {S(t)}t≥0 with S(t) :
X → X, X a BS, is called a semigroup (meaning that S has a group structure but no
inverses)
10.9 Semigroup Theory
iv.)
d
dt S(t)u
329
= A · S(t)u = S(t)Au where the last equality is due to ii.).
Version::31102011
Proof:
v ∈ D(A) ⇐⇒ lims&0+ S(s)u−u
exists
s
S(s)S(t)u−S(t)u
+
S(t)u ∈ D(A) ⇐⇒ lims&0
exists
s
S(t)S(s)u − S(t)u
= lim
(by (2) in the def. of semigroup)
s
s&0+
S(s)u − u
= S(t) · lim
+
s
s&0
= S(t)Au (by def. of A)
=⇒ the limit exists, i.e. S(t) ∈ D(A), and thus
lim
s&0+
S(t)S(s)u − S(t)u
= A · S(t)u = S(t)Au
s
=⇒ i.) and ii.) are proved.
iii.) indicates that backward and forward difference quotients exists. So take u ∈ D(A),
h > 0, t > 0
S(t)u − S(t − h)u
− S(t)Au
lim
h
h&0+
S(t − h)S(h)u − S(t − h)u
− S(t)Au
= lim
h
h&0+
S(h)u − u
= lim S(t − h)
− S(t)Au
h
h&0+
"
#
S(h)u − u
= lim
S(t − h)
− Au + (S(t − h)Au − S(t)Au)
| {z }
|
{z
}
h
h&0+
||S(t−h)||X ≤1
|
=⇒ limit exists and
|
||·||X ≤
{z
S(h)u−u
−Au
u
{z
→0 since u∈D(A)
lim
h&0+
X
}
}
|
=(S(t−h)−S(t))Au
{z
}
→0 by contin. of t 7→ S(t)u
S(t)u − S(t − h)u
= S(t)Au
h
Analogous argument for forwards quotient; the limit is the same and
d
S(t)u = S(t)Au
dt
= A · S(t)u
Remark: S(t) being continuous =⇒
tiable.
d
dt S(t)
is continuous. So, S(t) is continuously differenGregory T. von Nessi
Chapter 10: Linear Elliptic PDE
330
Theorem 10.9.4 (Properties of infinitesimal generators):
Z
1 t
lim
G(s)u ds = u.
t&0+ t 0
i.) For each u ∈ X,
ii.) D(A) is dense in X
iii.) A is a closed operator, i.e. if uk ∈ D(A) with uk → u and Auk → v , then u ∈ D(A)
and v = Au
Proof:
ii.): Take u ∈ X and define
ut =
Z
t
S(s)u ds,
0
then
ut
1
=
t
t
Z
t
0
S(s)u ds → S(0)u = u as t & 0+ .
ii.) follows if u t ∈ D(A), i.e.
S(r )u t − u t
r
(semigroup prop.)
=
=
=
=
=
taking the limit of both sides
lim
r &0+
Z t
Z t
1
S(r )S(s)u ds −
S(s)u ds
r
0
0
Z
Z
1 t
1 t
S(r + s)u ds −
S(s)u ds
r 0
r 0
Z
Z
1 t+r
1 t
S(s)u ds −
S(s)u ds
r r
r 0
Z
Z
1 t+r
1 t
S(s)u ds ±
S(s)u ds
r t
r r
Z
Z
1 r
1 t
−
S(s)u ds ∓
S(s)u ds
r 0
r r
Z
Z
1 t+r
1 r
S(s)u ds −
S(s)u ds
r t
r 0
S(r )u t − u t
= S(t)u − u
r
by continuity of S(t)
=⇒ u t ∈ D(A), Au t = S(t)u − u.
iii.): (proof of A being closed).
Take uk ∈ D(A) with uk → u and Auk → v in X. We need to show that u ∈ D(A)
and v = Au. So consider,
Z t
d
S(t)uk − uk =
S(t 0 )uk dt 0
0
0 dt
Z t
=
S(s) Auk ds.
|{z}
0
Gregory T. von Nessi
→v
Version::31102011
i.): Take u ∈ X. By the continuity of G(·)u we have limt&0+ ||G(t)u−u||X = limt&0+ ||G(t)u−
Z
1 t
u ds.
G(0)u||X = 0. Conclusion i.) follows since u =
t 0
10.9 Semigroup Theory
331
Taking k → ∞, we have
S(t)u − u =
Z
t
S(s)v ds
0
S(t)u − u
1
=⇒ lim
= lim
+
+
t
t&0
t&0 t
=⇒ Au = v with u ∈ D(A).
Z
t
S(s)v ds = v
0
Version::31102011
Definition 10.9.5:
i.) We say λ ∈ ρ(A), the resolvent set of A, if the operator λI − A :
D(A) → X is one-to-one and onto.
ii.) If λ ∈ ρ(a), then the resolvent operator Rλ : X → X is defined by Rλ = (λI − A)−1
Remark: The closed graph theorem says that if A : X → Y is a closed and linear operator,
then A is bounded. It turns out that, Rλ is closed, thus Rλ is therefore bounded.
Theorem 10.9.6 (Properties of the resolvent):
i.) If λ, µ ∈ ρ(A), then
Rλ − Rµ = (µ − λ)Rλ Rµ
(resolvent identity)
which immediately implies
Rλ Rµ = Rµ Rλ
ii.) If λ > 0, then λ ∈ ρ(A) and
Rλ u =
Z
∞
e −λt S(t)u dt
0
and hence
||Rλ ||X =
1
λ
(from estimating the integral)
Proof:
i.):
Rλ = Rλ (µI − A)Rµ
= Rλ ((µ − λ)I + (λI − A))Rµ
= (µ − λ)Rλ Rµ + Rλ (λI − A) Rµ
{z
}
|
I
=⇒ Rλ − Rµ = (µ − λ)Rλ Rµ
=⇒ Rλ Rµ = Rµ Rλ
ii.): S(t) contraction semigroup, ||S(t)||X ≤ 1. Define
R̃λ =
Z
0
∞
e −λt S(t) dt
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
332
which clearly exists. Now fix h > 0, u ∈ X and compute
S(h)R̃λ u − R̃λ u
h
=
=
=
=
→λ
→−u
R̃λ u
Collecting terms we have
lim
h&0+
S(h)R̃λ u − R̃λ u
= −u + λR̃λ u
h
=⇒ R̃λ u ∈ D(A), and
⇐⇒
AR̃λ u = −u + λR̃λ u
u = (λI − A)R̃λ u
If u ∈ D(A), then
Z
(exer.
∀x
(10.27)
∞
AR̃λ u = A
e −λt S(t) dt
Z ∞0
in Evans)
=
e −λt A · S(t)u dt
0
Z ∞
=
e −λt S(t) |{z}
Au dt
0
= R̃λ (Au)
⇐⇒ R̃λ (λI − A)u = λR̃λ u − R̃λ Au = λR̃λ u − AR̃λ u
= (λI − A)R̃λ u
= u
(by (10.27))
Now we need to prove the following.
Claim: λI − A is one-to-one and onto
• one-to-one: Assume (λI − A)u = (λI − A)v , u 6= v
(λI − A)(u − v ) = 0
=⇒ R̃λ (λI − A)(u − v ) = 0,
u, v ∈ D(A) =⇒ u − v ∈ D(A)
=⇒ R̃λ (λI − A)(u − v ) = u − v 6= 0 a contradiction.
• onto: (λI − A)R̃λ u = u. R̃λ u = w solves (λI − A)w = u ⇐⇒ (λI − A) :
D(A) → X is onto.
Gregory T. von Nessi
Version::31102011
Z
1 ∞ −λt
[e
S(t + h)u − S(t)u] dt
h 0
Z ∞
Z
1
1 ∞ −λt
−λ(t−h)
e
S(t)u dt −
e
S(t)u dt
h h
h 0
Z
Z
1 h −λ(t−h)
1 ∞ −λ(t−h)
−
e
S(t)u dt +
(e
− e −λt )S(t)u dt
h 0
h 0
Z h
Z
e λh − 1 ∞ −λt
λh 1
−λt
−e
e
S(t)u dt +
e
S(t)u dt .
h 0
h
{z
} | {z } | 0
{z
}
|
10.9 Semigroup Theory
333
So we have (λI − A) is one-to-one and onto if λ ∈ ρ(A) and λ > 0. We also have
R̃λ = (λI − A)−1 = Rλ .
Theorem 10.9.7 (Hille, Yoshida): If A closed and densely defined, then A is infinitesimal
generator of a contraction semigroup ⇐⇒ (0, ∞) ⊂ ρ(A) and ||Rλ ||X ≤ λ1 , λ > 0.
Proof:
=⇒) This is immediately obvious from the last part of Theorem 10.6.
Version::31102011
⇐=) We must build a contraction semigroup with A as its generator. For this fix λ > 0 and
devine
Aλ := −λI + λ2 Rλ = λARλ .
(10.28)
The operator Aλ is a kind of regularized approximation to A.
Step 1. Claim:
Aλ u → Au
as λ → ∞ (u ∈ D(A)).
Indeed, since λRλ u − u = ARλ u = Rλ Au,
||λRλ u − u|| ≤ ||Rλ || · ||Au|| ≤
(10.29)
1
||Au|| → 0.
λ
Thus λRλ u → u as λ → ∞ if u ∈ D(A). But since ||λRλ || ≤ 1 and D(A) is dense,
we deduce then as well
λRλ u → u
as λ → ∞, ∀u ∈ X.
(10.30)
Now if u ∈ D(A), then
Aλ u = λARλ u = λRλ Au.
Step 2. Next, define
2 tR
λ
Sλ (t) := e tAλ = e −λt e λ
= e −λt
∞
X
(λ2 t)k
k=0
Observe that since ||Rλ || ≤ λ−1 ,
||Sλ (t)|| ≤ e
−λt
∞
X
λ2k t k
k=0
k!
k
||Rλ || ≤ e
−λt
k!
Rλk .
∞
X
λk t k
k=0
k!
= 1.
Consequently {Sλ (t)}t≥0 is a contrction semigroup, and it is easy to check its generator
is Aλ , with D(Aλ ) = X.
Step 3. Let λ, µ > 0. Since Rλ Rµ = Rµ Rλ , we see Aλ Aµ = Aµ Aλ , and so
Aµ Sλ (t) = Sλ (t)Aµ
Tus if u ∈ D(A), we can compute
Sλ (t)u − Sµ (t)u =
=
Z
t
0
Z
0
for each t > 0.
d
[Sµ (t − s)Sλ (s)u] ds
ds
t
Sµ (t − s)Sλ (s)(Aλ u − Aµ u) ds,
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
334
d
Sλ (t)u = Aλ Sλ (t)u = Sλ (t)Aλ u. Consequently, ||Sλ (t)u − Sµ (t)u|| ≤
because dt
t||Aλ u − Aµ u|| → 0 as λ, µ → ∞. Hence
S(t)u := lim Sλ (t)u
λ→∞
exists for each t ≥ 0, u ∈ D(A).
(10.31)
As ||Sλ (t)|| ≤ 1, the limit (10.31) in fact exists for all u ∈ X, uniformly for t in compact subsets of [0, ∞). It is now straightforward to verify {S(t)}t≥0 is a contraction
semigroup on X.
0
In addition
||Sλ (s)Aλ u − S(s)Au|| ≤ ||Sλ (s)|| · ||Aλ u − Au|| + ||(Sλ (s) − S(s))Au|| → 0
as λ → ∞, if u ∈ D(A). Passing therefore to limits in (10.32), we deduce
Z t
S(t)u − u =
S(s)Au ds
0
if u ∈ D(A). Thus D(A) ⊆ D(B) and
Bu = lim
t→0+
S(t)u − u
= Au
t
(u ∈ D(A)).
Now if λ > 0, λ ∈ ρ(A) ∩ ρ(B). Also (λI − B)(D(A)) = (λI − A)(D(A)) = X,
according to the assumptions of the theorem. Hence (λI − B) D(A) is one-to-one and
onto; whence D(A) = D(B). Therefore A = B, and so A is indeed the generator of
{S(t)}t≥0 .
Recall: In a contraction semigroup, we have ||S(t)||X ≤ 1
Take A to be a n × n matrix, A = QAQT
||S(t)||X = ||e At ||X = ||Qe Λt QT ||X ≤ 1
Since
e Λt

e λ1 t
 ..
= .
0
···
..
.
···
0
..
.
e λn t
we need all λi ≤ 0. In general, we can only expect that
||S(t)||X ≤ Ce λt ,


,
λ > 0.
Remark: {S(t)}t≥0 is called an ω-contraction semigroup if
||S(t)||X ≤ e ωt ,
t≥0
variant of the Hille-Yoshida Theorem:
If A closed and densely defined, then A is a generator of an ω-contraction semigroup ⇐⇒
1
(ω, ∞) ⊂ ρ(A), ||Rλ ||X ≤ λ−ω
and λ > ω
Gregory T. von Nessi
Version::31102011
Step 4. It remains to show A is the generator of {S(t)}t≥0 . Write B to denote this generator.
Now
Z t
Sλ (t)u − u =
Sλ (s)Aλ u ds.
(10.32)
10.10 Applications to 2nd Order Parabolic PDE
10.10
335
Applications to 2nd Order Parabolic PDE

 ut + Lu = 0 in ΩT
u = 0 on ∂Ω × [0, T ]

u = g on Ω × {t = 0}
L strongly elliptic in divergence form, smooth coefficients independent of t.
Version::31102011
Idea: Consider this as an ODE, and get U(t) = S(t)g, where S(t) is the semigroup generated by −L.
Choose: X = L2 (Ω), A = −L
D(A) = H 2 (Ω) ∩ H01 (Ω)
A is unbounded, but we have the estimate
||u||2H1 (Ω) ≤ L(u, u) + γ||u||2L2 (Ω) ,
0
γ≥0
Theorem 10.10.1 (): A generates a γ-contraction semigroup on X.
Proof: Check the assumptions of the Hille-Yoshida Theorem:
• D(A) dense in X: This is clearly true since we can approximate f ∈ L2 (Ω) even with
fk ∈ Cc∞ (Ω).
• A is a closed operator: This means that if we have uk ∈ D(A) with uk → u and
Auk → f , then u ∈ D(A) and Au = f .
To show this, set
Auk = fk ,
Au − Au = fl − fk
| l {z k}
−Lul +Luk
Elliptic regularity: (Auk = fk is an elliptic equation)
||uk − ul ||H2 (Ω) ≤ C ||fk − fl ||L2 (Ω) + ||uk − ul ||L2 (Ω)
R
(HW: Show this for −∆u + u = f ; L(u, u) = Ω |Du|2 + |u|2 dx = ||u||2H1 (Ω) )
||uk − ul ||H2 (Ω) ≤ C(||Auk − Aul ||L2 (Ω) + ||uk − ul ||L2 (Ω) )
|
{z
} |
{z
}
→0
→0
=⇒ uk is Cauchy in L2 (Ω)
=⇒ u ∈ H 2 (Ω) ∩ H01 (Ω), u ∈ D(A)
=⇒ since u ∈ H 2 (Ω), we have Auk → Au. We also know that Auk → f (assumed),
thus Au = f in L2 (Ω)
• Now we need to show (γ, ∞) ⊂ ρ(A), ||Rλ || ≤
1
λ−γ ,
λ>0
Use Fredholm’s Alternative:
Lu + λu = f in Ω
u = 0 on ∂Ω
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
336
has a unique solution for all f ∈ L2 (Ω) (by Lax-Milgram), and the solution u ∈
H 2 (Ω) ∩ H01 (Ω)
=⇒ −Au + λu = f
⇐⇒ (λI − A)u = f ∈ L2 (Ω), remembering u ∈ H 2 (Ω) ∩ H01 (Ω), we see that in
particular (Iλ − A)−1 exists and
(λI − A)−1 : L2 (Ω) = X → D(A)
=⇒ λ ∈ ρ(A) and Rλ : (λI − A)−1 exists.
What Remains: Resolvent estimate. Consider the weak form
: L(u, u) + λ||u||2L2 (Ω) = (f , u)L2 (Ω) ≤ ||f ||L2 (Ω) ||u||L2 (Ω) .
Remembering the estimate β||u||2H1 (Ω) − γ||u||2L2 (Ω) ≤ L(u, u), we see that
0
β||u||H1 (Ω)2 + (λ − γ)||u||2L2 (Ω) ≤ ||f ||L2 (Ω) ||u||L2 (Ω)
0
=⇒
=⇒
⇐⇒
(λ − γ)||u||2L2 (Ω) ≤ ||f ||L2 (Ω) ||u||L2 (Ω)
(λ − γ)||u||L2 (Ω) ≤ ||f ||L2 (Ω)
||Rλ f ||L2 (Ω)
1
1
≤
=⇒ ||Rλ || ≤
||f ||L2 (Ω)
λ−γ
λ−γ
This proves the resolvent estimate.
Now by the Hille-Yoshida Theorem, we have that A defines a γ-contraction semigroup.
Definition 10.10.2: If a function u ∈ C 1 ([0, T ]; X) ∩ C 0 ([0, T ]; D(A)) solves u̇ = Au + f (t),
u(0) = u0 , then u is called a classical solution of the equation.
Theorem 10.10.3 (): Suppose that u is a classical solution, then u is represented by
Z t
u(t) = S(t)u0 +
S(t − s)f (s) ds.
(10.33)
0
(this is the formula given by the variation of constant method.
R t Specifically S(t)u0 corresponds
to the homogeneous equation with initial data u0 , and 0 S(t − s)f (s) ds corresponds to
the inhomogeneous equation with 0 initial data).
Proof: Differentiate.
Theorem 10.10.4 (): Suppose you have u0 ∈ D(A), f ∈ C 0 ([0, T ]; X) and that in addition
f ∈ W 1,1 ([0, T ]; X) or f ∈ L2 ([0, T ]; D(A)), then (10.33) gives a classical solution of the
abstract ODE u̇ = Au + f , u(0) = u0 .
Remark: Parabolic equation:

 ut + Lu = f (t) in Ω × [0, T ]
u = 0 on ∂Ω × [0, T ]

u = g in Ω × {t = 0}
In particular, if f ∈ L2 (Ω) and independent of t, and if u0 ∈ H 2 (Ω) ∩ H01 (Ω) = D(A), then
there exists a classical solution of the parabolic equation.
Proofs of the last two theorems can be found in [RR04].
Gregory T. von Nessi
Version::31102011
u=v
L(u, v ) + λ(u, v )L2 (Ω) = (f , v )L2 (Ω)
10.11 Exercises
10.11
337
Exercises
9.1: [Qualifying exam 08/00] Let Ω ⊂ Rn be open and suppose that the operator L, formally
defined by
Lu = −
n
X
aij uxi xj
i,j=1
Version::31102011
has smooth coefficients aij : Omega → R. Suppose that L is uniformly elliptic, i.e., for some
c > 0,
n
X
i,j=1
aij ξi ξj ≥ c|ξ|2
∀ξ ∈ Rn
uniformly in Ω. Show that if λ > 0 is sufficiently large, then whenever Lu = 0, we have
L(|Du|2 + λ|u|2 ) ≤ 0.
9.2: [Qualifying exam 01/02] Let B be the unit ball in R2 . Prove that there is a constant C
such that
||u||L2 (B) ≤ C(||Du||L2 (B) + ||u||L2 (B) )
∀u ∈ C 1 (B).
9.3: [Qualifying exam 01/02] Let Ω be the unit ball in R2 .
a) Find the weak formulation of the boundary value problem
−∆u = f
in Ω,
u ± ∂n u = 0 on ∂Ω.
b) Decide for which choice of the sign this problem has a unique weak solution u ∈ H 1 (Ω)
for all f ∈ L2 (Ω). For this purpose you may use without proof the estimate
||u||L2 (Ω) ≤ γ ||Du||L2 (Ω) + ||u||L2 (∂Ω)
∀u ∈ H 1 (Ω).
c) For the sign that does not allow a unique weak solution, prove non-uniqueness by
finding two smooth solutions of the boundary value problem with f = 0.
9.4: [John, Chapter 6, Problem 1] Let n = 3 and Ω = B(0, π) be the ball centered at zero
with radius π. Let f ∈ L2 (Ω). Show that a solution u of the boundary value problem
∆u + u = f
in Ω
u = 0 on ∂Ω,
can only exists if
Z
Ω
f (x)
sin |x|
dx = 0.
|x|
9.5: Let f ∈ L2 (Rn ) and let u ∈ W 1,2 (Rn ) be a weak solution of
−∆u + u = f
in Rn ,
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
338
that is,
Z
Rn
(Du · Dφ + uφ) dx =
Z
f φ dx
Rn
∀φ ∈ W 1,2 (Rn ).
Show that u ∈ W 2,2 (Rn ).
Hint: It suffices to prove that Dk u ∈ W 1,2 (Rn ). In order to show this, check first that
uh = u(· + hek ) is a weak solution of the equation
−∆uh + uh = fh
in Rn ,
9.6: [Qualifying exam 08/00] Let Ω be the unit ball in R2 . Suppose that u, v : Ω → R
are smooth and satisfy
∆u = ∆v in Ω,
u = 0 on ∂Ω.
a) Prove that there is a constant C independent of u and v such that
||u||H1 (Ω) ≤ C||v ||H1 (Ω) .
b) Show there is no constant C independent of u and v such that
||u||L2 (Ω) ≤ C||v ||L2 (Ω) .
Hint: Consider sequences satisfying un − vn = 1.
9.7: [Qualifying exam 08/02] Let Ω be a bounded and open set in Rn with smooth boundary
Γ . Let q(x), f (x) ∈ L2 (Ω) and suppose that
Z
q(x) dx = 0.
Ω
Let u(x, t) be the solution of the initial-boundary value problem

 ut − ∆u = q x ∈ Ω, t > 0,
u(x, 0) = f (x) x ∈ Ω,

Du · ν = 0 on Γ.
Show that ||u(·, t) − v ||W 1,2 (Ω) → 0 exponentially as t → ∞, where v is the solution of the
boundary value problem
−∆v = q x ∈ Ω,
Dv · ν = 0 onΓ,
that satisfies
Z
Ω
v (x) dx =
Z
f (x) dx.
Ω
9.8: [Qualifying exam 01/97] Let Ω ⊂ Rm be a bounded domain with smooth boundary, and
let u(x, t) be the solution of the initial-boundary value problem

 utt − c 2 ∆u = q(x, t) for x ∈ Ω, t > 0,
u = 0 for x ∈ ∂Ω, t > 0

u(x, 0) = ut (x, 0) = 0 for x ∈ Ω.
Gregory T. von Nessi
Version::31102011
with fh = f (·+hek ). Then derive an equation for (uh −u)/h and use uh −u as a test function.
10.11 Exercises
339
Let {φn (x)}∞
n=1 be a complete orthonormal set of eigenfunctions for the
√Laplace operator ∆
in Ω with Dirichlet boundary conditions, so −∆φn = λn φn . Set ωn = c λn . Suppose
q(x, t) =
∞
X
qn (t)φn (x),
n=1
P
where each qn (t) is continuous,
qn2 (t) < ∞ and |qn (t)| ≤ Ce −nt . Show that there are
P
sequences {an } and {bn } such that (an2 + bn2 ) < ∞ and if
∞
X
v (x, t) =
[an cos(ωn t) + bn sin(ωn t)]φn (x),
Version::31102011
n=1
then
||u(·, t) − v (·, t)||L2 (Ω) → 0
at an exponential rate as t → ∞.
Let Ω ⊂ Rn be a bounded open set and aij ∈ C(Ω) for
9.9: [Qualifying exam
P 08/99]
i, j = 1, . . . , n, with i,j aij (x)ξi ξj ≥ |ξ|2 for all ξ ∈ Rn , x ∈ Ω. Suppose u ∈ H 1 (Ω) ∩ C(Ω)
is a weak solution of
X
−
∂j (aij ∂i u) = 0 in Ω.
i,j
Set w = u 2 . Show that w ∈ H 1 (Ω), and show that w is a weak subsolution, i.e., prove that
Z X
B(w , v ) :=
aij ∂j w ∂i v dx ≤ 0
Ω i,j
for all v ∈ H01 (Ω) such that v ≥ 0.
9.10: [Qualifying exam 01/00] Let Ω = B(0, 1) be the unit ball in Rn . Using iteration
and the maximum principle (and other result with proper justification), prove that there
exists some solution to the boundary value problem
−∆u = arctan(u + 1) in Ω,
u = 0 on ∂Ω.
9.11: [Qualifying exam 01/97] Suppose f
compact support. Consider the initial value
ut − ∆u = u 2 for
u(x, 0) = f (x) for
: Rn → R is continuous, nonnegative and has
problem
x ∈ Rn , 0 < t < T,
x ∈ Rn .
Using Duhamel’s principle, reformulate the initial value problem as a fixed point problem for
an integral equation. Prove that if T > 0 is sufficiently small, the integral equation has a
nonnegative solution in C(Rn × [0, T ]).
9.12: Let X = L2 (R) and define for t > 0 the operator S(t0 : X → X by
(S(t)u)(x) = u(x + t).
Gregory T. von Nessi
Chapter 10: Linear Elliptic PDE
340
Show that S(t) is a semigroup. Find D(A) and A, the infinitesimal generator of the semigroup.
Hint: You may use without proof that for f ∈ Lp (R)
||f (· + h) − f (·)||Lp (R) → 0
as |h| → 0
for p ∈ [1, ∞).
Version::31102011
Gregory T. von Nessi
Version::31102011
Chapter 11: Distributions and Fourier Transforms
342
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
11.1 Distributions
348
11.2 Products and Convolutions of Distributions; Fundamental Solutions
11.3 Fourier Transform of S(Rn )
357
11.4 Definition of Fourier Transform on L2 (Rn )
11.5 Applications of FTs: Solutions of PDEs
11.6 Fourier Transform of Tempered Distributions
11.7 Exercises
11.1
354
361
364
365
366
Distributions
[RR04]
11.1.1
Distributions on C ∞ Functions
Definition 11.1.1: Suppose Ω non-empty, open set in Rn . f is a test function if f ∈ Cc∞ (Ω),
i.e., f is infinitely many time differentiable, supp(f ) ⊂ K ⊂⊂ Ω, K compact. The set of all
test functions f is denoted by D(Ω).
Definition 11.1.2: {φn }n∈N , φ ∈ D(Ω). Then φn → φ in D(Ω) if
i.) There exists a compact set K ⊂⊂ Ω, such that supp(φn ) ⊆ K ∀n, supp(φ) ⊂ K.
ii.) φn and arbitrary derivatives of φn converge uniformly to φ and the corresponding derivatives of φ.
Gregory T. von Nessi
Version::31102011
11 Distributions and Fourier Transforms
11.1 Distributions
343
Definition 11.1.3: A distribution or generalized function is a linear mapping φ 7→ (f , φ)
from D(Ω) into R which is continuous in the following sense: If φn → φ in D(Ω) then
(f , φn ) → (f , φ). The set of all distributions is denoted D0 (Ω).
Example: If f ∈ C(Ω), then f gives a distribution Tf defined by
Z
(Tf , φ) =
f (x)φ(x) dx
Ω
Version::31102011
Not all distributions can be identified with functions. If Ω = D(0, 1), we can define δ0 by
(δ0 , φ) = φ(0).
Lemma 11.1.4 (): f ∈ D0 (Ω), K ⊂⊂ Ω. Then there exists a nK ∈ N and a constant CK
with the following property:
X
|(f , φ)| ≤ CK
max |Dα φ(x)|
|α|≤nK
x∈K
for all test functions φ with supp(φ) ⊆ K.
Remarks:
1) The examples above satisfy this with n = 0.
2) If nK can be chosen independent of K, then the smallest n with this property is called
the order of the distribution.
3) There are distributions with n = ∞
Proof: Suppose the contrary. Then there exists a compact set K ⊂⊂ Ω and a sequence of
functions ψn with supp(ψn ) ⊆ K and
X
|(f , ψn )| ≥ n
max |Dα ψn (x)| > 0
|α|≤n
Define φn =
ψn
(f ,ψn ) .
x∈K
Then φn → 0 in D(Ω)
• supp(φn ) ⊆ K ⊂⊂ Ω.
• |α| = k, n ≥ k. Then
|Dα φn (x)| =
≤
|Dα ψn (y )|
Dα ψn (y )
≤ P
(f , ψn )
n |β|≤n maxx∈K |Dα ψn (x)|
1
n
=⇒ Dα φn → 0 uniformly as n → ∞.
However,
(f , φn ) =
(f , ψn )
contradiction.
(f , ψn )
Example: We can define a distribution f by
(f , φ) = −φ0 (0)
k
(δ0 )0
Gregory T. von Nessi
Chapter 11: Distributions and Fourier Transforms
344
|(f , φ)| ≤ supx∈K |φ0 (x)|, f is a distribution of order one.
Localization: G ⊂ Ω, G open, then D(G) ⊆ D(Ω). f ∈ D0 (Ω) defines a distribution in
D0 (G) simply by restriction. f ∈ D0 (Ω) vanishes on the open set G if (f , φ) = 0 for all
φ ∈ D(G). With this in mind, we say that f = g on G if f − g vanishes on G. If f ∈ D0 (Ω),
then one can show that there exists a largest open set G ⊂ Ω, such that f vanishes on G
(this can be the empty set). The complement of this set in Ω is called the support of f .
Example: Ω = B(0, 1), f = δ0 , supp(δ0 ) = {0}.
Example: Ω = R
fn (x) =
(fn , φ) =
Z
n 0≤x ≤
0 else
fn (x) dx
R
= n
1
n
Z
1
n
φ(x) dx =
0
1
1
n
Z
1
n
φ(x) dx = φ(0) = (δ0 , φ)
0
=⇒ fn → δ0 in D0 (Ω)
Analogously: Standard mollifiers, φ
1
0
−1
0
1
R
supp(φ) ⊆ [−1, 1], R φ(x) dx = 1
φ = −1 φ(−1 x) then φ → δ0 in D0 (Ω) as → 0.
Theorem 11.1.6 (): fn ∈ D0 (Ω) such that (fn , φ) converges for all test function φ, then
there ∃f ∈ D0 (Ω) such that fn → f .
Gregory T. von Nessi
Version::31102011
Definition 11.1.5: A sequence fn ∈ D0 (Ω) converges to f ∈ D0 (Ω) if (fn , φ) → (f , φ) for all
φ ∈ D(Ω).
11.1 Distributions
345
Proof: (See Renardy and Rogers) Define f by
(f , φ) = lim (fn , φ).
n→∞
Then f is clearly linear, and it remains to check the required continuity property of f .
Definition 11.1.7:
• S(Rn ) is the space of all C ∞ (Rn ) functions such that
|x|k |Dα φ(x)|
Version::31102011
is bounded for every k ∈ N, for every multi-index α.
• We say that φn → φ in S(Rn ) if the derivatives of φn of all orders converge uniformly
to the derivatives of φ, and if the constants Ck,α in the estimate
|x|k |Dα φn (x)| ≤ Ck,α
are independent of n.
• S(Rn ) = space of rapidly decreasing function or the Schwarz space.
Example: φ(x) = e −|x|
11.1.2
2
Tempered Distributions
Definition 11.1.8:
• A tempered distribution is a linear mapping φ 7→ (f , φ) from S(Rn )
into R which is continuous in the sense that (f , φn ) → (f , φ) if φn → φ in S(Rn ).
• The set of all tempered distributions is denoted by S 0 (Rn )
11.1.3
Distributional Derivatives
Definition 11.1.9: Let f ∈ D0 (Ω), then the derivative of f with respect to xi is defined by
∂φ
∂f
,φ = − f,
(11.1)
∂xi
∂xi
Remarks:
1) If f ∈ C(Ω), then the distributional derivative is the classical derivative, and (11.1) is
just the formula for integration by parts.
2) Analogously, definition for higher order derivatives: α ∈ Rn is a multi-index
(Dα f , φ) = (−1)|α| (f , Dα φ)
3) Taking derivatives of distributions is continuous in the following sense
if fn → f in D0 (Ω), then
∂fn
∂f
→
in D0 (Ω)
∂xi
∂xi
Gregory T. von Nessi
Chapter 11: Distributions and Fourier Transforms
346
4) We can define a “translated” distribution f (x + hei ) by
(f (x + hei ), φ) = (f , φ(x − hei )).
Then, we see that
∂f
f (x + hei ) − f (x)
= lim
∂xi
h
h→0
is a limit of difference quotients.
Example: The Heaviside function
    H(x) = 1 for x > 0, 0 for x ≤ 0,
considered in D′(R). This function is not weakly differentiable, but it has a derivative dH/dx in the sense of distributions:
    (dH/dx, φ) = −(H, dφ/dx) = −∫_R H(x) φ′(x) dx = −∫_0^∞ φ′(x) dx = −φ |_0^∞ = φ(0) = (δ0, φ),
i.e., dH/dx = δ0.
Recall:
• D(Ω) = test functions; φn → φ in D(Ω) means that supp(φn) ⊂ K ⊂⊂ Ω and Dα φn → Dα φ uniformly for every α.
• D′(Ω) = distributions on Ω = continuous linear functionals on D(Ω): (f, φn) → (f, φ) if φn → φ in D(Ω).
• S(Rn) = rapidly decreasing test functions; φn → φ in S(Rn) means that Dα φn → Dα φ uniformly for all α, with |x|^k |Dα φn| ≤ C_{k,α} independent of n.
• S′(Rn) = tempered distributions = continuous linear functionals on S(Rn).
• For f ∈ D′(Ω): (Dα f, φ) = (−1)^{|α|} (f, Dα φ).
Example: Ω ⊂ Rn open and bounded with smooth boundary, f ∈ C¹(Ω̄). Then f (extended by zero outside Ω) defines a distribution, and for a test function φ on Rn,
    (∂f/∂x_j, φ) = −(f, ∂φ/∂x_j) = −∫_{Rn} f(x) ∂φ/∂x_j(x) dx = −∫_Ω f(x) ∂φ/∂x_j(x) dx
                 = −∫_{∂Ω} f(x) φ(x) ν_j dS + ∫_Ω ∂f/∂x_j(x) φ(x) dx.
So the distributional derivative has two terms: a volume term and a boundary term.
Proposition 11.1.10: Let Ω be open, bounded and connected. If u ∈ D′(Ω) is such that ∇u = 0 (in the sense of distributions), then u ≡ constant.

Proof: We do this only for n = 1; a suitable induction gives the general case.
Let Ω = I = (a, b). By assumption u′ = 0, i.e., 0 = (u′, φ) = −(u, φ′) for all φ ∈ D(I)
⟹ (u, ψ) = 0 whenever ψ = φ′ for some φ ∈ D(I).
But ψ = φ′ for some φ ∈ D(I) ⟺ ∫_I ψ(s) ds = 0 and ψ ∈ C_c^∞(I).
Choose any φ0 ∈ D(I) with ∫_a^b φ0(s) ds = 1 and write, for φ ∈ D(I),
    φ(x) = [ φ(x) − φ̄·φ0(x) ] + φ̄·φ0(x), where φ̄ = ∫_a^b φ(s) ds.
The term in brackets integrates to zero:
    ∫_a^b [ φ(x) − φ̄·φ0(x) ] dx = φ̄ − φ̄ ∫_a^b φ0(x) dx = 0,
so φ(x) = ψ′(x) + φ̄·φ0(x) for some ψ ∈ D(I).
⟹ (u, φ) = (u, ψ′) + (u, φ̄·φ0) = φ̄·(u, φ0) = (u, φ0) ∫_a^b φ(x) dx
⟹ u = (u, φ0) = constant.
11.2 Products and Convolutions of Distributions; Fundamental Solutions

Difficulty: There is no good definition of f·g for general f, g ∈ D′(Ω). For example, if f, g ∈ L¹(Ω) but f·g ∉ L¹_loc(Ω), then f·g does not define an element of D′(Ω).

However, if f ∈ D′(Rp) and g ∈ D′(Rq), then we can define f·g ∈ D′(R^{p+q}) in the following way:

Definition 11.2.1: If f ∈ D′(Rp), g ∈ D′(Rq), then we define f(x)g(y) ∈ D′(R^{p+q}) by
    (f(x)g(y), φ(x, y)) = (f(x), (g(y), φ(x, y))),
where in the inner pairing x is considered as a parameter.
Check:
• (g(y ), φ(x, y )) is a test function.
• Continuity: (f (x)g(y ), φn (x, y )) → (f (x)g(y ), φ(x, y )).
• Commutative operation: f (x)g(y ) = g(y )f (x). To check this, use that test functions
of the form φ(x, y ) = φ1 (x)φ2 (y ) are dense
Example:
    (δ0(x)δ0(y), φ(x, y)) = (δ0(x), (δ0(y), φ(x, y))) = (δ0(x), φ(x, 0)) = φ(0, 0),
i.e., δ0(x)δ0(y) = δ0(x, y).
11.2.1 Convolutions of Distributions

Suppose f, g ∈ S(Rn) (or D(Rn)). Then
    (f ∗ g, φ) = ∫_{Rn} (f ∗ g)(x) φ(x) dx
               = ∫_{Rn} ∫_{Rn} f(x − y) g(y) φ(x) dy dx
               = ∫_{Rn} ( ∫_{Rn} f(x − y) φ(x) dx ) g(y) dy
               = ∫_{Rn} ∫_{Rn} f(x) g(y) φ(x + y) dx dy
               = (f(x)g(y), φ(x + y)).
Suggestion: If f, g ∈ S′(Rn), then define
    (f ∗ g, φ) = (f(x)g(y), φ(x + y)).
Difficulty: φ(x + y) does not have compact support in R^{2n} even if φ does; its support is a strip around the hyperplane x + y = 0 (picture omitted).
Sufficient condition: We can define f ∗ g this way when f(x)g(y) has compact support, e.g., when f or g has compact support. We then modify φ(x + y) outside that support to make it compactly supported/rapidly decreasing.
Example:
1)
    (δ0 ∗ f, φ) = (δ0(x) f(y), φ(x + y))
                = (f(y) δ0(x), φ(x + y))   (commutativity)
                = (f(y), (δ0(x), φ(x + y)))
                = (f(y), φ(y)) = (f, φ)
    ⟹ δ0 ∗ f = f.
2) f ∈ D′(Rn), ψ ∈ D(Rn):
    (f ∗ ψ, φ) = (f(x)ψ(y), φ(x + y))
               = (f(x), (ψ(y), φ(x + y)))
               = ( f(x), ∫_{Rn} ψ(y) φ(x + y) dy )
               = ( f(x), ∫_{Rn} ψ(y − x) φ(y) dy )
               = ∫_{Rn} (f(x), ψ(y − x)) φ(y) dy   (justified by Riemann sums).
Therefore, we see that f ∗ ψ can be identified with the function y ↦ (f(x), ψ(y − x)), which belongs to C∞(Rn) (this smoothness needs to be checked).
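For instance (a quick check of this formula, not from the notes): taking f = (δ0)′ gives
    ((δ0)′ ∗ ψ)(y) = ((δ0)′(x), ψ(y − x)) = −[ d/dx ψ(y − x) ]|_{x=0} = ψ′(y),
so convolution with (δ0)′ acts as differentiation, consistent with the rule for derivatives of convolutions below.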
Remark: Suppose that fn → f in D′(Rn) and that either there exists a compact set K ⊂ Rn with supp(fn) ⊆ K for all n, or that g ∈ D′(Rn) has compact support. Then
    fn ∗ g → f ∗ g in D′(Rn).
Theorem 11.2.2: D(Rn) is dense in D′(Rn).

Example: δ0 is approximated by φε, where φε(x) = ε^{−n} φ(ε^{−1}x), φ ≥ 0 and ∫_{Rn} φ(x) dx = 1.

Idea of Proof:
(1) Distributions with compact support are dense in D′(Rn). To show this, choose test functions φn ∈ D(Rn) with φn ≡ 1 on B_n(0), and define φn·f ∈ D′(Rn) by (φn·f, φ) = (f, φn·φ); then φn·f → f in D′(Rn) and φn·f has compact support.
(2) Distributions with compact support are limits of test functions. To show this, take a standard mollifier φε(x) = ε^{−n} φ(ε^{−1}x), φ ≥ 0, ∫_{Rn} φ(x) dx = 1. Then φε ∗ f is a test function and, by the remark above,
    φε ∗ f → δ0 ∗ f = f in D′(Rn).
11.2.2 Derivatives of Convolutions

    (Dα(f ∗ g), φ) = (−1)^{|α|} (f ∗ g, Dα φ)
                   = (−1)^{|α|} (f(x)g(y), (Dα φ)(x + y))
                   = (−1)^{|α|} (g(y), (f(x), (Dα φ)(x + y)))
                   = (g(y), (Dα f(x), φ(x + y)))
                   = (Dα f(x) g(y), φ(x + y))
                   = (Dα f ∗ g, φ) = (f ∗ Dα g, φ).
11.2.3 Distributions as Fundamental Solutions

Let L(D) be a differential operator with constant coefficients, for example
    L(D) = Σ_{j=1}^n a_j D_j²   (e.g. the Laplacian ∆),
    L(D) = b·D_t − Σ_{j=1}^n a_j D_j²   (e.g. the heat operator ∂_t − ∆).

Definition 11.2.3: Suppose that L(D) is a differential operator with constant coefficients. A fundamental solution for L is a distribution G ∈ D′(Rn) satisfying L(D)G = δ0.

Remark: Why is this a good definition? Because
    L(D)(G ∗ f) = (L(D)G) ∗ f = δ0 ∗ f = f.
The first equality uses the result obtained above for differentiating convolutions. So if G ∗ f is defined (e.g. if f has compact support), then u = G ∗ f is a solution of L(D)u = f.
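As a sanity check of this remark (a numerical sketch of my own, not from the notes), in one dimension G(x) = −|x|/2 is a fundamental solution of L(D) = −d²/dx², so u = G ∗ f should solve −u″ = f for a smooth, rapidly decaying f; NumPy and the particular choice of f are assumptions for illustration only:

```python
import numpy as np

# 1D sanity check: G(x) = -|x|/2 satisfies -G'' = delta_0, so u = G * f should solve -u'' = f.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

f = 2.0 * (1.0 - 2.0 * x ** 2) * np.exp(-x ** 2)   # f = -(e^{-x^2})'' (rapidly decaying)
G = -0.5 * np.abs(x)

u = np.convolve(G, f, mode="same") * dx            # discrete approximation of (G * f)(x)

lap = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2  # -u'' by centered differences
mid = slice(1000, 3000)                            # restrict the check to |x| < 5
print("max |(-u'') - f| :", np.max(np.abs(lap[mid] - f[1:-1][mid])))
```

The printed residual is essentially zero away from the grid boundary, illustrating L(D)(G ∗ f) = f.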
Example: f(x) = 1/|x| in R³.
Check: f is, up to a constant, a fundamental solution for the Laplace operator.
    (∆f, φ) = (f, ∆φ) = ∫_{R³} f(x) ∆φ(x) dx = lim_{ε→0} ∫_{R³\Bε(0)} (1/|x|) ∆φ(x) dx
            = lim_{ε→0} [ −∫_{R³\Bε(0)} D(1/|x|)·Dφ(x) dx − ∫_{∂Bε(0)} (1/r) Dφ(x)·ν dS ]
            = lim_{ε→0} [ ∫_{R³\Bε(0)} ∆(1/|x|) φ(x) dx + ∫_{∂Bε(0)} D(1/|x|)·ν φ(x) dS − ∫_{∂Bε(0)} (1/r) Dφ(x)·ν dS ].
Here ∆(1/|x|) = 0 away from the origin, the last boundary term is of order Cε → 0, and the middle term is denoted by I. From PDE I we know
    I = −4π ⨍_{∂Bε(0)} φ(x) dS → −4π φ(0) = −4π (δ0, φ).
This gives us the correct notion of what “∆G = δ0” really means: ∆(1/|x|) = −4π δ0, i.e., G = −1/(4π|x|) is a fundamental solution of the Laplacian.
11.3 Fourier Transform of S(Rn)

Definition 11.3.1: If f ∈ S(Rn), then we define
    Ff(ξ) = f̂(ξ) = (2π)^{−n/2} ∫_{Rn} e^{−ix·ξ} f(x) dx,
    F^{−1}f(x) = f̃(x) = (2π)^{−n/2} ∫_{Rn} e^{ix·ξ} f(ξ) dξ.
F is the Fourier transform; F^{−1} is the inverse Fourier transform.

Two important formulas:
1. Dα f̂(ξ) = F[(−ix)^α f](ξ)
2. (iξ)^β f̂(ξ) = F[D^β f](ξ)
Proof:
1.
    Dα_ξ f̂(ξ) = Dα_ξ [ (2π)^{−n/2} ∫_{Rn} e^{−ix·ξ} f(x) dx ] = (2π)^{−n/2} ∫_{Rn} Dα_ξ [ e^{−ix·ξ} ] f(x) dx = F[(−ix)^α f](ξ).
2.
    (iξ)^β f̂(ξ) = (2π)^{−n/2} ∫_{Rn} (iξ)^β e^{−ix·ξ} f(x) dx
                = (2π)^{−n/2} ∫_{Rn} (−1)^{|β|} (−iξ)^β e^{−ix·ξ} f(x) dx
                = (2π)^{−n/2} ∫_{Rn} (−1)^{|β|} D^β_x [ e^{−ix·ξ} ] f(x) dx
                = (2π)^{−n/2} ∫_{Rn} e^{−ix·ξ} D^β f(x) dx = F[D^β f](ξ)   (integration by parts).
Proposition 11.3.2: F : S(Rn) → S(Rn) is linear and continuous.

Proposition 11.3.3: F^{−1} : S(Rn) → S(Rn) is linear and continuous.

Proof (for F; the argument for F^{−1} is analogous): By the two formulas above,
    (−1)^{|β|+|α|} ξ^β · Dα_ξ f̂(ξ) = (2π)^{−n/2} ∫_{Rn} e^{−ix·ξ} D^β( x^α f(x) ) dx,
and D^β(x^α f(x)) is a finite sum of rapidly decreasing functions
⟹ f̂(ξ) is rapidly decreasing, f̂ ∈ S(Rn).

Continuity: if fn → 0 in S(Rn), then f̂n → 0 in S(Rn). Suppose fn → 0; we show Dα f̂n → 0 uniformly for all α ∈ N^n. It suffices to do this for α = 0:
    f̂n(ξ) = (2π)^{−n/2} ∫_{Rn} e^{−ix·ξ} fn(x) dx
           = (2π)^{−n/2} [ ∫_{B_R(0)} e^{−ix·ξ} fn(x) dx + ∫_{Rn\B_R(0)} e^{−ix·ξ} fn(x) dx ].
Since fn ∈ S(Rn), |fn(x)| ≤ C/|x|^k; take k = n + 2. Then the second integral is bounded by
    ∫_{Rn\B_R(0)} |fn(x)| dx ≤ ∫_{Rn\B_R(0)} C/|x|^{n+2} dx = C ∫_R^∞ ρ^{n−1}/ρ^{n+2} dρ = C ∫_R^∞ ρ^{−3} dρ = C/(2R²),
which is as small as we want if R is large enough. The first integral is as small as we want, for fixed R, since fn → 0 uniformly. The argument for higher derivatives is analogous.
Theorem 11.3.4: If f ∈ S(Rn), then
    F^{−1}(Ff) = f,   F(F^{−1}f) = f,
i.e., F^{−1} is the inverse of the Fourier transform.
Fourier Transform:
    (Ff)(ξ) = f̂(ξ) = (2π)^{−n/2} ∫_{Rn} e^{−ix·ξ} f(x) dx,
    (F^{−1}f)(ξ) = f̃(ξ) = f̌(ξ) = (2π)^{−n/2} ∫_{Rn} e^{ix·ξ} f(x) dx.
Example: f(x) = e^{−|x|²/2}. Then f̂(y) = e^{−|y|²/2}:
    f̂(y) = (2π)^{−n/2} ∫_{Rn} e^{−ix·y} e^{−|x|²/2} dx = e^{−|y|²/2}.
It is sufficient to show this for n = 1:
    (2π)^{−1/2} ∫_R e^{−t²/2} e^{−itu} dt = e^{−u²/2}.
Completing the square,
    ∫_R e^{−t²/2 − itu} dt = e^{−u²/2} ∫_R e^{−(t+iu)²/2} dt.
One way to evaluate the remaining integral is to note that e^{−z²/2} is analytic, so by Cauchy's theorem its integral over a closed contour is 0. Take the rectangle C with corners ±λ and ±λ + iu, parametrized by
    γ1(s) = s, s ∈ (−λ, λ);   γ2(s) = λ + is, s ∈ (0, u);
    γ3(s) = −s + iu, s ∈ (−λ, λ);   γ4(s) = −λ + i(u − s), s ∈ (0, u).
Then
    0 = ∫_C e^{−z²/2} dz = ∫_{−λ}^{λ} e^{−s²/2} ds + I_2 + ∫_{−λ}^{λ} e^{−(−s+iu)²/2} (−1) ds + I_4,
where I_2 = ∫_0^u e^{−(λ+is)²/2} i ds and I_4 = ∫_0^u e^{−(−λ+i(u−s))²/2} (−i) ds.
Looking at the second integral,
    I_2 = ∫_0^u e^{−λ²/2} e^{−iλs} e^{s²/2} i ds → 0 as λ → ∞,
since |e^{−iλs} e^{s²/2}| ≤ C on (0, u) while e^{−λ²/2} → 0 as λ → ∞.
Similarly, I_4 → 0 as λ → ∞. Thus, in the limit we have
    lim_{λ→∞} ∫_{−λ}^{λ} e^{−s²/2} ds = lim_{λ→∞} ∫_{−λ}^{λ} e^{−(−s+iu)²/2} ds
⟹ ∫_{−∞}^{∞} e^{−s²/2} ds = ∫_{−∞}^{∞} e^{−(t+iu)²/2} dt = e^{u²/2} ∫_{−∞}^{∞} e^{−t²/2} e^{−iut} dt.
Since the left-hand side equals √(2π), this gives (2π)^{−1/2} ∫_R e^{−t²/2} e^{−iut} dt = e^{−u²/2}, as claimed.

Remark: The generalization of this is the following: if f(x) = e^{−ε|x|²}, then f̂(ξ) = (2ε)^{−n/2} e^{−|ξ|²/4ε}. This is basically the same calculation as the previous example.
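As a quick numerical confirmation of the Gaussian example (my own sketch, not from the notes), one can evaluate the defining integral by quadrature with the symmetric (2π)^{−1/2} normalization used here; NumPy and the chosen grid are assumptions for illustration:

```python
import numpy as np

# Check f_hat(xi) = (2*pi)^{-1/2} * int e^{-i x xi} f(x) dx for f(x) = exp(-x^2 / 2).
x = np.linspace(-30.0, 30.0, 60001)
dx = x[1] - x[0]
f = np.exp(-x ** 2 / 2.0)

for xi in [0.0, 0.5, 1.0, 2.0, 4.0]:
    fhat = (2.0 * np.pi) ** -0.5 * np.sum(np.exp(-1j * x * xi) * f) * dx
    print(f"xi = {xi:3.1f}   fhat = {fhat.real: .8f}   exp(-xi^2/2) = {np.exp(-xi**2/2): .8f}")
```

The two printed columns agree to high accuracy, reflecting that e^{−x²/2} is its own transform in this normalization.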
Proof (of Theorem 11.3.4): Let f, g ∈ S(Rn). Then
    ∫_{Rn} g(ξ) f̂(ξ) e^{ix·ξ} dξ = ∫_{Rn} g(ξ) (2π)^{−n/2} ∫_{Rn} e^{−iy·ξ} f(y) dy e^{ix·ξ} dξ
        = (2π)^{−n/2} ∫_{Rn} ∫_{Rn} e^{−iξ·(y−x)} g(ξ) dξ f(y) dy
        = ∫_{Rn} ĝ(y − x) f(y) dy = ∫_{Rn} ĝ(y) f(x + y) dy.
Now replace g(ξ) by g(εξ). We need to compute the transform of g(ε·):
    (2π)^{−n/2} ∫_{Rn} e^{−iy·ξ} g(εξ) dξ = (2π)^{−n/2} (1/ε^n) ∫_{Rn} e^{−iy·z/ε} g(z) dz = (1/ε^n) ĝ(y/ε).
So we see that
    ∫_{Rn} g(εξ) f̂(ξ) e^{ix·ξ} dξ = ∫_{Rn} \widehat{g(ε·)}(y) f(x + y) dy = (1/ε^n) ∫_{Rn} ĝ(y/ε) f(x + y) dy = ∫_{Rn} ĝ(y) f(x + εy) dy.
Choose g(x) = e^{−|x|²/2} and take the limit ε → 0:
    g(0) ∫_{Rn} f̂(ξ) e^{ix·ξ} dξ = f(x) ∫_{Rn} ĝ(y) dy = f(x) ∫_{Rn} e^{−|y|²/2} dy = (2π)^{n/2} f(x).
∴ f(x) = (2π)^{−n/2} ∫_{Rn} e^{ix·ξ} f̂(ξ) dξ = ((F^{−1}F)f)(x).
The proof of (FF^{−1}f)(x) = f(x) is analogous.
Remark: Suppose L(D) is a differential operator with constant coefficients, e.g.
    L(D) = −Σ_{i=1}^n D_i² = −∆.
Then F(L(D)u) = L(iξ) û, where L(iξ) is just the symbol of the operator.

Example: Suppose we can extend the Fourier transform to L²(Rn) (see below). Then with L(D) = −∆ and L(iξ) = Σ ξ_i² = |ξ|², we can find the solution of −∆u = f ∈ L²(Rn) by Fourier transform:
    F(−∆u) = f̂ ⟺ |ξ|² û(ξ) = f̂(ξ) ⟺ û(ξ) = f̂(ξ)/|ξ|² ⟺ u(x) = F^{−1}( f̂(ξ)/|ξ|² )(x).
Theorem 11.3.6: For f, g ∈ S(Rn), (f, g) = (f̂, ĝ), i.e., the Fourier transform preserves the inner product
    (f, g) = ∫_{Rn} f · ḡ dx.

Remark: Using the Fourier transform we have to work with complex-valued functions; |z|² = z · z̄.

Proof: Compute, using g(x) = (F^{−1}Fg)(x) = (F^{−1}ĝ)(x):
    ∫_{Rn} f(x) ḡ(x) dx = ∫_{Rn} f(x) · \overline{ (2π)^{−n/2} ∫_{Rn} e^{ix·ξ} ĝ(ξ) dξ } dx
        = ∫_{Rn} \overline{ĝ(ξ)} · (2π)^{−n/2} ∫_{Rn} f(x) \overline{e^{ix·ξ}} dx dξ
        = ∫_{Rn} \overline{ĝ(ξ)} · (2π)^{−n/2} ∫_{Rn} f(x) e^{−ix·ξ} dx dξ
        = ∫_{Rn} f̂(ξ) \overline{ĝ(ξ)} dξ = (f̂, ĝ).
11.4 Definition of the Fourier Transform on L²(Rn)

Proposition 11.4.1: If u, v ∈ L¹(Rn) ∩ L²(Rn), then \widehat{u ∗ v} = (2π)^{n/2} û · v̂.
Proof:
    \widehat{u ∗ v}(y) = (2π)^{−n/2} ∫_{Rn} e^{−ix·y} (u ∗ v)(x) dx
        = (2π)^{−n/2} ∫_{Rn} e^{−ix·y} ∫_{Rn} u(z) v(x − z) dz dx
        = (2π)^{−n/2} ∫_{Rn} e^{−iz·y} u(z) ∫_{Rn} e^{−iy·(x−z)} v(x − z) dx dz   (substituting w = x − z)
        = (2π)^{−n/2} ∫_{Rn} e^{−iz·y} u(z) (2π)^{n/2} v̂(y) dz
        = (2π)^{n/2} û(y) v̂(y).
Theorem 11.4.2 (Plancherel's Theorem): Suppose that u ∈ L¹(Rn) ∩ L²(Rn). Then û ∈ L²(Rn) and
    ||û||_{L²(Rn)} = ||u||_{L²(Rn)}.

Proof: Let v(x) = \overline{u(−x)} and define w = u ∗ v. Then ŵ(ξ) = (2π)^{n/2} û(ξ) v̂(ξ). Moreover w ∈ L¹(Rn) and w is continuous (using Proposition 8.31). Now,
    v̂(ξ) = (2π)^{−n/2} ∫_{Rn} \overline{u(−x)} e^{−ix·ξ} dx.
Substituting y = −x,
    v̂(ξ) = (2π)^{−n/2} ∫_{Rn} \overline{u(y)} e^{iy·ξ} dy = \overline{ (2π)^{−n/2} ∫_{Rn} u(y) e^{−iy·ξ} dy } = \overline{û(ξ)}.
Next we claim that for any v, w ∈ L¹(Rn),
    ∫_{Rn} v(x) ŵ(x) dx = ∫_{Rn} v̂(ξ) w(ξ) dξ.
Check:
    ∫_{Rn} v(x) ŵ(x) dx = ∫_{Rn} v(x) (2π)^{−n/2} ∫_{Rn} w(ξ) e^{−ix·ξ} dξ dx
        = ∫_{Rn} (2π)^{−n/2} ∫_{Rn} v(x) e^{−ix·ξ} dx · w(ξ) dξ = ∫_{Rn} v̂(ξ) w(ξ) dξ.
Now set f(ξ) = e^{−ε|ξ|²}, so f̂(x) = (2ε)^{−n/2} e^{−|x|²/4ε} (as shown in the example earlier). Then
    ∫_{Rn} ŵ(ξ) e^{−ε|ξ|²} dξ = (2ε)^{−n/2} ∫_{Rn} w(x) e^{−|x|²/4ε} dx.
Taking the limit ε → 0 gives
    ∫_{Rn} ŵ(ξ) dξ = (2π)^{n/2} w(0)
(the right-hand side converges to (2π)^{n/2} w(0) because the scaled Gaussian is a limit representation of the Dirac distribution). But we know w = u ∗ v, so
    (2π)^{n/2} w(0) = ∫_{Rn} ŵ(ξ) dξ = (2π)^{n/2} ∫_{Rn} û(ξ) v̂(ξ) dξ = (2π)^{n/2} ∫_{Rn} û(ξ) \overline{û(ξ)} dξ = (2π)^{n/2} ∫_{Rn} |û|² dξ,
i.e., w(0) = ∫_{Rn} |û|² dξ. We also know
    w(0) = (u ∗ v)(0) = ∫_{Rn} u(y) v(−y) dy = ∫_{Rn} u(y) \overline{u(y)} dy = ∫_{Rn} |u|² dy.
Thus
    ∫_{Rn} |û|² dξ = w(0) = ∫_{Rn} |u|² dy ⟹ ||û||_{L²(Rn)} = ||u||_{L²(Rn)}.
Definition 11.4.3: If u ∈ L²(Rn), choose u_k ∈ L¹(Rn) ∩ L²(Rn) such that u_k → u in L²(Rn). By Plancherel's Theorem,
    ||û_k − û_l||_{L²(Rn)} = ||\widehat{u_k − u_l}||_{L²(Rn)} = ||u_k − u_l||_{L²(Rn)} → 0 as k, l → ∞,
so {û_k} is Cauchy in L²(Rn). The limit is defined to be û ∈ L²(Rn).
Note: This definition does not depend on the approximating sequence.

Remarks:
1) We have ||u||_{L²(Rn)} = ||û||_{L²(Rn)} for u ∈ L²(Rn), by approximation of u in L²(Rn) by functions u_k ∈ L¹(Rn) ∩ L²(Rn) and Plancherel's Theorem.
2) One can use this to evaluate integrals: if both u and û are known, then
    ∫_R |û|² dx = ∫_R |u|² dx.
Theorem 11.4.4: If u, v ∈ L²(Rn), then
i.) (u, v) = (û, v̂), i.e., ∫_{Rn} u · v̄ dx = ∫_{Rn} û · \overline{v̂} dx;
ii.) \widehat{D^α u}(y) = (iy)^α û(y) whenever D^α u ∈ L²(Rn);
iii.) \widehat{u ∗ v} = (2π)^{n/2} û · v̂;
iv.) u = F^{−1}[F(u)].
Proofs: See Evans.
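A small numerical aside (my own sketch, not from the notes): the discrete Fourier transform with orthonormal normalization is the finite-dimensional analogue of these statements, so Plancherel and the preservation of inner products can be checked directly with NumPy's FFT (the random data below are an arbitrary choice):

```python
import numpy as np

# Discrete analogue of Plancherel: the orthonormally normalized FFT preserves the l^2 norm
# and the inner product, mirroring ||u_hat|| = ||u|| and (u, v) = (u_hat, v_hat).
rng = np.random.default_rng(0)
u = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
v = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
u_hat = np.fft.fft(u, norm="ortho")
v_hat = np.fft.fft(v, norm="ortho")

print(np.linalg.norm(u), np.linalg.norm(u_hat))    # norms agree up to rounding
print(np.vdot(v, u), np.vdot(v_hat, u_hat))        # inner products agree up to rounding
```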
11.5 Applications of FTs: Solutions of PDEs

(1) Solve −∆u + u = f in Rn, f ∈ L²(Rn).
The Fourier transform converts the PDE into an algebraic expression:
    |y|² û + û = f̂ ⟺ û = f̂ / (|y|² + 1) ⟺ u = F^{−1}( f̂ / (|y|² + 1) ).
Write û = f̂ · B̂, where B̂ = 1/(|y|² + 1). Since \widehat{u ∗ v} = (2π)^{n/2} û · v̂,
    û = f̂ · B̂ = (2π)^{−n/2} \widehat{f ∗ B},   so   u = (2π)^{−n/2} (f ∗ B).
Now we need to find B. Trick:
    1/a = ∫_0^∞ e^{−ta} dt ⟹ 1/(|y|² + 1) = ∫_0^∞ e^{−t(|y|²+1)} dt.
So
    B(x) = (F^{−1}B̂)(x) = (2π)^{−n/2} ∫_{Rn} e^{ix·y} ∫_0^∞ e^{−t(|y|²+1)} dt dy
         = (2π)^{−n/2} ∫_0^∞ e^{−t} ∫_{Rn} e^{ix·y} e^{−t|y|²} dy dt
         = ∫_0^∞ e^{−t} (2t)^{−n/2} e^{−|x|²/4t} dt   (change of variables).
This last integral is called the Bessel potential. So we at last have an explicit representation of our solution:
    u(x) = (2π)^{−n/2} (B ∗ f)(x) = (4π)^{−n/2} ∫_0^∞ ∫_{Rn} t^{−n/2} e^{−t − |x−y|²/4t} f(y) dy dt.
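The following is a minimal numerical sketch of this example in one dimension (my own, not from the notes): the PDE −u″ + u = f is solved on a large periodic box by dividing by the symbol ξ² + 1 in Fourier space; NumPy, the box size and the Gaussian right-hand side are illustrative assumptions:

```python
import numpy as np

# Solve -u'' + u = f on a large periodic box via u_hat = f_hat / (xi^2 + 1),
# then verify the equation pointwise with finite differences.
L, N = 40.0, 4096
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)         # Fourier frequencies of the grid

f = np.exp(-x ** 2)                                # rapidly decaying right-hand side
u = np.fft.ifft(np.fft.fft(f) / (xi ** 2 + 1.0)).real

residual = -(np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2 + u - f
print("max |-u'' + u - f| :", np.max(np.abs(residual)))   # small; limited by the O(dx^2) check
```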
(2) Heat equation:
    u_t − ∆u = 0 in Rn × (0, ∞),   u = g on Rn × {t = 0}.
Take the Fourier transform in the spatial variables:
    û_t(y, t) + |y|² û(y, t) = 0 in Rn × (0, ∞),   û(y, 0) = ĝ(y).
Now we can view this as an ODE in t:
    û(y, t) = ĝ(y) e^{−|y|²t}
⟹ u(x, t) = F^{−1}[ ĝ(y) e^{−|y|²t} ](x) = (2π)^{−n/2} (g ∗ F)(x, t),   where F̂ = e^{−|y|²t}.
Changing variables as before,
    F(x) = (2t)^{−n/2} e^{−|x|²/4t}
⟹ u(x, t) = (2π)^{−n/2} (g ∗ F)(x, t) = (4πt)^{−n/2} ∫_{Rn} g(y) e^{−|x−y|²/4t} dy.
This is just the representation formula from last term!
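As a quick check of this Fourier-side solution formula (my own sketch, not from the notes): evolving a 1D Gaussian initial datum by multiplying ĝ with e^{−ξ²t} reproduces the known closed-form spreading Gaussian; NumPy, the box size and the particular initial datum are assumptions for illustration:

```python
import numpy as np

# Solve u_t - u_xx = 0, u(.,0) = g, via u_hat(xi, t) = g_hat(xi) * exp(-xi^2 t),
# and compare with the closed-form evolution of a Gaussian initial datum.
L, N = 60.0, 8192
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

g = np.exp(-x ** 2)
for t in [0.1, 0.5, 2.0]:
    u = np.fft.ifft(np.fft.fft(g) * np.exp(-xi ** 2 * t)).real
    exact = (1.0 + 4.0 * t) ** -0.5 * np.exp(-x ** 2 / (1.0 + 4.0 * t))
    print(f"t = {t}:  max |u - exact| = {np.max(np.abs(u - exact)):.2e}")
```

The errors are near machine precision, as both expressions describe convolution of g with the heat kernel.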
11.6 Fourier Transform of Tempered Distributions
Definition 11.6.1: If f ∈ S′(Rn), then we define
    (F(f), φ) = (f, F^{−1}(φ)).
Check that the following hold:
    Dα f̂ = F((−ix)^α f),   (iξ)^α f̂ = F(Dα f).

Examples:
(1) f = δ0:
    ⟨F(δ0), φ⟩ = ⟨δ0, F^{−1}φ⟩ = (F^{−1}φ)(0) = (2π)^{−n/2} ∫_{Rn} φ(x) dx = ⟨(2π)^{−n/2}, φ⟩
⟹ F(δ0) = (2π)^{−n/2}.
(2) F(1), 1 = constant function:
    ⟨F(1), φ⟩ = ⟨1, F^{−1}φ⟩ = ∫_{Rn} F^{−1}(φ)(x) dx = (2π)^{n/2} F(F^{−1}φ)(0) = (2π)^{n/2} φ(0) = ⟨(2π)^{n/2} δ0, φ⟩
⟹ F(1) = (2π)^{n/2} δ0.
11.7
Exercises
11.1: Show that the Dirac delta distribution cannot be identified with any continuous function.
11.2: Let Ω = (0, 1). Find an example of a distribution f ∈ D0 (Ω) of infinite order.
11.3: Let a ∈ R, a 6= 0. Find a fundamental solution for L =
d
dx
− a on R.
11.4: Find a sequence of test functions that converges in the sense of distributions in R to δ 0 .
11.5: Let Ω = B(0, 1) be the unit ball in R2 , and let w ∈ W 1,p (B(0, 1); R2 ) be a vector
valued function, that is, w(x) = (u(x), v (x)) with u, v ∈ W 1,p (Ω). Recall that Dw, the
gradient of w, is given by
    Dw = ( ∂1u  ∂2u ; ∂1v  ∂2v ).
a) Suppose w is smooth. Show that
∂1 (u · ∂2 v ) − ∂2 (u · ∂1 v ) = det Dw.
b) Let w ∈ W 1,p (B(0, 1); R2 ). We want to define a distribution Det Dw ∈ D0 (Ω) by
Z
hDet Dw, φi = − (u · ∂2 v · ∂1 φ − u · ∂1 v · ∂2 φ) dx.
Ω
Find the smallest p ≥ 1 such that Det Dw defines a distribution, that is, the integral
on the right hand side exists.
Det Dw = cδ0 ,
but that det Dw(x) = 0 for a.e. x ∈ Ω.
11.6: [Qualifying exam 08/98] Assume u : Rn → R is continuous and its first-order distribution derivatives Du are bounded functions, with |Du(x)| ≤ L for all x. By approximating
u by mollification, prove that u is Lipschitz, with
|u(x) − u(y )| ≤ L|x − y |
∀x, y ∈ Rn .
11.7: [Qualifying exam 01/03] Consider the operator
A : D(A) → L2 (Rn )
where D(A) = H 2 (Rn ) ⊂ L2 (Rn ) and
Au = ∆u
for u ∈ D(A).
Show that
a) the resolvent set ρ(A) contains the interval (0, ∞), i.e., (0, ∞) ⊂ ρ(A);
b) the estimate
    ||(λI − A)^{−1}|| ≤ 1/λ for λ > 0
holds.
11.8: Let f : R → R be defined by
    f(x) = sin x if x ∈ (−π, π), 0 else.
Show that
    f̂(ξ) = i √(2/π) · sin(πξ) / (ξ² − 1).
11.9: We define for real-valued functions f and s ≥ 0
    H^s(Rn) = { f ∈ L²(Rn) : ∫_{Rn} (1 + |ξ|²)^s |f̂(ξ)|² dξ < ∞ }.
a) Show that f̂(−ξ) = \overline{f̂(ξ)}.
b) Let f ∈ H^s(Rn) with s ≥ 1. Show that g(x) = F^{−1}(iξ f̂(ξ))(x) is a (vector-valued) function with values in the real numbers.
c) Prove that H¹(R) = W^{1,2}(R). (This identity is in fact true in any space dimension for s ∈ N.)
d) Show that f̂ ∈ C⁰(Rn) if f ∈ L¹(Rn). Analogously, if f̂ ∈ L¹(Rn), then f ∈ C⁰(Rn) (you need not show the latter assertion).
e) Show that H^s(Rn) ↪ C⁰(Rn) if s > n/2. Hint: use Hölder's inequality.
11.10:
a) Find the Fourier transform of χ[−1,1] , the characteristic function of the interval [−1, 1].
b) Find the Fourier transform of the function
    f(x) = 2 + x for x ∈ [−2, 0],  2 − x for x ∈ [0, 2],  0 else.
Hint: What is χ_[−1,1] ∗ χ_[−1,1]?
c) Find
    ∫_R (sin²ξ)/ξ² dξ.
d) Show that χ_[−1,1] ∈ H^{1/2 − ε}(R) for all ε ∈ (0, 1/2).
Chapter 12: Variational Methods

“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman

Contents
12.1 Direct Method of Calculus of Variations
12.2 Integral Constraints
12.3 Uniqueness of Solutions
12.4 Energies
12.5 Weak Formulations
12.6 A General Existence Theorem
12.7 A Few Applications to Semilinear Equations
12.8 Regularity
12.9 Exercises

12.1 Direct Method of Calculus of Variations
We wish to solve −∆u = 0 by looking for minimizers of
    F(u) = ½ ∫_Ω |Du|² dx.
Crucial Question: Can we decide whether there exists a minimizer?
General Question: If X is a Banach space with norm ||·|| and F : X → R, does F attain a minimum?
The answer will be yes if we have the following:
• F is bounded from below; let M = inf_{x∈X} F(x) > −∞.
• We can choose a minimizing sequence x_n ∈ X such that F(x_n) → M as n → ∞.
• Bounded sets are compact: suppose (for a subsequence) x_k → x.
• Continuity property: F(x_n) → F(x) = M, so x is a minimizer.
It turns out that lower semicontinuity will be enough:
    M = lim inf_{k→∞} F(x_k) ≥ F(x) ≥ M ⟹ F(x) = M, x is a minimizer.
There is a give and take between convergence and continuity:
                       Compactness    Lower semicontinuity
 Strong convergence    difficult      easy
 Weak convergence      easy           difficult
Our choice: weak convergence.
Definition 12.1.1: Let 1 < p < ∞. A function F is said to be (sequentially) weakly lower semicontinuous (swlsc) on W^{1,p}(Ω) if u_k ⇀ u in W^{1,p}(Ω) implies that
    F(u) ≤ lim inf_{k→∞} F(u_k).

Example: the Dirichlet integral
    F(u) = ∫_Ω ½|Du|² dx
and the generalization
    F(u) = ∫_Ω ( ½|Du|² − f u ) dx
with f ∈ L²(Ω), in X = H¹₀(Ω).
Goal: There exists a unique minimizer in H¹₀(Ω), and it solves −∆u = f in the weak sense.

(1) Lower bound on F:
    F(u) = ∫_Ω ( ½|Du|² − f u ) dx
         ≥ ∫_Ω ½|Du|² dx − ||f||_{L²(Ω)} ||u||_{L²(Ω)}   (Hölder).
Now we use Young's inequality with ε: (√ε a − b/(2√ε))² ≥ 0 ⟹ ab ≤ ε a² + b²/(4ε), so
    F(u) ≥ ∫_Ω ½|Du|² dx − ε ||u||²_{L²(Ω)} − (1/4ε) ||f||²_{L²(Ω)}.
Recall Poincaré's inequality in H¹₀(Ω): there exists C_p such that ||u||²_{L²(Ω)} ≤ C_p ||Du||²_{L²(Ω)}. Hence
    F(u) ≥ ∫_Ω ½|Du|² dx − ε C_p ∫_Ω |Du|² dx − (1/4ε) ||f||²_{L²(Ω)}.
Now choose ε = 1/(4C_p):
    F(u) ≥ ¼ ∫_Ω |Du|² dx − C_p ∫_Ω |f|² dx ≥ C > −∞,
since the first term is ≥ 0 and the second is a fixed finite number because f ∈ L²(Ω).
Recall:
Goal: We want to solve −∆u = f by variational methods, i.e., we need to prove the existence of a minimizer of the functional
    F(u) = ∫_Ω ( ½|Du|² − f u ) dx.
We can do this via the direct method of the calculus of variations. This entails:
• showing F is bounded from below;
• choosing a minimizing sequence x_n;
• showing that this minimizing sequence is bounded, so we can use weak compactness of bounded sets to get x_n ⇀ x;
• proving the lower semicontinuity of F:
    M ≤ F(x) ≤ lim inf_{n→∞} F(x_n) = M.
So we start where we left off:
(2) From
    ¼ ∫_Ω |Du|² dx − C_p ∫_Ω |f|² dx ≤ F(u),    (12.1)
we get F(u) ≥ −C_p ||f||²_{L²(Ω)} > −∞, so F is bounded from below.
(3) Define
    M = inf_{u∈H¹₀(Ω)} F(u) > −∞.
Choose u_n ∈ H¹₀(Ω) such that F(u_n) → M as n → ∞.

(4) Boundedness of {u_n}: choose n₀ large enough that F(u_n) ≤ M + 1 for n ≥ n₀. Then (12.1) tells us that
    ¼ ∫_Ω |Du_n|² dx ≤ C_p ∫_Ω |f|² dx + F(u_n) ≤ C_p ∫_Ω |f|² dx + M + 1   for n ≥ n₀.
Poincaré: ||u_n||²_{L²(Ω)} ≤ C_p ||Du_n||²_{L²(Ω)}
⟹ {u_n} is bounded in H¹₀(Ω)
⟹ weak compactness: there is a subsequence {u_{n_j}} such that u_{n_j} ⇀ u in H¹₀(Ω) as j → ∞, and by the
compact Sobolev embedding: there is a further subsequence of {u_{n_j}} such that u_{n_{j_k}} → u in L²(Ω). Call {u_k}_{k∈N} this subsequence.
(5) Weak lower semicontinuity: we need
    F(u) ≤ lim inf_{k→∞} F(u_k) = lim inf_{k→∞} ∫_Ω ( ½|Du_k|² − f u_k ) dx.
Now since u_k → u in L²(Ω), we know
    ∫_Ω f u_k dx → ∫_Ω f u dx.
A question remains: does the following relation hold?
    ½ ∫_Ω |Du|² dx ≤ lim inf_{k→∞} ½ ∫_Ω |Du_k|² dx
To see this we utilize the following algebraic identity:
    ½|p|² = ½|q|² + (q, p − q) + ½|p − q|²,
which is equivalent to
    ½|p|² + ½|q|² − p·q = ½|p − q|² ⟺ ½[ |p|² − 2p·q + |q|² ] = ½|p − q|².
Choose p = Du_k and q = Du:
    ∫_Ω ½|Du_k|² dx = ∫_Ω [ ½|Du|² + (Du, Du_k − Du) + ½|Du_k − Du|² ] dx
                    ≥ ∫_Ω ½|Du|² dx + ∫_Ω Du · (Du_k − Du) dx,
and since Du_k − Du ⇀ 0 in L²(Ω), the last integral → 0 as k → ∞. Hence
    lim inf_{k→∞} ∫_Ω ½|Du_k|² dx ≥ ∫_Ω ½|Du|² dx + lim inf_{k→∞} ∫_Ω Du · (Du_k − Du) dx = ∫_Ω ½|Du|² dx.
——
Trick: The inequality
    ½|p|² ≥ ½|q|² + (q, p − q)
allows us to replace the quadratic term in Du_k by a term that is linear in Du_k.
——
⟹ u is a minimizer!
(6) Uniqueness: Use again that
    ½|p|² = ½|q|² + (q, p − q) + ½|p − q|².
Suppose we have two minimizers u, v. Define w = ½(u + v). Choose p = Du, q = ½(Du + Dv) = Dw, so that p − q = ½(Du − Dv):
    ∫_Ω ½|Du|² dx = ∫_Ω { ½|Dw|² + Dw · ½(Du − Dv) + ½ |½(Du − Dv)|² } dx.
We apply the same identity with p = Dv, q = Dw and p − q = ½(Dv − Du):
    ∫_Ω ½|Dv|² dx = ∫_Ω { ½|Dw|² + Dw · ½(Dv − Du) + ½ |½(Dv − Du)|² } dx.
Now we sum these identities:
    ½ ∫_Ω |Du|² dx + ½ ∫_Ω |Dv|² dx = ∫_Ω |Dw|² dx + ¼ ∫_Ω |Du − Dv|² dx.
Now we remember that u and v are minimizers of
    F(u) = ∫_Ω ( ½|Du|² − f u ) dx.
So, we calculate
    ∫_Ω ( ½|Du|² − f u ) dx + ∫_Ω ( ½|Dv|² − f v ) dx
      = ∫_Ω ( |Dw|² − (u + v) f ) dx + ¼ ∫_Ω |Du − Dv|² dx   (note (u + v) f = 2 w f)
      = 2 ∫_Ω ( ½|Dw|² − w f ) dx + ¼ ∫_Ω |Du − Dv|² dx
⟹ M + M ≥ 2M + ¼ ∫_Ω |Du − Dv|² dx
⟹ 0 ≥ ¼ ∫_Ω |Du − Dv|² dx
⟹ Du = Dv, and since u, v ∈ H¹₀(Ω), u = v ⟹ uniqueness!
Recall: To solve
    −∆u = f in Ω,   u = 0 on ∂Ω,
we looked for a minimizer in H¹₀(Ω) of the functional
    F(u) = ∫_Ω ( ½|Du|² − f u ) dx.
Already done: existence and uniqueness of the minimizer.
(7) Euler-Lagrange equation: We have the following necessary condition. If v ∈ H¹₀(Ω), consider ε ↦ F(u + εv); this function has a minimum at ε = 0, so
    d/dε F(u + εv) |_{ε=0} = 0.
For our Dirichlet integral we have
    d/dε ∫_Ω ( ½|Du + ε Dv|² − f(u + εv) ) dx |_{ε=0} = ∫_Ω ( Du·Dv − f v ) dx = 0,
i.e.
    ∫_Ω ( Du·Dv − f v ) dx = 0 for all v ∈ H¹₀(Ω),
i.e., u is a weak solution of Poisson's equation −∆u = f.
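To make the direct method concrete, here is a small numerical sketch of my own (not from the notes): the functional F is discretized on a uniform grid of (0, 1) with zero boundary values, minimized by plain gradient descent, and the minimizer is compared with the exact solution of −u″ = f; NumPy, the grid size, the step size and f = sin(πx) are illustrative assumptions:

```python
import numpy as np

# Discretize F(u) = int_0^1 (1/2)|u'|^2 - f u dx with u(0) = u(1) = 0, minimize by
# gradient descent, and compare the minimizer with the solution of -u'' = f.
N = 100
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.sin(np.pi * x)                       # exact solution of -u'' = f is sin(pi x)/pi^2

def grad_F(u):                              # gradient of the discretized functional
    g = np.zeros_like(u)
    g[1:-1] = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) / h - f[1:-1] * h
    return g                                # boundary entries stay 0: u remains "in H_0^1"

u = np.zeros(N + 1)
for _ in range(60000):                      # plain gradient descent (slow but transparent)
    u -= 0.25 * h * grad_F(u)

print("max |u - sin(pi x)/pi^2| :", np.max(np.abs(u - np.sin(np.pi * x) / np.pi ** 2)))
```

The computed minimizer agrees with sin(πx)/π² up to the O(h²) discretization error, illustrating that the minimizer of F is exactly the (weak) solution of the Euler-Lagrange equation.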
Remarks:
1) Laplace's equation has a variational structure, i.e., its weak formulation is the Euler-Lagrange equation of a variational problem.
2) f : R → R is lower semicontinuous if x_n → x implies
    f(x) ≤ lim inf_{n→∞} f(x_n).
f continuous ⟹ f lower semicontinuous. If f is discontinuous at x, then depending on the value of f at the jump, f is lower semicontinuous or upper semicontinuous there (pictures omitted).
3) Weak convergence is not compatible with non-linear functions: if f_n ⇀ f in L²(Ω) and g is a non-linear function, then in general g(f_n) does not converge weakly to g(f).

Example: f_n = sin(nx) on (0, π); f_n ⇀ 0 in L²((0, π)).
Need: ∫_0^π f_n g dx → 0 for all g ∈ L²((0, π)). It is sufficient to show this for φ ∈ C_c^∞((0, π)):
    ∫_0^π sin(nx) φ(x) dx = ∫_0^π d/dx( −cos(nx)/n ) φ(x) dx = (1/n) ∫_0^π cos(nx) φ′(x) dx
                           ≤ (1/n) max|φ′| ∫_0^π dx = (π/n) max|φ′| → 0 as n → ∞.
Now choose g(x) = x², so g(f_n) = sin²(nx). Does g(f_n) ⇀ g(0) = 0? NO:
    lim_{n→∞} ∫_0^π sin²(nx) φ(x) dx = lim_{n→∞} ∫_0^π ( ½ − d/dx[ sin(2nx)/(4n) ] ) φ(x) dx
        = lim_{n→∞} ∫_0^π ½ φ(x) dx + lim_{n→∞} (1/4n) ∫_0^π sin(2nx) φ′(x) dx = ∫_0^π ½ φ(x) dx.
So sin²(nx) ⇀ ½ in L²((0, π)).
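This can be confirmed numerically (my own sketch, not from the notes; NumPy and the test function φ = x²(π − x)² are arbitrary choices):

```python
import numpy as np

# sin(nx) -> 0 weakly in L^2((0, pi)), but sin^2(nx) -> 1/2 weakly (not 0).
x = np.linspace(0.0, np.pi, 200001)
dx = x[1] - x[0]
phi = x ** 2 * (np.pi - x) ** 2                    # a fixed smooth test function

for n in [1, 10, 100, 1000]:
    a = np.sum(np.sin(n * x) * phi) * dx           # -> 0
    b = np.sum(np.sin(n * x) ** 2 * phi) * dx      # -> (1/2) int phi
    print(f"n = {n:4d}   int sin(nx) phi = {a: .5f}   int sin^2(nx) phi = {b: .5f}")
print("(1/2) int phi =", 0.5 * np.sum(phi) * dx)
```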
So, we need lower semicontinuity since F(u) will in general not be linear (for our Dirichlet integral, |Du|² is not linear), and thus passing a weak limit inside F(u) isn't justified. Lower semicontinuity saves this by trapping the weak sequence as follows:
    inf_{u∈H¹₀(Ω)} F(u) ≤ F(u) ≤ lim inf_{k→∞} F(u_k) = M = inf_{v∈H¹₀(Ω)} F(v)
⟹ F(u) = inf_{v∈H¹₀(Ω)} F(v).
4) Crucial was the inequality
    ½|p|² ≥ ½|q|² + (q, p − q).
Define F(p) = ½|p|², so F′(p) = p. We needed F(p) ≥ F(q) + (F′(q), p − q) with q fixed, p ∈ Rn, i.e., F is convex.

Proposition 12.1.2:
i.) Let f ∈ C¹(Rn). Then f is convex, i.e., f(λp + (1−λ)q) ≤ λf(p) + (1−λ)f(q) for all p, q ∈ Rn and λ ∈ [0, 1], if and only if
    f(p) ≥ f(q) + (f′(q), p − q) for all p, q ∈ Rn.
ii.) If f ∈ C²(Rn), then f is convex ⟺ D²f is positive semidefinite, i.e.,
    Σ_{i,j=1}^n D_{ij}f(p) ξ_i ξ_j ≥ 0 for all p ∈ Rn, ξ ∈ Rn.

We needed for uniqueness
    ½|p|² = ½|q|² + (q, p − q) + ½|p − q|².
The argument would work if
    ½|p|² ≥ ½|q|² + (q, p − q) + λ|p − q|², λ > 0.
Now take F(p) = ½|p|²:
    F(p) = F(q) + (F′(q), p − q) + ½(F″(q)(p − q), p − q).
We know F′(p) = p and F″(p) = I, so
    (F″(q)(p − q), p − q) = |p − q|².
For general F, this argument works if D²F(q) is uniformly positive definite, i.e., there exists λ > 0 such that
    (D²F(q)ξ, ξ) ≥ λ|ξ|² for all q ∈ Rn, ξ ∈ Rn.    (12.2)

Definition 12.1.3: f ∈ C²(Rn) is uniformly convex if (12.2) holds.
Theorem 12.1.4: Suppose that f ∈ C²(Rn) is uniformly convex and coercive, i.e., there exist constants α > 0, β ≥ 0 such that
    f(p) ≥ α|p|² − β.
Then the variational problem
    minimize F(u) = ∫_Ω ( f(Du) − g u ) dx in H¹₀(Ω)
has a unique minimizer, which solves the Euler-Lagrange equation −div(f′(Du)) = g in Ω with u = 0 on ∂Ω.
Proof:
• F(u) is bounded from below, since
    F(u) = ∫_Ω ( f(Du) − g u ) dx ≥ ∫_Ω ( α|Du|² − β − g u ) dx ≥ (α/2) ∫_Ω |Du|² dx − C > −∞
by the Hölder, Young and Poincaré inequalities.
• Weak lower semicontinuity, since f is convex.
• Uniqueness, since f is uniformly convex.
• Euler-Lagrange equations:
    d/dε ∫_Ω { f(Du + ε Dv) − g(u + εv) } dx |_{ε=0}
      = ∫_Ω ( ∂f/∂p_i(Du) · ∂v/∂x_i − g v ) dx = ∫_Ω ( −div f′(Du) − g ) v dx = 0 for all v ∈ H¹₀(Ω).
Recall: Given a functional
    F(u) = ∫_Ω ( ½|Du|² − f u ) dx,
we have shown the following:
• If we have weak lower semicontinuity AND convexity, we have existence of a minimizer.
• If we have uniform convexity, we get uniqueness of the minimizer.
More generally, given
    F(u) = ∫_Ω F(Du, u, x) dx,
we require that p ↦ F(p, z, x) is convex.
Euler-Lagrange equation:
    d/dε F(u + εv) |_{ε=0} = 0 for all v ∈ H¹₀(Ω).
We calculate
    d/dε F(u + εv) |_{ε=0} = d/dε ∫_Ω F(Du + ε Dv, u + εv, x) dx |_{ε=0}
      = ∫_Ω [ Σ_{j=1}^n ∂F/∂p_j(Du, u, x) ∂v/∂x_j + ∂F/∂z(Du, u, x) v ] dx = 0 for all v ∈ H¹₀(Ω).
(This last equality is the weak formulation of our original PDE.)
    ⟺ ∫_Ω Σ_{j=1}^n [ −∂/∂x_j( ∂F/∂p_j(Du, u, x) ) + ∂F/∂z(Du, u, x) ] v dx = 0
    ⟺ ∫_Ω [ −div D_p F(Du, u, x) + ∂F/∂z(Du, u, x) ] v dx = 0 for all v ∈ H¹₀(Ω)
    ⟺ −div D_p F(Du, u, x) + ∂F/∂z(Du, u, x) = 0.
This last line represents the strong formulation of our PDE.
If a PDE has variational form, i.e., it is the Euler-Lagrange equation of a variational problem, then we can prove the existence of a weak solution by proving that there exists a minimizer.

12.2 Integral Constraints
In this section we shall expand upon our usage of the calculus of variations to include problems
with integral constraints. Before going into this, a digression will be made to prove the implicit
function theorem, which will subsequently be used to prove the resulting Euler-Lagrange
equation that arises from problems with constraints.
12.2.1
The Implicit Function Theorem
[Bredon] In this subsection we will prove the implicit function theorem with the goal of giving
the reader a better general intuition regarding this theorem and its use. As this section
is self-contained, it may be skipped if the reader feels they are already familiar with this
theorem.
Theorem 12.2.1 (The Mean Value Theorem): Let f ∈ C¹(Rn) and x, x̄ ∈ Rn. Then
    f(x) − f(x̄) = Σ_{i=1}^n ∂f/∂x_i(x̃) (x_i − x̄_i)
for some x̃ on the line segment between x and x̄.

Proof: Apply the mean value theorem to the function g(t) = f(tx + (1−t)x̄) and use the chain rule:
    dg/dt |_{t=t₀} = Σ_{i=1}^n ∂f/∂x_i(x̃) d(t x_i + (1−t) x̄_i)/dt |_{t=t₀} = Σ_{i=1}^n ∂f/∂x_i(x̃) (x_i − x̄_i),
where x̃ = t₀ x + (1 − t₀) x̄.

Corollary 12.2.2: Let f ∈ C¹(R^k × R^m) with x ∈ R^k and y, ȳ ∈ R^m. Then
    f(x, y) − f(x, ȳ) = Σ_{i=1}^m ∂f/∂y_i(x, ỹ) (y_i − ȳ_i)
for some ỹ on the line segment between y and ȳ.
Lemma 12.2.3: Let ξ ∈ Rn and η ∈ Rm be given, and let f ∈ C¹(Rn × Rm; Rm). Assume that f(ξ, η) = η and that
    ∂f_i/∂y_j |_{x=ξ, y=η} = 0 for all i, j,
where x and y are the coordinates of Rn and Rm respectively. Then there exist numbers a > 0 and b > 0 such that there is a unique function φ : A → B, where A = { x ∈ Rn : ||x − ξ|| ≤ a } and B = { y ∈ Rm : ||y − η|| ≤ b }, such that φ(ξ) = η and φ(x) = f(x, φ(x)) for all x ∈ A. Moreover, φ is continuous.

Proof: WLOG, assume ξ = η = 0. Applying Corollary 12.2.2 to each coordinate of f and using the assumption that ∂f_i/∂y_j = 0 at the origin, and hence small in a neighborhood of 0, we can find a > 0 and b > 0 such that for any given constant 0 < K < 1,
    ||f(x, y) − f(x, ȳ)|| ≤ K ||y − ȳ||    (12.3)
for ||x|| ≤ a, ||y|| ≤ b and ||ȳ|| ≤ b. Moreover, we can take a to be even smaller so that we also have, for all ||x|| ≤ a,
    Kb + ||f(x, 0)|| ≤ b.    (12.4)
Now consider the set F of all functions φ : A → B with φ(0) = 0. Define the norm on F to be ||φ − ψ||_F = sup{ ||φ(x) − ψ(x)|| : x ∈ A }. This clearly generates a complete metric space. Now define T : F → F by putting (Tφ)(x) = f(x, φ(x)). To prove the theorem, we must check
i) (Tφ)(0) = 0, and
ii) (Tφ)(x) ∈ B for x ∈ A.
For i), we see (Tφ)(0) = f(0, φ(0)) = f(0, 0) = 0. Next, we calculate
    ||(Tφ)(x)|| = ||f(x, φ(x))|| ≤ ||f(x, φ(x)) − f(x, 0)|| + ||f(x, 0)||
               ≤ K ||φ(x)|| + ||f(x, 0)||   (by (12.3))
               ≤ Kb + ||f(x, 0)|| ≤ b   (by (12.4)),
which proves ii). Now we claim T is a contraction, by computing
    ||Tφ − Tψ||_F = sup_{x∈A} ||(Tφ)(x) − (Tψ)(x)|| = sup_{x∈A} ||f(x, φ(x)) − f(x, ψ(x))||
                 ≤ sup_{x∈A} K ||φ(x) − ψ(x)||   (by (12.3))
                 = K ||φ − ψ||_F.
Thus by the Banach fixed point theorem (Theorem 8.3), there is a unique φ : A → B with φ(0) = 0 and Tφ = φ, i.e., f(x, φ(x)) = φ(x) for all x ∈ A. Theorem 8.3 also states that φ = lim_{i→∞} φ_i, where φ_0 is arbitrary and φ_{i+1} = Tφ_i. Put φ_0(x) ≡ 0. Then φ_i is continuous for all i, so φ is the uniform limit of continuous functions, which implies φ is itself continuous.
Theorem 12.2.4 (The Implicit Function Theorem): Let g ∈ C 1 (Rn × Rm ) and let ξ ∈
Rn , η ∈ Rm be given with g(ξ, η) = 0. (g only needs to be defined in a neighborhood of
(ξ, η).) Assume that the differential of the composition
Rm → Rn × Rn → Rm ,
y 7→ (ξ, y ) 7→ g(ξ, y ),
Version::31102011
is onto at η. (This is a fancy way of saying the Jacobian with respect to the second argument
of g, Jy (g), is non-singular or det [Jy (g)] 6= 0.) Then there are numbers a > 0 and b > 0
such that there exists a unique function φ : A → B (with A, B as in Lemma 12.7), with
φ(ξ) = η, such that
g(x, φ(x)) = 0
∀x ∈ A.
(i.e. φ is “solves” or the implicit relation g(x, y ) = 0). Moreover, if g is C p , then so is φ
(including the case p = ∞).
Proof: The differential referred to is the linear map L : Rm → Rm given by
Li (y ) =
m
X
∂gi
j=1
∂yi
(ξ, η)yj ,
That is, it is the linear map represented by the Jacobian matrix (∂gi /∂yi ) at (ξ, η). The
hypothesis says that L is non-singular, and hence has a linear inverse L−1 : Rm → Rm . Let
f : Rn × Rm → Rm
be defined by
f (x, y ) = y − L−1 (g(x, y )).
Then f (ξ, η) = η − L−1 (0) = η. Also, computer differentials at η of y 7→ f (ξ, y ) gives
I − L−1 L = I − I = 0.
Explicitly, this computation
P is as follows: Let L = (ai,j ) so that ai,j = (∂gi /∂yj )(ξ, η) and
let L−1 = (bi,j ) so that
bi,k ak,i = δi,j . Then
" m
#
m
X
∂ X
∂gk
∂fi
bi,k gk (x, y )
= δi,j −
bi,k
(ξ, η) = δi,j −
(ξ, η)
∂yj
∂yj
∂yj
k=1
= δi,j −
m
X
(ξ,η)
k=1
bi,k ak,j = 0.
k=1
Applying Lemma 12.7 to get a, b, and φ with φ(ξ) = η and f (x, φ(x)) = φ(x), we see that
φ(x) = f (x, φ(x)) = φ(x) − L−1 (g(x, φ(x))),
which is equivalent to g(x, φ(x)) = 0.
Now we must show that φ is differentiable. Since the determinant of the Jacobian Jy (g) 6= 0
at (ξ, η), it is nonzero in a neighborhood, say A × B. To show that φ is differentiable at a
point x ∈ A, we can assume x = ξ = η = 0 WLOG. With this assumption, we apply the
Mean Value Theorem (Theorem 12.5) to g(x, y ), g(0, 0) = 0:
0 = gi (x, φ(x)) = gi (x, φ(x)) − gi (0, 0)
n
m
X
X
∂gi
∂gi
=
(pi , qi ) · xj +
(pi , qi ) · φk (x),
∂xj
∂yk
j=1
k=1
where (pi , qi ) is some point on the line segment between (0, 0) and (x, φ(x)). Let h(j) denote
the point (0, 0, . . . , h, 0, . . . , 0), with the h in the jth place, in Rn . Then, putting h(j) in place
of x in the above equation and dividing by h, we get
m
X ∂gi
φk (h(j) )
∂gi
(pi , qi ) +
(pi , qi )
.
0=
∂xj
∂yk
h
k=1
For j fixed, i = 1, . . . , m and k = 1, . . . , m these are m linear equations for the m unknowns
φk (h(j) )
φk (h(j) ) − φk (0)
=
h
h
We know that φ is differentiable (once) and in a neighborhood A × B of (ξ, η) and thus, we
can apply standard calculus to compute the derivative of the equations g(x, φ(x)) = 0. The
Chain Rule gives
m
0=
X ∂gi
φk
∂gi
(x, φ(x)) +
(x, φ(x)) (x).
∂xj
∂yk
xj
k=1
Again, these are linear equations with non-zero determinant near (ξ, η) and hence has, by
Cramer’s Rule, a solution of the form
∂φk
(x) = Fk,j (x, φ(x)),
∂xj
where Fk,j is C p−1 when g is C p . If φ is C r for r < p, then the RHS of this equation is also
C r . Thus, the LHS ∂φk /∂xj is C r and hence the φk are C r +1 . By induction, the φk are C p .
Consequently, φ is C p when g is C p , as claimed. 12.2.2
Nonlinear Eigenvalue Problems
[Evans] Now we are ready to investigate problems with integral constraints. As in the previous
section, we will first consider a concrete example. So, let us look at the problem of minimizing
the energy functional
Z
1
|Dw |2 dx
I(w ) =
2 Ω
over all functions w with, say, w = 0 on ∂Ω, but subject now also to the side condition that
Z
J(w ) =
G(w ) dx = 0,
Ω
where G : R → R is a given, smooth function.
We will henceforth write g = G 0 . Assume now
Gregory T. von Nessi
|g(z)| ≤ C(|z| + 1)
Version::31102011
and they can be solved since the determinant of the Jacobian matrix is non-zero in A × B.
The solution (Cramer’s Rule) has a limit as h → 0. Thus, (∂φk /∂xj )(0) exists and equals
this limit.
and so
|G(z)| ≤ C(|z|2 + 1)
(∀z ∈ R)
(12.5)
for some constant C.
Now let us introduce an appropriate admissible space
A = w ∈ H01 (Ω) J(w ) = 0 .
Version::31102011
We also make the standard assumption that the open set Ω is bounded, connected and has
a smooth boundary.
Theorem 12.2.5 (Existence of constrained minimizer): Assume the admissible space A
is nonempty. Then ∃u ∈ A satisfying
I(u) = min I(w )
w ∈A
Proof: As usual choose a minimizing sequence {uk }∞
k=1 ⊂ A with
I(uk ) → m = inf I(w ).
w ∈A
Then as in the previous section, we can extract a weakly converging subsequence
in H01 (Ω),
ukj * u
with I(u) ≤ m. All that we need to show now is
J(u) = 0,
which shows that u ∈ A. Utilizing the fact that H01 (Ω)
subsequence of ukj such that
⊂⊂
−→
L2 (Ω), we extract another
in L2 (Ω),
ukj → u
where we have relabeled ukj to be this new subsequence. Consequently,
Z
|J(u)| = |J(u) − J(uk )| ≤
|G(u) − G(uk )| dx
Ω
Z
≤ C
|u − uk |(1 + |u| + |uk |) dx by (12.5)
Ω
→ 0 as k → ∞.
A more interesting topic than the existence of the constrained minimizers is an examination
of the corresponding Euler-Lagrange equation.
Theorem 12.2.6 (Lagrange multiplier): Let u ∈ A satisfy
I(u) = min I(w ).
w ∈a
Then ∃λ ∈ R such that
Z
Ω
∀v ∈ H01 (Ω).
Du · Dv dx = λ
Z
Ω
g(u) · v dx
(12.6)
Remark: Thus u is a weak solution of the nonlinear BVP
−∆u = λg(u) in Ω
u = 0
on ∂Ω,
(12.7)
where λ is the Lagrange Multiplier corresponding to the integral constraint
J(u) = 0.
A problem of the form (12.7) for the unknowns (u, λ), with u ≡ 0, is a non-linear eigenvalue
problem.
Step 1) Fix any function v ∈ H01 (Ω). Assume first
g(u) 6= 0 a.e. in Ω.
(12.8)
Choose then any function w ∈ H01 (Ω) with
Z
g(u) · w dx 6= 0;
(12.9)
Ω
this is possible because of (12.8). Now write
j(τ, σ) = J(u + τ v + σw )
Z
=
G(u + τ v + σw ) dx
Ω
(τ, σ ∈ R).
Clearly
j(0, 0) =
Z
G(u) dx = 0.
Ω
In addition, j is C 1 and
Z
∂j
(τ, σ) =
g(u + τ v + σw ) · v dx,
∂τ
Ω
Z
∂j
(τ, σ) =
g(u + τ v + σw ) · w dx,
∂σ
Ω
(12.10)
(12.11)
Consequently, (12.9) implies
∂j
(0, 0) 6= 0.
∂σ
According to the Implicit Function Theorem, ∃φ ∈ C 1 (R) such that
φ(0) = 0
and
j(τ, φ(τ )) = 0
for all sufficiently small τ , say |τ | ≤ τ0 . Differentiating, we discover
Gregory T. von Nessi
∂j
(τ, φ(τ )) ∂j (τ, φ(τ ))φ0 (τ ) = 0;
∂σ
∂τ
(12.12)
Version::31102011
Proof of 12.10:
whence (12.10) and (12.11) yield
Z
φ0 (0) = − Z Ω
Ω
g(u) · v dx
(12.13)
.
g(u) · w dx
Step 2) Now set
Version::31102011
w (τ ) = τ v + φ(τ )w
(|τ | ≤ τ0 )
and write
i (τ ) = I(u + w (τ )).
Since (12.12) implies J(u + w (τ )) = 0, we see that u + w (τ ) ∈ A. So the C 1 function
i ( · ) has a minimum at 0. Thus
0
0 = i (0) =
=
Z
Ω
(Du + τ Dv + φ(τ )Dw ) · (Dv + φ0 (τ )Dw ) dx
Ω
Du · (Dv + φ0 (0)Dw ) dx.
Z
τ =0
(12.14)
Recall now (12.13) and define
Z
λ = ZΩ
Du · Dw dx
Ω
,
g(u) · w dx
to deduce from (12.14) the desired equality
Z
Ω
Du · Dv dx = λ
Z
Ω
g(u) · v dx
∀v ∈ H01 (Ω).
Step 3) Suppose now instead (12.8) that
g(u) = 0
a.e. in Ω.
Approximating g by bounded functions, we deduce DG(u) = g(u)·Du = 0 a.e. Hence,
since Ω is connected, G(u) is constant a.e. It follows that G(u) = 0 a.e., because
J(u) =
Z
G(u) dx = 0.
Ω
As u = 0 on ∂Ω in the trace sense, it follows that G(0) = 0.
But then u = 0 a.e. as otherwise I(u) > I(0) = 0. Since g(u) = 0 a.e., the identity
(12.6) is trivially valid in this case, for any λ. Gregory T. von Nessi
12.3
Uniqueness of Solutions
Warning: In general, uniqueness of minimizers does not imply uniqueness of weak solutions
for the Euler-Lagrange equation. However, if the joint mapping (p, z) 7→ F(p, z , x) is convex
for x ∈ Ω, then uniqueness of minimizers implies uniqueness of weak solutions.
Reason: In this case, weak solutions are automatically minimizers. Suppose u is a weak
solution of
−div
∂F
∂F
+
=0
∂p
∂z
u = g on ∂Ω
F(q, y , x) ≥ F(p, z , x) + Dp F(p, z , x) · (q − p) + Dz F(p, z , x) · (y − z)
(the graph of F lies above the tangent plane).
Suppose that u = g on ∂Ω. Choose p = Du, z = u, q = Dw and y = w .
F(Dw , w , x) ≥ F(du, u, x) + Dp F(Du, u, x) · (Dw − Du) + Dz F(Du, u, x) · (w − u)
Integrate in Ω:
Z
F(Dw , w , x) dx
Ω
Z
≥
Ω
F(Du, u, x) dx
Z
+
[Dp F(Du, u, x) · (Dw − Du) + Dz F(Du, u, x) · (w − u)] dx
{z
}
|Ω
= 0 since w − v = 0 on ∂Ω and u is a
weak sol. of the Euler Lagrange eq.
=⇒ F (w ) =
Z
Ω
12.4
F(Dw , w , x) dx ≥
Z
Ω
F(Du, u, x) dx = F (u)
Energies
Ideas: Multiply by u, ut , integrate in space/space-time and try to find non-negative quantities.
Typical terms in energies:
Z
Z
Z
|Du|2 dx,
|u|2 dx,
|ut |2 dx.
Ω
If you take
dE
dt ,
Ω
Ω
you should find a term that allows you to use the original PDE:
d 1
dt 2
Z
Ω
2
|ut (x, t)| dx =
Z
Ω
ut (x, t) · utt (x, t) dx
is good for hperbolic equations, and
Z
Z
d 1
2
|u| dx =
u(x, t) · ut (x, t) dx
dt 2 Ω
Ω
is good for parabolic equations.
12.5 Weak Formulations
Typically, weak formulations are integral identities that
• involve test functions,
• contain fewer derivatives than the original PDE in strong form,
• are obtained by multiplying the strong form by a test function and integrating by parts.
Examples:
1) −∆u = f in Ω, u = 0 on ∂Ω:
    ∫_Ω −(∆u) v dx = ∫_Ω f v dx ⟺ ∫_Ω Du·Dv dx = ∫_Ω f v dx for all v ∈ H¹₀(Ω).
So, we should have: weak formulation + regularity ⟹ strong formulation, e.g.,
    u ∈ H¹₀(Ω) ∩ H²(Ω) + weak solution ⟹ −∆u = f in Ω.
– Dirichlet conditions: enforced in the sense that we require u to have the correct boundary data; e.g., for
    −∆u = f in Ω, u = g on ∂Ω,
we seek u ∈ H¹(Ω) with u = g on ∂Ω.
– Neumann conditions: natural boundary conditions; they should arise as a consequence of the weak formulation.
Weak formulation:
    ∫_Ω Du·Dv dx = ∫_Ω f v dx for all v ∈ H¹(Ω).
Testing only with v ∈ H¹₀(Ω) ⊂ H¹(Ω) already gives −∆u = f in Ω. Now take v ∈ H¹(Ω) and integrate by parts:
    −∫_Ω ∆u·v dx + ∫_{∂Ω} (Du·ν) v dS = ∫_Ω f v dx for all v ∈ H¹(Ω)
⟹ ∫_{∂Ω} (Du·ν) v dS = 0 for all v ∈ H¹(Ω)
⟹ Du·ν = 0 on ∂Ω.


2) Now we consider a problem with split boundary conditions. Let Ω be smooth, bounded and connected, with ∂Ω = Γ₀ ∪ Γ₁, where Γ₀ and Γ₁ have positive surface measure:
    −∆u = f in Ω,   u = 0 on Γ₀,   Du·ν = g on Γ₁.
Define H(Ω) = { φ ∈ H¹(Ω) : φ = 0 on Γ₀ }. For φ ∈ H(Ω),
    ∫_Ω −∆u·φ dx = ∫_Ω f φ dx,
and integration by parts gives
    ∫_Ω −∆u·φ dx = ∫_Ω Du·Dφ dx − ∫_{∂Ω=Γ₀∪Γ₁} (Du·ν) φ dS = ∫_Ω Du·Dφ dx − ∫_{Γ₁} g φ dS
(the boundary integral over Γ₀ vanishes since φ = 0 there).
So for the weak formulation, we seek u ∈ H(Ω) such that
    ∫_Ω Du·Dφ dx = ∫_{Γ₁} g φ dS + ∫_Ω f φ dx for all φ ∈ H(Ω).
Check: weak solution + regularity ⟹ strong solution.
The Dirichlet data are fine: u ∈ H(Ω) ⟹ u = 0 on Γ₀. Integrating by parts in the weak formulation,
    ∫_Ω Du·Dφ dx = −∫_Ω ∆u·φ dx + ∫_{∂Ω} (Du·ν) φ dS = ∫_{Γ₁} g φ dS + ∫_Ω f φ dx for all φ ∈ H(Ω).
Now, taking φ ∈ H¹₀(Ω) ⊂ H(Ω), we see
    ∫_Ω (−∆u − f) φ dx = 0 for all φ ∈ H¹₀(Ω)
⟹ −∆u − f = 0 in Ω ⟺ −∆u = f ⟹ the PDE holds.
The volume terms then cancel, and we get
    ∫_{∂Ω} (Du·ν) φ dS = ∫_{Γ₁} g φ dS for all φ ∈ H(Ω)
⟺ ∫_{Γ₁} (Du·ν) φ dS = ∫_{Γ₁} g φ dS ⟹ Du·ν = g on Γ₁,
and we have thus recovered the Neumann boundary condition.
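A 1D sketch of my own (not from the notes) illustrates how the Neumann datum is "natural": for −u″ = f on (0, 1) with u(0) = 0 imposed and u′(1) = g never imposed, assembling the discrete weak form with the boundary term g·φ(1) makes the correct derivative at x = 1 emerge from the solution; NumPy, the uniform P1 grid and the data f, g below are illustrative assumptions:

```python
import numpy as np

# 1D analogue: -u'' = f on (0,1), u(0) = 0 imposed, u'(1) = g NOT imposed.
# Assemble the discrete weak form (P1 elements on a uniform grid) with the boundary
# term g*phi(1) on the right-hand side, and check that u'(1) ~ g comes out by itself.
N = 400
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.cos(3.0 * x)
g = 0.7                                      # desired natural (Neumann) boundary value

A = np.zeros((N, N))                         # unknowns u_1, ..., u_N (u_0 = 0 eliminated)
b = np.zeros(N)
for i in range(N):
    A[i, i] = 2.0 / h
    if i > 0:
        A[i, i - 1] = -1.0 / h
    if i < N - 1:
        A[i, i + 1] = -1.0 / h
    b[i] = f[i + 1] * h
A[N - 1, N - 1] = 1.0 / h                    # the last (half) hat function
b[N - 1] = f[N] * h / 2.0 + g                # boundary term from the weak formulation

u = np.concatenate([[0.0], np.linalg.solve(A, b)])
print("recovered u'(1) =", (u[-1] - u[-2]) / h, " (target g =", g, ")")
```

Only the Dirichlet condition is built into the trial space; the printed derivative matches g up to O(h), exactly as in the argument above.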
12.6 A General Existence Theorem
Goal: Existence of a minimizer for
F (u) =
Z
L(Du, u, x) dx
Ω
with L : Rn × R × Ω → R smooth and bounded from below.
Note:
Version::31102011
L(p, z , x) =
1 2
|p| − f (x) · z
2
f ∈ L2 (Ω)
doesn’t fit into this framework.
Theorem 12.6.1 (): Suppose that p 7→ L(p, z , x) is convex. Then F (·) is sequentially
weakly lower semicontinuous on W 1,p (Ω), 1 < p < ∞.
Idea of Proof: Have to show: uk * u in W 1,p (Ω), then lim inf k→∞ F (uk ) ≥ F (u). True
if lim inf k→∞ F (uk ) = ∞.
Suppose M = lim inf k→∞ F (uk ) < ∞, and that lim inf = lim (subsequence, not relabeled).
uk * u in W 1,p (Ω) =⇒ ||uk ||W 1,p (Ω) bounded
Compact Sobolev embedding: without loss of generality uk → u in Lp (Ω) =⇒ convergence
in measure =⇒ further subsequence: uk → u pointwise a.e.
Egoroff: exists E such that uk → u uniformly on E , |Ω \ E | < .
Define F by
1
F = x ∈ Ω : |u(x)| + |Du(x)| ≤
Claim: |Ω \ F | → 0. Fix K = 1 . Then,
Z
Z
1
1
|{|u(x)| > K}|
≤
|u(x)| dx ≤
|u(x)| dx
K Ω
K Ω
Z
10 Z
1
p
p
Hölder
1
p
≤
1 dx
|u(x)| dx
K
Ω
Ω
C
≤
K
Define good points G = F ∩ E , then |Ω \ G | → 0 as → 0.
By assumption, L bounded from below =⇒ may assume that L ≥ 0.
Z
F (uk ) =
L(Duk , uk , x) dx
Ω
Z
≥
L(Duk , uk , x) dx
G
Z
Z
(by convex. of L)
≥
L(Du, uk , x) dx +
Dp L(Du, uk , x) · (Duk − Du) dx
{z
} | G
{z
}
| G
:=Ik
:=Jk
Since L(q, z, x) ≥ L(p, z , x) + Lp (p, z , x) · (p − q). Now
Z
Ik →
L(Du, u, x) dx uniformly
G
and
Jk =
Z
G
(Dp L(Du, uk , x) − Dp L(Du, u, x)) ·(Duk − Du) dx +
{z
}
|
→0 uniformly
{z
}
→0
So, we have
lim inf F (uk ) ≥
k→∞
Z
|
G
→0
L(Du, u, x) dx.
G
Now, we use monotone convergence for & 0 to get
Z
lim inf F (uk ) ≥
L(Du, u, x) dx = F (u)
k→∞
Dp L(Du, u, x) · (Duk − Du) dx .
{z
}
Ω
Recall the assumption:
F (u) =
Z
p 7→ L(p, z , x) is convex.
L(Du, u, x) dx,
Ω
Theorem 12.6.2 (): Suppose that L satisfies the coercivity assumption L(p, z , x) ≥ α|p|q −
β with α > 0, β ≥ 0 and 1 < q < ∞. Suppose in addition that L is convex in p for all z, x.
Also, suppose that the class of admissable functions
A = {w ∈ W 1,q (Ω), w (x) = g(x) on ∂Ω}
is not empty. Lastly, assume that g ∈ W 1,q (Ω). If these conditions are met, then there
exists a minimizer of the variational problem:
Z
Minimize F (u) =
L(Du, u, x) dx in A
Ω
Remark: It’s reasonable to assume that there exists a w ∈ A with F (w ) < ∞. Typically
one assumes in addition that L satisfies a growth condition of the form
L(p, z , x) ≤ C(|p|q + 1)
This ensures that F is finite on A.
Idea of Proof: If F (w ) = +∞ ∀w ∈ A, then there is nothing to show. Thus, suppose
F (w ) < ∞ for at least on w ∈ A.
Given L(p, z, x) ≥ α|p|q − β, we see that if F (w ) < ∞, then
Z
Z
L(Du, u, x) dx ≥
(α|Du|q − β) dx
Ω
ΩZ
≥ α
|Du|q dx − β · |Ω|
Gregory T. von Nessi
Ω
(12.15)
Version::31102011
|
Z
=⇒ F (w ) ≥ −β · |Ω| for w ∈ A =⇒ F is bounded from below:
−∞ < M = inf F (w ) < ∞.
w ∈A
Choose a minimizing sequence wk such that F (wk ) → M as k → ∞. Then by (12.15):
Z
α
|Dwk |q dx ≤ F (wk ) + β · |Ω| ≤ M + 1 + β · |Ω|
Ω
Version::31102011
if k is large enough.
Now since wk ∈ A, we know wk = g on ∂Ω. Thus,
||wk ||Lq (Ω)
||wk − g + g||Lq (Ω)
=
≤
||wk − g||Lq (Ω) + ||g||Lq (Ω)
≤
Cp ||Dwk − Dg||Lq (Ω) + ||g||Lq (Ω)
Poincaré
≤
≤
Cp ||Dwk ||Lq (Ω) + Cp ||Dg||Lq (Ω) + ||g||Lq (Ω)
C<∞
∀k
=⇒ wk is a bounded sequence in W 1,q (Ω)
=⇒ wk contains a weakly converging subsequence (not relabeled), wk * w in W 1,q (Ω).
Weak lower semicontinuity result: Theorem 12.5 applies and
M ≤ F (w ) ≤ lim inf F (wk ) → M = inf F (v ).
v ∈A
k→∞
If w ∈ A then
F (w ) = M = inf F (v )
v ∈A
=⇒ w is a minimizer.
To show this, we apply the following theorem:
Theorem 12.6.3 (Mazur’s Theorem): X is a BS,
P and suppose that xn * x in X. Then for
> 0 given, there exists a convex combination kj=1 λj xj of elements xj ∈ {xn }n∈N , λj ≥ 0,
Pk
j=1 λj = 1, such that
x−
k
X
λj x j
< .
j=1
In particular, there exists a sequence of convex combinations that converges strongly to x.
Proof: [Yoshida] Consider the totality MT of all convex combinations of xn . We can
assume 0 ∈ MT by replacing x with x − x1 and xj with xj − x1 .
We go for a proof by contradiction. Suppose the contrary of the theorem, i.e., ||x − u||X > for all u ∈ MT . Now consider the set,
n
o
M = v ∈ X; ||v − u||X ≤ for some u ∈ MT ,
2
which is a convex (balanced and also absorbing) neighborhood of 0 in X and ||x − v ||X > /2
for all v ∈ M.
Recall the Minkowski functional pA for given set A:
pA (x) =
inf
α>0, α−1 x∈A
α.
Now, since x ∈
/ M, there exists u0 ∈ M such that x = β −1 u0 with 0 < β < 1. Thus,
−1
pM (x)β > 1.
Consider now the real linear subspace of X:
and define f1 (x) = γ for x = γ · u0 ∈ X1 . This real linear functional f1 on X1 satisfies
f1 (x) ≤ pM (x). Thus, by the Hahn-Banach Extension theorem, there exists a real linear
extension f of f1 defined on the real linear space X and such that f (x) ≤ pM (x) on X. Since
M is a neighborhood of 0, the Minkowski functional pM (x) is continuous in x. From this we
see that f must also be a continuous real linear functional defined on X. Thus, we see that
sup f (x) ≤ sup f (x) ≤ sup pM (x) = 1 < β −1 = f (β −1 u0 ) = f (x).
x∈MT
x∈M
x∈M
Clearly we see that x can not be a weak accumulation point of M1 , this is a contradiction
since we assumed that xn * x.
Corollary 12.6.4 (): If Y is a strongly closed, convex set in a BS X, then Y is weakly closed.
Proof: xn * x =⇒ exists a sequence of convex combinations yk ∈ Y such that yk → x.
Y closed =⇒ x ∈ Y .
Application to proof of 12.6: A is a convex set. We see that A is closed by the following
argument. If wk → w in W 1,q (Ω), then by the Trace theorem, we have
||wk − w ||Lq (∂Ω) ≤ CTr ||wk − w ||W 1,q (Ω) → 0
since wk = g on ∂Ω =⇒ w = g on ∂Ω.
So, we apply Mazur’s Theorem, to see that A is also weakly closed, i.e., if wk * w in
W 1,q (Ω), then w ∈ A. Thus, this establishes existence by the arguments stated earlier in
the proof.
Uniqueness of this minimizer is true if L is uniformly convex (see Evans for proof).
12.7
A Few Applications to Semilinear Equations
Here we want to show how the technique used in the linear case extends to some nonlinear
cases. First we show that “lower-order terms” are irrelevant for the regulairty. For example
consider
Z
Z
(12.16)
Du · Dφ dx =
a(u) · Dφ dx ∀φ ∈ H01 (Ω)
Gregory T. von Nessi
Ω
Ω
Version::31102011
X1 = {x ∈ X; x = γ · u0 , −∞ < γ < ∞}
12.7 A Few Applications to Semilinear Equations
If
Z
BR (x0 )
385
Dv · Dφ dx = 0 for ∀φ ∈ H01 (Ω) and v = u on ∂BR then
Z
2
Bρ (x0 )
|Du| dx
ρ n Z
Z
|D(u − v )|2 dx
|Du| dx + C
R
BR (x0 )
BR (x0 )
Z
ρ n Z
2
≤ C
|Du| dx + C
|a(u)|2 dx.
R
BR (x0 )
BR (x0 )
≤ C
2
Now if |a(u)| ≤ |u|α , then the last integral is estimated by
Z
α(1−2/n)
Z
|a(u)|2 dx ≤
)BR (x0 )|u|2n/(n−2) dx
Rn(1−α(1−2/n))
Version::31102011
BR (x0 )
so that u is Hölder continuous provided α <
2
n−2 .
The same result holds if we have instead of (12.16)
Z
Z
Z
Du · Dφ dx =
a(u) · Dφ dx +
a(Du) · φ dx
Ω
Ω
Ω
with a(Du) ≤ |Du|1−2/n or a(Du) ≤ |Du|2− and |u| ≤ M = const.
The above idea works also if we study small perturbations of a system with constant coefficients:
For
Z
Ω
i
j
Aαβ
ij (x)Dα u Dβ φ dx = 0
with
∞
Aαβ
ij ∈ L (Ω)
one finds
Z
Bρ (x0 )
because
2
|Du| dx ≤ C
Z
ρ n Z
R
2
BR (x0 )
Dα u i Dβ φj dx =
Ω
Aαβ
ij (v ) − δαβ δij ≤ 0
and
Z
Ω
|Du| dx + C
Z
|δ1 δ2 − A| ·Du 2 dx
|
{z }
BR (x0 )
≤0
i
j
(δαβ δij − Aαβ
ij )Dα u Dβ φ dx.
So we can apply the lemma from the very begining of 9.8 and conclude that Du is Hölder
continuous. Finally we mention the
Theorem 12.7.1 (): Let u ∈ H 1,2 (Ω, RN ) be a minimizer of
Z
F (Du) dx
Ω
and let us assume that
F (tp)
→ |p|2
t2
then u ∈ C 0,µ (Ω).
for t → ∞
Proof: We consider the problem
Z
|DH|2 dx → minimize,
H=0
on ∂BR
BR (x0 )
(H stands for harmonic function!) then
Z
Z
ρ n Z
2
2
|Du| dx ≤ C
|Du| dx +
|Du − DH|2 dx.
R
Bρ (x0 )
BR (x0 )
BR (x0 )
BR (x0 )
D(u − H) · D(u − H) dx
=
Z
Z
Du · D(u − H) dx −
DH · D(u − H) dx
BR (x0 )
Z
Z
2
|Du| dx −
Du · DH dx
BR (x0 )
BR (x0 )
Z
Z
|Du|2 dx −
(D(u − H) + DH)DH dx
BR (x0 )
BR (x0 )
Z
Z
2
|Du| dx −
|DH|2 dx
B (x )
B (x )
Z R 0
Z R 0
Z
2
|Du| dx +
F (Du) dx −
F (DH) dx
BR (x0 )
BR (x0 )
BR (x0 )
|
{z
}
BR (x0 )
=
=
=
=
−
Z
F (Du) dx +
BR (x0 )
Z
BR (x0 )
≤0(u is a minimizer)
F (DH) dx −
Z
BR (x0 )
|DH|2 dx.
By the growth assumption on F and because
2
|p| − F (p) = |p|
2
F (p)
1−
|p|2
we find
Z
BR (x0 )
2
|D(u − H)| dx ≤ Z
BR (x0 )
|Du|2 dx + C · Rn .
So we can apply the lemma at the beginning of the section and conclude that Du is Hölder
continuous.
12.8
Regularity
We discuss in this section the smoothness of minimizers to our energy functionals. This is
generally a quite difficult topic, and so we will make a number of strong simplifying assumptions, most notably that L depends only on p. Thus we henceforth assume our functions I[·]
to have the form
Z
I[w ] :=
L(Dw ) − w f dx,
(12.17)
Ω
for f ∈ L2 (Ω). We will also take q = 2, and suppose as well the growth condition
Gregory T. von Nessi
|Dp L(p)| ≤ C(|p| + 1)
(p ∈ Rn ).
Version::31102011
Now
Z
(12.18)
12.8 Regularity
387
Then any minimizer u ∈ A is a weak solution of the Euler-Lagrange PDE
−
n
X
(Lpi (Du))xi = f
in Ω;
(12.19)
i=1
that is,
Z X
n
Version::31102011
Ω i=1
Lpi (Du)vxi dx =
Z
(12.20)
f v dx
Ω
for all v ∈ H01 (Ω).
12.8.1
DeGiorgi-Nash Theorem
[Moser-Iteration Technique]
Let u be a subsolution for the elliptic operator −Dβ (aαβ Dα ), i.e.
Z
aαβ Dα uDβ φ dx ≤ 0 ∀φ ∈ H01,2 (Ω), φ ≥ 0
Ω
where aαβ ∈ L∞ (Ω) and satisfy an ellipticity-condition. The idea is to use as test-function
φ := (u + )p η 2 with p > 1 and η the cut-off-function we defined on page 20. For the sake of
simplicity we assume that u ≥ 0. Then
Z
Z
αβ
p−1 2
p
a Dα uDβ uu
η dx +
aαβ Dα uu p ηDη dx ≤ 0
BR
Z
ZBR
p−1
p+1
c
|Du|2 u p−1 η 2 dx ≤
|Du|u 2 u 2 η|Dη| dx
p BR
BR
Z
Z
c
2 p−1 2
|Du| u
η dx ≤ 2
u p+1 |Dη|2 dx.
p
BR
BR
As
Du
p+1
2
2
=
p+1
2
2
u p−1 |Du|2
we get
Z
Du
p+1
2
BR
2
2
η dx ≤ c
p+1
p
2 Z
BR
u p+1 |Dη|2 dx
and together with
D(ηu
p+1
2
)
2
≤ c Du
p+1
2
2
η 2 + u p+1 |Dη|2
follows
Z
BR
Du
p+1
2
2
dx
!Z
p+1 2
≤ c 1+
u p+1 |Dη|2 dx (p > 1)
2
BR
Z
1
≤ c(p + 1)2 1 + 2
u p+1 |Dη|2 dx.
p
BR
Gregory T. von Nessi
Chapter 12: Variational Methods
388
By Sobolev’s inequality and the properties of η we finally have:
Z
2/2∗
Z
p+1
1
1
(u 2 η)2∗ dx
u p+1 dx.
≤ c(p + 1)2 1 + 2
2
p
(R
−
ρ)
BR
BR
For p = 1, this is just Caccioppoli’s inequality. Set λ := 2 ∗ /2 =
then
1/λ
Z
Z
(1 + q)2
λq
u dx
≤c
u q dx.
2
(R
−
ρ)
BR
BR
n
n−2
and q := p + 1 > 2,
Now we choose
Ri
and we find that
Z
u qi+1 dx
BRi+1
!
1
λi+1
=
Z
u λqi dx
BRi+1
≤
≤
!1
1
λ λi
!1/λi
1/λi Z
c(1 + qi )2
u qi dx
2−2(i+1) R2
BRi
1/λk Z
1/2
i Y
c(1 + qk )2
2
u dx
.
R2 4−k−1
BR
k=0
We have to estimate the above product:
1/λ
∞ Y
c(1 + qk )2
k
k=0
= exp ln
R2 4−k−1
∞ Y
c(1 + qk )2
R2
k=0
−k−1
4
∞
X
2
(1 + qk )
= exp
ln ĉ
λk
R2−k−1
1/λk !
k=0
= exp(ĉ − n ln R) qk = 2λk ,
thus ∀k
Z
−
u
qk
dx
BRk
!1/qk
∞
X
k=0
λ−j
n
=
2
Z
1/2
2
≤ ĉ − u dx
BR
from which immediately follows that
Z
1/2
2
sup u(x) ≤ ĉ − |u| dx
.
x∈BR/2
BR
Remark: Instead of 2, one can take any expoenent p¿0 in the above formulas.
If we start with a supersolution and u > 0, i.e.
Z
aαβ (x)Dα uDβ φ dx ≥ 0 ∀φ ∈ H01,2 (Ω),
Gregory T. von Nessi
Ω
φ≥0
!
Version::31102011
:= 2λi = λqi−1 (q0 = 2)
R
R
:=
+ i+1 (R0 = R)
2
2
qi
12.8 Regularity
389
then we can take p < −1. Exactly as before we then get for λ :=
Z
u
λq
dx
Bρ
!1/λ
≤ c(1 + q ) 1 +
2
As |q − 1| > 1 we have
Z
λq
u
dx
Bρ
Version::31102011
Now we set Ri :=
R
2
+
R
2i+1
!1/λ
1
(q − 1)2
(1 + q)2
≤c
(R − ρ)2
2∗
2
1
(R − ρ)2
Z
> 1 and q := p + 1 < 0.
Z
u q dx.
BR
u q dx.
BR
as before and qi := q0 λi but q0 < 0. Then follows
Z
1/q0
q0
inf u(x) ≤ c − u dx
x∈BR/2
(q0 < 0).
BR
Up to now we have the following two estimates for a positive solution u: For any p > 0 and
any q < 0 holds:
Z
1/p
p
sup u(x) ≤ k1 − u dx
x∈BR/2
BR
x∈BR/2
BR
Z
1/q
q
inf u(x) ≥ k2 − u dx
,
for some constants k1 and k2 , q < 0. The main point now is that the John-Nirenberg-lemma
allows us to combine the two inequalities to get
Theorem 12.8.1 (Harnack’s Inequality): If u is a solution and u > 0, then
inf u(x) ≥ C sup u(x).
x∈BR/2
Proof: We have
Z
aαβ Dα uDβ φ dx = 0
Ω
x∈BR/2
∀φ ∈ H01,2 (Ω)
φ > 0.
As u > 0, we can choose φ := u1 η 2 and get
Z
Z
1
1
−
aαβ Dα uDβ u 2 η 2 dx +
aαβ Dα u ηDη dx = 0
u
u
Ω
Ω
thus
Z
Ω
i.e.
|Du|2 2
η dx ≤
u2
Z
BR
Z
Ω
|Dη|2 dx,
|D ln u|2 dx ≤ CRn−2
which implies by the Poincaré’s inequality that ln u ∈ BMO. Now we use the characerization
of BMO by which there exists a constant α, such that for v := e α ln u , we have
Z
−1
Z
1
v dx ≤ C
dx
.
BR
BR v
Gregory T. von Nessi
Chapter 12: Variational Methods
390
Hence (p = 1, q = −1),
1/α
−1/α
Z
Z
K2
α
−a
−1
≥
≥ K2 C
− u dx
inf u ≥ K2 − u dx
sup u.
K1 C BR/2
BR/2
BR
BR
Remark: It follows from Harnack’s inequality that a solution of the equation at the beginning
of this section is Hölder-continuous: M(2R) − u ≥ 0 is a supersolution, thus
M(2R) − m(R) ≤ C(M(2R) − M(R))
u − m(2R) ≥ 0 is also a supersolution, thus
and from this we get
ω(2R) + ω(R) ≤ C(ω(2R) − ω(R))
hence ω(R) ≤
12.8.2
C+1
C−1 ω(2R)
and u is Hölder-continuous.
Second Derivative Estimates
We now intend to show if u ∈ H 1 (Ω) is a weak solution of the nonlinear PDE (12.19), then
2 (Ω). But to establish this, we will need to strengthen our growth condition
in fact u ∈ Hloc
on L. Let us first of all suppose
|D2 L(p)| ≤ C
(p ∈ Rn ).
(12.21)
In addition, let us assume that L is uniformly convex, and so there exists a constant θ > 0
such that
n
X
i,j=1
Lpi pj (p)ξi ξj ≥ θ|ξ|2
(p, ξ ∈ Rn ).
(12.22)
Clearly, this is some sort of nonlinear analogue of our uniform ellipticity condition for linear
PDE. The idea will therefore be to try to utilize, or at least mimic, some of the calculations
from that chapter.
Theorem 12.8.2 (Second derivatives for minimizers): (i) Let u ∈ H 1 (Ω) be a weak
solution of the nonlinar PDE (12.19), where L statisfies (12.21), (12.22). Then
2
u ∈ Hloc
(Ω).
(ii) If in addition u ∈ H01 (Ω) and ∂Ω is C 2 , then
u ∈ H 2 (Ω),
with the estimate
||u||H2 (Ω) ≤ C · ||f ||L2 (Ω)
Proof:
Gregory T. von Nessi
Version::31102011
M(R) − m(2R) ≤ C(m(R) − m(2R))
12.8 Regularity
391
1. We will largely follow the proof of blah blah, the corresponding assertion of local H 2
regularity for solutions of linear second-order elliptic PDE.
Fix any open set V ⊂⊂ Ω and choose then an open set W so that V ⊂⊂ W ⊂⊂ Ω.
Select a smooth cutoff function ζ satisfying
ζ ≡ 1 on V, ζ ≡ 0 in Rn \ W
0≤ζ≤1
Version::31102011
Let |h| > 0 be small, choose k ∈ {1, . . . , n}, and substitute
v := −Dk−h (ζ 2 Dkh u)
into (12.20). We are employing her the notion from the difference quotients section:
u(x + hek ) − u(x)
(x ∈ W ).
h
Z
Z
−h
u · Dk v dx = −
v · Dkh u dx, we deduce
Using the identity
Dkh u(x) =
Ω
n Z
X
Ω
i=1
Ω
Dkh (Lpi (Du))
· (ζ
2
Dkh u)xi
dx = −
Z
Ω
f · Dk−h (ζ 2 Dkh u) dx.
(12.23)
Now
Lpi (Du(x + hek )) − Lpi (Du(x))
h
Z
1 1 d
=
Lp (s · Du(x + hek ) + (1 − s) · Du(x)) ds
h 0 ds i
(uxj (x + hek ) − uxj (x))
n
X
(12.24)
=
aijh (x) · Dkh uxj (x),
Dkh Lpi (Du(x)) =
j=1
for
aijh (x)
:=
Z
0
1
Lpi pj (s · Du(x + hek ) + (1 − s) · Du(x)) dx
(i , j = 1, . . . , n).(12.25)
We substitute (12.24) into (12.23) and perform simplce calculations, to arrive at the
identity:
n Z
X
A1 + A2 :=
ζ 2 aijh Dkh uxj Dkh uxi dx
i,j=1 Ω
+
=
−
Z
Ω
n Z
X
i,j=1 Ω
aijh Dkh uxj Dkh u · 2ζζxi dx
f · Dk−h (ζ 2 Dkh u) dx =: B.
Now the uniform convexity condition (12.22) implies
Z
A1 ≥ θ
ζ 2 |dkh Du|2 dx.
Ω
Gregory T. von Nessi
Chapter 12: Variational Methods
392
Furthermore we see from (12.21) that
Z
|A2 | ≤ C
ζ|Dkh Du| · |Dkh u| dx
W
Z
Z
C
2
h
2
≤ ζ |Dk Du| dx +
|Dh u|2 dx.
W k
W
Furthermore, as in the proof of interior H 2 regularity of elliptic PDE, we have
Z
Z
C
2
h
2
|B| ≤ ζ |Dk Du| dx +
f 2 + |Du|2 dx.
Ω
Ω
Z
Ω
ζ
2
|Dkh Du|2 dx
≤C
Z
W
2
f +
|Dkh u|2 dx
≤C
Z
Ω
f 2 + |Du|2 dx,
the last inequaltiy valid from the properties of difference quotients.
2. Since ζ ≡ 1 on V , we find
Z
Z
2
h
f 2 + |Du|2 dx
|Dk Du| dx ≤ C
Ω
V
for k = 1, . . . , n and all sufficiently small |h| > 0. Consequently prop of diff. quotients
imply Du ∈ H 1 (V ), and so u ∈ H 2 (V ). This is true for each V ⊂⊂ U; thus u ∈
2 (Ω).
Hloc
3. If u ∈ H01 (Ω) is a weak solution of (12.19) and ∂U is C 2 , we can then mimic the
proof of the boundary regularity theorem 4 of elliptic pde to prove u ∈ H 2 (Ω), with
the estimate
||u||H2 (Ω) ≤ C(||f ||L2 (Ω) + ||u||H1 (Ω) );
details are left to the reader. Now from (12.22) follows the inequality
(DL(p) − DL(0)) · p ≥ θ|p|2
(p ∈ Rn ).
If we then put v = u in (12.20), we can employ this estimate to derive the bound
||u||H1 (Ω) ≤ C||f ||L2 (Ω) ,
and so finish the proof.
12.8.3
Remarks on Higher Regularity
We would next like to show that if L is infinitely differentiable, then so is u. By analogy with
the regularity theory developed for second-order linear elliptic PDE in 6.3, it may seem nat2 estimate from teh previous section to obtain further estimates
ural to try to extend the Hloc
k (Ω) for k = 3, 4, . . .
in higher order Sobolev spaces Hloc
Gregory T. von Nessi
Version::31102011
We select = 4θ , to deduce from the foregoing bounds on A1 , A2 , B, the estimate
12.8 Regularity
393
Version::31102011
This mehtod will not work for the nonlinear PDE (12.19) however. The reason is this. For
linear equations we could, roughly speaking, differntiate the equation many times and still obtain a linear PDE of the same general form as that we began with. See for instance the proof
of theorem 2 6.3.1. But if we differentiate a nonlinear differential equaiton many times, the
resulting increasingly complicated expressions quickly becvome impossible to handle. Much
deeper ideas are called for, the full development of which is beyond the scope of this book.
We will nevertheless at least outline the basic plan.
To start with, choose a test function w ∈ Cc∞ (Ω), select k ∈ {1, . . . , n}, and set v = −wxk in
2 (Ω),
the identity (12.20), where for simplicity we now take f ≡ 0. Since we now know u ∈ Hloc
we can integrate by parts to find
Z X
n
(12.26)
Lpi pj (Du)uxk xj wxi dx = 0.
Ω i,j=1
Next write
ũ := uxk
(12.27)
and
aij := Lpi pj (Du)
(i , j = 1, . . . , n).
(12.28)
Fix also any V ⊂⊂ Ω. Then after an approximation we find from (12.26)-(12.28) that
Z X
n
aij (x)ũxj wxi dx = 0
(12.29)
Ω i,j=1
for all w ∈ H01 (V ). This is to say that ũ ∈ H 1 (V ) is a weak solution of the linear, second
order elliptic PDE
−
n
X
(aij ũxj )xi = 0
(12.30)
in V.
i,j=1
But we can not just apply our regularity theory from 6.3 to conclude from (12.30) that ũ is
smooth, the reason being that we can deduce from (12.21) and (12.28) only that
aij ∈ L∞ (V )
(i , j = 1, . . . , n).
However DeGiorgi’s theorem asserts that any weak solution of (12.30) must in fact be locally
Hölder continuous for some exponent γ > 0. Thus if W ⊂⊂ V we have ũ ∈ C 0,γ (W ), and
so
1,γ
u ∈ Cloc
(Ω).
0,γ
Return to the definition (12.28). If L is smooth, we now know aij ∈ Cloc
(Ω) (i , j = 1, . . . , n).
Then (12.19) and the Schauder estimates from Chapter 8 assert that in fact
2,γ
u ∈ Cloc
(Ω).
1,γ
But then aij ∈ Cloc
(Ω); and so another version of Schauder’s estimate implies
3,γ
u ∈ Cloc
(Ω).
k,γ
We can continue this so called “bootstrap” argument, eventually to deduce u is Cloc
(Ω) for
∞
k = 1, . . . , and so u ∈ C (Ω).
Gregory T. von Nessi
Chapter 12: Variational Methods
394
12.9
Exercises
12.1: [Qualifying exam 08/98] Let Ω ⊂ Rn be open (perhaps not bounded, nor with smooth
boundary). Show that if there exists a function u ∈ C 2 (Ω) vanishing on ∂Ω for which the
quotient
Z
|Du|2 dx
I(u) = ΩZ
u 2 dx
Ω
12.2: [Qualifying exam 08/02] Let Ω be a bounded, open set in Rn with smooth boundary Γ .
Suppose Γ = Γ0 ∪ Γ1 , where Γ0 and Γ1 are nonempty, disjoint, smooth (n − 1)-dimensional
surfaces in Rn . Suppose that f ∈ L2 (Ω).
a) Find the weak formulation of the boundary

 −∆u = f
Du · ν = 0

u=0
value problem
in Ω,
on Γ1 ,
on Γ0 .
b) Prove that for f ∈ L2 (Ω) there is a unique weak solution u ∈ H 1 (Ω).
12.3: [Qualifying exam 08/02] Let Ω be a bounded domain in the plane with smooth boundary. Let f be a positive, strictly convex function over the reals. Let w ∈ L2 (Ω). Show that
there exists a unique u ∈ H 1 (Ω) that is a weak solution to the equation
−∆u + f 0 (u) = w in Ω,
u = 0 on ∂Ω.
Do you need additional assumptions on f ?
Gregory T. von Nessi
Version::31102011
reaches its infimum λ, then u is an eigenfunction for the eigenvalue λ, so that ∆u + λu = 0
in Ω.
Version::31102011
Part III
Modern Methods for
Fully Nonlinear Elliptic
PDE
Chapter 13: Non-Variational Methods in Nonlinear PDE
396
“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman
Contents
13.1 Viscosity Solutions
402
13.2 Alexandrov-Bakel’man Maximum Principle
13.3 Local Estimates for Linear Equations
13.4 Hölder Estimates for Second Derivatives
13.5 Existence Theorems
13.6 Applications
13.7 Exercises
13.1
406
411
419
428
438
441
Viscosity Solutions
The problem with the Perron definition of subharmonic and superharmonic functions is that
they relied on a classical notion of a function being harmonic. For a general differential
operator, call it F , there might not be an analogy of a harmonic function as the equation
F [u] = 0 may or may not have a solution in the classical sense. This is opposed to solutions
of F [u] ≥ 0 of F [u] ≤ 0, these solutions can readily be constructed for differential operators.
With this in mind, we make the following alternative definition:
Definition 13.1.1: A function u ∈ C 0 (Ω) is subharmonic(superharmonic) if for every ball
B ⊂⊂ Ω and superharmonic(subharmonic) function v ∈ C 2 (B) ∩ C 0 (B) (i.e. ∆v ≤ (≥) 0)
which is ≥ u on ∂B one has v ≥ (≤) u in B.
It’s not hard to understand the above definition is equivalent to the Perron definition.
Remarks:
Gregory T. von Nessi
Version::31102011
13 Non-Variational Methods in Nonlinear PDE
13.1 Viscosity Solutions
397
i.) By replacing ∆ by other operators, denoted F , we then get general definitions of the
statement “F [u] ≥ (≤) 0” for u ∈ C 0 (Ω).
ii.) One can define a solution of “F [u] = 0” as a function satisfying both F [u] ≥ 0 and
F [u] ≤ 0.
iii.) The solution of F [u] = 0 in this sense are referred to as viscosity solutions.
Version::31102011
We can still further generalize the above definition still further:
Definition 13.1.2: A function u ∈ C 0 (Ω) is subharmonic(superharmonic) (with respect to
∆) if for every point y ∈ Ω and function v ∈ C 2 (Ω) such that u − v is maximized(minimized)
at y , we have ∆v (y ) ≥ (≤) 0.
Proposition 13.1.3 (): Definitions (13.1.1) and (13.1.2) are equivalent.
Proof: (13.1.1) =⇒ (13.1.2): Let u be subharmonic. Suppose now there exists y ∈ Ω such
that ∆v (y ) < 0. Since v ∈ C 2 (Ω) there exists a ball B containing y such that ∆v (y ) ≤ 0 on
B. Now, define v = v + |x − y |4 which implies that y corresponds to a strict maximum of
u − v . With this we know there exists a δ > 0 such that v − δ > u on ∂B with v − δ < u
on some set Eδ ⊂ B; a contradiction.
vǫ − δ
vǫ
u
u
Eδ
Eδ
B
B
v
u
(13.1.2) =⇒ (13.1.1): Suppose that the maximum of u − v is attained at y ∈ Ω implies that
∆v (y ) ≥ 0 for any v ∈ C 2 (Ω). Next, suppose that u is not subharmonic; i.e. there exists a
ball B and superharmonic v such that v ≥ u on ∂B but v ≤ u in a set Eδ ⊂ B. Another way
to put this is to say there exists a point y ∈ B where u − v attains a positive max. Now,
define v := v + (p 2 − |x − y |2 ); clearly ∆v (y ) < 0. For small enough u − v will have
a positive max in B (the constant p can be adjusted so that v ≥ u on ∂Ω). This yields a
contradiction as ∆v (y ) < 0 by construction.
Gregory T. von Nessi
398
Chapter 13: Non-Variational Methods in Nonlinear PDE
Now, we can make the final generalization of our definition. First, we redefine the idea of
ellipticity. Consider the operator
F [u](x) := F (x, u, Du, D2 u),
(13.1)
Definition 13.1.4: An operator F ∈ C 0 (Ω × R × Rn × Sn ) is degenerate elliptic if
∂F
≥0
∂r ij
and
∂F
≤ 0,
∂z
where the arguments of F are understood as F (x, z , p, [r ij ]).
Finally, we make our most general definition for subharmonic and superharmonic functions.
Definition 13.1.5: If u is upper(lower)-semicontinuous on Ω, we say F [u] ≥ (≤) 0 in the
viscosity sense if for every y ∈ Ω and v ∈ C 2 (Ω) such that u − v is maximized(minimized)
at y , one has F [v ](y ) ≥ (≤) 0.
Remark: A viscosity solution is a function that satisfies both F [u] ≥ 0 and F [u] ≤ 0 in the
viscosity sense.
This final definition for viscosity solutions even has meaning in first order equations:
Gregory T. von Nessi
Version::31102011
where F ∈ C 0 (Ω × R × Rn × Sn ). Note that Sn represents the space of all symmetric n × n
matrices.
13.1 Viscosity Solutions
399
Example 10: Consider |Du|2 = 1 in R. It is obvious that generalized solutions will have a
“saw tooth” form and are not unique for specified boundary values. That being understood,
it is easily verified that the viscosity solution to this equation is unique.
C 2 viscosity violator function
Version::31102011
C 0 solutions
y
viscosity solution
To see this, the above figure illustrates a non-viscosity solution with an accompanying C 2
function which violates the viscosity definition. At point y , it is clear that u − v is maximized
but that ∆v (y ) < 0. Basically, any “peak” in a C 0 solution will violate the viscosity criterion with a similar C 2 example. The only solution with no “peaks” is the viscosity solution
depicted in the figure.
Aside: A nice way to derive this viscosity solution is to consider solutions u of the equation:
F [u] := ∆u + |Du |2 − 1 = 0.
Clearly, F → F , where F [u] := |Du|2 − 1 as → 0. It is not clear whether or not u
converges. Even if u converged to some limit, say ũ, it’s not clear whether or not F [ũ] = 0
in any sense. Fortunately, in this particular example, all this can be ascertained through direct
calculation. Since we are in R, we have
F [u] = · uxx + |ux |2 − 1 = 0.
To solve this ODE take v := ux ; this makes the ODE separable. After integrating, one
gets
u = · ln
e
2x
2
+1
e− + 1
!
− x,
where the constants of integration have been set so u(−1) = u(1) = 1. The following figure
is a graph of these functions.
It is graphically clear that lim→0 u = |x|; i.e. we have u converging (uniformly) to
the viscosity solution of |Du|2 = 1. This limit is easy to verify algebraically via the use of
L’Hopital’s rule.
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
400
1
|x|
O
−1
13.2
1
2
1
4
1
8
1
Alexandrov-Bakel’man Maximum Principle
Let u ∈ C 0 (Ω) and consider the graph u given by xn+1 = u(x) as well as the hyperplane
which contacts the graph from below. For y ∈ Ω, set
χu (y ) := {p ∈ Rn u(x) ≥ u(y ) + (x − y ) · p ∀x ∈ Ω}.
Define the lower contact set Γ − or Γu− by
Γ − := p ∈ Rn u(x) ≥ u(y ) + (x − y ) · p ∀x ∈ Ω
=
for some p = p(y ) ∈ Rn
{points of convexity of graph of u}.
Proposition 13.2.1 (): Let u ∈ C 0 (Ω). We have
i.) if u is differentiable at y ∈ Γ − then χu (y ) = Du(y ),
ii.) if u is convex in Ω then Γ − = Ω and χu is a subgradient of u,
iii.) if u is twice differentiable at y ∈ Γ − then D2 u(y ) ≥ 0
Investigate the relation between χu , max u, and min u. Let p0 ∈
/ χu (Ω) :=
By definition, there exists a point y ∈ ∂Ω such that
u(x) ≥ u(y ) + p0 · (x − y ) ∀x ∈ Ω.
S
y ∈Ω χu (y ).
(13.2)
You can see 13.2) by considering the hyperplane of the type xn+1 = p0 · x + C, first
placing far from below the graph u and tending it upward.(13.2) yields an estimate of u; i.e.,
u ≥ min u − |p0 |d,
(13.3)
∂Ω
where d = Di am (Ω) as before.
The relation with differential operator is as follows. Suppose u ∈ C 2 (Ω), then χu (y ) =
Du(y ) for y ∈ Γ − and the Jacobian matrix of χu is D2 u ≥ 0 in Γ − . Thus, the n-dimensional
Lebesgue measure of the normal image of Ω is given by
Z
−
−
|χ(Ω)| = |χ(Γ )| = |Du(Γ )| ≤
| det D2 u| dx
(13.4)
Gregory T. von Nessi
Γ−
Version::31102011
ǫ=
ǫ=
ǫ=
13.2 Alexandrov-Bakel’man Maximum Principle
401
Version::31102011
u
p0 · (x − y )
x
Ω
since D2 u ≥ 0 on Γ − . (13.4) can be realised as a consequence of the classical change of
variables formula by considering, for positive , the mapping χ = χ + I, whose Jacobian
matrix D2 u + I is then strictly positive in the neighbourhood of Γ − , and by subsequently
letting → 0. If we can show that χ is 1-to-1, then equality will in fact hold in (13.4). To
show this, suppose not, i.e. there exists x 6= y such that χ (x) = χ (y ). In this case we
have
χ(x) − χ(y ) = (y − x).
But, we also have (after using abusive notation), that
u(x) ≥ u(y ) + (x − y ) · χ(y )
u(y ) ≥ u(x) + (y − x) · χ(x)
Adding these we have
0 ≥ (χ(x) − χ(y ))(y − x) = (y − x)2 ≥ 0,
which leads us to conclude y = x, a contradiction.
More generally, if h ∈ C 0 (Ω) is positive on Rn , then we have
Z
χu (Ω)
dp
=
h(p)
Z
Γ−
det D2 u
dx.
h(Du)
(13.5)
Remark: It is trivial to extend this to u ∈ (C 2 (Ω) ∩ C 0 (Ω)) ∪ W 2,n (Ω) by using approximating functions and Lebesgue dominated convergence. Also note that even thous
C 2 (Ω) ⊂ W 2,n (Ω), C 2 (Ω) ∩ C 0 (Ω) * W 2,n (Ω). Not sure if further generalization is possible.
Theorem 13.2.2 (): Suppose u ∈ C 2 (Ω) ∩ C 0 (Ω) satisfies
det D2 u ≤ f (x)h(Du) on Γ −
(13.6)
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
402
for some positive function h ∈ C 2 (Rn ). Assume moreover,
Z
Z
dp
f dx <
.
h(p)
−
n
Γ
R
Then there holds an estimate of the type
min u ≥ min u − Cd,
Ω
∂Ω
where C denotes some constant independent of u.
Thus, for R0 > R, there exists p ∈
/ χu (Ω) such that |p| < R0 . So (13.3) now implies that
min u ≥ min u − Rd,
Ω
∂Ω
which completes the proof.
13.2.1
Applications
13.2.1.1
Linear equations
First we consider the case h ≡ 1 in theorem 13.2.2. The result is simply stated as follows:
The differential inequality
det D2 u ≤ f in Γ −
implies an estimate
min u ≥ min −
Ω
∂Ω
1
ωn
Z
f dx
Γ−
1/n
· d.
Next, suppose u ∈ C 2 (Ω) ∩ C 0 (Ω) satisfies
Lu := aij Dij u ≤ g in Ω,
where A := [aij ] > 0 and g a given bounded function. We derive
det A · det D2 u ≤
|g|
n
n
on Γ − ,
by virtue of an elementary matrix inequality for any symmetric n × n matrices A, B ≥ 0:
T r (AB) n
det A · det B ≤
,
n
whose proof is left as an exercise.
Gregory T. von Nessi
Version::31102011
Proof. Let BR (0) be a ball of radius R centered at 0 such that
Z
Z
Z
Z
dp
dp
dp
≤
f dx =
<
.
χu (Ω) h(p)
Γ−
BR (0) h(p)
Rn h(p)
13.2 Alexandrov-Bakel’man Maximum Principle
403
As det [AB] = det [A] · det [B], we really only need to show that det [A] ≤ T nr A
the matrices are non-singular we assume the matrix A is diagonalized wlog. So,
!
!n n
n
n
Y
X
ln λi
1X
TrA n
det [A] =
λi = exp n ·
≤
λi
=
n
n
n
i=1
i=1
n
. Since
i=1
The inequality of the of the above comes from the convexity of exponential functions.
Therefore it follows that
min u ≥ min u −
Version::31102011
Ω
∂Ω
d
g
(det A)1/n
1/n
nωn
.
Ln (Γ − )
We can find an estimate for maxΩ u similarly; if Lu ≥ g in Ω, thenwe have
max u ≤ max u −
Ω
∂Ω
d
g
(det A)1/n
1/n
nωn
,
Ln (Γ + )
−
where Γu+ = Γ−u
.
13.2.1.2
Monge-Ampére Equation
Suppose a convex function u ∈ C 2 (Ω) solves
det D2 u ≤ f (x) · h(Du),
for some functions f , h which are assumed to satisfy
Z
Z
dp
f (x) dx <
h(p)
n
Ω
R
Then we have an estimate
inf u ≥ inf u − Rd,
Ω
∂Ω
BR (0)
dp
=
h(p)
where R is defined by
Z
Z
f (x) dx.
Ω
This result is a consequence of theorem 13.2.2. If Ω is convex, then we infer supΩ u ≤
sup∂Ω u, taking into account the convexity of u.
Remark: If det D2 u = f (x) · h(Du) in Ω, then
Z
Z
det D2 u
=
f (x) dx.
h(Du)
Ω
Ω
Thus, the inequality
Z
Ω
f (x) dx ≤
Z
Rn
dp
h(p)
is a necessary condition for solvability. It is also sufficient, by the work of Urbas. The
inequality is strict if Du is bounded.
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
404
13.2.1.3
General linear equations
Consider a solution u ∈ C 2 (Ω) ∩ C 0 (Ω) of
Lu := aij Dij u + bi Di u ≤ f .
We rewrite this in the form
aij Dij u ≤ −bi Di u + f .
Define
n
n
for µ :=
f
D1/n
Ln (Γ + )
6= 0 and D := det [A]. We infer the following estimate as before:
f
D1/n
max u ≤ max u + C
Ω
∂Ω
where C depedends on n, d, and
13.2.1.4
f
D1/n
Ln (Γ + )
,
Ln (Γ + )
.
Oblique boundary value problems
The above estimates can all be extended to boundary conditions of the form
β · Du + γ · u = g on ∂Ω
where β · ν > 0 on ∂Ω, γ > 0 on Ω and g is a given function. Here ν denotes the outer
normal on ∂Ω. We obtain the more general estimate
g
|β|
inf u ≥ inf − C sup
+d
Ω
∂Ω γ
∂Ω γ
where d = Di am (Ω).
13.3
13.3.1
Local Estimates for Linear Equations
Preliminaries
Consider the linear operator Lu := aij Dij u with strict ellipticity condition:
λI ≤ [aij ] ≤ ΛI for0 < λ ≤ Λ, λ, Λ ∈ R.
To see the reason why we discuss the linear operator Lu, we deal ith the following nonlinear
equation of simple case:
F (D2 u) = 0.
Differentiate (13.8) with respect to a non-zero vector γ, to obtain
Gregory T. von Nessi
∂F
(D2 u)Dijγ u = 0.
∂rij
(13.7)
Version::31102011
h(p) = (|p| n−1 + µ n−1 )n−1
13.3 Local Estimates for Linear Equations
If we set L :=
∂F
∂rij Dij ,
405
then we have LDγ u = 0. Differentiate again and you find
∂F
∂2F
Dijγγ u +
Dijγ u · Dklγ u = 0.
∂rij
∂rij ∂rkl
Version::31102011
If F is concave with respect to r , which most of the important examples satisfy, then we
derive
∂F
(D2 u)Dij Dγγ u ≥ 0,
∂rij
i .e., L(Dγγ u) ≥ 0.
This is the linear differential inequality and justifies our investigation of the linear operator
Lu = aij Dij u.
13.3.2
Local Maximum Principle
Theorem 13.3.1 (): Let Ω ⊂ Rn be a bounded open domain and let BR := BR (y ) ⊂ Ω be
a ball. Suppose u ∈ C 2 (Ω) fulfuills Lu ≥ f in Ω for some given function f . Then for any
0 < σ < 1, p > 0, we have
sup u ≤ C
BσR
(
1
Rn
Z
BR
+ p
(u ) dx
1/p
R
+ ||f ||Ln (BR )
λ
)
,
where C depends on n, p, σ and Λ/λ. u + = max{u, 0} and BσR (y ) denotes the concentric
subball.
Proof. We may take R = 1 in view of the scaling x → x/R, f → R2 f . Let
η=
(1 − |x|2 )β , β > 1 in B1
0
in Ω \ B1 .
It is easy to verify that
|Dη| ≤ Cη 1−1/β , |D2 η| ≤ Cη 1−2/β ,
(13.8)
for some constant C.
Introduce the test function v := η(u + )2 and compute
Lv
= aij Dij η(u + )2 + 2aij Di ηDj ((u + )2 ) + ηaij Dij ((u + )2 )
≥ −CΛη 1−2/β (u + )2 + 4u + aij Di ηDj u +
+2ηaij Di u + · Dj u + + 2ηu + aij Dij u +
≥ −CΛη 1−2/β (u + )2 + 2ηu + f .
here the use of (13.8) and the next inequality was made:
|aij Di ηDj u + | ≤ (aij Di ηDj η)1/2 (aij Di u + Dj u + )1/2 .
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
406
Apply the Alexandrov maximum principle 2.3.1 to find
sup v
B1
C(n)
||Λη 1−2/β (u + )2 + ηu + f ||Ln (B1 )
λ
C(n)
≤
||Λv 1−2/β (u + )4/β + v 1/2 η 1/2 f ||Ln (B1 )
λ
Λ 1−2/β + 4/β
1 1/2 1/2
||v
(u ) ||Ln (B1 ) + ||v η f ||Ln (B1 )
≤ C
λ
λ
( )
1/2
1−2/β
supB1 v
Λ
+ 4/β
≤ C
sup v
||(u ) ||Ln (B1 ) +
||f ||Ln (B1 ) .
λ B1
λ
≤
from which we conclude, invoking the Young inequality, that
)
(Z
1/p
Λ
1
, n, p ·
(u + )p dx
+ ||f ||Ln (B1 ) .
sup v 1/2 ≤ C
λ
λ
B1
B1
This completes the proof.
13.3.3
Weak Harnack inequality
Lemma 13.3.2 (): Let u ∈ C 2 (Ω) satisfies Lu ≤ f , u ≥ 0 in BR := BR (y ) ⊂ Ω, for some
function f . For 0 < σ < τ < 1, we have
R
inf u ≤ C inf u + ||f ||Ln (BR ) ,
λ
BσR
Bτ R
where C = C(n, σ, τ, Λ/λ).
Proof. Set for k > 0
w :=
|x/R|−k − 1
inf u.
σ −k − 1 BσR
There holds w ≤ u on ∂BσR and ∂BR . Now, compute
L|x|−k
= aij Dij |x|−k
= aij (−kδij |x|−k−2 + k(k + 2)xi xj |x|−k−4 )
aij xi xj
−k−2
= −k|x|
T r A − (k + 2)
|x|2
≥ −k|x|−k−2 (T r A − (k + 2)λ)
≥ 0,
if we choose k >
T rA
λ
− 2. Thus
Lw
Gregory T. von Nessi
≥ 0,
L(u − w ) ≤ f .
Version::31102011
Choosing β = 4n/p, we deduce
( )
1/2−2/β Z
1/n
Λ
1
sup v 1/2 ≤ C
sup v
(u + )p dx
+ ||f ||Ln (B1 ) ,
λ B1
λ
B1
B1
13.3 Local Estimates for Linear Equations
407
Applying the Alexandrov maximum principle again, we obtain
u≥w−
CR
||f ||Ln (BR ) .
λ
Take the infimum and you get the desired result.
Version::31102011
Theorem 13.3.3 (): Let u ∈ C 2 (Ω) satisfies Lu ≤ f , u ≥ 0 in BR := BR (y ) ⊂ Ω, for some
function f . Then for any 0 < σ,τ < 1, there exists p > 0 such that
1
Rn
Z
p
u dx
BσR
1/p
R
≤ C inf u + ||f ||Ln (BR ) ,
λ
Bτ R
where p, C depend on n, σ, τ and Λ/λ.
Proof. WLOG thake y = 0 so that BR (0) = BR . Consider
η(x) = 1 −
|x|2
;
R2
then η − u ≤ 0 on ∂BR with L(η − u) = Lη − Lu ≤ −2 TRr2A − g the above lemma now
indicates
η−u ≤
d
1/n
nωn
d
−2
TrA
det [A]
≤
1/n
λnωn
−2
≤
Λ d
λ nωn1/n
−
1/n
·
TrA
−g
R2
2
g
−
R2 T r A
R2
−
g
det [A]1/n
Ln ({x∈BR | η−u≥0})
Ln ({x∈BR | η−u≥0})
Ln ({x∈BR | η−u≥0})
Λ
1
≤ C |{x ∈ BR | η ≥ u}| ≤ η,
λ
2
if
|{u ≤ 1 ∩ BR }|
|BR |
(13.9)
is suffciently small. Thus, we have u ≥ C on BσR and apply the previous lemma to ascertain
u ≥ C on Bτ R . Remarks:
i.) This ascertion contains all the ’information’ for the PDE.
ii.) We essentially shrink the ball after the calculation and 3.5 to then expand the ball back
out (perterbing by measure).
At this point, in order to simplify a later measure theoretic argument, we change our
basis domain from balls to cubes. Also take f = 0 and drop the bar for simplicity.
Let K := KR (y ) be an open cube, parallel to the coordinate axis with center y and side
length 2R. The corresponding estimate of (13.9) is as follows: if
|{u(x) ≥ 1} ∩ K| ≥ θ|K| and K3R ⊂ B := B1 ,
(13.10)
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
408
then u ≥ γ on KR . Now consider any fixed cube K0 in Rn with Γ ⊂ K0 a measurable subset
and 0 < θ < 1. Defining
Γ̃ :=
[
{K3R (y ) ∩ K0 |KR (y ) ∩ Γ | ≥ θ|KR (y )|},
we have either Γ̃ = K0 or |Γ̃ | ≥ θ−1 |Γ | by the cube decomposition at the beginning of the
notes. With this in mind, we make the following claim:
inf u ≥ C
K
|Γ |
|K|
ln γ/ ln θ
.
(13.11)
u ≥ γ m+0 = γ m on K.
The representation of γ 0 = 1 may seem absurd at this point, but it will clarify the final
inductive step immensely. Now, to proceed to m + 1, we first note that given the inductive
hypothesis Γ also satisfies the weaker condition |Γ | ≥ θm+1 |K|. Recalling the m = 1 case,
we also have from (13.10) that
u ≥ γ on Γ̃ .
Now cube decomposition indicates that |Γ̃ | ≥ θ−1 |Γ | ≥ θm |K|. Thus, we ascertain
u ≥ γ m+1 on K.
This argument is tricky in a sense that γ is not only apart of the inductive result, it’s a
paramater of Γ and Γ̃ (hence the γ 0 ) notation. Once this is clear, we get the claim by
picking γ and C correctly. Returning to the ball, we infer that
inf u ≥ C
B
|Γ ∩ B|
|B|
κ
,
where C and κ are positive constants depending only on n and Λ/λ. Now define
Γt := {x ∈ B | u(x) ≥ t}.
Replacing u by u/t in (13.13) yields
inf u ≥ Ct
B
|Γ ∩ B|
|B|
κ
.
To obtain the weak Harnack inequality, we normalize inf B u = 1 and hence
Gregory T. von Nessi
|Γt | ≤ C · t −1/k .
(13.12)
Version::31102011
Proof of Claim We use induction. So if we take Γ := {x ∈ K | u ≥ γ 0 = 1}, then we have
u ≥ γ on Γ̃ with |Γ̃ | ≥ θ−1 |Γ |. This is the m = 1 step and is the result of the previous
argument in the proof. Now, we assume the condition |Γ | > θm |K|, m ∈ N, implies the
estimate
13.3 Local Estimates for Linear Equations
409
Choosing p = (2κ)−1 , we have
Z
p
u dx
=
B
=
≤
Z Z
u(x)
pt p−1 dtdx
ZB∞ 0
Z0 ∞
0
pt p−1 · |{x ∈ B | u ≥ t}| dt
pt p−1 |Γt | dt
Version::31102011
≤ C.
the second inequality is the coarea formula from geometric measure theory. Intuitivly, this
can be thought of as slicing the B × [0, u(x)] by hyperplanes w (x, t) = t, and integrating
these. Thus, we have obtained our result for B. The completion of the proof follows from
a covering argument.
Z
Z Z
g(x) · |Jm f (x)| dx =
g(x) dHn−m (x)dy
Rm
Ω
f −1 (y )
Theorems (13.3.1) and (13.3.3) easily give
Theorem 13.3.4 (): Suppose Lu = f , u ≥ 0 in BR := BR (y ) ⊂ Ω. Then for any 0 < σ < 1,
we have
R
sup u ≤ C inf u + ||f ||Ln (B) ,
λ
BR
BσR
where C = C(n, σ, Λ/λ).
13.3.4
Hölder Estimates
Lemma 13.3.5 (): Suppose a non-decreasing function ω on [0, R0 ] satisfies
ω(σR) ≤ σ · ω(R) + Rδ
for all R ≤ R0 . Here σ < 1, δ > 0 are given constants. Then we have
α
R
ω(R) ≤ C
w (R0 ),
R0
where C, α depend on σ and δ.
Proof. We may take δ < 1. Replace ω by
ω(R) := ω(R) − ARδ ,
and we obtain ω(σR) ≤ σ · ω(R), selecting A suitably. Thus, we have by iteration
ω(σ ν R) ≤ σ ν · ω(R0 ).
Choosing ν such that σ ν+1 R0 < R ≤ σ ν R0 gives the result upon noting that ω is nondecreasing.
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
410
Now we present the next pointwise estimate, which is due to Krylov ans Safonov.
Theorem 13.3.6 (): Suppose u satsifies Lu = f in Ω ⊂ Rn for some f ∈ Ln (Ω). Let
BR := BR (y ) ⊂ Ω be a ball and 0 < σ < 1. Then we have
R
α
osc u + ||f ||Ln (BR ) ,
(13.13)
osc u ≤ Cσ
λ
BR
BσR
where α = α(n, Λ/λ) > 0 and C = C(n, Λ/λ).
This is analogous to the famous De Giorgi-Nash estimate for the divergence equation
Di (aij (x)Dj u) = u.
MR := sup u,
BR
mR := inf u.
BR
There holds
osc u = MR − mr =: ω(R).
BR
Applying the weak Harnack inequality to MR − u and mr − u respectively, we obtain
1/p
Z
1
R
p
(MR − u) dx
≤ C MR − MσR + ||f ||Ln (BR )
Rn BσR
λ
1/p
Z
1
R
p
(u − mR ) dx
≤ C mσR − MR + ||f ||Ln (BR ) .
Rn BσR
λ
Add these two inequalities and you get
ω(R) ≤ C{ω(R) − ω(σR) + R},
(13.14)
where C depends on ||f ||Ln (BR ) . Here we have used the inequality:
(Z
1/p Z
1/p )
Z
1/p
p
p
p
1/p
|f + g| dx
≤2
|f | dx
+
|g| dx
.
(13.14) can be rewritten in the form
ω(σR) ≤ (1 − 1/C)ω(R) + R.
Applying lemma 13.3.5 now yields the desired result.
13.3.5
Boundary Gradient Estimates
Now we turn our attention to the boundary gradient estimate, due to Krylov. The situation
+
is as follows: Let BR
be a half ball of radius R centered at the origin:
+
BR
:= BR (0) ∩ {xn > 0}
Suppose u satisfies
Then we have
Gregory T. von Nessi
Lu := aij Dij u = f
u=0
+
,
in BR
on BR (0) ∩ {xn = 0}.
(13.15)
Version::31102011
Proof. Take BR = BR (y ) ⊂ Ω as in the theorem and define
13.3 Local Estimates for Linear Equations
411
Theorem 13.3.7 (): Let u fulfil (13.15) and let 0 < σ < 1. Then there holds
(
)
u
R
u
α
osc
≤ Cσ
osc
+ sup |f | .
+ x
+ x
λ B+
BσR
BR
n
n
R
Version::31102011
where α > 0,C > 0 depend on n and Λ/λ.
From this theorem, we infer the Hölder estimate for the gradient on the boundary xn = 0.
In fact
)
(
R
osc Du(x 0 , 0) ≤ Cσ α sup |Du| + sup |f |
λ B+
|x 0 |<σR
B+
R
R
holds, where x 0 = {x1 , . . . , xn−1 }. Proof: Define v :=
+
in BR
. Write for some δ > 0,
u
xn .
We may assume that v is bounded
BR,δ := {|x 0 | < R, 0 < xn < δR}.
+
Under the assumption that u ≥ 0 in BR
, we want to prove the next claim.
Claim: There exists a δ = δ(n, Λ/λ) such that
!
R
inf
v ≤ 2 inf v + sup |f |
λ B+
B R ,δ
|x 0 | < R
2
R
xn = δR
for any R ≤ R0 .
Proof of Claim: We may normalize as before to take λ = 1, R = 1 and
inf
v = 1.
|x 0 | < R
xn = δR
Define the comparison function w in B1,δ as
w (x) :=
xn − δ
1 − |x | + (1 + sup |f |) √
δ
0 2
xn .
It is easy to check that w ≤ u on ∂B1,δ .
Compute
Lw
n−1
X
ann
= −2
aii xn + 2(1 + sup |f |) √ − 2
ain xi
δ
i=1
i=1
≥ f
n−1
X
in B1,δ
if we take δ sufficiently small. Thus the maximum principle implies that on B1/2,δ
v
xn − δ
≥ 1 − |x 0 |2 + (1 + sup |f |) √
δ
1
≥
− sup |f |
2
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
412
again for sufficiently small δ. Normalizing back we obtain the claim.
To resume the proof of the theorem, we apply the Harnack inequality in the region |x 0 | < R, δR
2 < xn <
to obtain


δR
2
v
sup
|x 0 | < R
< xn < 3δR
2


≤ C



δR
2
inf
|x 0 | < R2
< xn < 3δR
2


R
v + sup |f |

λ

(13.16)

Version::31102011




R

≤ C
inf
v + sup |f |

λ

 |x 0 | < R
xn = δR
!
R
≤ C inf v + sup |f | .
λ
B R ,δ
2
The last inequality comes from the claim.
Now we set up the oscillation estimate: Define
MR := sup v ,
BR,δ
mR := inf v
BR,δ
ω(R) := MR − mR = osc v .
BR,δ
Consider M2R − v , v − m2R and invoke (13.16). We find

M2R −
δR
2
δR
2
v
inf
<R
< xn < 3δR
2
|x 0 |
≤ C M2R − sup v +
inf
v − m2R ≤ C
|x 0 | < R
< xn < 3δR
2
B R ,δ
2

R
sup |f |
λ
!
R
inf v − m2R + sup |f |
λ
B R ,δ
2
Adding each terms tells you
R
ω(2R) ≤ C ω(2R) − ω(R/2) + sup |f | .
λ
Lemma 13.3.5 now implies our desired result:
α R0
R
ω(R) ≤ C
ω(R0 ) +
sup |f | .
R0
λ
13.4
Hölder Estimates for Second Derivatives
In this section, we wish to derive the second derivative estimates for general nonlinear equation of simple case
F (D2 u) = ψ
Gregory T. von Nessi
in Ω,
3δR
2
(13.17)
13.4 Hölder Estimates for Second Derivatives
413
where Ω ⊂ Rn is a domain. General cases can be handled by perterbation arguments. See
Safonov [Sa2].
We begin with the interior estimates of Evans [Eva82] and Krylov [Kry82]. For further
study, suitable function spaces are introduced. Various interpolation inequalities are recalled.
Finally, we show the global bound for second derivates, due to Krylov. For the interior
estimate, we follow the treatment of [GT01] but with an idea of Safonov [Saf84] being used
to avoid the Motzkin-Wasco lemma. The corresponding boundary estimate will be derived
from Theorem 13.3.7.
Version::31102011
13.4.1
Interior Estimates
Concerning (13.17), it is assumed to satisfy F ∈ C 2 (Rn×n ), u ∈ C 4 (Ω), ψ ∈ C 2 (Ω) and
i.) λI ≤ Fr (D2 u) ≤ ΛI, where 0 < λ ≤ Λ and Fr := [∂F/∂rij ];
ii.) F is concave on some open set U ⊂ Rn×n including the range of D2 u.
Under these assumptions, our first result is the next
Theorem 13.4.1 (): Let BR = BR (y ) ⊂ Ω and let 0 < σ < 1. Then we have
R
R2
2
α
2
2
osc D u ≤ Cσ
osc D u + sup |Dψ| +
sup |D ψ| ,
λ
λ
BσR
BR
where C, α > 0 depend only on n, Λ/λ.
Proof: Recall the connection between (13.17) and the linear differential inequalities stated
at 3.1.
Differentiating (13.17) with respect to γ twice, we obtain
Fij Dij Dγγ u ≥ Dγγ ψ,
(13.18)
in view of the assumption ii.), where Fij := Frij . If we set
aij (x) := Fij (D2 u(x)),
L := aij Dij ,
then there holds
Lw ≥ Dγγ ψ
for w = w (x, γ) := Dγγ u
and λI ≤ [aij ] ≤ ΛI.
Take a ball B2R ⊂ Ω and define as before
M2 (γ) := sup w (·, γ),
M1 (γ) := sup w (·, γ),
m2 (γ) := inf w (·, γ),
m1 (γ) := inf w (·, γ).
B2R
B2R
BR
BR
Applying the weak Harnack inequality, Theorem to M2 − w on BR , we fine
1/p
Z
R2
2
−n
p
sup |D ψ| .(13.19)
R
(M2 (γ) − w (x, γ)) dx
≤ C M2 (γ) − M1 (γ) +
λ
BR
To establish the Hölder estimat, we need a bound for −w . At present, however, we only
have the inequality (13.18). Thus we return to the equation
F (D2 u(x)) − F (D2 (y )) = ψ(x) − ψ(y ),
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
414
for x ∈ BR , y ∈ B2R . Since F is concave:
ψ(y ) − ψ(x)
Fij (D2 u(x))(Dij u(y ) − Dij u(x))
≤
(13.20)
ij
a (x)(Dij u(y ) − Dij u(x))
=
=: αi (x)µi ,
where µ1 ≤ · · · ≤ µn are the eigenvalues of the matrix D2 u(y ) − D2 u(x) and λ ≤ αi ≤ Λ.
Setting
ω(R) := max (M1 (γ) − m1 (γ)),
|γ|=1
Z
∗
ω (R) :=
(M1 (γ) − m1 (γ)) dγ,
|γ|=1
|γ|=1
if follows by integration of (13.19) that
R
−n
Z
|γ|=1
Z
p
BR
(M2 (γ) − w (x, γ)) dxdγ
1/p
(13.21)
R2
2
sup |D ψ| .
≤ C ω (2R) − ω (R) +
λ
∗
∗
For any fixed x ∈ BR , one of the inequalities
max (M1 (γ) − w (x, γ)), max (w (x, γ) − w (y , γ)) ≥
|γ|=1
|γ|=1
1
ω(R)
2
must be valid. Suppose it is the second one and choose y ∈ BR so that also
max (w (x, γ) − w (y , γ)) ≥
|γ|=1
1
ω(R).
2
The from (13.20) we have
λ
2R
µ1 −
sup |Dψ|
(n − 1)Λ
(n − 1)Λ
λ
2R
ω(R) −
sup |Dψ|.
2(n − 1)Λ
(n − 1)Λ
µn = max (w (y , γ) − w (x, γ)) ≥ −
|γ|=1
≥
Consequently, in all cases we obtain
4R
max (M1 (γ) − w (x, γ)) ≥ δ ω(R) −
sup |Dψ| ,
λ
|γ|=1
for some positive constant δ depending on n, Λ/λ.But then, because w is a quadratic form
in γ, we must have
Z
4R
∗
(M1 (γ) − w (x, γ)) dγ ≥ δ ω(R) −
sup |Dψ|
λ
|γ|=1
for a further δ ∗ = δ ∗ (n, Λ/λ), so that from (13.21), we obtain
R
R2
∗
∗
2
sup |D ψ|
ω(R) ≤ C ω (2R) − ω (R) + sup |Dψ| +
λ
λ
and the result follows by Lemma 13.3.5 and the following lemma of Safonov [Saf84].
Gregory T. von Nessi
Version::31102011
ω(2R) := max (M2 (γ) − m2 (γ)),
|γ|=1
Z
ω ∗ (2R) :=
(M2 (γ) − m2 (γ)) dγ,
13.4 Hölder Estimates for Second Derivatives
415
Lemma 13.4.2 (): We have
ω(R) ≤ C1 ω ∗ (R) ≤ C2 ω(R),
for some positive constants C1 ≤ C2 .
Proof: Since the right hand side inequality is obvious, it sufficies to prove
M1 (γ) − m1 (γ) ≤ C1 ω ∗ (R)
Version::31102011
for any |γ| = 1, so that without loss of generality, we can assume γ = e1 . Writing
2D11 u(x) = w (x, γ + 2e1 ) − 2w (x, γ + e1 ) + w (x, γ),
we obtain by integration of |γ| < 1, for any x, y ∈ BR ,
Z
Z
Z
2ωn (D11 u(y ) − D11 u(x)) =
−2
+
(w (y , γ) − w (x, γ))
B1 (2e1 )
B1 (e1 )
B1 (0)
Z
≤ 2
(M1 (γ) − m1 (γ)) dγ
= 2
B3 (0)
3
n+1
Z
r
dr
|γ|=1
0
and hence the result follows with C1 =
3n+2
(n+2)ωn .
The proof of the theorem is now complete.
13.4.2
Z
(M1 (γ) − m1 (γ)) dγ
Function Spaces
Here we introduce some suitable function spaces for further study on the classical theory
of second order nonlinear elliptic equations. The notations are compatible with those of
[GT01].
Let Ω ⊂ Rn be a bounded domain and suppose u ∈ C 0 (Ω). Define seminorms:
[u]0,α;Ω :=
[u]k;Ω :=
sup
x,y ∈Ω,x6=y
sup
|β|=k,x∈Ω
|u(x) − u(y )|
, 0 < α ≤ 1,
|x − y |α
|Dβ u(x)|,
k = 0, 1, 2, . . . ,
and the space


k


X
X
C k,α (Ω) := u ∈ C k (Ω) ||u||k,α;Ω :=
[u]j;Ω +
[Dβ u]α;Ω < ∞ .


j=0
|β|=k
C k,α (Ω) is a Banach space under the norm || · ||k,α;Ω . Moreover, we see
C k (Ω) = C k,0 (Ω)
with the norm ||u||k;Ω :=
Pk
j=0 [u]j;Ω .
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
416
Recall
C k,α (Ω) = {u ∈ C k (Ω) u
Ω0
0
∈ C k,α (Ω ) for any Ω0 ⊂⊂ Ω},
and define the interior seminorms:
[u]∗α;Ω := sup dΩα0 [u]α;Ω ,
0 < α ≤ 1,
Ω0 ⊂⊂Ω
where u ∈ C 0,α (Ω), dΩ0 := Di st (Ω0 , ∂Ω). Similarly
[u]∗k;Ω :=
sup dΩk 0 [u]k;Ω0 ,
Ω0 ⊂⊂Ω
and
sup
Ω0 ⊂⊂Ω,|β|=k
[Dβ u]k;Ω0 ,
dΩk+α
0


k


X
C∗k,α (Ω) := u ∈ C k (Ω) ||u||∗k,α;Ω :=
[u]∗j;Ω + [u]∗k,α;Ω < ∞ .


j=0
C∗k,α (Ω) is a Banach space under the norm || · ||∗k,α;Ω .
13.4.3
Interpolation inequalities
For the aboe defined spaces, there hold various interpolation inequalities.
Lemma 13.4.3 (): Suppose j + β < k + α, where j, k = 0, 1, 2, . . ., and 0 ≤ α, β ≤ 1. Let Ω
be an open subset of Rn+ with a boundary portion T on xn = 0, and assume u ∈ C k,α (Ω ∪ T ).
Then for any > 0 and some constant C = C(, j, k) we have
[u]∗j,β;Ω∪T
|u|∗j,β;Ω∪T
≤ C|u|0;Ω + [u]∗k,α;Ω∪T 0 ,
≤ C|u|0;Ω + [u]∗k,α;Ω∪T .
(13.22)
Proof: Again we suppose that the right members are finite. The proof is patterned after
(the interior global result you did earlier) and we emphasize only the details in which the proofs
differ. In the following we omit the subscript Ω ∪ T , which will be implicitly understood.
We consider first the cases 1 ≤ j ≤ k ≤ 2, β = 0, α ≥ 0, starting with the inequality
[u]∗2 ≤ C()|u|0 + [u]∗2,α ,
α > 0.
(13.23)
Let x be any point in Ω, dx its distance from ∂Ω − T , and d = µdx where µ ≤ 14 is
a constant to be specificed later. If Di st (x, T ) ≥ d, then the ball Bd (x) ⊂ Ω and the
argument proceeds as in (your interior proof) leading to the inequality
2
d x |Dil u(x)| ≤ C()[u]∗1 + [u]∗2,α
provided µ = µ() is chosen sufficiently small. If Di st (x, T ) < d we consider the ball B =
Bd (x0 ) ⊂ Ω, where x0 is on the perpendicular to T passing through x and Di st (x, x0 ) = d.
Let x 0 , x 00 be the endpoints of the diamter of B parallel to the xl axis. Then we have for
some x on this diamater
|Dil u(x)| =
Gregory T. von Nessi
≤
|Di u(x 0 ) − Di u(x 00 )|
1
2 −2
≤ sup |Di u| ≤ d x sup d y |Di u(y )|
2d
d B
µ
y ∈B
2 −2 ∗
d [u]1 , since d y > d x /2 ∀y ∈ B;
µ x
Version::31102011
[u]∗k,α;Ω :=
k = 0, 1, 2, . . . ,
13.4 Hölder Estimates for Second Derivatives
417
and
|Dil u(x)| ≤ |Dil u(x)| + |Dil u(x) − Dil u(x)|
2 −2 ∗
−2−α
2+α |Dil u(x) − Dil u(y )|
≤
d [u]1 + 2d α sup d x,y sup d x,y
;
µ x
|x − y |α
y ∈B
y ∈B
hence
2
2 ∗
[u] + 16µα [u]∗2,α
µ 1
≤ C[u]∗1 + [u]∗2,α
Version::31102011
d x |Dil u(x)| ≤
provided 16µα ≤ , C = 2/µ. Choosing the smaller value of µ, corresponding to the two
cases Di st (x, T ) ≥ d and Di st (x, T ) < d, and taking the supremem over all x ∈ Ω and
i, l = 1, . . . , n, we obtain (the first equation in this proof) for j = k = 1, we proceed as
in (the proof you did for the interior result) with the modifications suggested by the above
proof of (the first equation of the proof). Together with the preceding cases, this gives us
(the equation in the statement of the lemma) for 1 ≤ j ≤ k ≤ 2, β = 0, α ≥ 0.
The proof of (the statement eq) for β > 0 follows closely that in cases (iii) and (iv) of
(the interior proof you did before). The principal difference is that the argument for β > 0,
α = 0 now requires application of the theorm of the mean in the truncated ball Bd (x) ∩ Ω
for points x such that Di st (x, T ) < d. Lemma 13.4.4 (): Suppose k1 + α1 < k2 + α2 < k3 + α3 with k1 , k2 , k3 = 0, 1, 2, . . .,
and 0 < α1 , α2 , α3 ≤ 1. Suppose further ∂Ω ∈ C k3 ,α3 . Then for any > 0, there exists a
constant C = O(−k ) for some positive k such that
||u||k2 ,α2 ;Ω ≤ ||u||k2 ,α3 ;Ω + C ||u||k1 ,α;Ω
holds for any u ∈ C k3 ,α3 (Ω).
Proof: The proof is based on a reduction to lemma 6.34 by means of an argument very
similar to that in Lemma 6.5. As in that lemma, at each point x0 ∈ ∂Ω let Bρ (x0 ) be a ball
and ψ be a C k,α diffeomorphism that straightens the boundary in a neighborhood containing
B 0 = Bρ (x0 ) ∩ Ω and T = Bρ (x0 ) ∩ ∂Ω. Let ψ(B 0 ) = D0 ⊂ Rn+ , ψ(T ) = T 0 ⊂ ∂Rn+ . Since
T 0 is a hyperplane portion of ∂D0 , we may apply the interpolation inequality from previous
lemma in D0 to the function ũ = u ◦ ψ −1 to obtain
[ũ]j,β;D0 ∪T 0 ≤ C()|ũ|0;D0 + |ũ|∗k,α;D0 ∪T 0 .
from
Let Ω be a bounded domain with C k,α boundary portion T, k ≥ 1, 0 ≤ α ≤ 1. Suppose
that Ω ⊂⊂ D, where D is a domain that is mapped by a C k,α diffeomorphism ψ onto D0 .
Letting ψ(Ω) = Ω0 and ψ(T ) = T 0 , we can define the quantities in (definitions of spaces)
with respect to Ω0 and T 0 . If x 0 = ψ(x), y 0 = ψ(y ), one sees that
K −1 |x − y | ≤ |x 0 − y 0 | ≤ K|x − y |
for all points x, y ∈ Ω, where K is a constant depending on ψ and Ω. Letting u(x) → ũ(x 0 )
under the mapping x → x 0 , we find after a calculation using the above equation: for 0 ≤ j ≤
k,0 ≤ β ≤ 1, j + β ≤ k + α,
[u]j,β;B0 ∪T ≤ C()|u|0;D0 + |u|∗k,α;B0 ∪T .
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
418
In these inequalities K denotes constants depending on the mapping ψ and the domain Ω.
if follows that
[u]j,β;B0 ∪T ≤ C()|u|0;D0 + |u|∗k,α;B0 ∪T .
(we recall that the same notation C() is being used for different functions of .) Letting
B 00 = Bρ/2 (x0 ) ∩ Ω, we infer from
We note that |u|∗k;Ω and |u|∗k,α;Ω are norms on the subspaces of C k (Ω) and C k,α (Ω)
respectively for which they are finite. If Ω is bounded and d = Di am (Ω), then obviously
these interior norms and the global norms (standard defs) are related by
|u|∗k,α;Ω ≤ max(1, d k+α )|u|k,α;Ω .
min(1, σ k+α )|u|k,α;Ω0 ≤ |u|∗k,α;Ω
|u|j,β;B00
≤ C()|u|0,B0 + |u|k,α;B0
≤ C()|u|0;Ω + |u|k,α;Ω .
(13.24)
Let Bρi /4 (xi ), xi ∈ ∂Ω, i = 1, . . . , N, be a finite collection of balls covering ∂Ω, such that
the intequality (2nd eq prev lemma) holds in each set Bi00 = Bρi /2 (xi ) ∩ Ω, with a constant
Ci (). Let δ = min ρi /4 and C = C() = max Ci (). Then at every point x0 ∈ ∂Ω, we have
B = Bδ (x0 ) ⊂ Bρi /2 (xi ) for some i and hence
|u|j,β;B∩Ω ≤ C|u|0;Ω + |u|k,α;Ω
(13.25)
The remainder of the argument is analogous to that in Theorem 6.6 and is left to the
reader. 13.4.4
Global Estimates
Returning to the interior estimate, we can write the second derivative estimate Theorem 4.1
in the form:
[u]∗2,α;Ω ≤ C([u]∗2;Ω + 1),
where C depends on λ, Λ, n, ||ψ||2;Ω and Di am (Ω). Therefore if λ, Λ are independent of u,
we obtain by interpolation
∗
||u||2,α;Ω ≤ C sup |u| + 1 .
Ω
Now we want to prove the next global second derivative Hölder estimate due to essentially
Krylov [Kry87].
Theorem 13.4.5 (): Let Ω ⊂ Rn be a bounded domain with ∂Ω ∈ C 3 . Suppose u ∈
C 4 (Ω) ∩ C 3 (Ω) fulfils
F (D2 u) = ψ in Ω
u=0
on ∂Ω,
where ψ ∈ C 2 (Ω). F ∈ C 2 (Sn ) is assumed to satisfy i.) and ii.) in the first theorem of this
section. Then we have an estimate
|u|2,α;Ω ≤ C(|u|2;Ω + |ψ|2;Ω ),
where α = α(n, Λ/λ) > 0, C = C(n, Ω, Λ/λ).
Gregory T. von Nessi
Version::31102011
If Ω0 ⊂⊂ Ω and σ = Di st (Ω, ∂Ω), then
13.4 Hölder Estimates for Second Derivatives
419
Proof: First we straighten the boundary.
Let x0 ∈ ∂Ω. By (Defintion of boundary reg.), there exists a ball B = B(x0 ) ⊂ Rn and
a one-to-one mapping χ : B → D ⊂ Rn such that
i.) χ(B ∩ Ω) ⊂ Rn+ = {x ∈ Rn | xn > 0}
ii.) χ(B ∩ ∂Ω) ⊂ ∂Rn+
iii.) χ ∈ C 3 (B), χ−1 ∈ C 3 (Ω).
Version::31102011
Let us write y := χ(x). An elementary calculation shows
∂u
∂xi
2
∂ u
∂xi ∂xj
=
=
∂χk
∂u
(x) ·
∂xi
∂yk
k
k
l
∂χ ∂χ
∂2u
∂
∂χ
∂u
·
+
·
.
∂xi ∂xj ∂yk ∂yl
∂xj ∂xi
∂yk
Thus, the ellipticity condition is preserved under the transformation χ with
λ̃I ≤ [J Fr J T ] ≤ Λ̃I,
J := Dχ ,
where 0 < λ̃ ≤ Λ̃ < ∞. Moreover, χ also preserves the Hölder spaces and transforms the
equation (13.26) into the more general form:
F (x, u, Du, D2 u) = 0.
Therefore it suffices to deal with the following situation:
Suppose u satisfies
+
F (x, u, Du, Du ) = 0 in B := BR
= {x ∈ Rn+ |x| < R}
u=0
on T := {x ∈ B | xn = 0},
where F ∈ C 3 (B × R × Rn × Sn ).
Differentiation with respectto xk , 1 ≤ k ≤ n − 1 leads
∂F
∂F
∂F
∂F
Dijk u +
Dik u +
Dk u +
= 0,
∂rij
∂pi
∂u
∂xk
which can be summarized in the form
Lv = f
v =0
in B
on T.
(13.26)
Here we have set v := Dk u, 1 ≤ k ≤ n − 1 and
L
=
−f
:=
∂F
Dij
∂rij
∂F
∂F
∂F
Dij u +
,
Dk u +
∂pi
∂u
∂xk
aij Dij :=
with λI ≤ [aij ] ≤ ΛI.
You can apply the Krylov boundary estimate (previous boundary result) to obtain
)
α (
v
δ
v
R
osc
≤ C
osc
+ |f |0;B+
+ x
R
R
λ
Bδ+ xn
BR
n
α δ
R
≤ C
|u|2;B+ + |f |0;B+ .
R
R
R
λ
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
420
where α, C depend on n and Λ/λ. By subtracting a linear function of xn from v if necessar,
we can assume that v (x) = O(|xn |2 ) as x → 0. Thus, fixing R and taking δ = xn , we find
v (x)
≤ C(|u|2;B+ + |f |0;B+ ).
R
R
xn1+α
(13.28)
Br
where D0 u = (D1 u, . . . , Dn−1 u), α = α(n, Λ/λ) > 0 and C depend furthermore on f . The
second inequality in (13.28) comes from the equation itself. In fact, F (x, u, Du, D2 u) = 0
can be solved so that Dnn u = function of (x, u, Du, DD0 u) and hence
|Dnn u| ≤ C(sup |DD0 u| + 1).
(13.28) implies in particular
0
osc DD u ≤ Cσ
sup |DD u| + 1 .
0
BσR
α
Br
By interpolating the interior norm, we get from (13.27)
[DDk u]α;Br
≤ C{Di st (Br , T )−1−α |D0 u|0;Br + 1}
≤ C(|u|2;B+ + |f |0;B+ ),
R
R
(13.29)
for 1 ≤ k ≤ n − 1.
The equation itself again shows that (13.29) holds true for k = n.
In summary, we finally obtain
[D2 u]α;B+ C (|u|2;B+ + |ψ|2;B+ ).
R
R
R
Usual covering argument now yields the full global estimate. This completes the proof.
Corollary 13.4.6 (): Under the assumptions of the previous theorem, we have by interpolation
|u|2,α;Ω ≤ C(|u|0;Ω + |ψ|2;Ω )
≤ C|ψ|2;Ω .
13.4.5
Generalizations
The method of proof of theorem above provides more general results. We present one of
them without proof.
Theorem 13.4.7 (): If u ∈ C 2 (Ω) ∩ C 0 (Ω) solves the linar partial differential equation of
the type (13.26):
Lu = f in Ω
u=0
on ∂Ω,
Gregory T. von Nessi
Version::31102011
This is the boundary estimate to start with.
+
For the interior estimate, Theorem 4.1 gives for any Br ⊂ BR
α
2
sup |D u| + 1
osc ≤ Cσ
Bσr
Br
α
0
≤ Cσ
sup |DD u| + 1 ,
(13.27)
13.5 Existence Theorems
421
for a bounded domain Ω ⊂ Rn with ∂Ω ∈ C 2 , f ∈ L∞ (Ω) and
α
osc Du + |f |0;Ω + 1 ,
osc Du ≤ C0 σ
BR
BσR
for some δ = δ(n, α, Λ/λ), C = C(n, α, C0 , Ω, Λ/λ.
The form of the interior gradient estimate can be weakened; (see [Tru84], Theorem 4).
Version::31102011
13.5
Existence Theorems
After establishing various estimates, we are now able to investigate the problem of the
existence of solutions. This section is devoted to the solvability for equations of the form
F (D2 u) = ψ,
(13.30)
Two cases are principally discussed; (i) uniformly elliptic case and (ii) Monge-Ampere equation.
We begin with recalling the method of continuity. Then (i)(ii) are studied.
13.5.1
Method of Continuity
First we quickly review the usual existence method of continuity.
Let B1 and B2 be Banach spaces and let F : U → B2 be a mapping, where B ⊂ B1 is an
open set. We say that F is differentiable at u ∈ U if there exists a bounded linear mapping
L : B1 → B2 such that
||F [u + h] − F [u] − Lh||B2
||h||B1
→0
as h → 0 in B1 .
(13.31)
We write L = Fu and call it the Fréchet derivative F at u. When B1 , B2 are Euclidean
spaces, Rn , Rm , the Fréchet derivative coincides with the usual notion of differentia, and ,
moreover, the basic theory for the infinite dimensional cas can be modelled on that for the
finite dimensional case as usually treated in advanced calculus. In particular, it is evident
from (the above limit) that the Fréchet differentiablity of F at u implies that F is continuous
at u and that the Fréchet derivative Fu is determined uniquely by (the limit def above). We
call F continuously differentiable at u if F is Fréchet differentiable in a neighbourhood of u
and the resulting mapping
v 7→ Fv ∈ E(B1 , B2 )
is continuous at u. Here E(B1 , B2 ) denotes the Banach space of bounded linear mapping
from B1 to B2 with the norm given by
||L|| = sup
v ∈ B1
v 6= 0
||Lv ||B2
||v ||B1
.
The chain rule holds for Fréchet differentiation, that is, if F : B1 → B2 , G : B2 → B3 are
Fréchet differentiable at u ∈ B1 , F [u] ∈ B2 , respectively, then the composite mapping G ◦ F
is differentiable at u ∈ B1 and
(G ◦ F )u = GF [u] ◦ Fu .
(13.32)
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
422
The theorem of the mean also holds in the sense that if u, v ∈ B1 , F : B1 → B2 is
differentiable on the closed line segment γ joining u and v in B1 , then
||F [u] − F [v ]||B2 ≤ K||u − v ||B1 ,
(13.33)
where
K = sup ||Fw ||.
w ∈γ
2
1
(k)
(h) + G(u,σ)
G(u,σ) (h, k) = G(u,σ)
for h ∈ B1 , k ∈ X. We state the implicit function theorem in the following form.
Theorem 13.5.1 (): Let B1 , B2 and X be Banach spaces and G a mapping from an open
subset of B1 × X into B2 . Let (u0 , σ0 ) be a point in B1 × X satisfying:
i.) G[u0 , σ0 ] = 0;
ii.) G is continuously differentiable at (u0 , σ0 );
1
iii.) the partial Fréchet derivative L = G(u
is invertible.
0 ,σ0 )
Then there exists a neighborhood N of σ0 in X such that the equation G[u, σ] = 0, is
solvable for each σ ∈ N , with solution u = uσ ∈ B1 .
Sketch of proof: This may be proved by reduction to the contraction mapping principle.
Indeed the equation G[u, σ] = 0, is equivalent to the equation
u = T u ≡ u − L−1 G[u, σ],
and the operator T will, by virtue (13.31) and (13.32), be a contraction mapping in a closed
ball B = B δ (u0 ) in B1 for δ sufficiently small and σ sufficiently close to σ0 in X. One
can further show that the mapping F : X → B1 defined by σ → uσ for σ ∈ N , uσ ∈ B,
G[uσ , σ] = 0 is differentiable at σ0 with Fréchet derivative
2
Fσ0 = −L−1 G(u
.
0 ,σ0 )
In order to apply the above theorem we suppost that B1 and B2 are Banach spaces with F
a mapping from an open subset U ⊂ B1 into B2 . Let ψ be a fixed element in U and define
for u ∈ U, t ∈ R the mapping G : U × R → B2 by
G[u, t] = F [u] − tF [ψ].
Let S and E be the subsets of [0, 1] and B1 defined by
S := {t ∈ [0, 1] | F [u] = tF [ψ] for some u ∈ U}
E
:= {u ∈ U | F [u] = tF [ψ] for some t ∈ [0, 1]}.
Clearly 1 ∈ S, ψ ∈ E so that S and E are not empty. Let us next suppose that the mapping F
is continuously differentiable on E with invertible Fréchet derivative Fu . It follows then from
the implicit function theorem (previous theorem) that the set S is open in [0.1]. Consequently
we obtain the following version of the method of continuity.
Gregory T. von Nessi
Version::31102011
With the aid of these basic properties we may deduce an implicit fuinction theorem for
Fréchet differentiable mappings. Suppose that B1 , B2 and X are Banach spaces and that
G : B1 × X → B2 is Fréchet differentiable at a point (u, σ), where u ∈ B1 and σ ∈ X. The
1
2
partial Fréchet derivatives, G(u,σ)
, G(u,σ)
at (u, σ) are the bounded linear mappings from B1 ,
X, respectively, into B2 defined by
13.5 Existence Theorems
423
Theorem 13.5.2 (): Suppose F is continuously Fréchet differentiablewith the invertable
Fréchet derivative Fu . Fix ψ ∈ U and set
S := {t ∈ [0, 1] | F [u] = tF [ψ] for some u ∈ U}
E
:= {u ∈ U | F [u] = tF [ψ] for some t ∈ [0, 1]}.
Then the equation F [u] = 0 is solvable for u ∈ U if S is closed.
The aim is to apply this to the Dirichlet problem. Let us define
Version::31102011
B1 := {u ∈ C 2,α (Ω) | u = 0 on ∂Ω}
B2 := C α (Ω),
and let F : B1 → B2 be given by
F [u] := F (x, u, Du, D2 u).
Assume F ∈ C 2,α (Ω × R × Rn × Sn ). Then F is continuously Fréchet differentiable in B1
with
Fv [u] =
∂F
(x, v (x), Dv (x), D2 v (x))Dij u
∂rij
∂F
∂F
+
(x, v (x), Dv (x), D2 v (x))Di u +
(x, v (x), Dv (x), D2 v (x))u.
∂pi
∂z
Check this formular and observe that F ∈ C 2,α is more than enough.
To proceed further, we recall the result of Classical Schauder theory proven in Chapter
9(?) (put here for convenience:
Theorem 13.5.3 (): Consider the linear operator
Lu := aij Dij u + bi Di u + cu,
in a bounded domain Ω ⊂ Rn with ∂Ω ∈ C 2,α , 0 < α < 1. Assume
aij , bi , c ∈ C α (Ω),
ij
λI ≤ [a ] ≤ ΛI,
c ≤0
0 < λ ≤ ΛI,
0 < λ ≤ Λ < ∞.
Then for any function f ∈ C α (Ω), there exists a unique solution u ∈ C 2,α (Ω) satisfying
Lu = f in Ω, u = 0 on ∂Ω and |u|2,α;Ω ≤ C|f |α;Ω , where C depends on n, λ, |aij , bi , c|α;Ω
and Ω.
Theorem 13.5.4 (): Let Ω ⊂ Rn be a bounded domain with ∂Ω ∈ C 2,α for some 0 < α < 1.
Let U ⊂ C 2,α (Ω) be open and let ψ ∈ U. Define
E := {u ∈ U | F [u] = σF [ψ] for some σ ∈ [0, 1] and u = ψ on ∂Ω},
where F ∈ C 2,α (Γ ) with Γ := Ω × R × Rn × Sn . Suppose further
i.) F is elliptic on Ω with respect to u ∈ U;
ii.) Fz ≤ 0 on U; (or Fz ≥ 0 on U?)
iii.) E is bounded in C 2,α (Ω);
Gregory T. von Nessi
Chapter 13: Non-Variational Methods in Nonlinear PDE
424
iv.) E ⊂ U, where the closure is taken in an appropriate sense.
Then there exists a unique solution u ∈ U of the following Dirichlet problem:
F [u] = 0 in Ω
u=ψ
on ∂Ω.
Proof: We can reduce to the case of zero boundary values by replacing u with u − ψ.
The mapping G : B1 × R → B2 is then defined by taking ψ ≡ 0 so that
G[u, σ] = F [u + ψ] − σF [ψ].
Observe that by this theorem, the solvability of the Dirichlet problem depends upon the
estimates in C 2,α (Ω) independent of σ (this is why we need Schauder theory).
13.5.2
Uniformly Elliptic Case
Let Ω ⊂ Rn be a bounded domain with ∂Ω ⊂ C 3 . Suppose F ∈ C 2 (Sn ) satisfying the
following
i.) λI ≤ Fr (s) ≤ ΛI for all s ∈ Sn with some fixed constants 0 < λ ≤ Λ.
ii.) F is concave.
We also assume without loss of generality that F (0) = 0.
Theorem 13.5.5 ((Krylov [Kry82])): Under the above assumptions, the Dirichlet problem
F (D2 u) = ψ in Ω
(13.34)
u=0
on ∂Ω
is uniquely solvable for any ψ ∈ C 2 (Ω). The solution u belongs to C 2,α (Ω) for each 0 <
α < 1.
Proof: Recall the a-priori estimate (global result prev. section)
[u]2,αΩ ≤ C(|u|0;Ω + |ψ|2;Ω ),
where α = α(n, Λ/λ) > 0, C = C(n, ∂Ω, Λ/λ).
Since we can write
Z 1
2
2
F (D u) =
Frij (θD u) dθ Dij u,
0
by virture of F (0) = 0, the classical maximum principle (find this) gives
Gregory T. von Nessi
|u|0;Ω ≤
|ψ|0;Ω
(Di am (Ω))2 .
λ
(13.35)
Version::31102011
It then follows from the previous theorem and the remarks proceeding the statement of this
theorem, that the given Dirichlet problem is solvable provided the set S is closed. However
the closure of S (and also of E) follows readily from the boundedness of E (and hypothesis
iv.)) by virtue of the Arzela-Ascoli theorem. 13.5 Existence Theorems
425
It is easy to see that the corresponding estimates for F (D2 u) = σψ, σ ∈ [0, 1] are independent of σ. Thus the result follows from the method of continuity. Remark: Once you get u ∈ C 2 (Ω), then the linear theory ensures u ∈ C 2,α (Ω) for all
α < 1.
Version::31102011
The above proof owes its validity to the regularity of F . If you want to examine a non-smooth not the
F , you need to mollify F ; replace F by
ous argum
ally;explain
Z
r −s
F (r ) =
ρ
F (s) ds,
Sn
where ρ ∈ C0∞ (Sn ) is defined by
ρ ≥ 0,
Z
ρ(s) ds = 1.
Sn
Observe that F satisfies the assumptions (i)(ii). Then solve the problem
F (D2 u ) = ψ in Ω
,
u = 0
on ∂Ω
and let → 0. The examples include the following Bellman equation:
F (D2 u) =
inf aij Dij u.
k=1,...,n k
Next, we wish to modify the argument to include non-zero boundary value problems, especially those with continuous boundary values:
F(D²u) = ψ in Ω,    u = g on ∂Ω,    (13.36)
where g ∈ C^0(∂Ω).
As a first step, we approximate g by {g_n} ⊂ C²(Ω) such that g_n → g uniformly on ∂Ω as n → ∞. Secondly, in view of ∂Ω ∈ C³, one can construct barriers: for any y ∈ ∂Ω, there exists a neighbourhood N_y of y and a function w ∈ C²(N_y) such that
i.) w > 0 in (Ω ∩ N_y) \ {y},
ii.) w(y) = 0,
iii.) F(D²w) ≤ ψ in Ω ∩ N_y
(refer to the Perron process for proofs about barriers).
Suppose {u_n} ⊂ C^{2,α}(Ω) ∩ C^0(Ω) are given by the solutions of
F(D²u_n) = ψ in Ω,    u_n = g_n on ∂Ω.
Then we can appeal to the linear theory to conclude that {u_n} are equicontinuous on Ω, taking into account (13.34) and the existence of barriers. On the other hand, the Schauder estimate (put the theorem here) implies the equicontinuity of Du, D²u on any compact subset of Ω. Therefore, by letting n → ∞, we obtain u ∈ C^0(Ω) ∩ C^{2,α}(Ω) which solves (13.36).
13.5.3 Monge-Ampère Equation
The equation we wish to examine here is
det D²u = ψ in Ω,    (13.37)
where Ω ⊂ R^n is a uniformly convex domain with ∂Ω ∈ C³.
(13.37) is elliptic with respect to u provided D²u > 0. Thus the set U of admissible functions is defined to be
U := {u ∈ C^{2,α}(Ω) | D²u > 0}.
Our main result is the following.
Theorem 13.5.6 (Krylov [Kry83a, Kry83b], Caffarelli, Nirenberg and Spruck [CNS84]): Let Ω ⊂ R^n be a uniformly convex domain with ∂Ω ∈ C³ and let ψ ∈ C²(Ω) be such that ψ > 0 in Ω. Then there exists a unique solution u ∈ U of
det D²u = ψ in Ω,    u = 0 on ∂Ω.    (13.38)
For the rest of this subsection, we prove this theorem, employing a-priori estimates step by step.
Proof:
Step 1: The |u|_{2;Ω} estimate implies the |u|_{2,α;Ω} estimate.
First, we show the concavity of F(r) := (det r)^{1/n} for r > 0. Recalling
(det D²u)^{1/n} (det A)^{1/n} ≤ (1/n) a^{ij} D_{ij} u,
we have
(det D²u)^{1/n} = inf_{det A ≥ 1} (1/n) a^{ij} D_{ij} u.
This proves the concavity of F, since the right-hand side is an infimum of linear, hence concave, functions of D²u.
Next recall det D²u = ∏_{i=1}^n λ_i, where λ_1, . . . , λ_n are the eigenvalues of D²u. Thus we have
λ_i = ψ λ_i / ∏_{j=1}^n λ_j ≥ inf ψ / (sup |D²u|)^{n−1},
which means that the C² boundedness of u ensures the uniform ellipticity of the equation. The global C^{2,α} estimate of the previous section then implies
|u|_{2,α;Ω} ≤ C,
where C = C(|u|_{2;Ω}, |ψ|_{2;Ω}, ∂Ω). This proves Step 1.
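The matrix inequality used at the start of Step 1 can be verified directly (a short check; here r stands for D²u and A = [a^{ij}] is any symmetric positive definite matrix): with μ_1, . . . , μ_n the eigenvalues of A^{1/2} r A^{1/2}, the arithmetic-geometric mean inequality gives
(det r)^{1/n} (det A)^{1/n} = ( ∏_i μ_i )^{1/n} ≤ (1/n) Σ_i μ_i = (1/n) Tr(Ar) = (1/n) a^{ij} r_{ij}.
Equality holds for A = (det r)^{1/n} r^{−1}, which has det A = 1; this is why the infimum over {det A ≥ 1} is attained and equals (det r)^{1/n}.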
Step 2: The |u|_{0;Ω} and |u|_{1;Ω} estimates.
Since u is convex, we have u ≤ 0 in Ω. Let w := A|x|² − B and compute
det D²w = (2A)^n.
Taking A, B large enough, we find
det D²w ≥ ψ in Ω,    w ≤ u on ∂Ω,
from which we obtain |u|0;Ω ≤ C by applying the maximum principle.
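To make the barrier explicit (one admissible choice of constants, given here as a sketch): since D²w = 2A·I, we have det D²w = (2A)^n, so it suffices to take
A := (1/2)(sup_Ω ψ)^{1/n},    B := A sup_{∂Ω} |x|².
Then det D²w ≥ ψ in Ω and w ≤ 0 = u on ∂Ω, so the comparison principle for the Monge-Ampère operator between convex functions gives w ≤ u ≤ 0 in Ω, whence |u|_{0;Ω} ≤ B.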
Next we want to give a bound for |u|_{1;Ω}. The convexity implies
sup_Ω |Du| ≤ sup_{∂Ω} |Du|.
Thus it suffices to estimate Du on the boundary ∂Ω. More precisely, we wish to estimate u_ν from above, where u_ν denotes the exterior normal derivative of u on ∂Ω. This is because u_ν ≥ 0 on ∂Ω and the tangential derivatives are zero.
We exploit the uniform convexity of the domain Ω: for any y ∈ ∂Ω, there exists a C² function w defined on a ball B_y ⊃ Ω such that w ≤ 0 in Ω, w(y) = 0 and det D²w ≥ ψ in Ω. This can be seen by taking B_y ⊃ Ω with ∂B_y ∩ ∂Ω = {y} and considering a function of the type A|x − x_0|² − B as before.
Therefore, we infer
w ≤ u in Ω,
and in particular w_ν ≥ u_ν at y. Since ∂Ω is compact, we obtain the |u|_{1;∂Ω} estimate.
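The first inequality above is the standard fact that the gradient of a convex function is largest at the boundary; a one-line justification (a sketch): for x ∈ Ω with Du(x) ≠ 0, follow the ray from x in the direction e := Du(x)/|Du(x)| until it meets ∂Ω at a point z (the segment stays in Ω by convexity of the domain); since t ↦ D_e u(x + te) is non-decreasing for convex u,
|Du(x)| = D_e u(x) ≤ D_e u(z) ≤ |Du(z)| ≤ sup_{∂Ω} |Du|.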
Step 3: The sup_{∂Ω} |D²u| estimate implies the sup_Ω |D²u| estimate.
We use the concavity of
F(D²u) := (det D²u)^{1/n} = ψ^{1/n}.
As in the linearization of the first part, we deduce
F_{ij} D_{ij} D_{γγ} u ≥ D_{γγ}(ψ^{1/n})    for |γ| = 1.
The linear application of the Aleksandrov-Bakelman maximum principle then leads to
D_{γγ} u ≤ sup_{∂Ω} D_{γγ} u + C · sup_Ω ( |D_{γγ}(ψ^{1/n})| / Tr(F_{ij}) ).
On the other hand, we have
F_{ij} = (1/n)(det D²u)^{−1+1/n} · u^{ij},
where [u^{ij}] denotes the cofactor matrix of [u_{ij}]. Compute
det[F_{ij}] = 1/n^n,    so that    Tr(F_{ij})/n ≥ (det[F_{ij}])^{1/n} = 1/n.
Thus, there holds
D_{γγ} u ≤ sup_{∂Ω} D_{γγ} u + C,
where C depends only on the known quantities. This proves Step 3.
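The value det[F_{ij}] = n^{−n} can be checked directly (a short verification): since [u^{ij}] is the cofactor matrix of [u_{ij}], det[u^{ij}] = (det D²u)^{n−1}, and therefore
det[F_{ij}] = (1/n)^n (det D²u)^{n(−1+1/n)} det[u^{ij}] = (1/n)^n (det D²u)^{1−n} (det D²u)^{n−1} = n^{−n}.
Since [F_{ij}] is positive definite (u is convex), the arithmetic-geometric mean inequality gives Tr(F_{ij}) ≥ n (det[F_{ij}])^{1/n} = 1, so the quotient in the estimate for D_{γγ}u is controlled by known quantities.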
Step 4: The sup_{∂Ω} |D²u| estimate.
Take any y ∈ ∂Ω. Without loss of generality, we may suppose that y is the origin and that the x_n-axis points in the interior normal direction.
Near the origin, ∂Ω is represented by
x_n = ω(x′) := (1/2) Ω_{αβ} x_α x_β + O(|x′|³).
Here x′ := (x_1, . . . , x_{n−1}) and, in the summation, Greek indices run from 1 to n − 1. The uniform convexity of ∂Ω is equivalent to the positivity of [Ω_{αβ}]. For more details, see the section after next.
Our aim is an estimate for |u_{ij}(0)|. First, we see that the boundary condition u = 0 on ∂Ω implies
(D_β + ω_β D_n)(D_α + ω_α D_n) u = 0
on ∂Ω near the origin; in particular,
|D_{αβ} u(0)| ≤ C
at the origin. Next, differentiate the equation det D²u = ψ to obtain
u^{ij} D_{ij}(D_α u + ω_α D_n u) = D_α ψ + ω_α D_n ψ + 2 u^{ij} ω_{αi} D_{jn} u + u^{ij} D_{ij} ω_α · D_n u.
By virtue of u^{ij} D_{jn} u = δ_{in} det D²u, there holds
|u^{ij} D_{ij}(D_α u + ω_α D_n u)| ≤ C(1 + Tr u^{ij})
in Ω ∩ N, where N denotes a suitable neighbourhood of the origin.
On the other hand, introduce a barrier function of the type
w(x) := −A|x|² + B x_n,
and compute
u^{ij} D_{ij} w = −2A Tr u^{ij}.
Choosing A and B large enough, we conclude that
|u^{ij} D_{ij}(D_α u + ω_α D_n u)| + u^{ij} D_{ij} w ≤ 0 in Ω ∩ N,
|D_α u + ω_α D_n u| ≤ w on ∂(Ω ∩ N).
By the maximum principle, we now find
|D_α u + ω_α D_n u| ≤ w in Ω ∩ N,
and hence
|D_{nα} u(0)| ≤ B.
To summarize, we have so far derived
|D_{kl} u(0)| ≤ C if k + l < 2n.    (13.39)
Finally, we want to estimate |D_{nn} u(0)|. Since u = 0 on ∂Ω, the unit outer normal ν on ∂Ω is given by
ν = Du / |Du|,
and so
D²u(0) = ( |Du| D′ν    D_{in} u
           D_{ni} u     D_{nn} u )(0),    (13.40)
where D′ := (∂/∂x_1, . . . , ∂/∂x_{n−1}). The choice of our coordinates gives
D′ν(0) = Diag(κ′_1, . . . , κ′_{n−1})(0)
and |Du(0)| = |D_n u(0)|. Here κ′_1, . . . , κ′_{n−1} denote the principal curvatures of ∂Ω at 0 (see the section after next). By our regularity assumption on ∂Ω, for any y ∈ ∂Ω, there exists an interior ball B_y ⊂ Ω with ∂B_y ∩ ∂Ω = {y}. Thus the comparison argument as before yields
|Du| ≥ δ > 0    (13.41)
for some δ > 0.
In view of (13.40), we have
ψ(0) = det D²u(0) = |D_n u|^{n−2} { ( ∏_{i=1}^{n−1} κ′_i ) |D_n u| D_{nn} u − Σ_{i=1}^{n−1} ( ∏_{j≠i} κ′_j ) (D_{in} u)² }(0),
from which we infer
0 < D_{nn} u(0) ≤ C,    (13.42)
taking into account (13.41) and (13.39) and the uniform convexity κ′_i > 0. This proves our claim.
Now we have established an estimate for |u|_{2,α;Ω}, and the method of continuity can be applied to complete the proof.
13.5.4 Degenerate Monge-Ampère Equation
Here we want to generalize the previous theorem so that ψ merely satisfies ψ ≥ 0 in Ω.
In the proof of the above theorem, Steps 2 and 3 are not altered. Step 4 is also as before, except for the estimation of D_n u away from zero (see (13.41)), which follows from the convexity of u and being able to assume ψ(x_0) > 0 for some x_0 ∈ Ω. Step 1 does not hold true in general.
To deal with these points, we approximate ψ by ψ + ε for ε > 0 sufficiently small. The above theorem is applicable and we obtain a convex C^{2,α} solution u_ε. We wish to let ε → 0; however, an estimate independent of ε can only be obtained up to |u|_{1;Ω}. In summary, by virtue of the convexity of u_ε, we derive
Theorem 13.5.7: Let Ω ⊂ R^n be a uniformly convex domain with ∂Ω ∈ C³ and let ψ ∈ C²(Ω) be non-negative in Ω. Then there exists a unique solution u ∈ C^{1,1}(Ω) of
det D²u = ψ^n in Ω,    u = 0 on ∂Ω.    (13.43)
13.5.5 Boundary Curvatures
Here we collect some properties of boundary curvatures and the distance function, which are taken from the appendix of [GT01, Chapter 14].
Let Ω ⊂ R^n be a bounded domain with boundary ∂Ω ∈ C². After a rotation of coordinates if necessary, the x_n-axis lies in the direction of the inner normal to ∂Ω at the origin. Also, there exists a neighbourhood N of the origin such that ∂Ω is given by the height function x_n = ω(x′), where x′ := (x_1, . . . , x_{n−1}). Moreover, we can assume that ω is represented by
ω(x′) = (1/2) Ω_{αβ} x_α x_β + O(|x′|³),    Ω_{αβ} = D_{αβ} ω(0),
in N ∩ T, where T denotes the tangent hyperplane to ∂Ω at the origin.
The principal curvatures of ∂Ω at the origin are now defined as the eigenvalues of [Ω_{αβ}], and we denote them κ′_1, . . . , κ′_{n−1}. The corresponding eigenvectors are called the principal directions.
We define the tangential gradient ∂ on ∂Ω as
∂ := D − ν(ν · D),
where ν denotes the unit outer normal to ∂Ω; ν can be extended smoothly to a neighbourhood of ∂Ω. In terms of ∂, we infer that {κ′_1, . . . , κ′_{n−1}, 0} are given by the eigenvalues of [∂ν].
Let d(x) := dist(x, ∂Ω) for x ∈ Ω. Then we find d ∈ C^{0,1}(Ω) and d ∈ C²({0 < d < a}), where a := 1/max_{∂Ω} |κ′|. We call d(x) the distance function. Remark that ν(x) = −Dd(x) for x ∈ ∂Ω.
A principal coordinate system at y ∈ ∂Ω consists of the x_n-axis being along the unit normal and the others along the principal directions. For x ∈ {x ∈ Ω | 0 < d(x) < a}, let y = y(x) ∈ ∂Ω be such that |y(x) − x| = d(x). Then, with respect to a principal coordinate system at y = y(x), we have
[D²d(x)] = Diag( −κ′_1/(1 − dκ′_1), . . . , −κ′_{n−1}/(1 − dκ′_{n−1}), 0 ),
and for α, β = 1, 2, . . . , n − 1,
D_{αβ} u = ∂_α ∂_β u − D_n u · κ′_α δ_{αβ} = ∂_α ∂_β u − D_n u · Ω_{αβ}.
If u = 0 on ∂Ω, then D_{αβ} u = −D_n u · Ω_{αβ} for α, β = 1, 2, . . . , n − 1 at y = y(x).
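As a simple illustration of these formulas (an example for orientation only): for Ω = B_R(0) we have d(x) = R − |x|, ν(x) = x/|x| on ∂Ω, and every principal curvature equals κ′ = 1/R, so a = R. At a point x with d = d(x), differentiating d twice gives D²d(x) = −(1/|x|)(I − x̂ ⊗ x̂) with x̂ := x/|x|, whose non-zero eigenvalues are
−1/|x| = −(1/R)/(1 − d/R),
each with multiplicity n − 1, in agreement with the diagonal formula above.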
13.5.6 General Boundary Values
The estimations for the Monge-Ampère equation above extend automatically to the case of general Dirichlet boundary values given by a function φ ∈ C³(Ω), except for the double normal second derivative on the boundary. Here the resultant estimate (for φ ∈ C^{3,1}(Ω)) is due to Ivochkina [Ivo90]. We indicate here a simple proof of this estimate using a recent trick introduced by Trudinger in [Tru95] in the study of Hessian and curvature equations. Accordingly, let us suppose that u ∈ C³(Ω) is a convex solution of the Dirichlet problem
det D²u = ψ in Ω,    u = φ on ∂Ω,    (13.44)
with ψ ∈ C²(Ω), inf_Ω ψ > 0 as before, and φ ∈ C⁴(Ω), ∂Ω ∈ C⁴. Fix a point y ∈ ∂Ω and a principal coordinate system at y. Then the estimation of D_{nn} u(y) is equivalent to the estimation away from zero of the minimum eigenvalue λ′ of the matrix
[D_{αβ} u] = [∂_α ∂_β φ − D_n u · Ω_{αβ}],    1 ≤ α, β ≤ n − 1.
Now, for ξ = (ξ_1, . . . , ξ_{n−1}), |ξ| = 1, the function
w = D_n u − (∂_α ∂_β φ ξ_α ξ_β)/(Ω_{αβ} ξ_α ξ_β) − A|x′|²
will take a maximum value at a point z ∈ N ∩ ∂Ω, for sufficiently large A, depending on N, max |Du|, max |D²φ| and ∂Ω. But then an estimate
D_{nn} u(z) ≤ C
follows by the same barrier considerations as before, whence λ′(z) is estimated away from zero and consequently λ′(y), as required. Note that this argument does not extend to the degenerate case. We thus establish the following extension of the non-degenerate Monge-Ampère existence theorem (Theorem 13.5.6) to general boundary values [CNS84]:
Theorem 13.5.8: Let Ω be a uniformly convex domain with ∂Ω ∈ C⁴, φ ∈ C⁴(Ω), and let ψ ∈ C²(Ω) be such that ψ > 0 in Ω. Then there exists a unique solution u ∈ U of the Dirichlet problem
det D²u = ψ in Ω,    u = φ on ∂Ω.
13.6 Applications
This section is concerned with some applications and extensions.
13.6.1 Isoperimetric Inequalities
We give a new proof of the isoperimetric inequalities via the existence theorem for Monge-Ampère equations.
Theorem 13.6.1: Let Ω be a bounded domain in R^n with boundary ∂Ω ∈ C². Then we have
|Ω|^{1−1/n} ≤ (1/(n ω_n^{1/n})) H^{n−1}(∂Ω),    (13.45a)
|Ω|^{1−2/n} ≤ (1/(n(n−1) ω_n^{2/n})) ∫_{∂Ω} H′ dH^{n−1}    if H′ ≥ 0 on ∂Ω,    (13.45b)
where | · | and H^{n−1}( · ) denote the usual Lebesgue measure on R^n and the (n−1)-dimensional Hausdorff measure, respectively. H′ is the mean curvature of ∂Ω; i.e.
H′ := Σ_{i=1}^{n−1} κ′_i.
These inequalities are sharp on balls.
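The sharpness can be seen by a direct computation on Ω = B_R(0) (a check of the constants): |Ω| = ω_n R^n, H^{n−1}(∂Ω) = n ω_n R^{n−1} and H′ ≡ (n−1)/R, so
|Ω|^{1−1/n} = ω_n^{1−1/n} R^{n−1} = (1/(n ω_n^{1/n})) · n ω_n R^{n−1},
|Ω|^{1−2/n} = ω_n^{1−2/n} R^{n−2} = (1/(n(n−1) ω_n^{2/n})) · ((n−1)/R) · n ω_n R^{n−1},
i.e. both (13.45a) and (13.45b) hold with equality on balls.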
(13.45a) is the classical isoperimetric inequality; we give a proof of it employing the Monge-Ampère existence theorem. (13.45b) is given in [Tru94].
Proof. We may assume without loss of generality that 0 ∈ Ω. Let Ω ⊂⊂ B_r(0) ⊂⊂ B_R(0). We want to solve
det D²u = χ in B_R,    u = 0 on ∂B_R,    (13.46)
where χ := χ_Ω denotes the characteristic function of Ω. First we approximate χ by ψ ∈ C²(B_R) with
0 ≤ ψ ≤ χ on B_R,    ψ ≡ 1 on Ω_δ := {x ∈ Ω | dist(x, ∂Ω) > δ},
for sufficiently small δ. From Theorem 13.5.7, we have a solution u ∈ C^{1,1}(B_R) of (13.46) with χ replaced by ψ. Moreover, u ∈ C³({x ∈ B_R | ψ(x) > 0}).
Proof of (13.45a). The Alexandrov maximum principle implies
ω_n sup_Ω |u|^n ≤ (R + r)^n ∫_{B_R} det D²u.
Since u is convex, we deduce
ω_n sup_Ω |Du|^n ≤ ω_n (sup_Ω |u|^n)/(R − r)^n ≤ ((R + r)/(R − r))^n ∫_{B_R} det D²u ≤ ((R + r)/(R − r))^n |Ω|,
and hence
sup_Ω |Du| ≤ (1/ω_n^{1/n}) ((R + r)/(R − r)) |Ω|^{1/n}.    (13.47)
By the arithmetic-geometric inequality, we have
Δu/n ≥ (det D²u)^{1/n} = 1 on Ω_δ.
An integration gives
|Ω_δ| ≤ (1/n) ∫_{Ω_δ} Δu = (1/n) ∫_{∂Ω_δ} ν · Du dH^{n−1} ≤ (1/(n ω_n^{1/n})) ((R + r)/(R − r)) H^{n−1}(∂Ω_δ) |Ω|^{1/n},
by virtue of (13.47). Here ν denotes the unit outer normal. Letting R → ∞ and δ → 0, we find the desired result.
Proof of (13.45b). Recalling the inequality
( ∏_{i=1}^n λ_i )^{1/n} ≤ ( (1/(n(n−1))) Σ_{i≠j} λ_i λ_j )^{1/2}
for λ_i ≥ 0, we see that
(det D²u)^{1/n} ≤ ( (1/(n(n−1))) { (Δu)² − Σ_{i,j} (D_{ij} u)² } )^{1/2}.
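The link between these two displays can be spelled out (a short check): diagonalizing D²u at a fixed point, with eigenvalues λ_1, . . . , λ_n ≥ 0 (u is convex), gives
(Δu)² − Σ_{i,j} (D_{ij} u)² = ( Σ_i λ_i )² − Σ_i λ_i² = Σ_{i≠j} λ_i λ_j,
so the second display is exactly the first applied to the eigenvalues of D²u.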
Integrating the quantity in braces, we obtain
∫_Ω { (Δu)² − Σ_{i,j} (D_{ij} u)² } dx = ∫_Ω (D_{ii} u D_{jj} u − D_{ij} u D_{ij} u) dx
= −∫_Ω D_i u (D_{ijj} u − D_{ijj} u) dx + ∫_{∂Ω} (ν_i D_i u D_{jj} u − ν_j D_i u D_{ij} u) dH^{n−1}
= ∫_{∂Ω} { (ν · Du) Δu − ν_i D_j u D_{ij} u } dH^{n−1}.    (13.48)
For any y ∈ ∂Ω, take the principal coordinate system at y. Again noting that ∂ denotes the tangential gradient, there holds
(13.48) = ∫_{∂Ω} { (ν · Du)(∂_i ∂_i u + (ν · Du) H′) − Σ_{i+j<2n} ν_j D_i u D_{ij} u } dH^{n−1}
= ∫_{∂Ω} { (ν · Du)² H′ + (ν · Du) ∂_i ∂_i u − D_i u D_i(ν · Du) } dH^{n−1}
= ∫_{∂Ω} { (ν · Du)² H′ + (ν · Du) ∂_i ∂_i u − ∂_i u ∂_i(ν · Du) } dH^{n−1}
= ∫_{∂Ω} { (ν · Du)² H′ + 2(ν · Du) ∂_i ∂_i u } dH^{n−1},
by virtue of the integration formula (see [GT01, Lemma 16.1])
∫_{∂Ω} ∂_i φ dH^{n−1} = ∫_{∂Ω} H′ ν_i φ dH^{n−1}.    (13.49)
Since the convexity of u implies
(ν · Du) H′ + ∂_i ∂_i u = D_{ii} u ≥ 0,
we obtain for H′ ≥ 0
∫_Ω { (Δu)² − |D²u|² } dx ≤ ∫_{∂Ω} (K² H′ − 2K ∂_i ∂_i u) dH^{n−1} ≤ K² ∫_{∂Ω} H′ dH^{n−1},
where K := max_{∂Ω} |ν · Du|, by (13.49) again.
Using (13.47), replacing Ω by Ω_δ and letting δ → 0, R → ∞ as before, we obtain (13.45b).
13.7 Exercises
13.1: Let u ∈ C²(Ω) satisfy u ≥ 0 on ∂Ω and
C[u] := Δu − ν_i ν_j D_{ij} u ≥ 0 in Ω,
where
ν := Du/|Du| if |Du| ≠ 0,    ν := 0 otherwise.
Using (13.45b), show that
||u||_{L^{n/(n−2)}(Ω)} ≤ (1/(n(n−1) ω_n^{2/n})) ∫_Ω C[u].
Appendices

Appendix A: Notation

“. . . it’s dull you TWIT; it’ll ’urt more!”
– Alan Rickman

For derivatives, D = ∇ and Du = (∂u/∂x_1, . . . , ∂u/∂x_n).
Bibliography
[Bre97]
Glen E. Bredon, Topology and geometry, Graduate Texts in Mathematics, vol. 139,
Springer-Verlag, New York, 1997, Corrected third printing of the 1993 original. MR
MR1700700 (2000b:55001) 8.7
[CNS84] L. Caffarelli, L. Nirenberg, and J. Spruck, The Dirichlet problem for nonlinear
second-order elliptic equations. I. Monge-Ampère equation, Comm. Pure Appl.
Math. 37 (1984), no. 3, 369–402. MR MR739925 (87f:35096) 13.5.6, 13.5.6
[Eva82] Lawrence C. Evans, Classical solutions of fully nonlinear, convex, second-order elliptic equations, Comm. Pure Appl. Math. 35 (1982), no. 3, 333–363. MR MR649348
(83g:35038) 13.4
[Gia93]
Mariano Giaquinta, Introduction to regularity theory for nonlinear elliptic systems, Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel, 1993. MR
MR1239172 (94g:49002) 9.4
[Giu03]
Enrico Giusti, Direct methods in the calculus of variations, World Scientific Publishing Co. Inc., River Edge, NJ, 2003. MR MR1962933 (2004g:49003) 9.4
[GT01]
David Gilbarg and Neil S. Trudinger, Elliptic partial differential equations of second
order, Classics in Mathematics, Springer-Verlag, Berlin, 2001, REPRINT of the
1998 edition. MR MR1814364 (2001k:35004) 8.3, 13.4, 13.4.2, 13.5.5, 13.6.1
[Hen81] Daniel Henry, Geometric theory of semilinear parabolic equations, Lecture Notes in
Mathematics, vol. 840, Springer-Verlag, Berlin, 1981. MR MR610244 (83j:35084)
10.9
[Ivo90]
N. M. Ivochkina, The Dirichlet problem for the curvature equation of order m,
Algebra i Analiz 2 (1990), no. 3, 192–217. MR MR1073214 (92e:35072) 13.5.6
[Ant80] Stuart S. Antman, The equations for large vibrations of strings, Amer. Math. Monthly 87 (1980), no. 5, 359–370. MR MR567719 (81e:73027) 1.6
[Kry82]
N. V. Krylov, Boundedly inhomogeneous elliptic and parabolic equations, Izv.
Akad. Nauk SSSR Ser. Mat. 46 (1982), no. 3, 487–523, 670. MR MR661144
(84a:35091) 13.4, 13.5.5
[Kry83a]
, Degenerate nonlinear elliptic equations, Mat. Sb. (N.S.) 120(162)
(1983), no. 3, 311–330, 448. MR MR691980 (85f:35096) 13.5.6
[Kry83b]
, Degenerate nonlinear elliptic equations. II, Mat. Sb. (N.S.) 121(163)
(1983), no. 2, 211–232. MR MR703325 (85f:35097) 13.5.6
[Kry87]
, Nonlinear elliptic and parabolic equations of the second order, Mathematics and its Applications (Soviet Series), vol. 7, D. Reidel Publishing Co., Dordrecht, 1987, Translated from the Russian by P. L. Buzytsky [P. L. Buzytskiı̆]. MR
MR901759 (88d:35005) 13.4.4
[Paz83] A. Pazy, Semigroups of linear operators and applications to partial differential equations, Applied Mathematical Sciences, vol. 44, Springer-Verlag, New York, 1983.
MR MR710486 (85g:47061) 10.9
[Roy88] H. L. Royden, Real analysis, third ed., Macmillan Publishing Company, New York,
1988. MR MR1013117 (90g:00004) 8.2, 4, 8.8, 8.8.1, 8.8.2
[RR04]
Michael Renardy and Robert C. Rogers, An introduction to partial differential equations, second ed., Texts in Applied Mathematics, vol. 13, Springer-Verlag, New
York, 2004. MR MR2028503 (2004j:35001) 10.10, 11.1
[Rud91] Walter Rudin, Functional analysis, second ed., International Series in Pure and
Applied Mathematics, McGraw-Hill Inc., New York, 1991. MR MR1157815
(92k:46001) 8.2, 8.2, 8.2
[Saf84]
M. V. Safonov, The classical solution of the elliptic Bellman equation, Dokl. Akad.
Nauk SSSR 278 (1984), no. 4, 810–813. MR MR765302 (86f:35081) 13.4, 13.4.1
[TP85]
Morris Tenenbaum and Harry Pollard, Ordinary differential equations, Dover Publications Inc., Mineola, NY, 1985, New Ed Edition. 8.2
[Tru84]
Neil S. Trudinger, Boundary value problems for fully nonlinear elliptic equations,
Miniconference on nonlinear analysis (Canberra, 1984), Proc. Centre Math. Anal.
Austral. Nat. Univ., vol. 8, Austral. Nat. Univ., Canberra, 1984, pp. 65–83. MR
MR799213 (87a:35083) 13.4.5
[Tru94]
, Isoperimetric inequalities for quermassintegrals, Ann. Inst. H. Poincaré
Anal. Non Linéaire 11 (1994), no. 4, 411–425. MR MR1287239 (95k:52013)
13.6.1
[Tru95]
, On the Dirichlet problem for Hessian equations, Acta Math. 175 (1995),
no. 2, 151–164. MR MR1368245 (96m:35113) 13.5.6
Index
adjoint operator, see operator, adjoint
almost everywhere
criterion, 202
almost orthogonal element, 185
Arzela-Ascoli Theorem, 189
Banach Space, 181
reflexive, 192
separable, 182
Banach’s fixed point theorem, 182
Banach-Alaoglu Theorem, 199
Beppo-Levi Convergence Theorem, see Monotone Convergence Theorem
bidual, 192
bilinear form, 195
bounded, 195
coercive, 195
Caccioppoli inequalities, 285
Caccioppoli’s inequality
elliptic systems, for, 288
Laplace’s Eq., for, 286
Cauchy-Schwarz inequality, 193
compact operator, see operator, compact
Campanato Spaces, 223
L∞ , 229
conservation law, 5
contraction, 182
dual space, 192
canonical imbedding, 192
Egoroff’s Theorem, 203
eigenspace, 189
eigenvalue, 189
multiplicity, 189
eigenvector, 189
Einstein’s Equations, 5
Fatou’s Lemma, 203
Fourier’s Law of Cooling, 4
Fredholm Alternative
Hilbert Spaces, for, 198
normed linear spaces, for, 186–188
Fubini’s Theorem, 203
Heat Equation, 3
steady state solutions, 4
heat flux, 3
Hilbert Space, 193
inner product, see scalar product
integrable function, 202
Laplace’s Equation, 4
Lax-Milgram theorem, 195
Lebesgue’s Dominated Convergence Theorem,
203
Legendre-Hadamard condition, 288
Liouville’s Theorem, 288
Maxwell’s Equations, 5
measurable
function, 202
measure, 201
Hausdorff, 201
Lebesgue, 201
outer, 201
Method of Continuity
for bounded linear operators, 184
metric, 181
minimizing sequence, 194
Monotone Convergence Theorem, 203
Morrey Space, 222
Noether’s Theorem, 5
norm, 181
Normed Linear Space, 181
open
ball, 181
set, 181
operator
adjoint, 192
bounded, 183
compact, 185
parallelogram identity, 194
Picard-Lindelöf Theorem, 183
projection theorem, 194
regularity
linear elliptic eq., for, 285–310
resolvent operator, 189
resolvent set, 189
Riesz-representation theorem, 194
scalar product, 193
Scalar Product Space, 193
Schrödinger’s Equation, 5
set
Lebesgue measurable, 201
σ-ring, 201
simple function, 202
spectral theorem
compact operators, for, 189
spectrum
compact operator, of a, 189
Topological Vector Space, 181
traffic flow, 5
Lighthill-Whitham-Richards model, 6
Transport Equation, 7
triangle inequality, 181, 194
weak convergence, 198
weak-* convergence, 199