SOLUTIONS MANUAL FOR
Applied Functional
Analysis
THIRD EDITION
by
J. Tinsley Oden
Leszek F. Demkowicz
Boca Raton London New York
CRC Press is an imprint of the
Taylor & Francis Group, an informa business
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2018 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Printed on acid-free paper
Version Date: 20171011
International Standard Book Number-13: 978-1-498-76114-7 (Hardback)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity
of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright
holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may
rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized
in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying,
microfilming, and recording, or in any information storage or retrieval system, without written permission from the
publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://
www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For
organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
1 Preliminaries
Elementary Logic and Set Theory
1.1 Sets and Preliminary Notations, Number Sets
Exercises
Exercise 1.1.1 If ℤ = {. . . , −2, −1, 0, 1, 2, . . .} denotes the set of all integers and ℕ = {1, 2, 3, . . .} the set
of all natural numbers, exhibit the following sets in the form A = {a, b, c, . . .}:
(i) {x ∈ ℤ : x² − 2x + 1 = 0}
(ii) {x ∈ ℤ : 4 ≤ x ≤ 10}
(iii) {x ∈ ℕ : x² < 10}
(i) {1}
(ii) {4, 5, 6, 7, 8, 9, 10}
(iii) {1, 2, 3}
1.2 Level One Logic
Exercises
Exercise 1.2.1 Construct the truth table for De Morgan’s Law:
∼ (p ∧ q) ⇔ ((∼ p) ∨ (∼ q))
p  q | p ∧ q | ∼(p ∧ q) | ∼p | ∼q | (∼p) ∨ (∼q) | ∼(p ∧ q) ⇔ ((∼p) ∨ (∼q))
0  0 |   0   |    1     |  1 |  1 |      1      |            1
0  1 |   0   |    1     |  1 |  0 |      1      |            1
1  0 |   0   |    1     |  0 |  1 |      1      |            1
1  1 |   1   |    0     |  0 |  0 |      0      |            1
The last column is identically equal to 1, so the equivalence is a tautology.
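For readers who want a machine check, a short Python sketch along these lines enumerates all truth assignments and confirms the tautology:

```python
from itertools import product

# De Morgan's law: ~(p & q) <=> (~p) | (~q)
assert all((not (p and q)) == ((not p) or (not q))
           for p, q in product([False, True], repeat=2))
print("De Morgan's law holds for all truth assignments")
```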
Exercise 1.2.2 Construct truth tables to prove the following tautologies:
(p ⇒ q) ⇔ (∼ q ⇒∼ p)
∼ (p ⇒ q) ⇔ p ∧ ∼ q
p  q | p ⇒ q | ∼q | ∼p | ∼q ⇒ ∼p | (p ⇒ q) ⇔ (∼q ⇒ ∼p)
0  0 |   1   |  1 |  1 |    1    |          1
0  1 |   1   |  0 |  1 |    1    |          1
1  0 |   0   |  1 |  0 |    0    |          1
1  1 |   1   |  0 |  0 |    1    |          1

p  q | p ⇒ q | ∼(p ⇒ q) | ∼q | p ∧ ∼q | ∼(p ⇒ q) ⇔ (p ∧ ∼q)
0  0 |   1   |    0     |  1 |   0    |          1
0  1 |   1   |    0     |  0 |   0    |          1
1  0 |   0   |    1     |  1 |   1    |          1
1  1 |   1   |    0     |  0 |   0    |          1
Exercise 1.2.3 Construct truth tables to prove the associative laws in logic:
p ∨ (q ∨ r) ⇔ (p ∨ q) ∨ r
p ∧ (q ∧ r) ⇔ (p ∧ q) ∧ r
p  q  r | q ∨ r | p ∨ (q ∨ r) | p ∨ q | (p ∨ q) ∨ r | ⇔
0  0  0 |   0   |      0      |   0   |      0      | 1
0  0  1 |   1   |      1      |   0   |      1      | 1
0  1  0 |   1   |      1      |   1   |      1      | 1
0  1  1 |   1   |      1      |   1   |      1      | 1
1  0  0 |   0   |      1      |   1   |      1      | 1
1  0  1 |   1   |      1      |   1   |      1      | 1
1  1  0 |   1   |      1      |   1   |      1      | 1
1  1  1 |   1   |      1      |   1   |      1      | 1

p  q  r | q ∧ r | p ∧ (q ∧ r) | p ∧ q | (p ∧ q) ∧ r | ⇔
0  0  0 |   0   |      0      |   0   |      0      | 1
0  0  1 |   0   |      0      |   0   |      0      | 1
0  1  0 |   0   |      0      |   0   |      0      | 1
0  1  1 |   1   |      0      |   0   |      0      | 1
1  0  0 |   0   |      0      |   0   |      0      | 1
1  0  1 |   0   |      0      |   0   |      0      | 1
1  1  0 |   0   |      0      |   1   |      0      | 1
1  1  1 |   1   |      1      |   1   |      1      | 1
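The associative laws (and, in the same way, the tautologies of Exercise 1.2.2) can be confirmed by a brute-force sweep over all truth assignments; a possible Python sketch:

```python
from itertools import product

# Check both associative laws over all 8 assignments of (p, q, r).
for p, q, r in product([False, True], repeat=3):
    assert (p or (q or r)) == ((p or q) or r)      # associativity of disjunction
    assert (p and (q and r)) == ((p and q) and r)  # associativity of conjunction
print("Both associative laws are tautologies")
```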
1.3 Algebra of Sets
Exercises
Exercise 1.3.1 Of 100 students polled at a certain university, 40 were enrolled in an engineering course,
50 in a mathematics course, and 64 in a physics course. Of these, only 3 were enrolled in all three
subjects, 10 were enrolled only in mathematics and engineering, 35 were enrolled only in physics and
mathematics, and 18 were enrolled only in engineering and physics.
(i) How many students were enrolled only in mathematics?
(ii) How many of the students were not enrolled in any of these three subjects?
Let A, B, C denote the subsets of students enrolled in mathematics, the engineering course, and physics,
respectively. The sets A ∩ B ∩ C, A ∩ B − (A ∩ B ∩ C), A ∩ C − (A ∩ B ∩ C), and A − (B ∪ C) are
pairwise disjoint (no two sets have a nonempty common part) and their union equals set A, see Fig. 1.1.
Consequently,
#(A − (B ∪ C)) = #A − #(A ∩ B ∩ C) − #(A ∩ B − (A ∩ B ∩ C)) − #(A ∩ C − (A ∩ B ∩ C))
= 50 − 3 − 10 − 35 = 2
In the same way we compute,
#(B − (A ∪ C)) = 9
and
#(C − (A ∪ B)) = 8
Thus, the total number of students enrolled in at least one of the three subjects is
#(A − (B ∪ C)) + #(B − (A ∪ C)) + #(C − (A ∪ B))
+#(A ∩ B − C) + #(A ∩ C − B) + #(B ∩ C − A)
+#(A ∩ B ∩ C)
= 2 + 9 + 8 + 10 + 35 + 18 + 3 = 85
Consequently, 15 students did not enroll in any of the three classes.
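The arithmetic can be replayed in a few lines of Python, with the region counts taken directly from the data of the exercise:

```python
# Region counts taken directly from the exercise data.
all_three     = 3
math_eng_only = 10
math_phys_only = 35
eng_phys_only = 18
math_total, eng_total, phys_total = 50, 40, 64

math_only = math_total - all_three - math_eng_only - math_phys_only   # 2
eng_only  = eng_total  - all_three - math_eng_only - eng_phys_only    # 9
phys_only = phys_total - all_three - math_phys_only - eng_phys_only   # 8

enrolled = (math_only + eng_only + phys_only
            + math_eng_only + math_phys_only + eng_phys_only + all_three)
print(math_only, 100 - enrolled)   # 2 15
```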
Exercise 1.3.2 List all of the subsets of A = {1, 2, 3, 4}. Note: A and ∅ are considered to be subsets of A.
∅,
{1},
{2}, {1, 2},
{3}, {1, 3}, {2, 3}, {1, 2, 3},
{4}, {1, 4}, {2, 4}, {1, 2, 4}, {3, 4}, {1, 3, 4}, {2, 3, 4}, {1, 2, 3, 4}
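A possible Python sketch that enumerates the same 16 subsets (using itertools) is:

```python
from itertools import chain, combinations

A = {1, 2, 3, 4}
# All subsets of A, grouped by size k = 0, 1, ..., 4.
subsets = list(chain.from_iterable(combinations(A, k) for k in range(len(A) + 1)))
print(len(subsets))   # 16 = 2**4
for s in subsets:
    print(set(s) if s else "∅")
```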
Figure 1.1
Illustration of Exercise 1.3.1.
Exercise 1.3.3 Construct Venn diagrams to illustrate the idempotent, commutative, associative, distributive,
and identity laws. Note: some of these are trivially illustrated.
This is a very simple exercise. For example, Fig. 1.2 illustrates the associative law for the union of sets.
Figure 1.2
Venn diagrams illustrating the associative law for the union of sets.
Exercise 1.3.4 Construct Venn diagrams to illustrate De Morgan’s Laws.
Follow Exercise 1.3.3.
Exercise 1.3.5 Prove the distributive laws.
x ∈ A ∩ (B ∪ C)
⇕   (definition of intersection of sets)
x ∈ A and x ∈ (B ∪ C)
⇕   (definition of union of sets)
x ∈ A and (x ∈ B or x ∈ C)
⇕   (tautology: p ∧ (q ∨ r) ⇔ (p ∧ q) ∨ (p ∧ r))
(x ∈ A and x ∈ B) or (x ∈ A and x ∈ C)
⇕   (definition of intersection of sets)
x ∈ (A ∩ B) or x ∈ (A ∩ C)
⇕   (definition of union of sets)
x ∈ (A ∩ B) ∪ (A ∩ C)
In the same way,
x ∈ A ∪ (B ∩ C)
⇕   (definition of union of sets)
x ∈ A or x ∈ (B ∩ C)
⇕   (definition of intersection of sets)
x ∈ A or (x ∈ B and x ∈ C)
⇕   (tautology: p ∨ (q ∧ r) ⇔ (p ∨ q) ∧ (p ∨ r))
(x ∈ A or x ∈ B) and (x ∈ A or x ∈ C)
⇕   (definition of union of sets)
x ∈ (A ∪ B) and x ∈ (A ∪ C)
⇕   (definition of intersection of sets)
x ∈ (A ∪ B) ∩ (A ∪ C)
Exercise 1.3.6 Prove the identity laws.
In each case, one first has to identify and prove the corresponding logical law. For instance, using the
truth tables, we first verify that, if f denotes a false statement, then
p ∨ f ⇔ p
for an arbitrary statement p. This tautology then provides the basis for the corresponding identity law
in algebra of sets:
A ∪ ∅ = A
Indeed,
x ∈ (A ∪ ∅)
⇕   (definition of union of sets)
x ∈ A or x ∈ ∅
⇕   (tautology above)
x ∈ A
The remaining three proofs are analogous.
Exercise 1.3.7 Prove the second of De Morgan’s Laws.
x ∈ A − (B ∩ C)
⇕   (definition of difference of sets)
x ∈ A and x ∉ (B ∩ C)
⇕   (x ∉ D ⇔ ∼(x ∈ D))
x ∈ A and ∼(x ∈ B ∩ C)
⇕   (definition of intersection)
x ∈ A and ∼(x ∈ B ∧ x ∈ C)
⇕   (tautology: p ∧ ∼(q ∧ r) ⇔ (p ∧ ∼q) ∨ (p ∧ ∼r))
(x ∈ A and x ∉ B) or (x ∈ A and x ∉ C)
⇕   (definition of difference of sets)
x ∈ (A − B) or x ∈ (A − C)
⇕   (definition of union)
x ∈ (A − B) ∪ (A − C)
Exercise 1.3.8 Prove that (A − B) ∩ B = ∅.
The empty set is a subset of any set, so the inclusion ∅ ⊂ (A − B) ∩ B is obviously satisfied. To prove the
converse, notice that the statement x ∈ ∅ is equivalent to the statement that x does not exist. Suppose
now, to the contrary, that there exists an x such that x ∈ (A − B) ∩ B. Then
x ∈ (A − B) ∩ B
⇓   (definition of intersection)
x ∈ (A − B) and x ∈ B
⇓   (definition of difference)
(x ∈ A and x ∉ B) and x ∈ B
⇓   (associative law for conjunction)
x ∈ A and (x ∉ B and x ∈ B)
⇓   (p ∧ ∼p is false)
x ∈ A and x ∈ ∅
⇓   (identity law for conjunction)
x ∈ ∅
In fact, the statements above are equivalent.
Exercise 1.3.9 Prove that B − A = B ∩ A′.
x ∈ B − A
⇕   (definition of difference)
x ∈ B and x ∉ A
⇕   (definition of complement)
x ∈ B and x ∈ A′
⇕   (definition of intersection)
x ∈ B ∩ A′
1.4 Level Two Logic
Exercises
Exercise 1.4.1 Use Mathematical Induction to derive and prove a formula for the sum of squares of the first
n positive integers:
∑_{i=1}^{n} i² = 1 + 2² + . . . + n²
This is an “inverse engineering” problem. Based on elementary integration formulas for polynomials,
we expect the formula to take the form:
∑_{i=1}^{n} i² = (αn³ + βn² + γn + δ)/A
In the proof by induction, we will need to show that:
∑_{i=1}^{n} i² + (n + 1)² = ∑_{i=1}^{n+1} i²
This leads to the identity:
(αn³ + βn² + γn + δ)/A + (n + 1)² = (α(n + 1)³ + β(n + 1)² + γ(n + 1) + δ)/A
Comparing coefficients in front of n³, n², n, 1 on both sides, we get the relations:
A = 3α,   2A = 3α + 2β,   A = α + β + γ
This leads to α = A/3, β = A/2, γ = A/6. Choosing A = 6, we get α = 2, β = 3, γ = 1. Validity
of the formula for n = 1 implies that δ = 0. Thus
∑_{i=1}^{n} i² = (2n³ + 3n² + n)/6 = n(n + 1)(2n + 1)/6
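A quick numerical sanity check of the derived formula, for the first few values of n, might look as follows in Python:

```python
# Check sum_{i=1}^{n} i^2 = (2n^3 + 3n^2 + n)/6 = n(n+1)(2n+1)/6 for small n.
for n in range(1, 50):
    direct = sum(i * i for i in range(1, n + 1))
    assert direct == (2 * n**3 + 3 * n**2 + n) // 6 == n * (n + 1) * (2 * n + 1) // 6
print("formula verified for n = 1, ..., 49")
```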
Exercise 1.4.2 Use mathematical induction to prove that the power set of a set U with n elements has 2ⁿ
elements:
#U = n   ⇒   #P(U) = 2ⁿ
The hash symbol # replaces the phrase “number of elements of.”
• T (1). Let U = {a}. Then P(U ) = {∅, {a}}, so #P(U ) = 2.
• T (n) ⇒ T (n + 1). Assume the statement has been proved for every set with n elements. Let
#U = n + 1. Pick an arbitrary element a from set U . The power set of U can then be split into
two families: subsets that do not contain element a and subsets that do contain a:
P(U) = A ∪ B,   where   A = {A ⊂ U : a ∉ A},   B = {B ⊂ U : a ∈ B}
The two families are disjoint, so #P(U) = #A + #B. But A = P(U − {a}) so, by the
assumption of mathematical induction, set A has 2ⁿ elements. On the other hand,
B ∈ B ⇔ B − {a} ∈ P(U − {a})
so family B also has exactly 2ⁿ elements. Consequently, power set P(U) has 2ⁿ + 2ⁿ = 2ⁿ⁺¹
elements, and the proof is finished.
Another way to see the result is to recall Newton’s formula (the binomial theorem):
(a + b)ⁿ = C(n, 0) aⁿb⁰ + C(n, 1) aⁿ⁻¹b¹ + . . . + C(n, n−1) a¹bⁿ⁻¹ + C(n, n) a⁰bⁿ
In the particular case of a = b = 1, Newton’s formula reduces to the identity:
2ⁿ = C(n, 0) + C(n, 1) + . . . + C(n, n−1) + C(n, n)
Recall that Newton’s symbol C(n, k) = n!/(k!(n − k)!) represents the number of k-combinations of n elements, i.e., the number of different subsets with k
elements of a set with n elements. As all subsets of a set U with n elements can be partitioned into
subfamilies of subsets with k elements, k = 0, 1, . . . , n, the right-hand side of the identity above clearly
represents the number of all possible subsets of set U. Obviously, in order to prove the formula above,
we may have to use mathematical induction as well.
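Both counts — the binomial identity and the size of the power set — are easy to confirm numerically; a small Python sketch:

```python
from itertools import combinations
from math import comb

for n in range(0, 10):
    # Binomial identity: 2^n = C(n,0) + ... + C(n,n).
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
    # Count all subsets of an n-element set by size.
    U = range(n)
    n_subsets = sum(1 for k in range(n + 1) for _ in combinations(U, k))
    assert n_subsets == 2 ** n
print("2^n = C(n,0) + ... + C(n,n) and #P(U) = 2^n for n = 0, ..., 9")
```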
1.5 Infinite Unions and Intersections
Exercises
Exercise 1.5.1 Let B(a, r) denote an open ball centered at a with radius r:
B(a, r) = {x : d(x, a) < r}
Here a, x are points in the Euclidean space and d(x, a) denotes the (Euclidean) distance between the
points. Similarly, let B̄(a, r) denote a closed ball centered at a with radius r:
B̄(a, r) = {x : d(x, a) ≤ r}
Notice that the open ball does not include the points on the sphere with radius r, whereas the closed
ball does.
Determine the following infinite unions and intersections:
⋃_{r<1} B(a, r),   ⋂_{r<1} B(a, r),   ⋃_{r<1} B̄(a, r),   ⋂_{r<1} B̄(a, r),
⋃_{1≤r≤2} B(a, r),   ⋂_{1≤r≤2} B(a, r),   ⋃_{1≤r≤2} B̄(a, r),   ⋂_{1≤r≤2} B̄(a, r)
This exercise relies on the understanding of the concepts of infinite union and intersection, use of
quantifiers, and geometric intuition. We will prove that
⋃_{r<1} B(a, r) = B(a, 1)
Indeed,
x ∈ ⋃_{r<1} B(a, r)
⇕   (definition of union)
∃ r < 1 : x ∈ B(a, r)
⇕   (definition of ball)
∃ r < 1 : d(a, x) < r
⇕   (property of real numbers)
d(a, x) < 1
⇕   (definition of ball)
x ∈ B(a, 1)
In the same way, we can prove the following results:
⋃_{r<1} B̄(a, r) = B(a, 1)
⋂_{r<1} B(a, r) = {a}
⋂_{r<1} B̄(a, r) = {a}
⋃_{1≤r≤2} B(a, r) = B(a, 2)
⋃_{1≤r≤2} B̄(a, r) = B̄(a, 2)
⋂_{1≤r≤2} B(a, r) = B(a, 1)
⋂_{1≤r≤2} B̄(a, r) = B̄(a, 1)
Relations
1.6 Cartesian Products, Relations
Exercises
Exercise 1.6.1 Let A = {α, β}, B = {a, b}, and C = {c, d}. Determine
(i) (A × B) ∪ (A × C)
A × B = {(α, a), (β, a), (α, b), (β, b)}
A × C = {(α, c), (β, c), (α, d), (β, d)}
(A × B) ∪ (A × C) = {(α, a), (β, a), (α, b), (β, b), (α, c), (β, c), (α, d), (β, d)}
(ii) A × (B ∪ C)
A × (B ∪ C) = {(α, a), (β, a), (α, b), (β, b), (α, c), (β, c), (α, d), (β, d)}
= (A × B) ∪ (A × C)
(iii) A × (B ∩ C)
A × (B ∩ C) = A × ∅ = ∅
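The identity (A × B) ∪ (A × C) = A × (B ∪ C) observed above can be checked directly with itertools; for instance:

```python
from itertools import product

A, B, C = {'α', 'β'}, {'a', 'b'}, {'c', 'd'}
lhs = set(product(A, B)) | set(product(A, C))   # (A × B) ∪ (A × C)
rhs = set(product(A, B | C))                    # A × (B ∪ C)
assert lhs == rhs
assert set(product(A, B & C)) == set()          # A × (B ∩ C) = ∅
print(sorted(lhs))
```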
Exercise 1.6.2 Let R be the relation < from the set A = {1, 2, 3, 4, 5, 6} to the set B = {1, 4, 6}.
(i) Write out R as a set of ordered pairs.
R = {(1, 4), (2, 4), (3, 4), (1, 6), (2, 6), (3, 6), (4, 6), (5, 6)}
(ii) Represent R graphically as a collection of points in the xy-plane ℝ × ℝ. See Fig. 1.3.
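The set R is also easy to generate by a comprehension; a small Python sketch (the sorted order differs from the listing above):

```python
A = {1, 2, 3, 4, 5, 6}
B = {1, 4, 6}
R = sorted((a, b) for a in A for b in B if a < b)
print(R)   # [(1, 4), (1, 6), (2, 4), (2, 6), (3, 4), (3, 6), (4, 6), (5, 6)]
```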
Exercise 1.6.3 Let R denote the relation R = {(a, b), (b, c), (c, b)} on the set A = {a, b, c}. Determine
whether or not R is (a) reflexive, (b) symmetric, or (c) transitive.
(a) It is not, e.g., (a, a) ∉ R.
(b) It is not, e.g., (a, b) ∈ R but (b, a) ∉ R.
Figure 1.3
Exercise 1.6.2. Illustration of relation R.
(c) It is not, e.g., (a, b) ∈ R and (b, c) ∈ R but (a, c) ∉ R.
Exercise 1.6.4 Let R1 and R2 denote two nonempty relations on set A. Prove or disprove the following:
(i) If R1 and R2 are transitive, so is R1 ∪ R2 .
False. Counterexample: A = {a, b}, R1 = {(a, b)}, R2 = {(b, a)}. Both relations are (vacuously) transitive, but R1 ∪ R2 = {(a, b), (b, a)} is not, since (a, a) ∉ R1 ∪ R2 (see the check following this exercise).
(ii) If R1 and R2 are transitive, so is R1 ∩ R2 .
True. Let (x, y), (y, z) ∈ R1 ∩ R2 . Then (x, y), (y, z) ∈ R1 and (x, y), (y, z) ∈ R2 . Since both
R1 and R2 are transitive, (x, z) ∈ R1 and (x, z) ∈ R2 . Consequently, (x, z) ∈ R1 ∩ R2 .
(iii) If R1 and R2 are symmetric, so is R1 ∪ R2 .
True. Let (x, y) ∈ R1 ∪ R2 . Then (x, y) ∈ R1 or (x, y) ∈ R2 . Thus, (y, x) ∈ R1 or (y, x) ∈ R2 .
Consequently, (y, x) ∈ R1 ∪ R2 .
(iv) If R1 and R2 are symmetric, so is R1 ∩ R2 .
True. Let (x, y) ∈ R1 ∩ R2 . Then (x, y) ∈ R1 and (x, y) ∈ R2 . Thus, (y, x) ∈ R1 and
(y, x) ∈ R2 . Consequently, (y, x) ∈ R1 ∩ R2 .
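A finite check of the counterexample in (i) and of claims (ii)–(iv) on sample relations can be coded as follows (illustrative Python, with arbitrarily chosen test relations):

```python
def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

# Counterexample for (i): both relations are transitive, their union is not.
R1, R2 = {('a', 'b')}, {('b', 'a')}
assert is_transitive(R1) and is_transitive(R2) and not is_transitive(R1 | R2)

# Claims (ii)-(iv) on a concrete pair of symmetric, transitive relations.
S1 = {(1, 1), (1, 2), (2, 1), (2, 2)}
S2 = {(2, 2), (2, 3), (3, 2), (3, 3)}
assert is_transitive(S1 & S2) and is_symmetric(S1 | S2) and is_symmetric(S1 & S2)
print("checks passed")
```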
1.7 Partial Orderings
Exercises
Exercise 1.7.1 Consider the partial ordering of ℝ² from Examples 1.7.2 and 1.7.6. Construct an example of
a set A that has many minimal elements. Can such a set have the least element?
Consider e.g. A = {(2, 1), (1, 2)}. Both elements of A are its minimal elements. A set with multiple
minimal elements cannot have the smallest element. Indeed, suppose a, b ∈ A are two different minimal elements in set A with respect to some partial ordering ≤ on A. Suppose, to the contrary, that
x ∈ A is the smallest element of A. Then x ≤ a and x ≤ b. Since both a and b are minimal
in A, it follows that a = x and b = x. Consequently, a = b, a contradiction.
Exercise 1.7.2 Consider the following relation in ℝ²:
x R y   iff   (x1 < y1 or (x1 = y1 and x2 ≤ y2))
(i) Show that R is a linear (total) ordering of ℝ².
All three axioms are easily verified. The relation is reflexive, since x1 = x1 and x2 ≤ x2 imply
that (x1, x2) R (x1, x2). To show that R is antisymmetric, assume that (x1, x2) R (y1, y2) and
(y1, y2) R (x1, x2). Four different logical cases have to be considered:
(y1 , y2 ) R (x1 , x2 ). Four different logical cases have to be considered:
(a) x1 < y1 and y1 < x1 . Clearly, this is impossible.
(b) x1 = y1 and y1 < x1 . Impossible again.
(c) x1 < y1 and y1 = x1 . Impossible.
(d) x1 = y1 and y1 = x1 . This is possible. But then, x2 ≤ y2 and y2 ≤ x2 which implies that
x2 = y2 . Consequently, x = y.
By considering four similar scenarios, we show that relation R is transitive. Finally, any two points are
comparable: given x, y, either x1 < y1, or y1 < x1, or x1 = y1, in which case x2 ≤ y2 or y2 ≤ x2. Thus the
ordering is total.
(ii) For a given point x ∈ ℝ² construct the set of all points y “greater than or equal to” x, i.e., x R y.
See Fig. 1.4.
Figure 1.4
Exercise 1.7.2. Set {y : x ≤ y}. Notice that the set excludes the points with y1 = x1, y2 < x2.
(iii) Does the set A from Example 1.7.6 have the greatest element with respect to this partial ordering?
Yes, it does. It is the uppermost point on axis x2 in set A.
Exercise 1.7.3 Consider a contact problem for the simply supported beam shown in Fig. 1.5. The set K of
all kinematically admissible deflections w(x) is defined as follows:
K = {w(x) : w(0) = w(l) = 0
and
w(x) ≤ g(x), x ∈ (0, l)}
Figure 1.5
A contact problem for a beam.
where g(x) is an initial gap function specifying the distance between the beam and the obstacle. Let V
be a class of functions defined on (0, l) including the gap function g(x). For elements w ∈ V define
the relation
w R v   (w ≤ v)   iff   w(x) ≤ v(x) for every x ∈ (0, l)
(i) Show that R is a partial ordering of V .
All three axioms are easily verified.
(ii) Show that R is not a linear ordering of V .
Take e.g. w1 (x) = 1 − x and w2 (x) = x. Then neither w1 ≤ w2 nor w2 ≤ w1 , i.e., elements w1
and w2 are not comparable with each other.
(iii) Show that the set K can be rewritten in the form:
K = {w(x) : w(0) = w(l) = 0 and
w ≤ g}
Nothing to show. Just interpret the definition.
Exercise 1.7.4 Let P(A) denote the power class of a set A; then P(A) is partially ordered by the inclusion
relation (see Example 1.7.3). Does P(A) have the smallest and greatest elements?
Yes. The empty set ∅ is the smallest element, and the whole set A is the greatest element in P(A).
1.8 Equivalence Relations, Equivalence Classes, Partitions
Exercises
Exercise 1.8.1
(i) Let T be the set of all triangles in the plane ℝ². Show that “is similar to” is an equivalence relation on T.
The relation is trivially reflexive, symmetric and transitive.
(ii) Let P be the set of all polygons in the plane ℝ². Show that “has the same number of vertices” is
an equivalence relation on P.
an equivalence relation on P .
The relation is trivially reflexive, symmetric and transitive.
(iii) For part (i) describe the equivalence class [T0 ], where T0 is a (unit) right, isosceles triangle with
unit sides parallel to the x- and y-axes.
The class consists of all isosceles right triangles.
(iv) For part (ii) describe the equivalence class [P0 ], where P0 is the unit square
{(x, y) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1}
The class consists of all possible quadrilaterals.
(v) Specify quotient sets corresponding to the relations in (i) and (ii).
The quotient space corresponding to relation (i) consists of disjoint families of different triangles.
Triangles belonging to an equivalence class are similar, i.e., they have the same angles.
The quotient space corresponding to relation (ii) consists of disjoint families of polygons. Each
equivalence class is formed of all polygons with the same number of vertices (edges), e.g. triangles, quadrilaterals, etc.
Exercise 1.8.2 Let A = ℕ × ℕ, where ℕ is the set of natural numbers. The relation
(x, y) R (u, v)   ⇔(def)   x + v = u + y
is an equivalence relation on A. Determine the equivalence classes [(1, 1)], [(2, 4)], [(3, 6)].
Notice that
(x, y) R (u, v)   ⇔   x − u = y − v   ⇔   ∃c ∈ ℤ : u = x + c, v = y + c
Consequently (keeping the pairs in ℕ × ℕ),
[(1, 1)] = {(1 + c, 1 + c) : c ∈ ℤ, c ≥ 0} = {(n, n) : n ∈ ℕ}
[(2, 4)] = {(2 + c, 4 + c) : c ∈ ℤ, c ≥ −1} = {(n, n + 2) : n ∈ ℕ}
[(3, 6)] = {(3 + c, 6 + c) : c ∈ ℤ, c ≥ −2} = {(n, n + 3) : n ∈ ℕ}
Exercise 1.8.3 Let Xι , ι ∈ I, be a partition of a set X. Show that the relation
x ∼ y   ⇔(def)   ∃ι ∈ I : x ∈ Xι and y ∈ Xι
is an equivalence relation on X.
Reflexivity: Let x ∈ X. By definition of a partition, there must be a ι ∈ I such that x ∈ Xι .
Consequently, x ∈ Xι and x ∈ Xι (p ⇒ p ∧ p) which means that x ∼ x.
Symmetry: Let x ∼ y. This means that there exists ι ∈ I such that x ∈ Xι and y ∈ Xι . This implies
(p ∧ q ⇒ q ∧ p) that y ∈ Xι and x ∈ Xι . Consequently, y ∼ x.
Transitivity: Let x ∼ y and y ∼ z. Then, there exist ι, κ ∈ I such that x, y ∈ Xι and y, z ∈ Xκ .
Thus, y ∈ Xι ∩ Xκ . As different members of a partition must be pairwise disjoint, this implies
that ι = κ. Consequently, x, z ∈ Xι = Xκ which proves that x ∼ z.
Exercise 1.8.4 Let ∼ be an equivalence relation on a set X. Consider the corresponding quotient set X|∼ ,
i.e., the partition of X into equivalence classes [x]∼ corresponding to relation ∼. Let ≈ be a (potentially
new, different) relation corresponding to the partition discussed in Exercise 1.8.3. Demonstrate that the
two equivalence relations are identical, i.e.,
x ∼ y   ⇔   x ≈ y
The identity of the two relations is a direct consequence of the definitions:
x ≈ y   ⇔   ∃ [z]∼ ∈ X|∼ : x, y ∈ [z]∼   ⇔   x ∼ y
Exercise 1.8.5 Let Xι , ι ∈ I be a partition of a set X. Let ∼ be the corresponding induced equivalence
relation defined in Exercise 1.8.3. Consider the corresponding (potentially different) partition of X
into equivalence classes with respect to the relation ∼. Prove that the two partitions are identical.
Both partitions constitute families of subsets of X. In order to demonstrate that the two families are
identical, we need to show that a set A is in one family if and only if it is a member of the second
family as well. Consider Xι from the original partition. Let x ∈ Xι be an arbitrary element. As Xι
coincides with the equivalence class of x,
Xι = [x]∼
it is also a member of the second family. Conversely, let [x]∼ be an equivalence class of an element
x ∈ X. There must exist an index ι ∈ I such that x ∈ Xι . Moreover, [x]∼ = Xι and therefore, [x]∼
belongs to the original partition as well.
Functions
1.9 Fundamental Definitions
Exercises
Exercise 1.9.1 Let f : X → Y be an arbitrary function. Let A, B ⊂ Y. Prove that f⁻¹(A − B) =
f⁻¹(A) − f⁻¹(B). In particular, taking A = Y, we get f⁻¹(B′) = f⁻¹(Y − B) = f⁻¹(Y) −
f⁻¹(B) = X − f⁻¹(B) = (f⁻¹(B))′.
The following statements are equivalent to each other:
x ∈ f⁻¹(A − B)
f(x) ∈ (A − B)
(f(x) ∈ A) ∧ ∼(f(x) ∈ B)
(x ∈ f⁻¹(A)) ∧ ∼(x ∈ f⁻¹(B))
x ∈ (f⁻¹(A) − f⁻¹(B))
Exercise 1.9.2 Let f : X → Y be an arbitrary function. Let A, B ⊂ X. Prove that f (A) − f (B) ⊂
f (A − B). Is the inverse inclusion true (in general)?
Let y ∈ f (A) − f (B). There exists then an x ∈ A such that y = f (x). At the same time, it is not true
that there exists an x1 ∈ B such that y = f(x1). Equivalently, for every x1 ∈ B, y ≠ f(x1). Thus the
original x from A cannot be in B and, therefore, x ∈ A − B. Consequently, y ∈ f (A − B).
The inverse inclusion does not hold. Take, for instance, f : ℝ → ℝ, y = x², and consider A =
(−1, 1) and B = (0, 1). Then f(A − B) = f((−1, 0]) = [0, 1), but f(A) − f(B) = {0}.
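A finite analogue of this counterexample (integer points instead of the intervals used above) can be checked directly:

```python
f = lambda x: x * x
A, B = {-1, 0, 1}, {1}

img = lambda S: {f(x) for x in S}   # direct image of a set
print(img(A - B))          # {0, 1}
print(img(A) - img(B))     # {0}  -- a proper subset, so the inverse inclusion fails
```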
Exercise 1.9.3 Let f : X → Y be an arbitrary function. Let Bι ⊂ Y, ι ∈ I be an arbitrary family. Prove
that
f⁻¹( ⋃_{ι∈I} Bι ) = ⋃_{ι∈I} f⁻¹(Bι)    and    f⁻¹( ⋂_{ι∈I} Bι ) = ⋂_{ι∈I} f⁻¹(Bι)
The following statements are equivalent to each other:
x ∈ f⁻¹( ⋃_{ι∈I} Bι )
f(x) ∈ ⋃_{ι∈I} Bι
∃ι ∈ I : f(x) ∈ Bι
∃ι ∈ I : x ∈ f⁻¹(Bι)
x ∈ ⋃_{ι∈I} f⁻¹(Bι)
The proof of the other property is fully analogous.
Exercise 1.9.4 Prove that f⁻¹(D ∩ H) = f⁻¹(D) ∩ f⁻¹(H).
This is a particular case of Exercise 1.9.3(b).
Exercise 1.9.5 Let f : X → Y be a function. Prove that, for an arbitrary set C ⊂ Y,
f⁻¹(R(f) ∩ C) = f⁻¹(C)
Use the result of Exercise 1.9.4 with D = R(f). Notice that f⁻¹(R(f)) = X.
1.10 Compositions, Inverse Functions
Exercises
Exercise 1.10.1 If F is the mapping defined on the set ℝ of real numbers by the rule y = F(x) = 1 + x²,
find F(1), F(−1), and F(1/2).
F(1) = 2,   F(−1) = 2,   F(1/2) = 5/4
Exercise 1.10.2 Let ℕ be the set of all natural numbers. Show that the mapping F : n → 3 + n² is an
injective mapping of ℕ into itself, but it is not surjective.
Let
F(n1) = 3 + n1² = F(n2) = 3 + n2²
Then n1² = n2². But, since n1, n2 are natural numbers (positive), this implies that n1 = n2. Thus, F is
injective.
F is not surjective, though. For instance, there is no n that would be mapped into 2:
3 + n² = 2   ⇒   n² = −1
which is impossible for n ∈ ℕ.
Exercise 1.10.3 Consider the mappings F : n → n + 1, G : n → n² of ℕ into ℕ. Describe the product
mappings FF, FG, GF, and GG.
F²(n) = n + 2,   (FG)(n) = n² + 1,   (GF)(n) = (n + 1)²,   G²(n) = n⁴
Exercise 1.10.4 Show that if f : ℕ → ℕ and f(x) = x + 2, then f is one-to-one but not onto.
f is injective since
f(x1) = x1 + 2 = f(x2) = x2 + 2   ⇒   x1 = x2
R(f) = {2, 3, . . .} is a proper subset of ℕ, so f is not surjective (we assume here that ℕ includes zero).
Exercise 1.10.5 If f is one-to-one from A onto B and g is one-to-one from B onto A, show that (fg)⁻¹ =
g⁻¹ ◦ f⁻¹.
It is sufficient to notice that
(fg)(g⁻¹f⁻¹) = f(gg⁻¹)f⁻¹ = ff⁻¹ = idB
and
(g⁻¹f⁻¹)(fg) = g⁻¹(f⁻¹f)g = g⁻¹g = idB
Exercise 1.10.6 Let A = {1, 2, 3, 4} and consider the sets
f = {(1, 3), (3, 3), (4, 1), (2, 2)}
g = {(1, 4), (2, 1), (3, 1), (4, 2)}
(i) Are f and g functions?
Yes.
(ii) Determine the range of f and g.
R(f ) = {1, 2, 3}, R(g) = {1, 2, 4}.
(iii) Determine f ◦ g and g ◦ f .
f g = {(1, 1), (2, 3), (3, 3), (4, 2)}
gf = {(1, 1), (2, 1), (3, 1), (4, 4)}
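Representing f and g as dictionaries, the two compositions can be computed mechanically; for instance:

```python
f = {1: 3, 3: 3, 4: 1, 2: 2}
g = {1: 4, 2: 1, 3: 1, 4: 2}

fog = {x: f[g[x]] for x in g}   # f ∘ g
gof = {x: g[f[x]] for x in f}   # g ∘ f
print(sorted(fog.items()))      # [(1, 1), (2, 3), (3, 3), (4, 2)]
print(sorted(gof.items()))      # [(1, 1), (2, 1), (3, 1), (4, 4)]
```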
Exercise 1.10.7 Let f : X → Y be a bijection and f⁻¹ its inverse. Show that:
(f⁻¹)(B) = f⁻¹(B)
(direct image of B through the inverse f⁻¹) = (inverse image of B through f)
Let x ∈ (f⁻¹)(B). By definition of the direct image, there exists y ∈ B such that f⁻¹(y) = x. Since
f(x) = f(f⁻¹(y)) = y, this implies that x ∈ f⁻¹(B). Conversely, assume that x ∈ f⁻¹(B). By
the definition of the inverse image, y = f(x) ∈ B. But f is invertible, so x = f⁻¹(y). Consequently,
x ∈ (f⁻¹)(B).
Exercise 1.10.8 Let A = [0, 1] ⊂ ℝ, and let fi : A → A be defined by:
(i) f1(x) = sin x
(ii) f2(x) = sin πx
(iii) f3(x) = sin(πx/2)
Classify each fi as to whether or not it is surjective, injective, or bijective.
(i) f1 is injective (sin is strictly increasing on [0, 1] ⊂ [0, π/2]) but not surjective (its largest value is sin 1 < 1).
(ii) f2 is surjective but not injective (f2(0) = f2(1) = 0).
(iii) f3 is bijective (strictly increasing on [0, 1], with f3(0) = 0 and f3(1) = 1).
Cardinality of Sets
1.11 Fundamental Notions
Exercises
Exercise 1.11.1 Show that if U is a universal set, the relation ∼ is an equivalence relation on P(U ).
Reflexivity. Let A ⊂ U . Use the identity map idA : A → A to show that A ∼ A.
Symmetry. Let A ∼ B, i.e., there exists a bijection TAB : A → B. Use the inverse map TBA =
(TAB)⁻¹ to show that B ∼ A.
Transitivity. Let A ∼ B and B ∼ C. This means that there exist bijections TAB : A → B and
TBC : B → C. Use the composition TAC := TBC ◦ TAB to demonstrate that A ∼ C. Notice that
the composition of two bijections is a bijection as well.
Exercise 1.11.2 Prove the following properties of countable sets:
(i) B countable, A ⊂ B implies A countable.
(ii) A1 , . . . , An countable implies A1 × . . . × An countable.
(iii) An countable, n ∈ ℕ, implies ⋃_{n=1}^{∞} An countable.
(i) If A is finite then A is countable. The only interesting case is when A is infinite. Then B must be
denumerable. Let
b : ℕ ∋ n → bn ∈ B
be the bijection from ℕ onto set B. Define the following function:
a1 = bk,   where k = min{m ∈ ℕ : bm ∈ A}
an = bk,   where k = min{m ∈ ℕ : bm ∈ A − {a1, . . . , an−1}}
By construction, the map n → an is a bijection from ℕ onto the subset A. Consequently, A is denumerable.
(ii) For any countable set A, there exists an injection from set A into ℕ. Thus A1 × . . . × An can be
identified with a subset of ℕ × . . . × ℕ. We have already proved that ℕ × ℕ is countable. We
have
ℕⁿ = ℕⁿ⁻¹ × ℕ
so, by induction, ℕⁿ is countable as well. As A1 × . . . × An can be identified with a subset of
ℕⁿ (there exists an injection from A1 × . . . × An into ℕⁿ), by property (i), set A1 × . . . × An
must be countable.
(iii) We can always assume that the sets An are pairwise disjoint. Indeed,
⋃_{n=1}^{∞} An = ⋃_{n=1}^{∞} Bn,   where   Bn = An − ⋃_{k=1}^{n−1} Ak
and the sets Bn are pairwise disjoint. Let Tn : An → ℕ be an injection. Let x ∈ ⋃_{n=1}^{∞} An.
There exists a unique k such that x ∈ Ak. Define a new map:
T(x) = (k, Tk(x)) ∈ ℕ × ℕ = ℕ²
Map T is an injection. Indeed, if T(x) = T(y) then both x and y belong to the same set Ak, and
Tk(x) = Tk(y), which implies that x = y. Thus ⋃_{n=1}^{∞} An can be identified with a subset of a
countable set and, by property (i), has to be countable as well.
1.12 Ordering of Cardinal Numbers
Exercises
Exercise 1.12.1 Complete the proof of Proposition 1.12.1 by showing that ≤ is a partial ordering of family
F.
Reflexivity. Consider a triple (X, Y, TXY ). X is a subset of itself, so is Y , and TXY is a restriction
of itself to X. This means that
(X, Y, TXY ) ≤ (X, Y, TXY )
Antisymmetry. Let
(X1 , Y1 , TX1 Y1 ) ≤ (X2 , Y2 , TX2 Y2 )
and
(X2 , Y2 , TX2 Y2 ) ≤ (X1 , Y1 , TX1 Y1 )
X1 ⊂ X2 , X2 ⊂ X1 imply that X1 = X2 . Similarly, Y1 = Y2 . If TX1 Y1 is a restriction of TX2 Y2
and X1 = X2 then TX1 Y1 and TX2 Y2 are identical.
Transitivity. Let
(X1 , Y1 , TX1 Y1 ) ≤ (X2 , Y2 , TX2 Y2 )
and
(X2 , Y2 , TX2 Y2 ) ≤ (X3 , Y3 , TX3 Y3 )
X1 ⊂ X2 , X2 ⊂ X3 imply X1 ⊂ X3 (inclusion of sets is transitive). Similarly, Y1 ⊂ Y3 . Finally,
for x ∈ X1 ⊂ X2 ⊂ X3 ,
TX3 Y3 (x) = TX2 Y2 (x)
(since TX2 Y2 is a restriction of TX3 ,Y3 )
= TX1 Y1 (x)
(since TX1 Y1 is a restriction of TX2 ,Y2 )
Exercise 1.12.2 Prove Theorem 1.12.2.
Hint: Establish a one-to-one mapping showing that #A ≤ #P(A) for every set A, and then use
Theorem 1.12.1.
The map prescribing for each element a ∈ A its singleton {a} ∈ P(A) is injective, which proves that
#A ≤ #P(A). According to Theorem 1.11.1, #A ≠ #P(A).
Exercise 1.12.3 Prove that if A is infinite, and B is finite, then A ∪ B ∼ A.
Hint: Use the following steps:
(i) Prove that the assertion is true for a denumerable set.
(ii) Define a family F of couples (X, TX ) where X is a subset of A and TX : X → X ∪ B is a
bijection. Introduce a relation ≤ in F defined as
(X1 , TX1 ) ≤ (X2 , TX2 )
iff X1 ⊂ X2 and TX2 is an extension of TX1 .
(iii) Prove that ≤ is a partial ordering of F.
(iv) Show that family F with its partial ordering ≤ satisfies the assumptions of the Kuratowski–Zorn
Lemma.
(v) Using the Kuratowski–Zorn Lemma, show that A ∪ B ∼ A.
Question: Why do we need the first step?
(i) By replacing B with B − A, we can always assume that A and B are disjoint. Let A =
{a1, a2, a3, . . .}, B = {b1, b2, . . . , bN}. The map
T(ai) = bi if i ≤ N,   T(ai) = ai−N if i > N
is a bijection from A onto A ∪ B.
(iii) The relation is reflexive, antisymmetric and transitive. Indeed,
(X, TX ) ≤ (X, TX )
since X ⊂ X and TX is a restriction of itself. Next,
(X1 , TX1 ) ≤ (X2 , TX2 )
and
(X2 , TX2 ) ≤ (X1 , TX1 )
imply that X1 = X2 and, consequently, TX1 = TX2 . Finally,
(X1 , TX1 ) ≤ (X2 , TX2 )
and
(X2 , TX2 ) ≤ (X3 , TX3 )
imply that X1 ⊂ X3 and TX3 |X1 = TX1 .
(iv) Let (Xι, TXι), ι ∈ I, be a chain in family F. Define
X = ⋃_{ι∈I} Xι
Let x ∈ X. There exists then an index ι ∈ I such that x ∈ Xι. Pick the corresponding map TXι
and set
TX(x) := TXι(x)
The map is well defined, i.e., its value is independent of choice of ι. This is a consequence of
having a chain. Indeed, if x ∈ Xι and x ∈ Xκ then either Xι ⊂ Xκ or Xκ ⊂ Xι . In either case
TXι (x) = TXκ (x)
since TXι is a restriction of TXκ or, vice versa, TXκ is a restriction of TXι . Map TX is also a
bijection from X onto X ∪ B (explain, why?). Thus (X, TX ) is a member of family F. By
construction,
(Xι , TXι ) ≤ (X, TX )
which proves the existence of an upper bound for any chain in F.
(v) By the Kuratowski–Zorn Lemma there exists a maximal element (X, TX) in the family. There
must be X = A. Indeed, if there were any element a ∈ A − X left, one could extend bijection TX
to a bijection TX ∪ {(a, a)} from X ∪ {a} onto (X ∪ {a}) ∪ B. This contradicts the maximality of (X, TX).
Consequently, X = A, which proves that A ∼ A ∪ B.
The first step was necessary to assure that family F is nonempty.
Exercise 1.12.4 Prove that if A is infinite, then A ∼ A × {1, 2}. In other words, any infinite set has “the
same number of elements” as two copies of it.
1. Argue why the result is true for a denumerable set A.
Let A = {a1, a2, a3, . . .}. Define the map T:
T(n) = (ak, 1) if n = 2k − 1,   T(n) = (ak, 2) if n = 2k
The map is a bijection from ℕ onto A × {1, 2}.
2. Define a family F of couples (X, TX ), where X ⊂ A is infinite and TX : X → X × {1, 2} is a
bijection. Prove that the following relation is a partial ordering in F.
(X1 , TX1 ) ≤ (X2 , TX2 )
iff
X1 ⊂ X2 and TX2 is an extension of TX1
Standard reasoning. The relation is reflexive since X ⊂ X and TX is an extension of itself.
It is antisymmetric. Indeed, (X1 , TX1 ) ≤ (X2 , TX2 ) and (X2 , TX2 ) ≤ (X1 , TX1 ) imply that
X1 = X2 and, consequently, TX1 = TX2. Similar arguments are used to prove transitivity.
3. Use the Kuratowski-Zorn Lemma to conclude that F has a maximal element (X, TX ).
Let (Xι, TXι), ι ∈ I, be a chain. Define
X = ⋃_{ι∈I} Xι,   TX(x) = TXι(x), where x ∈ Xι
Then the map TX is well defined and is a bijection from X onto X × {1, 2}, so the pair (X, TX) is
an upper bound for the chain.
Consequently, by the Kuratowski-Zorn Lemma, there exists a maximal element (X, TX ).
4. Use the existence of the maximal element to conclude the theorem.
Let Y be the complement of X in A. We claim that Y must be finite. Assume to the contrary
that Y is infinite. Let Y0 be a denumerable subset of Y . By Step 1 of the proof, there exists a
bijection TY0 from Y0 onto Y0 × {1, 2}. Then X ∪ Y0 with the union of maps TX and TY0 defines
an element in F that is strictly greater than (X, TX ), a contradiction.
Now, by Exercise 1.12.3,
A ∼ X ∼ X × {1, 2} ∼ A × {1, 2}
since X ∼ A implies X × {1, 2} ∼ A × {1, 2}.
Question: Where do we need the first step ?
Twice, to assure that family F is non-empty and in the last step.
Exercise 1.12.5 Let A be an infinite set, and B another set such that #B ≤ #A. Prove that A ∪ B ∼ A.
Hint: Use result from Exercise 1.12.4.
Replacing B with B − A, we can always assume that A and B are disjoint. If B is finite, the claim is just
Exercise 1.12.3, so assume B is infinite. Let A0 be a subset of A such that A0 ∼ B. By the result from
Exercise 1.12.4, there exists a bijection T from A0 × {1, 2} onto A0 or, equivalently, from A0 ∪ B onto A0.
The union of T and the identity on A − A0 then defines a bijection from A ∪ B onto A.
Exercise 1.12.6 Prove Cantor–Bernstein Theorem: If #A ≤ #B and #B ≤ #A then #A = #B.
Hint: Proof is easy if you use result from Exercise 1.12.5.
Proof is trivial if the sets are finite. Assume the sets are infinite. By Exercise 1.12.5, #A ≤ #B implies
#B = #(B ∪ A). By the same token, #B ≤ #A implies #A = #(A ∪ B). Thus #A = #B.
Exercise 1.12.7 Prove that if A is infinite, A × A ∼ A.
Hint: Use the following steps:
(i) Recall that IN × IN ∼ IN (recall Example 1.11.1).
(ii) Define a family F of couples (X, TX ) where X is an infinite subset of A and TX : X → X × X
is a bijection. Introduce a relation ≤ in F defined as
(X1 , TX1 ) ≤ (X2 , TX2 )
iff X1 ⊂ X2 and TX2 is an extension of TX1 .
(iii) Prove that ≤ is a partial ordering of F .
(iv) Show that family F with its partial ordering ≤ satisfies the assumptions of the Kuratowski–Zorn
Lemma (recall the proof of Proposition 1.12.1).
(v) Using the Kuratowski–Zorn Lemma, show that X ∼ X × X.
Question: Why do we need the first step?
Follow precisely the lines in Exercise 1.12.3 to arrive at the existence of a maximal element (X, TX )
in the family. Let Y = A − X. We will consider two cases.
Case: #Y ≤ #X. By Exercise 1.12.5, A ∼ X, i.e., #A = #X. We then have
#A = #X = #(X × X) = #(A × A)
Case: #X < #Y . In this case, we can split Y into two disjoint subsets, Y = Y1 ∪ Y2 with Y1 ∼ X.
We claim that
Y1 ∼ (X × Y1 ) ∪ (Y1 × X) ∪ (Y1 × Y1 )
Indeed, since #Y1 = #X, we have
# ((X × Y1 ) ∪ (Y1 × X) ∪ (Y1 × Y1 )) = # ((X × X) × {1, 2, 3}) = #(X × X) = #X
But this contradicts that (X, TX ) is the maximal element. Indeed, by the equivalence above, there
exists a bijection from Y1 onto (X × Y1 ) ∪ (Y1 × X) ∪ (Y1 × Y1 ). We can use it then to extend
TX : X → X × X to a bijection from X ∪ Y1 onto
(X ∪ Y1 ) × (X ∪ Y1 ) = (X × X) ∪ (X × Y1 ) ∪ (Y1 × X) ∪ (Y1 × Y1 )
Step (i) was necessary to assure that family F is nonempty.
Foundations of Abstract Algebra
1.13 Operations, Abstract Systems, Isomorphisms
Exercises
Exercise 1.13.1 Determine the properties of the binary operations ∗ and % defined on a set A = {x, y, z} by
the tables below:
∗ | x  y  z        % | x  y  z
x | x  y  z        x | x  y  z
y | y  y  x        y | y  z  x
z | z  x  x        z | z  x  y
Verify first that both operations are commutative (the tables are symmetric). Once the commutativity
has been established, one needs to consider only one case to show that both operations are also associative. Both are also distributive with respect to each other. Finally, element x is an identity element
for both operations. None of the operations is idempotent.
Exercise 1.13.2 Let ∗ be a binary operation defined on the set of integers ℤ such that a ∗ b = a²b, where
a, b ∈ ℤ. Discuss the distributive properties of ∗ with respect to addition + on ℤ.
The operation is distributive with respect to b but not with respect to a.
Exercise 1.13.3 If ∗ is a binary operation on ℤ defined by a ∗ b = ab for a, b ∈ ℤ, is ∗ commutative? Is it
associative? Is it distributive with respect to −?
The answer to all three questions is “yes”.
Exercise 1.13.4 Let S and J denote two systems with binary operations ∗ and ◦, respectively. If S and J
are isomorphic, i.e., there exists an isomorphism F : S → J , show that if:
(i) The associative law holds in S, then it holds in J .
Pick arbitrary elements a, b, c ∈ J. We have
(a ◦ b) ◦ c = F((F⁻¹(a) ∗ F⁻¹(b)) ∗ F⁻¹(c))   (definition of isomorphism)
= F(F⁻¹(a) ∗ (F⁻¹(b) ∗ F⁻¹(c)))   (operation ∗ is associative)
= a ◦ (b ◦ c)   (definition of isomorphism)
(ii) S = {A, ∗}, J = {B, ◦}, and if e ∈ A is an identity element in S, then its corresponding element
f ∈ B, f = F (e) is an identity element in J .
Let b ∈ B. Then
b ◦ f = F(F⁻¹(b) ∗ e)   (definition of isomorphism)
= F(F⁻¹(b))   (e is the identity element in A)
= b   (F⁻¹ is the inverse of F)
Exercise 1.13.5 Let S denote the system consisting of the set R = {4n : n ∈ ℕ}, where ℕ is the set of
natural numbers, and the operation of addition +. Let J denote the set ℕ plus addition. Show that S
and J are isomorphic.
Map
F : ℕ ∋ n → F(n) = 4n ∈ R
is an isomorphism. Indeed, F is trivially a bijection, and
F (n + m) = 4(n + m) = 4n + 4m = F (n) + F (m)
1.14 Examples of Abstract Systems
Exercises
Exercise 1.14.1 Let ℤ be the set of integers and let ◦ denote an operation on ℤ such that a ◦ b = a + b − ab
for a, b ∈ ℤ. Show that {ℤ, ◦} is a semi-group.
The integers are closed with respect to the operation. We need to verify only that the operation is
associative. We have
(a ◦ b) ◦ c = (a + b − ab) ◦ c = (a + b − ab) + c − (a + b − ab)c = a + b + c − ab − ac − bc + abc
Similar calculation shows that a ◦ (b ◦ c) yields the same result.
Exercise 1.14.2 Let a, b, and c be elements of a group G = {G, ∗}. If x is an arbitrary element of this group,
prove that the equation (a ∗ x) ∗ b ∗ c = b ∗ c has a unique solution x ∈ G.
Multiply (in the sense of the ∗ operation) both sides by (b ∗ c)⁻¹ to get a ∗ x = e, where e is the identity
element. Multiply next by a⁻¹ to reveal that x = a⁻¹. Conversely, set x = a⁻¹ to verify that the
equation is satisfied.
Exercise 1.14.3 Classify the algebraic systems formed by:
(a) The irrational numbers (plus zero) under addition.
The set is not closed with respect to the operation: e.g., 2 + √2 and 3 − √2 are in the set, but
(2 + √2) + (3 − √2) = 5 is not. Thus, this is not even a groupoid.
(b) The rational numbers under addition.
This is a group. All axioms are easily verified. Number 0 is the identity.
(c) The irrational numbers under multiplication.
Again, the set has no algebraic structure as it is not closed with respect to the operation.
Exercise 1.14.4 Determine which of the following systems are groups with respect to the indicated operations:
(a) S = {{x ∈ ℤ : x < 0}, addition}
(b) S = {{x : x = 5y, y ∈ ℤ}, addition}
(c) S = {{−4, −1, 4, 1}, multiplication}
(d) S = {{z ∈ ℂ : |z| = 1}, multiplication}
Here ℤ is the set of integers and ℂ is the complex-number field.
Sets (b) and (d) are groups. Set (a) is closed under addition but has no identity element (0 is not negative), so it is not a group. Set (c) is not closed with respect to multiplication (e.g., (−4) · 4 = −16 is not in the set).
Exercise 1.14.5 Show that the integers ℤ, the rationals ℚ, and the reals ℝ are rings under the operations of
ordinary addition and multiplication.
This is trivial verification of all involved axioms.
Exercise 1.14.6 Show that the system {{a, b}, ∗, #} with ∗ and # defined by
∗ | a  b        # | a  b
a | a  b        a | a  a
b | b  a        b | a  b
is a ring.
Indeed, {{a, b}, ∗} is a commutative group with a being the zero element. With respect to #, the element a is absorbing and b # b = b, so {{a, b}, #} is trivially a semi-group. The distributive properties are satisfied.
Exercise 1.14.7 Which of the following algebraic systems are rings?
(a) S = {{3x : x ∈ ℤ}, +, ·}
(b) S = {{x + 2 : x ∈ ℤ}, +, ·}
Here + and · denote ordinary addition and multiplication.
Both. The second set is simply ℤ in disguise. In the first case, it is sufficient to notice that the set is
closed with respect to both operations. All properties follow then directly from the fact that, with these
operations, ℤ is a ring (S is a sub-ring of ℤ).
Exercise 1.14.8 Let A = P(A) = the set of all subsets of a given set A. Consider the system S =
{A, ⊗, #}, where, if B and C are sets in A,
B ⊗ C = (B ∪ C) − (B ∩ C)
and
B#C = B ∩ C
Show that S is a commutative ring.
All axioms are directly verified. In the algebraic manipulations, it is wise to reduce the operations on sets
to unions, intersections, and complements. To begin with,
A ⊗ B = (A ∪ B) − (A ∩ B) = (A ∪ B) ∩ (A ∩ B)′ = (A ∪ B) ∩ (A′ ∪ B′) = (A ∩ B′) ∪ (A′ ∩ B)
To see that operation ⊗ is associative, compute
(A ⊗ B) ⊗ C = (A ∩ B′ ∩ C′) ∪ (A′ ∩ B ∩ C′) ∪ (A′ ∩ B′ ∩ C) ∪ (A ∩ B ∩ C)
and verify that A ⊗ (B ⊗ C) yields an identical result. The empty set ∅ is the zero element. Each
element is equal to its inverse (interesting, is it not?). {A, ∩} is trivially a commutative semi-group.
Finally, the distributive property is directly verified. We have
A ∩ (B ⊗ C) = A ∩ ((B ∩ C′) ∪ (B′ ∩ C)) = (A ∩ B ∩ C′) ∪ (A ∩ B′ ∩ C)
At the same time,
(A ∩ B) ⊗ (A ∩ C) = ((A ∩ B) ∩ (A ∩ C)′) ∪ ((A ∩ B)′ ∩ (A ∩ C))
= ((A ∩ B) ∩ (A′ ∪ C′)) ∪ ((A′ ∪ B′) ∩ (A ∩ C))
= (A ∩ B ∩ C′) ∪ (B′ ∩ A ∩ C)
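The ring identities can also be spot-checked numerically on random subsets of a small universe (Python's ^ and & are exactly ⊗ and #); an illustrative sketch:

```python
import random
from itertools import chain, combinations

U = list(range(6))
# All subsets of the universe U.
P = [set(c) for c in chain.from_iterable(combinations(U, k) for k in range(len(U) + 1))]

random.seed(0)
for _ in range(200):
    A, B, C = random.choice(P), random.choice(P), random.choice(P)
    assert (A ^ B) ^ C == A ^ (B ^ C)          # ⊗ (symmetric difference) is associative
    assert A & (B ^ C) == (A & B) ^ (A & C)    # # = ∩ distributes over ⊗
    assert A ^ set() == A and A ^ A == set()   # ∅ is the zero element; each element is its own inverse
print("ring identities hold on all sampled subsets")
```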
Exercise 1.14.9 Let B denote the set of ordered quadruples of real numbers of the form (a, b, −b, a), a, b ∈ ℝ.
Consider the system B = {B, ⊕, ⊙}, where
(a, b, −b, a) ⊕ (c, d, −d, c) = (a + c, b + d, −b − d, a + c)
(a, b, −b, a) ⊙ (c, d, −d, c) = (ac − bd, ad + bc, −ad − bc, ac − bd)
Determine if the system B is a field.
Yes, it is. If we identify a complex number z with a pair of real numbers (x, y), the system is isomorphic
with ordered pairs of complex numbers (z, Rz), where R denotes the rotation by 90 degrees in the
complex plane. The operations are defined as
(z1, Rz1) + (z2, Rz2) := (z1 + z2, R(z1 + z2))
and
(z1, Rz1) · (z2, Rz2) := (z1z2, R(z1z2))
It is somewhat easier now to verify that the set is a field. This is a consequence of the facts that ℂ is a
field, and the rotation R is an isomorphism in ℂ. Please verify all axioms one by one if you cannot see
it.
Exercise 1.14.10 Show that the set of polynomials defined on S, where {S, +, ∗} is a ring, with the operations defined in Example 1.14.8 forms a ring.
First of all, the set is closed with respect to the operations: the sum or the product of two polynomials
is also a polynomial (possibly of a higher degree). The rest of the proof reduces to a direct verification
of the axioms. The main difficulty stems from the fact that the symbols for the operations have been
overloaded.
For instance, the proof of commutativity of function addition goes as follows.
(f + g)(x) = f(x) + g(x)   (definition of function addition; note the different meaning of the plus signs)
= g(x) + f(x)   (commutativity of addition in the ring)
= (g + f)(x)   (definition of function addition)
Elementary Topology in ℝⁿ
1.16 The Real Number System
Exercises
Exercise 1.16.1 Let A, B ⊂ ℝ be two nonempty sets of real numbers. Let
C = {x + y : x ∈ A, y ∈ B}
(set C is called the algebraic sum of sets A and B). Prove that
sup C = sup A + sup B
Let x ∈ A, y ∈ B. Then
x + y ≤ sup A + sup B
So, sup A + sup B is an upper bound for C. To prove that it is the least upper bound, we need to
consider two cases.
Case 1: Sets A and B are bounded. Pick an arbitrary ε > 0. Since both sup A and sup B are the least
upper bounds, we can find elements x ∈ A and y ∈ B such that
sup A − x < ε/2   and   sup B − y < ε/2
Consequently,
sup A + sup B − (x + y) < ε,   or   x + y > sup A + sup B − ε
Since ε is arbitrarily small, this proves that there is no upper bound for set C that is smaller than
sup A + sup B.
Case 2: sup A = ∞ or sup B = ∞. If one of the sets is unbounded, then set C is also unbounded, and
sup C = ∞ = sup A + sup B.
Exercise 1.16.2 Let f, g be two functions defined on a common set X with values in ℝ. Prove that
f(x) ≤ g(x) ∀x ∈ X   ⇒   sup_{x∈X} f(x) ≤ sup_{x∈X} g(x)
In other words, we can always pass to the supremum in the (weak) inequality.
We have
f(x) ≤ g(x) ≤ sup_X g   ∀x ∈ X
Thus sup_X g is an upper bound for f. Consequently, sup_X f, being the least upper bound, must be less
than or equal to sup_X g.
Exercise 1.16.3 Let f(x, y) be a function of two variables x and y defined on a set X × Y, with values in ℝ.
Define:
g(x) = sup_{y∈Y} f(x, y),   h(y) = sup_{x∈X} f(x, y)
Prove that
sup_{(x,y)∈X×Y} f(x, y) = sup_{x∈X} g(x) = sup_{y∈Y} h(y)
In other words,
sup_{(x,y)∈X×Y} f(x, y) = sup_{x∈X} ( sup_{y∈Y} f(x, y) ) = sup_{y∈Y} ( sup_{x∈X} f(x, y) )
Passing to the supremum in (x, y) ∈ X × Y on both sides of (comp. Exercise 1.16.2)
f(x, y) ≤ g(x)
we get
sup_{x,y} f(x, y) ≤ sup_{x,y} g(x) = sup_x g(x)
On the other side, from
g(x) = sup_y f(x, y) ≤ sup_{x,y} f(x, y)
it follows that sup_x g(x), being the least upper bound of g(x), must be bounded by sup_{x,y} f(x, y).
The second relation is proved in the same way.
Exercise 1.16.4 Using the notation of the previous exercise, show that
sup_{y∈Y} ( inf_{x∈X} f(x, y) ) ≤ inf_{x∈X} ( sup_{y∈Y} f(x, y) )
Construct a counterexample showing that, in general, the equality does not hold.
Passing to the infimum in x ∈ X on both sides of
f(x, y) ≤ sup_y f(x, y)
we get
inf_x f(x, y) ≤ inf_x sup_y f(x, y)   ∀y ∈ Y
The right-hand side is thus an upper bound for the function of y represented by the left-hand side. Thus
sup_y inf_x f(x, y) ≤ inf_x sup_y f(x, y)
For X = Y = (0, ∞) and f(x, y) = xy, the right-hand side is ∞ whereas the left-hand side is 0.
Exercise 1.16.5 If |x| denotes the absolute value of x ∈ ℝ, defined as
|x| = x if x ≥ 0,   |x| = −x otherwise,
prove that |x| ≤ a if and only if −a ≤ x ≤ a.
Case: x ≥ 0 trivial.
Case: x < 0. |x| = −x ≤ a is equivalent to x ≥ −a.
Exercise 1.16.6 Prove the classical inequalities (including the triangle inequality) involving the absolute
values
| |x| − |y| | ≤ |x ± y| ≤ |x| + |y|
for every x, y ∈ ℝ.
The inequalities
x + y ≤ |x| + |y|   and   −|x| − |y| ≤ x + y
prove that (comp. Exercise 1.16.5)
|x + y| ≤ |x| + |y|
Replace y with −y to get
|x − y| ≤ |x| + |−y| = |x| + |y|
Now,
| |x| − |y| | ≤ |x + y|
is equivalent to
−|x + y| ≤ |x| − |y| ≤ |x + y|
The inequality on the left can be rewritten as
|y| ≤ |x| + |x + y|
and follows from the triangle inequality proved in the first part of this exercise,
|y| = |−y| = |x − (x + y)| ≤ |x| + |x + y|
The inequality on the right is proved in the same way. Finally, replacing y with −y, we get the version
corresponding to the minus sign in the middle term.
Exercise 1.16.7 Prove the Cauchy–Schwarz inequality
| ∑_{i=1}^{n} xᵢyᵢ | ≤ ( ∑_{i=1}^{n} xᵢ² )^{1/2} ( ∑_{i=1}^{n} yᵢ² )^{1/2}
where xᵢ, yᵢ ∈ ℝ, i = 1, . . . , n.
Hint: Use the inequality
∑_{i=1}^{n} (xᵢλ + yᵢ)² ≥ 0
for every λ ∈ ℝ.
We have
( ∑_{i=1}^{n} xᵢ² ) λ² + 2 ( ∑_{i=1}^{n} xᵢyᵢ ) λ + ∑_{i=1}^{n} yᵢ² ≥ 0   ∀λ
that is, aλ² + bλ + c ≥ 0 with a = ∑ xᵢ², b = 2 ∑ xᵢyᵢ, c = ∑ yᵢ². This implies that
Δ = b² − 4ac = 4 [ ( ∑_{i=1}^{n} xᵢyᵢ )² − ( ∑_{i=1}^{n} xᵢ² )( ∑_{i=1}^{n} yᵢ² ) ] ≤ 0
from which the final inequality follows.
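A numerical spot check of the inequality on random vectors (illustrative Python, with a small tolerance for round-off) is:

```python
import math
import random

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 6)
    x = [random.uniform(-5, 5) for _ in range(n)]
    y = [random.uniform(-5, 5) for _ in range(n)]
    lhs = abs(sum(xi * yi for xi, yi in zip(x, y)))
    rhs = math.sqrt(sum(xi * xi for xi in x)) * math.sqrt(sum(yi * yi for yi in y))
    assert lhs <= rhs + 1e-12
print("Cauchy–Schwarz verified on 1000 random samples")
```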
Exercise 1.16.8 Use the Cauchy–Schwarz inequality to prove the triangle inequality
d(x, y) ≤ d(x, z) + d(z, y)
for every x, y, z ∈ ℝⁿ.
Denote (the Euclidean norm in ℝⁿ)
‖x‖ = ( ∑_{i=1}^{n} xᵢ² )^{1/2}
We have
‖x + y‖² = ∑_{i=1}^{n} (xᵢ + yᵢ)²
= ∑_{i=1}^{n} xᵢ² + 2 ∑_{i=1}^{n} xᵢyᵢ + ∑_{i=1}^{n} yᵢ²
≤ ∑_{i=1}^{n} xᵢ² + 2 ( ∑_{i=1}^{n} xᵢ² )^{1/2} ( ∑_{i=1}^{n} yᵢ² )^{1/2} + ∑_{i=1}^{n} yᵢ²   (Cauchy–Schwarz inequality)
= ( ‖x‖ + ‖y‖ )²
which gives the triangle inequality for the norm:
‖x + y‖ ≤ ‖x‖ + ‖y‖
From this, the triangle inequality for the Euclidean distance (metric) follows:
d(x, y) = ‖x − y‖ = ‖x − z + z − y‖ ≤ ‖x − z‖ + ‖z − y‖ = d(x, z) + d(z, y)
1.17 Open and Closed Sets
Exercises
Exercise 1.17.1 Prove the properties of the closed sets (Proposition 1.16.3) directly, i.e., using the definition
of a closed set only, without invoking the duality argument.
(i) The empty set ∅ and the whole space ℝⁿ are closed.
The set of accumulation points of the empty set is empty itself, so adding it to the empty set keeps
the set still empty. By definition, accumulation points of a set are points from ℝⁿ, so ℝⁿ contains
all its accumulation points. Of course, all points in ℝⁿ are its accumulation points.
(ii) Fι closed, ι ∈ I, implies ⋂_{ι∈I} Fι closed.
We have to demonstrate that ⋂_{ι∈I} Fι contains all its accumulation points. Let x be an accumulation point of the set. By definition, this means that
A ∩ ( ⋂_{ι∈I} Fι ) − {x} ≠ ∅,   for every neighborhood A of x
This in turn implies that
A ∩ Fι − {x} ≠ ∅,   for every neighborhood A of x,   ∀ι ∈ I
In other words, x is an accumulation point of each of the sets Fι. As every set Fι is closed, i.e., it
contains all its accumulation points, x ∈ Fι, ∀ι ∈ I. Consequently x ∈ ⋂_{ι∈I} Fι.
(iii) F1, F2, . . . , Fn closed implies F1 ∪ F2 ∪ . . . ∪ Fn closed.
We will prove it for the case n = 2. The general case follows then by induction. We will work
now with balls rather than general neighborhoods. Let x be an accumulation point of F1 ∪ F2.
It is sufficient to show that x is an accumulation point of F1 or F2. Indeed, since both sets are
closed, x will be an element of F1 or F2 and, therefore, it will belong to the union F1 ∪ F2 as well.
Assume now, to the contrary, that x is an accumulation point of neither F1 nor F2. This means
that there exist balls B(x, εi), i = 1, 2, such that B(x, εi) ∩ Fi − {x} = ∅. Take ε = min{ε1, ε2}.
Then
B(x, ε) ∩ (F1 ∪ F2) − {x} = (B(x, ε) ∩ F1 − {x}) ∪ (B(x, ε) ∩ F2 − {x}) = ∅
which contradicts the fact that x is an accumulation point of F1 ∪ F2.
Exercise 1.17.2 Let intA denote the interior of a set A ⊂ ℝⁿ. Prove that the following properties hold:
(i) If A ⊂ B then intA ⊂ intB
Let x ∈ intA. There exists then a neighborhood C of x such that C ⊂ A. Consequently, C ⊂ B
and, therefore, x is also an interior point of set B.
(ii) int(intA) = intA
We start with an observation that an open ball B(x, r) is a neighborhood for all of its points.
Indeed, if y ∈ B(x, r) then it is sufficient to take � < r − d(x, y) to have B(y, �) ⊂ B(x, r). In
particular, as we have discussed in the text, every open ball is open; i.e., int(B(x, r)) = B(x, r).
By definition, intA ⊂ A and, by Property (i), int(intA) ⊂ intA. It remains thus to prove the
inverse inclusion only. Let x ∈ intA. There exists then a ball B(x, r) ⊂ A. From the fact that
open balls are open, it follows now that B(x, r) = intB(x, r) ⊂ int(intA). Consequently, x is
also an interior point of intA.
(iii) int(A ∪ B) ⊃ intA ∪ intB
Let x ∈ (intA ∪ intB). Then x ∈ intA or x ∈ intB. In the first case, there exists a neighborhood
C of x such that C ⊂ A. Consequently, C ⊂ (A ∪ B) as well and, therefore, x is an interior point
of A ∪ B. Case x ∈ intB is analogous.
You can also show this by utilizing the properties of open sets and already-proved Properties (i)
and (ii). By definition, intA ⊂ A ⊂ A ∪ B. The same holds for set B, so intA ∪ intB ⊂ A ∪ B.
By property (i), int(intA ∪ intB) ⊂ int(A ∪ B). But, by Property (ii), both intA and intB are
open and, since the union of open sets is open, int(intA ∪ intB) = intA ∪ intB, so the inclusion
holds.
Notice that, in general, we do not have an equality here. Take, for instance, A = [1, 2], B = [2, 3].
Then int(A ∪ B) = (1, 3) but intA ∪ intB = (1, 3) − {2} only.
(iv) int(A ∩ B) = intA ∩ intB
Let x ∈ int(A ∩ B). There exists then a neighborhood C of x such that C ⊂ A ∩ B. Thus C ⊂ A and C ⊂ B. Consequently, x ∈ intA and x ∈ intB, so x ∈ intA ∩ intB. Conversely, let x ∈ intA and x ∈ intB. There exist then balls B(x, ε1) and B(x, ε2) such that B(x, ε1) ⊂ A and B(x, ε2) ⊂ B. Take ε = min{ε1, ε2}. The ball B(x, ε) is then contained in both sets A and B and, therefore, in A ∩ B. Consequently, x is an interior point of A ∩ B. Notice that in proving the reverse inclusion, we have worked with balls rather than general neighborhoods.
Exercise 1.17.3 Let Ā denote the closure of a set A ⊂ ℝⁿ. Prove that the following properties hold:
(i) If A ⊂ B then Ā ⊂ B̄
Let x ∈ Ā. Then C ∩ A ≠ ∅, for each neighborhood C of x. Consequently, C ∩ B ⊃ C ∩ A must also be nonempty and, therefore, x is a cluster point of set B as well.
Notice that in proving this and the next properties, it is convenient to operate with the notion of
the cluster point rather than the accumulation point.
(ii) (Ā)¯ = Ā
By Property (i), Ā ⊂ (Ā)¯ and we need to prove the reverse inclusion only. Let x ∈ (Ā)¯. This means that, for every ball B(x, r),
B(x, r) ∩ Ā ≠ ∅
We claim that B(x, r) ∩ A ≠ ∅ as well. Indeed, let us take a point y ∈ B(x, r) ∩ Ā, and a ball B(y, ε) with ε < r − d(x, y). Since y ∈ Ā, B(y, ε) ∩ A must be nonempty. But
(B(y, ε) ∩ A) ⊂ (B(x, r) ∩ A)
so B(x, r) ∩ A must be nonempty as well.
(iii) (A ∩ B)¯ ⊂ Ā ∩ B̄
We have A ∩ B ⊂ A. By Property (i), (A ∩ B)¯ ⊂ Ā. The same holds for set B, so (A ∩ B)¯ ⊂ Ā ∩ B̄.
Notice that we may have a proper inclusion. Take, e.g., A = (1, 2), B = (2, 3). Then A ∩ B is empty, and the left-hand side of the inclusion is empty as well. The right-hand side, however, equals [1, 2] ∩ [2, 3] = {2}.
(iv) (A ∪ B)¯ = Ā ∪ B̄
A ⊂ (A ∪ B) so, by Property (i), Ā ⊂ (A ∪ B)¯. The same holds for set B, so the right-hand side must be a subset of the left-hand side. On the other side, (A ∪ B) ⊂ (Ā ∪ B̄) so, by Property (i) again, (A ∪ B)¯ ⊂ (Ā ∪ B̄)¯ = Ā ∪ B̄, since unions of (finitely many) closed sets are closed and, by Property (ii), both Ā and B̄ are closed.
Exercise 1.17.4 Show the following relation between the interior and closure operations:
int A = ( (A′)¯ )′
where A′ = ℝⁿ − A is the complement of set A.
Take the complement of both sides to get an equivalent relation:
(int A)′ = (A′)¯
Let x ∈ (int A)′, i.e., it is not true that x ∈ int A, i.e.,
∼ (∃ neighborhood C of x : C ⊂ A)
or, equivalently,
∀ neighborhoods C of x : C ∩ A′ ≠ ∅
Consequently, x ∈ (A′)¯. Follow the argument backwards to establish equivalence.
Notice that the property can be used to relate properties of the closure and interior operations discussed
in Exercise 1.17.3 and Exercise 1.17.2. In particular, with this property in place, the properties of the
closure operation can be derived from the properties of the interior operation and vice-versa.
Exercise 1.17.5 Construct examples showing that, in general,
(A ∩ B)¯ ≠ Ā ∩ B̄
int(A ∪ B) ≠ intA ∪ intB
See the discussion above.
Exercise 1.17.6 Show that if a is an accumulation point of a set A ⊂ ℝⁿ, then every neighborhood of a contains infinitely many points of A. Note that this in particular implies that only infinite sets may have accumulation points.
This is a simple consequence of the fact that balls B(a, ε) are nested. If a ball contained only a finite number of points x1, . . . , xk ∈ A different from a, we could always select an ε such that
ε < min{d(x1, a), . . . , d(xk, a)}
The ball B(a, ε) would then contain no points of A other than (possibly) a itself, which contradicts the definition of accumulation points: every neighborhood of point a contains points from A − {a}.
Exercise 1.17.7 Prove the Bolzano–Weierstrass Theorem for Sets: if A ⊂ ℝ is infinite and bounded, then there exists at least one accumulation point x of set A.
Hint: Use the method of nested intervals:
1. Choose I1 = [a1 , b1 ] ⊃ A. Why is this possible?
Because A is assumed to be bounded.
2. Divide I1 into two equal intervals, and choose I2 = [a2, b2] so as to contain infinitely many elements of A. Why is this possible?
Because A is infinite. At least one of the two subintervals (possibly both) must contain an infinite number of elements of set A.
3. Continue this subdivision and produce a sequence of “nested” intervals I1 ⊃ I2 ⊃ I3 ⊃ . . ., each
containing infinitely many elements of set A.
Just use the induction argument.
4. Define xn = inf(In ∩A), yn = sup(In ∩A). Argue that sequences xn , yn converge to a common
limit.
By construction, sequence yn is bounded and monotone; i.e., yn+1 ≤ yn. By the Monotone Sequence Lemma, yn → y = inf{yn}. By the same argument, xn → x. But yn − xn ≤ |In| = (b1 − a1)/2^{n−1} → 0.
Passing to the limit, we conclude that x = y.
5. Demonstrate that x = y is an accumulation point of set A.
Indeed, every neighborhood (x − ε, x + ε) of x contains the sets In for n sufficiently large (their lengths tend to zero and they all contain x). The sets In in turn contain elements of set A different from x (because they all contain an infinite number of elements from A).
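The nested-interval construction is easy to mimic numerically. Below is a minimal sketch (not part of the original solution; the finite sample of points and the use of NumPy are assumptions for illustration): at each step we keep a half containing the larger number of sample points, which plays the role of "infinitely many" in the real proof.

```python
import numpy as np

# Points accumulating just above sqrt(2): a stand-in for a bounded infinite set.
A = np.sqrt(2) + 1.0 / np.arange(1, 100000)

a, b = 0.0, 3.0                          # I1 = [a1, b1] contains A
for _ in range(20):                      # repeated halving, as in the proof
    m = 0.5 * (a + b)
    left = np.count_nonzero((A >= a) & (A <= m))
    right = np.count_nonzero((A >= m) & (A <= b))
    a, b = (a, m) if left >= right else (m, b)   # keep the "richer" half

print(a, b)   # a tiny interval whose endpoints lie within ~1e-4 of sqrt(2) ~ 1.41421
```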
Exercise 1.17.8 Show an example of an infinite set in R
I which has no accumulation points.
Take, e.g., set of natural numbers IN .
Exercise 1.17.9 Let A = {x ∈ ℚ : 0 ≤ x ≤ 1}. Prove that every point x ∈ [0, 1] is an accumulation point of A, but there are no interior points of A.
Pick any point x ∈ [0, 1]. Then every neighborhood (x − ε, x + ε) of x contains points from A different from x (the rationals are dense in ℝ). Thus, x is an accumulation point of set A. On the other hand, for x ∈ A, every neighborhood of x contains points from outside of A (irrational numbers), so x is not an interior point of A.
Exercise 1.17.10 Most commonly, the intersection of an infinite sequence of open sets, and the union of an
infinite sequence of closed sets are not open or closed, respectively. Sets of this type are called sets of
Gδ or Fσ type, i.e.,
A is of Gδ type if A = ∩_{i=1}^{∞} Ai,   Ai open
B is of Fσ type if B = ∪_{i=1}^{∞} Bi,   Bi closed
Construct examples of a Gδ set which is not open, and an Fσ set which is not closed.
Take
∩_{n=1}^{∞} (−1 − 1/n, 1 + 1/n) = [−1, 1]
and
∪_{n=1}^{∞} [−1 + 1/n, 1 − 1/n] = (−1, 1)
1.18
Sequences
Exercises
Exercise 1.18.1 A sequence an ∈ ℝ is said to be monotone increasing if an ≤ an+1 for all n; it is monotone decreasing if an+1 ≤ an for all n. Further, a sequence is said to be bounded above if its range is bounded from above, i.e., there exists a number b such that an ≤ b for all n. Similarly, a sequence an is bounded below if a number a exists such that a ≤ an for all n.
Prove that every monotone increasing (decreasing) and bounded above (below) sequence in ℝ is convergent (in ℝ).
Let an be a monotone increasing sequence bounded above. Consider the range A = {an, n ∈ ℕ} of the sequence. The range is bounded above and therefore has a supremum a in ℝ. We claim that an → a. Indeed, pick an arbitrary ε > 0. By definition of the supremum of a set, there must exist an index N such that a − aN < ε. But the sequence is increasing, so this implies that a − an < ε, ∀n ≥ N.
Exercise 1.18.2 Prove the Bolzano–Weierstrass Theorem for Sequences: every bounded sequence in R
I has a
convergent subsequence. Hint: Let A be the set of values of the sequence. Consider separately the case
of A being finite or infinite. Use the Bolzano–Weierstrass Theorem for Sets (Exercise 1.17.7) in the second
case.
In the first case, there must exist an element a ∈ A such that the inverse image of {a} through the
sequence is infinite, i.e., there exists a constant subsequence ank = a.
In the second case, the Bolzano–Weierstrass Theorem for Sets implies that A has an accumulation point
a. We will construct a subsequence converging to a. Recall that every neighborhood of a contains an
infinite number of elements from set A different from a (Exercise 1.17.6). Choose any n1 such that
d(an1 , a) < 1. Given n1 , . . . , nk , choose nk+1 in such a way that
nk+1 �= n1 , . . . , nk
and
d(ank+1 , a) <
1
k+1
As every neighborhood of a contains an infinite number of different values of the sequence, this is
always possible.
Exercise 1.18.3 Prove that the weak inequality is preserved in the limit, i.e., if xn ≤ yn , xn → x, yn → y,
then x ≤ y.
Suppose, to the contrary, that x > y. Pick ε = x − y > 0. There exist indices N1, N2 such that |xn − x| < ε/2, ∀n ≥ N1 and |yn − y| < ε/2, ∀n ≥ N2. Thus, for n ≥ N = max{N1, N2}, elements xn are within the ε/2-neighborhood of x, and elements yn are within the ε/2-neighborhood of y. Consequently, yn < xn for n ≥ N, a contradiction.
Exercise 1.18.4 Let xk = (xk1, . . . , xkn) be a sequence of points in ℝⁿ. Prove that
lim_{k→∞} xk = x   ⇔   lim_{k→∞} xki = xi,   for every i = 1, 2, . . . , n
where x = (x1, . . . , xn).
The proof is a play with quantifiers and epsilons. Let xk → x. Pick an index i = 1, . . . , n and an arbitrary ε > 0. Let N be such that d(xk, x) < ε, ∀k ≥ N. Then
|xki − xi| ≤ d(xk, x) = ( Σ_{j=1}^{n} |xkj − xj|² )^{1/2} < ε
for every k ≥ N. Conversely, let xki → xi for every i. Pick an arbitrary ε > 0. Introduce an auxiliary ε1 = ε/√n. For each component i, there exists an index Ni such that |xki − xi| < ε1 for all k ≥ Ni. Define N = max{Ni, i = 1, . . . , n}. Then, for all k ≥ N,
d(xk, x) = ( Σ_{j=1}^{n} |xkj − xj|² )^{1/2} < ( Σ_{j=1}^{n} ε²/n )^{1/2} = ε
which proves that xk → x.
Exercise 1.18.5 Let xk be a sequence in ℝⁿ. Prove that x is a cluster point of xk iff every neighborhood of x contains infinitely many elements of the sequence.
If x is the limit of a subsequence x_{kl}, then every neighborhood of x contains almost all elements of the sequence y_l = x_{kl}. Consequently, it contains an infinite number of elements of the original sequence.
Conversely, let every neighborhood of x contain an infinite number of elements of the sequence. For every l ∈ ℕ, we can find an infinite number of k's such that d(xk, x) < ε = 1/l. Proceed by induction. Pick any k1 such that d(x_{k1}, x) < ε = 1. Given k1, . . . , kl, pick k_{l+1} ≠ k1, . . . , kl (you have an infinite number of indices to pick from ...) such that d(x_{k_{l+1}}, x) < ε = 1/(l + 1). By construction, subsequence x_{kl} → x.
Exercise 1.18.6 Let xk = (xk1, xk2) be a sequence in ℝ² given by the formula
xki = (−1)^{k+i} (k + 1)/k,   i = 1, 2,   k ∈ ℕ
Determine the cluster points of the sequence.
For k even, the corresponding subsequence converges to (−1, 1). For k odd, the corresponding subsequence converges to (1, −1). These are the only two cluster points of the sequence.
Exercise 1.18.7 Calculate lim inf and lim sup of the following sequence in ℝ:
an = n/(n + 3)        for n = 3k
an = n²/(n + 3)       for n = 3k + 1
an = n²/(n + 3)²      for n = 3k + 2
where k ∈ ℕ.
Each of the three subsequences is convergent, a3k → 1, a3k+1 → ∞, and a3k+2 → 1. There are only
two cluster points of the sequence, 1 and ∞. Consequently, lim sup an = ∞ and lim inf an = 1.
Exercise 1.18.8 Formulate and prove a theorem analogous to Proposition 1.17.2 for limit superior.
Follow precisely the reasoning in the text.
Exercise 1.18.9 Establish the convergence or divergence of the sequences {xn}, where
(a) xn = n²/(1 + n²)                (b) xn = sin(n)
(c) xn = (3n² + 2)/(1 + 3n²)        (d) xn = (−1)ⁿ n²/(1 + n²)
Sequences (a) and (c) converge, (b) and (d) diverge.
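A quick way to build intuition (not part of the original solution; a minimal NumPy sketch with arbitrarily chosen indices) is to tabulate a few far-out terms of each sequence and watch whether they settle down:

```python
import numpy as np

n = np.array([10, 11, 100, 101, 1000, 1001])
seqs = {
    "(a) n^2/(1+n^2)":        n**2 / (1 + n**2),              # -> 1
    "(b) sin(n)":             np.sin(n),                      # keeps oscillating, no limit
    "(c) (3n^2+2)/(1+3n^2)":  (3*n**2 + 2) / (1 + 3*n**2),    # -> 1
    "(d) (-1)^n n^2/(1+n^2)": (-1)**n * n**2 / (1 + n**2),    # jumps between ~ -1 and ~ +1
}
for name, vals in seqs.items():
    print(name, np.round(vals, 4))
```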
Exercise 1.18.10 Let x1 ∈ ℝ, x1 > 1, and let x2 = 2 − 1/x1, . . . , xn+1 = 2 − 1/xn. Show that this sequence converges and determine its limit.
For x1 > 1, 1/x1 < 1, so x2 = 2 − 1/x1 > 1. By the same argument, if xn > 1 then xn+1 > 1. By induction, xn is bounded below by 1. Also, if x > 1 then
2 − 1/x ≤ x
Indeed, multiply both sides of the inequality by x to obtain an equivalent inequality
2x − 1 ≤ x²
which in turn is implied by (x − 1)² ≥ 0. The sequence is thus decreasing. By the Monotone Sequence Lemma, it must have a limit x. The value of x may be computed by passing to the limit in the recursive relation. We get
x = 2 − 1/x
which results in x = 1.
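The monotone decrease toward 1 is easy to observe numerically (a minimal sketch, not part of the original solution; the starting value 5.0 is an arbitrary choice):

```python
x = 5.0                        # any starting value > 1
for n in range(1000):
    x = 2.0 - 1.0 / x          # x_{n+1} = 2 - 1/x_n
print(x)                       # ~1.001: decreasing, slowly approaching the limit 1
```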
1.19
Limits and Continuity
Exercises
Exercise 1.19.1 Prove Proposition 1.18.2.
(i) ⇒ (ii). Let G be an open set. Let x ∈ f −1 (G). We need to show that x is an interior point of
f −1 (G). Since G is open and f (x) ∈ G, there exists an open ball B(f (x), �) ⊂ G. Continuity of f
implies that there exists an open ball B(x, δ) such that
B(x, δ) ⊂ f −1 (B(f (x), �)) ⊂ f −1 (G)
This proves that x is an interior point of f −1 (G).
(ii) ⇒ (i). Take an open ball B(f (x), �) neighborhood of f (x). The inverse image f −1 (B(f (x), �))
being open implies that there exists a ball B(x, δ) such that
B(x, δ) ⊂ f −1 (B(f (x), �))
or, equivalently,
f (B(x, δ)) ⊂ B(f (x), �)
which proves the continuity of f at x.
(ii) ⇔ (iii). This follows from the duality principle (complement of a set is open iff the set is closed) and the identity
f⁻¹(ℝᵐ − G) = f⁻¹(ℝᵐ) − f⁻¹(G) = ℝⁿ − f⁻¹(G)
Exercise 1.19.2 Let g ◦ f denote the composition of functions f : ℝⁿ → ℝᵐ and g : ℝᵐ → ℝᵏ. Prove that if f is continuous at x0 and g is continuous at f(x0), then g ◦ f is continuous at x0.
Pick an open ball B(g(f(x0)), ε) neighborhood of g(f(x0)). By continuity of function g, there exists an open ball B(f(x0), δ) neighborhood of f(x0) such that
g(B(f(x0), δ)) ⊂ B(g(f(x0)), ε)
In turn, by continuity of f, there exists an open ball B(x0, α) neighborhood of x0 such that
f(B(x0, α)) ⊂ B(f(x0), δ)
Consequently,
(g ◦ f)(B(x0, α)) ⊂ g(B(f(x0), δ)) ⊂ B(g(f(x0)), ε)
which proves the continuity of composition g ◦ f .
Exercise 1.19.3 Let f, g : ℝⁿ → ℝᵐ be two continuous functions. Prove that the linear combination of f, g defined as
defined as
(αf + βg)(x) = αf (x) + βg(x)
is also continuous.
Let d denote the Euclidean distance in ℝᵏ (k = n, m), and ‖ · ‖ the corresponding Euclidean norm (comp. Exercise 1.16.8). We have,
d(αf(x) + βg(x), αf(x0) + βg(x0)) = ‖αf(x) + βg(x) − (αf(x0) + βg(x0))‖
= ‖α(f(x) − f(x0)) + β(g(x) − g(x0))‖
≤ |α| ‖f(x) − f(x0)‖ + |β| ‖g(x) − g(x0)‖
= |α| d(f(x), f(x0)) + |β| d(g(x), g(x0))
Pick an arbitrary ε > 0. Assume α, β ≠ 0 (if one of them vanishes, the corresponding term drops out and the argument simplifies). Continuity of f implies that there exists δ1 such that
d(x, x0) < δ1   ⇒   d(f(x), f(x0)) < ε/(2|α|)
Similarly, continuity of g implies that there exists δ2 such that
d(x, x0) < δ2   ⇒   d(g(x), g(x0)) < ε/(2|β|)
Take now δ = min{δ1, δ2}. Then
d(x, x0) < δ   ⇒   d(αf(x) + βg(x), αf(x0) + βg(x0)) < |α| ε/(2|α|) + |β| ε/(2|β|) = ε
Exercise 1.19.4 Prove the Weierstrass Intermediate Value Theorem:
Let f be a continuous function from ℝ into ℝ. Consider a closed interval [a, b] and assume f(a) ≤ f(b). Then f(x) attains every value between f(a) and f(b).
The result is an immediate consequence of two fundamental facts concerning connected sets (see Section 4.3). The first says that a set in R
I is connected iff the set is an interval, comp. Exercise 4.3.9. The
second one states that the image of a connected set through a continuous functions is connected, comp.
Exercise 4.3.8. Thus, the image of an interval through a continuous function must be an interval I and,
therefore, it contains [f (a), f (b)].
Any proof of the result discussed here must somehow reproduce the arguments used for connected sets. Assume, to the contrary, that there exists d ∈ (f(a), f(b)) such that f⁻¹({d}) = ∅. Then
f −1 ((−∞, d) ∪ {d} ∪ (d, ∞)) = f −1 ((−∞, d)) ∪ f −1 ((d, ∞))
must be an open partition of the real line. Notice that the first set contains a, whereas the second one
contains b. Both sets are thus nonempty. But a complement of an open set is closed, so both sets are
simultaneously open and closed. Take then, for instance, c = sup f −1 ((−∞, d)). Since f −1 ((−∞, d))
is closed, it must contain c. But since it is also open, it must contain c along with an interval (c−�, c+�).
This contradicts c being the supremum of the set.
Exercise 1.19.5 Determine a point x0 ∈ ℝ at which the following function is continuous:
f(x) = 1 − x    if x is rational
f(x) = x        if x is irrational
This function is continuous only at x = 1/2. Indeed, the two formulas agree there (1 − x = x gives x = 1/2), and both rationals and irrationals come arbitrarily close to any point, so at any x ≠ 1/2 the values of f oscillate between numbers close to x and numbers close to 1 − x.
Exercise 1.19.6 Show that
f(x) = x sin(1/x)   for x ≠ 0
f(x) = 0            for x = 0
is continuous on all of ℝ.
Function 1/x is continuous at all x ≠ 0 and function sin x is continuous everywhere. Consequently, function sin(1/x) is continuous at all x ≠ 0, comp. Exercise 1.19.2.
Product of two continuous functions is also continuous. This can be shown directly or derived from the fact that the product of functions f and g is in fact a composition of the continuous function
ℝ ∋ x → (f(x), g(x)) ∈ ℝ²
and the continuous product function
ℝ² ∋ (x, y) → xy ∈ ℝ
In any event, the only nontrivial fact is the continuity of the considered function at x = 0. The essential observation here is that, outside of zero, the function is defined as a product of the function x, which is continuous at zero, and the function sin(1/x), which is bounded (by one). Pick an arbitrary ε > 0. Then for δ = ε and 0 < |x| < δ, we have
|f(x) − f(0)| = |x sin(1/x)| ≤ |x| < ε
which proves the continuity at zero.
Exercise 1.19.7 This exercise enforces understanding of the Weierstrass Theorem. Give examples of:
(i) a function f : [0, 1] → R
I that does not achieve its supremum on [0, 1],
(ii) a continuous function f : R
I →R
I that does not achieve its supremum on R.
I
(i) Take
f(x) = x   for x ∈ [0, 1)
f(x) = 0   for x = 1
(ii) Take f (x) = x.
Elements of Differential and Integral Calculus
1.20
Derivatives and Integrals of Functions of One Variable
Exercises
Exercise 1.20.1 Prove an analog of Theorem 1.19.1 for the case in which f assumes a minimum on (a, b) at
c ∈ (a, b).
Repeat exactly the reasoning from the text. Trading maximum for minimum reverses the inequalities
but the final conclusion: f � (c) = 0, remains the same.
Exercise 1.20.2 The derivative of the derivative of a function f is, of course, the second derivative of f and
is denoted f �� . Similarly, (f �� )� = f ��� , etc. Let n be a positive integer and suppose f and its derivatives
f � , f �� , . . . , f (n) are defined and are continuous on (a, b) and that f (n+1) exists in (a, b). Let c and
x belong to (a, b). Prove Taylor’s formula for f ; i.e., show that there exists a number ξ satisfying
c < ξ < x such that
f(x) = f(c) + (1/1!) f′(c)(x − c) + (1/2!) f″(c)(x − c)²
       + · · · + (1/n!) f^(n)(c)(x − c)^n + (1/(n + 1)!) f^(n+1)(ξ)(x − c)^(n+1)
We will assume that x > c. Case x < c is treated analogously. Define a polynomial (in the variable t)
φ(t) = f(c) + (1/1!) f′(c)(t − c) + (1/2!) f″(c)(t − c)² + · · · + (1/n!) f^(n)(c)(t − c)^n + A(t − c)^(n+1)
and select the constant A in such a way that φ matches f at the point x, i.e., φ(x) = f(x). Since also (f − φ)(c) = 0, by Rolle's Theorem there exists an intermediate point ξ1 ∈ (c, x) such that
(f − φ)′(ξ1) = 0
But we have also (f − φ)′(c) = 0 so, by Rolle's Theorem again, there exists an intermediate point ξ2 ∈ (c, ξ1) such that
(f − φ)″(ξ2) = 0
Continuing in this manner, we arrive at the existence of a point ξ = ξn+1 ∈ (c, x) such that
(f − φ)^(n+1)(ξ) = f^(n+1)(ξ) − A(n + 1)! = 0
Solving for A we get the final result.
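As a numerical illustration (not part of the original argument; a minimal sketch using f(x) = eˣ, an arbitrary choice), one can check that the error of the Taylor polynomial is indeed no larger than the size predicted by the remainder term:

```python
import math

f = math.exp            # for exp, every derivative is exp as well
c, x, n = 0.0, 0.5, 4   # expansion point, evaluation point, polynomial degree

# Taylor polynomial of degree n about c, evaluated at x (f^(k)(c) = exp(c) here)
taylor = sum(f(c) * (x - c)**k / math.factorial(k) for k in range(n + 1))
error = f(x) - taylor

# Lagrange remainder bound: since exp is increasing, max of |f^(n+1)| on [c, x] is exp(x)
bound = f(x) * (x - c)**(n + 1) / math.factorial(n + 1)
print(error, bound, error <= bound)   # the actual error stays below the bound
```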
Exercise 1.20.3 Let f be differentiable on (a, b). Prove the following:
(i) If f � (x) = 0 on (a, b), then f (x) = constant on (a, b).
Pick arbitrary c, x ∈ (a, b), c �= x. By the Lagrange Mean-Value Theorem,
∃ξ ∈ (c, x) : f (x) − f (c) = f � (ξ)(x − c) = 0
i.e., f (x) = f (c). Since x and c were arbitrary points, the function must be constant.
(ii) If f � (x) = g � (x) on (a, b), then f (x) − g(x) = constant.
Apply (i) to f (x) − g(x).
(iii) If f � (x) < 0 ∀ x ∈ (a, b) and if x1 < x2 ∈ (a, b), then f (x1 ) > f (x2 ).
Apply the Lagrange Mean-Value Theorem,
(f(x2) − f(x1))/(x2 − x1) = f′(ξ) < 0   ⇒   f(x2) − f(x1) < 0
(iv) If |f � (x)| ≤ M < ∞ on (a, b), then
|f (x1 ) − f (x2 )| ≤ M |x1 − x2 |
∀ x1 , x2 ∈ (a, b)
Again, by the Lagrange Mean-Value Theorem,
f (x1 ) − f (x2 ) = f � (ξ)(x1 − x2 )
for some ξ ∈ (x1 , x2 ). Take absolute value on both sides to obtain
|f (x1 ) − f (x2 )| = |f � (ξ)| |x1 − x2 | ≤ M |x1 − x2 |
Exercise 1.20.4 Let f and g be continuous on [a, b] and differentiable on (a, b). Prove that there exists a
point c ∈ (a, b) such that f � (c)(g(b) − g(a)) = g � (c)(f (b) − f (a)). This result is sometimes called the
Cauchy Mean-Value Theorem.
Hint: Consider the function h(x) = (g(b) − g(a))(f (x) − f (a)) − (g(x) − g(a))(f (b) − f (a)).
Repeat the reasoning from the proof of the Lagrange Mean-Value Theorem. We have: h(a) = h(b) =
0. By Rolle’s Theorem, there exists c ∈ (a, b) such that
h� (c) = (g(b) − g(a))f � (c) − g � (c)(f (b) − f (a)) = 0
Exercise 1.20.5 Prove L'Hôpital's rule: If f(x) and g(x) are differentiable on (a, b), with g′(x) ≠ 0 ∀x ∈ (a, b), and if f(c) = g(c) = 0 and the limit K = lim_{x→c} f′(x)/g′(x) exists, then lim_{x→c} f(x)/g(x) = K.
Hint: Use the result of Exercise 1.20.4.
According to the Cauchy Mean-Value Theorem, there exists ξ ∈ (c, x) such that
f′(ξ)(g(x) − g(c)) = g′(ξ)(f(x) − f(c))
or, since f(c) = g(c) = 0,
f(x)/g(x) = f′(ξ)/g′(ξ)
With x → c, the intermediate point ξ converges to c as well. As the limit of the right-hand side exists, the left-hand side has a limit as well, and the two limits are equal.
Exercise 1.20.6 Let f and g be Riemann integrable on I = [a, b]. Show that for any real numbers α and β,
αf + βg is integrable, and
∫_a^b (αf + βg) dx = α ∫_a^b f dx + β ∫_a^b g dx
Let P be an arbitrary partition of I,
a = x 0 ≤ x 1 ≤ x 2 ≤ · · · ≤ xn = b
and ξk , k = 1, . . . , n arbitrary intermediate points. We have the following simple relation between the
Riemann sums of functions f ,g and αf + βg,
R(P, αf + βg) = Σ_{k=1}^{n} (αf(ξk) + βg(ξk))(xk − xk−1)
              = α Σ_{k=1}^{n} f(ξk)(xk − xk−1) + β Σ_{k=1}^{n} g(ξk)(xk − xk−1)
              = α R(P, f) + β R(P, g)
Thus, if the Riemann sums on the right-hand side converge, the sum on the left-hand side converges as
well, and the two limits (integrals) are equal.
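The identity between Riemann sums can also be watched numerically (a minimal sketch, not part of the original solution; the functions, interval, and partition size are arbitrary choices):

```python
import numpy as np

def riemann_sum(h, a, b, n):
    """Left-endpoint Riemann sum of h over [a, b] with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    xi = x[:-1]                        # intermediate points (left endpoints)
    return np.sum(h(xi) * np.diff(x))

f, g = np.sin, np.exp
alpha, beta, a, b, n = 2.0, -0.5, 0.0, 1.0, 10000

lhs = riemann_sum(lambda t: alpha * f(t) + beta * g(t), a, b, n)
rhs = alpha * riemann_sum(f, a, b, n) + beta * riemann_sum(g, a, b, n)
print(lhs, rhs)   # identical (up to round-off) for every partition, hence also in the limit
```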
Exercise 1.20.7 Let f and g be continuous on [a, b] and suppose that F and G are primitive functions of f
and g, respectively, i.e., F � (x) = f (x) and G� (x) = g(x) ∀ x ∈ [a, b]. Prove the integration-by-parts
formula:
∫_a^b F(x) g(x) dx = F(b)G(b) − F(a)G(a) − ∫_a^b f(x) G(x) dx
Integrate between a and b both sides of
[F(x)G(x)]′ = f(x)G(x) + F(x)g(x)
to obtain
F(b)G(b) − F(a)G(a) = ∫_a^b f(x)G(x) dx + ∫_a^b F(x)g(x) dx
Exercise 1.20.8 Prove that if f is Riemann integrable on [a, c], [c, b], and [a, b], then
∫_a^b f dx = ∫_a^c f dx + ∫_c^b f dx,   a < c < b
Consider a class of partitions P of interval [a, b] that include point c,
a = x0 < x1 < . . . < xm = c < xm+1 < . . . < xn = b
with arbitrary intermediate points ξk ∈ (xk−1, xk), k = 1, . . . , n. The following simple relation holds:
R(P, f) = Σ_{k=1}^{n} f(ξk)(xk − xk−1)
        = Σ_{k=1}^{m} f(ξk)(xk − xk−1) + Σ_{k=m+1}^{n} f(ξk)(xk − xk−1)
        = R(P1, f) + R(P2, f)
where P1 and P2 denote subpartitions corresponding to subintervals [a, c] and [c, b], respectively. By
assumption, each of the Riemann sums converges and, in the limit, we get the required result.
Remark. Riemann integrability on subintervals [a, c] and [c, b] implies Riemann integrability on the whole interval [a, b]. This follows from the fundamental result proved in Chapter ?? that states that function f is Riemann integrable if and only if it is continuous almost everywhere.
Exercise 1.20.9 Let f be a Riemann integrable function defined on [a, b], and let x(u) denote a C 1 bijective
map from an interval [c, d] onto [a, b]. Assume, for simplicity, that composition f ◦ x is Riemann
integrable on [c, d]. Show that
∫_a^b f(x) dx = ∫_c^d f(x(u)) |dx/du| du
There must be either dx/du ≥ 0 or dx/du ≤ 0, in the whole interval [c, d] (a change of sign implies
that x(u) is not bijective).
Case: dx/du ≥ 0. Let
c = u0 < u1 < . . . < un = d
be an arbitrary partition of interval [c, d]. Then xk = x(uk) is a partition of interval [a, b]. By the Lagrange Mean-Value Theorem, for each k, there exists an intermediate point ηk ∈ (uk−1, uk) such that
(xk − xk−1)/(uk − uk−1) = x′(ηk)
Denoting ξk = x(ηk), we have
Σ_{k=1}^{n} f(ξk)(xk − xk−1) = Σ_{k=1}^{n} f(x(ηk)) [(xk − xk−1)/(uk − uk−1)] (uk − uk−1) = Σ_{k=1}^{n} f(x(ηk)) x′(ηk)(uk − uk−1)
Passing to the limit, we get the final result.
Case: dx/du ≤ 0. Trade [a, b] for [b, a] and use the result above to obtain,
∫_b^a f(x) dx = ∫_c^d f(x(u)) (dx/du) du
Then
∫_a^b f(x) dx := − ∫_b^a f(x) dx = ∫_c^d f(x(u)) |dx/du| du
Remark. As in Exercise 1.20.8, one does not need to assume that both integrals exist. Existence of one
of them implies the existence of the other one.
1.21
Multidimensional Calculus
Exercises
Exercise 1.21.1 Let f : ℝⁿ → ℝᵐ be a function defined on a set E ⊂ ℝⁿ, and x be an interior point of E. Suppose that f has all its partial derivatives at all x ∈ E, and that a directional derivative De f(x) exists, where e = (e1, . . . , en) is a unit vector in ℝⁿ. Show that
De f(x) = Σ_{i=1}^{n} (∂f/∂xi) ei
We have,
(f(x + te) − f(x))/t = [f(x1 + te1, x2 + te2, . . . , xn + ten) − f(x1, x2 + te2, . . . , xn + ten)]/(te1) · e1
                     + [f(x1, x2 + te2, . . . , xn + ten) − f(x1, x2, . . . , xn + ten)]/(te2) · e2
                     + · · ·
                     + [f(x1, x2, . . . , xn + ten) − f(x1, x2, . . . , xn)]/(ten) · en
Passing to the limit with t → 0, we get the required result.
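A small numerical illustration of the formula (not part of the original solution; the particular function f and direction e below are arbitrary choices):

```python
import numpy as np

def f(x):                       # a smooth scalar-valued example function on R^2
    return x[0]**2 * np.sin(x[1])

x = np.array([1.0, 0.5])
e = np.array([3.0, 4.0]) / 5.0  # a unit vector
t = 1e-6

# directional derivative approximated by a finite difference
de_f = (f(x + t*e) - f(x)) / t

# sum of the partial derivatives times the components of e
grad = np.array([2*x[0]*np.sin(x[1]), x[0]**2 * np.cos(x[1])])
print(de_f, grad @ e)           # the two numbers agree to roughly six digits
```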
Exercise 1.21.2 Let z = z(t) be a one-to-one function from [a, b] into ℝ². The image c of z in ℝ² is identified as a curve in ℝ², and the function z, as its parametrization. Assume that z is of class C¹. Let f : ℝ² → ℝ now be a continuous function on ℝ². Consider the integral
J = ∫_a^b f(z(t)) √( (dz1/dt)² + (dz2/dt)² ) dt
Use the result from Exercise 1.20.9 to show that J is independent of the parametrization of curve c. More precisely, if ẑ is another injective function of class C¹ defined on a different interval [ā, b̄] ⊂ ℝ, but with the same image c in ℝ² as z(t), then the corresponding integral Ĵ is equal to J.
The number J depends thus on the curve c only, and it is known as the line integral of f along c, denoted by
∫_c f dc
Let ẑ(u) be an injective C¹ function from interval [ā, b̄] onto curve c. Then the composition
t(u) = z⁻¹(ẑ(u))
is a C¹ bijection from [ā, b̄] onto [a, b]. By Exercise 1.20.9 and the chain rule for differentiation,
∫_a^b f(z(t)) √( (dz1/dt)² + (dz2/dt)² ) dt = ∫_ā^b̄ f(z(t(u))) √( (dz1/dt)² + (dz2/dt)² ) |dt/du| du
                                            = ∫_ā^b̄ f(ẑ(u)) √( (dẑ1/du)² + (dẑ2/du)² ) du = Ĵ
since, by the chain rule, dẑi/du = (dzi/dt)(dt/du).
Exercise 1.21.3 Let Ω be an open set in R
I 2 with a boundary ∂Ω which is a (closed) C 1 curve in R
I 2 . Let
I be two C 1 functions defined on a set Ω1 ⊃ Ω (functions f and g are in particular
f, g : R
I2 → R
continuous along the boundary ∂Ω).
Prove the elementary Green's formula (also known as the multidimensional version of the integration-by-parts formula)
∫_Ω f (∂g/∂xi) dx = − ∫_Ω (∂f/∂xi) g dx + ∫_∂Ω f g ni ds,   i = 1, 2
where n = (n1, n2) is the outward normal unit vector to the boundary ∂Ω.
Hint: Assume for simplicity that the integral over Ω can be calculated as the iterated integral (see Fig. 1.6), e.g.,
∫_Ω f (∂g/∂x2) dx = ∫_a^b ( ∫_{c(x1)}^{d(x1)} f (∂g/∂x2) dx2 ) dx1
and use the integration-by-parts formula for functions of one variable.
Figure 1.6
Exercise 1.21.3. Notation for the iterated integral.
Integrating by parts with respect to x2, we obtain
∫_a^b ( ∫_{c(x1)}^{d(x1)} f (∂g/∂x2) dx2 ) dx1 = − ∫_a^b ( ∫_{c(x1)}^{d(x1)} (∂f/∂x2) g dx2 ) dx1
    + ∫_a^b f(x1, d(x1)) g(x1, d(x1)) dx1 − ∫_a^b f(x1, c(x1)) g(x1, c(x1)) dx1
The last two integrals can be reinterpreted as the line integral over the boundary ∂Ω. Indeed, ∂Ω shown
in Fig.1.6 can be split into two curves, both parametrized with first coordinate x1 . The parametrization
for the bottom curve is
(a, b) � x1 → (x1 , c(x1 ))
The corresponding tangent vector is obtained by differentiating the parametrization with respect to the
parameter x1 ,
(1, c′) = ( 1/√(1 + (c′)²) , c′/√(1 + (c′)²) ) √(1 + (c′)²)
The expression in large parentheses represents the unit tangent vector. Rotating the tangent vector by 90 degrees, we obtain the corresponding outward normal unit vector
n = ( c′/√(1 + (c′)²) , −1/√(1 + (c′)²) )
Thus
−n2 ds/dx1 = −n2 √(1 + (c′)²) = 1
which implies that
− ∫_a^b f(x1, c(x1)) g(x1, c(x1)) dx1 = ∫_c f g n2 ds
where c denotes the bottom part of boundary ∂Ω. In a similar way, we conclude that the other integral
represents the curve integral over the top part of the boundary.
The case of an “arbitrary” domain Ω can be obtained by partitioning Ω into subdomains in such a way
that the use of the iterated integral is possible, and summing up the formulas obtained for individual
subdomains.
The case of derivatives with respect to x1 is treated similarly, changing the order of integration accordingly.
2
Linear Algebra
Vector Spaces—The Basic Concepts
2.1
Concept of a Vector Space
Exercises
Exercise 2.1.1 Let V be an abstract vector space over a field IF . Denote by 0 and 1 the identity elements with
respect to addition and multiplication of scalars, respectively. Let −1 ∈ IF be the element∗ opposite to
1 (with respect to scalar addition). Prove the identities
(i) 0 = 0 x,
∀x ∈ V
(ii) −x = (−1) x,
∀x ∈ V
where 0 ∈ V is the zero vector, i.e., the identity element with respect to vector addition, and −x
denotes the opposite vector to x.
(i) Let x be an arbitrary vector. By the axioms of a vector space, we have
x + 0 x = 1 x + 0 x = (1 + 0) x = 1 x = x
Adding to both sides the inverse element −x, we obtain that
0+0x=0x=0
(ii) Using the first result, we obtain
x + (−1) x = (1 + (−1)) x = 0 x = 0
∗ It is unique.
Exercise 2.1.2 Let IC denote the field of complex numbers. Prove that IC n satisfies the axioms of a vector
space with analogous operations to those in R
I n , i.e.,
def
x + y = (x1 , . . . , xn ) + (y1 , . . . , yn ) = (x1 + y1 , . . . , xn + yn )
def
α x = α (x1 , . . . , xn ) = (α x1 , . . . , α xn )
This is really a trivial exercise. One by one, one has to verify the axioms. For instance, the associative law for vector addition is a direct consequence of the definition of the vector addition, and the
associative law for the addition of complex numbers, etc.
Exercise 2.1.3 Prove Euler’s theorem on rigid rotations. Consider a rigid body fixed at a point A in an initial
configuration Ω. Suppose the body is carried from the configuration Ω to a new configuration Ω1 , by a
rotation about an axis l1 , and next, from Ω1 to a configuration Ω2 , by a rotation about another axis l2 .
Show that there exists a unique axis l, and a corresponding rotation carrying the rigid body from the
initial configuration Ω to the final one, Ω2 , directly. Consult any textbook on rigid body dynamics, if
necessary.
This seemingly obvious result is far from trivial. We offer a proof based on the use of matrices, and
you may want to postpone studying the solution after Section 2.7 or even later.
A matrix A = Aij is called orthonormal if its transpose coincides with its inverse, i.e.,
A Aᵀ = Aᵀ A = I
or, in terms of its components,
Σ_k Aik Ajk = Σ_k Aki Akj = δij                    (2.1)
The Cauchy theorem for determinants implies that
det A det Aᵀ = det²A = det I = 1
Consequently, for an orthonormal matrix A, detA = ±1. It is easy to check that orthonormal matrices
form a (noncommutative) group. Cauchy’s theorem implies that orthonormal matrices with detA = 1
constitute a subgroup of this group.
We shall show now that, for n = 2, 3, orthonormal matrices with detA = 1 represent (rigid body)
rotations.
Case: n = 2. Matrix representation of a rotation by angle θ has the form
A = [  cos θ   sin θ ]
    [ −sin θ   cos θ ]
and it is easy to see that it is an orthonormal matrix with unit determinant. Conversely, let aij satisfy
conditions (2.1). Identities
a211 + a212 = 1 and a221 + a222 = 1
imply that there exist angles θ1 , θ2 ∈ [0, 2π) such that
a11 = cos θ1 ,
a12 = sin θ1 ,
a21 = − sin θ2 ,
a22 = cos θ2
But
a11 a21 + a12 a22 = sin(θ1 − θ2 ) = 0
and
a11 a22 − a12 a21 = cos(θ1 − θ2 ) = 1
The two equations admit only one solution: θ1 − θ2 = 0.
Case: n = 3. Linear transformation represented by an orthonormal matrix preserves the length of
vectors (it is an isometry). Indeed,
‖Ax‖² = Σ_i ( Σ_j Aij xj )( Σ_k Aik xk ) = Σ_j Σ_k ( Σ_i Aij Aik ) xj xk = Σ_j Σ_k δjk xj xk = Σ_k xk xk = ‖x‖²
Consequently, the transformation maps unit ball into itself. By the Schauder Fixed Point Theorem (a
heavy but very convenient argument), there exists a vector x that is mapped into itself. Selecting a
system of coordinates in such a way that vector x coincides with the third axis, we deduce that A has
the following representation
[ a11  a12  0 ]
[ a21  a22  0 ]
[ a31  a32  1 ]
Orthogonality conditions (2.1) imply that a31 = a32 = 0 and that aij , i, j = 1, 2, is a two-dimensional
orthonormal matrix with unit determinant. Consequently, the transformation represents a rotation about
the axis spanned by the vector x.
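The statement can be checked numerically: the composition of two rotation matrices is again orthonormal with unit determinant, and one standard way to read off its axis is the eigenvector for the eigenvalue 1 (a minimal NumPy sketch, not part of the original argument; the two axes and angles below are arbitrary choices):

```python
import numpy as np

def rotation(axis, angle):
    """Rotation matrix about a unit axis by the given angle (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

A = rotation([0, 1, 1], 1.2) @ rotation([1, 0, 0], 0.7)    # composition of two rotations

print(np.allclose(A @ A.T, np.eye(3)), round(np.linalg.det(A), 12))  # orthonormal, det = 1

# the axis of the composite rotation: eigenvector for the eigenvalue 1
w, v = np.linalg.eig(A)
axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
print(np.allclose(A @ axis, axis))    # True: the composition fixes this axis
```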
Exercise 2.1.4 Construct an example showing that the sum of two finite rotation “vectors” does not need to
lie in a plane generated by those vectors.
Use your textbook to verify that the composition of rotations represented by “vectors” (π, 0, 0) and
(0, −π, 0) is represented with the “vector” (0, 0, −π).
Exercise 2.1.5 Let P k (Ω) denote the set of all real- or complex-valued polynomials defined on a set Ω ⊂
R
I n (IC n ) with degree less or equal to k. Show that P k (Ω) with the standard operations for functions is
a vector space.
It is sufficient only to show that the set is closed with respect to the vector space operations. But this
is immediate: sum of two polynomials of degree ≤ k is a polynomial with degree ≤ k and, upon
multiplying a polynomial from P k (Ω) with a number, we obtain a polynomial from P k (Ω) as well.
Exercise 2.1.6 Let Gk (Ω) denote the set of all polynomials of order greater or equal to k. Is Gk (Ω) a vector
space? Why?
No, it is not. The set is not closed with respect to function addition. Adding polynomial f (x) and
−f (x), we obtain a zero function, i.e., a polynomial of zero degree which is outside of the set.
Exercise 2.1.7 The extension f1 in the definition of a function f from class C k (Ω̄) need not be unique. The
boundary values of f1 , however, do not depend upon a particular extension. Explain why.
By definition, function f1 is continuous in the larger set Ω1. Let x ∈ ∂Ω ⊂ Ω1, and let Ω ∋ xn → x.
By continuity, f1 (x) = limn→∞ f1 (xn ) = limn→∞ f (xn ) since f1 = f in Ω. The same argument
applies to the derivatives of f1 .
Exercise 2.1.8 Show that C k (Ω), k = 0, 1, . . . , ∞, is a vector space.
It is sufficient to notice that all these sets are closed with respect to the function addition and multiplication with a number.
2.2
Subspaces
2.3
Equivalence Relations and Quotient Spaces
Exercises
Exercise 2.3.1 Prove that the operations in the quotient space V /M are well defined, i.e., the equivalence
classes [x + y] and [αx] do not depend upon the choice of elements x ∈ [x] and y ∈ [y].
Let xi ∈ [x] = x + M, y i ∈ [y] = y + M, i = 1, 2. We need to show that
x1 + y 1 + M = x2 + y 2 + M
Let z ∈ x1 + y 1 + M , i.e., z = x1 + y 1 + m, m ∈ M . Then
z = x2 + y 2 + (x1 − x2 ) + (y 1 − y 2 ) + m ∈ x2 + y 2 + M
since each of vectors x1 − x2 , y 1 − y 2 , m is an element of subspace M , and M is closed with respect
to the vector addition. By the same argument x2 + y 2 + M ⊂ x1 + y 1 + M .
Similarly, let xi ∈ [x] = x + M, i = 1, 2. We need to show that
αx1 + M = αx2 + M
This is equivalent to show that αx1 − αx2 = α(x1 − x2 ) is an element of subspace M . But this
follows from the fact that x1 − x2 ∈ M and that M is closed with respect to the multiplication by a
scalar.
Exercise 2.3.2 Let M be a subspace of a real space V and RM the corresponding equivalence relation.
Together with three equivalence axioms (i) - (iii), relation RM satisfies two extra conditions:
(iv)  xRy, uRv   ⇔   (x + u)R(y + v)
(v)   xRy   ⇔   (αx)R(αy),   ∀α ∈ ℝ
We say that RM is consistent with linear structure on V . Let R be an arbitrary relation satisfying
conditions (i)–(v), i.e., an equivalence relation consistent with linear structure on V . Show that there
exists a unique subspace M of V such that R = RM , i.e., R is generated by the subspace M .
Define M to be the equivalence class of zero vector, M = [0]. Axioms (iv) and (v) imply that M is
closed with respect to vector space operations and, therefore, is a vector subspace of V . Let yRx. Since
xRx, axiom (v) implies −xR − x and, by axiom (iv), (y − x)R0. By definition of M , (y − x) ∈ M .
But this is equivalent to y ∈ x + M = [x]RM .
Exercise 2.3.3 Another way to see the difference between two equivalence relations discussed in Example 2.3.3 is to discuss the equations of rigid body motions. For the sake of simplicity let us consider
the two-dimensional case.
(i) Prove that, under the assumption that the Jacobian of the deformation gradient F is positive,
E(u) = 0 if and only if u takes the form
u1 = c1 + cos θx1 + sin θx2 − x1
u2 = c2 − sin θx1 + cos θx2 − x2
where θ ∈ [0, 2π) is the angle of rotation.
(ii) Prove that εij (u) = 0 if and only if u has the following form
u1 = c1 + θx2
u2 = c2 − θx1
One can see that for small values of angle θ (cos θ ≈ 1, sin θ ≈ θ) the second set of equations can be
obtained by linearizing the first.
(i) Using the notation from Example 2.3.3, we need to show that the right Cauchy-Green tensor
Cij = xk,i xk,j = δij if and only if
x1 = c1 + cos θX1 + sin θX2
x2 = c2 − sin θX1 + cos θX2
A direct computation shows that Cij = δij for the (relative) configuration above. Conversely,
(x1,1 )2 + (x2,1 )2 = 1
implies that there exists an angle θ1 such that
x1,1 = cos θ1 ,
x2,1 = sin θ1
Similarly,
(x1,2 )2 + (x2,2 )2 = 1
implies that there exists an angle θ2 such that
x1,2 = − sin θ2 ,
x2,2 = cos θ2
Finally, condition
x1,1 x1,2 + x2,1 x2,2 = 0
implies that sin(θ1 − θ2 ) = 0. Restricting ourselves to angles in [0, 2π), we see that either
θ1 = θ2 + π or θ1 = θ2 . In the first case, sin θ1 = − sin θ2 and cos θ1 = − cos θ2 which results
in a deformation gradient with negative Jacobian. Thus θ1 = θ2 =: θ. A direct integration results
then in the final formula. Angle θ is the angle of rotation, and integration constants c1 , c2 are the
components of the rigid displacement.
(ii) Integrating u1,1 = 0, we obtain
u1 = c1 + θ1 x2
Similarly, u2,2 = 0 implies
u2 = c2 − θ2 x1
Finally, u1,2 + u2,1 = 0 results in θ1 = θ2 =: θ.
2.4
Linear Dependence and Independence, Hamel Basis, Dimension
Linear Transformations
2.5
Linear Transformations—The Fundamental Facts
Exercises
Exercise 2.5.1 Find the matrix representation of rotation R about angle θ in R
I 2 with respect to basis a1 =
(1, 0), a2 = (1, 1).
Start by representing basis a1, a2 in terms of the canonical basis e1 = (1, 0), e2 = (0, 1),
a1 = e1,   a2 = e1 + e2   ⇒   e1 = a1,   e2 = a2 − a1
Then,
Ra1 = Re1 = cos θ e1 + sin θ e2 = cos θ a1 + sin θ (a2 − a1) = (cos θ − sin θ) a1 + sin θ a2
Similarly,
Ra2 = R(e1 + e2) = (cos θ − sin θ) e1 + (sin θ + cos θ) e2 = −2 sin θ a1 + (sin θ + cos θ) a2
(equivalently, Ra2 = √2 cos(θ + π/4) e1 + √2 sin(θ + π/4) e2, which gives the same result). Therefore, the matrix representation is:
[ cos θ − sin θ      −2 sin θ      ]
[ sin θ              cos θ + sin θ ]
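A quick numerical check of this representation (not part of the original solution; θ = 0.3 is an arbitrary choice): apply the rotation to a1, a2 and expand the results back in the basis a1, a2.

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])      # rotation in the canonical basis
a1, a2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
P = np.column_stack([a1, a2])                         # columns = new basis vectors

# coordinates of R a1 and R a2 with respect to the basis (a1, a2)
M = np.linalg.solve(P, R @ P)
expected = np.array([[np.cos(theta) - np.sin(theta), -2*np.sin(theta)],
                     [np.sin(theta),                  np.cos(theta) + np.sin(theta)]])
print(np.allclose(M, expected))                       # True
```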
Exercise 2.5.2 Let V = X ⊕ Y , and dimX = n, dimY = m. Prove that dimV = n + m.
Let e1 , . . . , en be a basis for X, and let g1 , . . . , gm be a basis for Y . It is sufficient to show that
e1, . . . , en, g1, . . . , gm is a basis for V. Let v ∈ V. Then v = x + y with x ∈ X, y ∈ Y, and x = Σ_i xi ei, y = Σ_j yj gj, so v = Σ_i xi ei + Σ_j yj gj, which proves the span condition. To prove linear independence, assume that
Σ_i αi ei + Σ_j βj gj = 0
From the fact that X ∩ Y = {0} it follows that
Σ_i αi ei = 0   and   Σ_j βj gj = 0
which in turn implies that α1 = . . . = αn = 0 and β1 = . . . = βm = 0, since each of the two sets of vectors is separately linearly independent.
2.6
Isomorphic Vector Spaces
2.7
More About Linear Transformations
Exercises
Exercise 2.7.1 Let V be a vector space and idV the identity transformation on V . Prove that a linear transformation T : V → V is a projection if and only if idV − T is a projection.
Assume T is a projection, i.e., T 2 = T . Then
(idV − T )2 = (idV − T ) (idV − T )
= idV − T − T + T 2
= idV − T − T + T
= idV − T
The converse follows from the first step and T = idV − (idV − T ).
2.8
Linear Transformations and Matrices
2.9
Solvability of Linear Equations
Exercises
Exercise 2.9.1 Equivalent and Similar Matrices. Given matrices A and B, when nonsingular matrices P
and Q exist such that
B = P −1 AQ
we say that the matrices A and B are equivalent. If B = P −1 AP , we say A and B are similar.
Let A and B be similar n × n matrices. Prove that det A = det B, r(A) = r(B), n(A) = n(B).
The first assertion follows immediately from the Cauchy’s Theorem for Determinants. Indeed, P A =
BP implies
det P det A = det B det P
and, consequently, det A = det B.
Let A : X → X be a linear map. Recall that the rank of A equals the maximum number of linearly
independent vectors Aej where ej , j = 1, . . . , n is an arbitrary basis in X. Let P : X → X be now an
isomorphism. Consider a basis ej , j = 1, . . . , n in space X. Then P ej is another basis in X, and the
rank of A equals the maximum number of linearly independent vectors AP ej which is also the rank
of AP . The Rank and Nullity Theorem implies then that nullity of AP equals nullity of A.
Similarly, nullity of A is the maximum number of linearly independent vectors ej such that Aej = 0.
But
Aej = 0
⇔
P Aej = 0
so the nullity of A is equal to the nullity of PA. The Rank and Nullity Theorem implies then that the rank of PA equals the rank of A.
Consequently, for similar transformations (matrices) rank and nullity are the same.
Exercise 2.9.2 Let T1 and T2 be two different linear transformations from an n-dimensional linear vector
space V into itself. Prove that T1 and T2 are represented relative to two different bases by the same
matrix if and only if there exists a nonsingular transformation Q on V such that T2 = Q−1 T1 Q.
Let T1 gj = Σ_i Tij gi and T2 ej = Σ_i Tij ei, where gj, ej are two bases in V. Define a nonsingular transformation Q mapping basis ej into basis gj. Then
T1 Q ej = Σ_i Tij Q ei = Q ( Σ_i Tij ei )
which implies
Q⁻¹ T1 Q ej = Σ_i Tij ei = T2 ej
Conversely, if T2 = Q⁻¹ T1 Q and Q maps basis ej into basis gj, then the matrix representation of T2 with respect to basis ej equals the matrix representation of T1 with respect to basis gj.
Exercise 2.9.3 Let T be a linear transformation represented by the matrix
A = [ 1  −1  4 ]
    [ 0   3  2 ]
relative to bases {a1, a2} of ℝ² and {b1, b2, b3} of ℝ³. Compute the matrix representing T relative to the new bases:
α1 = 4a1 − a2        β1 = 2b1 − b2 + b3
α2 = a1 + a2         β2 = b1 − b3
                     β3 = b1 + 2b2
We have
T b1 = a1
T b2 = −a1 + 3a2
T b3 = 4a1 + 2a2
Inverting the formulas for ai, we get
a1 = (1/5) α1 + (1/5) α2
a2 = −(1/5) α1 + (4/5) α2
We have now,
T β1 = T(2b1 − b2 + b3) = 2T b1 − T b2 + T b3
     = 2a1 + a1 − 3a2 + 4a1 + 2a2 = 7a1 − a2
     = 7((1/5) α1 + (1/5) α2) − (−(1/5) α1 + (4/5) α2)
     = (8/5) α1 + (3/5) α2
Similarly,
T β2 = T b1 − T b3 = −3a1 − 2a2 = −(1/5) α1 − (11/5) α2
and
T β3 = T b1 + 2T b2 = −a1 + 6a2 = −(7/5) α1 + (23/5) α2
Thus, the matrix representation of transformation T with respect to the new bases is
[ 8/5   −1/5    −7/5  ]
[ 3/5   −11/5    23/5 ]
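The computation can be cross-checked numerically by working in coordinates (a minimal sketch, not part of the original solution; the new basis vectors are encoded by their coordinate columns with respect to the old bases):

```python
import numpy as np

A = np.array([[1.0, -1.0, 4.0],
              [0.0,  3.0, 2.0]])           # matrix of T in the bases {a1,a2}, {b1,b2,b3}
P_alpha = np.array([[4.0, 1.0],            # columns: alpha_1, alpha_2 expressed in {a1,a2}
                    [-1.0, 1.0]])
P_beta = np.array([[2.0, 1.0, 1.0],        # columns: beta_1, beta_2, beta_3 expressed in {b1,b2,b3}
                   [-1.0, 0.0, 2.0],
                   [1.0, -1.0, 0.0]])

T_new = np.linalg.inv(P_alpha) @ A @ P_beta   # matrix of T in the new bases
print(T_new)    # [[ 1.6 -0.2 -1.4]   i.e. [[ 8/5  -1/5  -7/5]
                #  [ 0.6 -2.2  4.6]]        [ 3/5 -11/5  23/5]]
```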
Exercise 2.9.4 Let A be an n × n matrix. Show that transformations which
(a) interchange rows or columns of A
(b) multiply any row or column of A by a scalar �= 0
(c) add any multiple of a row or column to a parallel row or column
produce a matrix with the same rank as A.
Recall that j-th column represents value Aej . All discussed operations on columns redefine the map
but do not change its range. Indeed,
span{Ae1 , . . . , Aej , . . . , Aei , . . . , Aen } = span{Ae1 , . . . , Aei , . . . , Aej , . . . , Aen }
span{Ae1 , . . . , A(αei ), . . . , Aej , . . . , Aen } = span{Ae1 , . . . , Aei , . . . , Aej , . . . , Aen }
span{Ae1 , . . . , A(ei + βej ), . . . , Aej , . . . , Aen } = span{Ae1 , . . . , Aei , . . . , Aej , . . . , Aen }
The same conclusions apply to the rows of matrix A as they represent vectors AT e∗i , and rank AT =
rank A.
Exercise 2.9.5 Let {a1, a2} and {e1, e2} be two bases for ℝ², where a1 = (−1, 2), a2 = (0, 3), and e1 = (1, 0), e2 = (0, 1). Let T : ℝ² → ℝ² be given by T(x, y) = (3x − 4y, x + y). Find the matrices for T for each choice of basis and show that these matrices are similar.
Matrix representation of T in the canonical basis e1, e2 is:
T = [ 3  −4 ]
    [ 1   1 ]
We have
aj = Σ_{k=1}^{2} Pjk ek   ⇒   el = Σ_{i=1}^{2} (P⁻¹)li ai
where
P = [ −1  2 ]        P⁻¹ = [ −1  2/3 ]
    [  0  3 ]               [  0  1/3 ]
Linearity of map T implies the following relations.
T aj = T ( Σ_{k=1}^{2} Pjk ek ) = Σ_{k=1}^{2} Pjk T ek = Σ_{k=1}^{2} Pjk Σ_{l=1}^{2} Tlk el
     = Σ_{k=1}^{2} Σ_{l=1}^{2} Pjk Tlk Σ_{i=1}^{2} (P⁻¹)li ai
     = Σ_{i=1}^{2} ( Σ_{k=1}^{2} Σ_{l=1}^{2} (P⁻¹)li Tlk Pjk ) ai
Consequently, the matrix representation T̃ij in basis a1, a2 is:
T̃ij = Σ_{k=1}^{2} Σ_{l=1}^{2} (P⁻¹)li Tlk Pjk
or, in the matrix form,
T̃ = P^{−T} T Pᵀ
which shows that matrices T̃ and T are similar. Finally, computing the products above, we get
T̃ = [ −1    0  ] [ 3  −4 ] [ −1  0 ]   =   [ 11   12 ]
     [ 2/3  1/3 ] [ 1   1 ] [  2  3 ]       [ −7   −7 ]
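A two-line numerical cross-check (not part of the original solution):

```python
import numpy as np

T = np.array([[3.0, -4.0], [1.0, 1.0]])       # T in the canonical basis
P = np.array([[-1.0, 2.0], [0.0, 3.0]])       # rows: a1, a2 expressed in e1, e2
T_tilde = np.linalg.inv(P.T) @ T @ P.T        # P^{-T} T P^T
print(T_tilde)                                 # [[11. 12.] [-7. -7.]]

# sanity check: T a1 = 11 a1 - 7 a2 and T a2 = 12 a1 - 7 a2
a1, a2 = np.array([-1.0, 2.0]), np.array([0.0, 3.0])
print(np.allclose(T @ a1, 11*a1 - 7*a2), np.allclose(T @ a2, 12*a1 - 7*a2))
```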
Algebraic Duals
2.10
The Algebraic Dual Space, Dual Basis
Exercises
Exercise 2.10.1 Consider the canonical basis e1 = (1, 0), e2 = (0, 1) for R
I 2 . For x = (x1 , x2 ) ∈ R
I 2 , x1 , x2
are the components of x with respect to the canonical basis. The dual basis functional e∗j returns the
j-th component:
e∗j : R
I 2 � (x1 , x2 ) → xj ∈ R
I
Consider now a different basis for R
I 2 , say a1 = (1, 1), a2 = (−1, 1). Write down the explicit formulas
for the dual basis.
We follow the same reasoning. Expanding x in the new basis, x = ξ1 a1 + ξ2 a2 , we apply a∗j to both
sides to learn that the dual basis functionals a∗j return the components with respect to basis aj ,
a∗j : R
I 2 � (x1 , x2 ) → ξj ∈ R
I
The whole issue is thus simply in computing the components ξj . This is done by representing the
canonical basis vectors ei in terms of vectors aj ,
a1 = e1 + e2
a2 = −e1 + e2
   ⟹   e1 = (1/2) a1 − (1/2) a2,   e2 = (1/2) a1 + (1/2) a2
Then,
x = x1 e1 + x2 e2 = x1 ((1/2) a1 − (1/2) a2) + x2 ((1/2) a1 + (1/2) a2) = (1/2)(x1 + x2) a1 + (1/2)(x2 − x1) a2
Therefore, ξ1 = (1/2)(x1 + x2) and ξ2 = (1/2)(x2 − x1) are the dual basis functionals.
Exercise 2.10.2 Let V be a finite-dimensional vector space, and V ∗ denote its algebraic dual. Let ei , i =
1, . . . , n be a basis in V , and e∗j , j = 1, . . . , n denote its dual basis. What is the matrix representation
of the duality pairing with respect to these two bases? Does it depend upon whether we define the dual
space as linear or antilinear functionals?
It follows from the definition of the dual basis that the matrix representation of the duality pairing is
the Kronecker’s delta δij . This is true for both definitions of the dual space.
Exercise 2.10.3 Let V be a complex vector space. Let L(V, IC ) denote the space of linear functionals defined
on V , and let L̄(V, IC ) denote the space of antilinear functionals defined on V . Define the (complex
conjugate) map C as
C : L(V, IC ) � f → f¯ ∈ L̄(V, IC ),
def
f¯(v) = f (v)
Show that operator C is well defined, bijective, and antilinear. What is the inverse of C?
Let f be a linear functional defined on V . Then,
f (α1 v1 + α2 v2 ) = α1 f (v1 ) + α2 f (v2 ) = α1 f (v1 ) + α2 f (v2 )
so f¯ is antilinear. Similarly,
(α1 f1 + α2 f2 )(v) = α1 f1 (v) + α2 f2 (v) = α1 f1 (v) + α2 f2 (v)
so the map C is itself antilinear. Similarly, map
D : L̄(V, IC ) � f → f¯ ∈ L(V, IC ),
def
f¯(v) = f (v)
is well defined and antilinear. Notice that C and D are defined on different space so you cannot say
that C = D. Obviously, both compositions D ◦ C and C ◦ D are identities, so D is the inverse of C,
and both maps are bijective.
Exercise 2.10.4 Let V be a finite-dimensional vector space. Consider the map ι from V into its bidual space
V ∗∗ , prescribing for each v ∈ V the evaluation at v, and establishing the canonical isomorphism
between space V and its bidual V ∗∗ . Let e1 , . . . , en be a basis for V , and let e∗1 , . . . , e∗n be the
corresponding dual basis. Consider the bidual basis, i.e., the basis e∗∗
i , i = 1, . . . , n in the bidual
space, dual to the dual basis, and prove that
ι(ei ) = e∗∗
i
This is simple. Definition of map ι implies that
< ι(v), f >V ∗∗ ×V ∗ =< f, v >V ∗ ×V
Thus,
∗
δij =< e∗∗
i , ej >V ∗∗ ×V ∗
and
< ι(ei ), e∗j >V ∗∗ ×V ∗ =< e∗j , ei >V ∗ ×V = δji = δij
The relation then follows from the uniqueness of the (bi)dual basis.
2.11
Transpose of a Linear Transformation
Exercises
Exercise 2.11.1 The following is a “sanity check” of your understanding of concepts discussed in the last
two sections. Consider R
I 2.
I 2.
(a) Prove that a1 = (1, 0), a2 = (1, 1) is a basis in R
It is sufficient to show linear independence. Any n linearly independent vectors in a n-dimensional
vector space provide a basis for the space. The vectors are clearly not collinear, so they are linearly independent. Formally, α1 a1 + α2 a2 = (α1 + α2 , α2 ) = (0, 0) implies α1 = α2 = 0, so
the vectors are linearly independent.
I f (x1 , x2 ) = 2x1 + 3x2 . Prove that the functional is linear,
(b) Consider a functional f : R
I 2 → R,
and determine its components in the dual basis a∗1 , a∗2 .
Linearity is trivial. Dual basis functionals return components with respect to the original basis,
a∗j (ξ1 a1 + ξ2 a2 ) = ξj
It is, therefore, sufficient to determine ξ1 , ξ2 . We have,
ξ1 a1 + ξ2 a2 = ξ1 e1 + ξ2 (e1 + e2 ) = (ξ1 + ξ2 )e1 + ξ2 e2
so x1 = ξ1 + ξ2 and x2 = ξ2 . Inverting, we get, ξ1 = x1 − x2 , ξ2 = x2 . These are the dual basis
functionals. Consequently,
f (x1 , x2 ) = 2x1 + 3x2 = 2(ξ1 + ξ2 ) + 3ξ2 = 2ξ1 + 5ξ2 = (2a∗1 + 5a∗2 )(x1 , x2 )
Using the argumentless notation,
f = 2a∗1 + 5a∗2
If you are not interested in the form of the dual basis functionals, you can compute the components of f with respect to the dual basis faster. Assume α1 a∗1 + α2 a∗2 = f. Evaluating both sides at x = a1 we get,
(α1 a∗1 + α2 a∗2)(a1) = α1 = f(a1) = f(1, 0) = 2
Similarly, evaluating at x = a2, we get α2 = 5.
(c) Consider a linear map A : ℝ² → ℝ² whose matrix representation in basis a1, a2 is
[ 1  0 ]
[ 1  2 ]
Compute the matrix representation of the transpose operator with respect to the dual basis.
Nothing to compute. The matrix representation of the transpose operator with respect to the dual basis is equal to the transpose of the original matrix,
[ 1  1 ]
[ 0  2 ]
Exercise 2.11.2 Prove Proposition 2.11.3.
All five properties of the matrices are directly related to the properties of linear transformations discussed in Proposition 2.11.1 and Proposition 2.11.2. They can also be easily verified directly.
(i)  (αAij + βBij)ᵀ = αAji + βBji = α(Aij)ᵀ + β(Bij)ᵀ
(ii) ( Σ_{l=1}^{n} Bil Alj )ᵀ = Σ_{l=1}^{n} Bjl Ali = Σ_{l=1}^{n} Ali Bjl = Σ_{l=1}^{n} (Ail)ᵀ (Blj)ᵀ
(iii) (δij)ᵀ = δji = δij.
(iv) Follow the reasoning for linear transformations:
AA−1 = I
⇒
(A−1 )T AT = I T = I
A−1 A = I
⇒
AT (A−1 )T = I T = I
Consequently, matrix AT is invertible, and (AT )−1 = (A−1 )T .
(v) Conclude this from Proposition 2.11.2. Given a matrix Aij , ij, = 1, . . . , n, we can interpret it as
I n defined as:
the matrix representation of map A : R
I n →R
where
y = Ax
yi =
n
�
Aij xj
j=1
with respect to the canonical basis ei , i = 1, . . . , n. The transpose matrix AT can then be
interpreted as the matrix of the transpose transformation:
AT : (I
Rn )∗ → (I
R n )∗
The conclusion follows then from the facts that rank A = rank A, rank AT = rank AT , and
Proposition 2.11.2.
Exercise 2.11.3 Construct an example of square matrices A and B such that
(a) AB ≠ BA
A = [ 1  0 ]     B = [ 1  1 ]
    [ 1  1 ]         [ 0  1 ]
Then
AB = [ 1  1 ]    and    BA = [ 2  1 ]
     [ 1  2 ]                [ 1  1 ]
(b) AB = 0, but neither A = 0 nor B = 0
A = [ 1  0 ]     B = [ 0  0 ]
    [ 0  0 ]         [ 0  1 ]
(c) AB = AC, but B ≠ C
Take A, B from (b) and C = 0.
Exercise 2.11.4 If A = [Aij ] is an m × n rectangular matrix and its transpose AT is the n × m matrix,
ATn×m = [Aji ]. Prove that
(i) (AT )T = A.
((Aij)ᵀ)ᵀ = (Aji)ᵀ = Aij
(ii) (A + B)T = AT + B T .
Particular case of Proposition 2.11.3(i).
(iii) (ABC · · · XY Z)T = Z T Y T X T · · · C T B T AT .
Use Proposition 2.11.3(ii) and recursion,
(ABC . . . XY Z)T = (BC . . . XY Z)T AT
= (C . . . XY Z)T B T AT
..
.
= Z T Y T X T . . . C T B T AT
(iv) (qA)T = qAT .
Particular case of Proposition 2.11.3(i).
Exercise 2.11.5 In this exercise, we develop a classical formula for the inverse of a square matrix. Let
A = [aij ] be a matrix of order n. We define the cofactor Aij of the element aij of the i-th column of
A as the determinant of the matrix obtained by deleting the i-th row and j-th column of A, multiplied
by (−1)i+j :
Aij = cofactor aij
(a) Show that
δij det A = Σ_{k=1}^{n} aik Ajk,    1 ≤ i, j ≤ n
where δij is the Kronecker delta. (Here Ajk is, up to the sign (−1)^{j+k}, the determinant of the (n − 1) × (n − 1) array obtained from A by deleting the j-th row and the k-th column.)
Hint: Compare Exercise 2.13.4.
For i = j, the formula reduces to the Laplace Expansion Formula for determinants discussed in
Exercise 2.13.4. For i ≠ j, the right-hand side represents the Laplace expansion of the determinant of an array where two rows are identical. Antisymmetry of the determinant (comp. Section 2.13)
implies then that the value must be zero.
(b) Using the result in (a), conclude that
A⁻¹ = (1/det A) [Aij]ᵀ
Divide both sides by det A.
(c) Use (b) to compute the inverse of

and verify your answer by showing that

1 22
A =  1 −1 0 
2 13
A−1 A = AA−1 = I
A
Exercise 2.11.6 Consider the matrices
�
�
1 041
A=
,
2 −1 3 0
and
−1

1
4
3

−1 4
B =  12 0  ,
01
If possible, compute the following:



 1 1 −2 


3
3
=



−1 −1 1

� �
2
D=
,
3
− 23

1
 −1
E=
 1
0
C = [1, −1, 4, −3]
0 2
4 0
0 2
1 −1

3
1

4
2
(a) AAT + 4D T D + E T
The expression is ill-defined, AAT ∈ Matr(2, 2) and E T ∈ Matr(4, 4), so the two matrices
cannot be added to each other.
(b) C T C + E − E 2

−1
 −7

=
−2
−1

−4 3 −17
−12 −1
1

−8 16 −27 
−2 −9 10
(c) B T D
Ill-defined, mismatched dimensions.
(d) B T BD − D
=
�
276
36
�
(e) EC − AT A
EC is not computable.
(f) AT DC(E − 2I)


32 −40 40 144
 −12 15 −15 −54 

=
 68 −85 85 306 
8 −10 10 36
Exercise 2.11.7 Do the following vectors provide a basis for ℝ⁴?
a = (1, 0, −1, 1),    b = (0, 1, 0, 22),    c = (3, 3, −3, 9),    d = (0, 0, 0, 1)
It is sufficient to check linear independence,
αa + βb + γc + δd = 0   ⇒(?)   α = β = γ = δ = 0
Computing
αa + βb + γc + δd = (α + 3γ, β + 3γ, −α − 3γ, α + 22β + 9γ + δ)
we arrive at the homogeneous system of equations
[  1   0   3   0 ] [ α ]   [ 0 ]
[  0   1   3   0 ] [ β ] = [ 0 ]
[ −1   0  −3   0 ] [ γ ]   [ 0 ]
[  1  22   9   1 ] [ δ ]   [ 0 ]
The system has a nontrivial solution iff the matrix is singular, i.e., its determinant is zero. By inspection, the third row equals minus the first one, so the determinant is zero. Vectors a, b, c, d are linearly dependent and, therefore, do not provide a basis for ℝ⁴.
Exercise 2.11.8 Evaluate the determinant of the matrix


1 −1 0 4
1 0 2 1

A=
 4 7 1 −1 
1 01 2
Linear Algebra
73
Use e.g. the Laplace expansion with respect to the last row and Sarrus’ formulas,
�
�
�
�
�
� 1 −1 0 4 �
�
�
�
�1 0 4�
� −1 −1 0 �
�
�
� −1 0 4 �
�
�
�
�
�
�
�1 0 2 1�
�
�
�
�
�
�
�
�
� 4 7 1 −1 � = −(−1) � 0 2 1 � − 1 � 1 2 1 � + 2 � 1 0 2 �
�
�
� 7 1 −1 �
� 4 1 −1 �
� 4 7 1�
�1 0 1 2�
= 2 − 56 + 1 − (−2 + 4 − 32 − 1) + 2(−8 − 14 + 1) = −53 − (−31) − 42 = −64
Exercise 2.11.9 Invert the following matrices (see Exercise 2.11.5).

�
�
42
1 −1
A=
, B = 2 4
1 2
12
−1
A


=
2 1
3 3
− 13


1
3
B
−1

4
12

1
2
2
2
− 12
0



− 2
6 
7
−


12
12
12
=



6
0 − 12
1
Exercise 2.11.10 Prove that if A is symmetric and nonsingular, so is A−1 .
Use Proposition 2.11.3(iv).
(A−1 )T = (AT )−1 = A−1
Exercise 2.11.11 Prove that if A, B, C, and D are nonsingular matrices of the same order then
(ABCD)−1 = D −1 C −1 B −1 A−1
Use the fact that matrix product is associative,
(ABCD)(D −1 C −1 B −1 A−1 ) = ABC(DD −1 )C −1 B −1 A−1
= ABC I C −1 B −1 A−1 = . . . = I
In the same way,
(D −1 C −1 B −1 A−1 )(ABCD) = I
So, (ABCD)−1 = D −1 C −1 B −1 A−1 .
Exercise 2.11.12 Consider the linear problem

(i) Determine the rank of T .

0 1 3 −2
T =  2 1 −4 3  ,
2 3 2 −1
 
1
y = 5
7
74
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Multiplication of columns (rows) with non-zero factor, addition of columns (rows), and interchange of columns (rows), do not change the rank of a matrix. We may use those operations and
mimic Gaussian elimination to compute the rank of matrices.


0 1 3 −2
rank  2 1 −4 3 
2 3 2 −1


1 3 −2 0
= rank  1 −4 3 2 
switch columns 1 and 4
3 2 −1 2


1 3 −2 0
= rank  1 −4 3 2 
divide row 3 by 3
1 23 − 13 32


1 3 −2 0
subtract row 1 from rows 2 and 3
= rank  0 −7 5 2 
0 − 73 35 32


1 0 0 0
= rank  0 −7 5 2 
manipulate the same way columns to zero out the first row
0 − 73 35 32


10 0 0
= rank  0 1 − 57 − 27 
00 0 0


1000
= rank  0 1 0 0 
0000
=2
(ii) Determine the null space of T .
Set x3 = α and x4 = β and solve for x1 , x2 to obtain
7
5
N (T ) = {( α − β, −3α + 2β, α, β)T : α, β ∈ R}
I
2
2
(iii) Obtain a particular solution and the general solution.
Check that the rank of the augmented matrix is also equal 2. Set x3 = x4 = 0 to obtain a
particular solution
x = (2, 1, 0, 0)T
The general solution is then
7
5
x = (2 + α − β, 1 − 3α + 2β, α, β)T , α, β ∈ R
I
2
2
(iv) Determine the range space of T .
As rank T = 2, we know that the range of T is two-dimensional. It is sufficient thus to find two
linearly independent vectors that are in the range, e.g. we can take T e1 , T e2 represented by the
first two columns of the matrix,
R(T ) = span{(0, 2, 2)T , (1, 1, 3)T }
Linear Algebra
75
Exercise 2.11.13 Construct examples of linear systems of equations having (1) no solutions, (2) infinitely
many solutions, (3) if possible, unique solutions for the following cases:
(a) 3 equations, 4 unknowns
(1)


1000
T = 1 0 0 0
0111
(2)


1111
T = 1 1 1 1
1111
(3) Unique solution is not possible.
 
0
y = 1
0
 
0
y = 0
0
(b) 3 equations, 3 unknowns
(1)


100
T = 1 0 0
011
(2)


100
T = 2 0 0
111
(3)


100
T = 0 1 0
001
Exercise 2.11.14 Determine the rank of the following matrices:



21 4 7
12134
0 1 2 1
, T2 = 2 0 3 2 1
T =
2 2 6 8
11121
4 4 14 10
 
0
y = 1
0
 
1
y = 2
1
 
1
y = 1
1



4
2 −1 1
5, T3 = 2 0 1
3
0 11
In all three cases, the rank is equal 3.
Exercise 2.11.15 Solve, if possible, the following systems:
(a)
+ 3x3 − x4 + 2x5 = 2
4x1
x 1 − x2 + x3 − x4 + x5 = 1
x 1 + x2 + x3 − x4 + x5 = 1
x1 + 2x2 + x3


t+3


0



x =  −2t − 3 
,
 −1 
t
+ x5 = 0
t ∈R
I
76
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
(b)
− 4x1 − 8x2 + 5x3 = 1
− 2x2
2x1
+ x2
5x1
+ 3x3
= 2
+ 2x3
= 4
�
19
47 3
x=
,−
,
120 120 10
�T
(c)
2x1 + 3x2 + 4x3 + 3x4 = 0
x1 + 2x2 + 3x3 + 2x4 = 0
x 1 + x2 + x3 + x4 = 0


α+β
 −2α − β 
,
x=


α
β
α, β ∈ R
I
2.12
Tensor Products, Covariant and Contravariant Tensors
2.13
Elements of Multilinear Algebra
Exercises
Exercise 2.13.1 Let X be a finite-dimensional space of dimension n. Prove that the dimension of the space
s
Mm
(X) of all m-linear symmetric functionals defined on X is given by the formula
�
�
(n + m − 1)!
n(n + 1) . . . (n + m − 1)
n+m−1
s
dim Mm
=
=
(X) =
m
1 · 2 · ... · m
m! (n − 1)!
Proceed along the following steps.
(a) Let Pi,m denote the number of increasing sequences of m natural numbers ending with i,
1 ≤ a 1 ≤ a2 ≤ . . . ≤ a m = i
Argue that
s
dim Mm
(X) =
n
�
i=1
Pi,m
Linear Algebra
77
Let a be a general m-linear functional defined on X. Let e1 , . . . , en be a basis for X, and let
v j , j = 1, . . . , n, denote components of a vector v with respect to the basis. The multilinearity of
a implies the representation formula,
a(v1 , . . . , vm ) =
n �
n
�
...
j1 =1 j2 =1
n
�
jm
a(ej1 , ej2 , . . . , ejm ) v1j1 v2j2 . . . vm
jm =1
On the other side, if the form a is symmetric, we can interchange any two arguments in the coefficient a(ej1 , ej2 , . . . , ejm ) without changing the value. The form is thus determined by coefficients
a(ej1 , ej2 , . . . , ejm ) where
1 ≤ j 1 ≤ . . . ≤ jm ≤ n
s
The number of such increasing sequences equals the dimension of space Mm
(X). Obviously, we
can partition the set of such sequences into subsets that contain sequences ending at particular
indices 1, 2, . . . , n, from which the identity above follows.
(b) Argue that
Pi,m+1 =
i
�
Pj,m
j=1
The first m elements of an increasing sequence of m + 1 integers ending at i, form an increasing
sequence of m integers ending at j ≤ i.
(c) Use the identity above and mathematical induction to prove that
Pi,m =
i(i + 1) . . . (i + m − 2)
(m − 1)!
For m = 1, Pi,1 = 1. For m = 2,
Pi,2 =
i
�
1=i
j=1
For m = 3,
Pi,3 =
i
�
j=
j=1
i(i + 1)
2
Assume the formula is true for a particular m. Then
Pi,m+1 =
i
�
j(j + 1) . . . (j + m − 2)
j=1
(m − 1)!
We shall use induction in i to prove that
i
�
j(j + 1) . . . (j + m − 2)
j=1
(m − 1)!
=
i(i + 1) . . . (i + m − 1)
m!
78
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
The case i = 1 is obvious. Suppose the formula is true for a particular value of i. Then,
i+1
�
j(j + 1) . . . (j + m − 2)
j=1
(m − 1)!
i
�
j(j + 1) . . . (j + m − 2)
(i + 1)(i + 2) . . . (i + m − 1)
(m − 1)!
(m − 1)!
j=1
i(i + 1) . . . (i + m − 1) m(i + 1)(i + 2) . . . (i + m − 1)
+
=
m!
m!
(i + 1)(i + 2) . . . (i + m − 1)(i + m)
=
m!
(i + 1)(i + 2) . . . (i + m − 1)((i + 1) + m − 1)
=
m!
=
+
(d) Conclude the final formula.
Just use the formula above.
Exercise 2.13.2 Prove that any bilinear functional can be decomposed into a unique way into the sum of a
symmetric functional and an antisymmetric functional. In other words,
M2 (V ) = M2s (V ) ⊕ M2a (V )
Does this result hold for a general m-linear functional with m > 2 ?
The result follows from the simple decomposition,
a(u, v) =
1
1
(a(u, v) + a(v, u)) + (a(u, v) − a(v, u))
2
2
Unfortunately, it does not generalize to m > 2. This can for instance be seen from the simple comparison of dimensions of the involved spaces in the finite-dimensional case,
�
� � �
n+m−1
n
m
+
n >
m
m
for 2 < m ≤ n.
Exercise 2.13.3 Antisymmetric linear functionals are a great tool to check for linear independence of vectors.
Let a be an m-linear antisymmetric functional defined on a vector space V . Let v1 , . . . , vm be m
vectors in space V such that a(v1 , . . . , vm ) �= 0. Prove that vectors v1 , . . . , vn are linearly independent.
Is the converse true? In other words, if vectors v1 , . . . , vn are linearly independent, and a is a nontrivial
m-linear antisymmetric form, is a(v1 , . . . , vm ) �= 0?
Assume in contrary that there exists an index i such that
vi =
�
βj vj
j�=i
for some constants βj , j �= i. Substituting into the functional a, we get,
a(v1 , . . . , vi , . . . , vm ) = a(v1 , . . . ,
�
j�=i
βj vj , . . . , v m ) =
�
j�=i
βj a(v1 , . . . , vj , . . . , vm ) = 0
Linear Algebra
79
since in each of the terms a(v1 , . . . , vj , . . . , vm ), two arguments are the same.
The converse is not true. Consider for instance a bilinear, antisymmetric form defined on a threedimensional space. Let e1 , e2 , e3 be a basis for the space. As discussed in the text, the form is uniquely
determined by its values on pairs of basis vectors: a(e1 , e2 ), a(e1 , e3 ), a(e2 , e3 ). It is sufficient for one
of these numbers to be non-zero in order to have a nontrivial form. Thus we may have a(e1 , e2 ) = 0
for the linearly independent vectors e1 , e2 , and a nontrivial form a. The discussed criterion is only a
sufficient condition for the linear independence but not necessary.
Exercise 2.13.4 Use the fact that the determinant of matrix A is a multilinear antisymmetric functional of
matrix columns and rows to prove the Laplace Expansion Formula. Select a particular column of
matrix Aij , say the j-th column. Let Aij denote the submatrix of A obtained by removing i-th row
and j-th column (do not confuse it with a matrix representation). Prove that
det A =
n
�
(−1)i+j Aij det Aij
i=1
Formulate and prove an analogous expansion formula with respect to an i-th row.
It follows from the linearity of the determinant with respect to the j-th column that,






... 1 ...
... 0 ...
. . . A1j . . .






det  ... ... ...  = A1j det  ... ... ...  + . . . + Anj det  ... ... ... 
... 0 ...
. . . Anj . . .
... 1 ...
On the other side, the determinant of matrix,


(j)

... 0 ...



.
. 
 (i) .. 1 .. 
... 0 ...
is a multilinear functional of the remaining columns (and rows) and, for Aij = I (The I denote here
the identity matrix in R
I n−1 ), its value reduces to (−1)i+j . Hence,


(j)

... 0 ...


i+j
i+j
det 
.
.  = (−1) det A
 (i) .. 1 .. 
... 0 ...
The reasoning follows identical lines for the expansion with respect to the i-th column.
Exercise 2.13.5 Prove the Kramer’s formulas for the solution of a nonsingular system of n equations with n
unknowns,

a11
 ..
 .
...
..
.
   
x1
b1
a1n
..   ..  =  .. 
.  .   . 
an1 . . . ann
xn
bn
80
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Hint: In order to develop the formula for the j-th unknown, rewrite the system in the form:

a11
 ..
 .

an1

1 . . . x1 . . .
a1n
..
..   ..

.
. 
 .
0 . . . xn . . .
. . . ann
...
..
.
(j)



a11 . . . b1 . . . a1n
  ..

..
= .

.
 

1
an1 . . . bn . . . ann
0
(j)
Compute the determinant of both sides of the identity, and use Cauchy’s Theorem for Determinants for
the left-hand side.
Exercise 2.13.6 Explain why the rank of a (not necessarily square) matrix is equal to the maximum size of a
square submatrix with a non-zero determinant.
Consider an m × n matrix Aij . The matrix can be considered to be a representation of a linear map A
from an n-dimensional space X with a basis ei , i = 1, . . . , n, into an m-dimensional space Y with a
basis g1 , . . . , gm . The transpose of the matrix represents the transpose operator AT mapping dual space
∗
and e∗1 , . . . , e∗n . The rank of the
Y ∗ into the dual space X ∗ , with respect to the dual bases g1∗ , . . . , gm
matrix is equal to the dimension of the range space of operator A and operator AT . Let ej1 , . . . , ejk be
such vectors that Aej1 , . . . , Aejk is the basis for the range of operator A. The corresponding submatrix
represents a restriction B of operator A to a subspace X0 = span(ej1 , . . . , ejk ) and has the same rank
as the original whole matrix. Its transpose has the same rank equal to k. By the same argument, there
exist k vectors gi1 , . . . , gik such that AT gi∗1 , . . . , AT gi∗k are linearly independent. The corresponding
k × k submatrix represents the restriction of the transpose operator to the k-dimensional subspace
Y0∗ = span(gi∗1 , . . . , gi∗k ), with values in the dual space X0∗ , and has the same rank equal k. Thus,
the final submatrix represents an isomorphism from a k-dimensional space into a k-dimensional space
and, consequently, must have a non-zero determinant.
Conversely, let v 1 , . . . , v m be k column vectors in R
I m . Consider a matrix composed of the columns.
If there exists a square submatrix of the matrix with a non-zero determinant, the vectors must be
linearly independent. Indeed, the determinant of any square submatrix of the matrix represents a klinear, antisymmetric functional of the column vectors, so, by Exercise 2.13.3, v 1 , . . . , v k are linearly
independent vectors. The same argument applies to the rows of the matrix.
Linear Algebra
81
Euclidean Spaces
2.14
Scalar (Inner) Product. Representation Theorem in Finite-Dimensional Spaces
2.15
Basis and Cobasis. Adjoint of a Transformation. Contra- and Covariant Components of Tensors
Exercises
Exercise 2.15.1 Go back to Exercise 2.11.1 and consider the following product in R
I 2,
R
I 2 ×R
I 2 � (x, y) → (x, y)V = x1 y1 + 2x2 y2
Prove that (x, y)V satisfies the axioms for an inner product. Determine the adjoint of map A from
Exercise 2.11.1 with respect to this inner product.
The product is bilinear, symmetric and positive definite, since (x, x)V = x21 +2x22 ≥ 0, and x21 +2x22 =
0 implies x1 = x2 = 0. The easiest way to determine a matrix representation of A∗ is to determine the
cobasis of the (canonical) basis used to define the map A. Assume that a1 = (α, β). Then
(a1 , a1 ) = α = 1
(a2 , a1 ) = α + 2β = 0
=⇒
β = − 12
so a1 = (1, − 12 ). Similarly, if a2 = (α, β) then,
(a1 , a2 ) = α = 0
(a2 , a2 ) = α + 2β = 1
=⇒
β=
1
2
so a2 = (0, 12 ). The matrix representation of A∗ in the cobasis is simply the transpose of the original
matrix,
�
11
02
�
In order to represent A∗ in the original, canonical basis, we need to switch in between the bases.
a1 = e1 − 12 e2
a2 = 12 e2
=⇒
e 1 = a1 + a 2
e2 = 2a2
82
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Then,
A∗ y = A∗ (y1 e1 + y2 e2 ) = A∗ (y1 (a1 + a2 ) + y2 2a2 ) = A∗ (y1 a1 + (y1 + 2y2 )a2 )
= y1 A∗ a1 + (y1 + 2y2 )A∗ a2 = y1 a1 + (y1 + 2y2 )(a1 + 2a2 )
= y1 (e1 − 12 e2 ) + (y1 + 2y2 )(e1 + 12 e2 ) = 2(y1 + y2 )e1 − 12 y1 e2
= (2(y1 + y2 ), y2 )
Now, let us check our calculations. First, let us compute the original map (that has been given to us in
basis a1 , a2 ), in the canonical basis,
A(x1 , x2 ) = A(x1 e1 + x2 e2 ) = A(x1 a1 + x2 (a2 − a1 ))
= A((x1 − x2 )a1 + x2 a2 ) = (x1 − x2 )(a1 + a2 ) + x2 2a2
= (x1 − x2 )(2e1 + e2 ) + 2x2 (e1 + e2 )
= (2x1 , x1 + x2 )
If our calculations are correct then,
(Ax, y)V = 2x1 y1 + 2(x1 + x2 )y2
must match
(x, A∗ y)V = x1 2(y1 + y2 ) + 2x2 y2
which it does! Needless to say, you can solve this problem in many other ways.
3
Lebesgue Measure and Integration
Lebesgue Measure
3.1
Elementary Abstract Measure Theory
Exercises
Exercise 3.1.1 Prove Proposition 3.1.2.
Let Sι ⊂ P(X) be a family of σ-algebras. Prove that the common part
�
ι
Sι is a σ-algebra as well.
The result is a simple consequence of commutativity of universal quantifiers. For any open statement
P (x, y), we have,
P (x, y)
∀x ∀y ⇐⇒ P (x, y)
∀y ∀x
For finite sets of indices, the property follows by induction from level one logic axioms. The property
serves then as a motivation for assuming the level two logic axiom above.
The specific arguments are very simple and look as follows.
�
�
1. ∅, X ∈ Sι , ∀ι implies ∅, X ∈ ι Sι , so ι Sι is nonempty.
�
�
2. Let A ∈ ι Sι . Then A ∈ Sι , ∀ι and, consequently, A� ∈ Sι , ∀ι, which implies that A� ∈ ι Sι .
�
�∞
3. Ai ∈ ι Sι , i = 1, 2, . . ., implies Ai ∈ Sι , ∀ι, i = 1, 2, . . ., which in turn implies i=1 Ai ∈
�∞
�
Sι , ∀ι, and, consequently, i=1 Ai ∈ ι Sι .
Exercise 3.1.2 Let f : X → Y be a function. Prove the following properties of the inverse image:
f −1 (Y − B) = X − f −1 (B)
and
f −1 (
∞
�
i=1
for arbitrary B, Bi ⊂ Y .
Bi ) =
∞
�
f −1 (Bi )
i=1
83
84
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Both proofs are very straightforward.
x ∈ f −1 (Y − B)
⇔
f (x) ∈
/B
⇔
∼ (f (x) ∈ B)
⇔
x∈
/ f −1 (B)
and
x ∈ f −1 (
∞
�
i=1
Bi ) ⇔ f (x) ∈
∞
�
i=1
Bi ⇔ ∃i f (x) ∈ Bi ⇔ ∃i x ∈ f −1 (Bi ) ⇔ x ∈
∞
�
f −1 (Bi )
i=1
Exercise 3.1.3 Construct an example of f : X → Y and a σ-algebra in X such that f (S) is not a σ-algebra
in Y .
Take any X, Y and any non-surjective function f : X → Y . Take the trivial σ-algebra in X,
S = {∅, X}
Then, in particular,
f (S) = {∅, f (X)}
does not contain Y = ∅� and, therefore, it violates the first axiom for σ-algebras.
Exercise 3.1.4 Prove Corollary 3.1.1.
Let f : X → Y be a bijection and S ⊂ P(X) a σ-algebra. Prove that:
(i) f (S) := {f (A) : A ∈ S} is a σ-algebra in Y .
(ii) If a set K generates S in X then f (K) generates f (S) in Y .
Solutions:
(i) Let g = f −1 be the inverse function of f . Then f (S) = g −1 (S) and the property follows from
Proposition 3.1.3. Equivalently, f (S) = R defined in the same Proposition.
(ii) Let f : X → Y be an arbitrary function. Let S = S(K) be the σ-algebra generated by a
family K in Y . Then, by Proposition 3.1.3, f −1 (S) is a σ-algebra in X and it contains f −1 (S).
Consequently, it must contain the smallest σ-algebra containing f −1 (S), i.e.
f −1 (S(K)) ⊃ S(f −1 (K))
Applying the result to the inverse function g = f −1 , we get
f (S(K)) ⊃ S(f (K))
Applying the last result to g = f −1 and set L = f (K) ⊂ Y gives:
S(g(L)) ⊂ g(S(L))
which, after applying f to both sides, leads to the inverse inclusion,
f (S(g(L)) ⊂ S(����
L )
����
K
f (K)
Lebesgue Measure and Integration
85
Exercise 3.1.5 Prove the details from the proof of Proposition 3.1.5. Let G ⊂ R
I n be an open set. Prove that
the family
{F ⊂ R
I m : G × F ∈ B(I
Rn × R
I m )}
is a σ-algebra in R
I m.
I n ×R
I m . Thus, the family contains all
For any open set F ⊂ R
I m , Cartesian product G × F is open in R
open sets and, therefore, is not empty. The properties of σ-algebra follow then from simple identities,
G × F � = G × (I
Rm − F ) = G × R
Im −G×F
�
�
G×
Fn = (G × Fn )
n
n
Exercise 3.1.6 Let X be a set, S ⊂ PX a σ-algebra of sets in X, and y a specific element of X. Prove that
function
µ(A) :=
is a measure on S.
�
if y ∈ A
otherwise
1
0
�∞
Obviously, µ ≡
/ ∞. Let An ∈ S, i = 1, 2, . . ., be a family of pairwise disjoint sets. If y ∈
/ n=1 An
�∞
�∞
�∞
then y ∈
/ An , ∀n, and, therefore, 0 = µ( n=1 An ) = n=1 0. If y ∈ n=1 An then there must be
exactly one m such that y ∈ Am . Consequently,
1 = µ(
∞
�
An ) = µ(Am ) =
n=1
3.2
∞
�
µ(An )
n=1
Construction of Lebesgue Measure in R
In
Exercises
Exercise 3.2.1 Let F1 , F2 ∈ R
I n be two disjoint closed sets, not necessarily bounded. Construct open sets
G1 , G2 such that
Fi ⊂ Gi , i = 1, 2
and
G 1 ∩ G2 = ∅
Obviously, F1 ⊂ F2� . Since F2� is open, for every x ∈ F1 , there exists a ball B(x, �x ) ⊂ F2� . Define
G1 =
�
x∈F1
B(x,
�x
)
2
Being a union of opens sets, G1 is open. Construct G2 in the same way. We claim that G1 , G2 are
disjoint. Indeed, let y ∈ G1 ∩ G2 . It follows then from the construction that there exist x1 ∈ F1 and
86
APPLIED FUNCTIONAL ANALYSIS
x2 ∈ F2 such that y ∈ B(xi ,
�x i
2
SOLUTION MANUAL
), i = 1, 2. Without any loss of generality, assume that �x1 ≥ �x2 . Let
d be the Euclidean metric. Then,
d(x1 , x2 ) ≤ d(x1 , y) + d(x2 , y) ≤
�x
� x1
+ 2 ≤ �x 1
2
2
which implies that x2 ∈ B(x1 , �x1 ). Since x2 ∈ F2 , this contradicts the construction of G1 .
Exercise 3.2.2 Complete proof of Proposition 3.2.4.
Prove that the following families of sets coincide with Lebesgue measurable sets in R
I n.
(ii) {J ∪ Z : J is Fσ -type, m∗ (Z) = 0},
(iii) S(B(I
Rn ) ∪ {Z : m∗ (Z) = 0}).
Let E be a Lebesgue measurable set. According to Proposition 3.2.3, for every i, there exists a closed
subset Fi of E such that
m∗ (E − Fi ) ≤
Define H =
�∞
i=1
Fi ⊂ E. As the sets E −
�n
i=1
Fi form an increasing sequence, we have,
m∗ (E − H) = m(E − H) = lim m(E −
n→∞
1
i
n
�
i=1
Fi ) ≤ lim m(E − Fn ) = 0
n→∞
Consequently, Z := E − H is of measure zero, and E = H ∪ Z. Conversely, from the fact that Fi , Z
�∞
Rn ) is a σ-algebra
are Lebesgue measurable, it follows that i=1 Fi ∪Z must be measurable as well (L(I
containing open sets).
Since L(I
Rn ) is a σ-algebra that contains both Borel sets and sets of measure zero, it must also contain
the smallest σ-algebra containing the two families. Conversely, representation (ii) above proves that
every Lebesgue measurable set belongs to S(B(I
Rn ) ∪ {Z : m∗ (Z) = 0}).
3.3
The Fundamental Characterization of Lebesgue
Measure
Exercises
Exercise 3.3.1 Follow the outlined steps to prove that every linear isomorphism g : R
In →R
I n is a compoλ
.
sition of simple isomorphisms gH,c
Lebesgue Measure and Integration
87
/ H. Show
Step 1: Let H be a hyperplane in R
I n , and let a, b denote two vectors such that a, b, a − b ∈
λ
that there exists a unique simple isomorphism gH,c
such that
λ
(a) = b
gH,c
Hint: Use c = b − a.
Indeed, consider the decompositions
a = a0 + α(b − a),
a0 ∈ S, α ∈ R,
I
b = b0 + β(b − a),
b0 ∈ S, β ∈ R,
I
Subtracting the two representations from each other, we get
b − a = b0 − a0 + (β − α)(b − a)
which implies
b0 − a0 + (β − α − 1)(b − a) = 0
It follows from the assumption b − a ∈
/ H that the two terms are linearly independent. Conse/ S implies that α, β �= 0. Therefore, with λ = β/α,
quently, a0 = b0 . Also, assumption a, b ∈
we have
λ
λ
gH,c
(a) = gH,c
(a0 + α(b − a)) = b0 + β(b − a) = b
Step 2: Let g be a linear isomorphism in R
I n and consider the subspace Y = Y (g) such that g(x) = x on
Y . Assume that Y �= R
I n . Let H be any hyperplane containing Y . Show that there exist vectors
a, b such that
g(a) ∈
/H
and
b∈
/ H,
b − g(a) ∈
/ H,
b−a∈
/H
Make use then of the Step 1 result and consider simple isomorphisms g1 and h1 invariant on H
and mapping f (a) into b and b into a, respectively. Prove that
dimY (h1 ◦ g1 ◦ g) > dimY (g)
From the fact that g is an isomorphism follows that there must exist a vector a �= 0 for which
g(a) ∈
/ H. Denote c = g(a) ∈
/ H and consider the corresponding direct sum decomposition
R
I n = H ⊕ Rc
I
Let b be an arbitrary vector. Consider decompositions of vectors a and b corresponding to the
direct sum above
a = a0 + αc,
b = b0 + βc,
a0 , b0 ∈ H, α, β ∈ R
I
88
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Then
b − g(a) = b − c = b0 + (β − 1)c,
b − a = b0 − a0 + (β − α)c
By choosing any β ∈ R
I − {0, 1, α} we satisfy the required conditions. Let g1 , h1 be now the
simple isomorphisms invariant on H and mapping g(a) into b, and b into a, respectively. By
construction, g(a) �= a (otherwise, a ∈ Y (g) ⊂ H), and (h1 ◦ g1 ◦ g)(a) = a, so a ∈
Y (h1 ◦ g1 ◦ g). As h1 , g1 do not alter H, Y (g) � Y (h1 ◦ g1 ◦ g).
Step 3: Use induction to argue that after a finite number of steps m
dimY (hm ◦ gm ◦ . . . ◦ h1 ◦ g1 ◦ g) = n
Consequently,
hm ◦ gm ◦ . . . ◦ h1 ◦ g1 ◦ g = idR
In
Finish the proof by arguing that the inverse of a simple isomorphism is itself a simple isomorphism, too.
The induction argument is obvious. Also,
�
λ
gH,c
�−1
1
λ
= gH,c
Lebesgue Integration Theory
3.4
Measurable and Borel Functions
Exercises
Exercise 3.4.1 Let ϕ : R
In → R
Ī be a function such that dom ϕ is measurable (Borel). Prove that the
following conditions are equivalent to each other:
(i) ϕ is measurable (Borel).
(ii) {(x, y) ∈ dom ϕ × R
I : y ≤ ϕ(x)} is measurable (Borel).
(iii) {(x, y) ∈ dom ϕ × R
I : y > ϕ(x)} is measurable (Borel).
(iv) {(x, y) ∈ dom ϕ × R
I : y ≥ ϕ(x)} is measurable (Borel).
Lebesgue Measure and Integration
89
I n+1 into itself and it maps {y < g(x)}
For λ �= 0, function gλ (x, y) = x+λy is an isomorphism from R
into {λy < g(x)}. Choose any 0 < λn � 1. Then
{y ≤ g(x)} =
∞
�
n=1
{y < λ−1
n g(x)} =
∞
�
gλn ({y < g(x)})
n=1
Since a linear isomorphism maps measurable (Borel) sets into measurable (Borel) sets, each of the sets
on the right-hand side is measurable (Borel) and, consequently, their common part must be measurable
(Borel) as well.
Use the identity
{y < g(x)} =
∞
�
n=1
{y ≤ λn g(x)} =
∞
�
n=1
g λ1 ({y ≤ g(x)})
n
to demonstrate the converse statement.
The last two results follow from simple representations
{y < g(x)} = dom ϕ × R
I − {y ≥ g(x)},
{y ≤ g(x)} = dom ϕ × R
I − {y > g(x)}
Exercise 3.4.2 Prove Proposition 3.4.3.
Let g : R
In →R
I n be an affine isomorphism. Then a function ϕ : R
I n ⊃ dom ϕ → R
Ī is measurable
(Borel) if and only if the composition ϕ ◦ g is measurable (Borel).
Obviously, g ⊗ idR
I n ×R
I � (x, y) → (g(x), y) ∈ R
I n ×R
I is an affine isomorphism, too. The
I : R
assertion follows then from the identity
−1
{(x, y) ∈ g −1 (dom ϕ) × R
I : y < (ϕ ◦ g)(x)} = (g ⊗ idR
I : y < ϕ(z)}
I ) {(z, y) ∈ dom ϕ) × R
Exercise 3.4.3 Prove Proposition 3.4.4.
Let ϕi : E → R,
Ī i = 1, 2 and ϕ1 = ϕ2 a.e. in E. Then ϕ1 is measurable if an only if ϕ2 is measurable
Let E0 = {x ∈ E : ϕ1 (x) = ϕ2 (x)}. Then, for i = 1, 2,
{(x, y) ∈ E × R
I : y < ϕi (x)} = {(x, y) ∈ E0 × R
I : y < ϕi (x)} ∪Zi
�
��
� �
��
�
:=Si
:=S0
where Zi ⊂ (E − E0 ) × R
I are of measure zero. Thus, if S1 is measurable then S0 = S1 − Z1 is
measurable and, in turn, S2 = S0 + Z2 must be measurable as well.
90
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
3.5
Lebesgue Integral of Nonnegative Functions
3.6
Fubini’s Theorem for Nonnegative Functions
3.7
Lebesgue Integral of Arbitrary Functions
Exercises
Exercise 3.7.1 Complete proof of Proposition 3.7.1.
Let ai ∈ R,
Ī i ∈ IN . Suppose that ai = a1i − a2i , where a1i , a2i ≥ 0, and one of the series
∞
�
∞
�
a1i ,
1
is finite. Then the sum
�
IN ai exists and
�
∞
�
ai =
1
IN
Case 3:
�∞
a2i
1
a1i −
∞
�
a2i
1
�∞ −
2
a2i = ∞. From a−
i ≤ ai follows that
1 ai = ∞. By the same argument
�
+
ai < ∞. Consequently, the sum IN ai exists but it is infinite. So is the sum of the sums
of the two series.
1
�∞
1
a1i < ∞,
�∞
1
Exercise 3.7.2 Prove Corollary 3.7.1.
�
�
Ī The sum IN is finite if and only if IN |ai | is finite. In such a case
Let ai ∈ R.
�
ai =
∞
�
ai
1
IN
We have
−
|ai | = a+
i + ai
Consequently,
n
�
1
|ai | =
n
�
1
a+
i +
n
�
1
a−
i
Lebesgue Measure and Integration
91
If both sequences on the right-hand side have finite limits, so does the left-hand side, and the equality
�
�∞
holds in the limit. As the negative part of |ai | is simply zero, IN |ai | = 1 |ai |.
�∞
Conversely, assume that 1 |ai | is finite. By the Monotone Sequence Lemma, both sequences
�n + � n −
1 ai ,
1 ai converge. The equality above implies that both limits must be finite as well.
Finally, the equality
n
�
ai =
1
n
�
1
−
(a+
i − ai ) =
implies that the left-hand side converges and
∞
�
ai =
1
∞
�
1
a+
i −
∞
�
n
�
1
a+
i −
n
�
�
|ai |
a−
i =:
1
IN
a−
i
1
Exercise 3.7.3 Prove Corollary 3.7.2.
Let f : E → R
Ī be measurable and f = f1 + f2 , where f1 , f2 ≥ 0 are measurable. Assume that at
�
�
least one of the integrals f1 dm, f2 dm is finite. Then f is integrable and
�
�
�
f dm = f1 dm + f2 dm
Repeat proof of Proposition 3.7.1 (compare also Exercise 3.7.1), replacing sums with integrals.
Exercise 3.7.4 Prove Proposition 3.7.4.
The following conditions are equivalent to each other:
(i) f is summable.
�
�
(ii) f + dm, f − dm < +∞.
�
(iii) |f | dm < +∞.
(i) ⇒ (ii) We have
|
�
f dm| = |
�
f + dm −
�
f − dm|
If the left-hand side is finite then so is the right-hand side. This implies that both
� −
f dm must be finite as well.
(ii)⇒(iii) follows from
(iii)⇒(i) follows from
�
|f | dm =
�
(f + + f − ) dm =
|
�
f dm| ≤
�
�
f + dm −
|f | dm
Exercise 3.7.5 Prove Proposition 3.7.5.
All functions are measurable. The following properties hold:
�
f − dm
�
f + dm and
92
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
(i) f summable, E measurable ⇒ f |E summable.
(ii) f, ϕ : E → R,
Ī |f | ≤ ϕ a.e. in E, ϕ summable ⇒ f summable.
(iii) f1 , f2 : E → R
Ī summable ⇒ α1 F1 + α2 f2 summable for α1 , α2 ∈ R.
I
(i) follows from Proposition 3.7.4 and the monotonicity of measure which implies that
�
f dm for f ≥ 0.
�
E
f dm ≤
(ii) follows from Proposition 3.7.4 and monotonicity of measure.
(iii) follows from (ii) and inequality
|α1 f1 + α2 f2 | ≤ |α1 | |f1 | + |α2 | |f2 |
Exercise 3.7.6 Prove Theorem 3.7.2.
Im →R
Ī be summable (and Borel). Then the following properties
(Fubini’s Theorem): Let f : R
I n ×R
hold:
(i) y → f (x, y) is summable for almost all x (Borel for all x).
�
(ii) x → f (x, y) dm(y) is summable (and Borel).
�
�
� ��
(iii)
f dm =
f (x, y) dm(y) dm(x)
The result is a direct consequence of the Fubini’s theorem for nonnegative functions, definition of
the integral for arbitrary functions, Corollary 3.7.2 and Proposition 3.7.4. Summability of f implies
�
�
that both integrals f + dm and f − dm are finite. Applying the Fubini’s theorem for nonnega-
tive functions, we conclude all three statements for both positive and negative parts of f . Then use
Corollary 3.7.2 and Proposition 3.7.4.
3.8
Lebesgue Approximation Sums, Riemann Integrals
Exercises
Exercise 3.8.1 Consider function f from Example 3.8.1. Construct explicitly Lebesgue and Riemann approximation sums and explain why the first sum converges while the other does not.
Let
. . . < y−1 < y0 < y1 < . . . < yk−1 ≤ 2 < yk < . . . < yn−1 ≤ 3 < yn < . . .
Lebesgue Measure and Integration
93
be any partition of the real axis. It follows from the definition of the function that Ek = f −1 ([yk−1 , yk )) =
{irrational numbers} ∩ (0, 1) and that En = {rational numbers} ∩ (0, 1). Thus, m(Ek ) = 1 and
m(En ) = 0. All other sets Ei are empty. Consequently, the lower and upper Lebesgue sums reduce to
s = yk−1 m((0, 1)) = yk−1
and
S = yk
If supi (yi − yi−1 ) → 0, then both yk−1 , yk must converge simply to 2, and both Lebesgue sums
converge to the value of the Lebesgue integral equal to 2.
On the other side, if
0 = x0 < . . . < xk−1 < xk < . . . < xn = 1
is an arbitrary partition of interval (0, 1), then for each subinterval [xk−1 , xk ) we can choose either a
rational or irrational intermediate point ξk . If all intermediate points are irrational then f (ξk ) = 2 and
the Riemann sum is equal to
R=
n
�
k=1
n
�
f (ξk )(xk − xk−1 ) =
k=1
2(xk − xk−1 ) = 2
n
�
(xk − xk−1 ) = 2
k=1
Similarly, if all intermediate points are rational, the corresponding value of the Riemann sum is equal
3. For a Riemann integrable function, the Riemann sums must converge to a common value which
obviously cannot be the case.
Exercise 3.8.2 Let f : R
I n ⊃ D →R
Ī be a measurable (Borel) function. Prove that the inverse image
f −1 ([c, d)) = {x ∈ D : c ≤ f (x) < d}
is measurable (Borel), for any constants c, d.
It is sufficient to prove that
f −1 ((c, ∞)) = {x ∈ D : c < f (x)}
is a measurable (Borel) set in R
I n , for any constant c. Indeed, it follows from the identity,
f −1 ([c, ∞)) = f −1 (
∞
�
n=1
(c −
∞
�
1
1
, ∞)) =
f −1 ((c − , ∞))
n
n
n=1
that f −1 ([c, ∞)) is measurable. Similarly, the identity,
f −1 ([c, d)) = f −1 ([c, ∞) − [d, ∞)) = f −1 ([c, ∞)) − f −1 ([d, ∞))
implies that f −1 ([c, d)) must be measurable as well.
I n � x → (x, c) ∈ R
I n+1 is obviously continuous, and
Function ic : R
f −1 (c, ∞) = i−1
c ({(x, t) : x ∈ D, t < f (x)})
This proves the assertion for Borel functions, as the inverse image of a Borel set through a continuous
function is Borel. If f is only measurable then the Fubini’s Theorem (Generic Case) implies only that
94
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
set f −1 (c, ∞) is measurable a.e. in c. The critical algebraic property that distinguishes set {y < f (x)}
from an arbitrary set, is that,
f −1 ((c, ∞)) = f −1 (
�
cn �c
(cn , ∞)) =
�
cn �c
f −1 ((cn , ∞))
for any sequence cn � c. As f −1 ((c, ∞)) are measurable a.e. in c, we can always find a sequence
cn � c, for which sets f −1 ((cn , ∞)) are measurable. Consequently, f −1 ((c, ∞)) is measurable as
well.
Lp Spaces
3.9
Hölder and Minkowski Inequalities
Exercises
Exercise 3.9.1 Prove the generalized Hölder inequality:
��
�
�
�
� uvw� ≤ �u�p �v�q �w�r
�
�
where 1 ≤ p, q, r ≤ ∞, 1/p + 1/q + 1/r = 1.
In view of estimate:
|
�
uvw| ≤
�
|uvw| =
�
|u| |v| |w|
we can restrict ourselves to nonnegative, real-valued functions only. The inequality follows then from
the original Hölder’s result,
�
uvw ≤
��
u
p
�1/p ��
(v w)
p
p−1
� p−1
p
If 1/q+1/r = 1−1/p = (p−1)/p, then for q � = q(p−1)/p, r� = r(p−1)/p, we have 1/q � +1/r� = 1.
Consequently,
�
v
p
p−1
w
p
p−1
≤
��
(v
p
p−1
)
q�
� 1� ��
q
(w
p
p−1
)
r�
Combining the two inequalities, we get the final result.
� 1�
r
=
��
v
q
p
� q(p−1)
��
w
r
�
p
r ( p−1)
Lebesgue Measure and Integration
95
Exercise 3.9.2 Prove that the Hölder inequality
�
Ω
|f g| ≤
��
Ω
|f |
p
� p1 ��
Ω
|g|
q
� q1
1 1
+ = 1,
p q
,
p, q > 1
turns into an equality if and only if there exist constants α and β such that
α|f |p + β|g|q = 0
a.e. in Ω
It is sufficient to prove the fact for real-valued and nonnegative functions f, g ≥ 0, with unit norms,
i.e.,
�
�
p
f = 1,
Ω
gq = 1
Ω
The critical observation here is the fact that the inequality used in the proof of the Hölder inequality,
1
1
1
t+ v
p
q
1
tp vq ≤
turns into equality if and only if t = v. Consequently, for nonnegative functions u, v,
�
� �
1
1
1
1
p
q
u+ v−u v
= 0 implies u = v a.e. in Ω .
p
q
Ω
Substituting u = f p , v = g q we get the desired result.
Exercise 3.9.3
(i) Show that integral
�
is finite, but, for any � > 0, integral
�
1
2
0
1
2
0
dx
x ln2 x
dx
[x ln2 x]1+�
is infinite.
(ii) Use the property above to construct an example of a function f : (0, 1) → R
I which belongs to
space Lp (0, 1), 1 < p < ∞, but it does not belong to any Lq (0, 1), for q > p.
(i) Use substitution y = ln x to compute:
�
�
1
1
dx
dy
=− =−
=
y2
y
ln x
x ln2 x
Hence,
Define:
�
1
2
�
1
1
dx
+
.
=−
ln 1/2 ln �
x ln2 x
fn =

 x ln12 x
0
x ∈ [ n1 , 12 )
x ∈ (0, n1 )
96
APPLIED FUNCTIONAL ANALYSIS
Then fn � f =
1
x ln2 x
SOLUTION MANUAL
in (0, 12 ) and, therefore f is integrable with
�
fn �
�
f.
Raising function f to any power greater than 1 renders the resulting integral to be infinite. It
is sufficient to construct a lower bound for f whose integral is infinite. It follows easily from
applying the L’Hospital’s rule that
lim x� lnr x = 0
x→0
for an arbitrarily small � > 0 and large r > 0. Consequently, for any �, r, there exists a δ > 0
such that:
Consequently,
1
ln
for x < δ. Then,
with
�
1
x
x� lnr x ≤ 1
implies
x<δ
2(1+�)
x
≥ x�
1
1
2 1+� ≥ x
[x ln ]
= ∞.
(ii) Define, for instance,
f (x) =
�


1
x ln2 x
0
� p1
Notice that plenty of other examples are possible.
x ∈ (0, 12 )
x ∈ [ 12 , 1)
Exercise 3.9.4 Let fn , ϕ ∈ Lp (Ω), p ∈ [1, ∞) such that
(a) |fn (x)| ≤ ϕ(x) a.e. in Ω, and
(b) fn (x) → f (x) a.e. in Ω.
Prove that
(i) f ∈ Lp (Ω), and
(ii) �fn − f �p → 0.
(i) We have: |fn (x)|p ≤ |ϕ(x)|p ,
Convergence Theorem,
�
|fn (x)|p → |f (x)|p and, therefore, by Lebesgue Dominated
p
|fn (x)| →
�
p
|f (x)| ≤
�
|ϕ|p
(ii) It is sufficient to show a summable function dominating |fn − f |p , e.g.,
|fn − f |p ≤ (|fn | + |f |)p ≤ (2|ϕ|)p
Note that the theorem is false for p = ∞. Counterexample:
�
1 x ∈ (0, n1 )
fn (x) =
0 x ∈ [ n1 , 1)
Functions fn → 0 a.e. in (0, 1), are bounded by unity, but �fn − 0�∞ = 1.
Lebesgue Measure and Integration
97
Exercise 3.9.5 In the process of computing the inverse of the Laplace transform,
f¯(s) =
we need to show that the integral
�
1
s−a
est
ds
s−a
over the semicircle shown in Fig. 3.1 vanishes as R → ∞. Use parametrization
s = γ + Reiθ = γ + R(cos θ + i sin θ),
π 3π
)
θ∈( ,
2 2
to convert the integral to a real integral over interval ( π2 , 3π
2 ), and use the Lebesgue Dominated Convergence Theorem to show that this integral vanishes as R → ∞ (you can think of R as an integer).
We have,
|
�
est
ds| ≤
s−a
=
=
�
�
�
3π
2
π
2
3π
2
π
2
3π
2
π
2
|est | ds
| |dθ
|s − a| dθ
�
�
e(γ+R cos θ)t
(γ − a + R cos θ)2 + (R sin θ)2
e(γ+R cos θ)t
((γ − a)/R + cos θ)2 + sin2 θ)
Rdθ
dθ
For θ ∈ (π/2, 3π/2), the denominator converges to one, whereas the numerator converges to zero
(exponential with a negative exponent), as R → ∞. Consequently the integrand converges a.e. to
zero. In order to apply the Lebesgue Dominated Convergence Theorem, we need to show only that the
integrand is dominated by an integrable function, for all R. The numerator is bounded by eγt . For the
denominator, we have
�2
�
(γ − a)2
2(γ − a)
γ−a
+ cos θ + sin2 θ =
cos θ + 1
+
2
R
R
R
2(γ − a)
(γ − a)2
+1
−
≥
R2
R
�2
�
γ−a
−1
=
R
Thus, for sufficiently large R, the denominator is bounded below by a positive number (independent of
angle θ).
98
APPLIED FUNCTIONAL ANALYSIS
Figure 3.1
Contour integration for the inversion of the Laplace transform.
SOLUTION MANUAL
4
Topological and Metric Spaces
Elementary Topology
4.1
Topological Structure—Basic Notions
Exercises
Exercise 4.1.1 Let A, B ⊂ P(X) be two arbitrary families of subsets of a nonempty set X. We define the
−
trace A ∩ B of families A and B as the family of common parts
−
A ∩ B := {A ∩ B : A ∈ A, B ∈ B}
−
By analogy, by the trace of a family A on a set C, denoted A ∩ C, we understand the trace of family
A and the single set family {C}
−
−
A ∩ C := A ∩ {C} = {A ∩ C : A ∈ A}
Prove the following simple properties:
−
−
−
−
(i) A � C, B � D
⇒
A ∩ B � C ∩ D.
(ii) A ∼ C, B ∼ D
⇒
A ∩ B ∼ C ∩ D.
(iii) A ⊂ P(C)
⇒
−
A ∩ C = A.
−
−
(iv) A � B
⇒
A ∩ C � B ∩ C.
(v) B ⊂ C
⇒
A ∩ B � A ∩ C.
(vi) A ⊂ P(C)
−
⇒
−
−
(A � B ⇔ A � B ∩ C).
These very simple properties are direct consequences of definitions.
(i) Let C ∈ C, D ∈ D. It follows from the assumptions that there exists A ∈ A and B ∈ B such that
A ⊂ C and B ⊂ D. Consequently, (A ∩ B) ⊂ (C ∩ D).
99
100
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
(ii) Follows from (i) and the definition of equivalent families of sets.
(iii) If A ⊂ C then A ∩ C = A.
(iv) A ⊂ B implies (A ∩ C) ⊂ (B ∩ C).
(v) B ⊂ C implies (A ∩ B) ⊂ (A ∩ C).
(vi) A ⊂ C and A ⊂ B imply A ⊂ (B ∩ C).
Exercise 4.1.2 Let A ⊂ P(X) and B ⊂ P(Y ) denote two arbitrary families of subsets of X and Y , respectively, and let f : X → Y denote an arbitrary function from X into Y . We define the (direct)
image of family A by function f , and the inverse image of family B by function f by operating simply
on members of the families,
f (A) := {f (A) : A ∈ A}
f −1 (B) := {f −1 (B) : B ∈ B}
Prove the following simple properties:
(i) A � B
(ii) A ∼ B
(iii) C � D
(iv) C ∼ D
⇒
f (A) � f (B).
⇒
f (A) ∼ f (B).
⇒
f −1 (C) � f −1 (D).
⇒
f −1 (C) ∼ f −1 (D).
(v) Let domain of function f be possibly only a subset of X. Then f (A) � C
f −1 (C).
⇔
−
A ∩ domf �
The results are simple consequences of definitions and properties of direct and inverse image.
(i) A ⊂ B implies f (A) ⊂ f (B).
(ii) follows from (i) and the notion of equivalence.
(iii) C ⊂ D implies f −1 (C) ⊂ f −1 (D).
(iv) follows from (iii).
(v) f (A) ⊂ C is equivalent to A ∩ domf ⊂ f −1 (C).
Exercise 4.1.3 Let X = {w, x, y, z}. Determine whether or not the following classes of subsets of X satisfy
the axioms for open sets and may be used to introduce a topology in X (through open sets).
X1 = {X, ∅, {y, z}, {x, y, z}}
X2 = {X, ∅, {w}, {w, x}, {w, y}}
X3 = {X, ∅, {x, y, z}, {x, y, w}, {x, y}}
The first and the third one do, the second one does not. In the second case, {w, x, y} = {w, x}∪{w, y}
does not belong to X2 which violates the condition that union of open sets must be open.
Topological and Metric Spaces
101
Exercise 4.1.4 The class X = {X, ∅, {a}, {b}, {a, b}, {a, b, c}, {a, b, d}} satisfies axioms∗ for open sets in
X = {a, b, c, d}
(i) Identify the closed sets in this topology.
(ii) What is the closure of {a}, of {a, b}?
(iii) Determine the interior of {a, b, c} and the filter (system) of neighborhoods of b.
The corresponding family of closed sets may be determined by taking complements of open sets:
{∅, X, {b, c, d}, {a, c, d}, {c, d}, {d}, {c}}
Closure of a set A is the smallest closed set containing A and interior of a set A is the largest open set
contained in A (compare Exercise 4.1.7). Consequently,
{a} = {a, c, d},
{a, b} = X,
int{a, b, c} = {a, b, c}
Base of open neighborhoods of b includes the singleton of b. Consequently, all subsets of X containing
b are neighborhoods of the point.
Exercise 4.1.5 Let A ⊂ P(X) be a family of subsets of a set X. Prove that the following conditions are
equivalent to each other:
(i) ∀A, B ∈ A
−
∃C ∈ A : C ⊂ A ∩ B (condition for a base).
(ii) A � A ∩ A.
−
(iii) A ∼ A ∩ A.
(See Exercise 4.1.1 for notation.)
Equivalence of (i) and (ii) is a simple consequence of definitions. Equivalence of (ii) and (iii) follows
−
from the trivial observation that A ∩ A � A.
Exercise 4.1.6 Let X be a topological space. We say that a point x is a cluster point of set A if
N ∩ A �= ∅,
for every neighborhood N of x
Show that point x is a cluster point of A if and only if it belongs to its closure: x ∈ A.
Let x be a cluster point of A. If x is a member of A then x belongs to its closure as well. Suppose
x∈
/ A. Then, N ∩ A − {x} = N ∩ A, for every neighborhood N of x, and the condition above implies
that x is an accumulation (limit) point of A. Consequently, x ∈ A.
Conversely, let x ∈ A. If x ∈ A then N ∩ A � x and the condition for the cluster point is trivially
satisfied. If x ∈
/ A then, again, N ∩ A − {x} = N ∩ A, which implies that N ∩ A is non empty.
∗ Frequently,
we simply say that the class is a topology in X.
102
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Exercise 4.1.7 Let X be an arbitrary topological space and A ⊂ X an arbitrary set. Show that intA is the
largest open subset of set A, and that closure A is the smallest closed superset of A.
Let G be an open subset of A. Then G = intG ⊂ intA. Consequently,
�
{G open ⊂ A}
is the largest open subset of A and it is contained in intA. As the intA is itself an open subset of A,
the two sets must be equal.
By the same argument,
A=
is the smallest closed set containing set A.
�
{F closed ⊃ A}
Exercise 4.1.8 We say that a topology has been introduced in a set X through the operation of interior, if we
have introduced operation (of taking the interior)
P(X) � A → int∗ A ∈ P(X)
with
int∗ A ⊂ A
that satisfies the following four properties:
(i) int∗ X = X
(ii) A ⊂ B implies int∗ A ⊂ int∗ B
(iii) int∗ (int∗ A) = int∗ A
(iv) int∗ (A ∩ B) = int∗ A ∩ int∗ B
Sets G such that int∗ G = G are identified then as open sets.
1. Prove that the open sets defined in this way satisfy the usual properties of open sets (empty set
and the whole space are open, unions of arbitrary families, and intersections of finite families of
open sets are open).
2. Use the identified family of open sets to introduce a topology (through open sets) in X and
consider the corresponding interior operation int with respect to the new topology. Prove then
that the original and the new operations of taking the interior coincide with each other, i.e.,
int∗ A = intA
for every set A.
3. Conversely, assume that a topology was introduced by open sets X . The corresponding operation
of interior satisfies then properties listed above and can be used to introduce a (potentially different) topology and corresponding (potentially different) open sets X � . Prove that families X and
X � must be identical.
Topological and Metric Spaces
103
1. By assumption, int∗ ∅ ⊂ ∅. Conversely, the empty set is a subset of every set, so int∗ ∅ = ∅. The
whole space X is open by axiom (i). Let Gι , ι ∈ I, be now an arbitrary family of sets such that
int∗ Gι = Gι . By assumption,
int∗
�
ι∈I
Gι ⊂
Conversely,
Gι =
ι∈I
�
Gκ ,
int∗ Gι ⊂ int∗
�
Gι ⊂
Axiom (ii) implies that,
and, consequently,
�
�
ι∈I
�
int∗ Gι
ι∈I
∀ι ∈ I
κ∈I
Gκ ,
κ∈I
int∗ Gι ⊂ int∗
�
∀ι ∈ I
Gκ
κ∈I
Finally, by induction, axiom (iv) implies that interior of a common part of a finite family of sets
is equal to the common part of their interiors.
2. Let A be an arbitrary set. By axiom (iii), int∗ A ∈ X and, by definition, int∗ A ⊂ A. Recall
(comp. Exercise 4.1.7) that the interior of a set is the largest open subset of the set. As int∗ A
is open and contained in A, we have intA ⊂ int∗ A. On the other side, since int∗ A ∈ X ,
int(int∗ A) = int∗ A. Consequently, int∗ A = int(int∗ A) ⊂ intA since int∗ A ⊂ A.
3. This is trivial. Open sets A in both families are identified through the same property: int∗ A = A.
Exercise 4.1.9 We say that a topology has been introduced in a set X through the operation of closure, if we
have introduced operation (of taking closure)
P(X) � A → clA ∈ P(X)
with
A ⊂ clA
that satisfies the following four properties:
(i) cl∅ = ∅
(ii) A ⊂ B implies clA ⊂ clB
(iii) cl(clA) = clA
(iv) cl(A ∪ B) = clA ∪ clB
Sets F such that clF = F are identified then as closed sets.
1. Prove that the closed sets defined in this way satisfy the usual properties of closed sets (empty set
and the whole space are closed, intersections of arbitrary families, and unions of finite families of
closed sets are closed).
104
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
2. Define open sets X by taking complements of closed sets. Notice that the duality argument
implies that family X satisfies the axioms for the open sets. Use then family X to introduce a
topology (through open sets) in X. Consider next the corresponding closure operation A → A
with respect to the new topology. Prove then that the original and the new operations of taking
the closure coincide with each other, i.e.,
clA = A
for every set A.
3. Conversely, assume that a topology was introduced by open sets X . The corresponding operation
of closure satisfies then properties listed above and can be used to introduce a (potentially different) topology and corresponding (potentially different) open sets X � . Prove that families X and
X � must be identical.
1. By assumption, X ⊂ clX ⊂ X so clX = X. The empty set is closed by axiom (i). Let Fι , ι ∈ I,
be now an arbitrary family of sets such that clFι = Fι . By assumption,
cl
�
ι∈I
Fι ⊃
�
Fι =
ι∈I
�
clFι
ι∈I
Conversely,
Fι ⊃
�
∀ι ∈ I
Fκ ,
κ∈I
Axiom (ii) implies that,
Fι ⊃ cl
�
Fκ ,
κ∈I
∀ι ∈ I
and, consequently,
�
ι∈I
Fι ⊃ cl
�
Fκ
κ∈I
Finally, by induction, axiom (iv) implies that closure of a union of a finite family of sets is equal
to the union of their closures.
2. Let A be an arbitrary set. Recall (compare Exercise 4.1.7) that A is the smallest closed set that
contains A. By axiom (iii), clA is closed and it contains A, so A ⊂ clA.
On the other side, A is a closed set which means that clA = A. But then, by axiom (ii), A ⊂ A
implies that clA ⊂ clA = A.
3. This is trivial. Closed sets F in both families are identified through the same property: F =
clF = F .
Topological and Metric Spaces
4.2
105
Topological Subspaces and Product Topologies
Exercises
Exercise 4.2.1 Let A ⊂ P(X), B ⊂ P(X) be families of subsets of X and Y , respectively. The Cartesian
product of families A and B is defined as the family of Cartesian products of sets from A and B
−
A × B := {A × B : A ∈ A, B ∈ B}
Prove the following properties:
−
−
−
−
(i) A � B, C � D
⇒
A × C � B × D.
(ii) A ∼ B, C ∼ D
⇒
A × C ∼ B × D.
−
−
(iii) (f × g)(A × B) = f (A) × g(B).
(i) Let B ∈ B, D ∈ D. There exists A ∈ A such that A ⊂ B. Similarly, there exists C ∈ C such
that C ⊂ D. Thus (A × C) ⊂ (B × D).
(ii) follows from (i) and the notion of equivalent families.
(iii) follows directly from definitions of Cartesian product of two functions and Cartesian product of
two families.
Exercise 4.2.2 Recall the topology introduced through open sets
X = {X, ∅, {a}, {b}, {a, b}, {a, b, c}, {a, b, d}}
on a set X = {a, b, c, d} from Exercise 4.1.4.
1. Are the sets {a} and {b} dense in X?
2. Are there any other sets dense in X?
3. Is the space X separable? Why?
1. No, {a} = {a, c, d} � X and {b} = {b, c, d} � X.
2. Yes, {a, b}, {a, b, c}, {a, b, d} and X itself.
3. Sure. X is finite and therefore countable. Every set is dense in itself so X is separable. Nothing
needs to be checked.
106
4.3
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Continuity and Compactness
Exercises
Exercise 4.3.1 Let X = {1, 2, 3, 4} and consider the topology on X given by a family of open sets,
X = {X, ∅, {1}, {2}, {1, 2}, {2, 3, 4}}
Show that the function f : X → X given by
f (1) = 2,
f (2) = 4,
f (3) = 2,
f (4) = 3
is continuous at 4, but not at 3.
There are only two neighborhoods of 3: set {2, 3, 4} and the the whole space X. For X, the continuity
condition is trivially satisfied (f (X) ⊂ X), and for the first neighborhood we select the very same set
to satisfy the continuity condition,
f ({2, 3, 4}) = {2, 3, 4}
To prove discontinuity at 3, select the singleton {2}, one of neighborhoods of f (3) = 2. For f to be
continuous, the inverse image f −1 ({2}) = {2} would have to be a (not necessarily open) neighborhood
of 3. But this implies that {3} would have to include an open neighborhood of 3. As the singleton of a
point is the smallest possible neighborhood of the point, {3} should be on the list of open sets, and it
is not. Thus, function f is discontinuous at 3.
Exercise 4.3.2 Let f : X → Y . Prove that f is continuous iff f −1 (B) ⊂ f −1 (B) for every B ⊂ Y .
Recall that f is continuous iff the inverse image of every closed set is closed. Assume that f is continuous and pick an arbitrary set B ⊂ Y . The closure B is closed, so the inverse image f −1 (B) must be
closed. It also contains f −1 (B). Since the closure of a set is the smallest closed set including the set,
we have
f −1 (B) ⊂ f −1 (B) (closed)
Conversely, assume that the condition is satisfied. Pick any closed set B = B ⊂ Y . Then
f −1 (B) ⊂ f −1 (B) = f −1 (B)
which implies that set f −1 (B), being equal to its closure, is closed.
Exercise 4.3.3 Let f be a function mapping X into Y . Prove that f is continuous iff f (A) ⊂ f (A) for every
A ⊂ X.
Topological and Metric Spaces
107
We will show that the condition is equivalent to the condition studied in Exercise 4.3.2, i.e.,
f (A) ⊂ f (A) ∀A ⊂ X
⇔
f −1 (B) ⊂ f −1 (B) ∀B ⊂ Y
(⇒) Set A = f −1 (B). Then
f (f −1 (B)) ⊂ f f −1 (B) = B
Taking the inverse image on both sides, we get
f −1 (B) = f −1 f (f −1 (B)) ⊂ f −1 (B)
(⇐) Set B = f (A). Then
A ⊂ f −1 (f (A)) ⊂ f −1 (f (A))
Taking the direct image on both sides, we get
f (A) ⊂ f f −1 (f (A)) = f (A)
Exercise 4.3.4 Let X1 and X2 be two topologies on a set X and let I : (X, X1 ) → (X, X2 ) be the identity
function. Show that I is continuous if and only if X1 is stronger than X2 ; i.e., X2 ⊂ X1 .
I is continuous iff the inverse image of every open set is open. But I −1 (A) = A, so this is equivalent
to X2 ⊂ X1 .
Exercise 4.3.5 Show that the constant function f : X → X,
f (x) = c, is continuous relative to any
topology on X.
Inverse image of any set through the constant function is equal to either an empty set or the whole
space. Both set are always closed (and open as well). So the inverse image of any open (closed) set, is
open (closed).
Exercise 4.3.6 Explain why every function is continuous at isolated points of its domain with respect to any
topology.
Recall that a point a ∈ A is an isolated point of set A, if there exists a neighborhood B of a such
that B ∩ A = {a}. The definition of the topological subspace implies then that singleton {a} is a
neighborhood of point a in the topological subspace. By definition, function f defined on a domain
domf ⊂ X is continuous if it is continuous on its domain with the topology inherited from X. If,
in this topology, the singleton of a is a neighborhood of a then that continuity condition is trivially
satisfied since
f ({a}) = {f (a)} ⊂ B
for any neighborhood B of f (a).
108
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Exercise 4.3.7 We say that bases A and B are adjacent if
A ∩ B �= ∅
∀A ∈ A, B ∈ B
¯ B is a base.
Verify that bases A and B are adjacent iff A∩
¯ B is a base then A ∩ B �= ∅, for every A, B.
All elements of a base are nonempty. Thus, if A∩
¯ B. As both A and B are
Conversely, assume that A ∩ B �= ∅ ∀A, B. Pick A1 ∩ B1 , A2 ∩ B2 ∈ A∩
bases, there exist A3 ∈ A and B3 ∈ B such that A3 ⊂ A1 ∩ A2 and B3 ⊂ B1 ∩ B2 . This implies that
A3 ∩ B3 ⊂ (A1 ∩ B1 ) ∩ (A2 ∩ B2 )
Analogously, we say that base A is adjacent to set C if
A ∩ C �= ∅
∀a ∈ A
¯ C is a base.
Verify that base B is adjacent to set C iff family A∩
This follows from the previous assertion as the singleton {C} is a base.
Prove the following simple properties:
(i) A is adjacent to C, A � B
⇒
B is adjacent to C.
Let B ∈ B. There exists A ∈ A such that A ⊂ B. Thus ∅ �= A ∩ C ⊂ B ∩ C.
(ii) If family A is a base then (family f (A) is a base iff A is adjacent to domf ). (⇒) Assume, in
contrary, that A ∩ domf = ∅, for some A ∈ A. Then f (A) = f (∅) = ∅ which contradicts the
fact that f (A) is a base.
(⇐) By the same argument, if A ∩ domf �= ∅ then f (A) �= ∅.
(iii) Family C being a base implies that (f −1 (C) is a base iff C is adjacent to range f (X)).
Use the same reasoning as above but with the inverse image in place of direct image.
¯ C, B ×
¯ D are
(iv) If families A, B are adjacent and families C, D are adjacent then families A ×
adjacent as well. This trivially follows from the definitions.
Exercise 4.3.8 Prove that the image of a connected set through a continuous function is connected as well.
It is sufficient to prove that if X is connected and f : X → Y is a continuous surjection, then Y
is connected. Assume to the contrary that there exists a nontrivial open partition G1 , G2 of Y . The
continuity of f implies then that f −1 (Gi ), i = 1, 2, is a nontrivial open partition of X, a contradiction.
Exercise 4.3.9 Prove that a set E ⊂ R
I is connected if and only if E is an interval (may be closed or open on
either end).
Let A be a connected subset of R.
I Let a = inf E, b = sup E. It is sufficient to show that (a, b) ⊂ E.
Suppose to the contrary that there exists a c ∈ (a, b) such that c ∈
/ E. But then (−∞, c)∩E, (c, ∞)∩E
is a nontrivial (relatively) open partition of E, a contradiction.
Topological and Metric Spaces
109
Conversely, let I be an interval. Assume to the contrary that there exists a nonempty subset A of I,
different from I, that is simultaneously open and closed in I. Consider a family of intervals J contained
in set A,
F := {J : J ⊂ A} .
As A is open in I, family F must be nonempty. F is partially ordered by inclusion and, by Kuratowski–
Zorn Lemma (explain, why?), must have a maximal element J. Assume that sup J < sup I. Then,
since A is closed in I, x = sup J must be an element of A. But then, since A is also open in I, there
exists an interval (x − �, x + �) ⊂ A. We can add the interval to J and obtain a larger subinterval of set
A which contradicts the fact that J is maximal. Therefore, we must have sup J = sup I. By the same
argument, inf J = inf I and, therefore, set A may be only missing the endpoints of I. Clearly, I − A
is not open in I then, so A cannot be closed in I, a contradicttion.
νn νn +1
Exercise 4.3.10 Let G1 , G2 ⊂ R
I n be two disjoint open sets. Let σ = [ 2νk1 , ν12+1
) be a
k ) × . . . × [ 2k ,
2k
cube such that σ̄ ⊂ G1 ∪ G2 , i.e., σ ∈ Sk (G1 ∪ G2 ). Prove that σ belongs to the partition of one and
only one of sets Gi , i = 1, 2.
Assume, to the contrary, that σ̄ intersects both sets. This implies that there exist points x1 ∈ σ̄ ∩
G1 , x2 ∈ σ̄ ∩ G2 . Consequently, segment [x1 , x2 ] ⊂ σ̄ ⊂ G1 ∪ G2 . As G1 and G2 are two disjoint
open sets, segment [x1 , x2 ] can be partitioned into two (relatively) open subsets Gi ∩ [x1 , x2 ]. This
I through a continuous
contradicts the fact that segment [x1 , x2 ] as the image of an interval I ⊂ R
function, must be connected, compare Exercises 4.3.8 and 4.3.9.
4.4
Sequences
Exercises
Exercise 4.4.1 Let Φ be a family of subsets of IN of the form
{n, n + 1, . . .},
n ∈ IN
1. Prove that Φ is a base (the so-called fundamental base in IN ).
2. Characterize sets from filter F(Φ).
3. Let an be a sequence in a topological space X. Prove that the following conditions are equivalent
to each other:
(i) an → a0 in X
110
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
(ii) a(Φ) � Ba0 , and
(iii) a(F (Φ)) � Fa0
where Ba0 and Fa0 are the base and filter of neighborhoods of point a0 ∈ X, respectively.
1. Every nonempty decreasing family of nonempty sets is a base.
2. Set C ∈ F(Φ) if there exists a natural number N such that n ∈ C, ∀n ≥ N .
3. By the definition, an → a0 iff a(Φ) � Fa0 . Equivalence with (ii) and (iii) follows then directly
from properties of a base and filter generated by the base.
Exercise 4.4.2 Let X be a topological space and an a sequence in X. Let Φ be the fundamental base in IN
from the previous exercise and a : IN � n → an ∈ X an arbitrary sequence in X. Prove that the
following conditions are equivalent to each other:
1. a(Φ) � Ba0
2. (a ◦ α)(Φ) � Ba0 , for every injection α : IN → IN
where Ba0 is base of neighborhoods of some point a0 ∈ X (sequence an converges to a0 iff its every
subsequence converges to a0 ).
(i) ⇒ (ii) follows from the fact that α(Φ) � Φ.
(ii) ⇒ (i). Assume to the contrary that there exists a neighborhood B ∈ Ba0 such that
a({n, n + 1, . . .}) = {an , an+1 , . . .} �⊂ B
∀n
/ B which contradicts (ii).
Then, for every n, there exists an element ank ∈
Exercise 4.4.3 Let xn be a sequence in a topological space X and let Φ denote the fundamental base in IN
(recall Exercise 4.4.1). Prove that the following conditions are equivalent to each other:
1. x0 is a cluster point of xn
2. bases x(Φ) and Bx0 are adjacent
By definition, x0 is a cluster point of xn if every neighborhood B of x0 contains an infinite number of
elements of the sequence, i.e.,
B ∩ x({n, n + 1, . . .}) ≠ ∅,   ∀n
which is exactly the second condition.
Exercise 4.4.4 Let X, Y be Hausdorff topological spaces, and f : X → Y an arbitrary function. Prove that
the following conditions are equivalent to each other.
1. Function f is sequentially continuous (everywhere).
2. For every sequentially open set G ⊂ Y , its inverse image f −1 (G) is sequentially open in X.
3. For every sequentially closed set F ⊂ Y , its inverse image f −1 (F ) is sequentially closed in X.
(1 ⇒ 3)
Let F ⊂ Y be an arbitrary sequentially closed set. Let f −1 (F ) � xn → x be a convergent sequence of
elements from f −1 (F ). Since f is sequentially continuous, f (xn ) → f (x) and, since F is sequentially
closed, f (x) ∈ F which implies that x ∈ f −1 (F ).
(2 ⇒ 1)
Let xn → x in X. We need to show that f (xn ) → f (x) in Y . Let G be an arbitrary open (and,
therefore, sequentially open) neighborhood of f (x). Then x ∈ f −1 (G). Since f −1 (G) is sequentially
open, there exists an N such that xn ∈ f −1 (G), for n > N . But this implies that f (xn ) ∈ G, for
n > N , i.e. f (xn ) → f (x).
Equivalence of second and third conditions follows immediately from the duality principle (Prop. 4.4.2).
Exercise 4.4.5 Let X be a sequential Hausdorff topological space, and f : X → Y a sequentially continuous
function into an arbitrary topological space Y . Prove that f is continuous.
Let G ⊂ Y be an arbitrary open and, therefore, sequentially open set in Y . By Exercise 4.4.4, f −1 (G)
is sequentially open in X. But X is a sequential space, so f −1 (G) must be open. This proves that f is
(globally) continuous.
Exercise 4.4.6 Let X be a topological space such that, for every function f : X → Y (with arbitrary topo-
logical space Y ), sequential continuity of f implies its continuity. Prove that X must be a sequential
topological space.
1. Let Y = X with a new topology introduced through open sets X1 where for the open sets X1 we
take all sequentially open sets in X. Prove that such sets satisfy the axioms for open sets.
• Empty set and the space X are open and, therefore, sequentially open as well.
• Union of an arbitrary family Gι, ι ∈ I, of sequentially open sets is sequentially open. Indeed, let x ∈ ∪_{ι∈I} Gι. There exists then κ ∈ I such that x ∈ Gκ. Let xn → x. As Gκ is sequentially open, there exists N such that, for n > N, xn ∈ Gκ ⊂ ∪_{ι∈I} Gι. Done.
• Intersection of a finite number of sequentially open sets is sequentially open. Indeed, let
xn → x and x ∈ G1 ∩G2 ∩. . .∩Gm . Let Ni , i = 1, . . . , m be such that xn ∈ Gi for n > Ni .
Take N = max{N1 , . . . , Nm } to see that, for n > N , xn ∈ G1 ∩ . . . ∩ Gm .
2. Take Y = X with the new, stronger (explain, why?) topology induced by X1 and consider the
identity map idX mapping original topological space (X, X ) onto (X, X1 ). Prove that the map is
sequentially continuous.
Let xn → x in the original topology. We need to prove that xn → x in the new topology as well. Take any open neighborhood G ∈ X1 of x. By construction, G is sequentially open in the original topology, so there exists N such that, for n > N, xn ∈ G. Due to the arbitrariness of G, this proves that xn → x in the new topology.
3. The identity map is, therefore, continuous as well. But this means that
G ∈ X1   ⇒   id_X^{-1}(G) = G ∈ X
i.e., every sequentially open set in X is automatically open.
4.5
Topological Equivalence. Homeomorphism
Theory of Metric Spaces
4.6
Metric and Normed Spaces, Examples
Exercises
Exercise 4.6.1 Let (X, d) be a metric space. Show that
ρ(x, y) = min{1, d(x, y)}
is also a metric on X.
Symmetry and positive definiteness of d(x, y) imply symmetry and positive definiteness of ρ(x, y). Perhaps the easiest way to see that the triangle inequality is satisfied is to consider the eight possible cases:
1: d(x, y) < 1, d(x, z) < 1, d(z, y) < 1. Then
ρ(x, y) = d(x, y) ≤ d(x, z) + d(z, y) = ρ(x, z) + ρ(z, y).
2: d(x, y) < 1, d(x, z) < 1, d(z, y) ≥ 1. Then
ρ(x, y) = d(x, y) < 1 = ρ(z, y) ≤ ρ(x, z) + ρ(y, z).
3: d(x, y) < 1, d(x, z) ≥ 1, d(z, y) < 1. Then
ρ(x, y) = d(x, y) < 1 = ρ(x, z) ≤ ρ(x, z) + ρ(y, z).
4: d(x, y) < 1, d(x, z) ≥ 1, d(z, y) ≥ 1. Then
ρ(x, y) = d(x, y) < 1 < 2 = ρ(x, z) + ρ(y, z).
5: d(x, y) ≥ 1, d(x, z) < 1, d(z, y) < 1. Then
ρ(x, y) = 1 ≤ d(x, y) ≤ d(x, z) + d(z, y) = ρ(x, z) + ρ(z, y).
6: d(x, y) ≥ 1, d(x, z) < 1, d(z, y) ≥ 1. Then
ρ(x, y) = 1 = ρ(z, y) ≤ ρ(x, z) + ρ(y, z).
7: d(x, y) ≥ 1, d(x, z) ≥ 1, d(z, y) < 1. Then
ρ(x, y) = 1 = ρ(x, z) ≤ ρ(x, z) + ρ(y, z).
8: d(x, y) ≥ 1, d(x, z) ≥ 1, d(z, y) ≥ 1. Then
ρ(x, y) = 1 < 2 = ρ(x, z) + ρ(y, z).
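A quick numerical sanity check (not a proof) of the case analysis above; the underlying metric d(a, b) = |a − b| on the real line and the random sample are assumptions made purely for illustration.

import random

d = lambda a, b: abs(a - b)
rho = lambda a, b: min(1.0, d(a, b))     # the candidate metric rho = min{1, d}

random.seed(0)
for _ in range(100000):
    x, y, z = (random.uniform(-5.0, 5.0) for _ in range(3))
    assert rho(x, y) <= rho(x, z) + rho(z, y) + 1e-12
print("triangle inequality holds on all sampled triples")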
Exercise 4.6.2 Show that any two norms ‖·‖_p and ‖·‖_q in IR^n, 1 ≤ p, q ≤ ∞, are equivalent, i.e., there exist constants C1 > 0, C2 > 0 such that
‖x‖_p ≤ C1 ‖x‖_q   and   ‖x‖_q ≤ C2 ‖x‖_p
for any x ∈ IR^n. Try to determine optimal (minimum) constants C1 and C2.
It is sufficient to show that any p-norm is equivalent to, e.g., the ∞-norm. We have,

‖x‖_∞ = max_{i=1,...,n} |x_i| = |x_j| (for a particular index j) = (|x_j|^p)^{1/p} ≤ ( Σ_{i=1}^n |x_i|^p )^{1/p} = ‖x‖_p

At the same time,

‖x‖_p = ( Σ_{i=1}^n |x_i|^p )^{1/p} ≤ ( Σ_{i=1}^n ‖x‖_∞^p )^{1/p} = n^{1/p} ‖x‖_∞

Consider now 1 < p, q < ∞. Let C_pq be the smallest constant for which the following estimate holds:

‖x‖_p ≤ C_pq ‖x‖_q,   ∀x ∈ IR^n

Constant C_pq can be determined by solving the constrained maximization problem,

C_pq = max_{‖x‖_q = 1} ‖x‖_p

This leads to the Lagrangian,

L(x, λ) = Σ_{i=1}^n |x_i|^p − λ ( Σ_{i=1}^n |x_i|^q − 1 )

and the necessary conditions,

∂L/∂x_i = ( p|x_i|^{p−1} − λ q |x_i|^{q−1} ) x_i/|x_i| = 0  if x_i ≠ 0,   ∂L/∂x_i = 0  if x_i = 0,   i = 1, ..., n

For x_i ≠ 0, we get,

|x_i| = ( λ q/p )^{1/(p−q)}

Raising both sides to power q and summing up over i, we get

1 = Σ_{i=1}^n |x_i|^q = m ( λ q/p )^{q/(p−q)}

where m is the number of coordinates for which x_i ≠ 0. Notice that m must be positive to satisfy the constraint. This yields the value for λ,

λ = (p/q) m^{−(p−q)/q}

and the corresponding value for the p-norm,

( Σ_{i=1}^n |x_i|^p )^{1/p} = ( m (λq/p)^{p/(p−q)} )^{1/p} = m^{1/p − 1/q}

For p < q, the exponent 1/p − 1/q is positive, so the maximum is attained at the point for which m = n, i.e., all coordinates are different from zero. This gives the value of the optimal constant,

C_pq = n^{1/p − 1/q}

(For p ≥ q the exponent is nonpositive, the maximum corresponds to m = 1, and the optimal constant is C_pq = 1.) Upon passing with p, q to 1 or ∞, we get the corresponding values of the constant for the limiting cases.
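A numerical cross-check (not a proof) of the optimal constant C_pq = n^(1/p − 1/q) for p < q: random sampling never exceeds the bound, while the constant vector attains it. The dimension and exponents below are illustrative assumptions.

import numpy as np

def cpq_empirical(n, p, q, trials=20000, rng=np.random.default_rng(0)):
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(n)
        best = max(best, np.linalg.norm(x, p) / np.linalg.norm(x, q))
    return best

n, p, q = 5, 2, 4                         # an illustrative choice with p < q
print("theoretical constant:", n ** (1/p - 1/q))
print("empirical maximum   :", cpq_empirical(n, p, q))
print("attained at x = (1,...,1):",
      np.linalg.norm(np.ones(n), p) / np.linalg.norm(np.ones(n), q))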
Exercise 4.6.3 Consider IR^N with the l1-norm,

x = (x_1, ..., x_N),   ‖x‖_1 = Σ_{i=1}^N |x_i|

Let ‖x‖ be now any other norm defined on IR^N.

(i) Show that there exists a constant C > 0 such that
‖x‖ ≤ C ‖x‖_1,   ∀x ∈ IR^N

(ii) Use (i) to demonstrate that function
IR^N ∋ x → ‖x‖ ∈ IR
is continuous in the l1-norm.

(iii) Use the Weierstrass Theorem to conclude that there exists a constant D > 0 such that
‖x‖_1 ≤ D ‖x‖,   ∀x ∈ IR^N

Therefore, the l1-norm is equivalent to any other norm on IR^N. Explain why the result implies that any two norms defined on an arbitrary finite-dimensional vector space must be equivalent.

(i) Let e_i denote the canonical basis in IR^N. Then

‖x‖ = ‖ Σ_{i=1}^N x_i e_i ‖ ≤ Σ_{i=1}^N |x_i| ‖e_i‖ ≤ C Σ_{i=1}^N |x_i|

where
C = max{‖e_1‖, ..., ‖e_N‖}
(ii) This follows immediately from the fact that
| ‖x‖ − ‖y‖ | ≤ ‖x − y‖ ≤ C ‖x − y‖_1
and property (i).

(iii) The l1 unit sphere is compact and, by (ii), the norm ‖·‖ is continuous in the l1-topology. Consequently, by the Weierstrass Theorem, ‖·‖ attains a minimum c on the l1 unit sphere, i.e.,
c ≤ ‖ x/‖x‖_1 ‖,   ∀x ≠ 0
Positive definiteness of the norm implies that c > 0. Multiplying by ‖x‖_1 and dividing by c, we get
‖x‖_1 ≤ c^{-1} ‖x‖

Take now two arbitrary norms. As each of them is equivalent to norm ‖·‖_1, they must be equivalent to each other as well. For an arbitrary N-dimensional vector space it suffices to fix a basis and transport the two norms to IR^N through the corresponding isomorphism; the result above then applies.
4.7
Topological Properties of Metric Spaces
Exercises
Exercise 4.7.1 Prove that F : (X, D) → (Y, ρ) is continuous if and only if the inverse image of every
(open) ball B(y, �) in Y is an open set in X.
It is sufficient to show that the inverse image of any open set G in Y is open in X. For any y ∈ G, there exists ε_y > 0 such that B(y, ε_y) ⊂ G. Set G thus can be represented as a union of open balls,

G = ∪_{y∈G} B(y, ε_y)

Consequently, set

F^{-1}(G) = F^{-1}( ∪_{y∈G} B(y, ε_y) ) = ∪_{y∈G} F^{-1}( B(y, ε_y) )

as a union of open sets, must be open.
Exercise 4.7.2 Let X = C ∞ (a, b) be the space of infinitely differentiable functions equipped with Chebyshev metric. Let F : X → X be the derivative operator, F f = df /dx. Is F a continuous map on X?
No, it is not. Take, for instance, [a, b] = [0, π] and the sequence fn(x) = sin(nx)/n. Then,

‖fn‖ = max_{x∈[0,π]} | sin(nx)/n | ≤ 1/n → 0,

but, at the same time,

‖fn′‖ = max_{x∈[0,π]} |fn′(x)| = max_{x∈[0,π]} | cos(nx) | = 1 ↛ 0
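A numerical illustration of this counterexample; the grid sampling of [0, π] is an assumption made only to approximate the Chebyshev norms.

import numpy as np

x = np.linspace(0.0, np.pi, 20001)
for n in (1, 10, 100, 1000):
    fn  = np.sin(n * x) / n            # f_n tends to 0 in the Chebyshev metric
    dfn = np.cos(n * x)                # but F f_n = f_n' does not
    print(n, np.max(np.abs(fn)), np.max(np.abs(dfn)))
# max|f_n| ~ 1/n -> 0, while max|f_n'| stays equal to 1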
4.8
Completeness and Completion of Metric Spaces
Exercises
Exercise 4.8.1 Let Ω ⊂ IR^n be an open set and let (C(Ω), ‖·‖_p) denote the (incomplete) metric space of continuous, real-valued functions on Ω with metric induced by the Lp norm. Construct arguments supporting the fact that the completion of this space is Lp(Ω).
Hint: See Exercise 4.9.3 and use Theorem 4.8.2.
The definition of the completion of a metric space and the density result imply that Lp(Ω) is a completion of (C(Ω), ‖·‖_p). By Theorem 4.8.2, completions are unique (up to an isometry) and, therefore, space Lp(Ω) is the completion of (C(Ω), ‖·‖_p).
Exercise 4.8.2 Let x_{n_k} be a subsequence of a Cauchy sequence x_n. Show that if x_{n_k} converges to x, so does the whole sequence x_n.

Recall that x_n is Cauchy if
∀ε > 0 ∃N1 = N1(ε) :  n, m ≥ N1 ⇒ d(x_n, x_m) < ε
Also, x_{n_k} → x means that
∀ε > 0 ∃N2 = N2(ε) :  k ≥ N2 ⇒ d(x_{n_k}, x) < ε
Let ε be now an arbitrary positive number. Choose
N = max{N1(ε/2), N2(ε/2)}
Then, for any n ≥ N (recall that n_N ≥ N),
d(x_n, x) ≤ d(x_n, x_{n_N}) + d(x_{n_N}, x) < ε/2 + ε/2 = ε
Exercise 4.8.3 Prove that in a complete normed space, we have the generalization of the triangle inequality,

‖ Σ_{n=1}^∞ x_n ‖ ≤ Σ_{n=1}^∞ ‖x_n‖

The result should be read as follows: if the series on the right-hand side converges, so does the one on the left-hand side, and the estimate holds.

Convergence of the right-hand side and the estimate (for M > N),

‖ Σ_{n=1}^M x_n − Σ_{n=1}^N x_n ‖ = ‖ Σ_{n=N+1}^M x_n ‖ ≤ Σ_{n=N+1}^M ‖x_n‖ ≤ Σ_{n=N+1}^∞ ‖x_n‖

imply that the sequence S_N = Σ_{n=1}^N x_n of partial sums is Cauchy. Consequently, it must converge to a limit S. For a finite N, the usual triangle inequality implies

‖ Σ_{n=1}^N x_n ‖ ≤ Σ_{n=1}^N ‖x_n‖ ≤ Σ_{n=1}^∞ ‖x_n‖

Recalling that any norm is continuous in the very topology it generates, and passing with N → ∞ on the left-hand side, we get the result.
4.9
Compactness in Metric Spaces
Exercises
Exercise 4.9.1 Let E ⊂ IR^n be a Lebesgue measurable set. Function

χ_E := 1 for x ∈ E,  0 for x ∉ E

is called the characteristic function of set E. Prove that there exists a sequence of continuous functions φ_n : IR^n → [0, 1], φ_n ≥ 0, converging to χ_E in the Lp norm for 1 ≤ p < ∞.

Hint: Pick ε = 1/n and consider a closed subset F of E, and an open superset G of E such that m(G − F) < ε (recall the characterization of Lebesgue measurable sets in Proposition 3.2.3(ii)). Then set

φ_n(x) = φ_ε(x) := d(x, G′) / ( d(x, G′) + d(x, F) )

where G′ denotes the complement of G and d(x, A) denotes the distance from point x to set A,

d(x, A) := inf_{y∈A} d(x, y)

1. Distance function d(x, A) is continuous in x. Indeed, taking the infimum with respect to y ∈ A on both sides of the triangle inequality:

d(x1, y) ≤ d(x1, x2) + d(x2, y)

we obtain,

inf_{y∈A} d(x1, y) ≤ d(x1, x2) + inf_{y∈A} d(x2, y)

which implies:

d(x1, A) − d(x2, A) ≤ d(x1, x2)

Upon switching x1 with x2 and combining the two inequalities, we get

|d(x1, A) − d(x2, A)| ≤ d(x1, x2)

which shows that the distance function is Lipschitz with constant 1.

2. Function φ_ε is continuous. Indeed, from the fact that function IR^2 ∋ (x, y) → x/y ∈ IR is continuous for y ≠ 0, it follows that, if functions f(x), g(x) are continuous at x0 and g(x0) ≠ 0, then f/g is continuous at x0 as well. It is sufficient then to notice that the denominator in the definition of φ_ε is always positive.

3. Function φ_ε = 1 on F, vanishes on G′, and 0 ≤ φ_ε ≤ 1 on G − F. Consequently,

∫_{IR^n} |χ_E − φ_ε|^p ≤ ∫_{G−F} 1 = m(G − F) < ε
Note that the result is not true for p = ∞.
Exercise 4.9.2 Let f : Ω → IR̄ be a measurable function. Function φ : Ω → IR is called a simple function if Ω can be partitioned into measurable sets E_i, i = 1, 2, ..., and the restriction of φ to each E_i is constant. In other words,

φ = Σ_{i=1}^∞ a_i χ_{E_i}

where a_i ∈ IR and E_i are pairwise disjoint measurable sets. Prove then that, for every ε > 0, there exists a simple function φ_ε : Ω → IR such that

‖f − φ_ε‖_∞ ≤ ε

Hint: Use the Lebesgue approximation sums.

Indeed, let ... < y_{i−1} < y_i < y_{i+1} < ... be a partition of IR such that

sup_i |y_i − y_{i−1}| ≤ ε

Define E_i = f^{-1}([y_{i−1}, y_i)), select any c_i ∈ [y_{i−1}, y_i] and set:

φ_ε = Σ_{i∈Z} c_i χ_{E_i}
Exercise 4.9.3 Let Ω ⊂ IR^n be an open set. Let f ∈ Lp(Ω), 1 ≤ p < ∞. Use the results of Exercise 4.9.1 and Exercise 4.9.2 to show that there exists a sequence of continuous functions φ_n : Ω → IR converging to function f in the Lp(Ω) norm.

1. Assume additionally that domain Ω is bounded, and 0 ≤ f ≤ M < ∞ a.e. in Ω. Pick an arbitrary ε > 0.

• Select a partition 0 = y_0 < y_1 < ... < y_N = M such that

max_{i=1,...,N} |y_i − y_{i−1}| ≤ (ε/2) [m(Ω)]^{−1/p}

and define:

φ = Σ_{i=1}^N y_i χ_{E_i},   E_i = f^{-1}([y_{i−1}, y_i))

Then

‖f − φ‖_p ≤ ‖f − φ‖_∞ [m(Ω)]^{1/p} ≤ ε/2

• Use Exercise 4.9.1 to select continuous functions φ_i such that

‖χ_{E_i} − φ_i‖_p ≤ ε / (2 · 2^i y_i)

Then

‖ Σ_{i=1}^N y_i χ_{E_i} − Σ_{i=1}^N y_i φ_i ‖_p ≤ Σ_{i=1}^N y_i ‖χ_{E_i} − φ_i‖_p ≤ (ε/2) Σ_{i=1}^N 1/2^i ≤ ε/2

By the triangle inequality, ‖f − Σ_{i=1}^N y_i φ_i‖_p ≤ ε.

2. Drop the assumption on f being bounded. Define f_M(x) = min{f(x), M}. Then f_M → f pointwise as M → ∞ and, by construction, the functions f_M are dominated by f. Thus, by the Lebesgue Dominated Convergence Theorem, ∫ |f_M − f|^p → 0 as M → ∞. Apply then the result of the previous step to functions f_M.

3. Drop the assumption on f being nonnegative. Split function f into its positive and negative parts: f = f_+ − f_− and apply the previous step to each of the parts separately.

4. Drop the assumption on Ω being bounded. Let B_n = B(0, n). Consider the restriction f_n of f to Ω ∩ B_n. By the Lebesgue Dominated Convergence argument again, ‖f_n − f‖_p → 0. Apply the result of the previous step to f_n then.
Exercise 4.9.4 Argue that, in the result of Exercise 4.9.3, one can assume additionally that the approximating functions φ_n have compact support.

Let Ω ⊂ IR^n be an open set. Define,

Ω_n := {x ∈ Ω : d(x, Ω′) > 1/n} ∩ B_n

Continuity of the distance function d(x, A) implies that sets Ω_n are open. By construction, they are also bounded and Ω̄_n ⊂ Ω_{n+1}. Consider the restriction f_n of f to Ω_n and apply the result of Exercise 4.9.3 to function f_n. Since Ω̄_n ⊂ Ω_{n+1}, one can assume that the open sets G used in Exercise 4.9.1 are contained in Ω_{n+1}. Consequently, the corresponding continuous approximations have their supports in Ω_{n+1} ⊂ Ω.
Exercise 4.9.5 Let F be a uniformly bounded class of functions in the Chebyshev space C[a, b], i.e.,

∃M > 0 : |f(x)| ≤ M,   ∀x ∈ [a, b], ∀f ∈ F

Let G be the corresponding class of primitive functions

F(x) = ∫_a^x f(s) ds,   f ∈ F

Show that G is precompact in the Chebyshev space.

According to the Arzelà–Ascoli Theorem, it is sufficient to show that G is uniformly bounded and equicontinuous. Uniform boundedness follows from |F(x)| ≤ M(b − a). For equicontinuity, we have,

|F(x) − F(y)| = | ∫_y^x f(s) ds | ≤ | ∫_y^x |f(s)| ds | ≤ M |x − y|

Thus the functions F ∈ G are Lipschitz continuous with a uniform (with respect to f ∈ F) bound on the Lipschitz constant, which implies that the class G is equicontinuous.
4.10
Contraction Mappings and Fixed Points
Exercises
Exercise 4.10.1 Reformulate Example 4.10.6 concerning the Fredholm Integral Equation using the Lp spaces,
1 < p < ∞. What is the natural regularity assumption on kernel function K(x, y)? Does it have to be
bounded?
Let

Af = φ + λ ∫_a^b K(·, y) f(y) dy

We need to come up with sufficient conditions to have the estimate

‖Af − Ag‖_p = ‖A(f − g)‖_p ≤ k ‖f − g‖_p

where k < 1. The inequality will also automatically imply that operator A is well defined. Let h = f − g. We have,

‖ λ ∫_a^b K(·, y) h(y) dy ‖_p^p = ∫_a^b | λ ∫_a^b K(x, y) h(y) dy |^p dx
≤ |λ|^p ∫_a^b [ ( ∫_a^b |K(x, y)|^{p/(p−1)} dy )^{(p−1)/p} ( ∫_a^b |h(y)|^p dy )^{1/p} ]^p dx
= |λ|^p ∫_a^b ( ∫_a^b |K(x, y)|^{p/(p−1)} dy )^{p−1} dx  ∫_a^b |h(y)|^p dy

where we have applied the Hölder inequality to the inner integral to obtain an estimate with the desired Lp norm of function h. A natural assumption is thus to request the inequality:

|λ| { ∫_a^b ( ∫_a^b |K(x, y)|^{p/(p−1)} dy )^{p−1} dx }^{1/p} < 1

We may try to do a little better and come up with a more readable assumption. For p = 2 we actually need to do nothing, as the assumption expresses simply the fact that

|λ| ‖K‖_2 < 1

where the L2-norm of kernel K(x, y) is calculated with respect to both variables x, y. In an attempt to derive a similar condition for general 1 < p < ∞, we need to distinguish between two cases.

Case: 1 < p < 2. For this range of p, 1 < 1/(p−1) < ∞, and we may apply the Hölder inequality (with exponents 1/(p−1) and 1/(2−p)) to the outer integral in the condition above. We get,

∫_a^b ( ∫_a^b |K(x, y)|^{p/(p−1)} dy )^{p−1} dx ≤ (b − a)^{2−p} ( ∫_a^b ∫_a^b |K(x, y)|^{p/(p−1)} dy dx )^{p−1}

This leads to the assumption on the kernel in the form:

|λ| (b − a)^{(2−p)/p} ‖K‖_{p/(p−1)} < 1

The norm of K is again calculated with respect to both variables x, y.

Case: 2 < p < ∞. In this case 1 < p − 1 < ∞, and we can apply the Hölder inequality (with exponents p − 1 and (p−1)/(p−2)) to the inner integral. We obtain,

∫_a^b ( ∫_a^b |K(x, y)|^{p/(p−1)} dy )^{p−1} dx ≤ ∫_a^b [ (b − a)^{(p−2)/(p−1)} ( ∫_a^b |K(x, y)|^p dy )^{1/(p−1)} ]^{p−1} dx
= (b − a)^{p−2} ∫_a^b ∫_a^b |K(x, y)|^p dy dx

which leads to the assumption:

|λ| (b − a)^{(p−2)/p} ‖K‖_p < 1

Note that in both cases for p ≠ 2, the assumption on the kernel involves an Lq-norm with q > 2. For bounded domains, the Lp spaces form a scale (compare Proposition 3.9.3), which hints that the formulation for p = 2 is optimal in the sense that it involves the weakest assumption on the kernel†. Finally, notice that the integral conditions may be satisfied for singular kernels, e.g., K(x, y) = |x − y|^{−1/2}.

† Under the assumption that we try to express it in terms of an Lp norm on (a, b)^2.
Exercise 4.10.2 Consider an initial-value problem:

q ∈ C([0, T]) ∩ C^1(0, T),   q̇ = t ln q,  t ∈ (0, T),   q(0) = 1

Use the Banach Contractive Map Theorem and the Picard method to determine a concrete value of T for which the problem has a unique solution.

Function F(t, q) = t ln q is defined only for q > 0 and it is not uniformly Lipschitz with respect to q in the domain of definition. We need to restrict ourselves to a smaller domain [0, T] × [ε, ∞) where ε < 1 is fixed and T is to be determined. The Mean-Value Theorem implies then that

ln q1 − ln q2 = (1/ξ)(q1 − q2)

for some intermediate point ξ ∈ (q1, q2). This implies that, in the smaller domain, function F(t, q) is uniformly Lipschitz in q,

|t ln q1 − t ln q2| ≤ (T/ε) |q1 − q2|,   ∀t ∈ [0, T], q1, q2 ∈ [ε, ∞)

The initial-value problem is equivalent to the integral equation:

q ∈ C([0, T]),   q(t) = 1 + ∫_0^t s ln q(s) ds

and function q is a solution of the integral equation if and only if q is a fixed point of the operator:

A : C([0, T]) → C([0, T]),   (Aq)(t) = 1 + ∫_0^t s ln q(s) ds

We have,

|(Aq1)(t) − (Aq2)(t)| ≤ ∫_0^t |s ln q1(s) − s ln q2(s)| ds ≤ (T/ε) ∫_0^t |q1(s) − q2(s)| ds ≤ (T^2/ε) ‖q1 − q2‖_∞

Consequently, map A is a contraction if

T^2/ε < 1

The important detail in this problem is to notice that, without further restrictions on time interval T, the value of (Aq)(t) may fall outside of the domain of definition of F(t, q) = t ln q. We thus also need the estimate

|(Aq)(t) − 1| = | ∫_0^t s ln q(s) ds | ≤ ∫_0^t |s ln q(s) − s ln 1| ds ≤ (1/ε) ∫_0^t s |q(s) − 1| ds ≤ (T^2/(2ε)) (1 − ε)

under the additional condition that |q(t) − 1| ≤ 1 − ε. In order to stay within the domain, it is sufficient to assume that

(T^2/(2ε)) (1 − ε) ≤ 1 − ε

which is equivalent to

T^2 ≤ 2ε

The second condition is thus automatically satisfied if the first one is. We can guarantee the existence (and uniqueness) of the solution for any T < √ε, for any ε < 1, i.e., for any T < 1. Map A maps the set

{q ∈ C[0, T] : |q(t) − 1| ≤ 1 − ε, ∀t ∈ [0, T]}

into itself, and it is contractive.
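A minimal numerical sketch of the Picard iteration (Aq)(t) = 1 + ∫_0^t s ln q(s) ds; the grid resolution, the trapezoidal quadrature, the value T = 0.9 and the number of iterations are assumptions made for illustration only.

import numpy as np

T, N = 0.9, 2000
t = np.linspace(0.0, T, N + 1)
q = np.ones_like(t)                       # initial guess q_0(t) = 1

for _ in range(50):                       # Picard iterations
    integrand = t * np.log(q)
    q_new = 1.0 + np.concatenate(([0.0],
              np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))
    if np.max(np.abs(q_new - q)) < 1e-12:
        break
    q = q_new

print("q(T) approx.", q[-1])              # iterates stay close to 1 for T < 1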
Exercise 4.10.3 Show that f(x) = (1/2)(x + 3/2) is a contraction mapping with the fixed point x = 3/2. If x_0 = 2 is the starting point of a sequence of successive approximations, show that the error after n iterations is bounded by 1/2^{n+1}.

f(x) − f(y) = (1/2)(x − y), so f is a contraction with contraction constant k = 1/2. Passing to the limit in

x_{n+1} = (1/2)( x_n + 3/2 )

we obtain that x = (1/2)(x + 3/2), which yields x = 3/2. The error estimate derived in the text gives then

d(x_n, x) ≤ ( k^n/(1 − k) ) d(x_1, x_0) = ( (1/2)^n / (1/2) ) · (1/4) = 1/2^{n+1}
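A quick numerical check of the iterates and of the a priori error bound 1/2^{n+1} derived above; this is only a sanity check of the estimate, not part of the proof.

f = lambda x: 0.5 * (x + 1.5)

x, x_star = 2.0, 1.5
for n in range(1, 11):
    x = f(x)                              # successive approximations
    bound = 1.0 / 2 ** (n + 1)            # a priori error bound
    print(n, abs(x - x_star), bound, abs(x - x_star) <= bound + 1e-15)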
Exercise 4.10.4 Use the idea of contraction maps and fixed points to compute an approximate value of ∛5.

Use Example 4.10.4 with F(x) = x^3 − 5, a = 1, b = 2, µ = 3, γ = 1/12.
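A short fixed-point sketch; the iteration form T(x) = x − γF(x) with γ = 1/12 is assumed here from the data above (Example 4.10.4 itself is not reproduced in this manual). On [1, 2], T′(x) = 1 − x²/4 ∈ [0, 3/4], so T is a contraction mapping [1, 2] into itself.

F = lambda x: x**3 - 5
T = lambda x: x - F(x) / 12.0             # assumed contraction x - gamma*F(x)

x = 1.5                                   # any starting point in [1, 2]
for _ in range(60):
    x = T(x)
print(x, 5 ** (1.0 / 3.0))                # both approx. 1.7099759...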
5
Banach Spaces
Topological Vector Spaces
5.1
Topological Vector Spaces—An Introduction
Exercises
Exercise 5.1.1 Let V be a t.v.s. and let B0 denote a base of neighborhoods for the zero vector. Show that B0
is equivalent to αB0 for α �= 0.
Continuity of the multiplication by scalar at (α, 0),
(α, 0) → α · 0 = 0
implies that
∀B ∈ B0
∃C ∈ B0
∃δ > 0
such that
v ∈ C, |β − α| < δ
⇒
βv ∈ B
In particular, for β = α,
v∈C
⇒
αv ∈ B
In other words, for every α,
∀B ∈ B0 ∃C ∈ B0 : αC ⊂ B
i.e., αB0 ≽ B0. For α ≠ 0, applying the same result to 1/α gives (1/α)B0 ≽ B0 or, equivalently, B0 ≽ αB0. Consequently, αB0 ∼ B0.
5.2
Locally Convex Topological Vector Spaces
Exercises
Exercise 5.2.1 Let A ∈ L(X, Y ) be a linear transformation from a vector space X into a normed space Y .
Assume that A is non-injective, and define
p(x) := �Ax�Y
Explain why p is a seminorm but not a norm.
Function p(x) is homogeneous and it satisfies the triangle inequality. But it is not positive definite, as p(x) vanishes on the (nontrivial) null space N(A).
Exercise 5.2.2 Show that each of the seminorms inducing a locally convex topology is continuous with
respect to this topology.
Consider seminorm pκ, κ ∈ I, and x ∈ V. We need to show that
∀ε > 0 ∃C ∈ Bx : y ∈ C ⇒ |pκ(y) − pκ(x)| ≤ ε
It is sufficient to take C = x + B({κ}, ε). Indeed, by definition of B({κ}, ε),
y ∈ C ⇔ pκ(y − x) ≤ ε
and, consequently,
pκ(y) = pκ(x + y − x) ≤ pκ(x) + pκ(y − x) ≤ pκ(x) + ε
At the same time,
pκ(x) = pκ(y + x − y) ≤ pκ(y) + pκ(x − y) = pκ(y) + pκ(y − x) ≤ pκ(y) + ε
Exercise 5.2.3 Show that replacing the weak inequality in the definition of set Mc with a strict one does not change the properties of Mc.
Properties (i)-(iii) follow from the same arguments as for the case with weak inequality.
(iv) Consider two cases:
• If p(u) = 0 then p(αu) = αp(u) = 0, so αu ∈ Mc, for any α > 0.
• If p(u) ≠ 0, take any α > p(u)/c. Then
p(α^{-1}u) = α^{-1} p(u) < (c/p(u)) p(u) = c

(v) For p(u) = 0, take any αn ↓ 0. For p(u) ≠ 0, take any αn ↓ p(u)/c. Then αn^{-1}u ∈ Mc by the previous arguments, and αn c → p(u).
Exercise 5.2.4 Show that, by replacing the weak inequality in the definition of sets B(I0, ε) with a strict one, one obtains bases of neighborhoods equivalent to the original ones and, therefore, the same topology.
Define
B°(I0, ε) = {v : pι(v) < ε, ∀ι ∈ I0}
Equivalence of the two bases follows from the simple inclusions:
B(I0, ε1) ⊂ B°(I0, ε),  ∀ε1 < ε,   and   B°(I0, ε1) ⊂ B(I0, ε),  ∀ε1 ≤ ε
Exercise 5.2.5 Show that seminorms are convex functionals.
Let 0 ≤ α ≤ 1, then
p(αu + (1 − α)v) ≤ p(αu) + p((1 − α)v) = αp(u) + (1 − α)p(v)
Exercise 5.2.6 Prove the following characterization of continuous linear functionals.
Let V be a locally convex t.v.s. generated by a family of seminorms pι , ι ∈ I. A linear functional f on
V is continuous iff there exist a finite subset I0 of I, and a constant C > 0 such that
|f (v)| ≤ C max pι (v)
ι∈I0
The following is a straightforward generalization of arguments used in Proposition 5.6.1.
For linear functionals defined on a t.v.s., continuity at 0 implies continuity at any u. Indeed, let u be
an arbitrary vector. We need to show that
∀� > 0 ∃B ∈ B0 : y ∈ u + B ⇒ |f (y) − f (u)| < �
If f is continuous at 0 then
∀� > 0 ∃B ∈ B0 : z ∈ B ⇒ |f (z)| < �
Consequently, for y ∈ u + B, i.e., y − u ∈ B,
|f(y) − f(u)| = |f(y − u)| < ε.
Thus, it is sufficient to show that the condition above is equivalent to continuity of f at 0.
Necessity: Let f be continuous at 0. For any � > 0 and, therefore, for � = 1 as well, there exists a
neighborhood B ∈ B0 such that
x∈B
⇒
|f (x)| < 1
128
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Recalling the construction of the base of neighborhoods, we learn that there exists a finite subset I0 ⊂ I, and a constant δ > 0 such that,

max_{ι∈I0} pι(x) < δ  ⇒  |f(x)| < 1

Let λ := max_{ι∈I0} pι(x). For an arbitrary η > 0,

max_{ι∈I0} pι( x δ/(λ + η) ) = δλ/(λ + η) < δ

and, therefore,

| f( x δ/(λ + η) ) | < 1

or, equivalently,

|f(x)| < (λ + η)/δ

Passing with η → 0, we obtain

|f(x)| ≤ (1/δ) max_{ι∈I0} pι(x) = C max_{ι∈I0} pι(x),   C := 1/δ
Sufficiency follows immediately from the fact that all seminorms defining the topology are continuous
functionals, comp. Exercise 5.2.2 or, more directly, the definition of neighborhoods.
5.3
Space of Test Functions
Hahn–Banach Extension Theorem
5.4
The Hahn–Banach Theorem
Exercises
Exercise 5.4.1 Prove that relation ≤ introduced in the proof of the Hahn–Banach Theorem is a partial ordering of the family F.
All three conditions: reflexivity, antisymmetry and transitivity are trivially satisfied.
Exercise 5.4.2 Prove that element (Y, fY ) of F defined in the proof of the Hahn–Banach theorem
1. is well defined, i.e.,
(i) Y is a linear subspace of X and
Banach Spaces
129
def
(ii) value fY (x) = fYι (x) is well defined, i.e., independent of the choice of index ι, and
2. is an upper bound for the chain G.
• We need to show first that Y is closed with respect to vector space operations. Let x, y ∈ Y .
Then x ∈ Yι , y ∈ Yκ , for some ι, κ ∈ I. But G is a chain, i.e., either Yι ⊂ Yκ or Yκ ⊂ Yι , i.e.,
both x, y ∈ Yι or x, y ∈ Yκ . Consequently, x + y ∈ Yι or x + y ∈ Yκ , so x + y ∈ Y .
Similarly, y ∈ Y ⇒ ∃κ ∈ I : y ∈ Yκ ⇒ αy ∈ Yκ ⇒ αy ∈ Y .
• The fact that G is a chain is again crucial in showing that fY is well defined. Indeed, let
x ∈ Yι ∩ Yκ . If Yι ⊂ Yκ then fYκ is an extension of fYι and, consequently, fYι (x) = fYκ (x).
Same argument applies to case Yκ ⊂ Yι .
• By construction, Yι ⊂ Y and fY |Yι = fYι .
5.5
Extensions and Corollaries
Bounded (Continuous) Linear Operators on Normed
Spaces
5.6
Fundamental Properties of Linear Bounded Operators
Exercises
Exercise 5.6.1 Consider IC^n with norm ‖·‖_q, q ∈ [1, ∞]. Let u ∈ IC^n. Consider the antilinear functional:

φ(v) = φ_u(v) := Σ_{i=1}^n u_i v̄_i

Prove that

sup_{‖v‖=1} |φ(v)| = max_{‖v‖=1} |φ(v)| = ‖u‖_p

where 1/p + 1/q = 1.

Case u = 0 is trivial. Assume u ≠ 0. Inequality "≤" follows from the discrete Hölder inequality. To demonstrate the equality, we need only to construct a specific v for which the equality holds.

Case: 1 < p < ∞. Equality

Σ_i u_i v̄_i = ( Σ_i |u_i|^p )^{1/p}

suggests looking for v in the form:

v_i = u_i |u_i|^{p−2} λ,   λ ∈ IR

Substituting into the condition ‖v‖_q = 1, we get λ = ( Σ_i |u_i|^p )^{−1/q}, which yields the final result.

Case: p = 1. The relation

‖u‖_1 = Σ_i |u_i| = Σ_i u_i v̄_i

suggests setting

v_i = 0 if u_i = 0,   v_i = u_i/|u_i| if u_i ≠ 0

Notice that ‖v‖_∞ = 1.

Case: p = ∞. Let

‖u‖_∞ = max_i |u_i| = |u_j|

for some particular (not necessarily unique) index j. Set

v_i = 0 for i ≠ j,   v_j = u_j/|u_j|

Then ‖v‖_1 = 1, and v realizes the maximum.
Exercise 5.6.2 What part of the result discussed in Exercise 5.6.1 carries over to the infinite-dimensional
case n = ∞ and what does not?
All of the arguments go through except for a small modification in case p = ∞ where the supremum
may not be attained.
The interesting issue is elsewhere and lies in the surjectivity of the operator

ℓ^p ∋ u → { v → Σ_{i=1}^∞ u_i v̄_i } ∈ (ℓ^q)′

Except for the case p = 1, the operator is surjective, see Section 5.13.
Exercise 5.6.3 Verify the assertions given in Example 5.6.3.
• Let Σ_{j=1}^n |x_j| ≤ 1 (the ‖·‖_1 → ‖·‖_∞ case). Then

‖Ax‖_∞ = max_{1≤i≤n} | Σ_{j=1}^n A_ij x_j | ≤ max_{1≤i≤n} ( max_{1≤j≤n} |A_ij| Σ_{j=1}^n |x_j| ) = max_{1≤i≤n} max_{1≤j≤n} |A_ij|

On the other side, let i0, j0 be the indices where the maximum is achieved, i.e.,

max_{1≤i≤n} max_{1≤j≤n} |A_ij| = |A_{i0 j0}|

Construct

x_j = sgn(A_{i0 j0}) if j = j0,  0 if j ≠ j0

Then ‖x‖_1 = 1 and

Σ_{j=1}^n A_{i0 j} x_j = |A_{i0 j0}|

so the upper bound is attained.

• Let max_{1≤j≤n} |x_j| ≤ 1 (the ‖·‖_∞ → ‖·‖_∞ case). Then

‖Ax‖_∞ = max_{1≤i≤n} | Σ_{j=1}^n A_ij x_j | ≤ max_{1≤i≤n} Σ_{j=1}^n |A_ij| |x_j| ≤ max_{1≤i≤n} ( max_{1≤j≤n} |x_j| Σ_{j=1}^n |A_ij| ) ≤ max_{1≤i≤n} Σ_{j=1}^n |A_ij|

On the other side, let i0 be such that

max_{1≤i≤n} Σ_{j=1}^n |A_ij| = Σ_{j=1}^n |A_{i0 j}|

Choose x_j = sgn A_{i0 j}; then

Σ_{j=1}^n A_{i0 j} x_j = Σ_{j=1}^n |A_{i0 j}| = max_{1≤i≤n} Σ_{j=1}^n |A_ij|

so the upper bound is attained.

• Let Σ_{j=1}^n |x_j| ≤ 1 (the ‖·‖_1 → ‖·‖_1 case). Then

‖Ax‖_1 = Σ_{i=1}^n | Σ_{j=1}^n A_ij x_j | ≤ Σ_{i=1}^n Σ_{j=1}^n |A_ij| |x_j| = Σ_{j=1}^n |x_j| Σ_{i=1}^n |A_ij| ≤ max_{1≤j≤n} Σ_{i=1}^n |A_ij|

On the other side, let j0 be such that

max_{1≤j≤n} Σ_{i=1}^n |A_ij| = Σ_{i=1}^n |A_{i j0}|

and construct

x_j = δ_{j j0} = 1 if j = j0,  0 if j ≠ j0

Then ‖x‖_1 = 1 and

Σ_{i=1}^n | Σ_{j=1}^n A_ij x_j | = Σ_{i=1}^n |A_{i j0}|

so the upper bound is attained.
Exercise 5.6.4 Show that

‖A‖_{∞,1} ≤ Σ_{i,j} |A_ij|

but construct an example of a matrix for which

‖A‖_{∞,1} < Σ_{i,j} |A_ij|

Let |x_j| ≤ 1, j = 1, ..., n. Then | Σ_{j=1}^n A_ij x_j | ≤ Σ_{j=1}^n |A_ij|. Consequently,

‖Ax‖_1 = Σ_{i=1}^n | Σ_{j=1}^n A_ij x_j | ≤ Σ_{i=1}^n Σ_{j=1}^n |A_ij|

Take now n = 2, and consider the matrix

A = [ 1   1
      1  −1 ]

So,

Ax = (x1 + x2, x1 − x2)^T

and

‖Ax‖_1 = |x1 + x2| + |x1 − x2|

The maximum,

max_{‖x‖_∞ ≤ 1} ‖Ax‖_1

is attained on the boundary ‖x‖_∞ = 1. Also,

‖A(−x)‖_1 = ‖−Ax‖_1 = ‖Ax‖_1

so it is sufficient to consider only the following two cases.

Case: |x1| ≤ 1, x2 = 1.
‖Ax‖_1 = |x1 + 1| + |x1 − 1| = x1 + 1 + 1 − x1 = 2

Case: x1 = 1, |x2| ≤ 1.
‖Ax‖_1 = |1 + x2| + |1 − x2| = 2

Consequently,

‖A‖_{∞,1} = 2 < Σ_{i,j} |A_ij| = 4
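A brute-force numerical check of this example: since ‖Ax‖_1 is piecewise linear in x, its maximum over the ∞-unit ball is attained at a vertex (±1, ±1), so checking the four vertices suffices.

import itertools
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0]])
vertices = [np.array(v) for v in itertools.product([-1.0, 1.0], repeat=2)]
norm_inf_1 = max(np.sum(np.abs(A @ v)) for v in vertices)
print(norm_inf_1, np.sum(np.abs(A)))      # 2.0 versus 4.0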
Exercise 5.6.5 Let A : (IR^2, ‖·‖_a) → (IR^2, ‖·‖_b) and B : (IR^2, ‖·‖_a) → (IR^2, ‖·‖_b), where a, b = 1, 2, ∞, are linear operators represented by the matrices

A = [ 2   1        B = [ 4   2
      3  −2 ],           2   1 ]

Determine ‖A‖ and ‖B‖ for all choices of a and b.

Except for the cases ‖A‖_{∞,1}, ‖B‖_{∞,1}, this is a simple exercise enforcing the use of formulas discussed in Examples 5.6.3 and 5.6.2. For the choice of the ∞-norm in the domain and the 1-norm in the range, we can use the same technique as in Exercise 5.6.4. Restricting ourselves to the boundary of the unit ball in (IR^2, ‖·‖_∞), we obtain for matrix A,

Case: |x1| ≤ 1, x2 = 1.

‖Ax‖_1 = |2x1 + 1| + |3x1 − 2|
       = −2x1 − 1 − 3x1 + 2 = −5x1 + 1,   x1 ∈ [−1, −1/2]
       = 2x1 + 1 − 3x1 + 2 = −x1 + 3,     x1 ∈ [−1/2, 2/3]
       = 2x1 + 1 + 3x1 − 2 = 5x1 − 1,     x1 ∈ [2/3, 1]

As the function is piecewise linear, the maximum must be attained at one of the points −1, −1/2, 2/3, 1; in this case at x1 = −1, and it equals 6.

Case: x1 = 1, |x2| ≤ 1.

‖Ax‖_1 = |2 + x2| + |3 − 2x2| = 2 + x2 + 3 − 2x2 = 5 − x2,   x2 ∈ [−1, 1]

and the maximum equals 6 again.

Consequently, ‖A‖_{∞,1} = 6. Norm ‖B‖_{∞,1} is computed in an analogous way.
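A rough numerical cross-check for all choices of a and b: sampling random points on the a-unit sphere yields lower bounds that should approach the exact operator norms (e.g., 6 for ‖A‖_{∞,1} as derived above). The sampling scheme is a heuristic assumption, not a proof.

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [3.0, -2.0]])
B = np.array([[4.0, 2.0], [2.0, 1.0]])

def op_norm(M, a, b, trials=200000):
    x = rng.standard_normal((trials, 2))
    x /= np.linalg.norm(x, a, axis=1, keepdims=True)   # points on the a-unit sphere
    return np.max(np.linalg.norm(x @ M.T, b, axis=1))  # lower bound for ||M||_{a,b}

for M, name in ((A, "A"), (B, "B")):
    for a in (1, 2, np.inf):
        for b in (1, 2, np.inf):
            print(name, a, b, round(op_norm(M, a, b), 3))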
Exercise 5.6.6 Let A be an invertible matrix in IR^n × IR^n, and µ_i > 0 denote its singular values (see Example 5.6.2). Show that, with ‖A‖ calculated with respect to the Euclidean norm in IR^n,

‖A^{-1}‖ = 1 / min_{1≤i≤n} µ_i

It is sufficient to notice that if λ_i's are the eigenvalues of A^T A, i.e.,

A^T A x = λ x

then λ_i^{-1}'s are the eigenvalues of (A^{-1})^T A^{-1}. Indeed, premultiplying the equation above by A^{-1}(A^T)^{-1}, we obtain,

x = λ A^{-1}(A^{-1})^T x

For an invertible matrix A, all singular values must be positive so, dividing by λ, we get

A^{-1}(A^{-1})^T x = (1/λ) x

Premultiplying both sides by (A^{-1})^T, we obtain

(A^{-1})^T A^{-1} (A^{-1})^T x = (1/λ) (A^{-1})^T x

which proves that λ_i^{-1}'s are eigenvalues of (A^{-1})^T A^{-1} (with corresponding eigenvectors (A^{-1})^T x). Finally, for µ_i^2 = λ_i,

max_{1≤i≤n} µ_i^{-1} = ( min_{1≤i≤n} µ_i )^{-1}
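A numerical check of the identity on a randomly generated invertible matrix; the matrix itself is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
sigma = np.linalg.svd(A, compute_uv=False)      # singular values of A
print(np.linalg.norm(np.linalg.inv(A), 2))      # spectral norm of A^{-1}
print(1.0 / sigma.min())                        # 1 / smallest singular value of A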
The Space of Continuous Linear Operators
Exercises
Exercise 5.7.1 Show that the integral operator defined by

Au(x) = ∫_0^1 K(x, y) u(y) dy

where K(x, y) is a function continuous on the square Ω = {(x, y) ∈ IR^2 : 0 ≤ x, y ≤ 1}, is continuous on C[0, 1] endowed with the supremum norm, i.e., ‖u‖_∞ = sup_{x∈[0,1]} |u(x)|.

We have

|Au(x)| ≤ ∫_0^1 |K(x, y)| |u(y)| dy ≤ ( ∫_0^1 |K(x, y)| dy ) ‖u‖_∞

Consequently, taking the supremum over x,

‖Au‖_∞ = sup_{x∈[0,1]} |Au(x)| ≤ ( sup_{x∈[0,1]} ∫_0^1 |K(x, y)| dy ) ‖u‖_∞

where the supremum on the right-hand side is bounded. Indeed, by the Weierstrass Theorem, |K(x, y)| attains a maximum on the closed unit square, so

|K(x, y)| ≤ M < ∞

and

sup_{x∈[0,1]} ∫_0^1 |K(x, y)| dy ≤ M
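A numerical illustration of the bound ‖Au‖_∞ ≤ (sup_x ∫_0^1 |K(x, y)| dy) ‖u‖_∞ for one concrete continuous kernel; the kernel, grid and test function are assumptions made only for this sketch.

import numpy as np

x = np.linspace(0.0, 1.0, 401)
y = np.linspace(0.0, 1.0, 401)
K = np.cos(np.outer(x, y))                    # a continuous kernel on the unit square
u = np.sin(2 * np.pi * y)                     # a test function with ||u||_inf = 1

Au = np.trapz(K * u, y, axis=1)               # (Au)(x) = int_0^1 K(x,y) u(y) dy
bound = np.max(np.trapz(np.abs(K), y, axis=1))
print(np.max(np.abs(Au)), "<=", bound * np.max(np.abs(u)))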
Exercise 5.7.2 Let U and V be two arbitrary topological vector spaces. Show that a linear operator A : U →
V is continuous iff it is continuous at 0.
Recall that in a topological vector space U , if B0U is a base of neighborhoods of 0, then u + B0U is a
base of neighborhoods of u. Let Au = v; we need to show that
A(u + B0^U) ≽ v + B0^V
where B0^V is a base of neighborhoods of 0 in space V. Using the linearity of A,
A(u + B0^U) = Au + A(B0^U) = v + A(B0^U)
and its continuity at 0,
A(B0^U) ≽ B0^V
we obtain the desired result.
Exercise 5.7.3 Discuss why, for linear mappings, continuity and uniform continuity are equivalent concepts.
Let (U, ‖·‖_U), (V, ‖·‖_V) be two normed spaces. According to Proposition 5.6.1,
A ∈ L(U, V) ⇔ ‖Au‖_V ≤ ‖A‖ ‖u‖_U
Consequently,
‖Au1 − Au2‖_V = ‖A(u1 − u2)‖_V ≤ ‖A‖ ‖u1 − u2‖_U
i.e., A is Lipschitz and, therefore, uniformly continuous.
Exercise 5.7.4 Show that the null space N (A) of any continuous linear operator A ∈ L(U, V ) is a closed
linear subspace of U .
This is a trivial observation. The null space is the inverse image of (the singleton of) the zero vector,
i.e.,
N (A) = A−1 ({0})
Since the singleton {0} is closed, and the inverse image of a closed set through a continuous map is
closed, the result follows.
5.8
Uniform Boundedness and Banach-Steinhaus Theorems
5.9
The Open Mapping Theorem
Exercises
Exercise 5.9.1 Construct an example of a continuous function from IR into IR which is not open.
Take, e.g.,
f(x) = sin x
Then, for the open set G = (0, 3π), f(G) = [−1, 1] is not an open set in IR.
Closed Operators
5.10
Closed Operators, Closed Graph Theorem
Exercises
Exercise 5.10.1 Let A be a closed linear operator from D(A) ⊂ U into V , where U and V are Banach
spaces. Show that the vector space (D(A), � · �A ) where �u�A = �u�U + �Au�V (the so-called
operator norm on D(A)) is Banach.
Conditions for a norm are easily verified. To prove completeness, consider a sequence un ∈ D(A) that is Cauchy in the operator norm, i.e.,
∀ε > 0 ∃N : n, m ≥ N ⇒ ‖un − um‖_U + ‖A(un − um)‖_V < ε
This implies that both un is Cauchy in U and Aun is Cauchy in V. By completeness of U and V, un → u, Aun → v, for some u ∈ U, v ∈ V. But operator A is closed, so (u, v) ∈ G(A), which in turn implies that u ∈ D(A) and v = Au. Consequently, ‖un − u‖_A = ‖un − u‖_U + ‖Aun − Au‖_V → 0, i.e., un converges to u in (D(A), ‖·‖_A).
5.11
Example of a Closed Operator
Exercises
Exercise 5.11.1 Let Ω ⊂ IR^n be an open set. Prove that the following conditions are equivalent to each other.
(i) For every point x ∈ Ω, there exists a ball B = B(x, ε_x) ⊂ Ω such that u|_B ∈ L^1(B).
(ii) For every compact subset K ⊂ Ω, u|_K ∈ L^1(K).

(ii) ⇒ (i). Take a ball B = B(x, ε_x) whose closure B̄ is contained in Ω and observe that ∫_B |u| ≤ ∫_{B̄} |u| < ∞.
(i) ⇒ (ii). Let K be a compact subset of Ω. Use the neighborhoods from (i) to form an open cover of K,
K ⊂ ∪_{x∈K} B(x, ε_x)
By compactness of K, there exists a finite number of points x_i ∈ K, i = 1, ..., N, such that
K ⊂ ∪_{i=1}^N B(x_i, ε_{x_i})
But then
∫_K |u| ≤ Σ_{i=1}^N ∫_{B(x_i, ε_{x_i})} |u| < ∞
Exercise 5.11.2 Consider X = L^2(0, 1) and define a linear operator Tu = u′, D(T) = C^∞([0, 1]) ⊂ L^2(0, 1). Show that T is closable. Can you suggest what would be the closure of T?

We shall use Proposition 5.10.3. Assume un → 0 and Tun = un′ → v, where the convergence is understood in the L^2-sense. We need to show that v = 0. Let φ ∈ D(0, 1). Then,

− ∫_0^1 un φ′ = ∫_0^1 un′ φ

L^2-convergence of un to zero implies that the left-hand side converges to zero. At the same time, L^2-convergence of un′ to v implies that the right-hand side converges to ∫_0^1 vφ. Consequently,

∫_0^1 v φ = 0,   ∀φ ∈ D(0, 1)

Thus, density of test functions in L^2(0, 1) implies that v must be zero and, by Proposition 5.10.3, the operator is closable.
The closure of the operator is the distributional derivative defined on the Sobolev space,

L^2(0, 1) ⊃ H^1(0, 1) ∋ u → u′ ∈ L^2(0, 1)
Exercise 5.11.3 Show that the Sobolev space W^{m,p}(Ω) is a normed space.

All three conditions for a norm are easily verified.

Case: 1 ≤ p < ∞.

Positive definiteness.
‖u‖_{W^{m,p}(Ω)} = 0 ⇒ ‖u‖_{L^p(Ω)} = 0 ⇒ u = 0  (in the L^p-sense, i.e., u = 0 a.e.)

Homogeneity.
‖λu‖_{W^{m,p}(Ω)} = ( Σ_{|α|≤m} ‖D^α(λu)‖^p_{L^p(Ω)} )^{1/p} = ( Σ_{|α|≤m} |λ|^p ‖D^α u‖^p_{L^p(Ω)} )^{1/p} = |λ| ‖u‖_{W^{m,p}(Ω)}

Triangle inequality.
‖u + v‖_{W^{m,p}(Ω)} = ( Σ_{|α|≤m} ‖D^α(u + v)‖^p_{L^p(Ω)} )^{1/p}
≤ ( Σ_{|α|≤m} ( ‖D^α u‖_{L^p(Ω)} + ‖D^α v‖_{L^p(Ω)} )^p )^{1/p}   (triangle inequality in L^p(Ω))
≤ ( Σ_{|α|≤m} ‖D^α u‖^p_{L^p(Ω)} )^{1/p} + ( Σ_{|α|≤m} ‖D^α v‖^p_{L^p(Ω)} )^{1/p}   (triangle inequality for the p-norm in IR^N)

where, in the last step, N equals the number of partial derivatives, i.e., N = #{α : |α| ≤ m}.

Case: p = ∞ is proved analogously.
Topological Duals, Weak Compactness
5.12
Examples of Dual Spaces, Representation Theorem for Topological Duals of
Lp -Spaces
Exercises
Exercise 5.12.1 Let Ω ⊂ IR^N be a bounded set, and fix 1 ≤ p < ∞. Prove that, for every r such that p < r ≤ ∞, L^r(Ω) is dense in L^p(Ω).
Hint: For an arbitrary u ∈ L^p(Ω) define
u_n(x) = u(x) if |u(x)| ≤ n,   n sgn u(x) otherwise
Show that
1. u_n ∈ L^r(Ω) and
2. ‖u_n − u‖_p → 0.

1. By construction, |u_n| ≤ n, so
∫_Ω |u_n|^r ≤ n^r meas(Ω) < ∞
(and, for r = ∞, ‖u_n‖_∞ ≤ n).
2. From the definition of u_n, it follows that u − u_n → 0 pointwise and
|u − u_n|^p ≤ |u|^p
Since ∫|u|^p < ∞, the Lebesgue Dominated Convergence Theorem implies that
∫_Ω |u − u_n|^p → 0
Exercise 5.12.2 Consider IR^n equipped with the p-norm,

‖x‖_p = ( Σ_{i=1}^n |x_i|^p )^{1/p},  1 ≤ p < ∞;   ‖x‖_∞ = max_{1≤i≤n} |x_i|

Prove that

sup_{‖y‖_p = 1} Σ_{i=1}^n x_i y_i = ‖x‖_q

where 1/p + 1/q = 1. Explain why the result implies that the topological dual of (IR^n, ‖·‖_p) is isometric with (IR^n, ‖·‖_q).

Case: 1 < p < ∞. The supremum is attained and the maximum can be determined using Lagrange multipliers. Introduce the Lagrangian,

L(y, λ) = Σ_{i=1}^n x_i y_i − (λ/p) ( Σ_{i=1}^n |y_i|^p − 1 )

Differentiating with respect to y_i, we get

x_i = λ |y_i|^{p−1} sgn y_i

The value of the Lagrange multiplier λ at the maximizer y equals the value of the functional being maximized. Indeed, multiplying the equation above by y_i and summing up in i, we obtain

Σ_{i=1}^n x_i y_i = λ Σ_{i=1}^n |y_i|^{p−1} sgn y_i · y_i = λ Σ_{i=1}^n |y_i|^p = λ

Notice that λ > 0 at the maximizer. In order to determine the supremum, it is thus sufficient to compute λ. Solving for y_i and requesting y to be of unit norm, we get

Σ_{i=1}^n |x_i|^{p/(p−1)} = λ^{p/(p−1)} Σ_{i=1}^n |y_i|^p = λ^{p/(p−1)}

Consequently,

λ = ( Σ_{i=1}^n |x_i|^{p/(p−1)} )^{(p−1)/p} = ‖x‖_q

Case: p = 1. We have,

| Σ_{i=1}^n x_i y_i | ≤ Σ_{i=1}^n |x_i y_i| ≤ max_{1≤i≤n} |x_i| Σ_{i=1}^n |y_i| = max_{1≤i≤n} |x_i| = ‖x‖_∞

The bound is attained. Indeed, let i0 be an index such that

|x_{i0}| = max_{1≤i≤n} |x_i|

and select y_i = sgn x_{i0} δ_{i,i0}. Then ‖y‖_1 = 1, and

Σ_{i=1}^n x_i y_i = |x_{i0}|

Case: p = ∞. We have,

| Σ_{i=1}^n x_i y_i | ≤ Σ_{i=1}^n |x_i y_i| ≤ max_{1≤i≤n} |y_i| Σ_{i=1}^n |x_i| = Σ_{i=1}^n |x_i| = ‖x‖_1

The bound is attained. Indeed, select y_i = sgn x_i. Then ‖y‖_∞ = 1, and

Σ_{i=1}^n x_i y_i = Σ_{i=1}^n |x_i| = ‖x‖_1

Finally, recall that every linear functional defined on a finite-dimensional vector space equipped with a norm is automatically continuous. The topological dual thus coincides with the algebraic dual. Given a linear functional f on IR^n, and recalling the canonical basis e_i, i = 1, ..., n, we have the standard representation formula,

f(y) = f( Σ_{i=1}^n y_i e_i ) = Σ_{i=1}^n f(e_i) y_i = Σ_{i=1}^n f_i y_i,   f_i := f(e_i)

The map,

IR^n ∋ (f_1, ..., f_n) → { IR^n ∋ y → Σ_{i=1}^n f_i y_i ∈ IR } ∈ (IR^n)* = (IR^n)′

is a linear isomorphism and, by the first part of this exercise, it is an isometry if the space for f is equipped with the q-norm.
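A numerical check of the case 1 < p < ∞: the maximizer constructed above, y_i proportional to sgn(x_i)|x_i|^{q−1} and normalized in the p-norm, attains the value ‖x‖_q. The data below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(6)
p = 3.0
q = p / (p - 1.0)

y = np.sign(x) * np.abs(x) ** (q - 1)
y /= np.linalg.norm(y, p)                     # normalize to the p-unit sphere
print(np.dot(x, y), np.linalg.norm(x, q))     # the two numbers coincide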
Exercise 5.12.3 Prove Theorem 5.12.2.

Let u = (u_1, u_2, ...) ∈ ℓ^p, 1 ≤ p < ∞, and let e_i = (0, ..., 1^{(i)}, ...). Define

u^N = Σ_{i=1}^N u_i e_i = (u_1, ..., u_N, 0, 0, ...)

It follows from the definition of the ℓ^p-spaces that the tail of the sequence converges to zero,

‖u − u^N‖_{ℓ^p} = ( Σ_{i=N+1}^∞ |u_i|^p )^{1/p} → 0   as N → ∞

Notice that the argument breaks down for p = ∞.

Let f ∈ (ℓ^p)′. Set φ_i = f(e_i). Then

Σ_{i=1}^∞ φ_i u_i := lim_{N→∞} Σ_{i=1}^N φ_i u_i = lim_{N→∞} f(u^N) = f(u)

The Hölder inequality implies that

|f(u)| ≤ ‖φ‖_{ℓ^q} ‖u‖_{ℓ^p}

We will show that the bound equals the supremum.

Case: p = 1. Let i_j be a sequence of indices such that

|φ_{i_j}| → sup_i |φ_i| = ‖φ‖_∞,  as j → ∞

Take v^j = sgn φ_{i_j} e_{i_j}. Then ‖v^j‖_{ℓ^1} = 1, and

f(v^j) = |φ_{i_j}| → ‖φ‖_∞

Case: 1 < p < ∞. Use the choice derived in Exercise 5.12.2 for the finite-dimensional case,

u_i = |φ_i|^{1/(p−1)} sgn φ_i / ( Σ_{i=1}^∞ |φ_i|^{p/(p−1)} )^{1/p}

Then ‖u‖_{ℓ^p} = 1, and

f(u) = Σ_{i=1}^∞ φ_i u_i = Σ_{i=1}^∞ |φ_i|^{p/(p−1)} / ( Σ_{i=1}^∞ |φ_i|^{p/(p−1)} )^{1/p} = ‖φ‖_{ℓ^q}
Exercise 5.12.4 Generalize the Representation Theorem for Lp Spaces (Theorem 5.12.1) to the case of an arbitrary (measurable) set Ω. Hint: Consider a sequence of truncated domains,

Ω_n = Ω ∩ B(0, n),

use Theorem 5.12.1 on Ω_n to conclude the existence of φ_n ∈ L^q(Ω_n), and investigate convergence of φ_n.

Let φ_n ∈ L^q(Ω_n) be such that,

∫_{Ω_n} φ_n v = f(ṽ),   ∀v ∈ L^p(Ω_n),

where ṽ is the zero extension of v. For m > n and v ∈ L^p(Ω_n),

∫_{Ω_n} φ_m v = f(ṽ) = ∫_{Ω_n} φ_n v.

Restriction φ_m|_{Ω_n} lives in L^q(Ω_n) so, by uniqueness of φ in the Representation Theorem, φ_m is an extension of φ_n. Let φ(x) be the (trivial) pointwise limit of φ̃_n(x). For p > 1, Lemma 3.5.1 implies that

∫_Ω |φ̃_n|^q → ∫_Ω |φ|^q.

At the same time, the norms ‖φ_n‖_{L^q(Ω_n)} are uniformly bounded as, by the Representation Theorem,

‖φ_n‖_{L^q(Ω_n)} = ‖f|_{L^p(Ω_n)}‖_{(L^p(Ω_n))′} ≤ ‖f‖_{(L^p(Ω))′},

which proves that ‖φ‖_{L^q(Ω)} is finite (case q = ∞ included).
Let now v ∈ L^p(Ω), and let v_n be its restriction to Ω_n. By the Lebesgue Dominated Convergence Theorem, ṽ_n → v in L^p(Ω). Consequently, the L^p functions with support in Ω_n, n = 1, 2, ..., are dense in L^p(Ω). Therefore, the identity

∫_Ω φ v = ∫_{Ω_n} φ_n v = f(v),

true for any v ∈ L^p(Ω) with support in Ω_n, can be extended to arbitrary v ∈ L^p(Ω).
Exercise 5.12.5 Let Ω ⊂ IR^n be an open set and f : Ω → IR a measurable function defined on Ω. Prove that the following conditions are equivalent to each other:
1. For every x ∈ Ω there exists a neighborhood N(x) of x (e.g., a ball B(x, ε) with some ε = ε(x) > 0) such that
∫_{N(x)} |f| dx < +∞
2. For every compact K ⊂ Ω,
∫_K |f| dx < +∞
Functions of this type are called locally integrable and form a vector space, denoted L^1_loc(Ω).

Closed balls contained in Ω are compact, so the second condition trivially implies the first. To show the converse, consider a compact set K and the corresponding open cover consisting of the balls present in the first condition,

K ⊂ ∪_{x∈K} B(x, ε_x)

As the set is compact, there must exist a finite subcover, i.e., a collection of points x_1, ..., x_N ∈ K such that

K ⊂ B(x_1, ε_{x_1}) ∪ ... ∪ B(x_N, ε_{x_N})

Standard properties of the Lebesgue integral imply

∫_K |f| dx ≤ ∫_{B(x_1, ε_{x_1})} |f| dx + ... + ∫_{B(x_N, ε_{x_N})} |f| dx < ∞

which proves the second assertion.
Exercise 5.12.6 Consider the set B defined in (5.1). Prove that B is balanced, convex, absorbing, and B_i ⊂ B, for each i = 1, 2, ....

• B is balanced as, for every |α| ≤ 1 and Σ_{i∈I_0} φ_i ∈ B,

α Σ_{i∈I_0} φ_i = Σ_{i∈I_0} α φ_i ∈ B

since all B_i's are balanced.

• B is convex. Indeed, let α ∈ [0, 1] and Σ_{i∈I_1} φ_i, Σ_{i∈I_2} ψ_i ∈ B. Set J_0 = I_1 ∩ I_2, J_1 = I_1 − J_0, J_2 = I_2 − J_0. Then

α Σ_{i∈I_1} φ_i + (1 − α) Σ_{i∈I_2} ψ_i = Σ_{j∈J_0} ( αφ_j + (1 − α)ψ_j ) + Σ_{j∈J_1} αφ_j + Σ_{j∈J_2} (1 − α)ψ_j ∈ B

since the B_i's are convex and balanced.

• B is absorbing. Indeed, for every φ ∈ D(Ω), there exists i such that φ ∈ D(K_i), and B_i ⊂ D(K_i) is absorbing.

• The last condition is satisfied by construction.
Exercise 5.12.7 Let q be a linear functional on D(K). Prove that q is sequentially continuous iff there exist constants C_K > 0 and k ≥ 0 such that
|q(φ)| ≤ C_K sup_{|α|≤k} sup_{x∈K} |D^α φ(x)|,   ∀φ ∈ D(K)
Proof is a direct consequence of the definition of topology in D(K) and Exercise 5.2.6.
Exercise 5.12.8 Prove that the regular distributions and the Dirac delta functional defined in the text are
continuous on D(Ω).
It is sufficient to use the criterion discussed in the text. Let f ∈ L^1_loc(Ω), and K ⊂ Ω be an arbitrary compact set. Then, for φ ∈ D(K),

| ∫_Ω f φ dx | = | ∫_K f φ dx | ≤ ( ∫_K |f| dx ) sup_{x∈K} |φ(x)|

where the integral ∫_K |f| is finite.
Similarly, for the delta functional δ_{x_0} and a compact set K, for every φ ∈ D(K),

| <δ_{x_0}, φ> | = |φ(x_0)| ≤ sup_{x∈K} |φ(x)|
Exercise 5.12.9 Consider function u : (0, 1) → IR of the form

u(x) = u_1(x) for 0 < x ≤ x_0,   u_2(x) for x_0 < x ≤ 1,   where x_0 ∈ (0, 1)

Here u_1 and u_2 are C^1 functions (see Example 5.11.1), but the global function u is not necessarily continuous at x_0. Follow the lines of Example 5.11.1 to prove that the distributional derivative of the regular distribution q_u corresponding to u is given by the formula

(q_u)′ = q_{u′} + [u(x_0)] δ_{x_0}

where u′ is the union of the two branches, derivatives u_1′ and u_2′ (see Example 5.11.1), δ_{x_0} is the Dirac delta functional at x_0, and [u(x_0)] denotes the jump of u at x_0,

[u(x_0)] = u_2(x_0) − u_1(x_0)

It is sufficient to interpret the result obtained in the text,

<(q_u)′, φ> := − <q_u, φ′>
= − ∫_0^1 u φ′ dx = − ∫_0^{x_0} u_1 φ′ dx − ∫_{x_0}^1 u_2 φ′ dx
= ∫_0^{x_0} u_1′ φ dx + ∫_{x_0}^1 u_2′ φ dx + [u_2(x_0) − u_1(x_0)] φ(x_0)
= ∫_0^1 u′ φ dx + [u(x_0)] <δ_{x_0}, φ>
= <q_{u′} + [u(x_0)] δ_{x_0}, φ>
5.13
Bidual, Reflexive Spaces
Exercises
Exercise 5.13.1 Explain why every finite-dimensional space is reflexive.
Recall the discussion from Chapter 2. As the evaluation map is injective and the bidual space is of the same dimension as the original space, the evaluation map must be surjective.

Exercise 5.13.2 Let W^{m,p}(Ω) be a Sobolev space for Ω, a smooth domain in IR^n. The closure in W^{m,p}(Ω) of the test functions C_0^∞(Ω) (with respect to the W^{m,p} norm), denoted by W_0^{m,p}(Ω),
W_0^{m,p}(Ω) = closure of C_0^∞(Ω) in W^{m,p}(Ω)
may be identified as a collection of all "functions" from W^{m,p}(Ω) which "vanish" on the boundary together with their derivatives up to order m − 1 (this is a very nontrivial result based on Lions' Trace Theorem; see [8, 10]). The duals of the spaces W_0^{m,p}(Ω) are the so-called negative Sobolev spaces
W^{−m,p}(Ω) := (W_0^{m,p}(Ω))′,   m > 0
Explain why both W_0^{m,p}(Ω) and W^{−m,p}(Ω), for 1 < p < ∞, are reflexive.
This is a simple consequence of Proposition 5.13.1. Space W_0^{m,p}(Ω), as a closed subspace of a reflexive space, must be reflexive, and W^{−m,p}(Ω), as the dual of a reflexive space, is reflexive as well.
5.14
Weak Topologies, Weak Sequential Compactness
Exercises
Exercise 5.14.1 Prove Proposition 5.14.1.
All properties are a direct consequence of the definitions.
Exercise 5.14.2 Let U and V be two normed spaces. Prove that if a linear transformation T ∈ L(U, V ) is
strongly continuous, then it is automatically weakly continuous, i.e., continuous with respect to weak
topologies in U and V .
Hint: Prove first the following:
Lemma: Let X be an arbitrary topological vector space, and Y be a normed space. Let T ∈ L(X, Y ).
The following conditions are equivalent to each other.
(i) T : X → Y (with weak topology) is continuous
(ii) f ∘ T : X → IR (or IC) is continuous, ∀f ∈ Y′
Follow then the discussion in the section about strongly and weakly continuous linear functionals.

(i) ⇒ (ii). Any linear functional f ∈ Y′ is also continuous on Y with the weak topology. Composition of two continuous functions is continuous.
(ii) ⇒ (i). Take an arbitrary B(I_0, ε), where I_0 is a finite subset of Y′. By (ii),
∀g ∈ I_0 ∃B_g, a neighborhood of 0 in X : u ∈ B_g ⇒ |g(T(u))| < ε
It follows from the definition of a filter of neighborhoods that
B = ∩_{g∈I_0} B_g
is also a neighborhood of 0. Consequently,
u ∈ B ⇒ |g(T(u))| < ε, ∀g ∈ I_0 ⇒ Tu ∈ B(I_0, ε)
To conclude the final result, it is sufficient now to show that, for any g ∈ Y′,
g ∘ T : X (with weak topology) → IR
is continuous. But g ∘ T, as a composition of continuous functions, is a strongly continuous linear functional and, consequently, it is continuous in the weak topology as well (compare the discussion in the text).
Exercise 5.14.3 Consider space c_0 containing infinite sequences of real numbers converging to zero, equipped with the ℓ^∞-norm,

c_0 := {x = {x_n} : x_n → 0},   ‖x‖ = sup_i |x_i|

Show that
(a) c_0′ = ℓ^1
(b) c_0′′ = ℓ^∞
(c) If e_n = (0, ..., 1^{(n)}, ...) then e_n → 0 weakly* but it does not converge to zero weakly.

(a) We follow the same reasoning as in Exercise 5.12.3. Define

x^N = Σ_{i=1}^N x_i e_i = (x_1, ..., x_N, 0, ...)

It follows from the definition of the c_0-space that

‖x − x^N‖ = sup_{i>N} |x_i| → 0

Let f ∈ c_0′ and set φ_i = f(e_i). Then

Σ_{i=1}^∞ φ_i x_i := lim_{N→∞} Σ_{i=1}^N φ_i x_i = lim_{N→∞} f(x^N) = f(x)

Consequently,

|f(x)| ≤ ‖φ‖_{ℓ^1} ‖x‖

In order to show that the bound equals the supremum, it is sufficient to take the sequence of vectors

x^N = (sgn φ_1, ..., sgn φ_N, 0, ...) ∈ c_0

Then

f(x^N) = Σ_{i=1}^N |φ_i| → Σ_{i=1}^∞ |φ_i|

(b) This follows from (ℓ^1)′ = ℓ^∞.

(c) We have

<e^N, x>_{ℓ^1 × c_0} = x_N → 0,   ∀x ∈ c_0

but

<φ, e^N>_{ℓ^∞ × ℓ^1} = 1 ↛ 0

for φ = (1, 1, ...) ∈ ℓ^∞.
Exercise 5.14.4 Let U and V be normed spaces, and let either U or V be reflexive. Prove that every operator
A ∈ L(U, V ) has the property that A maps bounded sequences in U into sequences having weakly
convergent subsequences in V .
Case: V reflexive. Map A maps a bounded sequence in U into a bounded sequence in V . In turn, any
bounded sequence in reflexive space V has a weakly convergent subsequence.
Case: U reflexive. Any bounded sequence un in reflexive space U has a weakly convergent subsequence unk . As A is also weakly continuous ( Recall Exercise 5.14.2 ), it follows that Aunk is weakly
convergent in V .
Exercise 5.14.5 In numerical analysis, one is often faced with the problem of approximating an integral of a given continuous function f ∈ C[0, 1] by using some sort of numerical quadrature formula. For instance, we might introduce in [0, 1] a sequence of integration points

0 ≤ x_1^n < x_2^n < ... < x_j^n < ... < x_n^n ≤ 1,   n = 1, 2, ...

and set

Q_n(f) := Σ_{k=1}^n a_k^n f(x_k^n) ≈ ∫_0^1 f(x) dx

where the coefficients a_k^n satisfy the condition

Σ_{k=1}^n |a_k^n| < M,   ∀n ≥ 1

Suppose that the quadrature rule Q_n(f) integrates polynomials p(x) of degree n − 1 exactly; i.e.,

Q_n(p) = ∫_0^1 p(x) dx

(a) Show that, for every f ∈ C[0, 1],

lim_{n→∞} ( Q_n(f) − ∫_0^1 f(x) dx ) = 0

(b) Characterize the type of convergence this limit defines in terms of convergence in the space C[0, 1] (equipped with the Chebyshev norm).

(a) We start with a simple abstract result.
Lemma. Let U be a normed space, X a dense subspace of U, and f_n ∈ U′ a uniformly bounded sequence of continuous linear functionals on U, i.e., ‖f_n‖_{U′} ≤ M, for some M > 0. Assume that the sequence converges to zero on X, f_n(x) → 0, ∀x ∈ X. Then, the sequence converges to zero on the entire space,

f_n(u) → 0, ∀u ∈ U

The proof follows from the simple inequality,

|f_n(u)| = |f_n(u − x) + f_n(x)| ≤ ‖f_n‖ ‖u − x‖ + |f_n(x)| ≤ M ‖u − x‖ + |f_n(x)|

Given u ∈ U and ε > 0, select x ∈ X such that ‖u − x‖ < ε/(2M), and then N such that |f_n(x)| < ε/2 for n ≥ N. It follows from the inequality above that, for n ≥ N, |f_n(u)| < ε.
According to the Weierstrass Theorem on polynomial approximation of continuous functions, polynomials are dense in the space C([0, 1]). The functionals

C([0, 1]) ∋ f → Q_n(f) − ∫_0^1 f(x) dx ∈ IR

converge to zero for every polynomial f (for n exceeding the degree of polynomial f, they vanish on f). At the same time, the condition on the coefficients a_k^n implies that the functionals are uniformly bounded. Consequently, by the lemma above,

Q_n(f) − ∫_0^1 f(x) dx → 0

for any f ∈ C([0, 1]).

(b) The sequence of functionals converges to zero in the weak* topology of (C[0, 1])′.
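A numerical illustration using Gauss–Legendre rules mapped to [0, 1]: their weights are positive and sum to 1 (so the uniform bound holds) and they integrate polynomials of degree 2n − 1 exactly, hence Q_n(f) → ∫_0^1 f for every continuous f. The test function is an assumption made for this sketch.

import numpy as np

def Q(n, f):
    t, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1, 1]
    x, a = 0.5 * (t + 1.0), 0.5 * w             # mapped to [0, 1]
    return np.sum(a * f(x))

f = lambda x: np.exp(-x) * np.cos(5 * x)        # a non-polynomial continuous function
exact = (np.exp(-1) * (5 * np.sin(5) - np.cos(5)) + 1.0) / 26.0   # int_0^1 f(x) dx
for n in (2, 4, 8, 16):
    print(n, abs(Q(n, f) - exact))              # errors decay rapidly to zero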
5.15
Compact (Completely Continuous) Operators
Exercises
Exercise 5.15.1 Let T : U → V be a linear continuous operator from a normed space U into a reflexive
Banach space V . Show that T is weakly sequentially compact, i.e., it maps bounded sets in U into sets
whose closures are weakly sequentially compact in V .
A is bounded in U ⇒ the closure of T(A) is weakly sequentially compact in V.
This is a simple consequence of the fact that bounded sets in a reflexive Banach space are weakly
sequentially compact.
Exercise 5.15.2 Let U and V be normed spaces. Prove that a linear operator T : U → V is compact iff
T (B) is precompact in V for B - the unit ball in U .
Assume T is linear and maps the unit ball B in U into a precompact set in V. Let C be an arbitrary bounded set in U,
‖u‖_U ≤ M,   ∀u ∈ C
Set M^{-1}C is then a subset of the unit ball B and, consequently, M^{-1}T(C) is a subset of T(B). Thus the closure of M^{-1}T(C), as a closed subset of the compact closure of T(B), is compact as well. Finally, since multiplication by a non-zero constant is a homeomorphism, the closure of T(C) is compact as well. Conversely, if T is compact then, since the unit ball B is bounded, T(B) is precompact by the definition of a compact operator.
Exercise 5.15.3 Use the Frechet–Kolmogorov Theorem (Theorem 4.9.4) to prove that operator T from Example 5.15.1, with an appropriate condition on kernel K(x, ξ), is a compact operator from L^p(IR) into L^r(IR), 1 ≤ p, r < ∞.

According to Exercise 5.15.2 above, we can restrict ourselves to the unit ball B ⊂ L^p(IR) and seek conditions that would guarantee that T(B) is precompact in L^r(IR). By the Frechet–Kolmogorov Theorem, we need to come up with sufficient conditions on kernel K(y, x) to satisfy the following three conditions.

(i) T(B) is bounded in L^r(IR).
(ii) ∫_IR |Tu(t + s) − Tu(s)|^r ds → 0 as t → 0, uniformly in u ∈ B.
(iii) ∫_{|s|>n} |Tu(s)|^r ds → 0 as n → ∞, uniformly in u ∈ B.

We shall restrict ourselves to a direct use of the Hölder inequality only, with q denoting the conjugate index to p; note that ∫ |u|^p dy ≤ 1 for u ∈ B. We have the following estimates.

(i)
∫_IR | ∫_IR K(y, x) u(y) dy |^r dx ≤ ∫_IR ( ∫_IR |K(y, x)|^q dy )^{r/q} ( ∫_IR |u(y)|^p dy )^{r/p} dx
≤ ∫_IR ( ∫_IR |K(y, x)|^q dy )^{r/q} dx =: A

(ii)
∫_IR | ∫_IR K(y, t + s) u(y) dy − ∫_IR K(y, s) u(y) dy |^r ds = ∫_IR | ∫_IR [K(y, t + s) − K(y, s)] u(y) dy |^r ds
≤ ∫_IR ( ∫_IR |K(y, t + s) − K(y, s)|^q dy )^{r/q} ( ∫_IR |u(y)|^p dy )^{r/p} ds
≤ ∫_IR ( ∫_IR |K(y, t + s) − K(y, s)|^q dy )^{r/q} ds =: B(t)

(iii)
∫_{|s|>n} | ∫_IR K(y, s) u(y) dy |^r ds ≤ ∫_{|s|>n} ( ∫_IR |K(y, s)|^q dy )^{r/q} ( ∫_IR |u(y)|^p dy )^{r/p} ds
≤ ∫_{|s|>n} ( ∫_IR |K(y, s)|^q dy )^{r/q} ds =: C(n)

Consequently, if A < ∞, lim_{t→0} B(t) = 0, and lim_{n→∞} C(n) = 0, then operator T is a compact map from L^p(IR) into L^r(IR). In fact, the first condition implies the last one and, if we assume that kernel K(y, x) is continuous a.e., we can use the Lebesgue Dominated Convergence Theorem to show that the second condition is verified as well. Indeed, the integrand in B(t) converges pointwise a.e. to zero as t → 0, and we can construct a dominating function by utilizing convexity of the function x^q for q ≥ 1,

( (1/2)|K(y, t + s)| + (1/2)|K(y, s)| )^q ≤ (1/2)|K(y, t + s)|^q + (1/2)|K(y, s)|^q

which in turn implies

|K(y, t + s) − K(y, s)|^q ≤ 2^{q−1} ( |K(y, t + s)|^q + |K(y, s)|^q )
Closed Range Theorem, Solvability of Linear Equations
5.16
Topological Transpose Operators, Orthogonal Complements
Exercises
Exercise 5.16.1 Prove Proposition 5.16.1(i)–(iv).
The properties follow immediately from the properties of the algebraic transpose operator.
Exercise 5.16.2 Let U, V be two Banach spaces, and let A ∈ L(U, V ) be compact. Show that A� is also
compact. Hint: See Exercise 5.21.2 and recall Arzelà–Ascoli Theorem.
See proof of Lemma 5.21.5.
5.17
Solvability of Linear Equations in Banach Spaces, The Closed Range Theorem
Exercises
Exercise 5.17.1 Let X be a Banach space, and P : X → X be a continuous linear projection, i.e., P 2 = P .
Prove that the range of P is closed.
Let un ∈ R(P), un → u. We need to show that u ∈ R(P) as well. Let vn ∈ X be such that un = P vn. Then P un = P^2 vn = P vn = un. By continuity of P, P un → P u while, at the same time, P un = un → u. By the uniqueness of the limit, u = P u. Consequently, u is the image of itself and must be in the range of the projection.
152
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Exercise 5.17.2 Let X be a Banach space, and M ⊂ X a closed subspace. Consider the map:
ι : X � ⊃ M ⊥ � f → f˜ ∈ (X/M )�
f˜(x + M ) := f (x) .
Here M ⊥ is the orthogonal complement of M ,
M ⊥ = {f ∈ X � : f |M = 0} .
Prove that map ι is an isometric isomorphism. Consequently, (X/M )� and M ⊥ can be identified with
each other.
The map is well-defined, i.e. f (x + m) = f (x), for every m ∈ M . Next,
|f˜(x + M )| = |f (x + m)| ≤ �f �X � �x + m� .
Taking infimum in m ∈ M on the right-hand side, we have,
|f˜(x + M )| ≤ �f �X � �x + M �X/M ,
i.e.
�f˜�(X/M )� ≤ �f �X � .
On the other side,
|f (x)| = |f˜(x + M )| ≤ �f˜�(X/M )� �x + M �X/M ≤ �f˜�(X/M )� �x� ,
so also,
�f �X � ≤ �f˜�(X/M )� .
Map ι is thus an isometry (and, therefore, injective). To show surjectivity, take g ∈ (X/M )� , and set
f (x) = g(x + M ). Then, for any x ∈ M ,
f (x) = g(x + M ) = g(M ) = 0
so, f ∈ M ⊥ and, finally, f˜ = g.
( g is a linear map)
Banach Spaces
5.18
153
Generalization for Closed Operators
Exercises
Exercise 5.18.1 Let X, Y be two normed spaces with Y being complete. Let X be a dense subspace of
X. Let A be a linear and continuous map from X into Y . Prove that operator A admits a unique
continuous extension à to the whole space X that preserves the norm of A. Hint: For x ∈ X, take
xn → x, xn ∈ X , and investigate sequence Axn .
Let x ∈ X and let xn → x, xn ∈ X . Sequence Axn is Cauchy in Y . Indeed,
�Axn − Axm �Y ≤ �A�L(X ,Y ) �xn − xm �
and xn , as a convergent sequence, is Cauchy in X. By completness of Y , sequence Axn has a unique
limit y that can be identified as the value of the extension, Ãx := y = limn→∞ Axn . First of all,
extension à is well defined, i.e. the limit y is independent of xn . Indeed, if we take another sequence
converging to x, say z n → x, then
�Axn − Az n �Y ≤ �A� �xn − z n �Y ≤ �A� (�xn − x�X + �x − z n �X ) → 0
Secondly, if xn → x and z n → z then αxn + βz n → αx + βz and, passing to the limit in:
A(αxn + βz n ) = αAxn + βAz n
we learn that à is linear. Finally, by passing to the limit in
�Axn � ≤ �A�L(X ,Y ) �xn �
we learn that à is continuous and �Ã� ≤ �A� and, therefore, �Ã� = �A�.
Exercise 5.18.2 Discuss in your own words why the original definition of the transpose for a closed operator
and the one discussed in Remark 5.18.1 are equivalent.
The original definition requires the identity:
�y � , Ax� = �x� , x�
∀x ∈ D(A)
for some x� ∈ X � and then sets A� y � := x� . Embedded in the condition is thus the requirement that
A� y � is continuous. Conversely, the more explicitly defined transpose satisfies the identity above with
x� = A� y � .
Exercise 5.18.3 Prove Proposition 5.18.1.
154
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
(i) By definition, y � ∈ D(A�1 ) and x�1 = A�1 y � , if
< y � , A1 x > = < x�1 , x >
∀x ∈ D
Similarly, y � ∈ D(A�2 ) and x�2 = A�2 y � , if
< y � , A2 x > = < x�2 , x >
∀x ∈ D
Let y � ∈ D(A�1 ) ∩ D(A�2 ). Taking a linear combination of the equalities above, we get
< y � , (α1 A1 + α2 A2 )x > = < α1 x�1 + α2 x�2 , x >
∀x ∈ D
Consequently, y � ∈ D((α1 A1 + α2 A2 )� ) and
(α1 A1 + α2 A2 )� y � = α1 x�1 + α2 x�2 = α1 A�1 y � + α2 A�2 y �
(ii) Again, by definition, y � ∈ D(A� ) and x� = A� y � , if
< y � , Ax > = < x� , x >
∀x ∈ D(A)
Similarly, z � ∈ D(B � ) and y � = B � z � , if
< z � , By > = < y � , y >
∀y ∈ D(B)
Taking y = Ax in the second equality and making use of the first one, we get,
< z � , BAx > = < y � , Ax > = < x� , x >
∀x ∈ D(A)
Consequently, z � ∈ D((BA)� ) and
(BA)� z � = x� = B � y � = B � A� z �
(iii) It is sufficient to notice that
< y � , Ax > = < x� , x >
∀x ∈ D(A)
is equivalent to
< y � , y > = < x� , A−1 y >
∀y ∈ D(A−1 )
Banach Spaces
5.19
155
Closed Range Theorem for Closed Operators
Exercises
Exercise 5.19.1 Prove property (5.4).
Let z � ∈ (M + N )⊥ . By definition,
�z � , m + n� = 0
∀ m ∈ M, n ∈ N
Setting n = 0, we have,
�z � , m� = 0 ∀ m ∈ M
i.e. z � ∈ M ⊥ . By the same argument, z � ∈ N ⊥ .
Similarly, if
�z � , m� = 0 ∀ m ∈ M
and
�z � , n� = 0
∀n ∈ N
then, by linearity of z � ,
�z � , m + n� = �z � , m� + �z � , n� = 0
Exercise 5.19.2 Let Z be a Banach space and X ⊂ Z a closed subpace of Z. Let Z = X ⊕ Y for some
Y , and let PX : Z → X be the corresponding projection, PX z = x where z = x + y is the unique
decomposition of z. Prove that projection PX is a closed operator.
Assume
z n = xn + y n
Let
(z n , xn ) = (z n , PX z n ) → (z, x)
Then y n = z n − xn → z − x =: y so, z = x + y. This proves that x = PX z, i.e. (z, x) ∈
graph of PX .
Exercise 5.19.3 Prove the algebraic identities (5.6).
Elementary.
Exercise 5.19.4 Let X, Y be two topological vector spaces. Prove that a set B ⊂ Y is closed in Y if and
only if set X + B is closed in X × Y . Here X is identified with X × {0} ⊂ X × Y and B is identified
with {0} × B ⊂ X × Y .
Elementary. The whole trouble lies in the identification. We have,
X × {0} + {0} × B = X × B
and the assertion follows from the construction of topology in Cartesian product.
156
5.20
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Examples
Exercises
Exercise 5.20.1 Prove that the linear mapping (functional)
H 1 (0, 1) � w → w(x0 ), where x0 ∈ [0, l]
is continuous. Use the result to prove that space W in Example 5.20.1 is closed.
Hint: Consider first smooth functions w ∈ C ∞ ([0, l]) and then use the density of C ∞ ([0, l]) in
H 1 (0, l).
Take w ∈ C ∞ ([0, l]). Function
ŵ(x) := w(x) −
1
l
�
l
w(s) ds
0
is continuous and has a zero average. By the Mean-Value Theorem of Integral Calculus, there exists an
intermediate point c ∈ [0, l] such that ŵ(c) = 0. Integrating ŵ� = w� from c to x ∈ [0, l], we obtain
� x
� x
w� (s) ds =
ŵ� (s) ds = ŵ(x)
c
c
Consequently,
1
l
w(x) =
This implies the estimate
1
|w(x)| ≤
l
��
0
� 12 ��
l
�
l
w(s) ds +
0
l
2
1
0
|w(s)| ds
≤ �w�L2 (0,l) + l�w� �L2 (0,l)
� 12
�
+
x
w� (s) ds
c
��
� 12 ��
x
x
1
c
c
|w� (s)|2 ds
� 12
≤ max{1, l}�w�H 1 (0,l)
By the density argument, the estimate generalizes to w ∈ H 1 (0, l).
Applying the result to higher derivatives, we conclude that operator
I4
f : H 4 (0, l) � w → (w�� (0), w��� (0), w�� (l), w��� (l)) ∈ R
is continuous. Consequently W = f −1 ({0}) as inverse image of a closed set, must be closed.
Exercise 5.20.2 Let u, v ∈ H 1 (0, l). Prove the integration by parts formula
� l
� l
uv � dx = −
u� v dx + (uv)|l0
0
0
Banach Spaces
157
Hint: Make use of the density of C ∞ ([0, l]) in H 1 (0, l).
Integrate (uv)� = u� v + uv � , to obtain the formula for smooth functions u, v ∈ C ∞ ([0, l]). Let
u, v ∈ H 1 (0, l). By the density argument, there exist seqeunces un , vn ∈ C ∞ ([0, l]) such that un →
u, vn → v in H 1 (0, l). For each n, we have
� l
� l
�
un vn dx = −
u�n vn dx + (un vn )|l0
0
0
The point is that both sides of the equality represent continuous functional on space H 1 (0, l). Integrals
represent L2 -products, and the boundary terms are continuous by the result of Exercise 5.20.1 above.
Consequently, we can pass on both sides to the limit with n → ∞, to obtain the final result.
Exercise 5.20.3 Work out all the details of Example 5.20.1 once again, with different boundary conditions:
w(0) = w�� (0) = 0 and w�� (l) = w��� (l) = 0
(left end of the beam is supported by a pin support).
We follow precisely the same lines to obtain
W = {w ∈ H 4 (0, l) : w(0) = w�� (0) = w�� (l) = w��� (l) = 0}
The transpose operator is defined on the whole L2 (0, l) by the same formula as before (but different
W ). The corresponding null space is
N (A� ) = {v ∈ L2 (0, l) : v ���� = 0 and v(0) = v �� (0) = v �� (l) = v ��� (l) = 0}
I
= {v ∈ L2 (0, l) : v(x) = αx, α ∈ R}
The necessary and sufficient condition for the existence of a solution w ∈ W ,
q ∈ N (A� )⊥ = 0
is equivalent to
�
l
q(x)x dx = 0
0
(the moment of active load q with respect to the pin must vanish). The solution is determined up to a
linearized rotation about x = 0 (the pin).
Exercise 5.20.4 Prove that operator A from Remark 5.20.1 is closed.
Consider a sequence of functions (un , qn ) ∈ L2 (0, l) × L2 (0, l) such that
2
��
��
���
����
un ∈ D(A), i.e., u����
n ∈ L (0, l), un (0) = un (0) = un (l) = un (l) = 0 and un = qn
Assume un → u, qn → q. The question is: Does u ∈ D(A), Au = q ?
Recall the definition of distributional derivative,
def
����
< u����
>=
n , φ > = < un , φ
�
l
0
un φ���� =
�
l
qn φ
0
∀φ ∈ D(0, l)
158
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Keeping φ fixed, we pass to the limit with n → ∞ to learn that
� l
� l
< u���� , φ >=
uφ���� =
qφ ∀φ ∈ D(0, l)
0
i.e., u
����
0
= q. The boundary conditions represent continuous functionals on H 4 (0, l) and, therefore,
they are satisfied in the limit as well.
Exercise 5.20.5 (A finite-dimensional sanity check). Determine necessary conditions on data f for solutions
to the linear systems of equations that follow to exist. Determine if the solutions are unique and, if not,
describe the null space of the associated operator:
Au = f
Here
�
1 −1 0
A=
−1 0 1
�


1 2 −1
A = 4 0 2
3 −2 −3

3 1 −1 2
A =  6 2 −2 4 
9 3 −3 6

We shall discuss the first case, the other two being fully analogous. Matrix operator A : R
I3 →R
I 2.
Identifying the dual of R
I n with itself (through the canonical inner product), we have


1 −1
I 2 →R
I 3 , A =  −1 0 
A� : R
0 1
The null space of A� is trivial and, consequently, the linear problem has a solution for any right-hand
side f . The solution is determined up to elements from the kernel of A,
N (A) = {(t, t, t) : t ∈ R}
I
In elementary language, one can set x1 = t, and solve the remaining 2 × 2 system for x2 , x3 to obtain
x2 = −f1 + t, x3 = f2 − t
5.21
Equations with Completely Continuous Kernels, Fredholm Alternative
Exercises
Exercise 5.21.1 Complete the proof of Lemma 5.21.6.
Remark: We use the same notation as in the text. A linear functional may be denoted with both standard
and boldface symbols when we want to emphasize the vectorial character of the dual space.
Banach Spaces
159
Case: n ≥ m. Define operator
def
P = T � + Q,
Qf =
m
�
f (y k )f k
k=1
Operator I − P is injective. Indeed, assume
f − P f = f − T � f − Qf = A� f − Qf = 0
Evaluating both sides at xi , we obtain,
< A� f , xi > −
m
�
f (y k ) < f k , xi >= 0
k=1
which implies
< f , Axi > −f (y i ) = 0
But Axi = 0 so f (y i ) = 0, i = 1, . . . , m and, consequently, A� f = 0, i.e., f ∈ N (A� ). This implies
that f can be represented in terms of functionals gi ,
f=
m
�
bi g i
i=1
Evaluating both sides at vectors y i , we conclude that bi = 0, i = 1, . . . , m and, therefore, f = 0. This
proves that I − P is injective. By Corollary 5.21.2, I − P is surjective as well. There exists thus a
solution f̄ to the problem
A� f̄ −
m
�
f¯(y k )f k = f m+1
k=1
Evaluating both sides at xn+1 we arrive at a contradiction, the left-hand side vanishes while the righthand one is equal to one. This proves that n = m.
Exercise 5.21.2 Let X, d be a complete metric space and let A ⊂ X. Prove that the following conditions are
equivalent to each other.
(i) A is precompact in X, i.e., A is compact in X.
(ii) A is totally bounded.
(iii) From every sequence in A one can extract a Cauchy subsequence.
(i) ⇒ (ii). Since A is compact then, by Theorem 4.9.2, it is totally bounded, i.e., for every � > 0, there
exists an �-net Y� ⊂ A, i.e.,
A⊂A⊂
�
B(y� , �)
y∈Y�
It remains to show that one can select �-nets from A itself. Take � > 0. Let Y 2� be the corresponding
�
2 -net
in A. For each z ∈ Y 2� , there exists a corresponding yz ∈ A such that d(y, z) < �/2. It follows
from the triangle inequality that
{zy : y ∈ Y 2� } ⊂ A
160
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
is an �-net for set A.
(ii) ⇒ (i). By Theorem 4.9.2 again, it is sufficient to show that A is totally bounded. Take � > 0 and
select an arbitrary �1 < �. Then �1 -net Y�1 for A is an �-net for A. Indeed,
A⊂
implies
A⊂
�
y∈Y�1
�
B(y, �1 )
y∈Y�1
B(y, �1 ) ⊂
�
B(y, �)
y∈Y�1
(i) ⇒ (iii). Let xn ∈ A. As A is sequentially compact, one can extract a convergent and, therefore,
Cauchy subsequence xnk .
(iii) ⇒ (i). Let xn ∈ A. We need to demonstrate that there exists a subsequence xnk converging to
x ∈ A. For each n, select yn ∈ A such that d(yn , xn ) <
1
n.
Let ynk be a Cauchy subsequence of yn .
Since X is complete, ynk converges to some x. Consequently, xnk converges to x as well. Finally,
x ∈ A.
6
Hilbert Spaces
Basic Theory
6.1
Inner Product and Hilbert Spaces
Exercises
Exercise 6.1.1 Let V be an inner product space. Prove that
(u, w) = (v, w)
∀w∈V
if and only if u = v. Select w = u − v. Then
(u, u − v) − (v, u − v) = (u − v, u − v) = �u − v�2 = 0
implies u − v = 0.
Exercise 6.1.2
(a) Prove the parallelogram law for real inner product spaces
�u + v�2 + �u − v�2 = 2�u�2 + 2�v�2
(b) Conversely, let V be a real normed space with its norm satisfying the condition above. Proceed
with the following steps to prove that
def
(u, v) =
1
(�u + v�2 − �u − v�2 )
4
is an inner product on V . Proceed along the following steps.
Step 1. Continuity
un → u, v n → v =⇒ (un , v n ) → (u, v)
Step 2. Symmetry
(u, v) = (v, u)
161
162
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Step 3. Positive definiteness
(u, u) = 0 =⇒ u = 0
Step 4. Use the parallelogram law to prove that
�u + v + w�2 + �u�2 + �v�2 + �w�2 = �u + v�2 + �v + w�2 + �w + u�2
Step 5. Use the Step 4 identity to show additivity
(u + v, w) = (u, w) + (v, w)
Step 6. Homogeneity
(αu, v) = α(u, v)
Hint: Use the Step 5 identity to prove the assertion first for α = k/m, where k and m are
integers, and use the continuity argument.
(c) Generalize the result to a complex normed space V using the formula (so-called polarization
formula)
(u, v) =
1
(�u + v�2 − �u − v�2 + i�u + iv�2 − i�u − iv�2 )
4
Compare the discussion on the extension of a scalar product from a real space to its complex
extension.
(a) Use the relation between the norm and inner product,
�u + v�2 + �u − v�2 = (u + v, u + v) + (u − v, u − v)
= (u, u) + (u, v) + (v, u) + (v, v) + (u, u) − (u, v) − (v, u) + (v, v)
= 2�u�2 + 2�v�2
(b)
Step 1. In a normed space, the norm is continuous in the norm topology. The right-hand side represents thus a continuous functional f (u, v).
Step 2. This is a direct consequence of the definition.
Step 3. This is implied directly by the definition and the positive definiteness of the norm.
Step 4. We have
�u + v + w�2 + �u − (v + w)�2 = 2�u�2 + 2�v + w�2
(6.1)
�(u − v) + w�2 + �(u − v) − w)�2 = 2�u − v�2 + 2�w�2
(6.2)
�(u + w) + v�2 + �(u + w) − v)�2 = 2�u + w�2 + 2�v�2
(6.3)
and
and
Hilbert Spaces
163
Subtract (6.2) from the sum of (6.1) and (6.3), and divide by a factor of two, to get
�u + v + w�2 = �u�2 + �v�2 − �w�2 + �u + w�2 + �v + w�2 − �u − v�2
Add to the last equation
2�u�2 + 2�v�2 = �u + v�2 + �u − v�2
to get the final result.
Step 5. This is a straightforward algebra, use the definition and Step 4 result to eliminate �u+v+w�
and �u + v − w�.
Step 6. Additivity implies
(ku, v) = k(u, v)
Divide by m,
k
1
(ku, v) = (u, v)
m
m
Use additivity again,
k
1
1 k
1
k
k
(u, v) = (ku, v) == ( mu, v) =
m ( u, v) = ( u, v)
m
m
m m
m
m
m
Take now an arbitrary real number α and a sequence of rational numbers αn converging to
α. Use the continuity proved in Step 1 and pass to the limit in
(αn u, v) = αn (u, v)
(c) Follow precisely the same lines as for the real case.
Exercise 6.1.3 Use the results of Exercise 6.1.2 to show that the spaces �p , p �= 2 are not inner product
spaces. Hint: Verify the parallelogram law.
Take,e.g.
u = (1, 0, . . .),
v = (0, 1, . . .)
Then
2
�u + v�2 + �u − v�2 = 2 2 p = 2
p+2
p
and
2�u�2 + 2�v�2 = 4
and the two numbers are different unless p = 2.
Exercise 6.1.4 Let u and v be non-zero vectors in a real inner product space V . Show that
�u + v� = �u� + �v�
if and only if v = αu for some real number α > 0 (compare Exercise 3.9.2). Does the result extend to
complex vector spaces?
164
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
The result is true for complex spaces. Let v = αu, α > 0. Then both sides are equal to (1 + α)�u�.
Conversely, squaring the left-hand side of the equality above,
�u + v�2 = (u + v, u + v) = �u�2 + 2Re (u, v) + �v�2
and comparing it with the square of the right-hand side, we learn that
Re (u, v) = �u� �v�
Consider now a function of a real argument α,
�u − αv�2 = (u − αv, u − αv) = �u�2 − 2αRe (u, v) + α2 �v�2 = α2 �v�2 − 2α�u� �v� + �u�2
Quadratic function on the right-hand side has a minimum equal zero at α = �v�/�u� > 0. Conse-
quently, the left-hand side must vanish as well which implies that u − αv = 0.
Exercise 6.1.5 Let {un } be a sequence of elements in an inner product space V . Prove that if
(un , u) −→ (u, u)
and
�un � −→ �u�
then un −→ u, i.e., �un − u� −→ 0.
We have
�un − u�2 = (un − u, un − u) = (un , un ) − (u, un ) − (un , u) + (u, u)
= �un �2 − (u, un ) − (un , u) + �u�2 → 2�u�2 − 2(u, u) = 0
since (u, un ) = (un , u) → (u, u).
Exercise 6.1.6 Show that the sequence of sequences
u1 = (α1 , 0, 0, . . .)
u2 = (0, α2 , 0, . . .)
u3 = (0, 0, α3 , . . .)
etc., where the αi are scalars, is an orthogonal sequence in �2 , i.e., (un , um ) = 0 for m �= n.
Apply definition of the inner product in �2 .
Exercise 6.1.7 Let A : U → V be a linear map from a Hilbert space U, (·, ·)U into a Hilbert space V, (·, ·)V .
Prove that the following conditions are equivalent to each other,
(i) A is unitary, i.e., it preserves the inner product structure,
(Au, Av)V = (u, v)U
∀u, v ∈ U
(ii) A is an isometry, i.e., it preserves the norm,
�Au�V = �u�U
∀u ∈ U
(i)⇒(ii). Substitute v = u.
(ii)⇒(i). Use the parallelogram law (polarization formula) discussed in Exercise 6.1.2.
Hilbert Spaces
6.2
165
Orthogonality and Orthogonal Projections
Exercises
Exercise 6.2.1 Let V be an inner product space and M, N denote vector subspaces of V . Prove the following
algebraic properties of orthogonal complements:
(i) M ⊂ N ⇒ N ⊥ ⊂ M ⊥ .
(ii) M ⊂ N ⇒ (M ⊥ )⊥ ⊂ (N ⊥ )⊥ .
(iii) M ∩ M ⊥ = {0}.
(iv) If M is dense in V , (M = V ) then M ⊥ = {0}.
(i) Let v ∈ N ⊥ . Then
(n, v) = 0 ∀n ∈ N
⇒
(n, v) = 0 ∀n ∈ M
⇒
v ∈ M⊥
(ii) Apply (i) twice.
(iii) Let v ∈ M ∩ M ⊥ . Then v must orthogonal to itself, i.e., (v, v) = 0 ⇒ v = 0.
(iv) Let v ∈ M ⊥ and M � v n → v. Passing to the limit in
(v n , v) = 0
we get (v, v) = 0 ⇒ v = 0.
Exercise 6.2.2 Let M be a subspace of a Hilbert space V . Prove that
M = (M ⊥ )⊥
By Corollary 6.2.1,
It is sufficient thus to show that
� ⊥ �⊥
M= M
M⊥ = M
As M ⊂ M , Exercise 6.2.1(i) implies that M
M � mn → m ∈ M . Passing to the limit in
⊥
⊥
⊂ M ⊥ . Conversely, assume v ∈ M ⊥ , and let
(mn , v) = 0
we learn that v ∈ M
⊥
as well.
166
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Exercise 6.2.3 Two subspaces M and N of an inner product space V are said to be orthogonal, denoted
M ⊥ N , if
∀ m ∈ M, n ∈ N
(m, n) = 0,
Let V now be a Hilbert space. Prove or disprove the following:
(i) M ⊥ N =⇒ M ⊥ ⊥ N ⊥ .
(ii) M ⊥ N =⇒ (M ⊥ )⊥ ⊥ (N ⊥ )⊥ .
I
The first assertion is false. Consider, e.g. R
I 3 with the canonical inner product. Take M = R×{0}×{0}
and N = {0} × R
I × {0}. Then
I ×R
I
M ⊥ = {0} × R
and
N⊥ =R
I × {0} × R
I
Obviously, spaces M ⊥ and N ⊥ are not orthogonal.
To prove the second assertion, in view of Exercise 6.2.2, it is sufficient to show that
M ⊥N ⇒ M ⊥N
But this follows immediately from the continuity of the inner product. Take M � mn → m ∈ M and
N � nn → n ∈ N , and pass to the limit in
(mn , nn ) = 0
Exercise 6.2.4 Let Ω be an open, bounded set in R
I n and V = L2 (Ω) denote the space of square integrable
functions on Ω. Find the orthogonal complement in V of the space of constant functions
�
�
M = u ∈ L2 (Ω) : u = const a.e. in Ω
Let f ∈ L2 (Ω). Projection of f onto M is equivalent to the variational problem:

u ∈R
I

 f
�
�

u
v
=
f v ∀v ∈ R
I

f
Ω
Ω
Selecting v = 1, we learn that uf is the average of f ,
u0 =
1
meas(Ω)
�
f
Ω
Orthogonal complement M ⊥ contains functions f − uf , i.e., functions of zero average,
�
M ⊥ = {f ∈ L2 (Ω) :
= 0}
Ω
(compare Example 2.??)
Hilbert Spaces
167
I C ) a sequence of measurable functions.
Exercise 6.2.5 Let Ω ⊂ R
I N be a measurable set and fn : Ω → R(I
We say that sequence fn converges in measure to a measurable function f : Ω → R(I
I C ) if, for every
ε > 0,
m({x ∈ Ω : |fn (x) − f (x)| ≥ ε}) → 0
as
n→0
Let now m(Ω) < ∞. Prove that Lp (Ω) convergence, for any 1 ≤ p ≤ ∞, implies convergence in
measure.
Hint:
 ��
� p1


1

|fn (x) − f (x)|p dx
Ω
m({x ∈ Ω : |fn (x) − f (x)| ≥ ε}) ≤ ε

1

 ess supx∈Ω |fn (x) − f (x)|
ε
1≤p<∞
p=∞
The inequalities in the hint follow directly from the definitions. Let 1 ≤ p < ∞, and let f ∈ Lp (Ω).
Define set A as
A := {x ∈ Ω : |f (x)| ≥ ε}
Then
ε m(A) ≤
which implies
��
A
|f (x)|
p
� p1
≤
m(A) ≤
��
Ω
|f (x)|
p
� p1
= �f �Lp (Ω)
1
�f �Lp (Ω)
ε
Consequently, Lp -convergence of fn − f to zero, implies convergence in measure. Case p = ∞ is
proved analogously.
Exercise 6.2.6 Let Ω ⊂ R
I N be a measurable set and fn : Ω → R(I
I C ) a sequence of measurable functions
converging in measure to a measurable function f : Ω → R(I
I C ). Prove that one can extract a subsequence fnk converging to function f almost everywhere in Ω.
Hint: Follow the steps given below.
Step 1. Show that, given an ε > 0, one can extract a subsequence fnk such that
m ({x ∈ Ω : |fnk (x) − f (x)| ≥ ε}) ≤
1
2k+1
∀k ≥ 1
Step 2. Use the diagonal choice method to show that one can extract a subsequence fnk such that
m({x ∈ Ω : |fnk (x) − f (x)| ≥
1
1
}) ≤ k+1
k
2
Consequently,
m({x ∈ Ω : |fnk (x) − f (x)| ≥ ε}) ≤
for every ε > 0, and for k large enough.
∀k ≥ 1
1
2k+1
168
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Step 3. Let ϕk = fnk be the subsequence extracted in Step 2. Use the identities
{x ∈ Ω : inf sup |ϕn (x) − f (x)| > 0} =
ν≥0 n≥ν
{x ∈ Ω : inf sup |ϕn (x) − f (x)| ≥ ε} =
ν≥0 n≥ν
to prove that
�
{x ∈ Ω : inf sup |ϕn (x) − f (x)| ≥
k
�
ν≥0
ν≥0 n≥ν
1
}
k
{x ∈ Ω : sup |ϕn (x) − f (x)| ≥ ε}
n≥ν
m({x ∈ Ω : lim sup |ϕn (x) − f (x)| > 0})
n→∞
≤
Step 4. Use the identity
�
k
lim m({x ∈ Ω : sup |ϕn (x) − f (x)| ≥
ν→∞
n≥ν
{x ∈ Ω : sup |ϕn (x) − f (x)| >
n≥ν
1
})
k
�
1
1
}⊂
{x ∈ Ω : |ϕn (x) − f (x)| > }
k
k
n≥ν
and the result of Step 2 to show that
m({x ∈ Ω : sup |ϕn (x) − f (x)| ≥ ε}) ≤
n≥ν
1
2ν
for every ε > 0 and (ε-dependent!) ν large enough.
Step 5. Use the results of Step 3 and Step 4 to conclude that
m({x ∈ Ω : lim fnk (x) �= f (x)}) = 0
k→∞
Remark: The Lebesgue Dominated Convergence Theorem establishes conditions under which pointwise convergence of a sequence of functions fn to a limit function f implies the Lp -convergence.
While the converse, in general, is not true, the results of the last two exercises at least show that the Lp convergence of a sequence fn implies the pointwise convergence (almost everywhere only, of course)
of a subsequence fnk .
Step 1. This follows directly from the definition of a convergent sequence. We have,
∀δ ∃N n ≥ N m({x ∈ Ω : |fn (x) − f (x)| ≥ ε}) ≤ δ
∀n
Select δ = 1 and an element fn1 that satisfies the condition for δ = 1/2. By induction, given
n1 , . . . , nk−1 , select nk > n1 , . . . , nk−1 such that fnk satisfies the condition for δ = 1/2k+1 .
Notice that avoding the duplication (enforcing injectivity of function k = nk ) is possible since
we have an infinite number of elements of the sequence at our disposal.
Step 2. Use Step 1 result for ε = 1. In particular, the subsequence converges alond with the orginal
sequence. Take then ε = 1/2 and select a subsequence of the first subsequence fnk (denoted
with the same symbol) to staisfy the same condition. Proceed then by induction. The diagonal
subsequence satisfies the required condition.
Hilbert Spaces
169
Step 3. By Proposition 3.1.6v,
m({x ∈ Ω : inf sup |ϕn (x) − f (x)| ≥
ν≥0 n≥ν
1
1
}) = lim m({x ∈ Ω : sup |ϕn (x) − f (x)| ≥ })
ν→∞
k
k
n≥ν
The final condition is then a consequence of the equality above, the first identity in Step 3, and
subadditivity of the measure.
Step 4. Given ε, choose k such that � > k1 . The identity and subadditivity of the measure imply that
1
})
k
n≥ν
�
1
})
≤
m({x ∈ Ω : |ϕn (x) − f (x)| >
k−1
n≥ν
∞
�
1
1
≤
≤
k+1
2
ν
ν
m({x ∈ Ω : sup | ϕn (x) − f (x)| ≥ ε}) ≤ m({x ∈ Ω : sup | ϕn (x) − f (x)| >
n≥ν
Step 5. Combining results of Step 3 and Step 4, we get
m({x ∈ Ω : lim sup |ϕn (x) − f (x)| > 0}) = 0
n→∞
which is equivalent to the final assertion.
6.3
Orthonormal Bases and Fourier Series
Exercises
Exercise 6.3.1 Prove that every (not necessarily separable) nontrivial Hilbert space V possesses an orthonormal basis.
Hint: Compare the proof of Theorem 2.4.3 and prove that any orthonormal set in V can be extended to
an orthonormal basis.
Let A0 be an orthonormal set. Let U be a class of orthonormal sets A (i.e., A contains unit vectors, and
every two vectors are orthogonal to each other) containing A0 . Obviously, U is nonempty. Family U is
partially ordered by the inclusion. Let Aι , ι ∈ I, be a chain in U . Then
�
�
A :=
Aι ∈ U and Aκ ⊂
Aι , ∀κ ∈ I
ι∈I
ι∈I
Indeed, linear ordering of the chain implies that, for each two vectors in u, v ∈ A, there exists a
common index ι ∈ I such that u, v ∈ Aι . Consequently, u and v are orthogonal. By the Kuratowski-
Zorn Lemma, U has a maximal element, i.e., an orthonormal basis for space U that contains A0 .
To conclude the final result, pick an arbitrary vector u1 �= 0, and set A0 = {u/�u�}.
170
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Exercise 6.3.2 Let {en }∞
n=1 be an orthonormal family in a Hilbert space V . Prove that the following conditions are equivalent to each other.
(i) {en }∞
n=1 is an orthonormal basis, i.e., it is maximal.
∞
�
(u, en ) en
∀u∈V.
(ii) u =
n=1
(iii) (u, v) =
2
(iv) �u� =
(i)⇒(ii). Let
∞
�
n=1
∞
�
n=1
(u, en ) (v, en ).
2
|(u, en )| .
uN :=
N
�
uN → u
uj ej ,
j=1
Multiply both sides of the equality above by ei , and use orthonormality of ej to learn that
ui = (uN , ei ) → (u, ei ) as N → ∞
(ii)⇒(iii). Use orthogonality of ei to learn that
(uN , v N ) =
N
�
ui v i =
i=1
N
�
i=1
(u, ei ) (v, ei ) →
∞
�
(u, ei ) (v, ei )
i=1
(iii)⇒(iv). Substitute v = u.
(iv)⇒(i). Suppose, to the contrary, the {e1 , e2 , . . .} can be extended with a vector u �= 0 to a bigger
orthonormal family. Then u is orthogonal with each ei and, by property (iv), �u� = 0. So u = 0, a
contradiction.
Exercise 6.3.3 Let {en }∞
n=1 be an orthonormal family (not necessarily maximal) in a Hilbert space V . Prove
Bessel’s inequality
∞
�
i=1
2
2
|(u, ei )| ≤ �u�
∀u∈V
Extend the family to an orthonormal basis (see Exercise 6.3.1), and use the property (iv) proved in
Exercise 6.3.2.
Exercise 6.3.4 Prove that every separable Hilbert space V is unitary equivalent with the space �2 .
Hint: Establish a bijective correspondence between the canonical basis in �2 and an orthonormal basis
in V and use it to define a unitary map mapping �2 onto V .
Let e1 , e2 , . . . be an orthonormal basis in V . Define the map
T : �2 � (x1 , x2 , . . .) →
∞
�
i=1
xi ei =: x ∈ V
Hilbert Spaces
171
Linearity is obvious.
for N > M ,
�∞
i=1
|xi |2 < ∞ implies that sequence xN =
�uN − uM �2 =
N
�
i=M +1
∞
�
|xi |2 ≤
i=M +1
�N
i=1
xi ei is Cauchy in V . Indeed,
|xi |2 → 0 as M → ∞
By completeness of V , the series converges, i.e., the map is well defined. By Exercise 6.3.2(iv), the
map is a surjection. Orthonormality of ei implies that it is also an injection. Finally, it follows from
the definition that the map is unitary.
Exercise 6.3.5 Prove the Riesz–Fisher Theorem.
Let V be a separable Hilbert space with an orthonormal basis {en }∞
n=1 . Then
�
�∞
∞
�
�
2
vn en :
|vn | < ∞
V =
n=1
n=1
In other words, elements of V can be characterized as infinite series
∞
�
vn en with �2 -summable
n=1
coefficients vn .
See Exercise 6.3.4
Exercise 6.3.6 Let I = (−1, 1) and let V be the four-dimensional inner product space spanned by the
monomials {1, x, x2 , x3 } with
(f, g)V =
�
1
f g dx
−1
(i) Use the Gram-Schmidt process to construct an orthonormal basis for V .
(ii) Observing that V ⊂ L2 (I), compute the orthogonal projection Πu of the function u(x) = x4
onto V .
(iii) Show that (x4 − Πx4 , v)L2 (I) = 0
∀v ∈ V .
(iv) Show that if p(x) is any polynomial of degree ≤ 3, then Πp = p.
(v) Sketch the function Πx4 and show graphically how it approximates x4 in V .
(i) Taking monomials 1, x, x2 , x3 , we obtain
e1 (x) =
1
2
�
3
x
2
�
90 2 1
e3 (x) =
(x − )
21
6
�
6
175 3
e4 (x) =
(x − x)
8
10
e2 (x) =
(ii) We get
(P u)(x) =
414 2 1
1
+
(x − )
10 441
6
172
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
(iii) Nothing to show. According to the Orthogonal Decomposition Theorem, u − P u is orthogonal
to subspace V .
(iv) This is a direct consequence of the orthogonality condition. If u ∈ V then u − P u ∈ V as well
and, in particular, u − P u must be orthogonal to itself,
(u − P u, u − P u) = 0
which implies u − P u = 0.
(v) See Fig. 6.1.
Figure 6.1
Function x4 and its L2 -projection onto P 3 (−1, 1).
Exercise 6.3.7 Use the orthonormal basis from Example 6.3.4 to construct the (classical) Fourier series representation of the following functions in L2 (0, 1).
f (x) = x,
f (x) = x + 1
Evaluation of coefficients (f, ek ) leads to the formulas,
x=
∞
1�1
1
+
sin 2πkx,
2 π
k
k=1
x+1=
∞
3
1�1
+
sin 2πkx
2 π
k
k=1
Hilbert Spaces
173
Duality in Hilbert Spaces
6.4
Riesz Representation Theorem
Exercises
Exercise 6.4.1 Revisit Example 6.4.1 and derive the matrix representation of the Riesz map under the assumption that the dual space consists of antilinear functionals.
Follow the lines in the text to obtain,
fj = gjk xk
The only difference between the formula above and the formula in the text, is the dissapearance of the
conjugate over xk .
6.5
The Adjoint of a Linear Operator
6.6
Variational Boundary-Value Problems
Exercises
Exercise 6.6.1 Let X be a Hilbert space and V a closed subspace. Prove that the quotient space X/V , which
a priori is only a Banach space, is in fact a Hilbert space.
Let V ⊥ be the orthogonal complement of V in X,
X =V ⊕V⊥
As a closed subspace of a Hilbert space, V ⊥ is also a Hilbert space. Consider map
T : V ⊥ � w → [w] = w + V ∈ X/V
174
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Map T is an isometry from V ⊥ onto X/V . Indeed, T is linear and
inf �w + v�2 = inf (�w�2 + �v�2 ) = �w�2
v∈V
v∈V
From the representation x = (x − P x) + P x where P is the orthogonal projection onto V , follows
also that T is surjective. Map T transfers the inner product from V ⊥ into X/V ,
def
([w1 ], [w2 ])X/V = (T −1 [w1 ], T −1 [w2 ])
Exercise 6.6.2 Prove a simplified version of the Poincaré inequality for the case of Γ1 = Γ.
Let Ω be a bounded, open set in R
I n . There exists a positive constant c > 0 such that
�
�
2
u dx ≤ c
|∇u|2 dx
∀ u ∈ H01 (Ω)
Ω
Ω
Hint: Follow the steps:
Step 1. Assume that Ω is a cube in R
I n , Ω = (−a, a)n and that u ∈ C0∞ (Ω). Since u vanishes on the
boundary of Ω, we have
u(x1 , . . . , xn ) =
Use the Cauchy–Schwarz inequality to obtain
2
u (x1 , . . . , xn ) ≤
�
a
−a
�
�
xn
−a
∂u
(x1 , ..., t)dt
∂xn
∂u
(x1 , . . . , xn )
∂xn
�2
dxn (xn + a)
and integrate over Ω to get the result.
Step 2. Ω bounded. u ∈ C0∞ (Ω). Enclose Ω with a sufficiently large cube (−a, a)n and extend u by
zero to the cube. Apply Step 1 result.
Step 3. Use density of test functions C0∞ (Ω) in H01 (Ω).
Solution:
Step 1. Applying Cauchy–Schwarz inequality to the identity above, we get,
2
u (x1 , . . . , xn ) ≤
≤
�
�
xn
−a
a
−a
�
�
∂u
(x1 , . . . , t)
∂xn
�2
∂u
(x1 , . . . , xn )
∂xn
dt (xn + a)
�2
dxn (xn + a)
Integrating over Ω on both sides, we get
�
2
Ω
u dx ≤
� �
∂u
∂xn
�2
�
∂u
∂xn
�2
Ω
dx · 2a2
Step 2. Applying the Step 1 results, we get
�
2
u dx =
Ω
�
2
Ω1
u dx ≤ 2a
2
�
Ω1
2
dx = 2a
� �
Ω
∂u
∂xn
�2
dx
Hilbert Spaces
175
Step 3. Let u ∈ H01 (Ω) and um ∈ C0∞ (Ω) be a sequence converging to u in H 1 (Ω). Then
�
Ω
u2m
dx ≤ 2a
� �
� �
�2
2
Ω
∂um
∂xn
�2
dx
Passing to the limit, we get
�
2
Ω
u dx ≤ 2a
2
Ω
∂u
∂xn
dx ≤ 2a2
�
Ω
|∇u|2 dx
Exercise 6.6.3 Let Ω be a sufficiently regular domain in R
I n , n ≥ 1, and let Γ denote its boundary. Con-
sider the diffusion-convection-reaction problem discussed in the text with slightly different boundary
conditions,

−(aij u,j ),i + bi u,i + cu = f in Ω



u = 0 on Γ1



aij u,j ni = 0 on Γ2
with commas denoting the differentiation, e.g., u,i =
∂u
∂xi ,
and the Einstein summation convention in
use. In this and the following exercises, we ask the reader to reproduce the arguments in the text for
this slightly modified problem. Make the same assumptions on coefficients aij , bi , c as in the text.
Step 1: Derive (formally) the classical variational formulation:

1

 u ∈ HΓ1 (Ω)
�
�

 {aij u,j v,i + bj u,j v + cuv} dx =
f v dx
Ω
Ω
∀v ∈ HΓ11 (Ω)
where
HΓ11 (Ω) := {u ∈ H 1 (Ω) : u = 0 on Γ1 }
Step 2: Use Cauchy–Schwarz inequality, the assumptions on the coefficients aij , bj , c, and an appropriate assumption on source term f (x) to prove that the bilinear and linear forms are continuous
on H 1 (Ω).
Step 3: Use Poincaré inequality and the assumptions on the coefficients aij , bj , c to prove that the
bilinear form is coercive.
Step 4: Use the Lax–Milgram Theorem to conclude that the variational problem is well posed.
All reasoning is fully analogous to that in the text.
Exercise 6.6.4 Reformulate the second order diffusion-convection-reaction problem considered in Exercise 6.6.3, as a first order problem
�
σi = aij u,j
−σi,i + bi u,i + cu = f
176
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
where the first equation may be considered to be a (new) definition of flux σi . Use the ellipticity
condition to introduce inverse αij = (aij )−1 (the compliance matrix), and multiply the first equation
with αij to arrive at the equivalent system,
�
αij σj − u,i = gi
−σi,i + bi u,i + cu = f
(6.4)
with the additional source term gi = 0 vanishing for the original problem.
We can cast the system into a general abstract problem
Au = f
where u, f are group variables, and A represents the first order system,
�
u = (σ, u) ∈ (L2 (Ω))n × L2 (Ω)
�
f = (g, f ) ∈ (L2 (Ω))n × L2 (Ω)
Au := (αij σj − u,j , −σi,i + bi u,i + cu)
Recall that the accent over the equality sign indicates a “metalanguage” and it is supposed to help you
survive the notational conflicts; on the abstract level both u and f gain a new meaning. Definition of
the domain of A incorporates the boundary condition,
D(A) := { (σ, u) ∈ (L2 (Ω))n × L2 (Ω) :
A(σ, u) ∈ (L2 (Ω))n × L2 (Ω) and u = 0 on Γ1 , σ · n = 0 on Γ2 }
Step 1: Prove that the operator A is closed.
Step 2: Prove that the operator A is bounded below,
�Au� ≥ γ�u�,
u ∈ D(A)
Hint: Eliminate the flux and reduce the problem back to the second order problem with the righthand side equal to f − (aij gj ),i . Upgrade then slightly the arguments used in Exercise 6.6.3.
Step 3: Identify the adjoint operator A∗ (along with its domain),
(Au, v) = (u, A∗ v),
u ∈ D(A), v ∈ D(A∗ )
Step 4: Show that the adjoint operator is injective. Recall then the Closed Range Theorem for Closed
Operators and conclude that the adjoint is bounded below with the same constant γ as well.
Discuss then the well-posedness of the first order problem Au = f .
Step 5: Discuss the well-posedness of the adjoint problem,
v ∈ D(A∗ ),
Again, the reasoning is identical to that in the text.
A∗ v = f
Hilbert Spaces
177
Exercise 6.6.5 Consider the ultraweak variational formulation corresponding to the first order system studied
in Exercise 6.6.4.
�
u ∈ L2 (Ω)
(u, A∗ v) = (f, v)
or, in a more explicit form,

σ ∈ (L2 (Ω))n , u ∈ L2 (Ω)



�
�
where




σi (αji τj + v,i ) dx +
Ω
Ω
v ∈ D(A∗ )
u(τi,i − (bi v),i + cv) dx =
�
(gi τi + f v) dx
Ω
τ ∈ HΓ2 (div, Ω), v ∈ HΓ11 (Ω)
HΓ2 (div, Ω) := {σ ∈ (L2 (Ω))n : div σ ∈ L2 (Ω), σ · n = 0 on Γ2 }
Step 1: Double check that the energy spaces in the abstract and the concrete formulations are identical.
Step 2: Identify the strong form of the adjoint operator discussed in Exercise 6.6.4 as the transpose operator corresponding to the bilinear form b(u, v) = (u, A∗ v), and conclude thus that the conjugate
operator is bounded below.
Step 3: Use the well-posedness of the (strong form) of the adjoint problem to conclude that the ultraweak operator,
B : L2 → (D(A∗ ))� ,
�Bu, v� = b(u, v)
is injective.
Step 4: Use the Closed Range Theorem for Continuous Operators to conclude that the ultraweak operator B satisfies the inf-sup condition with the same constant γ as for the adjoint operator and,
therefore, the same constant γ as for the original strong form of operator A.
Step 5: Conclude with a short discussion on the well-posedness of the ultraweak variational formulation.
Just follow the text.
Exercise 6.6.6 Suppose we begin with the first order system (6.4). We multiply the first equation with test
function τi , the second with test function v, integrate over domain Ω, and sum up all the equations.
If we leave them alone, we obtain a “trivial” variational formulation with solution (σ, u) in the graph
norm energy space and L2 test functions (τ, v). If we integrate by parts (“relax”) both equations, we
obtain the ultraweak variational formulation. The energy spaces have been switched. The solution lives
now in the L2 space, and the test function comes from the graph norm energy space for the adjoint.
We have two more obvious choices left. We can relax one of the equations and leave the other one in
the strong form. The purpose of this exercise is to study the formulation where we relax the second
equation (conservation law) only. Identify the energy spaces and show that the problem is equivalent
to the classical variational formulation discussed in Exercise 6.6.3 with the right-hand side equal to
178
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
f − (αij gj ),i . Discuss the relation between the two operators. Could you modify the standard norm
in the H 1 space in such a way that the inf-sup constants corresponding to the two formulations would
actually be identical ?
Follow the text.
Exercise 6.6.7 Analogously to Exercise 6.6.6, we relax now the first equation (constitutive law) only. We
arrive at the so-called mixed formulation,

σ ∈ HΓ2 (div, Ω), u ∈ L2 (Ω)




�
�

�
αij σj τi dx +
uτj,j dx =
gi τi dx
Ω
Ω
Ω

�
�




 (−σi,i + bi u,i + cu)v dx =
f v dx
Ω
Ω
τ ∈ HΓ2 (div, Ω)
v ∈ L2 (Ω)
Identify the corresponding conjugate operator. Discuss boundedness below (inf-sup conditions) for
both operators and the well-posedness of the formulation.
Follow the text.
6.7
Generalized Green’s Formulae for Operators on Hilbert Spaces
Exercises
Exercise 6.7.1 Consider the elastic beam equation
(EIw�� )�� = q
0<x<l
with the boundary conditions
w(0) = w� (0) = 0 and
w(l) = EIw�� (l) = 0
(a) Construct an equivalent variational formulation, identifying appropriate spaces.
(b) Use the Lax–Milgram Theorem to show that there exists a unique solution to this problem.
(a) Multiply the beam equation with a test function v, and integrate twice by parts to obtain
� l
� l
EIw�� v �� dx − [EIw�� v � ]|l0 + [(EIw�� )� v]|l0 =
qv dx
0
0
Boundary term EIw (l)v(l) vanishes because of the boundary condition on w. To eliminate the
��
remaining boundary terms, we assume additional boundary conditions on the test function,
v(0) = v � (0) = v(l) = 0
Hilbert Spaces
179
Finally, we identify the space of solutions, identical with the space of test functions as,
V = {w ∈ H 2 (0, l) : w(0) = w� (0) = w(l) = 0}
The variational formulation looks as follows,

Find w ∈ V, such that



� l
� l
�� ��


EIw
v
dx
=
qv dx

0
0
∀v ∈ V
By reversing the integration by parts procedure and utilizing the Fourier Lemma, we show that a
sufficiently regular solution to the variational problem solves the problem in the classical sense.
(b) Application of Cauchy–Schwarz inequality shows that the bilinear and linear forms are continuous,
|b(w, v)| = |
|l(v)| = |
�
�
l
0
w�� v �� dx| ≤ �w�� �L2 (0,l) �v �� |L2 (0,l) ≤ �w�H 2 (0,l) �v�H 2 (0,l)
l
0
qv dx| ≤ �q�L2 (0,l) �v�L2 (0,l) ≤ �q�L2 (0,l) �v�H 2 (0,l)
provided q ∈ L2 (0, l). In order to prove V -coercivity, we need a one-dimensional version of the
Poincaré inequality,
�v�L2 (0,l) ≤ �v � �L2 (0,l)
Indeed, let v ∈ C ∞ [0, l], v(0) = 0. Then
v(x) =
∀v ∈ H 1 (0, l) : v(0) = 0
�
x
v � (s) ds
0
Squaring both sides and integrating over the interval, we get
� l � x
� l� x
� x
� l
|v(x)|2 dx ≤
|
v � (s) ds|2 dx ≤
ds
|v � (s)|2 ds dx
0
0
0
� 0
�0 l 0 � l
l2 l �
�
2
2
≤
x dx
|v (s)| ds| dx ≤
|v (s)|2 ds
2
0
0
0
Using the density argument, we extend the inequality to functions v ∈ H 1 (0, l), v(0) = 0.
We apply now the Poincaré inequality to show the V -coercivity of the bilinear form,
2 � 2
4
4
�w �L2 (0,l) ≤ 4 �w�� �2L2 (0,l) =≤ 4 b(w, w)
l2
l
l
2
2
≤ 2 �w�� �2L2 (0,l) = 2 b(w, w)
l
l
�w�2L2 (0,l) ≤
�w� �2L2 (0,l)
Consequently,
2
4
+ 4 ) b(w, w) ∀w ∈ V
l2
l
We apply the Lax–Milgram Theorem to conclude that the variational problem has a unique solu�w�2H 2 (0,l) ≤ (1 +
tion w that depends continuosly upon the data, i.e., there exists a constant C > 0 such that
�w�H 2 (0,l) ≤ C�q�L2 (0,l)
180
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
The discussed boundary-value problem describes an elastic beam clamped on the left-hand side, pin
supported on the right-hand side, and loaded with a continuous load q(x), see Fig.6.2
Figure 6.2
Exercise 6.7.1: A stable beam problem.
Exercise 6.7.2 Consider again the elastic beam equation
(EIw�� )�� = q
0<x<l
with different boundary conditions
w(0) = EIw�� (0) = 0
EIw�� (l) = −Ml ,
(EIw�� )� (l) = Pl
(a) Construct an equivalent variational formulation, identifying appropriate spaces.
(b) Use the Lax–Milgram Theorem to establish existence and uniqueness result in an appropriate
quotient space. Derive and interpret the necessary and sufficient conditions for the distributed
load q(x), moment Ml , and force Pl to yield the existence result.
This is another beam problem but this time the beam is supported only with a pin support. The beam
is unstable as the supports do not eliminate the rigid body motion, the beam can rotate about the pin,
see Fig.6.3 The beam is loaded with a continuous load q(x), concentrated force Pl and concentrated
moment Ml acting at the end of the beam. We proceed along the same lines as in the previous problem
to obtain the variational formulation

Find w ∈ V, such that



� l
� l
�� ��


EIw
v
dx
=
qv dx + Pl v(l) + Ml v � (l)

0
0
where the space of kinematically admissible displacements
V = {w ∈ H 2 (0, l) : w(0) = 0}
∀v ∈ V
Hilbert Spaces
181
Figure 6.3
Exercise 6.7.1: An unstable beam problem.
In order to apply the Lax–Milgram Theorem, we have te reformulate the problem in terms of quotient
spaces. We introduce the space of (linearized) rigid body motions,
V0 = {w ∈ H 2 (0, l) : b(w, w) = 0} = {w(x) = ax, a ∈ R}
I
and consider the quotient space V /V0 (or, equivalently, orthogonal complement V0⊥ , comp. Exercise 6.6.1). The linear functional l([v]) = l(v + V0 ) is well defined iff its value is invariant with respect
to the rotations. This leads to the necessary and sufficient consition for the load
� l
q(x)x dx + Pl l + Ml = 0
0
identified as the condition that the total moment of active forces around the pin support must vanish.
The ultimate variational formulation looks then as follows,


 Find [w] ∈ V /V0 , such that

� l
� l
�� ��


EIw v dx =
qv dx + Pl v(l) + Ml v � (l)

0
0
or, using the orthogonal complements,

Find w ∈ V0⊥ , such that



� l
� l
�� ��


EIw
v
dx
=
qv dx + Pl v(l) + Ml v � (l)

0
0
∀[v] ∈ V /V0
∀v ∈ V0⊥
In order to demonstrate the V0⊥ -coercivity, we need to show only that
�v � � ≤ C�w�� �
∀w ∈ V0⊥ , C > 0
This can be done in many ways. For instance, the orthogonality condition
� 1
� l
w(x)x dx +
w� (x) dx = 0
0
0
can be transformed by integration by parts to
� l
l 2 − x2
] dx = 0
w� [1 +
2
0
182
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
By the Mean-Value Theorem of Integral Calculus, there must exist an intermediate point ξ such that
w� (ξ)[1 +
l2 − ξ 2
]=0
2
⇒
w� (ξ) = 0
An argument identical with the one used to derive the one-dimensional Poincaré inequality in Exercise 6.7.1, leads then to the inequality
�v � �2L2 (0,l) ≤
2 �� 2
�v �L2 (0,l)
l2
∀v ∈ V0⊥
and, consequently, to the coercivity result. The Lax–Milgram Theorem shows that the problem is
well-posed. The solution is determined only up to the rotations about the pin.
Exercise 6.7.3 Let u, v ∈ H 2 (0, l) and b(·, ·) denote the bilinear form
� l
(EIu�� v �� + P u� v + kuv) dx
b(u, v) =
0
where EI, P , and k are positive constants. The quadratic functional b(u, u) corresponds to twice the
strain energy in an elastic beam of length l with flexural rigidity EI, on elastic foundation with stiffness
k, and subjected to an axial load P .
(a) Determine the formal operator B associated with b(·, ·) and its formal adjoint B ∗ .
(b) Describe the spaces G, H, UB , VB ∗ , ∂U, ∂V for this problem. Identify the trace operators.
(c) Describe the Dirichlet and Neumann problems corresponding to operators B and B ∗ .
(d) Consider an example of a mixed boundary-value problem for operator B, construct the corresponding variational formulation, and discuss conditions under which this problem has a unique
solution.
We have:
U = V = H 2 (0, l)
G = H = L2 (0, l)
∂U = ∂V = R
I4
β : H 2 (0, l) � u → (u(0), u� (0), u(l), u� (l)) ∈ R
I4
γ=β
U0 = V0 = H02 (0, l) = {v ∈ H 2 (0, l) : v(0) = v � (0) = v(l) = v � (l) = 0}
The formal operators B and B ∗ are as follows,
B : H 2 (0, l) → (H02 (0, l))� =: H −2 (0, l)
Bu = (EIu�� )�� + P u� + ku
B ∗ : H 2 (0, l) → (H02 (0, l))� =: H −2 (0, l)
B ∗ v = (EIv �� )�� − (P v)� + kv
Hilbert Spaces
183
Space UB and VB ∗ are identical,
UB = {u ∈ H 2 (0, l) : Bu ∈ L2 (0, l)} = {u ∈ H 2 (0, l) : u���� ∈ L2 (0, l)} = H 4 (0, l)
VB ∗ = {v ∈ H 2 (0, l) : B ∗ v ∈ L2 (0, l)} = {v ∈ H 2 (0, l) : v ���� ∈ L2 (0, l)} = H 4 (0, l)
Dirichlet boundary operators for both operators B and B ∗ are
u → (u(0), u� (0), u(l), u� (l))
Neumann boundary operator for B is
u → (−(EIu�� )� (0), EIu�� (0), (EIu�� )� (l), −EIu�� (l))
Neumann boundary operator for B ∗ is
v → (−(EIv �� )� (0) + (P v � )(0), EIv �� (0), (EIv �� )� (l) − (P v � )(l), −EIv �� (l))
For an example of a mixed boundary-value problem, consider
u(0) = u� (0) = 0 and
(EIu�� )� (l) = Pl , −EIu�� (l) = Ml
(EIu�� )�� + P u� + ku = q
in (0, l)
The corresponding variational formulation is


 Find u ∈ W such that

 b(u, v) = l(v)
where
∀v ∈ W
W = {u ∈ H 2 (0, l) : u(0) = u� (l) = 0}
� l
b(u, v) =
(EIu�� v �� + P u� v + kuv) dx
� l 0
l(v) =
qv dx + Pl w(l) + Ml w� (l)
0
Cauchy–Schwartz inequality is used to prove the continuity of bilinear and linear forms. The bilinear
form is also W -coercive. Indeed,
� l
� l
�� 2
�
2
[(v ) + v v + v ] dx ≥
[(v �� )2 + v 2 ] dx
b(v, v) =
0
since
�
l
v � v dx =
0
0
� l�
0
1 2
v
2
��
dx =
�
�
1 2 l
1
v |0 = v(l)2 ≥ 0
2
2
Boundary condition v (0) = 0 and the Poincaré inquality are then used to show that �v � �2L2 (0,l) is
�
bounded above by
2
�� 2
l2 �v �L2 (0,l) .
Consequently, by Lax–Milgram Theorem, the variational problem
has a unique solution that depends continuosly upon the data.
184
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
Exercise 6.7.4 Consider the fourth-order boundary-value problem in two dimensions:
∂4u
∂4u
∂4u
+2 2 2 + 4 +u=f
4
∂x
∂x ∂y
∂y
∂u
= 0 on ∂Ω
u = 0,
∂n
def
∇2 ∇2 u + u =
in Ω
with f ∈ L2 (Ω). Construct a variational formulation of this problem, identifying the appropriate
spaces, and show that it has a unique solution. What could be a physical interpretation of the problem?
Recall that
∇ 2 ∇ 2 u = Δ2 u =
∂4u
∂4u
∂4u
+2 2 2 + 4
4
∂x
∂x ∂y
∂y
Integrating by parts, we get
�
� � 4
∂4u
∂ u
v dx dy
+
∂x4
∂x2 ∂y 2
Ω
�
�
� � 3
� � 3
∂ 3 u ∂v
∂3u
∂ u ∂v
∂ u
+
dx dy +
ny v ds
nx +
=−
∂x3 ∂x ∂x2 ∂y ∂y
∂x3
∂x2 ∂y
Ω
∂Ω
�
� � 2 2
∂2u ∂2v
∂ u∂ v
+
dx dy
=
∂x2 ∂x2
∂x2 ∂y 2
Ω
�
�
�
� � 3
�
∂v
∂3u
∂ u
∂ 2 u ∂v
n
n
n
+
n
+
ds
+
v ds
−
x
y
x
y
2
∂x
∂y
∂x3
∂x2 ∂y
∂Ω ∂x
∂Ω
Assuming boundary conditions v = ∂v/∂n = 0, we eliminate the boundary terms. Following similar
lines, we get
� �
Ω
∂4u
∂4u
+
∂x2 ∂y 2
∂y 4
Assuming the energy space as
�
v dx dy =
� �
Ω
V = {v ∈ H 2 (Ω) : v =
∂2u ∂2v
∂2u ∂2v
+
∂y 2 ∂x2
∂y 2 ∂y 2
dx dy
∂v
= 0 on ∂Ω}
∂n
we obtain the variational formulation


 Find u ∈ V such that
�
�

 (ΔuΔv + uv) dxdy =
f v dx dy
Ω
�
Ω
∀v ∈ V
Continuity of bilinear and linear forms follows immediately from Cauchy–Schwarz inequality, provided f ∈ L2 (Ω).
To prove coercivity, notice that v = 0 on ∂Ω implies vanishing of the tangential derivative ∂v/∂t = 0
and, since the normal derivative vanishes as well, both ∂v/∂x = ∂v/∂y = 0 on the boundary. This
implies
� �
Ω
�
∂2u ∂2v
∂2u ∂2v
dx dy
−
∂x2 ∂y 2
∂x∂y ∂x∂y
�
�
� � 2
� � 3
∂3u
∂2u
∂ u
∂ u
∂v
−
dx dy +
nx
ds = 0
n −
=
2 ∂y
2 ∂y
2 y
∂x
∂x
∂x
∂x∂y
∂y
Ω
∂Ω
Hilbert Spaces
185
Consequently, for u, v ∈ V ,
and, similarly,
�
We have thus
Ω
∂2u ∂2v
dx dy =
∂x2 ∂y 2
�
Ω
∂2u ∂2v
dx dy =
∂y 2 ∂x2
�
�
Ω
Ω
∂2u ∂2v
dx dy
∂x∂y ∂x∂y
∂2u ∂2v
dx dy
∂y∂x ∂y∂x
b(v, v) = |v|2H 2 (Ω)
∀v ∈ V
where the latter denotes the H 2 -seminorm. Vanishing of partial derivatives on the boundary, and the
application of the Poincaré inequality, reveals that
|v|2H 2 (Ω) ≥ C�
∂v 2
� 2
∂x L (Ω)
and
|v|2H 2 (Ω) ≥ C�
∂v 2
� 2
∂y L (Ω)
for some C > 0. Consequently, the bilinear form is V -coercive. By the Lax–Milgram Theorem, the
problem has a unique solution that depends continuosly upon f ∈ L2 (Ω).
The discussed boundary-value problem describes a clamped elastic plate on an elastic (Winkler) foundation.
Elements of Spectral Theory
6.8
Resolvent Set and Spectrum
Exercises
Exercise 6.8.1 Determine spectrum of operator A : U ⊃ D(A) → U where
R)
U = L2 (I
D(A) = H 1 (I
R)
Au = i
du
dx
Hint: Use Fourier transform (comp. Example 6.8.1).
We follow precisely Example 6.8.1. Solution to
λu − i
du
dx
is u(x) = Ce−iλx and, for C �= 0, it is not L2 -integrable over the entire real line. Consequently, the
point spectrum is empty.
186
APPLIED FUNCTIONAL ANALYSIS
Consider now the problem
λu − i
SOLUTION MANUAL
du
=v
dx
Applying the Fourier transform we get
(λ + ξ)û(ξ) = v̂(ξ)
which results in
1
|v̂(ξ)|2
(a + ξ)2 + b2
|û(ξ)|2 =
and
|
Case: b �= 0: Factor
1
(a+ξ)2 +b2
ˆ
du
ξ2
(ξ)|2 =
|v̂(ξ)|2
dx
(a + ξ)2 + b2
is bounded by
1
b2 ,
so
�û�L2 (R
I) ≤
1
�v̂�L2 (R
I)
b
and, since the Fourier transform preserves the L2 -norm, the same conclusion applies to functions u and
v. Thus the resolvent set r(A) contains all numbers with a non-zero imaginary part. Notice that, due to
the boundedness of
Case: b = 0: Factor
ξ2
(a+ξ)2 +b2 ,
1
(a+ξ)2
L -space. For instance, for
2
Rλ v = u ∈ D(A).
is now ubounded as ξ → −a, and the resolvent is not defined on the entire
v̂(ξ) =
�
1
|a + ξ| < 1
0
otherwise
R). For any
function û is not square integrable. However, the domain of the resolvent is dense in L2 (I
v̂ ∈ L2 (I
R), define
v̂n (ξ) =
�
v̂(ξ)
|a + ξ| ≥
0
otherwise
1
2n
Function |vn − v|2 converges pointwise to zero and it is dominated by integrable function |v|2 so, by
the Lebesgue Dominated Convergence Theorem, �v̂n − v̂�L2 (R
I ) → 0 as n → ∞. This proves that the
truncated functions are dense in the L2 -space.
But the resolvent is unbounded. Indeed, taking
�√
n
v̂n (ξ) =
0
|a + ξ| <
1
2n
otherwise
we see that �vn � = 1 and, at the same time,
�
R
I
|ûn (ξ)|2 dξ > n
�
1
−a+ 2n
1
−a− 2n
1
dξ > 4n2 → ∞
(a + ξ)2
Concluding, the operator has only a continuous spectrum that coincides with the whole real line.
Hilbert Spaces
6.9
187
Spectra of Continuous Operators. Fundamental Properties
Exercises
Exercise 6.9.1 Let X be a real normed space and X × X its complex extension (comp. Section 6.1). Let
A : X → X be a linear operator and let à denote its extension to the complex space defined as
Ã((u, v)) = (Au, Av)
Suppose that λ ∈ IC is an eigenvalue of à with a corresponding eigenvector w = (u, v). Show
that the complex conjugate λ̄ is an eigenvalue of à as well with the corresponding eigenvector equal
w̄ = (u, −v).
Exercise 6.9.2 Let U be a Banach space and let λ and µ be two different eigenvalues (λ �= µ) of an operator
A ∈ L(U, U ) and its transpose A� ∈ L(U � , U � ) with corresponding eigenvectors x ∈ U and g ∈ U � .
Show that
�g, x� = 0
6.10
Spectral Theory for Compact Operators
Exercises
Exercise 6.10.1 Let T be a compact operator from a Hilbert space U into a Hilbert space V . Show that:
(i) T ∗ T is a compact, self-adjoint, positive semi-definite operator from a space U into itself.
(ii) All eigenvalues of a self-adjoint operator on a Hilbert space are real.
Conclude that all eigenvalues of T ∗ T are real and nonnegative.
By Proposition 5.15.3??(ii), T ∗ T as a composition of a compact and a continuous operator is compact.
Since
(T ∗ T )∗ = T ∗ T ∗∗ = T ∗ T
T ∗ T is also self-adjoint. Finally,
(T ∗ T u, u)U = (T u, T u)V = �T u�2V ≥ 0
188
APPLIED FUNCTIONAL ANALYSIS
SOLUTION MANUAL
i.e., the operator is positive semi-definite.
Next, if A is a self-adjoint operator on a Hilbert space H, and (λ, e) is an eigenpair of A, we have
λ�e�2 = (λe, e) = (Ae, e) = (Ae, e) = (e, Ae) = (e, λe) = λ�e�2
which implies that λ = λ. Additionally, if A is positive semi-definite then
(Ae, e) = λ(e, e) ≥ 0
which implies λ ≥ 0.
6.11
Spectral Theory for Self-Adjoint Operators
Exercises
Exercise 6.11.1 Determine the spectral properties of the integral operator
� x� 1
u(η) dη dξ
(Au)(x) =
0
ξ
defined on the space U = L2 (0, 1).
The operator if self-adjoint. Indeed, integrating twice by parts, we get
� 1� x� 1
� 1� 1
� 1
u(η) dη dξ v̄(x) dx =
u(η) dη
v̄(ξ) dξ dx
0
0
ξ
�0 1 x � x � 1 x
=
u(x)
v̄(η) dη dξ dx
0
0
ξ
Second step in the reasoning above shows also that the operator is positive semi-definite.
The operator is also compact. Indeed,
� x� 1
� x� x
� x� 1
u(η) dη dξ =
u(η) dη dξ +
u(η) dη dξ
0
ξ
� 01 x
� x
�0 x �ξ η
dξu(η) dη +
u(η) dη
dξ
=
0
� 1 x
�0 x 0
ηu(η) dη + x
u(η) dη
=
x
�0 1
=
K(x, η)u(η) dη
0
where
K(x, η) =
�
η
x
0<η<x
x<η<1
Hilbert Spaces
189
is an L2 -integrable kernel (comp. Example 5.15.2).
In conclusion, spectrum of operator A consists of a sequence of positive eigenvalues converging to
zero. Upon a double differentiation of the eigenvalue problem,

Find λ ∈ (0, ∞), e ∈ L2 (I)



� x� 1


e(η) dη dξ = λe

0
ξ
we reduce it to the simple boundary-value problem,
−e = λe�� ,
which leads to the solution:
e(0) = e� (1) = 0,
�e�L2 (I) = 1
√
nπx
2 sin
2
The spectral representation of operator A reduces to an infinite series,
λn =
Au =
∞
�
λn (u, en )L2 (I) en =
n=1
4
n2 π 2
,
en (x) =
∞
�
nπx
8
un sin
2
2
n π
2
n=1
where un =
�
1
u(s) sin
0
nπs
ds
2
Exercise 6.11.2 Determine the spectral properties of the differential operator
Au = −u��
defined on the subspace D(A) of L2 (0, 1),
D(A) = {u ∈ H 2 (0, 1) : u(0) = u(1) = 0}
We have,
−
�
1
��
u v dx =
0
�
1
u� v �
0
dx = −
�
1
uv �� dx
0
provided v(0) = v(1) = 0. The operator is thus self-adjoint and positive definite. It is also easy to
calculate its inverse. Let f ∈ L2 (0, 1). Integrating
−u�� = f
we obtain
−u� (ξ) =
�
ξ
f (η) dη + C
0
where C = −u� (0) is an integration constant. Repeating the operation and utilizing boundary condition
u(0) = 0, we get
u(x) = −
�
0
x
�
ξ
0
f (η) dη dξ − Cx = −
�
0
x
(x − η) f (η) dη − Cx
Enforcing boundary condition: u(1) = 0, results in the final formula
� 1
�
� x
(x − η)f (η) dη +
(1 − η)f (η) dη =
u(x) = −
0
0
1
K(x, η)f (η) dη
0
190
APPLIED FUNCTIONAL ANALYSIS
where
K(x, η) =
�
1−x
1−η
SOLUTION MANUAL
0<η<x
x<η<1
As the kernel function K(x, η) is L2 -integrable, the inverse operator is compact (comp. Example 5.15.2). Consequently, by Theorem 6.10.1, spectrum of operator A consists of an increasing sequence of positive eigenvalues converging to infinity. Solving the eigenvalue problem


I e ∈ H 2 (I) such that
 Find λ ∈ R,
��

 −e = λe,
we get
λn = n2 π 2 ,
e(0) = e(1) = 0,
en (x) =
√
2 sin nπx,
�e�L2 (I) = 1
n = 1, 2, . . .
The spectral representation looks as follows:
Au =
∞
�
n=1
λn (u, en )L2 (I) en =
∞
�
n=1
2n2 π 2 un sin nπx
where un =
�
1
u(s) sin nπx ds
0
Download