PRINCIPLES OF MATHEMATICAL ANALYSIS.
WALTER RUDIN
10. Integration of Differential Forms
1. Let $H$ be a compact convex set in $\mathbb{R}^k$ with nonempty interior. Let $f \in \mathcal{C}(H)$, put $f(x) = 0$ in the complement of $H$, and define $\int_H f$ as in Definition 10.3. Prove that $\int_H f$ is independent of the order in which the $k$ integrations are carried out.
Hint: Approximate $f$ by functions that are continuous on $\mathbb{R}^k$ and whose supports are in $H$, as was done in Example 10.4.
2. For $i = 1, 2, 3, \dots$, let $\varphi_i \in \mathcal{C}(\mathbb{R}^1)$ have support in $(2^{-i}, 2^{1-i})$, such that $\int \varphi_i = 1$. Put
\[ f(x, y) = \sum_{i=1}^{\infty} \left[ \varphi_i(x) - \varphi_{i+1}(x) \right] \varphi_i(y). \]
Then $f$ has compact support in $\mathbb{R}^2$, $f$ is continuous except at $(0, 0)$, and
\[ \iint f(x, y)\, dx\, dy = 0 \quad\text{but}\quad \iint f(x, y)\, dy\, dx = 1. \]
Observe that $f$ is unbounded in every neighbourhood of $(0, 0)$.
3. (a) If $F$ is as in Theorem 10.7, put $A = F'(0)$, $F_1(x) = A^{-1}F(x)$. Then $F_1'(0) = I$. Show that
\[ F_1(x) = G_n \circ G_{n-1} \circ \cdots \circ G_1(x) \]
in some neighbourhood of $0$, for certain primitive mappings $G_1, \dots, G_n$. This gives another version of Theorem 10.7:
\[ F(x) = F'(0)\, G_n \circ G_{n-1} \circ \cdots \circ G_1(x). \]
(b) Prove that the mapping $(x, y) \mapsto (y, x)$ of $\mathbb{R}^2$ onto $\mathbb{R}^2$ is not the composition of any two primitive mappings, in any neighbourhood of the origin. (This shows that the flips $B_i$ cannot be omitted from the statement of Theorem 10.7.)
4. For $(x, y) \in \mathbb{R}^2$, define
\[ F(x, y) = (e^x \cos y - 1,\ e^x \sin y). \]
May 20, 2005. Solutions by Erin P. J. Pearse.
Prove that $F = G_2 \circ G_1$, where
\[ G_1(x, y) = (e^x \cos y - 1,\ y), \qquad G_2(u, v) = (u,\ (1 + u)\tan v) \]
are primitive in some neighbourhood of $(0, 0)$.
\begin{align*}
G_2 \circ G_1(x, y) &= G_2(e^x \cos y - 1,\ y) \\
&= \left( e^x \cos y - 1,\ (e^x \cos y)\,\frac{\sin y}{\cos y} \right) \\
&= (e^x \cos y - 1,\ e^x \sin y) \\
&= F(x, y).
\end{align*}
$G_1(x, y) = (g_1(x, y), y)$ and $G_2(u, v) = (u, g_2(u, v))$ are clearly primitive.
Compute the Jacobians of $G_1$, $G_2$, $F$ at $(0, 0)$.
\begin{align*}
J_{G_1}(0) &= \det G_1'(0)
= \begin{vmatrix} e^x \cos y & -e^x \sin y \\ 0 & 1 \end{vmatrix}_{(x,y)=(0,0)}
= \left. e^x \cos y \,\right|_{(x,y)=(0,0)} = 1 \cdot 1 = 1. \\
J_{G_2}(0) &= \begin{vmatrix} 1 & 0 \\ \tan v & (1+u)\sec^2 v \end{vmatrix}_{(u,v)=(0,0)}
= \left. (1+u)\sec^2 v \,\right|_{(u,v)=(0,0)} = 1. \\
J_F(0) &= \begin{vmatrix} e^x \cos y & -e^x \sin y \\ e^x \sin y & e^x \cos y \end{vmatrix}_{(x,y)=(0,0)}
= \left. e^{2x}\cos^2 y + e^{2x}\sin^2 y \,\right|_{(x,y)=(0,0)}
= \left. e^{2x} \,\right|_{(x,y)=(0,0)} = 1.
\end{align*}
Define
\[ H_2(x, y) = (x,\ e^x \sin y) \]
and find
\[ H_1(u, v) = (h(u, v),\ v) \]
so that $F = H_1 \circ H_2$ in some neighbourhood of $(0, 0)$.
We have
\begin{align*}
H_1 \circ H_2(x, y) &= H_1(x,\ e^x \sin y) \\
&= (h(x, e^x \sin y),\ e^x \sin y) \\
&= (e^x \cos y - 1,\ e^x \sin y)
\end{align*}
for $h(u, v) = e^u \cos(\arcsin(e^{-u} v)) - 1$, valid in the neighbourhood $\mathbb{R} \times \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ of $(0, 0)$. Indeed, for $(u, v) = (x, e^x \sin y)$,
\begin{align*}
e^u \cos(\arcsin(e^{-u} v)) - 1 &= e^x \cos(\arcsin(e^{-x} e^x \sin y)) - 1 \\
&= e^x \cos(\arcsin(\sin y)) - 1 \\
&= e^x \cos y - 1.
\end{align*}
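Both factorizations can be sanity-checked numerically. The following is a sketch, not part of the original solution; all maps are copied from the computations above, and the sample box around the origin is arbitrary:

```python
import math
import random

def F(x, y):
    return (math.exp(x) * math.cos(y) - 1, math.exp(x) * math.sin(y))

def G1(x, y):          # primitive: changes only the first coordinate
    return (math.exp(x) * math.cos(y) - 1, y)

def G2(u, v):          # primitive: changes only the second coordinate
    return (u, (1 + u) * math.tan(v))

def h(u, v):           # first coordinate of H1, as derived above
    return math.exp(u) * math.cos(math.asin(math.exp(-u) * v)) - 1

def H2(x, y):
    return (x, math.exp(x) * math.sin(y))

def H1(u, v):
    return (h(u, v), v)

random.seed(1)
for _ in range(100):
    x = random.uniform(-0.4, 0.4)
    y = random.uniform(-0.4, 0.4)
    fx, fy = F(x, y)
    g = G2(*G1(x, y))
    hh = H1(*H2(x, y))
    assert abs(g[0] - fx) < 1e-9 and abs(g[1] - fy) < 1e-9
    assert abs(hh[0] - fx) < 1e-9 and abs(hh[1] - fy) < 1e-9
```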
5. Formulate and prove an analogue of Theorem 10.8, in which K is a compact subset
of an arbitrary metric space. (Replace the functions ϕi that occur in the proof of
Theorem 10.8 by functions of the type constructed in Exercise 22 of Chap. 4.)
6. Strengthen the conclusion of Theorem 10.8 by showing that the functions $\psi_i$ can be made differentiable, even smooth (infinitely differentiable). (Use Exercise 1 of Chap. 8 in the construction of the auxiliary functions $\varphi_i$.)
7. (a) Show that the simplex $Q^k$ is the smallest convex subset of $\mathbb{R}^k$ that contains the points $0 = e_0, e_1, \dots, e_k$.
We first show that $Q^k$ is the convex hull of the points $0, e_1, \dots, e_k$, i.e.,
\[ x \in Q^k \iff x = \sum_{i=0}^{k} c_i e_i, \quad\text{with } 0 \le c_i \le 1, \quad \sum_{i=0}^{k} c_i = 1. \]
($\Rightarrow$) Consider $x = (x_1, \dots, x_k) \in Q^k$. Let $c_i = x_i$ for $i = 1, \dots, k$ and define $c_0 = 1 - \sum_{i=1}^{k} x_i$. Then $c_i \in [0, 1]$ for $i = 0, 1, \dots, k$ and $\sum_{i=0}^{k} c_i = 1$. Moreover,
\[ \sum_{i=0}^{k} c_i e_i = \left( 1 - \sum_{i=1}^{k} x_i \right) e_0 + \sum_{i=1}^{k} x_i e_i = \left( 1 - \sum_{i=1}^{k} x_i \right) 0 + \sum_{i=1}^{k} x_i e_i = (x_1, \dots, x_k) = x. \]
($\Leftarrow$) Now given $x$ as a convex combination $x = \sum_{i=0}^{k} c_i e_i$, write $x = (c_1, \dots, c_k)$. Then $0 \le c_i \le 1$ and
\[ \sum_{i=0}^{k} c_i = 1 \implies \sum_{i=1}^{k} c_i \le 1, \]
so $x \in Q^k$.
Now if $K$ is a convex set containing the points $0, e_1, \dots, e_k$, convexity implies it must also contain all points $x = \sum_{i=0}^{k} c_i e_i$, i.e., it must contain all of $Q^k$.
(b) Show that affine mappings take convex sets to convex sets.
An affine mapping T may always be represented as the composition of a linear
mapping S followed by a translation B. Translations (as congruences) obviously
preserve convexity, so it suffices to show that linear mappings do.
Let S be a linear mapping. If we have a convex set K with u, v ∈ K, then any
point between u, v is (1 − λ)u + λv for some λ ∈ [0, 1], and in the image of S,
the linearity of S gives
S ((1 − λ)u + λv) = (1 − λ)S(u) + λS(v),
which shows that any point between u and v gets mapped to a point between
S(u) and S(v), i.e., convexity is preserved.
8. Let H be the parallelogram in R2 whose vertices are (1, 1), (3, 2), (4, 5), (2, 4).
Find the affine map T which sends (0, 0) to (1, 1), (1, 0) to (3, 2), and (0, 1) to
(2, 4).
Since T = B◦S, where S is linear and B is a translation, let B(x) = x + (1, 1)
and find S which takes I 2 to (0, 0), (2, 1), (3, 4), (1, 3). Once we fix a basis (let’s
use the standard basis in Rk ), then S corresponds to a unique matrix A. Then
we can define $S$ in terms of its action on the basis of $\mathbb{R}^2$:
\[ A = \begin{pmatrix} \vert & \vert \\ S(e_1) & S(e_2) \\ \vert & \vert \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}. \]
Then $S(e_1) = S(1, 0) = (2, 1)^T$ and $S(e_2) = S(0, 1) = (1, 3)^T$, as you can check. Now for $b = (1, 1)$, we have
\[ T(x) = B \circ S(x) = Ax + b = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 2x_1 + x_2 + 1 \\ x_1 + 3x_2 + 1 \end{pmatrix}. \]
Show that $J_T = 5$.
Let $T_1(x_1, x_2) = 2x_1 + x_2 + 1$ and $T_2(x_1, x_2) = x_1 + 3x_2 + 1$. Then
\[ J_T = \begin{vmatrix} \frac{\partial T_1}{\partial x_1} & \frac{\partial T_1}{\partial x_2} \\[4pt] \frac{\partial T_2}{\partial x_1} & \frac{\partial T_2}{\partial x_2} \end{vmatrix} = \begin{vmatrix} 2 & 1 \\ 1 & 3 \end{vmatrix} = 6 - 1 = 5. \]
Use $T$ to convert the integral
\[ \alpha = \int_H e^{x-y}\, dx\, dy \]
to an integral over $I^2$ and thus compute $\alpha$.
$T$ satisfies all requirements of Thm 10.9, so we have
\begin{align*}
\alpha = \int_H e^{x-y}\, dx\, dy &= \int_{I^2} e^{(2x_1+x_2+1)-(x_1+3x_2+1)}\, |5|\, dx \\
&= 5 \int_0^1 \!\! \int_0^1 e^{x_1 - 2x_2}\, dx_1\, dx_2 \\
&= 5 \left( \int_0^1 e^{x_1}\, dx_1 \right) \left( \int_0^1 e^{-2x_2}\, dx_2 \right) \\
&= 5 \left[ e^{x_1} \right]_0^1 \left[ -\tfrac{1}{2} e^{-2x_2} \right]_0^1 \\
&= 5 (e - 1) \left( -\tfrac{1}{2} e^{-2} + \tfrac{1}{2} \right) \\
&= \tfrac{5}{2} (e - 1)(1 - e^{-2}) \\
&= \tfrac{5}{2} (e - 1 - e^{-1} + e^{-2}).
\end{align*}
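The change of variables can be checked numerically. This is a sketch only (the grid size and helper name `alpha_midpoint` are arbitrary): the midpoint rule approximates $5 \int_0^1\!\int_0^1 e^{x_1 - 2x_2}\, dx_1\, dx_2$ and is compared against the closed form just obtained.

```python
import math

def alpha_midpoint(n=200):
    """Approximate 5 * ∫_0^1 ∫_0^1 e^{x1 - 2 x2} dx1 dx2 by the midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x1 = (i + 0.5) * h
        for j in range(n):
            x2 = (j + 0.5) * h
            total += math.exp(x1 - 2.0 * x2)
    return 5.0 * total * h * h

# closed form: (5/2)(e - 1 - e^{-1} + e^{-2})
alpha_exact = 2.5 * (math.e - 1 - math.exp(-1) + math.exp(-2))
assert abs(alpha_midpoint() - alpha_exact) < 1e-3
```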
9. Define $(x, y) = T(r, \theta)$ on the rectangle
\[ 0 \le r \le a, \qquad 0 \le \theta \le 2\pi \]
by the equations
\[ x = r \cos \theta, \qquad y = r \sin \theta. \]
Show that $T$ maps this rectangle onto the closed disc $D$ with center at $(0, 0)$ and radius $a$, that $T$ is one-to-one in the interior of the rectangle, and that $J_T(r, \theta) = r$. If $f \in \mathcal{C}(D)$, prove the formula for integration in polar coordinates:
\[ \int_D f(x, y)\, dx\, dy = \int_0^a \!\! \int_0^{2\pi} f(T(r, \theta))\, r\, dr\, d\theta. \]
Hint: Let $D_0$ be the interior of $D$, minus the segment from $(0, 0)$ to $(0, a)$. As it stands, Theorem 10.9 applies to continuous functions $f$ whose support lies in $D_0$. To remove this restriction, proceed as in Example 10.4.
12. Let $I^k$ be the unit cube and $Q^k$ be the standard simplex in $\mathbb{R}^k$; i.e.,
\[ I^k = \{ u = (u_1, \dots, u_k) : 0 \le u_i \le 1,\ \forall i \}, \quad\text{and}\quad Q^k = \{ x = (x_1, \dots, x_k) : x_i \ge 0,\ \textstyle\sum x_i \le 1 \}. \]
Define $x = T(u)$ by
\begin{align*}
x_1 &= u_1 \\
x_2 &= (1 - u_1) u_2 \\
&\ \ \vdots \\
x_k &= (1 - u_1) \cdots (1 - u_{k-1}) u_k.
\end{align*}
Figure 1. The transform $T$, "in slow motion" (from the unit cube $I^k$ to the standard simplex $Q^k$).
Show that
\[ \sum_{i=1}^{k} x_i = 1 - \prod_{i=1}^{k} (1 - u_i). \]
This computation is similar to (28)–(30), in Partitions of Unity, p. 251. By induction, we begin with the (given) basis step $x_1 = u_1$, then assume
\[ \sum_{i=1}^{j} x_i = 1 - \prod_{i=1}^{j} (1 - u_i), \quad\text{for some } j < k. \]
Using this assumption immediately, we can write
\begin{align*}
\sum_{i=1}^{j+1} x_i &= \sum_{i=1}^{j} x_i + x_{j+1} \\
&= 1 - \prod_{i=1}^{j} (1 - u_i) + (1 - u_1) \cdots (1 - u_j) u_{j+1} \\
&= 1 - (1 - u_1) \cdots (1 - u_j) + (1 - u_1) \cdots (1 - u_j) u_{j+1} \\
&= 1 - (1 - u_1) \cdots (1 - u_j)(1 - u_{j+1}) \\
&= 1 - \prod_{i=1}^{j+1} (1 - u_i).
\end{align*}
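The identity is easy to spot-check numerically. A minimal sketch (the helper name `T` mirrors the map defined above; the random sample is arbitrary):

```python
import math
import random

def T(u):
    """x = T(u): x_i = (1-u_1)···(1-u_{i-1}) u_i."""
    x, prod = [], 1.0
    for ui in u:
        x.append(prod * ui)
        prod *= 1.0 - ui
    return x

random.seed(0)
for _ in range(1000):
    k = random.randint(1, 8)
    u = [random.random() for _ in range(k)]
    prod = math.prod(1.0 - ui for ui in u)
    # sum of the x_i telescopes to 1 - prod(1 - u_i)
    assert abs(sum(T(u)) - (1.0 - prod)) < 1e-12
```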
Show that $T$ maps $I^k$ onto $Q^k$, that $T$ is 1-1 in the interior of $I^k$, and that its inverse $S$ is defined in the interior of $Q^k$ by $u_1 = x_1$ and
\[ u_i = \frac{x_i}{1 - x_1 - \dots - x_{i-1}} \]
for $i = 2, \dots, k$.
Note that T is a continuous map (actually, smooth) because each of its component functions is a polynomial.
First, we show that T (∂I k ) = ∂Qk , so that the boundary of I k is mapped onto
the boundary of Qk . Injectivity and surjectivity for the interior will be given by
the existence of the inverse function.
Let $u \in \partial I^k$. Then for at least one $u_j$, either $u_j = 0$ or $u_j = 1$. Now $u_j = 0$ implies $x_j = 0$, so that $x \in E_j \subseteq \partial Q^k$. Meanwhile, $u_j = 1$ implies $\sum x_i = 1 - \prod (1 - u_i) = 1 - 0 = 1$, which puts $x \in E_0 \subseteq \partial Q^k$. Therefore, $T(\partial I^k) \subseteq \partial Q^k$.
Let $x \in \partial Q^k$. Then either $x_j = 0$ for some $1 \le j \le k$ (because $x \in E_j$), or else $\sum x_j = 1$ (because $x \in E_0$), or both.
case (i) Let $x_j = 0$, so $x = (x_1, \dots, 0, \dots, x_k)$, and take $x_\varepsilon = (x_1, \dots, \varepsilon, \dots, x_k)$, so $x_\varepsilon \in (Q^k)^\circ$. We have
\begin{align*}
T^{-1}(x_\varepsilon) &= \left( x_1,\ \frac{x_2}{1-x_1},\ \frac{x_3}{1-x_1-x_2},\ \dots,\ \frac{\varepsilon}{1-x_1-\dots-x_{j-1}},\ \frac{x_{j+1}}{1-x_1-\dots-x_{j-1}-\varepsilon},\ \dots,\ \frac{x_k}{1-x_1-\dots-x_{k-1}} \right) \\
&\xrightarrow{\ \varepsilon \to 0\ } \left( x_1,\ \frac{x_2}{1-x_1},\ \dots,\ 0,\ \frac{x_{j+1}}{1-x_1-\dots-x_{j-1}},\ \dots,\ \frac{x_k}{1-x_1-\dots-x_{k-1}} \right),
\end{align*}
where the last step uses the continuity guaranteed by the Inverse Function Theorem. Now check that for
\[ u = \left( x_1,\ \frac{x_2}{1-x_1},\ \frac{x_3}{1-x_1-x_2},\ \dots,\ 0,\ \frac{x_{j+1}}{1-x_1-\dots-x_{j-1}},\ \dots,\ \frac{x_k}{1-x_1-\dots-x_{k-1}} \right), \]
we do in fact have $T(u) = x$. Since $1 - u_i = \frac{1-x_1-\dots-x_i}{1-x_1-\dots-x_{i-1}}$ for each $i \ne j$, the products telescope:
\[ (1-u_1)(1-u_2)\cdots(1-u_{i-1})\, u_i = (1 - x_1 - \dots - x_{i-1}) \cdot \frac{x_i}{1-x_1-\dots-x_{i-1}} = x_i, \]
so
\begin{align*}
T(u) &= \left( x_1,\ x_2,\ x_3,\ \dots,\ x_{j-1},\ 0,\ (1 - x_1 - x_2 - \dots - x_{j-1})\frac{x_{j+1}}{1-x_1-\dots-x_j},\ \dots,\ x_k \right) \\
&= (x_1, x_2, x_3, \dots, x_{j-1}, 0, x_{j+1}, \dots, x_k) = x,
\end{align*}
using $x_j = 0$ in the last step.
case (ii) $\sum_{i=1}^{k} x_i = 1$. Then by the previous part, $1 - \prod_{i=1}^{k} (1 - u_i) = 1$, so $\prod_{i=1}^{k} (1 - u_i) = 0$. This can only happen if one of the factors is 0, i.e., if one of the $u_i$ is 1. This puts $u \in \partial I^k$.
For the interior, we have an inverse.
\begin{align*}
S \circ T(u) &= S(u_1,\ (1-u_1)u_2,\ \dots,\ (1-u_1)(1-u_2)\cdots u_k) \\
&= \left( u_1,\ \frac{(1-u_1)u_2}{1-u_1},\ \dots,\ \frac{(1-u_1)(1-u_2)\cdots(1-u_{k-1})u_k}{1 - u_1 - (1-u_1)u_2 - \dots - (1-u_1)(1-u_2)\cdots u_{k-1}} \right) \\
&= (u_1, u_2, \dots, u_k) = u,
\end{align*}
where the third equality follows by successive distributions against the last factor:
\[ [(1-u_1)(1-u_2)\cdots(1-u_{k-2})](1-u_{k-1}) = (1-u_1)\cdots(1-u_{k-2}) - (1-u_1)\cdots(1-u_{k-2})u_{k-1}. \]
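The pair $(T, S)$ can also be checked numerically. A sketch (helper names `T` and `S` are ad hoc; the sample range keeps the points safely inside the interior):

```python
import random

def T(u):
    """Exercise 12's map: x_i = (1-u_1)···(1-u_{i-1}) u_i."""
    x, prod = [], 1.0
    for ui in u:
        x.append(prod * ui)
        prod *= 1.0 - ui
    return x

def S(x):
    """The claimed inverse: u_1 = x_1, u_i = x_i / (1 - x_1 - ... - x_{i-1})."""
    u, partial = [], 0.0
    for xi in x:
        u.append(xi / (1.0 - partial))
        partial += xi
    return u

random.seed(2)
for _ in range(1000):
    u = [random.uniform(0.05, 0.8) for _ in range(5)]
    v = S(T(u))
    assert max(abs(ai - bi) for ai, bi in zip(u, v)) < 1e-9
```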
Injectivity may also be shown directly as follows: let $u$ and $v$ be distinct points of $(I^k)^\circ$, with $T(u) = x$ and $T(v) = y$. We want to show $x \ne y$. From the previous part, we have $T^{-1}(\partial Q^k) \subseteq \partial I^k$, so $x, y \in (Q^k)^\circ$. Let the $j$th coordinate be the first one in which $u$ differs from $v$, so that $u_i = v_i$ for $i < j$, but $u_j \ne v_j$. Then
\[ x_j = (1-u_1)\cdots(1-u_{j-1})u_j = (1-v_1)\cdots(1-v_{j-1})u_j \ne (1-v_1)\cdots(1-v_{j-1})v_j = y_j, \]
since each factor $1 - v_i$ is nonzero in the interior. So $x \ne y$.
Show that
\[ J_T(u) = (1-u_1)^{k-1}(1-u_2)^{k-2} \cdots (1-u_{k-1}), \]
and
\[ J_S(x) = [(1-x_1)(1-x_1-x_2)\cdots(1-x_1-\dots-x_{k-1})]^{-1}. \]
Since $\partial x_i / \partial u_j = 0$ for $j > i$, the matrix $T'(u)$ is lower triangular, with diagonal entries $\partial x_i / \partial u_i = (1-u_1)\cdots(1-u_{i-1})$:
\[ J_T(u) = \begin{vmatrix} 1 & 0 & 0 & \dots & 0 \\ -u_2 & 1-u_1 & 0 & \dots & 0 \\ -(1-u_2)u_3 & -(1-u_1)u_3 & (1-u_1)(1-u_2) & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -(1-u_2)\cdots u_k & \dots & \dots & \dots & (1-u_1)\cdots(1-u_{k-1}) \end{vmatrix} \]
\[ = 1 \cdot (1-u_1) \cdot (1-u_1)(1-u_2) \cdots (1-u_1)(1-u_2)\cdots(1-u_{k-1}) = (1-u_1)^{k-1}(1-u_2)^{k-2}\cdots(1-u_{k-1}). \]
Similarly, $S'(x)$ is lower triangular, with diagonal entries $\partial u_i / \partial x_i = (1 - x_1 - \dots - x_{i-1})^{-1}$ and subdiagonal entries $\partial u_i / \partial x_j = x_i (1 - x_1 - \dots - x_{i-1})^{-2}$ for $j < i$:
\[ J_S(x) = \begin{vmatrix} 1 & 0 & 0 & \dots & 0 \\ \frac{x_2}{(1-x_1)^2} & \frac{1}{1-x_1} & 0 & \dots & 0 \\ \frac{x_3}{(1-x_1-x_2)^2} & \frac{x_3}{(1-x_1-x_2)^2} & \frac{1}{1-x_1-x_2} & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{x_k}{(1-x_1-\dots-x_{k-1})^2} & \dots & \dots & \dots & \frac{1}{1-x_1-\dots-x_{k-1}} \end{vmatrix} = 1 \cdot \frac{1}{1-x_1} \cdot \frac{1}{1-x_1-x_2} \cdots \frac{1}{1-x_1-\dots-x_{k-1}} \]
\[ = [(1-x_1)(1-x_1-x_2)\cdots(1-x_1-\dots-x_{k-1})]^{-1}. \]
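Since $S = T^{-1}$, the chain rule forces $J_T(u)\, J_S(T(u)) = 1$, which gives a quick consistency test of the two formulas. A numerical sketch (the helper names `T`, `JT`, `JS` are ad hoc):

```python
import math
import random

def T(u):
    """Exercise 12's map: x_i = (1-u_1)···(1-u_{i-1}) u_i."""
    x, prod = [], 1.0
    for ui in u:
        x.append(prod * ui)
        prod *= 1.0 - ui
    return x

def JT(u):
    """J_T(u) = (1-u_1)^{k-1} (1-u_2)^{k-2} ··· (1-u_{k-1})."""
    k = len(u)
    return math.prod((1.0 - u[i]) ** (k - 1 - i) for i in range(k))

def JS(x):
    """J_S(x) = [(1-x_1)(1-x_1-x_2)···(1-x_1-...-x_{k-1})]^{-1}."""
    partial, prod = 0.0, 1.0
    for xi in x[:-1]:
        partial += xi
        prod *= 1.0 - partial
    return 1.0 / prod

random.seed(3)
for _ in range(200):
    u = [random.uniform(0.05, 0.8) for _ in range(5)]
    assert abs(JT(u) * JS(T(u)) - 1.0) < 1e-9
```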
13. Let $r_1, \dots, r_k$ be nonnegative integers, and prove that
\[ \int_{Q^k} x_1^{r_1} \cdots x_k^{r_k}\, dx = \frac{r_1! \cdots r_k!}{(k + r_1 + \dots + r_k)!}. \]
Hint: Use Exercise 12, Theorems 10.9 and 8.20. Note that the special case $r_1 = \dots = r_k = 0$ shows that the volume of $Q^k$ is $1/k!$.
Theorem 8.20 gives that for $\alpha, \beta > 0$ we have
\[ \int_0^1 t^{\alpha-1}(1-t)^{\beta-1}\, dt = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}. \]
Theorem 10.9 says when $T$ is a 1-1 mapping of an open set $E \subseteq \mathbb{R}^k$ into $\mathbb{R}^k$ with invertible Jacobian, then
\[ \int_{\mathbb{R}^k} f(y)\, dy = \int_{\mathbb{R}^k} f(T(x))\, |J_T(x)|\, dx \]
for any $f \in \mathcal{C}(E)$. Since we always have $\int_{\mathrm{cl}(E)} f = \int_{\mathrm{int}(E)} f$, we can apply this theorem to the $T$ given in the previous exercise:
\begin{align*}
\int_{Q^k} x_1^{r_1} \cdots x_k^{r_k}\, dx
&= \int_{I^k} T_1(u)^{r_1}\, T_2(u)^{r_2} \cdots T_k(u)^{r_k}\, |J_T(u)|\, du \\
&= \int_{I^k} u_1^{r_1} [(1-u_1)u_2]^{r_2} \cdots [(1-u_1)\cdots(1-u_{k-1})u_k]^{r_k} \cdot (1-u_1)^{k-1}(1-u_2)^{k-2}\cdots(1-u_{k-1})\, du \\
&= \int_{I^k} u_1^{r_1} u_2^{r_2} \cdots u_k^{r_k}\, (1-u_1)^{r_2+\dots+r_k+k-1}(1-u_2)^{r_3+\dots+r_k+k-2} \cdots (1-u_{k-2})^{r_{k-1}+r_k+2}(1-u_{k-1})^{r_k+1}\, du \\
&= \int_I u_1^{r_1}(1-u_1)^{r_2+\dots+r_k+k-1}\, du_1 \int_I u_2^{r_2}(1-u_2)^{r_3+\dots+r_k+k-2}\, du_2 \\
&\qquad \cdots \int_I u_{k-1}^{r_{k-1}}(1-u_{k-1})^{r_k+1}\, du_{k-1} \int_I u_k^{r_k}\, du_k.
\end{align*}
Now we can use Theorem 8.20 to evaluate each of these, using the respective values
\[ \alpha_j = 1 + r_j, \qquad \beta_j = r_{j+1} + \dots + r_k + k - j + 1, \]
so that, except for the final factor
\[ \int_I u_k^{r_k}\, du_k = \left[ \frac{u_k^{r_k+1}}{r_k+1} \right]_0^1 = \frac{1}{r_k+1}, \]
the above product of integrals becomes a product of gamma functions:
\[ = \frac{\Gamma(1+r_1)\Gamma(r_2+\dots+r_k+k)}{\Gamma(1+r_1+r_2+\dots+r_k+k)} \cdot \frac{\Gamma(1+r_2)\Gamma(r_3+\dots+r_k+k-1)}{\Gamma(r_2+r_3+\dots+r_k+k)} \cdot \frac{\Gamma(1+r_3)\Gamma(r_4+\dots+r_k+k-2)}{\Gamma(r_3+r_4+\dots+r_k+k-1)} \cdots \frac{\Gamma(1+r_{k-1})\Gamma(r_k+2)}{\Gamma(1+r_{k-1}+r_k+2)} \cdot \frac{1}{r_k+1}; \]
then making liberal use of $\Gamma(1+r_j) = r_j!$ from Theorem 8.18(b), and noting that the second gamma factor in each numerator cancels the denominator of the following factor, we have
\[ = \frac{r_1! \cdots r_{k-1}!}{(r_1+\dots+r_k+k)!} \cdot \frac{\Gamma(r_2+\dots+r_k+k)}{\Gamma(r_2+\dots+r_k+k)} \cdot \frac{\Gamma(r_3+\dots+r_k+k-1)}{\Gamma(r_3+\dots+r_k+k-1)} \cdots \frac{\Gamma(r_k+2)}{r_k+1}. \]
Now making use of $x\Gamma(x) = \Gamma(x+1)$ from Theorem 8.19(a) to simplify the final factor, we have
\[ \frac{\Gamma(r_k+2)}{r_k+1} = \frac{\Gamma((r_k+1)+1)}{r_k+1} = \frac{(r_k+1)\Gamma(r_k+1)}{r_k+1} = \Gamma(r_k+1) = r_k!, \]
so that the final formula becomes
\[ \int_{Q^k} x_1^{r_1} \cdots x_k^{r_k}\, dx = \frac{r_1! \cdots r_k!}{(r_1+\dots+r_k+k)!}. \]
Note: if we take $r_j = 0$, $\forall j$, then this formula becomes
\[ \int_{Q^k} 1\, dx = \frac{0! \cdots 0!}{(0+\dots+0+k)!} = \frac{1}{k!}, \]
showing that the volume of $Q^k$ is $1/k!$.
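The factorial formula is easy to test numerically for $k = 2$ by pushing the integral through Exercise 12's substitution $x = u_1$, $y = (1-u_1)u_2$, $|J_T| = 1-u_1$. A sketch only (the helper names and grid size are arbitrary):

```python
import math

def simplex_moment_exact(r1, r2):
    """r1! r2! / (2 + r1 + r2)! — the formula just proved, for k = 2."""
    return math.factorial(r1) * math.factorial(r2) / math.factorial(2 + r1 + r2)

def simplex_moment_midpoint(r1, r2, n=400):
    """∫_{Q²} x^{r1} y^{r2} dx dy, computed over I² by the midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        u1 = (i + 0.5) * h
        for j in range(n):
            u2 = (j + 0.5) * h
            total += u1 ** r1 * ((1.0 - u1) * u2) ** r2 * (1.0 - u1)
    return total * h * h

assert abs(simplex_moment_midpoint(1, 2) - simplex_moment_exact(1, 2)) < 1e-5
assert abs(simplex_moment_midpoint(0, 0) - 0.5) < 1e-8   # vol(Q²) = 1/2! = 1/2
```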
15. If $\omega$ and $\lambda$ are $k$- and $m$-forms, respectively, prove that $\omega \wedge \lambda = (-1)^{km} \lambda \wedge \omega$.
Write the given forms as
\[ \omega = \sum_I a_{i_1,\dots,i_k}(x)\, dx_{i_1} \wedge \dots \wedge dx_{i_k} = \sum_I a_I\, dx_I \quad\text{and}\quad \lambda = \sum_J b_{j_1,\dots,j_m}(x)\, dx_{j_1} \wedge \dots \wedge dx_{j_m} = \sum_J b_J\, dx_J. \]
Then we have
\[ \omega \wedge \lambda = \sum_{I,J} a_I(x)\, b_J(x)\, dx_I \wedge dx_J \quad\text{and}\quad \lambda \wedge \omega = \sum_{I,J} a_I(x)\, b_J(x)\, dx_J \wedge dx_I, \]
so it suffices to show that for every summand,
\[ dx_I \wedge dx_J = (-1)^{km}\, dx_J \wedge dx_I. \]
Making repeated use of anticommutativity (Eq. (46) on p. 256),
\begin{align*}
dx_I \wedge dx_J &= dx_{i_1} \wedge \dots \wedge dx_{i_k} \wedge dx_{j_1} \wedge \dots \wedge dx_{j_m} \\
&= (-1)^k\, dx_{j_1} \wedge dx_{i_1} \wedge \dots \wedge dx_{i_k} \wedge dx_{j_2} \wedge \dots \wedge dx_{j_m} \\
&= (-1)^{2k}\, dx_{j_1} \wedge dx_{j_2} \wedge dx_{i_1} \wedge \dots \wedge dx_{i_k} \wedge dx_{j_3} \wedge \dots \wedge dx_{j_m} \\
&\ \ \vdots \\
&= (-1)^{mk}\, dx_{j_1} \wedge \dots \wedge dx_{j_m} \wedge dx_{i_1} \wedge \dots \wedge dx_{i_k} = (-1)^{km}\, dx_J \wedge dx_I.
\end{align*}
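The sign $(-1)^{km}$ is just the parity of the block-swap permutation that moves the $m$ indices of $J$ past the $k$ indices of $I$. A small combinatorial sketch (the helper `perm_sign` is ad hoc):

```python
def perm_sign(p):
    """Sign of a permutation of 0..n-1, via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

for k in range(5):
    for m in range(5):
        # permutation sending (I-block, J-block) to (J-block, I-block)
        block_swap = list(range(k, k + m)) + list(range(k))
        assert perm_sign(block_swap) == (-1) ** (k * m)
```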
16. If $k \ge 2$ and $\sigma = [p_0, p_1, \dots, p_k]$ is an oriented affine $k$-simplex, prove that $\partial^2 \sigma = 0$, directly from the definition of the boundary operator $\partial$. Deduce from this that $\partial^2 \Psi = 0$ for every chain $\Psi$.
Denote $\sigma_i = [p_0, \dots, p_{i-1}, p_{i+1}, \dots, p_k]$, and for $i < j$, let $\sigma_{ij}$ be the $(k-2)$-simplex obtained by deleting $p_i$ and $p_j$ from $\sigma$.
Now we use Eq. (85) to compute
\[ \partial \sigma = \sum_{j=0}^{k} (-1)^j \sigma_j. \]
Now we apply (85) again to the above result and get
\begin{align*}
\partial^2 \sigma = \partial(\partial \sigma) = \partial\!\left( \sum_{j=0}^{k} (-1)^j \sigma_j \right) &= \sum_{j=0}^{k} (-1)^j\, \partial \sigma_j \\
&= \sum_{j=0}^{k} (-1)^j \sum_{i=0}^{k-1} (-1)^i \sigma_{ij} \\
&= \sum_{j=0}^{k} \sum_{i=0}^{k-1} (-1)^{i+j} \sigma_{ij}.
\end{align*}
Lemma. For $i < j$, $\sigma_{ij} = \sigma_{j-1,i}$.
Proof. These both correspond to removing the same points $p_i, p_j$, just in different orders (because $p_j$ is the $(j-1)$th vertex of $\sigma_i$):
\begin{align*}
\sigma = [p_0, \dots, p_i, \dots, p_j, \dots, p_k] &\mapsto \sigma_j = [p_0, \dots, p_i, \dots, \hat{p}_j, \dots, p_k] \mapsto \sigma_{ij} = [p_0, \dots, \hat{p}_i, \dots, \hat{p}_j, \dots, p_k], \\
\sigma = [p_0, \dots, p_i, \dots, p_j, \dots, p_k] &\mapsto \sigma_i = [p_0, \dots, \hat{p}_i, \dots, p_j, \dots, p_k] \mapsto \sigma_{j-1,i} = [p_0, \dots, \hat{p}_i, \dots, \hat{p}_j, \dots, p_k].
\end{align*}
Thus we split the previous sum along the "diagonal":
\[ \partial^2 \sigma = \underbrace{\sum_{i \ge j = 0}^{k-1} (-1)^{i+j} \sigma_{ij}}_{A} + \underbrace{\sum_{i < j = 1}^{k} (-1)^{i+j} \sigma_{ij}}_{B}. \]
Continuing on with $B$ for a bit, we have
\begin{align*}
B = \sum_{i < j = 1}^{k} (-1)^{i+j} \sigma_{ij} &= \sum_{i < j = 1}^{k} (-1)^{i+j} \sigma_{j-1,i} && \text{by lemma} \\
&= \sum_{j=1}^{k} \sum_{i=0}^{j-1} (-1)^{i+j} \sigma_{j-1,i} && \text{rewriting the sum} \\
&= \sum_{i=1}^{k} \sum_{j=0}^{i-1} (-1)^{i+j} \sigma_{i-1,j} && \text{swap dummies } i \leftrightarrow j \\
&= \sum_{i=0}^{k-1} \sum_{j=0}^{i} (-1)^{i+j+1} \sigma_{i,j} && \text{reindex } i \mapsto i+1 \\
&= (-1) \sum_{j=0}^{k-1} \sum_{i=j}^{k-1} (-1)^{i+j} \sigma_{i,j} && \text{see Figure 2} \\
&= -A.
\end{align*}
Figure 2. Reindexing the double sum over $i, j$.
Whence
\[ \partial^2 \sigma = A + B = A - A = 0. \]
In order to see how this works with the actual faces, define $F_k^i : Q^{k-1} \to Q^k$ for $i = 0, \dots, k$ to be the affine map $F_k^i = (e_0, \dots, \hat{e}_i, \dots, e_k)$, where $\hat{e}_i$ means omit $e_i$, i.e.,
\[ F_k^i(e_j) = \begin{cases} e_j, & j < i \\ e_{j+1}, & j \ge i. \end{cases} \]
Then define the $i$th face of $\sigma$ to be $\sigma_i = \sigma \circ F_k^i$. Then
Figure 3. The mappings $F_k^i$ and $\sigma$, for $k = 2$.
\begin{align*}
\partial^2 \sigma = \partial(\partial \sigma) &= \partial(\sigma_0 - \sigma_1 + \sigma_2) \\
&= \partial(\sigma_0) - \partial(\sigma_1) + \partial(\sigma_2) \\
&= (p_2 - p_1) - (p_2 - p_0) + (p_1 - p_0) \\
&= 0.
\end{align*}
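The cancellation can also be watched combinatorially. A sketch (the helper `boundary` is ad hoc): representing an oriented affine simplex by its tuple of vertices, the boundary operator of Eq. (85) acts on formal integer combinations, and applying it twice always yields the empty chain.

```python
from collections import defaultdict

def boundary(chain):
    """Apply Eq. (85): ∂ of an integer combination of oriented affine
    simplices, each stored as a tuple of vertex labels."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for j in range(len(simplex)):
            face = simplex[:j] + simplex[j + 1:]
            out[face] += (-1) ** j * coeff
    return {s: c for s, c in out.items() if c != 0}

# ∂²σ = 0 for simplices of several dimensions
for k in range(2, 6):
    sigma = {tuple(range(k + 1)): 1}
    assert boundary(boundary(sigma)) == {}
```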
17. Put $J^2 = \tau_1 + \tau_2$, where
\[ \tau_1 = [0, e_1, e_1 + e_2], \qquad \tau_2 = -[0, e_2, e_2 + e_1]. \]
Explain why it is reasonable to call $J^2$ the positively oriented unit square in $\mathbb{R}^2$. Show that $\partial J^2$ is the sum of 4 oriented affine 1-simplices. Find these.
$\tau_1$ has $p_0 = 0$, $p_1 = e_1$, $p_2 = e_1 + e_2$, with arrows pointing toward higher index. $\tau_2$ has $p_0 = 0$, $p_1 = e_2$, $p_2 = e_2 + e_1$, with arrows pointing toward lower index.
Figure 4. $\tau_1$ and $-\tau_2$.
Figure 5. $\tau_2$ and $\tau_1 + \tau_2$.
So
\begin{align*}
\partial(\tau_1 + \tau_2) &= \partial[0, e_1, e_1+e_2] - \partial[0, e_2, e_2+e_1] \\
&= [e_1, e_1+e_2] - [0, e_1+e_2] + [0, e_1] - [e_2, e_2+e_1] + [0, e_2+e_1] - [0, e_2] \\
&= [0, e_1] + [e_1, e_1+e_2] + [e_2+e_1, e_2] + [e_2, 0].
\end{align*}
So the image of $\tau_1 + \tau_2$ is the unit square, and it is oriented counterclockwise, i.e., positively.
What is $\partial(\tau_1 - \tau_2)$?
\begin{align*}
\partial(\tau_1 - \tau_2) &= \partial[0, e_1, e_1+e_2] + \partial[0, e_2, e_2+e_1] \\
&= [e_1, e_1+e_2] - [0, e_1+e_2] + [0, e_1] + [e_2, e_2+e_1] - [0, e_2+e_1] + [0, e_2] \\
&= [0, e_1] + [e_1, e_1+e_2] - [e_2+e_1, e_2] - [e_2, 0] - 2[0, e_2+e_1].
\end{align*}
20. State conditions under which the formula
\[ \int_\Phi f\, d\omega = \int_{\partial \Phi} f \omega - \int_\Phi (df) \wedge \omega \]
is valid, and show that it generalizes the formula for integration by parts. Hint: $d(f\omega) = (df) \wedge \omega + f\, d\omega$.
By the definition of $d$, we have
\[ d(f\omega) = d(f \wedge \omega) = (df \wedge \omega) + (-1)^0 (f \wedge d\omega) = (df) \wedge \omega + f\, d\omega. \]
Integrating both sides yields
\[ \int_\Phi d(f\omega) = \int_\Phi (df) \wedge \omega + \int_\Phi f\, d\omega. \]
If the conditions of Stokes' Theorem are met, i.e., if $\Phi$ is a $k$-chain of class $\mathcal{C}''$ in an open set $V \subseteq \mathbb{R}^n$ and $\omega$ is a $(k-1)$-form of class $\mathcal{C}'$ in $V$, then we may apply it to the left side and get
\[ \int_{\partial \Phi} f \omega = \int_\Phi (df) \wedge \omega + \int_\Phi f\, d\omega, \]
which is equivalent to the given formula.
21. Consider the 1-form
\[ \eta = \frac{x\, dy - y\, dx}{x^2 + y^2} \quad\text{in } \mathbb{R}^2 \setminus \{0\}. \]
(a) Show $\int_\gamma \eta = 2\pi \ne 0$ but $d\eta = 0$.
Put $\gamma(t) = (r \cos t, r \sin t)$ as in the example. Then
\[ x^2 + y^2 \mapsto r^2, \qquad dx \mapsto -r \sin t\, dt, \qquad dy \mapsto r \cos t\, dt, \]
and the integral becomes
\begin{align*}
\int_\gamma \eta &= \int_0^{2\pi} -\frac{r \sin t}{r^2}(-r \sin t)\, dt + \int_0^{2\pi} \frac{r \cos t}{r^2}(r \cos t)\, dt \\
&= \int_0^{2\pi} (\sin^2 t + \cos^2 t)\, dt = \int_0^{2\pi} dt = 2\pi \ne 0.
\end{align*}
However, using $df = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\, dx_i$, we compute
\begin{align*}
d\eta &= d\!\left( \frac{x\, dy}{x^2+y^2} - \frac{y\, dx}{x^2+y^2} \right) = d\!\left( \frac{x\, dy}{x^2+y^2} \right) - d\!\left( \frac{y\, dx}{x^2+y^2} \right) \\
&= d\!\left( \frac{x}{x^2+y^2} \right) \wedge dy + \frac{x}{x^2+y^2} \wedge d(dy) - d\!\left( \frac{y}{x^2+y^2} \right) \wedge dx - \frac{y}{x^2+y^2} \wedge d(dx) \\
&= \left( \frac{(x^2+y^2) \cdot 1 - x(2x)}{(x^2+y^2)^2}\, dx + \frac{(x^2+y^2) \cdot 0 - x(2y)}{(x^2+y^2)^2}\, dy \right) \wedge dy \\
&\qquad - \left( \frac{(x^2+y^2) \cdot 0 - y(2x)}{(x^2+y^2)^2}\, dx + \frac{(x^2+y^2) \cdot 1 - y(2y)}{(x^2+y^2)^2}\, dy \right) \wedge dx \\
&= \frac{1}{(x^2+y^2)^2} \Big( (y^2 - x^2)\, dx \wedge dy - 2xy\, dy \wedge dy + 2xy\, dx \wedge dx - (x^2 - y^2)\, dy \wedge dx \Big) \\
&= \frac{1}{(x^2+y^2)^2} \Big( (y^2 - x^2)\, dx \wedge dy + (x^2 - y^2)\, dx \wedge dy \Big) \\
&= \frac{1}{(x^2+y^2)^2} (y^2 - x^2 + x^2 - y^2)\, dx \wedge dy = 0\, dx \wedge dy = 0.
\end{align*}
Figure 6. The homotopy from $\Gamma$ to $\gamma$.
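Both facts in (a) can be spot-checked numerically. A sketch (the helper `eta` returns the coefficients $P, Q$ of $\eta = P\,dx + Q\,dy$; the sample points and grid size are arbitrary):

```python
import math

def eta(x, y):
    """Components (P, Q) of η = (x dy - y dx)/(x² + y²)."""
    r2 = x * x + y * y
    return -y / r2, x / r2

# dη = (∂Q/∂x - ∂P/∂y) dx∧dy should vanish: check by central differences.
h = 1e-5
for x0, y0 in [(1.0, 0.3), (-0.7, 2.0), (0.5, -0.5)]:
    dQdx = (eta(x0 + h, y0)[1] - eta(x0 - h, y0)[1]) / (2 * h)
    dPdy = (eta(x0, y0 + h)[0] - eta(x0, y0 - h)[0]) / (2 * h)
    assert abs(dQdx - dPdy) < 1e-6

# ∫_γ η over γ(t) = (r cos t, r sin t), by a midpoint Riemann sum.
n, r = 1000, 2.0
total = 0.0
for i in range(n):
    t = 2 * math.pi * (i + 0.5) / n
    P, Q = eta(r * math.cos(t), r * math.sin(t))
    total += (P * (-r * math.sin(t)) + Q * (r * math.cos(t))) * (2 * math.pi / n)
assert abs(total - 2 * math.pi) < 1e-9
```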
(b) With $\gamma$ as in (a), let $\Gamma$ be a $\mathcal{C}''$ curve in $\mathbb{R}^2 \setminus \{0\}$, with $\Gamma(0) = \Gamma(2\pi)$, such that the intervals $[\gamma(t), \Gamma(t)]$ do not contain $0$ for any $t \in [0, 2\pi]$. Prove that $\int_\Gamma \eta = 2\pi$.
For $0 \le t \le 2\pi$ and $0 \le u \le 1$, define the straight-line homotopy between $\gamma$ and $\Gamma$:
\[ \Phi(t, u) = (1 - u)\Gamma(t) + u\gamma(t), \qquad \Phi(t, 0) = \Gamma(t), \qquad \Phi(t, 1) = \gamma(t). \]
Then $\Phi$ is a 2-surface in $\mathbb{R}^2 \setminus \{0\}$ whose parameter domain is the rectangle $R = [0, 2\pi] \times [0, 1]$ and whose image does not contain $0$. Now define $s : I \to \mathbb{R}^2$ by $s(x) = (1 - x)\gamma(0) + x\Gamma(0)$, so $s(I)$ is the segment connecting the base point of $\gamma$ to the base point of $\Gamma$. Also, denote the boundary of $R$ by
∂R = R1 + R2 + R3 + R4 ,
where R1 = [0, 2πe1 ], R2 = [2πe1 , 2πe1 + e2 ], R3 = [2πe1 + e2 , e2 ], R4 = [e2 , 0].
\begin{align*}
\partial \Phi = \Phi(\partial R) &= \Phi(R_1 + R_2 + R_3 + R_4) \\
&= \Phi(R_1) + \Phi(R_2) + \Phi(R_3) + \Phi(R_4) \\
&= \Gamma + s + \bar{\gamma} + \bar{s} \\
&= \Gamma + s - \gamma - s = \Gamma - \gamma,
\end{align*}
where $\bar{\gamma}$ denotes the reverse of the path $\gamma$ (i.e., traversed backwards). Then
\begin{align*}
\int_\Gamma \eta - \int_\gamma \eta = \int_{\Gamma - \gamma} \eta &= \int_{\partial \Phi} \eta && \partial \Phi = \Gamma - \gamma \\
&= \int_\Phi d\eta && \text{by Stokes} \\
&= \int_\Phi 0 && \text{by (a)} \\
&= 0
\end{align*}
shows that
\[ \int_\Gamma \eta = \int_\gamma \eta. \]
(c) Take $\Gamma(t) = (a \cos t, b \sin t)$ where $a, b > 0$ are fixed. Use (b) to show
\[ \int_0^{2\pi} \frac{ab}{a^2 \cos^2 t + b^2 \sin^2 t}\, dt = 2\pi. \]
We have
\[ x(t) = \Gamma_1(t) = a \cos t \quad\text{and}\quad y(t) = \Gamma_2(t) = b \sin t \]
and
\[ dx = \frac{\partial x}{\partial t}\, dt = -a \sin t\, dt \quad\text{and}\quad dy = \frac{\partial y}{\partial t}\, dt = b \cos t\, dt, \]
so
\[ \frac{x\, dy - y\, dx}{x^2 + y^2} = \frac{(ab \sin^2 t + ab \cos^2 t)\, dt}{a^2 \cos^2 t + b^2 \sin^2 t} = \frac{ab\, dt}{a^2 \cos^2 t + b^2 \sin^2 t}, \]
and the integral becomes
\[ 2\pi = \int_\Gamma \eta = \int_0^{2\pi} \frac{ab \cos^2 t\, dt + ab \sin^2 t\, dt}{a^2 \cos^2 t + b^2 \sin^2 t} = \int_0^{2\pi} \frac{ab}{a^2 \cos^2 t + b^2 \sin^2 t}\, dt. \]
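The ellipse integral is a pleasant one to verify numerically: the midpoint rule on a smooth periodic integrand converges extremely fast. A sketch (helper names arbitrary):

```python
import math

def integrand(t, a, b):
    return a * b / (a**2 * math.cos(t)**2 + b**2 * math.sin(t)**2)

def integral(a, b, n=2000):
    """Midpoint rule on [0, 2π]."""
    h = 2 * math.pi / n
    return sum(integrand((i + 0.5) * h, a, b) for i in range(n)) * h

for a, b in [(1.0, 1.0), (2.0, 0.5), (3.0, 7.0)]:
    assert abs(integral(a, b) - 2 * math.pi) < 1e-9
```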
(d) Show that
\[ \eta = d\left( \arctan \tfrac{y}{x} \right) \]
in any convex open set in which $x \ne 0$, and that
\[ \eta = d\left( -\arctan \tfrac{x}{y} \right) \]
in any convex open set in which $y \ne 0$. Why does this justify the notation $\eta = d\theta$, even though $\eta$ isn't exact in $\mathbb{R}^2 \setminus \{0\}$?
$\theta(x, y) = \arctan \frac{y}{x}$ is well-defined on $D^+ = \{(x, y) : x > 0\}$. On $D^+$ we have
\begin{align*}
d\theta = d \arctan \tfrac{y}{x} &= \frac{\partial \theta}{\partial x}\, dx + \frac{\partial \theta}{\partial y}\, dy \\
&= \frac{\frac{\partial}{\partial x}\frac{y}{x}}{1 + \left(\frac{y}{x}\right)^2}\, dx + \frac{\frac{\partial}{\partial y}\frac{y}{x}}{1 + \left(\frac{y}{x}\right)^2}\, dy
= \frac{-y x^{-2}}{1 + \left(\frac{y}{x}\right)^2}\, dx + \frac{\frac{1}{x}}{1 + \left(\frac{y}{x}\right)^2}\, dy \\
&= \frac{-y\, dx}{x^2 + y^2} + \frac{x\, dy}{x^2 + y^2}.
\end{align*}
So $d\theta = \eta$ on $D^+$ (and similarly on $D^- = \{(x, y) : x < 0\}$), which is everywhere $\theta$ is defined. If we take $B^+ = \{(x, y) : y > 0\}$, then
\[ d\left( -\arctan \tfrac{x}{y} \right) = -\frac{(1/y)\, dx}{1 + (x/y)^2} + \frac{(x/y^2)\, dy}{1 + (x/y)^2} = -\frac{y\, dx}{x^2 + y^2} + \frac{x\, dy}{x^2 + y^2}, \]
and similarly on $B^- = \{(x, y) : y < 0\}$.
We say $\eta = d\theta$ because $\theta$ gives $\arg(x, y)$; if we let $\gamma(t) = (\cos t, \sin t)$ as above, then this becomes
\[ \arctan \frac{y}{x} = \arctan \frac{\sin t}{\cos t} = t, \]
where the inverse is defined, i.e., for $0 \le t \le 2\pi$.
Suppose $\eta$ were exact: $\exists \lambda$, a 0-form on $\mathbb{R}^2 \setminus \{0\}$ such that $d\lambda = \eta$. Then
\[ d\theta - d\lambda = d(\theta - \lambda) = 0 \implies \theta - \lambda = c \implies \theta = \lambda + c, \]
but $\lambda$ is continuous and $\theta$ isn't. Alternatively, if $\eta$ were exact,
\[ \int_\gamma \eta = \int_\gamma d\lambda = \int_{\partial \gamma} \lambda = \int_\emptyset \lambda = 0, \]
contradicting (a).
(e) Show (b) can be derived from (d).
Taking $\eta = d\theta$,
\[ \int_\Gamma \eta = \int_\Gamma d\theta = \Big[ \theta \Big]_0^{2\pi} = 2\pi - 0 = 2\pi. \]
(f) $\Gamma$ is a closed $\mathcal{C}'$ curve in $\mathbb{R}^2 \setminus \{0\}$ implies that
\[ \frac{1}{2\pi} \int_\Gamma \eta = \operatorname{Ind}(\Gamma) := \frac{1}{2\pi i} \int_a^b \frac{\Gamma'(t)}{\Gamma(t)}\, dt. \]
We use the following basic facts from complex analysis:
\[ \log z = \log|z| + i \arg z, \quad\text{so}\quad \arg z = -i(\log z - \log|z|). \]
First, fix $|\Gamma| = c$, so $\Gamma$ is one or more circles around the origin in whatever direction, but of constant radius. Then
\begin{align*}
\int_\Gamma \eta = \int_\Gamma d\theta = \int_a^b d(\theta(\Gamma(t)))
&= -i \int_a^b d\big( \log \Gamma(t) - \log|\Gamma(t)| \big) \\
&= -i \int_a^b d(\log \Gamma(t)) \\
&= -i \int_a^b \frac{\Gamma'(t)}{\Gamma(t)}\, dt.
\end{align*}
Now use (b) for arbitrary $\Gamma$.
24. Let $\omega = \sum a_i(x)\, dx_i$ be a 1-form of class $\mathcal{C}''$ in a convex open set $E \subseteq \mathbb{R}^n$. Assume $d\omega = 0$ and prove that $\omega$ is exact in $E$.
Fix $p \in E$. Define
\[ f(x) = \int_{[p,x]} \omega, \qquad x \in E. \]
Apply Stokes' theorem to the 2-form $d\omega = 0$ on the affine-oriented 2-simplex $[p, x, y]$ in $E$.
By Stokes' Thm,
\begin{align*}
0 = \int_{[p,x,y]} d\omega &= \int_{\partial[p,x,y]} \omega && \text{Stokes' Thm} \\
&= \int_{[x,y]-[p,y]+[p,x]} \omega && \text{def of } \partial \\
&= \int_{[x,y]} \omega - \int_{[p,y]} \omega + \int_{[p,x]} \omega && \text{Rem 10.26, p. 267} \\
&= \int_{[x,y]} \omega - f(y) + f(x) && \text{def of } f,
\end{align*}
so
\[ f(y) - f(x) = \int_{[x,y]} \omega. \]
Deduce that
\[ f(y) - f(x) = \sum_{i=1}^{n} (y_i - x_i) \int_0^1 a_i((1-t)x + ty)\, dt \]
for $x, y \in E$. Hence $(D_i f)(x) = a_i(x)$.
Note that $\gamma = [x, y]$ is the straight-line path from $x$ to $y$, i.e., $\gamma(t) = (1-t)x + ty$. Then the right-hand integral from the last line of the previous derivation becomes
\begin{align*}
f(y) - f(x) = \int_{[x,y]} \omega &= \int_{[x,y]} \sum_{i=1}^{n} a_i(u)\, du_i \\
&= \int_0^1 \sum_{i=1}^{n} a_i((1-t)x + ty)\, \frac{\partial}{\partial t}\big[ (1-t)x_i + t y_i \big]\, dt \\
&= \sum_{i=1}^{n} (y_i - x_i) \int_0^1 a_i((1-t)x + ty)\, dt.
\end{align*}
Putting $y = x + s e_i$, we plug this into the formula for the $i$th partial derivative. Note that all terms other than the $i$th vanish, since $y_j - x_j = 0$ for $j \ne i$. This leaves
\begin{align*}
(D_i f)(x) = \lim_{s \to 0} \frac{f(x + s e_i) - f(x)}{s}
&= \lim_{s \to 0} \frac{(x_i + s) - x_i}{s} \int_0^1 a_i((1-t)x + t(x + s e_i))\, dt \\
&= \lim_{s \to 0} s \cdot \tfrac{1}{s} \int_0^1 a_i(x + t s e_i)\, dt \\
&= \lim_{s \to 0} \int_0^1 a_i(x + t s e_i)\, dt \\
&= \int_0^1 \lim_{s \to 0} a_i(x + t s e_i)\, dt \\
&= \int_0^1 a_i(x)\, dt = a_i(x) \int_0^1 dt = a_i(x).
\end{align*}
We can pass the limit through the integral using uniform convergence.
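The construction can be illustrated numerically with a specific closed 1-form. This is a sketch with an example of my own choosing, not taken from the text: $\omega = 2xy\, dx + x^2\, dy$ is closed (indeed exact, with potential $x^2 y$), `a_coeffs` and `f` are ad hoc helpers, and the $t$-integral is done by the midpoint rule.

```python
def a_coeffs(x):
    """Coefficients of the sample closed 1-form ω = 2xy dx + x² dy
    (closed since ∂(2xy)/∂y = 2x = ∂(x²)/∂x)."""
    return 2.0 * x[0] * x[1], x[0] ** 2

def f(x, p=(0.0, 0.0), n=2000):
    """f(x) = ∫_{[p,x]} ω = Σ_i (x_i - p_i) ∫_0^1 a_i((1-t)p + tx) dt."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        q = tuple((1 - t) * pj + t * xj for pj, xj in zip(p, x))
        ai = a_coeffs(q)
        total += sum((xj - pj) * av for xj, pj, av in zip(x, p, ai)) * h
    return total

# Check (D_i f)(x) = a_i(x) by central differences at a sample point.
x0, eps = (0.7, -0.4), 1e-4
d1 = (f((x0[0] + eps, x0[1])) - f((x0[0] - eps, x0[1]))) / (2 * eps)
d2 = (f((x0[0], x0[1] + eps)) - f((x0[0], x0[1] - eps))) / (2 * eps)
assert abs(d1 - a_coeffs(x0)[0]) < 1e-4
assert abs(d2 - a_coeffs(x0)[1]) < 1e-4
```

For this particular $\omega$, $f$ recovers the potential $x^2 y$ (up to quadrature error), as one would expect from exactness.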
27. Let $E$ be an open 3-cell in $\mathbb{R}^3$, with edges parallel to the coordinate axes. Suppose $(a, b, c) \in E$, $f_i \in \mathcal{C}'(E)$ for $i = 1, 2, 3$,
\[ \omega = f_1\, dy \wedge dz + f_2\, dz \wedge dx + f_3\, dx \wedge dy, \]
and assume that $d\omega = 0$ in $E$. Define
\[ \lambda = g_1\, dx + g_2\, dy \]
where
\begin{align*}
g_1(x, y, z) &= \int_c^z f_2(x, y, s)\, ds - \int_b^y f_3(x, t, c)\, dt, \\
g_2(x, y, z) &= -\int_c^z f_1(x, y, s)\, ds,
\end{align*}
for $(x, y, z) \in E$. Prove that $d\lambda = \omega$ in $E$.
We compute the exterior derivative:
\begin{align*}
d\lambda &= d(g_1\, dx + g_2\, dy) \\
&= dg_1 \wedge dx + g_1 \wedge d(dx) + dg_2 \wedge dy + g_2 \wedge d(dy) \\
&= d\!\left( \int_c^z f_2(x, y, s)\, ds - \int_b^y f_3(x, t, c)\, dt \right) \wedge dx + d\!\left( -\int_c^z f_1(x, y, s)\, ds \right) \wedge dy \\
&= \frac{\partial}{\partial y} \int_c^z f_2(x, y, s)\, ds\ dy \wedge dx + \frac{\partial}{\partial z} \int_c^z f_2(x, y, s)\, ds\ dz \wedge dx \\
&\quad - \frac{\partial}{\partial y} \int_b^y f_3(x, t, c)\, dt\ dy \wedge dx - \frac{\partial}{\partial z} \int_b^y f_3(x, t, c)\, dt\ dz \wedge dx \\
&\quad - \frac{\partial}{\partial x} \int_c^z f_1(x, y, s)\, ds\ dx \wedge dy - \frac{\partial}{\partial z} \int_c^z f_1(x, y, s)\, ds\ dz \wedge dy.
\end{align*}
Note that the fourth term is constant with respect to $z$, and hence vanishes. Carrying out the $z$-differentiations and collecting terms,
\begin{align*}
d\lambda &= f_1(x, y, z)\, dy \wedge dz + f_2(x, y, z)\, dz \wedge dx + f_3(x, y, c)\, dx \wedge dy \\
&\quad - \int_c^z \left( \tfrac{\partial}{\partial y} f_2(x, y, s) + \tfrac{\partial}{\partial x} f_1(x, y, s) \right) ds\ dx \wedge dy. \tag{10.1}
\end{align*}
Since $\omega$ is closed,
\begin{align*}
0 = d\omega &= d(f_1\, dy \wedge dz) + d(f_2\, dz \wedge dx) + d(f_3\, dx \wedge dy) \\
&= df_1 \wedge dy \wedge dz + f_1 \wedge d(dy \wedge dz) + df_2 \wedge dz \wedge dx + f_2 \wedge d(dz \wedge dx) + df_3 \wedge dx \wedge dy + f_3 \wedge d(dx \wedge dy) \\
&= \tfrac{\partial f_1}{\partial x}\, dx \wedge dy \wedge dz + \tfrac{\partial f_2}{\partial y}\, dy \wedge dz \wedge dx + \tfrac{\partial f_3}{\partial z}\, dz \wedge dx \wedge dy \\
&= \left( \tfrac{\partial f_1}{\partial x} + \tfrac{\partial f_2}{\partial y} + \tfrac{\partial f_3}{\partial z} \right) dx \wedge dy \wedge dz \\
&\implies \tfrac{\partial f_3}{\partial z} = -\left( \tfrac{\partial f_1}{\partial x} + \tfrac{\partial f_2}{\partial y} \right),
\end{align*}
so the last two terms of (10.1) become
\begin{align*}
\left( f_3(x, y, c) + \int_c^z \tfrac{\partial}{\partial z} f_3(x, y, s)\, ds \right) dx \wedge dy
&= \big[ f_3(x, y, c) + f_3(x, y, z) - f_3(x, y, c) \big]\, dx \wedge dy \\
&= f_3(x, y, z)\, dx \wedge dy,
\end{align*}
and we finally obtain
\[ d\lambda = f_1(x, y, z)\, dy \wedge dz + f_2(x, y, z)\, dz \wedge dx + f_3(x, y, z)\, dx \wedge dy = \omega. \]
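The construction of $\lambda$ can be tried out on a concrete closed form. This is a sketch with an example of my own choosing (not the $\zeta$ treated next): $f = (y, z, x)$ is divergence-free, $(a, b, c) = (0, 0, 0)$, and the integrals for $g_1, g_2$ come out in closed form, so the three coefficients of $d\lambda$ can be checked against $f_1, f_2, f_3$ by central differences.

```python
# Sample data: f = (y, z, x) is divergence-free, so dω = 0; take (a,b,c) = (0,0,0).
def f1(x, y, z): return y
def f2(x, y, z): return z
def f3(x, y, z): return x

def g1(x, y, z):
    """∫_0^z f2(x,y,s) ds - ∫_0^y f3(x,t,0) dt = z²/2 - xy (closed form)."""
    return z * z / 2.0 - x * y

def g2(x, y, z):
    """-∫_0^z f1(x,y,s) ds = -yz."""
    return -y * z

# dλ = (-∂g2/∂z) dy∧dz + (∂g1/∂z) dz∧dx + (∂g2/∂x - ∂g1/∂y) dx∧dy;
# each coefficient should reproduce the corresponding f_i.
h = 1e-5
x, y, z = 0.3, -0.8, 1.1
dg2dz = (g2(x, y, z + h) - g2(x, y, z - h)) / (2 * h)
dg1dz = (g1(x, y, z + h) - g1(x, y, z - h)) / (2 * h)
dg2dx = (g2(x + h, y, z) - g2(x - h, y, z)) / (2 * h)
dg1dy = (g1(x, y + h, z) - g1(x, y - h, z)) / (2 * h)
assert abs(-dg2dz - f1(x, y, z)) < 1e-8
assert abs(dg1dz - f2(x, y, z)) < 1e-8
assert abs(dg2dx - dg1dy - f3(x, y, z)) < 1e-8
```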
Evaluate these integrals when $\omega = \zeta$ and thus find the form $\lambda = -(z/r)\eta$, where
\[ \eta = \frac{x\, dy - y\, dx}{x^2 + y^2}. \]
Now the form is
\[ \omega = \zeta = \frac{1}{r^3} \left( x\, dy \wedge dz + y\, dz \wedge dx + z\, dx \wedge dy \right), \]
so
\[ f_1 = \frac{x}{r^3}, \qquad f_2 = \frac{y}{r^3}, \qquad f_3 = \frac{z}{r^3}. \]
The integrals are
\begin{align*}
\int_c^z f_2(x, y, s)\, ds &= \int_c^z y (x^2 + y^2 + s^2)^{-3/2}\, ds = \frac{y}{x^2+y^2} \left( \frac{z}{r} - \frac{c}{\sqrt{x^2+y^2+c^2}} \right), \\
\int_b^y f_3(x, t, c)\, dt &= \int_b^y c (x^2 + t^2 + c^2)^{-3/2}\, dt = \frac{c}{x^2+c^2} \left( \frac{y}{\sqrt{x^2+y^2+c^2}} - \frac{b}{\sqrt{x^2+b^2+c^2}} \right), \\
\int_c^z f_1(x, y, s)\, ds &= \int_c^z x (x^2 + y^2 + s^2)^{-3/2}\, ds = \frac{x}{x^2+y^2} \left( \frac{z}{r} - \frac{c}{\sqrt{x^2+y^2+c^2}} \right).
\end{align*}
Thus $\lambda = g_1\, dx + g_2\, dy$ is
\begin{align*}
\lambda &= \left[ \frac{y}{x^2+y^2} \left( \frac{z}{r} - \frac{c}{\sqrt{x^2+y^2+c^2}} \right) - \frac{c}{x^2+c^2} \left( \frac{y}{\sqrt{x^2+y^2+c^2}} - \frac{b}{\sqrt{x^2+b^2+c^2}} \right) \right] dx \\
&\qquad - \frac{x}{x^2+y^2} \left( \frac{z}{r} - \frac{c}{\sqrt{x^2+y^2+c^2}} \right) dy \\
&= -\frac{z}{r} \left( -\frac{y\, dx}{x^2+y^2} + \frac{x\, dy}{x^2+y^2} \right) \\
&\qquad + \left[ \frac{c}{x^2+c^2} \left( \frac{b}{\sqrt{x^2+b^2+c^2}} - \frac{y}{\sqrt{x^2+y^2+c^2}} \right) - \frac{yc}{(x^2+y^2)\sqrt{x^2+y^2+c^2}} \right] dx + \frac{xc}{(x^2+y^2)\sqrt{x^2+y^2+c^2}}\, dy \\
&= -\frac{z}{r}\, \eta,
\end{align*}
due to some magic cancellations at the end.
28. Fix $b > a > 0$ and define
\[ \Phi(r, \theta) = (r \cos \theta,\ r \sin \theta) \]
for $a \le r \le b$, $0 \le \theta \le 2\pi$. The range of $\Phi$ is thus an annulus in $\mathbb{R}^2$. Put $\omega = x^3\, dy$, and show
\[ \int_\Phi d\omega = \int_{\partial \Phi} \omega \]
by computing each integral separately.
Starting with the left-hand side,
\[ d\omega = \frac{\partial}{\partial x} x^3\, dx \wedge dy + \frac{\partial}{\partial y} x^3\, dy \wedge dy = 3x^2\, dx \wedge dy + 0 = 3x^2\, dx \wedge dy, \]
so
\[ J_\Phi = \begin{vmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta \end{vmatrix} = r \cos^2 \theta + r \sin^2 \theta = r, \]
whence
\[ 3x^2\, dx \wedge dy = 3r^2 (\cos^2 \theta)\, r\, dr\, d\theta, \]
and thus
\begin{align*}
\int_\Phi d\omega &= \int_0^{2\pi} \!\! \int_a^b 3r^3 \cos^2 \theta\, dr\, d\theta
= \left( \int_a^b 3r^3\, dr \right) \left( \int_0^{2\pi} \cos^2 \theta\, d\theta \right) \\
&= \tfrac{3}{4} \left[ r^4 \right]_a^b \cdot \tfrac{1}{2} \left[ \theta + \tfrac{1}{2} \sin 2\theta \right]_0^{2\pi}
= \tfrac{3}{4} (b^4 - a^4) \left( \pi + \tfrac{1}{4}(\sin 4\pi - \sin 0) \right) \\
&= \tfrac{3\pi}{4} (b^4 - a^4).
\end{align*}
Meanwhile, on the right-hand side, note that
\[ \partial \Phi = \Gamma - \gamma, \]
as in Example 10.32 or Exercise 21(b). Thus,
\begin{align*}
\int_{\partial \Phi} \omega = \int_{\Gamma - \gamma} \omega &= \int_\Gamma x^3\, dy - \int_\gamma x^3\, dy \\
&= \int_0^{2\pi} b^3 \cos^3 \theta\, (b \cos \theta)\, d\theta - \int_0^{2\pi} a^3 \cos^3 \theta\, (a \cos \theta)\, d\theta \\
&= b^4 \int_0^{2\pi} \cos^4 \theta\, d\theta - a^4 \int_0^{2\pi} \cos^4 \theta\, d\theta \\
&= (b^4 - a^4) \int_0^{2\pi} \cos^4 \theta\, d\theta \\
&= (b^4 - a^4) \cdot \frac{3\pi}{4}.
\end{align*}
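Both sides of this instance of Stokes' theorem can be computed numerically and compared against $\frac{3\pi}{4}(b^4 - a^4)$. A sketch (the radii, grid sizes, and helper name `circle_integral` are arbitrary):

```python
import math

a, b = 1.0, 2.0
exact = 3 * math.pi / 4 * (b**4 - a**4)

# Left side: ∫_Φ dω = ∫_0^{2π} ∫_a^b 3 r³ cos²θ dr dθ  (midpoint rule)
nr, nt = 400, 400
hr, ht = (b - a) / nr, 2 * math.pi / nt
lhs = sum(
    3 * (a + (i + 0.5) * hr) ** 3 * math.cos((j + 0.5) * ht) ** 2 * hr * ht
    for i in range(nr) for j in range(nt)
)

# Right side: ∫_{∂Φ} ω = ∫_Γ x³ dy - ∫_γ x³ dy over the boundary circles
def circle_integral(rad, n=4000):
    """∫ x³ dy around a circle of radius rad, with dy = rad cos t dt."""
    h = 2 * math.pi / n
    return sum((rad * math.cos((i + 0.5) * h)) ** 3
               * rad * math.cos((i + 0.5) * h) * h for i in range(n))

rhs = circle_integral(b) - circle_integral(a)
assert abs(lhs - exact) < 1e-3
assert abs(rhs - exact) < 1e-6
```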
11. Lebesgue Theory
1. If $f \ge 0$ and $\int_E f\, d\mu = 0$, prove that $f(x) = 0$ almost everywhere on $E$.
First define
\[ E_n := \{ f > \tfrac{1}{n} \} \quad\text{and}\quad A := \bigcup E_n = \{ f \ne 0 \}. \]
We need to show that $\mu A = 0$, so suppose not. Then
\[ \mu A > 0 \implies \sum \mu E_n > 0 \implies \mu E_n > 0 \text{ for some } n. \]
But then we'd have
\[ \int_E f\, d\mu \ge \int_{E_n} f\, d\mu \ge \tfrac{1}{n} \cdot \mu(E_n) > 0, \]
a contradiction.
2. If $\int_A f\, d\mu = 0$ for every measurable subset $A$ of a measurable set $E$, then $f(x) = 0$ a.e. on $E$.
First, suppose $f \ge 0$. Then $f = 0$ a.e. on every $A \subseteq E$ by the previous exercise. In particular, $E \subseteq E$, so $f = 0$ a.e. on $E$.
Now drop the assumption $f \ge 0$. Then $f = f^+ - f^-$, and each of $f^+$, $f^-$ is 0 a.e. on $E$ by the above remark, so $f = 0$ a.e. on $E$, too.
3. If $\{f_n\}$ is a sequence of measurable functions, prove that the set of points $x$ at which $\{f_n(x)\}$ converges is measurable.
Define $f(x) = \limsup_{n\to\infty} f_n(x)$, so $f$ is measurable by (11.17), and $|f_n(x) - f(x)|$ is measurable by (11.16), (11.18). Then we can write the given set as
\begin{align*}
\{ x &: \forall \varepsilon > 0,\ \exists N \text{ s.t. } n \ge N \implies |f_n(x) - f(x)| < \varepsilon \} \\
&= \{ x : \forall k \in \mathbb{N},\ \exists N \text{ s.t. } n \ge N \implies |f_n(x) - f(x)| < 1/k \} \\
&= \bigcap_{k=1}^{\infty} \{ x : \exists N \text{ s.t. } n \ge N \implies |f_n(x) - f(x)| < 1/k \} \\
&= \bigcap_{k=1}^{\infty} \bigcup_{N=1}^{\infty} \bigcap_{n=N}^{\infty} \{ x : |f_n(x) - f(x)| < 1/k \},
\end{align*}
which is measurable.
4. If $f \in L(\mu)$ on $E$ and $g$ is bounded and measurable on $E$, then $fg \in L(\mu)$ on $E$.
Since $g$ is bounded, $\exists M$ such that $|g| \le M$. Then
\[ \int_E |fg|\, d\mu \le M \int_E |f|\, d\mu < \infty \]
by (11.23(d)) and (11.26).