
Topics from Tensoral Calculus∗
Jay R. Walton
August 30, 2014
1 Preliminaries
These notes are concerned with topics from tensoral calculus, i.e. generalizations of calculus
to functions defined between two tensor spaces. To make the discussion more concrete, the
tensor spaces are defined over ordinary Euclidean space, RN , with its usual inner product
structure. Thus, the tensor spaces inherit a natural inner product, the tensor dot-product,
from the underlying vector space, RN .
2 The Derivative of Tensor Valued Functions
Let
F : D ⊂ T r −→ T s    (1)
be a function with domain D a subset of T r taking values in the tensor space T s , where D
is assumed to be an open subset of T r .1
Continuity. The function F is said to be Continuous at A ∈ D provided for every ε > 0
there exists a δ > 0 such that F(B) ∈ B(F(A), ε) whenever B ∈ B(A, δ), i.e. F maps the ball
of radius δ centered at A into the ball of radius ε centered at F(A). The function F is said
to be continuous on all of D provided it is continuous at each A ∈ D. There are two useful
alternative characterizations of continuity. The first is that F is continuous on D provided
it maps convergent sequences onto convergent sequences. That is, if An ∈ D is a sequence
converging to A ∈ D (lim_{n→∞} An = A), then lim_{n→∞} F(An) = F(A). The second alternative
characterization is that the inverse image F −1(U) of every open subset U ⊂ T s is an open
subset of D.
Derivative. The function F is said to be Differentiable at A ∈ D provided there exists a
tensor L ∈ T r+s such that
F(A + H) = F(A) + L[H] + o(H) as |H| → 0    (2)
∗ Copyright © 2014 by J. R. Walton. All rights reserved.
1 A subset D ⊂ T r is said to be open provided for every element A ∈ D there exists an open ball centered
at A that is wholly contained in D.
where |H| denotes the norm of the tensor H ∈ T r . If such a tensor L exists satisfying (2),
it is called the Derivative of F at A and denoted DF (A). Thus, (2) can be rewritten
F(A + H) = F(A) + DF(A)[H] + o(H).    (3)
Recall that o(H) is the Landau “little oh” symbol, which is used to denote a function
depending upon H that tends to zero faster than |H|, i.e.
lim_{|H|→0} o(H)/|H| = 0.
If the derivative DF (A) exists at each point in D, then it defines a function
DF (·) : D ⊂ T r −→ T r+s .
Moreover, if the function DF(·) is differentiable at A ∈ D, then its derivative is a tensor in
T 2r+s, denoted by D2F(A), called the second derivative of F at A ∈ D. Continuing in this
manner, derivatives of F of all orders can be defined.
Example. Let
φ(·) : T 1 ≅ RN −→ T 0 ≅ R.    (4)
Thus, φ(·) is a real-valued function of N real variables and its graph is a surface in RN+1.
In the definition of the derivative (3), it is more customary in this context to let H = hu
where u is a unit vector in RN. Defining equation (3) then becomes
φ(a + hu) = φ(a) + Dφ(a)[hu] + o(hu).    (5)
From the linearity of the tensor Dφ(a)[hu], one concludes that
Dφ(a)[u] = lim_{h→0} [φ(a + hu) − φ(a)]/h,    (6)
which is the familiar directional derivative of φ(·) at the point a in the direction u. Thus,
being differentiable at a point implies the existence of directional derivatives, and hence
partial derivatives, in all directions. However, the converse is not true. That is, there exist
functions with directional derivatives existing at a point in all possible directions but which
are not differentiable at the point. For such an example, consider the function φ(·) : R2 −→
R given by
φ(x, y) = (x³ − y³)/(x² + y²) when (x, y) ≠ (0, 0), and φ(x, y) = 0 when (x, y) = (0, 0).    (7)
One then shows easily that if u = (cos(θ), sin(θ))T, the directional derivative of φ(·) at the
origin (0,0) in the direction u equals cos³(θ) − sin³(θ). However, φ(·) is not differentiable at
the origin in the sense of (5). (Why?)
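The failure of differentiability can also be seen numerically. The following Python sketch (added here for illustration; the function is the one from (7), but the finite-difference check is not part of the original notes) computes the directional derivative at the origin from the limit (6) and compares it with the linear map u ↦ ∇φ(0) · u built from the two partial derivatives. A genuine derivative Dφ(0) would have to agree with that linear map in every direction.

```python
import math

def phi(x, y):
    # The function from (7): (x^3 - y^3)/(x^2 + y^2), extended by 0 at the origin.
    if (x, y) == (0.0, 0.0):
        return 0.0
    return (x**3 - y**3) / (x**2 + y**2)

def directional_derivative_at_origin(theta, h=1e-7):
    # Forward-difference approximation of the limit in (6).
    u = (math.cos(theta), math.sin(theta))
    return (phi(h * u[0], h * u[1]) - phi(0.0, 0.0)) / h

theta = math.pi / 3
actual = directional_derivative_at_origin(theta)
# Closed form cos^3(theta) - sin^3(theta) from the text:
closed_form = math.cos(theta)**3 - math.sin(theta)**3
# If phi were differentiable at the origin, Dphi(0)[u] would be linear in u,
# hence equal to grad . u with grad = (1, -1) (the two partial derivatives):
linear_prediction = 1.0 * math.cos(theta) - 1.0 * math.sin(theta)
```

At θ = π/3 the directional derivative is cos³θ − sin³θ ≈ −0.52 while linearity would force cos θ − sin θ ≈ −0.37, so no tensor L satisfying (2) can exist at the origin.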
Consider further the function φ(·) in (4). If φ(·) is differentiable in the sense of (5),
then its derivative Dφ(a)[·] ∈ T 1 is a linear transformation from RN to R, and as such is
representable by dot-product with a vector in RN. Specifically, there exists a unique vector,
denoted by ∇φ(a) ∈ RN, such that
Dφ(a)[u] = ∇φ(a) · u for all u ∈ RN.    (8)
The vector ∇φ(a) is called the Gradient of φ(·) at a.
The component forms for the derivative Dφ and the gradient ∇φ are easily constructed.
In particular, let B = {e1, . . . , eN} be the natural orthonormal basis for RN. Then the 1 × N
matrix representation for Dφ and the N-tuple vector representation for the gradient ∇φ are
given by
[Dφ(a)]B = [∂x1 φ(a), . . . , ∂xN φ(a)] and [∇φ(a)]B = (∂x1 φ(a), . . . , ∂xN φ(a))T    (9)
where ∂xi φ(a) denotes the partial derivative of φ with respect to xi at the point a:
∂xi φ(a) = lim_{h→0} [φ(a + h ei) − φ(a)]/h.
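The component form (9) and the identity (8) are easy to check numerically. In the Python sketch below (an illustrative addition; the field φ = x1 x2 + sin x3 is a hypothetical choice, not from the notes), the partial derivatives are approximated by central differences and the directional derivative Dφ(a)[u] is compared with ∇φ(a) · u.

```python
import math

def phi(x):
    # A hypothetical smooth scalar field on R^3: phi = x1*x2 + sin(x3).
    return x[0] * x[1] + math.sin(x[2])

def partial(f, a, i, h=1e-6):
    # Central-difference approximation of the i-th partial derivative at a.
    ap, am = list(a), list(a)
    ap[i] += h
    am[i] -= h
    return (f(ap) - f(am)) / (2 * h)

def grad(f, a):
    # Component form (9): the N-tuple of partial derivatives.
    return [partial(f, a, i) for i in range(len(a))]

def directional(f, a, u, h=1e-6):
    # Central-difference version of the limit (6).
    ap = [ai + h * ui for ai, ui in zip(a, u)]
    am = [ai - h * ui for ai, ui in zip(a, u)]
    return (f(ap) - f(am)) / (2 * h)

a = [1.0, 2.0, 0.5]
u = [0.6, 0.8, 0.0]          # a unit vector
g = grad(phi, a)              # approximately [2, 1, cos(0.5)]
dot = sum(gi * ui for gi, ui in zip(g, u))
```

Within truncation error, `directional(phi, a, u)` agrees with `dot`, which is the content of (8).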
Example. Another important example is provided by functions F(·) : T 0 −→ T s, i.e.
s-order tensor valued functions of a single scalar variable. Since in continuum mechanics
the scalar independent variable is usually time, that variable is given the special symbol t
and the derivative of such functions is represented by
DF(t)[τ] = Ḟ(t)τ ∈ T s    (10)
where
Ḟ(t) = lim_{h→0} [F(t + h) − F(t)]/h.
In component form, if the tensor valued function F (·) has the component representation
[Fi1 ,...,is ] with respect to the natural basis for T s , then the component representation for the
tensor Ḟ (t) is
[Ḟ(t)] = [Ḟi1,...,is].    (11)
Example. A Vector Field is a function a(·) : D ⊂ T 1 ≅ RN −→ T 1. Its derivative defines
a second order tensor Da(x)[·] ∈ T 2. Its component form, with respect to the natural basis
on RN for example, is
[Da(x)] = [∂xj ai(x)], i, j = 1, . . . , N    (12)
where [ai(x)], i = 1, . . . , N, gives the component representation of a(x). The right hand side
of (12) is the familiar Jacobian matrix.
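The component form (12) suggests a direct numerical construction. The sketch below (an added illustration; the field a(x) = (x1 x2, x2²) on R² is hypothetical) assembles the Jacobian matrix [∂xj ai] by central differences, rows indexed by components and columns by variables.

```python
def jacobian(a_field, x, h=1e-6):
    # [Da(x)] = [d a_i / d x_j]: rows index components i, columns index variables j.
    fx = a_field(x)
    J = []
    for i in range(len(fx)):
        row = []
        for j in range(len(x)):
            xp, xm = list(x), list(x)
            xp[j] += h
            xm[j] -= h
            row.append((a_field(xp)[i] - a_field(xm)[i]) / (2 * h))
        J.append(row)
    return J

# A hypothetical vector field a(x) = (x1*x2, x2^2) on R^2:
field = lambda x: [x[0] * x[1], x[1] ** 2]
J = jacobian(field, [2.0, 3.0])
# Expected Jacobian at (2,3): [[3, 2], [0, 6]]
```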
Product Rule. Various types of “products” of tensor functions occur naturally in tensor
calculus. Rather than proving a separate product rule formula for every product that arises,
it is much more expedient and much cleaner to prove one product rule formula for a general,
abstract notion of product. To that end, the appropriate general notion of product is provided
by a general bi-linear form. More specifically, suppose that F (·) : D ⊂ T r −→ T p and
G(·) : D ⊂ T r −→ T q are two differentiable functions with the same domain set D in T r
but different range spaces. Let π̂(·, ·) : T p × T q −→ T s denote a bi-linear function (i.e.
π̂(·, ·) is linear in each of its variables separately) with values in T s . One then defines the
product function E(·) : D ⊂ T r −→ T s by
E(A) := π̂(F(A), G(A)) for A ∈ D.
Since F and G are assumed to be differentiable at A ∈ D, it is not difficult to show that E
is also differentiable at A with
DE(A)[H] = π̂(DF(A)[H], G(A)) + π̂(F(A), DG(A)[H]) for all H ∈ T r.    (13)
Notice that (13) has the familiar form (f g)0 = f 0 g + f g 0 from single variable calculus.
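The abstract rule (13) can be checked numerically for a concrete bilinear form. In the sketch below (an illustrative addition; the functions F, G and the choice of π̂ as the dot product on R² are hypothetical), the derivative of E(t) = π̂(F(t), G(t)) is computed by a central difference and compared with the two-term formula.

```python
import math

# Two hypothetical curves in R^2, with their derivatives written out by hand:
def F(t): return [math.cos(t), t ** 2]
def G(t): return [t, math.sin(t)]
def dF(t): return [-math.sin(t), 2 * t]
def dG(t): return [1.0, math.cos(t)]

def pi_hat(u, v):
    # The bilinear "product": here, the dot product T^1 x T^1 -> T^0.
    return sum(a * b for a, b in zip(u, v))

t, h = 0.7, 1e-6
E = lambda s: pi_hat(F(s), G(s))
numeric = (E(t + h) - E(t - h)) / (2 * h)
# Product rule (13): DE = pi(DF, G) + pi(F, DG)
formula = pi_hat(dF(t), G(t)) + pi_hat(F(t), dG(t))
```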
Example. Let A(·) : T 0 ≅ R −→ T r and B(·) : T 0 −→ T s be differentiable tensor valued
functions of the single scalar variable t. Then their tensor product E(t) := A(t) ⊗ B(t) is
differentiable with
Ė(t) = Ȧ(t) ⊗ B(t) + A(t) ⊗ Ḃ(t).
Chain Rule. The familiar chain rule from single variable calculus has a straightforward
generalization to the tensor setting. Specifically, suppose F(·) : D ⊂ T r −→ T q is differentiable
at A ∈ D and G(·) : G ⊂ T q −→ T s (with G being an open set on which G(·) is
defined) is differentiable at F(A) ∈ G ∩ F(D). Then the composite function E(·) := G ◦ F(·)
is also differentiable at A ∈ D with
DE(A)[H] = DG(F(A))[DF(A)[H]] for all H ∈ T r.    (14)
The right hand side of (14) is the composition of the tensor DG(F(A))[·] ∈ T q+s with
the tensor DF(A)[·] ∈ T r+q, producing the tensor DG(F(A)) ◦ DF(A)[·] ∈ T r+s. This
generalizes the familiar chain rule formula (g(f(x)))′ = g′(f(x))f′(x) from single variable
calculus.
Example. An important application of the chain rule is to composite functions of the form
E(t) = G ◦ F (t) = G(F (t)), i.e. functions for which the inner function is a function of the
single scalar variable t. The chain rule then yields the result
Ė(t) = DG(F (t))[Ḟ (t)].
For example, let A(t) be a differentiable function taking values in T 2, i.e. A(·) : R −→
T 2. Then the composite real valued function φ(t) = det(A(t)) is differentiable provided
det(A(t)) ≠ 0, with
φ̇(t) = det(A(t)) tr(Ȧ(t)A(t)⁻¹).
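This determinant formula is easy to test numerically in the 2 × 2 case, where det and the inverse can be written out by hand. The Python sketch below (an added check; the matrix function A(t) is a hypothetical choice) compares a central-difference derivative of det(A(t)) with det(A) tr(Ȧ A⁻¹).

```python
import math

def A(t):
    # A hypothetical smooth 2x2 matrix-valued function of t.
    return [[math.exp(t), t], [math.sin(t), 2.0 + t ** 2]]

def det2(M): return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def matmul2(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace2(M): return M[0][0] + M[1][1]

t, h = 0.4, 1e-6
# Numeric derivative of phi(t) = det(A(t)):
numeric = (det2(A(t + h)) - det2(A(t - h))) / (2 * h)
# Adot by entrywise central differences:
Adot = [[(A(t + h)[i][j] - A(t - h)[i][j]) / (2 * h) for j in range(2)]
        for i in range(2)]
# The chain-rule formula: phi' = det(A) tr(Adot A^{-1})
formula = det2(A(t)) * trace2(matmul2(Adot, inv2(A(t))))
```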
3 Div, Grad, Curl
Classical vector calculus concerns special cases of (1) with r = 0, 1 and s = 0, 1, 2. It is in that
setting that the operators gradient, divergence and curl are usually defined. In elementary
calculus, these operators are most often defined through component representations with
respect to the natural orthonormal basis for RN . Here they are given intrinsic definitions
irrespective of any chosen basis for RN .
Gradient. The definition of the gradient of a scalar valued function defined on T 1 ≅ RN
has been given previously and won’t be repeated here. It is also customary to define the
gradient of a vector field a(·) : T 1 −→ T 1 as
Grad(a(x)) = ∇a(x) := Da(x).
Thus, for vector fields on RN , the gradient is just another name for the previously defined
derivative.
Divergence. For a vector field, a(x), defined on D ⊂ RN , the Divergence is defined by
Div(a(x)) := tr(∇a(x)).    (15)
Thus, the divergence of a vector field is a scalar valued function. The reader should verify
that the component form of Div(a(x)) with respect to the natural basis on RN is
Div(a(x)) = ∂x1 a1(x) + . . . + ∂xN aN(x) = ai,i(x).
It is also useful to define the divergence for second order tensor valued functions defined
on D ⊂ RN. To that end, if A(·) : D ⊂ RN −→ T 2, then its divergence, denoted Div(A(x)),
is that unique vector field satisfying
Div(A(x)) · a = Div(A(x)T a)    (16)
for all constant vectors a ∈ RN . In component form, if [A(x)] = [aij (x)], then
[Div(A(x))] = [aij,j (x)]T .
Curl. The operator Curl is defined for vector fields on R3 . To give it an implicit, component
independent definition, one proceeds as follows. For a vector field v(·) : D ⊂ R3 −→ R3 ,
Curl(v(x)) is defined to be the unique vector field satisfying
(∇v(x) − (∇v(x))T )a = Curl(v(x)) × a    (17)
for all constant vectors a ∈ R3 . The right hand side of (17) is the vector cross product of the
two vectors Curl(v(x)) and a. An important observation from (17) is that Curl(v(x)) = 0
if and only if ∇v(x) is a symmetric second order tensor.
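The defining identity (17) can be verified numerically for a field with a known curl. In the Python sketch below (an added illustration; the field v = (−y, x, 0) with Curl(v) = (0, 0, 2) is a standard hypothetical choice, not from the notes), ∇v is assembled by central differences and the skew part is applied to an arbitrary constant vector a.

```python
def grad_v(v, x, h=1e-6):
    # [grad v] = [d v_i / d x_j] by central differences.
    J = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        vp, vm = v(xp), v(xm)
        for i in range(3):
            J[i][j] = (vp[i] - vm[i]) / (2 * h)
    return J

def cross(u, w):
    return [u[1] * w[2] - u[2] * w[1],
            u[2] * w[0] - u[0] * w[2],
            u[0] * w[1] - u[1] * w[0]]

# A hypothetical field with known curl: v = (-y, x, 0), Curl(v) = (0, 0, 2).
v = lambda x: [-x[1], x[0], 0.0]
x0 = [0.3, -0.5, 1.2]
W = grad_v(v, x0)
a = [1.0, 2.0, 3.0]
# Left side of (17): (grad v - (grad v)^T) a
lhs = [sum((W[i][j] - W[j][i]) * a[j] for j in range(3)) for i in range(3)]
# Right side of (17): Curl(v) x a
rhs = cross([0.0, 0.0, 2.0], a)
```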
Useful Formulas. In the following formulas, φ(x) denotes a scalar valued field, u(x) and
v(x) denote vector fields and A(x) denotes a second order tensor valued field. One can then
readily show from the general product rule (13) that
∇(φv) = φ∇v + v ⊗ ∇φ
Div(φv) = φ Div(v) + v · ∇φ
∇(u · v) = (∇v)T u + (∇u)T v
Div(u ⊗ v) = u Div(v) + (∇u)v
Div(AT v) = A · ∇v + v · Div(A)
Div(φA) = φ Div(A) + A∇φ.    (18)
Another useful formula shows that the operators Div and ∇ do not commute. Specifically,
the reader should verify that
∇(Div(v)) = Div((∇v)T).    (19)
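One of the formulas in (18) can serve as a template for checking the others numerically. The Python sketch below (an illustrative addition; the fields φ and v are hypothetical choices) verifies Div(φv) = φ Div(v) + v · ∇φ at a sample point using central differences.

```python
import math

def grad(f, x, h=1e-6):
    # Gradient of a scalar field on R^3 by central differences.
    g = []
    for j in range(3):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def div(v, x, h=1e-6):
    # Divergence (15): the trace of the Jacobian, by central differences.
    s = 0.0
    for j in range(3):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        s += (v(xp)[j] - v(xm)[j]) / (2 * h)
    return s

# Hypothetical smooth fields for the check:
phi = lambda x: math.sin(x[0]) * x[1]
v = lambda x: [x[0] * x[2], x[1] ** 2, x[0] + x[2]]

x0 = [0.4, 1.1, -0.7]
phiv = lambda x: [phi(x) * vi for vi in v(x)]
lhs = div(phiv, x0)
rhs = phi(x0) * div(v, x0) + sum(vi * gi for vi, gi in zip(v(x0), grad(phi, x0)))
```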
4 Integral Theorems
In this section are cataloged a selection of results concerning integrals of various differential
operators acting on tensor fields. They all may be thought of as multidimensional generalizations
of the Fundamental Theorem of Single Variable Calculus
∫_a^b f(x) dx = F(b) − F(a)
where F (x) is any anti-derivative of f (x), i.e. F 0 (x) = f (x). All of the theorems presented
have versions valid in RN . However, attention is restricted here to R3 to keep the mathematical complications to a minimum. The reader is assumed to be familiar with the definition
of line, surface and volume integral in R3 as well as with techniques for actually computing
them. In what follows, R ⊂ R3 is assumed to be a bounded open set with piecewise smooth
boundary ∂R, and n denotes the piecewise smooth field on ∂R of outward pointing unit
normal vectors. Moreover, the natural basis for R3 is denoted by {e1, e2, e3}. Also, volume
integrals over a region R ⊂ R3 are denoted
∫_R f dV
with dV denoting volume measure, and surface integrals over a surface S in R3 are denoted
∫_S f dA
with dA denoting surface area measure.
4.1 Divergence Theorem
The Divergence Theorem has four fundamental versions in R3, given in the following.
Divergence Theorem. Let φ(·) : R −→ R, v(·) : R −→ T 1 ≅ R3 and S(·) : R −→ T 2
denote smooth scalar, vector and 2nd-order tensor fields on R. Then
∫_R ∇φ dV = ∫_∂R φ n dA    (20)
∫_R Div(v) dV = ∫_∂R v · n dA    (21)
∫_R ∇v dV = ∫_∂R v ⊗ n dA    (22)
∫_R Div(S) dV = ∫_∂R S n dA.    (23)
Proof: The reader has undoubtedly seen (20) and (21) proved in a vector calculus text.
(22) and (23) are simple consequences of (20) and (21). In particular, for (23), let a be a
constant vector in R3. Then,
a · ∫_∂R S n dA = ∫_∂R a · (S n) dA
= ∫_∂R (S T a) · n dA
= ∫_R Div(S T a) dV
= ∫_R (Div(S)) · a dV = a · ∫_R Div(S) dV.
Since the vector a is arbitrary, (23) now follows easily. (22) can be proved in similar fashion.
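Version (21) can be tested numerically on a simple region. The Python sketch below (an added check; the field v = (x1², x2², x3²) on the unit cube is a hypothetical choice) computes the volume integral of Div(v) by the midpoint rule and the flux of v through the six faces, and the two agree with the exact value 3.

```python
def v(x):
    # A hypothetical smooth field on the unit cube [0,1]^3.
    return [x[0] ** 2, x[1] ** 2, x[2] ** 2]

def div_v(x):
    # Div(v) = 2(x1 + x2 + x3), computed by hand for this field.
    return 2 * (x[0] + x[1] + x[2])

n = 20
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]   # midpoint-rule nodes

# Volume integral of Div(v) over the cube (midpoint rule, exact for linear integrands):
vol = sum(div_v([a, b, c]) for a in pts for b in pts for c in pts) * h ** 3

# Surface integral of v . n over the six faces:
surf = 0.0
for axis in range(3):
    for side, nrm in ((0.0, -1.0), (1.0, 1.0)):
        for a in pts:
            for b in pts:
                x = [a, b]
                x.insert(axis, side)       # place the face coordinate
                surf += nrm * v(x)[axis] * h ** 2
```

Both quantities equal 3, as the Divergence Theorem requires for this field and region.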
The following important application of the Divergence Theorem provides a useful interpretation
of the divergence operator. Specifically, one can easily show
Theorem. Let B[a, r] ⊂ R denote the ball centered at a ∈ R of radius r > 0. Then
Div(v(a)) = lim_{r→0+} (1/vol(B[a, r])) ∫_{∂B[a,r]} v · n dA    (24)
Div(S(a)) = lim_{r→0+} (1/vol(B[a, r])) ∫_{∂B[a,r]} S n dA.    (25)
Proof: Supplied by the reader.
Thus, one sees from (24) that Div(v(a)) equals the (outward) flux per unit volume of
v through an infinitesimally small sphere centered at a. Whereas, if the second order tensor
S is thought of as a stress tensor, for example, one sees from (25) that Div(S(a)) gives the
total contact force per unit volume acting on the boundary of an infinitesimally small ball
centered at a.
4.2 Green Identities
The Green Identities are important applications of the Divergence Theorem. They are proved
from the following multidimensional generalization of the single variable calculus integration
by parts formula
∫_a^b f(x)g′(x) dx = f(b)g(b) − f(a)g(a) − ∫_a^b f′(x)g(x) dx.
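The one-dimensional formula is worth a quick numerical check before it is generalized. The Python sketch below (an illustrative addition; f = sin x and g = x² are hypothetical choices) compares the two sides using a composite midpoint rule.

```python
import math

def midpoint(fn, a, b, n=4000):
    # Composite midpoint rule for the integral of fn over [a, b].
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

f, fp = math.sin, math.cos                       # f and f'
g, gp = (lambda x: x ** 2), (lambda x: 2 * x)    # g and g'

a, b = 0.0, 1.0
lhs = midpoint(lambda x: f(x) * gp(x), a, b)
rhs = f(b) * g(b) - f(a) * g(a) - midpoint(lambda x: fp(x) * g(x), a, b)
```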
If φ is a smooth scalar field and v is a smooth vector field defined on R ⊂ R3, then it follows
from the Divergence Theorem and the product rule formula (18b) that
∫_R (φ Div(v) + v · ∇φ) dV = ∫_R Div(φ v) dV = ∫_∂R φ v · n dA.
Letting v = ∇ψ, where ψ is another smooth scalar field defined on R, one obtains the
First Green Identity. Let φ and ψ denote smooth scalar fields defined on the region
R ⊂ R3. Then
∫_R φ Δψ dV = ∫_∂R φ (dψ/dn) dA − ∫_R ∇φ · ∇ψ dV    (26)
where
dψ/dn := ∇ψ · n
denotes the (outward) normal derivative of ψ on the boundary ∂R of R.
From the First Green Identity one easily derives the Second Green Identity. Again
let φ and ψ denote smooth scalar fields defined on R ⊂ R3. Then
∫_R (φ Δψ − ψ Δφ) dV = ∫_∂R (φ dψ/dn − ψ dφ/dn) dA.    (27)
Application to Poisson Equation. The Green Identities are work-horse tools in applied
mathematics. As an illustration, consider the classical mixed boundary value problem for
the Poisson equation. Specifically, let R ⊂ R3 be a bounded open region with piecewise
smooth boundary ∂R. Suppose further that the boundary is the union of two surfaces with
non-empty (2-dimensional) interiors and with a piecewise smooth common boundary curve
Γ. More specifically, ∂R = S1 ∪ S2 with int S1 ∩ int S2 = ∅, S1 ∩ S2 = Γ and int S1, int S2 ≠ ∅
(int S denoting the 2-dimensional interior of the surface S), where Γ, the common boundary
of the surfaces S1 and S2, is a piecewise smooth closed curve.
Mixed Boundary Value Problem. The classical mixed boundary value problem for the
Poisson equation requires finding a smooth function u(x) satisfying the Poisson equation
−Δu = f for x ∈ R    (28)
subject to the mixed boundary conditions
u = g for x ∈ S1    (29)
du/dn = h for x ∈ S2.    (30)
Boundary condition (29) is called a Dirichlet Boundary Condition while (30) is called a
Neumann Boundary Condition.
Data Compatibility. Letting φ(x) ≡ 1 and ψ(x) = u(x) in (26) yields
∫_R f dV = ∫_R −Δu dV = −∫_∂R (du/dn) dA.    (31)
In particular, for the pure Neumann problem for which S1 = ∅, one has the equilibrium
requirement on the data h and f
∫_∂R h dA + ∫_R f dV = 0.
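The same compatibility requirement appears already in one dimension, where it can be checked directly. In the Python sketch below (an added 1-D analogue, not part of the notes; the solution u = x³ is a hypothetical choice), the interval (0, 1) plays the role of R, the outward normal derivatives at the two endpoints play the role of h, and the residual ∫ f dx + h(0) + h(1) vanishes.

```python
# 1-D analogue of the pure Neumann compatibility condition:
# -u'' = f on (0,1), du/dn = h on the boundary {0, 1}.
def u(x): return x ** 3          # a hypothetical smooth "solution"
def f(x): return -6 * x          # f = -u''
def up(x): return 3 * x ** 2     # u'

# Outward normal derivatives: n = -1 at x = 0 and n = +1 at x = 1.
h0 = -up(0.0)
h1 = up(1.0)

# Midpoint-rule approximation of the integral of f over (0,1):
n = 4000
dx = 1.0 / n
int_f = sum(f((i + 0.5) * dx) for i in range(n)) * dx

residual = int_f + h0 + h1   # vanishes by the compatibility condition
```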
Uniqueness of Smooth Solutions. Suppose there are two smooth solutions, u1(x) and
u2(x), of (28,29,30). Define u(x) := u1(x) − u2(x). Then u(x) satisfies
−Δu = 0 for x ∈ R    (32)
subject to the mixed boundary conditions
u = 0 for x ∈ S1 and du/dn = 0 for x ∈ S2.    (33)
Multiplying (32) by u(x), integrating the resulting equation over R and then integrating by
parts (i.e. using the First Green Identity (26)) making use of (33), one obtains the identity
∫_R |∇u|² dV = 0.    (34)
It follows from (34) that |∇u(x)| ≡ 0 on R, and hence that u(x) is a constant function. Since
the boundary conditions require u(x) ≡ 0 on S1 , u(x) ≡ 0 on all of R. Thus, u1 (x) = u2 (x)
on R as was required to be shown.
4.3 Potential Theory
This section presents a few results from classical potential theory. Specifically, the issue
studied is conservative vector fields on RN .
Conservative Field. A vector field v(x) defined on a region R ⊂ RN is called Conservative
provided that work is path independent. That is, given any two piecewise smooth paths
Γ1[a, b], Γ2[a, b] ⊂ R connecting the two points a, b ∈ R, one has
∫_{Γ1[a,b]} v · dx = ∫_{Γ2[a,b]} v · dx    (35)
where ∫_{Γ[a,b]} v · dx denotes the line integral of v(x) along the path Γ[a, b]. Alternatively,
the field v(x) is conservative provided its line integral around any piecewise smooth, closed
curve in R is zero.
The main result of the subject asserts that a vector field v(x) is conservative if and only
if it is the gradient of some scalar field φ(x). If such a scalar field exists, it (or possibly
its negation in some applications) is called a Potential for the field v(x). If the region R is
connected, i.e. if every two points a, b ∈ R can be connected by a path Γ[a, b] ⊂ R, then
any two potential functions differ by at most an additive constant. More formally, this result
is called the
Potential Theorem. A vector field v(x) defined on a connected region R ⊂ RN is conservative if and only if there exists a scalar field φ(x) such that v(x) = ∇φ(x). Any two such
scalar fields differ by a constant.
Proof: Suppose that v(x) = ∇φ(x) and let Γ[a, b] be a path in R joining the two points a
and b. Suppose c(r), 0 ≤ r ≤ 1, is a parametric representation of Γ[a, b] with c(0) = a and
c(1) = b. Then
∫_{Γ[a,b]} v(x) · dx = ∫_0^1 v(c(r)) · c′(r) dr
= ∫_0^1 ∇φ(c(r)) · c′(r) dr
= ∫_0^1 (d/dr) φ(c(r)) dr
= φ(c(1)) − φ(c(0)) = φ(b) − φ(a).
Thus, the line integral depends only upon a and b, not the particular path joining them. It
follows that v(x) is a conservative vector field on R. Conversely, assume v(x) is conservative.
Let a ∈ R be fixed. Then for any x ∈ R, define φ(x; a) by
φ(x; a) := ∫_{Γ[a,x]} v(z) · dz    (36)
where Γ[a, x] is any path in R joining a to x. By path independence, φ(x; a) is unambiguously
defined. To show that v(x) = ∇φ(x; a), it suffices to demonstrate that
(d/de) φ(x; a) = (d/ds) φ(x + s e; a)|_{s=0} = v(x) · e    (37)
where e is any unit vector. Since the left most term in (37) is the directional derivative of
φ(x; a) at x in the direction e, which must also be given by
(d/de) φ(x; a) = ∇φ(x; a) · e,
one concludes that ∇φ(x; a) = v(x). To verify (37), let δ > 0 be chosen small enough so
that x + s e ∈ R for all |s| < δ. Let Γ[a, x − δe] denote any path in R joining a and x − δe
and let Γ[x − δe, x + s e] denote the straight line path connecting x − δe to x + s e. Then
φ(x + s e; a) = ∫_{Γ[a,x−δe]} v(z) · dz + ∫_{Γ[x−δe,x+s e]} v(z) · dz.    (38)
The first integral on the right-hand-side of (38) is constant with respect to s whereas for the
second,
∫_{Γ[x−δe,x+s e]} v(z) · dz = ∫_{−δ}^{s} v(x + r e) · e dr.
It now follows that
(d/ds) φ(x + s e; a)|_{s=0} = v(x + s e) · e|_{s=0} = v(x) · e
as required. Finally, let φ1 (x) and φ2 (x) be two scalar fields whose gradients equal v(x).
Define φ(x) := φ1 (x) − φ2 (x). Then ∇φ(x) ≡ 0 on R. Since R is assumed to be connected,
a standard theorem in calculus shows that φ(x) is identically constant on R.
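The path independence established in the proof can be illustrated numerically. In the Python sketch below (an added example; the potential φ = x1 x2 + x3, so that v = ∇φ = (x2, x1, 1), and the two paths are hypothetical choices), the line integral of v along a straight path and along a curved path with the same endpoints both equal φ(b) − φ(a) = 2.

```python
def v(x):
    # v = grad(phi) for the hypothetical potential phi = x1*x2 + x3.
    return [x[1], x[0], 1.0]

def line_integral(v, c, cdot, n=4000):
    # Midpoint-rule approximation of the integral of v(c(r)) . c'(r) over [0, 1].
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += sum(vi * di for vi, di in zip(v(c(r)), cdot(r))) * h
    return total

# Path 1: straight line from a = (0,0,0) to b = (1,1,1).
c1 = lambda r: [r, r, r]
c1dot = lambda r: [1.0, 1.0, 1.0]
# Path 2: a curved path with the same endpoints.
c2 = lambda r: [r ** 2, r, r ** 3]
c2dot = lambda r: [2 * r, 1.0, 3 * r ** 2]

I1 = line_integral(v, c1, c1dot)
I2 = line_integral(v, c2, c2dot)
# Both should equal phi(b) - phi(a) = (1*1 + 1) - 0 = 2.
```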
This result is somewhat unsatisfying as a test for whether or not a given vector field v(x) is
conservative since it requires either showing that a suitable scalar potential exists or showing
that there is a closed curve on which the line integral of v(x) does not vanish. However, for
simply connected regions R ⊂ R3 , there is a convenient test for conservative fields involving
the Curl operator. More specifically, a region R ⊂ R3 is called Simply Connected provided
every closed path is Homotopic to a Point. Roughly speaking, homotopic to a point means
that a closed path can be continuously deformed to a point without leaving the region R.
To be more precise, a closed curve c(s), 0 ≤ s ≤ 1, is said to be homotopic to a point if
there exists a continuous function h(s, r), defined for 0 ≤ s, r ≤ 1, satisfying h(s, 0) = c(s)
for 0 ≤ s ≤ 1, h(0, r) = h(1, r) for all 0 ≤ r ≤ 1, and h(s1, 1) = h(s2, 1) for all 0 ≤ s1, s2 ≤ 1.
The function h(·, ·), which is called a homotopy, is a one parameter family of closed paths,
h(·, r), 0 ≤ r ≤ 1, that continuously deforms the given closed curve, c(s), to the one point
(constant) curve h(s, 1). The reader should show that the region between two co-axial
cylinders is not simply connected in R3 whereas the region between two concentric spheres is.
A simple test for conservative vector fields on simply connected regions in R3 is given by
the following
Theorem. Let R ⊂ R3 be simply connected. Then a smooth vector field v(x) on R is
conservative if and only if Curl(v(x)) ≡ 0 on R.
Proof: The theorem can be proved with the aid of the classical
Stokes Theorem. Let v(x) be a smooth vector field on a region R ⊂ R3 and let S ⊂ R be
a smooth, orientable surface whose boundary ∂S = Γ is a piecewise smooth, simple closed
curve (no self-intersections). Then
∫_S Curl(v) · n dA = ∫_Γ v(x) · dx    (39)
where n(x) is a continuous normal vector field on S and the direction for the line integral
on the right-hand-side of (39) is chosen by a parametric representation c(s), 0 ≤ s ≤ 1, for
Γ satisfying (ċ(0) × ċ(s)) · n(c(0)) > 0, 0 < s < δ, for some δ > 0. (Proof: Omitted since
the reader is assumed to have seen the proof in a previous calculus course.)
The main theorem is proved by showing that the line integral of v(x) along any simple closed
curve in R vanishes. To that end, let c(s), 0 ≤ s ≤ 1, be a parametric representation for a
smooth, simple closed curve Γ in R. Since R is assumed to be simply connected, there exists
a homotopy h(s, r), 0 ≤ s, r ≤ 1, smoothly deforming Γ to a point in R. The homotopy
h(s, r), 0 ≤ s, r ≤ 1, may be thought of as giving a parametric representation for a smooth
orientable surface S ⊂ R having Γ as boundary and with a smooth normal vector field
defined by n(x) := m(x)/|m(x)| where
m(x) := ∂s h(s, r) × ∂r h(s, r).
Applying Stokes Theorem one concludes that
∫_Γ v(x) · dx = ∫_S Curl(v(x)) · n(x) dA = 0.
The integral over the surface S vanishes because Curl(v(x)) is assumed to be identically zero
in R.
Conversely, if v(x) is conservative, then v(x) = ∇φ(x) for some scalar field φ(x).
It follows that ∇v(x) = ∇∇φ(x) is a symmetric second order tensor, and hence that
Curl(v(x)) ≡ 0 on R. This completes the proof of the theorem.