
For updated version of this document see http://imechanica.org/node/14164
Z. Suo, February 8, 2013
TENSORS
These notes may serve as a reminder of tensor algebra. The notes
supplement the course on advanced elasticity (http://imechanica.org/node/725).
Several books are listed at the end of the notes.
Number Field
A number field is a set F of elements, called numbers. When the numbers
are manipulated with two operations, called addition and multiplication, the
results of the operations give elements in F. The two operations are specified
with the following properties.
Addition. To every pair of numbers α and β in F there corresponds an operation, called the addition of α and β. The result of this operation, written as α + β, is also an element in F. Addition obeys the following axioms.
1) Addition is commutative: α + β = β + α for every pair of numbers in F.
2) Addition is associative: (α + β) + γ = α + (β + γ) for every α, β and γ in F.
3) There exists a number 0 (zero) in F such that 0 + α = α for every α in F.
4) For every α in F there exists a number (the negative element) γ in F
such that α + γ = 0 , and we write γ = −α .
Multiplication. To every pair of numbers α and β in F there corresponds an operation, called the multiplication of α and β. The result of this operation, written as αβ or α ⋅ β, is also an element in F. Multiplication
obeys the following axioms.
5) Multiplication is commutative: αβ = βα for every pair of numbers in F.
6) Multiplication is associative: (αβ)γ = α(βγ) for every α, β and γ in F.
7) There exists a number 1 in F such that 1 ⋅ α = α for every α in F.
8) For every α ≠ 0 in F there exists a number γ in F such that αγ = 1,
and we write γ = 1/α.
Multiplication is distributive over addition. A final axiom involves
both operations of addition and multiplication:
9) γ(α + β) = γα + γβ for every α, β and γ in F.
Examples. The set of rational numbers is a field. Being a field, the set of rational numbers ensures a unique solution to the equation
αx + β = γ .
That is, there exists a rational number x that satisfies the above equation for any
given rational numbers α ≠ 0 , β and γ .
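Indeed, because every nonzero number in a field has an inverse, the solution can be written explicitly as x = (γ − β)/α, which is again a rational number.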
The set of real numbers and the set of complex numbers are both fields.
Counter example. The set of integers is not a field because the set
violates Axiom 8.
Vector Space
A vector space is a set S of objects, called vectors. To specify the space, we also need a number field F. The vector space is equipped with two operations: addition of two vectors in S, and scaling a vector in S with a number in F.
Addition of two vectors. To every pair of vectors x and y in S there corresponds an operation, called the addition of x and y. The result of this operation, written as x + y, is also a vector in S. Addition obeys the following
axioms.
1) Addition is commutative: x + y = y + x for every pair of vectors in S.
2) Addition is associative: (x + y) + z = x + (y + z) for every x, y and z in S.
3) There exists a vector 0 (the zero vector) in S such that 0 + x = x for
every x in S.
4) For every x in S there exists a vector (the negative element) z in S such that x + z = 0, and we write z = −x.
Scaling a vector with a number. To every vector x in S and every number α in the field F there corresponds an operation, called the scaling of x with α. The result of this operation, αx, is also a vector in S. Scaling obeys the following axioms:
5) 1 ⋅ x = x for every x in S.
6) α(βx) = (αβ)x for every x in S and for every α, β in F.
7) (α + β ) x = α x + β x for every x in S and for every α , β in F.
8) α(x + y) = αx + αy for every α in F and every x and y in S.
The combination of the two sets, along with the operations, is called a
vector space S over a number field F. In most applications, the number field is
either the set of real numbers or the set of complex numbers. If the context is
clear, we often drop the phrase “over a number field F”.
In defining a vector space, we do not specify the operation of
multiplication of two vectors in S.
Linear combination. Consider a vector space S over a field F. Let x
and y be two vectors in S, and let α and β be two numbers in F. The vector
α x + β y is called a linear combination of the two vectors x and y, and the
numbers α and β are called the coefficients of the linear combination.
The vectors x and y are said to be linearly dependent if there exist
numbers α and β , not all of which are zero, such that the linear combination of
the vectors is the zero vector, namely,
αx + βy = 0 .
The vectors x and y are said to be linearly independent if the above equation
holds only when all the coefficients vanish. These definitions extend in the same way to any number of vectors.
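As a numerical aside (not part of the original argument), linear dependence can be checked by computing the rank of the matrix whose rows are the components of the vectors relative to any basis: the vectors are independent exactly when the rank equals the number of vectors. A minimal sketch in Python with NumPy, using made-up components:

```python
import numpy as np

# Components of two vectors of a three-dimensional space, relative to some basis.
x = np.array([1.0, 2.0, 0.0])
y = np.array([2.0, 4.0, 0.0])   # y = 2x, so x and y are linearly dependent

# Stack the component rows; the vectors are linearly independent
# exactly when the rank equals the number of vectors.
rank = np.linalg.matrix_rank(np.vstack([x, y]))
print(rank == 2)   # False: a nonzero combination alpha*x + beta*y gives the zero vector
```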
Dimension of a vector space. A space S is said to be n-dimensional if there exist n linearly independent vectors in S, while every n + 1 vectors of the space are linearly dependent. In the following development we will use a three-dimensional space, but most results are applicable to any finite dimension.
Basis. Let e_1, e_2, e_3 be three linearly independent vectors in a three-dimensional space S over a number field F. Then every vector x in S is a linear combination of the base vectors e_1, e_2, e_3. That is, we can find numbers x^1, x^2, x^3 in F, such that
x = x^1 e_1 + x^2 e_2 + x^3 e_3.
We call the vectors e_1, e_2, e_3 a basis of the space S, and call the coefficients x^1, x^2, x^3 the components of the vector x relative to the basis e_1, e_2, e_3.
This arrangement of indices is convenient because of the following
summation convention: If we have a sum of terms such that the summation index
occurs twice, once as a superscript and once as a subscript, then we will omit the
summation sign. For example, we write the linear combination as
x = x^i e_i.
The repeated index i implies a sum over 1, 2, 3.
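For readers who compute, the summation convention maps directly onto routines such as numpy.einsum, where a repeated index label is summed. A small sketch (the basis and components below are arbitrary examples, with the base vectors stored as rows of an array):

```python
import numpy as np

# Three base vectors e_1, e_2, e_3, stored as the rows of a matrix.
e = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

xi = np.array([3.0, -1.0, 0.5])        # components x^i relative to this basis

# x = x^i e_i : sum over the repeated index i
x = np.einsum('i,ij->j', xi, e)
print(x)                               # same as xi[0]*e[0] + xi[1]*e[1] + xi[2]*e[2]
```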
Change of Basis
Two bases. Let e_1, e_2, e_3 be one basis of the vector space S, and let e'_1, e'_2, e'_3 be another basis of the same vector space. As a vector in S, e'_1 is a linear combination of e_1, e_2, e_3. Write this linear combination as
e'_1 = p_1^1 e_1 + p_1^2 e_2 + p_1^3 e_3,
where p_1^1, p_1^2, p_1^3 are the components of the vector e'_1 relative to the basis e_1, e_2, e_3.
Similarly, write
e'_2 = p_2^1 e_1 + p_2^2 e_2 + p_2^3 e_3,
e1 = p31e1 + p32e2 + p33e3 .
The coefficients p_i^j can be written as a matrix:

  | p_1^1  p_2^1  p_3^1 |
  | p_1^2  p_2^2  p_3^2 |
  | p_1^3  p_2^3  p_3^3 |

As a convention, we use the superscript to indicate the row, and the subscript to indicate the column. This matrix is called the matrix of transformation from the basis e_1, e_2, e_3 to the basis e'_1, e'_2, e'_3.
The linear combination can also be written by using the summation
convention:
ei = pij e j .
Inverse transformation. Of course, e_j is also a linear combination of e'_1, e'_2, e'_3, namely,
e_j = q_j^1 e'_1 + q_j^2 e'_2 + q_j^3 e'_3.
The coefficients q_j^k are called the matrix of the transformation from the basis e'_1, e'_2, e'_3 to the basis e_1, e_2, e_3. This linear combination can also be written by using the summation convention:
e_j = q_j^k e'_k.
The two matrices are inverse to each other:
p_i^1 q_1^j + p_i^2 q_2^j + p_i^3 q_3^j = 1 when i = j, and 0 when i ≠ j,
and
q_i^1 p_1^j + q_i^2 p_2^j + q_i^3 p_3^j = 1 when i = j, and 0 when i ≠ j.
These relations can be rewritten by using the summation convention:
p_i^k q_k^j = q_i^k p_k^j = δ_i^j,
where δ_i^j = 0 when i ≠ j, and δ_i^j = 1 when i = j.
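As a numerical check (a sketch, not part of the notes), pick any invertible matrix for p_i^j, form the new basis, and verify that the matrix of the inverse transformation is the matrix inverse. Here P[i, j] stores p_i^j, the base vectors are rows of an array, and the numbers are arbitrary:

```python
import numpy as np

# Old basis stored as rows; here simply the standard basis (an arbitrary choice).
e = np.eye(3)

# Transformation matrix: entry P[i, j] plays the role of p_i^j,
# so the new basis is e'_i = p_i^j e_j, i.e. the rows of P @ e.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])
e_new = P @ e

# The inverse transformation e_j = q_j^k e'_k is given by the matrix inverse.
Q = np.linalg.inv(P)

# p_i^k q_k^j = q_i^k p_k^j = delta_i^j
print(np.allclose(P @ Q, np.eye(3)), np.allclose(Q @ P, np.eye(3)))
```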
Transformation of the components of a vector associated with a
change of basis. Let e_1, e_2, e_3 be a basis of a vector space S. Any vector x in S is a linear combination of the basis e_1, e_2, e_3, namely,
x = x^1 e_1 + x^2 e_2 + x^3 e_3,
where x^1, x^2, x^3 are the components of the vector x relative to the basis e_1, e_2, e_3.
Let e'_1, e'_2, e'_3 be another basis of the same vector space S. The vector x is also a linear combination of the basis e'_1, e'_2, e'_3, namely,
x = x'^1 e'_1 + x'^2 e'_2 + x'^3 e'_3,
where x'^1, x'^2, x'^3 are the components of the vector x relative to the basis e'_1, e'_2, e'_3.
In the above expression, replacing the basis e'_1, e'_2, e'_3 with the basis e_1, e_2, e_3, we obtain that
x = (x'^1 p_1^1 + x'^2 p_2^1 + x'^3 p_3^1) e_1
  + (x'^1 p_1^2 + x'^2 p_2^2 + x'^3 p_3^2) e_2
  + (x'^1 p_1^3 + x'^2 p_2^3 + x'^3 p_3^3) e_3.
Compare this equation with x = x^1 e_1 + x^2 e_2 + x^3 e_3, and we obtain that
x^1 = x'^1 p_1^1 + x'^2 p_2^1 + x'^3 p_3^1,
x^2 = x'^1 p_1^2 + x'^2 p_2^2 + x'^3 p_3^2,
x^3 = x'^1 p_1^3 + x'^2 p_2^3 + x'^3 p_3^3.
This relation transforms the components of the vector x relative to the basis e'_1, e'_2, e'_3 to the components of the same vector x relative to the basis e_1, e_2, e_3.
Similarly, we can write the inverse transformation:
x'^1 = x^1 q_1^1 + x^2 q_2^1 + x^3 q_3^1,
x'^2 = x^1 q_1^2 + x^2 q_2^2 + x^3 q_3^2,
x'^3 = x^1 q_1^3 + x^2 q_2^3 + x^3 q_3^3.
Summation convention. The above expressions can be written by
using the summation conventions. The vector x is a linear combination of the old
basis:
x = x^i e_i.
The same vector x is also a linear combination of the new basis:
x = x'^i e'_i.
The two sets of the components of the same vector are related as
x i = qij x j .
Comparing the transformation of the basis, ei = pij e j , and the
transformation of the components of a vector, x i = qij x j , we note that the
matrices involved in the two transformations are inverse to each other. We say
that the vector is contravariant.
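Contravariance can be verified numerically with the same conventions (P[i, j] for p_i^j, base vectors as rows, arbitrary numbers): the basis transforms with p, the components transform with q, and the vector itself is unchanged. A sketch:

```python
import numpy as np

e = np.eye(3)                                  # old basis, rows
P = np.array([[1.0, 1.0, 0.0],                 # P[i, j] ~ p_i^j
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])
Q = np.linalg.inv(P)                           # Q[j, k] ~ q_j^k
e_new = P @ e                                  # e'_i = p_i^j e_j

xi = np.array([3.0, -1.0, 0.5])                # components x^i in the old basis
xi_new = np.einsum('ji,j->i', Q, xi)           # x'^i = q_j^i x^j

# The vector itself does not change: x^i e_i = x'^i e'_i
x_old = np.einsum('i,ij->j', xi, e)
x_new = np.einsum('i,ij->j', xi_new, e_new)
print(np.allclose(x_old, x_new))               # True
```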
Linear Maps between Vector Spaces
We next consider linear maps between vector spaces. The vector spaces
may be identical or different. In particular, a one-dimensional vector space is
called a scalar space. We will consider the following linear maps.
• Linear form: a linear map that maps a vector in a vector space to a scalar.
• Linear operator: a linear map that maps one vector in a vector space to
another vector in the same vector space.
• Bi-linear form: a linear map that maps two vectors in a vector space to a
scalar.
Any such linear map between vector spaces is called a tensor. A tensor is covariant if its components transform in the same way as the basis, and contravariant if its components transform in the opposite way. The vector is a contravariant tensor. As we will see, the linear form and the bilinear form are covariant tensors. The linear operator is a mixed tensor.
Linear Form
Definition of a linear form. A function α = f(x) maps a vector x in a vector space S over a number field F to a scalar α. That is, the domain of the map is the vector space S, and the range of the map is a scalar space. The function is called a linear form if the following properties hold.
1) f(x + y) = f(x) + f(y) for every x and y in S.
2) f(γx) = γ f(x) for every γ in F and every x in S.
Components of a linear form. Let e1 ,e2 ,e3 be a basis of the space S.
Observe that e_i is a vector in S, so that the linear form f(e_i) maps the vector e_i to a scalar. Write this scalar as f_i. Thus,
f(e_i) = f_i.
The scalars f_1, f_2, f_3 are known as the components of the linear form relative to the basis e_1, e_2, e_3.
Any vector x in S is a linear combination of the basis:
x = x^1 e_1 + x^2 e_2 + x^3 e_3.
The linear form of the vector x is
f(x) = x^1 f_1 + x^2 f_2 + x^3 f_3.
Thus, once the components of a linear form are known, the above expression calculates the value of the linear form for any vector. Using the summation convention, the above expression is written as
f(x) = x^i f_i.
Transformation of the components of a linear form. Let e'_1, e'_2, e'_3 be a new basis of the space S. Because e'_i is a vector in S, the linear form f(e'_i) maps the vector e'_i to a scalar. Denote this scalar by f'_i, namely,
f'_i = f(e'_i).
The three scalars f'_1, f'_2, f'_3 are called the components of the linear form relative to the basis e'_1, e'_2, e'_3. The new basis relates to the old basis by
e'_i = p_i^j e_j.
A combination of the above two expressions gives
f'_i = p_i^j f_j.
Thus, the components of the linear form transform in the same way as the basis. We say that the linear form is covariant.
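A short numerical sketch of covariance, under the same conventions and with made-up numbers: the components f_i transform with the same matrix p as the basis, and the scalar f(x) = f_i x^i does not depend on the basis.

```python
import numpy as np

P = np.array([[1.0, 1.0, 0.0],                 # P[i, j] ~ p_i^j
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])
Q = np.linalg.inv(P)

f = np.array([2.0, 0.0, -1.0])                 # components f_i of a linear form
xi = np.array([3.0, -1.0, 0.5])                # components x^i of a vector

f_new = P @ f                                  # f'_i = p_i^j f_j   (covariant)
xi_new = np.einsum('ji,j->i', Q, xi)           # x'^i = q_j^i x^j   (contravariant)

# The scalar f(x) = f_i x^i is independent of the basis.
print(np.isclose(f @ xi, f_new @ xi_new))      # True
```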
Linear Operator
Definition of a linear operator. A map z = A(x) maps a vector x in a space S over a field F to a vector z in the same space. Both the domain and the range of the map are the space S. The map is linear if the following properties hold.
1) A(x + y) = A(x) + A(y) for every x and y in S.
2) A(γx) = γ A(x) for every γ in F and every x in S.
Such a linear map is known as a linear operator.
Components of a linear operator. Let e_1, e_2, e_3 be a basis of the space S. Thus, A(e_i) is also a vector in S, and must be a linear combination of the basis. Write
A(e_i) = A_i^1 e_1 + A_i^2 e_2 + A_i^3 e_3.
The coefficients A_i^j are called the components of the linear operator relative to the basis e_1, e_2, e_3.
Any vector x in S is a linear combination of the basis:
x = x^1 e_1 + x^2 e_2 + x^3 e_3,
where the coefficients x^1, x^2, x^3 are the components of the vector x relative to the basis e_1, e_2, e_3.
The linear map z = A(x) can be written in terms of the components:
z = (A_1^1 x^1 + A_2^1 x^2 + A_3^1 x^3) e_1
  + (A_1^2 x^1 + A_2^2 x^2 + A_3^2 x^3) e_2
  + (A_1^3 x^1 + A_2^3 x^2 + A_3^3 x^3) e_3.
The vector z is also a linear combination of the basis:
z = z^1 e_1 + z^2 e_2 + z^3 e_3,
where the coefficients z^1, z^2, z^3 are the components of the vector z relative to the basis e_1, e_2, e_3. A comparison of the above expressions gives
z^1 = A_1^1 x^1 + A_2^1 x^2 + A_3^1 x^3,
z^2 = A_1^2 x^1 + A_2^2 x^2 + A_3^2 x^3,
z^3 = A_1^3 x^1 + A_2^3 x^2 + A_3^3 x^3.
These expressions can also be written in matrix notation:

  | z^1 |   | A_1^1  A_2^1  A_3^1 |  | x^1 |
  | z^2 | = | A_1^2  A_2^2  A_3^2 |  | x^2 |
  | z^3 |   | A_1^3  A_2^3  A_3^3 |  | x^3 |

As a convention, we use the superscript of A_i^j to indicate the row, and the subscript to indicate the column.
Summation convention. The above development looks simpler if we adopt the summation convention. The quantity A(e_i) is a vector, and is a linear combination of the base vectors:
A(e_i) = A_i^j e_j.
Any vector x in S is a linear combination of the base vectors:
x = x^i e_i.
The linear map z = A(x) is
z = A(x) = A(x^i e_i) = x^i A(e_i) = A_i^j x^i e_j.
The vector z is also a linear combination of the base vectors:
z = z^j e_j.
A comparison of the above two expressions gives
z^j = A_i^j x^i.
The repeated index i implies a summation. The result of the summation gives a
vector—an object of a single index. Such an operation reduces the number of
indices, and is known as contraction.
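In NumPy the contraction z^j = A_i^j x^i is a single einsum (or a matrix-vector product); a sketch with arbitrary components, storing A_i^j in A[i, j] so that row i holds the components of A(e_i):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],     # A[i, j] ~ A_i^j : row i gives A(e_i) in the basis
              [0.0, 1.0, 1.0],
              [3.0, 0.0, 1.0]])
xi = np.array([1.0, -2.0, 0.5])    # components x^i

zj = np.einsum('ij,i->j', A, xi)   # z^j = A_i^j x^i : contraction over i
print(zj)
```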
Transformation of the components of a linear operator. Let e'_1, e'_2, e'_3 be a new basis of the space S. Note that
A(e'_i) = A'_i^j e'_j.
The coefficients A'_i^j constitute the matrix of the operator relative to the basis e'_1, e'_2, e'_3. Recalling that e'_i = p_i^j e_j and e_i = q_i^j e'_j, we write
A'_i^j e'_j = A(e'_i) = A(p_i^k e_k) = p_i^k A(e_k) = p_i^k A_k^l e_l = p_i^k A_k^l q_l^j e'_j.
Consequently, the matrix of the operator relative to the new basis transforms to that relative to the old basis according to
A'_i^j = p_i^k q_l^j A_k^l.
This transformation involves both the matrix of the forward transformation from e_1, e_2, e_3 to e'_1, e'_2, e'_3, and the matrix of the inverse transformation the other way around. The linear operator is a mixed tensor, partly contravariant and partly covariant.
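The mixed transformation rule can also be checked numerically; with P[i, k] for p_i^k, Q[l, j] for q_l^j, and A[k, l] for A_k^l, it is the familiar similarity transform. A sketch with arbitrary numbers:

```python
import numpy as np

P = np.array([[1.0, 1.0, 0.0],                 # P[i, k] ~ p_i^k
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])
Q = np.linalg.inv(P)                           # Q[l, j] ~ q_l^j

A = np.array([[1.0, 2.0, 0.0],                 # A[k, l] ~ A_k^l (old basis)
              [0.0, 1.0, 1.0],
              [3.0, 0.0, 1.0]])

# A'_i^j = p_i^k q_l^j A_k^l : one factor of p (covariant slot), one of q (contravariant slot)
A_new = np.einsum('ik,lj,kl->ij', P, Q, A)
print(np.allclose(A_new, P @ A @ Q))           # the same thing as a similarity transform
```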
Multiplication of two linear operators. Let A and B be two linear operators. Let u be a vector in a vector space S; the object B(u) is also a vector in S. Write v = B(u). The object A(v) is still a vector in S, and we write w = A(v). We write
w = A(v) = A(B(u)).
The composite operation is called the multiplication of the two linear operators. We also write
w = (AB)(u).
We can write the above operation in terms of components. Let a basis of
the vector space S be e1 ,e2 ,e3 . The vector u is a linear combination of the base
vectors:
u = u^j e_j,
where u^j are the components of the vector u relative to the basis e_1, e_2, e_3. The components of the two operators are defined by
A(e_k) = A_k^i e_i,  B(e_j) = B_j^k e_k.
We have adopted a convention for the order of the two indices. The composite operation becomes
w = A(B(u)) = A(B(u^j e_j)) = A(B_j^k u^j e_k) = A_k^i B_j^k u^j e_i.
The vector w is a linear combination of the base vectors:
w = w^i e_i,
where w^i are the components of the vector w relative to the basis e_1, e_2, e_3. A comparison of the above two expressions gives
w^i = A_k^i B_j^k u^j.
This expression is also written in the matrix form:

  | w^1 |   | A_1^1  A_2^1  A_3^1 |  | B_1^1  B_2^1  B_3^1 |  | u^1 |
  | w^2 | = | A_1^2  A_2^2  A_3^2 |  | B_1^2  B_2^2  B_3^2 |  | u^2 |
  | w^3 |   | A_1^3  A_2^3  A_3^3 |  | B_1^3  B_2^3  B_3^3 |  | u^3 |
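In terms of component arrays the composite operation is simply a product of matrices; a sketch with arbitrary numbers, storing A_k^i in A[k, i] and B_j^k in B[j, k]:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],     # A[k, i] ~ A_k^i
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
B = np.array([[2.0, 0.0, 0.0],     # B[j, k] ~ B_j^k
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 1.0]])
u = np.array([1.0, -1.0, 2.0])     # components u^j

v = np.einsum('jk,j->k', B, u)     # v^k = B_j^k u^j
w = np.einsum('ki,k->i', A, v)     # w^i = A_k^i v^k
print(np.allclose(w, np.einsum('ki,jk,j->i', A, B, u)))   # the composite in one step
```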
Bilinear Form
Definition of a bilinear form. A function α = B(x, y) maps two vectors x and y in a space S over a field F to a number α in F. The function is a bilinear form if it is linear in each argument.
Components of a bilinear form. Let e1 ,e2 ,e3 be a basis of the space S.
Then B(e_i, e_j) is a number in F. Write
B_ij = B(e_i, e_j).
The numbers B_ij constitute the components of the bilinear form relative to the basis e_1, e_2, e_3.
Express the vectors x and y as linear combinations of the basis:
x = x^i e_i,  y = y^j e_j.
The bilinear form can be written in terms of the components:
B(x, y) = B_ij x^i y^j.
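Numerically, evaluating a bilinear form is a double contraction; a sketch with made-up components:

```python
import numpy as np

B = np.array([[1.0, 0.5, 0.0],     # components B_ij = B(e_i, e_j)
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
xi = np.array([1.0, 2.0, -1.0])    # components x^i
yj = np.array([0.0, 1.0, 3.0])     # components y^j

value = np.einsum('ij,i,j->', B, xi, yj)   # B(x, y) = B_ij x^i y^j
print(value, np.isclose(value, xi @ B @ yj))
```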
Transformation of the components of a bilinear form. Let e'_1, e'_2, e'_3 be a new basis of the space S. The matrix of the bilinear form relative to this new basis is
B'_ij = B(e'_i, e'_j).
Recalling that e'_i = p_i^k e_k and using bilinearity, B'_ij = B(p_i^k e_k, p_j^l e_l). Thus,
B'_ij = p_i^k p_j^l B_kl.
Each index transforms in the same way as the basis. The bilinear form is a covariant tensor.
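As a closing numerical check (same conventions, arbitrary numbers): each index of the bilinear form picks up one factor of p, and the value B(x, y) is unchanged by the change of basis.

```python
import numpy as np

P = np.array([[1.0, 1.0, 0.0],                 # P[i, k] ~ p_i^k
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])
Q = np.linalg.inv(P)

B = np.array([[1.0, 0.5, 0.0],                 # B_kl in the old basis
              [0.5, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
xi = np.array([1.0, 2.0, -1.0])
yj = np.array([0.0, 1.0, 3.0])

B_new = np.einsum('ik,jl,kl->ij', P, P, B)     # B'_ij = p_i^k p_j^l B_kl
xi_new = np.einsum('ji,j->i', Q, xi)           # contravariant components
yj_new = np.einsum('ji,j->i', Q, yj)

print(np.isclose(np.einsum('ij,i,j->', B, xi, yj),
                 np.einsum('ij,i,j->', B_new, xi_new, yj_new)))   # True
```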
References
• I.M. Gel’fand, Lectures on Linear Algebra, Dover Publications, New York,
1961.
• G.E. Shilov, Linear Algebra, Dover Publications, New York, 1977.
• N. Jeevanjee, An Introduction to Tensors and Group Theory for Physicists, Birkhäuser, 2010.
• Kaitai Li and Aixiang Huang, Tensor Analysis and Its Applications (张量分析及其应用), Xi'an Jiaotong University Press, 1984.