Freefall in curved space
Contents:
1. Curvilinear coordinate systems
   1.1. Non orthogonal example
   1.2. General curvilinear systems
2. Tensors
3. Christoffel symbols, 1st and 2nd kind
   3.1. Christoffel symbols as derivatives of the metric tensor
4. Covariant derivative
5. The geodesic equation
6. Appendix A, a wakeup problem
7. Appendix B, index notation
8. Appendix C, remarks of interest
9. Appendix D, T-shirt from CERN
10. References
Abstract

This article is about the equation of motion for a freely falling particle, the geodesic equation, in curved space. It assumes that the reader has a working knowledge of partial derivatives, the chain rule and some basics about vector algebra/calculus and variational calculus. It tries to introduce the concepts and notations that are necessary to be able to read and understand the different terms of the equation. Here is the ‘layout’ of the article:
- a short section about curvilinear coordinate systems
- some facts about tensors and why tensors are of fundamental importance in physics
- Christoffel symbols
- covariant derivative of a covariant and a contravariant vector
- derivation of the equation for a geodesic in curved space
As an introduction the reader is invited to look at a simple example in Appendix A and, if unfamiliar with index notation, also read Appendix B. Appendices C and D contain miscellaneous remarks.
1. Curvilinear coordinate systems

A useful observation:

φ = φ(x_1, x_2, x_3) is a scalar function in 3 dimensions. For each point in space it has a scalar value. Setting φ(x_1, x_2, x_3) = const. = c_1 will introduce a dependency between the variables x_1, x_2 and x_3 and thereby define a surface in the 3d space. A point on the surface is represented by a vector r⃗. A small variation of x_1, x_2 and x_3 under the constraint φ(x_1, x_2, x_3) = c_1 will give a new point on the surface represented by the vector r⃗ + dr⃗. The vector dr⃗ will clearly “lie in the surface”. The situation is shown in the figure below.

[Figure: the surface φ = c_1 in the (x_1, x_2, x_3) system, with the position vector r⃗ and the displacement dr⃗ lying in the surface.]

dφ(c_1) = 0 = (∂φ/∂x_i) dx_i = ∇φ · dr⃗

=> the vector ∇φ is ⊥ to the surface φ(x_1, x_2, x_3) = c_1
An example concerning dependence:

In 3 dimensional space, where the cross product of vectors is defined, two functions u = u(x_1, x_2, x_3) and v = v(x_1, x_2, x_3) are not independent if there exists a function f = f(u, v) = const. = c_1 that connects the two. A necessary and sufficient condition is that ∇u × ∇v = 0:

0 = ∇f = (∂f/∂u) ∇u + (∂f/∂v) ∇v => ∇u and ∇v are parallel => ∇u × ∇v = 0.

This last expression is the condition for the determinant of a Jacobian matrix of u and v to be zero. This interdependence condition is also used in the Legendre transform.
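The criterion ∇u × ∇v = 0 is easy to check symbolically. A minimal sketch using sympy; the concrete functions u, v, w below are my own choices, not from the article:

```python
# A quick symbolic check of the dependence criterion ∇u × ∇v = 0.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def grad(f):
    return sp.Matrix([sp.diff(f, s) for s in (x1, x2, x3)])

# Dependent pair: v is a function of u alone, so f(u, v) = v - sin(u) = const. connects them.
u = x1 + x2**2 + x3
v = sp.sin(u)
assert grad(u).cross(grad(v)).applyfunc(sp.simplify) == sp.zeros(3, 1)

# Independent pair: the cross product of the gradients does not vanish.
w = x1 * x3
assert grad(u).cross(grad(w)).applyfunc(sp.simplify) != sp.zeros(3, 1)
```

Equivalently, each component of ∇u × ∇v is a 2x2 Jacobian determinant of (u, v) with respect to a pair of the variables.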
1.1. Non orthogonal example

Consider the following oblique (non orthogonal) coordinate system.

[Figure: the oblique axes q_1 = x_1 and q_2 = x_2 − x_1, with a vector r⃗ drawn from the origin.]

The coordinate transformation is given by

x_1 = q_1
x_2 = q_1 + q_2

and its inverse

q_1 = x_1
q_2 = x_2 − x_1

There are two “natural” ways to specify the coordinate components for the vector r⃗ = (x_1, x_2) in the oblique coordinate system.
Contravariant components =>

[Figure: r⃗ decomposed along the oblique axes by parallel projection.]

The basis vectors are

∂r⃗/∂q_1 = (1,1) and ∂r⃗/∂q_2 = (0,1)

Call these covariant basis vectors ε⃗_i = ∂r⃗/∂q_i, i = 1..2, and the r⃗ vector can be written as

r⃗ = q^i ∂r⃗/∂q_i = q^i ε⃗_i, i = 1..2, where the q^i are the vector's contravariant components.
Covariant components =>

[Figure: r⃗ decomposed by perpendicular (90°) projection onto the axes.]

The basis vectors are

∇q_1 = (1,0) and ∇q_2 = (−1,1)

Call these contravariant basis vectors ε⃗^i = ∇q_i, i = 1..2, and the r⃗ vector can be written as

r⃗ = q_i ∇q_i = q_i ε⃗^i, i = 1..2, where the q_i are the vector's covariant components.

Notice now the joyful fact that

[ ε⃗^i · ε⃗_j = ∇q_i · ∂r⃗/∂q_j = ∂q_i/∂q_j = δ^i_j ]

The two sets of basis vectors are called reciprocal.
Using the two reciprocal sets of basis vectors, the scalar product of two vectors simplifies to

a⃗ · b⃗ = a_i ε⃗^i · b^j ε⃗_j = a_i b^j ε⃗^i · ε⃗_j = a_i b^j δ^i_j = a_i b^i

Notice that this is now valid in a general curvilinear coordinate system. In Cartesian coordinates the two sets of vector components are equal.

This simple example should clarify why tensor calculus and general curvilinear coordinate systems need the covariant and contravariant concept(s).

Concepts that are linked to reciprocal basis vectors are dual basis vectors (http://en.wikipedia.org/wiki/Dual_basis) and dual vector spaces (http://en.wikipedia.org/wiki/Dual_space).
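The oblique example is small enough to check numerically. A sketch (numpy); the matrices hold the basis vectors of the example as rows, and the test vectors are arbitrary:

```python
# Numerical check of the oblique-coordinate example: reciprocal bases,
# covariant/contravariant components and the scalar product.
import numpy as np

# Covariant basis eps_i = ∂r/∂q_i (rows), from x1 = q1, x2 = q1 + q2.
eps_lo = np.array([[1.0, 1.0],
                   [0.0, 1.0]])
# Contravariant (reciprocal) basis eps^i = ∇q_i (rows), from q1 = x1, q2 = x2 - x1.
eps_hi = np.array([[1.0, 0.0],
                   [-1.0, 1.0]])

# Reciprocity: eps^i · eps_j = δ^i_j.
assert np.allclose(eps_hi @ eps_lo.T, np.eye(2))

a = np.array([2.0, 3.0])      # a vector in Cartesian components
a_contra = eps_hi @ a         # a^i = eps^i · a
a_co = eps_lo @ a             # a_i = eps_i · a

b = np.array([-1.0, 4.0])
b_contra = eps_hi @ b
b_co = eps_lo @ b

# a_i b^i = a^i b_i = the ordinary Cartesian dot product.
assert np.isclose(a_co @ b_contra, a @ b)
assert np.isclose(a_contra @ b_co, a @ b)
```

Note that mixing the two component types is what makes the simple formula a_i b^i work; a_co @ b_co would not reproduce the dot product here.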
1.2. General curvilinear systems

Generalized coordinates are often denoted by a q and an index, like this, q^i, and are given by a coordinate transformation from a Cartesian orthogonal coordinate system x_i:

x_i = x_i(q^1, q^2, …, q^N), i = 1..N    [1-1]

The functions x_i must be differentiable and form an independent set, meaning that the Jacobian determinant must be ≠ 0. The Jacobian is defined as the matrix (Jacobian often means Jacobian determinant):

[ ∂x_1/∂q^1 … ∂x_1/∂q^N ]
[     ⋮      ⋱     ⋮     ]
[ ∂x_N/∂q^1 … ∂x_N/∂q^N ]

and is often written as

∂(x_1, …, x_N)/∂(q^1, …, q^N)

or/and some more versions, and the corresponding determinants with bars around in the usual way, or det(∂x_i/∂q^j). If the Jacobian determinant is ≠ 0 the functions can be inverted, and the q^i expressed in x_1, …, x_N:

q^i = q^i(x_1, x_2, …, x_N), i = 1..N    [1-2]

A line element, displacement vector, in curvilinear coordinates is given by (chain rule):

dr⃗ = (∂r⃗/∂q^i) dq^i    [1-3]
The vectors ∂r⃗/∂q^i provide a natural vector basis and can be visualized starting with a vector r⃗ and varying one q^i at a time.

[Figure: the tangent vector ∂r⃗/∂q^1 along the coordinate curve where q^2 and q^3 are held fixed.]

The reciprocal vector basis is ∇q^i; the ∇q^i vectors can be visualized by holding q^i fixed and varying the other two. The vector ∇q^i is perpendicular to the surface generated in this way.

[Figure: the vector ∇q^1 perpendicular to the surface q^1 = const., with dr⃗ lying in the surface.]

Denote ∂r⃗/∂q^i = ε⃗_i and ∇q^i = ε⃗^i; that they are reciprocal basis vectors can be easily shown:

ε⃗^i · ε⃗_j = ∇q^i · ∂r⃗/∂q^j = ∂q^i/∂q^j = δ^i_j    [1-4]
The ε⃗_i = ∂r⃗/∂q^i basis is particularly appropriate for vectors such as the velocity. The velocity components q̇^i in the q^i system follow from the Cartesian components ẋ_i by the chain rule:

v⃗ = dr⃗/dt = ẋ_i x̂_i = q̇^i ∂r⃗/∂q^i = q̇^i ε⃗_i

On the other hand, the ∇q^i = ε⃗^i basis is appropriate for the gradient operator as, by the chain rule again, the components in the ∇q^i basis are ∂ψ/∂q^i:

∇ψ = (∂ψ/∂x_i) x̂_i = (∂ψ/∂q^i) ∇q^i = (∂ψ/∂q^i) ε⃗^i

In general, any vector can be expanded in terms of either basis:

A⃗ = A_i ε⃗^i = A^i ε⃗_i

The vector components with a sub index are called the covariant components and the ones with a super index are called the contravariant components.
The scalar products ∂r⃗/∂q^i · ∂r⃗/∂q^j = ε⃗_i · ε⃗_j form a second-rank tensor that describes all the angles between the basis vectors and all the lengths of the vectors. It is called the covariant metric tensor and is denoted by g_ij. It is obviously symmetric.

g_ij = g_ji = ∂r⃗/∂q^i · ∂r⃗/∂q^j = ε⃗_i · ε⃗_j    [1-5]

The scalar products ∇q^i · ∇q^j = ε⃗^i · ε⃗^j form a second-rank tensor that describes all the angles between the basis vectors and all the lengths of the vectors. It is called the contravariant metric tensor and is denoted by g^ij. It is obviously symmetric.

g^ij = g^ji = ∇q^i · ∇q^j = ε⃗^i · ε⃗^j    [1-6]

The concept of metric tensor is one of the cornerstones of geometry in general curvilinear coordinate systems and differential geometry. The value of the metric tensor is a function of the position in space, i.e. its values are generally different at different positions: g_ij = g_ij(q^1, …, q^N) and g^ij = g^ij(q^1, …, q^N).
One frequent use is that it can lower and raise the indices of the components of a vector.

A⃗ = A_i ε⃗^i = A^j ε⃗_j => A_i ε⃗^i · ε⃗_k = A^j ε⃗_j · ε⃗_k => [ε⃗^i · ε⃗_k = δ^i_k] =>

A_k = g_jk A^j and similarly A^k = g^jk A_j    [1-7]

Example:

ds² = dr⃗ · dr⃗ = dq^i ε⃗_i · dq^j ε⃗_j = g_ij dq^i dq^j    [1-8]

Example:

g^ik g_kj = ε⃗^i · ε⃗^k g_kj = ε⃗^i · ε⃗_j = δ^i_j    [1-9]
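As a concrete illustration (a sketch with sympy, not from the article): the metric tensor of plane polar coordinates x = r cos θ, y = r sin θ, computed exactly as g_ij = ∂r⃗/∂q^i · ∂r⃗/∂q^j, then used to lower and raise indices as in [1-7]:

```python
# Metric tensor for plane polar coordinates from the basis vectors
# eps_i = ∂r/∂q^i, plus index lowering/raising as in [1-7] and [1-9].
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
pos = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])   # Cartesian position vector
q = [r, th]

eps = [pos.diff(qi) for qi in q]                    # covariant basis vectors
g_lo = sp.Matrix(2, 2, lambda i, j: eps[i].dot(eps[j])).applyfunc(sp.simplify)
assert g_lo == sp.Matrix([[1, 0], [0, r**2]])       # i.e. ds² = dr² + r² dθ²

g_hi = g_lo.inv()                                   # g^ik g_kj = δ^i_j  [1-9]
assert (g_hi * g_lo).applyfunc(sp.simplify) == sp.eye(2)

# Lower the indices of a contravariant vector A^i and raise them back.
A_contra = sp.Matrix([sp.Symbol('A1'), sp.Symbol('A2')])
A_co = g_lo * A_contra                              # A_i = g_ij A^j
assert (g_hi * A_co).applyfunc(sp.simplify) == A_contra
```

The position dependence of the metric (the r² entry) is exactly the point made above: g_ij varies from point to point even in flat space.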
2. Tensors

This chapter notes some facts about tensors that are more or less stated without proof; these statements are marked with a @ symbol. Einstein's summation convention is used.

A tensor can be seen as a generalization of the vector concept and is defined by how it is transformed by a coordinate transformation.

The definition of tensors via transformation properties conforms to the physicist's notion that physical observables must not depend on the choice of coordinate frames.
@ The “main theorem” of tensor calculus is as follows:
If two tensors of the same type are equal in one coordinate system, then they are equal in all coordinate systems.

@ A tensor has a rank.
A scalar has rank 0 and is an “invariant object in the space with respect to the group of coordinate transformations”.
A vector is a rank 1 tensor with covariant components A_i and contravariant components A^i.
Examples of rank 2 tensors are the metric tensor g_ij (covariant), g^ij (contravariant), the inertia tensor, and the Kronecker delta δ^i_j, a rank 2 mixed tensor (http://mathworld.wolfram.com/KroneckerDelta.html).
@ Definition of a rank 1 tensor:
Taking a differential distance vector dx⃗ and letting x'^i be a function of the unprimed variables, the chain rule gives

dx'^i = (∂x'^i/∂x^j) dx^j    [2-1]

Any set of quantities A^i that transform according to

A'^i = (∂x'^i/∂x^j) A^j    [2-2]

is defined as a contravariant vector (contravariant rank-1 tensor), and the indices are written as superscript.
Taking a scalar field φ, the transformation is different:

(∇φ)'_i = ∂φ/∂x'^i = (∂x^j/∂x'^i) ∂φ/∂x^j    [2-3]

Notice the difference, it is vital. [2-3] is the definition of a covariant vector. Rewritten it looks like this:

A'_i = (∂x^j/∂x'^i) A_j    [2-4]

In the same way rank 2 tensors are defined. When the rank is 2 there are also mixed tensors, with one subscript index and one superscript.
@ Definition of a rank 2 tensor:

A'^{ij} = (∂x'^i/∂x^k)(∂x'^j/∂x^l) A^{kl}    [2-5]

A'^i_j = (∂x'^i/∂x^k)(∂x^l/∂x'^j) A^k_l    [2-6]

A'_{ij} = (∂x^k/∂x'^i)(∂x^l/∂x'^j) A_{kl}    [2-7]
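The transformation laws [2-2] and [2-4] can be checked numerically for a linear (hence position-independent) coordinate change. A sketch; the shear matrix reuses the oblique example of section 1.1, and the test vectors are arbitrary:

```python
# Check that contravariant [2-2] and covariant [2-4] components transform
# differently, yet their contraction A_i B^i stays invariant.
import numpy as np

# Linear coordinate change x' = M x (the shear from the oblique example).
M = np.array([[1.0, 0.0],
              [-1.0, 1.0]])        # plays the role of ∂x'^i/∂x^j
Minv_T = np.linalg.inv(M).T        # plays the role of ∂x^j/∂x'^i

B = np.array([2.0, 3.0])           # contravariant components B^j
A = np.array([-1.0, 4.0])          # covariant components A_j (e.g. a gradient)

B_prime = M @ B                    # [2-2]
A_prime = Minv_T @ A               # [2-4]

# The contraction is a scalar: same value in both coordinate systems.
assert np.isclose(A_prime @ B_prime, A @ B)
```

The invariance holds because Minv_T.T @ M is the identity, which is exactly the chain-rule statement (∂x^j/∂x'^i)(∂x'^i/∂x^k) = δ^j_k.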
@ The quotient rule.
To establish the tensor nature of a quantity can be tedious. Help comes from the quotient rule:
If A and B are tensors, and if the expression A = BT is invariant under coordinate transformation, then T is a tensor.
Example: If A and B are tensors and the expression holds in all (rotated) Cartesian coordinate systems, then K is a tensor in the following expressions:

K_i A_i = B,  K_ij A_j = B_i,  K_ij A_jk = B_ik,  K_ijkl A_ij = B_kl,  K_ij A_k = B_ijk
@ Contraction
Dealing with vectors in orthogonal coordinates, the scalar product is

a⃗ · b⃗ = (a_i e⃗_i) · (b_j e⃗_j) = a_i b_j e⃗_i · e⃗_j = a_i b_j δ_ij = a_i b_i

with an implicit summation over i.

The generalization of this in tensor analysis is a process known as contraction. Two indices, one covariant and the other contravariant, are set equal to each other, and then (as implied by the summation convention) we sum over this repeated index.

The scalar product in a general coordinate system:

a⃗ · b⃗ = a_i ε⃗^i · b^j ε⃗_j = a_i b^j ε⃗^i · ε⃗_j = a_i b^j δ^i_j = a_i b^i
3. Christoffel symbols, 1st and 2nd kind

Normal partial derivatives of a vector don't transform as tensors under general curvilinear coordinate transformations.

An important property of tensors is that if two tensors A and B are equal, A = B, in one coordinate system, then the transformed tensors are equal, A' = B'. This property means that two different observers in different coordinate systems agree on physical laws. Substituting regular partial derivatives with the covariant derivative, which follows the tensor transformation rules, is therefore important and has been stated as the mathematical statement of Einstein's equivalence principle.

The covariant derivative is defined in the next chapter. The Christoffel symbols have to be defined first.

Starting with a scalar φ:

dφ = (∂φ/∂q^i) dq^i

Since the dq^i are the components of a contravariant vector, the partial derivatives ∂φ/∂q^i must form a covariant vector by the quotient rule. The gradient of the scalar becomes

∇φ = (∂φ/∂q^i) ε⃗^i    [3-1]
Moving on to the derivatives of a vector, the situation is more complicated because the basis vectors ε⃗_i and ε⃗^i are not constant. With the vector V⃗' = V^i ε⃗_i we get

∂V⃗'/∂q^j = (∂V^i/∂q^j) ε⃗_i + V^i ∂ε⃗_i/∂q^j    [3-2]

or in component form, by direct differentiation of the transformation law,

∂V'^i/∂q'^j = (∂q'^i/∂q^k)(∂q^l/∂q'^j) ∂V^k/∂q^l + (∂²q'^i/∂q^k∂q^l)(∂q^l/∂q'^j) V^k    [3-3]

The right hand side of [3-3] differs from the transformation law for a second-rank mixed tensor by the second term, containing second derivatives of the coordinates q'^i.
∂ε⃗_i/∂q^j will be some linear combination of the ε⃗_k; write this as

∂ε⃗_i/∂q^j = Γ^k_{ij} ε⃗_k    [3-4]

Multiply by ε⃗^m and use ε⃗^m · ε⃗_k = δ^m_k to get

Γ^m_{ij} = ε⃗^m · ∂ε⃗_i/∂q^j    [3-5]

These Γ^m_{ij} are Christoffel symbols of the second kind. They are not third-rank tensors. And ∂V^i/∂q^j is not generally a second-rank tensor.

∂ε⃗_i/∂q^j = ∂²r⃗/∂q^j∂q^i = ∂²r⃗/∂q^i∂q^j = ∂ε⃗_j/∂q^i, meaning that these Christoffel symbols are symmetric in the lower indices:

Γ^k_{ij} = Γ^k_{ji}    [3-6]
Christoffel symbols of the first kind can be defined as

[ij, k] = g_{kl} Γ^l_{ij}    [3-7]

The symmetry [ij, k] = [ji, k] follows from the second kind's symmetry. [ij, k] is not a third-rank tensor.

[ij, k] = g_{kl} ε⃗^l · ∂ε⃗_i/∂q^j = [ε⃗_k = g_{kl} ε⃗^l] = ε⃗_k · ∂ε⃗_i/∂q^j    [3-8]
3.1. Christoffel symbols as derivatives of the metric tensor

g_ij = ε⃗_i · ε⃗_j    [definition of the covariant metric tensor]

Differentiate to get:

∂g_ij/∂q^k = (∂ε⃗_i/∂q^k) · ε⃗_j + ε⃗_i · (∂ε⃗_j/∂q^k) = [equation [3-8]] = [ik, j] + [jk, i]    [3-9]

Equation [3-9] yields

[ij, k] = ½ {∂g_ik/∂q^j + ∂g_jk/∂q^i − ∂g_ij/∂q^k}    [3-10]

This is the sought expression for Christoffel symbols of the first kind.

Using equation [3-7]:

g^{mk} [ij, k] = g^{mk} g_{kl} Γ^l_{ij} = δ^m_l Γ^l_{ij} = Γ^m_{ij}    [3-11]

Equations [3-10] and [3-11] give the sought expression for Christoffel symbols of the second kind:

Γ^m_{ij} = ½ g^{mk} {∂g_ik/∂q^j + ∂g_jk/∂q^i − ∂g_ij/∂q^k}    [3-12]
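Formula [3-12] is easy to turn into code. A sketch (sympy) computing the Christoffel symbols of the second kind for plane polar coordinates, where the only nonzero symbols are known to be Γ^r_{θθ} = −r and Γ^θ_{rθ} = Γ^θ_{θr} = 1/r:

```python
# Christoffel symbols of the second kind from the metric, eq. [3-12].
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
q = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # covariant metric of plane polar coordinates
g_inv = g.inv()
N = 2

def christoffel(m, i, j):
    """Γ^m_{ij} = 1/2 g^{mk} (∂g_ik/∂q^j + ∂g_jk/∂q^i − ∂g_ij/∂q^k)."""
    return sp.simplify(sum(
        sp.Rational(1, 2) * g_inv[m, k] *
        (sp.diff(g[i, k], q[j]) + sp.diff(g[j, k], q[i]) - sp.diff(g[i, j], q[k]))
        for k in range(N)))

assert christoffel(0, 1, 1) == -r       # Γ^r_θθ
assert christoffel(1, 0, 1) == 1 / r    # Γ^θ_rθ
assert christoffel(1, 1, 0) == 1 / r    # Γ^θ_θr, symmetry [3-6]
assert christoffel(0, 0, 0) == 0
```

Index 0 stands for r and index 1 for θ; the same function works for any dimension once g, q and N are changed.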
4. Covariant derivative

Equation [3-2]:

∂V⃗'/∂q^j = (∂V^i/∂q^j) ε⃗_i + V^i ∂ε⃗_i/∂q^j

can now be rewritten using the Christoffel symbols:

∂V⃗'/∂q^j = (∂V^i/∂q^j) ε⃗_i + V^i Γ^k_{ij} ε⃗_k

In the last term the k and i indices are dummy indices; change k -> i and i -> k to get

∂V⃗'/∂q^j = (∂V^i/∂q^j) ε⃗_i + V^k Γ^i_{kj} ε⃗_i = (∂V^i/∂q^j + V^k Γ^i_{kj}) ε⃗_i    [4-1]

The expression within the parentheses is the covariant derivative of the contravariant vector V^i, and the notation for this derivative is a semicolon, not a comma as in the chapter about index notation:

V^i_{;j} = ∂V^i/∂q^j + V^k Γ^i_{kj}    [4-2]

V^i_{;j} is the covariant derivative of a contravariant vector. It is a second-rank tensor.

By differentiation of the relation ε⃗^i · ε⃗_j = δ^i_j it is quite easy to get the expression for the covariant derivative of a covariant vector:

V_{i;j} = ∂V_i/∂q^j − V_k Γ^k_{ij}    [4-3]

V_{i;j} is the covariant derivative of a covariant vector. It is a second-rank tensor.

A differential dV⃗' becomes

dV⃗' = (∂V⃗'/∂q^j) dq^j = [V^i_{;j} dq^j] ε⃗_i    [4-4]

In Cartesian coordinates the Christoffel symbols vanish and the ordinary partial derivative coincides with the covariant derivative.

A more detailed proof that the covariant derivative is a tensor can be found in [Heinbockel].

Rules for covariant differentiation:
- The covariant derivative of a sum is the sum of the covariant derivatives.
- The covariant derivative of a product of tensors is the first times the covariant derivative of the second plus the second times the covariant derivative of the first.
- Higher derivatives are defined as derivatives of derivatives. But take care: in general V_{i;j;k} ≠ V_{i;k;j}.
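A sanity check of [4-2] (a sketch with sympy; the example field is my own choice): a constant Cartesian vector field, e.g. the unit vector x̂, has polar components V^r = cos θ and V^θ = −sin θ / r. Since the field is constant, its covariant derivative must vanish identically, even though the ordinary partial derivatives do not:

```python
# Covariant derivative [4-2] of a constant vector field written in
# polar components: every component V^i_{;j} should vanish.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
q = [r, th]

# Nonzero Christoffel symbols of plane polar coordinates (index 0 = r, 1 = θ).
Gamma = {(0, 1, 1): -r, (1, 0, 1): 1 / r, (1, 1, 0): 1 / r}

# Polar components of the constant Cartesian field x-hat.
V = [sp.cos(th), -sp.sin(th) / r]

def cov_deriv(i, j):
    """V^i_{;j} = ∂V^i/∂q^j + V^k Γ^i_{kj}."""
    return sp.simplify(sp.diff(V[i], q[j]) +
                       sum(V[k] * Gamma.get((i, k, j), 0) for k in range(2)))

for i in range(2):
    for j in range(2):
        assert cov_deriv(i, j) == 0

# The plain partial derivative alone is NOT zero:
assert sp.diff(V[0], th) != 0
```

This is exactly the point of the chapter: the Christoffel term compensates for the position dependence of the basis vectors.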
5. The geodesic equation

A geodesic in Euclidean space is a straight line. In general, it is the curve of shortest length between two points and the curve along which a freely falling particle moves. The ellipses of the planets are geodesics around the sun, and the moon is in free fall around the Earth on a geodesic. The geodesic equation can be obtained in a number of ways. We show three of them.

#1 The geodesic can be obtained from variational principles, [Arfken] 6th Edition.

δ ∫ ds = 0    [5-1]

where ds² is the metric of the space:

(ds)² = dr⃗ · dr⃗ = (ε⃗_i dq^i) · (ε⃗_j dq^j) = g_ij dq^i dq^j

The variation of ds²:

δ(ds²) = 2 ds δ(ds) = δ(g_ij dq^i dq^j) = [δ(uvw) = δ(u)vw + uδ(v)w + uvδ(w)] = δg_ij dq^i dq^j + g_ij δ(dq^i) dq^j + g_ij dq^i δ(dq^j)    [5-2]
Inserting [5-2] into [5-1] yields

½ ∫ [ (∂g_ij/∂q^k) δq^k (dq^i/ds)(dq^j/ds) + g_ij (d(δq^i)/ds)(dq^j/ds) + g_ij (dq^i/ds)(d(δq^j)/ds) ] ds = 0    [5-3]

where ds measures the length on the geodesic.

The variations δg_ij expressed in terms of the independent variations δq^k yield

δg_ij = (∂g_ij/∂q^k) δq^k    [5-4]

Insert [5-4] in equation [5-3], shift the derivatives in the last two terms of [5-3] upon integrating by parts, and rename the dummy summation indices, and [5-3] will be turned into

½ ∫ [ (∂g_ij/∂q^k)(dq^i/ds)(dq^j/ds) − d/ds ( g_ik dq^i/ds + g_kj dq^j/ds ) ] δq^k ds = 0    [5-5]

The δq^k can have any value, which means that the integrand of [5-5], set equal to zero, gives the geodesic equation. It needs some more manipulations, though …

(∂g_ij/∂q^k)(dq^i/ds)(dq^j/ds) − d/ds ( g_ik dq^i/ds + g_kj dq^j/ds ) = 0    [5-6]

Using dg_ik/ds = (∂g_ik/∂q^j)(dq^j/ds) and dg_kj/ds = (∂g_kj/∂q^i)(dq^i/ds) and that g_ij is symmetric results in:

½ (∂g_ij/∂q^k − ∂g_ik/∂q^j − ∂g_jk/∂q^i)(dq^i/ds)(dq^j/ds) − g_ik d²q^i/ds² = 0    [5-7]

Multiplying [5-7] with g^{mk} and using the fact that g^{mk} g_ik = δ^m_i finally yields the geodesic equation:

d²q^m/ds² + ½ g^{mk} (∂g_ik/∂q^j + ∂g_jk/∂q^i − ∂g_ij/∂q^k)(dq^i/ds)(dq^j/ds) = 0    [5-8]

The coefficient of the velocities is the Christoffel symbol Γ^m_{ij}.
#2 An alternative derivation of the geodesic equation can be found in [Arfken] 7th Edition.

The distance between two points can be represented as

s = ∫ √(g_ij q̇^i q̇^j) dt    [5-9]

To find the geodesic equation it is possible to start from the action:

J = ∫ L dt    [5-10]

Using the Lagrangian formulation of relativistic mechanics where, for a particle not subject to a potential other than a gravitational force (which is described by the metric tensor), the Lagrangian reduces to:

L = ½ g_ij q̇^i q̇^j    [5-11]

It can be shown that the above Lagrangian leads in fact to the same Euler-Lagrange equations as the Lagrangian relative to [5-9].

We can replace the minimization of J by that of the action:

δ ∫ g_ij q̇^i q̇^j dt = 0    [5-12]

thereby simplifying the problem by eliminating the radical (the square root).

The minimization in [5-12] is a relatively simple standard problem in variational calculus. Note that g_ij is a function of all the q^i but not of the derivatives q̇^i. There will be an Euler equation for each k:

∂(g_ij q̇^i q̇^j)/∂q^k − d/dt ( ∂(g_ij q̇^i q̇^j)/∂q̇^k ) = 0    [5-13]
Evaluate [5-13] to get:

(∂g_ij/∂q^k) q̇^i q̇^j − d/dt ( ∂(g_ij q̇^i q̇^j)/∂q̇^k ) = [∂q̇^i/∂q̇^k = δ^i_k] = (∂g_ij/∂q^k) q̇^i q̇^j − d/dt ( g_ik q̇^i + g_kj q̇^j ) = 0    [5-14]

Simplify by using:

dq̇^i/dt = q̈^i and dg_ik/dt = (∂g_ik/∂q^j) q̇^j [chain rule]    [5-15]

And [5-14] can be written as:

½ q̇^i q̇^j [ ∂g_ij/∂q^k − ∂g_ik/∂q^j − ∂g_jk/∂q^i ] − g_ik q̈^i = 0    [5-16]

As a final simplification, multiply [5-16] by g^{lk} and use the identity g^{lk} g_ik = δ^l_i to get the geodesic equation:

d²q^l/dt² + ½ g^{lk} [ ∂g_ik/∂q^j + ∂g_jk/∂q^i − ∂g_ij/∂q^k ] q̇^i q̇^j = 0    [5-17]

Or, using the Christoffel symbols of the second kind written in terms of the metric tensor:

d²q^l/dt² + Γ^l_{ij} q̇^i q̇^j = 0    [5-18]
#3 Yet another way to derive the geodesic equation is to see the geodesic as the curve with zero tangential acceleration. The approach can be described as “take a parameterized curve and let the space curvature, described by the metric tensor, move all points along the curve to the correct positions”.

Consider a curve

x^i = x^i(t) = x^i(q^k(t))    [5-19]

which is a sufficiently smooth function, where q: ℝ → ℝ^N on the interval [t_1, t_2].

Calling t the ‘time’, only a choice we make, we can call

ẋ^i(t) = x^i_{,k} q̇^k    [5-20]

the velocity and

ẍ^i(t) = x^i_{,jk} q̇^j q̇^k + x^i_{,k} q̈^k    [5-21]

the acceleration.

The geodesic is the shape of the curve when the acceleration has zero projection onto the plane tangent to the given surface, which gives the equations

ẍ^i(t) x^i_{,m} = 0, m = 1..N    [5-22]

Using [5-21] in [5-22] gives

(x^i_{,jk} q̇^j q̇^k + x^i_{,k} q̈^k) x^i_{,m} = 0

and, just doing the multiplication,

x^i_{,m} x^i_{,k} q̈^k + x^i_{,m} x^i_{,jk} q̇^j q̇^k = 0    [5-23]

Before going on, some clarification of the meaning of the condition stated in [5-22]: it has the form a⃗ · b⃗ = 0, which means that a⃗ ⊥ b⃗; the vector has no projection in the b⃗ direction. Applied to [5-22] this means that ẍ⃗ ⊥ x⃗_{,m}. The acceleration has zero projection onto the tangent plane, as stated above. The acceleration, the ‘force’, is perpendicular to the curve and only changes the direction of the curve in N-dim space. This condition is then expressed in terms of the metric tensor for the space.

To get the geodesic expressed in terms of the metric tensor, note the definition of the covariant metric tensor:

g_{mk} = x^i_{,m} x^i_{,k}    [definition of the covariant metric tensor]

Derivation of the definition with respect to q^j yields

g_{mk,j} = x^i_{,mj} x^i_{,k} + x^i_{,m} x^i_{,kj}    [5-24]

and the two permutations of the indices

g_{mj,k} = x^i_{,mk} x^i_{,j} + x^i_{,m} x^i_{,jk}    [5-25]

g_{kj,m} = x^i_{,km} x^i_{,j} + x^i_{,k} x^i_{,jm}    [5-26]

[5-24] + [5-25] − [5-26] gives:

g_{mk,j} + g_{mj,k} − g_{kj,m} = 2 x^i_{,m} x^i_{,jk}    [5-27]

Using the definition of the metric tensor + [5-27] in [5-23]:

g_{mk} q̈^k + ½ (g_{mk,j} + g_{mj,k} − g_{kj,m}) q̇^j q̇^k = 0    [5-28]

Multiplying with g^{lm} and noticing the identity g^{lm} g_{mk} = δ^l_k, [5-28] will result in:

q̈^l + ½ g^{lm} (g_{mk,j} + g_{mj,k} − g_{kj,m}) q̇^j q̇^k = 0    [5-29]

This is the geodesic equation, and it can be written more compactly as before using the Christoffel symbols of the second kind:

q̈^l + Γ^l_{jk} q̇^j q̇^k = 0    [5-30]
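The geodesic equation can also be integrated numerically. A sketch (plain Python with a simple RK4 stepper written for the purpose) in plane polar coordinates, where the equations read r̈ − r θ̇² = 0 and θ̈ + (2/r) ṙ θ̇ = 0; since the plane is flat, the resulting curve must be a straight line in Cartesian coordinates:

```python
# Integrate the geodesic equation [5-30] in plane polar coordinates and
# check that the trajectory is a straight line in Cartesian coordinates.
import math

def rhs(state):
    # state = (r, theta, rdot, thetadot); Γ^r_θθ = -r, Γ^θ_rθ = Γ^θ_θr = 1/r.
    r, th, rd, thd = state
    return (rd, thd, r * thd**2, -2.0 * rd * thd / r)

def rk4_step(state, h):
    def add(s, k, f):
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = rhs(state)
    k2 = rhs(add(state, k1, h / 2))
    k3 = rhs(add(state, k2, h / 2))
    k4 = rhs(add(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Start at (x, y) = (1, 0) moving straight "up": r = 1, θ = 0, ṙ = 0, θ̇ = 1.
state, h = (1.0, 0.0, 0.0, 1.0), 1e-3
points = []
for _ in range(2000):
    state = rk4_step(state, h)
    points.append((state[0] * math.cos(state[1]),
                   state[0] * math.sin(state[1])))

# The geodesic through (1, 0) with vertical initial velocity is the line x = 1.
assert all(abs(x - 1.0) < 1e-6 for x, _ in points)
```

The polar components (r, θ) of the solution are anything but linear in t; it is only after mapping back to Cartesian coordinates that the straight line appears.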
An Example “along the geodesic”:

Since the length s along the geodesic is a scalar, the velocities dq^i/ds of a freely falling particle along the geodesic form a contravariant vector. Hence A_i dq^i/ds is a well-defined scalar on a geodesic, which we can differentiate in order to define the covariant derivative of any covariant vector A_i:

d/ds ( A_i dq^i/ds ) = (∂A_i/∂q^j)(dq^j/ds)(dq^i/ds) + A_i d²q^i/ds² = [geodesic eq.] =
= (∂A_i/∂q^j)(dq^j/ds)(dq^i/ds) − A_i Γ^i_{jk} (dq^j/ds)(dq^k/ds) =
= (∂A_i/∂q^j − A_k Γ^k_{ij}) (dq^i/ds)(dq^j/ds) = A_{i;j} (dq^i/ds)(dq^j/ds)

The quotient theorem tells us that A_{i;j} is a covariant tensor that defines the covariant derivative of A_i. Similarly, higher-order tensors may be derived.
Some concluding remarks:

The Mass and Space ‘marriage’, stolen from somewhere: “Mass tells space how to curve and curved space tells mass how to move”.

And as a reminder that there is always more to learn, Einstein's field equations, which are still actively researched, in terms of the quantities above:

R_μν − ½ R g_μν + Λ g_μν = (8πG/c⁴) T_μν

where R_μν is the Ricci curvature tensor, R the scalar curvature, g_μν the metric tensor, Λ the cosmological constant, G Newton's gravitational constant, c the speed of light in vacuum, and T_μν the stress-energy tensor.
6. Appendix A, a wakeup problem

Living in flat Euclidean space?

The following example serves as an illustration of the importance of the geometry of space itself and the importance of choosing a proper coordinate system.

Here is the problem:

We have a box whose short sides are squares, 1.2 meters x 1.2 meters. The length of the box is 3 meters. In the middle of one of the short sides, 0.1 meter from the top side of the box, is a spider. In the middle of the other short side, 0.1 meter from the bottom side of the box, is a fly, caught in the spider's web. The spider wants to catch the fly as fast as possible. The spider can only move on the surface of the box. The geometry is strictly Euclidean on the surfaces of the box, but with ‘singularities’ at the edges.

How long is the shortest path from the spider to the fly?

[Figure: the 3.0 m x 1.2 m x 1.2 m box, with the spider 0.1 m from the top of one end and the fly 0.1 m from the bottom of the opposite end.]
Yes, with the proper “coordinate transformation”.

By unfolding the box in different ways, the problem is solved using a straight ruler, the Pythagorean theorem and some thinking.

The easy answer is 4.2 meters, but there are more straight lines to the prey. Four different unfoldings are shown, three giving values ≠ 4.2 meters, two of them shorter than 4.2 meters:

#1 the easy one

#2 the no-good one

#3 shorter than the easy one

#4 the shortest path, chosen by the spider that has mirrors helping him to see round all the edges

This example shows the intrinsic nature of curved space, where care must be taken in defining the notion of “shortest path”. In general curved space the curvature is different at each point in the space, but the space is normally smooth, with no ‘edges’ like in this example.
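The four unfoldings can be checked with the Pythagorean theorem. A sketch (plain Python); the (run, offset) legs of each unfolding are my own reading of the classic spider-and-fly solution applied to this box, so treat the labels as assumptions:

```python
# Straight-line lengths in four unfoldings of the 3.0 x 1.2 x 1.2 box.
# Spider: 0.1 m below the top, fly: 0.1 m above the floor, both centered.
import math

candidates = {
    # (axial run, lateral offset) in the unfolded plane:
    "#1 over the top (easy)": (0.1 + 3.0 + 1.1, 0.0),
    "#2 around one side":     (0.6 + 3.0 + 0.6, 1.1 - 0.1),
    "#3 top + side":          (1.0 + 3.0, 0.6 + 0.6),
    "#4 top + side + floor":  (0.1 + 3.0 + 0.1, 0.6 + 1.2 + 0.6),
}

lengths = {name: math.hypot(a, b) for name, (a, b) in candidates.items()}
for name, L in sorted(lengths.items(), key=lambda kv: kv[1]):
    print(f"{name}: {L:.3f} m")

assert math.isclose(lengths["#1 over the top (easy)"], 4.2)
assert math.isclose(min(lengths.values()), 4.0)   # the spider's best route
```

The route crossing the most faces wins: 4.0 m, beating the “obvious” 4.2 m, with the other two candidates at about 4.18 m and 4.32 m.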
Some mapping notes

Different mapping methods can simplify the mathematics of a problem considerably. In high power microwave transmission it is common to use pipes with rectangular cross section, and conformal mapping can be used to get a circular cross section, solve the problem in that geometry and then use the inverse map to get the result for the rectangular cross section.

Riemann's mapping theorem shows that almost any area in the complex plane can be mapped to the interior of the unit circle. Riemann's inverse theorem is about mapping to the area outside the unit circle. With some “minimal” conditions the map can be shown to be unique. The condition for the mapping is that the area is “a non-empty simply connected open subset of the complex number plane which is not all of ℂ; then there exists a biholomorphic (bijective and holomorphic) mapping from it onto the open unit disk”.

Another example of using mapping techniques is the laminar flow around the wing of an airplane. The wing's cross-section is mapped to a circle. This is probably not used anymore, with today's numerical methods available.
7. Appendix B, index notation

Index notation is used extensively in tensor algebra (and thereby in general relativity theory), differential geometry, matrices, determinants and more. Understanding some basics about index notation is a good and simple-to-learn tool.

A vector a⃗ has components a_i, and a vector b⃗ has components b_i. The scalar product of the two vectors is a⃗ · b⃗ = Σ a_i b_i = a_i b_i if Einstein's summation convention is used, meaning that anytime the same index appears twice there is an implicit summation over that index.

NB! Writing a⃗ = a_i is a misuse of the = sign, since the right hand side is a scalar and the left hand side is a vector.

NB! Einstein's convention cannot always be used; e.g. an expression where an index is repeated three times is meaningless, so Σ has to be used. Also e.g. if readability is compromised. Use with common sense.
The two vectors a⃗ and b⃗ can be written as a⃗ = a_i e⃗_i and b⃗ = b_k e⃗_k, which gives the scalar product

a⃗ · b⃗ = a_i e⃗_i · b_k e⃗_k = a_i b_k e⃗_i · e⃗_k

This is a large number of terms, summing over i = 1…n and k = 1…n, quite far from the nice formula in the preceding paragraph!

The Kronecker delta is defined as

δ_ij = δ^i_j = δ^ij = 0 if i ≠ j and = 1 if i = j

The condition for orthonormal basis vectors can be written as

e⃗_i · e⃗_k = δ_ik,

which gives

a⃗ · b⃗ = a_i e⃗_i · b_k e⃗_k = a_i b_k e⃗_i · e⃗_k = a_i b_k δ_ik =

{ k can be set to i, all other terms are multiplied with 0; notice that when doing this contraction care must be taken over which indices are just dummies for e.g. a summation and which ones have other semantic content }

= a_i b_i, the nice expression you usually see.

NB! δ_ii = δ_11 + δ_22 + δ_33 = 3 (= N if N dimensions)
Can one avoid the mess caused by general coordinate systems where the basis vectors are neither orthogonal nor normalized? Yes, with the basis vectors chosen in a smart way. Then the scalar product of a⃗ and b⃗ will look like a⃗ · b⃗ = a_i b^k δ^i_k, where the sub/lower index denotes covariant vector components and the super/upper index contravariant vector components.

a⃗ · b⃗ = a_i b^k δ^i_k = a_i b^i for a general curvilinear coordinate system.
Another important symbol used is the Levi-Civita symbol, sometimes also called the permutation symbol. It is denoted e, or ε in the case that e could be misunderstood as Euler's number e.

The definition in 3 dimensions, i, j, k = 1..3, is ε_ijk { = 0 if two of i, j, k are equal, = 1 if i, j, k is an even permutation, = −1 if i, j, k is an odd permutation }.

The definition of ε_ijk is closely coupled to the definition of the determinant, which is 0 if two rows or columns are equal and which changes sign if two rows or columns are interchanged. Consequently, with a matrix A with elements a_ij, the determinant

|A| = ε_ijk a_1i a_2j a_3k = ε_ijk a_i1 a_j2 a_k3,

where the i, j, k are just dummy summation indices 1..3 and the fixed indices pick the elements in the matrix.

NB! ε_ijk ε_ijk = 3! (= N! if N dimensions)

Using this with the matrix A with elements a_ij, the determinant can also be written in the fully symmetric form

|A| = (1/3!) ε_ijk ε_lmn a_il a_jm a_kn
The generalized Kronecker delta is defined as the determinant

δ^{ijk}_{lmn} = | δ_il δ_im δ_in ; δ_jl δ_jm δ_jn ; δ_kl δ_km δ_kn |

Using the generalized Kronecker delta, we can get the very useful epsilon-delta identity (“ε–δ identity”). Here is how: start from

ε_123 = 1 = | 1 0 0 ; 0 1 0 ; 0 0 1 | = | δ_11 δ_12 δ_13 ; δ_21 δ_22 δ_23 ; δ_31 δ_32 δ_33 |

A row shift is taken care of by a factor ε_ijk and a column shift by a factor ε_lmn, so that

ε_ijk ε_lmn = | δ_il δ_im δ_in ; δ_jl δ_jm δ_jn ; δ_kl δ_km δ_kn | = δ^{ijk}_{lmn}

Now, do a contraction by setting l = i and evaluate the determinant on the right hand side using well known algorithms to get:

“ε–δ identity”: ε_ijk ε_imn = δ_jm δ_kn − δ_jn δ_km
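Both the determinant formula and the ε–δ identity are easy to verify by brute force. A sketch (numpy; the closed-form construction of ε from index differences is my own choice):

```python
# Brute-force check of |A| = ε_ijk a_1i a_2j a_3k and of the ε–δ identity.
import numpy as np
from itertools import product

# Levi-Civita symbol for indices 0..2 via the product formula
# (i-j)(j-k)(k-i)/2, which is +1/-1/0 exactly as required.
eps = np.zeros((3, 3, 3))
for i, j, k in product(range(3), repeat=3):
    eps[i, j, k] = (i - j) * (j - k) * (k - i) / 2

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# |A| = ε_ijk a_1i a_2j a_3k  (einsum sums the repeated indices).
det = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
assert np.isclose(det, np.linalg.det(A))

# ε_ijk ε_imn = δ_jm δ_kn − δ_jn δ_km
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
d = np.eye(3)
rhs = np.einsum('jm,kn->jkmn', d, d) - np.einsum('jn,km->jkmn', d, d)
assert np.allclose(lhs, rhs)
```

The einsum subscript strings are a one-to-one transcription of the index expressions above, which makes this a handy way to test identities before using them by hand.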
There is a special notation for writing derivatives:

Example 1, the j-component of ∇Φ: e⃗_j · ∇Φ = Φ_{,j} = ∂Φ/∂x_j

Example 2, second partial derivative: Φ_{,jk} = ∂²Φ/∂x_j∂x_k

NB! A shorthand for the partial derivative can also be ∂_k = ∂/∂x_k, meaning “partial derivative in the k direction”.
Examples of index notation, Cartesian coordinates

Example 3: a⃗ × b⃗ = ε_ijk a_j b_k e⃗_i

Example 4: a⃗ · (b⃗ × c⃗) = ε_ijk a_i b_j c_k

Example 5: the vector identity

∇ × (a⃗ × b⃗) = a⃗ (∇ · b⃗) − b⃗ (∇ · a⃗) + (b⃗ · ∇) a⃗ − (a⃗ · ∇) b⃗   “not too easy?”

Using index notation: take the e⃗_i component of the vector [often used this way] =>

e⃗_i · [∇ × (a⃗ × b⃗)] = ε_ijk [ε_klm a_l b_m]_{,j} = [derivative of a product] = ε_ijk ε_klm [a_{l,j} b_m + a_l b_{m,j}] = [“epsilon-delta identity”] = [δ_il δ_jm − δ_im δ_jl] a_{l,j} b_m + [δ_il δ_jm − δ_im δ_jl] a_l b_{m,j} = [use δ properties] = a_{i,j} b_j − a_{j,j} b_i + a_i b_{j,j} − a_j b_{i,j}

And the vector identity can be easily recognized, e.g. a_{j,j} = ∇ · a⃗ and a_{i,j} b_j = e⃗_i · ((b⃗ · ∇) a⃗).
Example 6: ∇ × (∇φ) = 0⃗   “Curl of a gradient field is the zero vector”.

Taking the i component of the vector using index notation:

e⃗_i · [∇ × (∇Φ)] = ε_ijk ∂_j ∂_k Φ = ε_ijk ∂²Φ/∂x_j∂x_k

With i fixed this results in two terms,

∂²Φ/∂x_j∂x_k − ∂²Φ/∂x_k∂x_j, j, k ≠ i,

= 0 (if it is OK to change the order of derivatives).

Example 7: ∇ · (∇ × a⃗) = 0   “Divergence of a curl is zero”.

Using index notation:

∇ · (∇ × a⃗) = ∂_i (ε_ijk ∂_j a_k) = ε_ijk ∂²a_k/∂x_i∂x_j = [ε definition + changing order of derivation = no change] = 0
Example 8: Matrix multiplication

A matrix A with elements a_ij, i, j = 1..3, and a vector v⃗ with elements v_i, i = 1..3:

A · v⃗ = B will be a 3x1 matrix with elements b_i = a_ij v_j

Example 9: Let A be an mxn matrix with elements a_ij, i = 1..m, j = 1..n, and B an nxp matrix with elements b_kj, k = 1..n, j = 1..p.

A · B = C will be an mxp matrix with elements c_ij = a_ik b_kj, with the implicit sum over k = 1..n, and i = 1..m, j = 1..p.
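Index notation maps directly onto numpy's einsum, which takes the index pattern literally; repeated indices are summed, exactly as in Einstein's convention (a sketch; the matrix shapes are arbitrary):

```python
# Index notation as einsum subscript strings.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))   # a_ik, i = 1..m, k = 1..n
B = rng.standard_normal((3, 5))   # b_kj, k = 1..n, j = 1..p
v = rng.standard_normal(3)        # v_j

# Example 8: b_i = a_ij v_j
assert np.allclose(np.einsum('ij,j->i', A, v), A @ v)

# Example 9: c_ij = a_ik b_kj, implicit sum over k
assert np.allclose(np.einsum('ik,kj->ij', A, B), A @ B)

# Scalar product a_i b_i
a, b = rng.standard_normal(7), rng.standard_normal(7)
assert np.isclose(np.einsum('i,i->', a, b), a @ b)
```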
8. Appendix C, remarks of interest

No coordinate system?

Differential forms are an approach to multivariable calculus that is independent of coordinates (http://en.wikipedia.org/wiki/Differential_form). “The differential-form framework has brought considerable unification to vector algebra and to tensor analysis on manifolds more generally…” [Arfken].

Maxwell's homogeneous equations can be formulated simply as dF = 0, and his inhomogeneous equations take the elegant form d(*F) = *J.
In April 1919 Theodor Kaluza noticed that when he solved Albert Einstein's equations for general relativity using five dimensions, James Clerk Maxwell's equations for electromagnetism emerged spontaneously. Kaluza wrote to Einstein who, in turn, encouraged him to publish. Kaluza's theory was published in 1921 in a paper, "Zum Unitätsproblem der Physik", with Einstein's support. It is now called Kaluza–Klein theory (KK theory) and is a model that seeks to unify the two fundamental forces of gravitation and electromagnetism.

Klein is known among other things for his ‘bottle’. The Klein bottle is a non-orientable surface; informally, it is a surface (a two-dimensional manifold) in which notions of left and right cannot be consistently defined. Other related non-orientable objects include the Möbius strip and the real projective plane. Whereas a Möbius strip is a surface with boundary, a Klein bottle has no boundary (for comparison, a sphere is an orientable surface with no boundary).
[Figure: the Klein bottle.]

Beauty? [Figure: illustrations of hyperbolic geometry by M.C. Escher.]
9. Appendix D, T-shirt from CERN

This confusing T-shirt from CERN demonstrates how complex physical laws can be written in a simple way, making it possible to write equations from different fields of physics in a similar form.

The leaflet that came with the shirt says:

This equation neatly sums up our current understanding of fundamental particles and forces. It represents mathematically what we call the standard model of particle physics. The top line describes the forces: electricity, magnetism and the strong and weak nuclear forces. The second line describes how these forces act on the fundamental particles of matter, namely the quarks and leptons. The third line describes how these particles obtain their masses from the Higgs boson. The fourth line enables the Higgs boson to do the job. Many experiments at CERN and other laboratories have verified the top two lines in detail. One of the primary objectives of the LHC was to see whether the Higgs boson exists, now confirmed, and behaves as predicted by the last two lines.
10. References

[Arfken] Mathematical Methods for Physicists, George B. Arfken, Hans J. Weber and Frank E. Harris, 6th and 7th editions
[Heinbockel] Introduction to Tensor Calculus and Continuum Mechanics, Professor J.H. Heinbockel
[Waleffe] Article available on the internet about curvilinear coordinates by Professor Fabian Waleffe, University of Wisconsin–Madison
[Penrose] The Road to Reality: A Complete Guide to the Laws of the Universe, Sir Roger Penrose
[Wikipedia] http://en.wikipedia.org
and more