Lecture Notes 1: Matrix Algebra
Part B: Determinants and Inverses
Peter J. Hammond
Autumn 2012, revised Autumn 2014, 2015
University of Warwick, EC9A0 Maths for Economists
Lecture Outline
More Special Matrices
Permutations and Transpositions
Permutation and Transposition Matrices
Elementary Row Operations
Determinants
Determinants of Order 2
Determinants of Order 3
Characterizing the Determinant Function
Rules for Determinants
Expansion by Alien Cofactors and the Adjugate Matrix
Minor Determinants
The Inverse Matrix
Definition and Existence
Orthogonal Matrices
Permutations and Their Signs
Definition
Given Nn = {1, . . . , n} where n ≥ 2,
a permutation of Nn is a bijective mapping π : Nn → Nn .
The family Π of all permutations of Nn includes:
▶ the identity mapping ι defined by ι(h) = h for all h ∈ Nn;
▶ for each π ∈ Π, a unique inverse π⁻¹ ∈ Π
  for which π⁻¹ ◦ π = π ◦ π⁻¹ = ι.

Definition
1. Given any permutation π on Nn, an inversion of π
   is a pair (i, j) ∈ Nn × Nn such that i > j and π(i) < π(j).
2. A permutation π : Nn → Nn is either even or odd
   according as it has an even or odd number of inversions.
3. The sign or signature of a permutation π, denoted by sgn(π),
   is defined as: +1 if π is even; and −1 if π is odd.
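The inversion count gives a direct algorithm for the sign. As a quick numerical illustration (a sketch added to these notes, not part of the original), in Python, with a permutation π on Nn encoded as a tuple p where p[i−1] = π(i):

```python
def num_inversions(p):
    """Count the pairs (i, j) with i > j and p(i) < p(j)."""
    n = len(p)
    return sum(1 for j in range(n) for i in range(j + 1, n) if p[i] < p[j])

def sgn(p):
    """sgn(p): +1 if the permutation is even, -1 if it is odd."""
    return 1 if num_inversions(p) % 2 == 0 else -1

# The identity has no inversions; transposing two adjacent elements adds one.
print(sgn((1, 2, 3)), sgn((2, 1, 3)), sgn((3, 1, 2)))  # 1 -1 1
```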
A Product Rule for the Signs of Permutations
Theorem
For any two permutations π, ρ ∈ Π, one has
sgn(π ◦ ρ) = sgn(π) sgn(ρ)
The proof on the next slides uses the following:
Definition
First, define the signum or sign function

    R \ {0} ∋ x ↦ s(x) := +1 if x > 0;  −1 if x < 0

Next, for each permutation π ∈ Π, let S(π) denote the matrix
whose elements satisfy

    sij(π) = −1 if i > j and π(i) < π(j);  +1 otherwise

Finally, let ⊗S(π) := ∏_{i=1}^n ∏_{j=1}^n sij(π) ∈ {−1, +1}.
A Product Formula for the Sign of a Permutation
Lemma
For all π ∈ Π one has sgn(π) = ∏_{i>j} s(π(i) − π(j)) = ⊗S(π).
Proof.
Let p := #{(i, j) ∈ Nn × Nn | i > j & π(i) < π(j)}
denote the number of inversions of π.
By definition, sgn(π) = (−1)^p, which is +1 or −1 according as p is even or odd.
But the definitions on the previous slide imply that
p = #{(i, j) ∈ Nn × Nn | i > j & s(π(i) − π(j)) = −1}
= #{(i, j) ∈ Nn × Nn | sij (π) = −1}
Therefore

    sgn(π) = (−1)^p = ∏_{i>j} s(π(i) − π(j))
           = ∏_{i=1}^n ∏_{j=1}^n sij(π) = ⊗S(π)
Proving the Product Rule
Suppose the three permutations π, ρ, σ ∈ Π satisfy σ = π ◦ ρ.
Then

    sgn(σ)/sgn(ρ) = ⊗S(σ)/⊗S(ρ) = ∏_{i=1}^n ∏_{j=1}^n [sij(σ)/sij(ρ)]

and also s(σ(i) − σ(j)) = s(π(ρ(i)) − π(ρ(j))).
The definition of the two matrices S(σ) and S(ρ) implies
that their elements satisfy sij (σ)/sij (ρ) = 1 whenever i ≤ j.
In particular, sij (σ)/sij (ρ) = 1 unless both i > j
and also s(σ(i) − σ(j)) = −s(ρ(i) − ρ(j)).
Hence, given any i, j ∈ Nn with i > j, one has sij (σ)/sij (ρ) = −1
if and only if s(π(ρ(i)) − π(ρ(j))) = −s(ρ(i) − ρ(j)),
or equivalently, if and only if:
either ρ(i) > ρ(j) and (ρ(i), ρ(j)) is an inversion of π;
or ρ(i) < ρ(j) and (ρ(j), ρ(i)) is an inversion of π.
Let p denote the number of inversions of the permutation π.
Then sgn(σ)/sgn(ρ) = (−1)^p = sgn(π),
implying that sgn(σ) = sgn(π) sgn(ρ).
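The product rule can also be verified exhaustively for small n. A minimal Python check (an added illustration, not part of the notes), again encoding permutations as tuples p with p[i−1] = π(i):

```python
import itertools

def sgn(p):
    """Sign of a permutation tuple p, with p[i-1] = pi(i)."""
    n = len(p)
    inv = sum(1 for j in range(n) for i in range(j + 1, n) if p[i] < p[j])
    return 1 if inv % 2 == 0 else -1

def compose(p, r):
    """The composition (p o r)(i) = p(r(i))."""
    return tuple(p[r[i] - 1] for i in range(len(r)))

# Exhaustive check of sgn(pi o rho) = sgn(pi) * sgn(rho) for n = 4.
perms = list(itertools.permutations(range(1, 5)))
ok = all(sgn(compose(p, r)) == sgn(p) * sgn(r) for p in perms for r in perms)
print(ok)  # True
```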
Transpositions
For each pair of distinct elements k, ℓ ∈ {1, 2, . . . , n},
the transposition mapping i ↦ τkℓ(i) on {1, 2, . . . , n}
is the permutation defined by

    τkℓ(i) := ℓ if i = k;  k if i = ℓ;  i otherwise

That is, τkℓ transposes the order of k and ℓ,
leaving all i ∉ {k, ℓ} unchanged.
Evidently τkℓ = τℓk and τkℓ ◦ τℓk = ι, the identity permutation,
and so τ ◦ τ = ι for every transposition τ.
It is also evident that τkℓ has an odd number of inversions
(in fact 2|k − ℓ| − 1 of them), so sgn(τ) = −1.
Transposition is Not Commutative
Exercise
Show that transpositions defined
on a set containing more than two elements
may not commute because, for example,

    τ12 ◦ τ23 = π231 ≠ π312 = τ23 ◦ τ12
Permutations are Products of Transpositions
Theorem
Any permutation π ∈ Π on Nn := {1, 2, . . . , n}
is the product of at most n − 1 transpositions.
We will prove the result by induction on n.
Any permutation π on N2 := {1, 2} is either the identity
or the transposition τ12, so the result holds for n = 2.
As the induction hypothesis,
suppose the result holds for permutations on Nn−1.
Proof of Induction Step
For general n, let j := π⁻¹(n) denote the element
that π moves to the end.
By construction, the permutation π ◦ τjn
must satisfy π ◦ τjn (n) = π(τjn (n)) = π(j) = n.
So the restriction π̃ of π ◦ τjn to Nn−1 is a permutation on Nn−1 .
By the induction hypothesis,
there exist transpositions τ1, τ2, . . . , τq, where q ≤ n − 2,
such that π̃(k) = π ◦ τjn(k) = τ1 ◦ τ2 ◦ . . . ◦ τq(k) for all k ∈ Nn−1.
For p = 1, . . . , q, because τp interchanges only elements of Nn−1,
one can extend its domain to include n by letting τp(n) = n.
Then π ◦ τjn(k) = τ1 ◦ τ2 ◦ . . . ◦ τq(k) for k = n as well,
so π = (π ◦ τjn) ◦ τjn⁻¹ = τ1 ◦ τ2 ◦ . . . ◦ τq ◦ τjn, since τjn⁻¹ = τjn.
Hence π is the product of at most q + 1 ≤ n − 1 transpositions.
This completes the proof by induction on n.
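The induction argument above translates directly into an algorithm. The following Python sketch (added for illustration; the helper names are mine, not from the notes) repeatedly moves the largest misplaced element into position, yielding at most n − 1 transpositions:

```python
import functools

def transposition(n, k, l):
    """The transposition tau_{kl} on {1,...,n} as a tuple."""
    t = list(range(1, n + 1))
    t[k - 1], t[l - 1] = l, k
    return tuple(t)

def compose(p, r):
    """The composition (p o r)(i) = p(r(i))."""
    return tuple(p[r[i] - 1] for i in range(len(r)))

def decompose(p):
    """Transpositions (k, l) with tau_1 o tau_2 o ... o tau_q = p and
    q <= n - 1, found by repeatedly moving the largest element into
    place, mirroring the induction step in the proof."""
    work, n, taus = list(p), len(p), []
    for m in range(n, 1, -1):
        j = work.index(m) + 1            # j = (current permutation)^{-1}(m)
        if j != m:
            taus.append((j, m))
            # Composing with tau_{jm} on the right swaps entries j and m.
            work[j - 1], work[m - 1] = work[m - 1], work[j - 1]
    taus.reverse()
    return taus

def recombine(n, taus):
    """Compose tau_1 o tau_2 o ... o tau_q, left to right."""
    return functools.reduce(compose, (transposition(n, k, l) for k, l in taus),
                            tuple(range(1, n + 1)))

p = (2, 3, 1)
taus = decompose(p)
print(taus)                     # [(1, 2), (2, 3)]
print(recombine(3, taus) == p)  # True
```

For p = π231 this recovers exactly the factorization τ12 ◦ τ23 = π231 used in the earlier exercise.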
Permutation Matrices: Definition
Definition
For each permutation π ∈ Π on {1, 2, . . . , n}, let Pπ denote
the unique associated n-dimensional permutation matrix
which is derived by applying π to the rows of the identity matrix In .
That is, for each i = 1, 2, . . . , n,
the ith row vector of the identity matrix In
is moved to become row π(i) of Pπ .
This definition implies that the only nonzero element
in row i of Pπ occurs
not in column j = i, as it would in the identity matrix,
but in column j = π⁻¹(i), where i = π(j).
Hence the matrix elements of Pπ are given by (Pπ)ij = δi,π(j)
for i, j = 1, 2, . . . , n.
Permutation Matrices: Examples
Example
There are two 2 × 2 permutation matrices, which are given by:

    P12 = I2;   P21 = [0 1]
                      [1 0]

Their signs are respectively +1 and −1.
There are 3! = 6 permutation matrices in 3 dimensions, given by:

    P123 = I3;      P132 = [1 0 0]   P213 = [0 1 0]
                           [0 0 1]          [1 0 0]
                           [0 1 0]          [0 0 1]

    P231 = [0 0 1]  P312 = [0 1 0]   P321 = [0 0 1]
           [1 0 0]         [0 0 1]          [0 1 0]
           [0 1 0]         [1 0 0]          [1 0 0]

Their signs are respectively +1, −1, −1, +1, +1 and −1.
Permutation Matrices: Exercise
Exercise
Suppose that π, ρ are permutations in Π,
whose composition is the function π ◦ ρ defined by

    {1, 2, . . . , n} ∋ i ↦ (π ◦ ρ)(i) = π(ρ(i)) ∈ {1, 2, . . . , n}

Show that:
1. the mapping i ↦ π(ρ(i)) is a permutation on {1, 2, . . . , n};
2. the associated permutation matrices satisfy Pπ◦ρ = Pπ Pρ.
Then use result 2 to prove by induction on q that,
if the permutation π equals the q-fold composition π1 ◦ π2 ◦ · · · ◦ πq,
then Pπ equals the q-fold matrix product Pπ1 Pπ2 · · · Pπq.
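The characterization (Pπ)ij = δi,π(j) and the identity Pπ◦ρ = Pπ Pρ from the exercise can both be checked by brute force. A small Python sketch (an added illustration, using plain nested lists):

```python
import itertools

def perm_matrix(p):
    """P_pi with (P_pi)_ij = delta_{i, pi(j)}: column j has its 1 in row pi(j)."""
    n = len(p)
    return [[1 if i + 1 == p[j] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def compose(p, r):
    """The composition (p o r)(i) = p(r(i))."""
    return tuple(p[r[i] - 1] for i in range(len(r)))

# Check P_{pi o rho} = P_pi P_rho for every pair of permutations with n = 3.
perms = list(itertools.permutations((1, 2, 3)))
ok = all(perm_matrix(compose(p, r)) == matmul(perm_matrix(p), perm_matrix(r))
         for p in perms for r in perms)
print(ok)  # True
```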
Transposition Matrices
A special case of a permutation matrix
is the transposition matrix Th,i, which transposes rows h and i.
As the matrix I with rows h and i interchanged, its elements satisfy

    (Th,i)rs = δrs if r ∉ {h, i};  δis if r = h;  δhs if r = i
Exercise
Prove that:
1. any transposition matrix T = Th,i is symmetric;
2. Th,i = Ti,h ;
3. Th,i Ti,h = Ti,h Th,i = I.
More on Permutation Matrices
Theorem
Any permutation matrix P = Pπ satisfies:
1. P = ∏_{s=1}^q Ts for the product
   of some collection of q ≤ n − 1 transposition matrices;
2. PP⊤ = P⊤P = I.
Proof.
The permutation π is the composition τ1 ◦ τ2 ◦ · · · ◦ τq
of q ≤ n − 1 transpositions τs (for s ∈ S := {1, 2, . . . , q}).
As shown previously, Pπ = ∏_{s=1}^q Ts, where Ts = Pτs for s ∈ S.
Then, because each Ts is symmetric,
the transpose (Pπ)⊤ equals the reversed product Tq · · · T2 T1.
But each transposition matrix Ts also satisfies Ts Ts = I,
so PP⊤ = T1 T2 · · · Tq Tq · · · T2 T1 = T1 T2 · · · Tq−1 Tq−1 · · · T2 T1,
which equals I by induction on q; similarly P⊤P = I.
A First Elementary Row Operation
Suppose that row r of the m × m identity matrix Im
is multiplied by a scalar α ∈ R, leaving all other rows unchanged.
This gives the m × m diagonal matrix
Sr (α) = diag(1, 1, . . . , 1, α, 1, . . . , 1)
whose diagonal elements are 1, except the r th element which is α.
Hence the elements of Sr(α) satisfy

    (Sr(α))ij = δij if i ≠ r;  α δij if i = r
Exercise
For the particular m × m matrix Sr (α)
and the general m × n matrix A,
show that the transformed m × n matrix Sr (α)A
is the result of multiplying row r of A by the scalar α,
leaving all other rows unchanged.
A Second Elementary Row Operation
Suppose a multiple of α times row q of the identity matrix Im
is added to its r th row, leaving all the other m − 1 rows unchanged.
Provided that q ≠ r, the resulting m × m matrix Er+αq equals Im,
but with an extra non-zero element equal to α in the (r, q) position.
Its elements therefore satisfy (Er+αq)ij = δij + α δir δjq.
Exercise
For the particular m × m matrix Er +αq
and the general m × n matrix A,
show that the transformed m × n matrix Er +αq A
is the result of adding the multiple of α times its row q
to the r th row of matrix A, leaving all other rows unchanged.
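Both elementary row operations can be realized and tested concretely. A minimal Python sketch (added here for illustration, using plain nested lists rather than any matrix library):

```python
def S(m, r, alpha):
    """S_r(alpha): the m x m identity with row r scaled by alpha."""
    return [[(alpha if i == r - 1 else 1) if i == j else 0
             for j in range(m)] for i in range(m)]

def E(m, r, q, alpha):
    """E_{r + alpha q}: the m x m identity plus alpha in position (r, q), q != r."""
    M = [[1 if i == j else 0 for j in range(m)] for i in range(m)]
    M[r - 1][q - 1] += alpha
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4], [5, 6]]
print(matmul(S(3, 2, 10), A))     # row 2 scaled: [[1, 2], [30, 40], [5, 6]]
print(matmul(E(3, 3, 1, -5), A))  # row 3 += -5 * row 1: [[1, 2], [3, 4], [0, -4]]
```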
Levi-Civita Lists
For any n ∈ N, define the set Nn := {1, 2, . . . , n}
of the first n natural numbers.
Definition
A Levi-Civita list j = (j1, j2, . . . , jn) ∈ (Nn)^n is a representation
of a mapping Nn ∋ i ↦ ji ∈ Nn := {1, 2, . . . , n}.
This list can be regarded as a mapping
from the row numbers of an n × n matrix
to the column numbers of the same matrix.
Levi-Civita Symbols
Definition
The Levi-Civita function (Nn)^n ∋ j ↦ εj ∈ {−1, 0, 1} of order n
maps each Levi-Civita list
into an associated value called its Levi-Civita symbol.
This value depends on whether the mapping Nn ∋ i ↦ ji ∈ Nn
is an even or an odd permutation of the ordered list (1, 2, . . . , n),
or is not a permutation at all. Specifically,

    εj = εj1 j2 ...jn := +1 if i ↦ ji is an even permutation;
                        −1 if i ↦ ji is an odd permutation;
                         0 if i ↦ ji is not a permutation
The Levi-Civita Matrix
Definition
Given the Levi-Civita mapping Nn ∋ i ↦ ji ∈ Nn := {1, 2, . . . , n},
the associated n × n Levi-Civita matrix Lj has elements defined by

    (Lj)rs = (Lj1 j2 ...jn)rs := δjr,s
This implies that the r th row of Lj equals row jr of the matrix In .
That is, Lj = (ej1 , ej2 , . . . , ejn)⊤ is the n × n matrix
produced by stacking the n row vectors e⊤jr (r = 1, 2, . . . , n)
of the canonical basis on top of each other,
with repetitions allowed.
For a general n × n matrix A, the matrix Lj A = Lj1 j2 ...jn A
is the result of stacking the n row vectors a⊤jr (r = 1, 2, . . . , n)
of A on top of each other, with repetitions allowed.
Specifically, (Lj A)rs = ajr s for all r, s ∈ {1, 2, . . . , n}.
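The Levi-Civita symbol is straightforward to compute: return 0 when the list repeats an index, and otherwise the sign of the permutation i ↦ ji. A Python sketch (an added illustration, not part of the notes):

```python
def epsilon(j):
    """Levi-Civita symbol of the list j = (j_1, ..., j_n): 0 if any index
    repeats, otherwise the sign of the permutation i -> j_i."""
    n = len(j)
    if len(set(j)) < n:          # not a bijection, hence not a permutation
        return 0
    inv = sum(1 for s in range(n) for r in range(s + 1, n) if j[r] < j[s])
    return 1 if inv % 2 == 0 else -1

print(epsilon((1, 2, 3)))  # +1
print(epsilon((2, 1, 3)))  # -1
print(epsilon((1, 1, 3)))  # 0
```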
Determinants of Order 2: Definition
Consider again the pair of linear equations

    a11 x1 + a12 x2 = b1
    a21 x1 + a22 x2 = b2

with its associated coefficient matrix

    A = [a11 a12]
        [a21 a22]

Let us define the number D := a11 a22 − a21 a12.
Provided that D ≠ 0, the equations have a unique solution given by

    x1 = (1/D)(b1 a22 − b2 a12),   x2 = (1/D)(b2 a11 − b1 a21)

The number D is called the determinant of the matrix A.
It is denoted by either det(A) or, more concisely, by |A|.
Determinants of Order 2: Simple Rule
Thus, for any 2 × 2 matrix A, its determinant D is

    |A| = |a11 a12| = a11 a22 − a21 a12
          |a21 a22|

For this special case of order 2 determinants, a simple rule is:
1. multiply the diagonal elements together;
2. multiply the off-diagonal elements together;
3. subtract the product of the off-diagonal elements
from the product of the diagonal elements.
Exercise
Show that the determinant satisfies

    |A| = a11 a22 |1 0| + a21 a12 |0 1|
                  |0 1|          |1 0|
Cramer’s Rule in the 2 × 2 Case
Using determinant notation, the solution to the equations

    a11 x1 + a12 x2 = b1
    a21 x1 + a22 x2 = b2

can be written in the alternative form

    x1 = (1/D) |b1 a12|,   x2 = (1/D) |a11 b1|
               |b2 a22|               |a21 b2|

This accords with Cramer's rule for the solution to Ax = b,
which is the vector x = (xi)_{i=1}^n each of whose components xi
is the fraction with:
1. denominator equal to the determinant D
   of the coefficient matrix A (provided, of course, that D ≠ 0);
2. numerator equal to the determinant of the matrix (A−i, b)
   formed from A by replacing its ith column
   with the b vector of right-hand side elements,
   while keeping all the columns in their original order.
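Cramer's rule for the 2 × 2 case can be implemented in a few lines. A Python sketch (an added illustration; exact rational arithmetic via the fractions module avoids rounding):

```python
from fractions import Fraction

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 system by Cramer's rule; requires D != 0."""
    D = a11 * a22 - a21 * a12
    if D == 0:
        raise ValueError("coefficient matrix is singular")
    x1 = Fraction(b1 * a22 - b2 * a12, D)
    x2 = Fraction(b2 * a11 - b1 * a21, D)
    return x1, x2

# Example: x1 + 2 x2 = 5, 3 x1 + 4 x2 = 6 has D = -2, x1 = -4, x2 = 9/2.
print(solve2(1, 2, 3, 4, 5, 6))
```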
Determinants of Order 3: Definition
Determinants of order 3 can be calculated
from those of order 2 according to the formula

    |A| = a11 |a22 a23| − a12 |a21 a23| + a13 |a21 a22|
              |a32 a33|       |a31 a33|       |a31 a32|

        = Σ_{j=1}^3 (−1)^{1+j} a1j |C1j|

where, for j = 1, 2, 3, |C1j| is the (1, j)-cofactor: the determinant
of the 2 × 2 matrix C1j obtained by removing both row 1 and column j from A.
The result is the following sum
|A| = a11 a22 a33 − a11 a23 a32 + a12 a23 a31
− a12 a21 a33 + a13 a21 a32 − a13 a22 a31
of 3! = 6 terms, each the product of 3 elements chosen
so that each row and each column is represented just once.
Determinants of Order 3: Cofactor Expansion
The determinant expansion

    |A| = a11 a22 a33 − a11 a23 a32 + a12 a23 a31
        − a12 a21 a33 + a13 a21 a32 − a13 a22 a31

is very symmetric, suggesting (correctly)
that the cofactor expansion along the first row (a11, a12, a13)

    |A| = Σ_{j=1}^3 (−1)^{1+j} a1j |C1j|

gives the same answer as the other cofactor expansions

    |A| = Σ_{j=1}^3 (−1)^{r+j} arj |Crj| = Σ_{i=1}^3 (−1)^{i+s} ais |Cis|

along, respectively:
▶ the r th row (ar1, ar2, ar3);
▶ the sth column (a1s, a2s, a3s).
Determinants of Order 3: Alternative Expressions
The same result

    |A| = a11 a22 a33 − a11 a23 a32 + a12 a23 a31
        − a12 a21 a33 + a13 a21 a32 − a13 a22 a31

can be obtained as either of the two expansions

    |A| = Σ_{j1=1}^3 Σ_{j2=1}^3 Σ_{j3=1}^3 εj1 j2 j3 a1j1 a2j2 a3j3
        = Σ_{π∈Π} sgn(π) ∏_{i=1}^3 aiπ(i)

Here εj = εj1 j2 j3 ∈ {−1, 0, 1} denotes the Levi-Civita symbol
associated with the mapping i ↦ ji from {1, 2, 3} into itself.
Also, Π denotes the set
of all 3! = 6 possible permutations on {1, 2, 3},
with typical member π, whose sign is denoted by sgn(π).
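Both order-3 expansions can be coded and compared. A Python sketch (an added illustration) evaluates the triple Levi-Civita sum and the sum over the 3! permutations, which must agree:

```python
import itertools

def det3_levi_civita(A):
    """Triple sum of eps_{j1 j2 j3} * a_{1 j1} * a_{2 j2} * a_{3 j3}."""
    def eps(j):
        if len(set(j)) < 3:
            return 0
        inv = sum(1 for s in range(3) for r in range(s + 1, 3) if j[r] < j[s])
        return 1 if inv % 2 == 0 else -1
    return sum(eps(j) * A[0][j[0] - 1] * A[1][j[1] - 1] * A[2][j[2] - 1]
               for j in itertools.product((1, 2, 3), repeat=3))

def det3_perm(A):
    """Sum over all permutations of sgn(pi) * prod_i a_{i, pi(i)}."""
    total = 0
    for p in itertools.permutations((1, 2, 3)):
        inv = sum(1 for s in range(3) for r in range(s + 1, 3) if p[r] < p[s])
        sgn = 1 if inv % 2 == 0 else -1
        total += sgn * A[0][p[0] - 1] * A[1][p[1] - 1] * A[2][p[2] - 1]
    return total

A = [[2, 0, 1], [1, 3, 2], [0, 1, 4]]
print(det3_levi_civita(A), det3_perm(A))  # 21 21
```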
The Determinant Function
Let Dn denote the domain Rn×n of n × n matrices.
When n = 1, 2, 3, the determinant mapping Dn ∋ A ↦ |A| ∈ R
specifies the determinant |A| of each n × n matrix A
as a function of its n row vectors (ai)_{i=1}^n.
For a general natural number n ∈ N, consider any mapping

    Dn ∋ A ↦ D(A) = D((ai)_{i=1}^n) ∈ R

defined on the domain Dn.
Notation: Let D(A/br) denote
the new value D(a1, . . . , ar−1, br, ar+1, . . . , an) of the function D
after the r th row ar of the matrix A
has been replaced by the new row vector br.
Row Multilinearity
Definition
The function Dn ∋ A ↦ D(A) of A's n rows (ai)_{i=1}^n
is (row) multilinear just in case,
for each row number i ∈ {1, 2, . . . , n},
each pair bi, ci ∈ Rn of new versions of row i,
and each pair of scalars λ, µ ∈ R, one has

    D(A/λbi + µci) = λD(A/bi) + µD(A/ci)

Formally, the mapping Rn ∋ ai ↦ D(A/ai) ∈ R
is required to be linear, for each fixed row i ∈ Nn.
That is, D is a linear function of the ith row vector ai on its own,
when all the other rows ah (h ≠ i) are fixed.
The Three Characterizing Properties
Definition
The function Dn ∋ A ↦ D(A) is alternating just in case,
for every transposition matrix T, one has D(TA) = −D(A)
— i.e., interchanging any two rows reverses its sign.

Definition
The mapping Dn ∋ A ↦ D(A) is of the determinant type just in case:
1. D is multilinear in its rows;
2. D is alternating;
3. D(In) = 1 for the identity matrix In.

Exercise
Use the previous definitions of the determinant for n ≤ 3
to show that the mapping Dn ∋ A ↦ |A| ∈ R
is of the determinant type provided that n ≤ 3.
First Implication of Multilinearity in the n × n Case
Lemma
Suppose that Dn ∋ A ↦ D(A) is multilinear in its rows.
For any fixed B ∈ Dn, the value of D(AB)
can be expressed as the linear combination

    D(AB) = Σ_{j1=1}^n Σ_{j2=1}^n · · · Σ_{jn=1}^n a1j1 a2j2 · · · anjn D(Lj1 j2 ...jn B)

of its values at all possible matrices

    Lj B = Lj1 j2 ...jn B := (bjr)_{r=1}^n

whose r th row, for each r = 1, 2, . . . , n,
equals the jr th row bjr of the matrix B.
Characterizing 2 × 2 Determinants
1. In the case of 2 × 2 matrices,
the lemma tells us that multilinearity implies
D(AB) = a11 a21 D(b1 , b1 ) + a11 a22 D(b1 , b2 )
+ a12 a21 D(b2 , b1 ) + a12 a22 D(b2 , b2 )
where b1 = (b11 , b21 ) and b2 = (b12 , b22 ) are the rows of B.
2. If D is also alternating, then D(b1 , b1 ) = D(b2 , b2 ) = 0
and D(B) = D(b1 , b2 ) = −D(b2 , b1 ), implying that
D(AB) = a11 a22 D(b1 , b2 ) + a12 a21 D(b2 , b1 )
= (a11 a22 − a12 a21 )D(B)
3. Imposing the additional restriction D(B) = 1 when B = I2 ,
we obtain the ordinary determinant D(A) = a11 a22 − a12 a21 .
4. Then, too, one derives the product rule D(AB) = D(A)D(B).
First Implication of Multilinearity: Proof
Each element of the product C = AB satisfies cik = Σ_{j=1}^n aij bjk.
Hence each row ci = (cik)_{k=1}^n of C can be expressed
as the linear combination ci = Σ_{j=1}^n aij bj of B's rows.
For each r = 1, 2, . . . , n and each arbitrary selection bj1 , . . . , bjr−1
of r − 1 rows from B, multilinearity therefore implies that

    D(bj1 , . . . , bjr−1 , cr , cr+1 , . . . , cn)
        = Σ_{jr=1}^n arjr D(bj1 , . . . , bjr−1 , bjr , cr+1 , . . . , cn)

This equation can be used to show, by induction on k, that

    D(C) = Σ_{j1=1}^n Σ_{j2=1}^n · · · Σ_{jk=1}^n a1j1 a2j2 · · · akjk
               × D(bj1 , . . . , bjk , ck+1 , . . . , cn)

for k = 1, 2, . . . , n, including for k = n as the lemma claims.
Additional Implications of Alternation
Lemma
Suppose Dn ∋ A ↦ D(A) is both row multilinear and alternating.
Then for all possible n × n matrices A, B,
and for all possible permutation matrices Pπ, one has:
1. D(AB) = Σ_{π∈Π} ∏_{i=1}^n aiπ(i) D(Pπ B);
2. D(Pπ B) = sgn(π) D(B);
3. under the additional assumption that D(In) = 1:
   determinant formula: D(A) = Σ_{π∈Π} sgn(π) ∏_{i=1}^n aiπ(i);
   product rule: D(AB) = D(A) D(B).
First Additional Implication of Alternation: Proof
Because D is alternating,
one has D(B) = 0 whenever two rows of B are equal.
It follows that for any matrix (bji)_{i=1}^n = Lj B
whose n rows are all rows of the matrix B,
one has D((bji)_{i=1}^n) = 0 unless these rows are all different.
But if all the n rows of (bji)_{i=1}^n = Lj B are different,
there exists a permutation π ∈ Π such that Lj B = Pπ B.
Hence, after eliminating terms that are zero, the sum

    D(AB) = Σ_{j1=1}^n Σ_{j2=1}^n · · · Σ_{jn=1}^n a1j1 a2j2 · · · anjn D((bjr)_{r=1}^n)
          = Σ_{j1=1}^n Σ_{j2=1}^n · · · Σ_{jn=1}^n a1j1 a2j2 · · · anjn D(Lj1 j2 ...jn B)
          = Σ_{π∈Π} ∏_{i=1}^n aiπ(i) D(Pπ B)

as stated in part 1 of the Lemma.
Second Additional Implication: Proof
Because D is alternating, whenever T is a transposition matrix,
one has D(TPπ B) = −D(Pπ B).
Suppose that π = τ1 ◦ · · · ◦ τq is one possible "factorization"
of the permutation π as a composition of transpositions.
But sgn(τ) = −1 for any transposition τ.
So sgn(π) = (−1)^q by the product rule for signs of permutations.
Note that Pπ = T1 T2 · · · Tq,
where Tp denotes the permutation matrix
corresponding to the transposition τp, for each p = 1, . . . , q.
It follows that

    D(Pπ B) = D(T1 T2 · · · Tq B) = (−1)^q D(B) = sgn(π) D(B)
as required.
Third Additional Implication: Proof
In case D(In) = 1, applying parts 1 and 2 of the Lemma
(which we have already proved) with B = In gives immediately

    D(A) = Σ_{π∈Π} ∏_{i=1}^n aiπ(i) D(Pπ) = Σ_{π∈Π} sgn(π) ∏_{i=1}^n aiπ(i)

But then, applying parts 1 and 2 of the Lemma
for a general matrix B gives

    D(AB) = Σ_{π∈Π} ∏_{i=1}^n aiπ(i) D(Pπ B)
          = Σ_{π∈Π} sgn(π) ∏_{i=1}^n aiπ(i) D(B) = D(A) D(B)

as an implication of the first equality on this slide.
This completes the proof of all three parts.
Formal Definition and Cofactor Expansion
Definition
The determinant |A| of any n × n matrix A is defined
so that Dn ∋ A ↦ |A| is the unique (row) multilinear
and alternating mapping that satisfies |In| = 1.

Definition
For any n × n determinant |A|, its rs-cofactor |Crs|
is the (n − 1) × (n − 1) determinant of the matrix Crs
obtained by omitting row r and column s from A.
The cofactor expansion of |A| along any row r or column s is

    |A| = Σ_{j=1}^n (−1)^{r+j} arj |Crj| = Σ_{i=1}^n (−1)^{i+s} ais |Cis|

Exercise
Prove that these cofactor expansions are valid, using the formula

    |A| = Σ_{π∈Π} sgn(π) ∏_{i=1}^n aiπ(i)
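The cofactor expansion along the first row gives a simple recursive algorithm for |A|. A Python sketch (an added illustration; fine for small matrices, though the permutation formula shows the work grows roughly like n!):

```python
def det(A):
    """Determinant by cofactor expansion along the first row:
    |A| = sum_j (-1)^(1+j) * a_{1j} * |C_{1j}|."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # C_{1j}: the submatrix with row 1 and column j+1 removed
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # -2
print(det([[2, 0, 1], [1, 3, 2], [0, 1, 4]]))  # 21
```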
Eight Basic Rules (Rules A–H of EMEA, Section 16.4)
Let |A| denote the determinant of any n × n matrix A.
1. |A| = 0 if all the elements in a row (or column) of A are 0.
2. |A⊤| = |A|, where A⊤ is the transpose of A.
3. If all the elements in a single row (or column) of A
are multiplied by a scalar α, so is its determinant.
4. If two rows (or two columns) of A are interchanged,
the determinant changes sign, but not its absolute value.
5. If two of the rows (or columns) of A are proportional,
then |A| = 0.
6. The value of the determinant of A is unchanged
if any multiple of one row (or one column)
is added to a different row (or column) of A.
7. The determinant of the product |AB| of two n × n matrices
equals the product |A| · |B| of their determinants.
8. If α is any scalar, then |αA| = αn |A|.
The Transpose Rule 2: Verification
The transpose rule 2 is key: for any statement
about how |A| depends on the rows of A,
there is an equivalent statement
about how |A| depends on the columns of A.
Exercise
Verify Rule 2 directly for 2 × 2 and then for 3 × 3 matrices.
Proof of Rule 2. The expansion formula implies that

    |A| = Σ_{π∈Π} sgn(π) ∏_{i=1}^n aiπ(i) = Σ_{π∈Π} sgn(π) ∏_{j=1}^n aπ⁻¹(j) j

But the product rule for signs of permutations implies
that sgn(π) sgn(π⁻¹) = sgn(ι) = 1, with sgn(π) = ±1.
Hence sgn(π⁻¹) = 1/sgn(π) = sgn(π).
So, because π ↔ π⁻¹ is a bijection,

    |A| = Σ_{π⁻¹∈Π} sgn(π⁻¹) ∏_{j=1}^n aπ⁻¹(j) j = |A⊤|

after using the expansion formula for |A⊤|, with π replaced by π⁻¹.
Verification of Rule 6
Exercise
Verify Rule 6 directly for 2 × 2 and then for 3 × 3 matrices.
Proof of Rule 6. Recall the notation Er+αq for the matrix resulting
from adding the multiple of α times row q of I to its r th row.
Recall too that Er+αq A is the matrix that results
from applying the same row operation to the matrix A.
Finally, recall the formula |A| = Σ_{j=1}^n (−1)^{r+j} arj |Crj|
for the cofactor expansion of |A| along the r th row.
The corresponding cofactor expansion of |Er+αq A| is then

    |Er+αq A| = Σ_{j=1}^n (−1)^{r+j} (arj + α aqj) |Crj| = |A| + α|B|

where B is derived from A by replacing row r with row q.
Unless q = r, the matrix B will have its qth row repeated,
implying that |B| = 0 because the determinant is alternating.
So q ≠ r implies |Er+αq A| = |A| for all α, which is Rule 6.
Verification of the Other Rules
Apart from Rules 2 and 6,
note that we have already proved the product Rule 7,
whereas the interchange Rule 4 just restates alternation.
Now that we have proved Rule 2,
note that Rules 1 and 3 follow from multilinearity,
applied in the special case when one row of the matrix
is multiplied by a scalar.
Also, the proportionality Rule 5 follows
from combining Rule 4 with multilinearity.
Finally, Rule 8, concerning the effect of multiplying
all elements of a matrix by the same scalar, is easily checked
because the expansion of |A| is the sum of many terms,
each of which involves the product of exactly n elements of A.
Expansion by Alien Cofactors
Expanding along either row r or column s gives

    |A| = Σ_{j=1}^n (−1)^{r+j} arj |Crj| = Σ_{i=1}^n (−1)^{i+s} ais |Cis|

when one uses matching cofactors.
Expanding by alien cofactors, however,
from either the wrong row i ≠ r
or the wrong column j ≠ s, gives

    0 = Σ_{j=1}^n (−1)^{i+j} arj |Cij| = Σ_{i=1}^n (−1)^{i+j} ais |Cij|

This is because the answer will be the determinant
of an alternative matrix in which:
▶ either row r has been duplicated by being copied into row i;
▶ or column s has been duplicated by being copied into column j.
The Adjugate Matrix
Definition
The adjugate (or "(classical) adjoint") adj A
of an order n square matrix A
has elements given by (adj A)ij = (−1)^{j+i} |Cji|.
It is therefore the transpose (C+)⊤ of the cofactor matrix C+
whose elements (C+)ij = (−1)^{i+j} |Cij| are the respective signed cofactors of A.
Main Property of the Adjugate Matrix
Theorem
For every n × n square matrix A one has

    (adj A) A = A (adj A) = |A| In

Proof.
The (i, j) elements of the two product matrices are respectively

    [(adj A) A]ij = Σ_{k=1}^n (−1)^{k+i} |Cki| akj
    [A (adj A)]ij = Σ_{k=1}^n (−1)^{j+k} aik |Cjk|

These are both cofactor expansions, which are expansions by:
▶ alien cofactors in case i ≠ j, implying that both equal 0;
▶ matching cofactors in case i = j, implying that they equal |A|.
Hence for each pair (i, j) one has

    [(adj A) A]ij = [A (adj A)]ij = |A| δij = |A| (In)ij
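The identity (adj A)A = A(adj A) = |A|In can be confirmed numerically. A Python sketch (an added illustration, using the signed cofactors (−1)^{i+j}|Cij| as in the definition above):

```python
def det(A):
    """Recursive cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def adjugate(A):
    """(adj A)_ij = signed cofactor (-1)^(j+i) |C_ji|."""
    n = len(A)
    def cof(r, s):  # signed (r+1, s+1)-cofactor, 0-based indices
        minor = [row[:s] + row[s + 1:] for k, row in enumerate(A) if k != r]
        return (-1) ** (r + s) * det(minor)
    return [[cof(j, i) for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 0, 1], [1, 3, 2], [0, 1, 4]]
d = det(A)
print(matmul(adjugate(A), A) == [[d if i == j else 0 for j in range(3)]
                                 for i in range(3)])  # True
```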
Minors: Definition
Definition
Given any m × n matrix A, a minor (determinant) of order k
is the determinant |Ai1 i2 ...ik ,j1 j2 ...jk | of a k × k submatrix,
with 1 ≤ i1 < i2 < . . . < ik ≤ m and 1 ≤ j1 < j2 < . . . < jk ≤ n,
that is formed by selecting all the elements that lie both:
▶ in one of the chosen rows ir (r = 1, 2, . . . , k);
▶ and in one of the chosen columns js (s = 1, 2, . . . , k).
Minors: Some Examples, and Rank
Example
1. In case A is an n × n matrix:
▶ the whole determinant |A| is the only minor of order n;
▶ each of the n² cofactors Cij is a minor of order n − 1.
2. In case A is an m × n matrix:
▶ each of the mn elements of the matrix is a minor of order 1;
▶ the number of minors of order k is

  (m choose k) · (n choose k) = m!/(k!(m − k)!) · n!/(k!(n − k)!)
Exercise
Verify that the set of elements that make up
the minor |Ai1 i2 ...ik ,j1 j2 ...jk | of order k is completely determined
by its k diagonal elements aih ,jh (h = 1, 2, . . . , k).
(These need not be diagonal elements of A).
Definition of Minor Rank
Definition
The (minor) rank of a matrix
is the order of its largest non-zero minor determinant.
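A brute-force sketch of this definition in pure Python (the names det and minor_rank are invented for illustration; det again uses the Leibniz permutation expansion): search the k × k submatrices from the largest k downwards, and return the first order with a non-zero minor.

```python
from itertools import combinations, permutations
from math import prod

def det(M):
    """Determinant via the permutation (Leibniz) expansion."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        total += (-1 if inv % 2 else 1) * prod(M[i][p[i]] for i in range(n))
    return total

def minor_rank(A):
    """Largest k for which some k x k minor determinant is non-zero."""
    m, n = len(A), len(A[0])
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[i][j] for j in cols] for i in rows]) != 0:
                    return k
    return 0  # the zero matrix has rank 0

# Row 2 is twice row 1, so |A| = 0, but a 2 x 2 minor survives: rank 2.
A = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]
```

This exponential search is only a definition-checker, of course; later methods (elementary row operations) compute rank far more efficiently.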
Principal and Leading Principal Minors
Definition
If A is an n × n matrix,
the minor |Ai1 i2 ...ik ,j1 j2 ...jk | of order k is:
▶ a principal minor if ih = jh for h = 1, 2, . . . , k,
implying that all its diagonal elements aih jh
are diagonal elements of A;
▶ a leading principal minor if its diagonal elements
are the leading diagonal elements ahh (h = 1, 2, . . . , k).
Exercise
Explain why an n × n determinant has:
1. 2^n − 1 principal minors;
2. n leading principal minors.
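The counts in this exercise can be confirmed by enumeration: a principal minor is determined by a non-empty subset of the n diagonal indices, while a leading principal minor is determined only by its order k. A short Python check (purely illustrative) for n = 4:

```python
from itertools import combinations

n = 4
# One principal minor per non-empty subset of {0, ..., n-1}: 2^n - 1 in all.
principal_index_sets = [s for k in range(1, n + 1)
                        for s in combinations(range(n), k)]
# One leading principal minor per initial segment {0, ..., k-1}: n in all.
leading_index_sets = [tuple(range(k)) for k in range(1, n + 1)]
```

Here `len(principal_index_sets)` is 2⁴ − 1 = 15 and `len(leading_index_sets)` is 4.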
Definition of Inverse Matrix
Exercise
Suppose that A is any “invertible” n × n matrix
for which there exist n × n matrices B and C
such that AB = CA = I.
1. By writing CAB in two different ways, prove that B = C.
2. Use this result to show that the equal matrices B = C,
if they exist, must be unique.
Definition
The n × n matrix X is the unique inverse
of the invertible n × n matrix A
just in case AX = XA = In .
In this case we write X = A−1 ,
so A−1 denotes the unique inverse.
Big question: does the inverse exist?
Existence Conditions
Theorem
An n × n matrix A has an inverse if and only if |A| ≠ 0,
which holds if and only if
at least one of the equations AX = In and XA = In has a solution.
Proof.
Provided that |A| ≠ 0, the identity (adj A)A = A(adj A) = |A|In
shows that the matrix X := (1/|A|) adj A is well defined
and satisfies XA = AX = In , so X is the inverse A−1 .
Conversely, if XA = In has a solution,
then the product rule for determinants implies
that 1 = |In | = |XA| = |X||A|.
Similarly if AX = In has a solution.
In either case one has |A| ≠ 0.
The rest follows from the paragraph above.
Singularity versus Invertibility
So A−1 exists if and only if |A| ≠ 0.
Definition
1. In case |A| = 0,
the matrix A is said to be singular;
2. In case |A| ≠ 0,
the matrix A is said to be non-singular or invertible.
Example and Application to Simultaneous Equations
Exercise
Verify that

A = ( 1    1 )    =⇒    A−1 = C := ( 1/2    1/2 )
    ( 1   −1 )                     ( 1/2   −1/2 )
by using direct multiplication to show that AC = CA = I2 .
Example
Suppose that a system of n simultaneous equations in n unknowns
is expressed in matrix notation as Ax = b.
Of course, A must be an n × n matrix.
Suppose A has an inverse A−1 .
Premultiplying both sides of the equation Ax = b by this inverse
gives A−1 Ax = A−1 b, which simplifies to Ix = A−1 b.
Hence the unique solution of the equation is x = A−1 b.
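The method x = A−1 b can be sketched in Python for the 2 × 2 matrix of the exercise above, using the explicit formula A−1 = (1/|A|) adj A and exact rational arithmetic. (The right-hand side b = (3, 1) is an invented example, not from the lecture.)

```python
from fractions import Fraction

def inverse_2x2(A):
    """A^{-1} = (1/|A|) adj A for a 2 x 2 matrix; requires |A| != 0."""
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("A is singular: |A| = 0, so no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 1], [1, -1]]
b = [3, 1]          # the system: x1 + x2 = 3, x1 - x2 = 1
Ainv = inverse_2x2(A)
x = [Ainv[0][0] * b[0] + Ainv[0][1] * b[1],
     Ainv[1][0] * b[0] + Ainv[1][1] * b[1]]
# x1 = 2, x2 = 1 is the unique solution
```

Note that Ainv reproduces the matrix C of the exercise: (1/2, 1/2; 1/2, −1/2).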
Cramer’s Rule: Statement
Notation
Given any m × n matrix A,
let [A−j , b] denote the new m × n matrix
in which column j has been replaced by the column vector b.
Evidently [A−j , aj ] = A.
Theorem
Provided that the n × n matrix A is invertible,
the simultaneous equation system Ax = b
has a unique solution x = A−1 b whose ith component
is given by the ratio of determinants xi = |[A−i , b]|/|A|.
This result is known as Cramer’s rule.
Cramer’s Rule: Proof
Proof.
Given the equation Ax = b, each cofactor |Cij | of the coefficient
matrix A is formed by dropping row i and column j of A.
It therefore equals the (i, j) cofactor of the matrix [A−j , b].
Expanding the determinant |[A−j , b]| by cofactors along column j
therefore gives

|[A−j , b]| = ∑_{i=1}^n bi |Cij | = ∑_{i=1}^n (adj A)ji bi

by definition of the adjugate matrix.
Hence the unique solution to the equation system has components

xj = (A−1 b)j = (1/|A|) ∑_{i=1}^n (adj A)ji bi = |[A−j , b]| / |A|

for j = 1, 2, . . . , n.
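Cramer's rule translates directly into code. This Python sketch (the names det3 and cramer are invented for illustration) replaces each column of a 3 × 3 matrix by b in turn and takes the ratio of determinants; the data A, b are an invented example.

```python
from fractions import Fraction

def det3(M):
    """3 x 3 determinant by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_j = |[A_{-j}, b]| / |A|."""
    d = det3(A)
    if d == 0:
        raise ValueError("A is singular; Cramer's rule does not apply")
    x = []
    for j in range(3):
        # [A_{-j}, b]: column j of A replaced by the vector b
        Aj = [[b[i] if c == j else A[i][c] for c in range(3)] for i in range(3)]
        x.append(Fraction(det3(Aj), d))
    return x

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
b = [1, 2, 3]
x = cramer(A, b)
# Check that the solution satisfies Ax = b
assert all(sum(A[i][j] * x[j] for j in range(3)) == b[i] for i in range(3))
```

For serious numerical work one would solve Ax = b by Gaussian elimination instead; Cramer's rule recomputes a determinant per unknown.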
Rule for Inverting Products
Theorem
Suppose that A and B are two invertible n × n matrices.
Then the inverse of the matrix product AB exists,
and is the reverse product B−1 A−1 of the inverses.
Proof.
Using the associative law for matrix multiplication repeatedly gives:
(B−1 A−1 )(AB) = B−1 (A−1 A)B = B−1 (I)B = B−1 (IB) = B−1 B = I
and
(AB)(B−1 A−1 ) = A(BB−1 )A−1 = A(I)A−1 = (AI)A−1 = AA−1 = I.
These equations confirm that X := B−1 A−1 is the unique matrix
satisfying the double equality (AB)X = X(AB) = I.
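The reversal rule (AB)−1 = B−1 A−1 is easy to check numerically. A Python sketch with exact rational arithmetic (the 2 × 2 matrices A, B are invented examples, and inv2/mul are illustrative helpers):

```python
from fractions import Fraction

def inv2(A):
    """2 x 2 inverse via the adjugate formula; requires |A| != 0."""
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def mul(X, Y):
    """Product of two 2 x 2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]    # |A| = -2, invertible
B = [[0, 1], [1, 1]]    # |B| = -1, invertible
# (AB)^{-1} should coincide with the reverse product B^{-1} A^{-1}
lhs = inv2(mul(A, B))
rhs = mul(inv2(B), inv2(A))
assert lhs == rhs
```

Computing `mul(inv2(A), inv2(B))` instead would generally give a different matrix: the order must be reversed.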
Rule for Inverting Chain Products
Exercise
Prove that, if A, B and C are three invertible n × n matrices,
then (ABC)−1 = C−1 B−1 A−1 .
Then use mathematical induction to extend this result
in order to find the inverse of the product A1 A2 · · · Ak
of any finite chain of invertible n × n matrices.
Matrices for Elementary Row Operations
Example
Consider the following two
out of the three possible kinds of elementary row operation:
1. of multiplying the r th row by α ∈ R,
represented by the matrix Sr (α);
2. of multiplying the qth row by α ∈ R,
then adding the result to row r ,
represented by the matrix Er +αq .
Exercise
Find the determinants and, when they exist, the inverses
of the matrices Sr (α) and Er +αq .
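As a hedged sketch of the answers to this exercise (using 0-based row indices and invented helper names): Sr (α) is diagonal, so its determinant is α, and it is inverted by Sr (1/α) when α ≠ 0; Er +αq is triangular with unit diagonal, so its determinant is 1, and it is undone by Er −αq.

```python
from fractions import Fraction

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def S(n, r, alpha):
    """S_r(alpha): multiply row r (0-based) by alpha."""
    M = identity(n)
    M[r][r] = alpha
    return M

def E(n, r, q, alpha):
    """E_{r+alpha q}: add alpha times row q to row r."""
    M = identity(n)
    M[r][q] = alpha
    return M

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# |S_r(alpha)| = alpha and |E_{r+alpha q}| = 1 (both matrices are triangular);
# the claimed inverses compose to the identity:
n = 3
assert mul(S(n, 1, 5), S(n, 1, Fraction(1, 5))) == identity(n)
assert mul(E(n, 2, 0, 4), E(n, 2, 0, -4)) == identity(n)
```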
Inverting Orthogonal Matrices
An n-dimensional square matrix Q is said to be orthogonal
just in case its columns (qj )nj=1 form an orthonormal set
— i.e., they must be pairwise orthogonal unit vectors (of length 1).
Theorem
A square matrix Q is orthogonal if and only if it satisfies Q> Q = I,
and so if and only if Q−1 = Q> .
Proof.
The elements of the matrix product Q> Q satisfy

(Q> Q)ij = ∑_{k=1}^n (Q>)ik qkj = ∑_{k=1}^n qki qkj = qi · qj
where qi (resp. qj ) denotes the ith (resp. jth) column vector of Q.
But the columns of Q are orthonormal iff qi · qj = δij
for all i, j = 1, 2, . . . , n, and so iff Q> Q = I.
Exercises on Orthogonal Matrices
Exercise
Show that if the matrix Q is orthogonal,
then so is Q> .
Use this result to show that a matrix is orthogonal
if and only if its row vectors also form an orthonormal set.
Exercise
Show that any permutation matrix is orthogonal.
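A sketch of the second exercise's claim in Python (the helper names are invented): the columns of a permutation matrix are distinct canonical unit vectors, hence orthonormal, so P> P = I.

```python
def permutation_matrix(pi):
    """P has (i, j) entry 1 exactly when pi maps j to i, so column j is e_{pi(j)}."""
    n = len(pi)
    return [[1 if pi[j] == i else 0 for j in range(n)] for i in range(n)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = permutation_matrix([2, 0, 1])    # the cycle 0 -> 2 -> 1 -> 0
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert mul(transpose(P), P) == I     # columns are orthonormal: P is orthogonal
assert mul(P, transpose(P)) == I     # and so are the rows
```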
Rotations in R2
Example
In R2 , consider the anti-clockwise rotation through an angle θ
of the unit circle S 1 = {(x1 , x2 ) ∈ R2 | x12 + x22 = 1}.
It maps:
1. the first unit vector (1, 0) of the canonical basis
to the column vector (cos θ, sin θ)> ;
2. the second unit vector (0, 1) of the canonical basis
to the column vector (− sin θ, cos θ)> .
So the rotation can be represented by the rotation matrix

Rθ := ( cos θ   − sin θ )
      ( sin θ     cos θ )

which has these vectors as its columns.
Rotations in R2 Are Orthogonal Matrices
Because sin(−θ) = − sin θ and cos(−θ) = cos θ,
the transpose of Rθ satisfies Rθ> = R−θ , and so is the clockwise
rotation through an angle θ of the unit circle S 1 .
Since clockwise and anti-clockwise rotations are inverse operations,
it is no surprise that Rθ> Rθ = I.
We verify this algebraically by using matrix multiplication:

Rθ> Rθ = (   cos θ   sin θ ) ( cos θ   − sin θ )   ( 1  0 )
         ( − sin θ   cos θ ) ( sin θ     cos θ ) = ( 0  1 ) = I

because cos² θ + sin² θ = 1, thus verifying orthogonality.
Similarly

Rθ Rθ> = ( cos θ   − sin θ ) (   cos θ   sin θ )   ( 1  0 )
         ( sin θ     cos θ ) ( − sin θ   cos θ ) = ( 0  1 ) = I
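The same verification can be done numerically in Python (a sketch; the angle 0.7 is an arbitrary test value, and equality holds only up to floating-point rounding):

```python
import math

def R(theta):
    """2 x 2 anti-clockwise rotation matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta = 0.7
P = mul(transpose(R(theta)), R(theta))
# P should be the identity up to floating-point rounding
assert all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
# R(theta)^T agrees with R(-theta), the clockwise rotation
assert all(abs(transpose(R(theta))[i][j] - R(-theta)[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```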
The Unit Sphere in R3
In R3 , the three unit vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1)
of the canonical basis represent respectively
1. one point (1, 0, 0) on the “equatorial great circle”,
whose latitude is 0◦ ,
and whose “longitude” is also 0◦ (the Greenwich Meridian);
2. a second point (0, 1, 0) on the “equatorial great circle”,
whose latitude is also 0◦ , and whose “longitude” is 90◦ East;
3. the “North pole” (0, 0, 1) at latitude 90◦ North,
whose longitude is undefined.
The unit sphere is the set
S 2 = {(x1 , x2 , x3 ) ∈ R3 | x12 + x22 + x32 = 1} = {x ∈ R3 | kxk2 = 1}
Spherical or Polar Coordinates in R3
x = r sin ϕ cos θ,   y = r sin ϕ sin θ,   z = r cos ϕ

r = √(x² + y² + z²),   θ = arctan(y/x),   ϕ = arctan(√(x² + y²)/z)

▶ ϕ is the polar angle (η = π/2 − ϕ is “latitude”);
▶ θ is the azimuthal angle (“longitude”).

[Figure: spherical coordinates (r, θ, ϕ) as often used in mathematics,
with radial distance r, azimuthal angle θ, and polar angle ϕ.
Dmcq ‑ Own work:
https://en.wikipedia.org/wiki/Spherical_coordinate_system#/media/File:3D_Spherical_2.svg]
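The conversion formulas above can be sketched in Python. Note two assumptions in this sketch: math.atan2 replaces arctan(y/x) so that all quadrants and x = 0 are handled correctly, and ϕ is computed as arccos(z/r), which agrees with arctan(√(x² + y²)/z) for z > 0; the test point is an invented example.

```python
import math

def to_cartesian(r, theta, phi):
    """Spherical (r, theta, phi) -> Cartesian, with phi the polar angle."""
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))

def to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)   # robust version of arctan(y/x)
    phi = math.acos(z / r)     # polar angle in [0, pi]; requires r > 0
    return r, theta, phi

# Round trip: spherical -> Cartesian -> spherical recovers the coordinates
r, theta, phi = 2.0, 0.5, 1.2
x, y, z = to_cartesian(r, theta, phi)
r2, theta2, phi2 = to_spherical(x, y, z)
assert max(abs(r - r2), abs(theta - theta2), abs(phi - phi2)) < 1e-12
```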
Orthogonal Matrices: Rotations in R3
Example
A rotation of the unit sphere S 2 ⊂ R3 is described here by two angles, denoted by θ and η.
▶ Representing a rotation through the angle θ about the z-axis:

  Rθ := ( cos θ   − sin θ   0 )
        ( sin θ     cos θ   0 )
        (   0         0     1 )

▶ Representing a rotation through the angle η about the y-axis:

  Sη := ( cos η   0   − sin η )
        (   0     1      0    )
        ( sin η   0     cos η )
Exercise
Calculate the two matrix products Rθ Sη and Sη Rθ .
Show that they are equal iff either θ = 0 or η = 0.
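The non-commutativity claimed in the exercise can be checked numerically. This Python sketch builds Rθ and Sη exactly as displayed above (the angles 0.7 and 0.3 are arbitrary test values) and compares the two products:

```python
import math

def Rz(theta):
    """Rotation through theta about the z-axis, as in the slide."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def Sy(eta):
    """Rotation through eta about the y-axis, with the sign pattern of the slide."""
    c, s = math.cos(eta), math.sin(eta)
    return [[c, 0, -s], [0, 1, 0], [s, 0, c]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def close(X, Y, tol=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < tol for i in range(3) for j in range(3))

# The two rotations commute when eta = 0 (Sy(0) = I) ...
assert close(mul(Rz(0.7), Sy(0.0)), mul(Sy(0.0), Rz(0.7)))
# ... but generally not otherwise
assert not close(mul(Rz(0.7), Sy(0.3)), mul(Sy(0.3), Rz(0.7)))
```

For instance, the (1, 3) entry of Rθ Sη is − cos θ sin η, whereas that of Sη Rθ is − sin η; these agree only when sin η = 0 or cos θ = 1.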