Willi-Hans Steeb
Yorick Hardy

PROBLEMS AND SOLUTIONS IN INTRODUCTORY AND ADVANCED MATRIX CALCULUS
— Second Edition —

World Scientific
Willi-Hans Steeb
Yorick Hardy
University of Johannesburg, South Africa &
University of South Africa, South Africa

PROBLEMS AND SOLUTIONS IN INTRODUCTORY AND ADVANCED MATRIX CALCULUS
— Second Edition —

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI • TOKYO
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
Library of Congress Cataloging-in-Publication Data
Names: Steeb, W.-H. | Hardy, Yorick, 1976–
Title: Problems and solutions in introductory and advanced matrix calculus.
Description: Second edition / by Willi-Hans Steeb (University of Johannesburg,
South Africa & University of South Africa, South Africa), Yorick Hardy
(University of Johannesburg, South Africa & University of South Africa, South Africa).
| New Jersey : World Scientific, 2016. | Includes bibliographical references and index.
Identifiers: LCCN 2016028706 | ISBN 9789813143784 (hardcover : alk. paper) |
ISBN 9789813143791 (pbk. : alk. paper)
Subjects: LCSH: Matrices--Problems, exercises, etc. | Calculus. | Mathematical physics.
Classification: LCC QA188 .S664 2016 | DDC 512.9/434--dc23
LC record available at https://lccn.loc.gov/2016028706
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
Copyright © 2017 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
Printed in Singapore
Preface
The purpose of this book is to supply a collection of problems in introductory and advanced matrix calculus together with their detailed solutions, which will prove to be valuable to undergraduate and graduate students as well as to research workers in these fields. Each chapter contains an introduction with the essential definitions and explanations needed to tackle the problems in the chapter. If necessary, other concepts are explained directly within the problems. Thus the material in the book is self-contained. The topics range in difficulty from elementary to advanced. Students can learn important principles and strategies required for problem solving. Lecturers will also find this text useful, either as a supplement or as a text, since important concepts and techniques are developed in the problems.

A large number of problems are related to applications. Applications include wavelets, linear integral equations, Kirchhoff's laws, global positioning systems, Floquet theory, octonions, random walks, entanglement, tensor decomposition, the hyperdeterminant, matrix-valued differential forms, the Kronecker product and images. A number of problems useful in quantum physics and graph theory are also provided. Advanced topics include groups and matrices, Lie groups and matrices, and Lie algebras and matrices. Exercises for matrix-valued differential forms are also included.

In this second edition new problems for braid groups, mutually unbiased bases, the vec operator, the spectral theorem, binary matrices, nonnormal matrices, wavelets, fractals, and matrices and integration are added. Each chapter also contains supplementary problems. Furthermore, a number of Maxima and SymbolicC++ programs are added for solving problems. Applications in mathematical and theoretical physics are emphasized.

The book can also be used as a text for linear and multilinear algebra or matrix theory. The material was tested in the first author's lectures given around the world.
Note to the Readers
The International School for Scientific Computing (ISSC) provides certificate
courses for this subject. Please contact the authors if you want to do this course
or other courses of the ISSC.
e-mail addresses of the first author:
steebwilli@gmail.com
steeb_wh@yahoo.com
e-mail address of the second author:
yorickhardy@gmail.com
Home page of the first author: http://issc.uj.ac.za
Contents

Preface v
Notation ix
1 Basic Operations 1
2 Linear Equations 47
3 Kronecker Product 71
4 Traces, Determinants and Hyperdeterminants 99
5 Eigenvalues and Eigenvectors 142
6 Spectral Theorem 205
7 Commutators and Anticommutators 217
8 Decomposition of Matrices 241
9 Functions of Matrices 260
10 Cayley-Hamilton Theorem 299
11 Hadamard Product 309
12 Norms and Scalar Products 318
13 vec Operator 340
14 Nonnormal Matrices 355
15 Binary Matrices 365
16 Star Product 371
17 Unitary Matrices 377
18 Groups, Lie Groups and Matrices 398
19 Lie Algebras and Matrices 439
20 Braid Group 466
21 Graphs and Matrices 486
22 Hilbert Spaces and Mutually Unbiased Bases 496
23 Linear Differential Equations 507
24 Differentiation and Matrices 520
25 Integration and Matrices 535
Bibliography 547
Index 551
Notation

:=  is defined as
∈  belongs to (a set)
∉  does not belong to (a set)
∩  intersection of sets
∪  union of sets
∅  empty set
T ⊂ S  subset T of set S
S ∩ T  the intersection of the sets S and T
S ∪ T  the union of the sets S and T
f(S)  image of set S under mapping f
f ∘ g  composition of two mappings, (f ∘ g)(x) = f(g(x))
N  set of natural numbers
N0  set of natural numbers including 0
Z  set of integers
Q  set of rational numbers
R  set of real numbers
R+  set of nonnegative real numbers
C  set of complex numbers
R^n  n-dimensional Euclidean space; space of column vectors with n real components
C^n  n-dimensional complex linear space; space of column vectors with n complex components
H  Hilbert space
S_n  symmetric group on a set of n symbols
Re(z)  real part of the complex number z
Im(z)  imaginary part of the complex number z
|z|  modulus of complex number z, |x + iy| = (x^2 + y^2)^(1/2), x, y ∈ R
x  column vector in C^n
x^T  transpose of x (row vector)
0  zero (column) vector
||·||  norm
x·y = x*y  scalar product (inner product) in C^n
x × y  vector product in R^3
A, B, C  m × n matrices
P  n × n permutation matrix
Π  n × n projection matrix
U  n × n unitary matrix
vec(A)  vectorization of matrix A
det(A)  determinant of a square matrix A
tr(A)  trace of a square matrix A
Pf(A)  Pfaffian of square matrix A
rank(A)  rank of matrix A
A^T  transpose of matrix A
Ā  conjugate of matrix A
A*  conjugate transpose of matrix A
A^{-1}  inverse of square matrix A (if it exists)
I_n  n × n unit matrix
I  unit operator
0_n  n × n zero matrix
AB  matrix product of m × n matrix A and n × p matrix B
A • B  Hadamard product (entry-wise product) of m × n matrices A and B
[A, B] := AB − BA  commutator for square matrices A and B
[A, B]+ := AB + BA  anticommutator for square matrices A and B
A ⊗ B  Kronecker product of matrices A and B
A ⊕ B  direct sum of matrices A and B
δ_jk  Kronecker delta with δ_jk = 1 for j = k and δ_jk = 0 for j ≠ k
λ  eigenvalue
ε  real parameter
t  time variable
Ĥ  Hamilton operator

The elementary matrices E_jk with j = 1, ..., m and k = 1, ..., n have a 1 at entry (j, k) and 0 otherwise.

The Pauli spin matrices are defined as

σ1 := ( 0 1 )    σ2 := ( 0 −i )    σ3 := ( 1  0 )
      ( 1 0 ),         ( i  0 ),         ( 0 −1 ).
Chapter 1

Basic Operations

Let F be a field, for example the set of real numbers R or the set of complex numbers C. Let m, n be two integers ≥ 1. An array A of numbers in F

A = ( a11 a12 a13 ... a1n )
    ( a21 a22 a23 ... a2n )
    ( ...                 )
    ( am1 am2 am3 ... amn )  =: (aij)

is called an m × n matrix with entry aij in the i-th row and j-th column. A row vector is a 1 × n matrix. A column vector is an n × 1 matrix. We have a zero matrix, in which aij = 0 for all i, j. I_n denotes the n × n identity matrix, with the diagonal elements equal to 1 and 0 otherwise.
Let A = (aij) and B = (bij) be two m × n matrices. We define A + B to be the m × n matrix whose entry in the i-th row and j-th column is aij + bij. Matrix multiplication is only defined between two matrices if the number of columns of the first matrix is the same as the number of rows of the second matrix. If A is an m × n matrix and B is an n × p matrix, then the matrix product AB is an m × p matrix defined by

(AB)ij = Σ_{r=1}^{n} a_{ir} b_{rj}

for each pair i and j, where (AB)ij denotes the (i, j)-th entry in AB. Let A = (aij) and B = (bij) be two m × n matrices with entries in some field. Then their Hadamard product is the entry-wise product of A and B, that is, the m × n matrix A • B whose (i, j)-th entry is aij bij.
Let V be a vector space and let B = { v1, v2, ..., vn } be a finite subset of V. The set B is a linearly independent set in V if

c1 v1 + c2 v2 + ... + cn vn = 0

only admits the trivial solution c1 = c2 = ... = cn = 0 for the scalars c1, c2, ..., cn. We define the span of B by

span(B) = { c1 v1 + c2 v2 + ... + cn vn : for scalars c1, c2, ..., cn }.

If span(B) = V, then B is said to span V (or B is a spanning set for V). If B is both linearly independent and a spanning set for V, then B is said to be a basis for V, and we write dim(V) = |B|. In this case, the dimension dim(V) of V is the number of elements |B| = n of B. A vector space which has no finite basis is infinite dimensional. Every basis for a finite dimensional vector space has the same number of elements.

The set of m × n matrices forms an mn-dimensional vector space. For example, the 2 × 2 matrices over C form a vector space over C with standard basis

( 1 0 )  ( 0 1 )  ( 0 0 )  ( 0 0 )
( 0 0 ), ( 0 0 ), ( 1 0 ), ( 0 1 ).

Another basis for the 2 × 2 matrices over C is given in terms of the Pauli spin matrices. However, this is not a basis for the 2 × 2 matrices over R. For the 2 × 2 matrices over R we have the same standard basis as for the 2 × 2 matrices over C. For the vector space C^2 an orthonormal basis frequently used is the Hadamard basis

(1/√2) ( 1 )    (1/√2) (  1 )
       ( 1 ),          ( −1 ).
Problem 1. Let e1, e2, e3 be the standard basis in R^3.
(i) Consider the normalized vectors

a = (1/√3)(e1 + e2 + e3),   b = (1/√3)(−e1 − e2 + e3),
c = (1/√3)(−e1 + e2 − e3),   d = (1/√3)(e1 − e2 − e3).

These vectors are the unit vectors giving the direction of the four bonds of an atom in the diamond lattice. Show that the four vectors are linearly dependent.
(ii) Find the scalar products a^T b, b^T c, c^T d, d^T a. Discuss.
Solution 1. (i) From c1 a + c2 b + c3 c + c4 d = 0 we find that c1 = c2 = c3 = c4 = 1 is a nonzero solution. Thus the vectors are linearly dependent.
(ii) We find a^T b = −1/3, b^T c = −1/3, c^T d = −1/3, d^T a = −1/3.
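The claims of Problem 1 can be checked numerically. The book's own programs use Maxima and SymbolicC++; the sketch below uses NumPy purely as a quick numerical check, with the four vectors taken from the problem statement.

```python
import numpy as np

# The four diamond-bond direction vectors from Problem 1.
e = np.eye(3)
a = (e[0] + e[1] + e[2]) / np.sqrt(3)
b = (-e[0] - e[1] + e[2]) / np.sqrt(3)
c = (-e[0] + e[1] - e[2]) / np.sqrt(3)
d = (e[0] - e[1] - e[2]) / np.sqrt(3)

# c1 = c2 = c3 = c4 = 1 solves c1*a + c2*b + c3*c + c4*d = 0,
# so the vectors are linearly dependent.
print(np.allclose(a + b + c + d, 0))                 # True
# Every pair of distinct bond directions has scalar product -1/3.
print(np.isclose(a @ b, -1/3), np.isclose(d @ a, -1/3))
```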
Problem 2. (i) Consider the normalized vector v in R^3 and the permutation matrix P, respectively

v = (1/√3) (  1 )        P = ( 0 1 0 )
           (  1 ),           ( 0 0 1 )
           ( −1 )            ( 1 0 0 ).

Are the three vectors v, Pv, P^2 v linearly independent?
(ii) Consider the 4 × 4 symmetric matrix A and the vector b in R^4

A = ( 0 1 0 0 )        b = ( 1 )
    ( 1 0 1 0 )            ( 1 )
    ( 0 1 0 1 )            ( 1 )
    ( 0 0 1 0 ),           ( 1 ).

Are the vectors b, Ab, A^2 b, A^3 b in R^4 linearly independent? Show that the matrix A is invertible by looking at the column vectors of the matrix A. Find the inverse of A.
Solution 2. (i) Yes: the three vectors

v = (1/√3) (  1 )    Pv = (1/√3) (  1 )    P^2 v = (1/√3) ( −1 )
           (  1 ),               ( −1 ),                  (  1 )
           ( −1 )                (  1 )                   (  1 )

are linearly independent, i.e. they form a basis in the vector space R^3.
(ii) We obtain the four vectors

b = ( 1 )   Ab = ( 1 )   A^2 b = ( 2 )   A^3 b = ( 3 )
    ( 1 ),       ( 2 ),          ( 3 ),          ( 5 )
    ( 1 )        ( 2 )           ( 3 )           ( 5 )
    ( 1 )        ( 1 )           ( 2 )           ( 3 ).

Thus the vectors b and Ab are linearly independent. However A^2 b = Ab + b and A^3 b = A^2 b + Ab. Thus b, Ab and A^3 b are linearly dependent. The column vectors of A are linearly independent. Thus the inverse of A exists and is given by

A^{-1} = (  0 1 0 −1 )
         (  1 0 0  0 )
         (  0 0 0  1 )
         ( −1 0 1  0 ).

Note that det(A) = 1 and thus det(A^{-1}) = 1.
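A short NumPy check of part (ii), assuming the matrix A and vector b as reconstructed above (the scanned display is damaged, so the matrices were recovered from the recursion A^2 b = Ab + b and det(A) = 1 stated in the solution):

```python
import numpy as np

# Reconstructed symmetric matrix A and b = (1,1,1,1)^T from Problem 2 (ii).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
b = np.ones(4)

# A^2 b = A b + b, hence b, Ab, A^3 b are linearly dependent.
print(np.allclose(A @ A @ b, A @ b + b))     # True
print(round(np.linalg.det(A)))               # 1
print(np.linalg.inv(A).round().astype(int))  # the inverse given in the solution
```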
Problem 3. (i) Consider the Hilbert space M2(R) of the 2 × 2 matrices over R. Show that the matrices

( 1 0 )  ( 1 1 )  ( 1 1 )  ( 1 1 )
( 0 0 ), ( 0 0 ), ( 1 0 ), ( 1 1 )

are linearly independent.
(ii) Use the Gram-Schmidt orthonormalization technique to find an orthonormal basis for M2(R).

Solution 3. (i) From

a ( 1 0 ) + b ( 1 1 ) + c ( 1 1 ) + d ( 1 1 ) = ( 0 0 )
  ( 0 0 )     ( 0 0 )     ( 1 0 )     ( 1 1 )   ( 0 0 )

we find that the only solution is a = b = c = d = 0.
(ii) We obtain the standard basis

( 1 0 )  ( 0 1 )  ( 0 0 )  ( 0 0 )
( 0 0 ), ( 0 0 ), ( 1 0 ), ( 0 1 ).
Problem 4. Consider the vector space of 2 × 2 matrices over R and the matrices
D'
* -(! !)■ * - ( ! ?)• * -(! 0)
(i) Are these four 2 x 2 matrices linearly independent?
(ii) An n x n matrix M is called normal if M M * = M *M . Which of the four
matrices Aj (j = 1 ,2,3,4 ) are normal matrices?
Solution 4. (i) No; the four matrices are linearly dependent, since one of them is a linear combination of the other three.
(ii) None of these matrices is normal.
Problem 5. Let σ1, σ2, σ3 be the Pauli spin matrices

σ1 = ( 0 1 )    σ2 = ( 0 −i )    σ3 = ( 1  0 )
     ( 1 0 ),        ( i  0 ),        ( 0 −1 )

and σ0 = I2. Consider the vector space of 2 × 2 matrices over C.
(i) Show that the 2 × 2 matrices

(1/√2) σ0,   (1/√2) σ1,   (1/√2) σ2,   (1/√2) σ3

are linearly independent.
(ii) Show that any 2 × 2 complex matrix has a unique representation of the form

a0 I2 + i a1 σ1 + i a2 σ2 + i a3 σ3

for some a0, a1, a2, a3 ∈ C.

Solution 5. (i) From the condition

c0 (1/√2) I2 + c1 (1/√2) σ1 + c2 (1/√2) σ2 + c3 (1/√2) σ3 = 0_2

we find the four equations c0 + c3 = 0, c1 − i c2 = 0, c1 + i c2 = 0, c0 − c3 = 0 with the only solution c0 = c1 = c2 = c3 = 0.
(ii) Since

a0 I2 + i a1 σ1 + i a2 σ2 + i a3 σ3 = ( a0 + i a3   i a1 + a2 )
                                      ( i a1 − a2   a0 − i a3 )

we obtain

a0 I2 + i a1 σ1 + i a2 σ2 + i a3 σ3 = ( α β )
                                      ( γ δ )

where α, β, γ, δ ∈ C. Thus

a0 = (α + δ)/2,   a1 = (β + γ)/(2i),   a2 = (β − γ)/2,   a3 = (α − δ)/(2i).
Problem 6. Consider the normalized vector v0 = (1 0 0)^T in R^3. Find three normalized vectors v1, v2, v3 such that

Σ_{j=0}^{3} v_j = 0,    v_j^T v_k = −1/3   for j ≠ k.
Solution 6. Owing to the form of v0 we have, due to the second condition, that v_{1,1} = v_{2,1} = v_{3,1} = −1/3. We obtain

v1 = ( −1/3  )    v2 = ( −1/3  )    v3 = ( −1/3  )
     ( 2√2/3 ),        ( −√2/3 ),        ( −√2/3 )
     (   0   )         (  √6/3 )         ( −√6/3 ).
Problem 7. (i) Find four normalized vectors v1, v2, v3, v4 in R^3 such that

v_j^T v_k = (4 δ_jk − 1)/3 = { 1 for j = k,  −1/3 for j ≠ k }.

(ii) Calculate the vector and the matrix, respectively,

Σ_{j=1}^{4} v_j,    Σ_{j=1}^{4} v_j v_j^T.
Solution 7. (i) Such a quartet of vectors consists of the vectors pointing from the center of a cube to nonadjacent corners. We may picture these four vectors as the normal vectors for the faces of the tetrahedron that is defined by the other four corners of the cube. Owing to the conditions the four vectors are normalized. Thus for example the vectors v0, v1, v2, v3 of the previous problem can be taken.
(ii) We have

Σ_{j=1}^{4} v_j = 0,    Σ_{j=1}^{4} v_j v_j^T = (4/3) I_3.
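The tetrahedral quartet from Problems 6 and 7 can be verified numerically. The sketch below stacks v0 = (1,0,0)^T together with the three vectors found in Problem 6 and checks the Gram matrix, the zero sum, and the tight-frame identity Σ v_j v_j^T = (4/3) I_3:

```python
import numpy as np

s2, s6 = np.sqrt(2), np.sqrt(6)
# Rows: v0 from Problem 6 and the three vectors constructed there.
V = np.array([[1.0,  0.0,    0.0],
              [-1/3,  2*s2/3,  0.0],
              [-1/3, -s2/3,   s6/3],
              [-1/3, -s2/3,  -s6/3]])

G = V @ V.T                                   # Gram matrix of the quartet
print(np.allclose(G, (4*np.eye(4) - 1)/3))    # True: 1 on diagonal, -1/3 off
print(np.allclose(V.sum(axis=0), 0))          # True: the four vectors sum to 0
print(np.allclose(V.T @ V, 4/3*np.eye(3)))    # True: sum of v_j v_j^T is (4/3) I_3
```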
Problem 8. Find the set of all four (column) vectors u1, u2, v1, v2 in R^2 such that the following conditions are satisfied: v1^T u2 = 0, v2^T u1 = 0, v1^T u1 = 1, v2^T u2 = 1.

Solution 8. With u1 = (u11, u12)^T, u2 = (u21, u22)^T, v1 = (v11, v12)^T, v2 = (v21, v22)^T we obtain the following four conditions

v11 u21 + v12 u22 = 0,  v21 u11 + v22 u12 = 0,  v11 u11 + v12 u12 = 1,  v21 u21 + v22 u22 = 1.

We have four equations with eight unknowns. We obtain three solutions. The first one is

u11 = r4/(r1 r4 − r2 r3),  u12 = −r3/(r1 r4 − r2 r3),  u21 = −r2/(r1 r4 − r2 r3),  u22 = r1/(r1 r4 − r2 r3),
v11 = r1,  v12 = r2,  v21 = r3,  v22 = r4

where r1, r2, r3, r4 are arbitrary real constants with r1 r4 − r2 r3 ≠ 0. The second solution is

u11 = r5,  u12 = 1/r6,  u21 = 1/r7,  u22 = 0,
v11 = 0,  v12 = r6,  v21 = r7,  v22 = −r5 r6 r7

where r5, r6, r7 are arbitrary real constants with r6 ≠ 0 and r7 ≠ 0. The third solution is

u11 = −r10/(r8 r9),  u12 = 1/r8,  u21 = 1/r9,  u22 = 0,
v11 = 0,  v12 = r8,  v21 = r9,  v22 = r10

where r8, r9, r10 are arbitrary real constants with r8 ≠ 0 and r9 ≠ 0.
Problem 9. Consider the 2 × 2 matrices

A = ( a11 a12 )    C = ( 0 1 )
    ( a12 a11 ),       ( 1 0 )

where a11, a12 ∈ R. Can the expression A^3 + 3AC(A + C) + C^3 be simplified for computation?

Solution 9. Yes. Since [A, C] = 0_2, i.e. AC = CA, we have A^3 + 3AC(A + C) + C^3 = (A + C)^3.
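A quick numerical check of Problem 9 (sample values for a11, a12; any real values work since A = a11 I_2 + a12 C always commutes with C):

```python
import numpy as np

a11, a12 = 2.0, -0.7
A = np.array([[a11, a12], [a12, a11]])
C = np.array([[0.0, 1.0], [1.0, 0.0]])

# [A, C] = 0, so the binomial expansion collapses the expression to (A + C)^3.
lhs = (np.linalg.matrix_power(A, 3) + 3*A @ C @ (A + C)
       + np.linalg.matrix_power(C, 3))
rhs = np.linalg.matrix_power(A + C, 3)
print(np.allclose(A @ C, C @ A), np.allclose(lhs, rhs))   # True True
```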
Problem 10. Let A, B be 2 × 2 matrices. Let AB = 0_2 and BA = 0_2. Can we conclude that at least one of the two matrices is the 2 × 2 zero matrix? Prove or disprove.

Solution 10. No. Consider for example the matrices

A = ( 1 0 )    B = ( 0 0 )
    ( 0 0 ),       ( 0 1 ).

Then AB = 0_2 and BA = 0_2 but both are not the zero matrix. Note that the rank of these matrices is 1.
Problem 11. Let A, C be n × n matrices over R. Let x, y, b, d be column vectors in R^n. Write the system of equations (A + iC)(x + iy) = b + id as a 2n × 2n set of real equations.

Solution 11. We have (A + iC)(x + iy) = Ax − Cy + i(Ay + Cx). Thus we can write

( A −C ) ( x )  =  ( b )
( C  A ) ( y )     ( d ).
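The equivalence in Problem 11 is easy to check on a random instance; the sketch below builds the 2n × 2n real block matrix with `numpy.block` and compares real and imaginary parts:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3
A, C = rng.standard_normal((n, n)), rng.standard_normal((n, n))
x, y = rng.standard_normal(n), rng.standard_normal(n)

w = (A + 1j*C) @ (x + 1j*y)               # complex left-hand side
M = np.block([[A, -C], [C, A]])           # real 2n x 2n block matrix
z = M @ np.concatenate([x, y])
b, d = z[:n], z[n:]
print(np.allclose(w.real, b), np.allclose(w.imag, d))   # True True
```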
Problem 12. Let A, B be n × n symmetric matrices over R, i.e. A^T = A, B^T = B. What is the condition on A, B such that AB is symmetric?

Solution 12. We have (AB)^T = B^T A^T = BA. Thus the condition is that AB = BA, i.e. [A, B] = 0_n.
Problem 13. (i) Compute the matrix product.
(ii) Write the quadratic polynomial 3x1^2 − 8x1x2 + 2x2^2 + 6x1x3 − 3x3^2 in matrix form.

Solution 13. (i) We obtain the polynomial 4x1^2 − 2x1x2 + 4x1x3 + 2x2x3.
(ii) We have

3x1^2 − 8x1x2 + 2x2^2 + 6x1x3 − 3x3^2 = (x1 x2 x3) (  3 −4  3 ) ( x1 )
                                                   ( −4  2  0 ) ( x2 )
                                                   (  3  0 −3 ) ( x3 ).
Problem 14. An n × n matrix M is called normal if M M* = M* M. Let A, B be normal n × n matrices. Assume that AB* = B*A and BA* = A*B. Show that their sum A + B is normal. Show that their product AB is normal.

Solution 14. We have, using the normality of A and B and the two assumptions,

(A + B)*(A + B) = A*A + A*B + B*A + B*B = AA* + BA* + AB* + BB* = (A + B)(A + B)*

and

(AB)*(AB) = B*A*AB = B*AA*B = (AB*)(BA*) = A(B*B)A* = A(BB*)A* = (AB)(AB)*.
Problem 15. Let A be an n × n normal matrix. Show that ker(A) = ker(A*), where ker denotes the kernel.

Solution 15. If A is normal and v ∈ C^n, then

(Av)*(Av) = v*A*Av = v*AA*v = (A*v)*(A*v).

Thus Av = 0 if and only if A*v = 0.
Problem 16. (i) Let A be an n × n hermitian matrix. Show that A^m is a hermitian matrix for all m ∈ N.
(ii) Let B be an n × n hermitian matrix. Is iB skew-hermitian?

Solution 16. (i) We have (A^m)* = (A*)^m = A^m.
(ii) Yes. Since i* = −i we have (iB)* = −iB* = −iB. Thus iB is skew-hermitian. Can any skew-hermitian matrix be written in the form iB with B hermitian?
Problem 17. (i) Let A be an n × n matrix with A^2 = 0_n. Is the matrix I_n + A invertible?
(ii) Let B be an n × n matrix with B^3 = 0_n. Show that I_n + B has an inverse.

Solution 17. (i) We have (I_n + A)(I_n − A) = I_n + A − A − A^2 = I_n. Thus the matrix I_n + A is invertible and the inverse is given by I_n − A.
(ii) We have

(I_n + B)(I_n − B + B^2) = I_n − B + B^2 + B − B^2 + B^3 = I_n.

Thus I_n − B + B^2 is the inverse of I_n + B.
Problem 18. Let A be an n × n matrix over R. Assume that A^2 = 0_n. Find the inverse of I_n + iA.

Solution 18. We have (I_n + iA)(I_n − iA) = I_n + A^2 = I_n. Thus I_n − iA is the inverse of I_n + iA.
Problem 19. Let A, B be n × n matrices and c a constant. Assume that the inverses of (A − c I_n) and (A + B − c I_n) exist. Show that

(A − c I_n)^{-1} B (A + B − c I_n)^{-1} = (A − c I_n)^{-1} − (A + B − c I_n)^{-1}.

Solution 19. Multiplying the identity from the right with (A + B − c I_n) yields

(A − c I_n)^{-1} B = (A − c I_n)^{-1} (A + B − c I_n) − I_n.

Multiplying this identity from the left with (A − c I_n) yields B = B. Reversing this process provides the identity.
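The identity of Problem 19 can be sanity-checked numerically on random matrices (for randomly chosen A, B both inverses exist generically; c = 0.5 and n = 4 are arbitrary sample choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 4, 0.5
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
inv, I = np.linalg.inv, np.eye(n)

lhs = inv(A - c*I) @ B @ inv(A + B - c*I)
rhs = inv(A - c*I) - inv(A + B - c*I)
print(np.allclose(lhs, rhs))   # True
```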
Problem 20. (i) A 3 × 3 matrix M over R is orthogonal if and only if the columns of M form an orthonormal basis in R^3. Show that the matrix

M = ( √3/3    0    −√6/3 )
    ( √3/3   √2/2   √6/6 )
    ( √3/3  −√2/2   √6/6 )

is orthogonal.
(ii) Represent the nonnormal 3 × 3 matrix (relative to the natural basis)

A = (  1 0  1 )
    (  0 2  0 )
    ( −1 0 −1 )

relative to the orthonormal basis in R^3

(1/√2) ( 1 )    ( 0 )    (1/√2) (  1 )
       ( 0 ),   ( 1 ),          (  0 )
       ( 1 )    ( 0 )           ( −1 ).

Solution 20. (i) The three normalized column vectors are

v1 = ( √3/3 )    v2 = (   0   )    v3 = ( −√6/3 )
     ( √3/3 ),        (  √2/2 ),        (  √6/6 )
     ( √3/3 )         ( −√2/2 )         (  √6/6 ).

The scalar products are v1^T v2 = 0, v1^T v3 = 0, v2^T v3 = 0.
(ii) We form the orthogonal matrix

O = ( 1/√2  0   1/√2 )
    (  0    1    0   )
    ( 1/√2  0  −1/√2 )

from the orthonormal basis. Then O = O^{-1} = O^T and

O A O^{-1} = ( 0 0 0 )
             ( 0 2 0 )
             ( 2 0 0 ).

Note that A and O A O^{-1} are nonnormal and have the eigenvalues 0 (2 times) and 2.
Problem 21. Consider the rotation matrix

R(θ) = ( cos(θ)  −sin(θ) )
       ( sin(θ)   cos(θ) ).

Let n be a positive integer. Calculate R^n(θ).
Solution 21. Since

( cos(α)  −sin(α) ) ( cos(β)  −sin(β) )  =  ( cos(α+β)  −sin(α+β) )
( sin(α)   cos(α) ) ( sin(β)   cos(β) )     ( sin(α+β)   cos(α+β) )

we obtain

R^n(θ) = ( cos(nθ)  −sin(nθ) )
         ( sin(nθ)   cos(nθ) ).
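A one-line numerical check of the result of Problem 21, with arbitrary sample values for θ and n:

```python
import numpy as np

def R(theta):
    """2 x 2 rotation matrix through angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta, n = 0.3, 5
# R(theta)^n equals R(n*theta), the rotation through n times the angle.
print(np.allclose(np.linalg.matrix_power(R(theta), n), R(n*theta)))   # True
```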
Problem 22.
(i) Consider the 2 × 2 matrix

A = ( 1 α )
    ( 1 β )

where α, β ∈ R. Find the condition on α, β such that the inverse matrix exists. Find the inverse in this case.
(ii) Let t ≠ 0 and s ∈ R. Find the inverse of the matrix M.
Solution 22. (i) We have det(A) = β − α. Thus the inverse exists if α ≠ β. In this case the inverse is given by

A^{-1} = 1/(β − α) (  β  −α )
                   ( −1   1 ).
(ii) We find
- (V -?*)■
Problem 23. Let

A = (  0 1  0  0 )        B = (  0  0 1 0 )
    ( −1 0  0  0 )            (  0  0 0 1 )
    (  0 0  0  1 )            ( −1  0 0 0 )
    (  0 0 −1  0 ),           (  0 −1 0 0 ).

Can one find a 4 × 4 permutation matrix P such that A = P B P^T?
Solution 23. We find the permutation matrix

P = P^T = P^{-1} = ( 1 0 0 0 )
                   ( 0 0 1 0 )
                   ( 0 1 0 0 )
                   ( 0 0 0 1 ).
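A NumPy check of Problem 23, assuming the matrices as reconstructed above (the scan is damaged; A and B were recovered as block forms of the 2 × 2 matrix (0 1; −1 0), and P swaps the second and third coordinates):

```python
import numpy as np

A = np.array([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
B = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]])
P = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

print(np.array_equal(P @ P.T, np.eye(4, dtype=int)))  # True: P is a permutation matrix
print(np.array_equal(A, P @ B @ P.T))                 # True: A = P B P^T
```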
Problem 24. (i) Consider the 3 × 3 permutation matrix

P = ( 0 1 0 )
    ( 0 0 1 )
    ( 1 0 0 ).

Find all 3 × 3 matrices A such that P A P^T = A.
(ii) Consider the 4 × 4 permutation matrix

P = ( 0 1 0 0 )
    ( 0 0 1 0 )
    ( 0 0 0 1 )
    ( 1 0 0 0 ).

Find all 4 × 4 matrices A such that P A P^T = A.
Solution 24. (i) From the 9 conditions of P A P^T = A we obtain

a11 = a22 = a33,   a12 = a23 = a31,   a13 = a21 = a32.

Thus

A = ( a11 a12 a13 )
    ( a13 a11 a12 )
    ( a12 a13 a11 ).

(ii) From the 16 conditions of P A P^T = A we find a11 = a22 = a33 = a44 and

a12 = a23 = a34 = a41,   a13 = a24 = a31 = a42,   a14 = a21 = a32 = a43.

Thus

A = ( a11 a12 a13 a14 )
    ( a14 a11 a12 a13 )
    ( a13 a14 a11 a12 )
    ( a12 a13 a14 a11 ).
Problem 25. (i) Let π be a permutation on { 1, 2, ..., n }. The matrix P_π for which p_{i*} = e_{π(i)*} is called the permutation matrix associated with π, where p_{i*} is the i-th row of P_π and e_{i*} is the i-th row of the identity matrix I_n. Let n = 4 and π = (3 2 4 1), i.e. π(1) = 3, π(2) = 2, π(3) = 4, π(4) = 1. Find P_π.
(ii) Find the corresponding permutation matrix for the permutation

( 1 2 3 4 )
( 3 4 1 2 ).
Solution 25. (i) P_π is obtained from the n × n identity matrix I_n by applying π to the rows of I_n. We find

P_π = ( 0 0 1 0 )
      ( 0 1 0 0 )
      ( 0 0 0 1 )
      ( 1 0 0 0 )

and p_{1*} = e_{3*} = e_{π(1)*}, p_{2*} = e_{2*} = e_{π(2)*}, p_{3*} = e_{4*} = e_{π(3)*}, p_{4*} = e_{1*} = e_{π(4)*}.
(ii) We have two options. For 1 ↦ 3, 2 ↦ 4, 3 ↦ 1, 4 ↦ 2 the permutation matrix P is given by

P = ( 0 0 1 0 )
    ( 0 0 0 1 )
    ( 1 0 0 0 )
    ( 0 1 0 0 ).

For the inverse reading the permutation matrix Q is given by Q = P, since this permutation is an involution.
Problem 26. (i) Find an invertible 2n × 2n matrix T such that

T^{-1} (  0_n  I_n ) T  =  ( 0_n  −I_n )
       ( −I_n  0_n )       ( I_n   0_n ).

(ii) Show that the 2n × 2n matrix R is invertible. Find the inverse.

Solution 26. (i) We find

T = T^{-1} = ( 0_n  I_n )
             ( I_n  0_n ).

(ii) Since R^2 = I_{2n} we have R^{-1} = R.
Problem 27. Let M be a 2n × 2n matrix with n ≥ 1. Then M can be written in block form

M = ( A B )
    ( C D )

where A, B, C, D are n × n matrices. Assume that M^{-1} exists and that the n × n matrix D is also nonsingular. Find M^{-1} using this condition.

Solution 27. We have

M M^{-1} = ( A B ) ( E F )  =  ( I_n  0_n )
           ( C D ) ( G H )     ( 0_n  I_n ).

It follows that

A E + B G = I_n,   A F + B H = 0_n,   C E + D G = 0_n,   C F + D H = I_n.

Since D is nonsingular we obtain from the last two equations that

G = −D^{-1} C E,   H = D^{-1} − D^{-1} C F.

It follows that (A − B D^{-1} C) E = I_n. Thus E and A − B D^{-1} C are invertible and we obtain E = (A − B D^{-1} C)^{-1}. For F we find

F = −E B D^{-1} = −(A − B D^{-1} C)^{-1} B D^{-1}.
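The block-inverse formulas of Problem 27 can be checked against a direct numerical inverse on a random instance (random blocks are generically nonsingular):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C, D = rng.standard_normal((n, n)), rng.standard_normal((n, n))
inv = np.linalg.inv

# Blocks of M^{-1} via the Schur complement of D, as derived in the solution.
E = inv(A - B @ inv(D) @ C)
F = -E @ B @ inv(D)
G = -inv(D) @ C @ E
H = inv(D) - inv(D) @ C @ F

Minv = np.block([[E, F], [G, H]])
M = np.block([[A, B], [C, D]])
print(np.allclose(Minv, inv(M)))   # True
```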
Problem 28. (i) Let A be an m × n matrix with m ≥ n. Assume that A has rank n. Show that there exists an m × n matrix B such that the n × n matrix B*A is nonsingular. The matrix B can be chosen such that B*A = I_n.
(ii) An n^2 × n matrix J is called a selection matrix such that J^T is the n × n^2 matrix

[ E_{11} E_{22} ... E_{nn} ]

where E_{ii} is the n × n matrix of zeros except for a 1 in the (i, i)-th position. Find J for n = 2 and calculate J^T J. Calculate J^T J for arbitrary n.

Solution 28. (i) If m = n, then A is nonsingular and we choose B = (A*)^{-1}. If n < m, then there exists an m × (m − n) matrix C such that the m × m matrix Q = [A C] is nonsingular. If we write (Q*)^{-1} as [B D], where B is m × n and D is m × (m − n), then B*A = I_n, B*C = 0_{n×(m−n)} and D*A = 0_{(m−n)×n}.
(ii) We have

J^T = ( 1 0 0 0 )        J = ( 1 0 )
      ( 0 0 0 1 )            ( 0 0 )
                             ( 0 0 )
                             ( 0 1 ).

Therefore J^T J = I_2. For the general case we find J^T J = I_n.
Problem 29. Let A be an arbitrary n × n matrix over R. Can we conclude that A^2 is positive semidefinite?

Solution 29. No, we cannot conclude in general that A^2 is positive semidefinite. For example

A = (  0 1 )    A^2 = ( −1  0 )
    ( −1 0 ),         (  0 −1 ).
Problem 30. An n × n matrix Π is a projection matrix if Π* = Π, Π^2 = Π.
(i) Let Π1 and Π2 be projection matrices. Is Π1 + Π2 a projection matrix?
(ii) Let Π1 and Π2 be projection matrices. Is Π1 Π2 a projection matrix?
(iii) Let Π be a projection matrix. Is I_n − Π a projection matrix? Calculate Π(I_n − Π).
(iv) Is

(1/3) ( 1 1 1 )
      ( 1 1 1 )
      ( 1 1 1 )

a projection matrix?
(v) Let ε ∈ [0, 1]. Show that the 2 × 2 matrix

Π(ε) = (      ε       √(ε − ε^2) )
       ( √(ε − ε^2)     1 − ε    )

is a projection matrix. What are the eigenvalues of Π(ε)? Be clever.

Solution 30. (i) Obviously (Π1 + Π2)* = Π1* + Π2* = Π1 + Π2. We have

(Π1 + Π2)^2 = Π1^2 + Π1Π2 + Π2Π1 + Π2^2 = Π1 + Π1Π2 + Π2Π1 + Π2.

Thus Π1 + Π2 is a projection matrix only if Π1Π2 = 0_n, where we used that from Π1Π2 = 0_n we can conclude that Π2Π1 = 0_n. From Π1Π2 = 0_n it follows that (Π1Π2)* = Π2*Π1* = Π2Π1 = 0_n.
(ii) We have (Π1Π2)* = Π2*Π1* = Π2Π1 and (Π1Π2)^2 = Π1Π2Π1Π2. Thus we see that Π1Π2 is a projection matrix if and only if Π1Π2 = Π2Π1.
(iii) We have (I_n − Π)* = I_n − Π* = I_n − Π and (I_n − Π)^2 = I_n − Π. Thus I_n − Π is a projection matrix. We have

Π(I_n − Π) = Π − Π^2 = Π − Π = 0_n.

(iv) We find Π* = Π and Π^2 = Π. Thus Π is a projection matrix.
(v) The matrix is symmetric over R. We have Π^2(ε) = Π(ε), Π^T(ε) = Π(ε). Thus Π(ε) is a projection matrix. The eigenvalues are 1 and 0.
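A numerical check of part (v) for a sample value of ε (any ε in [0, 1] works; since Π(ε) is a rank-one projection, its eigenvalues are 0 and 1):

```python
import numpy as np

eps = 0.3
r = np.sqrt(eps - eps**2)
Pi = np.array([[eps, r], [r, 1 - eps]])

print(np.allclose(Pi @ Pi, Pi))                 # True: idempotent
print(np.allclose(Pi.T, Pi))                    # True: symmetric
print(np.allclose(np.linalg.eigvalsh(Pi), [0, 1]))  # True: eigenvalues 0 and 1
```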
Problem 31. Let m ≥ 1 and N ≥ 2. Assume that N ≥ m. Let X be an N × m matrix over R such that X* X = I_m.
(i) We define Π := X X*. Calculate Π^2, Π* and tr(Π).
(ii) Give an example for such a matrix X, where m = 1 and N = 2.

Solution 31. (i) We have

Π^2 = (X X*)(X X*) = X (X* X) X* = X I_m X* = X X* = Π

and Π* = (X X*)* = X X* = Π. Thus Π is an N × N projection matrix. We have tr(Π) = tr(X X*) = tr(X* X) = m.
(ii) An example is

X = (1/√2) ( 1 )   ⇒   X X* = (1/2) ( 1 1 )
           ( 1 )                    ( 1 1 ).

Any normalized vector in R^2 can be used.
Problem 32. (i) A square matrix A is called idempotent if A^2 = A. Find all 2 × 2 matrices A over the real numbers which are idempotent and with a_{ij} ≠ 0 for i, j = 1, 2.
(ii) Let A, B be n × n idempotent matrices. Show that A + B is idempotent if and only if AB = BA = 0_n.

Solution 32. (i) From A^2 = A we obtain

a11^2 + a12 a21 = a11,  a12(a11 + a22) = a12,  a21(a11 + a22) = a21,  a12 a21 + a22^2 = a22.

Since a12 ≠ 0 we obtain a11 + a22 = 1 and a21 = (a11 − a11^2)/a12. Thus the matrix is

A = (        a11           a12   )
    ( (a11 − a11^2)/a12  1 − a11 )

with a11 arbitrary and a12 arbitrary and nonzero.
(ii) We have

(A + B)^2 = A^2 + B^2 + AB + BA = A + B + AB + BA.

Thus (A + B)^2 = A + B if and only if AB + BA = 0_n. Now suppose that AB + BA = 0_n. We have

AB + ABA = A^2 B + ABA = A(AB + BA) = 0_n,
ABA + BA = ABA + BA^2 = (AB + BA)A = 0_n.

This implies AB − BA = AB + ABA − (ABA + BA) = 0_n. Thus AB = BA and

AB = (1/2)(AB + BA) = 0_n.
Problem 33. Let A, B be n × n matrices over C. Assume that A + B is invertible. Show that

(A + B)^{-1} A = I_n − (A + B)^{-1} B,    A (A + B)^{-1} = I_n − B (A + B)^{-1}.

Solution 33. Since A + B is invertible we have

(A + B)^{-1} A + (A + B)^{-1} B = (A + B)^{-1} (A + B) = I_n.

Consequently (A + B)^{-1} A = I_n − (A + B)^{-1} B. Analogously

A (A + B)^{-1} + B (A + B)^{-1} = (A + B)(A + B)^{-1} = I_n.

Thus A (A + B)^{-1} = I_n − B (A + B)^{-1}.
Problem 34. (i) Find 2 × 2 matrices A over C such that A^2 = −I_2 and A* = −A.
(ii) Find 2 × 2 matrices C, D such that CD = 0_2 and DC ≠ 0_2.

Solution 34. (i) Solutions are for example

A = (  0 1 )    and    A = ( i  0 )
    ( −1 0 )               ( 0 −i ).

Extend to 3 × 3 matrices.
(ii) An example is

C = ( 0 1 )    D = ( 1 0 )
    ( 0 0 ),       ( 0 0 )

with CD = 0_2 and DC = C ≠ 0_2.
Problem 35. Find the 2 × 2 matrices F and F′ from the two equations. Find the anticommutator of F and F′, i.e. [F, F′]+ = F F′ + F′ F.

Solution 35. Adding and subtracting the two equations yields F and F′. We obtain [F, F′]+ = 0_2.
Problem 36. (i) Find all 2 × 2 matrices X such that

X ( 0 1 ) = ( 0 1 ) X.
  ( 1 0 )   ( 1 0 )

(ii) Impose the additional conditions such that tr(X) = 0 and det(X) = +1.

Solution 36. (i) We obtain x11 = x22 and x12 = x21. Thus

X = ( x11 x12 )
    ( x12 x11 )

with x11, x12 arbitrary.
(ii) The condition tr(X) = 0 yields x11 = 0. The condition det(X) = +1 then yields −x12^2 = 1, i.e. x12 = ±i.
Problem 37. Let v be a normalized (column) vector in C^n. Consider the n × n matrix

A = v v* − (1/2) I_n.

(i) Find A* and A A*.
(ii) Is the matrix A invertible?

Solution 37. (i) Since (v v*)* = v v* we have A = A*, i.e. the matrix is hermitian. Since v v* v v* = v v* we obtain

A A* = (1/4) I_n.

(ii) Yes. We find A^{-1} = 4 v v* − 2 I_n.
Problem 38. Find all 2 × 2 matrices

A = ( a11 a12 )
    (  0  a22 )

over R with a12 ≠ 0 such that A^2 = I_2.

Solution 38. From A^2 = I_2 we obtain the three conditions

a11^2 = 1,   a22^2 = 1,   a11 a12 + a12 a22 = 0

and the two possible solutions

A = ( 1  a12 )    A = ( −1  a12 )
    ( 0  −1  ),       (  0   1  ),   a12 arbitrary.
Problem 39. Let z = x + iy with x, y ∈ R and thus z̄ = x − iy. Consider the map

x + iy ↦ ( x −y ) =: A(x, y).
         ( y  x )

(i) Calculate z^2 and A^2(x, y). Discuss.
(ii) Find the eigenvalues and normalized eigenvectors of A(x, y).
(iii) Calculate A(x, y) ⊗ A(x, y) and find the eigenvalues and normalized eigenvectors.

Solution 39. (i) We obtain z^2 = x^2 − y^2 + 2ixy and

A^2 = ( x^2 − y^2    −2xy     )
      (   2xy      x^2 − y^2  ).

Thus A^2(x, y) = A(x^2 − y^2, 2xy), i.e. squaring the matrix corresponds to squaring the complex number.
(ii) The eigenvalues depending on x, y are λ1 = z = x + iy, λ2 = z̄ = x − iy with the corresponding normalized eigenvectors

(1/√2) (  1 )    (1/√2) ( 1 )
       ( −i ),          ( i )

which are independent of x and y.
(iii) We have

A(x,y) ⊗ A(x,y) = ( x^2   −xy   −xy    y^2 )
                  ( xy    x^2   −y^2  −xy  )
                  ( xy   −y^2    x^2  −xy  )
                  ( y^2   xy     xy    x^2 )

with the eigenvalues z^2, z̄^2, z z̄, z̄ z. Note that z z̄ = z̄ z.
Problem 40. Let A ∈ C^{n×m} with n > m and rank(A) = m.
(i) Show that the m × m matrix A*A is invertible.
(ii) We set Π = A (A*A)^{-1} A*. Show that Π is a projection matrix, i.e. Π^2 = Π and Π = Π*.

Solution 40. (i) To show that A*A is invertible it suffices to show that N(A*A) = {0}. From A*A v = 0 we obtain v* A*A v = 0. Thus (Av)*(Av) = 0 and therefore Av = 0. Since A has rank m, v = 0.
(ii) We have

Π^2 = (A (A*A)^{-1} A*)(A (A*A)^{-1} A*) = A (A*A)^{-1} (A*A)(A*A)^{-1} A* = A (A*A)^{-1} A* = Π

and

Π* = A ((A*A)*)^{-1} A* = A (A*A)^{-1} A* = Π.
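A numerical check of Problem 40 on a random complex matrix of full column rank (random matrices have full rank with probability one):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 3
A = rng.standard_normal((n, m)) + 1j*rng.standard_normal((n, m))

# Pi = A (A* A)^{-1} A* is the orthogonal projection onto the column space of A.
Pi = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T
print(np.allclose(Pi @ Pi, Pi), np.allclose(Pi.conj().T, Pi))   # True True
```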
Problem 41. Let A be an n × n hermitian matrix and Π be an n × n projection matrix. Then Π A Π is again a hermitian matrix. Is this still true if A is a normal matrix, i.e. A A* = A* A?

Solution 41. This is not true in general. Consider

A = ( 0 0 1 )        Π = ( 0 0 0 )
    ( 1 0 0 ),           ( 0 1 0 )
    ( 0 1 0 )            ( 0 0 1 ).

Then A*A = AA* and

Π A Π = ( 0 0 0 )
        ( 0 0 0 )
        ( 0 1 0 ).

Thus (ΠAΠ)(ΠAΠ)* ≠ (ΠAΠ)*(ΠAΠ).
Problem 42. An n × n matrix C of the form

C = ( c0       c_{n−1}  ...  c2   c1 )
    ( c1       c0       ...  c3   c2 )
    ( ...                            )
    ( c_{n−1}  c_{n−2}  ...  c1   c0 )

is called a circulant matrix. Show that the set of all n × n circulant matrices forms an n-dimensional vector space.

Solution 42. A circulant matrix is fully specified by the n-dimensional vector (c0, c1, ..., c_{n−1})^T which appears in the first column of the matrix C. The other columns are cyclic rotations of it. The last row is in reverse order. The sum of two n × n circulant matrices is again a circulant matrix and multiplying a circulant matrix by a constant again gives a circulant matrix. Thus the n × n circulant matrices form an n-dimensional vector space.
Problem 43. Two n × n matrices A, B are called similar if there exists an
invertible n × n matrix S such that A = SBS^(-1). If A is similar to B, then B is
also similar to A, since B = S^(-1)AS.
(i) Show that the two 2 × 2 matrices
are similar.
(ii) Consider the two 2 × 2 invertible matrices
Are the matrices similar?
(iii) Consider the two 2 × 2 invertible matrices
Are the matrices similar?
20
Problems and Solutions
Solution 43. (i) We have

S = ( s11  s12 )        S^(-1) = (1/det(S)) (  s22  -s12 )
    ( s21  s22 ),                           ( -s21   s11 ).

Then SBS^(-1) = A provides the three conditions

s12 s22 = 0,     s22² = 0,     s12² = -det(S).

Thus s22 = 0 and we arrive at s12² = s12 s21. Since s12 ≠ 0 we have s12 = s21.
Note that s11 is arbitrary. A special case is s11 = 0, s12 = 1.
(ii) From A = S^(-1)BS we obtain SA = BS. It follows that s12 = 0 and s22 = 0.
Thus S is not invertible and therefore A and B are not similar.
(iii) The matrices C and D are similar. We find s11 = s21, s12 = -s22. Since S
must be invertible, all four matrix elements are nonzero.
Problem 44. An n × n matrix is called nilpotent if some power of it is the
zero matrix, i.e. there is a positive integer p such that A^p = 0_n.
(i) Show that every nonzero nilpotent matrix is nondiagonalizable.
(ii) Consider the 4 × 4 matrix

N = ( 0 0 0 0 )
    ( 1 0 0 0 )
    ( 0 1 0 0 )
    ( 0 0 1 0 ).

Show that the matrix is nilpotent.
(iii) Is the matrix product of two n × n nilpotent matrices nilpotent?
Solution 44. (i) Assume that there is an invertible matrix S such that
SAS^(-1) = D, where D is a diagonal matrix. Then

0_n = S A^p S^(-1) = (SAS^(-1))(SAS^(-1)) ... (SAS^(-1)) = D^p.

Thus D^p = 0_n and therefore D must be the zero matrix. Consequently A is the
zero matrix.
(ii) We find that N⁴ is the 4 × 4 zero matrix. Thus the matrix N is nilpotent.
(iii) This is not true in general. Consider

A = ( 0 1 )      B = ( 0 0 )      AB = ( 1 0 )
    ( 0 0 ),         ( 1 0 ),          ( 0 0 ).

Both A and B are nilpotent, but (AB)^p = AB ≠ 0₂ for every positive integer p.
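Both parts can be checked numerically; the 2 × 2 pair below is the standard counterexample for (iii):

```python
import numpy as np

# The 4 x 4 lower shift matrix N is nilpotent: N^4 = 0.
N = np.diag([1, 1, 1], k=-1)      # ones on the first subdiagonal
print(np.array_equal(np.linalg.matrix_power(N, 4), np.zeros((4, 4))))

# Product of two nilpotent matrices need not be nilpotent:
A = np.array([[0, 1], [0, 0]])    # A^2 = 0
B = np.array([[0, 0], [1, 0]])    # B^2 = 0
print((A @ B).tolist())           # idempotent, hence not nilpotent
```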
Problem 45. A matrix A for which A^p = 0_n, where p is a positive integer,
is called nilpotent. If p is the least positive integer for which A^p = 0_n then A is
said to be nilpotent of index p. Find all 2 × 2 matrices A over the real numbers
which are nilpotent with p = 2, i.e. A² = 0₂.
Solution 45. From A² = 0₂ we obtain the four equations

a11² + a12 a21 = 0,  a12(a11 + a22) = 0,  a21(a11 + a22) = 0,  a12 a21 + a22² = 0.

Thus we have to consider the cases a11 + a22 ≠ 0 and a11 + a22 = 0. If a11 + a22 ≠
0, then a12 = a21 = 0 and therefore a11 = a22 = 0. Thus we have the 2 × 2 zero
matrix. If a11 + a22 = 0 we have a11 = -a22 and a12 a21 = -a11² = -a22². For
a12 ≠ 0 we find the solution

A = (      a11          a12  )
    ( -a11²/a12        -a11  ).
Problem 46. Let a, b be vectors in R³. The vector product a × b is defined as

a × b := ( a2 b3 - a3 b2 )
         ( a3 b1 - a1 b3 )
         ( a1 b2 - a2 b1 ).

(i) Show that we can find a 3 × 3 matrix S(a) such that a × b = S(a)b.
(ii) Express the Jacobi identity

a × (b × c) + c × (a × b) + b × (c × a) = 0

using the matrices S(a), S(b) and S(c).
Solution 46. (i) The matrix S(a) is the skew-symmetric matrix

S(a) = (  0   -a3   a2 )
       (  a3   0   -a1 )
       ( -a2   a1   0  ).

(ii) Using the result from (i) we obtain

a × (S(b)c) + c × (S(a)b) + b × (S(c)a) = 0
S(a)(S(b)c) + S(c)(S(a)b) + S(b)(S(c)a) = 0
(S(a)S(b))c + (S(c)S(a))b + (S(b)S(c))a = 0

where we used that matrix multiplication is associative.
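A numerical sketch of (i) and (ii), with randomly chosen vectors a, b, c:

```python
import numpy as np

def S(a):
    """Skew-symmetric matrix with S(a) b = a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

print(np.allclose(S(a) @ b, np.cross(a, b)))
# Jacobi identity: (S(a)S(b))c + (S(c)S(a))b + (S(b)S(c))a = 0
print(np.allclose(S(a) @ S(b) @ c + S(c) @ S(a) @ b + S(b) @ S(c) @ a, 0.0))
```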
Problem 47. Let

x = ( x1 )        y = ( y1 )
    ( x2 ),           ( y2 )
    ( x3 )            ( y3 )

be two normalized vectors in R³. Assume that x^T y = 0, i.e. the vectors are
orthogonal. Is the vector x × y a unit vector again? We have

x1² + x2² + x3² = 1,     y1² + y2² + y3² = 1                    (1)

and

x^T y = x1 y1 + x2 y2 + x3 y3 = 0.                              (2)
Solution 47. The square of the norm ||x × y||² of the vector x × y is given by

||x × y||² = x1²(y2² + y3²) + x2²(y1² + y3²) + x3²(y1² + y2²)
             - 2(x2 x3 y2 y3 + x1 x3 y1 y3 + x1 x2 y1 y2).

From (1) it follows that x3² = 1 - x1² - x2² and y3² = 1 - y1² - y2². From (2) it
follows that x3 y3 = -x1 y1 - x2 y2, so that

2 x1 y1 x2 y2 = x3² y3² - x1² y1² - x2² y2².

Inserting these equations into ||x × y||² gives ||x × y||² = 1. Thus the vector is
normalized.
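A quick numerical check with a randomly chosen orthonormal pair (the Gram-Schmidt step below is just one way to produce such a pair):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
x /= np.linalg.norm(x)            # normalize x
y = rng.standard_normal(3)
y -= (x @ y) * x                  # make y orthogonal to x
y /= np.linalg.norm(y)            # normalize y

# For orthonormal x, y the cross product is again a unit vector.
print(np.isclose(np.linalg.norm(np.cross(x, y)), 1.0))
```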
Problem 48. Consider the three linearly independent normalized column vec­tors in R³

a1 = ( 1/√2 )      a2 = ( 0 )      a3 = (  1/√2 )
     (  0   ),          ( 1 ),          (   0   )
     ( 1/√2 )           ( 0 )           ( -1/√2 ).

(i) Find the "volume" V_a := a1^T (a2 × a3).
(ii) From the three vectors a1, a2, a3 we form the 3 × 3 orthogonal matrix

A = ( 1/√2   0    1/√2 )
    (  0     1     0   )
    ( 1/√2   0   -1/√2 ).

Find the determinant and trace. Discuss.
(iii) Find the vectors

b1 = (1/V_a) a2 × a3,    b2 = (1/V_a) a3 × a1,    b3 = (1/V_a) a1 × a2.

Are the three vectors linearly independent?
Solution 48. (i) We have

V_a = a1^T (a2 × a3) = (1/√2  0  1/√2)(-1/√2  0  -1/√2)^T = -1.

(ii) Since the vectors are linearly independent the determinant must be nonzero.
We find det(A) = -1 and tr(A) = 1. Note that

det(A) = a1^T (a2 × a3).

(iii) We obtain b1 = a1, b2 = a2, b3 = a3. Thus the vectors are linearly
independent.
Problem 49. Let v1, v2, v3 be three normalized linearly independent vectors
in R³. Give an interpretation of A defined by

cos(A/2) = (1 + v1·v2 + v2·v3 + v3·v1) / √(2(1 + v1·v2)(1 + v2·v3)(1 + v3·v1))

where · denotes the scalar product. Consider first the case where v1, v2, v3
denote the standard basis in R³.
Solution 49. The quantity A is the directed area of the spherical triangle on the
unit sphere S² with corners given by the vectors v1, v2, v3, whose sign is given
by

sign(A) = sgn(v1 · (v2 × v3)).

For the standard basis we find cos(A/2) = 1/√2, i.e. A = π/2.
Problem 50. Let v be a column vector in Rⁿ and v ≠ 0. Let

A := v v^T / (v^T v)

where T denotes the transpose, i.e. v^T is a row vector. Calculate A².
Solution 50. Obviously vv^T is a nonzero n × n matrix and v^T v is a nonzero
real number. We find

A² = (vv^T)(vv^T)/(v^T v)² = v(v^T v)v^T/(v^T v)² = vv^T/(v^T v) = A

where we used that matrix multiplication is associative.
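The identity A² = A is easy to confirm numerically (the vector v below is an arbitrary choice):

```python
import numpy as np

# Rank-one projection A = v v^T / (v^T v) satisfies A^2 = A.
v = np.array([1.0, -2.0, 3.0])          # any nonzero vector
A = np.outer(v, v) / (v @ v)
print(np.allclose(A @ A, A))
```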
Problem 51. (i) A square matrix A over C is called skew-hermitian if A =
-A*. Show that such a matrix is normal, i.e. we have AA* = A*A.
(ii) Let A be an n × n skew-hermitian matrix over C, i.e. A* = -A. Let U be an
n × n unitary matrix, i.e. U* = U^(-1). Show that B := U*AU is a skew-hermitian
matrix.

Solution 51. (i) We have AA* = -A*A* = (-A*)(-A) = A*A.
(ii) We have A* = -A. Thus from B = U*AU and U** = U it follows that

B* = (U*AU)* = U*A*U = U*(-A)U = -U*AU = -B.
Problem 52. Let A, X, Y be n × n matrices. Assume that XA = I_n, AY = I_n.
Show that X = Y.

Solution 52. We have X = X I_n = X(AY) = (XA)Y = I_n Y = Y.
Problem 53. (i) Let A, B be n × n matrices. Assume that A is nonsingular,
i.e. A^(-1) exists. Show that if BA = 0_n, then B = 0_n.
(ii) Let A, B be n × n matrices and A + B = I_n, AB = 0_n. Show that A² = A
and B² = B.

Solution 53. (i) We have B = B I_n = B(AA^(-1)) = (BA)A^(-1) = 0_n A^(-1) = 0_n.
(ii) Multiplying A + B = I_n with A we obtain A² + AB = A and therefore
A² = A. Multiplying A + B = I_n with B yields AB + B² = B and therefore
B² = B.
Problem 54. Find a 2 × 2 matrix A over R such that

A ( 1 ) = (1/√2) ( 1 )        A ( 0 ) = (1/√2) (  1 )
  ( 0 )          ( 1 ),         ( 1 )          ( -1 ).

Solution 54. We find a11 = a12 = a21 = 1/√2, a22 = -1/√2. Thus

A = (1/√2) ( 1   1 )
           ( 1  -1 ).
Problem 55. Consider the vector space R⁴. Find all pairwise orthogonal
vectors (column vectors) x1, ..., x_p, where the entries of the column vectors can
only be +1 or -1. Calculate the matrix

Σ_(j=1)^p x_j x_j^T

and find the eigenvalues and eigenvectors of this matrix.
Solution 55. The number of vectors p cannot exceed 4 since p > 4 would imply
dim(R⁴) > 4. A solution is

x1 = ( 1 )    x2 = (  1 )    x3 = (  1 )    x4 = (  1 )
     ( 1 )         ( -1 )         (  1 )         ( -1 )
     ( 1 )         (  1 )         ( -1 )         ( -1 )
     ( 1 ),        ( -1 ),        ( -1 ),        (  1 ).

Thus

Σ_(j=1)^4 x_j x_j^T = ( 4 0 0 0 )
                      ( 0 4 0 0 )
                      ( 0 0 4 0 )
                      ( 0 0 0 4 ).

The eigenvalue is 4 with multiplicity 4. The eigenvectors are all x ∈ R⁴ with
x ≠ 0. Another solution is given by

x1 = ( 1 )    x2 = (  1 )    x3 = (  1 )    x4 = ( -1 )
     ( 1 )         (  1 )         ( -1 )         (  1 )
     ( 1 )         ( -1 )         ( -1 )         ( -1 )
     ( 1 ),        ( -1 ),        (  1 ),        (  1 ).
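The first solution set can be verified with numpy:

```python
import numpy as np

# Columns of X are the four pairwise orthogonal +-1 vectors x_1, ..., x_4.
X = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]]).T

M = sum(np.outer(X[:, j], X[:, j]) for j in range(4))
print(np.array_equal(X.T @ X, 4 * np.eye(4)))    # pairwise orthogonal
print(np.array_equal(M, 4 * np.eye(4)))          # eigenvalue 4, multiplicity 4
```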
Problem 56. The 2ⁿ × 2ⁿ Hadamard matrix H(n) is generated recursively as
follows

H(n) = ( H(n-1)   H(n-1) )
       ( H(n-1)  -H(n-1) )

where n = 1, 2, ... and H(0) is the 1 × 1 matrix H(0) = (1). The column vectors
in the matrix H(n) form a basis in R^(2ⁿ). They are pairwise orthogonal to each
other. Find H(1), H(2), H(3) and H(3)H^T(3). Find the eigenvalues of H(1)
and H(2).
Solution 56. We find

H(1) = ( 1  1 )        H(2) = ( 1  1  1  1 )
       ( 1 -1 ),              ( 1 -1  1 -1 )
                              ( 1  1 -1 -1 )
                              ( 1 -1 -1  1 )

and

H(3) = ( H(2)   H(2) )
       ( H(2)  -H(2) ).

We find H(3)H^T(3) = 8 I₈. The eigenvalues of H(1) are √2 and -√2. The
eigenvalues of H(2) are 2 (2 times) and -2 (2 times).
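The recursion and the stated properties can be checked with a short numpy function (a sketch; the helper name `H` is ours, not from the text):

```python
import numpy as np

def H(n):
    """Recursively generated 2^n x 2^n Hadamard matrix."""
    if n == 0:
        return np.array([[1]])
    Hm = H(n - 1)
    return np.block([[Hm, Hm], [Hm, -Hm]])

print(H(1).tolist())                               # [[1, 1], [1, -1]]
print(np.array_equal(H(3) @ H(3).T, 8 * np.eye(8)))
ev = sorted(np.linalg.eigvals(H(1).astype(float)).real)
print(np.allclose(ev, [-np.sqrt(2), np.sqrt(2)]))
```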
Problem 57. A Hadamard matrix is an n × n matrix H with entries in
{-1, +1} such that any two distinct rows or columns of H have inner product
0. Construct a 4 × 4 Hadamard matrix starting from the column vector x1 =
(1 1 1 1)^T.

Solution 57. The column vector x2 = (1 1 -1 -1)^T is perpendicular to
the vector x1. Next the column vector

x3 = (-1  1  1  -1)^T

is perpendicular to x1 and x2. Finally the column vector

x4 = (1  -1  1  -1)^T

is perpendicular to x1, x2 and x3. Thus we obtain the 4 × 4 Hadamard matrix

( 1  1  -1   1 )
( 1  1   1  -1 )
( 1 -1   1   1 )
( 1 -1  -1  -1 ).

Problem 58.
Consider the 3 × 3 symmetric matrix over R

A = (  2   2  -2 )
    (  2   2  -2 )
    ( -2  -2   6 ).

(i) Let X be an m × n matrix. The column rank of X is the maximum number of
linearly independent columns. The row rank is the maximum number of linearly
independent rows. The row rank and the column rank of X are equal (called
the rank of X). Find the rank of A and denote it by k.
(ii) Locate a k × k submatrix of A having rank k.
(iii) Find 3 × 3 permutation matrices P and Q such that in the matrix PAQ the
submatrix from (ii) is in the upper left portion of PAQ.
Solution 58. (i) The vectors in the first two columns are linearly dependent.
Thus the rank of A is 2.
(ii) A 2 × 2 submatrix having rank 2 is

(  2  -2 )
( -2   6 ).

(iii) We have

P = Q = ( 1 0 0 )            (  2  -2   2 )
        ( 0 0 1 ),     PAQ = ( -2   6  -2 )
        ( 0 1 0 )            (  2  -2   2 ).

Problem 59. Find all n × n matrices A over R that satisfy the equation
A^T A = 0_n.
Solution 59. From A^T A = 0_n we find Σ_(i=1)^n a_(ij) a_(ij) = 0, where j = 1, ..., n.
Thus A must be the zero matrix.
Problem 60. A square matrix A such that A² = I_n is called involutory.
(i) Find all 2 × 2 matrices over the real numbers which are involutory. Assume
that a_(ij) ≠ 0 for i, j = 1, 2.
(ii) Show that an n × n matrix A is involutory if and only if (I_n - A)(I_n + A) = 0_n.
Solution 60. (i) From A² = I₂ we obtain

a11² + a12 a21 = 1,  a12(a11 + a22) = 0,  a21(a11 + a22) = 0,  a12 a21 + a22² = 1.

Since a12 ≠ 0 we have a11 + a22 = 0 and a21 = (1 - a11²)/a12. Then the matrix
is given by

A = (       a11             a12 )
    ( (1 - a11²)/a12       -a11 ).

(ii) Suppose that (I_n - A)(I_n + A) = 0_n. Then (I_n - A)(I_n + A) = I_n - A² = 0_n.
Thus A² = I_n and A is involutory. Suppose that A is involutory. Then A² = I_n
and

0_n = I_n - A² = (I_n - A)(I_n + A).
Problem 61. (i) Let A be an n × n symmetric matrix over R. Let P be an
arbitrary n × n matrix over R. Show that P^T A P is symmetric.
(ii) Let B be an n × n skew-symmetric matrix over R, i.e. B^T = -B. Let P be
an arbitrary n × n matrix over R. Show that P^T B P is skew-symmetric.

Solution 61. (i) Using that A^T = A and (P^T)^T = P we have (P^T A P)^T =
P^T A^T (P^T)^T = P^T A P. Thus P^T A P is symmetric.
(ii) Using B^T = -B and (P^T)^T = P we have (P^T B P)^T = P^T B^T (P^T)^T =
-P^T B P. Thus P^T B P is skew-symmetric.
Problem 62. Let A be an m × n matrix. The column rank of A is the maximum
number of linearly independent columns. The row rank is the maximum number
of linearly independent rows. The row rank and the column rank of A are equal
(called the rank of A). Find the rank of the 4 × 4 matrix

A = (  1   2   3   4 )
    (  5   6   7   8 )
    (  9  10  11  12 )
    ( 13  14  15  16 ).
Solution 62. The elementary transformations do not change the rank of a
matrix. We subtract the third column from the fourth column, the second
column from the third column and the first column from the second column, i.e.

(  1   2   3   4 )      (  1   2   1   1 )      (  1   1   1   1 )
(  5   6   7   8 )  →   (  5   6   1   1 )  →   (  5   1   1   1 )
(  9  10  11  12 )      (  9  10   1   1 )      (  9   1   1   1 )
( 13  14  15  16 )      ( 13  14   1   1 )      ( 13   1   1   1 ).

From the last matrix we see (three columns are the same) that the rank of A is
2. It follows that two eigenvalues must be 0.
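numpy confirms the rank computation:

```python
import numpy as np

# The 4 x 4 matrix with entries 1, ..., 16 has rank 2.
A = np.arange(1, 17).reshape(4, 4)
print(np.linalg.matrix_rank(A))     # 2
```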
Problem 63. A Cartan matrix A is a square matrix whose elements a_(ij) satisfy
the following conditions:
1. a_(ij) is an integer, one of {-3, -2, -1, 0, 2}
2. a_(jj) = 2 for all diagonal elements of A
3. a_(ij) ≤ 0 off of the diagonal
4. a_(ij) = 0 iff a_(ji) = 0
5. There exists an invertible diagonal matrix D such that DAD^(-1) gives a
symmetric and positive definite quadratic form.
Give a 2 × 2 non-diagonal Cartan matrix and its eigenvalues.
Solution 63. For n = 2 a non-diagonal Cartan matrix is

A = (  2  -1 )
    ( -1   2 ).

The first four conditions are obvious for the matrix A. The last condition can
be seen from

x^T A x = ( x1  x2 ) (  2  -1 ) ( x1 ) = 2(x1² - x1 x2 + x2²) > 0
                     ( -1   2 ) ( x2 )

for x ≠ 0. That the symmetric matrix A is positive definite can also be seen
from the eigenvalues of A which are 3 and 1.
Problem 64. Let A, B, C, D be n × n matrices over R. Assume that AB^T
and CD^T are symmetric and AD^T - BC^T = I_n, where T denotes transpose.
Show that A^T D - C^T B = I_n.

Solution 64. From the assumptions we have

AB^T = (AB^T)^T = BA^T,    CD^T = (CD^T)^T = DC^T,    AD^T - BC^T = I_n.

Taking the transpose of the third equation we have DA^T - CB^T = I_n. These
four equations can be written in the form of block matrices in the identity

( A  B ) (  D^T  -B^T )   ( I_n  0_n )
( C  D ) ( -C^T   A^T ) = ( 0_n  I_n ).

Thus the matrices are (2n) × (2n) matrices. If X, Y are m × m matrices with
XY = I_m, the identity matrix, then Y = X^(-1) and YX = I_m too. Applying
this to the matrix equation with m = 2n we obtain

(  D^T  -B^T ) ( A  B )   ( I_n  0_n )
( -C^T   A^T ) ( C  D ) = ( 0_n  I_n ).

Equating the lower right blocks shows that -C^T B + A^T D = I_n.
Problem 65. Let n be a positive integer. Let A_n be the (2n + 1) × (2n + 1)
skew-symmetric matrix for which each entry in the first n subdiagonals below
the main diagonal is 1 and each of the remaining entries below the main diagonal
is -1. Give A1 and A2. Find the rank of A_n.
Solution 65. We have

A1 = (  0  -1   1 )        A2 = (  0  -1  -1   1   1 )
     (  1   0  -1 ),            (  1   0  -1  -1   1 )
     ( -1   1   0 )             (  1   1   0  -1  -1 )
                                ( -1   1   1   0  -1 )
                                ( -1  -1   1   1   0 ).

We use induction on n to prove that rank(A_n) = 2n. The rank of A1 is 2 since
the first column in the matrix A1 is a linear combination of the second and third
columns and the second and third columns are linearly independent. Suppose
n ≥ 2 and that rank(A_(n-1)) = 2(n - 1) is known. Adding multiples of the
first two rows of A_n to the other rows transforms A_n to a matrix of the form

(  0  -1      *    )
(  1   0      *    )
(  0   0   A_(n-1) )

in which 0 and * represent blocks of size (2n-1) × 2 and 2 × (2n-1), respectively.
Thus rank(A_n) = 2 + rank(A_(n-1)) = 2 + 2(n - 1) = 2n.
Problem 66. A vector u = (u1, ..., u_n) is called a probability vector if the
components are nonnegative and their sum is 1.
(i) Is the vector u = (1/2, 0, 1/4, 1/4) a probability vector?
(ii) Can the vector v = (2, 3, 5, 1, 0) be "normalized" so that we obtain a
probability vector?

Solution 66. (i) Since all the components are nonnegative and the sum of the
entries is 1 we find that u is a probability vector.
(ii) All the entries in v are nonnegative but the sum is 11. Thus we can construct
the probability vector

ṽ = (1/11)(2, 3, 5, 1, 0).
Problem 67. An n × n matrix P = (p_(ij)) is called a stochastic matrix if each
of its rows is a probability vector, i.e. if each entry of P is nonnegative and
the sum of the entries in each row is 1. Let A and B be two stochastic n × n
matrices. Is the matrix product AB also a stochastic matrix?
Solution 67. Yes. From matrix multiplication we have for the (i,j)-th entry
of the product

(AB)_(ij) = Σ_(k=1)^n a_(ik) b_(kj).

It follows that

Σ_(j=1)^n (AB)_(ij) = Σ_(j=1)^n Σ_(k=1)^n a_(ik) b_(kj) = Σ_(k=1)^n a_(ik) Σ_(j=1)^n b_(kj) = Σ_(k=1)^n a_(ik) = 1

since

Σ_(k=1)^n a_(ik) = 1,     Σ_(j=1)^n b_(kj) = 1.
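A numerical illustration with two arbitrarily chosen 3 × 3 stochastic matrices:

```python
import numpy as np

A = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.2, 0.7],
              [0.3, 0.3, 0.4]])
B = np.array([[0.2, 0.8, 0.0],
              [0.0, 1.0, 0.0],
              [0.6, 0.2, 0.2]])

P = A @ B
print(np.all(P >= 0))                    # entries nonnegative
print(np.allclose(P.sum(axis=1), 1.0))   # each row sums to 1
```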
Problem 68. The numerical range, also known as the field of values, of an
n × n matrix A over the complex numbers, is defined as

F(A) := { z*Az : ||z||² = z*z = 1, z ∈ Cⁿ }.

The Toeplitz-Hausdorff convexity theorem tells us that the numerical range of a
square matrix is a convex compact subset of the complex plane.
(i) Find the numerical range for the 2 × 2 matrices

B = ( 1 0 )        C = ( 0 1 )
    ( 0 0 ),           ( 0 1 ).

(ii) Let a ∈ R and consider the 5 × 5 symmetric matrix

A = ( a 1 0 0 0 )
    ( 1 a 1 0 0 )
    ( 0 1 a 1 0 )
    ( 0 0 1 a 1 )
    ( 0 0 0 1 a ).

Show that the set F(A) lies on the real axis. Show that |z*Az| ≤ |a| + 8.
Solution 68. (i) Let

A = ( a11  a12 )        z = ( e^(iφ) cos θ )
    ( a21  a22 ),           ( e^(iχ) sin θ ),       θ, φ, χ ∈ R.

Then z is an arbitrary complex vector of length 1, i.e. ||z|| = 1. Then

z*Az = a11 cos²θ + (a12 e^(i(χ-φ)) + a21 e^(i(φ-χ))) sin θ cos θ + a22 sin²θ.

Thus for the matrix B we have z*Bz = cos²θ, where cos²θ ∈ [0,1] for all
θ ∈ R, so that F(B) = [0,1]. For the matrix C we have

z*Cz = e^(i(χ-φ)) sin θ cos θ + sin²θ.

Thus the numerical range F(C) is the closed elliptical disc in the complex plane
with foci at (0,0) and (1,0), minor axis 1, and major axis √2.
(ii) Since z* = (z̄1, z̄2, z̄3, z̄4, z̄5) with z*z = 1, applying matrix multiplication
we obtain

z*Az = a + z̄1z2 + z̄2z1 + z̄2z3 + z̄3z2 + z̄3z4 + z̄4z3 + z̄4z5 + z̄5z4.

Let z_j = r_j e^(iθ_j) with 0 ≤ r_j ≤ 1. For j ≠ k we have

z̄_j z_k + z̄_k z_j = 2 r_j r_k cos(θ_j - θ_k).

It follows that z*Az is given by

a + 2(r1 r2 cos(θ12) + r2 r3 cos(θ23) + r3 r4 cos(θ34) + r4 r5 cos(θ45))

where θ12 = θ1 - θ2, θ23 = θ2 - θ3, θ34 = θ3 - θ4, θ45 = θ4 - θ5. Thus the
set F(A) lies on the real axis. Since 0 ≤ r_j ≤ 1 and |cos(θ)| ≤ 1 we obtain
|z*Az| ≤ |a| + 8.
Problem 69. Let A be an n × n matrix over C and F(A) the field of values.
Let U be an n × n unitary matrix.
(i) Show that F(U*AU) = F(A).
(ii) Apply the theorem to the two 2 × 2 matrices

A1 = ( 0 1 )        A2 = ( 1  0 )
     ( 1 0 ),            ( 0 -1 )

which are unitarily equivalent.

Solution 69. (i) Since a unitary matrix leaves invariant the surface of the
Euclidean unit ball, the complex numbers that comprise the sets F(U*AU) and
F(A) are the same. If z ∈ Cⁿ and z*z = 1, we have

z*(U*AU)z = w*Aw ∈ F(A)

where w = Uz, so that w*w = z*U*Uz = z*z = 1. Thus F(U*AU) ⊆ F(A).
The reverse containment is obtained similarly.
(ii) For A1 we have

z*A1z = z̄1z2 + z̄2z1 = 2 r1 r2 cos(θ2 - θ1)

with the constraints 0 ≤ r1, r2 ≤ 1 and r1² + r2² = 1. For A2 we find

z*A2z = z̄1z1 - z̄2z2 = r1² - r2²

with the same constraints. Both define the same set, namely the interval [-1, 1].
Problem 70. Let A be an m × n matrix over C. The Moore-Penrose pseudo
inverse matrix A⁺ is the unique n × m matrix which satisfies

AA⁺A = A,   A⁺AA⁺ = A⁺,   (AA⁺)* = AA⁺,   (A⁺A)* = A⁺A.

We also have that

x = A⁺b                                                         (1)

is the shortest length least squares solution to the problem Ax = b.
(i) Show that if (A*A)^(-1) exists, then A⁺ = (A*A)^(-1)A*.
(ii) Let

A = ( 1 3 )
    ( 2 4 )
    ( 3 5 ).

Find the Moore-Penrose matrix inverse A⁺ of A.
Solution 70. (i) Suppose that (A*A)^(-1) exists. Then

Ax = b  ⇒  A*Ax = A*b  ⇒  x = (A*A)^(-1)A*b.

Using (1) we obtain A⁺ = (A*A)^(-1)A*.
(ii) We have

A*A = ( 1 2 3 ) ( 1 3 )   ( 14  26 )
      ( 3 4 5 ) ( 2 4 ) = ( 26  50 ).
                ( 3 5 )

Since det(A*A) ≠ 0 the inverse of A*A exists and is given by

(A*A)^(-1) = (1/12) (  25  -13 )
                    ( -13    7 ).

Thus

A⁺ = (A*A)^(-1)A* = (1/12) (  25  -13 ) ( 1 2 3 ) = (1/12) ( -14  -2  10 )
                           ( -13    7 ) ( 3 4 5 )          (   8   2  -4 ).
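The computation in (ii) can be cross-checked against numpy's built-in pseudoinverse:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 4.0],
              [3.0, 5.0]])

# A+ = (A*A)^{-1} A* for full column rank A.
Aplus = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(Aplus, np.linalg.pinv(A)))
print(np.allclose(A @ Aplus @ A, A))        # A A+ A = A
```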
Problem 71. The Fibonacci numbers are defined by the recurrence relation
(linear difference equation of second order with constant coefficients) s_(n+2) =
s_(n+1) + s_n, where n = 0, 1, ... and s0 = 0, s1 = 1. Write this recurrence relation
in matrix form. Find s6, s5, and s4.
Solution 71. We have s2 = 1. We can write

( s_(n+2) )   ( 1 1 ) ( s_(n+1) )
( s_(n+1) ) = ( 1 0 ) ( s_n     ),       n = 0, 1, 2, ....

Thus

( s_(n+2) )   ( 1 1 )^(n+1) ( s1 )
( s_(n+1) ) = ( 1 0 )       ( s0 ).

It follows that s6 = 8, s5 = 5 and s4 = 3.
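The matrix form of the recurrence is easy to iterate numerically:

```python
import numpy as np

F = np.array([[1, 1],
              [1, 0]])

v = np.array([1, 0])      # (s_1, s_0)
for _ in range(5):        # advance to (s_6, s_5)
    v = F @ v
print(v.tolist())         # [8, 5], i.e. s_6 = 8, s_5 = 5
```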
Problem 72. Assume that A = A1 + iA2 is a nonsingular n × n matrix, where
A1 and A2 are real n × n matrices. Assume that A1 is also nonsingular. Find
the inverse of A using the inverse of A1.
Solution 72. We have the identity (A1 + iA2)(I_n - iA1^(-1)A2) = A1 + A2A1^(-1)A2.
Thus we find the inverse

A^(-1) = (A1 + A2A1^(-1)A2)^(-1) - i A1^(-1)A2 (A1 + A2A1^(-1)A2)^(-1).
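The identity can be tested numerically on randomly chosen real matrices A1, A2 (shifting A1 by 3I just keeps it safely invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
A1 = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
A2 = rng.standard_normal((3, 3))
A = A1 + 1j * A2

# A^{-1} = M^{-1} - i A1^{-1} A2 M^{-1}  with  M = A1 + A2 A1^{-1} A2.
M = A1 + A2 @ np.linalg.inv(A1) @ A2
Ainv = np.linalg.inv(M) - 1j * np.linalg.inv(A1) @ A2 @ np.linalg.inv(M)
print(np.allclose(Ainv, np.linalg.inv(A)))
```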
Problem 73. Let A and B be n × n matrices over R. Assume that A ≠ B,
A³ = B³ and A²B = B²A. Is A² + B² invertible?

Solution 73. We have (A² + B²)(A - B) = A³ - A²B + B²A - B³ = 0_n. Since
A ≠ B, the matrix A - B has a nonzero column which A² + B² maps to the zero
vector. Thus A² + B² is not invertible.
Problem 74. Let A be a symmetric 2 × 2 matrix over R

A = ( a00  a01 )
    ( a10  a11 ).

Thus a01 = a10. Assume that a00 a01 = a01 a11 and a00 a11 = a01 a11. Find all
matrices A that satisfy these conditions.

Solution 74. We obtain as trivial solutions

and the non-trivial solutions
Problem 75. Consider the invertible 2 × 2 matrix

J = (  0  1 )
    ( -1  0 ).

Find all nonzero 2 × 2 matrices A such that AJ = JA.
Solution 75. We have A = JAJ^(-1) with

A = (  a11  a12 )
    ( -a12  a11 ).

Problem 76. Consider the column vector in R⁸
x = (20.0 6.0 4.0 2.0 10.0 6.0 8.0 4.0)^T

where T denotes transpose. Consider the matrices

H1 = (1/2) ( 1 1 0 0 0 0 0 0 )        G1 = (1/2) ( 1 -1 0  0 0  0 0  0 )
           ( 0 0 1 1 0 0 0 0 )                   ( 0  0 1 -1 0  0 0  0 )
           ( 0 0 0 0 1 1 0 0 )                   ( 0  0 0  0 1 -1 0  0 )
           ( 0 0 0 0 0 0 1 1 ),                  ( 0  0 0  0 0  0 1 -1 ),

H2 = (1/2) ( 1 1 0 0 )        G2 = (1/2) ( 1 -1 0  0 )
           ( 0 0 1 1 ),                  ( 0  0 1 -1 ),

H3 = (1/2)(1 1),    G3 = (1/2)(1 -1).

(i) Calculate the vectors

H1x,  G1x,  H2H1x,  G2H1x,  H2G1x,  G2G1x,
H3H2H1x,  G3H2H1x,  H3G2H1x,  G3G2H1x,
H3H2G1x,  G3H2G1x,  H3G2G1x,  G3G2G1x.

(ii) Calculate H_j H_j^T, G_j G_j^T, H_j G_j^T for j = 1, 2, 3.
(iii) How can we reconstruct the original vector x from the vector

(H3H2H1x, G3H2H1x, H3G2H1x, G3G2H1x, H3H2G1x, G3H2G1x, H3G2G1x, G3G2G1x)?

The problem plays a role in wavelet theory.
Solution 76. (i) We find

H1x = (13.0 3.0 8.0 6.0)^T,    G1x = (7.0 1.0 2.0 2.0)^T.

Thus we have the vector (13.0 3.0 8.0 6.0 7.0 1.0 2.0 2.0)^T. Next we find

H2H1x = (8.0 7.0)^T,  G2H1x = (5.0 1.0)^T,  H2G1x = (4.0 2.0)^T,  G2G1x = (3.0 0.0)^T.

Thus we have the vector (8.0 7.0 5.0 1.0 4.0 2.0 3.0 0.0)^T. Finally we have

H3H2H1x = 7.5,  G3H2H1x = 0.5,  H3G2H1x = 3.0,  G3G2H1x = 2.0,
H3H2G1x = 3.0,  G3H2G1x = 1.0,  H3G2G1x = 1.5,  G3G2G1x = 1.5.

Thus we obtain the vector (7.5 0.5 3.0 2.0 3.0 1.0 1.5 1.5)^T.
(ii) We find

H1H1^T = (1/2)I₄,   G1G1^T = (1/2)I₄,   H1G1^T = 0₄,
H2H2^T = (1/2)I₂,   G2G2^T = (1/2)I₂,   H2G2^T = 0₂,
H3H3^T = 1/2,       G3G3^T = 1/2,       H3G3^T = 0.

(iii) Let w := (7.5 0.5 3.0 2.0 3.0 1.0 1.5 1.5)^T. Consider

X1 = ( 1  1 0  0 0 0 0 0 )        Y1 = ( 0 0 0 0 1  1 0  0 )
     ( 1 -1 0  0 0 0 0 0 )             ( 0 0 0 0 1 -1 0  0 )
     ( 0  0 1  1 0 0 0 0 )             ( 0 0 0 0 0  0 1  1 )
     ( 0  0 1 -1 0 0 0 0 ),            ( 0 0 0 0 0  0 1 -1 ).

Then

X1w = ( 8 )        Y1w = ( 4 )
      ( 7 )              ( 2 )
      ( 5 )              ( 3 )
      ( 1 ),             ( 0 ).

Now let

X2 = Y2 = ( 1 0  1  0 )
          ( 1 0 -1  0 )
          ( 0 1  0  1 )
          ( 0 1  0 -1 ).

Then

X2(X1w) = ( 13 )        Y2(Y1w) = ( 7 )
          (  3 )                  ( 1 )
          (  8 )                  ( 2 )
          (  6 ),                 ( 2 ).

Thus the original vector x is reconstructed from

( X2(X1w) )
( Y2(Y1w) ).

The odd entries come from X2(X1w) + Y2(Y1w). The even ones come from
X2(X1w) - Y2(Y1w).
Problem 77. Given a signal as the column vector

x = (3.0 0.5 2.0 7.0)^T.

The pyramid algorithm (for Haar wavelets) is as follows: The first two entries
(3.0 0.5)^T in the signal give an average of (3.0 + 0.5)/2 = 1.75 and a difference
average of (3.0 - 0.5)/2 = 1.25. The second two entries (2.0 7.0)^T give an average
of (2.0 + 7.0)/2 = 4.5 and a difference average of (2.0 - 7.0)/2 = -2.5. Thus we
end up with a vector

(1.75 1.25 4.5 -2.5)^T.

Now we take the average of 1.75 and 4.5 providing (1.75 + 4.5)/2 = 3.125 and
the difference average (1.75 - 4.5)/2 = -1.375. Thus we end up with the vector

y = (3.125 -1.375 1.25 -2.5)^T.
(i) Find a 4 × 4 matrix A such that

( 3.0 )       (  3.125 )
( 0.5 )  = A  ( -1.375 )
( 2.0 )       (  1.25  )
( 7.0 )       ( -2.5   ),

i.e. x = Ay.
(ii)
Show that the inverse of A exists. Then find the inverse.
Solution 77. (i) Since we can write

( 3.0 )          ( 1 )           (  1 )          (  1 )          (  0 )
( 0.5 ) = 3.125  ( 1 ) - 1.375   (  1 ) + 1.25   ( -1 ) - 2.5    (  0 )
( 2.0 )          ( 1 )           ( -1 )          (  0 )          (  1 )
( 7.0 )          ( 1 )           ( -1 )          (  0 )          ( -1 )

we obtain the matrix

A = ( 1  1   1   0 )
    ( 1  1  -1   0 )
    ( 1 -1   0   1 )
    ( 1 -1   0  -1 ).

(ii) All the column vectors in the matrix A are nonzero and all the pairwise scalar
products are equal to 0. Thus the column vectors form a basis (not normalized)
in R⁴. Thus the matrix is invertible. The inverse matrix is given by
A^(-1) = ( 1/4   1/4   1/4   1/4 )
         ( 1/4   1/4  -1/4  -1/4 )
         ( 1/2  -1/2    0     0  )
         (  0     0    1/2  -1/2 ).

Problem 78. The 8 × 8 matrix
H = ( 1  1  1  0  1  0  0  0 )
    ( 1  1  1  0 -1  0  0  0 )
    ( 1  1 -1  0  0  1  0  0 )
    ( 1  1 -1  0  0 -1  0  0 )
    ( 1 -1  0  1  0  0  1  0 )
    ( 1 -1  0  1  0  0 -1  0 )
    ( 1 -1  0 -1  0  0  0  1 )
    ( 1 -1  0 -1  0  0  0 -1 )

plays a role in the discrete wavelet transform. Show that the matrix is invertible
without calculating the determinant. Find the inverse.
Solution 78. All the column vectors in the matrix H are nonzero. All
the pairwise scalar products of the column vectors are 0. Thus the matrix has
maximum rank, i.e. rank(H) = 8 and the column vectors form a basis (not
normalized) in R⁸. The inverse matrix is given by

H^(-1) = (1/8) ( 1  1  1  1  1  1  1  1 )
               ( 1  1  1  1 -1 -1 -1 -1 )
               ( 2  2 -2 -2  0  0  0  0 )
               ( 0  0  0  0  2  2 -2 -2 )
               ( 4 -4  0  0  0  0  0  0 )
               ( 0  0  4 -4  0  0  0  0 )
               ( 0  0  0  0  4 -4  0  0 )
               ( 0  0  0  0  0  0  4 -4 ).
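The orthogonality argument can be verified numerically; since the columns of H are orthogonal, H^(-1) = D^(-1)H^T where D holds the squared column norms:

```python
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0, 0],
              [1, 1, 1, 0, -1, 0, 0, 0],
              [1, 1, -1, 0, 0, 1, 0, 0],
              [1, 1, -1, 0, 0, -1, 0, 0],
              [1, -1, 0, 1, 0, 0, 1, 0],
              [1, -1, 0, 1, 0, 0, -1, 0],
              [1, -1, 0, -1, 0, 0, 0, 1],
              [1, -1, 0, -1, 0, 0, 0, -1]])

G = H.T @ H                                    # Gram matrix of the columns
print(np.array_equal(G, np.diag(np.diag(G))))  # columns pairwise orthogonal
Hinv = np.diag(1.0 / np.diag(G)) @ H.T         # H^{-1} = D^{-1} H^T
print(np.allclose(Hinv @ H, np.eye(8)))
```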
Problem 79. (i) For n = 4 the transform matrix for the Daubechies wavelet
is given by
D4 = ( c0   c1   c2   c3 )
     ( c3  -c2   c1  -c0 )
     ( c2   c3   c0   c1 )
     ( c1  -c0   c3  -c2 )

where

( c0 )               ( 1 + √3 )
( c1 )  =  1/(4√2)   ( 3 + √3 )
( c2 )               ( 3 - √3 )
( c3 )               ( 1 - √3 ).

Is D4 orthogonal? Prove or disprove.
(ii) For n = 8 the transform matrix for the Daubechies wavelet is given by

D8 = ( c0   c1   c2   c3   0    0    0    0  )
     ( c3  -c2   c1  -c0   0    0    0    0  )
     ( 0    0    c0   c1   c2   c3   0    0  )
     ( 0    0    c3  -c2   c1  -c0   0    0  )
     ( 0    0    0    0    c0   c1   c2   c3 )
     ( 0    0    0    0    c3  -c2   c1  -c0 )
     ( c2   c3   0    0    0    0    c0   c1 )
     ( c1  -c0   0    0    0    0    c3  -c2 )

Is D8 orthogonal? Prove or disprove.
Solution 79. (i) We find D4D4^T = I₄. Thus D4^(-1) = D4^T and D4 is orthogonal.
(ii) We find D8D8^T = I₈. Thus D8^(-1) = D8^T and D8 is orthogonal.
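Orthogonality of D4 can be confirmed numerically:

```python
import numpy as np

# Daubechies D4 coefficients c0, ..., c3.
c0, c1, c2, c3 = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
                           3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))

D4 = np.array([[c0, c1, c2, c3],
               [c3, -c2, c1, -c0],
               [c2, c3, c0, c1],
               [c1, -c0, c3, -c2]])
print(np.allclose(D4 @ D4.T, np.eye(4)))    # D4 D4^T = I_4
```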
Problem 80. Let A, B be n × n matrices over C. Assume that A and A + B
are invertible. Then (A + B)^(-1) = A^(-1) - A^(-1)B(A + B)^(-1). Apply the identity
to the Pauli matrices A = σ1, B = σ3.

Solution 80. We have

A + B = σ1 + σ3 = ( 1  1 )
                  ( 1 -1 )

and A^(-1) = σ1. Thus

(A + B)^(-1) = (1/2) ( 1  1 )
                     ( 1 -1 ).
Problem 81. Let A be a positive definite n × n matrix over R. Let x ∈ Rⁿ.
Show that A + xx^T is positive definite.

Solution 81. For all vectors y ∈ Rⁿ, y ≠ 0, we have y^T A y > 0. We have

y^T x x^T y = (x^T y)^T (x^T y) = (x^T y)² ≥ 0

and therefore we have y^T (A + xx^T) y > 0 for all y ∈ Rⁿ, y ≠ 0.
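A numerical illustration with an arbitrary positive definite A and vector x:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # positive definite (eigenvalues 1, 3)
x = np.array([1.0, -3.0])

B = A + np.outer(x, x)
print(np.all(np.linalg.eigvalsh(B) > 0))   # A + x x^T is positive definite
```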
Supplementary Problems
Problem 1. (i) Are the four 2 × 2 nonnormal matrices
linearly independent?
(ii) Show that the nine 3 × 3 matrices
form a basis in the vector space of the 3 × 3 matrices. This basis is called the
spiral basis (and can be extended to any dimension). Which of these matrices
are nonnormal?
(iii) Do the eight 3 × 3 skew-hermitian matrices τ1, ..., τ8
together with the 3 × 3 unit matrix form an orthonormal basis in the vector space
of 3 × 3 matrices over the complex numbers? Find all pairwise scalar products
(A, B) := tr(AB*). Discuss.
Problem 2. (i) One can describe a tetrahedron in the vector space R³ by
specifying vectors v1, v2, v3, v4 normal to its faces with lengths equal to the
faces' areas. Give an example.
(ii) Consider a tetrahedron defined by the triple of linearly independent vectors
v_j ∈ R³, j = 1, 2, 3. Show that the normal vectors to the faces defined by two
of these vectors, normalized to the area of the face, are given by

n1 = (1/2) v2 × v3,    n2 = (1/2) v3 × v1,    n3 = (1/2) v1 × v2.

Show that

v1 = (2/(3V)) n2 × n3,    v2 = (2/(3V)) n3 × n1,    v3 = (2/(3V)) n1 × n2

where V is the volume of the tetrahedron given by

V = (1/6) (v1 × v2) · v3.

Apply it to normalized vectors which form an orthonormal basis in R³.
Problem 3. (i) Show that any rank one positive semidefinite n × n matrix M
can be written as M = vv*, where v is some (column) vector in Cⁿ.
(ii) Let u, v be normalized (column) vectors in Cⁿ. Let A be an n × n positive
semidefinite matrix over C. Show that (u*v)(u*Av) ≥ 0.
(iii) Show that if hermitian matrices S and T are positive semidefinite and
commute (ST = TS), then their product ST is also positive semidefinite. We
have to show that

(STv)*v ≥ 0

for all v ∈ Cⁿ. Note that if S = 0_n, then obviously (STv)*v = 0. So assume
that S ≠ 0_n, i.e. ||S|| > 0.
Problem 4. Show that the symmetric matrices
are positive semidefinite. Extend to the n × n case.
Problem 5. Let A be a hermitian n × n matrix and A ≠ 0_n. Show that
A^m ≠ 0_n for all m ∈ N.
Problem 6. Let A ∈ R^(m×n) be a nonzero matrix. Let x ∈ Rⁿ, y ∈ Rᵐ be
vectors such that c := y^T A x ≠ 0. Show that the matrix B := A - c^(-1) A x y^T A
has rank exactly one less than the rank of A.
Problem 7. (i) Consider the two-dimensional Euclidean space and let e1, e2
be the standard basis. Consider the seven vectors

v0 = 0,
v1 = (1/2)e1 + (√3/2)e2,     v2 = -(1/2)e1 - (√3/2)e2,
v3 = -(1/2)e1 + (√3/2)e2,    v4 = (1/2)e1 - (√3/2)e2,
v5 = -e1,                    v6 = e1.

Find the distances between the vectors and select the vector pairs with the
shortest distance.
(ii) Consider the vectors in R²

v_k = ( cos((k-1)π/4) )         k = 1, 3, 5, 7
      ( sin((k-1)π/4) ),

and

v_k = √2 ( cos((k-1)π/4) )      k = 2, 4, 6, 8
         ( sin((k-1)π/4) ),

which play a role for the lattice Boltzmann model. Find the angles between the
vectors.
Problem 8. Show that the 4 × 4 matrices
are projection matrices. Describe the subspaces of R⁴ onto which they project.
Problem 9. (i) The vectors u, v, w point to the vertices of an equilateral
triangle. Find the area of this triangle.
(ii) Let u, v be (column) vectors in Rⁿ. Show that

A = √|(u^T u)(v^T v) - (u^T v)²|

calculates the area spanned by the vectors u and v.
Problem 10. Assume that two planes in R³ given by

k x1 + ℓ x2 + m x3 + n = 0,     k' x1 + ℓ' x2 + m' x3 + n' = 0

are the mirror images of each other with respect to a third plane in R³ given by

a x1 + b x2 + c x3 + d = 0.

Show that (r² = a² + b² + c²)

( k' )              ( a² - b² - c²       2ab             2ac       ) ( k )
( ℓ' ) = -(1/r²)    (      2ab      -a² + b² - c²        2bc       ) ( ℓ )
( m' )              (      2ac           2bc        -a² - b² + c²  ) ( m ).
Problem 11. (i) Let A be a hermitian n × n matrix over C with A² = I_n.
Find the matrix (A^(-1) + i I_n)^(-1).
(ii) Let B be an n × n matrix over C. Find the condition on B such that

(I_n + iB)(I_n - iB) = I_n.

This means that (I_n - iB) is the inverse of (I_n + iB).
Problem 12. Consider the vector space R^d. Suppose that {v_j}_(j=1)^d and
{w_k}_(k=1)^d are two bases in R^d. Then there is an invertible d × d matrix
T = (t_(jk)) such that

v_j = Σ_(k=1)^d t_(jk) w_k,      j = 1, ..., d.

The bases {v_j}_(j=1)^d and {w_k}_(k=1)^d are said to have the same orientation if det(T) >
0. If det(T) < 0, then they have the opposite orientation. Consider the two bases
in R²

{ v1 = ( 1 ),  v2 = ( 0 ) },     { w1 = (1/√2) ( 1 ),  w2 = (1/√2) (  1 ) }.
       ( 0 )        ( 1 )                      ( 1 )                ( -1 )

Find the orientation.
Problem 13. (i) Let ω > 0 (fixed) and t ≥ 0. Show that the 4 × 4 matrix

S(t) = (1/4) ( 1 + 3e^(-ωt)   1 - e^(-ωt)    1 - e^(-ωt)    1 - e^(-ωt) )
             ( 1 - e^(-ωt)    1 + 3e^(-ωt)   1 - e^(-ωt)    1 - e^(-ωt) )
             ( 1 - e^(-ωt)    1 - e^(-ωt)    1 + 3e^(-ωt)   1 - e^(-ωt) )
             ( 1 - e^(-ωt)    1 - e^(-ωt)    1 - e^(-ωt)    1 + 3e^(-ωt) )

is a stochastic matrix (probabilistic matrix). Find the eigenvalues of S(t). Let

π = (1/2)(1  0  0  1).

Find the vectors πS(t), (πS(t))S(t) etc. Is S(t1 + t2) = S(t1)S(t2)?
(ii) Let ε ∈ [0, 1]. Consider the stochastic matrix
Let n = 1, 2, .... Show that
Problem 14. Can one find 4 × 4 matrices A and B such that

AB = ( 0 0 0 1 )         BA = ( 1 0 0 1 )
     ( 0 0 1 0 )              ( 0 0 0 0 )
     ( 0 1 0 0 ),             ( 0 0 0 0 )
     ( 1 0 0 0 )              ( 1 0 0 1 )?

Of course at least one of the matrices A and B must be singular. The underlying
field F could be R or char(F) = 2.
Problem 15. (i) Find all 2 × 2 matrices A such that A² ≠ 0₂ but A³ = 0₂.
(ii) Find all non-hermitian 2 × 2 matrices B such that BB* = I₂.
(iii) Find all hermitian 2 × 2 matrices H such that H² = 0₂.
(iv) Find all 2 × 2 matrices C over R such that C^T C + CC^T = I₂, C² = 0₂.
(v) Find all 2 × 2 matrices A1, A2, A3 such that A1A2 = A2A3, A3A1 = A2A3.
(vi) Find all 2 × 2 matrices A and B such that A² = B² = I₂, AB = -I₂.
Problem 16. Let A be an m × n matrix and B be a p × q matrix. Then the
direct sum of A and B, denoted by A ⊕ B, is the (m + p) × (n + q) matrix defined
by

A ⊕ B := ( A  0 )
         ( 0  B ).

Let A1, A2 be m × m matrices and B1, B2 be n × n matrices. Show that

(A1 ⊕ B1)(A2 ⊕ B2) = (A1A2) ⊕ (B1B2).

Problem 17. Consider the symmetric matrix A and the normalized vector v.
Calculate s_j = v^T A^j v for j = 1, 2, 3. Can A be reconstructed from s1, s2, s3
and the information that A is real symmetric?
Problem 18. The standard simplex Δ_n is defined by the set in R^n

Δ_n := { (x_1, ..., x_n) : x_j ≥ 0, sum_{j=1}^n x_j = 1 }.

Consider n affinely independent points B_1, ..., B_n ∈ Δ_n. They span an (n-1)-simplex denoted by Δ = Con(B_1, ..., B_n), that is

Δ = Con(B_1, ..., B_n) = { λ_1 B_1 + ... + λ_n B_n : sum_j λ_j = 1, λ_1, ..., λ_n ≥ 0 }.

The set corresponds to an invertible n x n matrix [Δ] whose columns are B_1, ..., B_n. Conversely, consider the matrix C = (b_jk), where C_k = (b_1k, ..., b_nk)^T (k = 1, ..., n). If det(C) ≠ 0 and the sum of the entries in each column is 1, then the matrix C corresponds to an (n-1)-simplex Con(B_1, ..., B_n) in Δ_n. Let C_1 and C_2 be n x n matrices with nonnegative entries and all the columns of each matrix add up to 1.
(i) Show that C_1 C_2 and C_2 C_1 are also such matrices.
(ii) Are the n^2 x n^2 matrices C_1 ⊗ C_2, C_2 ⊗ C_1 such matrices?
Problem 19. (i) Let S, T be n x n matrices over C with S^2 = I_n, (TS)^2 = I_n. Thus S and T are invertible. Show that STS^{-1} = T^{-1}, ST^{-1}S = T.
(ii) Let A be an invertible n x n matrix over C and B be an n x n matrix over C. We define the n x n matrix D := A^{-1}BA. Show that D^n = A^{-1}B^n A applying AA^{-1} = I_n.
Problem 20. Find all invertible 2 x 2 matrices S such that

Problem 21. (i) Consider the 3 x 2 matrix A.
Can one find 2 x 3 matrices S such that SA = I_2?
(ii) Consider the 3 x 2 matrix X.
Can one find 2 x 3 matrices Y such that YX = I_2?
Problem 22. Let x ∈ R^3 and × be the vector product. Find all solutions of
Problem 23. (i) Let A be a 3 x 3 matrix over R and u, v ∈ R^3. Find the conditions on A, u, v such that

A(u × v) = (Au) × (Av).

(ii) Consider the three vectors v_1, v_2, v_3 in R^3. Show that v_1 · (v_2 × v_3) = 0 if v_1, v_2, v_3 are linearly dependent.
(iii) Let u, v ∈ R^3. Show that (u × v)·(u × v) = (u·u)(v·v) - (u·v)^2.
(iv) Let a, b, c, d be vectors in R^3. Show that (Lagrange identity)

(a × b)·(c × d) = det ( a·c  a·d )
                      ( b·c  b·d ).

Problem 24.
(i) Consider four nonzero vectors v_1, v_2, v_3, v_4 in R^3. Let

w := (v_1 × v_2) × (v_3 × v_4) ≠ 0.

Find w × (v_1 × v_2) and w × (v_3 × v_4). Discuss. Note that v_1 and v_2 span a plane in R^3 and v_3 and v_4 span a plane in R^3.
(ii) Let u, v, w be vectors in R^3. Show that (Jacobi identity)

u × (v × w) + w × (u × v) + v × (w × u) = 0.

(iii) Consider the normalized column vectors

v_1 = ( 1  0  0 )^T,   v_2 = (1/√2)( 1  0  1 )^T.

Do the three vectors v_1, v_2, v_1 × v_2 form an orthonormal basis in R^3?
(iv) Let v_1, v_2, v_3, v_4 be column vectors in R^3. Show that

(v_1 × v_2)^T (v_3 × v_4) = (v_1^T v_3)(v_2^T v_4) - (v_2^T v_3)(v_1^T v_4).

A special case with v_3 = v_1, v_4 = v_2 is

(v_1 × v_2)^T (v_1 × v_2) = (v_1^T v_1)(v_2^T v_2) - (v_1^T v_2)(v_1^T v_2).
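The Jacobi identity of (ii) and the Binet-Cauchy identity of (iv) can be spot-checked numerically with random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v, w = rng.standard_normal((3, 3))

# Jacobi identity for the cross product
jac = np.cross(u, np.cross(v, w)) + np.cross(w, np.cross(u, v)) \
    + np.cross(v, np.cross(w, u))
assert np.allclose(jac, 0.0)

a, b, c, d = rng.standard_normal((4, 3))
# Binet-Cauchy identity (a x b).(c x d) = (a.c)(b.d) - (b.c)(a.d)
lhs = np.cross(a, b) @ np.cross(c, d)
rhs = (a @ c) * (b @ d) - (b @ c) * (a @ d)
assert np.allclose(lhs, rhs)
```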
Problem 25. Consider the coordinates

p_1 = (x_1, y_1, z_1)^T,   p_2 = (x_2, y_2, z_2)^T,   p_3 = (x_3, y_3, z_3)^T

with p_1 ≠ p_2, p_2 ≠ p_3, p_3 ≠ p_1. We form the vectors

v_21 = ( x_2 - x_1 )        v_31 = ( x_3 - x_1 )
       ( y_2 - y_1 ),              ( y_3 - y_1 )
       ( z_2 - z_1 )               ( z_3 - z_1 ).

What does ||v_21 × v_31|| calculate? Apply it to p_1 = (0, 0, 0)^T, p_2 = (1, 0, 1)^T, p_3 = (1, 1, 1)^T.
Problem 26. Consider the 4 x 4 Haar matrix

K = (1/2) ( 1    1    1    1  )
          ( 1    1   -1   -1  )
          ( √2  -√2   0    0  )
          ( 0    0    √2  -√2 ).

Find all 4 x 4 hermitian matrices H such that K H K^T = H.
Problem 27. Consider the real symmetric 3 x 3 matrix

A = ( -1/2   -√3/6   √6/3 )
    ( -√3/6  -5/6   -√2/3 )
    (  √6/3  -√2/3   1/3  ).

Show that A^T = A^{-1} by showing that the columns of the matrix are normalized and pairwise orthogonal.
Problem 28. Let X ∈ R^{n x n}. Show that X can be written as

X = A + S + c I_n

where A is antisymmetric (A^T = -A), S is symmetric (S^T = S) with tr(S) = 0 and c ∈ R.
Problem 29. Let c_j > 0 for j = 1, ..., n. Show that the n x n matrices

( √(c_j c_k) / (c_j + c_k) ),   ( (1/c_j + 1/c_k)^{-1} )

(j, k = 1, ..., n) are positive definite.
Problem 30. Let a, b ∈ R and c^2 := a^2 + b^2 with c^2 > 0. Consider the 2 x 2 matrix

M(a, b) = (1/c) ( a   b )
                ( b  -a ).

Show that M(a,b)M(a,b) = I_2. Thus M(a,b) can be considered as a square root of I_2.
Problem 31. Show that a nonzero 3 x 3 matrix A over R which satisfies

A^2 A^T + A^T A^2 = 2A,   A A^T A = 2A,   A^3 = 0_3

is given by

A = √2 ( 0  0  0 )         A^T = √2 ( 0  1  0 )
       ( 1  0  0 ),                 ( 0  0  1 )
       ( 0  1  0 )                  ( 0  0  0 ).
Problem 32. Let x be a normalized column vector in R^n, i.e. x^T x = 1. A matrix T is called a Householder matrix if T := I_n - 2xx^T. Show that

T^2 = (I_n - 2xx^T)(I_n - 2xx^T) = I_n - 4xx^T + 4x(x^T x)x^T = I_n.
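A quick numerical verification of the Householder property T^2 = I_n:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                  # normalize so that x^T x = 1

T = np.eye(n) - 2.0 * np.outer(x, x)    # Householder matrix
assert np.allclose(T @ T, np.eye(n))    # T^2 = I_n
assert np.allclose(T, T.T)              # T is also symmetric, hence orthogonal
```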
Problem 33. Let A be an n x n matrix over R. Show that there exist nonnull vectors x_1, x_2 in R^n such that

x_1^T A x_1 / (x_1^T x_1)  ≤  x^T A x / (x^T x)  ≤  x_2^T A x_2 / (x_2^T x_2)

for every nonnull vector x in R^n.
Problem 34. A generalized Kronecker delta can be defined as follows

δ_{I,J} := {  1 if J = (j_1, ..., j_r) is an even permutation of I = (i_1, ..., i_r)
             -1 if J is an odd permutation of I
              0 if J is not a permutation of I.

Show that δ_{126,621} = -1, δ_{126,651} = 0, δ_{125,512} = 1. Give a computer algebra implementation.
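A minimal Python implementation (for distinct indices) rather than a full computer algebra version:

```python
def gen_delta(I, J):
    """Generalized Kronecker delta: +1/-1 for an even/odd permutation, else 0."""
    if sorted(I) != sorted(J) or len(set(I)) != len(I):
        return 0
    J = list(J)
    sign = 1
    for pos, item in enumerate(I):
        k = J.index(item)            # locate the entry of I in J
        if k != pos:                 # bring it into place by one transposition
            J[pos], J[k] = J[k], J[pos]
            sign = -sign
    return sign

assert gen_delta((1, 2, 6), (6, 2, 1)) == -1
assert gen_delta((1, 2, 6), (6, 5, 1)) == 0
assert gen_delta((1, 2, 5), (5, 1, 2)) == 1
```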
Problem 35. (i) Let j_0, j_1, k_0, k_1 ∈ {0, 1}. Consider the tensor T^{j_0 j_1}_{k_0 k_1}. Give a 1-1 map that maps T^{j_0 j_1}_{k_0 k_1} to a 2^2 x 2^2 matrix S = (S_{ℓ_0 ℓ_1}) with ℓ_0, ℓ_1 = 0, 1, 2, 3.
(ii) Let j_0, j_1, j_2, k_0, k_1, k_2 ∈ {0, 1}. Consider the tensor T^{j_0 j_1 j_2}_{k_0 k_1 k_2}. Give a 1-1 map that maps T^{j_0 j_1 j_2}_{k_0 k_1 k_2} to a 2^3 x 2^3 matrix S = (S_{ℓ_0 ℓ_1}) with ℓ_0, ℓ_1 = 0, 1, ..., 2^3 - 1.
(iii) Let n ≥ 2 and j_0, j_1, ..., j_{n-1}, k_0, k_1, ..., k_{n-1} ∈ {0, 1}. Consider the tensor T^{j_0 j_1 ... j_{n-1}}_{k_0 k_1 ... k_{n-1}}. Give a 1-1 map that maps the tensor to a 2^n x 2^n matrix S = (S_{ℓ_0 ℓ_1}) with ℓ_0, ℓ_1 = 0, 1, ..., 2^n - 1. Give a SymbolicC++ implementation. The user provides n.
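For numeric tensors the 1-1 map is the binary (row-major) index map ℓ_0 = j_0 2^{n-1} + ... + j_{n-1}, ℓ_1 = k_0 2^{n-1} + ... + k_{n-1}; in NumPy it is simply a reshape. A sketch for n = 3:

```python
import numpy as np

n = 3
# random tensor T[j_0,...,j_{n-1}, k_0,...,k_{n-1}] with binary indices
T = np.random.default_rng(2).standard_normal((2,) * (2 * n))

# row-major reshape realizes the 1-1 map to a 2^n x 2^n matrix
S = T.reshape(2**n, 2**n)

j, k = (1, 0, 1), (0, 1, 1)
l0 = int("".join(map(str, j)), 2)    # binary index 101 -> 5
l1 = int("".join(map(str, k)), 2)    # binary index 011 -> 3
assert S[l0, l1] == T[j + k]
```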
Chapter 2

Linear Equations

Let A be an m x n matrix over a field F. Let b_1, ..., b_m be elements of the field F. The system of equations

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...........................................
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

is called a system of linear equations. We also write

Ax = b

where x and b are considered as column vectors. The system is said to be homogeneous if all the numbers b_1, ..., b_m are equal to 0. The number n is called the number of unknowns, and m is called the number of equations. The system of homogeneous equations also admits the trivial solution x_1 = x_2 = ... = x_n = 0. A homogeneous system of m linear equations in n unknowns with n > m admits a non-trivial solution. An underdetermined linear system is either inconsistent or has infinitely many solutions.

An important special case is m = n. Then for the system of linear equations Ax = b we investigate the cases where A^{-1} exists and where A^{-1} does not exist. If A^{-1} exists we can write the solution as x = A^{-1}b.

If m > n, then we have an overdetermined system and it can happen that no solution exists. One solves these problems in the least-squares sense.
Problem 1. Let

A = ( 1   1 ),   b = ( 1 )
    ( 2  -1 )        ( 5 ).

Find the solutions of the system of linear equations Ax = b.

Solution 1. Since A is invertible (det(A) = -3) we have the unique solution x = A^{-1}b. From the equations x_1 + x_2 = 1, 2x_1 - x_2 = 5 we obtain by addition of the two equations 3x_1 = 6 and thus x_1 = 2. It follows that x_2 = -1.
Problem 2. Let α, b_1, b_2 ∈ R. Solve the system of linear equations

( cos(α)  -sin(α) ) ( x_1 ) = ( b_1 )
( sin(α)   cos(α) ) ( x_2 )   ( b_2 ).

Solution 2. The inverse of the matrix on the left-hand side exists for all α and is found by the substitution α → -α. Thus the solution is

( x_1 ) = (  cos(α)  sin(α) ) ( b_1 )
( x_2 )   ( -sin(α)  cos(α) ) ( b_2 ).

Problem 3.
(i) Let ε ∈ R and

A = ( 1  1 ),   b = ( 3 )
    ( 2  2 )        ( ε ).

Find the condition on ε so that there is a solution of Ax = b.
(ii) Consider the system of three linear equations

( 1  1   1 ) ( x_1 )   ( 1   )
( 1  2   4 ) ( x_2 ) = ( ε   )
( 1  4  10 ) ( x_3 )   ( ε^2 )

with ε ∈ R. Find the condition on ε so that there is a solution.

Solution 3. (i) From Ax = b we obtain x_1 + x_2 = 3, 2x_1 + 2x_2 = ε. Multiplying the first equation by 2 and then subtracting from the second equation yields ε = 6. Thus if ε ≠ 6 there is no solution. If ε = 6 the line x_1 + x_2 = 3 is the solution.
(ii) The matrix has no inverse, i.e. the determinant is equal to 0. Consider the three equations

x_1 + x_2 + x_3 = 1,   x_1 + 2x_2 + 4x_3 = ε,   x_1 + 4x_2 + 10x_3 = ε^2.

Subtracting the first equation from the second and third yields

x_2 + 3x_3 = ε - 1,   3x_2 + 9x_3 = ε^2 - 1.

Inserting x_2 from the first equation into the second equation yields ε^2 - 3ε + 2 = 0 with the solutions ε = 1 and ε = 2.
Problem 4. Find all solutions of the system of linear equations

(  cos(θ)  -sin(θ) ) ( x_1 ) = ( x_1 ),   θ ∈ R
( -sin(θ)  -cos(θ) ) ( x_2 )   ( x_2 )

with x ≠ 0. What type of equation is this?

Solution 4. We obtain

( x_1 ) = (  cos(θ/2) )
( x_2 )   ( -sin(θ/2) )

using the identities sin(θ) = 2 sin(θ/2) cos(θ/2), cos(θ) = 2 cos^2(θ/2) - 1. This is an eigenvalue equation with eigenvalue 1.
Problem 5. Find all solutions of the system of linear equations

Solution 5. We obtain 4x_1 - 2x_2 - 4x_3 = 0, -2x_1 + x_2 + 2x_3 = 0, -4x_1 + 2x_2 + 4x_3 = 0. These equations are pairwise linearly dependent. We have 2x_1 - x_2 - 2x_3 = 0. This equation defines a plane. We have an eigenvalue equation with eigenvalue 1 if at least one of the x_j is nonzero.
Problem 6. Let A ∈ R^{n x n} and x, b ∈ R^n. Consider the linear equation Ax = b. Show that it can be written as x = Tx, i.e. find Tx.

Solution 6. Let C = I_n - A. Then we can write x = Cx + b. Thus x = Tx with Tx := Cx + b.
Problem 7. If the system of linear equations Ax = b admits no solution we call the equations inconsistent. If there is a solution, the equations are called consistent. Let Ax = b be a system of m linear equations in n unknowns and suppose that the rank of A is m. Show that in this case Ax = b is consistent.

Solution 7. Since [A|b] is an m x (n + 1) matrix we have m ≥ rank[A|b]. We have rank[A|b] ≥ rank(A) and by assumption rank(A) = m. Thus

m ≥ rank[A|b] ≥ rank(A) = m.

Hence rank[A|b] = rank(A) = m and therefore the system of equations Ax = b is consistent.
Problem 8. Find all solutions of the linear system of three equations and four unknowns

x_1 + 2x_2 - 4x_3 + x_4 = 3,
2x_1 - 3x_2 + ... + 5x_4 = -4,
7x_1 - 10x_3 + 13x_4 = 0.

Solution 8. We obtain

x_1 = (10/7)t + 13/28,   x_2 = (9/7)t + 39/28,   x_3 = t,   x_4 = -1/4,   t ∈ R.

Problem 9.
(i) Solve the linear equation

( x_1  x_2  x_3 ) ( 1  2  8 ) = ( x_1  x_2  x_3 ).
                  ( 2  3  1 )
                  ( 1  2  3 )

(ii) Find x, y ∈ R such that

(  2  -3 ) = ( 3  0 ) ( 2/3  x )
( -1   2 )   ( 0  1 ) (  y   2 ).

Solution 9. (i) We obtain

( x_1  x_2  x_3 ) ( 0  2  8 ) = ( 0  0  0 ).
                  ( 2  2  1 )
                  ( 1  2  2 )

The rank of the matrix on the left-hand side is 3 and thus the inverse exists. Hence the solution is x_1 = x_2 = x_3 = 0.
(ii) We obtain

(  2  -3 ) = ( 2  3x )
( -1   2 )   ( y   2 ).

Thus x = -1, y = -1.
Problem 10. Show that the curve fitting problem

j   :   0      1      2     3     4
t_j :  -1.0   -0.5    0.0   0.5   1.0
y_j :   1.0    0.5    0.0   0.5   2.0

by a quadratic polynomial of the form p(t) = a_2 t^2 + a_1 t + a_0 leads to an overdetermined linear system.

Solution 10. From the interpolation conditions p(t_j) = y_j with j = 0, 1, ..., 4 we obtain the overdetermined linear system

( 1  t_0  t_0^2 )               ( y_0 )
( 1  t_1  t_1^2 ) ( a_0 )       ( y_1 )
( 1  t_2  t_2^2 ) ( a_1 )   =   ( y_2 )
( 1  t_3  t_3^2 ) ( a_2 )       ( y_3 )
( 1  t_4  t_4^2 )               ( y_4 ).
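The overdetermined system can be solved in the least-squares sense directly; a NumPy sketch using the data of the table:

```python
import numpy as np

t = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y = np.array([ 1.0,  0.5, 0.0, 0.5, 2.0])

# 5 x 3 matrix with columns 1, t, t^2; unknowns a_0, a_1, a_2
M = np.vander(t, 3, increasing=True)
a, res, rank, sv = np.linalg.lstsq(M, y, rcond=None)
print(a)   # least-squares coefficients a_0, a_1, a_2
```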
Problem 11. Consider the overdetermined linear system Ax = b. Find an x̂ such that

||Ax̂ - b||_2 = min_x ||Ax - b||_2 = min_x ||r(x)||_2

with the residual vector r(x) := b - Ax, where || . ||_2 denotes the Euclidean norm.

Solution 11. From

||r(x)||_2^2 = r^T r = (b - Ax)^T (b - Ax) = b^T b - 2x^T A^T b + x^T A^T A x,

where we used that x^T A^T b = b^T A x, and the necessary condition

∇||r(x)||_2^2 |_{x = x̂} = 0

we obtain A^T A x̂ - A^T b = 0. This system is called the normal equations. We can also write this system as

A^T (b - Ax̂) = A^T r(x̂) = 0.

This justifies the name normal equations.
Problem 12. Consider the overdetermined linear system Ax = b with

A = ( 1   1 )        ( 444 )
    ( 1   2 )        ( 458 )
    ( 1   3 )        ( 478 )
    ( 1   4 )        ( 493 )
    ( 1   5 ),   b = ( 506 )
    ( 1   6 )        ( 516 )
    ( 1   7 )        ( 523 )
    ( 1   8 )        ( 531 )
    ( 1   9 )        ( 543 )
    ( 1  10 )        ( 571 ).

Solve this linear system in the least squares sense (see the previous problem) by the normal equations method.

Solution 12. From the normal equations A^T A x = A^T b we obtain

( 10   55 ) ( x_1 ) = (  5063 )
( 55  385 ) ( x_2 )   ( 28898 )

with det(A^T A) ≠ 0. The solution is approximately

( x_1 ) = ( 436.2   )
( x_2 )   ( 12.7455 ).
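A numerical check of the normal equations computation:

```python
import numpy as np

A = np.column_stack([np.ones(10), np.arange(1, 11)])
b = np.array([444, 458, 478, 493, 506, 516, 523, 531, 543, 571], float)

ATA = A.T @ A            # [[10, 55], [55, 385]]
ATb = A.T @ b            # [5063, 28898]
x = np.linalg.solve(ATA, ATb)
print(x)                 # approximately [436.2, 12.7455]
```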
Problem 13. An underdetermined linear system is either inconsistent or has infinitely many solutions. Consider the underdetermined linear system Hx = y, where H is an n x m matrix with m > n and

x = ( x_1  x_2  ...  x_m )^T,   y = ( y_1  y_2  ...  y_n )^T.

Assume that Hx = y has infinitely many solutions. Let Q be the n x m matrix

Q = ( 1  0  0  ...  0  ...  0 )
    ( 0  1  0  ...  0  ...  0 )
    ( ...                     )
    ( 0  0  ...  1  ...  0  0 ).

We define x̃ := Qx. Find

min ||Qx - y||_2^2

subject to the constraint ||Hx - y||_2^2 = 0. Assume that (λH^T H + Q^T Q)^{-1} exists for all λ > 0. Apply the Lagrange multiplier method.

Solution 13. We have

V(x) = ||Qx - y||_2^2 + λ||Hx - y||_2^2

where λ is the Lagrange multiplier. Thus V(x) → min if λ is sufficiently large. The derivative of V(x) with respect to the unknown x is

(∂/∂x) V(x) = 2λH^T (Hx - y) + 2Q^T (Qx - y).

Thus

(λH^T H + Q^T Q) x = (λH^T + Q^T) y.

It follows that x = (λH^T H + Q^T Q)^{-1} (λH + Q)^T y.
Problem 14. Show that solving the system of nonlinear equations with the unknowns x_1, x_2, x_3, x_4

(x_1 - 1)^2 + (x_2 - 2)^2 + x_3^2 = a^2 (x_4 - b_1)^2
(x_1 - 2)^2 + x_2^2 + (x_3 - 2)^2 = a^2 (x_4 - b_2)^2
(x_1 - 1)^2 + (x_2 - 1)^2 + (x_3 - 1)^2 = a^2 (x_4 - b_3)^2
(x_1 - 2)^2 + (x_2 - 1)^2 + x_3^2 = a^2 (x_4 - b_4)^2

leads to a linear underdetermined system. Solve this system with respect to x_1, x_2 and x_3.

Solution 14. Expanding all the squares and rearranging so that the linear terms are on the left-hand side yields

2x_1 + 4x_2 - 2a^2 b_1 x_4 = 5 - a^2 b_1^2 + x_1^2 + x_2^2 + x_3^2 - a^2 x_4^2
4x_1 + 4x_3 - 2a^2 b_2 x_4 = 8 - a^2 b_2^2 + x_1^2 + x_2^2 + x_3^2 - a^2 x_4^2
2x_1 + 2x_2 + 2x_3 - 2a^2 b_3 x_4 = 3 - a^2 b_3^2 + x_1^2 + x_2^2 + x_3^2 - a^2 x_4^2
4x_1 + 2x_2 - 2a^2 b_4 x_4 = 5 - a^2 b_4^2 + x_1^2 + x_2^2 + x_3^2 - a^2 x_4^2.

The quadratic terms in all the equations are the same. Thus by subtracting the first equation from each of the other three, we obtain an underdetermined system of three linear equations

2x_1 - 4x_2 + 4x_3 - 2a^2 (b_2 - b_1) x_4 = 3 + a^2 (b_1^2 - b_2^2)
-2x_2 + 2x_3 - 2a^2 (b_3 - b_1) x_4 = -2 + a^2 (b_1^2 - b_3^2)
2x_1 - 2x_2 - 2a^2 (b_4 - b_1) x_4 = a^2 (b_1^2 - b_4^2).

Solving these equations we obtain

x_1 = a^2 (b_1 + b_2 - 2b_3) x_4 + (a^2/2)(-b_1^2 - b_2^2 + 2b_3^2) + 7/2
x_2 = a^2 (2b_1 + b_2 - 2b_3 - b_4) x_4 + (a^2/2)(-2b_1^2 - b_2^2 + 2b_3^2 + b_4^2) + 7/2
x_3 = a^2 (b_1 + b_2 - b_3 - b_4) x_4 + (a^2/2)(-b_1^2 - b_2^2 + b_3^2 + b_4^2) + 5/2.

Inserting these solutions into one of the nonlinear equations provides a quadratic equation for x_4. Such a system of equations plays a role in the Global Positioning System (GPS), where x_4 plays the role of time.
Problem 15. Let A be an m x n matrix over R. We define

N_A := { x ∈ R^n : Ax = 0 }.

N_A is called the kernel of A and ν(A) := dim(N_A) is called the nullity of A. If N_A only contains the zero vector, then ν(A) = 0.
(i) Let

A = ( 1   2  -1 )
    ( 2  -1   3 ).

Find N_A and ν(A).
(ii) Let

A = (  2  -1   3 )
    (  4  -2   6 )
    ( -6   3  -9 ).

Find N_A and ν(A).

Solution 15. (i) From

( 1   2  -1 ) ( x_1 )   ( 0 )
( 2  -1   3 ) ( x_2 ) = ( 0 )
              ( x_3 )

we find the system of linear equations x_1 + 2x_2 - x_3 = 0, 2x_1 - x_2 + 3x_3 = 0. Eliminating x_3 yields x_1 = -x_2. It follows that x_3 = -x_1. Thus N_A is spanned by the vector (-1  1  1)^T. Therefore ν(A) = 1.
(ii) From

(  2  -1   3 ) ( x_1 )   ( 0 )
(  4  -2   6 ) ( x_2 ) = ( 0 )
( -6   3  -9 ) ( x_3 )   ( 0 )

we find the system of linear equations

2x_1 - x_2 + 3x_3 = 0,   4x_1 - 2x_2 + 6x_3 = 0,   -6x_1 + 3x_2 - 9x_3 = 0.

The three equations are the same. Thus from 2x_1 - x_2 + 3x_3 = 0 we find that N_A is the plane defined by this equation, for example with basis (1  2  0)^T, (0  3  1)^T, and ν(A) = 2.
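The kernels and nullities can be computed numerically from the singular value decomposition; a small sketch:

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Orthonormal basis of the kernel of A via the SVD."""
    U, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return Vt[rank:].T          # columns span N_A

A1 = np.array([[1, 2, -1], [2, -1, 3]], float)
A2 = np.array([[2, -1, 3], [4, -2, 6], [-6, 3, -9]], float)
print(nullspace(A1).shape[1], nullspace(A2).shape[1])   # nullities 1 and 2
```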
Problem 16. (i) Let x_1, x_2, x_3 ∈ Z. Find all solutions of the system of linear equations

7x_1 + 5x_2 - 3x_3 = 3,   17x_1 + 10x_2 - 13x_3 = -42.

(ii) Find all positive solutions.

Solution 16. (i) Eliminating x_2 yields 3x_1 - 5x_3 = -58 or 3x_1 ≡ -58 (mod 5). The solution is x_1 ≡ 4 (mod 5) or x_1 = 4 + 5s, s ∈ Z. Thus using 3x_1 - 5x_3 = -58 we obtain x_3 = 14 + 3s and using the first equation we find x_2 = 10 - 4s.
(ii) For x_2 positive we have s ≤ 2, for x_1 positive we have 0 ≤ s, and x_3 remains positive. Thus the solution set for positive x_1, x_2, x_3 is (4, 10, 14), (9, 6, 17), (14, 2, 20), corresponding to s = 0, 1, 2.
Problem 17. Consider the inhomogeneous linear integral equation

φ(x) = α_1(x) ∫_0^1 β_1(y) φ(y) dy + α_2(x) ∫_0^1 β_2(y) φ(y) dy + f(x)    (1)

for the unknown function φ, f(x) = x and

α_1(x) = x,   α_2(x) = √x,   β_1(y) = y,   β_2(y) = √y.

Thus α_1 and α_2 are continuous in [0, 1] and likewise for β_1 and β_2. We define

B_μ := ∫_0^1 β_μ(y) φ(y) dy

and

a_μν := ∫_0^1 β_μ(y) α_ν(y) dy,   b_μ := ∫_0^1 β_μ(y) f(y) dy

where μ, ν = 1, 2. Show that the integral equation can be cast into a system of linear equations for B_1 and B_2. Solve this system of linear equations and thus find a solution of the integral equation.
Solution 17. Using B_1 and B_2 equation (1) can be written as

φ(x) = α_1(x) B_1 + α_2(x) B_2 + f(x)    (2)

or

φ(y) = α_1(y) B_1 + α_2(y) B_2 + f(y).    (3)

We insert (2) into the left-hand side and (3) into the right-hand side of

φ(x) = α_1(x) ∫_0^1 β_1(y) φ(y) dy + α_2(x) ∫_0^1 β_2(y) φ(y) dy + f(x).

Using a_μν and b_μ we find

α_1(x) B_1 + α_2(x) B_2 = α_1(x) B_1 a_11 + α_1(x) B_2 a_12 + α_2(x) B_1 a_21 + α_2(x) B_2 a_22 + α_1(x) b_1 + α_2(x) b_2.

Now α_1 and α_2 are linearly independent. Comparing the coefficients of α_1 and α_2 we obtain the system of linear equations for B_1 and B_2

B_1 = a_11 B_1 + a_12 B_2 + b_1,   B_2 = a_21 B_1 + a_22 B_2 + b_2

or in matrix form

( 1 - a_11   -a_12   ) ( B_1 ) = ( b_1 )
(  -a_21    1 - a_22 ) ( B_2 )   ( b_2 ).

Since

a_11 = ∫_0^1 y^2 dy = 1/3,   a_12 = ∫_0^1 y^{3/2} dy = 2/5,   a_21 = ∫_0^1 y^{3/2} dy = 2/5,   a_22 = ∫_0^1 y dy = 1/2,
b_1 = ∫_0^1 y^2 dy = 1/3,   b_2 = ∫_0^1 y^{3/2} dy = 2/5

we obtain

(  2/3  -2/5 ) ( B_1 ) = ( 1/3 )
( -2/5   1/2 ) ( B_2 )   ( 2/5 )

with the unique solution B_1 = 49/26, B_2 = 30/13. Since φ(x) = α_1(x) B_1 + α_2(x) B_2 + x we obtain the solution of the integral equation as

φ(x) = (75/26) x + (30/13) √x.
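The solution of the integral equation can be verified numerically by quadrature; a sketch using Gauss-Legendre nodes:

```python
import numpy as np

# candidate solution phi(x) = (75/26) x + (30/13) sqrt(x)
phi = lambda x: (75/26) * x + (30/13) * np.sqrt(x)

# Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1]
t, wts = np.polynomial.legendre.leggauss(60)
y = 0.5 * (t + 1.0)
wts = 0.5 * wts

B1 = np.sum(wts * y * phi(y))             # integral of y phi(y), should be 49/26
B2 = np.sum(wts * np.sqrt(y) * phi(y))    # integral of sqrt(y) phi(y), should be 30/13

x = np.linspace(0.01, 1.0, 50)
rhs = x * B1 + np.sqrt(x) * B2 + x        # right-hand side of the integral equation
assert np.allclose(phi(x), rhs, atol=1e-5)
```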
Problem 18. Consider the area-preserving map of the two-dimensional torus (modulo 1)

( x_1 ) → A ( x_1 )
( x_2 )     ( x_2 )

where det(A) = 1 (area-preserving). Consider a rational point on the torus

( x_1 ) = ( n_1/p )
( x_2 )   ( n_2/p )

where p is a prime number (except 2, 3, 5) and n_1, n_2 are integers between 0 and p - 1. One finds that the orbit has the following property: it is periodic and its period T depends on p alone. Consider p = 7, n_1 = 2, n_2 = 3. Find the orbit and the period T.

Solution 18. We apply the map once to (2/7  3/7)^T. Repeating this process we find the orbit. Thus the period is T = 6.
Problem 19. Gordan's theorem tells us the following. Let A be an m x n matrix over R. Then exactly one of the following systems has a solution:
System 1: Ax < 0 for some x ∈ R^n.
System 2: A^T p = 0 and p ≥ 0, p ≠ 0 for some p ∈ R^m.
(i) Let A be given. Find out whether system (1) or system (2) has a solution.
(ii) Let A be given. Find out whether system (1) or system (2) has a solution.

Solution 19. (i) We test system (1). We have the system of inequalities x_2 < 0, x_1 + x_3 < 0. Obviously this system has solutions, for example x_1 = x_3 = -1 and x_2 = -1/2. Thus system (2) has no solution.
(ii) From A^T p = 0 we find that p_1 = p_2 = p_3 = 0. Thus system (1) has a solution.
Problem 20. Farkas' theorem tells us the following. Let A be an m x n matrix over R and c be an n-vector in R^n. Then exactly one of the following systems has a solution:
System 1: Ax ≤ 0 and c^T x > 0 for some x ∈ R^n.
System 2: A^T y = c and y ≥ 0 for some y ∈ R^m.
Let

A = ( 1  0  1 )        ( 1 )
    ( 0  1  0 ),   c = ( 1 )
    ( 1  0  1 )        ( 1 ).

Find out whether system (1) or system (2) has a solution.

Solution 20. We test system (1). We have

Ax ≤ 0,   c^T x = x_1 + x_2 + x_3 > 0.

Thus we have the system of inequalities

x_1 + x_3 ≤ 0,   x_2 ≤ 0,   x_1 + x_3 ≤ 0,   x_1 + x_2 + x_3 > 0.

Since x_1 + x_3 ≤ 0 and x_2 ≤ 0 this system has no solution. Thus after Farkas' theorem system (2) has a solution.
Problem 21. Let A be an n x n matrix. Consider the linear equation Ax = 0. If the matrix A has rank r, then there are n - r linearly independent solutions of Ax = 0. Let n = 3 and

A = ( 0  1  1 )
    ( 0  0  1 )
    ( 0  0  0 ).

Find the rank of A and the linearly independent solutions.

Solution 21. The rank of A is 2. Thus we have 3 - 2 = 1 linearly independent solution. We obtain x = (t  0  0)^T, t ∈ R.
Problem 22. Consider the curve described by the equation

2x^2 + 4xy - y^2 + 4x - 2y + 5 = 0    (1)

relative to the natural basis (standard basis e_1 = (1  0)^T, e_2 = (0  1)^T).
(i) Write the equation in matrix form.
(ii) Find an orthogonal change of basis so that the equation relative to the new basis has no cross terms, i.e. no x'y' term. This change of coordinate system does not change the origin.

Solution 22. (i) Equation (1) in matrix form is

( x  y ) ( 2   2 ) ( x ) + ( 4  -2 ) ( x ) + 5 = 0.
          ( 2  -1 ) ( y )            ( y )

(ii) The symmetric 2 x 2 matrix given in (i) admits the eigenvalues 3 and -2 with the corresponding normalized eigenvectors

(1/√5) ( 2 ),   (1/√5) ( -1 )
       ( 1 )           (  2 ).

Thus the coordinates (x, y) of a point relative to the natural basis and (x', y') relative to the basis consisting of these normalized eigenvectors are related by

( x ) = (1/√5) ( 2  -1 ) ( x' )
( y )          ( 1   2 ) ( y' ).

Relative to the new basis the quadratic function becomes 3x'^2 - 2y'^2. To determine the equation of the original curve relative to the new basis, we substitute

x = 2x'/√5 - y'/√5,   y = x'/√5 + 2y'/√5

in the original equation (1) and obtain 3x'^2 - 2y'^2 + 6x'/√5 - 8y'/√5 + 5 = 0.
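A numerical check of the eigenvalues and the diagonalization of the quadratic-form matrix:

```python
import numpy as np

M = np.array([[2.0, 2.0], [2.0, -1.0]])   # quadratic-form matrix from (i)
evals, evecs = np.linalg.eigh(M)
print(evals)                               # eigenvalues -2 and 3

# in the eigenvector basis the quadratic form has no cross term
D = evecs.T @ M @ evecs
assert np.allclose(D, np.diag(evals))
```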
Problem 23. Consider the 2 x 2 matrix

M = ( b  -a )
    ( a   b )

with a, b ∈ R and positive determinant, i.e. a^2 + b^2 > 0. Solve the equation

( b  -a ) ( x_1 ) = (  b  a ) ( x_0 )
( a   b ) ( y_1 )   ( -a  b ) ( y_0 )

for the vector (x_1  y_1)^T with a given vector (x_0  y_0)^T.

Solution 23. We have

( b  -a )^{-1} = (1/(a^2 + b^2)) (  b  a )    (1)
( a   b )                        ( -a  b ).

Thus

( x_1 ) = ( b  -a )^{-1} (  b  a ) ( x_0 )
( y_1 )   ( a   b )      ( -a  b ) ( y_0 ).

Using equation (1) we obtain

( x_1 ) = (1/(a^2 + b^2)) ( (b^2 - a^2) x_0 + 2ab y_0  )
( y_1 )                   ( -2ab x_0 + (b^2 - a^2) y_0 ).
Problem 24. Let V be a vector space over a field F. Let W be a subspace of V. We define an equivalence relation ~ on V by stating that v_1 ~ v_2 if v_1 - v_2 ∈ W. The quotient space V/W is the set of equivalence classes [v]. Thus we can say that v_1 is equivalent to v_2 modulo W if v_1 = v_2 + w for some w ∈ W. Let

V = R^2 = { (x_1  x_2)^T : x_1, x_2 ∈ R }

and the subspace

W = { (x_1  0)^T : x_1 ∈ R }.

(i) Decide whether given pairs of vectors in R^2 are equivalent modulo W.
(ii) Give the quotient space V/W.

Solution 24. (i) Two vectors are equivalent modulo W if and only if their second components coincide, since the difference of the vectors must be of the form (x_1  0)^T.
(ii) Thus the elements of the quotient space consist of straight lines parallel to the x_1 axis.
Problem 25. Suppose that V is a vector space over a field F and U ⊆ V is a subspace. We define an equivalence relation ~ on V by x ~ y iff x - y ∈ U. Let V/U = V/~. Define addition and scalar multiplication on V/U by [x] + [y] = [x + y], c[x] = [cx], where c ∈ F and

[x] = { y ∈ V : y ~ x }.

(i) Show that these operations do not depend on which representative x we choose.
(ii) Let V = C^2 and the subspace U = { (x_1, x_2) : x_1 = ix_2 }. Find V/U.

Solution 25. (i) Suppose [x] = [x'] and [y] = [y']. We want to show that [x + y] = [x' + y'], i.e. that x + y ~ x' + y'. We have

(x + y) - (x' + y') = (x - x') + (y - y')

and x - x' ∈ U, y - y' ∈ U. Thus (x + y) - (x' + y') ∈ U which means x + y ~ x' + y', i.e. [x + y] = [x' + y']. Similarly (cx) - (cx') = c(x - x') ∈ U so that [cx] = [cx']. All the vector space axioms are satisfied. Therefore V/U is a vector space over the field F (quotient vector space).
(ii) We have (x_1, x_2) ~ (x̃_1, x̃_2) iff x_1 - x̃_1 = i(x_2 - x̃_2) or x_1 - ix_2 = x̃_1 - ix̃_2.
Problem 26. Let b ≠ a. Consider the system of linear equations

( 1      1      ...  1     ) ( w_0 )   (  b - a                    )
( x_0    x_1    ...  x_n   ) ( w_1 )   ( (b^2 - a^2)/2             )
( x_0^2  x_1^2  ...  x_n^2 ) ( w_2 ) = ( (b^3 - a^3)/3             )
( ...                      ) ( ... )   (  ...                      )
( x_0^n  x_1^n  ...  x_n^n ) ( w_n )   ( (b^{n+1} - a^{n+1})/(n+1) ).

Let n = 2, a = 0, b = 1, x_0 = 0, x_1 = 1/2, x_2 = 1. Find w_0, w_1, w_2.

Solution 26. From

( 1   1    1 ) ( w_0 )   (  1  )
( 0  1/2   1 ) ( w_1 ) = ( 1/2 )
( 0  1/4   1 ) ( w_2 )   ( 1/3 )

we find w_0 = 1/6, w_1 = 2/3, w_2 = 1/6.
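A numerical check; note that (w_0, w_1, w_2) = (1/6, 2/3, 1/6) are the weights of Simpson's rule on [0, 1]:

```python
import numpy as np

nodes = np.array([0.0, 0.5, 1.0])
V = np.vander(nodes, 3, increasing=True).T     # rows are 1, x, x^2 at the nodes
rhs = np.array([1.0, 1/2, 1/3])                # integrals of 1, x, x^2 on [0, 1]
w = np.linalg.solve(V, rhs)
print(w)    # [1/6, 2/3, 1/6] -- Simpson's rule
```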
Problem 27. Let Y, X, A, B, C, D, E be n x n matrices over R. Consider the system of matrix equations Y + CE + DX = 0_n, AE + BX = 0_n. Assume that A has an inverse. Eliminate the matrix E and solve the system for Y.

Solution 27. From the first equation we obtain Y = -CE - DX. The second equation provides E = -A^{-1}BX. Inserting this equation into Y = -CE - DX we obtain

Y = (CA^{-1}B - D)X.
Problem 28. For the three-body problem the following linear transformation plays a role

X(x_1, x_2, x_3) = (1/3)(x_1 + x_2 + x_3)
x(x_1, x_2, x_3) = (1/√2)(x_1 - x_2)
y(x_1, x_2, x_3) = (1/√6)(x_1 + x_2 - 2x_3).

(i) Find the inverse transformation.
(ii) Introduce polar coordinates x(r, φ) = r sin(φ), y(r, φ) = r cos(φ) and

r^2 = (1/3)((x_1 - x_2)^2 + (x_2 - x_3)^2 + (x_3 - x_1)^2).

Express (x_1 - x_2), (x_2 - x_3), (x_3 - x_1) using these three coordinates.

Solution 28. (i) In matrix form we have

( X )   ( 1/3    1/3    1/3   ) ( x_1 )
( x ) = ( 1/√2  -1/√2   0     ) ( x_2 )
( y )   ( 1/√6   1/√6  -2/√6  ) ( x_3 ).

Calculating the inverse of the matrix on the right-hand side yields

( x_1 )   ( 1   1/√2   1/√6  ) ( X )
( x_2 ) = ( 1  -1/√2   1/√6  ) ( x )
( x_3 )   ( 1   0     -2/√6  ) ( y ).

(ii) We find

x_1 - x_2 = √2 r sin(φ),   x_2 - x_3 = √2 r sin(φ + 2π/3),   x_3 - x_1 = √2 r sin(φ + 4π/3).
Problem 29. Let α ∈ [0, 2π). Find all solutions of the linear equation

( cos(α)  sin(α) ) ( x_1 ) = ( 1 )
( sin(α)  cos(α) ) ( x_2 )   ( 1 ).

Thus x_1 and x_2 depend on α.

Solution 29. The determinant of the matrix is given by cos^2(α) - sin^2(α) = cos(2α). Hence the determinant is equal to 0 if α = π/4, 3π/4, 5π/4 or 7π/4. Note that cos(π/4) = 1/√2, sin(π/4) = 1/√2, cos(3π/4) = -1/√2, sin(3π/4) = 1/√2. Thus if α ≠ π/4, 3π/4, 5π/4, 7π/4 the inverse of the matrix exists and we find the solution

( x_1 ) = (1/cos(2α)) (  cos(α)  -sin(α) ) ( 1 )
( x_2 )               ( -sin(α)   cos(α) ) ( 1 ).

If α = π/4 we have the matrix

(1/√2) ( 1  1 )
       ( 1  1 )

and we find the solutions given by (x_1 + x_2)/√2 = 1; analogously for α = 5π/4 we obtain (x_1 + x_2)/√2 = -1. If α = 3π/4 we have the matrix

(1/√2) ( -1   1 )
       (  1  -1 )

and the two equations (x_2 - x_1)/√2 = 1 and (x_1 - x_2)/√2 = 1 are inconsistent, so there is no solution; analogously for α = 7π/4.
Problem 30. Consider the partial differential equation (Laplace equation)

∂^2 u/∂x^2 + ∂^2 u/∂y^2 = 0   on   [0, 1] x [0, 1]

with the boundary conditions

u(x, 0) = 1,   u(x, 1) = 2,   u(0, y) = 1,   u(1, y) = 2.

Apply the central difference scheme

(∂^2 u/∂x^2)_{j,k} ≈ (u_{j-1,k} - 2u_{j,k} + u_{j+1,k}) / (Δx)^2,
(∂^2 u/∂y^2)_{j,k} ≈ (u_{j,k-1} - 2u_{j,k} + u_{j,k+1}) / (Δy)^2

and then solve the linear equation. Consider the cases Δx = Δy = 1/3 and Δx = Δy = 1/4.
Solution 30. Case Δx = Δy = 1/3: We have the 16 grid points (j, k) with j, k = 0, 1, 2, 3 and four interior grid points (1,1), (2,1), (1,2), (2,2).
For j = k = 1 we have u_{0,1} + u_{2,1} + u_{1,0} + u_{1,2} - 4u_{1,1} = 0.
For j = 1, k = 2 we have u_{0,2} + u_{2,2} + u_{1,1} + u_{1,3} - 4u_{1,2} = 0.
For j = 2, k = 1 we have u_{1,1} + u_{3,1} + u_{2,0} + u_{2,2} - 4u_{2,1} = 0.
For j = 2, k = 2 we have u_{1,2} + u_{3,2} + u_{2,1} + u_{2,3} - 4u_{2,2} = 0.
From the boundary conditions we find that

u_{0,1} = 1,  u_{1,0} = 1,  u_{0,2} = 1,  u_{1,3} = 2,  u_{3,1} = 2,  u_{2,0} = 1,  u_{3,2} = 2,  u_{2,3} = 2.

Thus we find the linear equation in matrix form Au = b

( -4   1   1   0 ) ( u_{1,1} )   ( -2 )
(  1  -4   0   1 ) ( u_{1,2} ) = ( -3 )
(  1   0  -4   1 ) ( u_{2,1} )   ( -3 )
(  0   1   1  -4 ) ( u_{2,2} )   ( -4 ).

The matrix A is invertible with

A^{-1} = -(1/12) ( 7/2   1    1   1/2 )
                 (  1   7/2  1/2   1  )
                 (  1   1/2  7/2   1  )
                 ( 1/2   1    1   7/2 )

and thus we obtain the solution

u_{1,1} = 5/4,   u_{1,2} = 3/2,   u_{2,1} = 3/2,   u_{2,2} = 7/4.

Case Δx = Δy = 1/4: We have 25 grid points and 9 interior grid points. The equations for these nine grid points are as follows

u_{0,1} + u_{2,1} + u_{1,0} + u_{1,2} - 4u_{1,1} = 0
u_{0,2} + u_{2,2} + u_{1,1} + u_{1,3} - 4u_{1,2} = 0
u_{0,3} + u_{2,3} + u_{1,2} + u_{1,4} - 4u_{1,3} = 0
u_{1,1} + u_{3,1} + u_{2,0} + u_{2,2} - 4u_{2,1} = 0
u_{1,2} + u_{3,2} + u_{2,1} + u_{2,3} - 4u_{2,2} = 0
u_{1,3} + u_{3,3} + u_{2,2} + u_{2,4} - 4u_{2,3} = 0
u_{2,1} + u_{4,1} + u_{3,0} + u_{3,2} - 4u_{3,1} = 0
u_{2,2} + u_{4,2} + u_{3,1} + u_{3,3} - 4u_{3,2} = 0
u_{2,3} + u_{4,3} + u_{3,2} + u_{3,4} - 4u_{3,3} = 0.

From the boundary conditions we find the 12 values

u_{0,1} = 1, u_{1,0} = 1, u_{0,2} = 1, u_{0,3} = 1, u_{1,4} = 2, u_{2,0} = 1,
u_{2,4} = 2, u_{4,1} = 2, u_{3,0} = 1, u_{4,2} = 2, u_{4,3} = 2, u_{3,4} = 2.

Inserting the boundary values, the linear equation in matrix form Au = b with u = (u_{1,1}, u_{1,2}, u_{1,3}, u_{2,1}, u_{2,2}, u_{2,3}, u_{3,1}, u_{3,2}, u_{3,3})^T is given in block form by

A = ( B  I  0 )        ( -4   1   0 )
    ( I  B  I ),   B = (  1  -4   1 ),   I the 3 x 3 identity matrix,
    ( 0  I  B )        (  0   1  -4 )

b = ( -2  -1  -3  -1  0  -2  -3  -2  -4 )^T.

The matrix is invertible and thus the solution is u = A^{-1}b.
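Both cases can be solved by assembling the five-point stencil programmatically; a minimal sketch (the function name and structure are illustrative, not from the text):

```python
import numpy as np

def laplace_interior(N):
    """Solve the discretized Laplace equation on [0,1]^2 with Dx = Dy = 1/N and
    boundary values u(x,0) = u(0,y) = 1, u(x,1) = u(1,y) = 2."""
    m = N - 1                                # interior points per direction
    u = np.zeros((N + 1, N + 1))
    u[:, 0] = 1.0; u[0, :] = 1.0             # y = 0 and x = 0 boundaries
    u[:, N] = 2.0; u[N, :] = 2.0             # y = 1 and x = 1 boundaries

    A = np.zeros((m * m, m * m))
    b = np.zeros(m * m)
    idx = lambda j, k: (j - 1) * m + (k - 1)
    for j in range(1, N):
        for k in range(1, N):
            r = idx(j, k)
            A[r, r] = -4.0
            for jj, kk in ((j - 1, k), (j + 1, k), (j, k - 1), (j, k + 1)):
                if 1 <= jj <= N - 1 and 1 <= kk <= N - 1:
                    A[r, idx(jj, kk)] = 1.0
                else:
                    b[r] -= u[jj, kk]        # move boundary value to the rhs
    return np.linalg.solve(A, b).reshape(m, m)

print(laplace_interior(3))    # [[5/4, 3/2], [3/2, 7/4]]
print(laplace_interior(4))    # the 3 x 3 interior solution for Dx = Dy = 1/4
```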
Problem 31. (i) The equation of a line in the Euclidean space R^2 passing through the points (x_1, y_1) and (x_2, y_2) is given by

(y - y_1)(x_2 - x_1) = (y_2 - y_1)(x - x_1).

Apply this equation to the points in R^2 given by (x_1, y_1) = (1, 1/2), (x_2, y_2) = (1/2, 1). Consider the unit square with the corner points (0,0), (0,1), (1,0), (1,1) and the map

(0,0) → 0,   (0,1) → 0,   (1,0) → 0,   (1,1) → 1.

We can consider this as a 2-input AND gate. Show that the line constructed above classifies this map.
(ii) The equation of a plane in R^3 passing through the points (x_1, y_1, z_1), (x_2, y_2, z_2), (x_3, y_3, z_3) in R^3 is given by

det ( x - x_1    y - y_1    z - z_1   )
    ( x_2 - x_1  y_2 - y_1  z_2 - z_1 ) = 0.
    ( x_3 - x_1  y_3 - y_1  z_3 - z_1 )

Apply this equation to the points (1, 1, 1/2), (1, 1/2, 1), (1/2, 1, 1). Consider the unit cube in R^3 with the corner points (vertices)

(0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1)

and the map where all corner points are mapped to 0 except for (1,1,1) which is mapped to 1. We can consider this as a 3-input AND gate. Show that the plane constructed in (ii) separates these solutions.

Solution 31. (i) The line is given by x + y = 3/2.
(ii) The plane is given by x + y + z = 5/2.
This can be extended to any dimension. For a hypercube in R^4 and the points

(1, 1, 1, 1/2),  (1, 1, 1/2, 1),  (1, 1/2, 1, 1),  (1/2, 1, 1, 1)

we obtain the hyperplane x_1 + x_2 + x_3 + x_4 = 7/2 which separates the corner point (1, 1, 1, 1) from all the other corner points.
Problem 32. Find the system of linear equations for a and b given by

(x + 15) / ((x + 3)(x - 1)) = a/(x + 3) + b/(x - 1).

Solve the system of linear equations.

Solution 32. We have

a/(x + 3) + b/(x - 1) = (a(x - 1) + b(x + 3)) / ((x + 3)(x - 1)) = ((a + b)x + (-a + 3b)) / ((x + 3)(x - 1)).

Comparing coefficients leads to the system of linear equations given in matrix form by

(  1  1 ) ( a ) = (  1 )
( -1  3 ) ( b )   ( 15 ).

The matrix is invertible and the solution is a = -3, b = 4.
Problem 33. Let k = 1, 2, 3, 4 and x_0 = 1, x_5 = 0. Solve

x_{k-1} - 2x_k + x_{k+1} = 0.

Solution 33. Inserting k = 1, 2, 3, 4 we obtain the system of linear equations

-2x_1 + x_2 = -1,   x_1 - 2x_2 + x_3 = 0,   x_2 - 2x_3 + x_4 = 0,   x_3 - 2x_4 = 0

or in matrix form

( -2   1   0   0 ) ( x_1 )   ( -1 )
(  1  -2   1   0 ) ( x_2 ) = (  0 )
(  0   1  -2   1 ) ( x_3 )   (  0 )
(  0   0   1  -2 ) ( x_4 )   (  0 ).

The matrix is invertible.
Problem 34. Consider the polynomial p(x) = a + bx + cx^2. Find a, b, c from the conditions p(0) = 0, p(1) = 1, p(2) = 0.

Solution 34. We obtain the three linear equations 0 = a, 1 = a + b + c, 0 = a + 2b + 4c with the solution a = 0, b = 2, c = -1.
Problem 35. Let A ∈ C^{n x m} with n > m and rank(A) = m.
(i) Show that the m x m matrix A*A is invertible.
(ii) We set Π := A(A*A)^{-1}A*. Show that Π is a projection matrix, i.e. Π^2 = Π and Π = Π*.

Solution 35. (i) To show that A*A is invertible it suffices to show that N(A*A) = {0}. From A*Av = 0 we obtain

A*Av = 0  ⇒  v*A*Av = 0  ⇒  (Av)*(Av) = 0  ⇒  Av = 0.

Thus v = 0 since rank(A) = m. Hence N(A*A) = {0} and A*A is invertible.
(ii) We have

Π^2 = A(A*A)^{-1}A*A(A*A)^{-1}A* = A(A*A)^{-1}A* = Π

and Π* = (A(A*A)^{-1}A*)* = A((A*A)*)^{-1}A* = A(A*A)^{-1}A* = Π.
Problem 36. Let a, b, c ∈ R and abc ≠ 0. Find the solution of the system of linear equations

( 0  c  b ) ( cos(α) )   ( a )
( c  0  a ) ( cos(β) ) = ( b )
( b  a  0 ) ( cos(γ) )   ( c ).

Solution 36. Since

det ( 0  c  b )
    ( c  0  a ) = 2abc ≠ 0
    ( b  a  0 )

the inverse of the matrix exists and is given by

(1/(2abc)) ( -a^2   ab    ac  )
           (  ab   -b^2   bc  )
           (  ac    bc   -c^2 ).

Thus the solution is

( cos(α) )                ( -a^2   ab    ac  ) ( a )
( cos(β) ) = (1/(2abc))   (  ab   -b^2   bc  ) ( b )
( cos(γ) )                (  ac    bc   -c^2 ) ( c ).
Problem 37. Let A be an invertible n x n matrix over R. Consider the system of linear equations Ax = b or

sum_{j=1}^n a_{kj} x_j = b_k,   k = 1, ..., n.

Let A = C - R. This is called a splitting of the matrix A and R is the defect matrix of the splitting. Consider the iteration

Cx^{(k+1)} = Rx^{(k)} + b,   k = 0, 1, 2, ....

Let

A = (  4  -1   0 )        ( 4  0  0 )        ( 3 )             ( 0 )
    ( -1   4  -1 ),   C = ( 0  4  0 ),   b = ( 2 ),   x^{(0)} = ( 0 )
    (  0  -2   4 )        ( 0  0  4 )        ( 2 )             ( 0 ).

The iteration converges if ρ(C^{-1}R) < 1, where ρ(C^{-1}R) denotes the spectral radius of C^{-1}R. Show that ρ(C^{-1}R) < 1. Perform the iteration.

Solution 37. We have

C^{-1}R = (1/4) ( 0  1  0 )
                ( 1  0  1 )
                ( 0  2  0 )

with the eigenvalues √3/4, -√3/4, 0. Thus ρ(C^{-1}R) < 1. We have

Cx^{(1)} = Rx^{(0)} + b = b,   x^{(1)} = ( 3/4  1/2  1/2 )^T.

The second iteration yields

Cx^{(2)} = Rx^{(1)} + b = ( 7/2  13/4  3 )^T,   x^{(2)} = ( 7/8  13/16  3/4 )^T.

The sequence converges to (1  1  1)^T, the solution of the linear system Ax = b.
Problem 38. Let A be an n × n matrix over R and let b ∈ R^n. Consider the linear equation Ax = b. Assume that a_jj ≠ 0 for j = 1, ..., n. We define the diagonal matrix D = diag(a_jj). Then the linear equation Ax = b can be written as
x = Bx + c
with B := -D^(-1)(A - D), c := D^(-1)b. The Jacobi method for the solution of the linear equation Ax = b is given by
x^(k+1) = B x^(k) + c,   k = 0, 1, ...
where x^(0) is any initial vector in R^n. The sequence converges if
ρ(B) := max_{j=1,...,n} |λ_j(B)| < 1
where ρ(B) is the spectral radius of B. Let
A = ( 2  1  0 )
    ( 1  2  1 )
    ( 0  1  2 ).
(i) Show that the Jacobi method can be applied for this matrix.
(ii) Find the solution of the linear equation with b = (1 1 1)^T.

Solution 38. (i) We have
B = -D^(-1)(A - D) = -(1/2) ( 0  1  0 )
                            ( 1  0  1 )
                            ( 0  1  0 ).
The eigenvalues of B are 0, 1/√2, -1/√2. Thus ρ(B) = 1/√2 < 1 and the Jacobi method can be applied.
(ii) The solution is x = (1/2 0 1/2)^T.
Problem 39. Kirchhoff's current law states that the algebraic sum of all the currents flowing into a junction is 0. Kirchhoff's voltage law states that the algebraic sum of all the voltages around a closed circuit is 0. Use Kirchhoff's laws and Ohm's law (V = RI) to set up the system of linear equations for the circuit depicted in the figure. Given the voltages V_A, V_B and the resistors R1, R2, R3, find I1, I2, I3.

Solution 39. From Kirchhoff's voltage law we find for loop 1 and loop 2 that
U1 + U2 = V_A,   U2 + U3 = V_B.
From Kirchhoff's current law for node A we obtain
I1 - I2 + I3 = 0.
Thus the linear system can be written in matrix form
( R1  R2  0  ) ( I1 )   ( V_A )
( 0   R2  R3 ) ( I2 ) = ( V_B )
( 1   -1  1  ) ( I3 )   (  0  ).
The determinant of the matrix on the left-hand side is given by R1R2 + R2R3 + R1R3. Since R1, R2, R3 > 0 the inverse matrix exists. The solution is given by
I1 = ((R2 + R3)V_A - R2V_B)/(R1R2 + R1R3 + R2R3),
I2 = (R3V_A + R1V_B)/(R1R2 + R1R3 + R2R3),
I3 = (-R2V_A + (R1 + R2)V_B)/(R1R2 + R1R3 + R2R3).
Problem 40. Write a C++ program that implements Gauss elimination to solve linear equations. Apply it to the system
x1 + x2 + x3 = 1,  8x1 + 4x2 + 2x3 = 5,  27x1 + 9x2 + 3x3 = 14.

Solution 40. The program is

// Gauss elimination + back substitution
#include <iostream>
#include <cmath>
using namespace std;

const int n = 3;

void gauss(double C[n][n+1]) // in place Gauss elimination
{
  int i, j, k, m;
  for(i=0;i<n-1;i++)
  {
    double temp, max = fabs(C[m=i][i]);
    for(j=i+1;j<n;j++)              // find the maximum pivot
      if(fabs(C[j][i]) > max) max = fabs(C[m=j][i]);
    for(j=i;j<=n;j++)               // swap the rows
    { temp = C[i][j]; C[i][j] = C[m][j]; C[m][j] = temp; }
    if(C[i][i] != 0)                // otherwise no elimination necessary
      for(j=i+1;j<n;j++)            // elimination
      {
        double r = C[j][i]/C[i][i];
        C[j][i] = 0.0;              // perform the elimination
        for(k=i+1;k<=n;k++) C[j][k] = C[j][k]-r*C[i][k];
      }
  }
}

int solve(double A[n][n],double b[n],double x[n]) // solve Ax = b
{
  int i, j;
  double C[n][n+1];
  for(i=0;i<n;i++)                  // 1. construct augmented matrix
  { for(j=0;j<n;j++) C[i][j] = A[i][j]; C[i][n] = b[i]; }
  gauss(C);                         // 2. Gauss elimination
  for(i=n-1;i>=0;i--)               // 3. back substitution
  {
    double sum = C[i][n];           // initial value from the last column
    if(C[i][i]==0.0) return 0;      // no solution/no unique solution
    for(j=i+1;j<n;j++) sum -= C[i][j]*x[j]; // substitution
    x[i] = sum/C[i][i];             // divide by the leading coefficient
  }
  return 1;
}

int main(void)
{
  double A[n][n] = {{ 1.0,1.0,1.0 },{ 8.0,4.0,2.0 },{ 27.0,9.0,3.0 }};
  double b[n] = { 1.0,5.0,14.0 };
  double x[n];
  if(solve(A,b,x))
  { cout << "("; for(int i=0;i<n;i++) cout << x[i] << " ";
    cout << ")" << endl; }
  else cout << "No solution or no unique solution" << endl;
  return 0;
}

The output is (0.333333 0.5 0.166667), i.e. (1/3 1/2 1/6).
Supplementary Problems
Problem 1. Let n and p be vectors in R^n with n ≠ 0. The set of all vectors x in R^n which satisfy the equation n·(x - p) = 0 is called a hyperplane through the point p ∈ R^n. We call n a normal vector for the hyperplane and call n·(x - p) = 0 a normal equation for the hyperplane. Find n and p in R^4 such that we obtain the hyperplane given by
x1 + x2 + x3 + x4 = 7/2.
Any hyperplane of the Euclidean space R^n has exactly two unit normal vectors.
Problem 2. The hyperplanes x1 + x2 + x3 + x4 = 1 and x1 + x2 + x3 + x4 = 2 in R^4 do not intersect. Find the shortest distance.

Problem 3. (i) Find the solutions of the equation
( 1  2 ) ( x11  x12 )   ( 0  0 )
( 3  4 ) ( x21  x22 ) = ( 0  0 ).
(ii) Find the solutions of the equation
( 1  2 ) ( x11  x12 )   ( 1  0 )
( 3  4 ) ( x21  x22 ) = ( 0  1 ).
Problem 4. Let A be a given 3 × 3 matrix over R with det(A) ≠ 0. Is the transformation
x1'(x1, x2) = (a11x1 + a12x2 + a13)/(a31x1 + a32x2 + a33),
x2'(x1, x2) = (a21x1 + a22x2 + a23)/(a31x1 + a32x2 + a33)
invertible? If so find the inverse.
Problem 5. (i) Consider a linear equation written in matrix form. The determinant of the 3 × 3 matrix is nonzero (+1). Apply two different methods (Gauss elimination and Leverrier's method) to find the solution. Compare the two methods and discuss.
(ii) Apply the Gauss-Seidel method to solve the linear system
(  4  -1  -1   0 ) ( x1 )   ( 1 )
( -1   4   0  -1 ) ( x2 ) = ( 0 )
( -1   0   4  -1 ) ( x3 )   ( 0 )
(  0  -1  -1   4 ) ( x4 )   ( 0 )
and show that the solution is given by
x1 = 7/24,  x2 = 1/12,  x3 = 1/12,  x4 = 1/24.
(iii) Show that the solution of the system of linear equations
x1 + x2 + x3 = 0,  x1 + 2x2 + x3 = 1,  2x1 + x2 + x3 = 2
is given by x1 = 2, x2 = 1, x3 = -3.
Problem 6. (i) Given the 2 × 2 matrix A. Find all 2 × 2 matrices X such that AX = XA. From AX = XA we obtain
( a11x11 + a12x21   a11x12 + a12x22 )   ( x11a11 + x12a21   x11a12 + x12a22 )
( a21x11 + a22x21   a21x12 + a22x22 ) = ( x21a11 + x22a21   x21a12 + x22a22 ).
Thus we obtain the four equations
a12x21 = a21x12,   a11x12 + a12x22 = a12x11 + a22x12,
a21x12 = a12x21,   a21x11 + a22x21 = a11x21 + a21x22.
(ii) Show that the solution of the equation
( 0  1 ) ( x11  x12 )   ( x11  x12 ) ( 0  1 )
( 1  0 ) ( x21  x22 ) = ( x21  x22 ) ( 1  0 )
is given by x11 = x22, x12 = x21.

Problem 7. Let a, b1, b2 ∈ R. Solve the system of linear equations
( cosh(a)  sinh(a) ) ( x1 )   ( b1 )
( sinh(a)  cosh(a) ) ( x2 ) = ( b2 ).
Chapter 3
Kronecker Product
Let A be an m × n matrix and B be an r × s matrix. The Kronecker product of A and B is defined as the (m·r) × (n·s) matrix
A ⊗ B := ( a11B  a12B  ...  a1nB )
         ( a21B  a22B  ...  a2nB )
         ( ...   ...   ...  ...  )
         ( am1B  am2B  ...  amnB ).
The Kronecker product is associative
(A ⊗ B) ⊗ C = A ⊗ (B ⊗ C).
Let A and B be m × n matrices and C a p × q matrix. Then
(A + B) ⊗ C = A ⊗ C + B ⊗ C.
Let c ∈ C. Then (cA) ⊗ B = c(A ⊗ B) = A ⊗ (cB). We have
(A ⊗ B)^T = A^T ⊗ B^T,   (A ⊗ B)* = A* ⊗ B*.
Let A be an m × n matrix, B be a p × q matrix, C be an n × r matrix and D be a q × s matrix. Then
(A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
Let A be an n × n matrix and B an m × m matrix. Then we have
tr(A ⊗ B) = tr(A)tr(B),   det(A ⊗ B) = (det(A))^m (det(B))^n.
The Kronecker sum is defined as A ⊕ B := A ⊗ I_m + I_n ⊗ B.
Problem 1. Consider the Pauli spin matrices
σ1 = ( 0  1 ),   σ3 = ( 1   0 ).
     ( 1  0 )         ( 0  -1 )
Find σ1 ⊗ σ3 and σ3 ⊗ σ1. Discuss. Find tr(σ1 ⊗ σ3) and tr(σ3 ⊗ σ1).

Solution 1. We obtain
σ1 ⊗ σ3 = ( 0   0  1   0 )      σ3 ⊗ σ1 = ( 0  1   0   0 )
          ( 0   0  0  -1 ),               ( 1  0   0   0 )
          ( 1   0  0   0 )                ( 0  0   0  -1 )
          ( 0  -1  0   0 )                ( 0  0  -1   0 ).
We see that σ1 ⊗ σ3 ≠ σ3 ⊗ σ1. We find tr(σ1 ⊗ σ3) = tr(σ3 ⊗ σ1) = 0.
Problem 2. Let
x1 = (1/√2) ( 1 ),   x2 = (1/√2) (  1 ).
            ( 1 )                ( -1 )
Thus {x1, x2} forms an orthonormal basis in R^2. Calculate
x1 ⊗ x1,  x1 ⊗ x2,  x2 ⊗ x1,  x2 ⊗ x2
and interpret the result.

Solution 2. We obtain
x1 ⊗ x1 = (1/2)(1 1 1 1)^T,   x1 ⊗ x2 = (1/2)(1 -1 1 -1)^T,
x2 ⊗ x1 = (1/2)(1 1 -1 -1)^T,   x2 ⊗ x2 = (1/2)(1 -1 -1 1)^T.
Thus we find an orthonormal basis in R^4 from an orthonormal basis in R^2.
Problem 3. Given the orthonormal basis
x1 = ( e^{iφ} cos(θ) ),   x2 = (   -sin(θ)      )
     (    sin(θ)     )         ( e^{-iφ} cos(θ) )
in the vector space C^2. Use this orthonormal basis to find an orthonormal basis in C^4.

Solution 3. A basis in C^4 is given by { x1 ⊗ x1, x1 ⊗ x2, x2 ⊗ x1, x2 ⊗ x2 } since
(x_j ⊗ x_k)*(x_m ⊗ x_n) = δ_{jm} δ_{kn}
where j, k, m, n = 1, 2.
Problem 4. (i) Let {v1, v2, v3} be an orthonormal basis in R^3. Use this orthonormal basis and the Kronecker product to find an orthonormal basis in R^9.
(ii) A basis in the vector space M_2(C) of 2 × 2 matrices over C is given by the Pauli spin matrices (including the 2 × 2 identity matrix) σ0 = I_2, σ1, σ2, σ3. Use this basis and the Kronecker product to find a basis in the vector space M_4(C) of all 4 × 4 matrices over C.

Solution 4. (i) An orthonormal basis in R^9 is given by the nine vectors
v_j ⊗ v_k,   j, k = 1, 2, 3.
(ii) A basis in the vector space of 4 × 4 matrices is given by the 16 matrices
σ_j ⊗ σ_k,   j, k = 0, 1, 2, 3.

Problem 5.
Consider the two normalized vectors
(1/√2)(1 0 0 1)^T,   (1/√2)(0 1 -1 0)^T
in the Hilbert space R^4. Show that the vectors obtained by applying the matrix (I_2 ⊗ σ1) together with the original ones form an orthonormal basis in R^4.

Solution 5. We have
(I_2 ⊗ σ1)(1/√2)(1 0 0 1)^T = (1/√2)(0 1 1 0)^T,
(I_2 ⊗ σ1)(1/√2)(0 1 -1 0)^T = (1/√2)(1 0 0 -1)^T.
The four vectors (Bell basis)
(1/√2)(1 0 0 1)^T,  (1/√2)(0 1 1 0)^T,  (1/√2)(1 0 0 -1)^T,  (1/√2)(0 1 -1 0)^T
are linearly independent, normalized and pairwise orthogonal.
Problem 6. Let A be an m × m matrix and B be an n × n matrix. The underlying field is C. Let I_m, I_n be the m × m and n × n unit matrix, respectively.
(i) Show that tr(A ⊗ B) = tr(A)tr(B).
(ii) Show that tr(A ⊗ I_n + I_m ⊗ B) = n·tr(A) + m·tr(B).

Solution 6. (i) We have
tr(A ⊗ B) = sum_{j=1}^m sum_{k=1}^n a_jj b_kk = (sum_{j=1}^m a_jj)(sum_{k=1}^n b_kk) = tr(A)tr(B).
(ii) Since the trace operation is linear and tr(I_n) = n we find
tr(A ⊗ I_n + I_m ⊗ B) = tr(A ⊗ I_n) + tr(I_m ⊗ B) = n·tr(A) + m·tr(B).
Problem 7. (i) Let A be an arbitrary n × n matrix over C. Show that
exp(A ⊗ I_n) = exp(A) ⊗ I_n.
(ii) Let A be an n × n matrix and I_m be the m × m identity matrix. Show that
sin(A ⊗ I_m) = sin(A) ⊗ I_m.
(iii) Let A be an n × n matrix and B be an m × m matrix. Is
sin(A ⊗ I_m + I_n ⊗ B) = (sin(A)) ⊗ (cos(B)) + (cos(A)) ⊗ (sin(B))?
Prove or disprove.

Solution 7. (i) Using the expansion of the exponential function
exp(A ⊗ I_n) = sum_{k=0}^∞ (1/k!)(A ⊗ I_n)^k = I_n ⊗ I_n + (1/1!)(A ⊗ I_n) + (1/2!)(A ⊗ I_n)^2 + ...
and (A ⊗ I_n)^k = A^k ⊗ I_n (k ∈ N) we find the identity.
(ii) Let k ∈ N. Since (A ⊗ I_m)^k = A^k ⊗ I_m we find the identity.
(iii) Since [A ⊗ I_m, I_n ⊗ B] = 0_{nm} and using the result
sin(A ⊗ I_m) = sin(A) ⊗ I_m,   cos(I_n ⊗ B) = I_n ⊗ cos(B)
we find the identity.
Problem 8. Let A, B be arbitrary n × n matrices over C. Show that
exp(A ⊗ I_n + I_n ⊗ B) = exp(A) ⊗ exp(B).

Solution 8. The proof of this identity relies on [A ⊗ I_n, I_n ⊗ B] = 0_{n^2} and
(A ⊗ I_n)^r (I_n ⊗ B)^s = A^r ⊗ B^s,   r, s ∈ N.
Thus
exp(A ⊗ I_n + I_n ⊗ B) = sum_{j=0}^∞ (1/j!) sum_{k=0}^j (j choose k)(A ⊗ I_n)^k (I_n ⊗ B)^{j-k}
= sum_{j=0}^∞ sum_{k=0}^j (A^k ⊗ B^{j-k})/(k!(j-k)!) = (sum_{k=0}^∞ A^k/k!) ⊗ (sum_{l=0}^∞ B^l/l!)
= exp(A) ⊗ exp(B).
Problem 9. Let A and B be arbitrary n × n matrices over C. Is e^{A⊗B} = e^A ⊗ e^B?

Solution 9. Obviously this is not true in general. Let A = B = I_n. Then
e^{A⊗B} = e^{I_{n^2}} = e·I_{n^2}   and   e^A ⊗ e^B = e^{I_n} ⊗ e^{I_n} = e^2·I_{n^2}.
Problem 10. The underlying field is C. Let A be an m × m matrix with eigenvalue λ and corresponding normalized eigenvector v. Let B be an n × n matrix with eigenvalue μ and corresponding normalized eigenvector u. Let ε1, ε2 and ε3 be real parameters. Find an eigenvalue and the corresponding normalized eigenvector of the matrix
ε1 A ⊗ B + ε2 A ⊗ I_n + ε3 I_m ⊗ B.

Solution 10. We have
(A ⊗ B)(v ⊗ u) = (Av) ⊗ (Bu),   (A ⊗ I_n)(v ⊗ u) = (Av) ⊗ u,   (I_m ⊗ B)(v ⊗ u) = v ⊗ (Bu).
Thus a normalized eigenvector of the matrix ε1 A ⊗ B + ε2 A ⊗ I_n + ε3 I_m ⊗ B is v ⊗ u and the corresponding eigenvalue is given by ε1λμ + ε2λ + ε3μ.
Problem 11. Let A_j (j = 1, ..., k) be matrices of size m_j × n_j. We introduce the notation
⊗_{j=1}^k A_j := (⊗_{j=1}^{k-1} A_j) ⊗ A_k = A_1 ⊗ A_2 ⊗ ... ⊗ A_k.
Consider the elementary matrices E_{00}, E_{01}, E_{10}, E_{11}, i.e. the 2 × 2 matrices with a 1 at the indexed entry and 0 otherwise.
(i) Calculate ⊗_{j=1}^k (E_{00} + E_{01} + E_{11}) for k = 1, k = 2, k = 3 and k = 8. Give an interpretation of the result when each entry in the matrix represents a pixel (1 for black and 0 for white). This means we use the Kronecker product for representing images.
(ii) Calculate
(⊗_{j=1}^k (E_{00} + E_{01} + E_{10} + E_{11})) ⊗ ( 0  1 )
                                                 ( 1  0 )
for k = 2 and give an interpretation as an image, i.e. each entry 0 is identified with a black pixel and an entry 1 with a white pixel. Discuss the case for arbitrary k.
Solution 11. (i) We obtain
E_{00} + E_{01} + E_{11} = ( 1  1 )
                           ( 0  1 )
and
(E_{00} + E_{01} + E_{11}) ⊗ (E_{00} + E_{01} + E_{11}) = ( 1  1  1  1 )
                                                          ( 0  1  0  1 )
                                                          ( 0  0  1  1 )
                                                          ( 0  0  0  1 ).
For k = 3 we obtain the 8 × 8 matrix
( 1  1  1  1  1  1  1  1 )
( 0  1  0  1  0  1  0  1 )
( 0  0  1  1  0  0  1  1 )
( 0  0  0  1  0  0  0  1 )
( 0  0  0  0  1  1  1  1 )
( 0  0  0  0  0  1  0  1 )
( 0  0  0  0  0  0  1  1 )
( 0  0  0  0  0  0  0  1 ).
For k larger and larger the image approaches the Sierpinski triangle. For k = 8 we have a 256 × 256 matrix which provides a good approximation for the Sierpinski triangle.
(ii) Note that
E_{00} + E_{01} + E_{10} + E_{11} = ( 1  1 ).
                                    ( 1  1 )
For k = 2 we obtain the 8 × 8 matrix
( 0  1  0  1  0  1  0  1 )
( 1  0  1  0  1  0  1  0 )
( 0  1  0  1  0  1  0  1 )
( 1  0  1  0  1  0  1  0 )
( 0  1  0  1  0  1  0  1 )
( 1  0  1  0  1  0  1  0 )
( 0  1  0  1  0  1  0  1 )
( 1  0  1  0  1  0  1  0 )
which is a checkerboard. For all k we find a checkerboard pattern.
Problem 12. (i) Let A and X be n × n matrices over C. Assume that [A, X] = 0_n. Calculate the commutator [X ⊗ I_n + I_n ⊗ X, A ⊗ A].
(ii) Let A, B, C be n × n matrices. Assume that [A, B] = 0_n, [A, C] = 0_n. Let
X := I_n ⊗ A + A ⊗ I_n,   Y := I_n ⊗ B + B ⊗ I_n + A ⊗ C.
Calculate the commutator [X, Y].

Solution 12. (i) We have
[X ⊗ I_n + I_n ⊗ X, A ⊗ A] = (XA) ⊗ A + A ⊗ (XA) - (AX) ⊗ A - A ⊗ (AX).
Since AX = XA we obtain [X ⊗ I_n + I_n ⊗ X, A ⊗ A] = 0_{n^2}.
(ii) We have
[X, Y] = I_n ⊗ (AB - BA) + (AB - BA) ⊗ I_n + A ⊗ (AC - CA)
= I_n ⊗ [A, B] + [A, B] ⊗ I_n + A ⊗ [A, C]
= 0_{n^2}.
Problem 13. A square matrix is called a stochastic matrix if each entry is nonnegative and the sum of the entries in each row is 1. Let A, B be n × n stochastic matrices. Is A ⊗ B a stochastic matrix?

Solution 13. The answer is yes. For example the sum of the first row of the matrix A ⊗ B is
a11 sum_{j=1}^n b_{1j} + a12 sum_{j=1}^n b_{1j} + ... + a_{1n} sum_{j=1}^n b_{1j} = sum_{j=1}^n a_{1j} = 1.
Analogously for the other rows.
Problem 14. Let X be an m × m and Y be an n × n matrix. The direct sum is the (m + n) × (m + n) matrix
X ⊕ Y := ( X  0 ).
         ( 0  Y )
Let A be an n × n matrix, B be an m × m matrix and C be a p × p matrix. Then we have the identity
(A ⊕ B) ⊗ C = (A ⊗ C) ⊕ (B ⊗ C).
Is A ⊗ (B ⊕ C) = (A ⊗ B) ⊕ (A ⊗ C) true?

Solution 14. This is not true in general. For any A with a nonzero off-diagonal entry the matrix A ⊗ (B ⊕ C) has nonzero entries outside the two diagonal blocks, whereas (A ⊗ B) ⊕ (A ⊗ C) is block diagonal. For example, with
A = ( 0  1 ),   B = C = ( 1  1 )
    ( 1  0 )            ( 1  1 )
we have A ⊗ (B ⊕ C) ≠ (A ⊗ B) ⊕ (A ⊗ C).
Problem 15. Let A, B be n × n matrices. Then the direct sum A ⊕ B is the 2n × 2n matrix
A ⊕ B = ( A  0 ).
        ( 0  B )
(i) Show that A ⊕ B can also be written using the Kronecker product and addition of matrices.
(ii) Let C be a 2 × 2 matrix. Can C ⊕ C be written as a Kronecker product of two 2 × 2 matrices?

Solution 15. (i) Yes. We have
A ⊕ B = ( 1  0 ) ⊗ A + ( 0  0 ) ⊗ B.
        ( 0  0 )       ( 0  1 )
(ii) Yes. We have C ⊕ C = I_2 ⊗ C.
Problem 16. Let A, B be 2 × 2 matrices, C a 3 × 3 matrix and D a 1 × 1 matrix. Find the condition on these matrices such that A ⊗ B = C ⊕ D, where ⊕ denotes the direct sum. We assume that D is nonzero.

Solution 16. Since
C ⊕ D = ( c11  c12  c13  0 )
        ( c21  c22  c23  0 )
        ( c31  c32  c33  0 )
        (  0    0    0   d )
we obtain from the last row and column of the two matrices A ⊗ B and C ⊕ D
a12b12 = a12b22 = a22b12 = 0,   a21b21 = a21b22 = a22b21 = 0
and a22b22 = d. Since d ≠ 0 we have a22 ≠ 0 and b22 ≠ 0. Thus it follows that a12 = a21 = b12 = b21 = 0. Therefore c12 = c21 = c23 = c32 = c13 = c31 = 0 and
c11 = a11b11,   c22 = a11b22,   c33 = a22b11,   d = a22b22.

Problem 17.
Let A be a 2 × 2 matrix over C. Solve the equation A ⊗ A = A ⊕ A.

Solution 17. The equation yields the sixteen conditions
a12(a11 - 1) = 0,  a21(a11 - 1) = 0,  a22(a11 - 1) = 0,
a11(a22 - 1) = 0,  a12(a22 - 1) = 0,  a21(a22 - 1) = 0,
a12a11 = a12^2 = a12a21 = a12a22 = 0,
a11a21 = a12a21 = a21^2 = a21a22 = 0.
Thus a12 = a21 = 0. For a11 and a22 we find the two options a11 = a22 = 0 and a11 = a22 = 1. Thus either we have the 2 × 2 zero matrix or the 2 × 2 identity matrix as solution.
Problem 18. With each m × n matrix Y we associate the column vector vec(Y) of length m·n defined by
vec(Y) := (y11, ..., ym1, y12, ..., ym2, ..., y1n, ..., ymn)^T.
(i) Let A be an m × n matrix, B a p × q matrix, and C an m × q matrix. Let X be an unknown n × p matrix. Show that the matrix equation AXB = C is equivalent to the system of qm equations in np unknowns given by
(B^T ⊗ A)vec(X) = vec(C)
that is, vec(AXB) = (B^T ⊗ A)vec(X).
(ii) Let A, B, D be n × n matrices and I_n the n × n identity matrix. Use the result from (i) to prove that AX + XB = D can be written as
((I_n ⊗ A) + (B^T ⊗ I_n))vec(X) = vec(D).

Solution 18. (i) For a given matrix Y, let Y_k denote the k-th column of Y. Let B = (b_ij). Then
(AXB)_k = AXB_k = A sum_{j=1}^p b_{jk} X_j = ( b_{1k}A  b_{2k}A  ...  b_{pk}A ) vec(X) = (B_k^T ⊗ A) vec(X).
Thus
vec(AXB) = ( B_1^T ⊗ A )
           (    ...     ) vec(X) = (B^T ⊗ A)vec(X)
           ( B_q^T ⊗ A )
since the transpose of a column of B is a row of B^T. Therefore
vec(C) = vec(AXB) = (B^T ⊗ A)vec(X).
(ii) Utilizing the result from (i) and that the vec operation is linear we have
vec(AX + XB) = vec(AX) + vec(XB) = vec(AXI_n) + vec(I_nXB)
= (I_n ⊗ A)vec(X) + (B^T ⊗ I_n)vec(X)
= vec(D).
Problem 19. Let A, B, C, D be symmetric n × n matrices over R. Assume that these matrices commute with each other. Consider the 4n × 4n matrix
H = (  A   B   C   D )
    ( -B   A  -D   C )
    ( -C   D   A  -B )
    ( -D  -C   B   A ).
(i) Calculate HH^T and express the result using the Kronecker product.
(ii) Assume that A^2 + B^2 + C^2 + D^2 = 4nI_n. Show that H is a Hadamard matrix.

Solution 19. (i) We find HH^T = I_4 ⊗ (A^2 + B^2 + C^2 + D^2).
(ii) For the special case with A^2 + B^2 + C^2 + D^2 = 4nI_n we find that H is a Hadamard matrix, since HH^T = I_4 ⊗ 4nI_n = 4nI_{4n}. This is the Williamson construction of Hadamard matrices.
Problem 20. (i) Can the 4 × 4 matrix
C = ( 1  0   0   1 )
    ( 0  1   1   0 )
    ( 0  1  -1   0 )
    ( 1  0   0  -1 )
be written as the Kronecker product of two 2 × 2 matrices A and B, i.e. C = A ⊗ B?
(ii) Can the 4 × 4 matrix
(  0   0  -1   1 )
(  0   0   1  -1 )
( -1   1   0   0 )
(  1  -1   0   0 )
be written as a Kronecker product of 2 × 2 matrices?

Solution 20. (i) From C = A ⊗ B we find the 16 conditions
a11b11 = c11  a11b12 = c12  a11b21 = c21  a11b22 = c22
a12b11 = c13  a12b12 = c14  a12b21 = c23  a12b22 = c24
a21b11 = c31  a21b12 = c32  a21b21 = c41  a21b22 = c42
a22b11 = c33  a22b12 = c34  a22b21 = c43  a22b22 = c44.
Since c12 = c13 = c21 = c24 = c31 = c34 = c42 = c43 = 0 and
c11 = c14 = c22 = c23 = c32 = c41 = 1,   c33 = c44 = -1
we obtain
a11b11 = 1  a11b12 = 0  a11b21 = 0  a11b22 = 1
a12b11 = 0  a12b12 = 1  a12b21 = 1  a12b22 = 0
a21b11 = 0  a21b12 = 1  a21b21 = 1  a21b22 = 0
a22b11 = -1 a22b12 = 0  a22b21 = 0  a22b22 = -1.
Thus from a11b11 = 1 we obtain that a11 ≠ 0. Then from a11b12 = 0 we obtain b12 = 0. However a21b12 = 1. This is a contradiction. Thus C cannot be written as a Kronecker product of 2 × 2 matrices.
(ii) Yes. We have
(  0   0  -1   1 )
(  0   0   1  -1 )   ( 0  1 )   ( -1   1 )
( -1   1   0   0 ) = ( 1  0 ) ⊗ (  1  -1 ).
(  1  -1   0   0 )
Problem 21. Let x, y, z ∈ R^n. We define a wedge product
x ∧ y := x ⊗ y - y ⊗ x.
Show that (x ∧ y) ∧ z + (z ∧ x) ∧ y + (y ∧ z) ∧ x = 0.

Solution 21. We have
(x ∧ y) ∧ z = x⊗y⊗z - y⊗x⊗z - z⊗x⊗y + z⊗y⊗x
(z ∧ x) ∧ y = z⊗x⊗y - x⊗z⊗y - y⊗z⊗x + y⊗x⊗z
(y ∧ z) ∧ x = y⊗z⊗x - z⊗y⊗x - x⊗y⊗z + x⊗z⊗y.
Adding these equations the result follows.
Problem 22. Consider the vectors v = (v1 v2)^T and w = (w1 w2)^T in C^2.
(i) Calculate v ⊗ w, v^T ⊗ w, v ⊗ w^T, v^T ⊗ w^T.
(ii) Is tr(v^T ⊗ w) = tr(v ⊗ w^T)?
(iii) Is det(v^T ⊗ w) = det(v ⊗ w^T)?

Solution 22. (i) We have
v ⊗ w = (v1w1 v1w2 v2w1 v2w2)^T,   v^T ⊗ w = ( v1w1  v2w1 ),
                                             ( v1w2  v2w2 )
v ⊗ w^T = ( v1w1  v1w2 ),   v^T ⊗ w^T = ( v1w1  v1w2  v2w1  v2w2 ).
          ( v2w1  v2w2 )
(ii) Yes we have tr(v^T ⊗ w) = tr(v ⊗ w^T) = v1w1 + v2w2.
(iii) Yes we have det(v^T ⊗ w) = det(v ⊗ w^T) = v1w1v2w2 - v1w2v2w1 = 0.

Problem 23. Let A, B be skew-hermitian matrices, i.e. A* = -A and B* = -B. Is A ⊗ B skew-hermitian?

Solution 23. We have (A ⊗ B)* = A* ⊗ B* = (-A) ⊗ (-B) = A ⊗ B. Thus A ⊗ B is not skew-hermitian in general. However, A ⊗ B is hermitian.
Problem 24. Let A, B be n × n normal matrices over C, i.e. AA* = A*A and BB* = B*B. Show that A ⊗ B is a normal matrix.

Solution 24. We have
(A ⊗ B)(A ⊗ B)* = (A ⊗ B)(A* ⊗ B*) = (AA*) ⊗ (BB*)
= (A*A) ⊗ (B*B) = (A* ⊗ B*)(A ⊗ B)
= (A ⊗ B)*(A ⊗ B).
Problem 25. Let z = (z0 z1 ... zN)^T be a (column) vector in C^{N+1} and assume that z is nonzero. Then
z* = (z̄0 z̄1 ... z̄N)
i.e. z* is the transpose and complex conjugate of z. Consider the (N+1) × (N+1) matrix
Π := I_{N+1} - (1/(z*z))(z ⊗ z*).
Show that Π* = Π, Π^2 = Π. This means that Π is a projection matrix.

Solution 25. We have
z ⊗ z* = ( z0 )                       ( z0z̄0  z0z̄1  ...  z0z̄N )
         ( z1 ) ⊗ (z̄0, z̄1, ..., z̄N) = ( z1z̄0  z1z̄1  ...  z1z̄N )
         ( .. )                       (  ...   ...   ...   ... )
         ( zN )                       ( zNz̄0  zNz̄1  ...  zNz̄N )
and
z* ⊗ z = (z̄0, z̄1, ..., z̄N) ⊗ ( z0 )
                             ( z1 )
                             ( .. )
                             ( zN )
yields the same matrix. Thus z ⊗ z* = z* ⊗ z. Since (z ⊗ z*)* = z* ⊗ z** = z* ⊗ z and I*_{N+1} = I_{N+1} we obtain Π = Π*. Using z ⊗ z* = zz* and z ⊗ z* = z* ⊗ z we find
Π^2 = (I_{N+1} - (1/(z*z))(z ⊗ z*))(I_{N+1} - (1/(z*z))(z ⊗ z*))
= I_{N+1} - (2/(z*z))(z ⊗ z*) + (1/(z*z)^2)(z*z)(zz*) = I_{N+1} - (1/(z*z))(z ⊗ z*)
= Π.
Problem 26. Let A, B be n × n tridiagonal matrices with n ≥ 3. Is A ⊗ B a tridiagonal matrix?

Solution 26. This is obviously not true in general. For example, consider the 3 × 3 tridiagonal matrix
A = ( a11  a12   0  )
    ( a21  a22  a23 )
    (  0   a32  a33 )
and the 9 × 9 matrix A ⊗ A.
Problem 27. Consider the 2 × 2 matrix A over C with det(A) = 1.
(i) Find the inverse of A.
(ii) Let v be a normalized vector in C^2. Is the vector Av normalized?
(iii) Let
E = (  0  1 ).
    ( -1  0 )
Calculate AEA^T.
(iv) Let B be a 2 × 2 matrix over C with det(B) = 1. Find the 4 × 4 matrix (A ⊗ B)(E ⊗ E)(A ⊗ B)^T.

Solution 27. (i) The inverse matrix of A is given by
A^(-1) = (  a22  -a12 ).
         ( -a21   a11 )
(ii) In general the vector Av is not normalized, for example for A = diag(2, 1/2) and v = (1 0)^T we have Av = (2 0)^T.
(iii) Utilizing det(A) = 1 we find
AEA^T = (     0      det(A) ) = E.
        ( -det(A)      0    )
(iv) We obtain
(A ⊗ B)(E ⊗ E)(A ⊗ B)^T = (AEA^T) ⊗ (BEB^T) = E ⊗ E.

Problem 28. Let A be an n × n hermitian matrix with A^2 = I_n. Let
Π1 = (1/2)(I_n + A) ⊗ I_n,   Π2 = (1/2)(I_n - A) ⊗ I_n.
Show that Π1 and Π2 are projection matrices. Calculate the product Π1Π2.

Solution 28. Since A = A* we have Π1 = Π1* and Π2 = Π2*. Since A^2 = I_n we have
Π1^2 = Π1,   Π2^2 = Π2.
Since (I_n + A)(I_n - A) = 0_n we obtain Π1Π2 = 0_{n^2}.
Problem 29. Is the Kronecker product of two circulant matrices again a circulant matrix? Let A, B be two n × n circulant matrices. Is A ⊗ B = B ⊗ A?

Solution 29. The Kronecker product of two circulant matrices is not a circulant matrix in general as we see by calculating the Kronecker product of the two circulant matrices
( c0  c1 ) ⊗ ( d0  d1 ) = ( c0d0  c0d1  c1d0  c1d1 )
( c1  c0 )   ( d1  d0 )   ( c0d1  c0d0  c1d1  c1d0 )
                          ( c1d0  c1d1  c0d0  c0d1 )
                          ( c1d1  c1d0  c0d1  c0d0 ).
In general we do not have A ⊗ B = B ⊗ A for two n × n circulant matrices.
Problem 30. A Hankel matrix is a matrix where for each antidiagonal the element is the same. For example, Hankel matrices are
( 17  16  15  24 )      ( 72  60  55 )
( 16  15  24  33 )      ( 60  55  43 )
( 15  24  33   2 ),     ( 55  43  30 )
( 24  33   2  41 )      ( 43  30  21 )
                        ( 30  21  10 )
                        ( 21  10   8 ).
Given two square Hankel matrices, is the Kronecker product again a Hankel matrix?

Solution 30. Let
A = ( 1  2  3 ),   B = ( 2  1  4 ).
    ( 2  3  4 )        ( 1  4  5 )
    ( 3  4  1 )        ( 4  5  8 )
We see that A ⊗ B is not a Hankel matrix.
Problem 31. Let A be an invertible n × n matrix. Then the matrix (A^(-1) ⊗ I_n)(I_n ⊗ A) also has an inverse. Find the inverse.

Solution 31. We have (A^(-1) ⊗ I_n)(I_n ⊗ A) = A^(-1) ⊗ A. Thus we find that the inverse is given by A ⊗ A^(-1) since
(A^(-1) ⊗ A)(A ⊗ A^(-1)) = I_n ⊗ I_n.
Note that (identity) A ⊗ A^(-1) = (A ⊗ I_n)(I_n ⊗ A^(-1)).
Problem 32. (i) Let x, y be vectors in R^2. Can one find a 4 × 4 permutation matrix P such that P(x ⊗ y) = y ⊗ x?
(ii) Let A, B be 2 × 2 matrices. Can one find a 4 × 4 permutation matrix such that
P(A ⊗ B)P^(-1) = B ⊗ A?
(iii) Consider the vectors u and v in C^3. Find the 9 × 9 permutation matrix P such that P(u ⊗ v) = v ⊗ u.

Solution 32. (i) We have
x ⊗ y = ( x1y1 )        y ⊗ x = ( y1x1 )
        ( x1y2 )                ( y1x2 )
        ( x2y1 ),               ( y2x1 )
        ( x2y2 )                ( y2x2 )
⇒ P = ( 1  0  0  0 )
      ( 0  0  1  0 )
      ( 0  1  0  0 )
      ( 0  0  0  1 ).
(ii) We find the same permutation matrix as in (i). Note that P = P^(-1) = P^T.
(iii) We find (written as a direct sum)
P = (1) ⊕ ( 0  0  1  0  0  0  0 ) ⊕ (1).
          ( 0  0  0  0  0  1  0 )
          ( 1  0  0  0  0  0  0 )
          ( 0  0  0  1  0  0  0 )
          ( 0  0  0  0  0  0  1 )
          ( 0  1  0  0  0  0  0 )
          ( 0  0  0  0  1  0  0 )
Problem 33. Let A, B be symmetric matrices over R.
(i) What is the condition on A, B such that AB is symmetric?
(ii) What is the condition on A, B such that A ⊗ B is symmetric?

Solution 33. (i) We have (AB)^T = B^T A^T = BA. Thus the condition is that AB = BA.
(ii) We have (A ⊗ B)^T = A^T ⊗ B^T = A ⊗ B. Thus A ⊗ B is always symmetric.
Problem 34. Let A, B be 2 × 2 matrices over R. Find the conditions on A and B such that
(A ⊗ I_2)(1 0 0 1)^T = (I_2 ⊗ B)(1 0 0 1)^T.

Solution 34. We have
(A ⊗ I_2)(1 0 0 1)^T = (a11 a12 a21 a22)^T,   (I_2 ⊗ B)(1 0 0 1)^T = (b11 b21 b12 b22)^T.
We find a11 = b11, a12 = b21, a21 = b12, a22 = b22, i.e. B = A^T.
Problem 35. Let σ1, σ2, σ3 be the Pauli spin matrices.
(i) Consider the Hilbert space C^4. Show that the matrices
Π1 = (1/2)(I_2 ⊗ I_2 + σ1 ⊗ σ1),   Π2 = (1/2)(I_2 ⊗ I_2 - σ1 ⊗ σ1)
are projection matrices in C^4.
(ii) Find Π1Π2. Discuss.
(iii) Let e1, e2, e3, e4 be the standard basis in C^4. Calculate Π1e_j, Π2e_j, j = 1, 2, 3, 4 and show that we obtain 2 two-dimensional Hilbert spaces under these projections.
(iv) Do the matrices I_2 ⊗ I_2, σ1 ⊗ σ1 form a group under matrix multiplication?
(v) Show that the matrices
Π_j = (1/2)(I_2 ⊗ I_2 + σ_j ⊗ σ_j),   j = 1, 2, 3
are projection matrices.
(vi) Is
(1/2)(I_2 ⊗ I_2 + (1/2)(σ1 ⊗ σ1 + σ2 ⊗ σ2))
a projection matrix?
Solution 35. (i) We have Π1* = Π1, Π2* = Π2 and Π1^2 = Π1, Π2^2 = Π2. Thus Π1 and Π2 are projection matrices.
(ii) We obtain Π1Π2 = 0_4. Let u be an arbitrary vector in C^4. Then the vectors Π1u and Π2u are perpendicular.
(iii) We obtain
Π1e1 = Π1e4 = (1/2)(1 0 0 1)^T,   Π1e2 = Π1e3 = (1/2)(0 1 1 0)^T,
Π2e1 = -Π2e4 = (1/2)(1 0 0 -1)^T,   Π2e2 = -Π2e3 = (1/2)(0 1 -1 0)^T.
Thus Π1 projects into a two-dimensional Hilbert space spanned by the normalized vectors
(1/√2)(1 0 0 1)^T,   (1/√2)(0 1 1 0)^T.
The projection matrix Π2 projects into a two-dimensional Hilbert space spanned by the normalized vectors
(1/√2)(1 0 0 -1)^T,   (1/√2)(0 1 -1 0)^T.
The four vectors we find are the Bell basis.
(iv) Yes. Matrix multiplication yields
(I_2 ⊗ I_2)(I_2 ⊗ I_2) = I_2 ⊗ I_2,   (I_2 ⊗ I_2)(σ1 ⊗ σ1) = σ1 ⊗ σ1,
(σ1 ⊗ σ1)(I_2 ⊗ I_2) = σ1 ⊗ σ1,   (σ1 ⊗ σ1)(σ1 ⊗ σ1) = I_2 ⊗ I_2.
The neutral element is I_2 ⊗ I_2. The inverse element of σ1 ⊗ σ1 is σ1 ⊗ σ1.
(v) Since σ_j^2 = I_2 we obtain Π_j* = Π_j and Π_j^2 = Π_j for j = 1, 2, 3. Thus the Π_j's are projection matrices.
(vi) We find the projection matrix
( 1/2   0    0    0  )
(  0   1/2  1/2   0  )
(  0   1/2  1/2   0  )
(  0    0    0   1/2 ).

Problem 36.
Let A be an n × n matrix. Show that the 2n × 2n matrix
B = ( I_n     I_n   )
    ( I_n  I_n + A )
can be expressed using the Kronecker product, the n × n identity matrix I_n, the matrix A and the 2 × 2 matrices
C = ( 1  1 ),   D = ( 0  0 ).
    ( 1  1 )        ( 0  1 )

Solution 36. We have B = C ⊗ I_n + D ⊗ A.
Problem 37. Let E_{jk} (j, k = 1, ..., n) be the n × n matrices with 1 at entry (j, k) and 0 otherwise. Let n = 2. Find the 4 × 4 matrices
X = E11 ⊗ E11 + E12 ⊗ E12 + E21 ⊗ E21 + E22 ⊗ E22
and
Y = E11 ⊗ E11 + E21 ⊗ E12 + E12 ⊗ E21 + E22 ⊗ E22.
Are the matrices invertible?

Solution 37. We obtain
X = ( 1  0  0  1 )      Y = ( 1  0  0  0 )
    ( 0  0  0  0 ),         ( 0  0  1  0 )
    ( 0  0  0  0 )          ( 0  1  0  0 )
    ( 1  0  0  1 )          ( 0  0  0  1 ).
The matrix Y is invertible, but not the matrix X.
Problem 38. Let A, B, C be 2 × 2 matrices. Can we find 8 × 8 permutation matrices P and Q such that P(A ⊗ B ⊗ C)Q = C ⊗ B ⊗ A?

Solution 38. We find P = Q with
P = (1) ⊕ ( 0  0  0  1  0  0 ) ⊕ (1).
          ( 0  1  0  0  0  0 )
          ( 0  0  0  0  0  1 )
          ( 1  0  0  0  0  0 )
          ( 0  0  0  0  1  0 )
          ( 0  0  1  0  0  0 )
Problem 39. Let P1 be an m × m permutation matrix and P2 an n × n permutation matrix. Then P1 ⊗ P2 is an mn × mn permutation matrix. Let K1 and K2 be m × m and n × n matrices, respectively, such that P1 = e^{K1} and P2 = e^{K2}. Find K such that P1 ⊗ P2 = e^K, where K is an mn × mn matrix.

Solution 39. We have P1 ⊗ P2 = e^{K1} ⊗ e^{K2}. Since
e^{K1} ⊗ e^{K2} = e^{K1 ⊗ I_n + I_m ⊗ K2}
we obtain K = K1 ⊗ I_n + I_m ⊗ K2.
Problem 40. Let A and B be two complex-valued matrices, not necessarily of the same size. If B ≠ 0 then the operation ⊘ given by
(A ⊗ B) ⊘ B = A
is well defined, where ⊗ is the Kronecker product. The operation ⊘ is called a right Kronecker quotient. Show that the nearest Kronecker product
M ⊘ B := A   (‖M - A ⊗ B‖ is a minimum)
defines a Kronecker quotient, where ‖·‖ denotes the Frobenius norm. Show that
M ⊘ B = tr_2((I ⊗ B*)M)/‖B‖^2
where tr_2 denotes the partial trace over matrices with the same size as B*B, and I is an appropriate identity matrix. Are there any other Kronecker quotients?

Solution 40. Clearly, ‖A ⊗ B - C ⊗ B‖ attains a minimum when C = A. We need to show that no other matrix C can attain the minimum. Suppose ‖A ⊗ B - C ⊗ B‖ = 0. Then
‖A ⊗ B - C ⊗ B‖^2 = tr((A ⊗ B - C ⊗ B)*(A ⊗ B - C ⊗ B)) = ‖A - C‖^2 ‖B‖^2 = 0
if and only if ‖A - C‖ = 0 (since B ≠ 0 by assumption). In other words, the minimum 0 is only attained when A = C. Let {B_1, ..., B_m} be an orthogonal basis (with respect to the Frobenius inner product) for the vector space of matrices of the same size as B, with B_1 = B. In other words tr(B_j B_k*) = tr(B_k* B_j) = δ_{j,k}‖B_j‖^2. We may write
M = sum_{j=1}^m M_j ⊗ B_j
so that
‖M - A ⊗ B‖^2 = tr((M - A ⊗ B)*(M - A ⊗ B)) = ‖M_1 - A‖^2 ‖B‖^2 + sum_{j=2}^m ‖M_j‖^2 ‖B_j‖^2
which achieves a minimum when ‖A - M_1‖ is minimized, i.e. A = M_1 where
M_1 = tr_2((I ⊗ B*)M)/‖B‖^2.
There are many other Kronecker quotients, for example
M ⊘ B = (1/|{(i,j) : (B)_{i,j} ≠ 0}|) sum_{(i,j) : (B)_{i,j} ≠ 0} tr_2((I ⊗ E_{ij})*M)/(B)_{i,j}
defines a Kronecker quotient, where E_{ij} is the matrix of the same size as B with a 1 in row i and column j and 0 in all other entries.
Problem 41. Let d ≥ 2. Consider the vector space V = C^d. Let V × V be the Cartesian product of V with itself. Let u, v, w be elements of V. The Kronecker product (tensor product) is defined by the relations
(u + v) ⊗ w = u ⊗ w + v ⊗ w,   u ⊗ (v + w) = u ⊗ v + u ⊗ w,   (cu) ⊗ v = u ⊗ (cv) = c(u ⊗ v).
Let V ⊗ V = V^{⊗2} = span{u ⊗ v : u, v ∈ V}, where the span is taken in C^{d^2}. Let {e_j}_{j=1}^d be a basis in the vector space V. Then {e_j ⊗ e_k} (j, k = 1, ..., d) is a basis in the vector space V ⊗ V with dimension d^2. We define the k-th tensor power of the vector space V, denoted by V^{⊗k}, given by the elements
u_1 ⊗ u_2 ⊗ ... ⊗ u_k.
Thus the dimension of the vector space V^{⊗k} is d^k. Let k ≤ d. Then the wedge product (also called Grassmann product, exterior product) is defined by
v_1 ∧ ... ∧ v_k := sum_{π ∈ S_k} σ(π) v_{π(1)} ⊗ ... ⊗ v_{π(k)}
where S_k is the permutation group and σ is its alternating representation, i.e. σ(π) = +1 for even permutations and σ(π) = -1 for odd permutations. The complex vector space spanned by elements of the form v_1 ∧ ... ∧ v_k is denoted by Λ^k V and has dimension (d choose k). Give a computer algebra implementation with SymbolicC++ of the wedge product. Apply it to the case d = 3 with the basis
v1 = (1/√2)(1 0 1)^T,   v2 = (0 1 0)^T,   v3 = (1/√2)(1 0 -1)^T.
Calculate v1 ∧ v2, v2 ∧ v3, v3 ∧ v1, v1 ∧ v2 ∧ v3.
Solution 41. The vector class of the Standard Template Library of C++ is utilized and kron (Kronecker product) from SymbolicC++.

/* wedge.cpp */
#include <iostream>
#include <vector>
#include "symbolicc++.h"
using namespace std;

Symbolic wedge(const vector<Symbolic> &v)
{
  Symbolic term, sum = 0;
  int sign, i, k, l, n = v.size();
  i = 0;
  vector<int> j(n,-1); j[0] = -1;
  while(i >= 0)
  {
    if(++j[i]==n) { j[i--] = -1; continue; }
    if(i < 0) break;
    for(k=0;k<i;++k) if(j[k]==j[i]) break;
    if(k!=i) continue; ++i;
    if(i==n)
    {
      for(sign=1,k=0;k<n;k++)
      {
        if(k==0) term = v[j[0]]; else term = kron(term,v[j[k]]);
        for(l=k+1;l<n;l++) if(j[l]<j[k]) sign = -sign;
      }
      sum += sign*term; --i;
    }
  }
  return sum;
}
int main(void)
{
  vector<Symbolic> w1(2), w2(3);
  Symbolic v1 = (Symbolic(1),0,1)/sqrt(Symbolic(2));
  Symbolic v2 = (Symbolic(0),1,0);
  Symbolic v3 = (Symbolic(1),0,-1)/sqrt(Symbolic(2));
  cout << "v1 = " << v1 << endl;
  cout << "v2 = " << v2 << endl;
  cout << "v3 = " << v3 << endl;
  w1[0] = v1; w1[1] = v2; cout << "v1 ^ v2 = " << wedge(w1) << endl;
  w1[0] = v2; w1[1] = v3; cout << "v2 ^ v3 = " << wedge(w1) << endl;
  w1[0] = v3; w1[1] = v1; cout << "v3 ^ v1 = " << wedge(w1) << endl;
  w2[0] = v1; w2[1] = v2; w2[2] = v3;
  cout << "v1 ^ v2 ^ v3 = " << wedge(w2) << endl;
  return 0;
}
Supplementary Problems

Problem 1. (i) Given the standard basis {e₁, e₂} in C². Is

(1/√2)(e₁ ⊗ e₁ + e₂ ⊗ e₂),  (1/√2)(e₁ ⊗ e₁ − e₂ ⊗ e₂),
(1/√2)(e₁ ⊗ e₂ + e₂ ⊗ e₁),  (1/√2)(e₁ ⊗ e₂ − e₂ ⊗ e₁)

a basis in C⁴?
(ii) Let e₁, e₂ be the standard basis in C². The eight vectors

e₁⊗e₁⊗e₁, e₁⊗e₁⊗e₂, e₁⊗e₂⊗e₁, e₁⊗e₂⊗e₂, e₂⊗e₁⊗e₁, e₂⊗e₁⊗e₂, e₂⊗e₂⊗e₁, e₂⊗e₂⊗e₂

provide the standard basis in R⁸. Do the 8 vectors

(1/√2)(e₁⊗e₁⊗e₁ ± e₂⊗e₂⊗e₂),  (1/√2)(e₁⊗e₁⊗e₂ ± e₂⊗e₂⊗e₁),
(1/√2)(e₁⊗e₂⊗e₁ ± e₂⊗e₁⊗e₂),  (1/√2)(e₁⊗e₂⊗e₂ ± e₂⊗e₁⊗e₁)

form an orthonormal basis in R⁸?
(iii) Let the vectors v₁, v₂ form an orthonormal basis in C². Then the four vectors

v₁⊗v₁,  v₁⊗v₂,  v₂⊗v₁,  v₂⊗v₂

form an orthonormal basis in C⁴. Do the four vectors

v₁⊗v₁ + v₂⊗v₂,  v₁⊗v₁ − v₂⊗v₂,  v₁⊗v₂ + v₂⊗v₁,  v₁⊗v₂ − v₂⊗v₁

form a basis in C⁴? Can the four vectors be written as a Kronecker product of two vectors in C²?
(iv) Let e₊, e₀, e₋ be three vectors forming an orthonormal basis in R³. Do the 9 vectors

v_{0,0} = (1/√3)(e₊ ⊗ e₋ + e₋ ⊗ e₊ − e₀ ⊗ e₀)
v_{1,0} = (1/√2)(e₊ ⊗ e₋ − e₋ ⊗ e₊)
v_{1,±1} = ±(1/√2)(e_± ⊗ e₀ − e₀ ⊗ e_±)
v_{2,0} = (1/√6)(e₊ ⊗ e₋ + e₋ ⊗ e₊ + 2e₀ ⊗ e₀)
v_{2,±1} = (1/√2)(e_± ⊗ e₀ + e₀ ⊗ e_±)
v_{2,±2} = e_± ⊗ e_±

form an orthonormal basis in R⁹? These vectors play a role for the π-mesons.
Problem 2. Show that the condition on x₁, x₂, y₁, y₂ ∈ R such that

(x₁  x₂)ᵀ ⊗ (y₁  y₂)ᵀ = (y₁  y₂)ᵀ ⊗ (x₁  x₂)ᵀ

is given by x₁y₂ = x₂y₁.

Problem 3.
The 6 × 6 primary cyclic matrix S is given by

S =
( 0 1 0 0 0 0 )
( 0 0 1 0 0 0 )
( 0 0 0 1 0 0 )
( 0 0 0 0 1 0 )
( 0 0 0 0 0 1 )
( 1 0 0 0 0 0 )

⟹  Sᵀ =
( 0 0 0 0 0 1 )
( 1 0 0 0 0 0 )
( 0 1 0 0 0 0 )
( 0 0 1 0 0 0 )
( 0 0 0 1 0 0 )
( 0 0 0 0 1 0 )
Let e_j (j = 1, ..., 6) be the standard basis in R⁶. Then

Sᵀe₁ = e₂,  Sᵀe₂ = e₃,  Sᵀe₃ = e₄,  Sᵀe₄ = e₅,  Sᵀe₅ = e₆,  Sᵀe₆ = e₁

and Se₁ = e₆, Se₂ = e₁, Se₃ = e₂, Se₄ = e₃, Se₅ = e₄, Se₆ = e₅. Consider the 2 × 2 Hadamard matrix U. Apply Sᵀ(I₃ ⊗ U) and S(I₃ ⊗ U) to the standard basis. Discuss.
Problem 4. (i) We know that if A and B are normal matrices, then A ⊗ B is normal. Can we find two matrices C and D which are nonnormal but C ⊗ D is normal?
(ii) Consider an invertible nonnormal 2 × 2 matrix M. Is the matrix M ⊗ M⁻¹ nonnormal?

Problem 5. Let A, B be invertible matrices over R. Then

(AB)⁻¹ = B⁻¹A⁻¹,  (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹.

Find the conditions on 2 × 2 matrices over R such that A⁻¹ ⊗ B⁻¹ = B⁻¹ ⊗ A⁻¹.
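The identity (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹ is easy to confirm numerically; a quick sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(7)
A = np.eye(2) + 0.1 * rng.normal(size=(2, 2))   # close to I, hence invertible
B = np.eye(3) + 0.1 * rng.normal(size=(3, 3))

lhs = np.linalg.inv(np.kron(A, B))
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))
print(np.allclose(lhs, rhs))   # True
```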
Problem 6. Consider a set of four 2 × 2 matrices A, B, C, D.
(i) Are the matrices linearly independent?
(ii) Are the matrices A ⊗ A, B ⊗ B, C ⊗ C, D ⊗ D linearly independent?
(iii) Show that the 2 × 2 matrices B and D are similar. Show that the matrices B ⊗ D and D ⊗ B are similar.
Problem 7. Let ⊕ be the direct sum.
(i) Let A be a 2 × 2 matrix over C. Can A ⊗ A be written as

A ⊗ A = B₁ ⊕ B₂ + C₁ ⊕ C₂ + D₁ ⊕ D₂

where B₁, B₂ are 2 × 2 matrices, C₁, D₂ are 1 × 1 matrices and C₂, D₁ are 3 × 3 matrices?
(ii) Let A, B, C, D be n × n matrices. Is

(A ⊕ B) ⊗ (C ⊕ D) = (A ⊗ (C ⊕ D)) ⊕ (B ⊗ (C ⊕ D)) ?

(iii) Let A, B be 2 × 2 matrices over R. Can A ⊗ B be written as

A ⊗ B = C ⊕ D + 1 ⊕ E + F ⊕ 1 ?

Here C, D are 2 × 2 matrices over R and E, F are 3 × 3 matrices over R.
Problem 8. Let J = iσ₂, where σ₂ is the second Pauli spin matrix.
(i) Show that any 2 × 2 matrix A ∈ SL(2,C) satisfies AᵀJA = J, where Aᵀ is the transpose of A.
(ii) Let A be a 2 × 2 matrix with A ∈ SL(2,C). Show that

(A ⊗ A)ᵀ(J ⊗ J)(A ⊗ A) = J ⊗ J  and  (A ⊕ A)ᵀ(J ⊕ J)(A ⊕ A) = J ⊕ J.
Problem 9. (i) Find the conditions on the 2 × 2 matrices A and B such that

(A ⊗ B) ⊗ (A ⊗ B) = (A ⊗ A) ⊗ (B ⊗ B).

(ii) Let A, B be 2 × 2 matrices. Find the conditions on A and B such that

[A ⊗ B, B ⊗ A] = 0₄  ⟺  (AB) ⊗ (BA) = (BA) ⊗ (AB).
Problem 10. Consider the 4 × 4 matrix

M =
( 16  3  2 13 )
(  5 10 11  8 )
(  9  6  7 12 )
(  4 15 14  1 )

The matrix M is a so-called magic square, since its row sums, column sums, principal diagonal sum and principal counter diagonal sum are all equal. Is M ⊗ M a magic square? Prove or disprove. Find the eigenvalues of M.
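A quick numerical exploration (assuming NumPy; here with the classical 4 × 4 magic square, whose common sum is 34) suggests the answer for M ⊗ M, whose line sums all turn out to equal 34² = 1156:

```python
import numpy as np

M = np.array([[16,  3,  2, 13],
              [ 5, 10, 11,  8],
              [ 9,  6,  7, 12],
              [ 4, 15, 14,  1]])

def magic_sum(X):
    # returns the common line sum if X is magic, else None
    n = X.shape[0]
    sums = [X[i, :].sum() for i in range(n)]
    sums += [X[:, j].sum() for j in range(n)]
    sums += [X.trace(), np.fliplr(X).trace()]
    return int(sums[0]) if len(set(int(s) for s in sums)) == 1 else None

print(magic_sum(M))              # 34
print(magic_sum(np.kron(M, M)))  # 1156
```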
Problem 11. Let {e_j}_{1≤j≤N} be the standard basis in C^N and S = {1, ..., N} be the index set. For each ordered index subset I = {j₁, j₂, ..., j_p} ⊆ S, one defines e_I as

e_I = Σ_{σ ∈ S_p} e_{j_σ(1)} ⊗ e_{j_σ(2)} ⊗ ··· ⊗ e_{j_σ(p)}.

Let N = 3 and p = 2. Find all e_I's.
Problem 12. Let A, B be 2 × 2 matrices over C.
(i) Assume that A ⊗ B = B ⊗ A. Can we conclude that AB = BA? Prove or disprove.
(ii) Assume that AB = BA. Can we conclude that A ⊗ B = B ⊗ A?
Problem 13.
(i) Find the coefficients t_{jkℓ} such that T = (1  0  0  0  0  0  0  1)ᵀ.
(ii) Find the coefficients t_{jkℓ} such that T = (1/√2)(1  0  0  0  0  0  0  1)ᵀ.

Problem 14. (i) Consider the vectors u in C³ and v in C², respectively. Find the 6 × 6 permutation matrix P such that P(u ⊗ v) = v ⊗ u.
(ii) Find all non-trivial solutions of the equation x ⊗ y = y ⊗ x, where x = (x₁  x₂)ᵀ and y = (y₁  y₂  y₃)ᵀ. From the equation we obtain the four conditions

x₁y₂ = y₁x₂,  x₁y₃ = y₂x₁,  x₂y₁ = y₂x₂,  x₂y₂ = y₃x₁.

A case study has to be done starting with x₁ = 0 and x₁ ≠ 0.
(iii) Consider the standard basis e₁, e₂, e₃ in R³. Find a permutation matrix T such that

T(e₁ ⊗ e₂ ⊗ e₃) = e₃ ⊗ e₂ ⊗ e₁.
Problem 15. Consider an invertible 2n × 2n matrix C. The defining equation for an element S ∈ Sp(2n) is given by SᵀCS = C. Find the inverse of the matrix C.
Problem 16. Show that the Kronecker product of the two given 2 × 2 matrices can be written as a sum of direct sums, where ⊕ denotes the direct sum.
Problem 17. Let σ₀ = I₂, σ₁, σ₂, σ₃ be the Pauli spin matrices. These matrices form an orthogonal basis in the Hilbert space of 2 × 2 matrices.
(i) Show that any 2 × 2 matrix can be written as linear combination

Σ_{j=0}^{3} c_j σ_j,  c_j ∈ C.

Show that any 4 × 4 matrix can be written as linear combination

Σ_{j₁=0}^{3} Σ_{j₂=0}^{3} c_{j₁j₂} σ_{j₁} ⊗ σ_{j₂},  c_{j₁j₂} ∈ C.

Show that any 2ⁿ × 2ⁿ matrix can be written as linear combination

Σ_{j₁=0}^{3} Σ_{j₂=0}^{3} ··· Σ_{jₙ=0}^{3} c_{j₁j₂...jₙ} σ_{j₁} ⊗ σ_{j₂} ⊗ ··· ⊗ σ_{jₙ},  c_{j₁j₂...jₙ} ∈ C.
(ii) Consider the 256 sixteen times sixteen matrices

τ_{j₀j₁j₂j₃} := σ_{j₀} ⊗ σ_{j₁} ⊗ σ_{j₂} ⊗ σ_{j₃}

where j₀, j₁, j₂, j₃ ∈ {0, 1, 2, 3}. Let

|ψ⟩ = (1/4)(1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1)ᵀ.

Find τ_{j₀j₁j₂j₃}|ψ⟩.

Problem 18. Let A be an invertible n × n matrix with A² = Iₙ. Consider

X₁ = Iₙ ⊗ A ⊗ A⁻¹,  X₂ = A ⊗ Iₙ ⊗ A⁻¹,  X₃ = A ⊗ A⁻¹ ⊗ Iₙ.

(i) Find X₁², X₂², X₃², X₁X₂, X₁X₃, X₂X₃.
(ii) Find the commutators [X₁, X₂], [X₂, X₃], [X₃, X₁].
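For the Pauli expansion of Problem 17, orthogonality tr(σ_j σ_k) = 2δ_{jk} gives the coefficients as c_j = tr(σ_j A)/2. A numerical sketch assuming NumPy:

```python
import numpy as np

pauli = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

A = np.array([[1, 2j], [3, 4]], dtype=complex)
c = [np.trace(s @ A) / 2 for s in pauli]       # c_j = tr(sigma_j A)/2
rec = sum(cj * s for cj, s in zip(c, pauli))   # reconstruct A from the c_j
print(np.allclose(rec, A))   # True
```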
Problem 19. Let A, B be given nonzero 2 × 2 matrices. Let X be a 2 × 2 matrix. Find the solutions of the equation X ⊗ A = B ⊗ X.
Problem 20. Let A be an m × m matrix and B be an n × n matrix. The Kronecker sum is defined by

A ⊕_K B := A ⊗ Iₙ + I_m ⊗ B.

Is (A ⊕_K B) ⊗ C = (A ⊗ C) ⊕_K (B ⊗ C)? Prove or disprove.
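A useful standard property of the Kronecker sum (not needed for the problem, but handy for checking) is that its eigenvalues are all sums λ_i + μ_j of eigenvalues of A and B. A sketch assuming NumPy:

```python
import numpy as np

def kron_sum(A, B):
    m, n = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)

A = np.array([[1.0, 2.0], [0.0, 3.0]])   # triangular: eigenvalues 1, 3
B = np.array([[4.0, 0.0], [1.0, 5.0]])   # triangular: eigenvalues 4, 5
ev = np.sort(np.linalg.eigvals(kron_sum(A, B)).real)
print(ev)   # [5. 6. 7. 8.]
```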
Problem 21. Let A be m × m over C, and B be n × n over C. Solve the equation

A ⊗ B = A ⊗ Iₙ + I_m ⊗ B.

Obviously A = 0_m, B = 0_n is a solution. Are there nonzero solutions? Could the vec-operator be of any use?
Problem 22. Let A be a 2 × 2 matrix over C. Show that the condition on A such that A ⊗ I₂ = I₂ ⊗ A is given by a₁₂ = a₂₁ = 0, a₁₁ = a₂₂.
Problem 23. (i) Let u, v be normalized column vectors in Cⁿ with u*v = 0. Show that u ⊗ v − v ⊗ u cannot be written as

u ⊗ v − v ⊗ u = a ⊗ b

where a, b ∈ Cⁿ. This problem plays a role for entanglement.
(ii) Let u, v ∈ Cⁿ and linearly independent. Can one find x, y ∈ Cⁿ such that x ⊗ y = u ⊗ v − v ⊗ u?
(iii) Let A, B be nonzero n × n matrices over C. Assume that A and B are linearly independent. Show that A ⊗ B − B ⊗ A cannot be written as A ⊗ B − B ⊗ A = X ⊗ Y, where X, Y are n × n matrices over C.
Problem 24. Let n ∈ N. The Fibonacci numbers are provided by the entries of the matrix

( 1 1 )ⁿ
( 1 0 )

Find the sequence provided by the corresponding 4 × 4 matrix. Compare with the sequence of the Fibonacci numbers. Discuss.
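The 2 × 2 statement can be checked directly: the entries of (1 1; 1 0)ⁿ are Fibonacci numbers. A self-contained sketch in exact integer arithmetic:

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = mat_mul(R, M)
    return R

F = [[1, 1], [1, 0]]
# (F^n)[0][1] equals the n-th Fibonacci number
print([mat_pow(F, n)[0][1] for n in range(1, 9)])   # [1, 1, 2, 3, 5, 8, 13, 21]
```

Since (F ⊗ F)ⁿ = Fⁿ ⊗ Fⁿ, the entries of the corresponding 4 × 4 Kronecker power are products of Fibonacci numbers.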
Problem 25. Let A be an n × n matrix over R. The matrix A is called symmetric if A = Aᵀ. If the matrix A is symmetric about the 'northeast-to-southwest' diagonal, i.e. a_{j,k} = a_{n−k+1,n−j+1}, it is called persymmetric. Let X, Y be n × n matrices over R which are symmetric and persymmetric. Show that X ⊗ Y is again symmetric and persymmetric.
Problem 26. Let A be an n × n matrix over C and B be an m × m matrix over C. Assume that A ⊗ B is positive semidefinite. Can we conclude that A and B are positive semidefinite?
Problem 27. Let A, B be n × n positive semidefinite matrices. Show that

tr(A ⊗ B) ≤ (1/4)(tr(A) + tr(B))²,  tr(A ⊗ B) ≤ (1/2)tr(A ⊗ A + B ⊗ B).
Problem 28. Let U be a 2 × 2 unitary matrix and I₂ be the 2 × 2 identity matrix. Show that the 4 × 4 matrix

V = (0 0; 0 1) ⊗ U + (e^{iα} 0; 0 0) ⊗ I₂,  α ∈ R

is unitary. Utilize UU* = I₂ and

(0 0; 0 1)(e^{iα} 0; 0 0) = (e^{iα} 0; 0 0)(0 0; 0 1) = 0₂.
Problem 29. Let A, B be n × n matrices over R. Let u, v be column vectors in Rⁿ. Show that

A(u ⊗ vᵀ)B = (Au) ⊗ (vᵀB).

It can also be written as A(uvᵀ)B = (Au)(vᵀB) since matrix multiplication is associative.
Problem 30. Consider the invertible 4 × 4 matrix

T =
( 1  0  0  1 )
( 1  0  0 −1 )
( 0  1  1  0 )
( 0 −i  i  0 )

⟹  T⁻¹ = (1/2)
( 1  1  0  0 )
( 0  0  1  i )
( 0  0  1 −i )
( 1 −1  0  0 )

Let

R(α) = ( cos(α) −sin(α) )
       ( sin(α)  cos(α) )

Show that T(R(α) ⊗ R(β))T⁻¹ is given by

( cos(α−β)    0          0          i sin(α−β) )
( 0           cos(α+β)  −sin(α+β)   0          )
( 0           sin(α+β)   cos(α+β)   0          )
( i sin(α−β)  0          0          cos(α−β)   )
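The conjugation can be verified numerically; a sketch assuming NumPy, with T, T⁻¹ and R written out explicitly in the code:

```python
import numpy as np

T = np.array([[1, 0, 0, 1],
              [1, 0, 0, -1],
              [0, 1, 1, 0],
              [0, -1j, 1j, 0]])
Tinv = 0.5 * np.array([[1, 1, 0, 0],
                       [0, 0, 1, 1j],
                       [0, 0, 1, -1j],
                       [1, -1, 0, 0]])

def R(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

a, b = 0.7, 0.3
M = T @ np.kron(R(a), R(b)) @ Tinv
expected = np.array([
    [np.cos(a - b), 0, 0, 1j * np.sin(a - b)],
    [0, np.cos(a + b), -np.sin(a + b), 0],
    [0, np.sin(a + b),  np.cos(a + b), 0],
    [1j * np.sin(a - b), 0, 0, np.cos(a - b)]])
print(np.allclose(T @ Tinv, np.eye(4)))   # True
print(np.allclose(M, expected))           # True
```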
Problem 31. Let v be a normalized (column) vector in R². Is the matrix I₂ − v vᵀ invertible?
Chapter 4

Traces, Determinants and Hyperdeterminants

A function on n × n matrices det : C^{n×n} → C is called a determinant function if and only if it satisfies the following conditions:
1) det is linear in each row if the other rows of the matrix are held fixed.
2) If the n × n matrix A has two identical rows then det(A) = 0.
3) If Iₙ is the n × n identity matrix, then det(Iₙ) = 1.
Let A, B be n × n matrices and c ∈ C. Then we have

det(AB) = det(A) det(B),  det(cA) = cⁿ det(A).

The determinant of A is the product of the eigenvalues of A, det(A) = λ₁ · λ₂ · ... · λₙ. Let A be an n × n matrix. Then the trace is defined as

tr(A) := Σ_{j=1}^{n} a_{jj}.

The trace is independent of the underlying basis. The trace is the sum of the eigenvalues (counting multiplicities) of A, i.e.

tr(A) = Σ_{j=1}^{n} λ_j.

Let A, B, C be n × n matrices. Then (cyclic invariance)

tr(ABC) = tr(CAB) = tr(BCA).

The trace and determinant of a square matrix A are related by the identity

det(exp(A)) = exp(tr(A)).
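These identities lend themselves to a quick numerical check; the last one is tested with a truncated power series for the matrix exponential (NumPy assumed):

```python
import numpy as np

def expm(A, terms=40):
    # truncated power series for the matrix exponential
    E = np.eye(A.shape[0]); T = np.eye(A.shape[0])
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    return E

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)); B = rng.normal(size=(3, 3)); C = rng.normal(size=(3, 3))

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
print(np.isclose(np.linalg.det(2 * A), 2**3 * np.linalg.det(A)))              # True
t = np.trace(A @ B @ C)
print(np.isclose(t, np.trace(C @ A @ B)), np.isclose(t, np.trace(B @ C @ A))) # True True
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))                # True
```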
Problem 1. Consider the Hadamard matrix

U = (1/√2)(1 1; 1 −1).

Find tr(U), tr(U²), det(U) and det(U²).

Solution 1. We obtain tr(U) = 0, tr(U²) = 2, det(U) = −1, det(U²) = 1. Can the matrix U be reconstructed from this information?
Problem 2. Consider the 2 × 2 nonnormal matrix

A = (0 1; 0 0).

Can we find an invertible 2 × 2 matrix Q such that Q⁻¹AQ is a diagonal matrix?

Solution 2. The answer is no. Let

Ã = Q⁻¹AQ,  Ã = (a 0; 0 b)

where a, b ∈ C. Taking the trace of Ã we find a + b = tr(Q⁻¹AQ) = tr(A) = 0. Taking the determinant of Ã we obtain

ab = det(Q⁻¹AQ) = det(Q⁻¹) det(A) det(Q) = det(A) = 0.

Thus from a + b = 0 and ab = 0 we find a = b = 0. It follows that QÃQ⁻¹ = A. Therefore A is the zero matrix which contradicts the assumption for the matrix A. Thus no invertible Q can be found. This means that A is not diagonalizable.
Problem 3. Let A be a 2 × 2 matrix over R. Assume that tr(A) = 0 and tr(A²) = 0. Can we conclude that A is the 2 × 2 zero matrix?

Solution 3. No we cannot conclude that A is the 2 × 2 zero matrix. The nonnormal matrix

A = (0 1; 0 0)

satisfies tr(A) = 0 and tr(A²) = 0. What happens if we also assume that A is normal?
Problem 4. For an integer n ≥ 3, let θ := 2π/n. Find the determinant of the n × n matrix A + Iₙ, where the matrix A = (a_{jk}) has the entries a_{jk} = cos(jθ + kθ) for all j, k = 1, ..., n.

Solution 4. The determinant of a square matrix is the product of its eigenvalues. We compute the determinant of Iₙ + A by calculating the eigenvalues of Iₙ + A. The eigenvalues of Iₙ + A are obtained by adding 1 to each of the eigenvalues of A. Thus we calculate the eigenvalues of A and then add 1. We show that the eigenvalues of A are n/2, −n/2, 0, ..., 0, where 0 occurs with multiplicity n − 2. We define column vectors v^(m), 0 ≤ m ≤ n − 1, componentwise by v_j^(m) = e^{ijmθ}. We form a matrix from the column vectors v^(m). Its determinant is a Vandermonde product and hence is nonzero. Thus the vectors v^(m) form a basis in Cⁿ. Since cos(z) = (e^{iz} + e^{−iz})/2 for any z ∈ C we obtain

(Av^(m))_j = Σ_{k=1}^{n} cos(jθ + kθ) e^{ikmθ}
           = (e^{ijθ}/2) Σ_{k=1}^{n} e^{ik(m+1)θ} + (e^{−ijθ}/2) Σ_{k=1}^{n} e^{ik(m−1)θ}

where j = 1, ..., n. Since

Σ_{k=1}^{n} e^{iklθ} = 0

for integer l unless n | l, we conclude that Av^(m) = 0 for m = 0 and for 2 ≤ m ≤ n − 1. We also find

(Av^(1))_j = (n/2) e^{−ijθ} = (n/2)(v^{(n−1)})_j,  (Av^{(n−1)})_j = (n/2) e^{ijθ} = (n/2)(v^{(1)})_j.

Thus

A(v^{(1)} ± v^{(n−1)}) = ±(n/2)(v^{(1)} ± v^{(n−1)}).

Consequently

{v^{(0)}, v^{(2)}, v^{(3)}, ..., v^{(n−2)}, v^{(1)} + v^{(n−1)}, v^{(1)} − v^{(n−1)}}

is a basis for Cⁿ of eigenvectors of A with the corresponding eigenvalues. Since the determinant of Iₙ + A is the product of (1 + λ) over all eigenvalues λ of A, we obtain

det(Iₙ + A) = (1 + n/2)(1 − n/2) = 1 − n²/4.
Problem 5. Let α, β, γ, δ be real numbers. Is the 2 × 2 matrix

U = e^{iα} (e^{−iβ/2} 0; 0 e^{iβ/2}) (cos(γ/2) −sin(γ/2); sin(γ/2) cos(γ/2)) (e^{−iδ/2} 0; 0 e^{iδ/2})

unitary? What is the determinant of U?

Solution 5. Each of the three matrices on the right-hand side is unitary and the factor e^{iα} has modulus 1. The product of two unitary matrices is again a unitary matrix. Thus U is unitary. The determinant of each of the three matrices on the right-hand side is 1. Thus det(U) = e^{2iα}.
Problem 6. Let A and B be two n × n matrices over C. If there exists a nonsingular n × n matrix X such that A = XBX⁻¹, then A and B are said to be similar matrices. Show that the spectra (eigenvalues) of two similar matrices are equal.

Solution 6. We have

det(A − λIₙ) = det(XBX⁻¹ − λXIₙX⁻¹) = det(X(B − λIₙ)X⁻¹)
             = det(X) det(B − λIₙ) det(X⁻¹)
             = det(B − λIₙ).
Problem 7. Let A and B be n × n matrices over C. Show that the matrices AB and BA have the same set of eigenvalues.

Solution 7. Consider first the case that A is invertible. Then we have AB = A(BA)A⁻¹. Thus AB and BA are similar and therefore have the same set of eigenvalues. If A is singular we apply a continuity argument: consider A + εIₙ. We choose δ > 0 such that A + εIₙ is invertible for all ε with 0 < ε < δ. Thus (A + εIₙ)B and B(A + εIₙ) have the same set of eigenvalues for every ε ∈ (0, δ). We equate their characteristic polynomials to obtain

det(λIₙ − (A + εIₙ)B) = det(λIₙ − B(A + εIₙ)),  0 < ε < δ.

Since both sides are analytic functions of ε we find with ε → 0⁺ that

det(λIₙ − AB) = det(λIₙ − BA).
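A numerical illustration (NumPy assumed): the characteristic polynomials of AB and BA coincide even when A is singular:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
A[0] = 0.0   # make A singular on purpose

p1 = np.poly(A @ B)   # coefficients of det(x I - AB)
p2 = np.poly(B @ A)
print(np.allclose(p1, p2))   # True
```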
Problem 8. Let A, B be n × n matrices. Assume that [A, B] = A. What can be said about the trace of A?

Solution 8. Since tr([A, B]) = 0 we have tr(A) = 0.
Problem 9. An n × n matrix A is called reducible if there is a permutation matrix P such that

PᵀAP = ( B C )
       ( 0 D )

where B and D are square matrices of order at least 1. An n × n matrix A is called irreducible if it is not reducible. Show that the n × n primary permutation matrix

( 0 1 0 ... 0 )
( 0 0 1 ... 0 )
( ...         )
( 0 0 0 ... 1 )
( 1 0 0 ... 0 )

is irreducible.

Solution 9. Suppose the matrix A is reducible. Let

PᵀAP = J₁ ⊕ J₂ ⊕ ··· ⊕ J_k,  k ≥ 2

where P is some permutation matrix and the J_j are irreducible matrices of order < n. Here ⊕ denotes the direct sum. The rank of A − Iₙ is n − 1 since det(A − Iₙ) = 0 and the submatrix of size n − 1 obtained by deleting the last row and the last column from A − Iₙ is nonsingular. It follows that

rank(PᵀAP − Iₙ) = rank(Pᵀ(A − Iₙ)P) = n − 1.

By using the above decomposition, we obtain

rank(PᵀAP − Iₙ) = Σ_{j=1}^{k} rank(J_j − I) ≤ (n − k) < (n − 1).

This is a contradiction. Thus A is irreducible.
Problem 10. We define a linear bijection, h, between R⁴ and H(2), the set of complex 2 × 2 hermitian matrices, by

(t, x, y, z) → ( t + x   y − iz )
               ( y + iz  t − x  )

We denote the matrix on the right-hand side by H.
(i) Show that the matrix can be written as a linear combination of the Pauli spin matrices σ₁, σ₂, σ₃ and the identity matrix I₂.
(ii) Find the inverse map.
(iii) Calculate the determinant of the 2 × 2 hermitian matrix H. Discuss.

Solution 10. (i) We have H = tI₂ + xσ₃ + yσ₁ + zσ₂.
(ii) Consider

( a  c ) = ( t + x   y − iz )
( c* b )   ( y + iz  t − x  )

Comparing the entries of the 2 × 2 matrices we obtain

t = (a + b)/2,  x = (a − b)/2,  y = (c + c*)/2,  z = (c* − c)/(2i).

(iii) We obtain det(H) = t² − x² − y² − z². This is the Lorentz metric. Let U be a unitary 2 × 2 matrix. Then det(UHU*) = det(H).
Problem 11. Let A be an n × n invertible matrix over C. Assume that A can be written as A = B + iB, where B has only real entries. Show that B⁻¹ exists and

A⁻¹ = (1/2)(B⁻¹ − iB⁻¹).

Solution 11. Since A = (1 + i)B and det(A) ≠ 0 we have (1 + i)ⁿ det(B) ≠ 0. Thus det(B) ≠ 0 and B⁻¹ exists. We have

(1 + i)B · (1/2)(B⁻¹ − iB⁻¹) = (1/2)(1 + i)(1 − i)Iₙ = Iₙ.

Problem 12. (i) Let A be an invertible matrix. Assume that A = A⁻¹. What are the possible values for det(A)?
(ii) Let B be a skew-symmetric matrix over R, i.e. Bᵀ = −B, and of order 2n − 1. Show that det(B) = 0.
(iii) Show that if a square matrix C is hermitian, i.e. C* = C, then det(C) is a real number.
Solution 12. (i) Since 1 = det(Iₙ) = det(AA⁻¹) = det(A) det(A⁻¹) and by assumption det(A) = det(A⁻¹) we have 1 = (det(A))². Thus det(A) is either +1 or −1.
(ii) From Bᵀ = −B we obtain

det(Bᵀ) = det(−B) = (−1)^{2n−1} det(B) = −det(B).

Since det(B) = det(Bᵀ) we obtain det(B) = −det(B) and therefore det(B) = 0.
(iii) Since C is hermitian we have C* = C, i.e. C̄ = Cᵀ. Furthermore det(C) = det(Cᵀ). Thus

det(C) = det(Cᵀ) = det(C̄) = (det(C))*.

Now if det(C) = x + iy then (det(C))* = x − iy with x, y ∈ R. Thus x + iy = x − iy and therefore y = 0. Thus det(C) is a real number.
Problem 13. (i) Let A, B be 2 × 2 matrices over R. Let H := A + iB. Express det(H) as a sum of determinants.
(ii) Let A, B be 2 × 2 matrices over R. Let H := A + iB. Assume that H is hermitian. Show that det(H) = det(A) − det(B).

Solution 13. (i) Since

H = ( a₁₁ + ib₁₁  a₁₂ + ib₁₂ )
    ( a₂₁ + ib₂₁  a₂₂ + ib₂₂ )

we find

det(H) = det(A) − det(B) + i det(a₁₁ a₁₂; b₂₁ b₂₂) + i det(b₁₁ b₁₂; a₂₁ a₂₂).

(ii) Since H is hermitian, i.e. H* = H with H* = Aᵀ − iBᵀ, we have A = Aᵀ and B = −Bᵀ. Thus a₁₂ = a₂₁, b₁₁ = b₂₂ = 0 and b₁₂ = −b₂₁. It follows that

det( a₁₁ + ib₁₁  a₁₂ + ib₁₂ ; a₂₁ + ib₂₁  a₂₂ + ib₂₂ ) = det(A) − det(B).

Problem 14.
(i) Let A, B, and C be n × n matrices. Calculate

det( A  0ₙ ; C  B ).

(ii) Let A, B, C, D be n × n matrices. Assume that DC = CD, i.e. C and D commute, and det(D) ≠ 0. Consider the (2n) × (2n) matrix

M = ( A B ; C D ).

Show that

det(M) = det(AD − BC).   (1)

We know that

det( X  Y ; 0ₙ  V ) = det(X) det(V),  det( X  0ₙ ; U  V ) = det(X) det(V)   (2)

where U, V, X, Y are n × n matrices.

Solution 14. (i) Obviously we find

det( A  0ₙ ; C  B ) = det(A) det(B).

(ii) We have the identity

( A B ; C D )( D  0ₙ ; −C  Iₙ ) = ( AD − BC  B ; CD − DC  D ) = ( AD − BC  B ; 0ₙ  D )   (3)

where we used that CD = DC. Applying the determinant to the right- and left-hand side of (3) and using (2) and det(D) ≠ 0 we obtain identity (1).
Problem 15. Let A, B be n × n matrices. We have the identity

det( A B ; B A ) = det(A + B) det(A − B).

Use this identity to calculate the determinant of the left-hand side using the right-hand side for the given 2 × 2 matrices A and B.

Solution 15. We find det(A + B) = 1 and det(A − B) = 5. Finally det(A + B) det(A − B) = 5.
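The block identity det(A B; B A) = det(A + B) det(A − B) holds for arbitrary square A, B, which is easy to confirm numerically (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
M = np.block([[A, B], [B, A]])

lhs = np.linalg.det(M)
rhs = np.linalg.det(A + B) * np.linalg.det(A - B)
print(np.isclose(lhs, rhs))   # True
```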
Problem 16. (i) Let A, B be n × n matrices. Show that

tr((A + B)(A − B)) = tr(A²) − tr(B²).

(ii) Let A, B, C be n × n matrices. Show that tr([A, B]C) = tr(A[B, C]).

Solution 16. (i) We have

(A + B)(A − B) = A² − AB + BA − B² = A² + [B, A] − B².

Since tr([B, A]) = 0 and the trace is linear we obtain the identity.
(ii) Using cyclic invariance of the trace we have

tr([A, B]C) = tr((AB − BA)C) = tr(ABC) − tr(BAC)
            = tr(ABC) − tr(ACB) = tr(ABC − ACB)
            = tr(A[B, C]).
Problem 17.
0.
Let A, B be n x n positive definite matrices. Show that tr (AB) >
Solution 17.
Let U be a unitary matrix such that
U*AU = D = diag(di,d2, . . . ,dn).
Obviously, d±, d2, •••, dn are the eigenvalues of A. Then tr(^4) = tr (D) and
tr (AB) = tr(U*AUU*BU) = tr (DC)
where C = U*BU. Now C is positive definite and therefore its diagonal entries
Cu are real and positive and tr(C ) = tr (B ). The diagonal entries of D C are diCu
and therefore
n
tr (DC) =
> 0i=l
Problem 18.
Let
f.
A =
I
I
xi + 2/1
x i + y2
I
\ x 2 + 2/1
A
I
x 2 + y2 /
where we assume that Xi + y} zZz 0 for i , j = 1,2. Show that
det(yl) =
Solution 18.
(z i - x 2)(yi - y2)
(^i + yi)(xi + 2/2)(^2 + y i)(x 2 + 2/2) ’
We have
de1, ^
(x2 + yi)(xi + 2 /2 ) - ( s i + yi)(x2 + 2 /2 )
(a > i + yi){xi + y2)(x2 + yi)(x2 + y2)
_
xiyi ~ Xiy2 + x2y2 - x2yi
(xi + yi)(xi + y2)(x2 + yi){x2 + y2)
= _____________A i - ^ 2 ) ( 2 / 1 - 2 /2 )_____________
=
A i + 2 /1 ) A i + 2 /2 ) A 2 + 2 /1 ) A 2 + 2 /2 ) ’
Traces, Determinants and Hyperdeterminants
107
For the general case with the matrix
I
I
I
1
x i + yi
Xl
+ 2/2
I
Xl
+ 2/3
Xi + y n
x 2 + yi
X2
+
X2
+ V3
x 2 + yn
I
i
Xn
V2
I
i
+ 2/1
Xn
I
i
+ 2/2
i
+ 2/3
Xn
\
%n
H- Vn
we find the determinant ( Cauchy determinant)
n i>j(Xi
det(A) =
xj)(yi
Y\i,j=l(xi
+ Vj)
Vj)
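The Cauchy determinant formula can be checked numerically for small n (NumPy assumed):

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 4.0])
y = np.array([0.5, 3.0, 5.0])
A = 1.0 / (x[:, None] + y[None, :])   # Cauchy matrix 1/(x_i + y_j)

num = np.prod([(x[i] - x[j]) * (y[i] - y[j])
               for j, i in combinations(range(3), 2)])   # pairs i > j
den = np.prod([xi + yj for xi in x for yj in y])
print(np.isclose(np.linalg.det(A), num / den))   # True
```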
Problem 19. For a 3 × 3 matrix we can use the rule of Sarrus to calculate the determinant. Let

A = ( a₁₁ a₁₂ a₁₃ )
    ( a₂₁ a₂₂ a₂₃ )
    ( a₃₁ a₃₂ a₃₃ )

Write the first two columns again to the right of the matrix to obtain

a₁₁ a₁₂ a₁₃ | a₁₁ a₁₂
a₂₁ a₂₂ a₂₃ | a₂₁ a₂₂
a₃₁ a₃₂ a₃₃ | a₃₁ a₃₂

Now look at the diagonals. The products of the diagonals sloping down to the right have a plus sign, the ones up to the left have a negative sign. This leads to the determinant

det(A) = a₁₁a₂₂a₃₃ + a₁₂a₂₃a₃₁ + a₁₃a₂₁a₃₂ − a₃₁a₂₂a₁₃ − a₃₂a₂₃a₁₁ − a₃₃a₂₁a₁₂.

(i) Use this rule to calculate the determinant of the rotation matrix

R(θ) = ( cos(θ) 0 −sin(θ) )
       ( 0      1  0      )
       ( sin(θ) 0  cos(θ) )

(ii) Use the rule to find the determinant of the matrix

A(α) = ( 0      cos(α) sin(α) )
       ( cos(α) sin(α) 0      )
       ( 0      cos(α) sin(α) )

Solution 19. (i) We have

cos(θ) 0 −sin(θ) | cos(θ) 0
0      1  0      | 0      1
sin(θ) 0  cos(θ) | sin(θ) 0

Thus det(R(θ)) is given by

cos(θ)cos(θ) + 0 + 0 − sin(θ)(−sin(θ)) − 0 − 0 = cos²(θ) + sin²(θ) = 1.

(ii) We find det(A(α)) = 0.
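The rule of Sarrus translates directly into code; a minimal sketch (the function name sarrus is ours):

```python
import math

def sarrus(a):
    # determinant of a 3x3 nested list via the rule of Sarrus
    return (a[0][0]*a[1][1]*a[2][2] + a[0][1]*a[1][2]*a[2][0]
          + a[0][2]*a[1][0]*a[2][1] - a[2][0]*a[1][1]*a[0][2]
          - a[2][1]*a[1][2]*a[0][0] - a[2][2]*a[1][0]*a[0][1])

t = 0.3
R = [[math.cos(t), 0.0, -math.sin(t)],
     [0.0, 1.0, 0.0],
     [math.sin(t), 0.0, math.cos(t)]]
print(round(sarrus(R), 12))   # 1.0
```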
Problem 20. Let A, S be n × n matrices. Assume that S is invertible and assume that S⁻¹AS = ρS, where ρ ≠ 0. Show that A is invertible.

Solution 20. From S⁻¹AS = ρS we obtain det(S⁻¹AS) = det(ρS). Thus

det(S⁻¹) det(A) det(S) = det(ρS).

It follows that det(A) = det(ρS) = ρⁿ det(S). Since ρⁿ ≠ 0 and det(S) ≠ 0 it follows that det(A) ≠ 0 and therefore A⁻¹ exists.
Problem 21. Let A be an invertible n × n matrix. Let c = 2. Can we find an invertible matrix S such that SAS⁻¹ = cA?

Solution 21. From SAS⁻¹ = cA it follows that

det(SAS⁻¹) = det(cA)  ⟹  det(A) = cⁿ det(A).

Since det(A) ≠ 0 we have cⁿ = 1. For c = 2 we have cⁿ = 2ⁿ ≠ 1. Thus such an S does not exist.
Problem 22. The determinant of an n × n circulant matrix is given by

det ( a₁ a₂ a₃ ... aₙ   )
    ( aₙ a₁ a₂ ... aₙ₋₁ )
    ( ...               )
    ( a₂ a₃ a₄ ... a₁   )
= (−1)^{n−1} ∏_{j=0}^{n−1} ( Σ_{k=1}^{n} a_k ζ^{jk} )   (1)

where ζ := exp(2πi/n). Find the determinant of the circulant n × n matrix

C = ( 1   4  9 ... n²     )
    ( n²  1  4 ... (n−1)² )
    ( ...                 )
    ( 4   9 16 ... 1      )

using equation (1).

Solution 22. Applying (1) we have

det(C) = (−1)^{n−1} ∏_{j=0}^{n−1} ( Σ_{k=1}^{n} k² ζ^{jk} ).

Now

Σ_{k=1}^{n} k² x^k = (n²x^{n+3} − (2n²+2n−1)x^{n+2} + (n²+2n+1)x^{n+1} − x² − x)/(x − 1)³

for x ≠ 1 and

Σ_{k=1}^{n} k² = n(n+1)(2n+1)/6

for x = 1. For x = ζʲ with j = 1, ..., n − 1 (so that xⁿ = 1 and x ≠ 1) the first expression simplifies to

Σ_{k=1}^{n} k² ζ^{jk} = n ζʲ (nζʲ − (n+2)) / (ζʲ − 1)².

It follows that

∏_{j=1}^{n−1} ζʲ = (−1)^{n−1},  ∏_{j=1}^{n−1} (ζʲ − 1) = (−1)^{n−1} n

and

∏_{j=1}^{n−1} (nζʲ − (n+2)) = (−1)^{n−1} ((n+2)ⁿ − nⁿ)/2.

Finally

det(C) = (−1)^{n−1} n^{n−2}(n+1)(2n+1)((n+2)ⁿ − nⁿ) / 12.
Problem 23. Let A be a nonzero 2 × 2 matrix over R. Let B₁, B₂, B₃, B₄ be 2 × 2 matrices over R and assume that

det(A + B_j) = det(A) + det(B_j)  for  j = 1, 2, 3, 4.

Show that there exist real numbers c₁, c₂, c₃, c₄, not all zero, such that

c₁B₁ + c₂B₂ + c₃B₃ + c₄B₄ = 0₂.   (1)

Solution 23. Let B_j = (b₁₁ b₁₂; b₂₁ b₂₂) with j = 1, 2, 3, 4. If det(A + B_j) = det(A) + det(B_j), it follows that

a₂₂b₁₁ − a₂₁b₁₂ − a₁₂b₂₁ + a₁₁b₂₂ = 0.   (2)

Since A is a nonzero matrix the solution space of (2) for fixed j is a three-dimensional vector space. Any four vectors in a three-dimensional space are linearly dependent. Thus there must exist c₁, c₂, c₃, c₄, not all 0, for which equation (1) is true.
Problem 24. An n × n matrix Q is orthogonal if Q is real and QᵀQ = QQᵀ = Iₙ, i.e. Q⁻¹ = Qᵀ.
(i) Find the determinant of an orthogonal matrix.
(ii) Let u, v be two vectors in R³ and u × v denote the vector product of u and v

u × v = ( u₂v₃ − u₃v₂ )
        ( u₃v₁ − u₁v₃ )
        ( u₁v₂ − u₂v₁ )

Let Q be a 3 × 3 orthogonal matrix. Calculate (Qu) × (Qv).

Solution 24. (i) We have det(Q) = det(Qᵀ) = det(Q⁻¹) = 1/det(Q). Thus (det(Q))² = 1 and therefore det(Q) = ±1.
(ii) We have (Qu) × (Qv) = Q(u × v) det(Q).
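The identity (Qu) × (Qv) = Q(u × v) det(Q) can be verified numerically, e.g. with a reflection, where det(Q) = −1 (NumPy assumed):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])
Q = np.diag([1.0, 1.0, -1.0])   # orthogonal with det(Q) = -1

lhs = np.cross(Q @ u, Q @ v)
rhs = np.linalg.det(Q) * (Q @ np.cross(u, v))
print(np.allclose(lhs, rhs))   # True
```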
Problem 25. (i) Calculate the determinant of the n × n matrix

A = ( 1 1 1 ... 1 1 )
    ( 1 0 1 ... 1 1 )
    ( 1 1 0 ... 1 1 )
    ( ...           )
    ( 1 1 1 ... 0 1 )
    ( 1 1 1 ... 1 0 )

(ii) Find the determinant of the matrix

C = ( 0 1 1 ... 1 )
    ( 1 0 1 ... 1 )
    ( ...         )
    ( 1 1 1 ... 0 )

Solution 25. (i) Subtracting the second row from the first row in A (n ≥ 2) we find the matrix

B = ( 0 1 0 ... 0 0 )
    ( 1 0 1 ... 1 1 )
    ( 1 1 0 ... 1 1 )
    ( ...           )
    ( 1 1 1 ... 1 0 )

Thus det(A) = det(B). Let sₙ := det(B). Now an expansion yields sₙ = −sₙ₋₁ with the initial value s₁ = 1 from the matrix A. It follows that

sₙ = det(A) = (−1)^{n−1},  n = 1, 2, ....

(ii) Using the result from (i) we find det(C) = (−1)^{n−1}(n − 1).
Problem 26. Let A be a 2 × 2 matrix over R with det(A) ≠ 0. Is (Aᵀ)⁻¹ = (A⁻¹)ᵀ?

Solution 26. We have

(A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = I₂.

Thus (Aᵀ)⁻¹ = (A⁻¹)ᵀ. This is also true for n × n matrices.
Problem 27. Find all 2 × 2 matrices A over C that satisfy the three conditions

tr(A) = 0,  A = A*,  A² = I₂.

Solution 27. From the first condition we find a₁₁ = −a₂₂. From the second condition we find a₁₂ = ā₂₁ and that a₁₁ and a₂₂ are real. Inserting these two conditions into the third one, and writing a₁₂ = r₁₂e^{iφ₁₂}, we obtain −1 ≤ a₁₁ ≤ +1 and 0 ≤ r₁₂ ≤ 1 with the constraint a₁₁² + r₁₂² = 1.
Problem 28. Let A be an n × n matrix with A² = Iₙ. Let B be a matrix with AB = −BA, i.e. [A, B]₊ = 0ₙ. Show that tr(B) = 0. Find tr(A ⊗ B).

Solution 28. We have

tr(B) = tr(IₙB) = tr(A²B) = −tr(ABA) = −tr(A²B) = −tr(B).

Thus tr(B) = 0. Using this result we have tr(A ⊗ B) = tr(A)tr(B) = 0.
Problem 29. Consider two 2 × 2 matrices A and B. The first columns of the matrices A and B agree, but the second columns of the two matrices differ. Is det(A + B) = 2(det(A) + det(B))?

Solution 29. We have det(A) = −a₂₁ and det(B) = a₁₁ and det(A + B) = 2a₁₁ − 2a₂₁. Thus the equation holds.
Problem 30. Let A, B be 2 × 2 matrices. Assume that det(A) = 0 and det(B) = 0. Can we conclude that det(A + B) = 0?

Solution 30. We cannot conclude that det(A + B) = 0 in general. For example

A = (0 1; 0 −1),  B = (1 0; 1 0),  A + B = (1 1; 1 −1).

Thus det(A) = det(B) = 0, but det(A + B) = −2.
Problem 31. Consider the symmetric 3 × 3 matrix

A = ( n₁ 1  0  )
    ( 1  n₂ 1  )
    ( 0  1  n₃ )

Find all positive integers n₁, n₂, n₃ such that det(A) = 1 and A is positive definite.

Solution 31. The five solutions are {131, 312, 221, 122, 213}. Thus the number of solutions is 5, which is the Catalan number C₃. Consider the 4 × 4 matrix

B = ( n₁ 1  0  0  )
    ( 1  n₂ 1  0  )
    ( 0  1  n₃ 1  )
    ( 0  0  1  n₄ )

with n₁, n₂, n₃, n₄ ∈ N and show that there are 14 solutions with C₄ = 14.
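The counts can be confirmed by brute force; the sketch below assumes NumPy, and the search bound 10 is a heuristic that comfortably covers all solutions:

```python
import numpy as np
from itertools import product

def tridiag(ns):
    # symmetric tridiagonal matrix with diagonal ns and off-diagonal 1's
    A = np.diag([float(v) for v in ns])
    for i in range(len(ns) - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

def count(k, bound=10):
    c = 0
    for ns in product(range(1, bound + 1), repeat=k):
        A = tridiag(ns)
        if round(np.linalg.det(A)) == 1 and np.all(np.linalg.eigvalsh(A) > 0):
            c += 1
    return c

print(count(3), count(4))   # 5 14
```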
Problem 32. The oriented volume of an n-simplex in n-dimensional Euclidean space with vertices v₀, v₁, ..., vₙ is given by

(1/n!) det(S)

where S is the n × n matrix

S := ( v₁ − v₀  v₂ − v₀  ...  vₙ₋₁ − v₀  vₙ − v₀ ).

Thus each column of the n × n matrix is the difference between the vectors representing two vertices.
(i) Find the oriented volume for the given triangle with vertices v₀, v₁, v₂ in R².
(ii) Let v₀ = (0 0 0)ᵀ. Find the oriented volume of the tetrahedron with the given vertices v₁, v₂, v₃ in R³.

Solution 32. (i) We obtain the oriented volume 1/4.
(ii) We find (1/6) det(S) = 1/6.

Problem 33.
The area A of a triangle given by the coordinates of its vertices

(x₀, y₀),  (x₁, y₁),  (x₂, y₂)

is

A = (1/2) det ( x₀ y₀ 1 )
              ( x₁ y₁ 1 )
              ( x₂ y₂ 1 )

(i) Let (x₀, y₀) = (0,0), (x₁, y₁) = (1,0), (x₂, y₂) = (0,1). Find A.
(ii) A tetrahedron is a polyhedron composed of four triangular faces, three of which meet at each vertex. A tetrahedron can be defined by the coordinates of the vertices

(x₀, y₀, z₀),  (x₁, y₁, z₁),  (x₂, y₂, z₂),  (x₃, y₃, z₃).

The volume V of the tetrahedron is given by (actually |V|)

V = (1/6) det ( x₀ y₀ z₀ 1 )
              ( x₁ y₁ z₁ 1 )
              ( x₂ y₂ z₂ 1 )
              ( x₃ y₃ z₃ 1 )

Let

(x₀, y₀, z₀) = (0,0,0),  (x₁, y₁, z₁) = (0,0,1),  (x₂, y₂, z₂) = (0,1,0),  (x₃, y₃, z₃) = (1,0,0).

Find the volume V.
(iii) Let

(x₀, y₀, z₀) = (+1,+1,+1),  (x₁, y₁, z₁) = (−1,−1,+1),  (x₂, y₂, z₂) = (−1,+1,−1),  (x₃, y₃, z₃) = (+1,−1,−1).

Find the volume V.

Solution 33. (i) We obtain

A = (1/2) det ( 0 0 1 )
              ( 1 0 1 )  = 1/2.
              ( 0 1 1 )

(ii) We find

V = (1/6) det ( 0 0 0 1 )
              ( 0 0 1 1 )
              ( 0 1 0 1 )  = 1/6.
              ( 1 0 0 1 )

(iii) We find
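Both formulas translate directly into code; a sketch assuming NumPy (function names ours):

```python
import numpy as np

def tri_area(p0, p1, p2):
    M = np.array([[*p0, 1.0], [*p1, 1.0], [*p2, 1.0]])
    return 0.5 * np.linalg.det(M)

def tet_volume(p0, p1, p2, p3):
    M = np.array([[*p0, 1.0], [*p1, 1.0], [*p2, 1.0], [*p3, 1.0]])
    return np.linalg.det(M) / 6.0

print(tri_area((0, 0), (1, 0), (0, 1)))                        # ~0.5
print(tet_volume((0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)))  # ~1/6
v = tet_volume((1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1))
print(abs(v))                                                  # ~8/3
```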
Problem 34. Let A, B be n × n matrices over C. Assume that tr(AB) = 0.
(i) Can we conclude that tr(AB*) = 0?
(ii) Consider the case that B is skew-hermitian.

Solution 34. (i) We cannot conclude that tr(AB*) = 0 follows from tr(AB) = 0. For example, for n = 2 let A = B = (0 1; 0 0). Then we have tr(AB) = 0, but tr(AB*) = 1.
(ii) If B is skew-hermitian we have B* = −B. Thus tr(AB*) = −tr(AB) = 0.
Problem 35. Let A, B be n × n matrices over C. Is tr(AB*) = tr(A*B)?

Solution 35. This is not true in general. For example let A = iI₂ and B = I₂. Then tr(AB*) = 2i and tr(A*B) = −2i. In general we have (tr(AB*))* = tr(A*B). If A = B, then tr(AA*) = tr(A*A).
Problem 36. Let
A = (a_1, a_2, ..., a_{n-1}, u),   B = (a_1, a_2, ..., a_{n-1}, v)
be n x n matrices, where the first n-1 columns a_1, ..., a_{n-1} are the same and
for the last column u != v. Show that
det(A + B) = 2^{n-1} (det(A) + det(B)).

Solution 36. This identity is true since the determinant is a multilinear (alternating)
map (Grassmann product) in its columns, i.e.
det(A + B) = det(2a_1, 2a_2, ..., 2a_{n-1}, u + v)
= det(2a_1, 2a_2, ..., 2a_{n-1}, u) + det(2a_1, 2a_2, ..., 2a_{n-1}, v)
= 2^{n-1} det(a_1, a_2, ..., a_{n-1}, u) + 2^{n-1} det(a_1, a_2, ..., a_{n-1}, v)
= 2^{n-1} det(A) + 2^{n-1} det(B).
As a consequence we have that if det(A) = det(B) = 0, then det(A + B) = 0.
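The identity of Problem 36 can be checked numerically with random matrices that share their first n-1 columns; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
A = rng.standard_normal((n, n))
B = A.copy()
B[:, -1] = rng.standard_normal(n)   # only the last column differs

lhs = np.linalg.det(A + B)
rhs = 2 ** (n - 1) * (np.linalg.det(A) + np.linalg.det(B))
```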
Problem 37. Find all 2 x 2 matrices g over C such that det(g) = 1 and
eta g* eta = g^{-1}, where eta is the diagonal matrix eta = diag(1, -1).

Solution 37. We have
g = ( u_1         u_2
      conj(u_2)   conj(u_1) )
with |u_1|^2 - |u_2|^2 = 1. From u_1 = r_1 e^{i phi_1}, u_2 = r_2 e^{i phi_2} we
obtain r_1^2 - r_2^2 = 1.
Problem 38. An n x n tridiagonal matrix (n >= 3) has nonzero elements
only in the main diagonal, the first diagonal below this, and the first diagonal
above the main diagonal. The determinant of an n x n tridiagonal matrix can
be calculated by the recursive formula
det(A) = a_{n,n} det([A]_{1,...,n-1}) - a_{n,n-1} a_{n-1,n} det([A]_{1,...,n-2})
where det([A]_{1,...,k}) denotes the k-th principal minor, that is, [A]_{1,...,k} is the
submatrix formed by the first k rows and columns of A. The cost of computing the
determinant of a tridiagonal matrix using this recursion is linear in n, while the
cost is cubic for a general matrix. Apply this recursion relation to calculate the
determinant of the 4 x 4 matrix
A = ( 0  1  0  0
      1  1  2  0
      0  2  2  3
      0  0  3  3 ).

Solution 38. We have
det(A) = 3 det([A]_{1,2,3}) - 9 det([A]_{1,2}),
det([A]_{1,2,3}) = 2 det([A]_{1,2}) - 4 det([A]_{1}) = 2(-1) - 0 = -2,
det([A]_{1,2}) = -1.
Thus det(A) = 3(-2) - 9(-1) = 3.
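The linear-cost recursion on principal minors can be sketched directly (the function name and the 4 x 4 test matrix of Problem 38 are used for illustration):

```python
import numpy as np

def tridiag_det(A):
    # three-term recursion on principal minors:
    # f_k = a_{k,k} f_{k-1} - a_{k,k-1} a_{k-1,k} f_{k-2},  f_0 = 1
    n = A.shape[0]
    f_prev2, f_prev1 = 1.0, A[0, 0]
    for k in range(1, n):
        f_k = A[k, k] * f_prev1 - A[k, k - 1] * A[k - 1, k] * f_prev2
        f_prev2, f_prev1 = f_prev1, f_k
    return f_prev1

A = np.array([[0., 1, 0, 0],
              [1, 1, 2, 0],
              [0, 2, 2, 3],
              [0, 0, 3, 3]])
d = tridiag_det(A)   # agrees with np.linalg.det(A)
```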
Problem 39. Consider the symmetric 3 x 3 matrix
A(a) = ( a  1  1
         1  a  1
         1  1  a ).
(i) Find the maxima and minima of the function f ( a ) = det (A (a)).
(ii) For which values of a is the matrix noninvertible?

Solution 39. (i) We obtain f(a) = det(A(a)) = a^3 - 3a + 2. Therefore
df(a)/da = 3a^2 - 3.
From 3a^2 - 3 = 0 we find a^2 = 1, i.e. a_1 = 1, a_2 = -1. Since d^2 f(a)/da^2 = 6a we
have d^2 f(a = a_1)/da^2 = 6 and d^2 f(a = a_2)/da^2 = -6. Thus at a_1 = 1 we have
a (local) minimum and at a_2 = -1 we have a (local) maximum. Furthermore
f(a = 1) = 0 and f(a = -1) = 4.
(ii) Solving a^3 - 3a + 2 = 0 we find that the matrix is noninvertible for a = 1
and a = -2.
Problem 40. The Pascal matrix of order n is defined as
(P_n)_{ij} := (i + j - 2)! / ((i - 1)!(j - 1)!),   i, j = 1, ..., n.
Thus
P_2 = ( 1  1        P_3 = ( 1  1  1
        1  2 ),             1  2  3
                            1  3  6 ),
P_4 = ( 1  1  1   1
        1  2  3   4
        1  3  6  10
        1  4 10  20 ).
Find the determinant of P_2, P_3, P_4. Find the inverse of P_2, P_3, P_4.

Solution 40. The determinant of P_2 is given by 1 and the inverse takes the
form
P_2^{-1} = (  2  -1
             -1   1 ).
The determinant of P_3 is also given by 1 and the inverse takes the form
P_3^{-1} = (  3  -3   1
             -3   5  -2
              1  -2   1 ).
The determinant of P_4 is also given by 1 and the inverse takes the form
P_4^{-1} = (  4   -6    4  -1
             -6   14  -11   3
              4  -11   10  -3
             -1    3   -3   1 ).
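The Pascal matrix can be generated from binomial coefficients, and exact rational elimination confirms det(P_n) = 1; a sketch (helper names are ours):

```python
from math import comb
from fractions import Fraction

def pascal(n):
    # (P_n)_{ij} = C(i+j-2, i-1) with 1-based i, j; here 0-based: C(i+j, i)
    return [[Fraction(comb(i + j, i)) for j in range(n)] for i in range(n)]

def det(M):
    # fraction-exact Gaussian elimination; Pascal matrices need no pivoting
    M = [row[:] for row in M]
    n, d = len(M), Fraction(1)
    for k in range(n):
        d *= M[k][k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= r * M[k][j]
    return d

dets = [det(pascal(n)) for n in (2, 3, 4)]   # all equal 1
```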
Problem 41. Let A, B be n x n matrices over R. Assume that A is invertible.
Let t be a nonzero real number. Show that
det(A + tB) = t^n det(A) det(A^{-1}B + t^{-1} I_n).

Solution 41. We have
det(A + tB) = t^n det(t^{-1}A + B) = t^n det(A(A^{-1}B + t^{-1} I_n))
= t^n det(A) det(A^{-1}B + t^{-1} I_n).
Problem 42. Let A be an n x n invertible matrix over R. Show that A^T is
also invertible. Is (A^T)^{-1} = (A^{-1})^T?

Solution 42. Since det(A^T) = det(A) and det(A) != 0 we conclude that
det(A^T) != 0. Thus A^T is invertible. We find (A^T)^{-1} A^T = I_n and
(A A^{-1})^T = (A^{-1})^T A^T = I_n.
Thus (A^{-1})^T A^T = (A^T)^{-1} A^T and therefore (A^T)^{-1} = (A^{-1})^T.
Problem 43. Calculate the determinant of the 4 x 4 matrix
( 1  0   0   1
  0  1   1   0
  0  1  -1   0
  1  0   0  -1 )
using the exterior product (wedge product). This means calculate
( 1 )     ( 0 )     ( 0 )     ( 1 )
( 0 )     ( 1 )     ( 1 )     ( 0 )
( 0 )  ^  ( 1 )  ^  (-1 )  ^  ( 0 )
( 1 )     ( 0 )     ( 0 )     (-1 ).

Solution 43. Let e_j (j = 1, 2, 3, 4) be the standard basis in R^4. Then
(e_1 + e_4) ^ (e_2 + e_3) ^ (e_2 - e_3) ^ (e_1 - e_4) = 4 e_1 ^ e_2 ^ e_3 ^ e_4
utilizing that e_j ^ e_k = -e_k ^ e_j. Thus the determinant is 4.
Problem 44. The 3 x 3 diagonal matrices over R with trace equal to 0 form
a vector space. Provide a basis for this vector space. Using the scalar product
tr(AB^T) for n x n matrices A, B over R, the elements of the basis should be
orthogonal to each other.

Solution 44. Owing to the constraint that the trace of the matrices is equal
to 0 the dimension of the vector space is 2. We select
V_1 = ( 1   0  0        V_2 = ( 1  0   0
        0  -1  0                0  1   0
        0   0  0 ),             0  0  -2 ).
Then V_1, V_2 are linearly independent and tr(V_1 V_2^T) = 0.
Problem 45. The Hilbert-Schmidt norm of an n x n matrix A over C is defined
by
||A||_2 := sqrt(tr(A*A)).
Another norm is the trace norm defined by
||A||_1 := tr(sqrt(A*A)).
Calculate the two norms for the 2 x 2 matrix
A = ( 0  -2i
      i   0 ).

Solution 45. We have
A* = ( 0   -i        A*A = ( 1  0
       2i   0 ),             0  4 ).
Thus ||A||_2 = sqrt(5) and ||A||_1 = 1 + 2 = 3.

Problem 46. Let A be a 3 x 3 matrix over R. Consider the 3 x 3 permutation
matrix
P = ( 0  0  1
      0  1  0
      1  0  0 ).
Assume that AP = A. Show that A cannot be invertible.
Solution 46. We have det(AP) = det(A), so det(A) det(P) = det(A). Since
det(P) = -1 we obtain -det(A) = det(A) and therefore A is not invertible. We
also have
AP - A = ( a_{13} - a_{11}  0  a_{11} - a_{13}
           a_{23} - a_{21}  0  a_{21} - a_{23}
           a_{33} - a_{31}  0  a_{31} - a_{33} ).
The determinant of this matrix is 0.
Problem 47. Let A = (a_{jk}) be a 2n x 2n skew-symmetric matrix. The Pfaffian
is defined as
Pf(A) := (1/(2^n n!)) sum_{sigma in S_{2n}} sgn(sigma) prod_{j=1}^{n} a_{sigma(2j-1),sigma(2j)}
where S_{2n} is the symmetric group and sgn(sigma) is the signature of the permutation
sigma. Consider the case with n = 2, i.e.
A = (   0       a_{12}   a_{13}   a_{14}
      -a_{12}     0      a_{23}   a_{24}
      -a_{13}  -a_{23}     0      a_{34}
      -a_{14}  -a_{24}  -a_{34}     0    ).
Calculate Pf(A).
Solution 47. We have n = 2 and 4! = 24 permutations. Thus
Pf(A) = (1/8) sum_{sigma in S_4} sgn(sigma) prod_{j=1}^{2} a_{sigma(2j-1),sigma(2j)}
= a_{12} a_{34} - a_{13} a_{24} + a_{23} a_{14}
where we have taken into account that for j > k we have a_{jk} = -a_{kj}. The
summation over all permutations can be avoided. Let Pi be the set of all partitions
of the set {1, 2, ..., 2n} into pairs without regard to order. There are (2n - 1)!!
such partitions. An element alpha in Pi can be written as
alpha = {(i_1, j_1), (i_2, j_2), ..., (i_n, j_n)}
with i_k < j_k and i_1 < i_2 < ... < i_n. Let
pi = ( 1    2    3    4   ...  2n-1  2n
       i_1  j_1  i_2  j_2 ...  i_n   j_n )
be the corresponding permutation. Given a partition alpha we define the number
A_alpha = sgn(pi) a_{i_1 j_1} a_{i_2 j_2} ... a_{i_n j_n}.
The Pfaffian of A is then given by
Pf(A) = sum_{alpha in Pi} A_alpha.
For n = 2 we have (1,2)(3,4), (1,3)(2,4), (1,4)(2,3), i.e. we have 3 partitions.
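A recursive expansion along the first row gives the same Pfaffian and is easy to test together with Pf(A)^2 = det(A); a sketch (0-based indices, so a12 a34 - a13 a24 + a14 a23 becomes the expression in the comment):

```python
import numpy as np

def pfaffian(A):
    # Pf(A) = sum_j (-1)^(j+1) a_{0j} Pf(A with rows/cols 0 and j deleted)
    m = A.shape[0]
    if m == 0:
        return 1.0
    total = 0.0
    for j in range(1, m):
        keep = [k for k in range(m) if k not in (0, j)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T            # random 4x4 skew-symmetric matrix
pf = pfaffian(A)       # equals A[0,1]*A[2,3] - A[0,2]*A[1,3] + A[0,3]*A[1,2]
```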
Problem 48. Let A be a skew-symmetric 2n x 2n matrix. For the Pfaffian we
have the properties
(P f(A ))2 = det(A),
Pf(AA) = AnPf(A ),
P f(E A E r ) = det(E )Pf(A )
P f(A T) = ( - l ) nPf(A)
where B is an arbitrary 2n x 2n matrix. Let J be a 2n x 2n skew-symmetric
matrix with P f(J) ^ 0. Let B be a 2n x 2n matrix such that B t JB = J. Show
that det (B) = I.
Solution 48. Using the property such that P f (BABt ) = det(E )P f(A ), where
A is a 2n x 2n skew-symmetric matrix we obtain
P f(J) = P f (B t JB) = det(E )P f(J).
Since P f(J) 7^ 0, we find that det (B) = I.
Problem 49.
Find all 2 x 2 invertible matrices A such that A + A - 1 = / 2.
Solution 49.
Since det (A) = ad —be ^ 0 we have
A - ( a
b\
Vc d /
^
A- i _
1
detW
( d
v -c
~ h\
a )'
Thus from the condition we obtain the four equations
a + d/det(A) = 1,  b - b/det(A) = 0,  c - c/det(A) = 0,  d + a/det(A) = 1.
Thus we have to do a case study. Case b != 0: then det(A) = 1 and we obtain
a + d = 1. If b = 0, then det(A) = ad != 0 and thus a and d must be nonzero.
Problem 50.
Let V1 be a hermitian n x n matrix. Let V2 be a positive
semidefinite n x n matrix. Let fc be a positive integer. Show that tr((I^ V i)fc)
can be written as tv(V k), where V := V ^ 2V1 V ^ 2.
Solution 50.
Since V2 is positive semidefinite the square root of V2 exists.
Thus applying cyclic invariance of the trace we have
tr((F2K!)fe) = H i V ^ aV1V V 2V V 2V1 ■ ■ ■ V V 2V1V V 2) = H(Vk).
Problem 51.
M =
Consider the 2 x 2 matrix
( cosh(r) — sinh(r) cos(20)
Y
—sinh(r) sin(20)
—sinh(r) sin(20)
cosh(r) + sinh(r) cos(20)
Find the determinant of M . Thus show that the inverse of M exists. Find the
inverse of M .
Solution 51.
We have
det(M ) = cosh2(r) — sinh2(r) cos2(20) — sinh2(r) sin2(20) = I.
The inverse of M are obtained by replacing r
—r and Q
—0 We obtain
Problem 52. Consider the 2 x 2 matrix C over C. Calculate det(CC*) and
show that det(CC*) > 0.
Solution 52.
We have
det(CC'*) = det(C) det(C*) = (C 11C22 — ci2c2i) (cnc22 — C12C21)
= C11C11C22C22 + Ci2Ci2C2IC2I - Ch C22Ci 2C2I - Ci 2C2ICiic22.
Setting ( rjk > 0) C11 = rne**11, C22 = r22ei<i>22, C12 = r12ei<t>12, C21 = r21ei4>21
yields
det(C'C*) = r2n rl2 +
- 2r n r 22ri2r2i cos(a)
which cannot be negative since |cos(o:)| < I.
Problem 53. (i) Let z be an element of C. Find the determinant of
A = ( 1         conj(z)
      z         z conj(z) ).
Is the matrix Pi_2 := A/(1 + z conj(z)) a projection matrix?
(ii) Let z_1, z_2 be elements of C. Find the determinant of the analogous 3 x 3
matrix B built from the vector (1, z_1, z_2)^T. Is the matrix Pi_3 :=
B/(1 + z_1 conj(z_1) + z_2 conj(z_2)) a projection matrix?

Solution 53. (i) We find det(A) = z conj(z) - z conj(z) = 0. We have
A^2 = (1 + z conj(z)) A. Thus Pi_2^2 = Pi_2. We also have Pi_2* = Pi_2. Thus
Pi_2 is a projection matrix.
(ii) We have det(B) = 0. We find Pi_3^2 = Pi_3 and Pi_3* = Pi_3. Thus Pi_3 is a
projection matrix.
Problem 54. Consider the golden mean number tau = (sqrt(5) - 1)/2 and the 2 x 2
matrix F. Find tr(F) and det(F). Since det(F) != 0 we have an inverse. Find F^{-1}.

Solution 54. We have tr(F) = 0 and det(F) = -tau^2 - tau = -1. The inverse
matrix is F^{-1} = F.
Problem 55. Let A be an n x n matrix and B be an invertible n x n matrix.
Show that
det(I_n + A) = det(I_n + B A B^{-1}).

Solution 55. We have
det(I_n + A) = det(B) det(B^{-1}) det(I_n + A) = det(B) det(I_n + A) det(B^{-1})
= det(B (I_n + A) B^{-1}) = det(B B^{-1} + B A B^{-1})
= det(I_n + B A B^{-1}).
Problem 56. Let A be a 2 x 2 matrix. Show that
det(I_2 + A) = 1 + tr(A) + det(A).

Solution 56. We have
det(I_2 + A) = det ( 1 + a_{11}   a_{12}
                     a_{21}       1 + a_{22} ) = 1 + a_{11} + a_{22} + a_{11} a_{22} - a_{12} a_{21}
= 1 + tr(A) + det(A).
Can the result be extended to det(I_3 + B), where B is a 3 x 3 matrix?
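The 2 x 2 identity, and the form its 3 x 3 analogue takes, can be explored numerically. A sketch: here e2 denotes the sum of the principal 2 x 2 minors (this extra term, our addition, is what appears in the 3 x 3 case, since det(I + B) is the product of 1 + lambda_i over the eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((2, 2))
lhs2 = np.linalg.det(np.eye(2) + A)
rhs2 = 1 + np.trace(A) + np.linalg.det(A)          # Problem 56

B = rng.standard_normal((3, 3))
e1 = np.trace(B)
e2 = 0.5 * (np.trace(B) ** 2 - np.trace(B @ B))    # sum of principal 2x2 minors
lhs3 = np.linalg.det(np.eye(3) + B)
rhs3 = 1 + e1 + e2 + np.linalg.det(B)
```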
Problem 57. Let {e_j : j = 1, 2, 3} be the three orthonormal vectors in Z^3
e_1 = ( 1 0 0 )^T,  e_2 = ( 0 1 0 )^T,  e_3 = ( 0 0 1 )^T.
We consider the face-centered cubic lattice as a sublattice of Z^3 generated by
the three primitive vectors e_1 + e_2, e_1 + e_3, e_2 + e_3. Form the 3 x 3 matrix
(e_1 + e_2  e_1 + e_3  e_2 + e_3) and show that this matrix has an inverse. Find the
inverse.

Solution 57. The determinant of the matrix
M = ( 1  1  0
      1  0  1
      0  1  1 )
is -2 and thus the inverse exists. The inverse is given by
M^{-1} = (1/2) (  1   1  -1
                  1  -1   1
                 -1   1   1 ).

Problem 58. Let n >= 2. Consider the n x n symmetric tridiagonal matrix
over R
Let n > 2. Consider the n x n symmetric tridiagonal matrix
I cI
•
•
I •
1 0 0
C
I
0
I
C
0
0
0
0
0
0
0
0
°0\
0
0
An =
Vo
•
I
•
0
I
•
0
0
C
I
C
I
0
I
C/
where c G R. Find the determinant of An.
Solution 58. With Delta_n := det(A_n) we obtain the linear difference equation
with constant coefficients
Delta_{j+1} = c Delta_j - Delta_{j-1},   j = 1, ..., n - 1
with the initial conditions Delta_0 = 1, Delta_1 = c. The solution of this linear
difference equation is
Delta_n = sum_{j=0}^{[n/2]} (-1)^j C(n-j, j) c^{n-2j}
where [n/2] denotes the integer part of n/2 and C(., .) the binomial coefficient.
If c < -2, one can set c = -2 cosh(alpha) with alpha in R. Then we have
Delta_n = (-1)^n sinh((n+1) alpha) / sinh(alpha).
If c > 2 and c = 2 cosh(alpha), then
Delta_n = sinh((n+1) alpha) / sinh(alpha).
For -2 < c < 2 and c = 2 cos(alpha) we have
Delta_n = sin((n+1) alpha) / sin(alpha).
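The recursion and the closed-form expressions can be compared numerically; a sketch for the cases c = 2 cos(alpha) and c = 2 cosh(alpha):

```python
import numpy as np

def delta(n, c):
    # three-term recursion Delta_{j+1} = c Delta_j - Delta_{j-1}, Delta_0 = 1, Delta_1 = c
    d_prev, d = 1.0, c
    for _ in range(n - 1):
        d_prev, d = d, c * d - d_prev
    return d

n = 8
alpha = 0.7                                    # -2 < c < 2 case
trig = np.sin((n + 1) * alpha) / np.sin(alpha)
beta = 0.5                                     # c > 2 case
hyp = np.sinh((n + 1) * beta) / np.sinh(beta)

r_trig = delta(n, 2 * np.cos(alpha))
r_hyp = delta(n, 2 * np.cosh(beta))
```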
Problem 59. Find a nonzero 2 x 2 matrix V over C such that V^2 = tr(V) V.
Can such a matrix be invertible?

Solution 59. Let z be an element of C. Then
V = ( e^z   1
      1     e^{-z} )
satisfies V^2 = (e^z + e^{-z}) V, since det(V) = 0 and by the Cayley-Hamilton
theorem V^2 = tr(V) V - det(V) I_2. Such a matrix cannot be invertible. We can
assume that tr(V) != 0. Suppose that V is invertible. Then from V^2 = tr(V) V it
follows that V = tr(V) I_2. Taking the trace yields tr(V) = 2 tr(V). Thus we have
a contradiction.
Problem 60. Consider the (n + 1) x (n + 1) matrix over C
A = ( 1    0    0    ...  0    z_1
      0    1    0    ...  0    z_2
      0    0    1    ...  0    z_3
      ...
      0    0    0    ...  1    z_n
      z_1  z_2  z_3  ...  z_n  1   ).
Find the determinant. What is the condition on the z_j's such that A is invertible?

Solution 60. We find
det(A) = 1 - sum_{j=1}^{n} z_j^2.
For A^{-1} to exist we need
sum_{j=1}^{n} z_j^2 != 1.
Problem 61. Let z_k = x_k + i y_k, where x_k, y_k are elements of R and
k = 1, ..., n. Find the 2n x 2n matrix A such that
( z_1, ..., z_n, conj(z_1), ..., conj(z_n) )^T = A ( x_1, ..., x_n, y_1, ..., y_n )^T.
Find the determinant of the matrix A.

Solution 61. The 2n x 2n matrix A can be written in block form
A = ( I_n    i I_n
      I_n   -i I_n )
with det(A) = (-2i)^n.
Problem 62. Let A, B be n x n matrices over C. We define the product
A o B := (1/2)(AB + BA) - (1/n) tr(AB) I_n.
(i) Find the trace of A o B.
(ii) Is the product commutative?

Solution 62. (i) Since the trace is a linear operation and tr(I_n) = n we find
tr(A o B) = 0. Thus A o B is an element of the Lie algebra sl(n, C).
(ii) Yes, we have A o B = B o A. Is the product associative?
Problem 63. Let A be an n x n invertible matrix over C. Let x, y be elements
of C^n. Then we have the identity
det(A + x y*) = det(A)(1 + y* A^{-1} x).
Can we conclude that A + x y* is also invertible?

Solution 63. This is not true in general. If 1 + y* A^{-1} x = 0, then
det(A + x y*) = 0 and A + x y* is not invertible. For example, with A = I_2,
x = ( 1 0 )^T and y = ( -1 0 )^T we obtain
A + x y* = ( 0  0
             0  1 ),   det(A + x y*) = 0.
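The matrix determinant lemma of Problem 63 is easy to test with random complex data; a sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))
y = rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))

lhs = np.linalg.det(A + x @ y.conj().T)
rhs = np.linalg.det(A) * (1 + (y.conj().T @ np.linalg.solve(A, x))[0, 0])
```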
Problem 64. Let n >= 1. Consider the 2n x 2n matrices A and B. Assume
that A^2 = I_{2n}, B^2 = I_{2n} and [A, B]_+ = 0_{2n}. Show that
det(A + B) = +-2^n.

Solution 64. We have
det(A + B) det(A + B) = det((A + B)(A + B)) = det(A^2 + B^2 + AB + BA)
= det(I_{2n} + I_{2n}) = det(2 I_{2n}) = 2^{2n}.
Hence (det(A + B))^2 = 2^{2n} and the result follows.
Problem 65. Let A, B be n x n matrices over C. Assume that B is invertible.
Show that there exists c in C such that A + cB is not invertible.
Hint: Start with the identity A + cB = (A B^{-1} + c I_n) B and apply the
determinant.

Solution 65. From A + cB = (A B^{-1} + c I_n) B we obtain
det(A + cB) = det(A B^{-1} + c I_n) det(B)
where det(B) != 0. If we choose -c equal to an eigenvalue of A B^{-1}, then
det(A B^{-1} + c I_n) = 0 and A + cB is not invertible.
Problem 66. The permanent of an n x n matrix A is defined as
Per(A) := sum_{sigma in S_n} prod_{j=1}^{n} a_{j sigma(j)}
where S_n is the set of all permutations of n elements. Find the permanent of a
2 x 2 matrix A.

Solution 66. We obtain Per(A) = a_{11} a_{22} + a_{12} a_{21}.
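The defining sum over permutations translates directly into code (fine for small n; the function name is ours):

```python
from itertools import permutations

def per(A):
    # per(A) = sum over permutations sigma of prod_j a_{j, sigma(j)}
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for j in range(n):
            p *= A[j][sigma[j]]
        total += p
    return total

p2 = per([[1, 2], [3, 4]])    # 1*4 + 2*3 = 10
```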
Problem 67. Let A be a 4 x 4 matrix which we write as
A = ( A_1  A_2
      A_3  A_4 )
where A_1, A_2, A_3, A_4 are the (block) 2 x 2 matrices in the matrix A. Consider
the map
( A_1  A_2        ( A_1^T  A_2^T
  A_3  A_4 )  ->    A_3^T  A_4^T )
where T denotes the transpose. Is the trace preserved under this map? Is the
determinant preserved under this map?

Solution 67. The trace is obviously preserved. The determinant is not preserved
in general. For example consider the case
( 1  0  0  0          ( 1  0  0  1
  0  0  1  0            0  0  0  0
  0  1  0  0    ->      0  0  0  0
  0  0  0  1 )          1  0  0  1 ).
The matrix A is invertible but not the matrix after the mapping.
Problem 68. Let H be a 4 x 4 matrix and U, V be 2 x 2 unitary matrices.
Calculate
det((U (x) V) H (U* (x) V*)),   tr((U (x) V) H (U* (x) V*)).

Solution 68. Since UU* = I_2 and VV* = I_2 we have
det((U (x) V) H (U* (x) V*)) = det(U (x) V) det(H) det(U* (x) V*)
= det((U (x) V)(U* (x) V*)) det(H)
= det(I_2 (x) I_2) det(H)
= det(H).
Using cyclic invariance of the trace we obtain
tr((U (x) V) H (U* (x) V*)) = tr((U* (x) V*)(U (x) V) H)
= tr(((U*U) (x) (V*V)) H) = tr((I_2 (x) I_2) H)
= tr(H).
Problem 69. Let A, B be n x n matrices over C. Consider the expression
(A (x) B - B (x) A) (+) (-[A, B])
where (+) denotes the direct sum. Find the trace of this expression. This
expression plays a role for universal enveloping algebras.

Solution 69. With tr(A (x) B) = tr(A) tr(B) we have
tr((A (x) B - B (x) A) (+) (-[A, B])) = tr(A (x) B - B (x) A) + tr(-[A, B])
= tr(A) tr(B) - tr(B) tr(A) - tr(AB - BA)
= 0.
Problem 70. Let A be a 2 x 2 matrix over R. Let tr(A) = c_1, tr(A^2) = c_2.
Can det(A) be calculated from c_1, c_2?

Solution 70. Yes, we can calculate det(A) from c_1, c_2. We have
tr(A) = a_{11} + a_{22} = c_1,   tr(A^2) = a_{11}^2 + a_{22}^2 + 2 a_{12} a_{21} = c_2.
Thus
(a_{11} + a_{22})^2 = a_{11}^2 + a_{22}^2 + 2 a_{11} a_{22} = c_1^2.
Subtracting yields c_1^2 - c_2 = 2 a_{11} a_{22} - 2 a_{12} a_{21} = 2 det(A), i.e.
det(A) = (1/2)(c_1^2 - c_2).
Can this be extended to 3 x 3 matrices, i.e. given tr(A) = c_1, tr(A^2) = c_2,
tr(A^3) = c_3? Find det(A) using only c_1, c_2, c_3.
Problem 71. Let J be the 2n x 2n matrix
J := (  0_n  I_n
       -I_n  0_n ).
We define symplectic G-reflectors to be those 2n x 2n symplectic matrices that
have a (2n - 1)-dimensional fixed-point subspace. Any symplectic G-reflector
can be expressed as
G = I_{2n} + beta u u^T J                                     (1)
for some 0 != beta in F and 0 != u in F^{2n}, where u is considered as a column
vector and F is the underlying field. Conversely, any G given by (1) is always a
symplectic G-reflector. Show that det(G) = +1.

Solution 71. Let G = I_{2n} + beta u u^T J be an arbitrary symplectic G-reflector.
Consider the continuous path of matrices given by
G(t) = I_{2n} + (1 - t) beta u u^T J
with 0 <= t <= 1. Then G(0) = G. Now G(t) is a symplectic G-reflector for
0 <= t < 1, so det(G(t)) = +-1 for all t < 1. However
lim_{t -> 1} G(t) = I_{2n}
so by continuity det(G(t)) = +1 for all t, in particular for t = 0. We can also prove
it as follows. Any symplectic G-reflector is the square of another symplectic
G-reflector. Since u^T J u = 0 for all u in F^{2n}, we have
G = I_{2n} + beta u u^T J = ( I_{2n} + (1/2) beta u u^T J )^2 =: S^2.
Thus det(G) = det(S^2) = (det(S))^2 = (+-1)^2 = +1.
Problem 72. Let A, B be positive semidefinite n x n matrices. Then
det(A + B) >= det(A) + det(B),   tr(AB) <= tr(A) tr(B).
Let
A = (1/3) ( 1  1  1        B = (1/3) ( 1  0  0
            1  1  1                    0  1  0
            1  1  1 ),                 0  0  1 ).
Both matrices are density matrices, with A a pure state and B a mixed state.
Calculate the left- and right-hand sides of the two inequalities. Discuss.

Solution 72. We have det(A) = 0, det(B) = 1/27 and det(A + B) = 4/27.
We have tr(A) = 1, tr(B) = 1 and tr(AB) = 1/3. Thus det(A + B) = 4/27 >=
det(A) + det(B) = 1/27 and tr(AB) = 1/3 <= tr(A) tr(B) = 1, in agreement with
the two inequalities.
Problem 73. Let A be an n x n positive semidefinite matrix. Let B be an
n x n positive definite matrix. Then we have Klein's inequality
tr(A(ln(A) - ln(B))) >= tr(A - B).
(i) Let
A = ( 1/2  1/2        B = ( 1/2   0
      1/2  1/2 ),            0   1/2 ).
Calculate the left-hand side and the right-hand side of the inequality.
(ii) When is the inequality an equality?

Solution 73. (i) Obviously the right-hand side of the inequality is equal to 0.
The unitary matrix
U = (1/sqrt(2)) ( 1   1
                  1  -1 )
puts A into diagonal form, i.e.
U A U^{-1} = ( 1  0
               0  0 )
and U^{-1} B U = B. Thus
tr(A(ln(A) - ln(B))) = tr(U A U^{-1}(ln(U A U^{-1}) - ln(U B U^{-1})))
= tr(A' ln(A') - A' ln(B)),   A' := U A U^{-1}.
Since 0 ln(0) := 0 and ln(1) = 0 we have tr(A' ln(A')) = 0 and tr(A' ln(B)) =
-ln(2). Thus Klein's inequality provides ln(2) >= 0.
(ii) We have equality if and only if A = B.
Problem 74. Consider the two polynomials
p_1(x) = a_0 + a_1 x + ... + a_n x^n,   p_2(x) = b_0 + b_1 x + ... + b_m x^m
where n = deg(p_1) and m = deg(p_2). Assume that n > m. Let r(x) :=
p_2(x)/p_1(x). We expand r(x) in powers of 1/x, i.e.
r(x) = c_1/x + c_2/x^2 + c_3/x^3 + ...
From the coefficients c_1, c_2, ..., c_{2n-1} we can form an n x n Hankel matrix
H_n = ( c_1  c_2      ...  c_n
        c_2  c_3      ...  c_{n+1}
        ...
        c_n  c_{n+1}  ...  c_{2n-1} ).
The determinant of this matrix is proportional to the resultant of the two
polynomials. If the resultant vanishes, then the two polynomials have a non-trivial
greatest common divisor. Apply this theorem to the polynomials
p_1(x) = x^3 + 6x^2 + 11x + 6,   p_2(x) = x^2 + 4x + 3.

Solution 74. Since n = 3 we need the coefficients c_1, c_2, ..., c_5. Straightforward
division yields
(x^2 + 4x + 3) : (x^3 + 6x^2 + 11x + 6) = 1/x - 2/x^2 + 4/x^3 - 8/x^4 + 16/x^5 - ...
Thus c_1 = 1, c_2 = -2, c_3 = 4, c_4 = -8, c_5 = 16. Thus the Hankel matrix is
H_3 = (  1  -2   4
        -2   4  -8
         4  -8  16 ).
We find det(H_3) = 0. Thus p_1, p_2 have a non-trivial greatest common divisor.
Obviously
(x^2 + 4x + 3)(x + 2) = x^3 + 6x^2 + 11x + 6.
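The 1/x-expansion can be automated with exact rational arithmetic: substituting x = 1/t turns p_2/p_1 into t^(n-m) times a power-series quotient whose numerator and denominator coefficient lists (highest power first) are exactly those of p_2 and p_1. A sketch (function names are ours):

```python
from fractions import Fraction

def inv_x_series(p2, p1, m):
    # c_1..c_m in p2(x)/p1(x) = c_1/x + c_2/x^2 + ... (deg p2 < deg p1, p1 monic)
    num = [Fraction(a) for a in p2]
    den = [Fraction(a) for a in p1]
    q = []
    for k in range(m):
        nk = num[k] if k < len(num) else Fraction(0)
        s = sum(den[j] * q[k - j] for j in range(1, min(k, len(den) - 1) + 1))
        q.append(nk - s)          # den[0] = 1 since p1 is monic
    shift = (len(p1) - 1) - (len(p2) - 1)
    return ([Fraction(0)] * (shift - 1) + q)[:m]

def hankel_det3(c):
    # determinant of the 3x3 Hankel matrix built from c_1..c_5
    H = [[c[0], c[1], c[2]], [c[1], c[2], c[3]], [c[2], c[3], c[4]]]
    return (H[0][0] * (H[1][1] * H[2][2] - H[1][2] * H[2][1])
            - H[0][1] * (H[1][0] * H[2][2] - H[1][2] * H[2][0])
            + H[0][2] * (H[1][0] * H[2][1] - H[1][1] * H[2][0]))

c = inv_x_series([1, 4, 3], [1, 6, 11, 6], 5)   # [1, -2, 4, -8, 16]
d = hankel_det3(c)                               # 0: non-trivial gcd exists
```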
Problem 75. Consider the 5 x 5 symmetric matrix over R
A = ( 1  0  0  0  1
      0  1  1  1  0
      0  1  1  1  0
      0  1  1  1  0
      1  0  0  0  1 ).
The rank is the number of linearly independent row (or column) vectors of A.
(i) Find the rank of the matrix. Explain.
(ii) Find the determinant and trace of the matrix.
(iii) Find all eigenvalues of the matrix.
(iv) Find one eigenvector.
(v) Is the matrix positive semidefinite?

Solution 75. (i) We have rank(A) = 2. There are only two distinct row vectors
in A and they are linearly independent.
(ii) Since rank(A) = 2 < 5 the determinant must be 0. The trace is given by
tr(A) = 5.
(iii) Since rank(A) = 2, three of the five eigenvalues must be 0. The other two
eigenvalues are 2 and 3.
(iv) Obviously ( 1 0 0 0 -1 )^T is an eigenvector (with eigenvalue 0).
(v) From (iii) we find that the matrix is positive semidefinite.
Problem 76. Let A be an n x n matrix over R. Assume that A^{-1} exists. Let
u, v be elements of R^n, where u, v are considered as column vectors.
(i) Show that if v^T A^{-1} u = -1, then A + u v^T is not invertible.
(ii) Assume that v^T A^{-1} u != -1. Show that
(A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1})/(1 + v^T A^{-1} u).      (1)

Solution 76. (i) From v^T A^{-1} u = -1 it follows that u != 0, v != 0 and
A^{-1} u != 0. Now we have
(A + u v^T) A^{-1} u = A A^{-1} u + u v^T A^{-1} u = u + u (v^T A^{-1} u) = u - u = 0
where we used v^T A^{-1} u = -1. Now (A + u v^T)(A^{-1} u) = 0 is an eigenvalue
equation with eigenvalue 0 and eigenvector A^{-1} u. Since
det(A + u v^T) = lambda_1 lambda_2 ... lambda_n
where lambda_1, ..., lambda_n are the eigenvalues of A + u v^T, we have
det(A + u v^T) = 0. Thus the matrix A + u v^T is not invertible.
(ii) Multiplying out, with beta := 1 + v^T A^{-1} u != 0 we have
(A + u v^T)(A^{-1} - (A^{-1} u v^T A^{-1})/beta)
= I_n + u v^T A^{-1} - (u v^T A^{-1} + u (v^T A^{-1} u) v^T A^{-1})/beta
= I_n + u v^T A^{-1} - u (1 + v^T A^{-1} u) v^T A^{-1} / beta
= I_n + u v^T A^{-1} - u v^T A^{-1} = I_n.
Thus equation (1) follows.
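Formula (1), the Sherman-Morrison formula, can be verified numerically on a deterministic example; a sketch:

```python
import numpy as np

# deterministic test case: A diagonal, u = v = (1,...,1)^T
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
u = np.ones((5, 1))
v = np.ones((5, 1))

Ainv = np.linalg.inv(A)
denom = 1.0 + (v.T @ Ainv @ u)[0, 0]              # 1 + v^T A^{-1} u
update = Ainv - (Ainv @ u @ v.T @ Ainv) / denom   # right-hand side of (1)
direct = np.linalg.inv(A + u @ v.T)
err = np.abs(update - direct).max()
```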
Problem 77. The collocation polynomial p(x) for unequally spaced arguments
x_0, x_1, ..., x_n can be found by the determinant method
det ( p(x)  1  x    x^2    ...  x^n
      y_0   1  x_0  x_0^2  ...  x_0^n
      y_1   1  x_1  x_1^2  ...  x_1^n
      ...
      y_n   1  x_n  x_n^2  ...  x_n^n ) = 0
where p(x_k) = y_k for k = 0, 1, ..., n. Apply it to (n = 2)
p(x_0 = 0) = 1 = y_0,   p(x_1 = 1/2) = 9/4 = y_1,   p(x_2 = 1) = 4 = y_2.

Solution 77. We have
det ( p(x)  1  x    x^2
      1     1  0    0
      9/4   1  1/2  1/4
      4     1  1    1   ) = 0.
It follows that p(x) = 1 + 2x + x^2.
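The determinant condition is equivalent to solving the Vandermonde system sum_k a_k x_j^k = y_j for the coefficients of p. A sketch with exact rational arithmetic (function name is ours):

```python
from fractions import Fraction

def collocation_poly(xs, ys):
    # solve the Vandermonde system for a_0, ..., a_n with p(x) = sum a_k x^k
    n = len(xs)
    M = [[Fraction(x) ** k for k in range(n)] for x in xs]
    b = [Fraction(y) for y in ys]
    # Gaussian elimination with partial pivoting, exact over the rationals
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            b[r] -= f * b[col]
            for c2 in range(col, n):
                M[r][c2] -= f * M[col][c2]
    a = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c2] * a[c2] for c2 in range(r + 1, n))
        a[r] = (b[r] - s) / M[r][r]
    return a

coeffs = collocation_poly([0, Fraction(1, 2), 1], [1, Fraction(9, 4), 4])
# -> [1, 2, 1], i.e. p(x) = 1 + 2x + x^2
```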
Problem 78. Let M be the Minkowski space endowed with the standard
coordinates x_0, x_1, x_2, x_3 and the metric tensor field
g = -dx_0 (x) dx_0 + dx_1 (x) dx_1 + dx_2 (x) dx_2 + dx_3 (x) dx_3
and the quadratic form
q(x) = -(x_0)^2 + (x_1)^2 + (x_2)^2 + (x_3)^2.
Let H(2) be the vector space of 2 x 2 hermitian matrices. Consider the map
psi : M -> H(2),
psi(x) = X = ( x_0 + x_3     x_1 - i x_2
               x_1 + i x_2   x_0 - x_3 )
which is an isomorphism.
(i) Find the determinant of X.
(ii) Find the Cayley transform of X, i.e.
U = (X + i I_2)^{-1} (X - i I_2)
with the inverse
X = i (I_2 + U)(I_2 - U)^{-1}.

Solution 78. (i) The determinant of X is given by
det(X) = (x_0)^2 - ((x_1)^2 + (x_2)^2 + (x_3)^2) = -q(x).
(ii) For the Cayley transform of X we find
U = (1/(-q(x) - 1 + 2i x_0)) ( 1 - q(x) + 2i x_3     2(i x_1 + x_2)
                               2(i x_1 - x_2)        1 - q(x) - 2i x_3 ).

Problem 79.
Let A be an n x n skew-symmetric matrix over R. Show that
det(I_n + A) >= 1
with equality holding if and only if A = 0_n.

Solution 79. First note that
det(I_n + A) = det((I_n + A)^T) = det(I_n - A).
The matrix I_n + A A^T is positive definite and all its eigenvalues are greater than
or equal to 1. Thus det(I_n + A A^T) >= 1. Now
det(I_n + A A^T) = det(I_n - A^2) = det((I_n - A)(I_n + A))
= det(I_n - A) det(I_n + A) = (det(I_n + A))^2 >= 1.
Thus det(I_n + A) >= 1 or det(I_n + A) <= -1. Since the eigenvalues of I_n + A are
either 1 or form complex-conjugate pairs, it follows that det(I_n + A) > 0 and
consequently det(I_n + A) >= 1.
Problem 80. Consider the (n - 1) x (n - 1) symmetric matrix over R
A_n = ( 3  1  1  ...  1
        1  4  1  ...  1
        1  1  5  ...  1
        ...
        1  1  1  ...  n+1 )
with n >= 2. Let D_n = det(A_n). Is the sequence { D_n/n! } bounded?

Solution 80. If we expand along the last row we obtain the recursion
D_n = n D_{n-1} + (n - 1)!,   n = 3, 4, ...
with the initial condition D_2 = 3. We divide by n! to obtain
D_n/n! = D_{n-1}/(n-1)! + 1/n.
Hence
D_n/n! = 1/n + 1/(n-1) + ... + 1/2 + 1.
Therefore the sequence { D_n/n! } is the n-th partial sum of the harmonic series,
which is unbounded as n -> infinity.
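The identity D_n/n! = H_n (the n-th harmonic number) can be checked against a direct determinant; a sketch for n = 6, where H_6 = 49/20:

```python
import math
import numpy as np
from fractions import Fraction

def A_matrix(n):
    # (n-1) x (n-1) matrix with diagonal 3, 4, ..., n+1 and off-diagonal entries 1
    M = np.ones((n - 1, n - 1))
    for i in range(n - 1):
        M[i, i] = i + 3
    return M

n = 6
Dn = np.linalg.det(A_matrix(n))
Hn = sum(Fraction(1, k) for k in range(1, n + 1))   # harmonic number H_n
ratio = Dn / math.factorial(n)                       # equals H_6 = 49/20
```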
Problem 81. The hyperdeterminant Det(A) of the three-dimensional array
A = (a_{ijk}) in R^{2x2x2} can be calculated as follows
Det(A) = (1/4) [ det( ( a_{000}  a_{010}        ( a_{100}  a_{110}
                        a_{001}  a_{011} )  +     a_{101}  a_{111} ) )
               - det( ( a_{000}  a_{010}        ( a_{100}  a_{110}
                        a_{001}  a_{011} )  -     a_{101}  a_{111} ) ) ]^2
         - 4 det ( a_{000}  a_{010}        det ( a_{100}  a_{110}
                   a_{001}  a_{011} )            a_{101}  a_{111} ).
Assume that only one of the coefficients a_{ijk} is nonzero. Calculate the
hyperdeterminant.

Solution 81. Since every monomial in the expansion of Det(A) contains at least
two distinct coefficients (no term of the form a_{ijk}^4 appears), we find that the
hyperdeterminant of A is 0.
Problem 82. Let eps_{00} = eps_{11} = 0, eps_{01} = 1, eps_{10} = -1, i.e. we
consider the 2 x 2 matrix
eps = (  0  1
        -1  0 ).
Then the determinant of a 2 x 2 matrix A_2 = (a_{ij}) with i, j = 0, 1 can be
defined as
det(A_2) := (1/2) sum_{i=0}^{1} sum_{j=0}^{1} sum_{l=0}^{1} sum_{m=0}^{1} eps_{il} eps_{jm} a_{ij} a_{lm}.
Thus det(A_2) = a_{00} a_{11} - a_{01} a_{10}. In analogy the hyperdeterminant
Det(A_3) of the 2 x 2 x 2 array A_3 = (a_{ijk}) with i, j, k = 0, 1 is defined as
Det(A_3) := -(1/2) sum_{i,i'=0}^{1} sum_{j,j'=0}^{1} sum_{k,k'=0}^{1} sum_{m,m'=0}^{1} sum_{n,n'=0}^{1} sum_{p,p'=0}^{1}
  eps_{ii'} eps_{jj'} eps_{kk'} eps_{mm'} eps_{nn'} eps_{pp'} a_{ijk} a_{i'j'm} a_{npk'} a_{n'p'm'}.
Calculate Det(A_3).

Solution 82. There are 2^8 = 256 terms for Det(A_3), but only 24 are nonzero.
We find
Det(A_3) = a_{000}^2 a_{111}^2 + a_{001}^2 a_{110}^2 + a_{010}^2 a_{101}^2 + a_{100}^2 a_{011}^2
 - 2(a_{000} a_{001} a_{110} a_{111} + a_{000} a_{010} a_{101} a_{111} + a_{000} a_{100} a_{011} a_{111}
   + a_{001} a_{010} a_{101} a_{110} + a_{001} a_{100} a_{011} a_{110} + a_{010} a_{100} a_{011} a_{101})
 + 4(a_{000} a_{011} a_{101} a_{110} + a_{001} a_{010} a_{100} a_{111}).
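The slice formula of Problem 81 and the polynomial expansion above can be coded and compared on random data; a sketch (function names are ours):

```python
import numpy as np

def hyperdet_slices(a):
    # Problem 81: Det = (1/4)[det(A0+A1) - det(A0-A1)]^2 - 4 det(A0) det(A1),
    # where A_i = ( a_{i00} a_{i10} ; a_{i01} a_{i11} )
    A0 = np.array([[a[0, 0, 0], a[0, 1, 0]], [a[0, 0, 1], a[0, 1, 1]]])
    A1 = np.array([[a[1, 0, 0], a[1, 1, 0]], [a[1, 0, 1], a[1, 1, 1]]])
    d = np.linalg.det
    return 0.25 * (d(A0 + A1) - d(A0 - A1)) ** 2 - 4 * d(A0) * d(A1)

def hyperdet_expanded(a):
    # polynomial expansion of Problem 82
    g = lambda i, j, k: a[i, j, k]
    return (g(0,0,0)**2*g(1,1,1)**2 + g(0,0,1)**2*g(1,1,0)**2
            + g(0,1,0)**2*g(1,0,1)**2 + g(1,0,0)**2*g(0,1,1)**2
            - 2*(g(0,0,0)*g(0,0,1)*g(1,1,0)*g(1,1,1)
                 + g(0,0,0)*g(0,1,0)*g(1,0,1)*g(1,1,1)
                 + g(0,0,0)*g(1,0,0)*g(0,1,1)*g(1,1,1)
                 + g(0,0,1)*g(0,1,0)*g(1,0,1)*g(1,1,0)
                 + g(0,0,1)*g(1,0,0)*g(0,1,1)*g(1,1,0)
                 + g(0,1,0)*g(1,0,0)*g(0,1,1)*g(1,0,1))
            + 4*(g(0,0,0)*g(0,1,1)*g(1,0,1)*g(1,1,0)
                 + g(0,0,1)*g(0,1,0)*g(1,0,0)*g(1,1,1)))

rng = np.random.default_rng(4)
a = rng.standard_normal((2, 2, 2))
v1, v2 = hyperdet_slices(a), hyperdet_expanded(a)

b = np.zeros((2, 2, 2))
b[0, 1, 0] = 3.0          # a single nonzero coefficient (Problem 81): Det = 0
```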
Supplementary Problems
Problem 1. Let A, B be 2 x 2 matrices over C. What are the conditions on
A, B such that tr(AB) = tr(A) tr(B)?
Problem 2. Given a 2 x 2 matrix M over C. What can be said about the
determinant of M (x) I_2 - I_2 (x) M?
Problem 3. (i) What is the condition on a in R such that
F(a) = ( 1  a  a
         a  1  a
         a  a  1 )
is invertible?
(ii) What is the condition on a, p in R such that
G(a, p) = ( 1  a  a  p
            a  1  p  a
            a  p  1  a
            p  a  a  1 )
is invertible?
Problem 4. Let A, B be n x n matrices over C. Then tr([A, B]) = 0. Consider
n = 3 and
C = ( 1  0   0
      0  0   0
      0  0  -1 )
with tr(C) = 0. Construct 3 x 3 matrices A and B such that [A, B] = C.
Problem 5. Consider the k vectors
v_j = ( v_{1j}, v_{2j}, ..., v_{nj} )^T,   j = 1, ..., k
in the Euclidean space E^n. The parallelepiped determined by these vectors has
the k-dimensional area
sqrt(det(V^T V))
where V is the n x k matrix with v_1, v_2, ..., v_k as its columns. Apply it to the
two vectors in E^3
( 1  0  1 )^T
Problem 6. The equation of a hyperplane passing through the n points x_1,
x_2, ..., x_n in the Euclidean space E^n is given by
det ( x  x_1  x_2  ...  x_n
      1   1    1   ...   1  ) = 0.
Let n = 3. Apply it to three vectors in E^3.
Problem 7. Let A, B be 2 x 2 matrices. Show that
[A, B]_+ = AB + BA = (tr(AB) - tr(A) tr(B)) I_2 + tr(A) B + tr(B) A.
Can this identity be extended to 3 x 3 matrices?
Problem 8. (i) Find all nonzero 2 x 2 matrices A and B such that BA* =
tr(BA*) A.
(ii) Find all 2 x 2 matrices A such that A^2 = tr(A) A. Calculate det(A) and
det(A^2) of such a matrix.
(iii) Let B be a 2 x 2 matrix over C. Assume that B^2 = I_2 and thus tr(B^2) = 2.
What can be said about the trace of B?
(iv) Find all 2 x 2 matrices C with tr(C^2) - (tr(C))^2 = 0.
(v) Let D be a 2 x 2 matrix over C. Find the condition on D such that det(D) =
tr(D^2). Find the condition on D such that det(D (x) D) = tr(D (x) D).
Problem 9. Let A, B be n x n matrices. Let U, V be unitary n x n matrices.
Show that
tr((U (x) V)(A (x) B)(U* (x) V*)) = tr(A) tr(B)
applying cyclic invariance and UU* = I_n, VV* = I_n.
Problem 10. Let A, B be 2 x 2 matrices over C with tr(A) = 0 and tr(B) = 0.
Show that the condition on A and B such that tr(AB) = 0 is given by
2 a_{11} b_{11} + a_{12} b_{21} + a_{21} b_{12} = 0.
Problem 11. Let A, B be nonzero 2 x 2 matrices over R with det(A (x) B) = 0.
Can we conclude that both A and B are noninvertible?
Problem 12. Consider the symmetric n x n band matrix (n >= 3)
M_n = ( 1  1  0  0  ...  0  0
        1  1  1  0  ...  0  0
        0  1  1  1  ...  0  0
        ...
        0  0  0  ...  1  1  1
        0  0  0  ...  0  1  1 )
with the elements in the field F_2. Show that
det(M_n) = det(M_{n-1}) - det(M_{n-2})
with the initial conditions det(M_3) = det(M_4) = 1. Show that the solution is
det(M_n) = (2 sqrt(3)/3) cos(n pi/3 - pi/6)  (mod 2).
Problem 13. Consider the nonnormal matrices A_2 and A_3. Find
(det(A_2 A_2*))^{1/2} and (det(A_3 A_3*))^{1/2}. Extend to n dimensions.
Problem 14. Let A be an n x n matrix with det(A) = -1. Show that
det(A^{-1}) = -1.
Problem 15. Let n be even. Consider a skew-symmetric n x n matrix A over
R. Let B be a symmetric n x n matrix over R with the entries of B = (b_{jk})
given by b_{jk} = b_j b_k (b_j, b_k in R). Show that
det(A + B) = det(A).
Notice that det(B) = 0. Let S_n be the symmetric group. Then we have
det(A + B) = sum_{sigma in S_n} sgn(sigma) (a_{1 sigma(1)} + b_1 b_{sigma(1)}) (a_{2 sigma(2)} + b_2 b_{sigma(2)}) ... (a_{n sigma(n)} + b_n b_{sigma(n)}).
Problem 16. Let h > 0 and b > a. Consider the matrix
M(a, b, h) = ( 1  ha
               1  hb ).
Show that det(M) = h(b - a) > 0 and the inverse of M is given by
M^{-1}(a, b, h) = (  b/(b-a)       -a/(b-a)
                    -1/(h(b-a))     1/(h(b-a)) ).
Problem 17. The n x n permutation matrices form a group under matrix
multiplication. Show that det(I_n - P) = 0 for any n x n permutation matrix P.
Problem 18. Let u, v, w be column vectors in R^3. We form the 3 x 3 matrix
M = ( u  v  w ).
Show that det(M) = (u x v) . w.
Problem 19. (i) Let n >= 2. Consider the n x n matrix
A = ( 1    2    3   ...  n-1  n
      2    3    4   ...  n    1
      3    4    5   ...  1    2
      ...
      n    1    2   ...  n-2  n-1 ).
Show that det(A) = (-1)^{n(n-1)/2} (n + 1) n^{n-1} / 2.
(ii) Let n >= 2. Consider the n x n matrix
M(x) = ( c_1  x    x    ...  x
         x    c_2  x    ...  x
         x    x    c_3  ...  x
         ...
         x    x    x    ...  c_n ).
Show that det(M(x)) = (-1)^n (P(x) - x dP(x)/dx), where
P(x) = (x - c_1)(x - c_2) ... (x - c_n).
Problem 20. Consider a 2 x 2 matrix C with
tr(C) = 1,   tr(C^2) = 3.
Show that tr(C^k) = tr(C^{k-1}) + tr(C^{k-2}) with k = 3, 4, ...
Problem 21. Let A be an n x n matrix. Assume that
tr(A^j) = 0   for   j = 1, ..., n.
Can we conclude that det(A) = 0?
Problem 22. Let A_1, A_2, A_3 be 2 x 2 matrices over R. We define
c_j := tr(A_j),   j = 1, 2, 3,
c_{jk} := tr(A_j A_k),   j, k = 1, 2, 3.
(i) Given the coefficients c_j and c_{jk}, reconstruct the matrices A_1, A_2, A_3.
(ii) Apply the result to the case c_1 = c_2 = c_3 = 0,
c_{11} = 0, c_{12} = 1, c_{13} = 0, c_{21} = 1, c_{22} = 0, c_{23} = 0,
c_{31} = 0, c_{32} = 0, c_{33} = 2
and give the resulting matrices A_1, A_2, A_3 explicitly.
Problem 23. (i) Let B be a 2 x 2 matrix over C. Given
d_1 = det(B),   d_2 = det(B^2),   d_3 = det(B^3),   d_4 = det(B^4).
Can we reconstruct B from d_1, d_2, d_3, d_4? Does it depend on whether the
matrix B is normal?
(ii) Find all 2 x 2 matrices A_1, A_2, A_3 over C such that
tr(A_1) = tr(A_2) = tr(A_3) = 0,
tr(A_1 A_2) = tr(A_2 A_3) = tr(A_3 A_1) = 0.
(iii) Find all 3 x 3 matrices B_1, B_2, B_3 over C such that
tr(B_1) = tr(B_2) = tr(B_3) = 0,
tr(B_1 B_2) = tr(B_2 B_3) = tr(B_3 B_1) = 0.
(iv) Find all 2 x 2 matrices C such that det(C) = 1, tr(C) = 0. Do these matrices
form a group under matrix multiplication?
Problem 24. Let A, B be n x n matrices over C. Assume that B is invertible.
Show that det(I_n + B A B^{-1}) = det(I_n + A).

Problem 25. An n x n matrix A is called idempotent if A^2 = A. Show that
rank(A) = tr(A).
Problem 26. Let A be a real symmetric and positive definite n x n matrix.
Then ln(det(A)) <= tr(A) - n. For a 2 x 2 matrix with det(A) = 2 and tr(A) = 3
we have n = 2 and the inequality reads ln(2) <= 1.
Problem 27. Let A, B, C be n x n matrices over C. Let a_j, b_j, c_j (j =
1, ..., n) be the j-th column of A, B, C, respectively. Show that if for some
k in {1, ..., n}
c_k = a_k + b_k
and
c_j = a_j = b_j,   j = 1, ..., k-1, k+1, ..., n
then det(C) = det(A) + det(B).
Problem 28. Consider the Legendre polynomials p_j (j = 0, 1, …) with
p₀(x) = 1,  p₁(x) = x,  p₂(x) = (3x² − 1)/2,  p₃(x) = (5x³ − 3x)/2,  p₄(x) = (35x⁴ − 30x² + 3)/8.
Show that
det((p₀(x), p₁(x), p₂(x)), (p₁(x), p₂(x), p₃(x)), (p₂(x), p₃(x), p₄(x)))
    = (1 − x²)³ det((p₀(0), 0, p₂(0)), (0, p₂(0), 0), (p₂(0), 0, p₄(0))).
Note that (x omitted)
det((p₀, p₁, p₂), (p₁, p₂, p₃), (p₂, p₃, p₄)) = p₀(p₂p₄ − (p₃)²) − p₁(p₁p₄ − p₂p₃) + p₂(p₁p₃ − (p₂)²)
and p₀(0) = 1, p₁(0) = 0, p₂(0) = −1/2, p₃(0) = 0, p₄(0) = 3/8.
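A quick symbolic check of the determinant identity is possible with computer algebra (a sketch assuming SymPy is available; `sympy.legendre` generates exactly the polynomials p₀, …, p₄):

```python
# Verify that the determinant of the Hankel matrix of Legendre
# polynomials equals (1 - x^2)^3 times the same determinant at x = 0.
import sympy as sp

x = sp.symbols('x')
p = [sp.legendre(n, x) for n in range(5)]      # p0(x), ..., p4(x)

M = sp.Matrix(3, 3, lambda j, k: p[j + k])     # entry (j,k) is p_{j+k}(x)
lhs = M.det()
rhs = (1 - x**2)**3 * M.subs(x, 0).det()
print(sp.simplify(lhs - rhs))                  # 0
```

Expanding, the left-hand side simplifies to (x² − 1)³/16, and the 3 × 3 determinant at x = 0 is −1/16.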
Problem 29. Let A be an n × n diagonal matrix over ℂ. Let B be an n × n matrix over ℂ with b_jj = 0 for all j = 1, …, n. Can we conclude that all diagonal elements of the commutator [A, B] are 0?
Problem 30. Let A, B be n × n matrices over ℝ with det(A) = 1 and det(B) = 1. This means A, B are elements of the Lie group SL(n, ℝ). Can we conclude that
tr(AB) + tr(AB⁻¹) = tr(A)tr(B)?
Problem 31. Let H be a hermitian n × n matrix. Show that det(H + iIₙ) ≠ 0.

Problem 32. Let A be a 2 × 2 matrix over ℝ. Calculate r = tr(A²) − (tr(A))². Show that the conditions on a_jk such that r = 0 are given by a₁₂a₂₁ = a₁₁a₂₂, i.e. det(A) = 0.
Problem 33. Consider a triangle embedded in ℝ³. Let v_j = (x_j, y_j, z_j) (j = 1, 2, 3) be the coordinates of the vertices. Then the area A of the triangle is given by
A = (1/2)‖(v₂ − v₁) × (v₁ − v₃)‖ = (1/2)‖(v₃ − v₁) × (v₃ − v₂)‖
where × denotes the vector product and ‖·‖ denotes the Euclidean norm. The area A of the triangle can also be found via
A = (1/2)√(D₁² + D₂² + D₃²)
with
D₁ = det((x₁, y₁, 1), (x₂, y₂, 1), (x₃, y₃, 1)),
D₂ = det((y₁, z₁, 1), (y₂, z₂, 1), (y₃, z₃, 1)),
D₃ = det((z₁, x₁, 1), (z₂, x₂, 1), (z₃, x₃, 1))
where det denotes the determinant. Consider
v₁ = (1, 0, 0),  v₂ = (0, 1, 0),  v₃ = (0, 0, 1).
Find the area of the triangle using both expressions. Discuss. The triangle could
be one of the faces of a tetrahedron.
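Both area expressions can be cross-checked numerically (a sketch assuming NumPy is available):

```python
# Compare the cross-product and determinant formulas for the area of
# the triangle with vertices v1 = (1,0,0), v2 = (0,1,0), v3 = (0,0,1).
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([0.0, 0.0, 1.0])

# cross-product formula
area_cross = 0.5 * np.linalg.norm(np.cross(v2 - v1, v1 - v3))

# determinant formula: one determinant per coordinate plane
def D(a, b, c):
    return np.linalg.det(np.array([[a[0], a[1], 1.0],
                                   [b[0], b[1], 1.0],
                                   [c[0], c[1], 1.0]]))

d1 = D(v1[[0, 1]], v2[[0, 1]], v3[[0, 1]])   # (x, y) plane
d2 = D(v1[[1, 2]], v2[[1, 2]], v3[[1, 2]])   # (y, z) plane
d3 = D(v1[[2, 0]], v2[[2, 0]], v3[[2, 0]])   # (z, x) plane
area_det = 0.5 * np.sqrt(d1**2 + d2**2 + d3**2)

print(area_cross, area_det)   # both sqrt(3)/2
```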
Problem 34. Consider the 3 × 3 matrix M with entries
(M)_jk = x_k^(j−1),  j, k = 1, 2, 3.
Find the determinant of this matrix.
Problem 35. The permanent and the determinant of an n × n matrix M over ℂ are respectively defined as
perm(M) := Σ_{π∈Sₙ} Π_{j=1}^n M_{jπ(j)},    det(M) := Σ_{π∈Sₙ} sgn(π) Π_{j=1}^n M_{jπ(j)}
where Sₙ denotes the symmetric group on a set of n symbols and sgn(π) the sign of the permutation π. For an n × n matrix A and an m × m matrix B we know that
det(A ⊗ B) = (det(A))^m (det(B))^n.
Study perm(A ⊗ B).
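The permanent has no analogue of the Kronecker-product formula for the determinant; a small experiment makes this visible (a sketch assuming NumPy; the all-ones test matrices are our choice):

```python
# perm(A (x) B) versus perm(A)^m perm(B)^n for 2x2 all-ones matrices.
from itertools import permutations
import numpy as np

def perm(M):
    # permanent via the defining sum over the symmetric group S_n
    n = M.shape[0]
    return sum(np.prod([M[j, p[j]] for j in range(n)])
               for p in permutations(range(n)))

A = np.ones((2, 2))   # perm(A) = 2
B = np.ones((2, 2))   # perm(B) = 2
K = np.kron(A, B)     # the 4x4 all-ones matrix

print(perm(K))                    # 24.0  (= 4!)
print(perm(A)**2 * perm(B)**2)    # 16.0
```

So perm(A ⊗ B) ≠ (perm(A))^m (perm(B))^n in general.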
Problem 36. (i) Let A, B be positive definite n × n matrices. Show that tr(A + B) > tr(A).
(ii) Let C, D be n × n matrices over ℂ. Is tr(CC*DD*) ≤ tr(CC*)tr(DD*)?
Problem 37. Let A, B, C, D be n × n matrices. Assume that D is invertible. Consider the (2n) × (2n) matrix
M = ((A, B), (C, D)).
Show that det(M) = det(AD − BD⁻¹CD) using the identity
((A, B), (C, D)) ((Iₙ, 0ₙ), (−D⁻¹C, Iₙ)) = ((A − BD⁻¹C, B), (0ₙ, D)).
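The block-determinant formula can be sanity-checked numerically (a sketch assuming NumPy; the random test matrices are our choice):

```python
# Check det(M) = det(A D - B D^{-1} C D) for a random block matrix
# with invertible D.
import numpy as np

rng = np.random.default_rng(0)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

M = np.block([[A, B], [C, D]])
Dinv = np.linalg.inv(D)

lhs = np.linalg.det(M)
rhs = np.linalg.det(A @ D - B @ Dinv @ C @ D)
print(np.isclose(lhs, rhs))   # True
```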
Problem 38. Let μ ∈ ℝ and A be a 2 × 2 matrix over ℝ. Show that the determinant of the 4 × 4 matrix
((−μI₂, A), (Aᵀ, −μI₂))
only contains even powers of μ.
Problem 39. Consider the invertible 3 × 3 matrix A
A = ((1/√2, 0, 1/√2), (1/√2, 0, −1/√2), (0, 1, 0))  ⇒  A⁻¹ = ((1/√2, 1/√2, 0), (0, 0, 1), (1/√2, −1/√2, 0)).
Find a permutation matrix P such that PA⁻¹P⁻¹ = A.
Problem 40. Let x, ε ∈ ℝ. Show that the determinant of the symmetric n × n matrix
A(x, ε) = ((x + ε, x, …, x), (x, x + ε, …, x), …, (x, x, …, x + ε))
is given by det(A(x, ε)) = εⁿ⁻¹(nx + ε).
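A quick numerical check of the formula det(A(x, ε)) = εⁿ⁻¹(nx + ε) (a sketch assuming NumPy; n, x, ε are arbitrary test values):

```python
# A(x, eps) = x * J + eps * I, where J is the all-ones matrix.
import numpy as np

n, x, eps = 5, 0.7, 0.3
A = x * np.ones((n, n)) + eps * np.eye(n)
print(np.isclose(np.linalg.det(A), eps**(n - 1) * (n * x + eps)))   # True
```

The formula also follows from the eigenvalues of A(x, ε): nx + ε once and ε with multiplicity n − 1.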
Problem 41. Show that the determinant of the 2n × 2n matrix
Ω = ((0, 1), (−1, 0)) ⊗ Jₙ
is given by det(Ω) = (det(Jₙ))² = 1.

Problem 42. Given a 2 × 2 × 2 hypermatrix A = (a_jkℓ), j, k, ℓ = 0, 1 and the 2 × 2 matrix
S = ((s₀₀, s₀₁), (s₁₀, s₁₁)).
The multiplication AS, which is again a 2 × 2 × 2 hypermatrix, is defined by
(AS)_jkℓ := Σ_{r=0}^1 a_jkr s_rℓ.
Assume that det(S) = 1, i.e. S ∈ SL(2, ℂ). Show that Det(AS) = Det(A) for the hyperdeterminant Det. This is a typical problem to apply computer algebra.
Problem 43. The Levi-Civita symbol (also called the completely antisymmetric constant tensor) is defined by
ε_{j₁j₂…jₙ} := +1 if j₁, j₂, …, jₙ is an even permutation of 1 2 … n,
              −1 if j₁, j₂, …, jₙ is an odd permutation of 1 2 … n,
               0 otherwise.
Let δ_jk be the Kronecker delta. Show that
ε_{j₁j₂…jₙ} ε_{k₁k₂…kₙ} = det((δ_{j₁k₁}, δ_{j₁k₂}, …, δ_{j₁kₙ}), (δ_{j₂k₁}, δ_{j₂k₂}, …, δ_{j₂kₙ}), …, (δ_{jₙk₁}, δ_{jₙk₂}, …, δ_{jₙkₙ})).
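For n = 3 the identity can be verified exhaustively with computer algebra (a sketch assuming SymPy, whose `LeviCivita` and `KroneckerDelta` match the definitions above):

```python
# Check eps_{j1 j2 j3} eps_{k1 k2 k3} = det(delta_{j_a k_b}) for all
# 27 x 27 index combinations.
from itertools import product
from sympy import Matrix, LeviCivita, KroneckerDelta

n = 3
ok = all(
    LeviCivita(*j) * LeviCivita(*k)
    == Matrix(n, n, lambda a, b: KroneckerDelta(j[a], k[b])).det()
    for j in product(range(1, n + 1), repeat=n)
    for k in product(range(1, n + 1), repeat=n)
)
print(ok)   # True
```

Repeated indices make both sides vanish: the Levi-Civita factor is 0, and the Kronecker-delta matrix then has two equal rows or columns.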
Chapter 5

Eigenvalues and Eigenvectors
Let A be an n × n matrix over ℂ. The complex number λ is called an eigenvalue of A if and only if the matrix (A − λIₙ) is singular. Let x be a nonzero column vector in ℂⁿ. Then x is called an eigenvector belonging to (or associated with) the eigenvalue λ if and only if (A − λIₙ)x = 0. The equation
Ax = λx
is called the eigenvalue equation. For an eigenvector x of A, Ax is a scalar multiple of x. The eigenvalues are found from the characteristic equation
det(A − λIₙ) = 0.
From the eigenvalue equation we obtain x*A* = λ̄x* and thus
x*A*Ax = λλ̄ x*x.
If the matrix is hermitian then the eigenvalues are real. If the matrix is unitary then the absolute value of the eigenvalues is 1. If the matrix is skew-hermitian then the eigenvalues are purely imaginary. If the matrix is a projection matrix then the eigenvalues can only take the values 1 and 0. The eigenvalues of an upper or lower triangular matrix are the elements on the diagonal. Let A, B be n × n matrices. Then AB and BA have the same spectrum.
The trace of a square matrix A is the sum of its eigenvalues (counting degeneracy). The determinant of a square matrix A is the product of its eigenvalues (counting degeneracy). Let f be an analytic function f : ℝ → ℝ or f : ℂ → ℂ. Then from Ax = λx we obtain f(A)x = f(λ)x.
Problem 1. (i) Find the eigenvalues and normalized eigenvectors of the 2 × 2 matrix
A = ((sin(θ), cos(θ)), (−cos(θ), sin(θ))).
Are the eigenvectors orthogonal to each other?
(ii) Find the eigenvalues and normalized eigenvectors of the 2 × 2 matrix
((cos(θ), e^{iφ} sin(θ)), (−e^{−iφ} sin(θ), cos(θ))).
(iii) Consider the normalized vector in ℝ³
n = (sin(θ)cos(φ), sin(θ)sin(φ), cos(θ))ᵀ.
Calculate the 2 × 2 matrix U(θ, φ) = n · σ = n₁σ₁ + n₂σ₂ + n₃σ₃, where σ₁, σ₂, σ₃ are the Pauli spin matrices. Is the matrix U(θ, φ) unitary? Find the trace and the determinant. Is the matrix U(θ, φ) hermitian? Find the eigenvalues and normalized eigenvectors of U(θ, φ).
Solution 1. (i) From the characteristic equation λ² − 2λ sin(θ) + 1 = 0 we obtain the eigenvalues
λ₁ = sin(θ) + i cos(θ),  λ₂ = sin(θ) − i cos(θ).
The corresponding normalized eigenvectors are
x₁ = (1/√2)(1, i)ᵀ,  x₂ = (1/√2)(1, −i)ᵀ.
We see that x₁*x₂ = 0. Thus the eigenvectors are orthogonal to each other.
(ii) The matrix is unitary. Thus the absolute value of the eigenvalues must be 1. The eigenvalues are e^{−iθ}, e^{iθ} with the corresponding normalized eigenvectors
(1/√2)(1, −ie^{−iφ})ᵀ,  (1/√2)(1, ie^{−iφ})ᵀ.
(iii) We find
U(θ, φ) = ((cos(θ), e^{−iφ} sin(θ)), (e^{iφ} sin(θ), −cos(θ))).
The matrix is unitary since U(θ, φ)U*(θ, φ) = I₂. The trace is zero and the determinant is −1. The matrix is also hermitian. Since the matrix is unitary, hermitian and the trace is equal to 0, we obtain the eigenvalues λ₁ = +1 and λ₂ = −1. Using the identities
sin(θ) = 2 sin(θ/2) cos(θ/2),  cos(θ) = cos²(θ/2) − sin²(θ/2)
we obtain the normalized eigenvectors
(e^{−iφ/2} cos(θ/2), e^{iφ/2} sin(θ/2))ᵀ,  (−e^{−iφ/2} sin(θ/2), e^{iφ/2} cos(θ/2))ᵀ.
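The claims about U(θ, φ) = n · σ in part (iii) are easy to confirm numerically (a sketch assuming NumPy; θ, φ are arbitrary test values):

```python
# U = n . sigma should be hermitian, unitary, traceless, with
# determinant -1 and eigenvalues +1, -1.
import numpy as np

theta, phi = 0.7, 1.3
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])
U = n[0] * s1 + n[1] * s2 + n[2] * s3

print(np.allclose(U, U.conj().T))               # hermitian: True
print(np.allclose(U @ U.conj().T, np.eye(2)))   # unitary: True
print(np.isclose(np.trace(U), 0), np.isclose(np.linalg.det(U), -1))
print(np.linalg.eigvalsh(U))                    # [-1.  1.]
```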
Problem 2. Find all solutions of the linear equation
((cos(θ), −sin(θ)), (−sin(θ), −cos(θ))) x = x    (1)
with the condition that x ∈ ℝ² and xᵀx = 1, i.e. the vector x must be normalized. What type of equation is (1)?

Solution 2. We find
x = (cos(θ/2), −sin(θ/2))ᵀ.
Obviously (1) is an eigenvalue equation and the eigenvalue is +1.
Problem 3. (i) Let ε ∈ ℝ. Find the eigenvalues and eigenvectors of the 2 × 2 matrix
A(ε) = (1/√(1 + ε²)) ((ε, 1), (1, −ε)).
For ε = 0 we obtain the Pauli spin matrix σ₁, for ε = 1 we have the Hadamard matrix and for ε → ∞ we obtain the Pauli spin matrix σ₃.
(ii) Let ε ∈ ℝ. Find the eigenvalues and eigenvectors of the 2 × 2 matrix
B(ε) = ((tanh(ε), 1/cosh(ε)), (1/cosh(ε), −tanh(ε))).
The matrices A(ε) and B(ε) are connected via the invertible transformation ε → sinh(ε).
Solution 3. (i) The eigenvalues of A(ε) are +1 and −1 and thus are independent of ε. The corresponding eigenvectors are
(√(1 + ε²), 1 + ε² − ε√(1 + ε²))ᵀ  and  (√(1 + ε²), −1 − ε² − ε√(1 + ε²))ᵀ.
(ii) The eigenvalues of B(ε) are +1 and −1 and thus are independent of ε. The corresponding eigenvectors are
(1/cosh(ε), 1 − tanh(ε))ᵀ  and  (1/cosh(ε), −1 − tanh(ε))ᵀ.
Problem 4. Let A be a 2 × 2 matrix. Assume that det(A) = 0 and tr(A) = 0. What can be said about the eigenvalues of A?

Solution 4. We have λ₁λ₂ = 0 and λ₁ + λ₂ = 0. From these conditions it follows that both eigenvalues must be 0. Is such a matrix normal?
Problem 5. Find all 2 × 2 matrices A over ℂ which admit the normalized eigenvectors
v₁ = (1/√2)(1, 1)ᵀ,  v₂ = (1/√2)(1, −1)ᵀ
with the corresponding eigenvalues λ₁ and λ₂.

Solution 5. From the eigenvalue equations Av₁ = λ₁v₁, Av₂ = λ₂v₂ we find after addition and subtraction to eliminate λ₁ and λ₂ that a₁₁ = a₂₂, a₁₂ = a₂₁. Inserting these equations into the eigenvalue equations we obtain the linear equations
a₁₁ + a₁₂ = λ₁,  a₁₁ − a₁₂ = λ₂.
Thus
a₁₁ = a₂₂ = (λ₁ + λ₂)/2,  a₁₂ = a₂₁ = (λ₁ − λ₂)/2.
Since the normalized eigenvectors are pairwise orthonormal we can also find A with the spectral theorem A = λ₁v₁v₁* + λ₂v₂v₂*.
Problem 6. Find the eigenvalues and normalized eigenvectors of the 2 × 2 matrix
M(θ) = ((sin(θ), cos(θ)), (cos(θ), −sin(θ))).

Solution 6. We have det(M(θ)) = −1 and tr(M(θ)) = 0. The eigenvalues are +1 and −1 and the corresponding normalized eigenvectors are
(1/√2)(√(1 + sin(θ)), √(1 − sin(θ)))ᵀ,  (1/√2)(√(1 − sin(θ)), −√(1 + sin(θ)))ᵀ.
Extend the problem to the matrix
((sin(θ), e^{iφ} cos(θ)), (e^{−iφ} cos(θ), −sin(θ))).
This matrix contains the Pauli spin matrices σ₁, σ₂, σ₃ for θ = 0, φ = 0; θ = 0, φ = 3π/2; θ = π/2, φ = 0.

Problem 7. Let μ₁, μ₂ ∈ ℝ. Consider the 2 × 2 matrix
A(μ₁², μ₂²) = ((μ₁² cos²(θ) + μ₂² sin²(θ), (μ₂² − μ₁²) cos(θ) sin(θ)), ((μ₂² − μ₁²) cos(θ) sin(θ), μ₁² sin²(θ) + μ₂² cos²(θ))).
(i) Find the trace and determinant of A(μ₁², μ₂²).
(ii) Find the eigenvalues of A(μ₁², μ₂²).
Remark. Any positive semidefinite 2 × 2 matrix over ℝ can be written in this form. Try the 2 × 2 matrix with all entries +1.

Solution 7. (i) Let λ₁, λ₂ be the eigenvalues of A(μ₁², μ₂²). We have
tr(A(μ₁², μ₂²)) = μ₁² + μ₂² = λ₁ + λ₂.
(ii) We obtain det(A(μ₁², μ₂²)) = μ₁²μ₂² = λ₁λ₂.
(iii) From (i) and (ii) we find that the eigenvalues are given by λ₁ = μ₁², λ₂ = μ₂². Thus the eigenvalues cannot be negative.
Extra. Find the eigenvalues of A(μ₁², μ₂²) ⊗ A(ν₁², ν₂²).

Problem 8.
(i) Let α ∈ ℝ. Consider the matrices
A(α) = ((cos(α), −sin(α)), (sin(α), cos(α))),  B(α) = ((cos(α), sin(α)), (sin(α), −cos(α))).
Find the trace and determinant of these matrices. Show that for the matrix A(α) the eigenvalues depend on α but the eigenvectors do not. Show that for the matrix B(α) the eigenvalues do not depend on α but the eigenvectors do.
(ii) Let α ∈ ℝ. Consider the matrices
C(α) = ((cosh(α), sinh(α)), (sinh(α), cosh(α))),  D(α) = (1/cosh(α)) ((1, sinh(α)), (sinh(α), −1)).
Find the trace and determinant of these matrices. Show that for the matrix C(α) the eigenvalues depend on α but the eigenvectors do not. Show that for the matrix D(α) the eigenvalues do not depend on α but the eigenvectors do.
Solution 8. (i) For the matrix A(α) we have det(A(α)) = 1 and tr(A(α)) = 2cos(α). The eigenvalues are given by λ₊ = e^{iα}, λ₋ = e^{−iα} with the corresponding normalized eigenvectors
(1/√2)(1, −i)ᵀ,  (1/√2)(1, i)ᵀ.
Thus the eigenvalues depend on α, but the eigenvectors do not depend on α.
For the matrix B(α) we find that det(B(α)) = −1 and tr(B(α)) = 0. The eigenvalues are given by
λ₊ = +1,  λ₋ = −1
with the corresponding normalized eigenvectors
(cos(α/2), sin(α/2))ᵀ,  (sin(α/2), −cos(α/2))ᵀ
where we utilized the identities
sin(α) = 2 sin(α/2) cos(α/2),  cos(α) = cos²(α/2) − sin²(α/2).
Thus the eigenvalues do not depend on α but the eigenvectors do.
(ii) For the matrix C(α) we have det(C(α)) = 1 and tr(C(α)) = 2cosh(α). The eigenvalues are given by λ₊ = e^α, λ₋ = e^{−α} with the corresponding normalized eigenvectors
(1/√2)(1, 1)ᵀ,  (1/√2)(1, −1)ᵀ.
Thus the eigenvalues depend on α, but the eigenvectors do not depend on α.
For the matrix D(α) we find that det(D(α)) = −1 and tr(D(α)) = 0. The eigenvalues are given by λ₊ = +1, λ₋ = −1 with the corresponding normalized eigenvectors
(1/√cosh(α)) (cosh(α/2), sinh(α/2))ᵀ,  (1/√cosh(α)) (sinh(α/2), −cosh(α/2))ᵀ
where we utilized the identities
sinh(α) = 2 sinh(α/2) cosh(α/2),  cosh(α) = cosh²(α/2) + sinh²(α/2).
Thus the eigenvalues do not depend on α but the eigenvectors do.
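A numerical spot-check of the eigenvalue claims (a sketch assuming NumPy; here D is taken as the traceless symmetric matrix (1/cosh α)((1, sinh α), (sinh α, −1)), an assumption consistent with the stated eigenvalues ±1):

```python
# Eigenvalues of A, B, C, D for an arbitrary test value of alpha.
import numpy as np

a = 0.9
A = np.array([[np.cos(a), -np.sin(a)], [np.sin(a),  np.cos(a)]])
B = np.array([[np.cos(a),  np.sin(a)], [np.sin(a), -np.cos(a)]])
C = np.array([[np.cosh(a), np.sinh(a)], [np.sinh(a), np.cosh(a)]])
D = np.array([[1.0, np.sinh(a)], [np.sinh(a), -1.0]]) / np.cosh(a)

print(np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                  np.sort_complex(np.array([np.exp(1j * a), np.exp(-1j * a)]))))
print(np.allclose(np.linalg.eigvalsh(B), [-1.0, 1.0]))
print(np.allclose(np.linalg.eigvalsh(C), [np.exp(-a), np.exp(a)]))
print(np.allclose(np.linalg.eigvalsh(D), [-1.0, 1.0]))
```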
Problem 9. Let a₁₁, a₂₂ ∈ ℝ and a₁₂ ∈ ℂ. Consider the 2 × 2 hermitian matrix
H = ((a₁₁, a₁₂), (ā₁₂, a₂₂))
with the real eigenvalues λ₊ and λ₋. What conditions are imposed on the matrix elements of H if λ₊ = λ₋?

Solution 9. The eigenvalues are
λ± = (a₁₁ + a₂₂)/2 ± √(a₁₂ā₁₂ + (a₁₁ − a₂₂)²/4).
From λ₊ = λ₋ we obtain
√(a₁₂ā₁₂ + (a₁₁ − a₂₂)²/4) = 0.
Thus a₁₂ā₁₂ + (a₁₁ − a₂₂)²/4 = 0 and therefore a₁₂ = 0, a₁₁ = a₂₂.

Problem 10. (i) Let A = (1/2, 1/2)ᵀ. Find the eigenvalues of the 2 × 2 matrix AAᵀ and the 1 × 1 matrix AᵀA.
(ii) Let x be a nonzero column vector in ℝⁿ. Then xxᵀ is an n × n matrix and xᵀx is a real number. Show that xᵀx is an eigenvalue of xxᵀ and x is the corresponding eigenvector.
(iii) Let x be a nonzero column vector in ℝⁿ and n ≥ 2. Consider the n × n matrix xxᵀ. Find one nonzero eigenvalue and the corresponding eigenvector of this matrix.
Solution 10. (i) We have
AᵀA = (1/2).
Thus the eigenvalues of the 2 × 2 matrix AAᵀ are 1/2 and 0. The eigenvalue of the 1 × 1 matrix AᵀA is 1/2. Thus the nonzero eigenvalues are the same.
(ii) Since matrix multiplication is associative we have (xxᵀ)x = x(xᵀx) = (xᵀx)x.
(iii) Since matrix multiplication is associative we have (xxᵀ)x = x(xᵀx) = (xᵀx)x. Thus x is an eigenvector and xᵀx is the nonzero eigenvalue.
Problem 11. Consider the Pauli spin matrix σ₁ with the eigenvalues +1 and −1 and the corresponding normalized eigenvectors
(1/√2)(1, 1)ᵀ,  (1/√2)(1, −1)ᵀ.
With the two normalized eigenvectors we form the unitary matrix
U = (1/√2) ((1, 1), (1, −1)).
Find the eigenvalues and normalized eigenvectors of U.

Solution 11. The eigenvalues of U are given by +1 and −1 with the corresponding normalized eigenvectors
w₁ = (1/√(4 − 2√2)) (1, √2 − 1)ᵀ,  w₂ = (1/√(4 + 2√2)) (1, −1 − √2)ᵀ.
Problem 12. Let σ₁, σ₂, σ₃ be the Pauli spin matrices. Find the eigenvalues and normalized eigenvectors of the 2 × 2 nonnormal matrix
σ₃ + iσ₁.

Solution 12. The eigenvalues are 0 (twice). There is only one (normalized) eigenvector
(1/√2)(1, i)ᵀ.
Problem 13. (i) Find all 2 × 2 matrices over ℝ that admit only one eigenvector.
(ii) Consider the 2 × 2 matrix
A(ε) = ((1, ε), (0, 1)),  ε ∈ ℝ.
Can one find a condition on the parameter ε so that A(ε) has only one eigenvector?

Solution 13. (i) A necessary condition is that the matrices admit only one eigenvalue (twice degenerate). The matrices are
((a, b), (0, a)),  ((a, 0), (b, a))
where b ≠ 0. The eigenvectors are (1, 0)ᵀ and (0, 1)ᵀ, respectively.
(ii) Obviously the eigenvalues are 1 (twice). Thus the eigenvalue equation is
((1, ε), (0, 1)) (x₁, x₂)ᵀ = (x₁, x₂)ᵀ.
If ε ≠ 0 we have x₂ = 0 and therefore we find only one eigenvector. If ε = 0 we find two linearly independent eigenvectors.
Problem 14. Find all 2 × 2 matrices over ℝ which commute with
A = ((0, 1), (0, 0)).
What is the relation between the eigenvectors of these matrices?

Solution 14. We have
((a, b), (c, d)) ((0, 1), (0, 0)) − ((0, 1), (0, 0)) ((a, b), (c, d)) = ((−c, a − d), (0, c)).
Hence c = 0 and a = d. Thus the matrices are of the form
((a, b), (0, a)),  a, b ∈ ℝ.
The eigenvectors of A are x(1, 0)ᵀ, x ∈ ℝ∖{0}. The eigenvectors of ((a, b), (0, a)) are x(1, 0)ᵀ, x ∈ ℝ∖{0} and, when b = 0, x(0, 1)ᵀ, x ∈ ℝ∖{0}. Thus the matrices share the eigenvector x(1, 0)ᵀ, x ∈ ℝ∖{0}. More generally, let A and B be n × n matrices over ℂ which commute, i.e. [A, B] = 0ₙ. Let v be an eigenvector of A corresponding to the eigenvalue λ. Then
A(Bv) = B(Av) = λ(Bv).
Thus Bv is either 0 or Bv is also an eigenvector of A corresponding to λ. Now suppose the eigenspace of A corresponding to λ is one dimensional. Then Bv = μv for some μ. It follows that A and B share one-dimensional eigenspaces.
Problem 15. (i) Let A, B be 2 × 2 matrices over ℝ and vectors x, y in ℝ² such that Ax = y, By = x, xᵀy = 0 and xᵀx = 1, yᵀy = 1. Show that AB and BA have an eigenvalue +1.
(ii) Find all 2 × 2 matrices A, B which satisfy the conditions given in (i). Use
x = (cos(α), sin(α))ᵀ,  y = (−sin(α), cos(α))ᵀ.

Solution 15. (i) We have
B(Ax) = By = x = (BA)x,  A(By) = Ax = y = (AB)y.
Thus the matrices AB and BA have the eigenvalue +1.
(ii) From
Ax = ((a₁₁, a₁₂), (a₂₁, a₂₂)) (cos(α), sin(α))ᵀ = (−sin(α), cos(α))ᵀ
we obtain two equations for the unknowns a₁₁, a₁₂, a₂₁, a₂₂
a₁₁ cos(α) + (a₁₂ + 1) sin(α) = 0,  (a₂₁ − 1) cos(α) + a₂₂ sin(α) = 0.
We solve for a₁₂ and a₂₁. The cases cos(α) = 0, sin(α) ≠ 0 and sin(α) = 0, cos(α) ≠ 0 are obvious. Assume now sin(α) ≠ 0 and cos(α) ≠ 0. Then we obtain
A = ((a₁₁, −1 − a₁₁ cos(α)/sin(α)), (1 − a₂₂ sin(α)/cos(α), a₂₂)).
Similarly for the matrix B.
Problem 16. Let A = (a_jk) be a normal nonsymmetric 3 × 3 matrix over the real numbers. Show that
a = (a₂₃ − a₃₂, a₃₁ − a₁₃, a₁₂ − a₂₁)ᵀ
is an eigenvector of A.

Solution 16. Since A is normal with the underlying field ℝ we have AAᵀ = AᵀA. Since A is nonsymmetric the matrix
S := A − Aᵀ = ((0, α₃, −α₂), (−α₃, 0, α₁), (α₂, −α₁, 0)),
where α₁ := a₂₃ − a₃₂, α₂ := a₃₁ − a₁₃, α₃ := a₁₂ − a₂₁, is skew-symmetric with rank 2. The kernel of S is therefore one-dimensional. We have Sa = 0. Therefore a is an element of the kernel. Since A is normal, we find
AS = A(A − Aᵀ) = A² − AAᵀ = A² − AᵀA = (A − Aᵀ)A = SA.
If we multiply the equation 0 = Sa with A we obtain
0 = A0 = A(Sa) = (AS)a = (SA)a = S(Aa).
Thus besides a, Aa also belongs to the one-dimensional kernel of S. Therefore Aa = λa.
Problem 17. Let c ∈ ℝ and consider the symmetric 3 × 3 matrix
A(c) = ((c, 1, 0), (1, c, 1), (0, 1, c)).
Show that c is an eigenvalue of A(c) and find the corresponding eigenvector. Find the two other eigenvalues and normalized eigenvectors.

Solution 17. From
A(c) (x₁, x₂, x₃)ᵀ = c (x₁, x₂, x₃)ᵀ
we obtain x₂ = 0 and x₃ = −x₁. Thus we have the normalized eigenvector (1/√2)(1, 0, −1)ᵀ. The other eigenvalues are √2 + c and −√2 + c with the corresponding normalized eigenvectors
(1/2)(1, √2, 1)ᵀ  and  (1/2)(1, −√2, 1)ᵀ.

Problem 18. (i) Let a ∈ ℝ. Find the eigenvalues of
A(a) = ((0, a, 0), (a, 0, a), (0, a, 0)).
Do the eigenvalues cross as a function of a?
(ii) Let a ∈ ℝ. Find the eigenvalues and eigenvectors of the 3 × 3 matrix
B(a) = ((−1, a, 0), (a, 0, a), (0, a, 1)).
Discuss the dependence of the eigenvalues and eigenvectors on a.
(iii) Let a ∈ ℝ. Find the eigenvalues of the 4 × 4 symmetric matrix
C(a) = ((−1, a, 0, 0), (a, −1/2, a, 0), (0, a, 1/2, a), (0, 0, a, 1)).
Discuss the eigenvalues λ_j(a) as functions of a.
Solution 18. (i) The eigenvalues are λ₁(a) = −√2 a, λ₂(a) = 0, λ₃(a) = +√2 a. Thus the eigenvalues λ₁(a) and λ₃(a) cross at 0.
(ii) From det(λI₃ − B(a)) = 0 we obtain (−1 − λ)(−λ)(1 − λ) + 2λa² = 0. Thus λ = 0 is an eigenvalue independent of a. This leads to the quadratic equation
λ² = 1 + 2a²
with λ± = ±√(1 + 2a²). We write the eigenvalues in the ordering
λ₀ = −√(1 + 2a²),  λ₁ = 0,  λ₂ = √(1 + 2a²).
For the eigenvalue λ₁ = 0 we find the normalized eigenvector
(1/λ₂)(a, 1, −a)ᵀ.
For the eigenvalue λ₂ = √(1 + 2a²) we obtain the normalized eigenvector
(1/(2λ₂))(λ₂ − 1, 2a, λ₂ + 1)ᵀ.
For the eigenvalue λ₀ = −√(1 + 2a²) we obtain the normalized eigenvector
(1/(2λ₂))(λ₂ + 1, −2a, λ₂ − 1)ᵀ.
(iii) We obtain the characteristic equation
λ⁴ + (−5/4 − 3a²)λ² + a⁴ + 1/4 = 0.
Setting λ² = z with λ = ±√z we obtain
z² + pz + q = 0,  p = −5/4 − 3a²,  q = 1/4 + a⁴.
It follows that
z₁ = (−p + √(p² − 4q))/2,  z₂ = (−p − √(p² − 4q))/2
with p² = 25/16 + 9a⁴ + 15a²/2. Thus
z₁ = (1/2)(5/4 + 3a² + √(9/16 + 15a²/2 + 5a⁴)),
z₂ = (1/2)(5/4 + 3a² − √(9/16 + 15a²/2 + 5a⁴))
and
λ₁(a) = √(z₁(a)),  λ₂(a) = −√(z₁(a)),  λ₃(a) = √(z₂(a)),  λ₄(a) = −√(z₂(a)).
For a = 0 we have λ₁(0) = 1, λ₂(0) = −1, λ₃(0) = 1/2, λ₄(0) = −1/2.
Problem 19. Consider the 3 × 3 matrix A and vector v in ℝ³
A = ((0, 1, 0), (1, 0, 1), (0, 1, 0)),  v = (sin(α), sin(2α), sin(3α))ᵀ
where α ∈ ℝ and α ≠ nπ with n ∈ ℤ. Show that using this vector we can find the eigenvalues and eigenvectors of A. Start off with Av = λv.

Solution 19. We obtain
Av = (sin(2α), sin(α) + sin(3α), sin(2α))ᵀ.
Under the assumption that v is an eigenvector of A for specific α's we find the condition
(sin(2α), sin(α) + sin(3α), sin(2α))ᵀ = λ (sin(α), sin(2α), sin(3α))ᵀ.
Thus we have to solve the three equations
sin(2α) = λ sin(α),  sin(α) + sin(3α) = λ sin(2α),  sin(2α) = λ sin(3α).
We have to study the cases λ = 0 and λ ≠ 0. For λ = 0 we have to solve the two equations
sin(2α) = 0,  sin(2α) cos(α) = 0
with the solution α = π/2. Thus the eigenvalue λ = 0 has the normalized eigenvector
(1/√2)(1, 0, −1)ᵀ.
For λ ≠ 0 we obtain the equation 2 sin(α) = λ² sin(α) after eliminating sin(2α) and sin(3α). Since α ≠ nπ we have sin(α) ≠ 0. Thus λ² = 2 with the eigenvalues λ = −√2 and λ = √2 and the corresponding normalized eigenvectors
(1/2)(1, −√2, 1)ᵀ  and  (1/2)(1, √2, 1)ᵀ.
Can this technique be extended to the 4 × 4 matrix and vector v
A = ((0, 1, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1), (0, 0, 1, 0)),  v = (sin(α), sin(2α), sin(3α), sin(4α))ᵀ ∈ ℝ⁴
and even higher dimensions?
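The extension to the 4 × 4 case can be tested numerically; for the 4 × 4 tridiagonal matrix the appropriate choice turns out to be α = kπ/5 (our choice of test values, assuming NumPy is available):

```python
# For alpha = k*pi/5 the vector (sin(j*alpha))_{j=1..4} is an
# eigenvector of the 4x4 tridiagonal 0/1 matrix with eigenvalue
# 2*cos(alpha).
import numpy as np

A = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)   # 4x4 tridiagonal
checks = []
for k in range(1, 5):
    alpha = k * np.pi / 5
    v = np.sin(alpha * np.arange(1, 5))
    checks.append(np.allclose(A @ v, 2 * np.cos(alpha) * v))
print(checks)   # [True, True, True, True]
```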
Problem 20. Consider the symmetric matrix over ℝ
A = ((2, 1, −1), (1, 1, 0), (−1, 0, 1)).
Find an invertible matrix B such that B⁻¹AB is a diagonal matrix. Construct the matrix B from the normalized eigenvectors of A.

Solution 20. The eigenvalues of A are given by λ₁ = 1, λ₂ = 0, λ₃ = 3. The corresponding normalized eigenvectors are given by
v₁ = (1/√2)(0, 1, 1)ᵀ,  v₂ = (1/√3)(1, −1, 1)ᵀ,  v₃ = (1/√6)(−2, −1, 1)ᵀ.
Hence
B = ((0, 1/√3, −2/√6), (1/√2, −1/√3, −1/√6), (1/√2, 1/√3, 1/√6)).
We have B⁻¹ = Bᵀ.
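A numerical check that B built from the normalized eigenvectors diagonalizes A (a sketch assuming NumPy is available):

```python
# B^T A B should be diag(1, 0, 3) and B should be orthogonal.
import numpy as np

A = np.array([[2.0, 1.0, -1.0], [1.0, 1.0, 0.0], [-1.0, 0.0, 1.0]])
B = np.column_stack([
    np.array([0.0, 1.0, 1.0]) / np.sqrt(2),     # eigenvector for 1
    np.array([1.0, -1.0, 1.0]) / np.sqrt(3),    # eigenvector for 0
    np.array([-2.0, -1.0, 1.0]) / np.sqrt(6),   # eigenvector for 3
])
print(np.allclose(B.T @ A @ B, np.diag([1.0, 0.0, 3.0])))   # True
print(np.allclose(np.linalg.inv(B), B.T))                   # True
```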
Problem 21. Find the eigenvalues of the 3 × 3 symmetric matrix over ℝ
A = ((2, 1, 1), (1, 2, 1), (1, 1, 2))
using the trace and the determinant of the matrix and the information that two eigenvalues are the same.

Solution 21. The trace is the sum of the eigenvalues and the determinant is the product of the eigenvalues. Thus for the given matrix A we have λ₁ + λ₂ + λ₃ = 6, λ₁λ₂λ₃ = 4. Now two eigenvalues are the same, say λ₃ = λ₂. Thus we obtain the two equations
λ₁ + 2λ₂ = 6,  λ₁λ₂² = 4
with the solution λ₁ = 4, λ₂ = λ₃ = 1.
Problem 22. Consider the symmetric matrix
A = ((a₁₁, a₁₂, a₁₃), (a₁₂, a₂₂, a₂₃), (a₁₃, a₂₃, a₃₃))
over ℝ. Write down the characteristic polynomial det(λI₃ − A) and express it using the trace and determinant of A.

Solution 22. We obtain
det(λI₃ − A) = λ³ + λ²(−a₁₁ − a₂₂ − a₃₃)
    + λ(a₁₁a₂₂ + a₁₁a₃₃ + a₂₂a₃₃ − a₁₂² − a₁₃² − a₂₃²)
    + a₁₁a₂₃² + a₂₂a₁₃² + a₃₃a₁₂² − 2a₁₂a₁₃a₂₃ − a₁₁a₂₂a₃₃.
Thus we can write
det(λI₃ − A) = λ³ − tr(A)λ² + λ(a₁₁a₂₂ + a₁₁a₃₃ + a₂₂a₃₃ − a₁₂² − a₁₃² − a₂₃²) − det(A).
If the real numbers λ₁, λ₂, λ₃ are the eigenvalues of A we have
λ₁ + λ₂ + λ₃ = tr(A),
λ₁λ₂λ₃ = det(A),
λ₁λ₂ + λ₁λ₃ + λ₂λ₃ = a₁₁a₂₂ + a₁₁a₃₃ + a₂₂a₃₃ − a₁₂² − a₁₃² − a₂₃².
Problem 23. Consider the quadratic form
7x₁² + 6x₂² + 5x₃² − 4x₁x₂ − 4x₂x₃ + 14x₁ − 8x₂ + 10x₃ + 6 = 0.
Write this equation in matrix form and find the eigenvalues and normalized eigenvectors of the 3 × 3 matrix.

Solution 23. We have
(x₁, x₂, x₃) ((7, −2, 0), (−2, 6, −2), (0, −2, 5)) (x₁, x₂, x₃)ᵀ + 2(7, −4, 5)(x₁, x₂, x₃)ᵀ + 6 = 0.
The eigenvalues of the symmetric 3 × 3 matrix are λ₁ = 3, λ₂ = 6, λ₃ = 9 with the corresponding normalized eigenvectors
(1/3)(1, 2, 2)ᵀ,  (1/3)(2, 1, −2)ᵀ,  (1/3)(2, −2, 1)ᵀ.
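The eigenvalues 3, 6, 9 can be confirmed numerically (a sketch assuming NumPy is available):

```python
# Eigenvalues of the symmetric matrix of the quadratic form.
import numpy as np

A = np.array([[7.0, -2.0, 0.0], [-2.0, 6.0, -2.0], [0.0, -2.0, 5.0]])
print(np.round(np.linalg.eigvalsh(A), 8))   # [3. 6. 9.]

v = np.array([1.0, 2.0, 2.0]) / 3           # candidate eigenvector for 3
print(np.allclose(A @ v, 3 * v))            # True
```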
Problem 24. (i) Let H be an n × n matrix over ℂ. Assume that H is hermitian and unitary. What can be said about the eigenvalues of H?
(ii) An n × n matrix A such that A² = A is called idempotent. What can be said about the eigenvalues of such a matrix?
(iii) An n × n matrix A for which A^p = 0ₙ, where p is a positive integer, is called nilpotent. What can be said about the eigenvalues of such a matrix?
(iv) An n × n matrix A such that A² = Iₙ is called involutory. What can be said about the eigenvalues of such a matrix?

Solution 24. (i) Since H is hermitian the eigenvalues are real. Since H is unitary the eigenvalues lie on the unit circle in the complex plane. Since H is hermitian and unitary we have that λ ∈ {+1, −1}.
(ii) From the eigenvalue equation we obtain A²x = A(Ax) = Aλx = λAx = λ²x. Thus since A² = A we have λ² = λ.
(iii) From the eigenvalue equation Ax = λx we obtain A^p x = λ^p x. Since A^p = 0ₙ we obtain λ^p = 0. Thus all eigenvalues are 0.
(iv) From the eigenvalue equation Ax = λx we obtain A²x = λ²x. Since A² = Iₙ we have λ² = 1. Thus λ = ±1.
Problem 25. (i) Let A be an n × n matrix over ℂ. Show that the eigenvectors corresponding to distinct eigenvalues are linearly independent.
(ii) Show that eigenvectors of a normal matrix M corresponding to distinct eigenvalues are orthogonal.

Solution 25. (i) The proof is by contradiction. Consider the eigenvalue equations
Ax = λ₁x,  Ay = λ₂y,  x, y ≠ 0
with λ₁ ≠ λ₂. Assume that x = cy with c ≠ 0, i.e. we assume that the eigenvectors x and y are linearly dependent. Then from Ax = λ₁x we obtain A(cy) = λ₁(cy). Thus c(Ay) = c(λ₁y). Using Ay = λ₂y we arrive at c(λ₁ − λ₂)y = 0. Since c ≠ 0 and y ≠ 0 we obtain λ₁ = λ₂. Thus we have a contradiction and therefore x and y must be linearly independent.
(ii) Let λ₁, λ₂ be the two different eigenvalues with the corresponding eigenvectors v₁, v₂, respectively. Using that from Mv = λv follows M*v = λ̄v we have
(λ̄₁ − λ̄₂)v₁*v₂ = λ̄₁v₁*v₂ − λ̄₂v₁*v₂ = (Mv₁)*v₂ − v₁*(M*v₂) = 0.
Since λ₁ ≠ λ₂ it follows that v₁*v₂ = 0.
Problem 26. Let A be an n × n hermitian matrix, i.e. A = A*. Assume that all n eigenvalues are different. Then the normalized eigenvectors {v_j : j = 1, …, n} form an orthonormal basis in ℂⁿ. Consider
β := (Ax − μx, Ax − νx) = (Ax − μx)*(Ax − νx)
where ( , ) denotes the scalar product in ℂⁿ and μ, ν are real constants with μ < ν. Show that if no eigenvalue lies between μ and ν, then β ≥ 0.

Solution 26. First note that λ_j ∈ ℝ, since the matrix A is hermitian. We expand x with respect to the basis {v_j : j = 1, …, n}, i.e.
x = Σ_{j=1}^n a_j v_j.
Since Av_j = λ_j v_j and therefore v_j*A = λ_j v_j* we have
Ax = Σ_{j=1}^n λ_j a_j v_j,    x*A = Σ_{k=1}^n ā_k λ_k v_k*.
Since (v_j, v_k) = v_j*v_k = δ_jk we find
β = x*AAx − νx*Ax − μx*Ax + μνx*x
  = Σ_{j=1}^n λ_j²|a_j|² − ν Σ_{j=1}^n λ_j|a_j|² − μ Σ_{j=1}^n λ_j|a_j|² + μν Σ_{j=1}^n |a_j|²
  = Σ_{j=1}^n (λ_j − μ)(λ_j − ν)|a_j|².
If no eigenvalue lies between μ and ν, then (λ_j − μ) and (λ_j − ν) have the same sign for all eigenvalues. Therefore (λ_j − μ)(λ_j − ν) is nonnegative. Thus β ≥ 0.
Problem 27. Let A be an arbitrary n × n matrix over ℂ. Let
H := (A + A*)/2,  S := (A − A*)/(2i).
Let λ be an eigenvalue of A and x be the corresponding normalized eigenvector (column vector).
(i) Show that λ = x*Hx + ix*Sx.
(ii) Show that the real part λ_r of the eigenvalue λ is given by λ_r = x*Hx and the imaginary part λ_i is given by λ_i = x*Sx.

Solution 27. (i) The eigenvalue equation is given by Ax = λx. Then using x*x = 1 we find
λ = x*Ax = x*((A + A*)/2)x + x*((A − A*)/2)x = x*Hx + ix*Sx.
(ii) Note that H and S are hermitian, i.e. H* = H and S* = S. Hence
λ̄ = x*Hx − ix*Sx.
Calculating λ + λ̄ and λ − λ̄ provides the results.
Problem 28. Let λ₁, λ₂ and λ₃ be the eigenvalues of the matrix
A = (…).
Find λ₁² + λ₂² + λ₃² without calculating the eigenvalues of A or A².

Solution 28. From the eigenvalue equation Av = λv we obtain A²v = λ²v. Now
tr(A) = λ₁ + λ₂ + λ₃.
Thus tr(A²) = λ₁² + λ₂² + λ₃². Since the diagonal part of A² is given by (4, 2, 7) we find λ₁² + λ₂² + λ₃² = 13. Thus we only have to calculate the diagonal part of A².
Problem 29. Consider the column vectors u and v in ℝⁿ
u = (cos(θ), cos(2θ), …, cos(nθ))ᵀ,  v = (sin(θ), sin(2θ), …, sin(nθ))ᵀ
where n ≥ 3 and θ = 2π/n.
(i) Calculate uᵀu + vᵀv.
(ii) Calculate uᵀu − vᵀv + 2iuᵀv.
(iii) Calculate the matrix A = uuᵀ − vvᵀ, Au and Av. Discuss.

Solution 29. (i) We obtain
uᵀu + vᵀv = Σ_{j=1}^n (cos²(jθ) + sin²(jθ)) = Σ_{j=1}^n 1 = n.
(ii) We obtain
uᵀu − vᵀv + 2iuᵀv = Σ_{j=1}^n (cos²(jθ) − sin²(jθ) + 2i cos(jθ) sin(jθ)) = Σ_{j=1}^n e^{2ijθ} = 0
since this geometric series has ratio e^{2iθ} ≠ 1 and e^{2inθ} = 1. Thus uᵀu = vᵀv and uᵀv = 0. Using the result of (i) we find uᵀu = vᵀv = n/2.
(iii) Using the result from (ii) we find
Au = (uuᵀ − vvᵀ)u = u(uᵀu) − v(vᵀu) = (n/2)u,
Av = (uuᵀ − vvᵀ)v = u(uᵀv) − v(vᵀv) = −(n/2)v.
Thus we conclude that A has the eigenvalues n/2 and −n/2.
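A numerical check for a concrete n (a sketch assuming NumPy; n = 6 is our test value):

```python
# u and v are eigenvectors of A = u u^T - v v^T with eigenvalues +-n/2.
import numpy as np

n = 6
theta = 2 * np.pi / n
j = np.arange(1, n + 1)
u, v = np.cos(j * theta), np.sin(j * theta)

A = np.outer(u, u) - np.outer(v, v)
print(np.allclose(A @ u,  (n / 2) * u))   # True
print(np.allclose(A @ v, -(n / 2) * v))   # True
```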
Problem 30.
matrix
(i) Let u be a nonzero column vector in Rn. Consider the n x n
A = uuT — uTuIn.
Is u an eigenvector of this matrix? If so what is the eigenvalue?
(ii) Let u, v be nonzero column vectors in Rn and u / v. Consider the n x n
matrix A over R
A =
UUt +
UVt -
VUt -
VVt
Find the nonzero eigenvalues of A and the corresponding eigenvector. Note that
UTV = VTU.
Solution 30.
(i) We have
(uuT — uTu /n)u = u(uTu) — (uTu)u = (uTu)(u — u) = 0.
Thus u is an eigenvector with eigenvalue 0.
Eigenvalues and Eigenvectors
159
(ii) Since
Au = (uTu + v Tv ) u + ( —uTu —v Tu )v,
Av = (uTv + v Tv ) u + ( —uTv —v Tv )v
we have the eigenvalue equation
A( u - v ) = (uTu - v Tv )(u - v)
since uTv = v Tu. Thus n — I of the eigenvalues of A are 0 and the nonzero
eigenvalue is uTu — v Tv with the eigenvector u — v.
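A numerical check (a sketch assuming NumPy; note that A factorizes as (u − v)(u + v)ᵀ, so it has rank one for generic u, v):

```python
# The nonzero eigenvalue of A is u^T u - v^T v with eigenvector u - v.
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(5), rng.standard_normal(5)

A = np.outer(u, u) + np.outer(u, v) - np.outer(v, u) - np.outer(v, v)
lam = u @ u - v @ v
print(np.allclose(A @ (u - v), lam * (u - v)))   # True
print(np.linalg.matrix_rank(A) == 1)             # True
```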
Problem 31. Let A be an n × n matrix over ℂ. We define
r_j := Σ_{k=1, k≠j}^n |a_jk|,  j = 1, …, n.
(i) Show that each eigenvalue λ of A satisfies at least one of the following inequalities
|λ − a_jj| ≤ r_j,  j = 1, …, n.
In other words show that all eigenvalues of A can be found in the union of disks
{z : |z − a_jj| ≤ r_j, j = 1, …, n}.
This is Geršgorin's disk theorem.
(ii) Apply this theorem to the 2 × 2 unitary matrix
A = ((0, i), (−i, 0)).
(iii) Apply this theorem to the 3 × 3 nonnormal matrix
B = ((1, 2, 3), (3, 4, 9), (1, 1, 1)).

Solution 31. (i) From the eigenvalue equation Ax = λx we obtain (λIₙ − A)x = 0. Splitting the matrix A into its diagonal part and nondiagonal part we obtain
(λ − a_jj)x_j = Σ_{k=1, k≠j}^n a_jk x_k,  j = 1, …, n
where x_j is the j-th component of the vector x. Let x_k be the largest component (in absolute value) of the vector x. Then, since |x_j/x_k| ≤ 1 for j ≠ k, we obtain
|λ − a_kk| ≤ Σ_{j=1, j≠k}^n |a_kj| |x_j|/|x_k| ≤ Σ_{j=1, j≠k}^n |a_kj| = r_k.
Thus the eigenvalue λ is contained in the disk {λ : |λ − a_kk| ≤ r_k}.
(ii) Since |i| = |−i| = 1 we obtain r₁ = 1, r₂ = 1. Thus the Geršgorin disks are
R₁ : {z : |z| ≤ 1},  R₂ : {z : |z| ≤ 1}.
Since the matrix is hermitian the eigenvalues are real and therefore we have |x| ≤ 1 in both cases for x real. The eigenvalues of the matrix A are given by 1, −1.
(iii) We obtain r₁ = 5, r₂ = 12, r₃ = 2. The Geršgorin disks are
R₁ : {z : |z − 1| ≤ 5},  R₂ : {z : |z − 4| ≤ 12},  R₃ : {z : |z − 1| ≤ 2}.
The eigenvalues of the matrix B are λ₁ ≈ 7.3067, λ₂,₃ ≈ −0.6533 ± 0.3473i.
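The containment of the eigenvalues of B in the Geršgorin disks can be checked numerically (a sketch assuming NumPy is available):

```python
# Every eigenvalue of B must lie in at least one Gershgorin disk.
import numpy as np

B = np.array([[1.0, 2.0, 3.0], [3.0, 4.0, 9.0], [1.0, 1.0, 1.0]])
r = np.abs(B).sum(axis=1) - np.abs(np.diag(B))   # radii r = [5, 12, 2]

in_disks = [any(abs(lam - B[j, j]) <= r[j] for j in range(3))
            for lam in np.linalg.eigvals(B)]
print(in_disks)   # [True, True, True]
```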
Problem 32. Let A be a normal matrix over ℂ, i.e. A*A = AA*. Show that if x is an eigenvector of A with eigenvalue λ, then x is an eigenvector of A* with eigenvalue λ̄.

Solution 32. Since AA* = A*A and A** = A we have
(Ax)*Ax = x*A*Ax = x*AA*x = (A*x)*A*x.
It follows that
0 = 0*0 = (Ax − λx)*(Ax − λx) = (x*A* − λ̄x*)(Ax − λx)
  = x*AA*x − λx*A*x − λ̄x*Ax + λλ̄x*x
  = (A*x − λ̄x)*(A*x − λ̄x).
We have the eigenvalue equation A*x = λ̄x, which implies that x is an eigenvector of A* corresponding to the eigenvalue λ̄.
Problem 33. (i) Show that an n × n matrix A is singular if and only if at least one eigenvalue is 0.
(ii) Let B be an invertible n × n matrix. Show that if x is an eigenvector of B with eigenvalue λ, then x is an eigenvector of B⁻¹ with eigenvalue λ⁻¹.

Solution 33. (i) From the eigenvalue equation Ax = λx we obtain det(A − λIₙ) = 0. Thus if λ = 0 we obtain det(A) = 0. It follows that A is singular, i.e. A⁻¹ does not exist.
(ii) From Bx = λx we obtain B⁻¹Bx = B⁻¹(λx). Thus x = λB⁻¹x. Since det(B) ≠ 0 all eigenvalues must be nonzero. Therefore
B⁻¹x = (1/λ)x.
Problem 34. Let A be an n × n matrix over ℝ. Show that A and A^T have
the same eigenvalues.

Solution 34. Let λ be an eigenvalue of A. Then we have

0 = det(A - λI_n) = det((A^T)^T - λI_n^T) = det((A^T - λI_n)^T) = det(A^T - λI_n)

since for any n × n matrix B we have det(B) = det(B^T).
Problem 35. Let A be an n × n real symmetric matrix and

Q(x) := x^T A x.

The following statements hold (maximum principle).
1) λ_1 = max_{||x||=1} Q(x) = Q(x_1) is the largest eigenvalue of the matrix A and
x_1 is the eigenvector corresponding to eigenvalue λ_1.
2) (inductive statement) Let λ_k = max Q(x) subject to the constraints
a) x^T x_j = 0, j = 1, ..., k - 1,
b) ||x|| = 1.
Then λ_k = Q(x_k) is the kth eigenvalue of A, λ_1 ≥ λ_2 ≥ ··· ≥ λ_k, and x_k is the
corresponding eigenvector of A.
Apply the maximum principle to the symmetric 2 × 2 matrix

A = ( 3  1 )
    ( 1  3 ).

Solution 35. We find Q(x) = 3x_1² + 2x_1x_2 + 3x_2². We change to polar coordinates
x_1 = cos(θ), x_2 = sin(θ). Note that 2cos(θ)sin(θ) = sin(2θ). Thus

Q(θ) = 3 + sin(2θ).

Clearly max(Q(θ)) = 4 which occurs at θ = π/4 and min(Q(θ)) = 2 at θ = -π/4.
It follows that λ_1 = 4 with x_1 = (1, 1)^T and λ_2 = 2 with x_2 = (1, -1)^T.
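The maximum principle can be checked by scanning the unit circle. The matrix entries are assumed from the quadratic form Q(x) = 3x_1² + 2x_1x_2 + 3x_2² given in the solution:

```python
import numpy as np

# Q(x) = 3 x1^2 + 2 x1 x2 + 3 x2^2 corresponds to this symmetric matrix.
A = np.array([[3.0, 1.0], [1.0, 3.0]])
theta = np.linspace(0.0, 2.0 * np.pi, 200001)
X = np.vstack((np.cos(theta), np.sin(theta)))   # unit vectors on the circle
Q = np.sum(X * (A @ X), axis=0)                 # Q(theta) = 3 + sin(2 theta)
qmax, qmin = Q.max(), Q.min()                   # approx 4 and 2
eigs = np.linalg.eigvalsh(A)                    # exactly [2, 4]
```

The maximum over the unit circle agrees with the largest eigenvalue, the minimum with the smallest.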
Problem 36. Let A be an n × n matrix. An n × n matrix can have at most
n linearly independent eigenvectors. Now assume that A has n + 1 eigenvectors
(at least one must be linearly dependent) such that any n of them are linearly
independent. Show that A is a scalar multiple of the identity matrix I_n.

Solution 36. Let x_1, ..., x_{n+1} be the eigenvectors of A with eigenvalues
λ_1, ..., λ_{n+1}. Since x_1, ..., x_n are linearly independent, they span the vector
space. Consequently

x_{n+1} = Σ_{i=1}^n α_i x_i.     (1)

Multiplying equation (1) with λ_{n+1} we have

λ_{n+1} x_{n+1} = Σ_{i=1}^n α_i λ_{n+1} x_i.

Applying A to equation (1) yields

λ_{n+1} x_{n+1} = Σ_{i=1}^n α_i λ_i x_i  ⟹  Σ_{i=1}^n α_i λ_{n+1} x_i = Σ_{i=1}^n α_i λ_i x_i.

It follows that α_i λ_{n+1} = α_i λ_i for all i = 1, ..., n. If α_i = 0 for some i, then
x_{n+1} can be expressed as a linear combination of x_1, ..., x_{i-1}, x_{i+1}, ..., x_n,
contradicting the linear independence. Therefore α_i ≠ 0 for all i, so that λ_{n+1} =
λ_i for all i. This implies A = λ_{n+1} I_n.
Problem 37. An n × n stochastic matrix P satisfies the following conditions:

p_ij ≥ 0   for all i, j = 1, 2, ..., n

and

Σ_{j=1}^n p_ij = 1   for all i = 1, 2, ..., n.

Show that a stochastic matrix always has at least one eigenvalue equal to one.
Solution 37. Consider the eigenvalue equation Px = x for the eigenvalue 1, i.e.

( p_11  p_12  ...  p_1n ) ( x_1 )   ( x_1 )
( p_21  p_22  ...  p_2n ) ( x_2 ) = ( x_2 )
( ...                   ) ( ... )   ( ... )
( p_n1  p_n2  ...  p_nn ) ( x_n )   ( x_n ).

From the second condition (each row sums to one) we find the eigenvector
x = (1 1 ... 1)^T. Thus 1 is an eigenvalue.

Problem 38.
The matrix difference equation

p(t + 1) = Mp(t),   t = 0, 1, 2, ...

with the column vector (vector of probabilities)

p(t) = (p_1(t), p_2(t), ..., p_n(t))^T

and the n × n matrix

M = ( 1-w    0.5w   0      ...   0      0.5w )
    ( 0.5w   1-w    0.5w   ...   0      0    )
    ( 0      0.5w   1-w    ...   0      0    )
    ( .....................................  )
    ( 0.5w   0      0      ...   0.5w   1-w  )

plays a role in random walk in one dimension. M is called the transition probability
matrix and w denotes the probability w ∈ [0, 1] that at a given time step
the particle jumps to either of its nearest neighbor sites; the probability that
the particle does not jump either to the right or to the left is (1 - w).
The matrix M is of the type known as circulant matrix. Such an n × n matrix is of the form

C = ( c_0      c_1      c_2   ...  c_{n-1} )
    ( c_{n-1}  c_0      c_1   ...  c_{n-2} )
    ( c_{n-2}  c_{n-1}  c_0   ...  c_{n-3} )
    ( .....................................)
    ( c_1      c_2      c_3   ...  c_0     )

with the normalized eigenvectors

e_j = (1/√n) ( 1, e^{2πij/n}, e^{4πij/n}, ..., e^{2(n-1)πij/n} )^T,   j = 1, ..., n.
(i) Use this result to find the eigenvalues of the matrix C.
(ii) Use (i) to find the eigenvalues of the matrix M.
(iii) Use (ii) to find p(t) (t = 0, 1, 2, ...), where we expand the initial distribution
vector p(0) in terms of the eigenvectors

p(0) = Σ_{k=1}^n a_k e_k   with   Σ_{j=1}^n p_j(0) = 1.

(iv) Assume that p(0) = (1/n)(1 1 ... 1)^T. Give the time evolution of p(0).

Solution 38. (i) Calculating C e_j provides the eigenvalues

λ_j = c_0 + c_1 exp(2πij/n) + c_2 exp(4πij/n) + ··· + c_{n-1} exp(2(n-1)πij/n).

(ii) We have c_0 = 1 - w, c_1 = 0.5w, c_{n-1} = 0.5w and c_2 = c_3 = ··· = c_{n-2} = 0.
Using e^{2πij} = 1 and the identity cos(a) = (e^{ia} + e^{-ia})/2 we obtain as eigenvalues
of M

λ_j = (1 - w) + w cos(2πj/n),   j = 1, ..., n.

(iii) Since p(0) is a probability vector we have p_j(0) ≥ 0 and Σ_{j=1}^n p_j(0) = 1.
The expansion coefficients are given by

a_k = (1/n) Σ_{j=1}^n p_j(0) exp(-2πi(j-1)k/n).

Thus we obtain

p(t) = Σ_{k=1}^n a_k λ_k^t e_k,   t = 0, 1, 2, ....

(iv) Since Mp(0) = p(0) we find that the probability vector p(0) is a fixed point.
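A short NumPy check of (ii) and (iv), building the circulant transition matrix directly from its defining entries:

```python
import numpy as np

n, w = 8, 0.3
# Circulant M: c_0 = 1-w on the diagonal, c_1 = c_{n-1} = w/2 on the wrapped
# super- and subdiagonals.
M = (1 - w) * np.eye(n) \
    + 0.5 * w * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
formula = (1 - w) + w * np.cos(2 * np.pi * np.arange(n) / n)
match = np.allclose(np.sort(np.linalg.eigvalsh(M)), np.sort(formula))

p0 = np.full(n, 1.0 / n)          # uniform initial distribution
fixed = np.allclose(M @ p0, p0)   # p(0) is a fixed point
```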
Problem 39. Let n be a positive integer. Consider the 3 × 3 matrix with rows
of elements summing to unity

M = (1/n) ( n-a-b   a        b     )
          ( a       n-2a-c   a+c   )
          ( c       a        n-a-c )

where the values of a, b, c are such that 0 < a, 0 < b, a + b < n, 2a + c < n.
Thus the matrix is a stochastic matrix. Find the eigenvalues of M.

Solution 39. A stochastic matrix always has an eigenvalue λ_1 = 1 with the
corresponding eigenvector u_1 = (1 1 1)^T. The other two eigenvalues are

λ_2 = 1 - (a + b + c)/n,   λ_3 = 1 - (3a + c)/n.

If b = 2a, the eigenvalues λ_2 and λ_3 are the same.
Problem 40. Let U be a unitary matrix and x an eigenvector of U with the
corresponding eigenvalue λ, i.e. Ux = λx.
(i) Show that U*x = λ̄x.
(ii) Let λ, μ be distinct eigenvalues of a unitary matrix U with the corresponding
eigenvectors x and y, i.e. Ux = λx, Uy = μy. Show that x*y = 0.

Solution 40. (i) From Ux = λx we obtain U*Ux = λU*x. Therefore x = λU*x.
Finally U*x = λ̄x since λλ̄ = 1. Now λ can be written as e^{iα} with α ∈ ℝ, so
we have λ̄ = e^{-iα}.
(ii) Using the result from (i) we have

μ x*y = x*(μy) = x*Uy = (U*x)*y = (λ̄x)*y = λ x*y.

Thus (μ - λ)x*y = 0. Since μ and λ are distinct it follows that x*y = 0.
Problem 41. Let H, H_0, V be n × n matrices over ℂ and H = H_0 + V. Let
z ∈ ℂ and assume that z is chosen so that (H_0 - zI_n)⁻¹ and (H - zI_n)⁻¹ exist.
Show that

(H - zI_n)⁻¹ = (H_0 - zI_n)⁻¹ - (H_0 - zI_n)⁻¹ V (H - zI_n)⁻¹.

This is called the second resolvent identity.

Solution 41. We have

H_0 + V = H
⟹ H_0 = H - V
⟹ (H_0 - zI_n) = (H - zI_n) - V
⟹ (H_0 - zI_n)⁻¹(H_0 - zI_n) = (H_0 - zI_n)⁻¹(H - zI_n) - (H_0 - zI_n)⁻¹V
⟹ I_n = (H_0 - zI_n)⁻¹(H - zI_n) - (H_0 - zI_n)⁻¹V
⟹ (H - zI_n)⁻¹ = (H_0 - zI_n)⁻¹ - (H_0 - zI_n)⁻¹ V (H - zI_n)⁻¹

where the last step follows on multiplying by (H - zI_n)⁻¹ from the right.
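The identity is easy to confirm numerically for random matrices and a generic complex z (the specific matrices here are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
H0 = rng.standard_normal((n, n))
V = rng.standard_normal((n, n))
H = H0 + V
z = 0.5 + 1.0j                     # generic point where both resolvents exist
I = np.eye(n)

R0 = np.linalg.inv(H0 - z * I)     # resolvent of H0
R = np.linalg.inv(H - z * I)       # resolvent of H
# Second resolvent identity: R = R0 - R0 V R.
holds = np.allclose(R, R0 - R0 @ V @ R)
```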
Problem 42. An n × n matrix A is called a Hadamard matrix if each entry of
A is 1 or -1 and if the rows or columns of A are orthogonal, i.e.

AA^T = nI_n   or   A^T A = nI_n.

Note that AA^T = nI_n and A^T A = nI_n are equivalent. Hadamard matrices H_n
of order 2^n can be generated recursively by defining

H_1 = ( 1  1 )       H_n = ( H_{n-1}   H_{n-1} )
      ( 1 -1 ),             ( H_{n-1}  -H_{n-1} )

for n ≥ 2. Show that the eigenvalues of H_n are given by +2^{n/2} and -2^{n/2} each
of multiplicity 2^{n-1}.

Solution 42. We use induction on n. The case n = 1 is obvious. Now for
n ≥ 2 we have

det(λI - H_n) = det((λI - H_{n-1})(λI + H_{n-1}) - H²_{n-1}).

Thus

det(λI - H_n) = det(λ²I - 2H²_{n-1}) = det(λI - √2 H_{n-1}) det(λI + √2 H_{n-1}).

This shows that each eigenvalue μ of H_{n-1} generates two eigenvalues ±√2 μ
of H_n. The assertion then follows by the induction hypothesis, for H_{n-1} has
eigenvalues +2^{(n-1)/2} and -2^{(n-1)/2} each of multiplicity 2^{n-2}.
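The recursion and the spectrum can be verified for the first few orders (a sketch using the Sylvester construction described above):

```python
import numpy as np

def hadamard(n):
    # Sylvester construction: H_1 = [[1,1],[1,-1]], H_n = [[H,H],[H,-H]].
    H = np.array([[1.0, 1.0], [1.0, -1.0]])
    for _ in range(n - 1):
        H = np.block([[H, H], [H, -H]])
    return H

results = []
for n in (1, 2, 3, 4):
    H = hadamard(n)
    m = 2 ** n
    orth = np.allclose(H @ H.T, m * np.eye(m))      # Hadamard property
    eigs = np.sort(np.linalg.eigvalsh(H))           # H is symmetric here
    # eigenvalues -2^(n/2) and +2^(n/2), each with multiplicity 2^(n-1)
    spec = (np.allclose(eigs[: m // 2], -2 ** (n / 2))
            and np.allclose(eigs[m // 2:], 2 ** (n / 2)))
    results.append(orth and spec)
```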
Problem 43. An n × n matrix A over the complex numbers is called positive
semidefinite (written as A ≥ 0), if

x*Ax ≥ 0   for all x ∈ ℂⁿ.

Show that for every A ≥ 0, there exists a unique B ≥ 0 so that B² = A. Thus
B is a square root of A.

Solution 43. Let A = U* diag(λ_1, ..., λ_n) U, where U is unitary. We take

B = U* diag(λ_1^{1/2}, ..., λ_n^{1/2}) U.

Then the matrix B is positive semidefinite and B² = A since U*U = I_n. To
show the uniqueness, suppose that C is an n × n positive semidefinite matrix
satisfying C² = A. Since the eigenvalues of C are the nonnegative square roots
of the eigenvalues of A, we can write

C = V diag(λ_1^{1/2}, ..., λ_n^{1/2}) V*

for some unitary matrix V. Then the identity C² = A = B² yields

T diag(λ_1, ..., λ_n) = diag(λ_1, ..., λ_n) T

where T = UV. This yields t_jk λ_k = λ_j t_jk. Thus t_jk λ_k^{1/2} = λ_j^{1/2} t_jk. Hence

T diag(λ_1^{1/2}, ..., λ_n^{1/2}) = diag(λ_1^{1/2}, ..., λ_n^{1/2}) T.

Since T = UV it follows that B = C.
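The construction of the square root via the spectral decomposition can be sketched directly (random positive semidefinite test matrix, our choice):

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((4, 4))
A = C.T @ C                          # positive semidefinite by construction

lam, U = np.linalg.eigh(A)           # A = U diag(lam) U^T
# B = U diag(sqrt(lam)) U^T; clip guards tiny negative rounding errors
B = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T

symmetric = np.allclose(B, B.T)
psd = bool(np.all(np.linalg.eigvalsh(B) >= -1e-10))
squares = np.allclose(B @ B, A)      # B^2 = A
```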
Problem 44. (i) Consider the polynomial p(x) = x² - sx + d, s, d ∈ ℂ. Find
a 2 × 2 matrix A such that its characteristic polynomial is p.
(ii) Consider the polynomial q(x) = -x³ + sx² - qx + d, s, q, d ∈ ℂ. Find a 3 × 3
matrix B such that its characteristic polynomial is q.

Solution 44. (i) We obtain

A = ( s   d )
    ( -1  0 ).

(ii) We obtain

B = ( s   q  d )
    ( -1  0  0 )
    ( 0  -1  0 ).

Problem 45. Calculate the eigenvalues of the 4 × 4 matrix
A = ( 1  0  0  1 )
    ( 0  1  1  0 )
    ( 0  1 -1  0 )
    ( 1  0  0 -1 )

by calculating the eigenvalues of A².

Solution 45. The matrix A is symmetric over ℝ. Thus the eigenvalues of A
are real. Now A² is the diagonal matrix

A² = diag(2, 2, 2, 2)

with eigenvalue 2 (four times). Since tr(A) = 0 we obtain √2, √2, -√2, -√2 as
the eigenvalues of A.
Problem 46. If {A_j} is a commuting family of matrices, that is to say
A_jA_k = A_kA_j for every pair from the set, then there exists a unitary matrix
V such that for all A_j in the set the matrix Ã_j = V*A_jV is upper triangular.
Apply this to the matrices

A_1 = ( 1  1 )       A_2 = (  1 -1 )
      ( 1  1 ),             ( -1  1 ).

Solution 46. The eigenvalues of A_1 are 0 and 2 with the normalized eigenvectors

(1/√2)( 1 -1 )^T,   (1/√2)( 1 1 )^T.

The eigenvalues of A_2 are 0 and 2 with the normalized eigenvectors

(1/√2)( 1 1 )^T,   (1/√2)( 1 -1 )^T.

Thus the sets of eigenvectors are the same. The unitary matrix V is given by

V = (1/√2)(  1  1 )
           ( -1  1 )

with

Ã_1 = V*A_1V = ( 0 0 )       Ã_2 = V*A_2V = ( 2 0 )
               ( 0 2 ),                      ( 0 0 ).
Problem 47. Let A be an n × n matrix over ℂ. The spectral radius of the
matrix A is the non-negative number defined by

ρ(A) := max{ |λ_j(A)| : 1 ≤ j ≤ n }

where λ_j(A) are the eigenvalues of A. We define the norm of A as

||A|| := sup_{||x||=1} ||Ax||

where ||Ax|| denotes the Euclidean norm of the vector Ax.
(i) Show that ρ(A) ≤ ||A||.
(ii) Consider the 2 × 2 matrix

A = ( 1/4  1/2 )
    ( 1/2  1/4 ).

If ρ(A) < 1, then (I_2 - A)⁻¹ = I_2 + A + A² + ···. Calculate
(I_2 - A)⁻¹. Calculate (I_2 - A)(I_2 + A + A² + ··· + A^k).

Solution 47. (i) From the eigenvalue equation Ax = λx (x ≠ 0) we obtain

|λ| ||x|| = ||λx|| = ||Ax|| ≤ ||A|| ||x||.

Therefore since ||x|| ≠ 0 we obtain |λ| ≤ ||A|| and ρ(A) ≤ ||A||.
(ii) Since A is symmetric over ℝ the eigenvalues are real. We find λ_1 = -1/4
and λ_2 = 3/4. Thus ρ(A) < 1. We obtain

(I_2 - A)⁻¹ = (1/5)( 12  8 )
                   (  8 12 ).

We have (I_2 - A)(I_2 + A + A² + ··· + A^k) = I_2 - A^{k+1}.
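A numerical sketch of the Neumann series. The entries of A are assumed here (a symmetric matrix with the eigenvalues -1/4 and 3/4 stated in the solution; the printed matrix is illegible in this scan):

```python
import numpy as np

# Assumed entries consistent with eigenvalues -1/4 and 3/4.
A = np.array([[0.25, 0.5], [0.5, 0.25]])
rho = max(abs(np.linalg.eigvalsh(A)))           # spectral radius 3/4 < 1
I = np.eye(2)

S = np.zeros((2, 2))
P = np.eye(2)
for _ in range(200):                            # partial sum I + A + ... + A^199
    S = S + P
    P = P @ A
series_ok = np.allclose(S, np.linalg.inv(I - A))

# Telescoping identity (I - A)(I + A + ... + A^k) = I - A^{k+1}
k = 5
T = sum(np.linalg.matrix_power(A, j) for j in range(k + 1))
telescope_ok = np.allclose((I - A) @ T, I - np.linalg.matrix_power(A, k + 1))
```

Because ρ(A) < 1, the partial sums converge geometrically to (I_2 - A)⁻¹.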
Problem 48. Let A be an n × n matrix with entries a_jk ≥ 0 and with positive
spectral radius ρ. Then there is a (column) vector x with x_j > 0 and a (column)
vector y such that the following conditions hold

Ax = ρx,   y^T A = ρy^T,   y^T x = 1.

Consider the 2 × 2 symmetric matrix

B = ( 2  1 )
    ( 1  2 ).

Show that B has a positive spectral radius. Find the vectors x and y.

Solution 48. The eigenvalues of B are 3 and 1. Thus the spectral radius is
ρ = 3 and the vectors are

x = (1/√2)( 1 1 )^T,   y = (1/√2)( 1 1 )^T.
Problem 49. Consider a symmetric 2 × 2 matrix A over ℝ with a_11 > 0,
a_22 > 0, a_12 < 0 and a_jj > |a_12| for j = 1, 2. Is the matrix A positive definite?

Solution 49. We show that the matrix A is positive definite by calculating
the eigenvalues. The eigenvalues of A are given by

λ_+ = (1/2)( √((a_11 - a_22)² + 4a_12²) + a_11 + a_22 ),
λ_- = -(1/2)( √((a_11 - a_22)² + 4a_12²) - a_11 - a_22 ).

Indeed λ_- > 0 is equivalent to (a_11 + a_22)² > (a_11 - a_22)² + 4a_12², i.e. to
a_11a_22 > a_12², which holds since a_jj > |a_12|. Thus both eigenvalues are
positive and therefore the matrix is positive definite.
Problem 50. Let A be a positive definite n × n matrix. Show that A⁻¹ exists
and is also positive definite.

Solution 50. Since A is positive definite the eigenvalues are all real and
positive; we use the ordering 0 < λ_1 ≤ λ_2 ≤ ··· ≤ λ_n. In particular det(A) ≠ 0,
so A⁻¹ exists. The inverse A⁻¹ has the real and positive eigenvalues

0 < λ_n⁻¹ ≤ λ_{n-1}⁻¹ ≤ ··· ≤ λ_1⁻¹.

Thus A⁻¹ is also positive definite.
Problem 51. Find the eigenvalues of the symmetric matrices

A_3 = ( 0 1 1 )    A_4 = ( 0 1 0 1 )    A_5 = ( 0 1 0 0 1 )
      ( 1 0 1 )          ( 1 0 1 0 )          ( 1 0 1 0 0 )
      ( 1 1 0 ),         ( 0 1 0 1 )          ( 0 1 0 1 0 )
                         ( 1 0 1 0 ),         ( 0 0 1 0 1 )
                                              ( 1 0 0 1 0 ).

Solution 51. Since the matrices are symmetric over ℝ the eigenvalues must
be real. Since the trace for all three matrices is 0 the sum of the eigenvalues
must be 0. For A_3 we obtain the eigenvalues -1 (twice) and 2. For A_4 we obtain
the eigenvalues -2, 0 (twice) and +2. For A_5 we obtain

-(1 + √5)/2 (twice),   (√5 - 1)/2 (twice),   +2.

Extend the result to n dimensions. The largest eigenvalue of A_n is +2.
Problem 52. Consider the n × n cyclic matrix

A = ( a_11      a_12      a_13   ...  a_1n     )
    ( a_1n      a_11      a_12   ...  a_1,n-1  )
    ( a_1,n-1   a_1n      a_11   ...  a_1,n-2  )
    ( ........................................ )
    ( a_12      a_13      a_14   ...  a_11     )

where a_jk ∈ ℝ. Show that

v = (1/√n) ( 1, ε^{2k}, ε^{4k}, ..., ε^{2(n-1)k} )^T,   ε := e^{iπ/n},   1 ≤ k ≤ n

is a normalized eigenvector of A. Find the eigenvalues.
Solution 52. We have

(1/n) Σ_{j=1}^n 1 = 1.

Thus the vector is normalized. Applying A to the vector yields

Av = ( a_11 + ε^{2k} a_12 + ε^{4k} a_13 + ··· + ε^{2(n-1)k} a_1n ) v.

Thus we have an eigenvalue equation with the n eigenvalues

a_11 + ε^{2k} a_12 + ε^{4k} a_13 + ··· + ε^{2(n-1)k} a_1n,   k = 1, ..., n.
Problem 53. (i) Let a, b ∈ ℝ. Find on inspection two eigenvectors and the
corresponding eigenvalues of the symmetric 4 × 4 matrix

A = ( a 0 0 b )
    ( 0 a 0 b )
    ( 0 0 a b )
    ( b b b 0 ).

(ii) Let a, b ∈ ℝ. Find on inspection two eigenvectors and the corresponding
eigenvalues of the 4 × 4 matrix

B = ( a 0  0 b )
    ( 0 a  0 b )
    ( 0 0 -a b )
    ( b b  b 0 ).

Solution 53. (i) Obviously (1 -1 0 0)^T, (1 0 -1 0)^T are eigenvectors
with the corresponding eigenvalues a and a.
(ii) Obviously (1 -1 0 0)^T is an eigenvector with the corresponding eigenvalue a.
Problem 54. Consider the two 4 × 4 permutation matrices

S = ( 0 0 1 0 )    T = ( 1 0 0 0 )
    ( 0 1 0 0 )        ( 0 0 0 1 )
    ( 1 0 0 0 )        ( 0 0 1 0 )
    ( 0 0 0 1 ),       ( 0 1 0 0 ).

Show that the two matrices have the same (normalized) eigenvectors. Find the
commutator [S, T].
Solution 54. We find the normalized eigenvectors

v_1 = (1/√2)( 1 0 1 0 )^T,   v_2 = (1/√2)( 0 1 0 1 )^T,
v_3 = (1/√2)( 1 0 -1 0 )^T,   v_4 = (1/√2)( 0 1 0 -1 )^T.

Since S and T admit a common set of eigenvectors, applying the spectral theorem
we find [S, T] = 0_4.
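The two permutation matrices (as reconstructed here: S swaps components 1 and 3, T swaps components 2 and 4, an assumption consistent with the solution) can be checked to commute and to share an eigenbasis:

```python
import numpy as np

S = np.array([[0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]], dtype=float)
T = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]], dtype=float)
commute = np.allclose(S @ T, T @ S)              # [S, T] = 0_4

# One common orthonormal eigenbasis (a valid choice within the degenerate spaces):
V = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
              [1, 0, -1, 0], [0, 1, 0, -1]], dtype=float).T / np.sqrt(2)
# For each unit vector v, P v must equal (v . P v) v for both P = S and P = T.
shared = all(np.allclose(P @ v, (v @ (P @ v)) * v) for v in V.T for P in (S, T))
```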
Problem 55. Let a ∈ ℝ. Consider the symmetric 4 × 4 matrix

A(a) = ( 1   a   0   0 )
       ( a   2   2a  0 )
       ( 0   2a  3   a )
       ( 0   0   a   4 ).

(i) Find the characteristic equation.
(ii) Show that

λ_1 + λ_2 + λ_3 + λ_4 = 10
λ_1λ_2 + λ_1λ_3 + λ_1λ_4 + λ_2λ_3 + λ_2λ_4 + λ_3λ_4 = 35 - 6a²
λ_1λ_2λ_3 + λ_1λ_2λ_4 + λ_1λ_3λ_4 + λ_2λ_3λ_4 = 50 - 30a²
λ_1λ_2λ_3λ_4 = 24 - 30a² + a⁴

where λ_1, λ_2, λ_3, λ_4 denote the eigenvalues.

Solution 55. (i) From det(A(a) - λI_4) = 0 we find

λ⁴ - 10λ³ + (35 - 6a²)λ² + (-50 + 30a²)λ + 24 - 30a² + a⁴ = 0.
(ii) From

det( diag(λ_1 - λ, λ_2 - λ, λ_3 - λ, λ_4 - λ) ) = 0

we obtain

λ⁴ - λ³(λ_1 + λ_2 + λ_3 + λ_4) + λ²(λ_1λ_2 + λ_1λ_3 + λ_1λ_4 + λ_2λ_3 + λ_2λ_4 + λ_3λ_4)
  - λ(λ_1λ_2λ_3 + λ_1λ_2λ_4 + λ_1λ_3λ_4 + λ_2λ_3λ_4) + λ_1λ_2λ_3λ_4 = 0.

Thus the result given above follows.
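The four coefficient identities can be checked numerically for any value of a, comparing the elementary symmetric functions of the computed eigenvalues against the stated polynomials:

```python
import numpy as np
from itertools import combinations

a = 0.7
A = np.array([[1, a, 0, 0],
              [a, 2, 2 * a, 0],
              [0, 2 * a, 3, a],
              [0, 0, a, 4]])
lam = np.linalg.eigvalsh(A)

def esym(vals, k):
    # k-th elementary symmetric function of the eigenvalues
    return sum(np.prod(c) for c in combinations(vals, k))

ok = (np.isclose(esym(lam, 1), 10.0)
      and np.isclose(esym(lam, 2), 35 - 6 * a ** 2)
      and np.isclose(esym(lam, 3), 50 - 30 * a ** 2)
      and np.isclose(esym(lam, 4), 24 - 30 * a ** 2 + a ** 4))
```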
Problem 56. Let B be a 2 × 2 matrix with eigenvalues λ_1 and λ_2. Find the
eigenvalues of the 4 × 4 matrix

X = ( 0     0     1  0 )
    ( 0     0     0  1 )
    ( b_11  b_12  0  0 )
    ( b_21  b_22  0  0 ).

Let v be an eigenvector of B with eigenvalue λ. What can be said about an
eigenvector of the 4 × 4 matrix X given by the eigenvector v and an eigenvalue of B?
Solution 56. The characteristic equation det(X - τI_4) = 0 is given by

τ⁴ - τ²(b_11 + b_22) + (b_11b_22 - b_12b_21) = 0

or τ⁴ - τ² tr(B) + det(B) = 0. Thus since tr(B) = λ_1 + λ_2 and det(B) = λ_1λ_2
we have

τ⁴ - τ²(λ_1 + λ_2) + λ_1λ_2 = 0.

This quartic equation can be reduced to a quadratic equation. We find the
eigenvalues

τ_1 = √λ_1,   τ_2 = -√λ_1,   τ_3 = √λ_2,   τ_4 = -√λ_2.

Let v be an eigenvector of B with eigenvalue λ, i.e. Bv = λv. Then we have

X ( v_1, v_2, √λ v_1, √λ v_2 )^T = √λ ( v_1, v_2, √λ v_1, √λ v_2 )^T.

Thus ( v_1  v_2  √λv_1  √λv_2 )^T is an eigenvector of X with eigenvalue √λ.
Problem 57. Let A, B be two n × n matrices over ℂ. The set of all matrices
of the form A - λB with λ ∈ ℂ is said to be a pencil. The eigenvalues of the
pencil are elements of the set λ(A, B) defined by

λ(A, B) := { z ∈ ℂ : det(A - zB) = 0 }.

If λ ∈ λ(A, B) and Av = λBv, v ≠ 0, then v is referred to as an eigenvector of
A - λB. Note that λ(A, B) may be finite, empty or infinite. Let

A = ( 0 0 0 1 )    B = (1/√2)( 1  0  0  1 )
    ( 0 0 1 0 )              ( 0  1  1  0 )
    ( 0 1 0 0 )              ( 0  1 -1  0 )
    ( 1 0 0 0 ),             ( 1  0  0 -1 ).

Find the eigenvalues of the pencil.

Solution 57. Since B is invertible we have λ(A, B) = λ(B⁻¹A, I_4) = λ(B⁻¹A).
Problem 58. Let A be an n × n matrix with eigenvalues λ_1, ..., λ_n. Let
c ∈ ℂ \ {0}. What are the eigenvalues of cA?

Solution 58. We set B = cA. Then

det(B - μI_n) = det(cA - μI_n) = cⁿ det(A - (μ/c)I_n).

Thus det(A - (μ/c)I_n) = 0 and therefore the eigenvalues of cA are cλ_1, ..., cλ_n.
Problem 59. Let A, B be n × n matrices over ℂ. Let α, β ∈ ℂ. Assume
that A² = I_n and B² = I_n and AB + BA = 0_n. What can be said about the
eigenvalues of αA + βB?

Solution 59. From the eigenvalue equation (αA + βB)u = λu we obtain

(αA + βB)²u = λ(αA + βB)u = λ²u.

Thus

(α²A² + β²B² + αβ(AB + BA))u = λ²u

or (α² + β²)I_n u = λ²u. Hence the eigenvalues can only be λ = ±√(α² + β²).
Problem 60. (i) Let A, B be n × n matrices. Show that AB and BA have
the same eigenvalues.
(ii) Can we conclude that every eigenvector of AB is also an eigenvector of BA?
Solution 60. (i) If B is nonsingular, then B⁻¹ exists and BA is similar to

B⁻¹(BA)B = AB

and the result follows since two similar matrices have the same eigenvalues. The
same argument holds if A is nonsingular. If both A and B are singular, then 0
is an eigenvalue of B. Let δ be the modulus of the nonzero eigenvalue of B of
smallest modulus. If 0 < ε < δ, then B + εI is nonsingular and the eigenvalues of
A(B + εI) are the same as those of (B + εI)A. As ε → 0 both sets of eigenvalues
converge to the eigenvalues of AB and BA, respectively.
(ii) No. Consider for example

A = ( 1 1 )    B = ( 0 1 )
    ( 0 0 ),       ( 1 0 )

⟹  AB = ( 1 1 )    BA = ( 0 0 )
         ( 0 0 ),        ( 1 1 )

and (1 0)^T is an eigenvector of AB, but not of BA.
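Both parts can be illustrated numerically. The first check uses random matrices; the 2 × 2 counterexample matrices below are an assumed choice (the printed matrices are garbled in this scan) that realizes the claim of part (ii):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
# (i) AB and BA always share the same spectrum.
same_spectrum = np.allclose(np.sort_complex(np.linalg.eigvals(A @ B)),
                            np.sort_complex(np.linalg.eigvals(B @ A)))

# (ii) ... but not necessarily the same eigenvectors (assumed example matrices):
A2 = np.array([[1.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0, 1.0], [1.0, 0.0]])
e1 = np.array([1.0, 0.0])
eig_of_AB = np.allclose((A2 @ B2) @ e1, e1)       # eigenvector, eigenvalue 1
w = (B2 @ A2) @ e1                                 # equals (0, 1)^T
not_eig_of_BA = np.linalg.matrix_rank(np.vstack((e1, w))) == 2  # not parallel
```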
Problem 61. (i) Show that if A is an n × m matrix and if B is an m × n
matrix, then λ ≠ 0 is an eigenvalue of the n × n matrix AB if and only if λ is an
eigenvalue of the m × m matrix BA. Show that if m = n then the conclusion is
true even for λ = 0.
(ii) Let M be an m × n matrix (m < n) over ℝ. Show that at least one eigenvalue
of the n × n matrix M^T M is equal to 0. Show that the eigenvalues of the m × m
matrix MM^T are also eigenvalues of M^T M.

Solution 61. (i) If λ ≠ 0 is an eigenvalue of AB and if v is an eigenvector
of AB, then ABv = λv ≠ 0 since λ ≠ 0 and v ≠ 0. Thus Bv ≠ 0. Therefore
BABv = λBv and λ is an eigenvalue of BA. If A and B are square matrices
and λ = 0 is an eigenvalue of AB with eigenvector v, then we have det(AB) =
det(BA) = 0. Thus λ = 0 is an eigenvalue of BA.
(ii) Since rank(M^T M) ≤ rank(M) ≤ m < n, fewer than n eigenvalues of M^T M
are nonzero, i.e. at least one eigenvalue is 0. From the eigenvalue equation
MM^T x = λx we obtain M^T M(M^T x) = λ(M^T x).
Problem 62. We know that a hermitian matrix has only real eigenvalues.
Can we conclude that a matrix with only real eigenvalues is hermitian?
Solution 62. We cannot conclude that a matrix with real eigenvalues is hermitian.
For example the nonnormal matrix

A = ( 0 1 0 )
    ( 1 0 1 )
    ( 0 2 0 )

admits the eigenvalues √3, -√3, 0.
Problem 63. Let A be an n × n matrix over ℂ. Show that the eigenvalues of
A*A are nonnegative.
Solution 63. Let v be a normalized eigenvector of A*A, i.e. A*Av = λv.
Then

0 ≤ ||Av||² = v*A*Av = λv*v = λ.

Thus λ ≥ 0.
Problem 64. Let n be a positive integer. Consider the 2 × 2 matrix

T_n = ( 2n  4n² - 1 )
      ( 1   2n      )   ⟹   det(T_n) = 1.

Show that the eigenvalues of T_n are real and not of absolute value 1.

Solution 64. The eigenvalues are λ_± = 2n ± √(4n² - 1). Thus the eigenvalues
are real and not of absolute value 1 since n is a positive integer.
Problem 65. Let L_n be the n × n matrix

L_n = ( n-1  -1   ...  -1  )
      ( -1   n-1  ...  -1  )
      ( .................. )
      ( -1   -1   ...  n-1 ).

Find the eigenvalues.

Solution 65. The matrix L_n has a single eigenvalue at 0. All the other
eigenvalues are equal to n. For example, for n = 3 we have the matrix

L_3 = (  2 -1 -1 )
      ( -1  2 -1 )
      ( -1 -1  2 )

with the eigenvalues 0 and 3 (twice). We can write L_n = nI_n - J_n, where J_n
is the n × n matrix with all entries +1. Obviously, J_n has rank 1 and therefore
only one nonzero eigenvalue, namely n, the trace of J_n. Thus the eigenvalues of
L_n are n - n = 0 and n - 0 = n (n - 1 times degenerate).
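The rank-one argument is immediate to verify: L_n annihilates the all-ones vector and acts as n on its orthogonal complement:

```python
import numpy as np

n = 6
J = np.ones((n, n))                 # rank-one matrix with trace n
L = n * np.eye(n) - J               # diagonal n-1, off-diagonal -1
eigs = np.sort(np.linalg.eigvalsh(L))
kernel_ok = np.allclose(L @ np.ones(n), 0.0)           # eigenvalue 0
spectrum_ok = np.isclose(eigs[0], 0.0) and np.allclose(eigs[1:], n)
```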
Problem 66. Let A be an n × n matrix over ℂ. Assume that A² = -I_n.
What can be said about the eigenvalues of A?

Solution 66. From the eigenvalue equation Av = λv we obtain

A²v = λAv  ⟹  -v = λ²v  ⟹  (λ² + 1)v = 0.

Thus the eigenvalues can only be ±i.
Problem 67. Consider the symmetric 6 × 6 matrix over ℝ

A = ( 0  1  1  1  1  1 )
    ( 1  0  1 -1 -1  1 )
    ( 1  1  0  1 -1 -1 )
    ( 1 -1  1  0  1 -1 )
    ( 1 -1 -1  1  0  1 )
    ( 1  1 -1 -1  1  0 ).

This matrix plays a role in the construction of the icosahedron, which is a regular
polyhedron with 20 identical equilateral triangular faces, 30 edges and 12 vertices.
(i) Find the eigenvalues of this matrix.
(ii) Consider the matrix A + √5 I_6. Find the eigenvalues.
(iii) The matrix A + √5 I_6 induces an Euclidean structure on the quotient space
ℝ⁶/ker(A + √5 I_6). Find the dimension of ker(A + √5 I_6).

Solution 67. (i) The matrix is symmetric over ℝ. Thus the eigenvalues must
all be real. The trace is 0, which is the sum of the eigenvalues. The rank of the
matrix is 6. Thus the matrix is invertible and all eigenvalues are nonzero.
Now A² = 5I_6. Thus the eigenvalues are √5 (three-fold) and -√5 (three-fold).
(ii) From (i) we obtain that the eigenvalues of A + √5 I_6 are 0 (three-fold) and
2√5 (three-fold).
(iii) The kernel of A + √5 I_6 has dimension 3. Thus the quotient space is
isomorphic to ℝ³.
Problem 68. Let A be an n × n matrix over ℂ. Let λ be an eigenvalue of
A. A generalized eigenvector x ∈ ℂⁿ of A corresponding to the eigenvalue λ is
a nontrivial solution of

(A - λI_n)^j x = 0_n

for some j ∈ {1, 2, ...}, where 0_n is the n-dimensional zero vector. For j = 1
we find the eigenvectors. It follows that x is a generalized eigenvector of A
corresponding to λ if and only if

(A - λI_n)^n x = 0_n.

Find the eigenvectors and generalized eigenvectors of the nonnormal matrix

A = ( 0  1  0 )
    ( 0  0  0 )
    ( 0 -1  0 ).

Solution 68. The eigenvalues are all 0. The eigenvectors follow from solving
Ax = 0 for x_1, x_2 and x_3. Thus x_2 = 0. The set of eigenvectors is given by

{ ( u  0  v )^T : (u, v) ∈ ℂ² \ (0, 0) }.

The generalized eigenvectors follow from solving A²x = 0 for x_1, x_2 and x_3.
Since A² = 0_3 this equation is satisfied for all x_1, x_2 and x_3. The set of
generalized eigenvectors is given by ℂ³ \ (0, 0, 0).
Problem 69. Consider the 2n × 2n matrix

J := ( 0_n   I_n )
     ( -I_n  0_n )  ∈ ℝ^{2n×2n}.

A matrix S ∈ ℝ^{2n×2n} is called a symplectic matrix if S^T JS = J.
(i) Show that symplectic matrices are nonsingular.
(ii) Show that the product of two symplectic matrices S_1 and S_2 is also symplectic.
(iii) Show that if S is symplectic, S⁻¹ and S^T are also symplectic.
(iv) Let S be a symplectic matrix. Show that if λ ∈ σ(S), then λ⁻¹ ∈ σ(S),
where σ(S) denotes the spectrum of S.

Solution 69. (i) Since det(J) ≠ 0, det(S) = det(S^T) and

det(S^T JS) = det(S^T) det(J) det(S) = det(J)(det(S))²

we find that det(S) ≠ 0. We have S⁻¹ = JS^T J^T.
(ii) Let S_1, S_2 be symplectic matrices, i.e. S_1^T JS_1 = J, S_2^T JS_2 = J. Now

(S_1S_2)^T J(S_1S_2) = S_2^T (S_1^T JS_1)S_2 = S_2^T JS_2 = J.

(iii) From S^T JS = J we have (S^T JS)⁻¹ = J⁻¹. Since J⁻¹ = -J it follows that
S⁻¹J(S⁻¹)^T = J, i.e. (S⁻¹)^T is symplectic; hence so are S⁻¹ and S^T.
(iv) We note that J^T = J⁻¹ = -J. From S^T JS = J we obtain JS = (S^T)⁻¹J,
so JSJ⁻¹ = (S⁻¹)^T. Thus S is similar to (S⁻¹)^T, which has the eigenvalues λ⁻¹
for λ ∈ σ(S). Hence λ ∈ σ(S) implies λ⁻¹ ∈ σ(S).
Problem 70. Let A, B be n × n matrices over ℂ and v a nonzero vector in
ℂⁿ. Assume that [A, B] = A and Av = λv. Find (AB)v.

Solution 70. We have the identity AB = BA + [A, B]. Thus

(AB)v = (BA + [A, B])v = BAv + [A, B]v = λBv + λv = λ(B + I_n)v.
Problem 71. Let A, B be n × n matrices over ℂ with [A, B] = 0_n. Then
[A + cI_n, B + cI_n] = 0_n, where c ∈ ℂ. Let v be an eigenvector of the n × n
matrix A with eigenvalue λ. Show that v is also an eigenvector of A + cI_n,
where c ∈ ℂ.

Solution 71. We have (A + cI_n)v = Av + cI_n v = λv + cv = (λ + c)v. Thus
v is an eigenvector of A + cI_n with eigenvalue λ + c.
Problem 72. Let A be an n × n normal matrix over ℂ. How would one apply
genetic algorithms to find the eigenvalues of A? This means we have to construct
a fitness function f with the minima as the eigenvalues. The eigenvalue equation
is given by Ax = zx (z ∈ ℂ and x ∈ ℂⁿ with x ≠ 0). The characteristic equation
is p(z) = det(A - zI_n) = 0. What would be a fitness function? Apply it to the
matrices

Solution 72. Let z = x + iy (x, y ∈ ℝ). A fitness function would be

f(x, y) = |det(A - zI_n)|²

which we have to minimize, where f(x, y) ≥ 0 for all x, y ∈ ℝ.
The minimum of f is reached for (x = 1, y = 0) with f(x = 1, y = 0) = 0 and
(x = -1, y = 0) with f(x = -1, y = 0) = 0. Obviously since the matrix B is
hermitian we can simplify the problem right at the beginning by setting y = 0
since the eigenvalues of a hermitian matrix are real. Thus the function we have
to minimize would be g(x) = x⁴ - 2x² + 1.
The matrix C is skew-hermitian. Thus the eigenvalues are purely imaginary, i.e.
we can set x = 0 and the fitness function is f(y) = (-y² + 1)².
For the matrix D we have p(z) = -z³ + z² + z - 1. The matrix is hermitian and
so we can set y = 0 and the fitness function is f(x) = (-x³ + x² + x - 1)².
Problem 73. Let v be a nonzero column vector in ℝⁿ. Matrix multiplication
is associative. Then we have (vv^T)v = v(v^T v). Discuss.

Solution 73. This is an eigenvalue equation with the (positive semidefinite)
n × n matrix given by

vv^T = ( v_1²     v_1v_2   ...  v_1v_n )
       ( v_2v_1   v_2²     ...  v_2v_n )
       ( ............................. )
       ( v_nv_1   v_nv_2   ...  v_n²   ).

The eigenvector is v and the positive eigenvalue is v^T v = Σ_{j=1}^n v_j².

Problem 74. Let n ≥ 2 and even. Consider an n × n hermitian matrix A.
Thus the eigenvalues are real. Assume we have the information that if λ is an
eigenvalue then -λ is also an eigenvalue of A. How can the calculation of the
eigenvalues be simplified with this information?
Solution 74. The characteristic equation is

λⁿ + c_{n-1}λ^{n-1} + c_{n-2}λ^{n-2} + ··· + c_2λ² + c_1λ + c_0 = 0.

The substitution λ → -λ yields (with n even)

λⁿ - c_{n-1}λ^{n-1} + c_{n-2}λ^{n-2} - ··· + c_2λ² - c_1λ + c_0 = 0.

Addition of the two equations provides

λⁿ + c_{n-2}λ^{n-2} + ··· + c_2λ² + c_0 = 0,

an equation in λ² only. Study the problem for n odd. In this case one of the
eigenvalues must be 0.
Problem 75. Let A, B be two nonzero n × n matrices over ℂ. Let Av = λv be
the eigenvalue equation for A. Assume that [A, B] = 0_n. Then from [A, B]v = 0
it follows that

[A, B]v = (AB - BA)v = A(Bv) - B(Av) = 0.

Therefore A(Bv) = λ(Bv). If Bv ≠ 0 we find that Bv is an eigenvector of A
with eigenvalue λ. Apply it to A = σ_1 ⊗ σ_2 and B = σ_3 ⊗ σ_3.

Solution 75. The matrix σ_1 has the eigenvalue +1 with normalized eigenvector
(1/√2)(1 1)^T and σ_2 has the eigenvalue +1 with eigenvector (1/√2)(1 i)^T. Thus

v = (1/2)( 1  i  1  i )^T

is a normalized eigenvector of σ_1 ⊗ σ_2 with eigenvalue +1. Then

(σ_3 ⊗ σ_3)v = (1/2)( 1  -i  -1  i )^T

is an eigenvector of σ_1 ⊗ σ_2.
Problem 76. Let A, B be two nonzero n × n matrices over ℂ. Let Av =
λv be the eigenvalue equation for A. Assume that [A, B]_+ = 0_n. Then from
[A, B]_+v = 0 it follows that

[A, B]_+v = (AB + BA)v = A(Bv) + B(Av) = 0.

Therefore A(Bv) = -λ(Bv). If Bv ≠ 0 we have an eigenvalue equation with
eigenvalue -λ. Apply it to A = σ_1 and B = σ_2, where [σ_1, σ_2]_+ = 0_2.

Solution 76. The matrix σ_1 has the eigenvalue +1 with normalized eigenvector
(1/√2)(1 1)^T and σ_2 admits the eigenvalue +1 with normalized eigenvector
(1/√2)(1 i)^T. It follows that

σ_2 (1/√2)( 1  1 )^T = (1/√2)( -i  i )^T

is an eigenvector of σ_1 with eigenvalue -1.
Problem 77. Consider the real symmetric 3 × 3 matrices

A = ( 0 0 1 )    B = ( 0 1 1 )
    ( 0 1 0 ),       ( 1 0 1 )
    ( 1 0 0 )        ( 1 1 0 ).

(i) Find the eigenvalues and normalized eigenvectors for the matrices A and B.
(ii) Find the commutator [A, B].
(iii) Discuss the results from (ii) and (i) with respect to the eigenvectors.

Solution 77. (i) The eigenvalues of A are λ = -1 and λ = +1 (twice) with
the corresponding normalized eigenvectors

(1/√2)( 1 0 -1 )^T,   (1/√2)( 1 0 1 )^T,   ( 0 1 0 )^T.

For the matrix B the eigenvalues are λ = -1 (twice) and λ = 2 with the
corresponding normalized eigenvectors

(1/√2)( 1 -1 0 )^T,   (1/√6)( 1 1 -2 )^T,   (1/√3)( 1 1 1 )^T.

(ii) The commutator of A and B vanishes, i.e. [A, B] = 0_3.
(iii) The matrices A and B have the normalized eigenvector

(1/√3)( 1 1 1 )^T

in common, which reflects that the commutator of A and B is the zero matrix.
Problem 78. Consider the Pauli spin matrix σ_2 with the eigenvalues +1, -1
and the corresponding normalized eigenvectors

(1/√2)( 1  i )^T,   (1/√2)( 1  -i )^T.

Then σ_2 ⊗ σ_2 admits the eigenvalues +1 (twice) and -1 (twice). Show that
there are product states as eigenvectors and two sets of entangled eigenvectors
for σ_2 ⊗ σ_2.

Solution 78. From the eigenvectors of σ_2 we find the four product states

(1/2)( 1  i  i  -1 )^T,  (1/2)( 1  -i  i  1 )^T,  (1/2)( 1  i  -i  1 )^T,  (1/2)( 1  -i  -i  -1 )^T.

The first set of entangled states (the so-called Bell states) are

(1/√2)( 1 0 0 1 )^T,  (1/√2)( 0 1 1 0 )^T,  (1/√2)( 1 0 0 -1 )^T,  (1/√2)( 0 1 -1 0 )^T.

The other set of fully entangled states are

(1/2)( 1 1 1 -1 )^T,  (1/2)( -1 1 1 1 )^T,  (1/2)( 1 1 -1 1 )^T,  (1/2)( 1 -1 1 1 )^T.
Problem 79. (i) Let A be an invertible n × n matrix. Given the eigenvalues and
eigenvectors of A, what can be said about the eigenvalues of A ⊗ A⁻¹ + A⁻¹ ⊗ A?
(ii) Let c ∈ ℝ and A be a normal n × n matrix. Assume that A is invertible.
The eigenvalue equation is given by Av_j = λ_jv_j for j = 1, ..., n and λ_j ≠ 0 for
all j = 1, ..., n. Find all the eigenvalues and eigenvectors of the matrix

A ⊗ A⁻¹ + c(A⁻¹ ⊗ A).

Find det(A ⊗ A⁻¹) and tr(A ⊗ A⁻¹).

Solution 79. (i) Let u and v be two eigenvectors of A, i.e. Au = λu and
Av = μv. Then

(A ⊗ A⁻¹ + A⁻¹ ⊗ A)(u ⊗ v) = λu ⊗ (1/μ)v + (1/λ)u ⊗ μv = (λ/μ + μ/λ)(u ⊗ v).

Thus the eigenvalue is λ/μ + μ/λ. If u = v we have the eigenvalue 2.
(ii) We have the eigenvalue equation

((A ⊗ A⁻¹) + c(A⁻¹ ⊗ A))(v_j ⊗ v_k) = (λ_j/λ_k + c λ_k/λ_j)(v_j ⊗ v_k).

Furthermore det(A ⊗ A⁻¹) = (det A)ⁿ(det A⁻¹)ⁿ = 1 and tr(A ⊗ A⁻¹) = tr(A) tr(A⁻¹).
Problem 80. Let M be an n × n matrix over ℝ given by

M = D + u ⊗ v^T

where D is a diagonal matrix over ℝ and u, v are column vectors in ℝⁿ. The
eigenvalue equation is given by

Mx = (D + u ⊗ v^T)x = λx.

Assume that Σ_{j=1}^n u_jv_j ≠ 0. Show that

Σ_{k=1}^n u_kv_k/(d_k - λ) + 1 = 0

where d_k are the diagonal elements of D.

Solution 80. From the eigenvalue equation we obtain

(D - λI_n)x + (u ⊗ v^T)x = 0.

Now

u ⊗ v^T = ( u_1v_1   u_1v_2   ...  u_1v_n )
          ( u_2v_1   u_2v_2   ...  u_2v_n )
          ( ............................. )
          ( u_nv_1   u_nv_2   ...  u_nv_n ).

Thus

(d_j - λ)x_j + u_j Σ_{k=1}^n v_kx_k = 0,   j = 1, 2, ..., n.

We set s := Σ_{k=1}^n v_kx_k. It follows that

x_j = s u_j/(λ - d_j).

Multiplying this equation with v_j, summing over j and using the condition
s ≠ 0 yields

Σ_{k=1}^n u_kv_k/(λ - d_k) = 1,   i.e.   Σ_{k=1}^n u_kv_k/(d_k - λ) + 1 = 0.
Problem 81. Let A be an n × n matrix over ℝ with eigenvalues λ_1, ..., λ_n.
Find the eigenvalues of the 2n × 2n matrix

( 0_n   A  )   ( 0   1 )
( -A   0_n ) = ( -1  0 ) ⊗ A.

Is this matrix skew-symmetric over ℝ?

Solution 81. The eigenvalues of the 2 × 2 matrix

( 0   1 )
( -1  0 )

are +i and -i. Thus the eigenvalues of the 2n × 2n matrix are iλ_k, -iλ_k,
k = 1, ..., n. The 2n × 2n matrix is not skew-symmetric over ℝ in general,
as can be seen from the example

( 0      0      a_11  a_12 )
( 0      0      a_21  a_22 )
( -a_11  -a_12  0     0    )
( -a_21  -a_22  0     0    ).
Problem 82. Consider the nonnormal 2×2 matrix
For j ≥ 1, let dⱼ be the greatest common divisor of the entries of Aʲ − I₂. Show that
lim_{j→∞} dⱼ = ∞.
Use the eigenvalues of A and the characteristic polynomial.

Solution 82. We have det(A) = 1 and thus 1 = λ₁λ₂, where λ₁ and λ₂ denote the eigenvalues of A. Now det(A − λI₂) = λ² − 6λ + 1 = 0 and the eigenvalues are given by
λ₁ = 3 + 2√2,  λ₂ = 1/λ₁ = 3 − 2√2.
Therefore there exists an invertible matrix C such that A = CDC⁻¹ with
D = ( λ₁ 0 ; 0 1/λ₁ )
and the entries of the matrix C are in Q(√2). We choose an integer k ≥ 1 such that the entries of kC and kC⁻¹ are in Z[√2]. Then
k²(Aʲ − I₂) = (kC)(Dʲ − I₂)(kC⁻¹).
Thus λ₁ʲ − 1 divides k²dⱼ in Z[√2]. Taking norms, we find that the integer (λ₁ʲ − 1)(λ₂ʲ − 1) divides k⁴dⱼ². However |λ₁| > 1, so
|(λ₁ʲ − 1)(λ₂ʲ − 1)| → ∞
as j → ∞. Hence lim_{j→∞} dⱼ = ∞.
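The growth of dⱼ can be observed numerically. The sketch below assumes, purely for illustration, the nonnormal matrix A = ((3,2),(4,3)), which has determinant 1 and the characteristic polynomial λ² − 6λ + 1 used in the solution; it is not necessarily the matrix of the problem.

```python
from math import gcd

# Hypothetical stand-in matrix: nonnormal, det = 1,
# characteristic polynomial x^2 - 6x + 1 (eigenvalues 3 +- 2*sqrt(2)).
A = [[3, 2], [4, 3]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def d(P):
    # gcd of the entries of P - I_2
    entries = [P[0][0] - 1, P[0][1], P[1][0], P[1][1] - 1]
    g = 0
    for x in entries:
        g = gcd(g, abs(x))
    return g

P = [[1, 0], [0, 1]]
ds = []
for j in range(1, 6):
    P = matmul(P, A)      # P = A^j, exact integer arithmetic
    ds.append(d(P))
print(ds)  # [2, 4, 14, 24, 82] -- the gcds grow without bound
```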
Problem 83. The 2×3 matrix
A = ( 0 1 0 ; 1 0 1 )
is the starting matrix in the construction of the Hironaka curve utilizing the Kronecker product.
(i) Find Aᵀ and the rank of A and Aᵀ.
(ii) Find AAᵀ and the eigenvalues of AAᵀ. Find AᵀA and the eigenvalues of AᵀA. Compare the eigenvalues of AAᵀ and AᵀA. Discuss.
(iii) Calculate A ⊗ A and Aᵀ ⊗ Aᵀ. Calculate the eigenvalues of
(A ⊗ A)(Aᵀ ⊗ Aᵀ),  (Aᵀ ⊗ Aᵀ)(A ⊗ A).
Solution 83. (i) We have
Aᵀ = ( 0 1 ; 1 0 ; 0 1 ).
The rank of A and Aᵀ is two.
(ii) We have
AAᵀ = ( 1 0 ; 0 2 ),  AᵀA = ( 1 0 1 ; 0 1 0 ; 1 0 1 ).
The eigenvalues of AAᵀ are 1 and 2 and the eigenvalues of AᵀA are 0, 1, 2. The eigenvalues of AAᵀ are also eigenvalues of AᵀA, with the additional eigenvalue 0 for AᵀA.
(iii) The following Maxima program

A: matrix([0,1,0],[1,0,1]);
AT: transpose(A);
KA: kronecker_product(A,A);
KAT: kronecker_product(AT,AT);
PKAKAT: KA . KAT;
eigenvalues(PKAKAT);
PKATKA: KAT . KA;
eigenvalues(PKATKA);

will do the job. Note that . is matrix multiplication in Maxima. The eigenvalues of (A ⊗ A)(Aᵀ ⊗ Aᵀ) are 1, 2 (twice), 4. The eigenvalues of (Aᵀ ⊗ Aᵀ)(A ⊗ A) are 1, 2 (twice), 4 and 0 (5 times).
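The eigenvalues in (iii) also follow by hand from the mixed-product property (A ⊗ A)(Aᵀ ⊗ Aᵀ) = (AAᵀ) ⊗ (AAᵀ): since AAᵀ = diag(1,2), the product is diag(1,2,2,4). A short Python sketch checking this identity with exact integer arithmetic:

```python
# Check (A (x) A)(A^T (x) A^T) = (A A^T) (x) (A A^T) for A = [[0,1,0],[1,0,1]].
A = [[0, 1, 0], [1, 0, 1]]

def transpose(X):
    return [list(r) for r in zip(*X)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def kron(X, Y):
    # Kronecker product of an m x n matrix X with a p x q matrix Y
    p, q = len(Y), len(Y[0])
    return [[X[i // p][j // q] * Y[i % p][j % q]
             for j in range(len(X[0]) * q)] for i in range(len(X) * p)]

AT = transpose(A)
lhs = matmul(kron(A, A), kron(AT, AT))
rhs = kron(matmul(A, AT), matmul(A, AT))
print(lhs == rhs)                      # True (mixed-product property)
print(matmul(A, AT))                   # [[1, 0], [0, 2]]
print([lhs[i][i] for i in range(4)])   # diagonal entries 1, 2, 2, 4
```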
Problem 84. Consider the unitary 4×4 matrix
U = ( 0 0 −1 0 ; 0 0 0 −1 ; 1 0 0 0 ; 0 1 0 0 ) = −iσ₂ ⊗ I₂.
(i) Find the eigenvalues and normalized eigenvectors of U.
(ii) Can the normalized eigenvectors of U be written as a Kronecker product of two normalized vectors in C²? If so they are not entangled.
Solution 84. (i) For the eigenvalues we find i (twice) and −i (twice) with the corresponding normalized eigenvectors
u₁ = (1/√2)(1, 0, −i, 0)ᵀ, u₂ = (1/√2)(0, 1, 0, −i)ᵀ for the eigenvalue i,
u₃ = (1/√2)(1, 0, i, 0)ᵀ, u₄ = (1/√2)(0, 1, 0, i)ᵀ for the eigenvalue −i.
(ii) For the eigenvalue i we have the eigenvectors
(1, 0, −i, 0)ᵀ = (1, −i)ᵀ ⊗ (1, 0)ᵀ,  (0, 1, 0, −i)ᵀ = (1, −i)ᵀ ⊗ (0, 1)ᵀ.
For the eigenvalue −i we have
(1, 0, i, 0)ᵀ = (1, i)ᵀ ⊗ (1, 0)ᵀ,  (0, 1, 0, i)ᵀ = (1, i)ᵀ ⊗ (0, 1)ᵀ.
Thus the eigenvectors can be written as Kronecker products of vectors in C², i.e. they are not entangled.
Problem 85. Consider the Hilbert space Cⁿ. Let A, B, C be n×n matrices acting in Cⁿ. We consider the nonlinear eigenvalue problem
Av = λBv + λ²Cv
where v ∈ Cⁿ and v ≠ 0. Let σ₁, σ₂, σ₃ be the Pauli spin matrices. Find the solutions of the nonlinear eigenvalue problem
σ₁v = λσ₂v + λ²σ₃v
where v ∈ C² and v ≠ 0.

Solution 85. A nontrivial solution requires det(σ₁ − λσ₂ − λ²σ₃) = 0. Since det(aσ₁ + bσ₂ + cσ₃) = −(a² + b² + c²), this yields λ⁴ + λ² + 1 = 0. We obtain the eigenvalues
λ₁ = (1 + i√3)/2,  λ₂ = −λ₁,  λ₃ = (1 − i√3)/2,  λ₄ = −λ₃.
The corresponding eigenvectors are
vⱼ = ( 1 + iλⱼ ; λⱼ² ),  j = 1, 2, 3, 4.
Problem 86. Let A be an n×n symmetric matrix over R. Since A is symmetric over R there exists a set of orthonormal eigenvectors v₁, v₂, ..., vₙ which form an orthonormal basis in Rⁿ. Let x ∈ Rⁿ be a reasonably good approximation to an eigenvector, say v₁. Calculate
R_Q = xᵀAx / xᵀx.
The quotient is called the Rayleigh quotient. Discuss.

Solution 86. We can write
x = c₁v₁ + c₂v₂ + ... + cₙvₙ,  cⱼ ∈ R.
Then, since Avⱼ = λⱼvⱼ, j = 1, ..., n and using vⱼᵀvₖ = 0 if j ≠ k, we find
R_Q = xᵀAx / xᵀx
= (c₁v₁ + ... + cₙvₙ)ᵀA(c₁v₁ + ... + cₙvₙ) / (c₁v₁ + ... + cₙvₙ)ᵀ(c₁v₁ + ... + cₙvₙ)
= (c₁v₁ + ... + cₙvₙ)ᵀ(c₁λ₁v₁ + ... + cₙλₙvₙ) / (c₁² + c₂² + ... + cₙ²)
= (λ₁c₁² + λ₂c₂² + ... + λₙcₙ²) / (c₁² + c₂² + ... + cₙ²)
= λ₁ (1 + (λ₂/λ₁)(c₂/c₁)² + ... + (λₙ/λ₁)(cₙ/c₁)²) / (1 + (c₂/c₁)² + ... + (cₙ/c₁)²).
Since x is a good approximation to v₁, the coefficient c₁ is larger than the other cⱼ, j = 2, 3, ..., n. Thus the expression in the parentheses is close to 1, which means that R_Q is close to λ₁.
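A brief numerical illustration (the matrix and the trial vector below are chosen for illustration, not taken from the text): the symmetric matrix A has largest eigenvalue 2 + √2 with eigenvector (1, −√2, 1)ᵀ, and a nearby trial vector already gives a Rayleigh quotient close to λ₁.

```python
from math import sqrt

# Illustrative symmetric matrix: largest eigenvalue 2 + sqrt(2),
# corresponding eigenvector (1, -sqrt(2), 1)^T.
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]

def rayleigh(A, x):
    # R_Q = x^T A x / x^T x
    Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    return sum(x[i] * Ax[i] for i in range(3)) / sum(xi * xi for xi in x)

lam1 = 2.0 + sqrt(2.0)
x = [1.0, -sqrt(2.0) + 0.1, 1.05]   # rough approximation to v1
print(rayleigh(A, x), lam1)         # the quotient is already close to lam1
```

The error is second order in the perturbation of the eigenvector, which is why the Rayleigh quotient is a good eigenvalue estimator.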
Supplementary Problems

Problem 1. (i) Let θ ∈ [0, π/2). Find the trace, determinant, eigenvalues and eigenvectors of the 2×2 matrices
M₁ = (1/2) ( tan²(θ) −tan(θ) ; −tan(θ) 1 ),  M₂ = (1/2) ( tan²(θ) tan(θ) ; tan(θ) 1 ).
(ii) Find the eigenvalues and normalized eigenvectors of the matrix (φ ∈ [0, 2π))
A(φ) = (1/√2) ( 1 e^{iφ} ; e^{−iφ} −1 ).
Is the matrix invertible? Make the decision by looking at the eigenvalues. If so find the inverse matrix.
(iii) Let ε ∈ R. Find the eigenvalues and eigenvectors of
A(ε) = (1/√(1+ε²)) ( 1 ε ; ε 1 ).
For which ε is A(ε) not invertible? Let ε ∈ [0, 1]. Consider the 2×2 matrix
B(ε) = (1/√(1+ε²)) ( 1 ε ; ε −1 ).
For ε = 0 we have the Pauli spin matrix σ₃ and for ε = 1 we have the Hadamard matrix. Find the eigenvalues and eigenvectors of B(ε).
(iv) Let ε ∈ R. Find the eigenvalues λ₊(ε), λ₋(ε) and normalized eigenvectors of the matrix. Find the shortest distance between λ₊(ε) and λ₋(ε).
(v) Let θ ∈ R. Show that the eigenvalues of the hermitian matrix
( 1 cos(θ) ; cos(θ) 1 )
are given by λ± = 1 ± cos(θ) and thus cannot be negative.
(vi) Let τ := (1 + √5)/2 be the golden ratio. Consider the modular 2×2 matrix
M = ( 1 1 ; 1 0 ).
Show that the eigenvalues are given by τ and −τ⁻¹, where τ⁻¹ = (√5 − 1)/2, and the normalized eigenvectors are
v₁ = (1/√(1+τ²)) ( τ ; 1 ),  v₂ = (1/√(1+τ²)) ( 1 ; −τ ).
(vii) Consider the two 2×2 matrices A₊ and A₋. Show that det(A±) = 1, tr(A±) = −1, A±⁻¹ = A∓, A±³ = I₂ and that the eigenvalues of the two matrices are the same, namely e^{2πi/3} and e^{−2πi/3}.
Problem 2. (i) The symmetric 3×3 matrix over R
A = ( 0 1 1 ; 1 0 1 ; 1 1 0 )
plays a role for the chemical compounds ZnS and NaCl. Find the eigenvalues and eigenvectors of A. Then find the inverse of A. Find all v such that Av = v.
(ii) Let a, b, c ∈ R. Find the eigenvalues and eigenvectors of the symmetric matrix
( 0 a b ; a 0 c ; b c 0 ).
The characteristic equation is λ³ − (a² + b² + c²)λ − 2abc = 0.
Problem 3. (i) Find the eigenvalues and eigenvectors of the orthogonal matrices
R(α) = ( 1 0 0 ; 0 cos(α) sin(α) ; 0 −sin(α) cos(α) ),  S(β) = ( cos(β) sin(β) 0 ; −sin(β) cos(β) 0 ; 0 0 1 ).
(ii) Find the eigenvalues and eigenvectors of R(α)S(β).
Problem 4. (i) Find the eigenvalues and eigenvectors of the 3×3 matrices
( 0 0 a₁₃ ; 1 0 a₂₃ ; 0 1 a₃₃ ),  ( 0 a₁₂ 0 ; a₂₁ 0 a₂₃ ; 0 a₃₂ a₃₃ ).
(ii) Let A be a normal 2×2 matrix over C with eigenvalues λ₁, λ₂ and corresponding eigenvectors v₁, v₂, respectively. Let c ∈ C. What can be said about the eigenvalues and eigenvectors of the 3×3 matrices
B₁ = ( c 0 0 ; 0 a₁₁ a₁₂ ; 0 a₂₁ a₂₂ ),  B₂ = ( a₁₁ 0 a₁₂ ; 0 c 0 ; a₂₁ 0 a₂₂ ),  B₃ = ( a₁₁ a₁₂ 0 ; a₂₁ a₂₂ 0 ; 0 0 c )?
(iii) Let φ ∈ R. Find the eigenvalues and eigenvectors of the 3×3 matrices X(φ) and Y(φ).
(iv) Consider the 3×3 matrix A. Show that the characteristic polynomial is given by p(λ) = λ³ − λ² − 1. Find the eigenvalues. Calculate A ⊗ A, A ⊗ A ⊗ A, A ⊗ A ⊗ A ⊗ A. Then identify 1 with a black square and 0 with a white square. Draw the pictures.
(v) Let m > 0 and θ ∈ R. Consider the three 3×3 matrices
M₁(θ) = m ( 0 sin(θ) 0 ; sin(θ) 0 cos(θ) ; 0 cos(θ) 0 ),
M₂(θ) = m ( 0 0 sin(θ) ; 0 0 cos(θ) ; sin(θ) cos(θ) 0 ),
M₃(θ) = m ( 0 sin(θ) cos(θ) ; sin(θ) 0 0 ; cos(θ) 0 0 ).
Find the eigenvalues and eigenvectors of the matrices. These matrices play a role for the Majorana neutrino.
Problem 5. Let ε ∈ R.
(i) Find the eigenvalues and eigenvectors of the matrix
( 1 0 0 ε ; 1 1 0 0 ; 1 1 1 0 ; 1 1 1 1 ).
Extend to n×n matrices.
(ii) Find the eigenvalues and eigenvectors of the matrix
( 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ; ε 0 0 0 ).
Extend to the n×n case.
Problem 6. (i) Consider the 5×4 matrix
A = ( 0 1 0 1 ; 1 0 1 0 ; 1 0 1 0 ; 0 1 0 1 ; 0 1 0 1 ).
Find the eigenvalues of the matrices AᵀA and AAᵀ by only calculating the eigenvalues of AᵀA.
(ii) Let us say we want to calculate the eigenvalues of the 4×4 matrix
B = ( 1 0 1 0 ; 0 1 0 1 ; 1 0 1 0 ; 0 1 0 1 ).
How could we utilize the facts that
B (1, 0, 1, 0)ᵀ = 2 (1, 0, 1, 0)ᵀ,  B (0, 1, 0, 1)ᵀ = 2 (0, 1, 0, 1)ᵀ
and tr(B) = 4 and thus avoid solving det(B − λI₄) = 0?
Problem 7. Let A be an n×n diagonalizable matrix with distinct eigenvalues λⱼ (j = 1, ..., n). Show that the Vandermonde matrix
V = ( 1 1 ... 1 ; λ₁ λ₂ ... λₙ ; λ₁² λ₂² ... λₙ² ; ... ; λ₁ⁿ⁻¹ λ₂ⁿ⁻¹ ... λₙⁿ⁻¹ )
is nonsingular.
Problem 8. (i) Let σ₁, σ₂, σ₃ be the Pauli spin matrices. Solve the nonlinear eigenvalue problem
σ₁u = λσ₂u + λ²σ₃u
where u ∈ C² and u ≠ 0.
(ii) Solve the nonlinear eigenvalue problem
(σ₁ ⊗ σ₁)v = λ(σ₂ ⊗ σ₂)v + λ²(σ₃ ⊗ σ₃)v
where v ∈ C⁴ and v ≠ 0. Compare the result to (i). Discuss.
Problem 9. Let A, B, C, D, E, F, G, H be 2×2 matrices over C. We define the star product
( A B ; C D ) ★ ( E F ; G H ) := ( E 0₂ 0₂ F ; 0₂ A B 0₂ ; 0₂ C D 0₂ ; G 0₂ 0₂ H ).
Thus the right-hand side is an 8×8 matrix. Assume we know the eigenvalues and eigenvectors of the two 4×4 matrices on the left-hand side. What can be said about the eigenvalues and eigenvectors of the 8×8 matrix on the right-hand side?
Problem 10. Consider a 2×2 matrix A = (aⱼₖ) over R with
Σ_{j,k=1}² aⱼₖ = 1,  tr(A) = 0.
What can be said about the eigenvalues of such a matrix?
Problem 11. Consider the Hadamard matrix
U_H = (1/√2) ( 1 1 ; 1 −1 ).
The eigenvalues of the Hadamard matrix are given by +1 and −1 with the corresponding normalized eigenvectors
(1/√8) ( √(4+2√2) ; √(4−2√2) ),  (1/√8) ( √(4−2√2) ; −√(4+2√2) ).
How can this information be used to find the eigenvalues and eigenvectors of the Bell matrix
B = (1/√2) ( 1 0 0 1 ; 0 1 1 0 ; 0 1 −1 0 ; 1 0 0 −1 )?
Problem 12. (i) Let U be an n×n unitary matrix. Assume that U* = −U, i.e. the matrix is also skew-hermitian. Find the eigenvalues of such a matrix.
(ii) Let V be an n×n unitary matrix. What can be concluded about the eigenvalues of V if V* = Vᵀ?

Problem 13. Find the eigenvalues of the staircase matrix
( 0 0 0 1 ; 0 0 1 1 ; 0 1 1 1 ; 1 1 1 1 ).
Problem 14. Let A, B be hermitian matrices over C with eigenvalues λ₁, ..., λₙ and μ₁, ..., μₙ, respectively. Assume that tr(AB) = 0 (scalar product). What can be said about the eigenvalues of A + B?
Problem 15. Consider the reverse-diagonal n×n unitary matrix
A(φ₁, ..., φₙ) = ( 0 ... 0 e^{iφ₁} ; 0 ... e^{iφ₂} 0 ; ... ; e^{iφₙ} 0 ... 0 )
where φⱼ ∈ R (j = 1, ..., n). Find the eigenvalues and eigenvectors.
Problem 16. Let A, B be n×n matrices over C. The two n×n matrices A and B have a common eigenvector if and only if the n×n matrix
Σ_{j,k=1}^{n−1} [Aʲ, Bᵏ]*[Aʲ, Bᵏ]
is singular, i.e. the determinant is equal to 0.
(i) Apply the theorem to the 2×2 matrices.
(ii) Apply the theorem to the 3×3 matrices
A(α) = ( 1 0 0 ; 0 cos(α) −sin(α) ; 0 sin(α) cos(α) ),  B(α) = ( cos(α) −sin(α) 0 ; sin(α) cos(α) 0 ; 0 0 1 ).
Problem 17. Let 0 < x < 1. Consider the N×N matrix C (correlation matrix) with the entries Cⱼₖ := x^{|j−k|}, j, k = 1, ..., N. Find the eigenvalues of C. Show that as N → ∞ the distribution of its eigenvalues becomes a continuous function of φ ∈ [0, 2π]
(1 − x²) / (1 − 2x cos(φ) + x²).
Problem 18. (i) Let A be an n×n matrix over C. Show that A is normal if and only if there exists an n×n unitary matrix U and an n×n diagonal matrix D such that D = U⁻¹AU. Note that U⁻¹ = U*.
(ii) Let A be a normal n×n matrix over C. Show that A has a set of n orthonormal eigenvectors. Show that if A has a set of n orthonormal eigenvectors, then A is normal.
Problem 19. (i) The 2n×2n symplectic matrix is defined by
S := ( 0ₙ Iₙ ; −Iₙ 0ₙ ).
The matrix S is unitary and skew-hermitian. Find the eigenvalues of S from this information.
(ii) Find the eigenvalues of the 2n×2n matrix
R = (1/√2) ( Iₙ Iₙ ; Iₙ −Iₙ ).
(iii) Let A be an n×n matrix over R. Let λ₁, ..., λₙ be the eigenvalues. What can be said about the eigenvalues of the 2n×2n matrix
( 0ₙ A ; Aᵀ 0ₙ )?
Problem 20. Given the matrix
( 1 2 3 4 ; 5 6 7 8 ; 9 10 11 12 ; 13 14 15 16 ).
Prove or disprove that exactly two eigenvalues are 0.

Problem 21. Let A be an n×n matrix over C. What is the condition on A such that all eigenvalues are 0 and A admits only one eigenvector?
Problem 22. Consider the n×n matrix Jₙ with 1's on the superdiagonal and 0's otherwise,
Jₙ = ( 0 1 0 ... 0 ; 0 0 1 ... 0 ; ... ; 0 0 0 ... 1 ; 0 0 0 ... 0 ).
Hence an arbitrary Jordan block is given by zIₙ + Jₙ, where z ∈ C. Find the eigenvalues of
( a b ; −b a ) ⊗ Iₙ + I₂ ⊗ Jₙ.

Problem 23. Let I₂ be the 2×2 identity matrix, 0₂ be the 2×2 zero matrix and σ₃ be the Pauli spin matrix.
(i) Find the eigenvalues and normalized eigenvectors of the 4×4 matrix
A(t) = ( cosh(2t)I₂ sinh(2t)σ₃ ; sinh(2t)σ₃ cosh(2t)I₂ ).
(ii) Find the eigenvalues and normalized eigenvectors of the 6×6 matrix
B(t) = ( cosh(2t)I₂ 0₂ sinh(2t)σ₃ ; 0₂ I₂ 0₂ ; sinh(2t)σ₃ 0₂ cosh(2t)I₂ ).
Can the results from (i) be utilized here?
(iii) Find the eigenvalues of the 6×6 matrices
(1/√2) ( ... )  and  ( 0₃ I₃ ; −I₃ 0₃ ).

Problem 24. Let n ≥ 3. Consider the tridiagonal n×n matrix
( a₁ b₂ 0 ... 0 0 ; c₂ a₂ b₃ ... 0 0 ; 0 c₃ a₃ ... 0 0 ; ... ; 0 0 0 ... aₙ₋₁ bₙ ; 0 0 0 ... cₙ aₙ )
with aⱼ, bⱼ, cⱼ ∈ C (j = 1, ..., n). It has in general n complex eigenvalues, the n roots of the characteristic polynomial p(λ). Show that this polynomial can be evaluated by the recursive formula
pₖ(λ) = (λ − aₖ)pₖ₋₁(λ) − bₖcₖpₖ₋₂(λ),  k = 2, 3, ..., n
p₁(λ) = λ − a₁,  p₀(λ) = 1.
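The recursion is straightforward to implement. The sketch below evaluates p(λ) pointwise and applies it to the symmetric tridiagonal matrix of the following problem (zero diagonal, off-diagonal entries √1, √2, √3), for which the characteristic polynomial works out to λ⁴ − 6λ² + 3:

```python
from math import sqrt

def charpoly(a, b, c, lam):
    # p_k(lam) = (lam - a_k) p_{k-1}(lam) - b_k c_k p_{k-2}(lam),
    # p_0 = 1, p_1 = lam - a_1; here a[0] = a_1, b[0] = b_2, c[0] = c_2, ...
    p_prev, p = 1.0, lam - a[0]
    for k in range(1, len(a)):
        p_prev, p = p, (lam - a[k]) * p - b[k - 1] * c[k - 1] * p_prev
    return p

# Tridiagonal matrix with zero diagonal and off-diagonal sqrt(1), sqrt(2), sqrt(3)
a = [0.0, 0.0, 0.0, 0.0]
b = [sqrt(1.0), sqrt(2.0), sqrt(3.0)]   # superdiagonal entries b_2, b_3, b_4
c = b                                   # symmetric case: c_k = b_k

for lam in (-1.0, 0.0, 0.5, 2.0):
    direct = lam**4 - 6.0 * lam**2 + 3.0
    print(lam, charpoly(a, b, c, lam), direct)   # the two values agree
```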
Problem 25. Consider an n×n symmetric tridiagonal matrix over R with diagonal entries α₁, ..., αₙ and off-diagonal entries β₁, ..., βₙ₋₁. Let fₙ(λ) := det(A − λIₙ) and
fₖ(λ) = det ( α₁−λ β₁ 0 ... 0 ; β₁ α₂−λ β₂ ... 0 ; 0 β₂ α₃−λ ... 0 ; ... ; 0 0 ... βₖ₋₁ αₖ−λ )
for k = 1, 2, ..., n and f₀(λ) = 1, f₋₁(λ) = 0. Then
fₖ(λ) = (αₖ − λ)fₖ₋₁(λ) − βₖ₋₁² fₖ₋₂(λ)
for k = 2, 3, ..., n. Find f₄(λ) for the 4×4 matrix
( 0 √1 0 0 ; √1 0 √2 0 ; 0 √2 0 √3 ; 0 0 √3 0 ).
Problem 26. Let εⱼ ∈ R with j = 1, 2, 3, 4. Consider the tridiagonal hermitian matrix
( ε₁ 1 0 0 ; 1 ε₂ 1 0 ; 0 1 ε₃ 1 ; 0 0 1 ε₄ ).
Find the eigenvalues and eigenvectors. Let v = (v₁ v₂ v₃ v₄)ᵀ be an eigenvector. Find v₁/v₂, v₂/v₃, v₃/v₄.
Problem 27. Let M be an n×n invertible matrix with eigenvalues λ₁, ..., λₙ (counting degeneracy). Find the eigenvalues of
M₊ = (1/2)(M + M⁻¹)  and  M₋ = (1/2)(M − M⁻¹).

Problem 28. Let A = (aⱼₖ) be an n×n semi-positive definite matrix with Σₖ₌₁ⁿ aⱼₖ ≤ 1 for j = 1, ..., n. Show that 0 ≤ λ ≤ 1 for all eigenvalues λ of A.
Problem 29. Given an n×n permutation matrix P.
(i) Assume that n is even and tr(P) = 0. Can we conclude that half of the eigenvalues of such a matrix are +1 and the other half are −1?
(ii) Assume that n is odd and tr(P) = 0. Can we conclude that the eigenvalues are given by the n solutions of λⁿ = 1?
Problem 30. Let A be an n×n matrix over C. The characteristic polynomial of A is defined by
Δ(λ) := det(λIₙ − A) = λⁿ + Σₖ₌₁ⁿ (−1)ᵏ σ(k) λⁿ⁻ᵏ.
The eigenvalues λ₁, ..., λₙ of A are the solutions of the characteristic equation Δ(λ) = 0. The coefficients σ(j) (j = 1, ..., n) are given by
σ(1) = Σⱼ₌₁ⁿ λⱼ = tr(A)
σ(2) = Σ_{j<k} λⱼλₖ
...
σ(n) = Πⱼ₌₁ⁿ λⱼ = det(A).
Another set of symmetric polynomials is given by the traces of powers of the matrix A, namely
s(j) = Σₖ₌₁ⁿ (λₖ)ʲ = tr(Aʲ),  j = 1, ..., n.
One has the (so-called) Newton relation
jσ(j) − s(1)σ(j−1) + ... + (−1)^{j−1} s(j−1)σ(1) + (−1)ʲ s(j) = 0
where j = 1, ..., n. Consider the 3×3 matrix
A = ( 1/√2 0 1/√2 ; 0 1 0 ; 1/√2 0 −1/√2 ).
Calculate s(1), s(2), s(3) from the traces of the powers of A. Then apply the Newton relation to find σ(1), σ(2), σ(3).
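A short Python sketch of this computation (the matrix above satisfies A² = I₃, so one expects s(1) = 1, s(2) = 3, s(3) = 1, and hence σ(1) = 1, σ(2) = −1, σ(3) = det(A) = −1):

```python
from math import sqrt

r = 1.0 / sqrt(2.0)
A = [[r, 0.0, r], [0.0, 1.0, 0.0], [r, 0.0, -r]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

# Power sums s(j) = tr(A^j) for j = 1, 2, 3
P = A
s = []
for _ in range(3):
    s.append(trace(P))
    P = matmul(P, A)

# Newton relation solved for sigma(j):
# j*sigma(j) = sum_{i=1}^{j} (-1)^(i-1) s(i) sigma(j-i), with sigma(0) = 1
sigma = [1.0]
for j in range(1, 4):
    sigma.append(sum((-1) ** (i - 1) * s[i - 1] * sigma[j - i]
                     for i in range(1, j + 1)) / j)
print(s)       # approximately [1, 3, 1]
print(sigma)   # approximately [1, 1, -1, -1]: sigma(3) = det(A) = -1
```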
Problem 31. (i) Is the 3×3 stochastic matrix
( 1/3 1/3 1/3 ; 2/3 0 1/3 ; 1/2 1/2 0 )
diagonalizable? First find the eigenvalues and normalized eigenvectors.
(ii) Find the eigenvalues and eigenvectors of the stochastic matrix
( 1/2 0 1/2 ; 0 1/2 1/2 ; 0 0 1 ).
(iii) Find the eigenvalues and eigenvectors of the stochastic matrix
( p_AA p_AB p_AC 0 ; 0 0 0 1 ; 0 0 0 1 ; p_DA p_DB p_DC 0 )
where p_AA = p_DA = 2 − √3, p_AB = p_DB = √3 − √2, p_AC = p_DC = √2 − 1.
(iv) Find the eigenvalues of the doubly stochastic matrices
( sin²(θ) cos²(θ) ; cos²(θ) sin²(θ) ),
( sin²(θ) 0 cos²(θ) ; cos²(θ) sin²(θ) 0 ; 0 cos²(θ) sin²(θ) ),
( sin²(θ) 0 0 cos²(θ) ; cos²(θ) sin²(θ) 0 0 ; 0 cos²(θ) sin²(θ) 0 ; 0 0 cos²(θ) sin²(θ) ).

Problem 32.
(i) Let cⱼ ∈ R. Find the eigenvalues of the matrices
( 0 1 ; c₁ c₂ ),  ( 0 1 0 0 ; 0 0 1 0 ; 0 0 0 1 ; c₁ c₂ c₃ c₄ ).
Generalize to the n×n case. Note that for the 2×2 matrix we find
λ₁,₂ = (c₂ ± √(c₂² + 4c₁))/2.
(ii) Show that the eigenvalues of the 4×4 matrix
( 0 a₁₂ a₁₃ a₁₄ ; a₁₂ 0 0 0 ; a₁₃ 0 0 0 ; a₁₄ 0 0 0 )
are given by 0 (2 times) and √(a₁₂² + a₁₃² + a₁₄²), −√(a₁₂² + a₁₃² + a₁₄²). Find the eigenvectors.
Problem 33. Consider the 3×3 matrices A and B. Find the eigenvalues of A and B and the eigenvalues of the commutator [A, B]. Are the matrices A and B similar?
Problem 34. What can be said about the eigenvalues and eigenvectors of a nonzero hermitian n×n matrix A with det(A) = 0 and tr(A) = 0? Give one eigenvalue of A. Give one eigenvalue of A ⊗ A. Consider the case n = 3 for the matrix A. Find all the eigenvalues and eigenvectors.

Problem 35. Let A be an n×n matrix over C. Show that the matrix A admits the inverse matrix A⁻¹ iff Av = 0 implies v = 0. Note that from Av = 0 it follows that A⁻¹(Av) = 0 and then v = 0.
Problem 36. Let S and Q be n×n matrices and
(S − λIₙ)u = 0,  (Q − μIₙ)v = 0.
These are eigenvalue equations. Show that
(S ⊗ Q − λμ(Iₙ ⊗ Iₙ))(u ⊗ v) = 0_{n²}
and
(S ⊗ Iₙ + Iₙ ⊗ Q − (λ + μ)(Iₙ ⊗ Iₙ))(u ⊗ v) = 0_{n²}.
Problem 37. Let γ ∈ R and γ² < 1. Consider the 2×2 matrix
K = σ₁ − iγσ₃.
Show that the eigenvalues of K and K* are given by ±√(1 − γ²). Find the normalized eigenvectors.
Problem 38. Let n ≥ 2. Consider the (n−1)×n matrix A over R. Then AAᵀ is an (n−1)×(n−1) matrix over R and AᵀA is an n×n matrix over R. Show that the (n−1) eigenvalues of AAᵀ are also eigenvalues of AᵀA and AᵀA additionally admits the eigenvalue 0.
Let A, B be 3 x 3 matrices. We define the composition
P r o b le m 39.
AOB :=
an
0
0
bn
«21
&21
&31
0
0
«31
0 «13
i>13 0
&23 a23
&33 0
0 «33
«12
&12
^22^22
&32
«32
Let
M = ( 1/√2 0 1/√2 ; 0 1 0 ; 1/√2 0 −1/√2 ).
Find the eigenvalues of M and M◦M.
Problem 40. Let j, k ∈ {0, 1}. Find the eigenvalues of the 2×2 matrix M = (Mⱼₖ) with Mⱼₖ = e^{iπ(j·k)}.

Problem 41. Compare the lowest eigenvalues of the two 6×6 matrices
( 0 1 0 0 0 0 ; 1 0 1 0 0 0 ; 0 1 0 1 0 0 ; 0 0 1 0 1 0 ; 0 0 0 1 0 1 ; 0 0 0 0 1 0 ),
( 0 1 0 0 0 1 ; 1 0 1 0 0 0 ; 0 1 0 1 0 0 ; 0 0 1 0 1 0 ; 0 0 0 1 0 1 ; 1 0 0 0 1 0 ).

Problem 42. Let Aₙ be the n×n matrices of the form
A₂ = ( 0 t ; t r ),  A₃ = ( 0 0 t ; 0 1 0 ; t 0 r ),  A₄ = ( 0 0 0 t ; 0 0 t 0 ; 0 t r 0 ; t 0 0 r ).
Thus the even dimensional matrix A₂ₙ has t along the skew-diagonal and r along the lower half of the main diagonal. Otherwise the entries are 0. The odd dimensional matrix A₂ₙ₊₁ has t along the skew-diagonal except 1 at the centre and r along the lower half of the main diagonal. Otherwise the entries are 0. Find the eigenvalues of these matrices.
(i) Show that for n = 2 the eigenvalues are
−(1/2)(√(4t² + r²) − r),  (1/2)(√(4t² + r²) + r).
(ii) Show that for n = 3 the eigenvalues are
−(1/2)(√(4t² + r²) − r),  (1/2)(√(4t² + r²) + r),  1.
(iii) Show that for n = 4 the eigenvalues are
−(1/2)(√(4t² + r²) − r),  (1/2)(√(4t² + r²) + r),
twice each.
(iv) Show that for n = 5 we have the same eigenvalues as for n = 4 and additionally the eigenvalue 1. Extend to general n.
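The n = 2 case is quickly confirmed: A₂ = ((0,t),(t,r)) has characteristic polynomial λ² − rλ − t², whose roots are exactly the two values in (i). A numerical sketch (the sample values of t and r are illustrative):

```python
from math import sqrt

t, r = 2.0, 3.0   # illustrative values

# Claimed eigenvalues of A_2 = [[0, t], [t, r]]
lam_plus = 0.5 * (sqrt(4.0 * t * t + r * r) + r)
lam_minus = -0.5 * (sqrt(4.0 * t * t + r * r) - r)

# They must satisfy lambda^2 - r*lambda - t^2 = 0 and the
# trace/determinant relations tr(A_2) = r, det(A_2) = -t^2.
for lam in (lam_plus, lam_minus):
    print(lam, lam * lam - r * lam - t * t)   # residual ~0
print(lam_plus + lam_minus, r)       # sum of eigenvalues = trace
print(lam_plus * lam_minus, -t * t)  # product of eigenvalues = determinant
```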
Problem 43. Let n ≥ 2 and j, k = 0, 1, ..., n−1. Consider the n×n matrix M = (Mⱼₖ) with
Mⱼₖ = 1 for j < k,  Mⱼₖ = 0 for j = k,  Mⱼₖ = −1 for j > k.
Write down the matrix M for n = 2 and find the eigenvalues and normalized eigenvectors. Write down the matrix M for n = 3 and find the eigenvalues and normalized eigenvectors.
Problem 44. Find the spectrum of the 3×3 matrix A = (aⱼₖ) (j, k = 1, 2, 3) with
aⱼₖ = √(j(k − 1)).
Problem 45. Let U be a unitary 4×4 matrix. Assume that each column vector (which obviously is normalized) of U can be written as the Kronecker product of two normalized vectors in C².
(i) Can we conclude that the eigenvectors of such a unitary matrix can also be written as a Kronecker product of two vectors in C²?
(ii) Can the matrix U be written as a Kronecker product of two 2×2 matrices?
Problem 46. Let A₁ be an n₁×n₁ matrix, A₂ be an n₂×n₂ matrix, A₃ be an n₃×n₃ matrix and I_{nⱼ} be the nⱼ×nⱼ identity matrices (j = 1, 2, 3). Let u be a normalized eigenvector of A₁ with eigenvalue λ, v be a normalized eigenvector of A₂ with eigenvalue μ and w be a normalized eigenvector of A₃ with eigenvalue ν. Consider
H = A₁ ⊗ I_{n₂} ⊗ I_{n₃} + I_{n₁} ⊗ A₂ ⊗ I_{n₃} + I_{n₁} ⊗ I_{n₂} ⊗ A₃ + A₁ ⊗ A₂ ⊗ I_{n₃} + A₁ ⊗ I_{n₂} ⊗ A₃ + I_{n₁} ⊗ A₂ ⊗ A₃ + A₁ ⊗ A₂ ⊗ A₃.
Show that u ⊗ v ⊗ w is a normalized eigenvector of H. Find the corresponding eigenvalue.
Problem 47. Let λ be an eigenvalue of the n×n matrix A with normalized eigenvector v and μ be an eigenvalue of the n×n matrix B with normalized eigenvector w. Show that λμ − λ − μ is an eigenvalue of A ⊗ B − A ⊗ Iₙ − Iₙ ⊗ B with normalized eigenvector v ⊗ w.
Problem 48. Let A, B be 2×2 matrices over C. Let λ be an eigenvalue of A with corresponding normalized eigenvector u and μ be an eigenvalue of B with corresponding normalized eigenvector v. Let A ⊗ B be the Kronecker product of A and B. Then λμ is an eigenvalue of A ⊗ B with corresponding normalized eigenvector u ⊗ v. Let A ⊕ B be the direct sum of A and B. Then we have
( A 0₂ ; 0₂ B )( u₁ ; u₂ ; 0 ; 0 ) = λ( u₁ ; u₂ ; 0 ; 0 ),  ( A 0₂ ; 0₂ B )( 0 ; 0 ; v₁ ; v₂ ) = μ( 0 ; 0 ; v₁ ; v₂ ).
Consequently
( A 0₂ ; 0₂ B )( u₁ ; u₂ ; v₁ ; v₂ ) = ( λu₁ ; λu₂ ; μv₁ ; μv₂ ).
Thus the vector (u₁ u₂ v₁ v₂)ᵀ would be an eigenvector of A ⊕ B if λ = μ. Let A ★ B be the star product
A ★ B = ( b₁₁ 0 0 b₁₂ ; 0 a₁₁ a₁₂ 0 ; 0 a₂₁ a₂₂ 0 ; b₂₁ 0 0 b₂₂ ).
Then
(A ★ B)( v₁ ; u₁ ; u₂ ; v₂ ) = ( μv₁ ; λu₁ ; λu₂ ; μv₂ ).
Thus the vector (v₁ u₁ u₂ v₂)ᵀ would be an eigenvector of A ★ B if μ = λ.
(i) Study the eigenvalue problem for A ⊗ B + A ⊕ B + A ★ B.
(ii) Apply the result to A = σ₁ and B = σ₂, where σ₁, σ₂ denote the Pauli spin matrices. Both admit the eigenvalues +1, −1.
Problem 49. (i) Given the eigenvalues of the n×n matrix M. Find the eigenvalues of
e^{−M} ⊗ Iₙ + Iₙ ⊗ e^{M}.
(ii) Let A and B be normal n×n matrices with eigenvalues λ₁, ..., λₙ and μ₁, ..., μₙ, respectively. Find the eigenvalues of e^{A} ⊗ Iₙ + Iₙ ⊗ e^{B}.
Problem 50. Let α ∈ R. Consider the symmetric 4×4 matrix
A(α) = ( 1+α 0 0 1−α ; 0 1−α 1+α 0 ; 0 1+α 1−α 0 ; 1−α 0 0 1+α ).
Show that an invertible matrix B and a diagonal matrix D such that A(α) = B⁻¹DB are given by
B = (1/√2) ( 1 0 0 1 ; 0 1 1 0 ; 0 1 −1 0 ; 1 0 0 −1 ),  D = ( 2 0 0 0 ; 0 2 0 0 ; 0 0 −2α 0 ; 0 0 0 2α ).
Problem 51. Let A be an n×n matrix with eigenvalue λ and eigenvector u, i.e. Au = λu, u ≠ 0. Consider the nⁿ×nⁿ matrix
K = A ⊗ Iₙ ⊗ ... ⊗ Iₙ + Iₙ ⊗ A ⊗ Iₙ ⊗ ... ⊗ Iₙ + ... + Iₙ ⊗ ... ⊗ Iₙ ⊗ A
where each term has n Kronecker factors. Show that
K(u ⊗ ... ⊗ u) = λ(u ⊗ ... ⊗ u) + ... + λ(u ⊗ ... ⊗ u) = nλ(u ⊗ ... ⊗ u),
i.e. nλ is an eigenvalue.
Problem 52. Consider the (n+1)×(n+1) matrix
E = ( 0ₙ r ; s* 0 )
where r and s are n×1 vectors with complex entries, s* denoting the conjugate transpose of s. Show that the characteristic polynomial is given by
det(E − λI_{n+1}) = (−λ)^{n−1}(λ² − s*r).
Problem 53. Let n > 2 and n even. Find the eigenvalues of the n×n matrix
( 2 −1 0 ... 0 −1 ; −1 2 −1 ... 0 0 ; 0 −1 2 ... 0 0 ; ... ; −1 0 0 ... −1 2 ).

Problem 54. Let j, k = 0, 1, 2, 3. Write down the 4×4 matrix A with the entries
aⱼₖ = cos( (πj − πk)/4 )
and find the eigenvalues.
Problem 55. Let H be a hermitian n×n matrix and U an n×n unitary matrix such that UHU* = H. We call H invariant under U. From UHU* = H it follows that [H, U] = 0ₙ. If v is an eigenvector of H, i.e. Hv = λv, then Uv is also an eigenvector of H since
H(Uv) = (HU)v = U(Hv) = λ(Uv).
The set of all unitary matrices Uⱼ (j = 1, ..., m) that leave a given hermitian matrix invariant, i.e. UⱼHUⱼ* = H (j = 1, ..., m), forms a group under matrix multiplication.
(i) Find all 2×2 hermitian matrices H such that [H, U] = 0₂, where U = σ₁.
(ii) Find all 2×2 hermitian matrices H such that [H, U] = 0₂, where U = σ₂.
(iii) Find all 4×4 hermitian matrices H such that [H, U] = 0₄, where U = σ₁ ⊗ σ₁.
(iv) Find all 4×4 hermitian matrices H such that [H, U] = 0₄, where U =
Problem 56. The Cartan matrix A of a rank r root system of a semi-simple Lie algebra is an r×r matrix whose entries are derived from the simple roots. The entries of the Cartan matrix are given by
aⱼₖ = 2(αⱼ, αₖ)/(αⱼ, αⱼ)
where (,) is the Euclidean inner product and αⱼ are the simple roots. The entries are independent of the choice of simple roots (up to ordering).
(i) The Cartan matrix of the Lie algebra e₈ is given by the 8×8 matrix
( 2 −1 0 0 0 0 0 0 )
( −1 2 −1 0 0 0 0 0 )
( 0 −1 2 −1 0 0 0 −1 )
( 0 0 −1 2 −1 0 0 0 )
( 0 0 0 −1 2 −1 0 0 )
( 0 0 0 0 −1 2 −1 0 )
( 0 0 0 0 0 −1 2 0 )
( 0 0 −1 0 0 0 0 2 )
Show that the trace is equal to 16. Show that the determinant is equal to 1. Find the eigenvalues and eigenvectors of the matrix.
(ii) The Lie algebra sl(3, R) is the rank 2 Lie algebra with Cartan matrix
C = ( 2 −1 ; −1 2 ).
Find the eigenvalues and normalized eigenvectors of C.
(iii) Let τ = 2cos(π/5). Find the eigenvalues and eigenvectors of the three Cartan matrices
Problem 57. Let A be an n×n matrix. Let J be the n×n matrix with 1's in the counter-diagonal and 0's otherwise. Let tr(A) = 0, tr(JA) = 0. What can be said about the eigenvalues of A? Consider first the cases n = 2 and n = 3.
Problem 58.
Let A be an n×n matrix over C. Let n > 1. We define
Δ(A, n) := Σₖ₌₀ⁿ (n choose k) Aᵏ ⊗ Aⁿ⁻ᵏ.
Given the eigenvalues of A, what can be said about the eigenvalues of Δ(A, n)?
Problem 59. Let A, B be n×n matrices over C. Assume we know the eigenvalues and eigenvectors of A and B. What can be said about the eigenvalues and eigenvectors of A ⊗ B − B ⊗ A and A ⊗ B + B ⊗ A?

Problem 60. Let U be a unitary 4×4 matrix. Assume that each column (vector) can be written as the Kronecker product of two vectors in C². Can we conclude that the eigenvectors of such a unitary matrix can also be written as a Kronecker product of two vectors in C²?
Problem 61. Let A, B be n×n normal matrices. Consider the normal matrices
H = A ⊗ Iₙ + Iₙ ⊗ B + A ⊗ B,  K = A ⊗ Iₙ + Iₙ ⊗ B + B ⊗ A.
Given the eigenvalues and eigenvectors of A, i.e. λⱼ, uⱼ (j = 1, ..., n), and the eigenvalues and eigenvectors of B, i.e. μⱼ, vⱼ (j = 1, ..., n). Then the eigenvalues and eigenvectors of H are given by λⱼ + μₖ + λⱼμₖ and uⱼ ⊗ vₖ. Assume that
tr(AB*) = 0,
i.e. considering the Hilbert space of the n×n matrices with the scalar product (X, Y) := tr(XY*), the matrices A and B are orthogonal. What can be said about the eigenvalues and eigenvectors of K?
Problem 62. (i) Let a, b, c ∈ R. Show that the four eigenvalues of the 4×4 matrix
( 0 a b c ; −a 0 ic −ib ; −b −ic 0 ia ; −c ib −ia 0 )
are given by ±√(a² + b² + c²), ±i√(a² + b² + c²).
(ii) Let x ∈ R. Show that the lowest eigenvalue of the symmetric 4×4 matrix
( 0 −x√5 0 0 ; −x√5 4 −2x 0 ; 0 −2x 4−2x −x ; 0 0 −x 8−2x )
is given by
λ₀ = 4 − x − 2√(1 + x² + 2x cos(2π/5)) − 2√(1 + x² + 2x cos(4π/5)).
Problem 63. Let A be an n×n normal matrix, i.e. AA* = A*A. Let u be an eigenvector of A, i.e. Au = λu. Show that u is also an eigenvector of A* with eigenvalue λ̄, i.e.
A*u = λ̄u.
Apply (Au)*(Au) = (A*u)*(A*u).
Problem 64. Let A be an n×n normal matrix. Assume that λⱼ (j = 1, ..., n) are the eigenvalues of A. Calculate
Πₖ₌₁ⁿ (1 + λₖ)
without using the eigenvalues. Apply the identity
Πₖ₌₁ⁿ (1 + λₖ) = det(Iₙ + A).
Problem 65. Let A be an n×n matrix over C which satisfies A² = AA = cA, where c ∈ C is a constant. Obviously the equation is satisfied by the zero matrix. Assume that A ≠ 0ₙ. Then we have a "type of eigenvalue equation".
(i) Is c an eigenvalue of A?
(ii) Take the determinant of both sides of the equation. Discuss. Study the cases that A is invertible and non-invertible. If the matrix A is invertible it follows that A = cIₙ and c ≠ 0. Taking the determinant yields
(det(A))² = cⁿ det(A).
(iii) Study the case
A(z) = ( e^z e^{−z} ; e^z e^{−z} ),  z ∈ C
and show that
A²(z) = (e^z + e^{−z})A(z)
where e^z + e^{−z} together with 0 are the eigenvalues of A(z).
(iv) Study (A ⊗ A)² = c(A ⊗ A).
(v) Let A be a 2×2 matrix and A ★ A be the star product. Study the case (A ★ A)² = c(A ★ A).
(vi) Study the case that A³ = cA.
Problem 66. The additive inverse eigenvalue problem is as follows: Let A be an n×n symmetric matrix over R with aⱼⱼ = 0 for j = 1, ..., n. Find a real diagonal n×n matrix D such that the matrix A + D has the prescribed eigenvalues λ₁, ..., λₙ. The number of solutions for the real matrix D varies from 0 to n!. Consider the 2×2 matrix A = σ₁ and the prescribed eigenvalues λ₁ = 2, λ₂ = 3. Can one find a D?
Problem 67. Let A be an n×n normal matrix with pairwise different eigenvalues λ₁, ..., λₙ. Are the matrices
Πⱼ = Π_{k=1, k≠j}ⁿ (A − λₖIₙ)/(λⱼ − λₖ)
projection matrices?
Problem 68. Consider the hermitian 3×3 matrix
A = ( 0 1 1 ; 1 0 1 ; 1 1 0 ).
Find A² and A³. We know that
tr(A) = λ₁ + λ₂ + λ₃,  tr(A²) = λ₁² + λ₂² + λ₃²,  tr(A³) = λ₁³ + λ₂³ + λ₃³.
Use Newton's method to solve this system of three equations to find the eigenvalues of A.
Problem 69. Consider an n×n permutation matrix P. Obviously +1 is always an eigenvalue since the column vector with all n entries equal to +1 is an eigenvector. Apply a brute force method and give a C++ implementation to figure out whether −1 is an eigenvalue. We run over all column vectors v of length n, where the entries can only be +1 or −1, where of course the cases with all entries +1 or all entries −1 can be omitted. Thus the number of column vectors we have to run through is 2ⁿ − 2. The condition then to be checked is Pv = −v. If true we have an eigenvalue −1 with the corresponding eigenvector v.
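The problem asks for a C++ implementation; the same brute-force logic, restricted to ±1 vectors exactly as described, can be sketched compactly in Python (the 4-cycle permutation matrix below is an illustrative input):

```python
from itertools import product

def has_minus_one_eigenvalue(P):
    """Brute force: test P v = -v over all vectors v with entries +-1,
    skipping the two constant vectors, as described in the text."""
    n = len(P)
    for v in product((1, -1), repeat=n):
        if all(e == 1 for e in v) or all(e == -1 for e in v):
            continue
        Pv = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
        if Pv == [-e for e in v]:
            return True, list(v)
    return False, None

# Permutation matrix of the 4-cycle: eigenvalues 1, i, -1, -i.
P = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0]]
print(has_minus_one_eigenvalue(P))  # (True, [1, -1, 1, -1])
```

Note that this search only finds eigenvectors whose entries are all ±1; for the cycle above the alternating vector is exactly such an eigenvector.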
Problem 70. The power method is the simplest algorithm for computing eigenvectors and eigenvalues. Consider the vector space Rⁿ with the Euclidean norm ||x|| of a vector x ∈ Rⁿ. The iteration is as follows: Given a nonsingular n×n matrix M and a vector x₀ with ||x₀|| = 1. One defines
x_{t+1} = Mx_t / ||Mx_t||.
This defines a dynamical system on the sphere Sⁿ⁻¹. Since M is invertible we have
x_t = M⁻¹x_{t+1} / ||M⁻¹x_{t+1}||.
(i) Apply the power method to the nonnormal matrix
A = ( 1 1 ; 0 1 )  and  x₀ = ( 0 ; 1 ).
(ii) Apply the power method to the Bell matrix
B = (1/√2) ( 1 0 0 1 ; 0 1 1 0 ; 0 1 −1 0 ; 1 0 0 −1 )  and  x₀ = ( 1 ; 0 ; 0 ; 0 ).
(iii) Consider the 3×3 symmetric matrix over R
A = ( 2 −1 0 ; −1 2 −1 ; 0 −1 2 ).
Find the largest eigenvalue and the corresponding eigenvector using the power method. Start from the vector
v = ( 1 ; 1 ; 1 ),  Av = ( 1 ; 0 ; 1 ),  A²v = ( 2 ; −2 ; 2 ).
The largest eigenvalue is λ = 2 + √2 with the eigenvector (1, −√2, 1)ᵀ.
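The iteration for part (iii) can be sketched as follows (pure Python; the Rayleigh quotient of the final iterate estimates the dominant eigenvalue 2 + √2 ≈ 3.4142):

```python
from math import sqrt

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]

def power_method(A, x, steps=50):
    n = len(A)
    for _ in range(steps):
        # y = A x, then normalize to stay on the unit sphere
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sqrt(sum(yi * yi for yi in y))
        x = [yi / norm for yi in y]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(x[i] * Ax[i] for i in range(n))   # Rayleigh quotient estimate
    return lam, x

lam, x = power_method(A, [1.0, 1.0, 1.0])
print(lam, 2.0 + sqrt(2.0))   # estimate vs exact dominant eigenvalue
print(x)                      # proportional to (1, -sqrt(2), 1)
```

Convergence is geometric with ratio |λ₂/λ₁| = 2/(2+√2) ≈ 0.59, so 50 iterations are ample here.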
Problem 71. Let A be an n×n matrix over C. Then any eigenvalue of A satisfies the inequality
|λ| ≤ max_{1≤j≤n} Σₖ₌₁ⁿ |aⱼₖ|.
Write a C++ program that calculates the right-hand side of the inequality for a given matrix. Apply the complex class of the STL. Apply it to the matrix
( i 0 0 4i ; 0 2i 0 2i ; 3i 3i 0 0 ; 0 0 i 0 ).
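The right-hand side is simply the maximum absolute row sum (the ∞-norm of A). A short Python sketch of the same computation, applied to an illustrative complex matrix (an assumption, not necessarily the one printed in the book):

```python
# Bound: every eigenvalue satisfies |lambda| <= max_j sum_k |a_jk|.
# Illustrative complex matrix (an assumed example).
A = [[1j, 0, 0, 4j],
     [0, 2j, 0, 2j],
     [3j, 3j, 0, 0],
     [0, 0, 1j, 0]]

bound = max(sum(abs(a) for a in row) for row in A)
print(bound)  # 6.0: row three has absolute row sum 3 + 3 = 6
```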
Chapter 6
Spectral Theorem
An n×n matrix A over C is called normal if A*A = AA*. Let A be an n×n normal matrix over C with eigenvalues λ₁, ..., λₙ and corresponding pairwise orthonormal eigenvectors vⱼ (j = 1, ..., n). Then the matrix A can be written as (spectral decomposition)
A = Σⱼ₌₁ⁿ λⱼ vⱼvⱼ*.
Note that
vⱼvⱼ*,  j = 1, ..., n
are projection matrices, i.e. (vⱼvⱼ*)(vⱼvⱼ*) = vⱼvⱼ*, (vⱼvⱼ*)* = vⱼvⱼ* and vⱼ*vₖ = 1 for j = k and vⱼ*vₖ = 0 for j ≠ k.
Let Mₙ(C) be the vector space of n×n matrices over C. Let A ∈ Mₙ(C) with eigenvalues λ₁, ..., λₙ counted according to multiplicity. The following statements are equivalent.
(1) The matrix A is normal.
(2) The matrix A is unitarily diagonalizable, i.e. there is an n×n unitary matrix U such that UAU* is a diagonal matrix.
(3) There exists an orthonormal basis of n eigenvectors of A.
(4) With A = (aⱼₖ) (j, k = 1, ..., n)
Σⱼ₌₁ⁿ Σₖ₌₁ⁿ |aⱼₖ|² = Σⱼ₌₁ⁿ |λⱼ|².
The spectral theorem, for example, can be utilized to calculate exp(A) of a normal matrix or the square roots of a normal matrix.
Problem 1. Consider the vectors in C² (sometimes called spinors)
v1 = ( cos(θ/2) e^{-iφ/2} ; sin(θ/2) e^{iφ/2} ),   v2 = ( sin(θ/2) e^{-iφ/2} ; -cos(θ/2) e^{iφ/2} ).
(i) First show that they are normalized and orthogonal to each other.
(ii) Assume that the vector v1 is an eigenvector with the corresponding eigenvalue +1 and that the vector v2 is an eigenvector with the corresponding eigenvalue −1. Apply the spectral theorem to find the corresponding 2 × 2 matrix.
(iii) Since the vectors v1 and v2 form an orthonormal basis in C² we can form an orthonormal basis in C⁴ via the Kronecker product
w11 = v1 ⊗ v1,   w12 = v1 ⊗ v2,   w21 = v2 ⊗ v1,   w22 = v2 ⊗ v2.
Assume that the eigenvalue for w11 is +1, for w12 −1, for w21 −1 and for w22 +1. Apply the spectral theorem to find the corresponding 4 × 4 matrix.
Solution 1. (i) Straightforward calculation yields
v1* v1 = 1,   v2* v2 = 1,   v2* v1 = 0.
(ii) Since U(θ,φ) = λ1 v1 v1* + λ2 v2 v2* we find, utilizing the identities
cos²(θ/2) − sin²(θ/2) = cos(θ),   2cos(θ/2)sin(θ/2) = sin(θ),
that
U(θ,φ) = λ1 v1 v1* + λ2 v2 v2* = ( cos(θ)  sin(θ)e^{-iφ} ; sin(θ)e^{iφ}  -cos(θ) ).
(iii) For the 4 × 4 matrix V(θ,φ) we have
V(θ,φ) = w11 w11* − w12 w12* − w21 w21* + w22 w22*.
Problem 2. Given a 4 × 4 symmetric matrix A over R with the eigenvalues λ1 = 0, λ2 = 1, λ3 = 2, λ4 = 3 and the corresponding normalized eigenvectors
v1 = (1/√2)( 1 0 0 1 )^T,   v2 = (1/√2)( 1 0 0 -1 )^T,   v3 = (1/√2)( 0 1 1 0 )^T,   v4 = (1/√2)( 0 1 -1 0 )^T.
Reconstruct the matrix A using the spectral theorem. The eigenvectors given above are called the Bell basis.
Solution 2. Since λ1 = 0 we have
A = v2 v2^T + 2 v3 v3^T + 3 v4 v4^T = ( 1/2 0 0 -1/2 ; 0 5/2 -1/2 0 ; 0 -1/2 5/2 0 ; -1/2 0 0 1/2 ).
Problem 3. (i) Consider the 2 × 2 permutation matrix
σ1 = ( 0 1 ; 1 0 ).
Find the spectral decomposition of σ1. Find a matrix K with exp(K) = σ1.
(ii) Consider the permutation matrix
P = ( 0 0 1 ; 0 1 0 ; 1 0 0 ).
Find the spectral decomposition of P. Find a matrix X with exp(X) = P.
(iii) Consider the permutation matrix
P = ( 0 0 0 1 ; 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ).
Show that the eigenvalues form a group under multiplication. Use the spectral representation to find K such that P = exp(K).
Solution 3. (i) The eigenvalues are given by λ1 = +1 and λ2 = −1 with the corresponding normalized eigenvectors
v1 = (1/√2)( 1 1 )^T,   v2 = (1/√2)( 1 -1 )^T.
Thus we find σ1 = λ1 v1 v1* + λ2 v2 v2*. Hence
K = ln(λ1) v1 v1* + ln(λ2) v2 v2*.
Since ln(1) = 0 and ln(−1) = ln(e^{iπ}) = iπ we obtain the noninvertible matrix
K = ln(λ2) v2 v2* = (iπ/2) ( 1 -1 ; -1 1 ).
Note that tr(K) = iπ, det(σ1) = −1 and e^{iπ} = −1.
(ii) The eigenvalues are given by λ1 = +1, λ2 = +1 and λ3 = −1 with the corresponding normalized eigenvectors
v1 = (1/√2)( 1 0 1 )^T,   v2 = ( 0 1 0 )^T,   v3 = (1/√2)( 1 0 -1 )^T.
The eigenvectors are pairwise orthogonal. Thus we find the spectral decomposition of P as
P = λ1 v1 v1* + λ2 v2 v2* + λ3 v3 v3*.
Applying this result we find
X = ln(λ1) v1 v1* + ln(λ2) v2 v2* + ln(λ3) v3 v3*.
Since ln(1) = 0 and ln(−1) = ln(e^{iπ}) = iπ we obtain the noninvertible matrix
X = ln(λ3) v3 v3* = (iπ/2) ( 1 0 -1 ; 0 0 0 ; -1 0 1 ).
Note that tr(X) = iπ, det(P) = −1 and e^{iπ} = −1.
(iii) The eigenvalues are λ1 = +1, λ2 = −1, λ3 = +i, λ4 = −i with the corresponding eigenvectors
v1 = (1/2)( 1 1 1 1 )^T,   v2 = (1/2)( 1 -1 1 -1 )^T,   v3 = (1/2)( 1 -i -1 i )^T,   v4 = (1/2)( 1 i -1 -i )^T.
Obviously the numbers +1, −1, +i, −i form a commutative group under multiplication. From the spectral representation of P
P = λ1 v1 v1* + λ2 v2 v2* + λ3 v3 v3* + λ4 v4 v4*
and ln(1) = 0, ln(−1) = iπ, ln(i) = iπ/2, ln(−i) = −iπ/2 we obtain the skew-hermitian matrix
K = iπ v2 v2* + (iπ/2) v3 v3* − (iπ/2) v4 v4*
  = (π/4) ( i  -1-i  i  1-i ; 1-i  i  -1-i  i ; i  1-i  i  -1-i ; -1-i  i  1-i  i ).
Problem 4. Let z ∈ C and A be an n × n normal matrix with eigenvalues λ1, ..., λn and pairwise orthonormal eigenvectors v1, ..., vn. Use the spectral decomposition to calculate exp(zA).

Solution 4. Since
exp(z λ_j v_j v_j*) = I_n + v_j v_j*(e^{zλ_j} − 1)
and the matrices v_j v_j* and v_k v_k* (j ≠ k) commute we have
e^{zA} = (I_n + v1 v1*(e^{zλ1} − 1))(I_n + v2 v2*(e^{zλ2} − 1)) ··· (I_n + v_n v_n*(e^{zλ_n} − 1)).
Now v_j* v_k = δ_jk and Σ_{j=1}^n v_j v_j* = I_n. It follows that
e^{zA} = Σ_{j=1}^n e^{zλ_j} v_j v_j*.
Problem 5. Let A be a hermitian n × n matrix. Assume that all the eigenvalues λ1, ..., λn are pairwise different. Then the normalized eigenvectors u_j (j = 1, ..., n) satisfy u_j* u_k = 0 for j ≠ k and u_j* u_j = 1. We have
A = Σ_{j=1}^n λ_j u_j u_j*.
Let e_k (k = 1, ..., n) be the standard basis in C^n. Calculate U*AU, where
U = Σ_{k=1}^n u_k e_k*.

Solution 5. From U we find
U* = Σ_{ℓ=1}^n e_ℓ u_ℓ*.
Thus
U*AU = ( Σ_{ℓ=1}^n e_ℓ u_ℓ* )( Σ_{j=1}^n λ_j u_j u_j* )( Σ_{k=1}^n u_k e_k* )
     = Σ_{j=1}^n Σ_{ℓ=1}^n Σ_{k=1}^n λ_j e_ℓ (u_ℓ* u_j)(u_j* u_k) e_k*
     = Σ_{j=1}^n Σ_{ℓ=1}^n Σ_{k=1}^n λ_j e_ℓ δ_{ℓj} δ_{jk} e_k*
     = Σ_{j=1}^n λ_j e_j e_j*.
Thus U*AU is a diagonal matrix with the eigenvalues on the diagonal.
Problem 6. Let A be a positive definite n × n matrix. Thus all the eigenvalues are real and positive. Assume that all the eigenvalues λ1, ..., λn are pairwise different. Then the normalized eigenvectors u_j (j = 1, ..., n) satisfy u_j* u_k = 0 for j ≠ k and u_j* u_j = 1. We have (spectral theorem)
A = Σ_{j=1}^n λ_j u_j u_j*.
Let e_k (k = 1, ..., n) be the standard basis in C^n. Calculate ln(A). Note that the unitary matrix
U = Σ_{k=1}^n u_k e_k*
transforms A into a diagonal matrix, i.e. Λ = U*AU is a diagonal matrix.
Solution 6. From Λ = U*AU it follows that A = UΛU*. Thus
ln(A) = ln(UΛU*) = U ln(Λ) U* = Σ_{j=1}^n ln(λ_j) u_j u_j*.
Problem 7. Consider the normal noninvertible 3 × 3 matrix
A = ( 0 1 0 ; 1 0 1 ; 0 1 0 ).
(i) Find the spectral decomposition.
(ii) Use this result to calculate exp(zA), where z ∈ C.

Solution 7. (i) The eigenvalues are λ1 = 0, λ2 = √2, and λ3 = −√2 with the corresponding normalized eigenvectors
v1 = (1/√2)( 1 0 -1 )^T,   v2 = (1/2)( 1 √2 1 )^T,   v3 = (1/2)( 1 -√2 1 )^T.
Thus we have the spectral decomposition of A
A = λ1 v1 v1* + λ2 v2 v2* + λ3 v3 v3* = λ2 v2 v2* + λ3 v3 v3*.
Note that v1 v1*, v2 v2*, v3 v3* are projection matrices.
(ii) The spectral decomposition of exp(zA) is
e^{zA} = e^{zλ2 v2 v2*} e^{zλ3 v3 v3*} = (I3 + (e^{zλ2} − 1) v2 v2*)(I3 + (e^{zλ3} − 1) v3 v3*)
where
v2 v2* = (1/4) ( 1 √2 1 ; √2 2 √2 ; 1 √2 1 ),   v3 v3* = (1/4) ( 1 -√2 1 ; -√2 2 -√2 ; 1 -√2 1 ).

Problem 8.
Consider the two 3 × 3 permutation matrices with P2 = P1^T
P1 = ( 0 0 1 ; 1 0 0 ; 0 1 0 ),   P2 = ( 0 1 0 ; 0 0 1 ; 1 0 0 ).
(i) Find the spectral decomposition of the permutation matrices.
(ii) Use the spectral decomposition to find the matrices K1 and K2 such that P1 = exp(K1), P2 = exp(K2).
(iii) We want to find K1 and K2 efficiently such that P1 = e^{K1} and P2 = e^{K2}. We would apply the spectral decomposition theorem to find K1, i.e.
K1 = Σ_{j=1}^3 ln(λ_j) v_j v_j*
where λ_j are the eigenvalues of P1 and v_j are the corresponding normalized eigenvectors. But then to find K2 we would apply the property that P1² = P2. Or could we actually apply that P2 = P1^T? Note that P1, P2, I3 form a commutative subgroup of the group of 3 × 3 permutation matrices under matrix multiplication.
Solution 8. (i) For the eigenvalues of P1 we find
λ1 = 1,   λ2 = e^{i2π/3},   λ3 = e^{i4π/3}
with λ1 + λ2 + λ3 = 0. The corresponding normalized eigenvectors are
v1 = (1/√3)( 1 1 1 )^T,   v2 = (1/√3)( 1 e^{i4π/3} e^{i2π/3} )^T,   v3 = (1/√3)( 1 e^{i2π/3} e^{i4π/3} )^T.
For the eigenvalues of P2 we find the same as for P1, namely λ1 = 1, λ2 = e^{i2π/3}, λ3 = e^{i4π/3}, again with λ1 + λ2 + λ3 = 0. The corresponding normalized eigenvectors are
w1 = v1,   w2 = (1/√3)( 1 e^{i2π/3} e^{i4π/3} )^T,   w3 = (1/√3)( 1 e^{i4π/3} e^{i2π/3} )^T.
Hence the spectral decompositions of P1 and P2 are
P1 = Σ_{j=1}^3 λ_j v_j v_j*,   P2 = Σ_{j=1}^3 λ_j w_j w_j*.
(ii) Since ln(1) = 0, ln(e^{i2π/3}) = 2πi/3, ln(e^{i4π/3}) = 4πi/3 we obtain
K1 = (2πi/3) v2 v2* + (4πi/3) v3 v3* = (4πi/9) ( 3/2  α  β ; β  3/2  α ; α  β  3/2 )
where α = e^{i2π/3}/2 + e^{i4π/3} and β = e^{i4π/3}/2 + e^{i2π/3}. Analogously we construct the matrix K2. Note that K2 = K1^T. Also note that the trace of these matrices is 2πi.
(iii) For the second and third eigenvalues we obtain the corresponding normalized eigenvectors
v2 = (1/√3)( 1 e^{i4π/3} e^{i2π/3} )^T,   v3 = (1/√3)( 1 e^{i2π/3} e^{i4π/3} )^T.
Now since ln(e^{i2π/3}) = i2π/3 and ln(e^{i4π/3}) = i4π/3 we find for K1
K1 = (i2π/3) v2 v2* + (i4π/3) v3 v3*.
Since P2 = P1² = e^{K1} e^{K1} = e^{2K1} = e^{K2}, the matrix K2 is given by K2 = 2K1.
Problem 9. Consider the unitary matrix
U = ( 0 0 0 1 ; 0 0 1 0 ; 0 1 0 0 ; 1 0 0 0 ).
Find the skew-hermitian matrix K such that U = exp(K). Utilize the spectral representation.
Solution 9. The eigenvalues of U are λ1 = 1, λ2 = 1, λ3 = −1, λ4 = −1 with the corresponding normalized eigenvectors (Bell basis)
v1 = (1/√2)( 1 0 0 1 )^T,   v2 = (1/√2)( 0 1 1 0 )^T,   v3 = (1/√2)( 1 0 0 -1 )^T,   v4 = (1/√2)( 0 1 -1 0 )^T.
The eigenvectors are pairwise orthonormal. Thus we have the spectral representation of U
U = λ1 v1 v1* + λ2 v2 v2* + λ3 v3 v3* + λ4 v4 v4*.
Thus K is given by (since ln(1) = 0)
K = ln(λ3) v3 v3* + ln(λ4) v4 v4* = (iπ/2) ( 1 0 0 -1 ; 0 1 -1 0 ; 0 -1 1 0 ; -1 0 0 1 )
where we used ln(−1) = iπ and −1 = exp(iπ).
Problem 10. Consider the Hadamard matrix
U_H = (1/√2) ( 1 1 ; 1 -1 )
with eigenvalues +1 and −1. Find the square roots of this matrix applying the spectral theorem. Since √1 = ±1 and √(−1) = ±i four cases have to be considered:
(1, i),   (1, -i),   (-1, i),   (-1, -i).
Solution 10. For the eigenvalues we find λ1 = +1 and λ2 = −1. The corresponding normalized eigenvectors are
v1 = (1/√8) ( √(4+2√2) ; √(4-2√2) ),   v2 = (1/√8) ( √(4-2√2) ; -√(4+2√2) ).
Thus U_H = λ1 v1 v1* + λ2 v2 v2*. Thus for the main branch (√1 = 1, √(−1) = i) we have
√U_H = 1 · v1 v1* + i · v2 v2*.
Thus
√U_H = (1/(2√2)) ( 1+√2+i(√2-1)   1-i ; 1-i   √2-1+i(1+√2) ).
Analogously we find the other three branches.
Problem 11. Let A be an n × n matrix over C. An n × n matrix B is called a square root of A if B² = A. Find the square roots of the 2 × 2 identity matrix I2 applying the spectral theorem. The eigenvalues of I2 are λ1 = 1 and λ2 = 1. As normalized eigenvectors choose
v1 = ( e^{iφ1} cos(θ) ; e^{iφ2} sin(θ) ),   v2 = ( e^{iφ1} sin(θ) ; -e^{iφ2} cos(θ) )
which form an orthonormal basis in C². Four cases
(√λ1, √λ2) = (1, 1),   (√λ1, √λ2) = (-1, 1),   (√λ1, √λ2) = (1, -1),   (√λ1, √λ2) = (-1, -1)
have to be studied. The first and last cases are trivial. So study the second case (√λ1, √λ2) = (1, −1). The second case and the third case are "equivalent".
Solution 11. From the normalized eigenvectors we obtain the spectral representation of I2 as I2 = v1 v1* + v2 v2* with
v1 v1* = ( cos²(θ)  e^{i(φ1-φ2)} sin(θ)cos(θ) ; e^{i(φ2-φ1)} sin(θ)cos(θ)  sin²(θ) ),
v2 v2* = ( sin²(θ)  -e^{i(φ1-φ2)} sin(θ)cos(θ) ; -e^{i(φ2-φ1)} sin(θ)cos(θ)  cos²(θ) ).
With (√λ1, √λ2) = (1, −1) we obtain the square root of I2 for this case as
v1 v1* − v2 v2* = ( cos(2θ)  e^{i(φ1-φ2)} sin(2θ) ; e^{i(φ2-φ1)} sin(2θ)  -cos(2θ) )
utilizing cos²(θ) − sin²(θ) = cos(2θ), 2cos(θ)sin(θ) = sin(2θ). This case also includes the Pauli spin matrices σ1, σ2 and σ3 as square roots of I2. The Hadamard matrix is also included. Note that the square root of a unitary matrix constructed in this way is again a unitary matrix.
Problem 12. Let λ1, λ2, μ1, μ2 ∈ C. Let v1, v2 (column vectors) be an orthonormal basis in C². We define the 2 × 2 matrices (spectral theorem)
A = λ1 v1 v1* + λ2 v2 v2*,   B = μ1 v1 v1* + μ2 v2 v2*.
Find the commutator [A, B]. Find the conditions on λ1, λ2, μ1, μ2 such that [A, B] = 0_2.

Solution 12. Since v_j* v_j = 1 for j = 1, 2 and v_j* v_k = δ_jk we obtain [A, B] = 0_2. Thus there is no condition on μ1, μ2, λ1, λ2.
Problem 13. Let v_0, v_1, ..., v_{2^n-1} be an orthonormal basis in C^{2^n}. We define
U := (1/√(2^n)) Σ_{j=0}^{2^n-1} Σ_{k=0}^{2^n-1} e^{-i2πjk/2^n} v_j v_k*.   (1)
(i) Show that U is unitary. In other words show that UU* = I_{2^n}, using the completeness relation
I_{2^n} = Σ_{j=0}^{2^n-1} v_j v_j*.
Solution 13. From the definition (1) we find
U* = (1/√(2^n)) Σ_{m=0}^{2^n-1} Σ_{l=0}^{2^n-1} e^{i2πml/2^n} v_l v_m*
where * denotes the adjoint. Therefore
UU* = (1/2^n) Σ_{j=0}^{2^n-1} Σ_{k=0}^{2^n-1} Σ_{m=0}^{2^n-1} Σ_{l=0}^{2^n-1} e^{i2π(ml-jk)/2^n} v_j (v_k* v_l) v_m*
    = (1/2^n) Σ_{j=0}^{2^n-1} Σ_{m=0}^{2^n-1} ( Σ_{k=0}^{2^n-1} e^{i2π(m-j)k/2^n} ) v_j v_m*.
We have for j = m, e^{i2π(j-m)k/2^n} = 1 and thus the inner sum equals 2^n. For j ≠ m (j, m = 0, 1, ..., 2^n − 1)
Σ_{k=0}^{2^n-1} (e^{i2π(j-m)/2^n})^k = (1 − e^{i2π(j-m)}) / (1 − e^{i2π(j-m)/2^n}) = 0.
Thus
UU* = Σ_{j=0}^{2^n-1} v_j v_j* = I_{2^n}.
Supplementary Problems

Problem 1. The star product of the Hadamard matrix U_H with itself provides the Bell matrix
B = (1/√2) ( 1 0 0 1 ; 0 1 1 0 ; 0 1 -1 0 ; 1 0 0 -1 )
which is a unitary matrix. Note that the eigenvalues of the Bell matrix are +1 (twice) and −1 (twice).
(i) Find the square roots of the Bell matrix applying the spectral theorem.
(ii) Apply the spectral theorem to find the skew-hermitian matrix K such that B = e^K.
Problem 2. Let A be a normal matrix with eigenvalues λ1, ..., λn and pairwise orthonormal eigenvectors a_j (column vectors). Let B be a normal matrix with eigenvalues μ1, ..., μn and pairwise orthonormal eigenvectors b_j (column vectors). Then A and B can be written as
A = Σ_{j=1}^n λ_j a_j a_j*,   B = Σ_{k=1}^n μ_k b_k b_k*.
Let z ∈ C. Use the spectral decomposition to calculate e^{zA} B e^{-zA}.
Problem 3. Consider normalized vectors v1, v2, v3 in R³. Find the eigenvalues and eigenvectors of the 3 × 3 matrix M = 1·v1 v1^T + 0·v2 v2^T − 1·v3 v3^T without calculating M.
Problem 4. Let A be an n × n normal matrix with eigenvalues λ1, ..., λn and pairwise orthonormal eigenvectors v_j (j = 1, ..., n). Then
A = Σ_{j=1}^n λ_j v_j v_j*.
Show that cos(A) = Σ_{j=1}^n cos(λ_j) v_j v_j* and sin(A) = Σ_{j=1}^n sin(λ_j) v_j v_j*.
Problem 5. The spectral theorem for n × n normal matrices over C is as follows: A matrix A is normal if and only if there exists an n × n unitary matrix U and a diagonal matrix D such that D = U*AU. Use this theorem to prove that the matrix
A = ( 0 1 1 ; 0 0 1 ; 0 0 0 )
is nonnormal.
Problem 6. Find all 2 × 2 matrices over C which admit a given pair of normalized eigenvectors with the corresponding eigenvalues λ1 and λ2.
Problem 7. Consider the nonnormal 2 × 2 matrix
A = ( 1 0 ; 1 -1 )
with eigenvalues λ1 = +1 and λ2 = −1 and normalized eigenvectors v1, v2. The eigenvectors are linearly independent, but not orthonormal. Calculate
B = λ1 v1 v1* + λ2 v2 v2*.
Find ||A − B||. Discuss. Is the matrix B normal?
Chapter 7
Commutators and
Anticommutators
Let A and B be n × n matrices. Then we define the commutator of A and B as
[A, B] := AB − BA.
For all n × n matrices A, B, C we have the Jacobi identity
[A, [B, C]] + [C, [A, B]] + [B, [C, A]] = 0_n
where 0_n is the n × n zero matrix. Since tr(AB) = tr(BA) we have
tr([A, B]) = 0.
If
[A, B] = 0_n
we say that the matrices A and B commute. For example, if A and B are diagonal matrices then the commutator is the zero matrix 0_n. We define the anticommutator of A and B as
[A, B]_+ := AB + BA.
We have
tr([A, B]_+) = 2tr(AB),   AB = (1/2)[A, B] + (1/2)[A, B]_+.
The anticommutator plays a role for Fermi operators. Let A, B, C be n × n matrices. Then
[A, [B, C]_+] = [[A, B], C]_+ + [B, [A, C]]_+.
Problem 1. (i) Let A, B be n × n matrices. Assume that [A, B] = 0_n and [A, B]_+ = 0_n. What can be said about AB and BA?
(ii) Let A, B be n × n matrices. Assume that A is invertible. Assume that [A, B] = 0_n. Can we conclude that [A^{-1}, B] = 0_n?
(iii) Let A, B be n × n matrices. Suppose that
[A, B] = 0_n,   [A, B]_+ = 0_n
and that A is invertible. Show that B must be the zero matrix.

Solution 1. (i) Since AB = BA and AB = −BA we find AB = 0_n, BA = 0_n.
(ii) From [A, B] = 0_n we have AB = BA. Thus B = A^{-1}BA and therefore BA^{-1} = A^{-1}B. It follows that A^{-1}B − BA^{-1} = [A^{-1}, B] = 0_n.
(iii) From the conditions we obtain AB = 0_n, BA = 0_n. Thus A^{-1}AB = 0_n and therefore B = 0_n. Analogously, BAA^{-1} = 0_n also implies B = 0_n.
Problem 2. (i) Let A and B be symmetric n × n matrices over R. Show that AB is symmetric if and only if A and B commute.
(ii) Let A, B be n × n hermitian matrices. Is i[A, B] hermitian?
(iii) Let A, B be symmetric n × n matrices over R. Show that [A, B] is skew-symmetric over R.

Solution 2. (i) Suppose that A and B commute, i.e. AB = BA. Then
(AB)^T = B^T A^T = BA = AB
and thus AB is symmetric. Suppose that AB is symmetric, i.e. (AB)^T = AB. Then (AB)^T = B^T A^T = BA. Hence AB = BA and the matrices A and B commute.
(ii) Since A* = A and B* = B we have
(i[A, B])* = (i(AB − BA))* = −i(BA − AB) = i(AB − BA) = i[A, B].
Thus i[A, B] is hermitian.
(iii) Using that A^T = A and B^T = B we have
([A, B])^T = (AB − BA)^T = (AB)^T − (BA)^T = B^T A^T − A^T B^T = −[A, B].
Problem 3. (i) Let A and B be n × n matrices over C. Show that A and B commute if and only if A − cI_n and B − cI_n commute for every c ∈ C.
(ii) Let A, B be n × n matrices over C with [A, B] = 0_n. Calculate [A + cI_n, B + cI_n].
(iii) Let v be an eigenvector of the n × n matrix A with eigenvalue λ. Show that v is also an eigenvector of A + cI_n, where c ∈ C.

Solution 3. (i) Suppose that A and B commute, i.e. AB = BA. Then
(A − cI_n)(B − cI_n) = AB − c(A + B) + c²I_n = BA − c(A + B) + c²I_n = (B − cI_n)(A − cI_n).
Thus A − cI_n and B − cI_n commute. Now suppose that A − cI_n and B − cI_n commute, i.e.
(A − cI_n)(B − cI_n) = (B − cI_n)(A − cI_n).
Thus AB − c(A + B) + c²I_n = BA − c(A + B) + c²I_n and AB = BA.
(ii) We obtain [A + cI_n, B + cI_n] = [A, B] + c[A, I_n] + c[I_n, B] + c²[I_n, I_n] = 0_n.
(iii) We have (A + cI_n)v = Av + cI_n v = λv + cv = (λ + c)v. Thus v is an eigenvector of A + cI_n with eigenvalue λ + c.
Problem 4. Can one find 2 × 2 matrices A and B such that [A², B²] = 0_2 while [A, B] ≠ 0_2?

Solution 4. Yes. An example is
A = ( 0 1 ; 0 0 ),   B = ( 0 0 ; 1 0 ),   [A, B] = ( 1 0 ; 0 -1 )
with A² = 0_2 and B² = 0_2.
Problem 5. Let A, B, H be n × n matrices over C such that [A, H] = 0_n, [B, H] = 0_n. Find [[A, B], H].

Solution 5. Using the Jacobi identity for arbitrary n × n matrices X, Y, Z
[[X, Y], Z] + [[Z, X], Y] + [[Y, Z], X] = 0_n
we obtain [[A, B], H] = 0_n.
Problem 6. Let A and B be n × n hermitian matrices. Suppose that
A² = I_n,   B² = I_n   (1)
and
[A, B]_+ = AB + BA = 0_n.   (2)
Let x ∈ C^n be normalized, i.e. ||x|| = 1. Here x is considered as a column vector.
(i) Show that
(x*Ax)² + (x*Bx)² ≤ 1.   (3)
(ii) Give an example for the matrices A and B.

Solution 6. (i) Let a, b ∈ R and let r² := a² + b². The matrix C = aA + bB is again hermitian. Then
C² = a²A² + abAB + baBA + b²B².
Using the properties (1) and (2) we find
C² = a²I_n + b²I_n = r²I_n.
Therefore x*C²x = r² and −r ≤ a(x*Ax) + b(x*Bx) ≤ r. Let
a = x*Ax,   b = x*Bx.
Then a² + b² ≤ r, i.e. r² ≤ r. This implies r ≤ 1 and r² ≤ 1, from which (3) follows.
(ii) An example is A = σ1, B = σ2 since σ1² = I_2, σ2² = I_2 and σ1σ2 + σ2σ1 = 0_2, where σ1, σ2, σ3 are the Pauli spin matrices.
Problem 7. Let A, B be skew-hermitian matrices over C, i.e. A* = −A, B* = −B. Is the commutator of A and B again skew-hermitian?

Solution 7. The answer is yes. We have
([A, B])* = (AB − BA)* = B*A* − A*B* = BA − AB = −[A, B].
Problem 8. Let A, B be 2 × 2 skew-symmetric matrices over R. Find the commutator [A, B].

Solution 8. Since
A = ( 0 a12 ; -a12 0 ),   B = ( 0 b12 ; -b12 0 )
we find [A, B] = 0_2.
Problem 9. (i) Let A, B be n × n matrices over C. Let S be an invertible n × n matrix over C with Ã = S^{-1}AS, B̃ = S^{-1}BS. Show that [Ã, B̃] = S^{-1}[A, B]S.
(ii) Let A, B be n × n matrices over C. Assume that [A, B] = 0_n. Let U be a unitary matrix. Calculate [U*AU, U*BU].
(iii) Let T be an invertible matrix. Show that T^{-1}AT = A + T^{-1}[A, T].

Solution 9. (i) Since SS^{-1} = I_n we have
[Ã, B̃] = [S^{-1}AS, S^{-1}BS] = S^{-1}ASS^{-1}BS − S^{-1}BSS^{-1}AS = S^{-1}ABS − S^{-1}BAS = S^{-1}(AB − BA)S = S^{-1}[A, B]S.
(ii) Since U*U = UU* = I_n we have
[U*AU, U*BU] = U*AUU*BU − U*BUU*AU = U*ABU − U*BAU = U*([A, B])U = U*0_nU = 0_n.
(iii) We have
A + T^{-1}[A, T] = A + T^{-1}(AT − TA) = A + T^{-1}AT − A = T^{-1}AT.
Problem 10. Can we find n × n matrices A, B over C such that
[A, B] = I_n   (1)
where I_n denotes the identity matrix?

Solution 10. Since tr(XY) = tr(YX) for any n × n matrices X, Y we obtain tr([A, B]) = 0. However for the right-hand side of (1) we find tr(I_n) = n. Thus we have a contradiction and no such matrices exist. We can find unbounded infinite-dimensional matrices X, Y with [X, Y] = I, where I is the infinite-dimensional unit matrix.
Problem 11. Show that any two 2 × 2 matrices which commute with the matrix
( 0 1 ; -1 0 )
commute with each other.

Solution 11. Any 2 × 2 matrix commuting with this skew-symmetric matrix takes the form
( a b ; -b a ).
Then we have for the commutator
[ ( a1 b1 ; -b1 a1 ), ( a2 b2 ; -b2 a2 ) ] = ( 0 0 ; 0 0 ).

Problem 12. Can we find 2 × 2 matrices A and B of the form
A = ( 0 a12 ; a21 0 ),   B = ( 0 b12 ; b21 0 )
and singular (i.e. det(A) = 0 and det(B) = 0) such that [A, B]_+ = I_2?

Solution 12. We have
[A, B]_+ = ( a12 b21 + a21 b12 ) I_2.
Thus we obtain the equation a12 b21 + a21 b12 = 1. Since A and B are singular we obtain the two solutions a12 = b21 = 1, a21 = b12 = 0 and a12 = b21 = 0, a21 = b12 = 1.
Problem 13. Let A be an n × n hermitian matrix over C. Assume that the eigenvalues of A, λ1, ..., λn, are nondegenerate and that the normalized eigenvectors v_j (j = 1, ..., n) of A form an orthonormal basis in C^n. Let B be an n × n matrix over C. Assume that [A, B] = 0_n. Show that
v_k* B v_j = 0   for   k ≠ j.   (1)

Solution 13. From AB = BA it follows that v_k*(AB v_j) = v_k*(BA v_j). The eigenvalues of a hermitian matrix are real. Since A v_j = λ_j v_j and v_k* A = λ_k v_k* we obtain
λ_k (v_k* B v_j) = λ_j (v_k* B v_j).
Consequently (λ_k − λ_j)(v_k* B v_j) = 0. Since λ_k ≠ λ_j equation (1) follows.
Problem 14. Let A, B be hermitian n × n matrices. Assume they have the same set of eigenvectors
A v_j = λ_j v_j,   B v_j = μ_j v_j,   j = 1, ..., n
and that the normalized eigenvectors form an orthonormal basis in C^n. Show that [A, B] = 0_n.

Solution 14. Any vector v in C^n can be written as
v = Σ_{j=1}^n (v_j* v) v_j.
It follows that
[A, B]v = (AB − BA) Σ_{j=1}^n (v_j* v) v_j = Σ_{j=1}^n (v_j* v) AB v_j − Σ_{j=1}^n (v_j* v) BA v_j
        = Σ_{j=1}^n (v_j* v) λ_j μ_j v_j − Σ_{j=1}^n (v_j* v) μ_j λ_j v_j = 0.
Since this is true for an arbitrary vector v in C^n it follows that [A, B] = 0_n.
Problem 15. Let A, B be n × n matrices. Then we have the expansion
e^A B e^{-A} = B + [A, B] + (1/2!)[A, [A, B]] + (1/3!)[A, [A, [A, B]]] + ···.
(i) Assume that [A, B] = A. Calculate e^A B e^{-A}.
(ii) Assume that [A, B] = B. Calculate e^A B e^{-A}.

Solution 15. (i) From [A, B] = A we obtain [A, [A, B]] = [A, A] = 0_n. Thus e^A B e^{-A} = B + A.
(ii) From [A, B] = B we obtain [A, [A, B]] = B, [A, [A, [A, B]]] = B, etc. Thus
e^A B e^{-A} = B + B + (1/2!)B + (1/3!)B + ··· = B(1 + 1 + 1/2! + 1/3! + ···) = eB.
Problem 16. Let A be an arbitrary n × n matrix over C with tr(A) = 0. Show that A can be written as a commutator, i.e. there are n × n matrices X and Y such that A = [X, Y].

Solution 16. The statement is valid if A is 1 × 1 or a larger n × n zero matrix. Assume that A is a nonzero n × n matrix of dimension larger than 1. We use induction. We assume the desired inference valid for all matrices of dimension smaller than A's with trace zero. Since tr(A) = 0 the matrix A cannot be a nonzero scalar multiple of I_n. Thus there is some invertible matrix R such that
R^{-1}AR = ( 0 ∗ ; ∗ B )
with tr(B) = 0. The induction hypothesis implies that B is a commutator. Thus R^{-1}AR = XY − YX is also a commutator for some n × n matrices X and Y. It follows that
A = (RXR^{-1})(RYR^{-1}) − (RYR^{-1})(RXR^{-1})
must also be a commutator.
Problem 17. Let A, B be 2 × 2 symmetric matrices over R. Assume that AA^T = I_2 and BB^T = I_2. Is [A, B] = 0_2? Prove or disprove.

Solution 17. The conclusion does not hold in general. For example, A = ( 1 0 ; 0 -1 ) and B = ( 0 1 ; 1 0 ) are symmetric with AA^T = I_2 and BB^T = I_2, but [A, B] = ( 0 2 ; -2 0 ) ≠ 0_2.
Problem 18. Let A, B, X, Y be n × n matrices over C. Assume that AX − XB = Y. Let z ∈ C. Show that (A − zI_n)X − X(B − zI_n) = Y.

Solution 18. We have
(A − zI_n)X − X(B − zI_n) = AX − zX − XB + zX = AX − XB = Y.
Problem 19. (i) Can we find nonzero symmetric 2 × 2 matrices H and A over R such that
[H, A] = μA
where μ ∈ R and μ ≠ 0?
(ii) Let K be a nonzero n × n hermitian matrix. Let B be a nonzero n × n matrix. Assume that
[K, B] = αB
where α ∈ R and α ≠ 0. Show that B cannot be hermitian.

Solution 19. (i) Since tr([H, A]) = 0 and μ ≠ 0 we find that tr(A) = 0. Thus A can be written as
A = ( a b ; b -a ).
Now the commutator of two symmetric matrices is skew-symmetric, while μA is symmetric and nonzero. It follows that such a pair does not exist since A should be symmetric over R and nonzero.
(ii) Assume that B is hermitian. Then from [K, B] = αB we obtain, by taking the transpose and complex conjugate of this equation,
[B, K] = αB.
Adding the two equations [K, B] = αB and [B, K] = αB we obtain 2αB = 0_n. Since B is a nonzero matrix we have α = 0. Thus we have a contradiction and B cannot be hermitian.
Problem 20. A truncated Bose annihilation operator is defined as the n × n (n ≥ 2) matrix
B_n = ( 0 √1 0 ··· 0 ; 0 0 √2 ··· 0 ; ··· ; 0 0 0 ··· √(n-1) ; 0 0 0 ··· 0 ),
i.e. the only nonzero entries are (B_n)_{j,j+1} = √j for j = 1, ..., n − 1.
(i) Calculate B_n* B_n.
(ii) Calculate the commutator [B_n, B_n*].

Solution 20. (i) We find the diagonal matrix B_n* B_n = diag(0, 1, ..., n − 1).
(ii) We obtain the diagonal matrix
[B_n, B_n*] = B_n B_n* − B_n* B_n = diag(1, 1, ..., 1, -(n − 1)).
Problem 21. Can one find nonzero 2 × 2 matrices A, B such that [A, B] ≠ 0_2, but
[A, [A, B]] = 0_2,   [B, [A, B]] = 0_2?

Solution 21. No. Set M := [A, B], so tr(M) = 0 and M commutes with both A and B. If M were diagonalizable with eigenvalues λ, −λ (λ ≠ 0), then A and B would both be diagonal in the eigenbasis of M, giving [A, B] = 0_2, a contradiction. Thus M must be nilpotent. But a nonzero 2 × 2 nilpotent matrix M has commutant { aI_2 + bM : a, b ∈ C }, so A and B would again commute, a contradiction. Hence no such 2 × 2 matrices exist. (For 3 × 3 matrices an example exists, e.g. A with 1 in entry (1,2), B with 1 in entry (2,3), all other entries zero, where [A, B] has a single 1 in entry (1,3) and commutes with both A and B.)
Problem 22. Let
C = ( 0 1 ; 1 0 ).
Find all 2 × 2 matrices A such that [A, C] = 0_2.

Solution 22. The commutator is the zero matrix if and only if a11 = a22 and a12 = a21.
Problem 23. Let A and B be positive semidefinite matrices. Can we conclude that [A, B]_+ is positive semidefinite?

Solution 23. No. Consider
A = ( 1 0 ; 0 0 ),   B = ( 1 1 ; 1 1 ).
Both matrices A and B are positive semidefinite. However [A, B]_+ admits the eigenvalues λ1 = 1 + √2 and λ2 = 1 − √2. Thus AB + BA is not positive semidefinite.
Problem 24. Let A, B be n × n matrices. Given the expression
A²B + AB² + B²A + BA² − 2ABA − 2BAB.
Write the expression in a more compact form using commutators.

Solution 24. We have
A²B + AB² + B²A + BA² − 2ABA − 2BAB = [A, [A, B]] + [B, [B, A]].
Problem 25. (i) Let A, B be n × n matrices over C. Assume that A² = I_n and B² = I_n. Find the commutators [AB + BA, A], [AB + BA, B]. Give an example of such matrices for n = 2 and A ≠ B.
(ii) Let A, B be n × n matrices over C such that A² = I_n, B² = I_n. Now assume that [A, B]_+ = 0_n. Show that there is no solution for A and B if n is odd.

Solution 25. (i) We find
[AB + BA, A] = ABA + BA² − A²B − ABA = BA² − A²B = B − B = 0_n.
Analogously we find [AB + BA, B] = 0_n. As an example consider the Pauli spin matrices A = σ1, B = σ3.
(ii) Since AB = −BA we have det(A)det(B) = (−1)^n det(A)det(B); for n odd this forces det(A)det(B) = 0, contradicting det(A) ≠ 0, det(B) ≠ 0.
Problem 26. Let A1, A2, A3 be n × n matrices over C. The ternary commutator [A1, A2, A3] (also called the ternutator) is defined as
[A1, A2, A3] := A1A2A3 + A2A3A1 + A3A1A2 − A1A3A2 − A2A1A3 − A3A2A1.
Let n = 2 and consider the Pauli spin matrices σ1, σ2, σ3. Calculate the ternutator [σ1, σ2, σ3].

Solution 26. We obtain
[σ1, σ2, σ3] = 6i I_2.
Note that tr([σ1, σ2, σ3]) = 12i.
Problem 27. Let ⊕ be the direct sum.
(i) Let A, B, H be n × n matrices such that [H, A] = 0_n, [H, B] = 0_n. Show that
[H ⊕ I_n + I_n ⊕ H, A ⊕ B] = 0_{2n}.
(ii) Let A1, A2 be m × m matrices over C. Let B1, B2 be n × n matrices over C. Show that
[A1 ⊕ B1, A2 ⊕ B2] = ([A1, A2]) ⊕ ([B1, B2]).

Solution 27. (i) We have
[H ⊕ I_n + I_n ⊕ H, A ⊕ B] = [H ⊕ I_n, A ⊕ B] + [I_n ⊕ H, A ⊕ B]
  = (HA) ⊕ B − (AH) ⊕ B + A ⊕ (HB) − A ⊕ (BH)
  = [H, A] ⊕ [H, B]
  = 0_{2n}.
(ii) We have
[A1 ⊕ B1, A2 ⊕ B2] = (A1A2) ⊕ (B1B2) − (A2A1) ⊕ (B2B1)
  = (A1A2 − A2A1) ⊕ (B1B2 − B2B1)
  = ([A1, A2]) ⊕ ([B1, B2]).
Problem 28. (i) Can one find non-invertible 2 × 2 matrices A and B such that the commutator [A, B] is invertible?
(ii) Consider the 3 × 3 matrices
A = ( a1 0 0 ; 0 a2 0 ; 0 0 a3 ),   B = ( 0 0 b1 ; 0 b2 0 ; b3 0 0 ).
Can we find a_j, b_j (j = 1, 2, 3) such that the commutator [A, B] is invertible?

Solution 28. (i) Yes. An example is
A = ( 0 1 ; 0 0 ),   B = ( 0 0 ; 1 0 ),   [A, B] = ( 1 0 ; 0 -1 ).
(ii) We obtain
[A, B] = ( 0  0  b1(a1-a3) ; 0  0  0 ; b3(a3-a1)  0  0 ).
Thus [A, B] is not invertible (its second row is zero for any choice of a_j, b_j).
Problem 29. Let A, B be n × n matrices over C. Assume that B is invertible. Show that
[A, B^{-1}] = −B^{-1}[A, B]B^{-1}.

Solution 29. We have
−B^{-1}[A, B]B^{-1} = −B^{-1}(AB − BA)B^{-1} = −B^{-1}A + AB^{-1} = [A, B^{-1}].

Problem 30. Consider the 3 × 3 matrices over C
A = ( a11 0 0 ; 0 a22 0 ; 0 0 a33 ),   B = ( 0 b12 b13 ; b21 0 b23 ; b31 b32 0 ).
(i) Calculate the commutator [A, B] and det([A, B]).
(ii) Set a11 = e^{iφ1}, a22 = e^{iφ2}, a33 = e^{iφ3}. Find the condition on φ1, φ2, φ3, b12, b13, b21, b23, b31, b32 such that [A, B] is unitary.
Solution 30. (i) We obtain
[A, B] = ( 0  b12(a11-a22)  b13(a11-a33) ; b21(a22-a11)  0  b23(a22-a33) ; b31(a33-a11)  b32(a33-a22)  0 ).
Thus
det([A, B]) = (b13 b21 b32 − b12 b23 b31)(a11 − a22)(a22 − a33)(a11 − a33).
(ii) Setting φ1 = 0, φ2 = 2π/3, φ3 = 4π/3 with
cos(2π/3) = −1/2,  sin(2π/3) = √3/2,  cos(4π/3) = −1/2,  sin(4π/3) = −√3/2
we have a11 + a22 + a33 = 0. Thus
(a11 − a22)(a22 − a33)(a11 − a33) = (1/2)(3 − i√3) · i√3 · (1/2)(3 + i√3) = 3√3 i.

Problem 31. Consider the set of six 3 × 3 matrices
A1 = ( 1 0 0 ; 0 0 0 ; 0 0 0 ),   A2 = ( 0 0 0 ; 0 1 0 ; 0 0 0 ),   A3 = ( 0 0 0 ; 0 0 0 ; 0 0 1 ),
A12 = ( 0 1 0 ; 1 0 0 ; 0 0 0 ),   A23 = ( 0 0 0 ; 0 0 1 ; 0 1 0 ),   A13 = ( 0 0 1 ; 0 0 0 ; 1 0 0 ).
Calculate the anticommutators and thus show that we have a basis of a Jordan algebra.
Solution 31. We obtain
[A1, A1]_+ = 2A1,   [A2, A2]_+ = 2A2,   [A3, A3]_+ = 2A3,
[A1, A2]_+ = 0_3,   [A1, A3]_+ = 0_3,   [A2, A3]_+ = 0_3,
[A1, A12]_+ = A12,   [A2, A12]_+ = A12,   [A3, A12]_+ = 0_3,
[A1, A23]_+ = 0_3,   [A2, A23]_+ = A23,   [A3, A23]_+ = A23,
[A1, A13]_+ = A13,   [A2, A13]_+ = 0_3,   [A3, A13]_+ = A13,
[A12, A23]_+ = A13,   [A12, A13]_+ = A23,   [A23, A13]_+ = A12.
Problem 32. Let A, B be hermitian matrices, i.e. A* = A and B* = B. Then in general A + iB is nonnormal. What are the conditions on A and B such that A + iB is normal?

Solution 32. From (A + iB)*(A + iB) = (A + iB)(A + iB)* we find that the commutator of A and B must vanish, i.e. [A, B] = 0_n.
Problem 33. Show that one can find a 3 × 3 matrix A over R such that
A²A^T + A^T A² = 2A   and   AA^T A = 2A.

Solution 33. We obtain a nonnormal matrix with these properties; for example
A = ( 0 √2 0 ; 0 0 √2 ; 0 0 0 )
satisfies both identities, as a direct calculation confirms.
Problem 34. Consider the 3 × 3 matrices
D = ( d1 0 0 ; 0 d2 0 ; 0 0 d3 ),   M = ( 0 a 0 ; a 0 b ; 0 b 0 ).
Find the conditions on d1, d2, d3, a, b such that
[D, M] = ( 0 a 0 ; -a 0 b ; 0 -b 0 ).

Solution 34. We have
[D, M] = ( 0  a(d1-d2)  0 ; a(d2-d1)  0  b(d2-d3) ; 0  b(d3-d2)  0 ).
Hence d1 − d2 = 1 and d2 − d3 = 1.
Problem 35. (i) Consider the Pauli spin matrix σ1 with eigenvalues +1 and −1 and corresponding normalized eigenvectors
(1/√2)( 1 1 )^T,   (1/√2)( 1 -1 )^T.
Find all 2 × 2 matrices A such that [σ1, A] = 0_2. Find the eigenvalues and eigenvectors of A. Discuss.
(ii) Consider the counter-diagonal unit matrix
J3 = ( 0 0 1 ; 0 1 0 ; 1 0 0 )
with eigenvalues +1 (twice) and −1 and the corresponding normalized eigenvectors
(1/√2)( 1 0 1 )^T,   ( 0 1 0 )^T,   (1/√2)( 1 0 -1 )^T.
Find all 3 × 3 matrices A such that [J3, A] = 0_3. Find the eigenvalues and normalized eigenvectors of A and compare with the eigenvectors of J3. Discuss.
(iii) Consider the 4 × 4 matrix (counter-diagonal unit matrix)
J4 = ( 0 0 0 1 ; 0 0 1 0 ; 0 1 0 0 ; 1 0 0 0 ) = σ1 ⊗ σ1
with the eigenvalues +1 (twice) and −1 (twice) and the corresponding non-entangled eigenvectors x_j ⊗ x_k (j, k = 1, 2), where x1 = (1/√2)( 1 1 )^T and x2 = (1/√2)( 1 -1 )^T. Find entangled eigenvectors. Find all 4 × 4 matrices A such that [A, J4] = AJ4 − J4A = 0_4.
(iv) Let J_n be the n × n counter unit matrix. Let A be an n × n matrix with [J_n, A] = 0_n. Show that [J_n ⊗ J_n, A ⊗ A] = 0_{n²}.
Solution 35. (i) From [σ1, A] = 0_2 we obtain a11 = a22 and a12 = a21, i.e.
A = ( a11 a12 ; a12 a11 ) = a11 I_2 + a12 σ1.
The eigenvalues are a11 + a12 and a11 − a12 with the corresponding normalized eigenvectors given by those of σ1.
(ii) We find the matrix
A = ( a11 a12 a13 ; a21 a22 a21 ; a13 a12 a11 )
with a11, a12, a13, a21, a22 arbitrary. The three eigenvalues are a11 − a13 and
(1/2)( a11 + a13 + a22 ± √(8 a12 a21 + (a11 + a13 − a22)²) ).
The matrices A and J3 have the eigenvector
(1/√2)( 1 0 -1 )^T
in common.
(iii) By linear combinations we obtain the
I
Tl
From
[A,
( 0l \
0
\i)
J4] = O4 we
’ Tl
as eigenvectors
/ °\
1
01 ^
1
0
-1
Tl
’ Tl
0 J
\-i)
(°\
1
1
B e l l bas i s
1
1 ’
W
obtain the matrix
/ «1 1
«1 2
«1 3
«1 4 \
«2 1
«2 2
«2 3
«2 4
«2 4
«2 3
«2 2
«2 1
\ «1 4
«1 3
«1 2
«1 1 /
with a n , a n , a n , a n , 021, «225 « 23> a24 arbitrary. What common eigenvectors
do J4 and A have? Find the corresponding eigenvalue,
(iv) We have
[Jn (S) Jn, A(S) A]
=
(Jn <
g
>Jn)(A (S) A) — (A(S) A)(Jn 0 Jn)
— (JnA) (S) (JnA)
(AJn) S (AJn)
= (AJn) S (AJn) - (AJn) S (AJn)
= O2™.
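Part (iv) can be confirmed numerically, e.g. with NumPy (an illustrative check; A is chosen as a simple matrix commuting with J3):

```python
import numpy as np

n = 3
J = np.fliplr(np.eye(n))        # counter-diagonal unit matrix J3

# a matrix commuting with J3: take A = J3 + 2*I3, so [J, A] = 0
A = J + 2.0 * np.eye(n)
assert np.allclose(J @ A, A @ J)

JJ = np.kron(J, J)
AA = np.kron(A, A)
comm = JJ @ AA - AA @ JJ
print(np.allclose(comm, np.zeros((n * n, n * n))))   # True
```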
Problem 36. (i) Let M, A, B be n × n matrices over ℂ. Assume that

[M, A] = 0n,   [M, B] = 0n.

Calculate the commutator [M ⊗ In + In ⊗ M, A ⊗ B].
(ii) Let A, B be n × n matrices over ℂ. Calculate the commutator [A ⊗ In + In ⊗ A, B ⊗ B]. Assume that [A, B] = 0n.
(iii) Find the commutator

[A ⊗ B + B ⊗ A, A ⊗ A − B ⊗ B].

Simplify the result for [A, B] = 0n. Simplify the result for A² = B² = 0n. Simplify the result for A² = B² = In.
Solution 36. (i) We have

[M ⊗ In + In ⊗ M, A ⊗ B] = [M, A] ⊗ B + A ⊗ [M, B] = 0_{n²}.

(ii) We have

[A ⊗ In + In ⊗ A, B ⊗ B] = [A, B] ⊗ B + B ⊗ [A, B].

It follows that [A ⊗ In + In ⊗ A, B ⊗ B] = 0_{n²}.
(iii) We have

[A ⊗ B + B ⊗ A, A ⊗ A − B ⊗ B] = (A² + B²) ⊗ [B, A] + [B, A] ⊗ (A² + B²).

If [A, B] = 0n the commutator is the n² × n² zero matrix. If A² = B² = 0n the commutator is the n² × n² zero matrix. For A² = B² = In we find

2In ⊗ [B, A] + 2[B, A] ⊗ In.
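The identity in (ii) is easy to confirm numerically; the following NumPy check (illustrative, using random matrices) compares both sides:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
I = np.eye(n)

# left-hand side: [A (x) I + I (x) A, B (x) B]
X = np.kron(A, I) + np.kron(I, A)
BB = np.kron(B, B)
lhs = X @ BB - BB @ X

# right-hand side: [A, B] (x) B + B (x) [A, B]
C = A @ B - B @ A
rhs = np.kron(C, B) + np.kron(B, C)
print(np.allclose(lhs, rhs))   # True
```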
Problem 37. The four Dirac matrices which play a role in the Dirac equation are given by

γ1 = [[0, 0, 0, −i], [0, 0, −i, 0], [0, i, 0, 0], [i, 0, 0, 0]],
γ2 = [[0, 0, 0, −1], [0, 0, 1, 0], [0, 1, 0, 0], [−1, 0, 0, 0]],
γ3 = [[0, 0, −i, 0], [0, 0, 0, i], [i, 0, 0, 0], [0, −i, 0, 0]],
γ4 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, −1, 0], [0, 0, 0, −1]].

(i) Show that the matrices can be written as Kronecker products of the Pauli spin matrices σ1, σ2, σ3 and the 2 × 2 identity matrix I2. Find the eigenvalues of γ1, γ2, γ3, γ4 using this result.
(ii) Calculate the anticommutators [γ4, γ1]+, [γ4, γ2]+, [γ4, γ3]+.

Solution 37. (i) We find

γ1 = σ2 ⊗ σ1,   γ2 = σ2 ⊗ σ2,   γ3 = σ2 ⊗ σ3,   γ4 = σ3 ⊗ I2.

(ii) We obtain for the anticommutators

[γ4, γ1]+ = 0₄,   [γ4, γ2]+ = 0₄,   [γ4, γ3]+ = 0₄.

The matrix γ4 also satisfies γ4² = I4. The matrix γ4 is called a chirality matrix.
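The Kronecker-product representation and the vanishing anticommutators can be checked with a few lines of NumPy (illustrative sketch):

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac matrices as Kronecker products, as in Solution 37
g1 = np.kron(s2, s1)
g2 = np.kron(s2, s2)
g3 = np.kron(s2, s3)
g4 = np.kron(s3, I2)

anti = lambda X, Y: X @ Y + Y @ X
for g in (g1, g2, g3):
    print(np.allclose(anti(g4, g), np.zeros((4, 4))))   # True each time
print(np.allclose(g4 @ g4, np.eye(4)))                  # True
```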
Problem 38. Let A, B be n × n matrices over ℂ. What is the condition on A, B such that

[A ⊗ B, B ⊗ A] = 0_{n²}?

Solution 38. We have

[A ⊗ B, B ⊗ A] = (AB) ⊗ (BA) − (BA) ⊗ (AB).

Thus if AB = BA (i.e. A and B commute) we obtain the zero matrix. This is sufficient, but is it also necessary?
Problem 39. Let A, B be n × n matrices over ℂ with commutator [A, B]. Let R be an n × n matrix over ℂ with R² = In. Find the commutator [A ⊗ R, B ⊗ R].

Solution 39. We have

[A ⊗ R, B ⊗ R] = (AB) ⊗ R² − (BA) ⊗ R² = (AB − BA) ⊗ In = [A, B] ⊗ In.
Problem 40. Let A1, A2, A3 be m × m matrices. Let B1, B2, B3 be n × n matrices. Consider the (n·m) × (n·m) matrices

X1 = A1 ⊗ In + Im ⊗ B1,   X2 = A2 ⊗ In + Im ⊗ B2,   X3 = A3 ⊗ In + Im ⊗ B3.

(i) Calculate the commutators and show that the results can again be expressed using commutators.
(ii) Assume that

[A1, A2] = A3,   [A2, A3] = A1,   [A3, A1] = A2

and

[B1, B2] = B3,   [B2, B3] = B1,   [B3, B1] = B2.

Use these commutation relations to simplify the results of (i).

Solution 40. (i) We have

[X1, X2] = (A1 ⊗ In + Im ⊗ B1)(A2 ⊗ In + Im ⊗ B2) − (A2 ⊗ In + Im ⊗ B2)(A1 ⊗ In + Im ⊗ B1)
= (A1A2) ⊗ In − (A2A1) ⊗ In + Im ⊗ (B1B2) − Im ⊗ (B2B1)
= [A1, A2] ⊗ In + Im ⊗ [B1, B2].

Analogously we have

[X2, X3] = [A2, A3] ⊗ In + Im ⊗ [B2, B3],
[X3, X1] = [A3, A1] ⊗ In + Im ⊗ [B3, B1].

(ii) Using the commutation relations we obtain

[X1, X2] = A3 ⊗ In + Im ⊗ B3 = X3,
[X2, X3] = A1 ⊗ In + Im ⊗ B1 = X1,
[X3, X1] = A2 ⊗ In + Im ⊗ B2 = X2.
Problem 41. Let A, B be n × n matrices over ℂ. Assume that [A, B] = 0n. Can we conclude that A ⊗ B − B ⊗ A = 0_{n²}?

Solution 41. Obviously not in general. For example, let

A = I2,   B = [[1, 0], [0, 0]].

Then [A, B] = 0₂ but A ⊗ B ≠ B ⊗ A.
Problem 42. Let A be a 2 × 2 matrix with A² = I2. Find the commutator and anticommutator of

X = A ⊗ [[0, 1], [0, 0]],   Y = A ⊗ [[0, 0], [1, 0]].

Solution 42. We obtain

[X, Y] = A² ⊗ [[1, 0], [0, −1]] = I2 ⊗ σ3,   [X, Y]+ = A² ⊗ [[1, 0], [0, 1]] = I2 ⊗ I2.

Problem 43. Let A, B be n × n matrices over ℂ. We define the quasi-multiplication

A ∘ B := (1/2)(AB + BA).

Obviously A ∘ B = B ∘ A. Show that (A² ∘ B) ∘ A = A² ∘ (B ∘ A). This is called the Jordan identity.

Solution 43. We have

4(A² ∘ B) ∘ A = (A²B + BA²)A + A(A²B + BA²)
= A²BA + BA³ + A³B + ABA²
= A²(BA + AB) + (BA + AB)A²
= 4A² ∘ (B ∘ A).
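A numerical check of the Jordan identity, and of the failure of associativity, can be done with random matrices (illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

jordan = lambda X, Y: 0.5 * (X @ Y + Y @ X)   # A o B

A2 = A @ A
lhs = jordan(jordan(A2, B), A)     # (A^2 o B) o A
rhs = jordan(A2, jordan(B, A))     # A^2 o (B o A)
print(np.allclose(lhs, rhs))       # True

# the Jordan product is generally not associative
C = rng.standard_normal((n, n))
print(np.allclose(jordan(jordan(A, B), C), jordan(A, jordan(B, C))))  # generically False
```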
Supplementary Problems

Problem 1. Consider the 2 × 2 matrices A(α) and B(β). Calculate the commutator [A(α), B(β)]. What is the condition on α, β such that [A(α), B(β)] = 0₂?
Problem 2. Let A be an arbitrary 2 × 2 matrix. Show that

[A ⊗ σ3, I2 ⊗ A] = A ⊗ [σ3, A].

Problem 3. (i) Find all nonzero 2 × 2 matrices A, B such that [A, B] = A + B. Since tr([A, B]) = 0 we have tr(A + B) = 0.
(ii) Find all 2 × 2 matrices A and B such that [A, B] = A − B.
(iii) Find all 2 × 2 matrices A and B such that [A ⊗ A, B ⊗ B] = A ⊗ A − B ⊗ B.
Problem 4. (i) Find all 2 × 2 matrices A over ℂ such that [A, A*]+ = I2 and [A, A*] = I2. From the two conditions we find

AA* + A*A = I2,   AA* − A*A = I2.

(ii) Find all 2 × 2 matrices B over ℂ such that [B, B*]+ = I2.
(iii) Find all 2 × 2 matrices C over ℂ such that [C, C*]+ = I2, [C, C*] = σ3, where σ3 is the third Pauli spin matrix.
Problem 5. (i) Find all nonzero 2 × 2 matrices K+, K−, K3 such that

[K3, K+] = K+,   [K3, K−] = −K−,   [K+, K−] = −2K3

where (K+)* = K−. Since tr([A, B]) = 0 for any n × n matrices A, B we have tr(K+) = tr(K−) = tr(K3) = 0.
(ii) Find all nonzero 2 × 2 matrices A1, A2, A3 such that

[A1, A2] = 0₂,   [A1, A3] = A1,   [A2, A3] = A2.

(iii) Find all 2 × 2 matrices A, B, C such that [A, B] ≠ 0₂, [A, C] ≠ 0₂, [B, C] ≠ 0₂ and

[A, [B, C]] = 0₂.
Problem 6. Can one find a 2 × 2 matrix A over ℝ such that
Problem 7. Let α, β ∈ ℂ.
(i) Show that the 2 × 2 matrices A and B satisfy the conditions [A, B]+ = I2, [A, A]+ = 0₂, [B, B]+ = 0₂.
(ii) Show that the 2 × 2 matrices

A,   B

satisfy the conditions [A, B]+ = I2, [A, A]+ = 0₂, [B, B]+ = 0₂.
Problem 8. (i) Find all nonzero 2 × 2 matrices A, B such that

[A, B]+ = 0₂,   tr(AB*) = 0.

(ii) Find all nonzero 2 × 2 matrices A and B such that [A, [A, B]] = 0₂.
Problem 9. (i) Let A, B, C be 3 × 3 nonzero matrices with the commutation relations

[A, B] = C,   [B, C] = A,   [C, A] = B.

Consider the six 27 × 27 matrices

A ⊗ B ⊗ C, A ⊗ C ⊗ B, B ⊗ A ⊗ C, B ⊗ C ⊗ A, C ⊗ A ⊗ B, C ⊗ B ⊗ A.

Find all the commutators for these six matrices. Discuss.
(ii) Let A1, A2, A3 be nonzero 3 × 3 matrices which satisfy the commutation relations

[A1, A2] = A3,   [A2, A3] = A1,   [A3, A1] = A2.

Let

X := Σ_{k=1}^{3} c_k A_k.

Find the commutator

[Σ_{j=1}^{3} (A_j ⊗ A_j), X ⊗ I3 + I3 ⊗ X].
Problem 10. (i) Let A, B be n × n matrices over ℂ. Show that if A and B commute and if A is normal, then A* and B commute.
(ii) Let A be an n × n matrix over ℂ. It can be written as A = HU, where H is a non-negative definite hermitian matrix and U is unitary. Show that the matrices H and A commute if and only if A is normal.
Problem 11. Let σj (j = 0, 1, 2, 3) be the Pauli spin matrices, where σ0 = I2. Form the four 4 × 4 Dirac matrices γk.
(i) Are the matrices γk linearly independent?
(ii) Find the eigenvalues and eigenvectors of the γk's.
(iii) Are the matrices γk invertible? Use the result from (ii). If so, find the inverse.
(iv) Find the commutators [γk, γℓ] for k, ℓ = 0, 1, 2, 3. Find the anticommutators [γk, γℓ]+ for k, ℓ = 0, 1, 2, 3.
Problem 12. Let A, B be n × n matrices over ℂ. Assume that [A, B] ≠ 0n. Can we conclude that

Problem 13. Let A, B be invertible n × n matrices over ℂ. Assume that [A, B] = 0n. Can we conclude that [A⁻¹, B⁻¹] = 0n?
Problem 14. (i) Let σ1, σ2, σ3 be the Pauli spin matrices. Consider the three nonnormal matrices

A = σ1 + iσ2,   B = σ2 + iσ3,   C = σ3 + iσ1.

Find the commutators and anticommutators. Discuss.
(ii) Consider the three nonnormal matrices

X = σ1 ⊗ σ1 + iσ2 ⊗ σ2,   Y = σ2 ⊗ σ2 + iσ3 ⊗ σ3,   Z = σ3 ⊗ σ3 + iσ1 ⊗ σ1.

Find the commutators and anticommutators. Discuss.
Problem 15. (i) Let σ1, σ2, σ3 be the Pauli spin matrices. Consider the nonnormal matrices A, B, C, where det(A) = det(B) = det(C) = 1, i.e. A, B, C are elements of the Lie group SL(2, ℂ). Show that [A, A*] = σ3, [B, B*] = σ1, [C, C*] = σ2.
(ii) Consider the unitary matrices U, V. Show that B = UAU*, C = VBV*.
(iii) Consider the nonnormal and noninvertible matrices X, Y, Z. All have trace zero and thus are elements of the Lie algebra sl(2, ℂ). Show that

[X, X*] = σ3,   [Y, Y*] = σ1,   [Z, Z*] = σ2.

Hint. Obviously we have X = A − I2, Y = B − I2, Z = C − I2.
(iv) Show that Y = UXU*, Z = VYV*.
(v) Study the commutators

[X ⊗ X, X* ⊗ X*],   [X ⊗ X*, X* ⊗ X],
[Y ⊗ Y, Y* ⊗ Y*],   [Y ⊗ Y*, Y* ⊗ Y],
[Z ⊗ Z, Z* ⊗ Z*],   [Z ⊗ Z*, Z* ⊗ Z].
Problem 16. (i) Can one find n × n matrices A and B over ℂ such that the following conditions are satisfied: [A, B] = 0n, [A, B]+ = 0n and both A and B are invertible?
(ii) Can one find n × n matrices A and B over ℂ such that the following conditions are satisfied:

[A, B] = 0n,   [A, B]+ = In

and both A and B are invertible?
Problem 17. Consider the hermitian 3 × 3 matrices

L1 = [[0, 0, 0], [0, 0, −i], [0, i, 0]],   L2 = [[0, 0, i], [0, 0, 0], [−i, 0, 0]],   L3 = [[0, −i, 0], [i, 0, 0], [0, 0, 0]].

Find the commutators [L1, L2], [L2, L3], [L3, L1]. Find the 3 × 3 matrix

A = c23[L2, L3]+ + c13[L3, L1]+ + c12[L1, L2]+.
Problem 18. (i) Find 4 × 4 matrices C and 2 × 2 matrices A such that

[C, A ⊗ I2 + I2 ⊗ A] = 0₄.

(ii) Find the conditions on the two 2 × 2 hermitian matrices B, C such that

[B ⊗ C, P] = 0₄

where P is the permutation matrix.
(iii) Let E, F be arbitrary 2 × 2 matrices. Then

[E ⊗ I2, I2 ⊗ F] = 0₄.

Let X, Y be 4 × 4 matrices with [X, Y] = 0₄. Can X, Y be written as X = E ⊗ I2, Y = I2 ⊗ F or Y = E ⊗ I2, X = I2 ⊗ F?
Problem 19. (i) Consider the two 2 × 2 matrices (counter diagonal matrices)

A = [[0, a12], [a21, 0]],   B = [[0, b12], [b21, 0]].

Find the condition on A and B such that [A, B] = 0₂.
(ii) Consider the two 3 × 3 matrices (counter diagonal matrices)

A = [[0, 0, a13], [0, a22, 0], [a31, 0, 0]],   B = [[0, 0, b13], [0, b22, 0], [b31, 0, 0]].

Find the condition on A and B such that [A, B] = 0₃.
(iii) Extend to n dimensions.
Problem 20. Consider (m + n) × (m + n) matrices of the form

[[m × m, m × n], [n × m, n × n]].

Let

B = [[B1, 0], [0, B2]],   B̃ = [[B̃1, 0], [0, B̃2]],   F = [[0, F1], [F2, 0]],   F̃ = [[0, F̃1], [F̃2, 0]].

Show that

[B, B̃] = [[B1B̃1 − B̃1B1, 0], [0, B2B̃2 − B̃2B2]],

[B, F̃] = [[0, B1F̃1 − F̃1B2], [B2F̃2 − F̃2B1, 0]],

[F, F̃]+ = [[F1F̃2 + F̃1F2, 0], [0, F2F̃1 + F̃2F1]].

Problem 21. Let n > 1 and A, B be n × n matrices over ℂ. Show that

[A, Bⁿ] = Σ_{j=0}^{n−1} Bʲ [A, B] B^{n−1−j}.
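The identity can be tested numerically for random matrices (illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 4                      # matrix size m, power n
A = rng.standard_normal((m, m))
B = rng.standard_normal((m, m))

Bn = np.linalg.matrix_power(B, n)
comm = A @ Bn - Bn @ A           # [A, B^n]

C = A @ B - B @ A                # [A, B]
total = sum(np.linalg.matrix_power(B, j) @ C @ np.linalg.matrix_power(B, n - 1 - j)
            for j in range(n))
print(np.allclose(comm, total))  # True
```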
Problem 22. Let A1, A2, B1, B2 be n × n matrices over ℂ with A1² = A2² = In and B1² = B2² = In. Let

T = A1 ⊗ (B1 + B2) + A2 ⊗ (B1 − B2).

Show that T² = 4 In ⊗ In − [A1, A2] ⊗ [B1, B2].
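The claimed identity for T² can be checked numerically; the sketch below is illustrative (the particular involutions chosen here are an arbitrary assumption, any matrices squaring to the identity will do):

```python
import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])
A1, A2 = s1, s3
B1 = (s1 + s3) / np.sqrt(2)      # also an involution
B2 = (s1 - s3) / np.sqrt(2)

I = np.eye(2)
T = np.kron(A1, B1 + B2) + np.kron(A2, B1 - B2)
comm = lambda X, Y: X @ Y - Y @ X
lhs = T @ T
rhs = 4 * np.kron(I, I) - np.kron(comm(A1, A2), comm(B1, B2))
print(np.allclose(lhs, rhs))     # True
```

This is the algebra underlying the CHSH operator in quantum mechanics.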
Problem 23. Let A, B be 2 × 2 matrices.
(i) Solve the equation A ⊗ B − B ⊗ A = [A, B] ⊗ I2 + I2 ⊗ [A, B].
(ii) Solve the equation A ⊗ B − B ⊗ A = [A ⊗ A, B ⊗ B].

Problem 24. Let A be an n × n matrix. Find the commutator and the anticommutator of the 2n × 2n matrices

[[0n, A], [A, 0n]]   and   [[0n, A], [−A, 0n]].

Problem 25. Let A, B be n × n matrices and T a (fixed) invertible n × n matrix. We define the bracket

[A, B]_T := ATB − BTA.

Let X, Y, H be given. Find [X, Y]_T, [X, H]_T, [Y, H]_T.
Problem 26. Can we find 3 × 3 matrices A and B such that [A, B]+ = 0₃ and A² = B² = I3?

Problem 27. (i) Let X, Y be n × n matrices over ℂ. Assume that [X, Y] = 0n. Do X ⊗ Y and Y ⊗ X commute? Do X ⊗ X and Y ⊗ In + In ⊗ Y commute?
(ii) Let X, Y be n × n matrices over ℂ. Assume that [X, Y]+ = 0n. Do X ⊗ X and Y ⊗ Y anticommute? Do X ⊗ X and Y ⊗ In + In ⊗ Y anticommute?
Problem 28. Find all 2 × 2 matrices over ℂ such that the commutator is an invertible diagonal matrix D, i.e. d11 ≠ 0 and d22 ≠ 0.

Problem 29. Consider the skew-symmetric 3 × 3 matrices

A1 = [[0, 0, 0], [0, 0, −1], [0, 1, 0]],   A2 = [[0, 0, 1], [0, 0, 0], [−1, 0, 0]],   A3 = [[0, −1, 0], [1, 0, 0], [0, 0, 0]]

with [A1, A2] = A3, [A2, A3] = A1, [A3, A1] = A2. Find the commutators for the 9 × 9 matrices

A1 ⊗ I3 + I3 ⊗ A1,   A2 ⊗ I3 + I3 ⊗ A2,   A3 ⊗ I3 + I3 ⊗ A3.
Problem 30. Let A, B, C be n × n matrices. Show that

tr([A, B]C) = tr(A[B, C])

applying cyclic invariance.

Problem 31. Show that A = iσ1/2, B = −iσ2/2 satisfy

−A = [B, [B, A]],   −B = [A, [A, B]].

Problem 32. Consider the linear operators J+, J−, Jz satisfying the commutation relations [Jz, J+] = J+, [Jz, J−] = −J−, [J+, J−] = 2Jz, where (J+)* = J−. Show that the 2 × 2 matrices

J+ = [[0, 1], [0, 0]],   J− = [[0, 0], [1, 0]],   Jz = (1/2)[[1, 0], [0, −1]]

satisfy these commutation relations.
Problem 33. (i) Consider the 3 × 3 matrices

P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]],   B = [[b11, b12, b13], [b13, b11, b12], [b12, b13, b11]].

Both matrices are circulant matrices. Find the commutator [P, B]. Discuss.
(ii) Consider the invertible 3 × 3 matrices

P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]],   A = [[1, 0, 0], [0, e^{2iπ/3}, 0], [0, 0, e^{−2iπ/3}]].

Is the matrix [A, P] invertible?
Problem 34. Let c ∈ ℝ and A be a 2 × 2 matrix over ℝ. Find the commutator of the 3 × 3 matrices c ⊕ A and A ⊕ c, where ⊕ denotes the direct sum.

Problem 35. Let U, V be n × n unitary matrices. Show that

[V⁻¹ ⊗ U, V ⊗ U⁻¹] = 0_{n²}.

Problem 36. Let A, B be n × n matrices over ℂ. We define (Jordan product)

A ∘ B := (1/2)(AB + BA).

Show that A ∘ B is commutative and satisfies A ∘ (A² ∘ B) = A² ∘ (A ∘ B), but is not associative in general, i.e. (A ∘ B) ∘ C ≠ A ∘ (B ∘ C) in general.
Chapter 8
Decomposition of Matrices
A matrix decomposition (or matrix factorization) expresses an input matrix A as a matrix product

A = F1 F2 ··· Fk.

The number of factor matrices F1, F2, . . . , Fk depends on the chosen decomposition. In most cases k = 2 or k = 3.

The most common decompositions are:
1) LU-decomposition. A square matrix A is factorized into a product of a lower triangular matrix L and an upper triangular matrix U, i.e. A = LU.
2) QR-decomposition. An n × m matrix with linearly independent columns is factorized as A = QR, where Q is an n × m matrix with orthonormal columns and R is an invertible m × m upper triangular matrix.
3) The polar decomposition of A ∈ ℂ^{n×n} factors A as the product A = UH, where U is unitary and H is hermitian positive semi-definite. The hermitian factor H is always unique and can be expressed as (A*A)^{1/2}, and the unitary factor is unique if A is nonsingular.
4) In the singular value decomposition an m × n matrix over ℝ can be written as

A = UΣVᵀ

where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, Σ is an m × n diagonal matrix with nonnegative entries and the superscript T denotes the transpose.

Other important decompositions are the cosine-sine decomposition for unitary matrices, the Iwasawa decomposition (for Lie groups), the Schur decomposition, the Cholesky decomposition (for positive semi-definite matrices) and the Jordan decomposition.
Problems and Solutions

Problem 1. Find the LU-decomposition of the 3 × 3 matrix

A = [[3, 6, −9], [2, 5, −3], [−4, 1, 10]].

The triangular matrices L and U are not uniquely determined by the matrix equation A = LU. These two matrices together contain n² + n unknown elements. Thus when comparing elements on the left- and right-hand side of A = LU we have n² equations and n² + n unknowns. We require a further n conditions to uniquely determine the matrices. There are three additional sets of n conditions that are commonly used. These are Doolittle's method with ℓjj = 1, j = 1, . . . , n; Cholesky's method with ℓjj = ujj, j = 1, . . . , n; Crout's method with ujj = 1, j = 1, . . . , n. Apply Crout's method.

Solution 1. From

L = [[ℓ11, 0, 0], [ℓ21, ℓ22, 0], [ℓ31, ℓ32, ℓ33]],   U = [[1, u12, u13], [0, 1, u23], [0, 0, 1]]

and A = LU we obtain the 9 equations

ℓ11 = 3,   ℓ11u12 = 6,   ℓ11u13 = −9,
ℓ21 = 2,   ℓ21u12 + ℓ22 = 5,   ℓ21u13 + ℓ22u23 = −3,
ℓ31 = −4,   ℓ31u12 + ℓ32 = 1,   ℓ31u13 + ℓ32u23 + ℓ33 = 10

with the solution ℓ11 = 3, ℓ21 = 2, ℓ22 = 1, ℓ31 = −4, ℓ32 = 9, ℓ33 = −29, u12 = 2, u13 = −3, u23 = 3. Thus we obtain

L = [[3, 0, 0], [2, 1, 0], [−4, 9, −29]],   U = [[1, 2, −3], [0, 1, 3], [0, 0, 1]].
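Crout's method can be implemented in a few lines; the following NumPy sketch (illustrative, no pivoting) reproduces the factors found above:

```python
import numpy as np

def crout_lu(A):
    """Crout's method: A = L U with U unit upper triangular (u_jj = 1)."""
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        for i in range(j, n):                       # column j of L
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        for k in range(j + 1, n):                   # row j of U
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

A = np.array([[3., 6., -9.],
              [2., 5., -3.],
              [-4., 1., 10.]])
L, U = crout_lu(A)
print(L)                        # diagonal 3, 1, -29 as in the solution
print(U)                        # unit diagonal with u12 = 2, u13 = -3, u23 = 3
print(np.allclose(L @ U, A))    # True
```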
Problem 2. Let A be an n × n matrix over ℝ. Consider the LU-decomposition A = LU, where L is a unit lower triangular matrix and U is an upper triangular matrix. The LDU-decomposition is defined as A = LDU, where L is unit lower triangular, D is diagonal and U is unit upper triangular. Let

A = [[2, 4, −2], [4, 9, −3], [−2, −3, 7]].

Find the LDU-decomposition via the LU-decomposition.

Solution 2. We have the LU-decomposition

L = [[1, 0, 0], [2, 1, 0], [−1, 1, 1]],   U = [[2, 4, −2], [0, 1, 1], [0, 0, 4]].

Then the LDU-decomposition follows as

A = [[1, 0, 0], [2, 1, 0], [−1, 1, 1]] [[2, 0, 0], [0, 1, 0], [0, 0, 4]] [[1, 2, −1], [0, 1, 1], [0, 0, 1]].
Problem 3. Find the QR-decomposition of the 3 × 3 matrix

A = [[2, 1, 3], [−1, 0, 7], [0, −1, −1]].

Solution 3. The columns of A are

[2, −1, 0]ᵀ,   [1, 0, −1]ᵀ,   [3, 7, −1]ᵀ.

They are linearly independent. Applying the Gram-Schmidt orthonormalization process we obtain

v1 = (1/√5)[2, −1, 0]ᵀ,   v2 = (1/√30)[1, 2, −5]ᵀ,   v3 = (1/√6)[1, 2, 1]ᵀ.

Thus we obtain

Q = [[2/√5, 1/√30, 1/√6], [−1/√5, 2/√30, 2/√6], [0, −5/√30, 1/√6]],
R = [[√5, 2/√5, −1/√5], [0, 6/√30, 22/√30], [0, 0, 16/√6]]

with A = QR.
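The Gram-Schmidt construction can be replayed numerically (illustrative NumPy sketch; the matrix A is the one given above):

```python
import numpy as np

A = np.array([[2., 1., 3.],
              [-1., 0., 7.],
              [0., -1., -1.]])

# classical Gram-Schmidt on the columns of A
Q = np.zeros_like(A)
R = np.zeros((3, 3))
for j in range(3):
    v = A[:, j].copy()
    for i in range(j):
        R[i, j] = Q[:, i] @ A[:, j]
        v -= R[i, j] * Q[:, i]
    R[j, j] = np.linalg.norm(v)
    Q[:, j] = v / R[j, j]

print(np.allclose(Q @ R, A))             # True
print(np.allclose(Q.T @ Q, np.eye(3)))   # True, columns orthonormal
print(R[0, 0], R[1, 1] * np.sqrt(30), R[2, 2] * np.sqrt(6))   # sqrt(5), 6, 16
```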
Problem 4. We consider 3 × 3 matrices over ℝ. An orthogonal matrix Q such that det(Q) = 1 is called a rotation matrix. Let 1 ≤ p < r ≤ 3 and φ be a real number. An orthogonal 3 × 3 matrix Qpr(φ) = (qij)_{1≤i,j≤3} given by

q_pp = q_rr = cos(φ),   q_rp = −q_pr = sin(φ),
q_ii = 1 if i ≠ p, r,
q_ip = q_pi = q_ir = q_ri = 0 if i ≠ p, r,
q_ij = 0 if i ≠ p, r and j ≠ p, r with i ≠ j

will be called a plane rotation through φ in the plane span(e_p, e_r). Let Q = (q_ij)_{1≤i,j≤3} be a rotation matrix. Show that there exist angles φ ∈ [0, π), θ, ψ ∈ (−π, π], called the Euler angles of Q, such that

Q = Q12(φ) Q23(θ) Q12(ψ).   (1)

Solution 4. To prove it we use the QR factorization of the rotation matrix. We set

Q1 = Q,   Q2 = Q12(−φ)Q1,   Q3 = Q23(−θ)Q2,   Q4 = Q12(−ψ)Q3

where Qk = (q^(k)_ij)_{1≤i,j≤3}. We then choose φ, θ, ψ to be the numbers that subsequently annihilate q^(1)_13, q^(2)_23, q^(3)_12, that is, such that

cot(φ) = −q^(1)_23/q^(1)_13,   cot(θ) = −q^(2)_33/q^(2)_23,   cot(ψ) = −q^(3)_22/q^(3)_12.

The matrix Q4 is a rotation matrix in lower triangular form and therefore it is the 3 × 3 identity matrix. Since Qpr(−ψ) = Qpr(ψ)⁻¹ we obtain (1).
Problem 5. Consider a non-singular square matrix A over ℂ, i.e. A⁻¹ exists. The polar decomposition theorem states that A can be written as A = UH, where U is a unitary matrix and H is a hermitian positive definite matrix. Show that A has a unique polar decomposition.

Solution 5. Since A is invertible, so are A* and A*A. The positive square root H of A*A is also invertible. Set U := AH⁻¹. Then U is invertible and

U*U = H⁻¹A*AH⁻¹ = H⁻¹H²H⁻¹ = In

so that U is unitary. Since H is invertible, it is obvious that AH⁻¹ is the only possible choice for U.
Problem 6. For any n × n matrix A over ℂ, there exists a positive semi-definite matrix H and a unitary matrix U such that A = HU. If A is nonsingular, then H is positive definite and U and H are unique.
(i) Find the polar decomposition for the invertible 3 × 3 matrix

A = [[1, 0, −4], [0, 5, 4], [−4, 4, 3]].

The positive definite matrix H is the unique square root of the positive definite matrix A*A and then U is defined by U = AH⁻¹.
(ii) Apply the polar decomposition to the nonnormal matrix

A = [[1, 0], [−1, 1]]

which is an element of the Lie group SL(2, ℝ).

Solution 6. (i) The unique decomposition is A = HU.
(ii) We have

A*A = [[2, −1], [−1, 1]]

with the positive eigenvalues λ± = 3/2 ± √5/2. The positive definite square root of this matrix is

(A*A)^{1/2} = (1/√5) [[3, −1], [−1, 2]].
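The square root of A*A, and with it the polar factors, can be computed from the spectral decomposition; an illustrative NumPy sketch for the matrix of part (ii):

```python
import numpy as np

def polar_UH(A):
    """Polar decomposition A = U H with H = (A* A)^{1/2}, U = A H^{-1}."""
    w, V = np.linalg.eigh(A.conj().T @ A)    # A*A is hermitian positive definite
    H = V @ np.diag(np.sqrt(w)) @ V.conj().T
    U = A @ np.linalg.inv(H)
    return U, H

A = np.array([[1., 0.],
              [-1., 1.]])
U, H = polar_UH(A)
print(np.allclose(U @ H, A))                 # True
print(np.allclose(U.T @ U, np.eye(2)))       # U is orthogonal
print(np.allclose(H * np.sqrt(5), [[3., -1.], [-1., 2.]]))   # matches the solution
```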
Problem 7. Let A be an arbitrary m × n matrix over ℝ, i.e. A ∈ ℝ^{m×n}. Then A can be written as

A = UΣVᵀ

where U is an m × m orthogonal matrix, V is an n × n orthogonal matrix, Σ is an m × n diagonal matrix with nonnegative entries and the superscript T denotes the transpose. This is called the singular value decomposition. An algorithm to find the singular value decomposition is as follows.
1) Find the eigenvalues λj (j = 1, . . . , n) of the n × n matrix AᵀA. Arrange the eigenvalues λ1, . . . , λn in descending order.
2) Find the number of nonzero eigenvalues of the matrix AᵀA. We call this number r.
3) Find the orthogonal eigenvectors vj of the matrix AᵀA corresponding to the obtained eigenvalues, and arrange them in the same order to form the column-vectors of the n × n matrix V.
4) Form an m × n diagonal matrix Σ placing on the leading diagonal of it the square roots σj := √λj of the p = min(m, n) first eigenvalues of the matrix AᵀA found in 1) in descending order.
5) Find the first r column vectors of the m × m matrix U:

uj = (1/σj) A vj,   j = 1, 2, . . . , r.

6) Add to the matrix U the rest of the m − r vectors using the Gram-Schmidt orthogonalization process.
We have Avj = σjuj, Aᵀuj = σjvj and therefore AᵀAvj = σj²vj, AAᵀuj = σj²uj. Apply the algorithm to the matrix

A = [[0.96, 1.72], [2.28, 0.96]].
Solution 7. 1) We find

AᵀA = [[6.12, 3.84], [3.84, 3.88]].

The eigenvalues are (arranged in descending order) λ1 = 9 and λ2 = 1.
2) The number of nonzero eigenvalues is r = 2.
3) The normalized eigenvectors of the matrix AᵀA, corresponding to the eigenvalues λ1 and λ2, are given by

v1 = [0.8, 0.6]ᵀ,   v2 = [0.6, −0.8]ᵀ.

We obtain the 2 × 2 matrix V (Vᵀ follows by taking the transpose)

V = [[0.8, 0.6], [0.6, −0.8]].

4) From the eigenvalues we find the singular value matrix taking the square roots of the eigenvalues

Σ = [[3, 0], [0, 1]].

5) Next we find the two column vectors of the 2 × 2 matrix U. Using the equation given above we find (σ1 = 3, σ2 = 1)

u1 = [0.6, 0.8]ᵀ,   u2 = [−0.8, 0.6]ᵀ.

It follows that

U = (u1 u2) = [[0.6, −0.8], [0.8, 0.6]].

Thus we have found the singular value decomposition of the matrix A

A = UΣVᵀ = [[0.6, −0.8], [0.8, 0.6]] [[3, 0], [0, 1]] [[0.8, 0.6], [0.6, −0.8]]ᵀ.
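The result agrees with a direct computation, e.g. via numpy.linalg.svd (note that a numerical SVD may differ from the hand computation by sign choices in the columns of U and V):

```python
import numpy as np

A = np.array([[0.96, 1.72],
              [2.28, 0.96]])
U, s, Vt = np.linalg.svd(A)
print(np.round(s, 10))                       # [3. 1.]
print(np.allclose(U @ np.diag(s) @ Vt, A))   # True
```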
Problem 8. Find the singular value decomposition A = UΣVᵀ of the matrix (row vector) A = (2 1 −2).

Solution 8. First we find the eigenvalues of the 3 × 3 matrix AᵀA, where T denotes transpose. Note that AAᵀ = 9. Thus the eigenvalues of

AᵀA = [[4, 2, −4], [2, 1, −2], [−4, −2, 4]]

are λ1 = 9, λ2 = 0, λ3 = 0. Thus the eigenvalue 0 is degenerate. Next we find the number r of nonzero eigenvalues, which is r = 1. Now we calculate the normalized eigenvectors. For the eigenvalue λ1 = 9 we find

v1 = [−2/3, −1/3, 2/3]ᵀ.

To find the two normalized eigenvectors for the eigenvalue 0 we have to apply the Gram-Schmidt orthogonalization process. This provides the orthonormal matrix V. Next we form the singular value matrix Σ = (3 0 0). Finally we calculate the unique column-vector of the matrix U,

u1 = (1/3) A v1 = (−1).

Hence the singular value decomposition of the matrix A is A = UΣVᵀ with U = (−1).
Problem 9. (i) Let n > 2 and n = 2k. Let A be an n × k matrix with A*A = Ik. Find the n × n matrix AA* using the singular value decomposition. Calculate tr(AA*).
(ii) Let n > 2 and n = 2k. Let A be an n × k matrix with A*A = Ik. Let S be a positive definite n × n matrix. Show that

tr(A*S²A) ≥ tr((A*SA)²).

Solution 9. (i) Using the singular value decomposition we have A = UΣV*, A* = VΣ*U*, where U is a unitary n × n matrix, V is a unitary k × k matrix and Σ is an n × k matrix with nonnegative entries on the diagonal. Thus

A*A = (VΣ*U*)(UΣV*) = VΣ*ΣV* = Ik.

It follows that Σ*Σ = Ik. On the other hand we have

AA* = UΣΣ*U* = U(Ik ⊕ 0k)U*

where 0k is the k × k zero matrix and ⊕ the direct sum. Obviously tr(A*A) = tr(AA*) = k.
(ii) Using the result from (i) we have

(A*SA)² = (A*SA)(A*SA) = A*SU(Ik ⊕ 0k)U*SA.

We also have

A*S²A = A*SU(Ik ⊕ 0k)U*SA + A*SU(0k ⊕ Ik)U*SA.

Thus

tr(A*S²A) = tr((A*SA)²) + tr(A*SU(0k ⊕ Ik)U*SA) ≥ tr((A*SA)²).
Problem 10. Let A be an n × n matrix over ℝ. Assume that A⁻¹ exists. Given the singular value decomposition of A, i.e. A = UΣVᵀ, find the singular value decomposition for A⁻¹.

Solution 10. We have A⁻¹ = (UΣVᵀ)⁻¹ = (Vᵀ)⁻¹Σ⁻¹U⁻¹ = VΣ⁻¹Uᵀ.
Problem 11. Find the Moore-Penrose pseudo inverses of

[[1, 0], [0, 1], [−1, 0]],   [1, 0]ᵀ,   (1 1)

applying the singular value decomposition.

Solution 11. For the first and third matrix the singular value decompositions are

[[1, 0], [0, 1], [−1, 0]] = [[1/√2, 0, 1/√2], [0, 1, 0], [−1/√2, 0, 1/√2]] [[√2, 0], [0, 1], [0, 0]] I2,

(1 1) = (1) (√2 0) [[1/√2, 1/√2], [1/√2, −1/√2]].

Thus the Moore-Penrose pseudo inverses are

[[1, 0], [0, 1], [−1, 0]]⁺ = [[1/2, 0, −1/2], [0, 1, 0]],
([1, 0]ᵀ)⁺ = (1 0),
(1 1)⁺ = [1/2, 1/2]ᵀ.
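The pseudo inverses can be cross-checked with numpy.linalg.pinv, which is itself based on the singular value decomposition (illustrative check for the first matrix, including the four Penrose conditions):

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [-1., 0.]])
Ap = np.linalg.pinv(A)     # Moore-Penrose pseudo inverse via the SVD
print(Ap)                  # approx [[0.5, 0, -0.5], [0, 1, 0]]

# the four Penrose conditions
print(np.allclose(A @ Ap @ A, A))
print(np.allclose(Ap @ A @ Ap, Ap))
print(np.allclose((A @ Ap).T, A @ Ap))
print(np.allclose((Ap @ A).T, Ap @ A))
```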
Problem 12. Any unitary 2ⁿ × 2ⁿ matrix U can be decomposed as

U = [[U1, 0], [0, U2]] [[C, S], [−S, C]] [[U3, 0], [0, U4]]

where U1, U2, U3, U4 are 2^{n−1} × 2^{n−1} unitary matrices and C and S are the 2^{n−1} × 2^{n−1} diagonal matrices

C = diag(cos(α1), cos(α2), . . . , cos(α_{2^{n−1}})),   S = diag(sin(α1), sin(α2), . . . , sin(α_{2^{n−1}}))

where αj ∈ ℝ. This decomposition is called the cosine-sine decomposition.
(i) Consider the unitary 2 × 2 matrix

U = [[0, i], [−i, 0]].

Show that U can be written as

U = [[u1, 0], [0, u2]] [[cos(α), sin(α)], [−sin(α), cos(α)]] [[u3, 0], [0, u4]]

where α ∈ ℝ and u1, u2, u3, u4 ∈ U(1) (i.e. u1, u2, u3, u4 are complex numbers with length 1). Find α, u1, u2, u3, u4.
(ii) Find the cosine-sine decomposition of the unitary matrix

(1/√2) [[1, 1], [1, −1]].

Solution 12. (i) Matrix multiplication yields

[[0, i], [−i, 0]] = [[u1u3 cos(α), u1u4 sin(α)], [−u2u3 sin(α), u2u4 cos(α)]].

Since u1, u2, u3, u4 ≠ 0 we obtain cos(α) = 0. We select the solution α = π/2. Since sin(π/2) = 1 it follows that u1u4 = i, u2u3 = i. Thus we can select the solution u1 = u2 = 1, u3 = u4 = i.
(ii) We have (α ∈ ℝ)

(1/√2) [[1, 1], [1, −1]] = [[u1, 0], [0, u2]] [[cos(α), sin(α)], [−sin(α), cos(α)]] [[u3, 0], [0, u4]]

where u1, u2, u3, u4 ∈ ℂ with |u1| = |u2| = |u3| = |u4| = 1. Matrix multiplication yields the four equations

1/√2 = u1u3 cos(α),   1/√2 = u1u4 sin(α),   1/√2 = −u2u3 sin(α),   −1/√2 = u2u4 cos(α)

with a solution α = π/4 and u1 = u3 = u4 = 1, u2 = −1. Hence the decomposition is

(1/√2) [[1, 1], [1, −1]] = [[1, 0], [0, −1]] [[1/√2, 1/√2], [−1/√2, 1/√2]] [[1, 0], [0, 1]].
Problem 13. Find the cosine-sine decomposition of the 4 × 4 unitary matrix (Bell matrix)

B = (1/√2) [[1, 0, 0, 1], [0, 1, 1, 0], [0, 1, −1, 0], [1, 0, 0, −1]].

Solution 13. We have to solve

(1/√2) [[1, 0, 0, 1], [0, 1, 1, 0], [0, 1, −1, 0], [1, 0, 0, −1]] = [[U1, 0₂], [0₂, U2]] [[C, S], [−S, C]] [[U3, 0₂], [0₂, U4]]

where

C = [[cos(α1), 0], [0, cos(α2)]],   S = [[sin(α1), 0], [0, sin(α2)]],

U1, U2, U3, U4 are 2 × 2 unitary matrices and 0₂ is the 2 × 2 zero matrix. From the condition we obtain the four matrix equations

U1CU3 = (1/√2) [[1, 0], [0, 1]],   U1SU4 = (1/√2) [[0, 1], [1, 0]],
−U2SU3 = (1/√2) [[0, 1], [1, 0]],   U2CU4 = (1/√2) [[−1, 0], [0, −1]].

The solution is

C = S = (1/√2) [[1, 0], [0, 1]],   i.e. α1 = α2 = π/4,

and

U1 = U3 = [[0, 1], [1, 0]],   U2 = [[−1, 0], [0, −1]],   U4 = [[1, 0], [0, 1]].
Problem 14. For any n × n matrix A there exists an n × n unitary matrix U (U* = U⁻¹) such that

U*AU = T   (1)

where T is an n × n matrix in upper triangular form. Equation (1) is called a Schur decomposition. The diagonal elements of T are the eigenvalues of A. Note that such a decomposition is not unique. An iterative algorithm to find a Schur decomposition for an n × n matrix is as follows.
It generates at each step matrices Uk and Tk (k = 1, . . . , n − 1) with the properties: each Uk is unitary, and each Tk has only zeros below its main diagonal in its first k columns. T_{n−1} is in upper triangular form, and U = U1U2···U_{n−1} is the unitary matrix that transforms A into T_{n−1}. We set T0 = A. The k-th step in the iteration is as follows.
Step 1. Denote by Ak the (n − k + 1) × (n − k + 1) submatrix in the lower right portion of T_{k−1}.
Step 2. Determine an eigenvalue and the corresponding normalized eigenvector for Ak.
Step 3. Construct a unitary matrix Nk which has as its first column the normalized eigenvector found in step 2.
Step 4. For k = 1, set U1 = N1; for k > 1, set Uk = I_{k−1} ⊕ Nk, where I_{k−1} is the (k − 1) × (k − 1) identity matrix.
Step 5. Calculate Tk = Uk*T_{k−1}Uk.
Apply the algorithm to the 3 × 3 symmetric matrix

A = [[1, 0, 1], [0, 1, 0], [1, 0, 1]].

Solution 14. Since rank(A) = 2 one eigenvalue is 0. The corresponding normalized eigenvector of the eigenvalue 0 is

(1/√2) [1, 0, −1]ᵀ.

Thus we can construct the unitary matrix

N1 = U1 = (1/√2) [[1, 0, 1], [0, √2, 0], [−1, 0, 1]].

The two other column vectors in the matrix we construct from the fact that they must be normalized and orthogonal to the first column vector and orthogonal to each other. Now

T1 = U1*AU1 = [[0, 0, 0], [0, 1, 0], [0, 0, 2]]

which is already the solution of the problem. Thus the eigenvalues of A are 0, 1, 2.
Problem 15. Let A be an n × n matrix over ℂ. Then there exists an n × n unitary matrix Q such that

Q*AQ = D + N

where D = diag(λ1, λ2, . . . , λn) is the diagonal matrix composed of the eigenvalues of A and N is a strictly upper triangular matrix (i.e. N has zero entries on the diagonal). The matrix Q is said to provide a Schur decomposition of A. Show that the given Q provides a Schur decomposition of the given A.

Solution 15. Obviously, Q*Q = QQ* = I2. Now

Q*AQ = [[3 + 4i, −6], [0, 3 − 4i]] = [[3 + 4i, 0], [0, 3 − 4i]] + [[0, −6], [0, 0]] = D + N.

Consequently, we obtained a Schur decomposition of the matrix A with the given Q.
Problem 16. We say that a matrix is upper triangular if all its entries below the main diagonal are 0, and that it is strictly upper triangular if in addition all the entries on the main diagonal are equal to 1. Any invertible real n × n matrix A can be written as the product of three real n × n matrices

A = ODN

where N is strictly upper triangular, D is diagonal with positive entries, and O is orthogonal. This is known as the Iwasawa decomposition of the matrix A. The decomposition is unique. In other words, if A = O′D′N′, where O′, D′ and N′ are orthogonal, diagonal with positive entries and strictly upper triangular, respectively, then O′ = O, D′ = D and N′ = N.
(i) Find the Iwasawa decomposition of the given matrix.
(ii) Consider the 2 × 2 matrix

M = [[a, b], [c, d]]

where a, b, c, d ∈ ℂ and ad − bc = 1. Thus M is an element of the Lie group SL(2, ℂ). The Iwasawa decomposition is given by

[[a, b], [c, d]] = [[α, β], [−β̄, ᾱ]] [[δ^{−1/2}, 0], [0, δ^{1/2}]] [[1, η], [0, 1]]

where α, β, η ∈ ℂ and δ ∈ ℝ⁺. Find α, β, δ and η.

Solution 16. (i) The factors follow from the QR-decomposition: with A = QR (R with positive diagonal) one has O = Q, D = diag(r11, . . . , rnn) and N = D⁻¹R.
(ii) Matrix multiplication yields the four equations

a = αδ^{−1/2},   b = αηδ^{−1/2} + βδ^{1/2},   c = −β̄δ^{−1/2},   d = −β̄ηδ^{−1/2} + ᾱδ^{1/2}

where αᾱ + ββ̄ = 1. Solving these four equations yields

δ = (|a|² + |c|²)⁻¹,   α = aδ^{1/2},   β = −c̄δ^{1/2},   η = (āb + c̄d)δ.
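The formulas of part (ii) can be packaged and tested numerically; the sketch below is illustrative (the test matrix M is an arbitrary choice with determinant 1):

```python
import numpy as np

def iwasawa_sl2(M):
    """Iwasawa factors of M in SL(2,C): M = K A N with
    K = [[alpha, beta], [-conj(beta), conj(alpha)]],
    A = diag(delta**-0.5, delta**0.5), N = [[1, eta], [0, 1]]."""
    a, b = M[0, 0], M[0, 1]
    c, d = M[1, 0], M[1, 1]
    delta = 1.0 / (abs(a)**2 + abs(c)**2)
    alpha = a * np.sqrt(delta)
    beta = -np.conj(c) * np.sqrt(delta)
    eta = (np.conj(a) * b + np.conj(c) * d) * delta
    K = np.array([[alpha, beta], [-np.conj(beta), np.conj(alpha)]])
    A = np.diag([delta**-0.5, delta**0.5]).astype(complex)
    N = np.array([[1.0, eta], [0.0, 1.0]], dtype=complex)
    return K, A, N

M = np.array([[2.0, 1.0], [1.0, 1.0]], dtype=complex)   # det = 1
K, A, N = iwasawa_sl2(M)
print(np.allclose(K @ A @ N, M))                        # True
print(np.allclose(K.conj().T @ K, np.eye(2)))           # K is unitary
```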
Problem 17. Let A be a unitary n × n matrix. Let P be an invertible n × n matrix. Let B := AP. Show that PB⁻¹ is unitary.

Solution 17. Since A and P are invertible, the matrix B is also invertible. We have

PB⁻¹(PB⁻¹)* = P(AP)⁻¹((AP)⁻¹)*P* = PP⁻¹A⁻¹(A⁻¹)*(P⁻¹)*P* = A⁻¹(A*)⁻¹ = (A*A)⁻¹ = In.
Problem 18. Show that every 2 × 2 matrix A of determinant 1 is the product of three elementary matrices. This means that the matrix A can be written as

[[a11, a12], [a21, a22]] = [[1, x], [0, 1]] [[1, 0], [y, 1]] [[1, z], [0, 1]].   (1)

Solution 18. We find the four equations

a11 = 1 + xy,   a12 = z(1 + xy) + x,   a21 = y,   a22 = yz + 1.   (2)

Case a21 ≠ 0. Then we solve the system of equations (2) for y, x, z in that order using all but the second equation. We obtain

y = a21,   x = (a11 − 1)/a21,   z = (a22 − 1)/a21.

For these choices of x, y, z the second equation of (2) is also satisfied since

z(1 + xy) + x = (a11a22 − 1)/a21 = a12

where we used that det(A) = 1.
Case a21 = 0. We have y = 0. Thus we have the representation

[[a11, a12], [0, a11⁻¹]] = [[a11, 0], [0, 1]] [[1, a12], [0, 1]] [[1, 0], [0, a11⁻¹]].
Almost any 2 x 2 matrix A can be factored (Gaussian decomposition) as
$$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = \begin{pmatrix} 1 & \alpha \\ 0 & 1 \end{pmatrix}\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}\begin{pmatrix} 1 & 0 \\ \beta & 1 \end{pmatrix}.$$
Find the decomposition of the matrix
$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.$$

Solution 19. Matrix multiplication yields
$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} \lambda + \mu\alpha\beta & \alpha\mu \\ \beta\mu & \mu \end{pmatrix}.$$
We obtain the four equations $\lambda + \alpha\beta\mu = 1$, $\alpha\mu = 1$, $\beta\mu = 1$, $\mu = 1$. This leads
to the unique solution $\mu = \alpha = \beta = 1$, $\lambda = 0$. Thus we have the decomposition
$$\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.$$
Problem 20. Consider the symmetric 3 x 3 matrix A over $\mathbb{R}$ and the orthogonal
matrix O, respectively
$$A = \begin{pmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}, \qquad O = \begin{pmatrix} \cos(\phi) & -\sin(\phi) & 0 \\ \sin(\phi) & \cos(\phi) & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Calculate $\widetilde A = O^{-1}AO$. Can we find an angle $\phi$ such that $\widetilde a_{12} = \widetilde a_{21} = 0$?

Solution 20. From the orthogonal matrix O we obtain the inverse via $\phi \to -\phi$
$$O^{-1} = \begin{pmatrix} \cos(\phi) & \sin(\phi) & 0 \\ -\sin(\phi) & \cos(\phi) & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Thus
$$\widetilde A = O^{-1}AO = \begin{pmatrix} 2 - 2\sin(\phi)\cos(\phi) & \sin^2(\phi) - \cos^2(\phi) & -\sin(\phi) \\ \sin^2(\phi) - \cos^2(\phi) & 2 + 2\sin(\phi)\cos(\phi) & -\cos(\phi) \\ -\sin(\phi) & -\cos(\phi) & 2 \end{pmatrix}.$$
Thus the condition is $\widetilde a_{12} = \widetilde a_{21} = \sin^2(\phi) - \cos^2(\phi) = 0$. This leads to the
solution $\phi = \pi/4 + k\pi/2$ ($k \in \mathbb{Z}$), and for $\phi = \pi/4$ the matrix $\widetilde A$ takes the form
$$\widetilde A = O^{-1}AO = \begin{pmatrix} 1 & 0 & -1/\sqrt2 \\ 0 & 3 & -1/\sqrt2 \\ -1/\sqrt2 & -1/\sqrt2 & 2 \end{pmatrix}$$
since $\sin(\pi/4) = \cos(\pi/4) = 1/\sqrt2$.
Problem 21. Let A be an m x m matrix. Let B be an n x n matrix. Let X
be an m x n matrix such that
$$AX = XB. \tag{1}$$
We can find non-singular matrices V and W such that $V^{-1}AV = J_A$, $W^{-1}BW = J_B$,
where $J_A$, $J_B$ are the Jordan canonical forms of A and B, respectively. Show
that from (1) it follows that $J_AY = YJ_B$, where $Y := V^{-1}XW$.

Solution 21. From $AX = XB$ we have
$$VJ_AV^{-1}X = XWJ_BW^{-1} \;\Rightarrow\; VJ_AV^{-1}XW = XWJ_B \;\Rightarrow\; VJ_AY = XWJ_B$$
$$\Rightarrow\; J_AY = V^{-1}XWJ_B \;\Rightarrow\; J_AY = YJ_B.$$
Problem 22. (i) Let A be a square nonsingular matrix of order $n_A$ with LU
factorization $A = P_A^TL_AU_A$ and B be a square nonsingular matrix of order $n_B$
with LU factorization $B = P_B^TL_BU_B$. Find the LU factorization of $A \otimes B$.
(ii) Let A be a positive definite matrix of order $n_A$ with Cholesky factor $G_A$ and
B be a positive definite matrix of order $n_B$ with Cholesky factor $G_B$. Find the
Cholesky factorization of $A \otimes B$.
(iii) Let A be an $m_A \times n_A$ matrix with linearly independent columns and QR
factorization $A = Q_AR_A$, where $Q_A$ is an $m_A \times n_A$ matrix with orthonormal
columns and $R_A$ is an $n_A \times n_A$ upper triangular matrix. B is similarly defined
with $B = Q_BR_B$ as its QR factorization. Find the QR factorization of $A \otimes B$.
(iv) Let A be a square matrix of order $n_A$ with Schur decomposition $A = U_AT_AU_A^*$,
where $U_A$ is unitary and $T_A$ is upper triangular. Let B be a square
matrix of order $n_B$ with Schur decomposition $B = U_BT_BU_B^*$, where $U_B$ is unitary
and $T_B$ is upper triangular. Find the Schur decomposition of $A \otimes B$.
(v) Let A be an $m_A \times n_A$ matrix with singular value decomposition $A = U_A\Sigma_AV_A^T$
and B be an $m_B \times n_B$ matrix with singular value decomposition $B = U_B\Sigma_BV_B^T$.
Find the singular value decomposition of $A \otimes B$.

Solution 22. (i) We find
$$A \otimes B = (P_A^TL_AU_A) \otimes (P_B^TL_BU_B) = (P_A \otimes P_B)^T(L_A \otimes L_B)(U_A \otimes U_B).$$
(ii) We find
$$A \otimes B = (G_A^TG_A) \otimes (G_B^TG_B) = (G_A \otimes G_B)^T(G_A \otimes G_B).$$
(iii) We find
$$A \otimes B = (Q_AR_A) \otimes (Q_BR_B) = (Q_A \otimes Q_B)(R_A \otimes R_B).$$
(iv) We find
$$A \otimes B = (U_AT_AU_A^*) \otimes (U_BT_BU_B^*) = (U_A \otimes U_B)(T_A \otimes T_B)(U_A \otimes U_B)^*.$$
(v) We find
$$A \otimes B = (U_A\Sigma_AV_A^T) \otimes (U_B\Sigma_BV_B^T) = (U_A \otimes U_B)(\Sigma_A \otimes \Sigma_B)(V_A \otimes V_B)^T.$$
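These identities all rest on the mixed-product property $(A\otimes B)(C\otimes D) = (AC)\otimes(BD)$. The following NumPy/SciPy sketch (our addition, not part of the book) checks the QR case (iii), up to the usual sign ambiguity of QR factors, and the singular values from (v) on small random matrices.

```python
import numpy as np
from scipy.linalg import qr

# Random test matrices (illustration only)
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))

QA, RA = qr(A)
QB, RB = qr(B)

# (iii): A x B = (Q_A x Q_B)(R_A x R_B) is a QR factorization of A x B
assert np.allclose(np.kron(A, B), np.kron(QA, QB) @ np.kron(RA, RB))
# Q_A x Q_B has orthonormal columns; R_A x R_B is upper triangular
assert np.allclose(np.kron(QA, QB).T @ np.kron(QA, QB), np.eye(6))

# (v): the singular values of A x B are the pairwise products
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
sAB = np.linalg.svd(np.kron(A, B), compute_uv=False)
assert np.allclose(np.sort(sAB), np.sort(np.outer(sA, sB).ravel()))
```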
Problem 23. Write the matrix
$$H = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{pmatrix}$$
as the product of two 4 x 4 matrices A and B such that each of these matrices
has precisely two non-zero entries in each row.

Solution 23. We have
$$A = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & -i & 0 & i \end{pmatrix}$$
with $H = AB$.
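Here $H$ is the 4 x 4 Fourier matrix and $H = AB$ is a radix-2 "butterfly" factorization. A small NumPy check (our addition; it assumes the matrices exactly as printed above):

```python
import numpy as np

H = np.array([[1, 1, 1, 1],
              [1, -1j, -1, 1j],
              [1, -1, 1, -1],
              [1, 1j, -1, -1j]])
A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, -1, 0],
              [0, 1, 0, -1]])
B = np.array([[1, 0, 1, 0],
              [1, 0, -1, 0],
              [0, 1, 0, 1],
              [0, -1j, 0, 1j]])
assert np.allclose(A @ B, H)
# each factor has precisely two non-zero entries in each row
assert all(np.count_nonzero(row) == 2 for row in A)
assert all(np.count_nonzero(row) == 2 for row in B)
```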
Problem 24. An n x n matrix A is persymmetric if $J_nAJ_n = A^T$ and skew
persymmetric if $J_nAJ_n = -A^T$, where $J_n$ is the n x n exchange matrix, i.e. the
matrix with 1's on the counter diagonal and 0's otherwise. Hence $J_n^2 = I_n$. Let
B be an n x n symmetric matrix.
(i) Show that $B + J_nBJ_n$ is persymmetric.
(ii) Show that $B - J_nBJ_n$ is skew persymmetric.
(iii) Show that B can be written as the sum of a persymmetric matrix and a skew
persymmetric matrix.

Solution 24. (i) We have that
$$J_n(B + J_nBJ_n)J_n = J_nBJ_n + J_n^2BJ_n^2 = J_nBJ_n + B = (B + J_nBJ_n)^T$$
since $B^T = B$ and $J_n^T = J_n$.
(ii) We have that
$$J_n(B - J_nBJ_n)J_n = J_nBJ_n - B = -(B - J_nBJ_n) = -(B - J_nBJ_n)^T$$
since $B^T = B$ and $J_n^T = J_n$.
(iii) Clearly
$$B = \tfrac12(B + J_nBJ_n) + \tfrac12(B - J_nBJ_n)$$
where $\tfrac12(B + J_nBJ_n)$ is persymmetric and $\tfrac12(B - J_nBJ_n)$ is skew persymmetric.
Problem 25. Decompose the 3 x 3 matrix
$$B = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{pmatrix}$$
as the sum of a persymmetric and a skew persymmetric matrix.

Solution 25. The matrix
$$\frac12(B + J_3BJ_3) = \frac12\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{pmatrix} + \frac12 J_3\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{pmatrix}J_3 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}$$
is persymmetric, and the matrix
$$\frac12(B - J_3BJ_3) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}$$
is skew persymmetric. We have
$$\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} + \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}.$$
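The splitting is easy to verify numerically; a NumPy sketch (our addition, not part of the book):

```python
import numpy as np

J = np.fliplr(np.eye(3))          # exchange matrix J_3
B = np.array([[1., 0., 1.],
              [0., 0., 0.],
              [1., 0., -1.]])
P = (B + J @ B @ J) / 2           # persymmetric part
S = (B - J @ B @ J) / 2           # skew persymmetric part
assert np.allclose(P + S, B)
assert np.allclose(J @ P @ J, P.T)     # persymmetric
assert np.allclose(J @ S @ J, -S.T)    # skew persymmetric
```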
Supplementary Problems

Problem 1. If $A \in \mathbb{R}^{n\times n}$, then there exists an orthogonal $Q \in \mathbb{R}^{n\times n}$ such
that
$$Q^TAQ = \begin{pmatrix} R_{11} & R_{12} & \cdots & R_{1m} \\ 0 & R_{22} & \cdots & R_{2m} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & R_{mm} \end{pmatrix}$$
where each $R_{ii}$ is either a 1 x 1 matrix or a 2 x 2 matrix having complex conjugate
eigenvalues. Find Q for the 3 x 3 matrix
$$A = \begin{pmatrix} 0 & 1 & 0 \\ 2 & 0 & 3 \\ 0 & 4 & 0 \end{pmatrix}.$$
Then calculate $Q^TAQ$.
Problem 2. Find a cosine-sine decomposition of the matrices
$$\frac{1}{\sqrt2}\begin{pmatrix} i & i \\ i & -i \end{pmatrix}, \qquad \frac{1}{\sqrt2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$

Problem 3.
(i) Consider the 4 x 4 matrices
$$\Omega = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 0 & -1 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix}, \qquad \widetilde\Omega = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}.$$
Can one find 4 x 4 permutation matrices P, Q such that $\widetilde\Omega = P\Omega Q$?
(ii) Consider the 2n x 2n matrices
$$\Omega = \begin{pmatrix} 0 & 1 & & & \\ -1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 0 & 1 \\ & & & -1 & 0 \end{pmatrix}, \qquad \widetilde\Omega = \begin{pmatrix} 0_n & I_n \\ -I_n & 0_n \end{pmatrix}.$$
Can one find 2n x 2n permutation matrices P, Q such that $\widetilde\Omega = P\Omega Q$?
Problem 4. (i) Given the 3 x 2 matrix

Find the singular value decomposition of A.
(ii) Let $\alpha \in \mathbb{R}$. Find the singular value decomposition of the 2 x 3 matrix
$$X(\alpha) = \begin{pmatrix} \cos(\alpha) & \sin(\alpha) & 0 \\ 0 & \cos(\alpha) & \sin(\alpha) \end{pmatrix}.$$

Problem 5. The Cholesky decomposition of a positive semi-definite matrix
M is the decomposition into a product of a lower triangular matrix L and its
conjugate transpose $L^*$, i.e. $M = LL^*$. Find the Cholesky decomposition for
the density matrix (pure state)
Problem 6. Show that (cosine-sine decomposition)

Problem 7. Let U be an n x n unitary matrix. The matrix U can always be
diagonalized by a unitary matrix V such that
$$U = V\begin{pmatrix} e^{i\theta_1} & & 0 \\ & \ddots & \\ 0 & & e^{i\theta_n} \end{pmatrix}V^*$$
where $e^{i\theta_j}$, $\theta_j \in [0, 2\pi)$, are the eigenvalues of U. Let n = 2 and
Thus the eigenvalues of U are 1 and $-1$. Show that
$$U = V\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}V^* \qquad\text{with}\qquad V = V^*.$$

Problem 8. If $A \in SL(2,\mathbb{R})$, then it can uniquely be written in the form
$$A = \begin{pmatrix} \cos(\phi) & -\sin(\phi) \\ \sin(\phi) & \cos(\phi) \end{pmatrix}\exp\begin{pmatrix} a & b \\ b & -a \end{pmatrix}.$$
Find this decomposition for the 2 x 2 nonnormal matrix
Chapter 9
Functions of Matrices

Let p be a polynomial
$$p(x) = \sum_{j=0}^n c_jx^j$$
then the corresponding matrix function of an n x n matrix A over $\mathbb{C}$ can be
defined by
$$p(A) = \sum_{j=0}^n c_jA^j$$
with the convention $A^0 = I_n$. If a function f of a complex variable z has
a MacLaurin series expansion $f(z) = \sum_{j=0}^\infty c_jz^j$ which converges for $|z| < R$
($R > 0$), then the matrix series
$$f(A) = \sum_{j=0}^\infty c_jA^j$$
converges, provided A is square and each of its eigenvalues has absolute value
less than R. The most used matrix function is $\exp(A)$ defined by
$$\exp(A) := \sum_{j=0}^\infty \frac{A^j}{j!} = \lim_{m\to\infty}\left(I_n + \frac{A}{m}\right)^m.$$
For the analytic functions $\sinh(z)$ and $\cosh(z)$ we have
$$\sinh(A) := \sum_{j=0}^\infty \frac{A^{2j+1}}{(2j+1)!}, \qquad \cosh(A) := \sum_{j=0}^\infty \frac{A^{2j}}{(2j)!}.$$
Let M be an n x n matrix over $\mathbb{C}$. If there exists a matrix B such that $B^2 = M$,
then B is called a square root of M.
Problem 1. Consider the 2 x 2 nonnormal matrix
$$A = \begin{pmatrix} 3 & -4 \\ 1 & -1 \end{pmatrix}.$$
Calculate $A^n$, where $n \in \mathbb{N}$.

Solution 1. Note that the determinant of A is +1. Thus $\det(A^n) = +1$ for
all n. We have
$$A^2 = \begin{pmatrix} 5 & -8 \\ 2 & -3 \end{pmatrix}.$$
This suggests the ansatz
$$A^n = \begin{pmatrix} 1+2n & -4n \\ n & 1-2n \end{pmatrix}. \tag{1}$$
We use proof by induction. We see that n = 1 in (1) gives A. Suppose that (1)
is valid for n = k. Then we have
$$A^{k+1} = A^kA = \begin{pmatrix} 1+2(k+1) & -4(k+1) \\ k+1 & 1-2(k+1) \end{pmatrix}$$
showing that (1) is correct for n = k + 1. Thus (1) is proved by induction.
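A quick NumPy check of the ansatz (1) (our addition; the matrix A is taken as the n = 1 case of (1)):

```python
import numpy as np

A = np.array([[3, -4], [1, -1]])
for n in range(1, 8):
    An = np.linalg.matrix_power(A, n)
    assert np.array_equal(An, np.array([[1 + 2*n, -4*n], [n, 1 - 2*n]]))
    assert round(np.linalg.det(An)) == 1     # det(A^n) = +1
```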
Problem 2. Consider the 2 x 2 matrix $A = (a_{jk})$ over the complex numbers.
Find $\exp(A)$.

Solution 2. Set $s := (a_{11}+a_{22})/2$ and $\Delta := \sqrt{(a_{11}-a_{22})^2 + 4a_{12}a_{21}}$. If
$$(a_{11} - a_{22})^2 + 4a_{12}a_{21} = (\text{tr}(A))^2 - 4\det(A) = 0$$
then
$$\exp(A) = e^s(I_2 + A - sI_2).$$
If $(a_{11} - a_{22})^2 + 4a_{12}a_{21} = (\text{tr}(A))^2 - 4\det(A) \ne 0$, then
$$\exp(A) = e^s\left(\cosh(\Delta/2)I_2 + \frac{2\sinh(\Delta/2)}{\Delta}(A - sI_2)\right).$$
Problem 3. (i) Consider the 2 x 2 matrix
$$A = \begin{pmatrix} a_{11} & a_{12} \\ 0 & a_{22} \end{pmatrix}.$$
Find $\exp(tA)$.
(ii) Let $\alpha, \beta \in \mathbb{R}$ and
$$M(\alpha,\beta) = \begin{pmatrix} \alpha & -\beta \\ \beta & \alpha \end{pmatrix}.$$
Calculate $\exp(M(\alpha,\beta))$.

Solution 3. (i) If $a_{11} \ne a_{22}$ we have
$$\exp(tA) = \begin{pmatrix} e^{a_{11}t} & a_{12}\dfrac{e^{a_{11}t}-e^{a_{22}t}}{a_{11}-a_{22}} \\ 0 & e^{a_{22}t} \end{pmatrix}.$$
If $a_{11} = a_{22}$ we obtain
$$\exp(tA) = \begin{pmatrix} e^{a_{11}t} & a_{12}te^{a_{11}t} \\ 0 & e^{a_{11}t} \end{pmatrix}.$$
(ii) Since M can be written as
$$M(\alpha,\beta) = \alpha I_2 + \beta\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
and the two matrices on the right-hand side commute we have
$$\exp(M(\alpha,\beta)) = \exp(\alpha I_2)\exp\left(\beta\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\right) = \begin{pmatrix} e^\alpha\cos(\beta) & -e^\alpha\sin(\beta) \\ e^\alpha\sin(\beta) & e^\alpha\cos(\beta) \end{pmatrix}.$$

Problem 4.
Consider the 2 x 2 unitary matrix
$$U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Can we find an $\alpha \in \mathbb{R}$ such that $U = \exp(\alpha A)$, where
$$A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}?$$

Solution 4. Since $A^2 = -I_2$, $A^3 = -A$, $A^4 = I_2$ etc. we obtain
$$e^{\alpha A} = I_2\cos(\alpha) + A\sin(\alpha).$$
This leads to the matrix equation
$$\begin{pmatrix} \cos(\alpha) & \sin(\alpha) \\ -\sin(\alpha) & \cos(\alpha) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
which has no solution.
Problem 5. Let H be a hermitian matrix, i.e. $H = H^*$. Then $U := e^{iH}$ is a
unitary matrix. Let
$$H = \begin{pmatrix} a & b \\ \bar b & a \end{pmatrix}, \qquad a \in \mathbb{R},\ b \in \mathbb{C}$$
with $b \ne 0$.
(i) Calculate $e^{iH}$ using the normalized eigenvectors of H to construct a unitary
matrix V such that $V^*HV$ is a diagonal matrix.
(ii) Specify a, b such that we find the unitary matrix
$$U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

Solution 5. (i) The eigenvalues of H are given by $\lambda_1 = a + \sqrt{b\bar b}$, $\lambda_2 = a - \sqrt{b\bar b}$.
The corresponding normalized eigenvectors are
$$\mathbf{v}_1 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ \sqrt{b\bar b}/b \end{pmatrix}, \qquad \mathbf{v}_2 = \frac{1}{\sqrt2}\begin{pmatrix} 1 \\ -\sqrt{b\bar b}/b \end{pmatrix}.$$
Thus the unitary matrices V, $V^*$ which diagonalize H are
$$V = \frac{1}{\sqrt2}\begin{pmatrix} 1 & 1 \\ \sqrt{b\bar b}/b & -\sqrt{b\bar b}/b \end{pmatrix}, \qquad V^* = \frac{1}{\sqrt2}\begin{pmatrix} 1 & b/\sqrt{b\bar b} \\ 1 & -b/\sqrt{b\bar b} \end{pmatrix}$$
with
$$D := V^*HV = \begin{pmatrix} a+\sqrt{b\bar b} & 0 \\ 0 & a-\sqrt{b\bar b} \end{pmatrix}.$$
From $U = e^{iH}$ it follows that $V^*UV = V^*e^{iH}V = e^{iV^*HV} = e^{iD}$. Thus
$$e^{iD} = \begin{pmatrix} e^{i(a+\sqrt{b\bar b})} & 0 \\ 0 & e^{i(a-\sqrt{b\bar b})} \end{pmatrix}$$
and since $V^* = V^{-1}$ the unitary matrix U is given by $U = Ve^{iD}V^*$. We obtain
$$U = e^{ia}\begin{pmatrix} \cos(\sqrt{b\bar b}) & i(b/\sqrt{b\bar b})\sin(\sqrt{b\bar b}) \\ i(\sqrt{b\bar b}/b)\sin(\sqrt{b\bar b}) & \cos(\sqrt{b\bar b}) \end{pmatrix}.$$
(ii) If $a = \pi/2$ and $b = -\pi/2$ we find the unitary matrix given above, where
we used that $\cos(\pi/2) = 0$, $\sin(\pi/2) = 1$ and $e^{i\pi/2} = i$.
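The closed form of (i) and the choice of a, b in (ii) can be verified numerically (our addition, not part of the book):

```python
import numpy as np
from scipy.linalg import expm

a, b = np.pi / 2, -np.pi / 2
H = np.array([[a, b], [np.conj(b), a]])
r = np.sqrt(b * np.conj(b)).real          # sqrt(b bbar)
U = np.exp(1j * a) * np.array(
    [[np.cos(r), 1j * (b / r) * np.sin(r)],
     [1j * (np.conj(b) / r) * np.sin(r), np.cos(r)]])
assert np.allclose(U, expm(1j * H))
assert np.allclose(U, np.array([[0, 1], [1, 0]]))   # part (ii)
```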
Problem 6. Let $z \in \mathbb{C}$. Let A, B be n x n matrices over $\mathbb{C}$. We say that B is
invariant with respect to A if
$$e^{zA}Be^{-zA} = B.$$
Obviously $e^{-zA}$ is the inverse of $e^{zA}$. Show that, if this condition is satisfied
for all z, one has $[A,B] = 0_n$. If $e^{zA}$ is unitary we have $UBU^* = B$ with $U = e^{zA}$.

Solution 6. We set
$$f(z) := e^{zA}Be^{-zA}.$$
Then
$$\frac{df}{dz} = e^{zA}ABe^{-zA} - e^{zA}BAe^{-zA} = e^{zA}[A,B]e^{-zA}.$$
Since $f(z) = B$ is constant we have $df/dz = 0_n$, and as z is arbitrary we find
$[A,B] = 0_n$.
Problem 7. Let $z \in \mathbb{C}$.
(i) Consider the 2 x 2 matrices
$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} b_{11} & b_{12} \\ b_{12} & b_{11} \end{pmatrix}.$$
Calculate $\exp(zA)$, $\exp(-zA)$ and $\exp(zA)B\exp(-zA)$. Calculate the commutator $[A,B]$.
(ii) Consider the 2 x 2 matrices
$$C = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad D = \begin{pmatrix} d_{11} & d_{12} \\ -d_{12} & d_{11} \end{pmatrix}.$$
Calculate $\exp(zC)$, $\exp(-zC)$ and $\exp(zC)D\exp(-zC)$. Calculate the commutator $[C,D]$.
(iii) Consider the 2 x 2 matrices
$$E = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad F = \begin{pmatrix} f_{11} & f_{12} \\ -f_{12} & f_{11} \end{pmatrix}.$$
Calculate $\exp(zE)$, $\exp(-zE)$ and $\exp(zE)F\exp(-zE)$. Calculate the commutator $[E,F]$.

Solution 7. (i) We find
$$e^{zA} = \begin{pmatrix} \cosh(z) & \sinh(z) \\ \sinh(z) & \cosh(z) \end{pmatrix}, \qquad e^{-zA} = \begin{pmatrix} \cosh(z) & -\sinh(z) \\ -\sinh(z) & \cosh(z) \end{pmatrix}$$
and $\exp(zA)B\exp(-zA) = B$. We obtain $[A,B] = 0_2$.
(ii) We find
$$e^{zC} = \begin{pmatrix} \cosh(z) & -i\sinh(z) \\ i\sinh(z) & \cosh(z) \end{pmatrix}, \qquad e^{-zC} = \begin{pmatrix} \cosh(z) & i\sinh(z) \\ -i\sinh(z) & \cosh(z) \end{pmatrix}$$
and $\exp(zC)D\exp(-zC) = D$. We obtain $[C,D] = 0_2$.
(iii) We find
$$e^{zE} = \begin{pmatrix} \cos(z) & \sin(z) \\ -\sin(z) & \cos(z) \end{pmatrix}, \qquad e^{-zE} = \begin{pmatrix} \cos(z) & -\sin(z) \\ \sin(z) & \cos(z) \end{pmatrix}$$
and $\exp(zE)F\exp(-zE) = F$. We obtain $[E,F] = 0_2$.
Problem 8. Let $\sigma_1$, $\sigma_2$, $\sigma_3$ be the Pauli spin matrices. Consider the 2 x 2
matrix
$$U(\alpha,\beta,\gamma) = e^{-i\alpha\sigma_3/2}e^{-i\beta\sigma_2/2}e^{-i\gamma\sigma_3/2}$$
where $\alpha, \beta, \gamma$ are the three Euler angles with the range $0 \le \alpha < 2\pi$, $0 \le \beta \le \pi$
and $0 \le \gamma < 2\pi$. Show that
$$U(\alpha,\beta,\gamma) = \begin{pmatrix} e^{-i\alpha/2}\cos(\beta/2)e^{-i\gamma/2} & -e^{-i\alpha/2}\sin(\beta/2)e^{i\gamma/2} \\ e^{i\alpha/2}\sin(\beta/2)e^{-i\gamma/2} & e^{i\alpha/2}\cos(\beta/2)e^{i\gamma/2} \end{pmatrix}. \tag{1}$$

Solution 8. Since $\sigma_3^2 = I_2$ and $\sigma_2^2 = I_2$ we have
$$e^{-i\alpha\sigma_3/2} = I_2\cosh(i\alpha/2) - \sigma_3\sinh(i\alpha/2)$$
$$e^{-i\beta\sigma_2/2} = I_2\cosh(i\beta/2) - \sigma_2\sinh(i\beta/2)$$
$$e^{-i\gamma\sigma_3/2} = I_2\cosh(i\gamma/2) - \sigma_3\sinh(i\gamma/2)$$
where we used $\cosh(-z) = \cosh(z)$ and $\sinh(-z) = -\sinh(z)$. Using ($x \in \mathbb{R}$)
$$\cosh(ix) = \cos(x), \qquad \sinh(ix) = i\sin(x)$$
we arrive at
$$U(\alpha,\beta,\gamma) = \begin{pmatrix} e^{-i\alpha/2} & 0 \\ 0 & e^{i\alpha/2} \end{pmatrix}\begin{pmatrix} \cos(\beta/2) & -\sin(\beta/2) \\ \sin(\beta/2) & \cos(\beta/2) \end{pmatrix}\begin{pmatrix} e^{-i\gamma/2} & 0 \\ 0 & e^{i\gamma/2} \end{pmatrix}.$$
Applying matrix multiplication, equation (1) follows.
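Equation (1) can be checked numerically against the matrix exponentials (our addition; the Euler angles below are arbitrary sample values):

```python
import numpy as np
from scipy.linalg import expm

s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

al, be, ga = 0.7, 1.1, 2.3
U = expm(-1j*al*s3/2) @ expm(-1j*be*s2/2) @ expm(-1j*ga*s3/2)
c, s = np.cos(be/2), np.sin(be/2)
rhs = np.array(
    [[np.exp(-1j*al/2)*c*np.exp(-1j*ga/2), -np.exp(-1j*al/2)*s*np.exp(1j*ga/2)],
     [np.exp(1j*al/2)*s*np.exp(-1j*ga/2),  np.exp(1j*al/2)*c*np.exp(1j*ga/2)]])
assert np.allclose(U, rhs)
```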
Problem 9. (i) Let $a \in \mathbb{R}$. Consider the 3 x 3 symmetric matrix
$$A(a) = \begin{pmatrix} 0 & 0 & a \\ 0 & 0 & 0 \\ a & 0 & 0 \end{pmatrix}.$$
Find $\exp(A(a))$.
(ii) Consider the 3 x 3 matrix
$$B = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.$$
Let $\alpha \in \mathbb{R}$. Find $\exp(\alpha B)$.
(iii) Let $a_{12}, a_{13}, a_{23} \in \mathbb{R}$. Consider the skew-symmetric 3 x 3 matrix
$$A(a_{12},a_{13},a_{23}) = \begin{pmatrix} 0 & a_{12} & a_{13} \\ -a_{12} & 0 & a_{23} \\ -a_{13} & -a_{23} & 0 \end{pmatrix}.$$
Find $\exp(A)$.

Solution 9. (i) We obtain
$$e^{A(a)} = \begin{pmatrix} \cosh(a) & 0 & \sinh(a) \\ 0 & 1 & 0 \\ \sinh(a) & 0 & \cosh(a) \end{pmatrix}.$$
(ii) Since $B^3 = 0_3$ we obtain
$$\exp(\alpha B) = I_3 + \alpha B + \frac{\alpha^2}{2}B^2 = \begin{pmatrix} 1-\alpha^2/2 & -\alpha & -\alpha^2/2 \\ \alpha & 1 & \alpha \\ \alpha^2/2 & \alpha & 1+\alpha^2/2 \end{pmatrix}.$$
(iii) Let $\Delta := \sqrt{a_{12}^2 + a_{13}^2 + a_{23}^2}$. Then $A^3 + \Delta^2A = 0_3$ and we find
$$\exp(A) = I_3 + \frac{\sin(\Delta)}{\Delta}A + \frac{1-\cos(\Delta)}{\Delta^2}A^2.$$
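Part (iii) is the Rodrigues-type formula for the exponential of a skew-symmetric 3 x 3 matrix. A NumPy check (our addition; the sample entries are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

a12, a13, a23 = 0.3, -1.2, 0.8
A = np.array([[0, a12, a13],
              [-a12, 0, a23],
              [-a13, -a23, 0]])
d = np.sqrt(a12**2 + a13**2 + a23**2)
assert np.allclose(A @ A @ A, -d**2 * A)      # A^3 + Delta^2 A = 0
R = np.eye(3) + (np.sin(d)/d)*A + ((1 - np.cos(d))/d**2)*(A @ A)
assert np.allclose(R, expm(A))
```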
Problem 10. Let A be an n x n matrix over $\mathbb{C}$ with $A^2 = rA$, where $r \in \mathbb{C}$
and $r \ne 0$.
(i) Calculate $e^{zA}$, where $z \in \mathbb{C}$.
(ii) Let $U(z) = e^{zA}$ and let $z' \in \mathbb{C}$. Calculate $U(z)U(z')$.

Solution 10. (i) From $A^2 = rA$ we obtain $A^3 = r^2A$, $A^4 = r^3A$, ...,
$A^n = r^{n-1}A$. Thus we obtain
$$e^{zA} = \sum_{j=0}^\infty \frac{(zA)^j}{j!} = I_n + \frac{A}{r}\left(rz + \frac{(rz)^2}{2!} + \frac{(rz)^3}{3!} + \cdots\right) = I_n + \frac{A}{r}(e^{rz}-1).$$
(ii) Straightforward calculation yields
$$U(z)U(z') = I_n + \frac{A}{r}(e^{r(z+z')}-1) = U(z+z').$$
This result is obvious since $[A,A] = 0_n$ and therefore $e^{zA}e^{z'A} = e^{(z+z')A}$.
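Any nonzero rank-one matrix $A = \mathbf{u}\mathbf{v}^T$ satisfies $A^2 = rA$ with $r = \mathbf{v}^T\mathbf{u}$, which gives a convenient numerical test of (i) (our addition, not part of the book):

```python
import numpy as np
from scipy.linalg import expm

u = np.array([[1.0], [2.0], [-1.0]])
v = np.array([[0.5], [1.0], [3.0]])
A = u @ v.T                      # rank one
r = (v.T @ u).item()             # here r = -0.5; any r != 0 works
assert np.allclose(A @ A, r * A)
# e^{zA} = I + (A/r)(e^{rz} - 1); check at z = 1
assert np.allclose(expm(A), np.eye(3) + (np.expm1(r) / r) * A)
```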
Problem 11. Can we find a 2 x 2 matrix B over the real numbers $\mathbb{R}$ such that
$$\sin(B) = \begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix}? \tag{1}$$

Solution 11. Owing to the right-hand side of equation (1) the 2 x 2 matrix
B can only be of the form
$$B = \begin{pmatrix} x & y \\ 0 & x \end{pmatrix}$$
with $y \ne 0$. Straightforward calculation yields
$$\sin\begin{pmatrix} x & y \\ 0 & x \end{pmatrix} = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}\begin{pmatrix} x & y \\ 0 & x \end{pmatrix}^{2k+1} = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}\begin{pmatrix} x^{2k+1} & (2k+1)x^{2k}y \\ 0 & x^{2k+1} \end{pmatrix} = \begin{pmatrix} \sin(x) & y\cos(x) \\ 0 & \sin(x) \end{pmatrix}.$$
Thus $\sin(x) = 1$ and $y\cos(x) = 4$. From the first condition we find that $\cos(x) = 0$
and therefore the second condition $y\cos(x) = 4$ cannot be satisfied. Thus there
is no such matrix.
Problem 12. Let A be an n x n matrix over $\mathbb{C}$. Assume that $A^2 = cI_n$, where
$c \in \mathbb{C}$.
(i) Calculate $\exp(A)$.
(ii) Apply the result to the 2 x 2 matrix ($z \ne 0$)
$$B = \begin{pmatrix} 0 & z \\ -\bar z & 0 \end{pmatrix}.$$
Thus B is skew-hermitian, i.e. $B^* = -B$.

Solution 12. (i) We have $A^3 = cA$, $A^4 = c^2I_n$, $A^5 = c^2A$, $A^6 = c^3I_n$. Thus,
in general
$$A^m = \begin{cases} c^{m/2}I_n & m \text{ even} \\ c^{(m-1)/2}A & m \text{ odd.} \end{cases}$$
It follows that
$$e^A = I_n\left(1 + \frac{c}{2!} + \frac{c^2}{4!} + \cdots\right) + A\left(1 + \frac{c}{3!} + \frac{c^2}{5!} + \cdots\right) = I_n\cosh(\sqrt c) + \frac{A}{\sqrt c}\sinh(\sqrt c).$$
(ii) We have $B^2 = -rI_2$, $r = z\bar z$, $r > 0$. Thus the condition $B^2 = cI_2$ is satisfied
with $c = -r$. It follows that
$$e^B = I_2\cosh(\sqrt{-r}) + \frac{B}{\sqrt{-r}}\sinh(\sqrt{-r}).$$
With $\sqrt{-r} = i\sqrt r$, $\cosh(i\sqrt r) = \cos(\sqrt r)$, $\sinh(i\sqrt r) = i\sin(\sqrt r)$ we obtain
$$e^B = I_2\cos(\sqrt r) + \frac{B}{\sqrt r}\sin(\sqrt r).$$
Problem 13. For any n x n matrix A we have the identity
$$\det(e^A) = \exp(\text{tr}(A)).$$
(i) Any n x n unitary matrix U can be written as $U = \exp(iK)$, where K is a
hermitian matrix. Assume that $\det(U) = -1$. What can be said about the trace
of K?
(ii) Let A be an n x n matrix with $\text{tr}(A) = 0$. Calculate $\det(e^A)$.

Solution 13. (i) It follows that
$$-1 = \det(U) = \det(e^{iK}) = \exp(i\,\text{tr}(K)).$$
Thus $\text{tr}(K) = \pi \pm 2n\pi$ with $n \in \mathbb{N}$.
(ii) To calculate $\det(e^A)$ directly would be quite clumsy. We use the identity
given above. Since $\text{tr}(A) = 0$ we obtain $\det(\exp(A)) = e^0 = 1$.
Problem 14. Let A be an n x n matrix. Then $\exp(A)$ can also be calculated
as
$$\exp(A) = \lim_{m\to\infty}\left(I_n + \frac{A}{m}\right)^m.$$
Use this definition to show that $\det(e^A) = \exp(\text{tr}(A))$.

Solution 14. We have
$$\det(\exp(A)) = \det\left(\lim_{m\to\infty}(I_n + A/m)^m\right) = \lim_{m\to\infty}\det((I_n + A/m)^m) = \lim_{m\to\infty}(\det(I_n + A/m))^m$$
$$= \lim_{m\to\infty}(1 + \text{tr}(A)/m + O(m^{-2}))^m = \exp(\text{tr}(A)).$$
Problem 15. Let A, B be n x n matrices. Then we have the identity
$$\det(e^Ae^Be^{-A}e^{-B}) = \exp(\text{tr}([A,B])).$$
Show that $\det(e^Ae^Be^{-A}e^{-B}) = 1$.

Solution 15. Since $\text{tr}([A,B]) = 0$ and $e^0 = 1$ we have $\det(e^Ae^Be^{-A}e^{-B}) = 1$.
Problem 16. The MacLaurin series for $\arctan(z)$ is defined as
$$\arctan(z) = z - \frac{z^3}{3} + \frac{z^5}{5} - \frac{z^7}{7} + \cdots = \sum_{j=0}^\infty \frac{(-1)^jz^{2j+1}}{2j+1}$$
which converges for all complex values of z having absolute value less than 1,
i.e. $|z| < 1$. Let A be an n x n matrix. Thus the series expansion
$$\arctan(A) = A - \frac{A^3}{3} + \frac{A^5}{5} - \frac{A^7}{7} + \cdots = \sum_{j=0}^\infty \frac{(-1)^jA^{2j+1}}{2j+1}$$
is well-defined for A if all eigenvalues $\lambda$ of A satisfy $|\lambda| < 1$. Let A be a matrix
with eigenvalues 0 and 1. Does $\arctan(A)$ exist?

Solution 16. The eigenvalues of A are 0 and 1. Thus $\arctan(A)$ does not
exist.
Problem 17. Let A, B be n x n matrices over $\mathbb{C}$. Assume that
$$[A,[A,B]] = [B,[A,B]] = 0_n. \tag{1}$$
Show that
$$e^{A+B} = e^Ae^Be^{-\frac12[A,B]} \tag{2a}$$
$$e^{A+B} = e^Be^Ae^{+\frac12[A,B]}. \tag{2b}$$
Use the technique of parameter differentiation, i.e. consider the matrix-valued
function
$$f(\epsilon) := e^{\epsilon A}e^{\epsilon B}$$
where $\epsilon$ is a real parameter. Then take the derivative of f with respect to $\epsilon$.

Solution 17. If we differentiate f with respect to $\epsilon$ we find
$$\frac{df}{d\epsilon} = Ae^{\epsilon A}e^{\epsilon B} + e^{\epsilon A}e^{\epsilon B}B = (A + e^{\epsilon A}Be^{-\epsilon A})f(\epsilon)$$
since $e^{\epsilon A}e^{-\epsilon A} = I_n$. We have $e^{\epsilon A}Be^{-\epsilon A} = B + \epsilon[A,B]$, where we have taken (1)
into account. Thus we obtain the linear matrix-valued differential equation
$$\frac{df}{d\epsilon} = ((A+B) + \epsilon[A,B])f(\epsilon).$$
Since the matrix A + B commutes with $[A,B]$ we may treat A + B and $[A,B]$
as ordinary commuting variables and integrate this linear differential equation
with the initial condition $f(0) = I_n$. We find
$$f(\epsilon) = e^{\epsilon(A+B)+(\epsilon^2/2)[A,B]} = e^{\epsilon(A+B)}e^{(\epsilon^2/2)[A,B]}$$
where the last form follows since A + B commutes with $[A,B]$. If we set $\epsilon = 1$
and multiply both sides by $e^{-[A,B]/2}$ then (2a) follows. Likewise we can prove
the second form of the identity (2b).
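Identity (2a) can be tested with matrices satisfying (1), e.g. the nilpotent matrices $A = E_{12}$, $B = E_{23}$, whose commutator $E_{13}$ commutes with both (our addition, not part of the book):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])   # E_12
B = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])   # E_23
C = A @ B - B @ A                                          # [A,B] = E_13
# hypothesis (1): [A,B] commutes with A and with B
assert np.allclose(A @ C, C @ A) and np.allclose(B @ C, C @ B)
# identities (2a) and (2b)
assert np.allclose(expm(A + B), expm(A) @ expm(B) @ expm(-C / 2))
assert np.allclose(expm(A + B), expm(B) @ expm(A) @ expm(C / 2))
```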
Problem 18. Let A, B, $C_2$, ..., $C_m$, ... be n x n matrices over $\mathbb{C}$. The
Zassenhaus formula is given by
$$\exp(A+B) = \exp(A)\exp(B)\exp(C_2)\cdots\exp(C_m)\cdots.$$
The left-hand side is called the disentangled form and the right-hand side is called
the undisentangled form. Find $C_2$, $C_3$, ..., using the comparison method. In the
comparison method the disentangled and undisentangled forms are expanded in
terms of an ordering scalar $\alpha$ and matrix coefficients of equal powers of $\alpha$ are
compared. From
$$\exp(\alpha(A+B)) = \exp(\alpha A)\exp(\alpha B)\exp(\alpha^2C_2)\exp(\alpha^3C_3)\cdots$$
we obtain
$$\sum_{j=0}^\infty \frac{\alpha^j}{j!}(A+B)^j = \sum_{r_0,r_1,r_2,r_3,\ldots\ge 0} \frac{\alpha^{r_0+r_1+2r_2+3r_3+\cdots}}{r_0!\,r_1!\,r_2!\,r_3!\cdots}A^{r_0}B^{r_1}C_2^{r_2}C_3^{r_3}\cdots.$$
(i) Find $C_2$ and $C_3$.
(ii) Assume that $[A,[A,B]] = 0_n$ and $[B,[A,B]] = 0_n$. What conclusion can we
draw for the Zassenhaus formula?

Solution 18. (i) For $\alpha^2$ we have the decompositions $(r_0,r_1,r_2) = (2,0,0)$,
$(1,1,0)$, $(0,2,0)$, $(0,0,1)$. Thus we obtain
$$(A+B)^2 = A^2 + 2AB + B^2 + 2C_2.$$
Thus it follows that $C_2 = -\frac12[A,B]$. For $\alpha^3$ we obtain
$$(A+B)^3 = A^3 + 3A^2B + 3AB^2 + B^3 + 6AC_2 + 6BC_2 + 6C_3.$$
Using $C_2$ given above we obtain
$$C_3 = \frac13[B,[A,B]] + \frac16[A,[A,B]].$$
(ii) Since $[B,[A,B]] = 0_n$, $[A,[A,B]] = 0_n$ we find that $C_3 = C_4 = \cdots = 0_n$.
Thus
$$\exp(\alpha(A+B)) = \exp(\alpha A)\exp(\alpha B)\exp(-\alpha^2[A,B]/2).$$
Problem 19. Let A, B be n x n matrices and $t \in \mathbb{R}$. Show that
$$e^{t(A+B)} - e^{tA}e^{tB} = \frac{t^2}{2}(BA - AB) + \text{higher order terms in } t. \tag{1}$$

Solution 19. We have
$$e^{tA}e^{tB} = \left(I_n + tA + \frac{t^2}{2}A^2 + \cdots\right)\left(I_n + tB + \frac{t^2}{2}B^2 + \cdots\right) = I_n + t(A+B) + \frac{t^2}{2}(A^2 + B^2 + 2AB) + \cdots$$
and
$$e^{t(A+B)} = I_n + t(A+B) + \frac{t^2}{2}(A+B)^2 + \cdots = I_n + t(A+B) + \frac{t^2}{2}(A^2 + B^2 + AB + BA) + \cdots.$$
Subtracting the two equations yields (1).
Problem 20. Let A be an n x n matrix with $A^3 = -A$ and $\alpha \in \mathbb{R}$. Calculate
$\exp(\alpha A)$.

Solution 20. From $A^3 = -A$ it follows that $A^{2k+1} = (-1)^kA$ and
$A^{2k} = (-1)^{k-1}A^2$ for $k = 1, 2, \ldots$. Thus
$$\exp(\alpha A) = I_n + \sum_{k=0}^\infty \frac{\alpha^{2k+1}A^{2k+1}}{(2k+1)!} + \sum_{k=1}^\infty \frac{\alpha^{2k}A^{2k}}{(2k)!}$$
$$= I_n + A\sum_{k=0}^\infty \frac{(-1)^k\alpha^{2k+1}}{(2k+1)!} + A^2\sum_{k=1}^\infty \frac{(-1)^{k-1}\alpha^{2k}}{(2k)!}$$
$$= I_n + A\sin(\alpha) + A^2(1 - \cos(\alpha)).$$
Problem 21. Let X be an n x n matrix over $\mathbb{C}$. Assume that $X^2 = I_n$. Let
Y be an arbitrary n x n matrix over $\mathbb{C}$. Let $z \in \mathbb{C}$.
(i) Calculate $\exp(zX)Y\exp(-zX)$ using the Baker-Campbell-Hausdorff formula
$$e^{zX}Ye^{-zX} = Y + z[X,Y] + \frac{z^2}{2!}[X,[X,Y]] + \frac{z^3}{3!}[X,[X,[X,Y]]] + \cdots.$$
(ii) Calculate $\exp(zX)Y\exp(-zX)$ by first calculating $\exp(zX)$ and $\exp(-zX)$
and then doing the matrix multiplication. Compare the two methods.

Solution 21. (i) Using $X^2 = I_n$ we find for the first three commutators
$$[X,[X,Y]] = [X, XY - YX] = 2(Y - XYX)$$
$$[X,[X,[X,Y]]] = 2^2[X,Y]$$
$$[X,[X,[X,[X,Y]]]] = 2^3(Y - XYX).$$
If the number of X's in the commutator is even (say m, $m \ge 2$) we have
$$[X,[X,\ldots[X,Y]\ldots]] = 2^{m-1}(Y - XYX).$$
If the number of X's in the commutator is odd (say m, $m \ge 3$) we have
$$[X,[X,\ldots[X,Y]\ldots]] = 2^{m-1}[X,Y].$$
Thus
$$e^{zX}Ye^{-zX} = Y + [X,Y]\left(z + \frac{2^2z^3}{3!} + \cdots\right) + (Y - XYX)\left(\frac{2z^2}{2!} + \frac{2^3z^4}{4!} + \cdots\right)$$
$$= Y + [X,Y]\frac{\sinh(2z)}{2} + (Y - XYX)\frac{\cosh(2z)-1}{2}.$$
Consequently
$$e^{zX}Ye^{-zX} = Y\cosh^2(z) + [X,Y]\sinh(z)\cosh(z) - XYX\sinh^2(z).$$
(ii) Using the expansion
$$e^{zX} = \sum_{j=0}^\infty \frac{(zX)^j}{j!}$$
and $X^2 = I_n$ we have
$$e^{zX} = I_n\cosh(z) + X\sinh(z), \qquad e^{-zX} = I_n\cosh(z) - X\sinh(z).$$
Matrix multiplication yields
$$e^{zX}Ye^{-zX} = Y\cosh^2(z) + [X,Y]\sinh(z)\cosh(z) - XYX\sinh^2(z).$$
Problem 22. Let K be a hermitian matrix. Then $U := \exp(iK)$ is a unitary
matrix. A method to find the hermitian matrix K from the unitary matrix U is
to consider the principal logarithm of a matrix $A \in \mathbb{C}^{n\times n}$ with no eigenvalues
on $\mathbb{R}^-$ (the closed negative real axis). This logarithm is denoted by $\log(A)$
and is the unique matrix B such that $\exp(B) = A$ and the eigenvalues of B
have imaginary parts lying strictly between $-\pi$ and $\pi$. For $A \in \mathbb{C}^{n\times n}$ with no
eigenvalues on $\mathbb{R}^-$ we have the following integral representation
$$\log(A) = \int_0^1 (A - I_n)(t(A - I_n) + I_n)^{-1}\,dt.$$
Thus with $B = iK = \log(U)$ we obtain $K = -i\log(U)$. Find $\log(U)$ of the
unitary matrix
$$U = \frac{1}{\sqrt2}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}.$$
First test whether the method can be applied.

Solution 22. We calculate $\log(U)$ to find $iK$ given by $U = \exp(iK)$. We set
$B = iK$ in the following. The eigenvalues of U are given by
$$\lambda_1 = \frac{1}{\sqrt2}(1+i), \qquad \lambda_2 = \frac{1}{\sqrt2}(1-i).$$
Thus the condition to apply the integral representation is satisfied. We consider
first the general case $U = (u_{jk})$ and then simplify to $u_{11} = u_{22} = 1/\sqrt2$ and
$u_{21} = -u_{12} = 1/\sqrt2$. We obtain
$$t(U - I_2) + I_2 = \begin{pmatrix} 1 + t(u_{11}-1) & tu_{12} \\ tu_{21} & 1 + t(u_{22}-1) \end{pmatrix}$$
and
$$d(t) := \det(t(U - I_2) + I_2) = 1 + t(-2 + \text{tr}(U)) + t^2(1 - \text{tr}(U) + \det(U)).$$
Let $\chi = \det(U) - \text{tr}(U) + 1$. Then
$$(U - I_2)(t(U - I_2) + I_2)^{-1} = \frac{1}{d(t)}\begin{pmatrix} t\chi + u_{11} - 1 & u_{12} \\ u_{21} & t\chi + u_{22} - 1 \end{pmatrix}.$$
With $u_{11} = u_{22} = 1/\sqrt2$, $u_{21} = -u_{12} = 1/\sqrt2$ we obtain
$$d(t) = 1 + t(-2 + \sqrt2) + t^2(2 - \sqrt2)$$
and $\chi = 2 - \sqrt2$. Thus the matrix takes the form
$$\frac{1}{d(t)}\begin{pmatrix} t(2-\sqrt2) + 1/\sqrt2 - 1 & -1/\sqrt2 \\ 1/\sqrt2 & t(2-\sqrt2) + 1/\sqrt2 - 1 \end{pmatrix}.$$
Since
$$\int_0^1 \frac{t(2-\sqrt2) + 1/\sqrt2 - 1}{d(t)}\,dt = 0, \qquad \frac{1}{\sqrt2}\int_0^1 \frac{dt}{d(t)} = \frac{\pi}{4}$$
we obtain
$$\log(U) = \begin{pmatrix} 0 & -\pi/4 \\ \pi/4 & 0 \end{pmatrix}, \qquad K = \begin{pmatrix} 0 & i\pi/4 \\ -i\pi/4 & 0 \end{pmatrix}.$$
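The result agrees with SciPy's principal matrix logarithm (our addition, not part of the book):

```python
import numpy as np
from scipy.linalg import logm, expm

U = np.array([[1, -1], [1, 1]]) / np.sqrt(2)
L = logm(U)
assert np.allclose(L, np.array([[0, -np.pi/4], [np.pi/4, 0]]))
K = -1j * L                       # U = exp(iK), K hermitian
assert np.allclose(K, K.conj().T)
assert np.allclose(expm(1j * K), U)
```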
Problem 23. Consider the matrix ($\zeta \in \mathbb{R}$)
$$S(\zeta) = \begin{pmatrix} \cosh(\zeta) & 0 & 0 & \sinh(\zeta) \\ 0 & \cosh(\zeta) & \sinh(\zeta) & 0 \\ 0 & \sinh(\zeta) & \cosh(\zeta) & 0 \\ \sinh(\zeta) & 0 & 0 & \cosh(\zeta) \end{pmatrix}.$$
(i) Show that the matrix is invertible, i.e. find the determinant.
(ii) Calculate the inverse of $S(\zeta)$.
(iii) Calculate
$$A := \frac{dS(\zeta)}{d\zeta}\bigg|_{\zeta=0}$$
and then calculate $\exp(\zeta A)$.
(iv) Do the matrices $S(\zeta)$ form a group under matrix multiplication?

Solution 23. (i) We find $\det(S(\zeta)) = (\cosh^2(\zeta) - \sinh^2(\zeta))^2 = 1$.
(ii) The inverse is obtained by setting $\zeta \to -\zeta$, i.e.
$$S^{-1}(\zeta) = S(-\zeta) = \begin{pmatrix} \cosh(\zeta) & 0 & 0 & -\sinh(\zeta) \\ 0 & \cosh(\zeta) & -\sinh(\zeta) & 0 \\ 0 & -\sinh(\zeta) & \cosh(\zeta) & 0 \\ -\sinh(\zeta) & 0 & 0 & \cosh(\zeta) \end{pmatrix}.$$
(iii) We have
$$A = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.$$
Since $A^2 = I_4$ it follows that $\exp(\zeta A) = I_4\cosh(\zeta) + A\sinh(\zeta) = S(\zeta)$.
(iv) Yes. We have $S(\zeta_1)S(\zeta_2) = S(\zeta_1+\zeta_2)$, and for $\zeta = 0$ we have the identity
matrix $I_4$.
Problem 24. Let M be an n x n matrix with $m_{jk} = 1$ for all $j, k = 1, \ldots, n$.
Let $z \in \mathbb{C}$. Find $\exp(zM)$. Then consider the special case $zn = i\pi$.

Solution 24. Since
$$M^k = n^{k-1}M, \qquad k = 1, 2, \ldots$$
we obtain $e^{zM} = I_n + \frac{1}{n}(e^{zn} - 1)M$. If $zn = i\pi$ we obtain
$$e^{zM} = I_n - \frac{2}{n}M.$$
Problem 25. Let X, Y be n x n matrices. Show that
$$[e^X, Y] = \sum_{k=1}^\infty \frac{[X^k, Y]}{k!}.$$

Solution 25. We have
$$[e^X, Y] = e^XY - Ye^X = \left(I_n + X + \frac{X^2}{2!} + \cdots\right)Y - Y\left(I_n + X + \frac{X^2}{2!} + \cdots\right)$$
$$= [X,Y] + \frac{1}{2!}[X^2,Y] + \frac{1}{3!}[X^3,Y] + \cdots = \sum_{k=1}^\infty \frac{[X^k,Y]}{k!}.$$
Problem 26. Let A, B be n x n matrices over $\mathbb{C}$ and $\alpha \in \mathbb{C}$. The Baker-Campbell-Hausdorff formula states that
$$e^{\alpha A}Be^{-\alpha A} = B + \alpha[A,B] + \frac{\alpha^2}{2!}[A,[A,B]] + \cdots = \sum_{j=0}^\infty \frac{\alpha^j}{j!}[A^{(j)},B] =: B(\alpha)$$
where $[A,B] := AB - BA$ and $[A^{(j)},B] := [A,[A^{(j-1)},B]]$ is the repeated commutator, with $[A^{(0)},B] := B$.
(i) Extend the formula to $e^{\alpha A}B^ke^{-\alpha A}$, where $k > 1$.
(ii) Extend the formula to $e^{\alpha A}e^Be^{-\alpha A}$.

Solution 26. (i) Using that $e^{-\alpha A}e^{\alpha A} = I_n$ we have
$$e^{\alpha A}B^ke^{-\alpha A} = e^{\alpha A}Be^{-\alpha A}\,e^{\alpha A}B^{k-1}e^{-\alpha A} = B(\alpha)\,e^{\alpha A}B^{k-1}e^{-\alpha A} = (B(\alpha))^k.$$
(ii) Using the result from (i) we obtain $e^{\alpha A}e^Be^{-\alpha A} = \exp(B(\alpha))$.
Problem 27. Consider the n x n matrix ($n \ge 2$)
$$A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
Let $f: \mathbb{R} \to \mathbb{R}$ be an analytic function. Calculate
$$f(0)I_n + \frac{f'(0)}{1!}A + \frac{f''(0)}{2!}A^2 + \cdots + \frac{f^{(n-1)}(0)}{(n-1)!}A^{n-1}$$
where $'$ denotes differentiation. Discuss.

Solution 27. We obtain the upper triangular Toeplitz matrix
$$f(A) = \begin{pmatrix} f(0) & f'(0)/1! & f''(0)/2! & \cdots & f^{(n-1)}(0)/(n-1)! \\ 0 & f(0) & f'(0)/1! & \cdots & f^{(n-2)}(0)/(n-2)! \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & 0 & \cdots & f(0) \end{pmatrix}.$$
Since $A^n$ is the zero matrix the expansion given above is $f(A)$.
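For $f = \exp$ and n = 4 all derivatives at 0 equal 1, so $f(A)$ is the upper triangular Toeplitz matrix with entries $1/k!$; a NumPy check (our addition):

```python
import math
import numpy as np
from scipy.linalg import expm

n = 4
A = np.eye(n, k=1)               # the shift matrix above
F = sum(np.linalg.matrix_power(A, k) / math.factorial(k) for k in range(n))
assert np.allclose(F, expm(A))   # A^n = 0, so the finite sum is exact
assert np.allclose(F, np.array([[1, 1, 1/2, 1/6],
                                [0, 1, 1, 1/2],
                                [0, 0, 1, 1],
                                [0, 0, 0, 1]]))
```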
Problem 28. Let A, B be n x n matrices with $A^2 = I_n$ and $B^2 = I_n$. Assume
that $[A,B] = 0_n$. Let $a, b \in \mathbb{C}$. Calculate $e^{aA+bB}$.

Solution 28. Since $[A,B] = 0_n$ we have $e^{aA+bB} = e^{aA}e^{bB}$. Now
$$e^{aA} = I_n + aA + \frac{a^2}{2!}A^2 + \frac{a^3}{3!}A^3 + \cdots = I_n\left(1 + \frac{a^2}{2!} + \cdots\right) + A\left(a + \frac{a^3}{3!} + \cdots\right) = I_n\cosh(a) + A\sinh(a).$$
Analogously $e^{bB} = I_n\cosh(b) + B\sinh(b)$. Thus
$$e^{aA+bB} = I_n\cosh(a)\cosh(b) + A\sinh(a)\cosh(b) + B\sinh(b)\cosh(a) + AB\sinh(a)\sinh(b).$$
Problem 29. Let A, B be n x n matrices with $A^2 = I_n$ and $B^2 = I_n$. Assume
that $[A,B]_+ = 0_n$. Let $a, b \in \mathbb{C}$. Calculate $e^{aA+bB}$.

Solution 29. We have
$$e^{aA+bB} = I_n + (aA+bB) + \frac{1}{2!}(aA+bB)^2 + \frac{1}{3!}(aA+bB)^3 + \cdots.$$
Since
$$(aA+bB)^2 = a^2A^2 + b^2B^2 + ab(AB+BA) = (a^2+b^2)I_n$$
$$(aA+bB)^3 = (a^2+b^2)(aA+bB)$$
we have in general
$$(aA+bB)^n = (a^2+b^2)^{n/2}I_n \quad (n \text{ even}), \qquad (aA+bB)^n = (a^2+b^2)^{(n-1)/2}(aA+bB) \quad (n \text{ odd}).$$
Thus
$$e^{aA+bB} = I_n\cosh(\sqrt{a^2+b^2}) + \frac{aA+bB}{\sqrt{a^2+b^2}}\sinh(\sqrt{a^2+b^2}).$$
Problem 30. (i) Let A, B be n x n matrices over $\mathbb{C}$. Assume that $A^2 = I_n$
and $B^2 = I_n$. Find $\exp(\alpha A \otimes \beta B)$, where $\alpha, \beta \in \mathbb{C}$.
(ii) Let A, B, C be n x n matrices over $\mathbb{C}$ such that $A^2 = I_n$, $B^2 = I_n$ and
$C^2 = I_n$. Furthermore assume that
$$[A,B]_+ = 0_n, \qquad [B,C]_+ = 0_n, \qquad [C,A]_+ = 0_n.$$
Let $\alpha, \beta, \gamma \in \mathbb{C}$. Calculate $e^{\alpha A+\beta B+\gamma C}$ using
$$e^{\alpha A+\beta B+\gamma C} = \sum_{j=0}^\infty \frac{(\alpha A+\beta B+\gamma C)^j}{j!}.$$

Solution 30. (i) Since $(\alpha A \otimes \beta B)^2 = \alpha^2\beta^2(I_n \otimes I_n)$ we find
$$\exp(\alpha A \otimes \beta B) = (I_n \otimes I_n)\cosh(\alpha\beta) + (A \otimes B)\sinh(\alpha\beta).$$
(ii) Since the anticommutators vanish we have
$$(\alpha A+\beta B+\gamma C)^2 = (\alpha^2+\beta^2+\gamma^2)I_n$$
$$(\alpha A+\beta B+\gamma C)^3 = (\alpha^2+\beta^2+\gamma^2)(\alpha A+\beta B+\gamma C).$$
In general we have for positive n
$$(\alpha A+\beta B+\gamma C)^n = (\alpha^2+\beta^2+\gamma^2)^{n/2}I_n \quad \text{for } n \text{ even}$$
and
$$(\alpha A+\beta B+\gamma C)^n = (\alpha^2+\beta^2+\gamma^2)^{(n-1)/2}(\alpha A+\beta B+\gamma C) \quad \text{for } n \text{ odd.}$$
Thus we have the expansion ($\delta^2 := \alpha^2+\beta^2+\gamma^2$)
$$e^{\alpha A+\beta B+\gamma C} = I_n\left(1 + \frac{\delta^2}{2!} + \frac{(\delta^2)^2}{4!} + \cdots\right) + \frac{\alpha A+\beta B+\gamma C}{\delta}\left(\delta + \frac{\delta^3}{3!} + \cdots\right).$$
This can be summed up to
$$e^{\alpha A+\beta B+\gamma C} = I_n\cosh(\delta) + \frac{\alpha A+\beta B+\gamma C}{\delta}\sinh(\delta).$$
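The Pauli matrices square to $I_2$ and pairwise anticommute, so they satisfy the hypotheses of (ii); a numerical check (our addition; sample coefficients are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

al, be, ga = 0.4, -0.9, 1.3
M = al*s1 + be*s2 + ga*s3
d = np.sqrt(al**2 + be**2 + ga**2)
rhs = np.cosh(d)*np.eye(2) + (np.sinh(d)/d)*M
assert np.allclose(expm(M), rhs)
```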
Problem 31. Let A be an n x n matrix over $\mathbb{C}$. Assume that all eigenvalues
$\lambda_1, \ldots, \lambda_n$ are pairwise distinct.
(i) Then $e^{tA}$ can be calculated as follows (Lagrange interpolation)
$$e^{tA} = \sum_{j=1}^n e^{\lambda_jt}\prod_{k=1,\,k\ne j}^n \frac{A - \lambda_kI_n}{\lambda_j - \lambda_k}.$$
Consider the Pauli spin matrix $\sigma_1$. Calculate $e^{t\sigma_1}$ using this method.
(ii) $e^{tA}$ can also be calculated as follows (Newton interpolation)
$$e^{tA} = e^{\lambda_1t}I_n + \sum_{j=2}^n [\lambda_1,\ldots,\lambda_j]\prod_{k=1}^{j-1}(A - \lambda_kI_n).$$
The divided differences $[\lambda_1,\ldots,\lambda_j]$ depend on t and are defined recursively by
$$[\lambda_1,\lambda_2] = \frac{e^{\lambda_1t} - e^{\lambda_2t}}{\lambda_1 - \lambda_2}$$
$$[\lambda_1,\ldots,\lambda_{k+1}] = \frac{[\lambda_1,\ldots,\lambda_k] - [\lambda_2,\ldots,\lambda_{k+1}]}{\lambda_1 - \lambda_{k+1}}, \qquad k \ge 2.$$
Calculate $e^{t\sigma_1}$ using this method.

Solution 31. (i) The eigenvalues of $\sigma_1$ are $\lambda_1 = 1$ and $\lambda_2 = -1$. Thus
$$e^{t\sigma_1} = e^t\frac{\sigma_1 - \lambda_2I_2}{\lambda_1 - \lambda_2} + e^{-t}\frac{\sigma_1 - \lambda_1I_2}{\lambda_2 - \lambda_1} = \frac{e^t}{2}(\sigma_1 + I_2) + \frac{e^{-t}}{2}(I_2 - \sigma_1).$$
It follows that
$$e^{t\sigma_1} = \frac12\begin{pmatrix} e^t + e^{-t} & e^t - e^{-t} \\ e^t - e^{-t} & e^t + e^{-t} \end{pmatrix} = \begin{pmatrix} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{pmatrix}.$$
(ii) We have n = 2 and the eigenvalues of $\sigma_1$ are $\lambda_1 = 1$ and $\lambda_2 = -1$. Thus
$$e^{t\sigma_1} = e^{\lambda_1t}I_2 + [\lambda_1,\lambda_2]\prod_{k=1}^1(\sigma_1 - \lambda_kI_2) = e^tI_2 + [\lambda_1,\lambda_2](\sigma_1 - I_2).$$
Since
$$[\lambda_1,\lambda_2] = \frac{e^t - e^{-t}}{2}$$
it follows that
$$e^{t\sigma_1} = \frac12\begin{pmatrix} e^t + e^{-t} & e^t - e^{-t} \\ e^t - e^{-t} & e^t + e^{-t} \end{pmatrix} = \begin{pmatrix} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{pmatrix}.$$

Problem 32.
Let A be an n x n matrix. The characteristic polynomial
$$p(\lambda) = \det(\lambda I_n - A) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_n$$
is closely related to the resolvent $(\lambda I_n - A)^{-1}$ through the formula
$$(\lambda I_n - A)^{-1} = \frac{N_1\lambda^{n-1} + N_2\lambda^{n-2} + \cdots + N_n}{\lambda^n + a_1\lambda^{n-1} + \cdots + a_n} = \frac{N(\lambda)}{p(\lambda)}$$
where the adjugate matrix $N(\lambda)$ is a polynomial in $\lambda$ of degree $n-1$ with
constant n x n coefficient matrices $N_1, \ldots, N_n$. The Laplace transform of the
matrix exponential is the resolvent
$$\mathcal{L}(e^{tA}) = (\lambda I_n - A)^{-1}.$$
The $N_k$ matrices and $a_k$ coefficients may be computed recursively as follows
$$N_1 = I_n, \qquad a_1 = -\text{tr}(AN_1)$$
$$N_2 = AN_1 + a_1I_n, \qquad a_2 = -\tfrac12\text{tr}(AN_2)$$
$$\vdots$$
$$N_n = AN_{n-1} + a_{n-1}I_n, \qquad a_n = -\tfrac1n\text{tr}(AN_n)$$
$$0_n = AN_n + a_nI_n.$$
Consider the 2 x 2 unitary matrix
$$U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
Find the $N_k$ matrices and the coefficients $a_k$ and thus calculate the resolvent.

Solution 32. We have $N_1 = I_2$, $a_1 = -\text{tr}(U) = 0$, $N_2 = U$,
$a_2 = -\frac12\text{tr}(UN_2) = -1$. Thus the resolvent is
$$(\lambda I_2 - U)^{-1} = \frac{1}{\lambda^2 - 1}(\lambda I_2 + U).$$
Problem 33. (i) Let A, B be n x n matrices over $\mathbb{C}$. Calculate $e^ABe^A$. Set
$f(\epsilon) = e^{\epsilon A}Be^{\epsilon A}$, where $\epsilon$ is a real parameter. Then differentiate with respect to
$\epsilon$. For $\epsilon = 1$ we have $e^ABe^A$.
(ii) Let $\epsilon \in \mathbb{R}$. Calculate the matrix-valued function
$$f(\epsilon) = e^{-\epsilon\sigma_2}\sigma_3e^{\epsilon\sigma_2}.$$
Differentiate the matrix-valued function f with respect to $\epsilon$ and solve the initial
value problem of the resulting ordinary differential equation.

Solution 33. (i) From $f(\epsilon) = e^{\epsilon A}Be^{\epsilon A}$ we obtain
$$\frac{df}{d\epsilon} = e^{\epsilon A}ABe^{\epsilon A} + e^{\epsilon A}BAe^{\epsilon A} = e^{\epsilon A}[A,B]_+e^{\epsilon A}.$$
The second derivative yields
$$\frac{d^2f}{d\epsilon^2} = e^{\epsilon A}A[A,B]_+e^{\epsilon A} + e^{\epsilon A}[A,B]_+Ae^{\epsilon A} = e^{\epsilon A}[A,[A,B]_+]_+e^{\epsilon A}$$
etc. Using the Taylor expansion
$$f(\epsilon) = B + \frac{\epsilon}{1!}\frac{df}{d\epsilon}(0) + \frac{\epsilon^2}{2!}\frac{d^2f}{d\epsilon^2}(0) + \frac{\epsilon^3}{3!}\frac{d^3f}{d\epsilon^3}(0) + \cdots$$
and therefore
$$e^ABe^A = B + \frac{1}{1!}[A,B]_+ + \frac{1}{2!}[A,[A,B]_+]_+ + \frac{1}{3!}[A,[A,[A,B]_+]_+]_+ + \cdots.$$
Since $e^ABe^{-A} = e^ABe^Ae^{-2A}$ we also have
$$e^ABe^{-A} = \left(B + \frac{1}{1!}[A,B]_+ + \frac{1}{2!}[A,[A,B]_+]_+ + \cdots\right)e^{-2A}.$$
(ii) We obtain
$$\frac{df}{d\epsilon} = -e^{-\epsilon\sigma_2}[\sigma_2,\sigma_3]e^{\epsilon\sigma_2} = -2ie^{-\epsilon\sigma_2}\sigma_1e^{\epsilon\sigma_2}$$
and
$$\frac{d^2f}{d\epsilon^2} = -2ie^{-\epsilon\sigma_2}(2i\sigma_3)e^{\epsilon\sigma_2} = 4f(\epsilon).$$
Thus we have to solve the linear matrix differential equation
$$\frac{d^2f}{d\epsilon^2} = 4f(\epsilon)$$
with the initial conditions $f(0) = \sigma_3$ and $df(0)/d\epsilon = -2i\sigma_1$. We obtain the
solution
$$f(\epsilon) = \frac12(\sigma_3 - i\sigma_1)e^{2\epsilon} + \frac12(\sigma_3 + i\sigma_1)e^{-2\epsilon}.$$
Problem 34. Let A be an n x n matrix over $\mathbb{C}$. Let T be a nilpotent matrix
over $\mathbb{C}$ satisfying $T^*A + AT = 0_n$. Show that $(e^T)^*Ae^T = A$.

Solution 34. Since T is nilpotent we have $T^k = 0_n$ for some non-negative
integer k, so all series below are finite sums. From $T^*A = -AT$ it follows that
$(T^*)^jA = A(-T)^j$. Thus
$$(e^T)^*A = e^{T^*}A = \sum_{j=0}^\infty \frac{(T^*)^j}{j!}A = \sum_{j=0}^\infty A\frac{(-T)^j}{j!} = Ae^{-T}.$$
It follows that $(e^T)^*Ae^T = Ae^{-T}e^T = A$.
Problem 35. (i) Let $\Pi$ be an n x n projection matrix. Let $\epsilon \in \mathbb{R}$. Calculate
$\exp(\epsilon\Pi)$.
(ii) Let $\Pi_1, \Pi_2, \ldots, \Pi_n$ be n x n projection matrices. Assume that $\Pi_j\Pi_k = 0_n$
($j \ne k$) for all $j, k = 1, \ldots, n$. Let $\epsilon_j \in \mathbb{R}$ with $j = 1, \ldots, n$. Calculate
$\exp(\epsilon_1\Pi_1 + \epsilon_2\Pi_2 + \cdots + \epsilon_n\Pi_n)$. Assume additionally that $\Pi_1 + \Pi_2 + \cdots + \Pi_n = I_n$.
Simplify the result using this condition.

Solution 35. (i) Using $\Pi^2 = \Pi$ we find
$$\exp(\epsilon\Pi) = I_n + \epsilon\Pi + \frac{\epsilon^2}{2!}\Pi^2 + \frac{\epsilon^3}{3!}\Pi^3 + \cdots = I_n + \left(\epsilon + \frac{\epsilon^2}{2!} + \frac{\epsilon^3}{3!} + \cdots\right)\Pi = I_n + (e^\epsilon - 1)\Pi.$$
(ii) Using $\Pi_j^2 = \Pi_j$ and $\Pi_j\Pi_k = 0_n$ for $j \ne k$ we find
$$e^{\epsilon_1\Pi_1+\cdots+\epsilon_n\Pi_n} = (I_n + (e^{\epsilon_1}-1)\Pi_1)(I_n + (e^{\epsilon_2}-1)\Pi_2)\cdots(I_n + (e^{\epsilon_n}-1)\Pi_n)$$
$$= I_n + (e^{\epsilon_1}-1)\Pi_1 + (e^{\epsilon_2}-1)\Pi_2 + \cdots + (e^{\epsilon_n}-1)\Pi_n.$$
Using $\Pi_1 + \Pi_2 + \cdots + \Pi_n = I_n$ the result simplifies to
$$e^{\epsilon_1\Pi_1+\cdots+\epsilon_n\Pi_n} = \sum_{j=1}^n e^{\epsilon_j}\Pi_j.$$
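A numerical check of (ii) with complementary orthogonal projections (our addition, not part of the book):

```python
import numpy as np
from scipy.linalg import expm

P1 = np.diag([1., 0., 0.])
P2 = np.diag([0., 1., 1.])       # P1 + P2 = I_3, P1 P2 = 0
e1, e2 = 0.5, -1.7
lhs = expm(e1*P1 + e2*P2)
rhs = np.exp(e1)*P1 + np.exp(e2)*P2
assert np.allclose(lhs, rhs)
```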
Problem 36. Let A, B be n × n matrices over ℂ. Assume that A and B commute with the commutator [A, B]. Then

exp(A + B) = exp(A) exp(B) exp(−(1/2)[A, B]).

Can this formula be applied to the given matrices C and D?

Solution 36. We have [C, D] = D. Now C does not commute with the commutator [C, D] = D. Thus the formula cannot be applied.
Problem 37. Let ε ∈ ℝ. Let A, B be n × n matrices over ℂ. Expand

e^{εA} e^{εB} e^{−εA} e^{−εB}

up to second order in ε.

Solution 37. There are no first order terms. We find

e^{εA} e^{εB} e^{−εA} e^{−εB} ≈ I_n + ε²[A, B].
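The second-order expansion can be verified numerically: for small ε the group commutator deviates from I + ε²[A, B] only at order ε³. The sketch below uses illustrative matrices A, B (not from the text):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=40):
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]
comm = [[1.0, 0.0], [0.0, -1.0]]               # [A, B] = AB - BA
eps = 1e-3
E = lambda M, s: mat_exp([[s * v for v in row] for row in M])
F = mat_mul(mat_mul(E(A, eps), E(B, eps)), mat_mul(E(A, -eps), E(B, -eps)))
approx = [[float(i == j) + eps**2 * comm[i][j] for j in range(2)] for i in range(2)]
err = max(abs(F[i][j] - approx[i][j]) for i in range(2) for j in range(2))
```

The residual err is of order ε³, confirming that no first-order terms survive.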
Problem 38.
Let α ∈ ℝ. Consider the 2 × 2 matrix

A(α) = ( cos(α)  sin(α) ; sin(α)  −cos(α) ).

(i) Show that the matrix is orthogonal.
(ii) Find the determinant of A(α). Is the matrix an element of SO(2, ℝ)?
(iii) Do these matrices form a group under matrix multiplication?
(iv) Calculate

X = dA(α)/dα |_{α=0}.

Calculate exp(αX) and compare this matrix with A(α). Discuss.
(v) Let β ∈ ℝ and

B(β) = ( cos(β)  sin(β) ; sin(β)  −cos(β) ).

Is the matrix A(α) ⊗ B(β) orthogonal? Find the determinant of A(α) ⊗ B(β). Is this matrix an element of SO(4, ℝ)?
Solution 38. (i) We have A(α) = A^T(α) and thus

A(α)A^T(α) = A²(α) = ( 1 0 ; 0 1 ).

(ii) We obtain det(A(α)) = −cos²(α) − sin²(α) = −1. Thus A(α) is not an element of SO(2, ℝ).
(iii) No. Since det(A(α)) = −1 for every α, while the identity matrix has determinant +1, no neutral element (identity matrix) exists in this set.
(iv) We have X = dA(α)/dα |_{α=0} = ( 0 1 ; 1 0 ) and

exp(α ( 0 1 ; 1 0 )) = ( cosh(α)  sinh(α) ; sinh(α)  cosh(α) ).

(v) Yes. We have

(A(α) ⊗ B(β))^T = A^T(α) ⊗ B^T(β) = A(α) ⊗ B(β).

We find det(A(α) ⊗ B(β)) = (det A(α))²(det B(β))² = ((−1)·(−1))² = +1. Yes, A(α) ⊗ B(β) is an element of SO(4, ℝ).
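Parts (i), (ii) and (iv) can be checked numerically (plain Python; the angle α = 0.7 is an arbitrary test value):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=40):
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

a = 0.7
A = [[math.cos(a), math.sin(a)], [math.sin(a), -math.cos(a)]]
AAt = mat_mul(A, A)                           # A is symmetric, so A A^t = A^2
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
X = [[0.0, 1.0], [1.0, 0.0]]                  # dA/dα at α = 0
eaX = mat_exp([[a * v for v in row] for row in X])
```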
Problem 39. Let A be a normal matrix with eigenvalues λ₁, ..., λ_n and corresponding normalized pairwise orthogonal eigenvectors u₁, ..., u_n. Let w, v ∈ ℂⁿ (column vectors). Find w* exp(A) v by expanding w and v with respect to the basis u_j (j = 1, ..., n).

Solution 39. Using the expansions

w = Σ_{j=1}^n c_j u_j,    v = Σ_{k=1}^n d_k u_k

and e^A u_k = e^{λ_k} u_k we obtain

w* e^A v = Σ_{j=1}^n Σ_{k=1}^n c̄_j d_k e^{λ_k} u_j* u_k = Σ_{j=1}^n Σ_{k=1}^n c̄_j d_k e^{λ_k} δ_{jk} = Σ_{k=1}^n c̄_k d_k e^{λ_k}.
Problem 40.
Let B be an n × n matrix with B² = I_n. Show that

exp(−(iπ/2)(B − I_n)) = B.

Solution 40. Let z ∈ ℂ. Then we have

e^{zB} = cosh(z) I_n + sinh(z) B.

Setting z = −iπ/2 and utilizing that cosh(−iπ/2) = cos(π/2) = 0 and sinh(−iπ/2) = −i sin(π/2) = −i we obtain e^{−iπB/2} = −iB. Since exp((iπ/2)I_n) = iI_n we arrive at exp(−(iπ/2)(B − I_n)) = i(−iB) = B.
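A quick numerical check (plain Python with complex arithmetic; B = σ₁ is an illustrative choice of a matrix with B² = I):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=60):
    n = len(A)
    E = [[complex(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

B = [[0.0, 1.0], [1.0, 0.0]]                  # B^2 = I
c = -0.5j * math.pi
M = [[c * (B[i][j] - (1.0 if i == j else 0.0)) for j in range(2)] for i in range(2)]
result = mat_exp(M)                            # should reproduce B
```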
Problem 41. Consider the 2 × 2 nonnormal matrices

A = ( 0 1 ; 0 0 ),    B = ( 0 0 ; 1 0 ).

Are the matrices exp(A), exp(B) normal? Are the matrices sin(A), sin(B), cos(A), cos(B) normal?

Solution 41. We have

exp(A) = ( 1 1 ; 0 1 ),    exp(B) = ( 1 0 ; 1 1 ).

Both matrices are nonnormal. Since A² = B² = 0₂ we have

sin(A) = A,   sin(B) = B,   cos(A) = I₂,   cos(B) = I₂.

Thus sin(A), sin(B) are nonnormal and cos(A), cos(B) are normal.
Problem 42. Let z ∈ ℂ. Let A be an n × n matrix over ℂ. Assume that A³ = zA. Find exp(A).

Solution 42. If z = 0 we have exp(A) = I_n + A + A²/2. If z ≠ 0, then

exp(A) = I_n + (sinh(√z)/√z) A + ((cosh(√z) − 1)/z) A².
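The closed form for z ≠ 0 can be compared against the series for the exponential. The sketch below (plain Python; A = ((0,1),(z,0)) is an illustrative matrix with A² = zI, hence A³ = zA):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=60):
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

z = 4.0
A = [[0.0, 1.0], [z, 0.0]]                    # A^2 = z I, so A^3 = z A
sz = math.sqrt(z)
A2 = mat_mul(A, A)
formula = [[(1.0 if i == j else 0.0)
            + math.sinh(sz) / sz * A[i][j]
            + (math.cosh(sz) - 1.0) / z * A2[i][j] for j in range(2)]
           for i in range(2)]
direct = mat_exp(A)
```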
Problem 43.
Let v₁, v₂ be an orthonormal set in ℂ². Consider the 2 × 2 matrix

A = −i v₁v₁* + i v₂v₂*.

Thus I₂ = v₁v₁* + v₂v₂*. Find K such that exp(K) = A.

Solution 43. Since (v_j v_j*)² = v_j v_j* (j = 1, 2) we obtain

K = (−iπ/2 + 2ik₁π) v₁v₁* + (iπ/2 + 2ik₂π) v₂v₂*

where k₁, k₂ ∈ ℤ. Thus

K = π(1/2 − (k₁ − k₂)) A + iπ(k₁ + k₂) I₂.
Problem 44. Any 2 × 2 matrix A can be written as a linear combination of the Pauli spin matrices and the 2 × 2 identity matrix I₂

A = aI₂ + bσ₁ + cσ₂ + dσ₃

where a, b, c, d ∈ ℂ.
(i) Find A² and A³.
(ii) Use the result from (i) to find all matrices A such that A³ = σ₁, i.e. calculate the third root of σ₁.

Solution 44. (i) We obtain

A² = (b² + c² + d² − a²)I₂ + 2aA
A³ = a(a² + 3b² + 3c² + 3d²)I₂ + b(3a² + b² + c² + d²)σ₁ + c(3a² + b² + c² + d²)σ₂ + d(3a² + b² + c² + d²)σ₃.

(ii) The Pauli spin matrices σ₁, σ₂, σ₃ together with I₂ form a basis for the vector space of the 2 × 2 matrices. Setting A³ = σ₁ yields the four equations

a(a² + 3b² + 3c² + 3d²) = 0
b(3a² + b² + c² + d²) = 1
c(3a² + b² + c² + d²) = 0
d(3a² + b² + c² + d²) = 0.

Thus b ≠ 0 and c = d = 0. Case 1. If a = 0, then b³ = 1. Therefore A = bσ₁ where b ∈ {1, e^{2πi/3}, e^{4πi/3}}. Case 2. If a ≠ 0, then

a² + 3b² = 0,    b(3a² + b²) = 1.

It follows that b³ = −1/8, i.e. b = −1/2 and a = ±√3 i/2. Thus

A = (1/2) ( ±i√3  −1 ; −1  ±i√3 ).
Problem 45. Let A, B, H, K be n × n matrices over ℂ. Assume that A² = B² = 0_n. Let z ∈ ℂ. Calculate

e^{z(A⊗B)} (H ⊗ K) e^{−z(A⊗B)}

using the Baker-Campbell-Hausdorff formula.

Solution 45. We have

[A ⊗ B, H ⊗ K] = (AH) ⊗ (BK) − (HA) ⊗ (KB)    (1)

and using (1)

[A ⊗ B, [A ⊗ B, H ⊗ K]] = −2(AHA) ⊗ (BKB).

Since

[A ⊗ B, (AHA) ⊗ (BKB)] = (A²HA) ⊗ (B²KB) − (AHA²) ⊗ (BKB²) = 0_{n²}

we obtain

e^{z(A⊗B)} (H ⊗ K) e^{−z(A⊗B)} = H ⊗ K + z((AH) ⊗ (BK) − (HA) ⊗ (KB)) − z²(AHA) ⊗ (BKB).
Problem 46. (i) Let A, B be n × n matrices over ℂ, z ∈ ℂ and u, v ∈ ℂⁿ. Calculate

exp(z(A ⊗ B)) (u ⊗ v).

(ii) Simplify the result from (i) if A² = I_n and B² = I_n.

Solution 46. (i) Since (A ⊗ B)(u ⊗ v) = (Au) ⊗ (Bv) we have

exp(z(A ⊗ B))(u ⊗ v) = Σ_{j=0}^∞ (z^j/j!) (A^j u) ⊗ (B^j v).

(ii) If A² = I_n and B² = I_n we obtain

exp(z(A ⊗ B))(u ⊗ v) = (u ⊗ v) cosh(z) + (Au) ⊗ (Bv) sinh(z).
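Part (ii) can be verified numerically. The sketch below (plain Python; A = B = σ₁ and the vectors u, v are illustrative choices) builds the Kronecker product explicitly:

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=40):
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

def kron_mat(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def kron_vec(u, v):
    return [ui * vj for ui in u for vj in v]

def mat_vec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

z = 0.3
S = [[0.0, 1.0], [1.0, 0.0]]        # sigma_1, S^2 = I
u, v = [1.0, 2.0], [3.0, -1.0]
K = kron_mat(S, S)
lhs = mat_vec(mat_exp([[z * w for w in row] for row in K]), kron_vec(u, v))
rhs = [math.cosh(z) * a + math.sinh(z) * b
       for a, b in zip(kron_vec(u, v), kron_vec(mat_vec(S, u), mat_vec(S, v)))]
```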
Problem 47. Let A be an n × n matrix. Calculate

exp ( 0_n  A ; A  0_n )

using the identity

( 0_n  A ; A  0_n )² = ( A²  0_n ; 0_n  A² ).

Solution 47. We have

exp ( 0_n  A ; A  0_n ) = Σ_{k=0}^∞ (1/k!) ( 0_n  A ; A  0_n )^k
= I_{2n} + ( 0_n  A ; A  0_n ) + (1/2!)( A²  0_n ; 0_n  A² ) + (1/3!)( 0_n  A³ ; A³  0_n ) + ···
= ( cosh(A)  sinh(A) ; sinh(A)  cosh(A) ).
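The block structure can be checked numerically (plain Python; the 2 × 2 matrix A below is an illustrative choice):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=40):
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

A = [[0.3, 0.1], [0.0, 0.2]]
n = 2
M = [[0.0] * (2 * n) for _ in range(2 * n)]    # block matrix (0 A; A 0)
for i in range(n):
    for j in range(n):
        M[i][n + j] = A[i][j]
        M[n + i][j] = A[i][j]
eM = mat_exp(M)
eA = mat_exp(A)
emA = mat_exp([[-v for v in row] for row in A])
cosh_A = [[(eA[i][j] + emA[i][j]) / 2 for j in range(n)] for i in range(n)]
sinh_A = [[(eA[i][j] - emA[i][j]) / 2 for j in range(n)] for i in range(n)]
```

The diagonal blocks of eM reproduce cosh(A) and the off-diagonal blocks reproduce sinh(A).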
Problem 48.
(i) Let A₁, A₂, ..., A_m be n × n matrices with A_j² = I_n for j = 1, ..., m. Let z ∈ ℂ. Calculate

exp(z(A₁ ⊗ A₂ ⊗ ··· ⊗ A_m)).

(ii) Use the result from (i) to calculate exp(−iθ(σ₁ ⊗ σ₂ ⊗ σ₃)), where θ ∈ ℝ.

Solution 48. (i) Since (A₁ ⊗ A₂ ⊗ ··· ⊗ A_m)² = I_n ⊗ I_n ⊗ ··· ⊗ I_n we obtain

exp(z(A₁ ⊗ A₂ ⊗ ··· ⊗ A_m)) = (I_n ⊗ ··· ⊗ I_n) cosh(z) + (A₁ ⊗ ··· ⊗ A_m) sinh(z).

(ii) Since cosh(−iθ) = cos(θ) and sinh(−iθ) = −i sin(θ) we have

exp(−iθ(σ₁ ⊗ σ₂ ⊗ σ₃)) = cos(θ)(I₂ ⊗ I₂ ⊗ I₂) − (i sin(θ))(σ₁ ⊗ σ₂ ⊗ σ₃).
Problem 49. (i) Let A be an n × n matrix over ℂ and Π be an m × m projection matrix. Let z ∈ ℂ. Calculate

exp(z(A ⊗ Π)).

(ii) Let A₁, A₂ be n × n matrices over ℂ. Let Π₁, Π₂ be m × m projection matrices with Π₁Π₂ = 0_m. Calculate

exp(z(A₁ ⊗ Π₁ + A₂ ⊗ Π₂)).

(iii) Use the result from (ii) to find the unitary matrix

U(t) = exp(−iHt/ħ)

where H = ħω(A₁ ⊗ Π₁ + A₂ ⊗ Π₂) and we assume that A₁ and A₂ are hermitian matrices.

Solution 49. (i) We find exp(z(A ⊗ Π)) = I_n ⊗ I_m + (e^{zA} − I_n) ⊗ Π.
(ii) We obtain

exp(z(A₁ ⊗ Π₁ + A₂ ⊗ Π₂)) = I_n ⊗ I_m + (e^{zA₁} − I_n) ⊗ Π₁ + (e^{zA₂} − I_n) ⊗ Π₂.

(iii) Since

U(t) = exp(−iωt(A₁ ⊗ Π₁ + A₂ ⊗ Π₂))

with z = −iωt we obtain the unitary matrix

U(t) = I_n ⊗ I_m + (e^{−iωtA₁} − I_n) ⊗ Π₁ + (e^{−iωtA₂} − I_n) ⊗ Π₂.
Problem 50. Let A, B be n × n matrices over ℂ. Let u be an eigenvector of A and v be an eigenvector of B, respectively, i.e. Au = λu, Bv = μv.
(i) Find e^{A⊗B}(u ⊗ v).
(ii) Find e^{A⊗I_n + I_n⊗B}(u ⊗ v).
(iii) Find e^{A⊗I_n + I_n⊗B + A⊗B}(u ⊗ v).

Solution 50. (i) We have

e^{A⊗B}(u ⊗ v) = (I_n ⊗ I_n + A ⊗ B + (1/2!)A² ⊗ B² + (1/3!)A³ ⊗ B³ + ···)(u ⊗ v)
= u ⊗ v + (Au) ⊗ (Bv) + (1/2!)(A²u) ⊗ (B²v) + ···
= u ⊗ v + λμ (u ⊗ v) + (1/2!)λ²μ² (u ⊗ v) + ···
= (1 + λμ + (1/2!)λ²μ² + ···)(u ⊗ v)
= e^{λμ}(u ⊗ v).

(ii) Since e^{A⊗I_n + I_n⊗B} = e^A ⊗ e^B we obtain

e^{A⊗I_n + I_n⊗B}(u ⊗ v) = e^{λ+μ}(u ⊗ v).

(iii) Using [A ⊗ I_n + I_n ⊗ B, A ⊗ B] = 0_{n²} we find

e^{A⊗I_n + I_n⊗B + A⊗B}(u ⊗ v) = e^{A⊗I_n + I_n⊗B} e^{A⊗B}(u ⊗ v)
= (e^A ⊗ e^B)(e^{λμ} u ⊗ v)
= e^{λμ} e^λ e^μ (u ⊗ v)
= e^{λ+μ+λμ}(u ⊗ v).

Obviously this is an eigenvalue equation again.
Problem 51. For every positive definite matrix A, there is a unique positive definite matrix Q such that Q² = A. The matrix Q is called the square root of A. Can we find the square root of the matrix B?

Solution 51. First we note that B is symmetric over ℝ and therefore hermitian. The matrix is also positive definite since its eigenvalues are 1 and 4. Thus we can find an orthogonal matrix U such that U^T B U = diag(4, 1). Obviously the square root of the diagonal matrix diag(4, 1) is D = diag(2, 1). Then the square root of B is Q = U D U^T.

Problem 52. Consider the 2 × 2 matrix A. The matrix is symmetric over ℝ and the eigenvalues are positive. Thus the matrix is positive definite. Show that the given matrices X and Y are square roots of A. Discuss.

Solution 52. Direct calculation yields X² = A and Y² = A. The matrix X is positive definite again, whereas the matrix Y is not positive definite (one negative eigenvalue). Note that −X and −Y are also square roots.
Problem 53. Find the square roots of the Pauli matrix σ₁ applying the spectral theorem.

Solution 53. The eigenvalues of σ₁ are λ₁ = +1 and λ₂ = −1 with the corresponding normalized eigenvectors

v₁ = (1/√2)(1, 1)^T,    v₂ = (1/√2)(1, −1)^T.

Then the spectral representation of σ₁ is σ₁ = λ₁v₁v₁* + λ₂v₂v₂*. Now √1 = ±1 and √(−1) = ±i. Thus we find the four square roots of σ₁

R₁ = v₁v₁* + i v₂v₂*,    R₂ = v₁v₁* − i v₂v₂*,
R₃ = −v₁v₁* + i v₂v₂*,    R₄ = −v₁v₁* − i v₂v₂*.

Note that −R₁, −R₂, −R₃, −R₄ are also square roots. Extend the result to σ₁ ⊗ σ₁.
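The spectral construction can be checked directly (plain Python with complex arithmetic):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s = 1.0 / math.sqrt(2.0)
v1 = [s, s]        # eigenvector of sigma_1 for +1
v2 = [s, -s]       # eigenvector of sigma_1 for -1

def outer(u):
    return [[a * b for b in u] for a in u]     # u u^T (real entries)

P1, P2 = outer(v1), outer(v2)
R1 = [[P1[i][j] + 1j * P2[i][j] for j in range(2)] for i in range(2)]
R1sq = mat_mul(R1, R1)                          # should equal sigma_1
```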
Problem 54. Let A be an n × n matrix over ℂ. The n × n matrix B over ℂ is a square root of A iff B² = A. The number of square roots of a given matrix A may be zero, finite or infinite. Does the matrix

A = ( 0 1 ; 0 0 )

admit a square root?

Solution 54. From B² = A we find the four conditions

b₁₁² + b₁₂b₂₁ = 0,   b₁₂(b₁₁ + b₂₂) = 1,   b₂₁(b₁₁ + b₂₂) = 0,   b₁₂b₂₁ + b₂₂² = 0.

From the second equation we find that b₁₁ + b₂₂ ≠ 0. Thus from the third equation we have b₂₁ = 0. Thus we arrive at

b₁₁² = 0,   b₁₂(b₁₁ + b₂₂) = 1,   b₂₂² = 0.

Thus b₁₁ = b₂₂ = 0 and we find a contradiction since b₁₂(b₁₁ + b₂₂) = 1. Hence A does not admit a square root.

Problem 55. Find a square root of the given nonnormal 3 × 3 matrix A, i.e. find the matrices B such that B² = A.

Solution 55. Let β ≠ 0. We find a family of square roots B depending on the parameters α and β, where α is arbitrary.
Problem 56. Consider the two-dimensional rotation matrix

R(α) = ( cos(α)  −sin(α) ; sin(α)  cos(α) )

with 0 < α < π. Find a square root R^{1/2}(α) of R(α), i.e. a matrix S such that S² = R(α).

Solution 56. Obviously R^{1/2}(α) is the rotation matrix with rotation angle α/2

R^{1/2}(α) = ( cos(α/2)  −sin(α/2) ; sin(α/2)  cos(α/2) ).
Problem 57. Let V₁ be a hermitian n × n matrix. Let V₂ be a positive semidefinite n × n matrix. Let k be a positive integer. Show that

tr((V₂V₁)^k)

can be written as tr(V^k), where V := V₂^{1/2} V₁ V₂^{1/2}.

Solution 57. Since V₂ is positive semidefinite we can calculate the square root V₂^{1/2}. Thus applying cyclic invariance of the trace we have

tr((V₂V₁)^k) = tr(V₂^{1/2} V₁ V₂^{1/2} · V₂^{1/2} V₁ V₂^{1/2} ··· V₂^{1/2} V₁ V₂^{1/2}) = tr(V^k).
Problem 58. Let A, B be n × n hermitian matrices. Then (Golden-Thompson inequality)

tr(e^{A+B}) ≤ tr(e^A e^B).

Let A = σ₃ and B = σ₁. Calculate the left- and right-hand side of the inequality.

Solution 58. We have (z ∈ ℂ)

e^{zσ₁} = I₂ cosh(z) + σ₁ sinh(z),    e^{zσ₃} = I₂ cosh(z) + σ₃ sinh(z).

Thus tr(e^{zσ₃} e^{zσ₁}) = 2 cosh²(z). On the other hand, since (σ₁ + σ₃)² = 2I₂ we have

e^{z(σ₁+σ₃)} = I₂ cosh(√2 z) + (1/√2)(σ₁ + σ₃) sinh(√2 z).

Thus

tr(e^{z(σ₁+σ₃)}) = 2 cosh(√2 z).

With z = 1 we have the inequality cosh(√2) ≤ cosh²(1).
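Both sides can be evaluated numerically (plain Python):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=40):
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

s1 = [[0.0, 1.0], [1.0, 0.0]]
s3 = [[1.0, 0.0], [0.0, -1.0]]
lhs = sum(mat_exp([[s1[i][j] + s3[i][j] for j in range(2)]
                   for i in range(2)])[i][i] for i in range(2))    # tr e^{A+B}
rhs = sum(mat_mul(mat_exp(s3), mat_exp(s1))[i][i] for i in range(2))  # tr e^A e^B
```

Numerically lhs = 2 cosh(√2) ≈ 4.356 and rhs = 2 cosh²(1) ≈ 4.762, consistent with the inequality.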
Problem 59. Let A be an n × n matrix over ℝ. The (p, q) Padé approximation to exp(A) is defined by

R_pq(A) := (D_pq(A))^{-1} N_pq(A)

where

N_pq(A) = Σ_{j=0}^{p} ((p + q − j)! p!)/((p + q)! j! (p − j)!) A^j

D_pq(A) = Σ_{j=0}^{q} ((p + q − j)! q!)/((p + q)! j! (q − j)!) (−A)^j.

Nonsingularity of D_pq(A) is assured if p and q are large enough or if the eigenvalues of A are negative. Find the Padé approximation for the matrix

A = ( 0 1 ; 1 0 )

and p = q = 2. Compare with the exact solution.

Solution 59. Since A² = I₂ we find

N₂₂(A) = (13/12)I₂ + (1/2)A = ( 13/12  1/2 ; 1/2  13/12 ),    D₂₂(A) = (13/12)I₂ − (1/2)A

with the inverse

(D₂₂(A))^{-1} = (144/133) ( 13/12  1/2 ; 1/2  13/12 ).

Thus

R₂₂(A) = ( 205/133  156/133 ; 156/133  205/133 ) ≈ ( 1.5414  1.1729 ; 1.1729  1.5414 )

which compares well with the exact solution

exp(A) = ( cosh(1)  sinh(1) ; sinh(1)  cosh(1) ) ≈ ( 1.5431  1.1752 ; 1.1752  1.5431 ).
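The computation can be reproduced directly from the defining sums. The sketch below (plain Python; it assumes the matrix A = ((0,1),(1,0)) as reconstructed above) builds N₂₂, D₂₂ and R₂₂ and compares with cosh(1), sinh(1):

```python
import math
from math import factorial

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def pade_coeff(p, q, j, top):
    # coefficient of A^j in N_pq (top = p) or D_pq (top = q)
    return (factorial(p + q - j) * factorial(top)) / (
        factorial(p + q) * factorial(j) * factorial(top - j))

A = [[0.0, 1.0], [1.0, 0.0]]
I2 = [[1.0, 0.0], [0.0, 1.0]]
powers = [I2, A, mat_mul(A, A)]
N = [[sum(pade_coeff(2, 2, j, 2) * powers[j][r][c] for j in range(3))
      for c in range(2)] for r in range(2)]
D = [[sum(pade_coeff(2, 2, j, 2) * (-1) ** j * powers[j][r][c] for j in range(3))
      for c in range(2)] for r in range(2)]
det = D[0][0] * D[1][1] - D[0][1] * D[1][0]
Dinv = [[D[1][1] / det, -D[0][1] / det], [-D[1][0] / det, D[0][0] / det]]
R22 = mat_mul(Dinv, N)
```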
Problem 60.
Let A be an n × n matrix. We define the j–k approximant of exp(A) by

f_{j,k}(A) := (Σ_{i=0}^{k} (1/i!) (A/j)^i)^j.    (1)

We have the inequality

||e^A − f_{j,k}(A)|| ≤ (||A||^{k+1}/(j^k (k+1)!)) e^{||A||}    (2)

and f_{j,k}(A) converges to e^A, i.e.

lim_{j→∞} f_{j,k}(A) = lim_{k→∞} f_{j,k}(A) = e^A.

Let

A = ( 0 1 ; 0 0 ).

Find f_{2,2}(A) and e^A. Calculate the right-hand side of the inequality (2).

Solution 60. We have

f_{2,2}(A) = (I₂ + (1/2)A + (1/2!)(1/2)²A²)².

Since A² = 0₂ we obtain

f_{2,2}(A) = (I₂ + (1/2)A)² = I₂ + A = ( 1 1 ; 0 1 ).

For exp(A) we have

exp(A) = I₂ + A = ( 1 1 ; 0 1 ).

Thus the approximant gives the correct result for exp(A). With ||A|| = 1 the right-hand side of (2) is e/(2² · 3!) = e/24 ≈ 0.1133.
Problem 61. The Denman-Beavers iteration for the square root of an n × n matrix A with no eigenvalues on ℝ⁻ is

Y_{k+1} = (1/2)(Y_k + Z_k^{-1}),    Z_{k+1} = (1/2)(Z_k + Y_k^{-1})

with k = 0, 1, 2, ... and Z₀ = I_n and Y₀ = A. The iteration has the properties that

lim_{k→∞} Y_k = A^{1/2},    lim_{k→∞} Z_k = A^{-1/2}

and, for all k,

Y_k = A Z_k,    Y_k Z_k = Z_k Y_k,    Y_{k+1} = (1/2)(Y_k + A Y_k^{-1}).

(i) Show that the Denman-Beavers iteration can be applied to the matrix

A = ( 1 1 ; 1 2 ).

(ii) Find Y₁ and Z₁.

Solution 61. (i) For the eigenvalues of A we find

λ_{1,2} = (1/2)(3 ± √5).

Thus there is no eigenvalue on ℝ⁻ and the Denman-Beavers iteration can be applied.
(ii) We have det(A) = 1. For the inverse of A we find

A^{-1} = ( 2 −1 ; −1 1 )

and therefore

Y₁ = ( 1  1/2 ; 1/2  3/2 ),    Z₁ = ( 3/2  −1/2 ; −1/2  1 ).

For k → ∞, Y_k converges to

A^{1/2} ≈ ( 0.89443  0.44721 ; 0.44721  1.34164 ).
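The iteration is easy to run directly. The sketch below (plain Python; it assumes the matrix A = ((1,1),(1,2)) as reconstructed above) iterates the Denman-Beavers recursion and checks that the limit squares back to A:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[1.0, 1.0], [1.0, 2.0]]
Y = [row[:] for row in A]                   # Y_0 = A
Z = [[1.0, 0.0], [0.0, 1.0]]                # Z_0 = I
for _ in range(25):
    Zi, Yi = inv2(Z), inv2(Y)
    # simultaneous update: both new values use the old Y and Z
    Y, Z = ([[0.5 * (Y[i][j] + Zi[i][j]) for j in range(2)] for i in range(2)],
            [[0.5 * (Z[i][j] + Yi[i][j]) for j in range(2)] for i in range(2)])
Ysq = mat_mul(Y, Y)                          # should reproduce A
```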
Supplementary Problems
Problem 1. Let z₁, z₂, z₃ ∈ ℂ. Consider the given 2 × 2 matrix A(z₁, z₂, z₃). Calculate exp(A(z₁, z₂, z₃)).
Problem 2. Let a ∈ ℝ. Calculate the vector in ℝ² given by exp(aA)v, for the given 2 × 2 matrix A and vector v, in two different ways. Compare and discuss.
Problem 3. Let A, B be n × n matrices.
(i) Show that

[e^A, e^B] = e^A e^B − e^B e^A = Σ_{j,k=0}^∞ (1/(j! k!)) [A^j, B^k].

(ii) Show that

e^{A+B} − e^A = ∫₀¹ e^{(1−t)A} B e^{t(A+B)} dt.

(iii) Assume that for all eigenvalues λ of A we have ℜ(λ) < 0. Let B be an arbitrary n × n matrix over ℂ. Let

R := ∫₀^∞ e^{tA*} B e^{tA} dt.

Show that the matrix R satisfies RA + A*R = −B.
Problem 4. Let A, B be n × n matrices over ℂ. Is tr(e^A ⊗ e^B) = tr(e^{A⊗B})? Prove or disprove.
Problem 5. (i) Let A₁, A₂, ..., A_p be n × n matrices over ℂ. The generalized Trotter formula is given by

exp(Σ_{j=1}^p A_j) = lim_{n→∞} f_n({A_j})    (1)

where the n-th approximant f_n({A_j}) is defined by

f_n({A_j}) := (exp(A₁/n) exp(A₂/n) ··· exp(A_p/n))^n.

Let p = 2 and let A₁, A₂ be the given 2 × 2 matrices. Calculate the left- and right-hand side of (1).
(ii) Let A, B be n × n matrices over ℂ with A² = B² = I_n and [A, B]_+ = 0_n. Let z ∈ ℂ. The Lie-Trotter formula is given by

exp(z(A + B)) = lim_{p→∞} (e^{zA/p} e^{zB/p})^p.

Calculate e^{z(A+B)} using the right-hand side. Note that

e^{zA/p} = I_n cosh(z/p) + A sinh(z/p),    e^{zB/p} = I_n cosh(z/p) + B sinh(z/p).

Problem 6.
(i) Let A, B be n x n matrices with AB = BA. Show that
cos(A + B) = cos(A) cos(B) − sin(A) sin(B).

(ii) Let x, y ∈ ℝ. We know that sin(x + y) = sin(x) cos(y) + cos(x) sin(y). Let C and D be the given 2 × 2 matrices. Is sin(C + D) = sin(C) cos(D) + cos(C) sin(D)? Prove or disprove.
Problem 7. Let A, B be n × n matrices over ℂ. The Laplace transform ℒ of the matrix exponential is the resolvent

ℒ(e^{tA}) = (λI_n − A)^{-1}.

(i) Calculate ℒ(e^{tA} ⊗ e^{tB}).
(ii) Let ℒ^{-1} be the inverse Laplace transform. Find

ℒ^{-1}((λI_n − A)^{-1} ⊗ (λI_n − B)^{-1}).
Problem 8. Let A be an n × n matrix. The characteristic polynomial

det(λI_n − A) = λⁿ + a₁λ^{n−1} + ··· + a_n = p(λ)

is closely related to the resolvent (λI_n − A)^{-1} through the formula

(λI_n − A)^{-1} = (N₁λ^{n−1} + N₂λ^{n−2} + ··· + N_n)/(λⁿ + a₁λ^{n−1} + ··· + a_n) = N(λ)/p(λ)

where the adjugate matrix N(λ) is a polynomial in λ of degree n − 1 with constant n × n coefficient matrices N₁, ..., N_n. The Laplace transform of the matrix exponential is the resolvent

ℒ(e^{tA}) = (λI_n − A)^{-1}.

The N_k matrices and a_k coefficients may be computed recursively as follows

N₁ = I_n,    a₁ = −(1/1) tr(AN₁)
N₂ = AN₁ + a₁I_n,    a₂ = −(1/2) tr(AN₂)
⋮
N_n = AN_{n−1} + a_{n−1}I_n,    a_n = −(1/n) tr(AN_n)
0_n = AN_n + a_nI_n.

Show that

tr(ℒ(e^{tA})) = (1/p(λ)) dp(λ)/dλ.
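The recursion for the N_k and a_k is straightforward to implement. The sketch below (plain Python; the 2 × 2 test matrix is an illustrative choice) computes the characteristic coefficients and checks the terminating identity AN_n + a_nI_n = 0_n:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def resolvent_recursion(A):
    """Return the coefficients a_1..a_n and the residual A N_n + a_n I."""
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    N = [row[:] for row in I]              # N_1 = I
    coeffs = []
    for k in range(1, n + 1):
        AN = mat_mul(A, N)
        a = -sum(AN[i][i] for i in range(n)) / k
        coeffs.append(a)
        # next N (for k = n this is the residual A N_n + a_n I, which must vanish)
        N = [[AN[i][j] + a * I[i][j] for j in range(n)] for i in range(n)]
    return coeffs, N

A = [[1.0, 2.0], [3.0, 4.0]]               # char. polynomial: x^2 - 5x - 2
coeffs, residual = resolvent_recursion(A)
```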
Problem 9. Let A be an arbitrary n × n matrix. Can we conclude that exp(A*) = (exp(A))*?
Problem 10. Let A be an n x n matrix over C. Assume that A is hermitian,
i.e. A* = A. Thus A has only real eigenvalues. Assume that A⁵ + A³ + A = 3I_n.
Show that A = In.
Problem 11. Let A, B be n × n hermitian matrices. There exist n × n unitary matrices U and V (depending on A and B) such that

exp(iA) exp(iB) = exp(iUAU^{-1} + iVBV^{-1}).

Consider n = 2 and

A = ( 1 0 ; 0 −1 ),    B = ( 0 1 ; 1 0 ).

Find U and V. Note that A and B are also unitary.
Problem 12. Let A, B be n × n matrices over ℂ such that [A, B] = A. What can be said about the commutator [e^A, e^B]?
Problem 13. (i) Let A be an n × n nilpotent matrix. Is the matrix cos(A) invertible?
(ii) Let N be a nilpotent n × n matrix with N^j = 0_n and j > 1. Is

ln(I_n − N) = −(N + (1/2)N² + ··· + (1/(j−1))N^{j−1})

and exp(ln(I_n − N)) = I_n − N?
Problem 14. Let H = H₁ + H₂ + H₃ where H₁, H₂, H₃ are n × n hermitian matrices. Show that

e^{−βH} = e^{−βH₁/2} e^{−βH₂/2} e^{−βH₃} e^{−βH₂/2} e^{−βH₁/2} + E

where

E = (β³/24)([[H₂ + H₃, H₁], H₁ + 2H₂ + 2H₃] + [[H₃, H₂], H₂ + 2H₃]).
Problem 15. (i) Let r, s be integers with r ≥ 2 and s ≥ 2. Let A, B be n × n matrices with A^r = 0_n and B^s = 0_n. Calculate exp(A ⊗ B).
(ii) Let A, B be the basis (given by n × n matrices) of the non-commutative two-dimensional Lie algebra with [A, B] = A. Calculate exp(A ⊗ I_n + I_n ⊗ A + B ⊗ B).
(iii) Let X₁, X₂, X₃ be the basis (given by n × n matrices) of a Lie algebra with the commutation relations

[X₁, X₂] = 0_n,    [X₁, X₃] = X₁,    [X₂, X₃] = 0_n.

Calculate exp(X₁ + X₂ + X₃) and

exp(X₁ ⊗ I_n ⊗ I_n + I_n ⊗ X₂ ⊗ I_n + I_n ⊗ I_n ⊗ X₃ + X₁ ⊗ X₂ ⊗ X₃).
Problem 16. Consider the 3 × 3 matrix

A = ( 1 0 0 ; 0 1 0 ; 1 0 1 ) = I₃ + B,    B = ( 0 0 0 ; 0 0 0 ; 1 0 0 ).

Calculate e^A using e^A = e^{I₃} e^B.
Problem 17. Let z ∈ ℂ. Construct all 2 × 2 matrices A and B over ℂ such that

exp(zA) B exp(−zA) = e^{−z} B.
Problem 18. Let A be an n × n matrix over ℂ. Consider the Taylor series

(I_n + A)^{1/2} = I_n + (1/2)A − (1/8)A² + (1/16)A³ − ···

and

(I_n + A)^{−1/2} = I_n − (1/2)A + ((1·3)/(2·4))A² − ((1·3·5)/(2·4·6))A³ + ··· .

What is the condition (on the norm) of A such that the Taylor series exist? Can they be applied to the matrix

A = ( 1 1 ; 1 1 )?

Note that for n = 1 we have the condition −1 < A < +1.
Problem 19. Let U be an n × n unitary matrix. Let H = U + U*. Calculate exp(zH).
Problem 20. Let A, B be n × n matrices over ℂ and exp(A) exp(B) = exp(C). Then the matrix C can be given as an infinite series of commutators of A and B. Let z ∈ ℂ. We write

exp(zA) exp(zB) = exp(C(zA, zB))

where

C(zA, zB) = Σ_{j=1}^∞ c_j(A, B) z^j.

Show that the expansion up to fourth order is given by

c₁(A, B) = A + B,    c₂(A, B) = (1/2)[A, B],
c₃(A, B) = (1/12)[A, [A, B]] − (1/12)[B, [A, B]],
c₄(A, B) = −(1/24)[A, [B, [A, B]]].

Problem 21. Let A₁, ..., A_m be n × n matrices. Show that

e^{A₁} ··· e^{A_m} = exp(−(1/2) Σ_{j<k} [A_j, A_k]) exp(A₁ + ··· + A_m)

if the matrices A_j (j = 1, ..., m) satisfy

[[A_j, A_k], A_ℓ] = 0_n    for all j, k, ℓ ∈ {1, ..., m}.
Problem 22. Let A, B, C be n × n positive semidefinite matrices. We define

A ∘ B := A^{1/2} B A^{1/2}

where A^{1/2} denotes the unique positive semidefinite square root of A. Is A ∘ (B ∘ C) = (A ∘ B) ∘ C? Prove or disprove.

Problem 23. (i) Let σ₁ be the Pauli spin matrix. Show that

exp((iπ/2)(σ₁ − I₂)) = σ₁.

(ii) Find all 2 × 2 matrices A and c ∈ ℂ such that exp(c(A − I₂)) = A. Note that

exp(c(A − I₂)) = exp(cA) exp(−cI₂) = e^{−c} exp(cA).
Problem 24. Let A, B be real symmetric positive definite n × n matrices. Show that

det(A + B) ≤ det(A) exp(tr(A^{-1}B)).

Problem 25.
Let ε ∈ ℝ. Let I_n − εA be a positive definite matrix. Show that

exp(tr(ln(I_n − εA))) = det(I_n − εA)

using the identity det(e^M) = exp(tr(M)).
Problem 26. Let v be a normalized (column) vector in ℂⁿ and let A be an n × n hermitian matrix. Is v* exp(A) v ≥ exp(v* A v) for all normalized v? Prove or disprove.
Problem 27. (i) Let ε ∈ ℝ. Let A be an n × n matrix over ℂ. Find

lim_{ε→0} sinh(2εA)/sinh(ε).

(ii) Assume that A² = I_n. Calculate

sinh(2εA)/sinh(ε).

(iii) Assume that A² = 0_n. Calculate

sinh(2εA)/sinh(ε).

Utilize the expansion

sinh(2εA) = 2εA + ((2ε)³/3!)A³ + ··· .
Problem 28. Let α > 0. Consider the 4 × 4 matrix

A(α) = ( 0 0 0 i/α ; 0 0 i 0 ; 0 i 0 0 ; αi 0 0 0 ).

Show that exp(πA(α)) = −I₄.
Problem 29. Let V be the 2 × 2 matrix V = v₀I₂ + v₁σ₁ + v₂σ₂ + v₃σ₃, where v₀, v₁, v₂, v₃ ∈ ℝ. Consider the equation

exp(iεV) = (I₂ − iW)(I₂ + iW)^{-1}

where ε is real. Show that W as a function of V is given by

W = −tan((ε/2)(v₀I₂ + v₁σ₁ + v₂σ₂ + v₃σ₃)).

Problem 30.
Let A, B be n × n matrices. Show that

e^{A⊗I_n + I_n⊗A} e^{B⊗I_n + I_n⊗B} = (e^A ⊗ e^A)(e^B ⊗ e^B) = (e^A e^B) ⊗ (e^A e^B).
Problem 31. Let α ∈ ℝ and Π be an n × n projection matrix. Let Q = Π^{1/2} be a square root of Π, i.e. Q² = Π. Show that

U(α) = exp(iαΠ^{1/2}) = I_n + iαΠ^{1/2} + (cos(α) − 1)Π + i(sin(α) − α)ΠΠ^{1/2}.
Problem 32. Let A be an n × n matrix over ℝ. Then we have the Taylor expansions

sin(A) = A − (1/3!)A³ + (1/5!)A⁵ − ···,    cos(A) = I_n − (1/2!)A² + (1/4!)A⁴ − ··· .

To calculate sin(A) and cos(A) from a truncated Taylor series approximation is only worthwhile near the origin. We can use the repeated application of the double angle formulas

cos(2A) = 2cos²(A) − I_n,    sin(2A) = 2 sin(A) cos(A).

We can find sin(A) and cos(A) of a matrix A from suitably truncated Taylor series approximations as follows:

S₀ = Taylor approximation to sin(A/2^k),    C₀ = Taylor approximation to cos(A/2^k)

and the recursion

S_j = 2S_{j−1}C_{j−1},    C_j = 2C²_{j−1} − I_n

where j = 1, 2, ..., k. Here k is a positive integer chosen so that, say, ||A||_∞/2^k ≤ 1. Apply this recursion to calculate the sine and cosine of the given 2 × 2 matrix. Use k = 2.
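The scaling and double-angle recursion can be sketched as follows (plain Python; the 2 × 2 matrix A below is an illustrative choice, since the matrix of the original problem did not survive reproduction). The result is compared against a long Taylor series for sin(A) and cos(A):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def taylor_sin(A, terms=6):
    n = len(A)
    S = [[0.0] * n for _ in range(n)]
    T = [row[:] for row in A]                 # current term A^(2m+1)/(2m+1)!
    A2 = mat_mul(A, A)
    for m in range(terms):
        S = [[S[i][j] + T[i][j] for j in range(n)] for i in range(n)]
        T = [[-v / ((2 * m + 2) * (2 * m + 3)) for v in row] for row in mat_mul(T, A2)]
    return S

def taylor_cos(A, terms=6):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    T = [[float(i == j) for j in range(n)] for i in range(n)]
    A2 = mat_mul(A, A)
    for m in range(terms):
        C = [[C[i][j] + T[i][j] for j in range(n)] for i in range(n)]
        T = [[-v / ((2 * m + 1) * (2 * m + 2)) for v in row] for row in mat_mul(T, A2)]
    return C

A = [[1.0, 0.5], [0.2, 1.0]]
k = 2
As = [[v / 2 ** k for v in row] for row in A]
S, C = taylor_sin(As), taylor_cos(As)
I2 = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(k):                             # double-angle recursion
    SC, CC = mat_mul(S, C), mat_mul(C, C)
    S = [[2 * SC[i][j] for j in range(2)] for i in range(2)]
    C = [[2 * CC[i][j] - I2[i][j] for j in range(2)] for i in range(2)]
```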
Problem 33. Let T be the given 2 × 2 matrix. Calculate ln(det(I₂ − T)) using the right-hand side of the identity

ln(det(I₂ − T)) = −Σ_{k=1}^∞ (1/k) tr(T^k).
Chapter 10

Cayley-Hamilton Theorem

Consider an n × n matrix A over ℂ and the polynomial

p(λ) = det(A − λI_n)

with the characteristic equation p(λ) = 0. The Cayley-Hamilton theorem states that substituting the matrix A into the characteristic polynomial results in the n × n zero matrix, i.e.

p(A) = 0_n.

For example, let

A = ( 0 1 ; 1 0 ).

Then det(A − λI₂) = λ² − 1 = 0. Consequently

A² − I₂ = 0₂  ⇒  A² = I₂.

The Cayley-Hamilton theorem can be utilized to calculate A² from A and A^{-1} (if it exists) from A. The Cayley-Hamilton theorem can also be used to calculate exp(A) and other entire functions for an n × n matrix. Let A be an n × n matrix over ℂ. Let f be an entire function, i.e. an analytic function on the whole complex plane, for example exp(z), sin(z), cos(z). An infinite series expansion for f(A) is an alternative for computing f(A).
Problem 1. Let A be an n × n matrix over ℂ and

p(λ) = det(A − λI_n).

Show that p(A) = 0_n, i.e. the matrix A satisfies its characteristic equation (Cayley-Hamilton theorem).
Solution 1. There exists an invertible n × n matrix X over ℂ such that

X^{-1}AX = J = diag(J₁, J₂, ..., J_t)

where

J_i = ( λ_i 1 0 ··· 0 ; 0 λ_i 1 ··· 0 ; ⋮ ; 0 ··· 0 λ_i )

is an upper bidiagonal m_i × m_i matrix (Jordan block) that has on its main diagonal the eigenvalue λ_i of the matrix A (at least an m_i-fold eigenvalue of the matrix A, since to this eigenvalue there may correspond further Jordan blocks) and m₁ + ··· + m_t = n. The matrix J_i − λ_iI_{m_i} is nilpotent (ones on the superdiagonal, zeros elsewhere), so (J_i − λ_iI_{m_i})^{m_i} = 0_{m_i}. If p(λ) is the characteristic polynomial of the matrix A and the zeros of this polynomial are λ₁, ..., λ_t, then

p(λ) = (−1)ⁿ(λ − λ₁)^{m₁}(λ − λ₂)^{m₂} ··· (λ − λ_t)^{m_t}

and

p(J) = (−1)ⁿ(J − λ₁I_n)^{m₁}(J − λ₂I_n)^{m₂} ··· (J − λ_tI_n)^{m_t}.

We show that p(J) = 0_n. Since J has the block diagonal form diag(J₁, J₂, ..., J_t) we obtain

p(J) = (−1)ⁿ ∏_{k=1}^t (J − λ_kI_n)^{m_k} = (−1)ⁿ ∏_{k=1}^t diag((J₁ − λ_kI_{m₁})^{m_k}, (J₂ − λ_kI_{m₂})^{m_k}, ..., (J_t − λ_kI_{m_t})^{m_k}).

For each k the k-th diagonal block of the k-th factor is (J_k − λ_kI_{m_k})^{m_k} = 0_{m_k}. Since the product of block diagonal matrices is taken block by block, every diagonal block of p(J) contains the factor 0_{m_k}, and therefore

p(J) = 0_n.

From X^{-1}AX = J it follows that A = XJX^{-1}. Thus

p(A) = p(XJX^{-1})
= (−1)ⁿ(XJX^{-1} − λ₁I_n)^{m₁} ··· (XJX^{-1} − λ_tI_n)^{m_t}
= (−1)ⁿ X(J − λ₁I_n)^{m₁} X^{-1} ··· X(J − λ_tI_n)^{m_t} X^{-1}
= X p(J) X^{-1}
= 0_n

since p(J) = 0_n.
Problem 2. Let A be an arbitrary 2 × 2 matrix. Show that

A² − tr(A)A + det(A)I₂ = 0₂

and therefore (tr(A))² = tr(A²) + 2 det(A).

Solution 2. The characteristic polynomial of A is

λ² − (a₁₁ + a₂₂)λ + a₁₁a₂₂ − a₁₂a₂₁ = 0.

Applying the Cayley-Hamilton theorem we have

A² − (a₁₁ + a₂₂)A + (a₁₁a₂₂ − a₁₂a₂₁)I₂ = 0₂.

Since tr(A) = a₁₁ + a₂₂ and det(A) = a₁₁a₂₂ − a₁₂a₂₁ we obtain the first result. Taking the trace of A² = tr(A)A − det(A)I₂ yields tr(A²) = (tr(A))² − 2 det(A).
Problem 3. Consider the given 2 × 2 nonnormal matrix A. Calculate f(A) = A³ using the Cayley-Hamilton theorem.

Solution 3. The characteristic equation det(A − λI₂) = 0 of A is given by λ² − 2λ + 1 = 0. Hence A² = 2A − I₂. Using this expression it follows that

A³ = AA² = A(2A − I₂) = 2A² − A = 2(2A − I₂) − A = 3A − 2I₂.
Problem 4. Apply the Cayley-Hamilton theorem to the 3 × 3 matrix A = (a_jk) and express the result using the trace and determinant of A.

Solution 4. Let

S := a₁₁a₂₂ + a₁₁a₃₃ + a₂₂a₃₃ − a₁₃a₃₁ − a₂₃a₃₂ − a₁₂a₂₁.

From det(A − λI₃) = 0 we obtain

λ³ − tr(A)λ² + Sλ − det(A) = 0.

Thus we have

A³ − tr(A)A² + SA − det(A)I₃ = 0₃.
Problem 5. Let A be an n × n matrix. Assume that the inverse matrix of A exists. The inverse matrix can be calculated as follows (Csanky's algorithm). Let

p(x) := det(xI_n − A).    (1)

The roots are, by definition, the eigenvalues λ₁, λ₂, ..., λ_n of A. We write

p(x) = xⁿ + c₁x^{n−1} + ··· + c_{n−1}x + c_n    (2)

where c_n = (−1)ⁿ det(A). Since A is nonsingular we have c_n ≠ 0 and vice versa. The Cayley-Hamilton theorem states that

p(A) = Aⁿ + c₁A^{n−1} + ··· + c_{n−1}A + c_nI_n = 0_n.    (3)

Multiplying this equation with A^{-1} we obtain

A^{-1} = −(1/c_n)(A^{n−1} + c₁A^{n−2} + ··· + c_{n−1}I_n).    (4)

If we have the coefficients c_j we can calculate the inverse matrix of A. Let

s_k := Σ_{j=1}^n λ_j^k.

Then the s_j and c_j satisfy the following n × n lower triangular system of linear equations

( 1 0 0 ··· 0 ; s₁ 2 0 ··· 0 ; s₂ s₁ 3 ··· 0 ; ⋮ ; s_{n−1} s_{n−2} ··· s₁ n ) ( c₁ ; c₂ ; c₃ ; ⋮ ; c_n ) = ( −s₁ ; −s₂ ; −s₃ ; ⋮ ; −s_n ).

Since

tr(A^k) = λ₁^k + λ₂^k + ··· + λ_n^k = s_k

we find s_k for k = 1, ..., n. Thus we can solve the linear equation for the c_j. Finally, using (4) we obtain the inverse matrix of A. Apply Csanky's algorithm to the 3 × 3 matrix

A = ( 1 0 1 ; 1 0 0 ; 0 1 1 ).

Solution 5. Since

A² = ( 1 1 2 ; 1 0 1 ; 1 1 1 ),    A³ = ( 2 2 3 ; 1 1 2 ; 2 1 2 )

we find tr(A) = 2 = s₁, tr(A²) = 2 = s₂, tr(A³) = 5 = s₃. We obtain the system of linear equations

( 1 0 0 ; 2 2 0 ; 2 2 3 ) ( c₁ ; c₂ ; c₃ ) = ( −2 ; −2 ; −5 )

with the solution c₁ = −2, c₂ = 1, c₃ = −1. Since c₃ ≠ 0 the inverse exists and det(A) = (−1)³c₃ = 1. Hence

A^{-1} = −(1/c₃)(A² + c₁A + c₂I₃) = A² − 2A + I₃ = ( 0 1 0 ; −1 1 1 ; 1 −1 0 ).
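Csanky's algorithm is easy to implement for small n. The sketch below (plain Python; it assumes the 3 × 3 matrix reconstructed above) computes the power sums s_k, solves the triangular system for the c_j by forward substitution, forms the inverse via (4), and checks A A^{-1} = I:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def csanky_inverse(A):
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    # power sums s_k = tr(A^k), k = 1..n
    P, s = [row[:] for row in A], []
    for _ in range(n):
        s.append(sum(P[i][i] for i in range(n)))
        P = mat_mul(P, A)
    # forward substitution in the lower triangular system:
    # k c_k = -(s_k + c_1 s_{k-1} + ... + c_{k-1} s_1)
    c = []
    for k in range(1, n + 1):
        c.append(-(s[k - 1] + sum(c[j] * s[k - 2 - j] for j in range(k - 1))) / k)
    # A^{-1} = -(A^{n-1} + c_1 A^{n-2} + ... + c_{n-1} I)/c_n
    powers = [I]
    for _ in range(n - 1):
        powers.append(mat_mul(powers[-1], A))
    return [[-(powers[n - 1][i][j]
               + sum(c[m] * powers[n - 2 - m][i][j] for m in range(n - 1))) / c[n - 1]
             for j in range(n)] for i in range(n)]

A = [[1.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
Ainv = csanky_inverse(A)
check = mat_mul(A, Ainv)        # should be the identity matrix
```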
Problem 6.
Let A be an n x n matrix. Let
c(z) := det(zI_n − A) = zⁿ − Σ_{k=0}^{n−1} c_k z^k

be the characteristic polynomial of A. Apply the Cayley-Hamilton theorem c(A) = 0_n to calculate exp(A).
Solution 6. From the Cayley-Hamilton theorem we have

Aⁿ = c₀I_n + c₁A + ··· + c_{n−1}A^{n−1}.

It follows that any power A^k of A can be expressed in terms of I_n, A, ..., A^{n−1}

A^k = Σ_{j=0}^{n−1} β_{jk} A^j.

This implies that e^{tA} is a polynomial in A with analytic coefficients in t

e^{tA} = Σ_{k=0}^∞ (t^k/k!) A^k = Σ_{k=0}^∞ (t^k/k!) Σ_{j=0}^{n−1} β_{jk} A^j = Σ_{j=0}^{n−1} (Σ_{k=0}^∞ β_{jk} t^k/k!) A^j.

The coefficients β_{jk} can be found as follows

β_{jk} = δ_{jk}    for k < n
β_{jk} = c_j    for k = n
β_{jk} = c_j β_{k−1,n−1}    for k > n, j = 0
β_{jk} = c_j β_{k−1,n−1} + β_{k−1,j−1}    for k > n, j > 0.
Problem 7. (i) Let A be an n × n matrix with A³ = I_n. Calculate exp(A) using

exp(A) = Σ_{j=0}^∞ A^j/j!.

(ii) Let

B = ( 0 1 0 ; 0 0 1 ; 1 0 0 )

with B³ = I₃. Calculate exp(B) applying the Cayley-Hamilton theorem.
Solution 7. (i) We have for e^A, utilizing A³ = I_n,

e^A = I_n (1 + 1/3! + 1/6! + ···) + A (1 + 1/4! + 1/7! + ···) + A² (1/2! + 1/5! + 1/8! + ···).

(ii) Applying the Cayley-Hamilton theorem we start from

e^B = b₂B² + b₁B + b₀I₃ = ( b₀ b₁ b₂ ; b₂ b₀ b₁ ; b₁ b₂ b₀ ).

The eigenvalues of B are

λ₁ = 1,    λ₂ = −1/2 + i√3/2,    λ₃ = −1/2 − i√3/2.

Thus we obtain the three equations for the three eigenvalues

e^{λ₁} = e = b₂ + b₁ + b₀
e^{λ₂} = e^{−1/2} e^{i√3/2} = b₂λ₂² + b₁λ₂ + b₀
e^{λ₃} = e^{−1/2} e^{−i√3/2} = b₂λ₃² + b₁λ₃ + b₀.

Adding and subtracting the second and third equations we arrive at

e = b₂ + b₁ + b₀
2e^{−1/2} cos(√3/2) = −b₂ − b₁ + 2b₀
(2/√3) e^{−1/2} sin(√3/2) = −b₂ + b₁.

The solution of these linear equations is

( b₀ ; b₁ ; b₂ ) = ( 1/3 1/3 0 ; 1/3 −1/6 1/2 ; 1/3 −1/6 −1/2 ) ( e ; 2e^{−1/2} cos(√3/2) ; (2/√3) e^{−1/2} sin(√3/2) ).

Thus

b₀ = (1/3)e + (2/3) e^{−1/2} cos(√3/2)
b₁ = (1/3)e − (1/3) e^{−1/2} cos(√3/2) + (1/√3) e^{−1/2} sin(√3/2)
b₂ = (1/3)e − (1/3) e^{−1/2} cos(√3/2) − (1/√3) e^{−1/2} sin(√3/2).
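The coefficients b₀, b₁, b₂ can be checked against the series for e^B (plain Python):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=40):
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    T = [row[:] for row in E]
    for k in range(1, terms):
        T = [[v / k for v in row] for row in mat_mul(T, A)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return E

B = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]   # B^3 = I
r = math.exp(-0.5)
c, s = math.cos(math.sqrt(3) / 2), math.sin(math.sqrt(3) / 2)
b0 = math.e / 3 + (2.0 / 3.0) * r * c
b1 = math.e / 3 - r * c / 3 + r * s / math.sqrt(3)
b2 = math.e / 3 - r * c / 3 - r * s / math.sqrt(3)
B2 = mat_mul(B, B)
ch = [[b0 * (1.0 if i == j else 0.0) + b1 * B[i][j] + b2 * B2[i][j]
       for j in range(3)] for i in range(3)]
direct = mat_exp(B)
```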
Problem 8. Let A be an n × n matrix over ℂ. Let f be an entire function, i.e. an analytic function on the whole complex plane, for example exp(z), sin(z), cos(z). Using the Cayley-Hamilton theorem we can write

f(A) = a_{n−1}A^{n−1} + a_{n−2}A^{n−2} + ··· + a₂A² + a₁A + a₀I_n    (1)

where the complex numbers a₀, a₁, ..., a_{n−1} are determined as follows: Let

r(λ) := a_{n−1}λ^{n−1} + a_{n−2}λ^{n−2} + ··· + a₂λ² + a₁λ + a₀

which is the right-hand side of (1) with A^j replaced by λ^j, where j = 0, 1, ..., n−1. For each distinct eigenvalue λ_j of the matrix A we consider the equation

f(λ_j) = r(λ_j).    (2)

If λ_j is an eigenvalue of multiplicity k, for k > 1, then we consider also the following equations

f'(λ)|_{λ=λ_j} = r'(λ)|_{λ=λ_j},    f''(λ)|_{λ=λ_j} = r''(λ)|_{λ=λ_j},    ... .

Apply this technique to find exp(A) with

A = ( c c ; c c ),    c ∈ ℝ, c ≠ 0.

Solution 8. We have

e^A = a₁A + a₀I₂ = ( a₀ + ca₁  ca₁ ; ca₁  a₀ + ca₁ ).

The eigenvalues of A are 0 and 2c. Thus we obtain the two linear equations

e⁰ = 0·a₁ + a₀ = a₀,    e^{2c} = 2ca₁ + a₀.

Solving these two linear equations yields a₀ = 1, a₁ = (e^{2c} − 1)/(2c). It follows that

e^A = ( a₀ + ca₁  ca₁ ; ca₁  a₀ + ca₁ ) = ( (e^{2c}+1)/2  (e^{2c}−1)/2 ; (e^{2c}−1)/2  (e^{2c}+1)/2 ).
Problem 9. Let z₁, z₂, z₃ ∈ ℂ. Assume that at least two of the three complex numbers are nonzero. Consider the 2 × 2 matrix

A = ( z₁ z₂ ; z₃ 0 ).

Calculate exp(A) applying the Cayley-Hamilton theorem. The characteristic equation for A is given by λ² − z₁λ − z₂z₃ = 0 with the eigenvalues

λ₊ = (1/2)z₁ + (1/2)√(z₁² + 4z₂z₃),    λ₋ = (1/2)z₁ − (1/2)√(z₁² + 4z₂z₃).

The Cayley-Hamilton theorem then states that A² − z₁A − z₂z₃I₂ = 0₂.

Solution 9. We can write exp(A) = a₁A + a₀I₂ where a₀, a₁ are given by the system of linear equations

e^{λ₊} = a₁λ₊ + a₀,    e^{λ₋} = a₁λ₋ + a₀.

The solution is

a₀ = (e^{λ₋}λ₊ − e^{λ₊}λ₋)/(λ₊ − λ₋),    a₁ = (e^{λ₊} − e^{λ₋})/(λ₊ − λ₋)

with exp(A) = a₁A + a₀I₂ and λ₊ − λ₋ = √(z₁² + 4z₂z₃).
Let a € M and a ^ 0. Consider the 2 x 2 matrix
= (1
o)-
Calculate sin(A (a)) applying the Cayley-Hamilton theorem.
S o lu tio n 10.
The eigenvalues of A(a) are A+(a) = a and A_(a) = —a. Now
and
sin(A_|_) = sin(a) = b\A+ + bo,
sin(A_) = sin(—a) = b\X- + bo.
Note that sin(—a) = —sin(a). The solution of the two equations
sin(a) = b\a + bo,
—sin(a) = —b\a + bo
provides bo = 0 and b\ = sin (a )/a . Thus
^ < »»= (4 »
a
T ) -
307
Cayley-Hamilton Theorem
P r o b le m 11.
Calculate
/
sec
75
0
0
JL-
L
'7 5
S olu tion 11.
0
0
7r
7T
%
/2
7r
v/2
0
V27T
V2
7T
\
75
\
0
0
0
7 5 7
Consider the 4 x 4 matrix
/I
0
0
Vi
0
I
I
0
0
I
-I
0
1 \
0
0
-I /
The eigenvalues \/2, —v^2, y/2, —y/2. Thus the eigenvalues of
A are 7r, 7r,
—7r, —7r. We apply the Cayley-Hamilton theorem
sec(7r) = —I = Co + Ci7T + C27T2 + CsTT3
sec(7r) tan(7r) = 0 = c\ + 2c27r + 3c37r2
sec( —7r) = —1 = Co —Ci7T + C27T2 —CsTT3
sec(—7r) tan(—7r) = 0 = Cl —2c27r + 3cstt2.
Subtracting the fourth equation from the second yields 0 = 4 c 27r s o that C2 = 0.
Adding the first and third equations yields —I = Co + C^tt2 = Co- Adding the
second and fourth equations yields 0 = 2ci + 6c37r2 and subtracting the third
equation from the first yields 0 = 2ci7r + ccIstt3 s o that 2ci = —ccIstt1 , Cs = 0
and Ci = 0. Consequently
/ —
/ V2
0
0
0
0
7T
7T
V2
7T
72
7T
72
0
72
0
- 275
\
\
/I
0
0
VO
0
0
___ ZE- j
7 5 7
0 0 0\
1 0
0
0 I 0
0 0 I/
*
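The calculation above can be checked numerically. A minimal NumPy sketch (our own illustration, not part of the book): diagonalize the symmetric matrix and apply sec to the eigenvalues.

```python
# Numerical check: sec of the matrix in Problem 11 equals -I_4.
import numpy as np

M = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, -1.0, 0.0],
              [1.0, 0.0, 0.0, -1.0]])
A = (np.pi / np.sqrt(2.0)) * M      # eigenvalues are pi, pi, -pi, -pi

w, V = np.linalg.eigh(A)            # A is real symmetric
secA = V @ np.diag(1.0 / np.cos(w)) @ V.T
print(np.round(secA, 10))           # approximately -I_4
```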
Problem 12. Let A, B be n×n matrices such that ABAB = 0_n. Can we conclude that BABA = 0_n?

Solution 12. There are no 1×1 or 2×2 counterexamples. For 2×2 matrices ABAB = 0_n implies B(ABAB)A = 0_n. Therefore the matrix BA is nilpotent, i.e. (BA)^3 = 0_n. If a 2×2 matrix C is nilpotent, its characteristic polynomial is λ^2 and therefore C^2 = 0_n by the Cayley-Hamilton theorem. Thus BABA = 0_n. For n = 3 we find the counterexample

A = ( 0 0 1 )      B = ( 0 0 1 )
    ( 0 0 0 ),         ( 1 0 0 )
    ( 0 1 0 )          ( 0 0 0 )

⇒  BABA = ( 0 0 1 )
          ( 0 0 0 )
          ( 0 0 0 ) ≠ 0_3.
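The counterexample can be verified numerically; a short NumPy sketch (our own illustration):

```python
# Check the n = 3 counterexample: ABAB = 0_3 while BABA != 0_3.
import numpy as np

A = np.array([[0, 0, 1],
              [0, 0, 0],
              [0, 1, 0]])
B = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 0, 0]])

ABAB = A @ B @ A @ B
BABA = B @ A @ B @ A

print(np.count_nonzero(ABAB))   # 0: ABAB is the zero matrix
print(np.count_nonzero(BABA))   # 1: BABA is nonzero
```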
Supplementary Problems
Problem 1. Consider the nonnormal invertible 2×2 matrix
A=(o
-1O-
Find A^2, A^3 and A^{-1} utilizing the Cayley-Hamilton theorem.
Problem 2. Let ε ∈ R and ε ≠ 0. Let
* ‘ >= 0 o ) ■
(i) Calculate cos(A(ε)) and cos(A(ε) ⊗ A(ε)) applying the Cayley-Hamilton theorem.
(ii) Calculate tanh(A(ε)) applying the Cayley-Hamilton theorem.
Problem 3. Let ε ∈ R. Consider the 4×4 matrix
* « > - ( ! : M < :)•
Find exp(A(ε)) applying the Cayley-Hamilton theorem.
Problem 4. Consider the Bell matrix

B = (1/√2) ( 1  0  0  1 )
           ( 0  1  1  0 )
           ( 0  1 −1  0 )
           ( 1  0  0 −1 )

which is a unitary matrix. Apply the Cayley-Hamilton theorem to find the skew-hermitian matrix K such that B = e^K.
Chapter 11
Hadamard Product
Suppose A = (a_ij) and B = (b_ij) are two n×m matrices with entries in some field. Then the Hadamard product (sometimes called the Schur product) is the entrywise product of A and B, that is, the n×m matrix A • B whose (i,j) entry is a_ij b_ij, i.e.

A • B := (a_ij b_ij).

We have the following properties. Suppose A, B, C are matrices of the same size and c is a scalar. Then

A • (B • C) = (A • B) • C
A • B = B • A
A • (B + C) = A • B + A • C
A • (cB) = c(A • B).

Let J be the n×m matrix with all entries equal to one. Then J • A = A • J = A. If A, B are n×n diagonal matrices, then A • B = AB is again a diagonal matrix. If A, B are n×n positive definite matrices and (a_jj) are the diagonal entries of A, then

det(A • B) ≥ det(B) ∏_{j=1}^{n} a_jj

with equality if and only if A is a diagonal matrix. If A and B are binary matrices, then A • B is a binary matrix. If A and B are matrices with entries ±1, then A • B is again a matrix with entries ±1.
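The properties listed above are easy to check numerically. A short NumPy sketch (our own illustration; in NumPy the Hadamard product is the elementwise operator `*`):

```python
# Check the algebraic properties of the Hadamard product on sample matrices.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = np.array([[1.0, 0.0], [2.0, 1.0]])
J = np.ones((2, 2))                        # matrix of all ones

assert np.allclose(A * (B * C), (A * B) * C)    # associativity
assert np.allclose(A * B, B * A)                # commutativity
assert np.allclose(A * (B + C), A * B + A * C)  # distributivity
assert np.allclose(J * A, A)                    # J is the neutral element
print("all Hadamard product identities hold")
```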
Problem 1. Let A and B be given matrices, with A invertible and J the matrix of all ones. Calculate

A • B,  A • A^{-1},  A • J,  J • B.

Solution 1. Entrywise multiplication of the matrices yields A • B and A • A^{-1}, together with A • J = A and J • B = B.
Problem 2. (i) Consider two invertible 2×2 matrices A and B. Is A • B invertible?
(ii) Let U be an n×n unitary matrix. Can we conclude that U • U* is a unitary matrix?

Solution 2. (i) Not in general: for the given pair of invertible matrices the Hadamard product A • B is non-invertible.
(ii) In general we cannot conclude that U • U* is a unitary matrix. For example, take a real unitary matrix with U* = U = U^{-1}. Then U • U* need not be unitary. Note that for this example U • U* is a projection matrix.

Problem 3.
Let n ∈ N and

A = ( 1  n )
    ( 0  1 ).

Thus det(A) = det(A^{-1}) = 1. Find det(A • A^{-1}).

Solution 3. Obviously we find det(A • A^{-1}) = 1, since A • A^{-1} is again upper triangular with unit diagonal.
Problem 4. Let A, B be m×n matrices. Show that rank(A • B) ≤ (rank(A))(rank(B)).

Solution 4. Any matrix of rank r can be written as a sum of r rank-one matrices, each of which is an outer product of two vectors (column vector times row vector). Thus, if rank(A) = r_1 and rank(B) = r_2, we have

A = Σ_{i=1}^{r_1} x_i y_i^*,   B = Σ_{j=1}^{r_2} u_j v_j^*

where x_i, u_j ∈ C^m, y_i, v_j ∈ C^n, i = 1, ..., r_1 and j = 1, ..., r_2. Then

A • B = Σ_{i=1}^{r_1} Σ_{j=1}^{r_2} (x_i • u_j)(y_i • v_j)^*.

This shows that A • B is a sum of at most r_1 r_2 rank-one matrices. Thus

rank(A • B) ≤ r_1 r_2 = (rank(A))(rank(B)).

Problem 5.
Let

A = ( a_1  a_2 )      B = ( b_1  b_2 )
    ( a_2  a_3 ),         ( b_2  b_3 )

be symmetric matrices over R. Assume that A and B are positive definite. Show that A • B is positive definite using the trace and determinant.

Solution 5. We have

A positive definite ⟺ tr(A) > 0 and det(A) > 0,
B positive definite ⟺ tr(B) > 0 and det(B) > 0.

Thus we have a_1 + a_3 > 0, a_1 a_3 − a_2^2 > 0 (or a_1 a_3 > a_2^2) and b_1 + b_3 > 0, b_1 b_3 − b_2^2 > 0 (or b_1 b_3 > b_2^2). Now using the conditions a_1 a_3 > a_2^2 and b_1 b_3 > b_2^2 we have

det(A • B) = (a_1 a_3)(b_1 b_3) − (a_2^2)(b_2^2) > (a_2^2)(b_2^2) − (a_2^2)(b_2^2) = 0.

Thus det(A • B) > 0. For the trace we have tr(A • B) = a_1 b_1 + a_3 b_3. From a_1 a_3 > a_2^2 ≥ 0 and a_1 + a_3 > 0 it follows that a_1 > 0 and a_3 > 0; similarly b_1 > 0 and b_3 > 0. Therefore tr(A • B) = a_1 b_1 + a_3 b_3 > 0. Thus since det(A • B) > 0 and tr(A • B) > 0 it follows that A • B is positive definite.
Problem 6. Let A be an n×n matrix over C. The spectral radius ρ(A) is the radius of the smallest circle in the complex plane centred at the origin that contains all its eigenvalues. Every characteristic polynomial has at least one root. Let A, B be nonnegative matrices. Then ρ(A • B) ≤ ρ(A)ρ(B). Apply this inequality to the nonnegative matrices

A = ( 1/4   0  )      B = ( 1  1 )
    (  0   3/4 ),         ( 1  1 ).

Solution 6. The eigenvalues of A are 1/4 and 3/4. Thus the spectral radius of A is 3/4. The eigenvalues of B are 0 and 2. Thus the spectral radius of B is 2. Now A • B = diag(1/4, 3/4). The eigenvalues of A • B are 1/4 and 3/4, with spectral radius 3/4. Thus ρ(A • B) = 3/4 ≤ (3/4) · 2 = 3/2.
Problem 7. There exists an n^2 × n selection matrix J such that A • B = J^T (A ⊗ B) J, where J^T is defined as the n × n^2 matrix [E_11 E_22 ... E_nn] with E_ii the n×n matrix of zeros except for a 1 in the (i,i)-th position (elementary matrices). Prove this identity for the special case n = 2.

Solution 7. From the definition of the selection matrix we have

J^T = ( 1 0 0 0 )         J = ( 1 0 )
      ( 0 0 0 1 )    ⇒        ( 0 0 )
                              ( 0 0 )
                              ( 0 1 ).

We have

A • B = ( a_11 b_11   a_12 b_12 )
        ( a_21 b_21   a_22 b_22 ).

Now

J^T (A ⊗ B) J = ( 1 0 0 0 ) ( a_11 b_11  a_11 b_12  a_12 b_11  a_12 b_12 ) ( 1 0 )
                ( 0 0 0 1 ) ( a_11 b_21  a_11 b_22  a_12 b_21  a_12 b_22 ) ( 0 0 )
                            ( a_21 b_11  a_21 b_12  a_22 b_11  a_22 b_12 ) ( 0 0 )
                            ( a_21 b_21  a_21 b_22  a_22 b_21  a_22 b_22 ) ( 0 1 )

= ( a_11 b_11   a_12 b_12 )
  ( a_21 b_21   a_22 b_22 ) = A • B.
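The identity is easy to test numerically; a NumPy sketch (our own illustration) for n = 2:

```python
# Check A • B = J^T (A ⊗ B) J for the 2x2 selection matrix J.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# J^T = [E11 E22] is 2 x 4, so J is 4 x 2
JT = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 1.0]])
J = JT.T

lhs = A * B                        # Hadamard product
rhs = JT @ np.kron(A, B) @ J       # selection from the Kronecker product
assert np.allclose(lhs, rhs)
print(rhs)
```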
Problem 8. If A, B are n×n positive definite matrices and (a_jj) are the diagonal entries of A, then

det(A • B) ≥ det(B) ∏_{j=1}^{n} a_jj        (1)

with equality if and only if A is a diagonal matrix. Let
. 1 = 1 : : 1 ,
B = ( 1J1 I ).
First show that A and B are positive definite and then calculate the left- and right-hand side of (1).
Solution 8. The matrix A is symmetric over R and the eigenvalues are positive. The matrix B is symmetric over R and the eigenvalues are positive. Thus both A and B are positive definite. We obtain

det(A • B) = 244,   det(B) ∏_j a_jj = 180.
Problem 9. Consider the symmetric 4×4 matrices

A = ( 1 0 0 1 )      B = ( 1 0 0 0 )
    ( 0 0 0 0 )          ( 0 0 1 0 )
    ( 0 0 0 0 ),         ( 0 1 0 0 )
    ( 1 0 0 1 )          ( 0 0 0 1 ).

Calculate the Hadamard product A • B. Show that ||A • B|| ≤ ||A^T A|| ||B^T B||, where the norm is given by the Hilbert-Schmidt norm.

Solution 9. We have

A • B = ( 1 0 0 0 )
        ( 0 0 0 0 )
        ( 0 0 0 0 )
        ( 0 0 0 1 ).

Thus (A • B)(A • B)^T = A • B and tr((A • B)(A • B)^T) = 2, so ||A • B|| = √2. Now

A^T A = ( 2 0 0 2 )          B^T B = ( 1 0 0 0 )
        ( 0 0 0 0 )                  ( 0 1 0 0 )
        ( 0 0 0 0 )                  ( 0 0 1 0 )
        ( 2 0 0 2 ),                 ( 0 0 0 1 )

so ||A^T A|| = 4 and ||B^T B|| = 2, and indeed √2 ≤ 4 · 2. Thus the inequality given in the problem holds for these matrices.
Problem 10. Let A, B, C and D be n×n matrices over R. Show that

tr((A • B)(C^T • D)) = tr((A • B • C)D).

Solution 10. We define A • B = E = (e_ij) and C • D^T = F = (f_ij). Then

tr((A • B)(C^T • D)) = tr(EF^T) = Σ_{i=1}^{n} Σ_{j=1}^{n} e_ij f_ij = Σ_{i=1}^{n} Σ_{j=1}^{n} (a_ij b_ij c_ij) d_ji = tr((A • B • C)D).
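A random-matrix check of this trace identity, sketched with NumPy (our own illustration):

```python
# Verify tr((A • B)(C^T • D)) = tr((A • B • C) D) for random matrices.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

lhs = np.trace((A * B) @ (C.T * D))
rhs = np.trace((A * B * C) @ D)
assert np.isclose(lhs, rhs)
print("identity verified")
```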
Problem 11. Let V, W be two matrices of the same order. If all entries of V are nonzero, then we say that V is Schur invertible and define its Schur inverse, V^(−), by V^(−) • V = J, where J is the matrix with all 1's.
The vector space M_n(F) of n×n matrices acts on itself in three distinct ways: if C ∈ M_n(F) we can define endomorphisms X_C, Δ_C and Y_C by

X_C M := CM,   Δ_C M := C • M,   Y_C M := MC^T.

Let A, B be n×n matrices. Assume that X_A is invertible and Δ_B is invertible in the sense of Schur. Note that X_A is invertible if and only if A is, and Δ_B is invertible if and only if the Schur inverse B^(−) is defined. We say that (A, B) is a one-sided Jones pair if

X_A Δ_B X_A = Δ_B X_A Δ_B.

We call this the braid relation. Give an example for a one-sided Jones pair.

Solution 11. A trivial example is the pair (I_n, J_n), where J_n is the n×n matrix with all ones. Note that for the left-hand side we have

(X_A Δ_B X_A)(M) = X_A Δ_B (AM) = X_A (B • (AM)) = A(B • (AM)).

Thus with A = I_n and B = J_n we obtain M. For the right-hand side we have

(Δ_B X_A Δ_B)(M) = Δ_B X_A (B • M) = Δ_B (A(B • M)) = B • (A(B • M)).

Thus with A = I_n and B = J_n we also obtain M.
Problem 12. Let A, B be n×n matrices. Let e_1, ..., e_n be the standard basis vectors in C^n. We form the n^2 column vectors (Ae_j) • (Be_k), j, k = 1, ..., n. If A is invertible and B is Schur invertible, then for any j

{ (Ae_1) • (Be_j), (Ae_2) • (Be_j), ..., (Ae_n) • (Be_j) }

is a basis of the vector space C^n. Let

A = ( 1 1 1 )      B = (  1 −1  1 )
    ( 0 1 1 ),         ( −1  1 −1 )
    ( 0 0 1 )          (  1 −1  1 ).

Find these bases for these matrices.

Solution 12. For j = 1 we have

(Ae_1) • (Be_1) = (1, 0, 0)^T,  (Ae_2) • (Be_1) = (1, −1, 0)^T,  (Ae_3) • (Be_1) = (1, −1, 1)^T.

The three vectors are linearly independent and thus form a basis in C^3. For j = 2 we obtain

(Ae_1) • (Be_2) = (−1, 0, 0)^T,  (Ae_2) • (Be_2) = (−1, 1, 0)^T,  (Ae_3) • (Be_2) = (−1, 1, −1)^T.

For j = 3 we obtain

(Ae_1) • (Be_3) = (1, 0, 0)^T,  (Ae_2) • (Be_3) = (1, −1, 0)^T,  (Ae_3) • (Be_3) = (1, −1, 1)^T.
Problem 13. Let B = (b_jk) be a diagonalizable n×n matrix with eigenvalues λ_1, λ_2, ..., λ_n. Thus, there is a nonsingular n×n matrix A such that

B = A (diag(λ_1, λ_2, ..., λ_n)) A^{-1}.

Show that

(b_11, b_22, ..., b_nn)^T = (A • (A^{-1})^T)(λ_1, λ_2, ..., λ_n)^T.

Thus the vector of eigenvalues of B is transformed to the vector of its diagonal entries by the coefficient matrix A • (A^{-1})^T.

Solution 13. Let (A)_ij denote the entry in the i-th row and j-th column of A. Then

(diag(λ_1, λ_2, ..., λ_n) A^{-1})_ij = λ_i (A^{-1})_ij.

It follows that

b_jj = (A diag(λ_1, ..., λ_n) A^{-1})_jj = Σ_{k=1}^{n} (A)_jk (diag(λ_1, ..., λ_n) A^{-1})_kj
     = Σ_{k=1}^{n} (A)_jk (A^{-1})_kj λ_k = Σ_{k=1}^{n} (A)_jk ((A^{-1})^T)_jk λ_k = Σ_{k=1}^{n} (A • (A^{-1})^T)_jk λ_k.
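This relation between eigenvalues and diagonal entries is easy to confirm numerically; a NumPy sketch (our own illustration):

```python
# Check diag(B) = (A • (A^{-1})^T) (lambda_1, ..., lambda_n)^T.
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([1.0, 2.0, 3.0])        # chosen eigenvalues
A = rng.standard_normal((3, 3))        # generic nonsingular A
B = A @ np.diag(lam) @ np.linalg.inv(A)

coeff = A * np.linalg.inv(A).T         # the matrix A • (A^{-1})^T
assert np.allclose(coeff @ lam, np.diag(B))
print("diagonal entries recovered")
```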
Problem 14. Let A, B, C, D be n×n matrices over R. Let s^T = (1 ... 1) be a row vector in R^n. Show that

s^T (A • B)(C^T • D) s = tr(C Γ D)

where Γ is a diagonal matrix with γ_jj = Σ_{i=1}^{n} a_ij b_ij and j = 1, ..., n.

Solution 14. With E := A • B and F := C^T • D we have

s^T (A • B)(C^T • D) s = s^T E F s = Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{k=1}^{n} e_ij f_jk
= Σ_{j=1}^{n} Σ_{k=1}^{n} ( Σ_{i=1}^{n} a_ij b_ij ) c_kj d_jk = Σ_{j=1}^{n} Σ_{k=1}^{n} γ_jj c_kj d_jk = tr(C Γ D).
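A random-matrix check of this identity, sketched with NumPy (our own illustration):

```python
# Verify s^T (A • B)(C^T • D) s = tr(C Gamma D) with
# Gamma diagonal, gamma_jj = sum_i a_ij b_ij.
import numpy as np

rng = np.random.default_rng(2)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
s = np.ones(n)

Gamma = np.diag((A * B).sum(axis=0))   # column sums of A • B
lhs = s @ (A * B) @ (C.T * D) @ s
rhs = np.trace(C @ Gamma @ D)
assert np.isclose(lhs, rhs)
print("identity verified")
```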
Problem 15. Given two matrices A and B of the same size. If all entries of A are nonzero, then we say that A is Schur invertible and define its Schur inverse, A^(−), by (A^(−))_ij := 1/a_ij. Equivalently, we have A^(−) • A = J, where J is the matrix with all ones. An n×n matrix W is a type-II matrix if

W (W^(−))^T = n I_n

where I_n is the n×n identity matrix. Find such a matrix for n = 2.

Solution 15. Such a matrix is

W = ( 1  1 )
    ( 1 −1 )

with W = W^(−). Indeed W (W^(−))^T = W W^T = 2 I_2.
Problem 16. Let A, B, C, D be 2×2 matrices and ⊗ be the Kronecker product. Is

(A ⊗ B) • (C ⊗ D) = (A • C) ⊗ (B • D)?

Solution 16. Yes. A Maxima program to prove it is

/* Note that * is the Hadamard product and . is the matrix product */
load("linearalgebra")$
A: matrix([a11,a12],[a21,a22]); B: matrix([b11,b12],[b21,b22]);
C: matrix([c11,c12],[c21,c22]); D: matrix([d11,d12],[d21,d22]);
T1: kronecker_product(A,B); T2: kronecker_product(C,D);
T3: A*C; T4: B*D;
R1: T1*T2;
R2: kronecker_product(T3,T4);
R3: R1-R2; print("R3=",R3);

The program prints the zero matrix for R3.
Supplementary Problems
Problem 1. Show that the Hadamard product is linear. Show that the Hadamard product is associative.

Problem 2. Let A, B be n×n positive semidefinite matrices. Show that A • B is also positive semidefinite.

Problem 3. (i) Let A, B be n×n symmetric matrices over R. Is A • B a symmetric matrix over R?
(ii) Let C, D be n×n hermitian matrices. Is C • D a hermitian matrix?

Problem 4. Let A be a positive semidefinite n×n matrix. Let B be an n×n matrix with ||B|| ≤ 1, where || · || denotes the spectral norm. Show that

max{ ||A • B|| : ||B|| ≤ 1 } = max_j (a_jj)

where j = 1, ..., n.

Problem 5. Let A, B be m×n matrices and D and E be m×m and n×n diagonal matrices, respectively. Show that

D(A • B)E = A • (DBE).

Problem 6. Let A be an n×n diagonalizable matrix over C with eigenvalues λ_1, ..., λ_n and diagonalization A = TDT^{-1}, where D is a diagonal matrix such that D_jj = λ_j for all j = 1, ..., n. Show that

(a_11, a_22, ..., a_nn)^T = (T • (T^{-1})^T)(λ_1, λ_2, ..., λ_n)^T.

Problem 7. Show that tr(A(B • C)) = (vec(A^T • B))^T vec(C). The left-hand side yields

tr(A(B • C)) = Σ_{j=1}^{n} Σ_{r=1}^{n} a_jr b_rj c_rj.

Problem 8. Consider the n×n matrices with entries ±1. If A, B are such matrices, then A • B is again such a matrix. Do these matrices form a group under the Hadamard product? The neutral element would be the matrix J with all entries 1.

Problem 9. Consider the n×n matrices with entries 0 and 1. Let ⊕ be the XOR operation and define A • B = (a_ij ⊕ b_ij). Then A • B is again a matrix with entries 0 and 1. Do these matrices form a group?
Chapter 12
Norms and Scalar Products
A linear space V is called a normed space if for every v ∈ V there is associated a real number ||v||, the norm of the vector v, such that

||v|| ≥ 0,  ||v|| = 0 iff v = 0
||cv|| = |c| ||v|| where c ∈ C
||v + w|| ≤ ||v|| + ||w|| for all v, w ∈ V.

Consider C^n and v = (v_1, v_2, ..., v_n)^T. The most common vector norms are

(i) the Euclidean norm: ||v||_2 := sqrt(v^* v)
(ii) the ℓ_1 norm: ||v||_1 := |v_1| + |v_2| + ... + |v_n|
(iii) the ℓ_∞ norm: ||v||_∞ := max(|v_1|, |v_2|, ..., |v_n|)
(iv) the ℓ_p norm (p ≥ 1): ||v||_p := (|v_1|^p + |v_2|^p + ... + |v_n|^p)^{1/p}.

A scalar product of two vectors v and u in C^n is given by v^* u. This implies the norm ||v||^2 = v^* v.

Let A be an n×n matrix over C and x ∈ C^n. Each vector norm induces the matrix norm

||A|| := max_{||x||=1} ||Ax||.

Another important matrix norm is

||A|| := sqrt( Σ_{i,j} |a_ij|^2 ).

This is called the Frobenius norm (also called the Hilbert-Schmidt norm). It relates to the scalar product ⟨A, B⟩ := tr(AB^*) of two n×n matrices A and B.
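All of the norms listed above are available in NumPy; a short sketch (our own illustration):

```python
# Vector and matrix norms from the chapter introduction.
import numpy as np

v = np.array([3.0, -4.0, 0.0])
print(np.linalg.norm(v))          # Euclidean norm: 5.0
print(np.linalg.norm(v, 1))       # l1 norm: 7.0
print(np.linalg.norm(v, np.inf))  # l_infinity norm: 4.0

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.norm(A, 2))       # induced (spectral) norm
print(np.linalg.norm(A, 'fro'))   # Frobenius / Hilbert-Schmidt norm
# The Frobenius norm agrees with sqrt(tr(A A^*)):
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A @ A.T)))
```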
Problem 1. Consider the vectors in C^4

v = (  i )        w = ( i )
    (  1 )            ( 0 )
    ( −1 ),           ( 0 )
    ( −i )            ( i ).

Find the Euclidean norm and then normalize the vectors. Are the vectors orthogonal to each other?

Solution 1. We have

v^* v = (−i  1  −1  i)(i, 1, −1, −i)^T = 4,   w^* w = (−i  0  0  −i)(i, 0, 0, i)^T = 2.

Thus ||v|| = 2, ||w|| = √2 and the normalized vectors are

(1/2)(i, 1, −1, −i)^T,   (1/√2)(i, 0, 0, i)^T.

Yes, the vectors are orthogonal to each other: v^* w = (−i)(i) + (i)(i) = 1 − 1 = 0.
Problem 2. Consider the vectors in R^3

v_1 = ( 1 )    v_2 = (  1 )    v_3 = (  0 )
      ( 1 ),         ( −1 ),         (  0 )
      ( 1 )          (  1 )          ( −1 ).

(i) Show that the vectors are linearly independent.
(ii) Apply the Gram-Schmidt orthonormalization process to these vectors.

Solution 2. (i) The equation a v_1 + b v_2 + c v_3 = 0 only admits the solution a = b = c = 0.
(ii) Since v_1^T v_1 = 3 we set w_1 = v_1/√3. Since α_1 := w_1^T v_2 = 1/√3,

z_2 = v_2 − α_1 w_1 = ( 2/3, −4/3, 2/3 )^T.

Next we normalize the vector z_2 and set

w_2 = (1/√6)(1, −2, 1)^T.

Finally we find that w_1^T v_3 = −1/√3, w_2^T v_3 = −1/√6 and therefore

z_3 = v_3 − (w_1^T v_3) w_1 − (w_2^T v_3) w_2 = ( 1/2, 0, −1/2 )^T.

Normalizing yields

w_3 = (1/√2)(1, 0, −1)^T.

Thus we find the orthonormal basis

w_1 = (1/√3)(1, 1, 1)^T,  w_2 = (1/√6)(1, −2, 1)^T,  w_3 = (1/√2)(1, 0, −1)^T.

Problem 3. Let A be an n×n matrix over R. The spectral norm is

||A||_2 := max_{x ≠ 0} ||Ax||_2 / ||x||_2.

It can be shown that ||A||_2 can also be calculated as

||A||_2 = sqrt( largest eigenvalue of A^T A ).

The eigenvalues of A^T A are real and nonnegative. Consider the 2×2 matrix A. Calculate ||A||_2 using this method.

Solution 3. We have A^T A with eigenvalues 0.0257 and 38.9743. Thus

||A||_2 = sqrt(38.9743) = 6.2429.

Problem 4. Let { v_j : j = 1, ..., r } be an orthogonal set of vectors in R^n with r < n. Show that

|| Σ_{j=1}^{r} v_j ||^2 = Σ_{j=1}^{r} ||v_j||^2.

Solution 4. We have

|| Σ_{j=1}^{r} v_j ||^2 = ( Σ_{j=1}^{r} v_j, Σ_{k=1}^{r} v_k ) = Σ_{j=1}^{r} Σ_{k=1}^{r} (v_j, v_k) = Σ_{j=1}^{r} ||v_j||^2

where we used that for the scalar product we have (v_j, v_k) = 0 for j ≠ k.
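The Gram-Schmidt computation of Problem 2 can be checked numerically; a NumPy sketch (our own illustration):

```python
# Gram-Schmidt applied to v1, v2, v3 from Problem 2.
import numpy as np

v = [np.array([1.0, 1.0, 1.0]),
     np.array([1.0, -1.0, 1.0]),
     np.array([0.0, 0.0, -1.0])]

w = []
for x in v:
    z = x - sum((u @ x) * u for u in w)   # subtract projections onto earlier w's
    w.append(z / np.linalg.norm(z))

W = np.array(w)
assert np.allclose(W @ W.T, np.eye(3))    # rows form an orthonormal basis
print(W)
```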
Problem 5. Consider a 2×2 matrix A over C. Find the norm of A implied by the scalar product, ||A|| = sqrt(⟨A, A⟩) = sqrt(tr(AA^*)).

Solution 5. Since

A^* = ( ā_11  ā_21 )
      ( ā_12  ā_22 )

we obtain

AA^* = ( a_11 ā_11 + a_12 ā_12    a_11 ā_21 + a_12 ā_22 )
       ( a_21 ā_11 + a_22 ā_12    a_21 ā_21 + a_22 ā_22 )

where ā_ij is the complex conjugate of a_ij. Thus

tr(AA^*) = a_11 ā_11 + a_12 ā_12 + a_21 ā_21 + a_22 ā_22.

Therefore the norm of A is ||A|| = sqrt(a_11 ā_11 + a_12 ā_12 + a_21 ā_21 + a_22 ā_22) = sqrt( Σ_{i,j} |a_ij|^2 ).
Problem 6. Let A, B be n×n matrices over C. A scalar product can be defined as

⟨A, B⟩ := tr(AB^*).

The scalar product implies a norm ||A||^2 = ⟨A, A⟩ = tr(AA^*). This norm is called the Hilbert-Schmidt norm.
(i) Consider the two Dirac matrices

γ_0 := ( 1 0  0  0 )      γ_1 := (  0  0 0 1 )
       ( 0 1  0  0 )             (  0  0 1 0 )
       ( 0 0 −1  0 ),            (  0 −1 0 0 )
       ( 0 0  0 −1 )             ( −1  0 0 0 ).

Calculate ⟨γ_0, γ_1⟩.
(ii) Let U be a unitary n×n matrix. Find ⟨UA, UB⟩.
(iii) Let C, D be m×m matrices over C. Find ⟨A ⊗ C, B ⊗ D⟩.
(iv) Let U be a unitary matrix. Calculate ⟨U, U⟩. Then find the norm implied by the scalar product.
(v) Calculate ||U|| := max_{||v||=1} ||Uv||.

Solution 6. (i) We find ⟨γ_0, γ_1⟩ = tr(γ_0 γ_1^*) = 0.
(ii) Since

tr(UA(UB)^*) = tr(UAB^*U^*) = tr(U^*UAB^*) = tr(AB^*)

using cyclic invariance, we find that ⟨UA, UB⟩ = ⟨A, B⟩. Thus the scalar product is invariant under the unitary transformation.
(iii) Since

tr((A ⊗ C)(B ⊗ D)^*) = tr((AB^*) ⊗ (CD^*)) = tr(AB^*) tr(CD^*)

we find ⟨A ⊗ C, B ⊗ D⟩ = ⟨A, B⟩⟨C, D⟩.
(iv) We have ⟨U, U⟩ = tr(UU^*) = tr(I_n) = n, where we used that U^{-1} = U^*. Thus the norm is given by ||U|| = √n.
(v) With ||v|| = 1 and U^*U = I_n we find ||Uv||^2 = v^* U^* U v = v^* v = 1. Thus ||U|| = 1.
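Part (iii) can be checked on random complex matrices; a NumPy sketch (our own illustration):

```python
# Verify <A ⊗ C, B ⊗ D> = <A, B><C, D> with <X, Y> = tr(X Y^*).
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((2, 3, 3)) + 1j * rng.standard_normal((2, 3, 3))
C, D = rng.standard_normal((2, 2, 2)) + 1j * rng.standard_normal((2, 2, 2))

def sp(X, Y):
    # scalar product <X, Y> = tr(X Y^*)
    return np.trace(X @ Y.conj().T)

lhs = sp(np.kron(A, C), np.kron(B, D))
rhs = sp(A, B) * sp(C, D)
assert np.isclose(lhs, rhs)
print("scalar product factorizes over the Kronecker product")
```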
Problem 7. (i) Let { x_j : j = 1, ..., n }, { y_j : j = 1, ..., n } be orthonormal bases in C^n. Show that

U = (u_jk) := (x_j^* y_k)

is a unitary matrix, where x_j^* y_k is the scalar product of the vectors x_j and y_k. This means showing that UU^* = I_n.
(ii) Consider two orthonormal bases { x_1, x_2 }, { y_1, y_2 } in C^2. Use these bases to construct the corresponding 2×2 unitary matrix.

Solution 7. (i) Using the completeness relation

Σ_{ℓ=1}^{n} y_ℓ y_ℓ^* = I_n

and the property that matrix multiplication is associative we have

(UU^*)_jk = Σ_{ℓ=1}^{n} (x_j^* y_ℓ)(y_ℓ^* x_k) = x_j^* ( Σ_{ℓ=1}^{n} y_ℓ y_ℓ^* ) x_k = x_j^* x_k = δ_jk.

(ii) For the given bases we have

u_11 = x_1^* y_1 = (1/2)(1 + i),  u_12 = x_1^* y_2 = (1/2)(1 − i),
u_21 = x_2^* y_1 = (1/2)(1 − i),  u_22 = x_2^* y_2 = (1/2)(1 + i).

Thus

U = (1/2) ( 1 + i   1 − i )
          ( 1 − i   1 + i ).

Problem 8.
Find the norm ||A|| = sqrt(tr(A^*A)) of the skew-hermitian matrix

A = (    i      2 + i )
    ( −2 + i     3i   )

without calculating A^*.

Solution 8. Since A^* = −A for a skew-hermitian matrix we obtain ||A|| = sqrt(−tr(A^2)). Since

tr(A^2) = (i)^2 + (2 + i)(−2 + i) + (−2 + i)(2 + i) + (3i)^2 = −1 − 5 − 5 − 9 = −20

we obtain ||A|| = √20.
Problem 9. Let A and B be 2×2 diagonal matrices over R. Assume that

tr(AA^T) = tr(BB^T)

and

max_{||x||=1} ||Ax|| = max_{||x||=1} ||Bx||.

Can we conclude that A = B?

Solution 9. No, we cannot conclude that A = B. For example, the two 2×2 matrices

A = ( 1 0 )      B = ( 2 0 )
    ( 0 2 ),         ( 0 1 )

satisfy both conditions.
Problem 10. Let A be an n×n matrix over C. Let || · || be a subordinate matrix norm for which ||I_n|| = 1. Assume that ||A|| < 1.
(i) Show that the matrix (I_n − A) is nonsingular.
(ii) Show that ||(I_n − A)^{-1}|| ≤ (1 − ||A||)^{-1}.

Solution 10. (i) Let λ_1, ..., λ_n be the eigenvalues of A. Then the eigenvalues of I_n − A are 1 − λ_1, ..., 1 − λ_n. Since ||A|| < 1, we have |λ_j| < 1 for each j. Thus, none of the numbers 1 − λ_1, ..., 1 − λ_n is equal to 0. This proves that I_n − A is nonsingular.
(ii) Since ||A|| < 1, we can write the expansion

(I_n − A)^{-1} = Σ_{j=0}^{∞} A^j.        (1)

Now we have ||A^j|| ≤ ||A||^j. Taking the norm on both sides of (1) we have

||(I_n − A)^{-1}|| ≤ Σ_{j=0}^{∞} ||A||^j = (1 − ||A||)^{-1}

since ||I_n|| = 1. Note that the infinite series 1 + x + x^2 + ... converges to 1/(1 − x) if and only if |x| < 1.
Problem 11. Let A be an n×n matrix. Assume that ||A|| < 1. Show that

||(I_n − A)^{-1} − I_n|| ≤ ||A|| / (1 − ||A||).

Solution 11. For any two n×n nonsingular matrices X, Y we have the identity

X^{-1} − Y^{-1} = X^{-1}(Y − X)Y^{-1}.

If Y = I_n and X = I_n − A, then (I_n − A)^{-1} − I_n = (I_n − A)^{-1}A. Taking the norm on both sides yields

||(I_n − A)^{-1} − I_n|| ≤ ||(I_n − A)^{-1}|| ||A||.

Using ||(I_n − A)^{-1}|| ≤ (1 − ||A||)^{-1} we obtain the result.
Problem 12. Let A be an n×n nonsingular matrix and B an n×n matrix. Assume that ||A^{-1}B|| < 1.
(i) Show that A − B is nonsingular.
(ii) Show that

||A^{-1} − (A − B)^{-1}|| ≤ ||A^{-1}B|| ||A^{-1}|| / (1 − ||A^{-1}B||).

Solution 12. (i) We have A − B = A(I_n − A^{-1}B). Since ||A^{-1}B|| < 1 we have that I_n − A^{-1}B is nonsingular. Therefore A − B, which is the product of the nonsingular matrices A and I_n − A^{-1}B, is also nonsingular.
(ii) Consider the identity for invertible matrices X and Y

X^{-1} − Y^{-1} = X^{-1}(Y − X)Y^{-1}.

Substituting X = A and Y = A − B, we obtain A^{-1} − (A − B)^{-1} = −A^{-1}B(A − B)^{-1}. Taking the norm on both sides yields

||A^{-1} − (A − B)^{-1}|| ≤ ||A^{-1}B|| ||(A − B)^{-1}||.

For nonsingular matrices X and Y we have the identities

Y = X − (X − Y) = X(I_n − X^{-1}(X − Y))

and therefore Y^{-1} = (I_n − X^{-1}(X − Y))^{-1} X^{-1}. If we substitute Y = A − B and X = A we have

(A − B)^{-1} = (I_n − A^{-1}B)^{-1} A^{-1}.

Taking norms yields

||(A − B)^{-1}|| ≤ ||A^{-1}|| ||(I_n − A^{-1}B)^{-1}||.

We know that

||(I_n − A^{-1}B)^{-1}|| ≤ (1 − ||A^{-1}B||)^{-1}

and therefore

||(A − B)^{-1}|| ≤ ||A^{-1}|| / (1 − ||A^{-1}B||).

It follows that

||A^{-1} − (A − B)^{-1}|| ≤ ||A^{-1}B|| ||A^{-1}|| / (1 − ||A^{-1}B||).

Problem 13. Let A, B be n×n matrices over R and t ∈ R. Let || · || be a matrix norm. Show that

||e^{tA} e^{tB} − I_n|| ≤ exp(|t|(||A|| + ||B||)) − 1.
Solution 13. We have

||e^{tA} e^{tB} − I_n|| = ||(e^{tA} − I_n) + (e^{tB} − I_n) + (e^{tA} − I_n)(e^{tB} − I_n)||
≤ ||e^{tA} − I_n|| + ||e^{tB} − I_n|| + ||e^{tA} − I_n|| · ||e^{tB} − I_n||
≤ (exp(|t| ||A||) − 1) + (exp(|t| ||B||) − 1) + (exp(|t| ||A||) − 1)(exp(|t| ||B||) − 1)
= exp(|t|(||A|| + ||B||)) − 1.
Problem 14. Let A_1, A_2, ..., A_p be m×m matrices over C. Then we have the inequality

|| exp( Σ_{j=1}^{p} A_j ) − ( e^{A_1/n} e^{A_2/n} ... e^{A_p/n} )^n || ≤ (2/n) ( Σ_{j=1}^{p} ||A_j|| )^2 exp( ((n+2)/n) Σ_{j=1}^{p} ||A_j|| )

and (Trotter formula)

lim_{n→∞} ( e^{A_1/n} e^{A_2/n} ... e^{A_p/n} )^n = exp( Σ_{j=1}^{p} A_j ).

Let p = 2. Find the estimate for 2×2 matrices A_1, A_2 with ||A_1|| = 1 and ||A_2|| = 2.

Solution 14. We have ||A_1|| = 1, ||A_2|| = 2. Thus for the right-hand side we find

(2/n)( ||A_1|| + ||A_2|| )^2 exp( ((n+2)/n)( ||A_1|| + ||A_2|| ) ) = (18/n) exp( 3(n+2)/n ).
Problem 15. Let U_1, U_2 be unitary n×n matrices. Let v be a normalized vector in C^n. Consider the norm of a k×k matrix M

||M|| = max_{||v||=1} ||Mv||

where ||v|| denotes the Euclidean norm. Show that ||U_1 v − U_2 v|| ≤ ε if ||U_1 − U_2|| ≤ ε.

Solution 15. We have

||U_1 v − U_2 v|| = ||(U_1 − U_2)v|| ≤ max_{||y||=1} ||(U_1 − U_2)y|| = ||U_1 − U_2|| ≤ ε.

Problem 16.
Given the 2×2 matrix (rotation matrix)

R(α) = ( cos(α)  −sin(α) )
       ( sin(α)   cos(α) ).

Calculate ||R(α)|| = sup_{||x||=1} ||R(α)x||.

Solution 16. Since R^T(α) = R^{-1}(α) we have R^T(α)R(α) = I_2. The largest eigenvalue of I_2 is obviously 1 and the positive square root is 1. Thus ||R(α)|| = 1 and the norm is independent of α.
Problem 17. Let A be an n×n matrix. Let ρ(A) be the spectral radius of A. Then we have

ρ(A) ≤ min{ max_{1≤j≤n} Σ_{i=1}^{n} |a_ij|,  max_{1≤i≤n} Σ_{j=1}^{n} |a_ij| }.

Let

A = ( 0 1 2 )
    ( 3 4 5 )
    ( 6 7 8 ).

Calculate ρ(A) and the right-hand side of the inequality.

Solution 17. For the right-hand side we have

min{ max{3, 12, 21}, max{9, 12, 15} } = 15.

For the left-hand side we find

ρ(A) = max_j |λ_j| = max{ |6 − 3√6|, 6 + 3√6, 0 } = 6 + 3√6 ≈ 13.3485.
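Both sides of the bound are easy to compute; a NumPy sketch (our own illustration):

```python
# Spectral radius of A versus the min of max row sum and max column sum.
import numpy as np

A = np.array([[0.0, 1.0, 2.0],
              [3.0, 4.0, 5.0],
              [6.0, 7.0, 8.0]])

rho = max(abs(np.linalg.eigvals(A)))
row = np.abs(A).sum(axis=1).max()   # max row sum: 21
col = np.abs(A).sum(axis=0).max()   # max column sum: 15
assert rho <= min(row, col)
print(rho)   # approximately 13.3485 = 6 + 3*sqrt(6)
```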
Problem 18. Let t ∈ R. Consider the symmetric 3×3 matrix over R

A(t) = ( t 1 0 )
       ( 1 t 1 )
       ( 0 1 t ).

Find the condition on t such that ρ(A(t)) < 1, where ρ(A(t)) denotes the spectral radius of A(t).

Solution 18. We have

ρ(A) = max_{1≤j≤3} |λ_j|.

The eigenvalues depending on t are { t, t − √2, t + √2 } with the maximum t + √2. Thus we have the condition t < 1 − √2.
Problem 19. (i) Let A be an n×n positive semidefinite matrix. Show that (I_n + A)^{-1} exists.
(ii) Let B be an arbitrary n×n matrix. Show that the inverse of I_n + B^*B exists.

Solution 19. (i) Consider the linear equation (I_n + A)u = 0. Then −u = Au and since A is positive semidefinite we have

0 ≤ (Au)^* u = u^* A u = −u^* u = −||u||^2 ≤ 0.

Thus ||u|| = 0 and therefore u = 0. Then (I_n + A)^{-1} exists. We could also prove it as follows. Since A is a positive semidefinite matrix there is an n×n unitary matrix U such that UAU^* is a diagonal matrix and the diagonal elements are nonnegative. Since U(I_n + A)U^* = I_n + UAU^*, the inverse of U(I_n + A)U^* exists and therefore the inverse of I_n + A exists.
(ii) Since B^*B is a positive semidefinite matrix we can apply the result from (i) and therefore the inverse of I_n + B^*B exists.
Problem 20. Let A be an n×n matrix. One approach to calculate exp(A) is to compute an eigenvalue decomposition A = XBX^{-1} and then apply the formula e^A = Xe^BX^{-1}. We have, using the Schur decomposition,

U^* A U = diag(λ_1, ..., λ_n) + N

where U is unitary, the matrix N = (n_jk) is strictly upper triangular (n_jk = 0 for j ≥ k) and λ(A) = { λ_1, ..., λ_n } is the spectrum of A. Using the Padé approximation to calculate e^A we have

R_pq(A) = (D_pq(A))^{-1} N_pq(A)

where

N_pq(A) := Σ_{j=0}^{p} [ (p + q − j)! p! / ( (p + q)! j! (p − j)! ) ] A^j,
D_pq(A) := Σ_{j=0}^{q} [ (p + q − j)! q! / ( (p + q)! j! (q − j)! ) ] (−A)^j.

Consider the nonnormal matrix

A = ( 0 6 0 0 )
    ( 0 0 6 0 )
    ( 0 0 0 6 )
    ( 0 0 0 0 ).

Calculate ||R_11 − e^A||, where || · || denotes the 2-norm.

Solution 20. We have p = q = 1, N_11(A) = I_4 + (1/2)A, D_11(A) = I_4 − (1/2)A. Thus

N_11(A) = ( 1 3 0 0 )      D_11(A) = ( 1 −3  0  0 )
          ( 0 1 3 0 )                ( 0  1 −3  0 )
          ( 0 0 1 3 ),               ( 0  0  1 −3 )
          ( 0 0 0 1 )                ( 0  0  0  1 )

and

(D_11(A))^{-1} = ( 1 3 9 27 )        R_11(A) = ( 1 6 18 54 )
                 ( 0 1 3  9 )                  ( 0 1  6 18 )
                 ( 0 0 1  3 ),                 ( 0 0  1  6 )
                 ( 0 0 0  1 )                  ( 0 0  0  1 ).

We also have

A^2 = 36 ( 0 0 1 0 )      A^3 = 216 ( 0 0 0 1 )
         ( 0 0 0 1 )                ( 0 0 0 0 )
         ( 0 0 0 0 ),               ( 0 0 0 0 )
         ( 0 0 0 0 )                ( 0 0 0 0 )

and A^4 is the 4×4 zero matrix. Thus

e^A = I_4 + A + (1/2)A^2 + (1/6)A^3 = ( 1 6 18 36 )
                                      ( 0 1  6 18 )
                                      ( 0 0  1  6 )
                                      ( 0 0  0  1 ).

Thus

R_11(A) − e^A = ( 0 0 0 18 )
                ( 0 0 0  0 )
                ( 0 0 0  0 )
                ( 0 0 0  0 ).

It follows that ||R_11(A) − e^A|| = 18. In general, the bounds on the accuracy of an approximation f(A) to e^A deteriorate with loss of normality of the matrix A.
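The computation can be reproduced numerically; a NumPy sketch (our own illustration, using the finite exponential series since A is nilpotent):

```python
# Compare the (1,1) Pade approximant with exp(A) for the nilpotent matrix A.
import numpy as np

A = np.diag([6.0, 6.0, 6.0], k=1)    # the 4x4 nonnormal matrix above
I4 = np.eye(4)

expA = I4 + A + (A @ A) / 2 + (A @ A @ A) / 6          # A^4 = 0
R11 = np.linalg.inv(I4 - A / 2) @ (I4 + A / 2)

err = np.linalg.norm(R11 - expA, 2)
print(err)   # 18 (up to rounding)
```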
Problem 21. Let A be an n×n matrix with ||A|| < 1. Then ln(I_n + A) exists. Show that

|| ln(I_n + A) || ≤ ||A|| / (1 − ||A||).

Solution 21. Since ||A|| < 1 we have

ln(I_n + A) = Σ_{k=1}^{∞} (−1)^{k+1} A^k / k.

Thus

|| ln(I_n + A) || ≤ Σ_{k=1}^{∞} ||A||^k / k ≤ ||A|| Σ_{k=0}^{∞} ||A||^k = ||A|| / (1 − ||A||).

Problem 22. Let A, B be n×n matrices over C. Show that ||[A, B]|| ≤ 2||A|| ||B||.

Solution 22. We have

||[A, B]|| = ||AB − BA|| ≤ ||AB|| + ||BA|| ≤ ||A|| ||B|| + ||B|| ||A|| = 2||A|| ||B||.

Problem 23. Let A, B be n×n matrices over C. Then

||[A, B]|| ≤ ||AB|| + ||BA|| ≤ 2||A|| ||B||.

For the given 2×2 matrices A and B calculate ||[A, B]||, ||AB||, ||BA||, ||A||, ||B|| and thus verify the inequality for these matrices. The norm is given by ||C|| = sqrt(tr(CC^*)).

Solution 23. We have ||A|| = √2, ||B|| = √2 and ||AB|| = √2, ||BA|| = √2. The commutator of A and B is given by [A, B] = AB − BA. Thus

||[A, B]|| = 2 ≤ ||AB|| + ||BA|| = 2√2 ≤ 2||A|| ||B|| = 4.
Problem 24. Denote by || · ||_HS the Hilbert-Schmidt norm and by || · ||_O the operator norm, i.e.

||A||_HS := sqrt(tr(AA^*)),   ||A||_O := sup_{||x||=1} ||Ax||

where A is an m×n matrix over C, x ∈ C^n and ||x|| is the Euclidean norm.
(i) Let

M = (1/√2) ( 1  0  1 ).

Calculate ||M||_HS and ||M||_O.
(ii) Let A be an m×n matrix over C. Find the conditions on A such that ||A||_HS = ||A||_O.

Solution 24. (i) The first calculation is straightforward:

||M||_HS = sqrt(tr(MM^T)) = 1.

To calculate ||M||_O we need to find the largest eigenvalue of

M^T M = (1/2) ( 1 0 1 )
              ( 0 0 0 )
              ( 1 0 1 ).

The eigenvalues are 0, 0 and 1. Thus ||M||_O = √1 = 1.
(ii) Let λ_1, ..., λ_m be the nonnegative eigenvalues of AA^*; then

||A||_HS = sqrt(λ_1 + λ_2 + ... + λ_m).

Let μ_1, μ_2, ..., μ_n be the nonnegative eigenvalues of A^*A; then from tr(A^*A) = tr(AA^*)

||A||_HS = sqrt(μ_1 + μ_2 + ... + μ_n).

Now

||A||_O = sqrt( max{ μ_1, μ_2, ..., μ_n } ).

Setting

||A||_HS = sqrt(μ_1 + μ_2 + ... + μ_n) = ||A||_O

we find

μ_1 + μ_2 + ... + μ_n = max{ μ_1, μ_2, ..., μ_n }.

Since the values μ_j are nonnegative, all the eigenvalues, except perhaps for one, must be zero. In other words

||A||_HS = ||A||_O

if and only if A^*A has at most one nonzero eigenvalue (or equivalently AA^* has at most one nonzero eigenvalue).
Problem 25. Let A be an n×n matrix. The logarithmic norm is defined by

μ[A] := lim_{h→0+} ( ||I_n + hA|| − 1 ) / h.

Let ||A|| := sup_{||x||=1} ||Ax||. Let A be the n×n identity matrix I_n. Find μ[I_n].

Solution 25. Since ||I_n|| = 1 we have

μ[I_n] = lim_{h→0+} ( ||I_n + hI_n|| − 1 ) / h = lim_{h→0+} ( ||(1 + h)I_n|| − 1 ) / h = lim_{h→0+} ( (1 + h)||I_n|| − 1 ) / h = 1.
Problem 26. Consider the Hilbert space R^n and x, y ∈ R^n. The scalar product (x, y) is given by

(x, y) := x^T y = Σ_{j=1}^{n} x_j y_j.

Thus the norm is given by ||x|| = sqrt((x, x)). Show that |x^T y| ≤ ||x|| · ||y||.

Solution 26. If y = 0, then the left- and right-hand side of the inequality are equal to 0. Now assume that y ≠ 0. We set (ε ∈ R)

f(ε) := (x − εy, x − εy).

Obviously f(ε) ≥ 0. Now we have f(ε) = (x, x) − (εy, x) − (x, εy) + (εy, εy), or since (x, y) = (y, x),

f(ε) = (x, x) − 2ε(x, y) + ε^2 (y, y).

Differentiation of f with respect to ε yields

df(ε)/dε = −2(x, y) + 2ε(y, y),   d^2 f(ε)/dε^2 = 2(y, y) > 0.

Therefore we find that the condition df/dε = 0 leads to the minimum

(x, y) = ε(y, y)  ⇒  ε^* = (x, y)/(y, y).

Inserting ε^* into the function f(ε) yields

f(ε^*) = (x, x) − (x, y)^2/(y, y) ≥ 0.

Therefore (x, x)(y, y) − (x, y)^2 ≥ 0. Thus (x, x)(y, y) ≥ (x, y)^2 and the inequality follows.
Problem 27. Let A be an n×n hermitian matrix. Let u, v ∈ C^n and λ ∈ C. Consider the equation

Au − λu = v.

(i) Show that for λ nonreal (i.e. it has an imaginary part) the vector v cannot vanish unless u vanishes.
(ii) Show that for λ nonreal we have

||(A − λI_n)^{-1} v|| ≤ (1/|Im(λ)|) ||v||.

Solution 27. (i) From the equation, by taking the scalar product with u, we obtain

u^* A u − λ u^* u = u^* v.

Taking the imaginary part of this equation and taking into account that u^* A u is real yields

−Im(λ) ||u||^2 = Im(u^* v).

This shows that for λ nonreal, v cannot vanish unless u vanishes. Thus λ is not an eigenvalue of A, and (A − λI_n) has an inverse.
(ii) By the Schwarz inequality we have

|Im(λ)| ||u||^2 = |Im(u^* v)| ≤ |u^* v| ≤ ||u|| ||v||.

However u = (A − λI_n)^{-1} v. Therefore

||(A − λI_n)^{-1} v|| ≤ (1/|Im(λ)|) ||v||.
Problem 28. Let M be an m×n matrix over C. The Frobenius norm of M is given by

||M||_F := sqrt(tr(M^*M)) = sqrt(tr(MM^*)).

Let U_m be an m×m unitary matrix and U_n be an n×n unitary matrix. Show that

||U_m M||_F = ||M U_n||_F = ||M||_F.

Show that ||M||_F is the square root of the sum of the squares of the singular values of M.

Solution 28. Using the property that tr(AB) = tr(BA) we find

||U_m M||_F = sqrt(tr((U_m M)^*(U_m M))) = sqrt(tr(M^*M)) = ||M||_F

and

||M U_n||_F = sqrt(tr((M U_n)^*(M U_n))) = sqrt(tr(M^*M)) = ||M||_F.

Let M = UΣV^* be a singular value decomposition of M. Then

||M||_F = ||U^*M||_F = ||U^*MV||_F = ||Σ||_F = sqrt(tr(Σ^*Σ)).

Σ is an m×n matrix with

(Σ)_jk = σ_j if j = k, and 0 otherwise

where the σ_j are the singular values (with the convention that σ_j = 0 when j > m or j > n). It follows that

(Σ^*Σ)_jj = Σ_{k=1}^{m} (Σ)_kj (Σ)_kj = σ_j^2.

Finally

||M||_F = sqrt(tr(Σ^*Σ)) = sqrt( Σ_j σ_j^2 ).
Problem 29. Let M be an m x
matrix A over C which minimizes
matrix over C. Find the rank- 1
n
\\m -
a
m
x
n
\\f .
Hint. Find the singular value decomposition of
rank I which minimizes
M
= UTV*
and find
A!
with
IlE-AV
Apply the method to the 3 x 2 matrix
M=
/°
i\
I
\0
0 .
I/
Solution 29. Using the singular value decomposition
the properties in I. We obtain
||M -
A\\f =
IlU*(M -
A)V\\ f =
||E-
M
= UTV*
A'\\f
of
M
and
Norms and Scalar Products
where A' :=
Since
U *AV.
I I E - A 7II
333
* E E i s*
M <=i j= i
we find that minimizing
EE
- W
4|| implies that
\\M —
-(^ i2
2= ^
\ i= I j =I
A!
must be “diagonal”
min{ra,n}
IS —A'\\f
=
\
~ (A1)i
i=l
Since A must have rank I, A ! must have rank I (since unitary operators are
rank preserving). Since A f is “diagonal” only one entry can be non-zero. Thus
to minimize \\M —A|| we set
A'
= uieiimefiTl
where G\ is the largest singular value of M by convention. Finally
For the given M we find the singular value decomposition

(0 1)   (1/√2  0   1/√2) (√2 0)
(1 0) = ( 0    1    0  ) ( 0 1) (0 1)
(0 1)   (1/√2  0  −1/√2) ( 0 0) (1 0).

Thus we obtain

A = UA′V* = (0 1)
            (0 0)
            (0 1).
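The construction above can be checked numerically; this sketch uses numpy's SVD and keeps only the largest singular value:

```python
import numpy as np

M = np.array([[0., 1.], [1., 0.], [0., 1.]])
U, s, Vh = np.linalg.svd(M)          # M = U diag(s) Vh, s sorted descending

# best rank-1 approximation: keep only the largest singular value
A = s[0] * np.outer(U[:, 0], Vh[0, :])

# the error equals the square root of the discarded squared singular values
err = np.linalg.norm(M - A)
assert np.isclose(err, np.sqrt(np.sum(s[1:]**2)))
```

Here s = (√2, 1), so the minimal error is 1 and A recovers the matrix found above.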
Problem 30. Let A be an n × n matrix over C. The spectral radius of the matrix A is the nonnegative number ρ(A) defined by

ρ(A) := max{ |λ_j(A)| : 1 ≤ j ≤ n }

where λ_j(A) (j = 1, ..., n) are the eigenvalues of A. We define the norm of A as

||A|| := sup_{||x||=1} ||Ax||

where ||Ax|| denotes the Euclidean norm. Is ρ(A) ≤ ||A||? Prove or disprove.

Solution 30. From the eigenvalue equation Ax = λx (x ≠ 0) we obtain

|λ| ||x|| = ||λx|| = ||Ax|| ≤ ||A|| ||x||.

Therefore since ||x|| ≠ 0 we obtain |λ| ≤ ||A|| and ρ(A) ≤ ||A||.

Problem 31. Let A be an n × n matrix over C and I_m be the m × m unit matrix. Consider the norm of a k × k matrix M

||M|| = max_{||x||=1} ||Mx||

where ||x|| denotes the Euclidean norm. Show that ||A ⊗ I_m|| = ||A||.
Solution 31. The norm ||A|| is the square root of the largest eigenvalue of the positive semi-definite matrix AA*. Now

(A ⊗ I_m)(A* ⊗ I_m) = (AA*) ⊗ I_m.

The largest eigenvalue of the positive semidefinite matrix (AA*) ⊗ I_m is equal to the largest eigenvalue of AA* since the eigenvalues of I_m are 1. Thus ||A|| = ||A ⊗ I_m|| follows.
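A quick numerical check of ||A ⊗ I_m|| = ||A|| for the spectral norm (a sketch with a random real A):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
for m in (1, 2, 4):
    K = np.kron(A, np.eye(m))
    # the spectral norm (largest singular value) is unchanged by the factor I_m
    assert np.isclose(np.linalg.norm(K, 2), np.linalg.norm(A, 2))
```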
Problem 32. Find 2 × 2 matrices A and B over C which minimize

||M − A ⊗ B||_F,    M = (0 0 0 0)
                        (0 1 1 0)
                        (0 1 1 0)
                        (0 0 0 0)

and determine the minimum.
Solution 32. Let A and B be 2 × 2 matrices. The rearrangement (reshaping) operator R, defined by

R(A ⊗ B) := vec(A)(vec(B))^T

and linear extension, maps A ⊗ B to a rank-1 4 × 4 matrix. Writing M as a 2 × 2 array of 2 × 2 blocks M_{ij}, the rows of R(M) are vec(M_{11})^T, vec(M_{21})^T, vec(M_{12})^T, vec(M_{22})^T. Thus

R(M) = (0 0 0 1)
       (0 0 1 0)
       (0 1 0 0)
       (1 0 0 0).

All four singular values of R(M) are equal to 1, so R(M) admits many singular value decompositions R(M) = U I_4 V* with U = R(M)V. Four of them give as first singular vector pairs (u_1, v_1) the pairs (e_4, e_1), (e_3, e_2), (e_2, e_3), (e_1, e_4). The best rank-1 approximation of R(M) is vec(A)(vec(B))^T = σ_1 u_1 v_1^T, so each choice leaves the error

||M − A ⊗ B||_F = √(σ_2^2 + σ_3^2 + σ_4^2) = √3.

The first decomposition yields

vec(A) = (0, 0, 0, 1)^T,  vec(B) = (1, 0, 0, 0)^T,  i.e.  A = (0 0)   B = (1 0)
                                                              (0 1),      (0 0)

with ||M − A ⊗ B||_F = √3. The second decomposition yields A = (0 1; 0 0), B = (0 0; 1 0), the third A = (0 0; 1 0), B = (0 1; 0 0), and the fourth A = (1 0; 0 0), B = (0 0; 0 1), each with ||M − A ⊗ B||_F = √3. In each case A ⊗ B reproduces exactly one of the four non-zero entries of M. The minimum is √3.
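The error √3 can be confirmed numerically by building the rearrangement R(M) block by block and inspecting its singular values (a sketch; the helper R below assumes the 2 × 2 block layout and the column-stacking vec ordering used above):

```python
import numpy as np

M = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]], dtype=float)

def R(M):
    # row order follows vec(A) = (a11, a21, a12, a22): blocks (1,1),(2,1),(1,2),(2,2)
    rows = []
    for j in range(2):           # block column index (runs slowest)
        for i in range(2):       # block row index
            blk = M[2*i:2*i+2, 2*j:2*j+2]
            rows.append(blk.T.reshape(-1))   # vec of the block (column stacking)
    return np.array(rows)

RM = R(M)
assert np.array_equal(RM, np.array([[0, 0, 0, 1], [0, 0, 1, 0],
                                    [0, 1, 0, 0], [1, 0, 0, 0]], dtype=float))

s = np.linalg.svd(RM, compute_uv=False)
# all four singular values equal 1, so the best rank-1 error is sqrt(3)
assert np.isclose(np.sqrt(np.sum(s[1:]**2)), np.sqrt(3.0))
```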
Problem 33. Consider the 4 × 4 matrix (Hamilton operator)

H = (ħω/2)(σ1 ⊗ σ1 − σ2 ⊗ σ2)

where ω is the frequency and ħ is the Planck constant divided by 2π. Find the norm of H, i.e.

||H|| := max_{x ∈ C^4, ||x||=1} ||Hx||,

applying two different methods. In the first method apply the Lagrange multiplier method, where the constraint is ||x|| = 1. In the second method we calculate H*H and find the square root of the largest eigenvalue. This is then ||H||. Note that H*H is positive semi-definite.
Solution 33. In the first method we use the Lagrange multiplier method, where the constraint ||x|| = 1 can be written as x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1. Since

σ1 ⊗ σ1 = (0 0 0 1)        σ2 ⊗ σ2 = ( 0 0 0 −1)
          (0 0 1 0)                  ( 0 0 1  0)
          (0 1 0 0)                  ( 0 1 0  0)
          (1 0 0 0),                 (−1 0 0  0)

we have

H = ħω (0 0 0 1)
       (0 0 0 0)
       (0 0 0 0)
       (1 0 0 0).

Let x = (x_1, x_2, x_3, x_4)^T. We maximize

f(x) := ||Hx||^2 − λ(x_1^2 + x_2^2 + x_3^2 + x_4^2 − 1)

where λ is the Lagrange multiplier. To find the extrema we solve the equations

∂f/∂x_1 = 2ħ^2ω^2 x_1 − 2λx_1 = 0
∂f/∂x_2 = −2λx_2 = 0
∂f/∂x_3 = −2λx_3 = 0
∂f/∂x_4 = 2ħ^2ω^2 x_4 − 2λx_4 = 0

together with the constraint x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1. These equations can be written in the matrix form

(ħ^2ω^2 − λ    0     0     0        ) (x_1)   (0)
(0            −λ     0     0        ) (x_2) = (0)
(0             0    −λ     0        ) (x_3)   (0)
(0             0     0   ħ^2ω^2 − λ ) (x_4)   (0).

If λ = 0 then x_1 = x_4 = 0 and ||Hx|| = 0, which is a minimum. If λ ≠ 0 then x_2 = x_3 = 0 and x_1^2 + x_4^2 = 1 so that ||Hx|| = ħω, which is the maximum. Thus we find ||H|| = ħω.

In the second method we calculate H*H and find the square root of the largest eigenvalue. Since H* = H we find

H*H = ħ^2ω^2 (1 0 0 0)
             (0 0 0 0)
             (0 0 0 0)
             (0 0 0 1).

Thus the maximum eigenvalue is ħ^2ω^2 (twice degenerate) and ||H|| = ħω.
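The second method is easy to check with numpy; ħω is set to 1 below, an assumption made only to keep the check dimensionless:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])

# H = (ħω/2)(σ1⊗σ1 − σ2⊗σ2) with ħω = 1
H = 0.5 * (np.kron(s1, s1) - np.kron(s2, s2))

# method 2: largest eigenvalue of H*H, then the square root
lam_max = np.linalg.eigvalsh(H.conj().T @ H).max()
assert np.isclose(np.sqrt(lam_max), 1.0)

# cross-check: the norm is also the largest singular value of H
assert np.isclose(np.linalg.norm(H, 2), 1.0)
```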
Problem 34. Let A be an invertible n × n matrix over R. Consider the linear system Ax = b. The condition number of A is defined as

Cond(A) := ||A|| · ||A^{-1}||.

Find the condition number for the matrix

A = (1       0.9999)
    (0.9999  1     )

for the infinity norm, 1-norm and 2-norm.
Solution 34. The inverse matrix of A is given by

A^{-1} = ( 5000.250013  −4999.749987)
         (−4999.749987   5000.250013).

Thus for the infinity norm we obtain

||A||_∞ = 1.9999,    ||A^{-1}||_∞ = 10^4.

Therefore Cond_∞(A) = 1.9999 · 10^4. For the 1-norm we obtain

||A||_1 = 1.9999,    ||A^{-1}||_1 = 10^4.

Therefore Cond_1(A) = 1.9999 · 10^4. For the 2-norm we obtain

||A||_2 = 1.9999,    ||A^{-1}||_2 = 10^4.

Therefore Cond_2(A) = 1.9999 · 10^4. In the given problem the condition number is the same with respect to the three norms. Note that this is not always the case.
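numpy's np.linalg.cond computes exactly this quantity for the 1-, 2- and infinity norms:

```python
import numpy as np

A = np.array([[1.0, 0.9999], [0.9999, 1.0]])
for p in (1, 2, np.inf):
    c = np.linalg.cond(A, p)
    # all three condition numbers equal 1.9999e4 for this matrix
    assert np.isclose(c, 1.9999e4)
```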
Problem 35. Let A, B be nonsingular n × n matrices. Show that

cond(A ⊗ B) = cond(A)cond(B)

for all matrix norms satisfying ||X ⊗ Y|| = ||X|| ||Y||, where cond denotes the condition number.

Solution 35. We have

cond(A ⊗ B) = ||A ⊗ B|| ||(A ⊗ B)^{-1}|| = ||A ⊗ B|| ||A^{-1} ⊗ B^{-1}||
            = ||A|| ||B|| ||A^{-1}|| ||B^{-1}||
            = cond(A)cond(B).

Supplementary Problems
Supplementary Problems
Problem 1. Let A, B be n × n matrices over C. Given the distance ||A − B||. Find the distances ||A ⊗ I_n − B ⊗ I_n|| and ||A ⊗ I_n − I_n ⊗ B||.
Problem 2. Let A be an n × n positive semidefinite matrix. Show that

|x*Ay| ≤ √(x*Ax) √(y*Ay)

for all x, y ∈ C^n.
Problem 3. (i) Let A and C be invertible n × n matrices over R. Let B be an n × n matrix over R. Assume that ||A|| ≤ ||B|| ≤ ||C||. Is B invertible?
(ii) Let A, B, C be invertible n × n matrices over R with ||A|| ≤ ||B|| ≤ ||C||. Is ||A^{-1}|| ≥ ||B^{-1}|| ≥ ||C^{-1}||?
Problem 4. Let A be an n × n matrix over R. Assume that ||A|| < 1, where

||A|| := sup_{||x||=1} ||Ax||.

Show that the matrix B = I_n + A is invertible, i.e. B ∈ GL(n, R). To show that the expansion

I_n − A + A^2 − A^3 + − ···

converges apply

||A^m − A^{m+1} + A^{m+2} − ··· ± A^{m+k−1}|| ≤ ||A^m||(1 + ||A|| + ··· + ||A||^{k−1}) = ||A^m|| (1 − ||A||^k)/(1 − ||A||).

Then calculate (I_n + A)(I_n − A + A^2 − A^3 + − ···).

Problem 5. Let A, B be 2 × 2 matrices over R. Find

min_{A,B} ||[A, B] − I_2||.

For the norm || · || consider the Frobenius norm and max-norm.
Problem 6. Let A be an n × n matrix over C. Assume that ||A|| < 1. Then I_n + A is invertible. Can we conclude that I_n ⊗ I_n + A ⊗ A is invertible?
Problem 7. Let A be an n × n positive semidefinite (and thus hermitian) matrix. Is ||A^{1/2}|| = ||A||^{1/2}?
Problem 8. Let A be a given symmetric 4 × 4 matrix over R. Find symmetric 2 × 2 matrices B, C over R such that

f(B, C) = ||A − B ⊗ C||

is a minimum. The norm || · || is the Hilbert-Schmidt norm over R given by

||X − Y||^2 = tr((X − Y)(X − Y)^T)

where X, Y are n × n matrices over R. Such a problem is called the nearest Kronecker product problem. Note that since A^T = A, B^T = B, C^T = C, tr(B^2 ⊗ C^2) = tr(B^2)tr(C^2) and tr(A(B ⊗ C)) = tr((B ⊗ C)A) we have

||A − B ⊗ C||^2 = tr((A − B ⊗ C)(A − B ⊗ C))
               = tr(A^2 − A(B ⊗ C) − (B ⊗ C)A + B^2 ⊗ C^2)
               = tr(A^2) − tr(A(B ⊗ C)) − tr((B ⊗ C)A) + tr(B^2)tr(C^2)
               = tr(A^2) − 2tr(A(B ⊗ C)) + tr(B^2)tr(C^2).

Since

B ⊗ C = (b11 b12) ⊗ (c11 c12) = (b11c11 b11c12 b12c11 b12c12)
        (b12 b22)   (c12 c22)   (b11c12 b11c22 b12c12 b12c22)
                                (b12c11 b12c12 b22c11 b22c12)
                                (b12c12 b12c22 b22c12 b22c22)

we have

tr(A(B ⊗ C)) = a11b11c11 + a22b11c22 + a33b22c11 + a44b22c22
             + 2(a12b11c12 + a13b12c11 + a14b12c12 + a23b12c12 + a24b12c22 + a34b22c12).

We also have tr(B^2) = b11^2 + 2b12^2 + b22^2, tr(C^2) = c11^2 + 2c12^2 + c22^2.

Problem 9. Let A be an n × n matrix over C. Is ||I_n + A|| ≤ 1 + ||A||?
Problem 10. (i) Let A, B be positive definite n × n matrices. Show that ||A + B||_F > ||A||_F, where || · ||_F denotes the Frobenius norm.
(ii) Let A be an n × n matrix over C and v, u ∈ C^n. Is ||Av − Au|| ≤ ||A|| · ||v − u||?
Problem 11. Let {v_1, ..., v_m} be a linearly independent set of vectors in the Euclidean space R^n with m ≤ n.
(i) Show that there is a number c > 0 such that for every choice of real numbers c_1, ..., c_m we have

||c_1 v_1 + ··· + c_m v_m|| ≥ c(|c_1| + ··· + |c_m|).

(ii) Consider R^2 and two given linearly independent vectors. Show that c = 1/2 will provide the answer.
Chapter 13
vec Operator
Let A be an m × n matrix over C. One defines the vec operator (vectorization) of A as

vec(A) := (a11, a21, ..., am1, a12, a22, ..., am2, ..., a1n, a2n, ..., amn)^T

where T denotes transpose. Thus vec(A) is a column vector in C^{mn}: the vec operator stacks the columns of a matrix on top of each other to form a column vector, for example

vec( (1 2 3) ) = (1, 4, 2, 5, 3, 6)^T.
     (4 5 6)

One defines the reshaping operator of A as

res(A) := (a11, a12, ..., a1n, a21, a22, ..., a2n, ..., am1, am2, ..., amn)^T

where T denotes transpose. Thus res(A) is a column vector in C^{mn}. One has res(A) = vec(A^T). For n × n matrices over C we have

vec(ABC) = (C^T ⊗ A)vec(B)
res(ABC) = (A ⊗ C^T)res(B)
vec(AB) = (I_n ⊗ A)vec(B) = (B^T ⊗ I_n)vec(A)
vec(A • B) = vec(A) • vec(B)
tr(A*B) = (vec(A))*vec(B) = (res(A))*res(B)

where • denotes the Hadamard product.
The vec operator can be considered as a bijective operator between the m × n matrices and C^{mn}. However, we need to know m and n. For example the vec operator is a bijection between the 3 × 2 matrices and C^6 and is also a bijection between the 2 × 3 matrices and C^6.

Let I_n be the n × n identity matrix. Let e_j (j = 1, ..., n) be the standard basis in R^n. Then we have

vec(I_n) = Σ_{j=1}^n e_j ⊗ e_j,    vec(A) = (I_n ⊗ A)vec(I_n).

This is a straightforward consequence of the identity vec(ABC) = (C^T ⊗ A)vec(B).

We define the permutation matrix P_{m,n} (perfect shuffle) by

vec(A) = P_{m,n} vec(A^T)

for all m × n matrices A. Thus

P_{m,n}^{-1} = P_{m,n}^T = P_{n,m}.

Let x ∈ C^m and y ∈ C^n. It follows that

P_{m,n}(x ⊗ y) = y ⊗ x.

The permutation matrices P_{s,m} and P_{n,t} provide the relation for commuting the Kronecker product. Let A be an m × n matrix and B be an s × t matrix. Then

A ⊗ B = P_{s,m}(B ⊗ A)P_{n,t}.

Using the vec operator we can cast matrix equations into vector equations.

1) Sylvester equation

F X + X G^T = C   ⇔   (I_n ⊗ F + G ⊗ I_m)vec(X) = vec(C).

2) Generalized Sylvester equation

F X H^T + K X G^T = C   ⇔   (H ⊗ F + G ⊗ K)vec(X) = vec(C).

3) Lyapunov equation

F X + X F^T = C   ⇔   (I_n ⊗ F + F ⊗ I_n)vec(X) = vec(C).
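The basic vec identities and the Sylvester correspondence can be verified numerically; the sketch below defines vec by column stacking (numpy's order='F'):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

vec = lambda X: X.reshape(-1, order='F')     # stack the columns

# vec(ABC) = (C^T ⊗ A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

# Sylvester: F X + X G^T = C  <=>  (I ⊗ F + G ⊗ I) vec(X) = vec(C)
F, G = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))
lhs = vec(F @ X + X @ G.T)
assert np.allclose(lhs, (np.kron(np.eye(3), F) + np.kron(G, np.eye(3))) @ vec(X))
```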
Problem 1. Consider the 2 × 3 matrix

A = (a11 a12 a13)
    (a21 a22 a23).

Let B = A^T. Thus B is a 3 × 2 matrix. Find the 6 × 6 permutation matrix P such that

vec(B) = P vec(A).

Solution 1. We obtain

(a11)   (1 0 0 0 0 0) (a11)
(a12)   (0 0 1 0 0 0) (a21)
(a13) = (0 0 0 0 1 0) (a12)
(a21)   (0 1 0 0 0 0) (a22)
(a22)   (0 0 0 1 0 0) (a13)
(a23)   (0 0 0 0 0 1) (a23).
Problem 2. Let A be an m × n matrix over C and B be an s × t matrix over C. Find the permutation matrix P such that

vec(A ⊗ B) = P(vec(A) ⊗ vec(B)).

Solution 2. We find

vec_{m×n}(A) = (I_n ⊗ A) Σ_{j=1}^n e_{j,n} ⊗ e_{j,n} = Σ_{j=1}^n e_{j,n} ⊗ (A e_{j,n}).

Since

vec_{ms×nt}(A ⊗ B) = Σ_{j=1}^n Σ_{k=1}^t e_{j,n} ⊗ e_{k,t} ⊗ (A e_{j,n}) ⊗ (B e_{k,t})

and

vec_{m×n}(A) ⊗ vec_{s×t}(B) = Σ_{j=1}^n Σ_{k=1}^t e_{j,n} ⊗ (A e_{j,n}) ⊗ e_{k,t} ⊗ (B e_{k,t})

there is an (mnst) × (mnst) permutation matrix P such that

vec_{ms×nt}(A ⊗ B) = P(vec_{m×n}(A) ⊗ vec_{s×t}(B)).

This permutation matrix P is given by

P = I_n ⊗ ( Σ_{j=1}^m Σ_{k=1}^t (e_{k,t} ⊗ e_{j,m})(e_{j,m} ⊗ e_{k,t})^T ) ⊗ I_s.

Problem 3. (i) Let A, B be two n × n matrices. Is

vec(A ⊗ B) = (vec(A)) ⊗ (vec(B)) ?

(ii) Let A be an arbitrary 2 × 2 matrix. Find the condition on A such that

vec(A) ⊗ vec(A) = vec(A ⊗ A).

What can we conclude about the rank of A?
Solution 3. (i) This is not true in general; consider, for example, the case n = 2. However there is an n^4 × n^4 permutation matrix P such that

vec(A ⊗ B) = P(vec(A) ⊗ vec(B)).

Find P for the example given above.
(ii) We obtain the four conditions

a11(a12 − a21) = 0,  a22(a12 − a21) = 0,  a11a22 − a12^2 = 0,  a11a22 − a21^2 = 0.

If a11 = 0, then a12 = a21 = 0 and a22 is arbitrary. If a22 = 0, then a12 = a21 = 0 and a11 is arbitrary. If a12 = a21, then a11a22 = a12^2. Thus we can conclude that the rank of A must be 1 or 0.
Problem 4. Let v be a column vector in R^n. Show that vec(vv^T) = v ⊗ v.

Solution 4. Since

vv^T = (v1v1 v1v2 ... v1vn)
       (v2v1 v2v2 ... v2vn)
       ( ...              )
       (vnv1 vnv2 ... vnvn)

and the j-th column of vv^T is v_j v, stacking the columns yields the identity.
Problem 5. Let A be an m × n matrix over C. Using

vec_{m×n}(A) := Σ_{j=1}^n e_{j,n} ⊗ (A e_{j,n}) = (I_n ⊗ A) Σ_{j=1}^n e_{j,n} ⊗ e_{j,n}

and

vec_{m×n}^{-1}(X) := Σ_{i=1}^m Σ_{j=1}^n ((e_{j,n} ⊗ e_{i,m})^T X) e_{i,m}(e_{j,n})^T

show that vec_{m×n}^{-1}(vec_{m×n}(A)) = A.

Solution 5. We have

vec_{m×n}^{-1}(vec_{m×n}(A)) = Σ_{i=1}^m Σ_{j=1}^n ( (e_{j,n} ⊗ e_{i,m})^T Σ_{k=1}^n e_{k,n} ⊗ (A e_{k,n}) ) e_{i,m}(e_{j,n})^T
                             = Σ_{i=1}^m Σ_{j=1}^n (e_{i,m}^T A e_{j,n}) e_{i,m}(e_{j,n})^T
                             = A.
Problem 6. (i) Let A X + X B = C, where C is an m × n matrix over R. What are the dimensions of A, B, and X?
(ii) Solve the equation A X + X B = C for the real-valued 2 × 3 matrix X (for given A, B, C).

Solution 6. (i) From A X + X B = C we find that C has m rows and n columns, so X has m rows and n columns. Since X has m rows, A must have m columns. Since X has n columns, B must have n rows. Thus A is m × m, B is n × n and X is m × n.
(ii) The equation can be written as

(I_3 ⊗ A + B^T ⊗ I_2)vec(X) = vec(C)

with the solution

X = (1/8) (−3  5 −3)
          ( 9  1 −7).
Problem 7. Let A be an m × n matrix over C and B be an s × t matrix over C. The rearrangement operator R is defined by

R(A ⊗ B) := vec(A)(vec(B))^T

and linear extension. Find R(M) for a given 4 × 4 matrix M.

Solution 7. Write M as a 2 × 2 array of 2 × 2 blocks M_{ij}. By linearity, R(M) is the 4 × 4 matrix whose rows are vec(M_{11})^T, vec(M_{21})^T, vec(M_{12})^T, vec(M_{22})^T.
Problem 8. (i) Let K be a given n × n nonnormal matrix over C. We want to find all unitary n × n matrices U such that UKU* = K*. Now we can write KU* = U*K* and therefore

KU* − U*K* = 0_n

where 0_n is the n × n zero matrix. This is a matrix equation (a special case of Sylvester) and using the vec operator and Kronecker product we can cast the matrix equation into a vector equation. Write down this linear equation.
(ii) Apply it to the case n = 2 with

K = (−i  i)
    ( i  i).

Solution 8. (i) The vector equation is

(I_n ⊗ K − (K*)^T ⊗ I_n)vec(U*) = 0

where we still have to take into account that U is unitary.
(ii) For the given 2 × 2 matrix we have

(K*)^T = ( i −i)
         (−i −i).

Thus we arrive at the system of linear equations

(−2i   i   i   0)
(  i   0   0   i)  vec(U*) = 0.
(  i   0   0   i)
(  0   i   i  2i)

Note that the determinant of the matrix on the left-hand side is 0, so we obtain a non-trivial solution. We find u22 = −u11 and the equation −2u11 + u12 + u21 = 0. Taking into account that U is unitary provides the two solutions u11 = u22 = 0, u12 = 1, u21 = −1 and u11 = u22 = 0, u12 = −1, u21 = 1.
Problem 9. Consider the Lyapunov equation F X + X F^T = C with

F = (0 1)    C = (0 1)
    (1 0),       (1 0).

Find X.

Solution 9. We have F^T = F and

(I_2 ⊗ F + F ⊗ I_2)vec(X) = (0 1 1 0) (x11)   (0)
                            (1 0 0 1) (x21) = (1)
                            (1 0 0 1) (x12)   (1)
                            (0 1 1 0) (x22)   (0).

It follows that x21 + x12 = 0 and x11 + x22 = 1. Thus

X = ( x11     x12    )
    (−x12   1 − x11  )

with x11, x12 arbitrary.
Problem 10. Consider the matrix

A = ( 0  1  0)
    ( 2  3  1)
    (−1 −2  0)

and given 2 × 2 and 3 × 2 matrices B and C. Find the solution of the equation A X + X B = C, where X is the 3 × 2 matrix

X = (x11 x12)
    (x21 x22)
    (x31 x32).

Hint. Using the vec operation we can write the equation as the linear equation

(I_2 ⊗ A + B^T ⊗ I_3)vec(X) = vec(C).

Solution 10. We obtain the 6 × 6 matrix I_2 ⊗ A + B^T ⊗ I_3. Note that there is a unique solution for X iff no eigenvalue of A is the negative of an eigenvalue of B. This is the case here and therefore the determinant of I_2 ⊗ A + B^T ⊗ I_3 is nonzero and the inverse matrix exists. The solution is obtained from vec(X) = (I_2 ⊗ A + B^T ⊗ I_3)^{-1}vec(C).
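A Sylvester equation AX + XB = C of this kind can be solved directly through the Kronecker formulation. Since the original B and C are not recoverable from the scan, the matrices B and C below are illustrative choices of my own, not the book's:

```python
import numpy as np

A = np.array([[0., 1., 0.], [2., 3., 1.], [-1., -2., 0.]])
B = np.array([[1., 1.], [-2., 1.]])          # illustrative 2 x 2 matrix
C = np.array([[1., 0.], [0., 1.], [1., -1.]])  # illustrative 3 x 2 right-hand side

vec = lambda M: M.reshape(-1, order='F')
K = np.kron(np.eye(2), A) + np.kron(B.T, np.eye(3))
assert abs(np.linalg.det(K)) > 1e-12         # unique solution exists
X = np.linalg.solve(K, vec(C)).reshape((3, 2), order='F')
assert np.allclose(A @ X + X @ B, C)
```

(scipy.linalg.solve_sylvester offers the same computation via the Bartels-Stewart algorithm.)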
Problem 11. Let A, B, X be n × n matrices, where the matrices A, B are given. Assume that A X = B. Find X using the vec operator and the Kronecker product. Then consider the case that A is invertible.

Solution 11. Applying the vec operator to the left- and right-hand side of A X = B and using the Kronecker product we have

vec(AX) = (I_n ⊗ A)vec(X) = vec(B).

If A is invertible we have vec(X) = (I_n ⊗ A^{-1})vec(B).
Problem 12. Let A, B, C, X be n × n matrices, where the matrices A, B, C are given. Assume that A X B = C. Find X using the vec operator and the Kronecker product. Then consider the case that A and B are invertible.

Solution 12. Applying the vec operator to the left- and right-hand side of A X B = C and using the Kronecker product we have

(B^T ⊗ A)vec(X) = vec(C).

If A and B are invertible we have

vec(X) = ((B^T)^{-1} ⊗ A^{-1})vec(C).
Problem 13. Let A, B, C, X, Y be n × n matrices, where the matrices A, B, C are given. Assume that A X + Y B = C. Apply the vec operator to this equation and then express the left-hand side using the Kronecker product.

Solution 13. Applying the vec operator to the left- and right-hand side of A X + Y B = C and using the Kronecker product we have

(I_n ⊗ A)vec(X) + (B^T ⊗ I_n)vec(Y) = vec(C).
Problem 14. Let U be an n × n unitary matrix. Show that the column vector vec(U) cannot be written as a Kronecker product of two vectors in C^n.

Solution 14. We have

vec(U) = Σ_{j=1}^n e_j ⊗ (U e_j)

where {e_1, ..., e_n} is the standard basis in C^n. Let a, b ∈ C^n. Suppose vec(U) = a ⊗ b. Since

a = Σ_{j=1}^n (e_j^T a) e_j

we find

Σ_{j=1}^n e_j ⊗ (U e_j) = Σ_{j=1}^n (e_j^T a) e_j ⊗ b.

Equating yields U e_j = (e_j^T a) b, i.e. the columns of U are linearly dependent. However, this contradicts the fact that U is unitary. Consequently the vector vec(U) is entangled in C^n ⊗ C^n.

A generalization is: let A be an n × n matrix with rank greater than 1. Then vec(A) is entangled in C^n ⊗ C^n.
Problem 15. Let A be an arbitrary n × n matrix. Let U be an n × n unitary matrix, U^{-1} = U*. Can we conclude that

vec(U*AU) = vec(A) ?    (1)

Solution 15. If (1) were true we would have

(U^T ⊗ U*)vec(A) = vec(A).    (2)

This is not true in general. A solution of (2) would be U = e^{iφ}I_n, φ ∈ R.
Problem 16. Consider the Frobenius norm. Let M be an ms × nt matrix over C. Find the m × n matrices A over C and the s × t matrices B over C which minimize

||M − A ⊗ B||_F = ||R(M) − vec(A)(vec(B))^T||_F.

Apply the method for m = 2, n = 1, s = 2, t = 2 to the 4 × 2 matrix

M = (0 1)
    (1 0)
    (0 0)
    (0 0).

Solution 16. We have

||M − A ⊗ B||_F = ||R(M − A ⊗ B)||_F = ||R(M) − R(A ⊗ B)||_F = ||R(M) − vec(A)(vec(B))^T||_F.

Notice that R(M) is mn × st while M is ms × nt. Clearly vec(A)(vec(B))^T has rank 1. Let R(M) = UΣV* be a singular value decomposition of R(M), then the minimum is found when

vec(A)(vec(B))^T = U Σ e_{1,mn}(e_{1,st})^T V* = (σ_1 U e_{1,mn})(V e_{1,st})*.

Thus we may set (amongst other choices)

vec(A) = σ_1 U e_{1,mn}   and   (vec(B))^T = (V e_{1,st})*.

For the given M, applying the rearrangement operator yields

R(M) = (0 1 1 0)
       (0 0 0 0).

The singular value decomposition of R(M) has σ_1 = √2 with u_1 = (1, 0)^T and v_1 = (0, 1/√2, 1/√2, 0)^T. Thus we set

vec(A) = √2 (1, 0)^T,    vec(B) = (0, 1/√2, 1/√2, 0)^T

so that

A = (√2)    B = (1/√2) (0 1)
    ( 0),              (1 0).

In this case M is exactly the Kronecker product A ⊗ B.
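The procedure of this solution can be wrapped into a small function; nearest_kron below is a sketch for real matrices, with the block ordering matching the column-stacking vec used throughout this chapter:

```python
import numpy as np

def nearest_kron(M, m, n, s, t):
    """Best A (m x n), B (s x t) minimizing ||M - kron(A, B)||_F,
    via the SVD of the rearrangement R(M). Real matrices only."""
    # R(M) is mn x st; the row for block (i, j) (j running slowest,
    # matching the column-stacking vec) is vec(M_ij)^T
    R = np.empty((m * n, s * t))
    r = 0
    for j in range(n):
        for i in range(m):
            R[r] = M[i*s:(i+1)*s, j*t:(j+1)*t].reshape(-1, order='F')
            r += 1
    U, sig, Vh = np.linalg.svd(R)
    A = (sig[0] * U[:, 0]).reshape((m, n), order='F')
    B = Vh[0].reshape((s, t), order='F')
    return A, B

M = np.array([[0., 1.], [1., 0.], [0., 0.], [0., 0.]])
A, B = nearest_kron(M, m=2, n=1, s=2, t=2)
# here R(M) has rank one, so M is exactly a Kronecker product
assert np.allclose(np.kron(A, B), M)
```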
Problem 17. Let P be an n × n permutation matrix and H be an n × n hermitian matrix.
(i) Given the hermitian matrix H we want to find the permutation matrices P such that H P = P H. The equation can be written as H P − P H = 0_n. This is a special case of the Sylvester equation. Applying the vec operation and the Kronecker product we can write the equation as

(I_n ⊗ H − H^T ⊗ I_n)vec(P) = 0.

Consider H = (0 1; 1 0). Find all P.
(ii) Given the permutation matrix P we want to find the hermitian matrices H such that P H − H P = 0_n. This is a special case of the Sylvester equation. Applying the vec operation and the Kronecker product we can write the equation as

(I_n ⊗ P − P^T ⊗ I_n)vec(H) = 0.

Consider P = (0 1; 1 0). Find all H.

Solution 17. (i) Since

I_2 ⊗ H − H^T ⊗ I_2 = ( 0  1 −1  0)
                      ( 1  0  0 −1)
                      (−1  0  0  1)
                      ( 0 −1  1  0)

we find the two equations p11 − p22 = 0, p12 − p21 = 0 with the condition that P is a permutation matrix. Thus we find two solutions, namely p11 = 1, p12 = 0, p21 = 0, p22 = 1 (identity matrix) and p11 = 0, p12 = 1, p21 = 1, p22 = 0.
(ii) Since

I_2 ⊗ P − P^T ⊗ I_2 = ( 0  1 −1  0)
                      ( 1  0  0 −1)
                      (−1  0  0  1)
                      ( 0 −1  1  0)

we find h11 = h22 and h12 = h21 with the condition that H is a hermitian matrix. Thus

H = (h11 h12)
    (h12 h11)

with H = H*. This means that H must be real.
Problem 18. Consider the Pauli spin matrices σ1, σ3. The unitary and hermitian matrices σ1 and σ3 have the same eigenvalues, namely +1 and −1. Find all invertible 2 × 2 matrices X such that

X^{-1} σ1 X = σ3.

Proceed as follows: multiply the equation with X; after rearrangement we have

σ1 X − X σ3 = 0_2.

This is a special case of the Sylvester equation. It can be written as

(I_2 ⊗ σ1 − σ3^T ⊗ I_2)vec(X) = (0, 0, 0, 0)^T,    vec(X) := (x11, x21, x12, x22)^T.

Solve this linear equation to find the matrix X, but one still has to take into account that the matrix X is invertible.

Solution 18. We have

I_2 ⊗ σ1 − σ3^T ⊗ I_2 = (−1  1  0  0)
                        ( 1 −1  0  0)
                        ( 0  0  1  1)
                        ( 0  0  1  1).

Thus the system of linear equations has the solution x21 = x11, x12 = −x22. Therefore the matrix X is

X = (x11  −x22)
    (x11   x22)

with determinant det(X) = 2 x11 x22. This implies that x11 ≠ 0 and x22 ≠ 0. Then the inverse of X is

X^{-1} = 1/(2 x11 x22) ( x22  x22)
                       (−x11  x11).
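The family of solutions found above is easy to verify numerically (a sketch; x11 and x22 are arbitrary non-zero values):

```python
import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])

# the general solution: X = [[x11, -x22], [x11, x22]] with x11, x22 != 0
x11, x22 = 2.0, 5.0
X = np.array([[x11, -x22], [x11, x22]])
assert abs(np.linalg.det(X)) > 0
assert np.allclose(np.linalg.inv(X) @ s1 @ X, s3)
```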
Problem 19. Let A ∈ C^{n^m} = ⊗^m C^n and x ∈ C^n. Define the product

A x^{m−1} := (I_n ⊗ x ⊗ ··· ⊗ x)^T A

with m − 1 factors x. Let A be an n × n matrix over C. Find vec(A^T) x^1. What is the relationship between this product and the matrix product Ax?

Solution 19. We have

vec(A^T) x^1 = (I_n ⊗ x^T)vec(A^T) = Σ_{j=1}^n e_j (x^T A^T e_j) = Σ_{j=1}^n (e_j^T A x) e_j = A x

where {e_1, ..., e_n} is the standard basis in C^n.
Problem 20. Let A ∈ C^{n^m} = ⊗^m C^n. If λ ∈ C and x ∈ C^n, x ≠ 0, satisfy A x^{m−1} = λx then (λ, x) is said to be an eigenpair of A, where λ is the eigenvalue of A and x is a corresponding eigenvector. Two eigenpairs (λ, x) and (λ′, x′) are equivalent when t^{m−2}λ = λ′ and tx = x′ for some non-zero complex number t.
(i) Find the eigenpairs for the tensor A = (0 1 1 0)^T and n = 2.
(ii) Find the eigenpairs for the tensor A = (1 1 1 1 1 1 1 1)^T and n = 2.

Solution 20. (i) Here we have m = 2, and the eigenvalue equation becomes the matrix eigenvalue problem for

(0 1)
(1 0)

with the eigenpairs

(1, t(1, 1)^T), t ∈ C\{0},    (−1, t(1, −1)^T), t ∈ C\{0}.

(ii) Here we have m = 3. Note that

A = (1, 1)^T ⊗ (1, 1)^T ⊗ (1, 1)^T.

Let x = (a, b)^T. The eigenvalue problem is

((a + b)^2, (a + b)^2)^T = λ(a, b)^T.

This equation has the solution λ = 0 and a = −b. If λ ≠ 0, then a = b so that λa = 4a^2. Since x ≠ 0, we find λ = 4a. Thus we have the eigenpairs

(0, t(1, −1)^T), t ∈ C\{0},    (4t, t(1, 1)^T), t ∈ C\{0}.

Since equivalent eigenpairs need only be counted once, we may consider the representative pairs

(0, (1, −1)^T),    (4, (1, 1)^T).
Supplementary Problems

Problem 1. Let A, B be n × n matrices over C.
(i) Consider the maps

f1(A, B) = A ⊗ B
f2(A, B) = vec(A)(vec(B))^T
f3(A, B) = vec(A)(vec(B^T))^T
f4(A, B) = vec(B)(vec(A^T))^T.

Is tr(f1(A, B)) = tr(f2(A, B)), tr(f2(A, B)) = tr(f3(A, B)), tr(f3(A, B)) = tr(f4(A, B))?
(ii) Show that

tr(vec(B)(vec(A))^T) = tr(A^T B).
Problem 2. Let A be a 2 × 2 matrix.
(i) Find vec(A ⊗ I_2 + I_2 ⊗ A) and vec(A ⊗ A).
(ii) Find the condition on A such that vec(A ⊗ A) = vec(A ⊗ I_2 + I_2 ⊗ A).
Problem 3. Let A, B be n × n matrices. Show that

vec(A • B) = vec(A) • vec(B)

where • denotes the Hadamard product.
Problem 4. Find an invertible 3 × 3 matrix X which satisfies three matrix equations of the form X^{-1} M_k X = N_k simultaneously, where the M_k are 3 × 3 spin matrices such as

M_1 = (0 −i 0)    M_2 = (0 0  0)
      (i  0 0),         (0 0 −i)
      (0  0 0)          (0 i  0)

and, for the first equation, N_1 = diag(1, 0, −1). Cast the equations into Sylvester equations and then solve the equations. The first equation can be written as

(I_3 ⊗ M_1 − N_1^T ⊗ I_3)vec(X) = 0

where 0 is the zero column vector.
Problem 5. Consider the Pauli spin matrices σ1, σ2 and

R := σ1 ⊗ σ2 = (0  0  0 −i)
               (0  0  i  0)
               (0 −i  0  0)
               (i  0  0  0).

Apply the vec operator to R, i.e. form vec(R). Calculate the discrete Fourier transform of this vector in C^16. Then apply vec^{-1} to the transformed vector. Compare the resulting 4 × 4 matrix with the 4 × 4 matrix R. Discuss.
Problem 6. Let A be an n × n matrix over C. Find a matrix S_n such that S_n vec(A) = tr(A). Prove the equivalence. Illustrate the answer with

A = (1 2 3)
    (3 1 2)
    (2 3 1)

where

S_3 = (vec(I_3))^T = (1 0 0 0 1 0 0 0 1)

and

S_3 vec(A) = (1 0 0 0 1 0 0 0 1) vec(A) = 3.

Problem 7.
(i) Let A, B be n × n matrices over C. Show that

tr(vec(B)(vec(A))^T) = tr(A^T B)

applying tr(vec(B)(vec(A))^T) = tr((vec(I_n))^T (I_n ⊗ A^T B)vec(I_n)).
(ii) Let A be an n × n matrix, B be a p × p matrix, X be a p × n matrix and C be an n × n matrix. The underlying field is R. Show that

tr(A X^T B X C) = (vec(X))^T ((CA) ⊗ B^T)vec(X).
Chapter 14
Nonnormal Matrices
A square matrix M over C is called normal if

M M* = M* M.

A square matrix M over C is called nonnormal if

M M* ≠ M* M.

Examples of normal matrices are: hermitian, skew-hermitian, unitary and projection matrices. An example of a nonnormal matrix is

(0 1)
(0 0).

If M is a nonnormal matrix, then M* and M^T are nonnormal matrices. If M is nonnormal and invertible, then M^{-1} is nonnormal. If A and B are nonnormal, then A ⊗ B is nonnormal.

If M is nonnormal, then M + M*, M M*, the commutator [M, M*] and the anti-commutator [M, M*]_+ are normal matrices.

The Kronecker product of two normal matrices is normal again. The direct sum of two normal matrices is normal again. The star product of two normal matrices is normal again.

Any non-diagonalizable matrix M is nonnormal. Every normal matrix is diagonalizable by the spectral decomposition. There are nonnormal matrices which are diagonalizable.
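A small helper makes the normal/nonnormal distinction concrete (a sketch; the tolerance argument is an implementation choice):

```python
import numpy as np

def is_normal(M, tol=1e-12):
    M = np.asarray(M, dtype=complex)
    return np.allclose(M @ M.conj().T, M.conj().T @ M, atol=tol)

assert is_normal([[0, 1], [-1, 0]])          # real skew-symmetric, hence normal
assert not is_normal([[0, 1], [0, 0]])       # the nonnormal example above

# the Kronecker product of two normal matrices is again normal
A = np.array([[1, 1j], [-1j, 2]])            # hermitian
U = np.array([[0, 1], [1, 0]])               # unitary (sigma_1)
assert is_normal(np.kron(A, U))
```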
Problem 1. Let A be a nonnormal n × n matrix over C.
(i) Show that A + A* is a normal matrix.
(ii) Show that AA* is a normal matrix.

Solution 1. (i) Let M = A + A*. Then M* = A* + A = M. Hence M M* = M* M.
(ii) Let M = AA*. Then M* = AA* = M and M* M = M M*.

Problem 2. Let A be a nonnormal matrix. Show that [A, A*] is a normal matrix. Show that [A, A*]_+ is a normal matrix.

Solution 2. Let M = [A, A*] = AA* − A*A. Then M* = AA* − A*A = M and [A, A*] is normal. Let N = [A, A*]_+ = AA* + A*A. Then N* = AA* + A*A = N and [A, A*]_+ is normal.
Problem 3. Let α, β ∈ C. What are the conditions on α, β such that the matrix

M(α, β) = (α β)
          (0 α)

is normal?

Solution 3. We have

M(α, β)M*(α, β) = (|α|^2 + |β|^2    βᾱ   )    M*(α, β)M(α, β) = (|α|^2        ᾱβ        )
                  (αβ̄              |α|^2 ),                     (β̄α     |α|^2 + |β|^2 ).

Thus the condition is β̄β = 0 which implies that β = 0.
Problem 4. Let θ ∈ [0, 2π). What are the conditions on θ such that the matrix

A(θ) = (0       sin(θ))
       (cos(θ)  0     )

is normal?

Solution 4. We have

A(θ)A*(θ) = (sin^2(θ)  0       )    A*(θ)A(θ) = (cos^2(θ)  0       )
            (0         cos^2(θ)),               (0         sin^2(θ)).

Thus the condition for the matrix to be normal is cos^2(θ) = sin^2(θ) with solutions θ = π/4, θ = 3π/4, θ = 5π/4, θ = 7π/4.
Problem 5. Let θ ∈ R. What are the conditions on θ such that the matrix

A(θ) = (0        sinh(θ))
       (cosh(θ)  0      )

is normal?

Solution 5. We have

A(θ)A*(θ) = (sinh^2(θ)  0        )    A*(θ)A(θ) = (cosh^2(θ)  0        )
            (0          cosh^2(θ)),               (0          sinh^2(θ)).

Thus the condition for the matrix to be normal is cosh^2(θ) = sinh^2(θ). There are no solutions to this equation. Thus A(θ) is nonnormal for all θ.
Problem 6. Let z ∈ C and z ≠ 0. Show that the 2 × 2 matrix

A = (1  z)
    (0 −1)

is nonnormal. Give the eigenvalues.

Solution 6. We have

AA* = (1 + zz̄   −z)    A*A = (1       z     )
      (−z̄       1 ),         (z̄   1 + zz̄  ).

Thus the matrix is nonnormal if z ≠ 0. The eigenvalues are +1 and −1.
Problem 7. (i) Can we conclude that an invertible matrix A is normal?
(ii) Let B be an n × n matrix over C with B^2 = I_n. Can we conclude that B is normal?

Solution 7. (i) No. Consider the example

A = (1 1)    A^{-1} = (1 −1)
    (0 1),            (0  1).

Both are nonnormal. Is the matrix A ⊗ A^{-1} normal?
(ii) This is not true in general, for example

B = (1  1  1)
    (0  0 −1)
    (0 −1  0)

with B^2 = I_3 and B*B ≠ BB*.
Problem 8.
Show that not all nonnormal matrices are non-diagonalizable.
Solution 8.
Consider the 2 x 2 matrix
a ={\
?)
which is nonnormal and the invertible matrix
R_ (
I
I
\
-I /V 2J
^
B - i - ( !/2
~ V 1/2
=* B
l/\/2 \
- 1 /,/2 ) -
358
Problems and Solutions
Then the matrix B 1AB is diagonal. Is B nonnormal?
Problem 9. Find all 2 × 2 matrices over C which are nonnormal but diagonalizable.

Solution 9. The matrix must have distinct eigenvalues (a diagonalizable matrix with a twice-degenerate eigenvalue is a multiple of the identity matrix and thus normal). Thus

(tr(A))^2 ≠ 4 det(A).

Let

A = (a b)
    (c d).

Then the condition (tr(A))^2 ≠ 4 det(A) becomes (a − d)^2 ≠ −4bc. If A is nonnormal, then the two equations

|b| = |c|,    a c̄ + b d̄ = ā b + c̄ d

do not both hold. So we have three classes of solutions

(a − d)^2 ≠ −4bc, |b| ≠ |c| and a c̄ + b d̄ = ā b + c̄ d
(a − d)^2 ≠ −4bc, |b| = |c| and a c̄ + b d̄ ≠ ā b + c̄ d
(a − d)^2 ≠ −4bc, |b| ≠ |c| and a c̄ + b d̄ ≠ ā b + c̄ d.
Problem 10. Let A be a normal n × n matrix and let S be an invertible nonnormal matrix. Is S A S^{-1} a normal matrix? Study a concrete case. Of course the eigenvalues of A and S A S^{-1} are the same.

Solution 10. Computing (S A S^{-1})(S A S^{-1})* and (S A S^{-1})*(S A S^{-1}) for a concrete pair A, S one finds that in general the matrix S A S^{-1} is nonnormal.
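A concrete instance (chosen here for illustration, since the book's own example is not recoverable from the scan): A = diag(1, 2) is normal, S is a shear, and S A S^{-1} turns out nonnormal while keeping the eigenvalues:

```python
import numpy as np

A = np.diag([1.0, 2.0])                  # normal (even diagonal)
S = np.array([[1.0, 1.0], [0.0, 1.0]])   # invertible and nonnormal
T = S @ A @ np.linalg.inv(S)             # T = [[1, 1], [0, 2]]

# similarity preserves the eigenvalues ...
assert np.allclose(sorted(np.linalg.eigvals(T).real), [1.0, 2.0])
# ... but not normality
assert not np.allclose(T @ T.T, T.T @ T)
```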
Problem 11. Consider a nonnormal 2 × 2 matrix A with A^3 = 0_2 (for example a nonzero nilpotent matrix). Find sinh(A) and tanh^{-1}(A).

Solution 11. Since

sinh(A) = A + (1/3!)A^3 + (1/5!)A^5 + ···

and A^3 = 0_2 we find sinh(A) = A. Since

tanh^{-1}(A) = A + (1/3)A^3 + (1/5)A^5 + ···

and A^3 = 0_2 we have tanh^{-1}(A) = A.
Problem 12. (i) Consider the nonnormal 2 × 2 matrix

A = (0 −i)
    (0  0).

Find a unitary matrix U such that U A U* = A*.
(ii) Consider a nonnormal 2 × 2 matrix K with eigenvalues −i and i. Find a unitary matrix V such that V K V* = K*. Find the eigenvalues and normalized eigenvectors of K and K*. Discuss.

Solution 12. (i) We find, for example,

U = ( 0  1)
    (−1  0).

(ii) The eigenvalues of K are −i and i. The corresponding normalized eigenvectors are not orthonormal to each other, but linearly independent. The eigenvalues of K* are also −i and i, and its normalized eigenvectors are again not orthogonal to each other, but linearly independent. If we compare the eigenvectors of K and K*, then the eigenvectors for the eigenvalue −i are orthogonal to each other and the eigenvectors for the eigenvalue i are orthogonal to each other.
Problem 13. Let A, B be normal matrices. Can we conclude that AB is a normal matrix?

Solution 13. We have AA* = A*A and BB* = B*B. We would have to show that

(AB)(AB)* = (AB)*(AB),

i.e. ABB*A* = B*A*AB. This is satisfied, for instance, when AB = BA, i.e. when A and B commute, but it can fail otherwise. For example, let

A = (1 0)    B = (0 1)
    (0 2),       (1 0).

Then A and B do not commute and we find the nonnormal matrices

AB = (0 1)    BA = (0 2)
     (2 0),        (1 0).
Problem 14. Consider the nonnormal 3 × 3 matrix

B = (0 1 1)
    (0 1 1)
    (0 0 0).

Can we find a 3 × 3 matrix Y such that Y^2 = B, i.e. Y would be the square root of B?

Solution 14. A solution is

Y = B = (0 1 1)
        (0 1 1)
        (0 0 0)

since B^2 = B.
Problem 15. Let A be a normal n × n matrix. Is the matrix A − iI_n normal?

Solution 15. Yes. We have

(A − iI_n)(A − iI_n)* = (A − iI_n)(A* + iI_n) = AA* + i(A − A*) + I_n
(A − iI_n)*(A − iI_n) = (A* + iI_n)(A − iI_n) = A*A + i(A − A*) + I_n.
Problem 16. A measure of nonnormality is given by
m(A) := ||A*A - AA*||
where ||·|| denotes some matrix norm. Let S3 and S1 be real-valued and symmetric n × n matrices (for example, spin matrices). Calculate m(S3 + exp(iφ)S1).

Solution 16. The matrices S1 and S3 are symmetric and real-valued. So for
A := S3 + exp(iφ)S1
we have
AA* - A*A = 2i sin(φ)[S1, S3]
so that
||AA* - A*A|| = 2|sin(φ)| ||[S1, S3]||.
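The identity can be tested numerically; a sketch with the Pauli matrices σ1, σ3 standing in for the real symmetric spin matrices (our choice of example):

```python
import numpy as np

S1 = np.array([[0.0, 1.0], [1.0, 0.0]])
S3 = np.array([[1.0, 0.0], [0.0, -1.0]])

def m(A):
    """Frobenius-norm measure of nonnormality ||A*A - AA*||."""
    return np.linalg.norm(A.conj().T @ A - A @ A.conj().T)

comm = S1 @ S3 - S3 @ S1
for phi in (0.0, np.pi / 6, np.pi / 2):
    A = S3 + np.exp(1j * phi) * S1
    assert np.isclose(m(A), 2 * abs(np.sin(phi)) * np.linalg.norm(comm))
print("m(S3 + exp(i phi) S1) = 2|sin(phi)| ||[S1, S3]|| verified")
```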
Thus A is normal when sin(φ) = 0 and nonnormal otherwise. The commutator [S1, S3] determines the extent to which A is nonnormal under the measure m.
Problem 17. Consider the nonnormal matrix
Is e^{A+A*} = e^A e^{A*}?

Solution 17. No. We have
Problem 18. Prove or disprove the following statements.
(i) If the n × n matrix A is nonnormal, then there exists no matrix B such that B² = A.
(ii) Let A, B be nonzero n × n matrices with B² = A. Then the matrix A is normal.

Solution 18. (i) Let B be the 2 × 2 nonnormal matrix
Then A := B² satisfies AA* ≠ A*A, i.e. A is nonnormal. Thus in general the statement is incorrect.
(ii) We can take the example from (i).
Problem 19. We know that all real symmetric matrices are diagonalizable. Are all complex symmetric matrices diagonalizable?

Solution 19. This is not true in general; for example the matrix
is not diagonalizable. The matrix A is also nonnormal.
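A standard counterexample (our own choice, since the book's matrix is not reproduced here) is A = ( 1 i ; i -1 ), which is complex symmetric, nonzero and nilpotent, hence not diagonalizable; a sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0j],
              [1.0j, -1.0]])

print(np.array_equal(A, A.T))    # True: complex symmetric
print(np.allclose(A @ A, 0))     # True: A^2 = 0, so A is nilpotent
# A nonzero nilpotent matrix has only the eigenvalue 0; if it were
# diagonalizable it would equal the zero matrix. Hence A is not
# diagonalizable. A is also nonnormal:
print(np.allclose(A @ A.conj().T, A.conj().T @ A))   # False
```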
Problem 20. Let A be a nonnormal n × n matrix and let U be a unitary n × n matrix. Is Ã = UAU⁻¹ nonnormal for all U?

Solution 20. We have to prove that ÃÃ* ≠ Ã*Ã if AA* ≠ A*A. Now we have
Thus if AA* ≠ A*A it follows that ÃÃ* ≠ Ã*Ã.
Problem 21. Let A be a nonnormal matrix. Show that A*A and AA* are linearly independent.

Solution 21. Consider the equation c1 A*A + c2 AA* = 0ₙ. Taking the trace yields
tr(c1 A*A + c2 AA*) = (c1 + c2) tr(A*A) = 0.
Since A*A is nonzero and positive semidefinite, tr(A*A) ≠ 0 and c1 = -c2. Thus c1 A*A = c1 AA*. Since A is nonnormal we must have c1 = c2 = 0.
Problem 22. (i) Show that if A and B are nonnormal, then A ⊗ B is nonnormal.
(ii) Show that if A and B are nonnormal, then A ⊕ B is nonnormal, where ⊕ denotes the direct sum.

Solution 22. (i) The matrices A*A and AA* are linearly independent and the matrices B*B and BB* are linearly independent. Now
(A ⊗ B)(A ⊗ B)* - (A ⊗ B)*(A ⊗ B) = (AA*) ⊗ (BB*) - (A*A) ⊗ (B*B) ≠ 0_{n²}
by the linear independence of AA* and A*A and the linear independence of BB* and B*B.
(ii) Since (A ⊕ B)(A ⊕ B)* = (A ⊕ B)(A* ⊕ B*) = (AA*) ⊕ (BB*), the matrix A ⊕ B is normal if and only if both A and B are normal.
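Both statements can be spot-checked with `numpy.kron` and a block matrix; the nilpotent Jordan blocks below are our own illustrative choice:

```python
import numpy as np

def is_normal(M):
    return np.allclose(M @ M.conj().T, M.conj().T @ M)

# Two nonnormal matrices (nilpotent Jordan blocks, chosen for illustration):
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 2.0], [0.0, 0.0]])

K = np.kron(A, B)                          # Kronecker product A (x) B
D = np.block([[A, np.zeros((2, 2))],
              [np.zeros((2, 2)), B]])      # direct sum A (+) B

print(is_normal(A), is_normal(B))   # False False
print(is_normal(K))                 # False
print(is_normal(D))                 # False
```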
Supplementary Problems

Problem 1. Are all nonzero nilpotent matrices nonnormal?

Problem 2. (i) Let a11, a22, ε ∈ ℝ. What is the condition on a11, a22, ε such that the 2 × 2 matrix
is normal?
(ii) Let a > 0, b > 0 and θ ∈ [0, π]. What are the conditions on a, b, θ such that
is a normal matrix?
(iii) Let z ∈ ℂ. Is the 2 × 2 matrix
( 1   1 - e^z
  0   e^{-z} )
nonnormal?
Problem 3. (i) Let x1, x2, x3 ∈ ℝ. What is the condition on x1, x2, x3 such that the 3 × 3 matrix
A = ( 0   0   x1
      x2  0   0
      0   x3  0 )
is normal? Show that λ³ = x1 x2 x3 for det(A - λI₃) = 0. Find the eigenvalues and normalized eigenvectors of A.
(ii) Let x ∈ ℝ. For which values of x is the 3 × 3 matrix
( -1 + ix   ix       ix
  ix        ix       ix
  ix        ix       1 + ix )
nonnormal?
Problem 4. Show that if A is nonnormal, then A • A* is normal, where • denotes the Hadamard product. Notice that A • B = B • A and (A • B)* = A* • B*.
Problem 5. Find all nonnormal 2 × 2 matrices A such that AA* + A*A = I₂. An example is
Problem 6. (i) Can one find a nonnormal matrix A such that e^A e^{A*} = e^{A+A*}?
(ii) Can one find a nonnormal matrix A such that e^A e^{A*} = e^{A*} e^A?

Problem 7. Let M be an invertible n × n matrix. Then we can form
M₊ = (1/2)(M + M⁻¹),   M₋ = (1/2)(M - M⁻¹).
Consider the nonnormal matrix
Find M₊ and M₋. Are the matrices M₊ and M₋ normal?
Problem 8. Consider the 5 × 5 matrix
M = ( s1  -1   0   0   1
       1  s2  -1   0   0
       0   1  s3  -1   0
       0   0   1  s4  -1
      -1   0   0   1  s5 )
with s_j := 2 sin(2πj/5), j = 1, ..., 5. Show that the matrix is nonnormal. Find the eigenvalues and eigenvectors. Is the matrix diagonalizable?
Problem 9. Let A, B be nonzero n × n hermitian matrices. We form the matrix K as K := A + iB. This matrix is nonnormal if [A, B] ≠ 0ₙ and normal if AB = BA. In the following we assume that K is nonnormal. Assume that we can find a unitary n × n matrix U such that UKU* = K*. What can be said about the eigenvalues of K? In physics such a unitary matrix is called a quasi-symmetry operator.
Problem 10. Let Q be a nonnormal invertible n × n matrix. Is Q ⊗ Q⁻¹ nonnormal?

Problem 11. Let M be an n × n nonnormal matrix. Are the 2n × 2n matrices
nonnormal?
Problem 12. Let A be a nonnormal invertible matrix. How do we construct a matrix B such that A = exp(B)? Study first the two examples
Both are elements of the Lie group SL(2, ℝ). The matrix A1 admits only one linearly independent eigenvector, namely (1, 0)ᵀ. The matrix
generates A1.

Problem 13. Let A be an n × n nonnormal matrix and Q ∈ SL(n, ℂ). Can one find Q such that Ã = QAQ⁻¹ is normal?
Chapter 15
Binary Matrices

An m × n matrix A is a binary matrix if a_jk ∈ {0, 1} for j = 1, ..., m and k = 1, ..., n. A binary matrix is a matrix over the two-element field GF(2) = {0, 1}. In GF(2) we have that -1 = 1, i.e. 1 + 1 = 0. Thus A + A = 0 for all binary matrices A, where 0 is the zero matrix. The special linear group and the general linear group coincide over GF(2), GLₙ(GF(2)) = SLₙ(GF(2)). The set of invertible 2 × 2 binary matrices SL₂(GF(2)) contains six elements.
Over the complex numbers, each of these matrices can be expressed as a group commutator ABA⁻¹B⁻¹ for some 2 × 2 complex matrices A and B. Over GF(2), this is not the case. For example
over the complex numbers, but
for all A, B ∈ SL₂(GF(2)).
Permutation matrices are binary matrices with the underlying field ℝ.
The Kronecker product, the direct sum and the star product of binary matrices are again binary matrices.
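The six elements of SL₂(GF(2)) can be recovered by enumeration: of the 16 binary 2 × 2 matrices, exactly those with determinant 1 over GF(2) are invertible. A sketch:

```python
from itertools import product

def det_gf2(m):
    """Determinant of a 2x2 binary matrix over GF(2): (a11.a22) XOR (a12.a21)."""
    (a, b), (c, d) = m
    return (a & d) ^ (b & c)

matrices = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]
invertible = [m for m in matrices if det_gf2(m) == 1]
print(len(invertible))   # 6
```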
Problem 1. Consider the four 2 × 2 binary matrices
with the underlying field F, char(F) = 2. Find A + B + C + D and ABCD.

Solution 1. We have
A + B + C + D = O₂,   ABCD = O₂.

Problem 2.
For a 2 × 2 binary matrix
A = ( a11  a12
      a21  a22 ),   a_jk ∈ {0, 1}
we define the determinant as
det(A) = (a11 • a22) ⊕ (a12 • a21)
where • is the AND-operation and ⊕ is the XOR-operation.
(i) Find the determinant for the following 2 × 2 matrices
(ii) Find the determinant for the following 2 × 2 matrices

Solution 2. (i) Since 0•0 = 0, 0•1 = 0, 1•0 = 0, 1•1 = 1 and 0⊕0 = 0, 0⊕1 = 1, 1⊕0 = 1, 1⊕1 = 0 we find that all 6 matrices have determinant equal to 1.
(ii) Similarly we find that all 10 matrices have determinant equal to 0.
Problem 3. The determinant of a 3 × 3 matrix A = (a_jk) is given by
a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a13 a22 a31 - a11 a23 a32 - a12 a21 a33.
For a binary matrix B we replace this expression by
det(B) = (b11 • b22 • b33) ⊕ (b12 • b23 • b31) ⊕ (b13 • b21 • b32)
         ⊕ (b13 • b22 • b31) ⊕ (b11 • b23 • b32) ⊕ (b12 • b21 • b33).
(i) Calculate the determinant for the binary matrices
( 1 0 0       ( 1 1 1
  0 1 0         0 1 1
  0 0 1 ),      0 0 1 ).
(ii) Calculate the determinant for the binary matrices
( 1 1 0       ( 1 0 0       ( 1 0 1
  1 1 0         1 0 0         1 0 0
  0 0 0 ),      1 0 0 ),      1 0 1 ).

Solution 3. (i) For both matrices the determinant is 1.
(ii) For all three matrices the determinant is 0.
Problem 4. The finite field GF(2) consists of the elements 0 and 1 (bits) which satisfy the following addition (XOR) and multiplication (AND) tables

  ⊕ | 0 1        • | 0 1
  --+-----       --+-----
  0 | 0 1        0 | 0 0
  1 | 1 0        1 | 0 1

Find the determinant of the binary matrices
A = ( 1 0 1       B = ( 1 1 1
      0 1 0             0 1 1
      1 0 1 ),          0 0 1 ).

Solution 4. Applying the rule of Sarrus provides
det(A) = (1•1•1) ⊕ (0•0•1) ⊕ (1•0•0) ⊕ (1•1•1) ⊕ (1•0•0) ⊕ (0•0•1) = 0
det(B) = (1•1•1) ⊕ (1•1•0) ⊕ (1•0•0) ⊕ (1•1•0) ⊕ (1•1•0) ⊕ (1•0•1) = 1.
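The GF(2) rule of Sarrus is easy to encode with Python's bitwise operators; a sketch checking the two determinants above:

```python
def det3_gf2(b):
    """Rule of Sarrus over GF(2): AND (&) for products, XOR (^) for sums."""
    return ((b[0][0] & b[1][1] & b[2][2]) ^ (b[0][1] & b[1][2] & b[2][0])
            ^ (b[0][2] & b[1][0] & b[2][1]) ^ (b[0][2] & b[1][1] & b[2][0])
            ^ (b[0][0] & b[1][2] & b[2][1]) ^ (b[0][1] & b[1][0] & b[2][2]))

A = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
B = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
print(det3_gf2(A), det3_gf2(B))   # 0 1
```

This agrees with the ordinary determinant reduced modulo 2.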
Problem 5. A boolean function f : {0,1}ⁿ → {0,1} can be transformed from the domain {0,1} into the spectral domain by a linear transformation
T y = s
where T is a 2ⁿ × 2ⁿ orthogonal matrix, y = (y0, y1, ..., y_{2ⁿ-1})ᵀ is the two-valued ({+1, -1} with 0 → 1, 1 → -1) truth table vector of the boolean function and s_j (j = 0, 1, ..., 2ⁿ-1) are the spectral coefficients (s = (s0, s1, ..., s_{2ⁿ-1})ᵀ). Since T is invertible we have
T⁻¹ s = y.
For T we select the Hadamard matrix. The 2ⁿ × 2ⁿ Hadamard matrix H(n) is recursively defined as
H(n) = ( H(n-1)   H(n-1)
         H(n-1)  -H(n-1) ),   n = 1, 2, ...
with H(0) = (1) (1 × 1 matrix). The inverse of H(n) is given by
H⁻¹(n) = (1/2ⁿ) H(n).
Now any boolean function f(x1, ..., xn) can be expanded as the arithmetical polynomial
f(x1, ..., xn) = (1/2^{n+1}) (2ⁿ - s0 - s1(-1)^{xₙ} - s2(-1)^{x_{n-1}} - s3(-1)^{x_{n-1} ⊕ xₙ} - ···)
where ⊕ denotes the modulo-2 addition.
Consider the boolean function f : {0,1}³ → {0,1} given by
f(x1, x2, x3) = x̄1 • x̄2 • x̄3 + x̄1 • x2 • x̄3 + x1 • x2 • x̄3.
Find the truth table, the vector y and then calculate, using H(3), the spectral coefficients s_j (j = 0, 1, ..., 7).
Solution 5. The truth table with the vector y is given by

  (x1, x2, x3) | f(x1, x2, x3) |  y
      0 0 0    |       1       | -1
      0 0 1    |       0       |  1
      0 1 0    |       1       | -1
      0 1 1    |       0       |  1
      1 0 0    |       0       |  1
      1 0 1    |       0       |  1
      1 1 0    |       1       | -1
      1 1 1    |       0       |  1

where we used the map 0 → 1, 1 → -1 to find the vector y. Now the Hadamard matrix H(3) is given by
H(3) = ( H(2)   H(2)
         H(2)  -H(2) ).
Thus
H(3) = ( 1  1  1  1  1  1  1  1
         1 -1  1 -1  1 -1  1 -1
         1  1 -1 -1  1  1 -1 -1
         1 -1 -1  1  1 -1 -1  1
         1  1  1  1 -1 -1 -1 -1
         1 -1  1 -1 -1  1 -1  1
         1  1 -1 -1 -1 -1  1  1
         1 -1 -1  1 -1  1  1 -1 ).
We obtain s = H(3)y = (2, -6, 2, 2, -2, -2, -2, -2)ᵀ and
f(x1, x2, x3) = (1/16)(8 - 2 + 6(-1)^{x3} - 2(-1)^{x2} - 2(-1)^{x2⊕x3}
                + 2(-1)^{x1} + 2(-1)^{x1⊕x3} + 2(-1)^{x1⊕x2} + 2(-1)^{x1⊕x2⊕x3}).
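The spectral coefficients can be reproduced numerically by building H(3) with the Sylvester recursion; a sketch:

```python
import numpy as np

# Build H(3) via the recursion H(n) = [[H, H], [H, -H]], H(0) = (1).
H = np.array([[1]])
for _ in range(3):
    H = np.block([[H, H], [H, -H]])

def f(x1, x2, x3):
    # f = x1'.x2'.x3' + x1'.x2.x3' + x1.x2.x3'  (prime = logical NOT)
    return (((1 - x1) & (1 - x2) & (1 - x3))
            | ((1 - x1) & x2 & (1 - x3))
            | (x1 & x2 & (1 - x3)))

# Truth table vector y with the map 0 -> +1, 1 -> -1.
table = [f(x1, x2, x3) for x1 in (0, 1) for x2 in (0, 1) for x3 in (0, 1)]
y = np.array([1 - 2 * v for v in table])
s = H @ y
print(s)   # [ 2 -6  2  2 -2 -2 -2 -2]
```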
Problem 6. Consider the binary matrices
A =
Calculate the Hadamard product A • B and the star product A ★ B.

Solution 6. We find
A • B = ( 1 0 0 1
          0 0 1 0
          0 1 0 0
          1 0 0 1 ).
Both are binary matrices again.
Problem 7. A binary Hadamard matrix is an n × n matrix M (where n is even) with entries in {0, 1} such that any two distinct rows or columns of M have Hamming distance n/2. The Hamming distance between two vectors is the number of entries at which they differ. Find a 4 × 4 binary Hadamard matrix.

Solution 7. Hadamard matrices are in bijection with binary Hadamard matrices under the mapping 1 → 1 and -1 → 0. Thus using the result from the previous problem we obtain
( 1 1 0 1
  1 1 1 0
  1 0 1 1
  1 0 0 0 ).
Problem 8. How many 2 × 2 binary matrices can one form which contain two 1's?

Solution 8. There are six such matrices. Find the commutator between these matrices.
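The count is simply the number of ways of placing two 1's among four entries, C(4, 2) = 6; a one-line enumeration:

```python
from itertools import product

# 2x2 binary matrices with exactly two entries equal to 1:
count = sum(1 for bits in product((0, 1), repeat=4) if sum(bits) == 2)
print(count)   # 6
```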
Supplementary Problems

Problem 1. How many 3 × 3 binary matrices can one form which contain three 1's? Write down these matrices. Which of them are invertible?
Problem 2. Consider the four 8 × 8 binary matrices S0, S1, S2, S3. Find the 8 × 8 matrices Q0, Q1, Q2, Q3 such that
Q0 S0 Q0ᵀ = S1,   Q1 S1 Q1ᵀ = S2,   Q2 S2 Q2ᵀ = S3,   Q3 S3 Q3ᵀ = S0.
Problem 3. Consider 2 × 2 binary matrices A and B with the underlying field F, char(F) = 2. Find the condition on A and B such that A + B = AB.
Problem 4. Consider the two permutation matrices (NOT-gate and XOR-gate)
N = ( 0 0 0 1        X = ( 1 0 0 0
      0 0 1 0              0 1 0 0
      0 1 0 0              0 0 0 1
      1 0 0 0 ),           0 0 1 0 ).
Can we generate all other permutation matrices from these two permutation matrices? Note that N² = I₄, X² = I₄ and
NX = ( 0 0 1 0        XN = ( 0 0 0 1
       0 0 0 1               0 0 1 0
       0 1 0 0               1 0 0 0
       1 0 0 0 ),            0 1 0 0 ).
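The question can be settled by brute force: close {N, X} under matrix multiplication and count. A small breadth-first search suggests the generated group has only 8 elements (a dihedral subgroup of the 24 permutation matrices), so the answer is no:

```python
import numpy as np

N = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]])
X = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# Closure of {N, X} under matrix multiplication (BFS over products).
gens = [N, X]
group = {N.tobytes(): N, X.tobytes(): X}
frontier = list(group.values())
while frontier:
    new = []
    for g in frontier:
        for h in gens:
            p = g @ h
            if p.tobytes() not in group:
                group[p.tobytes()] = p
                new.append(p)
    frontier = new
print(len(group))   # 8, out of 4! = 24 permutation matrices
```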
Chapter 16
Star Product

Consider the 2 × 2 matrices
A = ( a11  a12       B = ( b11  b12
      a21  a22 ),          b21  b22 ).
We define the composition A ★ B (star product) and A ∗ B as
A ★ B := ( b11  0    0    b12
           0    a11  a12  0
           0    a21  a22  0
           b21  0    0    b22 ),
A ∗ B := ( 0    a11  a12  0
           b11  0    0    b12
           b21  0    0    b22
           0    a21  a22  0 ).
Note that
I₂ ★ I₂ = I₄,   O₂ ★ O₂ = O₄.
This definition can be extended to higher dimensions. For example the extension to 4 × 4 matrices A = (a_jk) and B = (b_jk) is the 8 × 8 matrix for A ★ B
( b11 b12  0   0   0   0  b13 b14
  b21 b22  0   0   0   0  b23 b24
   0   0  a11 a12 a13 a14  0   0
   0   0  a21 a22 a23 a24  0   0
   0   0  a31 a32 a33 a34  0   0
   0   0  a41 a42 a43 a44  0   0
  b31 b32  0   0   0   0  b33 b34
  b41 b42  0   0   0   0  b43 b44 )
and analogously we define A ∗ B.
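A minimal sketch of the 2 × 2 star product, assuming the block layout above (B on the corners, A in the middle):

```python
import numpy as np

def star(A, B):
    """Star product of two 2x2 matrices: B on the corners, A in the middle."""
    return np.array([
        [B[0, 0], 0,       0,       B[0, 1]],
        [0,       A[0, 0], A[0, 1], 0      ],
        [0,       A[1, 0], A[1, 1], 0      ],
        [B[1, 0], 0,       0,       B[1, 1]]])

I2 = np.eye(2)
print(np.array_equal(star(I2, I2), np.eye(4)))   # True: I2 * I2 = I4

# The Hadamard matrix yields a unitary 4x4 matrix:
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = star(U, U)
print(np.allclose(S @ S.conj().T, np.eye(4)))    # True
```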
Problem 1. Consider the unitary and hermitian 2 × 2 matrix
U = (1/√2)( 1   1
            1  -1 ).
(i) Find U ★ U. Is U ★ U unitary? Is U ★ U hermitian?
(ii) Find U ∗ U. Is U ∗ U unitary? Is U ∗ U hermitian?

Solution 1. (i) We obtain the Bell matrix
U ★ U = (1/√2)( 1  0  0  1
                0  1  1  0
                0  1 -1  0
                1  0  0 -1 ),
which is unitary and hermitian.
(ii) We obtain the hermitian and unitary matrix
U ∗ U = (1/√2)( 0  1  1  0
                1  0  0  1
                1  0  0 -1
                0  1 -1  0 ).
Problem 2. Consider the nonnormal 2 × 2 matrices
A = ( 0  1       B = ( 0  0
      0  0 ),          1  0 )
with the eigenvalues 0 (2×).
(i) Find A ★ B and B ★ A and the eigenvalues.
(ii) Is A ★ B nonnormal? Is B ★ A nonnormal?

Solution 2. (i) We have
A ★ B = ( 0 0 0 0        B ★ A = ( 0 0 0 1
          0 0 1 0                  0 0 0 0
          0 0 0 0                  0 1 0 0
          1 0 0 0 ),               0 0 0 0 ) = (A ★ B)ᵀ.
For both matrices the eigenvalues are 0 (4×).
(ii) Yes. Both A ★ B and B ★ A are nonnormal. Note that (A ★ B)(B ★ A) is normal.
Problem 3. (i) What can be said about the trace of A ★ B? What can be said about the determinant of A ★ B?
(ii) What can be said about the trace of A ∗ B? What can be said about the determinant of A ∗ B?

Solution 3. (i) We have
tr(A ★ B) = tr(A) + tr(B),   det(A ★ B) = det(A) det(B).
(ii) We have tr(A ∗ B) = 0 and det(A ∗ B) = det(A) det(B).
Problem 4. Let A, B be 2 × 2 matrices. Can one find a 4 × 4 permutation matrix P such that P(A ★ B)Pᵀ = B ⊕ A? Here ⊕ denotes the direct sum.

Solution 4. We find the permutation matrix
P = ( 1 0 0 0
      0 0 0 1
      0 1 0 0
      0 0 1 0 ).
Problem 5. Let A, B be 2 × 2 matrices. Let A ★ B be the star product. Show that one can find a 4 × 4 permutation matrix P such that
P(A ★ B)Pᵀ = ( a11 a12  0   0
               a21 a22  0   0
                0   0  b11 b12
                0   0  b21 b22 ).

Solution 5. Owing to the structure of the problem one can read off the permutation matrix
P = ( 0 1 0 0
      0 0 1 0
      1 0 0 0
      0 0 0 1 ).
Note that
P(A ★ B) = ( 0   a11 a12  0
             0   a21 a22  0
             b11  0   0  b12
             b21  0   0  b22 ).
Problem 6. The 2 × 2 matrices
I = ( 1  0       J = ( 0  1
      0  1 ),          1  0 )
form a group under matrix multiplication. Do the four 4 × 4 matrices I ★ I, I ★ J, J ★ I, J ★ J form a group under matrix multiplication?

Solution 6. Yes. Note that the four matrices are permutation matrices and form a subgroup of the group of all 4 × 4 permutation matrices. The neutral element is the 4 × 4 matrix I ★ I = I₄. Furthermore
(J ★ J)(J ★ J) = I ★ I,   (I ★ J)(I ★ J) = I ★ I,   (J ★ I)(I ★ J) = J ★ J.
Problem 7. Given the eigenvalues and eigenvectors of the 2 × 2 matrices A and B. What can be said about the eigenvalues and eigenvectors of A ★ B?

Solution 7. From
det( b11-λ  0      0      b12
     0      a11-λ  a12    0
     0      a21    a22-λ  0
     b21    0      0      b22-λ ) = 0
we obtain
det(A - λI₂) det(B - λI₂) = 0.
Thus if λ1, λ2 are the eigenvalues of A and μ1, μ2 are the eigenvalues of B, then the eigenvalues of A ★ B are λ1, λ2, μ1, μ2. If (u1, u2)ᵀ is an eigenvector of A and (v1, v2)ᵀ is an eigenvector of B, then the corresponding eigenvectors of A ★ B have the form
(0, u1, u2, 0)ᵀ,   (v1, 0, 0, v2)ᵀ.

Problem 8.
The 2 × 2 matrix
A(α) = ( cos(α)  -sin(α)
         sin(α)   cos(α) )
admits the eigenvalues λ₊(α) = e^{iα} and λ₋(α) = e^{-iα} with the corresponding normalized eigenvectors
(1/√2)( 1, -i )ᵀ,   (1/√2)( 1, i )ᵀ.
Let
A(α) ★ A(α) = ( cos(α)   0        0       -sin(α)
                0        cos(α)  -sin(α)   0
                0        sin(α)   cos(α)   0
                sin(α)   0        0        cos(α) ).
Find the eigenvalues and normalized eigenvectors of A(α) ★ A(α).

Solution 8. For the eigenvalues we find e^{iα} (twice) and e^{-iα} (twice) with the corresponding normalized eigenvectors
(1/√2)( 1, 0, 0, -i )ᵀ,   (1/√2)( 0, 1, -i, 0 )ᵀ,   (1/√2)( 1, 0, 0, i )ᵀ,   (1/√2)( 0, 1, i, 0 )ᵀ.
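The eigenvalue statement of the two previous problems (the spectrum of the star product is the union of the spectra) can be checked numerically; a sketch, with the `star` helper redefined here under the corner-block convention assumed above:

```python
import numpy as np

def star(A, B):
    # Star product convention: B on the corner block, A in the middle block.
    return np.array([
        [B[0, 0], 0,       0,       B[0, 1]],
        [0,       A[0, 0], A[0, 1], 0      ],
        [0,       A[1, 0], A[1, 1], 0      ],
        [B[1, 0], 0,       0,       B[1, 1]]])

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

ev_star = np.sort_complex(np.linalg.eigvals(star(A, B)))
ev_AB = np.sort_complex(np.concatenate([np.linalg.eigvals(A),
                                        np.linalg.eigvals(B)]))
print(np.allclose(ev_star, ev_AB))   # True
```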
Problem 9. Let A, B be two 2 × 2 matrices and A ★ B be the star product. Can one find a 4 × 4 permutation matrix P such that
P(A ★ B) = A ∗ B?

Solution 9. We obtain
P = ( 0 1 0 0
      1 0 0 0
      0 0 0 1
      0 0 1 0 ) = P⁻¹.

Problem 10. Let A, B be 2 × 2 matrices. Show that A ∗ B - B ∗ A can be written as a star product.

Solution 10. We obtain
A ∗ B - B ∗ A = C ∗ (-C),   C = ( c11  c12
                                  c21  c22 )
where
c11 = a11 - b11,   c12 = a12 - b12,   c21 = a21 - b21,   c22 = a22 - b22.
Thus A ∗ B - B ∗ A can be written as a star product.
Supplementary Problems

Problem 1. Consider the star product A ★ B of two 2 × 2 matrices A and B and the product
A ∘ B := ( 0   a11 a12  0
           b11  0   0  b12
            0  a21 a22  0
           b21  0   0  b22 ).
Show that there is a 4 × 4 permutation matrix P such that P(A ★ B) = A ∘ B.

Problem 2. Consider the 2 × 2 matrices A, B over ℂ. Let A ★ B be the star product.
(i) Let A and B be normal matrices. Show that A ★ B is normal.
(ii) Let A and B be unitary matrices. Show that A ★ B is a unitary matrix.
(iii) Let A and B be nilpotent matrices. Show that A ★ B is a nilpotent matrix.
Problem 3. Let A, B be invertible 2 × 2 matrices. Let A ★ B be the star product. Show that
(A ★ B)⁻¹ = A⁻¹ ★ B⁻¹.

Problem 4. Let A be a 3 × 3 matrix and B be a 2 × 2 matrix. We define
A ★ B := ( b11  0    0    0    b12
           0    a11  a12  a13  0
           0    a21  a22  a23  0
           0    a31  a32  a33  0
           b21  0    0    0    b22 ).
(i) Find det(A ★ B) and tr(A ★ B).
(ii) Let A, B be normal matrices. Is A ★ B normal?
(iii) Let A, B be unitary. Is A ★ B unitary?
(iv) Let
Find A ★ B and calculate the eigenvalues and normalized eigenvectors.

Problem 5. Let Π1 and Π2 be 2 × 2 projection matrices. Is the 4 × 4 matrix Π1 ★ Π2 a projection matrix? Apply it to Π1 ★ Π1 where

Problem 6. (i) Let U and V be elements of the Lie group SU(2). Is U ★ V an element of the Lie group SU(4)?
(ii) Let X and Y be elements of SL(2, ℝ). Is X ★ Y an element of SL(4, ℝ)?

Problem 7. Let A, B be normal 2 × 2 matrices with eigenvalues λ1, λ2 and μ1, μ2, respectively. What can be said about the eigenvalues of A ★ B - B ★ A?

Problem 8. Let A, B be 2 × 2 matrices.
(i) What are the conditions on A and B such that A ★ B = A ⊗ B?
(ii) What are the conditions on A and B such that A ★ B = A ⊕ B?
Chapter 17
Unitary Matrices

An n × n matrix U over ℂ is called a unitary matrix if
U* = U⁻¹.
Thus UU* = Iₙ for a unitary matrix. If U and V are unitary n × n matrices, then UV is an n × n unitary matrix. The columns in a unitary matrix are pairwise orthonormal. Thus the column vectors in a unitary matrix form an orthonormal basis in ℂⁿ. If v is a normalized (column) vector in ℂⁿ, then Uv is also a normalized vector. The n × n unitary matrices form a group under matrix multiplication. Let K be a skew-hermitian matrix, then exp(K) is a unitary matrix. For any unitary matrix U we can find a skew-hermitian matrix K such that U = exp(K). Let H be an n × n hermitian matrix, then U := exp(iH) is a unitary matrix. The eigenvalues of a unitary matrix are of the form e^{iφ} with φ ∈ ℝ. Let A be an arbitrary n × n matrix over ℂ. Then under the map A → UAU* the eigenvalues are preserved and thus also the trace and the determinant of A.
If U is an n × n unitary matrix and V is an m × m unitary matrix, then U ⊗ V is a unitary matrix, where ⊗ denotes the Kronecker product. If U is an n × n unitary matrix and V is an m × m unitary matrix, then U ⊕ V is a unitary matrix, where ⊕ denotes the direct sum. If U is a 2 × 2 unitary matrix and V is a 2 × 2 unitary matrix, then U ★ V is a unitary matrix, where ★ denotes the star operation. The Fourier matrices and the permutation matrices are important special cases of unitary matrices. The square roots of a unitary matrix are unitary again. Any unitary n × n matrix is conjugate to a diagonal matrix of the form
diag(e^{iφ1}, e^{iφ2}, ..., e^{iφn}).
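The statement that exp(iH) is unitary for hermitian H can be checked numerically; a sketch using the spectral theorem to form the exponential (the random H is our own example):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (M + M.conj().T) / 2                  # hermitian

# exp(iH) via the spectral theorem: H = V diag(lam) V*  =>  exp(iH) = V diag(e^{i lam}) V*
lam, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(1j * lam)) @ V.conj().T

print(np.allclose(U @ U.conj().T, np.eye(3)))           # True: U is unitary
print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))   # True: eigenvalues e^{i phi}
```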
Problem 1. (i) Let φ1, φ2 ∈ ℝ. Are the 2 × 2 matrices
U1 = ( e^{iφ1}  0            U2 = ( 0        e^{iφ1}
       0        e^{iφ2} ),          e^{iφ2}  0 )
unitary?
(ii) Are the 2 × 2 matrices
V1 = ( 1/2    √3/2        V2 = (  1/2   -√3/2
       √3/2  -1/2 ),             -√3/2  -1/2 )
unitary?
(iii) Let θ, φ ∈ ℝ. Is the 2 × 2 matrix
U(θ, φ) = ( cos(θ)           e^{iφ} sin(θ)
            e^{-iφ} sin(θ)  -cos(θ) )
unitary? If so find the inverse.

Solution 1. (i) Yes. We find U1U1* = I₂ and U2U2* = I₂.
(ii) Yes. We have V1V1ᵀ = I₂ and V2V2ᵀ = I₂.
(iii) We have
U*(θ, φ) = ( cos(θ)           e^{iφ} sin(θ)
             e^{-iφ} sin(θ)  -cos(θ) )
and U(θ, φ)U*(θ, φ) = I₂. Thus U*(θ, φ) = U⁻¹(θ, φ) and U(θ, φ) is unitary.
Problem 2. (i) Can one find a 2 × 2 unitary matrix U such that
(ii) Find a 2 × 2 unitary matrix V such that
(iii) Can one find a 2 × 2 unitary matrix W such that

Solution 2. (i) Note that both matrices are hermitian and have the same eigenvalues, namely +1, -1. We find the Hadamard matrix
(ii) Let φ ∈ ℝ. We find
(iii) One obtains
Problem 3. Let 0 ≤ r_jk ≤ 1 and φ_jk ∈ ℝ (j, k = 1, 2). Find the conditions on r_jk, φ_jk such that the 2 × 2 matrix
U(r_jk, φ_jk) = ( r11 e^{iφ11}  r12 e^{iφ12}
                  r21 e^{iφ21}  r22 e^{iφ22} )
is unitary. Then simplify to the special case φ_jk = 0 for j, k = 1, 2.

Solution 3. We have
U*(r_jk, φ_jk) = ( r11 e^{-iφ11}  r21 e^{-iφ21}
                   r12 e^{-iφ12}  r22 e^{-iφ22} ).
Then from U(r_jk, φ_jk)U*(r_jk, φ_jk) = I₂ we obtain the four conditions
r11² + r12² = 1,   r21² + r22² = 1,
r11 r21 e^{i(φ11-φ21)} + r12 r22 e^{i(φ12-φ22)} = 0,
r11 r21 e^{-i(φ11-φ21)} + r12 r22 e^{-i(φ12-φ22)} = 0.
Using addition and subtraction the last two equations can be cast into
r11 r21 cos(φ11 - φ21) + r12 r22 cos(φ12 - φ22) = 0,
r11 r21 sin(φ11 - φ21) + r12 r22 sin(φ12 - φ22) = 0.
If φ11 = φ12 = φ21 = φ22 = 0 we arrive at the three equations
r11² + r12² = 1,   r21² + r22² = 1,   r11 r21 + r12 r22 = 0.

Problem 4.
(i) Is the 3 × 3 matrix
U1 = ( -1/√2  -i/√2   0
        0      0      1
        1/√2  -i/√2   0 )
unitary?
(ii) Is the 3 × 3 matrix
U2 = ( 1/√3   1/√3    1/√3
       1/√2  -1/√2    0
       1/√6   1/√6   -2/√6 )
a unitary matrix?
(iii) Consider the 3 × 3 unitary matrix
and the normalized vector in ℂ³
Find the vectors Uv and U²v.

Solution 4. (i) We have det(U1) = -i and the rows are pairwise orthonormal to each other. Thus the matrix is unitary.
(ii) Yes. We have U2U2ᵀ = I₃.
(iii) We find the normalized vectors
Study Uⁿv as n → ∞.
Problem 5. Consider the orthonormal basis v1, v2, v3 in ℂ³.
(i) Is U = ( v1 v2 v3 ) a unitary matrix?
(ii) Let φ1, φ2, φ3 ∈ ℝ. Is V(φ1, φ2, φ3) = e^{iφ1} v1v1* + e^{iφ2} v2v2* + e^{iφ3} v3v3* a unitary matrix?

Solution 5. (i) Yes. We have UU* = I₃.
(ii) Yes. The matrix is given by
This is an application of the spectral theorem.
Problem 6. (i) Let A, B be n × n matrices over ℝ. Show that one can find a 2n × 2n unitary matrix U such that
U ( A  -B       U* = ( A + iB   0ₙ
    B   A )            0ₙ       A - iB ).
Here 0ₙ denotes the n × n zero matrix.
(ii) Use the result from (i) to show that
det( A  -B
     B   A ) ≥ 0.

Solution 6. (i) Let Iₙ be the n × n identity matrix. We find
U = (1/√2)( Iₙ   iIₙ
            iIₙ  Iₙ ).
(ii) Using (i) we obtain
det( A  -B
     B   A ) = det(A + iB) det(A - iB) = det(A + iB) conj(det(A + iB)) = |det(A + iB)|² ≥ 0.
Note that det(A - iB) = conj(det(A + iB)) since A and B are real matrices.
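The block diagonalization and the sign of the determinant can be verified numerically; a sketch with random real A, B (our own example):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

I = np.eye(n)
U = np.block([[I, 1j * I], [1j * I, I]]) / np.sqrt(2)
M = np.block([[A, -B], [B, A]])

T = U @ M @ U.conj().T
print(np.allclose(T[:n, :n], A + 1j * B))     # True
print(np.allclose(T[n:, n:], A - 1j * B))     # True
print(np.allclose(T[:n, n:], 0) and np.allclose(T[n:, :n], 0))  # True
print(np.linalg.det(M) >= 0)                  # True: det is |det(A + iB)|^2
```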
Problem 7. Let v be a column vector in ℂⁿ with v*v = 1, i.e. the vector is normalized. Consider the matrix U = Iₙ - 2vv*.
(i) Show that U is hermitian.
(ii) Show that U is unitary.
(iii) Let n = 2 and v* = (1/√2)(1 1). Find U.

Solution 7. (i) Since (v*)* = v we have
(Iₙ - 2vv*)* = Iₙ - 2vv* = U.
(ii) Since U = U* and v*v = 1 we have
UU* = UU = (Iₙ - 2vv*)(Iₙ - 2vv*) = Iₙ - 2vv* - 2vv* + 4vv* = Iₙ.
(iii) We find
U = (  0  -1
      -1   0 ).
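This is the Householder reflection construction; a short numerical sketch for part (iii):

```python
import numpy as np

v = np.array([[1.0], [1.0]]) / np.sqrt(2)      # normalized column vector
U = np.eye(2) - 2 * (v @ v.conj().T)

print(np.allclose(U, [[0, -1], [-1, 0]]))      # True
print(np.allclose(U, U.conj().T))              # True: hermitian
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: unitary
```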
Problem 8. (i) Let U be a unitary matrix with U² = Iₙ. Can we conclude that U is hermitian?
(ii) Let a ∈ ℝ. Calculate exp(iaU).
(iii) Let a ∈ ℝ. Consider the 2 × 2 unitary matrix
U1 = ( 0   i
      -i   0 ).
Calculate exp(iaU1).
(iv) Let a ∈ ℝ. Consider the 2 × 2 unitary matrix
Calculate exp(iaU2).
(v) Let σ1, σ2 be the Pauli spin matrices. We define the unitary 2 × 2 matrix
σ_θ := cos(θ)σ1 + sin(θ)σ2 = ( 0                    cos(θ) - i sin(θ)
                               cos(θ) + i sin(θ)    0 ).
Calculate exp(-iθσ_θ/2).

Solution 8. (i) We have UU* = Iₙ = UU. Thus U* = U and U is hermitian.
(ii) With U² = Iₙ we obtain exp(iaU) = Iₙ cos(a) + iU sin(a).
(iii) Using the result from (ii) and i·i = -1, i·(-i) = 1 we arrive at the rotation matrix
exp(iaU1) = ( cos(a)  -sin(a)
              sin(a)   cos(a) ).
(iv) Using the result from (ii) we obtain exp(iaU2) = I₂ cos(a) + iU2 sin(a).
(v) We find
exp(-iθσ_θ/2) = cos(θ/2)I₂ - i sin(θ/2)σ_θ.
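The formula exp(iaU) = cos(a)I + i sin(a)U for U² = I can be checked numerically; a sketch with σ1 as the involutive unitary matrix (our choice of example):

```python
import numpy as np

U = np.array([[0.0, 1.0], [1.0, 0.0]])        # sigma1: unitary, U^2 = I
a = 0.7

# exp(iaU) via diagonalization of the hermitian matrix U:
lam, V = np.linalg.eigh(U)
expiaU = V @ np.diag(np.exp(1j * a * lam)) @ V.conj().T

formula = np.cos(a) * np.eye(2) + 1j * np.sin(a) * U
print(np.allclose(expiaU, formula))   # True
```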
Problem 9. Consider the two 2 × 2 unitary matrices
Can one find a unitary 2 × 2 matrix V such that U1 = VU2V*?

Solution 9. Assume that U1 = VU2V* is true. Taking the trace of the left-hand side yields tr(U1) = 2. However, taking the trace of the right-hand side yields
tr(VU2V*) = tr(U2) = 0.
Thus we have a contradiction. Therefore no such V can be found.
Problem 10. Let U be an n × n unitary matrix.
(i) Is U + U* invertible?
(ii) Is U + U* hermitian?

Solution 10. (i) In general U + U* is not invertible. For example
yields U + U* = O₂.
(ii) Obviously U + U* is hermitian since (U*)* = U.
Problem 11. Find the condition on the n × n matrix A over ℂ such that Iₙ + A is a unitary matrix.

Solution 11. From (Iₙ + A)(Iₙ + A*) = Iₙ we obtain the condition A + A* + AA* = 0ₙ.
Problem 12. Let z1, z2, w1, w2 ∈ ℂ. Consider the 2 × 2 matrices
U = ( 0   z1       V = ( 0   w1
      z2  0 ),           w2  0 )
where z1z̄1 = 1, z2z̄2 = 1, w1w̄1 = 1, w2w̄2 = 1. This means the matrices U, V are unitary. Find the condition on z1, z2, w1, w2 such that the commutator [U, V] is again a unitary matrix.

Solution 12. We have
[U, V] = ( 0   z1 )( 0   w1 ) - ( 0   w1 )( 0   z1 ) = ( z1w2 - w1z2   0
           z2  0     w2  0       w2  0     z2  0         0             z2w1 - w2z1 ).
Thus the conditions for this matrix to be unitary are
(z1w2 - w1z2)(z̄1w̄2 - w̄1z̄2) = 1,   (z2w1 - w2z1)(z̄2w̄1 - w̄2z̄1) = 1.
Expanding and using z_j z̄_j = w_j w̄_j = 1, both conditions reduce to
z̄1z2w1w̄2 + z1z̄2w̄1w2 = 1.
Using z1 = r1e^{iφ1}, z2 = r2e^{iφ2}, w1 = s1e^{iθ1}, w2 = s2e^{iθ2} we obtain
2 r1r2s1s2 cos(φ1 - φ2 - θ1 + θ2) = 1.
Problem 13. Find a unitary 3 × 3 matrix U with det(U) = -1 such that

Solution 13. An example is
U = (  1/√2   1/√2   0
       0      0      1
      -1/√2   1/√2   0 ).
Show that the two matrices
A-
A -
( ei6
^ 0
e~ie ) '
B- (
COS^ )
sin (0) \
cos (0) J
384 Problems and Solutions
are conjugate in the Lie group SU( 2).
Solution 14.
Let U G SU (2) given by
Using et0 = cos(0) + isin(0) straightforward calculation yields UAU 1 = B.
Problem 15. Let U be an n × n unitary matrix. Let V be an n × n unitary matrix such that V⁻¹UV = D is a diagonal matrix. Is V⁻¹U*V a diagonal matrix?

Solution 15. Note that U⁻¹ = U*. Then (V⁻¹UV)⁻¹ = V⁻¹U⁻¹V = D⁻¹. Thus V⁻¹U*V is a diagonal matrix.
Problem 16. Let n ≥ 2 be even. Let U be a unitary antisymmetric n × n matrix. Show that there exists a unitary matrix V such that
VᵀUV = ( 0  1        ⊕ ··· ⊕ ( 0  1
        -1  0 )               -1  0 )
where ⊕ denotes the direct sum.

Solution 16. The unitary matrix U can be written as U = S1 + iS2 with real and antisymmetric S1 and S2. Since U is antisymmetric we have Ū = -U⁻¹ and therefore
-Iₙ = UŪ = S1² + S2² - i[S1, S2].
Since the left-hand side is real it follows that [S1, S2] = 0ₙ, and there is an orthogonal n × n matrix O such that
OᵀS_jO = ( 0           a_j^{(1)}        ⊕ ··· ⊕ ( 0             a_j^{(n/2)}
          -a_j^{(1)}   0 )                       -a_j^{(n/2)}   0 )
for j = 1, 2. Since U is unitary we find |a1^{(k)} + i a2^{(k)}| = 1, k = 1, ..., n/2. Hence there exists a diagonal phase matrix P such that V = OP.
Problem 17. Let U be a unitary and symmetric matrix. Show that there exists a unitary and symmetric matrix V such that U = V².

Solution 17. We have U = S1 + iS2, where S1 and S2 are real and symmetric. Now S1 and S2 commute and therefore can be diagonalized simultaneously by an orthogonal matrix O. Then P := OᵀUO is a diagonal matrix and V is given by V = O√P Oᵀ.
Problem 18. Let U be an n × n unitary matrix. Show that |det(U)| = 1.

Solution 18. We have
1 = det(Iₙ) = det(UU*) = det(U) det(U*) = det(U) conj(det(U)) = |det(U)|².
Since |det(U)| ≥ 0 we have |det(U)| = 1.
Problem 19. Let A be an n × n matrix over ℝ. Show that if λ is an eigenvalue of A, then λ̄ is also an eigenvalue of A. Give an example for a 2 × 2 matrix where the eigenvalues are complex.

Solution 19. If A is real, then the characteristic polynomial p_A(λ) of A has real coefficients. Thus conj(p_A(λ)) = p_A(λ̄). If λ is an eigenvalue of A, then from p_A(λ) = 0 it follows that conj(p_A(λ)) = 0 and hence p_A(λ̄) = 0. Therefore λ̄ is also an eigenvalue of A. An example is the 2 × 2 matrix
with the eigenvalues i and -i.
Problem 20. Let V be an n × n normal matrix over ℂ. Assume that all its eigenvalues have absolute value 1, i.e. they are of the form e^{iφ}. Show that V is unitary.

Solution 20. All the eigenvalues of a unitary matrix have absolute value 1. Thus we only need to show the converse of the statement. Since V is normal it is unitarily diagonalizable. This means there is a unitary matrix U such that U*VU = diag(λ1, ..., λn), where by assumption |λ_j| = 1 for j = 1, ..., n. This implies that
U*VU(U*VU)* = Iₙ.
Thus U*VU is a unitary matrix and it follows that V is a unitary matrix.
Problem 21. Find square roots of the Pauli spin matrices
σ1 = ( 0  1       σ2 = ( 0  -i       σ3 = ( 1   0
       1  0 ),           i   0 ),           0  -1 ).

Solution 21. First we note that the square roots of 1 are 1 = e⁰ and -1 = e^{iπ}. The square roots of -1 are i = e^{iπ/2} and -i = e^{-iπ/2}. For the Pauli spin matrix σ3 we find the four square roots diag(±1, ±i). For the square roots of σ1 we apply the spectral decomposition theorem. For the pairs (1, i), (1, -i), (-1, i), (-1, -i), respectively, we find
(1/2)( 1+i  1-i     (1/2)( 1-i  1+i     (1/2)( -1+i  -1-i     (1/2)( -1-i  -1+i
       1-i  1+i ),         1+i  1-i ),         -1-i  -1+i ),         -1+i  -1-i ).
For the square roots of σ2 we apply the spectral decomposition theorem. For the pairs (1, i), (1, -i), (-1, i), (-1, -i), respectively, we find
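Each candidate square root is quick to verify by squaring; a sketch for the first square root of σ1:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]])
# Spectral-decomposition square root for the eigenvalue pair (1, i):
R = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])
print(np.allclose(R @ R, sigma1))   # True
```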
Problem 22. Can one find a unitary matrix U with det (U) = I, i.e. U is an
element of SU (2), such that
Solution 22.
The condition
f
\ Oy
= f Uu
\U2l
^12 ^ f 0>\
J VI /
U22
yields U22 = 0 and U12 = I. Then the conditions that U is unitary and det (U) =
I provides u\\ = 0 and U21 = —I.
Problem 23.
Can one 2 x 2 find unitary matrices such that the rows and
columns add up to one? Of course the 2 x 2 identity matrix is one of them.
Solution 23.
Another example is
Problem 24.
Let U be an n x n unitary matrix. Then
N
det (U - Mn) = ! ! ( e * - A)
where (j>j G [0, 2tt). Let n = 2. Find A for the matrix
Then calculate Ie*^1 — e%<t>2
Solution 24.
We have
A2 — I = A2 — A(ei01 - e i4>2) + ei(</>1+</>2)
Unitary Matrices
387
Thus el<t>1 + e^ 2 = 0,
= - 1 . Consequently 0i = 0, <j)2 = n or <t>1 =
02 = 0. Using this result we obtain Ie1^1 — e ^ 2|= 2.
Problem 25. Find a unitary 2 × 2 matrix U such that σ2U - Uσ3 = 0₂.

Solution 25. An example is
Problem 26. The Pauli spin matrices σ1, σ2, σ3 are hermitian and unitary. Together with the 2 × 2 identity matrix σ0 = I₂ they form an orthogonal basis in the Hilbert space of the 2 × 2 matrices over ℂ with the scalar product tr(AB*). Let X be an n × n hermitian matrix. Then (X + iIₙ)⁻¹ exists and
U = (X - iIₙ)(X + iIₙ)⁻¹
is unitary. This is the Cayley transform of X. Find the Cayley transform of the Pauli spin matrix σ2.

Solution 26. We have
σ2 + iI₂ = ( i  -i          (σ2 + iI₂)⁻¹ = -(1/2)(  i  i
             i   i ),                             -i  i ).
Thus
Ũ2 = (σ2 - iI₂)(σ2 + iI₂)⁻¹ = ( 0  -1
                                1   0 ).
We see that Ũ2 is not hermitian anymore, but skew-hermitian. For the identity matrix we find the Cayley transform -iI₂.
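The Cayley transform is a one-liner in numpy; a sketch checking the result for σ2:

```python
import numpy as np

def cayley(X):
    """Cayley transform (X - iI)(X + iI)^{-1} of a hermitian matrix X."""
    I = np.eye(X.shape[0])
    return (X - 1j * I) @ np.linalg.inv(X + 1j * I)

sigma2 = np.array([[0, -1j], [1j, 0]])
U2 = cayley(sigma2)
print(np.allclose(U2, [[0, -1], [1, 0]]))          # True
print(np.allclose(U2 @ U2.conj().T, np.eye(2)))    # True: unitary
print(np.allclose(U2.conj().T, -U2))               # True: skew-hermitian
```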
Problem 27.
Let A be an n x n hermitian matrix. Then (A + iI_n)^{-1} exists and
U = (A - iI_n)(A + iI_n)^{-1}   (1)
is a unitary matrix (Cayley transform of A).
(i) Show that +1 cannot be an eigenvalue of U.
(ii) Show that A = i(U + I_n)(I_n - U)^{-1}.
Solution 27. (i) Assume that +1 is an eigenvalue, i.e. Ux = x, where x is the eigenvector. Then
(A - iI_n)(A + iI_n)^{-1}x = x = I_n x = (A + iI_n)(A + iI_n)^{-1}x.
Thus
2iI_n(A + iI_n)^{-1}x = 0
or (A + iI_n)^{-1}x = 0. Therefore x = 0 (contradiction).
(ii) We set y = (A + iI_n)^{-1}x, where x \in \mathbb{C}^n is arbitrary. Then x = (A + iI_n)y. From (1) we obtain
Ux = (A - iI_n)(A + iI_n)^{-1}x
and therefore Ux = (A - iI_n)y. Adding and subtracting the equation x = (A + iI_n)y yields the two equations
(U + I_n)x = 2Ay, \quad (I_n - U)x = 2iI_n y.
Thus (U + I_n)x = -iA(I_n - U)x. It follows that
(U + I_n) = -iA(I_n - U) \Rightarrow (U + I_n)(I_n - U)^{-1} = -iA.
Therefore A = i(U + I_n)(I_n - U)^{-1}.
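Both properties of the Cayley transform can be confirmed numerically. A sketch (numpy assumed; the hermitian test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (X + X.conj().T) / 2                      # a random hermitian matrix
I = np.eye(n)

U = (A - 1j * I) @ np.linalg.inv(A + 1j * I)  # Cayley transform of A
unitary = np.allclose(U.conj().T @ U, I)

# inverse formula from part (ii); I - U is invertible since +1 is
# not an eigenvalue of U
A_back = 1j * (U + I) @ np.linalg.inv(I - U)
recovered = np.allclose(A_back, A)
```

Both flags come out True for any hermitian A, in agreement with the solution above.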
Problem 28.
Find the Cayley transform of the hermitian matrix
H = \begin{pmatrix} h_{11} & h_{12} \\ \bar{h}_{12} & h_{22} \end{pmatrix}, \quad h_{11}, h_{22} \in \mathbb{R}, \ h_{12} \in \mathbb{C}.
Solution 28.
We have
H - iI_2 = \begin{pmatrix} h_{11}-i & h_{12} \\ \bar{h}_{12} & h_{22}-i \end{pmatrix}, \quad H + iI_2 = \begin{pmatrix} h_{11}+i & h_{12} \\ \bar{h}_{12} & h_{22}+i \end{pmatrix}, \quad (H + iI_2)^{-1} = \frac{1}{D}\begin{pmatrix} h_{22}+i & -h_{12} \\ -\bar{h}_{12} & h_{11}+i \end{pmatrix}
where
D = \det(H + iI_2) = (h_{11} + i)(h_{22} + i) - h_{12}\bar{h}_{12}.
Consequently the Cayley transform (H - iI_2)(H + iI_2)^{-1} of H is
\frac{1}{D}\begin{pmatrix} (h_{11}-i)(h_{22}+i) - h_{12}\bar{h}_{12} & 2ih_{12} \\ 2i\bar{h}_{12} & (h_{11}+i)(h_{22}-i) - h_{12}\bar{h}_{12} \end{pmatrix}.
Problem 29. Let \delta_j, \eta_j \in \mathbb{R} with j = 1, 2, 3. Any 3 x 3 unitary symmetric matrix U can be written in the product form
U = \begin{pmatrix} e^{i\delta_1} & 0 & 0 \\ 0 & e^{i\delta_2} & 0 \\ 0 & 0 & e^{i\delta_3} \end{pmatrix} \begin{pmatrix} \eta_1 & \gamma_{12} & \gamma_{13} \\ \gamma_{12} & \eta_2 & \gamma_{23} \\ \gamma_{13} & \gamma_{23} & \eta_3 \end{pmatrix} \begin{pmatrix} e^{i\delta_1} & 0 & 0 \\ 0 & e^{i\delta_2} & 0 \\ 0 & 0 & e^{i\delta_3} \end{pmatrix}
where \gamma_{jk} = N_{jk}\exp(i\beta_{jk}) with N_{jk}, \beta_{jk} \in \mathbb{R}. It follows that
U_{jj} = \eta_j\exp(2i\delta_j), \quad U_{jk} = N_{jk}\exp(i(\delta_j + \delta_k + \beta_{jk})).
The unitary condition UU^* = I_3 provides
\sum_{k \neq j} N_{jk}^2 + \eta_j^2 = 1, \quad j = 1, 2, 3
and
N_{12}(\eta_1\exp(i\beta_{12}) + \eta_2\exp(-i\beta_{12})) = N_{13}N_{23}\exp(i(\pi + \beta_{23} - \beta_{13}))
and cyclic (1 -> 2 -> 3 -> 1). Write the given unitary symmetric matrix W in this form.
Solution 29.
Owing to the form of W we can start from an ansatz in which a single phase factor e^{i\alpha} appears and determine \alpha \in [0, 2\pi). We obtain the equation \cos(2\alpha) + i\sin(2\alpha) = i with the solution \alpha = \pi/4.
Problem 30.
(i) Consider the given matrix R. Show that R^{-1} = R^* = R. Use these properties and tr(R) to find all the eigenvalues of R. Find the eigenvectors.
(ii) Let A_1, A_2 be the given matrices. Calculate RA_1R^{-1} and RA_2R^{-1}. Discuss.
Solution 30. (i) The matrix R is hermitian and unitary. Thus the eigenvalues can only be +1 and -1. Together with tr(R) = 1 we obtain the eigenvalues +1 (twice) and -1.
(ii) The products RA_1R^{-1} and RA_2R^{-1} follow by direct matrix multiplication.
Supplementary Problems

Problem 1. Let U be a unitary 2 x 2 matrix. Can the matrix be reconstructed from tr(U), tr(U^2), tr(U^3)?
Problem 2.
Consider the Hadamard matrix
U = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}
which is a unitary and hermitian matrix with eigenvalues +1 and -1. We multiply each column with a phase factor and obtain the matrix
V(\phi_1, \phi_2) = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{i\phi_1} & e^{i\phi_2} \\ e^{i\phi_1} & -e^{i\phi_2} \end{pmatrix}.
Is the matrix still unitary? If not find the conditions on \phi_1 and \phi_2 such that V(\phi_1, \phi_2) is unitary.
Problem 3.
Can one find a \theta such that
\begin{pmatrix} \cos(\theta)e^{-i\theta} & i\sin(\theta)e^{-i\theta} \\ i\sin(\theta)e^{-i\theta} & \cos(\theta)e^{-i\theta} \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1+i & 1-i \\ 1-i & 1+i \end{pmatrix}?
Problem 4.
(i) Is the 2 x 2 matrix
U = \begin{pmatrix} u_3 + iu_4 & u_1 + iu_2 \\ -u_1 + iu_2 & u_3 - iu_4 \end{pmatrix}
with u_1 = \cos(\alpha), u_2 = \sin(\alpha)\cos(\beta), u_3 = \sin(\alpha)\sin(\beta)\cos(\gamma), u_4 = \sin(\alpha)\sin(\beta)\sin(\gamma) unitary?
(ii) Is the given 3 x 3 matrix U (with prefactor 1/\sqrt{3} and entries from \{1, -1, i, -i\}) unitary?
(iii) Show that the 3 x 3 matrix (Fourier matrix)
V = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 & 1 & 1 \\ 1 & \exp(i2\pi/3) & \exp(i4\pi/3) \\ 1 & \exp(i4\pi/3) & \exp(i2\pi/3) \end{pmatrix}
is unitary.
(iv) Are the given 4 x 4 matrices A and B (with prefactor 1/2 and entries \pm 1) unitary?
(v) Show that the given 4 x 4 matrices X, Y, T (with prefactor 1/\sqrt{2} and entries from \{0, \pm 1, \pm i\}) are unitary.
(vi) Let
\tau := \frac{1}{2}(1 - \sqrt{5}), \quad \sigma := \frac{1}{2}(1 + \sqrt{5})
with \sigma the golden mean number. Are the given 4 x 4 matrices U_1, U_2 (with prefactor 1/2 and entries built from 1, \tau, \sigma) unitary? Prove or disprove.
(vii) Show that the given 2n x 2n matrix is unitary.
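The 3 x 3 Fourier matrix of part (iii) is easy to verify numerically. A sketch (numpy assumed), building V_jk = exp(2*pi*i*j*k/3)/sqrt(3) for j, k = 0, 1, 2:

```python
import numpy as np

# Build the 3x3 Fourier matrix V with entries exp(2*pi*i*j*k/3)/sqrt(3).
j, k = np.meshgrid(np.arange(3), np.arange(3), indexing="ij")
V = np.exp(2j * np.pi * j * k / 3) / np.sqrt(3)

# Unitarity: V* V = I
unitary = np.allclose(V.conj().T @ V, np.eye(3))
```

The same index formula gives the unitary Fourier matrix for any dimension n.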
Problem 5. Let A be an n x n hermitian matrix with A^2 = I_n.
(i) Show that U = \frac{1}{\sqrt{2}}(I_n + iA) is unitary.
(ii) Show that U = \frac{1}{\sqrt{2}}(I_n - iA) is unitary.
(iii) Let \phi \in [0, 2\pi). Show that the condition on \phi such that
V(\phi) = \frac{1}{\sqrt{2}}(e^{i\phi}I_n + e^{-i\phi}iA)
is unitary is given by \phi = \pi/2 and \phi = 3\pi/2.
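Parts (i) and (ii) can be checked with any hermitian A satisfying A^2 = I. A sketch (numpy assumed; A = sigma_1 is chosen here as a convenient example):

```python
import numpy as np

# A = sigma_1 is hermitian with A^2 = I.
A = np.array([[0, 1],
              [1, 0]], dtype=complex)
I = np.eye(2)

U_plus = (I + 1j * A) / np.sqrt(2)    # part (i)
U_minus = (I - 1j * A) / np.sqrt(2)   # part (ii)

both_unitary = (np.allclose(U_plus.conj().T @ U_plus, I)
                and np.allclose(U_minus.conj().T @ U_minus, I))
```

The check succeeds because UU^* = (I + A^2)/2 = I in both cases.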
Problem 6.
Let I_n be the n x n identity matrix and J_n be the n x n backward identity matrix, i.e. the entries are 1 at the counter diagonal and 0 otherwise. Are the matrices
U_1 = \frac{1}{\sqrt{2}}(I_n + iJ_n), \quad U_2 = \frac{1}{\sqrt{2}}(I_n - iJ_n), \quad U_3 = \frac{1}{2}((1+i)I_n + (1-i)J_n)
unitary?
Problem 7.
Let U be a unitary 4 x 4 matrix. Assume that each column (vector) can be written as the Kronecker product of two vectors in \mathbb{C}^2. Can we conclude that the eigenvectors of such a unitary matrix can also be written as a Kronecker product of two vectors in \mathbb{C}^2?
Problem 8.
Find all 2 x 2 hermitian and unitary matrices A, B such that AB = e^{i\pi}BA.
Problem 9. (i) Find the conditions on u_{11}, u_{12}, u_{21}, u_{22}, u_{23}, u_{32}, u_{33} \in \mathbb{C} such that the 3 x 3 matrix
U = \begin{pmatrix} u_{11} & u_{12} & 0 \\ u_{21} & u_{22} & u_{23} \\ 0 & u_{32} & u_{33} \end{pmatrix}
is unitary.
(ii) Let 1 > r_{jk} > 0 and \phi_{jk} \in \mathbb{R} (j, k = 1, 2, 3). Find the conditions on the r_{jk} and \phi_{jk} such that
\begin{pmatrix} r_{11}e^{i\phi_{11}} & r_{12}e^{i\phi_{12}} & r_{13}e^{i\phi_{13}} \\ r_{21}e^{i\phi_{21}} & r_{22}e^{i\phi_{22}} & r_{23}e^{i\phi_{23}} \\ r_{31}e^{i\phi_{31}} & r_{32}e^{i\phi_{32}} & r_{33}e^{i\phi_{33}} \end{pmatrix}
is a unitary matrix.
Problem 10. Find all (n+1) x (n+1) matrices A such that A^*UA = U, where U is the unitary matrix
U = \begin{pmatrix} 0 & 0_{n-1}^T & i \\ 0_{n-1} & I_{n-1} & 0_{n-1} \\ -i & 0_{n-1}^T & 0 \end{pmatrix}
and det(A) = 1. Consider first the case n = 2. For n = 2 we have
U = \begin{pmatrix} 0 & 0 & i \\ 0 & 1 & 0 \\ -i & 0 & 0 \end{pmatrix}
and find nine equations.
Problem 11. Consider n x n unitary matrices. A scalar product of two n x n matrices U, V can be defined as
\langle U, V \rangle := \mathrm{tr}(UV^*).
Find two 2 x 2 unitary matrices U, V such that \langle U, V \rangle = 0.
Problem 12. Let U be a unitary matrix. Then the determinant of U must be of the form e^{i\phi}. Find the determinant of U + U^*.
Problem 13.
(i) Consider the given unitary matrix Y. Find all 2 x 2 matrices S such that YSY^{-1} = S^T and S = S^*.
(ii) Can one find a 2 x 2 unitary matrix U such that
\begin{pmatrix} \cos(\theta) & e^{i\phi}\sin(\theta) \\ e^{-i\phi}\sin(\theta) & -\cos(\theta) \end{pmatrix} = U\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}U^*?
Problem 14. Let U be an n x n unitary matrix and A an arbitrary n x n matrix. Then we know that Ue^AU^{-1} = e^{UAU^{-1}}. Calculate Ue^AU with U \neq U^{-1}.
Problem 15.
Consider the unit vectors in \mathbb{C}^3
\mathbf{z} = \begin{pmatrix} z_1 \\ z_2 \\ z_3 \end{pmatrix}, \quad \mathbf{w} = \begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix}
i.e. |z_1|^2 + |z_2|^2 + |z_3|^2 = 1, |w_1|^2 + |w_2|^2 + |w_3|^2 = 1. Assume that (complex unit cone) z_1w_1 + z_2w_2 + z_3w_3 = 0. Show that U \in SU(3) can be written as
U = \begin{pmatrix} z_1 & z_2 & z_3 \\ w_1 & w_2 & w_3 \\ u_1 & u_2 & u_3 \end{pmatrix}
where u_j = \sum_{k,l=1}^{3} \epsilon_{jkl}\bar{z}_k\bar{w}_l.
Problem 16.
(i) Let \phi_1, \phi_2 \in \mathbb{R}. Show that the given 2 x 2 matrix U(\phi_1, \phi_2), built from the phase factors e^{\pm i\phi_1}, e^{\pm i\phi_2}, is unitary. Is U(\phi_1, \phi_2) an element of SU(2)? Find the eigenvalues and eigenvectors of U(\phi_1, \phi_2). Do the eigenvalues and eigenvectors depend on the \phi's? Show that the companion matrix V(\phi_1, \phi_2) is unitary. Is V(\phi_1, \phi_2) an element of SU(2)? Find the eigenvalues and eigenvectors of V(\phi_1, \phi_2). Do the eigenvalues and eigenvectors depend on the \phi's?
(ii) Let \phi_1, \phi_2, \phi_3, \phi_4 \in \mathbb{R}. Show that the given 4 x 4 matrix U(\phi_1, \phi_2, \phi_3, \phi_4), with prefactor 1/\sqrt{2} and entries \pm e^{\pm i\phi_j}, is unitary. Is U(\phi_1, \phi_2, \phi_3, \phi_4) an element of SU(4)? Find the eigenvalues and eigenvectors of U(\phi_1, \phi_2, \phi_3, \phi_4). Do the eigenvalues and eigenvectors depend on the \phi's?
(iii) Let \phi_1, \phi_2 \in \mathbb{R}. Show that the given 2 x 2 matrix U(\phi_1, \phi_2)
is unitary. Is U(\phi_1, \phi_2) an element of SU(2)? Find the eigenvalues and eigenvectors of U(\phi_1, \phi_2). Do the eigenvalues and eigenvectors depend on the \phi's?

Problem 17. (i) What can be said about the eigenvalues of a matrix which is unitary and hermitian?
(ii) What can be said about the eigenvalues of a matrix which is unitary and skew-hermitian?
(iii) What can be said about the eigenvalues of a matrix which is unitary and satisfies U^T = U?
(iv) What can be said about the eigenvalues of a matrix which is unitary and satisfies U^3 = U?
Problem 18.
Let n be a prime number and j, k = 1, \ldots, n. Show that the matrices
(U_1)_{jk} = \frac{1}{\sqrt{n}}\exp((2\pi i/n)(j + k - 1)^2)
(U_r)_{jk} = \frac{1}{\sqrt{n}}\exp((2\pi i/n)r(j + k - 1)^2)
(U_{n-1})_{jk} = \frac{1}{\sqrt{n}}\exp((2\pi i/n)(n-1)(j + k - 1)^2)
(U_n)_{jk} = \frac{1}{\sqrt{n}}\exp((2\pi i/n)jk)
(U_{n+1})_{jk} = \delta_{jk}
are unitary.
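Two of these families can be checked numerically for a small prime. A sketch (numpy assumed; n = 5, indices j, k = 1, ..., n as in the problem):

```python
import numpy as np

n = 5  # a prime; primality matters for the (j+k-1)^2 family
j, k = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")

# (U_1)_jk = exp((2*pi*i/n)(j+k-1)^2)/sqrt(n)
U1 = np.exp(2j * np.pi / n * (j + k - 1) ** 2) / np.sqrt(n)
# (U_n)_jk = exp((2*pi*i/n) j*k)/sqrt(n)  (a Fourier-type matrix)
Un = np.exp(2j * np.pi / n * j * k) / np.sqrt(n)

both_unitary = (np.allclose(U1.conj().T @ U1, np.eye(n))
                and np.allclose(Un.conj().T @ Un, np.eye(n)))
```

For U_1 the column inner products reduce to geometric sums of the form \sum_j \omega^{2j(k-k')}, which vanish for k \neq k' precisely because 2(k - k') is never divisible by an odd prime n.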
Problem 19.
In the Hilbert space \mathbb{C}^4 consider the Bell states
\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \quad \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}, \quad \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ -1 \\ 0 \end{pmatrix}, \quad \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 0 \\ -1 \end{pmatrix}.
(i) Let \omega := e^{2\pi i/4}. Apply the Fourier transform
U_F = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & \omega & \omega^2 & \omega^3 \\ 1 & \omega^2 & 1 & \omega^2 \\ 1 & \omega^3 & \omega^2 & \omega \end{pmatrix}
to the Bell states and study the entanglement of these states.
(ii) Apply the Haar wavelet transform
U_H = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ \sqrt{2} & -\sqrt{2} & 0 & 0 \\ 0 & 0 & \sqrt{2} & -\sqrt{2} \end{pmatrix}
to the Bell states and study the entanglement of these states.
(iii) Apply the Walsh-Hadamard transform
U_W = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}
to the Bell states and study the entanglement of these states. Extend to the Hilbert space \mathbb{C}^{2n} with the first Bell state given by \frac{1}{\sqrt{2}}(1 \ 0 \ \cdots \ 0 \ 1)^T.
Problem 20. Let U be an n x n unitary matrix. Then U + U^* is a hermitian matrix. Can any hermitian matrix be represented in this form? For example for the n x n zero matrix we have U = diag(i, i, \ldots, i).
Problem 21.
Let \{|a_0\rangle, |a_1\rangle, \ldots, |a_{n-1}\rangle\} be an orthonormal basis in the Hilbert space \mathbb{C}^n. The discrete Fourier transform is
|b_j\rangle = \frac{1}{\sqrt{n}}\sum_{k=0}^{n-1}\omega^{jk}|a_k\rangle, \quad j = 0, 1, \ldots, n-1
where \omega := \exp(2\pi i/n) is the primitive n-th root of unity.
(i) Apply the discrete Fourier transform to the standard basis in \mathbb{C}^4
\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \quad \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}.
(ii) Apply the discrete Fourier transform to the Bell basis in \mathbb{C}^4
\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}, \quad \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ 1 \\ 0 \end{pmatrix}, \quad \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ -1 \\ 0 \end{pmatrix}, \quad \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 0 \\ -1 \end{pmatrix}.
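The discrete Fourier transform in \mathbb{C}^4 can be applied to both bases numerically. A sketch (numpy assumed; F_jk = \omega^{jk}/2 with \omega = e^{2\pi i/4}):

```python
import numpy as np

n = 4
omega = np.exp(2j * np.pi / n)
F = np.array([[omega ** (jj * kk) for kk in range(n)]
              for jj in range(n)]) / np.sqrt(n)

standard = np.eye(n)                              # columns: standard basis
bell = np.array([[1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [0, 1, -1, 0],
                 [1, 0, 0, -1]], dtype=complex).T / np.sqrt(2)  # columns: Bell basis

F_unitary = np.allclose(F.conj().T @ F, np.eye(n))
# the image of an orthonormal basis under a unitary map is again orthonormal
image = F @ bell
still_orthonormal = np.allclose(image.conj().T @ image, np.eye(n))
```

Inspecting the columns of `F @ standard` and `F @ bell` gives the transformed basis vectors asked for in (i) and (ii).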
Problem 22.
Consider the 3 x 3 unitary matrices
U_1(\phi_1, \phi_2, \phi_3) = \begin{pmatrix} e^{i\phi_1} & 0 & 0 \\ 0 & e^{i\phi_2} & 0 \\ 0 & 0 & e^{i\phi_3} \end{pmatrix}, \quad U_2(\phi_4, \phi_5, \phi_6) = \begin{pmatrix} 0 & e^{i\phi_4} & 0 \\ 0 & 0 & e^{i\phi_5} \\ e^{i\phi_6} & 0 & 0 \end{pmatrix}.
What is the condition on \phi_1, \ldots, \phi_6 such that
[U_1(\phi_1, \phi_2, \phi_3), U_2(\phi_4, \phi_5, \phi_6)] = 0_3?
Problem 23. Let U be a unitary matrix with U = U^T. Show that U can be written as U = V^TV, where V is unitary. Note that (V^TV)^T = V^TV.
Problem 24.
Let n \geq 2. Consider the n x n matrix (counting from (0,0))
(a_{jk}) = \frac{1}{\sqrt{n}}\left(e^{2\pi i(j-k)/n}\right)
where j, k = 0, 1, \ldots, n-1. Is the matrix unitary? Study first the cases n = 2 and n = 3.
Problem 25.
Consider the rotation matrix
R(\alpha) = \begin{pmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{pmatrix}.
Find all 2 x 2 matrices M(\beta) such that R(\alpha)M(\beta)R^{-1}(\alpha) = M(\beta + \alpha).
Problem 26.
(i) Consider an n x n unitary matrix U = (u_{jk}) with j, k = 1, \ldots, n. Show that
S = (s_{jk}) = (u_{jk}\bar{u}_{jk})
is a doubly stochastic matrix.
(ii) Given a doubly stochastic n x n matrix S. Can we construct a unitary matrix U which generates the doubly stochastic matrix as described in (i)?
Problem 27.
Let \sigma_1, \sigma_2, \sigma_3 be the Pauli spin matrices and \alpha_j \in \mathbb{R} for j = 1, 2, 3.
(i) Find the 2 x 2 matrices
U_k := (I_2 - i\alpha_k\sigma_k)(I_2 + i\alpha_k\sigma_k)^{-1}
where k = 1, 2, 3. Are the matrices unitary?
(ii) Find the 4 x 4 matrices
V_k := (I_4 - i\alpha_k\sigma_k \otimes \sigma_k)(I_4 + i\alpha_k\sigma_k \otimes \sigma_k)^{-1}
where k = 1, 2, 3. Are the matrices unitary?
Problem 28. Let U be a unitary matrix. Calculate \exp(\epsilon(U + U^*)), where \epsilon \in \mathbb{R}. Note that
(U + U^*)^2 = U^2 + (U^*)^2 + 2I_n, \quad (U + U^*)^3 = U^3 + (U^*)^3 + 3U + 3U^*.

Problem 29.
Let U, V be n x n unitary matrices. Is the 2n x 2n matrix unitary?
Problem 30.
Let \phi \in \mathbb{R} and let U be a 2 x 2 unitary matrix. Show that the given 4 x 4 matrix V is unitary. Note that V can be written as a phase factor times a Kronecker product involving U.
Chapter 18
Groups, Lie Groups and Matrices

A group G is a set of objects \{a, b, c, \ldots\} (not necessarily countable) together with a binary operation which associates with any ordered pair of elements a, b in G a third element ab in G (closure). The binary operation (called group multiplication) is subject to the following requirements:
1) There exists an element e in G called the identity element such that eg = ge = g for all g \in G.
2) For every g \in G there exists an inverse element g^{-1} in G such that gg^{-1} = g^{-1}g = e.
3) Associative law. The identity (ab)c = a(bc) is satisfied for all a, b, c \in G.
If ab = ba for all a, b \in G we call the group commutative.
If G has a finite number of elements it has finite order n(G), where n(G) is the number of elements. Otherwise, G has infinite order.
Groups have matrix representations with invertible n x n matrices and matrix multiplication as group multiplication. The identity element is the identity matrix. The inverse element is the inverse matrix. An important subgroup is the set of unitary matrices, where U^* = U^{-1}.
Let (G_1, *) and (G_2, \circ) be groups. A function f : G_1 \to G_2 with
f(a * b) = f(a) \circ f(b) \quad \text{for all } a, b \in G_1
is called a homomorphism.
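The group axioms for a finite set of matrices can be checked by brute force. A minimal sketch (numpy assumed; associativity holds automatically for matrix multiplication, so only closure, identity and inverses need testing):

```python
import numpy as np
from itertools import product

def is_matrix_group(mats, tol=1e-10):
    """Check closure, identity and inverses for a finite list of matrices."""
    def find(M):
        return any(np.allclose(M, A, atol=tol) for A in mats)
    n = mats[0].shape[0]
    closed = all(find(A @ B) for A, B in product(mats, mats))
    has_id = find(np.eye(n))
    has_inv = all(find(np.linalg.inv(A)) for A in mats)
    return closed and has_id and has_inv

# example: the two-element group {I2, -I2}
ok = is_matrix_group([np.eye(2), -np.eye(2)])
```

The same helper can be pointed at any of the finite matrix sets appearing in the problems of this chapter.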
Problem 1.
Let \phi \in \mathbb{R}. Do the 2 x 2 matrices
I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad U(\phi) = \begin{pmatrix} 0 & e^{i\phi} \\ e^{-i\phi} & 0 \end{pmatrix}
form a group under matrix multiplication?
Solution 1.
Since U(\phi)U(\phi) = I_2 we have a commutative group with two elements. Note that U(\phi) = U^*(\phi) = U^{-1}(\phi).
Problem 2.
(i) Show that the 2 x 2 matrix
U = \begin{pmatrix} 0 & 1 \\ i & 0 \end{pmatrix}
is unitary.
(ii) Find the eigenvalues and normalized eigenvectors of U.
(iii) Find the group generated by U.
(iv) Find the group generated by U \otimes U.
Solution 2.
(i) We have
UU^* = \begin{pmatrix} 0 & 1 \\ i & 0 \end{pmatrix}\begin{pmatrix} 0 & -i \\ 1 & 0 \end{pmatrix} = I_2.
(ii) The eigenvalues of U are e^{i\pi/4} and e^{i5\pi/4} = -e^{i\pi/4} with the corresponding normalized eigenvectors
\mathbf{v}_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ e^{i\pi/4} \end{pmatrix}, \quad \mathbf{v}_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ e^{i5\pi/4} \end{pmatrix}.
(iii) We obtain U^8 = I_2 with
U^2 = iI_2, \quad U^4 = -I_2, \quad U^6 = -iI_2.
Hence the (commutative) group consists of 8 elements.
(iv) We have
(U \otimes U)(U \otimes U) = U^2 \otimes U^2 = iI_2 \otimes iI_2 = -I_4.
Thus (U \otimes U)^4 = I_4. Hence the group consists of the four elements U \otimes U, (U \otimes U)^2, (U \otimes U)^3, (U \otimes U)^4 = I_4.
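The cyclic group of order 8 generated by U can be produced explicitly by repeated multiplication. A sketch (numpy assumed; U as reconstructed in this solution):

```python
import numpy as np

U = np.array([[0, 1],
              [1j, 0]])

# multiply until the powers of U return to the identity
elements = [np.eye(2, dtype=complex)]
P = U.copy()
while not np.allclose(P, np.eye(2)):
    elements.append(P.copy())
    P = P @ U

order = len(elements)   # the group generated by U
```

The loop closes after eight steps, confirming U^2 = iI_2, U^4 = -I_2 and U^8 = I_2.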
Problem 3.
Find the group generated by the 3 x 3 matrices
G_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \quad G_2 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.
Set G_1^0 = G_2^0 = I_3.
Solution 3.
Note that G_1^2 = I_3 and G_2^3 = I_3. We find the tetrahedral group of order 12.
Problem 4.
(i) Consider the permutation group S_3 which consists of 6 elements. The matrix representations of the permutations (12) and (13) are
(12) \mapsto \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad (13) \mapsto \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.
Can the other permutations of S_3 be constructed from matrix products of these two matrices?
(ii) Consider the two 3 x 3 permutation matrices
C_1 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}.
Can the remaining four 3 x 3 permutation matrices be generated from C_1 and A using matrix multiplication?
Solution 4. (i) Yes. All six permutation matrices can be generated from the two matrices.
(ii) Yes. We have A^2 = I_3 and
C_1^2 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad AC_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad AC_1^2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
Note that C_1, C_1^2, C_1^3 = I_3 forms a subgroup under matrix multiplication.
Problem 5.
Find the group generated by the two given 2 x 2 matrices \sigma_3 and R under matrix multiplication. Is the group commutative?
Solution 5. With \sigma_3^2 = I_2 we find the group elements \sigma_3, R, R^2, R\sigma_3, \sigma_3R, I_2. The group is not commutative. Note that R\sigma_3 \neq \sigma_3R.
Problem 6.
(i) Show that the set of matrices
E = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad C_3 = \begin{pmatrix} -1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \end{pmatrix}, \quad C_3^{-1} = \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ -\sqrt{3}/2 & -1/2 \end{pmatrix},
\sigma_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} -1/2 & -\sqrt{3}/2 \\ -\sqrt{3}/2 & 1/2 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix}
form a group G under matrix multiplication, where C_3^{-1} is the inverse matrix of C_3.
(ii) Find the determinant of all these matrices. Does the set of numbers
\{\det(E), \det(C_3), \det(C_3^{-1}), \det(\sigma_1), \det(\sigma_2), \det(\sigma_3)\}
form a group under multiplication?
(iii) Find two proper subgroups.
(iv) Find the right coset decomposition. Find the left coset decomposition. We obtain the right coset decomposition as follows: Let G be a finite group of order g having a proper subgroup H of order h. Take some element g_2 of G which does not belong to the subgroup H, and make a right coset Hg_2. If H and Hg_2 do not exhaust the group G, take some element g_3 of G which is not an element of H and Hg_2, and make a right coset Hg_3. Continue making right cosets Hg_j in this way. If G is a finite group, all elements of G will be exhausted in a finite number of steps and we obtain the right coset decomposition.
Solution 6.
(i) Matrix multiplication reveals that the set is closed under matrix multiplication. The neutral element is the 2 x 2 identity matrix. Each element has exactly one inverse. The associative law holds for n x n matrices. The group table is given by

            E         C_3       C_3^{-1}  \sigma_1  \sigma_2  \sigma_3
E           E         C_3       C_3^{-1}  \sigma_1  \sigma_2  \sigma_3
C_3         C_3       C_3^{-1}  E         \sigma_3  \sigma_1  \sigma_2
C_3^{-1}    C_3^{-1}  E         C_3       \sigma_2  \sigma_3  \sigma_1
\sigma_1    \sigma_1  \sigma_2  \sigma_3  E         C_3       C_3^{-1}
\sigma_2    \sigma_2  \sigma_3  \sigma_1  C_3^{-1}  E         C_3
\sigma_3    \sigma_3  \sigma_1  \sigma_2  C_3       C_3^{-1}  E
(ii) For the determinants we find
\det(E) = \det(C_3) = \det(C_3^{-1}) = 1, \quad \det(\sigma_1) = \det(\sigma_2) = \det(\sigma_3) = -1.
The set \{+1, -1\} forms a commutative group under multiplication.
(iii) From the group table we see that \{E, \sigma_1\} is a proper subgroup. Another proper subgroup is \{E, C_3, C_3^{-1}\}.
(iv) We start from the proper subgroup H = \{E, \sigma_1\}. If we multiply the elements of H with \sigma_2 on the right we obtain the set
H\sigma_2 = \{\sigma_2, \sigma_1\sigma_2\} = \{\sigma_2, C_3\}.
Similarly, we have
H\sigma_3 = \{\sigma_3, \sigma_1\sigma_3\} = \{\sigma_3, C_3^{-1}\}.
We see that the six elements of the group G are just exhausted by the three right cosets so that
G = H + H\sigma_2 + H\sigma_3, \quad H = \{E, \sigma_1\}.
The left coset decomposition is given by
G = H + \sigma_2H + \sigma_3H, \quad H = \{E, \sigma_1\}
with
\sigma_2H = \{\sigma_2, C_3^{-1}\}, \quad \sigma_3H = \{\sigma_3, C_3\}.
Problem 7.
We know that the set of 2 x 2 matrices
E = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad C_3 = \begin{pmatrix} -1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \end{pmatrix}, \quad C_3^{-1} = \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ -\sqrt{3}/2 & -1/2 \end{pmatrix},
\sigma_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} -1/2 & -\sqrt{3}/2 \\ -\sqrt{3}/2 & 1/2 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix}
forms a group G under matrix multiplication, where C_3^{-1} is the inverse matrix of C_3. The set of matrices (3 x 3 permutation matrices)
I = P_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad P_1 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, \quad P_2 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix},
P_3 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad P_4 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad P_5 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}
also forms a group under matrix multiplication. Are the two groups isomorphic? A homomorphism which is 1-1 and onto is an isomorphism.
Solution 7.
Using the map
E \to P_0, \quad C_3 \to P_1, \quad C_3^{-1} \to P_2, \quad \sigma_1 \to P_3, \quad \sigma_2 \to P_4, \quad \sigma_3 \to P_5
we see that the two groups are isomorphic.
Problem 8.
(i) Show that the given 4 x 4 matrices A = I_4 and B, C, D (diagonal or counter-diagonal matrices with entries \pm 1) form a group under matrix multiplication.
(ii) Show that the given 4 x 4 matrices X = I_4 and Y, V, W (matrices with entries \pm 1) form a group under matrix multiplication.
(iii) Show that the two groups (so-called Vierergruppe) are isomorphic.
Solution 8.
(i) The group table is

      A  B  C  D
A     A  B  C  D
B     B  A  D  C
C     C  D  A  B
D     D  C  B  A

The neutral element is A. Each element is its own inverse: A^{-1} = A, B^{-1} = B, C^{-1} = C, D^{-1} = D.
(ii) The group table is

      X  Y  V  W
X     X  Y  V  W
Y     Y  X  W  V
V     V  W  X  Y
W     W  V  Y  X

The neutral element is X. Each element is its own inverse: X^{-1} = X, Y^{-1} = Y, V^{-1} = V, W^{-1} = W.
(iii) The map A \to X, B \to Y, C \to V, D \to W provides the isomorphism.
Problem 9.
(i) Let x \in \mathbb{R}. Show that the 2 x 2 matrices
A(x) = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}
form a group under matrix multiplication.
(ii) Is the group commutative?
(iii) Find a group that is isomorphic to this group.
Solution 9.
(i) We have (closure)
A(x)A(y) = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & y \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & x+y \\ 0 & 1 \end{pmatrix} = A(x+y).
The neutral element is the identity matrix I_2. The inverse element is given by
A(-x) = \begin{pmatrix} 1 & -x \\ 0 & 1 \end{pmatrix}.
Matrix multiplication is associative. Thus we have a group.
(ii) Since A(x)A(y) = A(y)A(x) the group is commutative.
(iii) The group is isomorphic to the group (\mathbb{R}, +).
Problem 10.
Let a, b, c, d \in \mathbb{Z}. Show that the 2 x 2 matrices
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
with ad - bc = 1 form a group under matrix multiplication.
Solution 10.
We have
\begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix}\begin{pmatrix} a_2 & b_2 \\ c_2 & d_2 \end{pmatrix} = \begin{pmatrix} a_1a_2 + b_1c_2 & a_1b_2 + b_1d_2 \\ c_1a_2 + d_1c_2 & c_1b_2 + d_1d_2 \end{pmatrix}.
Since a_1, b_1, c_1, d_1 \in \mathbb{Z} and a_2, b_2, c_2, d_2 \in \mathbb{Z} the entries of the matrix on the right-hand side are again elements in \mathbb{Z}. Since \det(A_1A_2) = \det(A_1)\det(A_2) and \det(A_1) = 1, \det(A_2) = 1 we have \det(A_1A_2) = 1. The inverse element of A is given by
A^{-1} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
Thus the entries of A^{-1} are again elements in the set of integers \mathbb{Z}.
Problem 11.
(i) Let c \in \mathbb{R} and c \neq 0. Do the 4 x 4 matrices
A(c) = \begin{pmatrix} c & c & c & c \\ c & c & c & c \\ c & c & c & c \\ c & c & c & c \end{pmatrix}
form a group under matrix multiplication?
(ii) Find the eigenvalues of A(c).
Solution 11.
(i) The neutral element and the inverse element of A(c) are, respectively, the matrix with all sixteen entries equal to 1/4 and the matrix with all sixteen entries equal to 1/(16c).
(ii) The rank of the matrix is 1 and tr(A(c)) = 4c. Thus the eigenvalues are 0 (3 times) and 4c.
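The rank-and-trace argument for the spectrum can be confirmed directly. A sketch (numpy assumed; c is an arbitrary nonzero value):

```python
import numpy as np

c = 2.5
A = c * np.ones((4, 4))   # the rank-1 matrix A(c)

# A is real symmetric, so eigvalsh applies and returns sorted eigenvalues
evals = np.linalg.eigvalsh(A)   # ascending: three zeros, then 4c
```

The eigenvector for the eigenvalue 4c is the all-ones vector; the other three eigenvalues vanish because the matrix has rank 1.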
Problem 12.
The Heisenberg group is the set of upper triangular 3 x 3 matrices of the form
H = \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}
where a, b, c can be taken from some (arbitrary) commutative ring.
(i) Find the inverse of H.
(ii) Given two elements x, y of a group G, we define the commutator of x and y, denoted by [x, y], to be the element x^{-1}y^{-1}xy. If a, b, c are integers (in the ring \mathbb{Z} of the integers) we obtain the discrete Heisenberg group H_3. It has two generators
x = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad y = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.
Find z = xyx^{-1}y^{-1}. Show that xz = zx and yz = zy, i.e. z is the generator of the center of H_3.
(iii) The derived subgroup (or commutator subgroup) of a group G is the subgroup [G, G] generated by the set of commutators of every pair of elements of G. Find [G, G] for the Heisenberg group.
(iv) Let
A = \begin{pmatrix} 0 & a & c \\ 0 & 0 & b \\ 0 & 0 & 0 \end{pmatrix}
and a, b, c \in \mathbb{R}. Find \exp(A).
(v) The Heisenberg group is a simply connected Lie group whose Lie algebra consists of the matrices
L = \begin{pmatrix} 0 & a & c \\ 0 & 0 & b \\ 0 & 0 & 0 \end{pmatrix}.
Find the commutators [L, L'] and [[L, L'], L'], where [L, L'] := LL' - L'L.
Solution 12. (i) An inverse exists since \det(H) = 1. Multiplying two upper triangular matrices yields again an upper triangular matrix. From the condition
\begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & e & g \\ 0 & 1 & f \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
we obtain e + a = 0, g + af + c = 0, f + b = 0. Thus e = -a, f = -b, g = ab - c. Consequently
H^{-1} = \begin{pmatrix} 1 & -a & ab-c \\ 0 & 1 & -b \\ 0 & 0 & 1 \end{pmatrix}.
(ii) Since
x^{-1} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad y^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}
we obtain
z = xyx^{-1}y^{-1} = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
and xz = zx, yz = zy. Thus z is the generator of the center of H_3.
(iii) We have xyx^{-1}y^{-1} = z, xzx^{-1}z^{-1} = I, yzy^{-1}z^{-1} = I, where I is the neutral element (3 x 3 identity matrix) of the group. Thus the derived subgroup has only one generator z.
(iv) We use the expansion
\exp(A) = \sum_{k=0}^{\infty} \frac{A^k}{k!}.
Since
A^2 = \begin{pmatrix} 0 & 0 & ab \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad A^3 = 0_3
(the matrix A is nilpotent) we obtain
\exp(A) = \begin{pmatrix} 1 & a & c + ab/2 \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}.
(v) We obtain
[L, L'] = \begin{pmatrix} 0 & 0 & ab' - a'b \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
Using this result we find the zero matrix [[L, L'], L'] = 0_3.
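Since A is nilpotent with A^3 = 0, the exponential series terminates after the quadratic term, which makes the closed form of part (iv) easy to verify. A sketch (numpy assumed; a, b, c are arbitrary sample values):

```python
import numpy as np

a, b, c = 2.0, 3.0, 5.0
A = np.array([[0, a, c],
              [0, 0, b],
              [0, 0, 0]])

# exp(A) = I + A + A^2/2 because A^3 = 0
expA = np.eye(3) + A + A @ A / 2
expected = np.array([[1, a, c + a * b / 2],
                     [0, 1, b],
                     [0, 0, 1]])
matches = np.allclose(expA, expected)
```

The truncated series reproduces the closed-form exponential exactly, not merely approximately, precisely because of nilpotency.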
Problem 13.
Define
M : \mathbb{R}^3 \to V := \{\mathbf{a}\cdot\sigma : \mathbf{a} \in \mathbb{R}^3\} \subset \{2 \times 2 \text{ complex matrices}\}
by
M(\mathbf{a}) = \mathbf{a}\cdot\sigma = a_1\sigma_1 + a_2\sigma_2 + a_3\sigma_3.
This is a linear bijection between \mathbb{R}^3 and V. Each U \in SU(2) determines a linear map S(U) on \mathbb{R}^3 by
M(S(U)\mathbf{a}) = U^{-1}M(\mathbf{a})U.
The right-hand side is clearly linear in \mathbf{a}. Show that U^{-1}M(\mathbf{a})U is in V, that is, of the form M(\mathbf{b}).
Solution 13. Let U = x_0I_2 + i\mathbf{x}\cdot\sigma with (x_0, \mathbf{x}) \in \mathbb{R}^4 obeying x_0^2 + \|\mathbf{x}\|^2 = 1, and compute U^{-1}M(\mathbf{a})U explicitly, where U^{-1} = U^*. We have
U^{-1}M(\mathbf{a})U = (x_0I_2 - i\mathbf{x}\cdot\sigma)(\mathbf{a}\cdot\sigma)(x_0I_2 + i\mathbf{x}\cdot\sigma)
= (x_0I_2 - i\mathbf{x}\cdot\sigma)(x_0\,\mathbf{a}\cdot\sigma + i(\mathbf{a}\cdot\mathbf{x})I_2 - (\mathbf{a}\times\mathbf{x})\cdot\sigma)
= x_0^2\,\mathbf{a}\cdot\sigma - 2x_0(\mathbf{a}\times\mathbf{x})\cdot\sigma + (\mathbf{a}\cdot\mathbf{x})(\mathbf{x}\cdot\sigma) - (\mathbf{x}\times(\mathbf{a}\times\mathbf{x}))\cdot\sigma
since \mathbf{x} is perpendicular to \mathbf{a}\times\mathbf{x}, where \times denotes the vector product. Using the identity
\mathbf{c}\times(\mathbf{a}\times\mathbf{b}) = (\mathbf{b}\cdot\mathbf{c})\mathbf{a} - (\mathbf{a}\cdot\mathbf{c})\mathbf{b}
we obtain
U^{-1}M(\mathbf{a})U = (x_0^2 - \|\mathbf{x}\|^2)\,\mathbf{a}\cdot\sigma - 2x_0(\mathbf{a}\times\mathbf{x})\cdot\sigma + 2(\mathbf{a}\cdot\mathbf{x})(\mathbf{x}\cdot\sigma).
This shows not only that U^{-1}M(\mathbf{a})U \in V, but also that, for U = x_0I_2 + i\mathbf{x}\cdot\sigma, we have
S(U)\mathbf{a} = (x_0^2 - \|\mathbf{x}\|^2)\mathbf{a} + 2x_0\,\mathbf{x}\times\mathbf{a} + 2(\mathbf{a}\cdot\mathbf{x})\mathbf{x}.
Problem 14.
Show that the nonnormal 2 x 2 matrices
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}
are conjugate in SL(2, \mathbb{C}) but not in SL(2, \mathbb{R}) (the real matrices in SL(2, \mathbb{C})).
Solution 14.
Let
S = \begin{pmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{pmatrix}
with \det(S) = 1, i.e. S \in SL(2, \mathbb{C}). Now
S\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}S^{-1} = \begin{pmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} s_{22} & -s_{12} \\ -s_{21} & s_{11} \end{pmatrix} = \begin{pmatrix} 1 - s_{11}s_{21} & s_{11}^2 \\ -s_{21}^2 & 1 + s_{11}s_{21} \end{pmatrix}.
Setting this equal to the second matrix we obtain the four conditions 1 - s_{11}s_{21} = 1, s_{11}^2 = -1, -s_{21}^2 = 0, 1 + s_{11}s_{21} = 1. It follows that s_{21} = 0, s_{11} = i. The entry s_{22} follows from \det(S) = 1. We have s_{11}s_{22} = 1 and therefore s_{22} = -i. The entry s_{12} is arbitrary. Thus
S = \begin{pmatrix} i & s_{12} \\ 0 & -i \end{pmatrix}.
Since s_{11} = \pm i is not real, the two matrices are not conjugate in SL(2, \mathbb{R}).
°=C -O^sM o i T)P r o b le m 15. (i) Let G be a finite set of real n x n matrices { Aj }, I < j < r,
which forms a group under matrix multiplication. Suppose that
r
r
tr( X ^ ) =
J=I
= °
J=I
where tr denotes the trace. Show that
r
E ^
j=
1
= °»-
408 Problems and Solutions
(ii) Let uj := exp(27rz/3). Show that the 2 x 2 matrices
B'=(o ?)’ ft=(o £)• a =(“o I )
B<=0 o)' %=( 3 o)’ B*=(! O
2)
form a group under matrix multiplication. Show that
6
£ t r ( ^ ) = °.
J= I
Solution 15.
(i) Let
S := \sum_{j=1}^{r} A_j.
For any j, the sequence of matrices A_jA_1, A_jA_2, \ldots, A_jA_r is a permutation of the elements of G, and summing yields A_jS = S. Thus
\sum_{j=1}^{r} A_jS = S^2 = \sum_{j=1}^{r} S = rS \Rightarrow S^2 = rS.
Therefore the minimal polynomial of S divides x^2 - rx, and every eigenvalue of S is either 0 or r since x^2 - rx = x(x - r). However the eigenvalues counted with multiplicity sum to tr(S) = 0. Thus they are all 0. Now every eigenvalue of S - rI_n is -r \neq 0, so the matrix S - rI_n is invertible. Therefore from S(S - rI_n) = 0_n we obtain S = 0_n.
(ii) The group table is

      B_1  B_2  B_3  B_4  B_5  B_6
B_1   B_1  B_2  B_3  B_4  B_5  B_6
B_2   B_2  B_3  B_1  B_6  B_4  B_5
B_3   B_3  B_1  B_2  B_5  B_6  B_4
B_4   B_4  B_5  B_6  B_1  B_2  B_3
B_5   B_5  B_6  B_4  B_3  B_1  B_2
B_6   B_6  B_4  B_5  B_2  B_3  B_1

We have
\sum_{j=1}^{6} \mathrm{tr}(B_j) = 2 + 2(\omega + \omega^2) = 0
where we used that \omega + \omega^2 = -1. The matrices B_2, B_3, B_5, B_6 contain complex numbers as entries. This is a special case of
\sum_{j=1}^{6} B_j = 0_2.
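The vanishing of both the trace sum and the matrix sum can be checked numerically. A sketch (numpy assumed; the six matrices are taken as reconstructed above, with omega = exp(2*pi*i/3)):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
B = [np.diag([1, 1]).astype(complex),
     np.diag([w, w ** 2]),
     np.diag([w ** 2, w]),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, w ** 2], [w, 0]]),
     np.array([[0, w], [w ** 2, 0]])]

trace_sum = sum(np.trace(M) for M in B)   # should be 0 since 1 + w + w^2 = 0
matrix_sum = sum(B)                        # should be the 2x2 zero matrix
```

Both sums vanish (up to floating-point rounding), illustrating the general result of part (i).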
Problem 16.
Consider the unitary 2 x 2 matrix
J := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
(i) Find all 2 x 2 matrices A over \mathbb{R} such that A^TJA = J.
(ii) Do these 2 x 2 matrices form a group under matrix multiplication?
Solution 16.
(i) From the condition
A^T\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
we obtain \det(A) = 1.
(ii) All n x n matrices A with \det(A) = 1 form a subgroup of the general linear group GL(n, \mathbb{R}). Note that \det(AB) = \det(A)\det(B). Thus if \det(A) = 1 and \det(B) = 1 we have \det(AB) = 1.
Problem 17.
Let J be the 2n x 2n matrix
J := \begin{pmatrix} 0_n & I_n \\ -I_n & 0_n \end{pmatrix}.
Show that the 2n x 2n matrices A satisfying A^TJA = J form a group under matrix multiplication. This group is called the symplectic group Sp(2n).
Solution 17.
Let A^TJA = J, B^TJB = J. Then
(AB)^TJ(AB) = B^T(A^TJA)B = B^TJB = J.
Obviously, I_{2n}^TJI_{2n} = J and \det(J) \neq 0. From A^TJA = J it follows that
\det(A^T)\det(J)\det(A) = \det(J).
Thus \det(A^T)\det(A) = 1. Since \det(A^T) = \det(A) we have (\det(A))^2 = 1. From A^TJA = J we obtain (A^TJA)^{-1} = J^{-1}. Since J^T = J^{-1} we arrive at
A^{-1}J^{-1}(A^T)^{-1} = A^{-1}J^T(A^{-1})^T = J^T.
Applying the transpose we find A^{-1}J(A^T)^{-1} = J. Let B = (A^{-1})^T. Then B^T = A^{-1} and B^TJB = J.
Problem 18. The Pauli spin matrix \sigma_1 is not only hermitian, unitary and its own inverse, but also a permutation matrix. Find all 2 x 2 matrices A such that \sigma_1^{-1}A\sigma_1 = A.
Solution 18.
From
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
we find the solution a_{22} = a_{11}, a_{21} = a_{12}.
For the vector space of the n x n matrices over M we can
introduce a scalar product via
(Ai B) : = t i { A B T).
Consider the Lie group SL( 2,M) of the 2 x 2 matrices with determinant I. Find
G SL( 2, R ) such that
X 1Y
( X iY ) = O
where neither X nor Y can be 2 x 2 identity matrix.
S olu tion 19.
The 2 x 2 matrices
X =
satisfy the equations, i.e. d et(X ) = det(T ) = I and t r (X F T) = 0.
Problem 20.
The numbers 1, i, -1, -i form a group under multiplication. Do the given vectors \mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4 (each with entries from \{1, i, -1, -i\} and normalization factor 1/2) form an orthonormal basis in \mathbb{C}^4?
Solution 20. No. The equation a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + a_3\mathbf{v}_3 + a_4\mathbf{v}_4 = \mathbf{0} admits the solution a_1 - a_3 = 0, a_2 - a_4 = 0 with not all coefficients zero.
Problem 21.
(i) Consider the group G of all 4 x 4 permutation matrices. Show that
\Pi := \frac{1}{|G|}\sum_{g \in G} g
is a projection matrix. Here |G| denotes the number of elements in the group.
(ii) Consider the subgroup given by the matrices
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.
Show that
\Pi := \frac{1}{|G|}\sum_{g \in G} g
is a projection matrix.
Solution 21.
(i) We have |G| = 24. Thus
\Pi = \frac{1}{24}\sum_{g \in G} g = \frac{1}{4}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{pmatrix}.
We find \Pi = \Pi^* (i.e. \Pi is hermitian) and \Pi^2 = \Pi. Hence \Pi is a projection matrix.
(ii) We have |G| = 2. Thus
\Pi = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{pmatrix}.
This symmetric matrix over \mathbb{R} is a projection matrix.
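Averaging over the full permutation group can be done explicitly. A sketch (numpy assumed), generating all 24 permutation matrices and checking the projection property:

```python
import numpy as np
from itertools import permutations

mats = []
for p in permutations(range(4)):
    P = np.zeros((4, 4))
    P[np.arange(4), list(p)] = 1   # permutation matrix with P[i, p(i)] = 1
    mats.append(P)

Pi = sum(mats) / len(mats)         # group average; len(mats) == 24

is_projection = np.allclose(Pi @ Pi, Pi) and np.allclose(Pi, Pi.T)
quarter_ones = np.allclose(Pi, np.ones((4, 4)) / 4)
```

Each entry of the average is 3!/4! = 1/4, reproducing the matrix of part (i); the group-average construction yields a projection for any finite matrix group.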
Problem 22.
Let A be the 2 x 2 matrix
A = \begin{pmatrix} r_{11}e^{i\phi_{11}} & r_{12}e^{i\phi_{12}} \\ r_{21}e^{i\phi_{21}} & r_{22}e^{i\phi_{22}} \end{pmatrix}
where r_{jk} \in \mathbb{R}, r_{jk} \geq 0 for j, k = 1, 2 and r_{12} = r_{21}. We also have \phi_{jk} \in \mathbb{R} for j, k = 1, 2 and impose \phi_{12} = \phi_{21}. What are the conditions on r_{jk} and \phi_{jk} such that I_2 + iA is a unitary matrix?
Solution 22.
Let U = I_2 + iA. Then U^* = I_2 - iA^*. Thus from UU^* = I_2 we find
i(A - A^*) + AA^* = 0_2.
A solution is r_{12} = r_{21} = 0, r_{11} = r_{22} = 1 with \phi_{11} = \phi_{22} = \pi/6 and \phi_{12} arbitrary.
Problem 23.
Show that the four 2 x 2 matrices
A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad C = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}
form a group under matrix multiplication. Is the group abelian?
Solution 23.
The group is abelian. We have
AA = A, \quad AB = BA = B, \quad AC = CA = C, \quad AD = DA = D,
BB = A, \quad BC = CB = D, \quad BD = DB = C, \quad CD = DC = B.
Problem 24.
(i) Consider the symmetric 2 x 2 matrix
A = \begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{11} \end{pmatrix}.
Find all invertible 2 x 2 matrices S over \mathbb{R} such that SAS^{-1} = A. Obviously the identity matrix I_2 would be such a matrix.
(ii) Do the matrices S form a group under matrix multiplication? Prove or disprove.
(iii) Use the result from (i) to calculate (S \otimes S)(A \otimes A)(S \otimes S)^{-1}. Discuss.
Solution 24.
(i) With
S = \begin{pmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{pmatrix}
we obtain \det(S)\,SAS^{-1} by direct calculation. Since a_{11} and a_{12} are arbitrary we find the conditions
s_{12}s_{22} - s_{11}s_{21} = 0, \quad s_{11}^2 - s_{12}^2 = \det(S), \quad s_{22}^2 - s_{21}^2 = \det(S).
A solution is s_{11} = s_{22} = 0, s_{12} = s_{21} = 1.
(ii) Yes. The neutral element is the identity matrix. Let S_1AS_1^{-1} = A, S_2AS_2^{-1} = A. Then
(S_1S_2)A(S_1S_2)^{-1} = (S_1S_2)A(S_2^{-1}S_1^{-1}) = S_1AS_1^{-1} = A.
(iii) We find
(S \otimes S)(A \otimes A)(S \otimes S)^{-1} = (SAS^{-1}) \otimes (SAS^{-1}) = A \otimes A.
Problem 25. Find all 2 x 2 matrices S over \mathbb{C} with determinant 1 (i.e. they are elements of SL(2, \mathbb{C})) such that
S^T\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}S = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
Obviously, the 2 x 2 identity matrix is such an element.
Solution 25.
Let
S = \begin{pmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{pmatrix}
with \det(S) = s_{11}s_{22} - s_{12}s_{21} = 1. Then the condition yields
\begin{pmatrix} s_{11}^2 - s_{21}^2 & s_{11}s_{12} - s_{21}s_{22} \\ s_{11}s_{12} - s_{21}s_{22} & s_{12}^2 - s_{22}^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
Thus we have to solve the equations
s_{11}s_{12} - s_{21}s_{22} = 0, \quad s_{11}^2 - s_{21}^2 = 1, \quad -s_{12}^2 + s_{22}^2 = 1
together with s_{11}s_{22} - s_{12}s_{21} = 1. We have to do a case study.
Case 1. s_{11} = 0. Then s_{21}^2 = -1, s_{22} = 0 and s_{12}^2 = -1.
Case 2. s_{12} = 0. Then s_{22}^2 = 1, s_{21} = 0 and s_{11}^2 = 1.
Case 3. s_{21} = 0. Then s_{11}^2 = 1, s_{12} = 0 and s_{22}^2 = 1.
Case 4. s_{11}s_{12} \neq 0. Then the cross condition forces s_{12} = s_{21} and s_{11} = s_{22} with s_{11}^2 - s_{21}^2 = 1.
Problem 26.
(i) Show that the two given 2 x 2 matrices A and B are similar. This means find an invertible 2 x 2 matrix S, i.e. S \in GL(2, \mathbb{R}), such that SAS^{-1} = B.
(ii) Is there an invertible 2 x 2 matrix S such that
(S \otimes S)(A \otimes A)(S^{-1} \otimes S^{-1}) = B \otimes B?
Solution 26.
(i) We start from SA = BS. Thus we obtain the four conditions
(s_{11} + s_{12})s_{11} = 0, \quad -(s_{11} + s_{12})s_{21} = \det(S), \quad (s_{21} + s_{22})s_{11} = 0, \quad -(s_{21} + s_{22})s_{21} = \det(S).
Case: s_{11} \neq 0. Then s_{12} = -s_{11} and s_{22} = -s_{21} from the first and third equations. This contradicts the last two equations (\det(S) \neq 0).
Case: s_{11} = 0. Then we arrive at
-s_{12}s_{21} = \det(S), \quad -(s_{21} + s_{22})s_{21} = -s_{12}s_{21}
with s_{12}, s_{21} \neq 0. Thus s_{21}(-s_{21} - s_{22} + s_{12}) = 0 or s_{22} = s_{12} - s_{21}. It follows that \det(S) = -s_{12}s_{21}. Therefore
S = \begin{pmatrix} 0 & s_{12} \\ s_{21} & s_{12} - s_{21} \end{pmatrix}, \quad S^{-1} = \frac{1}{s_{12}s_{21}}\begin{pmatrix} s_{21} - s_{12} & s_{12} \\ s_{21} & 0 \end{pmatrix}.
The simplest special case would be s_{12} = s_{21} = 1.
(ii) We have
(S \otimes S)(A \otimes A)(S^{-1} \otimes S^{-1}) = (SAS^{-1}) \otimes (SAS^{-1}).
Thus the S constructed in (i) can be utilized.
Problem 27. Find all M ∈ GL(2,C) such that
M^{-1} σ2 M = σ2,   σ2 = ( 0  -i )
                         ( i   0 ).
Thus we consider the invariance of the Pauli spin matrix σ2. Show that these matrices form a group under matrix multiplication.

Solution 27. Let M1, M2 ∈ GL(2,C) with M1^{-1} σ2 M1 = σ2 and M2^{-1} σ2 M2 = σ2. Then
(M1 M2)^{-1} σ2 (M1 M2) = M2^{-1} (M1^{-1} σ2 M1) M2 = M2^{-1} σ2 M2 = σ2.
If M^{-1} σ2 M = σ2, then M M^{-1} σ2 M M^{-1} = M σ2 M^{-1} and thus σ2 = M σ2 M^{-1}, i.e. the inverse also leaves σ2 invariant. The neutral element is the 2 × 2 identity matrix.
Problem 28. Consider the cyclic 3 × 3 matrix
C = ( 0  1  0 )
    ( 0  0  1 )
    ( 1  0  0 ).
(i) Show that the matrices C, C², C³ form a group under matrix multiplication. Is the group commutative?
(ii) Find the eigenvalues of C and show that they form a group under multiplication.
(iii) Find the normalized eigenvectors of C. Show that they form an orthonormal basis in C³.
(iv) Use the eigenvalues and normalized eigenvectors to write down the spectral decomposition of C.
Solution 28. (i) We have
C² = ( 0  0  1 )        C³ = I3 = ( 1  0  0 )
     ( 1  0  0 ),                  ( 0  1  0 )
     ( 0  1  0 )                   ( 0  0  1 ).
The inverse of C is C². The group is commutative.
(ii) The eigenvalues are 1, e^{i2π/3}, e^{i4π/3}. Obviously they form a group under multiplication. Note that e^{i4π/3} = e^{−i2π/3}.
(iii) The normalized eigenvectors for the eigenvalues λ1 = 1, λ2 = e^{i2π/3}, λ3 = e^{i4π/3} are
v1 = (1/√3)(1, 1, 1)^T,   v2 = (1/√3)(1, e^{i2π/3}, e^{i4π/3})^T,   v3 = (1/√3)(1, e^{i4π/3}, e^{i2π/3})^T.
These vectors form an orthonormal basis in C³.
(iv) Hence the spectral decomposition is C = λ1 v1 v1* + λ2 v2 v2* + λ3 v3 v3*.
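The eigenvalue and spectral-decomposition claims are easy to confirm numerically. A short sketch using NumPy (NumPy is assumed here; it is not part of the book's text):

```python
import numpy as np

# the cyclic 3x3 matrix C from the problem
C = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)

w = np.exp(2j * np.pi / 3)           # primitive cube root of unity
lams = [1, w, w**2]                  # eigenvalues of C
vecs = [np.array([1, lam, lam**2]) / np.sqrt(3) for lam in lams]

# C v = lam v for each eigenpair
for lam, v in zip(lams, vecs):
    assert np.allclose(C @ v, lam * v)

# spectral decomposition C = sum_j lam_j v_j v_j^*
S = sum(lam * np.outer(v, v.conj()) for lam, v in zip(lams, vecs))
assert np.allclose(S, C)
```

The same loop also shows that the eigenvectors are pairwise orthonormal.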
Problem 29. (i) Find the group generated by the 2 × 2 matrix
A = ( 0  -1 )
    ( 1   0 )
under matrix multiplication. Is the group commutative? Find the determinant of all these matrices. Do these (real) numbers form a group under multiplication?
(ii) Find the group generated by the 4 × 4 matrix A ⊗ A under matrix multiplication.

Solution 29. (i) We have A² = B = −I2. Now B² = I2, BA = A^T and A A^T = A^T A = I2. Thus the group consists of the four elements A, A^T, B = −I2 and I2. Of course I2 is the neutral element. The group is commutative. The determinant of all these matrices is +1.
(ii) We have
(A ⊗ A)(A ⊗ A) = A² ⊗ A² = B ⊗ B = (−I2) ⊗ (−I2) = I4.
The group consists of the two elements I4 = I2 ⊗ I2 and A ⊗ A.
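The four-element group in (i) can be generated mechanically; a small NumPy check (NumPy assumed):

```python
import numpy as np

A = np.array([[0, -1], [1, 0]])

# collect the successive powers of A under matrix multiplication
elements = []
M = np.eye(2, dtype=int)
for _ in range(4):
    M = M @ A
    elements.append(M.copy())

# A has order 4: A^2 = -I, A^3 = A^T, A^4 = I
assert np.array_equal(elements[1], -np.eye(2, dtype=int))
assert np.array_equal(elements[2], A.T)
assert np.array_equal(elements[3], np.eye(2, dtype=int))

# every element has determinant +1
assert all(round(np.linalg.det(M)) == 1 for M in elements)
```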
Problem 30. Let n ≥ 2. An invertible integer matrix A ∈ GL_n(Z) generates a toral automorphism f : T^n → T^n via the formula
f ∘ π = π ∘ A,   π : R^n → T^n := R^n/Z^n.
The set of fixed points of f is given by
Fix(f) := { x* ∈ T^n : f(x*) = x* }.
Now we have: if det(I_n − A) ≠ 0, then
#Fix(f) = |det(I_n − A)|.
Let n = 2 and
-G
0
Show that det(I2 − A) ≠ 0 and find #Fix(f).

Solution 30. We have det(I2 − A) = −1. Thus #Fix(f) = |det(I2 − A)| = 1.
Problem 31. Let A be an n × n matrix over R. Consider the 2n × 2n matrices
S = ( I_n  A   )        S̃ = ( I_n  0_n )
    ( 0_n  I_n ),            ( A    I_n ).
Can we find an invertible 2n × 2n matrix T such that S̃ = T^{-1} S T?

Solution 31. We find
T = ( 0_n  I_n )
    ( I_n  0_n ) = T^{-1}.
Problem 32. Let R G Cmxm and 5 G Cnxn be nontrivial involutions. This
means that i? = R - 1 7^ ± J m and S' = S'- 1 7^ Jn. A matrix A G Cmxn is called
(R, ^-sym m etric if RAS = A. Consider the case m = n = 2 and the Pauli spin
matrices
Find all 2 x 2 matrices A over C such that RAS = A.
Solution 32.
From
/0
l\
fa n
ai2 \ / 0
—A
VI
fi)
\ a 21
a>22 )
0 )
\ i
fa n
ai2\
V a 2i
a 22 /
_
we obtain
A = ( 0,11
\ i a 12
0/12 ^
-ia n J '
Problem 33. Find all 2 x 2 matrices A over R such that det(A) = I (i.e. A
is an element of the Lie group 5 L (2 ,R )) and
Do these matrices form a group under matrix multiplication?
Solution 33. The condition provides the two equations a11 − a12 = 1, a21 − a22 = −1. Thus a12 = a11 − 1 and a21 = a22 − 1. Inserting this into det(A) = 1 yields a22 = 2 − a11. Thus we end up with the matrix
A = ( a11      a11 − 1 )
    ( 1 − a11  2 − a11 )
with a11 ∈ R and det(A) = 1. We set a11 = a and
A(a) = ( a      a − 1 )
       ( 1 − a  2 − a ).
For a = 1 we obtain the 2 × 2 identity matrix. Now
A(a) A(β) = A(a + β − 1).
Setting γ = a + β − 1 we obtain A(a)A(β) = A(γ). The inverse of A(a) is given by
A^{-1}(a) = ( 2 − a   1 − a )
            ( a − 1   a     ).
Thus the matrices A(a) form a group under matrix multiplication.
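The closure law A(a)A(b) = A(a + b − 1) and the inverse A^{-1}(a) = A(2 − a) can be confirmed numerically; a NumPy sketch (NumPy assumed):

```python
import numpy as np

def A(a):
    # the one-parameter family found in the solution
    return np.array([[a, a - 1.0],
                     [1.0 - a, 2.0 - a]])

rng = np.random.default_rng(7)
for _ in range(5):
    a, b = rng.uniform(-3, 3, size=2)
    assert np.isclose(np.linalg.det(A(a)), 1.0)        # A(a) lies in SL(2,R)
    assert np.allclose(A(a) @ A(b), A(a + b - 1.0))    # closure law
    assert np.allclose(A(a) @ A(2.0 - a), np.eye(2))   # inverse is A(2 - a)
```

Note that the inverse written in the solution is exactly A(2 − a), which makes the group structure explicit.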
Problem 34. Let a ∈ R. Consider the hermitian matrix which is an element of the noncompact Lie group SO(1,1)
A(a) = ( cosh(a)  sinh(a) )
       ( sinh(a)  cosh(a) ).
Find the Cayley transform
B = (A − iI2)(A + iI2)^{-1}.
Note that B is a unitary matrix and therefore an element of the compact Lie group U(2). Find B(a → ∞).
Solution 34. We have
A − iI2 = ( cosh(a) − i   sinh(a)      )        A + iI2 = ( cosh(a) + i   sinh(a)      )
          ( sinh(a)       cosh(a) − i ),                   ( sinh(a)       cosh(a) + i ).
Thus
(A + iI2)^{-1} = (1/(2i cosh(a))) ( cosh(a) + i   −sinh(a)     )
                                   ( −sinh(a)      cosh(a) + i )
and
B = (A − iI2)(A + iI2)^{-1} = (1/cosh(a)) ( −i        sinh(a) )
                                           ( sinh(a)  −i      )
with det(B) = −1. If a → ∞ we obtain
B(a → ∞) = ( 0  1 )
           ( 1  0 ).
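The closed form of B, its unitarity and det(B) = −1 can be checked with NumPy (assumed):

```python
import numpy as np

a = 0.7
A = np.array([[np.cosh(a), np.sinh(a)],
              [np.sinh(a), np.cosh(a)]])

# Cayley transform of the SO(1,1) boost
B = (A - 1j * np.eye(2)) @ np.linalg.inv(A + 1j * np.eye(2))

# closed form derived in the solution
B_closed = np.array([[-1j, np.sinh(a)],
                     [np.sinh(a), -1j]]) / np.cosh(a)

assert np.allclose(B, B_closed)
assert np.allclose(B @ B.conj().T, np.eye(2))   # B is unitary
assert np.isclose(np.linalg.det(B), -1.0)
```

Increasing `a` shows the off-diagonal entries tending to 1 and the diagonal entries to 0, matching the limit B(a → ∞).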
Problem 35. (i) Consider the Lie group SL(n,C), i.e. the n × n matrices over C with determinant 1. Can we find A, B ∈ SL(n,C) such that [A,B] is an element of SL(n,C)?
(ii) Consider the 2 × 2 matrices
Both are elements of the non-compact Lie group SL(2,C). Can one find z ∈ C such that the commutator [A(z),B] is again an element of SL(2,C)?
Solution 35. (i) Consider the case n = 2. Let ε ∈ R and ε ≠ 0. Then an example is
Note that det(A ⊗ B − B ⊗ A) = 0. Another example is
(ii) We find
with the solutions z = i and z = −i.
Problem 36. The Lie group SU(2) is defined by
SU(2) := { U 2 × 2 matrix over C : U U* = I2, det(U) = 1 }.
Let (3-sphere)
S³ := { (x1, x2, x3, x4) ∈ R⁴ : x1² + x2² + x3² + x4² = 1 }.
Show that SU(2) can be identified as a real manifold with the 3-sphere S³.

Solution 36. Let a, b, c, d ∈ C and
Imposing the conditions U U* = I2 and det(U) = 1 we find that U has the form
where a ā + b b̄ = 1. Now we embed SU(2) as a subset of R⁴ by writing out the complex numbers a and b in terms of their real and imaginary parts
a = x1 + i x2,   b = x3 + i x4,   x1, x2, x3, x4 ∈ R
whence
SU(2) → R⁴ : (a, b) ↦ (x1, x2, x3, x4).
The image of this map consists of the points such that a ā + b b̄ = 1, that is x1² + x2² + x3² + x4² = 1. Given a and b we find
x1 = (a + ā)/2,   x2 = (a − ā)/(2i),   x3 = (b + b̄)/2,   x4 = (b − b̄)/(2i).

Problem 37. Consider the matrix (a ∈ R)
A(a) = ( cos(a)   sin(a) )
       ( sin(a)  −cos(a) ).
(i) Is A(a) an element of SO(2)?
(ii) Consider the transformation
Is (x1′)² + (x2′)² = x1² + x2²? Prove or disprove.
(iii) We define the star product
A(a) ★ A(a) = ( cos(a)  0        0        sin(a)  )
              ( 0       cos(a)   sin(a)   0       )
              ( 0       sin(a)  −cos(a)   0       )
              ( sin(a)  0        0       −cos(a)  ).
Is A(a) ★ A(a) an element of SO(4)?
(iv) Is A(a) ⊗ A(a) an element of SO(4)?
(v) Find
X = dA(a)/da |_{a=0}
and then B(a) = exp(aX). Compare B(a) and A(a) and discuss.
Solution 37. (i) We have A(a)^T = A(a) and A(a)^T A(a) = I2. Thus A(a) is an element of O(2). Now det(A(a)) = −1 and thus A(a) is not an element of SO(2).
(ii) Yes, we have x1² + x2² = (x1′)² + (x2′)².
(iii) The eigenvalues are +1 (2 times) and −1 (2 times). Thus det(A(a) ★ A(a)) = 1.
(iv) We have det(A(a) ⊗ A(a)) = (det(A(a)))² (det(A(a)))² = 1.
(v) We obtain
X = ( 0  1 )        B(a) = ( cosh(a)  sinh(a) )
    ( 1  0 ),               ( sinh(a)  cosh(a) ).
Thus A(a) ≠ B(a). B(a) is not an element of O(2).
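That A(a) ★ A(a) is orthogonal with determinant +1 — hence in SO(4) — can be checked numerically (NumPy assumed):

```python
import numpy as np

a = 0.3
c, s = np.cos(a), np.sin(a)
A = np.array([[c, s], [s, -c]])          # A(a) from the problem, det = -1

# the star product A * A as displayed in (iii)
star = np.array([[c, 0, 0, s],
                 [0, c, s, 0],
                 [0, s, -c, 0],
                 [s, 0, 0, -c]])

assert np.allclose(star.T @ star, np.eye(4))           # orthogonal
assert np.isclose(np.linalg.det(star), 1.0)            # det +1, so in SO(4)
assert np.isclose(np.linalg.det(np.kron(A, A)), 1.0)   # A tensor A also has det +1
```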
Problem 38. Given two 2 × 2 matrices A and B in the Lie group GL(2,C), i.e. det(A) ≠ 0, det(B) ≠ 0. The action of an n × n matrix M = (m_jk) on a polynomial p(x) ∈ R_n = C[x1, x2, ..., x_n] may be defined by setting
T_M p(x) := p(xM)
where xM is the multiplication of the row n-vector x by the n × n matrix M. Thus xM is a row vector again. We denote by
N4 := { p ∈ R4 : T_{A⊗B} p(x) = p(x) }
the ring of polynomials in R4 that are invariant under the action of A ⊗ B for all pairs A, B ∈ GL(2,C). The action preserves degree and homogeneity. Consider the polynomial
p(x1, x2, x3, x4) = x1² + x2² + x3² + x4² + x1 x2 + x2 x3 + x3 x4 + x4 x1
and
Show that T_{A⊗B} p(x) = p(x).
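The claimed invariance can be checked symbolically; a SymPy sketch (SymPy assumed, and assuming the given pair A, B acts as the index reversal (x1, x2, x3, x4) ↦ (x4, x3, x2, x1), as in the solution):

```python
from sympy import symbols, expand

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
p = x1**2 + x2**2 + x3**2 + x4**2 + x1*x2 + x2*x3 + x3*x4 + x4*x1

# action of A tensor B on the row vector x: index reversal
p_acted = p.subs({x1: x4, x2: x3, x3: x2, x4: x1}, simultaneous=True)

assert expand(p_acted - p) == 0   # p is invariant
```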
Solution 38. Since
( x1  x2  x3  x4 )(A ⊗ B) = ( x4  x3  x2  x1 )
we have
T_{A⊗B} p(x) = p(x(A ⊗ B)) = p(x4, x3, x2, x1) = p(x1, x2, x3, x4).

Problem 39. Let α, β ∈ R. Then the matrices
A(α) = ( cos(α)   sin(α) )        B(β) = ( cos(β)   sin(β) )
       ( −sin(α)  cos(α) ),               ( −sin(β)  cos(β) )
are elements of the Lie group SO(2) and A(α) ⊗ B(β) is an element of the Lie group SO(4). Find the "infinitesimal generators" X and Y such that
exp(αX + βY) = A(α) ⊗ B(β).
Solution 39. We have
X = ∂/∂α (A(α) ⊗ B(β)) |_{α=0, β=0} = ( 0   1 ) ⊗ I2
                                       ( −1  0 )
and
Y = ∂/∂β (A(α) ⊗ B(β)) |_{α=0, β=0} = I2 ⊗ ( 0   1 )
                                             ( −1  0 )
with exp(αX + βY) = A(α) ⊗ B(β). Note that [X, Y] = 0_4.
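The generator construction can be verified with SciPy's matrix exponential (SciPy/NumPy assumed):

```python
import numpy as np
from scipy.linalg import expm

def R(t):
    # 2x2 rotation, element of SO(2), matching A and B above
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # dR/dt at t = 0
X = np.kron(J, np.eye(2))
Y = np.kron(np.eye(2), J)

al, be = 0.4, -1.1
assert np.allclose(X @ Y, Y @ X)                          # [X, Y] = 0
assert np.allclose(expm(al * X + be * Y), np.kron(R(al), R(be)))
```

Because X and Y commute, the exponential factorizes as exp(αX) exp(βY) = exp(αJ) ⊗ exp(βJ).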
Problem 40. A topological group G is both a group and a topological space, the two structures being related by the requirement that the maps x ↦ x^{-1} (of G onto G) and (x, y) ↦ xy (of G × G onto G) are continuous. G × G is given the product topology.
(i) Given a topological group G, define the maps
φ(x) := x a x^{-1}   and   ψ(x) := x a x^{-1} a^{-1} = [x, a].
How are the iterates of the maps φ and ψ related?
(ii) Consider G = SO(2) and
x = ( cos(α)  −sin(α) )        a = ( 0   1 )
    ( sin(α)   cos(α) ),           ( −1  0 )
with x, a ∈ SO(2). Calculate φ and ψ. Discuss.
Solution 40. (i) The iterates φ^(n) and ψ^(n) are related by
φ^(n)(x) = ψ^(n)(x) a
since φ(x) = ψ(x) a and by induction
φ^(n+1)(x) = (ψ^(n)(x) a) a (ψ^(n)(x) a)^{-1} = ψ^(n)(x) a (ψ^(n)(x))^{-1} = ψ^(n+1)(x) a.
(ii) We find for φ(x) = x a x^{-1}
( cos(α)  −sin(α) ) ( 0   1 ) ( cos(α)   sin(α) )   ( 0   1 )
( sin(α)   cos(α) ) ( −1  0 ) ( −sin(α)  cos(α) ) = ( −1  0 ).
Thus a is a fixed point, i.e. φ(a) = a, of φ. Since ψ(x) = x a x^{-1} a^{-1} we obtain
x a x^{-1} a^{-1} = I2.
Thus ψ^(n)(x) = I2 for all n.
Problem 41. The unitary matrices are elements of the Lie group U(n). The corresponding Lie algebra u(n) is the set of matrices with the condition X* = −X. An important subgroup of U(n) is the Lie group SU(n) with the condition that det(U) = 1. The unitary matrices
Ti
-.)■ (!i)
are not elements of the Lie group SU(2) since the determinants of these unitary matrices are −1. The corresponding Lie algebra su(n) of the Lie group SU(n) consists of the n × n matrices X with
X* = −X,   tr(X) = 0.
Let σ1, σ2, σ3 be the Pauli spin matrices. Then any unitary matrix in U(2) can be represented by
U = e^{iα} e^{−iβσ3/2} e^{−iγσ2/2} e^{−iδσ3/2}
where 0 ≤ α < 2π, 0 ≤ β < 2π, 0 ≤ γ < π and 0 ≤ δ < 2π. Calculate the right-hand side.
Solution 41. We find
e^{iα I2} = ( e^{iα}  0      )        e^{−iγσ2/2} = ( cos(γ/2)  −sin(γ/2) )
            ( 0       e^{iα} ),                      ( sin(γ/2)   cos(γ/2) ),
e^{−iβσ3/2} = ( e^{−iβ/2}  0        )        e^{−iδσ3/2} = ( e^{−iδ/2}  0        )
              ( 0          e^{iβ/2} ),                      ( 0          e^{iδ/2} ).
This is the cosine-sine decomposition. Each of the four matrices on the right-hand side is unitary. Thus U is unitary and det(U) = e^{2iα}. We obtain the special case of the Lie group SU(2) if α = 0.
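A numerical check of this decomposition with NumPy (assumed), using the identity exp(−i(t/2)σ) = cos(t/2) I − i sin(t/2) σ valid for any matrix σ with σ² = I:

```python
import numpy as np

s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])

def rot(axis, t):
    # exp(-i t axis / 2), using axis^2 = I for Pauli matrices
    return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * axis

al, be, ga, de = 0.9, 1.3, 0.5, 2.1
U = np.exp(1j * al) * rot(s3, be) @ rot(s2, ga) @ rot(s3, de)

assert np.allclose(U @ U.conj().T, np.eye(2))           # U is unitary
assert np.isclose(np.linalg.det(U), np.exp(2j * al))    # det(U) = e^{2i alpha}
```

Setting `al = 0` gives det(U) = 1, i.e. the SU(2) case.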
Problem 42. We consider the following subgroups of the Lie group SL(2,R):
K := { ( cos(θ)  −sin(θ); sin(θ)  cos(θ) ) : θ ∈ R },
A := { ( r^{1/2}  0; 0  r^{−1/2} ) : r > 0 },
N := { ( 1  t; 0  1 ) : t ∈ R }.
Any matrix m ∈ SL(2,R) can be written in a unique way as the product m = kan with k ∈ K, a ∈ A and n ∈ N. This decomposition is called the Iwasawa decomposition and has a natural generalization to SL(n,R), n ≥ 3. The notation for the subgroups comes from the fact that K is a compact subgroup, A is an abelian subgroup and N is a nilpotent subgroup of SL(2,R). Find the Iwasawa decomposition of the matrix
( √2  1  )
( 1   √2 ).
Solution 42. From
( √2  1  )   ( cos(θ)  −sin(θ) ) ( r^{1/2}  0        ) ( 1  t )
( 1   √2 ) = ( sin(θ)   cos(θ) ) ( 0        r^{−1/2} ) ( 0  1 )
we obtain
( √2  1  )   ( r^{1/2} cos(θ)    t r^{1/2} cos(θ) − r^{−1/2} sin(θ) )
( 1   √2 ) = ( r^{1/2} sin(θ)    t r^{1/2} sin(θ) + r^{−1/2} cos(θ) ).
Thus we have the four conditions
r^{1/2} cos(θ) = √2,   t r^{1/2} cos(θ) − r^{−1/2} sin(θ) = 1,
r^{1/2} sin(θ) = 1,    t r^{1/2} sin(θ) + r^{−1/2} cos(θ) = √2
for the three unknowns r, t, θ. From the first and third conditions we obtain tan(θ) = 1/√2 and therefore
sin(θ) = 1/√3,   cos(θ) = √2/√3.
It also follows that r^{1/2} = √3. From the second and fourth conditions it finally follows that t = 2√2/3. Thus we have the decomposition
( √2  1  )   ( √2/√3  −1/√3  ) ( √3  0     ) ( 1  2√2/3 )
( 1   √2 ) = ( 1/√3    √2/√3 ) ( 0   1/√3  ) ( 0  1     ).
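The decomposition can be confirmed numerically with the values found above (NumPy assumed):

```python
import numpy as np

M = np.array([[np.sqrt(2), 1.0],
              [1.0, np.sqrt(2)]])

# tan(theta) = 1/sqrt(2), r^{1/2} = sqrt(3), t = 2*sqrt(2)/3
th = np.arctan(1 / np.sqrt(2))
K = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th), np.cos(th)]])
A = np.diag([np.sqrt(3), 1 / np.sqrt(3)])
N = np.array([[1.0, 2 * np.sqrt(2) / 3],
              [0.0, 1.0]])

assert np.allclose(K @ A @ N, M)           # Iwasawa factors reproduce M
assert np.isclose(np.linalg.det(M), 1.0)   # M is indeed in SL(2,R)
```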
Problem 43. Let 0 ≤ a < π/4. Consider the transformation
X(x, y, a) = (1/√(cos(2a))) (x cos(a) + i y sin(a)),
Y(x, y, a) = (1/√(cos(2a))) (−i x sin(a) + y cos(a)).
(i) Show that X² + Y² = x² + y².
(ii) Do the matrices
(1/√(cos(2a))) ( cos(a)     i sin(a) )
               ( −i sin(a)  cos(a)   )
form a group under matrix multiplication?

Solution 43. (i) Using the identity cos²(a) − sin²(a) = cos(2a) we find
X² + Y² = x² + y².
(ii) The determinant of the matrix is 1. Thus it is an element of the Lie group SL(2,C). For a = 0 we have the identity matrix.
Problem 44. The Schur-Weyl duality theorem states that if the natural representation of the unitary group U(N) (or of the general linear group GL(N,C)) is tensored (Kronecker product) n times, then under the joint action of the symmetric group S_n (permutation matrices) and GL(N,C) this tensor product is decomposed into a direct sum of multiplicity-free irreducible subrepresentations uniquely labelled by Young diagrams. Let n = 4. Consider the permutation matrix (flip matrix)
P = ( 1  0  0  0 )
    ( 0  0  1  0 )
    ( 0  1  0  0 )
    ( 0  0  0  1 )
and the Hadamard matrix
U_H = (1/√2) ( 1   1 )
             ( 1  −1 )
which is an element of the Lie group U(2). Then
U_H ⊗ U_H = (1/2) ( 1   1   1   1 )
                  ( 1  −1   1  −1 )
                  ( 1   1  −1  −1 )
                  ( 1  −1  −1   1 )
which is an element of the Lie group U(4). Let
v1 = ( v1,1 )        v2 = ( v2,1 )
     ( v1,2 ),             ( v2,2 )
be elements of C².
(i) Find the commutator [P, U_H ⊗ U_H].
(ii) Find P(U_H ⊗ U_H)(v1 ⊗ v2). Discuss.

Solution 44. (i) We find [P, U_H ⊗ U_H] = 0_4.
(ii) We obtain
P(U_H ⊗ U_H)(v1 ⊗ v2) = (1/2) ( v1,1(v2,1 + v2,2) + v1,2(v2,1 + v2,2), v1,1 v2,1 − v1,2 v2,2, v1,1 v2,1 − v1,2 v2,2, v1,1(v2,1 − v2,2) − v1,2(v2,1 − v2,2) )^T
                       + (1/2) ( 0, v1,1 v2,2 − v1,2 v2,1, −v1,1 v2,2 + v1,2 v2,1, 0 )^T.
The two vectors on the right-hand side are orthogonal to each other. The eigenvalues of P are +1 (three times) and −1 (one time) with the corresponding eigenspaces
{ (a, b, b, d)^T : a, b, d ∈ C }   and   { (0, x, −x, 0)^T : x ∈ C, x ≠ 0 }.
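The commutator and orthogonality statements of Problem 44 can be checked with NumPy (assumed):

```python
import numpy as np

# flip (swap) matrix on C^2 tensor C^2
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)

assert np.allclose(P @ HH - HH @ P, np.zeros((4, 4)))   # [P, H tensor H] = 0

rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(2), rng.standard_normal(2)
w = P @ HH @ np.kron(v1, v2)
sym = (w + P @ w) / 2    # part in the +1 eigenspace of P
anti = (w - P @ w) / 2   # part in the -1 eigenspace of P
assert np.isclose(sym @ anti, 0.0)                      # the two parts are orthogonal
assert np.allclose(sym + anti, w)
```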
Problem 45. The octonion algebra O is an 8-dimensional non-associative algebra. It is defined in terms of the basis elements e_μ (μ = 0, 1, ..., 7) and their multiplication table; e_0 is the unit element. We use Greek indices (μ, ν, ...) to include the 0 and Latin indices (i, j, k, ...) when we exclude the 0. We define ẽ_k := e_{4+k} for k = 1, 2, 3. The multiplication rules among the basis elements of the octonions are given by
e_i e_j = −δ_ij e_0 + Σ_{k=1}^{3} ε_ijk e_k,   i, j = 1, 2, 3   (1)
and
^4 ^¾— C^e4 —
j
C4
—
C^e4 — C^,
C4C4 —
Co
3
=
^ ^
& = 1,2,3
k=i
3
=
=
Sije^
^ ^eijk&ki
j, & — 1,2,3
k=i
where δ_ij is the Kronecker delta and ε_ijk is +1 if (ijk) is an even permutation of (123), −1 if (ijk) is an odd permutation of (123) and 0 otherwise. We can formally summarize the multiplications as
e_μ e_ν = g_μν e_0 + Σ_{k=1}^{7} γ^k_μν e_k
where
g_μν = diag(1, −1, −1, −1, −1, −1, −1, −1),   γ^k_μν = −γ^k_νμ
with μ, ν = 0, 1, ..., 7, and i, j, k = 1, 2, ..., 7.
(i) Show that the set { e_0, e_1, e_2, e_3 } is a closed associative subalgebra.
(ii) Show that the octonion algebra O is non-associative.

Solution 45. (i) Owing to the relation (1) we have a faithful representation using the Pauli spin matrices
e_0 → I2,   e_j → −i σ_j   (j = 1, 2, 3).
We find the isomorphism
e_i e_j → (−i σ_i)(−i σ_j) = −σ_i σ_j = −(δ_ij I2 + i Σ_{k=1}^{3} ε_ijk σ_k) → −δ_ij e_0 + Σ_{k=1}^{3} ε_ijk e_k.
(ii) The algebra is non-associative owing to the relation
ẽ_i ẽ_j = −δ_ij e_0 − Σ_{k=1}^{3} ε_ijk e_k,   i, j = 1, 2, 3.
Thus we cannot find a matrix representation.
Supplementary Problems

Problem 1. (i) Let I2, 0_2 be the 2 × 2 identity and zero matrix, respectively. Find the group generated by the 4 × 4 matrix
(ii) Find the group generated by the 4 × 4 matrix
x = l(i
-O'
7
Find the eigenvalues of the matrix.
(iii) Find the group generated by the 6 × 6 matrix
B = ( 0_2  0_2  I2  )
    ( I2   0_2  0_2 )
    ( 0_2  I2   0_2 )
under matrix multiplication.
(iv) Find the group (matrix multiplication) generated by the 4 × 4 matrix
( 0  0  0  1 )
( 0  1  0  0 )
( 0  0  1  0 )
( 1  0  0  0 ).
Is the group commutative? Find the determinant of all these matrices from (i). Do these numbers form a group under multiplication? Find all the eigenvalues of these matrices. Do these numbers form a group under multiplication? Let a ∈ R. Find exp(aA). Is exp(aA) an element of SL(4,R)?
Problem 2. (i) What group is generated by the two 3 × 3 permutation matrices
I
0
0
0
0
I
I
0
0
0
I
0
0
0
I
under matrix multiplication? Note that A² = B² = I3.
(ii) Find the group generated by the 4 × 4 permutation matrices
Pi
I 0
0 I 0V
0
0 0 0 I
Vi 0 0 0 )
( °0
0
0
0 I
Vi 0
( °0
0
I
0
0
1X
0
0
0/
under matrix multiplication. There are twenty-four 4 × 4 permutation matrices, which form a group under matrix multiplication.
Problem 3. (i) Let α, β ∈ R and β ≠ 0. Do the 2 × 2 matrices
« (« ,« = (
v
p sin (a )
f W
cos(a)
)
J
form a group under matrix multiplication? Note that det(M(α, β)) = 1.
(ii) Let α, β, φ ∈ R and α, β ≠ 0. Consider the 2 × 2 matrices
Do the matrices form a group under matrix multiplication?
(iii) Let α, θ ∈ R. Do the matrices
( cosh(α)           e^{iθ} sinh(α) )
( e^{−iθ} sinh(α)   cosh(α)        )
form a group under matrix multiplication? Are the matrices unitary?
(iv) Let α, β, r ∈ R. Do the 2 × 2 matrices
( e^{i(α+β)} cosh(r)     e^{i(α−β)} sinh(r)  )
( e^{−i(α−β)} sinh(r)    e^{−i(α+β)} cosh(r) )
form a group under matrix multiplication?
Problem 4. Do the eight 2 × 2 matrices
:)• o -0O- (? i). (.°.
0
TlO - 0 ’ TlO
I1) ’ Tl( I1 0 ’
form a group under matrix multiplication? If not, add matrices so that one obtains a group.
Problem 5. (i) Do the 4 × 4 matrices
/
ez<t>cosh(0)
0
e ~ l<t> cosh(0)
V
0
0
sinh(0)
9(<t>,0)
sinh(0)
0
0
sinh(0)
el<t>cosh(0)
0
sinh(0)
\
0
0
e ~ x<t>cosh(0) /
form a group under matrix multiplication?
(ii) Do the 4 × 4 matrices
/
9 (4, 0)
V
cos (0)
0
0
sin(0)
0
e ~ z<i>cos(0)
sin(0)
0
form a group under matrix multiplication?
0
—sin(0)
e^ cos(0)
0
—sin(0) \
0
0
e ~ i(t> cos (6) J
Problem 6. The Lie group O(2) is generated by a rotation R1 and a reflection R2. Show that
tr(R1(θ)) = 2 cos(θ),   tr(R2(θ)) = 0,   det(R1(θ)) = 1,   det(R2(θ)) = −1
and det(R1(θ)R2(θ)) = −1, tr(R1(θ)R2(θ)) = 0.
Problem 7. (i) Find all 3 × 3 permutation matrices P such that
P^{-1} ( 1/√2   0   1/√2  )       ( 1/√2   0   1/√2  )
       ( 0      1   0     ) P  =  ( 0      1   0     )
       ( 1/√2   0  −1/√2  )       ( 1/√2   0  −1/√2  ).
Show that these matrices form a group, i.e. a subgroup of the 3 × 3 permutation matrices.
(ii) Find all 2 × 2 invertible matrices B with det(B) = 1 such that
Show that these matrices form a group under matrix multiplication.
Problem 8. Prove or disprove: for any permutation matrix the eigenvalues form a group under multiplication. Hint: each permutation matrix admits the eigenvalue +1. Why?
Problem 9. Let σ_j (j = 0, 1, 2, 3) be the Pauli spin matrices, where σ_0 = I2. Do the 4 × 4 matrices
form a group under matrix multiplication? If not, add the elements needed to obtain a group.
Problem 10. Let ω := exp(2πi/4). Consider the 3 × 3 unitary matrices
Do the matrices of the set
A := { E^j C^k Q^ℓ : 0 ≤ j ≤ 3, 0 ≤ k ≤ 1, 0 ≤ ℓ ≤ 2 }
form a group under matrix multiplication?
Problem 11. Consider the set of 2 × 2 matrices
Then under matrix multiplication we have a group. Consider the set
{ e ⊗ e,   e ⊗ a,   a ⊗ e,   a ⊗ a }.
Show that this set forms a group under matrix multiplication. For example we have (a ⊗ a)(a ⊗ a) = e ⊗ e. The neutral element is e ⊗ e.
Problem 12. Consider the 2n × 2n matrix
J = ( 0_n   I_n )
    ( −I_n  0_n ).
We define that the 2n × 2n matrix H over R is Hamiltonian if (JH)^T = JH. We define that the 2n × 2n matrix S over R is symplectic if S^T J S = J. Show that if H is Hamiltonian and S is symplectic, then the matrix S^{-1} H S is Hamiltonian. Note that since S is invertible, S^T is invertible. We have (J S^{-1} H S)^T = S^T H^T (S^T)^{-1} J^T.
Problem 13. Let α, β, γ ∈ R. The Heisenberg group H3(R) consists of all 3 × 3 upper triangular matrices of the form
M(α, β, γ) = ( 1  α  γ )
             ( 0  1  β )
             ( 0  0  1 )
with matrix multiplication as composition. Let t ∈ R. Consider the matrices
A(t) = ( 1  t  0 )        B(t) = ( 1  0  0 )        C(t) = ( 1  0  t )
       ( 0  1  0 ),               ( 0  1  t ),               ( 0  1  0 )
       ( 0  0  1 )                ( 0  0  1 )                ( 0  0  1 ).
(i) Show that {A(t) : t ∈ R}, {B(t) : t ∈ R}, {C(t) : t ∈ R} are one-parameter subgroups of H3(R).
(ii) Show that {C(t) : t ∈ R} is the center of H3(R).
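A quick NumPy check of the subgroup law and of the centrality of C(t), using the standard upper-triangular parametrization of H3(R) stated above (NumPy assumed):

```python
import numpy as np

def M(a, b, g):
    # generic element of the Heisenberg group H3(R)
    return np.array([[1.0, a, g],
                     [0.0, 1.0, b],
                     [0.0, 0.0, 1.0]])

def C(t):
    # candidate center element
    return M(0.0, 0.0, t)

s, t = 0.7, -1.2
assert np.allclose(C(s) @ C(t), C(s + t))   # one-parameter subgroup law
X = M(0.3, 1.5, -0.4)
assert np.allclose(C(t) @ X, X @ C(t))      # C(t) commutes with a generic element
```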
Problem 14. Let ω³ = 1. What group is generated by the 3 × 3 matrices
A = ( 1  0   0 )        B = ( 0   0  ω )
    ( 0  0   ω ),            ( 0   1  0 ),
    ( 0  ω²  0 )             ( ω²  0  0 )
C=
/ 0
O
\ uj2
0
I
0
uA
0
0)
under matrix multiplication? Is the matrix Π = (1/3)(A + B + C) a projection matrix?
Problem 15. Consider the six 3 × 3 permutation matrices denoted by P0, P1, P2, P3, P4, P5, where P0 is the 3 × 3 identity matrix. Find all triples (Q1, Q2, Q3) of these permutation matrices such that Q1 Q2 Q3 = Q3 Q2 Q1, where Q1 ≠ Q2, Q2 ≠ Q3, Q3 ≠ Q1.
Problem 16. Let n ≥ 2 and ω = exp(2πi/n), where ω is the n-th primitive root of unity with ω^n = 1. Consider the diagonal matrix and the permutation matrix, respectively,
D = ( 1  0  0   ...  0       )        P = ( 0  1  0  ...  0 )
    ( 0  ω  0   ...  0       )            ( 0  0  1  ...  0 )
    ( 0  0  ω²  ...  0       )            ( ...             )
    ( ...                    )            ( 0  0  0  ...  1 )
    ( 0  0  0   ...  ω^{n-1} )            ( 1  0  0  ...  0 ).
(i) Show that D^n = P^n = I_n.
(ii) Show that the set of matrices { D^j P^k : j, k = 0, 1, ..., n − 1 } forms a basis of the vector space of n × n matrices.
(iii) Show that PD = ωDP and P^j D^k = ω^{jk} D^k P^j.
(iv) Find the matrix X = £P + C- 1 ^ -1 +
+
and calculate the
eigenvalues.
(v) Let U be the unitary matrix with entries indexed by j, k = 0, 1, ..., n − 1. Show that U P U^{-1} = D.
(vi) Let R = P ⊗ I_n and S = D ⊗ I_n. Find RS. Let X = P ⊗ P and Y = D ⊗ D. Find XY. Find the commutator [X, Y].
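The relations D^n = P^n = I_n and PD = ωDP can be verified numerically, e.g. for n = 5 (NumPy assumed):

```python
import numpy as np

n = 5
w = np.exp(2j * np.pi / n)
D = np.diag(w ** np.arange(n))      # "clock" matrix
P = np.roll(np.eye(n), 1, axis=1)   # cyclic "shift": ones on the superdiagonal and in the corner

assert np.allclose(np.linalg.matrix_power(D, n), np.eye(n))
assert np.allclose(np.linalg.matrix_power(P, n), np.eye(n))
assert np.allclose(P @ D, w * (D @ P))    # PD = w DP

j, k = 3, 2
Pj = np.linalg.matrix_power(P, j)
Dk = np.linalg.matrix_power(D, k)
assert np.allclose(Pj @ Dk, w**(j * k) * (Dk @ Pj))   # P^j D^k = w^{jk} D^k P^j
```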
Problem 17. Consider the diagonal matrix D and the permutation matrix P
D = ( 1  0             0             )        P = ( 0  1  0 )
    ( 0  exp(2iπ/3)    0             ),           ( 0  0  1 )
    ( 0  0             exp(−2iπ/3)   )            ( 1  0  0 ).
(i) What group is generated by D and P?
(ii) Calculate the commutator [D, P]. Discuss.
Problem 18. In the Lie group U(N) of the N × N unitary matrices one can find two N × N matrices U and V such that
UV = e^{2πi/N} VU.
Any N × N hermitian matrix H can be written in the form
H = Σ_{j=0}^{N−1} Σ_{k=0}^{N−1} h_{jk} U^j V^k.
Using the expansion coefficients h_{jk} one can associate to the hermitian matrix H the function h(p, q), where p = 0, 1, ..., N − 1 and q = 0, 1, ..., N − 1. Consider the case N = 2.
(i) Consider the hermitian and unitary matrix
Find h(p, q).
(ii) Consider the hermitian and projection matrix
Find h(p, q).
Problem 19. (i) Consider the 2 × 2 matrix
J := i σ2 = ( 0   1 )
            ( −1  0 ).
Show that any 2 × 2 matrix A ∈ SL(2,C) satisfies A^T J A = J.
(ii) Let A satisfy A^T J A = J.
Is (A ⊗ A)^T (J ⊗ J)(A ⊗ A) = J ⊗ J?
Is (A ⊕ A)^T (J ⊕ J)(A ⊕ A) = J ⊕ J?
Is (A ★ A)^T (J ★ J)(A ★ A) = J ★ J?
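Part (i) and the Kronecker-product case of (ii) can be checked numerically; the underlying fact is the 2 × 2 identity A^T J A = det(A) J (NumPy assumed):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A = A / np.sqrt(np.linalg.det(A))   # rescale so that det(A) = 1

assert np.isclose(np.linalg.det(A), 1.0)
assert np.allclose(A.T @ J @ A, J)       # any A in SL(2,C) preserves J

AA = np.kron(A, A)
JJ = np.kron(J, J)
assert np.allclose(AA.T @ JJ @ AA, JJ)   # the relation survives the Kronecker product
```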
Problem 20. The group SL(2,F3) consists of unimodular 2 × 2 matrices with integer entries taken modulo three. Let
be an element of SL(2,F3). Find the inverse of A.
Problem 21. Consider the semi-simple Lie group SL(2,R). Let A ∈ SL(2,R). Then A^{-1} ∈ SL(2,R). Explain why. Show that A and A^{-1} have the same eigenvalues. Is this still true for A ∈ SL(3,R)?
Problem 22. Let A, B ∈ SL(2,R).
(i) Is tr(A) = tr(A^{-1})? Prove or disprove.
(ii) Is tr(AB) = tr(A) tr(B) − tr(A B^{-1})? Prove or disprove.
Problem 23. Let a ∈ R. Consider the 2 × 2 matrix
(i) Find the trace, determinant, eigenvalues and normalized eigenvectors of the matrix A(a).
(ii) Calculate
X := dA(a)/da |_{a=0}.
Find exp(aX) and compare with A(a), i.e. is exp(aX) = A(a)? Discuss.
Problem 24. (i) Let α ∈ R. The 2 × 2 rotation matrix
R(α) = ( cos(α)  −sin(α) )
       ( sin(α)   cos(α) )
is an element of the Lie group SO(2,R). Find the spectral decomposition of R(α).
(ii) Let β ∈ R. The matrix
B(β) = ( cosh(β)  sinh(β) )
       ( sinh(β)  cosh(β) )
is an element of the Lie group SO(1,1,R). Find the spectral decomposition of B(β).
(iii) Find the spectral decomposition of R(α) ⊗ B(β).
Problem 25. The Lie group SU(1,1) consists of all 2 × 2 matrices T over the complex numbers with
T M T* = M,   M = ( 1  0  )        det(T) = 1.
                   ( 0  −1 ),
Show that the condition on t0, t1, t2, t3 ∈ R such that
T = ( t0 + i t3   t1 + i t2 )
    ( t1 − i t2   t0 − i t3 )
is an element of SU(1,1) is given by det(T) = t0² + t3² − t1² − t2² = 1.
Problem 26. The unit sphere
S³ := { (x1, x2, x3, x4) ∈ R⁴ : Σ_{j=1}^{4} x_j² = 1 }
we identify with the Lie group SU(2) via the map
(x1, x2, x3, x4) ↦ ( x1 + i x2    −x3 + i x4 )
                   ( x3 + i x4     x1 − i x2 ).
(i) Map the standard basis of R⁴ into SU(2) and express these matrices using the Pauli spin matrices and the 2 × 2 identity matrix.
(ii) Map the Bell basis
(1/√2)(1, 0, 0, 1)^T,   (1/√2)(1, 0, 0, −1)^T,   (1/√2)(0, 1, 1, 0)^T,   (1/√2)(0, 1, −1, 0)^T
into SU(2) and express these matrices using the Pauli spin matrices and the 2 × 2 identity matrix.
Problem 27. Let A be an n × n matrix over C. Show that if A^T = −A, then e^A ∈ O(n,C).
Problem 28. (i) Can any element of the Lie group SU(1,1) be written as
( e^{iα/2}  0          ) ( cosh(ζ/2)  sinh(ζ/2) ) ( e^{iβ/2}  0          )
( 0         e^{−iα/2}  ) ( sinh(ζ/2)  cosh(ζ/2) ) ( 0         e^{−iβ/2}  )?
Each element in this product is an element of the Lie group SU(1,1).
(ii) Can any element of the compact Lie group SU(2) be written as
( cos(α/2)  −sin(α/2) ) ( e^{iγ/2}  0          ) ( cos(β/2)   sin(β/2) )
( sin(α/2)   cos(α/2) ) ( 0         e^{−iγ/2}  ) ( −sin(β/2)  cos(β/2) )?
Problem 29. Consider the Lie group SL(2,R), i.e. the set of all real 2 × 2 matrices with determinant equal to 1. A dynamical system in SL(2,R) can be defined by
M_{k+2} = M_k M_{k+1},   k = 0, 1, 2, ...
with the initial matrices M0, M1 ∈ SL(2,R). Let F_k := tr(M_k). Is
F_{k+3} = F_{k+2} F_{k+1} − F_k,   k = 0, 1, 2, ...?
Prove or disprove. Use the property that for any 2 × 2 matrix A we have (Cayley-Hamilton theorem)
A² − A tr(A) + I2 det(A) = 0_2.
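The trace recursion can be tested numerically with random SL(2,R) seed matrices (NumPy assumed):

```python
import numpy as np

def random_sl2(rng):
    # build an SL(2,R) matrix by rescaling a random matrix with positive determinant
    while True:
        A = rng.standard_normal((2, 2))
        d = np.linalg.det(A)
        if d > 1e-6:
            return A / np.sqrt(d)

rng = np.random.default_rng(3)
M = [random_sl2(rng), random_sl2(rng)]
for k in range(8):
    M.append(M[k] @ M[k + 1])          # M_{k+2} = M_k M_{k+1}

F = [np.trace(m) for m in M]
for k in range(len(F) - 3):
    assert np.isclose(F[k + 3], F[k + 2] * F[k + 1] - F[k])
```

The assertion holds because tr(AB) = tr(A)tr(B) − tr(AB^{-1}) and tr(A^{-1}) = tr(A) in SL(2,R).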
Problem 30. (i) Let α ∈ R. Do the matrices
A(α) = ( cos(α)    i sin(α) )
       ( i sin(α)  cos(α)   )
form a group under matrix multiplication? Note that A(0) = I2, so we have the neutral element. Furthermore det(A(α)) = 1.
(ii) Let α ∈ R. Do the matrices
B(α) = ( cosh(α)     i sinh(α) )
       ( −i sinh(α)  cosh(α)   )
form a group under matrix multiplication? Note that B(0) = I2, so we have a neutral element. Furthermore det(B(α)) = 1.
Problem 31. Consider the 3 × 3 matrix
A(θ) = ( cos(θ)   0  sin(θ) )
       ( 0        1  0      )
       ( −sin(θ)  0  cos(θ) ).
(i) Show that A(θ) is an element of the compact Lie group SO(3,R).
(ii) Show that the eigenvalues of A(θ) are λ1 = 1, λ2 = e^{iθ}, λ3 = e^{−iθ}.
(iii) Show that the 3 × 3 matrix
( sin(θ)cos(φ)   −sin(φ)   −cos(θ)cos(φ) )
( sin(θ)sin(φ)    cos(φ)   −cos(θ)sin(φ) )
( cos(θ)          0         sin(θ)       )
is an element of the compact Lie group SO(3).
Problem 32. Let n be an integer with n ≥ 2. Let p, q be integers with p, q ≥ 1 and n = p + q. Let I_p be the p × p identity matrix and I_q be the q × q identity matrix. Let
I_{p,q} = ( I_p  0    )
          ( 0    −I_q ).
The Lie group O(p,q) is the set of all n × n matrices defined by
O(p,q) := { A ∈ GL(n,R) : A^T I_{p,q} A = I_{p,q} }.
Show that this is the group of all invertible linear maps of R^n that preserve the quadratic form
Σ_{j=1}^{p} x_j² − Σ_{j=p+1}^{n} x_j².
Problem 33. The Lie group SU(2,2) is defined as the group of transformations on the four-dimensional complex space C⁴ leaving invariant the indefinite quadratic form
|z1|² + |z2|² − |z3|² − |z4|².
The Lie algebra su(2,2) is defined as the set of 4 × 4 matrices X with trace 0 and X*L + LX = 0_4, where L is the 4 × 4 matrix
L = ( −I2  0_2 )
    ( 0_2  I2  ).
Is
an element of the Lie algebra su(2,2)? Find exp(zX). Discuss.
Problem 34. Let θ ∈ R.
(i) Is the 2 × 2 matrix
M
cosW
( 0 ) - (
sin^
^
1
( 1
cos(0)Jvill
-.lX i -0O
unitary? Prove or disprove. If so is the matrix an element of the Lie group
SU (2)?
(ii) Is the 8 x 8 matrix
. r //j\
( cos(0)
sin(0) \
I
(l
-.‘M i -.)
unitary? Prove or disprove. If so is the matrix an element of the Lie group
5/7(4)?
(iii) Let ⊕ be the direct sum. Is the 6 × 6 matrix
^ - ( - ¾
¾
3
¾ ) * ; ¼ (I
unitary? Prove or disprove. If so is the matrix an element of the Lie group
51/(4)?
Problem 35. Write down two 2 × 2 matrices A and B which are elements of the Lie group O(2,R) but not elements of SO(2,R).
(i) Is AB an element of the Lie group SO(2,R)?
(ii) Is A ⊗ B an element of the Lie group SO(4,R)?
(iii) Is A ⊕ B an element of SO(4,R)?
Problem 36. Let z = x + iy with x, y ∈ R. Consider the 2 × 2 matrix
( cos(z)  −sin(z) )
( sin(z)   cos(z) ).
One has the identities
cos(x + iy) = cos(x)cos(iy) − sin(x)sin(iy) = cos(x)cosh(y) − i sin(x)sinh(y),
sin(x + iy) = sin(x)cos(iy) + cos(x)sin(iy) = sin(x)cosh(y) + i cos(x)sinh(y).
Let x = 0. Then we arrive at the matrix
M(y) = ( cosh(y)    −i sinh(y) )
       ( i sinh(y)   cosh(y)   ).
(i) Do these matrices (y ∈ R) form a group under matrix multiplication?
(ii) Calculate
X = dM(y)/dy |_{y=0}
and then exp(yX) with y ∈ R. Discuss.
Problem 37. The maximal compact Lie subgroup of SL(2,C) is SU(2). Give a 2 × 2 matrix A which is an element of SL(2,C), but not an element of SU(2).
Problem 38. Let A1, A2, ..., A_n be m × m matrices. We define
σ(A1, ..., A_n) := Σ_{s ∈ S_n} A_{s(1)} ⊗ ... ⊗ A_{s(n)}
where S_n is the permutation group. Let n = 3 and consider the matrices
*-(! i). *-(! o')- *-(S -O0
Find
σ(A1, A2, A3) = Σ_{s ∈ S_3} A_{s(1)} ⊗ A_{s(2)} ⊗ A_{s(3)}.
Problem 39. Let α, β, γ ∈ R. Do the 3 × 3 matrices
( cos(α)   sin(α)  β )
( −sin(α)  cos(α)  γ )
( 0        0       1 )
form a group under matrix multiplication?
Problem 40. (i) Consider the non-compact Lie group SO(1,1,R) with the element
A(α) = ( cosh(α)  sinh(α) )
       ( sinh(α)  cosh(α) ).
Show that the inverse of A(α) is given by
A^{-1}(α) = A(−α) = ( cosh(α)   −sinh(α) )
                    ( −sinh(α)   cosh(α) ).
Show that the eigenvalues of A(α) are given by e^α and e^{−α}.
(ii) Let ⊕ be the direct sum. Find the determinant, eigenvalues and normalized eigenvectors of the 3 × 3 matrix (1) ⊕ A(α).
Problem 41. (i) Let A, B be elements of SL(n,R). Show that A ⊗ B is an element of SL(n²,R).
(ii) Let A, B be elements of SL(n,R). Show that A ⊕ B is an element of SL(2n,R).
(iii) Let A, B be elements of SL(2,R). Let ★ be the star product. Show that A ★ B is an element of SL(4,R).
(iv) Are the matrices
I
0
I
1\
1
’
0)
I
I
0
I
I
/ 1
I
Y =
I
I
Vl
I
0
0
I
I
I
I
I
0
0
1X
I
I
0
I/
elements of SL(3,R) and SL(5,R), respectively?
Problem 42. Let A be an n × n matrix over C. Assume that ||A|| < 1. Then I_n + A is invertible (I_n + A ∈ GL(n,C)). Can we conclude that I_n ⊗ I_n + A ⊗ A is invertible?
Problem 43. Let σ1, σ2, σ3 be the Pauli spin matrices. We define
τ1 = i σ1,   τ2 = i σ2,   τ3 = i σ3
and τ0 = I2. Thus τ_j (j = 0, 1, 2, 3) are elements of the Lie group SU(2).
(i) We define the operator R as (k ≥ 2)
Tj k ) : =
^ ( T j 1T* , ® Tj 2 T* ,
0 - - - 0
TjfcTZl ) .
Find R(τ1 ⊗ τ2 ⊗ τ3).
(ii) Consider the unitary matrix (constructed from the four Bell states)
Il
(I
0
0
0
\1
-i
-i
-I
I
0
0
/I
0
0
0
=►u *
= -L
V2
i /
0
0
0
i
i
1
0
0
0
0
—i )
-I I
Vi
\
0
Show that U(iσ1 ⊗ iσ1)U*, U(iσ2 ⊗ iσ2)U*, U(iσ3 ⊗ iσ3)U* are elements of the Lie group SO(4).
Problem 44. Consider the 8 × 8 permutation matrix
p = (i)(
/ °I
0
0
0
Vo
0
0
0
I
0
0
0
0
0
0
0
I
I
0
0
0
0
0
0
0
I
0
0
0
°0 \
0
0
I
0/
'(i)
and
G = ( 911
\ 921
91A , V 1 = ( * A , V 2 = ( V2A ,
922 J
\ V1,2 J
\V2,2 )
V3 =
\V3,2 )
(i) Show that [P, G ⊗ G ⊗ G] = 0_8.
(ii) Assume that G is the Hadamard matrix. Find the vector
(P(G ⊗ G ⊗ G))(v1 ⊗ v2 ⊗ v3).
Find the eigenvalues and normalized eigenvectors of P. Discuss.
Chapter 19
Lie Algebras and Matrices
A real Lie algebra L is a real vector space together with a bilinear map
[ , ] : L x L → L
called the Lie bracket, such that the following identities hold for all a, b, c ∈ L:
[a, a] = 0
and the so-called Jacobi identity
[a, [b, c]] + [c, [a, b]] + [b, [c, a]] = 0.
It follows that [b, a] = -[a, b]. If A is an associative algebra over a field F (for
example, the n x n matrices over C and matrix multiplication) with the definition
[a, b] := ab - ba,    a, b ∈ A
then A acquires the structure of a Lie algebra.
If X ⊆ L then ⟨X⟩ denotes the Lie subalgebra generated by X, that is, the
smallest Lie subalgebra of L containing X. It consists of all elements obtainable
from X by a finite sequence of vector space operations and Lie multiplications.
A set of generators for L is a subset X ⊆ L such that L = ⟨X⟩. If L has a finite
set of generators we say that it is finitely generated.
Given two Lie algebras L1 and L2, a homomorphism of Lie algebras is a function
f : L1 → L2 that is a linear map between the vector spaces L1 and L2 and that
preserves Lie brackets, i.e. f([a, b]) = [f(a), f(b)] for all a, b ∈ L1.
Problem 1. Consider the n x n matrices E_ij having 1 in the (i, j) position
and 0 elsewhere, where i, j = 1, . . . , n (elementary matrices). Calculate the
commutators. Discuss.
Solution 1. Since E_ij E_kl = δ_jk E_il it follows that
[E_ij, E_kl] = δ_jk E_il - δ_li E_kj.
The coefficients are all ±1 or 0; in particular all of them lie in the field C. Thus
the E_ij are the standard basis in the vector space of all n x n matrices and thus
the generators for the Lie algebra of all n x n matrices with the commutator
[A, B] := AB - BA.
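The commutation relation for the elementary matrices can be checked numerically; a minimal sketch (NumPy assumed, 0-indexed, n = 4 chosen for illustration):

```python
import numpy as np

n = 4

def E(i, j):
    """Elementary matrix with a single 1 in row i, column j (0-indexed)."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

def comm(A, B):
    return A @ B - B @ A

# Check [E_ij, E_kl] = delta_jk E_il - delta_li E_kj for all index choices
ok = all(
    np.array_equal(
        comm(E(i, j), E(k, l)),
        (1.0 if j == k else 0.0) * E(i, l) - (1.0 if l == i else 0.0) * E(k, j),
    )
    for i in range(n) for j in range(n) for k in range(n) for l in range(n)
)
print(ok)
```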
Problem 2. Consider the vector space of the 2 x 2 matrices over R and the
three 2 x 2 matrices with trace equal to 0
E = ( 0 1 )    F = ( 0 0 )    H = ( 1  0 )
    ( 0 0 ),       ( 1 0 ),       ( 0 -1 ).
Show that we have a basis of a Lie algebra.
Solution 2. We only have to calculate the commutators and show that the
right-hand side can be represented as a linear combination of the generators. We
find [E, E] = [F, F] = [H, H] = O2 and
[E, H] = -2E,    [E, F] = H,    [H, F] = -2F.
Thus E, F, H are a basis of a Lie algebra.
Problem 3. Consider the matrices
h1 = ( 1 0 0 )    h2 = ( 0 0 0 )    h3 = ( 0 0 1 )
     ( 0 0 0 )         ( 0 1 0 )         ( 0 0 0 )
     ( 0 0 1 ),        ( 0 0 0 ),        ( 1 0 0 ),

e = ( 0 1 0 )    f = ( 0 0 0 )
    ( 0 0 0 )        ( 1 0 1 )
    ( 0 1 0 ),       ( 0 0 0 ).
Show that the matrices form a basis of a Lie algebra.
Solution 3. We know that all n x n matrices over R or C form a Lie algebra
(gl(n, R) and gl(n, C)) under the commutator. Thus we only have to calculate
the commutators and prove that they can be written as linear combinations of
h1, h2, h3 and e, f. We find
[h1, h2] = [h1, h3] = [h2, h3] = O3,    [e, f] = h1 + h3 - 2h2,
[h1, f] = [h3, f] = -[h2, f] = -f,    [h1, e] = [h3, e] = -[h2, e] = e.
Thus we have a basis of a Lie algebra.
Problem 4. Let A, B be n x n matrices over C. Calculate tr([A, B]). Discuss.

Solution 4. Since the trace is linear and tr(AB) = tr(BA) we have
tr([A, B]) = tr(AB - BA) = tr(AB) - tr(BA) = 0.
Thus the trace of the commutator of any two n x n matrices is 0.
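Since the argument uses only linearity and cyclicity of the trace, it holds for arbitrary complex matrices; a quick numerical sketch (NumPy assumed, random matrices chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# tr([A, B]) = tr(AB) - tr(BA) = 0 up to floating point error
trace_comm = np.trace(A @ B - B @ A)
print(abs(trace_comm) < 1e-12)
```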
Problem 5. An n x n matrix X over C is skew-hermitian if X* = -X. Show
that the commutator of two skew-hermitian matrices is again skew-hermitian.
Discuss.

Solution 5. Let X and Y be n x n skew-hermitian matrices. Then
[X, Y]* = (XY - YX)* = (XY)* - (YX)*
        = Y*X* - X*Y* = YX - XY
        = -[X, Y].
Thus the skew-hermitian matrices form a Lie subalgebra of the Lie algebra
gl(n, C).
Problem 6. The Lie algebra su(m) consists of all m x m matrices X over C
with the conditions X* = -X (i.e. X is skew-hermitian) and tr(X) = 0. Note
that exp(X) is a unitary matrix. Find a basis for su(3).

Solution 6. Since we have the condition tr(X) = 0 the dimension of the Lie
algebra is 8. A possible basis is
( 0 i 0 )   ( 0  1 0 )   ( i  0 0 )   ( 0 0 i )
( i 0 0 )   ( -1 0 0 )   ( 0 -i 0 )   ( 0 0 0 )
( 0 0 0 ),  ( 0  0 0 ),  ( 0  0 0 ),  ( i 0 0 ),

( 0  0 1 )   ( 0 0 0 )   ( 0 0  0 )          ( i 0 0   )
( 0  0 0 )   ( 0 0 i )   ( 0 0  1 )   (1/√3) ( 0 i 0   )
( -1 0 0 ),  ( 0 i 0 ),  ( 0 -1 0 ),         ( 0 0 -2i ).
The eight matrices are linearly independent.
Problem 7. Any fixed element X of a Lie algebra L defines a linear transformation
ad(X) : Z ↦ [X, Z]
for any Z ∈ L. Show that for any K ∈ L we have
[ad(Y), ad(Z)]K = ad([Y, Z])K.
The linear mapping ad gives a representation of the Lie algebra known as the adjoint
representation.

Solution 7. We have
[ad(Y), ad(Z)]K = ad(Y)[Z, K] - ad(Z)[Y, K]
                = [Y, [Z, K]] - [Z, [Y, K]] = [[Y, Z], K]
                = ad([Y, Z])K
where we used the Jacobi identity [Y, [Z, K]] + [K, [Y, Z]] + [Z, [K, Y]] = 0.
Problem 8. There is only one non-commutative Lie algebra L of dimension
2. If x, y are the generators (basis in L), then
[x, y] = x.
(i) Find the adjoint representation of this Lie algebra. Let v, w be two elements
of a Lie algebra. Then we define
ad v(w) := [v, w].
(ii) The Killing form is defined by
κ(x, y) := tr(ad x ad y)
for all x, y ∈ L. Find the Killing form.
Solution 8. (i) Since
ad x(x) = [x, x] = 0,    ad x(y) = [x, y] = x
and
ad y(x) = [y, x] = -x,    ad y(y) = [y, y] = 0
we obtain, with respect to the ordered basis (x, y),
ad x = ( 0 1 )    ad y = ( -1 0 )
       ( 0 0 ),          ( 0  0 ).
(ii) We find
κ(x, x) = tr(ad x ad x) = 0,    κ(x, y) = tr(ad x ad y) = 0,
κ(y, x) = tr(ad y ad x) = 0,    κ(y, y) = tr(ad y ad y) = 1.
This can be written in matrix form as
κ = ( 0 0 )
    ( 0 1 ).
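The adjoint matrices and the Killing form of this two-dimensional algebra can be reproduced numerically; a sketch (NumPy assumed, ordered basis (x, y) as above):

```python
import numpy as np

# Adjoint matrices of the two-dimensional algebra [x, y] = x,
# acting on coordinate columns in the ordered basis (x, y)
ad_x = np.array([[0.0, 1.0], [0.0, 0.0]])   # ad x: x -> 0,  y -> x
ad_y = np.array([[-1.0, 0.0], [0.0, 0.0]])  # ad y: x -> -x, y -> 0

# Killing form kappa(u, v) = tr(ad u ad v)
kappa = np.array([[np.trace(a @ b) for b in (ad_x, ad_y)] for a in (ad_x, ad_y)])
print(kappa)
```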
Problem 9. Consider the Lie algebra L = sl(2, F) with char(F) ≠ 2. Take as
the standard basis for L the three matrices
E = ( 0 1 )    F = ( 0 0 )    H = ( 1  0 )
    ( 0 0 ),       ( 1 0 ),       ( 0 -1 ).
(i) Find the commutators.
(ii) Find the adjoint representation of L with the ordered basis {E, H, F}.
(iii) Show that L is simple. If L has no ideals except itself and 0, and if moreover
[L, L] ≠ 0, we call L simple. A subspace I of a Lie algebra L is called an ideal
of L if x ∈ L, y ∈ I together imply [x, y] ∈ I. In other words show that the Lie
algebra sl(2, R) has no proper nontrivial ideals.
Solution 9. (i) We have [E, F] = H, [H, E] = 2E, [H, F] = -2F.
(ii) We find
ad(E) = ( 0 -2 0 )    ad(F) = ( 0  0 0 )    ad(H) = ( 2 0 0  )
        ( 0  0 1 )            ( -1 0 0 )            ( 0 0 0  )
        ( 0  0 0 ),           ( 0  2 0 ),           ( 0 0 -2 ).
The eigenvalues of ad(H) are 2, -2, 0. Since char(F) ≠ 2, these eigenvalues are
distinct.
(iii) Assume that J ⊆ sl(2, R) is a nonzero ideal. Let A ∈ J. Then
[E, A] = ( a21  a22 - a11 )
         ( 0    -a21      ) ∈ J.
It follows that [E, [E, A]] = -2a21 E ∈ J. With -2a21 = 1 we obtain the
element E. Thus E ∈ J. Consider
B = A - a12 E = ( a11 0   )
                ( a21 a22 ) ∈ J
since J is a subspace. Then we find the commutator
[H, B] = ( 0     0 )
         ( -2a21 0 ).
With -2a21 = 1 we obtain the element F. Thus F ∈ J. Finally [E, F] = H.
Thus H ∈ J and the result follows.
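The adjoint representation found in (ii) can be generated directly from the matrices; a sketch (NumPy assumed; the helper `coords`, which expresses a traceless 2 x 2 matrix in the ordered basis {E, H, F}, is an assumption of this sketch):

```python
import numpy as np

E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [E, H, F]  # ordered basis {E, H, F}

def comm(A, B):
    return A @ B - B @ A

def coords(M):
    """Coordinates of a traceless 2x2 matrix M = aE + bH + cF."""
    return np.array([M[0, 1], M[0, 0], M[1, 0]])

def ad(X):
    # j-th column = coordinates of [X, basis_j]
    return np.column_stack([coords(comm(X, b)) for b in basis])

print(ad(E))  # matches the 3x3 matrix in the solution
print(ad(H))  # diagonal with the distinct eigenvalues 2, 0, -2
```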
Problem 10. Consider the four-dimensional Lie algebra gl(2, R). The elementary
matrices
e1 = ( 1 0 )   e2 = ( 0 1 )   e3 = ( 0 0 )   e4 = ( 0 0 )
     ( 0 0 ),       ( 0 0 ),       ( 1 0 ),       ( 0 1 )
form a basis of gl(2, R). Find the adjoint representation.
Solution 10. Since
ad e1(e1) = [e1, e1] = O2,    ad e1(e2) = [e1, e2] = e2,
ad e1(e3) = [e1, e3] = -e3,   ad e1(e4) = [e1, e4] = O2
we have (e1 e2 e3 e4) ad e1 = (0 e2 -e3 0). Therefore
ad(e1) = ( 0 0 0  0 )
         ( 0 1 0  0 )
         ( 0 0 -1 0 )
         ( 0 0 0  0 ).
Analogously, we have
ad(e2) = ( 0  0 1  0 )    ad(e3) = ( 0 -1 0 0  )    ad(e4) = ( 0 0  0 0 )
         ( -1 0 0  1 )             ( 0 0  0 0  )             ( 0 -1 0 0 )
         ( 0  0 0  0 )             ( 1 0  0 -1 )             ( 0 0  1 0 )
         ( 0  0 -1 0 ),            ( 0 0  0 0  ),            ( 0 0  0 0 ).

Problem 11. Show that the matrices
J1 = ( 0 0 0  )    J2 = ( 0  0 1 )    J3 = ( 0 -1 0 )
     ( 0 0 -1 )         ( 0  0 0 )         ( 1 0  0 )
     ( 0 1 0  ),        ( -1 0 0 ),        ( 0 0  0 )
form generators of a Lie algebra. Is the Lie algebra simple? A Lie algebra is
simple if it contains no ideals other than L and 0 .
S olu tion 11. We find the commutators [Ji, J2] = J 3 , [J2, J 3 ] = Ji, [J3 , Ji] =
J2. The Lie algebra contains no ideals other than L and 0 . Thus the given Lie
algebra is simple.
Problem 12. The roots of a semisimple Lie algebra are the Lie algebra weights
occurring in its adjoint representation. The set of roots forms the root system,
and is completely determined by the semisimple Lie algebra. Consider the
semisimple Lie algebra sl(2, R) with the generators
E = ( 0 1 )    F = ( 0 0 )    H = ( 1  0 )
    ( 0 0 ),       ( 1 0 ),       ( 0 -1 ).
Find the roots.

Solution 12. We obtain ad H(E) = [H, E] = 2E, ad H(F) = [H, F] = -2F
and [E, F] = H. Thus there are two roots of sl(2, R), given by α(H) = 2 and
α(H) = -2. The Lie algebraic rank of sl(2, R) is one, and it has one positive
root.
Problem 13. The simple Lie algebra sl(2, R) is spanned by the matrices
E = ( 0 1 )    F = ( 0 0 )    H = ( 1  0 )
    ( 0 0 ),       ( 1 0 ),       ( 0 -1 )
with the commutators [H, E] = 2E, [H, F] = -2F and [E, F] = H.
(i) Define
C := (1/2)H^2 + EF + FE.
Find C. Calculate the commutators [C, H], [C, E], [C, F]. Show that C can be
written in the form
C = (1/2)H^2 + H + 2FE.
(ii) Consider the vector v = (1 0)^T in R^2. Calculate the vectors Hv, Ev, Fv
and Cv. Give an interpretation.
(iii) The universal enveloping algebra U(sl(2, R)) of the Lie algebra sl(2, R) is
the associative algebra with generators H, E, F and the relations
HE - EH = 2E,    HF - FH = -2F,    EF - FE = H.
Find a basis of the Lie algebra sl(2, R) so that all matrices are invertible. Find
the inverse matrices of these matrices. Give the commutation relations.
(iv) Which of the E, F, H matrices are nonnormal? Use linear combinations to
find a basis where all elements are normal matrices.
Solution 13. (i) Straightforward calculation yields
C = (1/2) ( 1 0  )^2 + ( 0 1 )( 0 0 ) + ( 0 0 )( 0 1 ) = (3/2) ( 1 0 )
          ( 0 -1 )     ( 0 0 )( 1 0 )   ( 1 0 )( 0 0 )         ( 0 1 ).
Since C is the 2 x 2 identity matrix times 3/2 we find [C, E] = O2, [C, F] = O2,
[C, H] = O2. Since EF = FE + [E, F] = FE + H we obtain
C = (1/2)H^2 + H + 2FE.
(ii) We find
Hv = v,    Ev = (0 0)^T,    Fv = (0 1)^T,    Cv = (3/2)v.
Thus Hv = v is an eigenvalue equation with eigenvalue +1. Ev = 0 is also
an eigenvalue equation with eigenvalue 0 and finally Cv = (3/2)v is an eigenvalue
equation with eigenvalue 3/2.
(iii) The matrix H is already invertible with H^{-1} = H. For the other two
matrices we set
Ẽ := E - F = ( 0  1 )    F̃ := E + F = ( 0 1 )
             ( -1 0 ),                ( 1 0 )
with F̃^{-1} = F̃ and Ẽ^{-1} = Ẽ^T. The commutators are
[Ẽ, H] = -2F̃,    [F̃, H] = -2Ẽ,    [Ẽ, F̃] = 2H.
(iv) Obviously E and F are nonnormal, but H is normal. A basis with all
elements normal matrices is given by
E + F = ( 0 1 )    E - F = ( 0  1 )    H = ( 1  0 )
        ( 1 0 ),           ( -1 0 ),       ( 0 -1 ).
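The value C = (3/2)I2 and the rewritten form can be confirmed numerically; a minimal sketch (NumPy assumed):

```python
import numpy as np

E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

C = 0.5 * H @ H + E @ F + F @ E
print(C)  # (3/2) * I2

# The rewritten form C = H^2/2 + H + 2FE agrees
C2 = 0.5 * H @ H + H + 2.0 * F @ E
print(np.allclose(C, C2))
```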
Problem 14. Let {e, f} with
e = ( 0 1 )    f = ( -1 0 )
    ( 0 0 ),       ( 0  0 )
be a basis for a Lie algebra. We have [e, f] = e.
(i) Is { e ⊗ I2, f ⊗ I2 } a basis of a Lie algebra?
(ii) Is { e ⊗ e, e ⊗ f, f ⊗ e, f ⊗ f } a basis of a Lie algebra?
Solution 14. (i) Let A, B, C, D be n x n matrices. Then we have the identity
[A ⊗ B, C ⊗ D] = (AC) ⊗ (BD) - (CA) ⊗ (DB).
Using this identity we obtain
[e ⊗ I2, f ⊗ I2] = (ef) ⊗ I2 - (fe) ⊗ I2 = (ef - fe) ⊗ I2 = e ⊗ I2,
[e ⊗ I2, e ⊗ I2] = (ee) ⊗ I2 - (ee) ⊗ I2 = O2 ⊗ O2,
[f ⊗ I2, f ⊗ I2] = (ff) ⊗ I2 - (ff) ⊗ I2 = O2 ⊗ O2.
Thus { e ⊗ I2, f ⊗ I2 } is a basis of a Lie algebra.
(ii) Straightforward calculation yields
[e ⊗ e, e ⊗ e] = O4,    [e ⊗ f, f ⊗ f] = -e ⊗ f,
[e ⊗ e, e ⊗ f] = O4,    [f ⊗ e, f ⊗ e] = O4,
[e ⊗ e, f ⊗ e] = O4,    [f ⊗ e, f ⊗ f] = -f ⊗ e,
[e ⊗ f, e ⊗ f] = O4,    [f ⊗ f, f ⊗ f] = O4,
[e ⊗ f, f ⊗ e] = O4,
where O4 is the 4 x 4 zero matrix. Thus the set is a basis of a Lie algebra.
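The identity [A ⊗ B, C ⊗ D] = (AC) ⊗ (BD) - (CA) ⊗ (DB) follows from the mixed-product property of the Kronecker product; a numerical sketch with random matrices (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

# [A (x) B, C (x) D] computed directly ...
lhs = np.kron(A, B) @ np.kron(C, D) - np.kron(C, D) @ np.kron(A, B)
# ... and via the mixed-product property
rhs = np.kron(A @ C, B @ D) - np.kron(C @ A, D @ B)
print(np.allclose(lhs, rhs))
```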
Problem 15. We know that
E = ( 0 1 )    H = ( 1  0 )    F = ( 0 0 )
    ( 0 0 ),       ( 0 -1 ),       ( 1 0 )
is an ordered basis of the simple Lie algebra sl(2, R) with
[E, H] = -2E,    [E, F] = H,    [F, H] = 2F.
Consider
E = ( 0 0 0 1 )    H = ( 1 0 0  0  )    F = ( 0 0 0 0 )
    ( 0 0 1 0 )        ( 0 1 0  0  )        ( 0 0 0 0 )
    ( 0 0 0 0 )        ( 0 0 -1 0  )        ( 0 1 0 0 )
    ( 0 0 0 0 ),       ( 0 0 0  -1 ),       ( 1 0 0 0 ).
Find the commutators [E, H], [E, F], [F, H].

Solution 15. We find [E, H] = -2E, [E, F] = H, [F, H] = 2F.
Problem 16. The 2 x 2 matrices
E = ( 0 1 )    F = ( 0 0 )    H = ( 1  0 )
    ( 0 0 ),       ( 1 0 ),       ( 0 -1 )
form a basis of the Lie algebra sl(2, R) with the commutation relations [E, H] =
-2E, [E, F] = H, [F, H] = 2F.
(i) Let ★ be the star product. Do the nine 4 x 4 matrices
E★E, E★F, E★H, F★E, F★F, F★H, H★E, H★F, H★H
form a basis of a Lie algebra?
(ii) Define the 4 x 4 matrices
H̃ := H ⊗ I2 + I2 ⊗ H,   Ẽ := E ⊗ H^{-1} + H ⊗ E,   F̃ := F ⊗ H^{-1} + H ⊗ F.
Find the commutators [H̃, Ẽ], [H̃, F̃], [Ẽ, F̃].
(iii) Consider
H̃ = H ⊗ I2 + I2 ⊗ H,   Ẽ = E ⊗ H + H ⊗ E,   F̃ = F ⊗ H + H ⊗ F.
Find the commutators [H̃, Ẽ], [H̃, F̃], [Ẽ, F̃] and thus show that we have a basis
of a Lie algebra. Is this Lie algebra isomorphic to sl(2, R)?
Solution 16. (i) Yes. We have
[E★E, F★F] = H★H,   [E★E, H★H] = -2E★E,   [F★F, H★H] = 2F★F.
(ii) Note that H^{-1} = H. Now we find
[H̃, Ẽ] = 2Ẽ,    [H̃, F̃] = -2F̃,    [Ẽ, F̃] = H̃.
(iii) We note that H^2 = I2, EH = -E, HE = E, FH = F, HF = -F. Thus
[Ẽ, F̃] = H̃,    [Ẽ, H̃] = -2Ẽ,    [F̃, H̃] = 2F̃.
Thus the two Lie algebras are isomorphic.
Problem 17. A basis for the Lie algebra su(N), for odd N, may be built from
two unitary unimodular N x N matrices
g = ( 1 0 0   ...  0       )    h = ( 0 1 0 ... 0 )
    ( 0 ω 0   ...  0       )        ( 0 0 1 ... 0 )
    ( 0 0 ω^2 ...  0       )        ( . . .     . )
    ( .       ...  .       )        ( 0 0 0 ... 1 )
    ( 0 0 0   ...  ω^{N-1} ),       ( 1 0 0 ... 0 )
where ω is a primitive N-th root of unity, i.e. with period not smaller than N,
here taken to be exp(4πi/N). We obviously have
hg = ωgh.    (1)
(i) Find g^N and h^N.
(ii) Find tr(g).
(iii) Let m = (m1, m2), n = (n1, n2) and define
m x n := m1 n2 - m2 n1
where m1 = 0, 1, . . . , N - 1 and m2 = 0, 1, . . . , N - 1. The complete set of
unitary unimodular N x N matrices
J_{m1,m2} := ω^{m1 m2 / 2} g^{m1} h^{m2}
suffice to span the Lie algebra su(N), where J_{0,0} = I_N. Find J*_m.
(iv) Calculate J_m J_n.
(v) Find the commutator [J_m, J_n].
Solution 17. (i) Since ω^N = 1 we find g^N = I_N. We also obtain h^N = I_N.
(ii) Since
1 + ω + ω^2 + ··· + ω^{N-1} = 0
we find tr(g) = 0.
(iii) Obviously we have J*_{(m1,m2)} = J_{(-m1,-m2)}.
(iv) Using equation (1) we find J_m J_n = ω^{n x m / 2} J_{m+n}.
(v) Using the result from (iv) we obtain
[J_m, J_n] = -2i sin((2π/N) m x n) J_{m+n}.
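The defining relation hg = ωgh and parts (i), (ii) can be checked for a concrete odd N; a sketch (NumPy assumed; the explicit shift form of h is one concrete choice consistent with the text):

```python
import numpy as np

N = 5                       # odd N, chosen for illustration
w = np.exp(4j * np.pi / N)  # primitive N-th root of unity used in the text

g = np.diag([w**k for k in range(N)])
h = np.roll(np.eye(N), -1, axis=0)  # h e_k = e_{k-1} (indices mod N)

print(np.allclose(h @ g, w * g @ h))                         # hg = w gh
print(np.allclose(np.linalg.matrix_power(g, N), np.eye(N)))  # g^N = I_N
print(np.allclose(np.linalg.matrix_power(h, N), np.eye(N)))  # h^N = I_N
print(abs(np.trace(g)) < 1e-12)                              # tr(g) = 0
```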
Problem 18. Let L be a finite dimensional Lie algebra and Z(L) the center
of L. Show that ad : L → gl(L) is a homomorphism of the Lie algebra L with
kernel Z(L).

Solution 18. Using the bilinearity of the commutator we have
ad(cx) = c ad(x),    ad(x + y) = ad(x) + ad(y)
where c ∈ F and x, y ∈ L. Now let x, y, z ∈ L. Then we have
ad([x, y])z = [[x, y], z] = [x, [y, z]] - [y, [x, z]]
            = ad(x)[y, z] - ad(y)[x, z]
            = ad(x) ad(y)(z) - ad(y) ad(x)(z)
            = [ad(x), ad(y)](z)
where we used the Jacobi identity. Furthermore x ∈ ker(ad) if and only if
0 = ad(x)y = [x, y] for every y ∈ L, if and only if x ∈ Z(L). Thus
ker(ad) = Z(L).

Problem 19. Consider two 2 x 2 matrices A, B over R. Calculate the commutator
C = [A, B] and check whether C can be written as a linear combination of A
and B. If so we have a basis of a Lie algebra.
Solution 19. We find C = [A, B] = A - B.

Problem 20. A Chevalley basis for the semisimple Lie algebra sl(3, R) is given
by
X1 = ( 0 1 0 )    X2 = ( 0 0 0 )    X3 = ( 0 0 1 )
     ( 0 0 0 )         ( 0 0 1 )         ( 0 0 0 )
     ( 0 0 0 ),        ( 0 0 0 ),        ( 0 0 0 ),

H1 = ( 1 0  0 )    H2 = ( 0 0 0  )
     ( 0 -1 0 )         ( 0 1 0  )
     ( 0 0  0 ),        ( 0 0 -1 ),
where Y_j = X_j^T for j = 1, 2, 3. The Lie algebra has rank 2 owing to H1, H2
and [H1, H2] = O3. Another basis could be formed by looking at the linear
combinations
U_j = X_j + Y_j,    V_j = X_j - Y_j.
(i) Find the table of the commutators.
(ii) Calculate the vectors of commutators
( [H1, X1] )    ( [H1, X2] )    ( [H1, X3] )
( [H2, X1] ),   ( [H2, X2] ),   ( [H2, X3] )
and thus find the roots.

Solution 20. (i) The nonzero commutators are
[X1, X2] = X3,    [X1, Y1] = H1,    [X1, Y3] = -Y2,
[X2, Y2] = H2,    [X2, Y3] = Y1,    [X3, Y1] = -X2,
[X3, Y2] = X1,    [X3, Y3] = H1 + H2,    [Y1, Y2] = -Y3,
[H1, X1] = 2X1,   [H1, X2] = -X2,   [H1, X3] = X3,
[H1, Y1] = -2Y1,  [H1, Y2] = Y2,    [H1, Y3] = -Y3,
[H2, X1] = -X1,   [H2, X2] = 2X2,   [H2, X3] = X3,
[H2, Y1] = Y1,    [H2, Y2] = -2Y2,  [H2, Y3] = -Y3.
(ii) We find
( [H1, X1] )   ( 2X1 )    ( [H1, X2] )   ( -X2 )    ( [H1, X3] )   ( X3 )
( [H2, X1] ) = ( -X1 ),   ( [H2, X2] ) = ( 2X2 ),   ( [H2, X3] ) = ( X3 )
with the roots (2, -1)^T, (-1, 2)^T, (1, 1)^T.
Problem 21. Consider the 3 x 3 matrices
h1 = ( 1 0 0 )    h2 = ( 0 0 0 )    h3 = ( 0 0 1 )
     ( 0 0 0 )         ( 0 1 0 )         ( 0 0 0 )
     ( 0 0 1 ),        ( 0 0 0 ),        ( 1 0 0 ),

e = ( 0 1 0 )    f = ( 0 0 0 )
    ( 0 0 0 )        ( 1 0 1 )
    ( 0 1 0 ),       ( 0 0 0 ).
Show that the matrices form a basis of a Lie algebra.
Solution 21. We know that all n x n matrices over R or C form a Lie algebra
(gl(n, R) and gl(n, C)) under the commutator. Thus we only have to calculate
the commutators and prove that they can be written as linear combinations of
h1, h2, h3 and e, f. We find
[h1, h2] = [h1, h3] = [h2, h3] = O3,    [e, f] = h1 + h3 - 2h2,
[h1, f] = [h3, f] = -[h2, f] = -f,    [h1, e] = [h3, e] = -[h2, e] = e.
Thus we have a basis of a Lie algebra.
Problem 22. Consider the two 2 x 2 matrices
A = ( 1 0  )    B = ( 0 1 )
    ( 0 -1 ),       ( 1 0 ),
where A is the Pauli spin matrix σ3 and B the Pauli spin matrix σ1. The two
matrices A, B are linearly independent. Let A, B be the generators of a Lie
algebra. Classify the Lie algebra generated.

Solution 22. We have [A, B] = 2C, where
C = iσ2 = ( 0  1 )
          ( -1 0 ).
The matrices A, B, C are linearly independent. Then we find
[A, B] = 2C,    [A, C] = 2B,    [B, C] = -2A.
From the commutation relations we see that A, B, C is a basis of a simple Lie
algebra.
Problem 23. The gl(1|1) superalgebra involves two even (denoted by H and
Z) and two odd (denoted by E, F) generators. The following commutation and
anti-commutation relations hold
[Z, E] = [Z, F] = [Z, H] = 0,
[H, E] = E,    [H, F] = -F,    [E, F]+ = Z
and E^2 = F^2 = 0. Find a 2 x 2 matrix representation.

Solution 23. A possible choice is
H = (1/2) ( 1 0  )    Z = ( 1 0 )    E = ( 0 1 )    F = ( 0 0 )
          ( 0 -1 ),       ( 0 1 ),       ( 0 0 ),       ( 1 0 ).
Problem 24. A basis of the simple Lie algebra sl(2, R) is given by the traceless
2 x 2 matrices
X1 = (1/2) ( 0 1 )    X2 = (1/2) ( -1 0 )    X3 = (1/2) ( 0  1 )
           ( 1 0 ),              ( 0  1 ),              ( -1 0 ).
(i) Find the commutators [X1, X2], [X2, X3], [X3, X1].
(ii) Let z ∈ C. Find e^{zX1}, e^{zX2}, e^{zX3}.
(iii) Let u, v, r ∈ R. Show that
g(u, v, r) = e^{uX3} e^{rX2} e^{vX3}
is an element of the Lie group SL(2, R).
(iv) Find g(u, v, r)^{-1}.

Solution 24. (i) We obtain [X1, X2] = X3, [X2, X3] = -X1, [X3, X1] = -X2.
(ii) We find
e^{zX1} = ( cosh(z/2) sinh(z/2) )    e^{rX2} = ( e^{-r/2} 0       )
          ( sinh(z/2) cosh(z/2) ),             ( 0       e^{r/2} ),
e^{zX3} = ( cos(z/2)  sin(z/2) )
          ( -sin(z/2) cos(z/2) ).
(iii) Using the result from (ii) we find that g(u, v, r) is an element of the Lie
group SL(2, R).
(iv) We obtain
g(u, v, r)^{-1} = e^{-vX3} e^{-rX2} e^{-uX3}.
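The three closed forms in (ii) can be compared against a truncated exponential series; a sketch (NumPy assumed; the Taylor helper `expm` is an assumption of this sketch, not a library routine):

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via a truncated Taylor series (adequate here)."""
    R = np.eye(M.shape[0], dtype=complex)
    T = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        T = T @ M / k
        R = R + T
    return R

X1 = 0.5 * np.array([[0., 1.], [1., 0.]])
X2 = 0.5 * np.array([[-1., 0.], [0., 1.]])
X3 = 0.5 * np.array([[0., 1.], [-1., 0.]])

z, r = 0.7, 0.3
c, s = np.cosh(z / 2), np.sinh(z / 2)
ok1 = np.allclose(expm(z * X1), [[c, s], [s, c]])
ok2 = np.allclose(expm(r * X2), [[np.exp(-r / 2), 0], [0, np.exp(r / 2)]])
ok3 = np.allclose(expm(z * X3), [[np.cos(z / 2), np.sin(z / 2)],
                                 [-np.sin(z / 2), np.cos(z / 2)]])
print(ok1, ok2, ok3)
```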
Problem 25. Consider the semisimple Lie algebra sl(n + 1, F). Let E_ij
(i, j ∈ {1, 2, . . . , n + 1}) denote the standard basis, i.e. (n + 1) x (n + 1) matrices
with all entries zero except for the entry in the i-th row and j-th column which
is one. We can form a Cartan-Weyl basis with
H_j := E_{j,j} - E_{j+1,j+1},    j ∈ {1, 2, . . . , n}.
Show that the E_ij are root vectors for i ≠ j, i.e. there exists λ_{H,i,j} ∈ F such that
[H, E_ij] = λ_{H,i,j} E_ij
for all H ∈ span{H1, . . . , Hn}.
Solution 25. Obviously [H_j, H_k] = 0 since H_j and H_k are diagonal. We have
[E_ij, E_kl] = E_ij E_kl - E_kl E_ij = δ_jk E_il - δ_li E_kj.
It follows that
[E_{k,k}, E_ij] = (δ_ik - δ_jk) E_ij
and therefore
[H_k, E_ij] = (δ_ik - δ_jk - δ_{i,k+1} + δ_{j,k+1}) E_ij.
Let
H = Σ_{j=1}^{n} λ_j H_j,    λ1, . . . , λn ∈ F.
Then, for i ≠ j, we find (with the convention λ_0 = λ_{n+1} = 0)
[H, E_ij] = Σ_{k=1}^{n} λ_k (δ_ik - δ_jk - δ_{i,k+1} + δ_{j,k+1}) E_ij
          = (λ1 - λ_j + λ_{j-1}) E_ij              for i = 1,
            (λ_i - λ1 - λ_{i-1}) E_ij              for j = 1,
            (λ_i - λ_j - λ_{i-1} + λ_{j-1}) E_ij   otherwise
so that λ_{H,i,j} is given by
λ_{H,i,j} = λ1 - λ_j + λ_{j-1}              for i = 1,
            λ_i - λ1 - λ_{i-1}              for j = 1,
            λ_i - λ_j - λ_{i-1} + λ_{j-1}   otherwise.
Problem 26. Consider the vector space R^3 and the vector product x. The
vector product is not associative. The associator of three vectors u, v, w is
defined by
ass(u, v, w) := (u x v) x w - u x (v x w).
The associator measures the failure of associativity.
(i) Consider the unit vectors u = e1, v = e2, w = e3 of the standard basis.
Find the associator.
(ii) Consider normalized vectors u, v, w with u x v = -w, v x w = -u,
w x u = -v. Find the associator.

Solution 26. (i) Since u x v = w, v x w = u, w x u = v we find that
ass(u, v, w) = 0.
(ii) Since u x v = -w, v x w = -u, w x u = -v we find that the associator is
the zero vector.
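The associator is easy to explore numerically; a sketch (NumPy assumed; the second triple is an extra illustration showing that the associator is nonzero in general):

```python
import numpy as np

def associator(u, v, w):
    """(u x v) x w - u x (v x w): measures failure of associativity."""
    return np.cross(np.cross(u, v), w) - np.cross(u, np.cross(v, w))

e1, e2, e3 = np.eye(3)

print(associator(e1, e2, e3))  # zero vector, as in part (i)
print(associator(e1, e1, e2))  # nonzero: the cross product is not associative
```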
Problem 27. Consider vectors in the vector space R^3 and the vector product
x. Consider the mapping of the vectors in R^3 into the 3 x 3 skew-symmetric
matrices
( a )          ( 0  -c b  )
( b )  ↦  M =  ( c  0  -a )
( c )          ( -b a  0  ).
Calculate the vector product and the commutator [M1, M2], where M1, M2
correspond to the vectors (a1, b1, c1)^T and (a2, b2, c2)^T. Discuss.

Solution 27. For the vector product we find
( a1 )   ( a2 )   ( b1 c2 - b2 c1 )
( b1 ) x ( b2 ) = ( c1 a2 - c2 a1 )
( c1 )   ( c2 )   ( a1 b2 - a2 b1 ).
For the commutator [M1, M2] we obtain the skew-symmetric matrix that
corresponds to this vector under the mapping given above. Thus we have an
isomorphism of Lie algebras.
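The isomorphism between (R^3, x) and the skew-symmetric 3 x 3 matrices can be verified on random vectors; a sketch (NumPy assumed, using the standard hat map):

```python
import numpy as np

def hat(v):
    """Standard identification of R^3 with 3x3 skew-symmetric matrices."""
    a, b, c = v
    return np.array([[0., -c, b],
                     [c, 0., -a],
                     [-b, a, 0.]])

rng = np.random.default_rng(2)
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)

M1, M2 = hat(v1), hat(v2)
# [M1, M2] = hat(v1 x v2): the hat map is a Lie algebra isomorphism
print(np.allclose(M1 @ M2 - M2 @ M1, hat(np.cross(v1, v2))))
```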
Problem 28. A basis for the Lie algebra su(3) is given by the eight 3 x 3
skew-hermitian matrices
X1 = ( 0 i 0 )   X2 = ( 0  1 0 )   X3 = ( i 0  0 )   X4 = ( 0 0 i )
     ( i 0 0 )        ( -1 0 0 )        ( 0 -i 0 )        ( 0 0 0 )
     ( 0 0 0 ),       ( 0  0 0 ),       ( 0 0  0 ),       ( i 0 0 ),

X5 = ( 0  0 1 )   X6 = ( 0 0 0 )   X7 = ( 0 0  0 )          ( i 0 0   )
     ( 0  0 0 )        ( 0 0 i )        ( 0 0  1 )   X8 = (1/√3)( 0 i 0   )
     ( -1 0 0 ),       ( 0 i 0 ),       ( 0 -1 0 ),         ( 0 0 -2i ).
Let k ∈ {1, . . . , 8} be fixed. Find
Σ_{j=1}^{8} X_j X_j    and    Σ_{j=1}^{8} X_j X_k X_j.
Express the results in terms of I3 and X_k.
Solution 28. We obtain
Σ_{j=1}^{8} X_j X_j = -(16/3) I3,    Σ_{j=1}^{8} X_j X_k X_j = (2/3) X_k.
Problem 29. Consider the non-commutative two-dimensional Lie algebra with
[A, B] = A, for example
A = ( 0 1 )    B = ( 0 0 )
    ( 0 0 ),       ( 0 1 ).
Show that the matrices
{ A ⊗ I2 + I2 ⊗ A,  B ⊗ I2 + I2 ⊗ B }
also form a non-commutative Lie algebra under the commutator. Discuss.
Solution 29. We have
[A ⊗ I2 + I2 ⊗ A, B ⊗ I2 + I2 ⊗ B] = A ⊗ I2 + I2 ⊗ A.
Thus the two Lie algebras are isomorphic.
Problem 30. Consider the three 3 x 3 matrices
A = ( 0 -1 0 )    B = ( 0 0 -1 )    C = ( 0 0 0  )
    ( 1 0  0 )        ( 0 0 0  )        ( 0 0 -1 )
    ( 0 0  0 ),       ( 1 0 0  ),       ( 0 1 0  )
which satisfy the commutation relations [A, B] = C, [B, C] = A, [C, A] = B.
Thus we have a basis of the simple Lie algebra so(3). Calculate the commutators
[A ⊗ I3, B ⊗ I3],    [B ⊗ I3, C ⊗ I3],    [C ⊗ I3, A ⊗ I3].

Solution 30. We have
[A ⊗ I3, B ⊗ I3] = [A, B] ⊗ I3 = C ⊗ I3.
Analogously we have [B ⊗ I3, C ⊗ I3] = A ⊗ I3, [C ⊗ I3, A ⊗ I3] = B ⊗ I3.
Problem 31. (i) The Heisenberg algebra h(3) is a three-dimensional non-commutative
Lie algebra with basis E1, E2, E3 and
[E1, E2] = E3,    [E1, E3] = [E2, E3] = 0.
An element T = t1 E1 + t2 E2 + t3 E3 in h(3) has the matrix representation
T = ( 0 t1 t3 )
    ( 0 0  t2 )
    ( 0 0  0  ).
Find exp(T).
(ii) The corresponding Lie group H(3) is the group of 3 x 3 upper triangular
matrices with 1's on the main diagonal
H(3) = { g = ( 1 α1 α3 )                       }
       {     ( 0 1  α2 )  :  α1, α2, α3 ∈ R  }
       {     ( 0 0  1  )                       }.
Find g^{-1} and gTg^{-1}.
Solution 31. (i) We have
exp(T) = ( 1 t1 t3 + (1/2) t1 t2 )
         ( 0 1  t2               )
         ( 0 0  1                ).
(ii) We obtain
g^{-1} = ( 1 -α1 α1 α2 - α3 )
         ( 0 1   -α2        )
         ( 0 0   1          ),
gTg^{-1} = ( 0 t1 t3 - α2 t1 + α1 t2 )
           ( 0 0  t2                 )
           ( 0 0  0                  ).
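Since T is nilpotent the exponential series terminates after the quadratic term; a numerical sketch of the formula for exp(T) (NumPy assumed, sample values chosen for illustration):

```python
import numpy as np

t1, t2, t3 = 0.4, -1.1, 0.25
T = np.array([[0., t1, t3],
              [0., 0., t2],
              [0., 0., 0.]])

# T is nilpotent: T^3 = 0, so exp(T) = I + T + T^2/2 exactly
expT = np.eye(3) + T + T @ T / 2.0
ok = np.allclose(expT, [[1., t1, t3 + t1 * t2 / 2.0],
                        [0., 1., t2],
                        [0., 0., 1.]])
print(ok)
```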
Problem 32. Let A1, A2, . . . , Ar be n x n matrices which form a basis of a
non-commutative Lie algebra. Let B1, B2, . . . , Bs be n x n matrices which form
a basis of a commutative Lie algebra. Let ⊗ be the Kronecker product and form
all elements
A_j ⊗ B_k,    j = 1, . . . , r,  k = 1, . . . , s.
Calculate the commutators
[A_j ⊗ B_k, A_ℓ ⊗ B_m],    j, ℓ = 1, . . . , r,  k, m = 1, . . . , s.
Discuss.

Solution 32. We have
[A_j ⊗ B_k, A_ℓ ⊗ B_m] = (A_j A_ℓ) ⊗ (B_k B_m) - (A_ℓ A_j) ⊗ (B_m B_k).
Since B_k B_m = B_m B_k we have
[A_j ⊗ B_k, A_ℓ ⊗ B_m] = [A_j, A_ℓ] ⊗ (B_k B_m).
Thus all the elements A_j ⊗ B_k form a basis of a Lie algebra.
Problem 33. Consider the Heisenberg algebra h(1) with the generators given
by n x n matrices H, X1, X2 and the commutation relations
[H, X1] = 0_n,    [H, X2] = 0_n,    [X1, X2] = H.
Find the commutators for the matrices H ⊗ I_n, X1 ⊗ I_n, X2 ⊗ I_n.

Solution 33. We find
[H ⊗ I_n, X1 ⊗ I_n] = 0_{n^2},  [H ⊗ I_n, X2 ⊗ I_n] = 0_{n^2},  [X1 ⊗ I_n, X2 ⊗ I_n] = H ⊗ I_n.
Thus we find another representation of the Heisenberg algebra.
Problem 34. (i) Let A, B be n x n matrices. Calculate the commutator
[A ⊗ I_n + I_n ⊗ A, B ⊗ B]
and express the result using the commutator [A, B].
(ii) Assume that [A, B] = B, i.e. A and B form a basis of a two-dimensional Lie
algebra. Express the result from (i) using [A, B] = B. Discuss.

Solution 34. (i) We find
[A ⊗ I_n + I_n ⊗ A, B ⊗ B] = [A, B] ⊗ B + B ⊗ [A, B].
(ii) From (i) we find
[A ⊗ I_n + I_n ⊗ A, B ⊗ B] = B ⊗ B + B ⊗ B = 2(B ⊗ B).
Problem 35. Let L be a finite dimensional Lie algebra. Let C^∞(S^1) be the
set of all infinitely differentiable functions, where S^1 is the unit circle manifold.
In the product space L ⊗ C^∞(S^1) we define the Lie bracket (g1, g2 ∈ L and
f1, f2 ∈ C^∞(S^1))
[g1 ⊗ f1, g2 ⊗ f2] := [g1, g2] ⊗ (f1 f2).
Calculate
[g1 ⊗ f1, [g2 ⊗ f2, g3 ⊗ f3]] + [g3 ⊗ f3, [g1 ⊗ f1, g2 ⊗ f2]] + [g2 ⊗ f2, [g3 ⊗ f3, g1 ⊗ f1]].

Solution 35. We have
[g1 ⊗ f1, [g2 ⊗ f2, g3 ⊗ f3]] = [g1, [g2, g3]] ⊗ f1 f2 f3,
[g3 ⊗ f3, [g1 ⊗ f1, g2 ⊗ f2]] = [g3, [g1, g2]] ⊗ f3 f1 f2,
[g2 ⊗ f2, [g3 ⊗ f3, g1 ⊗ f1]] = [g2, [g3, g1]] ⊗ f2 f3 f1.
Since f1 f2 f3 = f3 f1 f2 = f2 f3 f1 we obtain
([g1, [g2, g3]] + [g3, [g1, g2]] + [g2, [g3, g1]]) ⊗ (f1 f2 f3) = 0
where we applied the Jacobi identity.
Problem 36. The elements (generators) Z1, Z2, . . . , Zr of an r-dimensional
Lie algebra satisfy the conditions
[Z_μ, Z_ν] = Σ_{τ=1}^{r} c^τ_{μν} Z_τ
with c^τ_{μν} = -c^τ_{νμ}, where the c^τ_{μν}'s are called the structure constants.
Let A be an arbitrary linear combination of the elements
A = Σ_{μ=1}^{r} a_μ Z_μ.
Suppose that X is some other linear combination such that
X = Σ_{ν=1}^{r} b_ν Z_ν    and    [A, X] = ρX.
This equation has the form of an eigenvalue equation, where ρ is the corresponding
eigenvalue and X the corresponding eigenvector. Assume that the Lie
algebra is represented by matrices. Find the secular equation for the eigenvalues
ρ.
PS o lu tio n 36.
From [A, X] = pX we have A X - X A = pX. Thus
TT
T
Y Y
/X = I
W P Z ltZ v -
OTbv Z v Z t l ) = P Y b u Z v .
U=I
(I)
U=I
Inserting
ZflZjy —
into (I) yields
cIluZ t +
ZvZfl
T T T
p Y bTZrT= I
It follows that
TfT
T
E E E a^
-pbT Zt =
0.
Since Z t , with T = I , . . . , r are linearly independent we have for a fixed r = r*
E
( E
U=I Xfl=I
^ < ) - ^ '
Introducing the Kronecker delta
write
J
with I if v = r* and 0 otherwise we can
e (e ^ - ^ V
U=I
For
v,
t
*
V = I
= 0-
/
= o-
= I , ... , r
/X= I
defines an r x r matrix with the secular equation
Vrr
a^c,
'liu - p S vT
det
/X= I
to find the eigenvalues p, where
I/,
r* = I , ..., r.
= 0
Problem 37. Let c^τ_{σλ} be the structure constants of a Lie algebra. We define
g_{σλ} = g_{λσ} = Σ_{ρ=1}^{r} Σ_{τ=1}^{r} c^ρ_{στ} c^τ_{λρ}
and
Σ_{λ=1}^{r} g^{σλ} g_{λτ} = δ^σ_τ.
A Lie algebra L is called semisimple if and only if det |g_{σλ}| ≠ 0. We assume in
the following that the Lie algebra is semisimple. We define
C := Σ_{ρ=1}^{r} Σ_{σ=1}^{r} g^{ρσ} X_ρ X_σ.
The operator C is called the Casimir operator. Let X_τ be an element of the Lie
algebra L. Calculate the commutator [C, X_τ].
Solution 37. We have
[C, X_τ] = Σ_{ρ=1}^{r} Σ_{σ=1}^{r} g^{ρσ} [X_ρ X_σ, X_τ]
         = Σ_{ρ=1}^{r} Σ_{σ=1}^{r} g^{ρσ} X_ρ [X_σ, X_τ] + Σ_{ρ=1}^{r} Σ_{σ=1}^{r} g^{ρσ} [X_ρ, X_τ] X_σ
         = Σ_{ρ=1}^{r} Σ_{σ=1}^{r} Σ_{λ=1}^{r} g^{ρσ} c^λ_{στ} X_ρ X_λ + Σ_{ρ=1}^{r} Σ_{σ=1}^{r} Σ_{λ=1}^{r} g^{ρσ} c^λ_{ρτ} X_λ X_σ
         = Σ_{ρ=1}^{r} Σ_{σ=1}^{r} Σ_{λ=1}^{r} g^{ρσ} c^λ_{στ} (X_ρ X_λ + X_λ X_ρ)
where we made a change in the summation variables σ and ρ and used g^{ρσ} = g^{σρ}.
For semisimple Lie algebras we have
c^λ_{στ} = Σ_{ν=1}^{r} g^{λν} c_{νστ}
and therefore we obtain
[C, X_τ] = Σ_{ρ=1}^{r} Σ_{σ=1}^{r} Σ_{λ=1}^{r} Σ_{ν=1}^{r} g^{ρσ} g^{λν} c_{νστ} (X_ρ X_λ + X_λ X_ρ).
Now c_{νστ} is totally antisymmetric, while the quantity in parentheses is symmetric
in ρ and λ, and hence the right-hand side must vanish to give [C, X_τ] = 0 for all
X_τ ∈ L.
Supplementary Problems

Problem 1. Given the basis of the Lie algebra sl(2, R)
E = ( 0 1 )    F = ( 0 0 )    H = ( 1  0 )
    ( 0 0 ),       ( 1 0 ),       ( 0 -1 )
with the commutation relations [E, F] = H, [E, H] = -2E, [F, H] = 2F.
(i) Let z ∈ C. We define
Ẽ = e^{zH/2} ⊗ E + E ⊗ e^{-zH/2},  F̃ = e^{zH/2} ⊗ F + F ⊗ e^{-zH/2},  H̃ = H ⊗ I2 + I2 ⊗ H.
Find the commutators [Ẽ, F̃], [Ẽ, H̃], [F̃, H̃].
(ii) Let U(sl(2, R)) be the universal enveloping algebra. Then any element of
U(sl(2, R)) can be expressed as a sum of products of the form F^j H^k E^ℓ where
j, k, ℓ ≥ 0. Show that
E F^2 = -2F + 2FH + F^2 E.
Hint: Utilize that E F^2 = [E, F^2] + F^2 E.
Problem 2. Study the Lie algebra sl(2, F), where char(F) = 2.
Problem 3. Do the matrices
E+ = ( 0 1 0 )    E- = ( 0 0 0 )    H = ( 1 0 0  )
     ( 0 0 1 )         ( 1 0 0 )        ( 0 0 0  )
     ( 0 0 0 ),        ( 0 1 0 ),      ( 0 0 -1 )
form a basis for the Lie algebra sl(2, R)?
Problem 4. Show that a basis of the Lie algebra sl(2, C) is given by
(1/2)σ3 = (1/2) ( 1 0  )    (1/2)(σ1 + iσ2) = ( 0 1 )    (1/2)(σ1 - iσ2) = ( 0 0 )
                ( 0 -1 ),                     ( 0 0 ),                     ( 1 0 ).
Problem 5. Given the spin matrices
S+ = ( 0 1 )    S- = ( 0 0 )
     ( 0 0 ),        ( 1 0 ).
Consider the 4 x 4 block matrices
( O2 S- )    ( O2 O2 )    ( S+ O2 )    ( S- O2 )
( O2 O2 ),   ( S+ O2 ),   ( O2 S+ ),   ( O2 S- ).
Calculate the commutators of these matrices and extend the set so that one finds
a basis of a Lie algebra.
Problem 6. Let L1 and L2 be two Lie algebras. Let φ : L1 → L2 be a Lie
algebra homomorphism. Show that im(φ) is a Lie subalgebra of L2 and ker(φ)
is an ideal in L1.
Problem 7. Let A, B, C be nonzero 3 x 3 matrices with the commutation
relations
[A, B] = C,    [B, C] = A,    [C, A] = B,
i.e. A, B, C form a basis of the Lie algebra so(3). Find the Lie algebra generated
by the 9 x 9 matrices
H = A ⊗ I3 + I3 ⊗ B + A ⊗ B,    K = A ⊗ I3 + I3 ⊗ B + B ⊗ A.
Problem 8. Consider the Pauli spin matrices σ0 = I2, σ1, σ2, σ3 and the
sixteen 4 x 4 matrices (j = 0, 1, 2, 3)
( σj O2 )    ( σj O2  )    ( O2 σj )    ( O2  σj )
( O2 σj ),   ( O2 -σj ),   ( σj O2 ),   ( -σj O2 ).
Do the sixteen 4 x 4 matrices form a basis of a Lie algebra under the commutator?
Problem 9. Consider the Lie algebra of real skew-symmetric 3 x 3 matrices
A = ( 0   -a3 a2  )
    ( a3  0   -a1 )
    ( -a2 a1  0   ).
Let R be a real orthogonal 3 x 3 matrix, i.e. RR^T = I3. Show that RAR^T is a
real skew-symmetric matrix.
Problem 10. The Lie group SU(2, 2) is defined as the group of transformations
on the four-dimensional complex space C^4 leaving invariant the indefinite
quadratic form
|z1|^2 + |z2|^2 - |z3|^2 - |z4|^2.
The Lie algebra su(2, 2) is defined as the 4 x 4 matrices X with trace 0 and
X*L + LX = O4, where L is the 4 x 4 matrix
L = ( I2 O2  )
    ( O2 -I2 ).
Is
X = ( 0 0 0 1 )
    ( 0 0 1 0 )
    ( 0 1 0 0 )
    ( 1 0 0 0 )
an element of the Lie algebra su(2, 2)? Find exp(zX). Discuss.
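Under the assumption L = diag(I2, -I2) for the given quadratic form, membership of X in su(2, 2) and the closed form of exp(zX) (using X^2 = I4) can be checked numerically; a hedged sketch (NumPy assumed):

```python
import numpy as np

# Assumption: L = diag(I2, -I2) for the form |z1|^2+|z2|^2-|z3|^2-|z4|^2
L = np.diag([1., 1., -1., -1.])
X = np.fliplr(np.eye(4))  # ones on the anti-diagonal

print(np.trace(X))                             # 0
print(np.allclose(X.conj().T @ L + L @ X, 0))  # X*L + LX = O4

z = 0.8
# Since X^2 = I4: exp(zX) = cosh(z) I4 + sinh(z) X
expzX = np.cosh(z) * np.eye(4) + np.sinh(z) * X
# compare against a truncated exponential series
S, T = np.eye(4), np.eye(4)
for k in range(1, 40):
    T = T @ (z * X) / k
    S = S + T
print(np.allclose(expzX, S))
```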
Problem 11. The semisimple Lie algebra sl(3, R) has dimension 8. The
standard basis is given by
h1 = ( 1 0  0 )    h2 = ( 0 0 0  )
     ( 0 -1 0 )         ( 0 1 0  )
     ( 0 0  0 ),        ( 0 0 -1 ),

e1 = ( 0 1 0 )   e2 = ( 0 0 0 )   e13 = ( 0 0 1 )
     ( 0 0 0 )        ( 0 0 1 )         ( 0 0 0 )
     ( 0 0 0 ),       ( 0 0 0 ),        ( 0 0 0 ),

f1 = ( 0 0 0 )   f2 = ( 0 0 0 )   f13 = ( 0 0 0 )
     ( 1 0 0 )        ( 0 0 0 )         ( 0 0 0 )
     ( 0 0 0 ),       ( 0 1 0 ),        ( 1 0 0 ).
Find the commutator table.
Problem 12. Let E_jk (j, k = 1, 2, 3, 4) be the elementary matrices in the
vector space of 4 x 4 matrices. Show that the 15 matrices
X1 = E12,  X2 = E23,  X3 = E13,  X4 = E34,  X5 = E24,  X6 = E14,
Y1 = E21,  Y2 = E32,  Y3 = E31,  Y4 = E43,  Y5 = E42,  Y6 = E41,
H1 = (1/4) diag(3, -1, -1, -1),  H2 = (1/2) diag(1, 1, -1, -1),
H3 = (1/4) diag(1, 1, 1, -3)
form a basis (Cartan-Weyl basis) of the Lie algebra sl(4, C).
Problem 13. Let A, B, C be 3 x 3 nonzero matrices such that (Lie algebra
so(3))
[A, B] = C,    [B, C] = A,    [C, A] = B.
Find the commutators of the 9 x 9 matrices
A ⊗ I3 + I3 ⊗ A,    B ⊗ I3 + I3 ⊗ B,    C ⊗ I3 + I3 ⊗ C.
Problem 14. The isomorphism of the Lie algebras sl(2, C) and so(3, C) has
the form
( a b  )       ( 0        b - c  -i(b + c) )
( c -a )  ↦   ( c - b    0      -2ia      )
               ( i(b + c) 2ia    0         ).
Let z ∈ C. Find
exp ( z ( a b  ) )    and    exp ( z ( 0        b - c  -i(b + c) ) )
        ( c -a )                     ( c - b    0      -2ia      )
                                     ( i(b + c) 2ia    0         ).
Problem 15. Consider a four-dimensional vector space with basis e1, e2, e3,
e4. Assume that the non-zero commutators are given by
[e2, e3] = e1,    [e1, e4] = 2e1,    [e2, e4] = e2,    [e3, e4] = e2 + e3.
Do these relations define a Lie algebra? For example we have
[e1, [e2, e3]] + [e3, [e1, e2]] + [e2, [e3, e1]] = [e1, e1] + [e3, 0] + [e2, 0] = 0.
If so find the adjoint representation.
Problem 16. Let A, B, C be nonzero n x n matrices. Assume that [A, B] = 0_n
and [C, A] = 0_n. Can we conclude that [B, C] = 0_n? Is the Jacobi identity
[A, [B, C]] + [C, [A, B]] + [B, [C, A]] = 0_n
of any use?
Problem 17. (i) Find the Lie algebra generated by the 2 x 2 matrices A, B
with [A, B] = A + B.
(ii) Find the Lie algebra generated by the 3 x 3 matrices
A = ( 0 0 0  )    B = ( 0  0 0 )    C = ( 0 0 0 )
    ( 0 1 0  )        ( -1 0 0 )        ( 0 0 0 )
    ( 0 0 -1 ),       ( 0  0 0 ),      ( 1 0 0 ).
Note that [A, B] = B, [B, C] = O3, [C, A] = C.
Problem 18. In the decomposition of the simple Lie algebra sl(3, R) one finds
3 x 3 matrices A, A' and B, B' with entries a_jk, b_jk ∈ R. Find the commutators
[A, A'], [B, B'] and [A, B]. Discuss. Note that the matrix of the commutator
[A, B] is of the form of matrix B.
Problem 19. Let σ1, σ2, σ3 be the Pauli spin matrices with the commutation
relations
[σ1, σ2] = 2iσ3,    [σ2, σ3] = 2iσ1,    [σ3, σ1] = 2iσ2.
Thus we have su(2) = span{ iσ1, iσ2, iσ3 }. Then
[σ1 ⊗ I2 + I2 ⊗ σ2, σ2 ⊗ I2 + I2 ⊗ σ3] = 2i(σ3 ⊗ I2 + I2 ⊗ σ1).
(i) Find the commutators
[σ2 ⊗ I2 + I2 ⊗ σ3, σ3 ⊗ I2 + I2 ⊗ σ1],    [σ3 ⊗ I2 + I2 ⊗ σ1, σ2 ⊗ I2 + I2 ⊗ σ3].
Discuss.
(ii) We know that the Pauli spin matrices σ1, σ2, σ3 are elements of the Lie
algebra u(2), but not su(2) since det(σj) = -1 for j = 1, 2, 3. Are the nine 4 x 4
matrices
σ_j ⊗ σ_k,    j, k = 1, 2, 3
elements of the Lie algebra su(4)?
(iii) Let ★ be the star operation. Find the commutators
[σ1, σ2],  [σ2, σ3],  [σ3, σ1],
[σ1★σ1, σ2★σ2],  [σ2★σ2, σ3★σ3],  [σ3★σ3, σ1★σ1].
Find the anti-commutators [σ1, σ2]+, [σ2, σ3]+, [σ3, σ1]+,
[σ1★σ1, σ2★σ2]+,  [σ2★σ2, σ3★σ3]+,  [σ3★σ3, σ1★σ1]+.
Let (a G I ) . Show that the matrices
satisfy the commutation relations
[Li, L2] = L 3,
[L2, L 3] = O2,
[L3, Li] = L2.
Problem 21. Consider the four-dimensional real vector space with a basis e0, e1, e2, e3. The Gödel quaternion algebra G is defined by the non-commutative multiplication
e0 ek = ek = ek e0,   (e0)² = e0,
ej ek = (−1)^{1+δℓ2} εjkℓ eℓ + (−1)^{δj2} δjk e0,   j, k, ℓ = 1, 2, 3
(summation over ℓ). Let q0, q1, q2, q3 be arbitrary real numbers. We call the vector
q = q0 e0 + q1 e1 + q2 e2 + q3 e3
a Gödel quaternion. We define the basis
e11 := ½(e0 + e3),   e12 := ½(e1 + e2),   e22 := ½(e0 − e3),   e21 := ½(e1 − e2).
Show that the ejk satisfy the multiplication law
ejk ers = δkr ejs,   j, k, r, s = 1, 2.
Show that every Gödel quaternion q can be written as
q = Σ_{j,k=1}^{2} qjk ejk
for suitable real coefficients qjk. Show that

e0 = ( 1  0 )   e1 = ( 0  1 )   e2 = (  0  1 )   e3 = ( 1   0 )
     ( 0  1 ),       ( 1  0 ),       ( −1  0 ),       ( 0  −1 )

is a matrix representation.
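The representation can be verified numerically. A minimal NumPy sketch, assuming the identification e0 = I2, e1 = σ1, e2 = iσ2, e3 = σ3 read off above:

```python
import numpy as np

e0 = np.eye(2, dtype=complex)
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, 1], [-1, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

# squares: e1^2 = e0, e2^2 = -e0, e3^2 = e0
print(np.allclose(e1 @ e1, e0), np.allclose(e2 @ e2, -e0), np.allclose(e3 @ e3, e0))

# the basis e_jk and the matrix-unit law e_jk e_rs = delta_kr e_js
E = {(1, 1): (e0 + e3) / 2, (1, 2): (e1 + e2) / 2,
     (2, 1): (e1 - e2) / 2, (2, 2): (e0 - e3) / 2}
ok = all(np.allclose(E[j, k] @ E[r, s], (k == r) * E[j, s])
         for j in (1, 2) for k in (1, 2) for r in (1, 2) for s in (1, 2))
print(ok)
```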
Chapter 20
Braid Group
Let n ≥ 3. The braid group Bn of n strings (or group of n braids) has n − 1 generators { σ1, σ2, ..., σ_{n−1} } (pairwise distinct) satisfying the relations

σj σ_{j+1} σj = σ_{j+1} σj σ_{j+1}   for j = 1, 2, ..., n − 2   (Yang-Baxter relation)
σj σk = σk σj   for |j − k| ≥ 2
σj σj⁻¹ = σj⁻¹ σj = e

where e is the identity element. Thus it is generated by elements σj (σj interchanges elements j and j + 1). Thus actually one should write σ12, σ23, ..., σ_{n−1 n} instead of σ1, σ2, ..., σ_{n−1}. The braid group Bn is a generalization of the permutation group.

Let n ≥ 3 and let σ1, ..., σ_{n−1} be the generators. The braid group Bn on n strings where n ≥ 3 has a finite presentation given by

⟨ σ1, ..., σ_{n−1} : σi σj = σj σi,   σ_{i+1} σi σ_{i+1} = σi σ_{i+1} σi ⟩

where 1 ≤ i, j ≤ n − 1 and |i − j| > 1. Here σi σj = σj σi and σi σ_{i+1} σi = σ_{i+1} σi σ_{i+1} are called the braid relations.

A word written in terms of letters, generators from the set
{ σ1, ..., σ_{n−1}, σ1⁻¹, ..., σ_{n−1}⁻¹ }
gives a particular braid. The length of the braid is the total number of used letters, while the minimal irreducible length (sometimes referred to as the primitive word) is the shortest non-contractible length of a particular braid which remains after applying all the group relations given above.
Problem 1. (i) Consider the braid group B5 with the generators σ1, σ2, σ3, σ4. Simplify
σ1⁻¹ σ2⁻¹ σ3⁻¹ σ2 σ3 σ2 σ3⁻¹ σ4 σ1.
(ii) Consider the braid group B3 with the generators σ1 and σ2. Let a = σ1σ2σ1, b = σ1σ2. Show that a² = b³.
Solution 1. (i) Utilizing σ2σ3σ2 = σ3σ2σ3 we find
σ1⁻¹ σ2⁻¹ σ3⁻¹ σ2 σ3 σ2 σ3⁻¹ σ4 σ1 = σ1⁻¹ σ2⁻¹ σ3⁻¹ σ3 σ2 σ3 σ3⁻¹ σ4 σ1 = σ1⁻¹ σ2⁻¹ σ2 σ4 σ1 = σ1⁻¹ σ4 σ1 = σ4
since σ1 and σ4 commute (|1 − 4| ≥ 2).
(ii) Using the Yang-Baxter relation σ1σ2σ1 = σ2σ1σ2 we find
a² = (σ1σ2σ1)(σ1σ2σ1) = (σ1σ2σ1)(σ2σ1σ2) = (σ1σ2)(σ1σ2)(σ1σ2) = b³.
Problem 2. Consider the braid group B3. A faithful representation for the generators σ1 and σ2 is

σ1 = ( 1  1 )     σ2 = (  1  0 )
     ( 0  1 ),         ( −1  1 ).

Both are elements of the group SL(2, ℤ) and the Lie group SL(2, ℝ).
(i) Find the inverses of σ1 and σ2. Find σ1σ2 and σ1⁻¹σ2.
(ii) Is σ1σ2σ1 = σ2σ1σ2?

Solution 2. (i) We find

σ1⁻¹ = ( 1  −1 )     σ2⁻¹ = ( 1  0 )
       ( 0   1 ),           ( 1  1 ).

Obviously they are also elements of SL(2, ℤ) and SL(2, ℝ). We find

σ1σ2 = (  0  1 )     σ1⁻¹σ2 = (  2  −1 )
       ( −1  1 ),             ( −1   1 ).

(ii) Yes. We have

σ1σ2σ1 = (  0  1 ) = σ2σ1σ2.
         ( −1  0 )
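The computations in (i) and (ii) can be reproduced numerically; a small NumPy check using the generators above:

```python
import numpy as np

s1 = np.array([[1, 1], [0, 1]])    # sigma_1
s2 = np.array([[1, 0], [-1, 1]])   # sigma_2

# braid relation sigma1 sigma2 sigma1 = sigma2 sigma1 sigma2
lhs = s1 @ s2 @ s1
rhs = s2 @ s1 @ s2
print(np.array_equal(lhs, rhs))                          # True
print(np.array_equal(lhs, np.array([[0, 1], [-1, 0]])))  # True

# both matrices have determinant 1, so they lie in SL(2, Z)
print(round(np.linalg.det(s1)), round(np.linalg.det(s2)))
```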
Consider the unitary 2 x 2 matrices
A ( Q \ _ ( e~ie
0
°\
eie) ’
M 0 \ - ( cosW
B { 6 } ~ ^ - * s in ( 0)
-*sin(0)\
cos (0) )
where 0 € R. Find the conditions on Oi, O2 , O3 SR such that (braid-like relation)
A(O1 )B(O2 )A(O3) = B(O3 )A(O2 )B(O1).
468 Problems and Solutions
Solution 3.
We obtain the condition
(<e~ 2t02 + 1 )(¾ - sec(0i - 03) sin(0i + 03)) = 2i
which can be written as
/ sin(0i +
+ (03) \
03 = arctan |
\ COS(01 - O z ) ) Do the matrices ^4(0) form a group under matrix multiplication?
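The condition can be tested numerically; a NumPy sketch, here at the point θ1 = θ3 = π/4, where the formula gives θ2 = π/4:

```python
import numpy as np

def A(t):
    return np.diag([np.exp(-1j * t), np.exp(1j * t)])

def B(t):
    return np.array([[np.cos(t), -1j * np.sin(t)],
                     [-1j * np.sin(t), np.cos(t)]])

t1 = t3 = np.pi / 4
t2 = np.arctan(np.sin(t1 + t3) / np.cos(t1 - t3))  # the condition derived above
lhs = A(t1) @ B(t2) @ A(t3)
rhs = B(t3) @ A(t2) @ B(t1)
print(np.allclose(lhs, rhs))  # True
```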
Problem 4. Can one find 2 x 2 matrices A and B with [A, B] ≠ O2 and satisfying the braid-like relation ABBA = BAAB?

Solution 4. If A = A⁻¹ and B = B⁻¹ the relation holds, since then ABBA = A² = I2 and BAAB = B² = I2. For example

A = ( 0  1 )     B = ( 0  −i )
    ( 1  0 ),        ( i   0 ).

Problem 5. Do the 2 x 2 unitary matrices

A = ( e^{−iπ/4}  0        )     B = (1/√2) ( 1  i )
    ( 0          e^{iπ/4} ),               ( i  1 )

satisfy the braid-like relation ABA = BAB?

Solution 5. Yes, the matrices A, B satisfy the braid-like relation.
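Since the matrix entries above are reconstructed, a direct numerical check is worthwhile; a NumPy sketch:

```python
import numpy as np

A = np.diag([np.exp(-1j * np.pi / 4), np.exp(1j * np.pi / 4)])
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

# both matrices are unitary ...
print(np.allclose(A @ A.conj().T, np.eye(2)),
      np.allclose(B @ B.conj().T, np.eye(2)))
# ... and they satisfy the braid-like relation
print(np.allclose(A @ B @ A, B @ A @ B))  # True
```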
Problem 6. The 3 x 3 matrices g1 and g2 play a role for the matrix representation of the braid group B4
-t
9i ~~
0
0
I
I
I
/
0 \
,
0
g=
V
- r 1
1- 1 - t ~ l
I - 12 - t ~ l
I
t-1
- r 1
0
0
as generators. Let g2 = g1 g⁻¹. Find the eigenvalues and eigenvectors of g1, g2.
Solution 6. We have g2 = g1 g⁻¹ =
0
\, - 1
0
-t
- r 1 r 10
1-t
The eigenvalues of g1 and g2 are −t⁻¹, 1, −t.
Problem 7. Find the conditions on a, b, c, d, e, f ∈ ℝ such that

( a  b ) ( d  e ) ( a  b )  =  ( d  e ) ( a  b ) ( d  e )
( 0  c ) ( 0  f ) ( 0  c )     ( 0  f ) ( 0  c ) ( 0  f ).

Find nontrivial solutions to these conditions.
Solution 7. We obtain the three conditions
ad(a − d) = 0,   cf(c − f) = 0,   ad(b − e) + cf(b − e) + ace − bdf = 0.
With the γ's arbitrary the eleven solutions are
a = 0, b = γ1, c = γ2, d = γ3, e = γ4, f = 0
a = γ5, b = γ6, c = 0, d = 0, e = γ7, f = γ8
a = γ9, b = γ10, c = γ11, d = 0, e = 0, f = 0
a = γ13, b = γ14(γ13 − γ12), c = γ12, d = γ13, e = γ14, f = 0
a = 0, b = 0, c = 0, d = γ15, e = γ16, f = γ17
a = γ18, b = (γ18 − γ19)/(γ18 − γ20), c = 0, d = γ18, e = γ19, f = γ20
a = γ22(γ23 − γ21)/γ23, b = γ21, c = γ22, d = 0, e = γ23, f = γ22
a = 0, b = γ24γ26/(γ24 − γ25), c = γ24, d = γ25, e = γ26, f = γ24
a = γ27, b = γ28, c = −½γ27(1 + i√3), d = γ27, e = γ29, f = ½γ27(1 + i√3)
a = γ30, b = γ31, c = ½γ30(1 − i√3), d = γ30, e = γ32, f = ½γ30(1 − i√3)
a = γ33, b = γ34, c = γ35, d = γ33, e = γ34, f = γ35.
Some of the solutions are trivial such as the first, third, fifth and last one.
Problem 8. Let A, B be n x n matrices with ABA = BAB.
(i) Show that (A ⊗ A)(B ⊗ B)(A ⊗ A) = (B ⊗ B)(A ⊗ A)(B ⊗ B).
(ii) Show that (A ⊕ A)(B ⊕ B)(A ⊕ A) = (B ⊕ B)(A ⊕ A)(B ⊕ B).

Solution 8. (i) We have
(A ⊗ A)(B ⊗ B)(A ⊗ A) = (ABA) ⊗ (ABA)
and
(B ⊗ B)(A ⊗ A)(B ⊗ B) = (BAB) ⊗ (BAB).
Thus since ABA = BAB the statement is true.
(ii) We have
(A ⊕ A)(B ⊕ B)(A ⊕ A) = (ABA) ⊕ (ABA)
and
(B ⊕ B)(A ⊕ A)(B ⊕ B) = (BAB) ⊕ (BAB).
Thus since ABA = BAB the statement is true.
Problem 9. (i) Show that the matrices

S1 = ( −t  1 )     S2 = ( 1   0 )
     (  0  1 ),         ( t  −t )

satisfy the braid-like relation S1S2S1 = S2S1S2.
(ii) Show that the matrices S1 ⊗ S1 and S2 ⊗ S2 satisfy the braid-like relation
(S1 ⊗ S1)(S2 ⊗ S2)(S1 ⊗ S1) = (S2 ⊗ S2)(S1 ⊗ S1)(S2 ⊗ S2).

Solution 9. The Maxima program will provide the proof

/* Note that . is matrix multiplication */
S1: matrix([-t,1],[0,1]);
S2: matrix([1,0],[t,-t]);
L1: S1 . S2 . S1;
R1: S2 . S1 . S2;
F: L1 - R1;
K1: kronecker_product(S1,S1);
K2: kronecker_product(S2,S2);
L2: K1 . K2 . K1;
R2: K2 . K1 . K2;
F: L2 - R2;
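The same verification can be done numerically instead of symbolically; a NumPy sketch with a sample value of t:

```python
import numpy as np

t = 2.0  # any nonzero value of the parameter t
S1 = np.array([[-t, 1], [0, 1]])
S2 = np.array([[1, 0], [t, -t]])

# (i) the braid-like relation for the 2x2 matrices
print(np.allclose(S1 @ S2 @ S1, S2 @ S1 @ S2))  # True

# (ii) the Kronecker squares inherit the relation, since
# (X kron X)(Y kron Y) = (XY) kron (XY)
K1, K2 = np.kron(S1, S1), np.kron(S2, S2)
print(np.allclose(K1 @ K2 @ K1, K2 @ K1 @ K2))  # True
```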
Problem 10. Let T be an n x n matrix and R be an n² x n² matrix. Consider the equation
R(T ⊗ T) = (T ⊗ T)R.
(i) Let n = 2 and

T = ( 0  1 )
    ( 1  0 ).

Find all 4 x 4 matrices R which satisfy the equation.
(ii) Let n = 2 and

R = ( 1  0  0  0 )
    ( 0  0  1  0 )
    ( 0  1  0  0 )
    ( 0  0  0  1 ).

Find all T which satisfy the equation.

Solution 10. (i) We obtain the eight conditions
r11 = r44, r12 = r43, r13 = r42, r14 = r41,
r21 = r34, r22 = r33, r23 = r32, r24 = r31
i.e.

R = ( r11  r12  r13  r14 )
    ( r21  r22  r23  r24 )
    ( r24  r23  r22  r21 )
    ( r14  r13  r12  r11 ).

(ii) All

T = ( t11  t12 )
    ( t21  t22 )

satisfy the condition R(T ⊗ T) = (T ⊗ T)R.
Problem 11. Find all 4 x 4 matrices A, B with [A, B] ≠ O4 satisfying the conditions
ABA = BAB,   ABBA = I4.
The first condition is the braid relation and the second condition ABBA = I4 runs under Dirac game.

Solution 11. We have
(ABA)² = (BAB)² = BABBAB = B(ABBA)B = B²
and
B⁴ = (ABA)⁴ = ABA(ABA)²ABA = ABAB²ABA = A(BAB)(BAB)A = AB²A = ABBA = I4.
Problem 12. The free group F2 with two generators g1 and g2 admits the matrix representation

g1 = ( 1  2 )     g2 = ( 1  0 )
     ( 0  1 ),         ( 2  1 ).

(i) Find the inverse of the matrices.
(ii) Calculate g1g2g1 and g2g1g2. Discuss.

Solution 12. (i) We obtain

g1⁻¹ = ( 1  −2 )     g2⁻¹ = (  1  0 )
       ( 0   1 ),           ( −2  1 ).

(ii) We find

g1g2g1 = ( 5  12 )     g2g1g2 = (  5  2 )
         ( 2   5 ),             ( 12  5 ).

Thus g1g2g1 = (g2g1g2)^T.
Problem 13. Let A, B be 2 x 2 matrices and
R12 = A ⊗ B ⊗ I2,   R13 = A ⊗ I2 ⊗ B,   R23 = I2 ⊗ A ⊗ B.
Find the conditions on A and B such that R12R13R23 = R23R13R12.

Solution 13. Since R12R13R23 = A² ⊗ BA ⊗ B² and
R23R13R12 = A² ⊗ AB ⊗ B²
we have to solve A² ⊗ BA ⊗ B² = A² ⊗ AB ⊗ B².
Problem 14. Let I2 be the 2 x 2 unit matrix. Let R be a 4 x 4 matrix over ℂ. Then R satisfies the Yang-Baxter equation if
(R ⊗ I2)(I2 ⊗ R)(R ⊗ I2) = (I2 ⊗ R)(R ⊗ I2)(I2 ⊗ R).
Use computer algebra to show that

R = (  1/√2   0      0     1/√2 )
    (  0      1/√2  −1/√2  0    )
    (  0      1/√2   1/√2  0    )
    ( −1/√2   0      0     1/√2 )

satisfies the Yang-Baxter equation.

Solution 14. A Maxima program which tests the equation is

/* Note that . is matrix multiplication in Maxima */
I2: matrix([1,0],[0,1]);
R: matrix([1/sqrt(2),0,0,1/sqrt(2)],[0,1/sqrt(2),-1/sqrt(2),0],
[0,1/sqrt(2),1/sqrt(2),0],[-1/sqrt(2),0,0,1/sqrt(2)]);
A1: kronecker_product(R,I2); A2: kronecker_product(I2,R);
A3: kronecker_product(R,I2);
B1: kronecker_product(I2,R); B2: kronecker_product(R,I2);
B3: kronecker_product(I2,R);
S1: A1 . A2 . A3; S2: B1 . B2 . B3;
F: S1 - S2;

Thus F is the 8 x 8 zero matrix.
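An equivalent numerical check of the Yang-Baxter equation in, e.g., Python:

```python
import numpy as np

R = np.array([[1, 0, 0, 1],
              [0, 1, -1, 0],
              [0, 1, 1, 0],
              [-1, 0, 0, 1]]) / np.sqrt(2)
I2 = np.eye(2)

lhs = np.kron(R, I2) @ np.kron(I2, R) @ np.kron(R, I2)
rhs = np.kron(I2, R) @ np.kron(R, I2) @ np.kron(I2, R)
print(np.allclose(lhs, rhs))  # True: the 8x8 difference is the zero matrix
```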
Problem 15. Show that the 2 x 2 matrix

A = ( 0  1 )
    ( 1  0 )

satisfies the braid-like relation
(A ⊗ I3)(I3 ⊗ A)(A ⊗ I3) = (I3 ⊗ A)(A ⊗ I3)(I3 ⊗ A).
Show that the 3 x 3 matrix

B = ( 0  0  1 )
    ( 0  1  0 )
    ( 1  0  0 )

satisfies the braid-like relation
(B ⊗ I2)(I2 ⊗ B)(B ⊗ I2) = (I2 ⊗ B)(B ⊗ I2)(I2 ⊗ B).

Solution 15. The following Maxima program will test the equality

I2: matrix([1,0],[0,1]); I3: matrix([1,0,0],[0,1,0],[0,0,1]);
A: matrix([0,1],[1,0]);
B: matrix([0,0,1],[0,1,0],[1,0,0]);
T1: kronecker_product(A,I3); T2: kronecker_product(I3,A);
R1: T1 . T2 . T1; R2: T2 . T1 . T2;
R: R1 - R2;
U1: kronecker_product(B,I2); U2: kronecker_product(I2,B);
S1: U1 . U2 . U1; S2: U2 . U1 . U2;
S: S1-S2;
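A NumPy version of the same check:

```python
import numpy as np

I2, I3 = np.eye(2), np.eye(3)
A = np.array([[0, 1], [1, 0]])
B = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])

lhsA = np.kron(A, I3) @ np.kron(I3, A) @ np.kron(A, I3)
rhsA = np.kron(I3, A) @ np.kron(A, I3) @ np.kron(I3, A)
print(np.allclose(lhsA, rhsA))  # True

lhsB = np.kron(B, I2) @ np.kron(I2, B) @ np.kron(B, I2)
rhsB = np.kron(I2, B) @ np.kron(B, I2) @ np.kron(I2, B)
print(np.allclose(lhsB, rhsB))  # True
```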
Problem 16. Consider the 2 x 2 matrices I2 and σ1 and the 4 x 4 matrices
R = ½(I2 ⊗ I2 + I2 ⊗ σ1 + σ1 ⊗ I2 − σ1 ⊗ σ1),

F = ( 1  0  0  0 )
    ( 0  0  1  0 )
    ( 0  1  0  0 )
    ( 0  0  0  1 ).

Calculate the matrix R̃ = FR. Find the determinant of R̃. Does the 4 x 4 matrix R̃ satisfy the braid-like relation
(I2 ⊗ R̃)(R̃ ⊗ I2)(I2 ⊗ R̃) = (R̃ ⊗ I2)(I2 ⊗ R̃)(R̃ ⊗ I2)?
Consider the four Bell states
(1/√2)(1, 0, 0, 1)^T,   (1/√2)(0, 1, 1, 0)^T,   (1/√2)(0, 1, −1, 0)^T,   (1/√2)(1, 0, 0, −1)^T.
Apply the matrix R̃ to the Bell states. Discuss.
Solution 16. The following Maxima program will do the job. The determinant of R̃ is 1. The braid-like relation is satisfied and R̃ maps between the Bell states,

R̃ (1/√2)(1, 0, 0, 1)^T = (1/√2)(0, 1, 1, 0)^T,
R̃ (1/√2)(0, 1, 1, 0)^T = (1/√2)(1, 0, 0, 1)^T,
R̃ (1/√2)(0, 1, −1, 0)^T = −(1/√2)(0, 1, −1, 0)^T,
R̃ (1/√2)(1, 0, 0, −1)^T = (1/√2)(1, 0, 0, −1)^T.

/* braidrelation.mac */
I2: matrix([1,0],[0,1]);
s1: matrix([0,1],[1,0]);
T1: kronecker_product(I2,I2); T2: kronecker_product(s1,I2);
T3: kronecker_product(I2,s1); T4: kronecker_product(s1,s1);
R: (T1+T2+T3-T4)/2;
F: matrix([1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]);
RP: F . R;
detRP: determinant(RP);
V1: kronecker_product(I2,RP);
V2: kronecker_product(RP,I2);
/* braid relation */
Z: V1 . V2 . V1 - V2 . V1 . V2;
/* four Bell states */
B1: matrix([1],[0],[0],[1])/sqrt(2);
B2: matrix([0],[1],[1],[0])/sqrt(2);
B3: matrix([0],[1],[-1],[0])/sqrt(2);
B4: matrix([1],[0],[0],[-1])/sqrt(2);
/* applying RP to the Bell states */
RP . B1; RP . B2; RP . B3; RP . B4;
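The Maxima computation can be mirrored in Python; a NumPy sketch checking the determinant, the braid-like relation and the action on the first two Bell states:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
R = (np.kron(I2, I2) + np.kron(I2, s1) + np.kron(s1, I2) - np.kron(s1, s1)) / 2
F = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
RP = F @ R

print(round(np.linalg.det(RP)))  # 1

# braid-like relation
V1, V2 = np.kron(I2, RP), np.kron(RP, I2)
print(np.allclose(V1 @ V2 @ V1, V2 @ V1 @ V2))  # True

# RP swaps the first two Bell states
b1 = np.array([1, 0, 0, 1]) / np.sqrt(2)
b2 = np.array([0, 1, 1, 0]) / np.sqrt(2)
print(np.allclose(RP @ b1, b2), np.allclose(RP @ b2, b1))  # True True
```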
Problem 17. Let

B(1) = ( 1  0 )
       ( 1  1 )

and B(k) the 2^k x 2^k matrix recursively defined by

B(k) := ( B(k−1)   0      ) = B(1) ⊗ B(k−1),   k = 2, 3, ....
        ( B(k−1)   B(k−1) )

(i) Find B(1)⁻¹.
(ii) Find B(k)⁻¹ for k = 2, 3, ....
(iii) Let A(k) be the 2^k x 2^k anti-diagonal matrix defined by Aij(k) = 1 if i = 2^k + 1 − j and 0 otherwise with i, j = 1, 2, ..., 2^k. Let
C(k) = A(k)B(k)⁻¹A(k).
Show that the matrices B(k) and C(k) have the braid-like relation
B(k)C(k)B(k) = C(k)B(k)C(k).

Solution 17. (i) We have

B(1)⁻¹ = (  1  0 )
         ( −1  1 ).

(ii) We find B(k)⁻¹ = (B(1)⁻¹)^{⊗k}.
(iii) We find

C(1) = ( 1  −1 )
       ( 0   1 ).

The relation is true for k = 1 by a direct calculation. Since
B(k) = B(1)^{⊗k},   C(k) = C(1)^{⊗k}
and using the properties of the Kronecker product we find that the relation is true for every k.
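The claim rests on B(k) = B(1)^{⊗k} and C(k) = C(1)^{⊗k}; a NumPy check for the first few k, with B(1) as reconstructed above:

```python
import numpy as np

B1 = np.array([[1.0, 0.0], [1.0, 1.0]])   # B(1)
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # A(1): 2x2 anti-diagonal matrix
C1 = A1 @ np.linalg.inv(B1) @ A1          # C(1) = A(1) B(1)^{-1} A(1)

def kron_power(M, k):
    out = M
    for _ in range(k - 1):
        out = np.kron(out, M)
    return out

for k in (1, 2, 3):
    B = kron_power(B1, k)
    C = kron_power(C1, k)
    print(k, np.allclose(B @ C @ B, C @ B @ C))  # True for every k
```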
Problem 18. Let

B = ( 1  1 )     C = (  1  0 )
    ( 0  1 ),        ( −1  1 ).

Then BCB = CBC. Is
(B ⊗ B)(C ⊗ C)(B ⊗ B) = (C ⊗ C)(B ⊗ B)(C ⊗ C)?

Solution 18. For the left-hand side we have
(B ⊗ B)(C ⊗ C)(B ⊗ B) = ((BC) ⊗ (BC))(B ⊗ B) = (BCB) ⊗ (BCB).
For the right-hand side we have
(C ⊗ C)(B ⊗ B)(C ⊗ C) = ((CB) ⊗ (CB))(C ⊗ C) = (CBC) ⊗ (CBC).
Since BCB = CBC the identity holds.
Problem 19. Let n ≥ 2. Let u, v ∈ ℂ (u, v are called the spectral parameters). Consider the n² x n² matrices R(u) over ℂ with the entries r^{αβ}_{ij}(u), where i, j, α, β = 1, ..., n and the entries are smooth functions of u. Let { e_i : i = 1, ..., n } be the standard basis in the vector space ℂⁿ. Let ⊗ be the Kronecker product. Then
{ e_i ⊗ e_j : i, j = 1, ..., n }
is the standard basis in the vector space ℂⁿ ⊗ ℂⁿ. One defines
R(e_i ⊗ e_j)(u) := Σ_{α=1}^{n} Σ_{β=1}^{n} r^{αβ}_{ij}(u) e_α ⊗ e_β,   i, j = 1, ..., n.   (1)
Thus
R(u) = Σ_{i,j=1}^{n} Σ_{α,β=1}^{n} r^{αβ}_{ij}(u) (e_α ⊗ e_β)(e_i ⊗ e_j)^T.
For n = 2 one finds that the matrix R(u) is given by

R(u) = ( r^{11}_{11}(u)  r^{11}_{12}(u)  r^{11}_{21}(u)  r^{11}_{22}(u) )
       ( r^{12}_{11}(u)  r^{12}_{12}(u)  r^{12}_{21}(u)  r^{12}_{22}(u) )
       ( r^{21}_{11}(u)  r^{21}_{12}(u)  r^{21}_{21}(u)  r^{21}_{22}(u) )
       ( r^{22}_{11}(u)  r^{22}_{12}(u)  r^{22}_{21}(u)  r^{22}_{22}(u) ).

One defines the three n³ x n³ matrices
R12(u) := R(u) ⊗ In,   R23(u) := In ⊗ R(u)
and
R13(e_i ⊗ e_j ⊗ e_k)(u) := Σ_{α=1}^{n} Σ_{β=1}^{n} r^{αβ}_{ik}(u) e_α ⊗ e_j ⊗ e_β.
(i) Write down the matrices R12(u), R23(u) and R13(u) for n = 2 and find the Yang-Baxter relation
R12(u)R13(u + v)R23(v) = R23(v)R13(u + v)R12(u).
Thus there are 2⁶ = 64 equations for 2⁴ = 16 unknowns.
(ii) Show that the matrix

R(u) = ( 1+u  0  0  0   )
       ( 0    u  1  0   )
       ( 0    1  u  0   )
       ( 0    0  0  1+u )

satisfies the Yang-Baxter equation.
(iii) Let

P = ( 1  0  0  0 )
    ( 0  0  1  0 )
    ( 0  1  0  0 )
    ( 0  0  0  1 ).

This matrix is called the swap matrix with P(x ⊗ y) = y ⊗ x where x, y ∈ ℂ². We define R̃(u) := PR(u). Then the Yang-Baxter equation can be written as
(R̃(v) ⊗ I2)(I2 ⊗ R̃(u + v))(R̃(u) ⊗ I2) = (I2 ⊗ R̃(u))(R̃(u + v) ⊗ I2)(I2 ⊗ R̃(v)).
Show that R(u) given in (ii) satisfies this equation. Note that without the spectral parameters u, v we obtain Artin's braid relation.
Solution 19. (i) For R12(u) = R(u) ⊗ I2 we have the 8 x 8 matrix (the dependence of the r^{αβ}_{ij} on u is omitted)

R12 = ( r^{11}_{11}  0            r^{11}_{12}  0            r^{11}_{21}  0            r^{11}_{22}  0           )
      ( 0            r^{11}_{11}  0            r^{11}_{12}  0            r^{11}_{21}  0            r^{11}_{22} )
      ( r^{12}_{11}  0            r^{12}_{12}  0            r^{12}_{21}  0            r^{12}_{22}  0           )
      ( 0            r^{12}_{11}  0            r^{12}_{12}  0            r^{12}_{21}  0            r^{12}_{22} )
      ( r^{21}_{11}  0            r^{21}_{12}  0            r^{21}_{21}  0            r^{21}_{22}  0           )
      ( 0            r^{21}_{11}  0            r^{21}_{12}  0            r^{21}_{21}  0            r^{21}_{22} )
      ( r^{22}_{11}  0            r^{22}_{12}  0            r^{22}_{21}  0            r^{22}_{22}  0           )
      ( 0            r^{22}_{11}  0            r^{22}_{12}  0            r^{22}_{21}  0            r^{22}_{22} ).

For R23(u) = I2 ⊗ R(u) we have
R23(u) = R(u) ⊕ R(u)
where ⊕ denotes the direct sum. For R13(u) we obtain

R13 = ( r^{11}_{11}  r^{11}_{12}  0            0            r^{11}_{21}  r^{11}_{22}  0            0           )
      ( r^{12}_{11}  r^{12}_{12}  0            0            r^{12}_{21}  r^{12}_{22}  0            0           )
      ( 0            0            r^{11}_{11}  r^{11}_{12}  0            0            r^{11}_{21}  r^{11}_{22} )
      ( 0            0            r^{12}_{11}  r^{12}_{12}  0            0            r^{12}_{21}  r^{12}_{22} )
      ( r^{21}_{11}  r^{21}_{12}  0            0            r^{21}_{21}  r^{21}_{22}  0            0           )
      ( r^{22}_{11}  r^{22}_{12}  0            0            r^{22}_{21}  r^{22}_{22}  0            0           )
      ( 0            0            r^{21}_{11}  r^{21}_{12}  0            0            r^{21}_{21}  r^{21}_{22} )
      ( 0            0            r^{22}_{11}  r^{22}_{12}  0            0            r^{22}_{21}  r^{22}_{22} )

where we utilized that
R13(e1 ⊗ e1 ⊗ e1)(u) = Σ_{α=1}^{2} Σ_{β=1}^{2} r^{αβ}_{11}(u) e_α ⊗ e_1 ⊗ e_β
                     = ( r^{11}_{11}  r^{12}_{11}  0  0  r^{21}_{11}  r^{22}_{11}  0  0 )^T
etc.
(ii) The Maxima program will test the Yang-Baxter relation for the given matrix R(u).

I2: matrix([1,0],[0,1]);
Ru: matrix([1+u,0,0,0],[0,u,1,0],[0,1,u,0],[0,0,0,1+u]);
Rv: matrix([1+v,0,0,0],[0,v,1,0],[0,1,v,0],[0,0,0,1+v]);
Ruv: matrix([1+u+v,0,0,0],[0,u+v,1,0],[0,1,u+v,0],[0,0,0,1+u+v]);
R12u: kronecker_product(Ru,I2);
R23v: kronecker_product(I2,Rv);
R13uv: matrix([1+u+v,0,0,0,0,0,0,0],[0,u+v,0,0,1,0,0,0],
[0,0,1+u+v,0,0,0,0,0],[0,0,0,u+v,0,0,1,0],
[0,1,0,0,u+v,0,0,0],[0,0,0,0,0,1+u+v,0,0],
[0,0,0,1,0,0,u+v,0],[0,0,0,0,0,0,0,1+u+v]);
LHS: R12u . R13uv . R23v;
RHS: R23v . R13uv . R12u;
Result: LHS - RHS;
Result: expand(Result);
(iii) The Maxima program will check the Yang-Baxter equation in braid form

I2: matrix([1,0],[0,1]);
P: matrix([1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]);
Ru: matrix([1+u,0,0,0],[0,u,1,0],[0,1,u,0],[0,0,0,1+u]);
Ru: P . Ru;
Rv: matrix([1+v,0,0,0],[0,v,1,0],[0,1,v,0],[0,0,0,1+v]);
Rv: P . Rv;
Ruv: matrix([1+u+v,0,0,0],[0,u+v,1,0],[0,1,u+v,0],[0,0,0,1+u+v]);
Ruv: P . Ruv;
T1u: kronecker_product(Ru,I2); T1v: kronecker_product(Rv,I2);
T2u: kronecker_product(I2,Ru); T2v: kronecker_product(I2,Rv);
T1uv: kronecker_product(Ruv,I2); T2uv: kronecker_product(I2,Ruv);
LHS: T1v . T2uv . T1u; RHS: T2u . T1uv . T2v;
D: LHS - RHS;
D: expand(D);
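A numerical check of the Yang-Baxter relation with spectral parameters; here R13 is built by conjugating R ⊗ I2 with the swap of the second and third factors, which reproduces the definition of R13 given above:

```python
import numpy as np

def R(u):
    return np.array([[1 + u, 0, 0, 0],
                     [0, u, 1, 0],
                     [0, 1, u, 0],
                     [0, 0, 0, 1 + u]])

I2 = np.eye(2)
P = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])  # swap
P23 = np.kron(I2, P)  # swaps the second and third factors of C^2 x C^2 x C^2

u, v = 0.3, 0.7
R12 = np.kron(R(u), I2)
R23 = np.kron(I2, R(v))
R13 = P23 @ np.kron(R(u + v), I2) @ P23  # R13 acts on factors 1 and 3

lhs = R12 @ R13 @ R23
rhs = R23 @ R13 @ R12
print(np.allclose(lhs, rhs))  # True
```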
Problem 20. Let q = 4 and w := exp(2πi/q), i.e. w⁴ = 1 and w = i, w² = −1, w³ = −i. Let D1 and D2 be the diagonal matrices
D1 = diag(1, w, w², w³),   D2 = diag(1, w³, w², w)
and P the permutation matrix

P = ( 0  1  0  0 )
    ( 0  0  1  0 )
    ( 0  0  0  1 )
    ( 1  0  0  0 ).

Consider the 64 x 64 matrices
S1 = D1 ⊗ D2 ⊗ I4,   S2 = I4 ⊗ P ⊗ I4,   S3 = I4 ⊗ D1 ⊗ D2,   S4 = I4 ⊗ I4 ⊗ P.
Show that
Sj⁴ = I64,   j = 1, 2, 3, 4
Sj S_{j+1} Sj⁻¹ S_{j+1}⁻¹ = w I64,   j = 1, 2, 3
[S1, S3] = O64,   [S1, S4] = O64,   [S2, S4] = O64
where O64 is the 64 x 64 zero matrix.
Solution 20. This is a typical task for computer algebra. A Maxima program is

/* braid64.mac */
I4: matrix([1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1]);
D1: matrix([1,0,0,0],[0,%i,0,0],[0,0,-1,0],[0,0,0,-%i]);
D2: matrix([1,0,0,0],[0,-%i,0,0],[0,0,-1,0],[0,0,0,%i]);
P: matrix([0,1,0,0],[0,0,1,0],[0,0,0,1],[1,0,0,0]);
S1: kronecker_product(D1,kronecker_product(D2,I4));
S2: kronecker_product(I4,kronecker_product(P,I4));
S3: kronecker_product(I4,kronecker_product(D1,D2));
S4: kronecker_product(I4,kronecker_product(I4,P));
R1: S1 . S2 . (S1^^-1) . (S2^^-1);
R2: S2 . S3 . (S2^^-1) . (S3^^-1);
R3: S3 . S4 . (S3^^-1) . (S4^^-1);
R4: S1 . S3 - S3 . S1;
R5: S1 . S4 - S4 . S1;
R6: S2 . S4 - S4 . S2;
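A NumPy check of the stated relations:

```python
import numpy as np

w = np.exp(2j * np.pi / 4)  # w = i
I4 = np.eye(4)
D1 = np.diag([1, w, w**2, w**3])
D2 = np.diag([1, w**3, w**2, w])
P = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]],
             dtype=complex)

S = [np.kron(D1, np.kron(D2, I4)),
     np.kron(I4, np.kron(P, I4)),
     np.kron(I4, np.kron(D1, D2)),
     np.kron(I4, np.kron(I4, P))]
I64 = np.eye(64)

print(all(np.allclose(np.linalg.matrix_power(Sj, 4), I64) for Sj in S))
print(all(np.allclose(S[j] @ S[j+1] @ np.linalg.inv(S[j]) @ np.linalg.inv(S[j+1]),
                      w * I64) for j in range(3)))
print(all(np.allclose(X @ Y, Y @ X)
          for X, Y in [(S[0], S[2]), (S[0], S[3]), (S[1], S[3])]))
```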
Problem 21. Consider the Pauli spin matrices σ0 = I2, σ1, σ2, σ3.
(i) Find the 4 x 4 matrix
R(u) = Σ_{j=0}^{3} wj(u) σj ⊗ σj
with
w0(u) = sinh(u + η/2) cosh(η/2),   w1(u) = w2(u) = sinh(η/2) cosh(η/2),   w3(u) = sinh(η/2) cosh(u + η/2)
where η is a fixed parameter.

Solution 21. We have

R(u) = ( w0+w3   0       0       w1−w2 )
       ( 0       w0−w3   w1+w2   0     )
       ( 0       w1+w2   w0−w3   0     )
       ( w1−w2   0       0       w0+w3 )

     = ( sinh(u+η)   0         0         0         )
       ( 0           sinh(u)   sinh(η)   0         )
       ( 0           sinh(η)   sinh(u)   0         )
       ( 0           0         0         sinh(u+η) ).
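The two expressions for R(u) can be compared numerically for sample values of u and η; a NumPy sketch:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

u, eta = 0.4, 0.9
w0 = np.sinh(u + eta / 2) * np.cosh(eta / 2)
w1 = w2 = np.sinh(eta / 2) * np.cosh(eta / 2)
w3 = np.sinh(eta / 2) * np.cosh(u + eta / 2)

R = sum(wj * np.kron(sj, sj)
        for wj, sj in zip([w0, w1, w2, w3], [s0, s1, s2, s3]))
target = np.array([[np.sinh(u + eta), 0, 0, 0],
                   [0, np.sinh(u), np.sinh(eta), 0],
                   [0, np.sinh(eta), np.sinh(u), 0],
                   [0, 0, 0, np.sinh(u + eta)]])
print(np.allclose(R, target))  # True
```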
Problem 22. Let Bn denote the braid group on n strands. Bn is generated by the elementary braids { b1, b2, ..., b_{n−1} } with the braid relations
bj b_{j+1} bj = b_{j+1} bj b_{j+1},   bj bk = bk bj for |j − k| ≥ 2.
Let { e1, e2, ..., en } denote the standard basis in ℝⁿ. Then u ∈ ℝⁿ can be written as
u = Σ_{k=1}^{n} ck ek.
Consider the linear operators Bj (α, β, γ, δ ∈ ℝ and α, γ ≠ 0) defined by
Bj u := c1 e1 + ... + (α c_{j+1} + β) ej + (γ cj + δ) e_{j+1} + ... + cn en
and the corresponding inverse operation
Bj⁻¹ u := c1 e1 + ... + γ⁻¹(c_{j+1} − δ) ej + α⁻¹(cj − β) e_{j+1} + ... + cn en.
Use computer algebra to show that B1, B2, ..., B_{n−1} satisfy the braid condition
Bj B_{j+1} Bj u = B_{j+1} Bj B_{j+1} u
if γβ + δ = αδ + β.

Solution 22. We use SymbolicC++. The function B(k,u) implements Bk acting on u ∈ ℝⁿ expressed in terms of the standard basis e[1], ..., e[n].
// braid.cpp
#include <iostream>
#include "symbolicc++.h"
using namespace std;

Symbolic e = ~Symbolic("e");

Symbolic B(int j,const Symbolic &u)
{
 UniqueSymbol x;
 Symbolic r, alpha("alpha"), beta("beta"), gamma("gamma"), delta("delta");
 r = u[e[j]==gamma*x];
 r = r[e[j+1]==alpha*e[j]];
 r = r[x==e[j+1]];
 r += beta*e[j]+delta*e[j+1];
 return r;
}

int main(void)
{
 int n = 5, j, k;
 Symbolic c("c"), u;
 for(k=1;k<=n;k++) u += c[k]*e[k];
 for(k=1;k<n-1;k++)
 {
  Symbolic r = B(k,B(k+1,B(k,u))) - B(k+1,B(k,B(k+1,u)));
  cout << r << endl;
  for(j=1;j<=n;j++) cout << e[j] << ": " << (r.coeff(e[j])==0) << endl;
 }
 return 0;
}
The program output is

beta*gamma*e[2]+delta*e[2]-delta*alpha*e[2]-beta*e[2]
e[1]: 0 == 0
e[2]: beta*gamma+delta-delta*alpha-beta == 0
e[3]: 0 == 0
e[4]: 0 == 0
e[5]: 0 == 0
beta*gamma*e[3]+delta*e[3]-delta*alpha*e[3]-beta*e[3]
e[1]: 0 == 0
e[2]: 0 == 0
e[3]: beta*gamma+delta-delta*alpha-beta == 0
e[4]: 0 == 0
e[5]: 0 == 0
beta*gamma*e[4]+delta*e[4]-delta*alpha*e[4]-beta*e[4]
e[1]: 0 == 0
e[2]: 0 == 0
e[3]: 0 == 0
e[4]: beta*gamma+delta-delta*alpha-beta == 0
e[5]: 0 == 0
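The same braid condition can be checked with plain Python (no symbolic algebra), choosing parameters that satisfy γβ + δ = αδ + β, e.g. α = 2, β = 1, γ = 4, δ = 3 (both sides equal 7):

```python
alpha, beta, gamma, delta = 2, 1, 4, 3
assert gamma * beta + delta == alpha * delta + beta

def B(j, c):
    # B_j replaces (c_j, c_{j+1}) by (alpha*c_{j+1} + beta, gamma*c_j + delta)
    c = list(c)
    c[j], c[j + 1] = alpha * c[j + 1] + beta, gamma * c[j] + delta
    return c

u = [5, -2, 7, 1, 3]          # coefficients c_1, ..., c_5
for k in range(3):            # j = 1, 2, 3 (0-indexed here)
    lhs = B(k, B(k + 1, B(k, u)))
    rhs = B(k + 1, B(k, B(k + 1, u)))
    print(k + 1, lhs == rhs)  # True for each k
```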
Supplementary Problems
Problem 1. Let n ≥ 2 and j, k = 1, ..., 2n − 1. Let Q be the 4 x 4 matrix

Q = ( 0  0     0     0 )
    ( 0  s⁴    1     0 )
    ( 0  1     s⁻⁴   0 )
    ( 0  0     0     0 ).

Then Q² = (s⁴ + s⁻⁴)Q. Consider the 2^{2n} x 2^{2n} matrices
Qj = I2 ⊗ ... ⊗ I2 ⊗ Q ⊗ I2 ⊗ ... ⊗ I2
where the 4 x 4 matrix Q is at the j-th position. These matrices satisfy the relations (Temperley-Lieb algebra)
Qj Qj = (s⁴ + s⁻⁴) Qj
Qj Q_{j±1} Qj = Qj
Qj Qk = Qk Qj   if |j − k| ≥ 2.
These matrices play a role for the q-state Potts model. Consider the case n = 2 and the 16 x 16 matrices
Q1 = Q ⊗ I2 ⊗ I2,   Q2 = I2 ⊗ Q ⊗ I2,   Q3 = I2 ⊗ I2 ⊗ Q.
Write computer algebra programs in SymbolicC++ and Maxima which check that these matrices satisfy
Q1Q1 = (s⁴ + s⁻⁴)Q1,   Q2Q2 = (s⁴ + s⁻⁴)Q2,   Q3Q3 = (s⁴ + s⁻⁴)Q3,
Q1Q2Q1 = Q1,   Q2Q1Q2 = Q2,   Q2Q3Q2 = Q2,   Q3Q2Q3 = Q3,   Q1Q3 = Q3Q1.
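The Temperley-Lieb relations for n = 2 can also be checked numerically; a NumPy sketch with a sample value of s:

```python
import numpy as np

s = 1.5
I2 = np.eye(2)
Q = np.array([[0, 0, 0, 0],
              [0, s**4, 1, 0],
              [0, 1, s**-4, 0],
              [0, 0, 0, 0]])
d = s**4 + s**-4

Q1 = np.kron(Q, np.kron(I2, I2))
Q2 = np.kron(I2, np.kron(Q, I2))
Q3 = np.kron(I2, np.kron(I2, Q))

print(np.allclose(Q1 @ Q1, d * Q1))   # Q1 Q1 = (s^4 + s^-4) Q1
print(np.allclose(Q1 @ Q2 @ Q1, Q1))  # Q1 Q2 Q1 = Q1
print(np.allclose(Q2 @ Q1 @ Q2, Q2))  # Q2 Q1 Q2 = Q2
print(np.allclose(Q1 @ Q3, Q3 @ Q1))  # [Q1, Q3] = 0
```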
Problem 2. (i) Find the conditions on the 2 x 2 matrices A and B over ℂ such that ABA = BAB. Find solutions where AB ≠ BA.
(ii) Find the conditions on the 2 x 2 matrices A and B such that
A ⊗ B ⊗ A = B ⊗ A ⊗ B.
Find solutions where AB ≠ BA, i.e. [A, B] ≠ O2.

Problem 3. Consider the 4 x 4 matrices

A = ( 1  0  0  0 )     B = ( 1  3  3  1 )
    ( 1  1  0  0 )         ( 0  1  2  1 )
    ( 1  2  1  0 ),        ( 0  0  1  1 )
    ( 1  3  3  1 )         ( 0  0  0  1 ).

Is AB⁻¹A = B⁻¹AB⁻¹?
Problem 4. (i) Consider the Pauli spin matrix A = σ1. Does the matrix A satisfy the braid-like relation
(A ⊗ I2)(I2 ⊗ A)(A ⊗ I2) = (I2 ⊗ A)(A ⊗ I2)(I2 ⊗ A)?
Does the matrix A satisfy the braid-like relation
(A ⊗ I3)(I3 ⊗ A)(A ⊗ I3) = (I3 ⊗ A)(A ⊗ I3)(I3 ⊗ A)?
Ask the same question for the Pauli spin matrices σ2 and σ3.
(ii) Let n ≥ 2 and m ≥ 2. Let In be the n x n identity matrix. Let Jm be the counter identity matrix, i.e. the m x m matrix whose entries are all equal to 0 except those on the counterdiagonal which are all equal to 1. Find the conditions on n and m such that
(Jn ⊗ Im)(Im ⊗ Jn)(Jn ⊗ Im) = (Im ⊗ Jn)(Jn ⊗ Im)(Im ⊗ Jn).
Problem 5. Let

T = ( t11  t12 )
    ( t21  t22 )

be an element of the Lie group SL(2, ℝ), i.e. det(T) = 1 and tjk ∈ ℝ. Let

K = ( 1   0   0  0 )
    ( 0  −1   0  0 )
    ( 0   0  −1  0 )
    ( 0   0   0  1 ).

Find all T ∈ SL(2, ℝ) such that [K, T ⊗ T] = O4.
Problem 6. Let σ1, σ2, σ3 be the Pauli spin matrices. Consider the 4 x 4 matrix
R = a(λ, μ) σ1 ⊗ σ1 + b(λ, μ)(σ2 ⊗ σ2 + σ3 ⊗ σ3)
where a(λ, μ) and b(λ, μ) are given functions of the parameters λ and μ. Does R satisfy the braid-like relation
(R ⊗ I2)(I2 ⊗ R)(R ⊗ I2) = (I2 ⊗ R)(R ⊗ I2)(I2 ⊗ R)?
Problem 7. Consider the braid group B3 with the generators {σ1, σ2} and the relation
σ1σ2σ1 = σ2σ1σ2.
Let t ≠ 0. Show that a matrix representation is given by
with the inverse matrices
W -I A
ai
- v
i/ A
°
- W i
o \
i ) ’
- 1A y '
Let a = σ1 σ2 σ1⁻¹ σ2⁻¹ σ1⁻¹. Find f(t) = det(a − I2). Find minima and maxima of f.
Problem 8. (i) Let a, b, c, d ∈ ℝ and a ≠ 0. Does

R = ( a  0  0       0 )
    ( 0  0  c       0 )
    ( 0  d  a−cd/a  0 )
    ( 0  0  0       a )

satisfy (R ⊗ I4)(I4 ⊗ R)(R ⊗ I4) = (I4 ⊗ R)(R ⊗ I4)(I4 ⊗ R)? We have to check that R² ⊗ R = R ⊗ R².
(ii) Find the condition on the 4 x 4 matrix

S = ( s11  0    0    s14 )
    ( 0    s22  s23  0   )
    ( 0    s32  s33  0   )
    ( s41  0    0    s44 )

such that (S ⊗ I4)(I4 ⊗ S)(S ⊗ I4) = (I4 ⊗ S)(S ⊗ I4)(I4 ⊗ S). We have to check that S² ⊗ S = S ⊗ S².
Problem 9. Let R be a 4 x 4 matrix written as

R = ( R11  R12 )
    ( R21  R22 )

where R11, R12, R21, R22 are the 2 x 2 block matrices. Suppose that R satisfies the equation
(R ⊗ I2)(I2 ⊗ R)(R ⊗ I2) = (I2 ⊗ R)(R ⊗ I2)(I2 ⊗ R).
Consider the 8 x 8 matrix

R̃ = R ★ R = ( R11  O2   O2   R12 )
             ( O2   R11  R12  O2  )
             ( O2   R21  R22  O2  )
             ( R21  O2   O2   R22 ).

(i) Does R̃ satisfy (R̃ ⊗ I2)(I2 ⊗ R̃)(R̃ ⊗ I2) = (I2 ⊗ R̃)(R̃ ⊗ I2)(I2 ⊗ R̃)?
(ii) Does R̃ satisfy (R̃ ⊗ I4)(I4 ⊗ R̃)(R̃ ⊗ I4) = (I4 ⊗ R̃)(R̃ ⊗ I4)(I4 ⊗ R̃)?
Problem 10. Let R be a 4 x 4 matrix and

P = ( 1  0  0  0 )
    ( 0  0  1  0 )
    ( 0  1  0  0 )
    ( 0  0  0  1 ).

Let R̃ = PR. Suppose that R̃ satisfies
(R̃ ⊗ I2)(I2 ⊗ R̃)(R̃ ⊗ I2) = (I2 ⊗ R̃)(R̃ ⊗ I2)(I2 ⊗ R̃).
Show the following ten matrices satisfy this equation
Ro
/10
0 *\
0 1 1 0
Ri =
0 1 - 1 0
1 0 0
0 0 I 0V
0
,
0 I 0 0
Vo
0
0 I/
0
0
I+t
0 (l + r ) 1/2
1
0
1
(1 + f 2 ) 1/ 2
0
0
V 1
(I
0 0
0 \
0
st
I
f
0
i?4 =
0 0 s 0 ,
Vo 0 0 - f /
/
R2 =
V. i 0 0
I
/I
0
0 \
, Rz =
0
0
1Vl
0
/1
r
0
Re =
0 I rt
Vo 0
I/
0
0
0
S
I - t s£
0 0
0
0 °0 \I
*
0
°\
0
0
-t)
5
0
I/
/ I 0 0 0\
O s O O
R7 = O O s O
Re =
V l 0 0 s'J
/0
Rg =
0 0 1
O O f O
O f O O
Vi
0
0
0
with R̃j = PRj (j = 0, 1, ..., 9), s² = 1, (s′)² = 1 and p, t, r arbitrary.
Problem 11. (i) Let A, B be invertible n x n matrices with AB ≠ BA. Assume that
ABA = BAB   and   ABBA = In.
Show that A⁴ = B⁴ = In.
(ii) Find all 2 x 2 matrices A and B which satisfy the conditions given in (i).
(iii) Find all 3 x 3 matrices A and B which satisfy the conditions given in (i).
(iv) Find all 4 x 4 matrices A and B which satisfy the conditions given in (i).
Problem 12. Let S be the unitary 4 x 4 matrix
( °1
—i
0
I
0
0
I
i
0
0
I
—i
0/
i
Consider the 16 x 16 matrices S ⊗ I2 ⊗ I2, I2 ⊗ S ⊗ I2, I2 ⊗ I2 ⊗ S. Note that I4 = I2 ⊗ I2. Show that the braid-like relations
(S ⊗ I4)(I2 ⊗ S ⊗ I2)(S ⊗ I4) = (I2 ⊗ S ⊗ I2)(S ⊗ I4)(I2 ⊗ S ⊗ I2)
(I2 ⊗ S ⊗ I2)(I4 ⊗ S)(I2 ⊗ S ⊗ I2) = (I4 ⊗ S)(I2 ⊗ S ⊗ I2)(I4 ⊗ S)
are satisfied.
Problem 13. (i) Find all nonzero 4 x 4 matrices R such that
(R ⊗ I2)(I2 ⊗ R)(R ⊗ I2) = (I2 ⊗ R)(R ⊗ I2)(I2 ⊗ R).
We obtain 8 x 8 = 64 conditions.
(ii) Can one find R's with the additional condition R² = I4?
Problem 14. (i) Let L1, L2 be 2 x 2 matrices and R a 4 x 4 matrix. Find the conditions on L1, L2, R such that
(L1 ⊗ L2)R = R(L2 ⊗ L1).
Simplify the system of equations using Gröbner bases.
(ii) Let

T = σ2 = ( 0  −i )
         ( i   0 )

and T1 = T ⊗ I2, T2 = I2 ⊗ T. Find all invertible 4 x 4 matrices R over ℂ such that
RT1T2 = T2T1R.
(iii) Let A, B be given nonzero 2 x 2 matrices. Find all 4 x 4 matrices R such that
R(A ⊗ B) = (B ⊗ A)R.
Find all 4 x 4 matrices R such that R(A ⊗ B) = (A ⊗ B)R.
Problem 15. Do the matrices

R1 = ( a  0  0  0 )     R2 = ( 0  0  0  a )
     ( 0  0  b  0 )          ( 0  b  0  0 )
     ( 0  c  0  0 )          ( 0  0  c  0 )
     ( 0  0  0  d ),         ( d  0  0  0 )

satisfy the Yang-Baxter relation
(R ⊗ I2)(I2 ⊗ R)(R ⊗ I2) = (I2 ⊗ R)(R ⊗ I2)(I2 ⊗ R)?
Or what are the conditions on a, b, c, d so that the condition is satisfied?
Problem 16. The braid linking matrix B is a square symmetric k x k matrix defined by B = (bij) with bii the sum of half-twists in the i-th branch and bij the sum of the crossings between the i-th and the j-th branches of the ribbon graph with standard insertion. Thus the i-th diagonal element of B is the local torsion of the i-th branch. The off-diagonal elements of B are twice the linking numbers of the ribbon graph for the i-th and j-th branches. Consider the braid linking matrix
B =
Discuss. Draw a graph.
-I
0
-I
0
2
-I
Chapter 21
Graphs and Matrices
A graph consists of a non-empty set of points, called vertices (singular: vertex) or nodes, and a set of lines or curves, called edges, such that every edge is attached at each end to a vertex. We assume that V = { 1, 2, ..., n }. A digraph (directed graph) is a diagram consisting of points, called vertices, joined by directed lines, called arcs. Thus each arc joins two vertices in a specified direction. To distinguish them from undirected graphs the edges of a digraph are called arcs.

The most frequently used representation schemes for graphs and digraphs are adjacency matrices. The adjacency matrix of an n-vertex graph G = (V, E) is an n x n matrix A. The adjacency matrix A(G) of a graph G with n vertices is an n x n matrix with the matrix element aij being the number of edges joining the vertices i and j.

The adjacency matrix A(D) of a directed graph D with n vertices is an n x n matrix with the matrix element aij being the number of arcs from vertex i to vertex j.

A walk of length k in a graph is a succession of k edges joining two vertices. Edges can occur more than once in a walk. A trail is a walk in which all the edges (but not necessarily all the vertices) are distinct. A path is a walk in which all the edges and all the vertices are distinct.

An Eulerian graph is a connected graph which contains a closed trail which includes every edge. The trail is called an Eulerian trail.
Problem 1. A walk of length k in a digraph is a succession of k arcs joining two vertices. A trail is a walk in which all the arcs (but not necessarily all the vertices) are distinct. A path is a walk in which all the arcs and all the vertices are distinct. Show that the number of walks of length k from vertex i to vertex j in a digraph D with n vertices is given by the ij-th element of the matrix A^k, where A is the adjacency matrix of the digraph.

Solution 1. We use mathematical induction. Assume that the result is true for k ≤ K − 1. Consider any walk from vertex i to vertex j of length K. Such a walk consists of a walk of length K − 1 from vertex i to a vertex p which is adjacent to vertex j followed by a walk of length 1 from vertex p to vertex j. The number of such walks is (A^{K−1})ip × Apj. The total number of walks of length K from vertex i to vertex j will then be the sum of the walks through any p, i.e.
Σ_{p=1}^{n} (A^{K−1})ip Apj.
This is just the expression for the ij-th element of the matrix A^{K−1}A = A^K. Thus the result is true for k = K. The result is certainly true for walks of length 1, i.e. k = 1, since this is the definition of the adjacency matrix A. Therefore it is true for all k.
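The result can be illustrated numerically; a NumPy sketch using a sample 5-vertex digraph with arcs 1→2, 2→3, 3→1, 3→5, 4→3, 5→4:

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0]])

# number of walks of length 3 from vertex 1 back to vertex 1:
# the (1,1) entry of A^3 (here 1 -> 2 -> 3 -> 1 is the only one)
A3 = np.linalg.matrix_power(A, 3)
print(A3[0, 0])  # 1
```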
Problem 2. Consider a digraph. The out-degree of a vertex v is the number of arcs incident from v and the in-degree of a vertex v is the number of arcs incident to v. Loops count as one of each.
Determine the in-degree and the out-degree of each vertex in the digraph given by the adjacency matrix

A = ( 0  1  0  0  0 )
    ( 0  0  1  0  0 )
    ( 1  0  0  0  1 )
    ( 0  0  1  0  0 )
    ( 0  0  0  1  0 )

and hence determine if it is an Eulerian graph. Display the digraph and determine an Eulerian trail.

Solution 2. The out-degree of each vertex is given by adding the entries in the corresponding row and the in-degree by adding the entries in the corresponding column.
Vertex 1 has out-degree 1 and in-degree 1
Vertex 2 has out-degree 1 and in-degree 1
Vertex 3 has out-degree 2 and in-degree 2
Vertex 4 has out-degree 1 and in-degree 1
Vertex 5 has out-degree 1 and in-degree 1
Hence each vertex has out-degree equal to in-degree and the digraph is Eulerian. An Eulerian trail is, for instance, 1235431.
Problem 3. A digraph is strongly connected if there is a path between every
pair of vertices. Show that if A is the adjacency matrix of a digraph D with n
vertices and B is the matrix

    B = A + A^2 + A^3 + ... + A^(n-1)

then D is strongly connected iff each non-diagonal element of B is greater than
0.

Solution 3. We must show both "if each non-diagonal element of B is greater
than 0 then D is strongly connected" and "if D is strongly connected then each
non-diagonal element of B is greater than 0".
Firstly, let D be a digraph and suppose that each non-diagonal element of the
matrix B is greater than 0, i.e. b_ij > 0 for i != j and i, j = 1, 2, ..., n. Then
(A^k)_ij > 0 for some k in [1, n-1], i.e. there is a walk of some length k between
1 and n-1 from every vertex i to every vertex j. Thus the digraph is strongly
connected.
Secondly, suppose the digraph is strongly connected. Then, by definition, there
is a path from every vertex i to every vertex j. Since the digraph has n vertices
the path is of length no more than n-1. Hence, for all i != j, (A^k)_ij > 0 for
some k <= n-1. Hence, for all i != j, we have b_ij > 0.
Problem 4. Write down the adjacency matrix A for the digraph shown.
Calculate the matrices A^2, A^3 and A^4. Consequently find the number of walks
of length 1, 2, 3 and 4 from w to u. Is there a walk of length 1, 2, 3, or 4 from
u to w? Find the matrix B = A + A^2 + A^3 + A^4 for the digraph and hence
conclude whether it is strongly connected. This means finding out whether all
off-diagonal elements are nonzero.

[Figure: digraph on the five vertices u, v, w, x, y]

Solution 4. With the vertices ordered u, v, w, x, y we have

    A = (0 1 0 0 1)
        (0 0 0 1 1)
        (1 1 0 0 0)
        (1 0 1 0 0)
        (0 0 1 1 0)

Thus the powers of A are

    A^2 = (0 0 1 2 1)    A^3 = (3 1 3 1 0)    A^4 = (4 6 1 1 4)
          (1 0 2 1 0)          (3 3 1 0 1)          (1 4 1 4 6)
          (0 1 0 1 2)          (1 0 3 3 1)          (6 4 4 1 1)
          (1 2 0 0 1)          (0 1 1 3 3)          (4 1 6 4 1)
          (2 1 1 0 0)          (1 3 0 1 3)          (1 1 4 6 4)

The number of walks from w to u is given by the (3,1) element of each matrix,
so there is 1 walk of length 1, 0 walks of length 2, 1 walk of length 3 and 6 walks
of length 4 from w to u. Walks from u to w are given by the (1,3) element of
each matrix. Thus there are 0 walks of length 1, 1 walk of length 2, 3 walks of
length 3 and 1 walk of length 4 from u to w. The matrix B is given by

    B = A + A^2 + A^3 + A^4 = (7 8 5 4 6)
                              (5 7 4 6 8)
                              (8 6 7 5 4)
                              (6 4 8 7 5)
                              (4 5 6 8 7)

Therefore the digraph is strongly connected (all the off-diagonal elements are
nonzero).
Problem 5. A graph G(V, E) is a set of nodes V (points, vertices) connected
by a set of links E (edges, lines). We assume that there are n nodes. The
adjacency (n × n) matrix A = A(G) takes the form with 1 in row i, column j if i
is connected to j, and 0 otherwise. Thus A is a symmetric matrix. Associated
with A is the degree distribution D, a diagonal matrix with the row-sums of A
along the diagonal, and 0's elsewhere. We assume that d_ii > 0 for all
i = 1, 2, ..., n. We define the Laplacian as L := D - A. Let

    A = (0 1 1 0 0 0 0)
        (1 0 1 1 0 0 0)
        (1 1 0 1 0 1 0)
        (0 1 1 0 0 1 0)
        (0 0 0 0 0 1 0)
        (0 0 1 1 1 0 1)
        (0 0 0 0 0 1 0)

(i) Give an interpretation of A, A^2, A^3.
(ii) Find D and L.
(iii) Show that L admits the eigenvalue λ_0 = 0 (lowest eigenvalue) with
eigenvector x = (1, 1, 1, 1, 1, 1, 1)^T.

Solution 5. (i) By definition A is the matrix of all pairs of nodes linked to
each other. The matrix A^2 has a non-zero entry in row i column j if j is two
steps away from i. Since i is 2 steps away from itself, the diagonal (i,i) entry
counts the number of these 2-steps. The matrix A^3 has a non-zero entry in row
i column j if j is 3 steps away from i.
(ii) We have

    D = diag(2, 3, 4, 3, 1, 4, 1)

and

    L = D - A = ( 2 -1 -1  0  0  0  0)
                (-1  3 -1 -1  0  0  0)
                (-1 -1  4 -1  0 -1  0)
                ( 0 -1 -1  3  0 -1  0)
                ( 0  0  0  0  1 -1  0)
                ( 0  0 -1 -1 -1  4 -1)
                ( 0  0  0  0  0 -1  1)

(iii) With L = D - A, each diagonal entry d_ii equals the number of -1 entries
in row i, so every row of L sums to 0. Hence for x = (1, 1, 1, 1, 1, 1, 1)^T we
have Lx = 0 = 0·x, i.e. λ_0 = 0 is an eigenvalue with eigenvector x.
Problem 6. A graph G(V, E) is a set of nodes V (points, vertices) connected
by a set of links E (edges, lines). We assume that there are n nodes. The
adjacency (n × n) matrix A = A(G) takes the form with 1 in row i, column j if
i is connected to j, and 0 otherwise. Thus A is a symmetric matrix. Associated
with A is the degree distribution D, a diagonal matrix with row-sums of A along
the diagonal, and 0's elsewhere. D describes how many connections each node
has. We define the Laplacian as L := D - A. Let A = (a_ij), i.e. a_ij are the
entries of the adjacency matrix. Find the minimum of the weighted sum

    S = (1/2) Σ_{i,j=1}^{n} (x_i - x_j)^2 a_ij

with the constraint x^T x = 1, where x^T = (x_1, x_2, ..., x_n). Use the Lagrange
multiplier method. The sum is over all pairs of squared distances between nodes
which are connected, and so the solution should result in nodes with large
numbers of inter-connections being clustered together.

Solution 6. Taking into account that A is a symmetric matrix we obtain

    S = (1/2) Σ_{i,j=1}^{n} (x_i - x_j)^2 a_ij
      = (1/2) Σ_{i,j=1}^{n} (x_i^2 - 2 x_i x_j + x_j^2) a_ij
      = x^T D x - x^T A x = x^T (D - A) x = x^T L x

where L = D - A. Let λ be the Lagrange multiplier. Then we have to minimize
the expression

    Z = x^T L x - λ x^T x.

This leads to the eigenvalue equation L x = λ x. The lowest eigenvalue is λ_0 = 0
with the (trivial) eigenvector

    x_0 = (1 1 ... 1)^T.

Thus we can order 0 = λ_0 <= λ_1 <= ... <= λ_{n-1}. The eigenvector x_1
corresponding to the second smallest eigenvalue λ_1 can be used to solve the
min-cut problem: separate the network into two approximately equal sets of nodes
with the fewest number of connections between them, based on the signs of the
entries of x_1. The eigenvector x_1 is called the Fiedler vector.
Problem 7. Find the eigenvalues of the three adjacency matrices

    A = (1 0 1)    B = (1 0 1)    C = (1 1 0)
        (1 1 1)        (0 1 0)        (0 1 0)
        (1 0 1)        (1 0 1)        (0 1 1)

provided by three simple graphs. Find the "energy" E(G) of each graph defined
by

    E(G) = Σ_{j=1}^{3} |λ_j|.

Solution 7. The eigenvalues of A are 1, 2, 0 with the energy E = 3. The
eigenvalues of B are 1, 2, 0 with the energy E = 3. The eigenvalues of C are 1
(three times) with the energy E = 3.
Problem 8. Consider the two directed graphs with the adjacency matrices

    A_1 = (0 0 1 0 0)        A_2 = (0 1 0 1 0)
          (1 0 0 1 0)              (0 0 1 0 0)
          (0 1 0 0 0)              (1 0 0 0 0)
          (0 0 0 0 1)              (1 0 0 0 0)
          (1 0 0 1 0)              (1 1 1 1 0)
(i) Find the characteristic polynomials and the eigenvalues of A_1 and A_2.
(ii) Do the directed graphs admit a Hamilton cycle?
(iii) Can one remove some of the 1's in A_1 so that one has a permutation matrix?

Solution 8. (i) One finds the same characteristic polynomial for both matrices,
namely λ^5 - λ^3 - λ^2 = 0 with the eigenvalues λ_1 = 0, λ_2 = 0 and the three
roots of λ^3 - λ - 1 = 0.
(ii) For A_1 we find the Hamilton cycle 1 → 3 → 2 → 4 → 5 → 1. The adjacency
matrix A_2 admits no Hamilton cycle.
(iii) If one sets a_21 = 0 and a_54 = 0 we obtain a permutation matrix.
Problem 9. Consider the directed graph G

[Figure: directed graph on five vertices; not reproduced]

(i) Find the adjacency matrix A.
(ii) Calculate (1/2)tr(A^2). How many 2-vertex cycles (loops) does G have?
(iii) Calculate (1/3)tr(A^3). How many 3-vertex cycles (triangles) does G have?

Solution 9. The order of the vertices determines the structure of the adjacency
matrix. Consider the ordering

[Figure: vertex labelling 1, ..., 5; not reproduced]

For any other ordering of the vertices, the adjacency matrix B satisfies B =
P A P^T for some permutation matrix P. Consequently tr(A^2) = tr(B^2) and
tr(A^3) = tr(B^3).
(i) We have

    A = (0 1 0 0 1)
        (1 0 1 0 0)
        (0 0 0 1 0)
        (1 0 0 0 0)
        (0 1 0 1 0)

(ii) We have

    A^2 = (1 1 1 1 0)
          (0 1 0 1 1)
          (1 0 0 0 0)
          (0 1 0 0 1)
          (2 0 1 0 0)

and (1/2)tr(A^2) = 1. There is one loop: (1,2).
(iii) We have

    A^3 = (2 1 1 1 1)
          (2 1 1 1 0)
          (0 1 0 0 1)
          (1 1 1 1 0)
          (0 2 0 1 2)

and (1/3)tr(A^3) = 2. There are 2 triangles: (1,5,4) and (1,5,2).
Problem 10. Let G be a graph with vertices V and edges E. The graph G is
transitive if (v_1, v_2), (v_2, v_3) ∈ E implies that (v_1, v_3) ∈ E. The transitive
closure Ḡ of G is the smallest transitive graph which has G as a subgraph. Let H
be the adjacency matrix for G and H̄ be the adjacency matrix for Ḡ. If (H̄)_ij = 1
and (H̄)_jk = 1 then (H̄)_ij = (H̄)_jk = (H̄)_ik = 1. Write a C++ program which
finds H̄ for any given adjacency matrix H. Find the adjacency matrix for the
transitive closure of the graph given by the adjacency matrix

    H = (0 1 0 0 0)
        (1 0 0 0 1)
        (0 0 0 0 0)
        (0 1 0 0 1)
        (0 1 0 1 0)

Solution 10. We search for pairs (H)_ij = 1 and (H)_jk = 1 over all values
of i, j and k and set (H)_ik to 1 if such a pair is found. We have to repeat
this procedure again for each of the nodes because new pairs may have been
introduced with (H)_ij = 1 and (H)_jk = 1. Here || denotes the logical OR
operation and && denotes the logical AND operation. The program outputs H̄

    H̄ = (1 1 0 1 1)
        (1 1 0 1 1)
        (0 0 0 0 0)
        (1 1 0 1 1)
        (1 1 0 1 1)

// closure.cpp
#include <iostream>
using namespace std;

int main(void)
{
  const int n = 5;
  bool H[n][n] = { { 0, 1, 0, 0, 0 },{ 1, 0, 0, 0, 1 },
                   { 0, 0, 0, 0, 0 },{ 0, 1, 0, 0, 1 },
                   { 0, 1, 0, 1, 0 } };
  int i, j, k, l;
  cout << "Adjacency matrix" << endl;
  for(i=0;i<n;++i)
  {
    cout << "[ ";
    for(j=0;j<n;++j) cout << H[i][j] << " ";
    cout << "]" << endl;
  }
  cout << endl;
  for(l=0;l<n;++l)
    for(k=0;k<n;++k)
      for(j=0;j<n;++j)
        for(i=0;i<n;++i) H[i][j] = ( H[i][j] || ( H[i][k] && H[k][j] ) );
  cout << "Transitive closure" << endl;
  for(i=0;i<n;++i)
  {
    cout << "[ ";
    for(j=0;j<n;++j) cout << H[i][j] << " ";
    cout << "]" << endl;
  }
  return 0;
}
Supplementary Problems

Problem 1. Compare the two adjacency matrices

    A = (0 1 1 0)        B = (1 0 0 1)
        (1 0 0 1)            (0 1 1 0)
        (1 0 0 1)            (0 1 1 0)
        (0 1 1 0)            (1 0 0 1)

Do they admit an Euler path?
Problem 2. Consider the adjacency matrix

    (0 1 1 1 0)
    (1 0 0 0 1)
    (1 0 0 0 1)
    (1 0 0 0 1)
    (0 1 1 1 0)

Is there a Hamilton cycle?
Problem 3. Let G be a graph with vertices V_G = {1, ..., n_G} and edges
E_G ⊆ V_G × V_G. The n_G × n_G adjacency matrix A_G is given by (A_G)_ij =
χ_{E_G}(i, j) ∈ {0, 1} where i, j ∈ V_G. In the following we consider products of
graphs G1 and G2 with n_G1 n_G2 × n_G1 n_G2 adjacency matrices. The rows
and columns are indexed by V_G1 × V_G2, where we use the ordering
(i, j) < (k, l) if i < k or, i = k and j < l.

The cartesian product G1 × G2 of two graphs G1 and G2 is the graph with
vertices

    V_{G1×G2} = V_G1 × V_G2

and edges

    E_{G1×G2} = { ((a,b), (a,b')) : a ∈ V_G1 and (b,b') ∈ E_G2 }
              ∪ { ((a,b), (a',b)) : (a,a') ∈ E_G1 and b ∈ V_G2 }.

It follows that

    (A_{G1×G2})_{(i,j),(k,l)} = δ_ik (A_G2)_{(j,l)} ⊞ δ_jl (A_G1)_{(i,k)}

where ⊞ is the usual addition with the convention that 1 ⊞ 1 = 1. Thus

    A_{G1×G2} = (A_G1 ⊗ I_{n_G2}) ⊞ (I_{n_G1} ⊗ A_G2).

The lexicographic product G1 • G2 of two graphs G1 and G2 is the graph with
vertices

    V_{G1•G2} = V_G1 × V_G2

and edges

    E_{G1•G2} = { ((a,b), (a,b')) : a ∈ V_G1 and (b,b') ∈ E_G2 }
              ∪ { ((a,b), (a',b')) : (a,a') ∈ E_G1 and b, b' ∈ V_G2 }.

Thus E_{G1×G2} ⊆ E_{G1•G2}. It follows that

    (A_{G1•G2})_{(i,j),(k,l)} = δ_ik (A_G2)_{(j,l)} ⊞ (A_G1)_{(i,k)}.

Hence

    A_{G1•G2} = (A_G1 ⊗ 1_{n_G2}) ⊞ (I_{n_G1} ⊗ A_G2).

Here 1_{n_G2} is the n_G2 × n_G2 matrix with every entry equal to 1.

The tensor product G1 ⊗ G2 of two graphs G1 and G2 is the graph with vertices

    V_{G1⊗G2} = V_G1 × V_G2

and edges

    E_{G1⊗G2} = { ((a,b), (a',b')) : (a,a') ∈ E_G1 and (b,b') ∈ E_G2 }.

It follows that

    (A_{G1⊗G2})_{(i,j),(k,l)} = (A_G1)_{(i,k)} (A_G2)_{(j,l)}
                              = (A_G1 ⊗ A_G2)_{(i-1)n_G2+j, (k-1)n_G2+l}.

Thus A_{G1⊗G2} = A_G1 ⊗ A_G2 where ⊗ is the Kronecker product of matrices.

The normal product G1 ★ G2 of two graphs G1 and G2 is the graph with vertices

    V_{G1★G2} = V_G1 × V_G2

and edges

    E_{G1★G2} = E_{G1×G2} ∪ E_{G1⊗G2}.

Thus

    A_{G1★G2} = A_{G1×G2} ⊞ A_{G1⊗G2}
              = (A_G1 ⊗ I_{n_G2}) ⊞ (I_{n_G1} ⊗ A_G2) ⊞ A_G1 ⊗ A_G2.

(i) Show that the Euler path is not preserved (in general) under these operations.
(ii) Show that the Hamilton path is not preserved (in general) under these
operations.
Chapter 22
Hilbert Spaces and
Mutually Unbiased Bases
A complex inner product vector space (also called pre-Hilbert space) is a complex
vector space together with an inner product. An inner product vector space
which is complete with respect to the norm induced by the inner product is
called a Hilbert space. Two finite dimensional Hilbert spaces are considered:
the Hilbert space C^d with the scalar product

    (v_1, v_2) := v_1* v_2

and the Hilbert space of m × m matrices X, Y over C with the scalar product

    (X, Y) := tr(XY*).

Let H_1 and H_2 be two finite dimensional Hilbert spaces with

    dim(H_1) = dim(H_2) = d.

Let B_{1,1} = {|j_1⟩} and B_{1,2} = {|j_2⟩} (j_1, j_2 = 1, ..., d) be two orthonormal
bases in the Hilbert space H_1. They are called mutually unbiased iff

    |⟨j_1|j_2⟩|^2 = 1/d

for all j_1, j_2 = 1, ..., d. Let B_{2,1} = {|k_1⟩} and B_{2,2} = {|k_2⟩}
(k_1, k_2 = 1, ..., d^2) be orthonormal bases in the Hilbert space H_2. This means
that two orthonormal bases in the Hilbert space C^d

    A = { e_1, ..., e_d },   B = { f_1, ..., f_d }

are called mutually unbiased if for every 1 <= j, k <= d

    |(e_j, f_k)| = 1/√d.
Problem 1. The four Dirac matrices are given by

    γ_1 = ( 0  0  0 -i)      γ_2 = ( 0  0  0 -1)
          ( 0  0 -i  0)            ( 0  0  1  0)
          ( 0  i  0  0)            ( 0  1  0  0)
          ( i  0  0  0)            (-1  0  0  0)

    γ_3 = ( 0  0 -i  0)      γ_4 = ( 1  0  0  0)
          ( 0  0  0  i)            ( 0  1  0  0)
          ( i  0  0  0)            ( 0  0 -1  0)
          ( 0 -i  0  0)            ( 0  0  0 -1)

and

    γ_5 := γ_1 γ_2 γ_3 γ_4 = ( 0  0 -1  0)
                             ( 0  0  0 -1)
                             (-1  0  0  0)
                             ( 0 -1  0  0)

We define the 4 × 4 matrices

    σ_jk := (i/2)[γ_j, γ_k],   j = 1, 2, 3,  k = 2, 3, 4,  j < k.

(i) Calculate σ_12, σ_13, σ_14, σ_23, σ_24, σ_34.
(ii) Do the 16 matrices

    I_4, γ_1, γ_2, γ_3, γ_4, γ_5, γ_5γ_1, γ_5γ_2, γ_5γ_3, γ_5γ_4,
    σ_12, σ_13, σ_14, σ_23, σ_24, σ_34

form a basis in the Hilbert space M_4(C)? If so is the basis orthogonal?

Solution 1. (i) We have

    σ_12 = (i/2)[γ_1, γ_2] = diag(-1, +1, -1, +1)

    σ_13 = (i/2)[γ_1, γ_3] = ( 0 -i  0  0)
                             ( i  0  0  0)
                             ( 0  0  0 -i)
                             ( 0  0  i  0)

    σ_14 = (i/2)[γ_1, γ_4] = ( 0  0  0 -1)
                             ( 0  0 -1  0)
                             ( 0 -1  0  0)
                             (-1  0  0  0)

    σ_23 = (i/2)[γ_2, γ_3] = ( 0 -1  0  0)
                             (-1  0  0  0)
                             ( 0  0  0 -1)
                             ( 0  0 -1  0)

    σ_24 = (i/2)[γ_2, γ_4] = ( 0  0  0  i)
                             ( 0  0 -i  0)
                             ( 0  i  0  0)
                             (-i  0  0  0)

    σ_34 = (i/2)[γ_3, γ_4] = ( 0  0 -1  0)
                             ( 0  0  0  1)
                             (-1  0  0  0)
                             ( 0  1  0  0)

(ii) The 16 matrices are linearly independent since from the equation

    c_0 I_4 + Σ_{j=1}^{5} c_j γ_j + Σ_{j=1}^{4} d_j γ_5γ_j + Σ_{j<k} e_jk σ_jk = 0_4

it follows that all the coefficients c_j, d_j, e_jk are equal to 0. Calculating the
scalar product of all possible pairs of matrices we find 0. For example

    (γ_1, γ_2) = tr(γ_1 γ_2*) = tr(diag(i, -i, i, -i)) = 0.

Problem 2.
Let P be the n × n primary permutation matrix

    P := (0 1 0 ... 0)
         (0 0 1 ... 0)
         (. . .     .)
         (0 0 0 ... 1)
         (1 0 0 ... 0)

and V be the n × n unitary diagonal matrix (ζ ∈ C)

    V := (1 0 0   ... 0      )
         (0 ζ 0   ... 0      )
         (0 0 ζ^2 ... 0      )
         (. . .       .      )
         (0 0 0   ... ζ^(n-1))

where ζ^n = 1. Then the set of matrices

    { P^j V^k : j, k = 0, 1, ..., n-1 }

provides a basis in the Hilbert space of all n × n matrices with the scalar product

    (A, B) := (1/n) tr(AB*)

for n × n matrices A and B. Write down the basis for n = 2.

Solution 2. For n = 2 we have ζ = -1 and the combinations

    (j, k) ∈ { (0,0), (0,1), (1,0), (1,1) }.

This yields the orthonormal basis in the vector space of 2 × 2 matrices

    P^0 V^0 = I_2,  P^0 V = (1 0; 0 -1) = σ_3,
    P V^0 = (0 1; 1 0) = σ_1,  P V = (0 -1; 1 0) = -iσ_2.
Problem 3. Consider the two normalized vectors

    v(α) = (cos(α), sin(α))^T,   u(β) = (cos(β), sin(β))^T

in R^2. Find the condition on α, β ∈ R such that

    |v^T(α) u(β)|^2 = 1/2.

Solution 3. Owing to the identity

    cos(α) cos(β) + sin(α) sin(β) = cos(α - β)

we obtain

    α - β ∈ { π/4, 3π/4, 5π/4, 7π/4 } mod 2π.

Problem 4.
Consider four nonnormal 2 × 2 matrices Q, R, S, T.
Show that they form an orthonormal basis in the Hilbert space of the 2 × 2
matrices with the scalar product (X, Y) = tr(XY*).

Solution 4. The matrices are nonzero and

    (Q,R) = 0, (Q,S) = 0, (Q,T) = 0, (R,S) = 0, (R,T) = 0, (S,T) = 0.

Furthermore (Q,Q) = (R,R) = (S,S) = (T,T) = 1.
Problem 5. Consider the Hilbert space M_d(C) of d × d matrices with scalar
product (A, B) := tr(AB*), A, B ∈ M_d(C). Consider an orthogonal basis of d^2
d × d hermitian matrices B_1, B_2, ..., B_{d^2}, i.e.

    (B_j, B_k) = tr(B_j B_k) = d δ_jk

since B_k* = B_k for a hermitian matrix. Let M be a d × d hermitian matrix. Let

    m_j = tr(B_j M),   j = 1, ..., d^2.

Given m_j and B_j (j = 1, ..., d^2), find M.

Solution 5. We find

    M = (1/d) Σ_{j=1}^{d^2} m_j B_j

since

    M B_k = (1/d) Σ_{j=1}^{d^2} m_j B_j B_k.

Thus

    tr((1/d) Σ_{j=1}^{d^2} m_j B_j B_k) = (1/d) Σ_{j=1}^{d^2} m_j tr(B_j B_k)
                                        = Σ_{j=1}^{d^2} m_j δ_jk = m_k.
Problem 6. (i) Consider the Hilbert space H of the 2 × 2 matrices over
the complex numbers. Show that the rescaled Pauli spin matrices
μ_j = (1/√2)σ_j, j = 1, 2, 3, plus the rescaled 2 × 2 identity matrix

    μ_0 = (1/√2) (1 0)
                 (0 1)

form an orthonormal basis in the Hilbert space H.
(ii) Find an orthonormal basis given by hermitian matrices in the Hilbert space
H of 4 × 4 matrices over C. Hint: start with hermitian 2 × 2 matrices and then
use the Kronecker product.

Solution 6. (i) The Hilbert space H is four dimensional. Since

    (μ_j, μ_k) = δ_jk

and μ_0, μ_1, μ_2, μ_3 are nonzero matrices we have an orthonormal basis.
(ii) Consider the rescaled Pauli spin matrices μ_1, μ_2, μ_3 and the rescaled 2 × 2
identity matrix μ_0. Now the Kronecker product of two hermitian matrices is
again hermitian. Thus forming all possible 16 Kronecker products

    μ_j ⊗ μ_k,   j, k = 0, 1, 2, 3

we find an orthonormal basis of the Hilbert space of the 4 × 4 matrices, where
we used

    tr((μ_j ⊗ μ_k)(μ_m ⊗ μ_n)) = tr((μ_j μ_m) ⊗ (μ_k μ_n)) = (tr(μ_j μ_m))(tr(μ_k μ_n))

with j, k, m, n = 0, 1, 2, 3. We applied that tr(A ⊗ B) = tr(A)tr(B), where A
and B are n × n matrices.
Problem 7. Let A, B be hermitian matrices. Find the scalar product

    (A ⊗ B, B ⊗ A).

Is (A ⊗ B, B ⊗ A) >= 0? Prove or disprove.

Solution 7. Since B ⊗ A is hermitian we find

    (A ⊗ B, B ⊗ A) = tr((AB) ⊗ (BA)) = tr(AB) tr(BA) = (tr(AB))^2.

Since A and B are hermitian, tr(AB) is real, and therefore
(A ⊗ B, B ⊗ A) = (tr(AB))^2 >= 0.
Problem 8. Consider the Hilbert space of the n × n matrices over C. Let A,
B be two n × n matrices over C. Assume that (A, B) = 0. Calculate

    (A ⊗ A, B ⊗ B),   (A ⊗ B, A ⊗ B),   (A ⊗ B, B ⊗ A).

Solution 8. We have

    (A ⊗ A, B ⊗ B) = tr((AB*) ⊗ (AB*)) = tr(AB*) tr(AB*) = 0
    (A ⊗ B, A ⊗ B) = tr((AA*) ⊗ (BB*)) = tr(AA*) tr(BB*)
    (A ⊗ B, B ⊗ A) = tr((AB*) ⊗ (BA*)) = tr(AB*) tr(BA*) = 0.
Problem 9. Consider the Hilbert space H = C^4 and the hermitian matrices

    H = ( 2 -1  0 -1)      A = (1 0  0 0)      B = (0 0 0  0)
        (-1  2 -1  0)          (0 0  0 0)          (0 1 0  0)
        ( 0 -1  2 -1)          (0 0 -1 0)          (0 0 0  0)
        (-1  0 -1  2)          (0 0  0 0)          (0 0 0 -1)

We consider the 16 × 16 matrix

    K = (1/2)(H ⊗ I_4 + I_4 ⊗ H) - λ A ⊗ A - μ B ⊗ B

acting in the Hilbert space C^16, where λ, μ > 0.
(i) Consider the permutation matrix

    R = (0 1 0 0)
        (0 0 1 0)
        (0 0 0 1)
        (1 0 0 0)

Thus the inverse of R exists. Calculate (R ⊗ R)^(-1) K (R ⊗ R) and the
commutator [R ⊗ R, K]. Discuss.
(ii) Consider the permutation matrices

    S = (0 0 1 0)      T = (1 0 0 0)
        (0 1 0 0)          (0 0 0 1)
        (1 0 0 0)          (0 0 1 0)
        (0 0 0 1)          (0 1 0 0)

Find the commutators [S ⊗ S, K] and [T ⊗ T, K]. Discuss.

Solution 9. (i) We find (R ⊗ R)^(-1) K (R ⊗ R) = K and [R ⊗ R, K] = 0_16 if
λ = μ.
(ii) We find [S ⊗ S, K] = 0_16 and [T ⊗ T, K] = 0_16 for any λ, μ > 0.
Problem 10. Consider the Hilbert space of 2 × 2 matrices. Let α, β ∈ R and

    A(α) = (1/√2) (cos(α)  sin(α) )      B(β) = (1/√2) (cos(β)  sin(β) )
                  (sin(α) -cos(α))                     (sin(β) -cos(β))

Find the conditions on α and β such that (A(α), B(β)) = 1/2. The matrices
A(α), B(β) contain (1/√2)σ_3 with α = 0 and (1/√2)σ_1 with α = π/2.

Solution 10. Note that B*(β) = B(β). We find

    (A(α), B(β)) = tr(A(α)B*(β)) = cos(α - β) = 1/2

with the solution α - β = π/3, 5π/3 (mod 2π). Study the case

    (A(α) ⊗ A(α), B(β) ⊗ B(β)) = 1/2.

Extend the matrix A(α) to

    Ã(α, φ) = (1/√2) (cos(α)            e^{iφ} sin(α))
                     (e^{-iφ} sin(α)   -cos(α)       ).
Problem 11. Consider the Pauli spin matrices σ_3, σ_1, σ_2. Show that the
normalized eigenvectors of σ_3, σ_1, σ_2 provide a set of mutually unbiased bases.

Solution 11. The eigenvalues of the three matrices are +1 and -1. The
normalized eigenvectors of σ_3 are given by

    B_1 = { (1, 0)^T, (0, 1)^T }.

The normalized eigenvectors of σ_1 are

    B_2 = { (1/√2)(1, 1)^T, (1/√2)(1, -1)^T }.

The normalized eigenvectors of σ_2 are

    B_3 = { (1/√2)(1, i)^T, (1/√2)(1, -i)^T }.

This is a set of three mutually unbiased bases.
Problem 12. Consider the three 4 × 4 matrices σ_3 ⊗ σ_3, σ_1 ⊗ σ_1, σ_2 ⊗ σ_2.
Find mutually unbiased bases for these matrices applying the result from the
previous problem.

Solution 12. Utilizing the Kronecker product and the bases given above, the
sets

    B_1 ⊗ B_1 = { (1,0)^T ⊗ (1,0)^T, (1,0)^T ⊗ (0,1)^T,
                  (0,1)^T ⊗ (1,0)^T, (0,1)^T ⊗ (0,1)^T }

    B_2 ⊗ B_2 = { (1/2)(1,1)^T ⊗ (1,1)^T, (1/2)(1,1)^T ⊗ (1,-1)^T,
                  (1/2)(1,-1)^T ⊗ (1,1)^T, (1/2)(1,-1)^T ⊗ (1,-1)^T }

    B_3 ⊗ B_3 = { (1/2)(1,i)^T ⊗ (1,i)^T, (1/2)(1,i)^T ⊗ (1,-i)^T,
                  (1/2)(1,-i)^T ⊗ (1,i)^T, (1/2)(1,-i)^T ⊗ (1,-i)^T }

provide mutually unbiased bases in the Hilbert space C^4.
Extend the results from problem 11 and problem 12 to the case of the Hilbert
space C^8 with σ_3 ⊗ σ_3 ⊗ σ_3, σ_1 ⊗ σ_1 ⊗ σ_1, σ_2 ⊗ σ_2 ⊗ σ_2. Extend to n
Kronecker products of the Pauli spin matrices.
Problem 13. Let d >= 2. Consider the Hilbert space C^d and let
e_0, e_1, ..., e_{d-1} be the standard basis

    e_0 = (1, 0, ..., 0)^T,  e_1 = (0, 1, 0, ..., 0)^T,  ...,  e_{d-1} = (0, ..., 0, 1)^T.

Let ω_d := e^{i2π/d}. Then we can form new orthonormal bases via

    |m; b⟩ = (1/√d) Σ_{n=0}^{d-1} ω_d^{bn(n-1)/2 - nm} e_n,   b, m = 0, 1, ..., d-1

where the d bases are labelled by b and m labels the state within a basis.
Thus we find mutually unbiased bases.
(i) Consider d = 2 and b = 0. Find the basis |0; 0⟩, |1; 0⟩.
(ii) Consider d = 3 and b = 0. Find the basis |0; 0⟩, |1; 0⟩, |2; 0⟩.
(iii) Consider d = 4 and b = 0. Find the basis |0; 0⟩, |1; 0⟩, |2; 0⟩, |3; 0⟩. Find out
whether the states can be written as Kronecker products of two vectors in C^2,
i.e. whether the states are entangled or not.

Solution 13. (i) We have ω_2 = e^{iπ} = -1 and e^{-iπ} = -1 = ω_2^{-1}. Thus

    |0; 0⟩ = (1/√2)(1, 1)^T

and

    |1; 0⟩ = (1/√2) Σ_{n=0}^{1} ω_2^{-n} e_n = (1/√2)(1, -1)^T.

These are the normalized eigenvectors of σ_1.
(ii) We have ω_3 = e^{i2π/3} = (-1 + i√3)/2 and ω_3^{-1} = e^{-i2π/3} = (-1 - i√3)/2
and

    |0; 0⟩ = (1/√3)(1, 1, 1)^T,
    |1; 0⟩ = (1/√3)(1, e^{-i2π/3}, e^{-i4π/3})^T,
    |2; 0⟩ = (1/√3)(1, e^{-i4π/3}, e^{-i2π/3})^T.

(iii) We have ω_4 = e^{iπ/2} = i and ω_4^{-1} = e^{-iπ/2} = -i. Thus

    |0; 0⟩ = (1/2)(1, 1, 1, 1)^T = (1/√2)(1, 1)^T ⊗ (1/√2)(1, 1)^T
    |1; 0⟩ = (1/2)(1, -i, -1, i)^T = (1/√2)(1, -1)^T ⊗ (1/√2)(1, -i)^T
    |2; 0⟩ = (1/2)(1, -1, 1, -1)^T = (1/√2)(1, 1)^T ⊗ (1/√2)(1, -1)^T
    |3; 0⟩ = (1/2)(1, i, -1, -i)^T = (1/√2)(1, -1)^T ⊗ (1/√2)(1, i)^T

Thus none of the states are entangled.
Supplementary Problems

Problem 1. Assume in the following that B_{1,1} and B_{1,2} are mutually
unbiased bases in the Hilbert space H_1 and B_{2,1} and B_{2,2} are mutually
unbiased bases in the Hilbert space H_2, respectively. Show that

    { |j_1⟩ ⊗ |k_1⟩ },  { |j_2⟩ ⊗ |k_2⟩ },   j_1, j_2 = 1, ..., d_1,  k_1, k_2 = 1, ..., d_2

are mutually unbiased bases in the finite dimensional product Hilbert space
H_1 ⊗ H_2 with dim(H_1 ⊗ H_2) = d_1 · d_2 and the scalar product in the product
Hilbert space

    (⟨j_1| ⊗ ⟨k_1|)(|j_2⟩ ⊗ |k_2⟩) = ⟨j_1|j_2⟩⟨k_1|k_2⟩.

Apply this result to H_2 = C^3 with

    B_{2,1} = { |1⟩_{2,1} = (1, 0, 0)^T, |2⟩_{2,1} = (0, 1, 0)^T, |3⟩_{2,1} = (0, 0, 1)^T }

    B_{2,2} = { |1⟩_{2,2} = (1/√3)(1, 1, 1)^T,
                |2⟩_{2,2} = (1/√3, -i/2 - 1/(2√3), i/2 - 1/(2√3))^T,
                |3⟩_{2,2} = (1/√3, i/2 - 1/(2√3), -i/2 - 1/(2√3))^T }.
Problem 2. The vector space of all n × n matrices over C forms a Hilbert
space with the scalar product defined by (A, B) := tr(AB*). This implies a
norm ||A||^2 = tr(AA*).
(i) Consider the Lie group U(n). Find two unitary 2 × 2 matrices U_1, U_2 such
that ||U_1 - U_2|| takes a maximum.
(ii) Are the matrices

    U_1 = (1/√2) (1  1)      U_2 = (0 1)
                 (1 -1)            (1 0)

such a pair?
Problem 3. Consider the two normalized vectors

    v(α, φ) = (e^{iφ} cos(α), sin(α))^T,   u(β, ψ) = (e^{iψ} cos(β), sin(β))^T

in C^2. Find the condition on α, β, φ, ψ ∈ R such that

    |v*(α, φ) u(β, ψ)|^2 = 1/2.

Note that

    v*(α, φ) u(β, ψ) = e^{i(ψ-φ)} cos(α) cos(β) + sin(α) sin(β)

and

    |v*(α, φ) u(β, ψ)|^2 = cos^2(α) cos^2(β)
        + 2 cos(ψ - φ) cos(α) cos(β) sin(α) sin(β) + sin^2(α) sin^2(β).
Problem 4. Consider the two normalized vectors

    v_1(φ_1, θ_1) = (cos(φ_1) sin(θ_1), sin(φ_1) sin(θ_1), cos(θ_1))^T
    v_2(φ_2, θ_2) = (cos(φ_2) sin(θ_2), sin(φ_2) sin(θ_2), cos(θ_2))^T

Find the conditions on φ_1, θ_1, φ_2, θ_2 such that
|v_1^T(φ_1, θ_1) v_2(φ_2, θ_2)| = 1/√3.
Problem 5. Consider the Hilbert space M(2, C) of the 2 × 2 matrices over the
complex numbers with the scalar product (A, B) = tr(AB*), A, B ∈ M(2, C).
Two mutually unbiased bases are given by

    (1 0)  (0 1)  (0 0)  (0 0)
    (0 0), (0 0), (1 0), (0 1)

and

    (1/2) (1  1)   (1/2) (1  1)   (1/2) ( 1 -1)   (1/2) (-1 1)
          (1 -1),        (-1 1),        ( 1  1),        ( 1 1).

Construct mutually unbiased bases applying the Kronecker product for the
Hilbert space M(4, C).
Chapter 23
Linear Differential
Equations
Let A be an n × n matrix over R. Homogeneous systems of linear differential
equations with constant coefficients are given by

    dx/dt = Ax,   x = (x_1, x_2, ..., x_n)^T     (1)

with the initial condition x(t = 0) = x_0. The solution is given by

    x(t) = exp(tA) x_0.     (2)

The nonhomogeneous systems of linear differential equations with constant
coefficients are given by

    dx/dt = Ax + y(t)     (3)

where y_1(t), y_2(t), ..., y_n(t) are analytic functions. The solution can be found
with the method called variation of constants. The solution which vanishes at
t = 0 is given by

    x(t) = ∫_0^t exp((t - τ)A) y(τ) dτ = exp(tA) ∫_0^t exp(-τA) y(τ) dτ.     (4)

Thus the solution of a system of differential equations (3) is the sum of the
general solution of (1) given by (2) and the particular solution given by (4).
P r o b le m I.
Solve the initial value problem of dx/dt = A x, where
S olu tion I .
Since
exp(tA) =
cosh(t)
sinh(t)
sinh(t) \
cosh(t) J
we obtain
( z i( 0 )\ = ( Zi(O) cosh(<) + X2 (O) sinh(i) \
^ z 2 (O))
\ x\ (0 ) sinh(<) + X2 (O) cosh(i ) ) '
Problem 2. (i) Solve the initial value problem of dx/dt = Ax, where

    A = (a c)
        (0 b),   a, b, c ∈ R.

(ii) Let

    A = (1 1)
        (0 1).

Calculate exp(tA), where t ∈ R. Find the solution of the initial value problem
of the differential equation

    (du_1/dt) = A (u_1)
    (du_2/dt)     (u_2)

with the initial conditions u_1(t = 0) = u_10, u_2(t = 0) = u_20. Use the result
from (i).

Solution 2. (i) We have

    exp(tA) = (e^{at}  d(t)  )
              (0       e^{bt})

where

    d(t) = c (e^{bt} - e^{at})/(b - a)   for b != a

and d(t) = c t e^{at} for b = a.
(ii) Using (i) with a = b = c = 1 we obtain

    exp(tA) = (e^t  t e^t)
              (0    e^t ).

Thus the solution of the initial value problem is

    u_1(t) = e^t u_10 + t e^t u_20,   u_2(t) = e^t u_20.

Problem 3.
Show that the n-th order differential equation

    d^n x/dt^n = c_0 x + c_1 dx/dt + ... + c_{n-1} d^{n-1}x/dt^{n-1}

can be written as a system of first-order differential equations.

Solution 3. We set x_1 = x and

    dx_1/dt := x_2,  dx_2/dt := x_3,  ...,
    dx_n/dt := c_0 x_1 + c_1 x_2 + ... + c_{n-1} x_n.

Thus we have the system dx/dt = Ax with the matrix

    A = (0   1   0   ...  0      )
        (0   0   1   ...  0      )
        (.   .   .        .      )
        (0   0   0   ...  1      )
        (c_0 c_1 c_2 ...  c_{n-1}).
Problem 4. Let A, X, F be n × n matrices. Assume that the matrix elements
of X and F are differentiable functions of t. Consider the initial-value linear
matrix differential equation with an inhomogeneous part

    dX(t)/dt = A X(t) + F(t),   X(t_0) = C.

Find the solution of this matrix differential equation.

Solution 4. The solution is given by

    X(t) = e^{(t-t_0)A} C + e^{tA} ∫_{t_0}^{t} e^{-sA} F(s) ds.
Problem 5. Let A, B, C, Y be n × n matrices. We know that AY + YB = C
can be written as

    ((I_n ⊗ A) + (B^T ⊗ I_n)) vec(Y) = vec(C)

where ⊗ denotes the Kronecker product. The vec operation is defined as

    vec(Y) := (y_11, ..., y_n1, y_12, ..., y_n2, ..., y_1n, ..., y_nn)^T.

Apply the vec operation to the matrix differential equation

    dX(t)/dt = A X(t) + X(t) B

where A, B are n × n matrices and the initial matrix X(t = 0) = X(0) is given.
Find the solution of this differential equation.

Solution 5. The vec operation is linear and using the result from above we
obtain

    (d/dt) vec(X(t)) = (I_n ⊗ A + B^T ⊗ I_n) vec(X(t)).

The solution of the linear differential equation is given by

    vec(X(t)) = e^{t(I_n ⊗ A + B^T ⊗ I_n)} vec(X(0)).

Since [I_n ⊗ A, B^T ⊗ I_n] = 0_{n^2} this can be written as

    vec(X(t)) = (e^{tB^T} ⊗ e^{tA}) vec(X(0)).
Problem 6. The motion of a charge q in an electromagnetic field is given by

    m dv/dt = q(E + v × B)     (1)

where m denotes the mass and v the velocity. Assume that

    E = (E_1, E_2, E_3)^T,   B = (B_1, B_2, B_3)^T     (2)

are constant fields. Find the solution of the initial value problem.

Solution 6. Equation (1) can be written in the form

    (dv_1/dt)         ( 0   B_3 -B_2) (v_1)         (E_1)
    (dv_2/dt) = (q/m) (-B_3  0   B_1) (v_2) + (q/m) (E_2).     (3)
    (dv_3/dt)         ( B_2 -B_1  0 ) (v_3)         (E_3)

We set

    (q/m) B_i → B_i,   (q/m) E_i → E_i.     (4)

Thus

    (dv_1/dt)   ( 0   B_3 -B_2) (v_1)   (E_1)
    (dv_2/dt) = (-B_3  0   B_1) (v_2) + (E_2).     (5)
    (dv_3/dt)   ( B_2 -B_1  0 ) (v_3)   (E_3)

Equation (5) is a system of nonhomogeneous linear differential equations with
constant coefficients. The solution to the homogeneous equation

    (dv_1/dt)   ( 0   B_3 -B_2) (v_1)
    (dv_2/dt) = (-B_3  0   B_1) (v_2)     (6)
    (dv_3/dt)   ( B_2 -B_1  0 ) (v_3)

is given by

    (v_1(t), v_2(t), v_3(t))^T = e^{tM} (v_1(0), v_2(0), v_3(0))^T     (7)

where M is the matrix of the right-hand side of (6) and v_j(0) = v_j(t = 0). The
solution of the system of nonhomogeneous linear differential equations (5) can
be found with the help of the method called variation of constants. One sets

    v(t) = e^{tM} f(t)     (8)

where f : R → R^3 is a differentiable curve. Then

    dv/dt = M e^{tM} f(t) + e^{tM} df/dt.     (9)

Inserting (9) into (5) yields

    M v(t) + E = M e^{tM} f(t) + e^{tM} df/dt = M v(t) + e^{tM} df/dt.     (10)

Consequently

    df/dt = e^{-tM} E.     (11)

By integration we obtain

    f(t) = ∫_0^t e^{-sM} E ds + K     (12)

where K = (K_1, K_2, K_3)^T. Therefore we obtain the general solution of the
initial value problem of the nonhomogeneous system (5), namely

    v(t) = e^{tM} ( ∫_0^t e^{-sM} E ds + K )

where

    K = (v_1(0), v_2(0), v_3(0))^T.

We now have to calculate e^{tM} and e^{-sM}. We find that

    M^2 = (-B_2^2 - B_3^2    B_1 B_2           B_1 B_3        )
          ( B_1 B_2         -B_1^2 - B_3^2     B_2 B_3        )
          ( B_1 B_3          B_2 B_3          -B_1^2 - B_2^2  )

and M^3 = M^2 M = -B^2 M, where B^2 := B_1^2 + B_2^2 + B_3^2. Hence
M^4 = -B^2 M^2. Since

    e^{tM} := Σ_{k=0}^{∞} (tM)^k/k!
            = I + tM/1! + t^2 M^2/2! + t^3 M^3/3! + t^4 M^4/4! + ...

where I denotes the 3 × 3 unit matrix, we obtain

    e^{tM} = I + (M/B)(tB - t^3 B^3/3! + t^5 B^5/5! - ...)
               + (M^2/B^2)(t^2 B^2/2! - t^4 B^4/4! + ...).

Thus

    e^{tM} = I + (M/B) sin(Bt) + (M^2/B^2)(1 - cos(Bt))

and

    e^{-sM} = I - (M/B) sin(Bs) + (M^2/B^2)(1 - cos(Bs))

using the substitution t → -s. Since

    ∫_0^t e^{-sM} E ds = ∫_0^t ( I - (M/B) sin(Bs) + (M^2/B^2)(1 - cos(Bs)) ) E ds
                       = Et + (ME/B^2) cos(Bt) - ME/B^2
                         + (M^2 E/B^2) t - (M^2 E/B^3) sin(Bt)

we find as the solution of the initial value problem of system (5)

    v(t) = (I + M^2/B^2) E t - (M^2 E/B^3) sin(Bt) + (ME/B^2)(1 - cos(Bt))
           + (M/B) sin(Bt) v(0) + (M^2/B^2)(1 - cos(Bt)) v(0) + v(0)

where v(t = 0) = v(0).
Problem 7. Consider a system of linear ordinary differential equations with
periodic coefficients

    du/dt = A(t) u     (1)

where A(t) is an n × n matrix of periodic functions with a period T. From
Floquet theory we know that any fundamental n × n matrix Φ(t), which is defined
as a nonsingular matrix satisfying the matrix differential equation

    dΦ(t)/dt = A(t) Φ(t)

can be expressed as

    Φ(t) = P(t) exp(tR).     (2)

Here P(t) is a nonsingular n × n matrix of periodic functions with the same
period T, and R a constant matrix, whose eigenvalues are called the characteristic
exponents of the periodic system (1). Let

    v(t) = P^{-1}(t) u(t).

Show that v satisfies the system of linear differential equations with constant
coefficients

    dv(t)/dt = R v(t).

Solution 7. From P(t) P^{-1}(t) = I_n we have

    dP^{-1}/dt = -P^{-1} (dP/dt) P^{-1}

and from (2) it follows that

    dΦ/dt = (dP/dt) e^{tR} + P(t) R e^{tR}

so that dP/dt = A(t) Φ(t) e^{-tR} - P(t) R. Using these two results we obtain
from v(t) = P^{-1}(t) u(t)

    dv/dt = (dP^{-1}/dt) u + P^{-1}(t) du/dt
          = -P^{-1}(t) (dP/dt) P^{-1}(t) u + P^{-1}(t) A(t) u
          = -P^{-1}(t) ( A(t) Φ(t) e^{-tR} - P(t) R ) v + P^{-1}(t) A(t) P(t) v
          = -P^{-1}(t) A(t) Φ(t) e^{-tR} v + R v + P^{-1}(t) A(t) P(t) v
          = -P^{-1}(t) A(t) P(t) v + R v + P^{-1}(t) A(t) P(t) v
          = R v

since Φ(t) e^{-tR} = P(t).
P r o b le m 8. Consider a system of linear ordinary differential equations with
periodic coefficients
i
d
= a^
where A(t) is a 2 x 2 matrix of periodic functions with period T. By the classical
Floquet theory, any fundamental matrix $(£), which is defined as a nonsingular
matrix satisfying the matrix differential equation
514 Problems and Solutions
can be expressed as
Φ(t) = P(t) exp(tR).    (2)
Here P(t) is a nonsingular matrix of periodic functions with the same period T,
and R a constant matrix, whose eigenvalues λ_1 and λ_2 are called the
characteristic exponents of the periodic system (1). For a choice of fundamental
matrix Φ(t), we have
exp(TR) = Φ^{-1}(t_0) Φ(t_0 + T)
which does not depend on the initial time t_0. The matrix exp(TR) is called the
monodromy matrix of the periodic system (1). Calculate
tr(exp(TR)).
Solution 8. Using P(t + T) = P(t) and Φ^{-1}(t) = exp(-tR) P^{-1}(t) we have
tr(Φ^{-1}(t_0) Φ(t_0 + T)) = tr(e^{-t_0 R} P^{-1}(t_0) P(t_0 + T) e^{(t_0+T)R})
= tr(e^{TR} P^{-1}(t_0) P(t_0 + T)) = tr(e^{TR} P^{-1}(t_0) P(t_0))
= tr(e^{TR})
= e^{λ_1 T} + e^{λ_2 T}.
Problem 9. Consider the autonomous system of nonlinear first-order ordinary
differential equations
dx_1/dt = a(x_2 - x_1) = f_1(x_1, x_2, x_3)
dx_2/dt = (c - a)x_1 + c x_2 - x_1 x_3 = f_2(x_1, x_2, x_3)
dx_3/dt = -b x_3 + x_1 x_2 = f_3(x_1, x_2, x_3)
where a > 0, b > 0 and c are real constants with 2c > a.
(i) The fixed points are defined as the solutions of the system of equations
f_1(x_1*, x_2*, x_3*) = a(x_2* - x_1*) = 0
f_2(x_1*, x_2*, x_3*) = (c - a)x_1* + c x_2* - x_1* x_3* = 0
f_3(x_1*, x_2*, x_3*) = -b x_3* + x_1* x_2* = 0.
Find the fixed points. Obviously (0,0,0) is a fixed point.
(ii) The linearized equation (or variational equation) is given by
( dy_1/dt )       ( y_1 )
( dy_2/dt )  =  A ( y_2 )
( dy_3/dt )       ( y_3 )
where the 3 x 3 matrix A is given by
A|_{x=x*} = ( ∂f_1/∂x_1  ∂f_1/∂x_2  ∂f_1/∂x_3 )
            ( ∂f_2/∂x_1  ∂f_2/∂x_2  ∂f_2/∂x_3 )
            ( ∂f_3/∂x_1  ∂f_3/∂x_2  ∂f_3/∂x_3 )|_{x=x*}
where x = x* indicates that one of the fixed points is inserted into A. Calculate
A and insert the first fixed point (0,0,0). Calculate the eigenvalues of A. If all
eigenvalues have negative real part then the fixed point is stable. Thus study
the stability of the fixed point.
Solution 9. (i) Obviously x_1* = x_2* = x_3* = 0 is a solution. The other two
solutions we find by eliminating x_3* from the third equation and x_1* from the
first equation. We obtain
x_1* = sqrt(b(2c - a)),   x_2* = sqrt(b(2c - a)),   x_3* = 2c - a
and
x_1* = -sqrt(b(2c - a)),   x_2* = -sqrt(b(2c - a)),   x_3* = 2c - a.
(ii) We obtain
( ∂f_1/∂x_1  ∂f_1/∂x_2  ∂f_1/∂x_3 )   ( -a           a    0    )
( ∂f_2/∂x_1  ∂f_2/∂x_2  ∂f_2/∂x_3 ) = ( c - a - x_3  c    -x_1 )
( ∂f_3/∂x_1  ∂f_3/∂x_2  ∂f_3/∂x_3 )   ( x_2          x_1  -b   )
Inserting the fixed point (0,0,0) yields the matrix
( -a     a  0  )
( c - a  c  0  )
( 0      0  -b )
The eigenvalues are
λ_{1,2} = (c - a)/2 ± (1/2) sqrt((c - a)^2 + 4a(2c - a)),   λ_3 = -b.
Since 2c > a the square root exceeds |c - a|, so λ_1 > 0 and the fixed point
(0,0,0) is unstable.
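The fixed points and the linearization above can be checked symbolically. The following minimal sketch (assuming SymPy is available) substitutes the claimed nonzero fixed point into the vector field and computes the Jacobian matrix at the origin:

```python
# Verify the fixed points and the Jacobian at the origin for the system above.
from sympy import symbols, Matrix, sqrt, simplify

a, b, c = symbols('a b c', positive=True)
x1, x2, x3 = symbols('x1 x2 x3')

f = Matrix([a*(x2 - x1),
            (c - a)*x1 + c*x2 - x1*x3,
            -b*x3 + x1*x2])

# the claimed nonzero fixed point
s = sqrt(b*(2*c - a))
fp = {x1: s, x2: s, x3: 2*c - a}
print(simplify(f.subs(fp)))          # zero vector

# Jacobian at the origin and its eigenvalues
J = f.jacobian([x1, x2, x3]).subs({x1: 0, x2: 0, x3: 0})
print(J.eigenvals())
```

The eigenvalues returned contain -b together with the two roots of the quadratic factor of the characteristic polynomial.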
Problem 10. Let A be an n x n matrix over R. Consider the initial value
problem of the system of linear differential equations
du/dt + A u(t) = g(t),   u(0) = u_0    (1)
where g(t) = (g_1(t), g_2(t), ..., g_n(t))^T. The solution of the initial value
problem is
u(t) = e^{-tA} u_0 + ∫_0^t e^{-(t-τ)A} g(τ) dτ.    (2)
Apply it to the 3 x 3 matrix
A = ( 0  0  1 )
    ( 0  1  0 )
    ( 1  0  0 )
with initial values u_0 = ( 1  1  1 )^T and g(t) = ( 1  0  1 )^T.
Solution 10. Since e^{cA} = I_3 cosh(c) + A sinh(c) (c ∈ R) we have
e^{-tA} = I_3 cosh(t) - A sinh(t). Since A g = g we find
∫_0^t e^{-(t-τ)A} g dτ = ∫_0^t e^{-(t-τ)} dτ g = (1 - e^{-t}) g.
Thus the solution is
( u_1(t) )   ( cosh(t)    0                   -sinh(t) ) ( 1 )                ( 1 )
( u_2(t) ) = ( 0          cosh(t) - sinh(t)   0        ) ( 1 ) + (1 - e^{-t}) ( 0 )
( u_3(t) )   ( -sinh(t)   0                   cosh(t)  ) ( 1 )                ( 1 )
Discretize the system with the implicit Euler method with step size h, say
h = 0.1. One has
(I_3 + hA) u_j = u_{j-1} + h g(t_j),   j = 1, ..., M
with
(I_3 + hA)^{-1} = 1/(1 - h^2) (  1   0      -h )
                              (  0   1 - h  0  )
                              ( -h   0      1  )
Compare the two solutions for the matrix A.
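The comparison asked for above can be sketched numerically. The following check (assuming NumPy is available) runs the implicit Euler iteration and compares it with the closed-form solution:

```python
# Implicit Euler iterates (I + hA) u_j = u_{j-1} + h g should stay close to the
# closed-form solution u(t) = e^{-tA} u0 + (1 - e^{-t}) g derived above.
import numpy as np

A = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
u0 = np.array([1.0, 1.0, 1.0])
g = np.array([1.0, 0.0, 1.0])

def exact(t):
    # e^{-tA} = I cosh(t) - A sinh(t) since A^2 = I; g is an eigenvector, Ag = g
    E = np.eye(3)*np.cosh(t) - A*np.sinh(t)
    return E @ u0 + (1.0 - np.exp(-t))*g

h, M = 0.1, 10
B = np.linalg.inv(np.eye(3) + h*A)
u = u0.copy()
for j in range(M):
    u = B @ (u + h*g)          # one implicit Euler step

print(u, exact(h*M))
```

After ten steps (t = 1) the two solutions agree to within the expected first-order discretization error.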
Problem 11. Let L and K be two n x n matrices. Assume that the entries
depend on a parameter t and are differentiable with respect to t. Assume that
K^{-1}(t) exists for all t. Assume that the time-evolution of L is given by
L(t) = K(t) L(0) K^{-1}(t).
(i) Show that L(t) satisfies the matrix differential equation
dL/dt = [L, B](t)
where [ , ] denotes the commutator and
B(t) = -(dK/dt) K^{-1}(t).
(ii) Show that if L(t) is hermitian and K(t) is unitary, then the matrix B(t) is
skew-hermitian.
Solution 11. (i) Differentiation of K(t) K^{-1}(t) = I_n with respect to t yields
dK^{-1}/dt = -K^{-1} (dK/dt) K^{-1}.
Differentiation of L(t) with respect to t provides
dL/dt = (dK/dt) L(0) K^{-1}(t) + K(t) L(0) (dK^{-1}/dt).
Inserting dK^{-1}/dt gives
dL/dt = (dK/dt) L(0) K^{-1}(t) - K(t) L(0) K^{-1}(t) (dK/dt) K^{-1}(t)
= -B(t) L(t) + L(t) B(t)
= [L, B](t).
(ii) We have B* = (-(dK/dt) K^{-1})* = -(K^{-1})* (dK/dt)* = -K (dK*/dt).
From K(t) K*(t) = I_n we obtain (dK/dt) K* + K (dK*/dt) = 0_n, so that
B* = (dK/dt) K* = (dK/dt) K^{-1} = -B.
Problem 12. Solve the initial value problem for the matrix differential equation
dA(ε)/dε = [B, A(ε)]
where A(ε) and B = σ_1 are 2 x 2 matrices.

Solution 12. Inserting B = σ_1 into the matrix differential equation yields
da_11/dε = a_21 - a_12,   da_12/dε = a_22 - a_11,
da_21/dε = a_11 - a_22,   da_22/dε = a_12 - a_21.
Thus
d(a_11 + a_22)/dε = 0,   d(a_12 + a_21)/dε = 0.
In matrix form we can write
( da_11/dε )   (  0  -1   1   0 ) ( a_11 )
( da_12/dε ) = ( -1   0   0   1 ) ( a_12 )
( da_21/dε )   (  1   0   0  -1 ) ( a_21 )
( da_22/dε )   (  0   1  -1   0 ) ( a_22 )
The eigenvalues of the 4 x 4 matrix are 0, 0, 2, -2.
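The stated eigenvalues can be confirmed directly. A one-line check, assuming SymPy is available:

```python
# Eigenvalues of the 4 x 4 coefficient matrix: 0 (twice), 2 and -2.
from sympy import Matrix

M = Matrix([[ 0, -1,  1,  0],
            [-1,  0,  0,  1],
            [ 1,  0,  0, -1],
            [ 0,  1, -1,  0]])
print(M.eigenvals())
```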
Problem 13. Let a, b ∈ R. Consider the linear matrix differential equation
d^2 X/dt^2 + a dX/dt + b X = 0.
Find the solution of the initial value problem.

Solution 13. The general solution of the initial value problem is given by
X(t) = exp(αt) ( X(0) cos(βt) + (1/β) ( dX/dt(0) - α X(0) ) sin(βt) )
where α = -a/2, β = sqrt(b - a^2/4).
Problem 14. Let A be an n x n matrix over R. The autonomous system of
first-order differential equations du/dt = Au admits the solution of the initial
value problem u(t) = exp(tA)u(0). Differentiation of the differential equations
yields the second-order system d^2 u/dt^2 = A^2 u. Thus we can write
du/dt = Au = v,   dv/dt = A^2 u = Av
or in matrix form
( du/dt )   ( 0_n  I_n ) ( u )
( dv/dt ) = ( A^2  0_n ) ( v ).
Find the solution of the initial value problem. Assume that A is invertible.
Solution 14. We have
exp( t ( 0_n  I_n ) ) = ( cosh(tA)     A^{-1} sinh(tA) )
       ( A^2  0_n )     ( A sinh(tA)   cosh(tA)        )
and therefore the solution of the initial value problem is
( u(t) )   ( cosh(tA)     A^{-1} sinh(tA) ) ( u(0) )
( v(t) ) = ( A sinh(tA)   cosh(tA)        ) ( v(0) ).
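The block-exponential identity above holds for any invertible A. A numerical sanity check, assuming NumPy and SciPy are available:

```python
# For the block matrix [[0, I], [A^2, 0]] the exponential reproduces the
# cosh/sinh blocks of the solution formula.
import numpy as np
from scipy.linalg import expm, coshm, sinhm

rng = np.random.default_rng(7)
n, t = 3, 0.5
A = rng.standard_normal((n, n)) + n*np.eye(n)   # well-conditioned, invertible

M = np.block([[np.zeros((n, n)), np.eye(n)],
              [A @ A,            np.zeros((n, n))]])
E = expm(t*M)

top_left  = coshm(t*A)
top_right = np.linalg.inv(A) @ sinhm(t*A)
print(np.allclose(E[:n, :n], top_left), np.allclose(E[:n, n:], top_right))
```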
Problem 15. (i) Let u = (u_1, ..., u_n)^T and A be an n x n matrix over R.
Solve the initial value problem of the system of linear differential equations
du/dt = Au using the Laplace transform, where u(t = 0) = u(0). Note that
(applying integration by parts)
L(du/dt)(s) = ∫_0^∞ e^{-st} (du/dt) dt
= e^{-st} u(t)|_{t=0}^∞ + s ∫_0^∞ e^{-st} u(t) dt = s U(s) - u(0).
(ii) Apply it to the 2 x 2 Hadamard matrix
H = (1/sqrt(2)) ( 1  1  )
                ( 1  -1 ).
Solution 15. (i) Taking the Laplace transform of du/dt = Au yields
sU(s) - u(0) = A U(s). Thus
U(s) = (s I_n - A)^{-1} u(0).
One calls (s I_n - A)^{-1} the resolvent of the matrix A. The resolvent is
defined for s ∈ C except at the eigenvalues of A. Taking the inverse Laplace
transform we find the solution
u(t) = L^{-1}( (s I_n - A)^{-1} ) u(0).
(ii) For the Hadamard matrix we have
s I_2 - H = ( s - 1/sqrt(2)   -1/sqrt(2)     )
            ( -1/sqrt(2)      s + 1/sqrt(2)  ).
Thus
(s I_2 - H)^{-1} = 1/(s^2 - 1) ( s + 1/sqrt(2)   1/sqrt(2)      )
                               ( 1/sqrt(2)       s - 1/sqrt(2)  ).
Since
L^{-1}( s/(s^2 - 1) ) = cosh(t),   L^{-1}( 1/(s^2 - 1) ) = sinh(t)
we obtain
( u_1(t) )   ( cosh(t) + sinh(t)/sqrt(2)   sinh(t)/sqrt(2)            ) ( u_1(0) )
( u_2(t) ) = ( sinh(t)/sqrt(2)             cosh(t) - sinh(t)/sqrt(2)  ) ( u_2(0) ).
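Since H^2 = I_2, the matrix above is exactly exp(tH), which can be confirmed numerically (assuming NumPy and SciPy are available):

```python
# The inverse-Laplace result should equal the matrix exponential expm(tH).
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
t = 0.7
lhs = expm(t*H)
rhs = np.array([[np.cosh(t) + np.sinh(t)/np.sqrt(2), np.sinh(t)/np.sqrt(2)],
                [np.sinh(t)/np.sqrt(2), np.cosh(t) - np.sinh(t)/np.sqrt(2)]])
print(np.allclose(lhs, rhs))   # True
```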
Supplementary Problems

Problem 1. Consider the Lotka-Volterra model
du_1/dt = f_1(u_1, u_2) = u_1 - u_1 u_2,   du_2/dt = f_2(u_1, u_2) = -u_2 + u_1 u_2
with the fixed point (u_1* = 1, u_2* = 1). The linearized equation (variational
equation) with
∂f_1/∂u_1 = 1 - u_2,  ∂f_1/∂u_2 = -u_1,  ∂f_2/∂u_1 = u_2,  ∂f_2/∂u_2 = -1 + u_1
is given by
( dy_1/dt )   ( 0  -1 ) ( y_1 )
( dy_2/dt ) = ( 1   0 ) ( y_2 ).
Solve this system of linear differential equations. First show that the eigenvalues
of the 2 x 2 matrix are given by ±i.
Problem 2. Consider the initial value problem of the matrix differential equation
dX/dt = A(t) X,   X(0) = I_n
where A(t) is an n x n matrix which depends smoothly on t. It is known that
the solution of this matrix differential equation can locally be written as
X(t) = exp(Ω(t))
where Ω(t) is obtained as an infinite series
Ω(t) = Σ_{k=1}^∞ Ω_k(t).
This is the so-called Magnus expansion. Implement this recursion in SymbolicC++
and apply it to
A(t) = ( cos(ωt)   -sin(ωt) )
       ( sin(ωt)   cos(ωt)  )
where ω is a constant frequency.
Chapter 24
Differentiation and Matrices
Consider the m x n matrix
F(x) = ( f_11(x)  f_12(x)  f_13(x)  ...  f_1n(x) )
       ( f_21(x)  f_22(x)  f_23(x)  ...  f_2n(x) )
       ( ...                                     )
       ( f_m1(x)  f_m2(x)  f_m3(x)  ...  f_mn(x) )
where the f_jk are differentiable functions of x. Then dF(x)/dx is defined
entrywise. For example
d/dx ( cos(x)  -sin(x) )   ( -sin(x)  -cos(x) )
     ( sin(x)  cos(x)  ) = ( cos(x)   -sin(x) ).
Let A, B be n x n matrices. Assume that the entries of A and B are analytic
functions of x. Then
d(A ⊗ B)/dx = (dA/dx) ⊗ B + A ⊗ (dB/dx)
where ⊗ denotes the Kronecker product, and
d(A • B)/dx = (dA/dx) • B + A • (dB/dx)
where • denotes the Hadamard product (Schur product, entrywise product).
Problem 1.
(i) Consider the 2 x 2 rotation matrix
R(α) = ( cos(α)  -sin(α) )
       ( sin(α)  cos(α)  )
with det(R(α)) = 1. Is the determinant preserved under repeated differentiation
of R(α) with respect to α?
(ii) Consider the 2 x 2 matrix
S(α) = ( cos(α)  sin(α)  )
       ( sin(α)  -cos(α) )
with det(S(α)) = -1. Is the determinant preserved under repeated differentiation
of S(α) with respect to α?
Solution 1. (i) Note that d sin(α)/dα = cos(α), d cos(α)/dα = -sin(α). We find
dR(α)/dα = ( -sin(α)  -cos(α) )
           ( cos(α)   -sin(α) ).
We have det(dR/dα) = 1. Thus the determinant is preserved.
(ii) We obtain
dS(α)/dα = ( -sin(α)  cos(α) )
           ( cos(α)   sin(α) )
with det(dS/dα) = -1. Thus the determinant is preserved.
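The claim extends to repeated differentiation, which can be checked symbolically (assuming SymPy is available):

```python
# The determinant stays +1 (resp. -1) under repeated differentiation of R
# (resp. S) with respect to alpha.
from sympy import symbols, Matrix, cos, sin, simplify

a = symbols('alpha')
R = Matrix([[cos(a), -sin(a)], [sin(a), cos(a)]])
S = Matrix([[cos(a), sin(a)], [sin(a), -cos(a)]])

for k in range(1, 5):
    R = R.diff(a); S = S.diff(a)
    print(k, simplify(R.det()), simplify(S.det()))
```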
Problem 2. Let f_jk : R → R be analytic functions, where j, k = 1, 2 and
F(ε) = (f_jk(ε)). Find the differential equations for F(ε) such that
F(ε) dF(ε)/dε = (dF(ε)/dε) F(ε).

Solution 2. We obtain the three conditions
f_12 f_21' = f_21 f_12',
f_11 f_12' + f_12 f_22' = f_12 f_11' + f_22 f_12',
f_11 f_21' + f_21 f_22' = f_21 f_11' + f_22 f_21'
where ' denotes differentiation with respect to ε. Note that
f_11(ε) = cos(ε),  f_12(ε) = -sin(ε),  f_21(ε) = sin(ε),  f_22(ε) = cos(ε)
satisfy the three conditions.
Problem 3. Consider the invertible 2 x 2 matrix
A(x) = ( cos(x)   sin(x) )
       ( -sin(x)  cos(x) ).
Show that d(ln(det(A(x)))) = tr( A^{-1}(x) dA(x) ), where d denotes the
exterior derivative.

Solution 3. From A(x) we obtain
A^{-1}(x) = ( cos(x)  -sin(x) ),   dA(x) = ( -sin(x) dx   cos(x) dx  )
            ( sin(x)  cos(x)  )            ( -cos(x) dx   -sin(x) dx ).
Thus
A^{-1}(x) dA(x) = ( 0    dx )
                  ( -dx  0  )
and therefore for the right-hand side we have tr(A^{-1} dA) = 0. For the
left-hand side we have det(A) = 1 and ln(1) = 0. Thus d(ln(det(A))) = 0.
Problem 4. (i) Consider the analytic function f : R^2 → R^2
f_1(x_1, x_2) = sinh(x_2),   f_2(x_1, x_2) = sinh(x_1).
Show that this function admits the (only) fixed point (0,0). Find the functional
matrix at the fixed point
( ∂f_1/∂x_1  ∂f_1/∂x_2 )
( ∂f_2/∂x_1  ∂f_2/∂x_2 )|_{(0,0)}.
(ii) Consider the analytic function g : R^2 → R^2
g_1(x_1, x_2) = sinh(x_1),   g_2(x_1, x_2) = -sinh(x_2).
Show that this function admits the (only) fixed point (0,0). Find the functional
matrix at the fixed point
( ∂g_1/∂x_1  ∂g_1/∂x_2 )
( ∂g_2/∂x_1  ∂g_2/∂x_2 )|_{(0,0)}.
(iii) Multiply the two matrices found in (i) and (ii).
(iv) Find the composite function h : R^2 → R^2
h(x) = (f ∘ g)(x) = f(g(x)).
Show that this function also admits the fixed point (0,0). Find the functional
matrix at this fixed point
( ∂h_1/∂x_1  ∂h_1/∂x_2 )
( ∂h_2/∂x_1  ∂h_2/∂x_2 )|_{(0,0)}.
Compare this matrix with the matrix found in (iii).
Solution 4. (i) Since sinh(0) = 0 we find from sinh(x_2) = x_1, sinh(x_1) = x_2
that x_1 = 0, x_2 = 0. Since d sinh(x)/dx = cosh(x) the functional matrix at
(0,0) is
( 0  1 )
( 1  0 ).
(ii) Since sinh(0) = 0 we find from sinh(x_1) = x_1, -sinh(x_2) = x_2 that
x_1 = 0, x_2 = 0. Since d sinh(x)/dx = cosh(x) the functional matrix at (0,0) is
( 1  0  )
( 0  -1 ).
(iii) Multiplication of the two matrices yields
( 0  -1 )
( 1  0  ).
(iv) We have h(x) = (f_1(g_1, g_2), f_2(g_1, g_2)). Thus
h_1(x_1, x_2) = -sinh(sinh(x_2)),   h_2(x_1, x_2) = sinh(sinh(x_1))
which admits the fixed point (0,0). For the functional matrix of h at (0,0) we
find
( 0  -1 )
( 1  0  ).
This is the matrix from (iii), in agreement with the chain rule.
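The chain-rule relation can be checked symbolically (assuming SymPy is available):

```python
# The Jacobian of the composition h = f∘g at the fixed point equals the
# product J_f(0,0) J_g(0,0).
from sympy import symbols, Matrix, sinh

x1, x2 = symbols('x1 x2')
f = Matrix([sinh(x2), sinh(x1)])
g = Matrix([sinh(x1), -sinh(x2)])
h = f.subs({x1: g[0], x2: g[1]}, simultaneous=True)

J = lambda F: F.jacobian([x1, x2]).subs({x1: 0, x2: 0})
print(J(f), J(g), J(f)*J(g), J(h))
```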
Problem 5. Consider the 2 x 2 matrix
V(t) = ( cos(ωt)   sin(ωt) )
       ( -sin(ωt)  cos(ωt) ).
Calculate dV(t)/dt and then find the commutator [dV(t)/dt, V(t)]. Discuss.

Solution 5. We have
dV(t)/dt = ( -ω sin(ωt)  ω cos(ωt)  )
           ( -ω cos(ωt)  -ω sin(ωt) ).
For the commutator we find [dV(t)/dt, V(t)] = 0_2.
Problem 6. Let A be an n x n matrix. Assume that the inverse of A exists,
i.e. det(A) ≠ 0. Then the entries of the inverse B = A^{-1} can be calculated as
b_kj = ∂ ln(det(A)) / ∂a_jk.
Apply this formula to the 2 x 2 matrix A with det(A) = a_11 a_22 - a_12 a_21 ≠ 0.

Solution 6. We have
b_11 = ∂ ln(a_11 a_22 - a_12 a_21)/∂a_11 = a_22/det(A)
b_12 = ∂ ln(a_11 a_22 - a_12 a_21)/∂a_21 = -a_12/det(A)
b_21 = ∂ ln(a_11 a_22 - a_12 a_21)/∂a_12 = -a_21/det(A)
b_22 = ∂ ln(a_11 a_22 - a_12 a_21)/∂a_22 = a_11/det(A).
Thus the inverse is given by
A^{-1} = 1/det(A) ( a_22   -a_12 )
                  ( -a_21  a_11  ).
Problem 7. Let Q and P be n x n symmetric matrices over R, i.e. Q = Q^T
and P = P^T. Assume that P^{-1} exists. Find the maximum of the function
f : R^n → R, f(x) = x^T Q x
subject to x^T P x = 1. Use the Lagrange multiplier method.

Solution 7. The Lagrange function is
L(x, λ) = x^T Q x + λ(1 - x^T P x)
where λ is the Lagrange multiplier. The partial derivatives lead to
∂L/∂x (x, λ) = 2 x^T Q - 2λ x^T P,   ∂L/∂λ (x, λ) = 1 - x^T P x.
Thus the necessary conditions are
Q x = λ P x,   x^T P x = 1.
Since P is nonsingular we obtain
P^{-1} Q x = λ x.
Since x^T P x = 1 the (column) vector x cannot be the zero vector. Thus the
above equation is an eigenvalue equation, i.e. λ is an eigenvalue of the n x n
matrix P^{-1} Q. Multiplying this equation with the row vector x^T P we obtain
the scalar equation
x^T Q x = λ x^T P x = λ.
Thus we conclude that the maximizer of x^T Q x subject to the constraint
x^T P x = 1 is the eigenvector of the matrix P^{-1} Q corresponding to the
largest eigenvalue.
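A numeric illustration (assuming NumPy and SciPy are available; here P is taken positive definite, a special case of the symmetric invertible P above, so that the generalized symmetric eigensolver applies):

```python
# The constrained maximum of x^T Q x on x^T P x = 1 is the largest
# generalized eigenvalue of Q x = lambda P x.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n)); Q = (Q + Q.T)/2
S = rng.standard_normal((n, n)); P = S @ S.T + n*np.eye(n)   # positive definite

lam, X = eigh(Q, P)          # generalized symmetric eigenproblem
x = X[:, -1]                 # eigenvector of the largest eigenvalue
x = x / np.sqrt(x @ P @ x)   # normalize so that x^T P x = 1
print(x @ Q @ x, lam[-1])    # the two values agree
```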
Problem 8. Let Φ : R^n x R^n → R be an analytic function, where (x, y) =
(x_1, ..., x_n, y_1, ..., y_n) ∈ R^n x R^n. The Monge-Ampère determinant M(Φ)
is defined by
M(Φ) := det ( Φ        ∂Φ/∂x_1          ...  ∂Φ/∂x_n          )
            ( ∂Φ/∂y_1  ∂^2Φ/∂x_1∂y_1    ...  ∂^2Φ/∂x_n∂y_1    )
            ( ...                                              )
            ( ∂Φ/∂y_n  ∂^2Φ/∂x_1∂y_n    ...  ∂^2Φ/∂x_n∂y_n    ).
Let n = 2 and
Φ(x_1, x_2, y_1, y_2) = x_1^2 + x_2^2 + (x_1 y_1)^2 + (x_2 y_2)^2 + y_1^2 + y_2^2.
Find the Monge-Ampère determinant and the conditions on x_1, x_2, y_1, y_2 such
that M(Φ) = 0.

Solution 8. We find M(Φ) = -32 x_1 x_2 y_1 y_2. For M(Φ) = 0 we need one of
the variables to be equal to 0.
Problem 9. Consider the m x m matrix F(x) = (f_jk(x)) (j, k = 1, ..., m),
where f_jk : R^n → R are analytic functions. Assume that F(x) is invertible for
all x ∈ R^n. Then we have the identities (j = 1, ..., n)
∂(det(F(x)))/∂x_j = det(F(x)) tr( F^{-1}(x) ∂F(x)/∂x_j ),
∂F^{-1}(x)/∂x_j = -F^{-1}(x) (∂F(x)/∂x_j) F^{-1}(x).
The differentiation is understood entrywise. Apply the identities to the matrix
(m = 2, n = 1)
F(x) = ( cos(x)   sin(x) )
       ( -sin(x)  cos(x) ).

Solution 9. For the left-hand side of the first identity we have det(F(x)) = 1
and thus d(det(F(x)))/dx = 0. Since
F^{-1} = ( cos(x)  -sin(x) )
         ( sin(x)  cos(x)  )
and therefore
F^{-1}(x) dF(x)/dx = ( cos(x)  -sin(x) ) ( -sin(x)  cos(x)  )   ( 0   1 )
                     ( sin(x)  cos(x)  ) ( -cos(x)  -sin(x) ) = ( -1  0 )
we obviously also find 0 for the right-hand side.
The left-hand side of the second identity is
dF^{-1}(x)/dx = ( -sin(x)  -cos(x) )
                ( cos(x)   -sin(x) )
and for the right-hand side we have
-F^{-1}(x) (dF(x)/dx) F^{-1}(x) = ( -sin(x)  -cos(x) )
                                  ( cos(x)   -sin(x) ).
Let A be an invertible n x n matrix over R.
Ej = —(ACj
— Gj)
(Acj
Consider the
— Gj)
where j = I , . . . ,n, c j is the j -th column of the inverse matrix of A, Gj is the
j-th column of the n x n identity matrix. This means e i, . . . , e n is the standard
526 Problems and Solutions
basis (as column vectors) in R n. The c j are determined by minimizing the Ej
with respect to the c j . Apply this method to find the inverse of the 3 x 3 matrix
A =
S olu tion 10.
I
0
1
0
I
0
I
0
-I
For the matrix A we have
E\ = cI1I + cl,3 —cl,l —cl,3 + 2 ^1,2 + 2
E
2 = c2,l + c2,3 + 2 C2>2 “ C2’2
Es =
c3,l + c3,3 — c3,l + c3,3 +
2
2
^3,2 +
^
*
From dEi/dci^i = 0, dE\/dc\,2 = 0, dE\jdc\$ = 0 we obtain
ci ,1 = 2 ’
ci ,2 = 0,
Cl,3 = 1 / 2.
From SE2Idc2,! = 0, dE2/802,2 = 0, dEi/802,3 = 0 we obtain
C2,l = 0,
c2,3 = 0.
c2,2 = I,
From dEs/dcs^i = 0, dEs/803,2 = 0, 8Es/8cs^3 = we obtain
C3,1 =
2
’
c3,2 =
c3,3 = - 1 /2 .
Thus the inverse matrix is
/1/2
A -1=
0
1/2 \
0 I 0 .
V I/2 0 -1 /2 /
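Minimizing each E_j is equivalent to solving the normal equations, which gives the same inverse. A numeric sketch, assuming NumPy is available:

```python
# Each column c_j minimizes ||A c - e_j||^2, i.e. solves the normal equations
# A^T A c = A^T e_j; stacking the solutions gives A^{-1}.
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, -1.0]])

cols = [np.linalg.solve(A.T @ A, A.T @ e) for e in np.eye(3)]
Ainv = np.column_stack(cols)
print(Ainv)
```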
Problem 11. Let f be a function from U, an open subset of R^m, to R^n.
Assume that the component functions f_j (j = 1, ..., n) possess first order
partial derivatives. Then we can associate the n x m matrix
( ∂f_j/∂x_k (p) ),   j = 1, ..., n,  k = 1, ..., m
where p ∈ U. The matrix is called the Jacobian matrix of f at the point p.
When m = n the determinant of this square matrix is called the Jacobian of f.
Let
A = { r ∈ R : r > 0 },   B = { θ ∈ R : 0 < θ < 2π }
and f : A x B → R^2 with f_1(r, θ) = r cos(θ), f_2(r, θ) = r sin(θ). Find the
Jacobian matrix and the Jacobian.

Solution 11. For the Jacobian matrix we find
( ∂f_1/∂r  ∂f_1/∂θ )   ( cos(θ)  -r sin(θ) )
( ∂f_2/∂r  ∂f_2/∂θ ) = ( sin(θ)  r cos(θ)  ).
Thus the Jacobian is r.
Problem 12. Consider the rotation matrix
R(t) = ( cos(ωt)   sin(ωt) )
       ( -sin(ωt)  cos(ωt) )
where ω is a fixed frequency. Find the 2 x 2 matrix
H(t) = iħ (dR(t)/dt) R^T(t)
and show it is hermitian.

Solution 12. We have
dR(t)/dt = ( -ω sin(ωt)  ω cos(ωt)  ),   R(t)^T = ( cos(ωt)  -sin(ωt) )
           ( -ω cos(ωt)  -ω sin(ωt) )             ( sin(ωt)  cos(ωt)  ).
Thus
(dR(t)/dt) R(t)^T = ω ( 0   1 )
                      ( -1  0 )
and therefore
H(t) = ħω ( 0   i ) = -ħω σ_2.
          ( -i  0 )
Problem 13. Let GL(m, C) be the general linear group over C. This Lie
group consists of all nonsingular m x m matrices. Let G be a Lie subgroup of
GL(m, C). Suppose u_1, u_2, ..., u_n is a coordinate system on G in some
neighborhood of I_m, the m x m identity matrix, and that X(u_1, u_2, ..., u_n)
is a point in this neighborhood. The matrix dX of differential one-forms
contains n linearly independent differential one-forms since the n-dimensional
Lie group G is smoothly embedded in GL(m, C). Consider the matrix of
differential one-forms
Ω := X^{-1} dX,   X ∈ G.
The matrix Ω of differential one-forms contains n linearly independent ones.
(i) Let A be any fixed element of G. The left-translation by A is given by
X → AX. Show that Ω = X^{-1} dX is left-invariant.
(ii) Show that
dΩ + Ω ∧ Ω = 0
where ∧ denotes the exterior product for matrices, i.e. we have matrix
multiplication together with the exterior product. The exterior product is
linear and satisfies
du_j ∧ du_k = -du_k ∧ du_j.
Therefore du_j ∧ du_j = 0 for j = 1, ..., n. The exterior product is also
associative.
(iii) Find dX^{-1} using X X^{-1} = I_m.
Solution 13. (i) We have
(AX)^{-1} d(AX) = X^{-1} A^{-1} (A dX) = X^{-1} (A^{-1} A) dX = X^{-1} dX
since d(AX) = A dX.
(ii) From Ω = X^{-1} dX we obtain
X Ω = dX
d(X Ω) = ddX = 0
dX ∧ Ω + X dΩ = 0
(X Ω) ∧ Ω + X dΩ = 0
Ω ∧ Ω + dΩ = 0.
(iii) From X X^{-1} = I_m we obtain
d(X X^{-1}) = 0
(dX) X^{-1} + X dX^{-1} = 0
X^{-1} (dX) X^{-1} + dX^{-1} = 0.
Thus dX^{-1} = -X^{-1} (dX) X^{-1}.
Problem 14. Consider GL(m, R) and a Lie subgroup G of it. We interpret each
element X of G as a linear transformation on the vector space R^m of row
vectors v = (v_1, v_2, ..., v_m). Thus v → w = vX. Show that dw = wΩ.

Solution 14. We have
w = vX
dw = d(vX) = v dX
dw = (w X^{-1}) dX
dw = w (X^{-1} dX)
dw = wΩ.
This means Ω can be interpreted as an "infinitesimal group element".
Problem 15. Consider the compact Lie group SO(2) consisting of the matrices
X = ( cos(u)  -sin(u) ),   u ∈ R.
    ( sin(u)  cos(u)  )
Calculate dX and X^{-1} dX.

Solution 15. We have
dX = ( -sin(u) du  -cos(u) du )   ( -sin(u)  -cos(u) )
     ( cos(u) du   -sin(u) du ) = ( cos(u)   -sin(u) ) du.
Since
X^{-1} = ( cos(u)   sin(u) )
         ( -sin(u)  cos(u) )
and using cos^2(u) + sin^2(u) = 1 we obtain
X^{-1} dX = ( 0  -1 ) du.
            ( 1  0  )
Problem 16. Let n be the dimension of the Lie group G. Since the vector
space of differential one-forms at the identity element is an n-dimensional
vector space, there are exactly n linearly independent left-invariant
differential one-forms in G. Let σ_1, σ_2, ..., σ_n be such a system. Consider
the Lie group
G := { ( u_1  u_2 ) : u_1, u_2 ∈ R, u_1 > 0 }.
       ( 0    1   )
Let
X = ( u_1  u_2 )
    ( 0    1   ).
(i) Find X^{-1} and X^{-1} dX. Calculate the left-invariant differential
one-forms. Calculate the left-invariant volume element.
(ii) Find the right-invariant forms.

Solution 16. (i) Since det(X) = u_1 we obtain
X^{-1} = (1/u_1) ( 1  -u_2 ),   dX = ( du_1  du_2 )
                 ( 0  u_1  )         ( 0     0    ).
It follows that
X^{-1} dX = (1/u_1) ( du_1  du_2 )
                    ( 0     0    ).
Hence σ_1 = du_1/u_1, σ_2 = du_2/u_1 are left-invariant. Then the left-invariant
volume element follows as
σ_1 ∧ σ_2 = (du_1 ∧ du_2)/u_1^2.
(ii) We have
(dX) X^{-1} = (1/u_1) ( du_1  -u_2 du_1 + u_1 du_2 )
                      ( 0     0                    ).
Thus
σ_1 = du_1/u_1,   σ_2 = (1/u_1)(-u_2 du_1 + u_1 du_2).
It follows that
σ_1 ∧ σ_2 = (du_1 ∧ du_2)/u_1
since du_1 ∧ du_1 = 0. The left- and right-invariant volume forms are different.
Problem 17. Consider the Lie group consisting of the matrices
X = ( u_1  u_2 ),   u_1, u_2 ∈ R,  u_1 > 0.
    ( 0    u_1 )
Calculate X^{-1} and X^{-1} dX. Find the left-invariant differential one-forms
and the left-invariant volume element.

Solution 17. Since det(X) = u_1^2 we obtain
X^{-1} = (1/u_1^2) ( u_1  -u_2 ).
                   ( 0    u_1  )
Moreover
dX = ( du_1  du_2 ).
     ( 0     du_1 )
From Ω = X^{-1} dX we have
Ω = (1/u_1^2) ( u_1 du_1  u_1 du_2 - u_2 du_1 )
              ( 0         u_1 du_1            ).
Thus we may take
σ_1 = du_1/u_1,   σ_2 = (u_1 du_2 - u_2 du_1)/u_1^2.
Therefore the volume form is
σ_1 ∧ σ_2 = (du_1 ∧ du_2)/u_1^2
where we used du_1 ∧ du_1 = 0.
Problem 18. Let A be a 2 x 2 symmetric matrix
A = ( a_11  a_12 )
    ( a_12  a_22 )
over R. We define
∂A/∂a_12 := ( 0  1 ).
            ( 1  0 )
Show that
∂ tr(A^2)/∂a_12 = tr( ∂A^2/∂a_12 ) = tr( 2A ∂A/∂a_12 ).

Solution 18. We have
∂ tr(A^2)/∂a_12 = ∂(a_11^2 + a_12^2 + a_12^2 + a_22^2)/∂a_12 = 4 a_12.
On the other hand we have
tr( ∂/∂a_12 ( a_11^2 + a_12^2         a_11 a_12 + a_12 a_22 ) )
            ( a_11 a_12 + a_12 a_22   a_12^2 + a_22^2       )
= tr ( 2 a_12        a_11 + a_22 )
     ( a_11 + a_22   2 a_12      )
= 4 a_12
and
tr( 2A ∂A/∂a_12 ) = tr( 2 ( a_12  a_11 ) ) = 4 a_12.
                        ( a_22  a_12 )
P r o b le m 19.
dA \
dau )
H
a {°
E)-*-
D ) = H z
Consider the matrix
A
- a
«)•
Find the function (characteristic polynomial) p( A) = det(A — AI2). Find the
eigenvalues of
by solving p( A) = 0. Find the minima of the function / ( A) =
|p(A) I. Discuss.
S o lu tio n 19.
We find p(A) = A2 — I with the eigenvalues +1 and —I. Now
/ ( A ) = b(A)| = |A2 - 1|
which has minima at +1 and —1 and / ( + I ) = 0, / ( —1) = 0.
Problem 20. Let a, b ∈ R with a^2 > b^2. Consider the 2 x 2 matrix
B = ( b   a  ).
    ( -a  -b )
Find exp(tB), where t ∈ R, and thus solve the initial value problem of the
matrix differential equation
dA/dt = B A(t).

Solution 20. Since B^2 = -(a^2 - b^2) I_2 we find
exp(tB) = ( cos(κt) + (b/κ) sin(κt)   (a/κ) sin(κt)            )
          ( -(a/κ) sin(κt)            cos(κt) - (b/κ) sin(κt)  )
where κ := sqrt(a^2 - b^2). Then the solution of the initial value problem is
A(t) = exp(tB) A(0).
Problem 21. The Fréchet derivative of a matrix function f : C^{n x n} → C^{n x n}
at a point X ∈ C^{n x n} is a linear mapping L_X : C^{n x n} → C^{n x n} such
that for all Y ∈ C^{n x n}
f(X + Y) - f(X) - L_X(Y) = o(||Y||).
Calculate the Fréchet derivative of f(X) = X^2.

Solution 21. We have f(X + Y) - f(X) = XY + YX + Y^2. Thus
L_X(Y) = XY + YX.
The right-hand side is the anticommutator of X and Y.
Supplementary Problems

Problem 1. Let A, B be n x n matrices. Assume that the entries of A and B
are analytic functions of x.
(i) Show that
d(AB)/dx = (dA/dx) B + A (dB/dx).
(ii) Show that
d(A ⊗ B)/dx = (dA/dx) ⊗ B + A ⊗ (dB/dx).
(iii) Show that
d(A • B)/dx = (dA/dx) • B + A • (dB/dx)
where • denotes the Hadamard product.
Problem 2. Let A be an m x n matrix over R and
x := (1, x, x^2, ..., x^{m-1})^T,   y := (1, y, y^2, ..., y^{n-1})^T.
Find the extrema of the function p(x, y) = x^T A y.
P r o b le m 3. (i) Let e G M. Let A(e) be an invertible n x n matrix. Assume
that the entries ajk are analytic functions of e. Show that
tr
I
d
det(A(e)) de
det(A(e)).
(ii) Consider the 2 x 2 matrix
M e) J
where fj (j = 1,2,3,4) are smooth functions and det(M (e)) > 0 for all e. Show
that
ti((dM(e))M(e) x) = d(ln(det(M (e))))
where d is the exterior derivative.
Problem 4. Let f_j : R → R (j = 1, ..., n) be analytic functions. Consider
the determinant (Wronskian)
W(x) = det ( f_1(x)          f_2(x)          ...  f_n(x)          )
           ( f_1'(x)         f_2'(x)         ...  f_n'(x)         )
           ( ...                                                  )
           ( f_1^{(n-1)}(x)  f_2^{(n-1)}(x)  ...  f_n^{(n-1)}(x)  )
where f_j^{(n-1)} denotes the (n-1)-th derivative of f_j. If the Wronskian of
these functions is not identically 0 on R, then these functions form a linearly
independent set. Let n = 3 and f_1(x) = x, f_2(x) = e^x, f_3(x) = e^{2x}. Find
the Wronskian. Discuss.
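The Wronskian of these three functions can be computed symbolically (assuming SymPy is available); it works out to (2x - 3)e^{3x}, which is not identically zero, so the functions are linearly independent:

```python
# Wronskian of x, e^x, e^{2x}.
from sympy import symbols, exp, wronskian, simplify

x = symbols('x')
W = wronskian([x, exp(x), exp(2*x)], x)
print(simplify(W))   # (2*x - 3)*exp(3*x)
```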
Problem 5. Let f_1, f_2, f_3 : R^3 → R be continuously differentiable functions.
Find the determinant of the 3 x 3 matrix A = (a_jk) with
a_jk := ∂f_j/∂x_k - ∂f_k/∂x_j.
Problem 6. Let A, B be n x n matrices over C. Let u ∈ C^n be considered as a
column vector and
∂ := ( ∂/∂x_1  ...  ∂/∂x_n )^T.
Is
[ u^* A ∂, u^* B ∂ ] = u^* [A, B] ∂ ?
Problem 7. Let V(α) be a 2 x 2 matrix where all the entries are smooth
functions of α. Calculate dV(α)/dα and then find the conditions on the entries
such that [dV(α)/dα, V(α)] = 0_2.
Problem 8. Let j be a positive integer. Let A, B be n x n matrices over R.
Calculate
lim_{ε→0} (1/ε) ( (A + εB)^j - A^j )   and   d tr((A + εB)^j)/dε |_{ε=0}.

Problem 9. Find the partial differential equation given by the condition
det ( 0         ∂u/∂x_1          ∂u/∂x_2         )
    ( ∂u/∂x_1   ∂^2u/∂x_1^2      ∂^2u/∂x_1∂x_2   ) = 0.
    ( ∂u/∂x_2   ∂^2u/∂x_2∂x_1    ∂^2u/∂x_2^2     )
Find a nontrivial solution of the partial differential equation.
Problem 10. Let φ : R → R be an analytic function. Consider the matrices
A(t) = ( 1               e^{iφ(t)} ),   B(t) = ( 1          e^{i dφ(t)/dt} )
       ( e^{i dφ(t)/dt}  1         )           ( e^{iφ(t)}  1              ).
(i) Find the differential equation for φ from the condition tr(AB) = 0.
(ii) Find the differential equation for φ from the condition det(AB) = 0.
Problem 11. Consider the 6 x 6 matrices
β_j := ( 0_3   S_j )   (j = 1, 2, 3)
       ( -S_j  0_3 )
built from the 3 x 3 curl matrices
S_1 = ( 0  0  0  ),   S_2 = ( 0   0  1 ),   S_3 = ( 0  -1  0 )
      ( 0  0  -1 )          ( 0   0  0 )          ( 1  0   0 )
      ( 0  1  0  )          ( -1  0  0 )          ( 0  0   0 )
and the field vector
ψ := ( E_1  E_2  E_3  cB_1  cB_2  cB_3 )^T.
Show that
(1/c) ∂ψ/∂t = ( β_1 ∂/∂x_1 + β_2 ∂/∂x_2 + β_3 ∂/∂x_3 ) ψ
are Maxwell's equations
(1/c^2) ∂E/∂t = ∇ × B,   -∂B/∂t = ∇ × E.
Problem 12. Let σ_1, σ_2, σ_3 be the Pauli spin matrices and f_j(x_1, x_2)
(j = 1, 2, 3) be real-valued smooth functions f_j : R^2 → R. Consider the 2 x 2
matrix
N(x_1, x_2) = f_1 σ_1 + f_2 σ_2 + f_3 σ_3 = ( f_3           f_1 - i f_2 )
                                            ( f_1 + i f_2   -f_3        ).
Find dN, N^*. Then calculate d(N^* dN). Find the conditions on f_1, f_2, f_3
such that d(N^* dN) = 0_2.
Problem 13. Let f_11, f_22, f_33 be analytic functions f_jj : R → R. Consider
the 4 x 4 matrix
M(x) = ( f_11(x)   0         0         0 )
       ( 0         f_22(x)   0         1 )
       ( 0         0         f_33(x)   1 )
       ( f_11'(x)  f_22'(x)  f_33'(x)  0 )
where ' denotes differentiation with respect to x. Find the determinant of the
matrix and write down the ordinary differential equation which follows from
det(M(x)) = 0. Find solutions of the differential equation.
Chapter 25
Integration and Matrices
Let A(t) be an n x n matrix
A(t) = ( a_11(t)  a_12(t)  ...  a_1n(t) )
       ( a_21(t)  a_22(t)  ...  a_2n(t) )
       ( ...                            )
       ( a_n1(t)  a_n2(t)  ...  a_nn(t) )
where the a_jk(t) are integrable functions. The integration is defined entrywise.
For example
∫_0^t ( cos(s)  -sin(s) ) ds = ( sin(t)        cos(t) - 1 )
      ( sin(s)  cos(s)  )      ( -cos(t) + 1   sin(t)     ).
Let A be an n x n matrix over C. Suppose f is an analytic function inside and
on a closed contour Γ which encircles λ(A), where λ(A) denotes the eigenvalues
of A. We define f(A) to be the n x n matrix
f(A) = (1/2πi) ∮_Γ f(z) (z I_n - A)^{-1} dz.
This is a matrix version of the Cauchy integral theorem. The integral is defined
on an element-by-element basis f(A) = (f_jk), where
f_jk = (1/2πi) ∮_Γ f(z) e_j^T (z I_n - A)^{-1} e_k dz
and e_j (j = 1, ..., n) is the standard basis in C^n.
Problem 1. Consider the 2 x 2 matrix over R
F(s) = ( cosh(s)  sinh(s) )
       ( sinh(s)  cosh(s) ).
Find
G(t) = ∫_0^t F(s) ds.
Then calculate the commutator [F(s), G(t)].

Solution 1. We do the task with the following Maxima program

/* integration.mac */
F: matrix([cosh(s),sinh(s)],[sinh(s),cosh(s)]);
assume(t > 0);
g11(t) := ''(integrate(F[1,1],s,0,t));
g12(t) := ''(integrate(F[1,2],s,0,t));
g21(t) := ''(integrate(F[2,1],s,0,t));
g22(t) := ''(integrate(F[2,2],s,0,t));
G: matrix([g11(t),g12(t)],[g21(t),g22(t)]);
C: F . G - G . F;
C: expand(C);

The output is
G(t) = ( sinh(t)      cosh(t) - 1 )
       ( cosh(t) - 1  sinh(t)     )
and the commutator [F(s), G(t)] vanishes.
Problem 2. Consider the analytic function f : C → C, f(z) = z^2 and the
2 x 2 matrix
A = ( 0  1 ).
    ( 1  0 )
Calculate f(A) applying the Cauchy integral theorem.

Solution 2. The eigenvalues of A are +1 and -1. We consider a circle as the
contour with radius r = 2. Now we have
(z I_2 - A)^{-1} = 1/(z^2 - 1) ( z  1 ).
                               ( 1  z )
Thus
f_11 = (1/2πi) ∮_Γ z^3/(z^2 - 1) dz,   f_12 = (1/2πi) ∮_Γ z^2/(z^2 - 1) dz,
f_21 = (1/2πi) ∮_Γ z^2/(z^2 - 1) dz,   f_22 = (1/2πi) ∮_Γ z^3/(z^2 - 1) dz.
Using the residue theorem these integrals with the singularities at +1 and -1
can be easily calculated. One finds f_11 = f_22 = 1, f_12 = f_21 = 0. Obviously,
f(A) is the identity matrix.
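The contour integral can also be approximated numerically (assuming NumPy is available); the periodic trapezoidal rule converges very fast for such integrals:

```python
# Approximate the Cauchy-integral definition of f(A) for f(z) = z^2 on a
# circle of radius 2; the result should be the identity matrix.
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
n = 2000
theta = 2*np.pi*np.arange(n)/n
z = 2.0*np.exp(1j*theta)                 # contour |z| = 2
dz = 2.0j*np.exp(1j*theta)*(2*np.pi/n)   # dz along the contour

fA = np.zeros((2, 2), dtype=complex)
for zk, dzk in zip(z, dz):
    fA += zk**2 * np.linalg.inv(zk*np.eye(2) - A) * dzk
fA /= 2j*np.pi
print(np.allclose(fA, np.eye(2)))   # True
```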
Problem 3. Let A be an n x n matrix over C. If λ is not an eigenvalue of A,
then the matrix (A - λ I_n) has an inverse, namely the resolvent
R_λ = (A - λ I_n)^{-1}.
Let λ_j be the eigenvalues of A. For |λ| > a, where a is any positive constant
greater than all the |λ_j|, the resolvent can be expanded as
R_λ = -(1/λ) ( I_n + A/λ + A^2/λ^2 + ... ).
Calculate
(1/2πi) ∮_{|λ|=a} λ^m R_λ dλ,   m = 0, 1, 2, ...

Solution 3. We obtain
(1/2πi) ∮_{|λ|=a} λ^m R_λ dλ = -A^m,   m = 0, 1, 2, ...
Problem 4. Let C^{n x N} be the vector space of all n x N complex matrices.
Let Z ∈ C^{n x N}. Then Z^* = \bar{Z}^T, where T denotes transpose. One defines
a Gaussian measure μ on C^{n x N} by
dμ(Z) := (1/π^{nN}) exp(-tr(Z Z^*)) dZ
where dZ denotes the Lebesgue measure on C^{n x N}. The Fock space F(C^{n x N})
consists of all entire functions on C^{n x N} which are square integrable with
respect to the Gaussian measure dμ(Z). With the scalar product
⟨f|g⟩ := ∫_{C^{n x N}} f(Z) \overline{g(Z)} dμ(Z),   f, g ∈ F(C^{n x N})
one has a Hilbert space. Show that this Hilbert space has a reproducing kernel
K. This means a continuous function K(Z, Z') : C^{n x N} x C^{n x N} → C such
that
f(Z) = ∫_{C^{n x N}} K(Z, Z') f(Z') dμ(Z')
for all Z ∈ C^{n x N} and f ∈ F(C^{n x N}).

Solution 4. We find
K(Z, Z') = exp( tr( Z (Z')^* ) ).
Problem 5. Let A be an n x n positive definite matrix over R, i.e. x^T A x > 0
for all nonzero x ∈ R^n. Calculate
∫_{R^n} exp(-x^T A x) dx.

Solution 5. Since A is positive definite it can be written in the form A = S^T S,
where T denotes transpose and S ∈ GL(n, R). Thus
|det(S)| = sqrt(det(A)).
Let y = Sx. Now
∫_{R^n} exp(-x^T A x) dx = ∫_{R^n} exp(-(Sx)^T (Sx)) dx
= |det(S^{-1})| ∫_{R^n} exp(-y^T y) dy
= π^{n/2}/sqrt(det(A))
where we used that ∫_{R^n} e^{-y^T y} dy = π^{n/2}.
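A numerical check of the formula for a 2 x 2 positive definite matrix (assuming NumPy and SciPy are available):

```python
# Check ∫ exp(-x^T A x) dx = pi^{n/2}/sqrt(det A) for n = 2.
import numpy as np
from scipy import integrate

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite

f = lambda x1, x2: np.exp(-np.array([x1, x2]) @ A @ np.array([x1, x2]))
val, _ = integrate.dblquad(f, -np.inf, np.inf,
                           lambda x: -np.inf, lambda x: np.inf)
print(val, np.pi/np.sqrt(np.linalg.det(A)))
```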
Problem 6. Let V be an N x N unitary matrix, i.e. V V^* = I_N. The
eigenvalues of V lie on the unit circle; that is, they may be expressed in the
form exp(iθ_n), θ_n ∈ R. A function f(V) = f(θ_1, ..., θ_N) is called a class
function if f is symmetric in all its variables. Weyl gave an explicit formula
for averaging class functions over the circular unitary ensemble
∫_{U(N)} f(V) dV = (1/((2π)^N N!)) ∫_0^{2π} ... ∫_0^{2π} f(θ_1, ..., θ_N)
    Π_{1≤j<k≤N} |e^{iθ_j} - e^{iθ_k}|^2 dθ_1 ... dθ_N.
Thus we integrate the function f(V) over U(N) by parametrizing the group by
the θ_l and using Weyl's formula to convert the integral into an N-fold integral
over the θ_l. By definition the Haar measure dV is invariant under V → U V U^*,
where U is any N x N unitary matrix. The matrix V can always be diagonalized
by a unitary matrix, i.e.
V = W ( e^{iθ_1}            0       ) W^*
      (           ...               )
      ( 0                 e^{iθ_N}  )
where W is an N x N unitary matrix. Thus the integral over V can be written
as an integral over the matrix elements of W and the eigenphases θ_n. Since the
measure is invariant under unitary transformations, the integral over the matrix
elements of W can be evaluated straightforwardly, leaving the integral over the
eigenphases. Show that for f a class function we have
∫_{U(N)} f(V) dV = (1/(2π)^N) ∫_0^{2π} ... ∫_0^{2π} f(θ_1, ..., θ_N)
    det( e^{i(j-k)θ_j} )_{j,k=1,...,N} dθ_1 ... dθ_N.
Integration and Matrices

Solution 6. From Weyl's formula we know that
$$\int_{U(N)} f(V)\, dV$$
can be written as
$$\frac{1}{(2\pi)^N N!} \int_0^{2\pi} \cdots \int_0^{2\pi} f(\theta_1,\dots,\theta_N) \prod_{1 \le j < k \le N} |e^{i\theta_j} - e^{i\theta_k}|^2\, d\theta_1 \cdots d\theta_N.$$
We first note that
$$\prod_{1 \le j < k \le N} |e^{i\theta_j} - e^{i\theta_k}|^2 = \det \begin{pmatrix} 1 & 1 & \cdots & 1 \\ e^{i\theta_1} & e^{i\theta_2} & \cdots & e^{i\theta_N} \\ \vdots & \vdots & & \vdots \\ e^{i(N-1)\theta_1} & e^{i(N-1)\theta_2} & \cdots & e^{i(N-1)\theta_N} \end{pmatrix} \det \begin{pmatrix} 1 & e^{-i\theta_1} & \cdots & e^{-i(N-1)\theta_1} \\ 1 & e^{-i\theta_2} & \cdots & e^{-i(N-1)\theta_2} \\ \vdots & \vdots & & \vdots \\ 1 & e^{-i\theta_N} & \cdots & e^{-i(N-1)\theta_N} \end{pmatrix} = \det\left( \sum_{\ell=1}^N e^{i(m-n)\theta_\ell} \right)_{m,n=1}^N.$$
Therefore
$$\int_{U(N)} f(V)\, dV = \frac{1}{(2\pi)^N N!} \int_0^{2\pi} \cdots \int_0^{2\pi} f(\theta_1,\dots,\theta_N)\, \det \begin{pmatrix} \sum_{\ell=1}^N 1 & \sum_{\ell=1}^N e^{-i\theta_\ell} & \cdots & \sum_{\ell=1}^N e^{-i(N-1)\theta_\ell} \\ \sum_{\ell=1}^N e^{i\theta_\ell} & \sum_{\ell=1}^N 1 & \cdots & \sum_{\ell=1}^N e^{-i(N-2)\theta_\ell} \\ \vdots & \vdots & & \vdots \\ \sum_{\ell=1}^N e^{i(N-1)\theta_\ell} & \sum_{\ell=1}^N e^{i(N-2)\theta_\ell} & \cdots & \sum_{\ell=1}^N 1 \end{pmatrix} d\theta_1 \cdots d\theta_N.$$
Using that $f$ is symmetric in its arguments, each sum in the first row may be replaced by $N$ times its $\ell = 1$ term, which provides
$$\int_{U(N)} f(V)\, dV = \frac{1}{(2\pi)^N (N-1)!} \int_0^{2\pi} \cdots \int_0^{2\pi} f(\theta_1,\dots,\theta_N)\, \det \begin{pmatrix} 1 & e^{-i\theta_1} & \cdots & e^{-i(N-1)\theta_1} \\ \sum_{\ell=1}^N e^{i\theta_\ell} & \sum_{\ell=1}^N 1 & \cdots & \sum_{\ell=1}^N e^{-i(N-2)\theta_\ell} \\ \vdots & \vdots & & \vdots \end{pmatrix} d\theta_1 \cdots d\theta_N.$$
Subtracting $e^{i\theta_1}$ times the first row from the second row then gives a second row with entries $\sum_{\ell=2}^N e^{i(2-n)\theta_\ell}$, $n = 1, \dots, N$. This process is continued, reducing the second row to $(e^{i\theta_2}, 1, e^{-i\theta_2}, \dots, e^{-i(N-2)\theta_2})$ and thus pulling out a factor of $N - 1$. Then the same is done to the third row, and so on. The factor of $N!$ resulting from these row manipulations cancels the $N!$ in the normalization constant of Weyl's formula.
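The determinant identity at the heart of the derivation can be verified numerically for a small $N$; a quick sketch for random phases (the seed and $N = 4$ are illustrative choices):

```python
import numpy as np

# Verify  prod_{j<k} |e^{i th_j} - e^{i th_k}|^2 = det( sum_l e^{i(m-n) th_l} )_{m,n}
# for random phases and N = 4.
rng = np.random.default_rng(7)
N = 4
theta = rng.uniform(0.0, 2.0*np.pi, N)

lhs = 1.0
for j in range(N):
    for k in range(j + 1, N):
        lhs *= abs(np.exp(1j*theta[j]) - np.exp(1j*theta[k]))**2

m = np.arange(N)
M = np.exp(1j*(m[:, None, None] - m[None, :, None])*theta).sum(axis=2)
rhs = np.linalg.det(M).real          # M is Hermitian, so det(M) is real

print(lhs, rhs)
```

The left-hand side is the squared Vandermonde determinant on the unit circle; the right-hand side is the determinant of the product of the two Vandermonde matrices written out above.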
Problem 7. Let $A$ be an $n \times n$ positive definite matrix over $\mathbb{R}$. Let $\mathbf{q}$ and $\mathbf{J}$ be column vectors in $\mathbb{R}^n$. Calculate
$$Z(\mathbf{J}) = \int_{\mathbb{R}^n} dq_1 \cdots dq_n \exp\left( -\frac{1}{2} \mathbf{q}^T A \mathbf{q} + \mathbf{J}^T \mathbf{q} \right).$$
Note that ($a > 0$)
$$\int_{\mathbb{R}} e^{-(aq^2 + bq + c)}\, dq = \sqrt{\frac{\pi}{a}}\, e^{(b^2 - 4ac)/(4a)}. \qquad (1)$$

Solution 7. Since $A$ is positive definite the inverse $A^{-1}$ of $A$ exists and we have $\det(A) \ne 0$. Using (1) we find
$$Z(\mathbf{J}) = \sqrt{\frac{(2\pi)^n}{\det(A)}} \exp\left( \frac{1}{2} \mathbf{J}^T A^{-1} \mathbf{J} \right).$$
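The result can be confirmed numerically for $n = 2$; a sketch with illustrative choices of $A$ and $\mathbf{J}$ (box size and step are truncation parameters):

```python
import numpy as np

# Check  Z(J) = sqrt((2 pi)^n / det A) exp(J^T A^{-1} J / 2)  for n = 2.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
J = np.array([0.5, -0.3])

h = 0.02
t = np.arange(-8.0, 8.0, h)
X, Y = np.meshgrid(t, t, indexing="ij")
quad = 0.5*(A[0, 0]*X*X + 2.0*A[0, 1]*X*Y + A[1, 1]*Y*Y)   # q^T A q / 2
numeric = np.exp(-quad + J[0]*X + J[1]*Y).sum() * h * h

exact = np.sqrt((2.0*np.pi)**2 / np.linalg.det(A)) \
        * np.exp(0.5 * J @ np.linalg.solve(A, J))
print(numeric, exact)
```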
Problem 8. Let $A$ be an $n \times n$ positive definite matrix over $\mathbb{R}$, i.e. all the eigenvalues, which are real, are positive. We also have $A^T = A$. Consider the analytic function $f : \mathbb{R}^n \to \mathbb{R}$
$$f(\mathbf{x}) = \exp\left( -\frac{1}{2} \mathbf{x}^T A^{-1} \mathbf{x} \right).$$
Calculate the Fourier transform of $f$. The Fourier transform is defined by
$$\hat{f}(\mathbf{k}) := \int_{\mathbb{R}^n} f(\mathbf{x}) e^{i \mathbf{k} \cdot \mathbf{x}}\, d\mathbf{x}$$
where $\mathbf{k} \cdot \mathbf{x} = \mathbf{k}^T \mathbf{x} = k_1 x_1 + \cdots + k_n x_n$ and $d\mathbf{x} = dx_1 \cdots dx_n$. The inverse Fourier transform is given by
$$f(\mathbf{x}) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} \hat{f}(\mathbf{k}) e^{-i \mathbf{k} \cdot \mathbf{x}}\, d\mathbf{k}$$
where $d\mathbf{k} = dk_1 \cdots dk_n$. With $a > 0$ we have
$$\int_{\mathbb{R}} e^{-(ax^2 + bx + c)}\, dx = \sqrt{\frac{\pi}{a}}\, e^{(b^2 - 4ac)/(4a)}.$$

Solution 8. Consider first the case $n = 1$. Thus we have
$$\hat{f}(k) = \int_{\mathbb{R}} e^{-x^2/(2\lambda)} e^{ikx}\, dx = \int_{\mathbb{R}} e^{-(x^2/(2\lambda) - ikx)}\, dx = \sqrt{2\lambda\pi}\, e^{-\lambda k^2/2}$$
where $\lambda$ is the (positive) eigenvalue of the $1 \times 1$ matrix $A$.
Consider now the general case. Since $A$ is positive definite we can find an orthogonal matrix $O$ such that $OAO^{-1}$ is diagonal, where $O^{-1} = O^T$. Also $OA^{-1}O^{-1}$ is diagonal. Thus we introduce $\mathbf{y} = O\mathbf{x}$. It follows that
$$\mathbf{x}^T A^{-1} \mathbf{x} = \mathbf{y}^T O A^{-1} O^{-1} \mathbf{y} = \mathbf{y}^T D \mathbf{y}$$
where $D$ is the diagonal matrix $D = \mathrm{diag}(1/\lambda_1, \dots, 1/\lambda_n)$. Thus the multidimensional integral reduces to a product. Using the result for the case $n = 1$ we find
$$\hat{f}(\mathbf{k}) = \sqrt{(2\pi)^n \det(A)} \exp\left( -\frac{1}{2} \mathbf{k}^T A \mathbf{k} \right).$$
Problem 9. Let $A$ be an $n \times n$ matrix. Let $\mu, \omega \in \mathbb{R}$. Assume that
$$\|e^{tA}\| \le M e^{\omega t}, \qquad t \ge 0,$$
and $\mu > \omega$. Then we have
$$(\mu I_n - A)^{-1} = \int_0^\infty e^{-\mu t} e^{tA}\, dt. \qquad (1)$$
Calculate the left- and right-hand side of (1) for the matrix
$$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

Solution 9. For the left-hand side we have
$$(\mu I_2 - A)^{-1} = \begin{pmatrix} \mu & -1 \\ -1 & \mu \end{pmatrix}^{-1} = \frac{1}{\mu^2 - 1} \begin{pmatrix} \mu & 1 \\ 1 & \mu \end{pmatrix}.$$
Since
$$\exp(tA) = I_2 \cosh(t) + A \sinh(t) = \begin{pmatrix} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{pmatrix}$$
and
$$\exp(-\mu t I_2) = \begin{pmatrix} e^{-\mu t} & 0 \\ 0 & e^{-\mu t} \end{pmatrix}$$
we find for the right-hand side
$$\int_0^\infty e^{-\mu t} e^{tA}\, dt = \int_0^\infty \begin{pmatrix} e^{-\mu t} \cosh(t) & e^{-\mu t} \sinh(t) \\ e^{-\mu t} \sinh(t) & e^{-\mu t} \cosh(t) \end{pmatrix} dt.$$
Since
$$\int_0^\infty \cosh(t) e^{-\mu t}\, dt = \frac{\mu}{\mu^2 - 1}, \qquad \int_0^\infty \sinh(t) e^{-\mu t}\, dt = \frac{1}{\mu^2 - 1}$$
we obtain
$$\int_0^\infty e^{-\mu t} e^{tA}\, dt = \frac{1}{\mu^2 - 1} \begin{pmatrix} \mu & 1 \\ 1 & \mu \end{pmatrix}.$$
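The agreement of the two sides can be checked numerically; a sketch using the closed form $e^{tA} = I_2 \cosh(t) + A \sinh(t)$ and an illustrative value $\mu = 2$ (any $\mu > 1$ works, since here $\omega = 1$):

```python
import numpy as np

# Check  (mu I_2 - A)^{-1} = ∫_0^∞ e^{-mu t} e^{tA} dt  for A = [[0,1],[1,0]].
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
mu = 2.0

lhs = np.linalg.inv(mu*np.eye(2) - A)      # = (1/(mu^2-1)) [[mu, 1], [1, mu]]

# e^{tA} = I_2 cosh(t) + A sinh(t); integrate with the midpoint rule.
h = 1e-4
ts = np.arange(0.0, 40.0, h) + h/2
w = np.exp(-mu*ts)*h
diag = (w*np.cosh(ts)).sum()               # -> mu/(mu^2 - 1)
off = (w*np.sinh(ts)).sum()                # -> 1/(mu^2 - 1)
rhs = np.array([[diag, off], [off, diag]])

print(lhs)
print(rhs)
```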
Problem 10. Consider the differentiable manifold
$$S^3 = \{ (x_1, x_2, x_3, x_4) \in \mathbb{R}^4 : x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1 \}.$$
(i) Show that the matrix
$$U(x_1, x_2, x_3, x_4) = i \begin{pmatrix} x_3 + i x_4 & x_1 - i x_2 \\ x_1 + i x_2 & -x_3 + i x_4 \end{pmatrix}$$
is unitary. Show that the matrix is an element of $SU(2)$.
(ii) Consider the parameters $(\theta, \psi, \phi)$ with $0 < \theta < \pi$, $0 < \psi < 4\pi$, $0 < \phi < 2\pi$. Show that
$$x_1(\theta,\psi,\phi) + i x_2(\theta,\psi,\phi) = \cos(\theta/2)\, e^{i(\psi+\phi)/2}, \qquad x_3(\theta,\psi,\phi) + i x_4(\theta,\psi,\phi) = \sin(\theta/2)\, e^{i(\psi-\phi)/2}$$
is a parametrization. Thus the matrix given in (i) takes the form
$$U = i \begin{pmatrix} \sin(\theta/2)\, e^{i(\psi-\phi)/2} & \cos(\theta/2)\, e^{-i(\psi+\phi)/2} \\ \cos(\theta/2)\, e^{i(\psi+\phi)/2} & -\sin(\theta/2)\, e^{-i(\psi-\phi)/2} \end{pmatrix}$$
and
$$U^{-1} = U^* = -i \begin{pmatrix} \sin(\theta/2)\, e^{-i(\psi-\phi)/2} & \cos(\theta/2)\, e^{-i(\psi+\phi)/2} \\ \cos(\theta/2)\, e^{i(\psi+\phi)/2} & -\sin(\theta/2)\, e^{i(\psi-\phi)/2} \end{pmatrix}.$$
(iii) Let $(\xi_1, \xi_2, \xi_3) = (\theta, \psi, \phi)$ with $0 < \theta < \pi$, $0 < \psi < 4\pi$, $0 < \phi < 2\pi$. Show that
$$\dots = 1$$
where $\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = +1$, $\epsilon_{213} = \epsilon_{321} = \epsilon_{132} = -1$ and $\epsilon_{jk\ell} = 0$ otherwise.
(iv) Consider the metric tensor field
$$g = dx_1 \otimes dx_1 + dx_2 \otimes dx_2 + dx_3 \otimes dx_3 + dx_4 \otimes dx_4.$$
Using the parametrization show that
$$g_{S^3} = \frac{1}{4}\left( d\theta \otimes d\theta + d\psi \otimes d\psi + d\phi \otimes d\phi + \cos(\theta)\, d\psi \otimes d\phi + \cos(\theta)\, d\phi \otimes d\psi \right).$$
(v) Consider the three differential one-forms $e_1$, $e_2$, $e_3$ defined by
$$\begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix} = \begin{pmatrix} -x_4 & -x_3 & x_2 & x_1 \\ x_3 & -x_4 & -x_1 & x_2 \\ -x_2 & x_1 & -x_4 & x_3 \end{pmatrix} \begin{pmatrix} dx_1 \\ dx_2 \\ dx_3 \\ dx_4 \end{pmatrix}.$$
Show that $g_{S^3} = e_1 \otimes e_1 + e_2 \otimes e_2 + e_3 \otimes e_3$.
(vi) Show that
$$de_j = \sum_{k,\ell=1}^{3} \epsilon_{jk\ell}\, e_k \wedge e_\ell,$$
i.e. $de_1 = 2 e_2 \wedge e_3$, $de_2 = 2 e_3 \wedge e_1$, $de_3 = 2 e_1 \wedge e_2$.
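The metric in (iv) can be checked numerically by pulling back the Euclidean metric through the parametrization; a sketch using a finite-difference Jacobian at an illustrative parameter point:

```python
import numpy as np

# Pull back g = sum_a dx_a ⊗ dx_a: the Gram matrix J^T J (J the Jacobian of
# x(θ,ψ,φ)) should equal (1/4) [[1,0,0],[0,1,cosθ],[0,cosθ,1]].
def x(t, psi, phi):
    return np.array([np.cos(t/2)*np.cos((psi + phi)/2),
                     np.cos(t/2)*np.sin((psi + phi)/2),
                     np.sin(t/2)*np.cos((psi - phi)/2),
                     np.sin(t/2)*np.sin((psi - phi)/2)])

p = np.array([0.9, 2.1, 1.3])              # illustrative (θ, ψ, φ)
d = 1e-6
J = np.column_stack([(x(*(p + d*e)) - x(*(p - d*e)))/(2*d) for e in np.eye(3)])
g = J.T @ J

ct = np.cos(p[0])
expected = 0.25*np.array([[1.0, 0.0, 0.0],
                          [0.0, 1.0, ct],
                          [0.0, ct, 1.0]])
print(np.round(g, 6))
print(expected)
```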
Solution 10. (i) We have $UU^* = I_2$. Thus the matrix is unitary. We have $\det(U) = 1$. Thus $U$ is an element of $SU(2)$.
(ii) Since
$$x_1 = \cos(\theta/2)\cos((\psi+\phi)/2), \qquad x_2 = \cos(\theta/2)\sin((\psi+\phi)/2),$$
$$x_3 = \sin(\theta/2)\cos((\psi-\phi)/2), \qquad x_4 = \sin(\theta/2)\sin((\psi-\phi)/2)$$
we obtain
$$x_1^2 + x_2^2 = \cos^2(\theta/2), \qquad x_3^2 + x_4^2 = \sin^2(\theta/2).$$
Thus $x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1$.
(iii) We have
$$\frac{\partial U}{\partial \xi_1} = \frac{i}{2} \begin{pmatrix} \cos(\theta/2)\, e^{i(\psi-\phi)/2} & -\sin(\theta/2)\, e^{-i(\psi+\phi)/2} \\ -\sin(\theta/2)\, e^{i(\psi+\phi)/2} & -\cos(\theta/2)\, e^{-i(\psi-\phi)/2} \end{pmatrix},$$
$$\frac{\partial U}{\partial \xi_2} = -\frac{1}{2} \begin{pmatrix} \sin(\theta/2)\, e^{i(\psi-\phi)/2} & -\cos(\theta/2)\, e^{-i(\psi+\phi)/2} \\ \cos(\theta/2)\, e^{i(\psi+\phi)/2} & \sin(\theta/2)\, e^{-i(\psi-\phi)/2} \end{pmatrix},$$
$$\frac{\partial U}{\partial \xi_3} = \frac{1}{2} \begin{pmatrix} \sin(\theta/2)\, e^{i(\psi-\phi)/2} & \cos(\theta/2)\, e^{-i(\psi+\phi)/2} \\ -\cos(\theta/2)\, e^{i(\psi+\phi)/2} & \sin(\theta/2)\, e^{-i(\psi-\phi)/2} \end{pmatrix}.$$
(vi) We show that $de_3 = 2 e_1 \wedge e_2$. We have
$$de_3 = 2(dx_1 \wedge dx_2 + dx_3 \wedge dx_4).$$
Now
$$e_1 \wedge e_2 = (x_3^2 + x_4^2)\, dx_1 \wedge dx_2 + (x_1 x_4 - x_2 x_3)\, dx_1 \wedge dx_3 + (-x_1 x_3 - x_2 x_4)\, dx_1 \wedge dx_4$$
$$\qquad + (x_1 x_3 + x_2 x_4)\, dx_2 \wedge dx_3 + (-x_2 x_3 + x_1 x_4)\, dx_2 \wedge dx_4 + (x_1^2 + x_2^2)\, dx_3 \wedge dx_4.$$
Using
$$x_1\, dx_1 = -x_2\, dx_2 - x_3\, dx_3 - x_4\, dx_4, \qquad x_3\, dx_3 = -x_1\, dx_1 - x_2\, dx_2 - x_4\, dx_4$$
we obtain
$$(x_1 x_4 - x_2 x_3)\, dx_1 \wedge dx_3 = x_2^2\, dx_1 \wedge dx_2 + x_4^2\, dx_3 \wedge dx_4 - x_2 x_4\, dx_2 \wedge dx_3 + x_2 x_4\, dx_1 \wedge dx_4.$$
Using
$$x_2\, dx_2 = -x_1\, dx_1 - x_3\, dx_3 - x_4\, dx_4, \qquad x_4\, dx_4 = -x_1\, dx_1 - x_2\, dx_2 - x_3\, dx_3$$
we obtain
$$(x_1 x_4 - x_2 x_3)\, dx_2 \wedge dx_4 = x_1^2\, dx_1 \wedge dx_2 + x_3^2\, dx_3 \wedge dx_4 + x_1 x_3\, dx_1 \wedge dx_4 - x_1 x_3\, dx_2 \wedge dx_3.$$
Inserting these terms into $e_1 \wedge e_2$ we arrive at
$$e_1 \wedge e_2 = (x_1^2 + x_2^2 + x_3^2 + x_4^2)\, dx_1 \wedge dx_2 + (x_1^2 + x_2^2 + x_3^2 + x_4^2)\, dx_3 \wedge dx_4 = dx_1 \wedge dx_2 + dx_3 \wedge dx_4.$$
Thus $de_3 = 2 e_1 \wedge e_2$. Analogously we show that $de_1 = 2 e_2 \wedge e_3$ and $de_2 = 2 e_3 \wedge e_1$.
Supplementary Problems

Problem 1. Let $A$, $B$ be $n \times n$ matrices over $\mathbb{C}$. Let $\beta \in \mathbb{R}$. Show that
$$\exp(\beta(A + B)) = \exp(\beta A) + \exp(\beta A) \int_0^\beta d\epsilon\, e^{-\epsilon A} B\, e^{\epsilon(A+B)}.$$
Multiply both sides from the left with $\exp(-\beta A)$ and differentiate with respect to $\beta$.
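The identity can be confirmed numerically for random matrices; a sketch with a power-series matrix exponential (the matrix sizes, seed, value of $\beta$ and quadrature resolution are illustrative choices):

```python
import numpy as np

def expm(M, terms=60):
    # Matrix exponential via its power series (adequate for small ||M||).
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(1)
A = 0.5*rng.normal(size=(3, 3))
B = 0.5*rng.normal(size=(3, 3))
beta = 0.7

# ∫_0^beta e^{-eps A} B e^{eps(A+B)} d eps  (midpoint rule)
n = 2000
eps = (np.arange(n) + 0.5)*beta/n
integral = sum(expm(-e*A) @ B @ expm(e*(A + B)) for e in eps)*beta/n

lhs = expm(beta*(A + B))
rhs = expm(beta*A) + expm(beta*A) @ integral
print(np.allclose(lhs, rhs, atol=1e-5))
```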
Problem 2. Let $\alpha \in \mathbb{R}$. Consider an $n \times n$ matrix $A(\alpha)$, where the entries of $A$ depend smoothly on $\alpha$. Then one has the identity
$$\frac{d}{d\alpha} e^{A(\alpha)} = \int_0^1 e^{(1-s)A(\alpha)} \frac{dA(\alpha)}{d\alpha} e^{sA(\alpha)}\, ds = \int_0^1 e^{sA(\alpha)} \frac{dA(\alpha)}{d\alpha} e^{(1-s)A(\alpha)}\, ds.$$
Let $n = 2$ and
$$A(\alpha) = \begin{pmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{pmatrix}, \qquad \frac{dA(\alpha)}{d\alpha} = \begin{pmatrix} -\sin(\alpha) & -\cos(\alpha) \\ \cos(\alpha) & -\sin(\alpha) \end{pmatrix}.$$
Calculate the right-hand sides of the identities.
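The identity can be checked numerically by comparing the integral against a central finite difference of $e^{A(\alpha)}$; a sketch at an illustrative value of $\alpha$:

```python
import numpy as np

def expm(M, terms=60):
    # Matrix exponential via its power series (fine for small 2 x 2 matrices).
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def A(a):
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def dA(a):
    return np.array([[-np.sin(a), -np.cos(a)],
                     [ np.cos(a), -np.sin(a)]])

a = 0.4                                     # illustrative value of alpha
n = 200
s = (np.arange(n) + 0.5)/n                  # midpoint rule on [0, 1]
rhs = sum(expm((1 - si)*A(a)) @ dA(a) @ expm(si*A(a)) for si in s)/n

d = 1e-5                                    # central finite difference in alpha
lhs = (expm(A(a + d)) - expm(A(a - d)))/(2*d)
print(np.allclose(lhs, rhs, atol=1e-7))
```

Since the matrices $A(\alpha)$ form a commuting family here, the integrand is in fact independent of $s$, and the integral collapses to $\frac{dA}{d\alpha} e^{A(\alpha)}$.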
Problem 3. Let $A$, $B$ be positive definite matrices. Then we have the integral representation ($x > 0$)
$$\ln(A + xB) - \ln(A) = \int_0^\infty (A + u I_n)^{-1}\, xB\, (A + xB + u I_n)^{-1}\, du.$$
Choose positive definite matrices $A$ and $B$ and calculate the left- and right-hand side of the integral representation.
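The representation can be confirmed numerically; a sketch with illustrative choices of $A$, $B$ and $x$, using the substitution $u = t/(1-t)$ to map the infinite range onto $[0, 1)$:

```python
import numpy as np

def logm_spd(M):
    # Matrix logarithm of a symmetric positive definite matrix.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

# Illustrative positive definite choices.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
B = np.array([[1.0, 0.2],
              [0.2, 3.0]])
x = 0.8
I = np.eye(2)

lhs = logm_spd(A + x*B) - logm_spd(A)

# ∫_0^∞ ... du  via u = t/(1-t), du = dt/(1-t)^2 (midpoint rule).
n = 20000
rhs = np.zeros((2, 2))
for ti in (np.arange(n) + 0.5)/n:
    u = ti/(1.0 - ti)
    w = 1.0/(1.0 - ti)**2
    rhs += np.linalg.inv(A + u*I) @ (x*B) @ np.linalg.inv(A + x*B + u*I) * w
rhs /= n

print(np.round(lhs, 6))
print(np.round(rhs, 6))
```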
Problem 4. Consider the $2 \times 2$ matrices
$$A(t) = \begin{pmatrix} \cos(t) & -\sin(t) \\ \sin(t) & \cos(t) \end{pmatrix}, \qquad B(t) = \int_0^t A(s)\, ds.$$
Find the commutator $[A(t), B(t)]$. Discuss. What is the condition such that $[A(t), B(t)] = 0_2$?

Problem 5. Let $P_j$ ($j = 0, 1, 2, \dots$) be the Legendre polynomials
$$P_0(x) = 1, \qquad P_1(x) = x, \qquad P_2(x) = \frac{1}{2}(3x^2 - 1), \quad \dots$$
Show that the infinite dimensional matrix $A = (a_{jk})$ with
$$a_{jk} = \int_{-1}^{+1} P_j(x) \frac{dP_k(x)}{dx}\, dx, \qquad j, k = 0, 1, \dots$$
is given by
$$A = \begin{pmatrix} 0 & 2 & 0 & 2 & 0 & 2 & \cdots \\ 0 & 0 & 2 & 0 & 2 & 0 & \cdots \\ 0 & 0 & 0 & 2 & 0 & 2 & \cdots \\ 0 & 0 & 0 & 0 & 2 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
Note that the matrix is not symmetric. The matrix $A$ can be considered as a linear operator in the Hilbert space $\ell_2(\mathbb{N}_0)$. Is $\|A\| < \infty$?
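The upper-left block of $A$ can be computed exactly with NumPy's Legendre-series utilities; a sketch for the first six rows and columns:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Build a_{jk} = ∫_{-1}^{1} P_j(x) P_k'(x) dx; the entries are 2 when k - j
# is odd and positive, and 0 otherwise.
n = 6
Amat = np.zeros((n, n))
for j in range(n):
    for k in range(n):
        cj = np.zeros(j + 1); cj[j] = 1.0        # coefficients of P_j
        ck = np.zeros(k + 1); ck[k] = 1.0        # coefficients of P_k
        prod = L.legmul(cj, L.legder(ck))        # P_j(x) P_k'(x)
        F = L.legint(prod)                       # an antiderivative
        Amat[j, k] = L.legval(1.0, F) - L.legval(-1.0, F)

print(np.round(Amat).astype(int))
```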
Problem 6. Let $A$, $B$, $C$ be positive definite $n \times n$ matrices. Then (Lieb inequality)
$$\mathrm{tr}\left( e^{\ln(A) - \ln(B) + \ln(C)} \right) \le \mathrm{tr} \int_0^\infty A (B + u I_n)^{-1} C (B + u I_n)^{-1}\, du.$$
(i) Calculate the left-hand side and right-hand side of the inequality for given $2 \times 2$ positive definite matrices $A$, $B$ and $C$.
(ii) Show that a sufficient condition such that one has an equality is that $A$, $B$, $C$ commute. However this condition is not necessary.
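The inequality can be exercised numerically on random positive definite matrices; a sketch (the seed, matrix construction and quadrature resolution are illustrative choices):

```python
import numpy as np

def apply_sym(M, f):
    # Apply a scalar function to a symmetric matrix via its eigendecomposition.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(f(w)) @ V.T

rng = np.random.default_rng(3)
def rand_spd(n=2):
    X = rng.normal(size=(n, n))
    return X @ X.T + 0.5*np.eye(n)

A, B, C = rand_spd(), rand_spd(), rand_spd()   # random positive definite matrices
I = np.eye(2)

lhs = np.trace(apply_sym(apply_sym(A, np.log) - apply_sym(B, np.log)
                         + apply_sym(C, np.log), np.exp))

# tr ∫_0^∞ A (B+uI)^{-1} C (B+uI)^{-1} du  via u = t/(1-t) (midpoint rule).
n = 20000
rhs = 0.0
for ti in (np.arange(n) + 0.5)/n:
    u = ti/(1.0 - ti)
    R = np.linalg.inv(B + u*I)
    rhs += np.trace(A @ R @ C @ R)/(1.0 - ti)**2
rhs /= n

print(lhs, rhs)
```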