# 18.755 Problems 1 solutions

1. In class Friday we considered a group
$$H = \{\, H_{x,\xi,z} \mid x \in \mathbb{R},\ \xi \in \mathbb{R},\ z = e^{i\zeta} \in S^1 \,\}$$
with the multiplication law
$$H_{x_1,\xi_1,z_1} \cdot H_{x_2,\xi_2,z_2} = H_{x_1+x_2,\,\xi_1+\xi_2,\,z_1 z_2 e^{i x_1 \xi_2}}.$$
(This was a group of linear transformations of an infinite-dimensional vector space of functions on $\mathbb{R}$.) Can you find a collection of finite matrices satisfying this multiplication law?
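Before turning to the proof, the multiplication law itself can be sanity-checked numerically. The sketch below represents a group element $H_{x,\xi,z}$ as a tuple $(x, \xi, z)$; the helper names `mul`, `inv`, `e` and the sample elements are illustrative, not part of the problem.

```python
import cmath

# A group element H_{x,xi,z} is modeled as a tuple (x, xi, z) with z on the
# unit circle; `mul` implements the stated multiplication law.

def mul(a, b):
    x1, xi1, z1 = a
    x2, xi2, z2 = b
    return (x1 + x2, xi1 + xi2, z1 * z2 * cmath.exp(1j * x1 * xi2))

e = (0.0, 0.0, 1 + 0j)  # identity element

def inv(a):
    # H_{x,xi,z}^{-1} = H_{-x,-xi, z^{-1} e^{i x xi}}; mul(a, inv(a)) = e
    x, xi, z = a
    return (-x, -xi, cmath.exp(1j * x * xi) / z)

a = (1.0, 2.0, cmath.exp(0.5j))
b = (-0.3, 1.7, cmath.exp(2.1j))
c = (2.2, -0.4, cmath.exp(-1.0j))

lhs = mul(mul(a, b), c)   # associativity check
rhs = mul(a, mul(b, c))
```

Associativity holds because the cocycle exponents $x_1\xi_2 + (x_1+x_2)\xi_3$ and $x_2\xi_3 + x_1(\xi_2+\xi_3)$ agree.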
The answer is no: you cannot find such a collection of matrices all of which are distinct. More precisely:

**Theorem.** Suppose $V$ is an $n$-dimensional vector space over a field $k$, and that
$$\rho \colon H \to GL(V)$$
is a homomorphism of $H$ into the group of invertible linear transformations of $V$. For every $z_\ell \in S^1$ which is an $\ell$th root of unity, with $\ell$ a prime not equal to the characteristic of $k$ and larger than the dimension of $V$, the linear transformation $\rho(H_{0,0,z_\ell})$ is equal to the identity.
(The problem does not ask for a group, but only that the multiplication law be
respected. I will omit the very easy and dull analysis of that more general case.)
Proof. Replacing $k$ by its algebraic closure, we may assume that $k$ is algebraically closed. Notice first that for any $z = e^{i\theta}$,
$$H_{0,0,z} = H_{1,0,1}\, H_{0,\theta,1}\, H_{1,0,1}^{-1}\, H_{0,\theta,1}^{-1}.$$
The group element $H_{0,0,z}$ is therefore a commutator; so its image in $GL(V)$ is a commutator, and therefore
$$\det(\rho(H_{0,0,z})) = 1.$$
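The commutator identity can be checked directly from the multiplication law. A quick numerical sketch (tuples $(x, \xi, z)$ stand for group elements; `mul` and `inv` are illustrative helper names):

```python
import cmath

# Verify H_{1,0,1} H_{0,theta,1} H_{1,0,1}^{-1} H_{0,theta,1}^{-1} = H_{0,0,e^{i theta}}
# using the multiplication law encoded on tuples (x, xi, z).

def mul(a, b):
    x1, xi1, z1 = a
    x2, xi2, z2 = b
    return (x1 + x2, xi1 + xi2, z1 * z2 * cmath.exp(1j * x1 * xi2))

def inv(a):
    x, xi, z = a
    return (-x, -xi, cmath.exp(1j * x * xi) / z)

theta = 0.7
A = (1.0, 0.0, 1 + 0j)      # H_{1,0,1}
B = (0.0, theta, 1 + 0j)    # H_{0,theta,1}
comm = mul(mul(A, B), mul(inv(A), inv(B)))
# comm should equal (0, 0, e^{i theta}), i.e. H_{0,0,z}
```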
For the particular $z_\ell$ in the theorem, $\rho(H_{0,0,z_\ell})$ is a linear transformation of (finite) order $\ell$, assumed to be relatively prime to the characteristic. It follows that $\rho(H_{0,0,z_\ell})$ must be diagonalizable, with eigenvalues various $\ell$th roots of $1$ in $k$. We can write the eigenspace decomposition as
$$V = \bigoplus_{\zeta^\ell = 1} V_\zeta.$$
Now it is also true that $H_{0,0,z_\ell}$ belongs to the center of $H$, so $\rho(H_{0,0,z_\ell})$ commutes with all of the linear transformations $\rho(H_{x,\xi,z})$. These linear transformations therefore respect the eigenspace decomposition. In matrix terms, using a basis of eigenvectors of $\rho(H_{0,0,z_\ell})$, all the matrices $\rho(H_{x,\xi,z})$ are block diagonal. Each block separately defines a group homomorphism
$$\rho_\zeta \colon H \to GL(V_\zeta)$$
exactly as in the theorem. We can therefore apply the commutator argument to conclude
$$1 = \det(\rho_\zeta(H_{0,0,z_\ell})) = \zeta^{\dim V_\zeta}.$$
But we chose $\ell$ to be a prime larger than $\dim V$ and $\zeta$ to be an $\ell$th root of $1$: if $\zeta \ne 1$, its order is $\ell$, which cannot divide $\dim V_\zeta \le \dim V < \ell$ unless $\dim V_\zeta = 0$. This equation therefore implies that either $V_\zeta = 0$, or $\zeta = 1$. The conclusion is that $\rho(H_{0,0,z_\ell})$ is the identity, as we wished to show. Q.E.D.
This proves that any group homomorphism of H into finite matrices must have
a nontrivial kernel. At least in characteristic zero, similar arguments will show that
the kernel must include all the roots of unity in $S^1$.
The hint was more or less intentionally misleading. The matrices
$$N = \left\{\, N_{x,\xi,\theta} = \begin{pmatrix} 1 & x & \theta \\ 0 & 1 & \xi \\ 0 & 0 & 1 \end{pmatrix} \;\middle|\; x, \xi, \theta \in \mathbb{R} \,\right\}$$
form a group with center $\mathbb{R}$ (the $\theta$ coordinate). In particular, this means that the subgroup $2\pi\mathbb{Z} \subset \mathbb{R}$ is normal. The quotient group $N/2\pi\mathbb{Z}$ is $H$, but the quotient of a group of matrices need not be a group of matrices. Mathematicians call the matrix group $N$ the “Heisenberg group,” but the group that's actually interesting in physics is the quotient $H$, which is not a matrix group at all.
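That the $\theta$ coordinate of a product is $\theta_1 + \theta_2 + x_1\xi_2$, the additive version of the law for $H$, can be confirmed by multiplying two such matrices. A minimal sketch (the helper names `N` and `matmul` are illustrative):

```python
# The 3x3 matrices N_{x,xi,theta} multiply by adding the x and xi coordinates,
# while the corner entry picks up the extra cocycle term x1*xi2.

def N(x, xi, theta):
    return [[1.0, x, theta],
            [0.0, 1.0, xi],
            [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

x1, xi1, t1 = 1.5, -0.5, 2.0
x2, xi2, t2 = 0.25, 3.0, -1.0
prod = matmul(N(x1, xi1, t1), N(x2, xi2, t2))
expected = N(x1 + x2, xi1 + xi2, t1 + t2 + x1 * xi2)
```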
2. Same question with $\mathbb{R}$ and $S^1$ replaced by $\mathbb{Z}/p\mathbb{Z}$ (integers mod a prime $p$) and the group $\mu_p$ of complex $p$th roots of $1$:
$$H_{x_1,\xi_1,z_1} \cdot H_{x_2,\xi_2,z_2} = H_{x_1+x_2,\,\xi_1+\xi_2,\,z_1 z_2 e^{2\pi i x_1 \xi_2 / p}}$$
$$(x_i \in \mathbb{Z}/p\mathbb{Z},\ \xi_i \in \mathbb{Z}/p\mathbb{Z},\ z_i \in \mu_p).$$
(To make sense of this formula, notice that if $m \in \mathbb{Z}/p\mathbb{Z}$, then $e^{2\pi i m/p}$ is a well-defined $p$th root of $1$. The formula defines a group of order $p^3$.) Can you find a group of finite matrices satisfying this multiplication law? How small can you make the matrices?
Phrased without matrices, we are looking for a vector space $V$ over a field $k$ and a homomorphism
$$\rho \colon H \to GL(V);$$
what makes it interesting is looking for an injective homomorphism $\rho$.

There is an easy way to do this, for any finite group $H$ and any field $k$: define $V$ to be the $|H|$-dimensional vector space of all $k$-valued functions on $H$, and define
$$[\rho(h)f](x) = f(h^{-1}x).$$
Another way to say the same thing is that we are realizing $H$ as a subgroup of the group of permutations of the set $H$, then realizing those permutations of $|H|$ elements as $|H| \times |H|$ (permutation) matrices. In any case the size of the required matrices is $|H|$, in this case $p^3$.
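This regular representation can be built explicitly. The sketch below does it for the mod-$p$ Heisenberg group with $p = 3$ (so $|H| = 27$), writing group elements additively as triples $(x, \xi, t)$; $\rho(h)$ sends the delta function at $g$ to the delta function at $hg$. All helper names are illustrative.

```python
from itertools import product

# Left regular representation: rho(h) is the |H| x |H| permutation matrix
# of left translation by h, i.e. [rho(h) f](x) = f(h^{-1} x).

p = 3
elements = list(product(range(p), repeat=3))   # triples (x, xi, t)
index = {g: i for i, g in enumerate(elements)}

def mul(a, b):
    x1, xi1, t1 = a
    x2, xi2, t2 = b
    return ((x1 + x2) % p, (xi1 + xi2) % p, (t1 + t2 + x1 * xi2) % p)

def rho(h):
    n = len(elements)
    M = [[0] * n for _ in range(n)]
    for g in elements:
        M[index[mul(h, g)]][index[g]] = 1   # delta_g -> delta_{h g}
    return M

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

g, h = (1, 2, 0), (2, 1, 1)
# rho is a homomorphism: rho(g) rho(h) = rho(g h); and it is injective,
# since distinct elements give distinct left translations.
```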
But smaller is possible. As noted in the problem, $\mathbb{Z}/p\mathbb{Z}$ can be identified with the group $\mu_p$ of $p$th roots of $1$ by sending $m + p\mathbb{Z}$ to $e^{2\pi i m/p}$. If we do this, our group law looks like
$$H_{x_1,\xi_1,t_1} \cdot H_{x_2,\xi_2,t_2} = H_{x_1+x_2,\,\xi_1+\xi_2,\,t_1+t_2+x_1\xi_2}$$
$$(x_i \in \mathbb{Z}/p\mathbb{Z},\ \xi_i \in \mathbb{Z}/p\mathbb{Z},\ t_i \in \mathbb{Z}/p\mathbb{Z}).$$
This is precisely the law for multiplying the $3 \times 3$ matrices
$$H_{x,\xi,t} = \begin{pmatrix} 1 & x & t \\ 0 & 1 & \xi \\ 0 & 0 & 1 \end{pmatrix}$$
with entries in $\mathbb{Z}/p\mathbb{Z}$. So over a field with $p$ elements, we can use $3 \times 3$ matrices. I believe that this is the smallest possible size in characteristic $p$, but I have not written down a proof.
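The $3 \times 3$ claim can be verified exhaustively for a small prime; the sketch below (with $p = 5$ and illustrative helper names) checks both that the map is a homomorphism on all $p^6$ pairs and that it is injective.

```python
from itertools import product

# Check (p = 5): (x, xi, t) -> [[1,x,t],[0,1,xi],[0,0,1]] over Z/pZ is an
# injective homomorphism for the law t1 + t2 + x1*xi2, so the order-p^3
# group H embeds in 3x3 matrices over a field with p elements.

p = 5

def H(x, xi, t):
    return ((1, x, t), (0, 1, xi), (0, 0, 1))

def matmul_mod(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % p
                       for j in range(3)) for i in range(3))

def law(a, b):
    x1, xi1, t1 = a
    x2, xi2, t2 = b
    return ((x1 + x2) % p, (xi1 + xi2) % p, (t1 + t2 + x1 * xi2) % p)

triples = list(product(range(p), repeat=3))
hom = all(matmul_mod(H(*a), H(*b)) == H(*law(a, b))
          for a in triples for b in triples)
inj = len({H(*a) for a in triples}) == p ** 3
```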
Over the complex numbers, we can use $p \times p$ matrices; or better, we can think of $V$ as the space of functions on the group $\mathbb{Z}/p\mathbb{Z}$. Exactly as in the case of functions on $\mathbb{R}$, we use three operations on functions:
$$[Q_x f](y) = f(y - x) \qquad (x, y \in \mathbb{Z}/p\mathbb{Z})$$
$$[P_\xi f](y) = e^{-2\pi i \xi y/p} f(y) \qquad (\xi, y \in \mathbb{Z}/p\mathbb{Z})$$
$$[Z_z f](y) = z f(y) \qquad (z \in \mu_p,\ y \in \mathbb{Z}/p\mathbb{Z})$$
Composing them gives
$$H_{x,\xi,z} = P_\xi\, Q_x\, Z_z, \qquad [H_{x,\xi,z} f](y) = z e^{-2\pi i \xi y/p} f(y - x).$$
The multiplication law for the $H$'s can be computed exactly as we did in class for the case of $\mathbb{R}$ instead of $\mathbb{Z}/p\mathbb{Z}$; it is the one in the problem. Therefore we've realized the group $H$ as a group of linear transformations of the $p$-dimensional complex vector space $V$ of functions on $\mathbb{Z}/p\mathbb{Z}$.
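The sketch below realizes this concretely for $p = 5$: following the formula $[H_{x,\xi,z}f](y) = z e^{-2\pi i \xi y/p} f(y - x)$, the matrix of $H_{x,\xi,z}$ in the delta-function basis has a single nonzero entry $z e^{-2\pi i \xi y/p}$ in each row $y$, in column $y - x \bmod p$. Helper names (`Hmat`, `close`, etc.) are illustrative.

```python
import cmath
from itertools import product

# Realize H on the p-dimensional space of functions on Z/pZ (p = 5) and
# check the multiplication law H_{x1,xi1,z1} H_{x2,xi2,z2}
#   = H_{x1+x2, xi1+xi2, z1 z2 e^{2 pi i x1 xi2 / p}} on a sample pair.

p = 5
w = cmath.exp(-2j * cmath.pi / p)   # e^{-2 pi i / p}

def Hmat(x, xi, z):
    M = [[0j] * p for _ in range(p)]
    for y in range(p):
        M[y][(y - x) % p] = z * w ** (xi * y)   # z e^{-2 pi i xi y / p}
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(p)]
            for i in range(p)]

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-9
               for i in range(p) for j in range(p))

z1 = cmath.exp(2j * cmath.pi * 2 / p)   # elements of mu_p
z2 = cmath.exp(2j * cmath.pi * 4 / p)
lhs = matmul(Hmat(1, 2, z1), Hmat(3, 4, z2))
rhs = Hmat((1 + 3) % p, (2 + 4) % p,
           z1 * z2 * cmath.exp(2j * cmath.pi * 1 * 4 / p))
```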
If we use the basis of $V$ consisting of functions that are $1$ at one group element and zero at the others, then the matrices $Q_x$ look like
$$\begin{pmatrix}
0 & 0 & 0 & \cdots & 1 & 0 & 0 & \cdots \\
0 & 0 & 0 & \cdots & 0 & 1 & 0 & \cdots \\
0 & 0 & 0 & \cdots & 0 & 0 & 1 & \cdots \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & \\
1 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & \cdots & 0 & 0 & 0 & \cdots \\
0 & 0 & 1 & \cdots & 0 & 0 & 0 & \cdots \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots &
\end{pmatrix}$$
with the $1$s appearing $x$ spaces below or to the left of the diagonal and $p - x$ spaces above or to the right of the diagonal. (Here as almost always, I may have the signs wrong.) The matrices $P_\xi$ are diagonal, with entries $1 = e^0$, $e^{-2\pi i \xi/p}$, $e^{-4\pi i \xi/p}$, and so on. The matrices $Z_z$ are $z\,\mathrm{Id}$. A general matrix $H_{x,\xi,z}$ has the shape in the display above, but the nonzero entries are $p$th roots of unity instead of $1$.
This construction works in any field $k$ admitting a nontrivial $p$th root of $1$; in particular, in any algebraically closed field of characteristic not $p$. It turns out that these are the smallest matrices by which $H$ may be realized over such fields. In characteristic $p$, we saw that we can use $3 \times 3$ matrices.

I have not thought carefully about how small the matrices can be over fields of characteristic not $p$ in which there are no nontrivial $p$th roots of $1$. My guess would be $rp$, where $r$ is the degree of the field extension generated by adjoining a $p$th root of $1$. (Necessarily $r$ divides $p - 1$, and it's easy to show that you can realize $H$ using $rp \times rp$ matrices.)