Spectral Theory for SL2(Z)\H

Emanuel Geromin

September 9, 2013

1 Introduction
To motivate the need for the spectral theory we begin by investigating how the group SL2(Z) := {g ∈ M2(Z) : det g = 1} acts on the upper half plane H = {z ∈ C : ℑ(z) > 0}. The group (which we will denote Γ for brevity) is a discrete subgroup of G := SL2(R). G is important here; one proves that, modulo its centre ±1, it is precisely the group of orientation-preserving isometries of H.
Therefore Γ acts properly discontinuously on H, i.e. for any two distinct points x, y in H there exist open neighbourhoods U, V containing x, y respectively such that the number of group elements g in Γ with gU ∩ V ≠ ∅ is finite. For such an action there is a notion of fundamental domain: a subset F of H such that

• H = ⋃_{γ∈Γ} γF;
• there is an open set U so that F = Ū;
• U and γU are either identical or disjoint.

We recall that a fundamental domain for the action of Γ on H is given by the set

F := {z ∈ H : −1/2 ≤ ℜz ≤ 1/2 and |z| ≥ 1},
see ([2], Theorem 4.1.2, page 97). The images of F under Γ therefore tessellate H; figure (1) shows a picture of F and its images – call these tiles.
Figure 1: Shaded region is F .
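The map sending a point of H to its representative in F can be computed with the standard reduction algorithm, alternating the generators T : z ↦ z + 1 and S : z ↦ −1/z of Γ. A minimal sketch (the function name and iteration cap are our own choices):

```python
def reduce_to_F(z, max_iter=1000):
    """Map z in the upper half plane to its representative in the
    standard fundamental domain F, using the generators
    T: z -> z + 1 and S: z -> -1/z of SL2(Z)."""
    for _ in range(max_iter):
        # translate the real part into [-1/2, 1/2]
        z = complex(z.real - round(z.real), z.imag)
        if abs(z) >= 1:
            return z  # z now lies in F
        z = -1 / z  # the inversion S strictly increases Im(z) when |z| < 1
    return z
```

Each inversion step strictly increases the imaginary part while |z| < 1, so the loop terminates; this is how one can produce pictures like Figure 2 below.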
A curve v : [0, 1] → H will cross a finite number of tiles and can therefore be ‘reflected’ into F, inducing a curve v̂ : [0, 1] → F. (If v(t) is in γF then define v̂(t) = γ⁻¹v(t).) We are specifically interested in the family of line segments

v_ε := {x + iε : −1/2 ≤ x ≤ 1/2}

for ε > 0. How do these reflect in F?
Figure (2) shows how 200 evenly spaced points on v_ε reflect into F – for ε = 1/10, 1/25, 1/100 and 1/250. We observe that as ε gets smaller, the reflection of v_ε covers F more uniformly. It increasingly starts to look like 200 points chosen at random according to a probability distribution that is
‘bottom-heavy’. We recall the hyperbolic measure µ on H given in terms of
the Lebesgue measure on C by
µ(S) := ∫_S (dx dy)/y²
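As a sanity check, the hyperbolic area of F can be computed from this formula: integrating dy/y² from √(1 − x²) up to ∞ gives µ(F) = ∫_{−1/2}^{1/2} dx/√(1 − x²) = π/3. A quick numerical confirmation by the midpoint rule:

```python
import math

# mu(F) = ∫_{-1/2}^{1/2} ∫_{sqrt(1-x^2)}^{∞} dy/y^2 dx
#       = ∫_{-1/2}^{1/2} dx / sqrt(1 - x^2)  =  pi/3
n = 10000
h = 1.0 / n
area = sum(h / math.sqrt(1 - (-0.5 + (k + 0.5) * h) ** 2) for k in range(n))
print(area, math.pi / 3)  # both ≈ 1.0471975
```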
[Figure 2 near here: four panels showing the reflections of v_ε into F.]

Figure 2: Clockwise from top-left, ε = 1/10, 1/25, 1/250 and 1/100.
for any subset S of H that is measurable in the Lebesgue measure. We have V := µ(F) < ∞. Furthermore the uniform measure on v_ε induces a measure µ_ε on F (it is a bit difficult to write down explicitly because v_ε(t) reflects depending on the tile it’s in); again we have µ_ε(F) = 1 < ∞. Figure (2) now suggests that as ε → 0, µ_ε ‘converges’ to µ on F. We want to make this intuition precise. For this we need an appropriate notion of convergence of measure.
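This suggested convergence can be checked numerically. Under µ normalised on F, the region {z ∈ F : ℑz ≥ a} (for a > 1) has mass (1/a)/(π/3). The sketch below (the reduction routine, sample size and threshold are our own choices) reflects a low horocycle into F and compares the empirical proportion above height 1.2 with the predicted value ≈ 0.796:

```python
import math

def reduce_to_F(z, max_iter=2000):
    # reduce z to the standard fundamental domain via T: z -> z+1, S: z -> -1/z
    for _ in range(max_iter):
        z = complex(z.real - round(z.real), z.imag)
        if abs(z) >= 1:
            return z
        z = -1 / z
    return z

eps, n, a = 1e-3, 4000, 1.2
pts = [reduce_to_F(complex((k + 0.5) / n - 0.5, eps)) for k in range(n)]
frac = sum(1 for w in pts if w.imag >= a) / n
predicted = (1 / a) / (math.pi / 3)  # ≈ 0.796
print(frac, predicted)
```

The two numbers agree to within a few percent already at ε = 1/1000, in line with what Figure 2 suggests.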
We may normalise µ and µ_ε and define

P(S) := µ(S)/µ(F),   P_ε(S) := µ_ε(S)/µ_ε(F).

These measures have total volume 1 and so are probability measures, and for probability measures we have a notion of weak convergence of measure:
Definition. For probability measures P and {P_n}_{n≥0} on a space X, let E be the expectation with respect to P and E_n the expectation with respect to P_n. We say P_n converges weakly to P as n → ∞ if

E_n f → E f   (1.1)

for every square-integrable (L²) function f on X.
This is actually a stronger hypothesis than the standard one, which
assumes f only to be bounded and continuous.
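A toy example of (1.1) on X = [0, 1] (this example is ours, not from the text): let P_n be the uniform probability measure on the n midpoints (k + 1/2)/n and P the Lebesgue measure. For f(x) = x² the expectations E_n f are Riemann sums converging to E f = 1/3:

```python
def En(f, n):
    # expectation of f under the uniform measure on n midpoints of [0, 1]
    return sum(f((k + 0.5) / n) for k in range(n)) / n

f = lambda x: x * x
for n in (10, 100, 1000):
    print(n, En(f, n))  # tends to E f = 1/3
```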
We may restrict ourselves to square-integrable automorphic functions on
F , i.e. functions such that f (γz) = f (z) for γ ∈ Γ and z, γz ∈ F (of course,
this only applies to points on the boundary of F), since these functions are
dense in L2 (F ). (Recall L2 (F ) is a normed space.) Such functions extend
to automorphic functions on the whole of H, i.e. functions in
L := L2 (Γ\H).
This is a Hilbert space with normalised inner product given by

⟨f, g⟩ := (1/V) ∫_F f(z) ḡ(z) dµz,
where µ is the hyperbolic measure defined above. For automorphic functions it is easy to write down E_ε f (the expectation w.r.t. P_ε), since we needn’t bother about reflection; it is simply

E_ε f = ∫₀¹ f(x + iε) dx

since v_ε has total length 1. So we set out to prove that E_ε f → E f as ε → 0, i.e.

lim_{ε→0} ∫₀¹ f(x + iε) dx = (1/V) ∫_F f(z) dµz
for all functions f ∈ L. Equivalently,

lim_{ε→0} ∫₀¹ f(x + iε) ḡ(x + iε) dx = ⟨f, g⟩   (1.2)

for all functions f, g ∈ L. (1.2) is the formula we set out to prove.
For this we will require a spectral theory decomposing a function in L into eigenfunctions of the Laplace operator

∆ = y² ( ∂²/∂x² + ∂²/∂y² ),

which acts on the whole of L, as we shall see (though this is not yet clear: for now ∆ acts on twice-differentiable functions).
We shall see that L splits into the direct sum of two subspaces C and
E. On each subspace there is a different type of spectral theory: a ‘discrete’
version on C expressing functions as a countable sum of eigenfunctions (akin
to the usual Fourier series expansion on the torus); and a ‘continuous’ part
on E expressing functions as a continuous sum (integral) of eigenfunctions.
We now define these spaces.
2 Definition of Spaces
At first we restrict ourselves to the subspace B of L of smooth and bounded
automorphic functions, which is dense in L. We shall split up B as a direct
sum
B = C ⊕ E

and so

L = B̄ = C̄ ⊕ Ē,
where ¯· denotes topological closure.
Definition of C. Every smooth automorphic function f (x + iy) is in
particular invariant under the action of translations
Γ∞ := { ±( 1 n ; 0 1 ) : n ∈ Z }

and therefore has a Fourier series expansion in the variable x. Denote by cP f(y) the constant term in this expansion, so

cP f(y) = ∫₀¹ f(x + iy) dx.
A function f ∈ B is called a cusp form if cP f(y) is identically zero. The set of cusp forms is a subspace of B denoted C.
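The constant term is easy to approximate numerically. For a test function that is 1-periodic in x (so Γ∞-invariant, though not automorphic), say f(z) = ℑz + cos(2πℜz)e^{−ℑz} (our own example), the oscillating part integrates out and cP f(y) = y:

```python
import math

def constant_term(f, y, n=1000):
    # c_P f(y) = ∫_0^1 f(x + iy) dx, approximated by a midpoint sum
    return sum(f(complex((k + 0.5) / n, y)) for k in range(n)) / n

# 1-periodic test function (Gamma_inf-invariant, not automorphic)
f = lambda z: z.imag + math.cos(2 * math.pi * z.real) * math.exp(-z.imag)
print(constant_term(f, 2.0))  # ≈ 2.0: the cosine term averages to zero
```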
Definition of E. Suppose ϕ is a smooth, compactly supported function on R⁺ (i.e. ϕ ∈ Cc∞(R⁺)). Then the function

f(z) := ϕ(ℑz)

defines a smooth function on H that is invariant under Γ∞, though not necessarily under Γ. To get an automorphic function, sum over all cosets:

Ψϕ(z) := Σ_{γ∈Γ∞\Γ} ϕ(ℑγz).   (2.1)

This type of function is called a pseudo-Eisenstein series. The sum is well-behaved:
Lemma 2.1. The sum defining Ψϕ converges absolutely and uniformly on
compacts.
Proof. Let C ⊂ H be the compact set in question. The support of ϕ is a compact subset of (0, ∞), so is contained in an interval [p, q] with p > 0. I claim that the number of γ ∈ Γ∞\Γ with ℑ(γz) ≥ p for some z ∈ C is finite.
Indeed for

γ = ( a b ; c d ) ∈ Γ

and z ∈ C we have

ℑ(γz) = ℑz / |cz + d|².
Now z lies in the compact set C, so r ≤ ℑz ≤ s and t ≤ |ℜz| ≤ v say (r, s, t, v > 0). The inequality

|cz + d| ≤ δ

(any δ > 0) holds for only finitely many pairs (c, d), how many depending only on C: |cz + d| ≥ |ℑ(cz + d)| = |c| ℑ(z) ≥ |c| r bounds the number of possible c, and |cz + d| ≥ |ℜ(cz + d)| = |cℜz + d| bounds the number of possible d for each c. Pick δ = √(s/p): if

ℑ(γz) = ℑz / |cz + d|² ≥ p,

then |cz + d|² ≤ ℑz/p ≤ s/p, so |cz + d| ≤ δ; hence the number of pairs (c, d) with ℑ(γz) ≥ p for some z ∈ C is finite and depends only on C. Because two elements
( a b ; c d )   and   ( a′ b′ ; c d )

of Γ with the same bottom row lie in the same coset of Γ∞, this proves the claim.

Given the claim we know that for z ∈ C the number of nonzero terms in (2.1) is finite, and these terms depend only on C; thus the sum is absolutely and uniformly convergent on C.
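The coset criterion used above, that two elements of Γ with the same bottom row lie in the same Γ∞-coset, can be checked concretely: if γ′ and γ share a bottom row, then γ′γ⁻¹ fixes ∞ and is upper triangular with 1’s on the diagonal. A small check (the sample matrices are our own):

```python
def mul(A, B):
    # 2x2 integer matrix product
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def inv(A):
    # inverse of a determinant-1 integer matrix
    (a, b), (c, d) = A
    return ((d, -b), (-c, a))

g1 = ((2, 1), (1, 1))   # det = 1
g2 = ((1, 0), (1, 1))   # det = 1, same bottom row as g1
t = mul(g2, inv(g1))    # should be an element of Gamma_inf
print(t)
```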
We may improve the bound on the count #{γ : ℑ(γz) > Y}. We have

#{γ ∈ Γ∞\Γ : ℑ(γz) > Y} < 1 + k/Y   (2.2)

for a constant k. For a proof of this, see ([1], lemma 2.10, page 50).
The set of pseudo-Eisenstein series defines a subspace of B denoted E.
(By lemma 2.1, Ψϕ + Ψφ = Ψϕ+φ ; and ϕ + φ is still in Cc∞ (R+ ).) E is not
necessarily equal to B; in fact we have that the orthogonal complement of
E in B is precisely C:
Lemma 2.2. A function f ∈ B is a cusp form if and only if hf, Ψϕ i = 0
for all pseudo-Eisenstein series Ψϕ .
Proof. Compute the inner product:

V · ⟨f, Ψϕ⟩ = ∫_F f(z) · Σ_γ ϕ(ℑγz) dµz = Σ_γ ∫_F f(z) ϕ(ℑγz) dµz;   (2.3)

we may swap sum and integral by absolute convergence. Using the fact that elements of Γ are isometries (Γ ⊂ G), the integral becomes

Σ_γ ∫_{γF} f(z) ϕ(ℑz) dµz = ∫_{0≤ℜ(z)≤1} f(z) ϕ(ℑz) dµz,

which (letting z = x + iy) we evaluate as

∫₀^∞ ∫₀¹ f(x + iy) ϕ(y) dx (dy/y²) = ∫₀^∞ cP f(y) · ϕ(y) (dy/y²).

Since cP f(y) is smooth, it is either identically zero or of constant sign on some interval [a, b]. In the former case f is a cusp form; in the latter, we may choose a smooth, nonnegative ϕ, supported in [a, b] and positive somewhere on it, such that the last integral does not vanish.
3 Discrete Part of Spectrum
To find the spectral resolution of ∆ on C we’ll use functional analysis. The
spectrum will arise from the application of one of the main results in the
theory of bounded linear operators on Hilbert spaces, the Hilbert-Schmidt
theorem. This gives the spectral resolution for a special class of bounded
linear operators on a Hilbert space: the compact, self-adjoint bounded linear
operators.
We recall that a self-adjoint operator T on a Hilbert space X is one such that ⟨T f, g⟩ = ⟨f, T g⟩ for all f, g ∈ X; it is compact if the image of the unit ball B(1) = {f ∈ X : ‖f‖ ≤ 1} is pre-compact, i.e. the closure of T B(1) is compact.
Theorem 3.1 (Hilbert-Schmidt). Suppose L 6= 0 is a self-adjoint compact
bounded operator on a Hilbert space H. Then
1. if λ ∈ C, λ 6= 0 and L − λ is not invertible, then λ is an eigenvalue of
L (L ‘has pure point spectrum’, see below);
2. the eigenspaces of L have finite dimension;
3. the eigenvalues of L can accumulate only at zero;
4. one of ±‖L‖ is an eigenvalue of L;

5. the range of L in H is spanned by eigenfunctions of L; if {uj}_{j≥0} is any maximal orthonormal system of eigenfunctions of L in H, then any f in the range of L has an absolutely and uniformly convergent series representation

f(z) = Σ_{j≥0} ⟨f, uj⟩ uj(z).
Proof. See ([1], theorem A.10, page 189)
We would like to apply Hilbert-Schmidt to ∆ acting on L. However we
haven’t yet defined ∆ as acting on the whole of L: and even if we restrict
ourselves to smooth functions f (for which ∆f makes sense), it is not obvious
that if f is square-integrable, then ∆f is too. So this is not possible yet.
But certainly ∆ acts on the subspace D of L, defined by
D := {f ∈ B : ∆f ∈ B}.
This subspace is dense in L. We will prove that ∆ is self-adjoint and nonpositive (lemma 3.2) on D, and therefore that it has a unique extension to
the whole of L, the Friedrichs extension (lemma 3.4).
Lemma 3.2. ∆ is self-adjoint on D. −∆ is nonnegative on D.
Proof. By Stokes’ theorem

∫_F ∆f ḡ dµz = ∫_F ( ∂²/∂x² + ∂²/∂y² ) f · ḡ dx dy = −∫_F ∇f · ∇ḡ dx dy + ∫_{∂F} (∂f/∂n) ḡ dl,

where the boundary ∂F is piecewise smooth, ∇f = [∂f/∂x, ∂f/∂y] is the gradient of f, dl is the Euclidean length element and ∂f/∂n is the outward normal derivative. We may write the boundary integral in hyperbolic invariant form

∫_{∂F} (∂f/∂n) ḡ dl = ∫_{∂F} (∂f/∂ñ) ḡ dl̃,

where ∂/∂ñ := y ∂/∂n and dl̃ := y⁻¹ dl. By invariance, equivalent parts of the boundary cancel each other out and so this integral vanishes. So

⟨−∆f, g⟩ = ∫_F ∇f · ∇ḡ dx dy = ⟨f, −∆g⟩

(the middle expression is unchanged when the roles of f and g are swapped and conjugated), so ∆ is self-adjoint, and

⟨−∆f, f⟩ = ∫_F |∇f|² dx dy ≥ 0,

so −∆ is nonnegative.
Corollary 3.3. Eigenvalues of −∆ are real and nonnegative.
Lemma 3.4 (Friedrichs Extension). Let H be a Hilbert space and G a dense subspace of H. Suppose T is a linear operator defined on G that is nonnegative and self-adjoint. Then T extends to a self-adjoint operator on the whole of H.
We now have a legitimate operator ∆ acting on the whole of L that is
what we think it is when acting on smooth functions. But unfortunately,
we cannot apply Hilbert-Schmidt: ∆ is not compact. Instead, our approach
will be to construct a compact, self-adjoint bounded linear operator L̂ with
dense range; and with a complete orthonormal system of eigenfunctions that
are also eigenfunctions of ∆.
Spectral Theory and Resolvents. L̂ will be defined in terms of the resolvents of ∆. The resolvent family of a linear operator T is the family of operators (T − λ)⁻¹ for λ ∈ C. Of course, this operator isn’t necessarily defined for all λ; the set of λ for which T − λ is not invertible is called the spectrum σ(T)
of T . The spectrum obviously includes all eigenvalues, the set of which is
called the point spectrum of T .
The spectrum is central to spectral theory. The Hilbert-Schmidt theorem consists essentially in proving that for a linear operator satisfying the
hypotheses, the spectrum consists only of the point spectrum and possibly
zero. The spectrum is generally easier to analyse than the point spectrum;
for example, every bounded linear operator has compact spectrum. In particular, there exists λ so that (T − λ)−1 is defined (take |λ| sufficiently big).
We define Rs := (∆ − s(s − 1))⁻¹, when possible. (It will be clear shortly why the resolvent is indexed by s rather than λ = s(s − 1).) This will turn out to be an invariant integral operator, a special type of linear operator with several useful and important properties.
Properties of Invariant Integral Operators. An integral operator L is defined by

(Lf)(z) := ∫_H k(z, w) f(w) dµw,

where dµ is the standard hyperbolic measure on H and k : H × H → C is a given function called the kernel of L. We don’t (yet) assume that f is automorphic, but we need that f and k are such that the integral converges absolutely.
For an invariant integral operator we require in addition that

k(gz, gw) = k(z, w) for all g ∈ G.

Since G is precisely the group of isometries on H, this means that k is a point-pair invariant: it depends only on the hyperbolic distance ρ(z, w) between z and w. So we may write k(z, w) = k(ρ(z, w)); L is a convolution.
Quick Digression on Point-Pair Invariants. It will be useful to write point-pair invariants not in terms of the hyperbolic distance itself, but in terms of a different point-pair invariant u(z, w), defined by

cosh ρ(z, w) = 1 + 2u(z, w).

Since

ρ(z, w) = log [ (|z − w̄| + |z − w|) / (|z − w̄| − |z − w|) ]

(where | · | is the Euclidean distance), we have

u(z, w) = |z − w|² / (4 ℑz ℑw).

So we may write k(z, w) = k(u(z, w)). This will be used later.
Back to invariant integral operators. These operators are closely connected with ∆. We have
Lemma 3.5. The invariant integral operators commute with ∆.
Proof. We will use geodesic polar coordinates. These are derived from the
Cartan decomposition of G
G = KAK
where K is the set of rotations and A the set of diagonal matrices:

K = { k(θ) = ( cos θ  sin θ ; −sin θ  cos θ ) : 0 ≤ θ < π },

A = { a(α) = ( √α  0 ; 0  1/√α ) : α > 0 }.

(To see this, multiply g ∈ G on the left by k₁ ∈ K to bring g to a symmetric matrix g₁ = k₁g; then by conjugation in K the symmetric matrix g₁ can be brought to a diagonal matrix a = kg₁k⁻¹.)
G acts transitively on H so every z ∈ H may be written in the form z = gi for some g ∈ G. Write g = k(φ)a(e⁻ʳ)k(θ) (r ≥ 0); then z = k(φ)e⁻ʳi since K fixes i. Note ρ(gi, i) = ρ(k(φ)e⁻ʳi, i) = ρ(e⁻ʳi, i) = r, so r is the hyperbolic distance from i to gi. This is the first version of the geodesic polar coordinates. The second version is given in terms of u instead of r; recall that cosh r = 1 + 2u. In geodesic polar coordinates,

∆ = u(u + 1) ∂²/∂u² + (2u + 1) ∂/∂u + [1/(16u(u + 1))] ∂²/∂φ².
We are now set up to prove the lemma. Suppose L is an invariant integral operator given by invariant kernel k, so k(z, w) is a smooth point-pair invariant on H × H. Then we have

∆z k(z, w) = ∆w k(z, w).

Indeed, using geodesic polar coordinates centred at w, we get

∆z k(z, w) = u(u + 1) k″(u) + (2u + 1) k′(u),

and using geodesic polar coordinates centred at z we get the same expression for ∆w k(z, w).

So

∆Lf(z) = ∫_H ∆z k(z, w) f(w) dµw = ∫_H ∆w k(z, w) f(w) dµw = ∫_H k(z, w) ∆w f(w) dµw = L∆f(z),

where the last-but-one equality holds since ∆w is symmetric (∗).

(∗): The proof that ∆w is symmetric goes as in the proof of lemma 3.2.
Theorem 3.6. Every eigenfunction of ∆ is also an eigenfunction of all invariant integral operators with kernel k(u) in Cc∞(R⁺). Conversely, if f is an eigenfunction of all such integral operators, then f is also an eigenfunction of ∆.
If we restrict the domain of L to automorphic functions then we may write

(Lf)(z) = ∫_F K(z, w) f(w) dµw,

where F is our standard fundamental domain for Γ on H and

K(z, w) := Σ_{γ∈Γ} k(z, γw).

This new kernel is called the automorphic kernel.
These operators are especially convenient on C, as we can form the ‘compact version’ of any such operator subject to certain conditions on the kernel,
as we shall see later. This is an essential ingredient of the proof and the
reason why we restrict ourselves to C in the spectral resolution of ∆, rather
than the whole of L.
We check that L does indeed act on C:
Lemma 3.7. An invariant integral operator L maps the subspace C of L to
itself:
L : C → C.
Proof. Let f ∈ C and g = Lf. With z = iy,

cP g(y) = ∫₀¹ g(z + t) dt = ∫₀¹ ∫_H k(z + t, w) f(w) dµw dt
= ∫₀¹ ∫_H k(z + t, w + t) f(w + t) dµw dt
= ∫_H k(z, w) ∫₀¹ f(w + t) dt dµw   since k is invariant
= ∫_H k(z, w) cP f(ℑw) dµw = 0,

since cP f = 0.
Before showing how to construct the compactification L̂ of L, we prove that −Rs is an invariant integral operator.

The Resolvent is an Invariant Integral Operator. Let

Gs(u) := (1/4π) ∫₀¹ (ξ(1 − ξ))^{s−1} (ξ + u)^{−s} dξ.   (3.1)

Suppose s ∈ C with ℜs > 1. Let −Ts be the invariant integral operator with kernel Gs, i.e.

−(Ts f)(z) := ∫_H Gs(u(z, w)) f(w) dµw

(with u as defined above). Then

Theorem 3.8. If f is smooth and bounded on H, then

(∆ + s(1 − s)) Ts f = f.

Proof. See ([1], theorem 1.17, page 32).

So Ts is the right inverse of ∆ + s(1 − s) and so coincides with Rs when Rs is defined (and ℜs > 1).
The kernel Gs has the following properties, which we will need later:

Lemma 3.9. The integral (3.1) defining Gs converges absolutely for ℜ(s) = σ > 0. It gives a function Gs(u) on R⁺ which satisfies

(∆ + s(1 − s)) Gs = 0.

Moreover, Gs(u) satisfies the following bounds:

Gs(u) = (1/4π) log(1/u) + O(1)   as u → 0;   (3.2)
G′s(u) = −(4πu)⁻¹ + O(1)   as u → 0;   (3.3)
Gs(u) ≪ u^{−σ}   as u → +∞.   (3.4)
Proof. See ([1], lemma 1.7, page 25).
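For s = 2 the integral (3.1) can be evaluated in closed form, which makes the logarithmic behaviour (3.2) visible: substituting t = ξ + u one finds G₂(u) = (1/4π)[(1 + 2u) log((1 + u)/u) − 2], whose leading term as u → 0 is (1/4π) log(1/u). A numerical cross-check of this closed form (our own derivation) against direct quadrature of (3.1):

```python
import math

def G2_integral(u, n=200000):
    # midpoint quadrature of (3.1) at s = 2:
    # (1/4π) ∫_0^1 ξ(1-ξ) (ξ+u)^{-2} dξ
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        xi = (k + 0.5) * h
        total += xi * (1 - xi) / (xi + u) ** 2
    return total * h / (4 * math.pi)

def G2_closed(u):
    # closed form obtained by the substitution t = ξ + u
    return ((1 + 2 * u) * math.log((1 + u) / u) - 2) / (4 * math.pi)

print(G2_integral(0.5), G2_closed(0.5))
```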
We need one final ingredient to define our operator: given an invariant integral operator L acting on C, whose kernel k is smooth and compactly supported, we can form a compact invariant integral operator L̂ on C with range identical to that of L.

Compactification of Invariant Integral Operators on C. We will construct an integral operator of the form

(T f)(z) = ∫_F κ(z, w) f(w) dµw,

where the kernel κ is bounded on F × F. Then in particular

∫_F ∫_F |κ(z, w)|² dµz dµw < ∞,

i.e. κ ∈ L²(F × F). Thus the integral operator on F with kernel κ is a Hilbert–Schmidt integral operator, and is therefore compact.
Lemma 3.10. Hilbert-Schmidt integral operators are compact.
Proof. See ([3], exercise 4.15, page 106)
We would like to take κ = K, the automorphic kernel. But unfortunately K isn’t bounded on F × F, no matter how small we take the support of k. This is because as z, w approach infinity, the number of terms which count in K(z, w) = Σ_γ k(z, γw) grows to infinity. To get a bounded kernel κ we subtract from K(z, w) the principal part

H(z, w) = Σ_{γ∈Γ∞\Γ} ∫_{−∞}^{+∞} k(z, γw + t) dt

and we define

K̂(z, w) := K(z, w) − H(z, w).
This new automorphic kernel defines an operator L̂ on L provided we can
prove that this integral always converges. This is the case:
Lemma 3.11. The function w 7→ H(z, w) is bounded and invariant under
Γ, i.e.
H(z, ·) ∈ B.
In particular the integral defining (Lf )(z) converges for all z.
Proof. We estimate

H(z, w) = Σ_{γ∈Γ∞\Γ} ∫_{−∞}^{∞} k(z, t + γw) dt.

k(u) has compact support, so the range of integration is restricted by

u(z, t + γw) = |z − t − γw|² / (4 ℑz ℑγw) ≤ c

for a constant c, so |z − t − γw|² ≤ 4c ℑz ℑγw. This implies that |ℑ(z) − ℑ(γw)| is bounded. So each integral is bounded by O(ℑ(z)), and by formula (2.2) the number of terms that count is bounded by 1 + O(1/ℑ(z)). So

H(z, w) ≤ (1 + O(1/ℑz)) · O(ℑz) = O(ℑz).

In particular H(z, ·) is bounded. It is clearly automorphic.
We claim that L̂ is the ‘compactified’ operator we want: it is compact
and it acts like L on C:
Lemma 3.12. For f ∈ C we have Lf = L̂f .
Lemma 3.13. The kernel K̂(z, w) is bounded on F × F ; therefore L̂ is a
compact operator.
Proof of 3.12. We shall prove that H(z, ·) is orthogonal to the space C, i.e.

⟨H(z, ·), f⟩ = 0   if f ∈ C.

We do this directly by unfolding the integral (letting w = x + iy):

⟨H(z, ·), f⟩ = Σ_{γ∈Γ∞\Γ} ∫_F ∫_{−∞}^{∞} k(z, γw + t) dt f̄(w) dµw
= ∫₀^∞ ∫₀¹ ∫_{−∞}^{∞} k(z, w + t) f̄(w) dt (dx dy/y²)   as in lemma (2.2)
= ∫₀^∞ [ ∫_{−∞}^{∞} k(z, t + iy) dt ] [ ∫₀¹ f̄(x + iy) dx ] (dy/y²)
= ∫₀^∞ [ ∫_{−∞}^{∞} k(z, t + iy) dt ] cP(f̄)(y) (dy/y²) = 0,

since cP(f̄) is the conjugate of cP(f) = 0.
Proof of 3.13. This uses a similar estimate to the one in the previous proof.
See ([1], proposition 4.5, page 67).
Note that 3.12 crucially requires L to be acting on C, rather than the whole of L – which is why our spectral resolution restricts to this smaller space.

For simplicity we have assumed that the kernel k is compactly supported; however the key results above (3.12, 3.13, 3.11) remain true if we assume only that k decays sufficiently rapidly:

k(u), k′(u) ≪ (u + 1)⁻².   (3.5)
Granted this we now have all of the ingredients to prove the spectral
resolution of ∆ on C.
Spectral Resolution of ∆ on C. As promised we define L in terms of the resolvent. Our first guess would be L = Rs for s ≥ 2. The range is dense in C: given f ∈ C ∩ D (dense in C), let g := (∆ + s(1 − s))f – then f = Rs g. Since s ≥ 2, the kernel decays sufficiently rapidly as u → ∞ (condition 3.5, lemma 3.9). But unfortunately we cannot form the compactification of L because Gs is singular at zero (lemma 3.9).

To solve this problem we take L = Rs − Ra, a > s ≥ 2, instead. This kills the singularity – the kernel k(u) = Ga(u) − Gs(u) is smooth; and it still decays according to (3.5). So we may form the compactified operator L̂. The range is still dense: using the Hilbert formula

L = Rs − Ra = (a(1 − a) − s(1 − s)) Rs Ra

(proof: apply Rs to the identity I − (∆ + s(1 − s))Ra = ((∆ + a(1 − a)) − (∆ + s(1 − s)))Ra = (a(1 − a) − s(1 − s))Ra), we let

g = (a(1 − a) − s(1 − s))⁻¹ (∆ + a(1 − a))(∆ + s(1 − s)) f

for f ∈ C ∩ D.
Summarising, the compactified operator L̂ has the following properties:
(a) It is compact;
(b) It is self-adjoint, because the kernel is real;
(c) Its range is dense.

So we may apply the Hilbert–Schmidt theorem to L̂ and deduce

Proposition 3.14. C is spanned by eigenfunctions of L̂. The eigenvalues are bounded and accumulate only at zero, and the eigenspaces are finite-dimensional.

Let {uj}_{j≥0} be any complete orthonormal set of eigenfunctions of L̂. Then any f ∈ C has expansion

f = Σ_{j≥0} ⟨f, uj⟩ uj(z)

which converges absolutely and uniformly on compacts.
We claim that there is a complete orthonormal system {uj}_{j≥0} of eigenfunctions of L̂ that are also eigenfunctions of ∆. Indeed, let Cλ be the eigenspace of L̂ corresponding to λ. ∆ and L̂ commute, since L̂ is an invariant integral operator (lemma 3.5); so ∆ maps Cλ to itself. We know Cλ is finite-dimensional and so the action of ∆ on it is expressible as a matrix; it will be a Hermitian matrix because ∆ is self-adjoint. By linear algebra, Hermitian matrices are diagonalisable: in other words Cλ is spanned by eigenfunctions of ∆, say {uλj}j. The set {uλj}λ,j, ranging over all λ, consists of simultaneous eigenfunctions of ∆ and L̂ and spans every Cλ – so it spans C.
We conclude
Theorem 3.15 (Spectral Resolution of ∆ on C). C is spanned by eigenfunctions of ∆. Let {uj}_{j≥0} be any complete orthonormal set of eigenfunctions of ∆. Then any f ∈ C has expansion

f = Σ_{j≥0} ⟨f, uj⟩ uj(z)

which converges absolutely and uniformly on compacts.
4 Continuous Part

We spectrally decompose the pseudo-Eisenstein series using the Mellin transform, a variant of the Fourier transform, which we recall.
The Mellin Transform. For functions f in L¹(R) we define the Fourier transform of f:

f̂(ξ) := ∫_{−∞}^{∞} f(x) e^{−2πiξx} dx.

If f̂ is also in L¹(R) then we have the Fourier inversion theorem:

f(x) = ∫_{−∞}^{∞} f̂(ξ) e^{2πiξx} dξ,
see ([4], 9.11, page 185). We may rewrite this as the identity

f(x) = (1/2π) ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} f(t) e^{−itξ} dt ] e^{iξx} dξ   (4.1)

by replacing ξ with ξ/(2π).
If we assume that f is compactly supported (f ∈ Cc∞(R)) then by the Paley–Wiener theorem (see [4], 19.3, page 375), f̂(ξ) extends to an entire function which is of rapid decay along horizontal lines. So by Cauchy’s theorem we have

f(x) = ∫_{−∞+iτ}^{∞+iτ} f̂(ξ) e^{2πiξx} dξ

for any fixed τ, since the integral along vertical line segments [x, x + iτ] tends to zero as x tends to ±∞.
A variant of this is the Mellin transform. Suppose F ∈ Cc∞(0, ∞) and let f(x) = F(eˣ). (Then f ∈ Cc∞(R) too, so in L¹(R).) Write y = eˣ and r = eᵗ. Then the identity becomes

F(y) = (1/2π) ∫_{−∞}^{∞} [ ∫₀^∞ F(r) r^{−iξ} (dr/r) ] y^{iξ} dξ.
So we define the Mellin transform MF of F as

MF(iξ) := ∫₀^∞ F(r) r^{−iξ} (dr/r)

and as above we extend to the whole complex plane (s ∈ C):

MF(s) := ∫₀^∞ F(r) r^{−s} (dr/r).
Identity (4.1) becomes

F(y) = (1/2πi) ∫_{−i∞}^{+i∞} MF(s) yˢ ds

and like before this remains true integrating along any vertical line [σ − i∞, σ + i∞], i.e.

F(y) = (1/2πi) ∫_{σ−i∞}^{σ+i∞} MF(s) yˢ ds.
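The forward transform is easy to check numerically. For example, for F(r) = e^{−r} (not compactly supported, but the integral MF(s) = ∫₀^∞ F(r) r^{−s} dr/r still converges for ℜ(s) < 0) one has MF(−n) = Γ(n) = (n − 1)! for integers n ≥ 1. A minimal sketch (quadrature parameters are our own choices):

```python
import math

def mellin(F, s, h=1e-3, upper=60.0):
    # MF(s) = ∫_0^∞ F(r) r^{-s} dr/r, midpoint rule on [0, upper]
    n = int(upper / h)
    return sum(F((k + 0.5) * h) * ((k + 0.5) * h) ** (-s - 1) for k in range(n)) * h

# M[e^{-r}](-3) = ∫_0^∞ e^{-r} r^2 dr = Γ(3) = 2
print(mellin(lambda r: math.exp(-r), -3))
```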
We will apply the Mellin inversion to the ϕ ∈ Cc∞(R⁺) defining the pseudo-Eisenstein series

Ψϕ(z) = Σ_{γ∈Γ∞\Γ} ϕ(ℑ(γz)).

By Mellin inversion

ϕ(y) = (1/2πi) ∫_{σ−i∞}^{σ+i∞} Mϕ(s) yˢ ds,

so

Ψϕ(z) = (1/2πi) Σ_{γ∈Γ∞\Γ} ∫_{σ−i∞}^{σ+i∞} Mϕ(s) (ℑ(γz))ˢ ds.
We wish to swap the order of the sum and the integral. We can do this if the double integral is absolutely convergent, by Fubini’s theorem; this is the case for σ sufficiently large, say σ > 1. We obtain

Ψϕ(z) = (1/2πi) ∫_{σ−i∞}^{σ+i∞} Mϕ(s) Es(z) ds,

where Es is the Eisenstein series defined for ℜ(s) > 1 by

Es(z) := Σ_{γ∈Γ∞\Γ} ℑ(γz)ˢ.
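The cosets Γ∞\Γ are parametrised, up to sign, by coprime bottom rows (c, d), so Es can be summed directly. The truncated sketch below (the truncation bound N and the test point are our own choices) also checks ∆Es = s(s − 1)Es by finite differences; this holds term by term, since each summand is ℑ(γz)ˢ:

```python
from math import gcd

def eisenstein(z, s, N=40):
    # E_s(z) = Σ_{Γ∞\Γ} Im(γz)^s = Σ_{(c,d) coprime, up to sign} y^s / |cz+d|^{2s}
    y, total = z.imag, 0.0
    for c in range(N + 1):
        for d in range(-N, N + 1):
            if c == 0:
                if d != 1:
                    continue  # c = 0 gives the identity coset, counted once
            elif gcd(c, abs(d)) != 1:
                continue
            total += y ** s / abs(c * z + d) ** (2 * s)
    return total

# finite-difference check of ∆E_s = s(s-1)E_s at s = 2, i.e. ∆E_2 = 2 E_2
z, h = complex(0.2, 1.3), 1e-3
lap = z.imag ** 2 * (
    eisenstein(z + h, 2) + eisenstein(z - h, 2)
    + eisenstein(z + 1j * h, 2) + eisenstein(z - 1j * h, 2)
    - 4 * eisenstein(z, 2)
) / h ** 2
print(lap, 2 * eisenstein(z, 2))  # the two values nearly agree
```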
The series is defined for ℜ(s) > 1 because it is then absolutely convergent; but this is not necessarily the case for ℜ(s) ≤ 1. But in fact Es has meromorphic continuation to the whole complex plane:

Theorem 4.1. Es(z) has meromorphic continuation to the whole complex plane as a function of s. It has a single pole in the half plane ℜ(s) ≥ 1/2
at s = 1, and this is a simple pole. Furthermore the residue at 1,

Res_{s=1} Es(z),

is a constant independent of z.
Proof. See ([2], corollary 7.2.11 and proof, pages 286-287).
These Eisenstein series are the basic eigenpackets with which we decompose the pseudo-Eisenstein series: since

∆(yˢ) = s(s − 1) yˢ

we have

∆Es = s(s − 1) Es.
This is the first version of the spectral resolution – expressing Ψϕ in terms
of a continuous sum (integral) of Eisenstein series (the eigenpackets) with
coefficients given in terms of Mϕ. But we would like to express the coefficients in terms of Ψϕ itself, not just the constituent function ϕ. This is
what we do next.
First we use theorem (4.1) to move the line of integration to the left to σ = 1/2, for reasons that will be clear shortly. By the residue theorem and (4.1),

Ψϕ(z) = (1/2πi) ∫_{1/2−i∞}^{1/2+i∞} Mϕ(s) Es(z) ds + Res_{s=1} (Mϕ(s) · Es(z)).
Recall cP(f). We shall find the spectral resolution in terms of McP(Ψϕ). There will be two steps to this:

Step 1. For f ∈ Cc∞(Γ\H),

∫_F Es(z) f(z) dµz = McP(f)(1 − s).

Step 2. For a pseudo-Eisenstein series Ψϕ,

∫_F Es(z) Ψϕ(z) dµz = Mϕ(1 − s) + cs Mϕ(s),

where cs is meromorphic in s.

These two steps allow us to conclude (taking f = Ψϕ in Step 1 and replacing s by 1 − s) that

McP Ψϕ(s) = Mϕ(s) + c_{1−s} Mϕ(1 − s).   (4.2)

Formula (4.2) will allow us to deduce the final version of the spectral resolution from the first one – but first we prove the two steps.
Proof of Step 1.

∫_F Es(z) f(z) dµz = ∫_F Σ_{γ∈Γ∞\Γ} ℑ(γz)ˢ f(z) dµz
= ∫_{0≤ℜ(z)≤1} ℑ(z)ˢ f(z) dµz
= ∫₀^∞ ∫₀¹ yˢ f(x + iy) dx (dy/y²)
= ∫₀^∞ yˢ cP f(y) (dy/y²)
= ∫₀^∞ y^{−(1−s)} cP f(y) (dy/y)
= McP f(1 − s).
Properties of Eisenstein Series. To prove Step 2 we derive some properties of the Eisenstein series. ∆ commutes with the map f ↦ cP f, since we may interchange the order of differentiation and integration,

∆ ∫₀¹ f(x + iy) dx = ∫₀¹ ∆f(x + iy) dx,

because f is smooth and the ∂²/∂x² term integrates to zero over a full period. Thus cP Es is a function u(y) of y satisfying

y² (∂²/∂y²) u(y) = s(s − 1) u(y).

For s ≠ 1/2 this equation has two linearly independent solutions yˢ and y^{1−s}, and so for meromorphic functions as and cs

cP Es = as yˢ + cs y^{1−s}.
By directly expanding the integral

cP Es(y) = ∫₀¹ Es(x + iy) dx

we deduce that as = 1. An important fact about Es is that it has a ‘universal property’:
Theorem 4.2. The equations

∆w = s(s − 1) w,
( y ∂/∂y − (1 − s) ) cP w = (2s − 1) yˢ

uniquely determine w = Es.

Granted this, one readily checks that both Es and c⁻¹_{1−s} E_{1−s} satisfy the equations and so are identical by uniqueness. We obtain the functional equation

E_{1−s} = c_{1−s} Es.   (4.3)
Proof of Step 2. Proceeding as in the proof of Step 1,

∫_F Es(z) Ψϕ(z) dµz = ∫_F Es(z) Σ_{γ∈Γ∞\Γ} ϕ(ℑγz) dµz
= ∫_{0≤ℜ(z)≤1} Es(z) ϕ(ℑz) dµz
= ∫₀^∞ ∫₀¹ Es(x + iy) ϕ(y) dx (dy/y²)
= ∫₀^∞ cP(Es)(y) ϕ(y) (dy/y²).

Substituting cP(Es)(y) = yˢ + cs y^{1−s} we get

∫₀^∞ (y^{s−1} + cs y^{−s}) ϕ(y) (dy/y) = Mϕ(1 − s) + cs Mϕ(s).
Spectral Resolution of ∆ on E. Finally we use formula (4.2) to obtain the spectral resolution in terms of McP Ψϕ(s):

Ψϕ(z) − Res_{s=1}(Mϕ(s) · Es(z))
= (1/2πi) ∫_{1/2−i∞}^{1/2+i∞} Mϕ(s) Es(z) ds
= (1/4πi) ( ∫_{1/2−i∞}^{1/2+i∞} Mϕ(s) Es(z) ds + ∫_{1/2−i∞}^{1/2+i∞} Mϕ(1 − s) E_{1−s}(z) ds )   (∗)
= (1/4πi) ∫_{1/2−i∞}^{1/2+i∞} ( Mϕ(s) Es(z) + Mϕ(1 − s) E_{1−s}(z) ) ds
= (1/4πi) ∫_{1/2−i∞}^{1/2+i∞} ( Mϕ(s) Es(z) + Mϕ(1 − s) c_{1−s} Es(z) ) ds   by (4.3)
= (1/4πi) ∫_{1/2−i∞}^{1/2+i∞} ( Mϕ(s) + c_{1−s} Mϕ(1 − s) ) Es(z) ds
= (1/4πi) ∫_{1/2−i∞}^{1/2+i∞} McP Ψϕ(s) Es(z) ds
= (1/2πi) ∫_{1/2+i0}^{1/2+i∞} McP Ψϕ(s) Es(z) ds
= (1/2πi) ∫_{1/2+i0}^{1/2+i∞} ⟨Ψϕ, E_{1−s}⟩ Es(z) ds.
Note that the two integrals in the sum (∗) are the same for ℜ(s) = 1/2 – this is why we moved the line of integration to the left before (i.e. to σ = 1/2).
Evaluation of the Residue Term. We finish this section by identifying the residue term Res_{s=1}(Mϕ(s) · Es(z)) = Mϕ(1) · Res_{s=1} Es(z). We will use the fact that Res_{s=1} Es(z) is constant in z, say k: see theorem (4.1). To compute Mϕ(1), we proceed as in the proofs of Steps 1 and 2, but in reverse:

Mϕ(1) = ∫₀^∞ ϕ(y) y⁻¹ (dy/y)
= ∫₀^∞ ∫₀¹ ϕ(ℑ(x + iy)) dx (dy/y²)
= ∫_{0≤ℜz≤1} ϕ(ℑz) dµz
= ∫_F Ψϕ(z) dµz = ⟨Ψϕ, 1⟩.

Now constant functions are orthogonal to functions of the form

(1/2πi) ∫_{1/2+i0}^{1/2+i∞} f(ℑ(s)) Es(z) ds

for f in Cc∞(R⁺), so pairing the spectral expansion of Ψϕ with the constant function 1 gives

⟨Ψϕ, 1⟩ = ⟨Ψϕ, 1⟩ · k · ⟨1, 1⟩,

and so k = 1 if we assume that the hyperbolic measure is normalised (so that ⟨1, 1⟩ = 1 and ⟨Ψϕ, 1⟩ = ∫_F Ψϕ dµz). In this case the spectral theorem becomes

Ψϕ(z) = (1/2πi) ∫_{1/2+i0}^{1/2+i∞} McP Ψϕ(s) Es(z) ds + ⟨Ψϕ, 1⟩ · 1.
5 Spectral Theorem. Proof of Proposition

Combining the results of the previous two sections we obtain the Spectral Theorem:
Theorem 5.1 (Spectral Theorem). A function f ∈ L has spectral resolution in terms of eigenfunctions of ∆:

f = Σ_{j≥0} ⟨f, uj⟩ uj + (1/2πi) ∫_{1/2+i0}^{1/2+i∞} ⟨f, E_{1−s}⟩ Es ds + ⟨f, 1⟩ · 1,

where the Es are the Eisenstein series and the uj a complete orthonormal system of eigenfunctions of ∆ in C. The sum and integral converge absolutely and uniformly on compacts.
We recall the proposition we set out to prove in the beginning.

Proposition 5.2. Let ⟨·, ·⟩ be the inner product on L induced by the normalised hyperbolic measure on H:

⟨f, g⟩ := (1/V) ∫_F f(z) ḡ(z) dµz,

where V is the area of F. Then

⟨f, g⟩ = lim_{y→0⁺} ∫₀¹ f(x + iy) ḡ(x + iy) dx.
Proof. We spectrally decompose h := f ḡ:

h = Σ_{j≥0} ⟨h, uj⟩ uj + (1/2πi) ∫_{1/2+i0}^{1/2+i∞} ⟨h, E_{1−s}⟩ Es ds + ⟨h, 1⟩ · 1

and integrate each of the three terms on the right hand side separately. The cusp forms integrate out to zero, by definition:

∫₀¹ Σ_j ⟨h, uj⟩ uj(x + iy) dx = Σ_j ⟨h, uj⟩ ∫₀¹ uj(x + iy) dx = 0,

since every term is zero.
For the Eisenstein part we have

∫₀¹ ∫_{1/2+i0}^{1/2+i∞} ⟨h, E_{1−s}⟩ Es(x + iy) ds dx = ∫_{1/2+i0}^{1/2+i∞} ⟨h, E_{1−s}⟩ ∫₀¹ Es(x + iy) dx ds
= ∫_{1/2+i0}^{1/2+i∞} ⟨h, E_{1−s}⟩ cP Es(y) ds.

We know cP Es(y) = yˢ + cs y^{1−s}, so the integral becomes

∫_{1/2+i0}^{1/2+i∞} ⟨h, E_{1−s}⟩ (yˢ + cs y^{1−s}) ds.

We will show that this integral is bounded by a multiple of √y and therefore tends to zero as y does. Indeed if ℜ(s) = 1/2, then |yˢ| = |y^{1−s}| = √y. Furthermore, by the functional equation (4.3),

E_{1−s} = c_{1−s} Es = c_{1−s} cs E_{1−s}

and so

cs c_{1−s} = 1.

Since E_s̄ = Ēs we have c_s̄ = c̄s, and so |c_{1/2+it}|² = c_{1/2+it} c_{1/2−it} = 1 for real t; in other words |cs| = 1 if ℜ(s) = 1/2. Therefore

|yˢ + cs y^{1−s}| ≤ |yˢ| + |cs| |y^{1−s}| ≤ 2√y.
Thus the integral is bounded by

2√y ∫_{1/2+i0}^{1/2+i∞} |⟨h, E_{1−s}⟩| ds,

which tends to zero as y does, as claimed.
Finally the constant term integrates to

lim_{y→0} ∫₀¹ ⟨h, 1⟩ · 1 dx = ⟨h, 1⟩ = ⟨f ḡ, 1⟩ = ⟨f, g⟩.

Putting these three parts together we get the result.
References
[1] Henryk Iwaniec, Spectral Methods of Automorphic Forms. American
Mathematical Society, 2002.
[2] Toshitsune Miyake, Modular Forms. Springer, 1989.
[3] Walter Rudin, Functional Analysis. McGraw-Hill, 1973.
[4] Walter Rudin, Real and Complex Analysis. third edition, McGraw-Hill,
1987.