>>: All right. So the final talk of the morning will be by Kristin one of the
organizers on computing genus 2 curves. Thank you.
>> Kristin Lauter: Thank you. So I won't say thank you to the organizers for
inviting me because unfortunately I'm giving this talk on behalf of Tonghai Yang
who was supposed to be here giving this talk on our joint work, and he's had
some unfortunate medical problems. So I'm very sorry that he can't be giving the
talk.
But I am very happy to talk to you about this work, which I think is a lot of fun.
And in the second half of my talk, well, in the end of my talk, anyway, I'm going to
actually talk about the continuation of this project which is joint work with Michael
Naehrig, who is a post-doctoral researcher in our group.
So let me just give you the short version of this talk, the one sentence version.
That is I'm going to show you how to compute genus 2 curves from two invariants
on the Hilbert moduli space instead of from three invariants on the Siegel moduli
space.
And I'm going to try to convince you that this is a good thing to do and give you a
bunch of examples and show you the kinds of research that it's leading to.
So let me just start with some context for the more general, broader audience
here. Suppose that you want to do discrete logarithm based cryptography, so
you have some group and you're going to want to either do a protocol in
cryptography, which is based on the discrete logarithm problem or maybe some
pairing based protocol.
So as we all know from the title of this conference, you can do that on an elliptic
curve with the group of points on an elliptic curve and the pairing on an elliptic
curve. But you can also do that on the Jacobian of a higher genus curve. For
those of you who aren't familiar with what the Jacobian is, you can think of it as
just being a group which is associated to a curve. So when you move away from
the miracle of elliptic curves which are both curves and groups, you have nice
algebraic curves but they're not groups anymore, there's no group law to
compose elements and do cryptography.
So in the higher genus case, you have to associate a group to the curve. And
that's what we're going to call the Jacobian. And I'm not going to go into what the
group law is and how you do the group law operations and the pairing and all of
that. You just have to kind of believe me that those all work very efficiently, even
in genus 2 -- people have studied that very intensively in the last five to 10 years.
We're just going to think of this as the problem of actually constructing a curve on
which you would want to build a cryptosystem based on the hardness of the
discrete logarithm problem.
So why would you do this instead of using an elliptic curve? So the very short
answer is that if you take an elliptic curve which has coefficients in a finite field
F_p, the size of the group of points on the elliptic curve is roughly p -- it's within
plus or minus square root of p of p. Whereas if you take the group of points on
the Jacobian of a genus 2 curve, the size of the group is roughly p squared. So
the point is that the underlying field where you define your curve can be much
smaller if you do genus 2 curve cryptography.
So for example, suppose you want a group of size around 2 to the 256 -- the
security would then be roughly the square root of this. For elliptic curves you
need a field of about that size, but for genus 2 curves you can pick a much
smaller field, a field of half the size.
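As a rough back-of-the-envelope check of this field-size comparison (a sketch, not part of the talk; the square-root-of-p Weil error terms are ignored):

```python
# Rough size bookkeeping: a genus-g Jacobian over F_p has about p**g
# elements, so for a fixed target group size the base field shrinks by
# a factor of g. (Weil error terms of size O(sqrt(p)) are ignored.)
def group_bits(field_bits, genus):
    """Approximate bit-length of the Jacobian group order."""
    return genus * field_bits

def field_bits_needed(target_group_bits, genus):
    """Approximate field bit-length needed for a target group size."""
    return target_group_bits // genus
```

So a roughly 2^256-element group needs a 256-bit field in genus 1 but only a 128-bit one in genus 2, which is the point about field elements fitting into fewer machine words.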
So as we move towards architectures like 64 bit architecture and beyond, maybe,
it could be attractive if we get to the point where field elements could fit in a
single word, that kind of thing can really contribute to efficiency. So in particular
at one of the more recent ECC workshops, Dan Bernstein and Tanja Lange gave
a series of talks which compared the efficiency of genus 2 and genus 1 curves
and found that they're extremely competitive at the same security level.
So the applications are the usual kinds of cryptographic things, and then, as
extensions, all kinds of pairing protocols as well, which we heard about in the
last talk.
Okay. So the big challenge, though, as with many of the different pairing based
crypto systems, is to actually instantiate your system. So how do you build a
genus 2 curve which has, let's say, a Jacobian of prime order?
Now, we don't actually need exactly prime order -- it could have a very small
cofactor, so roughly speaking it can be a little flexible. But we'd really like there
to be a very large cyclic subgroup of our Jacobian whose order is very close to
the total order of the group of points on the Jacobian.
So the problem is that this problem is hard. With elliptic curves, one thing that
you can do -- since we've heard a number of talks about how lightning fast it is to
count points on elliptic curves -- is randomly choose curves and count until you
find one that has a nice group order for the number of points on the elliptic
curve.
Well, this is still currently a pretty slow thing to do for genus 2 curve
cryptography at the minimum security levels that we're looking at. There has
been some new work this year by Gaudry and some of his coauthors improving
the point counting methods for genus 2 and getting to that security level. But
still, I don't think you could call it lightning fast to randomly generate genus 2
curves and count the number of points.
So I mean we're getting there from that side as well. But I'm going to tell you
about approaching the problem from the other side, which is actually constructing
a curve which you know ahead of time is going to have a certain order for the
group order of its Jacobian.
So yesterday we heard from Bianca Viray, who essentially outlined this method
very quickly in the beginning of her talk as a generalization of the genus 1 CM
method that Francois Morain talked about.
So this is just a very high level version of the algorithm. You kind of decide how
many points you want. Let's say you want N points on the Jacobian. In fact,
for genus 2 that's not enough to determine the entire zeta function. You actually
need the number of points on the curve over the base field FQ and over the
degree two extension. And that will determine the zeta function and in turn
determine the number of points on the Jacobian.
And given that information, what you can do is you can compute a CM field which
corresponds to this zeta function, and I'll just show you how to do that really
quickly. That's not the focus of the talk. The focus of the talk is to explain to you
what it means to compute modular invariants of what are
called CM points associated to this field K. And then from this reconstruct the
curve via Mestre's algorithm.
So most of the talk is going to be focused on explaining how we do this
process, computing modular invariants associated to a field K.
So what is a CM field, first of all, and how is it related to the
number of points on the curve that you want? I'm just very quickly
going to show this calculation just because it's very, very simple, and I think it's
more convincing than just telling you oh, well it can be done.
So this is the analog of the following fact for elliptic curves: if an elliptic curve
has q plus 1, let's say plus or minus t, points, then t is the trace of the Frobenius
element, and because you know that the Frobenius element pi satisfies pi times
pi bar equals p, where p is the size of your underlying finite field, knowing the
trace and the norm means you know the whole characteristic polynomial. And
that determines a CM field, which in the genus 1 case is just an imaginary
quadratic field. In the genus 2 case what happens is that there are two pieces of
information -- not just the trace of Frobenius but another symmetric function in
the roots of the characteristic polynomial -- and this determines not an imaginary
quadratic field but a quartic CM field.
So a CM field is an imaginary quadratic extension of a totally real field.
So here we're talking about an imaginary quadratic extension of a totally real
quadratic field for genus 2. And so what is this field? Well, here it is. Let's say
we had picked N1 and N2 for the number of points on our curve over F_q and
over the degree two extension. The Jacobian would have N points, where N is
(N1 squared plus N2)/2 minus q. And if you set s1 equal to q + 1 - N1 and s2
equal to (s1 squared - q squared - 1 + N2)/2, then the quartic polynomial that is
satisfied by the Frobenius associated to this curve is
x^4 - s1 x^3 + s2 x^2 - q s1 x + q^2. And this determines a quartic CM field.
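This bookkeeping can be sketched in a few lines of Python. The formulas for s1 and s2 below are the standard relations coming from the zeta function (reconstructed here, since they were on the slide), and the final assertion checks them against the stated Jacobian order:

```python
def frobenius_data(q, N1, N2):
    """From #C(F_q) = N1 and #C(F_{q^2}) = N2 for a genus 2 curve,
    recover the coefficients s1, s2 of the Frobenius characteristic
    polynomial x^4 - s1*x^3 + s2*x^2 - q*s1*x + q^2, and #J(F_q)."""
    s1 = q + 1 - N1                        # trace of Frobenius
    s2 = (s1 ** 2 - q ** 2 - 1 + N2) // 2  # second symmetric function
    N = (N1 ** 2 + N2) // 2 - q            # Jacobian order, as in the talk
    # consistency: #J(F_q) equals the characteristic polynomial at 1
    assert N == 1 - s1 + s2 - q * s1 + q ** 2
    return s1, s2, N
```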
So what I've said so far is that we want to have a curve with a certain number of
points. And here I'm saying, well, if it had that number of points, it would have a
Frobenius satisfying this characteristic polynomial. Frobenius is both an
endomorphism and, as we're thinking of it here, an algebraic integer, and this
endomorphism is extra information for this group: J of C is an abelian group, but
here we're saying it has a large endomorphism ring. It actually has an
endomorphism pi, which is the Frobenius, in this field. Okay.
So that was all supposed to explain what it means to be a CM point associated
to a field K. So we're going to say a curve has CM by the ring of integers OK of
a quartic CM field K if OK embeds into the endomorphism ring.
So now the CM points associated to a field K are the isomorphism classes of --
really -- principally polarized abelian varieties which are the Jacobians of CM
curves.
So for much of the talk I'm going to be working on the moduli space of principally
polarized abelian surfaces. And what you should think of is that you can
associate to a curve its Jacobian, which is a point in this moduli space, and an
isomorphism of the Jacobians will also give you an isomorphism of the curves.
So people often talk interchangeably about a curve with CM by K or the
Jacobian of a curve with CM by K, and we really mean the same thing: OK
embeds in the endomorphism ring of the Jacobian of the curve.
Okay. So what I'm going to be concerned with is evaluating certain modular
functions on the moduli space at these CM points. And this is kind of a long
answer to one of the questions that was asked in Bianca Viray's talk yesterday:
if you have modular functions, what does it mean to take them mod P? Well,
that's what I would like to explain here. So the Siegel moduli space -- in this
case we're going to look at genus 2 -- parameterizes abelian surfaces with a
principal polarization, and Siegel gave this very nice description. Let Sp_2(Z),
which some people call Sp_4(Z), be the symplectic group for genus 2, consisting
of 4 by 4 integral matrices satisfying this relation. And then you have what we
call the Siegel upper half-plane, which instead of being the usual upper
half-plane is actually two-by-two matrices with entries in C whose imaginary part
is positive definite. So the imaginary part of this matrix should be positive
definite.
And the Siegel upper half-plane is acted upon by this symplectic group Sp_2(Z),
and the quotient is what we'll call the open Siegel modular threefold, the Siegel
moduli space.
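The defining relation for this symplectic group can be made concrete. A minimal sketch (4 by 4 matrices as nested lists; the particular block form of J is an assumption about the convention on the slide):

```python
# M is in Sp_2(Z) (alias Sp_4(Z)) exactly when M^T J M = J, where
# J is the standard block matrix [[0, I_2], [-I_2, 0]].
J = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]

def matmul(A, B):
    """4x4 matrix product for nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def is_symplectic(M):
    """Check the defining relation M^T J M == J."""
    MT = [[M[j][i] for j in range(4)] for i in range(4)]
    return matmul(matmul(MT, J), M) == J
```

For instance, the identity and any block matrix [[I, B], [0, I]] with B symmetric pass the check, while a generic diagonal matrix does not.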
Okay. And the way that this acts should be very familiar from the genus 1 case,
except that these are now two-by-two matrices -- the A's and the B's and the
taus and everything. And the nice part is that in this moduli space we can give
an explicit presentation for the CM points associated to a field K. This was
originally done by Spallek in '94, who was a student of Gerhard Frey, and she
did it only for the case where the real quadratic subfield of K has class number
one.
And this was extended a couple of years ago by Marco Streng in his thesis,
where he gave a nice characterization of what the CM points look like even if the
real quadratic field does not have class number one.
So I'm not giving you the description but it's something that you need to write
down if you want to evaluate the functions on those points. It's an important
aspect of the calculation.
Okay. So what Igusa did -- we saw these functions in Bianca's talk yesterday,
and we heard about what are called the Igusa class polynomials. Igusa defined
these modular functions on the moduli space, which essentially play the role --
I'm lying a little bit here, but they essentially play the role -- of the j function. For
elliptic curves the j function is a modular invariant in the sense that, at least over
an algebraically closed field, two elliptic curves are isomorphic if and only if their
j invariants are equal. And these, what we're going to call absolute Igusa
invariants, play that role for Jacobians of genus 2 curves as long as you stay
away from the set where the first invariant is zero. There are also some issues
in characteristic two and three, but outside of that set these play the role of the j
function.
So what we're going to do is use these to form three Igusa class polynomials.
The way we do this is we run through all of these CM points -- I told you we
know how to figure out a set of representatives for all the isomorphism classes
of curves which have CM by K -- and evaluate those Siegel modular functions at
them. And there are a couple of points. First of all, you have to evaluate these
functions to very high precision, so that when you multiply this out, this is
supposed to be a polynomial which has coefficients not in Z, as it does in the
genus 1 case, but in Q.
And so you need very high precision. The whole issue that Bianca's talk
yesterday focused on was the fact that these denominators can be fairly large,
and we need an idea of the size of the denominators in order to work backwards
and estimate the amount of precision we need to evaluate these, so that we can
actually recognize the rational numbers that come out.
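That last step -- recognizing a rational coefficient from a floating-point approximation once a denominator bound is known -- can be sketched with Python's standard library (a toy stand-in for the multiprecision arithmetic actually used):

```python
from fractions import Fraction

def recognize_rational(x, denominator_bound):
    """Recover the rational closest to the high-precision approximation
    x among those with denominator <= denominator_bound. In the real
    computation the bound comes from the estimates on the class
    polynomial denominators discussed in the talk."""
    return Fraction(x).limit_denominator(denominator_bound)
```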
>>: If you take a guess at the size of the denominator [inaudible] then can you
confirm that you got the right polynomial?
>> Kristin Lauter: Well, the typical way that we do that is to take the curve that
we constructed and check it has the right number of points. So that's the way
that you do it in practice, which is very fast. You just take a point on it and see if
it's killed by the group order that you picked which was supposedly close to a
prime.
But I guess now we actually do have theoretically proved bounds on these sizes,
which you can then use to say, look, we really did compute the right thing here,
because we know that we only needed this much precision in order to get it
right.
Okay. So now what I'd like to do is switch gears. I was talking about the Siegel
moduli space up until now. And notice that there were three of these invariants.
In fact, if I hadn't lied to you and we wanted to take care of all the Igusa
invariants, we would have actually had 10 of these. But they really amount to
computing essentially the same values of the same modular functions anyway,
so it's not more than three times as much work to do all 10. I'm just trying to
keep it simple; that's why I'm focusing on the three class polynomials.
Okay. So now let's switch gears. Forget about the Siegel moduli space for a
moment. The Hilbert moduli space is associated to a real quadratic field.
So what you should think of is this: I talked about the Siegel moduli space, which
parameterizes principally polarized abelian surfaces, and we had a collection of
CM points on it. At a CM point, OK, the ring of integers in the quartic CM field,
actually embeds into the endomorphism ring. So I'm going to put this up here for
reference: our CM field is K, and from now on we're going to call its real
quadratic subfield F. A CM point on the Siegel moduli space has OK embedded
into the endomorphism ring of the Jacobian. But this of course also means that
the ring of integers of F is already embedded in the endomorphism ring of the
Jacobian. And so what happens is that if you shift over to the Hilbert moduli
space, which parameterizes abelian surfaces with real multiplication by your
fixed real quadratic field, then you still have all those CM points that you were
concerned with. They naturally live on the Hilbert moduli space, too, because
they have OF embedded in the endomorphism ring already. So that's the point
behind what we're doing here.
So let F be a real quadratic field. You'll notice these conditions here, with prime
discriminant D congruent to 1 mod 4. Some of this can be removed, but some of
the conditions actually come from what Bianca talked about yesterday, which is
that you want the Bruinier-Yang formula to actually be true, so that you can use
it to correctly estimate your denominators, multiply through, and know that you'll
be looking for integer coefficients.
So under this assumption, let sigma be the non-trivial Galois conjugation of F
over Q, and let epsilon be a unit with norm minus 1. And then -- I didn't write
down the action of SL2(OF) on this -- this is two copies of the upper half-plane,
not the Siegel upper half-plane in dimension 2. The two is upstairs, not
downstairs. So this is two copies of the upper half-plane with the action of
SL2(OF). And what I'm going to show you is, if you have a modular function on
the Siegel moduli space, how to pull it back to a modular function on the Hilbert
moduli space. And vice versa.
Okay. So what is the map between these two spaces? This is all well-known
stuff, and we just work it out for computational purposes here. If you have
Z = (Z1, Z2), a point in two copies of the upper half-plane, and an element A of
the real quadratic field, this is the notation we're going to use: Z star is the
diagonal matrix with Z1 and Z2 on the diagonal, and A star has A and sigma of A
on the diagonal. And then gamma star -- you can see this is no longer a
two-by-two matrix but a four-by-four matrix, because these stars in here are
two-by-two matrices. And we're going to choose a Z-basis for OF. Okay, I'll tell
you the secret right now: we're going to end up taking F to be Q(root 5). All
right? So most of the rest of the talk is going to be Q(root 5). Most of this you
could do, like I said, for D congruent to 1 mod 4 and prime. You could probably
even do it more generally. But you have to work out a separate pullback map for
each different D.
And in general we're probably still going to want to stick with the assumption that
F has class number 1. So think of the Spallek setting, as for the Siegel modular
functions.
So F will usually have class number one. This is a Z-basis -- and all of this does
not depend on the choice of Z-basis. And you write down this matrix R, which
has entries E1, E2, sigma E1, sigma E2. A bunch of notation -- sorry for all of
this; it's all to explain what the pullback is. We're going to define this map, going
from two copies of the upper half-plane, which is for the Hilbert moduli space, to
the Siegel upper half-plane. And what does it do? It takes a Z and sends it to
this, where this is a diagonal two-by-two matrix and -- oh, sorry, yeah --
R is also a two-by-two matrix.
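For F = Q(root 5) this point-level pullback can be sketched directly. The basis e1 = 1, e2 = (1 + root 5)/2 and the formula tau = R-transpose times diag(z1, z2) times R are assumptions matching what is on the slide:

```python
import math

SQRT5 = math.sqrt(5)
# Z-basis of O_F together with its two real embeddings: (e_i, sigma(e_i))
E = [(1.0, 1.0), ((1 + SQRT5) / 2, (1 - SQRT5) / 2)]

def to_siegel(z1, z2):
    """Send (z1, z2) in H x H to the symmetric 2x2 matrix
    R^T diag(z1, z2) R, a point of the Siegel upper half-plane."""
    # (R^T diag(z) R)_{ij} = e_i*e_j*z1 + sigma(e_i)*sigma(e_j)*z2
    return [[E[i][0] * E[j][0] * z1 + E[i][1] * E[j][1] * z2
             for j in range(2)] for i in range(2)]
```

Note the output is automatically symmetric, which matches the remark later in the talk that the Igusa functions are really being fed three inputs, not four.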
So this phi is going to go from SL2(F) to Sp_2(Q). It has to take two-by-two
matrices to four-by-four matrices, and it does that by expanding gamma into a
gamma star and basically conjugating by S, where S is this matrix which is
determined by R but also depends on the real quadratic field here -- see the
square root of D and the epsilon in there. Okay.
So what happens is that we can now form a map between the moduli spaces; in
other words, the map that I showed you actually factors through this quotient.
And now I'm just going to switch to F equals Q(root 5) for the rest of the talk.
If you had picked a different F, there would have been an equally explicit
description of this -- we give a general theorem -- but you'd have to write it out
separately, with a new description of exactly what the image is. It's very, very
easy; it basically just depends on D. So this is what happens. If you look at an
element Z, which has two coordinates in the two copies of the upper half-plane,
it gets sent to this, which is now in the Siegel moduli space. So anyone who has
had to construct CM points on the Siegel moduli space will know that it's kind of
a pain, so it's not surprising that this looks a little complicated -- it's not too ugly,
but it looks somewhat complicated and it involves the [inaudible] action, and you
have to take care that things have a certain imaginary part and a certain sign,
and all kinds of stuff like that, basically for the polarization to work out right.
But the issue is that when you evaluate those Siegel modular functions -- I wrote
down the Igusa functions, but I didn't tell you very much about evaluating them --
you're evaluating them on these two-by-two matrices. And these are actually
symmetric. Did I forget that condition? They're supposed to be symmetric; they
might not have been in the statement before.
And so it's really like evaluating those modular functions on three inputs here,
because you've got these two-by-two matrices but the off-diagonal entries are
the same.
So what happens is that to evaluate them, you can write down Fourier
expansions for these Siegel modular functions and for the Hilbert modular
functions, and what you get is power series in variables of this form: Q1, Q2,
and Q3 for the Siegel case, where each is the exponential function evaluated at
this thing -- this is actually exp of 2 pi i times this thing.
For the Hilbert modular functions that we're going to evaluate -- and this is one
of the big points of doing things this way -- there are only two variables. There's
only a Q1 and a Q2, instead of a Q1, a Q2, and a Q3 in the Fourier expansion.
And what you can see is that that actually makes a big difference from the point
of view of computation because each one of these exponential functions is
something that you have to evaluate typically to very high precision because it's
going into a computation where there's going to be lots of multiplications and
things like that, and you want to retain enough precision so your answer will be
accurate in the end.
So the more variables you have, the more of these exponential functions you
have to evaluate and also the more multiplications you have to do, multiplying
them together and thereby potentially losing precision. Okay.
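Numerically, evaluating a truncated two-variable expansion looks like the following sketch (integer exponent pairs for simplicity; the genuine Hilbert expansion indexes its monomials by totally positive elements of OF):

```python
import cmath

def eval_fourier(coeffs, z1, z2):
    """Evaluate sum of c_{a,b} * q1**a * q2**b at (z1, z2) in H x H,
    where q_j = exp(2*pi*i*z_j). `coeffs` maps (a, b) to c_{a,b}.
    With a third variable q3, as in the Siegel case, every term would
    need one more exponential and one more multiplication -- the
    precision-loss point made above."""
    q1 = cmath.exp(2j * cmath.pi * z1)
    q2 = cmath.exp(2j * cmath.pi * z2)
    return sum(c * q1 ** a * q2 ** b for (a, b), c in coeffs.items())
```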
So this is how you pull back. Let's say I gave you the Fourier expansion for the
Siegel modular function -- actually, I probably should have written that up here,
so I'm just going to write it like this. Let's say F of tau had an expansion like this.
For Siegel modular functions we usually write it as a sum over two-by-two
matrices T of these Fourier coefficients, which I'm going to call a sub F of T,
times the three variables Q1, Q2, Q3. And these would be something like tau 1,
tau 2, and epsilon, where tau was tau 1, tau 2, and -- well, it might be epsilon
over 2, I forget. But it's roughly that.
So if you want to take a function like this and pull it back to the Hilbert moduli
space, what you do is write down a Hilbert modular function G which has
coefficients -- see, it's in two variables now. And you need to sum over totally
positive elements of OF of this form, so for a given A there's only a finite number
of B's such that this will be totally positive. And you need to compute this
coefficient now. And this coefficient will be exactly the sum of the coefficients of
the Siegel modular function which satisfy this condition. So for a given A and B,
you can see that this is also a finite list of m1's, m2's and m's, because they
have to satisfy all these conditions -- these guys are positive, et cetera, et
cetera.
So that means that each coefficient a sub G of t for your new Hilbert modular
form is just a sum of the coefficients of the Siegel modular form that you started
with, which in our case will be the modular forms used for the Igusa functions.
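The finiteness being used here -- for a fixed a, only finitely many b give a totally positive element -- is easy to see concretely for F = Q(root 5). A sketch, writing t = a + b*phi with phi = (1 + root 5)/2 (the talk's exact parameterization of t may differ slightly):

```python
PHI = (1 + 5 ** 0.5) / 2        # (1 + sqrt(5))/2
SIGMA_PHI = (1 - 5 ** 0.5) / 2  # its Galois conjugate

def totally_positive_bs(a):
    """Integers b such that t = a + b*phi is totally positive, i.e. both
    real embeddings a + b*phi and a + b*sigma(phi) are positive. The
    window of valid b is finite because phi > 0 > sigma(phi)."""
    lo = int(-a / PHI) - 2          # generous bracket of the window
    hi = int(-a / SIGMA_PHI) + 2
    return [b for b in range(lo, hi + 1)
            if a + b * PHI > 0 and a + b * SIGMA_PHI > 0]
```

This is why, for each power of Q1, only a few B's contribute, as in the expansions shown next.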
Okay. So now what I need to tell you is the analog, for the Hilbert moduli space,
of the Eisenstein series that we used in the Siegel space, because this is what
you actually need to compute if you want to implement this method. You need to
evaluate Hilbert Eisenstein series up to a certain amount of precision. And
actually I have seen some titles of talks given by people at workshops organized
by William Sage -- William Stein -- [laughter] I think actually we should just start
calling him William Sage, don't you think so?
And I think that some people probably know how to evaluate these Hilbert
Eisenstein series a lot better than I do, so I'm just telling you a naive version.
The person's name that comes to mind is John Voight, but there might be others.
So this is not an optimized way to evaluate Hilbert Eisenstein series, but it is a
way to evaluate them: you need to compute these coefficients here.
These are going to be the basic Eisenstein series of even weight. And these
B_k(t)'s are actually not too hard to compute, but they depend on -- you can see
this t is generating an ideal in OF, and you need to sum over ideals that contain
this ideal, so you're going to be looking at the splitting of the primes dividing this
ideal and doing a little computation there. But if you do all that right -- and these
are just constants that you compute for each k -- then you'll get the Fourier
expansions for these Eisenstein series. And you can see they look very simple.
What you can see here is that we've pulled things out -- for example, in this line
we've pulled out the Q1, here the Q1 squared, and here the Q1 cubed. So this is
A equals 1, A equals 2, A equals 3, and then for each A you have a few terms,
where you can see the extent to which B can go negative is limited. So you've
got a few terms coming from a B for each A.
So the reason that I'm emphasizing this is that in the Siegel moduli space, where
you have three of these, before coming upon this pullback technique I thought
some with Reinier Broker about how to optimize the computation of the Siegel
Fourier expansions. And with the three functions -- the three exponentials,
basically -- to be evaluated, it wasn't exactly clear how to organize the order of
computations to make them most efficient. And part of the point of what I'm
trying to say here is that this pullback map actually does a lot of that work for
you. It kind of reorganizes the computation for you and allows you to compute a
lot less. Okay.
So what I'm going to define here are the analogs of the cusp forms that we saw
in the Siegel case. We've got theta 6, which is this combination of these
Eisenstein series, and here's theta 10. And just so you know, theta 10 is
basically the pullback of chi 10, which was the thing in the denominator of the
Igusa functions -- the topic of Bianca Viray's talk yesterday.
So actually, I think you can even ignore theta 12 for this talk; mostly you should
focus on theta 10 and theta 6. And there's a very nice theorem of Gundlach,
which has led us to call these Gundlach invariants. It states that the ring of
symmetric holomorphic Hilbert modular forms for SL2(OF) is a polynomial ring in
G2, G6, and theta 10, and the symmetric meromorphic Hilbert modular functions
are all rational functions in these two invariants.
So we're going to call these the Gundlach invariants. But, in fact, there could be
reasons for choosing different invariants -- other ways to recombine these
things. So there are two more possible choices: J3 could be set to be this
function, and then you would use J1 and J3; or J4 could be set to be this
function, and then you would use J2 and J4. And it wasn't clear to us when we
wrote this paper which would be better. But for the computations that Michael
Naehrig has been doing, he's been focusing on J2 and J4 as a good choice.
And the tradeoff seems to be that J1 and J3 are both relatively small, which is
generally good when you're multiplying things together because you lose less
precision; but J2 and J4 have the advantage that both invariants have chi 10 in
the denominator. So then you're in the situation that Bianca's talk was focusing
on yesterday, where both denominators are chi 10 and we basically know what
they are from the Bruinier-Yang formula -- if not exactly, then up to the kind of
correction factors that she was pointing out yesterday.
>>: [inaudible].
>> Kristin Lauter: I'm sorry?
>>: [inaudible].
>> Kristin Lauter: Arbitrary F, did you say?
>>: [inaudible].
>> Kristin Lauter: Well, as we saw yesterday, Bruinier-Yang is only proved for
both D congruent to 1 mod 4 and D twiddle -- which is the relative norm in the
reflex field -- prime and 1 mod 4.
>>: [inaudible].
>> Kristin Lauter: The Gundlach invariants?
>>: Yeah.
>> Kristin Lauter: Oh, no, right.
>>: The [inaudible] general field?
>> Kristin Lauter: This was a statement about Eisenstein series, and nothing
else. So the computation of these things depends on F, but for each F this is
true.
Okay. So anyway, what you can do is use the pullback formulas that we derived.
However, this is something which is true for F exactly Q(root 5), and you'd have
to redo it using our formulas for a different F. But this is what the Igusa functions
pull back to when F is Q(root 5).
Okay. So, for example, I think you can see right here that this pullback sums the
coefficients a sub F of the Siegel modular function, but it sums according to
these conditions, which depend on A and B -- which were giving you elements
of OF that had to be totally positive, and things like that.
So if you switched out this real quadratic field, some A's and B's might work for
one field but not for another, so you would get different contributions to the
pullback.
Okay. So these are the three pullbacks of the Igusa functions. And what we're
going to do is just compute these guys -- either J1, J2, or J1, J3, or J2, J4,
whichever choice you end up making -- and evaluate them to high enough
precision so that we can recognize them mod P, and then use those formulas to
reconstruct the curve with the usual Mestre's algorithm.
Okay. So here's an algorithm for computing Gundlach invariants. And so in
addition to the -- yeah, so I've already made a few comments about why this
might be better. So I'll just go through the algorithm first and then make some
more comments on this.
So if K is a primitive quartic CM field and you want the modular invariants for the
CM points associated to K and you're going to want a curve over a field FP,
where P is a prime which splits completely into principal ideals in K*. The
only reason for that is that we want ordinary reduction; then we want P to be a
relative norm so that we'll have either two or four possible group orders. You can
also do things if P doesn't satisfy that condition. But I'm not going to talk about
that here.
So you figure out basically the prime that you want and the group order that you
want coming from this field K, and now you want the curves that have CM by this
field. And so the output is going to be the Gundlach invariants mod P for genus 2
curves and then you can, like I said, use Mestre's algorithm to regenerate the
curve. So what do you do? Part of my claim was that it's actually easier to write
down the CM points on the Hilbert moduli space as well. So let me show you
how that works.
Again, like I said in the beginning, in this presentation we're using
this particular form for OK, the ring of integers, over OF. So the
real quadratic field F having class number one is being used here in this
presentation; you might be able to get around it somehow, but I don't know.
And then if M is the Galois closure of K over Q, which could be not equal to K if K
is a dihedral field we want the imaginary part -- we want delta such that the
imaginary part of square root of delta is positive and this imaginary part of sigma
of delta is positive and then what we're going to do is the same as in the usual
Igusa case, we compute the class number and find the ideals which generate the
class group of K and then we write our ideals in this form OF times A plus OF
times B plus root delta over 2, where A is totally positive -- uh-oh, sorry, I lost
a norm symbol; that's supposed to be the norm of A. And then we set Z exactly to B
plus root delta over 2A.
And then the CM point -- so these are just going to be pairs of points in the upper
half-plane. So one CM point associated to this CM type phi let's say is just Z,
sigma Z but you can also get the Z point associated to phi prime -- sorry, forgot
to tell you. There's -- maybe I should just very quickly say. A CM type for a CM
field is a choice of essentially half of the complex embeddings. And no two of
them should be complex conjugate of each other. So for genus 2, a choice of
two embeddings. And there's actually four CM types. But in general we only
need to consider two of them in order to get all the isomorphism classes. And
even in the case that K is Galois you actually even only need to consider one CM
type in order to get representatives for all isomorphism classes.
This was basically worked out by Spallek. It's definitely true in the case
where F has class number one.
So these are the CM points. And you can see they're just very easy to write
down. They come directly from the representation of the ideal class as opposed
to having to do some extra fancy work like Spallek and Marco Streng did to find
something that works for a particular symplectic basis, et cetera, et cetera. So
this is directly what the CM points are given epsilon and Z coming from the
representation of the ideal.
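For F = Q(√5), the recipe is short enough to sketch numerically. A toy version in Python; the values of A, B, and δ here are hypothetical placeholders, not data from a real ideal-class computation, and the denominator uses the norm of A as she corrects in the talk:

```python
import cmath
import math

S5 = math.sqrt(5)

def emb(ab, sign):
    """The two real embeddings of F = Q(sqrt 5): (a, b) stands for
    a + b*sqrt(5), and sign = +1 or -1 sends sqrt(5) -> sign*sqrt(5)."""
    a, b = ab
    return a + sign * b * S5

def cm_point(A, B, delta):
    """Given A, B, delta in O_F (A totally positive, delta totally
    negative), return the CM point (z, sigma(z)) on the product of two
    upper half-planes, with z = (B + sqrt(delta)) / (2 N(A))."""
    nA = emb(A, +1) * emb(A, -1)    # the norm of A
    pts = []
    for sign in (+1, -1):
        d = emb(delta, sign)        # negative real under both embeddings
        sqrt_d = cmath.sqrt(d)      # purely imaginary, with Im > 0
        pts.append((emb(B, sign) + sqrt_d) / (2 * nA))
    return tuple(pts)

# Hypothetical data: A = 1, B = 1, delta = -3 - sqrt(5), totally negative.
z, sz = cm_point((1, 0), (1, 0), (-3, -1))
```

The point being illustrated is just how directly the pair (z, σz) falls out of the ideal representation: no symplectic basis has to be hunted for.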
So now the next step is -- that step was writing down the CM points. This step is
evaluating the Gundlach invariants at these points and forming the minimal
polynomials. Hopefully you actually recognize them as having rational
coefficients. That will happen if you for example estimate the denominator
correctly using the Bruinier-Yang formula or some adjustment to it. And then you
reduce once you recognize these as polynomials with rational coefficients, you
can reduce modulo a prime that doesn't divide the denominator and find the
roots and then you can compute the curve using the pullback formulas. And you
can apply Mestre's algorithm to the Igusa invariants.
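The reduce-mod-p and root-finding steps can be sketched in a few lines; the polynomial below is a made-up degree-2 stand-in, not an actual Gundlach class polynomial:

```python
from fractions import Fraction

def reduce_and_solve(coeffs, p):
    """Reduce a monic polynomial with rational coefficients (listed
    highest degree first) modulo a prime p not dividing any
    denominator, and find its roots in F_p by brute force."""
    mod_coeffs = []
    for c in coeffs:
        c = Fraction(c)
        inv = pow(c.denominator % p, p - 2, p)  # Fermat inverse; p prime
        mod_coeffs.append((c.numerator % p) * inv % p)
    roots = []
    for x in range(p):                          # fine for small test primes
        acc = 0
        for c in mod_coeffs:                    # Horner evaluation mod p
            acc = (acc * x + c) % p
        if acc == 0:
            roots.append(x)
    return roots

# Hypothetical "class polynomial" X^2 - 5/2 X + 1, reduced mod 11:
roots = reduce_and_solve([1, Fraction(-5, 2), 1], 11)
```

In practice one would use a real root-finding routine over F_p, but the shape of the step is the same: clear denominators, reduce, solve, then feed the resulting invariants into Mestre's algorithm.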
Okay. So that's the algorithm. And these are my claims as to why this should be
better. A couple of things I've already said. CM points are easier to write down.
There's two variables instead of three. There's -- that means fewer exponentials
to evaluate, fewer multiplications. These functions also have smaller heights
which you'll see rather prominently in the examples. And there's two
invariants instead of three. Not only two functions, two variables, but two
invariants. And we have pretty good control over the precision needed.
Okay. So before I start giving you a whole bunch of examples, let me try to put
this in the context of a lot of related work, much of it by many people in this room.
So in the genus -- basically it's been about 15 years now that we've been
seriously trying to construct genus 2 curves starting with the thesis of Spallek.
And here instead of just listing the first person that did something, I kind of tried
to write down everybody that did something. And I think I haven't missed
anyone. But if I have, please let me know.
So in the complex analytic method which just means taking these Igusa functions
and trying to evaluate them by any method that you can to high enough precision
so that you'll be able to recognize them. And van Wamelen, for example, those
were all the examples that Bianca showed you yesterday. Annegret Weng and
then I implemented this algorithm 10 years ago with Henry Cohn. And then more
recently Régis Dupont, who is a student of François Morain, and Marco Streng.
So we've also tried very hard several other methods. The CRT method is
Chinese Remainder Theorem method, which is completely different. It attempts
to recognize these minimal polynomials by doing only operations over various
small finite fields, reconstructing the polynomial via the Chinese Remainder Theorem.
And in the -- in the third approach is the p-adic method. And I think basically the
first paper on this was the five author paper, Gaudry, Houtmann, Ritzenthaler,
Weng and Kohel, and then I think there's several follow-up papers by Kohel and
Lubicz for the 2-adic and 3-adic cases. And in particular David has provided a
very nice online database with many examples, which I think is the only database
right now that's available with lots and lots of examples.
And so I would like to just comment a little bit on the plusses and minuses or
strengths and weaknesses of these three methods. Because it's a bit of a zoo so
it could be a little bit confusing. So one thing that happens here is that you have
to -- you have to evaluate these functions to very high precision. And this is what
I'm trying to kind of explain to you that we think we've improved in the -- in -- by
using Hilbert modular functions instead.
And the main problem there is, even though some methods are better
than others -- for example, Dupont's method is probably much better than the
methods -- oh, actually I should have said I also worked on this with Reinier Bröker;
I should have added Reinier's name here [inaudible] -- using the Fourier expansion
of the Igusa functions instead of the theta-functions approach, which is what
Dupont has a very nice method for doing.
So this method has the defect that you lose precision when
you multiply things together. And when the formulas are extremely complicated,
it's hard to estimate how much precision you've lost. There's 10 even theta
characteristics and they could be of varying sizes and you have to bound the size
of all of them basically from below and above in order to do all these operations
and to know that you've actually maintained the amount of precision that you
wanted to have. So on the other hand, the CRT method has a big flaw, which is
that you end up doing lots and lots and lots of computations mod -- let's call it
mod L for smaller Ls, where first you just even have to find a curve that's in
the right isogeny class, and that has tended to dominate the time in this
algorithm.
And Damien Robert made quite a lot of progress on that issue this summer.
That's the work that he alluded to at the end of his talk yesterday. But it's still, just to
be honest, much slower than the complex analytic method. It could have the
possible advantage if ever we got to the situation like Drew Sutherland was able
to show in the elliptic curve case where you can take advantage of the fact that
you save on the space complexity with this algorithm.
So if we ever got to the point where computation time was not the bottleneck,
then maybe asymptotically this could still be worthwhile. And I have to say
honestly, I can't comment too much on the pluses and minuses of this p-adic
method because I don't know it as much as the others, but I do know, for
example, that some of the larger class numbers that David has told me about are
for kind of specific choices of fields. So it doesn't seem to me that this works
uniformly well for all CM fields K either. But it's a different kind of criteria than the
ones I've mentioned, for example the restriction to F having class number one.
So in my view, these all kind of have many plusses and minuses. And in the
end, I think we still don't know, you know, asymptotically which one will be the
best. Based on the elliptic curve situation you might guess that even though this
one looks like the turtle that it might end up winning out in the end over the hare.
And although David has a very convincing database, I would still have to say that
in my opinion this one looks like the hare right now, the one that's out in
front.
Okay. So now I'd like to spend the last five or 10 minutes talking about the joint
work with Michael Naehrig. So we gave a few examples in my paper with
Tonghai Yang based on some PARI code that I wrote, which is at the end of the
paper. But Michael has written Magma code for this
algorithm and extended it and improved it in many ways, and there are still many
more improvements to be made and investigated.
And we also have, as part of the objective, to understand the factorization of
the coefficients of the class polynomials for these Hilbert modular forms, much in
the same way that we've tried to understand them for the Siegel modular
functions.
Okay. So here's just an example to get us started. So this is class
number one. I should have asked Michael, but I strongly suspect that 3,000
digits of precision was not absolutely necessary here. I think that was kind of a
blanket precision that was set. If I had to guess from doing examples of this size
I think I was often able to make it work with 400 digits. I think you might have
even said you did 400 digits of precision to begin with.
But the next line here is telling you how many terms in the Eisenstein series you
needed. So that's the analog of when you're evaluating theta functions and you
evaluate them basically by summing over a box or an ellipse. So how big is that
ellipse? How many terms do you need for theta function to be accurate up to a
certain precision?
Well, the analog of that here is how many terms in the Fourier series do you
need in order to be accurate up to a certain precision? We've actually done
some estimates which are not in the paper which have given bounds on the size
of the tail. So you can really know how accurate you are with a certain number of
terms.
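The term count works exactly as she describes: the Fourier terms decay like |q|^n with q = e^{2πiz}, so a geometric tail bound tells you where to truncate for a target precision. A rough one-variable sketch, with a placeholder polynomial bound on the coefficient growth (not the actual bound from the paper):

```python
import math

def terms_needed(im_z, prec_digits, coeff_bound=lambda n: n**4):
    """Smallest N such that the tail sum_{n>N} coeff_bound(n) * |q|^n is
    below 10^(-prec_digits), where |q| = exp(-2*pi*Im z), bounding the
    tail crudely by the first neglected term over (1 - |q|)."""
    q_abs = math.exp(-2 * math.pi * im_z)
    target = 10.0 ** (-prec_digits)
    n = 1
    while coeff_bound(n) * q_abs ** n / (1 - q_abs) >= target:
        n += 1
    return n

# For example: at Im z = 1, roughly how many terms for 100 digits?
N = terms_needed(1.0, 100)
```

Because |q| shrinks exponentially in Im z, CM points higher in the upper half-plane need dramatically fewer terms, which is part of why these expansions are cheap to evaluate.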
And then so this was the time for computing 8.4 seconds. But this was, Michael
says, on his laptop, which is not particularly fast, and it was running Magma. But
you could see, I mean, these are like strikingly small. Of course this is a small
example, but these are very small class polynomials, in the sense of just
the size of their coefficients.
So I'm going to skip ahead maybe to -- I might just go to the last example unless
people have questions about intermediate ones. So here is a -- what I would
consider to be a large example, which is class number 8. So some people can
do class numbers much bigger than this probably. But -- so -- but this -- oh, does
it have the timing on here? Okay. So I think Michael said an hour and a half
roughly. Is that right? So an hour and a half just on his laptop. So that's not too
bad for something of this size. And these are much, much smaller than the Igusa
functions would be.
Okay. So I think the only other comment that I want to make is that he's using
J2 and J4 here so that these denominators will be exactly what -- this is
essentially the denominator here, because if you made this monic you would
multiply through by that. And here's the -- well, here's the factorization of this one.
So you can see all the small primes like what Bianca was talking about here
yesterday. These are the primes in Bruinier-Yang. And the Chinese
Remainder Method is incredibly assisted by the knowledge of the actual
factorization of the denominator, so that you don't have to multiply through by
something coming from my bounds with Goren [phonetic], which just give you an
upper bound on the power of the prime that can appear and an upper bound on
the size of the prime that can appear. The Bruinier-Yang formula is
actually giving you the precise factorization, and it allows you to save a huge
amount of computation, using that instead of just a rough bound.
Okay. So with that, I think I will stop five minutes early.
[applause].
>>: So are there questions for Kristin?
>>: [inaudible].
>> Kristin Lauter: I think that would be nice.
>>: And then to go back you sort of [inaudible] Igusa [inaudible].
>> Kristin Lauter: Yeah.
>>: [inaudible] I mean, you can just [inaudible] since it's rational. But there's one
problem that I -- well, one question that I have. How much is this restricted to
[inaudible] because in general it's [inaudible] you expect these surfaces to be in
general [inaudible] have a rational parameter --
>> Kristin Lauter: No, I agree with you. I mean, that's essentially the restriction
on the field that we're looking at is that so far D should not grow -- D should not
be very large. In the way we're doing things, D should not be very large. But on
the other hand for a fixed D you can have lots and lots of CM fields that have that
as its -- all of these have real quadratic subfield Q square root of 5. So there's
going to be many CM fields that you can cover this way but for now you should
assume that D is small.
>>: [inaudible].
>> Kristin Lauter: No, no, no. Those are not -- just like for Igusa it's I1, I2, I3 or
whatever. They're just first and second and third.
>>: Okay.
>> Kristin Lauter: But the weight -- I mean, they're functions, but the weights on
the Gs -- like G2, two was the weight; G4, theta 10, those are weights. But then
they're cancelled out because [inaudible] function. Igor?
>>: So you think [inaudible] and then suddenly it jumps.
>> Kristin Lauter: Where do you mean small primes?
>>: [inaudible] and then it jumps, except for 3, 4, 8.
>> Kristin Lauter: Well, these are of a kind of a natural form that's going to be
consistent across all examples because they're symmetric functions of these
roots. And you can kind of see the size of the roots in terms -- like if you use the
Fourier expansion anyway you can see kind of how big they are.
So -- well, at least for the size. Now, here you're asking like do I have any
reason why --
>>: [inaudible].
>> Kristin Lauter: Okay. What you should think about is separate this coefficient
from these. This one is special because it's a denominator. For these the
answer is no. I mean, I actually have some work which I wasn't going to mention
but which is in some sense related. There is a geometric expression for the
numerators, which can capture geometrically the primes that
appear in all the numerators.
So what happens for that is that you don't get information, generally you don't get
information on these bigger primes. These just appear to be random. But I have
no idea about the size like, you know, why this factorization has one this size and
this one is smaller. I don't know [inaudible].
>>: [inaudible].
>> Kristin Lauter: Well, that's a good question. So this one is not in the
denominator, actually. Although it's possible that it was there and it was
cancelled by lots of 5s here.
>>: 3s or 5s?
>> Kristin Lauter: François?
>>: If we want to imitate what we do in genus 1, why not try to reduce the
polynomial like using [inaudible] so on just to see if you can get smooth
polynomials and then perhaps just some function which could be? Did you try to
[inaudible] or whatever on P4 to see if you get a very small polynomial and if it
does some [inaudible].
>> Kristin Lauter: No, I never did. So the point is that if you got some smaller
polynomial -- I mean, like you're thinking of it as defining a lattice and --
>>: I mean that -- just that [inaudible] but I mean trying to find [inaudible]
functions like in genus 1 when you [inaudible].
>> Kristin Lauter: Yeah. I see what you mean.
>>: And so you can try to take [inaudible] like this, reduce it, and if something
striking happens, then maybe --
>> Kristin Lauter: And in genus 1, do you ever find like new functions that way
that you wouldn't find by like just taking, you know, the ones you know like
[inaudible] or whatever?
>>: That we do everything just once. The question is [inaudible] [laughter].
>> Kristin Lauter: Good point.
>>: Discovering functions like this is the question.
>> Kristin Lauter: Never done that.
>>: [inaudible] we can't see.
>> Kristin Lauter: Oh, sorry. So you see still huge powers 2 and 5.
>>: Is there a possibility to combine the method you discussed with CRT
methods so that maybe you find the answer [inaudible] high digits without
worrying as much about precision and then CRT to get the -- to adjust the lower
[inaudible].
>> Kristin Lauter: Yeah. So we actually thought about that. Damien had some
ideas about that. And they're -- even like in the context of the CRT method there
is a reason to, if your real quadratic field is like you say square root 5 to actually
use this technique inside the loops of the CRT method. But there's a little --
there's a tradeoff there. It's not always the right thing to do. But it can help. But
I think also there's probably other combinations --
>>: [inaudible] space from PQ to P squared [inaudible].
>> Kristin Lauter: That's right. But without seeing too much about the new
approach that we have, it's not as good as other things in some cases
sometimes. Sometimes [inaudible] -- actually David, David Gruenewald, mentioned
that in his thesis for the Igusa functions case -- or, I mean, sorry, for
the complex analytic method -- so if you have a [inaudible] better to loop over
[inaudible] what did he say in his thesis? But anyway, David Gruenewald made a
related observation.
Anyway, what I was going to say to you, Tony, is that I think that there's actually
a lot of potential combinations here between these different things. And there's
certainly tricks that are used in the other methods that we're now trying to apply
to this one as well, such as Lagrange interpolation and defining invariants.
>>: [inaudible] question. So today [inaudible] lunch outside. Tomorrow we'll
have a slightly longer lunch break. So we have until 2:30 tomorrow. And you'll
go over to the commons where you can have -- you pay for what you eat but you
can pick exactly what you want to eat.
The reason I'm making this announcement now is that for tomorrow please make
sure to bring your name tag, including the Microsoft badge that you got on the
first day. If you lost yours they can reprint one for you. And please make sure to
have cash for your lunch tomorrow across the [inaudible]. But for today we'll
enjoy it again.
>>: Any other questions? Let's thank Kristin again.
[applause]