>>: Okay. We'd like to start this afternoon's sessions. If everyone could be seated, we can get your attention. If you would like to speak at the rump session this evening, you should talk to Tanja Lange after this talk.
It's my pleasure to introduce Rene Schoof. His seminal work on counting points on elliptic curves ultimately made elliptic curve cryptography practical.
So he's going to speak to us today on counting points on elliptic curves over finite fields and beyond.
>> Rene Schoof: Thank you very much. Thank you very much for giving me the opportunity. My talk will consist of two parts. In the first part, I want to explain something that many people have asked me about over the years.
So a few times a year people would ask me: But is it true that the paper that you wrote with your
algorithm was rejected by something? And then I say, yeah, that's true in some sense.
And I thought this is perhaps the only occasion that I will have to explain this to an audience that
is willing to listen to it.
So I want to tell the story of the publication of my algorithm. And in the second half of my talk there will be a commercial. So I will do some advertising for recent work and a recent book by Jean-Marc Couveignes and Bas Edixhoven which can be viewed, in a certain sense, as an extension of my algorithm.
And there are also interesting open problems there that you may find interesting.
Okay. Gerhard Frey explained that one should actually start with Gauss and then explain that the problem started with him. But let me come to the problem that I will be talking about: counting the number of points on an elliptic curve over a finite field, or in particular on an elliptic curve modulo p.
My starting point, the earliest reference that I could find where this issue is looked at from a complexity point of view, is the famous 1967 book by Cassels and Fröhlich, the proceedings of the conference in Brighton.
The bulk of the book is concerned with the cohomological proof of class field theory, but there are some extra papers in the back. For instance, Tate's famous thesis is a chapter, and there is one chapter by Swinnerton-Dyer, on computations in number theory. Swinnerton-Dyer makes some philosophical remarks there about computations in general and about how long they can take.
He says something to the effect that some computations are not worth doing unless they can land a man on the moon, which I think was a hot topic in 1967.
And he reports on the computations that were done for the Birch and Swinnerton-Dyer conjecture, for which values of L-functions of elliptic curves over number fields had to be computed.
And for this, the coefficients of the L-series were needed, the a_p's and the a_n's, and they can be computed by counting points on elliptic curves modulo p. And this is what they did.
And Swinnerton-Dyer says the following. So this is the title of his article, "An application of computing to class field theory". And I wrote the equation there because Swinnerton-Dyer refers to it. So he says the following about how to count points modulo p.
So suppose you have an equation, written in homogeneous form, a Weierstrass equation, and he says the obvious thing to do is to put Z equal to 1. Then you get the usual affine equation. Then for every x and y you substitute and you see whether you get 0 or not.
And this is an O(p^2) algorithm. But this is not so smart. He says it is smarter to just run x through all the numbers modulo p and see whether or not this expression is a square.
And that is an O(p) algorithm. And this is certainly true. Nowadays I think this is maybe the simplest way to count the number of points, and I think for the range of the computations that were done by Birch and Swinnerton-Dyer it was also a quite efficient algorithm, since the primes were not so large anyway.
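For concreteness, here is a minimal sketch of the O(p) method just described (the curve coefficients and the prime in the example are arbitrary placeholders, not values from the talk):

```python
# For each x, the number of y with y^2 = f(x) is 1 + chi(f(x)), where chi is
# the Legendre symbol.  Summing over x and adding the point at infinity gives
#   #E(F_p) = p + 1 + sum_x chi(x^3 + a*x + b).

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def count_points(a, b, p):
    """Number of points of y^2 = x^3 + a*x + b over F_p (p an odd prime)."""
    return p + 1 + sum(legendre(x * x * x + a * x + b, p) for x in range(p))

if __name__ == "__main__":
    print(count_points(2, 3, 97))   # runs in O(p) steps, as in the talk
```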
So this is the pre-history, let's say, of this problem. So today we celebrate the 25th birthday of several papers about algorithms, in particular of my algorithm. And the thing that I want to tell you is that, although the publication of my paper, in 1985, is 25 years ago, the algorithm is in fact a little bit older.
So let me tell you how and when it happened. So it started when Henri Cohen visited Hendrik Lenstra in the spring of '82 in Amsterdam. Here they are. You might think he looks a little bit old, but the photograph is really from 1985, so it's three years later.
I can also explain the candle. So this was taken at the dinner after my thesis defense in 1985.
So here are Henri Cohen and Hendrik Lenstra. And Henri Cohen asked this question, in Hendrik's office: how quickly can one compute the number of points on an elliptic curve modulo a prime p? I was also there.
I don't know why he asked the question. One could guess that he was working on the Pari library; I'm not even sure whether that is true or not. I don't know why he asked it. He asked it.
And Hendrik immediately had an answer ready. He said the following: well, if you have an elliptic curve, you can associate to it this ring, the ring of regular functions on the curve. This one.
And you can view this as the ring of integers of a quadratic field, not a quadratic number field but a quadratic function field: you take F_p(x), and then y is obtained by taking the square root of something. It's like a quadratic number field.
And, oh, then we know what to do. There were algorithms around at the time. For instance, Shanks had this paper a few years before on the baby step giant step algorithm to compute this class group.
Actually, the correct analogy is with an imaginary quadratic number field. So there are no units; it's just a class group. And the baby step giant step algorithm takes about p to the one quarter, I think. Yeah.
So this is immediately what Hendrik said. In fact, he's very close. If you take, for instance, Gauss's formulas for composing binary quadratic forms and you take the coefficients in F_p[x] rather than Z, then you find exactly the addition formulas for elliptic curves. They're precisely the same formulas. This is what Hendrik said.
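To spell out the analogy Lenstra described, in standard notation (this is the usual identification, not a transcription of the slide): the affine coordinate ring

$$A = \mathbb{F}_p[x,y]\,/\,\bigl(y^2 - x^3 - ax - b\bigr)$$

plays the role of the ring of integers of the imaginary quadratic function field $\mathbb{F}_p(x)\bigl(\sqrt{x^3+ax+b}\bigr)$; up to the usual identifications its class group satisfies $\operatorname{Pic}(A) \cong E(\mathbb{F}_p)$; and Shanks' baby-step giant-step method finds the order of this group in roughly $O(p^{1/4})$ group operations.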
I don't remember all the details; it's 28 years ago. But some things I do remember. And I remember that I instantly thought that it had to be much easier than that, much easier than for a quadratic number field. One reason, for instance, is that it's very easy to compute the class number modulo two or modulo three: you only have to check whether there is a rational point of order two or order three.
That's not so hard. You have to see whether a polynomial has a zero or not in F_p. For a number field this is something that is unthinkable. I don't know any good algorithm to decide whether the class number of an imaginary quadratic field is divisible by three or not, apart from computing the class number. I do not know how to do this.
That made me think that it had to be easier. So then I went home, and the same night I found the algorithm.
And I remember that I was very careful to make it deterministic. I knew about the division polynomials, because I thought: that is the natural generalization. If you want to know whether there is a rational point of order two or order three, then you are led to the division polynomials. And I was very careful not to factor the division polynomials, because then I would mess up the deterministic character of the algorithm.
At first I distinguished cases, like Atkin and Elkies later did: split, nonsplit, and so on. But at the end of the night, and after a few beers probably, everything came together and the only thing I had to do was to check the characteristic polynomial of Frobenius on the L torsion points.
And I found a deterministic polynomial time algorithm to determine the number of points on an elliptic curve.
In my paper I did the estimates not very carefully and the running time is log to the ninth of p. But with a slight change you actually get log to the eighth of p.
The next day I went to the university and I told Hendrik Lenstra about it. And, yeah, he agreed: nice algorithm.
And that was it. So one could say the first time my paper was rejected was by myself, because what I did then is I threw it in a drawer, basically. And the reason has been explained by Victor Miller: the intersection of the set of people that were interested in elliptic curves and the set interested in computing was essentially empty; very few people were interested.
On top of that, I felt the algorithm was extremely impractical. In fact, for every single example that I tried to compute by hand I gave up after an hour or so, because even modulo five, if I wanted to compute the number of points modulo 5, I encountered polynomials of degree 20 or something.
It was just terrible. So the algorithm looked of only theoretical value, and the people that were interested in these kinds of theoretical questions didn't know what an elliptic curve was.
So it went in the drawer. And that was it. So this is March '82, something like that. And then as
Francois Morain explained, then I had a marketing trick. So I realized two months later that I
could apply this to a problem that those people interested in computing would understand.
And that is by taking a very special elliptic curve, the one there, and the special thing about it is that it admits complex multiplication by the Gaussian integers. That is, there is this automorphism, the one that maps a point (x, y) to this point. If you repeat it two times, you get minus 1, so you can view it as i. And then the theory tells you what the number of points modulo p is; there is a closed formula, and in fact the primality testers that implement ECPP use this formula to count the number of points.
I do it the other way around. So if p is three mod four, this is the number of points. If p is one mod four, then everybody knows that p is the sum of two squares, essentially in a unique way, and then the number of points is given by this, where you have to decide which of a or b to take, and the plus or minus sign.
So we see that computing the number of points on this elliptic curve modulo p is the same, in the complexity sense, as writing p as the sum of two squares.
Or, as Morain explained, this also gives this formula: a divided by b is a square root of minus 1. This was another way to phrase it. I could say I could compute the square root of minus 1 modulo p in polynomial time.
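To illustrate the point, here is a hedged sketch. The exact curve on the slide is not in the transcript, so the code assumes the standard CM curve y^2 = x^3 - x, whose automorphism (x, y) -> (-x, iy) is the multiplication by i being described:

```python
# Hedged sketch: assumes the CM curve is y^2 = x^3 - x (complex multiplication
# by the Gaussian integers).  For a prime p = 1 (mod 4), the trace
# t = p + 1 - #E(F_p) is even, p = (t/2)^2 + b^2 for some integer b, and
# (t/2)/b is a square root of -1 modulo p.
from math import isqrt

def count_points(p):
    """#E(F_p) for y^2 = x^3 - x, via the O(p) Legendre-symbol sum."""
    def chi(n):
        n %= p
        if n == 0:
            return 0
        return 1 if pow(n, (p - 1) // 2, p) == 1 else -1
    return p + 1 + sum(chi(x * x * x - x) for x in range(p))

p = 10009                      # any prime that is 1 mod 4
t = p + 1 - count_points(p)
a = t // 2                     # t is even because the curve has full 2-torsion
b = isqrt(p - a * a)
assert a * a + b * b == p      # p written as a sum of two squares
i = (a * pow(b, -1, p)) % p
assert (i * i) % p == p - 1    # a/b is indeed a square root of -1 modulo p
print(p, "=", a, "^2 +", b, "^2,  sqrt(-1) =", i)
```

Running it prints a decomposition p = a^2 + b^2 together with the square root of -1 given by a/b mod p.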
And this was a result everybody could understand. So I thought: now I'm going to write up my algorithm. This I can sell.
Oh, let me, maybe I should say, just to be complete, that it is of course a totally absurd way to compute a square root of minus one or to write p as the sum of two squares.
There are probabilistic algorithms that are very much faster, and even under the assumption of the Riemann hypothesis you can prove that there are algorithms that do this much faster.
So it is just of theoretical value. In fact, it can also be generalized. I should say that instead of minus 1, by changing the curve, by taking a CM curve, you can compute the square root of any fixed number.
The algorithm is polynomial time in log p, but the O symbol depends on the number, in this case minus 1, or the fixed number in general.
If you change it a little bit, for instance, instead of a square root you ask for a cube root, then nobody knows what to do anymore. So there is no provably polynomial time algorithm to compute the cube root of two.
But probabilistically it's very easy. Under the Riemann hypothesis it's very easy. But there is no proof.
Okay. Then let me see. Oh, yeah, let me then show you -- let me prove to you that I already had this in 1982. So I don't know whether you know these books. There are two volumes. It was a meeting in Holland at the time, in 1980. Maybe one of the first meetings on computational number theory. Not the first, for sure.
And this is 1980. And, in fact, I remember that during the meeting the eighth Fermat number was factored -- perhaps a few weeks later the mail arrived. To put it in perspective a little bit, this is 30 years ago.
The big news was that a 16-digit factor of the eighth Fermat number had been found using the Pollard rho method. So that's how things were standing at the time.
And Hendrik Lenstra wrote the preface, the introduction, for this 1980 meeting, and he actually wrote it when the book was published. You see the books were published in '82, so the preface was written two years after the meeting.
And in this preface you see that he mentions this algorithm, and he doesn't sell it as an algorithm to count points on an elliptic curve, but, like me, he sells it as an algorithm to write p as the sum of two squares.
Okay. This is the next thing that I wanted to tell you. So then I gave a talk about it. So this is
about half a year later. In the summer of '82 I wrote up, more or less, my proof, and then I gave a
talk about it.
And these were the senior people present at the meeting: Hendrik Lenstra and Frans Oort, his advisor, and Andrew Odlyzko and Ken Manders. Frans Oort was invited especially because he's an algebraic geometer. And I think for Manders and Odlyzko, and even for Hendrik Lenstra at the time, elliptic curves were sort of a new thing to use for computation.
So Oort was there as the expert. And Oort didn't believe it. I explained the algorithm and Oort didn't believe it. He said it was wrong.
So that was the second time it was rejected. Only when Hendrik Lenstra insisted and explained to Oort that it was correct did Oort sort of buy it.
But his first reaction was certainly that he thought it was wrong. Okay. Then, so this is '83. I was still a graduate student at the time; my contract was from '79 to '83, and my time was almost up.
In fact, in June I would be out of a job. And I didn't have a thesis yet; I had some things like this algorithm, but I didn't have a thesis yet. And I actually was afraid that I would be without a job the year after. I didn't know what was going to happen.
So then I found the following -- I don't know -- sorry. I found this meeting, the FOCS meeting in Tucson. It was in November of '83, and I said: you know what I will do, if I don't have a job next year, I go to this meeting and I give a talk about my paper -- not so much because of the meeting, but because of the location of Tucson. You see, Tucson is located near the Mexican border. [laughter] And I thought I would just give my talk and then spend the rest of the semester in Central America, and I would have a good time there.
That was my plan. But, yeah, in their wisdom -- sorry -- the organizers of FOCS maybe knew about my plan. So they turned down the paper. I still remember -- I don't have the letter anymore, I looked for it --
it said something like: we are sorry that we cannot accept your paper on computing points on elliptic curves modulo three. That was in the letter. It must have been a typo.
Okay. So then, in fact, it didn't turn out so badly -- from the job point of view, things were not as bad as they seemed. In the spring of '83 I was saved by Hendrik Lenstra, through his contacts with Don Zagier, and his grant in particular. Don Zagier was at the University of Maryland at the time; he was half the time in Germany and half the time at the University of Maryland.
And it's not really a post-doc -- it felt like a post-doc, I was teaching and things like that, but I didn't have my Ph.D. yet. So it's a pre-doc, let's say. So I went to the University of Maryland, and there were Don Zagier and Dan Shanks -- it's the same photo that Victor Miller had -- and also Larry Washington and Jim Craft. I had a great year there. And one of the things that I did was propose my paper to Dan Shanks. Dan Shanks was an editor of Math Comp; he was the boss of the last quarter of each Math Comp issue. Jim was also in Maryland at the time.
So I gave it to Dan Shanks. And Dan Shanks had sort of heard about this result. And then he said: but does it work? And I said: no, it's nonsense; it is just nonsense. It's just a theoretical algorithm. He said: then I don't care. I don't care.
But he didn't really reject it totally. He said: send it to Williams. So that's what I did. I sent it to Hugh Williams. I'm not sure I ever got a report, actually. And then it was accepted by Math Comp and it was published in 1985.
By the way, this photo is not really taken in '83, it was taken in '88. So Shanks looks a little bit
older, but you can see it's still Shanks because it's the same shirt. [laughter].
Oops. Sorry. Okay. So this is the end of the story that I wanted to tell about my paper. So in 1983/'84 I was at the University of Maryland, and those were sort of strange years. I wasn't sure I should do a Ph.D. at the time, because in Holland the situation at the universities was so relaxed: there were tenured faculty without a Ph.D., and I thought I could do without one too. But when I was in the U.S. it became clear to me that it was pretty weird not to have a Ph.D. So I went back to Amsterdam, and I had one year to finish my Ph.D.
And my algorithm is not all of my thesis, but it is a chapter in my thesis. So that is the next year, '84/'85. And I did my thesis in May or June of '85.
But one thing happened in that year that I want to mention: at one point I got a phone call from Hendrik Lenstra. He said: I've got Wieb here and I'm explaining all kinds of things. It was probably 11 in the morning, so Hendrik was working and I had just gotten out of bed. He said: I was explaining to Wieb that whenever you have an algorithm like p minus one, you have an elliptic analogue -- and then I realized I have a new factoring algorithm.
So this is how Hendrik Lenstra's factoring algorithm was discovered. A little by chance. He wasn't really looking for it or trying to find it; he was explaining something to Wieb Bosma, who was an undergraduate student at the time. Wieb Bosma wrote his undergraduate thesis on elliptic analogues of certain primality tests, so it was a natural thing to talk about.
What I also remember -- Hendrik is not here to speak about his algorithm, but I do remember that when he had it, he wrote a letter, one short letter, with a description of the algorithm. And he sent it to the U.S., mainly to people that were good at implementing these algorithms.
So I think it went to Bob Silverman and maybe Peter Montgomery, I'm not sure. Maybe to Gilbraith and Atkin, maybe. And immediately, after two weeks -- the letter takes one week to get to the U.S. and one week back -- the factorizations would come back by mail. So people implemented the algorithm almost instantly. It was immediately very successful.
I should say that when Hendrik explained the algorithm to me, I thought the same as what I thought about my own algorithm: that it would be very impractical, because to me the addition law on an elliptic curve looked so complicated and so time-consuming compared to the arithmetic in Z mod p that I couldn't imagine you would win anything or that it would be faster. Hendrik of course had a different point of view, and he was right. It was actually very efficient.
So then let me just briefly outline the algorithm; that is also on the video then. If you have an elliptic curve, there is the Frobenius endomorphism, and the Frobenius endomorphism has a characteristic equation. This number t, the coefficient t that appears in the equation, is the number that you need to know to determine the number of points. And you have a bound for this number: it is at most 2 square root p in absolute value. The algorithm proceeds by checking this relation on the L torsion points for various small primes L, and then with the Chinese remainder theorem you glue it all together and you have the number of points.
This is polynomial time. If you want to see a crystal clear description of the algorithm with all the details included, I can recommend Karl Rubin's AMS review that appeared about a year later.
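In formulas, in today's standard notation (matching the outline above rather than the slides): the Frobenius endomorphism $\varphi_p(x,y) = (x^p, y^p)$ satisfies

$$\varphi_p^2 - t\,\varphi_p + p = 0, \qquad \#E(\mathbb{F}_p) = p + 1 - t, \qquad |t| \le 2\sqrt{p}.$$

For each small prime $\ell$ one finds the value $t_\ell \in \{0, 1, \dots, \ell-1\}$ for which

$$\bigl(x^{p^2}, y^{p^2}\bigr) \;+\; [\,p \bmod \ell\,](x, y) \;=\; [\,t_\ell\,]\bigl(x^p, y^p\bigr)$$

holds on $E[\ell]$, working with polynomials modulo the $\ell$-th division polynomial $\psi_\ell$; then $t \equiv t_\ell \pmod{\ell}$, and once $\prod_\ell \ell > 4\sqrt{p}$ the Chinese remainder theorem determines $t$.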
Okay. Then I want to say a few words about the SEA algorithm, the things that were done by Atkin and Elkies. I think the first letter that I got from Atkin is from 1988, and he could do 35 digits: he could count the number of points on an elliptic curve modulo a 35-digit prime. What he did was use the modular equation of X_0(L). Using that equation he could limit the number of possibilities for t modulo L, and then he had a combinatorial problem which he attacked with a baby step giant step strategy, and then he could do 35 digits.
Then a little bit later he learned about Elkies' approach. Elkies' approach looks for eigenspaces of the Frobenius endomorphism. And combining these, within a year or so Atkin could count the number of points on an elliptic curve modulo a 100-digit prime. I would never have imagined something like that was possible.
There is one way of looking at the improvements of Atkin and Elkies that I like. You can view the L torsion points as an object associated to the elliptic curve that is smaller than the elliptic curve itself. And if you are able to pin down the action of Frobenius on this smaller object, which is essentially described by the division polynomial, then you get information about it, and in this case you in fact get t modulo L. You are happy.
But the problem is that E[L] is still quite large: it has L squared points, so the algebra you compute in has dimension about L squared over Z mod p.
So it's natural to make it smaller. And there are two directions in which this has been done: you can make it smaller by considering subobjects or by considering quotient objects.
And each of them has a drawback. A subobject would have to be an eigenspace for Frobenius, and those don't always exist.
And quotient objects only give partial information. The quotient objects that Atkin looks at are the lines in E[L]; they correspond to the isogenies. So it's something of size L plus 1, and that gives the combinatorial explosion.
So they both have drawbacks; but, nevertheless, if you combine them, you get an algorithm that is not deterministic anymore, but it is much more efficient than my original algorithm.
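A brief summary of the two cases in the usual terminology (a standard formulation, not the slides'): for an Elkies prime $\ell$, one where $x^2 - tx + p$ has a root $\lambda$ modulo $\ell$, Frobenius preserves a subgroup of $E[\ell]$ of order $\ell$ and acts on it by the eigenvalue $\lambda$, so that

$$t \equiv \lambda + \frac{p}{\lambda} \pmod{\ell}.$$

For an Atkin prime there is no such eigenspace; the factorization type of the modular polynomial $\Phi_\ell(j(E), X)$, of degree $\ell + 1$, only narrows $t \bmod \ell$ down to a set of candidates, and those candidates are later combined by a baby-step giant-step search.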
Okay. Yeah, finally the state of the art, or what I could find on the Web -- Francois Morain may correct me. Pushing these ideas to the limit, together with several other ideas that Francois Morain explained in the previous talk, the largest prime I found is 2,500 digits.
And it is always the same curve. If you type these numbers into Google, you only get elliptic curves everywhere.
It's the zip code of the Polytechnique, and it's the number of Francois Morain, maybe. So this is the number, this is the trace of Frobenius, and this is the record at the moment.
>>: [inaudible].
>> Rene Schoof: Excuse me?
>>: He has a record out.
>> Rene Schoof: 5,000. I didn't find it on the Web. I didn't find it. 5,000. So it is -- it's two times
as many digits.
Then there is something that I will not say anything about in this talk: the p-adic methods. As you all know, if the finite field is not Z mod p but a finite field F_q, where q is a power of a very small prime, then there are very much better algorithms.
In fact, you can also view them, I think, a little bit in the way I just described: instead of computing directly on the elliptic curve, you take a smaller thing and you look at the action of Frobenius there; and instead of the L torsion points, for these p-adic algorithms you use differentials, and you can do the computations.
There are many people working on this -- forgive me if I've forgotten someone -- and I understand from [inaudible] that there is actually a 1982 paper by Katle and Loopkin [phonetic] where a similar algorithm already appears. But this paper was somehow forgotten.
Actually, I remember that I was in Japan at one point, I think around these times, around 2000, and I met a group of Japanese researchers there that had been implementing my algorithm in characteristic 2. They had been working very hard to optimize it in characteristic two, because there are certain advantages; for instance, the coefficients of the polynomials are only 0 and 1, so everything gets really smaller. They optimized it. And then these algorithms came along and killed it completely. They were a little bit angry with me that my algorithm was not good at all in characteristic two.
Okay. Now, there is an application, a very trivial application as you will see, of the point counting algorithm to modular forms. So you can phrase the result in another way.
So let me explain this to you; it's very superficial. Let f be a normalized eigenform of weight two. I will not exactly explain all these words; I guess many of you know what they are. If you do not know what they are, think of it as a Fourier expansion, a certain Fourier series. The parameter is tau, in the upper half plane; q is e to the 2 pi i tau, so it's an exponential, and there is a Fourier series. That's the modular form. It has certain transformation properties that say that it's a modular form for this subgroup of SL_2(Z). Because of this, when I say it is a normalized eigenform, the coefficients have certain properties.
For instance, normalized means that a_1 is 1 and the function a_n is multiplicative; so if n and m are coprime, a_{nm} is just the product of a_n and a_m, and for powers of a prime there is a recursion relation.
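Written out, these are the standard Hecke relations for a normalized eigenform of weight $k$ (at primes not dividing the level; for weight 2 the factor $p^{k-1}$ is just $p$):

$$a_1 = 1, \qquad a_{mn} = a_m\,a_n \ \text{ for } \gcd(m,n) = 1, \qquad a_{p^{r+1}} = a_p\,a_{p^r} - p^{k-1}\,a_{p^{r-1}}.$$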
So that is, for this purpose, what a modular form of weight 2 is for me. Now, when you have an eigenform of weight 2, there is a way to associate a geometric object to it. This geometric object is in general an abelian variety that has a lot to do with the modular form.
And in the simple case that the coefficients a_k of the modular form are in Z, the abelian variety is an elliptic curve. What does this elliptic curve have to do with the eigenform that you started with? The following, for instance: the number of points over F_p is given as p plus 1 minus t, with t the trace of Frobenius if you like, and t is exactly the p-th Fourier coefficient of the modular form. That is the relation. That is the relation.
So, therefore, if you want to compute the Fourier coefficients of this eigenform of weight 2, if for some reason you were interested in this, then you may as well count points on the elliptic curve that is associated to it. That is the same.
And there you can use this algorithm.
So here I say what I just said. If the a_k are not in Z, then the associated geometric object is an abelian variety of higher dimension; and in fact you can still play the same game, you can still compute the Fourier coefficients in a rather efficient way, but instead of my algorithm you should use Pila's algorithm, which is a generalization to abelian varieties.
So here's an example. The smallest, in some sense the lowest, level for which you have a nontrivial example is level 11. And this is the Fourier expansion of the eigenform of level 11; here are the first few coefficients.
And the elliptic curve that you associate to it, which you can compute via the Shimura construction, is this one. Up to isogeny it's this one, the first elliptic curve in William Stein's tables. It's this curve. So counting points on this elliptic curve is the same as determining these coefficients.
So it's sort of strange. This looks completely elementary and explicit. But yet, if you would work this product out and do the computation, it is much slower than counting the points modulo p on the associated elliptic curve.
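The slide's equation is not in the transcript, so the sketch below assumes the standard conductor-11 curve y^2 + y = x^3 - x^2 - 10x - 20, which lies in the isogeny class the speaker refers to (point counts depend only on the isogeny class). It expands the product q * prod (1 - q^n)^2 (1 - q^(11n))^2 and checks the relation a_p = p + 1 - #E(F_p) for small primes:

```python
# Hedged sketch: level 11 eigenform vs. point counts on a curve of conductor 11.
# The Weierstrass model below is an assumption about which curve the slide
# shows; any curve in the isogeny class gives the same point counts.
N = 40

def eigenform_coeffs(n_max):
    """Coefficients of q * prod_{n>=1} (1 - q^n)^2 (1 - q^(11n))^2 up to q^n_max."""
    c = [0] * (n_max + 1)
    c[1] = 1                                  # start from the factor q
    for n in range(1, n_max + 1):
        for m in (n, n, 11 * n, 11 * n):      # multiply by (1-q^n)^2 (1-q^(11n))^2
            if m <= n_max:
                for i in range(n_max, m - 1, -1):
                    c[i] -= c[i - m]
    return c

def count_points(p):
    """#E(F_p) for y^2 + y = x^3 - x^2 - 10x - 20, by brute force (small p only)."""
    total = 1                                 # the point at infinity
    for x in range(p):
        rhs = (x**3 - x**2 - 10 * x - 20) % p
        total += sum(1 for y in range(p) if (y * y + y - rhs) % p == 0)
    return total

a = eigenform_coeffs(N)
for p in (2, 3, 5, 7, 13, 17, 19, 23):        # skip p = 11, where the curve has bad reduction
    assert a[p] == p + 1 - count_points(p)
print("a_p = p + 1 - #E(F_p) holds for p =", (2, 3, 5, 7, 13, 17, 19, 23))
```

For a prime of hundreds of digits the product expansion is hopeless, while point counting is routine, which is exactly the speaker's point.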
Okay. Then the commercial that I want to make is about a generalization. This is work by Couveignes and Edixhoven and also Bosman, but I think the main authors are Couveignes and Edixhoven; and they attribute this question to me, but this is not quite correct. I remember there was the Atkin conference in Chicago, I think in '97 -- no, '95, and I was in a taxi with Henri Cohen and with Elkies, and we raised the question. I don't remember anymore who asked it; maybe again Cohen. The question was: how would you compute a coefficient of an eigenform in polynomial time? Because for weight 2 we know how to do it:
you can use the algorithm to count points on an elliptic curve. How would you do this for higher weights? I forgot to put the emphasis on it. So weight two can be done using my algorithm or Pila's algorithm. Higher weights, this is the question. How would you determine the Fourier coefficients of modular forms, of eigenforms of higher weight, in polynomial time? Is this possible?
And, well, the answer is: yes, this can be done. There is a polynomial time algorithm, invented by Couveignes, Edixhoven and these people, that determines the coefficients of a modular form of weight larger than two in polynomial time.
Of course, the nicest example, the most famous example of such a modular form is the delta function. And the delta function has a Fourier expansion, the sum of tau(n) q to the n, where tau is the Ramanujan tau function. And there's a very simple formula for this Fourier expansion: it's this product here. Well, simple, yes, because you can write it down so easily. But from a computational point of view this is quite useless if you want to compute the p-th coefficient for a very large p. So here are the first few coefficients.
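For concreteness, the same truncated-product trick as in the level 11 sketch above computes the first values of tau(n); it also shows why the product formula is computationally useless for large p, since one needs on the order of p terms of the product:

```python
# Hedged sketch: tau(n) from Delta = q * prod_{n>=1} (1 - q^n)^24, by truncating
# the product.  Fine for small n, hopeless for the huge primes discussed below.
def tau_coeffs(n_max):
    c = [0] * (n_max + 1)
    c[1] = 1                          # the leading factor q
    for n in range(1, n_max + 1):
        for _ in range(24):           # multiply by (1 - q^n) twenty-four times
            for i in range(n_max, n - 1, -1):
                c[i] -= c[i - n]
    return c

tau = tau_coeffs(12)
print([tau[n] for n in range(1, 13)])
# known small values: tau(1) = 1, tau(2) = -24, tau(3) = 252, tau(5) = 4830
assert tau[2] == -24 and tau[3] == 252 and tau[5] == 4830
```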
This is a modular form of weight 12. So instead of two, it's 12. And, in fact, there are some
computational results for -- let me see. Yeah, okay, I just don't know what the next slide is.
Okay. So before I tell you what they can compute and a little bit of how they do it, let me perhaps give you some more motivation why this is an interesting function.
The Ramanujan tau function pops up in several places in number theory. It is related, for instance, to the lengths of the vectors in the Leech lattice: there is a formula that involves the Ramanujan tau function. And you also encounter it in the following way, and this again has to do with elliptic curves. If you don't care about modular forms of higher weight, then perhaps you will like this, and you will see that the Ramanujan tau function also shows up.
So let's look at the following result. We're going to count something: smooth cubics in P^2. So more or less elliptic curves, but with the embedding. And not only smooth cubics -- we know how to count those: a smooth cubic is given by a homogeneous cubic with 10 coefficients, not all of which are 0, so you know how many cubics there are in P^2, and there is the discriminant locus that you should remove to have only the smooth ones.
So what are we going to count? We are not going to count the cubics themselves, but cubics together with a bunch of points on them.
So one cubic and n points. The points must be rational -- I forgot to say everything is modulo p. And since the group PGL_3 of F_p acts on this situation, one gets particularly nice formulas when one divides the number of these tuples by the order of this group.
What happens is the following: for small n, one, two, three, up to nine, this is just a polynomial expression in p. And actually this is not so strange, because the way to count this is not that you should first count the curves and then for every curve the points. No, no, no, you should do it the other way around: you should fix n points in the plane and then count how many curves there are passing through those points.
Because that's a linear algebra problem. You have 10 coefficients and you fix n points. It's a homogeneous system, and if the number of points is less than or equal to nine, then there are solutions, and the set of solutions is a linear space, a projective space. So this is easy.
It's also not so surprising that you get a polynomial there, because the number of points in a projective space is a polynomial in p. So that's what happens if n is at most 9: you get a polynomial. I will show you a few of them on my next slide.
However, as soon as n gets larger than 9, it becomes more complicated. Then, in fact, you no longer get just a polynomial: for 10 the Ramanujan tau function comes in. If you go on, 11, 12, 13, then modular forms for SL_2(Z) of higher and higher weight would come in, and the formulas would get more and more complicated.
So let me show you what it looks like. Here are the first few polynomials, and here's the tenth one. I didn't write it all; it's a little bit longer still.
But so this is one place where, in a very natural way, even if you are interested only in elliptic curves and in counting elliptic curves, the Ramanujan tau function comes up.
So we all love the Ramanujan tau function, and we're interested in it. Here are some properties. It's multiplicative, just like the coefficients of an eigenform of weight two, and there is a similar recursive formula for tau of a power of a prime. Then the Ramanujan tau function satisfies certain congruences; some go back to Ramanujan, to the 1910s and 1920s, and most of them were also proved in that period.
So there are these congruences. And there's an inequality. This inequality should be viewed as the analogue of the inequality that a_p is at most 2 square root p in absolute value for an elliptic curve.
This one is harder to prove. It is a consequence of Deligne's work when he proved the Weil conjectures in 1973. So this is a rather deep inequality. So these are some properties of the Ramanujan tau function.
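In formulas, the standard statements are (the slide itself is not reproduced here):

$$\tau(mn) = \tau(m)\,\tau(n) \ \text{ for } \gcd(m,n)=1, \qquad \tau(p^{r+1}) = \tau(p)\,\tau(p^r) - p^{11}\,\tau(p^{r-1}),$$

a typical congruence is $\tau(n) \equiv \sigma_{11}(n) \pmod{691}$, and Deligne's bound is $|\tau(p)| \le 2\,p^{11/2}$.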
So what Couveignes and Edixhoven and their collaborators present is a deterministic polynomial time algorithm to compute tau of p. It's similar to the algorithm for elliptic curves, except that it has to do with a modular form of higher weight.
And the approach is the same, up to a point. They compute tau of p modulo several primes L, and because of this inequality here, when you have on the order of log p primes L, you can determine tau of p using the Chinese remainder theorem. And how do they do that? That's the question: how do you compute tau of p modulo a prime L? Well, for small primes, to be precise these primes, 2, 3, 5, 7, 23 and 691 -- 691 is not so small, but the other ones are small -- there are these congruences and they give exactly the value of tau of p modulo L.
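Here is a small sketch of just the glueing step. The residues below are taken from the known value tau(11) = 534612 purely to exercise the reconstruction; producing tau(p) mod L in the first place is of course the hard part of the algorithm:

```python
# Hedged sketch of the Chinese-remainder step: recover an integer t with
# |t| <= bound from its residues modulo enough small primes (the product of
# the moduli must exceed 2 * bound).
def crt_signed(residues, moduli, bound):
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        t = ((r - x) * pow(M, -1, m)) % m   # lift x to a solution modulo M*m
        x += M * t
        M *= m
    assert M > 2 * bound
    return x - M if x > M // 2 else x       # representative nearest to 0

tau_11 = 534612                             # known value of Ramanujan's tau at 11
bound = 2 * 11**5 * 4                       # crude integer bound >= 2 * 11^(11/2)
moduli = [5, 7, 13, 17, 19, 23]
residues = [tau_11 % m for m in moduli]     # in the real algorithm these come from V_L
assert crt_signed(residues, moduli, bound) == tau_11
print("recovered tau(11) =", crt_signed(residues, moduli, bound))
```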
L equal to 11 is also rather easy to explain; I will do that in a minute. But for larger L this is much harder. And, in fact, here's an example where L is 19. So they can compute -- this is, I suppose, I didn't check, but I guess this is the smallest prime of a thousand digits; this is the next; this is the next. And they can compute the value of the tau function at these primes, modulo 19, up to sign.
And I believe no one else in the world can do this, but they can do it with this algorithm.
Unfortunately, the implementation that is available at the moment is not so efficient that you can really compute tau of p for very large p, like Francois Morain can do for elliptic curves. Also, the bound is much higher: you see, it is not square root p but p to the 11 over 2, and this is sharp in some sense.
So you need many more L. And it is a very interesting problem to write such a program that can do that. Perhaps a good strategy would be not to think commercially and go immediately for the Ramanujan tau function, because everybody knows what it is, but maybe to be more modest and first try modular forms of weight three or weight four. Then things do not get out of hand so much, because this 11 here is the weight minus 1, so if the weight is lower, then this number is also much smaller.
Now, let me, in the time that is left, very briefly tell you how they proceed. It is very similar to the point counting algorithm for elliptic curves. Instead of the L torsion points, which form a two-dimensional vector space over F_L with an action of Frobenius, they have another two-dimensional vector space. Let me call it V_L. It also exists.
And Frobenius acts on it, and the Frobenius endomorphism has a characteristic polynomial. And it has this form, very much like what happens for elliptic curves, you see, except that the p became p to the 11th, and this t is exactly the Ramanujan tau function of p modulo L. So it is very similar to what happens for elliptic curves. So what is this magical space V_L which substitutes for the L torsion points, this two-dimensional vector space -- what is it?
Well, this is the problem. It is pretty horrible. It was defined by Deligne in 1969, and it is an etale cohomology group, not of a curve, but of an 11-dimensional variety, with values in this group scheme, or this sheaf. This is what V_L is.
In fact, Serre, I think around '68, conjectured that such a two-dimensional space had to exist. And the reason he thought it had to exist was these congruences: if such a two-dimensional representation were to exist, then he could interpret these congruences in terms of two-by-two matrices.
And these are traces. You see, there is a 1 upstairs and p to the 11th downstairs; this would be the trace. And he had this whole picture in his head that these two-by-two matrices would explain everything.
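In today's notation, a standard way to state what Deligne's construction provides (not a transcription of the slide): a representation $\rho_\ell \colon \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}) \to \operatorname{GL}_2(\mathbb{F}_\ell)$, unramified outside $\ell$, with

$$\operatorname{tr}\rho_\ell(\operatorname{Frob}_p) \equiv \tau(p), \qquad \det\rho_\ell(\operatorname{Frob}_p) \equiv p^{11} \pmod{\ell}.$$

For $\ell = 691$, for instance, the representation is reducible with diagonal characters $1$ and $p^{11}$, which is exactly the "1 upstairs, $p^{11}$ downstairs" picture behind the congruence $\tau(p) \equiv 1 + p^{11} \pmod{691}$.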
And Serre actually explained this story a few years ago. First he asked [inaudible] to prove it, because he also had an idea where to find this V_L. And [inaudible] didn't answer the letter or something. And then he asked Deligne, and Deligne proved it almost immediately; this conjecture of Serre, that V_L existed, didn't last more than a month before it was proved.
But the drawback of this definition is that it is such an abstract definition. It is difficult to put this on a computer. What the hell is the eleventh etale cohomology group of this variety with values in this group scheme? You can change it a little bit: you can go to P^1. This is a standard reduction, and then it becomes an H^1. And an H^1 is not so bad, because an H^1 has to do with curves already.
But then there is some sheaf here, and all the difficulties are hidden in the sheaf. So this is also quite useless.
So this is the key problem. The key problem of the generalization to higher weight is that the definition of the two-dimensional space on which Frobenius acts is so incredibly abstract. We do not know how to put this efficiently on a computer.
Right, that's what I just said. So it is unsuitable. There is one case where it is actually quite doable, and it can be put on a computer: when you have the first etale cohomology group of a curve with values in Z/L. This is just a complicated way to say the L torsion points of the Jacobian.
So if X is an elliptic curve, this is just the L torsion points. In general, it is the L torsion points of the Jacobian of X. And there you can do computations: there is Cantor's algorithm, there are things that you can use to do computations.
So what Couveignes and Edixhoven do is relate this group of Deligne to the H^1 of a curve with values in Z/L. And it is known how to do this. It is known that this group -- this is the one that you want -- is related to this cohomology group; and it looks just as horrible, but it isn't. This is the H^1 of a curve with constant coefficients. These are torsion points. And the curve -- you know it -- is a modular curve. This is the theory.
In fact, it has to do with congruences. Another, less geometric, way to say this is that there is always a weight two modular form, not for Gamma_0(N) but for the slightly smaller group Gamma_1(N), which is this group, such that tau of n is congruent to a_n modulo L, where the a_n are the Fourier coefficients of that normalized eigenform for this group.
And, as I just said, the theory tells you that this means precisely that this two-dimensional vector space that you are interested in for the computation, the one you cannot lay your hands on because it's so abstract, sits inside this H^1. And this H^1 is not so scary: it is torsion points of the Jacobian.
So it looks great. So we're done? This space is two-dimensional and it sits in something that we understand, where we can do computations: L torsion points of a Jacobian.
So now comes the problem. Here's an example. At first it still looks great. So let's do this. If you do this for 11, then the X_1(11) -- no, let me first say: if you do this for 11, then this modular form of weight 2 turns out to be this one. This is my previous example. And so you have these coefficients. Now, we know how to compute the coefficients of this modular form, because by the Shimura construction those are exactly the traces of Frobenius acting on the L torsion points of the elliptic curve. So you only need to count points on the elliptic curve. And it so happens that in this case the Jacobian is a one-dimensional abelian variety: it's an elliptic curve. So V_L, V_11 in this case, is contained in this group, and it is necessarily equal to it because they have the same dimension.
We are lucky here. And therefore you can actually compute the Ramanujan tau function modulo 11 very easily by just counting points on this elliptic curve; then you know the number of points, and hence tau of p, modulo 11.
Because in this case X_1(11) has genus 1; it is a very simple curve. What happens, however, very unfortunately, is that when L grows, the genus of this curve X_1(L) grows quadratically.
So when L is very small it can be maybe one or two. But as L goes up, the dimension of this vector space grows like L squared, roughly L squared over 12.
And this is a disaster, because this is the dimension. In the elliptic curve algorithm the dimension I mentioned was always 2; it didn't grow with L. Now it grows with L, and it is too large: you no longer have a polynomial time algorithm. The polynomials you need to describe the torsion points of this monster are too large. It gets out of hand.
So this is the key problem that they solve. I won't tell you in detail how they solve it -- I'm not able to -- but they actually work with numerical approximations. They represent the Jacobian of X_1(L) as a complex torus, and then they can cut out the two-dimensional vector space they're interested in by taking kernels of Hecke operators T_m minus their eigenvalue. All these things are computed complex analytically with enormously accurate approximations, and to estimate the running time of the algorithm they need to estimate how accurate the approximations should be
and how big the coefficients of the algebraic objects become. And this is very delicate. It is very complicated. They use Arakelov theory, and delicate theorems from the Arakelov theory of modular curves tell them how big the coefficients are and how accurate the approximations should be. And they survive all this, and they end up with a deterministic polynomial time algorithm.
And they wrote a book on this. They are the editors, because there are three more authors, as I told you. And you see, times have changed: this book, with this algorithm, is published by Princeton, and it is a very big book. So this is the commercial for this book. And to finish my talk, I want to mention that this summer Peter Bruin, a student of Bas Edixhoven, defended his thesis in Holland. In his thesis he generalized this whole algorithm.
Couveignes and Edixhoven -- actually I told you only about the Ramanujan tau function, but they can do their thing for every eigenform for the modular group SL_2(Z) of arbitrary weight, not only Ramanujan's tau. This was generalized by Peter Bruin by allowing levels: instead of SL_2(Z) he also allows a level. So it is more general.
And the algorithm that Peter Bruin finds is probabilistic, and the running time is polynomial under the assumption of GRH. So the result is slightly weaker, but it is still very, very nice.
So probably Peter Bruin will have no difficulty publishing this paper. It is a very nice result. But he also has a commercial -- marketing, as [inaudible] calls it -- a completely elementary problem that you can solve with this algorithm.
So: the number of ways that a prime number can be written as the sum of M squares. This is a very elementary question. Let's say 20 squares: in how many ways can you write a given prime number as a sum of 20 squares? This is an elementary question that Jacobi could have asked.
And if M is even, this number is precisely a Fourier coefficient of a certain modular form, a power of a suitable theta series.
And the weight of this form is M over 2, and therefore M should be even, because weights in this business are even. So for odd M he has no good algorithm; there is no good algorithm to determine this. So this is an application: a probabilistic algorithm that under GRH is polynomial time.
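Concretely, in the usual notation:

$$r_M(n) = \#\bigl\{(x_1,\dots,x_M) \in \mathbb{Z}^M : x_1^2 + \cdots + x_M^2 = n\bigr\} = [q^n]\,\theta(q)^M, \qquad \theta(q) = \sum_{k \in \mathbb{Z}} q^{k^2},$$

and for even $M$ the power $\theta^M$ is a modular form of weight $M/2$, whose $p$-th coefficient is what the algorithm computes.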
And this brings me to my last suggestion, that for odd M there is no good algorithm.
That brings me to the following. There are modular forms of half-integral weight, and these are very interesting. This is the Fourier expansion of a modular form of weight 3 over 2, and the coefficients are Hurwitz class numbers. So apart from minor details these are essentially the class numbers of imaginary quadratic fields.
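The weight 3/2 form the speaker presumably has in mind is Zagier's Eisenstein series (an assumption, since the slide is not reproduced):

$$\mathcal{H}(\tau) = -\tfrac{1}{12} + \sum_{n \ge 1} H(n)\,q^n,$$

where $H(n)$ is the Hurwitz class number, which agrees with the class number of the imaginary quadratic order of discriminant $-n$ up to small correction factors.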
Now suppose that you had a polynomial time algorithm to compute these numbers, or even to compute them modulo L.
Suppose that you could compute the coefficients of modular forms of weight 3 over 2 modulo L. Then you could compute class numbers modulo L, and that would be completely new. So this is my challenge problem for you: find an efficient algorithm to compute Fourier coefficients of forms of half-integral weight. That would be very interesting, I think.
Thank you for your attention. [applause]
>>: Any questions for the speaker?
>>: You calculate [inaudible] you calculate [inaudible] forms of [inaudible] you get class numbers. You can calculate class numbers [inaudible].
>> Rene Schoof: Ah, in fact there's an anecdote about this. That's what I thought, too. That's what I thought. This is what I told [inaudible] many years ago. I said: you really should compute coefficients of modular forms, because if you can do it for quadratic fields then you can factor, you can factor the discriminant.
And that apparently got Bas Edixhoven going, and now we've got this book.
But the problem is you only compute tau of p's and a_p's, so before you can compute tau of n you have to factor n.
>>: Oh.
>> Rene Schoof: Ah. So --
>>: Any other questions?
>> Rene Schoof: Well, whoever.
>>: Another one, which Atkin liked very much: if you look at the form of weight minus a half, you've got the partition function.
>> Rene Schoof: Okay. Ah. Yeah. Okay. I don't know.
>>: There are congruence relations, all sorts of them.
>> Rene Schoof: Yeah. That is also very interesting. I never thought about that. I never thought of negative weight modular forms. So you're suggesting a similar algorithm to compute the partition function?
>>: I was wondering.
>> Rene Schoof: I don't know. Of course, because the partition function is of negative weight, it's very big. Yeah.
>>: Question?
>>: Yeah, you showed us these large p [inaudible] primes, but [inaudible] they only got tau of p up to sign.
>> Rene Schoof: I don't know why that is, but you could argue: if I can compute tau of p squared rather than tau of p -- for tau of p squared I also have a good bound -- and I do not really know where this ambiguity comes from, I have to admit. So let's say they compute tau of p, but I claim that I can compute tau of p. I don't know the reason why; I leave that up to you.
>>: Other questions?
>>: I want to check whether it's 0 or not. Is it [inaudible].
>> Rene Schoof: Not that I know. You mean 0 in absolute value?
>>: I want to check the absolute.
>> Rene Schoof: More recently a record was found; you saw it on the Internet. I don't know how to do it. In fact I think they found it by exhaustive search and by expanding -- by using the trace formula maybe. I don't know how to do that quicker.
>>: Perhaps one more question? Well, if you'll join me in thanking Rene.
[applause]