>>: Okay. It's my pleasure to introduce Neal Koblitz. Neal and I have been
colleagues and friends for almost 25 years now. I've learned over the years that
when Neal has something to say, it's always worth listening to.
So he will speak to us today on "My last 24 years in crypto: a few good judgments
and many bad ones." Neal.
>> Neal Koblitz: Thank you, Scott. Thank you for the opportunity to speak here.
I won't be saying anything about the invention of ECC. That's why I have 24
years in the title.
[laughter].
Rather, my basic theme will be about how easy it is to make bad misjudgments
in cryptography. And I'll illustrate that with an abbreviated history of my own
mistakes over those years.
But the early part of my talk will be about how conventional wisdom, perhaps,
can sometimes be wrong, and that's based on a paper that will hopefully appear
in the finite future in the special ECC issue of the Journal of Number Theory.
This is a joint paper with Alfred Menezes and my wife Ann Hibner Koblitz. In
the meantime, you can easily find it on the IACR ePrint server; it's the only
paper there with the word serpentine in its title. It's called "ECC: The
Serpentine Course of a Paradigm Shift."
The first part of my talk will be about a section of that paper concerned with the
security implications of isogeny walks in the isogeny class of an elliptic curve.
That's section 11 of the paper. But I also want to say that I strongly recommend,
if you look at the paper, that you read section 13, which contains a discussion
of such interesting topics as the family life of gorillas, the controversy over
whether or not the ancient Native American tribes of Arizona were warlike, the
implications of Alan Sokal's hermeneutics of quantum gravity, smart houses with
robotic butlers, and other topics that are rarely treated in the pages of the
Journal of Number Theory.
[laughter]
So to start, the conventional wisdom in cryptography is that, when possible, it's
always a good idea from a security standpoint to choose parameters as randomly
as possible.
And, in particular, in both elliptic and hyperelliptic curve cryptography, the safest
choice is to have defining equations with randomly generated coefficients.
Continuing with the conventional wisdom, it might be all right -- probably all right
to use special curves for reasons of efficiency, if you really want to, but some day
that choice of curves with special properties might lead to an easier attack by an
adversary.
So there's some risk associated with that choice. That is the conventional
wisdom. Now, in 1991, I proposed the use of nonsupersingular curves defined
over the field of two elements, of which there are two, with these two equations;
they are also called anomalous binary curves.
Because they seemed to me to have some efficiency advantages over random
curves. Well, the NSA liked these curves, and at Crypto 1997 Jerry Solinas gave
a talk presenting a thorough and definitive treatment of how to optimize point
multiplication on these curves. I believe it was the first talk given publicly at a
Crypto meeting by an NSA person.
At present, these curves are one of three sets, three families of
NIST-recommended curves, each over five possible choices of finite field at
different security levels. Now some people have been mistrustful of this
particular family of anomalous binary curves in part because of the conventional
wisdom that I gave above that argued for random parameter selection.
There have been two other arguments besides the conventional wisdom that I
want to mention. One of them: around the time that I presented the proposal for
these curves, Claus Schnorr very much objected. In fact, he objected to the use
of any elliptic curves over binary fields because, as Vic Miller explained this
morning, in the early '80s there was the L-of-one-third algorithm by Coppersmith
to solve the discrete log problem in the multiplicative group of a binary field.
And until somewhat later, about a decade later with Dan Gordon's algorithm, it
wasn't known that there would also be L-of-one-third algorithms for the discrete
log in the multiplicative group of more general finite fields. So the initial
impression during a period of about a decade was that the multiplicative group
was far more vulnerable in characteristic two. So Schnorr thought maybe that
vulnerability would carry over to elliptic curves defined over binary fields.
But that argument basically fell through when L one-third algorithms were
developed for prime fields and other fields.
There's also a second argument which was harder to refute. It took the form of a
syllogism. The NSA wants us to use these curves. We don't trust the NSA.
Therefore, we don't trust these curves.
Now, this is harder to refute. But you can use a type of organizational analysis,
sort of social science, to explain why it is that I don't accept, and many people
don't accept this syllogism and are willing to use these curves. So let me present
that.
NSA isn't a monolith. Just because NSA mathematicians recommend a family of
curves, it doesn't necessarily follow that they're no good. Now, I think the best
way to depict this organizational analysis is with the use of PowerPoint clip art.
[laughter]
So here is my analysis that the people we love to hate wearing the black hats at
NSA are not the same ones as the ones who are recommending these curves.
So the nice thing I should say about this organizational analysis is that it carries
over -- nothing special about the NSA here. This organizational analysis carries
over to other organizations that are of relevance to cryptography. Thanks to the
miracles of PowerPoint, especially the duplicate slide feature.
[laughter]
Here's the corporate world, a similar dichotomy. And the university world
dichotomy. By the way, my wife, who is a professor of women's studies, objected
that I've used male images for all six of these people. But my answer to that is
that, in the first place, there aren't six images here; there's one image repeated
six times.
And in the second place, I just couldn't find in PowerPoint clip art any depictions
of dweeby nerd heads who are female. So that's why I used these images.
>>: [inaudible].
>> Neal Koblitz: Pardon? Okay. Now, to return to the issue here of random
versus special parameter selection, Alfred Menezes and I found some reasons to
question the conventional wisdom that the random choice is always the more
secure one.
We claim that there are various scenarios where someone might choose ECC
with a special curve and end up better off from a security point of view,
conceivably, than someone who chooses a random curve.
And these scenarios are based on recent work on isogenies. That's the subject
of section 11 of the serpentine paper. So suppose we have two curves defined
over F_q. An isogeny is a nonconstant rational map that takes the point at
infinity to the point at infinity. Its degree, in our setting anyway -- we'll be
dealing with ordinary curves -- is also the order of the kernel of the isogeny.
There's a dual isogeny going the other way in such a case, and this gives us an
equivalence relation: the isogeny class of a given elliptic curve. And by a
fundamental theorem of Tate, which Gerhard Frey talked about this morning, two
curves are isogenous over the field F_q if and only if they have the same number
of F_q-points.
Now, from a practical standpoint, the basic situation, at least in our setting, is
that low-degree isogenies are easy to construct but high-degree isogenies
usually are not -- although one has to be careful about saying that. In some
settings high-degree isogenies can be constructed efficiently, but not in the
setting we're going to be dealing with.
Now, within the isogeny class, things break down further according to the
endomorphism ring. Here by an endomorphism we just mean an isogeny from
the curve to itself, which in general is defined over the algebraic closure of the
field of definition; but we're going to be working only with the ordinary,
nonsupersingular case, where the trace t given here is prime to q. In that case
the endomorphisms are all defined over the field of definition. So that will be
the situation we'll be in.
Now, the endomorphism ring, denoted End(E), naturally contains the integers as
a subring, and to determine it further, we look at the discriminant, capital Delta
here, which is the trace squared minus 4q.
Now, the field generated by the square root of the discriminant is the complex
multiplication field. If we write the discriminant as c0 squared times the
fundamental discriminant of that imaginary quadratic field, then c0, the square
part, plays a big role in classifying the endomorphism rings of all the elliptic
curves that have the same number of points. In general the endomorphism ring
is an order in the ring of integers of the CM field, and its index in the full ring
of integers, denoted c, is called the conductor of the endomorphism ring. The
basic fact about that is that c is a divisor of c0, and there's a one-to-one
correspondence between divisors of c0 and possible endomorphism rings.
So the elliptic curves that are isogenous to a given elliptic curve can be
partitioned according to their endomorphism ring, and the endomorphism classes
are determined by the conductors, which are in one-to-one correspondence with
the divisors of c0.
Okay. And the other thing to know is that the number of isomorphism classes of
elliptic curves in a given endomorphism class is equal to the class number of the
order, and that's essentially, or very close to, the conductor c times the class
number of the CM field. So -- and this is a basic point that we use -- the number
of isomorphism classes in a given endomorphism class is basically proportional
to the conductor c. A large conductor c means a large class.
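Just to make this classification concrete, here is a small back-of-the-envelope sketch of my own (not from the talk or the paper). It takes a toy field size q and a hypothetical trace t, computes the discriminant, splits off the square part c0, and lists the divisors of c0, which index the possible endomorphism rings:

```python
# Toy illustration (not from the talk): possible endomorphism rings for an
# ordinary curve over F_q with Frobenius trace t.  Trial division only, so
# this is meant for small illustrative numbers.
from math import isqrt

def squarefree_decomposition(n):
    """Write |n| = c0^2 * m with m squarefree; return (c0, m with the sign of n)."""
    sign = -1 if n < 0 else 1
    n = abs(n)
    c0, m = 1, 1
    d = 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        c0 *= d ** (e // 2)
        if e % 2:
            m *= d
        d += 1
    m *= n  # leftover prime factor, if any
    return c0, sign * m

def conductor_data(q, t):
    delta = t * t - 4 * q                  # discriminant of Z[pi]
    c0, m = squarefree_decomposition(delta)
    if m % 4 == 1:                         # fundamental discriminant of Q(sqrt(delta))
        d0 = m
    else:
        d0 = 4 * m
        c0 //= 2                           # exact for t^2 - 4q of an ordinary curve
    assert c0 * c0 * d0 == delta
    divisors = sorted(d for d in range(1, c0 + 1) if c0 % d == 0)
    return delta, d0, c0, divisors

# Hypothetical toy values: q = 2^11 and a trace t with |t| <= 2*sqrt(q), t odd.
q, t = 2**11, 67
delta, d0, c0, divs = conductor_data(q, t)
print(f"Delta = {delta} = {c0}^2 * ({d0})")
print("possible conductors (divisors of c0):", divs)
```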
For example, at one extreme, if the discriminant is square-free, then there's only
one endomorphism class: all of the curves have complex multiplication by the full
ring of integers.
On the other hand, if c0 itself is a large prime, then there are two endomorphism
classes within the isogeny class. One is a very small one consisting of the
curves whose endomorphism ring is the full ring of integers, and the remaining
curves, almost all of them, have endomorphism ring of conductor c0, which is
prime in this case.
Now let L denote a prime. A basic fact is that if there's a degree-L isogeny
between two curves, then either the two curves have the same endomorphism
ring, or else their conductors differ by a factor of L.
Now, when we talk about two endomorphism classes that have different
conductors, by the conductor gap we mean the largest prime that divides one
conductor and not the other. And the basic situation, as far as computation goes,
is that if there's a large conductor gap -- that is, a large prime that divides one
conductor and not the other -- then one cannot go from a curve in one class to a
curve in the other by a string of low-degree isogenies, and to the best of my
knowledge there's no efficient way known to construct an isogeny that goes from
one class to the other.
Conversely, by a result of Jao, Miller, and Venkatesan, within an endomorphism
class, or among classes with small conductor gaps, one can move around
efficiently and uniformly using relatively easily constructed low-degree isogenies.
Now, the security implications of these isogenies are that they allow one to
transport the discrete log problem from one curve to another. So you can say
that the discrete log problem is random self-reducible within a set of
endomorphism classes with small conductor gaps.
So you can efficiently move around, and in some sense the security of your curve
is determined by the lowest security of curves in that class.
And so by the L-conductor-gap class of a curve E, we mean all of the
endomorphism classes in its isogeny class that have a conductor gap from E less
than L. In other words, the ratio between the conductor of E and the conductor
of any other curve in its L-conductor-gap class is a smooth number, so L is like
a smoothness bound: there's no prime larger than L that divides the conductor of
E but not the conductor of a curve in that class, or vice versa.
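In symbols, one rough way to write down the definitions just described (my own paraphrase, not a formula from the paper) is:

```latex
% Conductor gap between two endomorphism classes with conductors c_1 and c_2
% (take the gap to be 1 if the conductors have the same prime divisors):
\mathrm{gap}(c_1, c_2) \;=\; \max\{\,\ell \text{ prime} : \ell \mid c_1 c_2,\ \ell \nmid \gcd(c_1, c_2)\,\}
% The L-conductor-gap class of E is then the set of curves
\mathcal{C}_L(E) \;=\; \{\, E' \text{ isogenous to } E \;:\; \mathrm{gap}(c(E), c(E')) < L \,\}
```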
So here's a hypothetical scenario. Suppose that an algorithm were found that
solves the discrete log problem in time T1 for a proportion epsilon of all elliptic
curves over F_q, where epsilon is a very small but not negligible proportion of
all curves, and where the property of being a weak curve is independent of the
isogeny and endomorphism class -- so it's basically the same proportion epsilon
within any such class.
And I'm also assuming here that once you get to an elliptic curve you can quickly
and efficiently identify whether or not it's a weak one.
Then what this means is that the discrete log can be found on any curve in an
L-conductor-gap class in time roughly T1 plus T2 over epsilon, where T2 denotes
the time needed to construct these isogenies.
So we're moving around within a class where the isogenies are relatively easy to
construct, and T2 is an estimate for the time it takes to construct such an isogeny.
So basically T2 over epsilon is the amount of time it takes us to get to a weak
curve, and then it takes us time T1 to solve the discrete log problem there.
Now, this assumes that within the L-conductor-gap class, where L is this bound
on the conductor gap, we're able to construct isogenies in time T2 -- T2 is the
amount of time it takes us to construct isogenies that take us throughout this
L-conductor-gap class. And we're assuming that the class contains at least 1 over
epsilon curves, if we're going to have any chance of finding a weak curve. So
that's another assumption that's being made in this time estimate.
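Here is a tiny back-of-the-envelope version of this estimate, a sketch of my own with completely made-up numbers for epsilon, T1, T2, and the class size, just to show how the pieces fit together:

```python
# Back-of-the-envelope version of the hypothetical scenario above.
# All numbers are made up for illustration; nothing here is from the talk's data.
import math

epsilon = 2.0**-20      # hypothetical fraction of weak curves
T1 = 2.0**60            # hypothetical time to solve the DLP on one weak curve
T2 = 2.0**30            # hypothetical time to build one low-degree isogeny
class_size = 2.0**285   # hypothetical number of curves reachable by isogeny walks

expected_steps = 1.0 / epsilon          # ~ steps until a weak curve is hit
total_time = T1 + T2 * expected_steps   # the T1 + T2/epsilon estimate

assert class_size >= expected_steps, "class too small: walk may never reach a weak curve"
print(f"expected isogeny steps ~ 2^{math.log2(expected_steps):.0f}")
print(f"total attack time     ~ 2^{math.log2(total_time):.1f}")
```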
Now, it's the possibility of random isogeny walks through a conductor-gap class
that might, under various hypothetical circumstances, make a random curve less
secure than a special curve.
It's crucial to note that for a random curve we can basically move all around: all
isogenous curves are in the same L-conductor-gap class for quite small L,
because for an elliptic curve with random coefficients, the discriminant has
negligible probability of being divisible by the square of a large prime.
It's going to be almost square-free. So here's a hypothetical scenario -- there are
several examples in the paper; the one I want to talk about is over a prime degree
extension of the field of two elements.
Let's suppose that some version of, say, Weil descent, or some other approach,
maybe one that's not currently known, some day leads to a faster-than-square-root
attack on a small but nonnegligible proportion of curves defined over this
particular prime degree extension of F2. Now, I should say that this is not as
farfetched as it might seem, because there's been a lot of study of Weil descent
in the context of composite degree extensions of F2, and there are certain
composite degree extensions of F2 where the Weil descent methods have been
shown to work efficiently, or relatively efficiently, and give a faster-than-square-root
attack on the discrete log problem for a certain proportion epsilon of curves that
is small but not negligible.
So this hypothetical assumption is very speculative for prime degree extensions
of F2, but it is not speculative for certain composite degree extensions of F2.
Okay. Now, as I mentioned, the NIST 2000 digital signature standard
recommendations list five elliptic curves over prime fields, five random elliptic
curves over binary fields, and five anomalous binary curves over those same five
binary fields.
So for each of five binary fields, they suggest one random curve and one
anomalous binary curve. The highest security level is the degree 571 extension
of F2, which is expected to provide more than enough security to protect a
256-bit AES key. So that's the highest level of security.
So let's look at that case of the anomalous binary curve versus the random curve
over the field of 2 to the 571. Now, the conventional wisdom would be that, if
anything, the random curve, which they denote B-571, would be the safer choice
than the anomalous binary curve K-571.
However, let's suppose that a proportion epsilon of all curves over this particular
field of degree 571 could be attacked by a new faster than square root algorithm
and again we'll assume that the weak property is independent of isogeny and
endomorphism class.
Now, it turns out that the particular random curve that they generated and that
NIST recommends, B-571, has a square-free discriminant. So isogeny walks can
fan out over the entire isogeny class, which consists of about 2 to the 285 curves.
There are plenty of curves there to range through randomly. And so after about
1 over epsilon isogenies, the discrete log problem on the NIST curve can be
transported to a weak curve, under these very hypothetical assumptions.
Meanwhile, back in the land of special curves, the anomalous binary curve over
the same field has a discriminant of a radically different form, namely minus 7
times a perfect square, and it so happens, just by coincidence I guess, that the
number being squared is divisible by a very large prime, a 263-bit prime.
Now, the endomorphism ring of the special curve -- the curve defined over the
field of two elements -- has conductor one. In other words, it's easy to see
explicitly that it has endomorphisms by the full ring of integers of Q of root
negative 7.
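To see a small instance of this kind of discriminant, here is a sketch of my own (not part of the talk). It uses the standard trace recurrence for the curves y^2 + xy = x^3 + ax^2 + 1 over F_2 (trace -1 for a = 0 and +1 for a = 1 over F_2) to compute the discriminant over F_(2^m) for a few small m and check that it has the form minus 7 times a perfect square. It makes no attempt at m = 571 or at the 263-bit prime factor mentioned above:

```python
# Sketch (not from the talk): discriminants of the anomalous binary (Koblitz)
# curves E_a: y^2 + xy = x^3 + a*x^2 + 1 over F_{2^m}.
# The Frobenius trace over F_{2^m} satisfies the standard recurrence
#   s_0 = 2, s_1 = t_1, s_m = t_1*s_{m-1} - 2*s_{m-2},
# where t_1 = -1 for a = 0 and t_1 = +1 for a = 1.
from math import isqrt

def trace(t1, m):
    s_prev, s = 2, t1
    for _ in range(m - 1):
        s_prev, s = s, t1 * s - 2 * s_prev
    return s

for a, t1 in [(0, -1), (1, 1)]:
    for m in [5, 7, 11, 13]:               # small toy extension degrees
        t = trace(t1, m)
        delta = t * t - 4 * (2**m)         # discriminant, should be -7 * square
        assert delta % -7 == 0
        c0 = isqrt(delta // -7)
        assert -7 * c0 * c0 == delta, "not of the form -7 * c0^2"
        print(f"a={a}, m={m:2d}: t={t:4d}, Delta = -7 * {c0}^2")
```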
So suppose we want to range from that special curve to other curves. As a
practical matter, to the best of our knowledge -- at least certainly to the best of
my knowledge -- we would just absolutely never be able to construct an isogeny
that would bridge the gap of this 263-bit prime. So we wouldn't be able to go
from our special curve to any of the curves whose conductor is divisible by that
263-bit prime.
And so if we just move around within our 2 to the 262 conductor-gap class, there
are only about four million curves there.
So if epsilon is significantly less than 1 out of four million, the discrete log
problem on K-571 probably cannot be transported to a weak curve by means of
isogenies.
So under such a hypothetical assumption, K-571 is actually likely to be the safer
choice, contrary to conventional wisdom. Okay. So what conclusions do we draw
in our paper about this? We do not say that this means we should prefer special
curves over random ones. We don't simply reverse the signs on the conventional
wisdom and say just do the opposite.
We certainly have not given convincing evidence for that. All we can really
conclude is that we don't know. It's an open question. It's a judgment call.
And I think that one of the basic things we're learning as time goes on is that an
awful lot of cryptography is that way. And despite the natural desire of
cryptographic researchers to give an impression of certainty about what we do, in
fact cryptography is in many ways more an art than a science. Let me also
mention, sort of parenthetically and without giving details, that one can give a
similar example over a prime field -- a very elementary example, in fact, of an
elliptic curve. Just choose a random prime b and a random even number a such
that a squared plus b squared is a prime p. And we're going to be using the
simplest possible elliptic curve here. Also, we'll want to assume that either
p plus 1 minus 2a or p plus 1 plus 2a is twice a prime, because that's going to
mean that the number of points on this curve is twice a prime.
So in that case, for the elliptic curve over the field of p elements, where p is
constructed this way -- you have to choose the coefficient a to be in a suitable
quartic residue class in F_p, but it's easy to do -- you find that y squared equals
x cubed minus ax has 2N points, where N is a prime. And it's the only curve up
to isomorphism in its conductor-gap class, where the conductor gap is basically b.
In that case you actually have a case where the c0 in my earlier notation, the
maximum possible conductor, is equal to b. That's why we chose b to be prime.
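As a rough illustration of this construction, here is a small search sketch of my own (toy bit sizes, probabilistic primality testing); it only finds numbers a, b, p, N of the required shape and does not construct the curve coefficient, which, as just described, has to be chosen in a suitable quartic residue class mod p:

```python
# Sketch (my own): search for toy parameters of the shape described above:
#   b a random prime, a a random even number, p = a^2 + b^2 prime,
#   and p + 1 - 2a or p + 1 + 2a equal to 2*N with N prime.
# The curve y^2 = x^3 - c*x itself (c in a suitable quartic residue class mod p)
# is not constructed here.
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probable-prime test (fine for an illustration)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def find_parameters(bits=30):
    while True:
        b = random.getrandbits(bits) | 1        # candidate odd prime
        if not is_probable_prime(b):
            continue
        a = random.getrandbits(bits) & ~1       # force a even
        p = a * a + b * b
        if not is_probable_prime(p):
            continue
        for order in (p + 1 - 2 * a, p + 1 + 2 * a):
            if order % 2 == 0 and is_probable_prime(order // 2):
                return a, b, p, order // 2

a, b, p, N = find_parameters()
print(f"a = {a}\nb = {b}\np = a^2 + b^2 = {p}\ngroup order 2N with N = {N}")
```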
And so you have a situation where you have this one little curve that you write
down the equation for, and it's completely isolated in its isogeny class, in the
sense that all the other curves in the isogeny class have conductor b.
They are basically unreachable from this curve using isogenies, because we
can't in practice construct an isogeny of degree b that would take it to another
curve. So this would be another, very similar example where, under certain
hypothetical assumptions, this curve would be safer
than a random curve over the same prime field. However, in this case there is no
family of special curves recommended by NIST over prime fields.
So you're sort of stuck with the random ones, whether you like it or not. Also, I
want to remark that my Ph.D. student Wehan Wang -- W-a-n-g -- has found that a
very similar situation exists for genus 2 curves. That is, if you look at curves over
a prime field whose Jacobians have a large endomorphism ring -- like, say, the
ring of integers of the field of fifth roots of unity -- they're often quite isolated, in
the sense that you can't travel widely from them to other curves using isogeny
walks.
So I think the same type of considerations will be relevant in hyperelliptic curve
cryptography in genus 2.
Okay. So now, in the second half of my talk, I want to shift gears and talk a little
bit about the history of embarrassing misjudgments that I've made in the last 24
years, and then make some general comments after that.
This will not be a complete history of that. The first major misjudgment I made
was in the late 1980s. It seemed to me that any elliptic curve group would be
secure as long as its order is prime or almost prime.
And so when I gave talks or wrote about it -- I wrote an introductory textbook in
1987 -- I thought, well, for pedagogical reasons why not use the simplest
possible curves.
So I used these curves: the first one was discussed earlier today, y squared
equals x cubed minus x, with 4 dividing p plus 1, and y squared plus y equals
x cubed, with 3 dividing p plus 1. Both of those curves, provided that p has the
appropriate divisibility property, have group order p plus 1. It's a very easy
exercise for students to show this. So why not use these as an illustration of
cryptography? And as far as I was concerned they would be secure, so this was
a nice example to give at every opportunity.
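Since the group order is described as an easy exercise, here is a tiny brute-force check of my own (naive point counting, only sensible for very small primes) that these two curves do have exactly p + 1 points when p satisfies the stated congruences:

```python
# Naive check (my own, for tiny primes only): the supersingular curves
#   y^2 = x^3 - x     over F_p with 4 | p + 1
#   y^2 + y = x^3     over F_p with 3 | p + 1
# have exactly p + 1 points, counting the point at infinity.

def count_points(p, lhs, rhs):
    """Count solutions of lhs(y) == rhs(x) over F_p, plus the point at infinity."""
    count = 1  # point at infinity
    for x in range(p):
        for y in range(p):
            if lhs(y, p) == rhs(x, p):
                count += 1
    return count

for p in [7, 11, 19, 23]:                      # primes with p = 3 (mod 4)
    n = count_points(p, lambda y, q: y * y % q, lambda x, q: (x**3 - x) % q)
    print(f"y^2 = x^3 - x   over F_{p}: {n} points (p+1 = {p+1})")

for p in [5, 11, 17, 23]:                      # primes with p = 2 (mod 3)
    n = count_points(p, lambda y, q: (y * y + y) % q, lambda x, q: x**3 % q)
    print(f"y^2 + y = x^3   over F_{p}: {n} points (p+1 = {p+1})")
```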
But it also turns out that, not so much over prime fields as over field extensions
of F2 and F3, these two equations have some nice efficiency advantages for
computing point multiples: basically, you get doubling of points for free in
characteristic 2 and tripling of points for free in characteristic 3. An interesting
family of curves, I thought.
Then, as was noted, in 1991 the Menezes-Okamoto-Vanstone result used the Weil
pairing to give a reduction of the elliptic curve discrete log problem to the
discrete log problem in the multiplicative group of an extension of the field of
definition, and for supersingular curves such as the two written above, the
extension degree is very, very small. In fact, it's two.
So these were really weak curves. Especially the ones that I had written down
with extension degree two.
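In symbols, the reduction goes roughly as follows (a standard sketch in my own words, not the talk's notation): if P has prime order n and the embedding degree is k, so that n divides q^k - 1, then choosing a point S with the Weil pairing e_n(P, S) not equal to 1, the pairing carries Q = xP to

```latex
% MOV-style transfer of the discrete log (my paraphrase):
e_n(Q, S) \;=\; e_n(xP, S) \;=\; e_n(P, S)^{\,x} \;\in\; \mathbb{F}_{q^k}^{\times},
% so x can be recovered by solving a discrete log in the multiplicative group
% of F_{q^k}; for the supersingular curves above, k = 2.
```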
And this came as a big surprise to me. And made me feel very embarrassed that
I had used these curves so often for expository purposes.
This basically kills supersingular curves for ECC. And it was sort of traumatic for
people like me who had found them so much fun to use in talks and so on.
But I should say that the one thing that was even more surprising to me than the
Menezes-Okamoto-Vanstone algorithm, which killed ECC for supersingular
curves, was the fact that these same supersingular curves, or closely related
ones, would return from the grave.
They had a resurrection. Ten years later they came roaring back when
pairing-based cryptography took the research community by storm.
So that also came as a big surprise to me. And, of course, pairing-based
cryptography is not used only with supersingular curves; it's also used with low
embedding degree nonsupersingular curves, and it's used in some commercial
settings with supersingular curves. So this was quite interesting.
The next embarrassing episode -- actually, before I get to the one on the slide,
there's another embarrassing episode, which I don't have on the slide, that
occurred in the late '80s when I proposed hyperelliptic curve cryptography. If you
had asked me at that time, I would have said that the higher the genus, the more
secure the curve. That was the most likely thing; those were my instincts. I really
have great instincts, don't I? When I proposed hyperelliptic curve cryptography,
that's what I thought: most likely, higher genus means more complicated curves.
So the algorithm by Adleman, DeMarrais, and Huang took me completely by
surprise. And subsequent work by a large number of other people has brought
us to the situation where the only version of hyperelliptic curve cryptography that
seems to be fully competitive with elliptic curve cryptography is genus 2. So I
couldn't have been more wrong about that.
Okay. So after that, the next embarrassing episode was in the early 1990s, when
my colleague Mike Fellows and I became captivated by the idea that, despite the
fiasco with knapsack-based cryptosystems, you could in fact construct some
good cryptosystems using NP-hard combinatorial problems rather than just
number-theoretic and algebraic problems.
We even wrote a paper with the exuberant title Combinatorial Cryptosystems
Galore, which we published. Well, there's only one actual cryptosystem that we
developed in any detail, and it had a rather sorry history. I'll quote from my book
Random Curves, where I talk about this episode.
What we constructed was a system based on the ideal membership problem for
polynomials, and we challenged people to try to crack it.
And the only thing that was really neat, that I'm really proud about in this
cryptosystem, is the name that we gave it. It was Mike Fellows' idea to call it
Polly Cracker. So anyway, it was a very inefficient system -- super inefficient.
Before long some papers were published that actually cracked the code. So that
was another sort of mistaken episode.
So now let me return to ECC and get to more relevant types of mistakes related
to ECC.
Now, during the first 15 years of ECC, throughout the '90s, my feeling was that it
didn't really matter what field you worked over. You had to avoid generic
algorithms by working in groups of large prime order, and after the
Menezes-Okamoto-Vanstone algorithm you had to avoid supersingular curves,
but other than that, you could use whatever field you most enjoyed, and security
would be unaffected by the choice of field.
But in the late 1990s, Professor Frey proposed Weil descent to attack the
discrete log problem on curves over composite extension fields.
And then a number of other people -- Gaudry, Hess, Smart, Galbraith, Menezes,
Teske, and some others -- found weak curves over certain binary fields. That's
what I alluded to before: certain composite degree extensions of F2 have weak
curves in them.
Now, fortunately, other people, most notably Scott Vanstone, had much better
instincts than I had, and as a result all commercial implementations and all ECC
standards use either prime fields or prime degree extension fields. As far as we
know, at least in the cryptographic range of interest, there are no security
problems caused by Weil descent for any of the curves over the fields used in
commercial implementations and standards.
Okay. But on many occasions I was really quite bad at anticipating future
developments. For example, in 1998 I published a book called Algebraic
Aspects of Cryptography, and I wanted to include some things in it that were of
great mathematical importance but that I thought were not really relevant to
cryptography -- yet were so important to the history and culture of the field that I
should at least briefly say something about them.
So in one such place I discussed the Birch and Swinnerton-Dyer conjecture.
And I essentially apologized to my readers for taking up their valuable time with
something which, though of interest to mathematicians, has no relevance to
cryptography.
This was in early 1998. Eight months later I received an e-mail from Joe
Silverman outlining a striking new attack on the elliptic curve discrete log
problem, which he called xedni calculus. It was a variant of index calculus, doing
things in a somewhat backwards order; he called it xedni calculus because that's
index spelled backwards.
But what was really unexpected, other than just having a brand new algorithm to
attack the ECDLP, was that Joe Silverman used the heuristics of the Birch and
Swinnerton-Dyer conjecture, along with an analytic rank formula of Mestre, to
increase the heuristic likelihood of a successful attack on the discrete log
problem.
So what at first made it seem so very difficult to analyze the likelihood of success
of this attack was precisely that he was using the Birch and Swinnerton-Dyer
conjecture, or the heuristics behind it anyway, to argue for a greater likelihood of
success.
Well, there was a lot of initial worry about xedni, because it came at a time when
there was a lot of worry that the RSA people would use xedni as a weapon in
their public relations battle against ECC, which was still going strong at that time.
But I found that we could use the height function to show that xedni wouldn't
work.
And I was so thrilled about this success in defending ECC that I gave a talk at
ECC 2000. Gerhard Frey was very generous to me this morning about my
golden shield talk.
Well, I'm not saying it was a bad talk, but certainly the title of the talk is a source
of embarrassment for me now: Miracles of the Height Function, a Golden Shield
Protecting ECC. So I think that was, in a way -- well, it was overhyped.
[laughter]
I should also say that at about the same time, a paper by Joe Silverman and Joe
Suzuki made a detailed examination of index calculus liftings and explained why
they wouldn't work. Their paper essentially elaborated on the argument that Vic
Miller made in his original 1985 ECC paper, going into a little more detail.
Also, at ECC 2007, three years ago, Silverman made a similar analysis for all
four ways one could conceivably try index or xedni calculus with liftings to global
fields, and concluded the same thing: that, thanks to the miracles of the height
function, none of them would be the least bit practical.
So at this point my feeling was that we had nothing to worry about from index
calculus -- whether spelled forwards, backwards, or sideways, index calculus was
not a problem at all.
But alas, very recently index calculus has reared its evil head in the world of
elliptic curves. For example, Gaudry and Diem found subexponential index
calculus algorithms for curves over extension fields of q to the m elements, if
q and m grow in a suitable way.
And, as you know, we'll be having a talk later in the week by Menezes Vita
[phonetic] about improvements on this work by Gaudry and Diem. So index
calculus is a live issue for the elliptic curve discrete logarithm problem,
unfortunately. Or embarrassingly.
So I want to make some broader comments now. I think it's regrettable that so
much cryptographic writing exudes a type of brash certainty about the work. And
as I've said, I've been guilty of this -- certainly in the title of my talk ten years ago
about the miracles of the height function. Now, as I get older, I realize that there
are no miracles.
I no longer believe in miracles, and I no longer believe in golden shields. But at
the time I was guilty of [laughter] -- of this, of exuding a certain brash confidence
about the work.
And when you look at the abstracts and introductions to papers, if you browse
through the IACR ePrint server, they're often written not as if they were written by
scientists but as if they were written by marketing people, or maybe as part of a
patent application -- you might remember the slide of the black hats and the white
hats about marketing people. Sometimes they're written with hype, like an
advertisement, that has little connection to reality. And here let me give you an
example, which is by no means unusual.
Okay. So this is a paper from three years ago saying that their protocol permits
saving on bandwidth and storage, improves computational efficiency and
scalability. In contrast to the only other prior scheme to provide this functionality,
ours offers improved security, formal security definitions. We support the
proposed scheme with security proofs.
This was a paper by Boldyreva, Gentry, O'Neill, and Yum that proposed a way of
doing what's called sequential aggregate signatures, based on pairings, for
purposes such as securing network routing protocols and other applications
where several people in sequence compose a single compact signature.
But the reason I mention this example is that, amusingly, about a year later
Hwang, Lee, and Yung showed that a crucial security proof in this paper was
fallacious, and they also broke the corresponding protocol.
Now, there have been many cases like this of rather bold claims being made of
protocols that were provably secure. And despite the questionable history of a
lot of these claims, still one often reads the argument that these proofs of security
can be offered to the public as a guarantee.
So here's --
>>: [inaudible] What it shows is that they did not have a proof. It's not that a
security proof would be bad; it's just that they didn't have one.
>>: Can you repeat?
>> Neal Koblitz: Let me clarify that. I have not given evidence that a proof of
security would be bad. I've only said that in this case and in some other cases
proof of security turned out to be fallacious.
Let me quote from the preface to a well known book by Katz and Lindell.
Cryptographic constructions can be proven secure with respect to a clearly
stated definition of security and relative to a well defined cryptographic
assumption. This is the essence of modern cryptography and what has
transformed cryptography from an art to a science. The importance of this idea
cannot be overemphasized.
Now, sometimes when people get dismayed because there is quite a history of
fallacious proofs in the provable security literature, one of the possible responses
is that advances in theorem-proving software will soon make it possible to prove
the security of cryptographic protocols automatically, without any possibility
whatsoever of flaws in the proof. That will remove the element of human
mistakes and failings. And I guess what some people might mean by making
something into a science is precisely to do that.
And we'll remove the human element from the process of establishing
guarantees of security. I wrote a paper on this examining some of the theorem
proving work. And here's a reference for that, if you're interested.
Another issue that some people find bewildering is the rather ornate nature of
some of the cryptographic assumptions that underlie the security proofs; this is
especially true in pairing-based cryptography. The cryptographic assumption is
what the reduction argument in the security proof is based on: an assumption
about the hardness of a certain type of problem.
And a lot of these cryptographic assumptions, these hardness assumptions
about certain problems, are quite exotic, and look to most readers, anyway, very
unusual.
Now, at the Pairings 2008 conference there was a statement by Xavier Boyen
giving a very positive spin to all these cryptographic assumptions. He said that
the newcomer to this particular branch of cryptography will be astonished by the
sheer number and sometimes creativity of these assumptions. In comparison to
the admittedly quite simpler algebraic structures of 20th century public key
cryptography, the new bilinear groups offer a much richer palette of cryptographic
trapdoors than their unidimensional counterparts.
So on the one hand we see the trend of rather bold and boastful writing by crypto
researchers -- and again I include myself: at least in the title of my talk in 2000
and in my paper with Mike Fellows in the '90s, I've been guilty of this at times.
And on the other hand we see a long history of misjudgments and uncertainty
and mistakes that continues to the present day. So this raises the question:
How can we reconcile the disciplinary culture of our field with the reality of the
history of our field? And I was thinking about this. And wondering, is there any
sort of key to resolve this dilemma?
And I was thinking: I'm going to be going to India for Indocrypt in September, and
it occurred to me that maybe I can look and see if the wisdom of ancient India
has anything to tell us about this.
So sure enough, I looked in the Bhagavad Gita, and in Chapter 13 there's a
section where the all-knowing one, Krishna, describes the qualities that are
necessary for knowledge. So this is something that anybody interested in
knowledge should be aware of.
And the very first one is Amanitvam -- if any of you know Sanskrit, please excuse
my mispronunciation. So Amanitvam is the most important quality; the English
translation is humility.
So it occurred to me -- here's Krishna. But I didn't show pictures of various
people. But I did show Krishna.
[laughter]
So if we make a thought experiment and imagine that Krishna were brought in as
a consultant -- now, this isn't very realistic; I'm sure Krishna would not agree to
sign a nondisclosure agreement -- but let's just, as a thought experiment,
suppose that Krishna were brought in as a consultant on how to improve
research in cryptography, and he went to work reading through some of the
introductions and abstracts of articles on the IACR ePrint server, and he read the
Pairings 2008 paper by Boyen and the preface to Katz and Lindell. I think it's fair
to say that almost certainly one of his central recommendations for what the
cryptographic community badly needs would be a healthy dose of Amanitvam.
Thank you. [applause]
>>: Any questions for Neal?
>>: Okay. I agree with what you say about [inaudible], because making more and
more assumptions of various types has to be looked at with scrutiny. But the
idea of Katz and Lindell [inaudible] something simply stated. It's better to have a
reduction to something that can be stated in one line than to just assume that the
algorithm works, don't you agree? That's all they seem to say. How could that be
contested?
>> Neal Koblitz: No, no, what you're saying makes good sense -- it's better to
have it than not to have it. But I think what they're saying is more than that.
They're saying that these so-called proofs of security -- and I don't really object to
the proofs of security so much as to their name. If they were called something
else, like a reductionist argument that gives a certain amount of evidence, that
would be fine.
>>: The reduction. So --
>> Neal Koblitz: It gives some sort of evidence for something. But to say that it
transforms our field into a science is, I think, going too far. So I'm not objecting to
the idea of giving a reduction as a form of evidence that there's a close tie
between a certain protocol and a certain mathematical problem that's conjectured
to be difficult.
>>: Why do you say close tie? The statement is that if there's an algorithm that
breaks the cryptosystem, then there's an [inaudible] computational problem that
it's shown equivalent to. So [inaudible] either good or -- see what I'm saying?
>> Neal Koblitz: Sure. First of all, assuming that the proof is correct and not
fallacious. And second of all, there can be other issues that come up such as a
lack of tightness in the reduction and there can of course be many attacks on the
protocol that might not be in the model.
So I agree that is saying something precise, assuming that the proof is not
fallacious. But to interpret it, I think, is a serious issue. And I don't think that it's
reasonable to go around telling the general public that you have a proof of
security and you've transformed -- I don't mean you -- I mean that we as a
cryptographic research community have transformed cryptography into a science
and that we should call it a proof of security.
>>: Sure, you can't call it a proof of security, but you can show it's as secure as
something else. But fallacious proofs -- that's a different matter [inaudible].
>> Neal Koblitz: Yes, thank you.
>>: It reminds me of a discussion that I was having earlier about what is sort of a
mathematical conjecture rather than a hoped-for statement or utterance. There's
lots of things that one can state.
So I think sociologically, mathematicians tend to be, for the most part, a little
more conservative on actually announcing something as a conjecture rather than
saying, gee, wouldn't it be nice if the following is true. That seems to have a
different status than saying I conjecture this. Usually when you say I conjecture
this, you associate it with some other thing; at least you give a plausible
argument why it seems to fit in with other believed things or other theorems,
rather than just saying wouldn't it be nice if this were true.
>>: And mathematicians are even more conservative yet about saying that this is
a proof.
>>: I think you've been a little too hard on the automatic theorem proving people.
We have a relatively limited number of theorems, perhaps 100 or something, and
it's not inconceivable that we could insist that the people who write papers force
their supposed theorems through a checker, without the proofs having to be
generated there by some automated thing. It's just the checker part that has to
work.
>> Neal Koblitz: It would be interesting to know whether anybody could come up
with an automatic method that, in retrospect, would have found the fallacies in
the most scandalous cases of fallacious proofs. And I should say that some of
these fallacious proofs were done by very prominent, very good people. It's not
something that was done casually. One in particular that I'm thinking of had an
extremely subtle fallacy in the proof that took cryptographers seven years to
discover.
And a real test, I think, for this type of software would be to go back and put that
original proof into it and see if it would find that fallacy.
My understanding of the state of the art is that they're very, very far from being
able to do anything like that. But maybe I'm wrong.
>>: Well, they claim to have forced a proof of [inaudible] theorem through, for
example, from basic axioms. So it's not inconceivable.
>>: Okay. Thank you. Anyone else? Then I would ask you to join me in
thanking Neal.
[applause]