Public key cryptosystems

Public key cryptosystems were invented in the mid-1970s, with some help from the
development of complexity theory around that time. It was observed that, starting
from a problem so difficult that it would need thousands of years to solve, a
cryptosystem could be developed with two keys, a private key and a public key.
With the public key anyone could encrypt messages, while only the private key
could decrypt them. Thus the owner of the private key would be the only one able
to read the messages, but anyone knowing the public key could send them in privacy.
Another idea that emerged was that of key exchange: in a two-party communication
it is useful to be able to generate a common secret key for bulk encryption with a
secret key cryptosystem (e.g. some block cipher).
Indeed, Whitfield Diffie and Martin Hellman used ideas from number theory to construct
a key exchange protocol that started the era of public key cryptosystems. Shortly after
that, Ron Rivest, Adi Shamir and Leonard Adleman developed RSA, the first real public
key cryptosystem capable of both encryption and digital signatures.
Later, several public key cryptosystems followed, using many different underlying ideas
(e.g. knapsack problems, various groups over finite fields, and lattices). Many of them
were soon proven insecure. However, the Diffie-Hellman protocol and RSA appear to
have remained two of the strongest to this day.
Terminology
The basic ingredient in any public key cryptosystem is a difficult computational problem.
The security of the cryptosystem is based on the fact that the private key can be computed
from the public key only by solving this difficult problem. We now introduce some
relevant terminology used in public key cryptography.
Algorithm. An algorithm is an explicit description of how a particular computation
should be performed (or a problem solved). The efficiency of an algorithm can be
measured as the number of elementary steps it takes to solve the problem. So if
we claim that an algorithm takes time O(n), we mean that it takes at most a
constant multiple of n elementary steps; we do not specify how long one step takes.
Computational complexity. A problem is polynomial time, or in P, if it can be
solved by an algorithm which takes at most O(n^t) steps, where t is some fixed
number and the variable n measures the size of the problem instance.
If a guessed solution to a problem can be verified in polynomial time, then the
problem is said to be in NP (non-deterministic polynomial time). The set of
problems that lie in NP is very large; it includes the problem of integer
factorization.
A problem is NP-hard if it is at least as hard as every problem in NP; that is, an
efficient algorithm for it would yield efficient algorithms for all problems in NP.
There is no known polynomial time algorithm for any NP-hard problem, and it is
believed that such algorithms in fact do not exist.
In public key cryptography the attacker is interested in solving particular
instances of a problem (factoring some given number), rather than providing a
general solution (an algorithm to factor any possible number efficiently). This
causes some concern for cryptographers, as some instances of a problem that is
NP-hard in general may be easily solvable.
Primes. A prime number is an integer greater than 1 that has no divisors except
itself and 1. Thus the integers 2, 3, 5, 7, 11, ... and so on are primes. There are
infinitely many primes, and (one of) the biggest prime numbers currently known is
2^6972593 - 1.
Factoring. Every integer greater than 1 can be represented uniquely as a product of
prime numbers. For example, 10 = 2 * 5 (the notation * is common for multiplication
in computer science), and this representation is unique except for the order of the
factors 2 and 5. The art of factorization is almost as old as mathematics itself.
However, the study of fast algorithms for factoring is only a few decades old.
One possible algorithm for factoring an integer is to divide the input by all small
prime numbers iteratively until the remaining number is prime. This is efficient
only for integers of size up to about 10^16, as this already requires trying all
primes up to 10^8. In public key cryptosystems based on the problem of factoring,
numbers are of size about 10^300, and this approach would require trying all primes
up to 10^150; by the prime number theorem there are about 10^147 such primes. This
far exceeds the number of atoms in the universe, and no conceivable effort could
enumerate them.
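As an illustration, here is a minimal trial division sketch in Python (it divides by
every small integer, not just primes, which gives the same result since composite
trial divisors can never succeed once their prime factors have been divided out):

    def trial_division(n):
        # Repeatedly divide out the smallest remaining divisor; O(sqrt(n)) steps.
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:              # whatever remains at the end is prime
            factors.append(n)
        return factors

    print(trial_division(759375))   # [3, 3, 3, 3, 3, 5, 5, 5, 5, 5]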
The easy instances of factoring are those where the given integer has only small
prime factors. For example, 759375 is easy to factor, as we can write it as
3^5 * 5^5. In cryptography we want to use only those integers that have only large
prime factors. Preferably we select an integer with two large prime factors, as is
done in the RSA cryptosystem.
Currently one of the best factoring algorithms is the number field sieve (NFS),
which consists of a sieving phase and a matrix step. The sieving phase can be
distributed (and has been, several times) among a large number of participants, but
the matrix step needs to be performed on large supercomputers. The effectiveness of
the NFS algorithm becomes apparent for very large integers: it can factor integers
of size 10^150 in a few months' time. The NFS algorithm runs in subexponential time
(which is still not very efficient).
There is no known proof that integer factorization is NP-hard, nor any proof that it
cannot be solved in polynomial time. If any NP-hard problem were polynomial time
solvable, then P would equal NP and factoring would be polynomial time solvable too,
but there is very little hope that this is the case. It is plausible under current
knowledge that factoring is not polynomial time solvable.
Discrete logarithms. Another important class of problems is that of finding n given
only some y such that y = g^n. The problem is easy for ordinary integers, but in a
slightly different setting it becomes very hard.
To obscure the nature of n in g^n, we divide the infinite set of integers into a
finite set of remainder classes. Intuitively, we take the string of integers and
wrap it around a circle of circumference m.
The numbers 0, m, 2m, 3m, ... all cover the same point on the circle, and are
therefore said to be in the same equivalence class (we also write "0 = m = 2m = ...
(mod m)"). Each equivalence class has a least representative in 0 .. m-1, so any
integer n can be written as t + km for some integer k, where t is the representative
with 0 <= t < m. It is a convention to write n = t (mod m) in this case. Here m is
said to be the modulus.
It can be shown that you can add, subtract and multiply with these classes of
integers (modulo some m).
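As a small Python illustration, arithmetic on these classes is just ordinary integer
arithmetic followed by reduction with the % operator (the numbers are arbitrary):

    m = 7
    print((19 + 5) % m)   # addition:       24 mod 7 = 3
    print((19 - 5) % m)   # subtraction:    14 mod 7 = 0
    print((19 * 5) % m)   # multiplication: 95 mod 7 = 4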
This structure, when m = p with p a prime number, is often called a prime field or
a Galois field, GF(p). When m is prime, division of nonzero integer classes is also
well defined. In proper mathematical terminology it is a finite field of
characteristic p, where p is the modulus. If m is not a prime number, the structure
is called a (finite) ring. More information about groups, fields and rings can be
found in any good elementary text on algebra.
The discrete logarithm problem in the finite field GF(p) is then stated as follows:
given two positive non-zero integers a and g (both less than p), compute n such that
a = g^n (mod p). We can choose g so that a solution for n exists for any non-zero a.
To make this problem cryptographically hard, p should be a large prime number (about
10^300) and n, in general, of the same magnitude.
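For toy-sized parameters the problem can of course be solved by exhaustive search,
as in this Python sketch (p = 101 and g = 2 are illustrative choices; 2 happens to
generate all of GF(101)*):

    def discrete_log(a, g, p):
        # Brute force: find n with g^n = a (mod p); feasible only for tiny p.
        x = 1
        for n in range(p):
            if x == a:
                return n
            x = (x * g) % p
        return None   # no solution: a is not a power of g

    p, g = 101, 2
    a = pow(g, 57, p)              # a = g^57 mod p
    print(discrete_log(a, g, p))   # 57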
This problem is currently considered roughly as hard as factoring. The best method
known at this time is the number field sieve for discrete logarithms (which uses
similar ideas to the NFS for factoring). The discrete logarithm problem may appear
more complicated than integer factoring, but in many ways the two are similar, and
many of the ideas that work for factoring can also be applied in the setting of
discrete logarithms. There is little hope of finding a polynomial time algorithm
for the computation of discrete logarithms in GF(p); if one were found, it is likely
that factoring could be solved efficiently as well.
The discrete logarithm problem can be posed in many other settings, such as elliptic
curves. The discrete logarithm problem over elliptic curves is commonly used in
public key cryptography. One reason for this is that the number field sieve
algorithm does not work there. There are other methods for computing discrete
logarithms over elliptic curves, but it appears even harder to solve the discrete
logarithm problem over elliptic curves than over GF(p). One effect of this is that
elliptic curve based public key cryptosystems enjoy some key size benefits over
factoring based cryptosystems.
Knapsacks. Given a set of integers, the knapsack problem consists of determining a
subset of these integers such that their sum is equal to a given integer. For
example, given (2, 3, 5, 7) and the target 10, we can find the solution
2 + 3 + 5 = 10 by brute force search, and have thus solved this knapsack instance.
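A brute-force subset-sum search in Python makes the exponential cost of the general
problem visible (itertools.combinations enumerates all subsets of each size):

    from itertools import combinations

    def knapsack(items, target):
        # Try every subset until one sums to the target; exponential in len(items).
        for r in range(len(items) + 1):
            for subset in combinations(items, r):
                if sum(subset) == target:
                    return subset
        return None

    print(knapsack((2, 3, 5, 7), 10))   # (3, 7) -- another valid solution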
The general knapsack problem is provably NP-hard, and might thus appear superior to
factorization and discrete logarithms as a basis for public key cryptosystems.
Unfortunately, all cryptosystems that have used this underlying idea have been
broken, as the instances of the problem actually used have not been really hard.
Lattices. Take basis vectors w_i = <w_i1, ..., w_in> for i = 1, ..., m, and consider
the lattice that this basis generates: the elements of the lattice are of the form
t_1*w_1 + t_2*w_2 + ... + t_m*w_m, where the t_i are integers.
The problem of finding the shortest non-zero vector in a lattice (using the usual
Euclidean distance) is an NP-hard problem (for lattices of sufficiently large
dimension).
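For intuition, here is a tiny Python sketch that finds the shortest non-zero vector
of a 2-dimensional lattice by brute force over small coefficients (the basis is
arbitrary; cryptographic lattices have hundreds of dimensions, where such a search
is hopeless):

    def shortest_vector_2d(w1, w2, bound=10):
        # Search all t1*w1 + t2*w2 with |t1|, |t2| <= bound, (t1, t2) != (0, 0).
        best, best_len2 = None, None
        for t1 in range(-bound, bound + 1):
            for t2 in range(-bound, bound + 1):
                if (t1, t2) == (0, 0):
                    continue
                v = (t1 * w1[0] + t2 * w2[0], t1 * w1[1] + t2 * w2[1])
                len2 = v[0] ** 2 + v[1] ** 2
                if best_len2 is None or len2 < best_len2:
                    best, best_len2 = v, len2
        return best

    # The long-looking basis below actually generates all of Z^2.
    print(shortest_vector_2d((5, 1), (9, 2)))   # (0, 1)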
However, the celebrated LLL algorithm by Lenstra, Lenstra and Lovasz computes an
approximate solution in polynomial time. The effectiveness of the LLL algorithm
comes from the fact that in many cases approximate solutions are good enough, and
that surprisingly often the LLL algorithm actually gives the shortest vector.
Indeed, this algorithm has often been used to break cryptosystems based on lattice
problems or knapsacks, and it has also been applied in attacks against RSA and DSS.
Practical cryptosystems
The wide interest in public key cryptography has produced several practically
important cryptosystems. In the following they are listed by underlying problem.
As a basic guideline, a public key cryptosystem is built from a difficult problem as
follows: take a difficult problem (for example, an NP-hard one) for which you can
find an instance that can be solved in polynomial time. To encrypt a message,
convert the message into such an easy instance of the difficult problem, then use
the public key to convert the easy problem into a difficult one. The result is then
sent to the recipient through an insecure channel. To decrypt, use the private key
to convert the difficult problem back into the easy one and solve it to regain the
message. All public key systems use this principle, although they differ
significantly in the details (such as the underlying problem or the structure of
the public and private keys).
For a good survey of appropriate key lengths, see Lenstra and Verheul's Selecting
Cryptographic Key Sizes (which appeared in Public Key Cryptography 2000). They
present a complete analysis of key sizes for almost all cryptosystems.
Below, along with each cryptosystem, you will find the current recommendations for
key sizes where appropriate. These recommendations do not always match Lenstra and
Verheul's.
Factorization: RSA, Rabin
RSA (Rivest-Shamir-Adleman) is the most commonly used public key algorithm.
It can be used both for encryption and for digital signatures. The security of RSA
is generally considered equivalent to factoring, although this has not been proved.
RSA computation takes place with integers modulo n = p * q, for two large secret
primes p, q. To encrypt a message m, it is exponentiated with a small public
exponent e. For decryption, the recipient of the ciphertext c = m^e (mod n)
computes the multiplicative inverse d = e^(-1) (mod (p-1)*(q-1)) (we require that
e is selected suitably for it to exist) and obtains c^d = m^(e*d) = m (mod n). The
private key consists of n, p, q, e, d (where p and q can be forgotten); the public
key consists only of n and e. The problem for the attacker is that computing the
inverse d of e is assumed to be no easier than factoring n. More details are
available in many sources, such as the Handbook of Applied Cryptography.
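A toy-sized Python sketch of this computation (the primes are absurdly small and
chosen only for illustration; real implementations follow the PKCS standards
discussed below):

    p, q = 61, 53                  # toy secret primes
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent, coprime to phi
    d = pow(e, -1, phi)            # private exponent d = e^(-1) mod phi (Python 3.8+)

    m = 65                         # message, 0 <= m < n
    c = pow(m, e, n)               # encryption: c = m^e mod n
    assert pow(c, d, n) == m       # decryption: c^d = m (mod n)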
The key size (the size of the modulus) should be greater than 1024 bits (i.e. it
should be of magnitude 10^300) for a reasonable margin of security. Keys of size,
say, 2048 bits should give security for decades.
Dramatic advances in factoring large integers would make RSA vulnerable, but other
attacks against specific variants are also known. Good implementations use
redundancy (or padding with specific structure) in order to avoid attacks exploiting
the multiplicative structure of the ciphertext. RSA is also vulnerable to chosen
ciphertext attacks and to hardware and fault attacks. Important attacks against very
small exponents exist as well, as do attacks against partially revealed
factorizations of the modulus.
The proper implementation of the RSA algorithm with redundancy is well explained in
the PKCS standards (see definitions at RSA Laboratories). They give detailed
explanations of how to implement encryption and digital signatures, as well as
formats for storing the keys. The plain RSA algorithm should not be used in any
application. It is recommended that implementations follow the standard, as this
also has the additional benefit of interoperability with most major protocols.
RSA is currently the most important public key algorithm. It was patented in the
United States (the patent expired in the year 2000).
The Rabin cryptosystem may be seen as a relative of RSA, although it has a quite
different decryption process. What makes it interesting is that breaking Rabin is
provably equivalent to factoring.
Rabin uses the exponent 2 (or any even integer) instead of an odd exponent as in
RSA. This has two consequences. First, the Rabin cryptosystem can be proven to be
equivalent to factoring; second, decryption becomes more difficult, at least in
some sense, because of the problem of deciding which of the possible outcomes of
the decryption process is the correct one.
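A toy Python sketch may make this ambiguity concrete. With both secret primes
congruent to 3 mod 4 (a common choice, because square roots modulo such primes are
easy to compute), decryption yields four candidate plaintexts (the parameters and
the brute-force CRT step below are for illustration only):

    p, q = 7, 11                   # secret primes, both = 3 (mod 4)
    n = p * q                      # public modulus

    m = 20                         # message
    c = pow(m, 2, n)               # encryption: c = m^2 mod n

    # Square roots of c modulo p and q (valid because p, q = 3 (mod 4)).
    mp = pow(c, (p + 1) // 4, p)
    mq = pow(c, (q + 1) // 4, q)

    # Combine the +/- roots via the Chinese Remainder Theorem: four roots mod n.
    roots = set()
    for a in (mp, p - mp):
        for b in (mq, q - mq):
            roots.add(next(x for x in range(n) if x % p == a and x % q == b))
    print(sorted(roots))           # [13, 20, 57, 64] -- one of them is m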
As it is equivalent to factoring the modulus, the size of the modulus is the most
important security parameter. Moduli of more than 1024 bits are assumed to be
secure.
There are currently no standards for the Rabin algorithm but it is explained in
several books. The IEEE P1363 project might propose a standard and thus make it
more widely used.
The equivalence to factoring means only that being able to decrypt any message
encrypted by the Rabin cryptosystem enables one to factor the modulus. Thus it is
no guarantee of security in the strong sense.
References:
R. Rivest, A. Shamir, and L. M. Adleman: Cryptographic
Communications System and Method. US Patent 4,405,829, 1983.
Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone:
Handbook of Applied Cryptography.
Bruce Schneier: Applied Cryptography.
Jennifer Seberry and Josef Pieprzyk: Cryptography: An Introduction to
Computer Security.
Man Young Rhee: Cryptography and Secure Data Communications.
Hans Riesel: Prime Numbers and Computer Methods for Factorization.
Discrete logs: Diffie-Hellman, ElGamal, DSS
Diffie-Hellman is a commonly used protocol for key exchange.
In many cryptographic protocols two parties wish to begin communicating.
However, assume they do not initially possess any common secret and thus cannot
use secret key cryptosystems. The Diffie-Hellman key exchange protocol remedies
this situation by allowing the construction of a common secret key over an insecure
communication channel. It is based on a problem related to discrete logarithms,
namely the Diffie-Hellman problem. This problem is considered hard, and it is in
some instances as hard as the discrete logarithm problem.
The Diffie-Hellman protocol is generally considered to be secure when an
appropriate mathematical group is used. In particular, the generator element used
in the exponentiations should have a large period (i.e. order).
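A toy run of the exchange in Python (the parameters p = 2579, g = 2 are illustrative
toy values; real deployments use primes of 1024 bits or more and check the order of
g carefully):

    import secrets

    p, g = 2579, 2                     # public parameters (toy sizes)

    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
    A = pow(g, a, p)                   # Alice sends A = g^a mod p over the channel
    B = pow(g, b, p)                   # Bob sends B = g^b mod p

    # Both sides now compute the same shared secret g^(a*b) mod p.
    assert pow(B, a, p) == pow(A, b, p)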
Discrete logarithm algorithms can be used to attack Diffie-Hellman, and for passive
attacks that is the best currently possible, assuming correctly chosen parameters.
If Diffie-Hellman is applied with the usual arithmetic modulo a prime number, it
suffices to select a large enough prime and to take some care in selecting the
generator element. Subtle problems may be caused by bad choices of the generator.
Many papers have been written on the problems that may occur; one of the
better-known references is van Oorschot and Wiener's On Diffie-Hellman key
agreement with short exponents (Eurocrypt '96).
Attacks against Diffie-Hellman also include the man-in-the-middle attack. This
attack requires adaptive intervention, but is very easy in practice if the protocol
doesn't use countermeasures such as digital signatures.
Usually Diffie-Hellman is not implemented in hardware, and thus hardware attacks
are not an important threat. This may change in the future as mobile communication
becomes more widespread.
DSS (Digital Signature Standard). A signature-only mechanism endorsed by the
United States Government. The underlying algorithm DSA (Digital Signature
Algorithm) is similar to the one used by ElGamal or by the Schnorr signature
algorithm. Also it is fairly efficient, although not as efficient as RSA for signature
verification. The standard defines DSS to use the SHA-1 hash function
exclusively to compute message digests.
The main problem with DSS is the fixed subgroup size (the order of the generator
element), which limits the security to around only 80 bits. Hardware attacks can be
a concern for some implementations of DSS. However, it is widely used and accepted
as a good algorithm.
The ElGamal public key cipher. This is a straightforward extension of Diffie and
Hellman's original idea of shared secret generation. Essentially, it generates a
shared secret and uses it as a one-time pad to encrypt one block of data. ElGamal
is the predecessor of DSS and is perfectly usable today, although no widely known
standard has been created for it.
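A toy Python sketch of this one-time-pad view of ElGamal (same illustrative
parameters as the Diffie-Hellman sketch above; pow(x, -1, p) requires Python 3.8+):

    import secrets

    p, g = 2579, 2                     # public parameters (toy sizes)
    x = secrets.randbelow(p - 2) + 1   # recipient's private key
    h = pow(g, x, p)                   # recipient's public key h = g^x

    m = 1234                           # message, 0 < m < p
    k = secrets.randbelow(p - 2) + 1   # fresh ephemeral secret per message
    c1 = pow(g, k, p)                  # c1 = g^k
    c2 = (m * pow(h, k, p)) % p        # c2 = m masked by the shared secret h^k

    s = pow(c1, x, p)                  # recipient recovers h^k = c1^x
    assert (c2 * pow(s, -1, p)) % p == m   # dividing out s reveals m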
Elliptic curve cryptosystems are just another way of implementing discrete
logarithm methods. An elliptic curve is basically a set of points that satisfy the
equation y^2 = x^3 + ax + b, considered over a finite field of characteristic p
(where p must be larger than 3). A slightly different equation is needed for the
cases of small characteristic, p = 2 and p = 3.
The points on an elliptic curve can be added together, and they form a structure
called a group (in fact an abelian group). This is just a way of saying that you can
do arithmetic with them as you can with integers using just addition and
subtraction.
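A minimal Python sketch of the group operation in affine coordinates over a small
prime field (the curve y^2 = x^3 + 2x + 3 over GF(97) and the point (3, 6) are
arbitrary illustrative choices; real curves use fields of roughly 256 bits):

    p, a, b = 97, 2, 3      # curve y^2 = x^3 + ax + b over GF(p)

    def ec_add(P, Q):
        # Add two points; None plays the role of the identity (point at infinity).
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                        # P + (-P) = identity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    P = (3, 6)              # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
    print(ec_add(P, P))     # (80, 10), again a point on the curve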
With regard to cryptography, elliptic curves have many theoretical benefits, but
they are also very practical. Unlike for discrete logarithms in (the multiplicative
group of) a finite field, in hyperelliptic curves (of large genus) or in many other
groups, there is no known sub-exponential algorithm for computing discrete
logarithms of points on elliptic curves. One practical benefit of the non-existence
of a fast discrete logarithm computation for elliptic curves is that the key size,
as well as the produced digital signatures and encrypted messages, can be small.
Indeed, a very simple way of computing a security limit for the key size is to take
the key size of a secret key cryptosystem in bits and just multiply it by 2. This
gives a rough estimate that is good at the moment for a generic instance of
elliptic curves.
Elliptic curves can be implemented very efficiently in hardware and software, and
they compete well in speed with cryptosystems such as RSA and DSS. There are
several standardization attempts for elliptic curve cryptosystems (for example,
ECDSA by ANSI). At the moment elliptic curves are widely known, but not very
widely used in practice.
The security of elliptic curve cryptosystems has been rather stable for years,
although significant advances have been achieved in attacks against special
instances. Nevertheless, these advances were conjectured by leading researchers
several years ago, and no great surprises have yet emerged.
The XTR algorithm, recently introduced by Lenstra and Verheul, might become a good
competitor for elliptic curves. However, elliptic curves appear to be slightly
better in performance, and they definitely scale better in key size.
LUC is a public key cryptosystem that uses a special group based on Lucas sequences
(related to the Fibonacci series) as its basic building block. It can implement all
the common discrete logarithm based algorithms, so in a sense LUC is a class of
public key algorithms.
It is possible to view the underlying structure of the algorithm as a certain
multiplicative group of a finite field of characteristic p and degree 2 (written
shortly as GF(p^2)), and this can be used to show that there exists a
sub-exponential algorithm for computing discrete logarithms, and thus for attacking
the LUC algorithm. It might therefore seem that LUC is of little interest, as it is
basically just another way of implementing discrete logarithm based algorithms on
finite fields. However, LUC uses specific arithmetic operations derived from Lucas
sequences, which might be slightly faster (by a constant factor) than a more direct
approach.
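For illustration, here is a naive Python sketch of the Lucas sequence values V_k
modulo a prime, the kind of recurrence on which LUC-style arithmetic builds
(V_0 = 2, V_1 = P, V_k = P*V_{k-1} - Q*V_{k-2}; real implementations use much
faster ladder techniques, and the parameters below are arbitrary):

    def lucas_v(P, Q, k, p):
        # Naive linear-time computation of V_k mod p.
        v0, v1 = 2 % p, P % p
        for _ in range(k):
            v0, v1 = v1, (P * v1 - Q * v0) % p
        return v0

    print(lucas_v(1, -1, 10, 1000003))   # V_10 of the Lucas numbers: 123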
The different public key algorithms based on LUC arithmetic are called LUCDIF
(LUC Diffie-Hellman), LUCELG (LUC ElGamal), and LUCDSA (LUC Digital
Signature Algorithm). Several of these are patented.
The fact that values used in the LUC algorithms can be represented as a pair of
values gives some additional advantage over just using integers modulo a prime: the
computations only involve numbers needing half the bits that would be required in
the latter case. As the LUC group operation is easy to compute, this makes LUC
algorithms competitive with RSA and DSS.
However, at present there appears to be little reason to use LUC cryptosystems, as
they offer little benefit over elliptic curves or XTR.
XTR is a public key cryptosystem developed by Arjen Lenstra and Eric Verheul.
XTR is similar to LUC in that it uses a specific multiplicative group of a
particular finite field (in fact GF(p^6)) as its underlying group.
However, XTR has novel features, such as needing only approximately 1/3 of the bits
for signatures and encrypted messages. This is achieved by using a specific trace
representation of the elements of this group and performing all computations in
this representation.
All discrete logarithm based public key algorithms can be implemented with XTR
ideas. So in a way XTR is a generic name for a class of public key algorithms,
similarly to LUC.
Perhaps surprisingly, the algorithm is efficient, and according to Lenstra and
Verheul it might be a good substitute for elliptic curves, DSS and even RSA. It has
the advantage over elliptic curves that it is based on essentially the same discrete
log problem as, say, DSS, which may help cryptographers and others accept it more
quickly as a strong algorithm.
References:
Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone:
Handbook of Applied Cryptography.
Alfred J. Menezes: Elliptic Curve Public Key Cryptosystems.
IEEE P1363: Standard Specifications for Public-Key Cryptography.
Neal Koblitz: A Course in Number Theory and Cryptography.
Ian Blake, Gadiel Seroussi and Nigel Smart: Elliptic Curves in Cryptography.
Bruce Schneier: Applied Cryptography.
Knapsacks
There are only a few interesting knapsack public key cryptosystems, none of which are of
practical importance.
The Chor-Rivest cryptosystem is based on a particular variant of the knapsack
problem. It is one of the knapsack cryptosystems that has best resisted attacks.
Merkle-Hellman. This was the first knapsack cryptosystem. It was based on the
simple idea of hiding an easy super-increasing knapsack problem by masking (see
the sketch below). However, it was broken as early as 1980.
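The "easy" instance is easy because a super-increasing sequence (each element larger
than the sum of all preceding ones) can be solved greedily from the largest element
down, as in this Python sketch (the example weights are arbitrary):

    def solve_superincreasing(weights, target):
        # Greedy subset-sum: take each weight, largest first, if it still fits.
        subset = []
        for w in reversed(weights):
            if w <= target:
                subset.append(w)
                target -= w
        return subset if target == 0 else None

    # Each weight exceeds the sum of all previous ones: 3 > 2, 7 > 5, 14 > 12, 30 > 26.
    print(solve_superincreasing([2, 3, 7, 14, 30], 37))   # [30, 7]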
References:
M. E. Hellman and R. C. Merkle: Public Key Cryptographic Apparatus
and Method. US Patent 4,218,582, 1980.
B. Chor and R. L. Rivest: A knapsack type public key cryptosystem based
on arithmetic in finite fields, Crypto '84.
Lattices
In recent years much interest has been directed towards lattice based cryptosystems.
One of the reasons is that certain classes of lattice problems are NP-hard, and
several efficient cryptosystems based on them have been proposed and appear strong.
NTRU is a cryptosystem proposed in the mid-1990s as an efficient public key cipher.
It is based on lattice problems and has some interesting features.
Some of the initial versions had problems, but the current version has been
proposed for some US standards.
References:
D. Coppersmith and A. Shamir: Lattice Attacks on NTRU, Eurocrypt '97.
NTRU Cryptosystems Inc. http://www.ntru.com.
IEEE P1363: Standard Specifications for Public-Key Cryptography.