Brute-force attack

The EFF's US$250,000 DES cracking machine contained over 1,800 custom chips and could brute
force a DES key in a matter of days. The photograph shows a DES Cracker circuit board fitted with 32
Deep Crack chips and some control chips.
In cryptography, a brute force attack or exhaustive key search is a strategy that can in theory be
used against any encrypted data[1] by an attacker who is unable to take advantage of any weakness
in an encryption system that would otherwise make his/her task easier. It involves systematically
checking all possible keys until the correct key is found. In the worst case, this would involve
traversing the entire search space.
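
To make "systematically checking all possible keys" concrete, the following minimal Python sketch mounts an exhaustive key search against a toy cipher. The single-byte XOR cipher and the printable-ASCII plaintext test are illustrative assumptions, not part of any real encryption system; note how the attack depends on being able to recognise the correct plaintext, a point the next paragraph returns to.

def xor_encrypt(plaintext: bytes, key: int) -> bytes:
    # Toy cipher for illustration only: XOR every byte with a one-byte key,
    # giving a key space of just 256 possibilities.
    return bytes(b ^ key for b in plaintext)

def brute_force(ciphertext: bytes):
    # Systematically check every possible key, per the definition above.
    for key in range(256):
        candidate = xor_encrypt(ciphertext, key)   # XOR is its own inverse
        if all(32 <= b < 127 for b in candidate):  # crude "readable text?" test
            yield key, candidate

ciphertext = xor_encrypt(b"attack at dawn", key=42)
for key, plaintext in brute_force(ciphertext):
    print(key, plaintext)
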
The key length used in the encryption determines the practical feasibility of performing a brute force
attack, with longer keys exponentially more difficult to crack than shorter ones. Brute force attacks
can be made less effective by obfuscating the data to be encoded, something that makes it more
difficult for an attacker to recognise when he/she has cracked the code. One of the measures of the
strength of an encryption system is how long it would theoretically take an attacker to mount a
successful brute force attack against it.
Brute-force attacks are an application of brute-force search, the general problem-solving technique
of enumerating all candidates and checking each one.
Theoretical limits
The resources required for a brute force attack scale exponentially with increasing key size, not
linearly. As a result, doubling the key size for an algorithm does not simply double the required
number of operations, but rather squares them. Although US export regulations historically
restricted key lengths to 56-bit symmetric keys (e.g. Data Encryption Standard), these restrictions
are no longer in place, so modern symmetric algorithms typically use computationally stronger 128-
to 256-bit keys. The table below illustrates how much more complex a 128-bit key is than a 56-bit
key. If a device existed that could brute-force a 56-bit encryption key in one second, it would take
that device 149.7 trillion years to brute force a 128-bit encryption key.
Symmetric key length vs. brute-force time
Key size in bits[2]   Permutations   Time for a device checking 2^56 permutations per second
8                     2^8            effectively zero
40                    2^40           0.015 milliseconds
56                    2^56           1 second
64                    2^64           4 minutes 16 seconds
128                   2^128          149,745,258,842,898 years
256                   2^256          50,955,671,114,250,100,000,000,000,000,000,000,000,000,000,000,000,000 years
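
The times in the table are straightforward arithmetic; a short Python sketch, under the table's assumption of a device testing 2^56 keys per second, reproduces them.

RATE = 2 ** 56                      # keys per second, the table's assumed device speed
SECONDS_PER_YEAR = 365 * 24 * 3600

for bits in (8, 40, 56, 64, 128, 256):
    seconds = 2 ** bits / RATE
    if seconds < 1:
        print(f"{bits:3d} bits: {seconds * 1000:.3g} milliseconds")
    elif seconds < SECONDS_PER_YEAR:
        print(f"{bits:3d} bits: {seconds:.3g} seconds")
    else:
        print(f"{bits:3d} bits: {seconds / SECONDS_PER_YEAR:.3g} years")
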
There is also a physical argument that a 128-bit symmetric key is computationally secure against
brute force attack. The so-called Von Neumann-Landauer Limit implied by the laws of physics sets a
lower limit on the energy required to perform a computation, of ln(2)·kT per bit erased in a
computation, where T is the temperature of the computing device in kelvins, k is the Boltzmann
constant, and the natural logarithm of 2 is about 0.693. No irreversible computing device can use
less energy than this, even in principle.[3] Thus, simply flipping through the possible values for
a 128-bit symmetric key (ignoring doing the actual computing to check each one) would theoretically
require 2^128 − 1 bit flips on a conventional processor. If it is assumed that the calculation occurs
near room temperature (~300 K), the Von Neumann-Landauer Limit can be applied to estimate the energy
required as ~10^18 joules, which is equivalent to consuming 30 gigawatts of power for one year
(30×10^9 W × 365 × 24 × 3600 s = 9.46×10^17 J). The full actual computation—checking each key to see if
you have found a solution—would consume many times this amount.
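
The ~10^18 joule figure can be checked directly; here is a small Python computation using the Boltzmann constant and the room-temperature assumption stated above.

import math

k = 1.380649e-23                       # Boltzmann constant, J/K
T = 300                                # room temperature in kelvins, as assumed above
energy_per_flip = math.log(2) * k * T  # the Von Neumann-Landauer lower bound per bit

flips = 2 ** 128 - 1                   # flipping through every 128-bit key value
energy = flips * energy_per_flip
print(f"{energy:.2e} J")               # ~9.8e17 J, i.e. on the order of 10^18 joules
print(energy / (30e9 * 365 * 24 * 3600))  # ~1.03, about one year of 30 GW
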
However, this argument assumes that the register values are changed using conventional set and
clear operations which inevitably generate entropy. It has been shown that computational hardware
can be designed not to encounter this theoretical obstruction (see reversible computing), though no
such computers are known to have been constructed.[citation needed]
As commercially available successors to the government ASIC solutions used in custom hardware
attacks, two emerging technologies have proven their capability in brute-force attacks against
certain ciphers: modern graphics processing unit (GPU) technology and field-programmable gate
array (FPGA) technology. GPUs benefit from their wide availability, FPGAs from their energy
efficiency per cryptographic operation. Both technologies bring the benefits of parallel
processing to brute-force attacks, offering some hundreds of processing units in the case of GPUs
and some thousands in the case of FPGAs, which makes them much better suited to cracking passwords
than conventional processors. Various publications in the field of cryptographic analysis have
demonstrated the energy efficiency of today's FPGA technology; for example, the COPACOBANA FPGA
cluster computer consumes the same energy as a single PC (600 W) but performs like 2,500 PCs for
certain algorithms. FPGA clusters, like the "Deep Crack" machine developed by the Electronic
Frontier Foundation, are a form of custom hardware attack, and multiple firms provide FPGA
cryptographic analysis solutions ranging from a single FPGA PCI Express card up to dedicated FPGA
computers.[citation needed] WPA and WPA2 encryption have successfully been brute-force attacked
with the workload reduced by a factor of 50 compared to conventional CPUs[4][5] and by some
hundreds in the case of FPGAs.
AES permits the use of 256-bit keys. Breaking a symmetric 256-bit key by brute force requires 2^128
times more computational power than breaking a 128-bit key. A device that could check a billion billion
(10^18) AES keys per second (if such a device could ever be made) would in theory require about
3×10^51 years to exhaust the 256-bit key space. Quantum computers would be needed to crack such
complicated encryption in any more practical length of time.
An underlying assumption of a brute force attack is that the complete keyspace was used to
generate keys, something that relies on an effective random number generator, and that there are
no defects in the algorithm or its implementation. For example, a number of systems that were
originally thought to be impossible to crack by brute force have nevertheless been cracked because
the key space to search through was found to be much smaller than originally thought, because of a
lack of entropy in their pseudorandom number generators. These include Netscape's
implementation of SSL (famously cracked by Ian Goldberg and David Wagner in 1995[6]) and a
Debian edition of OpenSSL discovered in 2008 to be flawed.[7]
Credential recycling
Credential recycling refers to the hacking practice of re-using username and password combinations
gathered in previous brute-force attacks. A special form of credential recycling is pass the hash.[8]
Unbreakable codes
Certain types of encryption, by their mathematical properties, cannot be defeated by brute force. An
example of this is one-time pad cryptography, where every cleartext bit has a corresponding key bit.
One-time pads rely on the ability to generate a truly random sequence of key bits. A brute force
attack would eventually reveal the correct decoding, but also every other possible combination of
bits, and would have no way of distinguishing one from the other. A small, 100-byte, one-time-pad–
encoded string subjected to a brute force attack would eventually reveal every 100-byte string
possible, including the correct answer, but mostly nonsense. Of all the answers given, there is no
way of knowing which is the correct one. Nevertheless, the system can be defeated if not
implemented correctly, for example if one-time pads are re-used or intercepted.[9]
A similar argument can apply when a single plaintext is encrypted by any method where the text is
shorter than the key. For example, if the text is a single byte, then (for most types of encryption with
large key sizes such as 128 bits) all bytes from "00"-"FF" will appear, with equal probability, as
possible plaintexts corresponding to guessed keys.
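
The indistinguishability argument can be demonstrated in miniature; this illustrative Python sketch brute-forces a one-byte one-time pad (XOR with a truly random byte) and shows that every possible plaintext byte appears among the candidates.

import secrets

plaintext = b"A"
pad = secrets.token_bytes(len(plaintext))                  # a truly random one-time pad
ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))

# Brute force over every possible one-byte pad: every plaintext byte shows up.
candidates = {ciphertext[0] ^ k for k in range(256)}
print(len(candidates))             # 256: all byte values appear as possible plaintexts
print(plaintext[0] in candidates)  # True, but indistinguishable from the other 255
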
Countermeasures
In the case of an offline attack, where the attacker has access to the encrypted material, he can try key
combinations at his leisure without risk of discovery or interference. However, database and
directory administrators can take countermeasures against online attacks, for example by limiting
the number of times a password can be tried, by introducing time delays between successive
attempts, and by locking accounts out after unsuccessful logon attempts.[10] Website
administrators may prevent a particular IP address from trying more than a predetermined number
of password attempts against any account on the site.[11]
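
Here is a minimal Python sketch of the account-lockout countermeasure described above, with thresholds assumed only for illustration; a real system would persist this state and combine it with time delays and IP-based limits.

import time

MAX_ATTEMPTS = 5         # assumed lockout threshold
LOCKOUT_SECONDS = 300    # assumed lockout duration

failures = {}            # username -> (failure count, time of last failure)

def attempt_login(username, password_is_correct):
    count, last = failures.get(username, (0, 0.0))
    if count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
        return False     # locked out: too many recent failures
    if password_is_correct:
        failures.pop(username, None)   # reset the counter on success
        return True
    failures[username] = (count + 1, time.time())
    return False
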
Brute-force search
In computer science, brute-force search or exhaustive search, also known as generate and test, is a
trivial but very general problem-solving technique that consists of systematically enumerating all
possible candidates for the solution and checking whether each candidate satisfies the problem's
statement. For example, a brute-force algorithm to find the divisors of a natural number n is to
enumerate all integers from 1 to n, and check whether each of them divides n without remainder.
For another example, consider the popular eight queens puzzle, which asks to place eight queens on
a standard chessboard so that no queen attacks any other. A brute-force approach would examine
all possible arrangements of 8 pieces in the 64 squares, and, for each arrangement, check whether
any queen attacks any other.
Brute-force search is simple to implement, and will always find a
solution if it exists. However, its cost is proportional to the number of candidate solutions, which, in
many practical problems, tends to grow very quickly as the size of the problem increases. Therefore,
brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size.
The
method is also used when the simplicity of implementation is more important than speed. This is the
case, for example, in critical applications where any errors in the algorithm would have very serious
consequences; or when using a computer to prove a mathematical theorem. Brute-force search is
also useful as "baseline" method when benchmarking other algorithms or metaheuristics. Indeed,
brute-force search can be viewed as the simplest metaheuristic.
Brute-force search should not be
confused with backtracking, where large sets of solutions can be discarded without being explicitly
enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table — namely, checking all entries of the table sequentially —
is called linear search.
Implementing the brute-force search
Basic algorithm
In order to apply brute-force search to a specific class of problems, one must implement four
procedures, first, next, valid, and output. These procedures should take as a parameter the data P for
the particular instance of the problem that is to be solved, and should do the following:
first (P): generate a first candidate solution for P.
next (P, c): generate the next candidate for P after the current one c.
valid (P, c): check whether candidate c is a solution for P.
output (P, c): use the solution c of P as appropriate to the application.
The next procedure must also tell when there are no more candidates for the instance P, after the
current one c. A convenient way to do that is to return a "null candidate", some conventional data
value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there
are no candidates at all for the instance P. The brute-force method is then expressed by the
algorithm
c ← first(P)
while c ≠ Λ do
    if valid(P, c) then output(P, c)
    c ← next(P, c)
For example, when looking for the divisors of an integer n, the instance data P is the number n. The
call first(n) should return the integer 1 if n ≥ 1, or Λ otherwise; the call next(n, c) should return c + 1 if c
< n, and Λ otherwise; and valid(n, c) should return true if and only if c is a divisor of n. (In fact, if we
choose Λ to be n + 1, the tests n ≥ 1 and c < n are unnecessary.) The brute-force search algorithm
above will call output for every candidate that is a solution to the given instance P. The algorithm is
easily modified to stop after finding the first solution, or a specified number of solutions; or after
testing a specified number of candidates, or after spending a given amount of CPU time.
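
As a concrete illustration, here is a minimal Python rendering of the four procedures for the divisor-finding example above. The function names follow the text, except that next is renamed next_candidate to avoid shadowing Python's built-in, and None plays the role of the null candidate Λ.

def first(n):
    # Generate the first candidate divisor of n, or None (Λ) if there is none.
    return 1 if n >= 1 else None

def next_candidate(n, c):
    # Generate the candidate after c, or None (Λ) when candidates are exhausted.
    return c + 1 if c < n else None

def valid(n, c):
    # Check whether candidate c divides n without remainder.
    return n % c == 0

def output(n, c):
    # Use the solution as appropriate to the application; here, just print it.
    print(c)

def brute_force_search(n):
    # The generic loop from the text: c ← first(P); while c ≠ Λ do ...
    c = first(n)
    while c is not None:
        if valid(n, c):
            output(n, c)
        c = next_candidate(n, c)

brute_force_search(12)   # prints 1, 2, 3, 4, 6, 12
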
Combinatorial explosion
The main disadvantage of the brute-force method is that, for many real-world problems, the number
of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as
described above, the number of candidates tested will be the given number n. So if n has sixteen
decimal digits, say, the search will require executing at least 10^15 computer instructions, which will
take several days on a typical PC. If n is a random 64-bit natural number, which has about 19 decimal
digits on the average, the search will take about 10 years. This steep growth in the number of
candidates, as the size of the data increases, occurs in all sorts of problems. For instance, if we are
seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to
consider, which a typical PC can generate and test in less than one second. However, adding one
more letter — which is only a 10% increase in the data size — will multiply the number of candidates
by 11 — a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×10^18
or 2.4 million million million; and the search will take about 10,000 years. This unwelcome
phenomenon is commonly called the combinatorial explosion.
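
The figures above follow from simple arithmetic; a few lines of Python reproduce them, with the testing rate of ten million candidates per second an assumption chosen to match the text's ballpark.

import math

RATE = 10_000_000   # assumed candidates tested per second, for illustration
for letters in (10, 11, 20):
    candidates = math.factorial(letters)
    seconds = candidates / RATE
    years = seconds / (365 * 24 * 3600)
    print(f"{letters} letters: {candidates:,} candidates, "
          f"{seconds:,.2f} s (about {years:,.0f} years)")
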
Speeding up brute-force searches
One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of
candidate solutions, by using heuristics specific to the problem class. For example, consider the
popular eight queens problem, which asks to place eight queens on a standard chessboard so that
no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle
there are 64^8 = 281,474,976,710,656 (over 281 trillion) possibilities to consider. However, if we
observe that the queens are all alike, and that no two queens can be placed on the same square, we
conclude that the candidates are all possible ways of choosing a set of 8 squares from the set of all
64 squares; which means 64 choose 8 = 64!/(56!·8!) = 4,426,165,368 candidate solutions — about
1/60,000 of the previous estimate. Actually, it is easy to see that no arrangement with two queens
on the same row or the same column can be a solution. Therefore, we can further restrict the set of
candidates to those arrangements where queen 1 is in row 1, queen 2 is in row 2, and so on; all in
different columns. We can describe such an arrangement by an array of eight numbers c[1] through
c[8], each of them between 1 and 8, where c[1] is the column of queen 1, c[2] is the column of
queen 2, and so on. Since these numbers must be all different, the number of candidates to search is
the number of permutations of the integers 1 through 8, namely 8! = 40,320 — about 1/100,000 of
the previous estimate, and 1/7,000,000,000 of the first one. As this example shows, a little bit of
analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn
an intractable problem into a trivial one. This example also shows that the candidate enumeration
procedures (first and next) for the restricted set of candidates may be just as simple as those of the
original set, or even simpler.
In some cases, the analysis may reduce the candidates to the set of all
valid solutions; that is, it may yield an algorithm that directly enumerates all the desired solutions (or
finds one solution, as appropriate), without wasting time with tests and the generation of invalid
candidates. For example, consider the problem of finding all integers between 1 and 1,000,000 that
are evenly divisible by 417. A naive brute-force solution would generate all integers in the range,
testing each of them for divisibility. However, that problem can be solved much more efficiently by
starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000 — which takes only
2398 (= 1,000,000 ÷ 417) steps, and no tests. An even easier solution is to divide 1,000,000
by 417 and round down to a whole number, an operation that takes two steps.
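
A short Python sketch makes the eight queens reduction above concrete: itertools.permutations enumerates exactly the 8! = 40,320 restricted candidates, and the validity test only needs to check diagonals, since rows and columns are distinct by construction.

from itertools import permutations

def valid(c):
    # c[i] is the column of the queen in row i; rows and columns are distinct
    # by construction, so only diagonal attacks remain to be checked.
    return all(abs(c[i] - c[j]) != j - i
               for i in range(8) for j in range(i + 1, 8))

solutions = [c for c in permutations(range(8)) if valid(c)]
print(len(solutions))   # 92 solutions, found among only 8! = 40,320 candidates
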
Reordering the search space
In applications that require only one solution, rather than all solutions, the expected running time of
a brute force search will often depend on the order in which the candidates are tested. As a general
rule, one should test the most promising candidates first. For example, when searching for a proper
divisor of a random number n, it is better to enumerate the candidate divisors in increasing order,
from 2 to n - 1, than the other way around — because the probability that n is divisible by c is
1/c.
Moreover, the probability of a candidate being valid is often affected by the previous failed
trials. For example, consider the problem of finding a 1 bit in a given 1000-bit string P. In this case,
the candidate solutions are the indices 1 to 1000, and a candidate c is valid if P[c] = 1. Now, suppose
that the first bit of P is equally likely to be 0 or 1, but each bit thereafter is equal to the previous one
with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the number t
of candidates examined before success will be about 6, on the average. On the other hand, if the
candidates are enumerated in the order 1,11,21,31...991,2,12,22,32 etc., the expected value of t will
be only a little more than 2.
More generally, the search space should be enumerated in such a way
that the next candidate is most likely to be valid, given that the previous trials were not. So if the
valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far
as possible from the previous ones, in that same sense. The converse holds, of course, if the
solutions are likely to be spread out more uniformly than expected by chance.
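
The claim about the 1000-bit string can be checked empirically; the following Python sketch is a Monte Carlo estimate under the stated model (indices are 0-based here, and the trial count is an arbitrary assumption).

import random

def make_string(n=1000, p_same=0.9):
    # First bit uniform; each later bit equals its predecessor with 90% probability.
    bits = [random.randint(0, 1)]
    for _ in range(n - 1):
        bits.append(bits[-1] if random.random() < p_same else 1 - bits[-1])
    return bits

def trials_until_hit(bits, order):
    # Number of candidates examined until the first 1 bit is found.
    for t, i in enumerate(order, start=1):
        if bits[i] == 1:
            return t
    return len(order)   # no 1 bit anywhere (vanishingly rare)

sequential = list(range(1000))
strided = [i for start in range(10) for i in range(start, 1000, 10)]  # 0, 10, 20, ...

for name, order in (("sequential", sequential), ("strided", strided)):
    runs = [trials_until_hit(make_string(), order) for _ in range(10_000)]
    print(name, sum(runs) / len(runs))   # roughly 6 vs. a little more than 2
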
Alternatives to brute force search
There are many other search methods, or metaheuristics, which are designed to take advantage of
various kinds of partial knowledge one may have about the solution. Heuristics can also be used to
make an early cutoff of parts of the search. One example of this is the minimax principle for
searching game trees, that eliminates many subtrees at an early stage in the search. In certain fields,
such as language parsing, techniques such as chart parsing can exploit constraints in the problem to
reduce an exponential complexity problem into a polynomial complexity problem. In many cases,
such as in Constraint Satisfaction Problems, one can dramatically reduce the search space by means
of constraint propagation, which is efficiently implemented in constraint programming languages.
The
search space for problems can also be reduced by replacing the full problem with a simplified
version. For example, in computer chess, rather than computing the full minimax tree of all possible
moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with
the tree being pruned at a certain number of moves, and the remainder of the tree being
approximated by a static evaluation function.
In cryptography
Main article: Brute-force attack
In cryptography, a brute-force attack involves systematically checking all possible keys until the
correct key is found. This strategy can in theory be used against any encrypted data[1] by an attacker
who is unable to take advantage of any weakness in an encryption system that would otherwise
make his task easier.
The key length used in the encryption determines the practical feasibility of performing a brute force
attack, with longer keys exponentially more difficult to crack than shorter ones. Brute force attacks
can be made less effective by obfuscating the data to be encoded, something that makes it more
difficult for an attacker to recognise when he has cracked the code. One of the measures of the
strength of an encryption system is how long it would theoretically take an attacker to mount a
successful brute force attack against it.
40-bit encryption
40-bit encryption refers to a key size of forty bits, or five bytes, for symmetric encryption; this
represents a relatively low level of security. A forty-bit key length corresponds to a total of 2^40 possible
keys. Although this is a large number in human terms (about a trillion, nearly two hundred times the
world's human population), it is possible to break this degree of encryption using a moderate
amount of computing power in a brute force attack — that is, trying out each possible key in turn.
It is possible to break a 40-bit key on a typical home computer in a matter of days, with home
computers getting faster every year. Even a typical home computer in 2004 could break a 40-bit key
in a little under two weeks, testing a million keys per second. Using free time on a large corporate
network or a set of zombie computers would reduce the time in proportion to the number of
computers available. With dedicated (and rather expensive) hardware, a 40-bit key can be broken in
seconds. The Electronic Frontier Foundation's Deep Crack, built by a group of enthusiasts for
US$250,000 in 1998, could break a 56-bit Data Encryption Standard (DES) key in days, and would be
able to break 40-bit DES encryption in about four seconds.
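
The timing figures above are simple arithmetic; here is a minimal Python check, where the million-keys-per-second rate comes from the text and the Deep Crack rate is merely inferred from its quoted four seconds.

KEYS_40_BIT = 2 ** 40

# A home computer testing a million keys per second (the text's 2004 figure):
print(KEYS_40_BIT / 1e6 / 86400)   # ~12.7 days, "a little under two weeks"

# Deep Crack's "about four seconds" corresponds to an assumed rate of
# roughly 2.7e11 keys per second:
print(KEYS_40_BIT / 2.7e11)        # ~4.1 seconds
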
40-bit encryption was common in software released before 1996, when algorithms with larger key
lengths could not legally be exported from the United States without a case-by-case license. As a
result, the "international" versions of web browsers were designed to have an effective key size of
40 bits when using Secure Sockets Layer to protect e-commerce. Similar limitations were imposed on
other software packages, including early versions of Wired Equivalent Privacy. In 1992, IBM designed
the CDMF algorithm to reduce the strength of DES against brute force attack to 40 bits, in order to
create exportable DES implementations.
Key size
In cryptography, key size or key length is the size measured in bits[1] of the key used in a
cryptographic algorithm (such as a cipher). An algorithm's key length is distinct from its
cryptographic security, which is a logarithmic measure of the fastest known computational attack on
the algorithm, also measured in bits. The security of an algorithm cannot exceed its key length (since
any algorithm can be cracked by brute force), but it can be smaller. For example, Triple DES has a key
size of 168 bits but provides at most 112 bits of security, since an attack of complexity 2^112 is
known. This property of Triple DES is not a weakness provided 112 bits of security is sufficient for an
application. Most symmetric-key algorithms in common use are designed to have security equal to
their key length. No asymmetric-key algorithms with this property are known; elliptic curve
cryptography comes the closest with an effective security of roughly half its key length.
Significance
Keys are used to control the operation of a cipher so that only the correct key can convert encrypted
text (ciphertext) to plaintext. Many ciphers are based on publicly known algorithms or are open
source, and so it is only the difficulty of obtaining the key that determines security of the system,
provided that there is no analytic attack (i.e., a 'structural weakness' in the algorithms or protocols
used), and assuming that the key is not otherwise available (such as via theft, extortion, or
compromise of computer systems). The widely accepted notion that the security of the system
should depend on the key alone has been explicitly formulated by Auguste Kerckhoffs (in the 1880s)
and Claude Shannon (in the 1940s); the statements are known as Kerckhoffs' principle and
Shannon's Maxim respectively.
A key should therefore be large enough that a brute force attack (possible against any encryption
algorithm) is infeasible – i.e., would take too long to execute. Shannon's work on information theory
showed that to achieve so-called perfect secrecy, it is necessary for the key length to be at least as
large as the message to be transmitted and only used once (this algorithm is called the One-time
pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic
practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead
focuses on computational security, under which the computational requirements of breaking an
encrypted text must be infeasible for an attacker.
The preferred numbers commonly used as key sizes (in bits) are powers of two, potentially
multiplied with a small odd integer.
Key size and encryption system
Encryption systems are often grouped into families. Common families include symmetric systems
(e.g. AES) and asymmetric systems (e.g. RSA); they may alternatively be grouped according to the
central algorithm used (e.g. elliptic curve cryptography).
As each of these is of a different level of cryptographic complexity, it is usual to have different key
sizes for the same level of security, depending upon the algorithm used. For example, the security
available with a 1024-bit key using asymmetric RSA is considered approximately equal in security to
an 80-bit key in a symmetric algorithm (Source: RSA Security).
The actual degree of security achieved over time varies, as more computational power and more
powerful mathematical analytic methods become available. For this reason cryptologists tend to
look at indicators that an algorithm or key length shows signs of potential vulnerability, to move to
longer key sizes or more difficult algorithms. For example, as of May 2007, a 1039-bit integer was
factored with the special number field sieve using 400 computers over 11 months.[2] The factored
number was of a special form; the special number field sieve cannot be used on RSA keys. The
computation is roughly equivalent to breaking a 700-bit RSA key. However, this might be an
advance warning that the 1024-bit RSA keys used in secure online commerce should be deprecated, since
they may become breakable in the near future. Cryptography professor Arjen Lenstra observed that
"Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor
number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is
an unqualified yes."[3]
Brute force attack
Main article: Brute force attack
Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its
algorithm, it is possible to run through the entire space of keys in what is known as a brute force
attack. Since longer symmetric keys require exponentially more work to brute force search, a
sufficiently long symmetric key makes this line of attack impractical.
With a key of length n bits, there are 2^n possible keys. This number grows very rapidly as n
increases. Moore's law suggests that computing power doubles roughly every 18 to 24 months, but
even this doubling effect leaves the larger symmetric key lengths currently considered acceptable
well out of reach. The large number of operations (2^128) required to try all possible 128-bit keys is
widely considered to be out of reach for conventional digital computing techniques for the
foreseeable future. However, alternative forms of computing technology are anticipated which may
have superior processing power than classical computers. If a suitably sized quantum computer
capable of running Grover's algorithm reliably becomes available, it would reduce a 128-bit key
down to 64-bit security, roughly a DES equivalent. This is one of the reasons why AES supports a
256-bit key length. See the discussion on the relationship between key lengths and quantum computing
attacks at the bottom of this page for more information.
Symmetric algorithm key lengths
US Government export policy has long restricted the 'strength' of cryptography which can be sent
out of the country. For many years the limit was 40 bits. Today, a key length of 40 bits offers little
protection against even a casual attacker with a single PC. The restrictions have not been removed
(As of 2007, it is still illegal to export cryptographic software using key lengths greater than 64 bits
without authorization from the U.S. Bureau of Industry and Security),[citation needed] but it became
easier to gain authorization to export to certain countries in 1999/2000.
When the Data Encryption Standard cipher was released in 1977, a key length of 56 bits was thought
to be sufficient. There was speculation at the time, however, that the NSA had deliberately reduced
the key size from the original value of 112 bits (in IBM's Lucifer cipher) or 64 bits (in one of the
versions of what was adopted as DES) so as to limit the strength of encryption available to non-US
users. The NSA has major computing resources and a large budget; some thought that 56 bits was
NSA-breakable in the late '70s. However, by the late 90s, it became clear that DES could be cracked
in a few days' time-frame with custom-built hardware such as could be purchased by a large
corporation. The book Cracking DES (O'Reilly and Associates) tells of the successful attempt to break
56-bit DES by a brute force attack mounted by a cyber civil rights group with limited resources; see
EFF DES cracker. 56 bits is now considered insufficient length for symmetric algorithm keys, and may
have been for some time. More technically and financially capable organizations were surely able to
do the same long before the effort described in the book. Distributed.net and its volunteers broke a
64-bit RC5 key in several years, using about seventy thousand (mostly home) computers.
The NSA's Skipjack algorithm used in its Fortezza program employs 80 bit keys.
DES has been replaced in many applications by Triple DES, which has 112 bits of security with 168-bit
keys.
The Advanced Encryption Standard published in 2001 uses a key size of (at minimum) 128 bits. It also
can use keys up to 256 bits (a specification requirement for submissions to the AES contest). 128 bits
is currently thought, by many observers, to be sufficient for the foreseeable future for symmetric
algorithms of AES's quality. The U.S. Government requires 192 or 256-bit AES keys for highly
sensitive data.
In 2003 the U.S. National Institute for Standards and Technology, NIST, proposed that 80-bit keys
should be phased out by 2015. As of 2005, 80-bit keys were allowed to be used only until 2010.
Asymmetric algorithm key lengths
The effectiveness of public key cryptosystems depends on the intractability (computational and
theoretical) of certain mathematical problems such as integer factorization. These problems are
time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric
algorithm keys must be longer for equivalent resistance to attack than symmetric algorithm keys. As
of 2002, a key length of 1024 bits was generally considered the minimum necessary for the RSA
encryption algorithm.
As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in strength to 80-bit symmetric
keys, 2048-bit RSA keys to 112-bit symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys.
RSA claims that 1024-bit keys are likely to become crackable some time between 2006 and 2010 and
that 2048-bit keys are sufficient until 2030. An RSA key length of 3072 bits should be used if security
is required beyond 2030.[4] NIST key management guidelines further suggest that 15360-bit RSA
keys are equivalent in strength to 256-bit symmetric keys.[5]
The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key
sizes. The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which
is related to the integer factorization problem on which RSA's strength is based. Thus, a 3072-bit
Diffie-Hellman key has about the same strength as a 3072-bit RSA key.
One of the asymmetric algorithm types, elliptic curve cryptography, or ECC, appears to be secure
with shorter keys than those needed by other asymmetric key algorithms. NIST guidelines state that
ECC keys should be twice the length of equivalent strength symmetric key algorithms. So, for
example, a 224-bit ECC key would have roughly the same strength as a 112-bit symmetric key. These
estimates assume no major breakthroughs in solving the underlying mathematical problems that
ECC is based on. A message encrypted with an elliptic key algorithm using a 109-bit long key has
been broken by brute force.[6]
The NSA specifies that "Elliptic Curve Public Key Cryptography using the 256-bit prime modulus
elliptic curve as specified in FIPS-186-2 and SHA-256 are appropriate for protecting classified
information up to the SECRET level. Use of the 384-bit prime modulus elliptic curve and SHA-384 are
necessary for the protection of TOP SECRET information."[7]
Effect of quantum computing attacks on key strength
The two best known quantum computing attacks are based on Shor's algorithm and Grover's
algorithm. Of the two, Shor's offers the greater risk to current security systems.
Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor
Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the
same order as the time needed to use that same integer as modulus for a single RSA encryption. In
other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative
constant) than to use it legitimately on a classical computer." The general consensus is that these
public key algorithms are insecure at any key size if sufficiently large quantum computers capable of
running Shor's algorithm become available. The implication of this attack is that all data encrypted
using current standards-based security systems such as the ubiquitous SSL used to protect e-commerce and Internet banking and SSH used to protect access to sensitive computing systems is at
risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a
later time.
Mainstream symmetric ciphers (such as AES or Twofish) and collision resistant hash functions (such
as SHA) are widely conjectured to offer greater security against known quantum computing attacks.
They are widely conjectured to be most vulnerable to Grover's algorithm. Bennett, Bernstein,
Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot
be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with
roughly 2^n in the classical case.[8] Thus in the presence of large quantum computers an n-bit key can
provide at most n/2 bits of security. Quantum brute force is easily defeated by doubling the key
length, which has little extra computational cost in ordinary use. This implies that at least a 160-bit
symmetric key is required to achieve an 80-bit security rating against a quantum computer.
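
The doubling rule amounts to simple arithmetic; here is a minimal Python sketch under the 2^(n/2)-invocation bound quoted above.

for n in (128, 160, 256):
    classical = 2 ** n         # brute-force invocations on a classical computer
    quantum = 2 ** (n // 2)    # roughly the Grover lower bound, 2^(n/2)
    print(f"{n}-bit key: ~2^{n} classical, ~2^{n // 2} quantum "
          f"-> {n // 2} bits of security against a quantum attacker")

# Doubling the key length restores the security level: a 160-bit key gives
# 80-bit security against a quantum attacker, and a 256-bit key gives 128-bit.
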