New Cryptographic Protocols With
Side-Channel Attack Security

by

Rachel A. Miller

B.A. in Computer Science and Physics, University of Virginia, 2009
M.A. in Mathematics, University of Virginia, 2009

Submitted to the Department of Electrical Engineering and Computer
Science in Partial Fulfillment of the Requirements for the Degree of
Master of Science in Computer Science
at the
Massachusetts Institute of Technology
June 2012

© Rachel A. Miller. All rights reserved.

The author hereby grants to MIT permission to reproduce and to distribute publicly paper and
electronic copies of this thesis document in whole or in part in any medium now known or
hereafter created.
Signature of author:
Department of Electrical Engineering and Computer Science
May 23, 2012

Certified by:
Shafrira Goldwasser
RSA Professor of Electrical Engineering and Computer Science
Thesis Supervisor

Accepted by:
Leslie A. Kolodziejski
Professor of Electrical Engineering
Chair of the Committee on Graduate Students
New Cryptographic Protocols With
Side-Channel Attack Security
by
Rachel A. Miller
Submitted to the Department of Electrical Engineering and Computer
Science on May 23, 2012 in Partial Fulfillment of the Requirements for the
Degree of Master of Science in Computer Science
ABSTRACT
Cryptographic protocols implemented in real world devices are subject to tampering attacks,
where adversaries can modify hardware or memory. This thesis studies the security of many
different primitives in the Related-Key Attack (RKA) model, where the adversary can modify a
secret key. We show how to leverage the RKA security of blockciphers to provide RKA security
for a suite of high-level primitives. This motivates a more general theoretical question, namely,
when is it possible to transfer RKA security from a primitive P1 to a primitive P2? We provide
both positive and negative answers. What emerges is a broad and high level picture of the way
achievability of RKA security varies across primitives, showing, in particular, that some
primitives resist "more" RKAs than others.
A technical challenge was to achieve RKA security without assuming the class of allowed
tampering functions is "claw-free"; this mathematical assumption fails to describe how tampering
occurs in practice, but was made for all prior constructions in the RKA model. To solve this
challenge, we present a new construction of pseudorandom generators that are not only RKA
secure but satisfy a new notion of identity-collision-resistance.
Thesis Supervisor: Shafrira Goldwasser
Title: RSA Professor of Electrical Engineering and Computer Science
Contents

1 Introduction                                                              4
  1.1 Contributions                                                         6
2 Motivation - Tampering is Real!                                           8
  2.1 Hardware Errors                                                       8
  2.2 Transient Faults                                                      8
  2.3 Overwrite attacks                                                     9
  2.4 Memory Remanence Attacks                                             10
  2.5 Protocol Failures                                                    10
  2.6 Self-destructing Memory                                              11
3 Models of Tampering                                                      12
  3.1 The History of the RKA Model                                         13
      3.1.1 Claw-free assumption                                           15
  3.2 Protection versus Detection: The History of the Self-destruct Model  17
  3.3 The History of Tampering Intermediate Values                         18
4 Notation & Preliminaries                                                 19
  4.1 Notation                                                             19
  4.2 Cryptographic Primitives & Security Games                            20
5 RKA-Model: Formal Treatment                                              22
6 Φ-RKA secure PRFs                                                        27
  6.1 Pseudorandom Functions                                               27
  6.2 Φ-RKA PRFs                                                           28
7 Identity-Key Fingerprints and Identity Collision Resistance              30
  7.1 Identity-Key Fingerprint                                             31
  7.2 Φ-RKA Secure Pseudorandom Generators                                 34
  7.3 Identity Collision Resistance                                        37
8 Definitions of Φ-RKA Secure Primitives                                   42
9 Constructions of Φ-RKA Secure Primitives                                 50
  9.1 Using Φ-RKA PRG to Construct Φ-RKA Signatures                        50
  9.2 Strong Φ-RKA Security                                                58
  9.3 Φ-RKA Secure Signatures from Φ-RKA Secure IBE Schemes                64
  9.4 Φ-RKA Secure PKE-CCA from Φ-RKA Secure IBE                           66
10 Separations between RKA Sets                                            70
  10.1 Separating Φ-RKA PRFs from Φ-RKA signatures                         70
  10.2 Other Relations Using Cnst                                          72
  10.3 Separating Φ-RKA secure PRFs from Φ-RKA secure wPRFs                73
  10.4 Separating Φ-RKA CPA SE from Φ-RKA CCA SE                           75
Acknowledgments
This thesis is based on joint work with Professor Mihir Bellare and David Cash. I'd like to express
gratitude for the time that we explored ideas together; you were wonderful to work with!
A very special thank you to my Advisor, Professor Shafi Goldwasser, for such caring mentorship
- it has been such a pleasure to work and to learn under you for my time at MIT.
Finally, I'd like to thank my family and my friends for their awesome, incredible support.
1 Introduction
Historical notions of cryptographic security have assumed that adversaries have only black box access to cryptographic primitives; in these security games, adversaries can observe input and output
behavior of the protocol, but gain no information from the physical implementation. However, this
model fails to account for the fact that adversaries do gain additional information from physical
implementations: such implementations leak data, such as the time or power it takes to compute
with a secret key, and are also susceptible to physical tampering attacks, such as modification of the
secret key with microprobes. Attacks that take advantage of this non-black-box nature of protocols
are collectively known as side channel attacks. Side channel attacks are not merely theoretical:
many practical side channel attacks have been well documented for almost two decades.
To model the adversarial powers that allow side channel attacks, theoretical cryptographers have
developed the enhanced security models of leakage and tampering. Leakage accounts for information
passive adversaries may gain about the contents of secret memory in addition to the input/output
behavior of the system. Tampering accounts for adversarial modification of an executing protocol,
including changes to the bits in secret memory, the introduction of errors to the computation, and
even physical changes to the circuit executing the protocol.
Theoretical works have studied leakage extensively over the last several years, especially in the
context of signatures and encryption. In contrast, very few theoretical results address tampering
attacks; efficient cryptographic primitives with robust security against tampering are still lacking.
Though several constructions of higher-level primitives exist, these constructions only protect
against highly algebraic and scheme-specific tampering attacks [GOR11, AHI11, Luc04, BC10,
GL10], or else require heavy and inefficient machinery, such as NIZK proofs [KKS11].
The most commonly used theoretical framework in tampering is the Related-Key Attack (RKA)
model, which allows adversarial modification of the secret key. In a related-key attack, the adversary
is allowed to modify the secret key and subsequently observe values from the cryptographic
primitive (such as signatures or ciphertexts) computed with the modified secret key. Formally, for
a primitive with secret key K, this occurs by allowing the adversary to select a function φ, and
giving the adversary values computed with φ(K) instead of with K. The RKA security definition is
parameterized by a class of allowed tampering functions Φ; we restrict how the adversary is allowed
to modify the secret key by requiring that the adversary's selected tampering function φ satisfies
φ ∈ Φ. We say a scheme is Φ-RKA secure if security is maintained even when the adversary may
perform related-key attacks for φ ∈ Φ. A second commonly used framework is the Self-Destruct
model, which is a relaxation of the RKA model allowing the primitive to destroy secret memory if
tampering is detected.
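The related-key oracle just formalized can be sketched in a few lines. This is a minimal illustration, not the thesis's construction: HMAC-SHA256 stands in for an arbitrary keyed primitive, and all names (`RKAOracle`, `flip_bit`) are invented for the example.

```python
import hmac, hashlib, os

# A minimal sketch of a related-key oracle. HMAC-SHA256 stands in for an
# arbitrary keyed primitive; all names here are illustrative.

def flip_bit(i):
    """A tampering function phi that flips bit i of the key."""
    def phi(key):
        k = bytearray(key)
        k[i // 8] ^= 1 << (i % 8)
        return bytes(k)
    return phi

class RKAOracle:
    def __init__(self, key):
        self._key = key                        # secret; never revealed directly

    def query(self, phi, msg):
        # The adversary picks phi from the allowed class and receives
        # output computed under phi(K) instead of K.
        return hmac.new(phi(self._key), msg, hashlib.sha256).digest()

identity = lambda k: k
oracle = RKAOracle(os.urandom(32))
t0 = oracle.query(identity, b"m")              # output under K
t1 = oracle.query(flip_bit(0), b"m")           # output under a related key
assert t0 != t1
```

Note that bit-flipping functions like `flip_bit` are exactly the kind of practical tampering the later chapters aim to model.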
The pseudorandom function (PRF) is the most studied primitive in the RKA model. A PRF
is a family of functions, where individual functions are defined by evaluating the function using
a fixed (secret) key. In the security game, an adversary is either given oracle access to a truly
random function, or oracle access to the function defined by the PRF evaluated with a randomly
generated secret key. A family of functions is a secure PRF if all polynomial time adversaries can
only distinguish between the two cases with a negligible advantage over guessing.
This thesis will give a formal definition of a Φ-RKA secure PRF, but it will be helpful to give
an intuitive definition here. When we consider PRFs in the RKA model, the adversary is given the
power to modify the secret key, and so should receive oracle access to the PRF evaluated with many
different key values, defining a group of functions instead of a single function. Accordingly, in the
modified security game for a Φ-RKA PRF, the adversary tries to distinguish between oracle access
to the family of functions defined by the PRF and a family of truly random functions, both keyed
with keys derived from a randomly generated secret key; in both cases, if the original secret key is K,
the adversary obtains access to the oracle evaluated with φ(K) for φ ∈ Φ of its choosing. We say a PRF
is Φ-RKA secure if all polynomial time adversaries have a negligible advantage in distinguishing
between these two cases.
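The distinguishing game described above can be sketched as follows. This is a toy model under stated assumptions: HMAC-SHA256 stands in for the PRF family, the "truly random" side is lazily sampled, and all names are invented for the illustration.

```python
import hmac, hashlib, os, secrets

# A minimal sketch of the Phi-RKA PRF distinguishing game.
# HMAC-SHA256 stands in for the PRF family; names are illustrative.

def rka_prf_game(adversary, phi_class):
    K = os.urandom(32)                    # the original secret key
    b = secrets.randbits(1)               # b = 1: real PRF, b = 0: random
    table = {}                            # lazily sampled random functions

    def oracle(phi, x):
        assert phi in phi_class           # the game enforces phi in Phi
        k = phi(K)                        # evaluation runs under phi(K), not K
        if b == 1:
            return hmac.new(k, x, hashlib.sha256).digest()
        if (k, x) not in table:
            table[(k, x)] = os.urandom(32)
        return table[(k, x)]

    return adversary(oracle) == b         # True iff the adversary wins

identity = lambda k: k

def blind_adversary(oracle):
    oracle(identity, b"x")                # one query, then a blind guess
    return 0

# A blind guesser should win about half the time; a Phi-RKA secure PRF
# keeps every efficient adversary negligibly close to this baseline.
wins = sum(rka_prf_game(blind_adversary, [identity]) for _ in range(200))
assert 40 < wins < 160
```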
It turns out that a Φ-RKA PRF is an especially powerful building block for constructing other
tamper-resilient primitives. This thesis will give a construction for any Φ-RKA secure cryptographic
primitive, using any Φ-RKA PRF and a normal secure instance of the primitive as components.
Interestingly, the security requirements for Φ-RKA secure PRFs seem to be especially
difficult to satisfy: there exist classes of tampering functions Φ for which Φ-RKA secure signature
and encryption schemes exist, but for which no PRF can be Φ-RKA secure.
The construction of tamper-resilient primitives from Φ-RKA secure PRFs, and the apparent
difficulty of constructing Φ-RKA secure PRFs, lead to further questions about the relationships
between Φ-RKA secure primitives. For example, when can one Φ-RKA secure primitive be used to
build a different Φ-RKA secure primitive in a construction preserving Φ? Are there Φ for which
Φ-RKA secure instantiations of some primitives are known, but for which Φ-RKA security is provably
impossible for another primitive?
1.1 Contributions
Prior works in the RKA model have all made the claw-free assumption [AHI11, BC10, GL10,
GOR11]. A class of functions Φ is claw-free if for any two distinct φ1, φ2 ∈ Φ, and for any K
in the domain of Φ, φ1(K) ≠ φ2(K). In the context of the RKA model, the claw-free assumption
is that the class of allowed tampering functions Φ an adversary may use, whose domain is the
secret key space, is claw-free. Unfortunately, the claw-free assumption fails to capture the tampering
attacks that exist in practice. This work drops the claw-free assumption in the RKA model for
the first time.
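A small concrete example may help show why realistic tampering is not claw-free. The sketch below (keys modeled as bit tuples; all names invented for illustration) shows that two distinct "set bit i to 0" functions, of the kind an overwrite attack applies, collide on any key whose targeted bits are already 0:

```python
# "Set bit i of the key to 0" tampering functions are distinct, yet agree on
# any key whose targeted bits are already 0 -- a "claw". Keys are bit tuples.

def set_bit_to_zero(i):
    def phi(key_bits):
        out = list(key_bits)
        out[i] = 0
        return tuple(out)
    return phi

phi1 = set_bit_to_zero(0)
phi2 = set_bit_to_zero(1)

K = (0, 0, 1, 1)            # both targeted bits already 0: a claw
assert phi1(K) == phi2(K)   # distinct functions, same output on K

K2 = (1, 1, 0, 0)           # on this key the two functions disagree
assert phi1(K2) == (0, 1, 0, 0)
assert phi2(K2) == (1, 0, 0, 0)
```

Since such bit-fixing functions model the overwrite attacks of Section 2, a claw-free Φ cannot contain them, which is exactly the gap this thesis closes.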
This thesis gives a generic way to translate the standard, non-tampering security definition for
a cryptographic primitive into a Φ-RKA security definition in the RKA model. With this, we give a
generic construction of Φ-RKA secure primitives from any normal secure instance of the primitive
and any Φ-RKA secure PRF; in other words, we construct primitives that are resistant to the
same class of tampering functions Φ as the PRF used. This construction maintains the efficiency of
the original Φ-RKA PRF. Further, this construction uses the Φ-RKA PRF in a black-box way, so
any future Φ-RKA PRFs will immediately give a suite of Φ-RKA primitives using this construction.
We note that a similar construction was given in concurrent work [GOR11], but our construction in
addition is able to drop the claw-free assumption.
Motivated by this construction of Φ-RKA secure primitives from Φ-RKA secure PRFs, we
further explore when it is possible to construct Φ-RKA secure primitives from other Φ-RKA secure
primitives. When is it possible to give a reduction from one primitive to another in a way that
preserves the class of allowed tampering functions Φ? We give positive results across a wide variety
of primitives: PRFs, wPRFs, IBE, Signatures, SE-CCA, SE-CPA, and PKE-CCA. For example, we
show that Φ-RKA secure IBE implies Φ-RKA secure IND-CCA PKE. We also give some negative
results, showing, for example, that there exists a Φ for which Φ-RKA secure signatures exist but no
Φ-RKA secure PRFs exist. Inspired by this fact, we give additional negative results: for example,
there exists a Φ such that a Φ-RKA secure wPRF exists but for which no Φ-RKA secure PRF
exists.

[Table 1: A summary of relationships we will show; each entry gives whether the row primitive's
set is contained or not contained in the column's. Rows and columns range over PRF, wPRF, IBE,
Sig, SE-CCA, SE-CPA, and PKE-CCA; the individual matrix entries are not recoverable from this
copy.]
We view the relationships between different Φ-RKA secure primitives as analogous to complexity
theory: the relations between these primitives should help us better understand the complexity
of each particular problem. We will express these relationships in a set-based notation. We define
RKA[P] to be the set of all Φ for which Φ-RKA secure versions of primitive P exist. In this
framework, a construction of a Φ-RKA secure primitive P2 from any Φ-RKA secure primitive
P1 will be expressed as RKA[P1] ⊆ RKA[P2]; these relationships will be shown in Section 9.
Showing that there exists Φ such that Φ ∈ RKA[P1] but Φ ∉ RKA[P2] will be expressed as
RKA[P1] ⊄ RKA[P2]; these relationships will be shown in Section 10. In a few cases we will need
to make additional assumptions, which we will note in the theorem.
Additionally, this thesis contains an extensive literature search of physical tampering attacks,
in addition to referencing prior theoretical works. This study can guide which tampering attacks
should be modeled by and addressed in theory.
2 Motivation - Tampering is Real!
Tampering is a major security concern when adversaries have physical access to the device running
the cryptographic primitive, which is often the case for devices like smart-cards, pay-TV decoders,
and GSM mobile phones. Successful attacks have taken a variety of forms, including chip re-writing,
differential analysis, and observing memory remanence after the device is turned off [AK96, AK97].
Each of these attacks will be further described below.
2.1 Hardware Errors
Even in standard conditions, circuits are not perfect, and computations may contain errors. Though
errors are typically rare, faulty hardware can make them much more likely. For example, a bug in
the Pentium family of processors (the FDIV bug) produced errors in floating point division [SB94].
Though errors for these faulty chips were rare for randomly generated floating-point operations,
affecting about 1 in 9 billion operations, they could be quite common for adversarially selected inputs.
2.2 Transient Faults
Transient faults are errors in computation caused by very temporary changes to conditions in the
circuit executing the protocol or to the environment of the circuit. The most common attacks
apply quick changes to circuitry's clock signal, power supply, or external electrical field[KK99].
Applying voltages or temperatures outside the normal range of the circuit can affect some write
operations. One family of chips was susceptible to an attack that modifies the applied voltage during
repeated write access to a security bit; this will often clear the bit without erasing the rest of secret
memory [AK96]. Additionally, varying voltage can prevent some processes from running properly:
another family of chips contained a random number generator used for cryptographic processes, but
the generator output a string of all 1's when a low voltage was applied, yielding very guessable
secret keys [AK96].
For some processors, transient changes to the applied voltage or to the clock signal can vary the
decoding and execution of individual instructions. Since different signal paths of information on a
chip pass through different numbers of gates, they each take (slightly) different amounts of time.
For the whole computation to be correctly executed, every path must finish before the next clock
signal. If an adversary can cause a small number of paths to fail to execute correctly, the protocol
will yield values strongly related to the correct values but with some errors. By applying a precisely
timed glitch to the clock signal, a limited number of signal paths can be affected. Similarly, changes
to the applied voltage change the amount of time each signal path takes to complete, and increasing
the time signals take can prevent some signal paths from completing. This has the potential to cause
the CPU to execute a number of wrong instructions, or allow differential analysis [AK96].
Conditional jumps are commonly among the longest signal paths; if transient errors allow
attackers to eliminate conditional jumps or test instructions[KK99], such errors can often allow
them to bypass security conditions. Instruction glitches can also extend the runtime of loops; for
example, an attacker might be able to extend a loop designed to print out an output buffer to print
out additional values in secret memory[AK96]. Breaking out of a loop early can also be harmful,
transforming a many round cipher into an easily breakable single round variant[AK97].
Varying the external electrical field can be accomplished by applying spikes of up to several
hundred volts very locally on a chip using two metal needles. Such a technique can be used to
temporarily shift the voltages of nearby transistors, creating variations in signal paths based on
locality instead of timing [KK99].
2.3 Overwrite attacks
Laser cutter microscopes can be used to overwrite single bits of ROM [AK97]. In cases where
the circuitry for a protocol is well known, such as for DES, attackers can find and overwrite
specific bits to make key extraction easy, such as the bits that control the number of rounds of the
cipher [AK97]. As another example, the lock bit of several devices with on-chip EPROM can be
erased by focusing UV light on the security lock cell, which is located sufficiently far from the rest
of memory [AK96].
Microprobing needles can be used to set or reset a target bit stored in EEPROM. This can
be useful for a number of attacks. Consider the following attack on DES: DES keys are required
to have odd parity, and DES returns an error message if this fails to hold. To attack DES, an
adversary can find the key by setting one bit at a time to 0 and seeing if the error message is
triggered [AK97].
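The parity attack just described can be simulated in a few lines. This is a toy model, not real DES: only the odd-parity check on each key byte is modeled, the "device" is a function that rejects bad-parity keys, and the simulation restores the key between probes (whereas a physical attack would tamper destructively).

```python
import random

# Toy simulation of the parity-based key extraction: forcing one bit of a
# DES key to 0 and watching for a parity error reveals that bit's value.

def has_odd_parity(byte):
    return bin(byte).count("1") % 2 == 1

def make_des_key():
    # 8 bytes: 7 random key bits each, plus a parity bit making the byte odd.
    key = []
    for _ in range(8):
        b = random.randrange(128) << 1
        if not has_odd_parity(b):
            b |= 1
        key.append(b)
    return key

def device_accepts(key):
    return all(has_odd_parity(b) for b in key)

def extract_key(tamper_and_test):
    recovered = []
    for byte_idx in range(8):
        b = 0
        for bit in range(8):
            # Force the target bit to 0; a parity error means it was 1.
            if tamper_and_test(byte_idx, bit):
                b |= 1 << bit
        recovered.append(b)
    return recovered

secret = make_des_key()

def tamper_and_test(byte_idx, bit):
    tampered = list(secret)
    tampered[byte_idx] &= ~(1 << bit)
    return not device_accepts(tampered)   # error triggered => bit was 1

assert extract_key(tamper_and_test) == secret
```

With one probe per bit, the whole 64-bit key falls to 64 tampering queries.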
Circuits are often constructed with built-in "test" functionalities, such as printing out the
contents of memory, that are destroyed after testing is completed. Bovenlander [Bov] describes an
attack on smartcards using two microprobe needles to bridge the blown fuse for test functionalities,
re-enabling a test routine that prints out secret memory.
Finally, lasers can also be used to destroy specific gates in a circuit, fixing their output to
0. This can be used to simplify the relationship between the secret key and the output, perhaps
making the secret key easier to find[AK97].
2.4 Memory Remanence Attacks
Gutmann [Gut96] describes the issue of "burn-in" in both static RAM (SRAM) and dynamic RAM
(DRAM). When values are stored in the same RAM location for a significant period of time,
physical changes to magnetic memory occur and leave a lasting change. To avoid these effects,
sensitive information should not be stored in a single RAM location for more than several minutes.
In addition to the physical changes to memory that occur over a period of minutes, even
the capacitance of information stored only momentarily in DRAM leaves a remanence. This
remanence lasts for at least several seconds after power is removed, even at room temperature
and even if the chip is removed from the motherboard. This gives adversaries with physical
access to the device a (brief) chance to gain information about any key stored in DRAM [CPGR05,
HSH+08]. SRAM also leaves significant traces for several seconds at room temperature [Sko02].
Theoretical works show that even a small amount of memory remanence is a problem; for example,
an RSA key can be recovered from only 27% of its bits selected at random [HS09].
2.5 Protocol Failures
Poorly designed protocols and hardware make attacks even simpler for an adversary to execute.
For example, as documented in [AN95], one poorly designed satellite TV decoder had a processor
to store the secret key and decrypt encoded video signals, and additionally a microcontroller
to handle commands addressed to a customer smartcard ID. Unfortunately, these pieces were very
modular and replaceable. In the "Kentucky Fried Chip" attack, the microcontroller is replaced with
one built to block the deactivation messages, addressed to the customer's ID, that the satellite
company sends out when a user stops paying for service.
2.6 Self-destructing Memory
There are several examples of circuits that self-destruct and destroy their memory when physical
tampering of hardware is detected[Sko02, Gut96, SPW99]. However, this feature is rarely used in
practice, and fails to detect some types of tampering.
Self-destructing memory is accomplished by using volatile memory inside a tamper-detecting
enclosure. When tampering is detected, the volatile memory is powered down or even shorted to
ground. It is important to note that this method only prevents attacks that the tamper-detecting
enclosure can detect. Even more crucially, since memory takes some amount of time to decay after
power is removed, this method requires that an adversary cannot read memory for some noticeable
amount of time after tampering is first detected.
Tamper detection sensors are often sensitive to changes in voltage or other environmental
conditions [AK96], since these can cause transient faults. However, self-destruct mechanisms can
decrease the reliability of hardware. One family of smartcards was designed to self-destruct upon
detecting low clock frequencies, but false positives were relatively common due to the large
fluctuations in clock frequency on card startup [AK96]. Additionally, since many transient conditions
exist naturally in circuits, for robustness sensors must be designed so that they are not
sensitive to transient changes in voltage [AK96], limiting the applicability of this model.
Several examples of self-destructing software exist. The most popular is the current iPhone operating system, which can be set to clear memory after 10 failed passcode attempts. Unfortunately,
a software based self-destruct model does not provide security guarantees in the case of physical
tampering, which is the motivation for tampering security.
3 Models of Tampering
The most general tampering models must account for all possible adversarial modifications to a
protocol being executed. This includes modification of the secret key, modification of intermediate
computed values (possibly as fine grained as changes to specific wires), and even modification of
the program code being executed. Indeed, all such tampering attacks have occurred in practice!
The vast majority of works on tampering deal only with tampering of secret keys. These
works divide nicely into two different theoretical frameworks. The first framework, the Related-Key
Attack (RKA) model, was inspired by [Bih93], and gives the adversary the ability to modify
the secret key using some family of allowable tampering functions. The second framework, the
Self-Destruct model, is a relaxation of the RKA model that allows secret memory to be destroyed
when tampering is detected, to prevent further leakage of information.
This thesis will work in the RKA model, which only allows tampering on the secret key value. In
this model, adversaries may replace the original secret key with a modified one between consecutive
evaluations of the protocol. This is formalized by letting the adversary select a function, and
replacing the secret key with this function evaluated on the secret key. Although some works in the
Self-Destruct model allow the adversary to apply arbitrary polynomial time tampering functions,
such strong results are actually impossible in the RKA model [KKS11]. Accordingly, works in
the RKA model must further restrict which tampering functions are allowed to an adversary.
Fortunately, such restrictions seem justified by the limited state of existing physical attacks, which
modify a small number of bits in memory or introduce randomized errors. On the other hand, it is
important not to restrict the allowed attacks too far.
The Self-Destruct model is similar to the RKA model in that it allows adversaries to select
and apply tampering functions to the secret key, but it relaxes the correctness requirements of
the protocol. Here, the protocol is allowed to "self-destruct" and completely overwrite secret memory
if tampering is detected. A second differentiation between the models is the subtle issue of how many
devices with identical memory are available to the adversary. Though not necessarily inherent in
the Self-Destruct model, all works to date in this model allow adversaries a logarithmic number
of bits of information about the secret key per device, and so require that the adversary only
has access to a constant number of copies of devices with the same secret key. In contrast, the
RKA model implicitly allows the adversary access to an unbounded number of copies, since the
tampering function φ is applied to the secret key in every iteration. Since the Self-Destruct model
gives relaxed conditions for correctness of protocols, and all works to date limit the number of
copies of the device given to adversaries, the Self-Destruct model allows security against a strictly
larger class of tampering functions than the RKA model [KKS11].
We note that modeling tampering of intermediate values is very difficult, since this depends on a
specific instantiation of a protocol; only one theoretical work [IPSW06] allows arbitrary adversarial
tampering of intermediate values in the circuit used to execute the computation. More common are
works addressing random faults in computation, a body of work initiated by [BDL01] and referred
to as differential fault analysis. No theoretical works to date consider modification of the program
being executed.
3.1 The History of the RKA Model
Differential cryptanalysis was a technique developed by Biham and Shamir[BS90] that attacks block
ciphers by observing encryptions of pairs of messages with known differences in their plaintexts;
the differences between the resulting ciphertexts can give significant information about the secret
key for many block ciphers. Differential cryptanalysis soon gave rise to the Related Key Attack
(RKA) model, also defined in the context of block ciphers[Bih93]: instead of known relationships
between plaintexts as in differential cryptanalysis, the RKA model uses known relationships between
successive secret keys used. The key-scheduling algorithms used in block ciphers are generally public
and well known, giving adversaries information about the relationship between successive keys. In
some cases the adversary can even cause the system to move forward or backward in the key
schedule, giving control over the key being used. Security against RKAs is now accepted as a
requirement for blockciphers to be used in practice. In fact, AES was designed with the explicit
goal of resisting RKAs [DR02].
After block ciphers, RKAs were next considered in the context of PRFs and pseudorandom
permutations (PRPs). Bellare and Kohno [BK03] provided formal theoretical definitions for both
Φ-RKA secure PRFs and Φ-RKA secure PRPs. They gave important impossibility results: there exist
polynomial time tampering functions for which no block cipher can be secure. They also showed
that secure block ciphers are secure against some very limited tampering attacks. Lucks [Luc04]
constructed a block cipher secure against some more interesting RKAs, but used a non-standard
number theoretic assumption, and also restricted the adversary to tampering with a fixed half of the
secret key. The latter is an especially strong assumption, since one can trivially construct a block
cipher resilient against such tampering from any normal secure block cipher: have the
new block cipher generate a secret key that is twice the length of the original block cipher's, and
only use the half that the adversary can't tamper with.
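The trivial countermeasure just mentioned can be sketched concretely. This is a toy model under stated assumptions: HMAC-SHA256 stands in for the underlying block cipher, and tampering is assumed (as in Lucks's restricted setting) to be confined to one fixed half of the key, here taken to be the second half.

```python
import hmac, hashlib, os

# Sketch of the trivial defense against fixed-half tampering: double the key
# length and compute only with the half the adversary cannot touch.
# HMAC-SHA256 stands in for the block cipher; names are illustrative.

def keygen():
    return os.urandom(64)                 # twice the underlying 32-byte key

def encrypt(double_key, block):
    untampered_half = double_key[:32]     # assumed out of the adversary's reach
    return hmac.new(untampered_half, block, hashlib.sha256).digest()

K = keygen()
c = encrypt(K, b"block")

# Tampering confined to the unused half has no effect on the output:
K_tampered = K[:32] + os.urandom(32)
assert encrypt(K_tampered, b"block") == c
```

This is exactly why fixed-half tampering is considered a weak adversarial model: the scheme simply routes around it.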
Bellare and Cash [BC10] gave the first construction of a Φ-RKA secure PRF under standard
assumptions, using the Decisional Diffie-Hellman (DDH) assumption. In their construction, the
adversary may modify the secret key by either adding a fixed element in the group of secret keys,
or else multiplying by a fixed element in the group of secret keys.
Goldenberg and Liskov [GL10] defined and considered Φ-RKA secure hardcore bits and one-way
functions; for these primitives, the tampering function is applied to the secret input of the function
instead of a secret key, and the security challenge still depends on the original unaltered input to
the function. They also consider Φ-RKA pseudorandom generators, and show that Φ-RKA secure
pseudorandom bits can be used to build Φ-RKA secure block ciphers. They additionally show
that hardcore bits with typical proofs are not Φ-RKA secure pseudorandom bits, emphasizing the
difficulty of constructing tamper-resilient pseudorandom primitives.
Applebaum, Harnik and Ishai [AHI11] gave a formal study of Φ-RKA secure symmetric encryption,
and gave two constructions of a Φ-RKA secure encryption scheme where Φ is comprised of
functions adding constants over the group of secret keys. They use standard assumptions, giving
one construction based on DDH, and the other based on lattice assumptions such as learning parity
with noise (LPN) or learning with errors (LWE). They also give a construction for a Φ-RKA secure
PRG under DDH, with the same Φ as their encryption scheme construction. They show that a Φ-RKA
secure PRG can be used to generate a Φ-RKA secure encryption scheme when Φ is claw-free.
Finally, they show that Φ-RKA secure PRGs can be used for several other applications, such as
correlation-robust hash functions as defined in [IKNP03].
A work by Goyal, O'Neill and Rao [GOR11], developed concurrently to the work in this thesis,
defines correlated-input (CI) hash functions, which are a generalization of the correlation-robust
hash functions of [IKNP03]. As with RKA security, the security of CI hash functions is parameterized
by a class of allowed tampering functions Φ. They show that CI hash functions, with security
parameterized by Φ, can be used to build Φ-RKA secure signature schemes. (They also indicate their
approach can be used to build other Φ-RKA secure primitives from CI hash functions as well.)
Their construction is similar to one included in this thesis, but their very definition of CI hash
functions immediately requires that Φ is claw-free, while this thesis is able to drop the claw-free
assumption.
For a CI hash function H and unique tampering functions
#1,...,#n+1
require that no adversary can predict H(#i+1 (r)) from H (#1(r)), ... , H (#"(r)). If
#n,+
E <b, they
1(r)
=
#i(r)
for some previous i, this is trivial! Further, the adversary of the CI hash function must select
all tampering functions
#i
before even seeing the public hash-key, giving a non-adaptive form of
security. [GOR11] gives a construction of a CI hash function where 4 is the class of polynomials
over the input space, based on a variant of the q-Diffie Hellman Inversion assumption. Although
the class of all polynomials is a very broad class, requiring the adversary to select the tampering
functions non-adaptively gives that
#i(r)
=
j (r) only with negligible probability, which is used
crucially in their security proofs.
3.1.1 Claw-free Assumption
As mentioned previously, the claw-free assumption for works in the RKA model requires that any two distinct tampering functions φ_1, φ_2 ∈ Φ must disagree for every secret key in the key space: for all K ∈ 𝒦, φ_1(K) ≠ φ_2(K). Unfortunately, this assumption fails to hold for any tampering attacks used in practice and discussed in Section 2, including random bit errors, setting a subset of bits of the secret key to a fixed value, and memory remanence attacks. Additionally, the claw-free assumption is quite restrictive: it even disallows classes of tampering functions that intuitively give very little power to an adversary. For example, it disallows tampering functions that modify the secret key on only a negligible number of secret key values.
Prior Works. Prior to this work, all constructions of Φ-RKA secure primitives relied on the claw-free assumption. This includes the PRF construction of [BK03], which holds for classes of claw-free permutations; the block cipher construction of [Luc04], which holds for classes of claw-free functions over the half of the secret key the adversary can tamper; the PRF construction of [BC10], which holds for the claw-free classes of addition or multiplication within the group of secret keys; the symmetric encryption schemes of [AHI11], which hold for the claw-free family of addition over the key space; and finally the CI hash constructions of [GOR11], in which the very definition of the security requirements implies that the family of allowed tampering functions must be claw-free.
Why is the Claw-Free Assumption so Useful? To reflect how tampering works in practice, in the security game for a Φ-RKA secure primitive the adversary should be able to choose not to modify the secret key. To model this, we let φ_ID be the identity function that maps any key back to itself, and we let ID = {φ_ID}. In effect, ID-RKA security does not allow the adversary to modify the secret key, and so ID-RKA security should reduce to the standard (no tampering) security definition for any primitive. In general, we will assume that ID ⊆ Φ, and so a Φ-RKA secure primitive should always have standard security for the primitive. The claw-free assumption, combined with the assumption that ID ⊆ Φ, implies that no other tampering function φ ∈ Φ can ever map a secret key onto itself: otherwise φ(K) = φ_ID(K) = K would be a claw.
The absence of the claw-free assumption creates serious technical difficulties in the security
proof for a <b-RKA secure primitive. Generally, the proof proceeds via a reduction of the <b-RKA
primitive to normal (not RKA) security of the primitive; this occurs by giving the adversary values
from the normal primitive when the secret key is unmodified. However, the reduction only sees
the output of the primitive, and not the hidden secret key, so it is difficult for the reduction to tell
when the secret key is unmodified. The claw-free assumption avoids this difficulty since the secret
key is only unmodified when the tampering function chosen is φ_ID.
3.2 Protection versus Detection: The History of the Self-destruct Model
It is impossible for many primitives to be secure in the RKA model against arbitrary polynomial time tampering functions. Consider the following example from [KKS11]. Let Φ contain functions that set a specific bit of the secret key to 0; then Φ-RKA secure signature schemes cannot exist. The adversary can simply proceed bit by bit, setting the bit to 0 and seeing if the scheme will still produce a valid signature under the original verification key; if it does, that bit of the signing key is a 0, and otherwise, it is a 1. Since there is no way to protect against this Φ in this model, and this Φ is a very small subset of all polynomial time functions, it is clear that Φ-RKA secure signatures do not exist when the adversary is allowed to apply arbitrary polynomial time tampering functions to the secret key.
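The bit-by-bit recovery described above is mechanical enough to sketch in code. The following toy is purely illustrative (all names are hypothetical, and HMAC-SHA256 stands in for a signature scheme): the adversary has an oracle that signs under the key with one chosen bit forced to 0, and checks each signature against the original key.

```python
import hmac, hashlib, os

# Toy sketch of the bit-fixing key-recovery attack. HMAC-SHA256 stands in for
# a signature scheme; verification checks against the original key.
def sign(sk: bytes, m: bytes) -> bytes:
    return hmac.new(sk, m, hashlib.sha256).digest()

def verify(sk: bytes, m: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(sk, m), sig)

def set_bit_to_zero(sk: bytes, i: int) -> bytes:
    b = bytearray(sk)
    b[i // 8] &= ~(1 << (i % 8)) & 0xFF      # force bit i of the key to 0
    return bytes(b)

def recover_key_bits(tampered_sign, verify_original, nbits):
    # For each bit i: force bit i to 0, request a signature, and check it under
    # the ORIGINAL key. Valid => bit i was already 0; invalid => it was 1.
    m = b"probe"
    return [0 if verify_original(m, tampered_sign(i, m)) else 1
            for i in range(nbits)]

sk = os.urandom(4)                           # 32-bit toy signing key
recovered = recover_key_bits(
    lambda i, m: sign(set_bit_to_zero(sk, i), m),   # tampered signing oracle
    lambda m, sig: verify(sk, m, sig),              # check under original key
    32)
true_bits = [(sk[i // 8] >> (i % 8)) & 1 for i in range(32)]
assert recovered == true_bits                # full key recovered bit by bit
```

Each query leaks exactly one bit, so a k-bit key falls to k tampering queries; no property of the signature scheme itself can prevent this.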
Since many general security guarantees are not possible in the RKA model, many works add
additional assumptions. A common additional assumption is self-destruct functionality, where the primitive can completely destroy secret memory if tampering is detected.
The first theoretical work to assume a self-destruct mechanism was that of Ishai, Prabhakaran, Sahai, and Wagner [IPSW06]. This work gives a compiler from a general circuit to a circuit that remains secure even when the adversary may set any wire in the circuit to either 0 or to 1 as the computation progresses. However, this result requires a blow-up of a factor of t in the size of the compiled circuit to allow the adversary to tamper with t wires in between successive runs of the protocol. Another work that gives a general compiler for tampering-resilient circuits is recent work by Kalai, Lewko and Waters: they give a general compiler for circuits that tolerates short-circuit errors on a bounded fraction of the gates on any path, a limited form of tampering that adversarially replaces the output of a gate by one of its inputs.
Work by Gennaro et al. [GLM+04] gives a compiler allowing tampering by arbitrary efficient tampering functions, by storing a signature of the secret state along with the secret state. In addition to assuming self-destruct functionality, this requires the very strong assumption of a secret signing key hard-wired into the circuit, which is itself assumed to be beyond tampering. This work is extended in [LL10] to model both tampering and leakage of specific memory locations and wire values; they give both impossibility results and a modification of [IPSW06] to give a positive result.
However, the [LL10] model assumes a secret key space that is polynomial, not exponential, in the
security parameter, giving adversaries a non-negligible advantage even without tampering; they also
assume arbitrary polynomial time tampering functions, giving the tampering function the ability
to depend on all potential secret keys, and so the ability to depend on the correct secret key.
Work by Dziembowski, Pietrzak and Wichs [DPW10] defines the security notion of non-malleable codes (NMC). When an adversary is allowed to modify secret memory that has been encoded with an NMC, the NMC security guarantees that the resulting output of the protocol is unmodified, or else is completely unrelated to the original value of secret memory. By adding components to encode and decode into the NMC, [DPW10] gives a way to compile a circuit into one with non-malleable security. In particular, they give a construction of an NMC that protects against tampering that modifies each bit of secret memory independently, and they show that NMCs exist for much broader classes of tampering functions.
The final and most recent work using the self-destruct assumption is that of Tauman Kalai, Kanukurthi, and Sahai [KKS11], which gives the first signature and encryption schemes that are
secure against both leakage and tampering. Though the tampering functions can be chosen arbitrarily, the adversary is only allowed a bounded number of tampering queries per time period.
Their work additionally relies on the assumption of a non-tamperable common reference string
(CRS) that is available to all parties, and uses non-interactive zero knowledge (NIZK) proofs.
3.3 The History of Tampering Intermediate Values
Boneh, DeMillo and Lipton [BDL01] first explored the dangers of errors in computation: even a
very small number of bit flips in intermediate values in a protocol can give a full break of an
otherwise secure scheme. Specifically, they showed that both a common implementation of the
Fiat-Shamir identification protocol and a Chinese-Remainder-Theorem based implementation of
RSA signatures are insecure when an adversary can induce a small number of random bit-flips in
intermediate values. This paper inspired a body of work called differential fault analysis (DFA),
which focuses on exploits of blockciphers caused by small bitwise errors; see [BS97] for a long list
of ciphers broken by DFA.
Protecting protocols against arbitrary intermediate values is very difficult, as this depends on
the specific instantiation of a protocol, and requires analysis of tampering of each intermediate
value in the computation. The only works to date that allow non-random errors in intermediate values are the works by Ishai, Prabhakaran, Sahai, and Wagner [IPSW06] and by Kalai, Lewko and Waters mentioned above in the history of the self-destruct model (Section 3.2).
4 Notation & Preliminaries

4.1 Notation
For a set S, let |S| denote the size of S, and let s ←$ S denote the operation of picking a random element of S and calling it s.
For sets X and Y, let Fun(X, Y) be the set of all functions mapping X into Y. When Z is also a set, let FF(X, Y, Z) = Fun(X × Y, Z). A class of functions Φ will be associated with some domain X and range Y, and will be a subset of all functions mapping X to Y: Φ ⊆ Fun(X, Y). A class of functions Φ associated with domain 𝒦 will be said to be claw-free if for all φ_1, φ_2 ∈ Φ with φ_1 ≠ φ_2, and for all K ∈ 𝒦, φ_1(K) ≠ φ_2(K). In the context of work on tampering, the claw-free assumption requires that the family of allowed tampering functions Φ is claw-free.
A function family is a tuple of algorithms: a parameter generator, a key generator, and a deterministic evaluator, FF = (P, 𝒦, F). For each k ∈ ℕ, the function family also defines sets Dom(k) and Rng(k) such that F(K, ·) maps elements of Dom(k) to Rng(k). We will assume that each function family has an associated polynomial d for the input length, with Dom(k) ⊆ {0, 1}^{d(k)}, and an associated polynomial l for the output length, with Rng(k) ⊆ {0, 1}^{l(k)}.
For a bit b, we use b̄ to denote the opposite of b, i.e. 0̄ = 1 and 1̄ = 0.
A (binary) string x is identified with a vector over {0, 1}, so that |x| is its length and x[i] is its i-th bit. For i ≤ j, x[i, j] denotes the inclusive range from the i-th bit through the j-th bit of x.

When v is a vector, |v| gives the number of coordinates of v, and v[i] is used to denote its i-th coordinate: v = (v[1], ..., v[|v|]).
Algorithms. Unless otherwise noted, algorithms are randomized and run in time polynomial in the size of their inputs.

For algorithms A and B, we let A‖B denote the algorithm that on any input x returns A(x)‖B(x).
We use notation for a fixed sequence of probabilistic functions, where each function is evaluated with fresh randomness, and the final object is returned: {x ← A; y ← B(x); C(x, y)} represents events A, B and C being executed in order, with the returned value equal to the returned value of C.
4.2 Cryptographic Primitives & Security Games
A cryptographic primitive P is a tuple of algorithms, which include a key generation algorithm 𝒦 that on input a security parameter k outputs a secret key sk and optionally also a public key pk, and a number of algorithms that may take the secret key as one of their inputs. We represent a cryptographic primitive as P = (𝒦, g_1(·), ..., g_m(·), f_1(sk, ·), ..., f_n(sk, ·)), where each algorithm f_i depends on the secret key sk, and each algorithm g_i does not. Without loss of generality, all algorithms run in time polynomial in k.
Security Games.
Every cryptographic primitive P also has an associated security definition, which in our work
will be given as a security game with a security parameter k. A security game defines an interaction
between two algorithms: a stateful challenger algorithm, and a stateful adversary algorithm, both
of which are required to run in time polynomial in k. The security game can be played with a class
of potential adversary algorithms that conform to required interactions of the game.
A security game is comprised of phases, which are distinct periods of interaction between the adversary and the challenger. Each phase might optionally include the challenger correctly executing required computation, messages between the challenger and the adversary, and adversary access to the output of some of the algorithms f_i in P that depend on the secret key.

When the adversary is allowed to view output of algorithms f_i that depend on the secret key, this is modeled as the adversary gaining access to an oracle that computes f_i, denoted O^{f_i}. An oracle call to O^{f_i}(·) returns f_i(sk, ·). We number the phases, and for each phase j we define the set of allowed oracle access S_j as containing i if the adversary has oracle access to O^{f_i} during phase j.
As part of the security definition, some input combinations may be disallowed by the oracle, where the set of disallowed inputs may be based on values chosen during the interactive security game. (For example, the security game for an IND-CCA2 encryption scheme disallows the adversary from requesting a decryption of its challenge ciphertext.) Additionally, the disallowed inputs might differ even for oracle access to the same f_i during different phases. The notion of disallowed requests is formalized by a function disallow_{i,j}(·) that disallows queries to O^{f_i} during phase j: disallow_{i,j}(x_1, ..., x_k) returns true if i ∉ S_j or if x_1, ..., x_k should be disallowed based on the security game in phase j, and false otherwise. In phase j, the oracle for f_i sends the adversary either ⊥ if disallow_{i,j}(x_1, ..., x_k) is true, or f_i(sk, x_1, ..., x_k) otherwise.
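The disallow mechanism can be sketched concretely. The following minimal sketch (hypothetical names, and a toy XOR "scheme" used purely for shape) shows an oracle guarded by a disallow predicate, in the style of the IND-CCA2 decryption oracle mentioned above:

```python
# Sketch of an oracle guarded by a disallow predicate, in the shape of the
# IND-CCA2 decryption oracle. The XOR "scheme" is purely illustrative.
class DecryptionOracle:
    def __init__(self, decrypt, challenge_ct):
        self.decrypt = decrypt
        self.challenge_ct = challenge_ct

    def disallow(self, ct):
        # Decrypting the challenge ciphertext is the one disallowed input.
        return ct == self.challenge_ct

    def query(self, ct):
        if self.disallow(ct):
            return None                  # models the symbol ⊥
        return self.decrypt(ct)

key = 0x5A                               # toy one-byte XOR key
enc = lambda m: bytes(b ^ key for b in m)
dec = lambda c: bytes(b ^ key for b in c)

challenge = enc(b"challenge bit")
oracle = DecryptionOracle(dec, challenge)
assert oracle.query(challenge) is None          # disallowed: returns ⊥
assert oracle.query(enc(b"other")) == b"other"  # allowed queries answered
```

The predicate here depends on a value (the challenge ciphertext) fixed during the game, matching the phase-dependent disallow_{i,j} above.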
Each security game is associated with a boolean function break, which has value 1 when the primitive is considered broken, and with a required bound on the probability of a break for the primitive to be considered secure. Here, the probability of a break is measured over the choice of
randomness for both the challenger and for the adversary. The function break might depend on
the message transcript between the challenger and the adversary or on the state of the challenger
and the adversary. The scheme might be considered broken when the adversary can distinguish
between two possible behaviors of the challenger by guessing a secret bit selected by the game, or
when the adversary generates a value that satisfies a certain relationship with the secret or public
keys of the scheme. In the last case, even if the submitted values satisfy the stated relationship,
some values might not be considered a valid break based on prior values that have occurred in the
security game. (For example, in the game defining security for a signature scheme, even a valid
message signature pair is not a break if the adversary made a prior oracle query on that message.)
We use G^A to denote that a security game G should be executed with an adversary A that is within the class of allowed adversaries for G. Some of our games will return a value; we will use (G^A ⇒ 1) to denote the event that G played with A returns a value of 1.
5 RKA-Model: Formal Treatment
So far our description of adversarially selected tampering functions has been loose, but we will
now formalize structural requirements on them. For example, such functions must map secret keys
back into the secret key space. We will call functions that meet these structural requirements a
Related-Key Deriving function.
Definition 5.1 (Related-Key Deriving Functions, from [BK03]) Consider a cryptographic primitive P with security parameter k. Let 𝒦 be the space of secret keys for P, which is the range of secret keys created by key generation run on input 1^k.

We say a function φ is a Related-Key Deriving (RKD) function compatible with P if it satisfies φ : 𝒦 → 𝒦.

We define RKD = Fun(𝒦, 𝒦), the set of all RKD functions for a key space. A class of RKD functions, typically denoted by Φ, is a set Φ ⊆ RKD. A class Φ is said to be compatible with a primitive P if all φ ∈ Φ are compatible with P.
Example Classes of RKD Functions. The RKA model is parameterized by classes of allowed
RKD functions. One class that will be used repeatedly in our proofs is the class ID, which contains
only the identity function. Another class that we will reference several times is the class of all
constant functions, which will be denoted by Cnst.
Definition 5.2 (ID, the class of the identity function) Let φ_ID(x) = x. Then ID = {φ_ID} is the class containing only the single function φ_ID.
Definition 5.3 (Cnst, the class of all constant functions) For any c ∈ 𝒦, let φ_c : 𝒦 → 𝒦 be given by φ_c(x) = c for all x ∈ 𝒦, returning c on every input. Then Cnst is defined as the set containing φ_c for all c ∈ 𝒦.
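The two classes just defined can be made concrete over a toy key space, together with a brute-force check of the claw-free condition from Section 4.1. This is an illustrative sketch (names invented here, not part of the formal development); note that Cnst on its own is claw-free, but adding φ_ID creates a claw, echoing the discussion in Section 3.1.1:

```python
# Toy instantiation of Definitions 5.2 and 5.3 over a small key space, plus a
# brute-force check of the claw-free condition; illustrative names only.
keyspace = list(range(8))                  # toy key space

phi_id = lambda k: k                       # the identity RKD function
ID = [phi_id]                              # the class ID = {phi_ID}
Cnst = [(lambda c: lambda k: c)(c) for c in keyspace]  # all constant functions

def claw_free(phis, keys):
    # Claw-free: any two distinct functions disagree on every key.
    return all(p1(k) != p2(k)
               for i, p1 in enumerate(phis)
               for p2 in phis[i + 1:]
               for k in keys)

assert claw_free(Cnst, keyspace)           # distinct constants always differ
assert not claw_free(ID + Cnst, keyspace)  # phi_ID(c) == phi_c(c): a claw
```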
Oracle Access with Tampering. As we move to the Φ-RKA model, we will also want to give adversaries oracle access to values computed with tampered keys. We will use O^{f_i,Φ} to denote oracle access to f_i with allowed tampering in Φ. An adversary with such access can call O^{f_i,Φ}(φ, ·); if the query is disallowed or if φ ∉ Φ, the oracle will return ⊥, and otherwise the oracle will return f_i(φ(sk), ·).
RKA Model. We will now formally define the Related-Key Attack (RKA) model for an adversary of a cryptographic primitive, where the allowed tampering is parameterized by a class of Related-Key Deriving (RKD) functions Φ. Recall that a cryptographic primitive P is a tuple of algorithms, some of which take a secret key sk as input. In the standard security game for the primitive, an adversary receives oracle access (for a specified period of time) to some of the algorithms that are evaluated using sk. As examples: in the game defining signature schemes, the adversary receives oracle access to the signing algorithm S, receiving S(sk, m) on query O^S(m); in the game defining IND-CCA2 security for a symmetric key encryption scheme, the adversary receives access to both an encryption oracle O^E(m) and a decryption oracle O^D(c), receiving E(sk, m) and D(sk, c) in response, except for the disallowed decryption query of the challenge ciphertext.
We define a Φ-RKA adversary against P to have all the abilities of a standard adversary against P; in addition, it receives oracle access to the same algorithms as the standard adversary, except evaluated using φ(sk) for φ ∈ Φ instead of sk. Returning to our examples: in the modified game for signature schemes, the adversary will now receive oracle access to O^{S,Φ}, and on query O^{S,Φ}(φ, m) the adversary receives a signature generated with the tampered key, S(φ(sk), m), in response; in the game defining IND-CCA2 security for a symmetric encryption scheme, the adversary now receives access to encryption oracle O^{E,Φ}(φ, m) and decryption oracle O^{D,Φ}(φ, c), receiving encryptions under the tampered key, E(φ(sk), m), and decryptions under the tampered key, D(φ(sk), c), in response.
Moving to the RKA model causes some delicate issues to arise concerning when a primitive is considered broken by a Φ-RKA adversary. As an example, let's consider a Φ-RKA secure signature scheme, comprised of algorithms (𝒦, S, V) for key generation, signing, and verification respectively. In the standard security definition for a signature scheme, a signing key and verification key are produced as (sk, vk) ←$ 𝒦(1^k). An adversary adaptively asks the signing oracle for signatures on messages m_i as O^S(m_i) and receives S(sk, m_i) in return; the scheme is considered broken if the adversary can produce a message m* and signature σ* such that V(vk, m*, σ*) returns true and such that the adversary never queried for a signature of m*.

What should be considered a break for a Φ-RKA adversary against a signature scheme? Such an adversary will adaptively make queries for signatures of m_i with tampering functions φ_i ∈ Φ to a signing oracle that will create signatures with tampered keys, denoted by O^{S,Φ}(φ_i, m_i), receiving S(φ_i(sk), m_i) in return; in some cases φ_i(sk) = sk, and in other cases φ_i(sk) will be a different related key. When the adversary produces m* and σ*, it is clear this should not be accepted as a forgery if the adversary received a signature of m* under the original secret key; in other words, m* and σ* should not count as a forgery if there is an i such that m_i = m* and φ_i(sk) = sk. However, a more difficult and subtle question: should a forgery m* and σ* be considered a break if the adversary received a signature of m* under some different, related secret key? Considering this a valid break for the Φ-RKA adversary makes minimal assumptions about what should be disallowed, and it turns out that such a notion is also achievable.
Definition 5.4 (Φ-RKA Security) Recall that a cryptographic primitive P is a tuple of algorithms, P = (𝒦, g_1(·), ..., g_m(·), f_1(sk, ·), ..., f_n(sk, ·)), where each f_i depends on the secret key sk, and each g_i does not. Say a standard (non-tampering) security game for P defines S_j to be the set of i such that the adversary receives oracle access to O^{f_i} during phase j, possibly with inputs disallowed by disallow_{i,j}(·). To use oracle access to f_i in phase j, the adversary queries O^{f_i}(x_1, ..., x_k), to which the oracle replies with either ⊥ if disallow_{i,j}(x_1, ..., x_k) is true, or f_i(sk, x_1, ..., x_k) otherwise.

To move to the Φ-RKA model, the Φ-RKA adversary participates in the same security game as in the standard model but with some modifications. In phase j, if i ∈ S_j so that the adversary received oracle access to f_i, the Φ-RKA adversary should now receive oracle access to O^{f_i,Φ} instead of O^{f_i}. When the Φ-RKA adversary makes call O^{f_i,Φ}(φ, x_1, ..., x_k) to the oracle, the oracle first computes the tampered key sk' = φ(sk). If sk' = sk and disallow_{i,j}(x_1, ..., x_k) is true, the oracle returns ⊥; otherwise the oracle returns f_i(sk', x_1, ..., x_k).
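The oracle transformation in Definition 5.4 can be sketched directly. This is a minimal illustrative sketch (the keyed function f, the integer key space, and all names are invented here), showing that the disallow predicate is consulted only when the tampered key equals the original key:

```python
# Sketch of the Phi-RKA oracle of Definition 5.4: compute the tampered key
# first; apply disallow only when it equals the original key. The keyed
# function f and the RKD functions below are illustrative stand-ins.
def make_rka_oracle(f_i, sk, phi_class, disallow):
    def oracle(phi, *args):
        if phi not in phi_class:
            return None                      # models the symbol ⊥
        sk_tampered = phi(sk)
        if sk_tampered == sk and disallow(*args):
            return None                      # disallowed only under original key
        return f_i(sk_tampered, *args)
    return oracle

f = lambda key, x: (key * 31 + x) % 97       # toy keyed function over integers
identity = lambda k: k                       # phi_ID
flip = lambda k: k ^ 1                       # RKD function flipping the low bit
oracle = make_rka_oracle(f, sk=12, phi_class=[identity, flip],
                         disallow=lambda x: x == 0)

assert oracle(identity, 0) is None           # query disallowed: original key
assert oracle(flip, 0) == f(13, 0)           # same query allowed: tampered key
assert oracle(identity, 5) == f(12, 5)       # ordinary allowed query
```

The same input can thus be disallowed under φ_ID yet answered under a tampering function, matching the "minimal assumption" remark that follows.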
The definition of a break of the security of the scheme is unmodified, and should explicitly state all uses of the secret key of the scheme. Additionally, the definition of what it means for P to be secure remains unchanged. The standard security definition defines P to be secure if certain bounds apply to the probability of a break for all polynomial time adversaries; we say that P is Φ-RKA secure if the same bounds apply to all polynomial time Φ-RKA adversaries.

As part of this thesis, we will give concrete definitions for Φ-RKA security as it applies to a number of different primitives, including PRFs, wPRFs, IBE, signatures, SE-CCA, SE-CPA, and PKE-CCA.
Remarks about Φ-RKA Security.

- We note that this model gives the Φ-RKA adversary access to many copies of the primitive with the original secret key; for each iteration, the adversary is given access to the primitive evaluated on the original secret key, even though the tampering functions φ applied in previous iterations might not have been invertible.

- Note that while some queries of a Φ-RKA adversary to an oracle O^{f_i,Φ} might be disallowed via disallow_{i,j}, no such restriction is applied unless φ(sk) = sk; this appears to be a minimal assumption.

- If the standard security definition for a primitive creates a challenge for the adversary, this challenge should be created with the original secret key sk, and not some tampered φ(sk). Otherwise this definition would immediately disallow classes Φ that contain φ with low-entropy output.

- We would like for security in the RKA model to always imply security in a standard security model. To do this, we will assume that ID, the class that only contains the identity function, is always contained in the class of allowed tampering functions. ID models the adversary's ability to choose not to tamper with the secret key of a system, and so ID-RKA security always reduces to the normal security for the primitive.
The definition of a break of security for a Φ-RKA secure P is the same as for a normally secure instance of P. This is particularly interesting when the definition of a valid break depends on prior queries of the adversary. In the case of a Φ-RKA signature scheme, a break is defined when the adversary produces a valid message, signature pair such that it never received a signature of the message generated with the original signing key; a forgery on a message is still considered valid even if the adversary received signatures on that message generated with modified signing keys! This is identical to the definition of a forgery in the normally secure case, where a forgery is valid if the adversary never received a signature on the message generated with the original signing key.

Signature schemes are of particular interest in considering the definition of a break of security in the RKA model, since the breaking condition depends on prior queries of the adversary. Prior work defining Φ-RKA security for signature schemes [GOR11] has instead defined a break as a valid message signature pair such that the adversary never queried for a signature on that message using the identity function, φ_ID, as the tampering function. This disallows a subset of the forgeries we disallow, since a query with RKD function φ_ID will always generate the signature under the original unmodified secret key; however, this definition immediately requires that the adversary cannot find a φ ≠ φ_ID such that φ(sk) = sk, or the adversary will have an easy forgery. Thus the definition of [GOR11] disallows Φ-RKA secure signature schemes when Φ fails to be claw-free, while such constructions are possible and meaningful in our definition.
RKA sets. This thesis will address the relative strength of Φ-RKA security for many different cryptographic primitives. In order to achieve this, we will need a notion of what Φ-RKA security is attainable for each primitive.

Definition 5.5 (RKA[P], the set of Φ for which Φ-RKA secure P exist) A class of RKD functions Φ is said to be achievable for the primitive P if there exists a Φ-RKA secure instantiation of P. We further define RKA[P] to be the set of all Φ that are achievable for primitive P.

Using this set-based notation will allow us to easily make comparative statements about Φ-RKA secure primitives. We will give a number of containment results of the form RKA[P_1] ⊆ RKA[P_2] by providing a construction of a Φ-RKA secure P_2 from any Φ-RKA secure P_1. We will also provide negative results of the form RKA[P_1] ⊄ RKA[P_2] by showing there exists a Φ for which Φ ∈ RKA[P_1] but Φ ∉ RKA[P_2]. Both containment and non-containment relationships give implications about the achievability of Φ-RKA security for a given primitive.

We note that since RKA[P] is comprised of Φ such that a Φ-RKA secure P exists, RKA[P] might change depending on what assumptions we are willing to make, such as whether one-way functions exist. When considering RKA[P], we will in general make the assumption that a normal (non-tampering) secure instance of P exists, but will need no additional assumptions for the relationships between primitives that we provide.
6 Φ-RKA secure PRFs

The first theoretical works addressing RKA security dealt with PRFs [BK03, Luc04], and as we will later discover, Φ-RKA PRFs will be a strong starting point for building other primitives. We give a full definition of Φ-RKA secure PRFs here since it is used widely in our constructions, and to illustrate some of the points in Definition 5.4 of Φ-RKA security.
6.1 Pseudorandom Functions

Since the definition of a Φ-RKA PRF builds on the standard definition of a PRF [GGM86], we will first give the standard definition here.
Definition 6.1 (Pseudorandom Function, PRF) The following security game is defined for a function family FF = (𝒦, F) with algorithms for key generation and evaluation, respectively. For a security parameter k, we assume that an instance of FF with key K ←$ 𝒦(1^k) has an efficiently computable domain and range given by Dom(k) and Rng(k).

1. Setup Phase. The challenger generates a key for the PRF as K ←$ 𝒦(1^k), and selects a random bit b ←$ {0, 1}.

2. Query Phase. The adversary is allowed to adaptively query for points x ∈ Dom(k) to an oracle as O^F(x).

If b = 0 and the adversary has never queried x, the challenger randomly selects a point y ←$ Rng(k) to return to the adversary, and saves it as T[x] = y; if x has been queried previously, i.e. T[x] is defined, the challenger returns the previous value of T[x].

Otherwise, if b = 1, then oracle access to O^F is equivalent to the PRF evaluation function, and the challenger responds to a query x with F(K, x).

3. Guess. The adversary must return a guess b' of the hidden bit b.

For any function family FF, let Adv^prf_{FF,A}(k) = Pr[b = b'] − 1/2. We say that FF is a Pseudorandom Function (PRF) if Adv^prf_{FF,A}(k) is negligible in k for all polynomial time adversaries A.
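The game in Definition 6.1 is short enough to sketch in code. The following sketch is illustrative only (none of the names come from the thesis): HMAC-SHA256 stands in for the family F, and the b = 0 world lazily samples a random function.

```python
import os, random, hmac, hashlib

# Sketch of the PRF game of Definition 6.1. HMAC-SHA256 stands in for the
# family F; the b = 0 world is a lazily sampled random function.
class PRFGame:
    def __init__(self):
        self.K = os.urandom(16)              # Setup: key K
        self.b = random.randrange(2)         # hidden bit b
        self.T = {}                          # lazy table for the b = 0 world

    def query(self, x: bytes) -> bytes:      # Query phase oracle O^F(x)
        if self.b == 1:
            return hmac.new(self.K, x, hashlib.sha256).digest()
        if x not in self.T:                  # b = 0: fresh random point,
            self.T[x] = os.urandom(32)       # cached so repeats are consistent
        return self.T[x]

    def finish(self, b_guess: int) -> bool:  # Guess phase
        return b_guess == self.b

game = PRFGame()
assert game.query(b"x") == game.query(b"x")  # oracle is consistent per point
```

The caching in the b = 0 branch is what makes the lazily sampled table behave like a single fixed random function across repeated queries.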
6.2 Φ-RKA PRFs
The definition of a PRF requires that the function indexed by a randomly selected key is indistinguishable from a random function for any polynomial time adversary. In the RKA model the
adversary is given access to an oracle that evaluates the PRF using tampered keys, and so will
now have access to the functions indexed by multiple keys. As will be formalized shortly, PRF
security in the RKA model requires that for any polynomial time adversary, the family of functions
is indistinguishable from a family of truly random functions, when accessed at indices generated
by a random key and tampered values of that key.
The only changes to the security game of the Φ-RKA PRF relative to that of a normal PRF are in the query phase, where the adversary receives oracle access to the PRF evaluated using the secret key; this is modified to give the adversary access to the PRF evaluated using tampered secret keys.
Definition 6.2 (Φ-RKA PRF) The following security game is defined for a function family FF = (𝒦, F) with algorithms for key generation and evaluation, respectively. For a security parameter k, we assume that an instance of FF with key K ←$ 𝒦(1^k) has an efficiently computable domain and range given by Dom(k) and Rng(k), and a compatible class of RKD functions Φ.

1. Setup Phase. The challenger generates a key for the PRF as K ←$ 𝒦(1^k), and selects a random bit b ←$ {0, 1}.

2. Query Phase. The adversary is allowed to adaptively query for points x ∈ Dom(k) with a tampering function φ ∈ Φ as O^{F,Φ}(φ, x). Upon receiving an oracle query (φ, x), the challenger first computes the tampered key K' ← φ(K). If b = 0 and the adversary has never received a value for x and K', the challenger randomly selects a point y ←$ Rng(k) to return to the adversary, and saves it as T[K', x] = y; if x with key K' has been queried previously, i.e. T[K', x] is defined, the challenger returns the previous value in T[K', x]. Otherwise, if b = 1, then the challenger responds to a query x with F(K', x).

3. Guess. The adversary must return a guess b' of the hidden bit.

For any function family FF, let Adv^{Φ-rka-prf}_{FF,A}(k) = Pr[b = b'] − 1/2. We say that FF is a Φ-RKA PRF if Adv^{Φ-rka-prf}_{FF,A}(k) is negligible in k for all polynomial time adversaries A.
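The change from the standard PRF game is visible directly in code: in the b = 0 world the random-function table is indexed by the pair (K', x), so each tampered key K' = φ(K) indexes its own independent random function. A minimal sketch, with HMAC-SHA256 as a stand-in PRF and invented RKD functions:

```python
import os, random, hmac, hashlib

# Sketch of the Phi-RKA PRF game of Definition 6.2. The b = 0 table is
# indexed by (tampered key, input); HMAC and the RKD functions are stand-ins.
class RKAPRFGame:
    def __init__(self, phi_class):
        self.K = os.urandom(16)
        self.b = random.randrange(2)
        self.T = {}                              # keyed by (tampered key, x)
        self.phi_class = phi_class

    def query(self, phi_name, x: bytes) -> bytes:
        K2 = self.phi_class[phi_name](self.K)    # tampered key K' = phi(K)
        if self.b == 1:
            return hmac.new(K2, x, hashlib.sha256).digest()
        if (K2, x) not in self.T:                # b = 0: one random function
            self.T[(K2, x)] = os.urandom(32)     # per tampered key, lazily
        return self.T[(K2, x)]

phis = {"id": lambda k: k,                       # phi_ID
        "flip": lambda k: bytes([k[0] ^ 1]) + k[1:]}  # flip one key bit
game = RKAPRFGame(phis)
assert game.query("id", b"x") == game.query("id", b"x")    # consistent
assert game.query("id", b"x") != game.query("flip", b"x")  # distinct keys
```

Two RKD functions that agree on K index the same random function in the b = 0 world, which is exactly why tampering functions with claws complicate security proofs.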
We note that in general, PRFs are not Φ-RKA PRFs. Although individual randomly selected
functions from the PRF are indistinguishable from random, the structure of functions with related
indices might be far from random. For example, a PRF might simply ignore several bits of its key,
and so functions with indices differing only in the ignored bits will actually be identical.
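This degenerate case is easy to exhibit concretely. In the sketch below (HMAC-SHA256 is a stand-in PRF; the construction is our own illustration), the function discards the last byte of its key, so it remains a PRF of the surviving key material, yet two keys differing only in the ignored byte evaluate identically, which an RKA adversary detects immediately.

```python
import hashlib
import hmac
import os

def bad_prf(key: bytes, x: bytes) -> bytes:
    # Ignores the last byte of the key: still a PRF of key[:-1], but
    # trivially distinguishable from random under related-key queries.
    return hmac.new(key[:-1], x, hashlib.sha256).digest()

K = os.urandom(32)
K_related = K[:-1] + bytes([K[-1] ^ 0xFF])  # differs only in the ignored byte
assert K != K_related
# A truly random function family disagrees on distinct keys (w.h.p.),
# but bad_prf collides, so an RKA distinguisher wins.
assert bad_prf(K, b"any point") == bad_prf(K_related, b"any point")
```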
An alternative and equivalent definition follows which is based on two separate games, one in
which an adversary always receives access to the real function family, and a second one in which it
always receives access to a truly random function family. This definition will be used to simplify
several proofs.
Definition 6.3 (Alternative Definition for a Φ-RKA PRF). The following two security games, PRFReal and PRFRand, are both defined for a function family FF = (K, F) with algorithms for key generation and evaluation, respectively. For a security parameter k, we assume that an instance of F with key K ←$ K(1^k) has an efficiently computable domain and range given by Dom(·) and Rng(·), and a compatible class of RKD functions Φ. The two games share a setup phase and a guess phase, but differ in the query phase.
1. Setup Phase. The challenger generates a key for the PRF as K ←$ K(1^k).
2a. Query Phase for PRFReal. The adversary is allowed to adaptively make queries of the form (x, φ) with x ∈ Dom(k) and φ ∈ Φ. The challenger first computes K' ← φ(K) and returns F(K', x) to the adversary.
2b. Query Phase for PRFRand. The adversary is allowed to adaptively make queries of the form (x, φ) with x ∈ Dom(k) and φ ∈ Φ. The challenger first computes K' ← φ(K). If T[K', x] is undefined, the challenger sets T[K', x] ←$ Rng(k). The challenger sends T[K', x] to the adversary.
3. Guess. The adversary must return a guess b' of whether it is playing game PRFRand or game PRFReal.
Standard arguments imply that Adv^prf_{FF,A,Φ}(k) = Pr[PRFRand^A ⇒ 1] − Pr[PRFReal^A ⇒ 1].
7 Identity-Key Fingerprints and Identity Collision Resistance
In our constructions, we will want to reduce the Φ-RKA security of a primitive to the normal, non-tampering security of the same primitive. Such a reduction is achieved by constructing an adversary A that can break a normally secure primitive by using any Φ-RKA adversary B that can break the Φ-RKA security of the primitive. Our constructions have A answer oracle queries of the Φ-RKA adversary B in the following way:
1. When B's submitted φ leaves the secret key unchanged, A should answer B's queries using its own oracle access to a non-tampering oracle. Since a break in the Φ-RKA security of any primitive is defined as a valid break for the original untampered secret key, this will guarantee that a successful break for B will also be a successful break for A.
2. B's queries to different modified keys should give B no additional information about the original secret key; A should be able to simulate answers to oracle queries to modified keys.
To achieve the second point, we design our Φ-RKA primitives so that A can simulate answers to B's oracle queries by randomly generating instances of a normal (non-tampering) version of the primitive.
The first point adds a greater technical challenge: when answering B's query with related-key deriving function φ, how will A know when the secret key is unchanged, so that it should answer with its own oracle access? A only has oracle access to the primitive, not access to the secret key of the primitive. As an example of this difficulty, imagine A is an adversary against a PRF. A is given oracle access to either the PRF or a family of truly random functions evaluated with the hidden key sk. Though A may query this oracle at many points in the domain of the PRF, this might be of little help in telling whether φ(sk) = sk. (Consider a PRF that ignores the last bit of the secret key, and a Φ that allows the adversary to set the last bit of the secret key to either 0 or 1.)
7.1 Identity-Key Fingerprint
To solve this and other technical issues associated with dropping the claw-free assumption, we will use a new tool, called an Identity-Key Fingerprint (IDFP), to determine when the secret key of a PRF has changed. An IDFP for a function family generates a vector of points in the domain, called the fingerprint; except with negligible probability, the function evaluated with a random secret key and the function evaluated with a tampered key will differ on at least one of the points in the fingerprint. Like our definition of RKA security, an IDFP will be parameterized by Φ, the class of allowed tampering functions.
Definition 7.1 (Identity-Key Fingerprint - IDFP). Let FF = (K, F) be any function family, with key-generation and evaluation algorithms respectively, an associated security parameter k, and an efficiently computable domain given by Dom(·).
A fingerprint is a vector of points in the domain of F with length v(k) that is polynomial in k. An identity-key fingerprint (IDFP) is an algorithm IKfp that produces a fingerprint for F as w ←$ IKfp(1^k). The following game gives the security requirements for F, and is parameterized by a compatible class of RKD functions Φ for F.
1. Setup Phase. The challenger generates a key for F as K ←$ K(1^k). The challenger also generates the fingerprint for F as w ←$ IKfp(1^k), and sends the fingerprint to the adversary. Finally, the variable WIN is set to have value false.
2. Query Phase. The adversary is allowed to make adaptive queries to an IDFP oracle as O^idfp,Φ(φ) with φ ∈ Φ. The challenger answers this oracle query by first computing the tampered key K' ← φ(K). If K ≠ K', and F(K, w[i]) = F(K', w[i]) for all i ∈ [|w|], then WIN ← true. In any event, the challenger sends the adversary F(K', w[i]) for all i ∈ [|w|].
For any adversary A that is polynomial time in k, we define Adv^idfp_{FF,A,Φ}(k) = Pr[WIN = true]. We say FF is Φ-IDFP secure if this advantage function is negligible in k for all such polynomial time adversaries A.
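The oracle and win condition of Definition 7.1 can be sketched in a few lines. Everything here is schematic: HMAC-SHA256 stands in for the function family F, and the two-point fingerprint is fixed arbitrarily.

```python
import hashlib
import hmac
import os

def F(K: bytes, x: bytes) -> bytes:
    # stand-in for the function family's evaluation algorithm
    return hmac.new(K, x, hashlib.sha256).digest()

def idfp_oracle(K: bytes, w: list, phi):
    """One O^idfp query per Definition 7.1: returns (responses, win)."""
    K_prime = phi(K)
    responses = [F(K_prime, wi) for wi in w]
    # WIN: the key was actually modified, yet every fingerprint point agrees
    win = (K_prime != K) and responses == [F(K, wi) for wi in w]
    return responses, win

K = os.urandom(32)
w = [b"fp-0", b"fp-1"]                      # fingerprint w <- IKfp(1^k)
_, win = idfp_oracle(K, w, lambda k: k)     # identity: key unchanged, no win
assert not win
phi_flip = lambda k: bytes([k[0] ^ 1]) + k[1:]
_, win = idfp_oracle(K, w, phi_flip)        # real key change shifts all outputs
assert not win
```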
The notion of IDFP can be seen as a relaxation of the key fingerprint defined by Bellare and Cash [BC10]. The key fingerprint of [BC10] for a PRF allows statistical disambiguation of any pair of keys: except with negligible probability, any two keys will differ at some point in the key fingerprint. They showed that the Naor-Reingold PRF (NR-PRF) has such a key fingerprint, but in general, key fingerprints do not seem common. Interestingly, their own Φ-RKA PRFs, which build on NR-PRFs, are not known to have a key fingerprint. The IDFP has the weaker requirement of computational disambiguation of the original key from other keys: given a randomly generated original key, it must be computationally hard to find a second key that agrees at all points in the IDFP.
The IDFP notion is easier to satisfy than that of the key fingerprint, and will still be sufficient for our constructions. We are not able to give a general proof that a Φ-RKA secure PRF is also Φ-IDFP secure, and so will have to make this an assumption in some of our constructions. We are, however, able to show that for claw-free Φ, any Φ-RKA secure PRF with large enough range is Φ-IDFP secure, using any point in the domain as the fingerprint. This adds to the evidence that assuming Φ-IDFP security for a Φ-RKA secure PRF is reasonable.
Proposition 7.2 (Φ-RKA secure PRFs have Φ-IDFP for claw-free Φ). Let Φ be a claw-free class of RKD functions, and let FF be a Φ-RKA secure PRF with efficiently computable domain Dom(·) and super-polynomial size range Rng(·). Then FF is Φ-IDFP secure, using the IDFP algorithm IKfp(1^k) that returns the 1-vector comprised of any fixed element in Dom(·).
Proof: Define the IDFP algorithm IKfp(1^k) to return the lexicographically first element in Dom(1^k). Using any adversary A against the Φ-IDFP security of PRF FF, we construct an adversary B against the Φ-RKA security of FF with

Adv^idfp_{FF,A,Φ}(k) ≤ Adv^prf_{FF,B,Φ}(k) + q(k)/|Rng(k)|,

where q(k) is the number of oracle queries made by A to O^idfp,Φ.
B receives oracle access to O^prf,Φ, which answers its queries O^prf,Φ(φ, x) using either the PRF or a truly random function. Recall that φ_ID is the identity function and the RKA model assumes that φ_ID ∈ Φ. B generates the fingerprint as w ← IKfp(1^k), and then computes y[1] ← O^prf,Φ(φ_ID, w[1]), which is the value returned by the oracle, evaluated using the original secret key and the single value in the fingerprint. Finally, B initializes a set T to empty.
For the reduction, B provides the IDFP adversary A with input of fingerprint w. When A makes a query to the IDFP oracle as O^idfp,Φ(φ), B queries its Φ-RKA PRF oracle with z ← O^prf,Φ(φ, w[1]), and returns the value z to A. Additionally, if φ ≠ φ_ID and z = y[1], then B sets a variable WIN ← true. When A halts, B returns 1 if WIN = true, and 0 otherwise.
For the analysis, we separately consider the cases that B has oracle access to the real PRF in PRFReal or to a truly random function in PRFRand.
First let's consider the game PRFReal, where B's oracle queries O^prf,Φ(φ, w[1]) are answered by the real PRF, z = F(φ(K), w[1]). In this case, A is getting the correct distribution for the PRF and so is playing the exact game defining Φ-IDFP security; B will set WIN = true when A has found φ such that F(φ(K), w[1]) = F(φ_ID(K), w[1]) and has broken the security of the IDFP. Thus B returns 1 with probability given by Adv^idfp_{FF,A,Φ}(k).
In game PRFRand, B's oracle queries O^prf,Φ(φ, w[1]) are instead answered with a truly random function. Let q(k) be the number of queries A makes to O^idfp,Φ. When A queries a φ_i ≠ φ_ID, the tampered key must be different from the original key, since φ_i(K) ≠ K by the claw-free assumption and the fact that φ_ID(K) = K. Thus in this case z = O^prf,Φ(φ_i, w[1]) is independent from y[1] = O^prf,Φ(φ_ID, w[1]), and so for each query B sets WIN to true with probability at most 1/|Rng(1^k)|. B never sets WIN to true when A queries with φ_ID, since WIN requires a modified key. Thus in this case, by a union bound, B returns 1 with probability at most q(k)/|Rng(1^k)|. Then

Pr[PRFReal^B ⇒ 1] = Adv^idfp_{FF,A,Φ}(k)
Pr[PRFRand^B ⇒ 1] ≤ q(k)/|Rng(k)|

where q(·) is the number of oracle queries made by A. Using Definition 6.3 for Adv^prf_{FF,B,Φ}(k) gives the desired result.
Using Proposition 7.2, the Φ-RKA PRF construction of [BC10] can be used to generate an IDFP, since the Φ they use is claw-free. Interestingly, though they use key fingerprints to create their Φ-RKA secure PRF, no key fingerprint is known for their construction.
Unfortunately, this construction of an IDFP only works for claw-free Φ, while we will need IDFPs for classes Φ without the claw-free assumption. For some of our constructions of higher-level Φ-RKA secure primitives, we will assume the Φ-IDFP security of given Φ-RKA PRFs, even when Φ is not claw-free. In practice, a vector of distinct points in the domain should be a suitable fingerprint, and assuming the existence of IDFPs seems reasonable even when Φ is not claw-free.
7.2 Φ-RKA Secure Pseudorandom Generators
To drop the claw-free assumption on Φ in our constructions, we introduce and use a new security notion called Identity Collision Resistance (ICR); this notion is parameterized by Φ, and a primitive possessing this property is said to be Φ-ICR secure.
In particular, we will build pseudorandom generators (PRGs) that are both Φ-RKA and Φ-ICR secure, using any Φ-RKA and Φ-IDFP secure PRF. A pseudorandom generator (PRG) [BM84] is a tuple of algorithms PRG = (K, G) for key generation and evaluation of the PRG respectively; PRG also has an associated function r(·) that gives the output length of the PRG, which must be longer than the length of the secret key. The secret key of a PRG is called a seed; when the evaluation algorithm is run on a randomly generated seed, it outputs a value that is indistinguishable from random, even though r(·) ensures that the output is longer than the secret key and so is lacking full entropy.
Definition 7.3 (Pseudorandom Generator - PRG). A pseudorandom generator (PRG) is a tuple of algorithms PRG = (K, G). For a security parameter k, the key generation algorithm K(1^k) generates a secret key sk called a seed, and G(sk) returns a string of length r(k), where r(k) > |sk|, ensuring that PRG is length expanding. Further, the output of G(sk) should be indistinguishable from a random string for any polynomial time adversary, as is formalized in the following security game for an adversary A:
1. Setup Phase. The challenger generates a seed as K ←$ K(1^k) and generates a random bit b ←$ {0, 1}.
2. Query Phase. The adversary A is given oracle access to O^prg, which takes no input parameters. If b = 0 and A has already queried O^prg, the previously returned value is returned again; if this is A's first query, the oracle returns a value from the uniform distribution of length r(k), i.e. a uniformly selected element of {0, 1}^r(k). Otherwise, when b = 1, the oracle returns G(K) to A.
3. Guess. A must return a guess b' of the hidden bit b.
We define Adv^prg_{PRG,A}(k) = Pr[b = b'] − 1/2, and say that PRG is a pseudorandom generator (PRG) if Adv^prg_{PRG,A}(k) is negligible in k for all polynomial time adversaries A.
Φ-RKA Secure PRGs. Extending the definition of a PRG to the RKA model, the adversary can query the PRG evaluation function along with a related-key deriving function φ. In the modified security game, the adversary can obtain output of the PRG (or random function) with not only the original seed, but also seeds modified with φ. To the adversary, each distinct seed of the PRG should yield what appears to be an independent random value.
We note that unlike the standard definition for a PRG, we will not require a Φ-RKA PRG to be length expanding: the benefit of the Φ-RKA PRG is that its output appears random even for related keys, not length extension. However, one can easily create a length-extending Φ-RKA PRG by applying a standard PRG to the output of a Φ-RKA PRG [Luc04].
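The length-extension remark can be made concrete: feed the fixed-length output of the Φ-RKA PRG into an ordinary length-expanding PRG. In the sketch below SHAKE-256 stands in for both components; only the composition, not the choice of primitives, is the point.

```python
import hashlib
import os

def rka_prg(seed: bytes) -> bytes:
    # stand-in for a (non-expanding) Phi-RKA PRG: 32 bytes in, 32 bytes out
    return hashlib.shake_256(b"rka|" + seed).digest(32)

def std_prg(seed: bytes, out_len: int) -> bytes:
    # stand-in for a standard, length-expanding PRG
    return hashlib.shake_256(b"std|" + seed).digest(out_len)

def expanding_rka_prg(seed: bytes, out_len: int) -> bytes:
    # Compose: RKA security comes from the inner PRG, length extension
    # from the outer one.
    return std_prg(rka_prg(seed), out_len)

seed = os.urandom(32)
out = expanding_rka_prg(seed, 128)
assert len(out) == 128 and len(out) > len(seed)  # length expanding
assert out == expanding_rka_prg(seed, 128)       # deterministic in the seed
```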
Definition 7.4 (Φ-RKA PRG). The following security game for an adversary A is defined for a PRG = (K, G), an RKD specification Φ that is compatible with PRG, and a security parameter k.
1. Setup Phase. The challenger generates a seed for the PRG as K ←$ K(1^k), and selects a random bit b ←$ {0, 1}.
2. Query Phase. A is allowed to adaptively query tampering functions φ in the family of allowed tampering functions Φ. Upon receiving a query φ, the challenger first computes the tampered seed K' ← φ(K). If b = 0 and the adversary has never received a value for K', the challenger randomly selects a point in the range of G to return to the adversary; if seed K' has been queried previously, the challenger returns the previous value. Otherwise, if b = 1 then the challenger responds to the query with G(K').
3. Guess. The adversary must return a guess b' of the hidden bit.
For a primitive PRG as defined above, we define Adv^rka-prg_{PRG,A,Φ}(k) = Pr[b = b'] − 1/2, and say that PRG is a Φ-RKA secure PRG if Adv^rka-prg_{PRG,A,Φ}(k) is negligible in k for all polynomial time adversaries A.
As with Φ-RKA secure PRFs, it will help our proofs to have an alternative definition of Φ-RKA secure PRGs that separately considers the case where the adversary has oracle access to the PRG and the case where it has oracle access to a random function instead.
Definition 7.5 (Alternative Definition for a Φ-RKA PRG). The following two security games, PRGReal and PRGRand, are both defined for PRG = (K, G), an RKD specification Φ that is compatible with PRG, a security parameter k, and an adversary A. We assume that an instance of PRG with key K ←$ K(1^k) has an efficiently computable domain and range given by Dom(·) and Rng(·). These two games share a setup phase and a guess phase, but differ in the query phase.
1. Setup Phase. The challenger generates a seed for the PRG as K ←$ K(1^k).
2a. Query Phase for PRGReal. A is allowed to adaptively query tampering functions φ ∈ Φ as O^prg,Φ(φ). The challenger first computes K' ← φ(K) and returns G(K') to the adversary.
2b. Query Phase for PRGRand. A is allowed to adaptively query tampering functions φ ∈ Φ as O^prg,Φ(φ). The challenger first computes K' ← φ(K). If T[K'] is undefined, the challenger sets T[K'] ←$ Rng(k). The challenger sends T[K'] to the adversary.
3. Guess. The adversary must return a guess b' of whether it is playing game PRGRand or game PRGReal.
Standard arguments imply that Adv^rka-prg_{PRG,A,Φ}(k) = Pr[PRGRand^A ⇒ 1] − Pr[PRGReal^A ⇒ 1].
7.3 Identity Collision Resistance
For our constructions, we will define and use a new, weak form of collision resistance for PRGs. Roughly, for a Φ-ICR secure PRG, it should be difficult for an adversary to find a φ ∈ Φ that modifies the hidden seed but fails to change the output of the PRG, i.e., φ(K) ≠ K yet G(φ(K)) = G(K). This definition is formalized below.
Definition 7.6 (Φ-ICR secure PRG). The following security game for an adversary C is defined for a PRG = (K, G) with a compatible RKD specification Φ, and a security parameter k.
1. Setup Phase. The challenger generates a seed for the PRG as K ←$ K(1^k), computes T ← G(K), and initializes the variable WIN to false.
2. Query Phase. The adversary C adaptively queries O^icr,Φ as O^icr,Φ(φ). The challenger answers these queries by first computing the tampered seed K' ← φ(K), and then evaluating the PRG on the tampered seed as S ← G(K'). If K' ≠ K and S = T, the adversary has found a collision, and WIN ← true. To answer the query, the challenger returns S to the adversary C.
Let Adv^icr_{PRG,C,Φ}(k) equal Pr[WIN = true] when the game has input 1^k. We say PRG is Φ-ICR secure if this advantage function is negligible.
Does Φ-RKA security imply Φ-ICR security?
ICR security is only violated when two distinct seeds map to the same value in the range of a PRG. We note that an adversary given oracle access to a truly random function will only very rarely be able to find two distinct seeds that map to the same value; since a Φ-RKA secure PRG is indistinguishable from a random function (when evaluated at points given by tampered seeds), it would at first appear that Φ-RKA security implies Φ-ICR security for a PRG, and that an adversary that breaks the Φ-ICR security of the PRG could be used to build a distinguisher between the Φ-RKA PRG and a truly random function. Unfortunately, and unintuitively, this is not the case.
Let's begin to see where this intuition breaks down. To build a distinguisher between the Φ-RKA PRG and a truly random function, we would try to get the Φ-ICR adversary to find a collision of distinct points; if the Φ-ICR adversary finds a collision with its oracle access, the distinguisher guesses it has access to the PRG.
This intuition breaks down because the distinguisher only sees the output of the oracle, and so never actually sees the secret key; the distinguisher does not know whether a repeated output was generated by the same secret key. Further, the intuition assumes that φ modifies the secret key in the same way in both the real and random games. As we will see shortly, it is possible that a repeated output is caused by two different seeds in the real game, and by the same seed in the random game. This will allow the adversary to break the Φ-ICR security, while still giving indistinguishable views of the real and random games for the Φ-RKA PRG distinguisher!
We formalize these ideas to prove that Φ-RKA security does not imply Φ-ICR security.
Proposition 7.7 (Φ-RKA Security does not imply Φ-ICR Security). Suppose there exists a normal-secure PRG PRG = (K, G) with security parameter k, output length given by r(k) = ω(k), and with range super-polynomial in k. Then there exists a PRG PRG′ = (K′, G′) with the same r(·), and a compatible class of RKD functions Φ, such that PRG′ is Φ-RKA secure but PRG′ is not Φ-ICR secure.
Proof: Let ℓ(k) be the length of the seed returned by K(1^k). We construct the new key-generation algorithm K′ for PRG′ to pick a random bit c ←$ {0, 1} and return c ‖ K(1^k); that is, K′(1^k) = {c ←$ {0, 1}; K ←$ K(1^k) : return c ‖ K}. The new evaluation algorithm G′ parses its input K̄ into c ‖ K, where c is a bit and |K| = ℓ(k), and then returns G(K); more formally, G′(K̄) = G(K̄[2..|K̄|]). Note that this new evaluation algorithm G′ ignores the first bit of the seed, and so G′ will agree on any two seeds differing only in the first bit; thus any φ that modifies only the first bit will lead to a win for an adversary in the Φ-ICR security game.
We define Φ such that our constructed PRG′ is Φ-RKA secure but not Φ-ICR secure. Briefly, our Φ will be created so that φ ∈ Φ will only modify its input in the real game PRGReal and not in the random game PRGRand; with this, output collisions will occur in both cases, but any output collisions in the random case are trivial since the seed will not be modified there. Let K̄⁻ denote K̄ with its first bit flipped. For R ∈ {0, 1}^r(k) and K̄ ∈ {0, 1}^(ℓ(k)+1), we define

φ_R(K̄) = K̄ if G′(K̄) ≠ R, and φ_R(K̄) = K̄⁻ otherwise.

We let Φ = {φ_ID} ∪ {φ_R : R ∈ {0, 1}^r(k)} be the collection of φ_R for all R ∈ {0, 1}^r(k), along with φ_ID the identity function.
Our constructed PRG′ fails to be Φ-ICR secure: an adversary A makes a query φ_ID, receives X = G′(K̄) in response, and can then make query φ_X. By definition of our Φ, since G′(K̄) = X, φ_X will flip the first bit of the seed and give a non-trivial collision, setting WIN ← true and giving the adversary an advantage of 1.
Next, we must show that PRG′ is Φ-RKA secure under the assumption that PRG is normal (i.e. ID-RKA) secure. To do this, we construct an adversary B against the normal security of PRG from any adversary A against the Φ-RKA security of PRG′.
B receives either oracle access to PRG in game PRGReal, or oracle access to a random function in game PRGRand. We let B compute X ← O^prg(). We note that in A's game PRGReal, all of A's queries should be answered by a single value generated by PRG′ evaluated on a random seed (every φ ∈ Φ either leaves the seed unchanged or flips only the ignored first bit); similarly, in A's game PRGRand, all of A's queries should be answered by a single random value in the range of PRG′. Thus, B can answer X to all of A's queries O^prg,Φ(φ); this correctly simulates A's game PRGReal when B is in game PRGReal, and correctly simulates A's game PRGRand when B is in game PRGRand. When A returns its guess b' of the hidden bit, B also outputs b'. Thus

Adv^prg_{PRG,B,ID}(k) = Pr[PRGRand^B ⇒ 1] − Pr[PRGReal^B ⇒ 1] = Pr[PRGRand^A ⇒ 1] − Pr[PRGReal^A ⇒ 1] = Adv^rka-prg_{PRG′,A,Φ}(k),

as desired.
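The separation in the proof can be exercised end to end. In the sketch below (SHAKE-256 stands in for the underlying normal-secure PRG g; seeds are bit strings; all names are our own), G′ ignores the first seed bit and φ_R flips that bit exactly when G′(seed) = R, so the Φ-ICR adversary that first learns X = G′(seed) and then queries φ_X obtains a non-trivial collision with probability 1.

```python
import hashlib
import secrets

def g(seed_bits: str) -> str:
    # underlying normal-secure PRG g (stand-in), on ell(k)-bit seeds
    return hashlib.shake_256(seed_bits.encode()).hexdigest(16)

def g_bar(seed_bits: str) -> str:
    # constructed evaluation G': ignores the first bit of the (ell+1)-bit seed
    return g(seed_bits[1:])

def flip_first(seed_bits: str) -> str:
    return str(1 - int(seed_bits[0])) + seed_bits[1:]

def phi_R(R: str):
    # phi_R flips the first seed bit iff the PRG output equals R
    return lambda seed: flip_first(seed) if g_bar(seed) == R else seed

# The Phi-ICR adversary's two queries:
seed = str(secrets.randbits(1)) + format(secrets.randbits(64), "064b")
X = g_bar(seed)                        # query phi_ID, learn X = G'(seed)
new_seed = phi_R(X)(seed)              # then query phi_X
assert new_seed != seed                # the seed really was modified...
assert g_bar(new_seed) == g_bar(seed)  # ...yet the outputs collide: WIN
```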
Building Φ-ICR PRGs. As shown above, not all Φ-RKA secure PRGs are Φ-ICR secure, but we will need PRGs with both properties for our constructions. To do this, we will build a PRG that is both Φ-ICR and Φ-RKA secure from any Φ-RKA PRF that is Φ-IDFP secure, i.e., a Φ-RKA secure PRF with a key fingerprint for the identity function.
Proposition 7.8. Let FF = (K, F) be a Φ-RKA PRF with output length ℓ(k) for security parameter k. Let IKfp be a Φ-IDFP secure identity-key fingerprint function for FF with vector length v(k). Define PRG PRG = (K_G, G), with associated output length given by r(·), as follows:
• K_G on input 1^k: compute the fingerprint w ← IKfp(1^k) and the secret key for the PRF K ←$ K(1^k), and return the seed (w, K);
• G(w, K) = F(K, w[1]) ‖ ··· ‖ F(K, w[|w|]);
• the length of the output of the PRG is given as r(k) = ℓ(k) · v(k).
Then PRG is both Φ-RKA secure and Φ-ICR secure.
Proof: Let A1 be any adversary against the Φ-ICR security of PRG, and let A2 be any adversary against the Φ-RKA security of PRG. We will use these to build an adversary B1 against the Φ-IDFP security of FF, and an adversary B2 against the Φ-RKA security of FF.
First, we construct B1, an adversary against the Φ-IDFP security of FF, which receives fingerprint w for FF, using A1, the adversary against the Φ-ICR security of PRG. A1 makes oracle queries of the form O^icr,Φ(φ). To answer this, B1 queries its own oracle O^idfp,Φ(φ) and receives F(φ(K), w[i]) for each i ∈ [|w|]; B1 simply concatenates all of these to form and return the proper response to A1's queries. A1 is said to break the Φ-ICR security if it queries a φ* such that φ*(K) ≠ K yet the concatenation of all F(φ*(K), w[i]) is the same as the concatenation of all F(K, w[i]). In this case, B1 wins too when it queries its oracle with φ*: here, the key is modified yet no part of the fingerprint is. Thus

Adv^idfp_{FF,B1,Φ}(k) ≥ Adv^icr_{PRG,A1,Φ}(k).

Next we construct B2, the adversary against the Φ-RKA security of FF, from any A2 against the Φ-RKA security of PRG. When A2 submits queries O^prg,Φ(φ), B2 answers by first querying its own oracle O^prf,Φ(φ, w[i]) for all i ∈ [|w|], and concatenating each of these; note that when B2 is playing PRFReal, this will simulate PRGReal for A2, and when B2 is playing PRFRand this will simulate PRGRand for A2. Thus when A2 guesses b', B2 copies this guess, and so B2 will correctly guess b exactly when A2 does. Thus

Adv^prf_{FF,B2,Φ}(k) ≥ Adv^rka-prg_{PRG,A2,Φ}(k).
This construction also shows how to build a Φ-RKA PRG from any Φ-RKA PRF, though if the Φ-ICR property is not needed, it is simpler just to apply the PRF to a constant input.
Corollary 7.9. Since any Φ-RKA PRF can be used to build a Φ-RKA PRG as above, RKA[PRF] ⊆ RKA[PRG].
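The construction of Proposition 7.8 is mechanical enough to sketch. HMAC-SHA256 stands in for the Φ-RKA PRF and the fingerprint is a fixed three-point vector; both choices are illustrative assumptions, not the thesis's instantiation.

```python
import hashlib
import hmac
import os

ELL = 32                              # PRF output length ell(k) in bytes
W = [b"fp-0", b"fp-1", b"fp-2"]       # fingerprint w <- IKfp(1^k), v(k) = 3

def F(K: bytes, x: bytes) -> bytes:
    return hmac.new(K, x, hashlib.sha256).digest()  # stand-in Phi-RKA PRF

def prg_keygen():
    # seed of the PRG = (fingerprint, PRF key)
    return (W, os.urandom(32))

def prg_eval(seed) -> bytes:
    w, K = seed
    # G(w, K) = F(K, w[1]) || ... || F(K, w[|w|])
    return b"".join(F(K, wi) for wi in w)

seed = prg_keygen()
out = prg_eval(seed)
assert len(out) == ELL * len(W)       # r(k) = ell(k) * v(k)
assert out == prg_eval(seed)          # deterministic in the seed
```

A tampering function that changes K but leaves every F(K, w[i]) unchanged would be exactly an IDFP break, which is why Φ-IDFP security of the PRF yields Φ-ICR security of this PRG.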
8 Definitions of Φ-RKA Secure Primitives
Though Definition 5.4 gives a general transformation from the standard security definition of a primitive to the Φ-RKA security definition, there are a number of subtleties in the transformation. Here, we will explicitly give definitions for a number of additional Φ-RKA secure primitives: signature schemes, CCA public key encryption, weak PRFs, and symmetric key encryption. The original security definitions for these primitives can be obtained by setting Φ = ID.
Signatures.
Most of the primitives we will consider have security definitions based on indistinguishability; signature schemes are the one primitive we consider that defines a break based on a value produced by the adversary that depends on the secret and public keys of the scheme. Since there are subtleties in the definition of a Φ-RKA secure signature scheme, we will first give the standard definition of a signature scheme.
Definition 8.1 (Signature Scheme - [DH76]). A signature scheme DS = (K, S, V) is a tuple of algorithms, for key generation, signing, and verification respectively, with an associated security parameter k and efficiently computable message space M(k).
The key-generation algorithm generates a verification key vk and a signing key sk as (vk, sk) ←$ K(1^k). The signing algorithm S is used to sign messages m ∈ M(k) as S(sk, m). The verification algorithm V takes as parameters a verification key, a message, and a signature as V(vk, m, σ), and returns either true or false depending on whether the signature is valid or invalid respectively. For the correctness of a scheme, if keys are generated properly as (vk, sk) ←$ K(1^k), the signature σ ←$ S(sk, m) should always verify, giving V(vk, m, σ) = true.
We define the security of the scheme via the following game for an adversary A.
1. Setup Phase. The challenger generates verification and signing keys as (vk, sk) ←$ K(1^k). The challenger also initializes the set of disallowed messages for forgery as M ← ∅. Finally, the challenger gives the verification key vk to A.
2. Query Phase. A adaptively asks for signatures of messages m ∈ M(k) via a signing oracle as O^S(m). When the challenger receives m, it adds m to the set of disallowed messages for forgery as M ← M ∪ {m}, and finally returns σ ←$ S(sk, m) to A.
3. Forge. The adversary halts and outputs a message and signature pair (m*, σ*), with m* ∉ M.
We define Adv^sig_{DS,A}(k) = Pr[m* ∉ M ∧ V(vk, m*, σ*) = true]; we say DS is a secure signature scheme if Adv^sig_{DS,A}(k) is negligible in k for all adversaries A that run in time polynomial in k.
To adapt this definition to the RKA model, the security game is modified so that the adversary receives access to a signing oracle that works on tampered keys. Now, A should be able to make adaptive queries O^S,Φ(φ, m) and receive S(φ(sk), m) in return.
Moving the definition of a forgery from the standard model to the RKA model causes more delicate issues to arise. In the standard definition, the adversary's forgery must be on a previously unsigned message. Definition 5.4 helps us convert this to the RKA model, where the winning condition should not depend on modified secret keys. Here, a Φ-RKA adversary that outputs (m*, σ*) should be said to have a valid forgery if it never received a signature on the message m* generated with the original secret key. This appears to be a minimal requirement for the disallowed set of messages. Prior work [GOR11] defining Φ-RKA secure signature schemes instead called a forgery valid if the adversary never made a signature query with the identity function as O^S,Φ(φ_ID, m*); our definition disallows only a subset of the messages that [GOR11] disallows, and makes sense even when Φ fails to be claw-free.
Definition 8.2 (Φ-RKA Signature Scheme). A Φ-RKA secure signature scheme DS = (K, S, V) is a tuple of algorithms, for key generation, signing, and verification respectively, with an associated security parameter k and efficiently computable message space M(k). The algorithms of a Φ-RKA secure signature scheme satisfy the same structure and correctness properties as a standard signature scheme, given in Definition 8.1. The security game, parameterized by a Φ that is compatible with DS, is defined for an adversary A.
1. Setup Phase. The challenger generates verification and signing keys as (vk, sk) ←$ K(1^k). The challenger also initializes the set of disallowed messages for forgery as M ← ∅. Finally, the challenger gives the verification key vk to adversary A.
2. Query Phase. A is allowed to adaptively make oracle queries for signatures as O^S,Φ(m, φ) with m ∈ M(k), φ ∈ Φ. To answer, the challenger first generates the modified signing key as sk' ← φ(sk). If sk' = sk, the challenger adds the message to the set of disallowed messages for the forgery as M ← M ∪ {m}. Finally, the challenger answers the query by returning σ ←$ S(sk', m) to A.
3. Forge. The adversary halts and outputs a message and signature pair (m*, σ*).
We define Adv^sig_{DS,A,Φ}(k) = Pr[m* ∉ M ∧ V(vk, m*, σ*) = true], and say DS is a Φ-RKA secure signature scheme if Adv^sig_{DS,A,Φ}(k) is negligible in k for all adversaries A that run in time polynomial in k.
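The bookkeeping that distinguishes Definition 8.2 from the standard game (the disallowed set grows only when the tampered key equals the original) can be sketched as follows. HMAC here is only a stand-in for the signing algorithm (it is a MAC, not a real signature scheme); the challenger's update of M is the point.

```python
import hashlib
import hmac
import os

class RKASigChallenger:
    """Query-phase bookkeeping of the Definition 8.2 game (schematic)."""

    def __init__(self):
        self.sk = os.urandom(32)
        self.M = set()                 # disallowed messages for forgery

    def sign_query(self, m: bytes, phi) -> bytes:
        sk_prime = phi(self.sk)
        if sk_prime == self.sk:
            # only signatures under the ORIGINAL key disallow m as a forgery
            self.M.add(m)
        return hmac.new(sk_prime, m, hashlib.sha256).digest()

c = RKASigChallenger()
phi_id = lambda sk: sk
phi_flip = lambda sk: bytes([sk[0] ^ 1]) + sk[1:]
c.sign_query(b"hello", phi_id)        # signed under sk: now disallowed
c.sign_query(b"world", phi_flip)      # signed under tampered key: still allowed
assert b"hello" in c.M and b"world" not in c.M
```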
Public Key Encryption.
Public key encryption (PKE), as defined in [Sha85], with semantic security, as defined in [GM82], gives a public key that is sufficient to run the encryption algorithm without compromising a private secret key used only for decryption. Since in this case the secret key is used only for decryption, RKAs make sense to consider when the adversary is given access to a decryption oracle, as is the case for a chosen-ciphertext security definition. In the normal PKE security game, the decryption oracle disallows queries for the challenge ciphertext; to adapt to the RKA model, the decryption oracle will refuse to decrypt only when the ciphertext it is given matches the challenge one and the tampered key equals the real one.
Again, we give only a definition for a Φ-RKA PKE scheme and not for one with standard security, but one can obtain the standard definition by setting Φ = ID.
Definition 8.3 (<b-RKA Public Key Encryption Scheme) A public key encryption (PKE)
scheme is comprised of a tuple of algorithms for key generation, encryption and decryption algorithms, as 'E
=
(IC,S, D).
The scheme is associated with a security parameter k, and is
assumed to have an efficiently computable message space M(k).
44
The scheme has an associated security parameter k, and is assumed to have an efficiently
computable message space M(k).
The key generation algorithm IC takes
1k
as input and returns
the public key ek and secret key dk, as (dk, ek) <- JC(1k). The encryption function takes a public key
and a message to create a ciphertext as (ek, m) for m
£
M(k). Finally, the decryption algorithm
takes a secret key and a ciphertext c to produce a message as D(dk, c).
For correctness, the scheme requires that for a properly generated key pair, a ciphertext that is
generated as the encryption of a message can be decrypted using the associated secret key back to
the original message.
The following security game is defined for a compatible class of RKD functions Φ and an adversary A.
1. Setup Phase. The challenger generates public and secret keys as (ek, dk) ← K(1^k). The challenger also generates a random bit b ←$ {0, 1}, and initializes the challenge ciphertext as C* ← ⊥. The challenger returns ek to the adversary.
2. Query Phase I. A is given oracle access to O^D. When the challenger receives query O^D(φ, C), it first generates the tampered secret key as dk' ← φ(dk). The challenger returns the decryption M ← D(dk', C) to A.
3. Challenge. The adversary outputs a pair of messages (m0, m1) with |m0| = |m1|; the challenger then returns the challenge ciphertext C* ←$ E(ek, m_b) in response.
4. Query Phase II. A is again given oracle access to O^D. When the challenger receives query O^D(φ, C), it first generates the tampered secret key as dk' ← φ(dk). If ((dk' = dk) ∧ (C = C*)), the challenger returns ⊥; otherwise the challenger returns M ← D(dk', C).
5. Guess. Finally, the adversary halts and outputs b', its guess of b.
The advantage of the adversary is defined as Adv^{pke-rka}_{PKE,A,Φ}(k) = Pr[b = b'] − 1/2. We say PKE is Φ-RKA secure if this advantage function is negligible in k for all adversaries A that run in time polynomial in k.
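As a concrete illustration of the bookkeeping in Query Phase II, the following Python sketch implements the oracle's refusal rule. The oracle class, the key values, and the one-line XOR "decryption" are hypothetical stand-ins invented for this sketch, not part of the definition; only the refusal condition mirrors the game.

```python
# Hypothetical sketch of the Phase II decryption oracle from Definition 8.3.
# The concrete PKE is abstracted away; `decrypt` stands in for D(dk', C).

class RKADecOracle:
    def __init__(self, dk, c_star, decrypt):
        self.dk = dk            # real secret key dk
        self.c_star = c_star    # challenge ciphertext C*
        self.decrypt = decrypt  # stand-in for D(dk', C)

    def query(self, phi, c):
        dk_prime = phi(self.dk)                 # tampered key dk' <- phi(dk)
        if dk_prime == self.dk and c == self.c_star:
            return None                         # refuse: return bottom
        return self.decrypt(dk_prime, c)        # otherwise decrypt normally

# Toy instantiation: "decryption" is XOR with the key (illustration only).
oracle = RKADecOracle(dk=0b1010, c_star=0b0110,
                      decrypt=lambda dk, c: dk ^ c)

assert oracle.query(lambda k: k, 0b0110) is None        # dk' = dk, C = C*: refused
assert oracle.query(lambda k: k ^ 1, 0b0110) == 0b1101  # tampered key: answered
assert oracle.query(lambda k: k, 0b0001) == 0b1011      # other ciphertext: answered
```

Note that the oracle refuses only when both conditions hold at once: a query on the challenge ciphertext under a genuinely tampered key is still answered.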
Weak Pseudorandom Functions.
Weak pseudorandom functions (wPRFs) are a relaxation of PRFs: they need only be indistinguishable from a truly random function when queried at random points in the domain, rather than at adversarially chosen points. We include wPRFs because they exhibit an interesting separation from PRFs.
Definition 8.4 (Φ-RKA Weak Pseudorandom Function (wPRF)) Let wPRF = (K, F) be a family of functions with an associated security parameter k, with efficiently computable domain Dom_wPRF(k) and range Rng_wPRF(k).
The following security game is parameterized by a class of allowed RKD functions Φ that is compatible with wPRF, and is defined for an adversary A.
1. Setup Phase. The challenger generates a key for the wPRF as K ← K(1^k), and selects a random bit b ←$ {0, 1}.
2. Query Phase. A is given oracle access to O^F. Upon receiving an oracle query φ, the challenger first computes the tampered key K' ← φ(K), and selects a value x uniformly from the domain of wPRF as x ←$ Dom_wPRF(k). If b = 0 and the adversary has never received a value for x and K', the challenger randomly selects a point y ←$ Rng_wPRF(k) and defines T[K', x] = y. The challenger answers the oracle query by returning (x, T[K', x]). Otherwise, if b = 1, the challenger responds to the query with (x, F(K', x)).
3. Guess. The adversary must return a guess b' of the hidden bit.
We define Adv^{wprf-rka}_{wPRF,A,Φ}(k) = Pr[b = b'] − 1/2, and we say wPRF is Φ-RKA secure if this advantage function is negligible in k for all A that run in time polynomial in k.
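The challenger's lazily built table T can be sketched as follows. The keyed function F below is a hash-based stand-in invented for this example (it is not a proven wPRF), and the byte-sized key space is purely for illustration; only the random-point sampling and the table bookkeeping mirror the definition.

```python
import hashlib
import secrets

# Toy sketch of the Phi-RKA wPRF challenger from Definition 8.4.

def F(key: int, x: int) -> bytes:
    # Hash-based stand-in for the keyed function F(K', x); illustration only.
    return hashlib.sha256(bytes([key, x])).digest()[:8]

class WPRFChallenger:
    def __init__(self, key: int, b: int, domain_size: int = 256):
        self.key, self.b = key, b
        self.domain_size = domain_size
        self.T = {}                                  # T[K', x], filled lazily

    def query(self, phi):
        k_prime = phi(self.key) % 256                # tampered key K' <- phi(K)
        x = secrets.randbelow(self.domain_size)      # challenger samples x at random
        if self.b == 0:                              # random-function case
            if (k_prime, x) not in self.T:
                self.T[(k_prime, x)] = secrets.token_bytes(8)
            return x, self.T[(k_prime, x)]
        return x, F(k_prime, x)                      # real-function case

ch = WPRFChallenger(key=7, b=1)
x, y = ch.query(lambda k: k + 1)                     # query with phi: K -> K + 1
assert y == F(8, x)                                  # b = 1: output is F(K', x)
```

Because the adversary never chooses x, consistency only has to be maintained per (K', x) pair, which is what the table T provides in the b = 0 case.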
Symmetric Encryption. Symmetric encryption enables a sender and a receiver that share a
secret key to send encrypted messages to each other. This case is interesting because we can now
consider RKAs on the encryption algorithm as well as on the decryption algorithm.
We consider two different notions of security for symmetric encryption: chosen-plaintext attack (CPA) and chosen-ciphertext attack (CCA). In CPA security, the adversary may repeatedly query the encryption oracle with pairs of messages, receiving the encryption of one of them in return, as in a semantic security challenge. In CCA security, the adversary is additionally given access to a decryption oracle, but that oracle refuses to decrypt any challenge ciphertext.
Definition 8.5 (Φ-RKA CPA and CCA Secure SE Schemes) A symmetric encryption scheme SE = (K, E, D) is specified by key generation, encryption, and decryption algorithms. The scheme has an associated security parameter k, and is assumed to have an efficiently computable message space M(k).
The key generation algorithm K takes 1^k as input and returns a secret key K, as K ← K(1^k). The encryption algorithm takes a secret key and a message to create a ciphertext as E(K, m) for m ∈ M(k). Finally, the decryption algorithm takes a secret key and a ciphertext c to produce a message as D(K, c).
For correctness, the scheme requires that for a properly generated secret key, a ciphertext generated as the encryption of a message decrypts, under the same secret key, back to the original message.
The following security game is defined for a compatible class of RKD functions Φ and an adversary A.
1. Setup Phase. The challenger generates a secret key with the key generation algorithm, as K ← K(1^k). The challenger also generates a random bit b ←$ {0, 1}, and initializes the set of ciphertexts disallowed for decryption as S ← ∅.
2. Query Phase – CPA. The adversary A is given adaptive access to O^E(φ, m0, m1), where |m0| = |m1|. To answer, the challenger first computes the tampered key as K' ← φ(K), and then computes ciphertext C ←$ E(K', m_b). C is added to the set of disallowed ciphertexts as S ← S ∪ {(K', C)}, and then the challenger returns C to A.
3. Query Phase – CCA. In the case of CCA security, in addition to the oracle access granted in the CPA Query Phase, A is also given access to the decryption oracle O^D(φ, C). To answer, the challenger first computes the tampered key as K' ← φ(K). If ((K', C) ∈ S), the challenger returns ⊥ to A; otherwise the challenger decrypts and returns M ← D(K', C) to A.
4. Guess. The adversary must return a guess b' of the hidden bit.
We define Adv^{se-cpa-rka}_{SE,A,Φ}(k) = Pr[b = b'] − 1/2 when A plays the above game with the CPA Query Phase. We say SE is Φ-RKA-CPA secure if this advantage function is negligible in k for all A that run in time polynomial in k.
We define Adv^{se-cca-rka}_{SE,A,Φ}(k) = Pr[b = b'] − 1/2 when A plays the above game with the CCA Query Phase. We say SE is Φ-RKA-CCA secure if this advantage function is negligible in k for all A that run in time polynomial in k.
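The interaction between the two oracles and the disallowed set S can be sketched as below. The one-byte XOR "cipher" and the challenger class are toy stand-ins invented for this sketch; only the bookkeeping with pairs (K', C) mirrors the definition.

```python
# Toy sketch of the Phi-RKA CCA game oracles from Definition 8.5.
# The cipher is XOR with the key, purely illustrative.

class RKASEChallenger:
    def __init__(self, key: int, b: int):
        self.key, self.b = key, b
        self.S = set()                        # disallowed (K', C) pairs

    def enc(self, phi, m0: int, m1: int):
        k_prime = phi(self.key)               # K' <- phi(K)
        c = k_prime ^ (m1 if self.b else m0)  # C <- E(K', m_b)
        self.S.add((k_prime, c))              # disallow decrypting this pair
        return c

    def dec(self, phi, c: int):
        k_prime = phi(self.key)
        if (k_prime, c) in self.S:
            return None                       # refuse: return bottom
        return k_prime ^ c                    # M <- D(K', C)

ch = RKASEChallenger(key=0b1010, b=0)
c = ch.enc(lambda k: k, 0b0011, 0b0101)       # challenge under the real key
assert ch.dec(lambda k: k, c) is None         # same (K', C) pair: refused
assert ch.dec(lambda k: k ^ 1, c) == (0b1011 ^ c)  # tampered key: answered
```

Because S stores pairs rather than bare ciphertexts, a challenge ciphertext may still be decrypted under a key tampered differently from the one that produced it.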
Identity Based Encryption.
Identity Based Encryption (IBE) was first defined in [Sha85]. In an IBE scheme, the public key of each user is some commonly known user identification (ID), such as an email address, avoiding the common issues associated with public key exchange. An IBE scheme is enabled by a trusted party holding master keys, which generates the secret key for any ID. We give the definition for a Φ-RKA secure IBE scheme without first giving the standard definition of an IBE scheme; the original definition can be obtained by setting Φ = ID.
As with the definition for signatures, some subtleties arise in the actions disallowed to the Φ-RKA adversary. In the standard security game, the adversary can adaptively request secret keys for user IDs of its choice from a key derivation oracle, then request a semantic security challenge encrypted under the ID of its choice, and then further request secret keys for IDs; however, it is important that the adversary be denied the secret key for the ID used to generate the semantic security challenge! In the Φ-RKA security definition, the key-derivation oracle refuses to act only when the ID it is given matches the challenge one and the tampering does not modify the master secret key.
Definition 8.6 (Φ-RKA IBE Scheme) An identity-based encryption (IBE) scheme is comprised of a tuple of algorithms IBE = (M, K, E, D), for master key generation, key generation, encryption, and decryption respectively. The scheme has an associated security parameter k, and is assumed to have an efficiently computable message space M(k). The master key generation algorithm M takes 1^k as input and returns the master public key mpk and master secret key msk, as (msk, mpk) ← M(1^k). Standard key generation K takes the master public key, a user ID id, and the master secret key to produce a secret key for the ID as dk ← K(mpk, msk, id). The encryption algorithm takes the master public key, a user ID, and a message to create a ciphertext as E(mpk, id, m) for m ∈ M(k). Finally, the decryption algorithm takes the master public key, a secret key, and a ciphertext c to produce a message as D(mpk, dk, c).
For correctness, the scheme requires that when a message is encrypted to a user ID under a properly generated master key, the properly created secret key for that user ID decrypts the ciphertext to the original message.
Security is defined for the following game with an adversary A, and is parameterized by a Φ that is compatible with IBE.
1. Setup Phase. The challenger generates the master key pair as (mpk, msk) ←$ M(1^k). The challenger forwards mpk to the adversary. The challenger also randomly selects a bit b ←$ {0, 1}, initializes the set of disallowed IDs as S ← ∅, and initializes the challenge ID as id* ← ⊥.
2. Query Phase I. A receives oracle access to O^K, submitting queries of the form O^K(φ, id). To answer, the challenger first generates the tampered key as msk' ← φ(msk). If msk' = msk, S ← S ∪ {id}. The challenger returns secret key dk ← K(mpk, msk', id) to A.
3. Challenge Phase. The adversary sends a user ID and two messages to the challenger, as (id, m0, m1), where |m0| = |m1|. If id ∈ S, the challenger returns ⊥, since the adversary has already asked for the secret key for this user ID under the original master secret key. Otherwise, the challenger sets id* = id, and sends A the challenge C ←$ E(mpk, id, m_b).
4. Query Phase II. A is again given access to O^K(φ, id), but now queries for id* with the untampered master secret key are disallowed. To answer, the challenger first generates the tampered key as msk' ← φ(msk). If ((msk' = msk) ∧ (id = id*)), the challenger returns ⊥; otherwise the challenger returns dk ← K(mpk, msk', id).
5. Guess. Finally, the adversary halts and outputs b', its guess of b.
We define Adv^{ibe-rka}_{IBE,A,Φ}(k) = Pr[b = b'] − 1/2, and say that IBE is a Φ-RKA secure IBE scheme if this advantage function is negligible in k for all A that run in time polynomial in k.
9 Constructions of Φ-RKA Secure Primitives
We will show that a Φ-RKA and Φ-ICR PRG is a particularly strong starting point, leading to a construction of any cryptographic primitive from such a PRG; in our language, this is the containment of RKA[PRG] in RKA[P] for all other P that we consider. To do this, we first show RKA[PRG] ⊆ RKA[Sig], building a tamper-resilient signature scheme from any Φ-RKA and Φ-ICR PRG. Then we generalize this result, using any Φ-RKA and Φ-ICR PRG to build a tamper-resilient version of any of the other primitives we consider.
We also show that our construction of a Φ-RKA secure signature scheme satisfies an even stronger notion of security, in which the adversary cannot forge for any tampered key, not just the original one.
Additionally, we give two positive results based on IBE, showing that previous constructions of a CCA-secure PKE scheme and of a signature scheme from IBE preserve Φ-RKA security.
9.1 Using Φ-RKA PRGs to Construct Φ-RKA Signatures
To build high-level primitives resilient to tampering, we start with a normally secure instance of the primitive. The adapted security notions of the RKA model allow an adversary to obtain information based not only on the original secret key, but also on tampered versions of the secret key. To build a Φ-RKA secure primitive, we must ensure that information obtained from tampered secret keys gives the adversary no additional ability to break the security of the original primitive.
We give a general method to construct any higher-level primitive from a Φ-RKA and Φ-ICR PRG. Every higher-level primitive has a key generation algorithm that generates a secret key (and perhaps additionally a public key) from randomness. Instead of storing the secret key of the original scheme, the new tamper-resilient primitive stores a seed for the PRG. When the secret key is needed, the PRG is used to generate randomness to feed into the original key generation algorithm, regenerating the secret key freshly each time it is needed. If the seed has been modified by a φ ∈ Φ, the PRG produces an output that is indistinguishable from random, and so the generated secret key looks as if it were generated with fresh randomness, giving no information about the original scheme.
To explore this idea in detail, we formally define the construction in question and give the necessary security reductions for Φ-RKA signatures.
From Φ-RKA PRGs to Φ-RKA signatures. We will show containments of the form RKA[PRF] ⊆ RKA[P] for a range of primitives P by showing that any Φ-RKA and Φ-ICR secure PRG can be used to build a Φ-RKA secure P. Under the assumption that a Φ-RKA PRF has an identity fingerprint, we can use it to build such a PRG, giving the desired result.
Construction 9.1 [A Φ-RKA Signature Scheme] We start with a Φ-RKA and Φ-ICR PRG PRG = (K_g, g), with output length given by r(·), and a normal-secure signature scheme DS = (K, S, V), such that the output length r(·) of the PRG is also the number of coins used by K. We construct a new signature scheme DS' = (K', S, V), with algorithms defined as follows:
1. Keys: Pick a random seed for the PRG as the new signing key, K ← K_g(1^k). To generate the verification key, let (vk, sk) ← K(g(K)) be the result of the original key generation algorithm run with coins g(K); the verification key remains vk. (Key sk is discarded.)
2. Signing: To sign message m with signing key K, recompute (vk, sk) ← K(g(K)), and then sign m under S using sk.
3. Verifying: Just as in the base signature scheme, verify that σ is a signature of m under vk using V.
Note that Φ is compatible with the space of keys for DS' since this is also the space of seeds for the PRG.
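The data flow of Construction 9.1 can be sketched in a few lines. Every primitive below is a hypothetical stand-in invented for this sketch: the "PRG" is HMAC-SHA256 under the seed, and the "base signature scheme" is a toy MAC-style scheme with vk = sk, which is of course not a real signature scheme. Nothing here is claimed secure; the sketch only shows how the seed replaces the signing key and how keys are re-derived on every signing call.

```python
import hashlib
import hmac

def prg(seed: bytes) -> bytes:            # stand-in for g(K): seed -> keygen coins
    return hmac.new(seed, b"coins", hashlib.sha256).digest()

def base_keygen(coins: bytes):            # stand-in for K of the base scheme DS
    sk = hashlib.sha256(b"key" + coins).digest()
    return sk, sk                         # toy scheme: (vk, sk) with vk = sk

def base_sign(sk: bytes, m: bytes):       # stand-in for S(sk, m)
    return hmac.new(sk, m, hashlib.sha256).digest()

def base_verify(vk: bytes, m: bytes, sig: bytes) -> bool:  # stand-in for V
    return hmac.compare_digest(base_sign(vk, m), sig)

# --- the wrapped scheme DS' = (K', S, V) ---

def new_keygen(seed: bytes):
    vk, _sk = base_keygen(prg(seed))      # sk is discarded; only vk is kept
    return vk, seed                       # the PRG seed is the new signing key

def new_sign(seed: bytes, m: bytes):
    _vk, sk = base_keygen(prg(seed))      # recompute (vk, sk) from the seed
    return base_sign(sk, m)               # then sign under the base scheme

vk, seed = new_keygen(b"\x00" * 32)
sig = new_sign(seed, b"hello")
assert base_verify(vk, b"hello", sig)     # verification is unchanged from DS
```

The point to notice is that new_sign never stores the derived sk: it is recomputed from the seed on every call, so a tampered seed yields an (effectively) independent base-scheme key pair.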
We note that our construction has the advantage that the form of the public key, signatures, and the verification algorithm all remain unchanged from those of the original base signature scheme. Thus our construction requires minimal changes to software, making it easier to deploy than a totally new scheme. Creating a signature in the new scheme additionally requires evaluating a Φ-RKA PRG, but in practice this can be instantiated with any efficient block cipher. However, creating a signature also requires running the key generation algorithm of the base signature scheme, which might be expensive.
We will prove that DS' inherits the Φ-RKA security of the PRG. The intuition is simple. When an adversary attacking DS' makes a signing query (φ, m), the signature of m will either be under the original signing key, or will look like a signature generated with an independent, randomly generated key. Since the signing key of our new signature scheme is the seed of the Φ-RKA PRG, any modification of the signing key yields a seed φ(K) whose PRG output looks like fresh randomness; since the output of the PRG is used to generate a signing key of the original signature scheme using coins g(φ(K)), this looks like a freshly and randomly generated signing key, independent of the original scheme and giving no additional information to the adversary.
There are, however, technical difficulties in the proof. In our reductions, we construct an adversary B against the Φ-RKA security of the PRG from any adversary A against the security of the new signature scheme. The standard approach is for B to guess that it has access to a real oracle when A succeeds in a forgery, and to guess that it has access to a random oracle when A fails. However, in order for B to tell when A succeeds, it needs to know when the tampered key of the signature scheme is unchanged, and it is unclear how to do this without access to the key! We overcome this difficulty using the Φ-ICR security of the PRG, which bounds the probability that an adversary can query a φ ∈ Φ such that φ modifies the seed but not the output of the PRG.
Theorem 9.2 Let signature scheme DS' = (K', S, V) be constructed as in Construction 9.1 from a Φ-RKA secure and Φ-ICR secure PRG PRG = (K_g, g, r) and a normal-secure signature scheme DS = (K, S, V). Then DS' is a Φ-RKA secure signature scheme.
Proof:
Given an adversary A mounting an attack on the Φ-RKA security of DS', we construct adversaries P, S, and C such that for every k ∈ N

Adv^{sig-rka}_{DS',A,Φ}(k) ≤ Adv^{prg-rka}_{PRG,P,Φ}(k) + Adv^{sig}_{DS,S}(k) + Adv^{icr}_{PRG,C,Φ}(k),   (1)

proving the theorem.
To carry out the analysis, we introduce three games for A. In Game 0, A receives correctly formed signatures from our signature Construction 9.1.
Game 0 – G0
1. Setup Phase. The challenger generates the signing key for the signature scheme, which is a seed for PRG, as K ← K_g(1^k), initializes T[K] ← g(K), and then uses the key generation of the original signature scheme DS as (vk, sk) ← K(T[K]) to generate the verification key vk. The challenger also initializes the set of disallowed messages as M ← ∅, and sets a boolean variable bad to false. Finally, the challenger returns the verification key vk to A.
2. Query Phase. The challenger answers A's adaptive queries O^S(φ, m). To do this, the challenger computes the tampered signing key as K' ← φ(K). If T[K'] is undefined, T[K'] ← g(K'); the challenger computes keys for the original signing scheme using randomness from PRG as (vk', sk') ← K(T[K']), and signs m as σ ← S(sk', m). If the signing key was unmodified, with K' = K, m is added to the set of disallowed messages with M ← M ∪ {m}. If K' ≠ K but T[K'] = T[K], the challenger sets bad ← true. Finally, the challenger returns the signature σ in response to A's oracle query.
3. Finalize. When A halts and outputs a potential forgery (m*, σ*), the game returns 1 if the signature verifies and m* ∉ M, and 0 otherwise; more formally, the game returns ((V(vk, m*, σ*) = 1) ∧ (m* ∉ M)).
Game 0 is equivalent to the security game for a Φ-RKA signature scheme for our construction DS', and so

Adv^{sig-rka}_{DS',A,Φ}(k) = Pr[G0^A ⇒ 1].   (2)

However, Game 0 requires the challenger to add messages to the disallowed set when K' = K, but the challenger does not have access to K! Game 1 will instead add messages to the disallowed set when T[K'] = T[K]; we use the Φ-ICR security of the PRG to bound the probability that A can find φ such that, for K' = φ(K), T[K'] = T[K] yet K' ≠ K.
In Game 1, A still receives correctly formed signatures from our signature Construction 9.1.
Game 1 – G1
G1 is the same as G0, except in the query phase, where a queried message m is added to the set of disallowed messages with M ← M ∪ {m} when T[K'] = T[K], instead of when K' = K.
An algorithm with oracle access to O^{PRG} can perform the role of the challenger for the Φ-RKA adversary of the signature scheme in Game 1. Note that G0 and G1 differ only when the Φ-RKA adversary of the signature scheme queries a φ such that T[K'] = T[K] yet K' ≠ K. We construct adversary C against the Φ-ICR security of PRG by having it serve as the challenger in G0; whenever the flag bad is set to true, C breaks the Φ-ICR security of PRG. Thus,

Pr[G0^A ⇒ 1] ≤ Adv^{icr}_{PRG,C,Φ}(k) + Pr[G1^A ⇒ 1].   (3)
Game 2 differs from Game 1 in that A receives signatures created using true randomness instead of the output of the PRG: the table entries T[·] are now sampled uniformly at random.
Game 2 – G2
1. Setup Phase. The challenger generates the signing key for the signature scheme, which is a seed for PRG, as K ←$ K_g(1^k), initializes T[K] ←$ {0, 1}^{r(k)}, and then uses the key generation of the original signature scheme DS as (vk, sk) ← K(T[K]) to generate the verification key vk. The challenger also initializes the set of disallowed messages as M ← ∅. Finally, the challenger sends the verification key vk to A.
2. Query Phase. The challenger answers A's adaptive signature queries O^S(φ, m). To respond, the challenger computes the tampered signing key as K' ← φ(K). If T[K'] is undefined, T[K'] ←$ {0, 1}^{r(k)}, and the challenger computes keys for the original signing scheme as (vk', sk') ← K(T[K']), and signs m as σ ← S(sk', m). If the signing key was unmodified, with K' = K, m is added to the set of disallowed messages with M ← M ∪ {m}. Finally, the challenger returns the signature σ to A.
3. Finalize. When A submits a potential forgery (m*, σ*), the game returns 1 if the signature verifies and m* ∉ M, and 0 otherwise; more formally, the game returns ((V(vk, m*, σ*) = 1) ∧ (m* ∉ M)).
First, we use any adversary A against the signature scheme to construct P, which attacks the security of PRG. P receives oracle access either to a real instance of PRG or to a random one; P interacts with A so that in the real case it plays Game 1 with A, and in the random case it plays Game 2 with A.
P generates randomness R by querying the PRG oracle with the identity function as R ←$ GEN(ID), and uses this to create a verification key for DS' as (vk, sk) ← K(R), sending vk as the verification key to A. P initializes the set of disallowed messages M ← ∅. P must then answer adaptive signing queries from A of the form (φ, m). P first obtains R' ← GEN(φ), then uses this as randomness to run (vk', sk') ← K(R'). P uses the resulting signing key sk' for the original signature scheme DS to sign m as σ ← S(sk', m), returning signature σ to A. If φ = ID, then m is added to the set of disallowed messages as M ← M ∪ {m}. When A returns a message and signature (m*, σ*), adversary P returns 1 if (V(vk, m*, σ*) = 1) ∧ (m* ∉ M), and 0 otherwise.
If P is in the real game, the verification key and all signatures given to A are computed using values from the real PRG, as in Game 1; if P has access to the random game, the verification key and all signatures given to A are computed using a random value for each value of the PRG seed, as in Game 2. The remaining challenge for P is to determine whether the forged message m* is in M, the set of disallowed messages. Messages are added to M if signed with the original signing key, but P does not have access to the PRG seed that is the signing key of the signature scheme for A. To solve this problem, we use the assumption that Φ is claw-free: this guarantees that φ(K) = K only when φ = ID, so m is added to M only for queries where φ = ID. (No other φ can have φ(K) = K, since ID(K) = K, and that would violate the assumption.) This analysis yields

Adv^{prg-rka}_{PRG,P,Φ}(k) = Pr[G1^A ⇒ 1] − Pr[G2^A ⇒ 1],   (4)

computed as the difference in probability of returning 1 in the real and random games for P.
To continue the analysis, we use the adversary A attacking the RKA security of signature scheme DS' to build an adversary S against the normal security of the original signature scheme DS. S receives verification key vk for the original signature scheme. S then generates a seed for PRG as K ← K_g(1^k), which will serve as the signing key for DS'. S sends A the verification key vk. S responds to adaptive signature queries (φ, m) from A by first computing K' = φ(K). If K' = K, S uses its oracle access to get signature σ ← SIGN(m). For K' ≠ K, if T[K'] is undefined, S sets T[K'] ←$ {0, 1}^{r(k)}; then S uses T[K'] as randomness to compute (vk', sk') ← K(T[K']) and signs m with the resulting signing key as σ ← S(sk', m). S returns the generated signature σ to A. When A submits forgery (m*, σ*), S submits this also.
Note that S plays Game 2 properly with A. The parameters presented to A have the proper distribution, since they are composed of independently generated parameters for PRG and the base signature scheme DS. Each tampered seed K' = φ(K) accesses a truly random value T[K'] to generate a signing key of the original scheme DS to answer signature queries; this is true even when φ = ID, in which case signatures are generated from S's access to a signature oracle, which uses a signing key of the base scheme DS created with true randomness. Finally, we note that since Φ is claw-free, only φ = ID leaves the seed of the PRG unmodified, so no other φ will yield φ(K) = K.
A succeeds when (V(vk, m*, σ*) = 1) and (m* ∉ M). Since S and A have the same value for vk, (V(vk, m*, σ*) = 1) holds for S exactly when it holds for A. Moreover, m is added to M for A when it queries (ID, m); since S makes queries to its signature oracle only when φ = ID, m is added to M for S exactly when it is added to M for A. Thus S succeeds in creating a forgery for DS exactly when A succeeds in creating a forgery for DS' in Game 2, yielding

Adv^{sig}_{DS,S}(k) = Pr[G2^A ⇒ 1].   (5)
To conclude the proof, we combine equations (2), (3), (4), and (5) progressively to give

Adv^{sig-rka}_{DS',A,Φ}(k) = Pr[G0^A ⇒ 1]
    ≤ Adv^{icr}_{PRG,C,Φ}(k) + Pr[G1^A ⇒ 1]
    = Adv^{icr}_{PRG,C,Φ}(k) + Adv^{prg-rka}_{PRG,P,Φ}(k) + Pr[G2^A ⇒ 1]
    = Adv^{icr}_{PRG,C,Φ}(k) + Adv^{prg-rka}_{PRG,P,Φ}(k) + Adv^{sig}_{DS,S}(k),

proving equation (1) as desired. ∎
Corollary 9.3 Under the assumption that every Φ-RKA secure PRF is also a Φ-IDFP secure PRF, RKA*[PRF] ⊆ RKA*[Sig].
Proof: Using the construction in Proposition 7.8, we can build a Φ-RKA and Φ-ICR secure PRG from any Φ-RKA and Φ-IDFP secure PRF. Using Theorem 9.2, we can in turn use this PRG and a normal-secure instance of a signature scheme to build a Φ-RKA secure signature scheme. ∎
Now that we have shown how to build a Φ-RKA secure signature scheme, we generalize the above construction.
Construction 9.4 [Φ-RKA Primitives from a Φ-RKA and Φ-ICR PRG] Let P = (K_P(·), g_1(·), ..., g_m(·), f_1(sk, ·), ..., f_n(sk, ·)) be any secure cryptographic primitive, where each algorithm f_i depends on the secret key sk and each algorithm g_i does not. Without loss of generality, key generation outputs both a private and a public key, and all algorithms run in time polynomial in k.
Let PRG = (K_g, g) be a Φ-RKA and Φ-ICR PRG with output length given by r(k), such that the output length r(k) of the PRG is also the number of coins used by K_P(1^k).
We construct P' = (K'(·), g_1(·), ..., g_m(·), f'_1(sk, ·), ..., f'_n(sk, ·)) as follows:
* K'(1^k) generates secret key sk ← K_g(1^k), and then takes the public key obtained by running the original key generation algorithm on the pseudorandomness produced by g evaluated on sk, as (pk, ·) ← K_P(1^k; g(sk)); the returned key pair is (sk, pk).
* f'_i(sk, ·) first generates (pk, sk_P) ← K_P(1^k; g(sk)), and then returns f_i(sk_P, ·).
* All g'_i(·) = g_i(·); algorithms that do not use the secret key are unchanged.
Note that Φ is compatible with the space of keys for P' since this is also the space of seeds for the PRG.
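The generic transformation can be sketched as a wrapper over an arbitrary keyed primitive. Everything below is a hypothetical stand-in invented for this sketch: the HMAC-based "PRG", the toy keygen, and the toy MAC primitive are illustrative only; the sketch shows the re-derivation pattern of Construction 9.4, not a secure instantiation.

```python
import hashlib
import hmac

def prg(seed: bytes) -> bytes:                      # stand-in for g(sk)
    return hmac.new(seed, b"coins", hashlib.sha256).digest()

def wrap(keygen, keyed_fns):
    """keygen(coins) -> (pk, sk_P); keyed_fns: dict of name -> f(sk_P, *args).
    Returns the wrapped keygen K' and the wrapped keyed algorithms f'_i."""
    def new_keygen(seed):
        pk, _ = keygen(prg(seed))                   # run K_P on pseudorandom coins
        return seed, pk                             # the seed becomes the secret key
    def make(f):
        def f_prime(seed, *args):
            _, sk_p = keygen(prg(seed))             # re-derive sk_P on every call
            return f(sk_p, *args)
        return f_prime
    return new_keygen, {name: make(f) for name, f in keyed_fns.items()}

# Toy primitive: a "MAC" whose keygen derives a key from its coins.
keygen = lambda coins: (b"public", hashlib.sha256(coins).digest())
mac = lambda sk, m: hmac.new(sk, m, hashlib.sha256).digest()

new_keygen, fns = wrap(keygen, {"mac": mac})
seed, pk = new_keygen(b"\x01" * 32)
assert fns["mac"](seed, b"x") == fns["mac"](seed, b"x")  # deterministic re-derivation
```

The algorithms g_i that do not touch the secret key pass through the wrapper unchanged, exactly as in the construction.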
Theorem 9.5 Let cryptographic primitive P' = (K'(·), g_1(·), ..., g_m(·), f'_1(sk, ·), ..., f'_n(sk, ·)) be constructed as in Construction 9.4 from a Φ-RKA secure and Φ-ICR secure PRG PRG = (K_g, g, r), and from any normal-secure instance of the cryptographic primitive P = (K_P(·), g_1(·), ..., g_m(·), f_1(sk, ·), ..., f_n(sk, ·)). Then P' is a Φ-RKA secure instance of P.
The proof is a generalized version of that of Theorem 9.2, and we omit it here.
9.2 Strong Φ-RKA Security
We suggest and use an even stronger notion of Φ-RKA security for signature schemes and other primitives. The security game for Φ-RKA secure signatures requires that forgery be difficult with respect to the original verification key; we note that our Construction 9.1 actually possesses an even stronger property, namely that forgery is difficult with respect to the public verification key corresponding to any tampered secret key. We use signature schemes to demonstrate this strong security requirement, and will use it in other proofs.
In the general definition of a signature scheme, the public verification and secret signing keys are produced together by the (randomized) key generation algorithm. However, in most instantiations of signature schemes, the secret signing key is produced first from randomness, and then the public verification key is generated as a deterministic function of the secret key. We call any signature scheme with this property separable, and we call the algorithm that deterministically produces the public key from the secret key the public-key generator. We note that one can convert any signature scheme into a separable scheme by storing the randomness for the key generation algorithm as the new secret key.
Definition 9.6 (Separable Signature Scheme) A signature scheme DS = (K, S, V) is separable if there is a deterministic algorithm T, called the public-key generator, such that for all k ∈ N the output of the process

(vk, sk) ←$ K(1^k) ; vk ← T(sk) ; Return (vk, sk)

is distributed identically to the output of K(1^k).
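The conversion noted above, storing the keygen randomness as the new secret key, can be sketched directly; the base keygen here is a hypothetical toy that is deterministic in its coins, which is the only property the conversion needs.

```python
import hashlib

# Sketch of the separable-scheme conversion: the new secret key is the
# keygen randomness, and the public-key generator T re-runs (derandomized)
# key generation. The toy base_keygen is an invented stand-in.

def base_keygen(coins: bytes):            # deterministic given its coins
    sk = hashlib.sha256(b"sk" + coins).digest()
    vk = hashlib.sha256(b"vk" + sk).digest()
    return vk, sk

def separable_keygen(coins: bytes):
    vk, _ = base_keygen(coins)
    return vk, coins                      # store the coins as the secret key

def T(coins: bytes):                      # public-key generator: deterministic
    vk, _ = base_keygen(coins)
    return vk

vk, sk = separable_keygen(b"\x02" * 32)
assert T(sk) == vk                        # vk is a deterministic function of sk
```

Since T simply replays key generation on the stored coins, the joint distribution of (vk, sk) is identical to that of the converted scheme's keygen, as the definition requires.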
We now give the security definition for a strongly Φ-RKA secure signature scheme. Here, the adversary's potential forgery includes not only a message and a signature, but also an RKD function φ. A forgery is valid if it passes verification with the verification key vk derived from φ(sk), and the adversary has never received a signature on this message generated with a signing key sk' that gives vk. This might seem odd, since the adversary might not know the public key, but without loss of generality we can add the public verification key to the signatures. We note that this definition of a forgery is more restrictive than that of normal Φ-RKA security; one can move to the less restrictive form under a general (not just identity) fingerprint assumption.
Definition 9.7 (Strongly Φ-RKA Secure Signature Scheme) Let DS = (K, S, V) be a separable signature scheme with public-key generator T. The algorithms must work in the same way and satisfy the same correctness properties as a standard signature scheme.
The following security game is parameterized by a security parameter k and a Φ that is compatible with DS, and is defined for an adversary A.
1. Setup Phase. The challenger generates keys for the signature scheme as (vk, sk) ←$ K(1^k), and initializes the set of messages disallowed for forgery as M ← ∅. Finally, the challenger gives the public verification key vk to adversary A.
2. Query Phase. A is given adaptive access to O^S. The challenger answers A's queries O^S(φ, m) by first computing sk' ← φ(sk). The challenger computes the associated public key for the tampered secret key as vk' ← T(sk'). The challenger computes signature σ ← S(sk', m), and adds this to the set of disallowed forgeries as M ← M ∪ {(vk', m)}. Finally, the challenger returns the signature σ to answer A's query.
3. Forge. When A halts, it outputs (φ, m, σ).
For A's output (φ, m, σ), we define Adv^{s-sig-rka}_{DS,A,Φ}(k) = Pr[vk ← T(φ(sk)) : (V(vk, m, σ) = 1) ∧ ((vk, m) ∉ M)]. We say (DS, T) is strongly Φ-RKA secure if this advantage function is negligible in k for all A that run in time polynomial in k.
To show that our signature scheme from Construction 9.1 is strongly Φ-RKA secure, we first note that the construction is separable. Recall that in this scheme, the secret signing key is a seed for a Φ-RKA PRG, which is expanded and fed into a signature scheme key generation algorithm; the verification key is obtained from the signature scheme key generation algorithm, and so can be computed from the secret key. This is true regardless of whether the original signature scheme is separable.
Theorem 9.8 Let separable signature scheme DS' = (K', S, V) be constructed as in Construction 9.1 from a Φ-RKA PRG PRG = (K_g, g) with output length given by r(k), and a normal-secure signature scheme DS = (K, S, V). Let T be the associated public-key generator for DS'. Then (DS', T) is a strongly Φ-RKA secure signature scheme.
Proof of Theorem 9.8:
Recall that ID is the RKA specification consisting of just the identity function, so that an ID-RKA secure signature scheme has normal signature security; by assumption, DS is such a scheme. Given an adversary A making q_sig signature oracle queries and mounting a Φ-RKA attack on the strong security of DS' with public-key generator T, we construct adversaries P and S such that for every k ∈ N

Adv^{s-sig-rka}_{DS',A,Φ}(k) ≤ Adv^{prg-rka}_{PRG,P,Φ}(k) + (q_sig + 1) · Adv^{sig}_{DS,S,ID}(k),

which will prove the theorem.
We define two security games for an adversary A against the strong <b-RKA security of our constructed signature scheme.
Game 0 and Game 1 - Go and G 1
Differences in Game 1 from Game 0 are boxed.
1. Setup Phase for G0, G1. The challenger generates a seed for the PRG as K ←$ 𝒦(1^k), and stores its initial value as T[K] ← G(K). The challenger also initializes the messages disallowed for forgery as M ← ∅. Finally, the challenger generates the public verification key as (vk, sk) ← K(T[K]), and returns vk to A.
2a. Query Phase for G0. To answer A's adaptive queries O^{S,Φ}(φ, m), the challenger first computes the tampered secret key as K′ ← φ(K). If T[K′] = ⊥ then the challenger computes T[K′] ← G(K′). The challenger generates the signature by first computing keys for the original signature scheme as (vk_i, sk_i) ← K(T[K′]), and then computing σ ← S(sk_i, m). The challenger adds this to the list of disallowed forgeries as M ← M ∪ {(vk_i, m)}, and finally returns signature σ to A.
2b. Query Phase for G1. To answer A's adaptive queries O^{S,Φ}(φ, m), the challenger first computes the tampered secret key as K′ ← φ(K). If T[K′] = ⊥ then the challenger computes T[K′] ←$ {0,1}^{r(k)}. The challenger generates the signature by first computing keys for the original signature scheme as (vk_i, sk_i) ← K(T[K′]), and then computing σ ← S(sk_i, m). The challenger adds this to the list of disallowed forgeries as M ← M ∪ {(vk_i, m)}, and finally returns signature σ to A.
3. Finalize for G0, G1. Eventually A halts and outputs (φ, m, σ). The challenger computes sk′ ← φ(sk) and vk ← T(sk′); the game returns ((V(vk, m, σ) = 1) ∧ ((vk, m) ∉ M)).
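A minimal sketch of the two signing oracles, with hypothetical stand-ins for G and the toy deterministic base scheme (none are secure instantiations): the single boxed difference between the games is whether the table entry for a fresh tampered seed is filled with PRG output (G0) or with true randomness (G1).

```python
import hashlib, hmac, os

# Hypothetical stand-ins (not secure instantiations).
def G(seed: bytes) -> bytes:
    return hmac.new(seed, b"expand", hashlib.sha256).digest()

def keygen(randomness: bytes):
    sk = hashlib.sha256(b"sk" + randomness).digest()
    vk = hashlib.sha256(b"vk" + sk).digest()
    return vk, sk

def sign(sk: bytes, m: bytes) -> bytes:
    return hmac.new(sk, m, hashlib.sha256).digest()

def make_signing_oracle(K: bytes, use_true_randomness: bool):
    """Signing oracle of G0 (use_true_randomness=False) or G1 (True)."""
    T, M = {}, set()                  # seed table and disallowed-forgery set
    def oracle(phi, m: bytes) -> bytes:
        Kp = phi(K)                                  # tampered seed K' <- phi(K)
        if Kp not in T:                              # T[K'] = bot: fill the entry
            T[Kp] = os.urandom(32) if use_true_randomness else G(Kp)
        vk_i, sk_i = keygen(T[Kp])                   # (vk_i, sk_i) <- K(T[K'])
        M.add((vk_i, m))                             # disallow (vk_i, m) as forgery
        return sign(sk_i, m)
    return oracle, M

identity = lambda K: K
g0, _ = make_signing_oracle(b"\x02" * 32, use_true_randomness=False)
g1, _ = make_signing_oracle(b"\x02" * 32, use_true_randomness=True)
s0, s1 = g0(identity, b"msg"), g1(identity, b"msg")
```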
The difference in the two games here is small. Where game G0 correctly uses the Φ-RKA PRG to generate randomness for key generation, game G1 instead uses true randomness. We construct adversaries P, S such that for every k ∈ ℕ

Pr[G0^A ⇒ true] − Pr[G1^A ⇒ true] ≤ Adv^{prg-rka}_{PRG,P,Φ}(k)    (6)

Pr[G1^A ⇒ true] ≤ (q_sig + 1) · Adv^{sig}_{DS,S,ID}(k)    (7)

Together with

Adv^{sig-rka-s}_{DS′,A,Φ}(k) = Pr[G0^A ⇒ true],

this gives the desired result.
Adversary P against the PRG runs initializations

M ← ∅ ; K ←$ 𝒦(1^k) ; T[K] ← G(K) ; (vk, sk) ← K(T[K]).

P then runs A on input vk, and answers A's signing oracle queries O^{S,Φ}(φ, m). P does this by first computing the tampered seed as K′ ← φ(K). If P has T[K′] = ⊥ then it generates T[K′] ← G(K′). P generates keys for the original signature scheme as (vk_i, sk_i) ← K(T[K′]), generates the signature as σ ← S(sk_i, m), and adds to the set of disallowed messages for forgery as M ← M ∪ {(vk_i, m)}. Finally, P returns signature σ in answer to A's query. When A halts and outputs (φ*, m*, σ*), P tests whether this is valid by generating the tampered key K* ← φ*(K), defining T[K*] ← G(K*) if necessary, and finally computing (vk*, sk*) ← K(T[K*]); adversary P returns 1 if (V(vk*, m*, σ*) = 1) ∧ ((vk*, m*) ∉ M) and 0 otherwise. We note that P correctly plays G0 with A.
Adversary S against the base signature scheme DS receives vk₀, a public key generated for the original signature scheme, and access to a corresponding signature oracle. S simulates game G1 by generating a random signature scheme for all but one distinct tampered seed generated by the φ in A's queries. Let q_sig(k) be an upper bound on the number of signing queries made by A. With probability 1/(q_sig + 1), S can predict which φ and corresponding key A will select for its forgery, and S will embed its signature scheme of interest there. S begins by initializing the set of disallowed messages for forgery as M ← ∅, generating a seed for the PRG as K ←$ 𝒦(1^k), setting T[K] ← G(K), and setting the counter of unique seeds encountered as c ← 1. S then generates x ←$ [0, q_sig]. If x = 0, S will embed its signature scheme in the untampered scheme A can access, setting vk ← vk₀ and K₀ ← K. Otherwise, for larger x, S selects randomness s ←$ {0,1}^{r(k)} and uses it to generate (vk, sk) ← K(s).
S then gives A verification key vk, and answers A's adaptive queries to the signing oracle O^{S,Φ}(φ, m). To do this, S first computes the tampered seed K′ ← φ(K). If T[K′] = ⊥ this is the first time S has encountered this seed, so S increments the counter as c ← c + 1, sets T[K′] ←$ {0,1}^{r(k)}, and if c = x embeds the challenge here by setting K₀ ← K′. If the tampered key K′ = K₀, S answers with signatures from its own signing oracle, as σ ← O^S(m), and adds (vk₀, m) to the set of disallowed forgeries as M ← M ∪ {(vk₀, m)}. Otherwise, S uses a stored, randomly generated signing key, setting (vk_i, sk_i) ← K(T[K′]) and computing σ ← S(sk_i, m), adding to the set of disallowed forgeries as M ← M ∪ {(vk_i, m)}. In either case, S returns σ to A to answer the query.
When A halts and outputs (φ*, m*, σ*), S halts and returns (m*, σ*). To find whether A has created a successful forgery, we compute K* ← φ*(K), define T[K*] ← G(K*) if necessary, and finally compute (vk*, sk*) ← K(T[K*]); A's forgery is successful if (V(vk*, m*, σ*) = 1) ∧ ((vk*, m*) ∉ M). If φ* yields a previously used K*, there is a 1/(q_sig + 1) chance that vk* matches S's vk₀, and in this case S will be successful when A is. Otherwise, if S never used K*, vk* is distributed as a verification key for a fresh instance of the signature scheme, and A succeeds with probability at most that of breaking the original signature scheme.
∎
Strong Security For Other Primitives.

The analogous strong security definition also exists for Φ-RKA CCA public key encryption. In this definition, the adversary selects not only two messages m₀ and m₁, but also a φ ∈ Φ, and then receives the encryption of one of the two messages under the related key derived using φ; the adversary is said to succeed if it can guess with non-negligible advantage which message was encrypted.

The generic construction that transforms a secure CCA asymmetric encryption scheme by using a Φ-RKA PRG to generate the secret key before each use achieves this strong security definition. We omit the proof here, but it is very similar to the proof of strong security for the signature scheme. Intuitively, each distinct φ ∈ Φ causes the Φ-RKA PRG to output values indistinguishable from random. When this pseudorandomness is used to generate the secret key, it appears as if it were a new randomly generated instance of the primitive. Since the adversary can only interact with polynomially many of these, we can guess which one it will choose, and embed the standard Φ-RKA security challenge there.
9.3 Φ-RKA Secure Signatures from Φ-RKA Secure IBE Schemes

As noted by Naor [BF03], any IND-ID-CCA secure IBE gives a public key signature scheme. We show that RKA[IBE] ⊆ RKA[Sig] by proving the Naor transform preserves Φ-RKA security.
Construction 9.9 Given an IBE scheme IBE = (M, K, E, D), we construct signature scheme DS = (K′, S, V) as follows:

• K′(1^k) first generates (msk, mpk) ← M(1^k), and returns these as signing and verification keys (sk, vk) = (msk, mpk).

• S(sk, m) treats m as a user ID and returns that user ID's secret key, generated as K(vk, sk, m).

• In the most general case, verification V(vk, m, σ) is performed by encrypting a random message in the IBE scheme using m as the user ID, and checking whether the resulting ciphertext decrypts correctly under the decryption key σ.
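The mechanics of the Naor transform can be sketched as follows. The "IBE" below is a structural mock only: correctness holds, but it is deliberately NOT secure (its mpk equals msk), so it can illustrate signing and verification end to end without real pairing-based machinery; all names are hypothetical.

```python
import hashlib, hmac, os

# Structural mock of an IBE, for mechanics only: correctness holds, but it is
# NOT secure (mpk equals msk here).
def ibe_master_keygen():
    msk = os.urandom(32)
    return msk, msk                                  # (msk, mpk) - mock only!

def ibe_user_key(msk: bytes, identity: bytes) -> bytes:
    return hmac.new(msk, b"id" + identity, hashlib.sha256).digest()

def ibe_encrypt(mpk: bytes, identity: bytes, m: bytes) -> bytes:
    pad = ibe_user_key(mpk, identity)                # mock: mpk = msk
    return bytes(a ^ b for a, b in zip(m, pad))

def ibe_decrypt(dk: bytes, c: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(c, dk))

# Construction 9.9 (Naor transform): a signature on m is the IBE user key for
# identity m; verification encrypts a random message to identity m and checks
# that the claimed key decrypts it.
def sig_keygen():
    msk, mpk = ibe_master_keygen()
    return mpk, msk                                  # (vk, sk) = (mpk, msk)

def sign(sk: bytes, m: bytes) -> bytes:
    return ibe_user_key(sk, m)

def verify(vk: bytes, m: bytes, sigma: bytes) -> bool:
    r = os.urandom(32)
    return ibe_decrypt(sigma, ibe_encrypt(vk, m, r)) == r

vk, sk = sig_keygen()
sigma = sign(sk, b"hello")
assert verify(vk, b"hello", sigma)
assert not verify(vk, b"other", sigma)   # wrong identity's key fails (w.h.p.)
```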
The following says DS inherits the Φ-RKA security of IBE.
Theorem 9.10 Let signature scheme DS = (K′, S, V) be constructed as in Construction 9.9 above from Φ-RKA IBE scheme IBE = (M, K, E, D). Then DS is Φ-RKA secure.

When the adversary against the signature scheme makes queries to the signing oracle, the signatures can be simulated using the key derivation oracle for the IBE scheme. The delicate point here is that the correctness of this simulation relies on the fact that the challenge identity (in this case, the message to be signed) has not yet been defined, which means that the two procedures fail in exactly the same cases. Once the signing adversary outputs its forgery with message id and signature dk, we pick two random k-bit messages m₀, m₁ to generate a challenge ciphertext C*. We then decrypt C* using D with decryption key dk and return 0 if we get back m₀ and 1 otherwise.
Proof:
Correctness of the signature scheme follows from the correctness of the IBE scheme.

We reduce Φ-RKA security of the signature scheme to the Φ-RKA security of the IBE scheme by constructing an adversary B against the Φ-RKA security of IBE from any adversary A against the Φ-RKA security of DS.

B receives a master public key mpk of the IBE scheme, and forwards it to A as the verification key vk of the signature scheme. B answers A's adaptive queries O^{S,Φ}(φ, m) by returning σ ← O^{K,Φ}(φ, m). When A halts and outputs forgery (m*, σ*), B submits m* as the user ID, together with random messages m₀, m₁ that A has never queried, to be used to generate its challenge. B receives ciphertext C = E(mpk, m*, m_b) for a hidden random challenge bit b. B decrypts the ciphertext as m = D(mpk, σ*, C); if m = m₀ then B guesses b′ = 0, and otherwise B guesses b′ = 1.

By definition of a forgery for A, the submitted σ* should correctly decrypt a ciphertext of a random message generated with identity m*. When A correctly decrypts C, the encryption of either random message m₀ or m₁, B correctly identifies b, except perhaps when m₀ = m₁. Thus Adv^{ibe-rka}_{IBE,B,Φ}(k) ≥ Adv^{sig-rka}_{DS,A,Φ}(k) − ε, where ε = 2^{−k} is the probability that the two random k-bit messages collide, and so the advantage of B will be non-negligible when A's advantage is non-negligible.
∎
9.4 Φ-RKA Secure PKE-CCA from Φ-RKA Secure IBE

We show the containment RKA[IBE] ⊆ RKA[PKE-CCA] by showing that the construction of Boneh, Canetti, Halevi, and Katz [?] preserves RKA security.
Construction 9.11 [Based on Boneh, Canetti, Halevi, and Katz [?]] Let IBE = (M, K, E, D) be any Φ-RKA IBE scheme and let DS = (K_s, S, V) be any regular-secure strongly-unforgeable signature scheme. We construct a public key encryption scheme PKE with associated security parameter k as follows:

• Keys: Pick (mpk, msk) ← M(1^k), and use mpk as the public key and msk as the secret key.

• Encryption: To encrypt a message m under mpk, generate a signing-verification key pair (vk, sk) ← K_s(1^k). Then encrypt m for the identity id = vk by computing c ← E(mpk, vk, m), and sign c under S using sk to get a signature σ. The ciphertext is (c, vk, σ).

• Decryption: To decrypt a ciphertext (c, vk, σ), first verify that σ is a valid signature on c under vk, and output ⊥ if verification fails. Then compute the user secret key for the identity id = vk, and decrypt c using that key.

Since the form of keys for PKE is the same as that for IBE, Φ is compatible with PKE.
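The construction's mechanics can be sketched with structural mocks (all hypothetical and deliberately NOT secure: the "IBE" exposes msk as mpk, and the one-time "signature" exposes its signing key as vk); only the encrypt-to-identity-vk-then-sign flow and the signature check before decryption are faithful.

```python
import hashlib, hmac, os

# Structural mocks, for mechanics only (NOT secure instantiations).
def ibe_keygen():
    msk = os.urandom(32); return msk, msk            # (mpk, msk) - mock

def ibe_user_key(msk, ident): return hmac.new(msk, ident, hashlib.sha256).digest()
def ibe_enc(mpk, ident, m):   return bytes(a ^ b for a, b in zip(m, ibe_user_key(mpk, ident)))
def ibe_dec(dk, c):           return bytes(a ^ b for a, b in zip(c, dk))

def ots_keygen():
    sk = os.urandom(32); return sk, sk               # (vk, sk) - mock
def ots_sign(sk, c):          return hmac.new(sk, c, hashlib.sha256).digest()
def ots_verify(vk, c, s):     return hmac.compare_digest(ots_sign(vk, c), s)

# Construction 9.11: encrypt to the identity vk of a fresh one-time key pair,
# then sign the IBE ciphertext; decryption checks the signature first.
def pke_enc(mpk, m):
    vk, sk = ots_keygen()
    c = ibe_enc(mpk, vk, m)
    return (c, vk, ots_sign(sk, c))

def pke_dec(msk, ct):
    c, vk, s = ct
    if not ots_verify(vk, c, s):
        return None                                  # reject: invalid signature
    return ibe_dec(ibe_user_key(msk, vk), c)

mpk, msk = ibe_keygen()
ct = pke_enc(mpk, b"32-byte message for the sketch!!")
assert pke_dec(msk, ct) == b"32-byte message for the sketch!!"
```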
Theorem 9.12 Let the PKE scheme PKE = (K′, E′, D′) be constructed as above in Construction 9.11, using a Φ-RKA secure IBE scheme IBE = (M, K, E, D) and normal-secure strongly-unforgeable signature scheme DS = (K_s, S, V). Then PKE is Φ-RKA secure.
The proof is an adaptation of the original proof for the non-RKA version of this theorem. The additional difficulty is to show that the RKA games for IBE and PKE-CCA cooperate, because they each have rules for disallowing certain queries; in IBE the adversary is not allowed to extract a key for the user ID used to issue the challenge ciphertext, and in PKE-CCA the adversary is not allowed to decrypt the challenge ciphertext C*, but both rules only hold under the original, unmodified secret key.
Proof:
The proof will proceed through a sequence of games, Game 0, Game 1, and Game 2.

Games 0, 1, and 2 - G0, G1, and G2

Each game is designed for an adversary A against the Φ-RKA security of PKE. Differences in Game 1 from Game 0, and in Game 2 from Game 1, are boxed.

1. Setup Phase for G0, G1, G2. The challenger generates a key pair for the IBE scheme as (mpk, msk) ←$ M(1^k). The challenger also selects a challenge bit b ←$ {0,1}, and initializes C* ← ⊥. Finally, the challenger returns mpk to the adversary A as the public key of the PKE scheme.
2a. Query Phase for G0. During this phase, A may choose to run the Issue Challenge phase and then return to this phase. A is allowed to adaptively query O^{D,Φ}(φ, (c, vk, σ)). To answer the query, the challenger first computes the tampered key as msk′ ← φ(msk). If ((msk′ = msk) ∧ ((c, vk, σ) = C*)), then this is the challenge ciphertext and the key used to generate it, and so the challenger returns ⊥ to this query. If V(vk, c, σ) = false, this is an improperly formed ciphertext, and so again the challenger can return ⊥. If neither of these conditions holds, the challenger computes dk ← K(msk′, vk), and uses this to find M ← D(dk, c); the challenger answers A's query by returning M.
2b. Query Phase for G1. During this phase, A may choose to run the Issue Challenge phase and then return to this phase. A is allowed to adaptively query O^{D,Φ}(φ, (c, vk, σ)). To answer the query, the challenger first computes the tampered key as msk′ ← φ(msk). If ((msk′ = msk) ∧ ((c, vk, σ) = C*)), then this is the challenge ciphertext and the key used to generate it, and so the challenger returns ⊥ to this query. If V(vk, c, σ) = false, this is an improperly formed ciphertext, and so again the challenger can return ⊥. In this game, the challenger adds an additional check: if (vk = vk*) ∧ ((c, σ) ≠ (c*, σ*)), then the challenger sets flag bad ← true, and returns ⊥ to A. If none of these conditions holds, the challenger computes dk ← K(msk′, vk), and uses this to find M ← D(dk, c); the challenger answers A's query by returning M.
2c. Query Phase for G2. During this phase, A may choose to run the Issue Challenge phase and then return to this phase. A is allowed to adaptively query O^{D,Φ}(φ, (c, vk, σ)). To answer the query, the challenger first computes the tampered key as msk′ ← φ(msk). If ((msk′ = msk) ∧ (vk = vk*)), then the challenger returns ⊥ to this query. If V(vk, c, σ) = false, this is an improperly formed ciphertext, and so again the challenger can return ⊥. If none of these conditions holds, the challenger computes dk ← K(msk′, vk), and uses this to find M ← D(dk, c); the challenger answers A's query by returning M.
3. Issue Challenge for G0, G1, G2. Here, the adversary outputs two messages m₀ and m₁ with m₀ ≠ m₁. To encrypt m_b, the challenger computes (vk*, sk*) ←$ K_s(1^k), c* ← E(mpk, vk*, m_b), and finally σ* ← S(sk*, c*). The challenge ciphertext is C* = (c*, vk*, σ*), which the challenger returns to A.
4. Finalize for G0, G1, G2. When A halts it must output a guess b′ of the hidden bit b.

Game G0 implements the Φ-RKA security game of PKE with A, so we have

Adv^{pke-cca-rka}_{PKE,A,Φ}(k) = Pr[G0^A] − 1/2.
Game G1 has the challenger add an additional check before decrypting for A: now if vk = vk* and (c, σ) ≠ (c*, σ*), the game sets bad to true and responds to the query with ⊥. Since G1 and G0 are identical until bad, we have

Pr[G0^A] − Pr[G1^A] ≤ Pr[E1],

where E1 is the event that G1 sets bad.
We now construct an adversary B that breaks the strong unforgeability of DS with probability Pr[E1]. B takes as input vk*, and starts by selecting (mpk, msk) ←$ M(1^k). B runs A with public key mpk, and simulates A's call for a challenge exactly as specified in G1, using vk* when called for and its own signing oracle to generate σ* on c*. B also simulates A's queries for decryption exactly as specified in G1. B notes the first query O^{D,Φ}(φ, (c, vk, σ)) that triggers bad, and then B outputs (c, σ) as its forgery. This is a valid forgery for B because this query must have passed the validity check V(vk, c, σ) (otherwise B would have returned ⊥ before going on to set bad), and we have that (c, σ) ≠ (c*, σ*) by the check immediately before bad is set.
Game G2 rearranges some of the validity checks the challenger performs before decryption, but these changes don't affect oracle responses, since the same ciphertexts end up being rejected in either game. We have

Pr[G1^A] = Pr[G2^A].
Now we show that an adversary winning G2 with significant advantage can be used to break the Φ-RKA security of IBE. We construct A′ such that

Adv^{ibe-rka}_{IBE,A′,Φ}(k) = Pr[G2^A] − 1/2.

A′ takes input mpk for the IBE scheme, and runs A with input mpk for PKE. A′ simulates the challenge that A requests with (m₀, m₁) as follows:

1. A′ generates (vk*, sk*) ←$ K_s(1^k), and requests its own challenge c* with (vk*, m₀, m₁).

2. A′ signs c* using sk*, generating σ* ← S(sk*, c*).

3. A′ returns challenge ciphertext C* = (c*, vk*, σ*) to A.
A′ simulates answers to A's queries to O^{D,Φ}(φ, (c, vk, σ)) by computing the user decryption key with its key derivation oracle, as dk ← O^{K,Φ}(φ, vk). If V(vk, c, σ) = false then A′ returns ⊥, since this is not a valid ciphertext. Otherwise A′ computes M ← D(dk, c), and returns this to A. A′ runs A until A halts, and then A′ outputs whatever A outputs.

To complete the claim we need to argue that A′ properly simulates G2 for A. The only subtlety is in how A′ handles decryption queries. But we observe that A′'s key derivation oracle performs exactly the same first two checks that G2 performs before the challenger returns a decryption, and all other ciphertexts are correctly decrypted, and so A′ performs as claimed.
69
The proof is completed by collecting the relationships between games Go, G 1 and G 2.
I
10 Separations between RKA Sets

After giving several constructions of Φ-RKA secure primitives from other Φ-RKA secure primitives, we will now show that in some cases this is actually impossible. We give negative results of the form RKA[P1] ⊄ RKA[P2] by showing there exists a Φ for which Φ ∈ RKA[P1] but Φ ∉ RKA[P2].
We reach these separations using two different techniques.
One is based on the class Cnst of RKD functions that set keys to a constant. We show, for example, that any signature scheme is Cnst-RKA secure, since an adversary could simulate signatures made with constant keys itself; however, no PRF can be Cnst-RKA secure, since a PRF evaluated with a fixed key is easily distinguished from a random function. Thus RKA[Sig] ⊄ RKA[PRF]. Another example use of this technique is to show RKA[PKE-CCA] ⊄ RKA[SE-CCA]. This is true because constant functions are fine for the first but, due to the RKA on the encryption oracle, not for the second.
The second technique is to define pairs of keys, and define RKD functions that sometimes exchange one key in the pair for another depending on a bit of the original key; these are designed in such a way that there is a noticeable change in behavior for a Φ-RKA attack on P2, yet the two keys are functionally equivalent under a Φ-RKA on P1. We use variants of this technique to show RKA[wPRF] ⊄ RKA[PRF], and also RKA[SE-CPA] ⊄ RKA[SE-CCA].
10.1 Separating Φ-RKA PRFs from Φ-RKA signatures

Since Corollary 9.3 shows that RKA*[PRF] ⊆ RKA*[Sig], it is natural to consider whether the converse is also true: can one build a Φ-RKA secure PRF from any Φ-RKA secure signature scheme? It turns out the answer is no, which is expressed RKA*[Sig] ⊄ RKA*[PRF]. This is proved by showing that there exists a Φ such that there exists a Φ-RKA secure signature scheme, but for which no Φ-RKA secure PRF can exist. We do this using Cnst, the class of RKD functions that set the secret key to a fixed constant.
We first show that all signature schemes are Cnst-RKA secure.
Proposition 10.1 Let DS = (K, S, V) be any signature scheme with standard (non-tampering) security. Then DS is a Cnst-RKA secure signature scheme.

Proof: We use any adversary A against the Cnst-RKA security of DS to build an adversary B against the normal security of DS, with Adv^{sig}_{DS,B}(k) ≥ Adv^{sig-rka}_{DS,A,Cnst}(k).
Adversary B receives a verification key vk for DS from the challenger, and receives access to signing oracle O^S.

For the reduction, B forwards vk to A. A should receive access to the signing oracle O^{S,Cnst}; B must answer queries of the form O^{S,Cnst}(φ, m):

• If φ = φ_ID, B returns σ ← O^S(m) from its own signing oracle.

• Otherwise φ = φ_c, i.e. the RKD function which maps all keys to the constant c. In this case, B first tests for the event that c is the hidden secret key: for an m′ that B has not queried to its signing oracle, B tests V(vk, m′, S(c, m′)); if true, B outputs (m′, S(c, m′)) as its forgery. If c fails to generate a valid forgery in this way, B correctly answers A's oracle query with S(c, m).
If B has not already output a value, when A submits a potential forgery (m*, σ*), B repeats this test and outputs (m*, σ*). A's forgery is valid only if A has never queried for a signature of m* under φ_ID. If A queried some φ_c with c equal to the hidden signing key, B will have the signing key and creates a valid forgery itself. Otherwise, if A never queried (m*, φ_ID), B never queried its signing oracle for m* either, and A's output is a valid forgery for B too. Thus B has a valid forgery whenever A does.
∎
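The key step of this reduction, B's test of whether a queried constant equals the hidden signing key, can be sketched with a structural mock (hypothetical and NOT secure: vk equals sk, purely so the mechanics run end to end).

```python
import hashlib, hmac, os

# Structural mock of a signature scheme (NOT secure: vk equals sk) so that the
# reduction's mechanics can run end to end.
def keygen():
    sk = os.urandom(16)
    return sk, sk                                    # (vk, sk) - mock only

def sign(sk, m):      return hmac.new(sk, m, hashlib.sha256).digest()
def verify(vk, m, s): return hmac.compare_digest(sign(vk, m), s)

# B's test from the proof of Proposition 10.1: when A queries a constant RKD
# function phi_c, B checks whether c is the hidden signing key by verifying a
# self-made signature under the real vk; if so, (m', S(c, m')) is a forgery.
def handle_constant_query(vk, c, m):
    m_prime = b"fresh message never sent to the signing oracle"
    if verify(vk, m_prime, sign(c, m_prime)):
        return ("forgery", m_prime, sign(c, m_prime))
    return ("answer", sign(c, m))                    # simulate O^{S,Cnst} for A

vk, sk = keygen()
assert handle_constant_query(vk, os.urandom(16), b"m")[0] == "answer"
assert handle_constant_query(vk, sk, b"m")[0] == "forgery"
```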
Next, we show that no Cnst-RKA secure PRF can exist.
Proposition 10.2 Let PRF = (K, F) be any PRF; then PRF is not a Cnst-RKA secure PRF.
Proof:
We construct adversary A against the Cnst-RKA security of PRF. When A is given oracle access to O^{F,Cnst}, A queries O^{F,Cnst}(φ_0, 0) (or φ_c for any valid secret key c, at any valid point in Dom(k)). If O^{F,Cnst}(φ_0, 0) = F(0, 0), A makes guess b′ = 0 of the hidden bit, and otherwise makes guess b′ = 1.

In the case that A is playing game PRFRand, O^{F,Cnst}(φ_0, 0) = F(0, 0) will only happen with probability 1/|Rng(k)|, while in game PRFReal it will happen with probability 1. Thus Adv^{prf-rka}_{PRF,A,Cnst}(k) ≥ 1 − 1/|Rng(k)|, and so PRF is not a Cnst-RKA secure PRF.
∎
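This distinguisher can be sketched directly; the toy PRF below (hashing key and input together) is a hypothetical stand-in, and the two oracles model the real and random games for a constant-key query.

```python
import hashlib, os

# Toy PRF family (hypothetical stand-in): F(K, x) = SHA-256(K || x).
def F(K: bytes, x: bytes) -> bytes:
    return hashlib.sha256(K + x).digest()

# The Cnst-RKA oracle lets the adversary replace the key by a constant of its
# choice; under tampering phi_c the real game evaluates F(c, .).
def attack(oracle) -> int:
    c, x = b"\x00" * 32, b"\x00"
    # In PRFReal the answer must equal F(c, x), which the adversary computes
    # itself; in PRFRand the answer is independent and matches w.p. 2^-256.
    return 0 if oracle(c, x) == F(c, x) else 1

real_oracle = lambda c, x: F(c, x)          # PRFReal under phi_c(K) = c
rand_oracle = lambda c, x: os.urandom(32)   # PRFRand
assert attack(real_oracle) == 0
assert attack(rand_oracle) == 1             # fails only w.p. 2^-256
```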
Corollary 10.3 RKA[Sig] ⊄ RKA[PRF].
Proof: By Propositions 10.1 and 10.2. ∎

10.2 Other Relations Using Cnst
Similarly to Proposition 10.2, no wPRF can be Cnst-RKA secure - even when the adversary can only access random points, the fixed function determined by evaluating the wPRF at a fixed key looks far from random. Additionally, no symmetric encryption scheme can be either Cnst-RKA CPA secure or Cnst-RKA CCA secure - in both cases tampering would allow the adversary to receive the message m_b encrypted with a fixed key, which it can easily decrypt to determine the challenge bit b. We omit the proofs.
Proposition 10.4 Let wPRF = (K, F) be any wPRF; then wPRF is not a Cnst-RKA secure wPRF.

Proposition 10.5 Let SE = (K, E, D) be any symmetric encryption scheme; then SE is neither Cnst-RKA CPA secure nor Cnst-RKA CCA secure.
In addition to all signature schemes being Cnst-RKA secure, all PKE schemes and all IBE schemes are Cnst-RKA secure as well. Here, the adversary only receives access to decryption under the secret key, but could easily simulate decryption under constant keys without oracle access. Again, we omit the proofs.
Proposition 10.6 Let PKE = (K, E, D) be any public key encryption scheme with standard (non-tampering) security. Then PKE is a Cnst-RKA secure public key encryption scheme.

Proposition 10.7 Let IBE = (M, K, E, D) be any identity based encryption scheme with standard (non-tampering) security. Then IBE is a Cnst-RKA secure identity based encryption scheme.
Together, this yields a whole suite of relations - no primitive that can be shown to be Cnst-RKA secure can be contained in a primitive that can never be Cnst-RKA secure. This gives that none of signature, PKE, or IBE schemes can be contained in PRF, wPRF, or SE schemes.
10.3 Separating Φ-RKA secure PRFs from Φ-RKA secure wPRFs

Since wPRFs are a relaxation of PRFs, a family of functions that is a Φ-RKA secure PRF is also a Φ-RKA secure wPRF, and so RKA[PRF] ⊆ RKA[wPRF]. We will show that the converse fails to be true; in our set-based notation, this is expressed as RKA[wPRF] ⊄ RKA[PRF]. In this case the constant RKD functions do not provide the separation, since both primitives are insecure under these functions. Instead, we create RKD functions which help the attacker only if it can obtain different RKD functions evaluated at the same point in the domain of the function family, which happens only with negligible probability in the wPRF game.
Proposition 10.8 RKA[wPRF] ⊄ RKA[PRF].
Proof:
For K ∈ {0,1}* we let K⁻ denote the string that is K with the first bit flipped. Let φ_i(K) return K if K[i] = 1 and K⁻ otherwise. Additionally, we define φ_flip(K) to always return K⁻. We let Φ be the collection of all φ_i, φ_flip, and φ_ID.

Let PRF = (K, F) be any PRF that is compatible with Φ, with domain given by Dom_PRF(k); we show that PRF is not a Φ-RKA secure PRF. Let ℓ(k) denote the length of keys for PRF when the security parameter is k.
We now construct an adversary A against the Φ-RKA security of PRF. A picks x at random from Dom_PRF(k), and then queries its oracle to compute y_id ← O^{F,Φ}(φ_ID, x). Then for each 1 ≤ i ≤ ℓ(k), A queries y_i ← O^{F,Φ}(φ_i, x) and sets K′[i] = 1 if y_i = y_id and 0 otherwise. (It is crucial that all queries use the same x.) If F(K, x) ≠ F(K⁻, x), this will generate K′ = K, and so A has recovered the key. Once it has the key, it can easily distinguish between game PRFReal and game PRFRand by making a few queries under φ_ID at random inputs and returning 1 if the results are consistent with K′ and 0 otherwise. If F(K, x) = F(K⁻, x), meaning the functions under keys K and K⁻ agree at x, then the attack will instead recover the string K′ = 1^{ℓ(k)} and not recover the key. We resolve this problem using the inclusion of φ_flip in Φ: this, combined with the assumed Φ-RKA security of PRF, ensures that the functions defined by keys K and K⁻ look like random, independent ones, and so agree at x only with negligible probability. Thus, our adversary can simply return 1 if K′ = 1^{ℓ(k)}, meaning declare that the challenge bit was 1.
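The key-recovery phase of this attack can be sketched on a toy scale (hypothetical stand-ins throughout: 8-bit keys, a hash-based PRF, and bit positions indexed most-significant-first); the sketch runs in the PRFReal case where F(K, x) ≠ F(K⁻, x) holds.

```python
import hashlib

KEY_BITS = 8

# Toy PRF family over 8-bit keys (hypothetical stand-in).
def F(K: int, x: bytes) -> bytes:
    return hashlib.sha256(bytes([K]) + x).digest()

def flip_first_bit(K: int) -> int:
    return K ^ (1 << (KEY_BITS - 1))       # K^- : K with its first bit flipped

def phi(i):                                # phi_i returns K if K[i] = 1, else K^-
    return lambda K: K if (K >> (KEY_BITS - i)) & 1 else flip_first_bit(K)

# A's key recovery in game PRFReal: all queries use the SAME point x.
def recover_key(oracle, x: bytes) -> int:
    y_id = oracle(lambda K: K, x)          # query under the identity phi_ID
    Kp = 0
    for i in range(1, KEY_BITS + 1):       # K'[i] = 1 iff y_i equals y_id
        y_i = oracle(phi(i), x)
        Kp = (Kp << 1) | (1 if y_i == y_id else 0)
    return Kp

secret = 0b10110101
oracle = lambda p, x: F(p(secret), x)      # PRFReal RKA oracle
assert recover_key(oracle, b"point") == secret
```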
Let wPRF = (K, F) be a normal-secure wPRF, and for simplicity assume that keys are random (k − 1)-bit strings. (Formally, this is assuming that K(1^k) generates the uniform distribution on {0,1}^{k−1} for all k ∈ ℕ.) We now construct a Φ-RKA secure wPRF wPRF′ = (K′, F′) based on any normal-secure wPRF = (K, F). Key generation K′ returns a random k-bit string on input 1^k. The evaluation algorithm F′(K, x) lets L be the last (k − 1) bits of K and returns F(L, x). As a result, the functions F′(K, ·) and F′(K⁻, ·) are identical.
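The construction can be sketched in a few lines (hypothetical stand-in for the underlying wPRF; keys written as bit strings): the prepended key bit is simply ignored, so every RKD function in the class acts invisibly on the evaluated function.

```python
import hashlib, os

# Toy weak PRF on (k-1)-bit keys (hypothetical stand-in).
def F(L: bytes, x: bytes) -> bytes:
    return hashlib.sha256(L + x).digest()

# The constructed wPRF' prepends one ignored bit: F'(b || L, x) = F(L, x).
def F_prime(K_bits: str, x: bytes) -> bytes:
    L = K_bits[1:]                         # drop the first key bit
    return F(L.encode(), x)

# Consequence: flipping the first bit of the key never changes the function,
# so every phi in the class (phi_i, phi_flip, phi_ID) is invisible here.
K, K_minus = "1" + "0110", "0" + "0110"
x = os.urandom(16)
assert F_prime(K, x) == F_prime(K_minus, x)
```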
We reduce the Φ-RKA security of wPRF′ to the normal security of wPRF: for any adversary A against the Φ-RKA security of wPRF′, we build an adversary B against the normal security of wPRF such that Adv^{wprf-rka}_{wPRF′,A,Φ}(k) ≤ Adv^{wprf}_{wPRF,B}(k) + neg(k). For B to answer A's oracle queries O^{F′,Φ}(φ), B uses its own oracle access to get (x, y) ← O^F, and returns (x, y). We note that for all φ ∈ Φ, φ(K) either returns K or K⁻, and F′ agrees at these two values; thus for any K, F′(φ(K), ·) = F′(K, ·) = F(L, ·), where L is the last (k − 1) bits of K. So if B has access to the real wPRF in game wPRFReal, it correctly simulates A's queries in game wPRFReal. If B instead has access to a truly random function in wPRFRand, B receives a truly random value from the range of the wPRF for each unique pair (x, L) of a point x in the domain and a key L, whereas A should instead receive separately generated random values for (x, 0‖L) and (x, 1‖L); here B correctly simulates responses to A's queries in game wPRFRand except when A receives both (x, 0‖L) and (x, 1‖L).
Since the inputs to a wPRF are random rather than adversary-selected, the probability of a repeated input x in q queries is at most q²/|Dom_wPRF(k)|, which is negligible since q must be polynomial in k and |Dom_wPRF(k)| is super-polynomial. Thus Adv^{wprf}_{wPRF,B}(k) ≥ Adv^{wprf-rka}_{wPRF′,A,Φ}(k) − neg(k), and so wPRF′ is Φ-RKA secure, since we have assumed that wPRF is a normal-secure wPRF. ∎
10.4 Separating Φ-RKA CPA SE from Φ-RKA CCA SE

We use a similar technique to show that RKA[SE-CPA] ⊄ RKA[SE-CCA]. In this case the attack exploits the fact that the decryption oracle rejects if a query (φ, C) results in (K′, C) being in S, meaning that C was a challenge ciphertext created under K′; this can be made to happen depending on a certain bit of K, so that the rejection leaks information about K that eventually allows the attacker to recover K.
Proposition 10.9 RKA[SE-CPA] ⊄ RKA[SE-CCA].
Proof:
For K ∈ {0,1}* we let K⁻ denote the string that is K with the first bit flipped. Let φ_i(K) return K if K[i] = 1 and K⁻ otherwise. We let Φ be the collection of all φ_i and φ_ID.
First, we show that there exists a Φ-RKA CPA secure symmetric encryption scheme. Let SE = (K, E, D) be any CPA secure symmetric encryption scheme. We build a new SE′ = (K′, E′, D′) as follows:

• K′(1^k) selects a random bit c ←$ {0,1}, and returns c ‖ K(1^k)

• E′(K, m): let L = K[2, |K|], and return E(L, m)

• D′(K, C): let L = K[2, |K|], and return D(L, C)

Note that E′(K, ·) is the same as E′(K⁻, ·), and D′(K, ·) is the same as D′(K⁻, ·).
We reduce the Φ-RKA CPA security of SE′ to the normal CPA security of SE. An adversary A of SE receives oracle access to O^E(m₀, m₁), and receives encryptions of either m₀ or m₁. To answer an oracle query O^{E′,Φ}(φ_i, m₀, m₁) of adversary B, A can correctly simulate the answer by returning O^E(m₀, m₁). When B correctly guesses the hidden bit, A does also. Thus Adv^{se-cpa}_{SE,A}(k) = Adv^{se-cpa-rka}_{SE′,B,Φ}(k), and SE′ is Φ-RKA CPA secure.
Despite the fact that we have built a Φ-RKA CPA secure symmetric encryption scheme, no Φ-RKA CCA secure symmetric encryption scheme can exist. The problem is that although D(φ(K), ·) might be simulatable without tampering by some function D′(K, ·), the oracle actually depends on the value of the secret key in order to determine which ciphertexts are allowed for decryption, and so O^{D,Φ} will not be simulatable by O^{D′}.
We complete the proof by building an adversary A against the Φ-RKA CCA security of any symmetric encryption scheme SE = (K, E, D). To begin, A generates C* ← O^{E,Φ}(φ_ID, m₀, m₁). Note that C* is generated with φ_ID(K), the original secret key. Then for each 1 ≤ i ≤ |K|, A queries y_i ← O^{D,Φ}(φ_i, C*) and sets K′[i] = 1 if y_i = ⊥ and 0 otherwise. Note that O^{D,Φ}(φ_i, C*) will only return ⊥ if φ_i(K) = K, which occurs only if the i-th bit of K is 1. Once A is done, K′ = K, and A can decrypt C* as D(K′, C*) and will always correctly guess b.
∎
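The bit-by-bit leak through the rejection rule can be sketched on a toy scale (all names hypothetical: 8-bit keys, a per-key one-time-pad stand-in for encryption; only the bookkeeping set S of challenge pairs matters for the attack).

```python
import os

KEY_BITS = 8

def flip_first(K): return K ^ (1 << (KEY_BITS - 1))
def phi(i):        return lambda K: K if (K >> (KEY_BITS - i)) & 1 else flip_first(K)

# Toy CCA RKA environment: encryption is a stand-in (one-time pad with a
# per-key random pad); the attack only relies on the set S of challenge pairs.
def make_env(K: int):
    pads, S = {}, set()
    def pad(key): return pads.setdefault(key, os.urandom(16))
    def enc_oracle(p, m0, m1, b):
        Kp = p(K)
        C = bytes(a ^ c for a, c in zip([m0, m1][b], pad(Kp)))
        S.add((Kp, C))                     # remember challenge under derived key
        return C
    def dec_oracle(p, C):
        Kp = p(K)
        if (Kp, C) in S:                   # disallowed: challenge under this key
            return None
        return bytes(a ^ c for a, c in zip(C, pad(Kp)))
    return enc_oracle, dec_oracle

def attack(enc_oracle, dec_oracle):
    C_star = enc_oracle(lambda K: K, b"0" * 16, b"1" * 16, b=os.urandom(1)[0] & 1)
    Kp = 0                                 # recover K bit by bit from rejections
    for i in range(1, KEY_BITS + 1):       # rejected iff phi_i(K) = K iff K[i] = 1
        rejected = dec_oracle(phi(i), C_star) is None
        Kp = (Kp << 1) | (1 if rejected else 0)
    return Kp

secret = 0b11010010
enc, dec = make_env(secret)
assert attack(enc, dec) == secret
```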
References
[AH11]
Benny Applebaum, Danny Harnik, and Yuval Ishai. Semantic security under related-key
attacks and applications. In ICS, pages 45-60, 2011.
[AK96]
Ross Anderson and Markus Kuhn. Tamper resistance: a cautionary note. In Proceedings of the Second USENIX Workshop on Electronic Commerce, pages 1-1, Berkeley, CA, USA, 1996. USENIX Association.
[AK97]
Ross J. Anderson and Markus G. Kuhn. Low cost attacks on tamper resistant devices.
In Security Protocols Workshop, pages 125-136, 1997.
[AN95]
Ross J. Anderson and Roger M. Needham. Programming Satan's computer. In Jan van Leeuwen, editor, Computer Science Today, volume 1000 of Lecture Notes in Computer Science, pages 426-440. Springer, 1995.
[BC10]
Mihir Bellare and David Cash. Pseudorandom functions and permutations provably
secure against related-key attacks. In CRYPTO, pages 666-684, 2010.
[BDL01]
Dan Boneh, Richard A. DeMillo, and Richard J. Lipton. On the importance of eliminating errors in cryptographic computations. J. Cryptology, 14(2):101-119, 2001.
[BF03]
Dan Boneh and Matthew K. Franklin. Identity-based encryption from the Weil pairing. SIAM J. Comput., 32(3):586-615, 2003.
[Bih93]
Eli Biham. New types of cryptanalytic attacks using related keys (extended abstract).
In EUROCRYPT, pages 398-409, 1993.
[BK03]
Mihir Bellare and Tadayoshi Kohno. A theoretical treatment of related-key attacks: RKA-PRPs, RKA-PRFs, and applications. In EUROCRYPT, pages 491-506, 2003.
[BM84]
Manuel Blum and Silvio Micali. How to generate cryptographically strong sequences of
pseudo-random bits. SIAM J. Comput., 13(4):850-864, 1984.
[Bov]
E. Bovenlander. Invited talk on smartcard security, EUROCRYPT 1997.
[BS90]
Eli Biham and Adi Shamir. Differential cryptanalysis of DES-like cryptosystems. In CRYPTO, pages 2-21, 1990.
[BS97]
Eli Biham and Adi Shamir. Differential fault analysis of secret key cryptosystems. In
CRYPTO, pages 513-525, 1997.
[CPGR05] Jim Chow, Ben Pfaff, Tal Garfinkel, and Mendel Rosenblum. Shredding your garbage:
Reducing data lifetime. In Proc. 14th USENIX Security Symposium, August 2005.
[DH76]
Whitfield Diffie and Martin E. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, 22(6):644-654, 1976.
[DPW10]
Stefan Dziembowski, Krzysztof Pietrzak, and Daniel Wichs. Non-malleable codes. In
ICS, pages 434-452, 2010.
[DR02]
Joan Daemen and Vincent Rijmen. The Design of Rijndael: AES - the Advanced Encryption Standard. Springer-Verlag, 2002.
[GGM86]
Oded Goldreich, Shafi Goldwasser, and Silvio Micali. How to construct random functions. J. ACM, 33(4):792-807, 1986.
[GL10]
David Goldenberg and Moses Liskov. On related-secret pseudorandomness. In TCC,
pages 255-272, 2010.
[GLM+04] Rosario Gennaro, Anna Lysyanskaya, Tal Malkin, Silvio Micali, and Tal Rabin. Algorithmic tamper-proof (atp) security: Theoretical foundations for security against hardware tampering. In TCC, pages 258-277, 2004.
[GM82]
Shafi Goldwasser and Silvio Micali. Probabilistic encryption & how to play mental
poker keeping secret all partial information. In Proceedings of the fourteenth annual
ACM symposium on Theory of computing, STOC '82, pages 365-377, New York, NY,
USA, 1982. ACM.
[GOR11]
Vipul Goyal, Adam O'Neill, and Vanishree Rao. Correlated-input secure hash functions.
In TCC, pages 182-200, 2011.
[Gut96]
Peter Gutmann. Secure deletion of data from magnetic and solid-state memory. In
Proceedings of the 6th conference on USENIX Security Symposium, Focusing on Applications of Cryptography - Volume 6, SSYM'96, pages 8-8, Berkeley, CA, USA, 1996.
USENIX Association.
[HS09]
Nadia Heninger and Hovav Shacham. Reconstructing RSA private keys from random key
bits. In CRYPTO, pages 1-17, 2009.
[HSH+08]
J. Alex Halderman, Seth D. Schoen, Nadia Heninger, William Clarkson, William Paul,
Joseph A. Calandrino, Ariel J. Feldman, and Edward W. Felten. Lest we remember: Cold
boot attacks on encryption keys. In USENIX Security Symposium, 2008.
[IKNP03]
Yuval Ishai, Joe Kilian, Kobbi Nissim, and Erez Petrank. Extending oblivious transfers
efficiently, 2003.
[IPSW06]
Yuval Ishai, Manoj Prabhakaran, Amit Sahai, and David Wagner. Private circuits ii:
Keeping secrets in tamperable circuits. In EUROCRYPT, pages 308-327, 2006.
[KK99]
Oliver Kümmerling and Markus G. Kuhn. Design principles for tamper-resistant smartcard processors. In Proceedings of the USENIX Workshop on Smartcard Technology,
WOST'99, pages 2-2, Berkeley, CA, USA, 1999. USENIX Association.
[KKS11]
Yael Tauman Kalai, Bhavana Kanukurthi, and Amit Sahai. Cryptography with tamperable and leaky memory. In CRYPTO, pages 373-390, 2011.
[LL10]
Feng-Hao Liu and Anna Lysyanskaya. Algorithmic tamper-proof security under probing
attacks. In Proceedings of the 7th international conference on Security and cryptography
for networks, SCN'10, pages 106-120, Berlin, Heidelberg, 2010. Springer-Verlag.
[Luc04]
Stefan Lucks. Ciphers secure against related-key attacks. In FSE, pages 359-370, 2004.
[SB94]
H. P. Sharangpani and M. L. Barton. Statistical Analysis of Floating Point Flaw in the
Pentium Processor. Technical report, Intel, November 1994.
[Sha85]
Adi Shamir. Identity-based cryptosystems and signature schemes. In Proceedings of
CRYPTO 84 on Advances in cryptology, pages 47-53, New York, NY, USA, 1985.
Springer-Verlag New York, Inc.
[Sko02]
Sergei Skorobogatov. Low temperature data remanence in static RAM. Technical
Report UCAM-CL-TR-536, University of Cambridge, Computer Laboratory, June 2002.
[SPW99]
Sean W. Smith, Elaine R. Palmer, and Steve Weingart. Building a high-performance,
programmable secure coprocessor. In Computer Networks, pages 831-860, 1999.