>> Moderator: So it is a pleasure to have Tal Malkin speaking today. Tal is a professor at
Columbia and she is well-known for work on black-box separations, cryptographic primitives,
and multi-party computation, and today she'll be speaking about public key cryptosystems.
>> Tal Malkin: Thanks. Okay. So hi everyone. First of all, I understand my initial announcement
said that I'm a CEO of something; I'm not, so if you wanted to leave the room now, it's okay. And I'm
going to talk about a line of research, with several different works within the same line of
research, that is about making public key encryption systems stronger. I wasn't sure who the audience
would be, so I will try to make it general without too many technical details, and I have millions of
slides to go into if you have questions at the end or if you want to talk to me afterward. Okay. So
here is the outline of my talk. I will start with a four-slide version of the entire talk and I'll move on
to a many-slide version of the same talk. Okay. So we are starting with the four-slide version.
So public key encryption: I know you all know what it is, but let me go over it both for consistency
and for motivation. So the idea of public key encryption was given by Diffie and Hellman in the 70s,
and the idea is basically, informally: Alice and Bob want to communicate securely over an unsafe
channel, okay, and the whole point of modern cryptography started from defining really what does
it mean to communicate securely; what does securely mean, and what does an unsafe channel
mean. Okay, so basically showing that you can in fact define it rigorously and you can, in fact,
achieve these definitions: constructions with provable security that achieve the definitions, typically
relying on some assumptions, okay, like factoring is hard or anything else. The point is the
assumption needs to be well defined, well understood, easy to refute, easy to study, etc. The first
such definition of security was given by Goldwasser and Micali in '82, of
something they called semantic security. And for them, the red is kind of what securely means
and blue is kind of what unsafe environment means, or how is it unsafe. So for them they define
security as secrecy. Namely you hide your message against the passive adversary, an
eavesdropper, and this is called semantic security and already it is hard to define and hard to
achieve, but they did it in the 80s. However, as cryptographic applications became more
prevalent and there are many different settings, this kind of security is no longer enough for many
applications, okay. Protection against just a passive adversary is not enough.
So this is a list of stronger attacks. I could list more, but these are the attacks I have solutions for.
These are attacks that are stronger than just a simple passive adversary, okay, and we want to
protect against them, as well. So for example -- this is all still informal at this point. So let's
assume the adversary is not just a passive eavesdropper who sees what you have sent over the
line, but for example what if the adversary can change the ciphertext while it's in transit; that's a
man-in-the-middle attack, or malleability. What if the adversary can actually get to the decryption oracle,
has access to your decryption oracle, can he learn something; that's called chosen ciphertext
attack. And there's an adversary that can adaptively -- an adaptive adversary is one where, let's
say, some protocols are running; the adversary will see what's going on and based on that will
decide where else to break into. That's a strong adversary.
Also, all of the previous -- or at least in the 20th century, all of the cryptographic
algorithms assumed some secret key, say, in which all of the security relied, and the secret key is
assumed to be totally secret. But that's not really realistic; the secret key might leak, either through
malicious insiders or stupid people or side channel attacks -- there are all kinds of ways for the secret key to
leak, or for the secret key to be changed with some tampering attacks. So all of these are strong
adversaries that are not captured by the previous notions of security, by semantic security, and
we want to protect against them, as well.
So the line of work I'll be discussing is about protecting against these stronger attacks. I will focus
specifically on public key encryption, although in general this approach could be applied to many
cryptographic tasks. So this is our goal: the No. 1 goal is security against attacks as strong as
possible. However, we would like to do it with the following principles. So first of all we want to
do it based on the most general possible assumption. Okay, so what does it mean, the most general
possible assumption? I told you that whenever you prove security, you need to assume
something. Okay, we want to assume as little as possible. The best case that we would like to
achieve is: if you start from an encryption scheme that's known to be semantically secure, namely
it's known to only protect against passive adversaries, can you do anything in order to make it
stronger against stronger adversaries, without adding any extra assumptions, without adding
anything extra? You can change the protocol, but the proof of security will not need any extra
assumption. This is clearly the minimal assumption, right. When you start with a weaker scheme, we call it
immunizing: you can immunize it to get stronger security without any extra assumption. So that will
be one goal: to have as few, as small assumptions as possible. And the second goal would be to
use the underlying primitive in a black-box fashion. So what is this underlying
primitive? I start from semantically secure encryption, namely weak encryption against passive
adversaries, and we want to achieve some strong encryption.
So I would like to use the weak encryption, the semantically secure encryption, in a black-box
way. You can think of using something in a black-box way as calling it as a subroutine. Okay, I
don't need to know anything about what's going on inside. I don't want to assume any algebraic
properties -- no homomorphisms, no field properties. I don't want to assume any of this. I want to assume I
have a black box, a subroutine that's guaranteed to be semantically secure; can I use it to
get guaranteed strong security?
I don't know if you were counting, but this was the third slide, and this is the fourth slide of the
four-slide version of the talk. So I just wanted to motivate: why did I come up with these two requirements? Obviously I
think the first one is clearly well motivated and we want to protect against stronger adversaries.
Why do I want weak assumptions in a black-box fashion? So this is interesting both theoretically
and practically; even for these bullets, I don't know which ones are practical and which ones
are theoretical. They are all a mix of both. But we want to achieve as minimal an assumption as
possible in a black-box construction for several reasons. One, it will really give us insights into
the primitives, so this is like complexity theory for cryptography, okay, which primitives can be
reduced to which, is this assumption sufficient for that, what is it that you need to assume.
So when you have both -- so this second bullet, more possible instantiations, refers both to the
black-box property and to the minimal assumptions. So clearly one obvious reason we want to minimize
assumptions, a well-known goal in cryptography is that we want our systems to be secure. So if I
only built a system based on one specific assumption, I don't know, maybe tomorrow there will be some
break of that assumption and it won't be secure anymore. So clearly I want as weak an assumption as
possible. If I could, I would like to do it without any assumption. But that is not possible for public
key encryption and for most cryptographic primitives. So I want security to be possible under any
possible instantiation. So if somebody believes in elliptic curves and somebody else believes in the
hardness of some lattice problems, I want them both to be happy that this scheme can be
instantiated.
Okay, so this is clear for minimizing assumptions. The truth is, even for black-box: if you use the
underlying scheme in a black-box manner without relying on any of its properties, that means you can
instantiate it with many possible assumptions. Even if in the future somebody will
come up with a new assumption from which to build semantically secure public key encryption, we
would immediately, automatically have stronger public key encryption without needing to assume any
algebraic properties. A good example is that there is a new wave of crypto results based on
hardness of learning and of random codes, and these don't tend to have the nice algebraic properties
that the number-theoretic constructions used to have, and still we could use them, okay, so we
could get new constructions from more assumptions.
And finally, black-box constructions tend to be much more efficient. We are not sure if this is
inherent to black-box constructions or just because we don't know any better.
But the fact of the matter is, the constructions that we have that are non-black-box are much less
efficient. They typically involve some kind of NP reduction, a Cook-Levin reduction;
they are very heavy. Non-black-box reductions often have a ciphertext size that depends
on the computational complexity of the underlying scheme, whereas all of the black-box
constructions are much more efficient.
So these are my motivations for doing this, okay. So that's basically it for the general view of my
talk. So what I'm going to do now is talk about these different bullets, define what is the goal that
we want, what are the stronger attacks, and say what we can achieve, and maybe touch upon the
technical details depending on your interest, okay. Okay. So for these topics I will show
constructions of strong public key encryption. I will start with showing non-malleable encryption,
so from semantic security to non-malleability. Then I'll talk about chosen ciphertext attack
security. Then I'll talk about adaptive security. And I just wanted to mention, though I'm pretty sure I
won't have time to talk about tampering, security against tampering attacks or leakage of the key,
okay. And I didn't list all of my co-authors because there are many; these are many different works, but
I'll mention them as I get to those results.
Okay. So let's start by defining more clearly what semantic security is. Before I define semantic
security for public key encryption, I just want to give the general
structure of how we define security for cryptographic protocols, which is as follows: We need to define
two things. We need to define the adversarial model. So I want to build a system; when is it
secure? So one, I want to say which attacks I am going to protect against. Okay, this is one of
the things I want to improve. So that's the adversarial model; that includes, I don't know, whether the
adversary is polynomial time or just linear time or whatever you want, what he can see, what
access limitations he has, and that's one. And two is what the meaning of security is; namely,
what should the adversary do for me to consider my system broken.
Okay. So these are the two main components. And once we define them, we need obviously to
design protocols, and the way the proofs go for such protocols is: first of all, we want them to be
efficient, and then provable security means that as long as the adversary is limited as in one,
then no break, as defined in two, is possible, okay. And that tells you exactly what you can get
for sure, and exactly what you cannot get, okay. This gives no guarantee of what happens if the
adversary goes outside the model and is not restricted as in one. But it gives you full security
for this adversarial power and for this meaning of security.
So obviously we want to achieve security for an adversary that's as strong
as possible, where the notion of a break is as weak as possible. We want even the smallest
thing to be considered a break and we still want to achieve security. That clearly will make our
scheme stronger. And we want our computational assumption to be as weak as possible, as I
said before.
So that's some general crypto stuff. When applied to public key encryption, so we'll define public
key encryption as follows. So this is mostly for notation: public key encryption is defined by
three algorithms. G, the key generation algorithm, gets randomness as input and generates a
public key and a secret key; the encryption algorithm gets the public key, the message, and some
randomness, and generates a ciphertext; and the decryption algorithm gets the secret key and the
ciphertext and gives you back the message, okay.
So this is public key encryption, and for semantic security we assume that the adversary, Eve, is
just eavesdropping and sees everything that goes on: sees the public key and sees the ciphertext,
okay. So now how do we define security for this, for public key encryption? So first we want to
define the power of the adversary and then we need to define what it means to break the system.
So for the power of the adversary, here are some typical common definitions. Perhaps not the
weakest, but the weakest one that people use, is an adversary that can mount a chosen plaintext attack.
What is a chosen plaintext attack? An adversary that simply sees the public key; for public key
encryption, seeing the public key, you can generate yourself as many encryptions as you
want, and it's called chosen plaintext because you can choose whatever plaintext you want and
you encrypt it and you see the encryption. So this is what the adversary can see under a chosen
plaintext attack.
Chosen ciphertext attack says the adversary not only can see encryptions of his choice, he can
also see decryptions of his choice. He gets some temporary access -- yes.
>>: [Inaudible].
>> Tal Malkin: In this talk I won't even talk about the difference, but everything I say is true for
both of them. It's just that I have both negative and positive results, and for negative results it's
stronger to state them with CCA1, and for positive results it's stronger to state them with CCA2, so
I didn't want to mention it. Yeah, CCA1 versus CCA2 depends on when you get access to the
decryption oracle. But this is a stronger adversary, right, one that can actually access the decryption
oracle and ask: here is a ciphertext, what is the decryption; here is another ciphertext, what is
the decryption. That's a stronger attack. And even stronger ones, which are less traditional, are
adversaries that can read some of the key, part of the key, maybe all of the key, or adversaries
that can change the key in an arbitrary manner or some specific manner.
So these are different adversarial powers. And now we also need to define what is a breach of
security. So there are many ways to define a breach of security, and one of the first things
people think about is, oh, recovering the key would be a breach of security, right. That's
obviously a breach of security. If I have a public key encryption scheme and the adversary,
whatever power we give him, CCA, CPA, can see what's going on, and recover the secret key.
That's clearly a breach of security. But this definition is not good enough, because there are other
things the adversary can do. So say maybe I have great security and the key can never be
recovered. Still, the adversary could possibly recover the message that was encrypted. That's
also clearly a breach of security, right. And what if the adversary cannot recover the message
that was encrypted, but instead could figure out whether two encryptions, two ciphertexts, belong to
the same message or not? Well, for some applications this is no big deal, fine. But for other
applications, this may be a big breach of security. And as I said, we want to define a breach of
security in the weakest way that we can still achieve, so that we can use this in many applications.
Okay, so this last one is the one chosen for the semantic security definition. So
for the semantic security definition, a breach of security is being able to recover any information
whatsoever about the message, okay. So semantic security means you prevent an adversary
from recovering any information whatsoever about the message.
Okay, so no matter what the adversary knows in advance, seeing the encryption doesn't give
him any extra information. So this is equivalent to indistinguishability, and the equivalent
definition says that if the adversary sees an encryption of zero and an encryption of one, the
adversary cannot tell which is which, even if he knows for a fact that each encryption is either of
zero or of one, okay. And it turns out that encryption must be randomized for this to hold.
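This indistinguishability game can be sketched in a few lines of code. This is an illustrative model only: the scheme below is a deliberately broken placeholder (encryption is the identity map), used just to show how a distinguishing adversary wins the game every time.

```python
import random

# Sketch of the indistinguishability (IND-CPA) game described above:
# the adversary picks two messages, the challenger encrypts one of them
# chosen at random, and the adversary tries to guess which one it saw.
def ind_game(keygen, encrypt, choose, guess):
    pk, _sk = keygen()
    m0, m1 = choose(pk)
    b = random.randrange(2)              # challenger's hidden bit
    ct = encrypt(pk, (m0, m1)[b])
    return guess(pk, m0, m1, ct) == b    # True iff the adversary wins

# A deliberately insecure "scheme" (encryption is the identity map),
# used only to show the game catching a break: the adversary always wins.
keygen = lambda: (None, None)
encrypt = lambda pk, m: m
choose = lambda pk: (0, 1)
guess = lambda pk, m0, m1, ct: 0 if ct == m0 else 1

wins = sum(ind_game(keygen, encrypt, choose, guess) for _ in range(100))
print(wins)  # 100: every round is won, so the scheme is broken
```

Against a semantically secure scheme, no efficient adversary should win noticeably more than half the rounds, a coin flip.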
So any questions so far? This is old. This is semantic security. Any questions? Okay. So I'll
show an example because I'll use it subsequently in several stronger definitions. So ElGamal
public key encryption, here is how it works; I'll go over it quickly. For key generation, the
public key is p, g and h, where p is a prime, g is a generator, and h equals g to the x mod p,
and the secret key is x. Then to encrypt a message m, you take a random r, and you send g to
the r and h to the r times m; anybody can do the encryption. And to decrypt, when you get a
pair, where one is g to the r and one is h to the r times m, you just compute c2 over c1 to the
x, and if you check, it works, okay: if you have the secret key x, the algebra works out. So this
turns out to be semantically secure under CPA. This notation refers to
indistinguishability under chosen plaintext attack; that's semantic security.
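As a concrete illustration, here is a toy version of the scheme just described. The tiny prime and generator below are hypothetical demo values; a real instantiation uses a large prime and a generator of a large prime-order subgroup.

```python
import random

# Toy ElGamal over Z_p^* with tiny demo parameters (NOT secure sizes).
P, G = 467, 2

def keygen():
    x = random.randrange(1, P - 1)        # secret key
    return (P, G, pow(G, x, P)), x        # public key (p, g, h = g^x mod p)

def encrypt(pk, m):
    p, g, h = pk
    r = random.randrange(1, p - 1)        # fresh randomness per encryption
    return pow(g, r, p), (pow(h, r, p) * m) % p   # (c1, c2) = (g^r, h^r * m)

def decrypt(sk, ct, p=P):
    c1, c2 = ct
    # m = c2 / c1^x mod p; the inverse uses Fermat's little theorem
    return (c2 * pow(c1, p - 1 - sk, p)) % p

pk, sk = keygen()
assert decrypt(sk, encrypt(pk, 42)) == 42  # the algebra works out
```

The round-trip check is exactly the algebra in the talk: c2 / c1^x = h^r m / g^(rx) = m.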
So this turns out to be semantically secure against chosen plaintext attack: the passive
adversary can only see the
public key, p, g, h, and the encryption, g to the r, h to the r times m, and it turns out that if you
assume an assumption called the decisional Diffie-Hellman
assumption, you get security. So I am going to use this as an example later. Okay. So this is
provably -- and it's not too hard to prove -- semantically secure. But this is not
good enough, and I want to demonstrate why, with one particular way for a stronger adversary to
break it.
So think of an auction example. One way to do an auction -- I'm simplifying here -- is that
each bidder sends their bid encrypted, and then the seller chooses the highest
bidder, or whatever algorithm he wants to use, okay. So suppose we use ElGamal for the encryption,
which we know is semantically secure, so we know it leaks no information. This encryption
leaks absolutely no information, so the adversary knows nothing. So the first bidder sends his bid:
that's the ElGamal encryption, g to the r, h to the r times the bid. The bid is the message. The
second bidder, let's assume, is corrupted; he is the adversary. We said before this is semantically
secure, so he has no idea what bid one is, no information whatsoever about bid one.
Nonetheless it's very easy for him to bid twice as much, okay. He would just take the second
element and multiply it by two, mod p (this is all mod p), okay, very easy. That
immediately generates an encryption of two times bid one.
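The doubling attack just described is a one-line computation. The sketch below uses the same toy ElGamal parameters as before (demo values, not a real instantiation) to show that multiplying the second ciphertext component by 2 yields a valid encryption of twice the bid, without the attacker learning the bid.

```python
import random

# Toy ElGamal setup (tiny demo parameters, NOT secure sizes) to
# demonstrate the auction malleability attack described above.
P, G = 467, 2
x = random.randrange(1, P - 1)
h = pow(G, x, P)

def encrypt(m):
    r = random.randrange(1, P - 1)
    return pow(G, r, P), (pow(h, r, P) * m) % P   # (g^r, h^r * m)

def decrypt(ct):
    c1, c2 = ct
    return (c2 * pow(c1, P - 1 - x, P)) % P

bid1 = 100
c1, c2 = encrypt(bid1)              # honest bidder's encrypted bid

# The corrupted bidder never decrypts anything; he just multiplies the
# second component by 2 mod p, producing an encryption of 2 * bid1.
mauled = (c1, (2 * c2) % P)
assert decrypt(mauled) == 2 * bid1  # he bid twice as much, knowing nothing
```

Note the attack never touches the secret key, so it is fully consistent with semantic security; that is exactly the gap non-malleability closes.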
So this guy, the second bidder, managed to bid two times bid one without knowing anything about
bid one and without breaking the security definition; the semantic security still holds.
And this shows why semantic security is not good enough
for this application, even though it is good enough for some other applications. With ElGamal, with
other encryption schemes, with RSA, with all kinds of encryption schemes, you can do all kinds of
things like this. This is called mauling, or malleability.
So this gives rise to another requirement we want, at least in some settings like auctions:
we want encryptions of different messages to be independent of each other; even if you see some
encryptions, you should not be able to generate any other encryption that is related to them, okay,
and this is called non-malleability. So that's the first type of stronger scheme we'll talk about:
non-malleable encryption. This was initially defined by Dolev, Dwork and Naor, and there are
several iterations of the definition, but basically it says that given some encryption, it's infeasible to
generate any correlated ciphertext. So if, for example, you have an encryption of
m, you should not be able to generate an encryption of 2m. So indistinguishability and
semantic security are the same, and the way this definition is stronger is: in the previous definition,
semantic security, we wanted to say that the adversary's output will be the same no
matter what she got, whether an encryption of zero or an encryption of one. For
non-malleability, even if she doesn't know what encryption she got, we don't want
her to be able to generate any other encryption whose decryption depends on what she saw.
So the decryption of what she outputs looks the same no matter whether she saw an encryption of m0
or m1. So I can give you the formal definitions; do you want them or should I skip them? Skip
them, okay. So that's the definition; non-malleability is stronger. It always implies
indistinguishability, and notice that there are two axes: one is which attack you consider. So for
chosen ciphertext attack, it turns out -- and it's not trivial -- that they are the same:
indistinguishability is sufficient to get non-malleability. But for chosen plaintext attack, it's not clear
whether indistinguishability is strictly weaker than non-malleability or not, okay, and this is the
question we will ask: can we use semantically secure encryption, which is indistinguishable
under chosen plaintext attack, to get non-malleability, which is what we want, okay.
So in other words, can we immunize any semantically secure public key encryption against
malleability attacks? Okay, and for this question -- among the results I'll show, I have answers that
are yes and answers that are no, but most of what I will show is focused on the yes -- the answer is
yes, we can do it. This is based on several works; the last one is joint work with Choi,
Dachman-Soled and Wee, and here is what we do. Starting
from semantically secure encryption, so you have G, E and D, we'll use it in a black-box way to
generate non-malleable encryption. So if you give me algorithms for key generation, encryption
and decryption, I'll give you algorithms for key generation, encryption and decryption
for a scheme that achieves the stronger level of security, non-malleability.
>>: Stressing the result -- so you are saying that you construct --
>> Tal Malkin: Yes.
>>: For CCA2 and the same scheme --
>> Tal Malkin: Yeah, that's a very good point. (I'm kind of repeating the question as I'm
answering it.) So throughout my talk, when I say this implies that, I am talking in
complexity-theoretic terms: if you give me this scheme, can I change it, massage it, build some
other scheme that combines with other things. I'm not talking about: if you give me a scheme and
you prove A, can I prove B for the same scheme. Okay. Because for semantic security this is not
true; for example, we saw that ElGamal is
semantically secure and is not non-malleable, okay. But the question is: can I change ElGamal
to be non-malleable, or can I change, in a black-box way, any scheme that you prove for me
is semantically secure -- can I show you what to do with it in order to get non-malleable encryption?
And this is what the answer is yes for. And you're right about CCA2: under CCA2 they
are equivalent for the same scheme, but that's not the definition in use here. So I'm talking about
the complexity-theoretic question: I don't want to add any
extra assumptions and I want to use the scheme in a black-box way.
>>: [Inaudible].
>> Tal Malkin: Yes, I'll talk about it -- or not really talk, but I'll mention it; I have it. So I'll answer
your question quickly: they don't have it in a black-box way, and it's less efficient,
and it's non-black-box. That's the short answer. Okay. So, our key generation
algorithm for the new scheme will call the key generation algorithm of the old scheme, and the
encryption algorithm and the decryption algorithm will each call the corresponding algorithm of
the underlying scheme, and also its key generation algorithm, okay.
So this is how it works -- I mean, this is the statement of the result; I didn't show you how it works yet.
So here -- this is exactly the same as the next slide. The next slide is word-heavy and
this one is picture-heavy, but it's going to be exactly the same thing. So this is just to have an
official statement of what we do. So assume you had a semantically secure encryption scheme;
then there exists -- you see, I'm not saying it's the same scheme, I'm saying there exists -- another
scheme that is a non-malleable encryption scheme, and this notation means it's black-box: it calls
the other protocol in a black-box way. And this is obviously the minimal assumption necessary,
because non-malleability implies semantic security, so you must have some semantically secure scheme.
So our main technique -- this result existed before in a non-black-box way, and there, they used
some form of zero knowledge. In all of these non-black-box techniques
there's some use of zero knowledge, which, in short -- if you don't follow it, it doesn't matter --
is that sometimes when you have some scheme and you want to achieve a stronger
scheme, one way to do it is to run the first scheme and on top of it run some zero-knowledge
proof to prove that you are doing things correctly. That is typically extremely
inefficient, okay, because these proofs usually go through NP reductions: we take some statement,
say it's an NP statement, reduce it to 3-colorability, and so on -- so that's highly inefficient. So
our main technique here, building on previous work, was to manage to
verify consistency of encryptions in a zero-knowledge-like way but without using the inefficient
zero knowledge.
Instead we use error-correcting code techniques, okay. So this is a picture -- the red one is our result,
and this is a picture of what was known before, including the result you mentioned.
If you don't know any of these results it's better to ignore this slide, but if you
know some of them, this puts them in the context of what was known before. So we showed that CPA
semantic security implies non-malleable CPA in a black-box way. Before that, in '06,
Pass, shelat and Vaikuntanathan showed how to do it in a non-black-box way. And the way this graph
is drawn, CCA2 is a stronger definition than non-malleable CPA, and, in fact, what we knew as the best
result in terms of assumptions, the most general assumption from which
to achieve CCA2, was DDN, but they did it, first of all, with an extra assumption.
They could not do it directly from semantic security; they needed an extra assumption for the zero
knowledge, and it's also very inefficient.
Now, we did it in a black-box way with fewer assumptions, but we didn't quite get to CCA2; we
only got to non-malleability. Okay, so at this point I can take a detour and explain the
construction for you, which will probably not leave room for the rest. I don't know what you prefer: would
you prefer me to go into detail on one construction, so you understand what's going on -- then I'll go
into those details and not describe any other results, but you will understand the idea of how
we did it -- or I can go on and state two other results. So, any preference:
should I go into the details or should I --
>>: Go for the other results.
>> Tal Malkin: Go for the other results, okay. Yes? Sure.
>>: Can you use semantically secure encryption to designate --
>> Tal Malkin: Exactly.
>>: Right, and -- [inaudible].
>> Tal Malkin: Exactly.
>>: Does not imply -- [inaudible].
>> Tal Malkin: Exactly.
>>: So you cannot construct -- [inaudible].
>> Tal Malkin: Right. And I can't, either. I'm not claiming that we can.
>>: So in one sense -- [inaudible].
>> Tal Malkin: Oh, but the designated-verifier non-interactive zero knowledge that they
use is non-black-box. It uses Cook-Levin reductions: they basically do the non-interactive
zero knowledge using some NP language and a reduction. It's not black-box,
in exactly the same way that this is not black-box. Yeah, any other questions? Yeah?
>>: Given a sufficiently strong man in the middle, I can never tell who I'm
talking to, because you can always talk to the man in the middle who is talking to the man on the
other side, so there has to be a proof --
>> Tal Malkin: You cannot prevent the man in the middle entirely: for example, if you are the man in the
middle and you take what I sent and pass it on, it's the same as if I passed it. But this protects
me in the sense that if I send you something, it prevents you from changing it into anything else and
passing that on. So I can't prevent direct relaying, but in many cases you can say it's fine if the man in the
middle directly passes things on without changing what I said, and this prevents you from
changing what I said.
But yeah, it's like the question with any digital goods -- how can you prevent it? This is not really
what I am talking about here, but in general, even with copyright protection, you can always
copy; it's just a digital stream, you can always copy it, so what can you do. So it's kind of similar to
that.
>>: How does -- Bob Dash -- [inaudible].
>> Tal Malkin: Right, so you need to assume something. So first of all it's a very big question, a
big problem in crypto. I guess the simple answer is you need to assume something, like that you
have a public key infrastructure, and that's why public key infrastructure is a big
deal and why it has to be very public and available and easy to verify. There are certification
authorities. It's a big question in crypto in general.
So to summarize this result: we did achieve non-malleability, and non-malleability is the level of
security you need for things like auctions, where you want independence. Okay. Next we want to
ask about chosen ciphertext attacks. Can we protect against chosen ciphertext attacks? Now I
remind you that a chosen ciphertext attack is the attack where the adversary gets temporary
access to the decryption oracle, so she can submit ciphertexts and get decryptions. Obviously, for
any ciphertext she submits she knows the decryption, because the decryption oracle gave it to her. But what
we want to prevent is her decrypting anything else, okay: if there is a specific
ciphertext we want to protect, we want her to not be able to decrypt it even if she can submit any
other ciphertext. That's a chosen ciphertext attack. It's a very important problem in crypto; there are
many applications where, if chosen ciphertext security can be broken, your system
is not good enough. And the question is: can we construct it from semantically secure
public key encryption, okay, without any extra assumptions, in a black-box way or a non-black-box
way or any way at all? The answer is: we don't know. The ultimate answer is, I don't know.
It's a big open problem; if anybody can solve it, it's very interesting in crypto, whether
black-box or non-black-box -- whether there's any way to take semantically secure encryption
and build CCA secure encryption, or to show that it's hard to build, it would be very interesting.
But I'll tell you what we do have, which are two partial answers. So one answer is that the same construction I just described without the details can, in fact, be used pretty easily to achieve bounded CCA security. What is bounded CCA security? You remember that CCA security, chosen ciphertext security, is when the adversary can access the decryption oracle. So if you knew in advance that she can access the decryption oracle no more than this many times, no more than n times, and you want to protect against that, then you could build an encryption scheme that withstands up to n accesses to the decryption oracle. So this is bounded CCA: you know in advance the maximum number of times she can access the oracle. If this is how you define bounded CCA, then we can in fact construct it from semantically secure public key encryption, via an easy extension of the non-malleability result I just mentioned, okay. It's actually not hard to extend the non-malleability result to bounded CCA.
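As a toy illustration (my own sketch, not from the talk) of what "no more than n accesses" means, here is a decryption oracle that enforces an a-priori query bound; the XOR "scheme" is a hypothetical stand-in just to exercise it:

```python
class BoundedDecryptionOracle:
    """Toy decryption oracle that answers at most `bound` queries,
    modeling the a-priori bound in bounded-CCA security."""

    def __init__(self, decrypt_fn, bound):
        self.decrypt_fn = decrypt_fn
        self.bound = bound
        self.queries = 0

    def decrypt(self, ciphertext):
        # Refuse to answer once the a-priori bound is exhausted.
        if self.queries >= self.bound:
            raise RuntimeError("query bound exceeded")
        self.queries += 1
        return self.decrypt_fn(ciphertext)

# Hypothetical underlying 'scheme': XOR with a fixed byte, illustration only.
oracle = BoundedDecryptionOracle(lambda c: bytes(b ^ 0x2A for b in c), bound=3)
pt = oracle.decrypt(bytes([0x2A]))  # first of the 3 allowed queries
```

The point is only the counter: in a bounded-CCA scheme the key and ciphertext sizes grow with `bound`, which is the practicality cost discussed next.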
>>: [Inaudible]?
>> Tal Malkin: Is it practical in any way? So let me think about how to answer that. First of all it depends what the a-priori bound is. I'll tell you why it is practical and why it's not practical. One thing that's not practical is that if you know the a-priori bound, the sizes of your public keys and the sizes of your ciphertexts and everything will depend on that a-priori bound. So if the bound is large, then you need long public keys, and in that sense it's not practical. There is a dependence and the dependence is not great. It is practical in the sense that it's all linear: the number of times you call the underlying scheme is linear, so at least theoretically it's practical. It's definitely way, way, way more practical than DDN, say. So it's the most practical one from general assumptions. It's not as practical as Cramer-Shoup. Now what about general CCA?
When people define CCA they don't talk about bounded; they talk about not knowing how many times she will access the oracle. So for general CCA we kind of have an answer of no, okay. What does "kind of" mean? So this is another paper where we showed that, in fact, for unbounded CCA, you cannot take semantically secure encryption and build CCA-secure encryption in a black-box way. We almost showed that; I'll give a caveat in a second. So the question is: given a semantically secure encryption, can you use it in a black-box way to build CCA security? When you want to say that the answer is no, there's a whole line of work separating cryptographic primitives, called black-box separation results, where you show that you can't do it in a black-box way, okay. So we kind of show that you cannot do it in a black-box way. Why do I say kind of? In fact, we did not show it for every possible black-box construction. We showed it for a large class of black-box constructions called shielding black-box constructions. What does that mean? I'll tell you. Remember you start from (G, E, D), which is assumed to be semantically secure, and suppose that from it I could build a new G', E', D' that is CCA secure, either CCA1 or CCA2; it's true for both of them. We are saying this is not possible if the construction is black-box and my E' never calls D, okay. That's what shielding means; it's a technical condition saying the new encryption algorithm doesn't call the old decryption algorithm, okay. So as I said, the real question of whether you could do it is still open; what about non-shielding reductions, I don't know. I used to have the conjecture that you can't, but I'm not going to stand behind it; we really don't know and it's an interesting problem.
Okay. Good. So you know what, I won't go into any details, because when I finish the talk, if there is time, I can give details about your favorite result, and if there is no time, I won't. It seems better to me. So this is the same with the picture: we show that there's no shielding black-box construction from CPA to CCA2, and in particular it separates them; it shows CCA2 is stronger in some inherent way than non-malleable encryption.
Good. So I'm basically done with my first two bullets. All I did is describe and state the results. So if you have any questions about these first two things, feel free. Let me just say it in one sentence: for non-malleability, we said we can achieve it; it's a good construction, it's fairly practical. And for CCA security, we can achieve the bounded version, and we have some negative results for black-box constructions that are shielding. Okay. The next and last thing I wanted to describe to you, regarding adaptive security, is a little bit of a change of gears, so really this is a good time for questions about the previous stuff. Okay, so now I'm going to talk about trapdoor simulatable public key encryption for adaptive security. Why is this a change of gears; what's the change of gears? So first of all, so far I talked specifically about public key encryption, and I didn't really motivate why we need public key encryption, because I think you all know that. But what I'm going to talk about next is not really public key encryption. I'll have public key encryption as a tool, but my ultimate goal now is secure computation, secure protocols among multiple parties. My specific result will be about public key encryption, but only as it's used as a tool within multi-party computation. Okay. So that's one change of gears, and the other change of gears will be that my ultimate result will use some extra assumptions, not just semantically secure encryption.
So let me quickly talk about secure computation. So I'm talking about secure protocols: it's a bunch of parties, now it's not necessarily encryption where I want to send a message; it's a bunch of parties who want to compute something, and they want to compute it securely. I won't go into the definition, because I'll really only talk about the public key encryption part, but I'll talk about the attack model. So a static attack model in a secure protocol is where the adversary decides in advance: let's say you are all participating in a secure protocol and I'm the adversary, and in advance I decide, okay, I'll break into you and into you and into you, and that's what I'm going to do. Whereas an adaptive adversary is one where I'll be like, okay, I'll only break into you, I'll see what's going on and I'll see some message, and based on the message I'll decide, okay, now I want to break into you. So an adaptive adversary is one that, while the protocol is ongoing, in the middle of the protocol may decide whom else to break into, okay.
And adaptive security is more difficult to achieve than security against a static adversary; there is even some precise sense in which it's more difficult, but I won't go into it. And it turns out that all previous protocols achieving adaptive security for arbitrary corruptions, where arbitrary corruptions means the adversary may corrupt more than half the participants, were non-black-box: if the adversary corrupts more than half of the participants, there was no known protocol that achieved it in a black-box way, okay, from any known underlying general primitive. So the only known protocols for adaptive security were non-black-box and very inefficient, and moreover, they all used something called non-committing encryption, which I won't define, and it doesn't matter if you don't know what it is. They all use non-committing encryption, but the only concrete assumptions we knew how to instantiate non-committing encryption from were RSA and CDH, which is basically computational Diffie-Hellman. The only assumptions we knew how to instantiate adaptive security from were either factoring-related, like RSA, or Diffie-Hellman, which is discrete-log related.
>>: Are there any other live public key schemes?
>> Tal Malkin: First of all there is elliptic curve stuff -- you asked about what other types of
assumptions there are.
>>: And CDH doesn't work for elliptic curves.
>> Tal Malkin: Oh, it might work for elliptic curves actually. It's weird. There's another one with DDH. It might work for elliptic curves, you're right. And other assumptions that are kind of starting to be popular today are lattice-based assumptions and learning-based assumptions, which are kind of related. So there's learning with errors, learning parity with noise, random codes; all of these things I'm mentioning are kind of the same. They are different variations of the same assumption, and they are all related to lattices. And one good thing about them is that, at least for now, we don't know how to break these new assumptions even with a quantum computer, whereas the older ones we do know how to break on a quantum computer. That's one example, and in general having more assumptions is good.
>>: [Inaudible].
>> Tal Malkin: I can't hear you.
>>: Computation protocols which use techniques -- [inaudible].
>> Tal Malkin: Yeah, they don't achieve adaptive security. There are protocols achieving adaptive security where less than half the parties are corrupted, or where more than half the parties are corrupted but the protocols are non-black-box. At least this was true as of when we did our result, which is, in fact, very new. It kind of combines two results: one of them appeared in TCC last year and one of them will appear in Asiacrypt this year. Okay. So here are our results. So first of all we define something, and this I want to define for you because I think it's a good primitive; if you ever design anything else, even unrelated to this, it's a good primitive to use, I think: something called trapdoor simulatable public key encryption. This is where the public key encryption theme comes back in, so we'll define trapdoor simulatable public key encryption. This basically depends on previous work as well; it's just using previous work. We will show how it can be used in a black-box way to achieve adaptively secure protocols for any functionality, and we will show how to construct trapdoor simulatable public key encryption from lattice-based assumptions and from factoring.
So you asked about RSA and CDH; it wasn't known, for example, based on factoring. RSA and factoring are not the same, right. So now we did it also based on factoring, and also based on lattice-based assumptions and other things. So what I want to do is basically just define what trapdoor simulatable public key encryption is, because I think it's a useful primitive to know. We kind of invented it, but it's a variation of one that was known before, and I think it's a useful one, because even the trapdoor permutation, the most common primitive that people use, even that we don't know how to build from discrete-log-based assumptions, and we don't know how to do it based on learning, so I think this is a good new general primitive to use. So I'll define trapdoor simulatable public key encryption, and that's it, and I'll just state the result.
So here is trapdoor simulatable public key encryption. If you didn't follow the last part of the talk, it doesn't matter; this is a new definition. We will use it for adaptive security of protocols. So what is it? It's a regular public key encryption, semantically secure public key encryption, plus two properties. These two properties are a bit wordy to define but easy to explain; I'll show you an example. The first property is oblivious sampling. What is oblivious sampling? In a regular public key encryption scheme, the key generation algorithm generates a public key and a secret key, right. So oblivious sampling of the public key means there's a way to generate a public key that's distributed exactly the same without, in fact, knowing what the secret key is. Can you generate a public key where you don't know the secret key? Okay, this needs to be defined exactly, but this is what it is. That's oblivious generation of the public key, and there is also oblivious generation of the ciphertext. What is the easiest way to generate a ciphertext that's distributed correctly? Okay, take a random message and encrypt it; you get a ciphertext. Oblivious generation asks: can you generate a ciphertext without knowing what message it belongs to? That's what oblivious sampling is. This property holds immediately for some already-known public key encryption schemes, and is hard, or we don't know how to do it, for some other public key encryptions. Okay, so that's oblivious sampling. And the second property is trapdoor invertibility. What is trapdoor invertibility? So let's say you generated the public key correctly with the key generation algorithm; you have a public key and a secret key. Can you claim that you generated the public key randomly, without knowing the secret key? That's the invertibility. If I generated it correctly but then I want to pretend I generated it using the oblivious generation algorithm, can I do that? I think it's hard to understand from this; let me just give you an example, it's much easier to see with an example. Let's see the ElGamal example. This is what I
wrote before; that's the standard ElGamal encryption. I'll show you that it already has these properties, okay. So recall in ElGamal encryption, in the key generation, the public key is p, g, h and the secret key is x, where p is a prime, g and h are generators, and h is g to the x mod p. So let's do the key generation first. An oblivious key generation would be simple: you just choose a prime p, and choose g and h at random. Sure, h equals g to the x for some x, but you don't know what x is, and we don't know how to find it. So this is very easy, and this is distributed the same, because g and h are basically random. So it's easy to see that it has this oblivious key generation, and it also has oblivious ciphertext generation, because in the real encryption, if you want to encrypt a real message m, you choose a random r and you compute g to the r and h to the r times m. That's the real encryption. But if you wanted to generate a ciphertext obliviously, you just choose random elements in the group, c1 and c2, because h to the r times m is random. So if you choose a random c1 and c2, this is a valid encryption, but you don't know of what. Okay. So ElGamal easily has these properties, and they are also invertible. What does invertible mean? Let's say I generated everything correctly, so I chose p and g and h and I know the secret key; I can always lie and say, oh, I chose h at random, I have no idea what the discrete log is, okay. So ElGamal satisfies it. I'm giving you these examples without going far into technical details because I just wanted to explain what this assumption, trapdoor simulatable public key encryption, is. Now for factoring-based public key encryption it's not as easy, so I wanted to give you an example where it's not obvious. For ElGamal it's obvious: you just choose everything at random and it's the right distribution. So for example for RSA or for Rabin public key encryption, what is the public key? You choose n, the product of two large
primes p and q.
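The ElGamal properties just discussed can be sketched in a few lines of Python (my own toy illustration; the tiny hard-coded prime is for demonstration only and has no security):

```python
import random

p, g = 23, 5  # toy group: 5 generates Z_23^* (insecure, illustration only)

def keygen():
    """Normal key generation: secret x, public h = g^x mod p."""
    x = random.randrange(1, p - 1)
    return (g, pow(g, x, p)), x

def oblivious_keygen():
    """Oblivious sampling: choose h directly at random. Some x with
    h = g^x exists, but nobody ever learns it."""
    h = random.randrange(1, p)
    return (g, h)

def encrypt(pk, m):
    g_, h = pk
    r = random.randrange(1, p - 1)
    return pow(g_, r, p), (pow(h, r, p) * m) % p

def decrypt(sk, ct):
    c1, c2 = ct
    s = pow(c1, sk, p)
    return (c2 * pow(s, p - 2, p)) % p  # s^(p-2) = s^(-1) mod p

def oblivious_ciphertext():
    """Oblivious sampling of a ciphertext: two random group elements form a
    valid encryption of *some* message, but we don't know which."""
    return random.randrange(1, p), random.randrange(1, p)

pk, sk = keygen()
assert decrypt(sk, encrypt(pk, 7)) == 7
```

Note that `oblivious_keygen` only matches the honest distribution because g generates the whole group here; contrast this with the factoring-based setting just introduced, where no such oblivious sampler is apparent.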
That seems hard; at least it's not obvious, immediately, how to choose a number that equals the product of two large primes without knowing what the primes are. That means that oblivious key generation seems hard to come by, okay. It's not clear how to sample. So this is what oblivious key generation is, and we also want to talk about oblivious
ciphertext generation. So for example in the Rabin cryptosystem, or other cryptosystems, the ciphertext often involves finding a random square mod n, or sometimes that's part of a longer ciphertext. I'm kind of cheating, because Rabin, m squared mod n, is not really an encryption scheme, it's more a trapdoor permutation, but it doesn't really matter; even in the public key encryption schemes based on it you always have to choose a random square. So how do you choose a random square? Well, it's fairly easy, right: you choose a random number and square it. But then you know the square root. So how do you choose a random square without knowing the square root? It's not obvious, okay, because we don't really know how to test whether a number is a square mod n; that's considered hard. So this is something that doesn't have oblivious sampling, okay. So this is the definition of trapdoor
simulatable public key encryption: it's public key encryption that does have those properties, that you can generate the public key obliviously and you can generate the ciphertext obliviously. If you know what enhanced trapdoor permutations are, it's very similar to that. So this is the same definition as before; we didn't invent this, it was given by Damgård and Nielsen and we changed it a little bit. But what did we get from it? Here is a summary -- so
I told you this slide has all kinds of things I did not define and will not define. But I told you we start from trapdoor simulatable public key encryption and we get malicious multi-party computation, and this is a multi-step process; the pink ones are our results and the green ones are not our results, and all of them rely on other results. But the bottom line is: after we define trapdoor simulatable public key encryption, on one hand we showed how to achieve it from all the assumptions that we know -- from factoring, from learning with errors (that's a lattice-based assumption), DDH, RSA, we showed how to construct it -- and then, using it, we have a sequence of results that leads you to malicious multi-party computation, and it's all black-box.
Going through non-committing encryption: I didn't want to define it because it's too many definitions, but if you care -- and I think you do care -- about non-committing encryption, the first result just shows how to get non-committing encryption from trapdoor simulatable public key encryption in two rounds, okay, which is optimal. And once you get it, we already know how to get oblivious transfer from it from previous work, and then we showed a compiler going from semi-honest oblivious transfer to malicious oblivious transfer in a black-box manner. Yes?
>>: How can you have a trapdoor for the o-gen, where you generate something totally random?
>> Tal Malkin: The trapdoor invertibility is this: it's not that the oblivious generation has a trapdoor. Let's say you generated the key not at random, not obliviously; you generated it with the real key generation algorithm, with a secret key. Can you lie and claim that you generated it with the o-gen? If you have trapdoor invertibility, the trapdoor means that if you have the secret key, you can come up with randomness and then lie and say: here is the randomness I used, and here is how I got it. So in the example I gave for ElGamal, yeah, it's trivial: if you chose p and g and then you chose x and set g to the x equal to h, and I come to you and say, how did you choose it, you're like, oh, I flipped random coins and got g, and I flipped more random coins and got h. You don't have to tell me you chose x and computed g to the x. That's invertibility even without a trapdoor; anybody could do that. Trapdoor invertibility is: if you have the secret key, you can claim that you generated it obliviously. Because in all of these algorithms it will be used somehow: there will be some simulator that will choose everything in the correct way but will claim to have chosen it obliviously and will need to explain it.
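For the ElGamal case this "explanation" is trivial to write down (my own sketch; here I assume the oblivious sampler's random coins are simply the group element it outputs):

```python
import random

p, g = 23, 5  # toy parameters; 5 generates Z_23^* (insecure, illustration only)

def keygen():
    """Honest generation: choose secret x, publish h = g^x mod p."""
    x = random.randrange(1, p - 1)
    return (g, pow(g, x, p)), x

def explain_obliviously(pk):
    """Trapdoor invertibility for ElGamal: claim the public key was sampled
    obliviously by exhibiting 'coins' -- here just the element h itself --
    and never mention the secret x."""
    g_, h = pk
    return h  # "I flipped coins and got h at random"

pk, x = keygen()
claimed_coins = explain_obliviously(pk)
assert pk == (g, claimed_coins)  # consistent with oblivious sampling of h
```

For ElGamal the claim needs no trapdoor at all, which is exactly the point made above; for factoring-based schemes, discussed later, the analogous claim is where the secret key genuinely comes in.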
Okay. There's also something about whether this is in the U.C. setting or not; I won't go into it, ignore it. Let me just summarize all of the results. I'm exactly on time, so after the talk is over I can give you details on whatever you want, if anybody wants to talk to me. Here is a summary. I talked about many things. So here is a summary of the results that I showed.
So first of all, we showed that from semantically secure public key encryption, without any further assumption, we can achieve non-malleability and bounded CCA, and that we cannot achieve full CCA via shielding black-box constructions. And we also showed that from semantically secure public key encryption with this extra property of oblivious generation, trapdoor simulatability, we can build non-committing encryption, and from it adaptively secure protocols for any functionality, against an adversary that can corrupt as many parties as he wants. The results I didn't mention -- I keep mentioning them just in case you want to ask me afterwards -- are that we have both positive and negative, both possibility and impossibility results, for some physical attacks like key tampering and key leakage, and I think that's it.
Any more questions?
>>: What was the definition of non-committing encryption?
>> Tal Malkin: Non-committing encryption is a regular encryption, a semantically secure encryption, with an extra property. I might have a slide, let me see for one second. No, it's not here. I'll tell you the definition. So it's a regular encryption, and in regular encryption you have encryptions of zero and encryptions of one, and they are disjoint: a ciphertext is either an encryption of zero or an encryption of one. Non-committing encryption is where the encryptions of zero and the encryptions of one are not really disjoint; there is some small set of things that could be either encryptions of zero or encryptions of one, and if you encrypt correctly -- you want to encrypt a zero, you encrypt a zero; you want to encrypt a one, you encrypt a one -- you are extremely unlikely, almost for sure you will never, hit something in the intersection. But a simulator, or some special party, could generate ciphertexts from the intersection. So basically, with non-committing encryption, if I am some special party, I can cheat: I can generate something that later I can claim was an encryption of zero, or claim was an encryption of one. And this is the tool that was used by all the results that achieved adaptive security.
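The equivocation idea can be illustrated with a toy one-bit "openable both ways" scheme (my own sketch; a real non-committing encryption scheme needs far more machinery than this):

```python
import secrets

def encrypt(key_bit, m_bit):
    """'One-time pad' on a single bit: ciphertext = key XOR message."""
    return key_bit ^ m_bit

def simulate():
    """Simulator emits a random ciphertext before knowing the message."""
    return secrets.randbits(1)

def equivocate(c, m_bit):
    """Later, open ciphertext c as an encryption of m_bit by 'revealing'
    the key that makes it consistent: key = c XOR m_bit."""
    return c ^ m_bit

c = simulate()
k0 = equivocate(c, 0)  # claimed key if we open c as a zero
k1 = equivocate(c, 1)  # claimed key if we open c as a one
assert encrypt(k0, 0) == c and encrypt(k1, 1) == c
```

Here every ciphertext sits "in the intersection", which is why this toy is not a real scheme; in non-committing encryption honest ciphertexts almost never do, and only the simulator can hit the intersection.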
>>: [Inaudible].
>> Tal Malkin: How do I define oblivious sampling? Did I not define it? So -- oh, how do you define that I don't know the secret key? What does it mean that I don't know the secret key? That's a good question, because I could always generate a secret key and a public key and then "forget" the secret key, and that would not be good enough. The way I define it is that the scheme is still semantically secure even given the random coins used to choose the public key. So let's say I chose the public key in this oblivious way: I have an algorithm, o-gen, that gets some random coins and generates a public key, and I want that even if I have the random coins that show how I generated this public key, the scheme is still semantically secure -- if I get an encryption, I have no idea how to break it. Whereas if you generated it in the naive way, generating a secret key and a public key, it's not secure, right; obviously you can decrypt anything.
>>: And the public key.
>> Tal Malkin: An external party, or you can think of it as an internal party that generated it using the oblivious generation algorithm. I'm just answering you about the key, but it would be the same thing for the ciphertext; you want both of them. For the key you want to say that if I generated the key using o-gen -- that means I got a key; for example, for ElGamal I generated p, g and h, and that's how I generated them, I chose p correctly and I chose a random g and a random h -- now if you give me some ElGamal encryption, it's semantically secure; I will have no idea what you encrypted, even though I know exactly how I generated the key with o-gen. And the same for the ciphertext generation. So for example, if I chose a message, squared it, and got m squared, then it's not semantically secure for me: I know what the square root is. So you want it to remain semantically secure even if you chose it like this. We have a definition, and Damgård and Nielsen had a definition; I can show you, I probably have it in a slide. I'll show you after the talk. Somebody else had a question, I remember.
>>: So chosen plaintext is by nature a passive attack rather than an active attack.
>> Tal Malkin: Yes.
>>: And it's gotten critical, because if you use a one-time pad it's totally semantically secure and totally malleable; where does this proof break when you apply it to the one-time pad?
>> Tal Malkin: I see your question. That's a good question. Let me see, hold on a second. Let me find -- this is our construction; you can ignore all the details. So you have some encryption scheme that's semantically secure and you want to build a non-malleable scheme out of it, and the way you do it involves choosing many, many pairs of keys, okay; in what way doesn't matter, because I'm just showing you -- I'm trying to think how to say it. The way you do it is by repetition: you generate many public keys of the underlying scheme and you encrypt the same thing many times using these public keys. With the one-time pad, this construction won't work -- it's not that the security won't work; you can't even encrypt. Let me think how to say it. It's private key encryption, and it is very malleable. You are asking, if I applied this construction to a secret key encryption, would I get --
>>: That particular one.
>> Tal Malkin: Yeah, you know, we might. You see, we actually might get non-malleability when we apply it, because when we apply our construction it will no longer be the one-time pad; it will be the one-time pad repeated many times, with some extra proof of something, with some extra cut and choose, because we don't use the same scheme, we change it. So, in fact, our construction will work also for secret key encryption.
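The repetition skeleton can be sketched as follows (my own toy; the stand-in "underlying scheme" is a random-pad XOR, and the consistency check merely hints at the proofs and cut-and-choose the real construction needs):

```python
import secrets

# Hypothetical underlying scheme: XOR with a random pad, standing in for
# any semantically secure scheme (illustration only, not secure composition).
def keygen(nbytes=16):
    return secrets.token_bytes(nbytes)

def enc(key, msg):
    return bytes(k ^ m for k, m in zip(key, msg))

def dec(key, ct):
    return bytes(k ^ c for k, c in zip(key, ct))

def nm_keygen(copies=5):
    """Repetition: generate many independent keys of the underlying scheme."""
    return [keygen() for _ in range(copies)]

def nm_enc(keys, msg):
    """Encrypt the same message under every key."""
    return [enc(k, msg) for k in keys]

def nm_dec(keys, cts):
    """Decrypt all copies and reject unless they agree -- a stand-in for the
    consistency the real construction enforces with proofs/cut-and-choose."""
    msgs = {dec(k, c) for k, c in zip(keys, cts)}
    if len(msgs) != 1:
        raise ValueError("inconsistent ciphertext")
    return msgs.pop()
```

Mauling one copy now breaks the agreement check, which is the intuition for why repetition helps against malleability, even though by itself this sketch is far from the full construction.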
>>: Can you say more about how you use the trapdoor simulatable public key encryption for the multi-party computation?
>> Tal Malkin: Yes, so the truth is, we don't use it directly. There are two stages there, two different stages; tell me which one you want me to elaborate on, or both. First we use trapdoor simulatable public key encryption to build non-committing encryption. That's one construction, and I can show you how to do it, and it's not really related to adaptive security by itself. Non-committing encryption is what I was answering before: you have some special way to generate ciphertexts that can be opened both ways, okay. And I can give you the intuition of how to use trapdoor simulatable public key encryption to get non-committing encryption; that's one aspect. And another aspect is how to use non-committing encryption for adaptively secure computation. I think you are asking more about the second one, how to use non-committing encryption -- okay. So let's assume we have non-committing encryption, something where you can generate a ciphertext that you can open either way. The way to do it is the following. Let's say we have a secure computation protocol that we know is secure for non-adaptive adversaries: if we all ran the protocol and the adversary decided in advance whom to break into, it was secure.
How do we get adaptive security? So what is the problem with adaptive security? The problem is that the adversary is there and sees what is going on, and he saw the message you sent, and based on that message he decided to break into you. He saw that message and he's like, uh-huh, I want to break into you. Once he breaks into you, you have to show him -- I forgot to say, we assume no erasures. Achieving adaptive security is easier if you can assume that you can safely erase; everything we do, we assume you don't erase anything. So let's say I'm the adversary, I haven't broken into anybody yet, and I saw you sent some message to him; it's encrypted, I can't read it, but based on the message I decide to break into you. Once I break into you, now you're corrupted, and I tell you: show me what message you sent, what randomness you used, how you generated that ciphertext from your message.
Okay, the problem is, when we want to prove security, we have a simulator: we want to show that in the ideal world a simulator can simulate everything. So before I broke into you, the simulator just generated something; that's easy, just generate an encryption of anything, all the encryptions look the same, so it's no problem. The simulator could generate an encryption of zero and it looks the same as the encryption of whatever you really sent. The problem is, once I break into you, I see your inputs, and the simulator has to explain what you sent as consistent with what you really have. And this is where the non-committing encryption comes in: the simulator will send something that can be opened both ways, and then if I break into you, it can be opened to match what you committed to. The way I described it now is a little bit of cheating.
What I described now is exactly how non-committing encryption was defined by CFGN; they defined non-committing encryption in exactly the way I described it to you, and they used it exactly this way, to simulate secure channels. You know how, if you forget about adaptivity, we can assume we have secure channels, and then in reality we don't have secure channels, so we use public key encryption instead. So for the adaptive case they use non-committing encryption instead of secure channels, and this is exactly how it worked. It turns out that for this setting, the stand-alone setting, in fact, in the end there are newer results that don't even need non-committing encryption; they used it only when the adversary breaks into a few people. Anyway, the way I described it to you now is how it's used in the U.C. setting; it's used in somewhat more sophisticated ways, but the intuition is the same.
It's supposed to simulate a secure channel where I first saw what was going on and then I broke into you, and now I need to explain, and I can't change my mind any more: this is the message that was sent, and now I have to explain this message as an encryption of what your real inputs are. So that's how non-committing encryption is used there.
>>: Stand-alone only functions --
>> Tal Malkin: Yes, I can. These are our results. We built non-committing encryption, and going from semi-honest to malicious is already known from Canetti, Lindell, Ostrovsky and Sahai. So you would ask: you were cheating all along, you were saying you don't want any extra assumptions, so what is this extra assumption you are adding here, commitment? This extra assumption can be achieved in the U.C. setting with some common random string or some other setup, but what I wrote before is in the stand-alone model, and there it's not really an extra assumption, because commitment comes from one-way functions. So we can remove it completely, because having just public key encryption already implies one-way functions, which already imply commitment, so it does not require any extra assumptions.
>>: Can you go into the trapdoor for RSA, of what it looks like, or if you've got --
>> Tal Malkin: Yeah, let me think. Yeah, let me see, how to construct it based on hardness of
factoring -- that's kind of what you're asking, right?
I specifically said that it's hard to construct this oblivious sampling for factoring-based stuff,
right, so what are we going to do? So let me think about what the most important parts are.
There's the generation of the public key and there is the ciphertext. The ciphertext, let's say, is
x squared, and the public key is n equals p times q, and we want to generate them both. So how
do we do it? First of all, we will change the scheme: we will now allow any integer n; it doesn't
have to be the product of two primes. Once you say that, we know how -- it's not easy, it's not
trivial, but it's a known result -- to sample random integers together with their factorization.
So now I'm changing it so that any n is fine, even if it's not p times q, okay, so that's easy. We
will still need to show that some security holds. Now another problem is that it was important
that squaring was a permutation -- if you know Rabin, Rabin works on quadratic residues -- and
once I allow n to be not necessarily p times q, it's no longer a permutation. So we change the
domain: instead of the quadratic residues, the domain is the set of 2^k-th powers, x to the 2 to
the k, where k is specifically the length of n. All of these are number theoretic theorems, but
they are easy to prove; the hard thing is to notice how to do it, to decide to do it and to figure it
out. So you can show that squaring is a permutation on this domain.
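One can check this permutation property numerically on a toy modulus (n = 21 here is an illustrative choice, nothing like a secure parameter):

```python
from math import gcd

n = 21                  # toy modulus, not necessarily p*q of special form
k = n.bit_length()      # k = length of n

# The units mod n, and the new domain: 2^k-th powers mod n.
units = [x for x in range(1, n) if gcd(x, n) == 1]
domain = sorted({pow(x, 2 ** k, n) for x in units})

# Squaring maps this domain onto itself, i.e. it is a permutation there.
image = sorted({pow(y, 2, n) for y in domain})
assert image == domain
```

Raising to the 2^k-th power kills the entire 2-part of the group of units (since k exceeds the 2-adic valuation of the group's exponent), so on what remains squaring is invertible.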
Good. And it's hard to invert, the same way that Rabin is hard to invert for n equals p times q.
Rabin says that if n equals p times q, it's hard to invert. Now I define the scheme for any n, but
the hardness of inverting is only for n equals p times q. That's fine, because that's a sizable
fraction of the n's, and I'll manage to use it. So this shows you that squaring now gives a weak
trapdoor permutation -- weak in the sense that it might be easy to invert if n doesn't happen to
be equal to p times q, but if it does, then it's hard. Okay, so that's one change we did. There are
many details; let me think.
So at the end of the day, what I'll choose is many n's -- enough n's that it's very likely that at
least one of them happens to be a Blum integer. A Blum integer is n equals p times q, for p
equal to 3 mod 4 and q equal to 3 mod 4. We don't know how to generate a Blum integer
without knowing the factorization, but we can generate just a bunch of random numbers,
enough of them, because Blum integers are dense enough that one of them is likely to be a
Blum integer.
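As a rough sanity check on this density argument, one can count Blum integers among small odd numbers by brute force (trial division, toy sizes only; the bound in the assertion is just illustrative):

```python
def factorize(n):
    # Naive trial division -- fine for small toy numbers.
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def is_blum(n):
    # n = p*q for distinct primes p and q, both congruent to 3 mod 4.
    fs = factorize(n)
    return (len(fs) == 2 and fs[0] != fs[1]
            and fs[0] % 4 == 3 and fs[1] % 4 == 3)

odds = range(3, 20000, 2)
count = sum(is_blum(n) for n in odds)
# Blum integers form a noticeable fraction of the odd numbers, so
# sampling enough random candidates hits one with high probability.
assert count > 100
```

For cryptographic sizes the fraction shrinks only polynomially in the bit length, which is why polynomially many random samples suffice.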
>>: Security-wise, it looks like one in a million -- if you have thousand-bit numbers, you have a
billion.
>> Tal Malkin: You are just doing k to the third? Yeah, it's true. I guess this is the oblivious
generation; I feel like this doesn't answer your question.
>>: Let me tell you where I was going. I wanted to understand whether the construction that you
have from factoring, for RSA in particular, took into account, or could take into account, the fact
that in the way we tend to use RSA these days there's formatting of the message. When you
think about it from a theoretical perspective, we take n and exponentiate and don't worry about
the format, but when we use it in practice, there is all of this padding and formatting which
happens, which would seem to interfere with your ability to generate the appropriate sampling,
because you have to be able to obliviously sample something that looks like a ciphertext.
>> Tal Malkin: I see what you're saying.
>>: That's where I was going and I understand these are different constructions.
>> Tal Malkin: It won't interfere. I understand your question. It won't interfere, because we
theoreticians also don't use RSA as it is used in practice, because that's not semantically
secure. In practice the way it's used, there are some paddings of a specific format, whereas in
theory the way it's used is this, using a hard-core bit. So it doesn't matter: as long as you
assume that whatever way you use it is semantically secure, it will work.
So it doesn't assume a format. The way the protocol works is it generates a bunch of
ciphertexts -- not n to the fourth, sorry, a linear number of ciphertexts -- where a quarter of them
were chosen correctly, meaning you actually encrypted, and three quarters of them were
chosen obliviously. You want the distribution of the oblivious ones to be the same as the
distribution of the correct ones. I don't know if this is guaranteed for ad hoc ways to construct
RSA encryptions, but if you guarantee your thing is semantically secure, then I guarantee it will
work.
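The requirement that oblivious and real ciphertexts be identically distributed can be illustrated with the toy Rabin-style domain from before (again an insecure toy modulus, for intuition only): since squaring permutes the domain, the square of a random domain element is itself uniform on the domain.

```python
import random
from math import gcd

random.seed(0)  # reproducible toy experiment

n = 21                  # toy modulus
k = n.bit_length()
units = [x for x in range(1, n) if gcd(x, n) == 1]
domain = sorted({pow(x, 2 ** k, n) for x in units})

# "Real" ciphertext: square of a random domain element (we know a root).
real = [pow(random.choice(domain), 2, n) for _ in range(3000)]
# "Oblivious" ciphertext: a random domain element, sampled with no secret.
obliv = [random.choice(domain) for _ in range(3000)]

# Both samples range over exactly the same domain, as the
# permutation property says they are identically distributed.
assert set(real) == set(obliv) == set(domain)
```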