>> Seny Kamara: Okay. So it's a pleasure to have Juan Garay speaking today. Juan
is a researcher at AT&T and he'll be speaking about sound specifications of
cryptographic tasks.
>> Juan Garay: Thanks, Seny. Good to be here. So I'm going to talk about a framework
for the sound specification of cryptographic tasks. This is work with Aggelos
Kiayias and Hong-Sheng Zhou, who just started a post-doc at the University of
Maryland.
And I thought about talking about this because we've been working on it for a couple
of years, but it was just recently presented at CSF, the Computer Security Foundations
symposium in Scotland, and I figured not too many people had listened to this talk, except
then I found out that a few people had listened to it, including [indiscernible]
and others. But that was a couple of years ago, or a year ago, and I'm told that I
became a much better speaker in the past year or so. So bear with me.
So cryptographic tasks, what I mean by that is these creatures: encryption, digital
signatures, zero-knowledge, commitment, key exchange. And they have associated
security notions that you might recognize, such as CPA security, chosen-plaintext attack
security, CCA, chosen-message attack security for digital signatures, or unforgeability, you
know, zero-knowledge with its properties, commitment, key exchange. Here I'm
just mentioning one, it's called session-key security, by [indiscernible], 2001 or
something.
Anyway, so these are cryptographic tasks, and these are select security notions you
might have heard of, all right? So let me now show a timeline of these security
notions, probably starting with the notion of CPA back in 1984, or actually semantic
security was a little bit earlier. This was where the equivalence between CPA
and semantic security was established.
And, you know, some other notions came about at different points in time.
So what year was this? What year was this?
>>: [inaudible].
>> Juan Garay: Just checking. All right. So there was a time where all these
security notions came about. And these, in fact, were what we call
traditional security notions. And they were proven in, you know, isolation or very
controlled settings, such as, for example, zero-knowledge here, a prover and verifier
protocol, and those notions would have to be satisfied, okay?
So later on, we had, you know, more things going on. We have this, call it, notion
that we, you know, try to achieve Internet security, where there are open networks,
you know, there are multiparty tasks. There might be concurrent execution of these
tasks. The execution could be asynchronous. There would be malicious attackers and
so forth.
So a lot of things going on, which, you know, researchers are working on.
At some point in time -- this is a simplified timeline version of things
anyway -- around 2001, you see what is known as UC. This I know as a joke; nobody else
does. For Anno Compositi, for Anno Canetti, where instead of defining these
security notions separately, in isolation, there would be a single notion to capture
the properties, the whole purpose of the task, and these notions were called
functionalities.
Okay. So we have a notion of functionality for public-key encryption, signatures,
commitments, zero-knowledge and so forth. As I say, this is just a simplified
version, a simplified presentation of things because, you know, all these
notions started happening before, when people started talking about the simulation
paradigm and ways to prove things secure and so forth. Again, this is a simplified
version of things.
Okay. So we have this concept of a single notion to capture all the properties of
a task.
Okay. And in this talk, we focus on this particular framework of Canetti called the universal
composability framework, but some of these things can be applied elsewhere. It,
in fact, builds on a, you know, series of security definitions, starting with
[indiscernible] in '87 and follow-ups. And as I said, this is just one framework.
There are similar frameworks, such as Pfitzmann and Waidner's, also in '01, and, you
know, other follow-ups.
All right. So one important aspect of this, as I mentioned before, is simulation-based
security, which we'll get into. It allows the modular construction of larger
protocols. Protocols can be arbitrarily interleaved, and when they are composed,
they remain secure. If this means something to you, think of object-oriented and
thread-safe.
>>: What does this mean, object oriented and thread safe?
>> Juan Garay: Object-oriented, [indiscernible] programming type thing. You just
have an object and you plug it in, it's going to do -- you don't care about the
internals. Thread-safe: it's going to keep doing what it's doing, regardless of what
other things are going on. That exhausts my knowledge of this terminology.
Okay. So here's a comparison. These were what we call the traditional, classical
notions that were designed for specific environments, standalone environments.
The several properties were specified separately. And in contrast, these
UC notions would apply to arbitrary environments, where a lot of things can be going
on concurrently. And as I said, all the properties would be specified at once, okay?
So I mentioned the simulation paradigm, and I'll say a few things about that
before continuing. This, as I mentioned, started with work by [indiscernible]. To
prove that a protocol is secure is basically this type of comparison, where you have
a real world, you know, running a protocol, call it pi, and this arrow is representing
interactions between the parties; and some idealized world, where, you know, there is
a trusted entity, call it that, that does what the task is supposed to be doing, and
notice that the communication is between the parties and this ideal
entity, okay. So if you have to compute some function, then in the ideal world,
inputs go to the trusted party, you know, all the inputs go. The party computes the
function. It will produce the outputs and that's it.
Here we're calling this task G; that's going to be the F that we talked about earlier.
And, of course, in this crypto world, the parties can be corrupted, misbehave, you
know, do weird things. And these are the parties corrupted in the real
execution. These would be the parties corrupted in the ideal execution -- you know,
the execution that is trying to achieve this task.
These parties here are thought of or [indiscernible] as adversary A. And here, we
have this ideal-process, ideal-world adversary called the simulator, that is
going to try, knowing pretty much what the real adversary knows, to
create an execution that is indistinguishable from this execution. Okay. And
more formally, the definition would say that the protocol pi
running here securely realizes this task G if, for all input vectors and
some security parameter, for whatever the adversary does, there's a simulator
strategy such that the views, okay, in both worlds are indistinguishable. This can
be defined in different ways:
the distribution of inputs/outputs and communication, on both sides, okay?
If you're able to do this -- because, as I mentioned, the communication between the parties and
the trusted party is over channels that are ideally secure, and this guy is good,
it's going to do what it's supposed to do -- so regardless of what the adversary's strategy is,
whatever strategy it follows, the simulator manages to do the same thing, the views are
indistinguishable, then the protocol is secure, okay?
So this is pretty much the basis of UC security, except that here there's another
entity called the environment. You know, we saw these two distributions needing to be
indistinguishable. The environment you can think of as the distinguisher, and it's
going to try to produce, induce, try all possible things so that, you know -- so
that it can figure out whether the distributions are equal or not, you know,
indistinguishable or not.
And you can see that here, the environment [indiscernible] controls things again by
providing, you know, the inputs to the parties, seeing the outputs, communicating
with the adversary, and so forth.
Here we have this extra quantifier saying for all these environments, these
two views are going to be indistinguishable. Okay? So that's a two-minute overview
of this simulation paradigm thing, which is used in the UC security framework.
And so we saw this trusted party here executing the task, right? This is what the
task is supposed to do. This is what this box is going to be doing. That
is what is called the functionality in the earlier pictures that we saw. And here
are the entities that live in this ideal world, okay? So we have the environment,
Z; this adversarial, ideal-world adversary called S, the simulator; and these
are the parties. In this case, for a signature application, those would be the signer and
verifier, for example. And some communication between the entities that we'll talk
more about later.
Okay. Any questions so far? So this is the trusted third party executing this ideal
functionality. Let's go back a little bit and see what ideal means in the
dictionary: something, you know, pure, uncorrupted, perfect, and so forth. Since
Aggelos, one of my co-authors, is Greek, this is his example of an ideal object.
in the beginning. For example, it's just an ideal functionality for zero-knowledge.
According to zero-knowledge, you're trying to prove the veracity of a statement
without revealing anything else. This case, it's for the argument of knowledge case
where there's a binary relation. The prover is polynomial time bounded, and there's
5
a common input to both parties, prover and verifier. And the prover has some other
element called the witness, and it's trying to prove to the verifier that it exceeds
[indiscernible] a witness satisfies, some element has satisfied this relation and
nothing else. So that's all the verifier learns. Okay?
This is supposed to capture what's going on. This is the functionality, the
code that the functionality will be executing. When it receives a message
of the form ZK-prover from P_i, it gets these two values, you know, the common
input and the witness. The functionality verifies that the relation holds, right?
And if it does, then it sends to the verifier, to the verifier P_j, you know, this
is it: there exists a witness such that the relation holds. Otherwise, it
doesn't do anything. Okay. Notice that here, just by sending this message to the
verifier, the verifier didn't learn anything new. X was the common input to both
parties.
Okay. So you can look at this and say this is what the zero-knowledge functionality
should look like, simple enough. If we're able to achieve this, you know, a
protocol that realizes this should be secure. So simple enough code.
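To make the shape of that code concrete, here is a minimal sketch, in Python, of how such an ideal functionality could be written; the message tags, names, and interface are illustrative assumptions, not the actual framework code.

```python
# Minimal, illustrative sketch of the zero-knowledge ideal functionality F_ZK.
# The message format and names are assumptions for exposition only.

def make_f_zk(relation):
    """relation(x, w) -> bool is the binary relation R."""
    def on_message(prover, verifier, message):
        tag, x, w = message
        if tag != "zk-prove":
            return None                          # ignore anything that is not a prove action
        if relation(x, w):                       # check that (x, w) is in R
            return (verifier, ("zk-proof", x))   # verifier learns only that a witness exists for x
        return None                              # otherwise, do nothing
    return on_message

# Toy example: proving knowledge of a square root modulo n.
f_zk = make_f_zk(lambda x, w: (w * w) % x[1] == x[0])
print(f_zk("P1", "P2", ("zk-prove", (4, 21), 2)))   # ('P2', ('zk-proof', (4, 21)))
```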
So let me show you what these ideal functionalities have become more recently. Now,
don't worry about deciphering it or reading it. This is just one example -- not due to
myself; this is due to my co-author. It is the blind signature
functionality. So the point is: who knows what's going on here? It's going
to be very difficult to tell whether there's a semicolon missing here, whether some action
is missing or is, you know, incorrect, and so forth.
Okay. And in some sense, it's what motivated our work.
Because the classical notions, these notions that we talked about, CPA, CCA, you know,
were stable, well understood. But these notions, while they were supposed to work
for arbitrary environments, you know, capture all the properties -- it seemed
to be a challenging task to do that, right? Because there would be multiple
candidate specifications for the same task with subtle differences that you couldn't
figure out, or people figured out only for, like, some cases.
There's the case of digital signatures, where there was an original proposal by Canetti.
Then Backes and Hofheinz showed that the specification had a problem, and then
Canetti revised it, and later it was found that that specification also had a problem, and so forth.
So it's very hard to get this thing to do what it's supposed to do.
>>: So why don't you -- sounds like you're saying the ideal functionality is so
complicated, maybe it doesn't do what you want. It's kind of -- take it, you know, I don't
know this, but arguably you can see this problem if you make a one-page ideal
functionality without proving it secure. Which issue is more serious? You have
plenty of functionalities, it's like all those things, and now you have a protocol, but
to prove it secure [indiscernible] simulators and you need to [indiscernible] is
impossible.
>> Juan Garay: You're saying that the proof itself, the proof effort itself, is
complicated and --
>>: So which one is more important for the purposes of your talk?
>> Juan Garay: No, I'm assuming for now that once you give a -- I'm ignoring the
complexity of the proof, [indiscernible] specification.
>>: [indiscernible].
>> Juan Garay: That's the next research project, you know, the proofs. In
fact, people have been trying to do it, to make the proof system systematic, the method
systematic, and so forth. So for now I'm talking about trying to capture even the properties
of the task, that you more or less intuitively would know, trying to lay them down on
paper, you know, as code for the ideal functionality. Trying to get it right.
>>: I'm just saying it happened to myself a few times. That's why I no longer work
in this field. But just, you know, you have something here, a protocol, initially
some security [indiscernible], you say okay, forget about the proof. Then you find
out, you say oh, yeah, that's the proof, you're right. There is that one little
thing that I missed, and that's exactly where it comes through. So to me, the ideal
functionality, I mean [indiscernible], this thing is complicated. It's very easy
to make a mistake.
>> Juan Garay: Okay. Well, assuming that you made a few mistakes on top, you make
them on top of an, you know, ill-defined functionality.
>>: [indiscernible].
>> Juan Garay: So they're not in a post, you see, model here.
So that was the -- was this clear? So we have cryptographic tasks. We have this
sort of ideal [indiscernible] task, and the purpose of this talk is to have some
methodology to define those specifications, you know, in a consistent way and
hopefully a simple way.
And so, related to Evgeni's question: what happens when you have an
ill-defined functionality? There we are. So the reasons for not being able to
specify this are that, as we said, you know, you're given a task, the functionality,
you're to specify how to generate the parties' outputs. You have input, output,
this is what the output should be, except that we have all these, you know, privacy,
confidentiality requirements in crypto. And in this framework, using this simulation
paradigm thing, you have this entity, the simulator, that's supposed to arbitrate
things and play around with things so that, you know, nothing is
distinguishable, and so forth. So there's a back channel between the functionality and
the simulator that is complicating things, right. And that's what this
text is supposed to say.
Because there are a lot of issues: you have to specify in the functionality when
to talk to the simulator, what information the simulator should know, some
synchronization issues between what's going on in the real world and the ideal world,
and so forth. So these are reasons why getting these things right is --
>>: [indiscernible] talk about the simulators.
>> Juan Garay: No, you do. You send something to the simulator; also, it's important
at what point during the execution you send something to the simulator. And
that might even show in the zero-knowledge case, right, some
communication with the simulator. There's this notion of delayed output --
there are different ways of doing this -- where maybe the verifier doesn't get the output
directly, but it goes to the simulator, who
is going to decide at what time to give the output to the verifier, in this case,
you know, depending on what's going on in the real-world execution.
Okay. So we were here. So as we said, there is some natural communication going
on, input/output communication. You can think of these channels as being the I/O
channels coming from the environment. We want the environment to provide inputs and get
outputs, okay.
There's going to be some internal communication here, which we're going to be calling
the leak and influence channels. So when there is some input that the functionality
receives, depending on the task, the real-world adversary might know something about
the input -- for example, the length of the message. Okay, so the ideal entity is going
to also learn that value.
Also here, we're giving the simulator the chance to talk back to the functionality.
We'll, you know, explain more about this later; that is looking forward.
These are going to be our channels, our communication channels.
And relating to Evgeni's comments, what are the consequences of not
defining -- not getting a functionality right? If you under-specify, then
you're missing some security properties. If you over-specify, you know, it's
going to be, as I said, hard to understand, and you might pay some, you know, unnecessary
efficiency costs, or, you know, you might never be able to realize it, to come
up with a protocol that realizes that specification.
So in this work, what we are trying to do is provide a methodology that is sound,
that gets the functionality right, and is syntactically concise -- and you're going
to be, you know, the judge of that -- to write and reason about UC functionalities.
Okay. The properties that we're looking for: one issue is the same structure
for many tasks. It should look the same for many tasks. If there are different
functionalities, you know, capturing probably the same task, and you have the
same format, it should be easier to compare them. We can put together, we can combine
functionalities, you know. We can define a functionality for one property and combine
them. And in particular, I said property: there are lots of security properties
that have been phrased in so-called game-based definitions. We'll talk a little
bit more about that later.
So if that's the case, if we only have a game-based definition, we have a translation
from that game definition that gives you the corresponding functionality. This allows
us to also debug some of the existing functionalities.
So far, at the half hour, this is the motivation and high-level results. And next, we're going
to be looking at some of these bullets. Now I'm going to talk about the notion of
the ideal canonical functionality. Remember I said the same thing, the
functionality, should look the same for different tasks. And for that, we have to
get into a little bit of formal-language lingo.
So we have a cryptographic task that is determined by actions, okay. For example,
for signatures, the actions are key generation, sign, and verify. For zero-knowledge,
prove would be an action. The commitment task would have commit and open. Those
are the actions, as I showed you, of the task.
So given that, this is going to be the format of all the -- remember, I mentioned
those channels. Everything that goes in those channels is pretty much of this format:
some action, for example, key generation; the PIDs that identify the players,
the entities that are involved in this action, you know, prover, verifier; plus some
other session ID information and so forth.
And with each action we associate an action return. Think of this as the output that the
action produces. For commit, it would be a commit return, sent
to the receiver. For prove, a prove return, sent to the prover. There may or may
not be this return thing, but it is typically associated with the actions.
So if the task has, say, k actions, you have an alphabet associated with it that
contains all these possible pairs. So these are the symbols that are going to be going
around on the I/O channels that we saw before.
We also said that there's some communication between the functionality and this ideal-world
adversary, the simulator. So there are other symbols there, you know,
there's some traffic on those channels. One is the leak action, a verb that
says: if there was an action, this is what the simulator, the other side, is
allowed to know from this action.
And the influence action is the influence that the simulator can impose on the
functionality to produce the output. So we have all these: in the I/O, we have the regular
symbols as I showed you with the task, and you have at least these two symbols in
the communication with the simulator.
So now we can talk about the language of the task. The I/O language is all the symbols, okay,
that is what I said before, between the functionality and the environment. And
we have the complete language for the task when we put all these symbols together:
functionality and environment, and functionality and simulator. Okay. That's
a formal language.
Okay. So this is the I/O language, and this is -- everything that goes on is in this
language associated with the task. In some way, what we are going to be doing is
some sort of traffic police, you know, watching what's flowing in these
channels to look for inconsistencies, okay? So that's the basic idea, at least for
consistency properties.
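As a concrete picture of that alphabet, here is a small Python sketch of how the symbols flowing on these channels might be represented; the field names and values are illustrative assumptions, not the paper's notation.

```python
# Illustrative sketch of the task alphabet: action symbols, action returns,
# and the leak/influence symbols exchanged with the simulator.
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass(frozen=True)
class Symbol:
    kind: str               # "action", "action-return", "leak", or "influence"
    action: str             # e.g. "keygen", "sign", "verify", "commit", "prove"
    pids: Tuple[str, ...]   # identities of the involved parties
    sid: str                # session identifier
    payload: Any            # action-specific data (message, witness, signature, ...)

# Example symbols for a signature task in session "sid-1":
keygen = Symbol("action", "keygen", ("P1",), "sid-1", None)
sign   = Symbol("action", "sign",   ("P1",), "sid-1", "hello")
leak   = Symbol("leak",   "sign",   ("P1",), "sid-1", "hello")   # what the simulator is told
```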
So we're defining the canonical functionality, right. So another thing that is
going to be inside the canonical functionality is the well-formedness predicate.
Remember, I mentioned that tasks have actions. So the well-formedness predicate is
going to check that the actions are regular inside the functionality. So there could not
be an open before a commit for a commitment task. There should not be a sign before
a key generation action for signatures. So this well-formedness check is
going to be a history thing. You put all the symbols there, and there's a
check that when there's a new symbol coming in, it doesn't violate any consistency
thing -- again, it couldn't be a sign if there was no key generation before.
And there are going to be public and secret outputs, which are basically also functions
that specify, given the action, what is the public output that everybody is supposed
to learn from this action and what is the secret output that nobody else except the
intended receiver should learn from this action. For zero-knowledge, an example
would be that there is effectively no secret output, because the verifier doesn't learn
anything that is not public; it just got convinced that the statement is true.
And the public output is basically what everybody would know, the simulator in particular:
that the statement is true. There was a history, there was some symbol coming in,
and for this statement, the prover has a witness, and that's the public
output. And that would go to the simulator, and the secret output would go to the
verifier. All right?
Oblivious transfer, which I will get to later.
So this is the basic block diagram for the canonical functionality for a task T.
It maintains a well-formed history using this predicate that we saw. It leaks to the other
side the public output associated with the task. And furthermore, it's
parameterized by suppress and validate functions.
Okay. We're almost done with the specification. Suppress is going to determine,
basically, when there is an action coming in, what gets leaked
to the -- what part of it gets leaked to the other side; and validate is going to,
when there's a message from the simulator saying now output this value, check
that that action and the value make sense, and then it produces the action
return. So it's a two-pass box. And we'll see examples shortly.
So suppress redacts information from the action symbol, and validate makes sure that
things make sense. It would instruct the functionality to halt if the
history is inconsistent.
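A minimal sketch, in Python, of that two-pass box; suppress, validate, and the output functions are parameters, and the interface and names are illustrative assumptions rather than the framework's actual code.

```python
# Illustrative sketch of the canonical functionality for a task T, parameterized by
# well_formed, public_output, secret_output, suppress, and validate.

class CanonicalFunctionality:
    def __init__(self, well_formed, public_output, secret_output, suppress, validate):
        self.history = []
        self.well_formed = well_formed        # predicate over the history
        self.public_output = public_output    # what everyone (the simulator) may learn
        self.secret_output = secret_output    # what only the intended receiver learns
        self.suppress = suppress              # redacts the action before leaking it
        self.validate = validate              # checks the history against bad languages

    def on_action(self, action):
        """First pass: record the action and leak its suppressed version."""
        if not self.well_formed(self.history + [action]):
            return None                       # e.g. a sign with no prior key generation
        self.history.append(action)
        return ("leak", self.suppress(action), self.public_output(action))

    def on_influence(self, influence):
        """Second pass: the simulator says 'output this value now'; validate and deliver."""
        if not self.validate(self.history + [influence]):
            return ("halt",)                  # inconsistent history: stop
        self.history.append(influence)
        return ("deliver", self.secret_output(influence))
```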
Okay. So that was the canonical functionality, supposed to be the same for everyone.
It might not be the most aesthetically pleasing function you've ever seen, but I just
showed you, I gave you a preview. So this is what it looks like. We're not going to
go over all the details. Okay, we'll ignore these two bullets for now. But it's
basically a four-bullet thing.
It [indiscernible] reacts when there is an action, okay, and you see that: one,
there's an action, there is some leak action -- two -- to the simulator that goes up,
with some public output to the simulator, okay? And when there's an
[indiscernible] action from the simulator, it's going to do a bunch of checks, for
example this well-formedness thing, meaning that what the simulator instructs the
[indiscernible] to output makes sense, and it's going to output the secret output to
the party. Okay. So that's one box.
You do it once, and let me show you briefly examples of how these things would look
now for specific tasks. So now you have the big box. This is what the functionality
for signatures would look like. Okay, so you're basically instantiating all the
things, like the actions, well-formedness, public and secret outputs, and these two
functions. We'll get to signatures in [indiscernible] later. But from the
somewhat ugly box, this is how it would look for a specific task, and this would
be the oblivious transfer functionality. Okay.
>>: So you're saying that to [inaudible] the canonical functionality, you just need to
define these.
>> Juan Garay: So that's the canonical functionality. We mentioned these two
functions, right, suppress and validate. And you can think of those as two knobs that
you can play with -- like suppressing more or less, you know, being more or less
relaxed in the validation. So that just generates basically a class
of canonical functionalities for the same task, okay.
The two extreme cases would be the dummy canonical functionality, which
basically doesn't do anything. You know, if there's something coming in, some
action coming in, it just passes it up to the simulator, and it doesn't
validate -- whatever the simulator tells the functionality to do, it just does it.
Dummy. It doesn't do anything. It's a pass-through functionality.
The top, on the other hand, is the other extreme. It doesn't leak anything, and
it doesn't validate anything. It doesn't output anything. These are for
completeness purposes, right? So now we can define what's in the middle and a way
to combine two functionalities with different values for suppress and validate.
Here I have, for the same task, right, suppress one, validate one, and suppress
two, validate two. We define an operation called join between these two objects,
defined as follows.
Suppress is the composition of the suppresses. Okay. If one doesn't leak something,
and the other functionality doesn't leak something else, neither of those things is
going to be leaked. And validate is, you know, we make sure that, separately, each
one accepts, you know, the history of what went on so far.
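A small sketch of what that join might look like in code, reusing the canonical-functionality parameters above; this is an illustrative assumption, not the paper's definition verbatim.

```python
# Illustrative sketch of the join of two canonical functionalities for the same task:
# the joined suppress composes the two suppress functions, and the joined validate
# requires both validates to accept the history.

def join(supp1, val1, supp2, val2):
    def suppress(action):
        return supp2(supp1(action))             # anything either one redacts stays redacted
    def validate(history):
        return val1(history) and val2(history)  # both consistency checks must pass
    return suppress, validate

# Example: one component hides the message, the other hides the party identity.
hide_msg = lambda a: {**a, "payload": None}
hide_pid = lambda a: {**a, "pids": ()}
suppress, validate = join(hide_msg, lambda h: True, hide_pid, lambda h: True)
print(suppress({"action": "commit", "pids": ("P1",), "payload": "m"}))
# {'action': 'commit', 'pids': (), 'payload': None}
```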
Okay. So getting more abstract, this class with this operation is
basically a commutative monoid with the dummy functionality as the identity element.
And never mind what that means; that is going to give us a lattice in a more abstract
algebraic sense. We'll see a picture soon. And this commutative monoid has
a preorder relation, denoted like this, saying that F1
is less than or equal to F2 if and only if there exists some F' such that
you get F2 by joining F1 with that other functionality F'.
This is basically saying that this one is stricter, from a security point of view, than that one.
Okay? And we have some theorems saying you can always realize the dummy
functionality -- every [indiscernible] protocol can realize it -- and that no
protocol -- you know, the top functionality is not realizable, okay? And this is
basically what this slide looks like, in this case for the commitment task.
Here we have the top for the commitment task and the dummy. Here we have the
functionalities corresponding to the properties you might have heard of for a
commitment: correctness, binding, and hiding. And the way you would get to the
commitment functionality is joining, you know, these pairs the way we described.
Pictorially, also, what we can put on this slide is a realizability horizon, meaning that
there's no protocol that could realize anything above this horizon
in the plain model. And this is in line with the impossibility result -- for those of you
who have heard of this -- but note that any pairwise combination of the commitment
properties can be realized in the plain model.
So that was the lattice that I mentioned. So we have this algebraic structure of
canonical functionalities.
So how do you get these functionalities? So basically, the way we have this box,
what we have to do now is specify the suppress and validate functions. And as I
mentioned, from a task you are able to identify its basic properties, which are
divided into consistency, or integrity, properties and privacy properties. So the
consistency properties are captured by the validate predicate that we mentioned.
It's basically what we already talked about: all the symbols, you know, that
occur have to be in a certain order to make sense. Again, there has to be a key
generation action before a sign action, a sign action before a verify action, from
the same party. Okay? So that's how the consistency properties are checked. And
basically, you check that by defining a bad language. We have all the symbols;
what would be a bad language? It's a set of bad sequences of symbols.
And here are some examples. Take, for example, the commitment case. This would
be a bad language for the binding property. Here we have a string that contains a commit --
so there might be some other symbols interleaved here, right, there might be other
things going on -- but we have a commit symbol for some message M, which produces a commit
return. Then we have an open of that commitment, and an open return for some M prime such
that, you know, M prime is not M. That would violate the binding property, okay?
So basically, mechanically, you just go over this history, this sequence of symbols that
you have, and look for this pattern. This is the bad language for this property. Same
with zero-knowledge and oblivious transfer; we don't have time to do it. That was the
validate function that we were trying to specify. We saw how to do it for the binding
property of commitment.
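Here is a small sketch of what checking that binding pattern over a history could look like; the symbol encoding is the illustrative one used above, simplified to tuples.

```python
# Illustrative check for the binding bad language: a commit to M that is later
# opened, with an open-return revealing some M' != M. Other symbols may be interleaved.

def violates_binding(history):
    committed = {}   # commitment id -> committed message
    for kind, cid, msg in history:
        if kind == "commit":
            committed[cid] = msg
        elif kind == "open-return" and cid in committed:
            if msg != committed[cid]:
                return True        # opened to a different message: binding is broken
    return False

good = [("commit", 1, "m"), ("other", 0, None), ("open-return", 1, "m")]
bad  = [("commit", 1, "m"), ("open-return", 1, "m-prime")]
print(violates_binding(good), violates_binding(bad))   # False True
```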
The privacy properties are captured by the suppress function, right. That's, you
know, what keeps things confidential; it limits what the simulator gets to know about
what's going on. And as we saw, if there is some action, there's going to be a leak
symbol going to the simulator, okay, and the value that is leaked is specified by this
suppress function.
Here we have an example where, for zero-knowledge, we have a prove action for this
task. So suppress is now going to, perhaps, leak the size of the
witness, but that's it.
For the case of signatures, there's no privacy or confidentiality requirement, so it just
doesn't suppress anything. And for oblivious transfer, if you have a symbol
like this coming from the sender, it's not
going to let the simulator know what values the sender wants to obliviously transfer, what
messages he has, but maybe the length of those messages, as a string.
Okay. Here we define some rules, like always the same portion of the action is
suppressed, the length is always revealed, and the simulator is kept abreast, you
know. Whenever there's an action, there is some leak going on, so the
simulator knows what's going on for each action.
>>: [inaudible].
>> Juan Garay: Of the input.
>>: [inaudible].
>> Juan Garay: You look at it as a big string, because we want to be able to compose
them with two different suppresses. So you just basically say the left half, you
know, that kind of thing. Nothing.
Okay. So how do we derive now a canonical ideal functionality for a cryptographic
task? We define one canonical functionality for each property. We then combine
them automatically with this join operation that we saw. Okay? And this is the
case of commitment that we saw.
All right. So it turns out that, as I mentioned, there are lots of security definitions
that are called game based. And in that case, we have a sort of automatic translation
to go from the game to a functionality. Okay. We don't have time to go over it;
look at the paper for that.
So basically, if this is sort of a game, this is the unforgeability game, where there is
a public key generation, the attacker, the adversary, asks for, you know, signed
messages, gets messages signed, and at some point it's supposed to generate a message
that was not signed -- [indiscernible] unforgeability attack.
So we go by assigning things to these ideal-world entities in some way. Okay. And
basically we come up with this: for this particular property, unforgeability, we take
this unforgeability, CMA game, and we derive the bad language for the unforgeability
property. Okay? This is going to be a sequence of elements, you know, key
generation, sign, sign, you know, a bunch of times, and verifies. And then there should
not be a message, a signed message, that verifies but that was not signed before. Right.
So this would be the set of sequences of symbols that is called the bad language for this
particular property of the task.
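A small sketch of that unforgeability bad language as a check over a history; the encoding is illustrative and simplified, and a real history would also carry keys and session information.

```python
# Illustrative check for the unforgeability bad language: a verify that says "yes"
# for a message that was never signed after the key generation.

def violates_unforgeability(history):
    keygen_done = False
    signed = set()
    for kind, msg, result in history:
        if kind == "keygen":
            keygen_done = True
        elif kind == "sign" and keygen_done:
            signed.add(msg)
        elif kind == "verify-return" and result and msg not in signed:
            return True            # accepted a message that was never signed: forgery
    return False

history = [("keygen", None, None), ("sign", "m1", None), ("verify-return", "m2", True)]
print(violates_unforgeability(history))   # True: "m2" was never signed
```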
And what we can show is that if you apply this methodology, and you have a protocol
that UC-realizes the canonical functionality derived from this consistency
game as we saw, then the corresponding cryptographic scheme satisfies the property
defined by the game G. So this transformation is sound, and, in fact, it goes both ways
for consistency properties. So the consequence of this is that we are lifting a game-based
security definition, traditionally designed for the standalone model, into this UC
functionality world. Yes?
>>: Where does [inaudible] come in this game based definition that you talked about
[inaudible].
>> Juan Garay: So we defined something that is a functionality in the UC framework.
You then have to come up with a protocol that realizes that functionality. But the
functionality itself, you can invoke --
>>: But if it's using basically [inaudible].
>> Juan Garay: No, so it's a both-ways transformation. So you can go from the game-based
definition to obtain this functionality for a consistency property. And from
the functionality, you can reverse the transformation, if you want, to get a game-based
definition in the standalone setting. There's nothing else going on.
>>: So the thing is, sorry, but I guess I was also wondering the same as Melissa.
So many games -- like you can define zero-knowledge, which is not composable
[inaudible] -- if you're saying that any game automatically gets this UC
functionality --
>> Juan Garay: Applying this transformation.
>>: Sorry?
>> Juan Garay: So it's not a -- you don't add anything new. You apply this
transformation, which makes sure how things work, to make it into a functionality.
>>: It defines the game, [indiscernible] it defines the functionality, and it's UC-realizing
the functionality, and UC-realizing the functionality is always composable.
>> Juan Garay: So this is saying that the transformation is sound. Our
transformation is sound.
>>: So it doesn't imply it's a game --
>> Juan Garay: Right. If you have a well-established game in a standalone setting,
you apply this transformation, you get something.
>>: [indiscernible].
>> Juan Garay: What's that?
>>: The transformed game and the new one, that is [indiscernible]. It does not apply
the knowledge [indiscernible].
>> Juan Garay: [indiscernible]. We define it, you know, the environment,
the -- what the simulator does, and we get the code for the functionality this way.
So if you have a protocol that realizes that, then the scheme satisfies that -- the original
scheme satisfies that game. Privacy games are similar, okay.
So this is just an overview of how -- you know, a sort of systematic translation from
the game-based-definition world to the ideal-functionality, UC-security world.
So this way, we get a single template functionality for all, or many, tasks. And these
are the basic things that we talked about. We talked about this well-formedness
predicate. We talked about the suppress function, the validate function. One thing we did
not talk about is these corruption verbs that we saw before. There were these two
bullets in the box that we didn't talk about, so let me mention those briefly.
So another symbol that may come down from the simulator is this corrupt-party symbol that is
common in the UC framework: when a party gets corrupted in the ideal world,
the simulator should know, should learn, whatever the real-life adversary learned --
I mean, its internal state. So this is the response to this corrupt symbol. And
another symbol is going to be a patch, which allows the simulator to change history,
as long as no honest party was involved in the execution.
Okay. So it has the flexibility of making things change once a party is
corrupted, okay. There are some other details for cases where you're not supposed
to do that, when, for some properties, an honest party was involved. That's how basically
we get to that.
So here we have the corrupt bullet and the patch bullet, coming from the simulator,
all right. And that was the template. And let me just finish with this example for
signatures, where we have all the constituent functionalities. This is the
unforgeability that we already saw, consistency, and correctness. So the idea is,
once you have that, define [indiscernible] for each property, and then we'll combine
them.
So these are the bad languages for the properties. We already saw the
unforgeability bad language: a bunch of sign, verify symbols, then a verify for a message
that was not signed.
Let's see. Consistency says, you know, the same signed message cannot be both verified and
not verified, okay -- two verifies, same message and signature, and one says yes and the
other no. And completeness says the ones you signed should always verify. In this case,
it didn't verify; that's why it's in the bad language.
Okay. So those are the three bad languages associated with signatures. This is
the functionality for signatures. We have the three actions. Well-formedness is that
any sign symbol must be preceded by a key generation symbol. Public and secret
outputs: that's, you know, the message is always -- depending on the action,
the message is always leaked to the adversary, so that's a public output.
Suppress doesn't suppress anything -- we said this before, okay. You have an action
P and an input X, and it doesn't suppress. And validate is basically the combination
of all these: this string, the history that we've obtained, is validated. So the
history should not belong, you know, should not be in any of these bad languages
corresponding to the three properties. Okay.
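Putting those pieces together, here is an illustrative instantiation of the canonical-functionality template for signatures; it reuses violates_unforgeability from the earlier sketch, and the consistency and completeness checks are simplified assumptions written in the same style.

```python
# Illustrative instantiation of the signature functionality from the template:
# three actions, a well-formedness predicate, identity suppress, and a validate
# that rejects histories falling in any of the three bad languages.

ACTIONS = ("keygen", "sign", "verify")

def well_formed(history):
    # any sign symbol must be preceded by a key generation symbol
    seen_keygen = False
    for kind, *_ in history:
        if kind == "keygen":
            seen_keygen = True
        elif kind == "sign" and not seen_keygen:
            return False
    return True

def suppress(action):
    return action                 # signatures have no confidentiality: leak everything

def violates_consistency(history):
    # two verifies of the same message with different answers (signature omitted for brevity)
    answers = {}
    for kind, msg, result in history:
        if kind == "verify-return":
            if msg in answers and answers[msg] != result:
                return True
            answers[msg] = result
    return False

def violates_completeness(history):
    # a signed message that later fails to verify
    signed = {msg for kind, msg, _ in history if kind == "sign"}
    return any(kind == "verify-return" and not result and msg in signed
               for kind, msg, result in history)

def validate(history):
    bad_languages = (violates_unforgeability,   # from the earlier sketch
                     violates_consistency,
                     violates_completeness)
    return not any(bad(history) for bad in bad_languages)
```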
So that was signatures.
So I'm out of time, but one thing that we show with this is -- now that we've got this box,
we tried to see what happens with existing functionalities for those tasks.
Okay. We tried to prove that they're equivalent in some way. So one of the things
we found out is that, with respect to this, it is equivalent to one of the latest signature
functionalities by Canetti, except that that functionality is sort of obtained
accidentally, from a translation that was done from some scheme.
And basically, the translation that Canetti performed from the GMR signature for
the consistency property basically assumes an honest key generation, okay, while
this consistency that we saw strictly doesn't care about an honest key
generation; all it verifies is that, you know, either signatures verify or they do not
verify, but you can't have it both ways.
Okay. So this one always checks. This one only checks when there's an honest key
generation that was registered by the functionality. And therefore, this one is
stricter than that one. Again, therefore, the functionality
[indiscernible] would have obtained is just lower than our
signature functionality.
>>: Can you say what the difference is again?
>> Juan Garay: The [indiscernible] basically assumed an honest key
generation. By honest, I mean, you know, a corrupt signer has to register its key
generation with the functionality.
>>: [indiscernible].
>> Juan Garay: Okay, so that's one example of the things that go on. We have another
example -- this is how the oblivious transfer functionality would look. The
action is transfer. It comes from both parties, sender and receiver. The
well-formedness predicate says any transfer return should be preceded by a
transfer symbol.
For the public and secret outputs, there's no public output, and the secret output is
basically the message that the receiver chose, okay, satisfying this condition.
And the suppress function is, you know, nothing goes on to -- the simulator doesn't
know what values these guys are talking about. And the validate predicate says, you
know, make sure that the string is not in this bad language for the
correctness property, which goes like this: there should not be a
subsequence, a string with all these symbols, such that there was a transfer of M zero
and M one, there was a selection of, you know, one of the messages, and there was
a transfer output of M prime such that M prime is not one of the messages that
appeared in this transfer symbol.
Okay. Basically, only the messages that were in this input, right,
should be transferred. That's a correctness property. That's also sender
security; receiver security is captured by the suppress function.
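A small illustrative sketch of those two OT pieces -- a suppress that reveals only message lengths, and the correctness bad-language check -- again with an assumed, simplified symbol encoding.

```python
# Illustrative OT pieces: suppress leaks only the lengths of the sender's messages,
# and the correctness check flags an output that is neither of the offered messages.

def ot_suppress(action):
    kind, payload = action
    if kind == "transfer":                     # payload is (m0, m1) from the sender
        return (kind, tuple(len(m) for m in payload))
    return action

def violates_ot_correctness(history):
    offered = None
    for kind, payload in history:
        if kind == "transfer":
            offered = payload                  # (m0, m1)
        elif kind == "transfer-return" and offered is not None:
            if payload not in offered:
                return True                    # receiver got something that was never offered
    return False

print(ot_suppress(("transfer", ("yes", "no"))))                 # ('transfer', (3, 2))
print(violates_ot_correctness([("transfer", ("yes", "no")),
                               ("transfer-return", "maybe")]))  # True
```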
Okay. So to wrap up: we can argue that these ideal functionalities were, you know, not that
ideal and not easy to look at. Notice that the definitional approach in this line of
work has usually been top down. You define a very ideal thing, and then you try to
defile it, make it dirty, so that it can be realized, and all these talks
[indiscernible] and so forth.
Our approach was sort of the opposite. It's bottom up, right. So we start with
some existing, you know, natural properties for the tasks. We define [indiscernible]
for those tasks and we combine them; we go up.
We have this concise-syntax, two-pass functionality box, and that's common
to many tasks. And we didn't see it, but I mentioned that in case you have a
game-based security definition, you have sort of an automatic translation from the
definition to a functionality for that property.
We illustrated this approach with signatures and the oblivious transfer we saw, commitment, and
zero-knowledge proofs. We showed that the commitment and zero-knowledge
functionalities were equivalent to existing definitions. For signatures and OT,
they were not. We saw the signature example; regarding OT, there are some issues
related to this delayed output that the framework has, which doesn't work
when you have two parties about to receive outputs, okay. So that's why this
influence way of doing things that we saw fixes that.
And what we did not do: in this current version, functionalities are deterministic.
We didn't look at things like fairness, forward secrecy, or something called object
insulation, which in some cases -- which is about -- the adversary in some cases
should not know what's going on, okay. So the [indiscernible] issue becomes an object
which is transferred without the adversary having much to say about it, and there are
examples for signatures doing that. So this could be incorporated in our
framework, but we haven't looked at it.
Okay. And that's it. So this was recently presented at CSF, and there's an outdated
version on the ePrint archive. Thank you very much.
>>: So your functionalities are much simpler, but you still have to find the bad
language, right? So in the cases you showed, the bad languages [inaudible] are
straightforward, but for certain functionalities can they get pretty subtle
[inaudible] or, you know, [inaudible] defined guidelines can actually [inaudible].
>> Juan Garay: Yes. Actually, the -- what makes it hard: so once we have this bad
language, you have to also show that it's a decidable problem to validate. You're
going to have some string and you're looking for some pattern, right? So what makes
it more difficult is that this can be embedded in, you know -- there are a lot
of things going on, right, multiple sessions and so forth.
So you really have to make sure that you're able to extract it. Let me give you
the worst bad language that we found. This is [indiscernible] general. Although
we don't have the proof that all the tasks are captured by this, this is basically
saying -- so this is the subsequence symbol, right -- so here you're trying to identify
this bad sequence.
Okay. So here the sequence belongs to the unforgeability bad language. But,
intuitively, it should be there. It gives you unforgeability. And you cannot just
check all subsequences, because if you drop, say, one sign, right, how does
it work? So you have sign, verify, sign, verify, and you have a verify. You can't
go over all the subsequences there, because for sure you're going to find that there's
a bad string. If you try, if you go through all the subsequences,
you can drop one sign, and that was a valid, you know, a good-language string.
>>: So then the approach simplifies the specification of the functionality, sure, but
if a lot of the complexity is pushed towards specifying the bad language, is it really
making things easier or -- so for certain examples that you showed, maybe for games,
[inaudible].
>> Juan Garay: Right, so we hope. So far, all the things that we looked at are
captured by -- and again, it's not something to do with the task, right. It's
because of this UC thing, you know; things might be interleaved with a bunch of other things.
But at least we narrow it down, you know. Indeed, it's the only case that
gives -- if you forget unforgeability, then everything else, they are very simple
languages. By the way, you have to prove that each language is decidable. Yeah,
so there's some effort there. But for most tasks, it's this simple.
>>: So how much work was it to compare functionalities [inaudible]? Like was it
an easy thing to do, or is it automated, or [inaudible]?
>> Juan Garay: It's not difficult. So basically, you have a dummy
protocol. It accesses the functionality, and you have to show that that realizes
the other functionality, and the other way around.
>>: It's essentially like [inaudible] or realizing [inaudible].
>> Juan Garay: That what?
>>: That some protocol realizes that --
>> Juan Garay: No, you want to show the two functionalities are equivalent, so you use
a dummy, very stupid protocol that only calls one functionality and, with that, it
realizes the other functionality.
>>: Is another thing [inaudible].
>> Juan Garay: Yeah. It involves also using, you know, the [indiscernible]. But
conceptually, it's not, it's not difficult.
>>: [inaudible] it could be automated, check on the two functionalities [inaudible].
>> Juan Garay: Yes.
>>: Would there be a way the functionality -- [indiscernible].
>> Juan Garay: Yeah, a task. So that would mean that it's a task such that, in a way,
you can't identify what properties, what the task is doing. So the intuition is
maybe, but intuition is, you know -- one, you basically have an underlying knowledge
of what commitment does and all these things, right.
>>: For commitment and signatures we already understand the basic elements, so it's
easy to say something about them. But for tasks that are really complex, [inaudible]
you end up defining the whole system at the same time, right.
>> Juan Garay: So yeah, there could be larger tasks. There could be multiparty
tasks that do many things, and it's still to be, you know, time-tested.
>>: The point of your work, is this sort of a culmination of [inaudible] in some
way [inaudible] which functionalities might be not realizable? Does it actually
help you sort of figure that out in any way?
>> Juan Garay: Yeah, it's helped us. What I -- yeah, it was not our objective. But
for example, noticing, as I showed in the commitment lattice, right, there are
things, you know -- if you drop one property, you can basically figure out why the
impossibility result, you know, the kind of [indiscernible] of commitments, why
it breaks down, by just, you know, realizing that if you combine two of
them, you know, you could -- you should come up with a protocol that realizes that.
So you have that sort of side effect.
>>: So it's not -- so from that line, I mean, it's one thing to sort of actually
know that that line exists, and then it's another thing to go get that
[indiscernible].
>> Juan Garay: Right. So potentially, you could start from scratch, not follow the
properties, and say let's see if we can -- there's no, you know, formal way. But
let's see if a protocol can realize this combination. Then you can draw your
own line. Here, we knew, right. I was trying to figure out where this horizon goes. And
then, you know, there were three protocols to realize the pairwise combinations.
Thank you.