>> Melissa Chase: So today we're very happy to have Claudio Orlandi visiting us
from Aarhus University in Denmark. He's going to talk about some recent
results on multi-party computation.
>> Claudio Orlandi: Thank you for the introduction. As Melissa said, I'm going to
tell you something about multi-party computation for dishonest majority, and
we're going to show how to get from passive to active security at low cost. This is
joint work with Ivan Damgård and Jesper Nielsen, who are my advisors.
So if you play poker online, nowadays what you do is that you're sitting at home
with a bunch of other people sitting at their homes, and you're connecting to a
central server that shuffles the deck and gives you a hand, and you can play
poker online.
What happens is that you're putting your trust into the central server. If for some
reason the server is corrupted and is colluding with some of the players, sure,
the server might pick your hand at random, but it might pick the hand of a friend
in a corrupted way, give him a good hand, and then he's going to steal your
money.
So secure multi-party computation is the area of cryptography that tries to solve
this kind of problem: instead of putting trust into a central server, into a
centralized entity, what we do is basically emulate this trusted computer by
putting a small piece of software at each of the parties' computers.
And those pieces of software are going to compute something that is hopefully
going to do the same as this trusted central entity in such a way that everyone
will get a fair hand and then the game is going to be fun.
So, first of all, I'm going to tell you what secure multi-party computation is about
and which model of security we are going to consider. Then I'm going to tell you
something about related work in the last 20 years. Then I'm going to describe
our solution, which is basically computing on shared commitments, and I'm
going to tell you how to do this in a secure way. And then I'm going to give you
a sketch of the security analysis. Yes, please?
>>: In the special case of poker, you don't need the same answer, just the same
probability distribution as in an honest run, in a sense.
>> Claudio Orlandi: Yes. It was just an example. Other examples of
multi-party computation that are actually used in real life are electronic auctions.
That's something we've been looking at, especially in Denmark: we had these
electronic auctions for sugar beet prices, and in that case the output also needed
to be correct, because the output was actually the price at which you would
exchange the sugar beets. Actually, that was the first application of multi-party
computation in real life, and it had to do with agriculture.
So we are going back to the basics. All right. So what do I mean when I say that
I want to have secure multi-party computation? What do I mean by secure? So,
first of all, correctness. We would like the output of the computation to be
correct.
So, for instance, the poker cards are random. And then we also want privacy.
So we don't want that information about your input to be leaked. So in this case I
don't want you to know what my hand is. Otherwise poker is not fun.
But there are also other security requirements. For instance, if you're playing
poker, you can imagine someone is playing on two tables at the same time, but
you don't want them to get cards from one table and play them at the other table,
right.
So instead of listing a series of security requirements, what the cryptographic
community has been doing in the last 20 years is to use this kind of ideal-world
security definition. First we define an ideal world where we have a trusted
entity, one that we actually trust, that is programmed to run some computation
on the parties' inputs. The parties just deliver their inputs to this central entity,
which computes the output of the function and delivers it to each party.
And in this setting, if one of the parties is corrupted and tries to do something it is
not supposed to do, the central entity is trusted, so it's going to say: that guy is
trying to cheat, and we'll kick him out of the computation, or something like that.
So in this scenario, security is achieved by definition.
Then we have the real world where, instead of having the central entity, we're
running this multi-party protocol where the parties are exchanging some
encrypted, garbled messages. And we want this protocol to be as secure as
the ideal world.
So we say the protocol is secure if, intuitively, the real world behaves like the
ideal world. That has been formalized by saying that the adversary cannot
distinguish which world he is in. If whatever attack the adversary can do in the
real world he can also do in the ideal world, then the real world is secure,
because there is nothing he can do in the ideal world.
And we want this property to be satisfied no matter how the adversary behaves.
So we want to be sure that the security holds even if the adversary deviates
from the protocol as much as he likes.
So to define this, we have to introduce a new party that is called the simulator.
The simulator is kind of a bridge between the real world and the ideal world.
The simulator is going to interact with the ideal world; this is a mental
experiment to prove the security of the protocol. We introduce this new party
called the simulator, that we, the protocol designers, control, that interacts with
the ideal world and sees the input/output of the ideal world, and it will try to run
the protocol with the adversary, and it has to make the adversary believe that
he's actually playing the real game.
So the way to define security is that we take the adversary, and we let him play
the real game, where he's interacting with the other parties; or we let him play
the ideal game, where there's a simulator and he's interacting with the ideal
world through the simulator. We flip a bit and put the adversary either in one
situation or the other, let him run as much as he wants, and ask him which game
he is playing. If the adversary cannot distinguish between these two worlds,
then we say the protocol is secure.
Actually, in the slide there is just one adversary, but what we really want here is
to have security when all but one parties are corrupted. This is particularly
interesting in the case, for instance, of two-party computation: in the two-party
case, as soon as one party is corrupted, all but one parties are corrupted. And
the same holds in the case of multi-party computation.
It turns out we have to pay some price in order to get multi-party computation
secure against a dishonest majority. Given that we are focusing on the case of
general functionality, so we want to provide a protocol that computes any
functionality, we cannot guarantee either termination or fairness. By
termination, I mean that the adversary can run away from the protocol and the
protocol is not going to finish.
And by fairness: the adversary might choose to learn the output first, and after
he sees his output, he can choose whether you're going to see your output or
not. So the adversary learns his output, and then he can run away.
Fairness is actually a nice thing to achieve for multi-party computation;
unfortunately, it's impossible to achieve for general functionalities. There's
been some recent work by [indiscernible] and Jonathan Katz and other people
about a special class of functionalities that you can actually compute with
fairness.
But that's outside the scope of this talk. So people have been working on
multi-party computation for more than 20 years now, starting with Yao in '82 and
the GMW protocol in '87 and [indiscernible]. Those are feasibility results: here
the problems of two-party and multi-party computation have been defined and
solved for the case of computational security in the stand-alone setting, and
those solutions are secure against a dishonest majority.
Stand-alone means that the protocol is guaranteed to be secure just if you run
one instance of the protocol at a time. So if, for instance, you play two tables of
poker at the same time, there might be some problem.
Then there's been some work on information-theoretic security later on. With
computational security, the security holds just if the adversary is computationally
limited; with information-theoretic security, it holds no matter how powerful the
adversary is. But there you need to assume there's an honest majority, so
clearly those solutions are not interesting for the case of two-party computation,
for instance. Then, more recently, the definition of UC security was introduced,
and this is the definition of security we're going to go for, where UC means
universally composable.
And in UC security we have a very strong security guarantee that says that
security holds no matter how many protocols are running in parallel over the
network. This is especially useful over the Internet: while I'm running a protocol,
I don't know whether you're running other protocols, or maybe I'm running other
protocols myself, and we still want security.
If a protocol is secure in the UC sense, then its security will keep being satisfied
no matter how many protocols are running at the same time.
And this was the first feasibility result for UC security against a dishonest
majority, but the use of generic zero-knowledge proofs makes the protocols
very inefficient: it doesn't give you a way of going down and implementing it.
On the line of efficient multi-party computation protocols, secure in the UC
framework against a dishonest majority and with some kind of efficiency, there's
been some work by Damgård and Nielsen in 2002/2003, where you can compute
any arithmetic circuit, but there the solution is based on threshold
homomorphic encryption, meaning the Paillier encryption scheme and schemes
of this kind.
And here the setup assumption is that at the beginning of time there is someone
that generates a public key and gives shares of the secret key to all the
participants of the protocol.
Unfortunately, there isn't really any good, efficient way so far to generate these
shared setups without going to a trusted party. So generating a Paillier public
key together with shares of the secret key is still something that requires some
heavy machinery.
And then there's been some work by [indiscernible] and by myself and Nielsen on
efficient multi-party computation for boolean circuits. The first solution here was
Yao's in 1982, and those solutions are all about how to get Yao's garbled circuits
to work in the UC framework against active adversaries in an efficient way.
While in this paper that I'm going to present now, and in this work by the same
authors that appeared at [indiscernible] this year -- last year -- a solution for
arithmetic circuits has been proposed.
The way that our work compares to this one is that asymptotically we have the
same complexity, but all our constants are smaller, while they get their result
under more general assumptions.
And then there's been some work in actually taking these protocols and
implementing them. Probably the first one was FairPlay, by the team in Haifa
with Benny Pinkas; I believe you can download it from the website. They have
solutions for boolean circuits, for two parties and for N parties with passive
security.
Then [indiscernible] implemented this paper that appeared in 2007, where you
get two-party computation against active adversaries. And in the world of
arithmetic circuits, the first framework has been VIFF: the first solution was for
N-party computation against passive adversaries, and then against active
adversaries, but still requiring an honest majority. Now we're implementing what
I'm going to present to you, and that fills the gap between honest majority and
dishonest majority.
And then there's another group, in Estonia; they have this framework,
Sharemind. They have similar results to what we are doing.
So the comparison between arithmetic and boolean multi-party computation is
not really so clear, because if you compute arithmetic circuits, additions are easy
and multiplications are easy, but other operations like comparisons are harder,
less efficient to perform.
Boolean circuits, on the other hand, are really good at comparing values or
doing equality -- any boolean computation -- but they're less efficient when it
comes to additions or multiplications.
So there is a trade-off here, and which solution you use should be chosen
depending on the application you're actually interested in.
We are actually working together with the FairPlay people to try to merge the
two solutions and maybe jump between arithmetic and boolean, and do
everything in the domain where it's more efficient.
Okay. So let me start describing our solution. First of all, let's assume that
there is a public key for the Pedersen commitment scheme that is available to
the parties. This is just a random string: we assume that this public key comes
from the sky. No one knows the discrete log of H base G. But since this is really
just a random string, it can be coin-flipped or generated somehow. Then I'm
going to use the notation X in a box to write a Pedersen commitment to X using
randomness R.
Then each party has a share X_i and a share R_i of the value that is committed
and of the randomness used in the commitment.
Now, if you have two shared commitments like this, it's really easy to compute
the addition of these values. If you have a commitment of X and a commitment
of Y and you want to compute a commitment of X plus Y, it turns out that each
party just needs to add their own shares locally. So if you have to do a lot of
additions, the additions basically come for free.
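To make the local-addition point concrete, here is a toy Python sketch. The group parameters are illustrative assumptions only (a real system would use an elliptic-curve group where nobody knows log_g(h); here the demo picks h itself):

```python
import random

# Toy parameters (illustrative only; a real deployment would use an
# elliptic-curve group where log_g(h) is unknown to everyone).
p = 2**61 - 1                      # a Mersenne prime modulus
q = p - 1                          # exponents live mod p - 1
g = 3
h = pow(g, 123456789, p)           # the demo knows log_g(h); parties must not

def commit(x, r):
    """Pedersen commitment Com(x; r) = g^x * h^r mod p."""
    return (pow(g, x % q, p) * pow(h, r % q, p)) % p

# Two committed values, each additively shared between parties 1 and 2.
x, rx = 42, random.randrange(q)
y, ry = 99, random.randrange(q)
x1 = random.randrange(q); x2 = (x - x1) % q     # x = x1 + x2 mod q
rx1 = random.randrange(q); rx2 = (rx - rx1) % q
y1 = random.randrange(q); y2 = (y - y1) % q
ry1 = random.randrange(q); ry2 = (ry - ry1) % q

# Addition is local: each party adds its shares of values and randomness.
z1, rz1 = (x1 + y1) % q, (rx1 + ry1) % q        # party 1, no interaction
z2, rz2 = (x2 + y2) % q, (rx2 + ry2) % q        # party 2, no interaction

# The product of the commitments is a commitment to the sum ...
assert commit(x, rx) * commit(y, ry) % p == commit(x + y, rx + ry)
# ... and the locally added shares open it correctly.
assert commit((z1 + z2) % q, (rz1 + rz2) % q) == commit(x + y, rx + ry)
```

The two assertions are exactly the homomorphism the talk relies on: multiplying commitments adds the committed values, so share addition needs no communication.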
In order to do multiplication, it's a bit more tricky, because just multiplying your
local shares is not going to work. But what we can do is assume that we have a
trusted guy that gives you multiplicative triplets A, B, C, where C is the product
of A times B. So this guy gives the people just random triplets of this form.
Then you can actually compute a new multiplication, of U times V with result W,
just by doing two openings and a linear combination of the results.
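The triplet-based multiplication just described -- two openings plus a linear combination, often called Beaver's technique -- can be sketched over a toy field; the commitments are omitted for brevity and the field size is an illustrative assumption:

```python
import random

p = 2**31 - 1          # a small prime field for illustration

def share(x):
    """Additive 2-out-of-2 sharing of x mod p."""
    s1 = random.randrange(p)
    return s1, (x - s1) % p

# A preprocessed random triplet (a, b, c) with c = a*b, held as shares.
a = random.randrange(p); b = random.randrange(p); c = a * b % p
a1, a2 = share(a); b1, b2 = share(b); c1, c2 = share(c)

# The shared inputs we actually want to multiply.
u, v = 1234, 5678
u1, u2 = share(u); v1, v2 = share(v)

# Step 1: two public openings. These reveal nothing about u and v,
# because a and b are uniformly random and used only once.
e = (u - a) % p        # opened: e = u - a
d = (v - b) % p        # opened: d = v - b

# Step 2: a local linear combination gives shares of w = u*v, since
# u*v = (a+e)(b+d) = c + e*b + d*a + e*d.
w1 = (c1 + e * b1 + d * a1 + e * d) % p   # party 1 adds the public e*d term
w2 = (c2 + e * b2 + d * a2) % p           # party 2

assert (w1 + w2) % p == u * v % p
```

Note that if the triplet is known to the adversary, the public openings e and d immediately leak u and v, which is exactly the back-to-square-one problem the talk turns to next.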
Clearly, we're back to square one, because now if this guy is corrupted and
colludes with some of the parties in the protocol, he can communicate A, B, C to
them; and given that the values U minus A and V minus B are public, from these
values the corrupted parties can learn U, V and W. So basically all our problem
now is to replace this guy by a multi-party computation protocol that does the
same thing this guy is doing: give out triplets of multiplicative commitments in a
trusted way.
A note here: basically all of this can be preprocessed, and that's something that
we like, because with everything I told you so far, if you have this trusted guy,
then doing multi-party computation is really efficient. Doing an addition is just
doing some local additions, and doing a multiplication is just opening two
commitments.
And this is quite nice, because then everything we are going to do, and all the
properties that we're going to describe from now on, don't depend on the
function you want to compute or the inputs of the computation. So you can just
have this guy computing these triplets overnight and then share them between
the people, and then a week from now we will compute something. The actual
online computation is going to be efficient, while the off-line, preprocessing part
is going to be slightly more involved.
So let's see how to actually produce these random multiplications. We will start
by taking any passively secure protocol for multiplication in Z_P, where P is the
order of this group.
So, to be honest, when I say take any passive protocol, there aren't really that
many candidates out there, and most of them work using additively
homomorphic encryption. There are also solutions based on other assumptions,
like coding-theory assumptions, but given that we are actually going to
implement this protocol, this seemed to be the best solution for us.
So we're going to look at the Paillier cryptosystem, where there is an encryption
algorithm and a decryption algorithm. And the nice thing about Paillier is that it's
homomorphic, meaning if you take two ciphertexts, an encryption of X and an
encryption of Y, and multiply them together, what you get is an encryption of X
plus Y mod N. And here we have some mismatch, because the modulus that we
are working with for the commitments is P, a prime number, while the modulus
that the Paillier system works with is N, a product of two large primes.
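A minimal toy Paillier sketch showing this additive homomorphism (the primes below are tiny illustrative assumptions; real moduli are around 2,000 bits, and a real implementation would also need secure randomness):

```python
import random
from math import gcd

# Toy Paillier parameters (illustrative only).
P, Q = 1000003, 1000033
n = P * Q
n2 = n * n
lam = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)   # lambda = lcm(P-1, Q-1)
g = n + 1                                      # standard generator choice

def enc(m):
    """Enc(m; r) = g^m * r^n mod n^2 for random r coprime to n."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    """Standard decryption: L(c^lam mod n^2) * L(g^lam mod n^2)^-1 mod n."""
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)
    return L(pow(c, lam, n2)) * mu % n

x, y = 123456, 654321
cx, cy = enc(x), enc(y)
assert dec(cx * cy % n2) == (x + y) % n   # ciphertext product = plaintext sum
assert dec(pow(cx, 7, n2)) == 7 * x % n   # exponentiation = scalar multiply
```

The second assertion, multiplication by a public constant inside the ciphertext, is the operation the semi-honest multiplication protocol below uses.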
So we will need to do something about it. I'm going to write N_i for P_i's public
key. Everyone has his own public/secret key pair, and I'm going to assume that
the public-key moduli are much bigger than the prime used in the computation.
These assumptions actually make sense in the real world, because as the group
that we use for the Pedersen commitments, we can use an elliptic curve group of
points. And in elliptic curve cryptography, given that we don't know how to break
the discrete log in subexponential time, this basically means that if you pick P to
be a number of, I don't know, 160 bits or 200 bits, you're going to have the same
security as the Paillier cryptosystem with a modulus of size 2,000 bits. So there
is actually a big gap there, and we're going to use this gap in a functional way in
our protocol.
So let's look at this simple semi-honest multiplication protocol. We have one
party that has his own shares A1 and B1; the other party has A2, B2. They want
to compute C1 and C2 such that the sum is the product of A times B. So the first
party can send an encryption of his own share under his own public key to the
other party, who can multiply B2 inside the ciphertext and then mask the result
with some randomness. Then they can do the same thing the other way
around, and the two parties can compute their shares C1 and C2.
And then at the end they can just commit to their shares and exchange the
shares. I'm assuming here that the parties are semi-honest, meaning that they
follow the protocol as they're supposed to and then try to learn some information
from the transcript. So the parties are always going to do what they're supposed
to do here.
So you can just let them commit and say: this is the result, I'm committing to this
one. Why is this protocol secure against a semi-honest adversary? This party
just sees an encryption of A1, and Paillier is semantically secure, so it's not
learning anything about A1. What happens inside this multiplication? As I told
you, assume that N is much bigger than P, and therefore I'm going to choose
the mask D in some big range. Therefore, when I compute A times B plus D,
this is the same as an integer computation: the product is small compared to N,
so there is no modular reduction. This is really just a small number plus a big
number, and therefore the result is going to be statistically close to uniform. So
what this party sees when it decrypts this value doesn't depend on B, so it's not
learning any information.
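Abstracting the Paillier layer away (pretend the value t below travels encrypted), the masking arithmetic can be sketched as follows; the parameter sizes are illustrative assumptions, and the Paillier modulus N would need to exceed 2^k * p^2 for the integer computation to avoid wrap-around:

```python
import random

p = 2**127 - 1                 # prime "circuit" modulus (elliptic-curve sized)
k = 40                         # statistical security parameter

# P1 holds a1, P2 holds b2; they want additive shares of a1*b2 mod p.
a1 = random.randrange(p)
b2 = random.randrange(p)

# P2's step: inside the ciphertext it would compute t = a1*b2 + d over the
# integers, with d drawn from a range 2^k times larger than p^2, so that t
# statistically hides b2.  (The encryption layer is abstracted away here.)
d = random.randrange(2**k * p * p)
t = a1 * b2 + d                # what P1 would learn by decrypting
c2 = (-d) % p                  # P2's share

# P1's step: reduce the decrypted value mod p.
c1 = t % p                     # P1's share

assert (c1 + c2) % p == a1 * b2 % p
```

Since a1*b2 < p^2 while d ranges over 2^k * p^2 values, t is statistically close to the distribution of d alone, which is the "small number plus big number" argument from the talk.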
That's perfect. But, of course, parties here can cheat. At the end of the protocol
they can send commitments to other values, or they could be choosing these A1,
A2, B1, B2 not in the range they're supposed to be in, but in a bigger range. So
we need to force the parties to behave in a semi-honest way.
Of course, we could use zero-knowledge proofs. That's a good solution in
theory, but not really in practice -- especially these range proofs. If I'm
encrypting a value between zero and P and I have to prove that what I encrypted
is in that range, these range proofs are really, really expensive in practice.
So instead of going for the zero-knowledge approach, we're going to go for this
efficient cut-and-choose approach that is inspired by our previous work on
[indiscernible] computation, where we had some similar result to the one I'm
presenting, but for boolean circuits.
And the cost of this efficient cut-and-choose is going to be just a small constant
factor with respect to the semi-honest protocol.
So let's see how to do this. First of all, we let the parties generate a lot of these
multiplicative triplets using the semi-honest protocol -- like one million of these
triplets. And of course for some of these triplets, the adversary might choose to
cheat and learn some information. So when there is a skull here, it means the
adversary has some extra information or the triplet was generated in an incorrect
way.
So the first step is that we're going to check a bunch of these triplets in order to
detect cheating. We generate one million triplets, and then we flip a random
string that defines a subset of them. Everyone reveals the randomness they've
been using during the protocol for those triplets, and if everything matches -- if
every party was behaving correctly -- we accept and proceed. If we detect some
cheating, we abort the protocol and stop.
>>: Is this a constant, or can you have each party choose its own set of samples
to look at? [indiscernible]
>> Claudio Orlandi: It depends. In the two-party case, it's enough if you just
exchange randomness. If you're N parties, you have to do a small coin-flip, like
one party has to commit, then everyone else sends their randomness, then you
open the commitment. Otherwise the last party could just bias the set towards
his own -->>: [indiscernible] the challenges as a function of the bits?
>> Claudio Orlandi: Not really.
>>: [indiscernible].
>> Claudio Orlandi: Yeah. We didn't think of it -- it seems that it would cost one
more round of interaction -- I mean, it's not worth optimizing too much, because
what's really driving the performance, the bottleneck, is generating these
triplets, because that involves Paillier encryptions and decryptions. That's the
big thing. What you want to minimize is the number of triplets that you
generate.
So after we check these triplets, we're going to randomly partition the remaining
triplets. Some of them are going to be correct; some of them may still be
maliciously generated. We partition them into small sets of size 2T plus 1, and
now we're going to combine them in order to distill some good triplets from a
bunch of possibly maliciously generated triplets.
In order to do this, what we really do is go back to Shamir secret sharing. The
main idea is that we're going to do Shamir reconstruction over these triplets.
Shamir sharing gives you privacy as long as at most a certain threshold of the
shares is leaked, and that's what we're going to use. So we have a bunch of
these commitments; some of them are known to the adversary, but most of
them are not.
And when you combine them together using Shamir sharing, what you actually
get is a bunch of new commitments whose content is unknown to the
adversary.
In practice, this is just taking a linear combination of these commitments, and
now I'm going to show you why this works. In the Shamir setting, we start with
two random polynomials, F of X and G of X, of degree less than or equal to T,
such that F of zero and G of zero contain some secret U and some secret V.
And then if you consider the product polynomial H of X, equal to F of X times G
of X, this is a new polynomial of degree 2T, and this polynomial evaluated at
zero contains the value U times V.
So what we're going to do is generate a random bunch of coefficients for the
first polynomial and for the second polynomial, trying to compute the product of
the two original secrets.
So we define these two committed polynomials, committed F of X and
committed G of X, that everyone can actually evaluate in public, because the
coefficients are given as commitments and the evaluation points are in the clear.
So when you have the committed coefficients, you can evaluate F of X at any
point: you can get a commitment to the value of F of X at any point, and the
same for G of X.
So now we have points on the polynomial F and points on the polynomial G,
and we want to get points on the polynomial H, so we can reconstruct H and
reconstruct the product U times V. So we let all the parties evaluate the
polynomial F and the polynomial G on 2T plus 1 points, and for each of these
points we are going to do the multiplication using one of the original triplets that
we computed before.
Okay. So now, if this triplet was generated correctly, was generated in a
semi-honest way at the beginning, then this multiplication is secure and the
values F of I, G of I and H of I will stay secret. If this triplet was generated in a
malicious way -- maybe the adversary knows A and B -- then he learns these
two points. But our hope is that the adversary knows only a few of these points,
few enough that they do not allow him to reconstruct the secret H of 0.
So after we do this multiplication we have a bunch of points on the H
polynomial, H of 1 up to H of 2T plus 1, and those determine the polynomial H of
X of degree less than or equal to 2T. Given that we have 2T plus 1 points, we
can reconstruct the value of this polynomial at zero, which is U times V.
And this is just a linear combination of these points. Any questions?
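The distillation step just described can be sketched in Python. The field and degree are toy assumptions, and the commitments and the per-point secure multiplications (one preprocessed triplet each) are abstracted into plain arithmetic:

```python
import random

p = 2**31 - 1        # small prime field for illustration
T = 3                # degree of F and G; H = F*G has degree 2T

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[j] * x^j) mod p (coeffs[0] is the constant term)."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

u, v = 31337, 27182
F = [u] + [random.randrange(p) for _ in range(T)]   # F(0) = u
G = [v] + [random.randrange(p) for _ in range(T)]   # G(0) = v

# One secure multiplication (one triplet) per point i = 1..2T+1 yields
# the points H(i) = F(i) * G(i) of the degree-2T product polynomial.
xs = list(range(1, 2 * T + 2))
hs = [poly_eval(F, i) * poly_eval(G, i) % p for i in xs]

# Lagrange interpolation at 0 recovers H(0) = u*v as a public linear
# combination of the 2T+1 points.
h0 = 0
for i, (xi, hi) in enumerate(zip(xs, hs)):
    li = 1                                  # Lagrange coefficient l_i(0)
    for j, xj in enumerate(xs):
        if j != i:
            li = li * xj % p * pow((xj - xi) % p, -1, p) % p
    h0 = (h0 + hi * li) % p

assert h0 == u * v % p
```

An adversary who learns at most T of the points H(i) (via bad triplets) learns nothing about H(0), by the privacy of degree-T Shamir sharing of u and v.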
>>: I do. [indiscernible] BGW [indiscernible] 2T and a factor of 2T?
>> Claudio Orlandi: In BGW, this is multiplication on the shares that you have.
So in BGW we have a bunch of players, and we assume that a majority of those
are honest. Then you do the multiplication, and given that the adversary knows
at most T points on the polynomial, it doesn't learn the secret.
What we do here is somehow different. Instead of the parties in BGW, it is the
triplets that play that role: when you generate the triplets, each one of these
triplets is like a party in this BGW protocol that we run. In a way we're running a
two-layered protocol: we have a protocol for dishonest majority, and a
BGW-style protocol for honest majority on top of another protocol for dishonest
majority.
Okay. So why is this secure? Let's play this game. I put here in front of you C
buckets of size 2T plus 1. Okay. And then I ask you to prepare a box full of
balls. We're going to put green balls and red balls inside this box. I cannot see
what's inside the box; the box is sealed.
And you can choose how many red balls and how many green balls to put inside
the box. Green balls are triplets where you didn't cheat, and red balls are
triplets where you cheated. Now I put my hand in the box and I start extracting
balls, and if I see a red one, you lose. That is the check that we are doing at the
beginning: I open enough of the triplets that if you cheated, you are likely to lose
the game. You're the adversary, of course.
And if I don't see any red ball, we go to the second phase of the game. In the
second phase I open the box and I start putting balls at random from the box
inside the buckets. So the first ball goes there, the second ball goes here, the
third ball goes there, and so on.
And at the end you win if you didn't lose before and if any of these buckets has
a majority of red balls.
So if you play this game now, how many red balls would you put in the box?
What's the optimal strategy to win this game? This is an exercise, and more or
less the best strategy is to put in something like 1.5 times the size of a bucket.
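The ball game is easy to simulate. A rough Monte Carlo sketch, with all parameters (bucket count, bucket size, number of red balls, number of checked balls) as illustrative assumptions rather than the talk's exact figures:

```python
import random

def play(num_buckets, bucket_size, num_red, num_checked):
    """One run of the ball game: returns True iff the adversary wins."""
    total = num_buckets * bucket_size + num_checked
    balls = [1] * num_red + [0] * (total - num_red)  # 1 = red (cheating triplet)
    random.shuffle(balls)
    if any(balls[:num_checked]):           # phase 1: opened triplets show cheating
        return False                       # caught -> adversary loses
    rest = balls[num_checked:]
    for b in range(num_buckets):           # phase 2: fill buckets at random
        bucket = rest[b * bucket_size:(b + 1) * bucket_size]
        if sum(bucket) > bucket_size // 2: # red-majority bucket breaks privacy
            return True
    return False

random.seed(1)
C, S = 32, 7                               # 32 buckets of size 2T+1 = 7 (T = 3)
trials = 20000
wins = sum(play(C, S, num_red=10, num_checked=C * S) for _ in range(trials))
print("empirical win rate:", wins / trials)
```

Empirically the win rate stays tiny: to survive phase 1 the adversary must plant few red balls, and then a red-majority bucket is very unlikely, matching the bound discussed next.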
So we can bound the probability that the adversary is going to win our game,
and that probability is less than 1 over the number of buckets raised to the size
of the buckets. We can make this as small as we want. The nice thing here is
that we get security not just from the size of the buckets, but also from the
number of the buckets. That means our approach works better if you compute
bigger functionalities: if you want to do one million multiplications instead of 100
multiplications, the replication factor is going to be smaller and smaller. The
more multiplications you do, the more complex the computation you are
performing, the more efficient it gets. And that is nice, because these arithmetic
circuits are usually quite big -- when you turn real-life problems into arithmetic
circuits, they get big. So it's actually a nice feature.
So what I showed you so far is how to get one good multiplication, one good
triplet, out of a bunch of triplets. Actually, if instead of plain Shamir sharing we
use packed Shamir sharing -- so instead of embedding one secret in the
polynomial, we embed a bunch of secrets inside the polynomial -- we get these
numbers to be much better: instead of distilling one good triplet from each set,
we distill a constant fraction, like one fourth, of good triplets out of each set.
So let me give an overview of the proof. I showed you the ball game. From the
ball game we know that when we start, we check some of the triplets; if I didn't
see any bad triplet, that means there are few bad triplets.
If you get to the second stage of the game, it means you didn't put in too many:
if you misbehave too often, I'm going to catch you with high probability. If you
misbehave just a few times, there are going to be so few bad triplets per set that
the adversary knows at most T points on each polynomial. That means he has
no information on U and V, by the privacy of Shamir sharing.
There's one extra step that you have to take to get UC security. One of the
security requirements for UC, intuitively, is that parties have to prove knowledge
of their input. This has to do with non-malleability: you want to be sure that if a
party is playing in more protocols at the same time, it cannot take some
information from one protocol, like one ciphertext, and forward it into another
protocol. Every time they input something, they have to prove that they actually
know those values.
And so what we do, again, instead of giving a zero-knowledge proof of
knowledge of these values, is basically to preprocess the proofs of knowledge
by generating some random UC commitments -- I'm going to tell you now what
those are -- and ask parties to open differences between these random UC
commitments and the Pedersen commitments used in the protocol. So basically
P_i is going to generate some UC commitment to R, and by generating this UC
commitment he is going to prove that he knows R. Then, when he wants to
input a value X, he just opens the difference between this random commitment
and the actual value, and opening this difference is a proof of knowledge of the
fact that he knows the value he is actually giving as input to the protocol. So
how do we generate these UC commitments that can be used together with the
standard Pedersen commitments?
So we start again from a semi-honest UC commitment. We assume that there is a
common reference string in the sky that contains four group elements. The UC
commitment to X with randomness R and S is going to have this form: this is
basically an ElGamal encryption in the first two places, and there's a Pedersen
commitment down here.
When you want to open one of these commitments, you're going to send the
values X and S. So R, the value used for the ElGamal encryption, is actually
never revealed. The sender sends these values and the receiver checks that this
component is consistent with the opening.
So why do we have this first part if we never use it? Because when we go and
do the security proof, what happens is that the simulator is going to choose the
common reference string and therefore he knows the trapdoors: the discrete log
of g2 base g1 and of g4 base g3. Using the first trapdoor, the simulator can
extract X from this first part, and using the second trapdoor the simulator can
open the commitment to any value of his choice. And basically extracting a
commitment and equivocating a commitment are the two security requirements
for UC commitments.
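A minimal sketch of this double-trapdoor construction, with toy parameters (a small prime-order subgroup standing in for a cryptographic group; the names t1, t2 for the trapdoors are mine):

```python
# Commitment = (ElGamal encryption of x, Pedersen commitment to x).
p, q = 23, 11
g = 4                  # generator of the order-q subgroup mod p
t1, t2 = 3, 5          # CRS trapdoors, known only to the simulator
h = pow(g, t1, p)      # ElGamal public key
k = pow(g, t2, p)      # second Pedersen base

def commit(x, r, s):
    return (pow(g, r, p),
            pow(h, r, p) * pow(g, x, p) % p,   # ElGamal part (r stays secret)
            pow(g, s, p) * pow(k, x, p) % p)   # Pedersen part

def open_ok(com, x, s):
    # the opening reveals only (x, s); only the Pedersen part is checked
    return com[2] == pow(g, s, p) * pow(k, x, p) % p

def extract(com):
    # with t1, strip the ElGamal mask: c2 / c1^t1 = g^x
    gx = com[1] * pow(com[0], q - t1, p) % p
    return next(x for x in range(q) if pow(g, x, p) == gx)

def equivocate(x1, s1, x2):
    # with t2, reopen the Pedersen part to any x2
    return (s1 + t2 * (x1 - x2)) % q

c = commit(7, 2, 6)
assert open_ok(c, 7, 6) and extract(c) == 7
assert open_ok(c, 1, equivocate(7, 6, 1))      # same commitment, new value
```

Honest parties never touch the trapdoors; they exist only so that the simulator in the UC proof can extract from corrupt committers and equivocate toward corrupt receivers.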
Again, this works only if the parties are semi-honest, because if a party is
malicious it can put X here and some other value here, right? So what we're
going to do is just reapply the same idea from before. We're going to have the
sender generate a lot of random commitments, UC commitment X1 through UC
commitment XN, open some of those commitments, and check that those are
consistent.
Again, we're going to randomly partition these commitments and combine them
together, and still get good commitments from possibly some bad commitments.
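The combining step relies on the commitments being homomorphic: multiplying the commitments in a bucket yields a commitment to the sum of the committed values, with summed randomness. A toy sketch (same hypothetical Pedersen-style parameters as above, not the real construction):

```python
from functools import reduce

# Toy Pedersen commitment in the order-11 subgroup mod 23 (insecure).
p, q = 23, 11
g, h = 4, 9

def commit(x, s):
    return pow(g, s, p) * pow(h, x, p) % p

# A bucket of (value, randomness) pairs after the random partition.
bucket = [(2, 5), (8, 1), (3, 7)]

combined = reduce(lambda a, b: a * b % p,
                  (commit(x, s) for x, s in bucket))

x_sum = sum(x for x, _ in bucket) % q
s_sum = sum(s for _, s in bucket) % q
assert combined == commit(x_sum, s_sum)   # product commits to the sum
```

As long as at least one commitment in the bucket is well-formed and random, the combined commitment inherits its good properties, which is exactly what the random partitioning guarantees with high probability.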
So basically what I have told you is this LEGO approach. The technique we used
both for the triplets and for the commitments is the LEGO approach that we also
used in a previous paper: you produce a lot of bricks, then you check some of
them to see that they are good, and then you combine the other bricks in order to
get an object that works even if some of the bricks are not good.
As a result, we get security against a malicious adversary in an efficient way, and
with this preprocessing flavor all the expensive replication factors can be pushed
to the beginning of the protocol. Then the online protocol is going to be basically
as efficient as the semi-honest protocol. Applications of this approach have been
the two-party computation case for Boolean circuits, the arithmetic-circuit case I
told you about now, and the UC commitments that are the building blocks of this
protocol. And hopefully this approach, this kind of efficient cut-and-choose
technique, will find new applications in all sorts of scenarios. And I guess that's it.
[applause]
>>: So I'm a little skeptical of your claim about the efficiency of zero-knowledge
proofs. Have you done any comparisons, say, with GS proofs, that these might be
a good way to generate a proof?
>> Claudio Orlandi: What we would need, for instance, to do the multiplication,
is to start with Paillier. Can you do such proofs for Paillier? Would they --
>>: You could if you had a large enough -- because you still have to -->> Claudio Orlandi: But I guess the answer is no, we didn't compare. We just
thought that having this Paillier thing in between would break all the proofs that
we know of. But any suggestion would be highly appreciated.
>>: No efficiency [indiscernible].
>> Claudio Orlandi: Yeah, that doesn't. Maybe we have just chosen the wrong
statement to prove. There are two ways of looking at it: either there are no
efficient proofs for our statement, or we have chosen the wrong statement. But
the basic step of doing the multiplication and proving that the multiplication has
been done correctly -- I don't know how to do that. Also because we don't want
to bring the encryptions online. So there are efficient proofs for Paillier. Okay,
the thing is that we have Paillier encryption in the preprocessing phase, and then
in the online phase we have just commitments, Pedersen commitments over an
elliptic curve. There's a huge gap between the efficiency of a Pedersen
commitment on an elliptic curve and a Paillier decryption. I don't remember the
numbers, but it's on the order of 1,000 or something like that.
>> Melissa Chase: Any other questions?
>>: I have another question. Do you know the concrete cost of running this?
>> Claudio Orlandi: Yeah, actually. So this work is under submission, and the
results are in another work, the implementation, that is also under submission.
>>: Okay. So it's efficient to actually compute, like, what functionality?
>> Claudio Orlandi: I think the benchmark application was the same auctions as
we used in the first case.
>> Melissa Chase:
[applause]
>> Claudio Orlandi: Thank you.