>> Vinod Vaikuntanathan: Very happy to have Daniel Wichs visiting from NYU.
He's a graduate student. [inaudible]. And he will soon go to IBM as a [inaudible]
post-doc, very prestigious position. And he'll talk about separating succinct
noninteractive arguments from all falsifiable assumptions today.
>> Daniel Wichs: Thanks. Joint work with Craig Gentry. And so here's a picture
of a noninteractive argument.
And so I don't have any experience with this sort of thing myself -- I'm lucky -- but from
what my friends tell me, it turns out this seems to be a very hard problem: whether these
can be made more succinct, or shorter.
And so this talk will be about formalizing why this is so hard.
>>: [inaudible].
>> Daniel Wichs: Huh?
>>: [inaudible].
>> Daniel Wichs: No. It's a Google image of something. So what do people
argue about? Well, we have some language L. This is kind of the set of all
statements that we believe are true. And we want to convince someone that a
statement X is in the language. And so we know that NP is really the class of
languages for which we have noninteractive proofs with an efficient verifier.
Sometimes these proofs are called witnesses: if I want to convince someone that a
statement is in an NP language, I can give the witness, and the person is convinced that
the statement is true and can check it efficiently.
So the question for this talk is how succinctly can we argue membership in NP
languages.
And it turns out that if you think about kind of arguing membership as being the
regular kind of NP witnesses, then these type of witnesses can't actually be too
short. And the reason is because you can decide membership just by trying
every possible witness.
So, in particular, that means that if the witness size were sublinear in the statement
size, then you could decide any problem in NP in subexponential time, which we
don't believe to be possible.
And this actually even generalizes to interactive proofs. So even in interactive
proofs the communication can't be too short.
>>: N is what, N is the length of the statement?
>> Daniel Wichs: Yeah. So even interactive proofs can't be too short. But the
work of Kilian and Micali from the '90s showed you might be able to get
arguments if you weakened the notion of soundness. So they looked at
computationally sound proofs; in the rest of the talk when I say arguments, that's
what I mean, computationally sound proofs. This means that if you have unbounded
time, you might be able to prove false statements.
But if you're efficient, then you cannot. So efficient people can only prove true
statements. They can't find arguments for false statements.
On the other hand, we also want that if the prover is honest, they should be able
to prove statements efficiently, as long as they have the kind of standard NP
witness. So if you have an NP witness you can compute arguments.
And when I say "succinct," the question is whether we can make the size of these
arguments, or the communication, just some fixed polynomial in the security parameter
and polylogarithmic in the instance and witness size. Another way to think of this
bound is that if the statement and witness are, let's say, less than 2 to the security
parameter in size, then this is just some fixed polynomial in the security parameter,
meaning the size of the argument is a fixed polynomial in the security parameter no
matter how big of a polynomial-size instance and witness you want to use.
So that's really the question: Can we have a fixed polynomial bound on the size
of the arguments?
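As a rough formalization of the bound in question (my own notation; the exact polynomials are not fixed by the talk):

```latex
|\pi| \;\le\; \mathrm{poly}(\lambda)\cdot \mathrm{polylog}(|x|+|w|),
\qquad\text{so that } |x|,|w| \le 2^{\lambda} \;\Rightarrow\; |\pi| \le \mathrm{poly}'(\lambda)
```

for some fixed polynomial, no matter how large a polynomial-size instance and witness you use.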
>>: [inaudible].
>> Daniel Wichs: No, I do. I mean I'm comfortable.
>>: I mean, what I mean is verifier statement, [inaudible].
>> Daniel Wichs: That's true.
>>: So I mean so the verifier --
>> Daniel Wichs: Oh.
>>: But the proof is long.
>> Daniel Wichs: The proof should still be shorter than the statement. So what
do we know? Well, we know that we can actually get arguments like this that are
interactive, using four rounds of interaction, just assuming that collision
resistant hash functions exist. That's the result of Kilian from '92. And we can
even make them noninteractive, but as far as we know only in the random
oracle model; that's Micali's result from '94.
So the question is can we get these succinct noninteractive arguments in the
standard model. I'll call them SNARGs for short because it sounds funny. So
that's what I'll call them.
So there's a problem, and the problem is that the answer is no in at least one
sense which is that there's always a small adversary or nonuniform adversary
that for every level of the security parameter has a hard coded false statement
and a verifying proof for it.
So this is really the same reason that unkeyed collision resistant
hash functions don't exist: you can always have a hard coded collision.
And of course we don't throw up our hands and say collision resistant hash functions don't
exist; we just talk about collision resistant hash functions with some parameters
or some kind of key. And we'll do the exact same thing for SNARGs.
When I talk about noninteractive arguments, I'll be talking about them with some
kind of parameters. Here I'll call them common reference strings.
Okay. So the question is whether these types of --
>>: There will always exist a verifying proof of a false statement.
>> Daniel Wichs: If you, under some assumptions, yeah. So unless you can do
it unconditionally, there will always be a verifying proof for false statements.
So, well, do these types of SNARGs exist? We have some positive evidence
because we can take Micali's construction from the random Oracle model and
replace the random Oracle with some complicated hash function and just
assume that it's secure. And the CRS will be the description of the hash function.
And, well, what can we say about it? I don't know how to break it. So we can
conjecture that it's secure, but we really can't say much else.
So that's not really very satisfying for cryptographers. So can we actually come
up with a construction that we can prove secure under some real assumption, like
that one-way functions exist, or DDH, or something like that? Or maybe these
assumptions are too nice; can we prove it under some interactive assumption like
one-more discrete log, or some funny-sounding assumption like the q-decisional
augmented bilinear Diffie-Hellman exponent assumption -- picking on my co-author
here for an assumption he defined.
>>: [inaudible].
>> Daniel Wichs: Huh?
>>: [inaudible] in public.
>> Daniel Wichs: Oh, yeah. Oops. So the answer given this work is actually a
negative answer. We show that, no, you really can't prove it under these
assumptions, under these nice assumptions that you'd want to prove it under. Of
course, there's some restrictions.
So here's the more formal result. So we show that there's no black box reduction
proof of security for any SNARG construction under any of the assumptions
from the last slide, even kind of the weird ones -- in fact, under the larger class of
assumptions that I'll call falsifiable assumptions.
All right. So in order to explain this, I have to explain what all these things are: what a
black box reduction type of proof technique is, and what falsifiable assumptions
are. Before I do that, let me actually focus a little more on what the definition of
security for SNARGs is, to be a little more formal about it.
So for a SNARG we have three algorithms. We have a generation algorithm that
creates the CRS, depending on the security parameter. We have a prover that gets a
statement and witness and generates a proof, and a verifier that tries to verify.
And completeness is just the regular thing: if the statement is true and the witness
is good, then the argument pi will verify. And soundness is a little
more complicated.
So here the attacker actually sees the CRS first, then gets to choose a statement
and a proof. And he wins if the statement is false and the proof verifies.
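As a rough sketch of the syntax and the adaptive soundness game just described (the class and function names below are illustrative placeholders, not from the talk):

```python
class SNARG:
    """Hypothetical SNARG interface: (Gen, Prove, Verify)."""
    def gen(self, security_param: int) -> bytes:
        """Sample a common reference string (CRS) from the security parameter."""
        raise NotImplementedError

    def prove(self, crs: bytes, statement: bytes, witness: bytes) -> bytes:
        """Produce an argument pi; succinctness asks that pi stays short."""
        raise NotImplementedError

    def verify(self, crs: bytes, statement: bytes, proof: bytes) -> bool:
        raise NotImplementedError


def adaptive_soundness_game(snarg, adversary, in_language, security_param):
    """Adaptive soundness: the attacker sees the CRS first, then adaptively picks
    a statement and a proof. It wins if the statement is false yet the proof verifies."""
    crs = snarg.gen(security_param)
    statement, proof = adversary(crs)   # adaptive choice, made after seeing the CRS
    return (not in_language(statement)) and snarg.verify(crs, statement, proof)
```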
So the only subtlety here is that I'm talking about adaptive soundness, attacker
first sees the CRS, then gets to choose the false statement that he wants to
prove. And that's what you want from a noninteractive argument because the
CRS is fixed in the sky, the attacker can kind of adaptively choose a false
statement.
So you could think of weaker notions, and because this is a negative result, it's
good to prove it for as weak a notion as possible. So one is
we could talk about designated verifier SNARGs where there's also a secret key
generated along with the CRS and the verifier needs a secret key to verify.
And another weakening we could talk about is static soundness: the attacker first
chooses the statement, before seeing the CRS, and then gets a random CRS.
So he can't choose it adaptively. And it turns out that all of the results in this talk
will actually also apply to designated verifier SNARGs, but we don't know about
static soundness. So we crucially rely on the adaptive soundness.
I just wanted to mention that really, if we have a SNARG with static soundness, it's
the same thing as a two-round argument system. You can think of the CRS as the
first round and the response as the second round. So really the only
distinction between noninteractive and two rounds is for noninteractive you want
adaptive soundness; you think of the CRS as being published once in the sky
and the attacker then kind of keeps using it afterwards.
>>: Noninteractive public --
>> Daniel Wichs: Yes, unless you have -- we also talk about designated
verifiers, so it would be two rounds. Let me draw the map.
Here we have the various notions from strongest to weakest. So while SNARGs
without a CRS we know they don't exist, at least for nonuniform attackers. For
publicly verifiable and designated verifier SNARGs with adaptive soundness,
that's exactly where our results are coming in. So we believe they exist, but we
can't prove them secure under the kind of normal assumptions.
And for two round and three round arguments with static soundness we actually
don't know anything. So that's an open question. And four rounds we already
have Kilian's results. So assuming collision resistant hash functions, these exist.
So that's the map.
>>: We know three round CRS, right.
>> Daniel Wichs: Yeah, right. So all right. So the next thing I'll talk about is
what a false -- what is the class of assumptions that we are separating from.
So falsifiable assumptions were first defined by Naor and this isn't the exact
definition he used. It's kind of in the same spirit. It may be a little broader which
is better for our negative result. So a falsifiable assumption is just any
assumption that you can describe as a game between an efficient interactive
challenger and an adversary. The description of the assumption is just the
description of this challenger: the challenger plays a game against the
adversary and efficiently decides whether the adversary wins or not.
And the assumption says that for any polynomial time adversary the probability of
winning is -- should be negligible, maybe for decisional assumptions can do
one-half plus negligible. And so this really models things like discrete log, where
the challenger chooses a random exponent and sends group element G to the X
power and decides whether you win or not just depending whether you send
back X or not.
And this really models pretty much anything you could think of: DDH, RSA, LWE --
learning with errors. Even these kind of interactive assumptions like one-more
discrete log are modeled as well. The challenger can be interactive.
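For concreteness, a toy sketch of the discrete log example as a falsifiable assumption -- an efficient challenger that can decide on its own whether the adversary won (the group parameters here are placeholders, not a secure instantiation):

```python
import secrets

def discrete_log_challenger(adversary, p, g):
    """A falsifiable assumption is described by an efficient challenger like this one.
    Here the game is discrete log: send g^x, and the adversary wins if it returns x."""
    x = secrets.randbelow(p - 1)       # secret exponent
    challenge = pow(g, x, p)           # the challenger's message: g^x mod p
    x_guess = adversary(challenge)     # the adversary's response
    return x_guess == x                # efficient win/lose decision

# The assumption itself: for every polynomial-time adversary,
# Pr[discrete_log_challenger(adversary, p, g) == True] is negligible.
```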
>>: What is one-more discrete log?
>> Daniel Wichs: So it's not real formal, but -- it's what you can prove the
SNARG protocol under, right.
Okay. But there are some things that aren't falsifiable. An important example for
this talk is actually if you take some construction of a SNARG, Micali's
construction where you replace the random Oracle with some complicated hash
function and assume that it's secure, this is an assumption.
Now, I can write this as a game between a challenger and an attacker, but the
challenger isn't efficient. The reason is that the attacker wins if he produces a false
statement and a verifying proof, and there's no way for the challenger to efficiently
decide whether the statement is true or false.
There are other things, like the knowledge of exponent assumption, which I can't even
write down as a game between an attacker and challenger, because it requires some
extractability property. If you don't know what that is, it's not really important. The
point is that you can make assumptions that aren't falsifiable under this definition,
that don't meet this definition.
>>: Knowledge of exponent assumption --
>> Daniel Wichs: It's also -- if you take some proof system and assume it's a
proof of knowledge, that's also not written syntactically as a game between
attacker and challenger.
>>: [inaudible].
>> Daniel Wichs: Yeah. Well, the definition requires the existence of some extractor
and things like that. It's not really syntactically a game.
>>: When you say the game has to exclude like -- these extractors next, you
could just have a game that includes the extractor built into it?
>> Daniel Wichs: No.
>>: Because black box dependency on adversary then?
>> Daniel Wichs: The definition of knowledge of exponent isn't that there's a
challenger that tests you and figures out whether you win. There are more
quantifiers: for every attacker there exists an extractor such that this
happens.
>>: The quantifier is different; but you could have a universal
extractor be part of the game, and as long as it's efficient it seems like it fits this
definition.
>>: Double-check the system -- that's what I was going to say.
>> Daniel Wichs: It's not like that -- in order to run the extractor you
need non-black box access to the attacker. This is a game where the challenger
is an interactive machine, sending challenges and responses, and that's not what
knowledge of exponent would be.
>>: Couple of things. There are universal extractors, issues where [inaudible]
one extractor given the Oracle, calls the attacker and the Oracle -- even for those
things you still can't --
>> Daniel Wichs: You can come up with nonfalsifiable assumptions, like that some
argument system is a proof of knowledge. That doesn't syntactically match
[inaudible], but you can prove them under falsifiable assumptions, like that
one-way functions exist. It's not excluded here.
>>: Some proof of knowledge extractor, my question is, it seems to me even this,
so the extractor could push --
>> Daniel Wichs: Well, you'd have to recurse, rewinding or something, right?
>>: Forget rewinding. Assume the game -- what you have to do is give some
kind of a proof or whatever, then you run the extractor and the extractor fails.
>> Daniel Wichs: Yeah, yeah. That would be falsifiable.
>>: It's not falsifiable, because you need to check that the statement
is false. He produces a false statement and a proof, you run the extractor, it fails --
but only because it's a false statement.
>> Daniel Wichs: I didn't know there was a statement. All right. You can come up
with assumptions that don't match this; I guess that's the point.
>>: What is the famous example that a priori [inaudible] falsifiable but you can
build from falsifiable assumptions?
>> Daniel Wichs: That's zero-knowledge proofs of knowledge, right?
Okay. So the last thing I want to talk about is what's the -- this statement is about
a specific type of proof technique that can be used to show security. So what's
this proof technique, what's the black box reduction?
So normally we want to prove that under some assumption like let's say that
discrete log is hard, we build a SNARG, we want to prove it's secure under this
assumption.
The way we usually do it is via the contrapositive: if there's an attack on the
SNARG -- a SNARG attack, if you will -- then that should imply an attack on the
assumption.
And a black box reduction is just a constructive proof of this -- a special type of
constructive proof where we actually build an efficient oracle-access algorithm
that we call a reduction. This reduction gets oracle access to the
attacker, and using this it should break the assumption; it should win this game
with the challenger.
Okay. So the reduction is just an efficient oracle-access machine. The attacker
that it accesses as an oracle can be efficient or inefficient; it doesn't
matter. The reduction should win as long as the attacker is a good attacker. It should
actually win this game, break this assumption, even if the attacker it talks to
is inefficient, as long as it's a successful attacker. That's really the
property that you want from a reduction.
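Schematically, a black box reduction in this setting might look like the following sketch; the helper names are hypothetical and just stand in for however the reduction translates between the two games:

```python
def black_box_reduction(attacker_oracle, assumption_challenger,
                        embed_challenge, extract_answer):
    """Sketch of a black box reduction. `attacker_oracle` is the (possibly inefficient)
    SNARG attacker, used only through calls: feed it a CRS, read back (statement, proof).
    The reduction must win the challenger's game whenever the oracle is a successful
    attacker, without ever looking at the attacker's code."""
    def reduction_strategy(challenge):
        crs = embed_challenge(challenge)            # plant the challenge into a CRS
        statement, proof = attacker_oracle(crs)     # black box call to the attacker
        return extract_answer(challenge, statement, proof)
    return assumption_challenger(reduction_strategy)
```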
And this now captures the fact that we don't know how to reason about attackers
being efficient or inefficient, other than just kind of running them and calling them
as a black box; and then if the attacker is efficient, the reduction together with
the attacker is efficient as well.
And for reductions you sometimes also have to talk about rewinding. But here,
because the SNARG attack is kind of a two-round game -- you just get a CRS and
output a statement and a proof -- you can assume that the attacker is stateless, so
you don't have to worry about rewinding. We can really just talk about oracle access.
Okay. So now we have -- now we have all the tools we need to kind of
understand this result. So there's two caveats that I want to mention. So first I
have to assume that the falsifiable assumption isn't actually false. It's very easy
to prove things under false assumptions, right?
So that's one thing. And the other one is that SNARGs actually could exist
unconditionally -- if P equals NP, say -- so I need to make some hardness assumption
as well. And I'll actually need to assume that
subexponentially hard one-way functions exist. That would be the assumption
here.
>>: That's not strictly necessary, right?
>> Daniel Wichs: It's not strictly necessary.
>>: You can say there's a SNARG for a language that it's not --
>> Daniel Wichs: So I'm talking about SNARGs for all of NP, let's say. So in
order to -- right? If P equals NP, then SNARGs for NP exist; or even if NP
has logarithmic witnesses, which is not ruled out a priori. So I need to make
some assumption along these lines.
>>: It's not a minimum assumption.
>> Daniel Wichs: No, right, it doesn't. Exactly.
So I'm just going to restate the result in a little nicer way.
So assume you do have a SNARG construction and a black box
reduction proving its security under some falsifiable assumption. Then one
of these two things must hold: either the assumption is already false -- you right
away get an attack on it using the reduction, so that would be a kind of
trivial statement --
or subexponentially hard one-way functions don't exist, which we believe they
do. So that would be unlikely. Okay. So let's talk about the proof techniques.
The main idea of the proof technique is to show a special type of attack that I'll
call a simulatable attack. So a simulatable attack is a special type of attack
against a SNARG. So you give me a SNARG -- a proposed SNARG
construction -- and I should come up with this attack. And it's always easy to
come up with inefficient attacks. So this isn't surprising. But it's a special type of
attack.
So and what's special about it is there also exists a simulator which is efficient,
and no distinguisher can tell the attacker from simulator. There's an attack that's
inefficient, outputs false statements and valid proofs, runs in exponential time.
There's a simulator that's efficient and kind of looks the same to any bounded
party.
And so this is a weird thing. You're not going to get a simulatable attack,
let's say, against an encryption scheme without already getting a real efficient
attack against the encryption scheme, because if the
inefficient attacker breaks the encryption scheme and the simulator looks the same,
then the simulator breaks it as well. But that's exactly not true for SNARGs, or for
assumptions that aren't falsifiable, because you might not be
able to tell whether you're breaking the scheme or not.
>>: Because the statement is -- you can't tell if the statement is false.
>> Daniel Wichs: Exactly. So in fact the efficient simulator here will output true
statements and valid proofs. And you can't tell the difference because you can't
tell whether the statements are true or false.
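A sketch of the attacker/simulator pair being described might look as follows, assuming a hard language with the good/bad distributions introduced a bit later in the talk; the sampling helpers are placeholders, and finding the indistinguishable proof is exactly the inefficient, nonconstructive step:

```python
def simulatable_attack_pair(snarg, sample_bad, sample_good_with_witness,
                            find_matching_proof):
    """Sketch of a simulatable attack. The attacker is inefficient: it samples a false
    statement and somehow finds a verifying proof that looks like an honest one.
    The simulator is efficient: a true statement with a witness, honestly proved.
    No efficient distinguisher should tell them apart, since it cannot tell true
    statements from false ones."""
    def attacker(crs):                  # inefficient, e.g. exponential time
        x_false = sample_bad()
        pi = find_matching_proof(crs, x_false)   # shown to exist nonconstructively
        return x_false, pi

    def simulator(crs):                 # efficient
        x_true, witness = sample_good_with_witness()
        return x_true, snarg.prove(crs, x_true, witness)

    return attacker, simulator
```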
So that's really the main idea of the proof. So I just need to show two things.
First I need to show if you do have a simulatable attack, if you have the special
type of thing, then there's the black box separation, you won't be able to prove
security in a black box reduction. And then I need to show that every SNARG
construction comes with the simulatable attack.
So the second part will be easier. So let's start with that. So here assume that
there is this special type of simulatable attack, and we have a reduction that,
using the attack, breaks the assumption. Well, remember this attack is inefficient.
So right now this kind of box over here is an inefficient attack against the
assumption. So it's not very interesting.
But, oh, and, yeah, the assumption challenger here says that this attack is
good. But we can always think of the reduction and the assumption's challenger
together as one efficient machine, because both of these are efficient -- and here's
the only place we use the fact that the assumption is falsifiable, right? If it weren't
falsifiable, this wouldn't be the case.
And so this is one efficient machine. And so we can replace the attacker with the
simulator and we'll still get the same outcome. And now if we lump the reduction
and the simulator together, it means there's an actual efficient attack against the
assumption. So if this is, say, the discrete log assumption, this guy sends you g
to the x, and then here you have an efficient attack that computes x from g to
the x.
>>: This is the only place you use the fact that the assumption is -- that the
assumption is not false.
>> Daniel Wichs: Right. Right. Yeah. So, right. So if there's a black box
reduction to this assumption and you have a simulatable SNARG attack then the
assumption is false. That's what we derived right now. So it's exactly what it
says right here.
Of course, we assumed we have this simulatable attack; that's what we have to
prove right now, that every SNARG construction has a simulatable attack. So,
well, this statement is actually not true unless I make an assumption. Why
is that? Because we could have unconditionally secure SNARGs if P equals
NP, so there would be no attack against them, right? So in order to show
there exists a simulatable attack I need to make some assumption. So here I'll
make the assumption -- right.
So here's my subexponential hardness assumption, which reduces to one-way
functions: an NP language L and two distributions, one over statements in the
language and one over statements outside. I'll call them the good distribution
and the bad distribution. And these two distributions should be indistinguishable.
And I'll furthermore assume that you can actually sample statements from the
good distribution along with a witness.
So this you actually get from a PRG. If you look at pseudorandom
generators, you could say the good distribution is pseudorandom strings, the bad
distribution is truly random strings, and these two are indistinguishable.
And that's implied by one-way functions.
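As a toy illustration of the good/bad distributions coming from a PRG (the "PRG" below is just a hash-based stand-in for illustration; the actual argument needs subexponential indistinguishability):

```python
import hashlib
import secrets

SEED_LEN = 16   # bytes of seed (the NP witness)
OUT_LEN = 32    # bytes of output: longer than the seed, so a uniformly random
                # string lies outside the PRG's image with overwhelming probability

def prg(seed: bytes) -> bytes:
    """Toy length-expanding 'PRG' built from SHA-256, purely for illustration."""
    return hashlib.sha256(seed).digest()[:OUT_LEN]

def sample_good_with_witness():
    """Good distribution: pseudorandom strings (true statements), with the seed as witness."""
    seed = secrets.token_bytes(SEED_LEN)
    return prg(seed), seed

def sample_bad():
    """Bad distribution: uniformly random strings, almost surely false statements."""
    return secrets.token_bytes(OUT_LEN)
```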
>>: What's the subexponential?
>> Daniel Wichs: Say that again?
>>: Where does the subexponential come in?
>> Daniel Wichs: That's what I'm assuming. I'm assuming subexponential
indistinguishability here, yeah. So let's actually fix this
particular hard language L, and what I'm going to show is that for this
language we'll have a simulatable attack, which means there's one for all of
NP as well, because a SNARG for NP would imply one for this language.
Okay. So that's what we need to show. So in order to show a
simulatable attack, we need to give these two machines or two entities: the
attacker that's inefficient, and the simulator that's efficient but looks the same.
What will they do? They'll have to produce some statements --
>>: [inaudible] the performing thing, could be a large subclass of NP,
which would have SNARGs.
>> Daniel Wichs: For example, P, yeah. Yeah.
>>: Document [inaudible].
>> Daniel Wichs: It doesn't, but it says there will be no SNARGs for any language
which has this kind of hard indistinguishability problem on it.
>>: What is the [inaudible].
>>: Because [inaudible] language.
>> Daniel Wichs: But I mean, you could -- so I don't know what's known about
having subexponentially hard pseudorandom generators in NP intersect coNP.
It seems reasonable, though. So I would think --
>>: I see. So --
>> Daniel Wichs: So any language with subexponentially hard pseudorandom generators in it,
you'll show that SNARGs don't exist for it.
>>: [inaudible] produces false statements, you're saying there's false statements
[inaudible] SNARG is not allowed, not required to produce an inference.
>> Daniel Wichs: Yes, exactly. Yeah.
>>: [inaudible].
>> Daniel Wichs: Yeah. Right. So the attacker will sample false statements from the bad
distribution; the simulator will produce ones from the good distribution along with
a witness. And it's also clear what the simulator will do: just
run the proving algorithm honestly to produce the argument pi. But now what does the
attacker do?
He kind of needs to find a verifying argument. And he can do this using brute
force. But this is actually somewhat nontrivial. The naive idea
would be just to try all possible arguments until one verifies and
output the first one that does when you sample a bad statement.
But the problem is that the one you output may not look anything at all like the
correct distribution, the one that's produced by sampling good
statements and actually running the proving algorithm.
So what I'm saying is that finding an argument that verifies is not the same as
finding an argument that's indistinguishable from the ones that are produced by
the actual, by the honest algorithm.
So what does it mean? What should the attacker actually do to produce
something that's indistinguishable that you can't tell apart? That's really the
question here.
>>: What are you trying to do?
>> Daniel Wichs: Okay. So remember I need to build an attacker and a
simulator that output essentially the same distribution.
>>: Given an attacker you build a simulator.
>> Daniel Wichs: No, no, I need to build a special attacker and a special
simulator together. Not for any attacker -- I'm constructing the attacker. So
here I'm saying both attacker and simulator will just sample statements, one from the
bad language, one from the good language. For the simulator it's clear you should
also use the correct algorithm to generate an argument. It's not clear a priori what
the attacker should do. So here I'm trying to
construct an attacker that matches the simulator: one that outputs a distribution of
statements and proofs that looks indistinguishable from the one the simulator
produces.
But I don't actually a priori know how to do this, right? Just trying brute
force and outputting the first proof that verifies might not
be a good strategy.
>>: Sometimes the simulator, the other way around, just efficient, kind of starts
with one --
>> Daniel Wichs: Right.
>>: There is only one thing you can do -- [inaudible] efficient.
>> Daniel Wichs: Yes, exactly. So it's not going to be efficient -- the
way of finding these arguments can be arbitrary. But it's not even clear a priori
that there exists some inefficient way of doing this. At least I don't know what it
is. And I'm actually still not going to show you what it is. It's
going to be nonconstructive, but I'm going to show there at least is a way of
sampling these proofs that then matches the simulator.
So, correct-looking arguments. Here's kind of the basic statement that I really
need to show. We'll show that for every efficient prover algorithm with relatively
short outputs -- so here's where I'll use that the size of the output, the
proofs, the arguments, is short -- there exists an inefficient way of lying.
So if I sample good statements and give you the output of the proving algorithm on
them, then it will be indistinguishable from using bad statements and the lying
algorithm.
Okay? Actually I'll prove something stronger. I don't care that the proving
algorithm is efficient. This actually even works if it's inefficient as long as the
output size is short.
And if it's inefficient, we don't actually need to care about the witness, because
this proving algorithm can just sample the witness on its own. So we can erase
that. And then it becomes kind of a nice statement, maybe interesting on its own,
about an indistinguishability property with auxiliary information.
So what I'm really showing is that for any two distributions that are
indistinguishable, if I have some extra auxiliary information that I tell you about
samples from the first distribution -- the good distribution, let's say -- which could be
completely inefficient to produce, auxiliary information that I
can't generate myself, then there exists a way of lying about samples from the other
distribution so that the joint distributions still look indistinguishable.
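Stated a bit more formally (my paraphrase; the parameters are only meant to indicate the exponential-in-output-length loss discussed next):

```latex
\text{If } X \approx_{(s,\varepsilon)} Y \text{ and } \Pi \text{ is any (possibly inefficient) function with outputs of length } \le \ell,\\
\text{then there exists a (possibly inefficient) } \widetilde{\Pi} \text{ such that }
\big(X,\Pi(X)\big) \;\approx_{(s',\varepsilon')}\; \big(Y,\widetilde{\Pi}(Y)\big),\\
\text{where the loss from } (s,\varepsilon) \text{ to } (s',\varepsilon') \text{ is roughly a factor } 2^{\ell}\cdot\mathrm{poly}(1/\varepsilon).
```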
Okay? Of course this isn't really true unless -- well, I need to talk about what
it means that it's short. So the security here goes down by a factor that's
exponential in the size of the auxiliary information. And that's necessary: if I give you
a pseudorandom string and I give you the seed, you know it's pseudorandom. I
can't match that by giving you a uniform string and some lie, right? Because
uniform strings don't have seeds from the PRG that will match them.
So it's really important that the amount of information I'm giving you is not too
large. Okay. And so I think we have time, so I'll actually go into the proof. The
proof is somewhat similar to Nisan's proof of Impagliazzo's hardcore lemma; it uses
a min/max argument. So I think it's interesting. I think this is kind of an
interesting theorem on its own and should have some other applications. So, for
example, it seems very related to a theorem of [inaudible] that shows that if you
have a pseudorandom generator and get leakage on the seed, the HILL entropy of
the output goes down by the size of the leakage. Because here it says you can
think of this auxiliary information as leakage on the seed of the pseudorandom
generator: seeing the pseudorandom string -- the generator's output on
the seed -- looks like seeing a random string and extra information about it.
So that's what this theorem would say.
>>: [inaudible].
>> Daniel Wichs: It's not exactly the same. There's technical reasons why these
statements aren't the same. But it has a similar feel.
Okay. So how do we prove this? Well, let's prove the contrapositive. Let's
assume otherwise. What we're assuming is there exists some auxiliary
information that I can give you about the one distribution, the good distribution,
such that for every attempt at lying, every attempt to emulate it, there exists a
distinguisher of relatively small size which can distinguish with good probability.
Okay. So this is the assumption that we have. And what we want to show is that
if this is true, then we can distinguish the good and the bad distributions, just on
their own.
And there are kind of two problems or two difficulties in proving this, right? So first
off, right now we have some distinguishers that require both a statement and
some auxiliary information, and we want a distinguisher that just gets the
samples X, right, doesn't get any pi. And the second thing is that right now the
order of quantifiers is wrong: we have a distinguisher per attempt to lie, but we don't
have one universal distinguisher we can use.
So it's not clear how to do this. The first part of the proof is to switch the
order of quantifiers using the min/max theorem. And we can do that by
essentially interpreting this as a game between two players, a minimizing
player and a maximizing player. The minimizing player is trying to come up
with a lie, and the maximizing player is trying to come up with a distinguisher that
has the best chance of distinguishing, right?
So the minimizing player is trying to come up with a lie for which the distinguisher
has the least chance of succeeding, and the maximizing trying to come up with
the best possible distinguisher.
>>: [inaudible].
>> Daniel Wichs: Say it again.
>>: Lies with the algorithm?
>> Daniel Wichs: Any function. I mean, I don't care really if it's -- or anything like
that.
>>: No problem with size or anything.
>> Daniel Wichs: No, it can be arbitrary. The distinguisher is of bounded size;
that's the only thing. So, of course, to use min/max we actually need to talk
about distributions, right -- the players should be able to make randomized moves.
So we need to talk about distributions over distinguishers. That's kind
of a technical point; I'll mostly ignore it. For the lying algorithm, you know, having a
distribution over probabilistic algorithms is just another probabilistic
algorithm, so that doesn't matter. So now that we have that, we can apply the
min/max theorem and reverse the order of quantifiers. So the min moves over here,
and now we get the other order: we have one family, or a distribution, of
distinguishers such that a random one from this class can distinguish
every possible lie.
>>: Is it easy to distinguish [inaudible].
>> Daniel Wichs: It's not one distinguisher -- and it's not necessarily an
efficiently samplable distribution over distinguishers. It's some distribution over
efficient distinguishers.
>>: Distinguishers are all --
>> Daniel Wichs: The distinguishers are all small. But I might not be able
to sample from this distribution. All right. We'll get to this in a sec.
So just to rewrite this: it says that there exists some auxiliary information and
some distribution over distinguishers such that for every attempt to lie, the
distinguishers will win -- will actually tell the difference. So the next thing we need
to do is get rid of this auxiliary information, because we want a class of
distinguishers that only gets X.
So how do we do that? Well, I want to define the following value of X, the following
function of X. It's essentially the probability that the
distinguisher will say that the statement is a bad statement if I give it the best
possible lie. Okay. So the best possible lie is
the value pi which will make the distinguisher output 1 with the least probability.
>>: Distinguisher means distinguisher sample over the [inaudible].
>> Daniel Wichs: Yes, so this probability is only over D, right?
>>: A fixed D?
>> Daniel Wichs: No, the probability is over D -- over the distinguishers from this
distribution, right? So this is exactly the pi that the
lying distribution should give if it wants to get the smallest value, right?
So essentially this inequality over here says that there's a difference
between the expected value of this function on the bad distribution and on the good
distribution.
Okay? So that's exactly what we have. So now we have one function of
X whose expectation is different between the two distributions. So it seems like
this gives us a way to test.
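In symbols (my notation), the quantity being defined and the gap it gives are roughly:

```latex
f(x) \;=\; \min_{\pi}\ \Pr_{D \leftarrow \mathcal{D}}\big[ D(x,\pi) = 1 \big],
\qquad
\mathbb{E}_{x \leftarrow \mathrm{bad}}\big[f(x)\big] \;-\; \mathbb{E}_{x \leftarrow \mathrm{good}}\big[f(x)\big] \;\ge\; \varepsilon,
```

where the probability is over a distinguisher D drawn from the distribution given by the min/max step.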
>>: [inaudible] [inaudible] is optimal?
>> Daniel Wichs: It's not, because -- so I'm subtracting a smaller value than
this, right? Here I have some particular distribution of pi given X, and I'm subtracting
this probability out. If I subtract out an even worse one, right, I'll get a better
advantage. Does that make sense?
>>: It's minimal then.
>> Daniel Wichs: Right. So I'm subtracting a smaller guy.
>>: [inaudible].
>> Daniel Wichs: Oh, yes, that should be epsilon. My fault. Change epsilon.
Sorry about that. Right. So now we have a specific kind of value or function of X
that seems to be different between the two distributions. So the right strategy
seems to be to test, to kind of try to come up with an estimate for this. And if you
come up with a good enough estimate then we can test whether the X is in the
good distribution or the bad distribution.
>>: [inaudible].
>> Daniel Wichs: Yeah, sorry, last minute changes. That's what I
meant. I'm going to try to come up with an estimate for that. To do that, we'll get
X, we'll try all possible values of the argument pi,
and just try running many distinguishers on this.
Okay? And we'll try to estimate what the probability of outputting 1 is, and then
depending on that we'll decide whether it's in the good set or the bad set.
>>: By D we [inaudible].
>> Daniel Wichs: Right. Choose many distinguishers from the distribution and
run them. So now there's a problem: we actually have an algorithm
that distinguishes the good and the bad distributions, but it needs to sample
from some distribution over distinguishers, where each distinguisher
is efficient but sampling from the distribution is inefficient.
So this is a problem. But of course this is kind of all nonuniform stuff so we can
just fix the coins, the best coins of this algorithm and now we get a fixed
algorithm that just has some fixed efficient distinguishers from this distribution
and works well. So by fixing the coins we actually get a real efficient algorithm.
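The distinguisher just described might be sketched as follows; the sample count and threshold are arbitrary placeholders, and sampling the distinguishers is the inefficient step that gets removed by nonuniformly fixing the coins:

```python
import statistics

def estimate_f(x, candidate_proofs, sample_distinguisher, num_samples=1000):
    """Estimate f(x): for each candidate proof pi, estimate the probability over a
    random distinguisher D that D(x, pi) = 1, and take the minimum (the 'best lie')."""
    ds = [sample_distinguisher() for _ in range(num_samples)]   # the inefficient step
    def score(pi):
        return statistics.mean(1.0 if d(x, pi) else 0.0 for d in ds)
    return min(score(pi) for pi in candidate_proofs)

def decide(x, candidate_proofs, sample_distinguisher, threshold):
    """Call x 'bad' if the estimate sits above a threshold separating the two
    expectations; fixing the distinguishers nonuniformly makes this efficient
    (up to the exponential cost of enumerating the candidate proofs)."""
    return 'bad' if estimate_f(x, candidate_proofs, sample_distinguisher) > threshold else 'good'
```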
>>: Along many points that --
>> Daniel Wichs: Yeah.
And so that actually finishes the proof. That shows that
this gives us a way of distinguishing the original distributions on their own, without
any auxiliary information.
>>: [inaudible] shows epsilon -- what is the epsilon [inaudible] or --
>> Daniel Wichs: Right. So the statement is: if you have distinguishers of size S
with advantage epsilon, then the size of the distinguisher grows by -- yeah,
the security loss is proportional to 1 over
epsilon and exponential in the proof size. The proof size is what kills you more.
>>: [inaudible] applied to the thinking about --
>> Daniel Wichs: I wasn't actually planning on it. Yeah, but the point is that you can
always -- if you assume subexponential hardness, you can always choose statements
long enough so that the hardness is bigger. The proof size is a fixed
polynomial, so you can always choose the statement size big enough that the hardness
of distinguishing these two distributions is much bigger than exponential in the proof size.
So that's really all I need here. Yeah, I wasn't going to talk about the parameters
too much, give the idea.
Yeah, so that's where the parameters come in. So the complexity is exponential in
the proof size, and that one should probably be 1 over epsilon. Yeah. So that
gives us a strategy for what the attacker should do. We don't know what
the strategy is explicitly, but it exists. So now we've shown that there exists this attacker
and the sim --
>>: This is the min and max.
>> Daniel Wichs: Yes, exactly. So now we've shown that there exists this
attacker and simulator where the attacker is doing the bad thing, giving false
statements and proofs, and the simulator is giving true statements and true
proofs and you can't tell them apart.
There's a couple of subtleties. So we only talked about distinguishers that just
get one sample from this distribution. But really the distinguisher can call the
attacker many times.
You just do a hybrid argument; that's relatively easy. Then there's another subtlety,
where the distinguisher can call the oracle with smaller values of the
security parameter. So it can give it a very small CRS, and then it will get
maybe very small statements and proofs, and those you can distinguish. So we have
this attacker that works for all sizes, right, all security
parameters. And you can call him on anything you want, if you're a distinguisher
with oracle access. You can call him on purposely small things, and you
can run in enough time to figure out whether the statements he's giving you are
bad or good.
This is a little bit messy. This raises a lot of subtleties that the paper handles.
The fix is that the simulator sometimes also gives false
statements, if the values are small enough. I don't want to get too much --
>>: [inaudible].
>> Daniel Wichs: Yeah, exactly. By just doing brute force in polynomial time.
Yeah.
>>: [inaudible].
>> Daniel Wichs: It turns out you can make it work. It's pretty messy, so I'll
warn you. I've been told that a lot of times when people do these black box
separations, they don't consider this -- this subtlety apparently comes up a lot in
black box separations, and often people kind of ignore it. But it seems to be an
important thing, but I don't know.
You usually can overcome it with some tricks.
>>: [inaudible].
>> Daniel Wichs: That's what I was wondering, but I'm not sure. Yeah, I don't
know. Okay. So, yeah, that really just finishes the proof. So just to get back to
the main results: we showed that if there's a black box reduction for some
SNARG construction under some falsifiable assumption, then either we have a
simulatable attack on this SNARG, and
therefore the falsifiable assumption is just false, or the
simulatable attacks don't exist, which means that there are no subexponentially hard
one-way functions.
I can think of two extensions. So first, we defined succinct pretty strictly: we said
that the argument size is polylogarithmic in the instance and witness. You could
ask, well, what about sublinear arguments? And, yeah, the
same separation goes through; you just need to make a slightly stronger
assumption. So instead of subexponentially hard one-way functions or subset
membership problems, now you need to talk about exponentially hard ones.
Okay. And so I want to compare a little bit to other black box separations. This
area of black box separations has a long history, starting with Impagliazzo and
Rudich. And most of the prior black box separations
work by showing that you can't construct a primitive A using a generic
version of primitive B as a black box. For example, you can't construct key
agreement if you just get some generic one-way permutation and you don't know
anything about it. But of course you can construct key agreement from
specific one-way permutations like RSA, right?
So it really talked about the structure of the construction, using some
generic primitive as a black box. Here our result is
somewhat different, because the construction can be arbitrary. We don't care
about how the SNARG construction works. And we don't even talk about generic
primitives like a generic one-way permutation; we talk about concrete assumptions
like RSA or GDH.
So the black box separation here -- what's black box here -- is that the reduction
uses the attacker as a black box. So it's a restriction not on the
construction but on the proof technique. Okay?
And that might actually be, in some sense, harder to get around,
because we do have ways of using primitives in a non-black box way -- like
proving zero knowledge proofs about them and things like that -- but we don't
have too many ways of using attackers in a non-black box way. Maybe
[inaudible] arguably is of this form.
>>: [inaudible].
>> Daniel Wichs: Right. Yeah, well, yeah. Yeah, that's exactly it.
>>: Sublinear zero knowledge argument. Exactly from the argument's point,
right?
>> Daniel Wichs: Yeah, I was going to mention it, I think. Well, all right. So,
yeah, so we saw the result. And here are a couple of
interesting open problems. The first one is: can you use non-black box
techniques? Again, that seems to be difficult. We really
don't have anything that we know how to throw at it. The other
thing is: can you build SNARGs under nonfalsifiable assumptions like knowledge of
exponent? And in fact the answer is yes, under fairly strong versions of knowledge
of exponent in bilinear groups.
>>: Definitely this doesn't fit into your falsifiable category, right. Why
exactly? Because the knowledge of exponent extractor needs to be non-black box in
the adversary?
>> Daniel Wichs: Every knowledge of exponent assumption is not falsifiable.
>>: Make a stronger assumption: there exists a universal extractor, no,
right?
>> Daniel Wichs: I --
>>: We'll talk about it.
>> Daniel Wichs: I don't think --
>>: You don't think that works.
>> Daniel Wichs: I don't think so. The universal extractor has to get something. It
has to get the code of the attacker or something. It's not an interactive
challenger that just runs the attacker in a black box way. I don't know how to
write any of these knowledge of exponent assumptions syntactically as a game
between the challenger and the attacker where the challenger is just sending some
messages and the attacker is responding.
>>: Okay.
>> Daniel Wichs: So you can think of this result as actually giving you
a separation between knowledge of exponent and all falsifiable assumptions.
Essentially it says that not only do we not know how to write
knowledge of exponent syntactically as a falsifiable assumption, you can't show
knowledge of exponent secure under any falsifiable assumptions in a black box way.
>> Vinod Vaikuntanathan: [inaudible].
>> Daniel Wichs: I don't think so. But there's probably a much simpler way
to do it; but indirectly we kind of showed that as well.
>>: So how -- SNARGs as well? Are they linear or --
>>: [inaudible].
>>: I see.
>>: [inaudible].
>>: Any reason why square root of N or polylog?
>>: I don't think you can get arbitrary --
>>: Confused by the linear --
>>: [inaudible].
>>: I don't know. I don't know the exact case.
>>: What are the knowledge of exponent assumptions, what form, what shape? Fairly
arbitrary, the [inaudible] there's another algorithm that computes this black box
[inaudible] yes?
>>: Given the coins and the description of the attacker, you don't [inaudible].
>> Daniel Wichs: It's kind of like the attacker produces an output of a certain
form and he needs to know the input.
>>: So it's kind of --
>> Daniel Wichs: There's an extractor that sees this algorithm and its output.
>>: That produces -- trying to see what transpired, you're saying there's a
universal extractor. I think in this case it's like you need to -- it doesn't
work right because there's not enough stuff to give it in the transcript, the
run, the adversary A. It's clear this doesn't exhibit [inaudible] falsifiable. I guess it
has to have some handle on the attacker, clearly.
>> Daniel Wichs: For the extractor, it's important that he gets to see the random
coins and the code of the attacker.
>>: I think the thing is, if you had such a universal extractor you could bring --
>>: [inaudible].
>>: Bring the attacks [inaudible] so the reduction between [inaudible].
>>: Yeah.
>>: Separate that.
>>: I wonder, for your results -- assume that the SNARG Groth constructed under
the simple knowledge of exponent assumption, would you then say there's no
reduction between it, if you have sort of the [inaudible].
>> Daniel Wichs: Yeah, I'm not sure.
>>: Showed that the assumption is not provable from --
>> Daniel Wichs: Yeah.
>>: From extracting from the [inaudible].
>> Daniel Wichs: Okay. So the next thing is that there are a couple of other
questions. Maybe you could get arguments that are succinct in the witness
size but not the statement size. I think in practice that
would probably be very useful already, and the result doesn't rule that out. So
that would be really interesting. This result requires that the argument size is really
polylogarithmic in the instance as well.
>>: So the Groth result, is it -- is it sublinear in the statement size or just the
witness size?
>> Daniel Wichs: I think it's the statement size as well. I'm not really sure. Both.
And the last thing is: what about constructions of two or three round arguments,
rather, with static soundness? Can you get these, or do the black box
separations extend? So the main stumbling block --
>>: [inaudible].
>> Daniel Wichs: Yeah. Exactly. If you have a two round argument with
static soundness, the attacker first chooses the statement, you get to send a CRS,
and he should be able to give you a valid proof. So you can rewind: for the same
statement you can get many proofs.
>>: You have to show the rest mechanically how you [inaudible].
>> Daniel Wichs: Uh-huh. And exactly, our thing breaks down in that case. All
right.
>> Vinod Vaikuntanathan: Let's thank our speaker.
[applause]
>>: Comments about the other?
>>: Write down the definition of falsifiable assumption.
>>: Exactly.
>>: What is the best -- is your guys' paper now the best write-up of falsifiable
assumption definitions?
>> Daniel Wichs: We made up a new definition, different from Moni's, because
Moni was trying to actually talk about differences among kind of standard
assumptions like RSA and discrete log, and there are more minor differences that
you could talk about, where one assumption is more falsifiable than another. So
we gave a much broader definition, because that's what we care about. We just
wanted to have a broad separation; that's the best for us.
So, for example, [inaudible] I think didn't talk about interactive games and things
like that. In some sense those assumptions are less believable, right, than
noninteractive ones. Less falsifiable.
>>: And are there other papers with write-ups of definitions of falsifiable assumptions
following [inaudible]?
>> Daniel Wichs: I don't know about that. Yeah. No -- yeah.
>> Vinod Vaikuntanathan: That's good.