>> Nishanth Chandran: We are very happy to have Xin Li from the University of
Washington with us today. Xin got his PhD working with David Zuckerman at the
University of Texas at Austin and is now a Simons postdoctoral fellow at the
University of Washington. He has done a lot of very nice work in the area of randomness
extractors and their applications to cryptography, and he is going to be talking to us about
privacy amplification and non-malleable extractors. Xin Li.
>> Xin Li: Thank you. This is joint work with Yevgeniy Dodis, Trevor Wooley and
David Zuckerman. The title is Privacy Amplification and Non-Malleable Extractors via
Character Sums. Let me start with an introduction to these problems. First, the setting is
that you have two parties involved. They share some random string, but the random
string is not uniform. Instead it is just some weakly random string, which I am going to
define formally later. Note that this string is secret in the sense that it is not known to the
adversary, who is watching the channel between them. We also assume that these two
parties have local randomness, which is private in the sense that it is not known to the
adversary. So what is the problem? The problem is basically that the two parties, in the
presence of the adversary, want to somehow communicate with each other such that at the
end of the protocol they agree on a shared, uniform and private random string.
So these two strings are going to be the same for both parties, and they are supposed to be
uniform and private in the sense that they are not known to the adversary. So that is the
basic setting. In this talk, and also in our actual work, we assume that Eve is
computationally unbounded. If Eve were computationally bounded then we could use
standard cryptographic primitives to deal with this problem, but we are not assuming that.
Instead we assume that Eve has unlimited computational power, so this framework is a
purely information-theoretic setting.
We assume that the two parties share a weakly random string, and the formal model for a
weakly random string is called a weak random source. Basically, if we have an n-bit
string, then a weak random source is just some random variable, or probability
distribution, over n-bit strings. But then you need a way to measure the entropy of the
weak source, and here we use the standard notion of min-entropy. Saying that the source
has min-entropy k basically means that for any point in the support, the probability that
the random variable takes that value is at most 2 to the minus k. For this talk, and also
for most of the applications using weak random sources, you can simply think of the
weak random source as the uniform distribution over some large subset A of size at least
2 to the k. That is going to make our proofs simpler. So just assume that the source is
the uniform distribution over a subset A.
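As a sketch, the min-entropy condition above can be checked in a few lines of Python. The distributions here are made up purely for illustration; a flat source over a set A of size 2^k has min-entropy exactly k.

```python
from math import log2

def min_entropy(dist):
    """Min-entropy H_inf(X) = -log2(max_x Pr[X = x]) of a distribution
    given as a dict mapping outcomes to probabilities."""
    return -log2(max(dist.values()))

# A flat source: uniform over a subset A of {0,1}^n with |A| = 2^k.
n, k = 4, 2
A = ["0000", "0001", "0010", "0011"]            # |A| = 4 = 2^k
flat = {x: 1.0 / len(A) for x in A}
assert abs(min_entropy(flat) - k) < 1e-9        # min-entropy is exactly k

# A non-flat source still has min-entropy >= k as long as every
# point has probability at most 2^-k.
skewed = {"0000": 0.25, "0001": 0.25, "0010": 0.2, "0011": 0.2, "0100": 0.1}
assert min_entropy(skewed) >= k
```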
>>: [inaudible] go from only a [inaudible] source?
>> Xin Li: Sorry?
>>: Can you always go from a min-entropy source to a flat source?
>> Xin Li: The thing is that every weak source with min-entropy k is actually a convex
combination of flat sources. As the title suggests, I am going to talk about
randomness extractors. So what is a randomness extractor? It is supposed to be an
algorithm that is given as input some weak random source with some entropy k, and the
algorithm is supposed to output, say, m bits that are close to uniform. We allow the
output to have some small statistical error epsilon, measured in statistical (variation)
distance. But the problem is that if you want the extractor to be a deterministic algorithm,
then this task is actually impossible, even if the random source has entropy as high as
n - 1, and even if you only want to output one bit. That is impossible if you want the
extractor to be a deterministic algorithm.
Given this negative result, there are several different ways to attack the problem. One of
these is what we now call a seeded extractor, which was introduced in the paper by Nisan
and Zuckerman and then subsequently studied in many research papers. Here the
extractor is not a deterministic algorithm. Instead it is a probabilistic algorithm, in the
sense that it is also given a small random seed, which we will call Y, of d bits. The
seed is going to be independent of the weak random source. And then with the seed you
can output m bits with small statistical error. The goal here is that you want the seed
length d to be much smaller than the length of the source. Typically what we want is d to
be something like O(log n) when epsilon is constant.
This kind of object, the seeded extractor, has been studied a lot, and we now have nearly
optimal constructions of these extractors. We are also going to need the notion of a
strong extractor. Basically this says that the output distribution is close to uniform even
given the seed Y; that is, the joint distribution of the output and Y is close to a uniform
distribution.
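The "strong" condition can be illustrated numerically with a one-bit inner-product extractor, a standard example (the specific source below is randomly generated for the demonstration): for almost every fixed seed, the output bit over the source is close to balanced.

```python
import random

def strong_ext(x, y):
    """One-bit strong seeded extractor sketch: the inner product of the
    source sample x with the seed y over GF(2).  (As discussed later in
    the talk, this is a good STRONG extractor but a bad non-malleable
    one.)"""
    return sum(a * b for a, b in zip(x, y)) % 2

random.seed(0)
n = 8
points = [tuple(int(b) for b in format(v, "08b")) for v in range(2 ** n)]
A = random.sample(points, 64)           # a flat source with min-entropy k = 6

# "Strong" means the output is near-uniform even GIVEN the seed: for
# almost every fixed seed y, the output bit over the source is balanced.
balanced = 0
for y in points:                        # enumerate all 2^n seeds
    ones = sum(strong_ext(x, y) for x in A)
    if abs(ones / len(A) - 0.5) < 0.25:
        balanced += 1
assert balanced > 0.9 * 2 ** n          # all but a handful of seeds are fine
```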
Given the previous strong extractor, here is a protocol for the privacy amplification
problem where the adversary is assumed to be passive. By passive I mean that the
adversary is only allowed to watch the messages transmitted between the two parties;
she is not allowed to change the messages in any way. In this setting there is a simple
protocol using a strong seeded extractor. First, Alice picks a uniformly random seed Y
from her own local random bits and sends this string to Bob. Then they both apply the
strong seeded extractor, and the claim is that they end up with a shared, uniform and
private random string. Why is this the case? It is because the adversary cannot change
the seed, so both Alice and Bob get the same seed Y. Now they apply the extractor, and
because it is strong, even conditioned on the seed Y the output is close to uniform. The
only information that Eve can get is the seed Y, and even conditioned on it the output is
close to uniform, which means the output is private from the adversary. So that is a
simple protocol.
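The passive-adversary protocol above can be sketched as follows. The one-bit inner-product extractor is a stand-in for a real multi-bit strong extractor, and the `channel` parameter (a name introduced here) models what Eve does to the seed in transit.

```python
import random

def ext(x, y):
    """Stand-in strong seeded extractor (one-bit inner product over GF(2));
    a real protocol would extract many bits."""
    return sum(a * b for a, b in zip(x, y)) % 2

def passive_protocol(x, n, channel=lambda m: m):
    """Alice samples a fresh seed from her private randomness and sends it
    in the clear; both sides apply the strong extractor to the shared weak
    secret x.  `channel` models Eve's handling of the seed in transit."""
    y = tuple(random.randrange(2) for _ in range(n))   # Alice's local coins
    y_bob = channel(y)                                 # passive Eve: y_bob == y
    return ext(x, y), ext(x, y_bob)                    # (Alice's, Bob's) output

random.seed(1)
n = 16
x = tuple(random.randrange(2) for _ in range(n))       # the shared weak secret

ra, rb = passive_protocol(x, n)
assert ra == rb          # a passive Eve cannot break agreement

# An ACTIVE Eve who flips a bit of the seed may desynchronize the outputs,
# which is exactly why this protocol fails against active adversaries.
ra2, rb2 = passive_protocol(x, n, channel=lambda m: (1 - m[0],) + m[1:])
assert rb2 in (0, 1)     # the outputs need no longer agree
```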
If you think about it, if we now allow the adversary to be active, in the sense that she can
indeed change the messages, then the above protocol actually fails. It fails for a simple
reason: the adversary can change the random string Y into something else in transit, and
if you then apply the strong extractor, the two outputs may no longer be equal to each
other. So in this case the protocol fails.
So what do we mean by an active adversary? It is a very powerful adversary, in the sense
that it can arbitrarily change, insert, delete, modify and reorder the messages. For
example, the adversary can first run several rounds with one party before it decides to
resume execution with the other party, meaning that the protocol may not behave
synchronously between the two parties. So an active adversary is quite a powerful
adversary.
In this talk, as well as in our work, we assume that the adversary is active, and we are
trying to deal with this kind of adversary. Here is the formal definition of a privacy
amplification protocol. We assume that we have an active adversary, and in the end the
two parties, Alice and Bob, are supposed to output some strings, but they can also output
a special rejection symbol, which just indicates that the party rejects the protocol. For a
privacy amplification protocol, we require the protocol to have three properties. The
first of these is called correctness. This is simple: it just says that if the adversary is
passive, then with probability one the two outputs of the two parties are the same, and
neither of them is the rejection symbol. That is quite intuitive.
Then there is robustness, which says that if Eve is indeed active, then the probability that
Eve can corrupt the protocol is small. What do I mean by Eve corrupting the protocol?
Eve corrupts the protocol if she can somehow manage to make the two outputs RA and
RB not equal to each other, with neither of them the rejection symbol. We want the
probability that Eve can do this to be small.
The third property is the extraction property, which says that if with high probability a
party does not reject, then his or her output is going to be close to uniform even in the
adversary's view. That is the original goal of the protocol: to convert the weak random
source into a uniform string. Now we have two parameters of particular interest. First is
K minus M, where M is the length of the output and K is the entropy of the original
weak random source, so K minus M is the entropy loss of the protocol. We also have the
parameter epsilon, and we define S = log(1/epsilon), which is the security parameter of
the protocol. Sure?
>>: How can you guarantee that the [inaudible] can you also call it bad then say
[inaudible]?
>> Xin Li: If one of them gets a uniform string and the other one gets the bottom symbol,
then that is okay, because that party just rejects the protocol.
>>: But can you get the one person [inaudible] and the other person [inaudible]?
>> Xin Li: One person always knows the other person fails?
>>: [inaudible]. Maybe not the other way around.
>> Xin Li: Yeah, it may not be this case.
>>: [inaudible].
>>: Right. So I see: A doesn't know that B got the last message, but B should always
know that A got the message.
>>: It's a one-way [inaudible].
>>: [inaudible].
>> Xin Li: That's right.
>>: [inaudible] the properties.
>> Xin Li: That's right.
>>: Thanks.
>> Xin Li: So, any other questions? Is the definition clear? Okay. Now this table shows
some of the previous works. Remember S is the security parameter. Basically the
ultimate goal for a privacy amplification protocol is a protocol with as small a number of
rounds as possible and as small an entropy loss as possible. The first result is due to
Maurer and Wolf in 1997: a one-round protocol, which is pretty good, but it only works
for entropy rate greater than two thirds, that is, K greater than two thirds of N, and the
entropy loss is pretty big, something like N minus K.
This protocol was later improved by Dodis, Katz, Reyzin and Smith to work for the case
where the entropy K is greater than N over 2, but the entropy loss is still big; it is one
round. Just to note, there is a negative result by Dodis and Wichs which basically shows
that if the entropy K is smaller than N over 2 then there is no one-round protocol, so in
that case the best you can get is a two-round protocol. The first protocol that breaks this
N over 2 barrier is due to Renner and Wolf in 2003. They gave a protocol which can
essentially work for any entropy, but it requires a number of rounds like S plus log N,
and the entropy loss is (S plus log N) squared. There is a simplification in a paper by
Kanukurthi and Reyzin, but the parameters are essentially the same. Then there is the
paper by Dodis and Wichs, who reduced the number of rounds to two, but the entropy
loss is still (S plus log N) squared. And then there was a paper by Chandran, Kanukurthi,
Ostrovsky and Reyzin, who managed to get the entropy loss down to something like S
plus log N, but the number of rounds blows up to also something like S plus log N. Just
note that an optimal protocol was provided in the paper by Dodis and Wichs, but it is
non-explicit: they showed that non-explicitly there exist optimal protocols which achieve
two rounds and entropy loss O(S plus log N).
So all of the previous results achieve the optimum in only one of these two parameters:
you can achieve two rounds, but then the entropy loss is bigger; or you can achieve
optimal entropy loss, but then the number of rounds is bigger, S plus log N. The open
problem is whether you can achieve the optimum in both of these parameters. In the
paper by Dodis and Wichs they showed that these optimal results can actually be
achieved by using something called a non-malleable extractor, which I am going to talk
about later. So these are the previous results.
Our results: in this paper we give two improvements of previous results. In the case
where the entropy is bigger than N over 2, we get a two-round protocol; recall that in the
one-round protocols the entropy loss has to be much bigger, N minus K. Here we get a
two-round protocol with optimal entropy loss, S plus log N. In the case where you only
have entropy rate delta, for an arbitrary constant delta, we get a protocol that is pretty
close to the optimal result: a constant-round protocol that also has optimal entropy loss,
S plus log N.
>>: [inaudible].
>> Xin Li: This constant depends on the delta here, but as long as delta is constant, this
is also constant. We obtain these two improvements by constructing the first explicit
non-malleable extractor. In the rest of the talk I am going to first talk about the
non-malleable extractor construction and say a few words about the proof. Then I will
talk about the privacy amplification protocol for entropy rate delta, for any constant
delta. And I will say a few words about the two-round protocol, which is actually a direct
corollary of the result of Dodis and Wichs.
So now what is a non-malleable extractor? It was defined in this paper by Dodis and
Wichs. I will first start with a seeded extractor, as you have seen before. This is simple:
you have an extractor, you have a seed, and the output is supposed to be close to
uniform. This notation just means that the output is within statistical distance epsilon of
the uniform distribution. A stronger notion of the seeded extractor, which you have also
seen before, is the strong seeded extractor, which requires that even given the seed Y,
the output is still close to uniform.
So now the non-malleable extractor. You can think of it as an even stronger notion than
the strong seeded extractor. Here you are not only given the seed Y, but you are also
given the output of the non-malleable extractor on some correlated seed A(Y). This
A(Y) is assumed to be not equal to the original seed Y, but apart from this, A(Y) can be
arbitrarily correlated with the seed Y. We now require that even if you are given the
seed Y, and you are also given the output of the non-malleable extractor on the
arbitrarily correlated seed A(Y), the original output of the extractor is still close to
uniform. And just note that the output length cannot be bigger than the entropy over 2
here. So is this definition clear? Any problems?
>>: Can you explain [inaudible]? So if some intrusion has already leaked above Y,
[inaudible]?
>> Xin Li: Yeah, actually the seed Y is completely leaked to the adversary.
>>: So then--okay so [inaudible] is chosen [inaudible] uniform?
>> Xin Li: Yes, Y is chosen uniformly at random. The adversary can also choose
another seed A(Y) which is not equal to Y, but apart from this condition she can choose
whatever she wants; A(Y) can be arbitrarily correlated with the seed Y. The adversary
then also gets the non-malleable extractor's output on this seed A(Y). This means that
she now gets two pieces of information: she gets the seed Y, and she gets the output on
the other seed A(Y). We are requiring that even if the adversary gets these two pieces of
information, the output of the non-malleable extractor on the original seed Y is still close
to uniform.
>>: So basically you're saying that it is just like it is [inaudible] to the extractor, but now
even if you [inaudible] challenge for the [inaudible] from the extractor so the [inaudible].
>> Xin Li: Right.
>>: That is what you are trying to say.
>> Xin Li: So it is actually much stronger than the strong seeded extractor, in the sense
that the adversary gets the other output here.
>>: So [inaudible] extractor and the other gets the [inaudible] and if they were correlated
[inaudible] even if they were not correlated [inaudible] be uniform [inaudible] cannot be
correlated and [inaudible].
>> Xin Li: I am sorry. Say that again.
>>: So think of this as [inaudible], right? And the adversary is supposed to output a
string. [inaudible] output looks correlated [inaudible] the string?
>> Xin Li: Yes, that is right.
>>: And he can do that. [inaudible] uniform so they will not [inaudible] on the left
[inaudible]. That is why it is not [inaudible].
>> Xin Li: Yeah, right.
>>: Not just [inaudible] distinguish the [inaudible]. Much weaker requirement
[inaudible]. Compared to [inaudible].
>> Xin Li: Right, but here we are considering the information-theoretic setting, so
[inaudible] statistical distance. Okay. Any other questions?
Just to give you some sense of the non-malleable extractor, here is an instructive
example. Consider the inner product extractor: you take the source X, you take an
independent seed Y (here Y has the same length as the source), and you just take the
inner product over F2. This is actually a very good strong extractor; it works for
essentially any entropy. But it turns out to be a very bad non-malleable extractor; it is
not a non-malleable extractor even when the entropy K is something like N - 1. To see
this, consider the source X whose first bit is fixed to be zero and whose remaining bits
are uniform. You can see that this source has entropy N - 1. But now the adversary,
given any seed Y, can just make A(Y) the string that has the first bit of Y flipped, with
the rest of the bits the same. Then you can easily check that the inner product of X with
Y is always the same as the inner product of X with A(Y). This means that this
extractor, despite being a very good strong extractor, is a very bad non-malleable
extractor.
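The counterexample is mechanical enough to verify exhaustively in a few lines (toy parameters; `adv` is the tampering function described above):

```python
import random

def ip(x, y):
    """Inner product over GF(2): a good strong extractor, but a bad
    non-malleable extractor."""
    return sum(a * b for a, b in zip(x, y)) % 2

random.seed(2)
n = 8
# Adversarial source: first bit fixed to 0, rest uniform (min-entropy n - 1).
x = (0,) + tuple(random.randrange(2) for _ in range(n - 1))

def adv(y):
    """Eve's tampering function: flip the first bit of the seed."""
    return (1 - y[0],) + y[1:]

# For EVERY seed y we have adv(y) != y, yet ip(x, y) == ip(x, adv(y)):
# the output on the tampered seed reveals the real output exactly.
for y_val in range(2 ** n):
    y = tuple(int(b) for b in format(y_val, "08b"))
    assert adv(y) != y
    assert ip(x, y) == ip(x, adv(y))
```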
This suggests that constructing a non-malleable extractor is not a trivial task. In this
paper by Dodis and Wichs, they showed that if you can construct a non-malleable
extractor for entropy K, then you can actually get an optimal protocol for essentially the
same entropy. And they managed to show that such non-malleable extractors exist. If
you think about it, it is not even clear that these kinds of objects exist. But they showed
that they do indeed exist, with seed length d something like log N, and you can extract
up to something like K over 2 bits. But they didn't give any construction of an explicit
non-malleable extractor; they only constructed some weaker forms of non-malleable
extractors, which I am not going to describe.
In this paper we give the first explicit non-malleable extractor. Before our work there
was no known explicit non-malleable extractor for any parameters. The non-malleable
extractor that we construct works for entropy greater than N over 2. It uses seed length
N, the same as the length of the source, and it outputs something like K minus N over 2
bits. So if K is (1/2 + delta)N then we can output a linear number of bits. But if we want
more than log N bits, then the efficiency of our construction relies on some well-known
conjectures about primes in arithmetic progressions; I am not going to get into the
details.
The corollary we get is a two-round privacy amplification protocol, if we assume these
conjectures; this is a direct corollary of the result of Dodis and Wichs. Then, using this
non-malleable extractor, even in the case where the entropy is below N over 2, we
manage to get a new protocol that runs in a constant number of rounds with optimal
entropy loss. There is one very nice subsequent work by Cohen, Raz and Segev, who
also gave constructions of non-malleable extractors for entropy greater than N over 2.
They can use much shorter seed length, something like O(log N), but their output is
actually smaller than the seed length, which means that if you want to output a large
number of bits, then the seed length is again forced to be large. Also, their construction
does not rely on any unproven assumptions. In fact, our construction can be viewed as a
special case of their construction. After seeing their nice paper we also showed that our
original non-malleable extractor works even if the seed is not uniform but only has some
min-entropy, and in turn we can also achieve the small seed length as in their
construction. But this work was only done after we had seen their paper.
Now I am going to talk a little bit about our construction of the non-malleable extractor.
If we want to output one bit, then the construction is actually simple: it is just the Paley
graph construction over some finite field Fq. We let chi(s) be the quadratic character: if
s is the square of some element then chi(s) is 1, otherwise it is -1, and if s equals zero
then chi(s) is 0. You can then conveniently convert this +1/-1 into 0 and 1. Also note
that the quadratic character is a multiplicative character, which means that chi(s times t)
equals chi(s) times chi(t).
Our one-bit extractor is simple: you take the source X, you take the independent seed Y,
and you compute the quadratic character of X plus Y over the field Fq. This extractor
has been studied before, in this paper by [inaudible] and [inaudible], but they studied it
in the context of two-source extractors; we show it is also a non-malleable extractor. If
we want M output bits, we basically choose some prime q that is congruent to 1 mod 2
to the M and let g be a generator of the multiplicative group of Fq. The conjecture I
mentioned before is needed so that such a prime q can be found efficiently. Then the
extractor is just the discrete log of X plus Y, mod 2 to the M, where M is the number of
output bits. It turns out that we can compute this discrete log mod 2 to the M in an
efficient way.
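The one-bit construction above can be sketched directly: the quadratic character is computable by Euler's criterion (chi(s) = s^((q-1)/2) mod q), and the extractor is chi(x + y) mapped to a bit. The prime and the mapping of the chi = 0 case are illustrative choices.

```python
def quadratic_character(s, q):
    """Quadratic character chi(s) over F_q (q an odd prime), via Euler's
    criterion: s^((q-1)/2) mod q is +1 for nonzero squares and q-1 (i.e. -1)
    for non-squares; chi(0) is defined to be 0."""
    if s % q == 0:
        return 0
    return 1 if pow(s, (q - 1) // 2, q) == 1 else -1

def nm_ext_one_bit(x, y, q):
    """The one-bit Paley-graph construction from the talk:
    nmExt(x, y) = chi(x + y) over F_q, with {+1, -1} mapped to {0, 1}
    (the chi = 0 case is folded in arbitrarily)."""
    return 0 if quadratic_character(x + y, q) == 1 else 1

q = 101                                 # a small odd prime, for illustration
# Multiplicativity of the character: chi(s*t) = chi(s) * chi(t).
for s in range(1, q):
    for t in range(1, q):
        assert quadratic_character(s * t, q) == \
               quadratic_character(s, q) * quadratic_character(t, q)
```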
>>: [inaudible].
>> Xin Li: Huh?
>>: [inaudible] prediction or?
>> Xin Li: No. This does not rely on the conjecture. The conjecture is just to find this
prime Q.
>>: So you are putting the conjecture [inaudible], so even if, for example, finding this
prime takes time, you need to do it only once for every application.
>> Xin Li: Yes. Exactly. Okay, so now I am going to say a few words about the
analysis of why this is a non-malleable extractor. To show that it is a non-malleable
extractor, it suffices to show that for most seeds Y, first of all, the output is close to
uniform. This already follows from the fact that this function is a strong extractor; I am
not going to go into the details, but this has already been proved in an older paper. The
main thing we need to prove, in the one-bit case, is that the XOR of the two outputs, on
Y and on A(Y), is still close to uniform; that means the output on the tampered seed is
uncorrelated with the original output. To prove this, the main tool we use is a character
sum estimate. This is a little messy, but the main thing is that this S you can think of as
the support of the weak random source, which is a subset; the weak random source you
can think of as just the uniform distribution over the set S. And A is the adversary's
function from Fq to Fq: when the adversary sees Y she changes it to A(Y). To show the
claim, it suffices to show that this character sum is small: when you convert the XOR of
the two output bits into quadratic characters, it is equivalent to the product chi(X + Y)
times chi(X + A(Y)), so basically you want to bound the sum of these products.
The main theorem used to prove this is Weil's theorem. Basically it says that if f is a
monic polynomial with r distinct roots which is not a square, then the absolute value of
the sum of the quadratic character of f(z), where z ranges over Fq, is bounded by
something like (r - 1) times the square root of q. The trivial bound is something like q,
but here the bound is much smaller, on the order of the square root of q. Using this we
can bound the quantity here. I am not going to go into the details; I am going to skip
this. Let me just say that in the end we end up estimating a quantity like this, and the key
point is that this H is basically a polynomial, and you can check that this H is a square if
and only if y equals z, or y equals A(z) and z equals A(y). Then you can divide it into
two cases, bound this thing by considering the two cases of whether it is a square or not,
and you get this bound here. Just note that to analyze the case where you extract more
bits, you need to estimate this kind of sum for any nontrivial character, and to do that we
need some kind of non-uniform [inaudible].
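Weil's bound can be sanity-checked numerically for a concrete polynomial. The polynomial and prime below are illustrative; f(z) = z(z+1)(z+2) is monic, squarefree (distinct roots), and of odd degree so it cannot be a square, and all three of its roots lie in F_q, so counting roots over F_q matches the theorem's r.

```python
def weil_bound_check(coeffs, q):
    """Check the Weil bound for a quadratic-character sum: for monic,
    non-square f with r distinct roots, |sum_z chi(f(z))| <= (r-1)*sqrt(q).
    `coeffs` lists the coefficients of f, highest degree first; q is an
    odd prime."""
    def chi(s):
        if s % q == 0:
            return 0
        return 1 if pow(s, (q - 1) // 2, q) == 1 else -1

    def f(z):                        # Horner evaluation of f mod q
        v = 0
        for c in coeffs:
            v = (v * z + c) % q
        return v

    total = abs(sum(chi(f(z)) for z in range(q)))
    roots = sum(1 for z in range(q) if f(z) == 0)
    return total, (roots - 1) * q ** 0.5

q = 103
# f(z) = z^3 + 3z^2 + 2z = z(z+1)(z+2): 3 distinct roots in F_q.
char_sum, bound = weil_bound_check([1, 3, 2, 0], q)
assert char_sum <= bound             # ~20.3 here, far below the trivial bound q
```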
For the rest of the talk I am going to describe the privacy amplification protocol for
entropy rate K equals delta N, for any constant delta. To describe the protocol, first I am
going to introduce a cryptographic primitive called a message authentication code, or
simply a MAC. So what is a message authentication code? Again you have two parties,
but here we assume that instead of sharing some weakly random string, they share a
uniform and private string R. And there is the adversary, who is active, computationally
unbounded, and controls the channel between the two parties. What does this MAC do?
Alice picks a uniformly random string Y, chosen from her local uniform random bits,
and sends the string Y together with some tag T. This tag T is computed by the MAC
using the shared random string R as a key: the MAC uses the key R to compute a tag T
for the message Y, and Alice sends Y and T to Bob. But as usual, the adversary can
change the messages into something else; she can try to change them to some Y prime
and some T prime. The main property that we want from the message authentication
code is that if the adversary does not know the shared uniform string R, then the
probability that the adversary can come up with a different string Y prime, together with
a correct tag T prime for this message Y prime, is small; it is bounded by some epsilon.
That is the main property that we need from a message authentication code. So is this
clear? Any questions?
Just a note: this MAC works in the case where R is uniformly random, but there are
constructions of message authentication codes which work even if R is not completely
uniform; if R has entropy rate greater than one half then it still works. Entropy rate here
just means the entropy divided by the length of the string R. So we have this kind of
MAC that works even if R is not completely uniform.
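A classical one-time information-theoretic MAC can be sketched as follows (the affine scheme tag(m) = a*m + b is a standard example, not necessarily the one used in the talk): because the family is pairwise independent, seeing one (message, tag) pair leaves every tag for a different message equally likely.

```python
import random

def mac_tag(key, message, p):
    """One-time information-theoretic MAC sketch: key = (a, b) uniform over
    F_p x F_p and tag(m) = a*m + b mod p.  An unbounded Eve who sees one
    (message, tag) pair forges a valid tag for a DIFFERENT message with
    probability only 1/p."""
    a, b = key
    return (a * message + b) % p

p = 10007                      # a small prime; real keys would be longer
random.seed(3)
key = (random.randrange(p), random.randrange(p))
m = 1234
t = mac_tag(key, m, p)

# From Eve's point of view, every key consistent with (m, t) assigns a
# DIFFERENT tag to a new message m2 != m, so any forged tag is correct
# with probability exactly 1/p.
m2 = 4321
consistent_keys = [(a, (t - a * m) % p) for a in range(p)]
tags_for_m2 = {mac_tag(k, m2, p) for k in consistent_keys}
assert len(tags_for_m2) == p
```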
Given the message authentication code, here is the protocol for the case where the
entropy rate is greater than one half. This is basically the protocol by Dodis and Wichs
plus our construction of a non-malleable extractor. The two parties, Alice and Bob,
share an (N, K) weak random source, meaning that the source has length N and entropy
K. Again, the first step is that Alice chooses a random string Y from her local random
bits and sends it to Bob; this could be changed by the adversary into something else, Y
prime. Then both parties use the non-malleable extractor to compute outputs: Alice
computes the non-malleable extractor applied to X and Y, and Bob computes the
non-malleable extractor applied to X and Y prime. So now they have two outputs, R and
R prime, which may not equal each other. Then, in the next round, Bob chooses some W
prime, a uniformly random string from his own local random bits, and sends W prime
together with a tag T prime to Alice. This tag is computed by using his output R prime
as the MAC key, to get a tag T prime for his message W prime. Alice may receive
something completely different, some W and T. Alice then checks the MAC: she checks
whether this tag is the correct MAC tag for the message she receives, and if not, Alice
rejects. Otherwise, if Alice does not reject, she computes her output, and Bob always
computes his own output Z prime.
I claim that this is a privacy amplification protocol, and the analysis turns out to be
simple; you just consider two cases. The first case is that the adversary Eve does not
change the seed Y. If the adversary does not change Y, then Y prime and Y are
obviously the same, which means that R prime and R are also the same. And since the
non-malleable extractor is also a strong seeded extractor, R prime and R are the same
and are private in Eve's view. Then, by the property of the MAC, Bob can authenticate
the message that he sends to Alice: if the adversary wants to change it to some W and T,
the probability that Alice does not reject is going to be small, by the security of the
MAC. On the other hand, if the adversary does change Y into something different, Y
prime, then the non-malleable extractor property guarantees that R prime is essentially
independent of R; even conditioned on R prime, R is close to uniform. So if Eve sees
the MAC tag T prime, this only comes from the output R prime, which is essentially
independent of R. Given this information, the adversary cannot come up with the
correct tag for her message W, which means that in this case Alice will also reject with
high probability. So basically there are two cases: if Eve doesn't change Y, then the
MAC guarantees that Bob can get his message to Alice; if Eve does change Y, then the
non-malleable extractor guarantees that Alice will reject with high probability. Is this
protocol clear?
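The message flow of the two-round protocol can be sketched end to end. To keep it self-contained, `nm_ext` below is a toy placeholder key derivation, NOT an actual non-malleable extractor; it only lets us exercise the rounds, the MAC check, and Alice's reject behavior.

```python
import random

def nm_ext(x, y, p):
    """PLACEHOLDER standing in for the non-malleable extractor: derives a
    MAC key (a, b) by mixing x and y mod p.  This is NOT a real
    non-malleable extractor; it only demonstrates the protocol's flow."""
    return ((x * y + x + y) % p, (x * x + y) % p)

def mac(key, msg, p):
    """One-time MAC: tag(m) = a*m + b mod p."""
    a, b = key
    return (a * msg + b) % p

def two_round(x, p, tamper_seed=None):
    """Round 1: Alice sends seed y (Eve may replace it with y').
    Round 2: Bob sends a fresh w authenticated under HIS derived key R';
    Alice verifies with her own derived key R and rejects on mismatch."""
    y = random.randrange(1, p)
    y_bob = tamper_seed(y) if tamper_seed else y
    key_alice = nm_ext(x, y, p)       # R  = nmExt(X, Y)
    key_bob = nm_ext(x, y_bob, p)     # R' = nmExt(X, Y')
    w = random.randrange(p)           # Bob's round-2 message
    t = mac(key_bob, w, p)
    if mac(key_alice, w, p) != t:     # Alice checks the tag
        return None, w                # Alice rejects; Bob still outputs
    return w, w                       # both sides agree

p = 10007
random.seed(4)
x = random.randrange(p)               # the shared weak secret

# Honest run: keys match, the tag verifies, outputs agree.
out_a, out_b = two_round(x, p)
assert out_a == out_b and out_a is not None

# Eve tampers with the seed: the derived keys differ, so the tag almost
# never verifies and Alice rejects (except with probability O(1/p)).
rejections = sum(
    1 for _ in range(50)
    if two_round(x, p, tamper_seed=lambda y: (y + 1) % p)[0] is None
)
assert rejections >= 45
```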
And then, by plugging in a non-malleable extractor for entropy rate greater than a half, we get this two-round protocol. And just note that all of these messages Y, W and T prime have length something like s plus log n, so you get the optimal entropy loss. The analysis just splits into cases according to whether Eve changes the seed Y here.
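The message flow of this two-round protocol can be sketched in code. This is only a toy illustration of an honest (no-tampering) run: the hash-based `nm_ext`, `ext` and `mac` helpers below are stand-ins I am assuming for demonstration, not the actual non-malleable extractor, strong seeded extractor, or information-theoretic MAC from the talk.

```python
import hashlib
import secrets

def nm_ext(x: bytes, seed: bytes) -> bytes:
    # Toy stand-in for the non-malleable extractor nmExt(X, Y);
    # the real object is an explicit combinatorial construction.
    return hashlib.sha256(b"nmext" + seed + x).digest()

def ext(x: bytes, seed: bytes) -> bytes:
    # Toy stand-in for an ordinary strong seeded extractor Ext(X, W).
    return hashlib.sha256(b"ext" + seed + x).digest()

def mac(key: bytes, msg: bytes) -> bytes:
    # Toy stand-in for a one-time information-theoretic MAC.
    return hashlib.sha256(b"mac" + key + msg).digest()[:16]

def two_round(x: bytes):
    """Honest run of the two-round protocol over shared weak source x."""
    # Round 1: Alice sends a fresh random seed Y.
    y = secrets.token_bytes(16)
    r_alice = nm_ext(x, y)          # Alice's key R = nmExt(X, Y)
    r_bob = nm_ext(x, y)            # Bob receives Y' = Y, so R' = nmExt(X, Y')
    # Round 2: Bob sends a seed W with a tag computed under R'.
    w = secrets.token_bytes(16)
    t = mac(r_bob, w)
    # Alice checks the tag under her key R and rejects on mismatch.
    if mac(r_alice, w) != t:
        return None, None           # Alice rejects
    return ext(x, w), ext(x, w)     # both parties output Z = Ext(X, W)

z_alice, z_bob = two_round(secrets.token_bytes(32))
assert z_alice is not None and z_alice == z_bob
```

In a real run Eve sits between the parties and may replace Y and (W, T); the case analysis above is exactly about what happens when she does.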
So now I am going to explain the privacy amplification protocol for the case where K is equal to Delta N. It basically builds on the previous protocol. So again we have Alice and Bob, and they share a weak random source here. We want to apply this non-malleable extractor for entropy rate greater than one half, but here the source only has a very small entropy rate Delta. So the first idea is that we can apply something called a somewhere condenser. What is a somewhere condenser? It can convert this weak random source with entropy rate Delta into a constant number of blocks. This constant number of blocks depends on this Delta, but nonetheless it is a constant number of blocks, such that at least one of the blocks has entropy rate greater than .9. So it is very natural that, once we convert into this, we want to somehow run the previous protocol on some of these blocks where the entropy rate is greater than half.
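The somewhere condenser's interface (not its construction) might be sketched like this. The hash below is purely a placeholder of my own: a hash cannot provide the actual guarantee, which is that at least one output block of the real combinatorial construction has entropy rate greater than .9.

```python
import hashlib

def somewhere_condense(x: bytes, num_blocks: int = 4) -> list:
    # Toy stand-in: map the weak source X (entropy rate Delta) to a
    # constant number of shorter blocks.  The real guarantee -- which
    # this hash-based toy does NOT provide -- is that at least one
    # block has entropy rate greater than 0.9.
    return [hashlib.sha256(bytes([i]) + x).digest()
            for i in range(num_blocks)]

blocks = somewhere_condense(b"a long weak random source string")
assert len(blocks) == 4            # a constant number of blocks
assert all(len(b) == 32 for b in blocks)
```

The point of the interface is only that the number of blocks is a constant depending on Delta, so the overall protocol can afford one phase per block.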
So indeed the idea is that you want to apply this two-round protocol to every block, and then we hope that at some of the intermediate blocks, where the entropy rate is bigger than half, the protocol will succeed. More specifically, we are going to associate a phase of the protocol with each block, and we want the final protocol to achieve the following two goals. The first goal concerns the phase of the first good block. By the first good block, I mean the first block that has entropy rate greater than one half; so in this graph, this block is bad, this block is good, and this is the first good block. We want to say that in the phase associated with this first good block, Alice and Bob agree on a private uniform random string. So at the end of this phase, the two parties agree on some private uniform random string.
The second goal is that once this is done, we somehow want to guarantee that all subsequent phases continue to agree on some private uniform random string. In that case we can just use the output from the last phase as the final output: they agree on a uniform random string, they continue to agree on a uniform random string, and so at the end they still agree on some uniform random string. Let's see how we can achieve the first goal. If you think about it, the first goal is to have the two parties agree on a uniform random string in the phase of the first good block. The first good block already has entropy rate greater than a half, but the messages sent before the first good phase may leak some information about it, so it may no longer have entropy rate greater than one half. The solution is that we can limit the size of the messages sent before the first good block. As long as the security parameter is not too large, we can limit the size of the messages; notice that there is only a constant number of blocks, so there is at most a constant number of rounds, and because the size of each message is small, the information leaked is small.
So you can guarantee that the first good block, even conditioned on all of the messages sent before it, still has entropy rate greater than one half. In this case, by plugging in the previous two-round protocol for entropy rate greater than half, we guarantee that we achieve the first goal: at the end of the phase of the first good block, the two parties will agree on some uniform random string that is private from the adversary.
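The entropy accounting behind this step can be checked with illustrative numbers of my own (not from the talk): with a constant number of messages before the good block, each of length about s + log n, the leakage cannot pull the good block's rate below one half.

```python
from math import log2

# Illustrative numbers only (assumed for this sketch, not from the talk).
m = 2000                  # length of a condensed block, in bits
s = 40                    # security parameter
n = 2 ** 20               # length of the weak source
msg_len = s + log2(n)     # each message has length about s + log n
rounds_before = 8         # at most a constant number of earlier messages
leaked = rounds_before * msg_len

# The first good block starts with entropy rate > 0.9; conditioning on
# all earlier (short) messages costs at most `leaked` bits of entropy,
# so the conditional rate stays above one half.
assert 0.9 * m - leaked > 0.5 * m
```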
Let's see how we can achieve goal two. This graph shows a particular phase, the phase for block Xi here, somewhere up to the phase of the first good block. So here this Xi has, say, entropy rate less than one half, as the picture shows. This is exactly the same two-round protocol as before; I didn't change anything, I just added the subscript i here. All the X's and Y's become Xi's and Yi's. So this is phase i: first Alice sends this Yi to Bob, the adversary may change it to Yi prime, and they compute this Ri and Ri prime. Then Bob chooses Wi prime and sends it to Alice together with a tag Ti prime computed using Ri prime, and Alice checks the MAC tag Ti she receives. If it is not correct, she rejects; otherwise she computes the output Zi, which is some extractor applied to Xi and Wi prime.
So this is exactly the same two-round protocol as before. But if you think about it, the protocol may fail, because here this Xi may have small entropy; the entropy rate might be smaller than half, so the previous two-round protocol may not work. Basically the MAC with key Ri may fail. So what are we going to do? The solution is to remember that when we first reached the phase of the first good block, at the end of that phase we already had the two parties agree on some uniform random string. So we can use that shared uniform random string from the previous round to construct another MAC; this Zi-1 is the output string from the previous round. We use it to construct another MAC, and we also use this MAC to authenticate the message Wi prime. So instead of using just one MAC, we will use another MAC as well to authenticate the sent message Wi prime. This seems to be good, except we have another problem, which is that the MAC tag computed using this key may leak information about this string Zi-1. But the solution is that there are MACs that are resistant even to leakage: even if the adversary knows some information about the MAC key Zi-1, as long as the leakage is not too much, as long as this Zi-1 still has entropy rate greater than one half, this MAC can still work, because it is a leakage resistant MAC.
So here is the modified protocol. Again this is the same two-round protocol; I didn't change anything. Now I am going to add another thing: they have the output from the previous phase, the Zi-1 here. This is a phase after the first good block, so the two parties already share some uniform random output from previous rounds, the Zi-1 here. Then, instead of just using this MAC to authenticate the Wi prime, we will use another tag, which we will call Qi prime; this is the LR-MAC, which stands for leakage resistant MAC. We will use this Zi-1 prime as a key for this leakage resistant MAC to authenticate this Wi prime and get another tag Qi prime. So Bob sends both of these tags to Alice, and of course Alice may get some modified version (W, T, Q). Then Alice, instead of just checking one MAC, will check both of these MACs, and she will reject if either one of these does not hold.
Otherwise they will just compute the output Zi as before. So now, if you think about it, at the end of the phase for the first good block, the two parties end up with some uniform random string, and this will guarantee that in the subsequent round they will also agree on some uniform random string. Then in the next round they can use this Zi, and they will again end up with some agreed uniform random string, and this keeps going on until the last phase.
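One phase of this modified protocol, with both tags, might be sketched as follows. As before, the hash-based helpers are toy stand-ins I am assuming for illustration, not the real extractors or the real leakage resistant MAC, and only the honest (untampered) execution is shown.

```python
import hashlib
import secrets

def nm_ext(x, seed):
    # toy stand-in for the non-malleable extractor
    return hashlib.sha256(b"nmext" + seed + x).digest()

def ext(x, seed):
    # toy stand-in for a strong seeded extractor
    return hashlib.sha256(b"ext" + seed + x).digest()

def mac(key, msg):
    # toy stand-in for the ordinary one-time MAC keyed with R_i'
    return hashlib.sha256(b"mac" + key + msg).digest()[:16]

def lr_mac(key, msg):
    # toy stand-in for the leakage resistant MAC keyed with Z_{i-1}
    return hashlib.sha256(b"lrmac" + key + msg).digest()[:16]

def phase(x: bytes, z_prev: bytes):
    """Honest run of phase i, after the first good block."""
    y = secrets.token_bytes(16)
    r_alice = nm_ext(x, y)
    r_bob = nm_ext(x, y)              # Eve leaves Y_i unchanged here
    w = secrets.token_bytes(16)
    t = mac(r_bob, w)                 # tag T_i' under R_i'
    q = lr_mac(z_prev, w)             # extra tag Q_i' under Z_{i-1}
    # Alice rejects unless BOTH tags verify.
    if mac(r_alice, w) != t or lr_mac(z_prev, w) != q:
        return None
    return ext(x, w)                  # Z_i, used as the key next phase

x = secrets.token_bytes(32)
z = secrets.token_bytes(32)           # output of the first good phase
for _ in range(3):                    # phases chain: Z_{i-1} keys phase i
    z = phase(x, z)
    assert z is not None
```

The chaining in the final loop is the point: once one phase ends in agreement, each later phase is keyed by the previous output.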
>>: [inaudible] leakage [inaudible].
>> Xin Li: Yes, it works even if the key is not uniform, as long as it has entropy rate greater than one half. Yeah?
>>: [inaudible].
>> Xin Li: Yeah, right. So basically the only thing that the adversary can learn about this Zi-1 prime is this Ti prime, because these Y's and these W's are all independent of it. So we can set the size of the tag to be smaller than half of the length of Zi-1; then this leakage resistant MAC will guarantee that Bob can authenticate this Wi prime to Alice.
So we are almost done. However, there is one more problem. The problem is that Eve may not behave synchronously, which means that she can talk to Bob many times before she talks to Alice. In this case, Alice and Bob may not have a synchronous view of the protocol, so they may not use the same Zi-1 to construct the MAC, which breaks the protocol. The solution to this last problem is that we can add a liveness test for both parties at the end of each phase. Basically, when Bob sends Wi prime to Alice, he will also require a response from Alice, obtained by applying some strong extractor to X and Wi. Bob will check this response, and if it is not equal to his own output, he will reject. And when Alice sends her response to Bob, she also sends her own challenge, which is Vi; Bob will compute a response and send it to Alice, and Alice will also check, and if this response doesn't match then she will reject.
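The mutual liveness test can be sketched like this. Again `ext` is a toy hash-based stand-in of my own for the strong extractor, and only the honest execution is shown; the security point is that without X, Eve cannot produce either response herself.

```python
import hashlib
import secrets

def ext(x, seed):
    # toy stand-in for the strong seeded extractor used for responses
    return hashlib.sha256(b"live" + seed + x).digest()

def liveness_test(x: bytes) -> bool:
    """Honest run of the mutual liveness test at the end of a phase."""
    # Bob's challenge is the seed W_i he already sent; Alice must
    # answer with Ext(X, W_i), which Eve cannot forge without X.
    w = secrets.token_bytes(16)
    alice_response = ext(x, w)
    if alice_response != ext(x, w):      # Bob's check
        return False                     # Bob rejects
    # Alice's own challenge V_i; Bob must answer with Ext(X, V_i).
    v = secrets.token_bytes(16)
    bob_response = ext(x, v)
    return bob_response == ext(x, v)     # Alice's check

assert liveness_test(secrets.token_bytes(32))
```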
This is basically saying that if Eve tries to talk to Bob many times before she talks to Alice, then she will not have a response from Alice. So she will have to fake a response from Alice to make Bob pass this response check. But that can only succeed with very small probability, because it is a strong extractor: if Eve does not know X, then this response is completely uniform in Eve's view. So the probability that Eve can come up with a correct response to Bob without the response from Alice is going to be very small, and in that case Bob will reject with high probability. So by adding this liveness test we can make sure that the protocol actually behaves synchronously, as we want. So that is--sure?
>>: [inaudible] liveness test thing even before you get to the first good round or you just
don't care?
>> Xin Li: What is that?
>>: So once you got to this good block, good round [inaudible] the liveness test, does it
also help before you get to the first good block?
>> Xin Li: You need to do the liveness test for every phase to ensure that the two parties behave synchronously.
>>: [inaudible].
>> Xin Li: If they are in different blocks…
>>: [inaudible].
>>: [inaudible] if Wi is random and we are not in a good block do we still get guarantee
with Wi?
>> Xin Li: Well, Wi prime is chosen from Bob's private local randomness.
>>: [inaudible].
>> Xin Li: So yes, just adding this liveness test to our previous protocol gives us our final protocol. In each phase there are only two rounds or so, and you get a constant number of rounds because you have a constant number of blocks. And because the size of each of these messages is something like s plus log n, the final entropy loss is only s plus log n.
So that is our final protocol, and here are some conclusions and open questions. We construct the first non-malleable extractor, for entropy greater than n over 2. So of course the first open problem is to construct non-malleable extractors for smaller entropy, say Delta n, or even n to the Delta. And just note that we recently got an improvement: we can construct a non-malleable extractor for entropy rate slightly below one half, something like .49. But it still remains open to construct non-malleable extractors for any constant entropy rate; even n to the .9 would be a very big improvement.
And then the second result is that we get this constant-round privacy amplification protocol for the case K equals Delta n. It would be interesting to get the number of rounds down to two; in that case we would get the truly optimal protocol. And again, even getting a constant-round protocol for the case where K equals n to the Delta, even n to the .9, would be a very big improvement over previous results. That is all. Thank you.
[applause].
>>: I have just one question. Is there a way to construct a non-malleable extractor with many, many bits from a non-malleable extractor of one bit by increasing the seed length?
>> Xin Li: That is actually related to another improvement that we have. As I said, this result by [inaudible] has the output smaller than the seed length. We have another improvement: given these kinds of non-malleable extractor constructions, we can use some other kinds of tools to get them to output a large number of bits. But that still means that this output has to be at least log n. So if you want to use a non-malleable extractor that outputs just one bit and convert it into a non-malleable extractor that outputs more bits, that is still open. We don't know how to do that.
>>: Also, what happens if you--so you have this protocol with this entropy loss and [inaudible] for splitting into a constant number of blocks. If you split it up into a [inaudible] number of blocks, what happens? You probably would not get better results than existing ones, or can you still make the protocol better?
>> Xin Li: Right. We have this constant-round protocol for K equals Delta n, but we can actually get to something slightly sublinear. But then we wouldn't get a constant number of rounds, and the entropy loss is not optimal, though it can still be better than s plus log n squared. But that is only slightly sublinear. If you go beyond that point, then the number of rounds would blow up to polynomial or something like that.
>>: [inaudible] W.
>>: Otherwise.
>>: Otherwise.
>> Xin Li: Oh, otherwise.
>> Nishanth Chandran: Let's say thanks again.
[applause].