>> Kristin Lauter: Okay. So today we're very pleased to have Chris Peikert
visiting us to speak about lattice-based cryptography. Chris is coming to
us from SRI. He did his PhD with Silvio Micali at MIT, and he's doing lots of
exciting work on lattice-based cryptography and lossy trapdoor functions
and lots of other fun things. So, thank you, Chris.
>> Chris Peikert: Thank you. Thank you, Kristin, and thank you everyone
for coming.
Before I start, I was warned that Kristin asks a lot of questions, and I encourage everyone else to do the same; we'll make it interactive. So
the topic for the talk is lattice-based cryptography. In modern cryptography over the past few decades, we've had some enormous
successes in this discipline. We can do some amazing and seemingly
impossible things. For example, we can run an election where every voter
can be assured that her vote counts in the final tally and yet nobody can
convince anyone of how they voted.
We can query a database without revealing to the database anything about what we're actually looking for. And a group of mutually untrusting and untrustworthy parties can play a game of virtual poker over the Internet without needing to employ any kind of trusted dealer or anything like that.
So all of these applications build upon a rich set of fundamental cryptographic primitives such as encryption for confidentiality, digital signatures, zero knowledge and other protocols. And all of this rests upon a foundation of fundamental mathematical problems that we believe to be hard or intractable, although we haven't been able to find proofs of these conjectures yet.
Now, looking at the breadth and depth of all the applications and primitives that we have, the foundations, to me and maybe to you also, might look a little bit narrow. Essentially all the schemes in common use rely upon one of two assumptions: either that factoring very large numbers is intractable or infeasible, or that solving the discrete logarithm problem over certain types of multiplicative groups is hard.
And many schemes rely on even stronger assumptions than these, but as a baseline they at least all need one of these two problems to be hard. So
let's take a little bit closer look at factoring schemes.
A typical operation in these schemes is what's called modular exponentiation, or exponentiation modulo some large number N that's the product of two primes. Concretely it looks something like this. We take some enormous number, we raise it to the power of some other enormous number, and then we take the remainder when divided by some other enormous number. These numbers look pretty big, but in fact this is a toy example. The real numbers that you would use in practice are about two to five times longer; not two to five times as large, but with two to five times as many digits. Okay?
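To make that operation concrete, here's a minimal Python sketch of modular exponentiation; the numbers are assumptions chosen for illustration (two Mersenne primes) and are still far too small for real use.

```python
# A toy sketch of the modular exponentiation at the heart of factoring-based
# schemes; these values are assumptions for illustration only.
p = 2305843009213693951            # 2^61 - 1, a Mersenne prime
q = 618970019642690137449562111    # 2^89 - 1, a Mersenne prime
N = p * q                          # public modulus; security needs N hard to factor

base, exponent = 1234567891011, 987654321
result = pow(base, exponent, N)    # Python's built-in fast modular exponentiation
```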
Now, in many cases this presents no problem. We have good algorithms
for performing these operations but in a lot of scenarios with restricted
devices or in high throughput scenarios, the cost of doing this a lot can
become prohibitive. And the reason is we have to use such large numbers
because we need the number N to be hard to factor.
Now, speaking of hard to factor, there's also a kind of uncomfortable and very surprising fact that was discovered by Peter Shor in 1994, which is that factoring is actually easy for quantum computers. More precisely, Shor gave an efficient algorithm in the model of quantum computation that will take in any number N, run on it for a while, and spit out the two factors, or however many factors there are. And I checked that. That is an accurate factorization. So --
>>: Did you find the random -- the composite or you just went [inaudible]?
>> Chris Peikert: Yeah, I used my home quantum computer. So this is slightly uncomfortable, because we don't have quantum computers yet, but if we think there's even a moderate or even a small chance that quantum computers might be built at a large scale, then essentially all the crypto that we commonly use will be completely insecure. I should mention Shor's algorithm also works for the discrete log problem over any group. So there's no saving us there either.
So if we're worried even a little bit about quantum computers, even in the far future, we should start thinking today about alternative mathematical foundations. What I'll tell you about today is what I believe is a very promising avenue in that direction. It's based on objects called lattices, geometric objects that are seemingly very simple but actually extremely rich, and that were studied by very stern-faced looking mathematicians for about 200 years, including Gauss, Quikfeld [phonetic], Hurwitz, Minkowski and many others.
And schemes based on lattices have a host of potential advantages. First, asymptotically, they're actually very efficient. This is because their operations are very simple, typically involving just linear operations like multiplication and addition with reasonably small integers. This is about the kind of sizes that we'd be encountering. Moreover, these operations are trivially parallelizable; notice down these columns we have all the same numbers, and on modern architectures with a lot of built-in parallelism, both multi-core and within cores, these operations can be accelerated very easily.
Lattice-based schemes also appear to be very secure; in particular, we don't know of any quantum attacks. Shor's algorithm doesn't work for lattices, and despite a lot of research in this area, we basically don't know how to speed things up using a quantum computer at all.
Moreover, they have a very interesting security property that is unique in cryptography, what I'll call hardest-case or worst-case security. This was first shown in pioneering, very beautiful work by Ajtai in 1996. I'll speak more about what this means a little bit later.
But basically in the decade following Ajtai's work there were many improvements, extensions, and simplifications, but in terms of cryptography not a whole lot changed. Pretty much all we knew how to do was a couple of primitives called cryptographic hashing and public-key encryption, across this long line of works in the first decade. Yes?
>>: [inaudible].
>> Chris Peikert: Okay. So we could do something called collision resistant hashing at least, to be precise. So in this first decade not a whole
lot changed from a cryptographic point of view. What I'd like to tell you
about today is that actually lattices -- lattices and lattice-based schemes
can be extremely expressive, very rich, we can get lots of strong primitives
that allow us to build rich applications.
So this will be the outline of the talk. First, I'll go into some detail about a line of works that I've been doing with various coauthors that follow the theme of how to use a short basis in a cryptographic scheme. This is based on work with Gentry and Vaikuntanathan that appeared in STOC last year, which introduced several new techniques and some new cryptographic constructions, and then I'll talk a bit more about some further applications that we built. The first two works appeared in Crypto last year; the W is Waters. And then there's also an upcoming work in this STOC with additional applications. And I believe there will be many more.
Then I'll talk a bit more generally about some related threads of research I've been doing and some problems that I hope to tackle in the future. The talk will be extremely informal; I won't give anything remotely resembling a proof. But the nice thing about lattice-based crypto is that we can draw lots of pretty pictures, so we'll be seeing a lot of those.
So now is a good time to pause and take any questions you might have, or concerns, anything like that, before I move on.
>>: [inaudible].
>> Chris Peikert: Yes. Yes?
>>: [inaudible].
>> Chris Peikert: Okay. So I will actually say so in the very next slide, a couple of those applications. Actually, there's a long history of using lattices in cryptanalysis, in breaking what are called knapsack-based cryptosystems that were proposed back in the '80s.
What I'm talking about today is the complete reverse side of this, which is to use hard lattice problems as a foundation for secure schemes rather than breaking them. So I won't say anything about that sort of traditional application of lattices in crypto, although that's a very rich area as well.
Okay. Let's --
>>: [inaudible] like quantum algorithms so [inaudible].
>> Chris Peikert: Yes. I'm not an expert on this. Most of the reductions go in the uninteresting direction as far as I'm aware, so they say that if you could solve what's called the dihedral hidden subgroup problem, then you could solve certain lattice problems. But that doesn't tell us much, because the dihedral hidden subgroup problem is sort of a famous open problem for quantum computing. So I believe I have the direction right. I don't think there are any really --
>>: [inaudible].
>> Chris Peikert: -- interesting completeness results for quantum.
Okay. So let's plunge ahead. So part one, how to use a short basis. The
example primitive that Mira [phonetic] asked for is digital signatures. So
this will be a primitive to keep in mind as we go along. And the idea behind
digital signatures is we have these two people, Alice and Bob who would
like to maintain a relationship over the network. They're about to be
separated but they want to make sure that they're talking to each other. So
what they'll do is Alice will create two related keys: a secret key which she keeps to herself, and a related public key which she can post on a bulletin board or give to Bob directly before she heads off.
And when she wants to send Bob a message over the network in the future, she takes that message and applies some algorithm, together with her secret key, to compute and attach a signature to the end of the message, which Bob can then verify using her public key to check that the message is authentically coming from Alice. And the security property of digital
signatures roughly is that after watching many signatures go by, an
interloper shouldn't be able to create a message and forge a signature from
Alice. So this black hat will never be able to convince Bob that he's Alice.
So that's roughly the security property we're looking for. One of the main tools, especially in practice, for building digital signatures is something called a trapdoor function. The idea is that the public key will be a description of some function F, and the secret key will be some trapdoor information that gives us some extra power over F.
Concretely, the earliest proposal of this kind was given by Diffie and Hellman in their seminal paper over three decades ago. They proposed what they called a trapdoor permutation. This is a bijection from a domain onto itself. And given the description of F, you can easily compute it in the forward direction.
But given a random point in the domain, it should be hard to compute the unique preimage of that point without knowing the trapdoor. Of course, if you know the trapdoor, then it's easy to go in both directions. Okay? So it's pretty easy and very natural to imagine how we can use this kind of primitive if we have one. Oh, by the way, I should mention that Diffie and Hellman articulated this concept, and then a year later came the famous RSA system proposed by Rivest, Shamir and Adleman. There have been a few other proposals since then, but they all rely on the hardness of factoring.
So once we have this object, we can then construct a very natural, what's called a hash-and-sign signature scheme. And it works
roughly as follows: If Alice has some message she wants to sign, then she
hashes it down to a point in the domain and we imagine this hash function
as some kind of idealized random function and that gives us a point Y
which Alice can then invert using the trapdoor. And that will be the
signature. Bob can of course check the signature by simply applying F to
the signature and checking that it equals the hash of the message. And
roughly this seems to be hard to forge because if we want to forge a new
message then we have to be able to invert some random point in the
domain. I'm being informal here. But that's roughly how it works.
So what we'll do in this work is introduce a new twist on this very old and cherished concept. It's what we call a preimage sampleable function. So we're changing the picture a little bit. We now have a domain and range which are not necessarily the same; in particular, the domain might be much larger than the range. And furthermore, we endow the domain with some probability distribution, which may not be uniform. However, under this input distribution, F maps to the uniform distribution over the range. Okay. So if we pick an X randomly from the input distribution, we'll get a uniformly random output in the range.
Okay? So the first property, as before, is that it's easy to compute in the forward direction. Now, notice that because the domain is much bigger than the range, just by a pigeonhole argument there are typically going to be many inputs that map to the same output Y. The security property is that for a random value Y, we shouldn't be able to find any of these preimages. With the trapdoor, we can indeed invert. But what we require is something a little bit stronger. Instead of just being able to find some preimage of Y, we want to actually sample from among all the preimages under the appropriate distribution; that is, under the input distribution conditioned on being a preimage of Y. This is where we get the name preimage sampleable.
So what this requires is some kind of algorithm, some kind of algorithmic innovation, that allows us to pick from all the preimages with the appropriate distribution. Okay? Now, that's the concept. Now we can imagine: what if we followed the hash-and-sign approach for signatures? So again, it's the exact same thing. We hash the message down to some point in the range, and then we apply the preimage sampler to pick out one of the preimages, a random preimage, which will be the signature.
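To pin down the shape of the scheme, here is a minimal Python sketch of this hash-and-sign construction. The names sample_preimage and evaluate, along with the trapdoor and f_public objects, are assumed placeholders for an abstract preimage sampleable function, not a real API.

```python
import hashlib

def sample_preimage(trapdoor, y):
    raise NotImplementedError  # assumed placeholder: samples a random preimage of y

def evaluate(f_public, x):
    raise NotImplementedError  # assumed placeholder: forward evaluation of F

def hash_to_range(message: bytes) -> bytes:
    # Stand-in for the idealized random function H mapping messages into the range of F.
    return hashlib.sha256(message).digest()

def sign(trapdoor, message: bytes):
    y = hash_to_range(message)
    return sample_preimage(trapdoor, y)  # the signature is a sampled preimage of y

def verify(f_public, message: bytes, x) -> bool:
    # Accept iff F(x) equals the hash of the message.
    return evaluate(f_public, x) == hash_to_range(message)
```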
Now, it turns out that this scheme is just as secure as the previous one that
I showed you that was based on trapdoor permutations. The reason is not
obvious, I think, but it turns out that the crux of the proof, the security
proof, is that we can generate a pair, X and Y, in two equivalent ways with
the same joint distribution. Okay? So what happens in the real system is: a Y is chosen uniformly from the range, and then we sample a random preimage. What happens roughly in the proof is that we choose an X from the input distribution and then go in the forward direction to a uniform point in the range.
And if you look at the joint distribution of this pair, it's the same in both
pictures.
>>: So your [inaudible] some kind of simulated --
>> Chris Peikert: Yes.
>>: Experiment [inaudible] in reality on the other side, so what is the challenge that you --
>> Chris Peikert: So embedding the challenge works essentially the same way as in the previous schemes. What I'm talking about here is just how you simulate access to the signer, the signing algorithm. You can do it by computing X and going in the forward direction, and then you know a signature X for any hashed value Y.
>>: [inaudible].
>> Chris Peikert: So actually we're modeling this function H as a random oracle, which is sort of unfortunate from a theoretical point of view, although it's what is typically used in practice; it underlies basically the only practical signature schemes that we know of. So with the caveat that H is modelled as a random oracle, which is what we need also in the case of trapdoor permutations, we get the same security results.
Okay. In fact, I've slightly undersold this. Because of this many-preimages property, the proof embedding the challenge is actually simpler and tighter than the ones we have for trapdoor permutations. So we can actually give a tight security reduction from our signature scheme to the security of the preimage sampleable function, and these tight reductions are not possible when you are using trapdoor permutations. So in a sense, this is actually a better primitive for the security proof, which is somewhat surprising.
>>: [inaudible].
>> Chris Peikert: Yes.
>>: [inaudible].
>> Chris Peikert: Uh-huh.
>>: Suggests that you [inaudible] even if I'm given one preimage [inaudible].
>> Chris Peikert: That's correct. That's correct.
>>: And that is an assumption that's probably [inaudible].
>> Chris Peikert: It's not an explicit assumption, but it's guaranteed to us
by the fact that it's a random oracle. Yeah. So finding two inputs to a
random oracle that collide is going to be hard. Of course meeting that
notion, constructing a true random oracle is a very challenging open
problem.
What I want to stress here is that we're playing the same security game as we normally play in practice-oriented signature schemes.
>>: So just to make sure I understand, what you're saying is that if you use a trapdoor function and you give the attacker the additional ability to query some preimages --
>> Chris Peikert: Yes.
>>: Do you still --
>> Chris Peikert: On random points in the range, yes.
>>: Not equal to the one challenge that you might give them --
>> Chris Peikert: That's correct.
>>: Then it doesn't [inaudible]?
>> Chris Peikert: That's correct. Exactly right. And the reason is precisely because getting random preimages of random values Y can be simulated by going in the forward direction here. And the important point is that the view of the adversary is the same in both cases, because the joint distribution on X and Y is the same. Which is why we need this preimage sampleability property: it wouldn't be enough to simply output some canonical preimage every time; we need this random sampling.
Any other questions about this? This is the abstract primitive that we'll
build and one of the applications. I'll talk about a few more later. Okay. So
all that's left to do is build it, which is maybe the hard part. So we're going
to build it based on these objects I've been talking about called lattices.
And a lattice is simply a periodic grid of points in N-dimensional Euclidean space, R to the N. One way to define it is as the set of all integer combinations of some N linearly independent basis vectors.
So here we have B1 and B2 coming from the origin and the green points
are all the points in the lattice going to infinity. Any given lattice in two
dimensions or more has an infinite number of bases. So this is one basis
which looks pretty nice. This is another basis which doesn't look as nice.
In particular the vectors are longer, they're more skewed.
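As a quick numerical picture, here is a toy Python sketch of a 2-D lattice generated as all integer combinations of a basis; the basis values are assumptions chosen for illustration.

```python
import itertools
import numpy as np

B = np.array([[2, 1],    # b1
              [1, 3]])   # b2; rows are the basis vectors

# All integer combinations z1*b1 + z2*b2 with small coefficients.
points = {tuple(int(c) for c in np.array(z) @ B)
          for z in itertools.product(range(-3, 4), repeat=2)}

# A "nicer" and an "uglier" basis can generate the same lattice: for example
# [[3, 4], [4, 7]] is B multiplied by a unimodular integer matrix, so it
# yields exactly the same set of integer combinations.
```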
>>: [inaudible] the integers Z, which are infinite.
>> Chris Peikert: That's correct.
>>: Previously you had to [inaudible] 257 I like much better [inaudible].
>> Chris Peikert: That's correct.
>>: Because they are finite. So the actual scheme, does it use a finite or --
>> Chris Peikert: Yes. Yes. So the actual scheme is going to use a finite ring, Z mod 257 or some prime, something like this. The cryptographic or complexity foundation will be arbitrary lattices, which go off to infinity. But it is kind of interesting that we can pass from this unbounded world to this bounded modular world.
So we have a basis here which is more skewed and longer. This I will
consider to be a short basis. This is a -- whoops. This is a long basis. The
vectors are longer. I could show you an even longer one if I had more
room on the slide.
>>: So [inaudible] to minimize the sum of all the [inaudible].
>> Chris Peikert: For the purposes of the talk, short will mean minimizing the longest vector in the basis. So we'll measure the length of a basis as the length of its longest vector. Probably the most commonly known problem on lattices, which you may also know about, is the shortest vector problem. It's a computational problem that asks: given an arbitrary basis B, find a nonzero lattice vector that's as short as possible.
We could relax the requirement a little bit and ask for a gamma-factor approximation. So while this dotted line here is the shortest vector in the lattice, anything inside the blue ball would be an acceptable answer to the approximation version.
Typically we consider a gamma factor that's some small polynomial in N, maybe N squared, something like that. A related problem, which we'll actually be using for the crypto, is what I'll call decoding. Decoding on a lattice asks: given a basis B and some target point T out in space, find a relatively close-by lattice point. Okay? So any point inside this blue ball would be a solution to the decoding problem.
Actually one can show that the decoding problem and the shortest vector
problem, they're essentially the same. So we'll be using the decoding
version.
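As a tiny illustration of decoding, here is a hedged Python sketch using simple coordinate round-off in the given basis (Babai's classical round-off method, used here only for intuition): with a short basis it returns a nearby lattice point, while with a long, skewed basis the answer can be terrible.

```python
import numpy as np

def decode(B, t):
    # Rows of B are basis vectors; t is the target point in space.
    z = np.rint(np.array(t) @ np.linalg.inv(B))  # round to nearest integer coefficients
    return z @ B                                  # a lattice point (hopefully) near t
```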
>>: [inaudible] closest [inaudible].
>> Chris Peikert: It is similar, although they are not the same. The closest
vector problem asks for the unique lattice vector that's as close as possible
to the target point. What we're asking for is any point within some fixed
radius, some fixed large enough radius. So in particular we're always
guaranteed that there will be many lattice points within that radius. But any
of them is a valid solution.
>>: [inaudible] the radius [inaudible].
>> Chris Peikert: If -- yeah. And the radius will be large enough so that we
have a guarantee there are many points always within this radius.
Now, in two dimensions these problems are very easy. Even in ten
dimensions the problem's easy. You could probably see it by inspection.
But what's interesting is that these problems seem to become very hard, exponentially hard in the dimension, as the dimension grows. Our best algorithms take exponential time to solve the problems that we are looking at, and if you restrict yourself to polynomial time, the radius that you're able to decode within is something terrible, something exponentially large.
So these problems appear to be very difficult. Moreover, they have a very amazing, wonderful property of essentially a worst-case to average-case equivalence. This was the work of Ajtai in '96. He showed that there's a particular family of lattices such that if you choose a random lattice from that family, then decoding on that lattice is as hard as decoding on any arbitrary lattice of related dimension.
This is a crucial property for cryptography, because we need not only for there to be, you know, some hard instances out there, but for the random instances that make up our keys to be hard. So the property here is that even if you have only a vanishingly small probability of success on a random lattice from this family, you can use that to solve decoding on any lattice. It's a really remarkable property that's quite unique.
We'll be using this random family sort of as a black box, so I won't go into any more details about it, but our cryptographic schemes will build upon it. And the family is actually quite simple to describe, a random member is easily sampleable, and it has all the properties that you would like. All right.
So any questions about the basics here?
>>: When you say a family of random [inaudible] kind of randomly
generate a basis and that's [inaudible].
>> Chris Peikert: Sort of. I mean, there's a particular distribution over the
bases that we choose from. But within that distribution, that's the family.
Essentially you pick a basis according to this distribution and the instance
will be hard. All right.
So not only do we need hard lattices, but we also need trapdoors for them. And the main idea is that a short basis is going to act as a trapdoor for these hard lattices. Okay? So that's the main idea. A short basis is something that's hard to compute on its own, because we know that computing short vectors is difficult, but if we know a short basis then it's presumably going to give us some extra power working with this lattice.
What I see are basically two main questions in implementing this main idea. The first is how to generate the trapdoor and the lattice together at the same time, right, because we have to generate a public key and a secret key in a related way. And we need the public version of it to be indeed hard.
Now, in the world of factoring, this is trivial. Essentially you just choose
two large primes, you multiply them together, you get some modulus N,
and we essentially conjecture that factoring such an N is hard for an
adversary. With lattices it's not so clear how to do this. The reason why is, well, you might imagine picking some short vectors as a basis, munging them up in some way, and releasing some alternate version of it.
The problem with this is that you don't necessarily end up in the hard family. So the lattice you generate may be easy, and it won't necessarily land in the hard random family that we would like to use. However, it's very fortunate that in a remarkable act of foresight, Ajtai in '99 gave an algorithm that actually selects a lattice from the hard family together with a short basis. As far as we know, this paper was never applied in any cryptographic scheme, and actually never even cited until much later. So it was very little known. The basis that he generates is relatively short, although not as short as you might like. So in a recent work we also showed how to do the same thing while achieving an optimally short trapdoor basis.
Okay. So now we know how to generate a hard instance together with its
short basis, trapdoor. The second key question that I believe is really the
more essential one is once we have the trapdoor how do we actually use it
securely in a cryptographic scheme?
>>: Can you say a little bit more about how you get the short basis
[inaudible] construction?
>> Chris Peikert: Yes. So essentially what happens is we show a way to start with some short vectors to begin with and to kind of mung them around, and end up hitting a lattice in the hard family with the appropriate distribution. So it's really quite a technical work that is difficult to describe without getting into the details of what the hard family looks like. Although we have basically some simplifications, making the generation algorithm more modular, and a tighter analysis.
The reason you'd like a shorter basis is that it gives you tighter concrete
security. You don't have to use such large parameters. The key sizes can
go down like this. Okay. So the second question is: once you have a short basis, once you have this trapdoor, how do you use it securely in crypto schemes? And this is, I think, a very nontrivial question to answer. The reason is that there are actually proposals for using short bases going all the way back to 1997. But in fact, they turned out to be completely broken, because essentially the schemes leaked information about the secret key over time, to the point where the adversary can actually just reconstruct the secret key. So after seeing, say, several signatures, the adversary can just rediscover the secret key by itself.
>>: [inaudible].
>> Chris Peikert: [inaudible] does not use a short basis at all. So it's kind of an orthogonal direction. In fact, I think prior to our work there were basically no applications of a short basis in crypto. They used, for example, one short vector in a lattice as the secret key. Okay. So
this is the question we're going to be keying on for the remainder of this part of the talk. In order to do so, I need to introduce one really fundamental, essential concept, which is blurring a lattice. So here's a lattice in an image. If you open it up in your favorite graphics editing program, there's often a special effect called Gaussian blur which you can choose, and there's a little slider, and if you move the slider up a bit you get a picture looking something like this.
This kind of corresponds to blurring every lattice point with a Gaussian, smearing it out. If you move the slider out a little bit more, it looks like this. And as soon as you move it wide enough, the picture becomes entirely flat. Formally, what we can say is that the picture becomes flat, sort of uniform over the whole space, as soon as the standard deviation of the Gaussian exceeds the length of the shortest basis for that lattice. Okay. So that gives us a lower bound on how much noise, how much blur, we need to add to make this uniform.
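A one-dimensional toy demo in Python (all parameters are assumptions) shows the same blurring effect numerically: pick a random point of the lattice s*Z, add Gaussian noise, and look at where it lands within one period.

```python
import random

def blurred_sample(s, sigma):
    v = s * random.randint(-1000, 1000)       # a random lattice point of s*Z
    return (v + random.gauss(0, sigma)) % s   # its position within one period

# With sigma = 10, a histogram of blurred_sample(100, 10) piles up near the
# lattice points; with sigma = 150, exceeding the period, the histogram is
# flat up to negligible statistical distance -- the lattice is smoothed out.
```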
This was first used in these worst-case/average-case connections that I talked about, by [inaudible] in '03 and '04. Well, we're going to use this concept now inside a cryptographic scheme itself. Okay?
So here's how we'll define our preimage sampleable function. First, the forward direction. The function F is going to be specified by some hard basis for a lattice. So we generate the lattice together with a short basis; the hard basis is what becomes public.
To evaluate the function, we simply pick a random lattice point -- and this is informal -- and then just perturb it by some Gaussian noise. Okay? And by the fact I showed you in the previous slide, the outcome of this process will be uniform over the real numbers, over R to the N. Now, in reality there is no uniform distribution over R to the N. We can make this formal, but intuitively it's much simpler to view it this way.
>>: You say it's a function. Does the function choose the V or [inaudible]?
>> Chris Peikert: Both.
>>: Both.
>> Chris Peikert: So actually the function itself takes a random input.
>>: Yeah.
>> Chris Peikert: Basically. And the random input specifies V and the perturbation. Okay? So actually I prefer to think of it more as a randomized process rather than a function itself. We have this randomized process which picks out a lattice point and adds some perturbation, and the outcome of the process is uniform.
>>: [inaudible].
>> Chris Peikert: Right.
>>: [inaudible] just intuitively. I'm having trouble with truly uniform.
>> Chris Peikert: Right.
>>: In this sense because you start at a lattice point even the [inaudible]
the Gaussian [inaudible].
>> Chris Peikert: Right, right.
>>: Is going to still have a higher density at the [inaudible].
>> Chris Peikert: Okay. So you're exactly right that it's not uniform in an exact sense, but it is uniform up to negligible statistical [inaudible], and the amount of noise you need to add is actually quite small in order to make this happen.
>>: [inaudible]. Ways to get beyond the many other [inaudible].
>> Chris Peikert: Right. You do, but here's a proof by picture. I added, I don't know, some amount of noise here, and yes, you're right, there's a high concentration around the lattice here, so this is not enough noise. I think I doubled the noise here, and suddenly it's looking pretty flat; I triple the noise and I'm already uniform. So surprisingly, it doesn't take much to get uniform.
>>: So you were saying before it just takes enough noise so that you
exceed the like minimum distance of the [inaudible].
>> Chris Peikert: The shortest basis, yeah. So whatever the --
>>: [inaudible] the shortest basis, would that also be the minimum distance? But if you looked at that one picture, if you drew the distance between two points, you would also translate that to the origin?
>> Chris Peikert: Right. Right. Let me just draw an example here: a lattice that looks like this. It has small minimum distance -- this is one -- but this would be a hundred. And the shortest basis for this lattice has to include some vector of length a hundred, or at least a hundred. So in order to smooth out this lattice, we need to use noise of about a hundred rather than one.
>>: [inaudible] a hundred [inaudible].
>> Chris Peikert: Right. It's a little bit more than a hundred. A hundred
times a small factor. But basically a hundred. Yeah. It's -- I mean, I agree,
it's a very surprising fact, very useful.
>>: Well, you're just saying the length of the basis is the length of the longest --
>> Chris Peikert: Longest member of the basis.
>>: Member of the basis.
>> Chris Peikert: That's right.
>>: You don't have to move any further than that once you get from one point to the --
>> Chris Peikert: Well, you can touch every point, that's true. Yeah. What's interesting is that actually the Gaussian itself will become effectively uniform when you move out to that width. And this is very nontrivial to show; you use some great harmonic analysis and things like this. Very cool.
>>: Gaussians go out to infinity with some small probability.
>> Chris Peikert: That's correct.
>>: So is it always a small probability that that signature will break, or --
>> Chris Peikert: Yes. I'll actually mention that in a moment, but you're way ahead of me. Yes. So Gaussians do go out to infinity; however, they are highly concentrated within basically a root-N factor of their standard deviation in N dimensions. So effectively, a Gaussian is a ball of some radius if you cut off the tail. Okay. So we're describing the forward direction of the function here, which is to pick a random point and add some noise, enough noise to make the outcome uniform.
Now, let's consider all the possible preimages of this output. So this is the set of all lattice points we could have started from. And effectively, what I claim is that all those points have to lie within a ball of relatively small radius. Now, it's true that even a point way out here could produce this T, but it's such a rare event that we effectively cut it off and say that the only preimages are those within this ball. Okay?
So if I'm challenged to find a preimage of T, then effectively I have to decode T to a relatively nearby point, something within this reasonably small radius. This is exactly the decoding problem that appears to be extremely hard, even on average. So we immediately get that inverting this function, finding any preimage of T, is indeed hard. Now, let's look at the set of all preimages of T. We know there are many of them, in particular these four green points here. Let's consider now the conditional distribution of where we could have started, given that we ended at T. Okay? So there's a distribution over these preimage points, these lattice points. And it looks something like this. It's a Gaussian, but it's restricted to lie only on the lattice, and centered at T. So it's this kind of very nice looking thing, except the support of this distribution is only on lattice points.
This object turns out to be an extremely useful analytical tool that's been used in the mathematical study of lattices and, more recently, in complexity theory. What we're going to do is turn it into an actual algorithmic tool, okay, because if we're going to sample a preimage, we need to actually draw a sample from this funny looking discrete Gaussian distribution.
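In one dimension, sampling such a discrete Gaussian over the integers is easy by rejection; here is a toy Python sketch (the tail cutoff is an assumption, and the mass it discards is negligible) that the lattice sampler sketched later will reuse.

```python
import math
import random

def sample_z_gaussian(c, s, tail=12):
    # Sample x from the integers with Pr[x] proportional to
    # exp(-pi * (x - c)^2 / s^2), cutting the tail at `tail` deviations.
    lo, hi = int(c - tail * s) - 1, int(c + tail * s) + 1
    while True:
        x = random.randint(lo, hi)
        if random.random() <= math.exp(-math.pi * (x - c) ** 2 / s ** 2):
            return x
```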
Okay. Everything reasonably clear so far?
>>: So the goal is that you wanted to [inaudible] primitive which has this sampleability, meaning that --
>> Chris Peikert: That's right.
>>: An adversary -- like an adversary who only has a hard basis can also generate --
>> Chris Peikert: No. So the adversary who only has the hard basis can't decode T.
>>: Okay.
>> Chris Peikert: Right? We don't want them to be able to find a preimage.
>>: But given our short basis --
>> Chris Peikert: But given our short basis, yeah, we will be able to find a
preimage. And in fact, we'll sample the preimage from this funny, funny
looking distribution here.
>>: So strictly speaking a preimage, it's okay to decode to any one of those vectors in that --
>> Chris Peikert: In that ball, yes.
>>: In that ball?
>> Chris Peikert: Yes.
>>: And how did you determine the radius of that ball?
>> Chris Peikert: Okay. So this ball is basically a root-N factor times the standard deviation of the Gaussian that we started with, because if you take a sample from a Gaussian, it lies within that ball with all but exponentially small probability. So effectively, we kind of cut off the tail of the Gaussian, and this doesn't affect anything in the argument.
By the way, we're learning, I guess, by the financial crisis that it's
dangerous to cut off the tails of Gaussians, but that's when you've
assumed things are Gaussian when they're not. But we have control over
our distributions and we can do that. Gaussians are getting a bad name
nowadays, and I don't think they should. It's a bit of a tangent.
Okay. So our goal from now on is to implement this preimage sampler. To do so, we're given a short basis and a target point T, and we want to sample from this funny looking object efficiently. Okay? Seems challenging. What I want to stress is that this distribution here is oblivious to our short basis. It doesn't know what short basis we're going to use; it's just a Gaussian, and it exists independent of any particular basis. All we know is that its deviation is related to the length of the basis, but nothing more.
So we need to use our basis to sample from a distribution that has no knowledge of our basis. It seems like a difficult task, but in fact the algorithm itself is extremely simple. Basically, what we do is add a randomized twist to this classical algorithm of Babai from '86, which is pretty much what you might come up with if you had the afternoon to think about it. This algorithm is called the nearest plane algorithm, and it works as follows: We have a target point T we want to decode, and a short basis. What we do is imagine partitioning the lattice into hyperplanes, where each hyperplane contains a different integer multiple of S2. We'll work from the back forward. So this is the plane with zero times S2, the plane with 1 times S2, 2 times S2, and so forth. So we know the lattice partitions in this way. And the [inaudible] nearest plane algorithm would simply round off T to the nearest plane, as the name suggests.
What we'll do is not deterministically round off to the nearest plane, but actually choose a plane at random according to an appropriate distribution. So maybe we don't go to the nearest, but randomly round off to this one instead. And we project T down to the hyperplane and just iterate in one lower dimension. Okay? So then we might randomly round off to the nearest point in the subplane.
If we run this algorithm another time, maybe we round off here and round off that way. What's, I think, extremely interesting and surprising is that this algorithm does indeed sample from this distribution, and it doesn't care which basis you start with, as long as you have a basis of bounded length. The proof is somewhat technical, as you might imagine, but the idea behind it is that if we look at the ideal thing that we really want to sample from, we can look at the probability mass on every plane. Yes?
>>: If you [inaudible] does not [inaudible] trapdoor [inaudible] why is it a trapdoor, why can't you use any basis to --
>> Chris Peikert: Good. Okay. So the key is this condition up here. The standard deviation that we sample from has to exceed the length of the longest element. So if I have a basis which is very bad, I won't be able to sample from a tight Gaussian. That's the key difference. If I have a basis which is quite short, I can sample from a Gaussian that's very concentrated. And that's the difference between having a bad basis and having a good basis.
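Here is a hedged Python sketch of the randomized nearest-plane sampler just described, reusing sample_z_gaussian from the earlier sketch; the rows of B are the short basis vectors, and s must exceed the length of the longest Gram-Schmidt vector, up to a small factor.

```python
import numpy as np

def gram_schmidt(B):
    # Orthogonalize the rows of B (no normalization), as in the nearest plane algorithm.
    Bs = np.array(B, dtype=float)
    for i in range(len(Bs)):
        for j in range(i):
            Bs[i] -= (Bs[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return Bs

def sample_lattice_gaussian(B, t, s):
    B = np.array(B, dtype=float)
    Bs = gram_schmidt(B)
    t = np.array(t, dtype=float)
    v = np.zeros_like(t)
    for i in reversed(range(len(B))):        # work from the back forward
        c = (t @ Bs[i]) / (Bs[i] @ Bs[i])    # real-valued coefficient of the nearby planes
        z = sample_z_gaussian(c, s / np.linalg.norm(Bs[i]))  # randomized rounding
        t -= z * B[i]                        # project T down to the chosen hyperplane
        v += z * B[i]
    return v                                 # a lattice point drawn from the discrete Gaussian
```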
Okay. So I won't say much about the proof, other than the fact that essentially the output of this algorithm is oblivious to the actual basis that was used to run it, which I think is somewhat surprising.
>>: So that also means that the trapdoor is generally not [inaudible].
>> Chris Peikert: That's correct. Yes. There are many equivalent trapdoors.
>>: [inaudible] basis satisfying [inaudible].
>> Chris Peikert: Exactly. And any short enough basis gives you the same power.
>>: So [inaudible] question. So why can't you use decoding algorithms in
this context? What am I missing here?
>> Chris Peikert: So like lattice decoding algorithms?
>>: No, so [inaudible] linear [inaudible] we have to basically determine by a
basis.
>> Chris Peikert: Right.
>>: So I mean you've got this continuous aspect [inaudible].
>> Chris Peikert: Right. So the first answer is that decoding algorithms for codes typically only work when you're within some bounded radius of the code. So you have to be within a close distance to the code in order for it to work; if you're outside of that, the output is undefined. We need this to work on an arbitrary point in the reals. Moreover, we need the output of the algorithm to be randomized. It's essential for security that we are sampling a random element according to this distribution, and not just finding some close-by lattice point. If we were deterministic here, that would be a very big security problem.
>>: Oh, so you mean in your procedure for providing [inaudible].
>> Chris Peikert: Yes. Yes. So it has -- it very crucially has to be
randomized, and it has to draw from this distribution.
>>: Yeah, but I meant could be adversary [inaudible].
>> Chris Peikert: Okay. I see. So for the adversary, because it has a bad basis, the decoding problem is essentially very difficult, because if it tries to run, say, this algorithm, it's going to end up with a lattice point that's very, very far away from the target. And that's just because when you get down to the rounding, the points are going to be very far spread within these hyperplanes, so you're taking huge jumps at each step, and it takes you too far away.
>>: Why does [inaudible] during the algorithm to find the lattice point [inaudible] finding a lattice point and then --
>> Chris Peikert: Right. Because this is actually what the signer does. So this is not a thought experiment or anything like that; the signing algorithm has to draw from this particular distribution. You might imagine other ways of trying to get this, but you really need to draw from this precise distribution, otherwise you start leaking information about your basis. So it's really quite crucial that you do it exactly this way.
And that's illustrated actually by the breaks of the prior proposals, the GGH scheme and the NTRU scheme. So, okay, let me just mention a few other cryptographic applications -- we'll bubble up to the top here and get out of the technical details.
So another application of the short bases is something called identity based encryption. Essentially the idea here goes back to Shamir in 1984, and he said: what if you didn't have to have a public key -- there are all these certification problems with public keys -- what if your name was your public key? If someone wants to encrypt a message to you, all they need to know is your name, and maybe one key for your entire Microsoft organization.
So it was about 17 years before it was actually possible to do something like this, via the scheme of Boneh and Franklin, and somewhat surprisingly, we can now do it based on lattices as well. So this is a very powerful primitive, and it has lots of applications.
In a further work, we showed some zero knowledge proof systems that also use the short basis in a critical way, and these can be used for public key certification, identification protocols, lots of things like that. We also gave a protocol called oblivious transfer, which is a fundamental object in multiparty computation, and it has the very interesting property of being what's called universally composable. It means you can run it in the context of many other algorithms, you can run it concurrently with anything on the network, and it remains secure. Moreover, the protocol is extremely efficient; it's round optimal, one message from the receiver to the sender and then one message back. So it's very efficient. And we give a general framework, actually, that works under a variety of different assumptions. And lattices are [inaudible].
A fourth application is chosen ciphertext security. Here the use of the short basis is really essential for achieving this strong gold-standard notion of security for encryption. For example, to your question about Ajtai-Dwork: when there's just one short vector involved, it's not clear at all how you can get chosen ciphertext security with that kind of scheme. Surprisingly, with a short basis, you can do it quite naturally. And I think there are more applications of this to come as well.
Okay. So I'll finish off with a couple lines of related work and some problems I'd like to look at in the future. One line is using special classes of lattices called cyclic or ideal lattices. These are lattices that have some extra symmetries, geometric or algebraic symmetries, and the idea is that we can build truly practical schemes that have relatively small key sizes and are really, really fast. These ideas are the foundation of our candidate submission to the SHA-3 competition that's being run by NIST right now; our function is called SWIFFT. The extra F is for the FFT, the Fast Fourier Transform, which is the core operation involved in evaluating the function. And FFTs are really nice because you can run them in parallel, there are specialized circuits for them, and we can implement them really fast even on standard commodity processors.
>>: [inaudible].
>> Chris Peikert: So really, really fast means: if you give us all the power of your, you know, Intel Core Duo or whatever, our throughput is comparable to, say, SHA-256 or SHA-512; it's within a factor of two to three. And that's not even using multiple cores, so that's on one core. And I think what's promising about this approach is that it's going to scale up very nicely into the future, because parallelism is going to become more and more available to us. We now have graphics cards that, you know, have so much parallelism on them we don't know what to do with it, and there are specialized FFT machines. So I think this parallelism aspect makes it a promising approach.
>>: [inaudible].
>> Chris Peikert: Actually, over a finite field. So more accurately, it would be what's sometimes called the number theoretic transform. You can actually do it over either, so if your hardware works for the complex numbers, that's fine, you can run it over the complex numbers, but the function is defined over a finite field, and the FFT can be done over that finite field, to keep everything discrete.
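For concreteness, here is a toy Python sketch of that number theoretic transform at the SWIFFT-like parameters mentioned below (dimension 64, modulus 257); this is the naive quadratic evaluation rather than the fast butterfly version, and it relies on the fact that 81 = 3^4 has multiplicative order 64 mod 257.

```python
def ntt(a, omega=81, q=257):
    # Evaluate the polynomial with coefficients a at the powers of omega mod q.
    n = len(a)   # 64 in the SWIFFT-like setting
    return [sum(a[j] * pow(omega, i * j, q) for j in range(n)) % q
            for i in range(n)]
```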
>>: [inaudible].
>> Chris Peikert: So the property is that -- okay, I'll say what a cyclic lattice is. It's a lattice where, if you have a point in the lattice, then you can rotate its coordinates: you can bring the first coordinate to the end and push everything forward, and that point will also be in the lattice. So it's this closure property, and of course you can rotate as many times as you want and it remains in the lattice, as in the sketch below.
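The closure property is easy to state in code; a toy sketch:

```python
def rotate(v):
    # Bring the first coordinate to the end and push everything forward.
    return v[1:] + v[:1]

# For a cyclic lattice, v in the lattice implies rotate(v) is too, and so on:
v = [1, 2, 3, 4]
assert rotate(v) == [2, 3, 4, 1]
assert rotate(rotate(rotate(rotate(v)))) == v   # n rotations return to the start
```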
So this kind of seemingly simple symmetry turns out to have some really deep connections to algebraic number theory, getting into number fields and rings of integers and all kinds of great stuff there. Some work in progress is to take the ideas that are behind the hash function and do things like encryption and signatures as well, with comparable levels of efficiency.
The dream would be to get public key crypto that could run as fast or
almost as fast as symmetric key crypto does now or hash functions do
now. That would be -- that's the long term vision.
>>: So it's [inaudible] it's actually the same as being like having a normal
basis?
>> Chris Peikert: Let's see. A normal basis. I don't know what a normal basis --
>>: A normal basis is like all the powers of an algebraic integer, like one, alpha, alpha squared --
>> Chris Peikert: All right. So a power basis.
>>: [inaudible] alpha [inaudible].
>> Chris Peikert: Yes, it's similar to that. In fact, that's where the connections start to come about. You don't have to have the power basis -- it gets into ideals and things like that; we can talk about it more offline -- but there are similar kinds of closure properties. And the benefit here is that the lattice can be represented in a much more compact form, so the keys are smaller, and the manipulations can now be done with FFTs rather than generic inner products and slower operations like that.
>>: So the hard problem is still, like, what, finding an alpha -- I mean, finding a short basis?
>> Chris Peikert: Yes. The hard problem is to find a short vector in this special class of lattices now. So the extra structure may potentially make the problems easier, although nobody, as far as I'm aware, has found any way to exploit it.
>>: What dimension do you use?
>> Chris Peikert: The dimension here -- so there's no clear answer. There's a trade-off that you can make between the dimension and the field size. In practice, SWIFFT has dimension 64 and a field size of 257. So the overall group is of size 257 to the 64, which is, I think, around 1024 or something. Or no, more. 1496? I don't know. I can't -- but --
>>: [inaudible].
>> Chris Peikert: Sorry?
>>: That's only [inaudible].
>> Chris Peikert: So it's two to the eighth.
>>: [inaudible] 12.
>> Chris Peikert: Yeah.
>>: And a little bit extra.
>> Chris Peikert: Okay. Yes.
>>: So it sounds like the [inaudible] like using LLL for finding algebraic dependence and algebraic [inaudible].
>> Chris Peikert: Right.
>>: You can usually find -- I mean 64 sounds low to me.
>> Chris Peikert: Oh, so the dimension of the underlying lattice is not actually 64. There are some complications regarding what the actual reduction gives you and what the actual best known algorithms are, and the reduction is often looser than the best attack. So in order to be practical, we designed SWIFFT to defeat all the known attacks, and that makes you trade off the dimension and the field size and things like this. So it's not really a dimension of 64; it's a larger thing in practice. But there's lots of detailed, you know, fiddling with exact security parameters and things like that involved.
>>: This is a related question. So what happens if you use the LLL in the
setting of your more general [inaudible].
>> Chris Peikert: Right. So what happens if you run LLL is you can get vectors in the lattice which are somewhat short, but they're really far too long to be of any use. LLL basically gives you an exponentially bad approximation to the shortest vector. So LLL actually turns out to be not a very promising avenue of attack. There are other, more combinatorial attacks that can do even better, but even those are far too weak to break the schemes.
>>: And in the context of these applications you were listing before this
slide, what would the N be typically, the dimension be?
>> Chris Peikert: Let's see. So in these --
>>: Like in the hundreds or in the thousands?
>> Chris Peikert: It would have to be at least in the high hundreds, I think, to get a meaningful end-to-end security proof. Again, I'll say that in practice, what happens is the proof is often looser than what appears to be the true hardness of the problem. So what we do is we typically prove the scheme is secure in an asymptotic sense, and then we start to look at, okay, what are the concrete dimensions needed to defeat the best attacks available? There are basically two classes of attacks, and if we beat them both handily, then we say okay, these are reasonable security levels, reasonable key sizes.
Another direction is to just define and construct richer crypto schemes, crypto schemes with useful properties that allow us to do the things we want to do. For example, in a work with Brent Waters from last STOC, we defined and constructed some objects called lossy trapdoor functions, which have a ton of applications, and many more seem to be coming down the line. Recently we also gave a very simple and efficient construction of what's called circular secure encryption, which is often needed in things like proving crypto secure in formal models. It's also used in credential systems and things like this. So this is a nice primitive to have.
And in progress, I have some work on adding even more expressiveness to the identity based schemes that we constructed before.
>>: What does circular secure mean?
>> Chris Peikert: Circular secure means you can encrypt your own secret key under the corresponding public key. And the question is: is it actually secure to do this? In things like disk encryption products, it often happens that the secret key ends up on disk and gets encrypted with itself, and this can cause some problems. The security proofs don't in general go through, and you can devise kind of pathological schemes that actually break if you do such a thing. So we'd like to have some kind of guarantee that it's safe to do things like this. And it can be generalized to cycles of key encryptions among many different unrelated parties, things like that.
I'm also interested in the basic complexity theoretic foundations of lattices. I have some works on stronger connections between the worst case and average case hardness of lattice problems. And I think more generally, it's interesting to look at things like learning algorithms, hardness amplification, and extractors, which have been developed for discrete domains, and to see which of these things make sense and can be extended to continuous domains as well.
Looking to the future, there are some big, kind of vague problems that I don't even necessarily know how to express. There are basically two kinds of approaches to solving lattice problems, and they don't seem to be compatible at all; they're kind of incomparable. I think there should be some unified way that gets the best of both. So I'd be interested in looking at that and investigating the true complexity of these lattice problems.
What's the nature, what are the limitations, of quantum attacks? Do we have a good reason to think that quantum can't break lattice-based schemes? We don't have algorithms, but quantum is very misunderstood and under-understood, so this is a very important direction to go. And I think developing further connections between the standard number theory problems that we use in crypto and these lattice problems would be very interesting. It would be excellent to have a proof that says factoring is no harder than solving the lattice problems we rely on. That would mean these lattice-based schemes are at least as secure as factoring-based schemes. We don't have any connection right now, so we don't know whether one is strictly better than the other, and it would be nice to get some kind of ordering among them.
So that's all. Thank you.
[applause].
>> Chris Peikert: Yes?
>>: Could I ask a naive question.
>> Chris Peikert: Sure. Yes. Please.
>>: The [inaudible] security. It seems like if an attacker can get you to hash the same -- sends you the [inaudible] for the same message multiple times.
>> Chris Peikert: Yes.
>>: They learn something about the distribution --
>> Chris Peikert: Yes. That's actually a very good point that I completely slid under the rug. As I described the scheme -- maybe I'll go back to it. Okay, I've got to go back a ways, but that's okay, we'll get there. Almost. Okay. So, yes, the question is: what happens if you ask the signer to sign the same message twice? Presumably what they'll do is they'll hash the message, and they'll get the same Y in both cases -- it's a deterministic hash function -- and then they'll run the preimage sampler and they'll probably get two different preimages in the two cases. And is this a problem? Yes, it would be a huge problem.
So what we actually do in the scheme is add some randomization to the message. If you tack some random salt onto the message and send the salt along, then what happens is you get a different Y every time, even if you sign the same message. So that's one way of doing it, and in that case we can prove that it's really secure.
>>: [inaudible].
>> Chris Peikert: The signer chooses the salt, yes, and sends it along with the signature. Only a few bits are needed. Another solution is to keep state: the signer checks, have I signed this message before, and if so, it releases the same signature. That's not so useful, but the state can actually be removed using something called a pseudorandom function. So if you don't like salt, you can use a pseudorandom function, and that ensures you're always giving out the same signature on the same message. Yes, it's a very important aspect of the implementation; a sketch of the salted variant follows.
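Extending the earlier hash-and-sign sketch, here is a hedged Python sketch of the salted variant; sample_preimage is still an assumed placeholder for the trapdoor preimage sampler.

```python
import hashlib
import os

def sign_salted(trapdoor, message: bytes):
    salt = os.urandom(16)                        # a fresh random salt per signature
    y = hashlib.sha256(salt + message).digest()  # a different Y even for a repeated message
    return salt, sample_preimage(trapdoor, y)    # the salt travels with the signature
```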
>>: If you want the standard security parameters with [inaudible] proposed, how many bits do you need in the public key and the private key, and in the randomness you use for a signature?
>> Chris Peikert: Right. So the answer is, it depends. If you're using these general lattices, I think the key sizes and so forth are probably too large for prime time right now. That's why we're looking at cyclic and ideal lattices: they bring the key sizes down dramatically, they bring the signature sizes down, and kind of all aspects of the scheme get smaller and more compact.
>>: How much smaller? What do you hope to get at the end of the day?
>> Chris Peikert: So at the end of the day, I don't think we'll be getting keys that are one kilobit like they are for RSA. I think that's probably too much to hope for.
>>: [inaudible].
>> Chris Peikert: I think under that. Probably in the tens of kilobits is a reasonable thing, maybe even down below 10 kilobits, for things like public key encryption. Now, for our NIST hash function submission, the key is some fixed key, so it's hard-wired into the code, and it's not big, it's --
>>: And how much randomness do you need [inaudible] how many bits of randomness --
>> Chris Peikert: To run this algorithm?
>>: The signature, the signing algorithm, it needs randomness.
>> Chris Peikert: It needs randomness. Yes.
>>: How much?
>> Chris Peikert: There's a lower bound of at least N bits of randomness
where --
>>: If you want [inaudible] bits security how many [inaudible] is what I'm
asking.
>> Chris Peikert: Let's see. So if you really want to optimize the
randomness, you're going to be needing around a little more than linear, so
N to the one plus epsilon, bits of randomness where N is the dimension.
Now, the real dimension in the crypto scheme could be something like
around a thousand or more. So the answer is probably a couple kilobits of
randomness.
>>: [inaudible].
>> Chris Peikert: Yes.
>>: [inaudible].
>> Chris Peikert: The answer is vice versa. [laughter]. NTRU is actually based on a lot of the ideas behind cyclic lattices, and that's kind of what inspired the theoretical work. NTRU lacks any security proofs at all, and the different schemes of NTRU have been broken several times. For example, there's what's called NTRUSign, a signature variant of NTRU. It was broken by this attack of Nguyen and Regev in 2006. There were some additional countermeasures proposed; we broke those in the recent work that we've submitted.
So the problem with NTRU is that it doesn't really have any rigorous security; everything is pretty heuristic. And for that reason, I think, things have been broken a lot over time, and they have to keep moving the goalposts. So I really think this is an example where lattices are so subtle that we really should be using provable security techniques to make sure that our schemes are really rock solid, you know, and don't have design flaws in them.
>>: So they are designs that [inaudible].
>> Chris Peikert: Right.
>>: [inaudible].
>> Chris Peikert: Yeah. There's no security reduction or security proof involved. There are sort of heuristic assumptions that this particular family of lattices is hard, with concrete values, but basically no proofs of any security.
>>: So I know the underlying design is drastically different.
>> Chris Peikert: Right.
>>: But [inaudible] very high level this approach feels to me a lot like
[inaudible].
>> Chris Peikert: Uh-huh.
>>: There's a secret specialized basis which you can do stuff with, you transform that and --
>> Chris Peikert: Exactly.
>>: To a random hard basis.
>> Chris Peikert: Right.
>>: That other people would [inaudible].
>> Chris Peikert: Right.
>>: And --
>> Chris Peikert: So does that keep me up at night, is the question?
>>: Yeah. [inaudible] knapsack failed over --
>> Chris Peikert: Right. Now, this is the reason why knapsacks have failed. They generate some -- they start with some, you know, special basis or something, and then they mung it up and end up with some kind of lattice or knapsack problem. The issue is that because of the way they started out, because of the way it gets generated, this resulting problem has some hidden structure in it.
>>: [inaudible].
>> Chris Peikert: The structure is still there.
>>: [inaudible].
>> Chris Peikert: And that's what gets used as a lever to break these old knapsack problems. The very nice thing here is that we're generating our instance from the truly hard random family, where we have a security proof that this instance is hard. It's drawn uniformly from the hard family, and yet we still are able to embed a short secret basis into it. So I really want to stress that this is a crucial aspect of how the schemes get used. If you do something more heuristic, you may be leaving yourself open to some hidden structure that can be exploited, whereas these methods end up with a structureless key.
>>: [Inaudible] somewhat secured --
>> Chris Peikert: Yes. Yes.
>> Kristin Lauter: Any more questions? Okay. So let's thank our speaker.
[applause]