Lattices
Definition and Related Problems
Lattices
Definition (lattice): Given a basis v1,...,vn ∈ Rn,
the lattice L = L(v1,...,vn) is
{ v | ∃ a1,...,an ∈ Z s.t. v = a1v1 + ... + anvn }
Illustration - A lattice in R2
“Recipe”:
1. Take two linearly independent vectors in R2.
2. Close them under addition and under multiplication by an integer scalar.
Each point corresponds to a vector in the lattice.
Shortest Vector Problem
SVP (Shortest Vector Problem):
Given a lattice L, find s ≠ 0 ∈ L s.t. for any x ≠ 0 ∈ L, ||x|| ≥ ||s||.
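The definition can be made concrete with a minimal brute-force sketch. The coefficient bound and the example basis below are illustrative assumptions, not part of the lecture (a true SVP solver needs a provable bound on the coefficients):

```python
import itertools
import numpy as np

def shortest_vector_bruteforce(basis, bound=3):
    """Enumerate integer combinations a with |a_i| <= bound and return the
    shortest non-zero lattice vector found (illustration only)."""
    basis = np.asarray(basis, dtype=float)
    best, best_len = None, float("inf")
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=len(basis)):
        if all(c == 0 for c in coeffs):
            continue  # SVP asks for a non-zero vector
        v = np.array(coeffs) @ basis
        if np.linalg.norm(v) < best_len:
            best, best_len = v, np.linalg.norm(v)
    return best

# Example lattice spanned by (2,0) and (1.5,1): the shortest vector is b2 - b1.
v = shortest_vector_bruteforce([[2.0, 0.0], [1.5, 1.0]])
```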
The Shortest Vector - Examples
What’s the shortest vector in the lattice spanned by
the two given vectors?
Closest Vector Problem
CVP (Closest Vector Problem):
Given a lattice L and a vector y ∈ Rn, find a v ∈ L s.t. ||y − v|| is minimal.
(Which lattice vector is closest to the marked vector?)
Lattice Approximation Problems
 g-Approximation version: find a vector y s.t. ||y|| ≤ g·shortest(L).
 g-Gap version: given L and a number d, distinguish between
– the ‘yes’ instances (shortest(L) ≤ d)
– the ‘no’ instances (shortest(L) > g·d)
If the g-Gap problem is NP-hard, then a polynomial-time g-approximation algorithm implies P=NP.
Lattice Problems - Brief History
 [Dirichlet, Minkowski]: no CVP algorithms…
 [LLL]: approximation algorithm for SVP, factor 2^(n/2)
 [Babai]: extension to CVP
 [Schnorr]: improved factor, 2^(n/lg n), for both CVP and SVP
 [vEB]: CVP is NP-hard
 [ABSS]: approximating CVP is
– NP-hard to within any constant
– almost-NP-hard to within an almost-polynomial factor
Lattice Problems - Recent History
 [Ajtai96]: worst-case/average-case reduction for SVP.
 [Ajtai-Dwork96]: cryptosystem.
 [Ajtai97]: SVP is NP-hard (for randomized reductions).
 [Micc98]: SVP is NP-hard to approximate to within some constant factor.
 [DKRS]: CVP is NP-hard to within an almost-polynomial factor.
 [LLS]: approximating CVP to within n^1.5 is in coNP.
 [GG]: approximating SVP and CVP to within √n is in NP ∩ coAM.
“Ohh... but isn’t that the same as CVP with y=0?”
“Why is SVP not just an annoying technicality?...”
CVP/SVP - which is easier?
Reminder:
 Definition (lattice): Given a basis v1,...,vn ∈ Rn, the lattice L = L(v1,...,vn) is {Σ aivi | ai integers}.
 SVP (Shortest Vector Problem): find the shortest non-zero vector in L.
 CVP (Closest Vector Problem): given a vector y ∈ Rn, find a v ∈ L closest to y.
Trying to Reduce SVP to CVP
Try #1: invoke the CVP oracle with y = 0.
 SVP on B finds (c1,...,cn) ≠ 0 ∈ Zn which minimizes ||c1b1+...+cnbn||.
 CVP on (B,y) finds (c1,...,cn) ∈ Zn which minimizes ||c1b1+...+cnbn − y||.
...But with y = 0 this will also yield c = 0, i.e. s = 0...
Geometrical Intuition
The obvious reduction: the shortest vector is the difference between (say) b2 and the lattice vector closest to b2, besides b2 itself (in the illustration, the shortest vector is b2 − 2b1).
Thus we would like to somehow “extract” b2 from the lattice, so that the oracle for CVP will be forced to find the non-trivial vector closest to b2.
...This is not as simple as it sounds...
...This is not as simple as it sounds...
That’s to
not really
a problem!SVP to CVP
Trying
Reduce
Since one of the coefficients
of the SV must be odd (Why?),
The
trick:
b1allwith
we can
do thisreplace
process for
the vectors in the basis and
take the shortest result!
BSVP
B(1)
b
y1
0
CVP
2b1 in the basis
Since c1Z, s0
(2c(c1 1-1,c2,...,cn)
0c
cc
1
Finds (c1,...,cn)Zn
But in this way we
which minimizes
only discover the
||||c1c2b
+
0y1 || shortest vector among
1b11+...+c
nbn- b
SVP
those with odd
coefficients for b1
Finds (c1,...,cn)0Zn which minimizes
|| c1b1+...+cnbn ||
s
Geometrical Intuition
By doubling a vector in the basis, we extract it from the lattice without changing the lattice “too much”:
 L’’ = L(2b1, b2) ⊆ L: the vector closest to b1 in the original lattice may be lost in the new lattice.
 L’ = L(b1, 2b2) ⊆ L: the vector closest to b2 in the original lattice is also in the new lattice.
But we risk losing the closest point in the process.
It’s a calculated risk though: one of the closest points has to survive...
The Reduction of g-SVP to g-CVP
Input: a pair (B,d), B = (b1,...,bn) and d ∈ R
for j = 1 to n do
invoke the CVP oracle on (B(j), bj, d)
Output: the OR of all oracle replies,
where B(j) = (b1,...,bj−1, 2bj, bj+1,...,bn)
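The loop above, in its search version, can be sketched as follows. The brute-force CVP stand-in and the example basis are illustrative assumptions (a real reduction would call the given g-CVP oracle instead):

```python
import itertools
import numpy as np

def cvp_bruteforce(basis, target, bound=4):
    """Stand-in CVP oracle: brute-force closest lattice vector to `target`."""
    basis = np.asarray(basis, dtype=float)
    best, best_d = None, float("inf")
    for c in itertools.product(range(-bound, bound + 1), repeat=len(basis)):
        v = np.array(c) @ basis
        d = np.linalg.norm(v - target)
        if d < best_d:
            best, best_d = v, d
    return best

def svp_via_cvp(basis):
    """For each j replace b_j by 2*b_j (so b_j leaves the lattice), ask the
    CVP oracle for the vector closest to b_j, and keep the shortest
    difference over all j; the difference is a non-zero vector of L."""
    basis = np.asarray(basis, dtype=float)
    best, best_len = None, float("inf")
    for j in range(len(basis)):
        Bj = basis.copy()
        Bj[j] = 2 * basis[j]          # b_j is no longer in L(B^(j))
        v = cvp_bruteforce(Bj, basis[j])
        s = v - basis[j]              # non-zero: odd coefficient on b_j
        if np.linalg.norm(s) < best_len:
            best, best_len = s, np.linalg.norm(s)
    return best

s = svp_via_cvp([[2.0, 0.0], [1.5, 1.0]])   # shortest vector of the example lattice
```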
Hardness of SVP & applications
 Finding, and even approximating, the shortest vector is hard.
 Next we will see how this fact can be exploited for cryptography.
 We start by explaining the general framework: a well-known cryptographic method called a public-key cryptosystem.
Public-Key Cryptosystems and brave spies...
The brave spy wants to send the HQ a secret message (“The enemy will attack within a week”). In the HQ a brand new lock was developed for such cases. ...But the enemy is in between, and THEY can see and duplicate whatever is transferred...
The solution: the HQ sends the spy the open lock (HQ → spy). The spy can easily lock the message with it, without the key. And now the locked message can be sent to the HQ (spy → HQ) without fear, and read only there.
Public-Key Cryptosystem (76)
Requirements:
Two poly-time computable functions Encr and Decr, s.t.:
1. ∀x: Decr(Encr(x)) = x
2. Given Encr(x) only, it is hard to find x.
Usage:
Make Encr public so anyone can send you messages; keep Decr private.
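As a toy instance of this interface (deliberately not the lattice-based system built next, and utterly insecure at this size), textbook RSA with tiny parameters illustrates requirement 1:

```python
# Toy public-key sketch: textbook RSA with tiny, insecure parameters,
# shown only to illustrate the Encr/Decr interface.
n, e = 3233, 17       # public key: n = 61*53, gcd(e, phi(n)) = 1
d = 2753              # private key: d = e^(-1) mod phi(n), phi(n) = 60*52 = 3120

def encr(x):
    return pow(x, e, n)   # anyone can encrypt with the public key

def decr(c):
    return pow(c, d, n)   # only the holder of d can decrypt

assert decr(encr(123)) == 123   # requirement 1: Decr(Encr(x)) = x
```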
The Dual Lattice
L* = { y | ∀x ∈ L: y·x ∈ Z }
Given a basis {v1,...,vn} for L, one can construct, in poly-time, a basis {u1,…,un} for L*:
ui·vj = 0 (i ≠ j)
ui·vi = 1
In other words, U = (V^t)^(−1), where the columns of V are v1,...,vn and the columns of U are u1,…,un.
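The construction U = (V^t)^(−1) is a one-liner numerically; the example basis is an illustrative assumption (rows are used as basis vectors here, so the duality relation reads U·V^t = I):

```python
import numpy as np

# Basis of a lattice in R^2 (rows are the basis vectors v_i).
V = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# Dual basis (rows are u_i): U = (V^t)^(-1), so <u_i, v_j> = delta_ij.
U = np.linalg.inv(V.T)
assert np.allclose(U @ V.T, np.eye(2))

# Any dual-lattice point has an integer inner product with any lattice point.
y = np.array([2, -1]) @ U   # a point of L*
x = np.array([3, 5]) @ V    # a point of L
assert abs(round(float(y @ x)) - float(y @ x)) < 1e-9
```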
Shortest Vector - Hidden Hyperplane
Observation: the shortest vector induces distinct layers in the dual lattice:
Hk = { y | y·s = k }   (H0 = {y | y·s = 0}, H1 = {y | y·s = 1}, ...)
The distance between adjacent hyperplanes is 1/||s||.
(s – shortest vector, H – hidden hyperplane)
Encrypting
Given the lattice L, the encryption is polynomial:
 Encoding 0: choose a random point.
 Encoding 1: (1) choose a random lattice point; (2) perturb it.
(s – shortest vector, H – hidden hyperplane)
Decrypting
Given s, decryption can be carried out in polynomial time; otherwise it is hard.
 Decoding 0: the projection of the point is not close to any of the hyperplanes.
 Decoding 1: the projection is close to one of the hyperplanes.
(s – shortest vector, H – hidden hyperplane)
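A toy numeric sketch of the hidden-hyperplane idea (this is not the actual Ajtai-Dwork scheme; the vector s, the thresholds, and all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.array([3.0, 4.0])   # stand-in for the secret short vector; ||s|| = 5

def encrypt(bit):
    """A '1' is a slightly perturbed point of some layer {y : y.s = k};
    a '0' is a uniformly random point."""
    if bit == 0:
        return rng.uniform(-10, 10, size=2)
    k = int(rng.integers(-10, 11))              # pick a random layer H_k
    s_perp = np.array([-s[1], s[0]]) / 5.0      # unit vector along the layer
    y = k * s / s.dot(s) + rng.uniform(-5, 5) * s_perp
    return y + rng.normal(scale=0.01, size=2)   # small perturbation

def decrypt(y):
    """Knowing s, check whether y projects close to an integer layer."""
    f = float(y @ s)
    return 1 if abs(f - round(f)) < 0.1 else 0
```

With s kept secret, telling the two distributions apart is exactly the hidden-hyperplane problem; a random point decodes to 0 only with constant probability, which a real scheme fixes by repetition.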
GG
 Approximating SVP and CVP to within √n is in NP ∩ coAM.
 Hence, if these problems are shown NP-hard, the polynomial-time hierarchy collapses.
The World According to Lattices
[Figure: the approximation-factor axis for CVP and SVP: 1, 1+1/n, O(1), O(log n), 2, n^(1/lglg n), √n, ..., 2^(n/lg n). NP-hardness results (Ajtai, Micciancio, DKRS) reach up to n^(1/lglg n); at √n the problems are in NP ∩ co-AM (GG); poly-time approximation (LLL) is known at 2^(n/lg n).]
OPEN PROBLEMS:
 Is g-SVP NP-hard to within √n?
 Can LLL be improved?
 For super-polynomial, sub-exponential factors: is it a class of its own?
Approximating SVP in Poly-Time
The LLL Algorithm
What’s coming up next?
 To within what factor can SVP be approximated?
 In this chapter we describe a polynomial-time algorithm for approximating SVP to factor 2^((n−1)/2).
 We will later see that approximating the shortest vector to within √(2/(1+2ε)), for some ε > 0, is NP-hard.
The Fundamental Insight(?)
Assume an orthogonal basis for a lattice. The shortest vector in this lattice is…
Illustration: for x = 2v1 + v2 (with v1 ⊥ v2), ||x|| > ||2v1|| and ||x|| > ||v2||.
The Fundamental Insight(!)
Assume an orthogonal basis for a lattice. The shortest vector in this lattice is the shortest basis vector.
Why?
 If a1,...,ak ∈ Z and v1,...,vk are orthogonal, then
||a1v1+...+akvk||² = a1²·||v1||² + ... + ak²·||vk||²
 Therefore, if vi is the shortest basis vector and there exists a 1 ≤ j ≤ k s.t. aj ≠ 0, then
||a1v1+...+akvk||² ≥ (a1²+...+ak²)·||vi||² ≥ ||vi||²
 ⟹ no non-zero lattice vector is shorter than vi
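The insight is easy to verify numerically for a hypothetical orthogonal basis (the enumeration range is an illustrative assumption):

```python
import itertools
import numpy as np

# An orthogonal basis: by the Pythagorean identity above, the shortest
# basis vector is also the shortest non-zero lattice vector.
basis = np.array([[3.0, 0.0],
                  [0.0, 2.0]])

lengths = [np.linalg.norm(np.array(c) @ basis)
           for c in itertools.product(range(-5, 6), repeat=2)
           if c != (0, 0)]
assert min(lengths) == min(np.linalg.norm(b) for b in basis)  # both equal 2.0
```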
What if we don’t get an orthogonal basis?
Remember the good old Gram-Schmidt procedure: take a vector and subtract its projections on each one of the vectors already taken.
Gram-Schmidt: v1,...,vk (a basis for a subspace of Rn) → v1*,...,vk* (an orthogonal basis for the same subspace of Rn).
Projections
Computing the projection w of v on u:
||w|| = ||v||·cos(θ) = ⟨v,u⟩ / ||u||
w = ||w|| · u/||u|| = (⟨v,u⟩ / ||u||²) · u
Formally: The Gram-Schmidt Procedure
 Input: a basis {v1,...,vk} of some subspace of Rn.
 Output: an orthogonal basis {v1*,...,vk*}, s.t. for every 1 ≤ i ≤ k,
span({v1,...,vi}) = span({v1*,...,vi*}).
 Process: the procedure starts with v1* = v1. Each iteration (1 < i ≤ k) adds a vector orthogonal to the subspace already spanned:
vi* = vi − Σj=1..i−1 (⟨vi, vj*⟩ / ||vj*||²) · vj*
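The procedure translates directly into code; the example basis is an illustrative assumption:

```python
import numpy as np

def gram_schmidt(basis):
    """Classical Gram-Schmidt (no normalization), as used later by LLL:
    v_i* = v_i - sum_{j<i} mu_ij * v_j*, mu_ij = <v_i, v_j*>/||v_j*||^2."""
    basis = np.asarray(basis, dtype=float)
    ortho = np.zeros_like(basis)
    for i, v in enumerate(basis):
        ortho[i] = v
        for j in range(i):
            mu = np.dot(v, ortho[j]) / np.dot(ortho[j], ortho[j])
            ortho[i] -= mu * ortho[j]
    return ortho

B = np.array([[3.0, 1.0], [2.0, 2.0]])
Bs = gram_schmidt(B)
assert abs(float(np.dot(Bs[0], Bs[1]))) < 1e-9   # the output is orthogonal
assert np.allclose(Bs[0], B[0])                  # v1* = v1
```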
Example
[Figure: v3 is orthogonalized against the subspace spanned by v1*, v2*; the projection of v3 on v2* is subtracted (here v3 ⊥ v1, only to simplify the presentation), leaving v3*.]
Wishful Thinking
Unfortunately, the basis Gram-Schmidt constructs doesn’t necessarily span the same lattice.
Example: if the projection of v2 on v1 is 1.35v1, then v1 and v2 − 1.35v1 don’t span the same lattice as v1 and v2.
As a matter of fact, not every lattice even has an orthogonal basis...
Nevertheless...
 Invoking Gram-Schmidt on a lattice basis produces a lower bound on the length of the shortest vector in this lattice.
Lower Bound on the Length of the Shortest Vector
Claim: Let vi* be the shortest vector in the basis constructed by Gram-Schmidt. For any non-zero lattice vector x: ||vi*|| ≤ ||x||.
Proof: There exist z1,...,zk ∈ Z and r1,...,rk ∈ R such that
x = Σj=1..k zjvj = Σj=1..k rjvj*.
Let m be the largest index for which zm ≠ 0. For every j < m the projection of vj on vm* is 0, while the projection of vm on vm* is vm*; hence rm = zm, and thus
||x|| ≥ |rm|·||vm*|| = |zm|·||vm*|| ≥ ||vm*|| ≥ ||vi*||. □
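The claim can be spot-checked numerically. This sketch inlines the single Gram-Schmidt step needed for a hypothetical 2-dimensional basis; the enumeration range is an illustrative assumption:

```python
import itertools
import numpy as np

B = np.array([[3.0, 1.0], [2.0, 2.0]])

# One Gram-Schmidt step: v2* = v2 - mu21 * v1*.
Bs = B.copy()
Bs[1] -= (np.dot(B[1], Bs[0]) / np.dot(Bs[0], Bs[0])) * Bs[0]

# min_i ||v_i*|| lower-bounds every enumerated non-zero lattice vector.
lower = min(np.linalg.norm(b) for b in Bs)
for c in itertools.product(range(-4, 5), repeat=2):
    if c != (0, 0):
        assert np.linalg.norm(np.array(c) @ B) >= lower - 1e-9
```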
Compromise
 Still, we’ll have to settle for less than an orthogonal basis:
 We’ll construct a reduced basis.
 Reduced bases are composed of “almost” orthogonal and relatively short vectors.
 They will therefore suffice for our purpose.
Reduced Basis
Definition (reduced basis):
A basis {v1,…,vn} of a lattice is called reduced if, with μij = ⟨vi, vj*⟩ / ||vj*||²:
(1) ∀ 1 ≤ j < i ≤ n: |μij| ≤ ½
(2) ∀ 1 ≤ i < n: ¾||vi*||² ≤ ||vi+1* + μi+1,i·vi*||²
(Condition (2) compares the projection of vi on span{vi*,...,vn*} with that of vi+1.)
Properties of Reduced Basis (1)
Claim: If a basis {v1,...,vn} is reduced, then for every 1 ≤ i < n:
½||vi*||² ≤ ||vi+1*||²
Proof: Since {v1,...,vn} is reduced,
¾||vi*||² ≤ ||vi+1* + μi+1,i·vi*||²
= ||vi+1*||² + μi+1,i²·||vi*||²   (vi* and vi+1* are orthogonal)
≤ ||vi+1*||² + ¼||vi*||²   (since |μi+1,i| = |⟨vi+1,vi*⟩/⟨vi*,vi*⟩| ≤ ½),
and the claim follows. □
Corollary: By induction on i−j, for all i ≥ j:
(½)^(i−j)·||vj*||² ≤ ||vi*||²
Properties of Reduced Basis (2)
Claim: If a basis {v1,...,vn} is reduced, then for every 1 ≤ j ≤ i ≤ n:
(½)^(i−1)·||vj||² ≤ ||vi*||²
Proof: Since vj = vj* + Σk=1..j−1 μjk·vk* and {v1*,...,vj*} is an orthogonal basis,
||vj||² = ||vj*||² + Σk=1..j−1 μjk²·||vk*||² ≤ ||vj*||² + ¼·Σk=1..j−1 ||vk*||²   (since |μjk| ≤ ½).
By the previous corollary, ||vk*||² ≤ 2^(j−k)·||vj*||², so rearranging the terms and summing the geometric sum,
||vj||² ≤ ||vj*||²·(1 + ¼·Σk=1..j−1 2^(j−k)) = ||vj*||²·(1 + ¼·(2^j − 2)) ≤ 2^(j−1)·||vj*||²,
which implies, again by the corollary, that
||vj||² ≤ 2^(j−1)·||vj*||² ≤ 2^(j−1)·2^(i−j)·||vi*||² = 2^(i−1)·||vi*||².
And this is in fact what we wanted to prove. □
Approximation for SVP
The previous claim, together with the lower bound min_i ||vi*|| on the length of the shortest vector, yields:
the length of the first vector of any reduced basis provides at least a 2^((n−1)/2) approximation for the length of the shortest vector
(||v1||² ≤ 2^(i−1)·||vi*||² for every i, so ||v1||² ≤ 2^(n−1)·min_i ||vi*||²).
It now remains to show that any basis can be reduced in polynomial time.
Reduced Basis
 Recall that the definition of a reduced basis is composed of two requirements:
– ∀ 1 ≤ j < i ≤ n: |μij| ≤ ½
– ∀ 1 ≤ i < n: ¾||vi*||² ≤ ||vi+1* + μi+1,i·vi*||²
 We introduce two types of “lattice-preserving” transformations, reduction and swap, which will allow us to reduce any given basis.
First Transformation: Reduction
The transformation (for 1 ≤ l < k ≤ n), where ⌈μkl⌋ denotes μkl rounded to the nearest integer:
vk ← vk − ⌈μkl⌋·vl,   vi ← vi for all i ≠ k
The consequences:
vi* ← vi* for all 1 ≤ i ≤ n
μkj ← μkj − ⌈μkl⌋·μlj for 1 ≤ j < l
μkl ← μkl − ⌈μkl⌋
μij ← μij for all other 1 ≤ j < i ≤ n
Using this transformation we can ensure |μij| ≤ ½ for all j < i.
Second Transformation: Swap
The transformation (for 1 ≤ k < n):
vk ↔ vk+1,   vi ← vi for all i ≠ k, k+1
The important consequence:
vk* ← vk+1* + μk+1,k·vk*
If we use this transformation for a k which satisfies
¾||vk*||² > ||vk+1* + μk+1,k·vk*||²,
the value of ||vk*||² shrinks by a factor smaller than ¾.
Algorithm for Basis Reduction
 Use the reduction transformation to obtain |μij| ≤ ½ for all i > j.
 Apply the swap transformation for some k which satisfies
¾||vk*||² > ||vk+1* + μk+1,k·vk*||².
 Stop if there is no such k.
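The whole procedure can be sketched compactly in the spirit of the two transformations above; recomputing the full Gram-Schmidt data after every change is wasteful but keeps the code short, and the example basis is an illustrative assumption:

```python
import numpy as np

def lll_reduce(basis, delta=0.75):
    """Textbook LLL with the 3/4 condition; the first vector of the output
    is a 2^((n-1)/2) approximation of the shortest lattice vector."""
    B = np.array(basis, dtype=float)
    n = len(B)

    def gso(B):
        Bs, mu = B.copy(), np.zeros((n, n))
        for i in range(n):
            for j in range(i):
                mu[i][j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
                Bs[i] -= mu[i][j] * Bs[j]
        return Bs, mu

    Bs, mu = gso(B)
    k = 1
    while k < n:
        # Reduction transformation: make |mu_kj| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            if abs(mu[k][j]) > 0.5:
                B[k] -= int(round(float(mu[k][j]))) * B[j]
                Bs, mu = gso(B)
        # Swap transformation when 3/4*||v_k*||^2 > ||v_{k+1}* + mu*v_k*||^2.
        if np.dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]
            Bs, mu = gso(B)
            k = max(k - 1, 1)
    return B

reduced = lll_reduce([[1.0, 2.0], [3.0, 4.0]])   # reduces to +-(1,0), +-(0,2)
```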
Termination
Given a basis {v1,...,vn}, vi ∈ Zn (i = 1,...,n), of a lattice, define:
di = ||v1*||² · ... · ||vi*||²
D = d1 · ... · dn−1
di is the square of the determinant of the rank-i lattice spanned by v1,...,vi. Thus D ∈ N.
Termination (cont.)
How do changes in the basis affect D?
 The reduction transformation doesn’t change the vj*-s, and therefore does not affect D.
 Suppose we apply the swap transformation for i. For all j ≠ i, dj is not affected by the swap (for all j < i, vj* is unchanged), while di shrinks by a factor smaller than ¾. Thus D is reduced by a factor smaller than ¾.
Polynomial-Time
 Since D ∈ N and its value decreases by a constant factor with every iteration, the algorithm necessarily terminates.
 Note that D’s initial value is poly-time computable, which implies that the total number of iterations is also polynomial.
 It is also true that each iteration takes polynomial time. The proof is omitted here.
Summary
 We have seen that a reduced basis gives us a 2^((n−1)/2) approximation for SVP.
 We have also presented a polynomial-time algorithm for the construction of such a basis ([LLL82]).
Hardness of Approx. SVP [MICC]
GapSVPg:
Input: (B,d) where B is a basis for a lattice in Rn and d ∈ R.
Yes instances: (B,d) s.t. ∃z ∈ Zn, z ≠ 0: ||Bz|| ≤ d.
No instances: (B,d) s.t. ∀z ∈ Zn, z ≠ 0: ||Bz|| > g·d.
GapCVP’g:
Input: (B,y,d) where B ∈ Z^(n×k), y ∈ Zn, and d ∈ R.
Yes instances: (B,y,d) s.t. ∃z ∈ {0,1}^k: ||Bz − y|| ≤ d.
No instances: (B,y,d) s.t. ∀z ∈ Zk, ∀i ∈ Z: ||Bz − i·y|| > g·d.
Reducing CVP to SVP
We will use the fact that GapCVP’c is NP-hard for every constant c, and give a reduction from GapCVP’ with c = √(2/ε) to GapSVP with g = √(2/(1+2ε)), for every ε > 0.
A Robust Lattice: No Short Vectors
L = [ D ; η·P ] ∈ R^((m+1)×m), where
D = diag(√ln p1, ..., √ln pm) (an m×m diagonal matrix),
P = (ln p1, ..., ln pm) (a single row), η a scaling parameter,
p1,…,pm – the m smallest prime numbers.
A Robust Lattice: No Short Vectors
Lemma: ∀z ∈ Zm, z ≠ 0: ||Lz||² > 2·ln η.
Proof: Let z ∈ Zm be a non-zero vector. Define
g+ = Π_{zi>0} pi^zi,   g− = Π_{zi<0} pi^(−zi),   g = g+·g−.
g+ and g− are integers, and z ≠ 0 ⟹ g+ ≠ g− ⟹ |g+ − g−| ≥ 1.
||Dz||² = Σi zi²·ln pi ≥ Σi |zi|·ln pi = ln g.
A Robust Lattice: No Short Vectors (2)
Proof (cont.):
|Pz| = |Σi zi·ln pi| = |ln g+ − ln g−| = ln(1 + |g+ − g−| / min(g+,g−))
≥ ln(1 + 1/min(g+,g−)) ≥ ln(1 + 1/√g)   (since min(g+,g−) ≤ √g).
⟹ ||Lz||² = ||Dz||² + η²·(Pz)² ≥ ln g + η²·ln²(1 + 1/√g),
which is a convex function of g with minimum around g = η².
⟹ ||Lz||² > 2·ln η. □
A Robust Lattice: Many Close Vectors
[Figure: the same matrix L = [D; η·P], together with the target vector s whose only non-zero coordinate is the last one, η·ln b.]
A Robust Lattice: Many Close Vectors
Lemma: ∀z ∈ {0,1}m, if
g = Πi pi^zi ∈ [b, b + b/η],
then ||Lz − s||² ≤ ln b + 2, where s = (0,...,0, η·ln b)^t.
Proof: For a 0/1 vector, ||Dz||² = Σi zi·ln pi = Pz = ln g
≤ ln b + ln(1 + 1/η) ≤ ln b + 1.
⟹ ||Lz − s||² = ||Dz||² + η²·(ln g − ln b)²
≤ ln b + 1 + η²·ln²(1 + 1/η) ≤ ln b + 2. □
Scheme: The End Result
[Figure: the final SVP instance appends the CVP’ instance to the robust lattice:
V = [ L , −s ; α·B·C , −α·y ],
where B ∈ Z^(n×k) and y come from the CVP’ instance, C ∈ {0,1}^(k×m), and α is a scaling parameter.]
The End Result
Lemma: Let Z ⊆ {0,1}m be a set of vectors each containing exactly n 1’s. If Z is sufficiently large, and C ∈ {0,1}^(k×m) is chosen by setting each entry independently at random with probability p = 1/(4nk), then with high probability
∀x ∈ {0,1}k ∃z ∈ Z: Cz = x.
The End Result (2)
Lemma: For any constant ε > 0 there exists a probabilistic polynomial-time algorithm that on input 1^k computes a lattice L ∈ R^((m+1)×m), a vector s ∈ R^(m+1) and a matrix C ∈ Z^(k×m) s.t., with probability arbitrarily close to 1 (after rescaling):
1. ∀z ∈ Zm, z ≠ 0: ||Lz||² > 2.
2. ∀x ∈ {0,1}k ∃z ∈ Zm s.t. Cz = x and ||Lz − s||² ≤ 1 + ε.
The End Result (3)
Proof: Let ε be a constant, 0 < ε < 1, let k be a sufficiently large integer, and let η = b^(1−ε).
⟹ ∀z ∈ Zm, z ≠ 0: ||Lz||² > 2·ln η = 2(1−ε)·ln b.
Choose m and n as suitable polynomials in k.
Let Z be the set of vectors z ∈ {0,1}m containing exactly n 1’s s.t.
g = Πi pi^zi ∈ [b, b + b/η].
⟹ ∀z ∈ Z: ||Lz − s||² ≤ ln b + 2.
The End Result (4)
Proof (cont.): Let S be the set of all products of n distinct primes ≤ pm; each such product corresponds to a vector z ∈ {0,1}m with exactly n 1’s.
By the prime number theorem, ∀ 1 ≤ i ≤ m: pi < 2m·ln m = M. ⟹ S ⊆ [1, M^n], and
|S| = (m choose n) ≥ (m/n)^n.
Divide [1, M^n] into intervals of the form [b, b + b/η] (relative width b^(ε−1)); there are O(M^((1−ε)n)) such intervals.
⟹ they contain an average of at least (m^ε/(2n·ln m))^n elements of S each.
The End Result (5)
Proof (cont.): Choose a random element of S and select the interval containing it. The probability that this interval contains significantly fewer elements of S than the average is at most 2^(−n).
⟹ for all sufficiently large k we can assume that Z (the vectors falling in the chosen interval) is large enough for the previous lemma, and therefore, with probability arbitrarily close to 1,
∀x ∈ {0,1}k ∃z ∈ Z: Cz = x. □
Sum Up
Theorem: The shortest vector in a lattice is NP-hard to approximate to within any constant factor less than √2.
Proof: The proof is by reduction from GapCVP’c to GapSVPg, where c = √(2/ε) and g = √(2/(1+2ε)).
Let (B,y,d) be an instance of GapCVP’c. We define an instance (V,t) of GapSVPg s.t.
– (B,y,d) is a Yes instance ⟹ (V,t) is a Yes instance.
– (B,y,d) is a No instance ⟹ (V,t) is a No instance.
Let L, s, C and V be as defined above, where α = √ε/d, and let t = √(1+2ε).
Completeness
Proof (cont.):
Assume that (B,y,d) is a Yes instance of GapCVP’c:
∃x ∈ {0,1}k: ||Bx − y|| ≤ d.
From the previous lemma, ∃z ∈ Zm s.t. Cz = x and ||Lz − s||² ≤ 1 + ε.
Define the vector u = (z, 1)^t.
||Vu||² = ||Lz − s||² + α²·||Bx − y||² ≤ (1 + ε) + ε = 1 + 2ε = t².
⟹ (V,t) is a Yes instance of GapSVPg.
Soundness
Proof (cont.):
Assume that (B,y,d) is a No instance of GapCVP’c, and let
u = (z, w)^t ∈ Z^(m+1), u ≠ 0, with x = Cz. Then
||Vu||² = ||Lz − w·s||² + α²·||Bx − w·y||².
If w = 0 then z ≠ 0, and ||Lz − w·s||² = ||Lz||² > 2.
If w ≠ 0 then α²·||Bx − w·y||² > α²·c²·d² = 2.
⟹ ||Vu||² > 2 = (g·t)².
⟹ (V,t) is a No instance of GapSVPg.
Ajtai: SVP Instances Hard on Average
[Diagram: approximating SVP (factor n^c) on random instances from a specific constructible distribution suffices for solving, in the worst case: approximating the Shortest Basis (factor n^(10+c)), approximating SVP (factor n^(10+c)), and finding the Unique-SVP.]
Average-Case Distribution
 Pick an n×m matrix A = (v1 v2 … vm), with coefficients uniformly ranging over {0,…,q−1}, where m = c1·n·log n and q = n^(c2).
Def: Λ(A) = { x ∈ Zm | Ax ≡ 0 (mod q) }
Denote by Λ'_{n,c1,c2} this distribution.
A mod-q lattice: (v1 v2 v3 v4)
2v1+v4
v2
(2,0,0,1)
v3
(1,1,1,0)
v1
q(a,b,c,d)
v4
1
q
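A minimal sketch of sampling this distribution and testing membership in Λ(A); the toy sizes are illustrative assumptions (the theorem needs m ≈ c1·n·log n and q ≈ n^(c2)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 3, 10, 11                    # toy sizes, for illustration only
A = rng.integers(0, q, size=(n, m))    # uniform entries in {0,...,q-1}

def in_lattice(x):
    """Membership test for Lambda(A) = { x in Z^m : A x = 0 (mod q) }."""
    return bool(np.all(A @ x % q == 0))

assert in_lattice(np.zeros(m, dtype=int))          # 0 is always a lattice point
assert in_lattice(q * np.eye(m, dtype=int)[0])     # q*e_1 is always in Lambda(A)
```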
A Lattice with a Short Vector
For Λ_{n,c1,c2} (which generates a lattice with a short vector) only v1,…,vm−1 are chosen randomly. vm is generated by choosing δ1,…,δm−1 ∈R {0,1} and setting
vm = −Σi=1..m−1 δi·vi (mod q),
so that (δ1,…,δm−1, 1) is a short vector of Λ(A).
Lemma: For a sufficiently large m, the distribution Λ_{n,c1,c2} is exponentially close to the distribution Λ'_{n,c1,c2}.
sh & bl
Def: sh(L) – the length of the shortest vector in L.
Def: Let b1,…,bn ∈ L. The length of {b1,…,bn} is defined by
len({b1,...,bn}) = max_{1≤i≤n} ||bi||.
Def: bl(L) – the length of the shortest basis of L.
SVP & BL
Def: SVP_f(Λ_{n,c1,c2}) – for at least ½ of the lattices L ∈ Λ_{n,c1,c2}, find a vector of length at most f(n)·sh(L).
Def: BL_f(L) – find a basis of length at most f(n)·bl(L).
BL  SVP()
Thm: There are constants c1,c2,c3 such that
BLnc3 L    SVPn n,c1 ,c2


Proof:
Assuming a procedure for SVP() we construct
a poly-time algorithm that finds a short basis
for any lattice L.
Lemma: Given a set of linearly independent
elements r1,…,rn  L we can construct, in
polynomial time, a basis s1,…,sn of L such that
s1,..., sn   n r1,..., rn 
Halving M
Let a1,…,an ∈ L be a set of independent elements, and let M = len({a1,...,an}).
The previous lemma shows that if M ≤ n^(c3−1)·bl(L), we can find a short basis.
In case M > n^(c3−1)·bl(L), we will construct another set of linearly independent elements b1,…,bn ∈ L such that len({b1,...,bn}) ≤ M/2.
Iterating this process for log M steps, we can find a set of linearly independent c1,…,cn ∈ L such that len({c1,...,cn}) ≤ n^(c3−1)·bl(L).
From the linearly independent a1,…,an ∈ L we obtain linearly independent f1,…,fn ∈ L s.t. len({f1,...,fn}) ≤ n³·M, and W = P(f1,…,fn) (the parallelepiped they span) is almost a cube [i.e. the distance of each vertex of W from the corresponding vertex of a fixed cube is small relative to M].
Now we cut W into q^n parallelepipeds.
We take a random sequence of lattice points ξ1,…,ξm, and for each ξi find the small parallelepiped that contains it; let vi be its corner.
v1,…,vm define a lattice from Λ'_{n,c1,c2}. Therefore we can find, with probability ≥ ½, a vector h ∈ Zm s.t. ||h|| ≤ n and Σi hi·vi ≡ 0 (mod q).
Proposition: With positive probability, u = Σi=1..m hi·ξi ≠ 0 and ||u|| ≤ M/2.
SVP  BL
Lemma: There is an absolute constant c such that
1  sh(L*)bl(L)  cn2.
Theorem: SVP c3 L    BL c3 L  .
n
n
Proof: If we can get an estimate on bl(L*), then by
the above lemma, we can obtain an estimate on
sh((L*)*) = sh(L).
Hardness of approx. CVP [DKRS]
g-CVP is NP-hard for g = n^(1/log log n) (n – lattice dimension).
This improves:
– the hardness (NP-hardness instead of quasi-NP-hardness)
– the non-approximation factor (from 2^((log n)^(1−ε)))
 The [ABSS] reduction uses PCP to show that CVP is
– NP-hard to approximate for g = O(1)
– quasi-NP-hard for g = 2^((log n)^(1−ε)), by repeated blow-up
(the barrier for this technique: 2^((log n)^(1−ε)), for constant ε > 0).
 SSAT: a new, non-PCP characterization of NP, which is NP-hard to approximate to within g = n^(1/log log n).
SAT
Input: Φ = f1,...,fn, Boolean functions (‘tests’) over variables x1,...,xn′ with range {0,1}.
Problem: Is Φ satisfiable?
Thm (Cook-Levin): SAT is NP-complete (even when depend(Φ) = 3).
SAT as a consistency problem
Input:
Φ = f1,...,fn, Boolean functions (‘tests’) over variables x1,...,xn′ with range R;
for each test, a list of satisfying assignments.
Problem: Is there an assignment to the tests that is consistent?
Example:
f(x,y,z): (0,2,7), (2,3,7), (3,1,1)
g(w,x,z): (1,0,7), (1,3,1), (3,2,2)
h(y,w,x): (0,1,0), (2,1,0), (2,1,5)
Super-Assignments
A natural assignment for f(x,y,z) picks a single satisfying assignment, e.g. A(f) = (3,1,1).
A super-assignment for f(x,y,z) is a formal integer combination of its satisfying assignments (here (1,1,2), (3,1,1), (3,2,5), (3,3,1), (5,1,2)), e.g.
SA(f) = −2·(3,1,1) + 2·(3,2,5) + 3·(5,1,2)
||SA(f)|| = |−2| + |2| + |3| = 7
Norm(SA) = Average_f ||SA(f)||
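The definitions above can be made concrete with a tiny sketch; the dictionary encoding of SA(f) is an illustrative choice, and the restriction function previews the projection SA(f)|x used for consistency:

```python
# A super-assignment maps each satisfying assignment of a test to an integer
# coefficient; its norm is the l1-norm of the coefficient vector.
SA_f = {(3, 1, 1): -2, (3, 2, 5): 2, (5, 1, 2): 3}

norm = sum(abs(c) for c in SA_f.values())
assert norm == 7   # |-2| + |2| + |3|, as in the slide

def restrict(sa, positions):
    """Project a super-assignment onto a subset of variable positions,
    summing the coefficients of assignments that agree there
    (zero sums cancel and disappear)."""
    out = {}
    for assignment, coeff in sa.items():
        key = tuple(assignment[i] for i in positions)
        out[key] = out.get(key, 0) + coeff
    return {k: v for k, v in out.items() if v != 0}
```

For example, restricting SA_f to the first variable makes the coefficients of (3,1,1) and (3,2,5) cancel, leaving only the value coming from (5,1,2).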
Consistency
In the SAT case: if A(f) = (3,2,5), then its restriction to x is A(f)|x = (3).
Consistency: ∀x, ∀f,g that depend on x: A(f)|x = A(g)|x.
Consistency (for super-assignments)
SA(f) = +3·(1,1,2) − 2·(3,2,5) + 2·(3,3,1)
SA(f)|x = +3·(1) + 0·(3)
((3,2,5) and (3,3,1) agree on x, and their coefficients cancel: −2+2 = 0)
Consistency: ∀x, ∀f,g that depend on x: SA(f)|x = SA(g)|x.
g-SSAT - Definition
Input:
Φ = f1,...,fn, tests over variables x1,...,xn′ with range R;
for each test fi, a list of satisfying assignments.
Problem: Distinguish between
[Yes] There is a natural assignment for Φ.
[No] Any non-trivial consistent super-assignment is of norm > g.
Theorem: SSAT is NP-hard for g = n^(1/log log n)
(conjecture: g = n^ε, for some constant ε),
i.e. SSAT is NP-hard to approximate to within g = n^(1/log log n).
Reducing SSAT to CVP
[Figure: the CVP instance. Each test f and each of its satisfying assignments contributes a basis column; blocks of weight w force every test to choose exactly one assignment, and shared-variable blocks (e.g. for tests f(w,x) and f’(z,x), which share x) force the choices to be consistent.]
Yes → Yes: dist(L, target) = n.
No → No: dist(L, target) > g·n.
Choose w = g·n + 1.
A consistency gadget
[Figure: a block matrix over {0, w} with columns a1, a2, a3, b1, b2, b3, built so that reaching the target enforces constraints of the form a1 + a2 + a3 = 1 together with paired constraints involving the bi’s; only combinations that select assignments consistently can reach the target.]