Complexity Lower Bounds, P vs NP & Gowers Blog
P ≠ NP?
Try to prove they are different by using circuit complexity.
Define a function to be f : {0,1}^n → {±1}.
To build a circuit we define the following:
1) "Basic functions" – x_i(σ_1, …, σ_n) = (-1)^{σ_i}
2) "Basic operations":
a. f → -f
b. f, g → f ∨ g
c. f, g → f ∧ g
3) Straight-line composition: g_1, g_2, …, g_m such that each g_i is either a basic function or obtained from g_j, g_k with j, k < i by basic operations.
The definition represents a DAG (Directed Acyclic Graph).
Definition: a function f has (circuit) complexity ≤ m if ∃ a circuit g_1, …, g_m as above with g_m = f.
P/poly: all functions with a polynomial circuit complexity are our equivalent of P.
(Even a super-polynomial bound would do – compare with {f : {0,1}^n → {±1} | the circuit complexity of f is < n^{log log n}}.)
If we find h ∈ NP s.t. h ∉ P/poly then P ≠ NP.
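To make the straight-line model above concrete, here is a minimal sketch in Python of evaluating such a circuit in the {±1} convention, with -1 playing the role of TRUE (consistent with x_i(σ) = (-1)^{σ_i}); the gate encoding is mine, not from the lecture.

```python
# Evaluate a straight-line program over {+1,-1}; -1 = TRUE, so OR is min and AND is max.
# Each gate is ('var', i), ('neg', j), ('or', j, k) or ('and', j, k),
# where j, k index earlier gates (the DAG/straight-line condition j, k < i).
def eval_circuit(gates, sigma):
    vals = []
    for g in gates:
        if g[0] == 'var':                 # basic function x_i(sigma) = (-1)^{sigma_i}
            vals.append((-1) ** sigma[g[1]])
        elif g[0] == 'neg':               # f -> -f
            vals.append(-vals[g[1]])
        elif g[0] == 'or':                # -1 = TRUE, so OR picks the smaller value
            vals.append(min(vals[g[1]], vals[g[2]]))
        elif g[0] == 'and':
            vals.append(max(vals[g[1]], vals[g[2]]))
    return vals[-1]                       # the circuit computes its last gate

# x1 OR x2: a circuit of complexity 3
gates = [('var', 0), ('var', 1), ('or', 0, 1)]
assert eval_circuit(gates, [1, 0]) == -1  # x1 true  -> TRUE (-1)
assert eval_circuit(gates, [0, 0]) == 1   # both false -> FALSE (+1)
```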
For that purpose, let's define a complexity-measuring function μ.
Such μ must satisfy:
1) μ(f) = 1 if f is basic
2) μ(f) = μ(-f)
3) If μ(f), μ(g) are small then μ(f ∨ g) is small (and similarly for f ∧ g)
4) μ(h) is large for some h ∈ NP
Attempts to find such μ
Idea 1
Take the Fourier representation of f : {0,1}^n → {±1}, viewed as a vector f ∈ ℝ^{2^n}:
f = Σ_{S⊆[n]} f̂(S)·χ_S, and let ‖f̂‖_∞ = max_S |f̂(S)|.
Define μ(f) = 1/‖f̂‖_∞.
Instead of the standard basis, we use the Fourier basis:
χ_{i}(x) = (-1)^{x_i}, and ∀S ⊆ {1, …, n}: χ_S = ∏_{i∈S} χ_{i}, i.e. χ_S(x) = (-1)^{Σ_{i∈S} x_i}.
The Fourier basis is {χ_S}_{S⊆[n]}.
Problem
1 = E[f²] = ‖f‖_2² = Σ_S |f̂(S)|², so ∀f : {0,1}^n → {±1} ∃S s.t. |f̂(S)| ≥ 2^{-n/2}, i.e. μ(f) ≤ 2^{n/2}.
And the easy function f(x) = (-1)^{x_1x_2 + x_2x_3 + x_3x_4 + ⋯} also has ‖f̂‖_∞ = 2^{-n/2},
so μ is maximal on a function of low circuit complexity.
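The Fourier quantities above are easy to check by brute force for small n; a sketch (the example function is arbitrary, chosen for illustration):

```python
# Compute the Fourier coefficients f-hat(S) = E_x[f(x)(-1)^{sum_{i in S} x_i}]
# of a +-1 function by brute force, and check Parseval: sum_S f-hat(S)^2 = 1.
from itertools import product

n = 3
points = list(product([0, 1], repeat=n))
f = {x: (-1) ** (x[0] * x[1] + x[2]) for x in points}   # an arbitrary example

def fourier_coeff(S):
    return sum(f[x] * (-1) ** sum(x[i] for i in S) for x in points) / len(points)

subsets = [[i for i in range(n) if (m >> i) & 1] for m in range(2 ** n)]
coeffs = [fourier_coeff(S) for S in subsets]

assert abs(sum(c * c for c in coeffs) - 1) < 1e-9       # Parseval
assert max(abs(c) for c in coeffs) >= 2 ** (-n / 2)     # so mu(f) <= 2^{n/2}
```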
Could there be a good measure μ?
μ(f ∨ g) ≤ μ(f) + μ(g)
μ(f ∧ g) ≤ μ(f) + μ(g)   } – a formal complexity measure
Related to the formula size of f – L(f):
formula trees with basic functions on leaves and basic operations in internal nodes. Formula size is the number of leaves.
Claim: For any formal complexity measure μ – μ(f) ≤ L(f).
Proof: By induction.
Assume the smallest formula for f writes f = g ∨ h; then L(g) = a, L(h) = b → L(f) = a + b.
(Note that the smallest formulas for g and h on their own might be even smaller!)
μ(f) ≤ μ(g) + μ(h) ≤ L(g) + L(h) ≤ L(f). ∎
So far it seems as if there isn't a good μ…
On one hand, μ(f) = 1/‖f̂‖_∞ is bad because it is large for easy functions.
On the other hand, μ(f) = L(f) is bad because it is tautological!
We haven't made any progress...
Natural Proofs – Razborov & Rudich
Identify a function f : {0,1}^n → {±1} with its truth table, a vector (0,1,1,0, …) of length N = 2^n; the set of all functions is then {0,1}^N.
Let X ⊆ {0,1}^N, and let ℱ ⊆ {T : {0,1}^N → {±1}} be a family of tests.
X is ε-pseudo-random w.r.t. ℱ if
∀T ∈ ℱ: |Pr[T(x) = 1 | x ∈ X] − Pr_{x∈{0,1}^N}[T(x) = 1]| < ε.
Extreme opposite – 1_X ∈ ℱ.
X is pseudo-random if every T ∈ ℱ cannot distinguish x ≈ X from x ≈ {0,1}^N.
Main point: random functions of low complexity look like truly random functions w.r.t. a poly-time distinguisher.
Let X = all points in {0,1}^N corresponding to functions of polynomial circuit complexity, and
ℱ = {T : {0,1}^N → {±1} | T runs in time polynomial in its input length N}.
Statement: ∀ε > 0, X is ε-pseudorandom w.r.t. ℱ.
A "natural proof" for P ≠ NP would:
- Devise a "simplicity" property S of Boolean functions, so that S(f) = 1 for all
simple (poly-circuit-complexity) functions f ∈ X.
- If S itself is poly-time computable (in its input length |f| = 2^n), then since X is pseudorandom for ℱ and S ∈ ℱ, it follows that S(f) = 1 for almost all f ∈ {0,1}^N.
This is bad because a random function f ∈ {0,1}^N shouldn't be simple!
So either S ∉ ℱ or S(f) = 1 for almost all functions.
A proof is "natural" if it defines a simplicity property S such that:
(1) All low-complexity functions are simple
(2) A random function is not simple
(3) Whether or not a function is simple can be determined in poly-time
(4) Some NP-function is not simple
By the pseudorandomness statement above, (1), (2), (3) cannot hold together!
-------- end of lesson 1
Connection between P, NP and Circuits
L ⊆ {0,1}*, L_n ≔ L ∩ {0,1}^n, L = {L_n}_{n=1}^∞.
"The Clique language has circuit complexity ≥ s(n)" ↔ for any sequence of circuits {C_n}_{n=1}^∞ that solves Clique,
∃n_0 ∀n > n_0: size(C_n) > s(n).
The class P/poly ≡ all languages computable by poly-size circuits.
The set of poly-time functions looks like the set of all functions.
"Looks like" = to a simple observer (another polynomial-time algorithm).
Exercise: Let μ be a formal complexity measure. Prove that if there exists h : {0,1}^n → {±1} with μ(h) > 4c, then Pr_g[μ(g) > c] ≥ 1/4, where g : {0,1}^n → {±1} is a uniformly random function.
Things that are known
The discrete log function:
Let ℤ_N be the cyclic group with N elements and let g be a generator, so {g^1, g^2, g^3, …} = ℤ_p^* (say).
DL_g : x → g^x; its inverse DL_g^{-1} ← the discrete log function, is 1-1 and believed hard to compute.
CONJ – there exists some ε > 0 s.t. the complexity of this problem is ≥ 2^{n^ε}.
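A toy illustration of the asymmetry (exponentiation is fast, while the naive inverse scans the whole group; real parameters are enormous, and better-than-brute-force attacks exist, just no poly-time one):

```python
# Exponentiation in Z_p^* takes O(log x) multiplications via square-and-multiply
# (Python's built-in three-argument pow), while this brute-force discrete log
# takes ~p steps -- exponential in the bit-length of p.
p = 101          # a small prime, purely illustrative
g = 2            # 2 is a generator of Z_101^*

def exp_mod(g, x, p):
    return pow(g, x, p)              # fast modular exponentiation

def discrete_log(y, g, p):
    """Find x with g^x = y (mod p) by exhaustive search."""
    for x in range(p - 1):
        if pow(g, x, p) == y:
            return x
    return None

y = exp_mod(g, 57, p)
assert discrete_log(y, g, p) == 57   # recovers the exponent, slowly
```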
Goldreich-Levin "hard-core bit": any one-way permutation f gives rise to a pseudorandom generator.
A pseudo-random generator is a map
G : {0,1}^n → {0,1}^{n+1}
such that you cannot tell the difference between the half of the range that emerged from the domain and the half that didn't.
Concretely, for a one-way permutation f : {0,1}^{n/2} → {0,1}^{n/2}:
G(x, r) = (f(x), r, Σ_i x_i r_i mod 2), where x, r ∈ {0,1}^{n/2},
so G indeed maps {0,1}^n → {0,1}^{n+1}.
Now they constructed a pseudo-random function generator (Goldreich-Goldwasser-Micali).
They took a seed (denoted y) and a length-doubling generator, writing its two output halves as G_0 and G_1, and built a binary tree:
y
G_0(y), G_1(y)
G_0(G_0(y)), G_1(G_0(y)), G_0(G_1(y)), G_1(G_1(y)), …
F(y, x) ≔ G_{x_k}(… (G_{x_2}(G_{x_1}(y))) …)
Define f_y : {0,1}^k → {0,1}^n by f_y(x) = F(y, x).
Consider the distribution {f_y}_{y∈{0,1}^n} (for, say, n > k > 2):
- For each y, f_y is poly-time computable.
- This distribution is pseudo-random against polynomial-time (in n) distinguishers.
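A sketch of the tree construction, with SHA-256 standing in for the length-doubling generator G(y) = (G_0(y), G_1(y)) – an arbitrary substitution for illustration; nothing here proves pseudorandomness:

```python
# GGM tree PRF sketch: f_y(x) walks a binary tree from the seed y,
# taking the G_0 or G_1 branch according to each bit of x.
import hashlib

def G(seed: bytes):
    """Length-doubling stand-in: 32 bytes in, two 32-byte halves out."""
    h0 = hashlib.sha256(b'\x00' + seed).digest()
    h1 = hashlib.sha256(b'\x01' + seed).digest()
    return h0, h1      # (G_0(y), G_1(y))

def ggm_prf(seed: bytes, x: str) -> bytes:
    """f_y(x) = G_{x_k}(... G_{x_2}(G_{x_1}(y)) ...)."""
    node = seed
    for bit in x:
        node = G(node)[int(bit)]
    return node

key = b'\x00' * 32
assert ggm_prf(key, "0101") == ggm_prf(key, "0101")   # deterministic per seed
assert ggm_prf(key, "0101") != ggm_prf(key, "0100")   # different leaves differ
```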
On the other hand…
S, or any property being used in a lower-bound proof, shouldn't be too complex either!
"S(f) = 1 iff f has low circuit complexity" is not good (trivial).
Note that this S is in NP!
Take the basic functions, each repeated 100 times:
±x_1 (100 copies), ±x_2 (100 copies), …, ±x_n (100 copies)
– a model for generating a random formula: I have n·100·2 functions, and I repeatedly combine random pairs with random basic operations.
But this is false! Why? Because using AND or OR changes the distribution from level to level.
Gowers Norms (U_k)
Fix a finite set X and consider ℝ^X (the vector space of functions f : X → ℝ).
A norm on this space is a function ‖·‖ : ℝ^X → ℝ_+
s.t.
1) ‖cf‖ = |c|·‖f‖
2) f ≠ 0 → ‖f‖ ≠ 0
3) ‖f + g‖ ≤ ‖f‖ + ‖g‖
(Without property 2, i.e. allowing ‖f‖ = 0 for f ≠ 0, we only get a semi-norm.)
Example: ‖f‖_∞ = max|f(x)| (over x ∈ X)
Definition (dual norm):
‖f‖* = max{⟨f, g⟩ | ‖g‖ ≤ 1} where ⟨f, g⟩ = (1/|X|)·Σ_{x∈X} f(x)g(x) ← "correlation"
Example: For ‖·‖_∞, the dual is:
‖f‖*_∞ = max{⟨f, g⟩ | ‖g‖_∞ ≤ 1, i.e. ∀x: |g(x)| ≤ 1}
= max{(1/|X|)·Σ f(x)g(x) : |g(x)| ≤ 1} = (1/|X|)·Σ f(x)·sign(f(x)) = (1/|X|)·Σ|f(x)| = ‖f‖_1.
In general: even if ‖·‖ is computable in P, it doesn't mean that ‖·‖* is in P.
Another example: let X be an abelian group.
‖f‖_{U_2}^4 = E_{x,y,z}[f(x)·f(x+y)·f(x+z)·f(x+y+z)]
Is it in P? YES.
Is ‖f‖*_{U_2} in P? Turns out that ‖f‖_{U_2} = ‖f̂‖_4 = (Σ_{S⊆[n]} f̂(S)^4)^{1/4}, so the dual ‖f‖*_{U_2} = ‖f̂‖_{4/3} is also computable via the Fourier transform.
Exercise: Prove that the U_2 norm is a norm. Hint: the Cauchy-Schwarz inequality.
However, one can also define the U_3 norm:
‖f‖_{U_3}^8 = E_{x,y,z,w}[f(x)f(x+y)f(x+z)f(x+y+z)f(x+w)f(x+y+w)f(x+z+w)f(x+y+z+w)]
We do not know a poly-time algorithm for ‖·‖*_{U_3}.
The "goal" for introducing these norms was to extend Fourier analysis to "higher degree".
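For small n the identity ‖f‖_{U_2} = ‖f̂‖_4 can be verified numerically; a brute-force sketch (the example function is arbitrary):

```python
# Check ||f||_{U2}^4 = E_{x,y,z} f(x)f(x+y)f(x+z)f(x+y+z) against the Fourier
# identity ||f||_{U2}^4 = sum_S f-hat(S)^4, over the group (F_2)^n.
from itertools import product

n = 3
points = list(product([0, 1], repeat=n))

def xor(a, b):
    return tuple(ai ^ bi for ai, bi in zip(a, b))

f = {x: (-1) ** (x[0] * x[1] + x[2]) for x in points}   # arbitrary +-1 function

# direct computation of the U2 average over all x, y, z
u2_4 = sum(f[x] * f[xor(x, y)] * f[xor(x, z)] * f[xor(xor(x, y), z)]
           for x in points for y in points for z in points) / len(points) ** 3

# sum of fourth powers of the Fourier coefficients
fourier_4 = sum(
    (sum(f[x] * (-1) ** sum(x[i] for i in S) for x in points) / len(points)) ** 4
    for S in [[i for i in range(n) if (m >> i) & 1] for m in range(2 ** n)]
)

assert abs(u2_4 - fourier_4) < 1e-9
```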
For f : {0,1}^n → {±1}, f̂(S) is the correlation of f with the linear phase function (-1)^{Σ_{i∈S} x_i}:
χ_S : {0,1}^n → {±1}, χ_S(x_1, …, x_n) = ∏_{i∈S}(-1)^{x_i} ← linear phase functions.
Consider degree-d phase functions (-1)^{P(x̄)} where deg P ≤ d.
Fix f : {0,1}^n → {±1}. For y ∈ {0,1}^n define the derivative f_y : {0,1}^n → {±1} by f_y(x) = f(x)·f(x + y).
If f(x) = (-1)^{P(x)} then f_y(x) = (-1)^{P(x)}·(-1)^{P(x+y)} = (-1)^{P(x)+P(x+y)}.
Say, for instance:
f(x) = (-1)^{x_1+x_2+x_3} gives f_y(x) = (-1)^{x_1+x_2+x_3}·(-1)^{x_1+y_1+x_2+y_2+x_3+y_3} = (-1)^{y_1+y_2+y_3} = constant! Doesn't depend on x.
In general, if f(x) = (-1)^{P(x)} for deg P ≤ d,
then f_y(x) = (-1)^{P'(x)} with P'(x) = P(x) + P(x+y) and deg P' ≤ d − 1.
Define f_{y_1,y_2} ≡ (f_{y_1})_{y_2}; similarly f_{y_1…y_k} = (…((f_{y_1})_{y_2})…)_{y_k}.
---- end of lesson 2
For f : {0,1}^n → {±1}:
‖f‖_{U_k}^{2^k} = E_{x_0, y_1, …, y_k ∈ {0,1}^n} ∏_{ε_1…ε_k ∈ {0,1}} f(x_0 + ε_1 y_1 + ⋯ + ε_k y_k)
x_0 + span(y_1, …, y_k) is a dimension-≤k affine subspace; for k = 2 its points are x_0, x_0 + y_1, x_0 + y_2, x_0 + y_1 + y_2.
Example: Suppose f = (-1)^ℓ for a linear function ℓ: ∃a_1, …, a_n s.t. ℓ(x) = Σ a_i x_i (mod 2).
Then for any choice of x_0, y_1, y_2:
ℓ(x_0) + ℓ(x_0 + y_1) + ℓ(x_0 + y_2) + ℓ(x_0 + y_1 + y_2) = 0,
hence for any choice of x_0, y_1, y_2 the product ∏_{ε_1,ε_2} f(x_0 + ε_1 y_1 + ε_2 y_2) ≡ 1,
and the expectation equals 1 as well, i.e. ‖f‖_{U_2} = 1.
Linearity Testing
(proven by Blum-Luby-Rubinfeld)
f : {0,1}^n → {0,1}
Question: Is f a linear function?
Definition 1: ∃a_1, …, a_n s.t. ∀x: f(x) = Σ a_i x_i
Definition 2: ∀x, y: f(x) + f(y) = f(x + y)
Definition 2 implies definition 1 since:
we can define a_i = f(e_i); then f(x) = f(Σ x_i e_i) = Σ x_i·f(e_i) = Σ a_i x_i.
Testing
We have a global object, e.g. f : {0,1}^n → {0,1}, and want to test if f ∈ S –
in our example, S = all linear functions.
We are only willing to invest limited resources, but willing to randomize.
Question: Can we deduce a global property by considering local behavior?
If the answer is yes, we say that S is testable.
Non-testable property:
x_1 … x_n – Boolean variables, ¬x_1 … ¬x_n – their negations.
From the list above, I select m 3-CNF clauses at random.
Most of the time we will find clauses that don't have any shared variables,
yet if m > 50n then with high probability the formula is unsatisfiable!
PCP "theory" implies that every polynomially verifiable property (e.g. formula satisfiability) can be cast ("encoded") in testable form.
Definition: ∀f, g : {0,1}^n → {0,1}:
dist(f, g) = Pr_{x∈{0,1}^n}[f(x) ≠ g(x)]
dist(f, S) = min_{g∈S} dist(f, g)
Theorem (BLR):
Let f : {0,1}^n → {0,1}. If dist(f, linear) ≥ δ,
then Pr_{x,y}[f(x) + f(y) + f(x + y) ≠ 0] ≥ Ω(δ).
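A sketch of the BLR test itself; the example functions and trial count are arbitrary choices for illustration:

```python
# BLR linearity test: pick random x, y and check f(x) + f(y) = f(x+y) (mod 2).
# A linear f always passes; a far-from-linear f is rejected with probability Omega(delta).
import random
from itertools import product

n = 4
points = list(product([0, 1], repeat=n))

def xor(a, b):
    return tuple(ai ^ bi for ai, bi in zip(a, b))

def blr_reject_rate(f, trials=20000, seed=0):
    rng = random.Random(seed)
    rejects = 0
    for _ in range(trials):
        x, y = rng.choice(points), rng.choice(points)
        if (f[x] + f[y] + f[xor(x, y)]) % 2 != 0:
            rejects += 1
    return rejects / trials

linear = {x: (x[0] + x[2]) % 2 for x in points}            # f(x) = x1 + x3
quadratic = {x: (x[0] * x[1] + x[2]) % 2 for x in points}  # not linear

assert blr_reject_rate(linear) == 0.0      # linear functions never fail the test
assert blr_reject_rate(quadratic) > 0.1    # the true rejection rate here is 3/8
```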
Theorem (AKKLR):
Let f : {0,1}^n → {0,1}. If dist(f, degree-k polynomials) ≥ δ,
then Pr_{x_0, y_1, …, y_{k+1}}[Σ_{ε∈{0,1}^{k+1}} f(x_0 + ε_1 y_1 + ⋯ + ε_{k+1} y_{k+1}) ≠ 0] ≥ Ω(δ · 2^{-k}).
So we select an affine space spanned by k + 1 directions,
look at all 2^{k+1} of its points and check that the sum of f over them is zero (mod 2).
f is a degree-k function if
f(x_1, …, x_n) = Σ_{|S|≤k} a_S ∏_{i∈S} x_i
e.g. f(x_1, …, x_n) = x_1 x_2 … x_k.
If f has degree k then ∀x_0, y_1, …, y_{k+1}:
Σ_{ε∈{0,1}^{k+1}} f(x_0 + ε_1 y_1 + ⋯ + ε_{k+1} y_{k+1}) = 0.
Proof: Let f_y(x) = f(x) + f(x + y); each derivative lowers the degree by one.
The function g = f_{y_{k+1}, y_k, …, y_2} has degree 0 (it is constant),
and the above expression equals g(x_0) + g(x_0 + y_1) ≡ 0. ∎
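The claim can be checked exhaustively for a small example; a sketch (the parameters n = 4, k = 2 and the monomial are arbitrary):

```python
# A degree-k polynomial over F_2 sums to 0 over every (k+1)-dimensional
# affine subspace (including degenerate ones). Check exhaustively for
# f(x) = x1*x2, which has degree 2, with n = 4 and k = 2.
from itertools import product

n, k = 4, 2
points = list(product([0, 1], repeat=n))

def xor(*vs):
    out = (0,) * n
    for v in vs:
        out = tuple(a ^ b for a, b in zip(out, v))
    return out

def f(x):
    return x[0] * x[1] % 2          # a degree-2 monomial

for x0 in points:
    for ys in product(points, repeat=k + 1):
        # sum f over the affine subspace x0 + span(y1, ..., y_{k+1})
        s = sum(f(xor(x0, *[y for y, e in zip(ys, eps) if e]))
                for eps in product([0, 1], repeat=k + 1)) % 2
        assert s == 0
```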
Claim: Let corr(f, k − 1) be the correlation of f with degree-(k − 1) polynomials.
If p is the rejection probability of the test, then E[(-1)^{Σ over the subspace}] = (1 − p) − p = 1 − 2p,
and corr(f, k − 1) ≤ ‖f‖_{U_k}.
Reed-Muller (Low-Degree) Test
Let g be the closest degree-k polynomial to f.
δ ≔ dist(f, g)
1. δ is tiny – δ < 2^{-k}:
If an affine (k+1)-space contains exactly one point x s.t. f(x) ≠ g(x), then the test rejects.
We will prove that this happens with constant probability. Assume δ ~ 2^{-k}.
Choose a random affine subspace A by choosing a random full-rank (k+1)×n matrix Y over F_2 and a random x_0 ∈ F_2^n:
A = {x_ε = εY + x_0 | ε ∈ F_2^{k+1}}.
For each ε, let B_ε be the event f(x_ε) ≠ g(x_ε), and let A_ε be the event that B_ε holds and ∀ε' ≠ ε: f(x_{ε'}) = g(x_{ε'}).
x_ε is distributed uniformly in F_2^n, and x_{ε'} is distributed uniformly on F_2^n \ {x_ε}.
∀ε: Pr[B_ε] = δ, and for ε ≠ ε': Pr[B_ε ∧ B_{ε'}] ≤ δ².
Pr[A_ε] ≥ Pr[B_ε] − Σ_{ε'≠ε} Pr[B_ε ∧ B_{ε'}] ≥ δ − 2^{k+1}·δ² ≈ δ.
The events A_ε are disjoint, so Pr(⋃_ε A_ε) = Σ_ε Pr[A_ε] ≈ 2^{k+1}·δ = Ω(1).
2.
--- End of lesson 3
Last week we:
- Defined the U_k norm ⇔ degree-(k−1) test ("low-degree test")
- Proved the following theorem: If ‖f‖_{U_k} > 1 − ε then ∃g of degree k − 1 with
⟨f, g⟩ = E_x[f(x)g(x)] > 1 − O(ε).
Lemma 1: Let f : {0,1}^n → {±1} and let P denote a polynomial.
Denote corr(f, deg k) = max{⟨f, g⟩ | g = (-1)^{P(x)}, deg P ≤ k}.
Then:
corr(f, deg k) ≤ ‖f‖_{U_{k+1}}
Lemma 2: For every h : {0,1}^n → {±1}:
‖h‖_{U_k} ≤ ‖h‖_{U_{k+1}}
Proof of 2:
We shall use the fact that E[Z²] ≥ (E[Z])².
‖h‖_{U_{k+1}}^{2^{k+1}} = E_{x, y_1, …, y_{k+1}} ∏_{ε∈{0,1}^{k+1}} h(x + Σ_{i=1}^{k+1} ε_i y_i)
= E_{x, y_1, …, y_{k+1}} [ ∏_{ε∈{0,1}^k} h(x + Σ_{i=1}^k ε_i y_i) · ∏_{ε∈{0,1}^k} h(x + y_{k+1} + Σ_{i=1}^k ε_i y_i) ]
Now let's fix x' ← x and x'' ← x + y_{k+1}; for fixed y_1, …, y_k, the points x' and x'' are independent and uniform, so this equals
E_{y_1, …, y_k} [ E_{x'} ∏_{ε∈{0,1}^k} h(x' + Σ_{i=1}^k ε_i y_i) · E_{x''} ∏_{ε∈{0,1}^k} h(x'' + Σ_{i=1}^k ε_i y_i) ]
= E_{y_1, …, y_k} [ ( E_{x'} ∏_{ε∈{0,1}^k} h(x' + Σ_{i=1}^k ε_i y_i) )² ]
≥ ( E_{x', y_1, …, y_k} ∏_{ε∈{0,1}^k} h(x' + Σ_{i=1}^k ε_i y_i) )² = ‖h‖_{U_k}^{2^{k+1}} ∎
Proof of 1:
For any h : {0,1}^n → {±1}:
1) |E_x h(x)| = ‖h‖_{U_1}   (compare: ‖h‖_{U_2}^4 = Σ_S |ĥ(S)|^4)
2) ∀k: ‖h‖_{U_k} ≤ ‖h‖_{U_{k+1}}
3) ∀k and ∀g = (-1)^P : {0,1}^n → {±1} with P a degree-k polynomial: ‖g·h‖_{U_{k+1}} = ‖h‖_{U_{k+1}}
Proof of 1):
‖h‖_{U_1}^2 = E_{x, y_1} ∏_{ε_1=0,1} h(x + ε_1 y_1) = E_{x, y_1} h(x)·h(x + y_1)
Denote x' = x, x'' = x + y_1 (independent and uniform); this equals
(E_{x'} h(x'))·(E_{x''} h(x'')) = (E_x h(x))².
Proof of 3):
‖g·h‖_{U_{k+1}}^{2^{k+1}} = E_{x, y_1, …, y_{k+1}} ∏_{ε∈{0,1}^{k+1}} g(x + Σ_{i=1}^{k+1} ε_i y_i)·h(x + Σ_{i=1}^{k+1} ε_i y_i) = ‖h‖_{U_{k+1}}^{2^{k+1}},
because ∀x, y_1, …, y_{k+1}: ∏_{ε∈{0,1}^{k+1}} g(x + Σ_{i=1}^{k+1} ε_i y_i) = 1 (a degree-k phase sums to 0 in the exponent over any (k+1)-dimensional affine subspace).
Now let g : {0,1}^n → {±1} be the degree-k phase function closest to f (i.e. attaining the max correlation), and define h(x) = f(x)·g(x). Then
corr(f, deg k) = |E_x f(x)g(x)| = |E_x h(x)| = ‖h‖_{U_1}   (by 1)
≤ ‖h‖_{U_{k+1}}   (by 2, iterated)
= ‖f‖_{U_{k+1}}   (by 3) ∎
We considered ‖f‖_{U_3} as a "formal complexity measure".
Consider dual norms:
‖f‖* = max{⟨f, g⟩ | ‖f‖ ≤ 1}
Motivation for the dual:
1) An "NP-ish" definition possibly circumvents Razborov-Rudich
2) More robust
Suppose our world {0,1}^n has two parts, A and B,
and we have two functions f, g such that f is random on A and is 1 on B, and g is the exact opposite.
‖f‖_{U_3} = constant.
‖g‖_{U_3} = constant as well.
But for h = f ∨ g: ‖h‖_{U_3} is tiny!
This is a problem! We just used an OR and got such a dramatic difference.
The dual behaves better: ‖h‖*_{U_3} ≥ ⟨h, c·h⟩, and we can take c = 1/‖h‖_{U_3} (so that ‖c·h‖_{U_3} = 1), giving ‖h‖*_{U_3} ≥ 1/‖h‖_{U_3} – very large!
What about ‖f‖*_{U_3}? We need to find a "norming function" – use h!
‖h/‖h‖_{U_3}‖_{U_3} = 1, so
‖f‖*_{U_3} ≥ ⟨f, h/‖h‖_{U_3}⟩ = (1/‖h‖_{U_3})·E_x f(x)h(x) ≈ (1/2)·(1/‖h‖_{U_3})
(f agrees with h on A, and the contribution of B is ≈ 0) = very large!
Note: ⟨f, g⟩ ≤ ‖f‖·‖g‖*.
Question 1: Given f : {0,1}^n → {±1}, can we compute ‖f‖*_{U_k} in poly-time?
(an open question even for k = 3)
Question 2: Suppose you know that ‖f‖*_{U_3} ≥ ε; by [Samorodnitsky '07] ∃ a degree-2 polynomial P
that correlates with f. Can P be found in time poly(2^n)? (the search space is 2^{n^2})
Gopalan-Klivans-Zuckerman '08: "list-decoding Reed-Muller codes".
If f is ε-correlated with some degree-2 function, then the following is true:
1) The number of degree-2 polynomials ε-correlated with f is ≤ 2^{O_ε(n)}
2) The list above can be found in time 2^{O_ε(n)}
Interpretation: if ‖f‖_{U_k} is small then ‖f‖*_{U_k} is large (since 1 = ⟨f, f⟩ ≤ ‖f‖_{U_k}·‖f‖*_{U_k}).
Simplicity property: {f | ‖f‖*_{U_k} ≤ c} ⊆ {f | ‖f‖_{U_k} ≥ 1/c} – this is a property in NP!
If the first threshold is c_1 and the second is c_2 = 1/c_1, then the first set contains all the low-complexity functions.
Next idea: the U_k norm for super-constant k.
The naïve algorithm for computing ‖f‖_{U_k} takes O(2^{(k+1)n}) time; if k = ω(1) this is not poly-time in N = 2^n.
We still intend to use the dual norm ‖·‖*_{U_k} (we want robustness).
Question 3: Is there an algorithm for ‖·‖_{U_k} running in time better than N^{k−1}·log N?