
Revisiting the hardness of approximation of
Maximum 3-Satisfiability
Jeffrey Finkelstein
Computer Science Department, Boston University
October 18, 2014
1 Introduction
In this work, we attempt to clarify the existing proofs that Maximum 3-Satisfiability is both hard to approximate in polynomial time and complete
for the class APX under an appropriate type of reduction. By doing this, we
hope to provide not only a better understanding of the original proofs, but also
a platform on which to base similar proofs. We use this platform to provide
a clarification of the existing proofs that Maximum 3-Satisfiability is both
hard to approximate by efficient and highly parallel algorithms and complete for
the class NCX under an appropriate type of reduction.
We introduce the necessary definitions and basic facts in section 2. In
subsection 3.1, we present a proof that Maximum 3-Satisfiability is hard
to approximate in polynomial time within a constant factor, for sufficiently
small constants. The proof here is a careful rewrite of the proofs given in
[3, Theorem 6.3] and [9, Corollary 29.8]. The corresponding proof for highly
parallel algorithms is in subsection 3.2. In subsection 4.1 we present a proof
that Maximum 3-Satisfiability is complete for the class of polynomial time
constant factor approximable optimization problems, under an appropriate
type of reduction. The proof here is a careful rewrite of the proof given in
[3, Theorem 8.6]. The corresponding proof for highly parallel algorithms is in
subsection 4.2.
Copyright 2012, 2014 Jeffrey Finkelstein ⟨jeffreyf@bu.edu⟩.
This document is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License, which is available at https://creativecommons.org/licenses/by-sa/4.0/.
The LaTeX markup that generated this document can be downloaded from its website at
https://github.com/jfinkels/apxcompleteness. The markup is distributed under the same
license.
2 Preliminaries
Throughout this work, Σ = {0, 1}, and Σ∗ is the set of all finite binary strings. The length of a string x ∈ Σ∗ is denoted |x|. We denote the set of positive rational numbers by Q+ and the natural numbers (excluding 0) by N. A decision problem is a subset of Σ∗. A binary relation R ⊆ Σ∗ × Σ∗ is polynomially bounded if there exists a polynomial p such that for all (x, y) ∈ R, we have |y| ≤ p(|x|).
2.1 Decision problems

2.1.1 Classes of decision problems
NP is the class of decision problems L for which there exists a polynomially
bounded solution relation (or NP relation) R such that
1. R is decidable by a deterministic Turing machine running in polynomial
time, and
2. for all x ∈ Σ∗ , we have x ∈ L if and only if there exists a y such that
(x, y) ∈ R.
P is the subclass of NP in which each decision problem L is decidable by a
deterministic Turing machine running in polynomial time. NC is the subclass of P in which each decision problem is decidable by a logarithmic space uniform family of Boolean circuits (consisting of AND, OR, and NOT gates, each with fan-in at most two) of polynomial size and polylogarithmic depth. FNC is the class of functions computable by an NC circuit family. NNC is the class of
decision problems L for which there exists a polynomially bounded solution
relation (or NNC relation) R such that
1. R is decidable by an NC circuit family, and
2. for all x ∈ Σ∗ , we have x ∈ L if and only if there exists a y such that
(x, y) ∈ R.
(A different but equivalent definition for NNC is given in [10], where it is called
NNC(poly).)
The class of functions computable in polynomial time is denoted FP. The class of functions computable by an NC circuit family is denoted FNC.
Theorem 2.1 ([10]). NNC = NP.
Lemma 2.2.
1. If f ∈ FP and g ∈ FP, then f ◦ g ∈ FP.
2. If f ∈ FNC and g ∈ FNC, then f ◦ g ∈ FNC.
Lemma 2.3. The addition, subtraction, multiplication, division, and exponentiation functions are all computable by NC circuit families.
2.1.2 Reductions among decision problems
If P and Q are two decision problems, we say P many-one reduces to Q and write
P ≤m Q if there exists a function f such that x ∈ P if and only if f (x) ∈ Q. If
f is computable in polynomial time (respectively, NC) then we say the reduction
is a polynomial time (respectively, NC) many-one reduction and write P ≤_m^P Q (respectively, P ≤_m^NC Q). For any complexity class C and Q ∈ C, if for all problems P ∈ C we have P many-one reduces to Q, then we say Q is complete for C under many-one reductions.
2.2 Optimization problems
An optimization problem is given by (I, S, m, t), where I is called the instance
set, S ⊆ I × Σ∗ and is called the solution relation, m : I × Σ∗ → N and is
called the measure function, and t ∈ {max, min} and is called the type of the
optimization. The set of solutions corresponding to an instance x ∈ I is denoted S(x). The optimal measure for any x ∈ I is denoted m∗(x). Observe that for all x ∈ I and all y ∈ Σ∗, if t = max then m(x, y) ≤ m∗(x) and if t = min then m(x, y) ≥ m∗(x). The performance ratio, R, of a solution y for x is

R(x, y) = max(m∗(x)/m(x, y), m(x, y)/m∗(x)).

A function f : I → Σ∗ is an r(n)-approximator if R(x, f(x)) ≤ r(|x|), for some function r : N → Q+. If r(n) is a constant function with value δ, we simply say f is a δ-approximator. A function f : I × N → Σ∗ is an approximation scheme if R(x, f(x, k)) ≤ 1 + 1/k for all x ∈ I and all k ∈ N.
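As a concrete illustration (not part of the original text), the measure and performance ratio for Maximum 3-Satisfiability can be sketched in Python; the signed-integer clause encoding is an assumption of this sketch:

```python
from fractions import Fraction

def satisfied_clauses(clauses, assignment):
    """The measure m(x, y) for Maximum 3-Satisfiability: the number of
    clauses with at least one true literal. A clause is a list of nonzero
    integers; +i denotes variable i and -i its negation (an encoding
    assumed here for illustration)."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def performance_ratio(m_opt, m_val):
    """R(x, y) = max(m*(x)/m(x, y), m(x, y)/m*(x)); always at least 1."""
    return max(Fraction(m_opt, m_val), Fraction(m_val, m_opt))

# (x1 or x2 or x3) and (not x1 or x2) and (not x2 or not x3)
clauses = [[1, 2, 3], [-1, 2], [-2, -3]]
tau = {1: True, 2: False, 3: False}
m = satisfied_clauses(clauses, tau)   # this assignment satisfies 2 clauses
ratio = performance_ratio(3, m)       # max(3/2, 2/3) = 3/2
```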
For the sake of simplicity, in subsection 2.5 and section 3 (the sections dealing with hardness of approximation) we will consider only maximization problems, although the definitions and results also hold for minimization problems, with the appropriate (slight) changes to the definitions (for example, in Definition 2.17 we would require that m∗(f(x)) > (1 + ε) · c(x) instead of m∗(f(x)) < 1/(1+ε) · c(x)).
2.2.1 Classes of optimization problems
The complexity class NPO is the class of all optimization problems (I, S, m, t)
such that the following conditions hold.
1. The instance set I is decidable by a deterministic polynomial time Turing
machine.
2. The solution relation S is decidable by a deterministic polynomial time
Turing machine and is polynomially bounded.
3. The measure function m is computable by a deterministic polynomial time
Turing machine.
The complexity class APX is the subclass of NPO in which for each optimization problem P there exists a polynomial time computable r-approximator for P, where r(n) ∈ O(1) for all n ∈ N. The complexity class PTAS is the subclass of APX in which for each optimization problem P, defined by P = (I, S, m, t), there exists a function f : I × N → Σ∗ such that R(x, f(x, k)) ≤ 1 + 1/k for all x ∈ I and all k ∈ N, and f is computable in polynomial time with respect to the length of x.
The complexity class NNCO is the class of optimization problems (I, S, m, t)
such that the following conditions hold.
1. The instance set I is decidable by an NC circuit family.
2. The solution relation S is decidable by an NC circuit family and is polynomially bounded.
3. The measure function m is computable by an FNC circuit family.
The complexity class NCX is the subclass of NNCO in which for each optimization problem P there exists an r-approximator, computable by an FNC circuit family, for P, where r(n) ∈ O(1) for all n ∈ N. The complexity class NCAS is the subclass of NCX in which for each optimization problem P, defined by P = (I, S, m, t), there exists a function f : I × N → Σ∗ such that R(x, f(x, k)) ≤ 1 + 1/k for all x ∈ I and all k ∈ N, and f is computable by an FNC circuit family with respect to the length of x.
Note that since NNC ⊆ NP, it follows that NNCO ⊆ NPO.
2.2.2 Classes of approximable optimization problems
We will also consider classes of optimization problems which have efficient
approximation algorithms.
Definition 2.4. Let P ∈ NPO. P ∈ APX if there exists a polynomial time
computable r-approximator for P , where r(n) ∈ O(1) for all n ∈ N. P ∈ PTAS
if there exists an approximation scheme f for P such that f (x, k) is computable
in polynomial time with respect to the length of x.
Definition 2.5. Let P ∈ NNCO. P ∈ NCX if there exists an r-approximator in FNC for P, where r(n) ∈ O(1) for all n ∈ N. P ∈ NCAS if there exists an approximation scheme f for P such that f_k ∈ FNC for each k ∈ N, where f_k(x) = f(x, k) for all x ∈ Σ∗.
2.2.3 Reductions among optimization problems
Definition 2.6 ([3, Definition 8.3]). Let P and Q be optimization problems, so P = (IP, SP, mP, tP) and Q = (IQ, SQ, mQ, tQ). There is an AP reduction from P to Q, denoted P ≤AP Q, if there exist functions f : IP × Q+ → IQ and g : IP × Σ∗ × Q+ → Σ∗ and a constant α ∈ Q+ with α ≥ 1 such that:
1. For all x ∈ IP and all r ∈ Q+ with r > 1, we have f(x, r) ∈ IQ.
2. For all x ∈ IP and all r ∈ Q+ with r > 1, if SP(x) ≠ ∅ then SQ(f(x, r)) ≠ ∅.
3. For all x ∈ IP , all r ∈ Q+ with r > 1, and all y ∈ SQ (f (x, r)), we have
g(x, y, r) ∈ SP (x).
4. For all x ∈ IP , all r ∈ Q+ with r > 1, and all y ∈ SQ (f (x, r)), we have
RQ (f (x, r), y) ≤ r implies RP (x, g(x, y, r)) ≤ 1 + α · (r − 1).
We identify such a reduction with the triple (f, g, α).
If, furthermore, f is computable in polynomial time with respect to x and g
is computable in polynomial time with respect to x and y, then the reduction
is called a polynomial time AP reduction. If f and g are computable by FNC
circuit families, then the reduction is called an NC AP reduction.
Although the definition of an AP reduction allows f and g access to r, the quality of the approximation, this access is not necessary in most known natural reductions. Therefore, in some cases, we will write f(x) and g(x, y) instead of f(x, r) and g(x, y, r).
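To make the role of the triple (f, g, α) concrete, here is a hedged Python sketch (not in the original) of how an AP reduction turns an r-approximator for Q into a (1 + α · (r − 1))-approximator for P; all function names are placeholders:

```python
def approximator_from_ap_reduction(f, g, approx_Q):
    """Given the pair (f, g) of an AP reduction from P to Q and an
    r-approximator approx_Q for Q, build an approximator for P.
    Condition 4 of Definition 2.6 guarantees the returned solution has
    performance ratio at most 1 + alpha * (r - 1) on P."""
    def approx_P(x, r):
        y = approx_Q(f(x, r))   # r-approximate solution to the Q-instance
        return g(x, y, r)       # map it back to a solution for x
    return approx_P
```

The point of the sketch is only the data flow: approximability transfers from Q back to P, with the ratio degraded by the factor α.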
There is a wealth of other kinds of reductions among optimization problems designed to capture the notion that a good approximate solution for problem Q implies a good approximate solution for problem P . Some experts suggest
that the AP reduction is the correct reduction to use to show, for example,
completeness in APX. For a survey of other approximation preserving reductions,
see [5].
2.3 Boolean formulae
A literal is a variable or its negation. A Boolean formula is in conjunctive normal
form (CNF) if it is a conjunction of clauses, each of which is a disjunction of
literals. A Boolean formula is called k-CNF if it is in conjunctive normal form
and each of the clauses has at most k literals. The set of variables which appear in a Boolean formula φ is Var(φ). A truth assignment to a Boolean formula φ is a function τ : Var(φ) → {0, 1}. A truth assignment satisfies a Boolean formula φ
if replacing the variables in φ with their corresponding truth assignments yields
a true formula.
In this work, the most important decision problem concerning Boolean
formulae is the Satisfiability problem, which is the problem of deciding
whether a Boolean formula is satisfiable. We will also consider its restriction to 3-CNF Boolean formulae, and a corresponding optimization problem.
Definition 2.7 (Satisfiability).
Instance: a Boolean formula φ
Question: Is there a truth assignment τ to φ such that τ satisfies φ?
Definition 2.8 (3-Satisfiability).
Instance: a 3-CNF Boolean formula φ
Question: Is there a truth assignment τ to φ such that τ satisfies φ?
Definition 2.9 (Maximum 3-Satisfiability).
Instance: a 3-CNF Boolean formula φ
Solution: a truth assignment τ to φ
Measure: the number of satisfied clauses in φ
We sometimes wish to transform a general CNF Boolean formula to a 3-CNF
Boolean formula. The following lemma guarantees such a transformation.
Lemma 2.10 ([3, Example 6.5]). There is an NC computable function T that
maps a Boolean formula in conjunctive normal form with c clauses and at most
k literals per clause into an equivalent Boolean formula in conjunctive normal
form with c · (k − 2) clauses and at most three literals per clause.
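The standard clause-splitting construction behind such a transformation can be sketched as follows (an illustration under the usual fresh-variable encoding, not the paper's own construction); each clause of k > 3 literals becomes k − 2 clauses of 3 literals, and satisfiability is preserved:

```python
def to_3cnf(clauses, num_vars):
    """Sketch of a transformation like T in Lemma 2.10: split each clause
    of k > 3 literals into k - 2 clauses of 3 literals using fresh chain
    variables. The output is satisfiable iff the input is. Clauses are
    lists of signed variable ids."""
    out = []
    next_var = num_vars + 1
    for clause in clauses:
        if len(clause) <= 3:
            out.append(list(clause))
            continue
        lits = list(clause)
        z = next_var; next_var += 1
        out.append([lits[0], lits[1], z])          # first link of the chain
        for lit in lits[2:-2]:                     # middle links
            z2 = next_var; next_var += 1
            out.append([-z, lit, z2])
            z = z2
        out.append([-z, lits[-2], lits[-1]])       # last link
    return out
```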
It will be helpful to have a lower bound on the number of clauses in a Boolean
formula that are satisfiable.
Lemma 2.11. At least half of the clauses of any Boolean formula in conjunctive
normal form are always satisfiable.
Proof idea. Iteratively choose the true or false setting of the variable which
would satisfy the greatest number of clauses, simplifying after each step.
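The proof idea above can be sketched in Python (an illustrative rendering, not the paper's algorithm verbatim): at each step, among the clauses containing the chosen variable, at least as many are satisfied as can ever become empty, so at least half of all clauses end up satisfied.

```python
def greedy_half_assignment(clauses):
    """Greedy procedure from the proof idea of Lemma 2.11: fix variables
    one at a time, choosing for each the value that satisfies more of the
    remaining clauses, then simplify. At least half of the clauses of any
    CNF formula end up satisfied. Clauses are lists of signed ids."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    remaining = [list(clause) for clause in clauses]
    assignment = {}
    for v in variables:
        def occurrences(value):
            # clauses that would be satisfied by setting v := value
            return sum(1 for clause in remaining
                       if any(abs(lit) == v and (lit > 0) == value
                              for lit in clause))
        value = occurrences(True) >= occurrences(False)
        assignment[v] = value
        simplified = []
        for clause in remaining:
            if any(abs(lit) == v and (lit > 0) == value for lit in clause):
                continue  # clause satisfied; drop it
            simplified.append([lit for lit in clause if abs(lit) != v])
        remaining = simplified
    return assignment
```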
It is also necessary for us to know the computational complexity of evaluating
(or simplifying) a Boolean formula given a (partial) truth assignment.
Lemma 2.12. There is a logarithmic space (more generally, a polynomial time) Turing machine which, when given a Boolean formula φ and a truth assignment τ, decides whether τ satisfies φ.
See [4] for an exact characterization of the complexity of the Boolean formula
evaluation problem.
The Cook-Levin Theorem provides a generic transformation from a nondeterministic polynomial time Turing machine to an equivalent Boolean formula
which captures its exact computation. A proof can be found in any introductory
text on computational complexity, although most texts just provide a polynomial
time computable function and note that a more careful examination of the proof
would produce a logarithmic space computable function.
Lemma 2.13 (Cook-Levin Theorem). There exists a logarithmic space (and
more generally an NC and P) computable function C which takes two inputs, the
description of a nondeterministic polynomial time Turing machine B and a tuple
of its arguments (x1, x2, . . . , xk), and outputs a Boolean formula φ (of polynomial size with respect to the size of (x1, x2, . . . , xk)) whose variables represent the bits of the arguments to B, such that B(x1, x2, . . . , xk) accepts if and only if φ is
satisfiable.
The Cook-Levin Theorem provides a generic reduction which allows us to
prove that Satisfiability is NP-complete (recall that NNC = NP).
Lemma 2.14. Satisfiability is complete for NNC under NC many-one reductions, and more generally it is complete for NP under polynomial time many-one
reductions.
Proof. Since NP = NNC and NC ⊆ P it suffices to show that the problem is
complete in NP under NC reductions. By Lemma 2.13, there is a logarithmic
space computable (and hence NC computable) function which maps any nondeterministic polynomial time Turing machine to an equivalent Boolean formula.
This function satisfies the requirements of an NC many-one reduction.
We also wish to know the complexity of the Maximum 3-Satisfiability
problem.
Lemma 2.15. Maximum 3-Satisfiability is in NNCO, and more generally
in NPO.
Proof. It is clear that the instance set is decidable in NC. By Lemma 2.12, the
solution set is decidable in NC. To compute the measure function, evaluate each
of the clauses and then compute the total number of those that are satisfied.
Since Boolean function evaluation is in NC and computing the sum of m numbers
is in NC, the measure function is computable by an NC circuit family.
2.4 Probabilistically checkable proofs
A (r(n), q(n))-verifier is a probabilistic polynomial time Turing machine that, on
input (x, π, ρ) where |x| = n, π is interpreted as a proof (also known as a witness
or certificate) and ρ is interpreted as a sequence of random bits, reads at most
O(r(n)) random bits from ρ and reads (with random access) at most O(q(n))
bits from π. Define the complexity class PCP(r(n), q(n)) (for probabilistically
checkable proofs with O(r(n)) random bits and O(q(n)) proof queries) as follows:
a decision problem L ∈ PCP(r(n), q(n)) if there exists an (r(n), q(n))-verifier V and a polynomial p such that
1. if x ∈ L then there exists a π ∈ Σ∗ with |π| ≤ p(|x|) such that Pr_{ρ∈Σ^p(|x|)}[V(x, π; ρ) accepts] = 1, and
2. if x ∉ L then for all π ∈ Σ∗ with |π| ≤ p(|x|), we have Pr_{ρ∈Σ^p(|x|)}[V(x, π; ρ) accepts] < 1/2.
Theorem 2.16 (PCP Theorem [1]). NP = PCP(lg n, 1).
2.5 Gap-introducing and gap-preserving reductions
Definition 2.17 ([9, Section 29.1]). Let P be a decision problem and Q be a maximization problem with Q = (I, S, m, max). There is a gap-introducing reduction from P to Q if there exist functions f : Σ∗ → I and c : Σ∗ → N and an ε ∈ Q+ such that for any x ∈ Σ∗,
1. if x ∈ P then m∗(f(x)) ≥ c(x), and
2. if x ∉ P then m∗(f(x)) < 1/(1+ε) · c(x).
We call ε the gap parameter of the reduction.
If, furthermore, f and c are computable in polynomial time, then the reduction
is called a polynomial time gap-introducing reduction. If f and c are computable
by FNC circuit families, then the reduction is called an NC gap-introducing
reduction.
Notice that as the value of ε increases, so does the “gap” between the optimal measures corresponding to strings in P and strings not in P .
Definition 2.18. Let P ∈ NP and Q be a maximization problem in NPO, with Q = (I, S, m, max). Let RP be the NP relation induced by P . Suppose there is a polynomial time (respectively, NC) gap-introducing reduction (f, c, ε) from P to Q. The gap-introducing reduction is enhanced if there exists a polynomial time (respectively, NC) computable function g : Σ∗ × Σ∗ → Σ∗ such that for all x ∈ Σ∗ and all y ∈ S(f(x)) with m(f(x), y) ≥ 1/(1+ε) · m∗(f(x)), we have (x, g(x, y)) ∈ RP if and only if x ∈ P .
We also want to be able to use a reduction which preserves some gap behavior
between optimization problems.
Definition 2.19 ([9, Section 29.1]). Let P and Q be maximization problems, where P = (IP, SP, mP, max) and Q = (IQ, SQ, mQ, max). There is a gap-preserving reduction from P to Q if there exist a function f : IP → IQ, functions cP : IP → N and cQ : IQ → N, and constants α, β ∈ Q+ with α ≥ 0 and β ≥ 0, such that
• if m∗P(x) ≥ cP(x) then m∗Q(f(x)) ≥ cQ(f(x)), and
• if m∗P(x) < 1/(1+α) · cP(x) then m∗Q(f(x)) < 1/(1+β) · cQ(f(x)).
We identify such a reduction with the five-tuple (f, cP , cQ , α, β).
If, furthermore, f , cP , and cQ are computable in polynomial time, then the reduction is called a polynomial time gap-preserving reduction. If f , cP , and cQ are computable by FNC circuit families, then the reduction is called an NC gap-preserving reduction.
Intuitively, the gap in P scales with α and the gap in Q with β. Also, we note here that although Vazirani allows α and β to be functions of the length of the input x, we require them to be constants, both for consistency with the definition of the gap-introducing reduction and for simplicity, since we will not need this relaxation.
Like the enhanced gap-introducing reduction, there is also an enhanced
gap-preserving reduction.
Definition 2.20. Let P and Q be two maximization problems, defined by P = (IP, SP, mP, max) and Q = (IQ, SQ, mQ, max). Suppose that there exists a polynomial time (respectively, NC) gap-preserving reduction (f, cP, cQ, α, β) from P to Q. The gap-preserving reduction is enhanced if there exists a polynomial time (respectively, NC) computable function g : IP × Σ∗ → Σ∗ such that for all x ∈ IP and all y ∈ SQ(f(x)) with mQ(f(x), y) ≥ 1/(1+β) · m∗Q(f(x)), we have g(x, y) ∈ SP(x) and mP(x, g(x, y)) ≥ 1/(1+α) · m∗P(x). We identify such a reduction with the six-tuple (f, g, cP, cQ, α, β).
Gap-introducing reductions and gap-preserving reductions compose in the
natural way, as do enhanced gap-introducing and enhanced gap-preserving
reductions.
Lemma 2.21. Let P be a decision problem and Q and T be maximization problems. Suppose there is a polynomial time (respectively, NC) enhanced gap-introducing reduction (f, g, c, ε) from P to Q and a polynomial time (respectively, NC) enhanced gap-preserving reduction (f′, g′, cQ, cT, α, β) from Q to T . If
1. α ≤ ε,
2. c(x) ≥ cQ(f(x)), and
3. 1/(1+ε) · c(x) ≤ 1/(1+α) · cQ(f(x)),
then (f̃, g̃, c̃, ε̃) is a polynomial time (respectively, NC) enhanced gap-introducing reduction from P to T , where
f̃(x) = f′(f(x)),
g̃(x, y) = g(x, g′(f(x), y)),
c̃(x) = cT(f′(f(x))), and
ε̃ = β.
Proof. Suppose first that x ∈ P . Our goal is to show that m∗T(f′(f(x))) ≥ c̃(x). Since we have a gap-introducing reduction from P to Q, we know m∗Q(f(x)) ≥ c(x). Since we have a gap-preserving reduction from Q to T , we know that if m∗Q(f(x)) ≥ cQ(f(x)) then m∗T(f′(f(x))) ≥ cT(f′(f(x))). Since c(x) ≥ cQ(f(x)) by hypothesis, it follows that m∗T(f′(f(x))) ≥ cT(f′(f(x))) = c̃(x).
Suppose now that x ∉ P . Our goal is to show that m∗T(f′(f(x))) < 1/(1+ε̃) · c̃(x). Since we have a gap-introducing reduction from P to Q, we know m∗Q(f(x)) < 1/(1+ε) · c(x). Since we have a gap-preserving reduction from Q to T , we know that if m∗Q(f(x)) < 1/(1+α) · cQ(f(x)) then m∗T(f′(f(x))) < 1/(1+β) · cT(f′(f(x))). Since 1/(1+ε) · c(x) ≤ 1/(1+α) · cQ(f(x)) by hypothesis, it follows that m∗T(f′(f(x))) < 1/(1+β) · cT(f′(f(x))) = 1/(1+ε̃) · c̃(x).
We have now shown that the stated reduction is gap-introducing; it remains to show that it is also enhanced. Suppose yT ∈ ST(f′(f(x))) and mT(f′(f(x)), yT) ≥ 1/(1+ε̃) · m∗T(f′(f(x))) = 1/(1+β) · m∗T(f′(f(x))). Our goal is to show that (x, g̃(x, yT)) ∈ RP if and only if x ∈ P . Since we have an enhanced gap-preserving reduction from Q to T , we know that g′(f(x), yT) ∈ SQ(f(x)) and mQ(f(x), g′(f(x), yT)) ≥ 1/(1+α) · m∗Q(f(x)). Since α ≤ ε, we have 1/(1+α) ≥ 1/(1+ε), so g′(f(x), yT) satisfies the hypothesis of the enhanced gap-introducing reduction from P to Q, and therefore (x, g(x, g′(f(x), yT))) ∈ RP if and only if x ∈ P . Hence (x, g̃(x, yT)) ∈ RP if and only if (x, g(x, g′(f(x), yT))) ∈ RP if and only if x ∈ P .
Since we have proven the three conditions required for an enhanced gap-introducing reduction, we finally conclude that (f̃, g̃, c̃, ε̃) is an enhanced gap-introducing reduction from P to T . The functions f̃, g̃, and c̃ are all polynomial time (respectively, NC) computable because polynomial time (respectively, NC) computable functions compose.
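The composed reduction of Lemma 2.21 is pure function plumbing, which a small Python sketch makes explicit (illustrative only; the arguments stand for the component functions of the two reductions):

```python
def compose_reductions(f, g, f2, g2, cT):
    """Composition from Lemma 2.21: combine an enhanced gap-introducing
    reduction (f, g, c, eps) from P to Q with an enhanced gap-preserving
    reduction (f2, g2, cQ, cT, alpha, beta) from Q to T. Returns the
    composed functions; the new gap parameter is beta."""
    f_tilde = lambda x: f2(f(x))                   # instance map P -> T
    g_tilde = lambda x, y: g(x, g2(f(x), y))       # solution map T -> P
    c_tilde = lambda x: cT(f2(f(x)))               # threshold function
    return f_tilde, g_tilde, c_tilde
```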
3 Hardness of approximation
In this section we use the PCP theorem to prove Maximum 3-Satisfiability is
hard to approximate (specifically, that for certain constant factors, no approximation can exist unless there is some collapse among complexity classes). We do this
by reducing Satisfiability to Maximum k-Function Satisfiability, and
then reducing that to Maximum 3-Satisfiability, ensuring that the reductions
compose. We first prove that the problem is hard to approximate in polynomial
time, then translate the proof to the efficient and highly parallel setting to show
that the problem is also hard to approximate in parallel.
Definition 3.1 (k-Function Satisfiability).
Instance: a finite set of Boolean variables x1 , x2 , . . . , xn and a finite set of functions
g1 , g2 , . . . , gm , each of which is a function of k of the variables, where k ≥ 2
Question: Does there exist a truth assignment τ such that all the functions are
satisfied?
Definition 3.2 (Maximum k-Function Satisfiability).
Instance: a finite set of Boolean variables x1 , x2 , . . . , xn and a finite set of functions
g1 , g2 , . . . , gm , each of which is a function of k of the variables, where k ≥ 2
Solution: a truth assignment τ to the variables
Measure: the number of satisfied functions
Lemma 3.3. Maximum k-Function Satisfiability ∈ NNCO, and more generally in NPO.
Proof. The proof is similar to the proof of Lemma 2.15.
3.1 Hardness of polynomial time approximation
The main tool of this section is the following theorem, which provides sufficient
conditions for an optimization problem in NPO to be hard to approximate.
Theorem 3.4 ([3, Theorem 3.7]). Let P be an NP-complete decision problem and Q be a maximization problem in NPO. If there is a polynomial time gap-introducing reduction from P to Q with gap parameter ε, then for all r < 1 + ε, there is no polynomial time r-approximator for Q unless P = NP.
Proof. In order to produce a contradiction, suppose that there exists a polynomial time r-approximator A for Q with r < 1 + ε. Let f and c be the functions which define the gap-introducing reduction. Define algorithm A′ as follows on input x: A′(x) accepts if and only if m(f(x), A(f(x))) > 1/(1+ε) · c(x), for all x ∈ Σ∗. We will show that A′ is a polynomial time algorithm for the NP-complete problem P , and hence P = NP.
Since f , A, c, and basic arithmetic operations such as addition and multiplication are polynomial time computable functions, so is A′. In order to show that A′ is correct we must consider two cases.
1. Suppose x ∈ P . Since A is an r-approximator for a maximization problem and r < 1 + ε, we have m∗(f(x)) / m(f(x), A(f(x))) ≤ r < 1 + ε, which implies m(f(x), A(f(x))) / m∗(f(x)) ≥ 1/r > 1/(1+ε). Since m∗(f(x)) ≥ c(x) by hypothesis, this implies m(f(x), A(f(x))) > 1/(1+ε) · m∗(f(x)) ≥ 1/(1+ε) · c(x). Hence A′ will accept on input x.
2. If x ∉ P then m∗(f(x)) < 1/(1+ε) · c(x) by hypothesis. Since Q is a maximization problem, m(f(x), A(f(x))) ≤ m∗(f(x)). Combining the two inequalities, we find m(f(x), A(f(x))) < 1/(1+ε) · c(x). Hence, A′ will reject on input x.
We have shown that A′ is a correct polynomial-time computable algorithm which decides the NP-complete problem P , and the conclusion follows.
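The decision algorithm A′ from the proof can be sketched as follows (names are illustrative; approx_Q stands for the hypothesized r-approximator A, and f, c, measure for the reduction's components):

```python
def a_prime(x, f, c, measure, approx_Q, eps):
    """A' from the proof of Theorem 3.4: accept x iff the approximate
    solution for the Q-instance f(x) has measure above the gap threshold
    c(x)/(1 + eps). If approx_Q is an r-approximator for Q with
    r < 1 + eps, this decides the problem P."""
    instance = f(x)
    solution = approx_Q(instance)
    return measure(instance, solution) > c(x) / (1 + eps)
```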
Corollary 3.5. Let P be an NP-complete decision problem and Q be a maximization problem in NPO. If there is a polynomial time gap-introducing reduction from P to Q with gap parameter ε then Q ∉ PTAS unless P = NP.
Proof. In order to produce a contradiction, suppose that there exists a polynomial time approximation scheme for Q, so there exists a function f such that on input x and N , f produces solutions for Q, f is computable in time polynomial in the length of x, and R(x, f(x, N)) ≤ 1 + 1/N for all x ∈ Σ∗ and all N ∈ N. Specifically, if we choose N > 1/ε then R(x, f(x, N)) ≤ 1 + 1/N < 1 + ε, for all x ∈ Σ∗. Therefore, f is in fact a polynomial time r-approximator for Q, where r < 1 + ε. The conclusion follows from Theorem 3.4.
In order to show that Maximum 3-Satisfiability is hard to approximate,
we will show a reduction to it from Satisfiability using Maximum k-Function
Satisfiability as an intermediate maximization problem.
Lemma 3.6 ([9, Lemma 29.10]). There is an NC (and more generally, a polynomial time) enhanced gap-introducing reduction from Satisfiability to Maximum k-Function Satisfiability with gap parameter 1, for some k ∈ N.
Proof. For the sake of brevity, we will let P = Satisfiability and Q =
Maximum k-Function Satisfiability.
By Theorem 2.16, there exists a (lg n, 1)-verifier for P . Let φ be a Boolean formula of length n, which will be the input to the verifier. Let V be that verifier, and let c0 and q be the constants such that V uses at most c0 lg n random bits and reads at most q bits of the proof. For simplicity, let the length of the random string input to V be exactly c0 lg n. When considering all possible random strings ρ from which V reads its random bits, V reads a total of at most q · 2^(c0 lg n) bits of the proof, which equals q · n^c0. Let B be the set of at most q · n^c0 Boolean variables representing the values of the bits of the proof at the queried locations.
We let k = q, and then for each string ρ (of length c0 lg n), we will define
a function gρ which is a function of at most q of the Boolean variables from
B. Consider the Cook-Levin transformation of the operation of the verifier V
on input (φ, τ ; ρ), where φ is a Boolean formula (an instance of P ) and τ is
a satisfying assignment to the variables of φ. Let ψ be the Boolean formula
over the variables z1 , z2 , . . . , zh(n) produced by this transformation, where h(n)
is the polynomial bounding the size of the Boolean formula produced by the
Cook-Levin transformation. Some of these variables depend on the bits of φ,
some depend on q bits of τ , and some depend on the bits of ρ (and these sets
may intersect). If we let φ and ρ be fixed, however, we can define gρ to be the
restriction of the Boolean function ψ to the q variables of ψ corresponding to
the q bits of τ read by the verifier on input (φ, τ ; ρ).
Let G = {gρ | ρ ∈ Σ∗ and |ρ| = c0 lg n}. Notice that |G| = 2^(c0 lg n) = n^c0. Now we define the enhanced gap-introducing reduction as follows for all Boolean formulae φ of length n, and all truth assignments τ to the variables in B (not the variables of φ):
f(φ) = (B, G)
c(φ) = n^c0
ε = 1
g(φ, τ) = τ+
where τ+ is the extension of τ in which any variables in φ not in B are assigned an arbitrary binary value (say 0). The function c is NC computable because
multiplication is in NC, and there are a constant number of multiplications.
The function f is NC computable because B is simply a list of variables which
requires almost no computation and the functions in G can be constructed by
n^c0 processors in parallel, each of which constructs the function gρ by performing
a partial Boolean function evaluation (in logarithmic space, see Lemma 2.12).
The function g is NC computable because it simply copies τ to its output along
with a polynomial number of extra bits.
If φ ∈ P then there is a truth assignment τ such that V accepts on input (φ, τ; ρ) with probability 1 over the random strings ρ. In this case, m∗(f(φ)) = m∗((B, G)) = n^c0, since all the functions gρ in G are satisfied. If φ ∉ P then for every truth assignment, the verifier accepts with probability less than 1/2. In this case, every truth assignment satisfies less than half of all the n^c0 functions in G. Hence m∗(f(φ)) = m∗((B, G)) < 1/2 · n^c0 = 1/(1+ε) · c(φ).
To show that this gap-introducing reduction is enhanced, we suppose τ is an assignment for f(φ) which satisfies at least half of the satisfiable functions in f(φ). Let RP be the NP relation corresponding to P . Our goal is to show that (φ, g(φ, τ)) ∈ RP if and only if φ ∈ P . The forward implication is trivially true, since the existence of any satisfying assignment for φ implies that φ is satisfiable. Therefore, it suffices to show that φ ∈ P implies (φ, g(φ, τ)) ∈ RP . Suppose φ is satisfiable. Since all the clauses in φ must be satisfiable, all the functions in f(φ) are satisfiable. Then g(φ, τ), which equals τ+, is a truth assignment on which the verifier V will accept with probability 1 (it ignores the bits of τ+ not in τ). Hence (φ, g(φ, τ)) ∈ RP .
Therefore we have constructed an NC enhanced gap-introducing reduction
with gap parameter 1 from P to Q (that is, from Satisfiability to Maximum
k-Function Satisfiability).
Note that Lemma 3.6 did not use any information specific to Satisfiability;
any NP-complete decision problem would suffice.
Lemma 3.7 ([9, Proof of Theorem 29.7]). There is an NC many-one reduction
from k-Function Satisfiability to 3-Satisfiability.
Proof. Let ({x1, x2, . . . , xn}, {f1, f2, . . . , fm}) be an instance of k-Function Satisfiability. Without loss of generality, each function fi of k variables can be written as a CNF Boolean formula containing at most 2^k clauses, in which each clause contains at most k literals (one clause ruling out each falsifying assignment of the k variables). Call this Boolean formula ψi, and let ψ = ψ1 ∧ ψ2 ∧ · · · ∧ ψm. Observe that the given instance of k-Function Satisfiability is satisfiable if and only if ψ is satisfiable.
Let T be the NC computable function from Lemma 2.10. Now we can define the reduction g by g(φ) = T(ψ) for all Boolean formulae φ. The total number of clauses in ψ is at most m · 2^k, so the total number of clauses in T(ψ) is at most m · 2^k · (k − 2) (still a polynomial in the length of the input, since k is considered a fixed constant). g is NC computable because each of the m functions can be transformed by an independent processor, each processor performs the transformation from fi to ψi in a constant amount of time (since k is a constant, so is 2^k), and for each processor the transformation from ψi to
T (ψi ) is computable in NC. Finally, φ is satisfiable if and only if ψ is satisfiable
if and only if T (ψ) is satisfiable.
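The "without loss of generality" step — writing each k-ary Boolean function as a CNF formula with at most 2^k clauses of at most k literals — can be sketched via the truth-table method (an illustration, not the paper's own construction):

```python
from itertools import product

def function_to_cnf(fn, var_ids):
    """Write a k-ary Boolean function as CNF with at most 2^k clauses of
    at most k literals: one clause ruling out each falsifying assignment,
    as invoked in the proof of Lemma 3.7. Returns clauses as lists of
    signed variable ids."""
    clauses = []
    k = len(var_ids)
    for bits in product([False, True], repeat=k):
        if not fn(*bits):
            # this clause is violated exactly on the falsifying assignment
            clauses.append([-v if b else v for v, b in zip(var_ids, bits)])
    return clauses
```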
Lemma 3.8. There exists an NC (and more generally, a polynomial time)
enhanced gap-preserving reduction from Maximum k-Function Satisfiability
to Maximum 3-Satisfiability.
Proof. For brevity, we let P = Maximum k-Function Satisfiability and
Q = Maximum 3-Satisfiability.
We assume k is a fixed constant. Let f0 be the reduction given in Lemma 3.7
from k-Function Satisfiability to 3-Satisfiability. Define the six functions
as follows for all finite sets of Boolean variables U, all finite sets of Boolean
functions F of at most k of the variables in U, and all truth assignments τ to
3-CNF Boolean formulae:

f(⟨U, F⟩) = f0(⟨U, F⟩),
g(⟨U, F⟩, τ) = τ|U,
cP(⟨U, F⟩) = |F|,
cQ(φ) = |φ|,
α = 1, and
β = 2^(k+1) · (k − 2) − 1 (see explanation below),

where τ|U represents the restriction of the satisfying truth assignment τ to only
the variables of U.
f is computable by an NC circuit family because f0 is. g is computable by an
NC circuit family because it simply chooses a subset of the bits of τ. cP and cQ
are computable in NC because they simply output the length of their respective
inputs (or part of their inputs).
Suppose m∗P(⟨U, F⟩) ≥ cP(⟨U, F⟩) = |F|, which implies that all the functions
in F are satisfiable. By the correctness of the many-one reduction f0, all functions
in F are satisfiable if and only if all clauses in f0(⟨U, F⟩) are satisfiable. Therefore,
if φ denotes f0(⟨U, F⟩), then m∗Q(f(⟨U, F⟩)) = m∗Q(φ) = |φ| = cQ(φ).
Now suppose instead that m∗P(⟨U, F⟩) < (1/2) · |F|, so fewer than half of the
functions in F are satisfiable, or in other words, at least half of the functions
in F are unsatisfiable. Suppose γi is the conjunction which represents fi in the
formula f0(⟨U, F⟩). For each function fi which is not satisfiable, at least one
clause of γi must be unsatisfiable, by the correctness of f0. Hence the number
of unsatisfiable clauses in f(⟨U, F⟩) is at least (1/2)|F|, so m∗Q(f(⟨U, F⟩)) < (1/2)|F|.
Since cQ(f(⟨U, F⟩)) = |f0(⟨U, F⟩)| ≤ |F| · 2^k · (k − 2), we must choose β such
that (1/2)|F| ≤ (1/(1+β)) · |F| · 2^k · (k − 2). Hence, we choose β = 2^(k+1) · (k − 2) − 1.
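The arithmetic behind the choice of β can be spot-checked: with β = 2^(k+1) · (k − 2) − 1, the threshold (1/(1+β)) · |F| · 2^k · (k − 2) equals |F|/2 exactly. A small verification sketch (the variable names are ours):

```python
from fractions import Fraction

# Check that beta = 2^(k+1) * (k - 2) - 1 makes the threshold
# (1 / (1 + beta)) * |F| * 2^k * (k - 2) equal exactly |F| / 2,
# using exact rational arithmetic to avoid floating-point error.
for k in range(3, 10):
    beta = 2 ** (k + 1) * (k - 2) - 1
    F = 1000  # an arbitrary number of functions |F|
    threshold = Fraction(1, 1 + beta) * F * 2 ** k * (k - 2)
    assert threshold == Fraction(F, 2), (k, threshold)
```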
TODO 3.9. This theorem is false because the reduction cannot be enhanced;
consider the following counterexample. Let k = 3 and suppose each function in
F is simply a single disjunction of three variables, so |f(⟨U, F⟩)| = |F|. Assume
that a truth assignment τ satisfies at least (1/(2^(k+1) · (k − 2))) · |F| = (1/16) · |F| of the clauses in
f(⟨U, F⟩). In the case that such a truth assignment satisfies exactly one sixteenth
of the clauses, then exactly one sixteenth of the functions in F are satisfied by τ|U
(which equals τ). Since one sixteenth is less than one half, this is a contradiction.
TODO 3.10. Maybe check out [2] for another description of the completeness of Maximum 3-Satisfiability.
TODO 3.11. In [7, Theorem 4], which they attribute to [1], the authors state
directly that there is an enhanced gap-introducing reduction from any language
in NP to Maximum 3-Satisfiability.
Lemma 3.12. There is an NC (more generally, a polynomial time) enhanced
gap-introducing reduction from the decision problem Satisfiability to the
optimization problem Maximum 3-Satisfiability.
Proof. We will use Lemma 3.6 and Lemma 3.8 to invoke Lemma 2.21.
We will let (f, g, c, ε) be the NC enhanced gap-introducing reduction and let
(f′, g′, cQ, cT, α, β) be the NC enhanced gap-preserving reduction. The enhanced
gap-introducing reduction of Lemma 3.6 has

c(φ) = n^c0, and
ε = 1,

where n = |φ| and c0 is a constant which comes from the proof of Lemma 3.6
(specifically, from bounding the number of random bits read by the verifier). The
enhanced gap-preserving reduction of Lemma 3.8 has

cQ(⟨U, F⟩) = |F|,
cT(φ) = |φ|,
α = 1, and
β = 2^(k+1) · (k − 2) − 1,

where k is some constant.
To meet the conditions in the hypothesis of Lemma 2.21, we need α ≤ ε,
c(φ) ≥ cQ(f(φ)), and (1/(1+ε)) · c(φ) ≤ (1/(1+α)) · cQ(f(φ)). The first property is true
because α = ε. Since c(φ) = n^c0 and cQ(f(φ)) = cQ(⟨B, G⟩) = |G| = n^c0, the
first inequality is satisfied. Since (1/(1+ε)) · c(φ) = (1/2) · n^c0 and (1/(1+α)) · cQ(f(φ)) =
(1/2) · cQ(⟨B, G⟩) = (1/2) · n^c0, the second inequality is also satisfied.
Therefore we can conclude that there is an NC enhanced gap-introducing
reduction (f̃, g̃, c̃, ε̃) from Satisfiability to Maximum 3-Satisfiability, where

f̃(φ) = f′(f(φ)) = f′(⟨B, G⟩) = f0(⟨B, G⟩),
g̃(φ, τ) = g(φ, g′(f(φ), τ)) = g(φ, g′(⟨B, G⟩, τ)) = g(φ, τ|B) = τ|B,
c̃(φ) = cT(f′(f(φ))) = cT(f′(⟨B, G⟩)) = cT(f0(⟨B, G⟩)) = |f0(⟨B, G⟩)| = |G| · 2^k · (k − 2), and
ε̃ = β = 2^(k+1) · (k − 2) − 1.
Although Lemma 3.12 specifies an enhanced gap-introducing reduction, the
following theorem does not require that the reduction be enhanced. However,
we provide the enhanced reduction because Lemma 3.12 will be used in section 4,
in order to prove that Maximum 3-Satisfiability is complete for the class
APX (and NCX).
Theorem 3.13 ([3, Theorem 6.3] and [9, Corollary 29.8]). There is a constant
k such that no polynomial time r-approximator for Maximum 3-Satisfiability
exists unless P = NP, for all r < 2^(k+1) · (k − 2).
Proof. Follows from Lemma 3.12 and Theorem 3.4.
TODO 3.14. This should be something like r < 1 + 1/(2^(k+1) · (k − 2) − 1) . . .
Corollary 3.15. Maximum 3-Satisfiability ∉ PTAS unless P = NP.
Proof. Follows from Theorem 3.13 and Corollary 3.5.
3.2 Hardness of parallel approximation
Theorem 3.16 (NC version of Theorem 3.4). Let P be a decision problem which
is complete for NNC under NC many-one reductions. Let Q be a maximization
problem in NNCO. If there is an NC gap-introducing reduction from P to Q with
gap parameter ε, then for all r < 1 + ε, there is no FNC r-approximator for Q
unless NC = NNC.
Proof. The proof is similar to the proof of Theorem 3.4.
Corollary 3.17. Let P be a decision problem which is complete for NNC under
NC many-one reductions. Let Q be a maximization problem in NNCO. If there is
an NC gap-introducing reduction from P to Q, then Q ∉ NCAS unless NC = NNC.
Proof. The proof is similar to the proof of Corollary 3.5.
By Lemma 3.12, we already know that there exists an NC enhanced gap-introducing reduction from Satisfiability to Maximum 3-Satisfiability.
By Lemma 2.14 we know that Satisfiability is complete for NNC under NC
many-one reductions. By Lemma 2.15 we know Maximum 3-Satisfiability
is in NNCO. Therefore we have the following analogs of Theorem 3.13 and
Corollary 3.15.
Theorem 3.18. There is a constant k such that no FNC r-approximator for
Maximum 3-Satisfiability exists unless NC = NNC, for all r < 2^(k+1) · (k − 2).
Corollary 3.19. Maximum 3-Satisfiability ∉ NCAS unless NC = NNC.
4 Completeness in approximation classes
We can now give another perspective on why Maximum 3-Satisfiability has
no polynomial time approximation scheme unless P = NP, and why it has no
NC approximation scheme unless NC = NNC. In this section we prove that
this problem is in fact complete for APX under polynomial time AP reductions,
and complete for NCX under NC AP reductions. This means that if there were
a polynomial time approximation scheme for this problem, then there would
be one for all problems in APX (that is, we would have APX = PTAS), and if
there were an NC approximation scheme for it, then there would be one for
all problems in NCX (that is, NCX = NCAS). In fact, we have P = NP if and
only if APX = PTAS [5, Section 2], and NC = NNC if and only if NCX = NCAS.
(Interestingly, we also have P = NP if and only if APX = NCX [6, Theorem 8.2.9].)
Therefore, just as NP-completeness intuitively implies hardness of decision for
decision problems, so does APX-completeness imply hardness of approximation
for optimization problems.
As a historical note, APX-completeness was a precursor to hardness of
approximation results due to the PCP Theorem.
4.1 APX-completeness
To show that Maximum 3-Satisfiability is complete for APX under AP
reductions we must show that it is in APX and that there is an AP reduction
from every optimization problem in APX to it. We will do this in three steps.
First, we will show that it is in APX. Second, we will show that it is complete
under AP reductions for the class of maximization problems in APX. Finally,
we will show that there is an AP reduction from every minimization problem to
some maximization problem in APX.
Theorem 4.1. Maximum 3-Satisfiability is in NCX, and more generally in
APX.
Proof. Maximum 3-Satisfiability is a restriction of Maximum k-Function
Satisfiability, which is in NCX [8, Corollary 13]. Since NCX ⊆ APX, it is also
in APX.
Before proceeding to Theorem 4.6, we require the following lemmata.
Lemma 4.2. Let P ∈ APX, with P = (I, S, m, max), and let A be the r-approximator
for P for some constant r. Let rn ≥ 1. There exists a nondeterministic
polynomial time algorithm B such that on input (x, i), where x ∈ I and
i ∈ N, B decides if there exists a solution y with

rn^i · m(x, A(x)) ≤ m(x, y) ≤ rn^(i+1) · m(x, A(x)).

Furthermore, if a solution y exists, B writes y before accepting.
Proof. Let p be the polynomial which bounds the size of the solutions for an
instance x of size n. Define B as in Algorithm 1.

Algorithm 1 Nondeterministic polynomial time algorithm that decides if there
is an approximate solution for instance x of problem P in interval i
Input: x ∈ I, i ≥ 0
1: function B(x, i):
2:     guess y with |y| ≤ p(|x|)
3:     a ← mP(x, AP(x))
4:     if (x, y) ∈ S and rn^i · a ≤ mP(x, y) < rn^(i+1) · a then
5:         write y
6:         Accept
7:     else
8:         Reject

Since all operations used during the execution of B are polynomial time computable,
B runs in polynomial time. Since B accepts if and only if there exists a solution y
with the stated bounds, the conclusion follows.
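The nondeterministic guess in Algorithm 1 can be simulated deterministically by exhaustive search (in exponential time). A toy sketch, with the problem supplied as plain callables; all parameter names here are ours, not from the text:

```python
def simulate_B(x, i, rn, solutions, is_feasible, measure, approx):
    """Exponential-time deterministic simulation of the machine B.

    Returns a solution y with rn^i * a <= m(x, y) < rn^(i+1) * a,
    where a = m(x, A(x)), or None if no such solution exists.
    solutions(x) enumerates candidate solutions (the 'guess' step),
    is_feasible plays the role of membership in S, measure is m,
    and approx is the r-approximator A.
    """
    a = measure(x, approx(x))
    for y in solutions(x):
        if is_feasible(x, y) and rn ** i * a <= measure(x, y) < rn ** (i + 1) * a:
            return y  # B writes y and accepts on some branch
    return None  # B rejects on every branch
```

As a toy usage, take x to be a list of values, solutions to be their indices, and the measure of index y to be x[y]; with rn = 2, interval i then contains the indices whose value lies in [2^i · a, 2^(i+1) · a).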
Lemma 4.3. Let r, ri, k > 1. Let ψ be a Boolean formula which satisfies
ψ = ψ0 ∧ ψ1 ∧ · · · ∧ ψk−1, where each ψi is a 3-CNF Boolean formula with
exactly M clauses, for some M ∈ N. Let R(ψ, τ) be the performance ratio
corresponding to Maximum 3-Satisfiability for all 3-CNF Boolean formulae
ψ and all truth assignments τ. Let τ be a truth assignment such that R(ψ, τ) ≤ r
and R(ψi, τ) = ri for all i ≤ k − 1 (here, R(ψi, τ) is the performance ratio of τ
with respect to only ψi). Then

ri ≤ 1 / (1 − 2k · ((r − 1)/r)).
Proof. Let m be the measure for Maximum 3-Satisfiability, so m(ψ, τ) is
the number of clauses satisfied by τ. First, we have

R(ψ, τ) = m∗(ψ)/m(ψ, τ) ≤ r  =⇒  m(ψ, τ) ≥ m∗(ψ)/r.

From this, we have

m∗(ψ) − m(ψ, τ) ≤ m∗(ψ) − m∗(ψ)/r = m∗(ψ) · (1 − 1/r) = m∗(ψ) · (r − 1)/r ≤ kM · (r − 1)/r,

where the final inequality is true because there are at most M (satisfiable) clauses
in each of the k formulae ψ0, ψ1, . . . , ψk−1. Since

m∗(ψ) = Σ_{i=0}^{k−1} m∗(ψi)  and  m(ψ, τ) = Σ_{i=0}^{k−1} m(ψi, τ),

we have

m∗(ψ) − m(ψ, τ) = Σ_{j=0}^{k−1} m∗(ψj) − Σ_{j=0}^{k−1} m(ψj, τ)
= Σ_{j=0}^{k−1} (m∗(ψj) − m(ψj, τ))
≥ m∗(ψi) − m(ψi, τ)
= m∗(ψi) · (ri − 1)/ri
≥ (M/2) · (ri − 1)/ri,

where the last inequality follows from Lemma 2.11. Combining the upper and
lower bounds, we have

(M/2) · (ri − 1)/ri ≤ kM · (r − 1)/r,

from which we find

1 − 1/ri ≤ 2k · (r − 1)/r,

and hence

1 − 2k · (r − 1)/r ≤ 1/ri.

Taking the multiplicative inverse, we find

ri ≤ 1 / (1 − 2k · ((r − 1)/r)),

which concludes the proof.
Lemma 4.4. Let rP, ε ∈ Q+ with rP > 1 and ε > 0. There exists a constant
α ∈ Q+ (which depends only on rP and ε) with α > 1 such that the following
is true. Let r ∈ Q+ with 1 < r < 1 + (rP − 1)/α, let rn = 1 + α · (r − 1), and let
k = ⌈log_rn rP⌉. Then

r < 1 + ε / (2k · (1 + ε) − ε).

Proof. Since rn = 1 + α · (r − 1), we have r = 1 + (rn − 1)/α, so we intend to show

1 + (rn − 1)/α < 1 + ε / (2k · (1 + ε) − ε),

or in other words,

α > (rn − 1) · [2k · (1 + ε) − ε] / ε.

In fact, we will prove the stronger lower bound

α > (rn − 1) · [2k · (1 + ε)] / ε

by finding an upper bound on k which allows us to cancel the (rn − 1) term in
the numerator. Since k is defined to equal ⌈log_rn rP⌉, we have

k ≤ 1 + log rP / log rn
  ≤ 1 + rn · log rP / log rn
  ≤ 1 + rn · log rP / (rn − 1)
  = (rn · log rP + rn − 1) / (rn − 1)
  < (rP · log rP + rP − 1) / (rn − 1),

where the last inequality is true because we have assumed r < 1 + (rP − 1)/α, which
implies rn < rP. Since we know k · (rn − 1) < rP · log rP + rP − 1, it suffices to
choose

α ≥ 2 · (1 + ε) · (rP · log rP + rP − 1) / ε.

Choosing α equal to that quantity (so that it depends only on rP and ε) gives
us the desired upper bound on r.
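Lemma 4.4 can be spot-checked numerically (our own illustration, interpreting log as the natural logarithm): for sample values of rP and ε, the prescribed α keeps every admissible r below the bound 1 + ε/(2k · (1 + ε) − ε).

```python
import math

def check_lemma_4_4(rP, eps, samples=50):
    """Spot-check Lemma 4.4: with alpha = 2(1+eps)(rP log rP + rP - 1)/eps,
    every r with 1 < r < 1 + (rP - 1)/alpha satisfies
    r < 1 + eps / (2k(1+eps) - eps), where rn = 1 + alpha(r - 1)
    and k = ceil(log_rn(rP))."""
    alpha = 2 * (1 + eps) * (rP * math.log(rP) + rP - 1) / eps
    r_max = 1 + (rP - 1) / alpha
    for t in range(1, samples):
        r = 1 + (r_max - 1) * t / samples  # sample r strictly inside (1, r_max)
        rn = 1 + alpha * (r - 1)
        k = math.ceil(math.log(rP) / math.log(rn))
        assert r < 1 + eps / (2 * k * (1 + eps) - eps), (r, k)
    return alpha
```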
Lemma 4.5. Let r, ri, k, ε ∈ R+ with r > 1, ri > 1, k > 1, and ε > 0. If

ri ≤ 1 / (1 − 2k · ((r − 1)/r))  and  r < 1 + ε / (2k · (1 + ε) − ε),

then ri < 1 + ε.
Proof. It suffices to show 1/(1 − 2k · ((r − 1)/r)) < 1 + ε. To determine if r meets this
condition, we attempt to express the r that appears in this inequality in terms
of ε and k.

1 / (1 − 2k · ((r − 1)/r)) < 1 + ε
⇐⇒ 1/(1 + ε) < 1 − 2k · ((r − 1)/r)
⇐⇒ 2k · ((r − 1)/r) < 1 − 1/(1 + ε)
⇐⇒ (r − 1)/r < 1/(2k) − 1/(2k · (1 + ε))
⇐⇒ 1 − 1/r < (1 + ε)/(2k · (1 + ε)) − 1/(2k · (1 + ε)) = ε/(2k · (1 + ε))
⇐⇒ 1/r > 1 − ε/(2k · (1 + ε))
⇐⇒ 1/r > (2k · (1 + ε) − ε)/(2k · (1 + ε))
⇐⇒ r < 2k · (1 + ε)/(2k · (1 + ε) − ε)
⇐⇒ r < 1 + ε/(2k · (1 + ε) − ε).

Since the last inequality is true by hypothesis and each step is a biconditional,
the first inequality is true, which is what we intended to show.
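Since Lemma 4.5 is pure algebra, it is easy to spot-check numerically (our own illustration, with names of our choosing):

```python
def check_lemma_4_5(k, eps, samples=100):
    """Spot-check Lemma 4.5: whenever r < 1 + eps/(2k(1+eps) - eps),
    the bound 1/(1 - 2k(r-1)/r) on ri stays below 1 + eps."""
    r_max = 1 + eps / (2 * k * (1 + eps) - eps)
    for t in range(1, samples):
        r = 1 + (r_max - 1) * t / samples  # r strictly inside (1, r_max)
        ri_bound = 1 / (1 - 2 * k * (r - 1) / r)
        assert 1 < ri_bound < 1 + eps, (r, ri_bound)
```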
Theorem 4.6 ([3, Theorem 8.6]). Maximum 3-Satisfiability is complete for
the class of maximization problems in APX under ≤P_AP reductions.
We describe the idea of the proof before presenting the proof itself, since it
is somewhat technical.
Proof idea. First, we recall the enhanced gap-introducing reduction (fs, gs, ε)
from Lemma 3.12, which shows that there exist a positive constant ε and functions
fs and gs such that for any CNF formula φ, if τ is a truth assignment satisfying
at least a 1/(1+ε) fraction of the maximum number of satisfiable clauses of fs(φ),
then gs(φ, τ) is a satisfying assignment for φ if and only if φ is satisfiable. Here fs is a mapping
from CNF formulae to 3-CNF formulae.
The reduction we present is a generic reduction from an arbitrary maximization problem in APX to Maximum 3-Satisfiability, again using a Cook-Levin
transformation so that the number of satisfiable clauses in the Boolean formula
corresponds in some way to the measure of a solution in the original maximization problem. In order to search for a “good enough” approximate solution for
Maximum 3-Satisfiability, we will partition the interval in which we are
certain the optimal solution lives (but which is too large) into a constant number
of subintervals. For each subinterval we will use the Cook-Levin transformation
to construct a Boolean formula which is satisfiable if and only if there is an
approximate solution in that subinterval, and which exposes the approximate
solution (by having Boolean variables y1 , y2 , . . . , yk whose truth values represent
the values of the bits in the binary representation of y). Furthermore, we apply
the function fs to each of those formulae in order to provide the gap guarantee
discussed in the previous paragraph; this ensures that we can use the “enhanced”
mapping gs to reconstruct a solution for the original problem. Finally, we use
gs on one of the formulae corresponding to a subinterval to produce a “good
enough” approximate solution for the original maximization problem.
Proof of Theorem 4.6. For the sake of brevity, we let the single letter Q represent
Maximum 3-Satisfiability.
Let P be a maximization problem in APX, so we have P = (I, S, m, max).
Let AP be the polynomial time computable rP -approximator for P , where rP is
some constant. We need to define an AP reduction (f, g, α) from P to Q. We
will delay the definition of the constant α until later in the proof, and until then
we assume it has been fixed. We now wish to define functions f and g for all
r > 1. Let rn = 1 + α · (r − 1); this is the desired upper bound of the performance
ratio RP (x, g(x, τ, r)), as required by the AP condition when RQ (f (x, r), τ ) ≤ r.
In the case that rP ≤ rn, defining g(x, y, r) = AP(x) for all x and y is sufficient,
and no definition of f is necessary, since R(x, g(x, y, r)) = R(x, AP(x)) ≤ rP ≤ rn.
Suppose from here on that rP > rn .
For f we intend to map an instance x of P to a 3-CNF Boolean formula,
and for g we intend to map a truth assignment which satisfies a constant factor
of the satisfiable clauses of f (x, r) to a solution y for P . In order to recover a
solution y for P with RP (x, y) ≤ rn , we need a solution whose measure is close
to m∗P (x), the optimal measure of x. If we let a(x) = mP (x, AP (x)), then
a(x) ≤ m∗P(x) ≤ rP · a(x),   (1)

since a(x) is the measure of an rP-approximate solution to a maximization problem.
We cannot simply output any solution in this range, because the approximation
guarantee would be only rP; we need it to be rn. Hence we will subdivide this
interval and derive a solution within a subinterval of size rn.
Let us divide the interval [a(x), rP · a(x)] into k subintervals

[a(x), rn · a(x)], [rn · a(x), rn^2 · a(x)], . . . , [rn^(k−1) · a(x), rP · a(x)],

where k = ⌈log_rn rP⌉. (Notice that the sizes of these subintervals are not
necessarily equal; except for that of the final subinterval, the sizes are nondecreasing.
This won't be a problem, as we only need a multiplicative constant
approximation, not an additive constant approximation.) By construction, we
have rn^k = rn^⌈log_rn rP⌉ ≥ rn^(log_rn rP) = rP, and hence rP · a(x) ≤ rn^k · a(x).
Combining this with Equation 1, we have a(x) ≤ m∗P(x) ≤ rP · a(x) ≤ rn^k · a(x),
so the optimal measure of x must lie in one of these intervals. Now we construct
f and g in such a way that for any x ∈ I, we can recover an approximate solution
y that lies in the same interval in which m∗P(x) lies. If we were to find a solution
y in the same interval in which the optimal measure lies, say interval j, then we
would have

rn^j · a(x) ≤ mP(x, y) ≤ m∗P(x) ≤ rn^(j+1) · a(x).   (2)
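The geometric subdivision just described can be sketched directly (the names are ours): cut [a(x), rP · a(x)] at the powers rn^i · a(x) and check that the k = ⌈log_rn rP⌉ pieces cover the whole interval.

```python
import math

def subdivide(a, rP, rn):
    """Split [a, rP*a] into k = ceil(log_rn(rP)) geometric subintervals
    [rn^i * a, rn^(i+1) * a], truncating the last endpoint to rP * a."""
    k = math.ceil(math.log(rP) / math.log(rn))
    intervals = [(rn ** i * a, min(rn ** (i + 1) * a, rP * a)) for i in range(k)]
    return k, intervals
```

For example, with a(x) = 10, rP = 5, and rn = 2 this produces the three subintervals [10, 20], [20, 40], and [40, 50]; only the truncated final piece breaks the geometric pattern, matching the remark above.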
Then, since

m∗P(x)/a(x) ≤ rn^(j+1)  and  rn^j ≤ mP(x, y)/a(x),

we find that

RP(x, y) = m∗P(x)/mP(x, y) = (m∗P(x)/a(x)) / (mP(x, y)/a(x)) ≤ rn^(j+1)/rn^j = rn.   (3)
So this is our goal when constructing f and g.
To define f we need some auxiliary functions. Let B be the Turing machine
guaranteed by Lemma 4.2. Let fs be the function from the enhanced
gap-introducing reduction of Lemma 3.12. Let C be the transformation from
Lemma 2.13. Now we can define f by

f(x, r) = ⋀_{i=0}^{k−1} fs(C(B, (x, i)))

for all x ∈ I. Let us explain this function. Since B writes a solution y before
it accepts, C(B, (x, i)) will have some variables y1, y2, . . . , yp(|x|) which correspond
to the bits of a solution y, should one exist in subinterval i. Applying fs
to this Boolean formula produces a 3-CNF formula whose gap behavior
will be discussed later (since it depends on gs). Observe that f(x, r) is
still a 3-CNF formula, since a conjunction of 3-CNF formulae is itself a 3-CNF
formula. We assume without loss of generality (by padding with unsatisfiable
clauses) that each formula fs(C(B, (x, i))) has the same number of clauses, say
M.
Now, to determine how to define g, first suppose x ∈ I and τ ∈ SQ(f(x, r)).
For each i with 0 ≤ i ≤ k − 1, let φi = C(B, (x, i)) and ψi = fs(φi). Let
ψ = f(x, r), so τ is a truth assignment to ψ which satisfies some of the clauses.
Suppose gs is the function from the enhanced gap-introducing reduction which
produces solutions for P, as described in the preceding discussion. For any τ
that satisfies a 1/(1+ε) fraction of the maximum number of satisfiable clauses in ψi
for all i ≤ k − 1, if we let τi = gs(φi, τ), then τi satisfies φi if and only if φi is
satisfiable. We will assume for now that τ does indeed satisfy a 1/(1+ε) fraction of
the maximum number of satisfiable clauses of ψi for all i, and delay the proof
of this fact until later. Furthermore, if φi is satisfiable via truth assignment τi,
then we can recover the bits of the solution y from the appropriate variables of
φi, as described previously. Let T be the function which outputs y given φi and
τi; this function can be computed in logarithmic space by Lemma 2.12.
Let j be the maximum i such that τi satisfies φi (such a j exists because
m∗P(x) is in some interval, and for all intervals beyond that one, φi must not be
satisfiable). This implies that there is a y in interval j with

rn^j · a(x) ≤ mP(x, y) ≤ m∗P(x) ≤ rn^(j+1) · a(x),

which is exactly the statement of Equation 2, and hence Equation 3 follows.
Therefore, we define g(x, τ, r) as in Algorithm 2. Observe that g is polynomial
time computable since k, the number of intervals to search, is a constant, and
all other functions are polynomial time computable.
Algorithm 2 Deterministic polynomial time algorithm that computes an rn-approximate solution for x
Input: x ∈ I, a truth assignment τ, r > 1
1: function g(x, τ, r):
2:     for i ∈ {0, 1, . . . , k − 1} do
3:         φi = C(B, (x, i))
4:         τi = gs(φi, τ)
5:     j ← max{i | 0 ≤ i ≤ k − 1 and τi satisfies φi}
6:     return T(φj, τj)
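Algorithm 2 can be sketched generically in Python, with the auxiliary functions passed in as parameters (a toy illustration; the callables stand in for C, B, gs, and T from the text):

```python
def g(x, tau, r, k, make_formula, decode, satisfies, extract):
    """Deterministic sketch of Algorithm 2.

    make_formula(x, i) plays the role of C(B, (x, i)); decode(phi, tau)
    plays the role of gs; satisfies(phi, tau) checks satisfaction; and
    extract(phi, tau) plays the role of T, recovering the solution y.
    """
    phis = [make_formula(x, i) for i in range(k)]
    taus = [decode(phi, tau) for phi in phis]
    # j is the largest interval index whose formula is satisfied.
    j = max(i for i in range(k) if satisfies(phis[i], taus[i]))
    return extract(phis[j], taus[j])
```

With toy stubs in which "formula" i is just the index i and intervals 0 through 2 count as satisfiable, the sketch returns the solution extracted from the highest satisfied interval.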
We now complete the tasks which we delayed above. We will prove that any
truth assignment τ to ψ satisfies at least a 1/(1+ε) fraction of the maximum number
of satisfiable clauses in each ψi for each i ≤ k − 1, and in doing so we will also
choose the value of the constant α. By Lemma 4.3, we know that

ri ≤ 1 / (1 − 2k · ((r − 1)/r)),

where ri is the performance ratio of τ with respect to only ψi. By Lemma 4.4
(using the assumption that rn < rP), there exists a constant α that depends
only on the constant rP (defined by the approximation algorithm AP) and the
constant ε (defined in a previous theorem) such that

r < 1 + ε / (2k · (1 + ε) − ε).   (4)
Applying Lemma 4.5, we have ri < 1 + ε. In other words, mQ(ψi, τ) ≥ (1/(1+ε)) ·
m∗Q(ψi), which is what we intended to show.
In summary, we define our reduction (f, g, α) as follows for all x ∈ I, r > 1,
and τ ∈ SQ(f(x, r)):

f(x, r) = ⋀_{i=0}^{k−1} fs(C(B, (x, i))),
g(x, τ, r) = T(φj, gs(φj, τ)), and
α = 2 · (1 + ε) · (rP · log rP + rP − 1) / ε.
Theorem 4.7 ([3, Theorem 8.7]). For every minimization problem P ∈ APX,
where P = (I, S, mP, min), there exists a maximization problem Q ∈ APX such
that P ≤P_AP Q.
Proof. Proof omitted; see [3] for the proof.
Corollary 4.8. Maximum 3-Satisfiability is complete for APX under polynomial time AP reductions.
Proof. Follows from Theorem 4.1, Theorem 4.6, and Theorem 4.7.
4.2 NCX-completeness
To show that Maximum 3-Satisfiability is complete for NCX under NC AP
reductions we must show that it is in NCX and that there is an NC AP reduction
from every optimization problem in NCX to it. By Theorem 4.1, we already know
that it is in NCX. We will show that it is complete under NC AP reductions for
the class of maximization problems in NCX. Then we will show that there is
an NC AP reduction from every minimization problem to some maximization
problem in NCX.
Theorem 4.9 (NC version of Theorem 4.6). Maximum 3-Satisfiability is
complete for the class of maximization problems in NCX under ≤NC_AP reductions.
Proof. The proof is nearly the same as the proof of Theorem 4.6, replacing all
polynomial time computable functions and procedures with those computable
by NC circuits.
When defining f , observe that for each i, the formula fs (C(B, (x, i))) can be
computed independently and in parallel. The description of B can be written
in constant time since it does not depend on the input x, and additionally its
length is constant with respect to |x|. Since i < k and k is a constant, i can be written in
constant time when it is provided as input to the function C. C is a logarithmic
space computable function, and fs is computable in NC. It follows that f can
be computed by an NC circuit.
When defining g, we again observe that for each i, the formula φi and the
truth assignment τi can be computed independently and in parallel, as can
deciding whether τi satisfies φi . We can decide in logarithmic space whether a
truth assignment satisfies a Boolean formula by Lemma 2.12. Computing the
maximum of a constant number of natural numbers, each of which has constant
size, can be performed by an NC circuit. It follows that g can be computed by
an NC circuit.
The value of α and the algebra proving the correctness of the reduction
remains the same.
Theorem 4.10 (NC version of Theorem 4.7). For every minimization problem
P ∈ NCX there exists a maximization problem Q ∈ NCX such that P ≤NC_AP Q.
Proof. A careful examination of the proof of Theorem 4.7 reveals that if P ∈ NCX,
the functions necessary for the reduction are also computable in NC.
Corollary 4.11. Maximum 3-Satisfiability is complete for NCX under NC
AP reductions.
Proof. Follows from Theorem 4.1, Theorem 4.9, and Theorem 4.10.
References
[1] Sanjeev Arora et al. "Proof verification and the hardness of approximation problems". In: Journal of the ACM 45.3 (May 1998), pp. 501–555. issn: 0004-5411. doi: 10.1145/278298.278306. url: http://doi.acm.org/10.1145/278298.278306.
[2] G. Ausiello and V. Th. Paschos. "Reductions, completeness and the hardness of approximability". In: European Journal of Operational Research 172.3 (2006), pp. 719–739. issn: 0377-2217. doi: 10.1016/j.ejor.2005.06.006. url: http://www.sciencedirect.com/science/article/pii/S0377221705005011.
[3] Giorgio Ausiello et al. Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties. Springer, 1999. isbn: 9783540654315.
[4] Samuel R. Buss. "The Boolean formula value problem is in ALOGTIME". In: Proceedings of the 19th Annual ACM Symposium on Theory of Computing. 1987, pp. 123–131.
[5] P. Crescenzi. "A Short Guide To Approximation Preserving Reductions". In: Proceedings of the 12th Annual IEEE Conference on Computational Complexity. CCC '97. Washington, DC, USA: IEEE Computer Society, 1997, pp. 262–273. isbn: 0-8186-7907-7. url: http://dl.acm.org/citation.cfm?id=791230.792302.
[6] J. Díaz et al. Paradigms for Fast Parallel Approximability. Cambridge International Series on Parallel Computation. Cambridge University Press, 1997. isbn: 9780521117920. url: http://books.google.com/books?id=tC9gCQ2lmVcC.
[7] Sanjeev Khanna et al. "On syntactic versus computational views of approximability". In: SIAM Journal on Computing (1999).
[8] L. Trevisan. "Parallel Approximation Algorithms by Positive Linear Programming". In: Algorithmica 21.1 (1998), pp. 72–88. issn: 0178-4617. doi: 10.1007/PL00009209. url: http://dx.doi.org/10.1007/PL00009209.
[9] Vijay V. Vazirani. Approximation Algorithms. Springer, 2001, 2003. isbn: 3540653678.
[10] Marty J. Wolf. "Nondeterministic circuits, space complexity and quasigroups". In: Theoretical Computer Science 125 (1994), pp. 295–313.