Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming

MICHEL X. GOEMANS
Massachusetts Institute of Technology, Cambridge, Massachusetts

AND

DAVID P. WILLIAMSON
IBM T. J. Watson Research Center, Yorktown Heights, New York
Abstract. We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least .87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of 1/2 for MAX CUT and 3/4 for MAX 2SAT. Slight extensions of our analysis lead to a .79607-approximation algorithm for the maximum directed cut problem (MAX DICUT) and a .758-approximation algorithm for MAX SAT, where the best previously known approximation algorithms had performance guarantees of 1/4 and 3/4, respectively. Our algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.
Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems—computations on discrete structures; G.2.2 [Discrete Mathematics]: Graph Theory—graph algorithms; G.3 [Probability and Statistics]—probabilistic algorithms (including Monte Carlo); I.1.2 [Algebraic Manipulation]: Algorithms—analysis of algorithms

General Terms: Algorithms

Additional Key Words and Phrases: Approximation algorithms, convex optimization, randomized algorithms, satisfiability

A preliminary version has appeared in Proceedings of the 26th Annual ACM Symposium on Theory of Computing (Montreal, Que., Canada). ACM, New York, 1994, pp. 422-431.
The research of M. X. Goemans was supported in part by National Science Foundation (NSF) contract CCR 93-02476 and DARPA contract N00014-92-J-1799.
The research of D. P. Williamson was supported by an NSF Postdoctoral Fellowship. This research was conducted while the author was visiting MIT.
Authors' addresses: M. X. Goemans, Department of Mathematics, Room 2-382, Massachusetts Institute of Technology, Cambridge, MA 02139, e-mail: goemans@math.mit.edu; D. P. Williamson, IBM T. J. Watson Research Center, Room 33-219, P.O. Box 218, Yorktown Heights, NY 10598, e-mail: dpw@watson.ibm.com.
Journal of the ACM, Vol. 42, No. 6, November 1995, pp. 1115-1145.
1. Introduction

Given an undirected graph G = (V, E) and nonnegative weights w_ij = w_ji on the edges (i, j) ∈ E, the maximum cut problem (MAX CUT) is that of finding the set of vertices S that maximizes the weight of the edges in the cut (S, S̄); that is, the weight of the edges with one endpoint in S and the other in S̄. For simplicity, we usually set w_ij = 0 for (i, j) ∉ E and denote the weight of a cut (S, S̄) by w(S, S̄) = Σ_{i∈S, j∈S̄} w_ij. The MAX CUT problem is one of Karp's original NP-complete problems [Karp 1972] and has long been known to be NP-complete even if the problem is unweighted; that is, if w_ij = 1 for all (i, j) ∈ E [Garey et al. 1976]. The MAX CUT problem is solvable in polynomial time for some special classes of graphs (e.g., if the graph is planar [Orlova and Dorfman 1972; Hadlock 1975]). Besides its theoretical importance, the MAX CUT problem has applications in circuit layout design and statistical physics (Barahona et al. [1988]). For a comprehensive survey of the MAX CUT problem, the reader is referred to Poljak and Tuza [1995].

Because it is unlikely that there exist efficient algorithms for NP-hard maximization problems, a typical approach to solving such a problem is to find a ρ-approximation algorithm; that is, a polynomial-time algorithm that delivers a solution of value at least ρ times the optimal value. The constant ρ is sometimes called the performance guarantee of the algorithm. We will also use the term "ρ-approximation algorithm" for randomized polynomial-time algorithms that deliver solutions whose expected value is at least ρ times the optimal.

Sahni and Gonzales [1976] presented a 1/2-approximation algorithm for the MAX CUT problem. Their algorithm iterates through the vertices and decides whether or not to assign vertex i to S based on which placement maximizes the weight of the cut of vertices 1 to i. This algorithm is essentially equivalent to the randomized algorithm that flips an unbiased coin for each vertex to decide which vertices are assigned to the set S. Since 1976, a number of researchers have presented approximation algorithms for the unweighted MAX CUT problem with performance guarantees of 1/2 + 1/(2m) [Vitányi 1981], 1/2 + (n − 1)/(4m) [Poljak and Turzík 1982], 1/2 + 1/(2n) [Haglin and Venkatesan 1991], and 1/2 + 1/(2Δ) [Hofmeister and Lefmann 1995] (where n = |V|, m = |E| and Δ denotes the maximum degree), but no progress was made in improving the constant in the performance guarantee beyond that of Sahni and Gonzales's straightforward algorithm.
We present a simple, randomized (α − ε)-approximation algorithm for the maximum cut problem, where

    α = min_{0 < θ ≤ π} (2/π) · θ/(1 − cos θ) > 0.87856,

and ε is any positive scalar. The algorithm represents the first substantial progress in approximating the MAX CUT problem in nearly twenty years. The algorithm for MAX CUT also leads directly to a randomized (α − ε)-approximation algorithm for the maximum 2-satisfiability problem (MAX 2SAT). The best previously known algorithm for this problem has a performance guarantee of 3/4 and is due to Yannakakis [1994]. A somewhat simpler 3/4-approximation algorithm was given in Goemans and Williamson [1994b]. The improved MAX 2SAT algorithm leads to a .7584-approximation algorithm for the overall MAX SAT problem, fractionally better than Yannakakis' 3/4-approximation algorithm for MAX SAT. Finally, a slight extension of our analysis yields a (β − ε)-approximation algorithm for the maximum directed cut problem (MAX DICUT), where

    β = min_{0 ≤ θ < arccos(−1/3)} (2/π) · (2π − 3θ)/(1 + 3 cos θ) > 0.79607.

The best previously known algorithm for MAX DICUT has a performance guarantee of 1/4 [Papadimitriou and Yannakakis 1991].
Our algorithm depends on a means of randomly rounding a solution to a nonlinear relaxation of the MAX CUT problem. This relaxation can either be seen as a semidefinite program or as an eigenvalue minimization problem. To our knowledge, this is the first time that semidefinite programs have been used in the design and analysis of approximation algorithms. The relaxation plays a crucial role in allowing us to obtain a better performance guarantee: previous approximation algorithms compared the value of the solution obtained to the total sum of the weights Σ_{i<j} w_ij. This sum can be arbitrarily close to twice the value of the maximum cut.
A semidefinite program is the problem of optimizing a linear function of a symmetric matrix subject to linear equality constraints and the constraint that the matrix be positive semidefinite. Semidefinite programming is a special case of convex programming and also of the so-called linear programming over cones, or cone-LP, since the set of positive semidefinite matrices constitutes a convex cone. To some extent, semidefinite programming is very similar to linear programming; see Alizadeh [1995] for a comparison. It inherits the very elegant duality theory of cone-LP (see Wolkowicz [1981] and the exposition by Alizadeh [1995]). The simplex method can be generalized to semidefinite programs (Pataki [1995]). Given any ε > 0, semidefinite programs can be solved within an additive error of ε in polynomial time (ε is part of the input, so the running time dependence on ε is polynomial in log 1/ε). This can be done through the ellipsoid algorithm (Grötschel et al. [1988]) and other polynomial-time algorithms for convex programming (Vaidya [1989]), as well as interior-point methods (Nesterov and Nemirovskii [1989; 1994] and Alizadeh [1995]). To terminate in polynomial time, these algorithms implicitly assume some requirement on the feasible space or on the size of the optimum solution; for details, see Grötschel et al. [1988] and Section 3.3 of Alizadeh [1995].
Since the work of Nesterov and Nemirovskii, and Alizadeh, there has been much development in the design and analysis of interior-point methods for semidefinite programming; for several references available at the time of writing of this paper, see the survey paper by Vandenberghe and Boyd [1996]. Semidefinite programming has many interesting applications in a variety of areas including control theory, nonlinear programming, geometry, and combinatorial optimization.¹ In combinatorial optimization, the importance of semidefinite programming is that it leads to tighter relaxations than the classical linear programming relaxations for many graph and combinatorial problems.
A beautiful application of semidefinite programming is the work of Lovász [1979] on the Shannon capacity of a graph. In conjunction with the polynomial-time solvability of semidefinite programs, this leads to the only known polynomial-time algorithm for finding the largest stable set in a perfect graph (Grötschel et al. [1981]). More recently, there has been increased interest in semidefinite programming from a combinatorial point-of-view.² This started with the work of Lovász and Schrijver [1989; 1990], who developed a machinery to define tighter and tighter relaxations of any integer program based on quadratic and semidefinite programming. Their papers demonstrated the wide applicability and the power of semidefinite programming for combinatorial optimization problems. Our use of semidefinite programming relaxations was inspired by these papers, and by the paper of Alizadeh [1995].
For MAX CUT, the semidefinite programming relaxation we consider is equivalent to an eigenvalue minimization problem proposed by Delorme and Poljak [1993a; 1993b]. An eigenvalue minimization problem consists of minimizing a decreasing sum of the eigenvalues subject to equality constraints on the matrix; that is, minimizing Σ_i m_i λ_i, where λ_1 ≥ λ_2 ≥ ... ≥ λ_n and m_1 ≥ m_2 ≥ ... ≥ m_n ≥ 0. The equivalence of the semidefinite program we consider and the eigenvalue bound of Delorme and Poljak was established by Poljak and Rendl [1995a]. Building on work by Overton and Womersley [1992; 1993], Alizadeh [1995] has shown that eigenvalue minimization problems can in general be formulated as semidefinite programs. This is potentially quite useful, since there is an abundant literature on eigenvalue bounds for combinatorial optimization problems; see the survey paper by Mohar and Poljak [1993].
As shown by Poljak and Rendl [1994; 1995b] and Delorme and Poljak [1993c], the eigenvalue bound provides a very good bound on the maximum cut in practice. Delorme and Poljak [1993a; 1993b] study the worst-case ratio between the maximum cut and their eigenvalue bound. The worst instance they are aware of is the 5-cycle, for which the ratio is

    32/(25 + 5√5) = 0.88445...,

but they were unable to prove a bound better than 0.5 in the worst case. Our result implies a worst-case bound of α, very close to the bound for the 5-cycle.

¹ See, for example, Nesterov and Nemirovskii [1994], Boyd et al. [1994], Vandenberghe and Boyd [1996], and Alizadeh [1995], and the references therein.
² See, for example, Lovász and Schrijver [1989; 1990], Alizadeh [1995], Poljak and Rendl [1995a], Feige and Lovász [1992], and Lovász [1992].
The above discussion indicates that straightforward modifications of our technique will not lead to significant improvements in the constant or in the worst-case behavior of our MAX CUT result. Furthermore, MAX CUT, MAX 2SAT, and MAX DICUT are MAX SNP-hard [Papadimitriou and Yannakakis 1991], so it is known that there exists a constant c < 1 such that a c-approximation algorithm for any of these problems would imply that P = NP [Arora et al. 1992]. Bellare et al. [unpublished manuscript] have shown that c is as small as 83/84 for MAX CUT and 95/96 for MAX 2SAT. Since bidirected instances of MAX DICUT are equivalent to instances of MAX CUT, the bound for MAX CUT also applies to MAX DICUT.
Since the appearance of an abstract of this paper, Feige and Goemans [1995] have extended our technique to yield a .931-approximation algorithm for MAX 2SAT and a .859-approximation algorithm for MAX DICUT. By using semidefinite programming and similar rounding ideas, Karger et al. [1994] have been able to show how to color a k-colorable graph with O(n^{1−3/(k+1)}) colors in polynomial time. Frieze and Jerrum [1996] have used the technique to devise approximation algorithms for the maximum k-way cut problem that improve on the best previously known 1 − 1/k performance guarantee. Chor and Sudan [1995] apply ideas from this paper to the "betweenness" problem. Thus, it seems likely that the techniques in this paper will continue to prove useful in designing approximation algorithms.
We expect that in practice the performance of our algorithms will be much better than the worst-case bounds. We have performed some preliminary computational experiments with the MAX CUT algorithm which show that, on a number of different types of random graphs, the cut returned by the algorithm is usually within .96 of the optimal solution. A preliminary version of this paper [Goemans and Williamson 1994a] presented a method to obtain deterministic versions of our approximation algorithms with the same performance guarantees. However, the method given had a subtle error, as was pointed out to us by several researchers. Mahajan and Ramesh [1995] document the error and propose their own derandomization scheme for our algorithms.
The paper is structured as follows: We present the randomized algorithm for MAX CUT in Section 2, and its analysis in Section 3. We elaborate on our semidefinite programming bound and its relationship with other work on the MAX CUT problem in Section 4. The quality of the relaxation is investigated in Section 5, and computational results are presented in Section 6. In Section 7, we show how to extend the algorithm to an algorithm for MAX 2SAT, MAX SAT, MAX DICUT, and other problems. We conclude with a few remarks and open problems in Section 8.

2. The Randomized Approximation Algorithm for MAX CUT
Given a graph with vertex set V = {1, ..., n} and nonnegative weights w_ij = w_ji for each pair of vertices i and j, the weight of the maximum cut w(S, S̄) is given by the following integer quadratic program:

(Q)    Maximize (1/2) Σ_{i<j} w_ij (1 − y_i y_j)
       subject to: y_i ∈ {−1, 1}   ∀i ∈ V.
To see this, note that the set S = {i | y_i = 1} corresponds to a cut of weight w(S, S̄) = (1/2) Σ_{i<j} w_ij (1 − y_i y_j).

Since solving this integer quadratic program is NP-complete, we consider relaxations of (Q). Relaxations of (Q) are obtained by relaxing some of the constraints of (Q) and extending the objective function to the larger space; thus, all feasible solutions of (Q) are feasible for the relaxation, and the optimal value of the relaxation is an upper bound on the optimal value of (Q).
We can interpret (Q) as restricting y_i to be a 1-dimensional vector of unit norm. Some very interesting relaxations can be defined by allowing y_i to be a multidimensional vector v_i of unit Euclidean norm. Since the linear space spanned by the vectors v_i has dimension at most n, we can assume that these vectors belong to R^n (or R^m for some m ≤ n), or more precisely to the n-dimensional unit sphere S_n (or S_m for m ≤ n). To ensure that the resulting optimization problem is indeed a relaxation, we need to define the objective function in such a way that it reduces to (1/2) Σ_{i<j} w_ij (1 − y_i y_j) in the case of vectors lying in a 1-dimensional space. There are several natural ways of guaranteeing this property. For example, one can replace (1 − y_i y_j) by (1 − v_i · v_j), where v_i · v_j represents the inner product (or dot product) of v_i and v_j. The resulting relaxation is denoted by (P):

(P)    Maximize (1/2) Σ_{i<j} w_ij (1 − v_i · v_j)
       subject to: v_i ∈ S_n   ∀i ∈ V.

We will show in Section 4 that we can solve this relaxation using semidefinite programming. We can now present our simple randomized algorithm for the MAX CUT problem.
(1) Solve (P), obtaining an optimal set of vectors v_i.
(2) Let r be a vector uniformly distributed on the unit sphere S_n.
(3) Set S = {i | v_i · r ≥ 0}.
In other words, we choose a random hyperplane through the origin (with r as its normal) and partition the vertices into those vectors that lie "above" the plane (i.e., have a nonnegative inner product with r) and those that lie "below" it (i.e., have a negative inner product with r). The motivation for this randomized step comes from the fact that (P) is independent of the coordinate system: applying any orthonormal transformation to a set of vectors results in a solution with the same objective value.
Let W denote the value of the cut produced in this way, and E[W] its expectation. We will show in Theorem 3.1 in Section 3 that, given any set of vectors v_i ∈ S_n, the expected weight of the cut defined by a random hyperplane is

    E[W] = Σ_{i<j} w_ij · arccos(v_i · v_j)/π.

We will also show in Theorem 3.3 that

    E[W] ≥ α · (1/2) Σ_{i<j} w_ij (1 − v_i · v_j),

where

    α = min_{0 < θ ≤ π} (2/π) · θ/(1 − cos θ) > .878.
If Z*_MC is the optimal value of the maximum cut and Z*_P is the optimal value of the relaxation (P), then, since the expected weight of the cut generated by the algorithm is E[W] ≥ αZ*_P ≥ αZ*_MC, the algorithm has a performance guarantee of α for the MAX CUT problem.

We must argue that the algorithm can be implemented in polynomial time. We assume that the weights are integral. In Section 4, we show that (P) is equivalent to a semidefinite program. Then we will show that, by using an algorithm for semidefinite programming, we can obtain, for any ε > 0, a set of vectors v_i of value greater than Z*_P − ε in time polynomial in the input size and log 1/ε. On these approximately optimal vectors, the randomized algorithm will produce a cut of expected value greater than or equal to α(Z*_P − ε) ≥ (α − ε)Z*_MC.

A vector r uniformly distributed on the unit sphere S_n can be generated by drawing n values x_1, x_2, ..., x_n independently from the standard normal distribution and normalizing the vector obtained (see Knuth [1981, p. 130]); for our purposes, there is no need to normalize the resulting vector x. The standard normal distribution can be simulated using the uniform distribution between 0 and 1 (see Knuth [1981, p. 117]). The algorithm is thus a randomized (α − ε)-approximation algorithm for MAX CUT.
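As an illustration of steps (2) and (3), the following minimal Python sketch (added here, not part of the original paper) rounds a given set of vectors with a random hyperplane; the rows of V are assumed to come from a separate solver for (P), and r is drawn from independent standard normals exactly as described above, without normalization.

```python
import numpy as np

def random_hyperplane_cut(V, W, rng=None):
    """Round the vectors (rows of V) to a cut via a random hyperplane.

    V : (n, m) array whose i-th row is the vector v_i from relaxation (P)
    W : (n, n) symmetric nonnegative weight matrix with zero diagonal
    Returns the +/-1 labels y and the weight of the induced cut.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, m = V.shape
    r = rng.standard_normal(m)           # direction of r is uniform on the sphere
    y = np.where(V @ r >= 0, 1, -1)      # side of the hyperplane for each vertex
    # w(S, S-bar) = (1/2) sum_{i<j} w_ij (1 - y_i y_j); summing over the full
    # matrix counts each pair twice, hence the factor 1/4.
    cut_weight = 0.25 * np.sum(W * (1 - np.outer(y, y)))
    return y, cut_weight
```

In practice one would repeat the rounding for several independent choices of r and keep the best cut, as is done in the experiments of Section 6.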
3. Analysis of the Algorithm

In this section, we analyze the performance of the algorithm. We first analyze the general case and then consider the case in which the maximum cut is large, and the generalization to negative edge weights. We conclude this section with a new nonlinear formulation of the MAX CUT problem.

Let {v_1, ..., v_n} be any vectors belonging to S_n, and let E[W] be the expected value of the cut w(S, S̄) produced by the randomized algorithm given in the previous section. We start by characterizing the expected value of the cut.

THEOREM 3.1.

    E[W] = (1/π) Σ_{i<j} w_ij arccos(v_i · v_j).

Given a vector r drawn uniformly from the unit sphere S_n, we know by the linearity of expectation that

    E[W] = Σ_{i<j} w_ij · Pr[sgn(v_i · r) ≠ sgn(v_j · r)],

where sgn(x) = 1 if x ≥ 0, and −1 otherwise. Theorem 3.1 is thus implied by the following lemma.

LEMMA 3.2.

    Pr[sgn(v_i · r) ≠ sgn(v_j · r)] = (1/π) arccos(v_i · v_j).
PROOF. A restatement of the lemma is that the probability that the random hyperplane separates the two vectors is directly proportional to the angle between the two vectors; that is, it is proportional to the angle θ = arccos(v_i · v_j). By symmetry, Pr[sgn(v_i · r) ≠ sgn(v_j · r)] = 2 Pr[v_i · r ≥ 0, v_j · r < 0]. The set {r : v_i · r ≥ 0, v_j · r < 0} corresponds to the intersection of two half-spaces whose dihedral angle is precisely θ; its intersection with the sphere is a spherical digon of angle θ and, by symmetry of the sphere, thus has measure equal to θ/2π times the measure of the full sphere. Hence, Pr[v_i · r ≥ 0, v_j · r < 0] = θ/2π, and the lemma follows. □
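Lemma 3.2 is easy to confirm numerically; the short sketch below (an added illustration, not from the paper) compares a Monte Carlo estimate with arccos(v_i · v_j)/π.

```python
import numpy as np

def separation_probability(vi, vj, trials=200_000, seed=0):
    """Estimate Pr[sgn(vi . r) != sgn(vj . r)] for r uniform on the sphere."""
    rng = np.random.default_rng(seed)
    r = rng.standard_normal((trials, len(vi)))     # direction uniform on the sphere
    separated = (r @ vi >= 0) != (r @ vj >= 0)
    return separated.mean()

theta = 2.0                                        # two unit vectors at a known angle
vi = np.array([1.0, 0.0])
vj = np.array([np.cos(theta), np.sin(theta)])
print(separation_probability(vi, vj))              # Monte Carlo estimate
print(np.arccos(vi @ vj) / np.pi)                  # Lemma 3.2 predicts theta / pi
```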
Our main theorem is the following:

THEOREM 3.3.

    E[W] ≥ α · (1/2) Σ_{i<j} w_ij (1 − v_i · v_j),

where

    α = min_{0 < θ ≤ π} (2/π) · θ/(1 − cos θ).

As we have argued above, applying this theorem to vectors v_i of value greater than Z*_P − ε implies that the algorithm has a performance guarantee of α − ε.

PROOF. By Theorem 3.1 and the nonnegativity of w_ij, the result follows from the following lemma applied to y = v_i · v_j. □

LEMMA 3.4. For −1 ≤ y ≤ 1,

    (1/π) arccos(y) ≥ α · (1/2)(1 − y).

PROOF. The expression for α follows straightforwardly by using the change of variables cos θ = y. See Figure 1, part (i). □

The quality of the performance guarantee is established by the following lemma.
LEMMA 3.5. α > .87856.

PROOF. Using simple calculus, one can see that α achieves its value for θ = 2.331122..., the nonzero root of cos θ + θ sin θ = 1. To formally prove the bound of 0.87856, we first observe that (2/π) · θ/(1 − cos θ) ≥ 1 > 0.87856 for 0 < θ ≤ π/2. The concavity of f(θ) = 1 − cos θ in the range π/2 ≤ θ ≤ π implies that, for any θ_0, we have f(θ) ≤ f(θ_0) + (θ − θ_0)f′(θ_0), or 1 − cos θ ≤ 1 − cos θ_0 + (θ − θ_0) sin θ_0 = θ sin θ_0 + (1 − cos θ_0 − θ_0 sin θ_0). Choosing θ_0 = 2.331122, for which 1 − cos θ_0 − θ_0 sin θ_0 < 0, we derive that 1 − cos θ < θ sin θ_0, implying that

    α ≥ 2/(π sin θ_0) > 0.87856.   □
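The constant of Lemma 3.5 can be reproduced numerically; the following sketch (added for illustration, not from the paper) minimizes (2/π) · θ/(1 − cos θ) on a dense grid.

```python
import numpy as np

theta = np.linspace(1e-6, np.pi, 2_000_001)          # dense grid on (0, pi]
ratio = (2.0 / np.pi) * theta / (1.0 - np.cos(theta))
k = np.argmin(ratio)
print(f"alpha  ~= {ratio[k]:.6f}")                    # about 0.878567
print(f"theta* ~= {theta[k]:.6f}")                    # about 2.331122
```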
We have analyzed the expected value of the cut returned by the algorithm. For randomized algorithms, one can often also give high probability results. In this case, however, even computing the variance of the cut value analytically is difficult. Nevertheless, one could use the fact that W is bounded (by Σ_{i<j} w_ij) to give lower bounds on the probability that W is less than (1 − ε)E[W].

FIG. 1. (i) Plot of z = arccos(y)/π as a function of t = (1/2)(1 − y). The ratio z/t is thus the slope of the line between (0, 0) and (t, z). The minimum slope is precisely α = 0.742/0.844 and corresponds to the dashed line. (ii) The plot also represents h(t) = arccos(1 − 2t)/π as a function of t. As t approaches 1, h(t)/t also approaches 1. (iii) The dashed line corresponds to ĥ between 0 and γ = .844.
3.1 ANALYSIS WHEN THE MAXIMUM CUT IS LARGE. We can refine the analysis and prove a better performance guarantee whenever the value of the relaxation (P) constitutes a large fraction of the total weight. Define W_tot = Σ_{i<j} w_ij and h(t) = arccos(1 − 2t)/π. Let γ be the value of t attaining the minimum of h(t)/t in the interval (0, 1]. The value of γ is approximately .84458.

THEOREM 3.1.1. Let

    A = (1/W_tot) Σ_{i<j} w_ij · (1 − v_i · v_j)/2.

If A ≥ γ, then E[W] ≥ W_tot · h(A).

The theorem implies a performance guarantee of h(A)/A − ε when A ≥ γ. As A varies between γ and 1, one easily verifies that h(A)/A varies between α and 1 (see Figure 1, part (ii)).
PROOF. Letting λ_e = w_ij/W_tot and x_e = (1 − v_i · v_j)/2 for e = (i, j), we can rewrite A as A = Σ_e λ_e x_e. The expected weight of the cut produced by the algorithm is equal to

    E[W] = Σ_{i<j} w_ij · arccos(v_i · v_j)/π = W_tot Σ_e λ_e · arccos(1 − 2x_e)/π = W_tot Σ_e λ_e h(x_e).

To bound E[W], we evaluate

    Min Σ_e λ_e h(x_e)
    subject to: Σ_e λ_e x_e = A
                0 ≤ x_e ≤ 1.

Consider the relaxation obtained by replacing h(t) by the largest (pointwise) convex function ĥ(t) smaller than or equal to h(t). It is easy to see that ĥ(t) is linear with slope α between 0 and γ, and then equal to h(t) for any t greater than γ. See Figure 1, part (iii). But for A ≥ γ, we have

    Σ_e λ_e h(x_e) ≥ Σ_e λ_e ĥ(x_e) ≥ ĥ(Σ_e λ_e x_e) = ĥ(A) = h(A),

where we have used the fact that Σ_e λ_e = 1, λ_e ≥ 0 and that ĥ is a convex function. This shows that

    E[W] ≥ W_tot · h(A) = h(A) Σ_{i<j} w_ij,

proving the result. □
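The threshold γ and the behavior of h(t)/t can likewise be checked numerically (an added sketch, not from the paper):

```python
import numpy as np

h = lambda t: np.arccos(1.0 - 2.0 * t) / np.pi

t = np.linspace(1e-9, 1.0, 1_000_001)
k = np.argmin(h(t) / t)
gamma = t[k]
print(f"gamma          ~= {gamma:.5f}")              # about 0.84458
print(f"h(gamma)/gamma ~= {h(gamma) / gamma:.5f}")   # about alpha = 0.87856
print(f"h(1)/1          = {h(1.0):.5f}")             # 1.0 at the other endpoint
```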
3.2 ANALYSIS FOR NEGATIVE EDGE WEIGHTS. So far, we have assumed that the weights are nonnegative. In several practical problems, some edge weights are negative [Barahona et al. 1988]. In this case, the definition of the performance guarantee has to be modified, since the optimum value could be positive or negative. We now give a corresponding generalization of Theorem 3.3 to arbitrary weights.

THEOREM 3.2.1. Let W_− = Σ_{i<j} w_ij^−, where x^− = min(0, x). Then

    E[W] − W_− ≥ α · ( (1/2) Σ_{i<j} w_ij (1 − v_i · v_j) − W_− ).

PROOF. The quantity E[W] − W_− can be written as

    Σ_{i<j: w_ij>0} w_ij · arccos(v_i · v_j)/π + Σ_{i<j: w_ij<0} |w_ij| · (1 − arccos(v_i · v_j)/π).

Similarly, (1/2) Σ_{i<j} w_ij (1 − v_i · v_j) − W_− is equal to

    Σ_{i<j: w_ij>0} w_ij · (1 − v_i · v_j)/2 + Σ_{i<j: w_ij<0} |w_ij| · (1 + v_i · v_j)/2.

The result therefore follows from Lemma 3.4 and the following Lemma 3.2.2. □

LEMMA 3.2.2. For −1 ≤ z ≤ 1,

    1 − (1/π) arccos(z) ≥ α · (1/2)(1 + z).

PROOF. The lemma follows from Lemma 3.4 by using the change of variables z = −y and noting that π − arccos(z) = arccos(−z). □
3.3 A NEW FORMULATION OF MAX CUT. An interesting consequence of our analysis is a new nonlinear formulation of the maximum cut problem. Consider the following nonlinear program:

(R)    Maximize Σ_{i<j} w_ij · arccos(v_i · v_j)/π
       subject to: v_i ∈ S_n   ∀i ∈ V.

Let Z*_R denote the optimal value of this program.

THEOREM 3.3.1. Z*_R = Z*_MC.

PROOF. We first show that Z*_R ≥ Z*_MC. This follows since (R) is a relaxation of (Q): the objective function of (R) reduces to (1/2) Σ_{i<j} w_ij (1 − v_i v_j) in the case of vectors v_i lying in a 1-dimensional space. To see that Z*_R ≤ Z*_MC, let the vectors v_i denote the optimal solution to (R). From Theorem 3.1, we see that the randomized algorithm gives a cut whose expected value is exactly Z*_R, implying that there must exist a cut of value at least Z*_R. □
4. Relaxations and Duality

In this section, we address the question of solving the relaxation (P). We do so by showing that (P) is equivalent to a semidefinite program. We then explore the dual of this semidefinite program and relate it to the eigenvalue minimization bound of Delorme and Poljak [1993a; 1993b].
4.1 SOLVING THE RELAXATION. We begin by defining some terms and notation. All matrices under consideration are defined over the reals. An n × n matrix A is said to be positive semidefinite if for every vector x ∈ R^n, x^T A x ≥ 0. The following statements are equivalent for a symmetric matrix A (see, e.g., Lancaster and Tismenetsky [1985]): (i) A is positive semidefinite, (ii) all eigenvalues of A are nonnegative, and (iii) there exists a matrix B such that A = B^T B. In (iii), B can either be a (possibly singular) n × n matrix, or an m × n matrix for some m ≤ n. Given a positive semidefinite matrix A, an m × n matrix B of full row-rank satisfying (iii) can be obtained in O(n^3) time using an incomplete Cholesky decomposition [Golub and Van Loan 1983, p. 90, P5.2-3].
Using the decomposition Y = B^T B, one can see that a positive semidefinite matrix Y with y_ii = 1 corresponds precisely to a set of unit vectors v_1, ..., v_n ∈ S_m: simply let the vector v_i correspond to the ith column of B. Then y_ij = v_i · v_j. The matrix Y is known as the Gram matrix of {v_1, ..., v_n} [Lancaster and Tismenetsky 1985, p. 110]. Using this equivalence, we can reformulate (P) as a semidefinite program:

(SD)    Z*_P = Max (1/2) Σ_{i<j} w_ij (1 − y_ij)
        subject to: y_ii = 1   ∀i ∈ V
                    Y symmetric positive semidefinite,

where Y = (y_ij).
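To make the reformulation concrete, here is a minimal sketch (added for illustration, not part of the paper) that solves (SD) with the third-party cvxpy modeling package; cvxpy and an SDP-capable solver such as SCS are assumptions of this sketch, not tools used in the paper, whose experiments (Section 6) rely on the interior-point code of Rendl et al.

```python
import cvxpy as cp
import numpy as np

def solve_maxcut_sdp(W):
    """Solve relaxation (SD) for a symmetric nonnegative weight matrix W."""
    n = W.shape[0]
    Y = cp.Variable((n, n), PSD=True)                      # Y symmetric positive semidefinite
    objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - Y)))
    constraints = [cp.diag(Y) == 1]                        # y_ii = 1 for all i
    cp.Problem(objective, constraints).solve()
    return Y.value                                         # near-optimal Gram matrix

# Example: the 5-cycle, whose optimal value Z*_P is (25 + 5*sqrt(5))/8 ~ 4.52.
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
Y = solve_maxcut_sdp(W)
print(0.25 * np.sum(W * (1 - Y)))
```

The factor 1/4 appears because summing over the full symmetric matrix counts every pair twice.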
The feasible solutions to (SD) are often referred to as correlation matrices [Grone et al. 1990]. Strictly speaking, we cannot solve (SD) to optimality in polynomial time; the optimal value Z*_P might in fact be irrational. However, using an algorithm for semidefinite programming, one can obtain, for any ε > 0, a solution of value greater than Z*_P − ε in time polynomial in the input size and log 1/ε. For example, Alizadeh's adaptation of Ye's interior-point algorithm to semidefinite programming [Alizadeh 1995] performs O(√n (log W_tot + log 1/ε)) iterations. By exploiting the simple structure of the problem (SD), as is indicated in Rendl et al. [1993] (see also Vandenberghe and Boyd [1996, Sect. 7.4]), each iteration can be implemented in O(n^3) time.
Once an almost optimal solution to (SD) is found, one can use an incomplete Cholesky decomposition to obtain vectors v_1, ..., v_n ∈ S_m for some m ≤ n such that v_i · v_j = y_ij. Among all optimum solutions to (SD), one can show the existence of a solution of low rank. Grone et al. [1990] show that any extreme solution of (SD) (i.e., one which cannot be expressed as the strict convex combination of other feasible solutions) has rank at most l, where

    l(l + 1)/2 ≤ n.

For related results, see Li and Tam [1994], Christensen and Vesterstrøm [1979], Loewy [1980], and Laurent and Poljak [1996]. This means that there exists a primal optimum solution Y* to (SD) of rank less than √(2n), and that the optimum vectors v_i of (P) can be embedded in R^m with m < √(2n). This result also follows from a more general statement about semidefinite programs due to Barvinok [1995] and implicit in Pataki [1994]: any extreme solution of a semidefinite program with k linear equalities has rank at most l, where l(l + 1)/2 ≤ k.
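Once a near-optimal Y is available, the vectors v_i can be recovered from any factorization Y = B^T B. The sketch below (an added illustration, not the paper's implementation) uses an eigendecomposition in place of the incomplete Cholesky factorization mentioned above, which is convenient when a solver returns a Y that is only approximately positive semidefinite.

```python
import numpy as np

def vectors_from_gram(Y, tol=1e-9):
    """Factor an (approximately) PSD Gram matrix Y and return unit-norm rows.

    Small negative eigenvalues coming from the SDP solver are clipped to zero,
    and each recovered vector is renormalized onto the unit sphere.
    """
    vals, vecs = np.linalg.eigh(Y)
    vals = np.clip(vals, 0.0, None)
    V = vecs * np.sqrt(vals)                       # rows satisfy V[i] . V[j] ~= Y[i, j]
    norms = np.linalg.norm(V, axis=1)
    return V / np.maximum(norms[:, None], tol)
```

Feeding the rows returned here into the rounding routine sketched after Section 2 gives an end-to-end illustration of the algorithm.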
4.2 THE SEMIDEFINITE DUAL. As mentioned in the introduction, there is an elegant duality theory for semidefinite programming. Before discussing the dual (D) of (SD), we rewrite the objective function of (SD) as (1/4) Σ_{i≠j} w_ij (1 − y_ij), or even as (1/2) W_tot − (1/4) Σ_i Σ_j w_ij y_ij. In matrix form, the objective function can be conveniently written as (1/2) W_tot − (1/4) Tr(WY), where W = (w_ij) is symmetric and Tr denotes the trace. The dual of (SD) has a very simple description:

(D)    Z*_D = Min (1/2) W_tot + (1/4) Σ_i γ_i
       subject to: W + diag(γ) positive semidefinite,

where diag(γ) denotes the diagonal matrix whose ith diagonal entry is γ_i. The dual has a simple interpretation. Since W + diag(γ) is positive semidefinite, it can be expressed as C^T C; in other words, the weight w_ij can be viewed as c_i · c_j for some vectors c_i, and γ_i = c_i · c_i = ||c_i||². The weight of any cut is thus w(S, S̄) = (Σ_{i∈S} c_i) · (Σ_{j∈S̄} c_j), which, since x · y ≤ (1/4)||x + y||², is never greater than (1/4)||Σ_i c_i||² = (1/2) W_tot + (1/4) Σ_i γ_i, the dual objective value.

Showing weak duality between (P) and (D), namely that Z*_P ≤ Z*_D, is easy. Consider any primal feasible solution Y and any dual vector γ. Since both Y and W + diag(γ) are positive semidefinite, we derive that Tr((diag(γ) + W)Y) ≥ 0 (see Lancaster and Tismenetsky [1985, p. 218, ex. 14]). But Tr((diag(γ) + W)Y) = Tr(diag(γ)Y) + Tr(WY) = Σ_i γ_i + Tr(WY), implying that the difference of the dual objective function value and the primal objective function value is nonnegative.

For semidefinite programs in their full generality, there is no guarantee that the primal optimum value is equal to the dual optimum value. Also, there is no guarantee that the maximum (respectively, minimum) is in fact attained; the optimum objective function value is a supremum (respectively, infimum), and there are pathological cases. Our programs (SD) and (D), however, behave nicely; both programs attain their optimum values, and these values are equal (i.e., Z*_P = Z*_D). This can be shown in a variety of ways (see Poljak and Rendl [1995a], Alizadeh [1995], and Barvinok [1995]).
Given that strong duality holds in our case, the argument showing weak duality implies that, for the optimum primal solution Y* and the optimum dual solution γ*, we have Tr((diag(γ*) + W)Y*) = 0. Since both diag(γ*) + W and Y* are positive semidefinite, we derive that (diag(γ*) + W)Y* = 0 (see Lancaster and Tismenetsky [1985, p. 218, ex. 14]). This is the strong form of complementary slackness for semidefinite programs (see Alizadeh [1995]); the component-wise product expressed by the trace is replaced by matrix multiplication. This implies, for example, that Y* and diag(γ*) + W share a system of eigenvectors and that rank(Y*) + rank(diag(γ*) + W) ≤ n.
4.3 THE EIGENVALUE BOUND OF DELORME AND POLJAK. The relaxation (D) (and thus (P)) is also equivalent to an eigenvalue upper bound on the value of the maximum cut Z*_MC introduced by Delorme and Poljak [1993a; 1993b]. To describe the bound, we first introduce some notation. The Laplacian matrix L = (l_ij) is defined by l_ij = −w_ij for i ≠ j and l_ii = Σ_k w_ik. The maximum eigenvalue of a matrix A is denoted by λ_max(A).

LEMMA 4.3.1 [DELORME AND POLJAK 1993b]. Let u ∈ R^n satisfy u_1 + ... + u_n = 0. Then

    (n/4) λ_max(L + diag(u))

is an upper bound on Z*_MC.

PROOF. The proof is simple. Let y be an optimal solution to the integer quadratic program (Q). Notice that y^T L y = 4 Z*_MC. By using the Rayleigh principle (λ_max(M) = max_{||x||=1} x^T M x), we obtain

    λ_max(L + diag(u)) ≥ y^T (L + diag(u)) y / (y^T y) = (1/n) [ y^T L y + Σ_{i=1}^n y_i² u_i ] = 4 Z*_MC / n,

proving the lemma. □
A vector u satisfying Σ_i u_i = 0 is called a correcting vector. Let g(u) = (n/4) λ_max(L + diag(u)). The bound proposed by Delorme and Poljak [1993b] is to optimize g(u) over all correcting vectors:

(EIG)    Z*_EIG = Inf g(u)
         subject to: Σ_{i=1}^n u_i = 0.

As mentioned in the introduction, eigenvalue minimization problems can be formulated as semidefinite programs. For MAX CUT, the equivalence between (EIG) and (SD) was established by Poljak and Rendl [1995a]. For completeness, we derive the equivalence between (EIG) and the dual (D). For the optimum dual vector γ*, the smallest eigenvalue of diag(γ*) + W must be 0, since otherwise we could decrease all γ_i* by some ε > 0 and thus reduce the dual objective function. This can be rewritten as λ_max(−W − diag(γ*)) = 0. Define Λ as (Σ_i γ_i* + 2W_tot)/n. Moreover, defining u_i = Λ − γ_i* − Σ_j w_ij, one easily verifies that −W − diag(γ*) = L + diag(u) − ΛI, implying that Σ_i u_i = 0 and that λ_max(L + diag(u)) = Λ. This shows that Z*_EIG ≤ nΛ/4 = Z*_D. The converse inequality follows by reversing the argument.
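For a fixed correcting vector u, the bound g(u) is a single extreme-eigenvalue computation. The following sketch (added for illustration; it does not perform the minimization over u required by (EIG)) evaluates it with NumPy.

```python
import numpy as np

def eigenvalue_bound(W, u):
    """Delorme-Poljak bound (n/4) * lambda_max(L + diag(u)) for a correcting vector u."""
    n = W.shape[0]
    assert abs(u.sum()) < 1e-9, "u must be a correcting vector (entries sum to zero)"
    L = np.diag(W.sum(axis=1)) - W                  # Laplacian: l_ii = sum_k w_ik, l_ij = -w_ij
    lam_max = np.linalg.eigvalsh(L + np.diag(u)).max()
    return 0.25 * n * lam_max

# 5-cycle with u = 0 (optimal here by symmetry): the bound is (25 + 5*sqrt(5))/8 ~ 4.52,
# while Z*_MC = 4.
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
print(eigenvalue_bound(W, np.zeros(5)))
```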
5. Quality of the Relaxation

In this section, we consider the tightness of our analysis and the quality of the semidefinite bound Z*_P. Observe that Theorem 3.3 implies the following corollary.

COROLLARY 5.1. For any instance of MAX CUT,

    Z*_MC / Z*_P ≥ α.

For the 5-cycle, Delorme and Poljak [1993b] have shown that

    Z*_MC / Z*_EIG = 32/(25 + 5√5) = 0.88445...,

implying that our worst-case analysis is almost tight. One can obtain this bound by observing that, for the 5-cycle 1-2-3-4-5-1, the optimal vectors of the relaxation (P) lie in a 2-dimensional subspace and can be expressed as

    v_i = (cos(4iπ/5), sin(4iπ/5))   for i = 1, ..., 5,

corresponding to

    Z*_P = (5/2)(1 − cos(4π/5)) = (25 + 5√5)/8.

Since Z*_MC = 4 for the 5-cycle, this yields Z*_MC/Z*_P = 32/(25 + 5√5).

Delorme and Poljak have shown that the bound holds for special subclasses of graphs, such as planar graphs or line graphs. However, they were unable to prove that a bound on Z*_MC/Z*_P better than 0.5 holds in the absolute worst-case. Although the worst-case value of Z*_MC/Z*_P is not completely settled, there exist instances for which E[W]/Z*_P is very close to α, showing that the analysis of our algorithm is practically tight. Leslie Hall (personal communication) has observed that E[W]/Z*_P = .8787 for the Petersen graph [Bondy and Murty 1976, p. 55]. In Figure 2, we give an unweighted instance for which the ratio is less than .8796 and in which the vectors have a nice three-dimensional representation.
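A quick numerical check of the 5-cycle computation above (an added sketch, not part of the paper):

```python
import numpy as np

# Optimal (P)-vectors for the 5-cycle lie in the plane at angles 4*pi*i/5.
angles = 4 * np.pi * np.arange(1, 6) / 5
V = np.column_stack([np.cos(angles), np.sin(angles)])

z_p = 0.5 * sum(1 - V[i] @ V[(i + 1) % 5] for i in range(5))
print(z_p)          # (25 + 5*sqrt(5))/8 ~ 4.5225
print(4 / z_p)      # Z*_MC / Z*_P = 32/(25 + 5*sqrt(5)) ~ 0.88445
```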
We have also constructed a weighted instance on 103 vertices for which the ratio is less than .8786. These two instances are based on strongly self-dual polytopes due to Lovász [1983]. A polytope P in R^n is said to be strongly self-dual [Lovász 1983] if (i) P is inscribed in the unit sphere, (ii) P is circumscribed around the sphere with the origin as center and with radius r for some 0 < r ≤ 1, and (iii) there is a bijection σ between vertices and facets of P such that, for every vertex v of P, the facet σ(v) is orthogonal to the vector v. For example, in R², the strongly self-dual polytopes are precisely the regular odd polygons. One can associate a graph G = (V, E) to a self-dual polytope P: the vertex set V corresponds to the vertices of P, and there is an edge (v, w) if w belongs to the facet σ(v) (or, equivalently, v belongs to σ(w)). For the regular odd polygons, these graphs are simply the odd cycles. Because of conditions (ii) and (iii), the inner product v · w for any pair of adjacent vertices is equal to −r. As a result, a strongly self-dual polytope leads to a feasible solution of (P) of value ((1 + r)/2) W_tot. Lovász [1983] gives a recursive construction of a class of strongly self-dual polytopes. One can show that, by choosing the dimension n large enough, his construction leads to strongly self-dual polytopes for which r is arbitrarily close to the critical value giving a bound of α. However, it is unclear whether, in general, for such polytopes, nonnegative weights can be selected such that the vectors given by the polytope constitute an optimum solution to (P). Nevertheless, we conjecture that such instances lead to a proof that E[W]/Z*_P can be arbitrarily close to α. Even if this could be shown, this would not imply anything for the ratio Z*_MC/Z*_P.

FIG. 2. Graph on 11 vertices for which the ratio E[W]/Z*_P is less than .8796 in the unweighted case. The convex hull of the optimum vectors is depicted on the right; the circle represents the center of the sphere.

6. Computational Results
In practice, we expect that the algorithm will perform much better than the worst-case bound of α. Poljak and Rendl [1994; 1995b] (see also Delorme and Poljak [1993c]) report computational results showing that the bound Z*_EIG is typically less than 2-5% and, in the instances they tried, never worse than 8% away from Z*_MC. We also performed our own computational experiments, in which the cuts computed by the algorithm were usually within 4% of the semidefinite bound Z*_P, and never more than 9% away from the bound. To implement the algorithm, we used code supplied by Vanderbei [Rendl et al. 1993] for a special class of semidefinite programs. We used Vanderbei's code to solve the semidefinite program, then we generated 50 random vectors r. We output the best of the 50 cuts induced. We applied our code to a small subset of the instances considered by Poljak and Rendl [1995b]. In particular, we considered several different types of random graphs, as well as complete geometric graphs defined by Traveling Salesman Problem (TSP) instances from the TSPLIB (see Reinelt [1991]).

For four different types of random graphs, we ran 50 instances on graphs of 50 vertices, 20 on graphs of size 100, and 5 on graphs of size 200. In the Type A random graph, each edge (i, j) is included with probability 1/2. In the Type B random graph, each edge is given a weight drawn uniformly from the interval [−50, 50]; the ratio of Theorem 3.2.1 is used in reporting nearness to the semidefinite bound. In the Type C random graph of size n > 10, an edge (i, j) is included with probability 10/n, leading to constant expected degree. Finally, in the Type D random graphs, an edge (i, j) is included with probability .1 if i ≤ n/2 and j > n/2, and probability .05 otherwise, leading to a large cut between the vertices in [1, ..., n/2] and those in [n/2 + 1, ..., n]. We summarize the results of our experiments in Table I. Times are given in CPU seconds on a Sun SPARCstation 1.
TABLE I. SUMMARY OF RESULTS OF ALGORITHM ON RANDOM INSTANCES

    Type of Graph   Size   Num Trials   Ave Int Gap   Ave CPU Time
    Type A            50       50         .96988           36.28
                     100       20         .96783          323.08
                     200        5         .97209         4629.62
    Type B            50       50         .97202           23.06
                     100       20         .97097          217.42
                     200        5         .97237         2989.00
    Type C            50       50         .95746           23.53
                     100       20         .94214          306.84
                     200        5         .92362         2546.42
    Type D            50       50         .95855           27.35
                     100       20         .93984          355.32
                     200        5         .93635        10709.42

NOTE: Ave Int Gap is the average ratio of the value of the cut generated to the semidefinite bound, except for Type B graphs, where it is the ratio of the value of the cut generated minus the negative edge weights to the semidefinite bound minus the negative edge weights.
In the case of the TSP instances, we used the Euclidean instances from the TSPLIB and set the edge weight w_ij equal to the Euclidean distance between points i and j. We summarize our results in Table II. In all 10 instances, we compute the optimal solution: for 5 instances, the value of the cut produced is equal to the semidefinite bound, and for the others, we have been able to exploit additional information from the dual semidefinite program to prove optimality (for the problems dantzig42, gr48 and hk48, Poljak and Rendl [1995b] also show that our solution is optimal). For all TSPLIB instances, the maximum cut value is within .995 of the semidefinite bound. Given these computational results, it is tempting to speculate that a much stronger bound can be proven for these Euclidean instances. However, the instance defined by a unit length equilateral triangle has a maximum cut value of 2, but Z*_P = 9/4, for a ratio of 8/9 = 0.8889.

Homer and Peinado [unpublished manuscript] have implemented our algorithm on a CM-5 and have shown that it produces optimal or very nearly optimal solutions to a number of MAX CUT instances derived from via minimization problems. These instances were provided by Michael Jünger (personal communication) and have between 828 and 1366 vertices.
TABLE II. SUMMARY OF RESULTS OF ALGORITHM ON TSPLIB INSTANCES

    Instance     Size    SD Val     Cut Val     Time
    dantzig42      42      42638      42638     43.35
    gr120         120    2156775    2156667    754.87
    gr48           48     321815     320277     26.17
    gr96           96     105470     105295    531.50
    hk48           48     771712     771712     66.52
    kroA100       100    5897392    5897392    420.83
    kroB100       100    5763047    5763047    917.47
    kroC100       100    5890760    5890760    398.78
    kroD100       100    5463946    5463250    469.48
    kroE100       100    5986675    5986591    375.68

NOTE: SD Val is the value produced by the semidefinite relaxation. Cut Val is the value of the best cut output by the algorithm.

7. Generalizations

We can use the same technique as in Section 2 to approximate several other problems. In the next section, we describe a variation of MAX CUT and give an (α − ε)-approximation algorithm for it. In Section 7.2, we give an (α − ε)-approximation algorithm for the MAX 2SAT problem, and show that it leads to a slightly improved algorithm for MAX SAT. Finally, in Section 7.3, we give a (β − ε)-approximation algorithm for the maximum directed cut problem (MAX DICUT), where β > .79607. In all cases, we will show how to approximate more general integer quadratic programs that can be used to model these problems.
7.1 MAX RES CUT. The MAX RES CUT problem is a variation of MAX CUT in which pairs of vertices are forced to be on either the same side of the cut or on different sides of the cut. The extension of the algorithm to the MAX RES CUT problem is trivial. We merely need to add the following constraints to (P):

    v_i · v_j = 1 for (i, j) ∈ E+   and   v_i · v_j = −1 for (i, j) ∈ E−,

where E+ (respectively, E−) corresponds to the pairs of vertices forced to be on the same side (respectively, different sides) of the cut. Using the randomized algorithm of Section 2 and setting y_i = 1 if r · v_i ≥ 0, and y_i = −1 otherwise, gives a feasible solution to MAX RES CUT, assuming that a feasible solution exists. Indeed, it is easy to see that if v_i · v_j = 1, then the algorithm will produce a solution such that y_i y_j = 1. If v_i · v_j = −1, then the only case in which the algorithm produces a solution such that y_i y_j ≠ −1 is when v_i · r = v_j · r = 0, an event that happens with probability 0. The analysis of the expected value of the cut is unchanged and, therefore, the resulting algorithm is a randomized (α − ε)-approximation algorithm.

Another approach to the problem is to use a standard reduction of MAX RES CUT to MAX CUT based on contracting edges and "switching" cuts (see, e.g., [Poljak and Rendl, to appear]). This reduction introduces negative edge weights, and so we do not discuss it here, although Theorem 3.2.1 can be used to show that our MAX CUT algorithm applied to a reduced instance has a performance guarantee of (α − ε) for the original MAX RES CUT instance. In fact, a more general statement can be made: any ρ-approximation algorithm (in the sense of Theorem 3.2.1) for MAX CUT instances possibly having negative edge weights leads to a ρ-approximation algorithm for MAX RES CUT.
7.2 MAX 2SAT AND MAX SAT. An instance of the maximum satisfiability problem (MAX SAT) is defined by a collection 𝒞 of Boolean clauses, where each clause is a disjunction of literals drawn from a set of variables {x_1, x_2, ..., x_n}. A literal is either a variable x or its negation x̄. The length l(C_j) of a clause C_j is the number of distinct literals in the clause. In addition, for each clause C_j ∈ 𝒞 there is an associated nonnegative weight w_j. An optimal solution to a MAX SAT instance is an assignment of truth values to the variables x_1, ..., x_n that maximizes the sum of the weights of the satisfied clauses. MAX 2SAT consists of MAX SAT instances in which each clause has length at most two. MAX 2SAT is NP-complete [Garey et al. 1976]; thus, MAX SAT is NP-complete. As with the MAX CUT problem, MAX 2SAT is known to be MAX SNP-hard [Papadimitriou and Yannakakis 1991]; thus, there exists some constant c < 1 such that the existence of a c-approximation algorithm implies that P = NP [Arora et al. 1992]. Bellare et al. [unpublished manuscript] have shown that a 95/96-approximation algorithm for MAX 2SAT would imply P = NP. The best approximation algorithm previously known for MAX 2SAT has a performance guarantee of 3/4 and is due to Yannakakis [1994] (see also Goemans and Williamson [1994b]). Haglin [1992] and Haglin (personal communication) has shown that any ρ-approximation algorithm for MAX RES CUT can be translated into a ρ-approximation algorithm for MAX 2SAT. Haglin's observation, together with the reduction from MAX RES CUT to MAX CUT with negative edge weights mentioned in the previous section, shows that any ρ-approximation algorithm for MAX CUT translates into a ρ-approximation algorithm for MAX 2SAT, but we will show a direct algorithm here.

7.2.1 MAX 2SAT. In order to model MAX 2SAT, we consider the integer quadratic program

(Q')    Maximize Σ_{i<j} [a_ij (1 − y_i y_j) + b_ij (1 + y_i y_j)]
        subject to: y_i ∈ {−1, 1}   ∀i ∈ V,
where the a_ij and b_ij are nonnegative. The objective function of (Q') is thus a nonnegative linear form in 1 ± y_i y_j. To model MAX 2SAT using (Q'), we introduce a variable y_i in the quadratic program for each Boolean variable x_i in the 2SAT instance; we also introduce an additional variable y_0. The value of y_0 will determine whether −1 or 1 corresponds to "true" in the MAX 2SAT instance. More precisely, x_i is true if y_i = y_0 and false otherwise. Given a Boolean formula C, we define its value v(C) to be 1 if the formula is true and 0 if the formula is false. Thus,

    v(x_i) = (1 + y_0 y_i)/2

and

    v(x̄_i) = 1 − v(x_i) = (1 − y_0 y_i)/2.

Observe that

    v(x_i ∨ x_j) = 1 − v(x̄_i ∧ x̄_j) = 1 − v(x̄_i) v(x̄_j)
                 = 1 − (1 − y_0 y_i)/2 · (1 − y_0 y_j)/2
                 = (1/4)(3 + y_0 y_i + y_0 y_j − y_i y_j)
                 = (1 + y_0 y_i)/4 + (1 + y_0 y_j)/4 + (1 − y_i y_j)/4.

The value of other clauses with 2 literals can be similarly expressed; for instance, if x_i is negated, one only needs to replace y_i by −y_i. Therefore, the value v(C) of any clause with at most two literals per clause can be expressed in the form required in (Q'). As a result, the MAX 2SAT problem can be modelled as

(SAT)    Maximize Σ_{C_j ∈ 𝒞} w_j v(C_j)
         subject to: y_i ∈ {−1, 1}   ∀i ∈ {0, 1, ..., n},

where the v(C_j) are nonnegative linear combinations of 1 + y_i y_j and 1 − y_i y_j. The (SAT) program is in the same form as (Q'). We relax (Q') to:

(P')    Maximize Σ_{i<j} [a_ij (1 − v_i · v_j) + b_ij (1 + v_i · v_j)]
        subject to: v_i ∈ S_n   ∀i ∈ V.

Let E[V] be the expected value of the solution produced by the randomized algorithm. By the linearity of expectation,

    E[V] = 2 Σ_{i<j} a_ij Pr[sgn(v_i · r) ≠ sgn(v_j · r)] + 2 Σ_{i<j} b_ij Pr[sgn(v_i · r) = sgn(v_j · r)].

Using the analysis of the MAX CUT algorithm, we note that Pr[sgn(v_i · r) = sgn(v_j · r)] = 1 − (1/π) arccos(v_i · v_j), and thus the approximation ratio for the more general program follows from Lemmas 3.4 and 3.2.2. Hence, we can show the following theorem, which implies that the algorithm is an (α − ε)-approximation algorithm for (Q') and thus for MAX 2SAT.

THEOREM 7.2.1.1.

    E[V] ≥ α Σ_{i<j} [a_ij (1 − v_i · v_j) + b_ij (1 + v_i · v_j)].
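The identity for v(x_i ∨ x_j) can be verified mechanically over all ±1 assignments; the following short sketch is an added illustration, not part of the paper.

```python
from itertools import product

def clause_value_quadratic(y0, yi, yj):
    """v(x_i or x_j) written as a nonnegative combination of 1 +/- y.y terms."""
    return (1 + y0 * yi) / 4 + (1 + y0 * yj) / 4 + (1 - yi * yj) / 4

def clause_value_boolean(y0, yi, yj):
    """Direct evaluation: x_i is true iff y_i = y_0."""
    return 1 if (yi == y0) or (yj == y0) else 0

# The two expressions agree on all eight +/-1 assignments.
for y0, yi, yj in product([-1, 1], repeat=3):
    assert clause_value_quadratic(y0, yi, yj) == clause_value_boolean(y0, yi, yj)
print("identity verified")
```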
7.2.2 MAX SAT. The improved MAX 2SAT algorithm leads to a slightly improved approximation algorithm for MAX SAT. In Goemans and Williamson [1994b], we developed several randomized 3/4-approximation algorithms for MAX SAT. We considered the following linear programming relaxation of the MAX SAT problem, where I_j^+ denotes the set of nonnegated variables in clause C_j and I_j^− is the set of negated variables in C_j:

    Maximize Σ_j w_j z_j
    subject to: Σ_{i ∈ I_j^+} y_i + Σ_{i ∈ I_j^−} (1 − y_i) ≥ z_j   ∀C_j ∈ 𝒞
                0 ≤ y_i ≤ 1,  0 ≤ z_j ≤ 1.

In the corresponding solution (y, z), y_i = 1 corresponds to x_i set true, y_i = 0 to x_i set false, z_j = 1 to clause C_j satisfied, and z_j = 0 to clause C_j not satisfied. We showed that, for any feasible solution (y, z) to the program, if x_i is set to be true with probability y_i, then the probability that clause j will be satisfied is at least

    (1 − (1 − 1/k)^k) z_j,

for k = l(C_j). We then considered two algorithms: (1) set x_i true independently with probability y_i; (2) set x_i true independently with probability 1/2. Choosing randomly between the two algorithms, the probability that a length k clause is satisfied is at least

    (1/2)(1 − 2^{−k}) + (1/2)(1 − (1 − 1/k)^k) z_j.

This expression can be shown to be at least (3/4) z_j for all lengths k. Thus, if an optimal solution to the linear program is used, the combined algorithm results in a 3/4-approximation algorithm for MAX SAT, since the expected value of the algorithm is at least (3/4) Σ_j w_j z_j.

We formulate a slightly different relaxation of the MAX SAT problem. Let v(C) denote a relaxed version of the expression v used in the previous section, in which the products y_i y_j are replaced by inner products of vectors v_i · v_j.
We then consider the following relaxation:

    Maximize Σ_j w_j z_j
    subject to: v(C_j) ≥ z_j   ∀C_j ∈ 𝒞, l(C_j) = 2
                Σ_{i ∈ I_j^+} v(x_i) + Σ_{i ∈ I_j^−} v(x̄_i) ≥ z_j   ∀C_j ∈ 𝒞
                0 ≤ z_j ≤ 1   ∀C_j ∈ 𝒞
                v_i ∈ S_n   ∀i ∈ {0, 1, ..., n}.

Thus, if we set x_i to be true with probability v(x_i), for the optimal solution of the corresponding semidefinite program, then by the arguments of Goemans and Williamson [1994b] we satisfy a clause C_j of length k with probability at least (1 − (1 − 1/k)^k) z_j. To obtain the improved bound, we consider three algorithms: (1) set x_i true independently with probability 1/2; (2) set x_i true independently with probability v(x_i) (given the optimal solution to the program); (3) pick a random unit vector r and set x_i true iff sgn(v_i · r) = sgn(v_0 · r). Suppose we use algorithm i with probability p_i, where p_1 + p_2 + p_3 = 1. From the previous section, under algorithm (3) the probability that a clause C_j of length 1 or 2 is satisfied is at least α v(C_j) ≥ α z_j. Thus, the expected value of the solution is at least

    Σ_{j: l(C_j)=1} w_j (.5 p_1 + (p_2 + α p_3) z_j) + Σ_{j: l(C_j)=2} w_j (.75 p_1 + (.75 p_2 + α p_3) z_j)
        + Σ_{j: l(C_j)≥3} w_j ((1 − 2^{−l(C_j)}) p_1 + (1 − (1 − 1/l(C_j))^{l(C_j)}) p_2) z_j.

If we set p_1 = p_2 = .4785 and p_3 = .0430, then the expected value is at least .7554 Σ_j w_j z_j, yielding a .7554-approximation algorithm. To see this, we check the value of the expression for lengths 1 through 4, and notice that for longer clauses

    .4785 ((1 − 2^{−5}) + (1 − 1/e)) ≥ .7554.

We can obtain an even slightly better approximation algorithm for the MAX SAT problem. The bottleneck in the analysis above is that algorithm (3) contributes no expected weight for clauses of length 3 or greater. For a given clause C_j of length 3 or more, let P_j be the set of length-2 clauses formed by taking the literals of C_j two at a time; thus, P_j will contain C(l(C_j), 2) clauses. If at least one of the literals in C_j is set true, then at least l(C_j) − 1 of the clauses in P_j will be satisfied. Thus, the following program is a relaxation of the MAX SAT problem:

    Maximize Σ_j w_j z_j
    subject to: Σ_{i ∈ I_j^+} v(x_i) + Σ_{i ∈ I_j^−} v(x̄_i) ≥ z_j   ∀C_j ∈ 𝒞
                v(C_j) ≥ z_j   ∀C_j ∈ 𝒞, l(C_j) = 2
                (1/(l(C_j) − 1)) Σ_{C ∈ P_j} v(C) ≥ z_j   ∀C_j ∈ 𝒞, l(C_j) ≥ 3
                0 ≤ z_j ≤ 1   ∀C_j ∈ 𝒞
                v_i ∈ S_n   ∀i ∈ {0, 1, ..., n}.

Algorithm (3) has expected value at least α v(C) for each C ∈ P_j for any j, so that its expected value for any clause of length 3 or more becomes at least

    α · (1/C(l(C_j), 2)) Σ_{C ∈ P_j} v(C) ≥ α · (l(C_j) − 1) z_j / C(l(C_j), 2) = (2α/l(C_j)) z_j,

so that the overall expectation of the algorithm will be at least

    Σ_{j: l(C_j)=1} w_j (.5 p_1 + (p_2 + α p_3) z_j) + Σ_{j: l(C_j)=2} w_j (.75 p_1 + (.75 p_2 + α p_3) z_j)
        + Σ_{j: l(C_j)≥3} w_j ((1 − 2^{−l(C_j)}) p_1 + (1 − (1 − 1/l(C_j))^{l(C_j)}) p_2 + (2α/l(C_j)) p_3) z_j.

By setting p_1 = p_2 = .467 and p_3 = .066, we obtain a .7584-approximation algorithm, which can be verified by checking the expression for lengths 1 through 6, and noticing that for longer clauses

    .467 ((1 − 2^{−7}) + (1 − 1/e)) ≥ .7584.

Other improvements are possible by tightening the analysis.
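The choice p_1 = p_2 = .467 and p_3 = .066 can be checked numerically. The sketch below (added for illustration; it evaluates the per-length coefficients derived above at z_j = 1, the worst case) confirms that the minimum stays above .7584.

```python
alpha = 0.87856
p1, p2, p3 = 0.467, 0.467, 0.066

def lower_bound(k):
    """Worst-case coefficient of w_j z_j for a clause of length k."""
    if k == 1:
        return 0.5 * p1 + p2 + alpha * p3
    if k == 2:
        return 0.75 * p1 + 0.75 * p2 + alpha * p3
    return (1 - 2.0 ** -k) * p1 + (1 - (1 - 1.0 / k) ** k) * p2 + (2 * alpha / k) * p3

print(min(lower_bound(k) for k in range(1, 50)))   # stays above .7584
```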
7.3 MAX DICUT. Suppose we are given a directed graph G = (V, A) and nonnegative weights w_ij on each directed arc (i, j) ∈ A, where i is the tail of the arc and j is the head. The maximum directed cut problem (MAX DICUT) is that of finding the set of vertices S that maximizes the weight of the arcs with their tails in S and their heads in S̄. The problem is NP-hard via a straightforward reduction from MAX CUT. The best previously known approximation algorithm for MAX DICUT has a performance guarantee of 1/4 [Papadimitriou and Yannakakis 1991].

To model MAX DICUT, we consider the integer quadratic program

(Q″)    Maximize (1/4) Σ_{i,j,k} [c_ijk (1 − y_i y_j − y_i y_k + y_j y_k) + d_ijk (1 + y_i y_j + y_i y_k + y_j y_k)]
        subject to: y_i ∈ {−1, 1}   ∀i ∈ V,

where the c_ijk and d_ijk are nonnegative. Observe that 1 − y_i y_j − y_i y_k + y_j y_k can also be written as (1 − y_i y_j)(1 − y_i y_k), and is equal to 4 if y_j = y_k = −y_i and 0 otherwise, while 1 + y_i y_j + y_i y_k + y_j y_k is 4 if y_i = y_j = y_k and is 0 otherwise. Thus, the objective function of (Q″) is a nonnegative combination of products of terms of the form 1 ± y_i y_j.

We can model the MAX DICUT problem using the program (Q″) by introducing a variable y_i for each i ∈ V and, as with the MAX 2SAT problem, introducing a variable y_0 that will denote the S side of the cut: vertex i ∈ S iff y_i = y_0. Then the directed arc (i, j) ∈ A contributes

    (1/4) w_ij (1 + y_i y_0)(1 − y_j y_0) = (1/4) w_ij (1 + y_i y_0 − y_j y_0 − y_i y_j)

weight to the cut. Summing over all arcs (i, j) ∈ A gives a program of the same form as (Q″). We observe that if the directed graph is such that the weighted indegree of every vertex is equal to its weighted outdegree, then the program reduces to one of the form (Q′), and therefore our algorithm has a performance guarantee of (α − ε).

We relax (Q″) to:

(P″)    Maximize (1/4) Σ_{i,j,k} [c_ijk (1 − v_i · v_j − v_i · v_k + v_j · v_k) + d_ijk (1 + v_i · v_j + v_i · v_k + v_j · v_k)]
        subject to: v_i ∈ S_n   ∀i ∈ V.

We approximate (Q″) by using exactly the same algorithm as before. As we will show, the analysis is somewhat more complicated. The performance guarantee β is slightly weaker, namely

    β = min_{0 ≤ θ < arccos(−1/3)} (2/π) · (2π − 3θ)/(1 + 3 cos θ) > 0.79607.
Given a vector r drawn uniformly from the unit sphere S_n, we know by the linearity of expectation that the expected value E[U] of the solution produced by the algorithm is

    E[U] = Σ_{i,j,k} [ c_ijk · Pr[sgn(v_i · r) ≠ sgn(v_j · r) = sgn(v_k · r)] + d_ijk · Pr[sgn(v_i · r) = sgn(v_j · r) = sgn(v_k · r)] ].
Consider any term in the sum, say d_ijk · Pr[sgn(v_i · r) = sgn(v_j · r) = sgn(v_k · r)]. The c_ijk terms can be dealt with similarly by simply replacing v_i by -v_i. The performance guarantee follows from the proof of the following two lemmas.

LEMMA 7.3.1.  Pr[sgn(v_i · r) = sgn(v_j · r) = sgn(v_k · r)] = 1 - (1/2π)(arccos(v_i · v_j) + arccos(v_i · v_k) + arccos(v_j · v_k)).

LEMMA 7.3.2.  For any v_i, v_j, v_k ∈ S_n,

    1 - (1/2π)(arccos(v_i · v_j) + arccos(v_i · v_k) + arccos(v_j · v_k)) ≥ (β/4)(1 + v_i · v_j + v_i · v_k + v_j · v_k).

PROOF OF LEMMA 7.3.1. A very short proof can be given relying on spherical geometry. The desired probability can be seen to be equal to twice the area of the spherical triangle polar to the spherical triangle defined by v_i, v_j, and v_k, normalized by the total area of the sphere. Stated this way, the result is a corollary to Girard's [1629] formula (see Rosenfeld [1988]) expressing the area of a spherical triangle with angles θ_1, θ_2, and θ_3 as its excess θ_1 + θ_2 + θ_3 - π.
We also present a proof of the lemma from first principles. In fact, our proof parallels Euler's [1781] proof (see Rosenfeld [1988]) of Girard's formula. We define the following events:

    A:   sgn(v_i · r) = sgn(v_j · r) = sgn(v_k · r)
    B_i: sgn(v_i · r) ≠ sgn(v_j · r) = sgn(v_k · r)
    C_i: sgn(v_j · r) = sgn(v_k · r)
    C_j: sgn(v_i · r) = sgn(v_k · r)
    C_k: sgn(v_i · r) = sgn(v_j · r).

Note that B_i = C_i - A. We define B_j and B_k similarly, so that B_j = C_j - A and B_k = C_k - A. Clearly,
    Pr[A] + Pr[B_i] + Pr[B_j] + Pr[B_k] = 1.    (1)

Also, Pr[C_i] = Pr[A] + Pr[B_i] and similarly for j and k. Adding up these equalities and subtracting (1), we obtain

    Pr[C_i] + Pr[C_j] + Pr[C_k] = 1 + 2 Pr[A].    (2)
By Lemma 3.2, Pr[C_i] = 1 - (1/π) arccos(v_j · v_k) and similarly for j and k. Together with (2), we derive

    Pr[A] = 1 - (1/2π)(arccos(v_i · v_j) + arccos(v_i · v_k) + arccos(v_j · v_k)),

proving the lemma. □
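The formula of Lemma 7.3.1 is also easy to test empirically. The Monte Carlo snippet below (ours, illustrative only) compares the predicted probability with a sampled estimate for three random unit vectors.

```python
# Monte Carlo check (illustrative only) of Lemma 7.3.1 for random unit vectors.
import numpy as np

rng = np.random.default_rng(0)

def unit(x):
    return x / np.linalg.norm(x)

vi, vj, vk = (unit(rng.standard_normal(4)) for _ in range(3))
predicted = 1 - (np.arccos(vi @ vj) + np.arccos(vi @ vk) + np.arccos(vj @ vk)) / (2 * np.pi)

trials = 200_000
r = rng.standard_normal((trials, 4))
s = np.sign(r @ np.stack([vi, vj, vk], axis=1))   # trials x 3 matrix of signs
empirical = np.mean((s[:, 0] == s[:, 1]) & (s[:, 1] == s[:, 2]))
print(f"predicted {predicted:.4f}, empirical {empirical:.4f}")
```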
PROOF OF LEMMA 7.3.2. One can easily verify that the defined value of β is greater than 0.79607. Let a = arccos(v_i · v_j), b = arccos(v_i · v_k), and c = arccos(v_j · v_k). From the theory of spherical triangles, it follows that the possible values for (a, b, c) over all possible vectors v_i, v_j, and v_k define the set

    S = {(a, b, c) : 0 ≤ a ≤ π, 0 ≤ b ≤ π, 0 ≤ c ≤ π, c ≤ a + b, b ≤ a + c, a ≤ b + c, a + b + c ≤ 2π}

(see Berger [1987, Corollary 18.6.12.3]). The claim can thus be restated as

    1 - (1/2π)(a + b + c) ≥ (β/4)(1 + cos(a) + cos(b) + cos(c))    for all (a, b, c) ∈ S.
Let (a, b, c) minimize

    h(a, b, c) = 1 - (1/2π)(a + b + c) - (β/4)(1 + cos(a) + cos(b) + cos(c))

over all (a, b, c) ∈ S. We consider several cases:

(1) a + b + c = 2π. We have 1 - (1/2π)(a + b + c) = 0. On the other hand,

    1 + cos(a) + cos(b) + cos(c) = 1 + cos(a) + cos(b) + cos(a + b)
                                 = 2 cos^2((a + b)/2) + 2 cos((a + b)/2) cos((a - b)/2)
                                 = 2 cos((a + b)/2) [cos((a + b)/2) + cos((a - b)/2)].    (3)

We now derive that

    h(a, b, c) = -(β/2) cos((a + b)/2) [cos((a + b)/2) + cos((a - b)/2)] ≥ 0,

the last inequality following from the fact that

    π/2 ≤ (a + b)/2 ≤ π - (a - b)/2,
and thus

    cos((a + b)/2) ≤ 0    and    cos((a + b)/2) + cos((a - b)/2) ≥ 0.

(2) a = b + c or b = a + c or c = a + b. By symmetry, assume that c = a + b. Observe that

    1 - (1/2π)(a + b + c) = 1 - (a + b)/π.

On the other hand, by (3) we have that

    1 + cos(a) + cos(b) + cos(c) = 2 cos((a + b)/2) [cos((a + b)/2) + cos((a - b)/2)] ≤ 2 cos((a + b)/2) (1 + cos((a + b)/2)).

Letting x = (a + b)/2, we observe that the claim is equivalent to

    1 - 2x/π ≥ (β/2) cos(x)(1 + cos(x)).

One can in fact verify that, for any 0 ≤ x ≤ π/2,

    1 - 2x/π ≥ 0.81989 · cos(x)(1 + cos(x))/2

(a numerical check appears after the proof), implying the claim.

(3) a = 0 or b = 0 or c = 0. Without loss of generality, let a = 0. The definition of S implies that b = c, and thus b = a + c. This case therefore reduces to the previous one.
(4) a = π or b = π or c = π. Assume a = π. This implies that b + c = π and, thus, a + b + c = 2π. We have thus reduced the problem to case (1).

(5) In the last case, (a, b, c) belongs to the interior of S. This implies that the gradient of h must vanish and the Hessian of h must be positive semidefinite at (a, b, c). In other words,

    sin a = sin b = sin c = 2/(βπ),

and cos a ≥ 0, cos b ≥ 0 and cos c ≥ 0. From this, we derive that a = b = c. But

    h(a, a, a) = 1 - 3a/(2π) - (β/4)(1 + 3 cos(a)).

The lemma now follows from the fact that a ≤ 2π/3, the definition of β, and the fact that 1 + 3 cos a < 0 for a > arccos(-1/3). □

Thus, we obtain a (β - ε)-approximation algorithm for (Q'') and for the MAX DICUT problem.
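The scalar inequality invoked in case (2) can be examined numerically as well. The snippet below is ours and merely evaluates the relevant ratio on a grid over [0, π/2]; it is not part of the paper's proof.

```python
# Numerical look (illustrative only) at the inequality used in case (2):
#   1 - 2x/pi >= 0.81989 * cos(x) * (1 + cos(x)) / 2   for 0 <= x <= pi/2.
# We print the smallest value of the ratio 2*(1 - 2x/pi) / (cos(x)*(1 + cos(x))),
# which should be roughly 0.8199, comfortably above beta ~ 0.79607.
import math

def ratio(x):
    return 2 * (1 - 2 * x / math.pi) / (math.cos(x) * (1 + math.cos(x)))

grid = (i * (math.pi / 2) / 100_000 for i in range(1, 100_000))  # open interval
print(f"min ratio on grid: {min(ratio(x) for x in grid):.5f}")
```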
8. Concluding Remarks

Our motivation for studying semidefinite programming came from a realization that the standard tool of using linear programming relaxations for approximation algorithms had limits which might not be easily surpassed (see the conclusion of Goemans and Williamson [1994b]). In fact, a classical linear programming relaxation of the maximum cut problem can be shown to be arbitrarily close to twice the value of the maximum cut in the worst case. Given the work of Lovász and Schrijver [1989; 1990], which showed that tighter relaxations could be obtained through semidefinite programming, it seemed worthwhile to investigate the power of such relaxations from a worst-case perspective. The results of this paper constitute a first step in this direction. As we mentioned in the introduction, further improved results have already been obtained by Feige and Goemans for MAX 2SAT and MAX DICUT, and by Karger, Motwani, and Sudan for coloring. We think that the continued investigation of these methods is promising.

While this paper leaves many open questions, we think there are two especially interesting ones. The first question is whether a .878-approximation algorithm for MAX CUT can be obtained without explicitly solving the semidefinite program. For example, the first 2-approximation algorithms for the weighted vertex cover problem involved solving a linear program [Hochbaum 1982], but later Bar-Yehuda and Even [1981] devised a primal-dual algorithm in which the linear programming relaxation was used only in the analysis of the algorithm. Perhaps a semidefinite analog is possible for MAX CUT. The second question is whether adding additional constraints to the semidefinite program leads to a better worst-case bound. There is some reason to think this might be true. Linear constraints are known for which the program would find an optimal solution on any planar graph, whereas there is a gap of 32/(25 + 5√5) with the current semidefinite program for the 5-cycle.

One consequence of this paper is that the situation is no longer clear-cut. When the best-known approximation results for MAX SAT and several other MAX SNP problems had such long-standing and well-defined bounds as 3/4 and 1/2, it was tempting to believe that perhaps no further work could be done in approximating these problems, and that it was only a matter of time before matching hardness results would be found. The improved results in this paper should rescue algorithm designers from such fatalism. Although MAX SNP problems cannot be approximated arbitrarily closely, there is still work to do in designing improved approximation algorithms.
ACKNOWLEDGMENTS. Since the appearance of an extended abstract of this paper, we have benefited from discussions with a very large number of colleagues. In particular, we would like to thank Don Coppersmith and David Shmoys for pointing out to us that our MAX 2SAT algorithm could lead to an improved MAX SAT algorithm, Jon Kleinberg for bringing Lovász [1983] to our attention, Bob Vanderbei for kindly giving us his semidefinite code, Francisco Barahona and Michael Jünger for providing problem instances, Joel Spencer for motivating Theorem 3.1.1, Farid Alizadeh, Gabor Pataki, and Rob Freund for results on semidefinite programming, and Shang-Hua Teng for bringing Knuth [1981] to our attention. We received other useful comments from Farid Alizadeh, Joseph Cheriyan, Jon Kleinberg, Monique Laurent, Colin McDiarmid, Giovanni Rinaldi, David Shmoys, Eva Tardos, and the two anonymous referees.
REFERENCES

ALIZADEH, F. 1995. Interior point methods in semidefinite programming with applications to combinatorial optimization. SIAM J. Optimiz. 5, 13–51.
ARORA, S., LUND, C., MOTWANI, R., SUDAN, M., AND SZEGEDY, M. 1992. Proof verification and hardness of approximation problems. In Proceedings of the 33rd Annual Symposium on Foundations of Computer Science. IEEE, New York, pp. 14–23.
BAR-YEHUDA, R., AND EVEN, S. 1981. A linear time approximation algorithm for the weighted vertex cover problem. J. Algorithms 2, 198–203.
BARAHONA, F., GRÖTSCHEL, M., JÜNGER, M., AND REINELT, G. 1988. An application of combinatorial optimization to statistical physics and circuit layout design. Oper. Res. 36, 493–513.
BARVINOK, A. I. 1995. Problems of distance geometry and convex properties of quadratic maps. Disc. Computat. Geom. 13, 189–202.
BELLARE, M., GOLDREICH, O., AND SUDAN, M. 1995. Free bits, PCPs and non-approximability—Towards tight results. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science. IEEE, Los Alamitos, Calif., pp. 422–431.
BERGER, M. 1987. Geometry II. Springer-Verlag, Berlin.
BONDY, J. A., AND MURTY, U. S. R. 1976. Graph Theory with Applications. American Elsevier Publishing, New York, N.Y.
BOYD, S., EL GHAOUI, L., FERON, E., AND BALAKRISHNAN, V. 1994. Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, Pa.
CHOR, B., AND SUDAN, M. 1995. A geometric approach to betweenness. In Algorithms—ESA '95, P. Spirakis, ed. Lecture Notes in Computer Science, vol. 979. Springer-Verlag, New York, pp. 227–237.
CHRISTENSEN, J., AND VESTERSTROM, J. 1979. A note on extreme positive definite matrices. Math. Annalen 244, 65–68.
DELORME, C., AND POLJAK, S. 1993a. Combinatorial properties and the complexity of a max-cut approximation. Europ. J. Combinat. 14, 313–333.
DELORME, C., AND POLJAK, S. 1993b. Laplacian eigenvalues and the maximum cut problem. Math. Prog. 62, 557–574.
DELORME, C., AND POLJAK, S. 1993c. The performance of an eigenvalue bound on the max-cut problem in some classes of graphs. Disc. Math. 111, 145–156.
EULER, L. 1781. De mensura angulorum solidorum. Petersburg.
FEIGE, U., AND GOEMANS, M. X. 1995. Approximating the value of two prover proof systems, with applications to MAX 2SAT and MAX DICUT. In Proceedings of the 3rd Israel Symposium on Theory of Computing and Systems. IEEE Computer Society Press, Los Alamitos, Calif., pp. 182–189.
FEIGE, U., AND LOVÁSZ, L. 1992. Two-prover one-round proof systems: Their power and their problems. In Proceedings of the 24th Annual ACM Symposium on the Theory of Computing (Victoria, B.C., Canada, May 4–6). ACM, New York, pp. 733–744.
FRIEZE, A., AND JERRUM, M. 1996. Improved approximation algorithms for MAX k-CUT and MAX BISECTION. Algorithmica, to appear.
GAREY, M., JOHNSON, D., AND STOCKMEYER, L. 1976. Some simplified NP-complete graph problems. Theoret. Comput. Sci. 1, 237–267.
GIRARD, A. 1629. De la mesure de la superficie des triangles et polygones sphériques. Appendix to "Invention nouvelle en l'algèbre". Amsterdam.
GOEMANS, M. X., AND WILLIAMSON, D. P. 1994a. .878-approximation algorithms for MAX CUT and MAX 2SAT. In Proceedings of the 26th Annual ACM Symposium on the Theory of Computing (Montreal, Que., Canada, May 23–25). ACM, New York, pp. 422–431.
GOEMANS, M. X., AND WILLIAMSON, D. P. 1994b. New 3/4-approximation algorithms for the maximum satisfiability problem. SIAM J. Disc. Math. 7, 656–666.
GOLUB, G. H., AND VAN LOAN, C. F. 1983. Matrix Computations. The Johns Hopkins University Press, Baltimore, Md.
GRONE, R., PIERCE, S., AND WATKINS, W. 1990. Extremal correlation matrices. Lin. Algebra Appl. 134, 63–70.
GRÖTSCHEL, M., LOVÁSZ, L., AND SCHRIJVER, A. 1981. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica 1, 169–197.
GRÖTSCHEL, M., LOVÁSZ, L., AND SCHRIJVER, A. 1988. Geometric Algorithms and Combinatorial Optimization. Springer-Verlag, Berlin.
HADLOCK, F. 1975. Finding a maximum cut of a planar graph in polynomial time. SIAM J. Comput. 4, 221–225.
HAGLIN, D. J. 1992. Approximating maximum 2-CNF satisfiability. Paral. Proc. Lett. 2, 181–187.
HAGLIN, D. J., AND VENKATESAN, S. M. 1991. Approximation and intractability results for the maximum cut problem and its variants. IEEE Trans. Comput. 40, 110–113.
HOCHBAUM, D. S. 1982. Approximation algorithms for the set covering and vertex cover problems. SIAM J. Comput. 11, 555–556.
HOFMEISTER, T., AND LEFMANN, H. 1995. A combinatorial design approach to MAXCUT. In Proceedings of the 13th Symposium on Theoretical Aspects of Computer Science. To appear.
HOMER, S., AND PEINADO, M. A highly parallel implementation of the Goemans/Williamson algorithm to approximate MaxCut. Manuscript.
KARGER, D., MOTWANI, R., AND SUDAN, M. 1994. Approximate graph coloring by semidefinite programming. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science. IEEE, New York, pp. 2–13.
KARP, R. M. 1972. Reducibility among combinatorial problems. In Complexity of Computer Computations, R. Miller and J. Thatcher, eds. Plenum Press, New York, pp. 85–103.
KNUTH, D. E. 1981. Seminumerical Algorithms. Vol. 2 of The Art of Computer Programming. Addison-Wesley, Reading, Mass.
LANCASTER, P., AND TISMENETSKY, M. 1985. The Theory of Matrices. Academic Press, Orlando, Fla.
LAURENT, M., AND POLJAK, S. 1996. On the facial structure of the set of correlation matrices. SIAM J. Matrix Anal. Appl., to appear.
LI, C.-K., AND TAM, B.-S. 1994. A note on extreme correlation matrices. SIAM J. Matrix Anal. Appl. 15, 903–908.
LOEWY, R. 1980. Extreme points of a convex subset of the cone of positive definite matrices. Math. Annalen 253, 227–232.
LOVÁSZ, L. 1979. On the Shannon capacity of a graph. IEEE Trans. Inf. Theory IT-25, 1–7.
LOVÁSZ, L. 1983. Self-dual polytopes and the chromatic number of distance graphs on the sphere. Acta Sci. Math. 45, 317–323.
LOVÁSZ, L. 1992. Combinatorial optimization: Some problems and trends. DIMACS Technical Report 92-53.
LOVÁSZ, L., AND SCHRIJVER, A. 1989. Matrix cones, projection representations, and stable set polyhedra. In Polyhedral Combinatorics, vol. 1 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society, Providence, R.I.
LOVÁSZ, L., AND SCHRIJVER, A. 1990. Cones of matrices and set-functions, and 0–1 optimization. SIAM J. Optimiz. 1, 166–190.
MAHAJAN, S., AND RAMESH, H. 1995. Derandomizing semidefinite programming based approximation algorithms. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science. IEEE, Los Alamitos, Calif., pp. 162–163.
MOHAR, B., AND POLJAK, S. 1993. Eigenvalue methods in combinatorial optimization. In Combinatorial and Graph-Theoretic Problems in Linear Algebra, vol. 50 of The IMA Volumes in Mathematics and Its Applications, R. Brualdi, S. Friedland, and V. Klee, eds. Springer-Verlag, New York.
NESTEROV, Y., AND NEMIROVSKII, A. 1989. Self-Concordant Functions and Polynomial Time Methods in Convex Programming. Central Economic and Mathematical Institute, Academy of Science, Moscow, CIS.
NESTEROV, Y., AND NEMIROVSKII, A. 1994. Interior Point Polynomial Methods in Convex Programming. Society for Industrial and Applied Mathematics, Philadelphia, Pa.
ORLOVA, G. I., AND DORFMAN, Y. G. 1972. Finding the maximal cut in a graph. Eng. Cyber. 10, 502–506.
OVERTON, M. L., AND WOMERSLEY, R. S. 1992. On the sum of the largest eigenvalues of a symmetric matrix. SIAM J. Matrix Anal. Appl. 13, 41–45.
OVERTON, M. L., AND WOMERSLEY, R. S. 1993. Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Math. Prog. 62, 321–357.
PAPADIMITRIOU, C. H., AND YANNAKAKIS, M. 1991. Optimization, approximation, and complexity classes. J. Comput. Syst. Sci. 43, 425–440.
PATAKI, G. 1994. On the multiplicity of optimal eigenvalues. Management Science Research Report MSRR-604, GSIA, Carnegie-Mellon Univ., Pittsburgh, Pa.
PATAKI, G. 1995. On cone-LP's and semi-definite programs: Facial structure, basic solutions, and the simplex method. GSIA Working Paper WP 1995-03, Carnegie-Mellon Univ., Pittsburgh, Pa.
POLJAK, S., AND RENDL, F. 1994. Node and edge relaxations of the max-cut problem. Computing 52, 123–137.
POLJAK, S., AND RENDL, F. 1995a. Nonpolyhedral relaxations of graph-bisection problems. SIAM J. Optim., to appear.
POLJAK, S., AND RENDL, F. 1995b. Solving the max-cut problem using eigenvalues. Disc. Appl. Math. 62, 249–278.
POLJAK, S., AND TURZÍK, D. 1982. A polynomial algorithm for constructing a large bipartite subgraph, with an application to a satisfiability problem. Can. J. Math. 34, 519–524.
POLJAK, S., AND TUZA, Z. 1995. Maximum cuts and largest bipartite subgraphs. In Combinatorial Optimization, W. Cook, L. Lovász, and P. Seymour, eds. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 20. American Mathematical Society, Providence, R.I. To be published.
REINELT, G. 1991. TSPLIB—A traveling salesman problem library. ORSA J. Comput. 3, 376–384.
RENDL, F., VANDERBEI, R., AND WOLKOWICZ, H. 1993. Interior point methods for max-min eigenvalue problems. Report 264, Technische Universität Graz, Graz, Austria.
ROSENFELD, B. 1988. A History of Non-Euclidean Geometry. Springer-Verlag, Berlin.
SAHNI, S., AND GONZALEZ, T. 1976. P-complete approximation problems. J. ACM 23, 3 (July), 555–565.
VAIDYA, P. M. 1989. A new algorithm for minimizing convex functions over convex sets. In Proceedings of the 30th Annual Symposium on Foundations of Computer Science. IEEE, New York, pp. 338–343.
VANDENBERGHE, L., AND BOYD, S. 1996. Semidefinite programming. SIAM Rev., to appear.
VITANYI, P. M. 1981. How well can a graph be n-colored? Disc. Math. 34, 69–80.
WOLKOWICZ, H. 1981. Some applications of optimization in matrix theory. Lin. Algebra Appl. 40, 101–118.
YANNAKAKIS, M. 1994. On the approximation of maximum satisfiability. J. Algorithms 17, 475–502.
RECEIVED JUNE 1994; REVISED JUNE 1995; ACCEPTED JULY 1995
JOuxndof the Associationfor ComputingMachinery,vol. 42,No. 6, November199:5.
Download