18.434 - PRESENTATION 2
APPROXIMATION ALGORITHMS VIA SDP: MAX CUT
ARIEL SCHVARTZMAN
0.1. Intro. Suppose we have an undirected graph G = (V, E) and a weight function
w : E → R+. Recall a cut is a partition of the vertex set V = A ∪ B. The value
of a cut is Σ_{a∈A, b∈B, (a,b)∈E} w(a, b). Throughout this discussion, we will assume
without loss of generality that Σ_{e∈E} w_e = 1.
Earlier this semester we saw that there are nice linear programs for min cut
and that their duals correspond to max flows; in particular, min cut can be
solved in polynomial time. Here, our goal is instead to find the maximum cut in
the graph. We will not be able to find an exact solution in polynomial time,
because the problem is NP-hard.
0.2. Where do we stand on this?
• MAX CUT is NP-Complete: 3SAT ≤p MAX CUT
• Even worse: it is NP-Hard to approximate MAX CUT with a ratio better
than 16/17 ≈ 0.941 (Håstad, 1997).
• This means that the best we can do is compromise optimality in order to
get an (approximate) solution in polynomial time.
0.3. Recap on Approximation Algorithms. We say an algorithm is an α-approximation
algorithm if the solution f(x) it provides is within a factor α of the optimal
solution. For a maximization problem, this means that

αOPT ≤ f(x) ≤ OPT

for some α < 1.
0.4. Application. The maximum bipartite subgraph problem is essentially the
same as max cut. In this case we are given a graph G = (V, E) and we look for
a subgraph H = (V, X) such that H is bipartite and |X| is as large as possible.
A cut of value k corresponds to a bipartite subgraph with k edges, and vice
versa. For example, consider K5.
1. A first cut
To get an idea of how large the max cut can be in a graph, let's try a naive
approach: place each vertex on side A independently with probability 1/2. The
cut is then defined by the resulting partition (A, V \ A). We will now analyze
the expected size of the cut.
E[random cut] = E[ Σ_{(a,b)∈E} w(a, b) 1{π(a) ≠ π(b)} ] = Σ_{(a,b)∈E} w(a, b) E[1{π(a) ≠ π(b)}]
where π(a) is the side of the partition to which a belongs. It is not hard to
see that a, b will be on different sides of the cut with probability 1/2.
Therefore, the expected value of the randomized cut is

E[random cut] = (1/2) Σ_{(a,b)∈E} w(a, b) = 1/2
Since this is the average value of a random cut, we know that OPT ≥ 1/2.
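This randomized baseline can be checked empirically. A minimal sketch in Python; the helper name `random_cut_value` and the example graph `k3` are illustrative, not from the notes:

```python
import random

def random_cut_value(edges, trials=2000, seed=0):
    """Estimate the expected value of a uniformly random cut.

    `edges` maps (a, b) pairs to weights (assumed to sum to 1).
    Each vertex joins side A independently with probability 1/2.
    """
    rng = random.Random(seed)
    vertices = {v for e in edges for v in e}
    total = 0.0
    for _ in range(trials):
        side = {v: rng.random() < 0.5 for v in vertices}
        # An edge contributes iff its endpoints land on different sides.
        total += sum(w for (a, b), w in edges.items() if side[a] != side[b])
    return total / trials

# Triangle K3 with uniform weights summing to 1; estimate should be near 1/2.
k3 = {(0, 1): 1/3, (1, 2): 1/3, (0, 2): 1/3}
```

Each edge is cut with probability 1/2, so by linearity the estimate concentrates around 1/2 regardless of the graph.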
2. A bad cut
Consider the following IP formulation:
max Σ_{(u,v)∈E} w_uv z_uv
s.t.  z_uv ≤ x_u + x_v          ∀(u, v) ∈ E
      z_uv ≤ 2 − (x_u + x_v)    ∀(u, v) ∈ E
      x_u, z_uv ∈ {0, 1}        ∀u ∈ V, ∀(u, v) ∈ E
Intuitively, this makes sense because we have that z_uv = 1 ⇐⇒ x_u ≠ x_v.
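The equivalence z_uv = 1 ⇐⇒ x_u ≠ x_v can be verified by brute force over a single edge; a small sanity check, with no names taken from the notes:

```python
# Check that the IP constraints force z_uv = 1 exactly when x_u != x_v,
# by enumerating all 0/1 assignments for a single edge (u, v).
for xu in (0, 1):
    for xv in (0, 1):
        # Largest z_uv in {0, 1} satisfying both constraints; a
        # maximization objective always pushes z_uv to this value.
        z = min(xu + xv, 2 - (xu + xv), 1)
        assert z == (1 if xu != xv else 0)
```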
We can’t solve IPs so we must relax this formulation to obtain a linear program.
Suppose that we relax this so that x_u, z_e ∈ [0, 1].
There is a major problem with this approach: let x_u = 1/2 for all u ∈ V. Then
z_e = 1 is feasible for all e ∈ E, so the LP value is always 1, independent of
the graph. This is obviously not true of the max cut. Consider for instance K3,
where OPT = 2/3.
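The K3 counterexample is a one-line arithmetic check; a sketch (variable names are illustrative):

```python
# On K3 with uniform weights w_e = 1/3, the fractional point x_u = 1/2,
# z_e = 1 is feasible for the LP relaxation and has objective value 1,
# while the true max cut of a triangle is 2/3.
x = {v: 0.5 for v in range(3)}
edges = [(0, 1), (1, 2), (0, 2)]
w = 1 / 3
for (u, v) in edges:
    assert x[u] + x[v] >= 1          # z_uv <= x_u + x_v allows z_uv = 1
    assert 2 - (x[u] + x[v]) >= 1    # z_uv <= 2 - (x_u + x_v) allows z_uv = 1
lp_value = sum(w * 1 for _ in edges)  # every z_e = 1
max_cut = 2 * w                       # the best cut of a triangle cuts 2 edges
```

So the integrality gap of this LP is at least 3/2 already on K3.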
3. A good cut
Consider the following quadratic program

max Σ_{(u,v)∈E} w_uv (1/2 − (1/2) y_u y_v)
s.t.  y_u y_u = 1    ∀u ∈ V
In this case, an edge contributes 0 if y_u = y_v and w_uv if y_u = −y_v. This
program behaves the way we want it to, but it is still not something we can
solve efficiently. We relax it by letting the variables be vectors y_u ∈ R^d.
This gives an SDP:

max Σ_{(u,v)∈E} w_uv (1/2 − (1/2) ⟨y_u, y_v⟩)
s.t.  ⟨y_u, y_u⟩ = 1    ∀u ∈ V
Geometrically, this embeds the vertices of the graph on the unit sphere S^{d−1}.
Let σ_uv = ⟨y_u, y_v⟩, the cosine of the angle between y_u and y_v. To maximize
the sum, we would like each σ_uv to be as close to −1 as possible. Unlike the
LP, this relaxation retains the structure of the problem, and it will round well.
Let SDPOPT be the optimal solution to this SDP. Since this is a relaxed version
of max cut we know that SDPOPT ≥ OPT. Moreover α SDPOPT ≥ α OPT for
any α > 0.
4. Goemans-Williamson
We can solve the SDP in polynomial time with the ellipsoid algorithm. Let {y_v}
be the solution. We need to choose a way to round these vectors and assign each
vertex to a side of the partition. Consider the following algorithm proposed by
Goemans and Williamson in 1995.
Cut the vectors {y_v}_{v∈V} with a random hyperplane through the origin. This
partitions the vectors, which in turn defines a natural partition of the graph.
Concretely, choose a random vector g ≠ 0 in R^d; this vector is normal to the
hyperplane. If y_u · g ≥ 0 we say u ∈ A; otherwise, u ∈ B.
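The rounding step is a single matrix-vector product; a minimal NumPy sketch (the function name and the example vectors are illustrative — the vectors below are not an actual SDP solution):

```python
import numpy as np

def round_hyperplane(Y, rng):
    """Goemans-Williamson rounding: Y is an (n, d) array whose rows are
    the unit vectors y_v from the SDP solution. A random normal g defines
    a hyperplane; vertex v lands in side A iff y_v . g >= 0."""
    n, d = Y.shape
    g = rng.standard_normal(d)   # random direction, nonzero almost surely
    return Y @ g >= 0            # boolean side assignment per vertex

# Illustrative input: four unit vectors in R^2. Antipodal pairs must be
# separated by any hyperplane through the origin.
rng = np.random.default_rng(0)
Y = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
sides = round_hyperplane(Y, rng)
```

Note that drawing g with i.i.d. Gaussian coordinates makes its direction uniform on the sphere, which is what the analysis below requires.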
4.1. Analysis of GW. Fix an edge (u, v) ∈ E. The probability that π(u) ≠ π(v)
is the probability that the hyperplane separates y_u and y_v. Consider the 2D
plane spanned by these two vectors. A random hyperplane cuts them with
probability equal to the angle θ between the vectors divided by π. That is,
Pr(π(u) ≠ π(v)) = θ/π = cos⁻¹(y_u · y_v)/π = cos⁻¹(σ_uv)/π
By linearity of expectation, we get that

E[GW cut] = Σ_{(u,v)∈E} w_uv cos⁻¹(σ_uv)/π

Recall that SDPOPT = Σ_{(u,v)∈E} w_uv (1/2 − (1/2)σ_uv) ≥ OPT. Therefore, if we
can find some α such that

cos⁻¹(σ_uv)/π ≥ α (1/2 − (1/2)σ_uv)    ∀σ_uv ∈ [−1, 1]
we can conclude that E[GW Cut] ≥ αSDPOPT ≥ αOPT. As it turns out,
α = 0.87856... works.
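The constant α can be recovered numerically by minimizing the ratio of the two sides over a fine grid; a minimal sketch:

```python
import math

# Numerically estimate the Goemans-Williamson constant
#   alpha = min over sigma in (-1, 1) of (arccos(sigma)/pi) / ((1 - sigma)/2),
# the worst-case ratio between the GW cut probability and the SDP term.
alpha = min(
    (math.acos(s) / math.pi) / ((1 - s) / 2)
    for s in (-1 + 2 * i / 100000 for i in range(1, 100000))  # sigma in (-1, 1)
)
# alpha comes out close to 0.87856, matching the constant quoted above;
# the minimizing sigma is around -0.69.
```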
Remark. If the Unique Games Conjecture is true, then this is the best
approximation ratio achievable in polynomial time.