Comp 260: Advanced Algorithms
Tufts University, Spring 2011
Prof. Lenore Cowen
Scribe: Eli Brown
Lecture 10: Semidefinite Programming
This is a short lecture because the class went to the Tufts Graduate
Research Symposium.
1  Max Cut with SDP
We will revisit a problem from earlier in the course and show a new approach
to solving it. The new approach builds on material we have seen since we
first attacked Max Cut with a randomized algorithm.
1.1  Max Cut
To review, the problem is posed:
Input: A graph G = (V, E) with weights w_{i,j} for each (i, j) ∈ E.
Assume ∀ i, j that w_{i,j} ≥ 0 and that w_{i,j} = 0 if (i, j) ∉ E.
Goal: Divide nodes into two subsets to maximize the total weight of
the edges that cross between the subsets. We say those edges “cross
the cut”.
We addressed this problem with a randomized solution. Let each vertex
independently flip a fair coin and, based on the outcome, place that vertex
into either set S or S̄ (the two sides of the cut). Let W be the total weight of
the cut, that is, the sum of the weights of the edges that cross it.
\begin{align*}
E[W] &= \sum_{(i,j)\in E} w_{i,j} \cdot \Pr[(i,j) \text{ crosses the cut}] \\
&= \sum_{(i,j)\in E} w_{i,j} \cdot \Pr[(i \in S, j \notin S) \vee (i \notin S, j \in S)] \\
&= \frac{1}{2} \sum_{(i,j)\in E} w_{i,j} \\
&\geq \frac{1}{2} \cdot \mathrm{OPT},
\end{align*}

where the last step uses the fact that OPT is at most the total weight of all edges.
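As a sanity check on this calculation, here is a minimal simulation sketch in Python (the 4-cycle graph and the function name are illustrative, not from the lecture): flipping a fair coin per vertex should give an average cut weight of about half the total edge weight.

```python
import random

def random_cut_weight(n, edges):
    """Place each vertex on one side of the cut with probability 1/2
    and return the total weight of the edges that cross it."""
    in_S = [random.random() < 0.5 for _ in range(n)]
    return sum(w for (i, j, w) in edges if in_S[i] != in_S[j])

# Toy 4-cycle with unit weights: total weight 4, so E[W] should be 2.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
trials = 100_000
avg = sum(random_cut_weight(4, edges) for _ in range(trials)) / trials
print(f"average cut weight over {trials} trials: {avg:.3f}")  # close to 2.0
```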
The expected value of our solution is thus at least 1/2 · OPT, but that does not mean
(without further mathematical argument) that a single randomized run will
meet this expected value a constant fraction of the time, so we cannot yet
claim an algorithm that achieves 1/2 · OPT. However, we presented two
different ways to achieve 1/2 · OPT deterministically for the Max Cut problem
earlier in the course.
1.2  Semidefinite Programming (SDP)
Previously, we discussed linear programming (LP) and its applications. Semidefinite
programming (SDP) is a generalization of LP. Instead of optimizing a vector of
variables subject only to linear constraints, we optimize over a matrix of variables
that is constrained to be positive semidefinite, in addition to linear constraints
on its entries.
Definition 1.2.1 A symmetric matrix A ∈ ℝ^{n×n} is positive semidefinite ⇔
x^T A x ≥ 0 for all x ∈ ℝ^n.
For a PSD matrix, the following are equivalent (a quick numerical check appears after the list):
1. A is positive semidefinite
2. A has only non-negative eigenvalues
3. A = B^T B for some matrix B ∈ ℝ^{n×n}
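To make the three conditions concrete, here is a small numpy sketch (illustrative, not from the lecture): building A as B^T B guarantees positive semidefiniteness, and the eigenvalue and quadratic-form tests confirm it.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B.T @ B                        # condition 3: A = B^T B, so A is PSD

print(np.linalg.eigvalsh(A))       # condition 2: all eigenvalues >= 0 (up to round-off)

x = rng.standard_normal(4)
print(x @ A @ x >= -1e-12)         # condition 1: x^T A x >= 0 for this (arbitrary) x
```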
A semidefinite program has the form:

\[
\begin{array}{ll}
\text{max or min:} & \sum_{i,j} c_{i,j} x_{i,j} \\
\text{subject to:} & \sum_{i,j} a_{i,j,k} x_{i,j} = b_k \quad \forall k \\
\text{where:} & X = (x_{i,j}) \text{ is symmetric positive semidefinite}
\end{array}
\]
SDPs are solvable in polynomial time (to within any desired accuracy). Semidefinite
programming is similar to linear programming, except that the matrix of variables
must additionally be positive semidefinite, which is a non-linear constraint.
Two methods for solving these systems include the following (a small solver
sketch follows the references):
• Ellipsoid Method – Geometric Algorithms and Combinatorial Optimization. M. Grötschel, L. Lovász, A. Schrijver. 1988.
• Interior Point Methods – Interior Point Methods in Semidefinite Programming with Applications to Combinatorial Optimization. F. Alizadeh. 1993.
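As a concrete illustration of the general form above, here is a minimal sketch using the cvxpy modeling library (the library and a default SDP solver are assumed to be installed; the data C, A1, b1 are made up for illustration):

```python
import cvxpy as cp
import numpy as np

n = 3
rng = np.random.default_rng(1)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                 # symmetric cost matrix (made-up data)
A1 = np.eye(n)                    # a single linear constraint: trace(X) = 1
b1 = 1.0

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,            # X must be positive semidefinite
               cp.trace(A1 @ X) == b1]
# For symmetric C, trace(C X) equals the sum of c_{i,j} x_{i,j}.
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print(prob.value)
```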
SDP is equivalent to Vector Programming, which is written in the
form:
\[
\begin{array}{ll}
\text{max or min:} & \sum_{i,j} c_{i,j} (v_i \cdot v_j) \\
\text{subject to:} & \sum_{i,j} a_{i,j,k} (v_i \cdot v_j) = b_k \quad \forall k \\
\text{where:} & v_i \in \mathbb{R}^n
\end{array}
\]
The equivalence comes from the fact that X = V^T V, for some V composed of the
v_i as columns ⇔ X is positive semidefinite.
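One direction of this equivalence is easy to see in code: given a feasible PSD matrix X, factor it to recover vectors whose inner products reproduce the entries of X. A small sketch (numpy assumed; an eigendecomposition is used rather than Cholesky so that rank-deficient X is also handled):

```python
import numpy as np

def vectors_from_psd(X):
    """Return a matrix V whose columns v_i satisfy V.T @ V ~ X."""
    eigvals, eigvecs = np.linalg.eigh(X)
    eigvals = np.clip(eigvals, 0.0, None)      # clip tiny negatives from round-off
    return np.diag(np.sqrt(eigvals)) @ eigvecs.T

X = np.array([[1.0, 0.5],
              [0.5, 1.0]])                      # a small PSD matrix
V = vectors_from_psd(X)
print(np.allclose(V.T @ V, X))                  # True: v_i . v_j equals X[i, j]
```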
1.3  Solving Max Cut with SDP
Thanks to SDP, we can improve our approximation from 0.5 · OPT to 0.878 ·
OPT.
At the heart of this method is the idea that we could do the maximization
over 1-dimensional unit vectors. Assign a “vector” v_i to each vertex in the
graph, and let

\[
v_i = \begin{cases} -1 & \text{if vertex } i \text{ is in } S \\ +1 & \text{if vertex } i \text{ is in } \bar{S} \end{cases}
\]
Then maximize the following expression, in which the term for edge (i, j)
contributes its full weight w_{i,j} exactly when the two vertices are in opposite sets:

\[
\frac{1}{2} \sum_{i<j} w_{i,j} (1 - v_i v_j)
\]
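A quick check of this expression on a toy graph (the 4-cycle data below is illustrative): with v_i ∈ {−1, +1}, each edge whose endpoints land in opposite sets contributes exactly its weight.

```python
# Toy 4-cycle with unit weights; put vertices 0 and 2 in S, vertices 1 and 3 in S-bar.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
v = {0: -1, 1: +1, 2: -1, 3: +1}

# Each edge with v_i * v_j = -1 contributes its full weight w_{i,j}.
objective = 0.5 * sum(w * (1 - v[i] * v[j]) for (i, j, w) in edges)
print(objective)  # 4.0 -- every edge of the cycle crosses this cut
```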
To actually write an SDP that solves the problem, though, we need to
assign n-dimensional vectors instead: we require v_i · v_i = 1 for all i, with
v_i ∈ ℝ^n, and maximize the same objective with v_i v_j replaced by v_i · v_j. If
you drew them in n-space, those vectors would lie on the unit sphere. Since
they represent vertices of the Max Cut instance, we want a way to separate
the vectors into two sets; the ones that have large angles between them are
the ones we want to put on opposite sides of the cut. The insight for how to do
that is to pick a random hyperplane through the origin and split the vectors
according to the side on which they fall.
Goemans-Williamson Algorithm:
1. Solve the vector program
2. Choose a random vector r uniformly from the unit sphere. To do that,
choose each coordinate from a Gaussian distribution and then normalize.
3. Set y_i = +1 if r · v_i ≥ 0, otherwise set y_i = −1 (a sketch of all three steps follows).
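A compact sketch of these three steps (assuming cvxpy and numpy; the function below is an illustration, not the lecture's code): the vector program is solved as an SDP over X with unit diagonal, the vectors v_i are recovered by an eigendecomposition of X, and a random hyperplane does the rounding.

```python
import cvxpy as cp
import numpy as np

def goemans_williamson(W, seed=0):
    """W: symmetric n x n matrix of non-negative edge weights.
    Returns a +/-1 label for each vertex."""
    n = W.shape[0]
    rng = np.random.default_rng(seed)

    # Step 1: solve the vector program as the SDP over X = (v_i . v_j) with X[i, i] = 1.
    X = cp.Variable((n, n), symmetric=True)
    objective = cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X)))  # = 1/2 sum_{i<j} w (1 - X_ij)
    cp.Problem(objective, [X >> 0, cp.diag(X) == 1]).solve()

    # Recover vectors v_i as the columns of V, with V.T @ V ~ X.
    eigvals, eigvecs = np.linalg.eigh(X.value)
    V = np.diag(np.sqrt(np.clip(eigvals, 0.0, None))) @ eigvecs.T

    # Step 2: random hyperplane through the origin with Gaussian normal r.
    r = rng.standard_normal(n)
    r /= np.linalg.norm(r)

    # Step 3: y_i = +1 if r . v_i >= 0, else -1.
    return np.where(r @ V >= 0, 1, -1)

# Toy 4-cycle with unit weights; the optimal cut has weight 4.
W = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0
y = goemans_williamson(W)
print(y, 0.25 * np.sum(W * (1 - np.outer(y, y))))  # labelling and its cut weight
```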
Proof 1.3.1 Now the expected value of the weight of the cut is

\[
E[W] = \sum_{i<j} w_{i,j} \, \Pr[v_i \text{ and } v_j \text{ are separated by } r].
\]
We can determine that probability by projecting r onto the plane spanned by
v_i and v_j. All we need to know is what fraction of the unit circle in that plane
is cut off by the angle between the two vectors, which gives a separation
probability of (1/π) arccos(v_i · v_j). We get the OPT bound from
\[
\min_{-1 \le x \le 1} \frac{\frac{1}{\pi}\arccos(x)}{\frac{1}{2}(1 - x)} \;\ge\; 0.878,
\]

so E[W] ≥ 0.878 · (1/2) Σ_{i<j} w_{i,j}(1 − v_i · v_j) ≥ 0.878 · OPT, since the
vector program is a relaxation of Max Cut and its optimum is at least OPT.
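The constant can be verified numerically; a quick sketch that evaluates the ratio on a fine grid of x:

```python
import numpy as np

x = np.linspace(-1.0 + 1e-9, 1.0 - 1e-9, 2_000_001)
ratio = (np.arccos(x) / np.pi) / ((1 - x) / 2)
print(ratio.min())   # about 0.87856, attained near x = -0.689
```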