Venkatesan Guruswami (CMU)
Yury Makarychev (TTI-C)
Prasad Raghavendra (Georgia Tech)
David Steurer (MSR)
Yuan Zhou (CMU)
Bipartite graph recognition
• Depth-first search/breadth-first search
• With some noise?
– Given a bipartite graph with 1% noisy edges,
can we remove a small fraction of edges (say 10%)
to get a bipartite graph,
i.e., can we divide the vertices into two parts so
that 90% of the edges go across the two parts?
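The noiseless version reduces to 2-coloring by BFS; a minimal sketch (the function name and edge-list encoding are illustrative):

```python
from collections import deque

def is_bipartite(n, edges):
    """BFS 2-coloring: a graph is bipartite iff it contains no odd cycle."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False  # odd cycle found
    return True
```

With noise, this test simply fails; the robust question above asks for more.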
MaxCut
• cut(A, B) = edges(A, B) / |E|,
where B = V − A
• exactly one of i, j in A:
edge (i, j) is "on the cut"
• MaxCut: find A, B such
that cut(A, B) is maximized
[Figure: G = (V, E) with parts A and B = V − A; cut(A, B) = 4/5]
MaxCut = max (1/|E|) · Σ_{(i,j)∈E} (1 − x_i·x_j)/2, subject to x_i ∈ {−1, 1}, ∀ i ∈ V
– Bipartite graph recognition: MaxCut = 1?
– Robust bipartite graph recognition: given MaxCut ≥ 0.99,
find a cut with cut(A, B) ≥ 0.9
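In the ±1 notation above, the objective is easy to evaluate; a small sketch (the slide's example graph is not recoverable, but a 5-cycle also attains cut value 4/5):

```python
def maxcut_value(edges, x):
    """(1/|E|) * sum over edges (i, j) of (1 - x_i * x_j) / 2, for x_i in {-1, +1}.
    An edge contributes 1 when its endpoints get opposite signs, else 0."""
    return sum((1 - x[i] * x[j]) / 2 for i, j in edges) / len(edges)

# A 5-cycle: any +/-1 assignment cuts at most 4 of the 5 edges.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
```

For example, `maxcut_value(c5, [1, -1, 1, -1, 1])` is 4/5.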
c vs. s approximation for MaxCut
• Given a graph with MaxCut value at least c, can we find a
cut of value at least s ?
• Robust bipartite graph recognition: given MaxCut ≥ 0.99,
find a cut with cut(A, B) ≥ 0.9
– 0.99 vs 0.9 approximation
– "approximating almost perfect MaxCut"
Robust bipartite graph recognition
• Task: given MaxCut ≥ 0.99, find cut(A, B) ≥ 0.9
• We can always find cut(A, B) ≥ 1/2.
– Assign each vertex -1, 1 randomly
– For any edge (i, j), E[(1 - xixj)/2] = 1/2
E[alg] = E[ (1/|E|) · Σ_{(i,j)∈E} (1 − x_i·x_j)/2 ]
= (1/|E|) · Σ_{(i,j)∈E} E[(1 − x_i·x_j)/2] = 1/2
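The 1/2 expectation can be verified exhaustively on a small graph (a sketch; averaging over all 2^n sign assignments realizes the expectation exactly):

```python
from itertools import product

def avg_cut_over_all_assignments(n, edges):
    """Average of (1/|E|) * sum (1 - x_i x_j)/2 over all 2^n sign assignments.
    Each edge is cut for exactly half of the assignments, so the average is 1/2."""
    total = 0.0
    for x in product((-1, 1), repeat=n):
        total += sum((1 - x[i] * x[j]) / 2 for i, j in edges) / len(edges)
    return total / 2 ** n
```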
Robust bipartite graph recognition (cont'd)
• Task: given MaxCut ≥ 0.99, find cut(A, B) ≥ 0.9
• We can always find cut(A, B) ≥ 1/2.
• Better than 1/2?
– DFS/BFS/greedy? No combinatorial algorithm
was known until very recently [KS11].
– Linear programming? Natural LPs have big
integrality gaps [VK07, STT07, CMM09].
Robust bipartite graph recognition (cont'd)
• Task: given MaxCut ≥ 0.99, find cut(A, B) ≥ 0.9
• We can always find cut(A, B) ≥ 1/2.
• Better than 1/2?
• The GW Semidefinite Programming relaxation [GW95]
max (1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2, subject to ||v_i||² = 1, ∀ i ∈ V
– 0.878-approximation
– Given MaxCut ≥ (1 − ε), can find a cut ≥ (1 − O(√ε))
• (1 − ε) vs. (1 − O(√ε)) approximation, tight under the
Unique Games Conjecture [Kho02, KKMO07, MOO10]
Robust satisfiability algorithms
• Given an instance that can be satisfied after removing an ε
fraction of constraints, make the instance satisfiable
by removing a g(ε) fraction of constraints
– g(ε) → 0 as ε → 0
• Examples
• (1 − ε) vs. (1 − O(√ε)) algorithm for MaxCut [GW95]
• (1 − ε) vs. (1 − O(ε^{1/3})) algorithm for Max2SAT [Zwick98]
• (1 − ε) vs. (1 − O(1/log(1/ε))) algorithm for MaxHorn3SAT
[Zwick98]
MaxBisection
[Figure: G = (V, E), |V| even, divided into parts A and B with |A| = |B|]
• Objective: maximize cut(A, B) = edges(A, B)/|E|, subject to |A| = |B|
• MaxBisection = max (1/|E|) · Σ_{(i,j)∈E} (1 − x_i·x_j)/2,
subject to x_i² = 1, ∀ i ∈ V, and Σ_{i∈V} x_i = 0
MaxBisection (cont'd)
• Approximating MaxBisection?
– No easier than MaxCut
• Reduction: take two copies of the MaxCut instance
MaxBisection (cont'd)
• Approximating MaxBisection?
– No easier than MaxCut
– Strictly harder than MaxCut?
– Approximation ratio: 0.6514 [FJ97], 0.699 [Ye01],
0.7016 [HZ02], 0.7027 [FL06]
– Approximating almost perfect solutions? Not known
Finding almost-perfect MaxBisection
• Question
– Is there a (1 − ε) vs. (1 − g(ε)) approximation algorithm
for MaxBisection, where g(ε) → 0 as ε → 0?
• Answer. Yes.
• Our result.
– Theorem. There is a (1 − ε) vs. (1 − O(ε^{1/3} · log(1/ε)))
approximation algorithm for MaxBisection.
– Theorem. Given a (1 − ε) satisfiable MaxBisection
instance, it is easy to find a (.49, .51)-balanced cut of
value (1 − O(√ε)).
Extension to MinBisection
• MinBisection
– minimize edges(A, B)/|V|, s.t. B = V - A, |B| = |A|
• Our result
– Theorem. There is an ε vs. O(ε^{1/3} · log(1/ε)) approximation
algorithm for MinBisection.
– Theorem. Given a MinBisection instance of value ε,
it is easy to find a (.49, .51)-balanced cut of value O(√ε).
The rest of this talk...
• Previous algorithms for MaxBisection.
• Theorem. There is a (1 − ε) vs. (1 − O(ε^{1/20} · log n))
approximation algorithm for MaxBisection.
Previous algorithms for MaxBisection
The GW algorithm for (almost perfect) MaxCut [GW95]
• MaxCut objective
MaxCut = max (1/|E|) · Σ_{(i,j)∈E} (1 − x_i·x_j)/2, subject to x_i² = 1, ∀ i ∈ V
[Figure: example on three vertices labeled −1, 0, 1; MaxCut = 2/3]
• SDP relaxation
SDP = max (1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2, subject to ||v_i||² = 1, ∀ i ∈ V
SDP ≥ MaxCut
In this example: SDP = 3/4 > MaxCut
The "rounding" algorithm
SDP = max (1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2, subject to ||v_i||² = 1, ∀ i ∈ V
• Lemma. We can (in poly time) get a cut of value 1 − O(√ε)
when SDP ≥ 1 − ε.
• Algorithm. Choose a random hyperplane; the hyperplane
divides the vertices into two parts.
• Analysis.
The "rounding" algorithm (cont'd)
SDP = max (1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2, subject to ||v_i||² = 1, ∀ i ∈ V
• Lemma. We can (in poly time) get a cut of value 1 − O(√ε)
when SDP ≥ 1 − ε.
• Algorithm. Choose a random hyperplane; the hyperplane
divides the vertices into two parts.
• Analysis.
– SDP ≥ 1 − ε implies that for most edges (i, j), the SDP
contribution (1 − ⟨v_i, v_j⟩)/2 is large.
– Claim. If (1 − ⟨v_i, v_j⟩)/2 ≥ 1 − ε, then
Pr[v_i, v_j separated by a random hyperplane] ≥ 1 − O(√ε).
– Therefore, the random hyperplane cuts many edges
(in expectation).
The "rounding" algorithm (cont'd)
– Claim. If (1 − ⟨v_i, v_j⟩)/2 ≥ 1 − ε, then
Pr[v_i, v_j separated by a random hyperplane] ≥ 1 − O(√ε).
– Proof. The pair v_i, v_j is separated exactly when the
hyperplane falls inside the angle between them, so
Pr[v_i, v_j separated] = arccos⟨v_i, v_j⟩ / π
≥ arccos(−1 + 2ε) / π
= 1 − arccos(1 − 2ε) / π
≥ 1 − O(√ε).
[Figure: v_i, v_j with the separating and non-separating hyperplane regions]
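The arccos formula can be sanity-checked numerically; a hedged 2D sketch (restricting to the plane of v_i, v_j loses nothing, since the normal of a random hyperplane projects to a uniformly random direction in that plane):

```python
import math
import random

def separation_prob(theta, trials=200_000, seed=0):
    """Empirical probability that a random hyperplane through the origin
    separates two unit vectors at angle theta; should approach theta / pi."""
    rng = random.Random(seed)
    vi = (1.0, 0.0)
    vj = (math.cos(theta), math.sin(theta))
    sep = 0
    for _ in range(trials):
        a = rng.uniform(0, 2 * math.pi)  # random normal direction
        g = (math.cos(a), math.sin(a))
        si = g[0] * vi[0] + g[1] * vi[1] > 0
        sj = g[0] * vj[0] + g[1] * vj[1] > 0
        sep += si != sj
    return sep / trials
```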
Known algorithms for MaxBisection
• The standard SDP (used by all the previous algorithms)
max (1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2, subject to
||v_i||² = 1, ∀ i ∈ V
Σ_{i∈V} v_i = 0   (bisection condition)
• Gives a non-trivial approximation guarantee
• But does not help find almost perfect MaxBisection
Known algorithms for MaxBisection (cont'd)
• The standard SDP (used by all the previous algorithms)
max (1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2, subject to
||v_i||² = 1, ∀ i ∈ V
Σ_{i∈V} v_i = 0
• The "integrality gap": instances with OPT < 0.9, SDP = 1
Known algorithms for MaxBisection (cont'd)
• The standard SDP (used by all the previous algorithms)
max (1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2, subject to
||v_i||² = 1, ∀ i ∈ V
Σ_{i∈V} v_i = 0
• The "integrality gap": instances with OPT < 0.9, SDP = 1
• Why is this bad news for the SDP?
– Instances with OPT > 1 − ε have SDP > 1 − ε
– Instances with OPT < 0.9 can also have SDP > 1 − ε
– The SDP cannot tell whether an instance is almost
satisfiable (OPT > 1 − ε) or not.
Our approach
• Theorem. There is a (1 − ε) vs. (1 − O(ε^{1/20} · log n))
approximation algorithm for MaxBisection.
A simple fact
• Fact. A (1/2 − δ, 1/2 + δ)-balanced cut of value c gives a
bisection of value ≥ c − 2δ.
• Proof. Get the bisection by moving a δ fraction of
random vertices from the large side to the small side.
– Fraction of cut edges affected: at most 2δ in expectation
• Only need to find almost bisections.
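A minimal sketch of the move step in the proof (names illustrative; the 2δ loss holds in expectation over the random choices):

```python
import random

def rebalance(A, B, seed=0):
    """Move random vertices from the larger side until |A| = |B|.
    Moving a delta fraction of V destroys at most a 2*delta fraction
    of cut edges in expectation."""
    rng = random.Random(seed)
    A, B = list(A), list(B)
    if len(A) < len(B):
        A, B = B, A
    while len(A) > len(B):
        v = A.pop(rng.randrange(len(A)))
        B.append(v)
    return set(A), set(B)
```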
Almost perfect MaxCuts on expanders
• λ-expander: for each S ⊆ V such that vol(S) ≤ vol(V)/2,
we have edges(S, V − S)/vol(S) ≥ λ, where vol(S) = Σ_{i∈S} d_i.
[Figure: G = (V, E) with a subset S]
Almost perfect MaxCuts on expanders (cont'd)
• Key Observation. The (volume of the) difference between
two (1 − ε) cuts on a λ-expander is at most (2ε/λ) · vol(V).
• Proof. Suppose cut(A, B) ≥ 1 − ε and cut(C, D) ≥ 1 − ε, and
let X = A ∩ D, Y = B ∩ C (WLOG vol(X ∪ Y) ≤ vol(V)/2).
Every edge leaving X ∪ Y is uncut by one of the two cuts, so
edges(X ∪ Y, V − (X ∪ Y)) ≤ 2ε · vol(V)
⟹ vol(X ∪ Y) ≤ (2ε/λ) · vol(V), by expansion.
Almost perfect MaxCuts on expanders (cont'd)
• Key Observation. The (volume of the) difference between
two (1 − ε) cuts on a λ-expander is at most (2ε/λ) · vol(V).
• Approximating almost perfect MaxBisection on
expanders is easy.
– Just run the GW algorithm to find the MaxCut.
The algorithm (sketch)
• Decompose the graph into expanders
– Discard all the inter-expander edges
• Approximate OPT's behavior on each expander by
finding its MaxCut (GW).
– Discard all the uncut edges.
• Combine the cuts on the expanders.
– Take one side from each cut to get an almost
bisection (subset sum).
[Figure: Step 1: decompose G = (V, E) into expanders; Step 2: find the
MaxCut of each piece; Step 3: combine the pieces]
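The combine step is a subset-sum problem: each piece's cut contributes one of its two sides to the global side A. A pseudo-polynomial DP sketch (names illustrative):

```python
def combine_cuts(pairs, target):
    """For each piece with side sizes (a, b), choose one side for the global
    part A so that |A| is as close to `target` as possible (subset-sum DP).
    Returns (best achievable size, per-piece choices)."""
    reachable = {0: []}  # achievable size -> list of choices ('a' or 'b')
    for a, b in pairs:
        nxt = {}
        for total, choice in reachable.items():
            nxt.setdefault(total + a, choice + ['a'])
            nxt.setdefault(total + b, choice + ['b'])
        reachable = nxt
    best = min(reachable, key=lambda t: abs(t - target))
    return best, reachable[best]
```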
Expander decomposition
• Cheeger's inequality. Can (efficiently) find a cut of
sparsity O(√λ) if the graph is not a λ-expander.
• Corollary. A graph can be (efficiently) decomposed into
λ-expanders by removing O(√λ · log n) edges (in fraction).
• Proof.
– If the graph is not an expander, divide it into small
parts by a sparsest cut (Cheeger's inequality).
– Process the small parts recursively.
[Figure: G = (V, E) decomposed into λ-expanders]
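A numerical illustration of the Cheeger step (a sketch using numpy; the sweep cut below is the standard constructive side of Cheeger's inequality, not necessarily the exact subroutine used here):

```python
import numpy as np

def sweep_cut(adj):
    """Sweep cut from the 2nd eigenvector of the normalized Laplacian.
    Returns (lambda_2, best sweep conductance); Cheeger's inequality
    guarantees lambda_2 / 2 <= conductance <= sqrt(2 * lambda_2)."""
    d = adj.sum(axis=1)
    dis = 1.0 / np.sqrt(d)
    # Normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(d)) - dis[:, None] * adj * dis[None, :]
    w, v = np.linalg.eigh(L)  # eigenvalues in ascending order
    lam2 = w[1]
    order = np.argsort(dis * v[:, 1])  # sweep along D^{-1/2} x
    vol_total = d.sum()
    best = np.inf
    in_S = np.zeros(len(d), dtype=bool)
    for u in order[:-1]:
        in_S[u] = True
        cut = adj[np.ix_(in_S, ~in_S)].sum()
        vol = min(d[in_S].sum(), vol_total - d[in_S].sum())
        best = min(best, cut / vol)
    return lam2, best
```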
The algorithm
• Decompose the graph into ε^{1/10}-expanders.
– Lose ≤ O(ε^{1/20} · log n) edges (in fraction).
• Apply the GW algorithm on each expander to approximate OPT.
– OPT(MaxBisection) = (1 − ε)
– GW finds (1 − O(√ε)) cuts on these expanders
– Lose ≤ O(ε^{1/2}) edges.
– Each cut is ≤ O(ε^{1/2}/ε^{1/10}) ≤ O(ε^{1/10}) different from
the behavior of OPT.
• Combine the cuts on the expanders (subset sum).
• Get a (1/2 − O(ε^{1/10}), 1/2 + O(ε^{1/10}))-balanced cut of
value (1 − O(ε^{1/20} · log n)).
• ⟹ a bisection of value (1 − O(ε^{1/20} · log n)).
• Proved:
– Theorem. There is a (1 − ε) vs. (1 − O(ε^{1/20} · log n))
approximation algorithm for MaxBisection.
• Will prove:
– Theorem. There is a (1 − ε) vs. (1 − O(ε^{1/20} · log(1/ε)))
approximation algorithm for MaxBisection.
Eliminating the log n factor
• Recall. Only need to find almost bisections
(ε^{1/20}-close to a bisection).
• Observation. Subset sum is "flexible with small items"
– Making small items more biased does not
change the solution too much.
– Example. The items (101, 304), (397, 201), (3, 5), (6, 2),
(5, 1), (3, 2) sum to (515, 515); replacing each small item
(a, b) with the more biased (a + b, 0) shifts the sum by at
most the total weight of the small items.
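The slide's numbers can be checked directly (a small sketch; the "< 10" threshold for calling an item small is illustrative):

```python
def pair_sum(items):
    """Componentwise sum of (left, right) weight pairs."""
    return tuple(map(sum, zip(*items)))

items = [(101, 304), (397, 201), (3, 5), (6, 2), (5, 1), (3, 2)]
# Bias every small item (a, b) all the way to (a + b, 0).
biased = [(a + b, 0) if a + b < 10 else (a, b) for a, b in items]
```

The original sum is the bisection (515, 515); after biasing, the imbalance stays bounded by the small items' total weight (27).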
Eliminating the log n factor (cont'd)
• Observation. Subset sum is "flexible with small items"
– However, making small items more balanced might be
a bad idea.
– Example. The item (200, 0) together with 100 copies of (0, 2)
sums to (200, 200), a bisection; replacing each (0, 2) with the
balanced (1, 1) gives (300, 100), far from a bisection.
Eliminating the log n factor (cont'd)
• Idea. Terminate early in the decomposition process.
Decompose the graph into
– ε^{1/10}-expanders (large items), or
– subgraphs of ≤ ε·n vertices (small items).
• Corollary. Only need to discard O(ε^{1/20} · log(1/ε)) edges.
• Lemma. We can find an almost bisection if the MaxCuts
we get for small sets are more biased than those in OPT.
Finding a biased MaxCut
• To find a cut that is as biased as OPT and as good as
OPT (in terms of cut value).
• Lemma. Given G = (V, E), if there exists a cut (X, Y) of
value (1 − ε), then one can find a cut (A, B) of
value (1 − O(√ε)), such that | |A| − |X| | ≤ O(√ε) · |V|.
[Figure: reduction from MaxBisection to Biased MaxCut]
The algorithm
• Decompose the graph into ε^{1/10}-expanders or small parts.
– Lose ≤ O(ε^{1/20} · log(1/ε)) edges.
• Apply the GW algorithm on each expander to approximate OPT.
– Lose ≤ O(ε^{1/2}) edges; ≤ O(ε^{1/2}/ε^{1/10}) ≤ O(ε^{1/10})
different from OPT.
• Find biased MaxCuts in the small parts.
– Lose ≤ O(ε^{1/2}) edges; at most O(ε^{1/2}) less biased than OPT.
• Combine the cuts on the expanders and small parts (subset sum).
• Get a (1/2 − O(ε^{1/10}), 1/2 + O(ε^{1/10}))-balanced cut of
value (1 − O(ε^{1/20} · log(1/ε))).
• ⟹ a bisection of value (1 − O(ε^{1/20} · log(1/ε))).
Finding a biased MaxCut -- A simpler task
• Lemma. Given G = (V, E), if there exists a cut (X, Y) of
value (1 − ε), then one can find a cut (A, B) of
value (1 − 8√ε), such that | |A| − |X| | ≤ 8ε^{1/4} · |V|.
• SDP.
maximize (1/|V|) · Σ_{i∈V} (1 − ⟨v_0, v_i⟩)/2   --- bias
subject to
(1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2 ≥ 1 − ε   --- cut value
||v_i||² = 1, ∀ i ∈ {0} ∪ V
• Claim. SDP ≥ |X|/|V|
Rounding algorithm (sketch)
• Goal: given the SDP solution, find a cut (A, B) such that
– cut(A, B) ≥ 1 − 8√ε
– |A| / |V| ≥ SDP − 8ε^{1/4}
• For most (a 1 − √ε fraction of) edges (i, j), we have
(1 − ⟨v_i, v_j⟩)/2 ≥ 1 − √ε, i.e. ⟨v_i, v_j⟩ ≤ −1 + 2√ε.
• So v_i, v_j are almost opposite to each other: v_i ≈ −v_j,
and ⟨v_i, v_0⟩ ≈ −⟨v_j, v_0⟩.
• Indeed,
|⟨v_i, v_0⟩ + ⟨v_j, v_0⟩| = |⟨v_i + v_j, v_0⟩| ≤ ||v_i + v_j||
= √(2 + 2⟨v_i, v_j⟩) ≤ √(4√ε) = 2ε^{1/4}.
Rounding algorithm (sketch) (cont'd)
• For most edges (i, j): ⟨v_i, v_0⟩ ≈ −⟨v_j, v_0⟩, up to 2ε^{1/4}.
• Project all vectors onto v_0.
• Divide the v_0 axis into intervals
– length = 8ε^{1/4}
• Most (a 1 − 8√ε fraction of) edges have their two incident
vertices falling into opposite intervals (good edges).
[Figure: v_0 axis divided into intervals I(−4), …, I(−1), I(1), …, I(4)]
• Discard all bad edges.
Rounding algorithm (sketch) (cont'd)
• Let the cut (A, B) be
– for each pair of intervals I(k) and I(-k),
let A include the one with more vertices,
B include the other
• (A, B) cuts all good edges
⟹ cut(A, B) ≥ 1 − 8√ε.
[Figure: v_0 axis with paired intervals −4, …, −1, 1, …, 4]
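A sketch of this pairing step (the interval length and the bucket-indexing convention are assumptions reconstructed from the slides):

```python
import math

def interval_cut(projections, eps):
    """Bucket vertices by their projection onto v0 into intervals I(k) of
    length 8 * eps**0.25 (assumed), then put the larger of each pair
    I(k), I(-k) into side A and the smaller into side B."""
    L = 8 * eps ** 0.25
    buckets = {}
    for i, p in enumerate(projections):
        k = math.floor(p / L) + 1 if p >= 0 else -(math.floor(-p / L) + 1)
        buckets.setdefault(k, []).append(i)
    A, B, seen = [], [], set()
    for k in buckets:
        if abs(k) in seen:
            continue
        seen.add(abs(k))
        pos = buckets.get(abs(k), [])
        neg = buckets.get(-abs(k), [])
        if len(pos) >= len(neg):
            A += pos
            B += neg
        else:
            A += neg
            B += pos
    return A, B
```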
Rounding algorithm (sketch) (cont'd)
• Let the cut (A, B) be
– for each pair of intervals I(k) and I(−k),
let A include the one with more vertices,
and B include the other.
• For each i ∈ I(k): ⟨v_i, v_0⟩ ≥ (k − 1) · 8ε^{1/4};
for each i ∈ I(−k): ⟨v_i, v_0⟩ ≤ −(k − 1) · 8ε^{1/4}.
• Therefore
SDP · |V| = Σ_k Σ_{i∈I(k)∪I(−k)} (1 − ⟨v_i, v_0⟩)/2
≤ Σ_k ( max{|I(k)|, |I(−k)|} + (|I(k)| + |I(−k)|) · 8ε^{1/4} )
= |A| + |V| · 8ε^{1/4}
⟹ |A| ≥ (SDP − 8ε^{1/4}) · |V|.
Finding a biased MaxCut
• Lemma. Given G = (V, E), if there exists a cut (X, Y) of
value (1 − ε), then one can find a cut (A, B) of
value (1 − O(√ε)), such that | |A| − |X| | ≤ O(√ε) · |V|.
• SDP.
maximize (1/|V|) · Σ_{i∈V} (1 − ⟨v_0, v_i⟩)/2   --- bias
subject to
(1/|E|) · Σ_{(i,j)∈E} (1 − ⟨v_i, v_j⟩)/2 ≥ 1 − ε   --- cut value
||v_i||² = 1, ∀ i ∈ {0} ∪ V
ℓ₂²-triangle inequality: |⟨v_i − v_j, v_0⟩| ≤ ||v_i − v_j||²/2, ∀ i, j ∈ V
• Rounding. A hybrid of hyperplane and threshold rounding.
Future directions
• (1 − ε) vs. (1 − O(√ε)) approximation?
• "Global conditions" for other CSPs.
– Balanced Unique Games?
The End.
Any questions?