Approximation Schemes via Sherali-Adams Hierarchy for Dense Constraint Satisfaction Problems and Assignment Problems

Yuichi Yoshida (NII & PFI)
Yuan Zhou (CMU)
Constraint satisfaction problems (CSPs)
• In Max-kCSP, given:
– a set of variables: V = {v1, v2, v3, …, vn}
– the domain of variables: D
– a set of arity-k “local” constraints: C
• Goal: find an assignment α : V → D to maximize
#satisfied constraints in C
max_{α : V → D}  Σ_{(i1,…,ik) ∈ C}  p_{i1,…,ik}(α(vi1), α(vi2), …, α(vik))
Constraint satisfaction problems (CSPs)
• Examples: MaxCut, Max-3SAT, UniqueGames, …
• MaxCut as a Max-2CSP:
– D = {0, 1}
– p_{i,j}(a, b) = 1[a ≠ b], i.e., constraint (i, j) is satisfied when vi and vj receive different values (see the sketch below)
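To make the objective concrete, here is a minimal Python sketch (illustrative only, not from the slides) that evaluates the MaxCut objective for a given assignment; the function name and data layout are ours.

```python
# Illustrative sketch: the Max-kCSP objective for MaxCut, where k = 2,
# D = {0, 1} and the payoff is p_{i,j}(a, b) = 1[a != b].

def maxcut_value(edges, alpha):
    """Number of satisfied constraints (cut edges) under alpha: V -> {0, 1}."""
    return sum(1 for (i, j) in edges if alpha[i] != alpha[j])

# Example: a 4-cycle; the alternating assignment cuts all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
alpha = {0: 0, 1: 1, 2: 0, 3: 1}
print(maxcut_value(edges, alpha))  # 4
```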
Assignment problems (APs)
• In Max-kAP, given
– a set of variables V = {v1, v2, v3, …, vn}
– a set of arity-k “local” constraints C
• Goal: find a bijection π : V → {1, 2, …, n} (i.e. a
permutation) to maximize #satisfied constraints in C
max_{π : V → [n]}  Σ_{(i1,…,ik) ∈ C}  p_{i1,…,ik}(π(vi1), π(vi2), …, π(vik))
Assignment problems (APs)
• Examples
– MaxAcyclicSubgraph (MAS) (see the sketch below)
• π(u) < π(v)
– Betweenness
• π(u) < π(v) < π(w) or π(w) < π(v) < π(u)
– MaxGraphIsomorphism (Max-GI)
• (π(u), π(v)) ∈ E(H), where H is a fixed graph
– DensekSubgraph (DkS)
• (π(u), π(v)) ∈ E(Kk), where Kk is a k-clique
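For the same kind of concreteness, here is a minimal Python sketch (again illustrative, not from the slides) of the Max-kAP objective for MaxAcyclicSubgraph; the helper name and data layout are ours.

```python
# Illustrative sketch: the Max-kAP objective for MaxAcyclicSubgraph (k = 2).
# A constraint (u, v) is satisfied when the permutation places u before v.

def mas_value(arcs, pi):
    """Number of arcs (u, v) with pi[u] < pi[v], for a bijection pi: V -> {1, ..., n}."""
    return sum(1 for (u, v) in arcs if pi[u] < pi[v])

# Example: a directed triangle; every ordering satisfies exactly 2 of the 3 arcs.
arcs = [(0, 1), (1, 2), (2, 0)]
pi = {0: 1, 1: 2, 2: 3}
print(mas_value(arcs, pi))  # 2
```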
Approximation schemes
• Max-kCSP and Max-kAP are NP-hard in general
• Polynomial-time approximation scheme (PTAS): for
any constant ε > 0, the algorithm runs in n^{O(1)} time
and gives a (1-ε)-approximation
• Quasi-PTAS: the algorithm runs in n^{O(log n)} time
• Max-kCSP/Max-kAP admit a PTAS or quasi-PTAS when
the instance is “dense” or “metric”
PTAS for dense/metric Max-kCSP
• Max-kCSP is dense: it has Ω(n^k) constraints.
– PTAS for dense MaxCut [dlV96]
– PTAS for dense Max-kCSP [AKK99, FK96, AdlVKK03]
• Max-2CSP is metric:
the edge weight ω satisfies ω(u, v) ≤ ω(u, w) + ω(w, v)
– PTAS for metric MaxCut [dlVK01]
– PTAS for metric MaxBisection [FdlVKK04]
– PTAS for locally dense Max-kCSP (a generalized definition
of “metric”) [dlVKKV05]
Quasi-PTAS for dense Max-kAP
• Max-kAP is dense:
– roughly speaking, the instance has Ω(n^k) constraints
• In [AFK02]
– (1-ε)-approximate dense MAS, Betweenness in n^{O(1/ε^2)} time
– (1-ε)-approximate dense DkS, Max-GI, Max-kAP in n^{O((log n)/ε^2)} time
Previous techniques
• Exhaustive search on a small set of variables [AKK99]
• Weak Szemerédi regularity lemma [FK96]
• Copying important variables [dlVK01]
• A variant of SVD [dlVKKV05]
• Linear programming relaxation for “assignment
problems with extra constraints” [AFK02]
• In this paper, we show:
The standard Sherali-Adams LP relaxation hierarchy is a
unified approach to all these results!
Sherali-Adams LP relaxation hierarchy
• A systematic way to write tighter and tighter LP
relaxations: [SA90]
• In an r-round SA LP relaxation,
– For each set S of at most r variables, we have a
distribution over assignments μS = μ{v1, …, vr}
– For any two sets S and T, the marginal distributions
on S∩T are consistent: μS|S∩T = μT|S∩T
• Solving an r-round SA LP relaxation takes n^{O(r)} time (a toy example is sketched below).
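To make the hierarchy concrete, here is a minimal sketch of the r-round Sherali-Adams LP for a toy MaxCut instance. It uses the PuLP modeling library as an assumed dependency (any LP interface would do); the helper name sherali_adams_maxcut and the data layout are illustrative, not taken from the paper.

```python
import itertools
import pulp  # assumed dependency: any LP modeling interface would work here

def sherali_adams_maxcut(n, edges, r):
    """r-round Sherali-Adams LP relaxation for MaxCut on n vertices (domain D = {0, 1})."""
    prob = pulp.LpProblem("SA_MaxCut", pulp.LpMaximize)

    # One distribution mu_S over {0,1}^S for every non-empty set S with |S| <= r.
    mu = {}
    for size in range(1, r + 1):
        for S in itertools.combinations(range(n), size):
            assignments = list(itertools.product((0, 1), repeat=size))
            for a in assignments:
                name = "mu_" + "_".join(map(str, S)) + "__" + "".join(map(str, a))
                mu[S, a] = pulp.LpVariable(name, lowBound=0, upBound=1)
            # Each mu_S is a probability distribution.
            prob += pulp.lpSum(mu[S, a] for a in assignments) == 1

    # Consistency: marginalizing mu_S over the value of v gives mu_{S minus {v}}.
    for size in range(2, r + 1):
        for S in itertools.combinations(range(n), size):
            for idx, v in enumerate(S):
                T = S[:idx] + S[idx + 1:]
                for b in itertools.product((0, 1), repeat=size - 1):
                    prob += pulp.lpSum(
                        mu[S, b[:idx] + (d,) + b[idx:]] for d in (0, 1)
                    ) == mu[T, b]

    # Objective: expected number of cut edges under the pairwise distributions.
    prob += pulp.lpSum(
        mu[tuple(sorted((u, v))), (a, b)]
        for (u, v) in edges
        for (a, b) in ((0, 1), (1, 0))
    )

    prob.solve()
    return pulp.value(prob.objective)

# Example: the 2-round relaxation of MaxCut on a triangle (needs r >= 2 = k).
# The integral optimum cuts 2 of the 3 edges; the low-round LP value may be larger.
print(sherali_adams_maxcut(3, [(0, 1), (1, 2), (2, 0)], r=2))
```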
Our results
• Sherali-Adams LP-based proof for known results
– O(1/ε^2)-round SA LP relaxation gives a (1-ε)-approximation
to dense or locally dense Max-kCSP, and to Max-kCSP with
global cardinality constraints such as MaxBisection
– O((log n)/ε^2)-round SA LP relaxation gives a (1-ε)-approximation
to dense or locally dense Max-kAP
• New algorithms
– Quasi-PTAS for Max-k-HypergraphIsomorphism when one hypergraph
is dense and the other one is locally dense
Our techniques
• Solve the Sherali-Adams LP relaxation for sufficiently
many rounds (Ω(1/ε^2) or Ω((log n)/ε^2))
• Randomized conditioning operation to bring down
the pair-wise correlations
• Independent rounding for Max-kCSP
• Special rounding for Max-kAP
Conditioning operation
• Randomly choose v from V, sample a ~ μv
• For each local distribution μ{v1, …, vr}, generate the new local
distribution μ{v1, …, vr}|v=a
• r-round SA solution → (r-1)-round SA solution (see the sketch below)
• Essentially from [RT12]:
– after t steps of conditioning, on average,
μ{v1, …, vk} is only (1/√t)-far from μ{v1} × … × μ{vk}
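A minimal sketch of the conditioning step, assuming the local distributions are stored as a dict mapping a tuple S of variables to a dict from assignment tuples to probabilities; this data layout and the helper names are illustrative, not the authors' code.

```python
import random

def condition_on(local_dists, v, a):
    """Condition the SA solution on v = a.

    For every set S containing v, the new distribution on T (S with v removed)
    is mu_S conditioned on v = a; an r-round solution becomes an (r-1)-round solution.
    """
    new_dists = {}
    for S, dist in local_dists.items():
        if v not in S:
            continue
        idx = S.index(v)
        T = S[:idx] + S[idx + 1:]
        if not T:
            continue  # the singleton {v} itself disappears after conditioning
        cond = {}
        for assignment, p in dist.items():
            if assignment[idx] == a:
                key = assignment[:idx] + assignment[idx + 1:]
                cond[key] = cond.get(key, 0.0) + p
        total = sum(cond.values())  # equals mu_{v}(a) > 0 when a was sampled from mu_{v}
        new_dists[T] = {b: p / total for b, p in cond.items()}
    return new_dists

def random_conditioning_step(local_dists, variables):
    """Pick a uniformly random v and sample a ~ mu_{v}, then condition on v = a."""
    v = random.choice(variables)
    dist_v = local_dists[(v,)]
    keys = list(dist_v)                                   # keys are 1-tuples such as (0,)
    sampled = random.choices(keys, weights=[dist_v[t] for t in keys])[0]
    return condition_on(local_dists, v, sampled[0])
```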
Independent rounding for Max-kCSP
After Ω(1/ε^2) steps of conditioning,
on average, μ{v1, …, vk} is only ε-far from μ{v1} × … × μ{vk}
Sample each v from μ{v}, and we have
E_{(a1,…,ak) ~ μ{v1,…,vk}} [ p_{(v1,…,vk)}(a1,…,ak) ]
= E_{(a1,…,ak) ~ μ{v1} × … × μ{vk}} [ p_{(v1,…,vk)}(a1,…,ak) ] ± ε
Therefore,
E[rounding solution] = [LP value] ± ε · n^k
This is a (1-O(ε))-(multiplicative) approximation because
of the density
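A minimal sketch of the independent rounding step, in the same illustrative data layout as the conditioning sketch above: each remaining variable is sampled from its own marginal μ{v} (variables already fixed by conditioning keep their fixed values).

```python
import random

def independent_rounding(local_dists, variables):
    """Sample each variable independently from its singleton marginal mu_{v}."""
    alpha = {}
    for v in variables:
        dist_v = local_dists[(v,)]
        keys = list(dist_v)
        sampled = random.choices(keys, weights=[dist_v[t] for t in keys])[0]
        alpha[v] = sampled[0]  # keys are 1-tuples such as (0,)
    return alpha
```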
Rounding for Max-kAP
• Independent sampling does not work:
– the objective value is good, but the resulting assignment might
not be a permutation because of collisions
• Our special rounding:
– View {μ{v}(w)}v,w as a doubly stochastic matrix, and hence
as a distribution over permutations
– Distribution supported on one permutation → done ✔
– Two permutations? → Merge them (a similar operation appears in [AFK02])
– Even more permutations? → Pick an arbitrary two, merge them, and iterate
Merging two permutations
1. View the two permutations as disjoint cycles
2. Break long cycles (length > n^{1/2}) into short ones (length ≤ n^{1/2})
3. In each cycle, choose Permutation 1 / Permutation 2 independently (a code sketch follows the analysis below)
Analysis
• Step 2 modifies O(n^{1/2}) entries of Permutation 2,
affecting an O(n^{-1/2}) fraction of the constraints
• Step 3: the value of every constraint whose variables lie in
distinct cycles is preserved in expectation, because the
per-cycle choices are independent
– all but an n^{-1/2} fraction of the constraints are of this type
• Conclusion: we obtain a permutation whose objective value is at least
(1 – O(n^{-1/2})) · [value of independent sampling]
≥ (1 – O(n^{-1/2})) · (1 – O(ε)) · [LP value]
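A minimal sketch of the three-step merging procedure, assuming the two permutations are given as 0-indexed Python lists pi1, pi2 (pi[v] is the position of variable v); in this layout the alternating cycles of step 1 are the cycles of pi2^{-1} ∘ pi1. This is an illustrative reconstruction of the steps above, not the authors' code.

```python
import math
import random

def merge_permutations(pi1, pi2):
    """Merge two permutations of {0, ..., n-1} following the three steps above."""
    n = len(pi1)
    limit = max(1, math.isqrt(n))        # target cycle length ~ n^{1/2}
    pi2 = list(pi2)                      # step 2 edits a copy of Permutation 2

    def cycles(perm_a, perm_b):
        # Cycles of sigma = perm_b^{-1} o perm_a; within each cycle the two
        # permutations use the same set of positions, so mixing them per cycle
        # still yields a valid permutation.
        inv_b = [0] * n
        for v, p in enumerate(perm_b):
            inv_b[p] = v
        sigma = [inv_b[perm_a[v]] for v in range(n)]
        seen, out = [False] * n, []
        for start in range(n):
            if not seen[start]:
                cyc, v = [], start
                while not seen[v]:
                    seen[v] = True
                    cyc.append(v)
                    v = sigma[v]
                out.append(cyc)
        return out

    # Steps 1-2: break cycles longer than n^{1/2} into segments of length <= n^{1/2}
    # by redirecting pi2 at each segment start; this edits O(n^{1/2}) entries in total.
    for cyc in cycles(pi1, pi2):
        if len(cyc) > limit:
            for i in range(0, len(cyc), limit):
                segment = cyc[i:i + limit]
                pi2[segment[0]] = pi1[segment[-1]]   # close the segment into its own cycle

    # Step 3: in each (now short) cycle, use Permutation 1 or Permutation 2
    # with probability 1/2, independently across cycles.
    merged = [None] * n
    for cyc in cycles(pi1, pi2):
        chosen = pi1 if random.random() < 0.5 else pi2
        for v in cyc:
            merged[v] = chosen[v]
    return merged

# Tiny usage example: merging two permutations of {0, ..., 5}.
print(merge_permutations([0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 0]))
```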
Future directions
• Can we solve the Sherali-Adams LP faster (as in
[GS12]) to get a PTAS for dense assignment problems?
Thanks