Iterative Randomized Rounding
Thomas Rothvoß
Department of Mathematics, M.I.T.
Cargèse 2011
Joint work with Jaroslaw Byrka,
Fabrizio Grandoni and Laura Sanità
What is Iterative Randomized Rounding?

Set Cover:
◮ Input: Sets $S_1, \dots, S_m$ over elements $1, \dots, n$; cost $c(S_i)$
◮ Goal: $\min_{I \subseteq [m]} \{ \sum_{i \in I} c(S_i) \mid \bigcup_{i \in I} S_i = [n] \}$

Standard LP:
◮ $\min \sum_{i=1}^{m} c(S_i) \cdot x_i$
  subject to $\sum_{i : j \in S_i} x_i \ge 1 \;\; \forall j \in [n]$ and $x_i \ge 0 \;\; \forall i \in [m]$

[Figure: a fractional LP solution assigning $x_i = 1/2$ to every set]

Known:
◮ Integrality gap is $\Theta(\ln n)$
◮ Suppose $|S_i| \le k$. Then the gap is $\Theta(\ln k)$.
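To make the LP concrete, here is a minimal sketch (instance, names, and code are illustrative, not from the talk) that solves the Set Cover LP with scipy; on the triangle instance below it recovers exactly the fractional solution $x_i = 1/2$ from the figure.

# Set Cover LP via scipy (illustrative instance: 3 elements, 3 sets).
import numpy as np
from scipy.optimize import linprog

sets = [{0, 1}, {1, 2}, {0, 2}]      # S_1, ..., S_m over elements 0..n-1
costs = np.array([1.0, 1.0, 1.0])    # c(S_i)
n, m = 3, len(sets)

# Coverage constraints sum_{i : j in S_i} x_i >= 1; linprog expects
# A_ub @ x <= b_ub, so both sides are negated.
A_ub = np.array([[-1.0 if j in sets[i] else 0.0 for i in range(m)]
                 for j in range(n)])
b_ub = -np.ones(n)

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
print(res.x)   # -> [0.5 0.5 0.5], LP value 1.5; any integral cover costs 2

The gap between 2 and 1.5 on this toy instance is the same phenomenon the $\Theta(\ln n)$ bound quantifies in general.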
Iterative randomized rounding algorithm:
(1) FOR t = 1 TO ∞
(2)   Solve LP → x
(3)   FOR ALL i: Buy $S_i$ with prob. $x_i$ (remove covered elements)
(4)   IF no elements left THEN RETURN all bought sets

[Figure: iterations t = 1, 2, ...: the LP with values $x_i = 1/2$ is re-solved each round, some sets are bought and their elements removed, until everything is covered — SOLUTION]
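A hedged sketch of this loop in Python (the LP solver is passed in as a parameter; its name and interface are assumptions for illustration):

import random

def iterative_randomized_rounding(sets, costs, n, solve_lp):
    # solve_lp(sets, costs, uncovered) -> fractional x over all sets,
    # feasible for the LP restricted to the still-uncovered elements.
    uncovered = set(range(n))
    bought = []
    while uncovered:
        x = solve_lp(sets, costs, uncovered)
        for i, xi in enumerate(x):
            if random.random() < xi:     # buy S_i with probability x_i
                bought.append(i)
                uncovered -= sets[i]     # remove covered elements
    return bought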
Analysis
◮ Let $I^*$ be the optimal Set Cover solution.
◮ $I^t := \{ i \in I^* \mid \text{not yet all elements in } S_i \text{ covered} \}$ (← feasible in step t)
◮ $\Pr[\text{element } j \text{ not covered after } \ln(2k) \text{ it.}] \le e^{-\ln(2k)} = \frac{1}{2k}$
◮ $\Pr[\text{not all el. in } S_i \text{ covered after } \ln(2k) \text{ it.}] \le k \cdot \frac{1}{2k} = \frac{1}{2}$

$E[APX] = \sum_{t \ge 1} E[OPT_f \text{ in step } t] \le \sum_{t \ge 1} E[c(I^t)] = \sum_{i \in I^*} \underbrace{E[\#\text{iterations } S_i \text{ is in } I^t]}_{\le O(\ln k)} \cdot c(S_i) = O(\ln k) \cdot OPT$

[Figure: the sets of $I^*$, shrinking to $I^t$ as their elements get covered after t iterations]
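The first probability bound rests on a one-line per-iteration calculation that the slides leave implicit; a sketch:

\[
\Pr[j \text{ uncovered in one it.}] = \prod_{i : j \in S_i} (1 - x_i)
  \le e^{-\sum_{i : j \in S_i} x_i} \le e^{-1},
\]

using $1 - z \le e^{-z}$ and the LP constraint $\sum_{i : j \in S_i} x_i \ge 1$; since this holds in every round, whatever the current LP is, $\ln(2k)$ rounds give $e^{-\ln(2k)} = \frac{1}{2k}$.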
Steiner Tree
Given:
◮ undirected graph $G = (V, E)$
◮ cost $c : E \to \mathbb{Q}_+$
◮ terminals $R \subseteq V$
Find: Min-cost Steiner tree, spanning R.
$OPT := \min\{ c(S) \mid S \text{ spans } R \}$
W.l.o.g.: c is metric.

[Figure: terminals, Steiner nodes, a Steiner tree, a spanning tree, a terminal spanning tree]

Min-cost terminal spanning tree (MST):
◮ Can be computed in poly-time.
◮ Costs ≤ 2 · OPT (double the edges of an optimal Steiner tree and shortcut an Euler tour over the terminals).
Known results for Steiner tree:
Approximations:
◮ 2-apx (minimum spanning tree heuristic)
◮ 1.83-apx [Zelikovsky '93]
◮ 1.667-apx [Prömel & Steger '97]
◮ 1.644-apx [Karpinski & Zelikovsky '97]
◮ 1.598-apx [Hougardy & Prömel '99]
◮ 1.55-apx [Robins & Zelikovsky '00]
Hardness:
◮ NP-hard even if edge costs ∈ {1, 2} [Bern & Plassmann '89]
◮ no $< \frac{96}{95}$-apx unless NP = P [Chlebik & Chlebikova '02]
Our results:

Theorem
There is a polynomial time 1.39-approximation.
◮ LP-based! (Directed-Component Cut Relaxation)
◮ Algorithmic framework: Iterative Randomized Rounding

Theorem
The Directed-Component Cut Relaxation has an integrality gap of at most 1.55.
◮ First < 2 bound for any LP-relaxation.
Bi-directed cut relaxation
◮ Pick a root $r \in R$
◮ Bi-direct edges

(BCR)  $\min \sum_{e \in E} c(e) z_e$
       $\sum_{e \in \delta^+(U)} z_e \ge 1 \quad \forall U \subseteq V \setminus \{r\} : U \cap R \ne \emptyset$
       $z_e \ge 0 \quad \forall e \in E.$

[Figure: root r, a cut U crossed by the constraint, and a fractional solution with $z_e = 1/2$]

Theorem (Edmonds '67)
$R = V$ ⇒ BCR integral
◮ Integrality gap ≤ 4/3 for quasi-bipartite graphs [Chakrabarty, Devanur, Vazirani '08]
◮ Integrality gap ∈ [1.16, 2]
Components

[Figure: a directed component C with its sink(C)]

◮ $\mathcal{C}$ = set of directed components
Directed component cut relaxation

(DCR)  $\min \sum_{C \in \mathcal{C}} c(C) \cdot x_C$
       $\sum_{C \in \mathcal{C} :\, R(C) \cap U \ne \emptyset,\ sink(C) \notin U} x_C \ge 1 \quad \forall \emptyset \subset U \subseteq R \setminus \{r\}$
       $x_C \ge 0 \quad \forall C \in \mathcal{C}$

[Figure: root r, a cut U, and components with $x_{C_1} = x_{C_2} = x_{C_3} = 1/2$]

Properties:
◮ Number of variables: exponential
◮ Number of constraints: exponential
◮ Approximable within 1 + ε (we ignore the ε here).
Solvability of the LP

Lemma
For any ε > 0, a solution x of cost ≤ (1 + ε) · $OPT_f$ can be computed in polynomial time.

◮ Use only components of size $\le 2^{\lceil 1/\varepsilon \rceil} = O(1)$
  [Borchers & Du '97]: Increases cost by ≤ 1 + ε
  → # variables polynomial
◮ Compact flow formulation → # constraints polynomial
  (or solve with ellipsoid method).
An iterative randomized rounding algo
(1) FOR t = 1, ..., ∞ DO
(2)   Compute opt. LP solution x
(3)   Sample a component: $\Pr[\text{sample } C] = \frac{x_C}{\mathbf{1}^T x}$, and contract it.
(4)   IF all terminals connected THEN output sampled components

[Figure: components $C_1, C_2, C_3$ with $x_{C_i} = 1/2$; in each iteration one component is sampled and its terminals are contracted]

◮ W.l.o.g. $M := \mathbf{1}^T x$ invariant
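A minimal sketch of step (3) in Python (the union-find interface and helper names are assumptions for illustration, not the paper's code): sample C with probability $x_C / M$, then merge its terminals.

import random

def sample_and_contract(components, x, terminals_of, union_find):
    # Pr[C] = x_C / sum(x); random.choices normalizes the weights.
    C = random.choices(components, weights=x)[0]
    reps = [union_find.find(t) for t in terminals_of(C)]
    for r in reps[1:]:
        union_find.union(reps[0], r)   # contract C's terminals into one
    return C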
Roadmap
◮ In one iteration t:
  $E[c(\text{comp. sampled in it. } t)] = \sum_C \frac{x_C}{M} \cdot c(C) \le \frac{1}{M} \cdot OPT \text{ in it. } t$
◮ In total:
  $\sum_{t \ge 1} E[c(\text{comp. sampled in it. } t)] \le \sum_{t \ge 1} \frac{1}{M} \cdot E[OPT \text{ in iteration } t]$

$E[OPT \text{ after } t \text{ it.}] \le (1 - \frac{1}{M})^t \cdot 2 \cdot OPT$
$E[OPT \text{ after } t \text{ it.}] \le (1 - \frac{1}{2M})^t \cdot OPT$

[Plot: expected OPT against t = #iterations, starting at 2·OPT resp. 1·OPT and decaying on the scales 1·M resp. 2·M]
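Plugging either decay bound into the total already gives a constant; e.g. with the second bound (a sketch of the arithmetic, not spelled out on the slides):

\[
\sum_{t \ge 1} \frac{1}{M}\, E[OPT \text{ in it. } t]
  \le \frac{1}{M} \sum_{t \ge 0} \Big(1 - \frac{1}{2M}\Big)^{t} \cdot OPT
  = \frac{1}{M} \cdot 2M \cdot OPT = 2 \cdot OPT .
\]

Combining both decay bounds, as done later in the talk, improves this to 1.5.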
Bridges
◮ Let S be a Steiner tree, C a component

[Figure: the component C attached to the tree S]

◮ Bridges:
  $Br_S(C) = \operatorname{argmax}\{ c(B) \mid B \subseteq S,\ (S \setminus B) \cup C \text{ is connected} \}$
The saving function

Definition
For a Steiner tree S, the saving function $w : E \to \mathbb{Q}_+$ is defined as
$w(u, v) := \max\{ c(e) \mid e \text{ on the } u\text{–}v \text{ path in } S \}.$

[Figure: the u–v path in S, with w(u, v) its most expensive edge]
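A small sketch of computing w on an explicit tree (the adjacency-dict representation is an assumption for illustration): a DFS from u that tracks the bottleneck cost works because the u–v path in a tree is unique.

def saving(tree, cost, u, v):
    # tree: {node: [neighbors]}; cost: {frozenset((a, b)): edge cost}
    stack = [(u, None, 0.0)]          # (node, parent, max cost so far)
    while stack:
        node, parent, best = stack.pop()
        if node == v:
            return best
        for nxt in tree[node]:
            if nxt != parent:
                c = cost[frozenset((node, nxt))]
                stack.append((nxt, node, max(best, c)))
    raise ValueError("u and v are not connected")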
A saving lemma

Lemma
For any component C, ∃ saving tree spanning the terminals of C with
$c(Br_S(C)) = w(\text{saving tree})$

[Figure: a component C over the tree S with bridge costs 1, 2, 5, and the corresponding saving tree whose w-values match]
The Bridge Lemma (1)

Lemma (Bridge Lemma)
For T terminal spanning tree, x LP solution:
$\sum_{C \in \mathcal{C}} x_C \cdot c(Br_T(C)) \ge c(T)$

(Equivalently: $E[c(\text{terminal spanning tree after 1 sampling step})] \le (1 - \frac{1}{M}) \cdot c(T)$.)

◮ For any C, ∃ saving tree:
  $c(Br_T(C)) = w(\text{saving tree of } C)$
◮ Transfer capacity from each component to its saving tree
  → capacity reservation $y : E \to \mathbb{Q}_+$ with
  $\sum_{C \in \mathcal{C}} x_C \cdot c(Br_T(C)) = w(y)$

[Figure: root r, the tree T with LP values 1/2, and the capacity reservation y being built edge by edge]
The Bridge Lemma (2)

$\sum_{C \in \mathcal{C}} x_C \cdot c(Br_T(C)) = w(y) \;\overset{\text{Edmonds Thm}}{\ge}\; w(F) \;\overset{\text{Cycle rule}}{\ge}\; c(T)$

where F is a terminal spanning tree supported by the capacity reservation y.

[Figure: root r, the tree T, and the spanning tree F extracted from y]
A 1st bound on OPT

Lemma
$E[OPT \text{ after it. } t] \le (1 - \frac{1}{M})^t \cdot 2 \cdot OPT.$

◮ Initially c(MST) ≤ 2 · OPT
◮ In any iteration
  $E[c(\text{new MST})] \le c(\text{old MST}) - E[c(Br_{\text{old MST}}(C))]$
  $= c(\text{old MST}) - \frac{1}{M} \underbrace{\sum_{C \in \mathcal{C}} x_C \cdot c(Br_{\text{old MST}}(C))}_{\ge\, c(\text{old MST})}$
  $\le (1 - \frac{1}{M}) \cdot c(\text{old MST})$
A 2nd bound on OPT

Theorem
In any iteration
$E[\text{new OPT}] \le (1 - \frac{1}{2M}) \cdot \text{old OPT}$

◮ Let S be opt. Steiner tree
◮ From each inner node in S: Contract the cheapest edge going to a child
◮ A terminal spanning tree T remains
  (only one cheapest child edge per inner node is contracted, so the contracted edges cost at most $\frac{1}{2} c(S)$, i.e. $c(T) \ge \frac{1}{2} c(S)$)

$E[\text{save on } S] \ge E[\text{save on } T] \overset{\text{Bridge Lem}}{\ge} \frac{1}{M} \cdot \underbrace{c(T)}_{\ge \frac{1}{2} c(S)} \ge \frac{1}{2M} \cdot c(S)$

[Figure: the optimal Steiner tree S and the terminal spanning tree T left after the contractions]
The approximation guarantee

Theorem
$E[APX] \le (1.5 + \varepsilon) \cdot OPT.$

$E[OPT \text{ after } t \text{ it.}] \le (1 - \frac{1}{M})^t \cdot 2 \cdot OPT \le 2e^{-t/M} \cdot OPT$
$E[OPT \text{ after } t \text{ it.}] \le (1 - \frac{1}{2M})^t \cdot OPT \le e^{-t/(2M)} \cdot OPT$

◮ Cost of sampled components:
  $\sum_{t=1}^{\infty} \frac{1}{M} \cdot E[OPT \text{ in it. } t] \;\overset{M \to \infty}{\longrightarrow}\; OPT \cdot \int_0^\infty \min\{2e^{-x}, e^{-x/2}\}\, dx = 1.5 \cdot OPT$

[Plot: the two decay curves $2e^{-t/M}$ and $e^{-t/(2M)}$ against t = #iterations; the cost is the area under their minimum]
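The integral evaluates in two pieces (a quick verification, not on the slides): the curves cross where $2e^{-x} = e^{-x/2}$, i.e. at $x = 2\ln 2$, and $e^{-x/2}$ is the smaller one before that point.

\[
\int_0^\infty \min\{2e^{-x}, e^{-x/2}\}\, dx
  = \int_0^{2\ln 2} e^{-x/2}\, dx + \int_{2\ln 2}^{\infty} 2e^{-x}\, dx
  = 2\big(1 - e^{-\ln 2}\big) + 2e^{-2\ln 2}
  = 1 + \tfrac{1}{2} = 1.5 .
\]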
A generalized bridge lemma

[Figure: a terminal spanning tree T and three components, each with LP value 1/2 and hence sampled with pr. 1/3; every edge of T is assigned a removal probability (pr. 0 or pr. 1) per component]

◮ Observe: Each edge in T removed with prob $2 \cdot \frac{1}{3} = \frac{1}{M}$!
◮ To show: We can always find these probabilities!
A generalized bridge lemma (2)

Lemma
Let T be terminal spanning tree. Sample C from LP solution x. There are $B \subseteq T$ (dep. on C) s.t.
◮ $(T \setminus B) \cup C$ spans all terminals
◮ $\Pr[e \in B] \ge \frac{1}{M}$ $\forall e \in T$.

(1)  $\sum_{B : (T \setminus B) \cup C \text{ conn.}} \Pr[\text{rem. } B \mid C] = 1 \quad \forall C$
     $\sum_{B \ni e,\, C} \Pr[C] \cdot \Pr[\text{rem. } B \mid C] \ge \frac{1}{M} \quad \forall e$

◮ Suppose system (1) has no non-negative solution.
◮ Farkas Lemma: System (2) has a solution $(y, c) \ge 0$:

(2)  $y_C \ge x_C \cdot c(B) \quad \forall B : (T \setminus B) \cup C \text{ conn.}$
     $\sum_C y_C < c(T)$

◮ Contradiction to Bridge Lemma!
The 1.39 bound
◮ Let $S^*$ be the optimum Steiner tree.
◮ Goal: Define Steiner tree $S^t \subseteq S^*$ after t iterations with
  $E[t : e \in S^t] \le 1.39 \cdot M$.
◮ For every internal node in $S^*$: Mark a random outgoing edge.
◮ Consider cycles $S^* \cup \{u, v\}$ containing exactly one marked edge.
◮ Such edges {u, v} induce a terminal spanning tree T

[Figure: $S^*$ with one marked outgoing edge per internal node, the induced cycles, and the resulting terminal spanning tree T]

◮ Def $S^t$: e ∈ T not deleted ⇒ keep edges in corr. cycle in $S^*$
◮ Random process deletes an edge $e \in T$ with pr. $\frac{1}{M}$ per it.
◮ $E[t : e \text{ deleted}] \le M$
◮ $E[t : e_1, \dots, e_k \text{ deleted}] \le (1 + \frac{1}{2} + \dots + \frac{1}{k}) \cdot M = H(k) \cdot M$
  → Coupon Collector Theorem
◮ $E[t : e \text{ deleted}] \le H(\#\text{cycles through } e) \cdot M$
◮ $\Pr[e \text{ in } k \text{ cycles}] = (\frac{1}{2})^k$, hence
  $E[t : e \text{ deleted}] \le \sum_{k \ge 1} \big(\tfrac{1}{2}\big)^k \cdot H(k) \cdot M = \ln(4) \cdot M \approx 1.39 \cdot M.$
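The final sum has a closed form via the generating function of the harmonic numbers (a verification sketch, not on the slides):

\[
\sum_{k \ge 1} H(k)\, z^k = \frac{-\ln(1 - z)}{1 - z}
\quad\Longrightarrow\quad
\sum_{k \ge 1} \Big(\tfrac{1}{2}\Big)^k H(k) = \frac{-\ln(1/2)}{1/2} = 2\ln 2 = \ln(4) \approx 1.386 .
\]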
Open problems

Open Problem I
1.01 ≤ Steiner tree approximability ≤ 1.39

Open Problem II
Is there an iterative randomized rounding approach for Facility Location or k-Median?

Open Problem III
Is there an iterative randomized rounding approach for ATSP?
(1) Solve Held-Karp relaxation:
    $\min c^T x$
    $x(\delta^+(S)) \ge 1 \quad \forall \emptyset \subset S \subset V$
    $x(\delta^+(v)) = x(\delta^-(v)) = 1 \quad \forall v \in V$
    $x_e \ge 0 \quad \forall e \in E$
(2) Sample a collection of cycles $\mathcal{C}$ from $x^*$.
(3) Show $E[c(\mathcal{C})] \le 1000 \cdot OPT$
(4) Show $E[OPT \text{ after contracting } \mathcal{C}] \le 0.999 \cdot OPT$.
This would yield an O(1)-apx.
Thanks for your attention