The Entropy Rounding Method in Approximation Algorithms

Thomas Rothvoß
Department of Mathematics, M.I.T.
Cargèse 2011
A general rounding problem

Problem:
◮ Given: A ∈ R^{n×m}, fractional solution x ∈ [0,1]^m
◮ Find: y ∈ {0,1}^m with Ax ≈ Ay

General strategies:
◮ Randomized rounding: flip Pr[y_i = 1] = x_i (in)dependently
◮ Use properties of basic solutions: |supp(x)| ≤ #rows(A)

Try another way:
◮ “Entropy rounding method”, based on discrepancy theory

Application:
◮ An (OPT + O(log² OPT))-algorithm for Bin Packing With Rejection
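For intuition, here is a quick sketch (not from the talk) of why plain randomized rounding is too weak: E[Ay] = Ax holds exactly, but single rows already deviate by about a standard deviation. The instance below is hypothetical and random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: random 0/1 matrix and fractional solution.
n, m = 20, 400
A = rng.integers(0, 2, size=(n, m)).astype(float)
x = rng.uniform(0, 1, size=m)

# Independent randomized rounding: Pr[y_i = 1] = x_i.
y = (rng.uniform(0, 1, size=m) < x).astype(float)

# E[Ay] = Ax holds exactly, but a single row deviates by roughly sqrt(m).
print("max row deviation |Ax - Ay|:", np.abs(A @ x - A @ y).max())
```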
Discrepancy theory

◮ Set system S = {S_1, ..., S_m}, S_i ⊆ [n]
◮ Coloring χ : [n] → {−1, +1}
◮ Discrepancy

      disc(S) = min_{χ:[n]→{±1}} max_{S∈S} |χ(S)|,   where χ(S) = Σ_{i∈S} χ(i).

[Figure: elements i of a set S, each colored +1 or −1]

Known results:
◮ n sets, n elements: disc(S) = O(√n) [Spencer ’85]
◮ Every element in ≤ t sets: disc(S) < 2t [Beck & Fiala ’81]
  Conjecture: disc(S) ≤ O(√t)

More definitions:
◮ Partial coloring: χ : [n] → {0, −1, +1}
◮ Half coloring: χ : [n] → {0, −1, +1} with |supp(χ)| ≥ n/2
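Since disc(S) is a minimum over all 2^n colorings, tiny instances can be checked by brute force. A small illustrative sketch (exponential in n, for intuition only):

```python
from itertools import product

def disc(sets, n):
    """min over colorings chi in {-1,+1}^n of max over S of |sum_{i in S} chi(i)|."""
    return min(max(abs(sum(chi[i] for i in S)) for S in sets)
               for chi in product((-1, +1), repeat=n))

# The 3x3 example from the next slide: S1 = {0,1}, S2 = {1,2}, S3 = {0,2}.
# Any 2-coloring of 3 elements gives two elements the same sign, so disc = 2.
print(disc([{0, 1}, {1, 2}, {0, 2}], 3))  # prints 2
```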
Matrix discrepancy

◮ Matrix A:

      disc(A) = min_{χ∈{±1}^n} ||Aχ||_∞

Example: for the incidence matrix of a set system (one row per set S, one column per element i),

          ⎛1 1 0⎞
      A = ⎜0 1 1⎟ ,        disc(S) = disc(A).
          ⎝1 0 1⎠

Theorem (Lovász, Spencer & Vesztergombi ’86)
Finding good colorings ⇒ for all x ∈ [0,1]^m one can find y ∈ {0,1}^m with ||Ax − Ay||_∞ small.
Entropy rounding (simple version)

◮ Input: A ∈ R^{n×m}, x ∈ [0,1]^m
◮ Assume: every submatrix A′ ⊆ A admits a half-coloring χ with ||A′χ||_∞ ≤ ∆
◮ Output: y ∈ {0,1}^m with ||Ax − Ay||_∞ ≤ O(log m) · ∆

(1) y := x
(2) FOR phase k = last bit TO 1 DO
(3)   Call y_i active if its kth bit is 1
(4)   Find half-coloring χ : active vars → {−1, +1, 0}
(5)   Update y′ := y + (1/2)^k · χ
(6)   REPEAT WHILE ∃ active var.

Example (binary notation, phase k = 2; y_1, y_3, y_4 are active):

      y  = ( 0.11  0.10  0.11  0.01 )
      χ  = (  +1         −1     0   )
      y′ = ( 1.00  0.10  0.10  0.01 )

Analysis:
◮ A phase has ≤ log m iterations, since each half-coloring clears the kth bit of at least half of the active variables
◮ During phase k: ||Ay′ − Ay||_∞ = (1/2)^k ||Aχ||_∞ ≤ (1/2)^k ∆
◮ Triangle inequality:

      ||Ax − Ay||_∞ ≤ Σ_{k≥1} Σ_{t=1}^{log m} (1/2)^k · ∆ = O(log m) · ∆
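A minimal runnable sketch of this loop, with the half-coloring oracle of line (4) instantiated, purely for illustration, by brute force; the names half_coloring and entropy_round are ours, and a real oracle would come from the entropy method developed below:

```python
import itertools
import numpy as np

def half_coloring(A):
    """Brute-force stand-in for the half-coloring oracle: search chi in
    {0,-1,+1}^m with |supp(chi)| >= m/2 minimizing ||A chi||_inf.
    Exponential -- only a placeholder for the entropy-method oracle."""
    m = A.shape[1]
    best, best_val = None, float("inf")
    for chi in itertools.product((-1, 0, 1), repeat=m):
        if sum(c != 0 for c in chi) < m / 2:
            continue
        val = np.abs(A @ np.array(chi, dtype=float)).max()
        if val < best_val:
            best, best_val = np.array(chi, dtype=float), val
    return best

def entropy_round(A, x, bits=8):
    """Phase-by-phase rounding of x in [0,1]^m to y in {0,1}^m."""
    y = np.round(x * 2**bits) / 2**bits   # finite precision, as in the talk
    for k in range(bits, 0, -1):          # phase k = last bit, ..., 1
        while True:
            active = (np.round(y * 2**k).astype(int) % 2) == 1
            if not active.any():
                break
            chi = half_coloring(A[:, active])
            y[active] += 0.5**k * chi     # clears the kth bit on supp(chi)
    return y

A = np.array([[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 0]], dtype=float)
x = np.array([0.3, 0.7, 0.5, 0.2])
y = entropy_round(A, x)
print(y, np.abs(A @ x - A @ y).max())
```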
Entropy

Definition
For a random variable Z, the entropy is

      H(Z) = Σ_z Pr[Z = z] · log₂(1/Pr[Z = z])

Example: Pr[Z = a] = p and Pr[Z = b] = 1 − p:

      H(Z) = p · log(1/p) + (1 − p) · log(1/(1 − p))

[Figure: plot of H(Z) as a function of p (maximum 1.0 at p = 1/2), next to a plot of x · log(1/x) on [0,1]]

Properties:
◮ Uniform distribution maximizes entropy: if Z attains k distinct values, then H(Z) ≤ log₂(k) (attained if Pr[Z = z] = 1/k for all z).
◮ One likely event: ∃z : Pr[Z = z] ≥ (1/2)^{H(Z)}
◮ Subadditivity: H(f(Z, Z′)) ≤ H(Z) + H(Z′).
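The definition in code, as a small illustrative helper:

```python
import math

def entropy(probs):
    """H(Z) = sum_z Pr[Z=z] * log2(1/Pr[Z=z]), with 0 * log(1/0) := 0."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 = log2(2): uniform on 2 values
print(entropy([0.25] * 4))   # 2.0 = log2(4): uniform on 4 values
print(entropy([0.9, 0.1]))   # ~0.469: one likely event => small entropy
```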
Chernoff-type bound

Lemma
Let X_1, ..., X_n be independent random variables with Pr[X_i = ±α_i] = 1/2. Then

      Pr[ |Σ_{i=1}^n X_i| ≥ λ · ||α||_2 ] ≤ 2e^{−λ²/2}.

(The special case α_i = 1 gives the familiar bound Pr[|Σ_i X_i| ≥ λ√n] ≤ 2e^{−λ²/2}.)

◮ Standard deviation:

      sqrt(Var[Σ_i X_i]) = sqrt(Σ_i E[(X_i − E[X_i])²]) = sqrt(Σ_{i=1}^n α_i²) = ||α||_2
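A quick Monte Carlo sanity check of the lemma (illustrative only; the sample sizes are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

alpha = rng.uniform(-1, 1, size=100)
lam = 2.0
# 20000 independent draws of (X_1, ..., X_n) with X_i = +-alpha_i.
signs = rng.choice(np.array([-1.0, 1.0]), size=(20_000, alpha.size))
sums = signs @ alpha
empirical = np.mean(np.abs(sums) >= lam * np.linalg.norm(alpha))
print(empirical, "<=", 2 * math.exp(-lam**2 / 2))  # ~0.05 <= ~0.27
```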
An isoperimetric inequality

Lemma (Special case of Isoperimetric Inequality – Kleitman ’66)
For any X ⊆ {0,1}^n of size |X| ≥ 2^{0.8n} and n ≥ 2, there are x, y ∈ X with ||x − y||_1 ≥ n/2.

[Figure: two far-apart points x, y of X in the hypercube]

◮ Proof with a weaker constant (distance ≥ n/10): the ball of radius n/10 around any fixed point contains at most

      Σ_{0≤q<n/10} C(n, q) ≤ (en/(n/10))^{n/10} = (10e)^{n/10} < 2^{0.8n}

  points, so X cannot be contained in one such ball; in particular some x, y ∈ X have ||x − y||_1 > n/10.
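A numeric check of that counting step (illustrative):

```python
import math

# sum_{q < n/10} C(n, q) < 2^(0.8 n) for a few values of n.
for n in (50, 100, 200):
    ball = sum(math.comb(n, q) for q in range(math.ceil(n / 10)))
    print(n, ball < 2**(0.8 * n))  # True, True, True
```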
Theorem [Beck’s entropy method]

      H_{χ_i∈{±1}}( ⌈Aχ/(2∆)⌉ ) ≤ m/5   ⇒   ∃ half-coloring χ0 : ||Aχ0||_∞ ≤ ∆.

[Figure: the 2^m colorings {±1}^m are mapped by χ ↦ Aχ into a grid of cells of side length 2∆ in the (A_1χ, A_2χ)-plane; ⌈Aχ/(2∆)⌉ records the cell]

Proof:
◮ ∃ cell : Pr[Aχ ∈ cell] ≥ (1/2)^{m/5} (“one likely event”).
◮ At least 2^m · (1/2)^{m/5} = 2^{0.8m} colorings χ have Aχ ∈ cell.
◮ Pick χ1, χ2 in that cell differing in half of their entries (isoperimetric inequality).
◮ Define χ0(i) := (1/2)(χ1(i) − χ2(i)) ∈ {0, ±1}; this is a half-coloring.
◮ Then ||Aχ0||_∞ ≤ (1/2) ||Aχ1 − Aχ2||_∞ ≤ ∆.
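The pigeonhole step can be acted out by brute force for tiny matrices (an illustrative sketch; the helper beck_half_coloring is ours, and the half-support property, which the entropy hypothesis guarantees, is not checked here):

```python
from collections import defaultdict
from itertools import product
import numpy as np

def beck_half_coloring(A, delta):
    """Bucket all colorings by their cell of side 2*delta, take the fullest
    cell, and halve the difference of two far-apart colorings in it."""
    m = A.shape[1]
    cells = defaultdict(list)
    for chi in product((-1.0, 1.0), repeat=m):
        key = tuple(np.floor(A @ np.array(chi) / (2 * delta)).astype(int))
        cells[key].append(np.array(chi))
    popular = max(cells.values(), key=len)
    chi1 = popular[0]
    chi2 = max(popular, key=lambda c: int(np.sum(c != chi1)))
    # Same cell => |A_i chi1 - A_i chi2| < 2*delta => ||A chi0||_inf < delta.
    return (chi1 - chi2) / 2

A = np.array([[1, 1, 0, 1, 0], [0, 1, 1, 0, 1]], dtype=float)
chi0 = beck_half_coloring(A, delta=1.0)
print(chi0, np.abs(A @ chi0).max())
```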
A slight generalization

Theorem
For any auxiliary function f(χ) with ||Aχ − f(χ)||_∞ ≤ ∆:

      H_{χ_i∈{±1}}(f(χ)) ≤ m/5   ⇒   ∃ half-coloring χ0 : ||Aχ0||_∞ ≤ ∆.

[Figure: as before, but each point Aχ is replaced by its approximation f(χ), which stays within distance ∆ of Aχ]
A bound on the entropy

Lemma
For α ∈ R^m and ∆ > 0:

      H_{χ_i∈{±1}}( ⌈α^T χ/(2∆)⌉ ) ≤ G(λ),   λ := ∆/||α||_2,

where

      G(λ) = 9e^{−λ²/5}           if λ ≥ 2
           = log₂(32 + 64/λ)      if λ < 2

[Figure: the bound G(λ) behaves like O(log(1/λ)) + O(1) for small λ and decays like e^{−Ω(λ²)} for large λ]
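The bound as a small illustrative helper (reused in a later sketch):

```python
import math

def G(lam):
    """Entropy bound from the lemma, as a function of lambda = Delta/||alpha||_2."""
    if lam >= 2:
        return 9 * math.exp(-lam**2 / 5)
    return math.log2(32 + 64 / lam)

print(G(4))     # ~0.37: large lambda, exponentially small entropy
print(G(0.25))  # ~8.2:  small lambda, entropy grows only like log(1/lambda)
```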
Proof – Case λ ≥ 2

[Figure: distribution of α^T χ with the intervals (−3∆, −∆), (−∆, ∆), (∆, 3∆) labeled Z = −1, Z = 0, Z = 1, next to the plot of x · log(1/x)]

◮ Recall: ∆ = λ · ||α||_2 with λ ≥ 2 and Z = ⌈α^T χ/(2∆)⌉, so

      H(Z) = Σ_{i∈ℤ} Pr[Z = i] · log(1/Pr[Z = i])

◮ Pr[Z = 0] ≥ 1 − e^{−Ω(λ²)}, hence Pr[Z = 0] · log(1/Pr[Z = 0]) ≤ e^{−Ω(λ²)}
◮ For i ≠ 0: Pr[Z = i] ≤ Pr[|α^T χ| is ≥ iλ times the standard deviation] ≤ e^{−Ω(i²λ²)}, hence

      Pr[Z = i] · log(1/Pr[Z = i]) ≤ e^{−Ω(i²λ²)} · log(e^{i²λ²})

◮ Summing over i: H(Z) ≤ e^{−Ω(λ²)}
Proof – Case λ < 2

[Figure: the real line partitioned into blocks of length 2||α||_2 (marks at −3||α||_2, −||α||_2, ||α||_2, 3||α||_2); each block contains 2||α||_2/(2∆) = 1/λ intervals of length 2∆]

Subadditivity:

      H(Z) ≤ H(which block of length 2||α||_2) + H(index of the interval inside the block)
           ≤ O(1) + O(log(1/λ))
Entropy rounding (extended version)

Algorithm:
◮ Input: A ∈ [−1,1]^{n×m}, x ∈ [0,1]^m
◮ Row weights w(i) (Σ_i w(i) = 1; w(i) ≥ 0)

(1) y := x
(2) FOR phase k = last bit TO 1 DO
(3)   Call y_i active if its kth bit is 1
(4)   Find half-coloring χ : active vars → {−1, +1, 0}
      with |A_i χ| ≤ ∆_i, where ∆_i is chosen s.t. G(∆_i/√(#active var)) ≤ w(i) · (#active var)/5
(5)   Update y′ := y + (1/2)^k · χ
(6)   REPEAT WHILE ∃ active var.

In each step such a half-coloring exists, since α ∈ [−1,1]^{m′} implies ||α||_2 ≤ √(m′), and therefore

      H( ⌈Aχ/(2∆)⌉ )  ≤(subadd.)  Σ_{i=1}^n H( ⌈A_i χ/(2∆_i)⌉ )  ≤  Σ_{i=1}^n G( ∆_i/√(#act. var) )  ≤  (#act. var)/5
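A small sketch of how line (4) can choose the ∆_i, assuming the function G from the previous lemma: take each ∆_i as the smallest grid value meeting its share of the entropy budget (the helper names are ours):

```python
import math

def G(lam):
    return 9 * math.exp(-lam**2 / 5) if lam >= 2 else math.log2(32 + 64 / lam)

def pick_deltas(w, v):
    """Smallest Delta_i (on a 0.01 grid) with G(Delta_i / sqrt(v)) <= w_i*v/5;
    then the total budget sum_i G(...) <= v/5 holds automatically."""
    deltas = []
    for wi in w:
        d = 0.01
        while G(d / math.sqrt(v)) > wi * v / 5:
            d += 0.01
        deltas.append(d)
    return deltas

v = 64                     # number of active variables
w = [0.5, 0.25, 0.25]      # row weights summing to 1
deltas = pick_deltas(w, v)
print(deltas)
print(sum(G(d / math.sqrt(v)) for d in deltas), "<=", v / 5)
```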
Entropy rounding (extended version) (2)

◮ Consider row i while rounding bit k (v := #active var):

[Figure: the error bound ∆_i as a function of v. It grows like √(ln(1/(v·w(i)))) · √v, peaks at Θ(√(1/w(i))) around v = Θ(1/w(i)), and decays like (1/2)^{v·w(i)} · √v beyond that. The iterations it 1, it 2, ..., it log m move along this curve, since v halves from one iteration to the next.]

◮ By convergence of the resulting series: |A_i x − A_i y| ≤ O(√(1/w(i)))
Example: Discrepancy of set systems

◮ Given: set system S with n sets and n elements, e.g. with incidence matrix

          ⎛1 1 0⎞
      A = ⎜0 1 1⎟
          ⎝1 0 1⎠

◮ Pick x := (1/2, ..., 1/2); weight w(i) := 1/n for each row
◮ There is y ∈ {0,1}^n : ||Ax − Ay||_∞ = O(√(1/(1/n))) = O(√n).
◮ The coloring χ(i) = +1 if y_i = 1 and χ(i) = −1 if y_i = 0 has discrepancy O(√n):
  the “6 standard deviations suffice” theorem [Spencer ’85]
Summarizing

Theorem
Input:
◮ matrix A ∈ [−1,1]^{n×m} such that for every submatrix A′ ⊆ A there is an auxiliary f with −∆ ≤ A′χ − f(χ) ≤ ∆ and H(f(χ)) ≤ #cols(A′)/10
◮ vector x ∈ [0,1]^m
◮ row weights w(i) (Σ_i w(i) = 1)

There is a random variable y ∈ {0,1}^m with
◮ Bounded difference:
  ◮ |A_i x − A_i y| ≤ O(log m) · ∆_i
  ◮ |A_i x − A_i y| ≤ O(√(1/w(i)))
◮ Preserved expectation: E[y_i] = x_i.
◮ Randomness: y = x + Σ_{k≥1} Σ_{t=1}^{log m} (1/2)^k · (random ±1) · χ^{(k,t)}
◮ Can be computed by SDP in poly-time using [Bansal ’10]
Bin Packing With Rejection

Input:
◮ Items i ∈ {1, ..., n} with size s_i ∈ [0,1] and rejection penalty π_i ∈ [0,1]

Goal: Pack or reject. Minimize #bins + rejection cost.

[Figure: an example run on items of sizes 0.9, 0.7, 0.4, 0.6 with their penalties π_i; one item is rejected (×), the others are packed into bins 1 and 2]
Known results

Bin Packing With Rejection:
◮ APTAS [Epstein ’06]
◮ Faster APTAS [Bein, Correa & Han ’08]
◮ AFPTAS: APX ≤ OPT + OPT/(log OPT)^{1−o(1)} [Epstein & Levin ’10]

Bin Packing:
◮ APX ≤ OPT + O(log² OPT) [Karmarkar & Karp ’82]

Theorem
There is a randomized approximation algorithm for Bin Packing With Rejection with

      APX ≤ OPT + O(log² OPT)

(with high probability).
The column-based LP

Set Cover formulation:
◮ Bins: sets S ⊆ [n] with Σ_{i∈S} s_i ≤ 1, of cost c(S) = 1
◮ Rejections: sets S = {i}, of cost c(S) = π_i

LP:
      min  Σ_{S∈S} c(S) · x_S
      s.t. Σ_{S∈S} 1_S · x_S ≥ 1
           x_S ≥ 0   ∀S ∈ S
The column-based LP - Example

Items of sizes 0.44, 0.4, 0.3, 0.26 with rejection penalties 0.9, 0.7, 0.4, 0.6. The feasible bins are the 4 singletons, all 6 pairs, and the triples {1,3,4} and {2,3,4}; together with the 4 rejection columns this gives:

      min  (1 1 1 1 1 1 1 1 1 1 1 1 | .9 .7 .4 .6) · x

      ⎛1 0 0 0 1 1 1 0 0 0 1 0 | 1 0 0 0⎞       ⎛1⎞
      ⎜0 1 0 0 1 0 0 1 1 0 0 1 | 0 1 0 0⎟  x ≥  ⎜1⎟
      ⎜0 0 1 0 0 1 0 1 0 1 1 1 | 0 0 1 0⎟       ⎜1⎟
      ⎝0 0 0 1 0 0 1 0 1 1 1 1 | 0 0 0 1⎠       ⎝1⎠

      x ≥ 0

A fractional solution takes x_S = 1/2 for three bin columns (e.g. the bins {1,2}, {1,3,4}, {2,3,4}, which cover every item twice), at total cost 3/2.
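A short sketch that generates exactly this column data for the instance above (illustrative; the column order may differ from the display):

```python
from itertools import combinations

sizes = [0.44, 0.40, 0.30, 0.26]
penalties = [0.9, 0.7, 0.4, 0.6]
n = len(sizes)

# Bin columns: all feasible item sets S with total size <= 1, each of cost 1.
bins = [set(S) for r in range(1, n + 1)
        for S in combinations(range(n), r)
        if sum(sizes[i] for i in S) <= 1]
# Rejection columns: singletons {i} of cost pi_i.
cols = bins + [{i} for i in range(n)]
cost = [1.0] * len(bins) + penalties

# Constraint matrix: row i has a 1 in every column whose set contains item i.
M = [[int(i in S) for S in cols] for i in range(n)]
print(len(bins))  # 12 feasible bins
for row in M:
    print(row)
```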
Massaging the LP

◮ Sort items s_1 ≥ ... ≥ s_n
◮ Call items large if their size is at least ε := log(OPT_f)/OPT_f (otherwise small)
◮ Add up rows 1, ..., i of the constraint matrix to obtain row i of the matrix A (for large i):
  “1 slot per item” ⇒ “i slots for the largest i items”
◮ Append the row vector (Σ_{i∈S: s_i<ε} s_i)_S (space for small items)
◮ Append the objective function c

      ⎛1 0 1 1 0 0⎞            ⎛ 1 0 1 1 0 0           ⎞
      ⎜1 1 0 0 1 0⎟   →   A =  ⎜ 2 1 1 1 1 0           ⎟
      ⎝0 1 1 0 0 1⎠            ⎜ 2 2 2 1 1 1           ⎟
                               ⎜ space for small items ⎟
                               ⎝ objective function    ⎠
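The prefix-sum step on the example matrix (illustrative):

```python
import numpy as np

# "1 slot per item" -> "i slots for the largest i items":
# rows are sorted by decreasing item size, then prefix-summed.
P = np.array([[1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]])
A = np.cumsum(P, axis=0)
print(A)
# [[1 0 1 1 0 0]
#  [2 1 1 1 1 0]
#  [2 2 2 1 1 1]]
```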
Entropy bound for monotone matrices

Theorem
Let A be a column-monotone matrix with maximum entry ≤ ∆ and sum of the last row equal to σ. Then there is an auxiliary random variable f with ||Aχ − f(χ)||_∞ = O(∆) and H_χ(f(χ)) ≤ O(σ/∆).

Example (χ = (+1, −1, −1, +1)):

          ⎛1 0 0 0⎞
          ⎜1 0 1 0⎟
          ⎜1 0 2 0⎟
      A = ⎜1 1 2 0⎟
          ⎜2 1 2 0⎟
          ⎜2 1 2 1⎟
          ⎜2 1 2 2⎟
          ⎝2 1 3 2⎠

[Figure: the values A_1χ, A_2χ, ... plotted as a random walk over the row index, up to σ]

◮ Idea: Describe the random walk A_1χ, ..., A_σχ O(∆)-approximately.
◮ For each dyadic interval D of length 2^k · ∆:

      g_D(χ) := distance covered by the walk within D, rounded to multiples of ∆/1.1^k

◮ Std-dev of the distance covered in D: √(max dependence · |D|) ≤ √(∆ · 2^k ∆) = 2^{k/2} · ∆
◮ Hence H(g_D) ≤ G( (∆/1.1^k) / (2^{k/2} ∆) ) ≤ G(2^{−k}) = O(log 2^k) = O(k)
◮ There are σ/(2^k ∆) intervals on level k, so the total entropy of g is

      Σ_{k≥1} σ/(2^k ∆) · O(k) = O(σ/∆).

◮ We know each A_i χ up to an error of Σ_{k≥1} ∆/1.1^k = O(∆)
  (formally f_i(χ) := Σ_{D : ⊎D = [i]} g_D(χ), summing over dyadic intervals that disjointly cover [i]).
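An illustrative sketch of the dyadic description, under the simplifying assumption that the walk is handed to us directly as its values W[0..σ] (W[i] playing the role of A_iχ); the function name is ours:

```python
import numpy as np

def dyadic_description(W, delta):
    """Round the increment of W over each dyadic block at level k to the
    grid delta / 1.1**k, where a level-k block has length 2**k * delta.
    Summing the stored values over a disjoint dyadic cover of [0, i]
    recovers W[i] up to sum_k delta/1.1**k = O(delta)."""
    sigma = len(W) - 1
    g = {}
    k = 1
    while 2**k * delta <= sigma:
        length = int(2**k * delta)
        grid = delta / 1.1**k
        for start in range(0, sigma - length + 1, length):
            inc = W[start + length] - W[start]
            g[(k, start)] = round(inc / grid) * grid
        k += 1
    return g

rng = np.random.default_rng(2)
W = np.concatenate(([0], np.cumsum(rng.choice((-1, 1), size=64))))
print(len(dyadic_description(W, delta=1.0)))  # 63 stored values
```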
Approximation algorithm for BPWR

◮ Apply the rounding theorem to the fractional solution x
◮ Pick ∆_i := Θ(1/s_i)

           ⎛ 1 0 1 1 0 0           ⎞   ∆_i := Θ(1/s_i)
      A =  ⎜ 2 1 1 1 1 0           ⎟
           ⎜ 2 2 2 1 1 1           ⎟
           ⎜ space for small items ⎟   weight 1/2
           ⎝ objective function    ⎠   weight 1/2

◮ Entropy budget check — assume for a second that 1/(2k) ≤ s_i ≤ 1/k, i.e. all items lie in the same size class; then

      O(σ/∆) ≤ (1/20) · Σ_{S active} |S| · (1/k) ≤ (1/10) · Σ_{S active} Σ_{i∈S} s_i ≤ (#active var)/10,

  using s_i ≥ 1/(2k) in the second step and Σ_{i∈S} s_i ≤ 1 in the last.
Approximation algorithm for BPWR (2)

Obtain y ∈ {0,1}^m → repair it to a feasible solution:
◮ Need A_i y ≥ A_i x = i (y must reserve ≥ i slots for the largest i items); the rounding guarantees |A_i y − A_i x| ≤ O(log m) · (1/s_i), so buy O(log m) · (1/s_i) extra slots for each size class → O(log m · log(1/ε)) extra bins in total
◮ |c^T x − c^T y| ≤ O(1)
◮ the space for small items under x and y differs by O(1), so y reserves enough space for the small items up to O(1) extra bins;
  assign the small items integrally → O(ε) · OPT_f = O(log OPT_f) extra bins

[Figure: the large input items are matched greedily with the slots that y reserves; the small items are then distributed over the remaining free space of bins 1, 2, 3]

Doing the math:
◮ may assume that x_S ≤ 1 − ε (otherwise buy those bins a priori)
◮ can assume that m ≤ #large items + 2
◮ OPT_f ≥ ε² · #large items ⇒ log m, log(1/ε) = O(log OPT_f)
◮ APX − OPT_f ≤ O(log² OPT_f)
Open problem I
Method works pretty well for other Bin Packing variants. But:
Open question I
Are there other applications?
Open problem II

Bin Packing:
      min  Σ_{S∈S} x_S
      s.t. Σ_{S∈S} 1_S · x_S ≥ 1
           x_S ≥ 0   ∀S ∈ S

Modified Integer Roundup Conjecture
      OPT ≤ ⌈OPT_f⌉ + 1

◮ True if the number of different item sizes is ≤ 7 [Sebő, Shmonin ’09]
◮ Best known general bound: OPT ≤ OPT_f + O(log² n)
◮ WARNING: No o(log² n) bound is possible by just “selecting patterns from an initial fractional solution and rounding up items” [Eisenbrand, Pálvölgyi, R. ’11]
Thanks for your attention