On the Computational Complexity of Periodic Scheduling

PhD defense
Thomas Rothvoß
Real-time Scheduling

Given: (synchronous) tasks τ1, …, τn with

    τi = (c(τi), d(τi), p(τi))

where c(τi) is the running time, d(τi) the (relative) deadline, and p(τi) the period.

W.l.o.g.: Task τi releases a job of length c(τi) at z · p(τi) with
absolute deadline z · p(τi) + d(τi) (z ∈ N0).

Utilization: u(τi) = c(τi)/p(τi)

Settings:
◮ static priorities ↔ dynamic priorities
◮ single-processor ↔ multi-processor
◮ preemptive scheduling ↔ non-preemptive

Implicit deadlines: d(τi) = p(τi)
Constrained deadlines: d(τi) ≤ p(τi)
Example: Implicit deadlines & static priorities

Theorem (Liu & Layland ’73)
Optimal priorities: 1/p(τi) for τi (Rate-monotonic schedule)

    c(τ1) = 1,  d(τ1) = p(τ1) = 2
    c(τ2) = 2,  d(τ2) = p(τ2) = 5

[Figure: rate-monotonic schedule of τ1 and τ2 on a time axis 0, 1, …, 10]
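The two-task example above can be simulated with a minimal preemptive rate-monotonic scheduler (an illustrative sketch, not part of the slides; integer parameters assumed):

```python
from math import lcm

def rm_schedule_feasible(tasks):
    """Preemptive rate-monotonic simulation for integer task parameters.

    tasks: list of (c, d, p) triples with d <= p; smaller period = higher
    priority.  Simulates unit time slots up to the hyperperiod and checks
    every absolute deadline z*p + d.
    """
    tasks = sorted(tasks, key=lambda t: t[2])    # rate-monotonic priorities
    n = len(tasks)
    remaining = [0] * n      # unfinished work of each task's current job
    deadline = [0] * n       # absolute deadline of that job
    for t in range(lcm(*(p for _, _, p in tasks))):
        for i, (c, d, p) in enumerate(tasks):
            if t % p == 0:                       # new job released at t = z * p
                remaining[i] += c
                deadline[i] = t + d
        for i in range(n):       # run the highest-priority pending task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
        if any(remaining[i] > 0 and deadline[i] <= t + 1 for i in range(n)):
            return False                         # a job missed its deadline
    return True

# The example above: tau_1 = (1, 2, 2) preempts tau_2 = (2, 5, 5), and both
# tasks always meet their deadlines, so one machine suffices.
print(rm_schedule_feasible([(1, 2, 2), (2, 5, 5)]))  # True
```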
Feasibility test for implicit-deadline tasks

Theorem (Lehoczky et al. ’89)
If p(τ1) ≤ … ≤ p(τn), then the response time r(τi) in a
Rate-monotonic schedule is the smallest non-negative value s.t.

    c(τi) + Σ_{j<i} ⌈r(τi)/p(τj)⌉ · c(τj) ≤ r(τi)

1 machine suffices ⇔ ∀i : r(τi) ≤ p(τi).
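The fixed point in the theorem can be computed by the standard response-time iteration; a sketch assuming integer parameters and tasks sorted by non-decreasing period:

```python
from math import ceil

def response_time(tasks, i):
    """Smallest r >= 0 with  c_i + sum_{j<i} ceil(r/p_j) * c_j <= r.

    tasks: list of (c, p) pairs sorted by non-decreasing period.
    Iterates r -> c_i + sum_{j<i} ceil(r/p_j) * c_j; the demand is
    non-decreasing in r, so the iteration climbs to the least fixed point.
    """
    c_i, p_i = tasks[i]
    r = c_i
    while True:
        demand = c_i + sum(ceil(r / p) * c for c, p in tasks[:i])
        if demand == r:          # fixed point reached: r is the response time
            return r
        if demand > p_i:         # response time exceeds the period: infeasible,
            return demand        # stop iterating (guarantees termination)
        r = demand

def rm_feasible(tasks):
    # 1 machine suffices iff every response time is at most the period
    return all(response_time(tasks, i) <= tasks[i][1] for i in range(len(tasks)))

print(response_time([(1, 2), (2, 5)], 1))  # 4: tau_2 finishes 4 units after release
```

On the slide's example this reproduces the picture: τ2's job released at 0 is preempted twice by τ1 and completes at time 4 ≤ 5.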
Simultaneous Diophantine Approximation (SDA)

Given:
◮ α1, …, αn ∈ Q
◮ bound N ∈ N
◮ error bound ε > 0

Decide:

    ∃Q ∈ {1, …, N} : max_{i=1,…,n} min_{z∈Z} |αi − z/Q| ≤ ε/Q
    ⇔ ∃Q ∈ {1, …, N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ ε

(⌈·⌋ denotes rounding to the nearest integer)

◮ NP-hard [Lagarias ’85]
◮ Gap version NP-hard [Rössner & Seifert ’96, Chen & Meng ’07]
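For intuition, the decision problem can be checked naively by trying every Q from 1 to N (exponential in the bit length of N, of course); a sketch using exact rationals:

```python
from fractions import Fraction

def sda_decide(alphas, N, eps):
    """Naive SDA check: return some Q in {1, ..., N} with
    max_i |round(Q * alpha_i) - Q * alpha_i| <= eps, or None.

    round() is nearest-integer rounding; at a .5 tie both neighbours
    are at distance 1/2, so the tie-breaking rule does not matter."""
    for Q in range(1, N + 1):
        if max(abs(round(Q * a) - Q * a) for a in alphas) <= eps:
            return Q
    return None

alphas = [Fraction(1, 3), Fraction(2, 5)]
# Only multiples of 15 bring both Q/3 and 2Q/5 within 1/100 of integers.
print(sda_decide(alphas, 15, Fraction(1, 100)))  # 15
```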
Simultaneous Diophantine Approximation (2)

Theorem (Rössner & Seifert ’96, Chen & Meng ’07)
Given α1, …, αn, N, ε > 0 it is NP-hard to distinguish
◮ ∃Q ∈ {1, …, N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ ε
◮ ∄Q ∈ {1, …, n^{O(1)/log log n} · N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ n^{O(1)/log log n} · ε

Theorem (Eisenbrand & R. - APPROX’09)
Given α1, …, αn, N, ε > 0 it is NP-hard to distinguish
◮ ∃Q ∈ {N/2, …, N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ ε
◮ ∄Q ∈ {1, …, 2^{n^{O(1)}} · N} : max_{i=1,…,n} |⌈Qαi⌋ − Qαi| ≤ n^{O(1)/log log n} · ε
even if ε ≤ (1/2)^{n^{O(1)}}.
Directed Diophantine Approximation (DDA)

Theorem (Eisenbrand & R. - APPROX’09)
Given α1, …, αn, N, ε > 0 it is NP-hard to distinguish
◮ ∃Q ∈ {N/2, …, N} : max_{i=1,…,n} |⌈Qαi⌉ − Qαi| ≤ ε
◮ ∄Q ∈ {1, …, n^{O(1)/log log n} · N} : max_{i=1,…,n} |⌈Qαi⌉ − Qαi| ≤ 2^{n^{O(1)}} · ε
even if ε ≤ (1/2)^{n^{O(1)}}.

Theorem (Eisenbrand & R. - SODA’10)
Given α1, …, αn, w1, …, wn ≥ 0, N, ε > 0 it is NP-hard to distinguish
◮ ∃Q ∈ [N/2, N] : Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) ≤ ε
◮ ∄Q ∈ [1, n^{O(1)/log log n} · N] : Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) ≤ 2^{n^{O(1)}} · ε
even if ε ≤ (1/2)^{n^{O(1)}}.
Hardness of Response Time Computation

Theorem (Eisenbrand & R. - RTSS’08)
Computing response times for implicit-deadline tasks w.r.t. a
Rate-monotonic schedule, i.e. solving

    min { r ≥ 0 | c(τn) + Σ_{i=1}^{n−1} ⌈r/p(τi)⌉ · c(τi) ≤ r }

(with p(τ1) ≤ … ≤ p(τn)), is NP-hard (even to approximate within a
factor of n^{O(1)/log log n}).

◮ Reduction from Directed Diophantine Approximation
Mixing Set

    min  cs · s + c^T y
    s.t. s + ai · yi ≥ bi   ∀i = 1, …, n
         s ∈ R≥0
         y ∈ Z^n

◮ Complexity status? [Conforti, Di Summa & Wolsey ’08]

Theorem (Eisenbrand & R. - APPROX’09)
Solving Mixing Set is NP-hard.
1. Model Directed Diophantine Approximation (almost) as Mixing Set
2. Simulate missing constraint with Lagrangian relaxation
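To make the structure concrete, here is a small exact solver for Mixing Set (an illustrative sketch, not from the slides). It assumes positive integers ai, non-negative integers bi, costs ci ≥ 0, and cs ≥ Σ ci/ai (otherwise the problem is unbounded as s → ∞):

```python
from fractions import Fraction
from math import ceil, lcm

def solve_mixing_set(cs, c, a, b):
    """Exact brute force for  min cs*s + c^T y  s.t.  s + a_i*y_i >= b_i,
    s >= 0, y integer (positive integer a_i, b_i >= 0, c_i >= 0, and
    cs >= sum_i c_i/a_i so the problem is bounded).

    For fixed s the cheapest feasible y_i is ceil((b_i - s)/a_i).  The
    objective increases linearly between the breakpoints s = b_i - a_i*z
    and gains nothing over a full period L = lcm(a), so an optimal s lies
    in {0} union {b_i - a_i*z : z integer} intersected with [0, L).
    """
    L = lcm(*a)
    candidates = {Fraction(0)}
    for ai, bi in zip(a, b):
        z = bi // ai                      # largest z with b_i - a_i*z >= 0
        while bi - ai * z < L:            # collect breakpoints in [0, L)
            if bi - ai * z >= 0:
                candidates.add(Fraction(bi - ai * z))
            z -= 1
    best = None
    for s in candidates:
        y = [ceil(Fraction(bi - s, ai)) for ai, bi in zip(a, b)]
        val = cs * s + sum(ci * yi for ci, yi in zip(c, y))
        best = val if best is None else min(best, val)
    return best

print(solve_mixing_set(2, [1, 1], [2, 3], [3, 5]))  # 4, at s = 0, y = (2, 2)
```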
Testing EDF-schedulability

Setting: Constrained-deadline tasks, i.e. d(τi) ≤ p(τi)

Demand bound function:

    DBF(τi, Q) := (⌊(Q − d(τi))/p(τi)⌋ + 1) · c(τi)

[Figure: DBF(τi, Q) as a step function of Q, jumping by c(τi) at
Q = d(τi), d(τi) + 1·p(τi), d(τi) + 2·p(τi), …]

Theorem (Baruah, Mok & Rosier ’90)
The Earliest-Deadline-First schedule is feasible iff

    ∀Q ≥ 0 : Σ_{i=1}^n DBF(τi, Q) ≤ Q

◮ Complexity status is “Problem 2” in Open Problems in
  Real-Time Scheduling [Baruah & Pruhs ’09]

Theorem (Eisenbrand & R. - SODA’10)
Testing EDF-schedulability is coNP-hard.
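The test itself is easy to state in code. A sketch for integer task parameters (pseudo-polynomial at best, which is consistent with the hardness above): DBF jumps only at absolute deadlines z·p + d, and once total utilization is at most 1 it suffices to check those points up to the hyperperiod.

```python
from math import lcm
from fractions import Fraction

def dbf(task, Q):
    """Demand bound function; the slide's formula, taken as 0 for Q < d."""
    c, d, p = task
    return 0 if Q < d else ((Q - d) // p + 1) * c

def edf_feasible(tasks):
    """Check  sum_i DBF(tau_i, Q) <= Q  for all Q >= 0  (integer c, d, p).

    If total utilization exceeds 1, demand eventually outgrows Q.
    Otherwise checking Q at every absolute deadline z*p + d up to the
    hyperperiod lcm(p_1, ..., p_n) suffices: DBF is constant in between,
    and beyond the hyperperiod DBF(Q + H) = DBF(Q) + U*H <= Q + H."""
    if sum(Fraction(c, p) for c, d, p in tasks) > 1:
        return False
    horizon = lcm(*(p for c, d, p in tasks))
    checkpoints = {z * p + d for c, d, p in tasks for z in range(horizon // p)}
    return all(sum(dbf(t, Q) for t in tasks) <= Q for Q in checkpoints)

print(edf_feasible([(1, 2, 2), (2, 5, 5)]))  # True
print(edf_feasible([(1, 1, 2), (2, 3, 4)]))  # False: at Q = 3 the demand is 4
```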
Reduction (1)

Given: DDA instance α1, …, αn, w1, …, wn, N ∈ N, ε > 0

Define: Task set S = {τ0, …, τn} s.t.
◮ Yes: ∃Q ∈ [N/2, N] : Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) ≤ ε
  ⇒ S not EDF-schedulable (∃Q ≥ 0 : DBF(S, Q) > β · Q)
◮ No: ∄Q ∈ [N/2, 3N] : Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋) ≤ 3ε
  ⇒ S EDF-schedulable (∀Q ≥ 0 : DBF(S, Q) ≤ β · Q)

Task system:

    τi = (c(τi), d(τi), p(τi)) := (wi, 1/αi, 1/αi)   ∀i = 1, …, n
    U := Σ_{i=1}^n c(τi)/p(τi)
Reduction (2)

    Q · U − Σ_{i=1}^n (⌊(Q − d(τi))/p(τi)⌋ + 1) · c(τi)
        = Σ_{i=1}^n (Q/p(τi) − ⌊Q/p(τi)⌋) · c(τi)
        = Σ_{i=1}^n wi(Qαi − ⌊Qαi⌋)

⇒ DBF({τ1, …, τn}, Q)/Q ≈ U ⇔ approx. error small

[Figure: DBF({τ1, …, τn}, Q)/Q plotted against Q, approaching the line U;
the points N/2, Q∗, N and scm(p(τ1), …, p(τn)) are marked on the Q-axis]
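The chain of equalities can be sanity-checked with exact rationals (an illustrative check with made-up small data; note c(τi) = wi and d(τi) = p(τi) = 1/αi as in the reduction):

```python
from fractions import Fraction as F
from math import floor

# Made-up DDA data (illustrative only): alpha_i and weights w_i
alphas = [F(2, 7), F(3, 11)]
w = [F(1, 2), F(5, 3)]

# Task system from the reduction: c(tau_i) = w_i, d(tau_i) = p(tau_i) = 1/alpha_i
p = [1 / a for a in alphas]
U = sum(wi / pi for wi, pi in zip(w, p))

def sides(Q):
    """Left- and right-hand side of the identity, evaluated exactly."""
    lhs = Q * U - sum((floor((Q - pi) / pi) + 1) * wi for wi, pi in zip(w, p))
    rhs = sum(wi * (Q * ai - floor(Q * ai)) for wi, ai in zip(w, alphas))
    return lhs, rhs

# 77 = lcm(7, 11): one full period of both fractional parts
assert all(sides(Q)[0] == sides(Q)[1] for Q in range(1, 78))
print("Q*U - DBF equals the weighted sum of fractional parts")
```

The middle step is just ⌊(Q − p)/p⌋ + 1 = ⌊Q/p⌋, valid for implicit deadlines d = p.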
Reduction (3)

Add a special task

    τ0 = (c(τ0), d(τ0), p(τ0)) := (3ε, N/2, ∞)

[Figure: DBF(τ0, Q) as a function of Q: a single step of height 3ε at Q = N/2]
Reduction (4)

Set β := U + ε/N.

[Figure: DBF(S, Q)/Q plotted against Q, oscillating between U − ε/N and U;
the threshold β and the interval [N/2, N] are marked]
Algorithms for Multi-processor Scheduling

Setting: Implicit deadlines (d(τi) = p(τi)), multi-processor

Known results:
◮ NP-hard (generalization of Bin Packing)
◮ asymptotic 1.75-apx [Burchard et al. ’95]

Theorem (Eisenbrand & R. - ICALP’08)
In time Oε(1) · n^{(1/ε)^{O(1)}} one can find an assignment to

    APX ≤ (1 + ε) · OPT + O(1)

machines such that the RM-schedule is feasible if the machines
are sped up by a factor of 1 + ε (resource augmentation).
1. Rounding, clustering & merging small tasks
2. Use relaxed feasibility notion
3. Dynamic programming
Algorithms for Multi-processor Scheduling (2)

Theorem
Let k ∈ N be an arbitrary parameter. In time O(n³) one can
find an assignment of implicit-deadline tasks to

    APX ≤ (3/2 + 1/k) · OPT + 9k

machines (→ asymptotic 3/2-apx).
1. Create graph G = (S, E) with tasks as nodes and edges
   (τ1, τ2) ∈ E :⇔ {τ1, τ2} RM-schedulable (on 1 machine)
2. Define suitable edge weights
3. Mincost matching ⇒ good assignment
Algorithms for Multi-processor Scheduling (3)

    P = {x ∈ {0, 1}^n | {τi | xi = 1} RM-schedulable}

    λx = 1 if a machine is packed with {τi | xi = 1}, 0 otherwise

Column-based LP (or configuration LP):

    min  1^T λ
    s.t. Σ_{x∈P} λx · x ≥ 1
         λ ≥ 0

◮ For Bin Packing: OPT − OPTf ≤ O(log² n) [Karmarkar & Karp ’82]

Theorem

    1.33 ≈ 4/3 ≤ sup_{instances} OPT/OPTf ≤ 1 + ln(2) ≈ 1.69
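For very small instances the configuration view can be made concrete: enumerate feasible single-machine task sets and cover all tasks with as few of them as possible. The sketch below is illustrative only; it stands in for exact RM-schedulability with the sufficient (not necessary) Liu & Layland utilization bound k(2^{1/k} − 1), and solves the covering problem exactly by subset DP.

```python
def min_machines(utils):
    """Minimum number of machines covering all tasks, where a set of k
    tasks is accepted on one machine if its total utilization is at most
    k * (2**(1/k) - 1)  (Liu & Layland's sufficient RM bound).

    Exponential-time subset DP over configurations -- only for tiny n."""
    n = len(utils)

    def feasible(mask):
        k = bin(mask).count("1")
        u = sum(utils[i] for i in range(n) if mask >> i & 1)
        return u <= k * (2 ** (1 / k) - 1)

    INF = float("inf")
    f = [INF] * (1 << n)
    f[0] = 0
    for mask in range(1, 1 << n):
        low = mask & -mask            # lowest-index uncovered task must be placed
        sub = mask
        while sub:
            if sub & low and feasible(sub):
                f[mask] = min(f[mask], f[mask ^ sub] + 1)
            sub = (sub - 1) & mask
    return f[(1 << n) - 1]

# Two tasks of utilization 0.4 fit together (0.8 <= 2*(sqrt(2)-1) ~ 0.828),
# but three do not (1.2 > 3*(2^(1/3)-1) ~ 0.780).
print(min_machines([0.4, 0.4, 0.4]))  # 2
```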
Algorithms for Multi-processor Scheduling (4)

Theorem (Eisenbrand & R. - ICALP’08)
For all ε > 0, there is no polynomial-time algorithm with

    APX ≤ OPT + n^{1−ε}

unless P = NP (⇒ no AFPTAS).

◮ Cloning of 3-Partition instances
Algorithms for Multi-processor Scheduling (5)

◮ FFMP is a simple First-Fit-like heuristic with running time O(n log n)

Theorem (Karrenbauer & R. - ESA’09)
For n tasks S = {τ1, …, τn} with u(τi) ∈ [0, 1] uniformly at random,

    E[FFMP(S)] ≤ E[u(S)] + O(n^{3/4} (log n)^{3/8})

(average utilization → 100%).

◮ Reduce to known results from the average-case analysis of Bin Packing
Dynamic vs. static priorities

◮ Setting: constrained-deadline tasks, i.e. d(τi) ≤ p(τi)
◮ Worst-case speedup factor

    f := sup_{instances} (proc. speed for static-priority schedule)/(proc. speed for arbitrary schedule) ≥ 1

◮ Known: 3/2 ≤ f ≤ 2 [Baruah & Burns ’08]

Theorem (Davis, R., Baruah & Burns - RT Systems ’09)

    f = 1/Ω ≈ 1.76

where Ω ≈ 0.567 is the unique positive real root of x = ln(1/x).
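The constant Ω is easy to compute numerically: x = ln(1/x) is equivalent to x = e^(−x), and the map x ↦ e^(−x) is a contraction near the root (|derivative| ≈ 0.567 < 1), so plain fixed-point iteration converges:

```python
from math import exp

def omega_constant(tol=1e-12):
    """Fixed-point iteration for the root of x = ln(1/x), i.e. x = exp(-x)."""
    x = 0.5
    while abs(exp(-x) - x) > tol:
        x = exp(-x)
    return x

omega = omega_constant()
print(round(omega, 6), round(1 / omega, 4))  # 0.567143 1.7632
```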
f=
The end
Thanks for your attention