High-Multiplicity Cyclic Job Shop Scheduling

Maxim Sviridenko
IBM T. J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY, 10598, USA
sviri@us.ibm.com
Tracy Kimbrel
IBM T. J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY, 10598, USA
kimbrel@us.ibm.com
October 27, 2012
Abstract
We consider the High-Multiplicity Cyclic Job Shop Scheduling Problem, in which we are given a
set of identical jobs and must find a schedule that repeats itself with some cycle time τ . In each cycle,
operations are processed just as in the previous cycle, but on jobs indexed one greater in the ordering
of jobs. There are two objectives of interest: the cycle time τ and the flow time F from the start to the
finish of each job. We give several approximation algorithms after showing that a very restricted case is
APX-hard.
1 Introduction
Job shop scheduling is a widely studied and difficult combinatorial optimization problem [10]. We are given
a collection of jobs and a set of machines. Each job consists of a sequence of operations, which must be
performed in order. Each operation has a particular processing time and must be performed on a specific
machine.
Here we consider what we term the High-Multiplicity Cyclic Job Shop Scheduling Problem, in which
the jobs are all the same and the schedule must process them in a cyclic fashion; that is, the schedule must
repeat the same pattern of operations every τ time steps for some cycle time τ . τ measures the throughput
of a schedule. We will also be interested in the flow time F , that is, the time it takes to complete one job.
This measure may be of interest by itself, and it also indicates the amount of work-in-progress at any time
during steady state.
One practical source of this and similar problems is in VLSI circuit fabrication [17]. Shop floor environments are considered easier to manage if production follows a repeating pattern [11]. Compilers in
high-performance computing environments schedule loops of code to be repeated many times [9]. It is
desirable to maximize throughput and at the same time minimize work-in-progress in some manufacturing
environments [9].
Although there have been many papers on this and related problems (which have been given many
different names including periodic scheduling problems), most are devoted to NP-hardness and heuristic
algorithms. Little is known regarding hardness of approximation and approximation algorithms. We discuss
related work in section 1.2 below after defining the problem precisely.
1.1 Model and notation
In the job shop scheduling problem there is a set J of jobs that must be processed on a given set M =
{M1 , . . . , Mm } of m machines. Each job Jj consists of a sequence of µj operations O1j , . . . , Oµj j that
must be processed in order. We call this order the precedence relation. Operation Okj must be processed on
machine Mπkj for pkj time units. A machine can process at most one operation at a time, and each job
may be processed by at most one machine at any time. For a given schedule, let Ckj be the completion time
of operation Okj and let Cj be the completion time of job j. Let Sj denote the start time of job j.
In the High-Multiplicity Cyclic Job Shop Scheduling Problem, we are given a set of identical jobs. Their
number is not important, as will be seen; we may assume the set is infinite. Thus we will drop the subscript
j from our notation when it is not needed and write pk , etc. For convenience, we will just refer to jobs and
operations by their indices. It will be convenient to number the jobs starting at 0 and also to index time
starting at 0; we assume job 0 starts at time 0. We require that there exist some τ , the cycle time, such that at
every time t ≥ 0 and for every machine i, if i processes operation Okj at time t, then i processes operation
Ok,j+1 at time t + τ . Thus a new job starts every τ time steps; i.e., Sj = τ j.
There are two objectives of interest in this scheduling problem: the cycle time τ and the flow time
F = Cj − Sj = C0.
Let µ be the number of operations in a job. For each machine i, let ℓi be the total amount of work in a
single job processed by machine i. Let ℓ = ∑_{i∈M} ℓi = ∑_{k=1}^{µ} pk denote the total length of
each job and let τ̄ = max_{i∈M} ℓi denote the per-job maximum load on any machine. (The reason for this
notation will become clear shortly.)
Obviously we must have F ≥ ℓ and τ ≥ τ̄ . We will make use of the following folklore result.
Fact 1 For any instance, there exists a schedule with cycle time τ = τ̄ .
Proof. For a given operation k, let i be the machine that processes k and let p be the sum of all pk′ for
operations processed by i with k′ < k. We schedule operation k in job j to start at time τ̄ (j + k) + p and
finish at time τ̄ (j + k) + p + pk . It should be clear that all processing times and precedence constraints are
satisfied and that our schedule is cyclic with τ = τ̄ .
Fix a time t ≥ 0 and a machine i. If there is no operation in any job scheduled on machine i at time t,
we do not need to prove anything. Otherwise we need to prove that only one operation is scheduled on i at
time t. Let r = t mod τ̄ . Suppose t = τ̄ (j1 + k1 ) + r = τ̄ (j2 + k2 ) + r. By our schedule’s definition r
determines a unique k such that only operations k (in any jobs) might be scheduled on machine i at time t.
Thus k1 = k2 and then j1 = j2 , and we are done.
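The construction in the proof of Fact 1 can be sketched directly. In the following, the function name and the small instance are ours, for illustration only:

```python
def cyclic_schedule(machine_of, proc, num_jobs):
    """Fact 1 construction: operation k of job j starts at
    tau_bar*(j + k) + p, where p is the total processing time of the
    operations that precede k on k's machine."""
    m_load = {}
    offset = []  # p in the proof, one entry per operation
    for i, p in zip(machine_of, proc):
        offset.append(m_load.get(i, 0))
        m_load[i] = m_load.get(i, 0) + p
    tau_bar = max(m_load.values())  # per-job maximum machine load
    sched = {}
    for j in range(num_jobs):
        for k, p in enumerate(proc):
            start = tau_bar * (j + k) + offset[k]
            sched[(j, k)] = (start, start + p)
    return sched

# A 2-machine job: operations 0 and 2 on machine 0, operation 1 on machine 1.
s = cyclic_schedule([0, 1, 0], [1, 2, 1], num_jobs=4)
```

As the proof observes, the residue of the start time mod τ̄ pins down which operation a machine can be running, so no two operations ever collide on a machine.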
1.2 Related work
Graves et al. [6] initiated the study of the problem and gave a heuristic for minimizing flow time subject to
a given cycle time. Roundy [15] showed NP-hardness of the general case of minimizing flow time subject
to the minimum cycle time. McCormick and Rao [11] further showed that even if each machine has only
two operations in each job, but with different processing times, and the “cyclic machine structure” (see [15]
or [11] for a definition) is fixed, it is still NP-hard to minimize flow time. Mittendorf and Timkovsky [13]
give a simple greedy algorithm with an absolute error bound of τ̄ ∑_{i=1}^{m} (ℓi² − ℓi)/2 on the flow time and a
more complicated algorithm with a slightly better bound, improving somewhat on the trivial flow time of
τ̄ℓ which follows from the proof of Fact 1. They also show the NP-hardness of the special case in which all
machines but one have a single unit time operation in each job. They call this latter problem the “robotic
flow shop” problem. Hall, Lee, and Posner [7] study the problem in which there are many copies of each of
a small number of different job types and give various NP-hardness results and polynomial time algorithms.
Timkovsky [17] surveys a wide range of environments for periodic scheduling including the problem
studied in this paper. Hanen and Munier [9] also survey related problems. We refer the reader to these
surveys for details.
1.3 Our results
In this paper we show the following results:
Theorem 1 Minimizing flow time subject to minimum cycle time τ = τ̄ in the High-Multiplicity Cyclic Job
Shop Scheduling Problem is APX-hard even for unit time operations and at most 2 operations (in each job)
per machine, so that τ̄ = 2.
This problem is shown to be essentially equivalent to Max Cut in 4-regular graphs. In our next theorem
we establish the connection between the Max Res Cut Problem and the High-Multiplicity Cyclic Job Shop
Scheduling Problem in the case τ̄ = 2. In the Max Res Cut Problem the edges are partitioned into two
sets E = E + ∪ E − . The goal is to find a cut that maximizes the number of edges from E + with endpoints
lying on different sides of the cut plus the number of edges from E − with endpoints lying on the same side
of the cut.
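The Max Res Cut objective for a given vertex partition can be evaluated directly; the function name and the tiny example graph below are ours, for illustration:

```python
def res_cut_value(side, e_plus, e_minus):
    """side[v] in {0, 1} gives the side of vertex v.
    E+ edges count when their endpoints are on different sides;
    E- edges count when their endpoints are on the same side."""
    cut = sum(1 for u, v in e_plus if side[u] != side[v])
    uncut = sum(1 for u, v in e_minus if side[u] == side[v])
    return cut + uncut

# Triangle with one "negated" edge: E+ = {(0,1), (1,2)}, E- = {(0,2)}.
value = res_cut_value({0: 0, 1: 1, 2: 0}, [(0, 1), (1, 2)], [(0, 2)])
```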
Theorem 2 Given an α-approximation algorithm (α ∈ (0, 1)) for the Max Res Cut Problem in unweighted
graphs of degree 4 there exists a (2 − α)-approximation algorithm for the High-Multiplicity Cyclic Job Shop
Scheduling Problem with unit processing times and at most two operations per machine. The objective of
the problem is to minimize the flow time F subject to the constraint on the cycle time τ = τ̄ = 2.
The best known approximation algorithm for the Max Res Cut problem in general graphs is due to
Goemans and Williamson and has performance guarantee α ≈ 0.878 [4]. Since this problem resembles
the Max Cut Problem, which has better approximation algorithms for graphs of bounded degree due to Feige,
Karpinski and Langberg [3] and Halperin, Livnat and Zwick [8], it is possible that the techniques from [3]
and [8] will provide an even better performance guarantee in graphs of degree 4.
Definition 3 For the classical job shop problem, let L denote the maximum machine load, where load is
defined as the sum of lengths of all operations in all jobs on a machine, and let ∆ denote the maximum job
length. The trivial lower bound on the makespan is max{L, ∆}.
In the results below we will refer to a cycle time and flow time pair τ ∗ and F ∗ for the High-Multiplicity
Cyclic Job Shop Scheduling Problem which we may take to be Pareto optimal; i.e., there exists a schedule
with cycle time τ ∗ and flow time F ∗ , but no schedule improves one without making the other worse. Since
our problem is inherently bi-criteria we will always treat one of them as a constraint and the other one as an
objective function.
Theorem 4 Given an α-approximation algorithm with respect to the trivial lower bound for the classical
job shop scheduling problem with makespan objective, we can design an algorithm that finds a schedule for
the High-Multiplicity Cyclic Job Shop Scheduling Problem with cycle time 2ατ̄ and flow time 2αℓ (recall
τ ∗ ≥ τ̄ and F ∗ ≥ ℓ).
Note that the best known approximation algorithms for the general job shop scheduling problem have
performance guarantee α = O(log²(mµ)/log² log(mµ)) [5, 16] and for job shop scheduling with unit processing
times α = O(log m/log log m) [2]. All these algorithms’ analyses use the trivial lower bound.
Theorem 5 For any ε > 0 there exists an algorithm that finds a schedule for the High-Multiplicity Cyclic
Job Shop Scheduling Problem
1. with cycle time (1 + ε)τ̄ and flow time O(log m)ℓ for instances of the problem with unit processing
times,
2. with cycle time τ̄ + (1 + ε)(log τ̄ )τ̄ and flow time O(log m)(log τ̄ )ℓ for general instances,
3. with cycle time (1 + ε)τ ∗ and flow time F ∗ for the instances with unit processing times and a constant
number of machines,
where the hidden constant depends on ε only.
Theorem 6 There exists a randomized algorithm that with high probability finds a schedule for the High-
Multiplicity Cyclic Job Shop Scheduling Problem with cycle time τ̄ and flow time O(√(τ̄ log(mτ̄))) ℓ for
instances of the problem with unit processing times.
2 Hardness of minimizing F subject to τ = τ̄
Now we prove Theorem 1.
Proof. We reduce Max Cut on 4-regular graphs, which has been observed to be APX-hard [12], to the
problem of finding a minimum flow-time cyclic schedule with τ = τ̄ = 2 for jobs with at most 2 operations
on each machine. All operations have unit processing times.
In a schedule with τ = 2, say an operation is scheduled with offset 0 if it is scheduled in an even time
slot and offset 1 otherwise. We can assume that no operation is delayed two or more steps, since otherwise
we could shift all remaining operations two steps earlier and obtain a feasible schedule with better flow
time. A schedule is thus uniquely determined by an assignment of offsets 0 and 1 to each operation such
that the two operations on each machine (if there are two) are given different offsets.
Our problem can be reformulated as follows. We will construct a formula over the logical exclusive-or
(XOR) operation ⊕. However, the formula will have a special form, and thus the trivial equivalence of Max
Cut and such formulas with two unnegated literals in each clause can not be used. Our clauses will have two
literals each, but they will include negations and will consist of a chain l1 ⊕ l2 , l2 ⊕ l3 , . . . , lℓ−1 ⊕ lℓ . Each
literal except the first and last occurs in either two or zero clauses: some variables will occur both with and
without negations, and some will occur only without negations. The latter correspond to machines with only
one operation in each job. The first and last literals occur only once.
For each machine i define a boolean variable xi . We interpret xi as follows. If xi = 0, then the first
operation on machine i is scheduled with offset 0 and the second (if any) is scheduled with offset 1, and if
xi = 1 these are reversed. Let lk = xk if operation k is the first operation on a machine, and let lk = x̄k
if k is the second. If lk ⊕ lk+1 = 1 then operations k and k + 1 can be scheduled without an intervening
delay. Otherwise, there must be a unit of delay between them; i.e., the operations are scheduled with the
same offset.
Thus we wish to maximize the number of satisfied clauses of the form lk ⊕ lk+1 for 1 ≤ k < ℓ. Notice
that if both lk and lk+1 are negated literals, we can drop both negations and obtain an equivalent clause. It is
easy to see that we can assume without loss of generality that x1 = 0, and that the flow time is 2ℓ − 1 minus
the number of satisfied clauses.
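The relation between satisfied clauses and flow time can be checked directly: a clause lk ⊕ lk+1 is satisfied exactly when consecutive literals differ, and each unsatisfied clause forces one unit of delay. A small sketch (the function name is ours):

```python
def flow_time(literals):
    """literals[k] is the value of the (k+1)-st literal in the chain.
    A clause is satisfied iff consecutive literals differ; the flow
    time is 2*ell - 1 minus the number of satisfied clauses."""
    ell = len(literals)
    satisfied = sum(1 for a, b in zip(literals, literals[1:]) if a != b)
    return 2 * ell - 1 - satisfied
```

For example, an alternating assignment satisfies every clause and achieves flow time ℓ (no delays), while a constant assignment satisfies none and yields flow time 2ℓ − 1.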
Let G = (V, E) be an undirected 4-regular graph. We define a variable u for each node u ∈ V . Consider
an Euler tour of G. For all edges (u, v) except the last edge of the tour, we construct a clause as follows. If
(u, v) is crossed (in either direction) the first time the tour visits nodes u and v, or if it is the second
time each node is visited, we have the clause u ⊕ v. Suppose instead that when edge e = (u, v) is crossed (in
whichever direction the tour crosses it) it is the first visit to u, say, and the second visit to v. Then we add a
dummy variable ze and clauses u ⊕ ze and ze ⊕ v̄.
Consider an assignment to the variables. We construct a cut of G in the obvious way according to the
assignment. Note that for two adjacent nodes u and v with associated clause u ⊕ v, the edge (u, v) is cut
if and only if the corresponding clause is satisfied. In the case of an edge e = (u, v) for which a dummy
variable was defined, one or both clauses can be satisfied with the right choice of ze . If only one is satisfied,
then u and v̄ are different, which means edge e is not cut. If both are satisfied, then u and v̄ are the same and
e is cut. Thus the size of the optimal cut is within 1 (recall that we ignored the last edge of the tour) of the
number of satisfied clauses in an optimal assignment minus the number of dummy variables.
Finally notice that our formula over ⊕ derived from the Euler tour of G corresponds exactly to an
instance of minimizing flow time subject to τ = 2 as described above. We define one machine per variable
in the formula and two operations, one for the positive literal and one for the negative literal. For dummy
variables we define just one operation since they do not occur negated. Each clause in our formula corresponds
to a precedence constraint between two operations (literals). Since the number of dummy variables is at
most the number of edges in G, APX-hardness of Max Cut in 4-regular graphs implies APX-hardness of
our problem.
3 Approximation algorithms
Now we prove Theorem 2.
Proof. In Section 2 we showed the reduction from the High-Multiplicity Cyclic Job Shop Scheduling
Problem with unit processing times subject to the constraint τ ∗ ≤ 2 to the maximum satisfiability problem
with clauses of the type u ⊕ v and u ⊕ v̄. Maximizing the number of satisfied clauses is exactly the Max Res
Cut Problem since the clause u ⊕ v corresponds to an edge (u, v) ∈ E + and the clause u ⊕ v̄ corresponds
to an edge (u, v) ∈ E − . Since each variable has at most four appearances in the satisfiability problem we
obtain that the graph has degree 4. There exists a Res Cut with K clauses (edges) satisfied if and only if
there exists a cyclic schedule with cycle time τ ≤ 2 and flow time F = 2ℓ − K.
Therefore, the optimal schedule with flow time F ∗ = 2ℓ − K ∗ corresponds to a Res Cut of value K ∗ .
Applying an α-approximation algorithm to the Max Res Cut instance we obtain an approximate schedule
with flow time at most F = 2ℓ − αK ∗ . Therefore, the performance guarantee of this algorithm for the
High-Multiplicity Cyclic Job Shop Scheduling Problem is max_{K ∗ ∈(0,ℓ]} (2ℓ − αK ∗ )/(2ℓ − K ∗ ) = 2 − α.
Next we prove Theorem 4.
Proof. Consider one copy of a job J in our instance of the High-Multiplicity Cyclic Job Shop Scheduling
Problem. Define an instance of the classical job shop scheduling problem with makespan objective with at
most m jobs (J1′ , . . . , ) as follows. The first job J1′ consists of the first s1 operations of the job J that have
total processing time at least τ̄ , i.e., ∑_{k=1}^{s1} pk ≥ τ̄ and ∑_{k=1}^{s1−1} pk < τ̄ . The second job J2′ consists of the
first s2 remaining operations of the job J that have total processing time at least τ̄ , i.e., ∑_{k=s1+1}^{s2} pk ≥ τ̄
and ∑_{k=s1+1}^{s2−1} pk < τ̄ . Continue in this fashion until all of J has been broken up into smaller jobs.
The maximum job length in the new instance of the job shop scheduling problem is at most 2τ̄ . Given
an α-approximation algorithm for the classical job shop scheduling problem with makespan objective (with
respect to the trivial lower bound) we can find a schedule of length at most 2ατ̄ for the new instance of the
job shop problem defined above. It is easy to see how to construct a cyclic schedule for the original problem
using this schedule.
A schedule for the above defined instance of the classical job shop scheduling problem gives one cycle
of a schedule for the corresponding instance of the High-Multiplicity Cyclic Job Shop Scheduling Problem.
Therefore, the cycle length in our schedule is at most 2ατ̄ . Since each job in the cyclic scheduling problem
is partitioned into at most ℓ/τ̄ pieces and each of them appears in a different cycle we obtain that the flow
time of each job is at most (ℓ/τ̄ ) · 2ατ̄ = 2αℓ.
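The partition of a job used in this proof is a greedy scan: close a piece as soon as its total processing time reaches τ̄. A small sketch (the function name is ours; it assumes each pk ≤ τ̄, which holds since τ̄ is at least the load of any machine):

```python
def split_job(proc, tau_bar):
    """Cut the operation sequence into consecutive pieces, closing a
    piece as soon as its total processing time reaches tau_bar.
    Every closed piece then has length in [tau_bar, 2*tau_bar)."""
    pieces, cur, load = [], [], 0
    for k, p in enumerate(proc):
        cur.append(k)
        load += p
        if load >= tau_bar:
            pieces.append(cur)
            cur, load = [], 0
    if cur:  # a final piece shorter than tau_bar may remain
        pieces.append(cur)
    return pieces
```

Since each closed piece has processing time at least τ̄, a job of total length ℓ is split into roughly ℓ/τ̄ pieces, which is what drives the 2αℓ flow time bound.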
We now prove Theorem 5.
Proof.
1. Our algorithm is based on the algorithm from [2] for classical job shop scheduling with unit processing
times. This algorithm finds a schedule of length (1 + ε)L + Kε (log m)∆ where L is the maximum
machine load in the instance, ∆ is the maximum job length, and Kε depends on ε only.
Similarly to the proof of Theorem 4 we define a sequence of jobs J1′ , . . .. The first job J1′
consists of the first ετ̄ /(Kε log m) operations of the job from the instance of the High-Multiplicity Cyclic
Job Shop Scheduling Problem. The second job consists of the second ετ̄ /(Kε log m) operations, and
so on.
The resulting instance of the classical job shop scheduling problem has maximum machine load of
L = τ̄ (because this is the maximum per-job load in our original problem) and maximum job length
of ετ̄ /(Kε log m). Therefore, there exists a schedule of length (1 + 2ε)τ̄ for this instance by the
algorithm from [2]. As before, this schedule corresponds to one cycle for our High-Multiplicity
Cyclic Job Shop Scheduling Problem. The cycle time of this schedule is (1 + 2ε)τ̄ and the flow time
is upper bounded by the cycle time multiplied by the number of cycles each job has to span, which is
(1 + 2ε)τ̄ · ℓ · Kε log m/(ετ̄ ) = O(log m)ℓ. Therefore, for any constant ε > 0 we can find a schedule with
cycle time (1 + ε)τ̄ and flow time Kε′ (log m)ℓ.
2. For general instances of the problem we apply a similar algorithm with some additional tricks. First
we define the sequence of jobs J1′ , J1′′ , J2′ , J2′′ , . . .. The first job J1′ consists of the longest prefix of operations of total length at most ετ̄ /(Kε log m) of the job from the instance of the High-Multiplicity
Cyclic Job Shop Scheduling Problem. That is, J1′ consists of the first s1 operations such that
∑_{k=1}^{s1} pk ≤ ετ̄ /(Kε log m) and ∑_{k=1}^{s1+1} pk > ετ̄ /(Kε log m). The job J1′′ consists of the single operation with index s1 + 1. The job J2′ consists of the next s2 operations with length ”almost” equal to
ετ̄ /(Kε log m) and the job J2′′ consists of the following single operation, and so on. Let 2N be the total number
of jobs. Note that N = O(log m)ℓ/τ̄ .
We now show how to process jobs J1′ , J1′′ , . . . , JN′ , JN′′ within one cycle of length τ̄ + (1 + ε)(log τ̄ )τ̄ .
First we process all jobs J1′′ , . . . , JN′′ together in a schedule of length τ̄ . Obviously, this schedule is
trivial to construct since these jobs consist of one operation each.
After that we treat each operation of jobs J1′ , . . . , JN′ with processing time pk as a sequence of pk unit
operations that must be processed on the same machine. Of course, this transformation is not necessarily polynomial, but using the standard scaling trick from [16] we can guarantee that all processing
times are polynomially bounded in the number of machines and operations per job with negligible
loss in the objective function (makespan in [16]; cycle time and flow time in our paper).
For this instance of the problem we apply the algorithm described in the first part of the proof. We
now show how to transform a cycle of this schedule such that its length increases by at most a factor
of log τ̄ and all operations will be processed nonpreemptively.
We show the transformation of an arbitrary preemptive schedule for the classical job shop scheduling
instance of length X into a nonpreemptive schedule of length X log X. Cut the preemptive schedule
in half at time X/2; if some operation is partially processed in both intervals (0, X/2] and
(X/2, X], then delete this operation from the schedule. Note that there can be at most one such
operation per job. The remaining schedule is decomposed into two preemptive schedules of length
X/2. Process the deleted operation between these two schedules. Obviously, the deleted operation
can be processed in a new interval of length X nonpreemptively. We now repeat the process with
the two remaining preemptive “subschedules.” After s steps we have 2s preemptive subschedules of
length X/2s and all operations that are processed in some subschedule are processed there completely.
By cutting each of them we add at most 2s X/2s = X to the total length of the current schedule.
Therefore, after log X steps we obtain a nonpreemptive schedule of length X log X.
The cycle time of the constructed schedule is at most (1 + (1 + ε) log τ̄ )τ̄ and every job spans
O(log m)ℓ/τ̄ cycles.
3. Our exact algorithm is based on the following observation. If the optimal flow time F ∗ > Kε′ (log m)ℓ
then we must have τ ∗ < (1 + ε)τ̄ since otherwise our first algorithm in this proof would find a better
solution than the optimal one. Therefore, the solution with cycle time at most (1 + ε)τ̄ and flow time
at most Kε′ (log m)ℓ satisfies the conditions of the Theorem.
In the other case F ∗ ≤ Kε′ (log m)ℓ ≤ Kε′ m(log m)τ̄ . Then in the optimal solution for our High-
Multiplicity Cyclic Job Shop Scheduling Problem each job crosses a constant number of cycles
N ≤ Kε′ m(log m). Therefore, we can enumerate over all possible partitions of one job into at
most Kε′ m(log m) consecutive pieces. Each such piece will correspond to the operations of the job
processed within one cycle.
For each such partition we would like to find a cyclic schedule with cycle time τ ∗ that minimizes
the flow time F ∗ . This problem is exactly the classical job shop scheduling problem with N ≤
Kε′ m(log m) jobs J1′ , . . . , JN′ . We need to find a schedule of total length at most τ ∗ that minimizes
the objective function (N − 1)τ ∗ + CN , where CN is the completion time of the job JN′ . This objective
function is actually equal to the flow time since each job in the original instance crosses N − 1 cycles
completely and one last cycle partially.
The problem defined above is easily reducible to the classical job shop scheduling problem with a
constant number of jobs. We just need to ”guess” the completion time CN and add one dummy
machine and a dummy last operation for the job JN′ with processing time τ ∗ − CN . After that we
need to solve the following recognition problem: ”Is there a schedule of length τ ∗ for the instance of
the classical job shop with N ≤ Kε′ m(log m) jobs?” This recognition problem can be solved by the
well-known dynamic programming (DP) procedure that is a generalization of the classical geometric
algorithm for two-job instances [1]. The running time of this DP is pseudopolynomial in the input
size for the general job instances and polynomial for instances with unit processing times. Note also
that the dependence of running time on N is exponential.
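The schedule-length bound from the halving argument in part 2 of the proof follows the recurrence T(X) = X + 2T(X/2): each level of cutting inserts at most X extra time in total, and the recursion goes log X levels deep. A minimal numeric sketch (the function name is ours; for X a power of two the recurrence solves to X(log X + 1) = O(X log X)):

```python
def nonpreemptive_length(X):
    """Length bound from the halving argument: cutting at X/2 adds at
    most X to the schedule, then both halves of length X/2 are handled
    recursively."""
    if X <= 1:
        return X
    return X + 2 * nonpreemptive_length(X // 2)
```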
For the next proof we will need the following well-known concentration inequality; see, for example, [14].
Lemma (Chernoff bound) Let X1 , . . . , Xn be independent Bernoulli random variables such that Pr(Xi =
1) = pi for i = 1, . . . , n. Then for X = ∑_{i=1}^{n} Xi , µ = E[X] = ∑_{i=1}^{n} pi and any δ ∈ (0, 2e − 1],
Pr[X > (1 + δ)µ] < e^{−µδ²/4},
Pr[X < (1 − δ)µ] < e^{−µδ²/2}.
Now we prove Theorem 6.
Proof. Our proof starts analogously to the proofs of the previous theorems. We define a sequence of jobs
J1′ , . . .. The first job J1′ consists of the first √τ̄ /(C√(log(mτ̄ ))) operations of the job from the instance
of the High-Multiplicity Cyclic Job Shop Scheduling Problem. The second job consists of the second
√τ̄ /(C√(log(mτ̄ ))) operations, and so on. The constant C will be defined later. We also add dummy jobs
such that the load of each machine is exactly τ̄ .
We now schedule these new ℓC√(log(mτ̄ ))/√τ̄ jobs with makespan τ̄ . During the course
of the algorithm we may partition some of these jobs into two jobs at random. This at most doubles the
total number of cycles spanned by the original job from the High-Multiplicity Cyclic Job Shop Scheduling
Problem instance.
We define B = 3√τ̄ /(C√(log(mτ̄ ))) buckets and assign the first operation of each job to a bucket independently at random with equal probabilities. If the first operation of a job gets assigned to a bucket
t ∈ {1, . . . , B} then each operation s of the same job is assigned to bucket t + 3(s − 1) mod B, i.e., we
assign operations of the jobs into buckets in wrap-around fashion, keeping a distance of exactly two buckets
between any two consecutive operations of the same job.
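The random bucket assignment can be sketched as follows; with 0-indexed operations, operation s of a job goes to bucket (t + 3s) mod B. The function name and parameters are ours, for illustration:

```python
import random

def assign_buckets(num_jobs, ops_per_job, B, seed=0):
    """Pick a uniformly random bucket t for each job's first operation
    and place its operation s (0-indexed) in bucket (t + 3*s) % B."""
    rng = random.Random(seed)
    bucket = {}
    for j in range(num_jobs):
        t = rng.randrange(B)
        for s in range(ops_per_job):
            bucket[(j, s)] = (t + 3 * s) % B
    return bucket

b = assign_buckets(num_jobs=5, ops_per_job=4, B=12)
```

By construction, consecutive operations of the same job are always exactly three buckets apart (mod B), which is what the precedence analysis below relies on.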
We now schedule all operations assigned to the first bucket in an arbitrary order, and then we process
all operations assigned to the second bucket, and so on. Obviously the total makespan of such a schedule is
exactly τ̄ since we never introduce delays in the schedule.
We want to show that the only violated precedence constraints in this schedule are those that correspond
to a job that wraps around in the schedule, i.e., one of the operations of a job is processed in the last
three buckets and the next operation is processed in one of the first three buckets. We correct this infeasibility
by removing the precedence arc between these two operations and defining two jobs instead of one. By
doing this we could double the total number of jobs and therefore double the flow time of our final schedule.
We now claim that Chernoff Bounds imply that there are no other precedence constraint violations.
Consider some operation Os assigned to a bucket t ∈ {1, . . . , B} and its successor Os+1 assigned to the
bucket t + 3. Assume that Os must be processed on machine Mi and Os+1 must be processed on machine
Mi′ . Note that
τ̄ /B = τ̄ /(3√τ̄ /(C√(log(mτ̄ )))) = C√(τ̄ log(mτ̄ ))/3
is the expected number of operations assigned to a single bucket on one machine.
Let µt = tC√(τ̄ log(mτ̄ ))/3. The probability that the total number of operations that must be processed
on machine Mi and were assigned to the buckets 1, . . . , t is larger than
(1 + δ)µt = (1 + δ)tC√(τ̄ log(mτ̄ ))/3
is at most e^{−δ²µt/4}. Choosing
δ = δt = (C√(τ̄ log(mτ̄ ))/3)/µt = 1/t
we obtain that the probability of the bad event is at most
e^{−δt²µt/4} = e^{−µt/(4t²)} ≤ e^{−(C√(τ̄ log(mτ̄ ))/3)/(4B)} = e^{−C² log(mτ̄ )/36} = (1/(mτ̄ ))^{C²/36}.
Analogously, we derive that the probability that the total number of operations that must be processed
on machine Mi′ and were assigned to the buckets 1, . . . , t + 2 is smaller than
(1 − δ)µt+2 = (1 − δ)(t + 2)C√(τ̄ log(mτ̄ ))/3
for δ = δt+2 = 1/(t + 2) is at most
e^{−δ²µt+2/4} ≤ (1/(mτ̄ ))^{C²/36}.
Note that we assume that in the worst case the operation Os+1 starts first in the bucket t + 3 and thus we
need to estimate the number of operations in the previous buckets.
Therefore, with high probability operation Os will end no later than (1 + δt )µt and operation Os+1 will
start not earlier than (1 − δt+2 )µt+2 . Since
(1 + δt )µt = µt+1 = (1 − δt+2 )µt+2
we obtain that in our final schedule there are no violations of precedence constraints between operations
in bucket t on machine Mi and bucket t + 3 on machine Mi′ . Choosing a large enough constant C and
noticing that the number of bucket and machine pairs is O(mτ̄ ), we obtain that with high probability there
are no violations of precedence constraints.
References
[1] S. B. Akers, A graphical approach to production scheduling problems, Operations Research 4 (1956),
244–245.
[2] N. Bansal, T. Kimbrel and M. Sviridenko, Job shop scheduling with unit processing times, Math. Oper.
Res. 31 (2006), no. 2, 381–389.
[3] U. Feige, M. Karpinski and M. Langberg, Improved approximation of max-cut on graphs of bounded
degree, Journal of Algorithms, V. 43, Issue 2 (2002), 201–219.
[4] M. Goemans and D. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, J. Assoc. Comput. Mach. 42 (1995), no. 6, 1115–1145.
[5] L.A. Goldberg, M. Paterson, A. Srinivasan, and E. Sweedyk. Better approximation guarantees for job shop scheduling. SIAM J. Discrete Math. 14 (2001), 67–92.
[6] S. Graves, H. Meal, D. Stefek, and A. Zeghmi. Scheduling of re-entrant flow shops. Journal of Operations Management 3 (1983), 197–207.
[7] Nicholas G. Hall, Tae-Eog Lee, Marc E. Posner. The complexity of cyclic shop scheduling problems.
Journal of Scheduling Volume 5, Issue 4, (2002), 307–327.
[8] E. Halperin, D. Livnat and U. Zwick, MAX CUT in cubic graphs, Journal of Algorithms V. 53, Issue 2
(2004), 169–185.
[9] C. Hanen and A. Munier. Cyclic scheduling on parallel processors: an overview. In P. Chretienne, E.
Coffman, J. Lenstra, and Z. Liu (eds.), Scheduling Theory and its Applications, Ch. 4, John Wiley, New
York, 1996.
[10] E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys. Sequencing and Scheduling:
Algorithms and Complexity. In S.C. Graves, A.H.G. Rinnooy Kan, and P.H. Zipkin (eds.), Logistics
of Production and Inventory, Handbooks in Operations Research and Management Science 4, North-Holland, Amsterdam, 1993, 445–522.
[11] S. McCormick and U. Rao. Some complexity results in cyclic scheduling. Mathl. Comput. Modelling
V. 20, Issue 2 (1994), 107–122.
[12] F. Meunier and A. Sebö. Paintshop, odd cycles and necklace splitting. Submitted for publication; see
http://algo.inria.fr/fmeunier/.
[13] M. Mittendorf and V. Timkovsky. On scheduling cycle shops: classification, complexity and approximation. Journal of Scheduling V. 5 (2002), 135–169.
[14] R. Motwani and P. Raghavan, Randomized algorithms, Cambridge University Press, Cambridge, 1995.
[15] R. Roundy. Cyclic schedules for job shops with identical jobs. Mathematics of Operations Research
V. 17, Issue 4 (1992), 842–865.
[16] D. Shmoys, C. Stein, and J. Wein. Improved Approximation Algorithms for Shop Scheduling Problems. SIAM Journal on Computing 23:3, 617-632, 1994.
[17] V. Timkovsky. Cycle shop scheduling. In J. Leung (ed), Handbook of Scheduling: Algorithms, Models,
and Performance Analysis, Chapter 7, CRC Press LLC, Boca Raton, FL., 2004.