Positive Half-Products and Scheduling with Controllable Processing Times
Adam Janiak^a, Mikhail Y. Kovalyov^b, Wieslaw Kubiak^{c,1}, Frank Werner^d

^a Institute of Engineering Cybernetics, Wroclaw University of Technology, Wroclaw, Poland, E-mail: janiak@ict.pwr.wroc.pl
^b United Institute of Informatics Problems, National Academy of Sciences of Belarus, and Faculty of Economics, Belarus State University, 220050 Minsk, Belarus, E-mail: koval@newman.bas-net.by
^c Faculty of Business Administration, Memorial University of Newfoundland, St. John's, Canada, E-mail: wkubiak@mun.ca
^d Otto-von-Guericke-Universität, Magdeburg, Germany, E-mail: frank.werner@mathematik.uni-magdeburg.de
Abstract: We study the single machine scheduling problem with controllable job processing times to minimize a linear combination of the total weighted job completion time and
the total weighted processing time compression. We show that this scheduling problem
is a positive half-product minimization problem. Positive half-products make up an interesting subclass of half-products and are introduced in this paper to provide a conceptual
framework for the problem with controllable job processing times as well as other problems. This framework allows us to readily derive, in one fell swoop, a number of results for the
problem with controllable processing times from more general results obtained earlier for
the half-product. We also present fast fully polynomial time approximation schemes for the
problem with controllable processing times. The schemes apply to all positive half-products.
Key Words: single machine scheduling; controllable processing times; pseudo-Boolean
optimization; fully polynomial time approximation scheme; computational complexity.
^1 Corresponding author
1 Scheduling with controllable processing times
In the problem with controllable processing times, there are n independent and non-preemptive jobs to be scheduled for processing on a single machine. All jobs are available for processing at time zero. The processing time of job j is a variable pj ∈ [0, uj], j = 1, . . . , n. A decision maker is to determine the values of the job processing times p = (p1, . . . , pn) and a permutation π of the jobs so as to minimize the linear combination
$$TWC = \sum_{j=1}^{n} w_j C_j + \sum_{j=1}^{n} v_j (u_j - p_j)$$
of the total weighted completion time $\sum_{j=1}^{n} w_j C_j$, where Cj denotes the completion time of job j, and the total weighted processing time compression $\sum_{j=1}^{n} v_j (u_j - p_j)$.
All numerical data are positive integers. The value of the variable pj is a non-negative real number in [0, uj], j = 1, . . . , n. Setting pj = 0 means that either the processing time of job j is negligible, and thus it practically does not delay the completion times of other jobs, or job j is rejected with penalty vj uj.
Vickson [9, 10] was the first to study this problem, as well as a more general problem with arbitrary non-negative lower bounds lj, lj ≤ uj, j = 1, . . . , n, on the job processing times, more than 20 years ago. Applications of the problem can be found in Williams [12] and Janiak [3], to which the reader is referred for more comprehensive references.
For arbitrary lj and wj = 1, j = 1, . . . , n, Vickson [9] recasts the problem as an assignment problem. For arbitrary weights wj , Vickson [10] presents an enumerative algorithm
for the problem. Vickson [9] also shows that the search for optimal job processing times
p = (p1 , . . . , pn ) can be limited as follows.
Lemma 1 There exists an optimal p = (p1 , . . . , pn ) with pj ∈ {lj , uj }, j = 1, . . . , n.
Furthermore, the Shortest Weighted Processing Time (SWPT) rule of Smith [8] limits
the search for an optimal permutation π, given p, as follows.
Lemma 2 There exists an optimal π with pπ(j) /wπ(j) ≤ pπ(j+1) /wπ(j+1) for j = 1, . . . , n−1.
From Lemma 1 and Lemma 2 the following corollary follows immediately.
Corollary 1 There exists an optimal solution such that pj ∈ {lj , uj }, j = 1, . . . , n, jobs
with processing times pj = lj are sequenced in the non-decreasing order of lj /wj and jobs
with processing times pj = uj are sequenced in the non-decreasing order of uj /wj .
From now on, we assume that the jobs are re-indexed such that u1 /w1 ≤ · · · ≤ un /wn ,
and lj = 0, j = 1, . . . , n. By Corollary 1, the scheduling problem with controllable job processing times, which for convenience we also refer to as the problem of minimizing TWC, reduces to deciding on a partition of the set of jobs into a subset with pj = 0 and a subset with pj = uj, and then scheduling the latter jobs in the increasing order of their indices.
Let p∗ = (p∗1 , . . . , p∗n ) denote an optimal selection of processing times in the problem of
minimizing TWC.
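As an illustration of this reduction, the following minimal brute-force sketch (helper names are ours, not from the paper; for small n only) evaluates TWC for a 0-1 selection of processing times and finds the optimum by enumerating all partitions. Jobs are assumed pre-indexed by non-decreasing uj/wj, so the selected jobs are scheduled in index order by Corollary 1.

```python
from itertools import product

def twc(u, w, v, x):
    """Objective value when job j runs for u[j] if x[j] == 1, else 0.

    Jobs are assumed pre-indexed so that u[0]/w[0] <= ... <= u[n-1]/w[n-1];
    by Corollary 1 the selected jobs are then scheduled in index order.
    """
    t, total = 0, 0
    for j in range(len(u)):
        if x[j]:
            t += u[j]             # job j completes at time t
            total += w[j] * t
        else:
            total += v[j] * u[j]  # compression penalty for p_j = 0
    return total

def brute_force_opt(u, w, v):
    """Exhaustive search over all 2^n selections (illustration only)."""
    n = len(u)
    return min(twc(u, w, v, x) for x in product((0, 1), repeat=n))
```

For instance, with u = (1, 2), w = (2, 2), v = (3, 1), selecting only the first job gives the optimal value 4.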
Our goal is threefold. First, to show in Section 2 that scheduling with controllable processing times is polynomially equivalent to minimizing a special subclass of half-products. We define the latter in Section 2; it suffices to mention here that the equivalence immediately implies that the problem of minimizing TWC is NP-hard in the ordinary sense. Second, to present in Section 3 two fast fully polynomial time approximation schemes (FPTAS) for the problem of minimizing TWC; see Garey and Johnson [2] for the definition of an FPTAS. The schemes generalize a well-known FPTAS proposed for half-product minimization by Badics and Boros [1]. One runs in $O(n^2 \log U/\varepsilon)$ time, where $U = \sum_{j=1}^{n} u_j$; the other in $O(n^2 \log W/\varepsilon)$ time, where $W = \sum_{j=1}^{n} w_j$. Finally, to briefly discuss, in Section 4, prospects of using the FPTAS developed in this paper to improve the efficiency of the existing FPTASes for a special subclass of positive half-products.
2 Scheduling with controllable processing times and half-product minimization
The half-product is a pseudo-Boolean function of the form
$$H(x) = H(x_1, \ldots, x_n) = D + \sum_{1 \le i < j \le n} a_i b_j x_i x_j - \sum_{1 \le i \le n} c_i x_i,$$
where xj ∈ {0, 1}, j = 1, . . . , n, a = (a1, . . . , an−1) and b = (b2, . . . , bn) are vectors of non-negative integers, c = (c1, . . . , cn) is an arbitrary integer vector, and D is an integer.
Denote by x∗ = (x∗1 , . . . , x∗n ) a 0-1 vector minimizing H(x).
The half-product was introduced by Badics and Boros [1] for D = 0, and independently
by Kubiak [6]. It has attracted attention since a number of scheduling problems can be
recast as half-product minimization problems, see Kubiak [7].
Theorem 1 The problem of minimizing TWC and the problem of minimizing half-products with a2/b2 ≤ · · · ≤ an−1/bn−1 are polynomially equivalent.

Proof. Let vectors w = (w1, . . . , wn), u = (u1, . . . , un), and v = (v1, . . . , vn) make up an instance of the problem of minimizing TWC. Define a half-product as follows:
$$TWC(x) = \sum_{j=1}^{n} w_j x_j \sum_{i=1}^{j} u_i x_i + \sum_{j=1}^{n} v_j u_j (1 - x_j) = \sum_{1 \le i < j \le n} u_i w_j x_i x_j - \sum_{j=1}^{n} u_j (v_j - w_j) x_j + \sum_{j=1}^{n} v_j u_j, \qquad (1)$$
where obviously u2/w2 ≤ · · · ≤ un−1/wn−1. Let us set xj to 1 if pj = uj and to 0 if pj = 0. By Corollary 1, there always is an optimal p∗ which translates this assignment into an optimal x∗. Moreover, both problems have the same optimal value.
Now, let
$$H(x) = D + \sum_{1 \le i < j \le n} a_i b_j x_i x_j - \sum_{1 \le i \le n} c_i x_i$$
be a half-product. Define an instance of the TWC minimization problem as follows: uj = aj and wj = M bj for j = 1, . . . , n, where $M = \prod_{j=1}^{n} a_j$, $b_1 = \lceil a_1 b_2 / a_2 \rceil$ and $a_n = \lceil a_{n-1} b_n / b_{n-1} \rceil$, and $v_j = M(b_j + c_j/a_j)$ for j = 1, . . . , n. The multiplier M is chosen such that all vj are integer. By the definition of b1 and an as well as the inequalities a2/b2 ≤ · · · ≤ an−1/bn−1, the vectors w = (w1, . . . , wn), u = (u1, . . . , un), and v = (v1, . . . , vn) make up an instance of the problem of minimizing TWC with u1/w1 ≤ · · · ≤ un/wn. Let us set pj to uj if xj = 1 and to 0 if xj = 0. By Corollary 1, there always is an optimal x∗ which translates this assignment into an optimal p∗. Moreover, the optimal value of the TWC minimization problem is equal to $M[H(x^*) - D + \sum_{1 \le j \le n}(c_j + a_j b_j)]$.
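The expansion in (1) can be checked numerically. A small sketch (function names are ours, for illustration) evaluates both sides of the identity on every 0-1 vector:

```python
from itertools import product

def twc_direct(u, w, v, x):
    # Left-hand side of (1): sum_j w_j x_j sum_{i<=j} u_i x_i + sum_j v_j u_j (1 - x_j)
    n = len(u)
    return (sum(w[j] * x[j] * sum(u[i] * x[i] for i in range(j + 1)) for j in range(n))
            + sum(v[j] * u[j] * (1 - x[j]) for j in range(n)))

def twc_half_product(u, w, v, x):
    # Right-hand side of (1): the half-product form
    n = len(u)
    quad = sum(u[i] * w[j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
    lin = sum(u[j] * (v[j] - w[j]) * x[j] for j in range(n))
    const = sum(v[j] * u[j] for j in range(n))
    return quad - lin + const

u, w, v = [1, 2, 4], [2, 3, 5], [3, 1, 2]
assert all(twc_direct(u, w, v, x) == twc_half_product(u, w, v, x)
           for x in product((0, 1), repeat=3))
```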
It follows immediately from Theorem 1 that TWC minimization is NP-hard in the ordinary sense since half-product minimization with a2/b2 ≤ · · · ≤ an−1/bn−1 is NP-hard; see Jurisch, Kubiak and Józefowska [4]. Recently, Wan, Yen and Li [11] have independently proved that the problem of minimizing TWC is NP-hard.
The half-product TWC(x) given by (1) admits a pair of dynamic programming algorithms; see Jurisch, Kubiak and Józefowska [4]. One runs in $O(n \sum_{j=1}^{n} w_j)$ time, and thus solves the TWC minimization problem with weights wj = 1, j = 1, . . . , n, in O(n²) time, which is faster than the O(n³)-time assignment algorithm of Vickson [9]. The latter, however, solves a more general problem with arbitrary lj, j = 1, . . . , n. The other algorithm runs in $O(n \sum_{j=1}^{n} u_j)$ time, and thus solves the problem with processing times in [0, 1], i.e., uj = 1, j = 1, . . . , n, in O(n²) time. Finally, it is clear from the definition of TWC(x) that vj ≤ wj, j = 1, . . . , n, implies p∗j = 0, j = 1, . . . , n.
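A dynamic program of the $O(n \sum_{j=1}^{n} u_j)$ flavour can be sketched as follows. This is an illustration in the spirit of [4], not necessarily the exact algorithm given there: the state is the total processing time of the jobs selected so far, with jobs pre-indexed by non-decreasing uj/wj.

```python
def dp_min_twc(u, w, v):
    """Pseudo-polynomial DP over the total processing time of selected jobs.

    Jobs are assumed pre-indexed so that u[0]/w[0] <= ... <= u[n-1]/w[n-1].
    Runs in O(n * sum(u)) time.
    """
    INF = float("inf")
    U = sum(u)
    dp = [INF] * (U + 1)   # dp[t] = min cost with selected jobs finishing at t
    dp[0] = 0
    for j in range(len(u)):
        ndp = [INF] * (U + 1)
        for t in range(U + 1):
            if dp[t] == INF:
                continue
            # reject job j (p_j = 0): pay the compression penalty v_j * u_j
            ndp[t] = min(ndp[t], dp[t] + v[j] * u[j])
            # select job j (p_j = u_j): it completes at time t + u_j
            t2 = t + u[j]
            if t2 <= U:
                ndp[t2] = min(ndp[t2], dp[t] + w[j] * t2)
        dp = ndp
    return min(dp)
```

On the two small instances used above, the DP returns the same optima as exhaustive enumeration.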
Badics and Boros [1] derived an FPTAS for the half-product minimization problem with D = 0. However, their scheme cannot be directly used as an FPTAS for TWC minimization because adding the constant $D = \sum_{j=1}^{n} v_j u_j$ can significantly decrease the absolute value of the optimum for some instances of the half-product minimization problem. To explain this, we begin with the following result.

Lemma 3 For any positive rational function f of n, there always is an instance of the TWC(x) minimization problem such that |TWC(x∗) − D|/|TWC(x∗)| > f(n), where $D = \sum_{j=1}^{n} v_j u_j$, for an optimal x∗.
Proof. We first observe that TWC(x∗) > 0 and TWC(x∗) − D ≤ TWC(0, . . . , 0) − D = 0. Therefore, the inequality in the statement of the lemma can be written as
$$D > (f(n) + 1)\, TWC(x^*).$$
Obviously,
$$\sum_{1 \le i \le j \le n} u_i w_j = TWC(1, \ldots, 1) \ge TWC(x^*).$$
Now, consider an instance with vj = 2⌈f(n) + 1⌉(wj + . . . + wn) for j = 1, . . . , n. We have
$$D = 2\lceil f(n) + 1 \rceil\, TWC(1, \ldots, 1)$$
for this instance, and thus the lemma holds.
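The construction in the proof can be checked on a concrete instance. In this sketch (our own illustrative values, with f treated as a constant), the compression weights are chosen as in the proof and the gap of Lemma 3 is verified by brute force:

```python
from itertools import product
from math import ceil

def twc_value(u, w, v, x):
    # TWC for a 0-1 selection, jobs pre-indexed by non-decreasing u_j / w_j
    t, val = 0, 0
    for j in range(len(u)):
        if x[j]:
            t += u[j]
            val += w[j] * t
        else:
            val += v[j] * u[j]
    return val

u, w, f = [1, 2, 3], [3, 3, 3], 5                    # f plays the role of f(n)
v = [2 * ceil(f + 1) * sum(w[j:]) for j in range(len(u))]
opt = min(twc_value(u, w, v, x) for x in product((0, 1), repeat=len(u)))
D = sum(vj * uj for vj, uj in zip(v, u))
assert (D - opt) / opt > f                           # the gap of Lemma 3
```

Here D = 360 while the optimum is 30, so the constant D dominates the optimal objective value by far more than the factor f.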
Let $x^0$ be an ε-approximate solution to the problem of minimizing TWC(x) − D. We have
$$\Delta = \frac{TWC(x^0) - TWC(x^*)}{|TWC(x^*) - D|} \le \varepsilon.$$
It follows from Lemma 3 that
$$\frac{TWC(x^0) - TWC(x^*)}{TWC(x^*)} > f(n)\, \Delta$$
for some instances. Therefore, an ε-approximate solution to the problem of minimizing TWC(x) − D obtained by the FPTAS of Badics and Boros [1] cannot be used to obtain an f(n)ε-approximate solution to the problem of minimizing TWC(x) for any rational function f(n), a polynomial in particular. Consequently, we need an FPTAS different from that of Badics and Boros. Such an FPTAS is presented in the following section.
3 Positive half-products and their FPTAS
Consider any half-product
$$H(x) = D + \sum_{1 \le i < j \le n} a_i b_j x_i x_j - \sum_{1 \le i \le n} c_i x_i.$$
Let N = {i : ci < 0} and P = {i : ci ≥ 0}. We can rewrite H(x) as follows:
$$H(x) = D - \sum_{i \in P} c_i + \sum_{1 \le i < j \le n} a_i b_j x_i x_j + \sum_{i \in P} c_i (1 - x_i) + \sum_{i \in N} (-c_i) x_i,$$
with all coefficients standing at the variables or their products being non-negative. We refer to a half-product as a positive half-product if the constant $D - \sum_{i \in P} c_i \ge 0$. Thus, the
positive half-products are pseudo-Boolean functions of the form
$$F(x) = \sum_{1 \le i < j \le n} a_i b_j x_i x_j + \sum_{j=1}^{n} h_j (1 - x_j) + \sum_{j=1}^{n} g_j x_j + d,$$
where all coefficients are non-negative integers.
TWC(x) is a positive half-product since we have TWC(x) = F(x) by setting d = 0, aj = uj, bj = wj, hj = uj vj and gj = uj wj, j = 1, . . . , n; see (1).
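The rewriting of an arbitrary half-product into the positive form can be sketched as follows (function names are ours, for illustration). Both evaluations agree on every 0-1 vector:

```python
from itertools import product

def half_product(a, b, c, D, x):
    """H(x) = D + sum_{i<j} a_i b_j x_i x_j - sum_i c_i x_i.

    a and b are padded to length n; a[n-1] and b[0] never occur in the sum.
    """
    n = len(c)
    quad = sum(a[i] * b[j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
    return D + quad - sum(c[i] * x[i] for i in range(n))

def positive_form(a, b, c, D, x):
    """Same value, written with non-negative coefficients only (P/N split)."""
    n = len(c)
    h = [max(ci, 0) for ci in c]    # h_i = c_i for i in P = {i : c_i >= 0}
    g = [max(-ci, 0) for ci in c]   # g_i = -c_i for i in N = {i : c_i < 0}
    d = D - sum(h)                  # non-negative iff H is a positive half-product
    quad = sum(a[i] * b[j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
    return (quad + sum(h[i] * (1 - x[i]) for i in range(n))
                 + sum(g[i] * x[i] for i in range(n)) + d)

a, b, c, D = [2, 1, 0], [0, 3, 2], [5, -4, 1], 7
assert all(half_product(a, b, c, D, x) == positive_form(a, b, c, D, x)
           for x in product((0, 1), repeat=3))
```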
We now develop an FPTAS for the problem of F(x) minimization, which obviously applies directly to the problem of minimizing TWC. We start with a simple decomposition result for F(x); see also Badics and Boros [1].
Lemma 4 For any x and k = 1, . . . , n, we have
$$F(x) = F_{1,k}(x) + a_{1,k}(x)\, b_{k+1,n}(x) + F_{k+1,n}(x) + d,$$
where
$$F_{1,k}(x) = \sum_{1 \le i < j \le k} a_i b_j x_i x_j + \sum_{j=1}^{k} h_j (1 - x_j) + \sum_{j=1}^{k} g_j x_j,$$
$$F_{k+1,n}(x) = \sum_{k+1 \le i < j \le n} a_i b_j x_i x_j + \sum_{j=k+1}^{n} h_j (1 - x_j) + \sum_{j=k+1}^{n} g_j x_j,$$
$$a_{1,k}(x) = \sum_{j=1}^{k} a_j x_j, \qquad b_{k+1,n}(x) = \sum_{j=k+1}^{n} b_j x_j.$$
Proof. Straightforward algebraic manipulation.
Though F(x) is a pseudo-Boolean function on binary vectors, we prefer to view it as a function on finite words over the two-letter alphabet {0, 1} in our subsequent presentation, which needs to discuss binary vectors of varying dimension. Let {0, 1}∗ be the set of all finite words over the alphabet {0, 1}, with the empty word Λ included. Let |x| be the length of x, i.e., the number of letters in x ∈ {0, 1}∗. For a word x = x1x2 . . . xn, we call the word x1x2 . . . xk the k-prefix of x, and the word xk+1 . . . xn the (n − k)-suffix of x, for k = 0, 1, . . . , n. The 0-prefix and 0-suffix are the empty word. The concatenations x0 and x1 denote the word x extended by 0 and 1, respectively.
Our FPTAS trims the solution space using a general approach developed by Badics and Boros [1], and Kovalyov and Kubiak [5], for half-products and decomposable partition problems. The scheme takes an instance of a positive half-product and ε as its inputs and iteratively, starting with the empty word Λ, builds a solution to the half-product minimization. At iteration k, selected words of length k are partitioned into subsets to ensure that each subset includes only those words that are δ-close to each other; more precisely, for any two words x and y in the same subset the algorithm ensures
$$|a_{1,k}(x) - a_{1,k}(y)| \le \delta \min\{a_{1,k}(x), a_{1,k}(y)\}$$
for some positive δ dependent on ε and n, to be defined later. Then, F1,k(·) is used to select a single word x from each subset of the partition: the word with the smallest value of F1,k(·) among all words in the same subset. Only the selected words pass to iteration k + 1, where each word is extended by concatenating either 0 or 1 at its end, and the iteration repeats. Finally, when k reaches n, the algorithm stops, selecting a word with the minimum value of F(·) among all words that reached iteration n. The details of the algorithm are as follows.
Algorithm Aε.
Step 1. (Initialization) Calculate δ > 0 such that (1 + δ)^n = 1 + ε. Set k = 1 and X0 = {Λ}.
Step 2. (Recursive filtering) Construct the set Yk = {x0, x1 | x ∈ Xk−1}. Calculate a1,k(x) and F1,k(x) for each x ∈ Yk. If k = n, then set Xn = Yn and go to Step 3. Otherwise, partition Yk into subsets Yr,k, r = 1, . . . , sk, such that
$$|a_{1,k}(x) - a_{1,k}(y)| \le \delta \min\{a_{1,k}(x), a_{1,k}(y)\}$$
for any x and y in the same subset. From each subset Yr,k, select a vector xr,k such that F1,k(xr,k) = min{F1,k(x) | x ∈ Yr,k}. Set Xk = {xr,k | r = 1, . . . , sk}, k = k + 1, and repeat Step 2.
Step 3. (ε-approximate solution) Select a solution xε ∈ Xn such that F(xε) = min{F(x) | x ∈ Xn} and stop.
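A compact implementation of Aε might look as follows. This is a sketch under the paper's definitions (the names and the state representation are ours): F is given by the coefficient lists a, b, h, g and the constant d, each state stores the pair (a1,k(x), F1,k(x)) of a surviving k-prefix, and only the objective value is returned.

```python
def algorithm_a_eps(a, b, h, g, d, eps):
    """FPTAS sketch for F(x) = sum_{i<j} a_i b_j x_i x_j
    + sum_j h_j (1 - x_j) + sum_j g_j x_j + d, following Steps 1-3."""
    n = len(a)
    delta = (1 + eps) ** (1.0 / n) - 1           # (1 + delta)^n = 1 + eps
    states = [(0, 0)]                            # (a_1k(x), F_1k(x)) per kept prefix
    for k in range(n):
        # extend every kept prefix by 0 and by 1 (construction of Y_k in Step 2)
        cand = []
        for av, fv in states:
            cand.append((av, fv + h[k]))                     # next letter is 0
            cand.append((av + a[k], fv + av * b[k] + g[k]))  # next letter is 1
        cand.sort()                              # a-order
        if k == n - 1:
            states = cand                        # X_n = Y_n
            break
        # partition delta-close candidates; keep a min-F representative of each group
        states, i = [], 0
        while i < len(cand):
            j, best = i, cand[i]
            while j + 1 < len(cand) and cand[j + 1][0] <= (1 + delta) * cand[i][0]:
                j += 1
                best = min(best, cand[j], key=lambda s: s[1])
            states.append(best)
            i = j + 1
    return min(fv for _, fv in states) + d       # Step 3
```

For the TWC instance of (1) with u = (1, 2, 4), w = (2, 3, 5), v = (3, 1, 2), i.e. a = u, b = w, hj = uj vj, gj = uj wj and d = 0, the returned value lies within a factor 1 + ε of the optimum 12.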
We now show that algorithm Aε produces a solution xε of the required relative error ε. The algorithm's complexity is established in Theorem 3, where an efficient implementation of Step 2 is discussed in detail.
Theorem 2 Algorithm Aε finds xε ∈ Xn such that F (xε ) − F (x∗ ) ≤ εF (x∗ ).
Proof. For an optimal x∗ , let x(0) , . . . , x(n) be n + 1 words of length n each such that
(a) k-prefix of x(k) is in Xk , for k = 0, . . . , n,
(b) both x(k) and x∗ share the same (n − k)-suffix, for k = 0, . . . , n,
(c) k-prefixes of x(k−1) and x(k) are in the same Yr,k , for k = 1, . . . , n.
By (a) and (b), x(0) = x∗ .
Our proof relies on inequalities (2), (3) and (5), which we now prove.
First, since all coefficients in F(x) are non-negative, we have
$$a_{1,k}(x^*)\, b_{k+1,n}(x^*) \le F(x^*). \qquad (2)$$
Second, we have
$$a_{1,k}(x^{(k-1)}) \le (1 + \delta)^{k-1} a_{1,k}(x^*), \quad k = 1, \ldots, n. \qquad (3)$$
We prove this inequality by induction on k. For k = 1, (3) holds since x(0) = x∗. Assume that (3) holds for 1 ≤ k ≤ n − 1. Let us prove that (3) holds for k + 1. By (c), the k-prefixes of x(k−1) and x(k) are in the same subset Yr,k, thus we have
$$a_{1,k}(x^{(k)}) \le (1 + \delta)\, a_{1,k}(x^{(k-1)}), \quad k = 1, \ldots, n. \qquad (4)$$
Then
$$a_{1,k+1}(x^{(k)}) = a_{1,k}(x^{(k)}) + a_{k+1} x^*_{k+1} \le (1 + \delta)\, a_{1,k}(x^{(k-1)}) + a_{k+1} x^*_{k+1} \le (1 + \delta)^{k} a_{1,k}(x^*) + a_{k+1} x^*_{k+1} \le (1 + \delta)^{k} a_{1,k+1}(x^*).$$
Here, the first equation follows from the definitions of Lemma 4, the first inequality follows from (4), the second one follows from the inductive assumption, and the last one again from the definitions of Lemma 4.
Third, we have
$$F(x^{(k)}) - F(x^{(k-1)}) \le \delta (1 + \delta)^{k-1} F(x^*). \qquad (5)$$
To prove it, we observe that by the definitions of F(x) and x(k), we have
$$F(x^{(k)}) - F(x^{(k-1)}) = F_{1,k}(x^{(k)}) - F_{1,k}(x^{(k-1)}) + \big(a_{1,k}(x^{(k)}) - a_{1,k}(x^{(k-1)})\big)\, b_{k+1,n}(x^*).$$
By (c), the k-prefixes of x(k−1) and x(k) are in the same subset Yr,k. Consequently, a1,k(x(k)) − a1,k(x(k−1)) ≤ δ a1,k(x(k−1)). Moreover, the minimum value of F1,k over all vectors in Yr,k is attained at x(k); thus, F1,k(x(k)) ≤ F1,k(x(k−1)). Therefore, by (2) and (3),
$$F(x^{(k)}) - F(x^{(k-1)}) \le \delta\, a_{1,k}(x^{(k-1)})\, b_{k+1,n}(x^*) \le \delta (1 + \delta)^{k-1} a_{1,k}(x^*)\, b_{k+1,n}(x^*) \le \delta (1 + \delta)^{k-1} F(x^*).$$
We are now ready to prove the theorem. We have F(xε) ≤ F(x(n)) and x(0) = x∗. Therefore, by (5) and the definition of δ, (1 + δ)^n = 1 + ε, we have
$$F(x^\varepsilon) - F(x^*) \le F(x^{(n)}) - F(x^{(0)}) = \sum_{k=1}^{n} \big(F(x^{(k)}) - F(x^{(k-1)})\big) \le \delta F(x^*) \sum_{k=1}^{n} (1 + \delta)^{k-1} = \varepsilon F(x^*),$$
which completes the proof.
Theorem 3 Algorithm Aε can be implemented to run in O(n² log A/ε) time, where $A = \sum_{j=1}^{n} a_j$.

Proof. The key to the complexity of Aε is the implementation of the partitioning of the set Yk in Step 2. There, we arrange the words in Yk in ascending order of their a1,k(·) values, which we call an a-order, so that 0 ≤ a1,k(y1) ≤ a1,k(y2) ≤ . . . ≤ a1,k(y|Yk|). Then, we assign y1, y2, . . . , yi1 to the set Y1,k until detecting i1 such that a1,k(yi1) ≤ (1 + δ)a1,k(y1) and a1,k(yi1+1) > (1 + δ)a1,k(y1). If such an i1 does not exist, then we set Y1,k = Yk and stop. Next, we assign yi1+1, yi1+2, . . . , yi2 to the set Y2,k until detecting i2 such that a1,k(yi2) ≤ (1 + δ)a1,k(yi1+1) and a1,k(yi2+1) > (1 + δ)a1,k(yi1+1). If such an i2 does not exist, then we set Y2,k = Yk \ Y1,k and stop. We continue this partitioning until y|Yk| is included in Ysk,k, for some sk. It is crucial for the complexity to notice here that if Xk−1 is a-ordered, then both {x0 | x ∈ Xk−1} and {x1 | x ∈ Xk−1} inherit its a-order, and merging them yields the a-ordered set Yk in linear time. Moreover, the selection of a single vector from each set of the partition Y1,k, . . . , Ysk,k preserves the a-order, which results in an a-ordered Xk. Consequently, the a-order of words is an invariant of Step 2, and therefore the step can be implemented in O(|Yk|) time, and the whole algorithm in $O(\sum_{k=1}^{n} |Y_k|)$ time. Furthermore, we have |Yk| = 2|Xk−1| = 2sk−1 and sk ≤ K + 1, k = 1, . . . , n, where K is the smallest integer that satisfies (1 + δ)^K ≥ A. Consequently, the algorithm runs in O(nK) time, and it remains to estimate the value of K. We have K ≤ ⌈log A/ log(1 + δ)⌉. From the relationship between ε and δ defined in Step 1 of the algorithm, we have log(1 + δ) = log(1 + ε)/n. Since log(1 + ε) ≥ ε/2 for 0 < ε ≤ 1, we obtain K = O(n log A/ε). Notice that if ε > 1, then a 1-approximate solution can be taken as an ε-approximate solution, and so we may assume 0 < ε ≤ 1 without loss of generality. Thus, the algorithm runs in the required O(n² log A/ε) time.
Theorems 2 and 3 prove that Aε is an FPTAS for any positive half-product, and for the problem with controllable job processing times in particular.
Another scheme, with time complexity O(n² log B/ε), where $B = \sum_{j=1}^{n} b_j$, can be derived in a similar way as Aε. The scheme relies on the values bk+1,n(x) and Fk+1,n(x) for its recursive filtering in Step 2, and builds the word xn xn−1 . . . x1 starting from the empty word Λ.
4 Conclusions and further research
We have shown that the single machine scheduling problem with controllable job processing times is polynomially equivalent to the problem of minimizing a special subclass of half-products, namely, positive half-products. This immediately proves not only that the former problem is NP-hard but also that it can be solved in pseudo-polynomial time by the dynamic programs proposed earlier for half-product minimization; see [4]. We have also developed a couple of fully polynomial time approximation schemes for the problem with controllable processing times. The schemes apply to a general class of problems, called positive half-products, that we have also introduced in this paper. The class includes, for instance, the two machine weighted completion time problem, and it is very likely to include many more scheduling problems. The search for them seems an exciting and practically important topic for further research, since it may ultimately lead to more efficient approximation schemes, based on the schemes presented in this paper, for many scheduling positive half-product problems.
Acknowledgment
M.Y. Kovalyov was supported in part by INTAS under grant number 00-217. W. Kubiak has been supported by the Natural Sciences and Engineering Research Council of Canada under Research Grant OGP0105675. The authors would like to thank the anonymous referees for their constructive comments, which resulted in an improved paper.
References
[1] T. Badics and E. Boros, Minimization of half-products, Mathematics of Operations
Research 23 (1998), 649–660.
[2] M.R. Garey and D.S. Johnson, Computers and intractability: a guide to the theory of
NP-completeness, W.H. Freeman and Co., San Francisco, 1979.
[3] A. Janiak, Scheduling and resource allocation problems in some flow type manufacturing processes. In: Modern production concepts, G. Fandel and G. Zapfel (Eds.),
Springer-Verlag, Berlin, 1991, 404–415.
[4] B. Jurisch, W. Kubiak and J. Józefowska, Algorithms for minclique scheduling problems, Discrete Applied Mathematics 72 (1997), 115–139.
[5] M.Y. Kovalyov and W. Kubiak, Fully polynomial approximation schemes for decomposable partition problems, Operations Research Proceedings 1999, Selected papers of
the Symposium on Operations Research (SOR’99), Magdeburg, September 1-3, 1999,
397–401.
[6] W. Kubiak, New results on the completion time variance minimization, Discrete Applied Mathematics 58 (1995), 157–168.
[7] W. Kubiak, Minimization of ordered, symmetric half-products, submitted for publication (2001).
[8] W.E. Smith, Various optimizers for single-stage production, Naval Research Logistics
Quarterly 3 (1956), 59–66.
[9] R.G. Vickson, Two single machine sequencing problems involving controllable job processing times, AIIE Transactions 12 (1980), 258–262.
[10] R.G. Vickson, Choosing the job sequence and processing times to minimize total processing plus flow cost on a single machine, Operations Research 28 (1980), 1155–1167.
[11] G. Wan, B.P.C. Yen and C.L. Li, Single machine scheduling to minimize total compression plus weighted flow cost is NP-hard, Information Processing Letters 79 (2001),
273–280.
[12] T.J. Williams (ed.), Analysis and design of hierarchical control systems. North-Holland, Amsterdam, 1986.