Single-machine scheduling with learning effects in intermittent batch production
Dar-Li Yang and Wen-Hung Kuo*
Department of Information Management,
National Formosa University,
Yun-Lin, Taiwan 632, R.O.C.
Abstract
This paper studies a single-machine scheduling problem with three models of learning
and forgetting effects in intermittent batch production: the models of no
transmission, partial transmission, and total transmission of learning from batch to
batch, respectively. These phenomena exist in many realistic production systems.
The objective is to minimize the makespan. We provide polynomial time algorithms
to solve the problems with the models of no transmission and partial transmission of
learning from batch to batch, respectively. We also provide two polynomial time
algorithms to find the optimal solutions of two special cases of the problem with the
model of total transmission of learning from batch to batch.
Keywords: single-machine; scheduling; intermittent; learning; makespan
* Correspondence: WH Kuo, Department of Information Management, National Formosa University,
Yun-Lin, Taiwan 632, ROC.
E-mail address: whkuo@nfu.edu.tw
1. Introduction
In classical scheduling problems, processing times of jobs are assumed to be
constant. However, in many realistic situations, because firms and employees
perform the same task repeatedly, they learn how to perform it more efficiently.
Therefore, the actual processing time of a job is shorter when it is scheduled later,
rather than earlier in the sequence. This phenomenon is known as the “learning effect”
in the literature.
Biskup [1] first proposed a learning effect model in which the processing time of a
job is a function of the job position in a sequence. He showed that single-machine
scheduling problems with a learning effect still remain polynomially solvable if the
objective is to minimize the deviation from a common due date or to minimize the
sum of flow times. Mosheiov [2] provided a polynomial time solution for the
single-machine makespan minimization problem and solved two multi-criteria
problems which can be formulated as assignment problems. He also showed that the
SPT (the shortest processing time first) rule does not remain optimal for the minimum
flow-time problem on parallel identical machines. Mosheiov [3] further showed that
the flow-time minimization problem with the learning effect on parallel identical
machines has a solution which is polynomial in the number of jobs. Mosheiov and
Sidney [4] extended the learning effect to be job-dependent, that is, learning rates are
different from job to job. They showed that the problems of makespan and total
flow-time minimization on a single machine, a due-date assignment problem and total
flow-time minimization on unrelated parallel machines remain polynomially solvable.
The above learning effect models tell us that, as the number of completed jobs increases,
the processing time of the following job decreases in a continuous production system.
This implies continuous rather than intermittent batch production.
However, the pattern of many realistic production systems is intermittent. There is a
setup between two production runs. In such intermittent production, it is reasonable to
assume that if a large amount of time has elapsed between production runs, the
learning effect would not continue from where it left off when production resumes;
rather, the processing time of the following job would revert to a higher level. This
suggests a phenomenon of forgetting between production runs. That is, there may be
only partial or even no transmission of learning from one production run to the next.
Hence, this problem of learning and forgetting is an interesting
topic for intermittent production.
2. Notations and assumptions
As mentioned above, we consider a single-machine scheduling problem with
learning and forgetting effects in intermittent batch production. The problem is
developed using the following notation; additional notation will be introduced
when needed throughout the paper.
$m$ : the number of batches ($m \ge 2$).
$B_i$ : the $i$th batch, $i = 1, 2, \ldots, m$.
$n_i$ : the number of jobs in batch $B_i$, $i = 1, 2, \ldots, m$.
$n$ : the total number of jobs (i.e. $n_1 + n_2 + \cdots + n_m = n$).
$J_{ij}$ : the $j$th job in batch $B_i$, $j = 1, 2, \ldots, n_i$.
$s_i$ : the setup time of batch $B_i$.
$a_i$ : the learning factor of jobs within batch $B_i$ ($a_i \le 0$).
$b_i$ : the learning factor of batch $B_i$ ($b_i \le 0$).
$p_{ij}$ : the normal processing time of $J_{ij}$ in the original sequence.
$p_{ijr}$ : the actual processing time of $J_{ij}$ when scheduled in the $r$th position in a sequence in batch $B_i$.
$p_{i[k]}$ : the normal processing time of $J_{i[k]}$, the job scheduled in the $k$th position in a sequence in batch $B_i$.
$p_{i[k]k}$ : the actual processing time of $J_{i[k]}$, the job scheduled in the $k$th position in a sequence in batch $B_i$.
$C_{ij}$ : the completion time of $J_{ij}$.
$C_{i[k]}$ : the completion time of $J_{i[k]}$, the job scheduled in the $k$th position in a sequence in batch $B_i$.
$C_{\max}$ : the makespan of all jobs.
There are n jobs grouped into m batches and processed on a single machine. All
jobs are available at time zero. Assume that there is no setup time between two
consecutive jobs in the same batch. However, a setup time is required to process a
batch and it is sequence-independent. Moreover, the normal processing time of a job
is incurred if the job is scheduled first in the first production batch. The actual
processing times of the following jobs are smaller than their normal processing times
because of the learning effect. Assume that the actual processing time of a job is a
decreasing function of its position in a sequence. Usually, the learning effect
accumulates as jobs are completed. However, if a large amount of setup time has
elapsed between production runs, a forgetting effect may occur. That is, there may
be partial or even no transmission of learning from batch to batch. Therefore, three
models of learning and forgetting effects are considered in the following.
3. Model I: No transmission of learning from batch to batch
In the first model, we consider that there is no transmission of learning from batch
to batch. The objective of the single-machine scheduling problem is to minimize the
makespan of all jobs. As mentioned in Biskup [1], we assume that the actual
processing time of job $J_{ij}$, when scheduled in position $r$ in batch $B_i$, is given by
$$p_{ijr} = p_{ij}\, r^{a_i}.$$
Hence, the makespan of all jobs is as follows:
$$C_{\max} = \sum_{i=1}^{m} s_i + \sum_{i=1}^{m} \sum_{j=1}^{n_i} p_{i[j]}\, j^{a_i}.$$
For convenience, let LE denote the learning effect and S denote the existence of a
sequence-independent batch setup time. In addition, let B denote that the problem is
an intermittent batch production problem and $T_{no}$ denote that there is no
transmission of learning from batch to batch. Therefore, following the three-field
notation of Graham et al. [5], the proposed problem is denoted by $1/B, S, LE, T_{no}/C_{\max}$.
Theorem 1. For the problem $1/B, S, LE, T_{no}/C_{\max}$, there exists an optimal
schedule that satisfies the following conditions: (a) the jobs within a batch are
sequenced in non-decreasing order of their normal processing time. (b) the batches
can be sequenced in any order.
Proof. The problem of arranging the job sequence within a batch is the same as the
problem $1/LE/C_{\max}$. Mosheiov [2] proved that the optimal schedule is to
sequence the jobs in non-decreasing order of their normal processing time. Therefore,
the theorem follows because there is no transmission of learning from batch to batch
and the batch setup is sequence-independent.
□
Based on Theorem 1, a simple algorithm to determine the optimal schedule for the
problem $1/B, S, LE, T_{no}/C_{\max}$ is developed as follows.
Algorithm 1.
Step 1: Arrange jobs within each batch in non-decreasing order of their normal
processing times.
Step 2: Schedule the batches in any order.
The optimal job sequence within a certain batch $B_i$ can be obtained by a
sorting algorithm and thus takes $O(n_i \log n_i)$ time. Hence, the total running time
to sequence the jobs of all batches is $\sum_{i=1}^{m} O(n_i \log n_i)$. On the other hand, the running time
to sequence the batches in any order is $O(1)$. Therefore, the overall complexity of the
problem $1/B, S, LE, T_{no}/C_{\max}$ is $O(n \log n)$ (see Kuo and Yang [8]).
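As an illustration of Algorithm 1, the following is a minimal Python sketch (not part of the original formulation): it sorts the jobs within each batch and evaluates the resulting makespan. The batch data structure, function name, and numerical values are hypothetical.

```python
# A minimal sketch of Algorithm 1 for 1/B,S,LE,T_no/C_max (Model I).
# Batch data below are illustrative assumptions, not from the paper.

def makespan_no_transmission(batches):
    """batches: list of (setup time s_i, learning factor a_i, [normal processing times]).
    Jobs in each batch are sequenced in non-decreasing order of normal processing
    time (Theorem 1); the batches themselves may be sequenced in any order."""
    cmax = 0.0
    for s_i, a_i, jobs in batches:
        cmax += s_i
        for r, p in enumerate(sorted(jobs), start=1):
            cmax += p * r ** a_i      # actual processing time p_{i[r]} * r^{a_i}
        # the position counter restarts with the next batch: no transmission of learning
    return cmax

# Hypothetical example: two batches with an 80% learning curve (a_i = -0.322).
example = [(2.0, -0.322, [5.0, 3.0, 4.0]), (1.5, -0.322, [6.0, 2.0])]
print(makespan_no_transmission(example))
```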
4. Model II: Partial transmission of learning from batch to batch
In the second model, we consider partial transmission of learning from batch to
batch in the single-machine scheduling problem. Therefore, there exists some learning
effect between batches besides the learning effect within a batch. We assume that the
learning effect of jobs within a batch is the same as that in the first model. In addition,
the actual processing time of batch $B_i$, when scheduled as the $r$th batch, is defined as follows:
$$P_{ir} = P_i\, r^{b_i},$$
where $P_i$ is the total processing time of the jobs within batch $B_i$ if there is no
transmission of learning from batch to batch; that is, $P_i = \sum_{k=1}^{n_i} p_{i[k]}\, k^{a_i}$.
Then, the makespan of all jobs is calculated as follows:
$$C_{\max} = \sum_{i=1}^{m} \left[ s_i + \left( p_{i[1]} 1^{a_i} + p_{i[2]} 2^{a_i} + \cdots + p_{i[n_i]} n_i^{a_i} \right) i^{b_i} \right] = \sum_{i=1}^{m} \left( s_i + P_i\, i^{b_i} \right). \qquad (1)$$
Let $T_{part}$ denote that there is only partial transmission of learning from batch to
batch. Then the proposed problem is denoted by $1/B, S, LE, T_{part}/C_{\max}$. As in Biskup
[1], let $x_{ir}$ be a 0/1 variable such that $x_{ir} = 1$ if batch $B_i$ is the $r$th batch to be
processed and $x_{ir} = 0$ otherwise. Then the problem $1/B, S, LE, T_{part}/C_{\max}$ can be
formulated as the following assignment problem:
$$\min \ \sum_{i=1}^{m} s_i + \sum_{i=1}^{m} \sum_{r=1}^{m} P_i\, r^{b_i} x_{ir}$$
$$\text{s.t.} \quad \sum_{i=1}^{m} x_{ir} = 1, \quad r = 1, 2, \ldots, m,$$
$$\sum_{r=1}^{m} x_{ir} = 1, \quad i = 1, 2, \ldots, m,$$
$$x_{ir} = 0 \text{ or } 1, \quad i, r = 1, 2, \ldots, m. \qquad (2)$$
Since $P_i$ of batch $B_i$ is not affected by the batch sequence, from Theorem 1,
$P_i$ can be minimized by sequencing the corresponding jobs in non-decreasing order
of their normal processing time. Based on the above analysis, a simple algorithm to
determine the optimal schedule for the problem $1/B, S, LE, T_{part}/C_{\max}$ is
developed as follows.
Algorithm 2.
Step 1: Arrange jobs within each batch in non-decreasing order of their normal
processing times.
Step 2: Formulate the corresponding assignment problem as in Eq. (2) and determine the
batch sequence according to its optimal solution.
From the analysis in Section 3, the complexity of Step 1 is $O(n \log n)$. On the
other hand, Step 2 solves an assignment problem, so its complexity is $O(m^3)$.
Thus, the overall complexity of Algorithm 2 is $O(n \log n + m^3)$.
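To make the two steps concrete, the following Python sketch (illustrative only) sorts the jobs within each batch, builds the $m \times m$ cost matrix $P_i r^{b_i}$ of the assignment problem (2), and solves it with SciPy's linear_sum_assignment routine in place of a hand-coded $O(m^3)$ assignment solver; the data layout, function name, and numerical values are assumptions.

```python
# A minimal sketch of Algorithm 2 for 1/B,S,LE,T_part/C_max (Model II).
import numpy as np
from scipy.optimize import linear_sum_assignment

def schedule_partial_transmission(batches):
    """batches: list of (setup s_i, within-batch factor a_i, batch factor b_i, [normal times]).
    Returns (batch order as indices into `batches`, makespan)."""
    # Step 1: sort jobs within each batch (Theorem 1) and compute P_i.
    P = [sum(p * (k + 1) ** a for k, p in enumerate(sorted(jobs)))
         for (_, a, _, jobs) in batches]
    m = len(batches)
    # Step 2: cost of placing batch i as the (r+1)-th batch is P_i * (r+1)^{b_i}.
    cost = np.array([[P[i] * (r + 1) ** batches[i][2] for r in range(m)]
                     for i in range(m)])
    rows, cols = linear_sum_assignment(cost)              # optimal assignment of (2)
    order = [int(i) for _, i in sorted(zip(cols, rows))]  # batch scheduled in each position
    cmax = sum(b[0] for b in batches) + cost[rows, cols].sum()
    return order, cmax

# Hypothetical example: two batches; learning factors are illustrative values.
example = [(2.0, -0.322, -0.152, [5.0, 3.0]),
           (1.0, -0.322, -0.152, [4.0, 2.0, 6.0])]
print(schedule_partial_transmission(example))
```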
Corollary 1. If the learning factors of all batches are equal, i.e. $b_i = b$, then for the
problem $1/B, S, LE, T_{part}/C_{\max}$, there exists an optimal schedule that satisfies the
following conditions: (a) the jobs within a batch are sequenced in non-decreasing
order of their normal processing time. (b) the batches are sequenced in non-decreasing
order of Pi .
Proof. As stated in Theorem 1, the problem of arranging the job sequence within a batch
is the same as the problem $1/LE/C_{\max}$. Mosheiov [2] proved that the optimal
schedule is to sequence the jobs in non-decreasing order of their normal processing
time. In addition, the term $\sum_{i=1}^{m} s_i$ in Eq. (1) is constant. Therefore, minimizing the
makespan of all jobs is equivalent to minimizing the term $\sum_{i=1}^{m} P_i\, i^{b_i}$ in Eq. (1). Because
$b_i = b$, minimizing $\sum_{i=1}^{m} P_i\, i^{b}$ is equivalent to minimizing the makespan of the
problem $1/LE/C_{\max}$ if the batches are taken as jobs. Then, the result in part (b) of
Corollary 1 follows.
□
From Corollary 1, if $b_i = b$, the complexity of the problem $1/B, S, LE, T_{part}/C_{\max}$
is reduced to $O(n \log n)$.
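For the special case of Corollary 1, the assignment step collapses to a single sort of the batches by $P_i$. The following Python sketch (with a hypothetical data layout and example values) illustrates this.

```python
# A minimal sketch of the Corollary 1 special case: all batches share a common
# learning factor b, so the batches are simply sequenced in non-decreasing P_i.

def schedule_equal_batch_factors(batches, b):
    """batches: list of (setup s_i, within-batch factor a_i, [normal times]);
    b is the common batch learning factor."""
    # Corollary 1(a): sort jobs within each batch, then compute P_i.
    P = [sum(p * (k + 1) ** a for k, p in enumerate(sorted(jobs)))
         for (_, a, jobs) in batches]
    order = sorted(range(len(batches)), key=lambda i: P[i])   # non-decreasing P_i
    cmax = sum(s for s, _, _ in batches) + sum(
        P[i] * (r + 1) ** b for r, i in enumerate(order))
    return order, cmax

# Hypothetical example data.
example = [(2.0, -0.322, [5.0, 3.0]), (1.0, -0.322, [4.0, 2.0, 6.0])]
print(schedule_equal_batch_factors(example, b=-0.152))
```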
5. Model III: Total transmission of learning from batch to batch
In the third model, we consider total transmission of learning from batch to batch
in the single-machine scheduling problem. Without loss of generality, assume that
batch $B_i$ is sequenced as the $i$th batch. Then, the actual processing time of job $J_{ij}$,
when scheduled in position $r$ in batch $B_i$, is as follows:
$$p_{ijr} = p_{ij} \left( r + \sum_{k=1}^{i-1} n_k \right)^{a_i}.$$
Hence, the makespan of all jobs is calculated as follows.
$$C_{\max} = \sum_{i=1}^{m} s_i + \sum_{i=1}^{m} \sum_{r=1}^{n_i} p_{i[r]} \left( r + \sum_{k=1}^{i-1} n_k \right)^{a_i}.$$
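For a fixed batch sequence, the makespan above can be evaluated by letting the position counter run across batch boundaries. The following Python sketch (illustrative, with an assumed data layout and example values) shows this accumulation.

```python
# A minimal sketch of the Model III makespan for a given batch order: the job
# position counter accumulates across batches (total transmission of learning).

def makespan_total_transmission(batches):
    """batches: list of (setup s_i, learning factor a_i, [normal times]) given in
    the order the batches are processed."""
    cmax, done = 0.0, 0        # `done` counts jobs completed in earlier batches
    for s_i, a_i, jobs in batches:
        cmax += s_i
        for r, p in enumerate(sorted(jobs), start=1):   # sorted within each batch
            cmax += p * (r + done) ** a_i               # cumulative position across batches
        done += len(jobs)
    return cmax

# Hypothetical example data.
example = [(2.0, -0.322, [5.0, 3.0]), (1.0, -0.234, [4.0, 2.0, 6.0])]
print(makespan_total_transmission(example))
```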
Let $T_{total}$ denote total transmission of learning from batch to batch. Then the
proposed problem is denoted by $1/B, S, LE, T_{total}/C_{\max}$.
Theorem 2. For any batch sequence of the $1/B, S, LE, T_{total}/C_{\max}$ problem, the total
processing time of the jobs within each batch is minimized by sequencing the jobs in
non-decreasing order of their normal processing time.
Proof. The theorem can be easily proved by a simple pairwise job interchange argument.
□
Corollary 2. For the problem $1/B, S, LE, T_{total}/C_{\max}$, there exists an optimal
schedule in which the jobs within each batch are sequenced in non-decreasing order of their
normal processing time.
Proof. The result follows directly from Theorem 2.
□
In the following, two special cases of the problem $1/B, S, LE, T_{total}/C_{\max}$ are
discussed in Theorem 3 and Algorithm 3, respectively.
Definition 1. $B_i$ is dominated by $B_j$, or $B_j$ dominates $B_i$, iff
$\max\{p_{ik} \mid k = 1, 2, \ldots, n_i\} \le \min\{p_{jk} \mid k = 1, 2, \ldots, n_j\}$. The symbol $B_i \prec B_j$ denotes that
$B_i$ is dominated by $B_j$.
Definition 2. The batches form an increasing sequence of dominating batches iff
$B_1 \prec B_2 \prec \cdots \prec B_m$.
Theorem 3. If $B_1 \prec B_2 \prec \cdots \prec B_m$ and $a_1 \ge a_2 \ge \cdots \ge a_m$, then for the problem
$1/B, S, LE, T_{total}/C_{\max}$, there exists an optimal schedule that satisfies the following
conditions.
(a) the jobs within a batch are sequenced in non-decreasing order of their normal
processing time.
(b) the batches are arranged as an increasing sequence of dominating batches.
Proof. The theorem can be easily proved by a simple pairwise job interchange argument.
□
Next, if the numbers of jobs in all batches are equal (i.e. $n_1 = n_2 = \cdots = n_m = n/m \equiv \bar{n}$),
then the proposed problem can be formulated as an assignment problem. Again, let
$x_{ir}$ be a 0/1 variable such that $x_{ir} = 1$ if batch $B_i$ is the $r$th batch to be processed
and $x_{ir} = 0$ otherwise. Then, the problem of minimizing the makespan of all jobs is
formulated as follows.
$$\min \ \sum_{i=1}^{m} s_i + \sum_{i=1}^{m} \sum_{r=1}^{m} \sum_{j=1}^{\bar{n}} p_{i[j]} \left( (r-1)\bar{n} + j \right)^{a_i} x_{ir}$$
$$\text{s.t.} \quad \sum_{i=1}^{m} x_{ir} = 1, \quad r = 1, 2, \ldots, m,$$
$$\sum_{r=1}^{m} x_{ir} = 1, \quad i = 1, 2, \ldots, m,$$
$$x_{ir} = 0 \text{ or } 1, \quad i, r = 1, 2, \ldots, m. \qquad (3)$$
Based on Theorem 2 and the above analysis, to determine the optimal schedule for
the problem $1/B, S, LE, T_{total}/C_{\max}$ with equal batch sizes, an algorithm similar to
Algorithm 2 is developed as follows.
Algorithm 3.
Step 1: Arrange jobs within each batch in non-decreasing order of their normal
processing times.
Step 2: Formulate the corresponding assignment problem as in Eq. (3) and determine the
batch sequence according to its optimal solution.
Note that Step 1 can be performed with a sorting algorithm and thus takes
$O(n \log n)$ time, while Step 2 solves an assignment problem and thus takes
$O(m^3)$ time. Thus, the overall time complexity of Algorithm 3 is $O(n \log n + m^3)$.
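The following Python sketch (illustrative only) implements Algorithm 3 for equal batch sizes: it builds the cost matrix of Eq. (3) and again delegates the assignment step to SciPy's linear_sum_assignment; the data layout, function name, and example values are assumptions.

```python
# A minimal sketch of Algorithm 3 for 1/B,S,LE,T_total/C_max with equal batch sizes.
import numpy as np
from scipy.optimize import linear_sum_assignment

def schedule_total_transmission_equal_sizes(batches):
    """batches: list of (setup s_i, learning factor a_i, [normal times]);
    every batch must contain the same number of jobs."""
    n_bar = len(batches[0][2])
    assert all(len(b[2]) == n_bar for b in batches), "batch sizes must be equal"
    m = len(batches)
    sorted_jobs = [sorted(b[2]) for b in batches]       # Step 1 (Theorem 2)
    # Step 2: cost of batch i in position r+1 is sum_j p_{i[j]} * (r*n_bar + j)^{a_i}.
    cost = np.array([[sum(p * (r * n_bar + j) ** batches[i][1]
                          for j, p in enumerate(sorted_jobs[i], start=1))
                      for r in range(m)]
                     for i in range(m)])
    rows, cols = linear_sum_assignment(cost)
    order = [int(i) for _, i in sorted(zip(cols, rows))]
    cmax = sum(b[0] for b in batches) + cost[rows, cols].sum()
    return order, cmax

# Hypothetical example: two batches of two jobs each.
example = [(2.0, -0.322, [5.0, 3.0]), (1.0, -0.234, [4.0, 2.0])]
print(schedule_total_transmission_equal_sizes(example))
```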
6. Conclusions
This paper studies a single-machine scheduling problem with three models of
learning and forgetting effects in intermittent batch production. They are the models
of no transmission, partial transmission and total transmission of learning from batch
to batch, respectively. The objective is to minimize the makespan. We provide a
polynomial time algorithm to solve the problems with the models of no transmission
and partial transmission of learning from batch to batch, respectively. We also provide
two polynomial time algorithms to find the optimal solutions of two special cases of
the problem with the model of total transmission of learning from batch to batch.
However, the complexity of the general problem with the model of total transmission of
learning from batch to batch remains an open question and is therefore an interesting
topic for future research. Moreover, in this study, the learning effect of a job depends on
its position in a schedule. There are other learning effect models studied in the
literature [6,7]. Thus, it is also worthwhile to consider other learning effect models
in intermittent batch production, for example, the time-dependent learning effect model
proposed by Kuo and Yang [8,9].
Acknowledgements
This research is supported in part by the National Science Council of Taiwan,
Republic of China, under grant number NSC-94-2213-E-150-016.
References
1. Biskup D (1999). Single-machine scheduling with learning considerations. Eur J
Opl Res 115: 173-178.
2. Mosheiov G (2001a). Scheduling problems with learning effect. Eur J Opl Res 132:
687-693.
3. Mosheiov G (2001b). Parallel machine scheduling with learning effect. J Opl Res
Soc 52: 1-5.
4. Mosheiov G and Sidney JB (2003). Scheduling with general job-dependent
learning curves. Eur J Opl Res 147: 665-670.
5. Graham RL, Lawler EL, Lenstra JK, Rinnooy Kan AHG (1979). Optimization and
approximation in deterministic sequencing and scheduling: A survey. Annals
Discrete Mathematics 5: 287-326.
6. Alidaee B, Womer NK (1999). Scheduling with time dependent processing times:
Review and extensions. J Opl Res Soc 50: 711-720.
7. Cheng TCE, Ding Q, Lin BMT (2004). A concise survey of scheduling with
time-dependent processing times. Eur J Opl Res 152: 1-13.
8. Kuo WH, Yang DL (2006). Single-machine group scheduling with a
time-dependent learning effect. Comput Opns Res 33(8): 2099-2112.
9. Kuo WH, Yang DL (2006). Minimizing the total completion time in a single
machine scheduling problem with a time-dependent learning effect. Eur J Opl Res
174(2): 1184-1190.