Job Shop Scheduling by Simulated Annealing

Author(s): Peter J. M. van Laarhoven, Emile H. L. Aarts, Jan Karel Lenstra
Source: Operations Research, Vol. 40, No. 1 (Jan. - Feb., 1992), pp. 113-125
Published by: INFORMS
Stable URL: http://www.jstor.org/stable/171189
Accessed: 11/03/2010 16:23
JOB SHOP SCHEDULING BY SIMULATED ANNEALING

PETER J. M. VAN LAARHOVEN
Nederlandse Philips Bedrijven, Eindhoven, The Netherlands

EMILE H. L. AARTS
Philips Research Laboratories, Eindhoven, The Netherlands
and
Eindhoven University of Technology, Eindhoven, The Netherlands

JAN KAREL LENSTRA
Eindhoven University of Technology, Eindhoven, The Netherlands and CWI, Amsterdam, The Netherlands

(Received July 1988; revisions received May 1989, January 1990; accepted June 1990)
We describe an approximation algorithm for the problem of finding the minimum makespan in a job shop. The algorithm is based on simulated annealing, a generalization of the well known iterative improvement approach to combinatorial optimization problems. The generalization involves the acceptance of cost-increasing transitions with a nonzero probability to avoid getting stuck in local minima. We prove that our algorithm asymptotically converges in probability to a globally minimal solution, despite the fact that the Markov chains generated by the algorithm are generally not irreducible. Computational experiments show that our algorithm can find shorter makespans than two recent approximation approaches that are more tailored to the job shop scheduling problem. This is, however, at the cost of large running times.
We are concerned with a problem in machine scheduling known as the job shop scheduling problem (Coffman 1976, French 1982). Informally, the problem can be described as follows. We are given a set of jobs and a set of machines. Each job consists of a chain of operations, each of which needs to be processed during an uninterrupted time period of a given length on a given machine. Each machine can process at most one operation at a time. A schedule is an allocation of the operations to time intervals on the machines. The problem is to find a schedule of minimum length.

The job shop scheduling problem is among the hardest combinatorial optimization problems. Not only is it NP-hard, but even among the members of the latter class it appears to belong to the more difficult ones (Lawler, Lenstra and Rinnooy Kan 1982). Optimization algorithms for job shop scheduling proceed by branch and bound; see, for instance, Lageweg, Lenstra and Rinnooy Kan (1977), and Carlier and Pinson (1989). Most approximation algorithms use a priority rule, i.e., a rule for choosing an operation from a specified subset of as yet unscheduled operations. Adams, Balas and Zawack (1988) developed a shifting bottleneck procedure, which employs an ingenious combination of schedule construction and iterative improvement, guided by solutions to single-machine problems. The approach pursued by Adams, Balas and Zawack is strongly tailored to the problem at hand. It is based on a fair amount of combinatorial insight into the job shop scheduling problem, and its implementation requires a certain level of programmer sophistication.

In this paper, we investigate the potential of a more general approach based on the easily implementable simulated annealing algorithm (Kirkpatrick, Gelatt and Vecchi 1983, and Cerny 1985). A similar approach was independently investigated by Matsuo, Suh and Sullivan (1988). Their controlled search simulated annealing method is less general than ours in the sense that it uses more problem-specific neighborhoods.

The organization of this paper is as follows. In Section 1 we give a formal problem definition. In Section 2 the basic elements of the simulated annealing algorithm are reviewed. In Section 3 we describe
Subject classifications: Analysis of algorithms, suboptimal algorithms: simulated annealing. Production/scheduling, sequencing, deterministic, multiple machine: job shop scheduling.
Area of review: MANUFACTURING, PRODUCTION AND SCHEDULING.
the application of simulated annealing to job shop scheduling. We prove asymptotic convergence of the algorithm to a globally minimal solution by showing that the neighborhood structure is such that each ergodic set contains at least one global minimum. Section 4 contains the results of a computational study in which simulated annealing is used to find approximate solutions to a large set of instances of the job shop scheduling problem. We compare our simulated annealing method with three other approaches, i.e., time-equivalent iterative improvement, the shifting bottleneck procedure of Adams, Balas and Zawack, and the controlled search simulated annealing method of Matsuo, Suh and Sullivan. In Section 5 we end with some concluding remarks.
1. THE PROBLEM

We are given a set J of n jobs, a set M of m machines, and a set O of N operations. For each operation v ∈ O there is a job J_v ∈ J to which it belongs, a machine M_v ∈ M on which it requires processing, and a processing time t_v ∈ N. There is a binary relation → on O that decomposes O into chains corresponding to the jobs; more specifically, if v → w, then J_v = J_w, and there is no x ∉ {v, w} such that v → x or x → w. The problem is to find a start time s_v for each operation v ∈ O such that

max_{v∈O} (s_v + t_v)   (1)

is minimized subject to

s_v ≥ 0   for all v ∈ O,   (2)

s_w − s_v ≥ t_v   if v → w, v, w ∈ O,   (3)

s_w − s_v ≥ t_v  ∨  s_v − s_w ≥ t_w   if M_v = M_w, v, w ∈ O.   (4)
It is useful to represent the problem by the disjunctive graph model of Roy and Sussmann (1964). The disjunctive graph G = (V, A, E) is defined as follows:

* V = O ∪ {0, N+1}, where 0 and N+1 are two fictitious operations; the weight of a vertex v is given by the processing time t_v (t_0 = t_{N+1} = 0).
* A = {(v, w) | v, w ∈ O, v → w} ∪ {(0, w) | w ∈ O, ∄v ∈ O: v → w} ∪ {(v, N+1) | v ∈ O, ∄w ∈ O: v → w}. Thus, A contains arcs connecting consecutive operations of the same job, as well as arcs from 0 to the first operation of each job and from the last operation of each job to N+1.
* E = {{v, w} | M_v = M_w}. Thus, edges in E connect operations to be processed by the same machine.
[Figure 1 (drawing not reproduced in this scan).]

Figure 1. The disjunctive graph G of a 3-job, 3-machine instance. Operations 1, 5 and 9 are processed by machine 1, operations 2, 4 and 8 by machine 2, and operations 3, 6 and 7 by machine 3; 0 and 10 are the fictitious initial and final operations, respectively. Thick arrows denote arcs in A, dotted lines edges in E.

Figure 1 illustrates the disjunctive graph for a 3-job, 3-machine instance, where each job consists of three operations.
For each pair of operations v, w ∈ O with v → w, condition (3) is represented by an arc (v, w) in A. Similarly, for each pair of operations v, w ∈ O with M_v = M_w, the disjunctive constraint (4) is represented by an edge {v, w} in E, and the two ways to settle the disjunction correspond to the two possible orientations of the edge. There is an obvious one-to-one correspondence between a set of choices in (4) that is overall feasible and an orientation of all the edges in E for which the resulting digraph is acyclic. The objective value (the makespan) of the corresponding solution is given by the length of a longest path in this digraph. Such a set of orientations decomposes O into chains corresponding to the machines, i.e., it defines for each machine an ordering or permutation of the operations to be processed by that machine. Conversely, a set of machine permutations defines a set of orientations of the edges in E, though not necessarily one which results in an acyclic digraph. Since the longest path in a cyclic digraph has infinite length, we can now rephrase the problem as that of finding a set of machine permutations that minimizes the longest path in the resulting digraph. In Section 3 we use this formulation of the problem to find approximate solutions by simulated annealing.
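This rephrasing is easy to make concrete in code. The following sketch is our own illustration, not the authors' implementation (the function names and the small two-job example are ours): it collects the arcs induced by a set of machine permutations and tests the resulting digraph for acyclicity with a topological sort.

```python
from collections import defaultdict

def induced_arcs(job_chains, machine_perms):
    """Arcs of the digraph induced by a set of machine permutations.

    job_chains:    one operation chain per job, e.g. [[0, 1], [2, 3]]
    machine_perms: one processing order per machine
    Returns the job-precedence arcs (the set A) plus the arcs between
    successive operations in each machine permutation.
    """
    arcs = []
    for chain in job_chains:
        arcs += list(zip(chain, chain[1:]))   # arcs in A (job order)
    for perm in machine_perms:
        arcs += list(zip(perm, perm[1:]))     # oriented edges (machine order)
    return arcs

def is_acyclic(n_ops, arcs):
    """Kahn's topological sort: True iff the digraph on vertices 0..n_ops-1
    is acyclic (i.e., the orientation is feasible)."""
    succ, indeg = defaultdict(list), [0] * n_ops
    for v, w in arcs:
        succ[v].append(w)
        indeg[w] += 1
    queue = [v for v in range(n_ops) if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == n_ops

# Two jobs (operation chains [0, 1] and [2, 3]) on two machines.
chains = [[0, 1], [2, 3]]
feasible = induced_arcs(chains, [[0, 3], [2, 1]])    # acyclic orientation
infeasible = induced_arcs(chains, [[3, 0], [1, 2]])  # cycle 0 -> 1 -> 2 -> 3 -> 0
```

An infeasible orientation is exactly one in which the topological sort cannot consume every vertex.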
2. SIMULATED ANNEALING

Ever since its introduction, independently by Kirkpatrick, Gelatt and Vecchi (1983) and Cerny (1985), simulated annealing has been applied to many combinatorial optimization problems in such diverse areas as computer-aided design of integrated circuits, image processing, code design and neural network theory; for a review the reader is referred to Van Laarhoven and Aarts (1987). The algorithm is based on an intriguing combination of ideas from, at first sight, completely unrelated fields of science, viz. combinatorial optimization and statistical physics. On the one hand, the algorithm can be considered as a generalization of the well known iterative improvement approach to combinatorial optimization problems; on the other hand, it can be viewed as an analogue of an algorithm used in statistical physics for computer simulation of the annealing of a solid to its ground state, i.e., the state with minimum energy. In this paper, we mainly restrict ourselves to the first point of view; thus, we first briefly review iterative improvement.

Generally, a combinatorial optimization problem is a tuple (R, C), where R is the set of configurations or solutions of the problem, and C: R → ℝ the cost function (Papadimitriou and Steiglitz 1982). To be able to use iterative improvement we need a neighborhood structure N: R → 2^R; thus, for each configuration i, N(i) is a subset of configurations, called the neighborhood of i. Neighborhoods are usually defined by first choosing a simple type of transition to obtain a new configuration from a given one and then defining the neighborhood as the set of configurations that can be obtained from a given configuration in one transition.
Given the set of configurations, a cost function and a neighborhood structure, we can define the iterative improvement algorithm as follows. The algorithm consists of a number of iterations. At the start of each iteration, a configuration i is given and a transition to a configuration j ∈ N(i) is generated. If C(j) < C(i), the start configuration in the next iteration is j; otherwise it is i. If R is finite and if the transitions are generated in some exhaustive enumerative way, then the algorithm terminates by definition in a local minimum. Unfortunately, a local minimum may differ considerably in cost from a global minimum. Simulated annealing can be viewed as an attempt to find near-optimal local minima by allowing the acceptance of cost-increasing transitions. More precisely, if i and j ∈ N(i) are the two configurations to choose from, then the algorithm continues with configuration j with a probability given by min{1, exp(−(C(j) − C(i))/c)}, where c is a positive control parameter, which is gradually decreased during the execution of the algorithm. Thus, c is the analogue of the temperature in the physical annealing process. Note that the aforementioned probability decreases for increasing values of C(j) − C(i) and for decreasing values of c, and cost-decreasing transitions are always accepted.
For a fixed value of c, the configurations that are consecutively visited by the algorithm can be seen as a Markov chain with transition matrix P = P(c) given by (Aarts and Van Laarhoven 1985a, Lundy and Mees 1986, and Romeo and Sangiovanni-Vincentelli 1985)

P_ij(c) = G_ij A_ij(c)   if j ≠ i,
P_ii(c) = 1 − Σ_{k=1, k≠i}^{|R|} G_ik A_ik(c),   (5)

where the generation probabilities G_ij are given by

G_ij = 1/|N(i)|   if j ∈ N(i),
G_ij = 0          otherwise,   (6)

and the acceptance probabilities A_ij by

A_ij(c) = min{1, exp(−(C(j) − C(i))/c)}.   (7)

The stationary distribution of this Markov chain exists and is given by (Aarts and Van Laarhoven 1985a, Lundy and Mees 1986, and Romeo and Sangiovanni-Vincentelli 1985)

q_i(c) = |N(i)| A_{i0 i}(c) / Σ_{j∈R} |N(j)| A_{i0 j}(c)   (8)

for some i0 ∈ R_opt, where R_opt is the set of globally minimal configurations, provided the neighborhoods are such that for each pair of configurations (i, j) there is a finite sequence of transitions leading from i to j. The latter condition is equivalent to the requirement that the matrix G be irreducible. It can readily be shown that

lim_{c↓0} q_i(c) = 1/|R_opt|   if i ∈ R_opt,
lim_{c↓0} q_i(c) = 0           otherwise.   (9)

We recall that the stationary distribution of the Markov chain is defined as the probability distribution of the configurations after an infinite number of transitions. Thus, we conclude from (9) that the simulated annealing algorithm converges with probability 1 to a globally minimal configuration if the following conditions are satisfied:

* the sequence of values of the control parameter converges to 0;
* the Markov chains generated at each value of c are of infinite length; and
* the matrix G is irreducible.
Unfortunately, the neighborhood structure chosen for job shop scheduling in Section 3 is such that the corresponding matrix G is not irreducible. In that case, we can still prove asymptotic convergence provided the neighborhoods are such that for each configuration i there is a finite sequence of transitions leading from i to some configuration i0 ∈ R_opt (Van Laarhoven 1988). To do so, we use the fact that in every chain the recurrent configurations can be divided uniquely into irreducible ergodic sets S_1, S_2, ..., S_T. In addition to the ergodic sets there is a set S_0 of transient configurations from which configurations in the ergodic sets can be reached (but not vice versa). Note that if the neighborhoods satisfy the aforementioned condition, then each S_t contains at least one globally minimal configuration.

Now consider the sequence of configurations constituting the Markov chain associated with P(c). There are two possibilities: either the Markov chain starts in a transient configuration or it does not. In the latter case, the configurations constituting the Markov chain all belong to the same irreducible ergodic set S_t and we can prove asymptotic convergence as before, with R replaced by S_t. On the other hand, if the Markov chain starts in a transient configuration, it will eventually "land" (Feller 1950) in an ergodic set S_t, t ∈ {1, ..., T}, though it is not a priori known which one. The line of reasoning described above can then be applied again.

We can make the preceding arguments more precise by introducing the notion of a stationary matrix Q, whose elements q_ij are defined by

q_ij = lim_{k→∞} Pr{X(k) = j | X(0) = i}.   (10)

Using the results in Chapter 15, Sections 6-8 of Feller, we obtain

q_ij = 0   if j ∈ S_0, or i ∈ S_u, j ∈ S_t for some t ≠ u,
q_ij = |N(j)| A_{i0 j}(c) / Σ_{l∈S_t} |N(l)| A_{i0 l}(c)   if i, j ∈ S_t for some t ∈ {1, ..., T},
q_ij = x_it · |N(j)| A_{i0 j}(c) / Σ_{l∈S_t} |N(l)| A_{i0 l}(c)   if i ∈ S_0, j ∈ S_t for some t ∈ {1, ..., T},   (11)

where x_it is the probability that the Markov chain, starting from the transient configuration i, eventually reaches the ergodic set S_t. From (11) we obtain, for a recurrent configuration j ∈ S_t,

0 < lim_{k→∞} Pr{X(k) = j} = Σ_i Pr{X(0) = i} · q_ij
  = (Σ_{i∈S_t} Pr{X(0) = i} + Σ_{i∈S_0} Pr{X(0) = i} · x_it) · |N(j)| A_{i0 j}(c) / Σ_{l∈S_t} |N(l)| A_{i0 l}(c).   (12)

Using (7) we find

lim_{c↓0} |N(j)| A_{i0 j}(c) / Σ_{l∈S_t} |N(l)| A_{i0 l}(c) = 0   if j ∈ S_t, j ∉ R_opt.   (13)

Consequently, lim_{c↓0}(lim_{k→∞} Pr{X(k) = j}) = 0 for any transient or non-globally minimal recurrent configuration j. In other words,

lim_{c↓0}(lim_{k→∞} Pr{X(k) ∈ R̂_opt}) = 1,   (14)

where R̂_opt denotes the nonempty set of globally minimal recurrent configurations.

Some of the conditions for asymptotic convergence, as, for instance, the infinite length of Markov chains, cannot be met in practice. In any finite-time implementation, we therefore have to make a choice with respect to each of the following parameters:

* the length of the Markov chains,
* the initial value of the control parameter,
* the decrement rule of the control parameter,
* the final value of the control parameter.

Such a choice is usually referred to as a cooling schedule or an annealing schedule. The results in this paper have been obtained by an implementation of simulated annealing that employs the cooling schedule derived by Aarts and Van Laarhoven (1985a, b). This is a three-parameter schedule: The parameters χ0 and ε determine the initial and final values of the control parameter, respectively, whereas the decrement rule depends on a parameter δ in the following way:

c_{k+1} = c_k / [1 + c_k · ln(1 + δ)/(3σ_k)],   (15)

where c_k is the value of the control parameter for the kth Markov chain and σ_k is the standard deviation of the cost values of the configurations obtained by generating the kth Markov chain.
The decrement of c_k is such that the stationary distributions of two succeeding Markov chains approximately satisfy (Aarts and Van Laarhoven 1985a)

1/(1 + δ) < q_i(c_k)/q_i(c_{k+1}) < 1 + δ,   k = 1, 2, ..., for all i ∈ R.   (16)

Thus, for small values of δ, the stationary distributions of succeeding Markov chains are "close" to each other and we therefore argue that, after decreasing c_k to c_{k+1}, a small number of transitions suffices to let the probability distribution of the configurations approach the new stationary distribution q(c_{k+1}). Note that small values of δ correspond to a slow decrement of the control parameter.

Finally, we choose the length of each Markov chain, L_k, equal to the size of the largest neighborhood, i.e.,

L_k = max_{i∈R} |N(i)|,   k = 1, 2, ....   (17)
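As a sketch of how the schedule operates in a finite-time implementation, the following code (our paraphrase; the variable names are ours, and the decrement rule follows (15)) computes the next value of the control parameter from the costs observed while generating the current chain.

```python
import math
import statistics

def next_control(c_k, costs_in_chain, delta):
    """Decrement rule (15): c_{k+1} = c_k / (1 + c_k * ln(1+delta) / (3*sigma_k)),
    where sigma_k is the standard deviation of the cost values of the
    configurations obtained by generating the k-th Markov chain."""
    sigma_k = statistics.pstdev(costs_in_chain)
    if sigma_k == 0:
        # Flat chain: the rule is undefined, so keep c unchanged
        # (a practical guard; how to handle this case is our choice).
        return c_k
    return c_k / (1 + c_k * math.log(1 + delta) / (3 * sigma_k))
```

Smaller values of δ yield a larger c_{k+1}, i.e., slower cooling, in line with the discussion of (16).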
We have applied simulated annealing based on this cooling schedule to many problems in combinatorial optimization (see, for example, Van Laarhoven and Aarts) and have found it extremely robust in that the final results are typically within 2% of the global minimum, when δ is chosen sufficiently low (0.1 or smaller).

Under some mild assumptions, it is possible to show that the aforementioned cooling schedule leads to a time complexity of the simulated annealing algorithm given by O(τ L ln|R|), where τ is the time involved in the generation and (possible) acceptance of a transition and L is the size of the largest neighborhood (the length of the Markov chains) (Aarts and Van Laarhoven 1985a). If one works out this bound for a particular combinatorial optimization problem, it is usually polynomial in the size of the problem. In those cases, we have a polynomial-time approximation algorithm. Such a result with respect to the efficiency of the algorithm is only worthwhile in combination with results on its effectiveness, viz. on the difference in cost between solutions returned by the algorithm and globally minimal ones. From a theoretical point of view, very little is known about the effectiveness of simulated annealing, but there are many empirical results; see for instance the extensive computational experiments of Johnson et al. (1989). For the job shop scheduling problem, we present an empirical analysis of the effectiveness and efficiency of simulated annealing in Section 4, but first the application of simulated annealing to the job shop scheduling problem is discussed in more detail.
3. SIMULATED ANNEALING AND JOB SHOP SCHEDULING

We recall from the previous section that in order to apply simulated annealing to any combinatorial optimization problem, we need a precise definition of configurations, a cost function and a neighborhood structure. Furthermore, to prove asymptotic convergence we must show that the neighborhood structure is such that for an arbitrary configuration i there exists at least one globally minimal configuration i0 ∈ R_opt that can be reached from i in a finite number of transitions. Hereinafter, we discuss these items in more detail.
3.1. Configurations

We recall from Section 1 that we can solve the job shop scheduling problem by considering sets of machine permutations and by determining, for such a set of permutations, the longest path in the digraph which results from giving the edges in the disjunctive graph the orientations determined by the permutations. We therefore define a configuration i of the problem as a set Π_i = {π_i1, ..., π_im} of machine permutations. π_ik is to be interpreted as the order in which the operations on machine k are processed: If M_v = k for some v ∈ O, then π_ik(v) denotes the operation following v on machine k. Consequently, the number of configurations is given by Π_{k=1}^m m_k!, where m_k is the number of operations to be processed by machine k (m_k = |{v ∈ O | M_v = k}|).
3.2. Cost Function

For each configuration i we define the following two digraphs:

D_i = (V, A ∪ E_i), where
E_i = {(v, w) | {v, w} ∈ E and π_ik^l(v) = w for some k ∈ M, 1 ≤ l ≤ m_k − 1};   (18)

D̄_i = (V, A ∪ Ē_i), where
Ē_i = {(v, w) | {v, w} ∈ E and π_ik(v) = w for some k ∈ M}.   (19)

In other words, D_i is the digraph obtained from the disjunctive graph by giving the edges in E the orientations resulting from Π_i; the digraph D̄_i can be obtained from D_i by taking only those arcs from E_i that connect successive operations on the same machine. It is well known that the longest paths in D_i and D̄_i have equal length. Thus, the cost of a configuration i can be found by determining the length of a longest path from 0 to N + 1 in D̄_i. To compute such a cost, we use a simple labeling algorithm, based on Bellman's equations (Bellman 1958), for solving the longest-path problem in a digraph. The time complexity of this algorithm is proportional to the number of arcs in the graph. In our case, this number equals |A| + |Ē_i| = (N + n) + (N − m); accordingly, the labeling algorithm takes O(N) time to compute the cost of a configuration.
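A minimal version of such a labeling computation might look as follows (our sketch, not the authors' code; the digraph is given as a vertex-weight map and an arc list, and vertices are processed in topological order). The label of a vertex is the length of a longest path from the source to that vertex, counting vertex weights.

```python
from collections import defaultdict

def makespan(weights, arcs, source, sink):
    """Length of a longest source-to-sink path in an acyclic digraph whose
    vertices carry processing-time weights (Bellman-style labeling).
    Runs in time proportional to the number of arcs."""
    succ, indeg = defaultdict(list), defaultdict(int)
    for v, w in arcs:
        succ[v].append(w)
        indeg[w] += 1
    label = {v: float("-inf") for v in weights}   # -inf = not reachable yet
    label[source] = weights[source]
    queue = [v for v in weights if indeg[v] == 0]
    while queue:                                   # topological processing
        v = queue.pop()
        for w in succ[v]:
            label[w] = max(label[w], label[v] + weights[w])
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return label[sink]

# A chain of two operations with times 3 and 4 between fictitious
# source 0 and sink 3 (both of weight 0) has makespan 7.
w = {0: 0, 1: 3, 2: 4, 3: 0}
m = makespan(w, [(0, 1), (1, 2), (2, 3)], 0, 3)
```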
3.3. Neighborhood Structure

A transition is generated by choosing vertices v and w, such that:

1. v and w are successive operations on some machine k;
2. (v, w) ∈ Ē_i is a critical arc, i.e., (v, w) is on a longest path in D̄_i;

and reversing the order in which v and w are processed on machine k. Thus, in the digraph D̄_i such a transition results in reversing the arc connecting v and w and replacing the arcs (u, v) and (w, x) by (u, w) and (v, x), respectively, where u = π_ik^{−1}(v) and x = π_ik(w). Our choice is motivated by two facts:

* Reversing a critical arc in a digraph D̄_i can never lead to a cyclic digraph D̄_j (see Lemma 2).
* If the reversal of a noncritical arc in D̄_i leads to an acyclic digraph D̄_j, a longest path q in D̄_j cannot be shorter than a longest path p in D̄_i (because D̄_j still contains the path p).

Thus, we exclude beforehand some non-cost-decreasing transitions and, in addition, all transitions that might result in a cyclic digraph. Consequently, the neighborhood structure is such that the algorithm visits only digraphs corresponding to feasible solutions.

The neighborhood of a configuration i is thus given by the set of acyclic digraphs that can be obtained by reversing a critical arc belonging to Ē_i in the graph D̄_i. Consequently, |N(i)| ≤ Σ_{k=1}^m (m_k − 1) = N − m.

3.4. Asymptotic Convergence

It is not difficult to construct a problem instance containing pairs of configurations (i, j) for which there is no finite sequence of transitions leading from i to j (Van Laarhoven). Thus, to prove asymptotic convergence, we must show that for each configuration i there is a finite sequence of transitions leading from i to some globally minimal configuration. In order to do so, we need two lemmas.

Lemma 1. Consider an arbitrary configuration i and an arbitrary global minimum i0 ∈ R_opt. If i ∉ R_opt, then the set K_i(i0) defined by

K_i(i0) = {e = (v, w) ∈ Ē_i | e is critical ∧ (w, v) ∈ Ē_{i0}}   (20)

is not empty.

Proof. The proof consists of two parts: First, we show that Ē_i always contains critical arcs, unless i ∈ R_opt; next, that there are always critical arcs in Ē_i that do not belong to E_{i0}, unless again i ∈ R_opt.

1. Suppose that Ē_i contains no critical arcs; then all critical arcs belong to A. Consequently, a longest path consists of arcs connecting vertices corresponding to operations of the same job; accordingly, its length is given by the total processing time of that job. But this is a lower bound to the length of a longest path in any digraph D̄_j, hence i ∈ R_opt.
2. Suppose that for all critical arcs e in Ē_i, we have e ∈ E_{i0}. We then know that any longest path p in D̄_i is also a path q in D_{i0}. The length of a longest path r in D̄_{i0} is also the length of a longest path in D_{i0}, and because i0 ∈ R_opt, we have length(r) ≤ length(p). But by definition length(r) ≥ length(q) = length(p). Consequently, length(p) = length(r) and i ∈ R_opt.

Lemma 2. Suppose that e = (v, w) ∈ Ē_i is a critical arc of an acyclic digraph D̄_i. Let D̄_j be the digraph obtained from D̄_i by reversing the arc e in Ē_i. Then D̄_j is also acyclic.

Proof. Suppose that D̄_j is cyclic. Because D̄_i is acyclic, the arc (w, v) is part of the cycle in D̄_j. Consequently, there is a path (v, x, y, ..., w) in D̄_j. But this path can also be found in D̄_i and is clearly a longer path from v to w than the arc (v, w). This contradicts the assumption that (v, w) is on a longest path in D̄_i. Hence, D̄_j is acyclic.

Given a configuration i0 ∈ R_opt, we define two sets for an arbitrary configuration i:

M_i(i0) = {e = (v, w) ∈ E_i | (w, v) ∈ E_{i0}},   (21)

M̄_i(i0) = {e = (v, w) ∈ E_i | (w, v) ∈ Ē_{i0}}.   (22)

In view of Section 2, the following theorem now ensures asymptotic convergence in probability to a globally minimal configuration.
Theorem 1. For each configuration i ∉ R_opt it is possible to construct a finite sequence of transitions leading from i to a globally minimal configuration.

Proof. We choose an arbitrary configuration i0 ∈ R_opt and construct a sequence of configurations {X_0, X_1, ...} as follows:

1. X_0 = i.
2. X_{k+1} is obtained from X_k by reversing an arc e ∈ K_{X_k}(i0) in Ē_{X_k}. According to Lemma 2, this can be done without creating a cycle in D̄_{X_{k+1}}. Furthermore, this operation is of the aforementioned type of transition.

It is easy to see that if |M_{X_k}(i0)| > 0, then

|M_{X_{k+1}}(i0)| = |M_{X_k}(i0)| − 1.   (23)

Hence, for k = |M_i(i0)|, |M_{X_k}(i0)| = 0. Using K_i(i0) ⊆ M̄_i(i0) ⊆ M_i(i0), we find K_{X_k}(i0) = ∅ for k = |M_i(i0)|. According to Lemma 1, this implies X_k ∈ R_opt.
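As an illustration of the transition itself (our sketch; the data structures and names are ours), reversing a critical arc (v, w) amounts to swapping two successive entries of one machine permutation; whether the arc is critical would be checked beforehand against a longest path, as in Section 3.2.

```python
def reverse_adjacent(perm, v, w):
    """Neighborhood move: given that v and w are successive operations in
    the processing order `perm` of one machine, return the permutation in
    which w is processed immediately before v (all else unchanged)."""
    k = perm.index(v)
    assert k + 1 < len(perm) and perm[k + 1] == w, "v, w must be successive"
    new_perm = list(perm)
    new_perm[k], new_perm[k + 1] = w, v
    return new_perm
```

In the digraph this corresponds to reversing (v, w) and replacing the arcs (u, v) and (w, x) by (u, w) and (v, x), as described in Section 3.3.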
4. COMPUTATIONAL RESULTS

We have analyzed the finite-time behavior of the simulated annealing algorithm empirically by running the algorithm on a number of instances of the job shop scheduling problem, varying in size from six jobs on six machines to thirty jobs on ten machines. For all instances, the number of operations of each job equals the number of machines and each job has precisely one operation on each machine. In that case, the number of configurations of each instance is given by (n!)^m, the labeling algorithm takes O(nm) time to compute the cost of a configuration, and the size of the neighborhood of a configuration is bounded by m(n − 1).
FIS1, FIS2 and FIS3 (Table I) are three problem instances due to Fisher and Thompson (1963); the forty instances in Table II are due to Lawrence (1984). FIS2 is a notorious 10-job, 10-machine instance that has defied solution to optimality for more than twenty years. A couple of years ago, a solution with cost 930 was found after 1 hour 47 minutes of running time, and no improvement was found after 9 hours 6 minutes (Lageweg 1984). This cost value was only recently proved to be globally minimal by Carlier and Pinson (1989). For FIS1, FIS2 and FIS3, the processing times of the operations are randomly drawn and range from 1 to 10 (FIS1) or to 99 (FIS2 and FIS3) units of time. The sequence of machines for each job is such that lower-numbered machines tend to be used for earlier operations. For the Lawrence instances, processing times are drawn from a uniform distribution on the interval [5, 99]; the sequence of machines for each job is random.
The performance of simulated annealing on these instances is reported in Table I for the Fisher-Thompson instances, and in Table II for the Lawrence instances. The averages in these tables are computed from five solutions, obtained by running the algorithm, controlled by the cooling schedule described in Section 2, five times on each instance and recording the best configuration encountered during each run (this need not necessarily be the final configuration). The probabilistic nature of the algorithm makes it necessary to carry out multiple runs on the same problem instance in order to get meaningful results.

All results are obtained with the parameters χ0 and ε set to 0.95 and 10⁻⁶, respectively, and for different values of the distance parameter δ. Running times are CPU times on a VAX-785.
From Tables I and II we can make the following observations:

1. The quality of the average best solution returned by the algorithm improves considerably when δ is decreased. This is in accordance with the theory underlying the employed cooling schedule: smaller values of δ correspond to a better approximation of the asymptotic behavior (Aarts and Van Laarhoven 1985a). Furthermore, the difference between the average best solution and a globally minimal one does not deteriorate significantly with increasing problem size. For the FIS2 instance, the five best solutions obtained with δ = 10⁻⁴ have cost values of 930 (twice), 934, 935, and 938, respectively. Thus, a globally minimal solution is found 2 out of 5 times, which is quite a remarkable result, considering the notoriety of this instance.

2. As for running times, the bound for the running time given in Section 2 is O((nm)³ ln n) (L = O(nm), |R| = (n!)^m and τ = O(nm)). Thus, for fixed m the bound is O(n³ ln n), and for fixed n it is O(m³). For the A, B and C instances in Table II, for which m is constant, the average running time t̄ for δ = 0.01 is approximately given by t̄ = t₀ · n^2.215 ln n, for some constant t₀ (χ² = 1.00); for the G, B and I instances, for which n is constant, the average running time for δ = 0.01 is approximately given by t̄ = t₁ · m^2.406, for some constant t₁ (χ² = 1.00). Thus, the observed running times are in good accordance with the bound given in Section 2.
Table I
Results for ProblemInstancesof Fisherand Thompson(1963)"
SimulatedAnnealing
Problem
C
6
6 machines,6 jobs
FISi
10'1
tC
ABZ
T
%
at
Cbt
CA
MSS
t4A
1.3
0.0
1.82
0.00
8
52
1
8
55*
55*
55*
1
1,039.6
985.8
942.4
933.4
15.1
22.1
4.5
3.1
11.78
6.00
1.33
0.37
113
779
5,945
57,772
13
61
180
2,364
1,028
951
937
930*
930*
426
10-2
1,354.2
1,229.0
26.5
33.6
16.24
5.49
123
848
13
93
10-3
104
1,187.0
Table I (concluded)
Results for the Problem Instances of Fisher and Thompson (1963)

(Of the simulated annealing entries for FIS2 (10 machines, 10 jobs) and FIS3 (20 machines, 5 jobs), with δ = 10^-1, 10^-2, 10^-3, 10^-4, only scattered fragments remain legible here.)

Iterative Improvement
Problem   #C        C̄        σ_C   t̄      σ_t   C_best   %
FIS1      803.2     55.4     0.8   52     0.0   55*      0.73
FIS2      9,441.2   1,018.2  9.1   5,945  0.0   1,006    9.48
FIS3      5,221.0   1,331.4  9.5   6,841  0.0   1,319    14.28

Simulated annealing:
  δ: the distance parameter (smaller δ-values imply slower cooling);
  C̄, σ_C: the average cost and standard deviation of the best solution over five runs;
  t̄, σ_t: the average running time and standard deviation over five runs (in seconds);
  C_best: the best cost found over five runs;
  %: the percentage of C̄ over the optimal cost.
Iterative improvement:
  #C: the average number of local minima over five macroruns (see text);
  C̄, σ_C: the average cost and standard deviation of the best solution over five macroruns;
  t̄, σ_t: the average running time and standard deviation over five macroruns (in seconds);
  C_best: the best cost found over five macroruns;
  %: the percentage of C̄ over the optimal cost.
Adams, Balas and Zawack (1988):
  C_A: the best cost found;
  t_A: running time (in seconds).
Matsuo, Suh and Sullivan (1988):
  C_M: the best cost found;
  t_M: running time (in seconds);
  *: provably optimal cost.

In the remainder of this section we compare the results obtained with our method with the results obtained with three other methods, viz.
* time-equivalent iterative improvement,
* the shifting bottleneck procedure (Adams, Balas and Zawack), and
* controlled search simulated annealing (Matsuo, Suh and Sullivan).

4.1. Time-Equivalent Iterative Improvement

Table I also contains results obtained by repeated execution of a time-equivalent iterative improvement algorithm based on the same neighborhood structure as our simulated annealing algorithm. The averages for the time-equivalent iterative improvement are obtained from five macroruns. Each macrorun consists of repeated execution of the iterative improvement algorithm for a large number of randomly generated initial configurations and thus yields a large number of local minima. Execution of each macrorun is terminated as soon as its running time equals the running time of an average run of simulated annealing applied to the same problem instance with the distance parameter δ set to 10^-4 (10^-2 for FIS1); C̄ is the average of the best cost values found during the macroruns.

We observe that repeated execution of iterative improvement is easily outperformed by simulated annealing for the two larger problems. The difference is significant: for FIS3, for instance, the average best solution obtained by simulated annealing is almost 11% better in cost than the one obtained by repeated execution of iterative improvement.
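The macrorun protocol described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the cost function and neighborhood below are toy stand-ins (a permutation objective with adjacent transpositions), and `time_budget` plays the role of the matched simulated annealing running time.

```python
import random
import time

def iterative_improvement(cost, neighbors, start):
    """Descend to a local minimum: accept only cost-decreasing moves."""
    current = start
    improved = True
    while improved:
        improved = False
        for cand in neighbors(current):
            if cost(cand) < cost(current):
                current, improved = cand, True
                break
    return current

def macrorun(cost, neighbors, random_start, time_budget):
    """Restart iterative improvement from random initial configurations
    until the time budget (seconds) is spent; return the best local
    minimum found and the number of local minima visited (#C in Table I)."""
    deadline = time.monotonic() + time_budget
    best, n_minima = None, 0
    while best is None or time.monotonic() < deadline:
        local_min = iterative_improvement(cost, neighbors, random_start())
        n_minima += 1
        if best is None or cost(local_min) < cost(best):
            best = local_min
    return best, n_minima

# Toy stand-in for a makespan objective: how far a permutation is
# from the identity permutation.
def cost(perm):
    return sum(abs(i - p) for i, p in enumerate(perm))

def neighbors(perm):
    for i in range(len(perm) - 1):      # adjacent transpositions
        cand = list(perm)
        cand[i], cand[i + 1] = cand[i + 1], cand[i]
        yield tuple(cand)

random.seed(0)
best, n_minima = macrorun(cost, neighbors,
                          lambda: tuple(random.sample(range(8), 8)),
                          time_budget=0.1)
print(cost(best), n_minima)
```

The average of `cost(best)` over several such macroruns corresponds to the C̄ column of the iterative improvement entries, and `n_minima` to #C.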
Job Shop Scheduling by Simulated Annealing / 121
Table II
Results for Problem Instances of Lawrence (1984)a

                   Simulated Annealing                          ABZ            MSS
Problem   δ      C̄        σ_C    t̄      σ_t     C_best    C_A     t_A    C_M     t_M

10 machines, 10 jobs
A1        1.0    1023.4   30.6   26     1.5     991       978     120    959     78
          0.1    981.0    17.3   110    17.1    956
          0.01   966.2    10.1   686    83.3    956
A2        1.0    861.0    41.2   23     3.7     797       787     96     784     47
          0.1    792.4    6.2    112    7.0     784
          0.01   787.8    1.6    720    109.0   785
A3        1.0    902.6    30.9   23     1.6     870       859     112    848     53
          0.1    872.2    12.4   112    22.1    861
          0.01   861.2    0.4    673    69.0    861
A4        1.0    950.0    54.5   24     5.3     904       860     120    842     58
          0.1    881.4    6.9    97     20.4    874
          0.01   853.4    4.6    830    85.4    848
A5        1.0    1021.6   26.2   30     1.9     994       914     144    907     50
          0.1    927.6    18.9   86     7.9     907
          0.01   908.4    4.2    667    126.9   902

10 machines, 15 jobs
B1        1.0    1176.2   37.8   69     6.7     1133      1084    181    1071    103
          0.1    1115.2   23.9   299    50.9    1085
          0.01   1067.6   3.7    1991   341.1   1063
B2        1.0    1125.6   35.6   65     3.6     1094      944     210    927     92
          0.1    977.4    19.5   307    36.5    963
          0.01   944.2    4.7    2163   154.6   938
B3        1.0    1155.8   64.2   63     5.6     1056      1032*   113    1032*   10
          0.1    1051.0   24.6   275    35.8    1032*
          0.01   1032.0*  0.0    2093   89.7    1032*
B4        1.0    1101.0   53.5   71     5.0     1032      976     217    973     100
          0.1    977.6    8.1    252    28.5    968
          0.01   966.6    8.7    2098   406.0   952
B5        1.0    1114.6   9.1    77     16.9    1103      1017    215    991     90
          0.1    1035.4   10.6   283    44.3    1017
          0.01   1004.4   14.4   2133   374.5   992

10 machines, 20 jobs
C1        1.0    1397.0   69.1   139    16.0    1311      1224    372    1218*   27
          0.1    1268.0   9.7    555    81.7    1252
          0.01   1219.0   2.0    4342   597.8   1218*
C2        1.0    1434.2   40.0   139    6.4     1390      1291    419    1274    143
          0.1    1311.6   12.7   651    82.9    1295
          0.01   1273.6   5.2    4535   392.0   1269
C3        1.0    1414.6   57.8   135    7.4     1335      1250    451    1216    153
          0.1    1280.2   23.6   614    83.3    1246
          0.01   1244.8   15.4   4354   349.8   1224
C4        1.0    1387.4   47.0   138    14.1    1307      1239    446    1196    134
          0.1    1260.4   35.4   581    24.0    1203
          0.01   1226.4   6.5    4408   450.9   1218
C5        1.0    1539.2   44.2   145    20.6    1492      1355*   276    1355*   4
          0.1    1393.6   9.6    605    84.4    1381
          0.01   1355.0*  0.0    3956   428.2   1355*

122 / VAN LAARHOVEN, AARTS AND LENSTRA

Table II—Continued

10 machines, 30 jobs
D1        1.0    1882.2   39.3   442    79.3    1821      1784*   19     -       -
          0.1    1784.0*  0.0    1517   58.1    1784*
D2        1.0    1921.4   35.3   492    66.2    1868      1850*   15     -       -
          0.1    1850.0*  0.0    1752   124.6   1850*
D3        1.0    1761.8   12.2   433    40.4    1740      1719*   14     -       -
          0.1    1726.6   15.2   1880   130.8   1719*
D4        1.0    1816.4   27.7   470    31.2    1788      1721*   11     -       -
          0.1    1775.6   38.4   1886   232.4   1721*
D5        1.0    2011.2   81.3   434    34.6    1888*     1888*   11     -       -
          0.1    1890.0   4.0    1668   107.9   1888*

5 machines, 10 jobs
F1        1.0    707.0    32.2   6      1.0     666*      666*    1      -       -
          0.1    666.0*   0.0    20     3.5     666*
          0.01   666.0*   0.0    123    15.3    666*
F2        1.0    719.0    20.0   6      1.0     685       669     6      655*    2
          0.1    671.0    11.1   24     2.5     655*
          0.01   663.0    4.9    117    19.0    655*
F3        1.0    689.6    22.4   5      0.9     664       605     32     597     17
          0.1    635.6    9.5    24     3.8     626
          0.01   617.6    8.5    129    12.6    606
F4        1.0    665.4    56.9   6      0.9     608       593     23     590     17
          0.1    617.2    20.5   21     5.2     594
          0.01   593.8    2.1    121    15.9    590
F5        1.0    594.4    2.8    5      0.6     593*      593*    0      -       -
          0.1    593.0*   0.0    19     4.2     593*
          0.01   593.0*   0.0    118    15.3    593*

5 machines, 15 jobs
G1        1.0    937.2    13.7   16     2.8     926*      926*    1      -       -
          0.1    926.0*   0.0    52     5.8     926*
          0.01   926.0*   0.0    286    32.1    926*
G2        1.0    948.6    44.1   15     1.6     911       890*    1      -       -
          0.1    900.6    8.5    66     15.2    890*
          0.01   890.0*   0.0    376    48.3    890*
G3        1.0    905.8    34.2   16     0.4     863*      863*    2      863*    1
          0.1    863.0*   0.0    55     7.3     863*
          0.01   863.0*   0.0    292    40.8    863*
G4        1.0    965.2    20.0   13     1.0     951*      951*    0      -       -
          0.1    951.0*   0.0    47     5.9     951*
          0.01   951.0*   0.0    283    25.9    951*
G5        1.0    958.0*   0.0    14     1.6     958*      958*    0      -       -
          0.1    958.0*   0.0    45     2.0     958*
          0.01   958.0*   0.0    243    42.3    958*

5 machines, 20 jobs
H1        1.0    1229.6   14.7   32     3.9     1222*     1222*   1      -       -
          0.1    1222.0*  0.0    108    17.2    1222*
          0.01   1222.0*  0.0    627    18.4    1222*
H2        1.0    1042.8   7.6    34     3.9     1039*     1039*   0      -       -
          0.1    1061.2   44.4   116    11.9    1039*
          0.01   1039.0*  0.0    655    30.7    1039*
H3        1.0    1154.6   9.2    32     2.5     1150*     1150*   1      -       -
          0.1    1150.0*  0.0    118    18.0    1150*
          0.01   1150.0*  0.0    564    85.9    1150*
H4        1.0    1292.0*  0.0    27     1.7     1292*     1292*   0      -       -
          0.1    1292.0*  0.0    93     20.6    1292*
          0.01   1292.0*  0.0    462    21.8    1292*
H5        1.0    1299.8   77.4   34     5.3     1207*     1207*   2      -       -
          0.1    1252.5   18.8   126    16.2    1233
          0.01   1207.0*  0.0    736    26.3    1207*

15 machines, 15 jobs
I1        1.0    1487.6   40.4   152    6.4     1450      1305    268    1292    312
          0.1    1343.2   30.2   785    80.6    1297
          0.01   1300.0   7.8    5346   399.8   1293
I2        1.0    1580.2   38.3   173    12.1    1523      1423    419    1435    289
          0.1    1479.4   28.8   757    94.6    1457
          0.01   1442.4   5.7    5287   688.5   1433
I3        1.0    1422.2   28.3   173    25.3    1376      1255    540    1231    336
          0.1    1303.4   30.5   713    90.8    1263
          0.01   1227.2   8.2    5480   614.8   1215
I4        1.0    1408.6   44.4   186    24.6    1348      1273    335    1251    330
          0.1    1305.4   27.5   673    75.1    1264
          0.01   1258.2   5.2    5766   800.3   1248
I5        1.0    1399.8   60.2   162    8.8     1318      1269    450    1235    308
          0.1    1282.2   15.5   745    68.4    1254
          0.01   1247.4   9.9    5373   1066.4  1234

a The legend for this table appears at the bottom of Table I.
4.2. The Shifting Bottleneck Procedure

Tables I and II also contain for each instance the cost value of the best solution obtained by Adams, Balas and Zawack with their shifting bottleneck procedure. Most values are obtained by a second heuristic, which embeds the aforementioned shifting bottleneck procedure and proceeds by partial enumeration of the solution space. The values for the instances F1, F5 and G3, as well as for the D and H instances, are obtained by the shifting bottleneck procedure only. The corresponding running times are obtained by halving the running times in Adams, Balas and Zawack, since these correspond to a VAX-780. Adams, Balas and Zawack show their approach to be superior to a number of approaches based on priority dispatching rules: the typical improvement is reported to be between 4% and 10%.
Comparison of simulated annealing and the shifting bottleneck procedure leads to the following observations:
1. For those instances for which Adams, Balas and Zawack do not find a globally minimal solution (mainly the A, B, C and I instances in Table II), the running times of simulated annealing with δ = 0.1 and of the heuristic of Adams, Balas and Zawack are of the same order of magnitude. In this case, the best solution found by Adams, Balas and Zawack is considerably better than the average best solution returned by simulated annealing and as good as the best solution found in five runs of simulated annealing.
   Putting δ = 0.01 makes simulated annealing much slower than the heuristic of Adams, Balas and Zawack, but now their best solution is slightly worse than the average best solution of simulated annealing and considerably worse than the best solution in five runs of simulated annealing (the typical improvement is between 1 and 3%).
2. For the instances for which the heuristic of Adams, Balas and Zawack finds a globally minimal solution, it outperforms simulated annealing: the latter algorithm also finds global minima, but takes much more time to do so.

Admittedly, the performance of the shifting bottleneck heuristic is liable to improve if it is allowed more time. Nevertheless, the results in Tables I and II indicate that simulated annealing is a promising approach to job shop scheduling, as well as a robust one (cf. the small difference between C̄ and C_best in Tables I and II for δ = 0.01), and certainly superior to traditional approaches, such as procedures based on priority dispatching rules.
4.3. Controlled Search Simulated Annealing

Finally, Tables I and II contain for a number of instances the cost value of the best solution obtained by Matsuo, Suh and Sullivan with their controlled search simulated annealing method. They use neighborhoods consisting of schedules that can be obtained by several types of (multi-)adjacency interchanges of operations that are critical for determining the makespan. Briefly, these neighborhoods are obtained by augmenting the relatively simple neighborhoods used in our approach with better schedules that are found by exploring the structure of the problem at hand. Evidently, this makes the approach less general than ours. As the tables show, this augmentation enhances the efficiency of the algorithm but not its effectiveness; the quality of the final solution remains roughly the same.
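For concreteness, the simple neighborhood that both approaches build on can be sketched as follows. This is a sketch under assumptions, not code from either paper: a schedule is represented by the processing order on each machine, the makespan is computed as a longest path over the precedence arcs, and candidate transitions swap adjacent same-machine operations whose connecting arc is tight (on a critical path every consecutive same-machine pair is tight, so this set contains all critical adjacent pairs).

```python
from collections import defaultdict

def schedule_times(durations, job_seqs, mach_seqs):
    """Longest-path start times for a schedule given as job routes
    (job_seqs) plus a processing order per machine (mach_seqs)."""
    preds = defaultdict(list)
    for seq in list(job_seqs) + list(mach_seqs):
        for a, b in zip(seq, seq[1:]):
            preds[b].append(a)
    start = {op: 0 for op in durations}
    changed = True
    while changed:                      # graph is acyclic for a feasible schedule
        changed = False
        for op in durations:
            s = max((start[p] + durations[p] for p in preds[op]), default=0)
            if s > start[op]:
                start[op], changed = s, True
    makespan = max(start[op] + durations[op] for op in durations)
    return start, makespan

def tight_machine_pairs(durations, mach_seqs, start):
    """Adjacent same-machine pairs whose connecting arc is tight;
    swapping such a pair is one candidate transition."""
    return [(a, b)
            for seq in mach_seqs
            for a, b in zip(seq, seq[1:])
            if start[a] + durations[a] == start[b]]

# Two jobs, two machines; operations are (job, machine) pairs.
durations = {('j1', 'm1'): 3, ('j1', 'm2'): 2,
             ('j2', 'm2'): 2, ('j2', 'm1'): 4}
job_seqs = [[('j1', 'm1'), ('j1', 'm2')], [('j2', 'm2'), ('j2', 'm1')]]
mach_seqs = [[('j1', 'm1'), ('j2', 'm1')], [('j2', 'm2'), ('j1', 'm2')]]

start, makespan = schedule_times(durations, job_seqs, mach_seqs)
print(makespan)                               # 7
print(tight_machine_pairs(durations, mach_seqs, start))
```

Here the critical path runs through machine m1, so the only candidate swap is the pair of m1 operations; the controlled search method additionally explores multi-operation interchanges of such critical pairs.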
The running times for this approach, given in Tables I and II, are again obtained by halving the times given by Matsuo, Suh and Sullivan, since they also used a VAX-780 computer. Comparison of our simulated annealing method with controlled search simulated annealing yields the following conclusions.
1. For those instances for which Matsuo, Suh and Sullivan do not find an optimum, our approach finds, on the average, solutions of the same quality, but at the cost of larger computational efforts.
2. For those instances for which an optimum was found, the controlled search simulated annealing method finds it in remarkably smaller running times, even when compared to the running times used by the shifting bottleneck procedure.
5. CONCLUSION

We have discussed a new approach to job shop scheduling based on a randomized version of iterative improvement. The probabilistic element of the algorithm (the acceptance of cost-increasing transitions with a nonzero probability) makes simulated annealing a significantly better approach than the classical iterative improvement approach on which it is based. The difference is especially pronounced for large problem instances.
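This probabilistic element is the Metropolis acceptance rule; a minimal sketch (the control parameter c, the "temperature", is what the cooling schedule gradually lowers):

```python
import math
import random

def accept(delta, c, rng=random):
    """Metropolis criterion: a transition changing the cost by delta is
    always accepted if delta <= 0, and otherwise accepted with
    probability exp(-delta / c), where c is the control parameter."""
    return delta <= 0 or rng.random() < math.exp(-delta / c)

# Acceptance rate of a cost increase of 10 at a high and a low value of c.
random.seed(1)
trials = 10_000
hot = sum(accept(10.0, 100.0) for _ in range(trials)) / trials
cold = sum(accept(10.0, 2.0) for _ in range(trials)) / trials
print(hot, cold)
```

At a high value of c nearly every cost increase is accepted; as c decreases the rule degenerates into ordinary iterative improvement, which is what allows the algorithm to escape the local minima that trap the time-equivalent iterative improvement of Section 4.1.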
For a number of well-studied problems in combinatorial optimization, a comparison of simulated annealing and tailored heuristics usually leads to the conclusion that tailored algorithms are more efficient and more effective than simulated annealing: they find better solutions in less time (see, for example, Van Laarhoven and Aarts, and Johnson et al.). Interestingly, this does not seem to be entirely the case for job shop scheduling: simulated annealing has the potential of finding shorter makespans than the tailored heuristic of Adams, Balas and Zawack, at the cost of large running times. In other words, tailoring the algorithm toward the combinatorial structure of the problem does not yield a more effective, merely a more efficient algorithm. This observation is confirmed by the results of Matsuo, Suh and Sullivan, who describe a simulated annealing-based approach to job shop scheduling employing a more problem-specific neighborhood structure. Their approach also leads to a more efficient algorithm, but it again produces results of the same quality.
We consider the disadvantage of large running times to be compensated for by the simplicity of the algorithm, by its ease of implementation, by the fact that it requires no deep insight into the combinatorial structure of the problem, and, of course, by the high quality of the solutions it returns.
REFERENCES
1985a.
AARTS,E. H. L., AND P. J. M. VANLAARHOVEN.
StatisticalCooling:A GeneralApproachto CombinatorialOptimizationProblems.Philips J. Res. 40,
193-226.
AARTS,E. H. L., AND P. J. M. VANLAARHOVEN.
1985b.
A New Polynomial Time Cooling Schedule. In
JobShopSchedulingbySimulatedAnnealing/
Proceedings IEEE International Conference on
Computer-Aided Design, Santa Clara, Calif.,
206-208 (November).
ADAMS,
J., E. BALAS
ANDD. ZAWACK.
1988.The Shifting
Bottleneck Procedure for Job Shop Scheduling.
Mgmt. Sci. 34, 391-401.
BELLMAN, R. E. 1958. On a Routing Problem. Quart.
Apple.Math. 16, 87-90.
CARLIER,J., AND E. PINSON. 1989. An Algorithm for
Solving the Job-Shop Problem. Mgmt. Sci. 35,
164-176.
CERNY,V. 1985. ThermodynamicalApproach to the
TravelingSalesmanProblem:An EfficientSimulation Algorithm.J. Opt. TheoryAppl.45, 41-51.
E. G., JR. (ed.). 1976. ComputerandJob-Shop
COFFMAN,
SchedulingTheory.John Wiley, New York.
FELLER,W. 1950. An Introductionto ProbabilityTheory
and Applications,Vol. 1. John Wiley, New York.
FISHER, H., AND G. L. THOMPSON. 1963. Probabilistic
LearningCombinationsof Local Job-Shop Scheduling Rules. In Industrial Scheduling. J. F.
Muth and G. L. Thompson (eds.). Prentice Hall,
EnglewoodCliffs,N.J., 225-25 1.
FRENCH, S. 1982. Sequencingand Scheduling:An Introduction to the Mathematics of the Job-Shop.
Horwood,Chichester,U.K.
JOHNSON, D. S., C. R. ARAGON, L. A. McGEOCH AND
C. SCHEVON.
1989. Optimizationby SimulatedAnnealing:An ExperimentalEvaluation(PartI). Opns.
Res. 37, 865-892.
KIRKPATRICK,S., C. D. GELATT, JR. AND M. P. VECCHI.
1983. Optimization by Simulated Annealing.
Science220, 671-680.
B. J. 1984. PrivateCommunication.
LAGEWEG,
LAGEWEG,B. J., J. K. LENSTRAAND A. H. G. RINNOOY
KAN. 1977. Job-ShopSchedulingby Implicit Enu-
125
meration.Mgmt. Sci. 24, 441-450.
AND A. H. G. RINNOOY
LAWLER,E. L., J. K. LENSTRA
KAN.1982. Recent Developmentsin Deterministic
Sequencingand Scheduling:A Survey.In Deterministic and StochasticScheduling.M. A. H. Dempster,
J. K. Lenstraand A. H. G. Rinnooy Kan (eds.).
Reidel,Dordrecht,The Netherlands,35-73.
LAWRENCE,S. 1984. Resource Constrained Project
Scheduling:An ExperimentalInvestigationof Heuristic SchedulingTechniques(Supplement).Graduate School of IndustrialAdministration,Carnegie
Mellon University,Pittsburgh.
LUNDY, M., AND A. MEES. 1986. Convergence of an
AnnealingAlgorithm.Math. Prog.34, 111-124.
MATSUO,H., C. J. SUH AND R. S. SULLIVAN.1988. A
ControlledSearchSimulatedAnnealingMethodfor
the GeneralJobshopSchedulingProblem.Working
Paper03-04-88, Departmentof Management,The
Universityof Texas at Austin.
C. H., AND K. STEIGLITZ.
PAPADIMITRIOU,
1982. Com-
binatorialOptimization:Algorithmsand Complexity. Prentice-Hall,EnglewoodCliffs,N.J.
ROMEO,F., AND A. L. SANGIOVANNI-VINCENTELLI.
1985.
ProbabilisticHill Climbing Algorithms:Properties
and Applications.In Proceedings1985 ChapelHill
Conferenceon VLSI, Chapel Hill, N.C., 393-417
(May).
Roy, B., AND B. SAUSSMANN.1964. Les Problemes
d'Ordonnancementavec ConstraintsDisjonctives.
Note DS No. 9 bis, SEMA,Paris.
VAN LAARHOVEN,
P. J. M. 1988. Theoretical and Com-
putationalAspects of SimulatedAnnealing. Ph.D.
Thesis,ErasmusUniversity,Rotterdam.
VAN LAARHOVEN,
P. J. M., AND E. H. L. AARTS.1987.
Simulated Annealing: Theory and Applications.
Reidel,Dordrecht,The Netherlands.