Redacted for privacy AN ABSTRACT OF THE DISSERTATION OF

advertisement
AN ABSTRACT OF THE DISSERTATION OF
Nasser Salmasi for the degree of Doctor of Philosophy in Industrial Engineering
presented on September 15, 2005.
Title: Multi-Stage Group Scheduling Problems with Sequence Dependent Setups.
Abstract approved:
Redacted for privacy
Rasaratnam Lodndran
The challenges faced by manufacturing companies have forced them to become more
efficient. Cellular manufacturing is a concept that has been accepted as a technique
for increasing manufacturing productivity in batch type production by efficient
grouping of parts (jobs) with some similarities in processing operations into groups
and sequentially matching machine cell capabilities for performing these operations.
In each cell, finding the best sequence of processing the assigned groups to the cell
and the jobs in each group by considering some measure of effectiveness, improves
the efficiency of production.
In this research, it is assumed that n groups are assigned to a cell that has m machines.
Each group includes
bL
jobs (i = 1, 2, ..., n). The set-up time of a group for each
machine depends on the immediately preceding group that is processed on that
machine (i.e., sequence dependent set-up time). The goal is to find the best sequence
of processing jobs and groups by considering minimization of makespan or
minimization of sum of the completion times.
The proposed problems are proven to be NP-hard. Thus, three heuristic algorithms
based on tabu search are developed to solve problems. Also, two different initial
solution generators are developed to aid in the application of the tabu search-based
algorithms.
The lower bounding techniques are developed to evaluate the quality of solutions of
the heuristic algorithms. For minimizing makespan, a lower bounding technique
based on relaxing a few constraints of the mathematical model is developed. For
minimizing sum of the completion times, a lower bounding approach based on
Branch-and-Price (B&P) technique is developed.
Because several versions of tabu search are used to solve the problem, to find the best
heuristic algorithm, random test problems, ranging in size from small, medium, to
large are created and solved by the heuristic algorithms. A detailed statistical
experiment, based on nested split-plot design, is performed to find the best heuristic
algorithm and the best initial solution generator. The results of the experiment show
that the tabu search-based algorithms can provide good quality solutions for the
problems with an average percentage error of 8.15%.
©Copyright by Nasser Salmasi
September 15, 2005
All Rights Reserved
Multi-Stage Group Scheduling Problems with Sequence Dependent Setups
by
Nasser Salmasi
A DISSERTATION
submitted to
Oregon State University
In part fulfillment of
the requirement for the
degree of
Doctor of Philosophy
Presented September 15, 2005
Commencement June 2006
Doctor of Philosophy dissertation of Nasser Salmasi
Presented on September 15, 2005
Approved:
Redacted for privacy
Major Professor, representing j{ndustrial Engineering
Redacted for privacy
Head of the Department of Industrial and Mifacturing Engineering
Redacted for privacy
Dean of th'eJGduate School
I understand that my dissertation will become part of the permanent collection of
Oregon State University libraries. My signature below authorizes release of my
dissertation to any reader upon request.
Redacted for privacy
Nasser Salmasi, Author
ACKNOWLEDGEMENTS
This dissertation is the result of my work as a PhD student at Oregon State
University (OSU). It could not have been completed without the significant
contribution of many people and organizations that have supported and
encouraged me continuously during this period. I would like to acknowledge their
contributions for helping me complete my PhD at OSU.
First of all, I thank my major professor, Dr. Rasaratnam Logendran, for his
excellent guidance and financial support while I was his research assistant on the
NSF grant (Grant No. DMI-0010118). His mentorship was of great value while he
patiently and generously spent time with me in our meetings throughout these
years. I am grateful for his comments, inspiration, and encouragement, all of
which led me to a deeper understanding of the topic of this dissertation.
I specially thank the other members of my PhD committee. I am thankful to Dr.
Jeff Arthur, my minor professor in the area of operations research, for his help
with my research and his wonderful classes. Many thanks also to Dr. David
Porter, my Minor professor in the area of information systems, for his advice and
providing me with a laptop and sharing his research lab with me as a place to
perform my experiments. I must also express my sincere gratitude to Dr. David S.
Kim, the next member of my PhD committee, for his help with my research and
for having me as his teaching assistant. I learned many things by working with
him. Additionally, I am thankful to Dr. Saurabh Sethia for being my Graduate
Council Representative (GCR) during the first two years of this research. I also
thank Dr. Mark Pagell for being my GCR for the final defense.
I wish to thank Dr. Dave Birkes and Mr. Raghavendran Dharmapuri Nagarajan
(Rags) for helping me with the experimental design process.
Many thanks to the department of Industrial and Manufacturing Engineering
(IME) at Oregon State University (OSU), and all of its faculty and staff members
for financially supporting me as a Graduate Teaching Assistant during these
years. Many thanks to Bill Layton, for providing such a stable computer network
that allowed me to worry less about our computer system.
I also appreciate the help of my friends from all over the world. I am grateful for
their support, prayers, encouragement, and advice during these challenging years.
I am especially grateful for the friendship of Dr. Shakib Shaken, who always
generously shared his valuable experience and inforniation with me on every
possible matter.
During my PhD program, I had a wonderful colleague, Cumhur Alper
Gelogullari, whose help always led to identifying a short cut for solving problems.
I must also step back and thank the faculty and staff of Sharif University of
Technology in my home country, Iran, for raising my interest in operations
research, while I earned my Bachelor's and Master's degrees in Industrial
Engineering.
It is clear that I would not have been able to travel down this path without the
help, support, and encouragement of my lovely parents. Their encouragement has
always been my greatest motivational force. There is no way to thank them
enough for their efforts in helping me to earn my PhD degree.
TABLE OF CONTENTS
Page
CHAPTER 1: INTRODUCTION .........................................................
1
CHAPTER 2: LITERATUPE REVIEW .................................................
4
2.1 Sequence Independent Job Scheduling (SIJS)
...........................
4
2.2. Sequence Dependent Job Scheduling (SDJS)
...........................
5
2.3. Sequence Independent Group Scheduling (SIGS) .........................
6
2.4. Sequence Dependent Group Scheduling (SDGS)
........................
8
.....................
9
CHAPTER 3: MOTIVATION AND PROBLEM STATEMENT
3.1 Motivation
....................................................................
............................................................
10
...........................................
13
...........................................................................
13
3.2 Problem Statement
CHAPTER 4: MATHEMATICAL MODELS
4.1 Models
9
......................................................
16
..........................................................................
17
4.2 Complexity of Problems
4.3 Example
CHAPTER 5: HEURISTIC ALGORITHM (TABU SEARCH) ........................
19
TABLE OF CONTENTS (Continued)
5.1 Overview of Tabu Search
.
19
5.2 Tabu Search Mechanism
....................................................
20
5.2.1 Forbidden Strategy
...................................................
20
......................................................
21
.............................
21
...............................................................
24
5.2.2 Freeing Strategy
5.2.3 Short-Term and Long-Term Strategies
5.3 Initial Solution
5.3.1 Initial Solution Techniques for Minimization of Makespan
Criterion................................................................
24
....................................................
24
5.3.1.1 Rank Order
5.3.1.2 Applying the Result of Schaller et al. 'S (2000) Algorithm
as an Initial Solution ...........................
25
5.3.1.2.1 Step 1. Applying CDS (Campbell-Dudek-Smith,
1970) Based Procedure to Find the Best Job Sequence
for Groups ......................................
25
5.3.1.2.2 Step 2. Applying NEH Based Procedure to Find
the Best Group Sequence ..................................
5.3.2 Initial Solution Techniques for Minimization of the Sum of the
Completion Times Criterion ..................................
5.3.2.1 Rank Order
....................................................
26
27
27
5.3.2.2 Relaxing the Problem to a Single Machine, SIGS
Problem.........................................................
....................................
28
.........................................................
29
................................................
29
5.4 Generation of Neighborhood Solutions
5.5 Steps of Tabu Search
28
5.5.1 Step 1: Initial Solution
TABLE OF CONTENTS (Continued)
5.5.2 Step 2: Evaluate the Objective Function Value of the Seed
30
................................................
30
5.5.3.1 Step 3.1: Find Inside Neighborhood Solutions ............
30
............
30
..................................
32
...............................................
34
5.5.4.1 Step 4.1: Find outside Neighborhood Solutions ..........
34
5.5.3 Step 3: Inside Search
5.5.3.2 Step 3.2: Evaluate the Inside Neighborhoods
5.5.3.3 Step 3.3: Stopping Criteria
5.5.4 Step 4: Outside Search
5.5.4.2 Step 4.2: Evaluate the Objective Function Value of
Outside Neighborhoods
.....................................
5.5.4.3 Step 4.3: Stopping Criteria ..................................
34
36
5.6 Two-Machine SDGS Problem with Minimization of Makespan
Criterion......................................................................
39
5.7 Applied Parameters for Proposed Research Problems....................
40
5.7.1
Empirical Formulae for Two Machine Problems by
Considering Minimization of Makespan Criterion ...............
40
5.7.2 Empirical formulae for Three Machine and Six Machine
Problems by Considering Minimization of Makespan Criterion
40
5.7.3 Empirical Formulae for Two, Three and Six Machine Problems
by Considering Minimization of Sum of the Completion Times
Criterion.................................................................
41
5.8 Application of Tabu Search to an Example Problem by Considering
.....................................
42
...............................................
43
Minimization of Makespan Criterion
5.8.1 Step 1: Initial Solution
TABLE OF CONTENTS (Continued)
5.8.2 Step 2: Evaluate the Objective Function Value of the Initial
Solution................................................................
43
.......................................
43
5.8.3.1 Step 3.1: Evaluate Inside Neighborhoods .................
44
5.8.3 Step 3: Perform Inside Search
5.8.3.2 Step 3.2: Evaluate the Stopping Criteria for Inside
Search..............................................................
45
..............................................
46
.....................................
46
................
46
5.8.3.3 Repeat the Cycle
5.8.4 Step 4: Perform Outside Search
5.8.4.1 Step 4.1: Evaluate Outside Neighborhoods
5.8.4.2 Step 4.2: Evaluate the Stopping Criteria for Outside
Search..............................................................
48
.............................................
48
5.8.4.3: Repeat the Cycle
5.9 Application of Tabu Search to an Example Problem by Considering
.............
..............................................
Minimization of Sum of the Completion Times Criterion
49
5.9.1 Step 1: Initial Solution
49
5.9.2 Step 2: Evaluate the Objective Function Value of the Initial
Solution...................................................................
49
......................................
50
5.9.3.1 Step 3.1: Evaluate Inside Neighborhoods ..................
50
5.9.3 Step 3: Perform Inside Search
5.9.3.2 Step 3.2: Evaluate the Stopping Criteria for Inside
Search..............................................................
52
5.9.3.3: Repeat the Cycle .............................................
52
5.9.4 Step 4: Perform Outside Search......................................
52
................
53
5.9.4.1 Step 4.1: Evaluate Outside Neighborhoods
TABLE OF CONTENTS (Continued)
5.9.4.2 Step 4.2: Evaluate the Stopping Criteria for Outside
Search..............................................................
5.9.4.3: Repeat the Cycle
CHAPTER 6: LOWER BOUNDS
................................................
.......................................................
6.1 Lower Bounding Techniques for Minimization of Makespan ...........
55
55
56
56
6.1.1 Application of the Lower Bounding Technique to a Problem
Instance..................................................................
60
6.2 Lower Bounding Technique for Minimization of Sum of the
..........................................................
60
6.2.1 Simplifying the Two-Machine Problem...........................
65
6.2.1.1 The Relaxing Rule for SP1 in the Two Machine
Problem .........................................................
67
Completion Times
6.2.1.2 The Relaxing Rule for
SP2
in the Two-Machine
Problem ........................................................
69
6.2.2 Simplifying the Three-Machine Problem ..........................
70
6.2.2.1 The Relaxing Rule for SP1 in the Three-Machine
Problem ...........................................................
73
6.2.2.2 The Relaxing Rule for SP2 in the Three-Machine
Problem ...........................................................
73
6.2.2.3 The Relaxing Rule for SP3 in the Three-Machine
Problem..........................................................
75
6.2.3 A Generalized Model for Simplifying the Multiple-Machine
Problems..............................................................
75
TABLE OF CONTENTS (Continued)
6.2.3.1 The Relaxing Rule for
SP1
in the Multiple-Machine
Problem.............................................................
77
6.2.3.2 The Relaxing Rule for SP2 through SPmi in the Multiple
Machine Problem ................................................
77
6.2.3.3 The Relaxing Rule for
SPm
in the Multiple Machine
Problem.............................................................
78
6.2.4 Adding an Auxiliary Constraint to Simplify Finding the
................................................
79
..................................................
79
Sequence of Dummy Jobs
6.2.5 Solving Sub-Problems
...............................................................
80
......................................................
82
6.2.8 The Software Application .............................................
82
........................
82
6.2.lOExample ................................................................
84
6.2.6 Branching
6.2.7 Stopping Criteria
6.2.9 The Lower Bound for the Original Problem
...........................................
87
.....................................................
87
7.2 Test Problems Specifications
...............................................
93
7.3 Two Machine Test Problems
................................................
95
7.4 Three Machine Test Problems ...............................................
96
..................................................
101
CHAPTER 7: EXPERIMENTAL DESIGN
7.1 Steps of the Experiment
7.5 Six Machine Test Problems
TABLE OF CONTENTS (Continued)
CHAPTER 8: RESULTS
8.1 The Results for the Makespan Criterion
8.1.1
.
103
....................................
103
The Results of Two-Machine Problems by Considering
Minimization of Makespan Criterion
................................
103
8.1.1.1 Comparison among Heuristic Algorithms and Lower
Bound...............................................................
104
8.1.1.2 The Experimental Design to Compare the Time Spent
for Heuristic Algorithms for Two Machine Problems by
Considering Minimization of Makespan Criterion ............ 109
8.1.1.3 The Comparison between the Best Tabu Search and the
Results of Schaller et al. (2000) Algorithm .................... 113
8.1.2 The Results of Three-Machine Makespan Criterion ................
113
8.1.2.1 Comparison among Heuristic Algorithms and Lower
Bound for Three Machine Problems by Considering
Minimization of Makespan
.......................................
114
8.1.2.2 The Experimental Design to Compare the Time Spent for
Heuristic Algorithms for Three Machine Problems by
.......................
120
8.1.2.3 The Comparison Between the Best Tabu Search and the
Results of Schaller et al. (2000) Algorithm .....................
126
8.1.3 The Results of Six-Machine Makespan Criterion ...................
127
Considering Minimization of Makespan
8.1.3.1 Comparison among Heuristic Algorithms and the Lower
Bound...............................................................
127
8.1.3.2 The Experimental Design to Compare the Time Spent for
Heuristic Algorithms for Six Machine Problems by
133
Considering Minimization of Makespan ......................
TABLE OF CONTENTS (Continued)
8.1.3.3 The Comparison Between the Best Tabu Search and the
Results of Schaller et al. (2000) Algorithm for Six-Machine
Problems by Considering Minimization of Makespan
Criterion.............................................................
136
8.2 The Results for Minimization of Sum of the Completion Times
Criterion ........................................................................
137
The Results of Two-Machine Problems by Considering
8.2.1
Minimization of Sum of the Completion Times Criterion
8.2.1.1
.........
137
Comparison among Heuristic Algorithms for Two
Machine Problems by Considering Minimization of Sum of
138
the Completion Times ...........................................
8.2.1.2 The Experimental Design to Compare the Time Spent for
Heuristic Algorithms .............................................
141
...........................
144
8.2.1.3 Evaluating the Quality of Solutions
8.2.2 The Results of Three-Machine Problems by Considering
Minimization of Sum of the Completion Times Criterion
.........
145
8.2.2.1 Comparison among Heuristic Algorithms for Three
Machine Problems by Considering Minimization of Sum of
the Completion Times ............................................. 146
8.2.2.2 The Experimental Design to Compare the Time Spent
for Heuristic Algorithms
.......................................
152
..........................
158
8.2.2.3 Evaluating the Quality of Solutions
8.2.3
The Results of Six-Machine Problems by Considering
Minimization of Sum of the Completion Times Criterion ......
8.2.3.1
159
Comparison among Heuristic Algorithms for Six
Machine Problems by Considering Minimization of
Sum of the Completion Times
...............................
159
TABLE OF CONTENTS (Continued)
Page
8.2.3.2 The Experimental Design to Compare the Time Spent
for Heuristic Algorithms by Considering Minimization of
163
Sum of the Completion Times Criterion ...................
8.2.3.3 Evaluating the Quality of Solutions
CHAPTER 9: DISCUSSION
..........................
.............................................................
9.1 Analyzing the Results of Minimization of Makespan Criterion
......
167
168
168
9.2 Analyzing the Results of Minimization of Sum of the Completion
Times Criterion
.............................................................
170
CHPATER 10: CONCLUSIONS AND SUGGESTIONS FOR FUTHURE
RESEARCH
................................................................
172
10.1 Suggestions for Future Research ............................................
174
10.1.1 Defining Related Research Problems ...............................
174
10.2 Applying New Techniques (tools) to Solve Proposed Problems .......
176
BIBLIOAGRAPHY ...........................................................................
178
APPENDICES ..............................................................................
183
A The ANOVA and Test of Effect Slices Tables for the Result Chapter. ..
184
B The Percentage Errors for Schaller et al. (2000) Algorithm ................
217
LIST OF FIGURES
Pge
Figure
3.1 The Scheduling Tree Diagram
...................................................
12
4.1 The Gantt chart of processing groups as well as jobs in rank order ...
17
5.1 Flow chart for outside search ...................................................
38
.....................................................
39
5.3 The Gantt chart of the initial solution ...........................................
43
...................................
49
5.5 The Gantt chart of the initial solution ...........................................
50
...................................
55
......................
67
....................................................
81
6.3 The objective function value of nodes for an incomplete problem .........
83
5.2 Flow chart for inside search
5.4 The Gantt chart of the tabu search sequence
5.6 The Gantt chart of the tabu search sequence
6.1 The Gantt chart of processing two different sequences
6.2 The branching rule flow chart
8.1 The normal probability plot of the experimental design of finding the best
heuristic algorithm for two machine problem by considering
minimization of makespan
.................................................
8.2 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for two machine problem by
considering minimization of makespan ......................................
107
111
8.3 The normal probability plot of the experimental design of finding the best
heuristic algorithm for three machine problem by considering
minimization of makespan
.................................................
8.4 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for three machine problem by
considering minimization of makespan ...................................
119
125
8.5 The normal probability plot of the experimental design of finding the best
heuristic algorithm for six machine problem by considering
minimization of makespan
...............................................
131
LIST OF FIGURES
Figure
8.6 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for six machine problem by
considering minimization of makespan .................................
135
8.7 The normal probability plot of the experimental design of finding the best
heuristic algorithm for two machine problem by considering
minimization of sum of the completion times
..........................
8.8 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for two machine problem by
considering minimization of sum of the completion times criterion..
140
143
8.9 The normal probability plot of the experimental design of finding the best
heuristic algorithm for three machine problem by considering
minimization of sum of the completion times
...........................
150
8.10 The normal probability plot of the experimental design of finding the
most efficient heuristic algorithm for the three machine problem by
considering minimization of sum of the completion times criterion
157
8.11 The normal probability plot of the experimental design of finding the best
heuristic algorithm for six machine problem by considering
minimization of sum of the completion times criterion
................
162
8.12 The normal probability plot of the experimental design of finding the
most efficient heuristic algorithm for six machine problem by
considering minimization of sum of the completion times criterion..
165
LIST OF TABLES
Table
Page
4.lThe run time ofjobs in groups .....................................................
17
4.2 The set up times for groups ........................................................
17
5.1 The outside search parameters for two machine problems with makespan
criterion.........................................................................
40
5.2 The outside search parameters for three machine and six machine
problems with makespan criterion ...........................................
41
5.3 The inside search parameters for three machine and six machine
problems with makespan criterion ..........................................
41
5.4 The outside search parameters for two, three, and six machine problems
with minimization of sum of the completion times criterion.............
42
5.5 The inside search parameters for two, three, and six machine problems
with minimization of sum of the completion times criterion...........
42
5.6 The neighborhoods of the inside initial solution
..............................
44
5.7 The neighborhoods of the outside initial solution ..............................
47
5.8 The neighborhoods of the inside initial solution ...............................
50
.............................
53
.........................................
68
5.9 The neighborhoods of the outside initial solution
6.1 The completion time ofjobs in Si and S2
6.2 The coefficient ofX1Jk's in SPs ...................................................
6.3 The coefficient ofX]k's in SPs
72
....................................................
.........................................................
84
6.5 The branching coefficients OfASq,(j+J)l at the end of the first node ..........
85
6.6 The result of the second node .....................................................
86
........................................................
86
7.1 The set-up time of each machine on two-machine problems .................
94
6.4 The result of the first node
6.7 The result of the third node
LIST OF TABLES
Table
____
7.2 The set-up time of each machine on three-machine problems ..................
94
....................
94
7.4 Small size problems based on group category (two machine) .....................
95
7.5 Medium size problems based on group category (two machine) ..............
95
7.6 Large size problems based on group category (two machine) .................
95
7.3 The set-up time of each machine on six-machine problems
7.7 The specification of test problems generated for two machine
problem ........................................................................ 96
7.8 Small group, small job size problems (three machine)
.........................
97
.....................
97
.........................
97
.....................
97
..................
97
......................
97
........................
98
.....................
98
..........................
98
......................
98
7.9 Small group, medium job size problems (three machine)
7.10 Small group, large job size problems (three machine)
7.11 Medium group, small job size problems (three machine)
7.12 Medium group, medium job size problems (three machine)
7.13 Medium group, large job size problems (three machine)
7.14 Large group, small job size problems (three machine)
7.15 Large group, medium job size problems (three machine)
7.16 Large group, large job size problems (three machine)
7.17 The test problems generated for three machine problem
7.18 Small size problems based on group category (six machine) .................
101
7.19 Medium size problems based on group category (six machine) .............
101
7.20 Large size problems based on group category (six machine) ..................
101
7.21 The specification of generated test problems for six machine problem
102
8.1 The results of the experiments with test problems for two machine
problems by considering minimization of makespan
.................
105
LIST OF TABLES
Table
Page
8.2 The ANOVA for two machine problem by considering minimization of
makespan for algorithm comparison ......................................
108
8.3 Test of effect slices for two machine problem by considering minimization
of makespan for algorithm comparison ...................................
109
8.4 The time spent for the test problems of two machine problems (in seconds)
by considering minimization of makespan criterion .....................
109
8.5 The results of the experiments with test problems for three machine
problems by considering minimization of makespan criterion ..........
114
8.6 The time spent for the test problems of three machine problems (in
seconds) by considering minimization of makespan criterion ...........
121
8.7 The results of the experiments with test problems for six machine
problems by considering minimization of makespan criterion
.........
128
8.8 The lower bound value of test problems for six machine problems by
considering minimization of makespan criterion .........................
130
8.9 The time spent for the test problems of six machine problems (in seconds)
by considering minimization of makespan criterion .....................
133
8.10 The results of the test problems for
two
machine problems by
considering minimization of sum of the completion times ...............
138
8.11 The time spent for the test problems of two machine problems (in
seconds) by considering minimization of sum of the completion
times...........................................................................
141
8.12 The results of the lower bounding technique for two machine problems
by considering minimization of sum of the completion times
criterion ..........................................................................
145
8.13 The results of the test problems for three machine problems by
considering minimization of sum of the completion times criterion...
146
8.14 The experimental cells of three machine problems by considering
minimization of sum of the completion times criterion in which the
heuristic algorithms do not have the same performance ................
151
LIST OF TABLES
Table
8.15 The time spent for the test problems of three machine problems (in
seconds) by considering minimization of sum of the completion
..................................................................
152
8.16 The results of the lower bounding technique for three machine problems
by considering minimization of sum of the completion times criterion
158
times criterion
8.17 The heuristic algorithms results of the test problems for six machine
problems by considering minimization of sum of the completion
times criterion
......................................................................
160
8.18 The experimental cells of six machine problems by considering
minimization of sum of the completion times criterion in which the
initial solution generators do not have the same performance ..........
163
8.19 The time spent for the test problems for six machine problems (in
seconds) by considering minimization of sum of the completion
times criterion
.................................................................
163
8.20 The experimental cells of six machine problems by considering
minimization of sum of the completion times criterion in which the
heuristic algorithms do not have the same time spent ..................
166
8.21 The result of the lower bounding technique for six machine problems by
considering minimization of sum of the completion times
criterion .........................................................................
167
9.1 The results of test problems for minimization of makespan criterion
......
169
9.2 The result of the most efficient initial solution generator by considering
minimization of makespan criterion ........................................
169
9.3 The results of test problems for minimization of sum of the completion
times criterion .................................................................
171
9.4 The percentage error of the test problems for minimization of sum of the
completion times by removing problems with more than 50%
percentage error
...............................................................
171
Multi-Stage Group Scheduling Problems with Sequence Dependent Setups
CHAPTER 1: iNTRODUCTION
The challenges faced by manufacturing companies have forced them to become more
efficient and more flexible. In the 1970s, a method of manufacturing, called Cellular
Manufacturing (CM), was developed. CM is a suitable approach to increase the
productivity and flexibility of production in a manufacturing company that produces a
variety of products in small batches.
In CM, the parts are assigned to different groups based on their similarities in shape,
material, or similar processing operations. The machines are also assigned to different
cells in order to decompose the production line. The groups are then assigned to a
particular cell, which includes several machines that have the ability to perform the
necessary operations for groups. This decomposition of machines and jobs has several
advantages such as significant reduction in set-up time, work-in-progress inventories,
and simplified flow of parts and tools (Logendran, 2002).
Sequencing and scheduling is a form of decision-making that plays a crucial role in
manufacturing and service industries (Pinedo, 2002). They have been applied to
improve the efficiency of production since the beginning of the last century. Thus, the
next step for increasing the efficiency of production is finding the best sequence of
processing the assigned groups to the cell as well as the jobs of a group in order to
maximize or minimize some measure of effectiveness. This subject is called Group
Scheduling. Two relevant objectives in the investigation of group scheduling problems,
minimization of makespan and minimization of the sum of the completion times, are
considered in this research. The goal of minimization of makespan is to minimize the
completion time of the last job on the last machine. On the other hand, the goal of
minimization of the sum of the completion times is to minimize the average completion
time of all jobs. The purpose of both of these criteria is to deliver orders as quickly as
possible to the customers. In general, the longer the jobs stay on the shop floor, the
higher they cost the company. Suppose that a company receives a large order from a
2
customer to produce several different groups ofjobs. The efficient way of preparing the
order is to compress (minimize) the completion time of the last job processed so that
the entire order can be delivered to the customer as quickly as possible in one shipment.
On the other hand, suppose that the company receives several orders from
different
customers, and that all of them have the same priority (weight) for the company. In this
case, minimization of the sum of the completion times is appropriate to maximize the
efficiency as it would indirectly minimize the work-in-progress inventories.
In group scheduling problems, all jobs that belong to a group require similar set-up on
machines. Thus, a major set-up is required for processing each group on every machine.
The set-up operation of a group includes preparing the machine, bringing required
tools, setting the required jigs and fixtures, inspecting the materials and cleanup
(Allahverdi et al, 1999), which should be considered as a separate operation on
machines for some problems rather than considering it as a part of processing time. The
separable set-up time scheduling problems are divided into two major categories:
sequence dependent, and sequence independent scheduling. If the set-up time of a
group for each machine depends on the immediately preceding group that is processed
on that machine, the problem is classified as "sequence dependent group scheduling,"
Otherwise, it is called "sequence independent group scheduling".
The importance of sequence dependent set-up time scheduling problems has been
discussed in several studies. Allahverdi et al. (1999) mentioned the results of a survey
performed by Panwalker et al. (1973), in which 75% of the manufacturing managers
mentioned that they had the experience of producing parts with sequence dependent
set-up time specifications and almost 15% of them believe that all production
operations belong to sequence dependent scheduling problems. Wortman (1992)
explained the importance of considering sequence dependent set-up times for the
effective management of manufacturing capacity. There are many real world
applications of sequence dependent scheduling problems. Schaller et al. (2000)
discussed an industry case of sequence dependent group scheduling problem in printed
circuit boards (PCBs) in which the major set-up is required to switch from a group of
PCBs to another. Painting industry is another example of such problems.
Vakharia et al. (1995) and Schaller et al. (1997, 2000) present branch and bound
approaches to solve the Sequence Dependent Group Scheduling problems (SDGS) with
multiple machines by considering minimization of makespan. They also propose a fast
heuristic algorithm to minimize the makespan for a SDGS problem by applying some
of the existing scheduling heuristic algorithms to find a sequence for processing groups
as well as jobs in a group. Because their algorithm does not consider the relationship
between
groups
and job sequences, it may not provide a good quality solution.
Considering the widespread practical applications of sequence dependent
group
scheduling in industry and the importance of minimizing the makespan and minimizing
the sum of the completion times, developing a heuristic algorithm to solve these
problems in a reasonable time with good quality can help to improve the efficiency of
production.
The industry needs an algorithm which can provide a sequence of processing groups as
well as jobs with a good quality (optimal or near optimal) in a short time. Because the
proposed research problems, as will be discussed in the chapters that follow, belong to
NP-hard problems, the time required to solve the real world problems optimally by
applying an exact algorithm is unreasonably high. Thus, a heuristic algorithm is
necessary to get a solution close enough to the optimal solution in a reasonable time. In
order to evaluate the performance of the heuristic algorithm, a lower bounding
technique is also required to be applied as a yard stick. Thus, a lower bounding
technique is developed for each criterion to estimate the quality of solutions. These
lower bounds are created based on the mathematical models of the proposed research
problems.
4
CHAPTER 2: LITERATURE REVIEW
Basically, the flowshop scheduling problems can be classified as job scheduling
problems and group scheduling problems. A group scheduling problem, as discussed
earlier, occurs when assigned jobs to a cell, based on more similarities in process or
shape are set into different groups and are processed in sequence. If the jobs are
investigated independently, the problem is classified as a job scheduling problem.
Because of some similarity between these two classes of problems, the related articles
about both of them in sequence independent and sequence dependent set-up mode are
investigated.
Cheng et al. (2000) and Allahverdi et al. (1999) did a comprehensive literature review
about job scheduling and group scheduling problems. The articles that are most related
to the proposed research problems (minimization of makespan and minimization of the
sum of the completion times) are presented in four different categories as follows:
2.1 Sequence Independent Job Scheduling (SIJS)
Yoshida and Hitomi (1979) pioneered the investigation of SIJS problems. They
considered the two machine flowshop problem and proposed an algorithm based on
Johnson's (1954) rules to obtain the optimal solution for minimization of makespan.
Lageweg et al. (1978) developed a general lower bound for the permutation flowshop
problem by considering minimization of makespan. Bagga and Khurana (1986)
developed a branch and bound algorithm for a
two
machine flow shop sequence
independent job scheduling problem for minimizing the sum of the completion times.
They also developed a lower bound for their problem. The proposed algorithm is
applied to solve problems with 5 to 9 jobs. Proust et al. (1991) proposed three heuristic
algorithms for minimizing the makespan. The first one is an extension of the CDS
heuristic by Campbell, Dudek, and Smith (1970) for the standard flowshop scheduling
problem. The second is a greedy procedure which augments an available partial
schedule with the job that minimizes a lower bound. The third is constructed by
incorporating the 2-job interchange neighborhood search in the above two heuristics for
5
the problem. All these algorithms were evaluated empirically. They also developed a
branch and bound algorithm which can be used for small size problems. Gupta (1972)
described dominance condition and an optimization algorithm to minimize makespan of
SIJS. The proposed algorithm is applied to solve problems with 3 to 6 jobs and 4 to 6
machines. This algorithm can not be used to solve large size problems. Allahverdi
(2000) addressed the two-machine SIJS problem by considering the mean-flow time
criterion. He developed an algorithm to solve problems optimally up to 35 jobs in a
reasonable time.
2.2 Sequence Dependent Job Scheduling (SDJS)
Corwin and Esogbue (1974) considered the two-machine flow shop job scheduling
problems where only one of the machines is characterized by sequence dependent setup
times and proposed a dynamic programming approach to obtain the optimal solution for
minimizing the makespan. Gupta and Darrow (1986) provided a few heuristic
algorithms to find the minimum makespan for a two-machine SDJS problem. The result
of the experiment shows that their heuristic algorithms have good performances for
problems where set-up times are smaller than run time. Computational experiments are
also performed to find out which proposed heuristic algorithm has the best performance
in different size of problems. The results of the experiments reveal that for different
size of problems, a different heuristic algorithm has a superior performance. Bellman et
al. (1982) developed a dynamic programming model to optimally solve a flow shop
scheduling problem with three machines where only one of the machines requires
sequence dependent set-up time. Test problems up to 12 jobs are solved optimally by
the proposed algorithm. Srikar and Ghosh (1986) proposed a mixed integer linear
programming formulation for SDJS. The model is applied to solve several randomly
generated instances of SDJS that included six machines and six jobs at most. Based on
the results of the computational experiment the time taken to solve the problems was
reasonable. Stafford and Tseng (1990) corrected a minor error in the Srikar-Ghosh
formulation. The corrected model is used to solve problems with 5 machines and 7 jobs
by considering minimization of the sum of the completion times criterion. The problem
took about 6 CPU hours to be solved on a personal computer. Gupta et al. (1995)
developed a branch and bound technique to find the minimum makespan for SDJS
problems. They solved problems up to 20 jobs with the proposed algorithm. Rios-
Mercado and Bard (1998) presented a branch-and-cut (B&C) algorithm for
minimization of makespan of SDJS with m machines. The same authors (1999)
presented a branch and bound algorithm which includes the implementation of both
lower and upper bounding procedures, a dominance elimination criterion, and special
feature such as a partial enumeration strategy for minimization of makespan of SDJS
problems.
Gupta (1988) proposed several heuristic algorithms to find the minimum makespan for
SDJS problems. Simons (1992) developed four heuristics for this problem. The main
idea of two of his heuristics is based on well known Vogel's approximation method in
transportation problems. Parthasarathy and Rajendran (1997) proposed a heuristic
algorithm based on simulated annealing to minimize the weighted tardiness of SDJS.
2.3 Sequence Independent Group Scheduling (SIGS)
Ham et al. (1985) described a two-step procedure to solve SIGS problem optimally by
considering minimization of makespan criterion. Baker (1990) generalized these results
and provided a polynomial time optimization algorithm consisting of two steps for jobs
and group sequences. Hitomi and Ham (1976) proposed a branch and bound procedure
to find the minimum makespan of SIGS problems with multiple-machines. Extensions
of their work are described in Ham et al. (1985). The procedure first creates a sequence
of groups in the job sets, and then develops job sequences within each group. The
proposed model is a family version of the heuristic by Petrov (1968). Logendran and
Sriskandarajah (1993) addressed the blocking version of the problem with only separate
set-up times, i.e., a finished job on the first machine will block the machine from being
set-up for another job, until the set-up operation of the job starts on the second
machine. They proposed a heuristic by ignoring the set-up times, and analyzed the
worst-case performance of the heuristic. Campbell et al. (1970) presented a multiple-
7
pass heuristic for solving flowshop scheduling problems with three or more machines.
Vakharia and Chang (1990) modified this heuristic for scheduling groups in a flowshop
manufacturing cell. They also performed a computational experiment to compare this
heuristic algorithm with a simulated annealing heuristic algorithm. The results show
that the simulated annealing heuristic provides good quality solutions at reasonable
computational expense. Skorin-Kapov and Vakharia (1993) developed a tabu search
approach to minimize the completion time of SIGS. They performed a computational
experiment to compare the performance of tabu search algorithm versus simulated
annealing algorithm by Vakharia and Chang (1990). The results of their computational
experiment reveal that the tabu search has a better performance by generating better
solutions in less computational time. The computational experiment is performed with
problems including 3 to 10 groups, 3 to 10 machines, and 3 to 10 jobs. The authors
investigated six versions of tabu search. The experiment is performed with three
different set-up times in which the distribution of set-up times was greater than the run
times of jobs in all of them. Based on their computational experiments, LTM-max (tabu
search by considering long term memory to intensify the search) has the best
performance among fixed tabu-list size versions. Their experiments also show that
variable tabu-list size versions provide better solutions than fixed size versions. Sridhar
and Rajendran (1994) also developed a genetic algorithm for minimizing the makespan
for SIGS. Helal and Rabelo (2004) classified the published heuristics for SIGS
problems into three categories, single path, multiple pass, and iterative heuristics based
on the complexity of the method. They also compare the performance of simulated
annealing versus tabu search for some test problems in which the largest problem
includes 8 groups, 8 jobs in a group, and 8 machines. The results show that the tabu
search algorithm has a slightly better performance than simulated annealing by
considering minimization of makespan criterion. Schaller (2000) performed a design of
experiment to compare the performance of tabu search (developed by Skorin-Kapov
and Vakharia, 1993) and genetic algorithm (Sridhar and Rajendran, 1994) and reported
that the tabu search has a better performance.
2.4 Sequence Dependent Group Scheduling (SDGS)
Jordan (1996) discussed the extension of a genetic algorithm to solve the two machine
SDGS problem to minimize the weighted sum of earliness and tardiness penalties.
Vakharia et al. (1995) and Schaller et al. (1997, 2000) present branch and bound
approaches as well as several heuristics to solve the SDGS with multiple machines. The
highlight of their research is published in a paper by Schaller et al. (2000). They
propose a heuristic algorithm to minimize the makespan for a SDGS problem. In this
algorithm, the sequence of groups and jobs that belong to a group are investigated
independently. Finding a solution for the proposed problem requires two aspects:
finding the sequence of jobs within each group, and finding the sequence of groups.
While there is interaction between these two aspects, the authors assumed that these
sequences can be developed independent of each other. They applied a few existing
heuristic algorithms such as Campbell-Dudek-Smith (CDS) (1970) procedure to find
the best sequence of jobs in a group. The sequence of groups is investigated by
applying a few algorithms based on the procedure by Gupta and Darrow (1986), and
Baker's (1990) scheduling algorithm. The authors also provide a lower bounding
technique to evaluate the quality of their solutions by generalizing the machine based
bound of traditional flowshop scheduling problems.
Reddy and Narendran (2003) investigated the SDGS problems by considering dynamic
conditions. The process time of jobs was assumed to have an exponential distribution.
They also relaxed the assumption of availability of all jobs at the beginning of the
scheduling. Simulation experiments are applied to find the best sequence of jobs and
groups to minimize the tardiness as well as the number of tardy jobs.
CHAPTER 3: MOTIVATION AND PROBLEM STATEMENT
3.1 Motivation
Allahverdi et al. (1999) provide some explanations about the applications of sequence
dependent scheduling problems. Panwalker et al. (1973) discovered that about 75% of
the managers reported at least some operations they schedule require sequence
dependent set-up times, while approximately 15% reported all operations requiring
sequence dependent set-up times. Flynn (1987) determined that application of both
sequence dependent set-up procedures and group technology principles increase output
capacity in a cellular manufacturing shop, and Wortman (1992) explained the
importance of considering sequence dependent set-up times for the effective
management of manufacturing capacity.
Ham et al. (1985) discussed about the importance of applying group technology
(combining jobs into groups). They said "Development and implementation of
Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) in the
manufacturing industry lead to more integrated applications of group technology
concept. It has been recognized that group technology is an essential element of the
foundation for successful development and implementation of CAD/CAM through
application of the part-family concept based on some similarities between jobs. This
approach creates a compatible, economic basis for evolution of computer automation in
batch manufacturing through increased use of hierarchical computer control and multi
station NC manufacturing systems."
The above literature review reveals that while a considerable body of literature on
sequence dependent and sequence independent job scheduling has been created, there
still exist several potential areas worthy of further research on sequence dependent and
sequence independent group scheduling (Cheng et al., 2000). Considering the
widespread practical applications of sequence dependent group scheduling in industry
(as discussed its necessity in above paragraphs), especially in hardware manufacturing,
ILi
and the importance of minimizing the makespan and minimizing the sum of the
completion times, further research on these subjects is still required.
3.2 Problem Statement
In this research, it is assumed that n groups (G1, G2, ..., G) are assigned to a cell that
has m machines
(M1,
M2,..., Mm). Each group includes bjobs (i =
1, 2, ..., n).
The set-
up time of a group for each machine depends on the immediately preceding group that
is processed on that machine (sequence dependent set-up time). The purpose of this
research is to find the best sequence of processing jobs as well as groups by considering
minimizing some measure of effectiveness. Two such measures include the
minimization of makespan and minimization of the sum of the completion times. The
assumptions made in this research are:
All jobs and groups are processed in the same sequence on all machines
(permutation scheduling). This is the only way of production in some industries.
For instance, if a conveyer is used to transfer jobs among machines, then all jobs
should be processed in the same sequence on all machines.
All jobs in each group are available at the beginning of the schedule. This is
commonly known as static job releases. It means that the flow time of a job is the
same as its completion time on the last machine.
All jobs and groups have the same importance (weight) for the company.
All machines are available at the beginning of planning horizon.
This problem belongs to static flowshop problems. Figure 3.1 shows the classification
of all scheduling problems, including the proposed research problem.
The size of the problems which are investigated during this research are as follows:
Number of groups: Group scheduling problems including
2
to 16 groups are
investigated. Based on the reviewed papers, the previous research has focused on
problems with at most ten groups.
11
Number of jobs in a group: Problems including 2 to 10 jobs in a group are
considered in this research. Based on the papers reviewed, the previous
investigations have been limited to at most ten jobs in a group (Schaller et al.,
2000).
Number of machines in a cell: As discussed before, the goal of applying cellular
manufacturing is to decompose the production activities and simplify them. Thus, if
in a cell, too many machines are assigned, then the goal of applying cellular
manufacturing is violated. Based on this fact, in many cases the number of
machines in a cell does not exceed six. Thus, problems up to six machines in a cell
are investigated in this research.
Another assumption considered in this research is that in all cases the required set-up
time for a group on a machine is considerably greater than the run time of jobs on
machines. In many production lines the required set-up time of a machine is larger than
the run time of individual jobs.
It is clear that any one of the n groups of jobs in the current planning horizon can be
preceded by the last group that was processed in the previous planning horizon. This
"last group" is referred as the reference group 'R' in this research. The reference group
is a group which was processed as the last group on a machine in the previous planning
horizon. Thus the required set-up time of each group compared to the reference group
should be considered to find the best sequence.
12
Scheduling
Problems
Static
Scheduling
Dynamic
Scheduling
Multi machine
Problems
Machine
Problems
Parallel
Machine
Problems
Shop
Scheduling
Problems
Job Shop
Problems
Flowshop
Problems
Job
Scheduling
Sequence
Independent Setup time Problems
Group
Scheduling
Sequence
dependent Set-up
time Problems
Figure 3.1 The Scheduling Tree Diagram
Sequence
Independent Setup time Problems
=
Sequeiwe
dependent Set-up
time Problems
13
CHAPTER 4: MATHEMATICAL MODELS
The mathematical programming models for minimization of makespan and
minimization of the sum of the completion times criteria for a multi-stage (two or more
machines) SDGS problem are demonstrated below. The models belong to Mixed
Integer Linear Programming (MILP) models.
4.1 Models
The parameters, decision variables, and the mathematical models are as follows:
Parameters:
a:
Number of groups
Numberofjobsingroupp
bmax:
Number of machines
The maximum number of jobs in groups, max{b}
p = 1,2,.. .,a
N:
Number of jobs in all groups
N =
tpjk.
Run time ofjobj in group p on machine k
m:
p1,2,...,a
fFor Real Jobs; Run time of job
tpjk
p1k.
.
For Dummy jobs,
j in group p on machine k
-M
j = 1,2,.. .,bmax
k=1,2,...,m
p=
1,2,. . .
j
1,2,.. .,bmax
k = 1,2,. . . ,m
The setup time for group 1 on machine k if group p is the
preceding group
p,l = 1,2,. ,a
k
1,2,. . . ,m
The summation of run times in group p on machine k
Tk
=
.
.
1pjk
Decision Variables:
i= 1,2,...,a
The completion time ofjobj in
th
slot on machine k
Ii; Ifgrouppisassignedtosloti
WIP
tO; Otherwise
The completion time of
I=
0,1,2,...,a
p 0,1,2,.. .,a
tO; Otherwise
Ii; Ifjobqisprocessedafterjobjinsloti
Yijq
j = 1,2,.. .,bmax
k=1,2,...,m
th
slot on machine k
j
i=
1,2,...,a
j,q = 1,2,..
q
I
= 0,1,2,. . . ,a
14
The setup time for a group assigned to slot i on machine
e'k.
i= 1,2,...,a
k
i=0,1,2,...,a-1
1; If group p is assigned to slot i and group 1 is
AS ip(i + 1)1 =
assigned to slot
i
p0,1,2,...,a-1
+1
1=
pl
0; Otherwise
1,2,..,a
Model
Minimizez' =
Minimize z2 =
(1.1)
(makespan objective function)
Cak
a
j1
bm
(sum of the completion times objective
j1
function)
'1 2
Subject to:
=
1
Wj =1
p=1,2,...a
(2)
i1,2,...,a
(3)
p=1
p=O
=1
lI
ASip(ll)l
i0,1,2,...,a-1
=
i0,1,2,..,a-1
>AS(i_1)pilSp1k
p,l= 1,2,...,a
k=1,2,...m
i=1,2,...,a
p=O 1=1
= C(_I)l + Set1i +
T1
a
Xyk
(4)
1)
Wip
ASl(l)l
Setk
(p
C(1_1)k+ Set
ik+ >Wiptpjk
p=1
Xk XJ'k + MYW
>Wiptjk
Wipt'.k
i =
1,2,.. .,a
j
(7)
(8)
(9)
1,2,...,bmax
i =
(10)
1,2,.. .,a
k = 1,2,3...m
Wj, AS1(j+l)l = 0,1
(11)
j1,2,...,bmax
k2,3...m
i=1,2,...,a
0
(6)
k'1,2,3...m
i r1,2,...,a
!;wip tpjk
Cjk = max {Xk}
Xjk, CIk ,Setlk
0,1,2,3,...,a
(5)
j,j'=1,2,...bmaxj<j'
XjkXjk + M(l
Xk Xu(k_1)
i
pl
pl
y=0,1
k2,3...m
(12)
(13)
(/<j')
The mathematical model for each of the two objective functions is a Mixed Integer
Linear Programming (MILP) model. It is assumed that there exist slots for groups and
each group should be assigned to one of them. In real world problems, groups have
different number of jobs. Because each group can be assigned to any slot, to simplify
creating the mathematical model, it is assumed that every group has the same number
15
of jobs, comprised of real and dummy jobs. This number is equal to
bmax
which is also
the maximum number of real jobs in a group. If a group has fewer real jobs than
the difference, i.e.,
bm
bmax,
number of real jobs, is assumed to be occupied by dummy
jobs. The objective function can either be minimize the makespan of jobs on the last
machine (1.1) or minimize the sum of the completion times of processing all jobs on
the last machine (1.2).
Based on the model, there are 'a' slots and each group should be assigned to one of
them. It is clear that each slot should contain just one group and every group should be
assigned to only one slot. Constraints (2) and (3) support this fact.
The set-up time of a group on a machine is dependent on that group and the group
processed immediately preceding it. Constraint (4) is included in the model to support
this fact. If group p is assigned to slot i and group 1 is assigned to slot i+1, then
AS(+J)l
must be equal to one. Likewise, if group p is not assigned to slot i or group 1 is not
assigned to slot i + 1, then
AS1(l+J)l
must be equal to zero. Constraints (5) and (6) ensure
that each is true. Constraint (7) calculates the required set-up time of groups on
machines. The required set-up time for a group on a machine is calculated based on the
assigned group to the slot and the group assigned to the preceding slot.
The completion time of the group assigned to a slot on the first machine is calculated in
constraint (8). The completion time of a group assigned to a slot is equal to the
summation of the completion time of the group assigned to the preceding slot, the
required set-up time for the group of this slot, and the summation of run time of all jobs
in the group.
Constraint (9) is added to the model to find the completion time of jobs on machines.
The completion time of a job that belongs to a group is greater than the summation of
the completion time of the group processed in the previous slot, the set-up time for the
group, and the run time of the job.
16
Constraints (10) and (11) are a kind of either/or constraints. They are added to the
model to find the sequence of processing jobs that belong to a group. Ifjobj in a group
is processed after jobj' of the same group, then the difference between the completion
time ofjobj and jobj' on all machines should be greater than or equal to the run time of
jobj.
A machine can start processing a job only if it is finished on the previous machine. It
means that the completion time of a job on a machine should be greater than or equal to
the summation of the completion time of the job on the preceding machine plus the run
time of the job on that machine. Constraint (12) is added to the model to support this
fact. It is clear that the completion time of a group on a machine is equal to the
completion time of the last job of the group which is processed by the machine.
Constraint (13) is added to the model for this reason. These models can be used as a
base to estimate the quality of heuristic algorithms.
4.2 Complexity of Problems
Gupta and Darrow (1986) proved that the two machine sequence dependent job
scheduling (SDJS) problem is a NP-hard problem. Garey et al. (1976) also proved that:
the flowshop job scheduling problem by considering minimization of makespan
criterion for more than two machines (m? 3) is an NP-hard problem.
the flow-shop job scheduling problem by considering minimization of the sum of
the completion times criterion with more than one machine (m? 2) belongs to NPhard problems as well.
Based on these insights, it is easy to see that the proposed problems in this research are
easily reducible to the ones already proven NP-hard. Thus, the fact that the proposed
problems are NP-hard, follows immediately.
17
4.3 Example
An example is shown to demonstrate the problem. Suppose three groups including 3, 2,
and 3 jobs in each are assigned to a cell with two machines to process. The required run
time of each job on each machine are given in Table 4.1. For instance the first job of
the first group (J11) has a run time of 3 on M1 and a run time of 4 on M2.
Table 4.lThe run time ofjobs in groups
G1
G3
G2
M1
M2
M1
M2
J11
3
4
4
3
J12
2
5
3
1
J13
2
1
J22
J31
J33
M1
M2
5
2
3
5
4
2
The set-up time of each group on each machine based on the preceding processed group is
shown in Table 4.2. In this table R stands for the reference group. As explained before, the
reference group is a group which was processed as the last group on the machines in the
previous planning horizon.
Table 4.2 The set up times for groups
M1
_
G1
G22
G3
M2
G2
G3
4
2
---
3
1
3
2
5
G1
__3
G32
R
4
3
G2
G3
5
2
---
3
4
1
1
A possible schedule of processing groups as well as jobs is processing them according
to their rank order:
G1 (J11- J12- .113) - G2 (.121- .122) - G3 (J31- J32- J33).
The Gantt-chart of
this schedule is demonstrated in Figure 4.1. In this Gantt chart the set-up time of each
group is shown by S,,k.
M1
M2
S011
'i
S012
.112
13
'ii
S121
J
Ji2
.113
J22
S231
S121
J21
J31
'22
J32
J33
S232
I
Figure 4.1 The Gantt chart of processing groups as well as jobs in rank order
I
Based on this Gantt-chart, the completion time of each job on the last machine is as
follows:
J11: 9
J21: 23
J31: 29
J12:
J22:
J32:
14
J13:
15
24
34
J33:
36
The makespan of a schedule as discussed before is the completion time of the last job
on the last machine. In this schedule, this is equal to 36. The sum of the completion
times of a schedule is the summation of completion time of jobs on the last machine.
Thus, the sum of the completion times of this schedule for this problem is equal to 184.
If the problem is solved optimally by the mathematical models, the optimal solution for
minimization of makespan is equal to 34 and for minimization of the sum of the
completion times is equal to 165.
19
CHAPTER 5: HEURISTIC ALGORITHM (TABU SEARCH)
Because the proposed research problems are NP-hard, the mathematical model cannot
be applied to solve industry size problems in a reasonable time. It also requires a
powerful computer and advanced linear programming software which may not be
available in every company.
The requirement of applying the proposed ideas (i.e., finding the best sequence of
processing groups as well as jobs in a group to improve the efficiency) in industry is
finding a technique that is capable of solving large size problems in a reasonable time.
Thus, a heuristic algorithm should be developed for finding a solution close enough to
the optimal solution of mathematical models in a short time.
One category of these heuristic algorithms is diversification/intensification techniques.
The most popular algorithms that belong to this category are tabu search, genetic
algorithm, and simulated annealing. According to previous research (Skorin-Kapov and
Vakharia (1993), Nowicki and Smutnicki (1996), Logendran and Sonthinen (1997)),
Schaller (2000), and Helal and Rabelo (2004) tabu search has shown more promising
performance than the others for scheduling problems.
5.1 Overview of Tabu Search
The tabu search is a heuristic algorithm which is developed independently by Glover
(1986) and Hansen (1986) for solving combinatorial optimization problems. The
principles and mathematical description of this concept can be found in Glover (1989,
1990a, and 1990b), Laguna et al. (1991), Reeves (1993), Widmar and Hertz (1989), and
Taillard (1990). This technique has been used to find a good quality solution for many
scheduling problems.
Finding a solution for the proposed research problems involves two levels. The first
level investigates to find the best sequence of groups. During the first level, a sequence
of groups is chosen. The second level investigates to find the sequence of jobs in each
group based on the chosen group sequence by the first level. If the tabu search heuristic
20
is applied to solve proposed research problems, it should cover both levels. Thus, a
two-level tabu search is developed to solve proposed research problems. In the first
(outside) level, the best sequence of groups is investigated. When a sequence of groups
by the outside level is chosen, the second (inside) level finds the best sequence of jobs
that belong to each group by considering the desired measure of effectiveness. The
solution is comprised of the sequence of groups and the sequence of jobs in each group
that provides the best objective function value based on the chosen criterion.
The tabu search method is used for the outside search to move from a group sequence
to another one. This is done for the inside search by moving from a sequence of jobs in
a group sequence to another sequence of jobs in the same group sequence. The
relationship between the outside and inside search is that once the outside search is
performed to get a new group sequence, the search process is switched to inside search.
The inside search is performed to find the best sequence of jobs in groups by
considering the proposed group sequence by outside search. When the inside search
stopping criteria is satisfied, the best found job sequence is considered. Then the search
returns back to the outside search to find a new group sequence. The outside search
stops when the outside search stopping criteria are satisfied. The best found solution is
reported as the final solution.
5.2 Tabu Search Mechanism
Based on Glover (1989, 1990(b)), a simple tabu search algorithm consists of three main
strategies: Forbidden strategy, Freeing Strategy, and Short term strategy. Pham and
Karaboga (1998) provided a brief explanation about these strategies as follows:
5.2.1 Forbidden Strategy
The forbidden strategy controls the entries to the tabu-list. It is mainly applied to avoid
cycling problems by forbidding certain moves which are called tabu. It prevents the
search return to a previously visited point. Ideally all previous visited points should be
stored in tabu-list, but this needs too much memory and computational effort. Thus, this
21
is done just for a few very last moves by preventing the choice of moves that represent
the reversal of any decision taken during a sequence of the last T iterations. These
solutions are stored in the tabu-list. This leads the search to move progressively away
from all solutions of the previous T iterations. T is called the "tabu-list length" or
"tabu-list size". The probability of cycling depends on the value of T. If T is too small,
the probability of cycling is too high. If it is too large then the search might be driven
away from good solution regions before these regions are completely explored.
During this process, an aspiration criterion is applied to make a move free if it is of
good quality and can prevent cycling. While the aspiration criterion has a role of
guiding the search, tabu restrictions have a role in constraining the search space. A
solution is acceptable if the tabu restrictions are satisfied. However, a tabu solution is
also assumed acceptable if an aspiration criterion applied regardless of the tabu status.
The move attributes are recorded and used in tabu search to impose constraints that
prevent moves from being chosen that would reverse the changes represented by these
attributes.
5.2.2 Freeing Strategy
The freeing strategy is used to decide what exits the tabu-list. The strategy deletes the
tabu restrictions of the solutions so that they can be reconsidered in further steps of the
search. The attributes of the tabu solution remain on the tabu-list for a duration of T
iterations. A solution is considered admissible if its attributes are not tabu or if it passes
the aspiration criterion test.
5.2.3 Short-Term and Long-Term Strategies
The above are implemented using short-term and long-term memory functions. The
combination of these two memory functions allows for intensifying and diversifying
the search. Intensification means more thoroughly searching neighborhoods that are
historically found good. On the other hand, diversification occurs when the search is
22
continued in the areas which the search is never performed or performed less than the
other areas.
Consider a mountain chain with several peaks. Suppose a mountain climber is asked to
find the highest pick of this chain by climbing up the picks and measure their height
and find the best. He can move during days and stay nights in a rest area. There are
stations in the path that the climber can stay nights in these stages. The climber starts
his exploration from one of these stations. Each station has some neighbor stations that
the climber can go there in one day. In each station, the information of all neighbor
stations such as the height of the station and the direction of them is provided. Every
night based on these information the climber decides his next day move. He is told he
can stop his search, if he can find a predefined number of picks or if he cannot find a
neighbor station with higher height of his current rest area for a few days. He is also
told he can visit each station at most once in his way. Thus the climber starts with a rest
area and performs his measurements to satisfy his sponsors.
The act of tabu search to find the optimal or near optimal solution of a problem is the
same as the mountain climber's job. Tabu search performs the search like hill climbing.
It starts with a feasible solution (point) and moves to the best (highest) neighbor. It
finds the nearest and the highest top and then comes down to find another top. It stops
if it can find a few tops or if it cannot find a better solution in a few iterations.
The hill climbing like tabu search algorithm progresses the search at each step to a
better (higher evaluation) move. When a peak is found, it may be the local optimum
and not the global one. Tabu search has the capacity of not getting caught in the trap of
local optima by moving the search to the new regions until a (near) global optimum is
reached. The search stops if one of the stopping criteria such as the number of iterations
without improvement or the number of local optimal points found is satisfied. During
this process the search may find the optimal solution as one of the visited peaks, but the
search cannot identify if a peak is global optimal.
23
The first step for tabu search is having an initial solution. The initial solution can be
chosen arbitrarily. It can be a feasible or even an infeasible point. It can be generated
randomly or by a systematic procedure. Usually an initial solution with better quality
can increase the efficiency of the search and the quality of the results.
When an initial solution is chosen, its neighborhood solutions can be explored by
perturbing it. The value of each of these neighborhood solutions is determined by the
objective functions which in this research problem are minimization of the sum of the
completion times or minimization of makespan. These neighborhoods have to be
compared with Tabu_List filter whose goal is to prevent the cycle trap of local optima.
This filter is implemented through comparison of neighborhood solutions against a set
of restricted moves listed in tabu-list (TL). This list is constructed based on the recent
change in previous best solutions. The tabu-list records these changes or moves in the
order they are applied. The size of the tabu-list is determined through experimentation.
After a set of neighborhood solutions is generated, the best local move among them is
compared against the tabu-list. If the move is restricted, it is normally ignored and the
second best move is considered. There are cases, however, that a restricted move may
have a better value than the best global value found so far, the aspiration level. In this
case, the tabu restriction is ignored. The best move after filtering against tabu-list and
aspiration criterion is compared with the current members of candidate list. If the
chosen neighborhood does not belong to the current candidate list, it is selected for next
perturbation and generation of new neighborhood.
Otherwise, the next best
neighborhood is chosen. This move is recorded into the tabu-list (TL). This process is
repeated until the search is terminated by satisfying one of the stopping criteria.
Short term memory is the core of tabu search process. Long-term memory components
can enhance the quality of the solution of the short term memory. Long term memory
search can focus further on searching the areas that were historically promising
(intensification); or perform the search in neighborhoods that were rarely visited before
(diversification). The information on all the previous moves in short term memory is
24
considered for this investigation. After one complete set of search is performed, with
the aid of long term memory, a new complete search is restarted. The number of
restarts is arbitrary and depends on the required precision of the solution. Applying
more restarts may provide better solution, but it prolongs the time required.
5.3 Initial Solution
The tabu search needs a feasible solution to start. The quality of the results as well as
the efficiency of the search can be significantly improved if a good initial solution
generator is applied. For instance, Logendran and Subur (2004) during solving an
unrelated parallel machine scheduling problem reported that by choosing different
initial solution generators the efficiency and the effectiveness of the search algorithm
can be significantly changed.
In this research two different initial solution generating mechanisms are developed for
each criterion. These mechanisms for each criterion are explained as follows:
5.3.1 Initial Solution Techniques for Minimization of Makespan Criterion
The initial solution generators applied for minimization of makespan criterion are as
follows:
5.3.1.1 Rank Order
The simplest way of defining an initial solution for proposed research problems is
considering the rank order of groups as well as jobs that belong to each group as a
feasible solution. This sequence is applied as the first initial solution mechanism for
minimization of makespan.
25
5.3.1.2 Applying the Result of Schaller et al.'s (2000) Algorithm as an Initial
Solution
Schaller et al. (2000) suggested a few heuristic algorithms to solve SDGS problems by
considering minimization of makespan. Their article is discussed in detailed in
literature review. They suggested two different heuristic algorithms to find the
sequence ofjobs in a group and six different algorithms to find the sequence of groups.
They applied these mechanisms to find the group sequences and job sequences
independently. An experimental design is performed by the authors to find the best
heuristic algorithm. Based on their experiment, a three step algorithm provides a better
solution for the problem than the other algorithms. In this algorithm, at step one an
algorithm based on Campbell et al. (1970) procedure, which is known as CDS
algorithm, is applied to find the sequence of jobs in a group. During the second step, a
modified Nawaz et al. (1983) procedure, which is known as NEH procedure, is applied
to find the sequence of groups. Finally, at step three the neighborhoods of the generated
sequence by the first two steps are investigated and the best of them is chosen as the
final solution. They suggested that this solution can be applied as an initial seed for a
meta heuristic algorithm such as tabu search to improve the quality of solutions. Based
on their suggestion, the result of their algorithm is considered as one of the initial
solutions for minimization of makespan criterion. Thus, the proposed algorithm is
applied to generate an initial solution for the heuristic algorithm as the second initial
solution provider. Because the tabu search algorithm has the ability of finding the
neighborhoods of a sequence, the third step of the proposed algorithm (finding the
neighborhoods of the generated sequence by the first two steps) is ignored to generate
an initial solution. The steps of generating the initial solution based on Schaller et al.
(2000) are as follows:
5.3.1.2.1 Step 1. Applying CDS (Campbell-Dudek-Smith, 1970) Based Procedure
to Find the Best Job Sequence for Groups
A procedure based on Campbell et al. (1970) algorithm is suggested to find the
sequence of jobs of a group. This procedure is applied to find the sequence of
processing jobs for each group independently.
To find the sequence of jobs in a group for a m machine problem, the algorithm
generates rn-i auxiliary two-machine flow shop schedules in the following manner. Let
p1,2,...,a
j = 1,2,.. .,ba
Run time ofjobj in group p on machine k
tJk:
k=1,2,...,m
In the kthl auxiliary problem (k
= 2, 3,
...,
rn)
the run time of each job on each auxiliary
machine can be defined as follows:
Oi =
The processing time for thejtjob on machine 1 (Mi)
tpjk
=
tpjk
The processing time for thej" job on machine 2
(M2)
i=m+1-k
is the summation of the run time of job] on machines 1 through
summation of the run time of job j on the last
Then the Johnson's
(1954)
k
k,
and
is the
machines in the technological order.
two-machine algorithm is applied to find the sequence of
processing of jobs in the group. Based on this procedure, for each auxiliary problem, a
job sequence is generated. The completion time of each sequence is then calculated and
the best of them is chosen as the best job sequence of each group.
5.3.1.2.2 Step 2. Applying NEH Based Procedure to Find the Best Group Sequence
Nawaz et al.
(1983)
proposed a heuristic algorithm for a multi stage, job scheduling
problem to minimize makespan. The proposed algorithm is known as NEll in
scheduling literature. Schaller et al. (2000) applied this algorithm with small
modification to find the sequence of
groups. To
apply NEH algorithm, the following
parameters should be calculated for each group:
Parameter 1: the average set-up time of each
group
In this procedure, the average set-up time for each
as follows:
on each machine:
group
on each machine is calculated
27
-k
Set =
a
Parameter 2: The effective run time of each group on each machine:
This can be calculated as follows:
Ek =
+
tpjk
j=1
The steps of the modified NEll algorithm are as follows:
Step 1: Find the effective run time of each group on all machines.
T =
Epk
Step 2: Arrange groups in descending order of T.
Step 3: Pick the two groups from the first and second position of the list of step 2, and
find the best sequence for those two groups by calculating the makespan for the two
possible sequences. Do not change the relative positions of these two groups with
respect to each other in the remaining steps of the algorithm. Set i = 3.
Step 4: Pick the group in the
jh
position of the list generated in step 2 and find the best
sequence by placing it at all possible I positions in the partial sequence found in the
previous step, without changing the relative positions to each other of the already
assigned groups. The number of enumerations at this step is equal to i.
Step 5: If n = i, stop, otherwise set i = i +1 and go to step 4.
5.3.2 Initial Solution Techniques for Minimization of the Sum of the Completion
Times Criterion
The initial solution generators applied for minimization of the sum of the completion
times criterion are as follows:
5.3.2.1 Rank Order
The first initial solution technique considered for minimization of the sum of the
completion time criterion is the same as the one applied for minimization of makespan.
A1
5.3.2.2 Relaxing the Problem to a Single Machine, SIGS Problem
Ham et al. (1985), proposed a procedure to minimize the sum of the completion times
of single machine sequence independent job scheduling problem. This procedure is
applied to find an initial solution for the proposed heuristic algorithm. The problem is
relaxed to 'm' independent single machine job scheduling problems. Then each
problem is solved independently and the best of them is considered as the initial
solution for the heuristic algorithm.
In this procedure, for each independent problem, the jobs that belong to each group are
ordered based on their run time. In other words, a job with shorter run time is processed
before a job with longer run time. The sequence of groups can be calculated by
applying the following steps:
Step 1: Calculate the required minimum set-up time for each group on each machine
MinSpk=min{Sq,k}
i1,2,...,a
Step 2: Find the order of groups based on the following inequalities:
Mm
Sik + Tlk
bi
Mm
S2k + T2k
<
Mm
b2
Sak + Tak
ba
This procedure is performed for each machine. After finding m different sequence of
jobs and groups, the best of them by considering minimization of the sum of the
completion times is considered as the initial solution for the heuristic algorithm.
5.4 Generation of Neighborhood Solutions
When a feasible solution is considered as a seed, the neighborhoods of the seed should
be generated to explore the search. The process of finding the neighborhoods during
inside and outside search are discussed separately.
29
During the inside search, a neighborhood of a seed can be generated by applying swap
moves, i.e. changing the orders of two sequenced jobs that belong to a group. By
changing the position of the last and the first job of a group, another neighborhood can
be generated. The number of inside neighborhoods of a seed can be calculated as
follows:
If there are g groups and each group has njobs, then the number of neighborhoods is
g
equal to
flj,
if there is at least 3 jobs in every group. If there are two jobs in a group,
i=1
then there is only one neighborhood for that group.
The outside neighborhoods can be derived similar to the inside neighborhoods by
applying swap moves. The number of outside neighborhoods is equal to the number of
groups if there are at least three groups in the cell. For a problem including two groups,
there is only one neighborhood.
5.5 Steps of Tabu Search
The concentration of this research is to develop a heuristic algorithm to solve optimally
or near optimally the SDGS problems by considering minimization of makespan and
minimization of the sum of the completion times. The steps of the proposed heuristic
algorithm (tabu search) are as follows:
5.5.1 Step 1: Initial Solution
As discussed before, it is required that the search starts with an initial solution. The
initial solution can be generated by one of the methods discussed before for both inside
and outside levels. This initial solution is considered as a seed for both outside and
inside search.
30
5.5.2 Step 2: Evaluate the Objective Function Value of the Seed
When the first seed of the search is determined, the value of the sequence is calculated
based on the investigated objective function.
5.5.3 Step 3: Inside Search
When the seed of the outside and inside search is determined, the first step is finding
the best job sequences based on the current group sequence (outside seed). Thus the
inside search is performed to find the best job sequence. The steps of inside search are
as follows:
5.5.3.1 Step 3.1: Find Inside Neighborhood Solutions
The neighborhoods of the initial solution are generated by applying swap moves.
5.5.3.2 Step 3.2: Evaluate the Inside neighborhoods
Evaluating the generated neighborhoods based on the objective function value
(minimization of makespan or minimization of the sum of the completion times). In this
step the neighborhoods are checked against the inside tabu-list and disqualified
neighborhoods are excluded. This filtering for a neighborhood can be ignored if its
value is better (lower) than the inside aspiration level. Aspiration level is the best value
found among all the neighborhoods of the current inside search. The neighborhoods
which exist in the inside candidate list are also excluded. Then among the available
neighborhoods, the best of them is chosen for the next seed and added to the candidate
list. The following parameters are updated:
31
Tabu-list:
When a new seed is chosen, the tabu-list should be updated. Tabu-list keeps track of
recent moves. It prevents the search to return to the search area which is explored
recently. Because the fixed tabu-list is applied in this research, if the list is filled, the
oldest member of the tabu-list is replaced with the new member of the tabu list. The
size of Tabu-list is found based on experiments and it is adjusted based on the size of
the problem.
Candidate List:
All the feasible solutions considered as a seed are saved in inside candidate list. The
first member of this list is the initial solution. At every iteration the chosen
neighborhood for the next seed is added to the candidate list.
Inside Aspiration Level:
The inside search aspiration level is equal to the value of the best solution found during
current inside search. If the value of new member of candidate list is better (lower) than
the current value of aspiration level, then the value of aspiration level is updated.
Index list:
Index list is a subset of candidate list. It includes the local optimal points visited by the
search. If the objective function value of a member of the candidate list is less than the
value of the objective function of immediately before and next members, the point is a
local optimal point. In such cases, the point is added to the index list. The maximum
number of entries to the index list is considered as one of the inside search stopping
criteria. It is clear that at each iteration, the current seed can be added to the index list
by considering the objective function value of the next seed, thus at each iteration, the
previous seed is tested to assess if it qualifies to be added to the index list.
32
Number of Iterations without improvement:
One of the criteria of stopping the search is the maximum number of iterations without
improvement. When a feasible solution is added to the candidate list, if its value is not
less than the value of the previous member of the candidate list, the value of iterations
without improvement is increased by one; otherwise this counter resets to zero. If the
value of this parameter reaches to the value of inside maximum iterations without
improvement, the inside search stops.
Long-term Memory.
Long-term memory is used to diversify or intensify the search. During diversification,
the search explores areas which are not explored or explored less before. On the other
hand, intensification leads the search to explore more around the areas which are
explored more than other areas before. A three dimensional matrix is used to gather the
required information for long term memory inside search. The first dimension points to
each group. The second one is about each available job slot, and the third one is for the
job number. The value of each member of this matrix reveals the number of times that a
job belongs to a group is assigned to a particular job-slot. For instance, if
Jnside_LongTerm[1][2][4] =7, it means that the forth job of the first group is assigned
seven times to job-slot 2. When the short-term memory search is over, the maximum
of this matrix reveals that which job is assigned to which job-slot more than the others.
On the other hand, the minimum of this matrix reveals that which job is assigned to
which job-slot less than the others. Based on this information, the long term search is
performed by fixing the job in the job-slot with the maximum (minimum) number of
frequency and the search is performed again.
5.5.3.3 Step 3.3: Stopping Criteria
The above process is repeated until one of the stopping criteria below is met:
The maximum number of iterations without improvement is reached.
33
The maximum number of entries to the index list is reached.
At every restart of inside search the Tabu-list, the candidate list, the Index-list, and the
number of iterations without improvement are reset to zero.
The two applications of long term memory are as follows:
LTM-Max:
As explained, the frequency three-dimensional matrix indicates how many times a job
that belongs to a group has been assigned to a specific job-slot. Thus, the maximum
number in this matrix indicates the maximum number of times a job is assigned to a
slot. The LTM-Max method takes advantage of this and intensifies the search around
this area. In this section the inside search is performed again by considering a new seed.
The seed is the same as the initial solution of inside search with a few changes. The job
with the maximum assignment to a job slot is assigned to that job slot and remains in
the same place during the search. If the fixed job is assigned to the same job slot which
was assigned in the initial solution, then the next maximum should be chosen.
LTM-Min:
Instead of LTM-Max, this method chooses the minimum value of the frequency matrix.
In other words, this algorithm diversifies the search and performs the search in areas
which were never explored or explored less during the search. Other steps of LTM-Min
are the same as LTM-Max.
When the inside search is completed, the best sequence ofjobs is considered as the best
solution of the current group sequences. Then the search is switched to the outside
search.
34
5.5.4 Step 4: Outside Search
When the best job sequence of the outside seed is found by the inside search, the search
is switched to the outside search. The steps of the outside search are as follows:
5.5.4.1 Step 4.1: Find Outside Neighborhood Solutions
The neighborhoods of the initial solution are generated by applying swap moves.
5.5.4.2 Step 4.2: Evaluate the Objective function Value of Outside Neighborhoods
In this step, for each neighborhood, the inside search is performed to find the best job
sequence. The neighborhoods are checked against the outside tabu-list and disqualified
neighborhoods are excluded. This filtering for a neighborhood can be ignored if its
value is lower than the search aspiration level. Aspiration level is the best found
feasible solution by the search. The neighborhoods which exist in the outside candidate
list are also excluded. Then among the available neighborhoods the best of them is
chosen for the next seed and added to the candidate list. The following parameters are
updated respectively:
Outside Tabu-list:
When a new seed is chosen, the tabu-list should be updated. Tabu-list keeps track of
recent moves. It prevents the search to return to the search area which was explored
recently. Because the fixed tabu-list is applied in this research, if the list is filled, the
oldest member of the tabu-list is replaced with the new member of the tabu list. The
size of the Outside Tabu-list is found based on experiments and it is adjusted based on
the size of the problem.
35
Candidate List:
All of the feasible solutions considered as a seed are saved in the candidate list. The
first member of this list is the initial solution. At each iteration the chosen
neighborhood is added to the candidate list.
Aspiration Level:
The search aspiration level is equal to the value of the best solution found during the
search. If the value of a new member of candidate list is better (lower) than the current
value of aspiration level, then the value of aspiration level is updated.
Index list:
Index list is a subset of candidate list. It includes the local optimal points visited by the
search. If the objective function value of a member of a candidate list is less than the
value of the objective function of immediately before and next members, the point is a
local optimal point. In such cases, the point is added to the outside index list. The
maximum number of entries to the index list is considered as one of the outside search
stopping criteria. It is clear that at each iteration, the current seed can be added to the
index list by considering the objective function value of the next seed. Thus at each
iteration, the previous seed is tested to assess if it qualifies to be added to the outside
index list.
Number of iterations without improvement:
One of the critenons of stopping the search is the maximum number of iterations
without improvement. When a feasible solution is added to the candidate list, if its
value is not less than the value of the previous member of the candidate list, the value
of iterations without improvement is increased by one; otherwise this counter is reset to
36
zero. If the value of this parameter reaches to the value of maximum iterations without
improvement, the search stops.
5.5.4.3 Step 4.3: Stopping Criteria
The above process is repeated until one of the stopping criteria below is met.
The maximum number of iterations without improvement is reached.
The maximum number of entries to the index list is reached.
At every restart of the search the tabulist, the candidate list, the indexlist, and the
number of iterations without improvement are reset to zero.
Long-term Memory:
A two dimensional matrix is used to gather the required information for long term
memory outside search. The first dimension points to each slot. The second one is for
the group number. The value of each member of this matrix reveals the number of
times that a group is assigned to a particular slot. For instance if LongTerm[2][4] 7, it
means that the fourth group is assigned seven times to slot 2. The two applications of
long term memory are as follows:
LTM-Max:
As explained the frequency two-dimensional matrix indicates how many times a group
has been assigned to a specific slot. Thus, the maximum of this matrix indicates the
maximum number of times a group is assigned to a slot. The LTM-Max method takes
advantage of this and intensifies the search around this area. In this section the outside
search is performed again by considering a new seed. The seed is the same as the initial
solution of outside search with a few changes. The group with the maximum
assignment to a slot is assigned to that slot and remains in the same place during the
37
search. If the fixed group is assigned to the same slot which was assigned in the initial
solution, the next maximum should be chosen.
LTM-Min.
Instead of LTM-Max, this method chooses the minimum value of the frequency matrix.
In other words, this algorithm diversifies the search and performs the search in areas
which were never explored or explored less during the search. Other steps of LTM-Min
are the same as LTM-Max.
When the outside search is completed, the solution with the best objective function
value is reported as the result of the search. The steps of performing tabu search for
outside and inside search are depicted in the flow charts below which are shown in
figure 5.1 and figure 5.2.
Start with an outside initial solution
Perform the inside search to find the
best job sequence and get the value of
objective function
Admit the Solution to OCL and OIL
Initialize OTL, 01W, OAL. OLTM
Apply outside swap moves
I
I
Perform inside search to get the
Objective function value for each
neighbour
the move'
Yes
tabu?
Is OAL
satisfied?
No
I
Disregard the
move
Yes
No
Identify the best solution
oes the new Solution
Yes
Use OLTM to
Identify new
LRestarts
No
of restarts
NNreathethedV
Apply the move that corresponds
to the best Solution
Terminate the search
Update OTL, OAL, OCL,OIL,
OlWI. OLTM
Return the best Solution
frorn OIL
I
OTL: Outside Tabu List
OAL: Outside Aspiration Level
OCL: Outside Candidate List
OIL: Outside Index List
OLTM: Outside Long Term memory
OTLS: Outside Tabu List Size
OIWI:
No
stopping criteria met
(OIWI>M01\M) or>_--(ONW> MOILS)
Figure 5.1 Flow chart for outside search
End
Number of Outside Iterations
without Improvement
ONWI: Number of entries to the Outside Index
List
MOIWI: Maximum Number of Outside
Iterations without Improvement
MOILS: Maximum Outside Index List Size
39
Start With an inside initial Solution
Initialize ITL. IIW1, IAL, ILTM
Apply outside swap moves
Yes
Disregard the
move
Identify the best solution
the outside
maximum number
Apply the move that corresponds
to the best solution
Terminate the search
Update ITL, IAL, ICL,IIL, 11W,
ILTM
Retum the best solution
from IlL
No
s stopping criteria met
Yes
ITL: Inside Tabu List
IAL: Inside Aspiration Level
ICL: Inside Candidate List
IlL:
Index List
ILTM: Inside Long Term memory
ITLS: Inside Tabu List Size
flWI: Number of Inside Iterations
without Improvement
INWI: Number of Entries to the Inside Index
List
MIIWI: Maximum Number of Inside Iterations
without Improvement
MIlLS: Maximum Inside Index List Size
End
LS)
Figure 5.2 Flow chart for inside search
5.6 Two-Machine SDGS Problem with Minimization of Makespan Criterion
For the two machine SDGS problems with minimization of makespan criterion,
Logendran et al. (2006) showed that the optimal sequence of jobs in each group
conforms to Johnson's algorithm (1954). Thus, the heuristic search algorithm for these
problems can be relaxed to a one level search in order to find the best sequence of
processing groups. During the search for each group sequence, the sequence of
processing jobs belonging to each group are calculated according to Johnson's (1954)
algorithm.
5.7 Applied Parameters for Proposed Research Problems
As mentioned before, the size of the problems are investigated during this research
include 2 to 16 groups in a cell and 2 to 10 jobs in a group. The empirical formulae or
the value of parameters used for these research problems are presented in tables below.
To generate these formulae or parameter values, several test problems, different from
the ones applied for the main experiments, are generated. Then these test problems are
solved by heuristic algorithms by applying different values for each parameter to find
the best value for each parameter. These formulae are generated based on experiments.
In some cases a formula for a range can be generated and in some of them a value for a
parameter in a range is offered.
Empirical Formulae for Two-Machine Problems
Minimization of Makespan Criterion
5.7.1
by
Considering
These problems, as discussed in Section 5.6, require a one level search. Thus, it is only
necessary to find empirical formulae for outside search parameters. These formulae,
presented in Table 5.1, are constructed based on the number of groups of the problem.
Table 5.1 The outside search parameters for two machine problems with makespan
criterion
Iterations without
Tabu list size
improvement
Number of
Number of
Number of T
Parameter
Parameter
groups (G)
groups
(G)
Formula
groups
(G)
value/formula
value/formula
From To
From To
From To_I
G*l.25
2
3
2
2
2
10
(G14)+1
Index list
I
I
I
4
7
11
6
10
16
G*3
G*10
10
16
G*2
11
15
(G14)+2
16
16
5
Q*5Ø
5.7.2 Empirical Formulae for Three-Machine and Six-Machine Problems by
Considering Minimization of Makespan Criterion
The empirical formulae for these problems are presented in Table 5.2 and Table 5.3.
The formulae for outside search parameters are constructed based on the number of
41
groups, and for the inside search parameters are constructed based on the number of
total jobs in groups. In some cases, rather than offering a formula, a value for the
parameter in a specific range is offered.
Table 5.2 The outside search parameters for three machine and six machine problems
with makespan criterion
Iterations without
.
Tabu list size
improvement
Number of
Number of
Number of
Parameter
Parameter
groups (G)
groups (G) Formula
value/formula groups (G) value/formula
From To
From
From
To
__________
Index list
2
4
3
2
2
5
4
6
G
G*2
7
10
9
16
G10
13
1.
I
2
1
2
3
5
13
6
12
16
(G/2)+1
G
I
16
12
15
16
(G/5)+1
(G14)+1
(G14)
12
G*50
Table 5.3 The inside search parameters for three machine and six machine problems
with makespan criterion
Index list
Number of
jobs(J)
From To
2
31
81
Parameter
value
Iterations without
.
Tabu list size
improvement
Number of
Number of
Parameter
Parameter
jobs(J)
jobs(J)
value
value
From To
From To
I
30
80
2
120
4
3
2
30
40
50
60
80
100
29
39
49
59
79
99
120
1
2
2
65
164
120
1
2
3
4
5
6
7
5.7.3 Empirical Formulae for Two, Three and Six Machine Problems by
Considering Minimization of Sum of the Completion Times Criterion
The empirical formulae for these problems are presented in Table 5.4 and Table 5.5.
The formulae for outside search parameters are constructed based on the number of
groups, and for the inside search parameters are constructed based on the number of
total jobs in groups.
42
Table 5.4 The outside search parameters for two, three, and six machine problems with
minimization of sum of the completion times criterion
Iterations without
Tabu list size
improvement
Number
Number
Parameter
Parameter
of groups Parameter
of groups
value/formula
value
value/formula
(G)
(G)
From To
From To
Index list
Number
of groups
(G)
From [To
2
5
7
8
13
4
6
7
G 10
12
16
G*20
250
2
4
6
7
2
G*2
8
15
67
2
5
3
8
6
8
10
15
9
14
16
3
G12
5
2
6
7
14
16
1
2
3
4
6
11
Table 5.5 The inside search parameters for two, three, and six machine problems with
minimization of sum of the completion times criterion
Index list
Number of
jobs(J)
From To
2
21
31
41
51
91
20
30
40
50
90
120
Parameter
value
Iterations without
Tabu list size
improvement
Number of
Number of T
Parameter
Parameter
jobs(J)
jobs(J)
value
value
From 1=12=1
From To
2
3
5
6
7
I
2
31
41
51
8
30
40
50
120
2
5
2
21
31
8
40
3
51
61
76
86
96
101
20
30
39
50
60
75
85
95
100
120
1
2
3
4
5
6
8
9
10
13
5.8 Application of Tabu Search to an Example Problem by Considering
Minimization of Makespan Criterion
The application of the steps of tabu search, which are listed in Section 5.5, is
demonstrated to solve the example presented in chapter four. In this example,
minimization of makespan criterion is considered. The tabu search parameters applied
for this example are as follows:
43
Inside Tabu-list size: 1
Inside maximum number of entries to the Index-list: 3
Inside maximum iterations without improvement: 2
Outside Tabu-list size: 1
Outside maximum number of entries to the Index-list: 2
Outside maximum iterations without improvement: 1
5.8.1 Step 1: Initial Solution
The search starts with an initial solution. The initial solution for this problem, based on
the rank order initial solution generator, is as follows:
G1 (Jn-J12-J13) G2(J21-J22) G3(J31-J32-J33)
5.8.2 Step 2: Evaluate the Objective Function Value of the Initial Solution
The objective function value of the initial solution (makespan) is equal to 38 (Figure
5.3). This solution is considered as the current inside aspiration level as well.
S011
M2
J1
S012
.112
J13
fit
S121
J21
J22
S231
ft2
ft3
S122
J21
J31
22
232
J32
J33
'32
'33
Figure 5.3 The Gantt chart of the initial solution
5.8.3 Step 3: Perform Inside Search
The inside search is performed to find the best job sequences for the outside initial seed
(G1-
G2- G3).
5.8.3.1 Step 3.1: Evaluate Inside Neighborhoods
The neighborhoods of the inside initial seed, and their objective function value, are
found. Each seed of this example has seven neighborhoods, shown with their objective
function values in Table 5.6. The difference between each neighborhood and the initial
seed is shown in bold.
Table 5.6 The neighborhoods of the inside initial solution
Group seed
Neighborhood
G1
G2
Objective
function
value
G3
0(Initial)
Jll
fT2
JI3
.121
.122
J31
J32
.133
38
J31
J32
J31
J33
J33
J33
J33
38
38
38
38
37
40
37
1
J12
J11
J13
J21
2
Jii
J13
J12
J21
3
J13
.112
Jll
J2l
J22
J22
J22
4
J11
JI2
Jl2
Ji2
Jl2
J13
J22
J21
J31
J32
J32
J32
J21
J22
J22
J22
J32
J31
.133
J31
J33
J32
J33
J32
J31
5*
6
7
J11
Jii
J11
Jl3
Jl3
Jl3
J21
J21
J31
Based on the objective function value of the neighborhoods, the neighborhoods 5 and 7
have the lowest values and can be considered as the next seed. In this example, as
neighborhood 5 is the first best entry into the list, it is chosen as the next seed. By
choosing the next seed, the following parameters are updated:
Tabu-list:
The new seed is generated by changing the process sequence of the first two jobs of the
third group. These moves are saved in the tabu list. In other words, in the next iteration
(the tabu list size is equal to one) J32 cannot be processed as the second job of the third
group, and J31 cannot be processed as the first job of the third group.
45
Candidate-list:
The new seed is added to the Candidate-list as the second member of this list. The first
member of the Candidate-list is always the initial solution.
Aspiration level:
Because the value of the objective function of the new seed is better than the aspiration
level (37 compared to 38), the inside aspiration level is also updated.
Index-list:
The initial solution is the first member of the index list. Because the value of the new
seed is lower than the previous seed, it has the potential to be added to the index list.
Thus, if the next seed has an objective function value more than 37, this seed (the
second one) is added to the index list as the second local optimal solution. Thus, in this
stage, the Index list just includes the initial solution.
Long-term memory frequency matrix
The frequency matrix which is used for long-term memory is updated.
5.8.3.2 Step 3.2: Evaluate the Stopping Criteria for Inside Search
At the end of each iteration, the stopping criteria are evaluated. If one of the criteria is
satisfied, the inside search is stopped, otherwise the next iteration is started. In this
stage of this example, the number of iterations without improvement is equal to zero
and the number of entries to the index list is equal to one. Because none of the stopping
criteria are satisfied, the next iteration is started with the new seed similar to the
previous iteration.
5.8.3.3 Repeat the Cycle
Inside iterations are performed until one of the inside search stopping criteria is
satisfied. The inside search will be terminated by one of the stopping criteria. In this
example, the inside search for the current inside seed is terminated after two iterations.
The best found job sequence is the one with makespan equal to 37 (the second member
of Candidate-list). After terminating the inside search, the search is switched to the
outside search.
5.8.4 Step 4: Perform Outside Search
The outside search is performed to find the best group sequence. The steps of outsidesearch are as follows:
5.8.4.1 Step 4.1: Evaluate Outside Neighborhoods
The seed of the outside search is used to find the outside neighborhoods and their
objective function values. The neighborhoods of the outside initial solution are as
follows. The difference between each neighborhood and the seed is shown in bold.
G2- G1- G3
G1- G3- G2
G3- G2- G1
The inside search is performed for each neighborhood to get the best job sequence and
find the objective function value of the neighborhood. Table 5.7 shows the objective
function value of these neighborhoods after performing the inside search.
47
Table 5.7 The neighborhoods of the outside initial solution
Neighborhood
Sequence
G
ft2
0(Jnitial)
.Jli
G2
1
J2l
2
JI3
L
Value
G2
G3
J22
J21
.132
G3
G1
I
I
.1221.112
Jll
'l3
.112
I
J3l
J32
G2
3
.131
J32
33
33
G
I
Jl3
38
'32
.131
G1
Jit
37
J33
.131
'21
"33
[
G1
1'12
'p11
r
22
"2l
J22
34
40
'13
Based on the objective function values of the neighborhoods, the second neighborhood
has the best objective function value, and is considered as the next seed. By choosing
the next seed, the following parameters are updated:
Tabu-list:
As replacement of the last two groups generates the new seed, these moves are saved in
the tabu list. In other words, in the next iteration (the tabu list size is equal to one) G2
cannot be processed as the second group and G3 cannot be processed as the third group.
Candidate-list:
The new seed is added to the outside Candidate-list as the second member of this list.
The first member of the outside Candidate-list is always the outside initial solution.
Aspiration level:
Because the value of the objective function of the new seed is better than the aspiration
level (34 compared to 37), the value of the aspiration level is also updated.
Index-list:
The initial solution is the first member of the index list. Because the value of the new
seed is lower than the previous seed, it has the potential to be added to the index list.
Thus, if the next seed has an objective function value more than 34, this seed (the
second one) is added to the index list as the second local optimal solution. In this stage,
the index list just includes the initial solution.
Long-term memory frequency matrix
The frequency matrix which is used for outside search long-term memory is updated.
5.8.4.2 Step 4.2: Evaluate the Stopping Criteria for Outside Search
At the end of each outside iteration, the stopping criteria for outside search are
evaluated. If one of the criteria is satisfied, the search for the current restart is stopped;
otherwise, the next iteration is begun. In this stage of this example, the number of
iterations without improvement is equal to zero and the number of entries to the index
list is equal to one. None of the stopping criteria are satisfied. Thus, the next iteration is
started with the new seed similar to the previous iteration.
5.8.4.3: Repeat the Cycle
These iterations are performed until one of the outside search stopping criteria is
satisfied. The outside search will be terminated by one of the stopping criteria. In this
example, the best found group sequence is the one with makespan equal to 34 (the
second member of Candidate-list). After terminating the outside search, the best
sequence of groups, as well as jobs, are reported as the best schedule. The Gantt chart
of the result of the tabu search for this problem is shown in Figure 5.4.
S011
1W2
hi
S012
J12
J13
S131
J31
J32
I
Jii
J33
S321
J21
J22
J32
J33
S322
J21
I
Ji2
.113
I
S132
I
.131
.122
Figure 5.4 The Gantt chart of the tabu search sequence
5.9 Application of Tabu Search to an Example Problem by Considering
Minimization of Sum of the Completion Times Criterion
The application of the steps of tabu search, which are listed in Section 5.5, is
demonstrated to solve the example presented in chapter four by considering
minimization of sum of the completion times criterion. The tabu search parameters
applied for this example are as follows:
Inside Tabu-list size: 1
Inside maximum number of entries to the Index-list: 2
Inside maximum iterations without improvement: 1
Outside Tabu-list size: 1
Outside maximum number of entries to the Index-list: 2
Outside maximum iterations without improvement: 2
5.9.1 Step 1: Initial Solution
The search starts with an initial solution. The initial solution for this problem, based on
the rank order initial solution generator, is as follows:
G1 (J11-
J12- J)
G2(J21- J22)
G3 (J31- J32- J33)
5.9.2 Step 2: Evaluate the Objective Function Value of the Initial Solution
The objective function value of the initial solution (sum of the completion times
criterion) is equal to 186 (Figure 5.5). This solution is considered as the current inside
aspiration level as well.
M1
soil
M2
Jl!
J13
.112
S012
J11
S121
J21
J22
J12
.113
S122
J31
S231
21
22
.132
S232
J33
32
Figure 5.5 The Gantt chart of the initial solution
5.9.3 Step 3: Perform Inside Search
The inside search is performed to find the best job sequences for the outside initial seed
(G1- G2- G3).
5.9.3.1 Step 3.1: Evaluate Inside Neighborhoods
The neighborhoods of the inside initial seed, and their objective function value, are
found. Each seed of this example has seven neighborhoods, shown with their objective
function values in Table 5.8. The difference between each neighborhood and the initial
seed is shown in bold.
Table 5.8 The neighborhoods of the inside initial solution
Objective
Function
Value
Group Seed
Neighborhood
O(Initial)
G3
G2
J
.112
...
f1.3....:
J21
J22
1
J12
J11
J13
J21
.122
2
J11
.113
.112
J21
J22
3
J13
J12
Jll
J22
J11
Jl2
Jl2
J11
J13
J21
4
5*
.111
J12
.113
J21
J21
J22
J21
J22
JlI
Jl2
J13
J21
6
7
J13
J3
J32
.133
J31
J31
J31
J31
J32
J32
J32
J32
.J3
J32
J31
J33
J32
J33
J22
J31
J22
J33
J3
J33
J33
J32
J31
186
182
182
179
184
187
186
184
Based on the objective function value of the neighborhoods, the third neighborhood has
the lowest objective function value and can be considered as the next seed. By choosing
the next seed, the following parameters are updated:
51
Tabu-list:
The new seed is generated by changing the sequence of processing the first and the
third jobs of the first group. These moves are saved in the tabu list. In other words, in
the next iteration (the inside tabu list size is equal to one)
J13
cannot be processed as the
third job of the first group, and .111 cannot be processed as the first job of the first group.
Candidate-list:
The new seed is added to the Candidate-list as the second member of this list. The first
member of the Candidate-list is always the initial solution.
Aspiration level:
Because the value of the objective function of the new seed is better than the aspiration
level (179 compared to 186), the inside aspiration level is also updated.
Index-list:
The initial solution is the first member of the index list. Because the value of the new
seed is lower than the previous seed, it has the potential to be added to the index list.
Thus, if the next seed has an objective function value more than 179, this seed (the
second one) is added to the index list as the second local optimal solution. Thus, in this
stage, the Index list just includes the initial solution.
Long-term memory frequency matrix
The frequency matrix which is used for long-term memory is updated.
52
5.9.3.2 Step 3.2: Evaluate the Stopping Criteria for Inside Search
At the end of each inside search iteration, the stopping criteria are evaluated. If one of
the criteria is satisfied, the inside search is stopped; otherwise, the next iteration is
started. In this stage of this example, the number of iterations without improvement is
equal to zero and the number of entries to the index list is equal to one. Because none
of the stopping criteria are satisfied, the next iteration is started with the new seed
similar to the previous iteration.
5.9.3.3 Repeat the Cycle
Inside iterations are performed until one of the inside search stopping criteria is
satisfied. The inside search will be terminated by one of the stopping criteria. In this
example, the inside search for the current inside seed is terminated after six iterations.
The best found job sequence is found at the forth inside iteration with the objective
function value of 174 with the sequence of groups and jobs in each group given as
below:
G1 (J12-J13-J11) - G2(J22-J21)
G3(J33-J32-J31)
After terminating the inside search, the search is switched to the outside search.
5.9.4 Step 4: Perform Outside Search
The outside search is performed to find the best group sequence. The steps of outsidesearch are as follows:
53
5.9.4.1 Step 4.1: Evaluate Outside Neighborhoods
The seed for the outside search is used to find the outside neighborhoods and their
objective function values. The neighborhoods of the outside initial solution are as
follows. The difference between each neighborhood and the seed is shown in bold.
G2- G1- G3
G1- G3- G2
G3- G2- G1
The inside search is performed for each neighborhood to get the best job sequence and
find the objective function value of the neighborhood. Table 5.9 shows the objective
function value of these neighborhoods after performing the inside search.
Table 5.9 The neighborhoods of the outside initial solution
Neighborhood
O(Irntial)
Value
Sequence
G1
JI2
'i3
G2
Jll
J2!
Jl3
Jl2
J33
J32
Jll
J31
J32
J1313
J22
J2l
G3
G1
I
1
J1
J22
181
I
J21
J22
G1
2
I
165
I
Jl2
Jl3
Jll
G3
3
J12
J3l
J32
J33
J3l
G1
G2
J33
J2l
J22
Jl3
174
Jl2
214
Jll
Based on the objective function values of the neighborhoods, the second neighborhood
has the best objective function value, and is considered as the next seed. By choosing
the next seed, the following parameters are updated:
54
Tabu-list:
As swapping of the last two groups generates the new seed, these moves are saved in
the tabu list. In other words, in the next iteration (the tabu list size is equal to one) G2
cannot be processed as the second group and G3 cannot be processed as the third group.
Candidate-list:
The new seed is added to the outside Candidate-list as the second member of this list.
The first member of the outside Candidate-list is always the outside initial solution.
Aspiration level:
Because the value of the objective function of the new seed is better than the aspiration
level (165 compared to 179), the value of the aspiration level is also updated.
Index-list:
The initial solution is the first member of the index list. Because the value of the new
seed is lower than the previous seed, it has the potential to be added to the index list.
Thus, if the next seed has an objective function value more than 165, this seed (the
second one) is added to the index list as the second local optimal solution. At this stage,
the index list just includes the initial solution.
Long-term memory frequency matrix
The frequency matrix which is used for outside search long-term memory is updated.
55
5.9.4.2 Step 4.2: Evaluate the Stopping Criteria for Outside Search
At the end of each outside iteration, the stopping criteria for outside search are
evaluated. If one of the criteria is satisfied, the search for the current restart is stopped;
otherwise, the next iteration is begun. At this stage of this example, the number of
iterations without improvement is equal to zero and the number of entries to the index
list is equal to one. None of the stopping criteria are satisfied. Thus, the next iteration is
started with the new seed similar to the previous iteration.
5.9.4.3 Repeat the Cycle
These iterations are performed until one of the outside search stopping criteria is
satisfied. The outside search will be terminated by one of the stopping criteria. In this
example, the best found group sequence is the one with sum of the completion times
equal to 165 (the second member of Candidate-list). After terminating the outside
search, the best sequence of groups, as well as jobs, are reported as the best schedule.
The Gantt chart for the result of the tabu search for this problem is shown in Figure 5.6.
1W1
M2
S011
J12
so12
I
.111
'13
J12
5131
J13
.111
I
S132
J32
J3l
S321
J22
J21
J33
J32
J31
S322
J22
Figure 5.6 The Gantt chart of the tabu search sequence
J21
56
CHAPTER 6: LOWER BOUNDS
When a heuristic algorithm is used to solve a problem, the quality of solution of the
heuristic algorithm should be evaluated. The precision of the algorithm can be
evaluated by comparing the result of the algorithm with that of the optimal solution.
However, it is impossible to find the optimal solution of many problems in an efficient
way. In many cases, it is almost impossible to find the optimal solution of a problem in
a reasonable time. For instance, solving some of the proposed research problems will
take days to be solved optimally. Because finding the optimal solution becomes
increasingly difficult as the problem size increases, the lower bounding techniques are
the most useful ones to evaluate the quality of the heuristic algorithm's solutions. A
lower bound for a problem is an algorithm which is used to find a solution, at most
equal to the optimal solution or close enough to the optimal solution of the problem in a
time interval much faster than solving the mathematical model. During the
development of lower bound, both precision and the time efficiency of the algorithm
should be considered. It is clear that the optimal solution of a problem is between the
result of heuristic algorithm and the value of its lower bound. In this research problem a
specific lower bounding technique is developed for minimization of makespan. Another
lower bounding technique based on Branch-and-Price is also developed which can be
applied for both minimization of makespan and minimization of the sum of the
completion times criteria.
6.1 Lower Bounding Technique for Minimization of Makespan
For minimizing the makespan criterion, a lower bounding technique based on relaxing
the problem from SDGS to SDJS (Logendran et al. 2006) is developed. In this
technique, every group is considered as an independent job. The run time of these
independent jobs (groups) on each machine is considered equal to the summation of run
time of its jobs on each machine. Then the problem is treated as a SDJS problem. The
solution of this problem is a lower bound for the original problem because the possible
idle times between processing jobs that belong to a group on all machines are ignored.
57
Solving the mathematical model of this relaxed problem is much easier than solving the
original problem, but the relaxed one still belongs to NP-hard problems (SDJS
problems are NP-hard). The parameters, decision variables, and the mathematical
model of this lower bounding model are as follows:
Parameters:
a:
Number of groups
Number ofjobs in group p
Number of machines
b:
m:
The setup time for group ion machine k if groupp is
the preceding group (p l)
The summation of run times ofjobs of group p on
machine k
The minimum run time ofjobs in group p
on machine k
sp1k.
Tpk.
Gpk.
p=
1,2,. . ,a
.
p,l 0,1,2,.. .,a
k = 1,2,. . . ,m
Tk
tpjk
1,2,.. .,a
k = 1,2,. .
p
.
Decision Variables:
WIP
Ii; Ifgrouppisassignedtosloti
i=
10; Otherwise
p0,1,2,...,a
C1k:
The completion time of jth slot on machine k
se ik.
The setup time for a group assigned to slot i on
machine k
1;
=
If group p is assigned to slot i and group l is
assigned to slot i + 1
0; Otherwise
=
0,1,2,...,a
.
.
i = 1,2,.. .,a
k = 1,2,. . . ,m
= 0,1,2,. . ,a-1
p = 0,1,2,. . . ,a
.
li,2,..,a p l
Model
(1)
MinimizeZ = Cak
Subject to:
1
p = 1,2,.. .a
(2)
i1,2,...,a
(3)
58
a
a
p=O
1=1
ASipi+ii = 1
ASl(+I)l
ASI(l+l)l
Wip
i=0,1,2,..,a-1
pl
W+1i
a
Setk
a
=
p=o1=1
AS_ii Slk
a
Ci = C(_I)l+ Set jj+
i=1,2,...,a
W1Ti
i
p=1
=
W,,(Tp,) +
C(j-1)(k-1) + Setl(k-1) +
p,l=0,1,2,...,a
a
Ck , Setk
0
(9)
W(Gp(k-1))
1,2,3,...,a
i = 1,2,3,.. .,a
k2,3...,m
Wjp(GPk)
,
(7)
(8)
1,2,3,.. .,a
i
Ci(k-1) +
(5)
(6)
p=l
p=1
Ck
pl
k1,2,...m
a
a
Cjk
(4)
i=0,1,2,...,a-1 (p 1)
AS,(f+l)/ = 0,1
(10)
(I <j')
The mathematical model is a Mixed Integer Linear Programming (MILP) model. It is
assumed that there exist slots for groups and each group should be assigned to one of
them. In real world problems, groups will have different number of jobs. Because each
group can be assigned to any slot, to simplify creating the mathematical model, it is
assumed that every group has the same number of jobs, comprised of real and dummy
jobs. This number is equal to
bm
which is also the maximum number of real jobs in a
group. If a group has fewer real jobs than bm, the difference, i.e., bmax number of real
jobs, is assumed to be occupied by dummy jobs. The objective function is the
minimization of makespan of jobs on the last machine, given by (1).
Based on the model, there are 'a' slots and each group should be assigned to one of
them. It is clear that each slot should contain just one group and every group should be
assigned to only one slot. Constraints (2) and (3) support this fact.
The set-up time of a group on a machine is dependent on that group and the group
processed immediately preceding it. Constraint (4) is included in the model to support
this fact. If group p is assigned to slot i and group 1 is assigned to slot i+1, then ASi,,(+l)l
must be equal to one. Likewise, if group p is not assigned to slot i or group 1 is not
59
assigned to slot 1+1, then AS(+J)l must be equal to zero. Constraints
(5)
and (6) ensure
that each is true. Constraint (7) calculates the required set-up time of groups on
machines. The required set-up time for a group on a machine is calculated based on the
assigned group to the slot and the group assigned to the proceeding slot.
The completion time of the group assigned to a slot on the first machine is calculated in
constraint (8). The completion time of a group assigned to a slot is equal to the
summation of the completion time of the group assigned to the preceding slot, the
required set-up time for the group of this slot, and the summation of run time of all jobs
in the group.
The time that the process of a group can be started on a machine, other than the first
machine, depends on both availability of the machine and the group. Constraint
(9) is
added to the model to check the soonest possible time that a group can be processed on
a machine by considering the availability of the group. A group is available to be
processed on a machine if at least one of its jobs is processed on the immediately
preceding machine. It is clear that the soonest time to start processing a group on a
machine is the time that the job with the minimum run time of the group is processed
on the preceding machine. This constraint is added to the model to support this fact.
The completion time of a group assigned to a slot on a machine except the first machine
should be greater than the completion time of the group on the previous machine, plus
the required set-up time for the group on the previous machine, the minimum run time
of jobs of the group on the previous machine, and the total run time of all jobs on the
current machine.
It is also clear that in a real schedule of processing groups, the completion time of a
group on a machine should be greater than the completion time of the group on the
proceeding machine plus the minimum run time of jobs of the group on the current
machine. Constraint (10) is added to the model to support this fact.
This mathematical model can be applied to find a lower bound for all size of problems
by considering minimization of makespan.
6.1.1 Application of the Lower Bounding Technique to a Problem Instance
The model is applied to find a lower bound for the proposed problem in chapter four by
considering the minimization of makespan criterion. Based on the optimal solution of
the model the lower bound of this problem is equal to 34 which is equal to the result of
the optimal solution and the heuristic algorithm.
6.2 Lower Bounding Technique for Minimization of Sum of the Completion Times
The proposed lower bounding technique for minimization of makespan cannot be
applied to find the lower bound for minimization of sum of the completion times
criterion. The research showed that the Branch-and-Price technique is a suitable
approach to find a good quality lower bound for this criterion. Thus, a lower bounding
approach based on branch-and-price technique is developed to find lower bounds with
good quality.
In Branch-and-Price (B&P) algorithm (Barnhart et al. (1998), Wilhelm (2001), and
Wilhelm et al. (2003)), the problem is reformulated with a huge number of variables.
Then the problem is decomposed into a master problem and one or more sub-problems.
The sets of columns (variables) are left out of the LP relaxation of master problem
because there are too many columns to handle efficiently and most of them will have
their associated variable equal to zero in an optimal solution anyway. To check the
optimality of an LP solution of the master problem, one or more sub-problem, called
the pricing problem(s), which are a separation problem for the dual LP, are solved to
try to identify columns to enter the basis. If such columns are found, the LP is reoptimized. Branching occurs when no columns price out to enter the basis and the LP
solution does not satisfy the integrality condition. B&P allows column generation to be
61
applied during the branch-and-bound tree. Some of the advantages of using B&P to
solve the mathematical programming problems with a huge number of variables are:
The LP relaxation of such formulation provides tighter lower bound than LP
relaxation of MILPs.
A compact formulation of a MILP may have a symmetric structure that causes
branch-and-bound to perform poorly because the problem barely changes after
branching. A reformulation with a huge number of variables can eliminate this
symmetry (Barnhart, 1998). This issue is seriously observed in the proposed
research problems.
The goal is to develop a mathematical model whose LP relaxation provides a good
lower bound to the optimal solution of the original problem. Thus, the problem is
reformulated with a huge number of variables. These variables are the set of feasible
solutions. Then the mathematical model is decomposed into a master problem and one
or more sub-problems (SPs). The number of SPs is equal to the number of machines
considered in the original problem.
At the beginning, the LP relaxation of the master problem is solved by considering a
few of the feasible solutions. Because the master problem at this stage does not include
all possible solutions, it is called the Restricted Master Problem (RMP). To check the
optimality of the LP solution to the RMP, the sub-problems (SPs), called pricing
problem(s), are solved to find colunms to enter the basis. If such columns exist, the LP
is re-optimized. If there is no colunm to enter and the LP solution does not satisfy the
integrality conditions, then branching is applied for the optimal solution of the LP
problem.
In the mathematical model of the problem, which was demonstrated in chapter four,
constraints (2) through (7) deal with finding the set-up times for groups. Constraints (9)
through (11) deal with finding the completion time of jobs on each machine. These
constraints can be applied for any machine separately. Constraint (12) is the only
62
constraint that deals with the completion time of jobs on more than one machine. Thus,
this constraint is considered as the linking (complicating) constraint in the model. The
RMP includes constraint (12), which is a relational constraint between the completion
times of jobs on machines and a convexity constraint for each sub-problem. The
parameters, decision variables, and mathematical model of a RMP for a m machine
problem for minimization of sum of the completion times are formulated as follows:
Parameters:
The number of colunms that exist in RMP related to the k = 1,2,. . .
hk
kth machine
The completion time 0ffth job 0fth slot on machine kin
h,' existing solution in RMP related to machine k
Xiv:
i = 1,2,..
j = 1,2,. . .
k=1,2,...,m
j
0,1,2,.. . ,a
p = 0,1,2,. . .
Ii; If group p is assigned to slot un ht existing solution in
Wh
k
=
RMP related to machine k
ipk
k= 1,2,...,m
Otherwise
p=1,2,...,a
Run time ofjobj in groupp on machine k
tPJk:
j
= 1,2,..
Decision Variable:
The decision variable of the
machine k in RMP
h
existing solution of
k = 1,2,. . . ,m
Model:
ab
h
MinZ =2X'
h1
(11)
i1 j=1
Subject to:
i = 1,2,.. .,a
h=1
2(Xk
WktJk)
pSI
h11
2'Xkl)
1= 1,2,...,bmax
hk
k
hsl
(12)
k = 2,3,..
1,2,3,...,m
(13)
63
= 0,1
In this model, ,% are decision variables of the RMP. If the djk's are the dual variables
of constraint (12) and a1, denote the dual variables of constraint (13), the dual problem
of the RMP is as follows:
m
MaxZ'=
abmax
m
di(0)+ M=1aM
k=2 1=1 j=1
(14)
ST:
-
a bmax
i=1 j=1
d(Xi) + ai
0
a
a bmax
(15)
a bmax
dl1i(X W1tJk)+ak( i=1 j=1 d(k+1)XYk)O
i=1 j=1
k=2,3,m-1
(16)
p=l
a
abmax
i=1 j=1
d1jk
Wiptpjm)+am(
dm(Xm
p=l
0
a
Unrestricted
abmax
Xym)O
(17)
i=1 j=1
i = 1,2,.. .,a
= 1,2,..
j
k=2,3,...,m
A feasible solution of RMP is optimal, if its dual can satisfy the constraints of the dual
problem. Thus, to check the optimality of RMP, the constraints of the dual problem are
checked. As it is shown, each constraint of the dual problem deals with the completion
time of jobs on a specific machine. For instance, the first constraint which is shown in
(15), deals with the completion time of jobs on the first machine. Thus, to check the
optimality, sub-problems, in which their objective functions are the left hand side of the
dual constraints, are solved. If the value of the objective function is greater than zero,
the column is added to RMP.
Decomposition leads to an independent sub-problem for each machine. When the LP
relaxation of the master problem is solved, it generates new dual variables for sub
problems. The sub-problems are:
MaxW1-
SP1
a bmax
d2(Xi)+ai
(18)
1=1 j=1
Subject to:
Constraint (2) through (11) of the original problem in chapter 4
SP2
a bmax
through
MaxWk
SPm..i
''
'
a bmjix
a
W,tfk) + ak
dk(Xk
''
dj(k+1)Xyjç)
(
1
(19)
k=2,3,...,m-1
Subject to
Constraints (2) through (7), (9) through (11), and (13) of the original
problem in chapter 4
a bmax
SPm
Max Wm
i=1 j=1
a bmax
a
djm(Xum
Wip tpjm) + am
(20)
Xym)
(
j=1 j=1
p=l
Subject to
Constraints (2) through (7), (9) through (11), and (13) of the original
problem in chapter 4
The decomposed model has problems below:
.
If this decomposition model is applied to solve problems, the SPs are still NP-hard
and cannot be solved in an efficient way by increasing the size of the problem. The
experiment shows that by increasing the number of jobs in a group, the subproblems become more complicated.
The coefficient of Xyk'S in the objective function of SPs, except SP1, can be positive.
In this case, because of the maximization of the objective function, as well as
deficiency of any constraint in SPs to limit the value of
X1k'S,
it is possible that a
sub-problem generates a solution in which there are unnecessary idle times among
the process of jobs. In some cases, it may be possible that one of the SPs be
unbounded as well. For instance, consider a two machine problem. The coefficient
of X12s in
SP2
is equal to
(d2 -
1). By considering the maximization of the
objective function, it is possible that if
du2? 1, SP2
generates a solution in which
65
there are unnecessary idle times among processing jobs of a group. Thus, it is
required to add an upper bound to SP2. In this case the quality of solution and the
efficiency of the algorithm are highly dependent on the value of the chosen upper
bound for sub-problems.
Based on these facts, if a model can be created in which the sequence of jobs in a group
can be identified easily in sub-problems and the requirement of applying an upper
bound for SPs are removed, the SPs can be solved more efficiently.
Consider a mathematical model with minimizing the objective function. It is clear that
if a new decision variable is added to the model, the value of the objective function will
not be increased. Because the goal of the decomposition model is to find a lower bound
for the original problem, by adding new variables to the RMP and generate a new
model, the optimal solution of the new model can still provide a lower bound for the
original problem. This can be used to generate a new model with easier sub-problems.
These new models are discussed separately for each machine-size problem.
6.2.1 Simplifying the Two-Machine Problem
Consider the two-machine problem. The RMP model is as follows:
h1: The number of solutions related to M1 in RMP
h2: The number of solutions related to M2 in RMP
a bmax
MinZ=
h=1
ifl:i=1 j=1 x:;2
(21)
Subject to:
a
h1
Wp2t pj2)
p=1
hk
2i
1
hl=1
,h1
0
,...,
k=l,2
h =1
= 0,1
k=
1,2
The sub-problems (SP1 and Sp2) are, respectively, as follows:
2
max
(23)
a
MaxWi =
SP1
i=1 j=1
d,(X1)
(24)
Subject to:
Constraints (2) through (11) of the original problem in chapter 4
a bmax
SP2
MaxW2 =
1=1 j=1
a bmax
a
d(X
W1 t) ( 1=1 j1 Xy)
pI
(25)
Subject to
Constraints (2) through (7), (9) through (11), and (13) of the original
problem in chapter 4
An upper bound for X2's
As discussed, the coefficient of X12's in the objective function of SP2 is equal to (d2 -1).
By considering the maximization of the objective function, it is possible that if d2? 1,
SP2 becomes unbounded. In order to prevent this, and creating easier SPs, an artificial
variable is added to each relational constraint of the RMP. The coefficients of these
artificial variables in the objective function are equal to one. In this case, a new
constraint is added to the dual problem of RMP which guarantees d2
1. The new
RMP model and its dual problem are as follows:
a bmax
MinZ =
h=1
1=1
j=1
x
a bmax
+
W t2)
p=i
i=1 j=1
s
(26)
2X+S
0
i = 1,2,.. .,a
j =1, 2,.. .,bmax
(27)
hk
k=1,2
(28)
h1
= 0,1
The dual problem of RMP:
MaxZ'=
a bmax
i=1
d,j2*@)+ai+a2
(29)
j=1
ST:
a bma
i=1 j=l
d1
d02(X)+a1°
pI
W1t2) + a2
(30)
(
j=l
X2)
(3 1)
0
I = 1,2,.. .,a
j
1,2,.. .,bmax
(32)
67
'.,
U y2
ak Unrestricted
0
(33)
This model will provide a lower bound for the original problem. The experiments show
that the new model requires less iterations to find the optimal solution of each node
than the first one. But the most important advantage of this model is that the sequence
ofjobs that belong to a group can be identified easier. There are rules that can relax the
job sequence constraints of sub-problems. These rules for each sub-problem are
discussed in the following sections.
6.2.1.1 The Relaxing Rule for SP1 in the Two-Machine Problem
The objective function of SP1 is as follows:
a bmax
SP1
Max Wi =
i=1 1=1
d2(Xi)
Consider two different sequences of processing jobs for SP1, which are shown in Figure
6.1. The only difference between these two sequences is that the sequences of
processing job i and job j, in which both of them belong to a group, are changed. The
completion time of the preceding jobs before these jobs are shown by tA in this figure.
d1 and d1 are the dual values of these jobs, respectively.
Figure 6.1 The Gantt chart of processing two different sequences
Assume that the objective function value of S1 is less than S2. The completion time of
these jobs in S1 and S2 are shown in Table 6.1.
Table 6.1 The completion time ofjobs in S1 and S2
LJob
Job i
[
Dual variable
Jobj
Completion time in S
Completion time in 52
d,
1A + ti
tA + tj + ti
d1
tA + tj + tj
tA +
By substituting these values in the objective functions of SP1, because S1 has a smaller
objective function value, inequality below holds true.
-dj(tA+t)-dJ(tA+tj+t)-dJ(tA+tJ)-dj(tA+tj+t)
(34)
by simplif'ing the inequality, the result is:
fL
(35)
Based on this fact, at the optimal solution of SP1, if there is no idle time among the
process time of jobs, the sequence of jobs that belong to a group should respect
inequality (35). Thus, by applying the constraints below to SP1, for any given group
sequence, the sequence ofjobs in a group can be calculated by the model.
Yzjq
"ci
p=I
tpjt
-'
j,q-1,2,...bm
'
of
tpjl
j<q
(37)
tpql
In these constraints, if job q is processed after job
P1
(36)
E =1,2,...,a
j
in slot i, then the value
is positive. In this case, by applying constraint (36) in 5P1, Yijq
tpql
will be equal to 1. Constraint (37) supports this value as well. On the other hand, if job
q is processed before jobj in slot i, then the value of
d
d2
_2) is negative. By
w(-P'
a
tpjl
applying constraint (36) in SP1,
Y,jq
tpql
will have a value greater than a negative number. In
this case, constraint (37) forces Yq to take the value equal to zero.
rJ
If there are dummy jobs in a group, they are considered to be processed as the last jobs
of the group. In order to force this into the model, a binary parameter is defined as
follows:
JI;
u1=
job of group p is a dummy job
if the
p = 1,2,.. .,a
=
{ 0; otherwise
By revising (36) and (37), and by adding
.,bmax
to each of these constraints, the
Upq
t
pq 1
dummy jobs will be guaranteed to be processed as the last jobs of the group because if
job q is a dummy job, then Y will be equal to one. The revised constraints are:
dg2
IdY.
a
ijq
p1
tpji
a
Y '+W
tpql
+
u
diq2'
pg
Id f2di2
yq
pji
i = 1,2,.. .,a
tpql
j,q=1,2,...bmax j<q
di2]
pg
tpql
(38)
(39)
tpql
6.2.1.2 The Relaxing Rule for SP2 in the Two-Machine Problem
The objective function of SP2 is as follows:
a bmax
Max W2 =
j=1
j1
a
dy2(Xu2
a bmax
w1
p=l
x,2)
t) (
(40)
1=1 1=1
Consider two different sequences of processing jobs (Si and S2) which are shown in
Figure 6.1. Assume that the objective function value of S1 is less than S2 The
completion time of the jobs which make the difference between S1 and S2 i.e., job) and
job i, in S1 and S2 are shown in Table 6.1. In this case, by substituting the value of
completion times in the objective function of SP2, the inequality below holds true:
dj(tA+tj-tj)-(tA+tj)+dj(tA+t+tJ-tJ)-(tA+tj+tJ)dJ(tA+tJ-tJ)(tA + t3) +d(tA + t+ t1 t) (tA + t+ t)
(41)
by simplifying the above inequality, the result is:
d-1<d-1
ti
ti
(42)
70
Based on this fact, because the objective function of SP2 is maximization, at the
optimal solution of SP2, if there is no idle time among the process time of jobs, the
sequence of jobs that belong to a group should respect inequality (42). Thus, by
applying the constraints below to SP2, for any given group sequence, the sequence of
jobs in a group can be calculated by the model. The reason for adding the last part to
the constraints below I U
pq
Idjq2 -1
a
YuqWp1
p1
1)
(c/ y2
tpj2
d,2 -1
)
d2
pq
tpq2
is the same as the one explained for SP1.
12
tp2
I
a
(43)
j,q = 1,2,... bmax
J
Yyq1+Wip (diq2_1dy21 +Upq
tpq2
pl
tpj2
= 1,2,.. .,a
d2-1
j<q
tpj2 )
(44)
If the above constraints are added to the model, because there is no more idle time
between processing jobs of a group, constraint (8) of the original model in chapter four,
can be added to SP2 as follows:
C2 = C(ll)2+Setl2+WTP2
(45)
i = 1,2,3,.. .,a
Adding this constraint helps to solve SP2 easier because it restricts the completion time
of each group.
6.2.2 Simplifying the Three-Machine Problem
Consider the three-machine problem. The model for the RMP is as follows:
h1: The number of solutions related to M1 in RMP
h2: The number of solutions related to M2 in RMP
h3: The number of solutions related to M3 in RMP
a bmax
h3
MinZ=
h=1
i=lj=1
43
(46)
Subject to:
h=1
h=l
(x2
w2t12)-
p=l
(x3- w3t3)p=1
(47)
hl=1
i = 1,2,.. .,a
I= 1,2,.. .,bmax
(48)
hl=1
i = 1,2,...,a
I= 1,2,.. .,bmax
71
hk
=1
=
k= 1,2,3
(49)
0,1
The dual problem is as follows:
3
3
MaxZ'=
d(0)+ Mrrl
aM
k=2 1=1 j=l
(50)
ST:
a
i=1 j=1
abm
d2(X1i)+ai0
d(X
i=1 j=I
a
(51)
ab
a
W1 t) + cr2
p=I
d13X,,2)
(
0
(52)
1=1 j=1
a
a
1=1 j=1
d(X13- p4 Wj1,t13)+a3-(1=1 j=1 Xy3)°
d1k
0
a
(53)
Unrestricted
The sub-problems are as follows, respectively:
a
SP1
bmax
Max w1 =
1=1 j=1
d,(Xi)
(54)
Subject to:
Constraints (2) through (11) of the original problem in chapter 4
MaxW2=
a bm,
a
a bmax
d2(X2-p=lWt12)-Ei=1 j=1 dj3Xy
/=1 j=1
(55)
Subject to
Constraints (2) through (7), (9) through (11), and (13) of the original
problem in chapter 4 and an upper bound for Xq2
a
SP3
a
a
dy3(Xy3-WtJ3)1=1 j=1
i=1 j=1
pl
Max W3=>
(56)
Subject to
Constraints (2) through (7), (9) through (11), and (13) of the original
problem in chapter 4 and an upper bound for X13
In order to prevent any idle time among the process time of jobs in a group and
preventing the unbounded solution in any of the SPs, the coefficient of Xk in any sub-
problem should be negative (the objective function of SPs are maximization). The
coefficients of1(,1k in sub-problems are shown in Table 6.2:
72
Table 6.2 The coefficient of X1k's in SPs
Sub Problem
I
X,i Coefficient
SP1
-d112
SP2
SP3
d12 - d,3
d113-1
Thus, the inequalities below should hold true in order to prevent unbounded solutions:
dU2? 0
d2-d3
d3
0
1
To support these rules, a set of artificial variables, Si11, are added to the relational
constraint for the Jth job of slot i of the constraint between M1 and M2 with the
coefficient equal to 1. The same set of artificial variables is also added for the relational
constraint between M2 and M3 with the coefficient equal to -1, respectively. Another
artificial variable set, S2, is also added to the relational constraint of the th job of slot i
between M2 and M3 with the coefficient equal to 1. The coefficient of this set in the
objective function is equal to 1. The new model and its dual problem are as follows:
a bmax
MinZ =
1=1 j=1
h=1
a bma
X3 +
i=I j=1
S2
(57)
Subject to:
h2
h=1
a
A(x2 W,2t12)
p=l
h3
a
%(x3 W,3tJ3)
h=l
p=1
hl=1
h2
hl=1
'vhl
h12S2ySl
0
2
i = 1,2,.. .,a
j= l,2,...,bmax
i = 1,2,.. .,a
j= 1,2,.. .,bmax
(58)
(59)
H
=
k=1,2,3
h=1
(60)
= 0,1
The dual problem of the new model is as follows:
MaxZ'=
ST:
3
abmax
3
d,,(0)+ M=1aM
k=2 1=1 j=1
(61)
73
-
a bmax
i=1 j=1
a bmax
i=1 j=1
d,2(X,i) + ai
d(Xz
j=l
(62)
a
p=I
W1tJ2)+a2(
a
a bniax
i=1
0
di(X1
j=1 j=1
a bmax
p=l
dl
a bmax
Wi t3) + a3 (
1=1 j=
dy3X,)O
(63)
x) 0
(64)
i
j2
J
d2-d, 0
AO
(4f 3
1,2,.. .,a
,
,. . .,
max
(66)
ak Unrestricted
(67)
This model will provide a lower bound for the original problem as well. There are rules
that can relax the job sequence constraints of sub-problems. These rules for each subproblem are discussed in the following sections.
6.2.2.1 The Relaxing Rule for SP1 in the Three-Machine Problem
The rules to relax SP1 for three-machine problem is the same as the one discussed for
two-machine problem. Thus, the same constraints can be applied for solving SP1 of
three machine problem.
y>W[2_dY2+UdY2]
tpq2
d q2 1
Yiq1+1Vq,
P=l
tpqz
tpj2
i=l,2,...,a
tpj2
d__-1
,j2
tpj2
+Upq
d
2
j,q
1
tpjZ
j<q
(68)
1,2,... bmax
(69)
J
6.2.2.2 The Relaxing Rule for SP2 in the Three-Machine Problem
The objective function of SP2 is as follows:
a bmax
i=t
j=1.
d(Xj
a
W,ptpJ2)+a2(
p=l
a bmax
i=l j=1
dyX)
(70)
Consider two different sequence of processing jobs which are discussed in section
6.2.1.1. The Gantt chart of processing jobs is shown in Figure 6.1. Assume that the
74
objective function value of S1 is less than S2 The completion time of these jobs in S1
and S2 is shown in Table 6.1. In this case, by substituting the value of completions
times in the objective function of SP2, the inequality below holds true:
d2('tA + t,
t)
d31(tA + t) + d2J(tA + t1 +
,- t) d3(tA + t1 + t)
d31(tA + tj
+t+t)
tJ)-d3J(tA +t,) +d21(tA +tJ+tj-tj)-d3,(tA
(71)
by simplifying the inequality, the result is:
d3J <d2
d2J
d3
ti
(72)
ti
Based on this fact, at the optimal solution of SP2, if there is no idle time among the
process time of jobs, the sequence of jobs that belong to a group should respect
inequality (72). Thus, by applying the constraints below to the 5P2, for any given group
sequence, the sequence of jobs in a group can be calculated by the model.
Yyg
Wp[32
P1
d3-d2
tpq2
tpi2
Upg
d3d2]
Ygq I+JV,p diq3diq2 d3_du2+U
P1
tpq2
tpi2
tp,2
d3-d2
i =
(73)
1,2,...,a
j,q - 1,L,... 0max
j
q
(74)
tpi2
In these constraints, if job q is processed after job j in slot i, then the value of
w
[d
q3
d
d
igZ
tpq2
y3
d u2J is positive. In this case, by applying constraint (73) in
tpi2
SP2, Yijq will be equal to 1. Constraint (74) supports this value as well. On the other
hand,
w
if job q
[d d
iq3
tpq2
1q2
is
processed before job j in slot
d3 d2]
i,
then the
value of
is negative. By applying constraint (73) in SP2, YzJq
tp2
will have a value greater than a negative number. In this case, constraint (74) forces Yijq
d
to take the value equal to zero. The reason for adding the last part
d
2] to
pg
the constraints (73) and (74) is the same as the one discussed in the previous sections.
75
6.2.2.3 The Relaxing Rule for SP3 in the Three-Machine Problem
The rules to relax SP3 for three-machine problem is the same as the one discussed for
M2 in two-machine problem. Thus, the same constraints can be applied for solving SP3
of three machine problem. These constraints are as follows:
v
Upq1yq
d3-1
a
i = 1,2,.. .,a
j,q = 1,2,... bmax
L..i
tpj3
tpq3
tpj3
)
y <1(diq3_1du3_1 +Upq d3-1l
a
ijq
tpj3
tpq3
tpj3
(75)
j <q
(76)
)
6.2.3 A Generalized Model for Simplifying the Multiple-Machine Problems
Consider the rn-machine problem. The RMP of the problem is shown in (11) through
(13) and its dual problem is depicted by (14) through (17). The sub-problems are shown
in (18) through (20).
In order to prevent any idle time among the process time of jobs in a group and
preventing the unbounded solution in any of the SPs, the coefficient of Xjk in any sub-
problem should be negative (the objective function of SPs are maximization). The
coefficients of X,jk in sub-problems are shown in Table 6.3:
Table 6.3 The coefficient of X1k's in SPs
Sub Problem
I
X Coefficient
SP1
SPk k
2,.. .,m-1
dk- d])
dijmA
SPm
Thus, the inequalities below should hold true in order to prevent unbounded solutions:
?0
i= 1,2, ...,a
duk du
0
j1,2,..., bmax
76
k = 2,3,.. .,m-2 n = k+1
dum <1
To support these rules, a set of artificial variables, Sl,,, are added to the relational
constraint for the
j
job of slot j of the constraint between M1 and M2 with the
coefficient equal to 1. The same set of artificial variables is also added to the relational
constraint between M2 and M3 with the coefficient equal to -1, respectively. Another
artificial variable set, S2, is also added to the relational constraint of the jill job of slot i
between M2 and M3 with the coefficient equal to 1. The same set of artificial variables
is also added to the relational constraint between M3 and M4 with the coefficient equal
to -1, respectively. These artificial variables are added respectively to consecutive
constraints from M2-M3 through Mmi Mm. The new model and its dual problem are as
follows:
a bmax
h3
MinZ =
i=1 j=1
h=1
Subject to:
a
h=1
i=1 j=1
Wt)
x2-
h=1
a bmax
p=l
W tJk)
(77)
SM
I = 1,2,...,a
j = 1,2,.. .,bmax
hl=1
hk-1
hl=1
A'
k-1)
Sk
S(k-1)
0
I = 1,2,.. .,a
1,2,.. .,bmax
(78)
(79)
k=2,3,...,m
H
k1,2,m
h=1
(80)
= 0,1
The dual problem of the new model:
m
abmax
MaxZ=
m
duk*(0)+
k=2 i=1 j=1
aM
(81)
0
(82)
M=1
ST:
-
a bmax
i=1 j=1
d,(X) + ai
a
a bmax
i=1 j=1
dq(Xjk
p=l
a bmax
WitJk)+ak(Y
i=1 j=1
dj(k+1)Xjk)O
k=r2,3,...,m1
(83)
77
a
a
j1 j=1
dym(X,jrn
d2
dk
dm
p=l
a bmax
Wiptpjmam
1=1 j=t
Xym)
= 1,2,.. .,a
j= 1,2,. . . ,bma,
85
k=2,3,. . .,m-1
(86)
i
1
d(k+1)
0
ak Unrestricted
0
(87)
This model will provide a lower bound for the original problem as well. There are rules
that can relax the job sequence constraints of sub-problems. These rules for each subproblem are discussed in the following sections.
6.2.3.1 The Relaxing Rule for SP1 in the Multiple-Machine Problem
The rules to relax SP1 for m-machine problem is the same as the ones discussed for
two-machine and three machine problems. Thus, the same constraints can be applied
for solving SP1 of the multi-machine problem.
YjqWipiU H
a
Idu2
diq2
diq2'
tpjl
tpql
tpqi)
P=l
a
1du2
diq2
tpjl
tpql
Y,jq
P=l
i
= 1,2,.. .,a
j,q=1,2,...bmax
diq2"
+Upql
(88)
j<q
(89)
tpqi)
6.2.3.2 The Relaxing Rule for SP2 through
SPm..i
in the Multiple-Machine Problem
The objective function of any sub-problem, except the first and the last sub-problems,
SPk(k= 2,3,..., rn-i) is as follows:
a bmax
1=1
j=1
d(X
a
a bmax
Wip t) + ak (
p=l
i=1 j=l
d+l) Xk)
(90)
To find a rule to relax the job sequence constraints, consider two different sequences of
processing jobs which are shown in Figure 6.1. Assume that the objective function
value of S1 is less than S2 The completion time of the jobs which make the difference
between S1 and S2 are shown in Table 6.1. In this case, by substituting the value of
completions times in the objective function of SPk, the inequality below holds true:
dk(tA + t, t1) d+J)j(tA + t,) + dkf(tA + t1 + t3- t) d+J)J(tA + t1 + t)
d+J)J(tA +
t) d(kJ)J(tA + t) + dkj(tA + t + t1 t,) d+1)j(tA + tj + t1)
(91)
by simplifying the inequality, the result is:
d
kj
d,
d(k+l)J
d(k+l)I
ti
(92)
ti
Based on this fact, at the optimal solution of SPk, if there is no idle time among the
process time of jobs, the sequence of jobs that belong to a group should respect
inequality (92). Thus, by applying the constraints below to the SPk, for any given group
sequence, the sequence of jobs in a group can be calculated by the model. The reason
for adding the last part Upq
(dU(k+1)
duk)
to the constraints is the same as that
)
tpjk
discussed in the previous sections.
a
'dig(k+I)
YgJq
diqk
dy(k+1)
dUk
P
p=1
(dig(k+1)diqk
tpjk
d/(k+)dUk
+Upq(
tpjk
tpqk
dik)')
du(k+1)
tpjk
pqk
a
+Upq(
)
i = 1,2,..
J,q=z1,2,...bmax
(93)
k = 2,3,.. .,m4
(94)
j<q
tpfk
6.2.3.3 The Relaxing Rule for SPm in the Multiple-Machine Problem
The rules to relax SPm for rn-machine problem is the same as the one discussed for SP2
in the two-machine problem. Thus, the same constraints can be applied for solving SPm
of the multi-machine problem. These constraints are as follows:
a
Yyq
Pl
w0l
a
i+
P=l
d__1 d.1 + Upq d.1
iqm
ijm
ijm
tpqm
tpjm
tpjm
qm
tpqrn
1 d' d-'
urn
tpjrn
+ Upq
ym
tpjrn
i = 1,2,.. .,a
j,q 1,2,... bmax j < q
(95)
(96)
79
6.2.4 Adding an Auxiliary Constraint to Simplify Finding the Sequence of Dummy
Jobs
As mentioned, the dummy jobs of a group are processed as the last jobs of the group. In
order to facilitate solving sub-problems, the constraint below is added to each subproblem. This constraint relaxes the job sequence binary variables of dummy jobs in
the mathematical model. In this constraint the parameter tlpjk is defined as follows:
I-i; if the
tlpjk
th
job of group p is a dummy job
1tk ; otherwise
p
.,a
j
l,2,...,bmax
k= 1,2, .., m
If q is a dummy job, the value of tlpqk is equal to -1. Thus, the right hand side of the
constraint below is equal to 1 if job q is a dummy job. This leads to Yyq = 1 if job q is a
dummy job. Adding the constraint below to SPs, causes the dummy jobs of each group
considered to be processed as the last jobs of the group.
(-1
Y VW
p1
t1pqk)
a
/q
h.d
p
i =1,2,...,a
j,q=1,2,...bmax j<q
(97)
6.2.5 Solving Sub-Problems
As mentioned, the sub-problems are NP-hard, so it is better to avoid solving them
optimally as long as possible. It is clear that during solving a node, any colunm with
positive coefficient can be added to RMP in order to help improve the objective
function value. Based on this fact, it is not necessary that the sub-problems be solved
optimally during the intermediate levels of solving a node. Thus, the heuristic algorithm
(tabu search) is applied to solve sub-problems until it can provide a solution with
positive coefficient. When the heuristic algorithm is unable to find columns with
positive coefficient for all sub-problems, the sub-problems are solved optimally. This
process is performed until none of the sub-problems can provide a column with positive
coefficient. At this time, the node is solved optimally. In other words, at the end of each
node, all sub-problems should be solved optimally to make sure that the optimal
solution of a node is found.
6.2.6 Branching
The LP relaxation of the RMP which is solved by colunm generation will not
necessarily provide integral solution. In this case, applying a standard branch-andbound procedure to the RMP with its existing columns will not guarantee an optimal
(or feasible) solution (Barnhart, 1998). Barnhart et al. (1995) and Desrosiere et al.
(1995) suggested to branch on the original variables of the problem. This means that
the branching rules for the proposed problem should be based on AS(+l)l or other
original variables.
Because the sequence of jobs in a group can be calculated by the rules discussed in the
previous section, the branching is only performed on the group variables, i.e., AS(+J)l 'S
or W1,. In this case, to find the best variable to branch, all AS1(+J)l variables related to
all machines are considered.
Each column that exists in RMP has a coefficient (,%) at the optimal solution of each
node. To find the best variable for branching, for each AS(+l)l related to each
machine, a branching coefficient is calculated. The value of this coefficient is sum of
the coefficient of the existing columns in RMP in which ASi1(+J)l =1. Wilhelm et al.
(2001) suggested to branch on the original variable in which its branching coefficient
has the nearest value to "0.5" compared to the other variables. Thus, branching occurs
on the variable in which its branching coefficient has the closest value to 0.5. The
branching rules applied for this problem is shown as a flow chart in Figure 6.2.
Suppose that AS(+l)l (AS(+J)l that belongs to the kth sub-problem) has the closest
value to 0.5 among all variables. In this case, the parent node is branched on two new
nodes. In one node, constraint AS1J,(e+J), =1 is added to SPk and in the other node the
constraint AS(l+J)l = 0 is added to SPk as well. All existing columns related to all
machines but the kth machine are added to both new nodes. The columns related to the
kth machine are separated in two parts. The ones in which ASi(IJ),
1 are added to the
first node and the remaining are added to the one which includes the ASI1,(eI)l = 0
constraint.
Initialize Coefficient of Lamda's
Pick the First Existing Column in RMP
V
Get the Coefficient of the Column
Identify the Relative Machine
Pick the First AS
V
Yes
AS=i'
V
ACId the coefficient to the
No
L,
............................................................
V
No
Are all ASs
checked?
Yes
V
Yes
a there any other
column in RMP
No
Pick the AS with the value closest to
05
V
Apply Branching
Figure 6.2 The branching rule flow chart
branching coefficinet of AS
6.2.7 Stopping Criteria
The branching process can be continued until all nodes provide an integer solution, be
infeasible, or are fathomed. Because finishing this process requires a considerable
amount of time, especially for large size problems, and considering the required amount
of time for solving sub-problems optimally which are NP-hard, a time limitation is
applied for solving problems.
During solving problems to obtain lower bounds, if the time spent for solving a
problem exceeds 4 hours, the sub-problems of the current node started is solved
optimally once. After solving all sub-problems optimally, the algorithm stops and the
best lower bound obtained so far is reported as the lower bound of the problem.
The maximum time spent to solve a sub-problem is set to at most two hours. If a sub-
problem cannot be solved optimally in two hours, solving the sub-problem is stopped
and the lower bound of the sub-problem is considered as the objective function value of
the sub-problem. During solving the nodes, the breadth first procedure is used to solve
the nodes. In other words, all nodes of a higher level have priority to be solved
compared to the nodes in lower levels.
6.2.8 The Software Application
The B&P algorithm is coded by concert technology concept of CPLEX 9.0 version, by
applying the beta version of a library function called MAESTRO developed for the
B&P algorithm.
6.2.9 The Lower Bound for the Original Problem
During solving a problem with the B&P algorithm, the algorithm stops for one of the
following reasons:
The B&P algorithm is solved optimally.
The B&P algorithm cannot be solved optimally because of imposed time limitation.
If the B&P algorithm is solved optimally, the optimal solution of the mathematical
model is a lower bound for the original problem.
If the B&P algorithm is not solved optimally, there are some rules to calculate the
lower bound of the original problem. These rules are discussed as follows:
If in a problem, all nodes cannot be solved because of time limitation, the lower
bound of the original problem is the minimum value of the solved nodes in which
all their branches are not solved yet. For instance, consider a problem in which all
possible nodes of the problem cannot be solved. The objective function values of
solved nodes for such a problem are shown in Figure 6.3. Suppose the B&P
algorithm is stopped by the end of the
6th
node because of time limitation. In this
case, the lower bound of the original problem is equal to the minimum objective
function value of nodes number 3, 4, and 5 which is equal to 154.
Level 0
Node(1)
I
150
I
I
Level I
I
I
Node(2)
I
I
153
I
1
I
Level 2
Node(4)
I
Node(5)
I
Node(3)
I
I
156
I
I
I
I
155
I
Node(6)
156
I
I
Node(7)
Unable to
solve
I
I
Figure 6.3 The objective function value of nodes for an incomplete problem
In some problems, the sub-problems cannot be solved optimally in their time
limitation (two hours). In such cases, the algorithm stops solving the sub-problem
after two hours and the lower bound of the sub-problem is considered as the
objective function value of the sub-problem.
If in a problem, a node cannot be solved optimally, the lower bound of the problem
is equal to the objective function value of the recent RMP minus the summation of
the objective function values of sub-problems (Lubbecke and Desrosiers, 2004).
6.2.10 Example
The problem shown in chapter four is considered to find a lower bound as an example.
As explained before, the optimal solution for this problem, by considering
minimization of the sum of the completion times criterion, is equal to 165. The
heuristic algorithm (tabu search) also provides a solution equal to 165.
As discussed, the B&P algorithm requires an initial solution. The experiment showed
that, if an initial solution with good quality is considered, the efficiency of the
algorithm will increase. Thus, the result of the tabu search is considered as the initial
solution for the problem. The RIVIP is solved to find the optimal solution of the first
node. The value of RMP and sub-problems during the first node iterations are shown in
the table below.
Table 6.4 The result of the first node
Iteration
o
1
2
3
4
RMP
165.000000
164.500000
160.375000
160.000000
159.000000
Alphal
0.000000
111.500000
132.250000
137.000000
136.000000
The branching coefficients of
6.5.
AS(1+])l
Alpha2
165.000000
53.000000
28.125000
23.000000
23.000000
SP1
0.000000
16.500000
6.131579
1.000000
0.000000
SP2
46.000000
15.500000
3.375000
0.000000
0.000000
at the end of the first node are as shown in Table
Table 6.5 The branching coefficients of AS(1+J)l at the end of the first node
AS
L1JL
111
TTT
Absolute
difference
Machine Value
value
compared
to 0.5
211
211
2
0
0
1
0
0
2
0
0
1
1
0
2
0.238
0.238
1
0
0
2
0
0
1
0
0
2
0
0
1
0
0
2
0
0
1
0
0
1
2
32
0.762
0.238
1
0
0
221
221
222
222
223
223
231
2
T
232
2
2
0
0
2
1
0
0
2
0
0
1
2
T
!2
!.!
!.1
!
3
T
1
0
!21
!21
1
P
0
113
1
T
1
iT
1
AS
3
T2
2
T
2
2T3
2
T
3
3
3
2
233
Absolute
difference
Machine Value
value
compared
to 9.5
1
0
0
2
0
0
1
0
0
2
0.762
0.238
1
0
0
2
0
0
1
0
0
2
0
0
1
0
0
2
0
0
1
0
0
2
0
0
1
0
0
2
0
0
1
1
0
2
0.238
0.238
1
0
0
2
0
0
Based on the result, the best variable to branch is AS21123. In this case, two new nodes
are created. In the first node, the constraint AS1123 = 1 is added to its
added to
SP2
SP2. AS1123
= 0 is
of the second node as well. All existing columns of RMP related to the
second machine are passed to the new nodes according to the added constraint. All
columns related to
SP1
more constraint at its
are added to both nodes. Then the second node which has one
SP2 (AS1123
during its iterations are as follows:
= 1) is solved. The value of RMP and sub-problems
Table 6.6 The results of the second node
Iteration
5
6
7
8
9
10
RMP
165.000000
161.888889
160.553114
159.984694
159.802286
159.642857
Alphal
85.000000
121.777778
105.851648
122.405977
124.118857
128.428571
Alpha 2
80.000000
40.111111
54.701465
37.578717
35.683429
31.214286
SP1
SP2
7.000000
3.592593
2.102564
1.306122
0.676000
0.000000
10.000000
4.111111
2.401099
0.637755
0.432000
0.000000
The optimal solution of this node is 159.642857. The branching coefficients of all
variables are equal to zero. Thus, branching cannot be continued for this node.
The other node generated by the first node which has one more constraint at its SP2
(AS1123
= 0) is solved. The value of RMP and sub-problems during its iterations are as
follows:
Table 6.7 The result of the third node
Iteration
11
12
13
RMP
159.000000
159.000000
159.000000
Alphal
136.000000
136.000000
136.000000
Alpha 2
23.000000
23.000000
23.000000
SP1
1.000000
0.117647
0.000000
SP2
1.000000
0.000000
0.000000
The optimal solution of this node is 159.0000. The branching coefficients of all
variables are equal to zero. Thus, branching cannot be continued for this node.
At this stage, there is no node to branch and the algorithm stops. The lower bound is
159.000 with an error of 3.77%.
CHAPTER 7: EXPERIMENTAL DESIGN
In this research, several versions of tabu search are used to find a good quality solution.
Thus, the design of experiment techniques should be applied for choosing the most
efficient version. Some random test problems are created and the solutions of these
problems obtained by these algorithms are compared by design of experiment
techniques to identify the best algorithm. The steps of performing the experiments are
as follows:
7.1 Steps of the Experiment
The steps of performing the experimental design for the proposed research problems
based on Montgomery's text (2001, p 14) are as follows:
The goal of performing the experimental design is to compare the quality of the
1.
solutions of the proposed heuristic algorithms (several versions of tabu search) as
well as the efficiency of the algorithms. Another interest of performing the
experiment is to identify if there is a difference between the initial solution
generator techniques.
2. The factors considered for this research problem are as follows:
-
Number of groups: It is clear that by increasing the number of groups, the
problem becomes more complicated and consequently the quality of solutions
provided by heuristic algorithms may decrease, so the number of groups is
considered as the first factor in this study. The main idea of applying group
scheduling techniques in production is to decompose the production line into
small size and independent cells. Thus, in industry too many groups are not
expected to be assigned for processing in the same cell. Logendran et al. (2006)
investigated group scheduling problems by considering at most 16 groups in a
cell. Schaller et al. (2000) performed their experiments by considering at most 10
groups in a cell. Based on these experiences, the maximum number of groups in a
cell is considered equal to sixteen in this research. The levels of this factor can be
['I'J
defined in three different categories: small, medium, and large. Small size
problems are problems including 2 to 5 groups. Problems with 6 through 10
groups are considered as medium size problems, and finally problems with 11
through 16 groups are classified as large size problems.
-
Number of jobs in a group: The number of jobs that belongs to a group may
affect the quality of solution. This can be considered as the second factor of the
experimental design. In this research, the maximum number of jobs that belongs
to a group in a problem is considered as a factor. For instance, if in a group
scheduling problem with three groups, groups have 3, 6, and 9 jobs respectively,
then the problem is classified as a 9-jobs problem. In this research, the maximum
number of jobs that belong to a group is considered as ten which is the same as
Schaller et al. 's (2000) suggestion. This factor has also three levels. Level 1
includes problems with at most 2 to 4 jobs in a group. Problems with 5 to 7 jobs
in a group are classified as level 2, and finally if one of the groups of a problem
includes 8 to 10 jobs, then the problem belongs to level 3 based on its number of
jobs.
The ratio of set-up times: The preliminary experiments indicate that the quality
of solutions strongly depends on the ratio of set-up times of groups on machines.
This factor must be considered as the third factor. This can be considered as a
factor with three levels. These levels are as follows:
Level!:
0
Level2:
0.8
ratio<0.8
ratio.1.2
Level 3:
1.2
ratio
cc
It is clear that this factor should be applied to all machine pairs. For instance, in a
three machines problem this ratio for "M1/M2" and "M21 M3" should be compared.
Thus, this can be considered as two separate factors in this problem.
Level 1 of each factor investigates the problems in which the ratio of set-up times
of machine pair is increased. In other words, if the set-up time of groups on
machine i is smaller than the set-up time of groups on machine i +1, then the ratio
of "Me
/M+1"
has a value less than 1. If this value is less than 0.8, it is assumed
that there is significant difference between the required set-up times of groups on
these pair of machines. Level 2 includes the problems in which this ratio has a
value almost equal to 1. Thus, the problems whose required set-up times on the
investigated machine pair are almost equal are classified in this level, and finally
level 3 investigates the problems in which the required set-up time of machines is
decreased in the investigated machine pair.
-
Initial solution: Based on preliminary experiments, providing an initial solution
with good quality can improve the quality of final solutions. Thus, the applied
initial solution generator should be considered as a factor. Two different
techniques of generating initial solution are applied for each criterion and each of
them can be considered as a level for this factor.
The number of machines in a cell is not a suitable factor in this experiment. Two
main reasons are as follows: The first reason is that in a cellular manufacturing
system, the number of machines in a cell is always fixed. In other words, in cellular
manufacturing design, each cell includes a specific number of machines, so the
number of machines in a cell is not a variable. The second reason is that the number
of machines does not change the number of feasible solutions (possible sequences)
of the problem. The number of feasible sequences for a problem only depends on
the number of jobs in a group and the number of groups. Based on this fact, the
number of machines may not affect the quality of solutions.
3.
The response variables of the experiments are the objective function value (OFV) of
the algorithms (i.e., the makespan of the problem or the sum of the completion
times of the problem) and the time consumed to perform the algorithm.
4. The basic principles of experimental design such as replication (generating several
random test problems for each basic experiment) and blocking (defining some test
problems to ignore nuisance factors) is considered. It is not required to solve test
problems with a random sequence because the results are deterministic (the
objective function value of the algorithms is deterministic).
5.
Consider the factors that are used to define the model. The first three factors, i.e.,
the group, job, and the set-up ratio factors are the ones which are used to generate a
test problem. Then, each test problem is solved by the heuristic algorithms by
applying one of the two initial solution generators. Based on this explanation, each
experimental unit of the first three factors (which generate a test problem) is split
into six different parts to be solved by one of the combinations of the heuristic
algorithms and the initial solution generators. Thus, the split plot design is the most
appropriate model to compare the results. As the test problems are created based on
the groups, jobs, and set-up ratio factors, these factors are put in the whole-plot and
the remaining factors, i.e., the algorithm factor and the initial solution generator
factor (which are the most important factors) are put in the sub-plot. Each test
problem is considered as an experimental block. The factors in the whole plot are
considered nested to generate a test problem. A problem instance, which is
considered as a block for the sub-plot factors, is generated for specific levels of
whole-plot factors. The problems (blocks) are treated as a random factor. The
factors that belong to whole plot generate a block in a nested way. This model (the
split plot design) is also applied by Amini and Barr (1993) to a similar problem.
They performed an experimental design to compare the performance of three
Network re-optimization algorithms. In their experiments, they defined several
classes of problems and generated some test problems for each class. They applied
a split plot design in which the factors that generate the test problems are put in the
whole plot and the remaining factors are put in the sub-plot.
6. A problem instance, which is considered as a block for the sub-plot factors, is
generated for specific levels of whole-plot factors. The problems (blocks) are
treated as a random factor. The factors that belong to whole plot generate a block in
a nested way.
91
7. Thus, the model is a mixed model, because it includes fixed factors (groups, jobs,
set-up ratios, algorithms, and initial solutions) as well as random factor (problem
instances).
8.
For example, the model of the experiment for a 3-machine problem can be
represented as:
Yijklmnr = p +
+ Rik + R21 + (G*J) + (G*R1)ik + (G*R2)i +
G1 +
(J*R1)Jk +
(J*R2).i + (R1*R2)kl + Tt(jkl) + am + I,,+ (G*a)jm +
+ (.J*a)jm + (J*I)1
+
(R1*J)
+(R1*a)
+ (R2*a),m + (.R2*J)1 + (a*J)mn + (G*J*R1),jk + (G*j*R2)i +
(G*J*a)ym + (G*J*I
+ (G*R1*I)ikn + (G*R2
+ (G*R1*R2)jkl + (G*R1*a)j
+
(G*R2*1)11
+
(J*R1*a)j
+ (J*R1*I)jim +
*a)iim
+ (G*a*I)jmn + (J*R1*R2)kl
(J*R2*a)jim + (J*R2*I)3i + (J*a *J)jmn + (Ri *R2 *a)klm + (Ri *R2 *J)kl + (Ri *a *J),
+ (R2*a*I)imn + (G*J*R1*R2)
+
+
(G*J*R2*1)i+ (G*J*a*I)jjmn
(G *R1 *a *J).,
(J*R1 *a *J).,
+
+
(G*R1*R2*a)jklm
+
(G *R2 *a *J)jlmfl
+
+ (J*R2 *a *J)jlmn +
(G*J*Ri *I?*I)kl + (G*J*Ri *a*])
+ (J*Ri**a*I).kl
(G*J*R1*1),, -- (G*J*R2*a)uim
(J*R1 *R2 *a)jklm
(Ri *R2 *a *J)klmfl
+
(G*RJ*R2*1)jkln
+ (G*J*R2*a*I)yimn + (G*Ri **a*J).k1
+ (G*J*R1*R2*a*I)Uklmn +
Cijklmnr
where
p the overall mean
G1: the effect of group factor, i = 1, 2, 3
.J the effect of job factor,j = 1, 2, 3
Rik: the ratio of set-up time ofM1/M2 factor, k 1, 2, 3
R21: the ratio of set-up time of M21M3 factor, 1 1, 2, 3
the block factor (a random factor)
am: the algorithm effect factor m = 1, 2, 3
I: the algorithms effect factor n = 1, 2
8ijklmnr the error term
The interactions of the effects are also considered in the model.
9.
The goals of performing the experimental design are as follows:
Which heuristic algorithm has the best performance?
Is there any difference between the initial solution generators?
The hypothesis test to investigate for the first goal is:
+
(J*RJ *R2 *J)jklfl +
+ (G *J*R 1 *R2 *a)yklm +
+
92
H0: a1=a2=a3
H1: if any of the a 's is different from the others
and the hypothesis test to investigate for the second goal is:
H0: 11=12
Hi: I112
11. The significant level is chosen equal to 5%.
12. Model adequacy checking is performed by checking the normality assumption. This
could be made by plotting a histogram of the residuals. A useful procedure is to
construct a normal probability plot of the residuals. If the error distribution is
normal, this plot should look like a line. In visualizing the straight line, more
emphasis should be placed on the central values of the plot than on the extremes
(Montgomery, 2001).
This comparison is performed for minimization of makespan and minimization of sum
of the completion times criteria, for 2, 3, and 6 machine problems separately by
considering the generated test problems.
If this technique is applied for problems with more machines, the number of test
problems which should be investigated will increase highly. For instance, for a sixmachine problem, because the number of whole-plot factors will increase to 7 (group
factor, job factor, and 5 factors for ratios of sequenced machines), if in each cell only 2
replicates are applied, then it is required to solve 37*2 = 4,374 problems. By
considering that there are three versions of tabu search and two different initial solution
generator mechanisms for each criterion (minimization of makespan and minimization
of the sum of the completion times), 4374*3*2 = 26,244 problems should be solved for
each criterion. This is the correct way to perform the experiment, but in the interest of
time, it is not practical for this research. Thus, the experiment for problems with more
than three machines is proposed to be performed by just applying one factor for the
93
ratio of set-up times for all machine pairs. In this case, only a factor is defined for the
ratio of set-up times of machine pairs. This factor has three levels as explained before.
Level 1:
0
ratio
.8
0.8< ratio<1.2
LeveI3: 1.2< ratio<co
Level2:
Level 1 indicates the problems in which the required set-up times for each machine are
increased sequentially. The second level investigates the problems in which the set-up
times of all machines are almost equal. And finally, level three investigates the
problems in which the set-up times of machines are decreased from the first machine to
the last machine. For instance, a six machine problem would belong to level 1 if the
ratio of set-up times of its machines has the following relations:
0
M1IM20.8
0M2/M30.8
0M3/M40.8
0M4IM50.8
0
M5/M6
0.8
7.2 Test Problems Specifications
To perform the design of experiment techniques, two test problems (replicates) are
generated for each cell. These problems are generated based on specifications below:
.
The run time of each job on each machine is a random integer from a uniform
discrete distribution [1, 20].
The number of groups is a random integer from a uniform discrete distribution [1,
5], [6, 10], and [11, 16] for small, medium, and large size problems, respectively.
The number of jobs in a group is a random integer from a discrete uniform
distribution [2, 4], [5, 7], and [8, 10] for small, medium, and large size problems,
respectively, based on the job factor.
The set-up time of groups on each machine for two-machine problem is shown in
Table 7.1. As discussed, in the first level the ratio of set-up time between M1 and
M2 should be less than 0.8. If these set-up times are generated bas
U[ 17,67] for M1 and M2 respectively, then the average ratio of set-up tim
to 0.607, which satisfies the condition. The set-up times for other levels are
generated similar to this rule to satisfy the required ratio of each level as well.
The set-up times of groups on each machine for problems with three and six
machine problems are shown in Table 7.2 and Table 7.3. The set-up times shown in
Table 7.2 for three machine problem can be applied for set-up time ratio factors (Ri
and
R2).
The distribution to generate random set-up time for each machine in each
level is chosen based on the required ratio among set-up times.
Table 7.1 The set-up time of each machine on two-machine problems
Machine
M1
M2
[
Levell
U[1,50]
U[17,67]
[
Level 2
U[1,50]
U[1,50]
Level 3
U[17,67]
U[1,50]
Table 7.2 The set-up time of each machine on three-machine problems
Machine
Level 1
M1
U[1,50]
U[17,67]
U[45,95]
M2
M3
Level 2
U[1,50]
U[1,50]
U[1,50]
Level 3
U[45,95]
U[17,67]
U[1,50]
Table 7.3 The set-up time of each machine on six-machine problems
Machine
M2
M3
M4
M5
M6
Level 1
U[1,50]
U[17,67]
U[45,95]
U[92,142]
U{170,220]
U[300,350]
Level 2
U[1,50]
U[1,50]
U[1,50]
U[1,50]
U[1,50]
U[1,50]
Level 3
U[300,350]
U[170,220]
U[92,142]
U[45,95]
U[17,67]
U[1,50]
7.3 Two Machine Test Problems
The two machine test problem has factors below:
Group factor with three levels
.
.
Job factor with three levels
The set-up ratio M4/ M2 with three levels
The initial solution factor with two levels
Algorithm factor with three levels
To cover these factors two replicates are generated based on the first three factors. Then
each problem is solved by each heuristic algorithm with both initial solution generator
techniques. Thus the problems can be classified in 27 different classes as follows:
Table 7.4 Small size problems based on group category (two machine)
Job category
Small
Medium
Large
Level 1
Cl
C4
C7
Set-up category
Level 2
C2
C5
C8
Level 3
C3
C6
C9
Table 7.5 Medium size problems based on group category (two machine)
Job category
Small
Medium
Large
Level 1
ClO
C13
C16
Set-up category
Level 2
Cli
C14
C17
Level 3
C12
C15
C18
Table 7.6 Large size problems based on group category (two machine)
Job category
Small
Medium
Large
Level 1
C19
C22
C25
Set-up category
Level 2
C20
C23
C26
Level 3
C21
C24
C27
For each of these classes, two random problems (replicates) are generated. Thus, 54 test
problems are generated for two machine problems. The specifications of these
problems are shown in the table below:
Table 7.7 The specification of test problems generated for two machine problem
0
0
-
0
-
-
.
0
0
ci
C2
C3
C4
C5
C6
C7
C8
C9
cio
cii
C12
C13
C14
1
4
2
3
3
4
6
4
4
4
13
10
8
2
3
5
5
4
16
3
4
8
5
28
22
12
4
13
5
7
6
7
7
7
7
10
14
15
16
17
18
19
2
8
31
15
5
9
35
5
10
40
4
9
31
4
10
28
6
8
4
4
25
9
9
10
6
9
10
6
10
3
8
4
3
10
2
5
20
21
22
23
24
25
26
27
28
4
4
4
7
7
7
7
17
13
31
25
17
22
27
33
16
60
62
33
52
C15
C16
C17
C18
C19
C20
C2 1
C22
C23
C24
C25
C26
C27
29
30
6
10
7
7
32
49
31
8
8
33
10
6
9
6
9
10
10
8
9
10
41
32
4
4
4
29
40
46
40
40
46
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
11
13
16
13
14
15
16
13
16
12
15
14
11
51
15
12
52
16
53
15
16
54
7.4 Three Machine Test Problems
For the three machine problems, following factors are considered:
4
4
4
7
7
7
7
7
7
10
10
10
10
10
10
51
71
29
58
31
79
63
66
65
76
69
75
99
83
106
79
108
97
Group factor with three levels
Job factor with three levels
The set-up ratio M1/ M2 with three levels
The set-up ratio M2/ M3 with three levels
The initial solution factor with two levels
Algorithm factor with three levels
To cover these factors two replicates are generated based on the first four factors. Then
each problem is solved by each algorithm with both initial solution generator
techniques. Thus, the problems are classified in 81 different classes as follows:
Table 7.8 Small group, small job size
problems (three machine)
M2/ M3 Ratio
M1/ M2
Ratio
Level 1
Level 2
Level 3
Cl
Level 2
C2
C4
C7
C5
C8
Level 1
Level 3
C3
C6
C9
Table 7.9 Small group, medium job
size problems (three machine)
M1/M2
Ratio
Level 1
Level2
Level3
Level 1
ClO
C13
C16
M2/M3 Ratio
Level 2 Level 3
C12
Cli
C15
C14
C18
C17
Table 7.10 Small group, large job
size problems (three machine)
Table 7.11 Medium group, small job
size problems (three machine)
M1/M2
Ratio
Level 1
Level 2
Level 3
M1/ M2
M2/M3 Ratio
Level 1 Level 2 Level 3
C19
C20
C21
C22
C23
C24
C25
C26
C27
Table 7.12 Medium group, medium
job size problems (three machine)
M1/ M2
Ratio
Level 1
Level 2
Level 3
Level 1
C37
C40
C43
M2/ M3 Ratio
Level 2 Level 3
C38
C39
C41
C42
C44
C45
Ratio
Level 1
Level2
Level3
M2/ M3 Ratio
Level 1 Level 2 Level 3
C30
C29
C28
C33
C32
C31
C36
C35
C34
Table 7.13 Medium group, large job
size problems (three machine)
M2/ M3 Ratio
M1/ M2
Ratio
Level 1
Level2
Level 3
Level 1
C46
C49
C52
Level 2
C47
C50
Level 3
C48
C53
C54
C51
Table 7.15 Large group, medium job
size problems (three machine)
Table 7.14 Large group, small job
size problems (three machine)
M2/ M3 Ratio
M1/ M2
Ratio
Level 1
Level2
Level 3
Level 1
C55
C58
C61
Level 2
C56
C59
C62
M1/M2
Ratio
Level 1
Level 2
Level 3
C57
C60
C63
Level3
M2/M3 Ratio
Level 1 Level 2 Level 3
C66
C64
C65
C69
C67
C68
C72
C71
C70
Table 7.16 Large group, large job size
problems (three machine)
M1/ M2
Ratio
Level 1
Level 2
Level3
Ratio
Level 2 Level 3
C74
C75
C77
C78
C81
C80
M2/ M3
Level 1
C73
C76
C79
For each of these classes, two random problems (replicates) are generated. Thus, 162
test problems are generated for three machine problems. The specifications of these
problems are as follows:
Table 7.17 The test problems generated for three machine problem
0
-
-1
-
0
r1
-
.
-
15
5
4
13
5
4
14
14
2
4
7
15
4
4
13
16
5
4
17
17
2
3
5
18
4
4
13
19
3
6
18
20
4
6
22
4
2
5
4
16
12
4
4
12
13
4
3
4
10
6
5
4
16
5
4
15
3
3
7
3
3
8
4
4
13
C6
C3
C8
C4
8
cs
10
C9
ClO
0
4
2
C7
'1
4
1
C2
-.
0
-
0
ci
0
-
C
Table 7.17 (Continued) The test problems generated for three machine problem
0
.
-
0
E
cii
C12
C13
C14
C16
C17
C18
C19
C20
C21
C22
C23
C24
C25
C26
C27
C28
C29
21
2
7
13
22
23
24
25
26
27
28
29
30
5
7
31
5
6
23
4
6
20
3
6
17
4
7
5
6
23
26
3
7
19
5
5
22
2
7
13
31
3
7
15
32
5
6
23
4
6
19
3
5
15
2
7
12
5
7
4
10
29
34
5
9
34
35
36
38
4
10
38
36
40
3
10
27
41
5
10
42
2
2
4
8
16
10
18
31
5
9
10
4
8
31
3
9
5
9
24
42
43
44
45
46
48
49
50
3
8
51
5
8
52
3
9
5
8
54
4
10
23
37
23
36
36
55
6
4
19
56
57
58
8
4
4
4
20
6
7
0
E
0
19
17
C30
C31
C32
C33
C34
C35
C36
C37
C38
C39
C40
C41
C42
C43
C44
C45
C46
C47
C48
0
7
60
9
61
8
62
9
63
8
64
7
10
65
66
67
68
69
70
0
4
4
4
4
4
4
4
19
24
23
32
25
21
34
4
18
7
3
18
10
4
4
29
30
24
6
7
21
71
9
4
4
72
6
4
18
7
7
41
6
5
9
7
8
6
9
8
7
9
7
28
50
39
46
49
9
7
51
80
6
6
81
8
5
82
83
84
85
86
87
88
89
90
10
6
91
9
7
74
76
77
78
92
93
94
96
8
9
9
10
30
36
54
29
49
42
36
26
40
27
48
56
43
49
54
7
8
33
10
10
62
6
6
10
6
9
6
8
5
8
5
10
7
9
10
7
5
10
9
100
Table 7.17 (Continued) The test problems generated for three machine problem
aD
aD
I;
aD
0
0
aD
0
0
0
E
C49
C50
C51
C52
C53
C54
css
C56
C57
C58
C59
C60
C61
C62
C63
C64
C65
C66
97
-
-t
o
=
-
o
aD
133
16
7
69
41
134
11
7
50
63
135
12
7
50
136
14
7
53
137
13
7
62
138
15
7
66
139
14
7
75
140
13
7
54
141
15
7
59
142
16
7
66
143
13
7
54
144
15
7
68
145
12
9
62
146
14
10
74
147
13
10
81
148
15
10
85
149
16
9
93
150
15
10
110
151
15
10
104
152
14
9
77
153
16
10
96
154
13
9
66
155
14
10
84
156
11
10
86
157
13
10
89
158
15
10
101
159
15
10
97
35
160
14
10
86
11
10
64
4
47
40
161
162
14
8
77
7
50
14
7
83
129
11
7
64
130
14
7
88
131
11
7
69
132
16
7
104
8
9
48
98
6
10
99
10
10
100
8
9
38
101
7
10
48
102
10
8
40
103
10
10
66
104
8
7
36
105
6
10
38
106
9
8
43
107
10
108
8
10
60
109
13
4
110
15
4
39
111
16
4
46
112
14
4
42
113
13
4
114
15
4
115
15
116
14
4
117
11
4
118
14
4
39
119
11
4
32
120
15
4
50
121
11
4
31
122
15
4
47
123
13
4
36
124
12
4
125
14
4
126
13
127
12
128
45
44
C67
C68
C69
C70
C71
C72
C73
C74
C75
C76
C77
C78
C79
C80
C81
101
7.5 Six Machine Test Problems
The six machine test problem has the following factors:
Group factor with three levels
Job factor with three levels
The set-up ratio among machines with three levels
The initial solution factor with two levels
Algorithm factor with three levels
Two replicates are generated for each experimental cell based on the first three factors.
Then each problem is solved by each algorithm with both initial solution generator
techniques. Thus the problems can be classified in 27 different classes as follows:
Table 7.18 Small size problems based on group category (six machine)
Job category
Small
Medium
Large
Level 1
Cl
C4
C7
]
Set-up category
Level 2
Level 3
C2
C5
C8
C3
C6
C9
Table 7.19 Medium size problems based on group category (six machine)
Job category
Small
Medium
Large
Level 1
Set-up category
Level 2
dO
Cli
C13
C16
C14
_ C17
I
I
Level 3
C12
C15
_ C18
Table 7.20 Large size problems based on group category (six machine)
Job category
Small
Medium
Large
Level 1
C19
C22
C25
Set-up category
Level 2
C20
C23
C26
Level 3
C21
C24
C27
102
For each of these classes, two random problems (replicates) are generated. Thus, 54 test
problems are generated for six machine problems. The specifications of these problems
are as follows:
Table 7.21 The specification of generated test problems for six machine problem
0
-t
0
-
0
B
ci
C2
C3
C5
C6
C7
C8
C9
co
C12
C13
C14
C15
9
4
3
10
2
4
7
4
4
15
2
4
8
5
7
29
8
4
6
21
9
2
7
11
10
5
7
25
11
4
6
15
12
5
7
20
13
2
10
14
3
10
15
10
9
21
17
4
4
4
29
28
9
28
18
3
9
21
C4
16
0
-
0
4
6
-
-t
3
4
0
0
1
2
0
0
19
9
4
31
20
8
4
19
21
9
9
23
8
24
6
4
4
4
4
22
22
25
6
7
26
27
28
7
7
28
29
8
6
33
9
7
38
25
24
16
C16
C17
C18
C19
C20
C21
C22
C23
C24
C25
C26
C27
29
7
7
34
30
6
7
31
31
9
9
32
8
10
50
49
7
10
46
34
10
10
65
35
10
10
56
36
8
10
46
37
38
39
16
4
4
45
37
40
13
41
11
42
14
4
4
4
4
43
44
13
6
40
34
45
47
16
7
63
45
15
7
71
46
47
48
49
50
14
7
60
15
7
69
16
7
78
13
10
95
14
10
80
51
13
10
72
52
15
10
77
53
16
9
100
54
15
10
117
12
14
32
103
CHAPTER 8: RESULTS
The random test problems are solved by three different versions of tabu search by
applying two different initial solution generators. The proper lower bounding technique
also provides a lower bound for test problems. The results of the experiments for each
criterion are as follows:
8.1 The Results for the Makespan Criterion
The results for two, three, and six machine problems by considering minimization of
makespan are as follows:
8.1.1 The Results of Two-Machine Problems by Considering Minimization of
Makespan Criterion
All 54 test problems of two machine problems are solved by heuristic algorithms to
find the algorithm with the best performance and the best initial solution generator. The
lower bounding technique is also applied for each test problem to evaluate the quality
of solution. The solution of the Schaller et al. (2000) algorithm for each test problem is
also obtained to compare the results with the heuristic algorithm. The results are
presented in three sections:
In the first section, the results of the heuristic algorithms are compared to the lower
bound to evaluate the quality of solutions. In this section, the results of the
experimental design to find the algorithm with the best performance as well as
finding the best initial solution generator is presented.
In the second section, an experimental design to compare the time spent for
heuristic algorithms is performed and the results are presented.
In the third section, a comparison between the best found heuristic algorithm and
Schaller et al. (2000) algorithm based on the results of the test problems is
performed.
104
8.1.1.1 Comparison among Heuristic Algorithms and Lower Bound
The results obtained from the heuristic algorithms and the results obtained from the
lower bounding technique are shown in Table 8.1. This table has columns described
below:
Lower bound: The test problems are solved by the lower bounding technique for
minimization of makespan criterion and the results are presented in the table below.
The time spent to solve the mathematical model for the lower bounding technique
for each problem is also provided in this table.
The results of the heuristic algorithm: As mentioned before, the two machine
problem which considers the minimization of makespan criterion has some
advantages compared to the other proposed problems, such as relaxing the heuristic
algorithm to a one level search. The results obtained from the heuristic algorithms
are shown in Table 8.1 by applying two different initial solution generators. As
mentioned before, the job sequence of SDGS problems by considering
minimization of makespan for two machine problems can be calculated based on
Johnson's (1954) algorithm. Thus, for the second initial solution, the Schaller et
al.'s (2000) algorithm is applied to find the sequence of groups. Then the sequence
of jobs in each group is calculated based on Johnson's (1954) algorithm. In this
table, TS 1 stands for tabu search with short term memory algorithm, TS2 stands for
the LTM-Max, and TS3 stands for LTM-Min.
The best result of the heuristic algorithms (the one with the minimum objective
function value) for each test problem is considered to estimate the quality of
solutions. These error percentages are shown in the "Best error" column of Table
8.1. Based on the results, the average error percentage of all test problems
compared to the best result of the heuristic algorithms is equal to 0.68% and the
maximum error is 4.5% (problem 42). This error percentage is calculated based on
the formula below:
105
The heuristic algorithm objective function
The lower bound objective function
The lower bound objective function
Table 8.1 The results of the experiments with test problems for two machine problems
by considering minimization of makespan
Initial 1
Initial 2
ci)
I
TS2
TS3
TS1
TS2
TS3
1
280
0.05
287
287
287
287
287
287
287
0.025
2
237
0.03
237
237
237
237
237
237
237
0
171
171
171
171
171
171
0
171
0.02
171
4
130
0.02
130
130
130
130
130
130
130
0
s
321
0
321
321
321
321
321
321
321
0
6
209
0.03
209
209
209
209
209
209
209
0
403
0.14
404
404
404
404
404
404
404
0.002
354
0.02
354
354
354
354
354
354
354
0
264
0.03
264
264
264
264
264
264
264
0
152
152
0
C2
C3
TS1
C4
8
CS
152
0
152
152
152
152
152
527
0.02
527
527
527
527
527
527
527
0
12
405
0.2
405
405
405
405
405
405
405
0
13
491
0.01
491
491
491
491
491
491
491
0
14
249
0
249
249
249
249
249
249
249
0
437
0.19
445
445
445
437
437
437
437
0
16
490
0.14
490
490
490
490
490
490
490
0
17
397
0.01
397
397
397
397
397
397
397
0
18
396
0.03
396
396
396
396
396
396
396
0
19
335
0.05
346
346
335
335
335
335
335
0
20
457
0.06
457
457
457
457
457
457
457
0
21
346
1.73
358
358
358
370
368
368
358
0.035
22
383
2.81
391
391
391
391
391
391
391
0.021
23
583
2.95
604
584
604
590
590
590
584
0.002
24
330
0.19
330
330
330
336
330
336
330
0
25
880
1.53
888
888
888
880
880
880
880
0
26
815
5.55
819
819
819
838
838
838
819
0.005
27
440
0.33
440
440
440
440
440
440
440
0
28
659
3.34
659
659
659
666
659
666
659
0
10
C6
C7
C8
C9
ClO
C12
C13
C14
106
Table 8.1 (Continued) The results of the experiments with test problems for two
machine problems by considering minimization of makespan
a
Initial 2
Initial 1
ci)
C15
TS1 J TS2
TS3
TS1
TS2
TS3
0.36
495
495
495
503
503
503
495
0
0.17
678
678
678
678
678
678
678
0
657
0.08
657
657
657
657
657
657
657
0
32
733
0.09
734
734
734
734
733
734
733
0
33
859
2.64
873
863
873
871
871
871
863
0.005
34
383
0.42
383
383
383
388
383
388
383
0
35
768
1.39
772
772
772
777
777
777
772
0.005
36
471
0.38
474
474
474
474
474
474
474
0.006
37
521
2.78
521
521
521
533
533
521
521
0
38
667
21.36
692
672
692
685
685
685
672
0.007
39
605
83.41
649
623
649
627
612
627
612
0.012
40
547
15.38
569
552
569
567
567
567
552
0.009
41
682
30.91
710
698
710
694
694
694
694
0.018
42
786
24.72
830
821
830
829
826
829
821
0.045
43
1176
21.45
1208
1208
1208
1213
1213
1213
1208
0.027
44
965
21
978
978
978
989
977
989
977
0.012
45
766
34.38
800
800
800
794
794
794
794
0.037
29
495
30
678
31
C16
C17
C18
C19
C20
C21
C22
C23
46
817
6.08
834
824
824
835
830
835
824
0.009
47
1064
20.64
1098
1098
1098
1095
1095
1095
1095
0.029
48
1066
28.72
1075
1075
1075
1094
1094
1081
1075
0.008
947
10.59
952
952
952
960
957
960
952
0.005
50
1343
17.75
1355
1355
1355
1345
1345
1345
1345
0.001
51
923
4.8
939
939
939
934
934
923
923
0
52
1286
107
1323
1312
1323
1321
1324
1321
1312
0.02
53
1115
14.15
1138
1138
1138
1149
1116
1149
1116
0.001
54
1374
60.34
1409
1405
1405
1402
0.02
Average Error:
0.68
C24
C25
C26
C27
1402
1409
1405
The experimental design is coded with Statistical Analysis System, SAS, version 9.1, to
find the best heuristic algorithm as well as the best initial solution. The normal
probability plot of the residuals (evaluated as the difference between the actual value of
the objective function and the predicted one by the model) confirms that the residuals
have a normal distribution (Figure 8.1). Thus, there is evidence that the parametric
107
statistics-based analysis of variance (ANOVA) can be used to further analyze the
results.
15
10
S
H
Jo
-s
-10
-15
.1
1
5
10
25
so
is
80
95
99
99.9
Nors I Fe-nt
Figure 8.1 The normal probability plot of the experimental design of finding the best
heuristic algorithm for two machine problem by considering minimization of makespan
The result of ANOVA is presented in Table 8.2. The results of the experiment show
that there is a significant difference among the objective function values of heuristic
algorithms (the result of F test is equal to 0.0048). To find the difference among the
heuristic algorithms, the Tukey test is performed. The result of Tukey's test shows that
TS2 has a better performance compared to the other heuristic algorithms.
The results of the experimental design also show that there is no difference between the
initial solution generators for two machine problems (the result of F test is equal to
0.4975).
Among the interactions, only the interaction between group factor and algorithm factor
(G*A), and job factor and initial solution factor (J*J) are significant. The significant
factors and interactions are shown in bold in Table 8.2.
Table 8.2 The ANOVA table for two machine problem by considering minimization of
makespan for algorithm comparison
The Mixed Procedure
Type 3 Tests of Fixed Effects
Num
DF
Den
DF
G
2
0
J
2
0
Effect
Ri
A
2
0
2
I
1
135
135
G*J
4
0
G*R1
4
0
Q*A
4
2
135
135
4
0
4
135
135
135
135
135
G*I
J*R1
J*A
J*I
2
Ri*A
R1*I
A*I
G*J*R1
2
8
0
G*J*A
8
G*J*I
4
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
G*Rl*A
G*R1*I
G*A*I
J*R1*A
J*R1*I
J*A*I
4
2
8
4
4
8
4
4
R1*A*I
G*J*Ri*A
16
G*J*R1*I
G*J*A*I
8
G*R1*A*I
8
J*R1*A*I
G*J*R1*A*I
4
8
8
16
F Value
434444
122878
16974.6
5.56
0.46
14163.4
4021.26
2.69
1.81
7523.18
0.90
4.37
0.55
0.72
0.51
1976.72
0.58
1.24
0.18
1.93
0.48
0.53
0.64
1.02
0.18
0.89
1.07
0.77
0.39
0.43
0.38
Pr > F
<.0001
<.0001
0.0488
0.0048
0.4975
0.0445
0.5347
0.0339
0.1671
0.2300
0.4682
0.0145
0.7023
0.4869
0.5992
0.9140
0.7895
0.2962
0.9937
0.1083
0.7482
0.8285
0.6352
0.4005
0.9460
0.5868
0.3889
0.6318
0.9229
0.9036
0.9844
A test of effect slice is performed to obtain detailed information on the highest
significant order interactions for the algorithm and the initial solution effects i.e., G*A
and J*J by considering Tukey-Kramer adjustment. The results are shown in Table 8.3.
Based on the results, for large size problems, TS2 has a better performance compared to
the other heuristic algorithms.
ID
Table 8.3 Test of effect slices for two machine problem by considering minimization of
makespan for algorithm comparison
Differences of Least Squares Means
Effect G
G*A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
A
J
_G
I
_A
J
1
1
1
2
1
1
1
3
1
2
1
3
2
1
2
2
2
1
2
3
2
2
2
3
3
1
3
2
3
1
3
3
3
2
3
3
J*I
J*I
J*I
1
1
1
2
2
1
2
2
3
1
3
2
Adj P
Adjustment
Pr > jt
I
1.0000
1.0000
1.0000
0.2221
0.7550
0.3623
<.0001
0.2705
0.0016
0.2908
0.0061
0.5833
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
1.0000
1.0000
1.0000
0.9492
1.0000
0.9918
0.0009
0.9724
0.0409
0.8960
0.0654
0.9939
8.1.1.2 The Experimental Design to Compare the Time Spent for Heuristic
Algorithms for Two-Machine Problems by Considering Minimization of
Makespan Criterion
The time spent to terminate the search algorithm and the time spent to find the best
solution for each heuristic algorithm are shown in Table 8.4 for all test problems.
Table 8.4 The time spent for the test problems of two machine problems (in seconds)
by considering minimization of makespan
---Initial 1
,
TS1
.
TS2
2
ci
C2
_!_
±-
_±._
__
2
3
4
10
C3
C4
C5
0
0
0
0
0
2
3
5
0
-u-- ..JL
Initial 2
TS2
TS1
2
2
_2_
.iL.
.iL
JL
0
0
0
0
--
-p--
TS3
2
2
i..
0
J..
......Q..
..i
0
0
0
Q. .JL _2_
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
8
0
0
0
0
0
0
0
0
0
0
0
0
JL.
....L
6
22
0
0
0
0
0
0
.L
7
..!L
.Q.
0
..SL.
1L.
2
7
13
0
0
0
0
..!
3
4
8
4
.2..
10
6
2
0
p-4
TS3
0
'
...L
L.
...2...
0
2..
0
0
L.
0
JL
0
.JL JL.
0
0
0
JL JL
0
..Q..
0
0
JL.
0
.JL JL.
0
0
0
0
0
o
0
.1
0
P!P:.. :P:.
!
P.:
:
I;II
PIPI
111
Table 8.4 (Continued) The time spent for the test problems of two machine problems (in
seconds) by considering minimization of makespan
Initial 2
Initial 1
TS1
TS2
TS1
TS2
2.
2.
2.
.
2
2
2.
-
.
2
2
TS2
TSL
2.
2
2
2
e
C24
C25
47
15
48
7
76
2
1
4
1
3
1
1
0
4
0
3
0
14
7
69
1
1
3
0
2
0
1
0
3
1
3
3
_LL
10
75
1
0
1
0
1
0
0
0
1
1
1
0
10
99
2
1
3
1
3
0
2
1
4
1
3
0
10
83
0
0
2
0
2
1
0
0
2
0
2
2
10
106
1
1
4
2
4
1
4
1
4
4
4
2
2...
L
1
4
5015
C26
5216
10
C27
5416
1
108
10
..L
0
1
L _L
4
i.
4
4
_L
.JL ±.
..L
2
0
4
1
The experimental design is performed by applying SAS 9.1 to find the most efficient
heuristic algorithm. The normal probability plot of the residuals is shown in Figure 8.2.
The middle part of the plot can be interpreted as a line. Thus, the parametric statisticsbased analysis of variance (ANOVA) can be used to further analyze the results.
10
+
-I- +
05
ill
I
I
-I-
41lI1tIlIII4I1
R
00
muiui..
I IlIlIWlIlUhl
-H-
-0 5
-I-
-I-
-I-
-1 0
.1
1
5
10
25
50
15
90
95
99
99.9
Iorna I Percent I Ins
Figure 8.2 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for two machine problem by considering minimization of
makespan
Q..
1
112
The ANOVA table for time spent comparison is presented in Table A. 1 in appendices.
The results of the experiment show that there is a significant difference among the time
spent for heuristic algorithms (the result ofF test is less than 0.0001). The Tukey test is
applied to find the difference. The result of Tukey's test shows that TS 1 is more
efficient compared to the other two heuristic algorithms.
The results of the experimental design show that there is a significant difference
between the time spent for algorithms by applying different initial solution techniques
for two machine problems (the result of F test is equal to 0.032 1). A comparison
between the time spent by the heuristic algorithms in applying different initial solutions
shows that the first initial solution generator results in a better performance of the
heuristic algorithms than the second generator.
Among the interactions, the interaction between the group factor and all subplot factors
i.e., G*A and G*I are significant. It means that the group factor is an important factor
in such problems. The algorithm factor is also important because most of the
interactions including the algorithm factor (G*A, G*I, Ri *A, A*I, G*R1 *A, G*A*I,
G*J*R1 *A) are significant. The significant factors and interactions are shown in bold
in Table A.1.
A test of effect slice is performed to obtain detailed information on the highest
significant order interactions for the algorithm and the initial solution effects i.e.,
G*J*R1*A and G*J*I. by considering Tukey-Kramer adjustment. The results are
shown in Table A.2. This table shows the performance of the heuristic algorithms as
well as the initial solutions for each cell of the experiment. Based on the results, the
summary of the significant differences are summarized as follows:
.
For any large size group problems, there is a significant difference between the
time spent by the algorithms. In all of them, TS 1 required less time compared to
the others. This result was expected because in TS2 and TS3, the algorithm has
a chance to search more to find a better objective function value. By increasing
the size of the problems, this difference can be observed more prominently.
113
.
For any large size group and large size job problem, there is a significant
difference between the time spent by the algorithms in applying different initial
solutions. In these problems, the first initial solution generator has a better
performance than the second (Schaller et al., 2000) one.
8.1.1.3 The Comparison between the Best Tabu Search and the Results of Schaller
et al. (2000) Algorithm
In this section, a paired t-test is performed between the results of the best tabu search
algorithm and the results of Schaller et al. (2000) algorithm for test problems. The
result of Schaller et al. (2000) algorithm for test problems is presented in Table B. 1 of
appendix. As discussed in section 8.1.1.1, TS2 has the best performance compared to
the other algorithms. Because there is no difference between the initial solution
generators, the results of TS2 by considering the first initial solution generator is
applied to be compared with the result of Schaller et al. (2000) algorithm. The result of
the paired-t test shows a significant difference between the results of two algorithms.
The average error percentage of Schaller et al. (2000) algorithm for the test problems is
equal to 9% and the maximum of percentage error for a test problem is equal to 28%.
These percentage errors are too high compared to the ones obtained by the proposed
heuristic algorithms (0.68%).
8.1.2 The Results of Three-Machine Makespan Criterion
The 162 test problems of three machine problem are solved by heuristic algorithms to
find the algorithm with the best performance and the best initial solution. The lower
bounding technique is also applied for each test problem to evaluate the quality of
solutions. Then, the result of the best heuristic algorithm is compared with the result of
Schaller et al. (2000) algorithm, as the best current available algorithm, for all test
problems. The results are presented in sections below.
114
8.1.2.1 Comparison among Heuristic Algorithms and the Lower Bound for Three
Machine Problems by Considering Minimization of Makespan
The results of applying the heuristic algorithms and the results of the lower bounding
technique are shown in Table 8.5. This table has columns described below:
Lower bound: The results of the lower bounding technique as well as the time spent
for solving the lower bound problem for each test problem are presented in the table
below.
The results of the heuristic algorithm: The test problems are solved by three
different heuristic algorithms with two different initial solutions.
The minimum value of the objective function of heuristic algorithms for each test
problem is considered to estimate the quality of solutions. This value is compared to
the value of the lower bound of the test problem. The error percentage of each test
problem is shown in the "Best Error" column of Table 8.5. Based on the results, the
average error percentage of the best solutions is equal to 1.00% and the maximum
error is 4.9%, which is obtained in test problem 34. This error percentage is
calculated based on the formula presented in section 8.1.1.1.
Table 8.5 The results of the experiments with test problems for three machine problems
by considering minimization of makespan criterion
Initial 2
Initial 1
=
.
,-.'.
TS1
TS2
TS3
TS1
TS2
TS3
.
1
221
221
0
221
221
221
221
221
221
221
0.000
2
481
494
312
236
242
379
0.03
491
491
481
491
493
481
0.000
0.6
303
303
303
0.000
230
239
230
230
0.03
351
351
351
351
351
230
239
351
0.018
239
309
230
239
303
0.03
309
230
239
494
309
230
239
303
C2
5
226
239
6
351
4
0.03
239
351
0.000
0.000
115
Table 8.5 (Continued) The results of the experiments with test problems for three
machine problems by considering minimization of makespan criterion
Initial 2
Initial 1
.'
-
Nt
(D
1.
Nt
CID
TS1
TS2
TS3
TS1
TS2
TS3
Nt
0
C4
7
8
C5
C6
9
10
11
12
C7
C8
C9
cio
13
14
15
16
C14
C15
C16
C17
C18
C19
C20
C21
C22
305
471
203
389
478
194
18
35
408
334
429
207
498
430
335
409
456
506
345
428
214
398
532
397
350
266
36
571
38
547
656
19
20
22
C13
291
335
17
21
C12
436
268
242
23
24
25
26
27
28
29
30
31
32
34
474
268
244
332
367
352
514
210
408
492
200
413
344
435
214
505
448
342
415
459
546
352
453
214
405
545
458
373
270
638
550
666
511
0.17
0.03
0.03
0.08
0.02
0.23
0.03
0.02
0.09
0.03
0.02
0.05
0.03
0.05
0.01
0.19
0.14
0.02
0.02
0.08
0.03
0.03
0.16
0.06
0.03
0.19
0.02
0.03
0
0.03
0
0.03
0.03
0.03
41
413
652
241
671
261
357
0.25
42
44
554
554
0.02
40
437
0
0
444
268
244
300
342
308
490
215
392
492
200
410
344
433
210
524
448
336
412
459
507
352
432
217
405
539
402
373
270
583
548
665
503
413
660
249
364
554
444
268
244
300
342
308
490
215
392
492
200
410
344
433
209
498
430
336
412
459
507
352
432
217
405
533
402
373
270
583
548
665
503
413
660
249
364
554
444
268
244
300
342
308
490
215
392
492
200
410
344
433
209
498
448
336
412
459
507
352
428
214
405
536
402
373
270
583
548
665
503
413
662
249
364
554
438
268
244
300
340
308
490
210
392
492
200
410
344
430
209
505
448
340
412
456
544
346
428
214
405
533
402
367
270
589
547
666
498
413
655
246
357
554
438
268
244
300
340
308
490
210
392
492
200
410
344
430
209
505
438
268
244
300
340
308
490
210
392
492
200
410
344
430
209
441
439
340
412
456
340
412
456
544
346
428
214
405
505
523
346
428
214
405
533
533
402
367
270
582
547
663
498
413
655
246
357
554
402
367
270
589
547
666
498
413
655
246
357
554
438
268
244
300
340
308
490
210
392
492
200
410
344
430
209
498
430
336
412
456
507
346
428
214
405
533
402
367
270
582
547
663
498
413
655
246
357
554
0.005
0.000
0.008
0.031
0.015
0.010
0.040
0.034
0.008
0.029
0.03 1
0.005
0.030
0.002
0.010
0.000
0.000
0.003
0.007
0.000
0.002
0.003
0.000
0.000
0.018
0.002
0.013
0.049
0.015
0.019
0.000
0.011
0.008
0.000
0.005
0.02 1
0.035
0.000
116
Table 8.5 (Continued) The results of the experiments with test problems for three
machine problems by considering minimization of makespan criterion
Initial 2
Initial 1
a
cM
-t
cM
r-
TS1
TS2
TS3
TS1
TS2
TS3
0.41
0.14
0.05
0.03
0.11
0.02
0.19
0.02
0.03
0.02
0.36
0.63
0.59
1.03
0.83
2.53
1.28
3.05
3.31
10.83
5.76
0.55
0.67
16.58
0.52
742
525
424
648
750
478
703
432
696
641
582
575
403
426
408
533
658
718
536
508
699
399
635
900
599
802
731
571
787
608
831
652
682
804
1048
664
617
847
742
525
422
648
739
478
703
432
648
742
525
422
646
750
478
703
641
641
760
523
422
618
740
475
725
428
656
624
760
523
422
618
740
475
700
428
642
624
746
523
422
618
740
475
724
428
649
624
582
575
403
426
408
533
658
718
536
508
694
399
635
899
599
802
582
575
403
426
408
533
647
717
536
508
694
394
635
900
599
802
581
581
581
576
403
426
408
533
668
717
544
518
694
398
635
897
599
791
576
403
426
408
533
666
717
544
518
694
398
635
897
599
791
731
571
731
571
741
571
576
403
426
408
533
668
717
544
508
694
398
635
897
599
791
733
787
608
787
608
831
652
831
652
787
604
829
672
804
1046
664
617
846
682
804
1048
664
617
854
C
C24
C25
C26
C27
C28
C29
C30
C31
C32
C33
C34
C35
C36
C37
C38
C39
C40
C41
45
46
47
48
734
518
55
417
617
737
466
689
412
634
615
576
56
57
573
403
58
59
424
408
60
533
647
717
529
495
694
384
625
897
596
785
716
562
784
596
825
636
667
800
50
51
52
s
54
61
62
63
64
65
66
67
68
69
70
71
72
74
75
76
78
79
80
81
82
1031
643
609
839
769
531
422
636
763
475
728
429
662
626
629
719
416
475
466
570
724
778
585
555
735
433
637
1017
666
890
776
620
787
609
876
704
739
859
1087
684
676
897
2.92
0.09
0.19
0.69
0.05
3.73
0.16
1.99
2.66
1.48
0.03
1.61
16.47
431
692
571
787
604
829
652
669
653
669
802
1055
802
1055
659
617
847
659
617
847
741
571
787
604
829
652
669
802
1052
659
617
846
742
523
422
618
739
475
700
428
642
624
581
575
403
426
408
533
647
717
536
508
694
394
635
897
599
791
731
571
787
604
829
652
669
802
1046
659
617
846
0.011
0.010
0.012
0.002
0.003
0.019
0.016
0.039
0.013
0.0 15
0.009
0.003
0.000
0.005
0.000
0.000
0.000
0.000
0.013
0.026
0.000
0.026
0.016
0.000
0.005
0.008
0.021
0.016
0.004
0.013
0.005
0.025
0.003
0.003
0.015
0.025
0.013
0.008
117
Table 8.5 (Continued) The results of the experiments with test problems for three
machine problems by considering minimization of makespan criterion
ci)
C42
C43
C44
C45
C46
C47
C48
C49
cso
csi
C52
C53
C54
C55
C56
C57
C58
C59
C60
C61
83
84
85
86
87
88
89
90
91
92
94
95
96
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
509
779
979
873
697
902
754
951
961
777
764
820
527
878
897
774
964
640
695
754
1260
839
747
953
1061
1086
1021
1166
847
791
663
796
1237
1205
693
865
641
920
968
1285
537
946
1087
925
816
990
830
1074
1057
836
864
933
552
969
1018
856
1047
676
771
901
1351
889
776
1044
1183
1184
1177
1288
1004
925
741
961
1394
1316
800
1003
743
1093
1090
1478
Initial 2
Initial 1
I
-
TS1
TS2
TS3
TS1
TS2
TS3
0.78
4
2.72
514
798
514
795
514
798
514
790
514
790
1003
891
1003
888
701
906
1005
891
723
906
775
953
965
777
769
832
528
884
1008
891
701
1005
887
701
906
758
953
961
777
767
834
527
884
906
774
514
790
1008
173
0.08
4.36
0.08
3.67
0.94
0.05
2.08
3.69
1.36
6.3
2.64
0.06
24.63
3.34
1.27
10.42
4.75
1.81
0.03
1.75
1.7
0.63
16.55
12.77
2356
73.36
41.66
451.9
21.75
66.17
25.01
254.9
13
247.3
11.78
1285
716
906
775
953
965
777
769
832
528
898
906
774
971
641
695
758
1276
843
762
971
1072
1092
1046
1188
885
820
672
828
1254
1215
719
887
648
936
975
1308
769
953
965
777
767
830
528
884
906
774
971
640
695
758
1271
843
761
971
1066
1092
1033
1188
864
820
668
822
1254
1215
703
887
648
934
975
1308
906
774
972
640
695
758
1276
843
762
971
1066
1092
1044
1188
885
820
672
828
1254
1215
719
897
648
936
975
1308
906
775
964
961
777
767
834
527
894
906
774
970
641
695
758
1262
845
757
955
1066
1092
1040
1189
863
816
691
827
1260
1238
718
874
644
965
970
1321
969
641
695
758
1262
845
757
955
1066
1092
1040
1187
863
810
691
825
1260
1223
705
874
644
953
970
1306
891
701
906
775
964
961
777
767
834
527
894
906
774
970
641
695
758
1262
845
757
955
1066
1092
1038
1178
863
816
691
827
1260
1228
703
888
644
942
970
1314
514
790
1003
887
701
906
758
953
961
777
767
830
527
884
906
774
969
640
695
758
1262
843
757
955
1066
1092
1033
1178
863
810
668
822
1254
1215
703
874
644
934
970
1306
0.010
0.014
0.025
0.016
0.006
0.004
0.005
0.002
0.000
0.000
0.004
0.012
0.000
0.007
0.010
0.000
0.005
0.000
0.000
0.005
0.002
0.005
0.013
0.002
0.005
0.006
0.012
0.010
0.019
0.024
0.008
0.033
0.014
0.008
0.014
0.010
0.005
0.015
0.002
0.016
118
Table 8.5 (Continued) The results of the experiments with test problems for three
machine problems by considering minimization of makespan criterion
1-c,J
Initial 1
.
TSI
C62
C63
C64
C65
C66
C67
C68
C69
C70
C71
123
1083
941
1162
1028
1088
1537
966
1307
996
1369
1545
1077
888
973
1210
1047
1378
1127
1174
1673
1060
1456
1114
1584
1706
1167
960
1154
1213
161
1028
1106
1568
1255
1369
1393
1242
1423
1252
1415
1170
1284
1296
1503
1763
1526
1411
1080
1293
1206
1676
1955
1697
1558
1188
162
1530
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
C72
C73
C74
C75
C76
C77
C78
C79
C80
C81
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
1131
1721
2.34
16.11
10.58
8.89
12.64
11.09
5,77
339.8
24.28
500.7
27.8
3.48
119.7
483.9
108.9
489.1
77.75
29.77
1110
947
1189
1058
1103
1558
979
1325
1009
1401
1567
1082
895
1003
1033
1133
1606
1286
1400
1453
1273
1457
1253
1420
1170
1300
1325
1539
1797
1543
1447
1086
1322
1223
1686
1984
1715
1596
1207
1370
1568
17.3
1545 18.45
1376 16.41
1722 36.38
1402 0.33
1620 12.53
1324 100.5
1460 362.1
1448 353.4
1677 23.03
1927 54.63
1646 34.73
1654 1380
1261
107
1452
266
1310
74
1788
35
2149
165
1815 33.14
1688 0.78
1274
5
1695
15
1550
Initial 2
TS3
TSI
1108
952
1189
1057
1103
1543
979
1325
997
1387
1567
1082
889
993
1033
1131
1598
1272
1387
1453
1255
1457
1252
1420
1170
1298
1337
1520
1797
1543
1447
1086
1307
1206
1692
1973
1726
1573
1207
1118
952
1189
1058
1103
1558
979
1325
1009
1401
1567
1082
894
1003
1033
1133
1607
1286
1400
1453
1244
1455
1253
1420
1170
1309
1305
1539
1797
1543
1452
1086
1305
1208
1686
1988
1731
1573
1207
1107
958
1190
1046
1096
1545
992
1338
1007
1404
1569
1087
894
993
1043
1129
1598
1279
1395
1443
1258
1458
1255
1448
1197
1295
1347
1514
1802
1555
1448
1093
1305
1217
1695
1989
1742
1587
1201
1550
1550
1565
[ TS2
{ TS2
1107
955
1190
1044
1095
1545
992
1317
1002
1396
1569
1091
894
993
1032
1129
1598
1269
1395
1435
1258
1458
1255
1441
1197
1295
1330
1514
1802
1547
1413
1093
1305
1217
1695
1976
1712
1587
1201
1548
TS3
1101
1101
958
1190
1046
1096
1545
971
1338
1007
1404
1569
1089
894
1001
1043
1129
1598
1279
1395
1443
1258
1458
1255
1448
1197
1295
1336
1514
1777
1555
1445
1101
1305
1220
1695
1989
1742
1587
1201
1558
947
1189
1044
1095
1543
971
1317
997
1387
1567
1082
889
993
1032
1129
1598
1269
1387
1435
1244
1455
1252
1420
1170
1295
1305
1514
1777
1543
1413
1086
1305
1206
1686
1973
1712
1573
1201
1548
0.017
0.006
0.023
0.016
0.006
0.004
0.005
0.008
0.001
0.013
0.014
0.005
0.001
0.021
0.004
0.021
0.019
0.011
0.013
0.030
0.002
0.022
0.000
0.004
0.000
0.009
0.007
0.007
0.008
0.011
0.001
0.006
0.009
0.000
0.006
0.009
0.009
0.010
0.011
0.012
119
The normal probability plot of the residuals confirms that the residuals have a normal
distribution (Figure 8.3). Thus, there is evidence that the parametric statistics-based
analysis of variance (ANOVA) can be used to further analyze the results.
30
20
10
H
-20
-30
01
.1
1
5
10
25
50
15
90
95
99
19.9
99.99
Nar,1 Prcnt le
Figure 8.3 The normal probability plot of the experimental design of finding the best
heuristic algorithm for three machine problem by considering minimization of
makespan
The experimental design is performed by applying SAS 9.1 to find the best heuristic
algorithm as well as the best initial solution. The ANOVA table is shown in Table A.3
of appendix. The results of the experiment show that there is a significant difference
among the objective function values of heuristic algorithms (the result of F test is less
than 0.0001). To find the best heuristic algorithm, a Tukey test is performed. The result
of Tukey's test shows that TS2 has a better performance compared to the other two
heuristic algorithms.
The results of the experimental design show that there is no difference between
applying different initial solution generators for three machine problems (the result of F
test is equal to 0.2732).
120
Among the interactions, the interaction between the group factor and all sub-plot
factors (G*A and G*I) are significant. This supports the importance of group as a
factor.
The other significant interactions are R1*I, G*J*I, G*R2*I, J*j*, G*J*R2*I,
G*R1*R2*I, and G*J*R1*R2*I.
A test of effect slice is performed to obtain detailed information on the highest
significant order interactions for the algorithm and the initial solution effects i.e., G *A
and G*J*R1*R2*I by considering Tukey-Kramer adjustment. The results are shown in
Table A.4 in appendix. Based on the results, the summary of significant differences are
as follows:
For large size group problems, there is a significant difference among the
performance of heuristic algorithms. Based on Tukey' s test results, for these
problems, TS2 has a better performance than the other heuristic algorithms.
For small size group, large size job, and the third level of set-up ratio for both
RI and R2 factors, there is a significant difference between the performance of
heuristic algorithms. In these problems, the second initial solution generator has
a better performance.
8.1.2.2 The Experimental Design to Compare the Time Spent for Heuristic
Algorithms for Three Machine Problems by Considering Minimization of
Makespan
The time spent to terminate the search algorithm and the time spent to find the best
solution for each heuristic algorithm are shown in Table 8.6 for all test problems.
121
Table 8.6 The time spent for the test problems of three machine problems
(in seconds) by considering minimization of makespan criterion
Initial 1
S
_L_
2
C3
C4
0
0
0
0
p--4
0
0
6
0
0
.L.
Q
p
8
cs
0
--
JL
0
2
12
0
0
0
P
2
0
0
0
2
2
2
eD
D
D
L JL iL
Q
0
0
0
0
0
0
0
0
0
0
p--
0
0
20
0
eD
JL JL _2_ _Q_
0
JL JL
0
0
0
2
0
0
0
0
0
0
0
0
p-p--
0
0
0
0
0
0
LJ
0
0
0
0
0
P_
0
0
L
0
JL
_Q0
0
L JL JL JL
1
1
iL JL
0
0
0
0
0
0
L JL
0
__
0
P
L
0
0
0
0
0
0
0
0
p
0
0
0
0
0
0
0
o
0
0
0
0
0
0
0
_
_
Q
0
L
JL
0
iL
Q
Q
Q.
Q
Q
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
IL
9
p-_L
.__
0
0
0
0
0
0
0
0
0
--
2
0
0
0
0
0
0
p--
-p---
p--
C 15
30
0
32
0
C16
34
0
36
0
0
p--0
__
0
0
0
2_
0
_
0
0
--
p---
0
0
L
L.
-0
0
0
0
P
0
0
0
0
0
0
-- -0
2
L JL JL
0
0
0
±
0
0
JL JL
0
0
0
0
0
0
0
0
0
0
0
0
0
0
L
L
0
0
p--
p--0
0
p--
p--
JL JL JL JL
p--
p--
Cl 8
0
p---
--
C17
p---
L
___ __0
0
0
0
0
22
28
_
0
L L
C 14
0
0
Q_
p
J
0
Q
26
L
0
0
_Q.
C13
0
cL
IL
24
0
0
L iL
0
-L ___ JL
0
0
JL JL
JL JL
0
.L _L
0
0
0
__P_-
0
Q
±L
2
2
0
p--
C 10
eD
p---
P
p18
2
2
0
p-- _2_ .JL
0
TS3
p--
JL JL JL JL
-u--
TS2
TS1
0
_Q_ p-- p-0
C9
C12
0
0
16
cii
0
10
14
C8
2
p--
C6
C7
2
JL JL JL
2
C2
TS3
S
2
ci
Initial 2
0
0
122
Table 8.6 (Continued) The time spent for the test problems of three machine
problems (in seconds) by considering minimization of makespan criterion
Initial 1
TSI
TS2
2
D
2
2
.Q
L
P
1
1
0
0
40
0
0
0
0
0
0
L
Q.
Q
0
0
0
0
0
0
0
0
0
0
0
0
0
P
0
9
0
0
44
-p-0
L
0
0
-p48
0
0
0
0
0
0
0
52
C28
-v-
60
1
±!-
--
62
1
0
0
0
0
J
0
0
0
0
0
0
0
0
0
0
0
0
0
-p--
--- -p--
0
0
0
0
P
0
0
0
0
0
2
0
3
1
1
0
_2__
1
-i-_
6
0
1
0
0
0
0
0
C32
64
C33
66
-
-p-0
-p0
6
1
-k2
12
7
14
1
1
-p4
0
5
0
0
0
-72
L
p--f-0
0
0
_2__
0
0
0
0
0
0
0
0
0
0
0
0
L
L
L JL
0
0
0
0
1
1
0
1
0
00-- -p--
0
C38-f-76
1
0
0
0
-p1
5
0
-p-
1
L
± L
3
1
9
0
0
0
0
0
0
L
_i_
3
L
5
1
0
Q
0
_i__
_±_
0
P
4
0
0
1
1
__ __
3
L
6
L
1
0
J
0
.JL
0
--- -p-- ___ i
0
L
0
0
0
15
0
1
5
1
0
0
0
JL
±
L
3
0
P
15
0
1
0
6
0
0
0
0
0
0
.L
-p----
JL
0
0
0
L
0
C37
74
JL
P
3
-p--70
2
-p--
-p--
Q
C34
68
0
0
Q_
-p-
C30
0
-- -p--
1
_2_
-- -p--
-p--
1
0
2
JL JL JL
L JL JL L JL
-p-
L L
0
±
0
0
1
58
.JL
L
__
0
1
1
0
0
0
-p--
-p---
-p--
56
0
0
0
0
0
0
0
0
P
0
0
-- -p--
0
0
eD
P
JL
0
2
D
P
0
0
0
-p---
54
0
2
P.
0
±
-- --
C26
2
TS3
0
-p--
-p---p--
50
L
L L
46
C25
....
-- -p--
C23
C36
2
0
C22
C35
2
fD
Q
42
C3 1
2
0
C21
C29
2
TS2
9
C20
C27
3
eD
TSI
TS3
38
Cl 9
C24
Initial 2
1
!!_
1
7
0
0
__- __
2
1
-- -- --- --- --0
0
0
0
0
0
12
9
11
0
123
Table 8.6 (Continued) The time spent for the test problems of three machine
problems (in seconds) by considering minimization of makespan criterion
Initial 1
TS1
-
2i
2
---
C40
2
_IP._
__
.__
19
5
20
2
0
0
0
82
29
27
137
69
196
64
37
79
2
4
L
0
L
L
84
21
17
86
0
0
88
12
L
C43
!2
90
C46
4
5
5
5
_i_ __ _L
6
3
1
0
0
0
33
29
li
L L
L
22
12
88
P
1
19
17
112
±
P
JL
..L
5
23
4
0
0
±
1
-135
00
L L
L
4
1
L L
L
L L i
L L
48
90
±
1 L JL L 1± L
4
4
P
L
P
32
4
4
4
--- 37 _i_ 48L _ _2__ L JL
±
i
il.
L il
L
0
88
35
137
2
5
_!___
2
84
18
4
116
3
2
2
--
5
15
5
13
51
148
71
149
53
1
1
1
1
0
12
6
90
17
88
7
.
49
2
98
0
100
1
0
6
3
7
102
14
9
61
13
109
26
13
6
0
3
1
6
1
1
0
23
125
150
0
1
0
0
.± ±L
±L J
i
27
5
96
1
125
5
25
0
0
0
2
L JQ
18
0
37
5
185
L ±
101
94
L
i
25
23
5
cso
_±L
61
1
27
5
18
1
-p----
-t
5
L JL JL
20
92
C48
-
L
0
C47
6
1
0
iQ.
1
3
±
5
3
±L
J
I
2
8
3
68
11
72
11
6
0
6
0
17
6
± L .± ±
ci
C52
fL
--'104
1
_P_
C53
C54
TS3
.±
J
-p--
C44
C49
0
-p-
C42
TS2
TS1
-p-
-- ---
80
C41
C45
-
2
2
78
TS3
_
E
C39
Initial 2
TS2
106
2
LQ7
iL
Q
L
L.
1
15
3
L
16
Q
3
±
2
JL L
2
17
6
L i
L i
108
6
1
34
2
25
2
5
4
17
5
25
7
110
15
15
89
27
110
33
16
9
92
71
86
21
IL
1L
112
28
7
191
14
213
148
217
14
263
149
--
Ji
iL
_
L
C55---- -- -- -- -- --- ---- -- -C56
C57-114
35
--17
251
--- -- -- --18
205
20
27
193
251
47
38
--- ---- ---
--247
41
36
124
Table 8.6 (Continued) The time spent for the test problems of three machine
problems (in seconds) by considering minimization of makespan criterion
Initial 2
Initial 1
TS2
TS1
-
TS3
TS2
TS1
TS3
-
_
E
2
2
2
2
(9
C58
-- _2_ -i-- -!l116
32
25
118
18
120
61
_
C60
15
40
128
357
158
122
4!
266
-p-
C62----124
29
83
18
486
110
61
_i__ _L
5
126
13
.i_
-_
44
_i__
_
465
142
--_
_
53
64
8
21
3
108
63
103
5
274
189
43
1129
115
1051
88
488
228
78
977
548
1137
148
iQ
-L
447
198
2117
1650
2784
396
23
128
31
147
48
449
168
9
9
53
138
24
755
1337
384
1692
18
--- p---- ------ --- --- --- -- -J
-- -- ---p--- -- -----
121
1260
130
22!
176
1471
132
448
118
-2378
639
2266
±
i*
1L
23
223
4!L J
42
32
2
C66
---- ----
134
30
136
58
138
131
15
145
25
!QL
233
2
2
12!_
__
287
432
69
58
8
420
20
910
417
1030
305
132
5
1036
i
i2
_ __ __
J9_
106
iL JL
i
C70
50
33
284
148
---66
1067
C71--C72--137
146
142
J42
148
i2P
284
150
456
152
152
C73
65
922
-i---
-----
129
995
J±
179
347
127
38
51
11L
14
185
95
274
27
----
181
150
79
1215
604
1166
228
177
893
130
135
29
1051
82
806
59
316
1139
396
139
--- --- --J2
695
1719
1053
868
244
--
C76
14
1175
1502
--
---
C75
Z2
J2
160
L
1768
97!
-- -- -- --- -- --- ---- ---- --- ----- ---- -223
146
-----
---'--
144
J
15
1L
L
J.L i
---
C69
J2
400
---
C68
142
127
64
65
196
140
328
J!
_
3
8
120
C65---
_!__
421
8
106
278
41
9
C64---
_2L
217
294
78
21
C67
170
226
277
57
128
210
252
70
8
126
14
_i.
10
C63--
C74
21
J
p--
-p-
C61
65
___!__ _
_
38
(9
-
120
_
2
2
(9
-L _2_ -- -!--__ _IL _J_
219
-- -p-
C59
2
(9
(9
(9
61
205
2
2
1751
±
623
---
_
2i_
115
J9
267
128
467
260
147
456
136
784
137
768
136
JQ
957
JL
i2iQ
_ZL
1913
178
2579
255
133
775
565
457
150
J1
-
±L
129
--
525
J
290
471
ii
125
Table 8.6 (Continued) The time spent for the test problems of three machine
problems (in seconds) by considering minimization of makespan criterion
Initial 2
Initial 1
TS2
TS1
H
C
H
C
H
C
*
E
J-
C78
C79
C80
C8
-2
2
-- _I
H
C
H
C
E
2
C77
TS3
TS2
TS1
TS3
H
2
2
2
J2!
2
iL J4
_2
JI1_
112
154
97
97
571
205
617
201
95
53
718
145
641
121
155
224
190
1248
948
1534
331
224
35
1636
187
1663
178
156
131
57
835
459
864
119
132
101
673
123
596
29
157
203
25
1514
892
1532
70
199
11
607
12
588
13
158
401
373
2833
2435
3037
1074
406
327
3071
2256
3105
910
159
325
305
1965
826
1871
669
302
25
2211
1145
1715
48
160
224
148
1528
684
1695
763
228
27
719
34
1757
78
161
64
31
406
61
502
91
68
18
496
52
501
52
162
142
123
785
277
821
252
143
93
1097
898
1114
1095
2_
-
-
1
The normal probability plot of the residuals is shown in Figure 8.4. The residual plot is
close to a line. It shows that ANOVA can be used to analyze the results.
750
-I-
-
500
4
250
H
0
250
500
+
tit4d4
+
750
01
.1
1
5
10
25
50
75
90
95
9
9
99
9
N0..-_oI
Figure 8.4 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for three machine problem by considering minimization of
makespan
The ANOVA table for comparison of time spent is presented in Table A.5 in appendix.
The results of the experiment show that there is a significant difference among the time
spent for heuristic algorithms (the result ofF test is less than 0.000 1). The Tukey test is
126
applied to find the difference. The result of Tukey' s test shows that the time spent by
TS 1
is less than the other two heuristic algorithms. This result was expected as
discussed in previous sections.
The results of the experimental design show that there is not a significant difference
between the time spent by algorithms in applying different initial solution techniques
for three machine problems (the result ofF test is equal to 0.1929).
Among the interactions, the interactions between the algorithm factor and group and
job factors (G*A, J*A, G*J*A, R1*R2*A, and G*R1*R2*A) are significant. This
result was expected, because by increasing the size of the problems (increasing the
number of groups or number of jobs in a group), the difference between the time spent
by TS1 and the other two heuristic algorithms (TS2 and TS3) will be increased.
The effect slice test is performed for more detailed comparisons on the highest
significant order interaction for the algorithm effects i.e., and G*J*R1*R2*A by
considering Tukey-Kramer adjustment. Based on the results for any large size group
problem, there is a significant difference between the time spent by the algorithms. In
all of these problems, TS1 required less time compared to TS2 and TS3. This result was
expected because by increasing the size of the problems, the difference between time
spent by TS1 compared to TS2 and TS3 will be increased.
8.1.2.3 The Comparison between the Best Tabu Search and the Results of Schaller
et al. (2000) Algorithm for six machine Problems
The results of Schaller et al. (2000) algorithm are compared to the result of the heuristic
algorithms in this section by applying a paired t-test. The result of Schaller et al. (2000)
algorithm for test problems is presented in Table B.2 of appendix. As discussed in
section 8.1.2.1, TS2 has the best performance compared to the other algorithms.
Because there is no difference between the initial solution generators, the results of TS2
by considering the first initial solution generator is applied to be compared with the
results of Schaller et al. (2000) algorithm. The result of the paired t-test shows a
127
significant difference between the results of two algorithms. In other words, TS2 has a
better performance compared to Schaller et al. (2000) algorithm for three machine
problems.
The average error percentage of Schaller et al. (2000) algorithm for the test problems is
equal to 9% and the maximum of percentage error for a test problem is equal to 25%.
These results are too high compared to the one obtained by the proposed heuristic
algorithms (1.00%).
8.1.3 The Results of Six-Machine Makespan Criterion
The heuristic algorithms are applied to solve all 54 test problems for six machine
problems to find the algorithm with the best performance and the best initial solution
generator. The lower bounding technique is also applied for each test problem to
evaluate the quality of solution. The Schaller et al. (2000) algorithm for each test
problem is also applied to compare the results with the heuristic algorithm. The results
are presented in the sections below.
8.1.3.1 Comparison among Heuristic Algorithms and the Lower Bound
The results of performing the heuristic algorithms and the result of the lower bounding
technique are shown in the tables below. The performance of lower bounding technique
for small size problems was not good enough. Thus, four of the small size problems in
which the lower bounding technique could not find a good quality lower bound, and
can be solved optimally in negligible time, are solved optimally by the original
mathematical model. Table 8.7 shows the results of the heuristic algorithms as well as
the result of the best heuristic algorithm for each test problem. In Table 8.8, the optimal
solution of those small size problems which are solved optimally, the result of the
lower bounding technique, the time spent to solve the problems with the lower
bounding mathematical model, the result of Schaller et al. (2000) algorithm, and the
minimum objective function value obtained by the heuristic algorithms are shown. The
128
minimum value of the objective function of the heuristic algorithms for each test
problem is considered to estimate the quality of solutions. This value is compared to the
value of the lower bound or the objective function value of the optimal solution (for
those test problems which are solved optimally) of the test problems. The error
percentage of each test problem is shown in the "Best Error" colunm of Table 8.8.
Based on the results, the average error percentage of the heuristic algorithm is equal to
1.60% and the maximum error is 7.8%, which is obtained in test problem 15. This error
percentage is calculated based on the formula presented in section 8.1.1.1.
Table 8.7 The results of the experiments with test problems for
six machine problems by considering minimization of makespan
criterion
Initial 1
TS1
TS2
TS3
TS1
TS2
TS3
1
1688
1683
1688
1688
1676
1688
1676
2
1086
1086
1086
1086
1086
1086
1086
279
279
279
279
279
279
279
169
169
169
169
169
169
169
1420
1420
1420
1420
1420
1420
1420
797
797
797
797
797
797
797
1878
1878
1878
1863
1863
1863
1863
1477
1477
1477
1477
1477
1477
1477
224
224
224
217
217
217
217
410
410
410
424
424
424
410
i
1466
1466
1466
1472
1472
1468
1466
2
1818
1795
1809
1795
1795
1795
1795
3
786
786
786
786
786
786
786
4
1179
1179
1179
1179
1179
1179
1179
5
496
496
496
496
496
496
496
16
418
418
418
409
409
409
409
17
1583
1583
1583
1583
1583
1583
1583
C2
4
C3
C4
8
C5
10
C6
C7
C8
C9
cio
cii
C12
C13
Initial 2
18
1218
1218
1218
1212
1212
1212
1212
19
3087
3087
3087
3087
3087
3087
3087
20
2633
2633
2633
2631
2631
2631
2631
21
451
451
451
454
452
452
451
22
527
527
527
532
532
528
527
23
2782
2763
2782
2763
2763
2763
2763
24
2108
2108
2097
2097
2097
2097
2097
25
2172
2172
2172
2172
2172
2172
2172
26
2459
2459
2459
2459
2459
2459
2459
129
Table 8.7 (Continued) The results of the experiments with
test problems for six machine problems by considering
minimization of makespan criterion
Initial 1
Initial 2
-.
eD
#)
ri
27
C15
C16
TS3
TSI
TS2
TS3
564
564
564
564
565
564
564
632
632
645
633
631
633
631
29
2538
2538
2538
2528
2528
2528
2528
30
2223
2223
2223
2223
2223
2223
2223
31
3269
3268
3269
3269
3266
3269
3266
32
2945
2945
2945
2945
2945
2945
2945
685
682
683
681
682
682
681
34
C18
36
C19
38
983
979
981
980
983
984
979
3769
3769
3769
3755
3749
3755
3749
2942
2942
2942
2942
2942
2942
2942
5305
5299
5305
5272
5272
5272
5272
3942
3941
3941
3954
3948
3946
3941
728
731
731
743
728
728
734
40
740
740
751
758
749
755
740
41
3803
3797
3803
3803
3797
3803
3797
42
4775
4768
4781
4778
4778
4778
4768
4479
4478
4479
4473
4473
4473
4473
5428
5428
5428
5449
5449
5449
5428
1075
1075
1094
1100
1094
1090
1075
993
979
993
990
986
991
979
5296
5288
5296
5297
5286
5291
5286
5633
5633
5633
5643
5627
5643
5627
4947
4920
4947
4943
4943
4943
4920
50
5045
5035
5045
5051
5032
5051
5032
51
1092
1092
1106
1082
1077
1084
1077
52
1194
1176
1206
1189
1189
1198
1176
5900
5899
5900
5923
5891
5919
5891
5866
5848
5848
5882
5879
5872
5848
C20
C22
44
C23
C24
48
C25
C26
TS2
28
C17
C21
TS1
C27
54
130
Table 8.8 The lower bound value of test problems for six machine problems by
considering minimization of makespan criterion
D
D
rD
E
0
D
0
-.
e'
1666
0.05
1682
1666
1676
0.006
0.006
2
1086
0.02
1086
1086
1086
0.000
0.000
3
279
262
0.2
284
279
279
0.065
0.000
4
169
156
0.03
204
169
169
0.083
0.000
1391
0.03
1427
1391
1420
0.021
0.021
C3
6
797
C4
8
CS
9
10
C8
C9
ClO
C12
C13
C14
C15
C16
217
753
0
797
797
797
0.058
0.000
1863
0.05
1871
1863
1863
0.000
0.000
1477
0.02
1477
1477
1477
0.000
0.000
195
0.02
228
217
217
0.113
0.000
390
0.66
424
390
410
0.051
0.051
1442
0.02
1481
1442
1466
0.017
0.017
12
1765
0.05
1795
1765
1795
0.017
0.017
13
786
0.01
786
786
786
0.000
0.000
14
1179
0.02
1184
1179
1179
0.000
0.000
15
460
0.16
497
460
496
0.078
0.078
16
383
0.19
411
383
409
0.068
0.068
17
1530
0.02
1598
1530
1583
0.035
0.035
18
1159
0.02
1215
1159
1212
0.046
0.046
19
3062
0.11
3310
3062
3087
0.008
0.008
20
2631
0.09
2666
2631
2631
0.000
0.000
21
430
2.28
477
430
451
0.049
0.049
22
506
60.92
543
506
527
0.042
0.042
23
2748
0
2805
2748
2763
0.005
0.005
24
2084
0.09
2097
2084
2097
0.006
0.006
25
2172
0.06
2195
2172
2172
0.000
0.000
26
2459
0.5
2465
2459
2459
0.000
0.000
27
553
19.33
582
553
564
0.020
0.020
28
619
34.17
650
619
631
0.019
0.019
29
2500
0.05
2555
2500
2528
0.011
0.011
30
2189
0.05
2257
2189
2223
0.016
0.016
31
3266
0.09
3282
3266
3266
0.000
0.000
32
2945
0.08
2967
2945
2945
0.000
0.000
667
7.48
705
667
681
0.021
0.021
949
408.47
1021
949
979
0.032
0.032
3730
2.13
3838
3730
3749
0.005
0.005
2898
0.08
2952
2898
2942
0.015
0.015
C6
C7
0
.
1
Cl
C2
0
-.
C17
34
C18
36
131
Table 8.8 (Continued) The lower bound value of test problems for six
machine problems by considering minimization of makespan criterion
0
00
eD.
r
0
©
E
94
5337
5255
5272
0.003
0.003
3.22
3940
3927
3941
0.004
0.004
694
36550
746
694
728
0.049
0.049
40
721
21253
801
721
740
0.026
0.026
41
3763
4.58
3834
3763
3797
0.009
0.009
42
4726
1.3
4800
4726
4768
0.009
0.009
4
4455
6.41
4506
4455
4473
0.004
0.004
44
5396
48.17
5459
5396
5428
0.006
0.006
1033
58574
1120
1033
1075
0.041
0.041
C20
C23
46
C24
48
957
53166
1035
957
979
0.023
0.023
5225
20.97
5317
5225
5286
0.012
0.012
5572
24.19
5688
5572
5627
0.010
0.010
4911
0.92
4973
4911
4920
0.002
0.002
50
5027
25.03
5067
5027
5032
0.001
0.001
51
1047
18571
1136
1047
1077
0.029
0.029
52
1136
23506
1239
1119
1176
0.051
0.035
53
5829
16.16
5957
5829
5891
0.011
0.011
54
5802
1.31
5914
5802
5848
0.008
0.008
C25
C26
0
3927
38
C22
©
5255
C19
C21
eDeb
B
C27
The normal probability plot of the residuals confirms that the residuals have a normal
distribution (Figure 8.5). Thus, there is evidence that the parametric statistics-based
analysis of variance (ANOVA) can be used to further analyze the results.
15
+
10
5
F'
0
-s
-10
+
1s
1
I
S
tO
2S
50
75
90
SS
99
99. 9
flor.,.al ra,-cen tiles
Figure 8.5 The normal probability plot of the experimental design of finding the best
heuristic algorithm for six machine problem by considering minimization of makespan
132
The SAS 9.1 is used to perform the experimental design to find the best heuristic
algorithm as well as the best initial solution. The ANOVA table is shown in Table A.6
in appendix. The results of the experiment show that there is a significant difference
among the objective function values of heuristic algorithms (the result of F test is equal
to 0.0004). To find the best heuristic algorithm, a Tukey test is performed. The result of
Tukey's test shows that TS2 has a better performance compared to the other two
heuristic algorithms.
The results of the experimental design show that there is no difference between the
initial solution generator for six machine problems (the result of F test is equal to
0.3 344).
Among the interactions, the interaction between the group factor and all sub-plot
factors (G*A and G*I) are significant. This supports the idea of importance of the
group factor. The significant interactions that include the initial solution factor are
G*R1*J, J*Rl*I, and G*J*R1*I.
Table A.7 in appendix shows the result of the effect slice test for detailed comparisons
by considering the highest significant order interactions for the algorithm and the initial
solution effects i.e., G *A and G*J*R1 *J based on Tukey-Kramer adjustment. This
table shows the performance of the heuristic algorithms as well as the initial solutions
for each cell of the experimental design. Based on the result, the significant differences
are as follows:
For all large size group problems, there is a significant difference among the
performance of the heuristic algorithms. In these problems, TS2 has a better
performance compared to the other heuristic algorithms. This result came from a
Tukey test.
For large size group, large size job, and the third level of set-up ratio factor, (the
set-up time of groups on machines are decreased from M1 to
M6),
there is a
significant difference between the performances of initial solutions. In these
problems, the random initial solution generator has a better performance.
133
8.1.3.2 The Experimental Design to Compare the Time Spent for Heuristic
Algorithms for Six-machine Problems by Considering Minimization of Makespan
The time spent to terminate the search algorithm and the time spent to find the best
solution for each heuristic algorithm are shown in Table 8.9 for all test problems.
Table 8.9 The time spent for the test problems of six machine problems (in seconds)
by considering minimization of makespan criterion
Initial 1
TS1
C
2
ci
e
e
C
C
*
D
C
eD
2
2
2
2
eD
2
2
_L JL JL
0
0
0
0
L L
2
0
0
0
0
0
0
0
0
p--
0
0
0
0
4
0
0
0
0
0
0
0
0
Q
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
p--
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
p
0
--10
0
!L
12
0
0
C7
14
0
0
_2_
C8
2
2
TS3
{
eD
8
cli
C
eD
C4
cio
C
TS2
TS1
2
6
C9
TS3
e
C3
C6
TS2
2
C2
C5
Initial 2
e
JL JL.
0
0
0
0
0
0
0
0
0
0
0
0
0
0
JL
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
00
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
L
0
0
16
0
0
0
0
0
0
0
0
0
0
0
0
_iL
0
0
0
0
0
0
0
0
0
0
0
0
18
0
0
0
0
0
0
0
0
0
0
19
1
0
10
0
9
1
1
0
8
1
8
1
20
0
0
1
1
1
0
1
0
1
0
0
0
0
0
0
0
0
0
1
0
22
1
0
3
0
4
0
2
0
3
1
24
0
0
IL
C12
2
1
2
0
0
0
0
0
00
00
10
00
0
1
0
0
2
0
1
0
0
0
0
0
134
Table 8.9 (Continued)The time spent for the test problems of six machine
problems (in seconds) by considering minimization of makespan criterion
Initial 1
Initial 2
TSI
TS2
0____
____
eD
o
TS2
TS1
TS3
o
2
:
2
2
e
eD
2
eD
2
2
5
e
D
D
ep
eD
2
2
2
eD
et
2
0
0
0
0
26
0
0
1
0
1
0
0
0
1
0
1
0
27
2
0
7
0
9
1
2
1
9
1
7
2
28
4
3
19
6
22
4
3
1
21
10
21
1
_2_
_L
0
3
0
6
0
1
0
5
1
4
1
30
0
0
1
0
1
0
0
0
1
0
1
1
31
6
4
46
31
46
13
6
4
29
17
28
8
32
3
1
25
3
24
3
3
0
15
0
16
1
33
5
2
15
12
13
7
4
1
16
3
12
5
34
45
11
204
197
177
27
45
36
227
166
195
42
35
22
5
135
11
118
6
23
16
122
61
65
20
36
3
2
19
3
25
8
3
2
23
5
25
6
37
23
20
187
182
177
60
23
18
188
50
168
50
38
4
3
33
27
33
22
4
3
29
11
30
15
39
14
12
98
30
97
28
13
7
100
18
94
32
40
17
7
103
14
106
14
17
3
123
116
117
37
41
5
5
31
14
35
12
5
1
35
21
34
3
42
21
12
103
64
140
119
20
19
153
53
107
34
43
18
7
137
110
136
19
17
8
112
19
112
19
44
86
31
668
93
666
93
80
2
659
6
661
6
45
108
49
750
129
895
216
113
88
873
478
891
178
46
76
24
496
269
424
381
77
74
496
437
437
105
47
90
66
514
316
684
174
88
54
740
438
624
600
48
166
117
1061
281
1138
309
141
35
1137
1048
1108
103
49
145
133
1111
973
1075
393
144
23
412
25
395
25
50
116
55
942
838
946
163
119
78
592
560
497
134
110
99
8
768
455
774
201
91
153
130
1142
377
1115
260
C13
C14
Cl 5
TS3
Cl 6
c 7
C18
C19
C20
C2 1
C22
C23
C24
C25
C26
0
0
51
98
8
688
282
751
52
139
100
985
881
1102
285
206
1889
1634
2455
618
307
209
907
520
2293
1943
362
24
2709
2634
2298
2242
346
67
1799
898
1020
677
C27
54
The normal probability plot of the residuals is shown in Figure 8.6. The plot confirms
that the residuals have a normal distribution. Thus, there is evidence that the parametric
statistics-based analysis of variance (ANOVA) can be used to further analyze the
results.
135
750
+
500
250
+
II
0
-250
+ +
-500
I-,-750
.1
1
5
10
25
50
75
90
95
99
99.9
Nn.-n1 PtI1e
Figure 8.6 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for six machine problem by considering minimization of
makespan
The ANOVA table for comparison of time spent is presented in Table A.8. The results
of the experiment show that there is a significant difference among the time spent for
heuristic algorithms (the result ofF test is less than 0.0001). The Tukey test is applied
to find the difference. The result of Tukey' s test shows that the time spent by TS 1 is
less than the other two heuristic algorithms.
The results of the experiment show that there is a significant difference between the
time spent by algorithms in applying different initial solution generators for six
machine problems (the result of F test is equal to 0.0256). If the search is initiated with
the second initial solution (Schaller el al., 2000), the search requires less time than the
first one.
Among the interactions, all interactions between the algorithm factor and the whole
plot factors (G*A, J*A, R1*A, G*J*A, G*R1*A, J*R1*A, and G*J*R1*A) are
significant. This result was expected, because by increasing the size of the problems
(increasing the number of groups or number ofjobs in a group), the difference between
the time spent by TS 1 and the other two heuristic algorithms (TS2 and TS3) will be
increased.
136
The interaction of the initial solution factor and the group and job factors (G*I, JJ, and
G*J*I) are significant as well. The interactions of whole plot factors (G, J, and Ri) with
sub-plot factors (A and I) are also significant.
Table A.9 in appendix shows the result of the effect slice test for detailed comparisons
by considering the highest significant order interactions for the algorithm and the initial
solution effects i.e., G*J*R1*A and G*J*Rl*1 based on Tukey-Kramer adjustment.
This table shows the performance of the heuristic algorithms as well as the initial
solutions for each cell of the experimental design. Based on the result, the significant
differences are as follows:
For large size group, medium and large size job problems, there is a significant
difference between the time spent by the algorithms. in all of these problems,
TS1 required less time compared to TS2 and TS3. This result was expected
because by increasing the size of the problems, the difference between time
spent by TS1 compared to TS2 and TS3 will be increased.
For large size group, large size job, the first and the third level of set-up ratio
factor problems, there is a significant difference between the performance of
initial solutions. Iii both of these cases, the second initial solution has a better
performance.
8.1.3.3 The Comparison Between the Best Tabu Search and the Results of Schaller
et al. (2000) Algorithm for Six-Machine Problems by Considering Minimization of
Makespan Criterion
In this section, a paired t-test is performed between the results of the best tabu search
algorithm and the results of Schaller et al. (2000) algorithm for test problems. The
result of Schaller et al. (2000) algorithm for test problems is presented in Table B.3 of
appendix. As discussed in section 8.1.3.1, TS2 has the best performance compared to
the other algorithms. Because there is no difference between the initial solution
generators, the results of TS2 by considering the first initial solution generator is
applied to be compared with the result of Schaller et al. (2000) algorithm. The result of
137
the paired t-test shows a significant difference between the results of two algorithms. In
other words, TS2 has a better performance compared to the Schaller et al. (2000)
algorithm for six machine problems.
The average error percentage of Schaller et al. (2000) algorithm for the test problems is
equal to 7% and the maximum percentage error for a test problem is equal to 31%.
These results are too high compared to the one obtained by the proposed heuristic
algorithms (1.60%).
8.2 The Results for Minimization of Sum of the Completion Times Criterion
The results for two, three, and six machine problems by considering the minimization
of sum of the completion times criterion are as follows:
8.2.1 The Results of Two-Machine Problems by Considering Minimization of Sum
of the Completion Times Criterion
All 54 test problems of two machine problems are solved by heuristic algorithms to
find the algorithm with the best performance and the best initial solution generator. In
the interest of time, the lower bounding technique is also applied to only some of the
test problems to evaluate the quality of solutions. The results are presented in three
sections.
.
In the first section, the results of the heuristic algorithms are presented. The result
of the experimental design to find the algorithm with the best performance as well
as for finding the best initial solution generator are presented as well.
.
In the second section, an experimental design to compare the time spent for
heuristic algorithms is performed and the results are presented.
.
In the third section, the result of the lower bounding technique for a few test
problems to estimate the quality of solutions is presented. For this criterion, because
solving the decomposed problem requires an enormous amount of time for large
size problems, in the interest of time, only a few of test problems are solved by the
lower bounding technique to evaluate the quality of solutions.
138
8.2.1.1 Comparison Among Heuristic Algorithms for Two Machine Problems by
Considering Minimization of Sum of the Completion Times
The results obtained from applying the heuristic algorithms are shown in Table 8.10
by
using two different initial solution generators. In this table TS1 stands for the tabu
search algorithm with short term memory, TS2 stands for the LTM-Max, and TS3
stands for LTM-Min.
Table 8.10 The results of the test problems for two machine problems
by considering minimization of sum of the completion times
Initial I
-u
&
CD
Initial 2
C)
-
a
c
a
a
.
cn
.
-.
Q
g
TSI
TS2
TS3
TSI
TS2
TS3
1
4
4
13
2011
2011
2011
2011
2011
2011
2
3
4
10
1425
1425
1425
1436
1436
1436
3
3
4
8
811
811
811
811
811
811
3
-p---
454
454
454
454
454
3290
986
2925
986
2906
2592
1300
454
2906
986
5796
4310
2592
1300
-5
5
4
16
6
3
4
8
3290
986
7
5
7
28
6061
6061
6061
8
4
6
22
9
3
7
17
10
2
7
13
4310
2600
1263
4310
2600
1263
4310
2607
1263
2906
986
5796
4310
2592
1300
11
5
7
31
9064
9047
9064
9024
8876
8876
12
4
7
25
5
10
31
5595
8442
5589
13
5579
8423
5579
8423
5493
8085
14
2
8
15
2091
2091
5
9
5
10
35
40
8012
16
2160
8012
10456
2091
15
2160
8012
10456
5497
8160
2132
8787
10453
8787
10340
8254
10453
17
4
9
31
18
4
10
28
6336
5590
3406
6336
5563
3233
5960
4129
5643
6365
5563
3233
5952
4129
5643
10096
3043
27506
25772
7725
17255
6345
5563
3233
5952
4129
5643
10096
3043
27506
25772
7725
17255
19
6
4
17
6336
5590
3558
20
8
4
21
9
3
25
22
5960
4129
22
9
4
27
23
24
10
4
33
6
4
16
5643
9904
3250
25
9
7
60
26
27
10
7
62
6
7
33
28
10
7
52
27506
25708
7702
17234
8442
9904
3207
27501
25708
7714
17234
10364
6336
5590
3408
5960
4129
5643
9904
3043
27506
25708
7702
17234
986
5796
4310
5952
4129
5643
10058
3043
27501
25600
7724
17196
139
Table 8.10 (Continued) The results of the test problems for two
machine problems by considering minimization of sum of the
completion times
-I
0
2o
a
Initial I
TSI
TS2
Initial 2
TS3
TSI
TS2
TS3
29
6
7
32
8241
8055
8241
30
10
7
49
17615
17187
17902
8630
17192
8630
17187
17192
31
8
9
41
12691
12693
12691
12689
12689
12689
32
8
10
51
18430
10
71
6
8
29
35
9
9
58
36
37
6
10
31
11
4
38
13
39
16
40
13
4
4
4
30717
5986
21428
8124
8100
15066
15026
11677
11677
18482
30718
5986
21428
8124
8100
14500
15026
11677
41
14
4
14461
14461
14211
42
15
4
16
7
19352
47642
19305
43
44
45
46
47
48
49
13
7
16
7
12
7
15
7
29
40
46
40
40
46
79
63
66
65
76
30692
6187
21459
8117
8248
14798
15053
11813
15333
20932
30690
34
14
7
69
11
10
75
50
15
10
99
51
12
10
83
52
16
10
106
53
15
10
79
27654
28028
43055
39839
35438
66392
39542
69827
45664
18480
30700
6116
21441
8117
8234
14246
15028
11813
14386
20581
48949
30646
26966
27071
43055
39839
35438
64642
39392
69103
45403
18482
10
18482
31336
6187
21438
8117
8248
14798
15053
11813
15533
21136
49111
18482
33
45471
31680
26318
28776
43889
38080
37090
66455
38140
70317
44358
47532
31263
26149
27446
43318
38080
36467
65053
38145
70260
43758
19352
47642
31263
26318
28776
43889
38080
36915
66455
38140
70309
44428
54
16
10
108
75519
76717
77832
78802
76198
78802
30751
49111
30751
27657
27681
43090
39893
35438
66392
39542
69689
5986
21428
8124
8100
15066
15026
8241
The normal probability plot of the residuals confirms that the residuals have a normal
distribution (Figure 8.7). Thus, there is evidence that the parametric statistics-based
analysis of variance (ANOVA) can be used to further analyze the results.
140
1500
+
++
1000
500
II
0
0
-500
-1000
+
-1500
.1
1
5
10
25
50
75
90
95
99
99.9
Ilornal Percentilen
Figure 8.7 The normal probability plot of the experimental design of finding the best
heuristic algorithm for two machine problem by considering minimization of sum of
the completion times
The result of ANOVA is presented in Table A. 10. The results of the experiment show
that there is a significant difference among the objective function values of heuristic
algorithms (the result of F test is equal to 0.03 94). To find the best heuristic algorithm,
a Tukey test is performed. The result of Tukey' s test shows that TS2 is a better
performer compared to the other two heuristic algorithms. The results of the
experimental design also show that there is no difference between the initial solution
generators for two machine problems (the result ofF test is equal to 0.0960).
Among the interactions, only the interaction between group factor and algorithm factor
(G*A) is significant. It means that by changing the size of the problem, the suitability
of the heuristic algorithm can be changed. The significant factors and interactions are
shown in bold in Table A.10.
A test of effect slice is performed to obtain detailed information by considering the
highest significant order interactions for the algorithm effect i.e., G *A based on
141
Tukey-Kramer adjustment. The results are shown in Table A. 11 in appendix. Based on
the results, for large size group problems, TS2 has a better performance compared to
the other heuristic algorithms.
8.2.1.2 The Experimental Design to Compare the Time Spent for Heuristic
Algorithms
The time spent to terminate the search algorithm and the time spent to find the best
solution for each heuristic algorithm are shown in Table 8.11 for all test problems.
Table 8.11 The time spent for the test problems of two machine problems (in seconds)
by considering minimization of sum of the completion times
Initial 1
TS2
TSI
_
Initial 2
TS3
TS3
TS2
TS1
eD
-
_I
2
2
2
2
-
-
-
B
2
1
0
0
0
0
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
0
3
0
0
0
0
0
0
0
0
0
0
0
0
4
0
0
0
0
0
0
0
0
0
0
0
0
5
0
0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
0
0
0
0
0
0
0
7
0
0
0
0
0
0
0
0
0
0
0
0
8
0
0
0
0
0
0
0
0
0
0
0
0
9
0
0
0
0
0
0
0
0
0
0
0
0
10
0
0
0
0
0
0
0
0
0
0
0
0
11
0
0
0
0
0
0
0
0
0
0
0
0
12
0
0
0
0
0
0
0
0
0
0
0
0
13
0
0
0
0
0
0
0
0
0
0
0
0
14
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
16
0
0
142
Table 8.11 (Continued) The time spent for the test problems of two machine
problems (in seconds) by considering minimization of sum of the completion
times
Initial 1
-
;
-
Initial 2
TS2
TSI
TS3
_;;-
-
2
2
TS2
TS1
-;--
-;-
2
TS3
-;;__
2
2
2
17
0
0
0
0
0
0
0
0
0
0
0
_Q_
18
0
0
0
0
0
0
0
0
0
0
0
0
19
0
0
0
0
0
0
0
0
0
0
0
0
20
1
1
5
1
6
2
1
0
6
1
5
1
21
1
1
6
2
7
3
1
0
5
0
6
0
22
2
1
14
2
13
2
2
1
11
3
12
3
23
4
1
26
2
23
2
2
1
23
7
25
4
24
0
0
0
0
0
0
0
0
0
0
0
0
25
16
13
70
42
129
29
29
12
87
74
107
26
26
54
31
349
87
309
88
21
19
177
102
238
50
27
0
0
0
0
1
1
0
0
0
0
0
0
10
28
36
27
121
49
143
60
30
4
105
74
182
29
0
0
1
0
1
0
0
0
0
0
1
0
30
22
18
62
27
82
80
22
7
62
38
58
12
31
4
4
41
10
24
14
8
3
44
8
37
8
32
15
2
34
29
103
5
16
1
69
35
95
5
33
77
8
304
70
352
99
82
45
221
114
321
151
34
0
0
1
1
0
0
0
0
0
0
0
0
35
24
14
102
95
126
97
34
4
81
4
94
4
36
0
0
0
0
1
0
0
0
0
0
1
0
37
2
1
16
10
15
2
2
1
7
4
7
4
38
8
7
51
50
47
23
5
2
54
6
47
47
39
25
11
196
134
198
32
24
18
185
53
200
54
40
11
6
92
19
71
20
12
7
87
19
96
19
41
10
10
82
77
93
28
11
6
80
19
89
60
42
25
16
194
69
194
155
23
7
195
167
185
23
43
100
43
536
290
613
86
90
10
756
385
716
30
44
88
36
406
186
146
51
42
37
227
81
216
73
45
125
30
489
429
720
75
110
28
666
543
730
61
46
43
32
386
385
324
261
22
21
311
288
183
26
47
162
34
712
66
1078
68
164
29
749
736
775
40
48
115
107
534
251
835
315
110
94
570
222
860
260
49
37
6
187
8
146
11
14
2
161
147
195
8
50
228
76
1643
1457
1055
163
189
36
1385
1080
513
38
51
211
99
1167
621
1081
276
90
18
1478
43
1868
59
52
371
115
1069
891
1701
549
327
219
1763
454
1734
436
53
181
101
956
805
1399
495
186
74
1281
597
1329
492
54
471
342
1159
775
1981
311
432
86
1188
1115
1238
91
143
The normal probability plot of the residuals is shown in Figure 8.8. The normal
probability plot of the residuals confirms that the residuals have a normal distribution.
Thus, the ANOVA can be applied for detailed comparison.
400
-f
+
+
200
-1+
Fl
a
0
-200
+ +1-
+
-400
.1
1
5
10
25
50
75
90
95
99
99.9
Normal Percentiles
Figure 8.8 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for two machine problem by considering minimization of
sum of the completion times criterion
The ANOVA table for comparison of time spent is presented in Table A.12 in
appendix. The results of the experiment show that there is a significant difference
among the time spent for heuristic algorithms (the result of F test is less than 0.000 1).
The Tukey test is applied to find the difference. The result of Tukey's test shows that
the time spent by TS 1 is less than the other two heuristic algorithms.
The results of the experiment show that there is not a significant difference between the
time spent by algorithms in applying different initial solution generators for two
machine problems (the result ofF test is equal to 0.9 170).
144
Among the interactions, the interactions between the algorithm factor and all of the
whole plot factors (G*A, J*A R1*A, G*J*A, G*R1*A, J*R1*A, and G*J*R1*A) are
significant.
A test of effect slice is performed to obtain detailed information by considering the
highest significant order interactions for the algorithm effect i.e., G*J*R1 *A based on
Tukey-Kramer adjustment. The results are shown in Table A.13 in appendix. Based on
the results, for large size group, medium and large size job problems, there is a
significant difference between the time spent by the algorithms. In all of these
problems, TS 1 required less time compared to TS2 and TS3. This result was expected
because by increasing the size of the problems, the difference between the time spent
by TS1 compared to TS2 and TS3 will be increased.
8.2.1.3 Evaluating the Quality of Solutions
The lower bounding technique based on B&P algorithm is applied to estimate the
quality of solutions. If the B&P algorithm is applied to solve large size problems, it
requires a considerable amount of time to get a lower bound. Thus, in the interest of
time, only a few of test problems are considered to be solved for estimating the quality
of solutions as a sample. The size of the sample is considered 10 for two machine
problems. Thus, every other five problems (problems number 1, 6, 11, ..., 51 of test
problems) are considered to be solved by the lower bounding technique. The results are
shown in the table below. This comparison is performed by considering the result of the
best tabu search. The results show that the average percentage error is equal to 14.40%.
The results are shown in Table 8.12.
145
Table 8.12 The results of the lower bounding technique
for two machine problems by considering minimization
of sum of the completion times criterion
w
>C
:'
G)
0
B
a.
D
1
3
2.
CD
!.
CD
-'
-.
0
U)
a
C
-.
-u
CD
CD
-
CD
Cl)
CD
C)
0
c
U)
CD
CD
CD
.
U)
B
1
1
4
4
13
1857.2
2
2011
0.0828
6
3
4
8
964.15
2
986
0.0227
11
5
7
31
8831
5
8876
0.0051
16
5
10
40
9274.3
200
10340
0.1149
21
9
3
22
3567.6
6687
4129
0.1574
26
10
7
62
23299
13282
25600
0.0988
31
8
9
41
12513
5186
12689
0.0141
36
6
10
31
7462.4
3600
8117
0.0877
41
14
4
40
12533
28800
14211
0.1339
46
12
7
65
23915
85991
27071
0.132
51
12
10
83
22054
28800
38140
0.729
Average:
0.144
8.2.2 The Results of Three-Machine Problems by Considering Minimization of
Sum of the Completion Times Criterion
All 162 test problems of three machine problems are solved by heuristic algorithms to
find the algorithm with the best performance and the best initial solution. In the interest
of time, the lower bounding technique is also applied for some of the selected test
problems to evaluate the quality of solution. The results are presented in three sections.
In the first section, the results of the heuristic algorithms, the result of the
experimental design to find the algorithm with the best performance as well as for
finding the best initial solution are presented.
In the second section, an experimental design to compare the time spent for
heuristic algorithms is performed and the results are presented.
In the third section, the result of the lower bounding technique for a few test
problems to estimate the quality of solution is presented.
146
8.2.2.1 Comparison among Heuristic Algorithms for Three Machine Problems by
Considering Minimization of Sum of the Completion Times
The result of performing the heuristic algorithms is shown in Table 8.13 by applying
two different initial solution generators. In this table TS 1 stands for the tabu search
algorithm with short term memory, TS2 stands for the LTM-Max, and TS3 stands for
LTM-Min.
Table 8.13 The results of the test problems for three machine problems by considering
minimization of sum of the completion times criterion
Initial 2
Initial 1
.
2
TSI
TS2
TS3
TSI
TS2
TS3
1
2
4
7
1021
1021
1021
1021
1021
1021
1021
2
5
4
16
4246
4246
4246
4246
4246
4246
4246
3
4
4
12
2297
2123
2100
2133
2133
2103
2100
4
3
4
10
1354
1354
1354
1354
1354
1354
1354
5
3
4
11
1605
1603
1603
1653
1597
1597
1597
6
5
4
16
3448
3448
3448
3327
3327
3327
3327
7
5
4
15
3952
3952
3704
3616
3616
3616
3616
8
3
3
7
1228
1228
1225
1225
1225
1225
1225
9
3
3
8
1200
1200
1200
1200
1200
1200
1200
10
4
4
13
2244
2244
2244
2244
2243
2244
2243
11
15
3120
3120
3120
3121
3121
3121
3120
2444
3893
4
4
12
5
4
13
2463
2463
2444
2463
2463
2463
13
5
4
14
3893
3893
3893
3894
3894
3894
14
2
4
7
1005
1005
1005
1005
1005
1005
1005
15
4
4
13
2956
2956
2956
2956
2956
2956
2956
16
5
4
17
4817
4807
4807
4826
4826
4815
4807
17
2
3
5
708
708
708
708
708
708
708
18
4
4
13
3008
3008
3008
3008
3008
3008
3008
19
3
6
18
3694
3694
3694
3717
3717
3717
3694
20
4
6
22
5534
5534
5534
5546
5546
5542
5534
21
2
7
13
1900
1900
1879
1832
1832
1832
1832
8366
22
5
7
31
8489
8489
8367
8974
8562
8366
23
5
6
23
5770
5770
5770
5770
5770
5770
5770
24
4
6
20
3892
3890
3890
3890
3890
3890
3890
25
3
6
17
3964
3964
3964
3964
3964
3964
3964
26
4
7
23
6236
6236
6236
6236
6236
6236
6236
27
5
6
26
7468
7468
7468
7468
7437
7468
7437
28
3
7
19
4039
4039
3983
4046
4046
4046
3983
147
Table 8.13 (Continued) The results of the test problems for three machine problems
by considering minimization of sum of the completion times criterion
Initial 2
Initial 1
0
2
TSI
TS2
TS3
TSI
TS2
TS3
5363
5363
5370
5418
5418
5418
29
5
5
22
30
2
7
13
1804
1804
1804
1810
1810
1810
1804
31
3
7
15
3628
3628
3628
3625
3625
3625
3625
32
5
6
23
7084
7084
7084
7100
7100
7100
7084
33
4
6
19
4599
4599
4599
4788
4744
4615
4599
34
3
5
15
3248
3248
3248
3233
3233
3233
3233
35
2
7
12
2098
2098
2096
2094
2094
2094
2094
36
5
7
29
10190
10190
9936
9587
9587
9587
9587
37
4
10
34
11014
11014
10809
10612
10595
10612
10595
38
5
9
38
13597
13597
13597
13608
13608
13608
13597
39
4
10
36
9685
9685
9685
9719
9691
9719
9685
40
3
10
27
6406
6406
6406
6420
6420
6420
6406
41
5
10
49
17126
17126
17126
17624
17561
17440
17126
42
2
8
16
2432
2432
2432
2369
2341
2369
2341
43
2
10
18
3905
3905
3905
3813
3813
3813
3813
44
4
9
31
9706
9706
9706
9706
9706
9706
9706
45
5
10
49
19640
19646
19640
20184
19882
20184
19640
46
4
8
31
9055
9015
9015
8919
8919
8919
8919
47
3
9
24
5739
5739
5739
5758
5758
5758
5739
48
5
9
42
14240
14229
14220
14300
14174
14300
14174
49
5
9
43
18616
18579
18496
18032
17979
18008
17979
50
3
8
23
6684
6684
6684
6692
6690
6690
6684
51
5
8
37
14760
14732
14727
14724
14724
14724
14724
52
3
9
23
6325
6325
6274
6248
6248
6248
6248
53
5
8
36
13822
13822
13754
13409
13409
13017
13017
54
4
10
36
12566
12553
12566
12483
12483
12483
12483
55
6
4
19
6689
6689
6428
5986
5986
5986
5986
56
8
4
20
6242
6242
6242
6213
6200
6200
6200
57
6
4
19
4424
4424
4424
4424
4424
4424
4424
58
7
4
17
4164
4159
4164
4164
4159
4164
4159
59
7
4
19
4466
4316
4466
4316
4316
4316
4316
60
9
4
24
7862
7717
7862
7717
7717
7717
7717
61
8
4
23
7974
7974
7974
7974
7974
7974
7974
62
9
4
32
12397
12397
12397
12397
12397
12397
12397
63
7458
7458
7458
7494
7458
7494
7458
5363
8
4
25
64
7
4
21
6139
6139
6139
6032
6032
6032
6032
65
10
4
34
12744
12744
12744
12810
12810
12810
12744
66
6
4
18
4456
4241
4411
4233
4233
4233
4233
67
7
3
18
6396
6396
6396
6396
6396
6396
6396
68
10
4
29
13658
13658
13658
13658
13603
13658
13603
148
Table 8.13 (Continued) The results of the test problems for three machine problems
by considering minimization of sum of the completion times criterion
I;
Initial 2
Initial 1
-
0
TSI
TS2
TS3
TSI
TS3
TS2
[
69
7
4
21
6643
6643
6643
6643
6643
6643
6643
70
9
4
30
12424
12390
12424
12424
12390
12424
12390
71
9
4
24
9346
9346
9364
9364
9319
9364
9319
72
6
4
18
5359
5359
5359
5381
5381
5381
5359
73
7
7
41
16892
16828
16897
16897
16886
16897
16828
74
6
5
28
9300
9197
9300
9197
9197
9197
9197
75
9
7
50
21656
21656
21753
21794
21794
21713
21656
76
8
6
39
13460
13446
13588
13587
13441
13587
13441
77
8
7
46
15917
15873
15917
15883
15883
15883
15873
78
9
7
49
20513
20425
20666
20505
20504
20505
20425
79
9
7
51
27969
27969
27969
28012
28001
27969
27969
80
6
6
30
10467
10278
10278
10444
10444
10444
10278
81
8
5
36
12575
12536
12574
12542
12498
12501
12498
82
10
6
54
24911
24615
24612
24675
24598
24696
24598
83
6
6
29
8126
8126
8126
8145
8145
8130
8126
84
10
6
49
21542
21484
20523
20479
20443
20479
20443
85
9
6
42
22799
22799
22799
22826
22826
22794
22794
86
8
5
36
17443
17443
17453
17453
17453
17465
17443
87
8
5
26
9677
9677
9677
9677
9677
9677
9677
88
10
7
40
18546
18508
18546
18851
18539
18851
18508
89
9
5
27
10789
10785
10786
10788
10788
10785
10785
90
10
7
48
23838
23838
23838
23976
24235
24130
23838
91
9
10
56
27038
26997
26930
26965
26965
26965
26930
92
7
9
43
17556
17469
17556
17305
17134
17134
17134
93
8
9
49
19897
19650
19897
19464
19396
19382
19382
94
9
10
54
22634
22634
22634
22634
22703
22703
22634
95
7
8
33
8786
8769
8760
8764
8760
8764
8760
96
10
10
62
29187
27589
28374
28168
28172
29037
27589
97
8
9
48
22158
22158
22225
22384
22384
22150
22150
98
6
10
41
16700
16697
16700
16472
16472
16472
16472
99
10
10
63
31747
31485
32116
31396
31487
31388
31388
100
8
9
38
12586
12586
12595
12650
12599
12638
12586
101
7
10
48
17697
17568
17697
17579
17579
17579
17568
102
10
8
40
16630
16447
16429
16639
16821
16639
16429
103
10
10
66
42413
42343
42535
42497
42497
42097
42097
104
8
7
36
16330
16191
16660
16321
16172
16321
16172
105
6
10
38
16726
15865
16742
15856
15856
15856
15856
106
9
8
43
21287
21198
21216
21210
21351
21264
21198
107
10
7
50
27672
27672
27672
27651
27651
27651
27651
108
8
10
60
33022
32986
33010
32938
32806
32806
32806
149
Table 8.13 (Continued) The results of the test problems for three machine problems
by considering minimization of sum of the completion times criterion
.
Initial 2
Initial 1
0
TSI
TS2
TS3
TSI
TS2
TS3
109
13
4
37
20387
20387
20387
20248
20248
20248
20248
110
15
4
39
23144
23045
23415
23079
22954
22826
22826
111
16
4
46
21423
21423
21423
21300
21051
21488
21051
112
14
4
42
18551
18048
18551
18019
18019
18019
18019
113
13
4
39
14721
14381
14721
14111
13864
14111
13864
114
15
4
45
19123
19060
19123
19072
19072
19207
19060
115
15
4
43
26182
26182
26182
26674
26674
26642
26182
116
14
4
44
27603
27603
27603
27359
27359
27359
27359
117
11
4
35
13806
13806
13806
13663
13663
13843
13663
118
14
4
39
18298
18298
18298
18782
18336
18782
18298
119
11
4
32
11533
11300
11464
11496
11330
11492
11300
120
15
4
50
25956
25338
24917
24812
24812
24182
24182
121
11
4
31
16106
16106
16106
16165
16165
16165
16106
122
15
4
47
32646
32436
32244
32535
32535
32535
32244
123
13
4
36
20034
19712
20285
19480
19382
19480
19382
124
12
4
35
17402
17397
17622
17385
17385
17385
17385
125
14
4
47
28640
28640
28640
28362
28362
28355
28355
126
13
4
40
21239
21239
21239
20403
20403
20403
20403
127
12
7
50
26059
26059
26236
25696
25696
25696
25696
128
14
7
83
67862
67900
67862
66510
65900
66510
65900
129
11
7
64
33204
33002
33002
33146
33146
33146
33002
130
14
7
88
63232
62764
62487
61666
64297
61666
61666
131
11
7
69
37175
37650
36845
37400
37166
37171
36845
132
16
7
104
75681
74487
75254
73891
76455
75987
73891
133
16
7
69
55749
55749
55749
52023
51733
52023
51733
134
11
7
50
27425
27172
27545
27780
27346
27269
27172
135
12
7
50
23146
23128
23177
23276
23276
23230
23128
136
14
7
53
28362
28124
26641
27475
27475
27503
26641
137
13
7
62
34306
34195
34201
34706
34706
34706
34195
138
15
7
66
40345
40070
40345
39288
38987
39804
38987
139
14
7
75
60096
59211
60088
60171
60171
60161
59211
140
13
7
54
35196
34247
35667
32294
32294
32294
32294
141
15
7
59
41144
40731
41144
40687
40687
40687
40687
142
16
7
66
48443
47863
48443
47756
47756
48383
38987
143
13
7
54
35570
35570
32994
33015
32661
33019
32661
144
15
7
68
50778
50759
51083
50192
49594
50192
49594
145
12
9
62
39367
39367
39367
39438
39156
39438
39156
146
14
10
74
52196
52196
51793
50876
50893
51524
50876
147
13
10
81
49274
49274
49274
49988
48266
49951
48266
150
Table 8.13 (Continued) The results of the test problems for three machine problems
by considering minimization of sum of the completion times criterion
Initial 2
Initial 1
0
=
1.
______
B
eD
______ ______ ______ ______
TSI
TS2
TS3
TSI
TS2
TS3
85
55711
55711
56284
55007
55007
54790
54790
148
15
149
16
9
93
64832
64823
65497
65233
62415
62418
62415
150
15
10
110
82943
81906
83125
83593
83593
83467
81906
151
15
10
104
92931
91688
91394
92332
91179
92332
91179
152
14
9
77
62829
59902
61977
58563
58103
58541
58103
153
16
10
96
70956
69338
71199
71054
69767
69434
69338
154
13
9
66
38719
38291
38174
38596
37624
37717
37624
155
14
10
84
57010
57000
57038
56479
55577
55775
55577
156
11
10
86
53356
54701
53149
53824
53655
54044
53149
157
13
10
89
76753
76904
76904
76241
77832
77346
76241
158
15
10
101
95207
95207
95461
97472
97472
96318
95207
159
15
10
97
83626
84271
84722
83022
83343
82976
82976
160
14
10
86
65526
65489
64731
64935
64917
64524
64524
161
11
10
64
38024
41018
37999
38646
37931
37391
37391
162
14
8
77
60730
60730
60730
61955
59746
61955
59746
10
The normal probability plot of the residuals is shown in Figure 8.9. The plot confirms
that there is evidence to apply the parametric statistics-based analysis of variance
(ANOVA) to further analyze of the results.
1500
+
-I-
tft+
1000
500
R
0
-500
-1000
+
-f
-1500
.01
.1
1
5
10
25
50
75
90
95
99
99.9
99.99
Normal Percent I 1cc
Figure 8.9 The normal probability plot of the experimental design of finding the best
heuristic algorithm for three machine problem by considering minimization of sum of
the completion times
151
The result of ANOVA is presented in Table A. 14. The results of the experiment show
that there is a significant difference among the objective function values of heuristic
algorithms (the result of F test equal to 0.0429). The result of Tukey's test shows that
TS2 is a better performer compared to the other two heuristic algorithms.
The results of the experimental design show that there is a significant difference
between the results of heuristic algorithms by applying different initial solution
generators for three machine problems (the result of F test is less than 0.0001). A
comparison between the average values of objective functions shows that applying the
second initial solution generator in heuristic algorithms provides a better solution.
Among the interactions, the interaction between group factor and the initial solution
factor (G*I and G*J*I) are significant. The significant factors and interactions are
shown in bold in Table A.14. The other significant interactions are J*R2*I, G*J*R1*I,
G*J*R2*I, and G*R1*R2*I.
A test of effect slice is performed to obtain detailed information by considering the
highest significant order interactions for the initial solution generator effect i.e.,
G*J*R1 *R2*I based on Tukey-Kramer adjustment. The results are shown in Table
A. 15 in appendix. Based on the results, for the initial solution generators comparison,
there is a significant difference among the performance of initial solutions in a few
cells. These cells are presented in the table below with the best initial solution of each
cell.
Table 8.14 The experimental cells of three machine problems by considering
minimization of sum of the completion times criterion in which the initial solution
generators do not have the same performance
The level
of group
The level of
The level
of Ri
The level
of R2
2
1
3
3
1
3
2
2
2
3
3
3
2
1
factor
3
3
job factor
The best heuristic
algorithm
Initial solution generator 2
Initial solution generator 2
Initial solution generator 2
Initial solution generator 2
152
8.2.2.2 The Experimental Design to Compare the Time Spent for Heuristic
Algorithms
Table 8.15 presents the time spent to terminate the search algorithms and the time spent
to find the best solution for each heuristic algorithm.
Table 8.15 The time spent for the test problems of three machine problems (in seconds)
by considering minimization of sum of the completion times criterion
Urn-nn
rutn w
AUHAHAHUHAHA
153
Table 8.15 (Continued) The time spent for the test problems of three machine
problems (in seconds) by considering minimization of sum of the completion
times criterion
rrmnv
-
U
IuII
154
Table 8.15 (Continued) The time spent for the test problems of three machine
problems (in seconds) by considering minimization of sum of the completion
times criterion
Initial 2
Initial 1
TS1
TS2
TS3
TS3
TS2
TSI
0
-
E
-
-
2
-
C
*
Il)
2
2
2
2
2
2
2
2
2
2
10
2
34
3
6
6
14
6
33
15
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
1
1
1
0
3
2
2
6561
68
2
2
9
2
5
3
7
16
3
0
0
1
1
1
0
10
12
69
1
0
1
0
70
3
1
18
13
20
4
3
0
21
11
21
2
1
0
6
1
10
0
1
0
6
5
j_
0
0
0
0
0
o
0
0
73
4
0
10
7
13
2
o
5
8
0
0
0
5
14
10
14
8
0
0
0
0
35
25
141
43
139
6
0
27
0
75
76
9
5
32
17
37
13
77
22
3
34
5
4
3
10
6
74
4
34
66
78
78
8
23
6
79
21
3
62
4
54
3
21
0
0
0
0
0
81
9
7
63
55
25
170
60
82
44
324
20
72
0
0
0
1
0
84
30
0
207
36
138
85
15
13
22
71
290
0
0
0
0
0
165
3
169
116
26
15
32
5
14
5
17
5
98
81
61
9
7
52
15
69
14
0
0
1
1
8
32
39
16
61
55
0
35
326
0
8
164
327
3
0
30
0
0
0
0
0
39
6
187
77
129
16
12
14
4
6
30
82
1
86
4
1
49
25
2
27
49
20
7
2
46
34
8
80
46
87
1
0
5
0
6
0
1
0
2
0
6
0
88
1
0
37
18
19
1
3
2
54
32
24
5
0
0
89
4
4
5
2
2
1
10
1
275
8
11
2
90
106
13
117
18
29
26
146
26
175
58
91
30
7
164
68
134
31
3
133
6
178
6
92
0
0
2
2
1
0
10
7
7
4
4
78
77
9
155
122
141
48
94
49
23
130
27
65
235
3
22
47
14
93
38
4
27
49
225
26
272
41
23
95
2
2
10
5
17
2
3
1
16
9
17
1
96
25
12
128
99
287
284
80
46
182
23
242
241
97
8
7
12
105
47
4
2
28
74
16
2
1
0
0
0
0
379
65
19
74
324
4
0
164
252
10
59
33
58
22
29
3
19
4
98
0
0
99
100
38
46
2
405
100
9
2
54
4
47
13
9
0
39
4
101
6
2
26
10
51
5
3
1
1
155
Table 8.15 (Continued) The time spent for the test problems of three machine
problems (in seconds) by considering minimization of sum of the completion
times criterion
Initial 2
Initial 1
TSI
TS2
TS3
TS1
TS2
TS3
TS1
TS2
TS3
TS1
TS2
TS3
-i---;a
a
E
a
a
a
a
-
102
11
9
46
137
26
405
23
24
5
57
64
324
12
103
69
69
40
410
199
49
444
104
1
0
15
8
15
1
1
1
14
7
12
1
105
0
0
1
1
0
0
0
0
0
0
0
0
106
16
4
76
52
55
19
16
9
85
44
16
93
19
8
36
7
95
52
92
134
131
28
52
28
58
6
8
438
10719
1
76
3
69
3
18
7
108
28
5
107
47
172
11
38
3
109
0
0
26
1
33
1
10
10
8
1
63
62
51
8
36
35
20
270
190
295
93
71
61
103
61
30
110
7
6
66
29
62
111
33
29
83
112
29
16
141
274
217
25
88
46
113
14
13
212
219
117
108
90
41
14
10
99
82
114
31
16
199
112
266
50
34
26
76
115
18
16
148
50
137
48
3
1
267
24
3
69
260
87
116
27
1
1
23
131
65
151
66
1176
6
60
16
52
17
118
2
101
8
41
9
30
324
3
1194
2
41
120
47
25
355
121
6
1
122
35
123
9
1249
125
29
1263
23
22
70
14
5
85
5
104
52
9
71
62
18
117
83
85
36
12
32
14
1
1
32
12
382
44
70
47
35
360
101
20
345
102
1
6
0
21
2
38
1
2
24
40
189
167
41
30
266
87
45
67
8
6
217
45
64
50
255
76
61
8
21
16
7
61
19
58
18
5
1
21
4
45
46
27
166
81
242
81
34
11
243
32
149
2
33
118
4
44
44
160
3
80
16
37
13
129
168
275
419
87
911
45
456
363
95
295
510
209
436
28
282
4
3
9
26
1
481
2
188
363
357
552
407
96
149
366
526
2236
518
61
548
49
22
329
72
337
1228
1
104
127
31
23
128
107
61
129
99
73
130
19
15
131
132
102
349
96
349
133
91
28
134
14
14
433
100
95
106
85
4
135
38
5
108
54
194
1
136
97
92
370
66
21
308
383
729
137
486
477
40
96
552
123
80
613
559
1027
530
237
22
138
133
260
252
1274
34
37
28
130
96
96
109
396
104
1
321
320
1654
1455
656
67
461
211
81
404
14
882
158
119
954
742
0
28
76
39
37
444
2273
515
45
225
673
245
1080
1
64
237
145
21
97
35
39
27
156
Table 8.15 (Continued) The time spent for the test problems of three machine
problems (in seconds) by considering minimization of sum of the completion
times criterion
Initial 2
Initial 1
TS2
TS1
TS3
TS2
TS1
TS3
a
a
a
2
2
3
2
100
83
961
950
684
257
196
140
94
41
635
600
278
239
142
48
767
132
503
132
141
126
63
995
521
687
127
113
111
1036
327
701
226
142
163
116
1083
591
1623
344
173
165
1349
477
1393
143
83
67
432
158
343
227
29
5
480
259
431
287
79
144
205
192
399
1245
1055
161
46
1008
947
56
3
3
265
4
50
28
329
172
858
306
93
145
946
146
146
83
31
64
147
120
142
39
508
983
392
147
449
554
80
232
135
568
768
290
514
532
1050
217
247
148
224
1738
304
272
422
1286
2181
2834
508
320
268
293
1266
352
442
1378
1103
149
310
352
1957
1155
2675
367
2673
150
614
477
1878
140
1960
798
459
421
1188
494
151
330
312
1451
969
1503
341
233
207
1268
610
2039
1004
405
550
3216
549
140
33
300
206
754
520
30
2254
783
380
2060
1930
139
67
a
a
115
2
2
594
245
a
881
216
68
748
152
94
11
540
171
153
527
110
2576
1810
154
89
340
87
667
300
638
149
33
23
402
169
183
1077
400
1494
131
133
23
1221
1205
709
2134
327
155
156
179
69
793
101
1112
286
207
167
551
157
355
337
1035
273
567
212
310
258
688
779
442
459
892
35
280
158
460
433
1907
1039
803
454
31
1073
241
187
2129
1287
620
281
152
3006
2076
91
159
2997
1897
160
60
51
587
87
210
94
66
161
89
72
89
86
845
289
132
162
126
104
319
123
1202
309
136
596
191
2677
1848
118
400
167
34
27
295
207
171
198
110
136
80
1094
1084
1106
239
161
The normal probability plot of the residuals is shown in Figure 8.10. The normal
probability plot of the residuals confirms that the residuals have a normal distribution.
Thus, the ANOVA can be applied for detailed comparison.
157
750
+
500
250
A
0
-250
-500
+
-750
.01
.1
1
5
10
25
50
75
Normal Percent I lee
90
95
99
99.9
99.99
Figure 8.10 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for the three machine problem by considering
minimization of sum of the completion times criterion
The ANOVA table for comparison of time spent is presented in Table A.16. The results
of the experiment show that there is a significant difference among the time spent for
heuristic algorithms (the result of F test is less than 0.000 1). The Tukey test is applied
to find the difference. The result of Tukey's test shows that the time spent to solve TS1
is less than the other two heuristic algorithms.
The results of the experiment show that there is not a significant difference between the
time spent by algorithms in applying different initial solution generators for three
machine problems (the result ofF test is equal to 0.8320).
Among the interactions, the interactions between the algorithm factor and all of the
whole plot factors (G*A, J*A p*A, G*J*A, G*R2*A, R1*R2*A, G*R1*R2*A,
J*R1*1*A, and G*J*R1*R2*A) are significant.
The effect slice test is performed for more detailed comparisons. Based on the results,
the summary of significant differences is as follows:
For any large size group, medium and large size job problems, there is a
significant difference among the time spent by the algorithms. In all of these
problems, TS1 required less time compared to TS2 and TS3. This result was
expected because by increasing the size of the problems, the difference between
the time spent by TS1 compared to TS2 and TS3 will be increased.
8.2.2.3 Evaluating the Quality of Solutions
The lower bounding technique based on B&P algorithm is applied to estimate the
quality of solutions. As discussed before, in the interest of time, only a few test
problems are considered to be solved for estimating the quality of solutions as a
sample. The size of the sample is considered 20 for three machine problems. Thus,
every other eight problems (problems number 1, 9, 17, 25, ..., 161 of test problems) are
considered to be solved by the lower bounding technique. The results are shown in the
table below. This comparison is performed by considering the results of the best tabu
search. The results show that the average percentage error is equal to 17.2%.
Table 8.16 The results of the lower bounding technique for three
machine problems by considering minimization of sum of the
completion times criterion
0
.
-
1
-.
0
2
4
7
1021
1
1021
0
9
3
3
8
1052
1
1200
0.141
17
2
3
5
705
5
708
0.004
25
3
6
17
3833.86
31
3964
0.033
33
4
6
19
4435
198
4599
0.037
41
5
10
49
14806
12000
17126
0.157
49
5
9
43
16434
3600
17979
0.094
57
6
4
19
3810
700
4424
0.161
65
10
4
34
11637
27859
12744
0.095
73
7
7
41
16150.27
7064
16828
0.042
81
8
5
36
10327.12
29385
12498
0.210
89
9
5
27
9635
15788
10785
0.119
97
8
9
48
20350.59
17125
22150
0.088
105
6
10
38
15380.8
28818
15856
0.031
113
13
4
39
11092
32273
13864
0.250
159
Table 8.16 (Continued) The results of the lower bounding
technique for three machine problems by considering
minimization of sum of the completion times criterion
o
o
-
0
'-.
0©
eD
..-.
0
0
.
o
p121
11
4
31
14321.72
17171
16106
0.125
129
11
7
64
28921.3
22417
33002
0.141
137
13
7
62
19626.1
28800
34195
0.742
145
12
9
62
24770.7
29990
39156
0.581
153
16
10
96
52752
32000
69338
0.314
161
11
10
64
30250
28800
37931
0.254
Average:
0.172
8.2.3 The Results of Six-Machine Problems by Considering Minimization of Sum
of the Completion Times Criterion
All 54 test problems of six machine problems are solved by heuristic algorithms to find
the algorithm with the best performance and the best initial solution generator. In the
interest of time, the lower bounding technique is also applied for some of the selected
test problems to evaluate the quality of solution.
The results are presented in the
sections below:
In the first section, the results of the heuristic algorithms and the results of the
experimental design to find the algorithm with the best performance as well as for
finding the best initial solution are presented.
In the second section, an experimental design to compare the time spent for
heuristic algorithms is performed and the results are presented.
In the third section, the results of the lower bounding technique for a few test
problems to estimate the quality of solution are presented.
8.2.3.1 Comparison among Heuristic Algorithms for Six Machine Problems by
Considering Minimization of Sum of the Completion Times
The results of performing the heuristic algorithms are shown in Table 8.17 by applying
two different initial solution generators.
160
Table 8.17 The heuristic algorithms results of the test problems for six machine
problems by considering minimization of sum of the completion times criterion
Initial 2
Initial 1
-.
-
TS1
TS2
TS3
TS1
TS2
TS3
1
5
3
11
10256
10256
10256
10256
10256
10256
2
3
4
9
5583
5583
5583
5583
5583
5583
3
4
3
10
1951
1787
1911
1824
1824
1824
4
2
4
7
879
879
879
886
886
886
5
4
4
15
12878
12878
12878
12891
12891
12891
6
2
4
8
4762
4762
4762
4781
4781
4770
7
5
7
29
30223
30223
30223
30223
30223
30223
8
4
6
21
18235
18235
18235
18235
18235
18235
9
2
7
11
1608
1608
1608
1581
1581
1581
10
5
7
25
5972
5972
5988
6014
6014
6014
11
4
6
15
11843
11843
11834
11791
11791
11791
12
5
7
20
17308
17308
17308
17349
17349
17349
13
2
10
3
6198
6198
6198
6198
6198
6198
14
3
10
29
20779
20779
20779
20779
20779
20779
15
4
10
28
8353
8201
8204
8355
8355
8355
16
4
9
21
5419
5419
5382
5359
5359
5359
17
4
9
28
26106
26106
26106
25902
25902
25902
18
3
9
21
15350
15350
15350
15257
15257
15257
19
9
4
31
47870
47870
47870
47870
47870
47870
20
8
4
19
25487
25487
25487
25487
25487
25487
21
6
4
22
5854
5854
5854
5854
5854
5854
22
9
4
25
7553
7553
7553
7553
7553
7553
23
8
4
24
34155
34155
34155
34128
34128
34128
24
6
4
16
17977
17977
17977
18096
18096
18027
25
6
7
28
29061
29061
29061
28874
28874
28874
26
7
7
29
34477
34477
34477
34477
34477
34477
27
8
6
33
10255
10135
10205
10262
10140
10187
28
9
7
38
13704
13701
13935
13703
13688
13779
29
7
7
34
43663
43663
43663
43506
43506
43469
30
6
7
31
35504
35504
35412
35412
35412
35412
31
9
9
50
74057
74057
74057
74057
74057
74057
32
8
10
49
66666
66666
66666
66666
66666
66666
33
7
10
46
17881
17645
17699
18273
17853
18156
34
10
10
65
34341
34275
34472
34502
34683
34738
35
10
10
56
101652
101652
101652
101493
101493
101493
36
8
10
46
62877
62871
62877
62987
62987
62987
37
16
4
45
114308
114308
114308
111771
111771
111771
38
12
4
32
60020
60020
60020
60084
60084
60084
161
Table 8.17 (Continued) The heuristic algorithms results of the test problems for
six machine problems by considering minimization of sum of the completion
times criterion
-
Initial 2
Initial 1
-
©
eD___
TS1
TS2
TS3
TS1
TS2
TS3
15844
15846
15594
15531
15594
39
14
4
37
15933
40
13
4
40
17388
17376
17239
16813
16813
16954
41
11
4
34
64472
64472
64472
64465
64465
64465
42
14
4
45
108018
108018
108010
107175
107175
107175
43
13
6
47
93655
93655
93655
93655
93655
93655
44
16
7
63
159887
159887
159887
145647
145647
145647
45
15
7
71
41136
40711
41319
42209
42054
41739
46
14
7
60
33607
31692
32829
32435
32435
32435
47
15
7
69
161705
161705
161668
161830
161830
161688
48
16
7
78
205140
205140
205125
207399
207025
207399
49
13
10
95
224192
223247
219865
218940
218940
218940
50
14
10
80
185849
178816
185849
178766
178766
178766
51
13
10
72
41884
41948
41896
43251
42550
42657
52
15
10
77
50526
48397
49821
48451
48140
48607
53
16
9
100
277569
277523
277569
276153
273784
276153
54
15
10
117
343065
343090
342575
338906
338954
338723
The normal probability plot of the residuals confirms that the residuals have a normal
distribution (Figure 8.11). Thus, the ANOVA can be performed to find the best
heuristic algorithm as well as the best initial solution generator.
162
150000
+
100000
50000
+
Fl
0
-50000
-100000
+
-150000
.1
1
5
10
25
N.,rn
50
1
75
90
95
99
99.9
Pere.-.t I 1
Figure 8.11 The normal probability plot of the experimental design of finding the best
heuristic algorithm for six machine problem by considering minimization of sum of the
completion times criterion
The ANOVA table is shown in Table A. 17 in appendix. The results of the experiment
show that there is not a significant difference among the objective function values of
the heuristic algorithms (the result ofF test is equal to 0.6 189).
The results of the experiment also show that there is a significant difference between
the performance of the initial solution generators for six machine problems (the result
of F test is less than 0.0001). The comparison between the average objective function
values of the initial solutions shows that applying the second initial solution provides a
better solution than the first one.
Among the interactions, the interactions between the initial solution factor and the
group factor and the ratio factors (G*I, Ri *J, G*R1 *J, and G*J*R1 *1) are significant.
A test of effect slice is performed to obtain detailed information by considering the
highest significant order interactions for the initial solution generator effect i.e.,
G*J*R1 *J based on Tukey-Kramer adjustment. The results are shown in Table A. 18 in
appendix. Based on the results, the summary of significant differences are as follows:
163
For the initial solution generators comparison, there is a significant difference
among the performance of initial solutions in a few cells. These cells are
presented in the table below with the best initial solution of each cell.
Table 8.18 The experimental cells of six machine problems by considering
minimization of sum of the completion times criterion in which the initial
solution generators do not have the same performance
The level of
group factor
The level of
job factor
The level
of Ri
3
2
1
3
3
1
3
3
3
The best heuristic algorithm
Initial solution generator 2
Initial solution generator 2
Initial solution generator 2
8.2.3.2 The Experimental Design to Compare the Time Spent for Heuristic
Algorithms by Considering Minimization of Sum of the Completion Times
Criterion
The time spent to terminate the search algorithm and the time spent to find the best
solution for each heuristic algorithm are shown in Table 8.19 for all test problems.
Table 8.19 The time spent for the test problems of six machine problems (in seconds)
by considering minimization of sum of the completion times criterion
Initial 1
TS1
Initial 2
TS2
TS3
-
TS3
TS2
TS1
-
1
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
___Q_
0
4
0
0
0
0
0
0
5
0
0
0
0
0
0
6
0
0
0
0
0
0
7
0
0
1
1
0
0
00
00
00
8
0
0
0
0
0
0
0
0
164
Table 8.19 (Continued) The time spent for the test problems of six machine
problems (in seconds) by considering minimization of sum of the completion times
criterion
nuwii'
H
1*
1*
HUHUHUHUHUHU
165
Table 8.19 (Continued) The time spent for the test problems of six machine
problems (in seconds) by considering minimization of sum of the completion times
criterion
Initial2
Initiall
TS2
TSI
TS3
TS2
TSI
-zr
j
CD
2.
m
-1
_i
a
g.
2.
0
Cfl
-I
CD
CD
45
46
47
48
49
50
378
291
135
95
-I
-I
1
3
3
3
3
CD
CD
-I
f
3
CD
CD
TS3
!
a
w
CD
-I
3
CD
CD
CD
332
2
2512
27
664
2283
49
35
137
894
9174
7186
14
68
14
CD
3
CD
CD
52
51
388
268
16
14
2430
1015
635
4930
65
13
12
125
62
118
23
37
15
207
44
51
268
43
2925
303
134
167
3021
981
191
53
66
315
76
719
415
61
2838
5467
803
2381
530
125
853
1197
79
140
52
4052
4318
674
912
749
3007
1495
768
6233
239
536
3015
4579
1887
54
415
321
2286
682
1318
720
504
304
3668
327
3064
949
2948
1827
2890
2956
1002
2759
641
330
47
872
64
128
51
50
1475
779
168
168
114
631
59
613
609
15
43
851
1344
97
3015
The normal probability plot of the residuals confirms that the residuals have a normal
distribution (Figure 8.12). Thus, the ANOVA can be applied to find the best heuristic
algorithm as well as the best initial solution generator.
3000
-I-
2000
+
-F
1000
El
0
-1000
-I-
+
-I-
-2000
+
-3000
.1
1
10
2
50
75
90
95
99
99.9
P.cr.tl 1
Figure 8.12 The normal probability plot of the experimental design of finding the most
efficient heuristic algorithm for six machine problem by considering minimization of
sum of the completion times criterion
166
The ANOVA table for comparison of time spent is presented in Table A. 19. The results
of the experiment show that there is a significant difference among the time spent for
heuristic algorithms (the result of F test is less than 0.000 1). The Tukey test is applied
to find the difference. The result of Tukey' s test shows that the time spent by TS 1 is
less than the other two heuristic algorithms.
The results of the experiment show that there is not a significant difference between the
time spent by algorithms in applying different initial solution generators for six
machine problems (the result ofF test is equal to 0.1152).
Among the interactions, the interactions between the algorithm factor and all of the
whole plot factors (G*A, J*A, R1*A, G*J*A, G*R1*A, J*R1*A, and G*J*R1*A) are
significant.
The effect slice test is performed for more detailed comparisons. The results are shown
in Table A.20 in appendix. Based on the results, the summary of significant differences
are as follows:
For the heuristic algorithms comparison, there is a significant difference among
the time spent by the heuristic algorithms in a few cells. These cells are
presented in the table below with the best heuristic algorithm for each cell.
Table 8.20 The experimental cells of six machine problems by
considering minimization of sum of the completion times criterion
in which the heuristic algorithms do not have the same time spent
The level
of group
factor
3
3
3
3
The level of
The level
job factor
of Ri
2
2
2
3
2
3
3
3
The most efficient
heuristic algorithm
TS1
TS1
TS1
TS1
167
8.2.3.3 Evaluating the Quality of Solutions
The lower bounding technique based on B&P algorithm is applied to estimate the
quality of solutions. As discussed before, in the interest of time, only a few of test
problems are considered to be solved for estimating the quality of solutions as a
sample. The size of the sample is considered 10 for six machine problems. Thus, every
other five problems (problems number 1, 6, 11, 21, ..., 51 of test problems) are
considered to be solved by the lower bounding technique. The results are shown in the
table below. This comparison is performed by considering the result of the best tabu
search. The results show that the average percentage error is equal to 14%.
Table 8.21 The result of the lower bounding technique for six
machine problems by considering minimization of sum of the
completion times criterion
0
o
-
0 ©
_.
11
1
5
3
0
D
0
10256
1
10256
0.000
6
2
4
8
4502
4
4762
0.058
11
4
6
15
36
11791
0.010
16
4
9
21
11678
4256.45
6
4
22
4728.26
26
7
7
29
31
9
9
50
36
8
10
41
11
4
14
7
51
13
10
72
64465
31692
41884
0.017
46
46
34
60
34477
74057
60370
63360
19517
32389
5359
5854
34477
74057
62871
0.259
21
23764
28800
Average:
0.140
41
1899
24924
29226
28800
28800
0.238
0.000
0.000
0.041
0.624
0.293
168
CHAPTER 9: DISCUSSION
The heuristic algorithms are applied to solve problems by using different initial solution
generators for proposed criteria. The lower bounding technique for each criterion is
also applied for the test problems to estimate the quality of solutions. The analysis of
the results of the experiment for each criterion is as follows:
9.1 Analyzing the Results of Minimization of Makespan Criterion
As discussed in chapter two, Schaller et al. (2000) investigated SDGS problems by
considering minimization of makespan. They developed a heuristic algorithm to solve
the problem and noted that their algorithm may not provide a good quality solution, but
the result worth to be used as the initial solution of a heuristic search algorithm such as
tabu search.
In this research, all test problems of two, three, and six machine problems are solved by
three versions of heuristic algorithm (tabu search) by applying two different initial
solution generators. The first initial solution generator is a random sequence generator
and the second one is developed based on the result of Schaller et al. (2000) algorithm.
The lower bounding technique is also applied to get a lower bound for each test
problem.
The results of the experiment show that TS2 (LTM_Max) has the best performance
compared to the other heuristic algorithms in all problems. In other words, it provides a
better sequence for groups as well as jobs in each group. This result is expected because
LTM_Max is more capable of obtaining a better solution than the other heuristic
algorithms, because it is extensively searching around the areas (neighborhoods) that
are historically found good (intensification).
Based on the results, there is no significant difference between the objective function
values of the heuristic algorithms by applying different initial solutions. This means
169
that applying Schaller et al. (2000) algorithm as the initial solution generator does not
help to improve the quality of solution. The results also show that the heuristic
algorithms (tabu search) provide a much better solution than Schaller et al. (2000)
algorithm. These results are shown in Table 9.1.
The feasible solution space of the problem has too many local optimum points. Thus,
starting with a good quality local optimal solution as the initial solution does not
guarantee of obtaining a better final solution by the heuristic algorithm. This may be
the reason for not improving the quality of solutions by applying Schaller et al. (2000)
algorithm as an initial solution generator.
Table 9.1 The results of test problems for minimization of makespan criterion
Problem
Two machine
Three machine
Six Machine
The best
heuristic
algorithm
TS2
TS2
TS2
The best
initial
solution
Percentage
error for
Schaller et
al. (2000)
9.14%
8.76%
7.08%
Percentage
error for
the best
tabu search
0.68%
1.00%
1.64%
Is TS better
than
Schaller et
al. (2000)?
Yes
Yes
Yes
The results of comparing the heuristic algorithms based on their efficiency (the time
spent to perform the search) is shown in Table 9.2. Based on the results, for two
machine problem, the random initial solution generator is better than that based on
Schaller et al. (2000) algorithm. On the other hand, for the six machine problem, the
Schaller et al. (2000) initial solution generator is a better performer.
Table 9.2 The result of the most efficient initial
solution generator by considering minimization
of makespan criterion
Problem
Two machine
The efficient initial solution
The first initial solution
generator
Three machine
Six Machine
The second initial solution
generator
170
9.2 Analyzing the Results of Minimization of Sum of the Completion Times
Criterion
There is no readily available algorithm from previous research for minimization of sum
of the completion times criterion, and compare its performance with that of the
heuristic algorithm.
For this criterion, two different initial solution generators are developed. The fist is a
random sequence generator, and the second is developed based on relaxing the problem
to a single machine SDGS problem. All test problems of two, three, and six machine
problems are solved by three versions of heuristic algorithm (tabu search) by applying
these two initial solution generators. The lower bounding approach is also applied to
get a lower bound for some test problems. The summary of the results are shown in
Table 9.3.
The results of the experiment show that TS2 (LTM_Max) has the best performance
compared to the other heuristic algorithms for two and three machine problems. None
of the heuristic algorithms show a superior performance compared to the others for six
machine problems. The reason is that all possible combination of problems are not
tested in this research because of interest of time. As mentioned before, only six
machine problems in which the ratio of set-up time between consecutive machines are
increased, equal, or decreased are considered in this research.
The results also show that there is not a significant difference between the objective
function values of the heuristic algorithms by applying different initial solution
generators for two and three machine problems, but the second initial solution
generator provides better results for six machine problem.
The efficiency of TS1 (the time spent to perform the search by heuristic algorithm) is
better than the other heuristic algorithms in all problems.
171
The percentage error of the problems is not satisfactory for large size problems in all
two, three, and six machine problems. The reason is that for large size problems, the
sub-problems cannot be solved optimally during their time limitation (two hours) and
the lower bound obtained for sub-problems after two hours is not of good quality.
Table 9.3 The results of test problems for minimization of sum of the completion times
criterion
Problem
Two
machine
Three
machine
Six
Machine
The best
heuristic
algorithm
The best
initial
solution
generator
The most
efficient
heuristic
algorithm
The most
efficient initial
solution
generator
Percentage
error
TS2
TS1
14.4%
TS2
TS1
17.2%
TS1
14.0%
----
Initial 2
As it is shown, the percentage errors of the problems are 14.4%, 17.2%, and 14.0% for
two, three, and six machine problems. At each of these problem instances, there are a
few problems with high percentage error (more than 50%). If these problems with high
percentage error are removed from the sample, the percentage error of the problems is
improved drastically. Table 9.4 shows the percentage errors by removing the problems
with more than 50% error.
Table 9.4 The percentage error of the test problems for minimization of sum of the
completion times by removing problems with more than 50% percentage error
Problem
Two
machine
Three
machine
Six
Percentage
error
Percentage error (by removing problems with more
than 50% percentage error)
14.4%
8.5% (by removing problem 51)
17.2%
12.1% (by removing problems 137, 145)
14.0%
9.2% (by removing problem 46)
Machine___________
172
CHPATER 10: CONCLUSIONS AND SUGGESTIONS FOR FUTHURE
RESEARCH
Manufacturing companies need to improve their efficiency in order to survive in
current competitive world. One way of improving the efficiency is reducing the
production cost by producing the products as quickly as possible. It is clear that the
longer time the products stay on the shop floor, the higher they cost the company.
Cellular Manufacturing (CM), is known as a technique to improve the efficiency in
batch type production by reducing the production time. In this approach, all machines
of the production line of the company are assigned to several independent cells. Then,
parts based on their similarity in shape or production requirements are set in different
groups. Finally, the groups are assigned to cells according to the capability of available
machines in each cell. This decomposition of machines and grouping parts leads to
significant reduction in set-up time, work-in-progress inventories, and simplified flow
of parts and tools which generally increase the production efficiency.
The efficiency of production can be further improved, if the best sequence of
processing groups in a cell as well as jobs that belong to a group are found based on
maximizing or minimizing some measure of effectiveness. This subject is called Group
Scheduling. Two relevant objectives in the investigation of group scheduling problems,
minimization of makespan and minimization of the sum of the completion times, were
considered in this research. The goal of these objectives is to process parts as quickly as
possible and deliver them to the customer.
In group scheduling problems, each group requires a major set-up on every machine.
The separable set-up time scheduling problems are divided into two major categories:
sequence dependent, and sequence independent scheduling. If the set-up time of a
group for each machine depends on the immediately preceding group that is processed
on that machine, the problem is classified as "sequence dependent group scheduling,"
Otherwise, it is called "sequence independent group scheduling".
173
In chapter two, it is shown that although a considerable body of literature on sequence
dependent and sequence independent group scheduling has been created, there still
exist several potential areas worthy of further research on sequence dependent and
sequence independent group scheduling (Cheng et al., 2000).
In this research, sequence dependent group scheduling problems are discussed by
considering minimization of makespan and minimization of the sum of the completion
times criteria.
A mathematical model is developed to solve the problems optimally, but it is proved
that the mathematical model is NP-hard. Thus, it is required to develop a heuristic
algorithm to solve industry size problems in a reasonable time.
Based on previous research, tabu search has produced a better performance compared
to other heuristic algorithms in similar problems. Thus, a few versions of tabu search
are developed to solve problems heuristically and generate solutions with good quality.
Two different initial solution generators for the heuristic algorithms are developed for
each criterion as well.
To support the quality of solutions, a lower bound is required to estimate the quality of
solutions. For each criterion, a different lower bounding mechanism is created to get
lower bounds for problems.
For the minimization of makespan, a lower bounding technique is developed by
relaxing the mathematical model of the problem from sequence dependent group
scheduling problem to sequence dependent job scheduling problem and adding a few
constraints to get tighter lower bound. The results show that the average percentage
error of the heuristic algorithm for this criterion is 0.68%, 1.00%, and 1.60% for two,
three and six machine problems, respectively.
174
For minimization of the sum of completion times criterion, a lower bounding technique
based on Branch-and-Price (B&P) is developed. In this model, the mathematical model
is decomposed to a master problem and one or more sub-problems. The number of sub-
problems is equal to the number of machines. The results show that the average
percentage error of the heuristic algorithm for this criterion is 14.4%, 17.2%, and
14.0% for two, three and six machine problems, respectively.
The experimental design techniques are applied to find the best heuristic algorithm and
the best initial solution generator for each criterion. To compare the performance of the
heuristic algorithms, for each machine size problem, the random test problems are
generated and solved by the heuristic algorithms. Then the experimental design
techniques are applied to find the best heuristic algorithm with the best performance as
well as the best initial solution generator.
10.1 Suggestions for Future Research
The suggestions for future research can be categorized as follows in the following
sections:
Defining new research problems related to the one discussed in this dissertation
Applying new techniques to solve the proposed problems in this dissertation
Each of the above items are discussed below
10.1.1 Defining Related Research Problems
As Cheng et al. (2000) mentioned, there is still room for more research in the area of
sequence dependent group scheduling problems.
As mentioned, the research problem of this dissertation is constructed based on some
assumptions. These assumptions are explained in chapter three. By relaxing any of
175
these assumptions, a new research problem can be defined. These research problems
are as follows:
The first assumption was based on permutation scheduling. In other words, in this
research it was assumed that all jobs and groups are processed in the same sequence
on all machines (permutation scheduling). If a company can relax this assumption
in its production line, there are possibilities that the company can further reduce the
production time and work-in-progress inventories. To solve the new research
problem, the tools applied in this research can be applied as follows:
o The mathematical model proposed in this research can be applied by making a
few changes.
o The heuristic algorithm proposed in this research can be applied by making
minor changes in the section of calculating the objective function value of
neighborhoods.
o The lower bounding technique proposed for minimization of makespan criterion
can be applied by making minor changes.
o The lower bounding technique proposed based on B&P algorithm can be
applied to get a lower bound to estimate the quality of solutions of heuristic
algorithms.
The second assumption was based on static job releases. In other words, in this
research it is assumed that all jobs in each group are available at the beginning of
the schedule. If this assumption is relaxed, the mathematical models, the heuristic
algorithm techniques, and the lower bounding algorithm (B&P) can be applied by
making changes for both criteria, but the changes can be substantial.
The third assumption was about the priority of jobs as well as group. In this
research it was assumed that all jobs and groups have the same importance
(weight). In real world problems, there are cases that a company has orders in
which they are more important than the other orders. This problem can be solved by
the tools presented in this research by making the following minor changes:
o The mathematical model proposed in this research can be applied by making
some minor changes.
176
o The heuristic algorithm (tabu search) proposed in this research can be applied
by making minor changes in the section of calculating the objective function
value of neighborhoods.
o The lower bounding technique proposed based on B&P algorithm can be
applied to get a lower bound to estimate the quality of solutions of heuristic
algorithms for both criteria.
The last assumption was based on machine availabilities. In this research it was
assumed that all machines are available at the beginning of planning horizon. There
are situations in real world that some of the machines in the production line may not
be available for a while. This problem can be solved by the tools presented in this
research by making the following minor changes:
o The mathematical model proposed in this research can be applied by making
some minor changes.
o The heuristic algorithm (tabu search) proposed in this research can be applied
by making minor changes in the section of calculating the objective function
value of neighborhoods.
o The lower bounding technique proposed based on B&P algorithm can be
applied to get a lower bound to estimate the quality of solutions of heuristic
algorithms for both criteria.
In this research, minimization of makespan and minimization of sum of the
completion times are considered as a criterion. There are other criteria which may
be more suitable for companies to apply. For instance, if a company has some
stringent deadlines for its orders, then minimizing the sum of the tardiness of all job
orders would be a better performance measure for the company.
10.2 Applying New Techniques (tools) to Solve Proposed Problems
There are tools other than the ones applied in this research to solve problems. Some of
these tools may have better performance than the ones applied in this research. It is
valuable if their performance is examined with the ones proposed in this research.
Some of these techniques are as follows:
177
Heuristic algorithm: Tabu search is applied to get good quality solutions in this
research because of its better performance in previous research compared to genetic
algorithm and simulated annealing. It may be worth to apply other heuristic
algorithms such as ant colony algorithm to compare its performance with tabu
search result.
Lower bounding technique: The B&P algorithm is applied to get a lower bound for
minimization of sum of the completion times criterion. The performance of the
algorithm was not good enough. There is still room to improve the quality of the
lower bound in one of areas below:
o Applying other approaches for branching. The branching rule applied in this
research (branch on the original variables i.e., AS variables) may not necessarily
be the best way to branch.
o Trying other stopping criteria.
178
BIBLIOAGRAPHY
Allahverdi, A., Gupta, J.N.D., Aldowaisian, T., 1999, A Review of Scheduling
Research Involving Setup Considerations, Omega, International Journal of
Management Science, 27, 219-239.
Allahverdi, A., 2000, Minimizing Mean Flowtime in a Two-machine Flowshop with
Sequence Independent Setup Times, Computers and Operations Research, 27, 111127.
Amini, M.M., Barr, R.S., 1993, Network Reoptimization Algorithms, A Statistically
Design Compariosion, Orsa Journal of Computing, 5,4, 385-408.
Bagga, P.C., and Khurana, K. 1986, Two Machine Flowshop with Separated Sequence-
Independent Set-up Times: Mean Completion Time Criterion, Indian Journal of
Management and Systems, 2, 1, 47-57.
Baker, K.R., 1990, Scheduling Groups of Jobs in the Two-machine Flowshop,
Mthematical and Computer Modeling, 13, 3, 29-3 6.
Barnhart, C., Hane, C.A., Johnson, E.L., and Sigimondi, G., 1995A, a Column
Generation and Partitioning Approach for Multi-commodity Flow Problem,
Telecommunication Systems, 3, 239-258.
Bamhart, C., Johnson, E.L., Nemhauser, G.L., Savelsbergh, M.W.P., and Vance, P.11.,
1998, Branch and Price: Column Generation for Solving Huge Integer Programs,
Operations Research, 46, 3, 3 16-329.
Bellman, R., Esogbue, A.O., and Nabeshima, I., 1982, Mathematical Aspects of
Scheduling and Applications, Pergamon Press,New York.
Campbell, H.G., Dudek, R.A., and Smith, M.L., 1970, A Heuristic Algorithm for the nJob and rn-Machine Sequencing Problem, Management Science, 16, 10, 630-637.
Cheng, T.C.E., Gupta, J.N.D., and Wang, G., 2000, A Review of Flowshop Scheduling
Research with Set-up Times, Production and Operations Management, 9, 3, 262282.
Corwin, B.D., and Esogbue, A.O., 1974, Two-Machine Flowshop Scheduling Problems
with Sequence Dependent Set-up Times: A Dynamic Programming Approach,
Naval Research Logistics Quarterly, 21, 3, 5 15-524.
Desrosiers, J., Dumas, Y., Solomon, M.M., Soumis, F., 1995, Time Constrained
Routing and Scheduling. hi Handbooks in Operations Research and Management
179
Science. Ball, M. E., Magnanti, T.L., Monma, C., Nenthauser, G.L., Elsevier,
Amesterdam.
Flynn, B.B., 1987, The effects of Set-up time on output capacity in cellular
manufacturing, International Journal of Production Research, 25, 1761-1772.
Garey, M.D., Johnson, D.S., and Sethi, R., 1976, The Complexity of Flowshop and
Jobshop Scheduling, Mathematics of Operations Research, 1, 2, 117-129.
Glover, F., 1986, Future Paths for Integer Programming and Links to Artificial
Intelligence, Computers and Operations Research, 13, 5 33-549.
Glover, F., 1989, Tabu-Search-Part I, ORSA Journal of Computing, 1, 190-206.
Glover, F., 1990a, Tabu-Search-Part II, ORSA Journal of Computing, 1, 4-32.
Glover, F., 1990b, Tabu-Search: A Tutorial, Interfaces, 20, 74-94.
Gupta, J.N.D., 1972, Optimal Scheduling in Multi-stage Flowshop, AIIE Transaction,
4, 238-243.
Gupta, J.N.D., 1988, Flowshop Scheduling with Sequence Dependent Set-up Times,
Proceeding of the ORSA/TIMS National Meeting, Washington DC.
Gupta, J.N.D., and Darrow, W.P., 1986, The Two-Machine Sequence Dependent
Flowshop Scheduling Problem, European Journal of Operational Research, 439446.
Gupta, J.N.D., Das, S.R., and Ghosh, S., 1995, Flowshop Scheduling with Sequence
Dependent Set-up Times, Working Paper, Department of Management, Ball State
University, Muncie, IN.
Ham, I., Hitomi, K., and Yoshida, T., 1985, Group Technology, Kluwer-Nijhoff
Publishing, Boston, MA.
Hansen, P., 1986, The Steepest Ascent Mildest Decent Heuristic for Combinatorial
Programming, Conference on Numerical Methods in Combinatorial Optimization,
Capri, Italy.
Helal, M., and Rabelo, L., 2004, Investigating Group-Scheduling Heuristics in the
Context of the Two-phase Nature of the Model in a Flow Cell, Proceedings(CDROM), 12th Annual Industrial Engineering Research Conference (IERC), Houston,
TX, May 16-19.
Hitomi, K., and Ham, I., 1976, Operations Scheduling for Group Technology
Applications, CIRP Annals, 25, 419-422.
Johnson, S.M., 1954, Optimal Two and Three Stage Production Scheduling with Set-up
times Included, Naval Research Logistic Quarterly, 1, 1, 6 1-68.
Jordan, C., 1996, Batching and Scheduling: Models and Methods for Several Problem
Classes, Springer, Berlin, Germany.
Lageweg, B.J, Lenstra, J.K, Rinnooy Kan, A.H.G, 1978, A General Bounding Scheme
for the Permutation Flowshop Problem, Operations Research, 26, 1, 53-67.
Laguna, M., Barnes, J.W., and Glover, F., 1991, Tabu Search Methods for a Single
Machine Scheduling Problem, Journal of International Manufacturing, 2, 63-74.
Logendran, R., 2002, Class Notes for Design and Scheduling of Cellular Manufacturing
Systems (1E564), Oregon State University.
Logendran, R, Salmasi, N., and Sriskandarajah, C., 2006, Two-Machine Group
Scheduling Problems in Discrete Parts Manufacturing with Sequence-Dependent
Setups, Journal of Computers and Operations Research, , 33, 158-180.
Logendran, R, and Sonthinen, A., 1997, A Tabu search-based Approach for Scheduling
Job-shop Type Flexible Manufacturing Systems, Journal of the Operational
Research Society, 48, 264-277.
Logendran, R, and Sriskandarajah, C., 1993, Two Machine Group Scheduling Problem
with Blocking and Anticipatory Set-ups, European Journal of Operational Research,
69, 3, 467-481.
Logendran, R, and Subur, F., 2004, Unrelated Parallel Machine Scheduling with Job
Splitting, TIE Transaction, 36, 3, 359-372.
Lubbecke, M.E., and Desrosiers, J, 2004, Selected Topics in Column Generation, Les
Cahiers de GERAD G-2002-64, Group for Research in Decision Analysis, Montreal,
Canada,. To appear in Operations Research.
Montgomery, D.C., 2001, Design and Analysis of Experiments, New York, John Wiley
& Sons.
Nawaz, M, Enscore, E, and Ham, I, 1983, A Heuristic Algorithm for the rn-Machine, nJob Flow-shop Sequencing Problem, Omega, International Journal of Management
Science, 11, 1,91-95.
Nowicki, E., and Smutnicki C., 1996, A Fast Tabu Search Algorithm for the
permutation Flow-shop Problem, European Journal of Operations Research, 91,
160-175.
181
Panwalker, SS., Dudek, R.A., and Smith M.L., 1973, Sequencing Research and the
Industrial Scheduling Problem, Symposium on the Theory of Scheduling and its
Applications, 29-38.
Parthasarathy, S., and Rajendran, C., 1997, A Simulated Annealing Heuristic for
Scheduling to Minimize Weighted Tardiness in a Flowshop with Sequence
Dependent Set-up Time of Jobs- A Case Study, Production Planning and Control, 8,
5, 475-483.
Petrov, V.A., 1968, Flowline Group Production Planning, Business Publications,
London, United Kingdom.
Pinedo, M., 2002, Scheduling Theory Algorithms and Systems, Prentice Hall.
Pham, D.T., and Karaboga, D., 1998, Intelligent Optimization Techniques, Springer.
Proust, C., Gupta, J.N.D., and Deschamps, V., 1991, Flowshop scheduling with Set-up,
Processing and Removal Time Separated, International Journal of Production
Research, 29,3, 479-493.
Reddy, V., and Narendran, T.T, 2003, Heuristics for Scheduling Sequence Dependent
Set-up Jobs in Flow Line Cells, International Journal of Production Research, 41, 1,
193-206.
Reeves, C.R., 1993, Modern Heuristic Techniques for Combinatorial Problems, John
Wiley & Sons.
Rios-Mercado, R.Z., and Bard, J.F., 1998, Computational Experience with a Branch-
and-Cut Algorithm for Flowshop Scheduling with Set-ups, Computers and
Operations Research, 25, 5, 35 1-366.
Rios-Mercado, R.Z., and Bard, J.F., 1999, A Branch and Bound Algorithm for
Flowshop Scheduling with Set-up Times, TIE Transaction, 31, 8, 721-731.
Schaller, J.E., 2000, A Compansion of Heuristic for Family and Job Scheduling in a
Flow-line Manufacturing Cell, International Journal of Production Research, 28, 2,
287-308.
Schaller, J.E., Gupta, J.N.D., and Vakharia, A.J., 1997, Group Scheduling with
Sequence Dependent Set-ups, Proceedings of the Annual Decision Science Institute
Meeting, San Diego, CA, 1141-1143.
Schaller, J.E., Gupta, J.N.D., and Vakharia, A.J., 2000, Scheduling a Flowline
Manufacturing Cell with Sequence Dependent Family Setup Times, European
Journal of Operational Research, 125, 324-339.
182
Simons, J.V, 1992, Heuristic in Flowshop Scheduling with Sequence Dependent Setup
Time, Omega, 20, 2, 215-225.
Skorin-Kapov, J., and Vakharia, A.J., 1993, Scheduling a Flow-Line Manufacturing
Cell: A Tabu Search Approach, International Journal of Production Research, 31, 7,
1721-1734.
Sridhar, J and Rajendran, C., 1994, A Genetic Algorithm for Family and Job
Scheduling a Flow-Line Based Manufacturing Cell, Computers and Industrial
Engineering, 27, 1-4, 469-472.
Srikar, B.N., and Ghosh, S., 1986, A MLP Model for the n-job, M-stage Flowshop,
with Sequence Dependent Set-up Times, International Journal of Production
Research, 24, 6, 1459-1472.
Stafford, E.F., and Tseng, F.T, 1990, On the Srikar-Ghosh MILP Model for the N*M
SDST Flowshop Problem, International Journal of Production Research, 28, 10,
18 17-1830.
Taillard, E., 1990, Some Efficient Heuristic Methods for the Flowshop Sequencing
Problem, European Journal of Operational Research, 47, 65-74.
Vakharia, A.J., and Chang, Y.L., 1990, a Simulated Annealing Approach to scheduling
a Manufacturing Cell, Naval Research Logistics, 37, 6, 559-577.
Vakharia, A.J., Schaller, J.E., and Gupta, J.N.D., 1995, Designing and Scheduling
Manufacturing Cells, Proceeding of the iNFORMS National Meeting, New
Orleans, LA.
Widmar, M., and Hertz, A., 1989, A New Heuristic Method for the Flowshop
Sequencing Problem, European Journal of Operational Research, 41, 186-193.
Wilhelm, W.E., 2001, A Technical Review of Colunm Generation in Integer
Programming, Optimization and Engineering, 2, 159-200.
Wilhelm, W.E., Damodaran, P., and LI, J., 2003, Prescribing the Content and Timing of
Product Upgrade, TIE Transactions, 35,647-663.
Wortman, D.B., 1992, Managing Capacity: Getting the most from your Firm's Assets,
md Eng, 24, 47-49.
Yoshida, T., and Hitomi, K., 1979, Optimal Two-Stage Production Scheduling with
Setup Times Separated, AITE Transactions, 11, 261-263.
183
APPENDICES
184
APPENDIX A: THE ANOVA AND TEST OF EFFECT SLICES TABLES FOR
THE RESULT CHAPTER
Table A. 1 The ANOVA table for two machine problem by considering minimization of
makespan for time spent comparison
Type 3 Tests of Fixed Effects
I'Jum
Effect
DF
Den
DF
G
J
Il
2
0
2
27
27
A
2
I
1
135
135
G*J
G*Rl
G*A
G*I
J*R1
J*A
4
0
R1*A
R1*I
A*I
G*J*R1
G*J*A
G*J*I
G*R1*A
G*R1*I
G*A*I
J*R1*A
J*R1*I
J*A*I
R1*A*I
G*J*R1*A
G*J*R1*I
G*J*A*I
G*R1*A*I
J*R1*A*I
G*J*R1*A*I
2
4
0
4
135
135
27
135
135
135
135
135
27
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
2
4
4
2
4
2
2
8
8
4
8
4
4
8
4
4
4
16
8
8
8
8
16
F Value
Pr > F
106.94
0.78
1.02
203.99
4.69
13.99
18.41
203.99
4.69
0.30
1.55
2.60
5.79
0.50
6.09
0.30
1.55
2.60
5.79
0.50
6.09
1.93
0.16
0.85
0.68
1.93
0.16
0.85
0.68
0.59
0.59
<.0001
0.4701
0.3735
<.0001
0.0321
0.5503
0.4140
<.0001
0.0107
0.876].
0.1912
0.0782
0.0002
0.6053
0.0029
0.9600
0.1457
0.0391
<.0001
0.7329
0.0002
0.0606
0.9604
0.4943
0.6082
0.0227
0.9960
0.5582
0.7100
0.7839
0.8863
185
Table A.2 Test of effect slices for two machine problem by considering minimization
of makespan for time spent comparison
The Mixed Procedure
Differences of Least Squares Means
Effect
G
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
Q*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
1
1
_A _Pr
Ri
A
_G
1
1
1
1
1
1
2
1
1
1
1
1
1
3
1
1
1
2
1
1
1
3
1
1
2
1
1
1
2
2
1
1
2
1
1
1
2
3
1
1
2
2
1
1
2
3
1
1
3
1
1
1
3
2
1
1
3
1
1
1
3
3
1
1
3
2
1
1
3
3
1
2
1
1
1
2
1
2
1
2
1
1
1
2
1
3
1
2
1
2
1
2
1
3
1
2
2
1
1
2
2
2
1
2
2
1
1
2
2
3
1
2
2
2
1
2
2
3
1
2
3
1
1
2
3
2
1
2
3
1
1
2
3
3
1
2
3
2
1
2
3
3
1
3
1
1
1
3
1
2
1
3
1
1
1
3
1
3
1
3
1
2
1
3
1
3
1
3
2
1
1
3
2
2
1
3
2
1
1
3
2
3
1
3
2
2
1
3
2
3
1
3
3
1
1
3
3
2
1
3
3
1
3
3
3
1
3
3
2
1
1
3
3
3
2
1
1
1
2
1
1
2
2
1
1
1
2
1
1
3
2
1
1
2
2
1
1
3
2
1
2
1
2
1
2
2
2
1
2
1
2
1
2
3
2
1
2
2
2
1
2
3
2
1
3
1
2
1
3
2
2
1
3
1
2
1
3
3
2
1
3
2
2
1
3
3
2
2
1
1
2
2
1
2
2
2
1
1
2
2
1
3
2
2
1
2
2
2
1
3
2
2
2
1
2
2
2
2
2
2
2
1
2
2
2
3
2
2
2
2
2
2
2
3
2
2
3
1
2
2
3
2
2
2
3
1
2
2
3
3
2
2
3
2
2
2
3
3
J
J
Ri
>
Jt
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
Adjustment
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kranier
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Ils
Table A.2 (Continued) Test of effect slices for two machine problem by considering
minimization of makespan for time spent comparison
The Mixed Procedure
Differences of Least Squares Means
A
G
A
J
Ri
Pr > ti
Effect
G
J
Ri
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*Ri*A
G*J*Ri*A
G*J*R1*A
G*J*Ri*A
G*J*Ri*A
G*J*Ri*A
2
3
1
1
2
3
1
2
2
3
1
1
2
3
1
3
2
3
1
2
2
3
1
3
2
3
2
1
2
3
2
2
2
3
2
1
2
3
2
3
2
3
2
2
2
3
2
3
2
3
3
1
2
3
3
2
2
3
3
1
2
3
3
3
2
3
3
2
2
3
3
3
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
3
3
1
1
1
1
1
1
3
3
1
1
1
2
3
1
1
2
3
1
1
3
0.2124
3
3
1
1
2
3
3
1
1
2
2
2
1
1
2
3
<.0001 Tukey-Iraiuer
<.0001 Tukey-Krainer
G*J*R1*A
3
1
2
2
3
1
2
3
0.2124
Tukey-Kramer
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
3
1
3
1
3
1
3
1
3
1
3
1
3
3
2
3
<.0001
<.0001
Tukey-Xramer
Tukey-Kramer
3
1
3
2
3
1
3
3
0.0134
Tukey-Kramer
3
3
2
2
1
1
1
1
3
3
2
2
1
1
2
3
<.0001
<.0001
Tukey-Kramer
Tukey-Krainer
1
3
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
Adjustment
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
<.0001 Tukey-Xramer
<.000]. Tukey-Xramer
Tukey-Kramer
G*J*R1*A
3
2
1
2
3
2
1
3
0.0134
Tukey-Kramer
G*J*R1*A
G*.J*R1*A
3
2
2
3
2
2
3
2
2
1
1
3
2
2
2
3
<.0001
<.0001
Tukey-Kramer
Tukey-Kraiuer
G*J*R1*A
G*J*R1*A
3
2
2
2
3
2
2
3
1.0000
Tukey-Kramer
2
2
3
3
1
1
3
3
2
2
3
3
2
G*J*R1*A
3
3
3
<.0001
<.0001
Tukey-ICramer
Tukey-Kramer
G*J*R1*A
3
2
3
2
3
G**R1*I
2
3
3
0.0003
Tukey-Kramer
3
1
1
3
1
2
3
1
1
3
G*J*R1*A
3
3
3
3
1
3
<.0001 Tukey-Kraiuer
<.000]. Tukey-Kramer
G*J*R1*A
3
3
1
2
3
3
1
3
0.2124
G*J*R1*A
3
3
2
2
1
G*J*Ri*A
3
3
1
3
3
3
3
2
2
2
3
<.0001 Tukey-Kramer
<.0001 Tukey-ICraiuer
G*J*R1*A
3
3
2
2
3
3
2
3
1.0000
Tukey-Kramer
G*J*R1*A
G*J*R1*A
3
3
3
3
3
<.0001
<.0001
Tukey-Kramer
3
3
3
2
3
1
1
3
3
3
3
G*J*R1*A
3
3
3
2
3
3
3
3
0.2124
Tukey-Kramer
Tukey-Kramer
Tukey-raiuer
187
Table A.2 (Continued) Test of effect slices for two machine problem by considering
minimization of makespan for time spent comparison
The Mixed Procedure
Differences of Least Squares Means
Effect
G
J
I
_G
J
G*J*I
G*J*I
G*J*I
G*J*I
G*J*I
G*J*I
G*J*I
G*J*I
G*J*I
1
1
1
1
1
2
1
2
1
1
2
2
1
3
1
1
3
2
2
1
1
2
1
2
2
2
1
2
2
2
2
3
1
2
3
2
3
1
1
3
1
2
3
2
1
3
2
2
3
3
1
3
3
2
I
Pr >
ti
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.2396
1.0000
<.0001
Adjustment
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Table A.3 The ANOVA table for three machine problem by considering minimization
of makespan for algorithm comparison
Type 3 Tests of Fixed Effects
Num
DF
Den
DF
F Value
Pr > F
R2
2
2
2
2
0
0
0
0
360.42
57.00
16.96
10.54
<.0001
<.0001
<.0001
<.0001
A
2
405
14.97
<.0001
I
1
G*J
G*Rl
405
4
4
0
G*R2
G*A
4
0
0
1.20
3.87
1.74
2.56
0.2732
0.0063
0.1490
0.0444
2
405
405
2.77
4.58
0.0270
0.0108
4
4
0
0
4
405
405
405
0.46
0.16
0.67
0.94
0.58
1.14
0.7651
0.9576
0.6122
0.3911
0.6813
0.3354
2
405
4.71
0.0095
R2*I
A*I
G*J*Rl
4
2
2
8
405
405
405
G*J*R2
G*J*A
8
0
0
0.4623
0.3546
0.8649
0.9720
0.9979
0.9539
Effect
G
J
Rl
G*I
J*R1
J*R2
J*A
R1*R2
R1*A
R1*I
R2*A
4
2
4
4
0
8
405
0.90
1.04
0.15
0.28
0.13
0.33
G*J*I
G*R1*R2
G*R1*A
4
405
6.37
<.0001
8
0
8
G*R1*I
4
G*R2*A
G*R2*I
G*A*I
8
4
405
405
405
0.94
0.21
0.45
0.90
0.4906
0.9898
0.7722
0.5130
405
5.01
0.0006
405
0.20
0.39
0.74
1.63
0.48
0.9399
0.9253
0.6580
0.1654
0.8678
J*R1*R2
J*R1*A
4
8
8
0
J*R2*A
8
405
405
405
J*R2*I
4
405
5.85
0.0001
J*A*I
4
Rl*R2*A
R1*R2*I
R1*A*I
R2*A*I
G*J*R1*R2
G*J*R1*A
G*J*R1*I
G*J*R2*A
8
405
405
405
405
405
16
405
405
405
0.45
0.36
0.92
0.26
0.37
0.81
0.69
1.60
0.17
0.7754
0.9430
0.4542
0.9014
0.8271
0.6686
0.8090
0.1242
0.9999
8
405
3.36
0.0010
J*1fl*I
G*J*R2*I
4
4
4
4
16
16
8
0
Table A.3 (Continued) The ANOVA table for three machine problem by considering
minimization of makespan for algorithm comparison
Type 3 Tests of
Effect
Fixed Effects
Num
DF
Den
DF
F Value
Pr > F
8
G*J*A*I
G*Rl*R2*A
16
405
405
0.78
0.51
0.6205
0.9402
G*R1*R2*I
8
405
3.73
0.0003
8
8
0.17
0.32
0.31
1.65
0.13
0.53
0.11
0.33
0.9951
0.9589
0.9953
0.1078
0.9979
0.8365
0.9990
0.9998
G*Rl*A*I
R1*R2*A*I
G*J*R1*R2*A
32
405
405
405
405
405
405
405
405
G*J*R].*R2*I
16
405
2.14
0.0063
G*J*Rl*A*I
G*J*R2*A*I
16
16
16
16
32
405
405
405
405
405
0.33
0.22
0.48
0.67
0.53
0.9943
0.9995
0.9569
0.8243
0.9837
G*R2*A*I
J*R1*R2*J
J*Rl*R2*I
J*Rl*A*I
J*R2*A*I
G*R1*R2*A*I
J*R1*R2*A*I
G*J*R1*R2*A*I
16
8
8
8
8
190
Table A.4 Test of effect slices for three machine problem by considering minimization
of makespan for algorithm comparison
Differences of Least Squares Means
Effect
G
A
G A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
1
1
1
2
1
1
1
3
1
2
1
3
2
1
2
2
2
1
2
3
2
2
2
3
3
3
2
3
1
1
3
3
3
2
3
3
Pr >
t
0.0400
0.1668
0.5000
0.0992
0.7011
0.2053
<.0001
0.0684
0.0001
Adjustment
Adj P
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
Tukey-Kramer
0.5024
0.9031
0.9991
0.7748
1.0000
0.9398
<.0001
0.6640
0.0043
191
Table A.4 (Continued) Test of effect slices for three machine problem by considering
minimization of makespan for algorithm comparison
Effect
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
J Ri R2 I
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
1
1
2
2
2
3
3
3
2
3
1
2
3
1
1
1
1
1
1
1
1
1
1
1
2
2
2
3
2
3
1
1
1
1
1
1
1
1
2
3
1
2
3
1
2
3
1
1
1
1
1
1
2
2
2
3
2
3
1
2
3
1
2
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
3
3
3
3
1
3
1
1
3
1
3
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
J R1R2 I Pr > ti Adju Adj P
G
1
3
2
2
2
2
2
2
3
3
1
1
1
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
1
2
3
1
2
3
1
2
3
2
3
1
1
1
1
1
1
1
1
1
1
1
1
1
1
3
1
2
2
2
3
3
3
1
3
1
2
1
3
3
1
2
3
1
1
1
2
2
2
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
1
1
1
2
2
2
2
3
2
1
2
2
3
1
2
3
3
1
1
1
2
2
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
0.4593
0.7672
1.0000
0.3746
1.0000
0.7672
0.4593
1.0000
1.0000
0.6570
0.7672
0.7734
0.9694
0.0004
0.4898
0.6570
0.3746
0.5873
0.8823
0.4593
0.1999
0.3004
0.0940
<.0001
0.1676
0.1529
<.0001
1.0000
1.0000
1.0000
0.0684
0.0304
0.8823
0.6930
0.1040
0.2780
0.5538
0.8051
0.0847
0.8051
0.7298
0.3004
0.6570
0.0684
0.5873
0.5538
0.8435
0.8823
1.0000
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.6903
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0742
1.0000
1.0000
<.0001
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
192
Table A.4 (Continued) Test of effect slices for three machine problem by considering
minimization of makespan for algorithm comparison
Effect
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G
2
2
2
2
2
J Ri R2 I
3
3
3
3
3
3
1
1
1
1
1
1
3
1
3
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
2
2
3
3
3
1
1
Nurn
2
1
3
1
1
1
2
1
3
1
1
1
2
1
1
3
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
1
3
3
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
3
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
J R1R2 I Pr > ti Adju Adj P
G
Den
2
2
2
2
2
3
3
3
3
3
3
3
3
1
1
1
3
1
3
1
3
3
3
3
3
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
2
3
1
2
3
1
2
3
1
2
3
1
2
3
3
3
1
2
3
1
2
3
1
2
1
2
3
1
2
3
2
2
2
2
2
2
2
2
2
2
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
0.8823
1.0000
0.1266
0.0023
0.7672
0.4593
0.0020
0.0023
0.0023
0.0140
0.0256
0.9214
0.9606
0.0940
0.0236
0.0762
0.4300
0.1832
0.6930
0.6570
0.0940
0.0489
0.7298
<.0001
0.0038
0.6217
0.3746
0.5873
0.9214
0.1393
0.0343
0.8823
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
T- K
1.0000
1.0000
1.0000
0.9721
1.0000
1.0000
0.9605
0.9721
0.9721
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.1936
0.9918
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
193
Table A.5 The ANOVA table for three machine problem by considering minimization
of makespan for time spent comparison
Type 3 Tests of Fixed Effects
Num
DF
Den
DF
R2
A
2
2
2
2
81
81
81
81
2
I
1
405
405
Effect
G
J
Ri
Q*J
G*R1
G*R2
G*A
G*I
J*R1
J*R2
J*A
J*
R1*R2
Ri*A
Ri*I
R2*A
R2*I
A*I
G*J*R1
G*J*R2
G*J*A
G*J*I
4
4
4
4
2
4
4
4
2
81
2
2
8
8
81
81
8
405
405
2
4
4
G*R1*I
4
8
J*R2*I
4
4
8
8
4
8
J**
4
R1*R2*A
8
R1*R2*I
Ri*A*I
R2*A*I
G*J*R1*R2
G*J*Ri*A
G*J*R1*I
G*J*R2*A
G*J*R2*I
405
405
405
405
405
405
405
G*R2*A
J*R2*A
81
81
4
8
8
J*R1*I
405
405
4
G*R1*R2
G*R1*A
G*R2*I
G*A*I
J*R1*R2
J*R1*A
81
81
81
4
4
4
4
16
16
8
16
8
81
405
405
405
405
405
81
405
405
405
405
405
405
405
405
405
81
405
405
405
405
F Value
Pr > F
84.46
20.64
0.26
0.15
157.48
1.70
17.94
0.31
0.12
140.03
1.46
0.56
0.32
32.80
0.80
1.79
0.54
0.19
0.38
1.17
0.44
0.55
0.25
28.73
0.68
1.73
0.50
0.21
0.35
1.19
0.36
0.95
1.33
0.15
0.89
0.54
0.18
2.93
0.26
0.18
0.46
0.81
1.23
0.15
0.73
0.55
<.0001
<.0001
0.7734
0.8626
<.0001
0.1929
<.0001
0.8737
0.9743
<.0001
0.2328
0.6902
0.8670
<.0001
0.4516
0.1388
0.7097
0.8281
0.8244
0.3128
0.6432
0.8131
0.9799
<.0001
0.6047
0.1050
0.8572
0.9321
0.9436
0.3144
0.8366
0.4833
0.2247
0.9637
0.5217
0.7078
0.9481
0.0034
0.9009
0.9464
0.7622
0.6690
0.2392
0.9964
0.7624
0.8175
194
Table A.5 (Continued) The ANOVA table for three machine problem by considering
minimization of makespan for time spent comparison
Type 3 Tests of Fixed Effects
Effect
G*J*A*I
G*R1*R2*A
G*R1*R2*I
G*R1*A*I
G*R2*A*I
J*R1*R2*A
J*R1*R2*I
J*R1*A*I
J*R2*A*I
R1*R2*A*I
G*J*R1*R2*A
G*J*R1*R2*I
G*J*R1*A*I
Num
DF
8
16
8
8
8
16
8
8
8
8
32
16
16
Den
DF
F Value
Pr > F
405
0.16
0.9959
405
2.82
0.0002
405
405
405
405
405
405
405
405
405
405
405
0.28
0.17
0.47
1.63
0.23
0.14
0.23
0.15
1.43
0.23
0.14
0.9721
0.9942
0.8759
0.0572
0.9856
0.9970
0.9860
0.9967
0.0650
0.9994
1.0000
195
Table A.6 The ANOVA table for six machine problem by considering minimization of
makespan for algorithm comparison
Type 3 Tests of Fixed Effects
Num
DF
Den
DF
G
J
Rl
2
0
2
0
2
0
A
2
I
1
135
135
G*J
G*R1
G*A
G*I
4
0
J*Jj
4
0
J*A
4
J*]
2
Rl*A
4
R1*I
A*I
G*J*R1
G*J*A
G*J*I
G*R1*A
2
135
135
135
135
135
G*R].*I
4
G*A*I
J*R1*A
J*R1*I
J*A*I
R1*A*I
4
Effect
G*J*R1*A
G*J*R1*I
G*J*A*I
G*R1*A*I
J*Rl*A*I
G*J*Rl*A*I
4
4
2
2
0
135
135
8
0
8
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
4
8
8
4
4
4
16
8
8
8
8
16
F Value
Pr > F
253.77
6.97
270.48
8.41
0.94
3.99
37.48
4.02
5.57
2.34
0.35
0.82
1.02
0.37
0.75
0.83
0.75
2.00
0.54
4.09
0.46
0.31
3.42
0.08
0.51
0.45
3.08
0.18
0.58
0.37
0.21
<.0001
0.0036
<.0001
0.0004
0.3344
0.0114
<.0001
0.0041
0.0047
0.0804
0.8408
0.4443
0.3992
0.6915
0.4763
0.5810
0.6474
0.0987
0.8264
0.0037
0.7668
0.9610
0.0107
0.9873
0.7270
0.9662
0.0032
0.9935
0.7968
0.9336
09995
196
Table A.7 Test of effect slices for six machine problem by considering minimization of
makespan for the algorithm comparison
Differences of Least Squares Means
Effect
G
G*A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
1
1
1
2
2
2
3
G*J*Ri*I
G*J*Ri*I
G*J*R1*I
G*J*Ri*I
G*J*R1*I
G*J*Ri*I
G*J*Ri*I
G*J*Ri*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*Ri*I
G*J*R1*I
G*J*Ri*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*Ri*I
G*J*Ri*I
G*J*Ri*I
G*J*Ri*I
G*J*R1*I
G*J*R1*I
J Ri
I
A
1
1
2
1
1
2
1
3
1
3
1
1
1
1
2
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
G
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
J
Ri
1
1
1
2
2
2
3
3
3
1
1
1
1
1
1
2
1
2
1
1
2
3
3
3
1
1
2
2
2
2
I
A
2
3
3
2
3
3
2
3
3
1
1
1
1
2
2
2
2
2
2
2
3
3
3
3
2
3
3
3
1
1
1
2
3
2
3
3
3
3
2
3
3
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
Adj
Adj P
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
0.9974
1.0000
0.9999
0.9990
1.0000
0.9995
0.0004
0.9995
<.0001
1.0000
1.0000
1.0000
0.9960
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.7765
1.0000
1.0000
1.0000
1.0000
1.0000
0.9870
0.4396
1.0000
1.0000
0.9939
0.9990
1.0000
1.0000
0.9910
0.0021
197
Table A.8 The ANOVA table for six machine problem by considering minimization of
makespan for time spent comparison
Type 3 Tests of Fixed Effects
Num
DF
Den
DF
G
J
Ri
2
0
2
0
2
0
A
2
I
1
4
135
135
Effect
G*J
G*R1
G*A
G*I
J*R1
J*A
J*I
R1*A
R1*I
A*I
G*J*R1
G*J*A
G*J*I
G*R1*A
G*R1*I
G*I*I
J*R1*A
J*R1*I
0
4
135
135
2
4
4
2
4
2
2
0
8
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
4
8
4
4
8
4
4
16
8
8
8
8
G*J*R1*A*I
0
135
135
135
135
135
8
4
R1*A*I
G*J*R1*A
G*J*R1*I
G*J*A*I
G*R1*A*I
0
4
16
F Value
Pr > F
153.58
43.91
10.50
84.52
5.09
36.80
10.27
75.91
4.86
4.97
20.72
5.99
5.03
2.25
1.30
4.91
17.67
5.76
4.92
2.06
1.23
2.33
2.19
1.61
0.54
2.28
2.01
1.56
0.49
0.65
0.62
<.0001
<.0001
0.0004
<.0001
0.0256
<.0001
<.0001
<.0001
0.0092
0.0039
<.0001
0.0032
0.0008
0.1090
0.2761
0.0008
<.0001
0.0003
<.0001
0.0891
0.3009
0.0223
0.0729
0.1749
0.7062
0.0055
0.0493
0.1430
0.8595
0.7322
0.8636
Table A.9 Test of effect slices for six machine problem by considering minimization of
makespan for time spent comparison
Differences of Least Squares Means
Effect
G
G*J*Ri*A
1
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
1
G*J*Ri*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
J Ri
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
Al
G
1
1
2
1
1
2
1
1
2
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
1
2
1
1
1
1
1
1
1
2
1
1
2
1
1
2
1
1
2
1
1
2
1
1
2
1
1
2
1
1
2
1
1
2
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
J
Ri_A_I
1
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
3
3
1
1
1
1
1
1
1
2
2
2
1
1
1
3
1
3
1
3
2
1
2
2
2
2
2
2
2
1
1
2
2
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
Adju
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
AdjP
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
199
Table A.9 (Continued) Test of effect slices for six machine problem by considering
minimization of makespan for time spent comparison
Differences of Least Squares Means
Effect
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*Ri*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G
2
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
J Ri
A
2
2
1
1
2
1
1
2
1
1
2
1
1
2
1
1
2
1
1
2
1
1
2
2
2
2
2
2
2
2
2
2
2
1
1
1
3
3
3
3
3
3
3
3
3
3
3
3
3
3
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
2
I
G
3
3
3
3
3
3
3
3
3
3
3
3
2
3
1
1
1
1
1
2
2
2
3
3
3
2
1
1
3
3
3
3
3
2
3
1
1
3
3
3
2
J
Ri
A
2
3
3
3
3
3
3
3
3
3
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
2
3
3
2
3
3
2
3
3
2
3
1
1
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
1
2
2
2
3
3
2
3
3
2
3
3
3
3
3
I Adju
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
Adj P
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.1564
0.1576
1.0000
<.0001
<.0001
1.0000
<.0001
<.0001
1.0000
<.0001
<.0001
1.0000
<.0001
<.0001
1.0000
<.0001
<.0001
0.9995
200
Table A.9 (Continued) Test of effect slices for six machine problem by considering
minimization of makespan for time spent comparison
Effect
G J Ri I G _J Ri -I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
1
1
adju Adj P
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
1
1
3
3
1
1
3
2
3
1
3
1
3
1
1
1
1
1
1
1
1
1
1
1
2
2
2
3
3
3
1
1
1
1
2
3
1
2
2
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
2
2
2
2
1
2
3
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1
2
2
3
2
2
3
1
2
3
1
2
3
3
1
2
3
1
2
3
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.0013
1.0000
<.0001
201
Table A. 10 The ANOVA table for two machine problem by considering minimization
of sum of the completion times for algorithm comparison
Type 3 Tests of Fixed Effects
Effect
G
J
Num
DF
Den
DF
2
0
2
0
Rl
2
0
A
2
I
1
135
135
G*J
G*Rl
4
0
4
0
G*A
4
135
135
G*I
J*R1
J*A
2
4
4
2
R1*A
Rl*I
A*I
G*J*Rl
G*J*A
G*J*I
4
2
2
8
0
8
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
4
G*Rl*A
8
G*Rl*I
4
G*A*I
4
J*R1*A
8
J*R1*I
J*A*I
4
R1*A*I
G*J*R1*A
4
4
16
G*J*R1*I
G*J*A*I
8
G*R1*A*I
8
J*R1*A*I
G*J*R1*A*I
0
135
135
135
135
135
8
8
16
F Value
Pr > F
55.66
20.87
0.49
3.31
2.81
7.13
0.61
2.87
1.57
0.90
0.48
1.87
0.11
1.97
0.08
0.19
0.60
1.33
0.10
2.26
0.12
0.41
0.90
0.27
0.16
0.42
1.04
0.41
0.22
0.42
0.35
<.0001
<.0001
0.6189
0.0394
0.0960
0.0005
0.6584
0.0256
0.2110
0.4798
0.7475
0.1579
0.9803
0.1439
0.9231
0.9901
0.7776
0.2606
0.9991
0.0661
0.9751
0.9125
0.4653
0.8998
0.9581
0.9750
0.4063
0.9149
0.9873
0.9098
0.9910
202
Table A. 11 Test of effect slices for two machine problem by considering minimization
of sum of the completion times for algorithm comparison
Differences of Least Squares Means
Standard
Effect G
A _G _A Estimate Error
G*A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
G*A
1
1
1
2
1
1
1
3
1
2
1
3
2
1
2
2
2
1
2
3
2
2
2
3
3
1
3
2
3
1
3
3
3
2
3
3
8.6944
55.0833
46.3889
51.8056
30.0278
-21.7778
383.81
0.6389
-383.17
105.74
105.74
105.74
105.74
105.74
105.74
105.74
105.74
105.74
DF
Pr >
135
135
135
135
135
135
135
135
135
0.9346
0.6033
0.6616
0.6250
0.7769
0.8371
0.0004
0.9952
0.0004
ti
Adju
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
Adj
1.0000
0.9999
1.0000
0.9999
1.0000
1.0000
0.0117
1.0000
0.0119
203
Table A.12 The ANOVA table for two machine problem by considering minimization
of sum of the completion times for time spent comparison
Type 3 Tests of Fixed Effects
Num
DF
Den
DF
G
J
Ri
2
0
2
0
2
0
A
2
I
1
135
135
G*J
G*Ri
G*A
G*I
J*R1
J*A
4
0
Effect
J*J
R1*A
R1*I
A*I
G*J*Ri
G*J*A
G*J*I
G*R1*A
G*Ri*I
G*A*I
J*R1*A
J*R1*I
J*A*I
Ri*A*I
G*J*R1*A
G*J*R1*I
G*J*A*I
G*R1*A*I
J*R1*A*I
G*J*R1*A*I
4
4
2
0
135
135
4
0
4
135
135
135
135
135
2
4
2
2
8
0
8
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
4
8
4
4
8
4
4
4
16
8
8
8
8
16
F Value
Pr > F
69.11
22.41
2.32
106.04
0.01
18.24
3.02
79.70
0.07
1.69
23.87
0.08
4.94
1.53
0.78
1.14
19.71
0.05
5.96
1.45
1.01
3.12
1.29
0.49
0.92
2.51
1.77
0.49
0.84
0.62
0.73
<.0001
<.0001
0.1176
<.0001
0.9170
<.0001
0.0351
<.0001
0.9298
0.1821
<.0001
0.9242
0.0009
0.2194
0.4589
0.3714
<.0001
0.9951
<.0001
0.2213
0.4040
0.0029
0.2784
0.7455
0.4539
0.0022
0.0876
0.8603
0.5725
0.7599
0.7551
204
Table A. 13 Test of effect slices for two machine problem by considering minimization
of sum of the completion times for time spent comparison
Effect
G
G*J*R1*A
G*J*Rl*A
G*J*R1*A
G*J*Rl*A
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
G*J*Rl*A
G*J*Rl*A
G*J*Rl*A
G*J*Rl*A
G*J*R1*A
G*J*R1*A
G*J*Rl*A
G*J*R1*A
G*J*R1*A
G*J*Rl*A
G*J*Rl*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Rl*A
G*J*Rl*A
G*J*R1*A
G*J*R1*A
G*J*Rl*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R].*A
G*J*Rl*A
G*J*Rl*A
G*J*R1*A
G*J*R1*A
G*J*Rl*A
G*J*R1*A
G*J*R1*A
G*J*Rl*A
J Ri
A
G
J Ri A
Estimate
DF
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
-367E-14
-368E-14
-711E-17
-305E-14
-466E-14
-161E-14
-269E-14
-318E-14
-49E-14
-195E-14
-237E-14
-419E-15
-127E-14
-269E-14
-141E-14
-419E-15
8.95E-13
1.31E-12
-249E-15
-711E-15
-462E-15
-249E-15
3.75E-12
4E-12
-201E-14
-585E-14
-384E-14
-2.2500
-2.2500
4.69E-13
-7.5000
-8.0000
-0.5000
-10.7500
-10.5000
0.2500
-140.75
-165.75
-25.0000
-40.0000
-65.0000
-25.0000
-20.2500
-24.5000
-4.2500
-36.2500
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
1
2
1
3
1
2
1
1
1
3
2
1
1
1
2
2
2
1
1
1
2
3
2
2
1
1
2
3
3
1
1
1
3
2
3
1
1
1
3
3
3
2
1
1
3
3
2
1
1
1
2
1
2
2
1
1
1
2
1
3
2
1
2
1
2
1
3
2
2
1
1
2
2
2
2
2
1
1
2
2
3
2
2
2
1
2
2
3
2
3
1
1
2
3
2
2
3
1
1
2
3
3
2
3
2
1
2
3
3
3
1
1
1
3
1
2
3
1
1
1
3
1
3
3
1
2
1
3
1
3
3
2
1
1
3
2
2
3
2
1
1
3
2
3
3
2
2
1
3
2
3
3
3
1
1
3
3
2
3
3
1
1
3
3
3
3
3
2
1
3
3
3
2
1
1
1
2
1
1
2
2
1
1
1
2
1
1
3
2
1
1
2
2
1
1
3
2
1
2
1
2
1
2
2
2
1
2
1
2
1
2
3
2
1
2
2
2
1
2
3
2
1
3
1
2
1
3
2
2
1
3
1
2
1
3
3
2
1
3
2
2
1
3
3
2
2
1
1
2
2
1
2
2
2
1
1
2
2
1
3
2
2
1
2
2
2
1
3
2
2
2
1
2
2
2
2
2
2
2
1
2
2
2
3
2
2
2
2
2
2
2
3
2
2
3
1
2
2
3
2
2
2
3
1
2
2
3
3
2
2
3
2
2
2
3
3
2
3
1
1
2
3
1
2
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
t Value Pr >ItI
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
-0.00
0.00
0.00
-0.00
-0.00
-0.00
-0.00
0.00
0.00
-0.00
-0.00
-0.00
-0.03
-0.03
0.00
-0.10
-0.11
-0.01
-0.14
-0.14
0.00
-1.88
-2.2].
-0.33
-0.53
-0.87
-0.33
-0.27
-0.33
-0.06
-0.48
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.9761
0.9761
1.0000
0.9204
0.9151
0.9947
0.8861
0.8887
0.9973
0.0623
0.0286
0.7390
0.5941
0.3869
0.7390
0.7873
0.7441
0.9548
0.6291
Adjus
T-K
T-K
T-K
T-K
T--K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
205
Table A. 13 (Continued) Test of effect slices for two machine problem by considering
minimization of sum of the completion times for time spent comparison
Effect
G
J Ri
A
G
G*J*Rl*A
G*J*Ri*A
G*J*Ri*A
G*J*Ri*A
G*J*Rl*A
G*J*Ri*A
G*J*Rl*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
1
1
2
2
2
1
2
1
1
2
1
1
2
1
1
2
1
1
2
2
2
2
2
2
2
2
2
3
3
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
3
3
3
3
3
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
3
3
1
1
1
2
2
2
3
3
3
3
3
1
3
3
3
1
2
3
3
1
1
1
3
3
3
3
3
1
1
2
2
2
3
3
3
2
1
1
1
1
1
2
2
2
3
3
3
1
1
2
3
1
3
3
3
3
3
3
3
3
3
3
3
3
1
2
2
1
1
2
1
1
2
J Ri A
3
3
3
3
3
3
3
3
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
3
3
2
3
3
2
3
3
2
3
3
2
3
3
1
2
3
3
2
3
1
3
2
2
2
3
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
1.
1
1
1
2
2
2
3
3
3
Estimate
DF
-54.0000 135
-17.7500 135
-91.7500 135
-128.50 135
-36.7500 135
-31.2500 135
-41.0000 135
-9.7500 135
-27.7500 135
-24.7500 135
3.0000 135
-122.00 135
-123.25 135
-1.2500 135
-120.50 135
-123.00 135
-2.5000 135
-401.25 135
-342.75 135
58.5000 135
-388.00 135
-414.25 135
-26.2500 135
-503.50 135
-749.25 135
-245.75
135
-727.00
135
-360.25
135
366.75
135
-1119.50 135
-1346.25 135
-226.75 135
-828.50 135
-1169.25 135
-340.75 135
t Value Pr >t
-0.72
-0.24
-1.23
-1.72
-0.49
-0.42
-0.55
-0.13
-0.37
-0.33
0.04
-1.63
-1.65
-0.02
-1.61
-1.64
-0.03
-5.36
-4.58
0.78
0.4721
0.8130
0.2226
0.0885
0.6244
0.6771
0.5849
0.8966
0.7115
0.7415
0.9681
0.1056
0.1021
0.9867
0.1099
0.1028
0.9734
<.0001
<.0001
0.4361
<.0001
<.0001
0.7265
<.0001
-5.18
-5.53
-0.35
-6.72
-10.01 <.0001
-3.28 0.0013
-9.71 <.0001
-4.81 <.0001
4.90 <.0001
-14.95 <.0001
-17.98 <.0001
-3.03 0.0030
-11.06 <.0001
-15.61 <.0001
-4.55 <.0001
Adjus
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-I(
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
206
Table A.14 The ANOVA table for three machine problem by considering minimization
of sum of the completion times for algorithm comparison
Type 3 Tests of Fixed Effects
Effect
G
J
Ri
R2
A
I
G*J
G*Rl
Num
DF
Den
DF
2
2
2
2
2
0
0
0
1
4
0
405
405
0
0
J*
4
4
4
2
4
4
4
2
Rl*R2
4
0
R1*A
4
2
4
2
2
405
405
405
405
405
8
8
0
0
8
405
405
G*R2
G*A
G*I
J*Rl
J*R2
J*A
R1*I
R2*A
R2*I
A*I
G*J*Rl
G*J*R2
G*J*A
G*J*I
G*Ri*R2
G*R1*A
G*Rl*I
4
8
8
J*Rl*R2
4
8
4
4
8
J*R1*A
8
G*R2*A
G*R2*I
G*A*I
J*R1*I
J*R2*A
J*R2*I
J*A*I
R1*R2*A
R1*R2*I
Rl*A*I
R2*A*I
G*J*Rl*R2
G*J*Rl*A
G*J*R1*I
G*J*R2*P
4
8
4
4
8
4
4
4
16
16
8
16
0
405
405
0
0
405
405
0
405
405
405
405
405
0
405
405
405
405
405
405
405
405
405
0
405
405
405
F Value
Pr > F
221.45
73.91
1.88
1.35
3.17
38.41
17.39
0.61
0.70
1.19
21.90
1.25
0.16
0.75
2.27
1.22
0.55
0.41
0.15
1.88
0.07
0.83
0.18
0.37
3.17
1.52
0.58
0.87
0.22
0.80
0.14
0.52
0.88
1.45
0.29
2.75
0.79
0.42
3.46
0.19
0.65
1.03
0.98
2.87
0.25
<.0001
<.0001
0.1591
0.2653
0.0429
<.0001
<.0001
0.6534
0.5933
0.3157
<.0001
0.2975
0.9563
0.5577
0.1046
0.3093
0.6973
0.6665
0.9607
0.1534
0.9323
0.5772
0.9934
0.9367
0.0139
0.1635
0.7947
0.4803
0.9873
0.5265
0.9691
0.8390
0.5349
0.2163
0.9687
0.0281
0.5299
0.9082
0.0085
0.9434
0.6248
0.4335
0.4770
0.0041
0.9988
207
Table A.14 (Continued) The ANOVA table for three machine problem by considering
minimization of sum of the completion times for algorithm comparison
Type 3 Tests of Fixed Effects
Effect
Num
DF
Den
DF
F Value
Pr > F
G*J*R2*I
8
405
3.62
0.0004
G*J*A*I
G*Rl*R2*A
8
16
405
405
1.18
0.28
0.3087
0.9978
G*R1*R2*I
8
405
3.23
0.0014
G*Rl*A*I
8
G*R2*A*I
8
405
405
405
405
405
405
405
405
405
405
405
405
405
405
0.14
0.78
0.53
1.07
0.21
0.88
0.88
0.52
1.19
0.36
1.12
0.68
0.47
0.48
0.9973
0.6249
0.9292
0.3820
0.9898
0.5319
0.5346
0.9870
0.2738
0.9894
0.3372
0.8116
0.9594
0.9927
J*Rl*R2*A
J*R1*R2*I
J*R1*A*I
J*R2*A*I
R1*R2*A*I
G*J*R1*R2*A
G*J*R1*R2*I
G*J*R1*A*I
G*J*R2*A*I
G*R1*R2*A*I
J*R1*R2*A*I
G*J*R1*R2*A*I
16
8
8
8
8
32
16
16
16
16
16
32
Table A. 15 Test of effect slices for three machine problem by considering
minimization of sum of the completion times for the algorithm comparison
Effect
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G
1
1
1
1
1
1
1
1
J El
1
1
1
1
1
1
1
1
1
1
1
2
2
2
3
3
3
1
1
1
1
1
2
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
1
2
1
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
2
1
1
2
2
2
2
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
1
1
1
2
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
1
2
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
3
2
2
2
2
3
3
3
3
3
2
3
3
3
1
1
2
2
3
3
3
R2
I
_G
1
1
J
El _R2
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
3
3
3
3
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1
2
1
3
1
2
3
1
2
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1
2
2
2
2
2
2
1.
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
3
1
2
3
1
2
3
1
2
3
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1.
1
1
1
1
1
1
1
1
1
2
2
2
3
3
3
1
1
1
2
2
1
1
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
1
1
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
1
1
1
2
2
2
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
1
2
3
I
Adju
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
2 T-K
2 T-K
2 T-K
2 T-K
2 T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2 T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
T-K
2
2
2
2
2
2
2
2
2
2
2
2
2
Adj
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
209
Table A. 15 (Continued) Test of effect slices for three machine problem by considering
minimization of sum of the completion times for the algorithm comparison
Effect
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G*J*R1*R2*I
G*J*R1*R2*I
G*J*Ri*R2*I
G
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
J Ri
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
2
3
2
3
2
3
1
1
1
2
3
3
3
3
3
3
3
3
3
2
2
3
3
3
R2
I
_G
1
2
3
1
2
3
1
2
3
1
2
3
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
3
1
1
1
1
2
3
1
2
3
1
1
1
1
1
1
1
2
3
1
2
2
1
1
1
1
1
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
J
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
Ri
R2
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
1
2
3
1
2
3
I
Adju
2
3
1
2
3
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1
2
2
2
2
2
2
2
3
1
2
3
1
2
3
1
2
3
1
3
1
2
3
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
Adj
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.2505
1.0000
1.0000
<.0001
1.0000
1.0000
0.0161
1.0000
0.0008
1.0000
1.0000
1.0000
<.0001
1.0000
1.0000
0.1452
0.9465
0.9991
210
Table A. 16 The ANOVA table for three machine problem by considering minimization
of sum of the completion times for time spent comparison
Type 3 Tests of Fixed Effects
Num
DF
Den
DF
F Value
Pr > F
R2
2
2
2
2
0
0
0
0
73.71
21.50
0.18
1.09
<.0001
<.0001
0.8377
0.3412
A
2
405
140.35
<.0001
I
1
G*J
G*Rl
405
G*R2
4
4
4
0
0
0
G*A
4
405
G*I
J*R1
J*R2
2
405
4
4
J*A
R1*R2
R1*A
Effect
G
J
Ri
0
0
0.05
16.69
0.22
0.91
110.12
0.17
0.18
0.25
0.8320
<.0001
0.9249
0.4613
<.0001
0.8425
0.9502
0.9088
4
405
29.84
<.0001
405
R1*I
2
4
4
2
405
405
0.49
2.45
0.89
0.55
0.6125
0.0523
0.4722
0.5765
R2*A
4
405
3.24
0.0124
R2*I
A*I
2
2
G*J*R1
G*J*R2
405
405
8
8
0
0
0.02
0.41
0.31
0.18
0.9840
0.6632
0.9596
0.9932
G*J*A
0
8
405
23.36
<.0001
405
G*Rl*I
4
8
8
4
405
405
0.62
2.18
0.86
0.42
0.6460
0.0371
0.5533
0.7910
G*R2*A
G*J*I
G*R1*R2
G*R1*A
0
8
405
2.85
0.0043
G*R2*I
G*A*I
J*R1*R2
J*R1*A
4
4
405
405
J*R1*I
4
4
4
405
405
405
405
405
0.03
0.48
1.27
0.53
0.35
0.94
0.35
0.14
0.9981
0.7487
0.2728
0.8336
0.8435
0.4813
0.8445
0.9670
8
405
3.27
0.0013
4
4
4
405
405
405
0.37
0.18
0.05
1.12
0.75
0.38
0.77
0.47
0.8325
0.9486
0.9951
0.3546
0.7426
0.9291
0.7201
0.8778
J*R2*A
J*R2*I
R1*R2*A
R1*R2*I
R1*A*I
R2*A*I
G*J*R1*R2
G*J*Rl*A
G*J*Ri*I
G*J*R2*A
G*J*R2*I
8
8
8
16
16
8
16
8
0
0
405
405
405
405
211
Table A. 16 (Continued) The ANOVA table for three machine problem by considering
minimization of sum of the completion times for time spent comparison
Type 3 Tests of Fixed Effects
Effect
G*J*A*I
G*Rl*R2*A
G*Rl*R2*I
G*Rl*A*I
G*R2*A*I
J*R1*R2*A
J*Rl*R2*I
J*Rl*A*I
J*R2*A*I
R1*R2*A*I
G*J*R1*R2*A
G*J*R1*R2*I
G*J*R1*A*I
G*J*R2*A*I
G*R1*R2*A*I
J*R1*R2*A*I
G*J*Rl*R2*A*I
Num
DF
Den
DF
F Value
Pr > F
8
405
405
405
405
405
405
405
405
405
405
405
405
405
405
405
405
405
0.16
2.97
0.27
0.22
0.06
1.71
0.36
0.31
0.19
0.23
1.56
0.36
0.32
0.24
0.26
0.19
0.19
0.9961
0.0001
0.9744
0.9877
0.9999
0.0424
0.9417
0.9628
0.9916
0.9859
0.0287
0.9902
0.9947
0.9991
0.9985
0.9998
1.0000
16
8
8
8
16
8
8
8
8
32
16
16
16
16
16
32
212
Table A.17 The ANOVA table for six machine problem by considering minimization
of sum of the completion times criterion for the algorithm comparison
Type 3 Tests of Fixed Effects
Effect
Num
DF
Den
DF
2
0
2
0
G
J
Ri
2
0
A
2
I
1
27
135
135
T(G*J*Ri)
0
G*J
G*Ri
G*A
G*I
4
J*Ri
4
0
J*A
4
135
135
135
135
135
0
4
135
135
2
2
R1*A
R1*I
A*I
G*J*R1
G*J*A
G*J*I
G*Ri*A
G*R1*I
G*A*I
J*R1*A
J*Ri*I
4
2
2
8
0
8
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
4
8
4
4
8
4
4
Ri*A*I
G*J*R1*A
G*J*Rl*I
G*J*A*I
G*Rl*A*I
J*R1*A*I
G*J*Rl*A*I
0
4
4
16
8
8
8
8
16
F Value
Pr > F
165.81
39.29
62.32
0.48
17.09
15.37
27.44
0.39
0.39
16.95
6.75
0.25
1.62
0.04
9.93
0.13
3.75
0.22
1.70
0.04
9.43
0.11
0.06
3.85
0.07
0.14
0.07
3.59
0.06
0.14
0.14
0.15
<.0001
<.0001
<.0001
0.6189
<.0001
<.0001
<.0001
0.8135
0.8135
<.0001
0.0007
0.9104
0.2008
0.9970
<.0001
0.8811
0.0045
0.9869
0.1545
1.0000
<.0001
0.9782
0.9999
0.0054
0.9910
0.9659
1.0000
0.0008
0.9999
0.9970
0.9970
1.0000
213
Table A. 18. Test of effect slices for six machine problem by considering minimization
of sum of the completion times for the algorithm comparison
Effect
G*J*Rl*I
G*J*Ri*I
G*J*Ri*I
G*J*Ri*I
G*J*Ri*I
G*J*Ri*I
G*J*Ri*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
G*J*R1*I
J Ri I
G
J
_G
I
Ri
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
1
1
1
2
2
1
3
1
1
1
3
2
2
1
1
1
2
1
2
2
2
1
1
2
2
2
2
3
1
1
2
3
2
3
1
1
1
3
1
2
3
2
1
3
2
2
3
3
1
3
3
2
2
1
1
2
1
2
2
1
3
1
1
1
1
1
2
2
1
2
2
2
2
2
2
Estimate
DF t Value Pr>ItlAdju
2
1
3
2
2
2
1
2
2
2
2
2
3
1
1
1
2
2
3
2
3
1
1
2
3
1
2
2
3
2
1
2
3
2
2
2
3
3
1
2
3
3
2
3
1
1
1
3
1
1
2
3
1
2
1
3
1
2
2
3
1
3
1
3
1
3
2
3
2
1
1
3
2
1
2
7120.00
135 10.11
<.0001 T-K
3
2
2
1
3
2
2
2
-335.50
135 -0.48
0.6347
3
3
1
1
3
3
1
2
4116.67
135
5.84
<.0001 T-K
3
3
2
1
3
3
2
2
136.00
135
0.19
0.8472
3
3
3
1
3
3
3
2
3119.67
135
4.43
<.0001 T-K
2
1
1
2
2
1
2
2
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
0.00
0.04
-0.02
0.00
-0.01
0.01
-0.00
-0.04
0.21
0.00
0.00
-0.05
0.13
0.04
0.16
-0.00
-0.45
0.03
1.76
0.55
0.60
1.0000
0.9706
0.9840
1.0000
0.9945
0.9955
1.0000
0.9691
0.8334
1.0000
1.0000
0.9610
0.8946
0.9668
0.8702
1.0000
0.6551
0.9734
0.0815
0.5829
0.5486
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
8.73E-12
26.0000
-14.1667
2.57E-il
-4.8333
4.0000
-822E-13
-27.3333
148.50
2E-i1
1.GGE-il
-34.5000
93.5000
29.3333
115.33
-768E-13
-315.33
23.5000
1236.50
387.83
423.67
T-K
T-K
214
Table A. 19 The ANOVA table for six machine problem by considering minimization
of sum of the completion times criterion for time spent comparison
Type 3 Tests of Fixed Effects
Num
DF
Den
DF
G
J
Ri
2
0
2
0
2
0
A
2
135
135
Effect
I
1
G*J
G*Rl
G*A
G*I
J*R1
J*A
4
4
0
4
2
4
4
135
135
2
R1*A
Rl*I
A*I
G*J*R1
G*J*A
G*J*I
G*R1*A
G*R1*I
G*A*I
J*R1*A
J*Ri*I
J*A*I
R1*A*I
G*J*R1*A
G*J*R1*I
G*J*A*I
G*R1*A*I
J*R1*A*I
G*J*R1*A*I
4
2
2
0
0
135
135
135
135
135
8
0
8
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
4
8
4
4
8
4
4
4
16
8
8
8
8
16
F Value
Pr > F
16.59
4.22
3.93
24.83
2.51
3.38
3.30
20.72
2.66
1.64
5.07
0.60
5.73
2.19
0.53
1.54
4.32
0.64
4.93
2.28
0.48
2.80
0.91
0.14
0.59
2.68
0.91
0.13
0.62
0.32
0.36
<.0001
0.0255
0.0317
<.0001
0.1152
0.0229
0.0253
<.0001
0.0737
0.1925
0.0008
0.5485
0.0003
0.1157
0.5897
0.1913
0.0001
0.6359
<.0001
0.0642
0.7514
0.0066
0.4622
0.9663
0.6696
0.0010
0.5113
0.9977
0.7585
0.9570
0.9896
215
Table A.20 Test of effect slices for six machine problem by considering minimization
of sum of the completion times for time spent comparison
Effect
G*J*Ri*A
G*J*Rl*A
G*J*Rl*A
G*J*Rl*A
G*J*Rl*A
G*J*R1*A
G*J*Rl*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G
J Ri A _G J Ri A
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
1
2
1
1
2
2
1
3
1
1
1
3
1
3
2
2
1
1
2
1
1
2
1
2
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
1
2
2
2
2
3
1
2
3
1
2
3
2
3
1
1
3
1
1
3
1
2
3
2
1
3
2
1
3
2
2
3
3
1
3
3
1
3
3
2
2
1
1
1
2
1
1
1
2
1
1
2
1
2
1
2
1
2
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
2
2
1
1
1
1
1
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
1
1
1
2
1
2
1
2
1
2
1
1
2
2
2
1
3
1
2
1
3
1
2
1
3
2
2
2
1
1
2
2
1
1
2
2
1
2
2
2
2
1
2
2
2
1
2
2
2
2
2
2
3
1
2
2
3
1
2
2
3
2
2
3
1
1
2
3
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
1
1
1
1
2
2
2
2
2
2
2
2
2
3
3
2
3
1
2
2
3
1
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
3
3
1
1
1
2
2
2
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
2
3
3
3
1
1
1
3
2
3
3
Estimate
8.35E-14
-341E-15
-425E-15
2.87E-12
l.25E-12
-162E-14
-853E-16
-102E-14
-938E-15
-0.2500
1.58E-12
0.2500
-0.2500
6.21E-12
0.2500
4.16E-12
-135E-14
-551E-14
1.17E-12
-432E-14
-549E-14
4.38E-12
1.25E-12
-313E-14
1.08E-12
1.8E-11
1.69E-11
-13.2500
-15.0000
-1.7500
-13.0000
-12.2500
0.7500
-1.7500
-2.7500
-1.0000
-1.5000
-2.5000
-1.0000
-82.5000
-92.7500
-10.2500
-4.5000
-3.5000
1.0000
-57.5000
-58.7500
-1.2500
DF
t Value Pr>ItI adju
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.00
0.04
-0.04
-0.01
-0.04
-0.04
0.00
-0.01
-0.01
-0.00
-0.00
-0.01
-0.00
-0.24
-0.27
-0.03
-0.01
-0.01
0.00
-0.17
-0.17
-0.00
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
135
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.9994
1.0000
0.9994
0.9994
1.0000
0.9994
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
1.0000
0.9691
0.9650
0.9959
0.9696
0.9714
0.9982
0.9959
0.9936
0.9977
0.9965
0.9942
0.9977
0.8092
0.7860
0.9761
0.9895
0.9918
0.9977
0.8663
0.8634
0.9971
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
216
Table A.20 (Continued) Test of effect slices for six machine problem by considering
minimization of sum of the completion times for time spent comparison
Effect
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*Ri*A
G*J*Ri*A
G*J*R1*A
G*J*Ri*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*R1*A
G*J*IU*A
G*J*R1*A
G
2
2
2
2
2
2
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
3
J
Ri
AGJ
3
3
3
3
3
3
1
1
1
1
1
1
2
2
2
3
3
3
1
1
2
1
1
2
1
1
1
1
1
2
1
1
2
1
1
2
1
1
1
1
1
2
2
2
2
2
2
2
3
3
3
23
23
23
23
23
23
31
31
31
31
31
31
31
31
31
2
2
1
1
1
2
2
2
2
2
3
3
1
1
2
3
2
32
32
32
32
32
32
32
3
3
3
3
3
3
3
3
3
1
1
1
2
1
1
2
3
3
3
1
2
1
2
2
3
3
1
1
3
2
33
33
33
33
33
33
2
1
1
2
3
3
2
2
3
3
3
Ri A Estimate
2 2 -255.50
2 3 -264.50
2 3 -9.0000
3 2 -260.75
3 3 -132.00
33
128.75
33
3.5000
1 2 -116.75
1 3 -136.00
1 3 -19.250
2 2 -154.00
2 3 -160.00
2 3 -6.0000
3 2 -59.000
3 3 -55.500
1 2 -115.25
1 3 -163.75
1 3 -48.500
2 2 -1361.5
2 3 -2293.3
2 3 -931.75
3 2 -3568.5
3 3 -2350.8
3
3 1217.75
1 2 -87.250
1 3 -226.25
1 3 -139.00
2 2 -3148.5
2 3 -3576.8
2 3 -428.25
3 2 -1403.3
3 3 -1371.0
3 3 32.2500
DF
t Value Pr>t adju
135 -0.75 0.4549
135 -0.78 0.4392
135 -0.03 0.9790
135 -0.76 0.4457
135 -0.39 0.6992
135 0.38 0.7063
135 -0.34 0.7326
135 -0.40 0.6906
135 -0.06 0.9551
135 -0.45 0.6522
135 -0.47 0.6396
135 -0.02 0.9860
135 -0.17 0.8629
135 -0.16 0.8709
135 0.01 0.9918
135 -0.34 0.7359
135 -0.48 0.6318
135 -0.14 0.8871
135 -3.99 0.0001
135 -6.73 <.0001
135 -2.73 0.0071
135-10.47 <.0001
135 -6.90 <.0001
135 3.57 0.0005
135 -0.26 0.7984
135 -0.66 0.5081
135 -0.41 0.6841
135 -9.23 <.0001
135 -10.5 <.0001
135 -1.26 0.2112
135 -4.12 <.0001
135 -4.02 <.0001
135 0.09 0.9248
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
T-K
217
Appendix B. The Percentage Errors of Schaller et al. (2000) Algorithm
Table B.l The percentage error of Schaller et al. (2000)
algorithm for two machine problem
rJ)
eD
0
0
eD
0
e
0
-
-
1.
0
-
-t
eD
2
280
237
3
4
288
237
0.029
28
0.000
29
171
171
0.000
30
130
130
0.000
31
5
321
321
0.000
32
6
209
209
0.000
33
7
411
0.020
34
354
0.000
9
403
354
264
282
10
152
152
11
527
12
405
13
491
14
25
26
249
437
490
397
396
335
457
346
383
583
330
880
815
27
440
1
8
15
16
17
18
19
20
21
22
23
24
740
0.123
530
0.071
781
0.152
680
780
0.035
936
397
0.090
35
659
495
678
657
733
859
383
768
813
0.059
0.068
36
471
37
521
506
624
0.074
0.000
550
405
0.044
38
39
818
772
0.226
0.000
667
605
515
249
445
0.049
40
641
0.172
0.000
41
547
682
835
0.224
0.018
42
496
397
0.012
43
44
420
0.061
335
488
430
463
647
0.000
0.000
0.068
45
46
786
1176
965
766
817
1064
1066
0.064
0.037
0.198
0.276
985
0.253
1314
0.117
1051
0.089
932
916
0.217
1276
0.199
1300
0.220
1047
0.106
1483
0.104
1063
0.152
0.121
0.209
47
48
49
0.110
50
358
0.085
51
953
937
0.083
52
0.116
53
1286
1115
1435
0.150
1237
0.109
445
0.011
54
1374
1478
0.076
0.243
947
1343
923
218
Table B.2 The percentage error of Schaller et al.(2000) algorithm
for three machine problem
1
221
221
0
31
398
2
481
0.027
32
532
3
303
0.030
5
226
239
6
351
7
436
8
9
268
242
10
291
0.141
33
34
35
36
37
38
39
40
397
4
11
335
494
312
236
242
379
474
268
244
332
367
0.096
41
12
305
352
0.154
13
471
514
14
203
15
16
389
478
17
194
18
408
334
429
405
545
458
373
270
638
550
666
0.018
511
0.034
413
652
437
0.058
671
0.029
42
241
261
0.083
0.091
43
345
357
0.035
210
0.034
554
734
554
0.000
0.049
769
0.048
518
417
617
531
0.025
737
466
422
636
763
475
0.012
0.014
44
45
46
47
48
49
50
0.034
51
52
0.042
53
689
412
634
728
429
662
0.057
0.014
0.021
54
55
615
576
626
629
0.018
0.015
0.007
56
57
719
416
0.255
0.079
573
403
0.020
58
59
475
466
0.120
0.058
424
408
0.000
60
533
570
0.069
26
27
28
29
456
345
428
408
492
200
413
344
435
214
505
448
342
415
459
546
352
453
30
214
214
19
20
21
22
23
24
25
207
498
430
335
409
506
0.044
0.013
0.080
0.087
0.000
0.008
0.029
0.031
0.012
0.030
350
266
571
547
656
494
0.024
0.154
0.066
0.015
0.117
0.005
0.015
0.031
0.035
0.019
0.041
0.044
0.092
0.032
0.142
219
Table B.2 (Continued) The percentage error of Schaller et al.(2000)
algorithm for three machine problem
Cl)
Cl)
Cl)
-
-
,)
©
-
-
1
0
0
D
0
©
-
61
647
724
0.119
91
961
1057
0.100
62
717
778
0.085
92
777
0.076
63
529
585
0.106
93
764
836
864
64
495
694
555
735
0.121
94
95
820
527
933
552
0.138
0.059
384
433
0.128
96
0.104
625
637
0.019
97
878
897
897
1017
0.134
98
596
0.117
99
774
964
65
66
67
68
69
0
-
-
0.131
0.047
70
785
666
890
0.134
100
640
969
1018
856
1047
676
71
716
776
0.084
101
695
771
0.109
72
562
620
0.103
102
754
901
0.195
73
784
787
0.004
103
1260
1351
0.072
74
596
0.022
104
825
0.062
105
839
747
76
636
704
0.107
106
953
889
776
1044
0.060
75
609
876
77
667
739
0.108
107
1061
1183
0.115
0.135
0.106
0.086
0.056
0.039
0.095
78
800
859
0.074
108
1086
1184
0.090
79
1031
1087
0.054
109
1021
1177
0.153
80
643
0.064
110
1166
1288
0.105
81
609
684
676
0.110
111
847
1004
0.185
82
839
0.069
112
791
925
0.169
83
509
897
537
946
1087
0.055
113
741
0.118
0.214
114
0.110
115
663
796
1237
925
816
0.060
116
1205
0.171
117
0.098
118
754
990
830
0.101
951
1074
0.129
84
779
85
979
86
873
87
697
88
902
89
90
961
0.207
1394
0.127
0.092
693
1316
800
865
1003
0.160
119
641
743
0.159
120
920
1093
0.188
0.154
220
Table B.2 (Continued) The percentage error of Schaller et al.(2000)
algorithm for three machine problem
=
D
0
C,
-
-
0
rD
1.
0
Cl)
0
0
z
0
-
121
968
1090
0.126
122
1285
1478
123
1083
1210
124
941
125
r
-
-
.
0
0
0
-
151
1763
1927
0.093
0.150
152
1526
1646
0.079
0.117
153
1411
1654
0.172
1047
0.113
154
1080
1261
0.168
1162
1378
0.186
155
1293
1452
0.123
126
1028
1127
0.096
156
1206
1310
0.086
127
1088
1174
0.079
157
1676
1788
0.067
128
1537
1673
0.088
158
1955
966
1060
0.097
159
1697
130
1307
1456
0.114
160
1558
131
996
1114
0.118
161
1188
2149
1815
1688
1274
0.099
129
132
1369
1584
0.157
162
1530
1695
0.108
133
1545
1706
0.104
134
1077
1167
0.084
135
960
0.081
1131
0.162
137
888
973
1028
1154
0.123
138
1106
1213
0.097
139
1568
1721
0.098
140
1255
1370
0.092
141
1369
0.145
142
1393
1568
1545
143
1242
1376
0.108
144
1423
1722
0.210
145
1252
1402
0.120
146
1415
1620
0.145
147
1170
1324
0.132
148
1284
1460
0.137
149
1296
1448
0.117
150
1503
1677
0.116
136
0.109
0.070
0.083
0.072
221
Table B.3 The Percentage error of Schaller et al.(2002) algorithm
for six machine problem
0
-
0
-
0
0
1.
ri;
0
-
-
=
-
0
r
0
0
t'D
0
0
1
1666
1688
0.013
28
2
1086
1086
0.000
29
3
262
156
287
204
1427
0.095
30
0.308
31
0.026
32
797
0.058
1871
0.004
33
34
1477
0.000
35
4
5
6
7
8
1391
753
1863
1477
9
195
231
0.185
10
390
450
0.154
11
1442
1481
0.027
12
1765
1803
0.022
13
786
1179
786
1184
0.000
36
37
38
39
40
0.004
41
520
440
1598
0.130
0.149
17
460
383
1530
42
43
44
18
1159
1215
0.048
19
3062
3121
0.019
20
2631
0.014
21
430
22
506
23
24
2748
2084
25
26
2172
2459
2667
488
582
2854
2097
2195
2564
27
553
608
14
15
16
0.044
0.039
45
46
47
48
49
50
0.006
51
0.011
52
0.043
0.099
0.135
0.150
619
2500
2189
3266
2945
667
949
3730
2898
5255
3927
694_
721
3763
4726
4455
5396
1033
957
5225
5572
4911
720
0.163
2584
0.034
2257
3376
2967
726
0.031
0.034
0.007
0.088
1061
0.118
3868
0.037
2978
0.028
5498
4042
866
0.046
830
3834
4884
4557
5558
0.151
0.033
1272
0.231
1123
0.173
5412
0.036
5780
5130
5215
1180
0.029
0.248
0.019
0.023
0.030
0.037
0.045
1297
0.142
53
5027
1047
1136
5829
6072
0.042
54
5802
5999
0.034
0.037
0.127
Download