
Applied Mathematics and Computation 251 (2015) 404–422
Optimization of economic lot scheduling problem with
backordering and shelf-life considerations using calibrated
metaheuristic algorithms
Maryam Mohammadi a,b, Siti Nurmaya Musa a,b, Ardeshir Bahreininejad a,b,c,*
a Centre of Advanced Manufacturing and Material Processing (AMMP Centre), Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
b Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
c Faculty of Engineering, Institut Teknologi Brunei, Gadong BE1440, Brunei
Article info
Keywords:
Economic lot scheduling problem
Genetic algorithm
Particle swarm optimization
Simulated annealing
Artificial bee colony
Taguchi
Abstract
This paper addresses the optimization of the economic lot scheduling problem, where multiple items are produced on a single machine in a cyclical pattern. It is assumed that each item can be produced more than once in every cycle, each product has a shelf-life restriction, and backordering is permitted. The aim is to determine the optimal production rate, production frequency, cycle time, and a feasible manufacturing schedule for the family of items, and to minimize the long-run average costs. Efficient search procedures are presented to obtain optimum solutions by employing four well-known metaheuristic algorithms, namely the genetic algorithm (GA), particle swarm optimization (PSO), simulated annealing (SA), and artificial bee colony (ABC). Furthermore, to make the algorithms more effective, the Taguchi method is employed to tune the various parameters of the proposed algorithms. The computational performance and statistical optimization results show the effectiveness and superiority of the metaheuristic algorithms over other methods reported in the literature.
© 2014 Elsevier Inc. All rights reserved.
1. Introduction
The economic lot scheduling problem (ELSP) is concerned with scheduling the production of multiple items in a single-facility environment on a continual basis. The ELSP typically imposes the restriction that only one item can be produced at a time, so that the machine has to be stopped before commencing the production of a different item. A production scheduling problem therefore arises from the need to coordinate the setups and production runs of the various items [1]. The first published work in this area dates back to 1958, when the problem of scheduling the production of different items on a single manufacturing center was introduced, with the aim of meeting demand without backorders while minimizing the long-run average costs, namely setup and holding costs [2]. Over the past half century, a considerable amount of research on this problem has been published, with several directions of extension. Various heuristic approaches have been suggested using the basic period, common cycle, or time-varying lot size methods [3–5].
* Corresponding author at: Centre of Advanced Manufacturing and Material Processing (AMMP Centre), Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia.
E-mail addresses: maryam_mohammadi@siswa.um.edu.my (M. Mohammadi), nurmaya@um.edu.my (S.N. Musa), bahreininejad@um.edu.my, ardeshir.bahreininejad@itb.edu.bn (A. Bahreininejad).
http://dx.doi.org/10.1016/j.amc.2014.11.035
0096-3003/© 2014 Elsevier Inc. All rights reserved.
In a multiple-product manufacturing environment where lots have diverse production times and sizes, the main purpose is to determine an optimal cycle time in which all the items are produced. When the optimum cycle time exceeds the time an item can be stored, known as its shelf-life, the cycle time must be decreased to less than or equal to the shelf-life to ensure a feasible schedule [6]. If products are stored for more than a specified period of time, some of them may spoil. Recently, the shelf-life constraint has been taken into account through three options, namely cycle time reduction, production rate reduction, and simultaneous cycle time and production rate reduction. This constraint adds another feature to the ELSP; the ELSP with shelf-life has received relatively little attention in the literature.
Silver [7] examined the ELSP with a shelf-life constraint in his rotational cycle model, considering the two options of cycle time decrement and production rate reduction, and showed that it is more cost-efficient to reduce the production rate. Sarker and Babu [6], on the other hand, reversed the presumption implied by Silver [7], and showed that if a machine operating cost is added to the model, decreasing the cycle time can be more efficient. However, Sarker and Babu [6] noted that such conclusions depend on problem parameters such as production time cost, setup time, setup cost, and shelf-life. Viswanathan and Goyal [8] developed a mathematical model to obtain the optimum cycle time and manufacturing rate in a family production context with a shelf-life constraint while disallowing backordering. Viswanathan and Goyal [9] later modified their earlier model by considering backorders, and showed that backordering can decrease the total cost significantly.
The ELSP is categorized as NP-hard [10], which makes checking every feasible schedule in a reasonable amount of computational time impractical. Most researchers have therefore focused on generating near-optimal repetitive schedules. Recently, metaheuristic algorithms have been applied effectively to the ELSP.
Khouja et al. [11] solved the ELSP under the basic period approach using a genetic algorithm (GA), and showed that the GA is well suited to the problem. Moon et al. [1] utilized a hybrid genetic algorithm (HGA) to solve the single-facility ELSP based on the time-varying lot size method, and compared the performance of the HGA with the well-known Dobson's heuristic (DH) [12]; numerical experiments showed that the proposed algorithm outperformed DH. Jenabi et al. [13] solved the ELSP in a flow shop setting employing an HGA and simulated annealing (SA). Computational results indicated the superiority of the proposed HGA over SA with respect to solution quality, although the SA algorithm outperformed the HGA with respect to the required computational time.
Chatfield [14] developed a genetic lot scheduling (GLS) procedure to solve the ELSP under the extended basic period approach. The procedure was applied to the well-known Bomberger benchmark problem [15], and compared with the GA proposed by Khouja et al. [11]; GLS regularly produced lower-cost solutions. Chandrasekaran et al. [16] investigated the ELSP with the time-varying lot size approach and sequence-independent/sequence-dependent setup times, applying GA, SA, and ant colony optimization. Raza and Akgunduz [17] examined the ELSP with the time-varying lot size approach and conducted a comparative study of heuristic methods, namely DH, HGA, neighborhood search heuristics (NS), tabu search (TS), and SA, on the problems of Bomberger [15] and Mallya [18]. Their results showed that SA outperformed DH, HGA, and NS; SA also converged faster than TS while producing solutions of similar quality. Sun et al. [19] solved the ELSP in a multiple identical machines environment applying a GA under the common cycle policy.
Most studies investigating different aspects of the ELSP assume that each item is produced exactly once in the rotational production cycle. Goyal [20] and Viswanathan [21] suggested that manufacturing an item more than once per cycle may be more economical. Although this policy might yield a lower-cost solution, it may bring about an infeasible schedule due to overlapping production times of different items. Yan et al. [22] tackled the problem of schedule infeasibility when each item may be produced more than once per cycle, and showed that advancing or delaying the manufacturing start times of some items can lead to a feasible production plan using a two-stage heuristic algorithm. Initially, their model was simplified by omitting the schedule adjustment constraints and costs. Then, in the case of an infeasible schedule, a modification procedure was employed using a greedy heuristic that sequentially selects activities, one at a time, for either advancing or delaying the manufacturing start time, until a practicable schedule is achieved. However, solving large-scale instances of the proposed ELSP model appears to be out of reach with the approach suggested by Yan et al. [22] because of its complexity and computational effort. Furthermore, in Yan et al. [22] the items' production frequencies were restricted to three in order to keep the problem practical and limit the computational effort. Thus, efficient heuristic methods are required to solve the proposed NP-hard model for the large problems usually found in real-world situations.
So far, however, there has been no research on the ELSP with multiple products having unknown production frequencies that considers both backordering and shelf-life constraints using metaheuristic methods. In this paper, the model proposed by Yan et al. [22] is investigated with minor modifications to the cost function and constraints in order to obtain a feasible schedule and minimize the long-run average cost using optimization engines, namely the GA, particle swarm optimization (PSO), SA, and artificial bee colony (ABC) algorithms. Numerical examples from the literature are solved and optimized to demonstrate the advantages of the metaheuristic approaches over other methods suggested in the literature.
The remainder of this paper is organized as follows. Section 2 discusses the problem background, objective function, and problem constraints. Section 3 explains the applied methods. Section 4 details the parameter tuning. Section 5 presents numerical examples and reports the obtained results. Finally, conclusions are given in Section 6.
2. Problem description and mathematical formulation
The ELSP in this study is considered in a single machine environment producing N items in a manufacturing cycle of length T, where backordering is permitted for any of the items, and each item has a specified shelf-life. Moreover, the restrictive assumption of earlier studies that every item is produced exactly once per cycle is removed: each item may be produced more than once in every cycle. The purpose is to find the optimal production rate, production frequency, cycle time, and batch size of each item, as well as a feasible manufacturing schedule for the family of items. Furthermore, the long-run average cost is minimized, including setup, production, holding, and backorder costs, in addition to the adjustment cost if production time conflicts occur between the products in a cycle.
The mathematical model studied throughout the paper is based on the following assumptions and notations:
Assumptions:
(a) Each item has a deterministic and constant demand.
(b) Each item has a deterministic and constant setup time.
(c) Backordering is permissible for each item.
(d) There is a shelf-life restriction for each item.
(e) There is a machine operating cost per time unit.
(f) The first in first out (FIFO) rule is considered for the inventory transactions.
Indices:
i        Product (i = 1, 2, . . ., N)
N        Total number of products
w, w'    Production batch (w, w' = 1, 2, . . ., $F=\sum_{i=1}^{N}f_i$)
j        Batch number (j = 1, 2, . . ., $f_i$)

Parameters:
$D_i$        Demand rate for item i
$P_i^{max}$  Maximum possible production rate for item i
$R_i^{min}$  Ratio of demand to maximum production rate for item i
$L_i$        Shelf-life of item i
$U_i$        Setup time per batch
$S_i$        Setup cost for item i per time unit (machine operating cost is excluded during setup)
$H_i$        Inventory holding cost for item i per time unit
$B_i$        Backorder cost for item i per time unit
O            Machine operating cost per unit time

Variables:
$p_i$          Production rate for item i
$r_i$          Ratio of demand to production rate for item i
$f_i$          Production frequency for item i per cycle
$t_i$          Cycle time for item i
$k_i$          Production start time for item i
$a_i^j$        Production start time advancement for item i in its jth production batch
$b_i^j$        Production start time delay for item i in its jth production batch
$\omega_i^w$   1 if item i is produced in the wth batch; 0 otherwise
$b_i$          Amount of item i backordered in each cycle
$q_i$          Production batch size of item i
$m_i$          Maximum backorder level for item i
$v_i$          Shelf time of item i
T              Entire production cycle time
C(T)           Total cost for the entire production cycle time
Using the above notations, the mathematical model for the ELSP with various production frequencies, backordering, and
shelf-life constraints is presented as follows:
2.1. Cost function
The annual cost of machine and product setup, and of production, is given by:

$$\frac{1}{T}\sum_{i=1}^{N}(S_i+OU_i)f_i+\frac{1}{T}\sum_{i=1}^{N}O\,\frac{D_i}{p_i}\,T \qquad (1)$$
The annual cost of holding the products, considering backorders, is given by:

$$\sum_{i=1}^{N}\frac{H_i\left[q_i\left(1-\frac{D_i}{p_i}\right)-b_i\right]^2}{2q_i\left(1-\frac{D_i}{p_i}\right)} \qquad (2)$$
The annual backordering cost is given by:

$$\sum_{i=1}^{N}\frac{B_i\,b_i^2}{2q_i\left(1-\frac{D_i}{p_i}\right)} \qquad (3)$$
The adjustment cost in case of overlapping production times of items is given by:

$$\sum_{i=1}^{N}\frac{H_i+B_i}{2}\,D_i\left[\sum_{j=1}^{f_i}\left(a_i^j\right)^2+\left(1-\frac{D_i}{p_i}\right)\sum_{j=1}^{f_i}\left(b_i^j\right)^2\right] \qquad (4)$$
Details of the adjustment cost can be found in Yan et al. [22].
Adding Eqs. (1)–(4), the total annual cost, C(T), is given by:

$$C(T)=\frac{1}{T}\sum_{i=1}^{N}(S_i+OU_i)f_i+\frac{1}{T}\sum_{i=1}^{N}O\,\frac{D_i}{p_i}\,T+\sum_{i=1}^{N}\frac{H_i\left[q_i\left(1-\frac{D_i}{p_i}\right)-b_i\right]^2}{2q_i\left(1-\frac{D_i}{p_i}\right)}+\sum_{i=1}^{N}\frac{B_i\,b_i^2}{2q_i\left(1-\frac{D_i}{p_i}\right)}+\sum_{i=1}^{N}\frac{H_i+B_i}{2}\,D_i\left[\sum_{j=1}^{f_i}\left(a_i^j\right)^2+\left(1-\frac{D_i}{p_i}\right)\sum_{j=1}^{f_i}\left(b_i^j\right)^2\right] \qquad (5)$$
Substituting $q_i=t_iD_i$, $D_i/p_i=r_i$, $T=t_if_i$, and $b_i=D_it_i(1-r_i)\frac{H_i}{H_i+B_i}$ [9] into Eq. (5) and then simplifying, the total yearly cost for a group of N items, where there are $f_i$ production batches for item i, is obtained as:
$$C(T)=\frac{1}{T}\sum_{i=1}^{N}\left[(S_i+OU_i)f_i+Or_it_if_i+t_i^2f_i\,\frac{D_iH_i}{2}(1-r_i)\frac{B_i}{H_i+B_i}\right]+\sum_{i=1}^{N}\frac{H_i+B_i}{2}\,D_i\left[\sum_{j=1}^{f_i}\left(a_i^j\right)^2+(1-r_i)\sum_{j=1}^{f_i}\left(b_i^j\right)^2\right] \qquad (6)$$
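As an illustration, the simplified cost function of Eq. (6) can be evaluated directly. The sketch below assumes per-item parameters are passed as plain Python lists; the function and argument names (`total_cost`, `P_rate`, etc.) are illustrative and not taken from the paper:

```python
def total_cost(D, P_rate, t, f, S, U, H, B, O, a, b):
    """Long-run average cost per Eq. (6), a sketch.

    D, P_rate, S, U, H, B: per-item parameters; t: per-item cycle times;
    f: per-item production frequencies; O: machine operating cost;
    a[i][j], b[i][j]: start-time advancements/delays of each batch.
    """
    N = len(D)
    cost = 0.0
    for i in range(N):
        r = D[i] / P_rate[i]              # r_i = D_i / p_i
        T = t[i] * f[i]                   # entire cycle time T = t_i f_i
        setup_prod = (S[i] + O * U[i]) * f[i] + O * r * t[i] * f[i]
        hold_back = (t[i] ** 2) * f[i] * (D[i] * H[i] / 2.0) * (1 - r) * B[i] / (H[i] + B[i])
        cost += (setup_prod + hold_back) / T
        # adjustment cost for advanced/delayed batch start times
        adjust = (H[i] + B[i]) / 2.0 * D[i] * (
            sum(aj ** 2 for aj in a[i]) + (1 - r) * sum(bj ** 2 for bj in b[i]))
        cost += adjust
    return cost
```

With zero adjustments (all `a`, `b` equal to 0), only the setup, production, holding, and backorder terms remain.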
2.2. Constraints
For a feasible solution, the total setup time and production time of the N products cannot exceed the cycle time T. Therefore,

$$\frac{\sum_{i=1}^{N}U_if_i}{1-\sum_{i=1}^{N}r_i}\leq T \qquad (7)$$
Moreover, it is necessary that:

$$\sum_{i=1}^{N}r_i<1 \qquad (8)$$
The adopted production rate of each item must not exceed the maximum possible production rate. Hence:

$$p_i\leq P_i^{max} \quad\text{for } i=1,2,\ldots,N \qquad (9)$$

or,

$$R_i^{min}\leq r_i \quad\text{for } i=1,2,\ldots,N \qquad (10)$$

where $R_i^{min}=D_i/P_i^{max}$.
It is assumed that:

$$\sum_{i=1}^{N}R_i^{min}\leq 1 \qquad (11)$$

Otherwise, there would not be any feasible production schedule.
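Constraints (7) and (8) can be checked cheaply before any schedule is constructed. A minimal sketch, with illustrative names:

```python
def cycle_time_feasible(T, U, f, r):
    """Check constraints (7)-(8): total setup plus production time fits in T.

    U: per-item setup times; f: production frequencies; r: demand/production
    rate ratios. A sketch; names are illustrative.
    """
    rho = sum(r)                  # total utilization, must be strictly below 1
    if rho >= 1:
        return False
    setup_time = sum(Ui * fi for Ui, fi in zip(U, f))
    return setup_time / (1 - rho) <= T
```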
According to Viswanathan and Goyal [9], the shelf-life constraint for item i, before schedule adjustment, can be written as:

$$t_i(1-r_i)\frac{B_i}{H_i+B_i}\leq L_i \quad\text{for } i=1,2,\ldots,N \qquad (12)$$
If item i is produced more than once in a production cycle, the shelf-life constraint, taking into account the production start time advancement or delay of the jth batch of item i (in case of an infeasible schedule), becomes:

$$t_i(1-r_i)\frac{B_i}{H_i+B_i}+a_i^j-b_i^j\leq L_i \quad\text{for } i=1,2,\ldots,N;\; j=1,2,\ldots,f_i \qquad (13)$$
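Constraint (13) reduces to a one-line check per batch. A sketch in the same notation; with the default zero adjustments it recovers constraint (12):

```python
def shelf_life_ok(t_i, r_i, B_i, H_i, L_i, a_ij=0.0, b_ij=0.0):
    """Constraint (13): the shelf time of batch j of item i must not
    exceed the shelf-life L_i. A sketch; names are illustrative."""
    shelf = t_i * (1 - r_i) * B_i / (H_i + B_i) + a_ij - b_ij
    return shelf <= L_i
```

Advancing a batch (`a_ij > 0`) lengthens the time its stock sits in inventory and so tightens the check, while delaying it (`b_ij > 0`) relaxes it.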
The production of an item can commence only after the production of its preceding batch is complete. Hence:

$$k_{I(w)}+\left(J(w)-1\right)t_{I(w)}-a_{I(w)}^{J(w)}+b_{I(w)}^{J(w)}\;\geq\;k_{I(w-1)}+\left(J(w-1)-1\right)t_{I(w-1)}-a_{I(w-1)}^{J(w-1)}+b_{I(w-1)}^{J(w-1)}+r_{I(w-1)}t_{I(w-1)}\quad\text{for } w=2,\ldots,F \qquad (14)$$

where $F=\sum_{i=1}^{N}f_i$. In Eq. (14), I(w) indicates the item to which the wth production batch of a manufacturing cycle belongs.
Therefore:

$$I(w)=\sum_{i=1}^{N}\omega_i^w\, i \quad\text{for } w=1,2,\ldots,F \qquad (15)$$

where

$$\omega_i^w=\begin{cases}1,&\text{if item } i \text{ is produced in the } w\text{th batch}\\0,&\text{otherwise}\end{cases} \qquad (16)$$
In Eq. (14), J(w) denotes the item's batch number. Hence:

$$J(w)=\sum_{w'=1}^{w}\omega_{I(w)}^{w'} \quad\text{for } w=1,2,\ldots,F \qquad (17)$$
Eq. (18) gives the total number of batches, i.e., the production frequency, of each item in a cycle:

$$\sum_{w=1}^{F}\omega_i^w=f_i \quad\text{for } i=1,2,\ldots,N \qquad (18)$$
To prevent the production of different items from overlapping in a cycle, Eq. (19) must hold:

$$\sum_{i=1}^{N}\omega_i^w=1 \quad\text{for } w=1,2,\ldots,F \qquad (19)$$
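Eqs. (15)–(19) can be read as bookkeeping over the binary assignment matrix. A sketch deriving I(w), J(w), and $f_i$ from a given matrix (here a list of lists called `omega`; all names are illustrative):

```python
def batch_maps(omega):
    """Derive I(w), J(w), and f_i from the binary matrix of Eqs. (15)-(18).

    omega[i][w] = 1 iff item i occupies the w-th production batch; assumes
    each column contains exactly one 1, per Eq. (19). A sketch.
    """
    N, F = len(omega), len(omega[0])
    # I(w): the item produced in batch w (0-based indices here)
    item_of_batch = [next(i for i in range(N) if omega[i][w] == 1) for w in range(F)]
    # J(w): running batch number of that item up to and including batch w
    batch_number = [sum(omega[item_of_batch[w]][:w + 1]) for w in range(F)]
    # f_i: production frequency of each item (row sums)
    frequency = [sum(row) for row in omega]
    return item_of_batch, batch_number, frequency
```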
Eq. (20) restricts the completion time of the last batch so that it cannot exceed the entire production cycle time:

$$k_{I(F)}+\left(J(F)-1\right)t_{I(F)}-a_{I(F)}^{J(F)}+b_{I(F)}^{J(F)}+r_{I(F)}t_{I(F)}\leq T \qquad (20)$$
It should be noted that, to attain production feasibility, an item's production start time can be either advanced or delayed, but not both. Therefore:

$$a_i^j\, b_i^j=0 \quad\text{for } i=1,2,\ldots,N;\; j=1,2,\ldots,f_i \qquad (21)$$
The optimum backorder level of each item is expressed by Eq. (22):

$$m_i=t_iD_i(1-r_i)\frac{H_i}{H_i+B_i} \quad\text{for } i=1,2,\ldots,N \qquad (22)$$
Constraints (23) are the non-negativity and integrality constraints:

$$T\geq 0;\quad r_i\geq 0 \;\text{for } i=1,2,\ldots,N;\quad a_i^j,\,b_i^j\geq 0 \;\text{for } i=1,2,\ldots,N,\; j=1,2,\ldots,f_i;\quad f_i>0 \text{ integer for } i=1,2,\ldots,N \qquad (23)$$
3. Solution algorithms
The formulation given in Section 2 is a nonlinear mixed integer programming problem, which makes the model difficult to solve using exact methods. To deal with this complexity and obtain near-optimal results in a reasonable computational time, metaheuristic approaches are widely used; the GA, PSO, SA, and ABC algorithms are explained in the following subsections.
3.1. Genetic algorithms
The genetic algorithm (GA) is a stochastic search method based on the natural evolution process, the fundamentals of which were first established by Holland [23]. Its simplicity and its ability to quickly find reasonable solutions to intricate search and optimization problems have brought about growing interest in GA. The algorithm is based on Darwin's "survival of the fittest" principle and simulates the process of natural evolution. A GA maintains a set of individuals that constitute the population. Every individual in the population is represented by a particular chromosome, which encodes a plausible solution to the problem at hand. Over consecutive iterations, called generations, the chromosomes evolve. During each generation, the fitness value of each chromosome is evaluated. After some chromosomes of the current generation are selected as parents, offspring are produced by the crossover and mutation operators. The algorithm stops when a termination condition is reached. The steps required to solve the proposed model with a GA are explained in the next subsections.
3.1.1. Initial conditions
The initial information required to begin a GA includes:

1. The number of chromosomes kept in each generation, called the population size and denoted by 'Npop'.
2. The probability of applying crossover, known as the crossover rate and represented by 'Pc'.
3. The probability of applying mutation, called the mutation rate and represented by 'Pm'.
4. The maximum number of iterations, denoted by 'max iter'.

Npop, Pc, and Pm are calibrated using the Taguchi method described in Section 4.
3.1.2. Chromosome
One of the most important factors in the effective application of a GA is the creation of an appropriate chromosomal structure. A GA starts by encoding the variables of the problem as finite-length strings called chromosomes. The number of genes in a chromosome equals the number of decision variables. As the model proposed in this paper is a nonlinear problem containing three different types of variables (discrete, continuous, and binary), a real-number representation is applied to reduce this complexity. Matrices X1, X2, and X3 present the general form of the chromosomes:
$$X_1=\begin{pmatrix}p_1&p_2&\cdots&p_N\\t_1&t_2&\cdots&t_N\end{pmatrix} \qquad (24)$$

Matrix X1 contains two rows and N columns. The two elements of each column represent the adopted production rate p (integer) and the cycle time t (floating point) of each item, respectively.
$$X_2=\begin{pmatrix}\omega_1^1&\omega_1^2&\cdots&\omega_1^F\\\omega_2^1&\omega_2^2&\cdots&\omega_2^F\\\vdots&\vdots&\ddots&\vdots\\\omega_N^1&\omega_N^2&\cdots&\omega_N^F\end{pmatrix} \qquad (25)$$
X2 is an N × F binary matrix for the variable $\omega_i^w$, where N is the total number of products and F is the total number of production batches of all items. For a feasible schedule, each column must contain exactly one entry equal to one, with the remaining entries zero (one and zero indicate whether or not the item is produced in that batch of cycle time T). The sum of the values in each row gives the production frequency of the corresponding item.
As products are allowed to have more than one setup per cycle, an infeasible production plan may result. In order to attain feasibility, the production start time of some items can be advanced ($a_i^j$) or delayed ($b_i^j$). However, the start time of a product can be adjusted by either advancing or delaying, but not both. Matrix X3, containing N rows and F columns, represents the chromosome for the floating point variables $a_i^j$ and $b_i^j$.
$$X_3=\begin{pmatrix}a_1^1,b_1^1&a_1^2,b_1^2&\cdots&a_1^F,b_1^F\\a_2^1,b_2^1&a_2^2,b_2^2&\cdots&a_2^F,b_2^F\\\vdots&\vdots&\ddots&\vdots\\a_N^1,b_N^1&a_N^2,b_N^2&\cdots&a_N^F,b_N^F\end{pmatrix} \qquad (26)$$
3.1.3. Initial population
The GA randomly generates an initial population of g chromosomes, where g denotes the population size. Let Xg = {Xg1, Xg2, . . ., Xgd} denote the gth chromosome in the population; each solution Xg is a d-dimensional vector, where d is the number of optimization variables explained in Section 3.1.2. The GA then initializes each variable within its bounds using Eq. (27):

$$X_{gn}=lb_n+y\,(ub_n-lb_n) \quad\text{for } g=1,2,\ldots,N_{pop} \text{ and } n=1,2,\ldots,d \qquad (27)$$

where y is a random number in the range [0, 1], and $lb_n$ and $ub_n$ are the lower and upper bounds of dimension n, respectively. For the integer variables, $X_{gn}$ is rounded.
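Eq. (27) amounts to uniform sampling within per-dimension bounds, with rounding for integer genes. A sketch (function and argument names are illustrative):

```python
import random

def init_population(npop, lb, ub, integer_mask):
    """Random initialization per Eq. (27); integer dimensions are rounded.

    lb, ub: per-dimension bounds; integer_mask[n] is True for integer genes.
    A sketch with illustrative names.
    """
    pop = []
    for _ in range(npop):
        x = []
        for n in range(len(lb)):
            v = lb[n] + random.random() * (ub[n] - lb[n])   # Eq. (27)
            x.append(round(v) if integer_mask[n] else v)
        pop.append(x)
    return pop
```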
3.1.4. Evaluation and constraint handling
Once chromosomes are produced, a fitness value must be assigned to the chromosomes of each generation in order to evaluate them. This evaluation is performed with the objective function given in Eq. (6), which measures the fitness of each individual in the population.

As shown in Section 2.2, the mathematical model of this research contains various constraints, which may lead to the production of infeasible chromosomes. To deal with infeasibility, a penalty policy is applied, which transforms the constrained optimization problem into an unconstrained one. This is achieved by adding to, or multiplying, the objective function value by an amount that depends on the degree of constraint violation in a solution. When a chromosome is feasible, its penalty is set to zero; in case of infeasibility, the penalty coefficient is chosen sufficiently large [24]. Therefore, the fitness of a chromosome equals the sum of the objective function value and the penalties, as shown in Eq. (28), where s represents a solution and C(s) is the objective function value of solution s. The penalty policy is employed in all the metaheuristic algorithms presented in this research.

$$fitness(s)=C(s)+Penalty(s),\qquad Penalty(s)\begin{cases}=0,&\text{if } s \text{ is feasible}\\>0,&\text{otherwise}\end{cases} \qquad (28)$$
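Eq. (28) can be sketched as a higher-order function; the penalty coefficient `rho` below is an assumed large constant, not a value from the paper:

```python
def fitness(solution, cost_fn, violations_fn, rho=1e6):
    """Penalized fitness per Eq. (28): feasible solutions keep their raw
    cost, infeasible ones are penalized in proportion to the total
    constraint violation. A sketch; rho is an assumed coefficient."""
    v = violations_fn(solution)   # sum of constraint violations, 0 if feasible
    return cost_fn(solution) + rho * v
```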
3.1.5. Selection
Selection in a GA determines the flow of the evolutionary process. In each generation, individuals are chosen to reproduce, creating offspring for the new population. Selection therefore determines which individuals, and how many copies of each, are chosen as parent chromosomes. Usually, the fittest individuals have a larger probability of being selected for the next generation. In this research, the "roulette wheel" method is applied for the selection process. The basic idea behind this method is that every individual is given an opportunity to become a parent in proportion to its fitness value. All individuals have a chance of being chosen to reproduce the next generation; clearly, individuals with larger fitness have a higher chance of being selected to form the mating pool. The selection probability, $a_g$, of individual g with fitness $C_g$ is calculated by Eq. (29) [25]:

$$a_g=\frac{C_g}{\sum_{g=1}^{N_{pop}}C_g} \qquad (29)$$

The selection procedure is based on spinning the wheel Npop times, each time selecting a single chromosome for the new population.
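Roulette-wheel selection per Eq. (29) can be sketched as follows (an illustrative implementation that assumes positive fitness values):

```python
import random

def roulette_select(fitnesses):
    """Pick an index with probability proportional to its fitness,
    per Eq. (29). A sketch; assumes all fitness values are positive."""
    total = sum(fitnesses)
    spin = random.random() * total      # spin the wheel once
    acc = 0.0
    for g, fg in enumerate(fitnesses):
        acc += fg
        if spin <= acc:
            return g
    return len(fitnesses) - 1           # guard against float round-off
```

Since the model here minimizes cost, an implementation would typically invert or rescale the penalized cost before applying the wheel, so that lower-cost solutions receive the larger slices.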
3.1.6. Crossover
Crossover is the main operator for generating new chromosomes. It is applied to two parent chromosomes with the predetermined crossover rate (Pc), and produces two offspring by mixing the features of the parents, so that the offspring inherit favorable genes and better chromosomes are created. In this research, the arithmetic crossover operator, which linearly combines the parent chromosome vectors, is used to produce offspring. The two offspring are obtained using Eqs. (30) and (31):

$$offspring(1)=y\cdot parent(1)+(1-y)\cdot parent(2) \qquad (30)$$

$$offspring(2)=y\cdot parent(2)+(1-y)\cdot parent(1) \qquad (31)$$
where y is a random number in the range [0, 1]. For variables p and t, the number of random numbers must equal N, and for variable F it equals one, since F is a single element. For variables $\omega$, a, and b, the random number returns an N × F matrix, where $F=\min\left(F_{offspring(1)},F_{offspring(2)}\right)$ when the offspring produced for variable F have different sizes. For the integer variables p, F, and $\omega$, the offspring values are rounded.
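The arithmetic crossover of Eqs. (30) and (31) for real-valued genes can be sketched as:

```python
import random

def arithmetic_crossover(p1, p2):
    """Arithmetic crossover per Eqs. (30)-(31): the two offspring are
    convex combinations of the two parents. A sketch for real genes."""
    y = random.random()
    c1 = [y * a + (1 - y) * b for a, b in zip(p1, p2)]
    c2 = [y * b + (1 - y) * a for a, b in zip(p1, p2)]
    return c1, c2
```

Because each offspring gene is a convex combination of the parents' genes, genes that start within their bounds remain within them.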
3.1.7. Mutation
Mutation exerts stochastic changes on chromosome genes with probability Pm. It is considered a background operator that maintains genetic diversity within the population. Mutation may help prevent the algorithm from getting stuck at a local minimum or converging prematurely, and it ensures that no irreversible loss of genetic information takes place. In this research, first, a random number in the range [0, 1] is generated. If the number is less than 0.5, Eq. (32) is used to mutate the selected genes; if it is greater than 0.5, Eq. (33) is used. Suppose a particular gene $X_k$ is chosen for mutation; then the value of $X_k$ is changed to the new value $X'_k$ using Eqs. (32) and (33):

$$X'_k=X_k-y\left(1-\frac{q}{max\_iter}\right)(X_k-lb_k) \qquad (32)$$

$$X'_k=X_k+y\left(1-\frac{q}{max\_iter}\right)(ub_k-X_k) \qquad (33)$$

where k is an integer value in the range [1, N], $lb_k$ and $ub_k$ are the lower and upper bounds of the specific gene, y is a random variable in the range [0, 1], and q is the number of the current generation. For the integer variables p and F, the values produced by Eqs. (32) and (33) are rounded.
For the binary variable $\omega$, an integer number in the range [1, N × F] is generated in order to select an element of the matrix $\omega$; if the chosen element is one, it is replaced with zero, and vice versa. For variables a and b, first an integer number in the range [1, N × F] is produced to select the element, and then a random number in the range [0, 1] is generated. If it is less than 0.5, Eq. (34) is used to mutate the selected element; if it is greater than 0.5, Eq. (35) is used:

$$X'_k=X_k-y\left(1-\frac{q}{max\_iter}\right)(X_k-0) \qquad (34)$$

$$X'_k=X_k+y\left(1-\frac{q}{max\_iter}\right)(1-X_k) \qquad (35)$$
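The non-uniform mutation of Eqs. (32) and (33) can be sketched for a single real-valued gene as follows (names are illustrative):

```python
import random

def mutate_gene(xk, lbk, ubk, q, max_iter):
    """Non-uniform mutation per Eqs. (32)-(33): the perturbation shrinks
    as generation q approaches max_iter. A sketch for one real gene."""
    y = random.random()
    step = y * (1 - q / max_iter)
    if random.random() < 0.5:
        return xk - step * (xk - lbk)   # Eq. (32): move toward lower bound
    return xk + step * (ubk - xk)       # Eq. (33): move toward upper bound
```

The factor (1 − q/max_iter) makes early mutations explore widely while late mutations fine-tune; at q = max_iter the gene is left unchanged.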
3.1.8. New population
The fitness values of all members, including parents and offspring, are assessed at this stage. Next, the chromosomes with higher fitness scores are chosen to create a new population. To attain a better solution, the fittest chromosomes must be maintained at the end of this stage. Note that the number of chosen chromosomes must equal Npop.
3.1.9. Termination
The selection and reproduction of parents continue until the algorithm reaches a stopping criterion. The procedure can be ended after a predetermined number of generations, or when no substantial improvement is achieved within a generation. In this study, the procedure terminates when the number of iterations reaches the maximum number of generations, which is set to 3000. Moreover, for fair comparisons, the same number of iterations is used for all the optimizers applied in this paper.
3.2. Particle swarm optimization
Particle swarm optimization (PSO) is a population-based evolutionary method initially introduced by Kennedy and Eberhart [26]. The idea of the procedure was inspired by the social behavior of fish schooling and bird flocking. Similar to the GA, PSO begins its search with a population of individuals positioned in the search space, and explores for an optimum solution by updating generations.

Unlike the GA, PSO has no genetic operators such as crossover and mutation to act on the individuals of the population, and all members of the population are kept during the search process. Instead, PSO relies on the social behavior of the individuals to create new solutions for future generations, exchanging information between the individuals (particles) and the population (swarm). Every particle continuously updates its flying path based on its own best previous experience and on the best previous position acquired by the members of its neighborhood. Moreover, in PSO, all particles take the whole swarm as their neighborhood. Therefore, information is socially shared among the particles of a population, and particles benefit from the experience of their neighbors, or of the whole swarm, during the search procedure [27].
Every particle in the swarm has five individual properties: (i) position, (ii) velocity, (iii) objective function value of the position, (iv) the best position explored so far by the particle, and (v) objective function value of that best position. In each iteration of PSO, the velocity and position of the particles are updated according to Eqs. (36) and (37):

$$V_g(k+1)=WV_g(k)+c_1y_1\left(k_g(k)-v_g(k)\right)+c_2y_2\left(c_g(k)-v_g(k)\right) \qquad (36)$$

$$v_g(k+1)=v_g(k)+V_g(k+1) \qquad (37)$$

where g = 1, 2, . . ., Npop; k denotes the iteration; $V_g$ is the velocity of the gth particle; W is the inertia weight, which controls the impact of the particle's previous velocity on its current velocity and plays an important role in balancing the global and local search ability of PSO; $c_1$ is the cognitive parameter and $c_2$ is the social parameter; $y_1$ and $y_2$ are random numbers in the range [0, 1]; $k_g$ is the best position found by particle g itself; $v_g$ is the current position of particle g; and $c_g$ is the global best position explored so far by the whole swarm.
The fitness value of each particle in the swarm is evaluated and compared with the particle's own best. If the current value is better than the own best, the own best value is set to the current value, and the own best location to the current location. Next, the fitness is compared with the population's overall previous best; if the current value is better than the global best, the global best is set to the current particle's index and value. The new velocity of each particle g is calculated with Eq. (36), and the position of particle g is updated with Eq. (37) by adding the new velocity to the current position. This process continues until the stopping condition is satisfied, namely reaching the maximum number of iterations, equal to 3000.
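One iteration of the update in Eqs. (36) and (37) can be sketched as below; the values of W, c1, and c2 are assumed typical defaults, not the Taguchi-calibrated settings of this paper:

```python
import random

def pso_step(positions, velocities, pbest, gbest, W=0.7, c1=1.5, c2=1.5):
    """One PSO iteration per Eqs. (36)-(37). A sketch: positions and
    velocities are lists of per-particle coordinate lists; pbest holds each
    particle's best position, gbest the swarm's best position."""
    for g in range(len(positions)):
        for n in range(len(positions[g])):
            y1, y2 = random.random(), random.random()
            velocities[g][n] = (W * velocities[g][n]
                                + c1 * y1 * (pbest[g][n] - positions[g][n])
                                + c2 * y2 * (gbest[n] - positions[g][n]))   # Eq. (36)
            positions[g][n] += velocities[g][n]                             # Eq. (37)
    return positions, velocities
```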
3.3. Simulated annealing
Simulated annealing (SA) is an effective stochastic search algorithm for combinatorial and global optimization problems. Kirkpatrick et al. [28] initially proposed the idea of the simulated annealing algorithm, which is inspired by the physical process of cooling molten material into solid form. Based on this procedure, SA explores different areas of the solution space of a problem by annealing from a high to a low temperature. During the search, better solutions are always accepted, while lower-quality solutions are accepted with a nonzero probability related to the temperature in the cooling schedule at that time. This feature helps avoid local minimum traps and reach a better solution. In the beginning, this acceptance probability is large, and it is reduced during execution through a positive parameter, the temperature [29].
The solution representation and evaluation in the SA are similar to those of the GA. The neighborhood search structure is a procedure that generates a new solution by slightly changing the current solution. To define the neighborhood configuration, the mutation operator of the GA, explained in Section 3.1.7, is used to prevent premature convergence of the SA. The main steps of the SA algorithm are described as follows.
3.3.1. Cooling schedule
The system temperature determines the degree of randomness in the search, and it is reduced according to a known plan as the solution procedure progresses. In effect, the temperature controls which part of the solution space can be accepted in each iteration. As the algorithm progresses and the temperature decreases, inferior solutions have less chance of being accepted. The cooling schedule determines the functional form of the change in temperature required in SA. It consists of the initial temperature (E0), the number of iterations at each temperature, the temperature reduction function, and the final temperature (Ef). The initial temperature is the starting point of the temperature computation; it must be sufficiently high to avoid premature convergence. Basically, SA starts at an initial temperature where almost all worsening moves are accepted regardless of the objective function value.
A geometric temperature reduction rule, which is the most commonly utilized decrement rule, is applied for this study. If
the temperature at kth iteration is Ek, then the temperature at (k + 1)th iteration is given by [28]:
Ek+1 = z·Ek    (38)
where z denotes the cooling factor in the range [0, 1].
3.3.2. Main loop of the SA
SA begins with a high temperature and randomly selects an initial solution (s0). Next, in each iteration, a new solution (sn) within the neighborhood of the current solution (s) is generated. In a minimization problem, if the objective function value of the new solution, C(sn), is smaller than the previous value, C(s), the new solution is accepted. Otherwise, the SA algorithm accepts the new solution with the probability given in Eq. (39), in order to avoid being trapped in a local optimum.
a = exp(-ΔC/E)    (39)
where ΔC = C(sn) - C(s), and E is the current state temperature.
3.3.3. Termination condition
The algorithm ends after a pre-set number of iterations without improvement of the current best solution.
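The SA loop described above (geometric cooling per Eq. (38), Metropolis-style acceptance per Eq. (39)) can be sketched as follows. The defaults E0 = 20, Ef = 0.001, and z = 0.95 are the tuned values from Table 2; for brevity this sketch stops when the final temperature is reached rather than after a fixed number of non-improving iterations, and the one-dimensional test function below is purely illustrative.

```python
import math
import random

def simulated_annealing(cost, neighbor, s0, E0=20.0, Ef=0.001, z=0.95):
    """Minimize `cost` via SA: geometric cooling E_{k+1} = z * E_k (Eq. (38))
    and acceptance of worsening moves with probability exp(-dC / E) (Eq. (39))."""
    s, E = s0, E0
    best, best_c = s0, cost(s0)
    while E > Ef:
        sn = neighbor(s)                     # slight change of current solution
        dC = cost(sn) - cost(s)
        if dC < 0 or random.random() < math.exp(-dC / E):
            s = sn                           # accept improving or lucky worsening move
        if cost(s) < best_c:
            best, best_c = s, cost(s)        # track the best solution seen so far
        E *= z                               # geometric temperature reduction
    return best, best_c
```

For example, minimizing (x - 3)^2 from a starting point of 5.0 with a small random-step neighborhood quickly drives the best cost below its initial value.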
3.4. Artificial bee colony
The artificial bee colony (ABC) algorithm was proposed by Karaboga [30] and is inspired by the foraging behavior of a bee colony. It is a population-based search procedure suitable for multi-variable, continuous, multi-modal optimization problems. The ABC algorithm includes four main components, namely food sources, employed bees, onlooker bees, and scout bees. The main steps of the ABC procedure are described in the next subsections.
3.4.1. Initialization of the parameters
The main parameters of the ABC algorithm are the colony size (NB), the number of food sources (NS), the number of trials after which a food source is discarded (limit), and the maximum number of cycles of the search process (MCN). Several combinations of NB and limit have been examined by applying the Taguchi method described in Section 4. The number of food sources (NS) is set to NB/2, and MCN is set to 3000.
The first half of the colony represents the employed bees, and the second half the onlooker bees. Only one employed bee is assigned to each food source position; in other words, the number of food source positions (possible solutions) surrounding the hive equals the number of employed bees [31].
3.4.2. Initialization of the population
The ABC algorithm randomly generates an initial population of NS solutions (g = 1, 2, . . ., NS). The algorithm then enforces the boundaries for each variable using Eq. (27), explained in Section 3.1.3. After initialization, the population of food source positions is subjected to repeated cycles of the search processes of the employed, onlooker, and scout bees (cycle = 1, 2, . . ., MCN).
3.4.3. Employed bee phase
Employed bees are responsible for exploiting the nectar sources. In every cycle of the search process, they revise the positions of the food source in their memory according to the visual information, and measure the nectars’ values (fitness) of the
new positions (reformed solutions). Once all the employed bees finish the search process, they share the information such as
nectar amounts of the food source, distance, and their positions with the onlooker bees. Then, every employed bee explores
its neighborhood food source (Xg) to generate a new food source (Xnew) according to Eq. (40):
Xnew(n) = Xgn + y·(Xgn - Xen)   for e = 1, 2, . . ., NS; n = 1, 2, . . ., d; e ≠ g    (40)
where Xg is a d-dimensional vector, d refers to the number of optimization parameters, and y is a uniformly distributed random number in the range [-1, 1]. For integer variables, Xnew(n) is rounded. Once Xnew is determined, it is evaluated and compared with Xg. If the quality of Xg is worse than that of Xnew, Xg is replaced with Xnew; otherwise, Xg is kept. In other words, a greedy selection process is applied between the candidate and old solutions.
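Eq. (40) together with the greedy selection can be sketched as below. This is an illustrative fragment that assumes a minimization-oriented fitness and follows the common ABC convention of perturbing a single randomly chosen dimension n; the trial-counter bookkeeping mentioned later is indicated by the return value.

```python
import random

def employed_bee_update(foods, fitness, g):
    """Generate a candidate food source via Eq. (40) and keep it only if it is
    better (greedy selection). `foods` is a list of d-dimensional solutions;
    `fitness` returns a value to minimize."""
    d = len(foods[g])
    e = random.choice([i for i in range(len(foods)) if i != g])  # partner e != g
    n = random.randrange(d)                                      # one random dimension
    y = random.uniform(-1.0, 1.0)                                # y in [-1, 1]
    candidate = list(foods[g])
    candidate[n] = foods[g][n] + y * (foods[g][n] - foods[e][n])
    if fitness(candidate) < fitness(foods[g]):
        foods[g] = candidate
        return True   # improvement: the trial counter would be reset to zero
    return False      # no improvement: the trial counter would be incremented
```

By construction, the stored solution's fitness can only stay the same or improve after the update.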
3.4.4. Onlooker bee phase
Onlooker bees wait at the dance area around the hive to make decision for selecting food sources. The length of a dance is
related to the nectar’s quality (fitness value) of the food sources currently being utilized by the employed bees. They evaluate
the nectars’ information acquired from all the employed bees, and select food sources with probability related to their nectars’ amounts. Subsequently, they produce new food information, discard the one with inferior quality compared to the old
one, and share their information on the dance area. An onlooker bee appraises the food information obtained from employed
bees, and chooses a food source (Xg) based on the probability (ag) pertinent to the nectar’s quality. ag is determined using Eq.
(41):
ag = Cg / Σ(g=1..NS) Cg    (41)
where Cg is the fitness value of the gth food source Xg. Once the onlooker has chosen its food source, it makes an adjustment to Xg using Eq. (40). After the new source is evaluated, greedy selection is applied: the onlooker bee either memorizes the new position and forgets the old one, or keeps the old one. If solution Xg cannot be improved, its trial counter is incremented by one; otherwise, the counter is reset to zero. This process is repeated until all onlookers are distributed among the food source sites.
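The fitness-proportional choice in Eq. (41) amounts to roulette-wheel selection; a minimal sketch, assuming Cg is already a maximization-oriented fitness (larger is better) as Eq. (41) implies:

```python
import random

def selection_probabilities(fit):
    """Eq. (41): probability of food source g proportional to its fitness C_g."""
    total = sum(fit)
    return [c / total for c in fit]

def roulette_select(fit):
    """Pick a food source index with probability a_g (fitness-proportional)."""
    r, acc = random.random(), 0.0
    for g, a in enumerate(selection_probabilities(fit)):
        acc += a
        if r <= acc:
            return g
    return len(fit) - 1  # guard against floating-point round-off
```

For fitness values [1, 3], the probabilities are [0.25, 0.75], so the second source is chosen three times as often on average.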
3.4.5. Scout bee phase
Scout bees perform a random search for new food sources and substitute the discarded ones. If a food source (Xg) does not improve for a specified number of trials (limit), the associated employed bee abandons the food source and becomes a scout. The trial counters updated during the search are used to decide whether a source is to be abandoned: if the value of a counter exceeds the control parameter of the ABC algorithm known as limit, the source associated with that counter is assumed to be exhausted and is abandoned. The food source abandoned by its bee is replaced with a new food source discovered by the scout [32], which is generated randomly using Eq. (27). In the basic ABC process, at most one scout goes out to explore a new food source in each cycle. After the new position is specified, a new algorithm cycle begins. At the end of each cycle, the best solution found so far is memorized. The same processes are iterated until the termination condition is reached.
4. Parameter tuning
Since the quality of the solutions obtained by metaheuristic algorithms depends on the values of their parameters, the Taguchi method is utilized to calibrate the parameters of the four proposed algorithms. The Taguchi method is a fractional factorial
experiment introduced by Taguchi as an efficient alternative to full factorial experiments. Taguchi divides the factors affecting the performance (response) of a process into two groups: noise factors (N), which cannot be controlled, and controllable factors (S), such as the parameters of a metaheuristic algorithm, which can be set by the designer. Taguchi methods focus on the level combinations of the control parameters that minimize the effects of the noise factors. In Taguchi's parameter design phase, an experimental design is used to arrange the control and noise factors in the inner and outer orthogonal arrays, respectively. Afterward, the signal-to-noise (S/N) ratio is computed for each experimental combination, and these ratios are analyzed to determine the optimal control factor level combination [33].
Taguchi categorizes objective functions into three groups: "smaller is better", where the objective function is of a minimization type; "nominal is best", where the objective function should have modest variance around its target; and "bigger is better", where the objective function is of a maximization type [34]. Since the objective function of the model in this research is of the minimization type, "the smaller the better" is appropriate, and it is given by Eq. (42):
S/N = -10·log10((1/n) Σ(e=1..n) Ce²)    (42)
where Ce is the objective function value of a given experiment e, and n is the number of times the experiment is performed.
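The "smaller is better" S/N ratio of Eq. (42) is straightforward to compute; a minimal sketch:

```python
import math

def sn_smaller_is_better(costs):
    """Taguchi 'smaller is better' signal-to-noise ratio (Eq. (42)):
    S/N = -10 * log10((1/n) * sum(C_e^2))."""
    n = len(costs)
    return -10.0 * math.log10(sum(c * c for c in costs) / n)
```

For instance, two replications both costing 10 give S/N = -10·log10(100) = -20 dB, and smaller costs always yield a larger (better) S/N ratio.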
In the Taguchi method, the parameters that may have considerable effects on the process output are first chosen for tuning. The parameters of the GA that require calibration are Pc, Pm, and Npop. In the PSO, c1, c2, W, and Npop are the parameters to be tuned. In the SA, the parameters to be tuned are E0, Ef, and z. NB and limit are the ABC parameters to be calibrated. Afterward, by trial and error, the candidate values that produce satisfactory fitness function values are chosen for the experiments.
Table 1 shows the algorithms' parameters, each at three levels, with nine observations represented by the L9 orthogonal array. Fig. 1 shows the mean S/N ratio plot for the different parameter levels of the proposed algorithms. According to Fig. 1, the best parameter levels are those with the highest mean S/N values. Table 2 shows the optimal levels of the parameters for all algorithms.
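Selecting the best level of each factor by its mean S/N ratio, as read off Fig. 1, can be sketched as follows (the factor name and the ratio values below are illustrative, not taken from the experiments):

```python
def best_levels(results):
    """For each factor, pick the level with the highest mean S/N ratio.
    `results` maps factor -> {level: [S/N ratios observed at that level]}."""
    best = {}
    for factor, levels in results.items():
        means = {lvl: sum(v) / len(v) for lvl, v in levels.items()}
        best[factor] = max(means, key=means.get)  # highest mean S/N wins
    return best
```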
5. Results and discussions
In order to illustrate the performance of the four metaheuristic approaches on the proposed ELSP model, a three-product inventory problem is investigated using the data given in Table 3, provided by Yan et al. [22]. The applied optimizers were coded in MATLAB. Furthermore, the data set was also applied to the ELSP models proposed by Sarker and Babu [6] and Viswanathan and Goyal [9].
In order to compare the performances of the four algorithms, 20 different optimization runs were carried out with the parameter settings given in Table 2. The statistical optimization results for the ELSP with machine operating costs of $1000, $750, $500, and $250, using GA, PSO, SA, and ABC, are reported in Tables 4–7, respectively. Table 8 presents the minimum cost found by each algorithm together with other results reported in the literature for different values of annual machine operating cost. Fig. 2 shows the optimization performance of the four metaheuristic methods compared to the previously reported methods in terms of objective function values for different machine operating costs.
It can be seen from Table 8 that all metaheuristic methods found lower total costs than all other results reported in the literature. Notably, the ABC algorithm found the best known solutions and outperformed GA, SA, and PSO for all machine operating costs. The ABC algorithm is also very competitive with the PSO algorithm. ABC obtained approximately 22%, 23%, 33%, and 36% reductions in total cost for machine operating costs of $1000, $750, $500, and $250, respectively, compared to the two-stage heuristic algorithm presented by Yan et al. [22].
It is evident that if a product is allowed to be produced more than once per cycle, a lower total cost is obtained. Hence, the results confirm the earlier findings of Goyal [20], Viswanathan [21], and Yan et al. [22] that producing items more than once in a cycle yields a lower cost. The assumption made by Sarker and Babu [6]
Table 1
GA, PSO, SA, and ABC parameter levels.

Algorithm  Parameter  Level 1  Level 2  Level 3
GA         Pc         0.1      0.2      0.3
           Pm         0.8      0.9      1
           Npop       100      150      200
PSO        c1, c2     0.5, 4   4, 0.5   4, 4
           W          0.55     0.75     0.95
           Npop       100      150      200
SA         E0         10       20       30
           Ef         0.01     0.01     0.001
           z          0.85     0.90     0.95
ABC        NB         50       100      200
           Limit      2        50       100
Fig. 1. The mean S/N ratio plot for each level of the factors for proposed algorithms.
Table 2
Optimal values of the algorithms' parameters.

Algorithm  Parameter  Optimal value
GA         Pc         0.1
           Pm         1
           Npop       200
PSO        c1, c2     4, 0.5
           W          0.95
           Npop       200
SA         E0         20
           Ef         0.001
           z          0.95
ABC        NB         200
           Limit      100
Table 3
Input data for the ELSP model.

Item i  Di    Pimax  Ui      Si   Hi  Bi  Li
1       1000  3000   0.0005  125  3   5   0.20
2       400   2500   0.0010  25   25  50  0.11
3       700   2500   0.0015  75   15  25  0.20
and Viswanathan and Goyal [9], that every item is produced exactly once per cycle, resulted in higher total costs. However, backordering was ignored in Sarker and Babu [6]. According to Viswanathan and Goyal [9], incorporating backordering in the model generates a smaller total cost compared to a model that ignores backorders.
Table 4
Objective function values for machine operating cost $1000.

Problem No.  GA       PSO      SA       ABC
1            2527.53  2492.72  2461.64  2192.76
2            2504.15  2522.53  2493.41  2182.59
3            2506.15  2540.86  2492.65  2180.79
4            2390.01  2402.12  2527.11  2214.37
5            2408.64  2550.91  2462.54  2202.87
6            2394.19  2576.47  2404.70  2205.59
7            2391.95  2437.91  2492.28  2213.42
8            2508.17  2539.63  2568.35  2196.18
9            2510.40  2582.34  2484.05  2208.99
10           2399.38  2489.03  2469.31  2220.02
11           2392.92  2430.08  2459.97  2203.05
12           2399.18  2572.80  2459.11  2201.43
13           2500.21  2442.53  2537.49  2209.84
14           2517.48  2538.24  2479.63  2214.45
15           2523.17  2557.13  2474.26  2197.54
16           2410.20  2617.68  2481.19  2204.94
17           2503.95  2549.69  2452.04  2179.44
18           2399.23  2564.27  2452.30  2181.90
19           2399.14  2478.06  2455.16  2182.08
20           2407.17  2676.40  2476.95  2180.23

The obtained minimum total cost for each metaheuristic algorithm is highlighted in bold.
Table 5
Objective function values for machine operating cost $750.

Problem No.  GA       PSO      SA       ABC
1            2422.42  2123.21  2265.29  2025.14
2            2499.79  2186.54  2229.03  2038.85
3            2509.71  2264.94  2236.06  2045.07
4            2389.58  2131.89  2219.29  2094.71
5            2513.23  2070.63  2252.58  2055.15
6            2347.50  2032.34  2227.72  2139.94
7            2408.23  2170.04  2230.39  2011.99
8            2508.92  2112.61  2234.94  2005.23
9            2509.82  2030.60  2144.06  2019.49
10           2519.89  2100.87  2204.53  2099.13
11           2383.22  2082.48  2247.53  2044.87
12           2334.89  2292.63  2208.56  2017.73
13           2332.33  2176.31  2265.04  2045.51
14           2389.14  2075.39  2289.34  2061.47
15           2512.17  2200.94  2265.10  2045.25
16           2509.97  2131.64  2167.75  2095.14
17           2522.42  2095.21  2209.85  2006.91
18           2499.79  2076.68  2266.46  2086.42
19           2509.71  2111.76  2274.54  2076.16
20           2381.89  2054.77  2204.46  2151.00

The obtained minimum total cost for each metaheuristic algorithm is highlighted in bold.
Fig. 3 presents the trend of objective function values obtained by the applied metaheuristic algorithms for all machine operating costs.
The model's variables obtained by the ABC method for the best solutions presented in Table 8 for all machine operating costs are summarized in Table 9.
The results also show that for each item the storage period has not exceeded its shelf-life, which avoids spoilage of products. According to Goyal [20], the storage time of an item can be decreased by producing that item more frequently in a manufacturing cycle. In the ELSP with a shelf-life constraint, three different options are examined. Option 1: cycle time reduction, in which the cycle time is reduced to the shelf-life period of the product, so that products cannot be stored beyond their shelf-life. Option 2: production rate reduction, in which, with a constant demand rate and a reduced production rate, the inventory build-up is decreased and the products in inventory are used up at a quicker rate; therefore, products are not kept beyond their shelf-life period. Option 3: decreasing the cycle time and the production rate simultaneously, which combines options 1 and 2 to prevent violating the shelf-life constraint.
Sarker and Babu [6] revealed that when the machine operating cost is considered, reducing the cycle time can be more effective. Likewise, the results in this paper indicate that with a rather high machine operating cost, decreasing the cycle
Table 6
Objective function values for machine operating cost $500.

Problem No.  GA       PSO      SA       ABC
1            1799.72  1657.66  1778.97  1803.62
2            1843.18  1681.92  1775.49  1791.95
3            1761.10  1662.25  1742.32  1815.17
4            1944.28  1657.74  1758.53  1812.63
5            1718.38  1682.21  1765.94  1778.20
6            1906.51  1689.01  1773.33  1725.15
7            1848.29  1650.82  1785.82  1726.00
8            1761.33  1686.73  1749.56  1655.05
9            1951.31  1671.35  1772.42  1676.68
10           1829.99  1676.19  1860.71  1661.76
11           1839.90  1656.40  1822.46  1647.54
12           1904.87  1667.31  1827.89  1659.47
13           1782.35  1649.44  1778.97  1655.01
14           1727.24  1667.46  1775.49  1646.67
15           1935.35  1662.98  1742.32  1686.26
16           1881.41  1649.33  1758.53  1681.34
17           1843.61  1696.58  1774.22  1682.70
18           1843.16  1649.51  1771.83  1686.43
19           1856.65  1678.09  1788.49  1682.84
20           1870.40  1661.33  1771.00  1682.86

The obtained minimum total cost for each metaheuristic algorithm is highlighted in bold.
Table 7
Objective function values for machine operating cost $250.

Problem No.  GA       PSO      SA       ABC
1            1590.79  1567.12  1647.02  1404.95
2            1632.11  1576.27  1650.67  1416.38
3            1647.13  1502.71  1651.26  1415.51
4            1686.28  1495.70  1637.26  1417.29
5            1630.24  1531.65  1641.64  1415.18
6            1556.09  1537.23  1629.12  1409.04
7            1638.58  1564.35  1636.37  1410.45
8            1549.02  1501.32  1666.40  1418.08
9            1613.58  1533.51  1648.78  1406.36
10           1625.22  1539.28  1649.96  1406.44
11           1564.83  1524.88  1649.70  1415.15
12           1591.38  1553.69  1664.06  1404.88
13           1602.61  1532.34  1642.39  1425.29
14           1571.09  1558.40  1633.39  1410.86
15           1626.95  1556.86  1644.16  1413.59
16           1590.40  1498.82  1642.95  1425.09
17           1597.84  1537.74  1628.98  1423.00
18           1594.63  1532.23  1665.76  1409.63
19           1540.39  1563.08  1664.87  1412.59
20           1615.41  1553.67  1648.13  1419.20

The obtained minimum total cost for each metaheuristic algorithm is highlighted in bold.
Table 8
Comparisons of the best obtained solutions using four optimization engines and previous reported results.

Operating cost  GA        PSO       SA        ABC       Sarker and Babu [6]  Viswanathan and Goyal [9]  Yan et al. [22]
O = $1000       $2390.01  $2402.12  $2404.70  $2179.44  $3678                $3091                      $2788
O = $750        $2332.33  $2030.60  $2144.06  $2005.23  $3427                $2884                      $2590
O = $500        $1718.38  $1649.33  $1742.32  $1646.67  $3177                $2666                      $2454
O = $250        $1540.39  $1495.70  $1628.98  $1404.88  $2927                $2449                      $2201
time yields a lower cost than decreasing the production rates. Conversely, when the machine operating cost decreases, production rate reduction produces a lower cost. For instance, when the machine operating cost is $250, the production rate of item 1 is reduced from 3000 to 2493, and for items 2 and 3, the production rates are diminished from 2500 to 1651 and 2000, respectively. This shows that production rate reduction generates a lower cost than cycle time reduction.
Moreover, the order of production of the items and their frequencies (wi) for machine operating costs $1000, $750, $500, and $250 are shown in matrices X2(1), X2(2), X2(3), and X2(4), respectively. In Yan et al. [22], the items' production frequencies were
Fig. 2. Graphical representation of the performance comparison between the applied metaheuristic algorithms and previous methods in terms of objective
function values.
Fig. 3. Trend of objective function values by proposed algorithms.
Table 9
Summary of optimization results obtained by ABC algorithm.

Operating cost  Item i  pi    ri    fi  ti     qi   mi  vi    T
$1000           1       2800  0.36  3   0.140  140  34  0.13  0.42
                2       2000  0.20  3   0.140  56   15  0.09
                3       1998  0.35  4   0.105  74   18  0.07
$750            1       2782  0.36  2   0.200  200  48  0.05  0.40
                2       1833  0.22  3   0.133  53   14  0.03
                3       1936  0.36  4   0.100  70   17  0.03
$500            1       2696  0.37  2   0.200  200  47  0.03  0.40
                2       2000  0.20  4   0.100  40   11  0.02
                3       1997  0.35  4   0.100  70   17  0.02
$250            1       2493  0.40  4   0.135  135  30  0.11  0.54
                2       1651  0.24  4   0.135  54   14  0.10
                3       2000  0.35  4   0.135  95   23  0.08
limited to three in order to make the problem practical and limit the computational effort; their proposed algorithm requires a long run time to solve relatively large problems. In this paper, there is no such limitation on the production frequencies. For various machine operating costs, the production schedule appeared to be infeasible, meaning that production of three items with different production frequencies caused production time overlaps. Hence, to eliminate the infeasible schedule, the start times of some items were adjusted, and the total cost was calculated by considering the adjustment cost based on the production time advancement cost. The production start time advancements (aji) (in time units) per batch in each cycle
Fig. 4. Convergence path of fitness function for machine operating cost $1000 by GA.
Fig. 5. Convergence path of fitness function for machine operating cost $1000 by PSO.
Fig. 6. Convergence path of fitness function for machine operating cost $1000 by SA.
Fig. 7. Convergence path of fitness function for machine operating cost $1000 by ABC.
Table 10
ANOVA for objective function values of machine operating cost $1000.

Source                Degree of freedom (DF)  SS       MS      F-test  p-value
Optimization engines  3                       1298448  432816  185.80  0.000
Error                 76                      177043   2330
Total                 79                      1475491
Fig. 8. Box-plot of objective function value criterion comparison for machine operating cost $1000.
time T obtained by the ABC method for machine operating costs $1000, $750, $500, and $250 are shown in matrices X3(1), X3(2), X3(3), and X3(4), respectively.
X2(2) = ( 0 0 1 0 0 1 0 0 0 )
        ( 1 0 0 0 1 0 0 1 0 )
        ( 0 1 0 1 0 0 1 0 1 )

[The remaining matrices X2(1), X2(3), X2(4), and the start-time advancement matrices X3(1)–X3(4), are not legible in the source.]
The convergence paths of the fitness function for machine operating cost $1000 using the applied optimization methods are shown in Figs. 4–7, respectively.
To compare the performance of the metaheuristic algorithms statistically, a one-way analysis of variance (ANOVA) is conducted on the objective function values obtained for machine operating cost $1000. This analysis is executed using Minitab software. Table 10 shows the ANOVA results. The p-value is 0.000, which indicates that the null hypothesis is rejected at the 95% confidence level, meaning that the mean total costs of the four algorithms are not all the same. Fig. 8 supports this conclusion as well.
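The mean squares and F statistic in Table 10 can be reproduced directly from the reported sums of squares and degrees of freedom (MS = SS/DF, F = MS between / MS error):

```python
def anova_from_table(ss_between, df_between, ss_error, df_error):
    """Recompute mean squares and the F statistic from an ANOVA table's
    sums of squares and degrees of freedom."""
    ms_between = ss_between / df_between  # MS = SS / DF
    ms_error = ss_error / df_error
    return ms_between, ms_error, ms_between / ms_error

# Values from Table 10 (machine operating cost $1000)
ms_b, ms_e, F = anova_from_table(1298448, 3, 177043, 76)
```

This yields MS values of 432816 and approximately 2330, and F ≈ 185.80, matching Table 10.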
6. Conclusions
In this paper, the scheduling optimization of a family of items for the ELSP model with shelf-life restrictions, backordering, and multiple setups in a production cycle was presented. The goal was to find the optimum production rate, production frequency, and cycle time for the family of items so that the total cost, including setup, holding, backordering, and adjustment costs, is minimized. The distinguishing feature of this work, compared to previous studies, is allowing production of each item more than once in a cycle, which brings about a significant reduction in the long-run average cost. However, this assumption may lead to an infeasible schedule. Hence, to eliminate production time conflicts and achieve feasibility, the schedule was adjusted by advancing the production start times of some batches. It was shown that allowing multiple setups leads to improved solutions.
The proposed ELSP model may be difficult or impossible to solve using exact methods. Therefore, four metaheuristic algorithms (GA, PSO, SA, and ABC) were used to find near-optimal solutions in a moderate computational time. Furthermore, the Taguchi method was implemented to calibrate the parameters of the metaheuristic methods. To compare the performance of the proposed algorithms, a one-way analysis of variance (ANOVA) was conducted. The results showed that all four algorithms can efficiently solve the proposed model, providing promising solutions compared to those available in the literature. However, the ABC method was superior to the other three optimization methods, as it found the best known solutions and generated lower costs.
Acknowledgment
The authors would like to acknowledge the University of Malaya (UM), Kuala Lumpur, Malaysia for providing the necessary facilities and resources for this research. This research was fully funded by the Ministry of Higher Education, Malaysia under the High Impact Research (HIR) grant number HIR-MOHE-D000001-16001.
References

[1] I. Moon, E.A. Silver, S. Choi, Hybrid genetic algorithm for the economic lot-scheduling problem, Int. J. Prod. Res. 40 (2002) 809–824.
[2] J. Rogers, A computational approach to the economic lot scheduling problem, Manage. Sci. 4 (1958) 264–291.
[3] S.A. Raza, A. Akgunduz, M. Chen, A tabu search algorithm for solving economic lot scheduling problem, J. Heuristics 12 (2006) 413–426.
[4] B. Akrami, B. Karimi, S.M. Moattar Hosseini, Two metaheuristic methods for the common cycle economic lot sizing and scheduling in flexible flow shops with limited intermediate buffers: the finite horizon case, Appl. Math. Comput. 183 (2006) 634–645.
[5] K. Nilsson, A. Segerstedt, Corrections of costs to feasible solutions of economic lot scheduling problems, Comput. Ind. Eng. 54 (2008) 155–168.
[6] B.R. Sarker, P.S. Babu, Effect of production cost on shelf-life, Int. J. Prod. Res. 31 (1993) 1865–1872.
[7] E.A. Silver, Shelf life considerations in a family production context, Int. J. Prod. Res. 27 (1989) 2021–2026.
[8] S. Viswanathan, S.K. Goyal, Optimal cycle time and production rate in a family production context with shelf life considerations, Int. J. Prod. Res. 35
(1997) 1703–1712.
[9] S. Viswanathan, S.K. Goyal, Incorporating planned backorders in a family production context with shelf-life considerations, Int. J. Prod. Res. 38 (2000)
829–836.
[10] W.L. Hsu, On the general feasibility test of scheduling lot sizes for several products on one machine, Manage. Sci. 29 (1983) 93–105.
[11] M. Khouja, Z. Michalewicz, M. Wilmot, The use of genetic algorithms to solve the economic lot size scheduling problem, Eur. J. Oper. Res. 110 (1998)
509–524.
[12] G. Dobson, The economic lot-scheduling problem: achieving feasibility using time-varying lot sizes, Oper. Res. 35 (1987) 764–771.
[13] M. Jenabi, S.M.T. Fatemi Ghomi, S.A. Torabi, B. Karimi, Two hybrid meta-heuristics for the finite horizon ELSP in flexible flow lines with unrelated
parallel machines, Appl. Math. Comput. 186 (2007) 230–245.
[14] D.C. Chatfield, The economic lot scheduling problem: a pure genetic search approach, Comput. Oper. Res. 34 (2007) 2865–2881.
[15] E.E. Bomberger, A dynamic programming approach to a lot size scheduling problem, Manage. Sci. 12 (1966) 778–784.
[16] C. Chandrasekaran, C. Rajendran, O.V. Krishnaiah Chetty, D. Hanumanna, Metaheuristics for solving economic lot scheduling problems (ELSP) using
time-varying lot-sizes approach, Eur. J. Ind. Eng. 1 (2007) 152–181.
[17] A.S. Raza, A. Akgunduz, A comparative study of heuristic algorithms on economic lot scheduling problem, Comput. Ind. Eng. 55 (2008) 94–109.
[18] R. Mallya, Multi-product scheduling on a single machine: a case study, Omega 20 (1992) 529–534.
[19] H. Sun, H.C. Huang, W. Jaruphongsa, Genetic algorithms for the multiple-machine economic lot scheduling problem, Int. J. Adv. Manuf. Technol. 43
(2009) 1251–1260.
[20] S.K. Goyal, A note on effect of production cost of shelf life, Int. J. Prod. Res. 32 (1994) 2243–2245.
[21] S. Viswanathan, A note on effect of production cost on shelf life, Int. J. Prod. Res. 33 (1995) 3485–3486.
[22] C. Yan, Y. Liao, A. Banerjee, Multi-product lot scheduling with backordering and shelf-life constraints, Omega 41 (2013) 510–516.
[23] J.H. Holland, Adaptation in Natural and Artificial Systems: An introductory Analysis with Applications to Biology, Control, and Artificial Intelligence,
University of Michigan Press, 1975.
[24] S.H.R. Pasandideh, S.T.A. Niaki, A genetic algorithm approach to optimize a multiproducts EPQ model with discrete delivery orders and constrained
space, Appl. Math. Comput. 195 (2008) 506–514.
[25] M. Gen, R. Cheng, L. Lin, Network Models and Optimization: Multi Objective Genetic Algorithm Approach, Springer, 2008.
[26] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 1995, pp. 1942–1948.
[27] Y.Y. Chen, J.T. Lin, A modified particle swarm optimization for production planning problems in the TFT Array process, Expert Syst. Appl. 36 (2009)
12264–12271.
[28] S. Kirkpatrick, Optimization by simulated annealing: quantitative studies, J. Stat. Phys. 34 (1984) 975–986.
[29] M. Yaghini, Z. Khandaghabadi, A hybrid metaheuristic algorithm for dynamic rail car fleet sizing problem, Appl. Math. Model. 37 (2013) 4127–4138.
[30] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Techn. Rep. TR06, Erciyes University Press, Turkey, 2005.
[31] Q.K. Pan, M. Fatih Tasgetiren, P.N. Suganthan, T.J. Chua, A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem,
Inf. Sci. 181 (2011) 2455–2468.
[32] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real-parameter optimization, Inf. Sci. 192 (2010) 120–142.
[33] S.M. Mousavi, V. Hajipour, S.T.A. Niaki, N. Aalikar, A multi-product multi-period inventory control problem under inflation and discount: a parameter-tuned particle swarm optimization algorithm, Int. J. Adv. Manuf. Technol. 70 (2013) 1739–1756.
[34] J. Sadeghi, S.M. Mousavi, S.T.A. Niaki, S. Sadeghi, Optimizing a multi-vendor multi-retailer vendor managed inventory problem: two tuned metaheuristic algorithms, Knowl. Based Syst. 50 (2013) 159–170.