Research Journal of Applied Sciences, Engineering and Technology 4(22): 4805-4812, 2012
ISSN: 2040-7467
© Maxwell Scientific Organization, 2012
Submitted: May 01, 2012
Accepted: May 22, 2012
Published: November 15, 2012
A New Cooperative Particle Swarm Optimizer and its Application in Permutation Flow Shop Scheduling Problem
1Desheng Li and 2Na Deng
1Department of Science, Anhui Science and Technology University, Fengyang 233100, China
2Department of Computer, Hubei University of Technology, Wuhan 430068, China
Abstract: In this study, a new variant of Particle Swarm Optimization, Electoral Cooperative PSO (ECPSO), is presented and applied to solving the Permutation Flow Shop Scheduling Problem (PFSSP). Firstly, an electoral swarm, whose candidate particles come from the primitive sub-swarms with variable numbers of votes, is generated by the voting of the primitive sub-swarms and also participates in the evolution of the swarm. Besides, a fast fitness computation method using the processing time matrix of a valid schedule is introduced to accelerate the calculation of the makespan function. Furthermore, in order to prevent trapping in local optima, a disturbance factor mechanism is introduced to check the particles' movements, resetting the original sub-swarms and renewing the electoral swarm. To test the basic use and performance of ECPSO, experiments on function optimization were executed on functions with unfixed and fixed numbers of dimensions. The proposed method was also applied to a well-known benchmark of PFSSP, the Taillard dataset; the results demonstrated the good performance and robustness of ECPSO compared to some versions of PSO.
Keywords: Cooperative evolution, particle swarm optimization, permutation flow shop scheduling system
INTRODUCTION
Particle Swarm Optimization (PSO) is an optimization algorithm that models the social behavior of birds within a flock. The algorithm was first introduced by Kennedy and Eberhart (1995) as a simulation of this behavior, but quickly evolved into one of the most powerful optimization algorithms in the Computational Intelligence field (Kennedy and Eberhart, 1995).
A variation of PSO, Cooperative Particle Swarm Optimization (CPSO), proposed by Van den Bergh and Engelbrecht (2001), can be seen as an improvement over single-swarm PSO, in which the high-dimensional search space is decomposed into smaller-scale ones, similar to the idea of the RELAX/CLEAN algorithm. The difference, however, is that thanks to the information exchange mechanism introduced among particles, obtaining more accurate estimates no longer requires repeated iterations. Compared to basic single-swarm PSO, both robustness and precision are improved and guaranteed.
The key idea of CPSO is to divide the n-dimensional vectors into k sub-swarms. In each iteration, the solution is updated based on the k sub-swarms rather than on the original one. When the particles in one sub-swarm complete a search along some component, their latest best position is combined with those of the other sub-swarms to generate a whole solution.
The Flow Shop Scheduling Problem (FSSP) is a typical combinatorial optimization problem with a strong engineering background: finding the optimal processing sequence and times of jobs on machines under resource constraints. In a basic variation of FSSP, the Permutation FSSP (PFSSP), an additional constraint is attached: all jobs must enter the machines in the same sequence order. Its goal is likewise to find a job permutation that minimizes a specific performance criterion (usually makespan or total flowtime). The PFSSP with makespan minimization is usually denoted Fm|prmu|Cmax, where m is the number of machines, prmu indicates that only permutation schedules are permitted and Cmax denotes the optimization criterion. Compared to the original FSSP, it has a smaller search space, n!, than the (n!)^m of sequencing jobs in FSSP.
Since Fm|prmu|Cmax with m ≥ 3 has been proven to be strongly NP-hard (Garey et al., 1976), exact algorithms can hardly be designed to solve it (Dimopoulos et al., 2000; Valente et al., 2005). Many heuristic methods are used instead, such as GA, SA, EP, TS and so forth. Recently, several approaches based on the PSO algorithm have been developed to solve the PFSSP, and the experimental results show that they are more efficacious than algorithms based on constructive heuristics such as NEH (Nawaz et al., 1983). However, these algorithms still suffer from premature convergence and are easily trapped in local optima.
To overcome these shortcomings, some variations of PSO have been proposed using techniques such as Variable Neighborhood Search (VNS) (Tasgetiren et al., 2004), Iterative Greedy (IG) (Ye and Liu, 2011) and Cooperative Evolution (CE) (Yu et al., 2009).

Corresponding Author: Desheng Li, Department of Science, Anhui Science and Technology University, Fengyang 233100, China

Fig. 1: Cooperative mechanism of ECPSO
In the literature of Tasgetiren et al., (2004), a hybrid PSO
algorithm, PSOVNS was proposed based on VNS, which
adopted a random key rule to construct position of particle
mapping to job scheduling. Y. Tian and D. Liu suggested
a hybrid meta-heuristic PSO, HDCPSO, to minimize the
makespan of PFSSP (Ye and Liu, 2011). In the method,
the job destruction and construction operations in IG
algorithm are used to mutate the particles to reduce the
probability of premature of swarm. Yu et al. have
developed an improved cooperative Particle Swarm
Optimization, ICPSO, to solve PFSSP, which use both
greed and random approaches with a policy of
synthetically learning method in their research study (Yu
et al., 2009).
In this study, an Electoral Cooperative Particle Swarm Optimization (ECPSO), based on primitive sub-swarms and an electoral swarm, is presented to solve PFSSP. The electoral swarm is generated by the voting of the primitive sub-swarms and also participates in the evolution of the swarm; its candidate particles come from the primitive sub-swarms with variable numbers of votes. Moreover, several optimization strategies are employed to avoid falling into local optima, improve diversity and achieve better solutions.
METHODOLOGY
The proposed cooperative particle swarm optimizer for PFSSP:
Electoral cooperative mechanism: In this study, we present a new cooperative swarm optimization algorithm named ECPSO. Firstly, we discuss the dynamics of particles in the swarm, which differ from those of plain PSO and conventional cooperative PSO algorithms. The movement equations can be formalized as follows:

V_id^new = ω × V_id^old + C1 × Rand() × (P_id^best − P_id) + C2 × Rand() × (P_gid^best − P_id) + C3 × Rand() × (P̂_gid^best↑id − P_id)   (1)

P_id^new = P_id^old + V_id^new   (2)
Compared to the CPSO algorithm, we add an electoral global component C3 × Rand() × (P̂_gid^best↑id − P_id) to denote the impact of the electoral swarm, where P̂_gid^best is the global best position in the electoral swarm and P̂_gid^best↑id expresses the vector component that is the projection of P̂_gid^best onto the corresponding sub-swarm. Owing to this component, the particles in each sub-swarm update their global best positions by Eq. (1); these are the results associated with the minimal fitness value of their local best positions and the global best positions of the electoral swarm.

The principle of the electoral cooperative mechanism is depicted in Fig. 1, in which it can be clearly seen that three parts, namely the local best position (orange particles), the global best position in each sub-swarm (blue particles) and that of the electoral swarm (purple particles), all participate in the evaluation of the fitness function together with each particle's own position. Note that the members of the electoral swarm are voted in from the primitive sub-swarms, with the population varying dynamically across generations of the iteration.
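As a concrete illustration, the movement equations (1)-(2) can be sketched as follows. This is a minimal sketch; the function name and the parameter defaults (ω = 0.7, C1 = C2 = C3 = 1.4) are illustrative assumptions, not values prescribed by the study:

```python
import numpy as np

def ecpso_move(pos, vel, p_local, p_sub, p_elect_proj,
               w=0.7, c1=1.4, c2=1.4, c3=1.4, rng=None):
    """One ECPSO movement step per Eq. (1)-(2): besides the usual
    personal-best and sub-swarm-best attraction terms, a third term
    pulls each particle toward the projection of the electoral
    swarm's global best onto this sub-swarm's component."""
    if rng is None:
        rng = np.random.default_rng()
    vel_new = (w * vel
               + c1 * rng.random(pos.shape) * (p_local - pos)
               + c2 * rng.random(pos.shape) * (p_sub - pos)
               + c3 * rng.random(pos.shape) * (p_elect_proj - pos))
    return pos + vel_new, vel_new
```

With all three attractors at the particle's own position and zero velocity, the particle stays put, which matches the fixed-point behavior implied by Eq. (1).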
The function b shown in Eq. (3) performs exactly this: it takes the best particle from each of the other sub-swarms and concatenates them, splicing in the current particle from the current sub-swarm u at the appropriate position. Based on this function, the compositions of P_id^best, P_gid^best and P̂_gid^best can be calculated by Eq. (4)-(6):

b(u, Z) = (S_1.P_gid^best, ..., S_{u-1}.P_gid^best, Z, S_{u+1}.P_gid^best, ..., S_k.P_gid^best), 1 ≤ u ≤ k   (3)

b(u, S_u.P_id^best) = arg min fitness(b(u, S_u.P_id^best), b(u, S_u.P_id)), 1 ≤ u ≤ k   (4)

b(u, S_u.P_gid^best) = arg min fitness(b(u, S_u.P_id)), 1 ≤ id ≤ s, 1 ≤ u ≤ k   (5)

b(u, S_u.P̂_gid^best) = arg min fitness(b(u, S_u.P_id^best), b(u, ES.P_gid^best)), 1 ≤ id ≤ s, 1 ≤ u ≤ k   (6)

Fig. 2: Algorithmic flow schema of ECPSO (initialization → primitive tribes evaluation → election → electoral tribes evaluation and fitness choice → population renewal and swarms update → end)

Main procedure of ECPSO: The main procedure of ECPSO, shown in Fig. 2, is introduced in this section. It consists of five basic phases: initialization; primitive sub-swarm evaluation; electoral swarm election; electoral swarm evaluation and fitness choice; and population renewal with swarm update.

In the initialization phase, the operations include setting up an initial swarm of population S, dividing it into k sub-swarms and initializing all parameters. NEH (Nawaz et al., 1983), a constructive heuristic for solving PFSSP, is often used to initialize the swarm so as to achieve rapid convergence. In our ECPSO, the representative particles in all sub-swarms and in the electoral swarm are built by NEH and the rest are randomly generated. For the representation of a particle, we adopt an approach that transforms the real-number encoding into a natural-number one.

The concrete steps are as follows: first, sort the components of each randomly generated particle by weight into a new sequence; the original positions of the components then form a natural-number sequence. By this tactic, a real-number encoded particle can be converted into a natural-number encoded feasible solution:

Step 1: Divide the population into s swarm-farms, ranging from swarm-farm-1 to swarm-farm-s.
Step 2: Set the related parameters.
Step 3: Construct the corresponding sub-swarms of every swarm-farm, each of which is responsible for optimizing a related vector component.
Step 4: Initialize the representative positions by NEH and the others randomly, together with the velocities of each particle in the sub-swarms.
Step 5: Take the current position of each particle as its local best position and select a random particle as the global best position in each sub-swarm.

In the second phase, the primitive sub-swarms in the swarm-farms perform their respective optimizations to calculate the fitness function and obtain the local best and sub-swarm global best positions. Concretely, the main steps are as follows:

Step 6: Evaluate the objective values of all individuals and determine the best individual, with the best objective value, in each sub-swarm.
Step 7: Construct the full position vector of each particle in each sub-swarm by the greedy method, then repeat this operation with the random method.
Step 8: Compare the two results and choose the better one as the fitness value of the particle.
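The context-assembly function b of Eq. (3) can be sketched as follows. This is a minimal illustration with hypothetical names (`subswarm_bests` standing for the S_j.P_gid^best components); indices are 0-based here rather than 1-based as in the paper:

```python
def b(u, z, subswarm_bests):
    """Context vector of Eq. (3): concatenate the best component of
    every other sub-swarm, splicing the candidate component z in at
    position u, to form one full candidate solution for evaluation."""
    parts = [z if j == u else subswarm_bests[j]
             for j in range(len(subswarm_bests))]
    # flatten the per-sub-swarm component lists into a single vector
    return [x for part in parts for x in part]
```

Equations (4)-(6) then amount to evaluating the fitness of such assembled vectors and keeping the arg-min candidate.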
To integrate local search and global search optimization techniques, a new global best position can also be calculated from a new electoral swarm generated from the primitive sub-swarms. Applying the electoral swarm mechanism improves the diversity of the population to an extent and can also prevent premature convergence of the particles.
The final best position relies not only on the local best and sub-swarm global best positions, but also on the global best position of the electoral swarm.
Moreover, a votes-tuning technique is also introduced to reflect the activity of each primitive sub-swarm. The idea is that if a sub-swarm can be identified as the source of the best fitness, it is considered more active than the other sub-swarms, and its votes to the electoral swarm are increased accordingly; otherwise, its votes are decreased by an exponential decline policy:
Step 9: Get the votes of the primitive sub-swarms and elect the best particles (chosen randomly the first time) from the respective primitive sub-swarms into a new electoral swarm.
Step 10: Evaluate the objective values of all individuals in the electoral swarm and determine the best individual, with the best objective value, in this swarm.
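The exponential votes-tuning policy above can be sketched as follows. The gain of one vote, the halving decay factor and the floor of one vote are illustrative assumptions; the study only specifies that the active sub-swarm's votes increase while the others' decline exponentially:

```python
def update_votes(votes, active_idx, gain=1, decay=0.5, min_votes=1):
    """Votes tuning sketch: the sub-swarm that produced the best
    fitness this generation gains votes; every other sub-swarm's
    vote count decays exponentially (halved here), never dropping
    below min_votes so it keeps some representation."""
    new_votes = []
    for j, v in enumerate(votes):
        if j == active_idx:
            new_votes.append(v + gain)
        else:
            new_votes.append(max(min_votes, int(v * decay)))
    return new_votes
```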
The renewals of all swarms include those of the primitive sub-swarms and of the electoral swarm. The update mechanism of the former is the same as in traditional PSO algorithms, while the refresh of the electoral swarm consists of checking the fitness hold-on generations of the iteration and the re-voting procedure:

Step 11: Check the fitness hold-on generations of the iteration. If the check is true, reset the velocities of all particles.
Step 12: Based on the results of the optimizations of the primitive sub-swarms and the electoral swarm, update the positions and velocities of all particles according to Eq. (1)-(2).
Step 13: Update the iteration counter; if it has not reached the limit, repeat from Step 6; otherwise, stop the procedure and output the best fitness value found.

Formulation of PFSSP: Firstly, we introduce some notation: a finite set J of n jobs {J_i}, i = 1, ..., n, to be processed; a finite set M of m machines {M_k}, k = 1, ..., m, that can perform operations; each job J_i consists of m operations (O_{i,1}, ..., O_{i,m}); the parameter t_{i,k} denotes the processing time of job J_i on machine M_k; C_{i,k} denotes the completion time of job J_i on machine M_k; π = (π_1, π_2, ..., π_n) denotes a permutation of jobs.

Secondly, we comply with the following hypotheses: all jobs must be processed on every machine in the same sequence, given by the indexing of the machines; each operation O_{i,k} has to be processed on machine M_k for an uninterrupted, fixed processing time period, i.e., no operation can be preempted; each machine can process only one job and each job can be processed by only one machine at a time (capacity constraints).

The completion times can then be calculated as follows:

C_{π_1,1} = t_{π_1,1}   (7)

C_{π_j,1} = C_{π_{j-1},1} + t_{π_j,1}, j = 2, ..., n   (8)

C_{π_1,k} = C_{π_1,k-1} + t_{π_1,k}, k = 2, ..., m   (9)

C_{π_j,k} = max{C_{π_{j-1},k}, C_{π_j,k-1}} + t_{π_j,k}, j = 2, ..., n, k = 2, ..., m   (10)

C_max(π) = C_{π_n,m}   (11)

In our study, the goal is to find the permutation of jobs that minimizes the makespan, as in Eq. (12). So the PFSSP with the makespan criterion is to find a permutation π* in the set Π of all permutations such that:

C_max(π*) ≤ C_max(π), ∀π ∈ Π   (12)

Fitness evaluation: To accelerate the fitness evaluation phase of PFSSP in PSO, a particular matrix, called the Valid Schedule Matrix (VSM) (Zhang et al., 2010), is employed to calculate the processing time. In our algorithm, we also adopt this method to accelerate the fitness evaluation. Suppose a PFSSP with n jobs, each owning m operations; the VSM T can be defined as in Eq. (13), where t_{k,π_j} is the processing time of the job in position j of the permutation on machine k, 1 ≤ j ≤ n, 1 ≤ k ≤ m:

T = ( t_{1,π_1}  ...  t_{1,π_n}
      ...
      t_{m,π_1}  ...  t_{m,π_n} )

with its entries t_{i,j} updated in place by:

t_{i,j} = t_{1,1},              if i = 1, j = 1;
t_{i,j} = t_{1,j-1} + t_{1,j},  if i = 1, j ≠ 1;
t_{i,j} = t_{i-1,1} + t_{i,1},  if i ≠ 1, j = 1;
t_{i,j} = t_{i-1,j} + t_{i,j},  if t_{i-1,j} > t_{i,j-1};
t_{i,j} = t_{i,j-1} + t_{i,j},  if t_{i-1,j} < t_{i,j-1}   (13)

Fig. 3: Landscape and position of the solution of Rosenbrock's function (a: 3-D landscape; b: contour plot marking the start point and the solution)
In the ECPSO algorithm, each particle's fitness value is denoted by the makespan of the corresponding schedule. It can be computed as follows: firstly, obtain the processing time matrix P of the permutation by swapping the columns of the original processing time matrix; then, update the matrix P according to the rule in Eq. (13); finally, the resulting value of element p_{m,n} is the makespan of the schedule.
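This matrix-based makespan computation can be sketched as follows. The sketch implements the recursion of Eq. (7)-(11) directly on the column-permuted matrix; the in-place VSM update of Eq. (13) yields the same bottom-right value. Function and variable names are illustrative:

```python
import numpy as np

def makespan(times, perm):
    """Makespan C_max of a permutation schedule: `times` is an
    m x n matrix (machines x jobs) of processing times, `perm` a
    job order. Columns are permuted first, then each entry becomes
    a completion time C[i][j] = max(C[i-1][j], C[i][j-1]) + t[i][j];
    the bottom-right entry is the makespan."""
    t = np.asarray(times, dtype=float)[:, perm]  # column swap, Eq. (13) setup
    m, n = t.shape
    c = np.zeros_like(t)
    for i in range(m):
        for j in range(n):
            up = c[i - 1, j] if i > 0 else 0.0    # same job, previous machine
            left = c[i, j - 1] if j > 0 else 0.0  # same machine, previous job
            c[i, j] = max(up, left) + t[i, j]
    return c[-1, -1]
```

For a tiny 2-machine, 2-job instance with times [[3, 2], [2, 4]], the order (job 0, job 1) gives makespan 9, while (job 1, job 0) gives 8.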
EXPERIMENTAL RESULTS
Performance on test function optimization: To study the search behavior and performance of ECPSO against other versions of PSO, such as plain PSO and CPSO, some typical test functions were selected as examples; their definitions can be found in Appendix A.
The Rosenbrock function, whose landscape is depicted in Fig. 3, is frequently used to test the performance of optimization algorithms. From the figure, we can clearly see that the global optimum lies inside a long, narrow, parabolic-shaped flat valley and is very difficult to find. So it is usually used to test the limits of global optimization ability.
The Sphere function is continuous, convex, symmetric and unimodal; it is mainly used to test the accuracy of optimization algorithms.
The Quartic function is unimodal with random noise. Noisy functions are widespread in real-world problems and every evaluation of the function is disturbed by
noise, so the particles' information is inherited and diffused noisily, which makes the problem hard to optimize.
The Griewank function is a rotated, inseparable, variable-dimension multimodal function. As its dimension increases, the scope of each local optimum gets narrower, so that searching for the global optimum becomes relatively easier. Therefore, for the Griewank function, it is harder to obtain a solution in low dimensions than in high dimensions.
The Rastrigin function, based on the Sphere function, uses a cosine term to generate many local optima. It is a complex multimodal function and optimization falls into local optima easily.
In addition, some functions with fixed numbers of dimensions are also included; for example, the landscape of the Bohachevsky function extends smoothly, so it is easy for all variants of PSO to optimize.
Further, the computational complexity of all the variants used in the study is qualitatively ranked in Table 1. We could observe that ECPSO computes efficiently and that its cost is mainly affected by the swarm size and the problem dimensionality. The computational complexity of ECPSO is higher than that of the other variants, because ECPSO adds a new velocity component, which is used to select a particle among the swarm for updating each velocity dimension of each particle. CPSO and ECPSO can probe better solutions than PSO, with smaller standard deviation fluctuations. Moreover, compared to plain PSO, the cooperative versions reached the best solutions more often than the basic version. Essentially, CPSO and ECPSO belong to the same family of cooperative PSO, so the gain in optimization ability between them appears relatively limited.

Table 1: Results of function optimization

PSO:
No.  Function      Best        Worst       Mean        SD
1    Sphere        5.16E-139   3.91E-131   3.56E-132   9.76E-132
2    Rosenbrock    1.03E-02    3.99E+00    1.09E+00    8.18E-01
3    Quadric       5.88E-136   2.12E-128   8.28E-130   3.86E-129
4    Griewank      9.86E-03    1.50E-01    5.05E-02    2.61E-02
5    Rastrigin     6.12E-06    9.00E+00    4.67E+00    2.09E+00
6    Hartmann      -3.32237    -3.20316    -3.27866    0.05843
7    Michalewicz   -9.660151   -9.463607   -9.616562   0.0451111

CPSO:
No.  Function      Best        Worst       Mean        SD
1    Sphere        7.24E-54    7.29E-52    1.33E-52    1.81E-52
2    Rosenbrock    4.28E-01    6.84E-01    5.46E-01    5.62E-02
3    Quadric       1.71E-10    8.54E-10    6.02E-10    2.55E-10
4    Griewank      0           0           0           0
5    Rastrigin     3.01E+00    8.90E+01    4.91E+01    2.56E+01
6    Hartmann      -3.32237    -3.13768    -3.28442    0.06006
7    Michalewicz   -9.660151   -9.6601517  -9.660151   0

ECPSO:
No.  Function      Best        Worst       Mean        SD
1    Sphere        4.05E-29    1.04E-24    1.17E-25    2.22E-25
2    Rosenbrock    1.14E-02    3.25E+00    1.20E+00    1.27E+00
3    Quadric       2.87E+01    1.18E+02    4.43E+01    2.08E+01
4    Griewank      0           0           0           0
5    Rastrigin     2.50E+01    1.25E+02    7.52E+01    3.52E+01
6    Hartmann      -3.32237    -3.20316    -3.25879    0.06049
7    Michalewicz   -9.660151   -9.660151   -9.660151   0

Table 2: Results for PFSSP regarding the quality of solutions found (Best/Worst/Mean per algorithm)
No.  Problem  BKS   NEH    PSO               CPSO              ICPSO             ECPSO
1    TA031    2724  2733   2724/2748/2733.6  2724/2751/2733.1  2724/2746/2727.2  2724/2735/2729.9
2    TA035    2863  2868   2864/2887/2868.6  2863/2864/2863.8  2863/2887/2866.2  2863/2887/2866.1
3    TA040    2782  2790   2783/2817/2789.7  2782/2814/2787.7  2782/2791/2784.3  2782/2786/2783.9
4    TA041    2991  3135   3092/3209/3144.2  3061/3141/3105.3  3060/3133/3084.1  3047/3146/3086.2
5    TA045    2976  3160   3092/3209/3144.2  3061/3141/3105.3  3060/3133/3084.1  3047/3146/3086.2
6    TA050    3065  3257   3166/3257/3211.7  3132/3212/3162.1  3140/3204/3167.0  3130/3195/3156.2
7    TA051    3850  4082   4016/4125/4059.1  3971/4105/4033.7  3977/4111/4015.7  3960/4037/4002.5
8    TA055    3610  3835   3847/3948/3892.3  3749/3867/3801.3  3771/3874/3811.9  3740/3920/3793.8
9    TA060    3756  4079   3914/4082/3976.9  3909/4002/3942.9  3869/3968/3917.8  3861/4007/3910.5
10   TA061    5493  5519   5495/5533/5507.4  5493/5495/5494.2  5493/5495/5493.6  5493/5527/5497
11   TA065    5250  5266   5255/5312/5226.4  5252/5255/5254.1  5250/5255/5253.5  5250/5255/5253.5
12   TA070    5322  5341   5342/5376/5356.0  5328/5342/5334.8  5328/5346/5335.2  5324/5350/5338.1
13   TA071    5770  5846   5900/6037/5962.1  5831/5918/5869    5827/5909/5865.7  5797/6013/5889.1
14   TA075    5467  5679   5641/5778/5712.4  5516/5718/5601.6  5539/5665/5600.3  5570/5674/5610.9
15   TA080    5845  5918   5903/6069/6001.1  5903/5947/5910.2  5903/5914/5904.2  5881/5904/5892.1
16   TA081    6202  6541   6640/6821/6721.6  6505/6700/6560.3  6515/6941/6600.5  6474/6607/6534.6
17   TA085    6314  6695   6705/6848/6798.3  6578/6708/6625.5  6584/6697/6625.7  6558/6735/6641.4
18   TA090    6434  6677   6783/6974/6872.1  6645/6760/6706.7  6657/6784/6727.8  6615/6742/6704.9
Taillard's benchmark suite for the makespan criterion: In this section, an analysis of the results obtained on the Taillard benchmark suite is provided (Taillard, 1993). Solution quality is mainly measured in terms of the relative increase in makespan with respect to the best known solutions. Each instance was run 10 consecutive times and the best, worst and mean makespan achieved were recorded.
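The solution-quality measure described above can be sketched as follows; the function name `rpd` (relative percentage deviation) is an illustrative assumption:

```python
def rpd(makespan_value, bks):
    """Relative increase in makespan over the best known solution
    (BKS), in percent, as used to score Taillard instances."""
    return 100.0 * (makespan_value - bks) / bks
```

For example, on TA031 (BKS 2724) a schedule with makespan 2733 deviates by about 0.33%.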
From Table 2, we can clearly see that the proposed ECPSO algorithm performed considerably better than the plain PSO algorithm and some heuristic methods such as NEH. Also, compared to the basic Cooperative PSO (CPSO) and the Improved Cooperative PSO (ICPSO) (Yu et al., 2009), the convergence property has been enhanced by the techniques proposed in this study.

Fig. 4: Evolution curves of the optimal solution for PSO, CPSO, ICPSO and ECPSO (a: TA031; b: TA041; c: TA051; d: TA061; e: TA071; f: TA081; fitness value versus evolvement generation over 1000 generations)

Figure 4 illustrates the typical convergence behavior of PSO, CPSO, ICPSO and ECPSO on the Taillard benchmark suite, in which Fig. 4a-f illustrate the TA031, TA041, TA051, TA061, TA071 and TA081 instances,
respectively. From these figures, it can be seen that the curves of objective values for the family of cooperative PSO descend much faster than for plain PSO. In addition, the fitness values descend to a lower level with ECPSO than with CPSO and ICPSO, owing to the different algorithmic mechanism. The results of the experiments indicate that the proposed ECPSO achieves more efficiency and stability than plain PSO and the other versions of cooperative PSO.
Moreover, another interesting result is that although ECPSO has improved the quality of solutions to a certain extent, its ability to probe the Best Known Solution (BKS) is the same as that of the other members of the cooperative PSO family, such as CPSO and ICPSO. In other words, the lower bound of the best solution that can be probed may be determined by the cooperative mechanism itself rather than by these kinds of improvement.
CONCLUSION

In this study, we present a new cooperative multi-swarm optimization algorithm named ECPSO to solve PFSSP. Our main contributions include:

•  A dynamic electoral swarm is generated by the primitive sub-swarms and also communicates with them.
•  The projection of the electoral swarm's global best position onto each primitive sub-swarm becomes a global factor affecting the movement of the particles in the primitive sub-swarms.

Compared with PSO and other versions of cooperative PSO, simulations and comparisons demonstrate that ECPSO is more effective and robust. In future work, the neighborhood topology and the limits on improving PFSSP solutions with ECPSO will be our interests.

Appendix A:

Sphere function:
F_Sphere(x) = Σ_{i=1}^{D} x_i²

Rosenbrock function:
F_Rosenbrock(x) = Σ_{i=1}^{D-1} (100(x_i² − x_{i+1})² + (x_i − 1)²)

Quadric function:
F_Quadric(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)²

Griewank function:
F_Griewank(x) = (1/4000) Σ_{i=1}^{D} x_i² − Π_{i=1}^{D} cos(x_i/√i) + 1

Rastrigin function:
F_Rastrigin(x) = Σ_{i=1}^{D} (x_i² − 10 cos(2πx_i) + 10)

Hartmann function:
F_Hartmann(x) = −Σ_{i=1}^{4} α_i exp(−Σ_{j=1}^{6} B_{ij}(x_j − Q_{ij})²)

α = (1.0, 1.2, 3.0, 3.2)^T

B = ( 10    3    17    3.5   1.7   8
      0.05  10   17    0.1   8     14
      3     3.5  1.7   10    17    8
      17    8    0.05  10    0.1   14 )

Q = 10⁻⁴ × ( 1312  1696  5569  124   8283  5886
             2329  4135  8307  3736  1004  9991
             2348  1451  3522  2883  3047  6650
             4047  8828  8732  5743  1091  381 )

Michalewicz function:
F_Michalewicz(x) = −Σ_{j=1}^{D} sin(x_j) (sin(j x_j²/π))^{20}
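A few of the appendix functions translate directly into code; the sketch below covers the Sphere, Rastrigin and Griewank definitions (each has its global minimum of 0 at the origin):

```python
import numpy as np

def sphere(x):
    """Sphere: continuous, convex, unimodal; minimum 0 at x = 0."""
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Rastrigin: Sphere plus a cosine term producing many local optima."""
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    """Griewank: quadratic term plus an inseparable cosine product."""
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```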
ACKNOWLEDGMENT
This study is supported by the Talent Introduction
Special Fund of Anhui Science and Technology
University (Grant No. ZRC2011304).
REFERENCES
Dimopoulos, C. and A.M.S. Zalzala, 2000. Recent developments in evolutionary computation for manufacturing optimization: Problems, solutions and comparisons. IEEE Trans. Evol. Comput., 4(2): 93-113.
Garey, M.R., D.S. Johnson and R. Sethi, 1976. The complexity of flowshop and jobshop scheduling. Math. Oper. Res., 1(2): 117-129.
Kennedy, J. and R.C. Eberhart, 1995. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, pp: 1942-1948.
Nawaz, M., E. Enscore and I. Ham, 1983. A heuristics
algorithm for the m-machine, n-job flow shop
sequencing problem. Omega, 11: 91-95.
Tasgetiren, M.F., M. Sevkli, Y.C. Liang and G.
Gencyilmaz, 2004. Particle swarm optimization
algorithm for permutation flow shop sequencing
problem. Lecture Notes Comput. Sci., Berlin
Heidelberg-Springer Verlag, 3172: 382-389.
4811
Res. J. Appl. Sci. Eng. Technol., 4(22): 4805-4812, 2012
Taillard, E., 1993. Benchmarks for basic scheduling
problems. Eur. J. Oper. Res., 64: 278-285.
Van den Bergh, F. and A.P. Engelbrecht, 2001. Effects of
swarm size on cooperative particle swarm optimizers.
Proceedings of the GECCO, San Francisco, Morgan
Kaufmann, pp: 892-899.
Valente, J.M.S. and R.A.F.S. Alves, 2005. An exact approach to early/tardy scheduling with release dates. Comput. Oper. Res., 32: 2905-2917.
Ye, T. and D.Y. Liu, 2011. A hybrid particle swarm
optimization method for flow shop scheduling
problem. Acta Electron. Sin., 9: 1087-1093.
Yu, B.N., B. Jiao and X.S. Gu, 2009. An improved
cooperative particle swarm optimization and its
application to flow shop scheduling problem. J. East
China University Sci. Technol. (Nat. Sci. Edn.), 35:
468-474.
Zhang, J., C. Zhang and S. Liang, 2010. The circular
discrete particle swarm optimization algorithm for
flow shop scheduling problem. Expert Syst. Appl.,
37: 5827-5834.