A Survey of the State of the Art in Particle Swarm Optimization

Research journal of Applied Sciences, Engineering and Technology 4(9): 1181-1197, 2012
ISSN: 2040-7467
© Maxwell Scientific Organization, 2012
Submitted: January 11, 2012
Accepted: February 09, 2012
Published: May 01, 2012
A Survey of the State of the Art in Particle Swarm Optimization
¹Mahdiyeh Eslami, ¹Hussain Shareef, ²Mohammad Khajehzadeh and ¹Azah Mohamed
¹Electrical, Electronic and Systems Engineering Department, University Kebangsaan Malaysia, Bangi, 43600, Selangor, Malaysia
²Civil Engineering Department, Islamic Azad University, Anar Branch, Anar, Iran
Abstract: Meta-heuristic optimization algorithms have become a popular choice for solving complex and intricate problems that are otherwise difficult to solve by traditional methods. The present study reviews one well-known meta-heuristic algorithm: Particle Swarm Optimization (PSO). PSO, in its present form, has been in existence for roughly a decade, a relatively short time compared with some of the other natural computing paradigms such as artificial neural networks and evolutionary computation. However, in that short period, PSO has gained widespread appeal amongst researchers and has been shown to offer good performance in a variety of application domains, with potential for hybridization and specialization, and demonstration of some interesting emergent behavior. This study comprises a snapshot of particle swarm optimization from the authors' perspective, including variations of the algorithm, modifications and refinements introduced to prevent swarm stagnation, and hybridizations of PSO with other heuristic algorithms.
Key words: Hybridization, modification, particle swarm optimization
INTRODUCTION
Optimization is a ubiquitous and spontaneous process that forms an integral part of our day-to-day life. In the most basic sense, it can be defined as the art of selecting the best alternative among a given set of options. Optimization problems are widely encountered in various fields of science and technology such as engineering design, agricultural sciences, manufacturing systems, economics, physical sciences and pattern recognition. In fact, optimization techniques are extensively used in various spheres of human activity where decisions have to be taken in some complex situation that can be represented by a mathematical model. Optimization can thus be viewed as one of the major quantitative tools in the network of decision making, in which decisions have to be taken to optimize one or more objectives in some prescribed set of circumstances. Sometimes such problems can be very complex due to the actual and practical nature of the objective function or the model constraints. In view of the practical utility of optimization problems, there is a need for efficient and robust computational algorithms that can numerically solve, on computers, the mathematical models of medium- and large-size optimization problems arising in different fields.
Traditionally, optimization methods involved derivative-based techniques such as those summarized in Echer and Kupferschmid (1988) and Momoh et al. (1999a, b). These techniques are robust and have proven their effectiveness in handling many classes of optimization problems. However, they can encounter difficulties such as getting trapped in local minima, increasing computational complexity, and not being applicable to certain classes of objective functions. This led to the need to develop a new class of solution methods that can overcome these shortcomings.
In the past few decades, several global optimization algorithms have been developed based on nature-inspired analogies. These are mostly population-based meta-heuristics, also called general-purpose algorithms because of their applicability to a wide range of problems. Global optimization techniques are fast-growing tools that can overcome most of the limitations found in derivative-based techniques. Some popular global optimization algorithms include Genetic Algorithms (GA) (Holland, 1992), Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995), Differential Evolution (DE) (Price and Storn, 1995), Evolutionary Programming (EP) (Maxfield and Fogel, 1965) and Ant Colony Optimization (ACO) (Dorigo et al., 1991). A chronological order of the development of some of the popular nature-inspired meta-heuristics is given in Fig. 1. These algorithms have proved their mettle in solving complex and intricate optimization problems arising in various fields.
Figure 2 shows the distribution of publications that applied meta-heuristic methods to solve optimization problems. This survey is based on the ISI Web of Knowledge databases and includes most of the papers published during the past decade.
Corresponding Author: Mahdiyeh Eslami, Electrical, Electronic and Systems Engineering Department, University Kebangsaan Malaysia, Bangi, 43600, Selangor, Malaysia
PSO is a well-known and popular search strategy that has gained widespread appeal amongst researchers and has been shown to offer good performance in a variety of application domains, with potential for hybridization and specialization. It is a simple and robust strategy based on the social and cooperative behavior shown by various species such as flocks of birds and schools of fish. PSO and its variants have been effectively applied to a wide range of benchmark as well as real-life optimization problems.
This paper aims to offer a compendious and timely review of the development of the PSO algorithm as a well-known and popular search strategy. For this review, a literature survey was carried out covering the IEEE and ISI Web of Knowledge databases, which are among the largest abstract and citation databases of research literature and quality web sources. The survey spans the last 10 years, from 2001 to March 2011. Figure 3 statistically illustrates the number of research papers published on the subject of PSO during this period.
This paper presents a state-of-the-art review of previous studies that proposed modified versions of particle swarm optimization and applied them to various optimization problems arising in different fields. The review demonstrates that particle swarm optimization and its modified versions have widespread application in complex optimization domains and are currently a major research topic, offering an alternative to the more established evolutionary computation techniques that may be applied in many of the same domains.
Fig. 1: Popular meta-heuristics in chronological order
Fig. 2: Pie chart showing the publication distribution of the
meta-heuristics algorithms
Fig. 3: Bar diagram of the number of PSO papers published per year, 2001 to 2011 (y-axis: number of papers, 0 to 1600)
Particle Swarm Optimization (PSO): The PSO algorithm is a population-based optimization technique belonging to the category of Evolutionary Computation (EC) for solving global optimization problems. Its concept was initially proposed by Kennedy as a simulation of social behavior, and the PSO algorithm was first introduced as an optimization method by Kennedy and Eberhart (1995). In a PSO system, multiple candidate solutions coexist and collaborate simultaneously. Each solution, called a particle, flies through the problem search space looking for the optimal position to land. A particle, over the generations, adjusts its position according to its own experience as well as the experience of neighboring particles. The PSO system combines a local search method (through self-experience) with global search methods (through neighboring experience), attempting to balance exploration and exploitation. A particle's status in the search space is characterized by two factors: its position (Xi) and velocity (Vi). The new velocity and position of each particle are updated according to the following equations:
Vi[k + 1] = Vi[k] + C1 × Rand(.) × (pbesti[k] − Xi[k]) + C2 × rand(.) × (gbest[k] − Xi[k])   (1)
Xi [k + 1] = Xi [k] + Vi [k + 1]
(2)
If Vi > Vmax, then Vi = Vmax
If Vi < −Vmax, then Vi = −Vmax
where, Vi = [vi,1, vi,2, ..., vi,n] is the velocity of particle i, which represents the distance to be traveled by this particle from its current position; Xi = [xi,1, xi,2, ..., xi,n] represents the position of particle i; pbesti represents the best previous position of particle i (i.e., the local-best position, or its experience); gbest represents the best position among all particles in the population X = [X1, X2, ..., XN] (i.e., the global-best position); Rand(.) and rand(.) are two independent uniformly distributed random variables in the range [0, 1]; and C1 and C2 are positive constants called acceleration coefficients, which guide each particle toward the individual best and the swarm best positions, respectively.
The first part of Eq. (1), Vi[k], represents the particle's previous velocity, which serves as a memory of the previous flight direction. This memory term can be visualized as momentum, which prevents the particle from drastically changing direction and biases it towards the current direction. The second part, C1 Rand(.) (pbesti[k] − Xi[k]), is called the cognition part and represents the personal experience of the particle. This cognition part resembles individual memory of the position that was best for the particle. Its effect is that particles are drawn back to their own best positions, resembling the tendency of individuals to return to situations or places that were most satisfying in the past. The third part, C2 rand(.) (gbest[k] − Xi[k]), represents cooperation among particles and is therefore named the social component. This term resembles a group norm or standard that individuals seek to attain. Its effect is that each particle is also drawn towards the best position found by its neighbors.
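As an illustrative sketch of Eq. (1) and (2) with the velocity clamp described below, a single update step for one particle might look as follows; the function name and parameter defaults are our own, not from the paper:

```python
import random

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, vmax=1.0):
    """One PSO update per Eq. (1)-(2): momentum + cognition + social terms,
    with the velocity clamped to [-Vmax, Vmax] before the position move."""
    new_x, new_v = [], []
    for d in range(len(x)):
        vd = (v[d]
              + c1 * random.random() * (pbest[d] - x[d])   # cognition part
              + c2 * random.random() * (gbest[d] - x[d]))  # social part
        vd = max(-vmax, min(vmax, vd))                     # clamp velocity
        new_v.append(vd)
        new_x.append(x[d] + vd)                            # Eq. (2)
    return new_x, new_v
```

In a full optimizer this step would run for every particle each iteration, with pbest and gbest refreshed from the objective values afterwards.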
After PSO was introduced, several measures were taken to facilitate convergence and prevent explosion of the swarm. A few important and interesting modifications to the basic structure of PSO focus on limiting the maximum velocity, adding an inertia weight and adding a constriction factor. Details are discussed in the following:
Selection of maximum velocity: At each iteration step, all particles move to a new position, with the direction and distance defined by their velocities. It is not desirable in practice that many particles leave the search space in an explosive manner. Equation (1) shows that the velocity of any given particle is a stochastic variable prone to create an uncontrolled trajectory, allowing the particle to follow wider cycles in the design space, or even to escape it. In order to limit the impact of these phenomena, the particle's velocity should be clamped to a reasonable interval using the constant Vmax, as in the rule given after Eq. (2).
Normally, the value of Vmax is set empirically, according to the characteristics of the problem. A large value increases the convergence speed, but also the probability of convergence to a local minimum. In contrast, a small value decreases the efficiency of the algorithm whilst increasing its search ability. An empirical rule is that, for any given dimension, Vmax can be set to half the range of possible values for the search space. Research work by Fan and by Shi and Eberhart (2001) shows that an appropriately and dynamically changing Vmax can improve the performance of the PSO.
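The half-range rule above can be sketched in a few lines; the function name is illustrative:

```python
def vmax_half_range(lower, upper):
    """Empirical rule: per-dimension Vmax = half the range of that dimension."""
    return [0.5 * (u - l) for l, u in zip(lower, upper)]
```

For example, a search space of [0, 10] in each dimension would give Vmax = 5 per dimension.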
Adding inertia weight: Shi and Eberhart (1998) proposed a new parameter w for PSO, named the inertia weight, in order to better control the scope of the search; it multiplies the velocity at the previous time step, i.e., Vi[k]. The use of the inertia weight w improved performance in a number of applications. This parameter can be interpreted as an inertia constant; Eq. (1) now becomes:
Vi[k + 1] = w × Vi[k] + C1 × Rand(.) × (pbesti[k] − Xi[k]) + C2 × rand(.) × (gbest[k] − Xi[k])   (3)
Essentially, this parameter controls the exploration of the search space: a high value allows particles to move with large velocities in order to find the neighborhood of the global optimum quickly, while a low value narrows the particles' search region. Initially, the inertia weight was kept static during the entire search process for every particle and dimension. Over time, however, dynamic inertia weights were introduced.
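One widely used dynamic scheme decreases w linearly over the run, commonly from about 0.9 down to 0.4; a minimal sketch, with the function name and default bounds being illustrative choices rather than values fixed by this survey:

```python
def inertia_weight(iteration, max_iter, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: starts at w_max (exploration)
    and ends at w_min (exploitation) as the run progresses."""
    return w_max - (w_max - w_min) * iteration / max_iter
```

The returned w would simply replace the constant w in Eq. (3) at each iteration.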
Constriction factor: When the particle swarm algorithm is run without restraining the velocity in some way, the system simply explodes after a few iterations. Clerc and Kennedy (2002) introduced a constriction coefficient χ in order to control the convergence properties of a particle swarm system. The value of the constriction factor is calculated as follows:

χ = 2 / |2 − φ − √(φ² − 4φ)|,  where φ = C1 + C2, φ > 4.0   (4)
With the constriction factor χ, the PSO equation for computing the velocity is:

Vi[k + 1] = χ × (Vi[k] + C1 × Rand(.) × (pbesti[k] − Xi[k]) + C2 × rand(.) × (gbest[k] − Xi[k]))   (5)
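Equation (4) can be computed directly; the function name below is illustrative:

```python
import math

def constriction(phi):
    """Constriction factor of Eq. (4); valid for phi = C1 + C2 > 4."""
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

For φ = 4.1 this yields approximately 0.729, the commonly quoted constant; for φ = 5 it drops to about 0.38, giving the pronounced damping discussed below.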
The constriction factor results in convergence over time: the amplitude of the trajectory's oscillations decreases over time. Note that as φ increases above 4.0, χ gets smaller. For instance, if φ = 5 then χ ≈ 0.38 from Eq. (4), which causes a very pronounced damping effect. A common choice is φ = 4.1, giving a constant χ of 0.729, which works well in practice. The advantage of using the constriction factor χ is that there is no need to use Vmax nor to guess values for any other parameters to govern convergence and prevent explosion. The constriction approach is effectively equivalent to the inertia weight approach. Both approaches aim to balance exploration and exploitation, thereby improving convergence time and the quality of the solutions found. It should be noted that low values of w and χ result in exploitation with little exploration, while large values result in exploration with difficulty refining solutions (Alvarez-Benitez et al., 2005). As a member of stochastic
search algorithms, PSO has two major drawbacks (Lovbjerg, 2002). The first is that PSO's performance is problem-dependent. This dependency is usually caused by the way parameters are set; i.e., assigning different parameter settings to PSO will result in high performance variance. In general, no single parameter setting exists that can be applied to all problems and performs dominantly better than other parameter settings. The common way to deal with this problem is to use self-adaptive parameters. Self-adaptation has been successfully applied to PSO by Clerc (1999), Shi and Eberhart (2001), Hu and Eberhart (2002), Ratnaweera et al. (2004), Tsou and MacNish (2003) and others. The second drawback of PSO is its premature character, i.e., it can converge to a local minimum. According to Angeline (1998a, b), although PSO converges to an optimum much faster than other evolutionary algorithms, it usually cannot improve the quality of the solutions as the number of iterations is increased. PSO usually suffers from premature convergence when highly multimodal problems are being optimized.
Several efforts have been made to develop variations of PSO that diminish the impact of the two aforementioned disadvantages. Some of them have already been discussed, including the inertia weight and the constriction factor. Further modifications, as well as hybridizations of PSO with other kinds of optimization algorithms, are discussed in the next section.
Modifications of the original PSO: After PSO was introduced, it attracted the attention of many researchers for its beneficial features. Mathematicians, engineers, physicists, biochemists and psychologists have analyzed and experimented with it, and many variations were created to further improve its performance. Unfortunately, even focusing on the PSO variants and specializations that, from our point of view, have had the most impact and seem to hold the most promise for the future of the paradigm, these are too numerous for us to describe every one. We have therefore been forced to leave out some streams in particle swarm research.
Van den Bergh and Engelbrecht (2002) proposed a new PSO algorithm with strong local convergence properties, called the Guaranteed Convergence Particle Swarm Optimizer (GCPSO). The new algorithm performs much better with a smaller number of particles, compared to the original PSO. Multi-start PSO (MPSO), an extension of GCPSO that turns it into a global search algorithm, was proposed in Van den Bergh (2002). Several versions of MPSO were proposed by Van den Bergh (2002) based on the criterion used to determine the convergence of GCPSO. The GCPSO is assumed to have converged to a minimum if a counter reaches a certain limit. According to Van den Bergh (2002), MPSO generally performed better than GCPSO in most of the tested cases. Theoretically, if the swarm could be regenerated an infinite number of times, GCPSO could eventually find the global optimum. However, the performance of MPSO clearly degrades as the number of design variables in the objective function increases.
He et al. (2004) presented a PSO with passive
congregation (PSOPC) to improve the performance of
Standard PSO (SPSO). Passive congregation is an
important biological force preserving swarm integrity. By
introducing passive congregation to PSO, information can
be transferred among individuals of the swarm. This
approach was tested on a benchmark suite and compared with standard Gbest-mode PSO, Lbest-mode PSO and PSO with a constriction factor.
Experimental results indicate that the PSO with passive
congregation improves the search performance on the
benchmark functions significantly. Padhye (2008) has
collected strategies on how to select gbest and pbest.
These strategies were originally designed for PSO to
solve multi-objective problems with a good convergence,
as well as diversity and spread along the Pareto-optimal
front. However they can also be extended to other kinds
of optimization problems. The author brings forward
some new ideas about how to select gbest and pbest,
together with the existing issues. Fully Informed PSO (FIPSO) was introduced by Mendes et al. (2004).
The method is an alternative that is conceptually more
concise and promises to perform more effectively than the
traditional particle swarm algorithm. In this new version,
the particle uses information from all its neighbors, rather
than just the best one. Several variants are based on this
approach, such as: PSOOP (PSO with opposite particles) (Wang and Zhang, 2005), TSPSO (two-stage PSO) (Zhuang et al., 2008), RDNPSO (random dynamic neighborhood PSO) (Mohais et al., 2008), RDNEMPSO (randomized directed neighborhoods with edge migration in PSO) (Mohais et al., 2004) and PS2O (particle swarms swarm optimizer) (Chen et al., 2008).
Li (2004) proposed an improved PSO using the
notion of species to determine its neighborhood best
values, for solving multimodal optimization problems. In
the proposed Species-based PSO (SPSO), the swarm
population is divided into species subpopulations based
on their similarity. Each species is grouped around a
dominating particle called the species seed. At each
iteration step, species seeds are identified from the entire
population and then adopted as neighborhood bests for
these individual species groups separately. Species are
formed adaptively at each step based on the feedback
obtained from the multimodal fitness landscape. Over successive iterations, species are able to simultaneously optimize towards multiple optima, regardless of whether they are global or local optima. This approach was tested on a series of widely used multimodal test functions and found all the global optima of all test functions with good accuracy. An Adaptive Particle Swarm Optimization (APSO) at the individual level was presented by Xie et al. (2002a). By analyzing the social model of PSO, a replacement criterion based on the diversity of fitness between the current particle and the best historical experience is introduced to adaptively maintain the social attribution of the swarm by removing inactive particles. As the evolution of the swarm unfolds, particles may lose their local and global search abilities when their individual best positions are very close to the best positions achieved by their neighbors; such particles are called inactive particles. To overcome this issue, Xie et al. (2002b) defined a constant as a measurement to detect inactive particles. Moreover, they proposed that the positions and velocities of such particles be regenerated randomly. Testing on three benchmark functions indicates that APSO improves the average performance effectively.
Carvalho and Bastos-Filho proposed Clan PSO (CPSO) (de Carvalho and Bastos-Filho, 2009), which includes:

- Create clan topologies: Clans are groups of individuals, or tribes, united by a kinship based on lineage or mere symbolism. In each iteration, each clan performs a search and marks the particle that reached the best position of the entire clan.
- Leader delegation: Marking the best particle within the clan is equivalent to stipulating a clan's symbol as guide, i.e., delegating the power to lead the others in the group.
- Leaders' conference: Leaders of all the clans exchange their information using Gbest or Lbest models.
- Clan feedback information: After the leaders' conference, each leader returns to its own clan, and the new information acquired in the conference is spread widely within the clan.

Results have shown that CPSO achieves better degrees of convergence. A new particle swarm optimizer, called Stochastic PSO (SPSO), which is guaranteed to converge to the global optimum with probability one, was presented based on analysis of the standard PSO (Cui and Zeng, 2004). In this approach, if the global best position is replaced by a particle's position in some iteration, that particle's position is regenerated randomly, and if a particle's new position coincides with the global best position, its position is also regenerated randomly. The authors proved that this is a guaranteed global convergence optimizer, and numerical tests showed its good performance.
He and Han (2006) presented PSO with a disturbance term (PSO-DT), which adds a disturbance term to the velocity updating equation of the standard PSO in order to mitigate its shortcomings. The addition of the disturbance term to the existing structure effectively mends the defects. The convergence of the improved algorithm was analyzed, and simulation results demonstrated that it performs better than the standard one.
A modified particle swarm optimizer named Cooperative PSO (CPSO) was proposed by Van den Bergh and Engelbrecht (2004). CPSO can significantly improve on the performance of the original PSO by employing multiple cooperating swarms to optimize different components of the solution vector. In this method, the search space is partitioned by dividing the solution vectors into smaller vectors; based on this partition, several swarms are randomly generated in different parts of the search space and used to optimize different parts of the solution vector.
Then, two cooperative PSO models were proposed by the authors. In one of them (CPSO-Sk), a swarm with an n-dimensional vector is divided into n swarms of one-dimensional vectors, and each swarm attempts to optimize a single component of the solution vector. In the other variant (CPSO-Hk), in each iteration, CPSO-Sk is implemented to obtain a sequence of potential solution points and half of those are randomly selected for an update with the original PSO. In case the new position dominates the previous one, the information of the sequence is updated. Angeline (1998a, b) applied a selection strategy to improve the performance of standard
PSO. In the proposed method, each particle was ranked based on a comparison of its performance with that of a group of randomly selected particles. A particle is awarded one point whenever it shows better fitness in a tournament than another. Members of the population are then organized in descending order according to their accumulated points, and the bottom half of the population is replaced by the top half. This step reduces the diversity of the population. The results show that the new approach performed better than the PSO (with and without w) on uni-modal functions. However, it performed worse than the PSO on functions with many local optima. It can therefore be concluded that although the use of a selection method improves the exploitation capability of the PSO, it reduces its exploration capability. In order to prevent premature
convergence, Xie et al. (2002a) proposed Dissipative PSO (DPSO) by adding random mutation to PSO, an idea inspired by GA. DPSO introduces negative entropy through the addition of randomness to the particles. The results showed that DPSO performed better than standard PSO when applied to the benchmark problems. The concept of mutation is also adopted by other variants, such as PSOG (PSO with Gaussian mutation) (Higashi and Iba, 2003) and PSOM (PSO with mutation) (Stacey et al., 2003). Binary PSO (BPSO) was
first proposed by Kennedy and Eberhart (1997). In this
approach, the position in design space of each particle is
expressed in a binary string. In the binary version,
trajectories are changes in the probability that a
coordinate will take on a zero or one value. Of course this
approach can also be applied to continuous optimization
problems, since any continuous real value can also be
represented as a bit string.
Jiao et al. (2008) proposed an Improved Particle Swarm Optimization algorithm (IPSO) by introducing a dynamic inertia weight for optimization problems. The algorithm uses the dynamic inertia weight to decrease the inertia factor in the velocity update equation of the original PSO. It was tested on a set of 6 benchmark functions with 30, 50 and 150 dimensions and compared with standard PSO. Experimental results indicated that IPSO significantly improves search performance on the benchmark functions.
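Returning to the binary PSO of Kennedy and Eberhart (1997) described earlier: each coordinate takes the value 1 with a probability given by the sigmoid of its velocity. A minimal sketch, with illustrative naming:

```python
import math
import random

def bpso_position(v, rng=random):
    """Binary PSO position update: bit d becomes 1 with probability
    sigmoid(v[d]) = 1 / (1 + exp(-v[d])), else 0."""
    return [1 if rng.random() < 1.0 / (1.0 + math.exp(-vd)) else 0 for vd in v]
```

A strongly positive velocity thus makes the bit almost certainly 1, and a strongly negative one almost certainly 0, so the velocity controls a probability rather than a displacement.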
In order to keep a valid balance between global exploration and local exploitation, Jie et al. (2008) developed Knowledge-based Cooperative Particle Swarm Optimization (KCPSO). KCPSO mainly simulates the self-cognitive and self-learning process of evolutionary agents in a special environment, and introduces a knowledge billboard to record varieties of search information. Moreover, KCPSO takes advantage of multiple swarms to maintain swarm diversity and tries to guide their evolution by the shared information. Under the guidance of the shared information, KCPSO directs each sub-swarm to carry out local exploitation in a different local area, in which every particle follows a social learning behavior mode; at the same time, KCPSO carries out global exploration through the escaping behavior and the cooperative behavior of the particles in different sub-swarms. KCPSO can maintain appropriate swarm diversity and effectively alleviate premature convergence. The proposed model was applied to some well-known benchmarks, and the experimental results showed KCPSO to be a robust global optimization method for complex multimodal functions.
In another study, a Constrained Particle Swarm Optimization (CPSO) was developed by Azadani et al. (2010). In this method, constraint handling is based on particle ranking and uniform distribution. For equality constraints, coefficient weights are defined and applied in the initializing and updating procedures. The method was applied to generation scheduling and reserve dispatch in a multi-area electricity market, considering system constraints to ensure the security and reliability of the power system. CPSO was applied to three case studies, and the results showed promising performance of the algorithm for smooth and non-smooth cost functions. Liu et al.
(2007a) proposed Center Particle Swarm Optimization (Center PSO), which introduces a center particle into LDWPSO to improve performance. Xi et al. (2008) introduced an improved Quantum-behaved PSO, called Weighted QPSO (WQPSO), which introduces a weight parameter into the calculation of the mean best position in QPSO in order to reflect the importance of particles in the swarm as they evolve. Yang et al. (2007) proposed another dynamic inertia weight to modify the velocity update formula, in a method called modified Particle Swarm Optimization with Dynamic Adaptation (DAPSO). Ali and Kaelo (2008) presented an efficiency measure to identify the cause of slow convergence in PSO, and proposed modifications to the position update rule of PSO in order to make convergence faster. Jiang et al. (2010) developed a shuffling master-slave swarm evolutionary algorithm based on particle swarm optimization (MSSE-PSO). The population is sampled randomly from the feasible space and partitioned into several sub-swarms (one master swarm and additional slave swarms), each slave swarm independently executing PSO. The master swarm is enhanced by its own social knowledge and that of the slave swarms. To benefit all PSO variants, Tsoulos and Stavrakoudis (2010) proposed a stopping rule and similarity check in order to enhance the speed of all PSO variants. These PSO variants propose interesting strategies for avoiding premature convergence by sustaining the variety amongst
Table 1: A summary of modifications of the PSO

Author/s | Context | Brief introduction
Kennedy and Eberhart (1997) | BPSO | Discrete design variables are expressed in binary form in this variant.
Angeline (1998a, b) | Selection | Particles are sorted based on their performance; the worst half is then replaced by the best half.
Van den Bergh and Engelbrecht (2002) | GCPSO | Induce a new particle searching around the global best position found so far.
Van den Bergh (2002) | MPSO | Use GCPSO recursively until some stopping criterion is met.
Xie et al. (2002a) | APSO | Improve the swarm's local and global searching ability by inserting self-organization theory.
He et al. (2004) | PSOPC | Add a passive congregation part to the particle's velocity update formula.
Mendes et al. (2004) | FIPSO | Use the best personal positions of all of particle i's neighbors to update i's velocity.
Li (2004) | SPSO | Use several adaptively updated species-based sub-swarms to search the design space.
Cui and Zeng (2004) | SPSO | Particle i's position is regenerated randomly if it is too close to the gbest.
Van den Bergh and Engelbrecht (2004) | CPSO | Use multiple swarms to search different dimensions of the design space by employing cooperative behavior.
Xie et al. (2002b) | DPSO | Add mutation to the PSO in order to prevent premature convergence.
He and Han (2006) | PSO-DT | Introduce a disturbance term into the velocity update formula.
Liu et al. (2007a) | Center PSO | Introduce a center particle into LDWPSO.
Yang et al. (2007) | DAPSO | Use a dynamic inertia weight to modify the velocity update formula.
Padhye (2008) | MOPSO | Utilize new selection strategies to get pbest and gbest.
Carvalho and Bastos-Filho (2009) | CPSO | Use sub-swarms to search the design space; sub-swarms communicate with each other every few iterations.
Jiao et al. (2008) | IPSO | Introduce a dynamic inertia weight PSO algorithm for optimization problems.
Jie et al. (2008) | KCPSO | Introduce a knowledge billboard to record varieties of search information.
Xi et al. (2008) | WQPSO | Introduce a weight parameter into the calculation of the mean best position in QPSO.
Azadani et al. (2010) | CPSO | Constraint handling is based on particle ranking and uniform distribution.
Jiang et al. (2010) | MSSE-PSO | The population is sampled randomly from the feasible space and partitioned into several sub-swarms.
Engelbrecht (2010) | HPSO | Particles are allowed to follow different search behaviors selected from a behavior pool.
Chuang et al. (2011) | C-Catfish PSO | Introduce chaotic maps into catfish particle swarm optimization.
individuals, and also contain properties that ultimately
evolve a population towards a higher fitness (global
optimization or local optima). Chaotic catfish particle
swarm optimization (C-CatfishPSO) is a novel
optimization algorithm proposed by Chuang et al. (2011).
Recently, numerous improvements, which rely on the
chaos approach, have been proposed for PSO in order to
overcome the disadvantages of the algorithm (Wang et al., 2001; Chuanwen and Bompard, 2005a, b; Liu et al., 2005; Xiang et al., 2007; Coelho and Mariani, 2009). C-CatfishPSO introduces chaotic maps into catfish particle swarm optimization (CatfishPSO), which increases the search capability of CatfishPSO via the chaos approach.
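The chaos approach referred to here usually means replacing uniform pseudo-random draws with values from a chaotic map; the logistic map is a common choice. The sketch below is a hedged illustration: the generator itself is standard, but its use in place of random PSO coefficients is only an assumption for illustration, not the exact C-CatfishPSO recipe.

```python
def logistic_sequence(x0=0.7, r=4.0, n=5):
    """Generate n values of the logistic map x_{k+1} = r * x_k * (1 - x_k).

    With r = 4 and x0 in (0, 1) away from the fixed points, the sequence
    is chaotic and stays inside (0, 1), so it can stand in for uniform
    random draws in a PSO velocity update (illustrative use only)."""
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

seq = logistic_sequence()
```

Because the map is deterministic, reusing the same seed value `x0` reproduces the same "random" coefficients, which is one reason chaos-based variants are easy to analyze and repeat.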
Simple CatfishPSO relies on the incorporation of catfish
particles into PSO (Chuang et al., 2008). This effect is the
result of the introduction of new particles into the search
space (catfish particles), which replace particles with the
worst fitness; these catfish particles are initialized at
extreme points of the search space when the fitness of the
global best particle has not improved for a certain number
of consecutive iterations. This results in further
opportunities of finding better solutions for the swarm by
guiding the whole swarm to promising new regions of the
search space (Chuang et al., 2008). In C-CatfishPSO, the
logistic map was introduced to improve the search
behavior and to prevent entrapment of the particles in a
locally optimal solution. The introduced chaotic maps
strengthen the solution quality of PSO and Catfish PSO
significantly. Engelbrecht (2010) introduced a
Heterogeneous PSO (HPSO). In the standard PSO and
most of its modifications, particles follow the same
behaviors. That is, particles implement the same velocity
and position update rules and they exhibit the same search
characteristics. In HPSO particles are allowed to follow
different search behaviors in terms of the particle position
and velocity updates selected from a behavior pool,
thereby efficiently addressing the exploration-exploitation
tradeoff problem. Two versions of HPSO were proposed: a static one, where behaviors do not change, and a dynamic one, where a particle may change its
behavior during the search process if it cannot improve its
personal best position. For the purposes of this
preliminary study, five different behaviors were included
in the behavior pool. A preliminary empirical analysis
showed that much can be gained by using heterogeneous
swarms. Recently, the present authors proposed effective modifications to the original particle swarm optimization algorithm and applied these modified techniques in the fields of civil and electrical engineering (Eslami et al., 2010a, b, c, d, 2011a, b, c, d, e; Khajehzadeh et al., 2010a, b, 2011a, b, 2012). Table 1 summarizes the modifications of the PSO.
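Many of the variants in Table 1 amount to small changes to the canonical velocity and position update. The following minimal Python sketch shows inertia-weight PSO with an extra random disturbance term in the velocity update, loosely in the spirit of the disturbance-term variants surveyed above; the function name and all parameter values are illustrative assumptions, not any author's exact formulation.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=1,
                 w=0.7, c1=1.5, c2=1.5, disturbance=0.01,
                 lo=-5.0, hi=5.0):
    """Minimal inertia-weight PSO sketch with a small random disturbance
    added to each velocity update (illustrative parameter choices)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d])
                             + disturbance * rng.uniform(-1, 1))  # disturbance term
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pbest_val[i]:                # greedy pbest/gbest updates
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# e.g., minimizing the sphere function
best, val = pso_minimize(lambda x: sum(t * t for t in x), dim=3)
```

Swapping the fixed `disturbance` for a decreasing schedule, or the constant `w` for a dynamic inertia weight, reproduces other rows of Table 1 within the same skeleton.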
Hybridization of PSO algorithm: Hybridization is a
growing area of intelligent systems research, which aims
to combine the desirable properties of different
approaches to mitigate their individual weaknesses. A
natural evolution of the population based search
algorithms like that of PSO can be achieved by integrating
the methods that have already been tested successfully for
solving complex problems (Thangaraj et al., 2011).
Researchers have enhanced the performance of PSO by
incorporating in it the fundamentals of other popular
techniques like selection, mutation and crossover of GA
and DE. Also, attempts have been made to improve the
performance of other evolutionary algorithms like GA,
DE, and ACO etc. by integrating in them the velocity and
position update equations of PSO. The main goal, as we see it, is to harness the strong points of the algorithms in order to keep a balance between exploration and exploitation, thereby preventing stagnation of the population and premature convergence. A
selection of hybrid algorithms in which PSO is one of the
prime algorithms is briefly surveyed here.
Hybridization with Evolutionary Algorithms (EAs),
including GAs, has been a popular strategy for improving
PSO performance. With both approaches being population
based, such hybrids are readily formulated. Angeline
(1998a, b) applied a tournament selection process that
replaced each poorly performing particle’s velocity and
position with those of better performing particles. This
movement in space improved performance in three out of
four test functions but moved away from the social
metaphor of PSO. Brits et al. (2002) used GCPSO (van
den Bergh and Engelbrecht, 2002) in their niching PSO
(Niche PSO) algorithm. Borrowing techniques from GAs,
the Niche PSO initially sets up sub-swarm leaders by
training the main swarm using Kennedy and Eberhart's (1997) cognition-only model. Niches are then identified and a sub-swarm radius is set; as the optimization progresses, particles are allowed to join sub-swarms, which are in turn allowed to merge. Once particle velocities have minimized, the particles have converged to their sub-swarm's optimum. The
technique successfully converged every time although the
authors confess that results were very dependent on the
swarm being initialized correctly (using Faure sequences).
Robinson et al. (2002), trying to optimize a profiled
corrugated horn antenna, noted that a GA improved faster
early in the run, and PSO improved later. As a
consequence of this observation, they hybridized the two
algorithms by switching from one to the other after
several hundred iterations. They found the best horn by
going from PSO to GA (PSO-GA) and noted that the
particle swarm by itself outperformed both the GA by
itself and the GA-PSO hybrid, though the PSO-GA hybrid
performed best of all. It appears from their result that PSO
more effectively explores the search space for the best
region, while GA is effective at finding the best point
once the population has converged on a single region; this
is consistent with other findings. Grimaldi et al. (2004)
proposed a hybrid technique combining GA and PSO
called Genetical Swarm Optimization (GSO) for solving
combinatorial optimization problems. In GSO, the
population is divided into two parts and is evolved with
these two techniques in every iteration. The populations
are then recombined in the updated population, that is
again divided randomly into two parts in the next iteration
for another run of genetic or particle swarm operators.
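The split-and-recombine loop described above can be sketched as follows. This is a hedged illustration of a GSO-style iteration under simplified, assumed operators (tournament selection, arithmetic crossover, Gaussian mutation, and a drift-toward-best PSO move), not Grimaldi et al.'s exact algorithm.

```python
import random

def gso_step(pop, fitness, rng, pso_frac=0.5):
    """One illustrative GSO-style iteration: split the population at
    random, update one part with GA operators and the other with a
    simplified PSO move, then recombine the two parts."""
    rng.shuffle(pop)
    cut = int(len(pop) * pso_frac)
    pso_part, ga_part = pop[:cut], pop[cut:]
    best = min(pop, key=fitness)

    # PSO-like move: drift each individual toward the current best solution
    new_pso = [[x + rng.random() * (b - x) for x, b in zip(ind, best)]
               for ind in pso_part]

    # GA part: binary tournament, arithmetic crossover, Gaussian mutation
    def tournament():
        a, b = rng.sample(ga_part, 2)
        return a if fitness(a) < fitness(b) else b

    new_ga = []
    for _ in ga_part:
        p1, p2 = tournament(), tournament()
        child = [0.5 * (x + y) + rng.gauss(0, 0.05) for x, y in zip(p1, p2)]
        new_ga.append(child)

    return new_pso + new_ga

rng = random.Random(0)
sphere = lambda v: sum(x * x for x in v)
pop = [[rng.uniform(-3, 3) for _ in range(2)] for _ in range(30)]
for _ in range(60):
    pop = gso_step(pop, sphere, rng)
best = min(pop, key=sphere)
```

The random re-shuffle each iteration is what distinguishes this from simply running GA and PSO on fixed halves: every individual is eventually touched by both kinds of operator.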
Several hybridization strategies (static, dynamic, alternate, self-adaptive, etc.) were proposed for the GSO algorithm and validated on multimodal benchmark problems (Gandelli et al., 2007); applications of GSO have also been shown in (Gandelli et al., 2006; Grimaccia et al., 2007). Juang (2004)
introduced a new hybridization of GA and PSO
(HGAPSO) where the upper-half of the best-performing
individuals in a population are regarded as elites.
However, instead of being reproduced directly to the next
generation, these elites are first enhanced. The group
constituted by the elites is regarded as a swarm, and each
elite corresponds to a particle within it. In this regard, the
elites are enhanced by PSO, an operation which mimics
the maturing phenomenon in nature. An improved GA
called GA-PSO was proposed by Kim and Park (2006), using PSO and the concept of Euclidean distance. In GA-PSO, the performance of the GA was improved by applying PSO and Euclidean distance in the mutation procedure of the GA. They applied the algorithm to obtain the local and global optima of the Foxhole function. A hybrid technique
combining two heuristic optimization techniques, GA and
PSO, was proposed for the global optimization of
multimodal functions (Kao and Zahara, 2008). Denoted as
GA-PSO, this hybrid technique incorporates concepts
from GA and PSO and creates individuals in a new
generation not only by crossover and mutation operations
as found in GA but also by mechanisms of PSO. A Hybrid
PSO (HPSO) was proposed (Shunmugalatha and
Slochanal, 2008), which incorporates the breeding and
subpopulation process in GA into PSO. The
implementations of HPSO on test systems showed that it converges to a better solution much faster. A hybrid
constrained GA and PSO method for the evaluation of the
load flow in heavy-loaded power systems was developed
by Ting et al. (2008). The new algorithm was used to find
the maximum loading points of three IEEE test systems.
A hybrid algorithm called GA/PSO was proposed for solving multi-objective optimization problems and was validated on test problems as well as on engineering design problems (Jeong et al., 2009). In this
algorithm, multiple solutions are first generated randomly as an initial population and the objective function value is evaluated for each solution. After the evaluation, the population is divided into two sub-populations, one of which is updated by the GA operation while the other is updated by the PSO operation. Valdez
et al. (2010) described a new hybrid approach for
optimization combining PSO and GA using fuzzy logic to
integrate the results of both methods and for parameters
tuning. The new optimization method combined the
advantages of PSO and GA to give us an improved FPSO
+ FGA hybrid approach. Fuzzy logic was used to combine
the results of the PSO and GA in the best way possible.
Premalatha and Natarajan (2009) proposed a discrete PSO
with GA operators for document clustering. The strategy
induces a reproduction operation by GA operators when
the stagnation in movement of the particle is detected.
They named their algorithm as DPSO with mutation and
crossover. Abdel-Kader (2010) proposed GAI-PSO
algorithm, which combines the standard velocity and
position update rules of PSO with the ideas of selection
and crossover from GAs. The GAI-PSO algorithm
searches the solution space to find the optimal initial
cluster centroids for the next phase. The second phase is
a local refining stage utilizing the k-means algorithm
which can efficiently converge to the optimal solution.
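The two-phase idea, a global phase that supplies initial cluster centroids followed by a k-means refining stage, can be sketched as below. The k-means step is standard Lloyd iteration; the initial centroids here are hard-coded stand-ins for what a PSO phase would produce, so the snippet is an assumption-laden illustration rather than the GAI-PSO implementation.

```python
import random

def kmeans_refine(points, centroids, iters=20):
    """Lloyd's k-means refinement: starting from centroids supplied by a
    global phase (e.g., a PSO search), alternately assign each point to
    its nearest centroid and move each centroid to its cluster mean."""
    k = len(centroids)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # leave empty clusters where they are
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids

# Two well-separated 2-D blobs; the rough guesses stand in for
# centroids that the global (PSO) phase would hand over.
rng = random.Random(0)
pts = ([[rng.gauss(0, 0.1), rng.gauss(0, 0.1)] for _ in range(50)] +
       [[rng.gauss(5, 0.1), rng.gauss(5, 0.1)] for _ in range(50)])
cents = kmeans_refine(pts, [[1.0, 1.0], [4.0, 4.0]])
```

The division of labor mirrors the text: the global phase avoids k-means' sensitivity to initialization, while the local phase converges quickly once the centroids are in the right basins.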
Kuo and Lin (2010) proposed an evolutionary-based
clustering algorithm based on a hybrid of GA and PSO for
order clustering in order to reduce Surface Mount
Technology (SMT) setup time. Hendtlass (2001) applied
the DE perturbation approach to adapt particle positions.
In this algorithm, named SDEA, particles’ positions are
updated only if their offspring have better fitness. The DE reproduction process is applied to the particles in the swarm at specified intervals, at which the PSO swarm serves as the population for the DE algorithm, and the
DE is executed for a number of generations. After
execution of DE, the evolved population is further
optimized using PSO. Zhang and Xie (2003) also used
different techniques in tandem, rather than combining
them, in proposed DEPSO. In this case the DE and
canonical PSO operators were used on alternate
generations; when DE was in use, the trial mutation
replaced the individual best at a rate controlled by a
crossover constant and a random dimension selector that
ensured at least one mutation occurred each time. Kannan
et al. (2004) applied DE to each particle for a finite
number of iterations, and replaced the particle with the
best individual obtained from the DE process. An
algorithm named DEPSO was introduced by Talbi and
Batouche (2004). It differs from the DEPSO algorithm
proposed by Zhang and Xie (2003) as the DE operators
are applied only to the best particle obtained by PSO. This
algorithm was applied on medical image processing
problem. Das et al. (2008) suggested a method of
adjusting velocities of the particles in PSO with a vector
differential operator borrowed from the DE family. In the
proposed scheme the cognitive term is omitted; instead
the particle velocities are perturbed by a new term
containing the weighted difference of the position vectors of two distinct particles randomly chosen from the swarm. This differential velocity term is inspired by the DE mutation scheme, and hence the authors named this algorithm PSO-DV (particle swarm with differentially
perturbed velocity). A hybrid of the barebones PSO and
DE (BBDE) was proposed by Omran et al. (2009). DE
was used to mutate, for each particle, the attractor
associated with that particle, defined as a weighted
average of its personal and neighborhood best positions.
The performance of the proposed approach was
investigated and compared with DE, a Von Neumann
particle swarm optimizer and a barebones particle swarm
optimizer. The experiments conducted show that the
BBDE provides excellent results, with the added advantage of requiring little or almost no parameter tuning. Zhang
et al. (2009) developed a hybrid of DE and PSO called
DE-PSO, where three alternative updating strategies are
used. The DE updating strategy is executed once in every
l generations and if a particle’s best encountered position
and the position of its current best neighbor are equal then
random updating strategy is executed otherwise PSO
updating strategy is used. A novel hybrid algorithm
named PSO-DE was introduced (Liu et al., 2010), in
which DE is incorporated to update the previous best
positions of PSO particles to force them to jump out of local attractors and prevent stagnation of the population.
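A minimal sketch of this general idea, applying a DE/rand/1/bin step to the swarm's personal best positions with greedy replacement, is shown below. Function names and parameter settings are assumptions for illustration, not the exact PSO-DE scheme of Liu et al. (2010).

```python
import random

def de_update_pbests(pbest, pbest_val, f, rng, F=0.5, CR=0.9):
    """DE/rand/1/bin step applied to the personal best positions: each
    pbest is challenged by a DE trial vector and replaced only if the
    trial improves it, nudging stagnant particles out of local attractors."""
    n, dim = len(pbest), len(pbest[0])
    for i in range(n):
        a, b, c = rng.sample([j for j in range(n) if j != i], 3)
        jrand = rng.randrange(dim)  # guarantee at least one mutated dimension
        trial = [pbest[a][d] + F * (pbest[b][d] - pbest[c][d])
                 if (rng.random() < CR or d == jrand) else pbest[i][d]
                 for d in range(dim)]
        v = f(trial)
        if v < pbest_val[i]:        # greedy selection keeps pbests monotone
            pbest[i], pbest_val[i] = trial, v
    return pbest, pbest_val

rng = random.Random(0)
sphere = lambda v: sum(x * x for x in v)
pb = [[rng.uniform(-2, 2) for _ in range(3)] for _ in range(12)]
pv = [sphere(p) for p in pb]
for _ in range(100):
    pb, pv = de_update_pbests(pb, pv, sphere, rng)
```

Because the DE step only ever improves a personal best, it can be interleaved with ordinary PSO velocity updates without breaking the invariant that pbest values are non-increasing.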
Caponio et al. (2009) proposed the super-fit memetic
differential evolution (SFMDE) by hybridizing DE and
PSO with two local search methods: the Nelder-Mead algorithm and the Rosenbrock algorithm. PSO assists the DE
in the beginning of the optimization process by helping to
generate a super-fit individual and then the local searches
are applied adaptively by means of a parameter which
measures the quality of super-fit individual. SFMDE was
used for solving unconstrained standard benchmark
problems and two engineering problems. An improved
hybrid algorithm based on conventional PSO and DE
(PSO-DE) was proposed (Khamsawang et al., 2010) for
solving an economic dispatch problem with the generator
constraints. In PSO-DE, the mutation operators of the DE
are used for improving diversity exploration of PSO and
are activated if velocity values of PSO are near to zero or
violate the boundary conditions. Besides GA and DE,
PSO has been hybridized with some other local and global
search techniques as well. Shelokar et al. (2007) proposed
a hybrid of PSO with ACO and called their algorithm
PSACO. It is a two-stage algorithm, in which PSO is applied in the first stage and ACO is implemented in the second stage. Here, ACO works as a local search
procedure in which the ‘ants’ apply the pheromone guided
mechanism to update the positions found by the particles
in the earlier stages. Hendtlass and Randall (2001)
proposed a hybridization of PSO with ACO. A list of best
positions found so far is recorded and the neighborhood
best is randomly selected from the list instead of the
current neighborhood best. More recently, Holden and
Freitas (2008) introduced a hybrid PSO/ACO algorithm
for hierarchical classification. This was applied to the
functional classification of enzymes, with very promising
results. Victoire and Jeyakumar (2004) hybridized PSO
with another algorithm called Sequential Quadratic
Programming (SQP) for solving economic dispatch
problem. PSO was applied as a main optimizer while SQP
was applied to fine tune the solution obtained after every
PSO run. The best values obtained by PSO at the end of
each iteration are treated as a starting point of SQP and
are further improved with the hope of finding a better
solution. Grosan et al. (2005) proposed a variant of the
PSO technique named Independent Neighborhoods PSO
(INPSO) dealing with sub-swarms for solving the well
known geometrical place problems. In case of such
problems the search space consists of more than one point
each of which is to be located. The performance of the
INPSO approach is compared with Geometrical Place
Evolutionary Algorithms (GPEA). To enhance the
performance of the INPSO approach, a hybrid algorithm
combining INPSO and GPEA is also proposed. Chuanwen
and Bompard (2005a, b) hybridized the chaotic version of
PSO with linear interior point to handle the problems
remaining in the traditional arithmetic of time consuming
convergence and demanding initial values. They classified
their algorithm into two phases; first the chaotic PSO is
applied to search the solution space and then linear
interior method is used to search the global optimum. Liu
et al. (2007b) hybridized a Turbulent PSO (TPSO) with a
fuzzy logic controller to produce a Fuzzy Adaptive TPSO
(FATPSO). The TPSO used the principle that PSO’s
premature convergence is caused by particles stagnating
about a sub-optimal location. A minimum velocity was introduced, with the velocity memory being replaced by a random turbulence operator when a particle's velocity dropped below it.
The fuzzy logic extension was then applied to adaptively
regulate the velocity parameters during an optimization
run thus enabling coarse-grained explorative searches to
occur in the early phases before being replaced by fine-grained exploitation later. A hybridization of PSO with
Tabu Search (TS) was proposed by Sha and Hsu (2006)
for solving Job Shop Problem (JSP). They modified the
PSO for solving the discrete JSP and applied TS for
refining the quality of solutions. Another hybrid of PSO
was suggested with Simulated Annealing (SA) for solving
constrained optimization problems (He and Wang, 2007).
SA was applied to the best solution of the swarm to help the
algorithm escape from local optima. Mo et al. (2007)
proposed a new algorithm for function optimization
problems, Particle Swarm assisted Incremental Evolution
Strategy (PIES), which is designed for enhancing the
performance of evolutionary computation techniques by
evolving the input variables incrementally. In this version
of PSO, the process of evolution is divided into several
phases. Each phase is composed of two steps; in the first
step, a population is evolved with regard to a certain
concerned variable on some moving cutting plane
adaptively. In the next step the better-performing
individuals obtained from step one and the population
obtained from the last phase are joined together in second
step to form an initial population. Here, PSO is applied as
a global phase while Evolutionary strategy is applied as a
local phase. Fan et al. (2004) and Fan and Zahara (2007) hybridized PSO with a well-known local search, namely the Nelder-Mead simplex method, and applied it to unconstrained benchmark problems. This hybrid version was named NM-PSO and was later extended to constrained optimization problems by Zahara and Kao (2009). Also, a hybrid technique combining the k-means algorithm with NM-PSO, called K-NM-PSO, was applied to the data clustering problem (Kao et al., 2008).
Another hybrid PSO algorithm for cluster analysis was
proposed by Firouzi et al. (2008). PSO was integrated
with SA and k-means algorithms. Shen et al. (2008)
developed a hybrid algorithm of PSO and TS called
HPSOTS approach for gene selection for tumor
classification. TS was incorporated as a local
improvement procedure to enable the algorithm to
overleap local optima. A hybrid of PSO and Artificial
Immune System (AIS) was introduced for job scheduling
problem (Ge et al., 2008). An effective Hybrid Particle
Swarm Cooperative Optimization (HPSCO) algorithm
combining SA method and simplex method was proposed
by Song et al. (2008). The main idea of HPSCO is to divide the particle swarm into several sub-groups and achieve optimization through cooperation among the different sub-groups. Ramana et al. (2009)
introduced a hybrid PSO, which combined the merits of
the parameter-free PSO (pf-PSO) and the extrapolated
PSO like algorithm (ePSO). Various hybrid combinations
of pf-PSO and ePSO methods were considered and tested
with the standard benchmark problems. Kuo (2009)
presented a new hybrid PSO model named HPSO that
combines a random-key encoding scheme, individual
enhancement scheme, and PSO for solving the flow-shop
scheduling problem. Chen et al. (2010) proposed a hybrid
of PSO and Extremal Optimization (EO), called the PSO-EO algorithm, where EO is invoked in PSO at intervals of l generations (l is predefined by the user). A heuristic and a discrete heuristic particle swarm ant colony optimization (HPSACO and DHPSACO) were presented for the optimum design of trusses by Kaveh and Talatahari (2009a, b). A hybrid harmony search algorithm with swarm intelligence (HHS) was introduced to solve the dynamic economic load dispatch problem (Pandi and Panigrahi, 2011); this approach attempts to hybridize the HS algorithm with the powerful population-based PSO algorithm for better convergence. Kayhan et al. (2010) proposed a new hybrid global-local optimization algorithm named PSOLVER that combines PSO and a spreadsheet "Solver" to solve continuous optimization problems. In the proposed algorithm, PSO and Solver are applied as the global and local optimizers, respectively; thus, PSO and Solver work mutually, feeding each other initial and sub-initial solution points to produce fine initial solutions and avoid local optima.

Table 2: A summary of hybridized PSO with other optimization algorithms

| Author/s | Context | Application |
| --- | --- | --- |
| Hendtlass (2001) | SDEA | Unconstrained global optimization |
| Hendtlass and Randall (2001) |  | Discrete optimization problems |
| Robinson et al. (2002) | GA-PSO & PSO-GA | Engineering design |
| Wei et al. (2002) | SFEP | Unconstrained global optimization |
| Zhang and Xie (2003) | DEPSO | Unconstrained global optimization |
| Grimaldi et al. (2004) | GSO | Electromagnetic application |
| Juang (2004) | GAPSO | Network design |
| Kannan et al. (2004) |  | Generation expansion planning |
| Talbi and Batouche (2004) | DEPSO | Medical image processing |
| Victoire and Jeyakumar (2004) | PSO-SQP | Power systems |
| Fan et al. (2004) | NM-PSO | Multimodal functions |
| Grosan et al. (2005) | INPSO | Geometric place problems |
| Chuanwen and Bompard (2005a, b) | New PSO | Power systems |
| Sha and Hsu (2006) | HPSO | Job shop scheduling |
| Gandelli et al. (2007) | GSO | Unconstrained global optimization |
| Liu et al. (2007b) | FATPSO | Unconstrained global optimization |
| He and Wang (2007) | HPSO | Constrained optimization |
| Mo et al. (2007) | PIES | Unconstrained global optimization |
| Fan and Zahara (2007) | NM-PSO | Unconstrained global optimization |
| Shelokar et al. (2007) | PSACO | Unconstrained global optimization |
| Kao and Zahara (2008) | GA-PSO | Unconstrained global optimization |
| Shunmugalatha and Slochanal (2008) | HPSO | Power system |
| Ting et al. (2008) | CGA/PSO | Power system |
| Das et al. (2008) | PSO-DV | Engineering design |
| Holden and Freitas (2008) | PSO/ACO | Data mining |
| Kao et al. (2008) | K-NM-PSO | Data clustering |
| Firouzi et al. (2008) | PSO-SA-K | Cluster analysis |
| Shen et al. (2008) | HPSOTS | Gene selection and tumor classification |
| Ge et al. (2008) | HIA | Job shop scheduling |
| Song et al. (2008) | HPSCO | Industrial optimization |
| Pant et al. (2008) | AMPSO | Unconstrained global optimization |
| Cui et al. (2008) | Hybrid algorithm | Unconstrained global optimization |
| Jeong et al. (2009) | GA/PSO | Multi-objective optimization problems |
| Valdez et al. (2010) | FPSO + FGA | Unconstrained optimization problems |
| Premalatha and Natarajan (2009) | DPSO with mutation and crossover | Document clustering |
| Omran et al. (2009) | BBDE | Unconstrained optimization problems |
| Zhang et al. (2009) | DE-PSO | Unconstrained optimization |
| Caponio et al. (2009) | SFMDE | Unconstrained global optimization |
| Zahara and Kao (2009) | NM-PSO | Engineering design |
| Ramana et al. (2009) | pf-PSO, ePSO | Unconstrained global optimization |
| Kuo (2009) | HPSO | Flow shop scheduling |
| Kaveh and Talatahari (2009a) | HPSACO | Engineering design |
| Kaveh and Talatahari (2009b) | DHPSACO | Engineering design |
| Abdel-Kader (2010) | GAI-PSO | Data clustering |
| Kuo and Lin (2010) | HGAPSOA | Data clustering |
| Liu et al. (2010) | PSO-DE | Constrained optimization and engineering problems |
| Khamsawang et al. (2010) | PSO-DE | Power systems |

The idea of
embedding swarm directions in Fast Evolutionary
Programming (FEP) (Yao et al., 1999) to improve the performance of the latter was proposed by Wei et al. (2002), who applied their algorithm to a set of six unconstrained benchmark problems. The AMPSO algorithm proposed by Pant et al. (2008) is a hybrid version of PSO that includes an EP-based adaptive mutation operator using the Beta distribution. Another hybridized PSO was suggested by Cui et al. (2008), combining PSO with a Fitness Uniform Selection Strategy (FUSS) and a Random Walk Strategy (RWS); the former induces a weak selection pressure into the standard PSO algorithm, while the latter enhances the exploration capabilities to escape local minima. A summary of the main features of the PSO versions hybridized with other optimization algorithms is given in Table 2. The distribution of publications of research articles having hybrid versions of PSO is shown in the pie chart of Fig. 4.

Fig. 4: Pie chart showing the distribution of publication of research articles having hybrid versions of PSO

CONCLUSION

PSO is a powerful optimization technique that has been applied to a wide range of search and optimization problems. It has gathered considerable interest from the natural computing research community and has been seen to offer rapid and effective optimization of complex multidimensional search spaces. This review has considered the background and history of PSO and its place within the broader natural computing paradigm. The review then moved on to consider various refinements to the original formulation of PSO, as well as modifications and hybridizations of the algorithm. These have been obtained by identifying and analyzing the publications stored in the IEEE Xplore and ISI Web of Knowledge databases at the time of writing. The rate of growth of PSO publications is particularly remarkable (Fig. 4); the number of publications reporting PSO applications has grown nearly exponentially over the last few years. Clearly, the algorithm shines for its simplicity and for the ease with which it can be adapted to different application domains and hybridized with other techniques. The next decade will no doubt see further refinement of the approach and integration with other techniques, as well as applications moving out of the research laboratory and into industry and commerce.

ACKNOWLEDGMENT

The authors are grateful to Universiti Kebangsaan Malaysia (UKM) for supporting this study under grant UKM-DLP-2011-059.

REFERENCES
Abdel-Kader, R.F., 2010. Genetically improved PSO
algorithm for efficient data clustering. In Proceedings
of International Conference on Machine Learning
and Computing, pp: 71-75.
Ali, M. and P. Kaelo, 2008. Improved particle swarm
algorithms for global optimization. Appl. Math.
Comput., 196: 578-593.
Alvarez-Benitez, J.E., R.M. Everson and J.E. Fieldsend,
2005. A MOPSO algorithm based exclusively on
Pareto dominance concepts. Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, Guanajuato, Mexico, pp: 459-473.
Angeline, P.J., 1998a. Evolutionary optimization versus
particle swarm optimization: Philosophy and
performance differences. Proceedings of the 7th
Annual Conference on Evolutionary Programming,
San Diego, USA, pp: 601-610.
Angeline, P.J., 1998b. Using selection to improve particle
swarm optimization. The IEEE International
Conference on Evolutionary Computation
Proceedings, Anchorage, AK, USA, pp: 84-89.
Azadani, E.N., S. Hosseinian and B. Moradzadeh, 2010.
Generation and reserve dispatch in a competitive
market using constrained particle swarm
optimization. Int. J. Electr. Power Energ. Syst., 32:
79-86.
Brits, R., A.P. Engelbrecht and F. Van den Bergh, 2002.
A niching particle swarm optimizer. Proceedings of
the 4th Asia-Pacific Conference on Simulated
Evolution and Learning, pp: 692-696.
Caponio, A., F. Neri and V. Tirronen, 2009. Super-fit
control adaptation in memetic differential evolution
frameworks. Soft Comput., 13: 811-831.
Carvalho, D.F. and C.J.A. Bastos-Filho, 2009. Clan particle
swarm optimization. IEEE Congress on Evolutionary
Computation, pp: 3044-3051.
Chen, H., Y. Zhu, K. Hu and T. Ku, 2008. Global
optimization based on hierarchical coevolution
model. IEEE Congress on Evolutionary Computation,
pp: 1497-1504.
Chen, M.R., X. Li, X. Zhang and Y.Z. Lu, 2010. A novel
particle swarm optimizer hybridized with extremal
optimization. Appl. Soft Comput., 10: 367-373.
Chuang, L.Y., S.W. Tsai and C.H. Yang, 2008. Catfish
particle swarm optimization. IEEE Swarm
Intelligence Symposium, Louis, Missouri, pp: 1-5.
Chuang, L.Y., S.W. Tsai and C.H. Yang, 2011. Chaotic
catfish particle swarm optimization for solving global
numerical optimization problems. Appl. Math.
Comput., 217: 6900-6916.
Chuanwen, J. and E. Bompard, 2005a. A hybrid method
of chaotic particle swarm optimization and linear
interior for reactive power optimisation. Math.
Comput. Simulat., 68: 57-65.
Chuanwen, J. and E. Bompard, 2005b. A self-adaptive
chaotic particle swarm algorithm for short term
hydroelectric system scheduling in deregulated
environment. Energ. Convers. Manage., 46:
2689-2696.
Clerc, M., 1999. The swarm and the queen: towards a
deterministic and adaptive particle swarm
optimization. Proceedings of the Congress on
Evolutionary Computation, Washington D.C., USA,
pp: 1951-1957.
Clerc, M. and J. Kennedy, 2002. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evolut. Comput., 6: 58-73.
Coelho, L.S. and V.C. Mariani, 2009. A novel chaotic
particle swarm optimization approach using Hénon
map and implicit filtering local search for economic
load dispatch. Chaos Solitons Fract., 39: 510-518.
Cui, Z., X. Cai, J. Zeng and G. Sun, 2008. Particle swarm
optimization with FUSS and RWS for high
dimensional functions. Appl. Math. Comput., 205:
98-108.
Cui, Z. and J. Zeng, 2004. A guaranteed global
convergence particle swarm optimizer. Proceedings
of 4th International Conference on Rough Sets and
Current Trends in Computing, Uppsala, Sweden, pp:
762-767.
Das, S., A. Abraham and A. Konar, 2008. Particle swarm
optimization and differential evolution algorithms:
Technical analysis, applications and hybridization
perspectives. Advances of Computational
Intelligence in Industrial Systems, Studies in
Computational Intelligence, pp. 1-38.
De Carvalho, D.F. and C.J.A. Bastos-Filho, 2009. Clan
particle swarm optimization. Int. J. Intell. Comput.
Cybern., 2: 197-227.
Dorigo, M., V. Maniezzo and A. Colorni, 1991. Positive
feedback as a search strategy. Technical Report 91-016, Dipartimento di Elettronica, Politecnico di
Milano, IT, pp: 91-106.
Ecker, J. and M. Kupferschmid, 1988. Introduction to
Operations Research, Wiley, New York.
Engelbrecht, A., 2010. Heterogeneous particle swarm
optimization. Proceeding of the 7th International
Conference on Swarm Intelligence, pp: 191-202.
Eslami, M., H. Shareef, A. Mohamed and S. Ghoshal, 2010. Tuning of power system stabilizers using
particle swarm optimization with passive
congregation. Int. J. Phys. Sci., 5(17): 2574-2589.
Eslami, M., H. Shareef and A. Mohamed, 2011. Power
system stabilizer design using hybrid multi-objective
particle swarm optimization with chaos. J. Cent.
South Univ. Technol., 18(5): 1579-1588.
Eslami, M., H. Shareef and A. Mohamed, 2010. Optimal
Tuning of Power System Stabilizers Using Modified
Particle Swarm Optimization. Proceedings of the
14th International Middle East Power Systems
Conference (MEPCON’10), Cairo University, Egypt,
IEEE, pp: 386-391.
Eslami, M., H. Shareef, A. Mohamed and
M. Khajehzadeh, 2010. Improved particle swarm
optimization with disturbance term for multi-machine
power system stabilizer design. Aust. J. Basic Appl.
Sci., 4(12): 5768-5779.
Eslami, M., H. Shareef, A. Mohamed and
M. Khajehzadeh, 2010. A hybrid PSO technique for
damping electro-mechanical oscillations in large
power system. Proceedings of the 2010 IEEE Student Conference on Research and Development (SCOReD 2010): Engineering Innovation and Beyond, Kuala Lumpur, Malaysia, pp: 442-447.
Eslami, M., H. Shareef, A. Mohamed and
M. Khajehzadeh, 2011. Coordinated design of PSS
and SVC Damping Controller Using CPSO.
Proceedings of the 5th International Power
Engineering and Optimization Conference
(PEOCO2011), Malaysia, pp: 11-16.
Eslami, M., H. Shareef, A. Mohamed and M. Khajehzadeh, 2011. Optimal location of PSS using improved PSO with chaotic sequence. Proceedings of the 1st International Conference on Electrical, Control and Computer Engineering (InECCE 2011), Kuantan, Malaysia, pp: 253-258.
Eslami, M., H. Shareef and A. Mohamed, 2011. Optimization and coordination of damping controls for optimal oscillations damping in multi-machine power system. Int. Rev. Electr. Eng., 6(4): 1984-1993.
Eslami, M., H. Shareef and A. Mohamed, 2011. Particle
swarm optimization for simultaneous tuning of static
var compensator and power system stabilizer. Prz.
Elektrotechniczny (Electr. Rev.), 87(9A): 343-347.
Fan, H. and Y. Shi, 2001. Study of Vmax of the particle
swarm optimization algorithm. In the Workshop on
Particle Swarm Optimization, Indianapolis, IN:
Purdue School of Engineering and Technology, pp:
1193.
Fan, S.K.S., Y. Liang and E. Zahara, 2004. Hybrid
simplex search and particle swarm optimization for
the global optimization of multimodal functions. Eng.
Optimiz., 36: 401-418.
Fan, S.K.S. and E. Zahara, 2007. A hybrid simplex search
and particle swarm optimization for unconstrained
optimization. Eur. J. Oper. Res., 181: 527-548.
Firouzi, B., T. Niknam and M. Nayeripour, 2008. A new
evolutionary algorithm for cluster analysis. World
Academy of Science, Engineering, and Technology,
pp: 605-609.
Gandelli, A., F. Grimaccia, M. Mussetta, P. Pirinoli and
R. Zich, 2007. Development and validation of
different hybridization strategies between GA and
PSO. In Proceedings of the IEEE Congress on
Evolutionary Computation, pp: 2782-2787.
Gandelli, A., F. Grimaccia, M. Mussetta, P. Pirinoli and
R. Zich, 2006. Genetical swarm optimization: an
evolutionary algorithm for antenna design.
AUTOMATIKA, 47: 105-112.
Ge, H.W., L. Sun, Y.C. Liang and F. Qian, 2008. An
effective PSO and AIS-based hybrid intelligent
algorithm for job-shop scheduling. IEEE Trans. Syst.
Man Cybern., Part A: Syst. Hum., 38: 358-368.
Grimaccia, F., M. Mussetta and R.E. Zich, 2007.
Genetical swarm optimization: Self-adaptive hybrid
evolutionary algorithm for electromagnetics. IEEE
Trans. Antenn. Propag., 55: 781-785.
Grimaldi, E.A., F. Grimaccia, M. Mussetta, P. Pirinoli
and R. Zich, 2004. A new hybrid genetical-swarm
algorithm for electromagnetic optimization. In
Proceedings of International Conference on
Computational Electromagnetics and its
Applications, Beijing, China, pp: 157-160.
Grosan, C., A. Abraham and M. Nicoara, 2005. Search
optimization using hybrid particle sub-swarms and
evolutionary algorithms. Int. J. Simul. Syst. Sc.
Technol., 6: 60-79.
He, Q. and C. Han, 2006. An improved particle swarm
optimization algorithm with disturbance term.
Comput. Intell. Bioinfo., 4115: 100-108.
He, Q. and L. Wang, 2007. A hybrid particle swarm
optimization with a feasibility-based rule for
constrained optimization. Appl. Math. Comput., 186:
1407-1422.
He, S., Q. Wu, J. Wen, J. Saunders and R. Paton, 2004. A
particle swarm optimizer with passive congregation.
Biosystems, 78: 135-147.
Hendtlass, T., 2001. A combined swarm differential
evolution algorithm for optimization problems.
Proceedings of 14th International Conference on
Industrial and Engineering Applications of Artificial
Intelligence and Expert Systems, Lecture Notes in
Computer Science, pp: 11-18.
Hendtlass, T. and M. Randall, 2001. A survey of ant
colony and particle swarm meta-heuristics and their
application to discrete optimization problems.
Proceedings of the Inaugural Workshop on Artificial
Life, pp: 15-25.
Higashi, N. and H. Iba, 2003. Particle swarm
optimization with Gaussian mutation. In Proceedings
of the IEEE Swarm Intelligence Symposium, SIS
'03., pp: 72-79.
Holden, N. and A.A. Freitas, 2008. A hybrid PSO/ACO algorithm for discovering classification rules in data mining. J. Artif. Evol. Appl., 2008: 1-11.
Holland, J.H., 1992. Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. The MIT Press.
Hu, X. and R.C. Eberhart, 2002. Adaptive particle swarm
optimization: detection and response to dynamic
systems. In Proceedings of congress on Evolutionary
Computation Hawaii, USA, pp: 1666-1670.
Jeong, S., S. Hasegawa, K. Shimoyama and S. Obayashi,
2009. Development and investigation of efficient
GA/PSO-hybrid algorithm applicable to real-world
design optimization. IEEE Comput. Intell. Mag., 4:
36-44.
Jiang, Y., C. Liu, C. Huang and X. Wu, 2010. Improved
particle swarm algorithm for hydrological parameter
optimization. Appl. Math. Comput., 217: 3207-3215.
Jiao, B., Z. Lian and X. Gu, 2008. A dynamic inertia
weight particle swarm optimization algorithm. Chaos,
Solitons Fract., 37: 698-705.
Jie, J., J. Zeng, C. Han and Q. Wang, 2008. Knowledge-based cooperative particle swarm optimization. Appl.
Math. Comput., 205: 861-873.
Juang, C.F., 2004. A hybrid of genetic algorithm and
particle swarm optimization for recurrent network
design. IEEE Trans. Syst. Man Cybern. Part B:
Cybern., 34: 997-1006.
Kannan, S., S.M.R. Slochanal, P. Subbaraj and N.P.
Padhy, 2004. Application of particle swarm
optimization technique and its variants to generation
expansion planning problem. Electr. Power Syst.
Res., 70: 203-210.
Kao, Y.T. and E. Zahara, 2008. A hybrid genetic
algorithm and particle swarm optimization for
multimodal functions. Applied Soft Comput., 8:
849-857.
Kao, Y.T., E. Zahara and I. Kao, 2008. A hybridized
approach to data clustering. Exp. Syst. Appl., 34:
1754-1762.
Kaveh, A. and S. Talatahari, 2009a. A particle swarm ant
colony optimization for truss structures with discrete
variables. J. Const. Steel Res., 65: 1558-1568.
Kaveh, A. and S. Talatahari, 2009b. Particle swarm
optimizer, ant colony strategy and harmony search
scheme hybridized for optimization of truss
structures. Comput. Struct., 87: 267-283.
Kayhan, A.H., H. Ceylan, M.T. Ayvaz and G. Gurarslan,
2010. PSOLVER: A new hybrid particle swarm
optimization algorithm for solving continuous
optimization problems. Exp. Syst. Appl., 37:
6798-6808.
Kennedy, J. and R. Eberhart, 1995. Particle swarm
optimization. Proceedings of the IEEE International
Conference on Neural Networks, Piscataway, pp:
1942-1948.
Kennedy, J. and R.C. Eberhart, 1997. A discrete binary
version of the particle swarm algorithm. IEEE
International Conference on Computational
Cybernetics and Simulation, Orlando, FL, USA, pp:
4104-4108.
Khamsawang, S., P. Wannakarn and S. Jiriwibhakorn,
2010. Hybrid PSO-DE for solving the economic
dispatch problem with generator constraints. In
Proceedings of the IEEE International Conference on
Computer and Automation Engineering, pp: 135-139.
Khajehzadeh, M., M. Taha and A. El-Shafie, 2010.
Modified particle swarm optimization for
probabilistic slope stability analysis. Int. J. Phys. Sci.,
5(15): 2248-2258.
Khajehzadeh, M., M.R. Taha, A. El-Shafie and
M. Eslami, 2011. Modified particle swarm
optimization for optimum design of spread footing
and retaining wall. J. Zhejiang Univ-Sci. A (Appl
Phys and Eng)., 12(6): 415-427.
Khajehzadeh, M., M. Taha and A. El-Shafie, 2011.
Reliability analysis of earth slopes using hybrid
chaotic particle swarm optimization. J. Cent. South
Univ. Technol., 18(5): 1626-1637.
Khajehzadeh, M., M.R. Taha and A. El-Shafie, 2010. Economic design of retaining wall using particle swarm optimization with passive congregation. Aust. J. Basic Appl. Sci., 4(11): 5500-5507.
Khajehzadeh, M., M.R. Taha, A. El-Shafie and
M. Eslami, 2012. Locating the general failure surface
of earth slope using particle swarm optimization.
Civil Eng. Environ. Syst., 29(1): 41-57. DOI: 10.1080/10286608.2012.663356.
Kim, H. and J. Park, 2006. Improvement of genetic
algorithm using PSO and Euclidean data distance.
Int. J. Inf. Technol., 12: 142-148.
Kuo, I., 2009. An efficient flow-shop scheduling
algorithm based on a hybrid particle swarm
optimization model. Exp. Syst. Appl., 36:
7027-7032.
Kuo, R. and L. Lin, 2010. Application of a hybrid of
genetic algorithm and particle swarm optimization
algorithm for order clustering. Decis. Support Syst.,
49: 451-462.
Li, X., 2004. Adaptively choosing neighbourhood bests
using species in a particle swarm optimizer for
multimodal function optimization. Proceedings of the
Congress on Genetic and Evolutionary Computation,
pp: 105-116.
Liu, B., L. Wang, Y.H. Jin, F. Tang and D.X. Huang,
2005. Improved particle swarm optimization
combined with chaos. Chaos Solitons Frac., 25:
1261-1271.
Liu, H., A. Abraham and W. Zhang, 2007a. A fuzzy
adaptive turbulent particle swarm optimisation. Int. J.
Innov. Comput. Appl., 1: 39-47.
Liu, H., Z. Cai and Y. Wang, 2010. Hybridizing particle
swarm optimization with differential evolution for
constrained numerical and engineering optimization.
Appl. Soft Comput., 10: 629-640.
Liu, Y., Z. Qin, Z. Shi and J. Lu, 2007b. Center particle
swarm optimization. Neurocomputing, 70: 672-679.
Lovbjerg, M., 2002. Improving particle swarm
optimization by hybridization of stochastic search
heuristics and self-organized criticality. Master's
Thesis, Department of Computer Science, University
of Aarhus.
Maxfield, A.C.M. and L. Fogel, 1965. Artificial
intelligence through a simulation of evolution.
Biophysics and Cybernetic Systems: Proceedings of the Second Cybernetic Sciences Symposium. Spartan Books, Washington, DC.
Mendes, R., J. Kennedy and J. Neves, 2004. The fully
informed particle swarm: simpler, maybe better.
IEEE Trans. Evolut. Comput., 8: 204-210.
Mo, W., S. Guan and S. Puthusserypady, 2007. A novel
hybrid algorithm for function optimization: particle
swarm assisted incremental evolution strategy. Stud.
Comput. Intell., 75: 101-125.
Mohais, A., R. Mendes, C. Ward and C. Posthoff, 2005.
Neighborhood re-structuring in particle swarm
optimization. Lect. Notes Comput. Sci., 3809:
776-785.
Mohais, A.S., C. Ward and C. Posthoff, 2004.
Randomized directed neighborhoods with edge
migration in particle swarm optimization. Congress
on Evolutionary Computation, pp: 548-555.
Momoh, J.A., R. Adapa and M. El-Hawary, 1999a. A
review of selected optimal power flow literature to
1993. I. Nonlinear and quadratic programming
approaches. IEEE Trans. Power Syst., 14: 96-104.
Momoh, J.A., M. El-Hawary and R. Adapa, 1999b. A
review of selected optimal power flow literature to
1993. II. Newton, linear programming and interior
point methods. IEEE Trans. Power Syst., 14:
105-111.
Omran, M.G.H., P. Engelbrecht and A. Salman, 2009.
Bare bones differential evolution. Eur. J. Oper. Res.,
196: 128-139.
Padhye, N., 2008. Topology optimization of compliant
mechanism using multi-objective particle swarm
optimization. Proceedings of the Conference
Companion on Genetic and Evolutionary
Computation, New York, pp: 1831-1834.
Pandi, V.R. and B.K. Panigrahi, 2011. Dynamic
economic load dispatch using hybrid swarm
intelligence based harmony search algorithm. Exp.
Syst. Appl., 38: 8509-8851.
Pant, M., R. Thangaraj and A. Abraham, 2008. Particle
swarm optimization using adaptive mutation.
Proceedings of 19th International Conference on
Database and Expert Systems Application, Italy, pp:
519-523.
Premalatha, K. and A. Natarajan, 2009. Discrete PSO
with GA operators for document clustering. Int.
J. Recent Trend. Eng., 1: 20-24.
Storn, R. and K. Price, 1995. Differential evolution: A simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, International Computer Science Institute, Berkeley, CA.
Ramana, G., M. Arumugam and C. Loo, 2009. Hybrid
particle swarm optimization algorithm with fine
tuning operators. Int. J. Bio Inspir. Comput., 1:
14-31.
Ratnaweera, A., S.K. Halgamuge and H.C. Watson, 2004.
Self-organizing hierarchical particle swarm optimizer
with time-varying acceleration coefficients. IEEE
Trans. Evolut. Comput., 8: 240-255.
Robinson, J., S. Sinton and Y. Rahmat-Samii, 2002.
Particle swarm, genetic algorithm, and their hybrids:
optimization of a profiled corrugated horn antenna. In
Proceedings of the IEEE International Symposium in
Antennas and Propagation Society, pp: 314-317.
Sha, D. and C.Y. Hsu, 2006. A hybrid particle swarm
optimization for job shop scheduling problem.
Comput. Indust. Eng., 51: 791-808.
Shelokar, P., P. Siarry, V. Jayaraman and B. Kulkarni,
2007. Particle swarm and ant colony algorithms
hybridized for improved continuous optimization.
Applied Math. Comput., 188: 129-142.
Shen, Q., W.M. Shi and W. Kong, 2008. Hybrid particle
swarm optimization and tabu search approach for
selecting genes for tumor classification using gene
expression data. Comput. Biol. Chem., 32: 53-60.
Shi, Y. and R. Eberhart, 1998. A modified particle swarm
optimizer. Evolutionary Computation Proceedings,
IEEE World Congress on Computational
Intelligence, AK, USA, pp: 69-73.
Shi, Y. and R.C. Eberhart, 2001. Fuzzy adaptive particle
swarm optimization. In Proceedings Congress on
Evolutionary Computation, Seoul, Korea, pp:
101-106.
Shunmugalatha, A. and S. Slochanal, 2008. Optimum cost
of generation for maximum loadability limit of power
system using hybrid particle swarm optimization. Int.
J. Electr. Power Energy Syst., 30: 486-490.
Song, S., L. Kong, Y. Gan and R. Su, 2008. Hybrid
particle swarm cooperative optimization algorithm
and its application to MBC in alumina production.
Prog. Natural Sci., 18: 1423-1428.
Stacey, A., M. Jancic and I. Grundy, 2003. Particle swarm
optimization with mutation. The 2003 Congress on Evolutionary Computation (CEC '03), pp: 1425-1430.
Talbi, H. and M. Batouche, 2004. Hybrid particle swarm
with differential evolution for multimodal image
registration. Proceedings of the IEEE International Conference on Industrial Technology, pp: 1567-1572.
Thangaraj, R., M. Pant, A. Abraham and P. Bouvry, 2011.
Particle swarm optimization: Hybridization
perspectives and experimental illustrations. Applied
Math. Comput., 217: 5208-5226.
Ting, T., K. Wong and C. Chung, 2008. Hybrid
constrained genetic algorithm/particle swarm
optimisation load flow algorithm. IET Gener. Trans.
Dis., 2: 800-812.
Tsou, D. and C. MacNish, 2003. Adaptive particle swarm
optimisation for high-dimensional highly convex
search spaces. Proceedings of IEEE Congress on
Evolutionary Computation 2003 (CEC 2003)
Canberra, Australia, pp: 783-789.
Tsoulos, I.G. and A. Stavrakoudis, 2010. Enhancing PSO
methods for global optimization. Applied Math.
Comput., 216: 2988-3001.
Valdez, F., P. Melin and O. Castillo, 2010. An improved
evolutionary method with fuzzy logic for combining
Particle Swarm Optimization and Genetic
Algorithms. Applied Soft Comput., 11: 2625-2632.
Van den Bergh, F., 2002. An analysis of particle swarm
optimizers. Ph.D. Thesis. Department of Computer
Science, University of Pretoria, South Africa.
Van den Bergh, F. and A. Engelbrecht, 2002. A new
locally convergent particle swarm optimiser. In
Proceedings of the IEEE Conference on Systems,
Man, and Cybernetics Hammamet, Tunisia, pp: 1-6.
Van den Bergh, F. and A.P. Engelbrecht, 2004. A
cooperative approach to particle swarm optimization.
IEEE Trans. Evolut. Comput., 8: 225-239.
Victoire, T. and A.E. Jeyakumar, 2004. Hybrid PSO-SQP
for economic dispatch with valve-point effect. Electr.
Power Syst. Res., 71: 51-59.
Wang, L., D. Zheng and Q. Lin, 2001. Survey on chaotic
optimization methods. Comput. Technol. Automat.,
20: 1-5.
Wang, R. and X. Zhang, 2005. Particle swarm
optimization with opposite particles. Advances in
Artificial Intelligence, pp: 633-640.
Wei, C., Z. He, Y. Zhang and W. Pei, 2002. Swarm
directions embedded in fast evolutionary
programming. Proceedings of the World Congress on
Computational Intelligence, pp: 1278-1283.
Xi, M., J. Sun and W. Xu, 2008. An improved quantum-behaved particle swarm optimization algorithm with
weighted mean best position. Appl. Math. Comput.,
205: 751-759.
Xiang, T., X. Liao and K. Wong, 2007. An improved
particle swarm optimization algorithm combined with
piecewise linear chaotic map. Applied Math.
Comput., 190: 1637-1645.
Xie, X.F., W.J. Zhang and Z.L. Yang, 2002a. Adaptive
particle swarm optimization on individual level. 6th
International Conference on Signal Processing, pp:
1215-1218.
Xie, X.F., W.J. Zhang and Z.L. Yang, 2002b. Dissipative
particle swarm optimization. In Proceedings of the
Congress on Evolutionary Computation, pp:
1456-1461.
Yang, X., J. Yuan and H. Mao, 2007. A modified particle
swarm optimizer with dynamic adaptation. Appl.
Math. Comput., 189: 1205-1213.
Yao, X., Y. Liu and G. Lin, 1999. Evolutionary
programming made faster. IEEE Trans. Evolut.
Comput., 3: 82-102.
Zahara, E. and Y.T. Kao, 2009. Hybrid Nelder-Mead
simplex search and particle swarm optimization for
constrained engineering design problems. Expert
Syst. Appl., 36: 3880-3886.
Zhang, C., J. Ning, S. Lu, D. Ouyang and T. Ding, 2009.
A novel hybrid differential evolution and particle
swarm optimization algorithm for unconstrained
optimization. Operat. Res. Lett., 37: 117-122.
Zhang, W.J. and X.F. Xie, 2003. DEPSO: hybrid particle
swarm with differential evolution operator.
Proceedings of the IEEE International Conference on
Systems, Man and Cybernetics, Washington DC,
USA, pp: 3816-3821.
Zhuang, T., Q. Li, Q. Guo and X. Wang, 2008. A two-stage particle swarm optimizer. IEEE World
Congress on Computational Intelligence, Hong
Kong, pp: 557-563.