Particle Swarm Optimization
______________________________________________________________________________
1. Particle Swarm Optimization
Kennedy and Eberhart developed the particle swarm optimization (PSO) algorithm in 1995
by studying the social and cognitive behavior of bird flocks and fish schools. The individuals,
called particles, are "flown" through a multidimensional search space. Optimizing with the
particle swarm requires only very simple operations, making the algorithm highly efficient. The
attraction of each particle toward the most promising solutions found so far is scaled by a
uniform random term, which helps prevent premature convergence, a major concern in complex
search spaces. Particles formed at the beginning of the PSO process remain active until a
solution is found.
The movement of the particles is influenced by two factors: the global particle-to-particle
best solution and the local particle's iteration-to-iteration best solution. Through the
iteration-to-iteration information, the particle stores in its memory the best solution it has visited
so far, called "pbest", and experiences an attraction toward this solution as it traverses the
solution search space. This attraction grows with the distance between the stored best solution
and the particle's current location, independent of that solution's performance. Through the
particle-to-particle information, the particle stores in its memory the best solution visited by any
particle, called "gbest", and an attraction toward this solution results as well. The first factor,
pbest, and the second, gbest, are called the cognitive and social components, respectively. After
each iteration, pbest and gbest are updated for each particle if a better, more dominating solution
(in terms of performance or fitness) is found. This process continues iteratively until either the
algorithm achieves the desired result or it is determined that an acceptable solution cannot be
found within the computational limits set by the application. These two factors determine the
direction and amount of movement resulting from the particle's velocity. Interestingly, in
traditional PSOs the performance of the two solution points does not affect the direction or
amount of motion, but it completely controls the choice of the global and local best solutions. A
modified PSO, the Fitness-Distance-Ratio PSO, incorporates the solution's performance, or
fitness, into the velocity and converges faster to the globally best answer.
The PSO defines each particle in the D-dimensional space as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, where the
subscript $i$ represents the particle number and the second subscript is the dimension, i.e., the
number of parameters defining the solution. The memory of the previous best position is
represented as $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, and a velocity for each dimension is independently
established as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. After each iteration, the velocity term is updated, and the
particle is moved with some randomness in the direction of its own best position, pbest, and the
global best position, gbest. This is apparent in the velocity update equation, given by

$v_{id}^{(t+1)} = \omega\, v_{id}^{(t)} + U[0,1]\,\varphi_1\,\bigl(p_{id}^{(t)} - x_{id}^{(t)}\bigr) + U[0,1]\,\varphi_2\,\bigl(p_{gd}^{(t)} - x_{id}^{(t)}\bigr)$   (1)
The position is updated using this velocity as

$x_{id}^{(t+1)} = x_{id}^{(t)} + v_{id}^{(t+1)}$   (2)

where $U[0,1]$ samples a uniform random distribution, $t$ is a relative time index, $\omega$ is an inertia
weight, and $\varphi_1$ and $\varphi_2$ are weights trading off the impact of the local best and global best
solutions on the particle's total velocity.
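For concreteness, a minimal NumPy sketch of one iteration of (1) and (2) is given below. The
function name and the inertia and acceleration values (w = 0.7, phi1 = phi2 = 1.5) are
illustrative assumptions, not prescribed by the description above.

import numpy as np

def pso_step(X, V, pbest, gbest, w=0.7, phi1=1.5, phi2=1.5, rng=None):
    # X, V, pbest: (num_particles, D) arrays; gbest: (D,) array.
    # w, phi1, phi2 are assumed inertia and acceleration weights.
    rng = np.random.default_rng() if rng is None else rng
    u1 = rng.uniform(size=X.shape)  # fresh U[0,1] draw per particle and dimension
    u2 = rng.uniform(size=X.shape)
    V = w * V + phi1 * u1 * (pbest - X) + phi2 * u2 * (gbest - X)  # Eq. (1)
    X = X + V                                                      # Eq. (2)
    return X, V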
The particle swarm optimization algorithm is highly efficient at searching complex,
continuous solution landscapes. The particle swarm can also be implemented as a parallel
algorithm, improving its efficiency for real-time applications: the particles can be split among
multiple processors, and the global best solution is then shared among the particles.
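A sketch of this parallel idea, assuming that fitness evaluation dominates the run time, might
distribute the evaluations with Python's multiprocessing module; the fitness function here is a
placeholder.

import numpy as np
from multiprocessing import Pool

def fitness(x):
    # Placeholder objective; replace with the application's fitness function.
    return float(np.sum(x ** 2))

def evaluate_swarm_parallel(X, workers=4):
    # Each worker process scores a slice of the swarm; only the scores
    # (and hence the global best) are shared back with the main process.
    # On platforms that spawn processes, call this under
    # an `if __name__ == "__main__":` guard.
    with Pool(processes=workers) as pool:
        scores = pool.map(fitness, list(X))
    g = int(np.argmin(scores))  # index of the current global best particle
    return np.asarray(scores), X[g].copy(), scores[g]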
2. Binary Particle Swarm Optimizer
In a binary-valued space, continuity loses meaning, and so does the interpretation of the
fitness function as a function of position. A binary version of the algorithm transitions particles
in a probabilistic space using the particle's velocity, so that each binary variable has a
probability associated with it. The swarm adjusts the velocity of each binary variable so that the
probability of its preferred value is maximized. The algorithm uses the same velocity update
equation as in (1), but the values of $X$ are now discrete and binary. For the position update, the
velocity is first transformed into the interval [0, 1] using the sigmoid function given by
$S_{id} = \mathrm{sig}(V_{id}) = \dfrac{1}{1 + e^{-V_{id}}}$   (3)
where $V_{id}$ is the velocity of the $i$th particle's $d$th dimension. A random number is generated
from a uniform distribution and compared to the value of the sigmoid function, and a decision is
made about $X_{id}$ in the following manner:

$X_{id} = u\bigl(S_{id} - U[0,1]\bigr)$   (4)

where $u$ is a unit step function. The decision regarding $X_{id}$ is now probabilistic: the higher the
value of $V_{id}$, the higher the value of $S_{id}$, and the higher the probability of deciding '1' for
$X_{id}$. It should be noted that as $V_{id} \to \infty$, $S_{id} \to 1$, making it impossible for $X_{id}$ to return to
zero past that point; until then, there is some probability of $X_{id}$ returning to zero. Figure 1
shows this property of the binary PSO. The probability of $X_{id} = 1$ increases as $V_{id}$ increases.
However, $P(X_{id} = 1)$ is almost, but not exactly, equal to 1 for $V_{id} > 10$. This is key
to the design of the discrete binary PSO, since particles do not get stuck once they find optima.
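A minimal sketch of the binary position update of (3) and (4), with illustrative names, follows.

import numpy as np

def binary_position_update(V, rng=None):
    # V: velocity array of any shape; returns the matching array of bits.
    rng = np.random.default_rng() if rng is None else rng
    S = 1.0 / (1.0 + np.exp(-V))   # Eq. (3): sigmoid of the velocity
    U = rng.uniform(size=V.shape)  # one U[0,1] draw per bit
    return (U < S).astype(int)     # Eq. (4): unit step of S - U[0,1]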
Figure 1. Transformation of the Particle Velocity to a Binary Variable
3. Discrete Particle Swarm Optimizer
In the binary PSO, the particles learn over iterations and position themselves in a probabilistic
space such that the probability of either 0 or 1 is higher for a particular dimension. The PSO for
multi-valued problems is designed along the same lines. For discrete multi-valued optimization
problems, the variables range over [0, M-1], where M is the base of the M-ary number system.
The same velocity update and particle representation are used in this algorithm; the position
update equation, however, is changed in the following manner. The velocity is first transformed
into a number in [0, M] using the sigmoid transformation

$S_{id} = \dfrac{M}{1 + e^{-V_{id}}}$   (5)
A number is then generated from a normal distribution with parameters $N(S_{id}, \sigma(M-1))$ and
rounded to the closest discrete value. Hence

$X_{id} = \mathrm{round}\bigl(S_{id} + (M-1)\,\sigma\,\mathrm{randn}(1)\bigr)$   (6)

Additional conditions are applied for values past the range of the discrete variable, as in

$X_{id} = \begin{cases} M-1, & X_{id} > M-1 \\ 0, & X_{id} < 0 \end{cases}$   (7)
The velocity update equation remains the same as (1). The positions of the particles are discrete
values in [0, M-1]. Note that for a given $S_{id}$ there is a nonzero probability of obtaining any
number in [0, M-1]; however, the probability of obtaining a number decreases with its distance
from $S_{id}$. In the following subsection the relation between $S_{id}$ and the probability of the
resultant discrete variables is given. The transformation of the velocity using a sigmoid function
and the generation of the random variable from a Gaussian distribution using this value are
illustrated in Figure 2.
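A sketch of the discrete update of (5) through (7) follows; the spread parameter sigma is an
assumption, since its value is left open above.

import numpy as np

def discrete_position_update(V, M, sigma=0.2, rng=None):
    # V: velocity array; M: number of discrete values; returns ints in [0, M-1].
    rng = np.random.default_rng() if rng is None else rng
    S = M / (1.0 + np.exp(-V))                                        # Eq. (5)
    X = np.rint(S + (M - 1) * sigma * rng.standard_normal(V.shape))   # Eq. (6)
    return np.clip(X, 0, M - 1).astype(int)                           # Eq. (7)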
Figure 2. Transformation of the Particle Velocity to a Discrete Variable
4. Non-Dominated Sorting Particle Swarm Optimization
If one solution achieves the minimum value for every performance parameter in the fitness
function, the solution is declared dominant, or optimal, as in

$f(\vec{x}^{\,*}) \le f(\vec{x}) \quad \forall\, \vec{x} \in F$   (8)

where $f(\vec{x})$ is the fitness function converting a multidimensional solution point $\vec{x}$ into a
vector of performance values, and $\vec{x}$ is a solution vector in the feasible solution space $F$. The
final optimal solution $\vec{x}^{\,*}$ is dominant in all its dimensions. In that case the Pareto surface
consists of a single point rather than a monotonically decreasing curve, as in the figure below,
and performing a trade-off is not an option. For the more common situation, non-dominance, a
search algorithm can find a set of solution points comprising the Pareto surface. Each
non-dominated solution's performance is best for at least one objective, so the solutions do not
dominate one another.
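A pairwise dominance test underlies the multi-objective machinery that follows; a minimal
sketch, assuming every objective is minimized, is:

import numpy as np

def dominates(fa, fb):
    # fa dominates fb if it is no worse in every objective
    # and strictly better in at least one (minimization assumed).
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))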
The classical particle swarm optimizer works on a single performance function, in other
words a single objective. In the presence of multiple objectives, the algorithm is modified to
evolve the set of solutions in the Pareto set. In this section the algorithm called the
non-dominated sorting particle swarm optimizer (NSPSO) is described. It uses the same velocity
update and position update functions as the classical particle swarm; it differs in the way
the pbest is updated. Let us denote the 'current' solutions of the particle swarm by R and the
'pbest' solutions of the particles by P. The new algorithm updates the pbest in the following
manner (a sketch of this update follows the list):
1. Merge the R and P solutions into one set. Call this the entire set, denoted H.
2. Define an empty set called J.
3. Identify the set of non-dominated solutions in H. Call this set ND.
4. Add these solutions to the set J. If the number of solutions in J equals the number of
particles, this is the new pbest solution vector for the next iteration; otherwise go to
step 5.
5. Remove ND from H and go to step 3.
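A sketch of steps 1 through 5, with the dominates() helper repeated for self-containment, might
look as follows; truncating J at the swarm size is an assumption, since the text does not specify
how an overfull front is handled.

import numpy as np

def dominates(fa, fb):
    # Minimization: no worse everywhere, strictly better somewhere.
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def update_pbest(R, R_f, P, P_f, num_particles):
    # R, P: current and pbest solutions, shape (n, D);
    # R_f, P_f: their objective vectors, shape (n, k).
    H, H_f = np.vstack([R, P]), np.vstack([R_f, P_f])    # step 1
    J, J_f = [], []                                      # step 2
    while len(J) < num_particles and len(H) > 0:
        nd = [i for i in range(len(H))                   # step 3: non-dominated front
              if not any(dominates(H_f[j], H_f[i])
                         for j in range(len(H)) if j != i)]
        J.extend(H[i] for i in nd)                       # step 4: add front to J
        J_f.extend(H_f[i] for i in nd)
        keep = [i for i in range(len(H)) if i not in nd]
        H, H_f = H[keep], H_f[keep]                      # step 5: remove ND from H
    return np.asarray(J[:num_particles]), np.asarray(J_f[:num_particles])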
A non-dominated sorting PSO initializes a group of particles, as shown in Figure 3 below,
and moves them with every iteration toward the diagonal in the graph until the Pareto surface is
reached. This approach is an expansion of the min-max approach: instead of minimizing only
the maximum error performance, a set of solutions is found whose members focus on
minimizing one or more of the error performance functions over the feasible solution region. If
there are only two objectives, the results can be plotted as one performance objective versus the
other, a 2D plot demonstrating the trade-off between the objectives.
Figure 3. Evolution of Solutions in the Pareto Set for a Sensor Network Example Problem