Particle Swarm Optimization Method in Multiobjective Problems

K.E. Parsopoulos
Department of Mathematics,
University of Patras Artificial
Intelligence Research Center (UPAIRC),
University of Patras,
GR-26110 Patras, Greece
kostasp@math.upatras.gr
M.N. Vrahatis
Department of Mathematics,
University of Patras Artificial
Intelligence Research Center (UPAIRC),
University of Patras,
GR-26110 Patras, Greece
vrahatis@math.upatras.gr
ABSTRACT
chniques can be used to detect Pareto Optimal solutions, this approach suffers from two critical drawbacks: (i) the objectives have to be aggregated into one single objective function and (ii) only one solution can be detected per optimization run. The inherent difficulty of foreknowing which aggregation of the objectives is appropriate, in addition to the heavy computational cost of Gradient-based techniques, necessitates the development of more efficient and rigorous methods. Evolutionary Algorithms (EA) seem to be especially suited to MO problems due to their ability to search simultaneously for multiple Pareto Optimal solutions and to perform a better global search of the search space [20].
The Particle Swarm Optimization (PSO) is a Swarm Intelligence method that models social behavior to guide swarms of particles towards the most promising regions of the search space [3]. PSO has proved to be efficient at solving Unconstrained Global Optimization and engineering problems [4, 10, 11, 12, 13, 17]. It is easily implemented, using either binary or floating point encoding, and it usually results in faster convergence rates than the Genetic Algorithms [7]. Although PSO's performance in single-objective optimization tasks has been extensively studied, there are insufficient results for MO problems thus far. In this paper a first study of the PSO's performance in MO problems is presented through experiments on well-known test functions.
In the next section the basic concepts and terminology of
MO are briefly presented. In Section 3 the PSO method is
described and briefly analyzed. In Section 4 the performance
of PSO in terms of finding the Pareto Front in Weighted
Aggregation cases is exhibited, while in Section 5 a modified
PSO is used to perform MO similarly to the VEGA system. In the last section conclusions are derived and further research directions are proposed.
This paper constitutes a first study of the Particle Swarm Optimization (PSO) method in Multiobjective Optimization (MO) problems. The ability of PSO to detect Pareto Optimal points and capture the shape of the Pareto Front is studied through experiments on well-known non-trivial test functions. The Weighted Aggregation technique with fixed or adaptive weights is considered. Furthermore, critical aspects of the VEGA approach for Multiobjective Optimization using Genetic Algorithms are adapted to the PSO framework in order to develop a multi-swarm PSO that can cope effectively with MO problems. Conclusions are derived and ideas for further research are proposed.
Keywords
Particle Swarm Optimization, Multiobjective Optimization
1. INTRODUCTION
Multiobjective Optimization (MO) problems are very common, especially in engineering applications, due to the multicriteria nature of most real-world problems. Design of
complex hardware/software systems [20], atomic structure
determination of proteins [2], potential function parameter
optimization [18], x-ray diffraction pattern recognition [14],
curve fitting [1] and production scheduling [19] are such applications, where two or more, sometimes competing and/or
incommensurable objective functions have to be minimized
simultaneously. In contrast to the single-objective optimization case, where the optimal solution is clearly defined, in
MO problems there is a whole set of trade-offs giving rise to
numerous Pareto Optimal solutions. These points are optimal solutions for the MO problem when all objectives are
simultaneously considered.
Although the traditional Gradient-based optimization te-
2. BASIC CONCEPTS
Let X be an n-dimensional search space and f_i(x), i = 1, ..., k, be k objective functions defined over X. Assuming,
    g_i(x) ≤ 0,   i = 1, ..., m,

be m inequality constraints, the MO problem can be stated as finding a vector x* = (x_1*, x_2*, ..., x_n*) ∈ X that satisfies the constraints and optimizes (without loss of generality we consider only the minimization case) the vector function f(x) = [f_1(x), f_2(x), ..., f_k(x)]. The objective functions may be in conflict; thus, in most cases it is impossible

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SAC 2002, Madrid, Spain
© 2002 ACM 1-58113-445-2/02/03 $5.00.
ity; the distance between the best previous position of the particle and its current position; and finally, the distance between the swarm's best experience (the position of the best particle in the swarm) and the i-th particle's current position. Then, according to Equation (2), the i-th particle "flies" towards a new position. In general, the performance of each particle is measured according to a fitness function, which is problem-dependent. In optimization problems, the fitness function is usually the objective function under consideration.
The role of the inertia weight w is considered to be crucial for the PSO's convergence. The inertia weight is employed to control the impact of the previous history of velocities on the current velocity of each particle. Thus, the parameter w regulates the trade-off between the global and local exploration ability of the swarm. A large inertia weight facilitates global exploration (searching new areas), while a small one tends to facilitate local exploration, i.e. fine-tuning the current search area. A suitable value for the inertia weight w balances the global and local exploration ability and, consequently, reduces the number of iterations required to locate the optimum solution. A general rule of thumb suggests that it is better to initially set the inertia to a large value, in order to make better global exploration of the search space, and gradually decrease it to get more refined solutions. Thus, a time-decreasing inertia weight value is used. The initial swarm can be generated either randomly or using a Sobol sequence generator [15], which ensures that the particles will be uniformly distributed within the search space.
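As a minimal sketch, such a time-decreasing schedule can be written as follows (the function name is ours; the endpoint values 1.0 and 0.4 are the ones used in the experiments reported later in the paper):

```python
def inertia(t, t_max, w_start=1.0, w_end=0.4):
    """Linearly decrease the inertia weight from w_start (favoring
    global exploration) to w_end (favoring local fine-tuning)."""
    return w_start - (w_start - w_end) * t / t_max

print(inertia(0, 150))    # large early: global exploration
print(inertia(150, 150))  # small late: local refinement
```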
From the above discussion, it is obvious that PSO resembles, to some extent, the "mutation" operator of Genetic Algorithms through the position update Equations (1) and (2). Note, however, that in PSO the "mutation" operator is guided by the particle's own "flying" experience and benefits from the swarm's "flying" experience. In other words, PSO is considered as performing mutation with a "conscience", as pointed out by Eberhart and Shi [4].
In the next section, some well-known benchmark MO problems are described and results from the application of PSO using Weighted Aggregation approaches are exhibited.
to obtain for all objectives the global minimum at the same point. The goal of MO is to provide a set of Pareto Optimal solutions to the aforementioned problem.
Let u = (u_1, ..., u_k) and v = (v_1, ..., v_k) be two vectors. Then, u dominates v if and only if u_i ≤ v_i, i = 1, ..., k, and u_i < v_i for at least one component. This property is known as Pareto Dominance and it is used to define the Pareto Optimal points. Thus, a solution x of the MO problem is said to be Pareto Optimal if and only if there does not exist another solution y such that f(y) dominates f(x). The set of all Pareto Optimal solutions of an MO problem is called the Pareto Optimal Set and we denote it P*. The set PF* = {(f_1(x), ..., f_k(x)) | x ∈ P*} is called the Pareto Front. A Pareto Front PF* is called convex if and only if ∀ u, v ∈ PF*, ∀ λ ∈ (0, 1), ∃ w ∈ PF* : λ||u|| + (1 − λ)||v|| ≥ ||w||. Respectively, it is called concave if and only if ∀ u, v ∈ PF*, ∀ λ ∈ (0, 1), ∃ w ∈ PF* : λ||u|| + (1 − λ)||v|| ≤ ||w||. A Pareto Front can be convex, concave or partially convex and/or concave and/or discontinuous. The last three cases present the greatest difficulty for most MO techniques.
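In code, the dominance relation and the Pareto filter it induces can be sketched as follows (a minimal illustration for the minimization case; the function names are ours):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every component and strictly better in at least one."""
    return all(ui <= vi for ui, vi in zip(u, v)) and \
           any(ui < vi for ui, vi in zip(u, v))

def pareto_optimal(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 4), (2, 2), (4, 1), (3, 3)]
print(pareto_optimal(pts))  # [(1, 4), (2, 2), (4, 1)]: (3, 3) is dominated by (2, 2)
```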
In the next section, the PSO technique is briefly presented.
3. PARTICLE SWARM OPTIMIZATION
The PSO is a Swarm Intelligence method that differs from well-known Evolutionary Computation algorithms, such as the Genetic Algorithms [5, 6, 7], in that the population is not manipulated through operators inspired by the human DNA procedures. Instead, in PSO, the population dynamics simulate the behavior of a "birds' flock", where social sharing of information takes place and individuals profit from the discoveries and previous experience of all other companions during the search for food. Thus, each companion, called a particle, in the population, which is called a swarm, is assumed to "fly" over the search space looking for promising regions of the landscape. For example, in the single-objective minimization case, such regions possess lower function values than others previously visited. In this context, each particle is treated as a point in the search space, which adjusts its own "flying" according to its flying experience as well as the flying experience of other particles.
First, let us define the notation adopted in this paper: assuming that the search space is D-dimensional, the i-th particle of the swarm is represented by the D-dimensional vector X_i = (x_i1, x_i2, ..., x_iD) and the best particle in the swarm, i.e. the particle with the smallest function value, is denoted by the index g. The best previous position (i.e. the position giving the best function value) of the i-th particle is recorded and represented as P_i = (p_i1, p_i2, ..., p_iD), while the position change (velocity) of the i-th particle is represented as V_i = (v_i1, v_i2, ..., v_iD). Following this notation, the particles are manipulated according to the following equations

    v_id = w v_id + c1 r1 (p_id − x_id) + c2 r2 (p_gd − x_id),    (1)

    x_id = x_id + χ v_id,    (2)
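Equations (1) and (2) translate into a per-particle update such as the following sketch (plain Python; the variable names are ours and the default parameter values are illustrative only):

```python
import random

def pso_step(x, v, p, p_g, w=0.7, chi=1.0, c1=0.5, c2=0.5):
    """One PSO update of a single particle: x, v, p are its position,
    velocity and best previous position; p_g is the best position
    found by the whole swarm (index g in the text)."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()  # uniform in [0, 1]
        vd = w * v[d] + c1 * r1 * (p[d] - x[d]) + c2 * r2 * (p_g[d] - x[d])  # Eq. (1)
        new_v.append(vd)
        new_x.append(x[d] + chi * vd)                                        # Eq. (2)
    return new_x, new_v

x, v = pso_step([0.2, 0.8], [0.0, 0.0], p=[0.1, 0.9], p_g=[0.5, 0.5])
```

If a particle already sits at both its own best position and the swarm's best position with zero velocity, the update leaves it in place, which matches the attraction interpretation of the two stochastic terms.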
4. THE WEIGHTED AGGREGATION APPROACH
The Weighted Aggregation is the most common approach for coping with MO problems. According to this approach, all the objectives are summed into a weighted combination F = Σ_{i=1}^{k} w_i f_i(x), where w_i, i = 1, ..., k, are non-negative weights. It is usually assumed that Σ_{i=1}^{k} w_i = 1. These weights can be either fixed or dynamically adapted during the optimization.
If the weights are fixed then we are in the case of the Conventional Weighted Aggregation (CWA). Using this approach only a single Pareto Optimal point can be obtained per optimization run and a priori knowledge of the search space is required in order to choose the appropriate weights [9]. Thus, the search has to be repeated several times to obtain a desired number of Pareto Optimal points. Yet, this is not suitable in most problems due to time limitations and heavy computational costs. Moreover, CWA is unable to detect solutions in concave regions of the Pareto Front [9].
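The aggregation F = Σ w_i f_i(x) itself is straightforward to code; a minimal sketch with fixed weights, as in CWA, could look like this (the helper name and the two example objectives are ours):

```python
def weighted_aggregation(fs, ws, x):
    """Combine k objective functions fs with non-negative weights ws
    (summing to 1) into the single objective F(x) = sum_i w_i * f_i(x)."""
    assert abs(sum(ws) - 1.0) < 1e-9
    return sum(w * f(x) for w, f in zip(ws, fs))

f1 = lambda x: x[0] ** 2          # example objective 1
f2 = lambda x: (x[0] - 2) ** 2    # example objective 2
print(weighted_aggregation([f1, f2], [0.5, 0.5], [1.0]))  # 0.5*1 + 0.5*1 = 1.0
```

Every fixed choice of weights turns the MO problem into one single-objective run, which is exactly why CWA yields only one Pareto Optimal point per run.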
Other Weighted Aggregation approaches have been proposed to alleviate the limitations of the CWA. For a two-
where d = 1, 2, ..., D; N is the size of the population; i = 1, 2, ..., N; χ is a constriction factor which controls and constricts the velocity's magnitude; w is the inertia weight; c1 and c2 are two positive constants; r1 and r2 are two random numbers within the range [0, 1]. Equation (1) determines the i-th particle's new velocity as a function of three terms: the particle's previous veloc-
objective MO problem, the weights can be modified during the optimization, according to the following equations,
fast convergence rate (less than 2 minutes were needed for each function).
    w1(t) = sign(sin(2πt / F)),    w2(t) = 1 − w1(t),
where t is the iteration's index and F is the weights' change frequency. This is the well-known Bang-Bang Weighted Aggregation (BWA) approach, according to which the weights are changed abruptly due to the usage of the sign(·) function. Alternatively, the weights can be gradually modified according to the equations,
    w1(t) = |sin(2πt / F)|,    w2(t) = 1 − w1(t).
This is called Dynamic Weighted Aggregation (DWA). The slow change of the weights forces the optimizer to keep moving on the Pareto Front, if it is convex, performing better than in the BWA case. If the Pareto Front is concave then the performance using DWA and BWA is almost identical when Genetic Algorithms are used [9].
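The two schedules can be sketched in code as follows (a literal transcription of the formulas above; function names are ours, and note that sign(·) takes values in {−1, 0, 1}):

```python
import math

def sign(a):
    """sign(.) as used in the BWA schedule."""
    return (a > 0) - (a < 0)

def bwa_w1(t, F):
    """Bang-Bang Weighted Aggregation: w1 flips abruptly whenever
    sin(2*pi*t/F) changes sign; w2(t) = 1 - w1(t)."""
    return sign(math.sin(2 * math.pi * t / F))

def dwa_w1(t, F):
    """Dynamic Weighted Aggregation: w1 oscillates gradually in [0, 1]."""
    return abs(math.sin(2 * math.pi * t / F))

print(bwa_w1(25, 100), bwa_w1(75, 100))  # abrupt switch across half a period
print(dwa_w1(0, 200), dwa_w1(50, 200))   # gradual sweep from 0 towards 1
```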
The three different approaches mentioned above have been applied in experiments using the PSO technique, with F = 100 for the BWA and F = 200 for the DWA case respectively. The benchmark problems that were used are
Figure 1: CWA approach results for F1 and F2.
defined in [8, 21]:
• Function F1 (convex, uniform Pareto Front): f_1 = (1/n) Σ_{i=1}^{n} x_i², f_2 = (1/n) Σ_{i=1}^{n} (x_i − 2)².
• Function F2 (convex, non-uniform Pareto Front): f_1 = x_1, g = 1 + 9/(n−1) Σ_{i=2}^{n} x_i, f_2 = g (1 − √(f_1/g)).
Figure 2: CWA approach results for F3 and F4.
• Function F3 (concave Pareto Front): f_1 = x_1, g = 1 + 9/(n−1) Σ_{i=2}^{n} x_i, f_2 = g (1 − (f_1/g)²).
As expected, the CWA approach was able to detect the Pareto Front in F1, where it is convex and uniform, and in F2, where it is convex and non-uniform, while in F4 it detected only the convex parts. In F3 (concave case) it was unable to detect Pareto Optimal points other than the two ends of the Pareto Front. The obtained Pareto Fronts for the experiments using the BWA and DWA approaches are exhibited in Figures 3-6.
• Function F4 (neither purely convex nor purely concave Pareto Front): f_1 = x_1, g = 1 + 9/(n−1) Σ_{i=2}^{n} x_i,
• Function F5 (Pareto Front that consists of separated convex parts): f_1 = x_1, g = 1 + 9/(n−1) Σ_{i=2}^{n} x_i, f_2 = g (1 − √(f_1/g) − (f_1/g) sin(10π f_1)).
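As a concrete illustration, the concave function F3 can be coded directly from its definition (a sketch with the 9/(n−1) factor of [21]; n = 2 and x ∈ [0, 1]² as in the experiments, and the function name is ours):

```python
def f3(x):
    """Concave-front test function F3: returns the pair (f1, f2)."""
    n = len(x)
    f1 = x[0]
    g = 1 + 9 / (n - 1) * sum(x[1:])
    f2 = g * (1 - (f1 / g) ** 2)
    return f1, f2

print(f3([0.0, 0.0]))  # (0.0, 1.0): one end of the Pareto Front
```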
Although simple and consisting of only two objectives, these problems are considered difficult (especially F3, F4 and F5) due to the shape of the Pareto Front (purely or partially concave, discontinuous etc.). In order to have results in finding the Pareto Front of the benchmark problems comparable with the results provided in [9] for the Evolutionary Algorithms case, we used the pseudocode provided in [9] to build and maintain the archive of Pareto solutions and we performed all simulations using the same parameter values. Thus, we performed 150 iterations of PSO for each problem, with x ∈ [0, 1]². The PSO parameters were fixed at c1 = c2 = 0.5 and the inertia w was gradually decreased from 1 towards 0.4. The size of the swarm depended on the problem but never exceeded 40.
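The essence of such an archive update can be sketched as follows (a simplified, unbounded version of the idea; [9] gives the exact pseudocode, and the names here are ours):

```python
def dominates(u, v):
    """u Pareto-dominates v (minimization)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, candidate):
    """Insert a candidate objective vector unless it is dominated;
    drop any archived vectors that the candidate dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

arch = []
for point in [(2, 2), (1, 3), (3, 3), (2, 1)]:
    arch = update_archive(arch, point)
print(arch)  # [(1, 3), (2, 1)]: (3, 3) was rejected, (2, 2) later displaced
```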
The first experiments were done using the CWA approach with a small swarm of 10 particles. The desired number of Pareto Optimal points was 20 for the functions F1, F2, F3 and 40 for F4. Thus we had to run the algorithm 20 times for the first three functions and 40 times for the fourth. The obtained Pareto Fronts are exhibited in Figures 1 and 2 and they are similar to those obtained from the Evolutionary Algorithm in [9] but with very low computational cost and
Figure 3: BWA (left) and DWA (right) approaches' results for the function F1.
It is clear that PSO succeeded in capturing the shape of the Pareto Front in each case. The results are better using the DWA approach, with the exception of the case of the concave Pareto Front of the function F3, at which the BWA approach performed better. Swarm size was equal to 20 for all simulations, except for the function F3, for which it was set to 40.
The MO problems can be alternatively solved using population-based non-Pareto approaches instead of Weighted Ag-
Figure 4: BWA (left) and DWA (right) approaches' results for the function F3.

Figure 6: BWA (left) and DWA (right) approaches' results for the function F5.
gregation approaches. One such approach is VEGA (Vector Evaluated Genetic Algorithm), developed by Schaffer [16]. In the next section, a modification of the PSO algorithm borrowing ideas from VEGA is used in MO problems.

Figure 5: BWA (left) and DWA (right) approaches' results for the function F4.

Figure 7: VEPSO results for the function F1.

6. CONCLUSIONS
A first study of the performance of the PSO technique in MO problems has been presented. The PSO method efficiently solved well-known test problems, including difficult cases for MO techniques, such as concavity and/or discontinuity of the Pareto Front. We used low-dimensional objectives in order to investigate the simplest cases first. Besides, it is a general feeling that two objectives are sufficient to reflect the essential aspects of MO [21]. Promising results were obtained even when the size of the swarm was very small. In addition to the Weighted Aggregation cases, a modified version of PSO (VEPSO) that resembles the VEGA ideas was also developed and applied on the same problems, with promising results.

Further research will include investigation of the performance of PSO in higher-dimensional problems with more than two objectives. Especially in the case of the Population-Based Non-Pareto Approach, a random selection of the objective, in problems with more than two objectives, seems very interesting. Theoretical work is also required to fully understand the swarm's dynamics and behavior during the optimization.

5. A POPULATION-BASED NON-PARETO APPROACH
According to the VEGA approach, fractions of the next generation, or sub-populations, are selected from the old generation according to each of the objectives, separately. After shuffling all these sub-populations together, crossover and mutation are applied to generate the new population [16]. The main ideas of VEGA were adopted and modified to fit the PSO framework, developing the VEPSO algorithm.
We used two swarms to solve the five benchmark problems F1-F5. Each swarm was evaluated according to one of the objectives, but information coming from the other swarm was used to determine the change of the velocities. Specifically, the best particle of the second swarm was used for the determination of the new velocities of the first swarm's particles, using Equation (1), and vice versa. Alternatively,
the best positions of the second swarm can be used, in conjunction with the best particle of the second swarm, for the evaluation of the velocities of the first swarm, and vice versa. The obtained Pareto Fronts for all benchmark problems are exhibited in Figures 7-11. The left part of each figure is the Pareto Front obtained using only the best particle of the other swarm, while the right part is obtained using both the best particle and the best previous positions of the other
swarm. With the exception of the function F3, no significant
difference between the two approaches was observed. For
each experiment, two swarms of size 20 were used and the
algorithm ran for 200 iterations.
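The velocity-exchange idea at the heart of VEPSO can be sketched for one particle of the first swarm as follows (plain Python; the names and default parameter values are ours, not the paper's):

```python
import random

def vepso_velocity(x, v, p, p_g_other, w=0.7, c1=0.5, c2=0.5):
    """VEPSO-style velocity update for a particle of swarm 1: the
    social term uses p_g_other, the best particle of swarm 2 (which
    is evaluated on the other objective), instead of swarm 1's own
    best particle."""
    r1, r2 = random.random(), random.random()
    return [w * vd + c1 * r1 * (pd - xd) + c2 * r2 * (pgd - xd)
            for vd, xd, pd, pgd in zip(v, x, p, p_g_other)]

new_v = vepso_velocity([0.2, 0.8], [0.0, 0.0], [0.1, 0.9], [0.5, 0.5])
```

The symmetric update, with the roles of the two swarms exchanged, gives the "vice versa" direction described above.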
7. REFERENCES
[1] H. Ahonen, P.A. Desouza, and V.K. Garg. A Genetic Algorithm for Fitting Lorentzian Line-Shapes in Mossbauer-Spectra. Nuclear Instruments & Methods in Physics Research, 124(4):633-638, 1997.
[2] T.S. Bush, C.R.A. Catlow, and P.D. Battle. Evolutionary Programming Techniques for Predicting Inorganic Crystal Structures. J. Materials Chemistry, 5(8):1269-1272, 1995.
[3] R.C. Eberhart and J. Kennedy. A New Optimizer Using Particle Swarm Theory. In Proc. 6th Int. Symp.
Figure 8: VEPSO results for the function F2.
Figure 10: VEPSO results for the function F4.
Figure 9: VEPSO results for the function F3.

Figure 11: VEPSO results for the function F5.
on Micro Mach. & Hum. Sci., pages 39-43, 1995.
[4] R.C. Eberhart and Y.H. Shi. Evolving Artificial Neural Networks. In Proc. Int. Conf. on Neural Networks and Brain, 1998.
[5] R.C. Eberhart, P.K. Simpson, and R.W. Dobbins. Computational Intelligence PC Tools. Academic Press Professional, Boston, 1996.
[6] J. Kennedy and R.C. Eberhart. Particle Swarm Optimization. In Proc. of the IEEE Int. Conf. Neural Networks, pages 1942-1948, 1995.
[7] J. Kennedy and R.C. Eberhart. Swarm Intelligence. Morgan Kaufmann Publishers, 2001.
[8] J.D. Knowles and D.W. Corne. Approximating the Nondominated Front Using the Pareto Archived Evolution Strategies. Evolutionary Computation, 8(2):149-172, 2000.
[9] Y. Jin, M. Olhofer, and B. Sendhoff. Dynamic Weighted Aggregation for Evolutionary Multi-Objective Optimization: Why Does It Work and How? In Proc. GECCO 2001 Conf., 2001.
[10] K.E. Parsopoulos, V.P. Plagianakos, G.D. Magoulas, and M.N. Vrahatis. Objective Function "Stretching" to Alleviate Convergence to Local Minima. Nonlinear Analysis, TMA, 47(5):3419-3424, 2000.
[11] K.E. Parsopoulos, V.P. Plagianakos, G.D. Magoulas, and M.N. Vrahatis. Stretching Technique for Obtaining Global Minimizers Through Particle Swarm Optimization. In Proc. Particle Swarm Optimization Workshop, pages 22-29, 2001.
[12] K.E. Parsopoulos and M.N. Vrahatis. Modification of the Particle Swarm Optimizer for Locating All the Global Minima. In V. Kurkova, N. Steele, R. Neruda, M. Karny (Eds.), Artificial Neural Networks and Genetic Algorithms, pages 324-327, Springer, 2001.
[13] K.E. Parsopoulos and M.N. Vrahatis. Particle Swarm Optimizer in Noisy and Continuously Changing Environments. In M.H. Hamza (Ed.), Artificial Intelligence and Soft Computing, pages 289-294, IASTED/ACTA Press, 2001.
[14] W. Paszkowicz. Application of the Smooth Genetic Algorithm for Indexing Powder Patterns: Test for the Orthorhombic System. Materials Science Forum, 228(1&2):19-24, 1996.
[15] W.H. Press, W.T. Vetterling, S.A. Teukolsky, and B.P. Flannery. Numerical Recipes in Fortran 77. Cambridge University Press, Cambridge, 1992.
[16] J.D. Schaffer. Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. In Genetic Algorithms and their Applications: Proc. First Int. Conf. on Genetic Algorithms, pages 93-100, 1985.
[17] Y. Shi, R.C. Eberhart, and Y. Chen. Implementation of Evolutionary Fuzzy Systems. IEEE Trans. Fuzzy Systems, 7(2):109-119, 1999.
[18] A.J. Skinner and J.Q. Broughton. Neural Networks in Computational Materials Science: Training Algorithms. Modelling and Simulation in Materials Science and Engineering, 3(3):371-390, 1995.
[19] K. Swinehart, M. Yasin, and E. Guimaraes. Applying an Analytical Approach to Shop-Floor Scheduling: A Case-Study. Int. J. Materials & Product Technology, 11(1-2):98-107, 1996.
[20] E. Zitzler. Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. Ph.D. thesis, Swiss Fed. Inst. Techn. Zurich, 1999.
[21] E. Zitzler, K. Deb, and L. Thiele. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173-195, 2000.