13.3 SIMULATED ANNEALING
Step 1: Choose the parameters of the SA method. The initial temperature is taken as the average value of f evaluated at four randomly selected points in the design space. By selecting the random points as X(1) = (2, 0), X(2) = (5, 10), X(3) = (8, 5), and X(4) = (10, 10), we find the corresponding values of the objective function as f(1) = 476, f(2) = 340, f(3) = 381, and f(4) = 340, respectively. Noting that the average value of the objective functions f(1), f(2), f(3), and f(4) is 384.25, we assume the initial temperature to be T = 384.25. The temperature reduction factor is chosen as c = 0.5. To make the computations brief, we choose the maximum permissible number of iterations (at any specific value of temperature) as n = 2. We select the initial design point as X1 = (4, 5).
Step 2: Evaluate the objective function value at X1 as f1 = 349.0 and set the iteration number as i = 1.
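The initial temperature chosen in Step 1 can be checked directly from the four sampled objective values quoted there; the short Python sketch below (variable names are illustrative) simply averages them:

```python
# Objective values quoted in Step 1 at the four randomly selected points.
f_samples = [476.0, 340.0, 381.0, 340.0]

# The initial temperature is taken as their average value.
T_initial = sum(f_samples) / len(f_samples)
print(T_initial)  # 384.25
```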
Step 3: Generate a new design point in the vicinity of the current design point. For this, we select two uniformly distributed random numbers u1 and u2; u1 for x1 in the vicinity of 4 and u2 for x2 in the vicinity of 5. The numbers u1 and u2 are chosen as 0.31 and 0.57, respectively. By choosing the ranges of x1 and x2 as (−2, 10) and (−1, 11), which represent ranges of ±6 about their respective current values, the uniformly distributed random numbers r1 and r2 in the ranges of x1 and x2, corresponding to u1 and u2, can be found as

r1 = −2 + u1{10 − (−2)} = −2 + 0.31(12) = 1.72
r2 = −1 + u2{11 − (−1)} = −1 + 0.57(12) = 5.84

which gives X2 = (1.72, 5.84). Since the objective function value f2 = f(X2) = 387.7312, the value of Δf is given by

Δf = f2 − f1 = 387.7312 − 349.0 = 38.7312
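As a minimal sketch of this neighbor-generation rule, each new coordinate is obtained as r = x_min + u(x_max − x_min) over a ±6 range about the current value; the Python function and names below are illustrative, not part of the example:

```python
import random

def neighbor(x, half_range=6.0):
    """Return a point drawn uniformly from a +/- half_range box around x."""
    return [xi - half_range + random.random() * (2.0 * half_range) for xi in x]

# Reproducing the numbers of this step with u1 = 0.31 and u2 = 0.57:
u1, u2 = 0.31, 0.57
r1 = -2.0 + u1 * (10.0 - (-2.0))   # = 1.72
r2 = -1.0 + u2 * (11.0 - (-1.0))   # = 5.84
```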
Step 4: Since the value of Δf is positive, we use the Metropolis criterion to decide whether to accept or reject the current point. For this we choose a random number in the range (0, 1) as r = 0.83. Equation (13.18) gives the probability of accepting the new design point X2 as

P[X2] = e^(−Δf/kT)   (E1)

By assuming the value of the Boltzmann's constant k to be 1 for simplicity in Eq. (E1), we obtain

P[X2] = e^(−Δf/kT) = e^(−38.7312/384.25) = 0.9041

Since r = 0.83 is smaller than 0.9041, we accept the point X2 = (1.72, 5.84) as the next design point. Note that, although the objective function value f2 is larger than f1, we accept X2 because this is an early stage of simulation and the current temperature is high.
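A minimal sketch of the Metropolis acceptance test used here, assuming k = 1 as in the text (the function name and structure are illustrative):

```python
import math
import random

def metropolis_accept(delta_f, T, k=1.0):
    """Always accept improvements; accept worse points with probability exp(-delta_f/(k*T))."""
    if delta_f <= 0.0:
        return True
    return random.random() < math.exp(-delta_f / (k * T))

# With the numbers of this step: exp(-38.7312/384.25) is about 0.9041,
# and since r = 0.83 < 0.9041 the uphill move to X2 is accepted.
```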
Step 4: Update the iteration number as i = 2. Since the iteration number i is less than or equal to n, we proceed to step 3.
Step 3: Generate a new design point in the vicinity of the current design point X2 = (1.72, 5.84). For this, we choose the range of each design variable as ±6 about its current value so that the ranges are given by (−6 + 1.72, 6 + 1.72) = (−4.28, 7.72) for x1 and (−6 + 5.84, 6 + 5.84) = (−0.16, 11.84) for x2. By selecting two uniformly distributed random numbers in the range (0, 1) as u1 = 0.92 and u2 = 0.73, the corresponding uniformly distributed random numbers in the ranges of x1 and x2 become

r1 = −4.28 + u1{7.72 − (−4.28)} = −4.28 + 0.92(12) = 6.76
r2 = −0.16 + u2{11.84 − (−0.16)} = −0.16 + 0.73(12) = 8.60

which gives X3 = (6.76, 8.60) with a function value of f3 = 313.3264. We note that the function value f3 is better than f2 with

Δf = f3 − f2 = 313.3264 − 387.7312 = −74.4048
Step 4: Since Δf < 0, we accept the current point as X3 and increase the iteration number to i = 3. Since i > n, we go to step 5.
Step 5: Since a cycle of iterations with the current value of temperature is completed,
we reduce the temperature to a new value of T = 0.5 (384.25) = 192.125.
Reset the current iteration number as i = 1 and go to step 3.
Step 3: Generate a new design point in the vicinity of the current design point X3 and
continue the procedure until the temperature is reduced to a small value (until
convergence).
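The steps of this example can be collected into a compact loop. The sketch below is a generic outline under the parameter choices used here (c = 0.5, n = 2 iterations per temperature, ±6 neighborhood, k = 1); the objective function f is left as a user-supplied callable because it is defined earlier in the example and not repeated in this excerpt:

```python
import math
import random

def simulated_annealing(f, x, T, c=0.5, n=2, half_range=6.0, T_min=1e-3):
    """Minimize f starting from point x; n moves per temperature, reduction factor c."""
    fx = f(x)
    while T > T_min:
        for _ in range(n):  # Step 3: generate n candidate points at this temperature
            x_new = [xi - half_range + random.random() * 2.0 * half_range for xi in x]
            f_new = f(x_new)
            delta = f_new - fx
            # Step 4: Metropolis criterion (k = 1)
            if delta < 0.0 or random.random() < math.exp(-delta / T):
                x, fx = x_new, f_new
        T *= c  # Step 5: reduce the temperature and repeat
    return x, fx
```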
13.4 PARTICLE SWARM OPTIMIZATION
13.4.1 Introduction
Particle swarm optimization, abbreviated as PSO, is based on the behavior of a colony
or swarm of insects, such as ants, termites, bees, and wasps; a flock of birds; or a
school of fish. The particle swarm optimization algorithm mimics the behavior of these
social organisms. The word particle denotes, for example, a bee in a colony or a
bird in a flock. Each individual or particle in a swarm behaves in a distributed way
using its own intelligence and the collective or group intelligence of the swarm. As
such, if one particle discovers a good path to food, the rest of the swarm will also be
able to follow the good path instantly even if their location is far away in the swarm.
Optimization methods based on swarm intelligence are called behaviorally inspired
algorithms as opposed to the genetic algorithms, which are called evolution-based
procedures. The PSO algorithm was originally proposed by Kennedy and Eberhart in
1995 [13.34].
In the context of multivariable optimization, the swarm is assumed to be of specified
or fixed size with each particle located initially at random locations in the multidimensional design space. Each particle is assumed to have two characteristics: a position
and a velocity. Each particle wanders around in the design space and remembers the
best position (in terms of the food source or objective function value) it has discovered. The particles communicate information or good positions to each other and adjust
their individual positions and velocities based on the information received on the good
positions.
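In code, each particle can be represented by just its position, velocity, and the best position (and objective value) it has found so far; the Python sketch below mirrors this description, with illustrative names:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Particle:
    position: List[float]       # current coordinates in the design space
    velocity: List[float]       # current velocity
    best_position: List[float]  # best position this particle has discovered
    best_value: float           # objective value at best_position (maximization)
```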
As an example, consider the behavior of birds in a flock. Although each bird has
a limited intelligence by itself, it follows the following simple rules:
1. It tries not to come too close to other birds.
2. It steers toward the average direction of other birds.
3. It tries to fit the "average position" between other birds with no wide gaps in
the flock.
Thus the behavior of the flock or swarm is based on a combination of three simple
factors:
1. Cohesion—stick together.
2. Separation—don't come too close.
3. Alignment—follow the general heading of the flock.
The PSO is developed based on the following model:
1. When one bird locates a target or food (or maximum of the objective function),
it instantaneously transmits the information to all other birds.
2. All other birds gravitate to the target or food (or maximum of the objective
function), but not directly.
3. There is a component of each bird's own independent thinking as well as its
past memory.
Thus the model simulates a random search in the design space for the maximum value
of the objective function. As such, gradually over many iterations, the birds go to the
target (or maximum of the objective function).
13.4.2 Computational Implementation of PSO
Consider an unconstrained maximization problem:
Maximize f(X) with X(l) ≤ X ≤ X(u)   (13.20)
where X(l) and X(u)
denote the lower and upper bounds on X, respectively. The PSO
procedure can be implemented through the following steps.
1. Assume the size of the swarm (number of particles) is N . To reduce the total
number of function evaluations needed to find a solution, we must assume a
smaller size of the swarm. But with too small a swarm size it is likely to take
us longer to find a solution or, in some cases, we may not be able to find
a
solution at all. Usually a size of 20 to 30 particles is assumed for the swarm as
a compromise.
2. Generate the initial population of X in the range X(l) and X(u) randomly as X1, X2, . . . , XN. Hereafter, for convenience, the particle (position of) j and its velocity in iteration i are denoted as Xj(i) and Vj(i), respectively. Thus the particles generated initially are denoted X1(0), X2(0), . . . , XN(0). The vectors Xj(0) (j = 1, 2, . . . , N) are called particles or vectors of coordinates of particles
(similar to chromosomes in genetic algorithms). Evaluate the objective function
values corresponding to the particles as f[X1(0)], f[X2(0)], . . . , f[XN(0)].
3. Find the velocities of particles. All particles will be moving to the optimal point
with a velocity. Initially, all particle velocities are assumed to be zero. Set the
iteration number as i = 1.
4. In the ith iteration, find the following two important parameters used by a
typical particle j:
(a) The historical best value of X j(i) (coordinates of jth particle in the current iteration i), P best,j , with the highest value of the objective function,
f [X j(i)], encountered by particle j in all the previous iterations.
The historical best value of X j(i) (coordinates of all particles up to that
iteration), Gbest , with the highest value of the objective function f [X j(i)],
encountered in all the previous iterations by any of the N particles.
(b) Find the velocity of particle j in the ith iteration as follows:
Vj(i) = Vj(i − 1) + c1r1[Pbest,j − Xj(i − 1)] + c2r2[Gbest − Xj(i − 1)]; j = 1, 2, . . . , N   (13.21)
where c1 and c2 are the cognitive (individual) and social (group) learning rates, respectively, and r1 and r2 are uniformly distributed random numbers in the range (0, 1). The parameters c1 and c2 denote the relative importance of the memory (position) of the particle itself to the memory (position) of the swarm. The values of c1 and c2 are usually assumed to be 2 so that c1r1 and c2r2 ensure that the particles would overfly the target about half the time.
(c) Find the position or coordinate of the jth particle in the ith iteration as

Xj(i) = Xj(i − 1) + Vj(i); j = 1, 2, . . . , N   (13.22)

where a time step of unity is assumed in the velocity term in Eq. (13.22). Evaluate the objective function values corresponding to the particles as f[X1(i)], f[X2(i)], . . . , f[XN(i)].
5. Check the convergence of the current solution. If the positions of all particles
converge to the same set of values, the method is assumed to have converged.
If the convergence criterion is not satisfied, step 4 is repeated by updating the
iteration number as i = i + 1, and by computing the new values of Pbest,j and Gbest. The iterative process is continued until all particles converge to the same optimum solution (a compact sketch of these steps in code follows this list).
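The five steps above can be assembled into a short routine. The following sketch is one straightforward reading of Eqs. (13.21) and (13.22) for the maximization problem (13.20); the function and parameter names are illustrative, and a fixed iteration count is used in place of a formal convergence test:

```python
import random

def pso_maximize(f, lower, upper, N=25, c1=2.0, c2=2.0, iterations=100):
    """Particle swarm maximization of f over the box lower <= x <= upper."""
    dim = len(lower)
    # Step 2: random initial positions; Step 3: zero initial velocities
    X = [[random.uniform(lower[d], upper[d]) for d in range(dim)] for _ in range(N)]
    V = [[0.0] * dim for _ in range(N)]
    P_best = [x[:] for x in X]                # personal best positions
    P_best_val = [f(x) for x in X]
    g = max(range(N), key=lambda j: P_best_val[j])
    G_best, G_best_val = P_best[g][:], P_best_val[g]   # swarm (global) best

    for _ in range(iterations):
        for j in range(N):
            r1, r2 = random.random(), random.random()
            for d in range(dim):
                # Eq. (13.21): velocity update; Eq. (13.22): position update
                V[j][d] += c1 * r1 * (P_best[j][d] - X[j][d]) + c2 * r2 * (G_best[d] - X[j][d])
                X[j][d] += V[j][d]
            val = f(X[j])
            if val > P_best_val[j]:           # Step 4(a): update P_best,j
                P_best[j], P_best_val[j] = X[j][:], val
                if val > G_best_val:          # and, if improved, G_best
                    G_best, G_best_val = X[j][:], val
    return G_best, G_best_val
```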
13.4.3 Improvement to the Particle Swarm Optimization Method
It is found that usually the particle velocities build up too fast and the maximum of the
objective function is skipped. Hence an inertia term, θ, is added to reduce the velocity. Usually, the value of θ is assumed to vary linearly from 0.9 to 0.4 as the iterative process progresses. The velocity of the jth particle, with the inertia term, is assumed as

Vj(i) = θ Vj(i − 1) + c1r1[Pbest,j − Xj(i − 1)] + c2r2[Gbest − Xj(i − 1)]; j = 1, 2, . . . , N   (13.23)
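A minimal sketch of this linearly decreasing inertia term, assuming (as this sketch does) that it is interpolated over a prescribed maximum number of iterations i_max:

```python
def inertia_weight(i, i_max, theta_max=0.9, theta_min=0.4):
    """Inertia term decreasing linearly from theta_max to theta_min over the run."""
    return theta_max - (theta_max - theta_min) * (i / i_max)
```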