Improved Evolutionary Strategy Genetic Algorithm for Nonlinear
Programming Problems
Hui-xia Zhu, Fu-lin Wang, Wen-tao Zhang, Qian-ting Li
School of Engineering, Northeast Agriculture University, Harbin, China
(hljzhuhuixia@126.com)
Abstract - Genetic algorithms have unique advantages
in dealing with optimization problems. In this paper the
main focus is on the improvement of a genetic algorithm and
its application in nonlinear programming problems. In the
improved evolutionary strategy, the optimal group preserving method was used and only
individuals with low fitness values were mutated. The crossover operator performed
crossover in a segmented mode over the decision variables, ensuring that each decision
variable had the opportunity to produce offspring by crossover and thus speeding up
evolution. In optimizing the
nonlinear programming problem with constraints, the
correction operator method was introduced to improve the
feasible degree of infeasible individuals. MATLAB
simulation results confirmed the validity of the proposed
method. The method can effectively solve nonlinear
programming problems with greatly improved solution
quality and convergence speed, making it an effective,
reliable and convenient method.
Keywords - nonlinear programming, genetic algorithm, improved evolutionary strategy, correction operator method

I. INTRODUCTION

The nonlinear programming problem (NPP) has become an important branch of operations research; it is mathematical programming in which the objective function or the constraints are nonlinear. A variety of traditional methods exist for solving nonlinear programming problems, such as the center method, the gradient projection method, the penalty function method, the feasible direction method and the multiplier method. These methods, however, have specific scopes and limitations: the objective function and constraints generally must be continuous and differentiable, and the traditional optimization methods become difficult to apply as the optimized object grows more complicated. The genetic algorithm overcomes these shortcomings of the traditional algorithms. It only requires that the optimization problem be computable, removing the requirement that the problem be continuous and differentiable, which places it beyond the reach of traditional methods. It uses an organized, parallel, global search with high robustness and strong adaptability, and it can achieve high optimization efficiency. The basic idea was first proposed by Professor John Holland. The genetic algorithm has been widely used in fields such as combinatorial optimization and the optimization of controllers' structural parameters, and has become one of the primary methods of solving nonlinear programming problems [1-9].

In this paper, the evolution strategy is improved after analyzing the process of the genetic algorithm, and the improved algorithm takes full advantage of the genetic algorithm to solve unconstrained and constrained nonlinear programming problems. Numerical examples in the MATLAB environment show that the proposed improved genetic algorithm is effective for solving unconstrained and constrained nonlinear programming, and the experiments prove it to be a stable algorithm with good performance.

____________________
Natural Science Foundation of China (31071331)

II. NONLINEAR PROGRAMMING PROBLEMS
Nonlinear programming problems can be divided into unconstrained problems and constrained problems [1][10]. The mathematical models are presented here in their general form.
The unconstrained nonlinear programming model:

min f(X), X ∈ E^n    (1)

where the independent variable X = (x1, x2, …, xn)^T is an n-dimensional vector (point) in the Euclidean space E^n. The unconstrained minimization problem seeks the minimum point of the objective function f(X) in E^n.
The constrained nonlinear programming model:

min f(X), X ∈ E^n
s.t. hi(X) = 0, i = 1, 2, …, m
     gj(X) ≥ 0, j = 1, 2, …, l    (2)

where "min" stands for "minimizing" and "s.t." stands for "subject to". The constrained minimization problem seeks the minimum point of the objective function f(X) in E^n subject to the constraints hi(X) = 0 and gj(X) ≥ 0.

Since max f(X) = −min[−f(X)], only the minimization of the objective function needs to be considered, without loss of generality. If some constraints are "≤" inequalities, both sides are multiplied by −1, so only constraints of the form "≥" need to be considered.
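To make the general form (2) concrete, the sketch below shows one possible programmatic representation of such a problem: an objective to minimize plus lists of equality and inequality constraint functions, together with the max-to-min and "≤"-to-"≥" transformations just described. Python is used purely for illustration (the paper's own experiments were run in MATLAB), and the names NLP, as_minimization and geq_from_leq are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

# Minimal container for problem (2):
#   min f(X),  s.t.  hi(X) = 0 (i = 1..m),  gj(X) >= 0 (j = 1..l)
@dataclass
class NLP:
    f: Callable[[Sequence[float]], float]   # objective to minimize
    h_list: List[Callable[[Sequence[float]], float]] = field(default_factory=list)
    g_list: List[Callable[[Sequence[float]], float]] = field(default_factory=list)

    def is_feasible(self, x, tol=1e-6):
        """Check all equality and inequality constraints at point x."""
        return (all(abs(h(x)) <= tol for h in self.h_list)
                and all(g(x) >= -tol for g in self.g_list))

def as_minimization(f_max):
    """max f(X) = -min[-f(X)]: pose a maximization problem as minimization."""
    return lambda x: -f_max(x)

def geq_from_leq(c_leq):
    """A constraint c(X) <= 0 multiplied by -1 becomes -c(X) >= 0."""
    return lambda x: -c_leq(x)
```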
III. ANALYSIS AND DESCRIPTION OF THE
IMPROVED GENETIC ALGORITHM
Based on the simple genetic algorithm, this section gives the analysis, design and description of the algorithm with the improved evolutionary strategy.
A. Encoding and Decoding
Binary encoding and multi-parameter cascade encoding are used: each parameter is encoded in binary, and the encoded parameters are then concatenated in a fixed order to form the final code, which represents an individual containing all parameters. The bit string length depends on the required solution precision of the specific problem; the higher the required precision, the longer the bit string.
If the interval of a parameter is [A, B] and the precision is c digits after the decimal point, the bit string length L satisfies

(B − A) · 10^c ≤ 2^L    (3)

where L is the smallest integer for which the inequality holds.
If the interval of a parameter is [A, B] and the corresponding substring in the individual code is b_L b_(L−1) b_(L−2) … b_2 b_1, then the decoding formula is

X = A + ( Σ_{i=1}^{L} b_i · 2^(i−1) ) · (B − A) / (2^L − 1)    (4)
B. Production of the Initial Population
There are two cases when producing the initial population: solving an unconstrained problem and solving a constrained problem. Suppose the number of decision variables is n, the population size is m, and ai and bi are the lower and upper limits of decision variable i, respectively. For the unconstrained problem, binary encoding is adopted to randomly produce the initial individuals of the population.
For constrained problems, the initial population can be selected at random under the constraint conditions, or it can be produced in the following manner:
First, a known initial feasible individual X1(0) is given artificially; it satisfies

gj(X1(0)) = gj(X11(0), X12(0), X13(0), …, X1n(0)) ≥ 0, j = 1, 2, …, l
The other individuals are produced in the following way [11]:

X2(0) = A + r2 (B − A)    (5)

where A = (a1, a2, a3, …, an)^T, B = (b1, b2, b3, …, bn)^T, r2 = (r21, r22, r23, …, r2n)^T, and each random number rij ~ U(0, 1) (the product is taken componentwise).
Whether X2(0) satisfies the constraints is then checked. If it does, the next individual is produced in the same way as X2(0); if it does not, X2(0) is corrected by the correction operator.
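A sketch of this initialization scheme for the constrained case, following Eq. (5): starting from the given feasible inner point X1(0), each further candidate is drawn at random between the bounds and repaired if it violates a constraint. Python is used for illustration; `repair` stands for the correction operator described in the next subsection, and the function names are assumptions.

```python
import random

def initial_population(x1_feasible, A, B, m, g_list, repair):
    """Produce m feasible individuals; the first is a given feasible inner point."""
    population = [list(x1_feasible)]
    while len(population) < m:
        # Eq. (5): X = A + r * (B - A), with each r_i drawn from U(0, 1).
        candidate = [a + random.random() * (b - a) for a, b in zip(A, B)]
        if not all(g(candidate) >= 0 for g in g_list):
            candidate = repair(candidate, x1_feasible, g_list)  # correction operator
        population.append(candidate)
    return population
```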
C. Correction Operator Method
When the genetic algorithm is applied to constrained nonlinear programming problems, the core problem is how to treat the constraint conditions. The problem is first solved as if it were unconstrained, and the search process checks for constraint violations: if there is no violation, the candidate is a feasible solution; otherwise it is infeasible. The traditional way of dealing with infeasible solutions is to penalize infeasible chromosomes or to discard them; its essence is to eliminate infeasible solutions so as to reduce the search space during evolution [12-17]. The improved evolutionary strategy genetic algorithm instead uses the correction operator method, which applies a repair strategy to fix infeasible solutions. Unlike the penalty function method, the correction operator method uses only the transformed objective function as the measure of fitness, with no additional penalty terms, and it always returns feasible solutions. It breaks with the traditional idea, avoids the low search efficiency caused by rejecting infeasible solutions, avoids the premature convergence caused by introducing a penalty factor, and also avoids problems such as the result deviating considerably from the constraint region after the mutation operation.
If there are r linear equality constraints and the rank of the linear system is r < n, the decision variables can be expressed in terms of n − r of them. Substituting these expressions into the inequality constraints and the objective function turns the original problem in n decision variables into a problem in n − r decision variables with only inequality constraints. Therefore only problems with inequality constraints need to be considered. The initial individuals, the offspring produced by crossover, and the individuals obtained after mutation are all checked against the constraints and, if necessary, repaired immediately. This design of the genetic operations keeps the solution vectors within the feasible region at all times.
The correction operator is realized as follows. Each individual is tested against the constraints. If it satisfies them, the genetic operation continues; if not, it is made to approach a previously obtained feasible individual (assumed to be X1(0), which should be an inner point). The approach is an iterated process according to the formula

X2(0) = X1(0) + α (X2(0) − X1(0))    (6)

where α is the step length factor. If the constraints are still not satisfied, an accelerated contraction step length is used, namely α = (1/2)^k, where k is the number of search steps performed. A large step length factor can hinder constraint satisfaction, reduce the repairing effect and even harm the search efficiency and speed, whereas a step length factor that is too small cannot play the role of a proper correction. Gradually reducing the step length factor therefore both protects the previous correction results and gives full play to the correction strategy.
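The correction iteration of Eq. (6), with the halving step length factor, can be sketched as follows (Python for illustration; the function name and the cap on the number of contraction steps are assumptions):

```python
def repair(x_infeasible, x_feasible, g_list, max_steps=30):
    """Pull an infeasible point toward a feasible inner point, as in Eq. (6)."""
    x = list(x_infeasible)
    for k in range(1, max_steps + 1):
        if all(g(x) >= 0 for g in g_list):
            return x                                   # feasible: repair done
        alpha = 0.5 ** k                               # accelerated contraction step
        x = [xf + alpha * (xi - xf) for xf, xi in zip(x_feasible, x)]
    return list(x_feasible)                            # fall back to the inner point
```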
In this way X2(0) becomes a feasible individual after a few iterations; X3(0) is then produced in the same manner as X2(0) and made feasible, and so on until all the required feasible individuals have been produced. For the binary genetic algorithm, these feasible individuals are its phenotype form; the real-coded individuals are converted into binary strings according to the mapping between genotype and phenotype, which yields the feasible individuals of the binary genetic algorithm.
This linear search, which moves an infeasible individual in the direction of a feasible one, has the advantage of improving infeasible individuals and actively guiding them towards the extreme point of the population, allowing the algorithm to optimize over the global space. This paper introduces the correction operator to improve the feasibility of infeasible individuals; the method is simple and practicable, and the treatment of infeasible individuals is one novelty of the improved evolutionary strategy of the genetic algorithm.
D. Fitness Functions

If the objective function is to be minimized, the following transformation is applied [18]:

Fit(f(x)) = cmax − f(x), if f(x) < cmax;  Fit(f(x)) = 0, otherwise    (7)

Here, cmax is an estimated value that is sufficiently large for the problem.
If the objective function is to be maximized, the following transformation is applied:

Fit(f(x)) = f(x) − cmin, if f(x) > cmin;  Fit(f(x)) = 0, otherwise    (8)

Here, cmin is an estimated value that is sufficiently small for the problem.

E. Selection Operator

The selection operator uses the roulette-wheel selection method. The selection probability of individual i is

psi = fi / Σ_{j=1}^{m} fj    (9)

where fi is the fitness value of individual i and m is the population size.
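The fitness transformations (7)-(8) and the roulette-wheel selection of Eq. (9) might be sketched as follows (Python for illustration; the function names are assumptions):

```python
import random

def fitness_min(f_value, c_max):
    """Eq. (7): fitness for a minimization objective."""
    return c_max - f_value if f_value < c_max else 0.0

def fitness_max(f_value, c_min):
    """Eq. (8): fitness for a maximization objective."""
    return f_value - c_min if f_value > c_min else 0.0

def roulette_select(population, fitnesses):
    """Eq. (9): pick one individual with probability f_i / sum of all f_j."""
    total = sum(fitnesses)
    if total == 0:                                 # degenerate case: uniform choice
        return random.choice(population)
    r = random.random() * total
    accumulated = 0.0
    for individual, fit in zip(population, fitnesses):
        accumulated += fit
        if accumulated >= r:
            return individual
    return population[-1]
```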
F. Crossover Operator

Practical problems may have many decision variables. Because binary encoding and multi-parameter cascade encoding are adopted, single-point crossover would cross only one decision variable at one position in this encoding, leaving the other variables without any crossover. Therefore a segmented crossover mode over the decision variables is used: each decision variable's segment undergoes single-point crossover with probability pc, so every decision variable has an opportunity to produce offspring by crossover. This improvement is another novelty of the improved evolutionary strategy of the genetic algorithm.
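A sketch of the segmented crossover: the concatenated chromosome is split at the decision-variable boundaries and each segment independently undergoes single-point crossover with probability pc (Python for illustration; the names are assumptions):

```python
import random

def segmented_crossover(parent1, parent2, segment_lengths, pc):
    """Apply single-point crossover independently within each variable's segment.

    parent1 and parent2 are bit lists; segment_lengths holds the bit length of
    each decision variable in the cascade encoding.
    """
    child1, child2 = [], []
    start = 0
    for length in segment_lengths:
        s1 = parent1[start:start + length]
        s2 = parent2[start:start + length]
        if length > 1 and random.random() < pc:
            point = random.randint(1, length - 1)   # crossover point in this segment
            s1, s2 = s1[:point] + s2[point:], s2[:point] + s1[point:]
        child1 += s1
        child2 += s2
        start += length
    return child1, child2
```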
G. Mutation Operator

Alleles of some genes are randomly flipped according to the mutation probability pm. Before mutation, the parent individuals and the offspring produced by crossover are sorted together by their fitness values, and only the individuals with low fitness values are mutated. In this way good schemata are protected from being destroyed, while the mutation probability can be appropriately increased to generate more new individuals, which increases the population's diversity, helps the search traverse more states, and helps it jump out of local optima.
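A sketch of the mutation step restricted to the low-fitness individuals of the combined population (Python for illustration; the names and the half/half split, which follows the population-evolution step described below, are assumptions):

```python
import random

def mutate_low_fitness(sorted_population, pm):
    """Bit-flip mutation with probability pm, applied only to the lower-fitness half.

    sorted_population is the combined parent + offspring population sorted by
    fitness in descending order; the better half is preserved unchanged.
    """
    half = len(sorted_population) // 2
    preserved = [individual[:] for individual in sorted_population[:half]]
    mutated = []
    for individual in sorted_population[half:]:
        mutated.append([1 - bit if random.random() < pm else bit
                        for bit in individual])
    return preserved, mutated
```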
H. Population Evolution

In the process of population evolution, the parent individuals and the offspring produced by crossover are combined into a new temporary population and the fitness value of each individual in it is calculated. The m individuals with high fitness values are preserved, the m individuals with low fitness values are mutated, and the mutated m individuals and the previously preserved m individuals are combined into a new temporary population. The individuals in this temporary population are then sorted according to their fitness values, and the m individuals with the highest fitness values are selected as the next generation, which completes one generation of population evolution.
This evolution method is based on the traditional elite preserving method but preserves an optimal group. Its advantage is that it reduces the possibility of the optimal solution being destroyed by crossover or mutation during evolution. Moreover, it avoids the premature convergence that may occur in the traditional elite preserving method when all individuals quickly approach one or two individuals with high fitness values. This is another novel aspect of the improved evolutionary strategy of the genetic algorithm.
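Putting the operators together, one reading of the improved evolutionary strategy loop is sketched below (Python for illustration). The helpers `roulette_select`, `segmented_crossover` (bound to a two-argument `crossover` callable) and `mutate_low_fitness` are the hypothetical sketches given above, `fitness_of` maps an individual to its fitness, and the stopping test anticipates criterion (10) of the next subsection; for constrained problems each new individual would additionally be repaired with the correction operator, which is omitted here for brevity.

```python
def evolve(initial_population, fitness_of, select, crossover, mutate_low_fitness,
           m, pm, max_generations, eps):
    """Optimal-group-preserving evolution with mutation of low-fitness individuals."""
    population = initial_population
    for generation in range(max_generations):
        fits = [fitness_of(ind) for ind in population]
        if max(fits) - min(fits) <= eps:               # stopping criterion, Eq. (10)
            break
        # Offspring by roulette selection and segmented crossover.
        offspring = []
        while len(offspring) < m:
            p1 = select(population, fits)
            p2 = select(population, fits)
            offspring.extend(crossover(p1, p2))
        # Parents and offspring together, best first.
        combined = sorted(population + offspring[:m], key=fitness_of, reverse=True)
        # Preserve the best m, mutate the worst m, keep the best m of both groups.
        preserved, mutated = mutate_low_fitness(combined, pm)
        population = sorted(preserved + mutated, key=fitness_of, reverse=True)[:m]
    return max(population, key=fitness_of)
```

For instance, `crossover` could be bound as `lambda a, b: segmented_crossover(a, b, segment_lengths, pc)`.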
I. Algorithm Stopping Criteria

Two criteria are adopted to terminate the algorithm:
(1) The number of generations exceeds a preset value;
(2) The difference between fitness values in two successive generations is less than or equal to a given precision, namely the condition

| Fitmax − Fitmin | ≤ ε    (10)

is met, where Fitmax is the maximum individual fitness value of the population and Fitmin is the minimum individual fitness value of the population.

IV. EXPERIMENTAL DATA AND RESULTS

A. Experimental Data and Parameters
In the experiment, simulations of two examples were used to validate the correctness of the algorithm and to test its performance. The hardware environment was an Intel Pentium Dual-Core E2200 @ 2.20 GHz with 2 GB RAM; the operating system was Microsoft Windows XP and the compile environment was MATLAB 7.11.0 (R2010b).

B. Experimental Results and Analysis

In the table below, the interval lower bound is a, the interval upper bound is b, the precision is c digits after the decimal point, the population size is m, and the maximum number of generations is T.
Example 1:

min f1(x) = Σ_{i=1}^{n} xi^2    (11)

f1(x) is a continuous, convex, single-peak function and the problem is unconstrained, with a single global minimum of 0 at the origin. We selected n = 2, n = 5 and n = 10 in the simulation experiment to verify the correctness of the Improved Evolutionary Strategy Genetic Algorithm (IESGA). f1(x) was run 100 times with crossover probability 0.75, mutation probability 0.05 and termination precision 0, and all runs converged to the optimal solution. The parameter settings and calculation results are shown in Table I:
TABLE I
PARAMETER SETTINGS AND CALCULATION RESULTS

f1(x)                                      n=2        n=5                n=10
a                                          -5.12      -5.12              -5.12
b                                          5.12       5.12               5.12
c                                          6          6                  6
m                                          80         80                 80
T                                          100        200                500
Number of generations to obtain the
optimal solution for the first time        45         145                350
Variable values                            (0, 0)     (0, 0, 0, 0, 0)    (-0.000023, 0, -0.000030, 0, 0, 0, -0.000396, -0.000396, 0, 0.000010)
Optimal solution                           0          0                  0
As the optimization results in Table I show, the Improved Evolutionary Strategy Genetic Algorithm has fast computing speed and high accuracy and converges robustly to the global optimal solution. As the number of decision variables increases, the number of generations needed to obtain the optimal solution for the first time also increases, which accords with expectation.
Example 2:

max f2(x) = −2x1^2 + 2x1x2 − 2x2^2 + 4x1 + 6x2    (12)
s.t. 2x1^2 − x2 ≤ 0
     x1 + 5x2 ≤ 5
     x1, x2 ≥ 0
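For illustration, Example 2 can be written in the simple representation sketched in Section II: the "≤" constraints are negated into the "≥ 0" form and the maximization is handled by negating the objective (Python; the variable names are assumptions).

```python
def f2(x):
    x1, x2 = x
    return -2 * x1**2 + 2 * x1 * x2 - 2 * x2**2 + 4 * x1 + 6 * x2

# Constraints rewritten in the gj(X) >= 0 form of problem (2).
g_list = [
    lambda x: -(2 * x[0] ** 2 - x[1]),    # 2*x1^2 - x2 <= 0
    lambda x: -(x[0] + 5 * x[1] - 5),     # x1 + 5*x2 <= 5
    lambda x: x[0],                       # x1 >= 0
    lambda x: x[1],                       # x2 >= 0
]

# max f2 = -min(-f2): objective passed to the minimizing algorithm.
objective_to_minimize = lambda x: -f2(x)

# Reported theoretical optimum for reference: f2(0.658, 0.868) = 6.613.
```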
The objective function f2(x) is a quadratic polynomial, and the constraints are linear and nonlinear inequalities; the theoretical optimum is f2(0.658, 0.868) = 6.613. In the simulation experiment the crossover probability was 0.75, the mutation probability was 0.05, the termination precision was 0 and the maximum number of generations was 70; f2(x) was run 100 times and all runs converged to the optimal solution. Table II compares the simulation results of the Feasible Direction method (FD), the Penalty Function method (PF) [19] and the Improved Evolutionary Strategy Genetic Algorithm (IESGA).
TABLE II
THE COMPARISON OF SIMULATION RESULTS OF FD, PF AND IESGA

f2(x)      x1          x2          Optimal solution
FD         0.630       0.874       6.544
PF         0.645       0.869       6.566
IESGA      0.658872    0.868225    6.613083
As the optimization results in Table II show, the result obtained with the Improved Evolutionary Strategy Genetic Algorithm is better than that of the other two methods, and the optimal solution reaches the theoretical value. This shows that using the Improved Evolutionary Strategy Genetic Algorithm to optimize constrained nonlinear programming is correct and effective, and that it is a reliable and efficient global optimization algorithm.
V. CONCLUSIONS
(1) The Improved Evolutionary Strategy Genetic
Algorithm preserves the optimal groups based on the
traditional elite preservation method. The advantage of
this method is that it reduces the possibility of optimal
solutions being destroyed by crossovers or mutations in
the process of evolution. Premature convergence, which may occur in the traditional elite preservation method because all individuals quickly approach one or two individuals with high fitness values, is avoided.
(2) The correction operator breaks with the traditional approach and avoids problems such as the low search efficiency caused by rejecting infeasible solutions, the premature convergence caused by introducing a penalty factor, and the considerable deviation from the constraint region after the mutation operation.
(3) The combination of the improved evolutionary
strategy and the method of correction operator can
effectively solve many nonlinear programming problems,
greatly improve solution quality and convergence speed,
realize the linear search method of moving infeasible
individuals towards feasible individuals, and effectively
guide infeasible individuals.
The disposal of infeasible individuals by the
correction operator is simple and effective. It is proved to
be an effective, reliable, and convenient method.
REFERENCES
[1] Operations research editorial group. Operations Research (3rd edition) (in Chinese). Beijing: Tsinghua University Press, 2005, pp. 133-190.
[2] Bazaraa M S, Shetty C M. Nonlinear Programming: Theory and Algorithms. New York: John Wiley & Sons, 1979, pp. 124-159, 373-378.
[3] Bi Yiming, Li Jingwen, Li Guomin, Liu Xuemei. "Design and realization of genetic algorithm for solving nonlinear programming problem" (in Chinese), Systems Engineering and Electronics, Vol. 22, no. 2, pp. 82-89, 2000.
[4] Liang Ximing, Zhu Can, Yan Donghuang. "Novel genetic algorithm based on species selection for solving constrained non-linear programming problems" (in Chinese), Journal of Central South University (Science and Technology), Vol. 40, no. 1, pp. 185-189, 2009.
[5] Holland J H. Adaptation in Natural and Artificial Systems. USA: Univ. of Michigan, 1975.
[6] Hansen J V. "Genetic search methods in air traffic control", Computers and Operations Research, Vol. 31, no. 3, pp. 445-459, 2004.
[7] Saleh H A, Chelouah R. "The design of the global navigation satellite system surveying networks using genetic algorithms", Engineering Applications of Artificial Intelligence, Vol. 17, no. 1, pp. 111-122, 2004.
[8] Juidette H, Youlal H. "Fuzzy dynamic path planning using genetic algorithms", Electronics Letters, Vol. 36, no. 4, pp. 374-376, 2000.
[9] Iyer, Srikanth K, Saxena, et al. "Improved genetic algorithm for the permutation flowshop scheduling problem", Computers and Operations Research, Vol. 31, no. 4, pp. 593-606, 2004.
[10] Sui Yunkang, Jia Zhichao. "A continuous approach to 0-1 linear problem and its solution with genetic algorithm", Mathematics in Practice and Theory, Vol. 40, no. 6, pp. 119-127, 2010.
[11] Wang Fulin, Wang Jiquan, Wu Changyou, Wu Qiufeng. "The improved research on actual number genetic algorithms" (in Chinese), Journal of Biomathematics, Vol. 21, no. 1, pp. 153-158, 2006.
[12] Gao Juan. "Genetic algorithm and its application in nonlinear programming", Master dissertation, Xi'an University of Architecture and Technology, Xi'an, China, 2010.
[13] Wang Denggang, Liu Yingxi, Li Shouchen. "Hybrid genetic algorithm for solving a class of nonlinear programming problems" (in Chinese), Journal of Shanghai Jiaotong University, Vol. 37, no. 12, pp. 1953-1956, 2003.
[14] Ge Yaping, Wang Jianhong, Yan Shijian. "A differentiable and 'almost' exact penalty function method for nonlinear programming" (in Chinese), Journal of Nanjing Normal University (Natural Science Edition), Vol. 31, no. 1, pp. 38-41, 2008.
[15] He Dakuo, Wang Fuli, Mao Zhizhong. "Improved genetic algorithm in discrete variable non-linear programming problems" (in Chinese), Control and Decision, Vol. 21, no. 4, pp. 396-399, 2006.
[16] Tang Jiafu, Wang Dingwei, Gao Zhen, Wang Jin. "Hybrid genetic algorithm for solving non-linear programming problem" (in Chinese), Acta Automatica Sinica, Vol. 26, no. 3, pp. 401-404, 2000.
[17] Wang Xiaoping, Cao Liming. Genetic Algorithm - Theories, Applications and Software Realization (in Chinese). Xi'an: Xi'an Jiaotong University Press, 2002, pp. 1-210.
[18] Wang Dingwei, Wang Junwei, Wang Hongfeng, Zhang Ruiyou, Guo Zhe. Intelligent Optimization Methods (in Chinese). Beijing: Higher Education Press, 2007, pp. 20-80.
[19] Tang Jiafu, Wang Dingwei. "Improved genetic algorithm for nonlinear programming problems" (in Chinese), Journal of Northeastern University (Natural Science), Vol. 18, no. 5, pp. 490-493, 1997.