AMITY INSTITUTE OF SPACE SCIENCE & TECHNOLOGY
Classical Optimization Technique
Single Variable Optimization
M S Prasad
7/15/2015
This lecture note is prepared for graduate engineering students and is based on open literature and textbooks. It should be read in conjunction with classroom discussions.
Classical Optimization Techniques: LN-2
The classical optimization techniques are useful in finding the optimum solution, i.e. the unconstrained maxima or minima of continuous and differentiable functions.
These are analytical methods and make use of differential calculus in locating the optimum solution. The classical methods have limited scope in practical applications, since many practical problems involve objective functions that are not continuous and/or differentiable. Yet the study of these classical techniques forms the basis for developing most of the numerical techniques that have evolved into the advanced methods more suitable to today's practical problems.
Single-variable optimization algorithms
The algorithms described in this section can be used to solve minimization problems of the following type:
Minimize f(x), where f(x) is the objective function and x is a real variable. The purpose of an optimization algorithm is to find a solution x for which the function f(x) is minimum.
These algorithms are classified into two categories:
i. Direct methods
ii. Gradient-based methods
Direct methods do not use any derivative information of the objective function; only objective function values are used to guide the search process. Gradient-based methods, on the other hand, use derivative information (first and/or second order) to guide the search process.
Optimality criteria
The three different types of optimal points (minima) are:
Local optimal point: x* is said to be a local optimal point if no point in its neighbourhood has a function value smaller than f(x*).
Global optimal point: a point x** is said to be a global optimal point if no point in the entire search space has a function value smaller than f(x**).
Inflection point: x* is an inflection point if f(x) increases locally as x increases and decreases locally as x decreases, or if f(x) decreases locally as x increases and increases locally as x decreases.
Let f(x) be the objective function on the chosen search space, and let f'(x) and f''(x) be its first and second derivatives. A point x is a minimum if f'(x) = 0 and f''(x) > 0. If only f'(x) = 0 is known, the point is either a minimum, a maximum or an inflection point.
Suppose that at a point x* the first derivative is zero and the order of the first non-zero higher-order derivative is n. Then:
If n is odd, x* is an inflection point.
If n is even, x* is a local optimum:
(i) if that derivative is positive, x* is a local minimum;
(ii) if that derivative is negative, x* is a local maximum.
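As an illustration, the following Python sketch applies this higher-order derivative test numerically. The finite-difference derivative estimate, the tolerance and the two test functions are illustrative choices; analytical derivatives would normally be preferred.

```python
# Sketch of the higher-order derivative test at a stationary point x*.
# Derivatives are estimated with repeated central finite differences, which is
# adequate only for smooth, well-scaled functions.

def nth_derivative(f, x, n, h=1e-3):
    """Estimate the nth derivative of f at x by repeated central differences."""
    if n == 0:
        return f(x)
    g = lambda t: nth_derivative(f, t, n - 1, h)
    return (g(x + h) - g(x - h)) / (2 * h)

def classify_stationary_point(f, x_star, max_order=6, tol=1e-4):
    """Classify a point where f'(x*) = 0 as minimum, maximum or inflection."""
    for n in range(2, max_order + 1):
        d = nth_derivative(f, x_star, n)
        if abs(d) > tol:                  # first non-zero higher-order derivative
            if n % 2 == 1:                # n odd  -> inflection point
                return "inflection"
            return "minimum" if d > 0 else "maximum"   # n even -> local optimum
    return "undetermined"

if __name__ == "__main__":
    print(classify_stationary_point(lambda x: (x - 2.0) ** 2, 2.0))  # minimum
    print(classify_stationary_point(lambda x: (x - 1.0) ** 3, 1.0))  # inflection
```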
Many powerful direct search algorithms have been developed for the unconstrained minimization of general functions. These algorithms require an initial estimate of the optimum point, denoted by x0. With this estimate as the starting point, the algorithm generates a sequence of estimates x1, x2, ... by successively searching from each point in a direction of descent to determine the next point. The process is terminated either when no further progress is made, or when a point xk is reached (for smooth functions) at which the first-order necessary condition f'(x) = 0 is satisfied to sufficient accuracy, in which case x* = xk. It is usually, although not always, required that the function value at the new iterate xi+1 be lower than that at xi.
An important sub-class of direct search methods, specifically suitable for smooth functions, is the class of so-called line search descent methods. Basic to these methods is the selection, at each iterate xi, of a descent direction ui+1 that ensures descent at xi in the direction ui+1, i.e. it is required that the directional derivative in the direction ui+1 be negative.
General structure of a line search descent method
1. Given a starting point x0 and positive tolerances ε1, ε2 and ε3, set i = 1.
2. Select a descent direction ui.
3. Perform a one-dimensional line search in the direction ui, i.e. minimize F(λ) = f(xi-1 + λ ui) with respect to λ to obtain the minimizing step length λi.
4. Set xi = xi-1 + λi ui.
5. Test for convergence: if, for example, |xi - xi-1| < ε1, |f(xi) - f(xi-1)| < ε2 or |f'(xi)| < ε3, terminate with x* ≈ xi;
else go to Step 6.
6. Set i = i + 1 and go to Step 2.
Fig 1.0 Sequence search and steps
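A minimal Python sketch of this loop for a single variable is given below; here the descent direction reduces to the sign of -f'(x), and the step length is chosen by a simple backtracking rule, an assumption made only because the one-dimensional minimization in Step 3 is left open above.

```python
# Single-variable illustration of the line search descent structure above.

def derivative(f, x, h=1e-6):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def line_search_descent(f, x0, eps=1e-6, lam0=1.0, max_iter=100):
    x = x0
    for _ in range(max_iter):
        g = derivative(f, x)
        if abs(g) < eps:                 # convergence test: f'(x) ~ 0
            break
        u = -1.0 if g > 0 else 1.0       # descent direction u_i
        lam = lam0
        while f(x + lam * u) >= f(x) and lam > 1e-12:
            lam *= 0.5                   # backtrack until descent is achieved
        x = x + lam * u                  # x_i = x_{i-1} + lambda_i * u_i
    return x

if __name__ == "__main__":
    print(line_search_descent(lambda x: (x - 3.0) ** 2 + 1.0, x0=0.0))  # ~3.0
```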
Finding the Minima
A. Bracketing Methods:
The minimum of a function is found in two phases. Initially an approximate method is used to
find a lower and an upper bound of the minimum. Next, a sophisticated technique is used to
search within these two limits to find the optimal solution.
A1. Exhaustive Search Method
This is the simplest of all search methods. The optimum of a function is bracketed by calculating the function values at a number of equally spaced points.
The search begins from a lower bound on the variable, and three consecutive function values are compared at a time, based on the assumption of unimodality of the function. Depending on the outcome of the comparison, the search is either terminated or continued by replacing one of the three points with a new point.
Algorithm
Step I
Set x1 = a. If n is the number of intermediate points, set Δx = (b - a)/n, x2 = x1 + Δx and x3 = x2 + Δx.
Step II
If f(x1) >= f(x2) <= f(x3), the minimum point lies between (x1, x3); terminate.
Else set x1 = x2, x2 = x3, x3 = x2 + Δx and go to Step III.
Step III
Is x3 <= b? If yes, go to Step II.
Else no minimum exists in (a, b), or a boundary point (a or b) is the minimum point.
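A possible Python sketch of this exhaustive bracketing is shown below, assuming a unimodal function on (a, b); the test function and the number of intermediate points n are illustrative.

```python
# Exhaustive search: slide a triplet of equally spaced points across (a, b)
# until the middle point is no worse than its two neighbours.

def exhaustive_search(f, a, b, n=100):
    """Return an interval (x1, x3) bracketing the minimum, or None."""
    dx = (b - a) / n
    x1, x2, x3 = a, a + dx, a + 2 * dx
    while x3 <= b:
        if f(x1) >= f(x2) <= f(x3):       # three-point unimodality test
            return (x1, x3)               # minimum bracketed in (x1, x3)
        x1, x2, x3 = x2, x3, x3 + dx      # slide the triplet to the right
    return None                           # minimum at a boundary, or none found

if __name__ == "__main__":
    print(exhaustive_search(lambda x: x ** 2 + 54.0 / x, 0.5, 10.0))  # around x = 3
```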
A2. Bounding Phase Method
Step I
Choose an initial guess x0 and an increment Δ. Set the counter k = 0.
Step II
If f(x0 - |Δ|) >= f(x0) >= f(x0 + |Δ|), then Δ is positive;
else if f(x0 - |Δ|) <= f(x0) <= f(x0 + |Δ|), then Δ is negative;
else go to Step I.
Step III
Set xk+1 = xk + 2^k Δ.
Step IV
If f(xk+1) < f(xk), set k = k + 1 and go to Step III;
else the minimum lies in the interval (xk-1, xk+1); terminate.
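A possible Python sketch of the bounding phase method follows; the initial guess, the increment Δ and the test function are illustrative.

```python
# Bounding phase method: choose the sign of the increment, then expand the
# step exponentially (2^k * delta) until the function value starts to rise.

def bounding_phase(f, x0, delta=0.5):
    """Return an interval that brackets a minimum of f, starting from guess x0."""
    # Step II: fix the sign of the increment from three neighbouring points.
    if f(x0 - abs(delta)) >= f(x0) >= f(x0 + abs(delta)):
        delta = abs(delta)                # f decreases towards the right
    elif f(x0 - abs(delta)) <= f(x0) <= f(x0 + abs(delta)):
        delta = -abs(delta)               # f decreases towards the left
    else:
        # x0 lies below both neighbours, so it is already bracketed
        # (the algorithm above restarts from Step I in this case).
        return (x0 - abs(delta), x0 + abs(delta))

    k = 0
    x_prev, x_k = x0, x0
    x_next = x_k + (2 ** k) * delta       # Step III: x_{k+1} = x_k + 2^k * delta
    while f(x_next) < f(x_k):             # Step IV: keep expanding while f decreases
        k += 1
        x_prev, x_k = x_k, x_next
        x_next = x_k + (2 ** k) * delta
    return (min(x_prev, x_next), max(x_prev, x_next))  # minimum in (x_{k-1}, x_{k+1})

if __name__ == "__main__":
    print(bounding_phase(lambda x: x ** 2 + 54.0 / x, x0=0.6, delta=0.5))  # (2.1, 8.1)
```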
A3. Region Elimination Methods
Once the minimum point is bracketed, a more sophisticated algorithm is used to improve the accuracy of the solution. Region elimination methods are used for this purpose. The fundamental rule for region elimination methods is as follows:
Fig 2.0 Unimodal function on (a, b) with interior points x1, x2 and intermediate point xj
Consider a unimodal function as shown above. The two points x1 and x2 lie in the interval (a, b) and satisfy x1 < x2. For minimization, the following conditions apply:
If f(x1) > f(x2), then the minimum does not lie in (a, x1).
If f(x1) < f(x2), then the minimum does not lie in (x2, b).
If f(x1) = f(x2), then the minimum does not lie in (a, x1) or in (x2, b).
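This rule can be written directly as a small helper function; the sketch below is only an illustration, with variable names following the text.

```python
def eliminate(a, b, x1, x2, f1, f2):
    """Shrink the bracket (a, b) given f1 = f(x1), f2 = f(x2) with x1 < x2."""
    if f1 > f2:
        return (x1, b)    # the minimum does not lie in (a, x1)
    if f1 < f2:
        return (a, x2)    # the minimum does not lie in (x2, b)
    return (x1, x2)       # f1 == f2: eliminate both (a, x1) and (x2, b)
```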
A variation of this rule, based on eliminating a portion of the search space at every step, is known as the Interval Halving Method. In the figure above there is an intermediate point xj, the midpoint of the current interval. The possibilities for the search are then as follows:
a. If f(x1) < f(xj), the minimum cannot lie beyond xj. Hence the search space can be reduced to the interval between a and xj, i.e. reduced by half.
b. If f(x1) > f(xj), the minimum cannot lie between a and x1, which reduces the search space by one quarter.
This process is continued until a sufficiently small search interval is achieved.
Algorithm
Step I
Choose a lower bound a, an upper bound b and a small value ε for the required accuracy.
Set xj = (a + b)/2 and L0 = L = b - a; compute f(xj).
Step II
Set x1 = a + L/4 and x2 = b - L/4; compute f(x1) and f(x2).
Step III
If f(x1) < f(xj), set b = xj and xj = x1; go to Step V.
Else go to Step IV.
Step IV
If f(x2) < f(xj), set a = xj and xj = x2; go to Step V.
Else set a = x1 and b = x2; go to Step V.
Step V
Set L = b - a. If |L| < ε, terminate.
Else go to Step II.
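A possible Python sketch of the interval halving algorithm; the accuracy ε and the test function are illustrative.

```python
# Interval halving: compare the quarter points x1, x2 with the midpoint xj and
# discard half (or two quarters) of the current bracket at every pass.

def interval_halving(f, a, b, eps=1e-5):
    xj = (a + b) / 2.0                       # Step I: midpoint of the bracket
    L = b - a
    fj = f(xj)
    while abs(L) > eps:                      # Step V: stop when the bracket is small
        x1, x2 = a + L / 4.0, b - L / 4.0    # Step II: quarter points
        f1, f2 = f(x1), f(x2)
        if f1 < fj:                          # Step III: eliminate (xj, b)
            b, xj, fj = xj, x1, f1
        elif f2 < fj:                        # Step IV: eliminate (a, xj)
            a, xj, fj = xj, x2, f2
        else:                                # eliminate both outer quarters
            a, b = x1, x2
        L = b - a
    return xj

if __name__ == "__main__":
    print(interval_halving(lambda x: x ** 2 + 54.0 / x, 2.1, 8.1))  # ~3.0
```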
A4. Fibonacci Search Method
In this method the search interval is reduced according to the Fibonacci numbers
Fn = Fn-1 + Fn-2, n = 2, 3, 4, ..., with F0 = F1 = 1.
Fig 2.1 Fibonacci search points (x1 and x2)
Algorithm:
Step 1
Choose a lower bound a and an upper bound b. Set L = b - a. Decide the desired number of iterations n. Set k = 2.
Step 2
Compute Lk* = ( Fn-k+1 / Fn+1 ) L. Set x1 = a + Lk* and x2 = b - Lk*.
Step 3
Compute the one of f(x1) and f(x2) that was not evaluated earlier.
Use the fundamental region elimination rule to eliminate a region. Set the new a and b.
Step 4
Is k = n? If no, set k = k + 1 and go to Step 2.
Else terminate.
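A possible Python sketch of the Fibonacci search is shown below. For clarity both interior points are re-evaluated at every iteration, although, as Step 3 notes, one of them always coincides with a previously evaluated point; the number of iterations n and the test function are illustrative.

```python
# Fibonacci search: interior points are placed a distance L*_k = (F_{n-k+1}/F_{n+1})*L
# from the current bounds, where L is the original interval length.

def fibonacci_search(f, a, b, n=15):
    F = [1, 1]                                   # F_0 = F_1 = 1
    while len(F) < n + 2:                        # need F_0 .. F_{n+1}
        F.append(F[-1] + F[-2])

    L = b - a                                    # original interval length
    for k in range(2, n + 1):                    # Step 4: iterate until k = n
        Lk = (F[n - k + 1] / F[n + 1]) * L       # Step 2: L*_k
        x1, x2 = a + Lk, b - Lk
        if f(x1) > f(x2):                        # Step 3: region elimination
            a = x1                               # minimum cannot lie in (a, x1)
        else:
            b = x2                               # minimum cannot lie in (x2, b)
    return (a + b) / 2.0

if __name__ == "__main__":
    print(fibonacci_search(lambda x: x ** 2 + 54.0 / x, 2.1, 8.1, n=15))  # ~3.0
```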
B. Golden Section Search
The golden section search can be viewed as the limiting form of the Fibonacci search: the ratio of successive Fibonacci numbers tends to the golden ratio, so the interval is reduced by the constant factor τ ≈ 0.618 at every iteration and the number of iterations need not be fixed in advance.
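A possible Python sketch of the golden section search, assuming a unimodal function on (a, b); the tolerance and the test function are illustrative. Note how the surviving interior point is reused, so only one new function evaluation is needed per iteration.

```python
# Golden section search: the bracket shrinks by the constant factor tau ~ 0.618
# per iteration, and one interior point is always carried over.

def golden_section_search(f, a, b, eps=1e-5):
    tau = 0.6180339887498949            # (sqrt(5) - 1) / 2
    x1, x2 = b - tau * (b - a), a + tau * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > eps:
        if f1 > f2:                     # minimum cannot lie in (a, x1)
            a, x1, f1 = x1, x2, f2      # old x2 becomes the new x1
            x2 = a + tau * (b - a)
            f2 = f(x2)
        else:                           # minimum cannot lie in (x2, b)
            b, x2, f2 = x2, x1, f1      # old x1 becomes the new x2
            x1 = b - tau * (b - a)
            f1 = f(x1)
    return (a + b) / 2.0

if __name__ == "__main__":
    print(golden_section_search(lambda x: x ** 2 + 54.0 / x, 2.1, 8.1))  # ~3.0
```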
C. Gradient-Based Methods
Although the derivative may be difficult to obtain, gradient-based methods are commonly used because the calculations involved are simple.
Newton-Raphson Method: Consider the Taylor series expansion of f'(x) about the current point xk. Setting the linear approximation f'(xk+1) ≈ f'(xk) + f''(xk)(xk+1 - xk) to zero gives the iteration xk+1 = xk - f'(xk)/f''(xk).
Algorithm:
1. Choose an initial guess x1 and a small value ε. Set k = 1 and compute f'(x1).
2. Compute f''(xk).
3. Calculate xk+1 = xk - f'(xk)/f''(xk) and compute f'(xk+1).
4. If |f'(xk+1)| < ε, terminate.
Else set k = k + 1 and go to Step 2.
Convergence depends on the initial guess.
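A possible Python sketch of the Newton-Raphson iteration; the central finite-difference approximations of f'(x) and f''(x), the step h, the tolerance and the test function are illustrative assumptions (analytical derivatives would normally be used when available).

```python
# Newton-Raphson for single-variable minimization: iterate
# x_{k+1} = x_k - f'(x_k) / f''(x_k) until |f'(x_{k+1})| is small.

def newton_raphson(f, x1, eps=1e-6, h=1e-4, max_iter=50):
    x = x1
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2 * h)            # f'(x_k)
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2  # f''(x_k)
        x_next = x - d1 / d2                            # Newton step
        d1_next = (f(x_next + h) - f(x_next - h)) / (2 * h)
        if abs(d1_next) < eps:                          # |f'(x_{k+1})| < eps
            return x_next
        x = x_next
    return x

if __name__ == "__main__":
    print(newton_raphson(lambda x: x ** 2 + 54.0 / x, x1=1.0))  # ~3.0
```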
_____________________________________________________________________________