System Optimization
Liang Yu
Department of Biological Systems Engineering
Washington State University
April 16, 2013
Outline
Background of Engineering Optimization
Application of Optimization
Introduction
Fundamentals of Optimization
Optimization Toolbox in Matlab
Optimization in Process Plants
Engineering applications of optimization
Some typical applications from different engineering disciplines:
Design of water resources systems for maximum benefit
Design of pumps, turbines, and heat transfer equipment for maximum efficiency
Optimum design of chemical processing equipment and plants
Selection of a site for an industry
Optimum design of control systems
Application: Metabolic Flux Analysis
Flux Balance Analysis (FBA)
• in silico simulation
• Linear programming (LP)
• Genome-scale
maximize ∑ c_i·v_i
s.t.
  S·v = 0
  lb ≤ v ≤ ub
Metabolic steady state
13C-assisted Metabolic Flux Analysis
• in vivo search
• Nonlinear programming (NLP)
• Simplified model
minimize ∑ (MDV_exp − MDV_sim)²
s.t.
  S·v = 0
  IDV = f(v, IMM, IDV)
  MDV = M·IDV
  lb ≤ v ≤ ub
IDV: isotopomer distribution vector
MDV: mass distribution vector
Metabolic and isotopic steady state
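As a sketch, the FBA formulation above can be posed with MATLAB's linprog; the stoichiometric matrix, bounds, and objective below are illustrative placeholders, not a real genome-scale model:

% A toy flux balance analysis (FBA) posed as the LP above.
S  = [ 1 -1  0 ;          % stoichiometric matrix (metabolites x reactions)
       0  1 -1 ];
c  = [0; 0; 1];           % objective weights (e.g., a biomass reaction)
lb = [0; 0; 0];           % lower flux bounds
ub = [10; 10; 10];        % upper flux bounds
% linprog minimizes, so negate c to maximize c'*v subject to S*v = 0
[v, fval] = linprog(-c, [], [], S, zeros(size(S,1),1), lb, ub);
v                          % optimal flux distribution
maxObjective = -fval       % maximized objective value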
Application for optimization of biorefinery configurations
Pham, Viet, and Mahmoud El-Halwagi. "Process synthesis and optimization of biorefinery configurations." AIChE Journal 58.4 (2012): 1212-1221.
Part of the branching trees for the production of bioalcohols from lignocellulosic biomass (optimization tree)
Introduction
Definition
Optimization is the act of obtaining the best result under given circumstances. It can be defined as the process of finding the conditions that give the maximum or minimum value of a function.
Goal
Either to minimize the effort required or to maximize the desired benefit.
An optimization problem can be linear or nonlinear. Nonlinear optimization is accomplished by numerical "search methods", which are applied iteratively until a solution is reached. The search procedure is termed an algorithm.
Introduction
The minimum of f(x) is the same as the maximum of −f(x).
The optimum solution is found while satisfying the constraints; the derivative must be zero at the optimum.
Introduction
A linear problem is solved by the Simplex or graphical methods.
The solution of a linear problem lies on the boundary of the feasible region.
The solution of a nonlinear problem may lie within or on the boundary of the feasible region.
Solution of a linear problem
Three-dimensional solution of a nonlinear problem
Introduction
Optimization Programming Languages
GAMS - General Algebraic Modeling System
LINDO - Widely used in business applications
AMPL - A Mathematical Programming Language
Others: MPL, ILOG
Software with Optimization Capabilities
Excel – Solver
MATLAB
MathCAD
Mathematica
Maple
Others
Statement of an optimization problem
An optimization or mathematical programming problem can be stated as:

Find X = (x1, x2, ..., xn)^T which minimizes f(X)

subject to the constraints
  g_j(X) ≤ 0, j = 1, 2, ..., m
  l_j(X) = 0, j = 1, 2, ..., p
Fundamentals of Optimization
Single Objective function f(x)
Maximization
Minimization
Design variables, x_i, i = 1, 2, ..., n
Constraints
Inequality
Equality
Maximize X1 + 1.5 X2
Subject to:
X1 + X2 ≤ 150
0.25 X1 + 0.5 X2 ≤ 50
X1 ≥ 50
X2 ≥ 25
X1 ≥ 0, X2 ≥ 0
Example of design variables and constraints used in optimization.
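A sketch of solving the example LP above with linprog (note linprog minimizes, so the objective is negated):

% The example LP above solved with linprog.
f  = [-1; -1.5];           % negated objective to maximize X1 + 1.5*X2
A  = [1 1; 0.25 0.5];      % inequality constraints A*x <= b
b  = [150; 50];
lb = [50; 25];             % X1 >= 50, X2 >= 25 (also enforces x >= 0)
[x, fval] = linprog(f, A, b, [], [], lb, []);
x                          % optimum at x = [100; 50]
maxval = -fval             % maximum objective value, 175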
Optimal points
Local minimum/maximum points: a point (or solution) x* is a local optimum if no other x in its neighborhood has a better function value than x*.
Global minimum/maximum points: a point (or solution) x** is a global optimum if no other x in the entire search space has a better function value than x**.
Fundamentals of Optimization
Global versus local optimization. A local optimum is also the global optimum if the function is convex.
Fundamentals of Optimization
A function f is convex if f(Xa), at any point Xa on the line segment joining X1 and X2, is less than or equal to the value of the chord joining f(X1) and f(X2).
Convexity condition – the Hessian (second-order derivative) matrix of f must be positive semidefinite (eigenvalues positive or zero).
Convex and nonconvex set
Convex function
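A minimal numerical sketch of this convexity check (the function here is illustrative):

% For f(x,y) = x^2 + x*y + y^2 the Hessian is constant; nonnegative
% eigenvalues mean it is positive semidefinite, so f is convex.
H = [2 1; 1 2];            % Hessian of x^2 + x*y + y^2
e = eig(H)                 % eigenvalues 1 and 3, both positive
isConvex = all(e >= 0)     % true, so f is convex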
Mathematical Background
The slope or gradient of the objective function f represents the direction in which the function will decrease or increase most rapidly:

df/dx = lim(Δx→0) Δf/Δx = lim(Δx→0) [f(x + Δx) − f(x)] / Δx
Taylor series expansion:

f(xp + Δx) = f(xp) + (df/dx)|xp · Δx + (1/2!) · (d²f/dx²)|xp · (Δx)² + ...
Jacobian – the matrix of gradients of f and g with respect to several variables:

J = [ ∂f/∂x  ∂f/∂y  ∂f/∂z
      ∂g/∂x  ∂g/∂y  ∂g/∂z ]
Mathematical Background
Slope – First-Order Condition (FOC) – provides the function's slope information:

∇f(X*) = 0
Hessian – the second derivative of a function of several variables; its definiteness indicates a minimum (positive definite) or a maximum (negative definite):

H = [ ∂²f/∂x²   ∂²f/∂x∂y
      ∂²f/∂y∂x  ∂²f/∂y² ]
Second order condition (SOC)
The eigenvalues of H(X*) are all positive.
Equivalently, all leading principal minors (lower-order determinants) of H(X*) are positive.
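A minimal sketch of checking the FOC and SOC symbolically, assuming the Symbolic Math Toolbox is available (the function is illustrative):

syms x y
f = x^2 + x*y + 2*y^2 - 4*x;     % illustrative smooth function
g = gradient(f, [x y]);          % gradient; FOC is g = 0
crit = solve(g(1), g(2));        % stationary point (struct with fields x, y)
H = hessian(f, [x y]);           % Hessian matrix, here [2 1; 1 4]
e = eig(H)                       % all eigenvalues positive => a minimum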
Optimization Algorithm
Deterministic – specific rules move from one iteration to the next, using gradient and Hessian information.
Stochastic – probabilistic rules are used for the subsequent iteration.
Optimal design – engineering design based on an optimization algorithm.
Lagrangian method – the sum of the objective function and a linear combination of the constraints.
Optimization Methods
Deterministic
Direct search – uses objective function values to locate the minimum.
Gradient based – uses first- or second-order derivatives of the objective function. The minimization form f(x) is used; a maximization problem is handled by minimizing −f(x).
Single-variable techniques
Newton-Raphson – a gradient-based technique (FOC).
Golden-section search – a step-size-reducing iterative method (see the sketch below).
Multivariable techniques (make use of single-variable techniques, especially golden section)
Unconstrained optimization
Powell's method – fits a quadratic (degree-2) polynomial to the objective function; non-gradient based.
Gradient based – steepest descent (FOC) or least mean squares (LMS).
Hessian based – conjugate gradient (FOC) and BFGS (SOC).
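A minimal golden-section search sketch (the function and interval are illustrative assumptions):

f = @(x) (x - 2).^2 + 1;        % example unimodal function
a = 0; b = 5;                   % initial bracketing interval
tau = (sqrt(5) - 1)/2;          % golden ratio factor, about 0.618
while (b - a) > 1e-6
    x1 = b - tau*(b - a);       % interior points
    x2 = a + tau*(b - a);
    if f(x1) < f(x2)
        b = x2;                 % minimum lies in [a, x2]
    else
        a = x1;                 % minimum lies in [x1, b]
    end
end
xmin = (a + b)/2                % approximately 2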
Optimization Methods - Constrained
Constrained Optimization
Indirect approach – transforms the problem into an unconstrained one, e.g., the Exterior Penalty Function (EPF) and Augmented Lagrange Multiplier (ALM) methods, then applies steepest descent or LMS.
Direct methods – Sequential Linear Programming (SLP), Sequential Quadratic Programming (SQP), and the Generalized Reduced Gradient (GRG) method.
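As a sketch of SQP in practice, fmincon's 'sqp' algorithm can be selected via options (the problem data here are illustrative):

fun = @(x) (x(1) - 1)^2 + (x(2) - 2)^2;   % smooth objective
A = [1 1]; b = 2;                          % linear inequality x1 + x2 <= 2
options = optimset('Algorithm', 'sqp');
[x, fval] = fmincon(fun, [0; 0], A, b, [], [], [], [], [], options)
% expected optimum near x = [0.5; 1.5]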
Advanced Optimization Methods
Global optimization – stochastic techniques
Simulated Annealing (SA) – based on the minimum-energy principle of cooling a metal into a crystalline structure.
Genetic Algorithm (GA) – based on the survival-of-the-fittest principle of evolutionary theory.
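A hedged sketch of these stochastic solvers as provided by MATLAB's Global Optimization Toolbox (assumed installed); the Rastrigin test function is a common multimodal benchmark:

rastrigin = @(x) 10*numel(x) + sum(x.^2 - 10*cos(2*pi*x));  % multimodal test function
xsa = simulannealbnd(rastrigin, [2 2])   % simulated annealing from [2 2]
xga = ga(rastrigin, 2)                   % genetic algorithm, 2 variables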
Optimization Toolbox in Matlab
Key features
Interactive tools for defining and solving optimization problems and monitoring solution progress
Solvers for nonlinear and multiobjective optimization
Solvers for nonlinear least squares, data fitting, and nonlinear equations
Methods for solving quadratic and linear programming problems
Methods for solving binary integer programming problems
Parallel computing support in selected constrained nonlinear solvers
How to use Optimization Toolbox
Optimization Functions
Objective functions can be supplied directly as M-files.
Syntax: [x,fval] = fminsearch(fun,x0) (see the sketch below)
Optimization Tool graphical user interface (GUI)
Define and modify problems quickly
Use the correct syntax for optimization functions
Import and export from the MATLAB workspace
Generate code containing your configuration for a solver and options
Change parameters of an optimization during the execution of certain Global Optimization Toolbox functions
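For instance, the fminsearch syntax above can be exercised as follows (the objective and starting point are illustrative):

fun = @(x) (x(1) - 1)^2 + (x(2) + 2)^2;  % anonymous objective function
x0 = [0, 0];                              % initial guess
[x, fval] = fminsearch(fun, x0)           % x tends to [1, -2], fval to 0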
Function Optimization
Optimization concerns the minimization or maximization of functions.
Standard optimization problem:

minimize f(x) with respect to x

subject to:
  g_j(x) ≤ 0            (inequality constraints)
  h_i(x) = 0            (equality constraints)
  x_k^L ≤ x_k ≤ x_k^U   (side constraints)

where:
f(x) is the objective function, which measures and evaluates the performance of a system. In a standard problem, we minimize the function; maximization is equivalent to minimizing the negative of the objective function.
x is a column vector of design variables, which can affect the performance of the system.
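As a sketch, this standard form maps directly onto fmincon's argument list (the constraint data below are illustrative):

% fmincon(fun, x0, A, b, Aeq, beq, lb, ub, nonlcon): A*x <= b and
% Aeq*x = beq are linear constraints, lb/ub are side constraints, and
% nonlcon returns the nonlinear g(x) <= 0 and h(x) = 0.
fun = @(x) x(1)^2 + x(2)^2;                   % objective f(x)
nonlcon = @(x) deal(x(1)^2 + x(2)^2 - 1, ...  % g(x) <= 0
                    x(1) - x(2)^2);           % h(x) = 0
lb = [-2; -2]; ub = [2; 2];                   % side constraints
[x, fval] = fmincon(fun, [0.5; 0.5], [], [], [], [], lb, ub, nonlcon)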
Function Optimization
Constraints – limitations on the design space; they can be linear or nonlinear, explicit or implicit functions.

  h_i(x) = 0            (equality constraints)
  g_j(x) ≤ 0            (inequality constraints; most algorithms require the less-than form)
  x_k^L ≤ x_k ≤ x_k^U   (side constraints)
Optimization Toolbox Solvers
Minimizers
This group of solvers attempts to find a local minimum of the objective function
near a starting point x0. They address problems of unconstrained optimization,
linear programming, quadratic programming, and general nonlinear
programming.
Multiobjective minimizers
This group of solvers attempts to either minimize the maximum value of a set of
functions (fminimax), or to find a location where a collection of functions is below
some prespecified values (fgoalattain).
Equation solvers
This group of solvers attempts to find a solution to a scalar- or vector-valued
nonlinear equation f(x) = 0 near a starting point x0. Equation-solving can be
considered a form of optimization because it is equivalent to finding the minimum
norm of f(x) near x0.
Least-Squares (curve-fitting) solvers
This group of solvers attempts to minimize a sum of squares. This type of
problem frequently arises in fitting a model to data. The solvers address
problems of finding nonnegative solutions, bounded or linearly constrained
solutions, and fitting parameterized nonlinear models to data.
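As an illustration of the least-squares group, a minimal lsqcurvefit sketch (the model and synthetic data are assumptions):

model = @(p, t) p(1)*exp(p(2)*t);          % two-parameter exponential model
t = (0:9)';                                 % sample points
y = 2*exp(-0.5*t) + 0.01*randn(10,1);       % synthetic noisy data
p0 = [1; -1];                               % initial parameter guess
p = lsqcurvefit(model, p0, t, y)            % p approaches [2; -0.5]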
Objective Function
Linear
Quadratic
Sum-of-squares (Least squares)
Smooth nonlinear
Nonsmooth
Constraint Type
None (unconstrained)
Bound
Linear (including bound)
General smooth
Discrete (integer)
Select Solvers by Objective and Constraint
Constraint Type | Linear                       | Quadratic | Least Squares                             | Smooth nonlinear          | Nonsmooth
None            | n/a (f = const, or min = −∞) | quadprog  | \, lsqcurvefit, lsqnonlin                 | fminsearch, fminunc       | fminsearch, *
Bound           | linprog                      | quadprog  | lsqcurvefit, lsqlin, lsqnonlin, lsqnonneg | fminbnd, fmincon, fseminf | *
Linear          | linprog                      | quadprog  | lsqlin                                    | fmincon, fseminf          | *
General smooth  | fmincon                      | fmincon   | fmincon                                   | fmincon, fseminf          | *
Discrete        | bintprog                     |           |                                           |                           |

* relevant solvers are found in the Global Optimization Toolbox.
Minimization Algorithm
Minimization Algorithm (Cont.)
Equation Solving Algorithms
Least-Squares Algorithms
Implementing Optimization Toolbox
Most of these optimization routines require the definition of an M-file containing the function f to be minimized.
Maximization is achieved by supplying the routines with −f.
Optimization options passed to the routines change optimization parameters.
Default optimization parameters can be changed through an options structure.
Unconstrained Minimization
Consider the problem of finding the set of values x = [x1 x2]^T that solves

minimize f(x) = e^(x1) · (4x1² + 2x2² + 4x1x2 + 2x2 + 1)
Steps:
Create an M-file that returns the function value (the objective function); call it objfun.m.
Then invoke the unconstrained minimization routine, fminunc.
Step 1 – Objective Function
Create objfun.m, which returns the objective function value at x = [x1 x2]^T:

function f = objfun(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
Step 2 – Invoke Routine
Starting with a guess:
x0 = [-1,1];
Optimization parameter settings:
options = optimset('LargeScale','off');
Invoke fminunc with the output and input arguments:
[xmin,feval,exitflag,output] = fminunc('objfun',x0,options);
Step 3 – Results
xmin =
    0.5000
   -1.0000
Minimum point of the design variables

feval =
    1.3028e-010
Objective function value

exitflag =
    1
Exitflag tells whether the algorithm converged; if exitflag > 0, a local minimum was found.

output =
       iterations: 7
        funcCount: 40
         stepsize: 1
    firstorderopt: 8.1998e-004
        algorithm: 'medium-scale: Quasi-Newton line search'
Some other information about the run
More on fminunc – Input
[xmin,feval,exitflag,output,grad,hessian] =
fminunc(fun,x0,options,P1,P2,...)

fun: the objective function to be minimized.
x0: the initial guess; a vector whose size equals the number of design variables.
options: settings for the optimization parameters (more after a few slides).
P1,P2,...: additional parameters to pass to the function.
More on fminunc – Output
[xmin,feval,exitflag,output,grad,hessian] =
fminunc(fun,x0,options,P1,P2,...)

• xmin: vector of the minimum (optimal) point; its size equals the number of design variables.
• feval: the objective function value at the optimal point.
• exitflag: a value showing whether the optimization routine terminated successfully (converged if > 0).
• output: a structure giving more details about the optimization.
• grad: the gradient value at the optimal point.
• hessian: the Hessian value at the optimal point.
Next Class
Please bring your laptop and install MATLAB.
Download