INTRODUCTION TO OPERATIONS RESEARCH
Nonlinear Programming
NONLINEAR PROGRAMMING


A nonlinear program (NLP) is similar to a linear
program in that it is composed of an objective
function, general constraints, and variable bounds.
A nonlinear program includes at least one nonlinear
function, which could be the objective function or any
of the constraints.


Z = x1² + 1/x2
Many real systems are inherently nonlinear

Unfortunately, nonlinear models are much more difficult to
optimize
NONLINEAR PROGRAMMING
• General Form of a Nonlinear Program
Maximize
Z = f(x)
Subject to
gi(x) ≤ bi, for i = 1, 2, …, m
x ≥ 0
• No single algorithm will solve every specific problem
• Different algorithms are used for different types of problems
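
As a sketch of how a problem in this general form might be handed to a general-purpose solver: the choice of scipy.optimize.minimize and the placeholder f, g, and b below are assumptions for illustration, not part of the slides. Most solvers minimize by convention, so f is maximized by minimizing –f.

```python
# Sketch: "maximize Z = f(x) s.t. g(x) <= b, x >= 0" via SciPy (assumed available).
import numpy as np
from scipy.optimize import minimize

def f(x):          # placeholder nonlinear objective (an assumption)
    return 2*x[0] + x[1] - x[0]**2

def g(x):          # placeholder nonlinear constraint function (an assumption)
    return x[0]**2 + x[1]**2

b = 4.0
res = minimize(
    lambda x: -f(x),                 # negate: solvers minimize by convention
    x0=np.array([0.5, 0.5]),         # arbitrary starting point
    constraints=[{"type": "ineq",    # SciPy convention: fun(x) >= 0
                  "fun": lambda x: b - g(x)}],
    bounds=[(0, None), (0, None)],   # x1, x2 >= 0
)
print(res.x, -res.fun)               # maximizer and maximum of f
```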
EXAMPLE

Wyndor Glass example with nonlinear constraint
Maximize
Z = 3x1 + 5x2
Subject to
x1 ≤ 4
9x1² + 5x2² ≤ 216
x1, x2 ≥ 0


The optimal solution is no longer a corner-point
feasible (CPF) solution, but it still lies on the
boundary of the feasible region.
We therefore lose the tremendous simplification
used in LP of limiting the search for an optimal
solution to just the CPF solutions.
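
As a numerical check (the use of scipy.optimize.minimize here is an assumption, not something the slides prescribe), a solver applied to this model should land near (2, 6), where the nonlinear constraint is tight (9·2² + 5·6² = 216) even though no corner of the region sits there:

```python
import numpy as np
from scipy.optimize import minimize

# Maximize Z = 3x1 + 5x2  s.t.  x1 <= 4,  9x1^2 + 5x2^2 <= 216,  x1, x2 >= 0.
res = minimize(
    lambda x: -(3*x[0] + 5*x[1]),
    x0=np.array([1.0, 1.0]),
    constraints=[{"type": "ineq",
                  "fun": lambda x: 216 - 9*x[0]**2 - 5*x[1]**2}],
    bounds=[(0, 4), (0, None)],      # 0 <= x1 <= 4, x2 >= 0
)
print(res.x, -res.fun)               # expected: roughly (2, 6) with Z = 36
```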
EXAMPLE

Wyndor Glass example with nonlinear objective function
Maximize
Z = 126x1 – 9x1² + 182x2 – 13x2²
Subject to
x1 ≤ 4
x2 ≤ 12
3x1 + 2x2 ≤ 18
x1, x2 ≥ 0

The optimal solution is again not a CPF solution,
but it still lies on the boundary of the feasible
region.
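
To see why the optimum leaves the corner points here, one can locate the unconstrained stationary point; the short SymPy check below is a sketch (SymPy itself is an assumption). The gradient vanishes at (7, 7), which violates 3x1 + 2x2 ≤ 18, so the constrained maximum is pushed onto that boundary:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
Z = 126*x1 - 9*x1**2 + 182*x2 - 13*x2**2

# Unconstrained stationary point: set both partial derivatives to zero.
stat = sp.solve([sp.diff(Z, x1), sp.diff(Z, x2)], [x1, x2])
print(stat)                      # {x1: 7, x2: 7}
print(3*stat[x1] + 2*stat[x2])   # 35 > 18: infeasible, so the optimum
                                 # lies on the boundary 3x1 + 2x2 = 18
```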
EXAMPLE

Wyndor Glass example with nonlinear objective function
Maximize
Z = 54x1 – 9x1² + 78x2 – 13x2²
Subject to
x1 ≤ 4
x2 ≤ 12
3x1 + 2x2 ≤ 18
x1, x2 ≥ 0


The optimal solution lies inside the feasible
region.
That means we need to look at the entire
feasible region, not just the boundaries.
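
The same stationary-point check (again a SymPy sketch) explains the interior optimum: the gradient now vanishes at (3, 3), which satisfies every constraint with slack, so the maximizer sits strictly inside the feasible region:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
Z = 54*x1 - 9*x1**2 + 78*x2 - 13*x2**2

stat = sp.solve([sp.diff(Z, x1), sp.diff(Z, x2)], [x1, x2])
print(stat)                      # {x1: 3, x2: 3}
print(3*stat[x1] + 2*stat[x2])   # 15 <= 18, and x1 <= 4, x2 <= 12 also hold,
                                 # so the stationary point is an interior optimum
```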
CHARACTERISTICS
• Unlike linear programming, the optimal solution is often not on the boundary of the feasible region.
• We cannot simply examine points on the boundary of the feasible region; we must also consider interior points on the surface of the objective function.
• This greatly complicates solution approaches.
• Solution techniques can be very complex.
TYPES OF NONLINEAR PROBLEMS

Nonlinear programming problems come in many different shapes and forms:
• Unconstrained Optimization
• Linearly Constrained Optimization
• Quadratic Programming
• Convex Programming
• Separable Programming
• Nonconvex Programming
• Geometric Programming
• Fractional Programming
ONE VARIABLE UNCONSTRAINED
• These problems have no constraints, so the objective is simply to maximize the objective function
• Basic function types:
• Concave: the entire function is concave down
• Convex: the entire function is concave up
ONE VARIABLE UNCONSTRAINED
• Basic calculus: find the critical points
• Unfortunately, this may be difficult for many functions
• Estimate the maximum:
• Bisection Method
• Newton's Method
BISECTION METHOD
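
A minimal sketch of the bisection idea for maximizing a differentiable concave function (the test function f(x) = 4x – x² is an assumption for illustration): keep an interval [a, b] with f′(a) > 0 and f′(b) < 0, check the sign of f′ at the midpoint, and keep the half that still brackets the maximizer.

```python
def bisection_max(df, a, b, tol=1e-6):
    """Maximize a concave function given its derivative df,
    assuming df(a) > 0 and df(b) < 0 (maximizer bracketed by [a, b])."""
    while b - a > tol:
        mid = (a + b) / 2
        if df(mid) > 0:          # still climbing: maximizer lies to the right
            a = mid
        else:                    # past the peak: maximizer lies to the left
            b = mid
    return (a + b) / 2

# Assumed test function: f(x) = 4x - x^2, so f'(x) = 4 - 2x; maximum at x = 2.
print(bisection_max(lambda x: 4 - 2*x, 0.0, 5.0))   # ~2.0
```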
NEWTON’S METHOD
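
A matching sketch of Newton's method in one variable: at each step, fit a quadratic approximation at the current point and jump to its stationary point, i.e., iterate x ← x – f′(x)/f″(x). The same assumed test function is used; since it is itself quadratic, Newton's method converges in a single step here.

```python
def newton_max(df, d2f, x, tol=1e-8, max_iter=50):
    """One-variable Newton's method on the derivative: solve f'(x) = 0
    by iterating x <- x - f'(x)/f''(x)."""
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:      # stop once the Newton step is negligible
            break
    return x

# Assumed test function: f(x) = 4x - x^2, f'(x) = 4 - 2x, f''(x) = -2.
print(newton_max(lambda x: 4 - 2*x, lambda x: -2.0, 0.0))   # 2.0
```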
MULTIVARIABLE UNCONSTRAINED

GRADIENT SEARCH PROCEDURE
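In outline, as the worked example on the next slides illustrates: start from a trial solution x*, move in the direction of the gradient by setting x = x* + t ∇f(x*), choose the step size t = t* that maximizes f along that direction, reset x* to the resulting point, and repeat until every partial derivative is approximately zero.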

GRADIENT SEARCH EXAMPLE
Gradient search procedure: Z = 2x1x2 + 2x2 – x1² – 2x2²
∂f/∂x1 = 2x2 – 2x1
∂f/∂x2 = 2x1 + 2 – 4x2
Initialization: set x1* = 0, x2* = 0.
1) Set x1 = x1* + t(∂f/∂x1) = 0 + t(2×0 – 2×0) = 0.
Set x2 = x2* + t(∂f/∂x2) = 0 + t(2×0 + 2 – 4×0) = 2t.
f(x1, x2) = (2)(0)(2t) + (2)(2t) – (0)(0) – (2)(2t)(2t) = 4t – 8t².
2) df/dt = 4 – 16t. Setting 4 – 16t = 0 gives t* = ¼.
3) Reset x1* = x1* + t*(∂f/∂x1) = 0 + ¼(2×0 – 2×0) = 0,
x2* = x2* + t*(∂f/∂x2) = 0 + ¼(2×0 + 2 – 4×0) = ½.
Stopping rule: ∂f/∂x1 = 1, ∂f/∂x2 = 0, so the gradient is not yet zero; continue.
GRADIENT SEARCH EXAMPLE
Gradient search procedure: Z = 2x1x2 + 2x2 – x1² – 2x2²
∂f/∂x1 = 2x2 – 2x1
∂f/∂x2 = 2x1 + 2 – 4x2
Iteration 2: x1* = 0, x2* = ½.
1) Set x = (0, ½) + t(1, 0) = (t, ½).
f(t, ½) = (2)(t)(½) + (2)(½) – (t)(t) – (2)(½)(½) = t – t² + ½.
2) df/dt = 1 – 2t. Setting 1 – 2t = 0 gives t* = ½.
3) Reset x* = (0, ½) + ½(1, 0) = (½, ½).
Stopping rule: ∂f/∂x1 = 0, ∂f/∂x2 = 1, so the gradient is not yet zero; continue.
GRADIENT SEARCH EXAMPLE
Gradient search procedure: Z = 2x1x2 + 2x2 – x1² – 2x2²
∂f/∂x1 = 2x2 – 2x1
∂f/∂x2 = 2x1 + 2 – 4x2
Continuing the iterations:
Start: x* = (0, 0)
Iteration 1: x* = (0, ½)
Iteration 2: x* = (½, ½)
Iteration 3: x* = (½, ¾)
Iteration 4: x* = (¾, ¾)
Iteration 5: x* = (¾, ⅞)
Iteration 6: x* = (⅞, ⅞)
Notice that the iterates are converging toward x* = (1, 1).
This is the optimal solution, since the gradient there is zero: ∇f(1, 1) = (0, 0).
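
The same iterations can be reproduced in a short sketch (the grid-based line search over t below is an implementation convenience standing in for the exact calculus step used on the slides):

```python
import numpy as np

def f(x):      # Z = 2*x1*x2 + 2*x2 - x1^2 - 2*x2^2
    return 2*x[0]*x[1] + 2*x[1] - x[0]**2 - 2*x[1]**2

def grad(x):   # (df/dx1, df/dx2)
    return np.array([2*x[1] - 2*x[0], 2*x[0] + 2 - 4*x[1]])

x = np.zeros(2)                      # initialization: x* = (0, 0)
for it in range(1, 8):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:     # stopping rule: gradient (almost) zero
        break
    ts = np.linspace(0.0, 1.0, 10001)                 # candidate step sizes
    t_star = ts[np.argmax([f(x + t*g) for t in ts])]  # best step along gradient
    x = x + t_star * g
    print(it, x)                     # iterates approach x* = (1, 1)
```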
CONSTRAINED OPTIMIZATION
• Nonlinear objective function with linear constraints
• Karush-Kuhn-Tucker (KKT) conditions
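
For the general form given earlier (maximize f(x) subject to gi(x) ≤ bi for i = 1, …, m, with x ≥ 0), the KKT optimality conditions can be written as follows, with ui the multiplier attached to constraint i; this is the standard textbook statement:

```latex
% KKT conditions for: maximize f(x) s.t. g_i(x) <= b_i (i = 1,...,m), x >= 0
\begin{aligned}
&\frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} u_i \frac{\partial g_i}{\partial x_j} \le 0,
\qquad x_j\!\left(\frac{\partial f}{\partial x_j}
  - \sum_{i=1}^{m} u_i \frac{\partial g_i}{\partial x_j}\right) = 0,
&& j = 1,\dots,n \\
&g_i(\mathbf{x}) - b_i \le 0,
\qquad u_i\bigl(g_i(\mathbf{x}) - b_i\bigr) = 0,
&& i = 1,\dots,m \\
&x_j \ge 0, \qquad u_i \ge 0
&& \text{for all } j, i
\end{aligned}
```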