The selection of optimal break points in piecewise linear function analysis
by Haifie Loo Lai
A thesis submitted to the Graduate Faculty in partial fulfillment of the requirements for the degree of
MASTER OF SCIENCE in Industrial Engineering
Montana State University
© Copyright by Haifie Loo Lai (1973)
method of separable programming to obtain more precise approximating solutions. In presenting th$s thesis {n partial fulfillment of the require­
ments for an advanced degree.at Montana State University, I agree that
the Library shall make it freely available for inspection.
I further
agree that permission for extensive copying of this thesis for scholarly
purposes may' be granted by my major-professor, or, in his absence, by
the Director of Libraries'.
It is understood that any copying or publi­
cation of this thesis.for financial gain shall not be allowed without
my written permission.
Date
THE SELECTION OF OPTIMAL BREAK POINTS IN
PIECEWISE LINEAR FUNCTION ANALYSIS
by
HAIFIE LOO LAI
A thesis submitted to the Graduate Faculty in partial
fulfillment of the requirements for the degree
of
MASTER OF SCIENCE
in
Industrial Engineering
Approved:
Head, Major Department
MONTANA STATE UNIVERSITY
Bozeman, Montana
December, 1973
ACKNOWLEDGMENTS

The writer gratefully acknowledges Dr. D. W. Boyd for his invaluable advice in structuring this research. Special thanks to Dr. D. F. Gibson and Dr. W. R. Taylor for their contribution in examining this research.

Finally, to my husband, Ching-E, for his patience and encouragement in fulfilling this assignment.
TABLE OF CONTENTS

VITA
ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
CHAPTER I: INTRODUCTION
    The Problem Class
    Summary of Past Work
    Purpose of This Study
CHAPTER II: DEVELOPMENT OF BREAK POINT SEARCH METHOD FOR THE SEPARABLE CONVEX FUNCTION
    The Approximation of a Separable Function
    Discussion and Adaptation of the Solution Technique to Separable Convex Programming
    Extensions to Further Grid Refinement
CHAPTER III: ANALYSIS OF RAW DATA
    Fitting One-Dimensional Raw Data
    Extension to n-Dimensional Raw Data Analysis
CHAPTER IV: APPLICATION OF THE SOLUTION TECHNIQUE TO THE SEPARABLE CONVEX PROGRAMMING PROBLEM
    Illustrative Example 1
    Illustrative Example 2
CHAPTER V: CONCLUDING DISCUSSION
APPENDICES
    Appendix I: Definition of Convexity
    Appendix II: Numerical Solution Method of Nonlinear Equations
    Appendix III: MFOR LP Tableau and the Program Output of Illustrative Example 1
    Appendix IV: Computer Program to Search for the Optimal Break Points
    Appendix V: Solution Obtained from the Differential Algorithm
BIBLIOGRAPHY
LIST OF TABLES

Table
3-1  CONSTRAINT 1 AND MINIMIZATION
3-2  CONSTRAINT 1 AND MAXIMIZATION
3-3  CONSTRAINT RELATIONSHIPS OF x, d_1 AND d_2
3-4  CONSTRAINT 1
3-5  CONSTRAINT 2
4-1  VALUES OF (y, x_1) WHEN x_2 IS SET TO "0"
4-2  VALUES OF (y, x_2) WHEN x_1 IS SET TO "0"
4-3  OPTIMAL BREAK POINTS, SLOPES AND INTERCEPTS OF f_1(x_1) AND f_2(x_2)
LIST OF FIGURES

Figure
2-1  Piecewise Linear Approximation of an Arbitrary Concave-Convex Function
2-2  Histogram Approximation of df/dx
2-3  Piecewise Linear Approximation of x^2 Using One Break Point
2-4  Piecewise Linear Approximation of x^2 Using Two Break Points
2-5  Piecewise Linear Approximation of x^2 Using Three Break Points
2-6  Approximation of f_j(x_j) by \hat f_j(x_j)
2-7  Piecewise Linear Approximation of f(x) by \hat f(x)
2-8  Grid Refinement When x Lies on the Break Line
2-9  Grid Refinement When x Lies Within an Interval
2-10 Grid Refinement When x Lies on the First Interval
2-11 Grid Refinement When x Lies on the Last Breaking Line
3-1  Break Point Shift
3-2  Intercept Shift
3-3  Approximation by Finite Slope
3-4  Linear Approximation of a Continuous Nonlinear Process
3-5  Linear Approximation of a Discontinuous Nonlinear Process
4-1  Piecewise Linear Approximation of f_1(x_1)
4-2  Piecewise Linear Approximation of f_2(x_2)
4-3  Piecewise Linear Approximation of y = f_1(x_1)
4-4  Piecewise Linear Approximation of y = f_2(x_2)
ABSTRACT

Methods are presented which optimally select the break points of a piecewise linear function used to approximate a nonlinear functional. On one hand, given the analytic functions which are to be approximated by piecewise linear functions, and the number of segments of the linear function, break points are selected so as to minimize the total absolute difference between the analytic function and the approximating piecewise linear functions. On the other hand, given a table of raw data values presented in ordered array, which are to be fitted by piecewise linear functions, break points are selected at the largest value of the independent variable such that no infeasibility is incurred. Infeasibility occurs whenever the deviation of the observed data from the fitted linear segment exceeds a prescribed value K, the maximum allowable deviation specified. These break-point search methods are compatible with and incorporated into the usual method of separable programming to obtain more precise approximating solutions.
CHAPTER I
INTRODUCTION
The Problem Class
Mathematical programming covers an extensive area composed of a
great variety of problems for which numerous methods have been
developed for solving particular classes of problems.
One important class of problems occurs in nonlinear programming
in which the objective function and each of the constraints consist of
separable functions.
That is,

minimize (maximize)   F = \sum_{j=1}^{n} f_j(x_j)

subject to

\sum_{j=1}^{n} g_{ij}(x_j) \{\le, =, \ge\} b_i,
x_j \ge 0,

where i = 1, 2, ..., m; j = 1, 2, ..., n.
Such problems can be solved by replacing the functions f_j(x_j) and g_{ij}(x_j) with polygonal approximations. That is, each function is approximated by straight line segments between selected points. Hence, the original nonlinear programming problem is transformed into a linear programming problem, so that an approximate solution can be obtained by a slight modification of the simplex method. Thus, for the problem stated above, the linear programming counterpart is
minimize (maximize)   \hat F = \sum_{j=1}^{n} \hat f_j(x_j)

subject to

\sum_{j=1}^{n} \hat g_{ij}(x_j) \{\le, =, \ge\} b_i,
x_j \ge 0,

where i = 1, 2, ..., m; j = 1, 2, ..., n; \hat f_j(x_j) is the polygonal approximation to the function f_j(x_j); and \hat g_{ij}(x_j) is the polygonal approximation to the function g_{ij}(x_j).
The key concept of this special technique is that a separable concave (convex) function of a single variable can be approximated as closely as desired by piecewise linear functions. In fact, second order conditions for the existence of an optimum require that all of the f_j(x_j) be concave (convex) functions when the objective is to maximize (minimize) \sum_{j=1}^{n} f_j(x_j).
Summary of Past Work

Previous studies have resulted in the development of algorithms for the separable convex programming problem. For example, Charnes and Lemke (6) and Miller (18) have developed linear approximation methods. The main feature of these methods is to linearize the nonlinear functions on a grid of points spanning a suitable portion of the space of the problem.

Let X^1, X^2, ..., X^T be a collection of n-vectors (X^t = (x_1^t, x_2^t, ..., x_n^t)). Any point x of this collection may be written as

x = \sum_{t=1}^{T} \lambda_t X^t, where \sum_t \lambda_t = 1 and \lambda_t \ge 0 for all t.

Then, given any function f(x), the linearization of f(x) on the grid X^1, X^2, ..., X^T is attained through the approximation \hat f(x) = \sum_t \lambda_t f(X^t). Thus, any nonlinear programming problem becomes a linear problem in the new variables \lambda_t if f(x) is replaced throughout by its representation above. The nonlinear programming problem can now be stated in the approximate form:

Maximize (minimize)   \sum_t \lambda_t f(X^t)

subject to

\sum_t \lambda_t g_i(X^t) \{\le, =, \ge\} 0 for all i,
\sum_t \lambda_t = 1,
\lambda_t \ge 0.

The transformed problem can be solved by the usual simplex method.
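As a concrete illustration, the linearization above can be written out for a small one-variable problem. The following sketch is an editorial aid only, not the thesis's own program: it assumes Python with numpy and scipy (the original work used the MFOR LP package on a Sigma 7), and the grid and objective are invented for demonstration.

# Lambda-form linearization of max f(x) = 4x - x^3 on 0 <= x <= 2,
# using the weights lambda_t on a fixed grid as the LP variables.
import numpy as np
from scipy.optimize import linprog

f = lambda x: 4 * x - x**3            # concave on [0, 2]
grid = np.linspace(0.0, 2.0, 5)       # X^1, ..., X^T

c = -f(grid)                          # linprog minimizes, so negate f(X^t)
A_eq = np.ones((1, grid.size))        # sum of the weights equals one
res = linprog(c, A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * grid.size)

lam = res.x
print(lam @ grid, lam @ f(grid))      # recovered x and objective value

For a concave maximization such as this one, the optimal weights automatically satisfy the adjacency restriction discussed later in connection with Miller's method.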
Another similar method for the solution of the separable convex programming problem is presented by Hillier and Lieberman (13), and also makes use of a linear programming approximation of the original nonlinear programming problem. However, the approximation function \hat f_j(x_j) to the true curve f_j(x_j) is constructed as

\hat f_j(x_j) = s_{j1}x_{j1} + s_{j2}x_{j2} + ... + s_{jm_j}x_{jm_j},

where s_{jk} is the slope of the k-th piece of the piecewise linear approximating function. The original variable, x_j, is broken into m_j segments at the break points b_{jk}. The new variable, x_{jk}, is defined as the k-th segment of x_j. The problem is then transformed to:
Maximize (minimize)   \sum_j \sum_{k=1}^{m_j} s_{jk} x_{jk}

subject to

\hat g_i(x_{j1}, ..., x_{jm_j}) \{\le, =, \ge\} 0 for all i,
0 \le x_{jk} \le b_{jk} for all j, k.
In both cases, the closeness to which the linearized solution approximates the optimum is determined by the fineness of the grid. If the chosen grid is fine enough in the neighborhood of the solution, answers of suitable accuracy can be obtained.

After the linear programming approximation is solved, if a more precise answer is needed, a grid refinement process can be used. The algorithm for grid refinement described by Dantzig (8) starts with the computing of coefficients for each term of the approximating functions, for instance at x = 0, 0.5, 1, with grid size 0.5. Upon solving the first linear programming approximation, halve the grid size and compute coefficients for only those new values which are adjacent to the current solution for x. Thus, on the second piecewise approximation, if the previous solution were x = 0, compute the coefficient at x = 0.25, discarding the value at x = 1; if x = 0.5, compute coefficients at 0.25 and 0.75, and discard the values at 0 and 1; if the solution x is a weighted average of two grid points 0 and 0.5, then include a grid value of x at 0.25 and discard the value at 1, etc. In this way, for each successive piecewise approximation problem, coefficients are computed for at most three values of each x, and the range of x is halved each time. It was pointed out that successive values of x obtained in this manner will stay within the range established by previous cycles.
Another grid refinement procedure is described by Alloin (2). Suppose that a grid x^1, x^2, ..., x^T is given, and the associated linear programming approximation is solved, yielding \lambda^1, \lambda^2, ..., \lambda^T and the dual solution u_1, u_2, ..., u_m. When refinement of the grid is needed, of several possible points that might be adjoined to the given grid as a further refinement to obtain a better solution, which point would the simplex method choose as contributing the most to the solution? He suggested that the decomposition procedure can be applied. The procedure is to evaluate the reduced cost for each new column of the linear programming problem and to choose the one with the largest reduced cost. Equivalently, the new grid point is the maximizing value of f_j(x_j) - \sum_i u_i g_{ij}(x_j), with x_j unconstrained. Thus, a new column which tightens the grid is constructed, a new variable is added, and the simplex method is again employed to find a new solution to the expanded linear programming problem.
Purpose of This Study

The survey of past work reveals that the approximation techniques used in all the existing algorithms are simply based upon judicious selection of the break points of the piecewise linear function rather than selection via a stated criterion. Hence, development of a solution technique to include "non-subjective" selection of break points is believed to be useful. Therefore, the main purpose of this study is to develop some criterion for the optimal segmentation of f_j(x_j) for piecewise linear approximation. The solution technique is also to be adaptable to the usual separable convex programming approach.

Closely related to this problem is the analysis of raw data which exhibit no interactions between independent variables. A method is sought by which to analyze the objective function and constraints, which may show strong nonlinearities, while still in raw-data form.

The development of solution techniques for separable convex functions and for the raw-data form of the problem is presented in Chapter II and Chapter III, respectively. Chapter IV contains two illustrative examples, one for the separable convex programming problem, and another for the analysis of raw data. A summary discussion and suggestions are presented in Chapter V.
CHAPTER II
DEVELOPMENT OF BREAK POINT SEARCH METHOD FOR THE
SEPARABLE CONVEX FUNCTION
The Approximation of a Separable Function

Consider the case where the given function can be written in the form f(x_1, x_2, ..., x_n) = f_1(x_1) + f_2(x_2) + ... + f_n(x_n), where f_j(x_j) is a specified function of x_j only, for j = 1, 2, ..., n. Thus, f(x_1, x_2, ..., x_n) is called a "separable" function, since it can be written as a finite sum of separate terms, each of which involves only a single variable.
An approximation technique is available for obtaining a solution to the problem having such a separable objective function. This technique involves reducing the problem to a linear programming problem by approximating each f_j(x_j) by several piecewise linear functions. Consider, for example, the function sketched in Figure 2-1. The piecewise linear approximation is given by \hat f(x), where the s's represent the slopes of the approximating lines and the b's represent the break points selected to give the desired degree of approximation. When f(x) is differentiable, the value df(x)/dx can be plotted against x as in Figure 2-2. The area under the df(x)/dx curve may be used to represent f(x) and the histogram used to represent df(x)/dx. By choosing appropriate b_i's (i = 1, 2, ..., n), the absolute difference |f(x) - \hat f(x)| can be made as small as desired, thus effecting a suitable approximation.

Figure 2-1. Piecewise Linear Approximation of an Arbitrary Concave-Convex Function

Figure 2-2. Histogram Approximation of df/dx

That is, an appropriate criterion to determine the suitability of a specific approximation is to solve for the break points by minimizing the absolute value of the total difference between the linear approximating lines and the true curve.
For example, given f(x) = x^2, let x be bounded by 0 \le x \le 10 and let the curve be approximated by two linear functions. Then, the break point can be determined by minimizing the following function (see Figure 2-3):

Minimize   z^{(1)} = \int_0^{x_1} |s_1 x - x^2|\,dx + \int_{x_1}^{x_2=10} |s_1 x_1 + s_2(x - x_1) - x^2|\,dx   (1)

where x_1 is the break point, and s_1 and s_2 are the slopes of the corresponding linear functions. By definition, s_1 = f(x_1)/x_1 = x_1^2/x_1 = x_1. Then by convexity,

z_1 = \int_0^{x_1} |s_1 x - x^2|\,dx = \int_0^{x_1} (x_1 x - x^2)\,dx = x_1^3/6.

Therefore, minimize

z^{(1)} = x_1^3/6 + \int_{x_1}^{x_2=10} (s_1 x_1 + s_2 x - s_2 x_1 - x^2)\,dx
        = -\frac{5}{6}x_1^3 + 10x_1^2 + \frac{1}{2}s_2 x_1^2 - 10 s_2 x_1 + 50 s_2 - 1000/3.   (2-1)

Since s_2 = \frac{f(x_2) - f(x_1)}{x_2 - x_1} = \frac{100 - x_1^2}{10 - x_1} = 10 + x_1, substitute into equation 2-1 to obtain the following equation:

z^{(1)} = 5x_1^2 - 50x_1 + 500/3.   (2-2)

Figure 2-3. Piecewise Linear Approximation of x^2 Using One Break Point

Take the derivative with respect to x_1 and set it equal to zero to obtain the minimum. Thus,

10x_1 - 50 = 0,   (2-3)

so x_1 = 5 and s_2 = 10 + x_1 = 15. Check the second derivative: \partial^2 z^{(1)}/\partial x_1^2 = 10 > 0. Therefore, x_1 = 5 is the desired minimum. The break point is at x = 5, and the two approximating linear functions are 5x for the first segment and -50 + 15x for the second segment.
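The same one-break-point calculation can be checked numerically. The sketch below is illustrative only (Python with scipy is an assumption; the thesis's own search program appears in Appendix IV): it minimizes the total absolute deviation of equation 1 directly.

# Numerical check of equations 2-2 and 2-3: the break point of a
# two-segment chord approximation of f(x) = x^2 on [0, 10].
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

f = lambda x: x**2

def total_deviation(x1, hi=10.0):
    s1 = f(x1) / x1                      # slope of the first chord
    s2 = (f(hi) - f(x1)) / (hi - x1)     # slope of the second chord
    seg1 = quad(lambda x: abs(s1 * x - f(x)), 0.0, x1)[0]
    seg2 = quad(lambda x: abs(f(x1) + s2 * (x - x1) - f(x)), x1, hi)[0]
    return seg1 + seg2

res = minimize_scalar(total_deviation, bounds=(0.1, 9.9), method="bounded")
print(res.x)                             # approximately 5.0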
For the same function above, when three linear functions are used to approximate the curve, the objective becomes to minimize the following function (see Figure 2-4):

Minimize   z^{(2)} = \int_0^{x_1} |s_1 x - x^2|\,dx + \int_{x_1}^{x_2} |s_1 x_1 + s_2(x - x_1) - x^2|\,dx + \int_{x_2}^{x_3=10} |s_1 x_1 + s_2(x_2 - x_1) + s_3(x - x_2) - x^2|\,dx.

A straightforward way to minimize the above function is simply to take the derivatives with respect to the unknowns, x_1 and x_2, and set them equal to zero. Thus, the following simultaneous nonlinear equations are obtained:

-x_2^2 + 2x_1 x_2 = 0   (2-4)
-x_1 x_2 + 0.5x_1^2 - 50 + 10x_2 = 0   (2-5)

Figure 2-4. Piecewise Linear Approximation of x^2 Using Two Break Points

Solve the above equations to obtain x_1 = 0.5x_2 and x_2 = 20 or 6.66. Since 20 exceeds 10, the bounding value of x, x_2 = 6.66 is chosen, and x_1 = 3.33.
It may be noted that when the given function is more complicated than the parabola x^2, simultaneous nonlinear equations which are difficult to solve will be obtained. An alternative way to find the minimum is therefore desirable.

Assume that the minimum value of z_1 + z_2 is known for x_2 = K, where K is assumed to be the optimal second break point. Then, in order to minimize z^{(2)}, the quantity

(minimum of z_1 + z_2) + \int_{x_2=K}^{10} |s_1 x_1 + s_2(x_2 - x_1) + s_3(x - x_2) - x^2|\,dx

must be minimized. If in the previous solution, x_2 = K is used instead of 10, equations 2-2 and 2-3 take the form:

z^{(1)} = \frac{K}{2}x_1^2 - \frac{K^2}{2}x_1 + \frac{K^3}{6}   (2-6)

Kx_1 - K^2/2 = 0   (2-7)

Solving for x_1 in terms of K gives x_1 = 0.5K, which leads to s_2 = K + x_1 = 1.5K. Therefore, the minimum value of z_1 + z_2 in terms of K can be determined from equation 2-6 as

minimum of z^{(1)} = \frac{K}{2}(0.5K)^2 - \frac{K^2}{2}(0.5K) + \frac{K^3}{6} = 0.042K^3,

and

z^{(2)} = 0.042K^3 + \int_{x_2=K}^{x_3=10} |s_1 x_1 + s_2(x_2 - x_1) + s_3(x - x_2) - x^2|\,dx.   (2-8)

Similarly, s_3 = \frac{100 - x_2^2}{10 - x_2} = 10 + K, and equation 2-8 can be written as

z^{(2)} = -0.125K^3 + 5K^2 - 50K + 166.7.

Take the derivative with respect to K and set it equal to zero to obtain the minimum. Hence,

-0.375K^2 + 10K - 50 = 0,
K = 6.66 or 20.

K = 20 exceeds 10, the bounding value of K, and hence must be discarded. Since \partial^2 z^{(2)}/\partial K^2 = 5 > 0 at K = 6.66, z^{(2)} has been minimized. Therefore, x_2 = K = 6.66 and x_1 = 0.5K = 3.33. The break points are at x_1 = 3.33 and x_2 = 6.66, and are the same as the values obtained by using the previous direct method.
In like manner, if four linear approximating lines are desired, the same technique can be applied (see Figure 2-5). The objective now is to minimize

z^{(3)} = \int_0^{x_1} |s_1 x - x^2|\,dx + \int_{x_1}^{x_2} |s_1 x_1 + s_2(x - x_1) - x^2|\,dx + \int_{x_2}^{x_3} |s_1 x_1 + s_2(x_2 - x_1) + s_3(x - x_2) - x^2|\,dx + \int_{x_3}^{10} |s_1 x_1 + s_2(x_2 - x_1) + s_3(x_3 - x_2) + s_4(x - x_3) - x^2|\,dx.

Figure 2-5. Piecewise Linear Approximation of x^2 Using Three Break Points

First, using the direct method, taking the derivatives with respect to the unknowns, x_1, x_2, x_3, and setting them equal to zero yields the following simultaneous nonlinear equations:

-0.5x_2^2 + x_1 x_2 = 0   (2-9)
-x_1 x_2 + 0.5x_1^2 - 0.5x_3^2 + x_2 x_3 = 0   (2-10)
-x_2 x_3 + 0.5x_2^2 - 50 + 10x_3 = 0   (2-11)

Solve the above equations for x_1 = 0.5x_2, x_2 = \frac{2}{3}x_3, and x_3 = 15 or 7.5. Since 15 exceeds 10, the bounding value of x, x_3 = 7.5 is chosen, so that x_2 = 5 and x_1 = 2.5.

Now, solve for the minimum by the alternative step-by-step method. If x_3 of the previous solution is some fixed value less than 10, then the value of K (= x_2) in terms of x_3 is again obtained from the minimum of

z_1 + z_2 + z_3 = (minimum of z_1 + z_2) + \int_{x_2=K}^{x_3} |s_1 x_1 + s_2(x_2 - x_1) + s_3(x - x_2) - x^2|\,dx.

Similarly, s_3 = x_3 + K, and the above expression can be transformed to:

z_1 + z_2 + z_3 = -0.125K^3 + 0.5K^2 x_3 - 0.5K x_3^2 + 0.167x_3^3   (2-12)

Take the derivative with respect to K and set it equal to zero:

-0.375K^2 - 0.5x_3^2 + Kx_3 = 0   (2-13)

Solve for K in terms of x_3 to obtain K = 0.666x_3. The minimum value of z_1 + z_2 + z_3 then equals 0.020x_3^3, and z^{(3)} can be written as follows:

z^{(3)} = 0.020x_3^3 + \int_{x_3}^{x_4=10} |s_1 x_1 + s_2(x_2 - x_1) + s_3(x_3 - x_2) + s_4(x - x_3) - x^2|\,dx = -0.147x_3^3 + 5x_3^2 - 50x_3 + 500/3,

with x_2 = K = 0.666x_3 and x_1 = 0.5K = 0.333x_3. Take the derivative with respect to x_3 and set it equal to zero to obtain the minimum. Hence,

-0.441x_3^2 + 10x_3 - 50 = 0   (2-14)

x_3 = 7.48, x_2 = 0.666x_3 = 4.98, x_1 = 0.5x_2 = 0.333x_3 = 2.49.

These values are almost the same as those obtained from the direct method. The small difference is due to round-off error in calculation.

From the preceding example, one may conclude that if an additional approximating linear function is required, one can simply add one more segment and obtain new break points by modifying the previous answer. Thus, the refinement process is suitably carried out by the dynamic programming approach. The original problem of n variables is transformed into n problems of one variable only, and at each stage a problem of one variable is optimized.
The solution technique as illustrated on the previous page can be generalized to any single-variable function f(x) exhibiting convexity or concavity. An objective function containing such terms as e^x, log x, etc., can be very difficult to handle. However, any single-variable function can always be expanded in a power series (1). That is to say, any single-variable function can always be expressed in the form of a polynomial and hence can be easily integrated and differentiated. Thus, applying the approach of the preceding example, a linear or nonlinear equation of a single variable in each stage can always be obtained and the solution easily identified.

The solution technique presented so far is actually nothing more than a general approach to find the best approximation under the criterion that the total absolute difference between the approximating function and the given function be minimized.
The general approach when n linear functions are used to approximate the given function, f(x), is summarized as follows (a sketch in code follows this list):

1. Check that the given function is convex within the region of the desired approximation.

2. Minimize z^{(1)}, where two linear approximating functions are to be used:

z^{(1)} = \int_0^{x_1} |s_1 x - f(x)|\,dx + \int_{x_1}^{x_2} |s_1 x_1 + s_2(x - x_1) - f(x)|\,dx = z_1 + z_2,

where x_1 is the break point, s_1 and s_2 are the slopes of the corresponding linear approximating functions, and x_2 is the assumed known optimal second break point. Or, minimize

z^{(1)} = \int |a_1 + s_1 x - f(x)|\,dx + \int |a_1 + s_1 x_1 + s_2(x - x_1) - f(x)|\,dx

when f(x) has the vertical intercept a_1.

3. Solve for the value of x_1 in terms of x_2, and express the minimum value of z_1 + z_2 in terms of x_2.

4. Formulate a new objective function as the sum of the minimum of z_1 + z_2 and the total absolute difference of the third interval of approximation. Then, apply the same minimization technique to solve for the value of the current unknown variable, x_2, in terms of x_3.

5. Formulate another new objective function as the sum of the minimum of z_1 + z_2 + z_3 and the total difference of the fourth interval of approximation. Apply the same technique to solve for the value of the current unknown, x_3, in terms of x_4.

6. Repeat the formulation and minimization of the new objective function, each time including one more interval of approximation. Thus, the relationship between x_1, x_2, ..., etc. is established step by step. When the relationship between x_{n-1} and x_n (x_n = K, the boundary value) is established, the problem is solved in its entirety.
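The general approach also lends itself to a direct numerical treatment. The following sketch is an editorial illustration under stated assumptions (Python with numpy and scipy; the thesis's actual search program is listed in Appendix IV): rather than proceeding stage by stage, it minimizes the total absolute deviation over all interior break points at once, which realizes the same criterion.

# Direct minimization of the total absolute deviation between f and its
# n-segment chord approximation over the interior break points.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def total_deviation(breaks, f, lo, hi):
    pts = np.concatenate(([lo], np.sort(breaks), [hi]))
    z = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        slope = (f(b) - f(a)) / (b - a)            # chord over [a, b]
        z += quad(lambda x: abs(f(a) + slope * (x - a) - f(x)), a, b)[0]
    return z

f = lambda x: x**2
# three segments on [0, 10]: two interior break points
res = minimize(total_deviation, x0=[3.0, 7.0], args=(f, 0.0, 10.0),
               method="Nelder-Mead")
print(res.x)        # approximately [3.33, 6.67], as in equations 2-4 and 2-5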
This solution procedure developed for separable convex functions can also be modified to handle the case of separable nonconvex functions. It is known that linear interpolation always gives overestimation or underestimation when the given function is convex or concave, respectively. If the intervals of "convexity" and "concavity" for a given function within the boundary of desired approximation can be identified, the difficulty in determining the absolute value is removed, and the same procedure can be applied to obtain the optimal break points.

The solution procedure suggested then is to identify the interval of "convexity" by setting the second derivative greater than or equal to zero to obtain the bound value, and similarly to identify the interval of "concavity". Then, the same technique of solving for the optimal break points can be applied to each convex and concave part of the function. This is illustrated via the following example. Given

f(x) = \frac{1}{6}x^3 - \frac{3}{2}x^2, where 0 \le x \le 10,

\frac{d^2 f(x)}{dx^2} = x - 3,

so that

\frac{d^2 f(x)}{dx^2} = x - 3 \ge 0 if x \ge 3, and \frac{d^2 f(x)}{dx^2} = x - 3 \le 0 if x \le 3.

Accordingly, the interval where convexity holds is 3 \le x \le 10 and the interval where concavity holds is 0 \le x \le 3. The problem is then divided into two new problems:

1. Approximate f(x) = \frac{1}{6}x^3 - \frac{3}{2}x^2 by linear functions on the interval 0 \le x \le 3.

2. Approximate f(x) = \frac{1}{6}x^3 - \frac{3}{2}x^2 by linear functions on the interval 3 \le x \le 10.
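The interval identification can also be automated symbolically. A minimal sketch, assuming Python with sympy (an assumption; the thesis performs this step by hand):

# Sign of the second derivative of f(x) = (1/6)x^3 - (3/2)x^2 on [0, 10].
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 / 6 - sp.Rational(3, 2) * x**2
f2 = sp.diff(f, x, 2)                                  # x - 3
convex = sp.solve_univariate_inequality(f2 >= 0, x, relational=False)
print(f2, convex)   # convex where x >= 3, concave where x <= 3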
Discussion and Adaptation of the Solution Technique to Separable Convex Programming

The development of the solution technique stems from the iterative dynamic programming approach; that is, to start with a small portion of the problem and to find the optimal solution for this smaller problem. It then gradually enlarges the problem, finding the current optimal solution from the previous one, until the original problem is solved in its entirety.

Any single-variable function can always be expanded in a power series of the form a_0 x^n + a_1 x^{n-1} + ... + a_{n-1} x + a_n, a polynomial approximation. Although the solution approach developed can be applied to any single-variable function, it does have certain limitations.
First, it can be applied only to functions of a single variable. Therefore, when a separable function is encountered, one can apply this technique to each single-variable subfunction separately. When a nonseparable function is encountered, only in very rare cases can one apply this approach. However, it might be possible to use a suitable transformation of variables to transform the original non-separable function into a separable function of some new variables. For example, let y = x_1 + x_1 x_2 + x_2. This is a non-separable function since it contains a cross-product term. However, if we let x_1 = u + w and x_2 = u - w, then y = u + w + u^2 - w^2 + u - w = 2u + u^2 - w^2 has been transformed into a separable function of the new variables u and w. In many cases this kind of transformation does not work at all. Therefore, the technique developed is generally limited to use with separable functions.

The second pitfall in this approach is the possible necessity of solving for the roots of a nonlinear equation of polynomial form within the region 0 \le x \le K. But several methods exist for finding the roots of nonlinear equations and are presented in Appendix II.
Next, consider the application of optimal break points to the separable convex programming problem:

Maximize   \sum_{j=1}^{n} f_j(x_j)

subject to

\sum_{j=1}^{n} a_{ij} x_j \le b_i for i = 1, 2, ..., m,
x_j \ge 0 for j = 1, 2, ..., n,

where f_j(x_j) is a concave function for j = 1, 2, ..., n. Let m_j be the number of break points for the approximating function \hat f_j(x_j) (see Figure 2-6), and let b_{j1}, b_{j1} + b_{j2}, ..., \sum_{k=1}^{m_j} b_{jk} be the values of x_j at which the break points occur, where \sum_{k=1}^{m_j} b_{jk} is the upper bound of the value of x_j. Also let s_{jk} (k = 1, 2, ..., m_j) be the slope of the piecewise linear function when \sum_{\ell=1}^{k-1} b_{j\ell} \le x_j \le \sum_{\ell=1}^{k} b_{j\ell}. Then,

x_{jk} = 0   if x_j \le \sum_{\ell=1}^{k-1} b_{j\ell},
x_{jk} = x_j - \sum_{\ell=1}^{k-1} b_{j\ell}   if \sum_{\ell=1}^{k-1} b_{j\ell} \le x_j \le \sum_{\ell=1}^{k} b_{j\ell},
x_{jk} = b_{jk}   if x_j \ge \sum_{\ell=1}^{k} b_{j\ell},

for k = 1, 2, ..., m_j and j = 1, 2, ..., n. It follows that

0 \le x_{jk} \le b_{jk}   and   x_j = x_{j1} + x_{j2} + ... + x_{jm_j}.

Figure 2-6. Approximation of f_j(x_j) by \hat f_j(x_j)

The piecewise linear approximating function \hat f_j(x_j) can be written as

\hat f_j(x_j) = s_{j1}x_{j1} + s_{j2}x_{j2} + ... + s_{jm_j}x_{jm_j}.

Thus, the original problem can be reformulated as:

Maximize   \sum_{j=1}^{n} \Big(\sum_{k=1}^{m_j} s_{jk} x_{jk}\Big)

subject to

\sum_{j=1}^{n} a_{ij} \Big(\sum_{k=1}^{m_j} x_{jk}\Big) \le b_i for i = 1, 2, ..., m,
0 \le x_{jk} \le b_{jk} for j = 1, 2, ..., n; k = 1, 2, ..., m_j.

This transformed problem can be solved as a linear programming problem using the usual simplex method. If (x*_{11}, x*_{12}, ...) is the optimal solution to this problem, then x*_1 = \sum_{k=1}^{m_1} x*_{1k}, x*_2 = \sum_{k=1}^{m_2} x*_{2k}, ..., x*_n = \sum_{k=1}^{m_n} x*_{nk} must be the optimal solution to the approximate form of the original problem (13).
An alternative way to solve the separable programming problem was developed by Miller (18). Given the function sketched in Figure 2-7, let the range of x be 0 \le x \le K, and suppose that m break points x_k have been chosen, where x_0 = 0, x_1 < x_2 < ... < x_m = K. When x_k \le x \le x_{k+1},

\hat f(x) = f(x_k) + \frac{f(x_{k+1}) - f(x_k)}{x_{k+1} - x_k}(x - x_k).

Figure 2-7. Piecewise Linear Approximation of f(x) by \hat f(x)

Any x in the interval x_k \le x \le x_{k+1} can be written as x = \lambda x_{k+1} + (1 - \lambda)x_k for some \lambda, where 0 \le \lambda \le 1. Subtracting x_k from both sides, x - x_k = \lambda(x_{k+1} - x_k), so that

\hat f(x) = f(x_k) + \frac{f(x_{k+1}) - f(x_k)}{x_{k+1} - x_k}\lambda(x_{k+1} - x_k) = \lambda f(x_{k+1}) + (1 - \lambda)f(x_k).

Let \lambda = \lambda_{k+1} and 1 - \lambda = \lambda_k; then, when x_k \le x \le x_{k+1}, there exists a unique \lambda_k and \lambda_{k+1} such that x = \lambda_k x_k + \lambda_{k+1} x_{k+1} and \hat f(x) = \lambda_k f(x_k) + \lambda_{k+1} f(x_{k+1}), with \lambda_k + \lambda_{k+1} = 1 and \lambda_k, \lambda_{k+1} \ge 0. Hence, for 0 \le x \le K,

x = \sum_{k=0}^{m} \lambda_k x_k, \quad \hat f(x) = \sum_{k=0}^{m} \lambda_k f(x_k), \quad \sum_{k=0}^{m} \lambda_k = 1, \quad \lambda_k \ge 0.

In addition, no more than two of the \lambda_k may be positive, and they must be adjacent.

Hence, for a separable programming problem, after the linearization solution technique is applied to each single-variable function, the problem can be transformed into a linear programming problem by the procedure illustrated as follows.
Suppose that for each single variable, a sequence of break points has been chosen. Assume for the moment that the same number of break points m has been chosen for each variable. Then, x_j can be expressed as x_j = \sum_{k=0}^{m} \lambda_{kj} x_{kj}, where x_{kj} is the value of the break points, \lambda_{kj} \ge 0 for all j, k, and \sum_{k=0}^{m} \lambda_{kj} = 1 for all j. The original problem can now be transformed into:

Maximize (minimize)   \sum_{j=1}^{n} \sum_{k=0}^{m} \lambda_{kj} f_{kj}, where f_{kj} = f_j(x_{kj}),

subject to

\lambda_{kj} \ge 0 for all j, k;
\sum_{k=0}^{m} \lambda_{kj} = 1 for all j;
\sum_{j=1}^{n} \sum_{k=0}^{m} \lambda_{kj} g_{ij}(x_{kj}) \{\le, =, \ge\} 0 for all i.

From the solution \lambda*_{kj} of this problem, the approximate solution, x_j = \sum_{k=0}^{m} \lambda*_{kj} x_{kj}, of the original problem is obtained.

This solution technique can also be applied to nonconvex problems and hence covers a more extensive area. However, the usual simplex method has to be modified to take care of the restrictions on \lambda_{kj}. For the general nonlinear programming problem, there is, in general, no way to show that the particular solution obtained is a global optimum.
Extensions to Further Grid Refinement

The solution obtained via the transformed linear programming problem is an approximation to the original problem. The closeness of this approximate solution to the real solution is, of course, determined by the fineness of the grid. Thus, one would tend to approximate as closely as desired, or equivalently, to choose more break points to achieve a finer grid. This procedure is generally acceptable. However, a better and more efficient approach is to generate a finer grid in the neighborhood of the optimum, rather than to select many break points. A fine grid beyond the neighborhood of the optimum only increases the work of selecting optimal break points and the number of variables, and so does not contribute to the refinement of the approximating solution. A procedure developed for this purpose is illustrated below.

After choosing a suitable fineness of grid, the optimal break points are selected using the search method developed in this chapter. The problem is then transformed to a linear programming problem and solved by the simplex algorithm. The algorithm terminates when it reaches a local optimum, yielding the first optimal value of x. Four situations can then be identified:

1. If the value of x lies on a break line of the approximating function, the intervals immediately preceding and following the break line are refined to four intervals by the search method, and the break points at the two ends are discarded (see Figure 2-8).

2. If the value of x lies within the interval between two break lines of the approximating function, that interval is refined to two intervals (see Figure 2-9).

3. If the value of x lies before the first break line, this interval is refined to two intervals (see Figure 2-10).

4. If the value of x lies on the last break line of the approximating function, the interval immediately preceding the break line is refined to two intervals (see Figure 2-11).

Figure 2-8. Grid Refinement When x Lies on the Break Line
Figure 2-9. Grid Refinement When x Lies Within an Interval
Figure 2-10. Grid Refinement When x Lies on the First Interval
Figure 2-11. Grid Refinement When x Lies on the Last Breaking Line

Next, the second linear programming problem is formed with the value of the variables restricted within the interval defined. This procedure of grid refinement can be repeated at each stage of the problem as desired.

The procedure suggested above is similar to Dantzig's approach (8) for the refinement of grid points. The difference is in the selection of break points. Here, the break points are selected via the search method, whereas Dantzig only halves the grid size instead of selecting the optimal grid size.
CHAPTER III

ANALYSIS OF RAW DATA

Fitting One-Dimensional Raw Data

A major interest of many engineering investigations is to make predictions about a dependent variable in terms of one or more independent variables through the analysis of data. In order to make such predictions, usually a formula or a mathematical equation is found which relates the dependent variable to certain independent variables. This chapter contains the development of a procedure to fit raw-data representations of objective and/or constraint functions by several straight-line functions when the data indicate a definite curvilinear relationship. Thus the original nonlinear problem is to be transformed to a linear problem. First, the special case of a dependent variable to be predicted in terms of a single independent variable is considered in this section.

If a set of data can be represented by a straight line y = \alpha + \beta x, where y is the dependent variable and x is the independent variable, then, for any given x, the mean of the y's is given by \alpha + \beta x. Taking into consideration the possible error of measurement and the possible influence of variables other than x, then for any observed y, the regression equation can be modified to y_i = \alpha + \beta x_i + e_i, where the e_i's are values of an independent, normally-distributed random variable having zero mean and common variance \sigma^2. Usually \sigma^2 cannot be known but must be estimated by S_e^2, the unbiased estimator for \sigma^2:
S_e^2 = \frac{S_{xx} S_{yy} - (S_{xy})^2}{n(n - 2) S_{xx}}   (3-1)

where

S_{xx} = n \sum_{i=1}^{n} x_i^2 - \Big(\sum_{i=1}^{n} x_i\Big)^2,

S_{yy} = n \sum_{i=1}^{n} y_i^2 - \Big(\sum_{i=1}^{n} y_i\Big)^2,

S_{xy} = n \sum_{i=1}^{n} x_i y_i - \Big(\sum_{i=1}^{n} x_i\Big)\Big(\sum_{i=1}^{n} y_i\Big),

n is the number of data points, and (x_i, y_i) is the value of the i-th observation.
Let k be the maximum allowable deviation between the observed value and the straight-line calculated value, \hat y_i = \alpha + \beta x_i. Thus, if the data can be described by y_i = \alpha + \beta x_i + e_i, then for a feasible chance deviation, 3\sigma must be less than or equal to k; or equivalently, \sigma \le k/3 and \sigma^2 \le k^2/9. Suppose the data at hand are best modeled by a curvilinear function such as y_i = \alpha + \beta_1 x_i + \beta_2 x_i^2 + e_i, but that nevertheless a linear predicting equation is used. Then the variance of y as calculated by equation 3-1 will be affected not only by the variance of e but also by the variance of \beta_2 x^2. Adopt the criterion that if the constraint S_e^2 \le k^2/9 is satisfied, the hypothesis that the data represent a straight line is accepted. If S_e^2 > k^2/9, one concludes that the data cannot be represented by a single straight line.
Following the above logic, a procedure for finding the break points is next developed. These break points divide the whole data-point set into several subsets, and within each subset a single straight line is fitted. The method proceeds as follows (a sketch in code follows Step 6):

Step 1 -- Starting with the first three data points in consecutive order, evaluate S_e^2 for this subset.

Step 2 -- Check whether S_e^2 \le k^2/9 or not. If "yes", add one more data point to the subset and calculate S_e^2 again.

Step 3 -- Repeat Step 2 until S_e^2 becomes infeasible. Then, the last feasible data point used to calculate S_e^2 is identified as the break point for the first straight-line segment.

Step 4 -- Starting with the last data point used to calculate the previous S_e^2 and the immediately following two data points, calculate S_e^2 for this second subset. Then, repeat Steps 2 and 3 until the next break point is found.

Step 5 -- Repeat Step 4 until all the data points have been included, and the break points have been identified.

Step 6 -- After all the break points are found, the best straight-line segment that fits the data points in each subset can then be obtained by the usual method of least squares.
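The six steps can be carried out mechanically. The following sketch is an editorial illustration, not the thesis's procedure as programmed (Python with numpy is assumed, and the data and k are invented for demonstration); S_e^2 follows equation 3-1.

# Break-point search of Steps 1-6 on ordered one-dimensional raw data.
import numpy as np

def s_e_squared(x, y):
    n = len(x)
    sxx = n * np.sum(x**2) - np.sum(x)**2
    syy = n * np.sum(y**2) - np.sum(y)**2
    sxy = n * np.sum(x * y) - np.sum(x) * np.sum(y)
    return (sxx * syy - sxy**2) / (n * (n - 2) * sxx)     # equation 3-1

def find_break_points(x, y, k):
    breaks, start = [], 0
    while start + 2 < len(x):
        end = start + 3                                    # Step 1: three points
        while end <= len(x) and s_e_squared(x[start:end],
                                            y[start:end]) <= k**2 / 9:
            end += 1                                       # Step 2: grow the subset
        breaks.append(end - 2)                             # Step 3: break point
        start = end - 2                                    # Step 4: next subset
    return breaks[:-1]                                     # the last point ends the data

x = np.linspace(0.0, 10.0, 21)
y = x**2 + np.random.normal(0.0, 0.3, x.size)              # curvilinear raw data
print(find_break_points(x, y, k=3.0))

Each returned index marks the last point of a feasible straight-line subset; Step 6 (least-squares fitting within each subset) then proceeds as usual.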
There is one problem incurred in the solution process, however: the intersections of these straight lines will not in general occur at the break points. This is illustrated in Figure 3-1.

Figure 3-1. Break Point Shift
One of three methods can be used to overcome this problem. The first method is to shift the break points from x_1 and x_2 to x_1' and x_2' (see Figure 3-1). This can be done, however, only if the new break points do not create infeasibilities; that is, only if the maximum deviation created is still less than the prescribed value k.

The second method can be applied only under certain conditions: when d_1 and d_2 in Figure 3-2 are relatively small and the data points appear to warrant the assumption of "continuity". The intercept a_1 in Figure 3-2, for the first segment, is derived via the usual regression analysis. The intercept a_2 of the next segment is fixed, however, as determined by the intersection of the first predicting line and the corresponding break line as shown in Figure 3-2. Also, the regression line is shifted to provide continuity. Similarly, the third regression line can be shifted as shown in the graph.

The third method to overcome this discontinuity is to approximate the infinite-slope step with a finite slope (see Figure 3-3). Connect the last point (x_{n_i}, \hat y_{n_i}) of the segment with the first point (x_{n_i+1}, \hat y_{n_i+1}) of the immediately following segment, where \hat y_{n_i} and \hat y_{n_i+1} are the calculated values of the dependent variable y when the independent variable x is x_{n_i} and x_{n_i+1}, respectively.

Figure 3-2. Intercept Shift
Figure 3-3. Approximation by Finite Slope

Thus, the problem is again transformed to a piecewise linear programming problem and can be solved with the usual separable programming approach. If the constrained optimal value of x lies within the above-mentioned interval of finite slope, the result is a compromise solution in a region of ambiguity, \hat y_{n_i+1} - \hat y_{n_i}. Otherwise, the solution obtained is a nonambiguous optimal solution to the original problem.
In some instances, one might identify certain points exhibiting some specific property as the break points. Once these break points are identified, the straight lines which best fit the data can be obtained by minimizing the total squared deviation. The solution procedure is best illustrated through a nonnumerical example.

Given that x_1 and x_2 are the "best" break points for the data shown in Figure 3-4, what will be the best straight lines that fit these data? The solution method proceeds as follows.

Let a_1 be the vertical intercept of the first straight line, and b_1, b_2, and b_3 be the slopes of the corresponding straight lines for each of the segments. The objective is to minimize the total squared deviation by solving for the values of a_1, b_1, b_2, and b_3. Therefore, minimize

z = \sum_{i=1}^{n_1} (Y_i - a_1 - b_1 x_i)^2 + \sum_{i=n_1+1}^{n_1+n_2} (Y_i - a_1 - b_1 x_1 - b_2(x_i - x_1))^2 + \sum_{i=n_1+n_2+1}^{n_1+n_2+n_3} (Y_i - a_1 - b_1 x_1 - b_2(x_2 - x_1) - b_3(x_i - x_2))^2,

where Y_i is the value of the i-th observation, and n_1, n_2, n_3 are the numbers of data points in the first, second, and third subsets, respectively.
Figure 3-4. Linear Approximation of a Continuous Nonlinear Process
In order to find the minimum of z, take derivatives with respect to a_1, b_1, b_2, and b_3 and equate to zero:

\frac{\partial z}{\partial a_1} = -2\Big[\sum_{i=1}^{n_1}(Y_i - a_1 - b_1 x_i) + \sum_{i=n_1+1}^{n_1+n_2}(Y_i - a_1 - b_1 x_1 - b_2(x_i - x_1)) + \sum_{i=n_1+n_2+1}^{n_1+n_2+n_3}(Y_i - a_1 - b_1 x_1 - b_2(x_2 - x_1) - b_3(x_i - x_2))\Big] = 0   (3-2)

\frac{\partial z}{\partial b_1} = -2\Big[\sum_{i=1}^{n_1}(Y_i - a_1 - b_1 x_i)x_i + x_1\sum_{i=n_1+1}^{n_1+n_2}(Y_i - a_1 - b_1 x_1 - b_2(x_i - x_1)) + x_1\sum_{i=n_1+n_2+1}^{n_1+n_2+n_3}(Y_i - a_1 - b_1 x_1 - b_2(x_2 - x_1) - b_3(x_i - x_2))\Big] = 0   (3-3)

\frac{\partial z}{\partial b_2} = -2\Big[\sum_{i=n_1+1}^{n_1+n_2}(Y_i - a_1 - b_1 x_1 - b_2(x_i - x_1))(x_i - x_1) + (x_2 - x_1)\sum_{i=n_1+n_2+1}^{n_1+n_2+n_3}(Y_i - a_1 - b_1 x_1 - b_2(x_2 - x_1) - b_3(x_i - x_2))\Big] = 0   (3-4)

\frac{\partial z}{\partial b_3} = -2\sum_{i=n_1+n_2+1}^{n_1+n_2+n_3}(Y_i - a_1 - b_1 x_1 - b_2(x_2 - x_1) - b_3(x_i - x_2))(x_i - x_2) = 0   (3-5)
In simplifying equations 3-2, 3-3, 3-4, and 3-5, the following simultaneous linear equations are obtained. Writing N = n_1 + n_2 + n_3 and letting I_1, I_2, I_3 denote the index ranges 1 to n_1, n_1 + 1 to n_1 + n_2, and n_1 + n_2 + 1 to N of the three subsets:

\sum_{i=1}^{N} Y_i = N a_1 + \Big[\sum_{I_1} x_i + (n_2 + n_3)x_1\Big]b_1 + \Big[\sum_{I_2} x_i - (n_2 + n_3)x_1 + n_3 x_2\Big]b_2 + \Big[\sum_{I_3} x_i - n_3 x_2\Big]b_3   (3-6)

\sum_{I_1} x_i Y_i + x_1\sum_{I_2 \cup I_3} Y_i = \Big[\sum_{I_1} x_i + (n_2 + n_3)x_1\Big]a_1 + \Big[\sum_{I_1} x_i^2 + (n_2 + n_3)x_1^2\Big]b_1 + x_1\Big[\sum_{I_2} x_i - (n_2 + n_3)x_1 + n_3 x_2\Big]b_2 + x_1\Big[\sum_{I_3} x_i - n_3 x_2\Big]b_3   (3-7)

\sum_{I_2} (x_i - x_1)Y_i + (x_2 - x_1)\sum_{I_3} Y_i = \Big[\sum_{I_2} x_i - (n_2 + n_3)x_1 + n_3 x_2\Big]a_1 + x_1\Big[\sum_{I_2} x_i - (n_2 + n_3)x_1 + n_3 x_2\Big]b_1 + \Big[\sum_{I_2} (x_i - x_1)^2 + n_3(x_2 - x_1)^2\Big]b_2 + (x_2 - x_1)\Big[\sum_{I_3} x_i - n_3 x_2\Big]b_3   (3-8)

\sum_{I_3} (x_i - x_2)Y_i = \Big[\sum_{I_3} x_i - n_3 x_2\Big]a_1 + x_1\Big[\sum_{I_3} x_i - n_3 x_2\Big]b_1 + (x_2 - x_1)\Big[\sum_{I_3} x_i - n_3 x_2\Big]b_2 + \Big[\sum_{I_3} (x_i - x_2)^2\Big]b_3   (3-9)
Solve the above system of linear equations for the values of a_1, b_1, b_2, and b_3 which minimize z. In this manner, the best linear functions that fit the data are found.
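Because the break points are fixed and the model is continuous, it is linear in the unknowns a_1, b_1, b_2, b_3, and the system 3-6 to 3-9 is exactly the normal-equations system of an ordinary least-squares fit. A minimal sketch, assuming Python with numpy and invented data and break points:

# Continuous three-segment least squares with known break points x1bar, x2bar.
import numpy as np

def design_matrix(x, x1bar, x2bar):
    col_b1 = np.minimum(x, x1bar)                          # slope b1 basis
    col_b2 = np.clip(x - x1bar, 0.0, x2bar - x1bar)        # slope b2 basis
    col_b3 = np.maximum(x - x2bar, 0.0)                    # slope b3 basis
    return np.column_stack([np.ones_like(x), col_b1, col_b2, col_b3])

x = np.linspace(0.0, 9.0, 30)
y = 3.0 * np.sqrt(x) + np.random.normal(0.0, 0.1, x.size)  # raw data
A = design_matrix(x, x1bar=2.0, x2bar=5.0)
a1, b1, b2, b3 = np.linalg.lstsq(A, y, rcond=None)[0]
print(a1, b1, b2, b3)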
All the methods considered up to this point give no guarantee that the maximum deviation of the fitted lines from the observed values will be less than the prescribed value, k. Also, if it can be assumed that the data represent a continuous process, then these methods will give reasonable approximations.

Now, consider the case where the data do not represent a continuous process. Then the nonintersecting lines obtained from the above methods do represent the true situation. When such an objective function is to be optimized subject to some linear constraints, the usual linear programming approach or local separable programming cannot be used. Hence, a methodology must be developed to handle this case, and is illustrated below.
Given the problem of maximizing some objective function f(x) subject to the linear constraints a_i x \le b_i for i = 1, 2, ..., m, and x \ge 0, where the exact objective function is not known but must be approximated from a table of data. Assume it is known that the objective y is a function of x, where y = f(x) must be a concave function for a maximum to exist. The break point technique is applied and the data are approximated by several linear functions as illustrated in Figure 3-5.

Figure 3-5. Linear Approximation of a Discontinuous Nonlinear Process

Define a_1 as the vertical intercept of the first straight line, s_1, s_2, and s_3 as the slopes of the corresponding straight lines, and d_1 and d_2 as the differences between two consecutive lines at the break points. Thus, the first segment can be represented as a_1 + s_1 x_1, the second as a_1 + s_1 b_1 + d_1 + s_2 x_2, and the third as a_1 + s_1 b_1 + d_1 + s_2 b_2 - d_2 + s_3 x_3, where
x_k = 0   if x \le \sum_{\ell=1}^{k-1} b_\ell,
x_k = x - \sum_{\ell=1}^{k-1} b_\ell   if \sum_{\ell=1}^{k-1} b_\ell \le x \le \sum_{\ell=1}^{k} b_\ell,   (3-10)
x_k = b_k   if x \ge \sum_{\ell=1}^{k} b_\ell,

and the jump term at each break point is

0   if x \le \sum_{\ell=1}^{k} b_\ell,
d_k   if x > \sum_{\ell=1}^{k} b_\ell,   (3-11)

with x = x_1 + x_2 + ... + x_k and k = 1, 2, 3.
Equation set 3-10 embodies two restrictions:

1. 0 \le x_k \le b_k, and
2. x_k = 0 whenever x_{k-1} < b_{k-1}.

Restriction 1 is in the proper format to be added to the original constraints. However, Restriction 2 does not fit the linear programming format. Fortunately, it is not necessary to add this restriction to the model. Since the assumption states that f(x) is concave, Restriction 2 is satisfied automatically. This concavity assumption implies that s_1 > s_2 > ... > s_k. This fact makes it easy to see that any solution violating Restriction 2 cannot be optimal.

Hence, the remaining problem is to transform equation set 3-11 into k linear programming constraints. To handle this, the fixed charge method can be used (12).
Consider a fixed charge problem:

Minimize   x_0 = \sum_{j=1}^{n} f_j(x_j), where f_j(x_j) = k_j + c_j x_j if x_j > 0, and f_j(x_j) = 0 if x_j = 0,

subject to linear constraints plus k_j \ge 0 and x_j \ge 0 for j = 1, 2, ..., n. The problem can be rewritten as:

Minimize   x_0 = \sum_{j=1}^{n} (k_j y_j + c_j x_j)

subject to the original constraints plus

1. x_j - M y_j \le 0,
2. 0 \le y_j \le 1,

where y_j is an integer and M is a very big number.
The reasoning for adding the two new constraints is illustrated in Table 3-1.

TABLE 3-1. CONSTRAINT 1 AND MINIMIZATION

if x_j    want y_j    get y_j with respect to minimization of x_0
> 0       1           1
= 0       0           1 or 0; minimization of x_0 drives y_j to 0

For a maximization objective function, constraint 1 becomes x_j - (1/M)y_j \ge 0. The reasoning is illustrated in Table 3-2.

TABLE 3-2. CONSTRAINT 1 AND MAXIMIZATION

if x_j    want y_j    get y_j with respect to maximization of x_0
> 0       1           1 or 0; maximization of x_0 drives y_j to 1
= 0       0           0

Applying the same logic to the example problem, the objective function can be written as

y = a_1 + s_1 x_1 + d_1 y_1 + s_2 x_2 - d_2 y_2 + s_3 x_3.

With respect to the maximizing criterion, y_1 should equal 1 when d_1 exists, or equivalently, when the value of x is greater than b_1; and y_2 should equal 1 when d_2 exists, or equivalently, when the value of x is greater than b_1 + b_2. Also, y_1 and y_2 are integers and 0 \le y_1, y_2 \le 1. The required conditions are summarized in Table 3-3.
TABLE 3-3. CONSTRAINT RELATIONSHIPS OF x, d_1 AND d_2

if x - b_1           want d_1    want y_1
\le 0                0           0
> 0                  d_1         1

if x - (b_1 + b_2)   want d_2    want y_2
\le 0                0           0
> 0                  d_2         1
The approach taken in the fixed charge problem can now be applied to obtain the desired constraints. Therefore, the new constraints used to take care of equation set 3-11 are:

(x - b_1) - M(y_1 - 1) \ge 0   (Constraint 1)

TABLE 3-4. CONSTRAINT 1

if x - b_1    want y_1    get y_1 with respect to maximization of y
\le 0         0           y_1 = 0
> 0           1           y_1 = 1
(x - b_1 - b_2) - M y_2 \le 0   (Constraint 2)

TABLE 3-5. CONSTRAINT 2

if x - (b_1 + b_2)    want y_2    get y_2 with respect to maximization of y
\le 0                 0           1 or 0; maximization of y drives y_2 to 0
> 0                   1           y_2 = 1
Thus, the original problem is transformed to a mixed integer programming problem:

Maximize   y = a_1 + s_1 x_1 + d_1 y_1 + s_2 x_2 - d_2 y_2 + s_3 x_3

subject to

a_i \Big(\sum_{j=1}^{3} x_j\Big) \le b_i for i = 1, 2, ..., m,

x_j \le b_j for j = 1, 2, 3,

\Big(\sum_{j=1}^{3} x_j - b_1\Big) - M(y_1 - 1) \ge 0,

\Big(\sum_{j=1}^{3} x_j - b_1 - b_2\Big) - M y_2 \le 0,

x_j \ge 0 for j = 1, 2, 3, and 0 \le y_1, y_2 \le 1.
This transformed problem can be solved by the usual simplex method of linear programming with a little care to interpret the optimal solution for the 0-1 requirement of y_1, y_2. If (\sum_{j=1}^{3} x_j - b_1) > 0, the value of y_1 obtained will be 1 with respect to the maximization criterion, even if the integer restriction is ignored. If (\sum_{j=1}^{3} x_j - b_1) \le 0, the y_1 obtained will be some value less than 1 and greater than 0, depending on the value of M submitted, if the integer restriction is ignored; in this case, the value of y_1 should be interpreted as 0. By the same logic, if (\sum_{j=1}^{3} x_j - b_1 - b_2) > 0, the y_2 obtained will be some value less than 1 and greater than 0, depending on the value of M submitted, when the integer restriction is ignored, and the actual value of y_2 should be interpreted as 1. If (\sum_{j=1}^{3} x_j - b_1 - b_2) \le 0, the value of y_2 obtained will be 0 with respect to maximization of y, even if the integer restriction is ignored. Thus, an approximate solution to the problem can be obtained.
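The transformed problem can also be handed to a mixed integer solver directly. The sketch below is illustrative only: the coefficients are invented and scipy's milp solver is an assumption, not the method of the thesis; variables are ordered (x_1, x_2, x_3, y_1, y_2).

# Big-M mixed integer formulation of the discontinuous piecewise problem.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

a1, s1, s2, s3 = 1.0, 4.0, 2.0, 1.0        # intercept and slopes (s1 > s2 > s3)
d1, d2 = 0.5, 0.3                          # jumps at the break points
b1, b2, b3 = 2.0, 3.0, 5.0                 # segment lengths
M = 1e4

c = -np.array([s1, s2, s3, d1, -d2])       # milp minimizes, so negate
cons = [
    LinearConstraint([1, 1, 1, 0, 0], -np.inf, 8.0),       # a_i sum(x_j) <= b_i
    LinearConstraint([1, 1, 1, -M, 0], b1 - M, np.inf),    # Constraint 1
    LinearConstraint([1, 1, 1, 0, -M], -np.inf, b1 + b2),  # Constraint 2
]
bounds = Bounds([0] * 5, [b1, b2, b3, 1, 1])               # 0 <= x_j <= b_j
integrality = np.array([0, 0, 0, 1, 1])                    # y_1, y_2 integer

res = milp(c=c, constraints=cons, integrality=integrality, bounds=bounds)
print(res.x, a1 - res.fun)                 # solution and optimal y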
Extension to n-Dimensional Raw Data Analysis

For data of several variables which exhibit a nonlinear property, the situation is much more complicated. There is no direct way to approximate the general multivariable function by piecewise linear functions. A limited approach to handle this is presented, however.

Suppose a set of data can be represented as a function of several variables, where no interactions occur between these independent variables,* such as F(X) = f_1(x_1) + f_2(x_2) + ... + f_n(x_n), where f_j(x_j) is a specified function of x_j only, for j = 1, 2, ..., n. Two approaches for approximating these data by piecewise linear functions can be determined according to the specific condition.

The first approach is used when the data obtained are such that it is possible to attribute the contribution of each f_j(x_j), taken alone, to F(X). In this case, the general approach suggested is to analyze each f_j(x_j) versus F(X) separately. The same technique developed for one-dimensional data is used.

The situation stated above, however, is very rare in the real world. Most of the time, it is impossible to separate the effect of two or more variables on the dependent variable. When this occurs, the approach suggested is to fit the data at hand by the sum of several polynomials first. Then, the break point search method developed for analytic functions in Chapter II is applied.

*The case where interactions between the independent variables exist is beyond the scope of this inquiry, which is concerned only with separable functions.
One example of such a situation is when a given table of data can be represented as F(X) = f_1(x_1) + f_2(x_2), where the exact expressions of f_1(x_1) and f_2(x_2) are not yet known. Thus, if a second degree polynomial function is assumed for f_1(x_1) and f_2(x_2), then

F(X) = f_1(x_1) + f_2(x_2) = k_1 + a_1 x_1 + a_2 x_1^2 + k_2 + b_1 x_2 + b_2 x_2^2.

The coefficients k_1, a_1, a_2, k_2, b_1, and b_2 can be solved for by the criterion of least squares; that is, minimize the squared deviation z, where

z = \sum_{i=1}^{n} \big(F(X)_i - f_1(x_1)_i - f_2(x_2)_i\big)^2,

and F(X)_i is the i-th observed value of F(X), f_1(x_1)_i is the i-th calculated value of f_1(x_1), and f_2(x_2)_i is the i-th calculated value of f_2(x_2).

This results in six simultaneous linear equations in the six unknowns, k_1, a_1, a_2, k_2, b_1, and b_2, and hence their values can be easily obtained. Thus, the original problem has been transformed to a problem of approximating the sum of two polynomial functions by piecewise linear functions. The break point search method developed in Chapter II can then be applied.
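A minimal sketch of this two-variable polynomial fit, assuming Python with numpy and an invented table of data (note that k_1 and k_2 enter the model only through their sum, so a single intercept column suffices in practice):

# Least-squares fit of F(X) = k1 + a1*x1 + a2*x1^2 + k2 + b1*x2 + b2*x2^2.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 5.0, 50)
x2 = rng.uniform(0.0, 5.0, 50)
F = 2.0 + 3.0 * x1 - 0.4 * x1**2 + 1.5 * x2 - 0.2 * x2**2   # observed table

A = np.column_stack([np.ones_like(x1), x1, x1**2, x2, x2**2])
k, a1, a2, b1, b2 = np.linalg.lstsq(A, F, rcond=None)[0]
print(k, a1, a2, b1, b2)

The fitted polynomials f_1(x_1) and f_2(x_2) can then be segmented by the break point search of Chapter II.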
CHAPTER IV
APPLICATION OF THE SOLUTION TECHNIQUE TO THE
SEPARABLE CONVEX PROGRAMMING PROBLEM
Two illustrative examples are presented in this chapter. One is a separable convex programming problem with both objective function and constraints in analytic form. The other is a two-dimensional convex programming problem with the objective function represented by a table of values.

Illustrative Example 1

Given the problem below:

Maximize   F(X) = f_1(x_1) + f_2(x_2) = 4x_1 - x_1^3 + 6x_2 - 2x_2^2

subject to

x_1 + 3x_2 \le 8   (Constraint 1)
5x_1 + 2x_2 \le 14   (Constraint 2)
x_1 \ge 0, x_2 \ge 0.

Find the approximate solution by replacing f_1(x_1) and f_2(x_2) each with three linear functions.
Since the objective function, F(X), is the sum of two single-variable functions, 4x_1 - x_1^3 and 6x_2 - 2x_2^2, the solution technique developed in Chapter II is applicable.

Step 1: Find the boundary values of x_1 and x_2. From constraint 1, x_2 \le 8/3; substituting into constraint 2 bounds x_1. The range of x_1 and x_2 is taken as 0 \le x_1 \le 2 and 0 \le x_2 \le 2.
Step 2: Check the concavity of f_1(x_1) and f_2(x_2):

\frac{d^2 f_1(x_1)}{dx_1^2} = -6x_1 \le 0 for 0 \le x_1 \le 2,

\frac{d^2 f_2(x_2)}{dx_2^2} = -4 < 0 for 0 \le x_2 \le 2.

Hence, f_1(x_1) and f_2(x_2) are concave functions, ensuring that the maximum does exist.
Step 3: Solve for the optimal break points of the three piecewise linear functions.

(1) For f_1(x_1), the break points for two piecewise linear functions are given by b_{11} and b_{12}. Formulate z_1^{(1)}:

z_1^{(1)} = \int_0^{b_{11}} |4x_1 - x_1^3 - s_{11}x_1|\,dx_1 + \int_{b_{11}}^{b_{12}=2} |4x_1 - x_1^3 - s_{11}b_{11} - s_{12}(x_1 - b_{11})|\,dx_1

where

s_{11} = f_1(b_{11})/b_{11} = (4b_{11} - b_{11}^3)/b_{11} = 4 - b_{11}^2,

s_{12} = \frac{f_1(b_{12}) - f_1(b_{11})}{b_{12} - b_{11}} = 4 - b_{12}^2 - b_{12}b_{11} - b_{11}^2.

Substitution and integration yield

z_1^{(1)} = 4 - 4b_{11} + b_{11}^3.

Minimize z_1^{(1)} by taking the derivative with respect to the unknown, b_{11}:

\frac{dz_1^{(1)}}{db_{11}} = -4 + 3b_{11}^2 = 0,

so b_{11} = 1.15 when b_{12} = 2; that is, b_{11} = 0.575b_{12}. Thus z_1^{(1)}, the minimum of (z_1 + z_2) in terms of b_{12}, is given by

z_1^{(1)} = \int_0^{b_{11}} |4x_1 - x_1^3 - s_{11}x_1|\,dx_1 + \int_{b_{11}}^{b_{12}} |4x_1 - x_1^3 - s_{11}b_{11} - s_{12}(x_1 - b_{11})|\,dx_1 = 0.1525b_{12}^4.

Extension to three piecewise linear functions for f_1(x_1) requires that b_{13} = 2. Formulate

z_1^{(2)} = z_1^{(1)} + \int_{b_{12}}^{b_{13}=2} |4x_1 - x_1^3 - s_{11}b_{11} - s_{12}(b_{12} - b_{11}) - s_{13}(x_1 - b_{12})|\,dx_1

where

s_{11} = 4 - b_{11}^2,
s_{12} = 4 - b_{12}^2 - b_{12}b_{11} - b_{11}^2,
s_{13} = 4 - b_{13}^2 - b_{13}b_{12} - b_{12}^2 = -2b_{12} - b_{12}^2.

After substitution of b_{11} = 0.575b_{12} and integration,

z_1^{(2)} = 1.9025b_{12}^4 - 5b_{12}^3 + 4b_{12}^2 - 4b_{12} + 4.

Minimize z_1^{(2)} by taking the derivative with respect to the unknown, b_{12}:

\frac{dz_1^{(2)}}{db_{12}} = f_1(b_{12}) = -4 + 8b_{12} - 15b_{12}^2 + 7.610b_{12}^3 = 0.   (4-1)

Solve equation 4-1 by the Newton-Raphson method, with

f_1'(b_{12}) = 8 - 30b_{12} + 22.83b_{12}^2.

First guess, b_{12}^{(0)} = 1.5:

b_{12}^{(1)} = b_{12}^{(0)} - \frac{f_1(b_{12}^{(0)})}{f_1'(b_{12}^{(0)})} = 1.5 - \frac{-0.0670}{14.3675} = 1.5 + 0.0047 = 1.5047,

b_{12}^{(2)} = b_{12}^{(1)} - \frac{f_1(b_{12}^{(1)})}{f_1'(b_{12}^{(1)})} = 1.5047 - \frac{0.0018}{14.5484} = 1.5046.

Thus, b_{12} \approx 1.505. Also, by Sturm's Theorem (see Appendix II), there is exactly one real root within the range 0 \le b_{12} \le 2 for equation 4-1. At b_{12} = 1.505, the second derivative is greater than zero; hence, z_1^{(2)} is minimum at b_{12} = 1.505, and b_{11} = 0.575b_{12} = 0.865.
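The Newton-Raphson step used here is easy to reproduce. A minimal sketch, assuming Python (the thesis's root-finding methods are described in Appendix II; the tolerance below is an invented choice):

# Newton-Raphson solution of equation 4-1.
def newton(f, fprime, x0, tol=1e-6, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda b: -4 + 8 * b - 15 * b**2 + 7.610 * b**3   # equation 4-1
fp = lambda b: 8 - 30 * b + 22.83 * b**2               # its derivative
print(newton(f, fp, x0=1.5))                           # approximately 1.505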
(2) For $f_2(x_2)$ and two piecewise-linear functions having break points at $b_{21}$ and $b_{22}$, formulate $z_2^{(1)}$:

$$z_2^{(1)} = \int_0^{b_{21}} \left| 6x_2 - 2x_2^2 - s_{21}x_2 \right| dx_2 + \int_{b_{21}}^{b_{22}=2} \left| 6x_2 - 2x_2^2 - s_{21}b_{21} - s_{22}(x_2 - b_{21}) \right| dx_2$$

where

$$s_{21} = f_2(b_{21})/b_{21} = 6 - 2b_{21}$$

$$s_{22} = \frac{f_2(b_{22}) - f_2(b_{21})}{b_{22} - b_{21}} = 6 - 2b_{22} - 2b_{21} = 2 - 2b_{21} \quad \text{at } b_{22} = 2$$
Dropping the absolute values (again, the concave curve lies above its chords) and substituting the slopes gives

$$z_2^{(1)} = \int_0^{b_{21}} \left( 2b_{21}x_2 - 2x_2^2 \right) dx_2 + \int_{b_{21}}^{2} \left[ 6x_2 - 2x_2^2 - 6b_{21} + 2b_{21}^2 - (2 - 2b_{21})(x_2 - b_{21}) \right] dx_2$$

$$= \frac{8}{3} - 4b_{21} + 2b_{21}^2$$
Minimize $z_2^{(1)}$ by taking the derivative with respect to the unknown, $b_{21}$:

$$\frac{dz_2^{(1)}}{db_{21}} = -4 + 4b_{21} = 0, \qquad b_{21} = 1 = 0.5\, b_{22}$$

Thus $z_2^{(1)*}$, the minimum of $z_2^{(1)}$ expressed in terms of $b_{22}$, is obtained as

$$z_2^{(1)*} = \frac{b_{22}^3}{12} = 0.083\, b_{22}^3$$
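As an added check: for the parabola $f_2$, the area between the curve and any chord over $[a, c]$ is $\lvert f_2'' \rvert (c - a)^3 / 12 = (c - a)^3 / 3$, so

$$z_2^{(1)}(b_{21}) = \frac{b_{21}^3}{3} + \frac{(b_{22} - b_{21})^3}{3}, \qquad \frac{dz_2^{(1)}}{db_{21}} = b_{21}^2 - (b_{22} - b_{21})^2 = 0 \;\Rightarrow\; b_{21} = \frac{b_{22}}{2},$$

which reproduces both $b_{21} = 0.5\, b_{22}$ and $z_2^{(1)*} = b_{22}^3/12 = 0.083\, b_{22}^3$.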
Then formulate $z_2^{(2)}$:

$$z_2^{(2)} = z_2^{(1)*} + \int_{b_{22}}^{2} \left| 6x_2 - 2x_2^2 - s_{21}b_{21} - s_{22}(b_{22} - b_{21}) - s_{23}(x_2 - b_{22}) \right| dx_2$$

where

$$s_{21} = 6 - 2b_{21}, \qquad s_{22} = 6 - 2b_{22} - 2b_{21}, \qquad s_{23} = \frac{f_2(2) - f_2(b_{22})}{2 - b_{22}} = 2 - 2b_{22}$$
With $b_{21} = 0.5\, b_{22}$, integration and simplification give

$$z_2^{(2)} = \frac{8}{3} - 8 b_{22} + 8 b_{22}^2 - 2.25\, b_{22}^3$$
Minimize $z_2^{(2)}$ by taking the derivative with respect to the unknown, $b_{22}$:

$$\frac{dz_2^{(2)}}{db_{22}} = f_2(b_{22}) = -8 + 16 b_{22} - 6.75\, b_{22}^2 = 0$$

$$b_{22} = 0.717 \text{ or } 1.653$$

$$\frac{d^2 z_2^{(2)}}{db_{22}^2} = 16 - 13.5\, b_{22} = 6.32 > 0 \quad \text{when } b_{22} = 0.717$$

$$\frac{d^2 z_2^{(2)}}{db_{22}^2} = 16 - 13.5\, b_{22} = -6.32 < 0 \quad \text{when } b_{22} = 1.653$$

Hence $b_{22} = 0.717$ identifies the minimum, so that $b_{21} = 0.5\, b_{22} = 0.3585$.
Step 4: Transform the problem into a linear programming problem to be solved by the usual simplex method.

Evaluate the slopes of each approximating segment:

$$s_{11} = 4 - b_{11}^2 = 3.253$$
$$s_{12} = 4 - b_{12}^2 - b_{12}b_{11} - b_{11}^2 = -0.308$$
$$s_{13} = -2b_{12} - b_{12}^2 = -5.27$$
$$s_{21} = 6 - 2b_{21} = 5.282$$
$$s_{22} = 6 - 2b_{22} - 2b_{21} = 3.848$$
$$s_{23} = 2 - 2b_{22} = 0.566$$
The problem can then be written as follows:

Maximize

$$F_1(X) = s_{11}x_{11} + s_{12}x_{12} + s_{13}x_{13} + s_{21}x_{21} + s_{22}x_{22} + s_{23}x_{23}$$
$$= 3.253x_{11} - 0.308x_{12} - 5.27x_{13} + 5.282x_{21} + 3.848x_{22} + 0.566x_{23}$$

subject to

$$x_{11} + x_{12} + x_{13} + 3x_{21} + 3x_{22} + 3x_{23} \le 8$$
$$5x_{11} + 5x_{12} + 5x_{13} + 2x_{21} + 2x_{22} + 2x_{23} \le 14$$
$$x_{11} \le b_{11} = 0.865$$
$$x_{12} \le (b_{12} - b_{11}) = 0.639$$
$$x_{13} \le (2 - b_{12}) = 0.496$$
$$x_{21} \le b_{21} = 0.359$$
$$x_{22} \le (b_{22} - b_{21}) = 0.358$$
$$x_{23} \le (2 - b_{22}) = 1.283$$
$$x_{jk} \ge 0 \quad \text{for } j = 1, 2; \; k = 1, 2, 3$$

where $x_j = \sum_{k=1}^{3} x_{jk}$ for $j = 1, 2$.
Step 5: Computer solution.

The transformed problem was solved via the MFOR LP package for linear programming on the Sigma 7 at the Montana State University Computing Center (see Appendix III for computer output). The optimal solution as obtained is summarized below:

$$x_{11}^* = 0.865, \quad x_{12}^* = 0, \quad x_{13}^* = 0$$
$$x_{21}^* = 0.359, \quad x_{22}^* = 0.358, \quad x_{23}^* = 1.283$$
$$F_1(X^*) = 6.8138$$
$$x_1^* = \sum_{k=1}^{3} x_{1k}^* = 0.865, \qquad x_2^* = \sum_{k=1}^{3} x_{2k}^* = 2$$

Thus, the first approximate optimal policy for the original problem is $x_1 = 0.865$, $x_2 = 2$.
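As a quick arithmetic check (added; not in the original text):

$$F_1(X^*) = 3.253(0.865) + 5.282(0.359) + 3.848(0.358) + 0.566(1.283) = 2.814 + 1.896 + 1.378 + 0.726 = 6.814,$$

which agrees with the reported value 6.8138.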
If the approximate solution is not satisfactory, grid refinement is necessary. By the procedure suggested in Chapter II, $f_1(x_1)$ is to be refined with respect to the two intervals (0, 0.865) and (0.865, 1.504), and $f_2(x_2)$ with respect to the interval (0.717, 2), to obtain a better approximate solution.
First, for $f_1(x_1)$ on the interval (0, 0.865), formulate

$$z_1'^{(1)} = \int_0^{b_{11}'} \left| 4x_1 - x_1^3 - s_{11}'x_1 \right| dx_1 + \int_{b_{11}'}^{0.865} \left| 4x_1 - x_1^3 - s_{11}'b_{11}' - s_{12}'(x_1 - b_{11}') \right| dx_1$$

where

$$s_{11}' = 4 - b_{11}'^2$$
$$s_{12}' = 4 - (0.865)^2 - 0.865\, b_{11}' - b_{11}'^2 = 3.252 - 0.865\, b_{11}' - b_{11}'^2$$

After integration and simplification,

$$z_1'^{(1)} = 0.28 - 0.323\, b_{11}' + 0.433\, b_{11}'^3$$

$$\frac{dz_1'^{(1)}}{db_{11}'} = -0.323 + 1.299\, b_{11}'^2 = 0$$

$$b_{11}'^2 = \frac{0.323}{1.299} = 0.25, \qquad b_{11}' = 0.5$$
For $f_1(x_1)$ on the interval (0.865, 1.504), formulate

$$z_1'^{(2)} = \int_{0.865}^{b_{12}'} \left| 4x_1 - x_1^3 - f_1(0.865) - s_1'(x_1 - 0.865) \right| dx_1$$
$$+ \int_{b_{12}'}^{1.504} \left| 4x_1 - x_1^3 - f_1(0.865) - s_1'(b_{12}' - 0.865) - s_2'(x_1 - b_{12}') \right| dx_1$$

where

$$s_1' = \frac{f_1(b_{12}') - f_1(0.865)}{b_{12}' - 0.865} = 3.252 - 0.865\, b_{12}' - b_{12}'^2$$

$$s_2' = \frac{f_1(1.504) - f_1(b_{12}')}{1.504 - b_{12}'} = 1.738 - 1.504\, b_{12}' - b_{12}'^2$$

This can be simplified to

$$z_1'^{(2)} = 0.282 - 3.135\, b_{12}' + 2.107\, b_{12}'^2 - 0.429\, b_{12}'^3$$

$$\frac{dz_1'^{(2)}}{db_{12}'} = -3.135 + 4.214\, b_{12}' - 1.287\, b_{12}'^2 = 0$$

$$b_{12}' = 1.15 \text{ or } 2.09$$

Since 2.09 > 1.504, 1.15 is chosen. The refined break points for $f_1(x_1)$ are at 0.5, 0.865, and 1.15.
Now, for $f_2(x_2)$ on the interval (0.717, 2), formulate

$$z_2' = \int_{0.717}^{b_{21}'} \left| 6x_2 - 2x_2^2 - f_2(0.717) - s_{21}'(x_2 - 0.717) \right| dx_2$$
$$+ \int_{b_{21}'}^{2} \left| 6x_2 - 2x_2^2 - f_2(0.717) - s_{21}'(b_{21}' - 0.717) - s_{22}'(x_2 - b_{21}') \right| dx_2$$

where

$$s_{21}' = \frac{f_2(b_{21}') - f_2(0.717)}{b_{21}' - 0.717} = 4.566 - 2b_{21}'$$

$$s_{22}' = \frac{f_2(2) - f_2(b_{21}')}{2 - b_{21}'} = 2 - 2b_{21}'$$

After integration and simplification,

$$z_2' = 2.641 - 3.486\, b_{21}' + 1.283\, b_{21}'^2$$

$$\frac{dz_2'}{db_{21}'} = -3.486 + 2.566\, b_{21}' = 0, \qquad b_{21}' = 1.35$$

Similarly, the refined break points for $f_2(x_2)$ are at 0.717, 1.35, and 2.
The new slopes are

$$s_{11}' = \frac{f_1(0.865) - f_1(0.5)}{0.865 - 0.5} = 2.57$$

$$s_{12}' = \frac{f_1(1.15) - f_1(0.865)}{1.15 - 0.865} = 0.94$$

$$s_{21}' = \frac{f_2(1.35) - f_2(0.717)}{1.35 - 0.717} = 1.87$$

$$s_{22}' = \frac{f_2(2) - f_2(1.35)}{2 - 1.35} = -0.7$$
The second approximating linear programming problem can then be formulated as:

Maximize

$$F_2(X') = s_{11}'x_{11}' + s_{12}'x_{12}' + s_{21}'x_{21}' + s_{22}'x_{22}' = 2.57x_{11}' + 0.94x_{12}' + 1.87x_{21}' - 0.7x_{22}' \qquad (4\text{-}2)$$

subject to

$$x_{11}' + x_{12}' + 3x_{21}' + 3x_{22}' \le 8 - 0.5 - 3(0.717) = 5.349$$
$$5x_{11}' + 5x_{12}' + 2x_{21}' + 2x_{22}' \le 14 - 5(0.5) - 2(0.717) = 10.066$$
$$x_{11}' \le (0.865 - 0.5) = 0.365$$
$$x_{12}' \le (1.15 - 0.865) = 0.285$$
$$x_{21}' \le (1.35 - 0.717) = 0.633$$
$$x_{22}' \le (2 - 1.35) = 0.65$$
$$x_{jk}' \ge 0 \quad \text{for } j = 1, 2; \; k = 1, 2$$

where $0.5 + x_{11}' + x_{12}' = x_1$ and $0.717 + x_{21}' + x_{22}' = x_2$.
This transformed problem was again solved using the linear programming algorithm. The optimal solution obtained is (see Figures 4-1 and 4-2):

$$x_{11}'^* = 0.365, \quad x_{12}'^* = 0.285, \quad x_{21}'^* = 0.633, \quad x_{22}'^* = 0$$
$$F_2(X'^*) = 2.3897$$

Thus,

$$x_1^* = x_{11}'^* + x_{12}'^* + 0.5 = 1.15$$
$$x_2^* = x_{21}'^* + x_{22}'^* + 0.717 = 1.35$$

The optimal solution of the original problem can then be calculated by adding the "intercepts" $f_1(0.5)$ and $f_2(0.717)$ to the value of eq. 4-2:
Figure 4-1. Piecewise Linear Approximation of $f_1(x_1)$ (axis marks at 0.865, 1.15, 1.504).

Figure 4-2. Piecewise Linear Approximation of $f_2(x_2)$ (axis marks at 0.359, 0.717, 1.35, 2).
$$F(X^*) = F_2(X'^*) + f_1(0.5) + f_2(0.717) = 7.5387$$

where

$$X^* = \begin{pmatrix} x_1^* \\ x_2^* \end{pmatrix} = \begin{pmatrix} 1.15 \\ 1.35 \end{pmatrix}$$

The approximating solution has increased by 0.725 and deviates by 0.040 from the optimal solution, 7.579, as obtained by a direct method of solution of the original nonlinear problem (see Appendix V).
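As an added check, with $f_1(0.5) = 4(0.5) - (0.5)^3 = 1.875$ and $f_2(0.717) = 6(0.717) - 2(0.717)^2 = 3.274$:

$$F(X^*) = 2.3897 + 1.875 + 3.274 = 7.5387,$$

in agreement with the value above.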
Note that the optimal solution of the linear programming problem always lies on a break line. This is due to the fact that only the extreme points of the objective function are searched by the simplex algorithm.
Illustrative Example 2

Given a variable y which depends on two variables $x_1$ and $x_2$, with the restrictions that the sum of $2x_1$ and $3x_2$ must be less than or equal to 15, and that $x_1$ and $x_2$ must each be greater than or equal to zero, it is desired to find the values of $x_1$ and $x_2$ which maximize y subject to the stated restrictions.
Rewritten in mathematical form, the problem can be stated as:

Maximize $y = f(x_1, x_2)$

subject to

$$2x_1 + 3x_2 \le 15$$
$$x_1 \ge 0, \quad x_2 \ge 0$$
Suppose that an analytic expression for y in terms of $x_1$ and $x_2$ has not been found, but that observed values of y, when $x_1$ and $x_2$ are set to different values, are available as presented in Tables 4-1 and 4-2. The problem type then is that of Chapter III, and hence that solution technique is used.
Step 1: Find the best piecewise-linear equations of y to fit the observed data.

For $y = f_1(x_1)$ when $x_2$ is set to zero, and $y = f_2(x_2)$ when $x_1$ is set to zero, the optimal break points and the associated linear regression coefficients were solved for using the computer (see Appendix IV). Assume that the maximum allowable deviation is 4. The optimal break points, the slopes, and the vertical intercepts at the respective origins are given in Table 4-3.
Close examination reveals that the approximating lines do not intersect exactly (see Figures 4-3 and 4-4). However, the vertical gaps are so small that it can be concluded that the process exhibits continuity. Hence, the second of the three methods for overcoming discontinuity can be applied; that is, the method of intercept shift: the ending value of the immediately preceding segment is identified as the vertical intercept of the linear equation of the current segment, with the slope unchanged. A small numerical sketch of this rule is given below.
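The following added illustration (with an assumed program layout, not the thesis's own code) applies the intercept-shift rule to the slopes and segment widths of $f_2(x_2)$ from Table 4-3.

! An added sketch of the intercept-shift rule: each segment keeps its
! fitted slope, but its intercept is reset so that the segment begins
! where the preceding segment ended.
program intercept_shift
  implicit none
  real :: slope(5), width(5), a(5)
  integer :: j
  slope = (/10.0, 5.2, 1.0, -3.3, -8.499/)   ! fitted slopes of f2(x2)
  width = (/1.0, 3.0, 1.0, 3.0, 2.0/)        ! segment lengths
  a(1) = 1.0                                 ! intercept of the first segment
  do j = 2, 5
     a(j) = a(j-1) + slope(j-1)*width(j-1)   ! shift to the previous end value
  end do
  print '(5f8.3)', a   ! prints 1.000, 11.000, 26.600, 27.600, 17.700
end program intercept_shift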
Define $0 \le x_{11} \le 1.5$, $0 \le x_{21} \le 2$, $0 \le x_{31} \le 0.5$; $0 \le x_{12} \le 1$, $0 \le x_{22} \le 3$, $0 \le x_{32} \le 1$, $0 \le x_{42} \le 3$, $0 \le x_{52} \le 2$. The linear equations in each segment of $f_1(x_1)$ and $f_2(x_2)$ can then be determined as follows (see Tables 4-1 through 4-3).
TABLE 4-1. VALUES OF (y, x1) WHEN x2 IS SET TO "0".

TABLE 4-2. VALUES OF (y, x2) WHEN x1 IS SET TO "0".

x2:  0   1   2   3   4   5   6   7   8   9  10
y:   1  11  15  22  26  27  23  20  17   8   0

TABLE 4-3. OPTIMAL BREAK POINTS, SLOPES, AND INTERCEPTS OF f1(x1) AND f2(x2)

Function   Segment   Interval     Slope     Vertical Intercept
f1(x1)     1         (0, 1.5)      4.857     9.286
           2         (1.5, 3.5)   -4.000    22.000
           3         (3.5, 4)     -6.000    28.000
f2(x2)     1         (0, 1)       10.000     1.000
           2         (1, 4)        5.200     5.500
           3         (4, 5)        1.000    22.000
           4         (5, 8)       -3.300    43.201
           5         (8, 10)      -8.499    84.824
Figure 4-3. Piecewise Linear Approximation of $y = f_1(x_1)$

Figure 4-4. Piecewise Linear Approximation of $y = f_2(x_2)$
For $f_1(x_1)$ the segment equations are $9.286 + 4.857x_{11}$, $16.572 - 4x_{21}$, and $8.572 - 6x_{31}$; for $f_2(x_2)$ they are $1.000 + 10.000x_{12}$, $11 + 5.20x_{22}$, $26.6 + x_{32}$, $27.6 - 3.3x_{42}$, and $17.7 - 8.499x_{52}$, respectively. The piecewise linear function of $f_1(x_1)$ can then be written as $9.286 + 4.857x_{11} - 4x_{21} - 6x_{31}$. In the same manner, the piecewise linear equation of $f_2(x_2)$ can be determined as $1.000 + 10.000x_{12} + 5.20x_{22} + x_{32} - 3.3x_{42} - 8.499x_{52}$.
The original problem can now be transformed to:

Maximize

$$y = 10.286 + 4.857x_{11} - 4x_{21} - 6x_{31} + 10.000x_{12} + 5.20x_{22} + x_{32} - 3.3x_{42} - 8.499x_{52}$$

subject to

$$2x_{11} + 2x_{21} + 2x_{31} + 3x_{12} + 3x_{22} + 3x_{32} + 3x_{42} + 3x_{52} \le 15$$
$$x_{11} \le 1.5, \quad x_{21} \le 2, \quad x_{31} \le 0.5$$
$$x_{12} \le 1, \quad x_{22} \le 3, \quad x_{32} \le 1, \quad x_{42} \le 3, \quad x_{52} \le 2$$
$$x_{j1} \ge 0 \quad \text{for } j = 1, 2, 3; \qquad x_{j2} \ge 0 \quad \text{for } j = 1, 2, 3, 4, 5$$

where $x_1 = x_{11} + x_{21} + x_{31}$ and $x_2 = x_{12} + x_{22} + x_{32} + x_{42} + x_{52}$.
Solving this problem via the linear programming algorithm, the optimal solution can be identified as (see Appendix IV):

$$x_{11}^* = 1.500, \quad x_{21}^* = 0, \quad x_{31}^* = 0$$
$$x_{12}^* = 1.000, \quad x_{22}^* = 3.000, \quad x_{32}^* = 0, \quad x_{42}^* = 0, \quad x_{52}^* = 0$$
$$x_1^* = x_{11}^* + x_{21}^* + x_{31}^* = 1.500$$
$$x_2^* = x_{12}^* + x_{22}^* + x_{32}^* + x_{42}^* + x_{52}^* = 4.000$$
$$y^* = f(x_1^*, x_2^*) = 10.286 + 32.886 = 43.172$$
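As an added arithmetic check:

$$y^* = 10.286 + 4.857(1.5) + 10.000(1.0) + 5.2(3.0) = 10.286 + 7.286 + 10.000 + 15.600 = 43.172.$$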
CHAPTER V
CONCLUDING DISCUSSION
Any practical optimization problem in the real world can be classified into one of two categories. The first category entails knowledge of the objective function, the domain of feasibility, and the form of the analytic functions and inequalities. This prior knowledge enables the use of existing mathematical programming algorithms to search for the optimum. The second category of optimization problems occurs when an analytic expression for the objective function is not available, but a specific set of values of the independent variables and the corresponding values of the objective function are available. In such circumstances, the usual approach for finding the optimum is to fit a curve to the data, followed by the application of an appropriate optimization algorithm. In both cases, selection of a proper mathematical programming algorithm is very important to the efficiency and success of the search for the optimum, as is insight into the impact the constraints have on the optimum.
Roughly speaking, mathematical programming can be partitioned into two main areas: linear programming and nonlinear programming. Linear programming is concerned with determining optimal decision variables for problems in which a linear objective function and linear constraints are involved. There are numerous practical applications of linear programming throughout industry and business, as well as other areas, and very efficient algorithms are available for solving these problems.
Nonlinear programming, on the other hand, is concerned with determining optimal decision variables when a nonlinear objective and/or nonlinear constraints are involved. There are also many nonlinear optimization models. However, solution techniques for nonlinear optimization problems are not as readily applicable as linear optimization techniques. Although many different nonlinear programming algorithms have been developed to solve different special classes of problems, each algorithm has application only within a narrow class of problems. Therefore, the separable programming approach, which offers greater generality, becomes an attractive way to solve nonlinear programming problems.
As pointed out by Dorn (9), a large number of practical problems have separable objective functions. The separable programming approach, or more specifically the piecewise linear approximation approach, is actually the most universally used in practical applications. Separable programming can be used for many nonlinear programming problems, including the geometric programming problem, as recently reported by Kochenberger, Woolsey, and McCarl (14).
Although transforming the original nonlinear problem to the approximating linear problem greatly increases the number of variables, the problem can still be solved easily and quickly due to the efficiency of the simplex algorithm. In many cases, the most valuable aspect of an optimization problem is not the optimal plan itself, but rather the resulting insight gained, especially with respect to a sensitivity analysis of the problem. Linear programming has the advantage that the impact of constraints on the optimum is very easy to identify. Numerous computer codes and techniques for handling special formulations of the problem are also available. Even though there are many practical problems which are nonlinear, the simplex method has proven to be so efficient that the approximate solution given by the linear programming model is very acceptable for most of these problems.
The search technique developed in Chapter II for the optimal break points of the piecewise linear function is amenable to the existing separable convex programming method. It helps to find a better approximation of the objective function, and hence a more precise, near-optimum solution can be obtained. When this search technique is applied at each stage of the approximating linear programming problem by the procedure suggested in Chapter II, it can effect a reduction in the number of iterations required to obtain the same optimal solution.
The optimal break-point search technique is very easy to apply; the only difficulty involved is the determination of the roots of a single-variable nonlinear equation, which is of the same order as the objective function to be approximated. That is, if the degree of the objective function is n, then the degree of the nonlinear equation to be solved is also n (after integration and successive differentiation). For many practical problems, the degree of the objective function need not be great, and hence the solution technique is applied easily.
When both the objective function and the constraints are nonlinear, questions arise as to which one should choose as the basis for selecting the optimal break points. The author suggests that if there is only one nonlinear constraint, one should choose the constraint as the basis for selecting the optimal break points, in that the simplex method examines the extreme points of the feasibility region in locating the optimum. The result is a more precise constraint set, or equivalently, a better set of extreme points. If, however, there is more than one nonlinear constraint, then the choice is arbitrary. Having solved the first approximating linear programming problem and having some idea of the neighborhood of the optimum, successively more precise approximations can be obtained by further grid refinement.
Another method to handle this situation might be mentioned, in the event that the independent variable x can be expressed explicitly (11). For example, if the given problem is:

Maximize $f(x) = \sqrt{x + a_1} - \sqrt{a_2}$

subject to

$$b x^2 - 2x \le c, \qquad x \ge 0$$

Let $y = \sqrt{x + a_1} - \sqrt{a_2}$; then

$$x = y^2 + 2\sqrt{a_2}\, y + (a_2 - a_1)$$

and

$$b x^2 - 2x = b y^4 + 4b\sqrt{a_2}\, y^3 + \left(6 a_2 b - 2 a_1 b - 2\right) y^2 + 4\left[b(a_2 - a_1) - 1\right]\sqrt{a_2}\, y + (a_2 - a_1)\left[b(a_2 - a_1) - 2\right] \le c$$
The transformed problem can be stated as:

Maximize $y$

subject to

$$b y^4 + 4b\sqrt{a_2}\, y^3 + \left(6 a_2 b - 2 a_1 b - 2\right) y^2 + 4\left[b(a_2 - a_1) - 1\right]\sqrt{a_2}\, y \le D$$

and $y \ge 0$, provided $y = \sqrt{x + a_1} - \sqrt{a_2} \ge 0$ for all feasible x, where $D = c - (a_2 - a_1)\left[b(a_2 - a_1) - 2\right]$. In this transformed problem, there is only one nonlinear function which needs to be approximated.
The optimal break-point search method developed in Chapter III addresses the second category of optimization problems indicated at the beginning of this chapter. The curve selected to fit the data is composed of piecewise linear functions so that the simplex algorithm can be applied and the advantages of linear programming structure can be attained. One natural limitation involved in this problem is the danger of extrapolation beyond the scope of the observations. Unless the scope of observations is broad enough to cover the actual feasibility region, a global optimum cannot be guaranteed.
If the constraint set of the separable programming problem is not convex, so that there may be several extreme or locally optimum solutions, then the global optimum cannot be guaranteed by the solution procedure discussed so far. Since the linear programming algorithm searches the extreme points, it will terminate upon reaching a local optimum. Soland (20) attacks this problem by a branch and bound algorithm. The algorithm solves a sequence of problems, in each of which the objective function is convex. Each of these problems restricts each of the variables $x_j$ to a suitable subinterval of $(\ell_j, L_j)$ and replaces each original objective function $\psi$ by its convex envelope* over the subinterval. The minimum of the optimal solutions over all subsets is then identified as the global minimum.
A very practical advantage of separable programming is that one can terminate at any stage of the procedure when the approximating solution becomes satisfactory. This fact constitutes one of the reasons why separable programming is gaining more and more attention.
In conclusion, the purpose of this study was to develop a simple, systematic method to search for the optimal break points of the piecewise linear function used in the separable programming problem. A future prospect for investigation is to program the break-point search method and the refinement process via the linear programming algorithm. The possibility of the direct fitting of multivariate data with piecewise linear functions might also be investigated.
*The convex envelope of $\psi$ is the highest convex function which fits below $\psi$.
APPENDICES
APPENDIX I
DEFINITION OF CONVEXITY
A function of a single variable, f(x), is a convex function if, for each pair of values of x, say $x_1$ and $x_2$,

$$f(\lambda x_1 + (1 - \lambda)x_2) \le \lambda f(x_1) + (1 - \lambda) f(x_2)$$

for all values of $\lambda$ with $0 \le \lambda \le 1$. f(x) is a strictly convex function if the inequality is strict. It is a concave function (or a strictly concave function) if this statement holds when $\ge$ (or $>$) is used instead of $\le$ (or $<$).

If f(x) possesses a second derivative everywhere, then f(x) is convex if and only if $d^2 f(x)/dx^2 \ge 0$ for all values of x for which f(x) is defined. Similarly, f(x) is strictly convex when $d^2 f(x)/dx^2 > 0$, concave when $d^2 f(x)/dx^2 \le 0$, and strictly concave when $d^2 f(x)/dx^2 < 0$.
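For instance (an added example drawn from Chapter IV's functions):

$$f(x) = x^2: \;\frac{d^2 f}{dx^2} = 2 > 0 \;\text{(strictly convex)}; \qquad f(x) = 6x - 2x^2: \;\frac{d^2 f}{dx^2} = -4 < 0 \;\text{(strictly concave)}.$$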
APPENDIX II
NUMERICAL SOLUTION METHOD OF NONLINEAR EQUATIONS
Determination of the Number of Real Roots of a Polynomial Equation on the Interval (a,b): Sturm's Theorem (3)

Let $F_0 \equiv P_n(x)$ be a polynomial of degree n in x, and let $F_1 \equiv P_n'(x)$. Let $-F_2$ be the remainder in the division of $F_0$ by $F_1$, with $Q_1$ the corresponding quotient; that is, $F_0 = Q_1 F_1 - F_2$. Similarly, $F_1 = Q_2 F_2 - F_3$, ..., and finally $F_{n-1} = Q_n F_n$, with the convention that any nonzero constant divides any polynomial exactly. Let $a < b$ be two real numbers, and let $V_a$ and $V_b$ be the numbers of variations in sign of the sequence of functions $F_0, F_1, F_2, \ldots$ evaluated at a and b, respectively. Then $V_a \ge V_b$, and $V_a - V_b$ exactly equals the number of real roots of $P_n(x) = 0$ on the interval (a,b). This is true for simple as well as multiple roots, but each multiple root is to be counted only once.

Thus, for equation 4-1, $7.610\, b_{12}^3 - 15 b_{12}^2 + 8 b_{12} - 4 = 0$, the successive functions obtained were:

$$F_0 = 7.610\, b_{12}^3 - 15 b_{12}^2 + 8 b_{12} - 4$$
$$F_1 = 22.83\, b_{12}^2 - 30 b_{12} + 8$$
$$F_2 = -5 b_{12}^2 + 5.33\, b_{12} - 4$$
$$F_3 = 5.648\, b_{12} - 10.264$$
$$F_4 = -3.699\, b_{12} - 4$$
$$F_5 = -16.344$$
The signs of $F_0, F_1, \ldots, F_5$ evaluated at $b_{12} = 0$ and $b_{12} = 2$ give $V_0 = 3$ and $V_2 = 2$. Thus $V_0 - V_2 = 3 - 2 = 1$, and there is exactly one real root for equation 4-1 on the interval (0, 2).
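For comparison, the added sketch below counts sign variations using a Sturm sequence obtained by standard polynomial division, $F_{k+1} = -\mathrm{rem}(F_{k-1}, F_k)$; its intermediate terms differ from the hand computation above, but it yields the same count, $V(0) - V(2) = 1$. The program layout is assumed, not the thesis's.

! An added sketch: sign-variation count for equation 4-1.
program sturm_check
  implicit none
  real :: f(4), x, pts(2)
  integer :: i, j, v(2)
  pts = (/0.0, 2.0/)
  do j = 1, 2
     x = pts(j)
     f(1) = 7.610*x**3 - 15.0*x**2 + 8.0*x - 4.0  ! F0
     f(2) = 22.83*x**2 - 30.0*x + 8.0             ! F1 = F0'
     f(3) = 1.237*x + 2.248                       ! F2 = -rem(F0, F1)
     f(4) = -137.9                                ! F3 = -rem(F1, F2)
     v(j) = 0
     do i = 1, 3
        if (f(i)*f(i+1) < 0.0) v(j) = v(j) + 1    ! one sign variation
     end do
  end do
  print '(a,i2,a,i2)', ' V(0) =', v(1), '   V(2) =', v(2)  ! prints 2 and 1
end program sturm_check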
Determination of the Value of a Root of a Polynomial Equation

1. Method of False Position (3)

In this method, we find experimentally two numbers $x_1$ and $x_2$, as close together as possible, such that $y_1 = f(x_1)$ and $y_2 = f(x_2)$ are of opposite signs. If f(x) is continuous, then there is a root $x_0$ of the equation f(x) = 0 in the interval $(x_1, x_2)$. A first approximation $x^{(1)}$ can be derived by replacing the function y = f(x) in the interval $(x_1, x_2)$ by the straight line which passes through the points $(x_1, y_1)$ and $(x_2, y_2)$ and finding the abscissa of the point where this line cuts the x axis. Thus,

$$x^{(1)} = x_1 - \frac{y_1 (x_1 - x_2)}{y_1 - y_2}$$

To obtain an even better approximation, $y^{(1)} = f(x^{(1)})$ is computed. Then, if $y_1$ and $y^{(1)}$ are of opposite signs, a root lies in the interval $(x_1, x^{(1)})$. Otherwise $y^{(1)}$ and $y_2$ must be of opposite signs, and a root lies in the interval $(x^{(1)}, x_2)$. After locating the root in this way, repeat the above procedure to obtain a closer approximation.
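A short added sketch of the method follows (the program layout is assumed, and the application to equation 4-1 is chosen here only for illustration).

! An added sketch: false position applied to equation 4-1.
program false_position
  implicit none
  real :: x1, x2, y1, y2, x3, y3
  integer :: k
  x1 = 1.4                              ! f(1.4) < 0
  x2 = 1.6                              ! f(1.6) > 0, opposite sign
  do k = 1, 20
     y1 = f(x1)
     y2 = f(x2)
     x3 = x1 - y1*(x1 - x2)/(y1 - y2)   ! secant through the two points
     y3 = f(x3)
     if (abs(y3) < 1.0e-6) exit
     if (y1*y3 < 0.0) then
        x2 = x3                         ! root lies in (x1, x3)
     else
        x1 = x3                         ! root lies in (x3, x2)
     end if
  end do
  print '(a,f7.4)', ' root = ', x3      ! about 1.505
contains
  real function f(x)
    real, intent(in) :: x
    f = 7.610*x**3 - 15.0*x**2 + 8.0*x - 4.0
  end function f
end program false_position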
2. Newton-Raphson Method (3)

In this method, start with an approximation $x^{(1)}$ to the required root $x_0$ of the equation f(x) = 0. The second approximation $x^{(2)}$ to the root can be obtained as

$$x^{(2)} = x^{(1)} - \frac{f(x^{(1)})}{f'(x^{(1)})}$$

where f'(x) is the derivative of f(x). Repeat the process to obtain further approximations $x^{(3)}, x^{(4)}, \ldots$, successively, from the formula

$$x^{(i+1)} = x^{(i)} - \frac{f(x^{(i)})}{f'(x^{(i)})}$$

The process converges rapidly when the numerical value of the derivative f'(x) is large relative to f(x) in the neighborhood of the required root.
APPENDIX III

MFOR LP TABLEAUX AND PROGRAM OUTPUT OF ILLUSTRATIVE EXAMPLE 1

TABLE III-1. MFOR LP TABLEAU (1) AND OPTIMAL SOLUTION, ILLUSTRATIVE EXAMPLE 1 (condensed): optimal objective value 6.813845, with solution values 0.865, 0, 0, 0.359, 0.358, 1.283 for $x_{11}, x_{12}, x_{13}, x_{21}, x_{22}, x_{23}$.

TABLE III-2. MFOR LP TABLEAU (2) AND OPTIMAL SOLUTION, ILLUSTRATIVE EXAMPLE 1 (condensed): optimal objective value 2.389660, with solution values 0.365, 0.285, 0.633, 0 for $x_{11}', x_{12}', x_{21}', x_{22}'$.
APPENDIX IV
COMPUTER PROGRAM TO SEARCH FOR THE OPTIMAL BREAK POINTS
Variable    Description
X(I)        Independent variable of the regression equation
Y(I)        Dependent variable of the regression equation
NK          Maximum allowable deviation of the observed value from the calculated value
      DIMENSION X(15)
      DIMENSION Y(15)
      READ(105,1)N,NK
      READ(105,12)(X(I),Y(I),I=1,N)
      M=1
C     START A NEW SEGMENT AT POINT M WITH A THREE-POINT FIT
    2 SUMX=0.
      SUMX2=0.
      SUMY=0.
      SUMY2=0.
      SUMXY=0.
      J=M+2
      DO 3 I=M,J
      SUMX=SUMX+X(I)
      SUMX2=SUMX2+X(I)**2
      SUMY=SUMY+Y(I)
      SUMY2=SUMY2+Y(I)**2
    3 SUMXY=SUMXY+X(I)*Y(I)
      SXX=3.*SUMX2-SUMX**2
      SYY=3.*SUMY2-SUMY**2
      SXY=3.*SUMXY-SUMX*SUMY
      SE=(SXX*SYY-SXY**2)/(3.*SXX)
      F=FLOAT(NK*NK)/5.
      IF(SE.LE.F)GO TO 6
C     THREE POINTS ALREADY INFEASIBLE -- FIT A TWO-POINT SEGMENT
      JA=J-1
      B=(Y(JA)-Y(M))/(X(JA)-X(M))
      A=((Y(JA)+Y(M))/2.)-B*(X(JA)+X(M))/2.
      WRITE(108,13)JA,B,A
      M=M+1
      GO TO 2
C     DROP THE POINT THAT CAUSED INFEASIBILITY AND REPORT THE SEGMENT
    4 SUMX=SUMX-X(M)
      SUMX2=SUMX2-X(M)**2
      SUMY=SUMY-Y(M)
      SUMY2=SUMY2-Y(M)**2
      SUMXY=SUMXY-X(M)*Y(M)
      SXX=IN*SUMX2-SUMX**2
      SYY=IN*SUMY2-SUMY**2
      SXY=IN*SUMXY-SUMX*SUMY
      SE=(SXX*SYY-SXY**2)/(IN*(IN-2)*SXX)
      B=SXY/SXX
      A=(SUMY/IN)-B*(SUMX/IN)
      WRITE(108,5)M,SXX,SXY,SYY,SE,B,A
    5 FORMAT(/,1X,'THE BREAK POINT IS AT POINT NUMBER',I3,
     1//,1X,'SXX=',F10.3,2X,'SXY=',F10.3,2X,'SYY=',F10.3,
     22X,'SE=',F10.3,//,1X,'SLOPE B=',F10.3,2X,'VERTICAL',
     3' INTERCEPT A=',F10.3)
      IF((N-M+1).LT.2)GO TO 9
      GO TO 2
C     EXTEND THE SEGMENT ONE POINT AT A TIME WHILE SE REMAINS FEASIBLE
    6 M=M+2
      IN=3
      L=M+1
      DO 7 I=L,N
      SUMX=SUMX+X(I)
      SUMX2=SUMX2+X(I)**2
      SUMY=SUMY+Y(I)
      SUMY2=SUMY2+Y(I)**2
      SUMXY=SUMXY+X(I)*Y(I)
      IN=IN+1
      SXX=IN*SUMX2-SUMX**2
      SYY=IN*SUMY2-SUMY**2
      SXY=IN*SUMXY-SUMX*SUMY
      SE=(SXX*SYY-SXY**2)/(IN*(IN-2)*SXX)
      IF(SE.GT.F)GO TO 8
    7 CONTINUE
    8 M=M+IN-3
      IN=IN-1
      GO TO 4
C     FEWER THAN TWO POINTS REMAIN -- FIT THE FINAL TWO-POINT SEGMENT
    9 NA=N-1
      B=(Y(N)-Y(NA))/(X(N)-X(NA))
      A=Y(NA)
      WRITE(108,10)B,X(NA),A
   10 FORMAT(//,1X,'SLOPE B=',F10.3,2X,'VERTICAL ',
     1'INTERCEPT A AT X=',F5.3,'=',F10.3)
    1 FORMAT(2I2)
   12 FORMAT(2F10.2)
   13 FORMAT(//,1X,'THE BREAK POINT IS AT POINT',
     1' NUMBER ',I2,//,1X,'SLOPE B=',F10.3,2X,
     2'VERTICAL INTERCEPT A=',F10.3)
      STOP
      END
Program output (condensed). For $f_1(x_1)$, break points were reported at point numbers 3 and 7, with fitted slopes B = 4.857, -4.000, and -6.000. For $f_2(x_2)$, the fitted slopes were B = 10.000, 5.200, 1.000, -3.300, and -8.499. The corresponding break points, slopes, and vertical intercepts are those collected in Table 4-3.
TABLE IV-1. MFOR LP TABLEAU AND OPTIMAL SOLUTION, ILLUSTRATIVE EXAMPLE 2 (condensed): optimal objective value 32.885500, with $x_{11} = 1.5$, $x_{12} = 1.0$, $x_{22} = 3.0$, and all other segment variables zero.
APPENDIX V
SOLUTION OBTAINED FROM THE DIFFERENTIAL ALGORITHM (22)
Let $v_m$ be the constrained derivatives for m = 1, 2, ..., let $v^-$ be the smallest negative derivative, and let $v^+$ be the largest positive decision derivative for which the corresponding decision variable $d_m$ is positive. The problem solution is illustrated as follows:

Step 1

Maximize $y = 4x_1 - x_1^3 + 6x_2 - 2x_2^2$

subject to

$$x_1 + 3x_2 \le 8$$
$$5x_1 + 2x_2 \le 14$$
$$x_1, x_2 \ge 0$$

can be transformed to:

Maximize $y = 4x_1 - x_1^3 + 6x_2 - 2x_2^2$

subject to

$$x_1 + 3x_2 + x_3 = 8 \qquad (V\text{-}1)$$
$$5x_1 + 2x_2 + x_4 = 14 \qquad (V\text{-}2)$$
$$x_1, x_2, x_3, x_4 \ge 0$$

Let $x_1, x_2$ be the decision variables and $x_3, x_4$ be the state variables.

$$v_1 = 4 - 3x_1^2 = 4, \qquad v_2 = 6 - 4x_2 = 6 \qquad \text{when } x_1 = x_2 = 0$$

$v^+ = v_2 = 6$; thus y can be increased by increasing $x_2$ while holding $x_1$ constant. From equation V-1, $x_2$ can be increased by 2.67 without going beyond the feasible region. From equation V-2, $x_2$ can be increased by 7 without going beyond the feasible region. Also, from the expression for $v_2$, $x_2$ can be increased at most by 1.5 in order to increase y. Hence, $x_2 = \min(2.67, 7, 1.5) = 1.5$. The constraints become

$$x_1 + 3x_2 + x_3 = 3.5 \qquad (V\text{-}3)$$
$$5x_1 + 2x_2 + x_4 = 11 \qquad (V\text{-}4)$$

Step 2

$$v_1 = 4 - 3x_1^2 = 4, \qquad v_2 = 6 - 4x_2 = 0 \qquad \text{when } x_1 = 0, \; x_2 = 1.5$$

$v^+ = v_1 = 4$; y can be increased by increasing $x_1$. From equation V-3, $x_1$ can be increased by 3.5; from equation V-4, $x_1$ can be increased by 2.2. But from the expression for $v_1$, $x_1$ can be increased at most by 1.15. Therefore, $x_1 = \min(3.5, 2.2, 1.15) = 1.15$, and $y = 7.579$.
BIBLIOGRAPHY
1. Abramowitz, M. and I. A. Stegun, Handbook of Mathematical Functions, National Bureau of Standards, Applied Mathematics Series, 55, 1964, pp. 67-89.

2. Alloin, Guy, "A Simplex Method for a Class of Nonconvex Separable Problems," Management Science, Vol. 17, No. 1, Sept., 1970, pp. 66-77.

3. Chakravarti, I. M., R. G. Laha, and J. Roy, Handbook of Methods of Applied Statistics, Vol. I, John Wiley and Sons, Inc., New York, 1967, Chapter 3.

4. Charnes, A. and W. W. Cooper, Management Models and Industrial Applications of Linear Programming, John Wiley and Sons, Inc., New York, 1961.

5. Charnes, A. and W. W. Cooper, "Nonlinear Power of Adjacent Extreme Point Methods," Econometrica, Vol. 25, No. 1, Jan., 1957.

6. Charnes, A. and C. E. Lemke, "Minimization of Nonlinear Separable Convex Functionals," Naval Research Logistics Quarterly, Vol. 1, No. 4, Dec., 1954, pp. 301-312.

7. Dantzig, G. B., "Recent Advances in Linear Programming," Management Science, Vol. 2, No. 2, Jan., 1956, pp. 131-144.

8. Dantzig, G. B., et al., "A Linear Programming Approach to the Chemical Equilibrium Problem," Management Science, Vol. 5, No. 1, Oct., 1958, pp. 38-43.

9. Dorn, W. S., "Nonlinear Programming--A Survey," Management Science, Vol. 9, No. 2, Jan., 1963, pp. 171-208.

10. Falk, J. E. and R. M. Soland, "An Algorithm for Separable Nonconvex Programming Problems," Management Science, Vol. 15, No. 9, May, 1969, pp. 550-569.

11. Hartley, H. O., "Nonlinear Programming by the Simplex Method," Econometrica, Vol. 29, No. 2, Apr., 1961, pp. 223-237.

12. Hadley, G., Nonlinear and Dynamic Programming, Addison-Wesley Publishing Co., 1964.

13. Hillier, F. S. and G. J. Lieberman, Introduction to Operations Research, Holden-Day, Inc., 1968, pp. 581-586.

14. Kochenberger, G. A., R. E. D. Woolsey, and B. A. McCarl, "On the Solution of Geometric Programs via Separable Programming," Operational Research Quarterly, Vol. 24, No. 2, June, 1973, pp. 285-294.

15. Lawler, E. L. and D. E. Wood, "Branch and Bound Methods: A Survey," Operations Research, Vol. 14, No. 4, July-August, 1966, pp. 705-707.

16. Lorentz, G. G., Approximation of Functions, Holt, Rinehart and Winston, 1966.

17. Miller, I. and J. E. Freund, Probability and Statistics for Engineers, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1965, pp. 226-258.

18. Miller, C., "The Simplex Method for Local Separable Programming," Recent Advances in Mathematical Programming, R. Graves and P. Wolfe (eds.), McGraw-Hill, New York, 1963.

19. Orchard-Hays, W., Advanced Linear Programming Computing Techniques, McGraw-Hill, New York, 1968, pp. 205-209.

20. Soland, R. M., "An Algorithm for Separable Nonconvex Programming Problems II: Nonconvex Constraints," Management Science, Vol. 17, No. 11, July, 1971, pp. 759-773.

21. Vogt, L. and J. Even, "Piecewise Linear Programming Solutions of Transportation Costs as Obtained from Rate Tariffs," AIIE Transactions, June, 1972, pp. 148- .

22. Wilde, D. J. and C. S. Beightler, Foundations of Optimization, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1967, Chapters 1-5.