Internat. J. Math. & Math. Sci.
Vol. 11 No. 4 (1988) 811-814
REDUCTION OF DIMENSIONALITY IN DYNAMIC PROGRAMMING-BASED SOLUTION
METHODS FOR NONLINEAR INTEGER PROGRAMMING
BALASUBRAMANIAN RAM
Department of Industrial Engineering
North Carolina A & T State University
Greensboro, NC 27411
and
A.J.G. BABU
Industrial Systems Department
University of South Florida
Tampa, FL 33620
(Received June 5, 1986)
ABSTRACT. This paper suggests a method of formulating any nonlinear integer programming problem, with any number of constraints, as an equivalent single-constraint problem, thus reducing the dimensionality of the associated dynamic programming problem.
KEY WORDS AND PHRASES. Dynamic Programming, Integer Programming.
1980 AMS SUBJECT CLASSIFICATION CODE. 90C39, 90C10.
1. INTRODUCTION.
There are numerous application areas in which it is possible to model the situation under study by formulating a discrete-variable nonlinear optimization model. In general, these situations can be represented by a nonlinear objective function with nonlinear constraints. Examples of such situations include facilities location, investment analysis and transportation problems [1].
The problem can be stated as follows:

    Maximize     f(x_1, x_2, \ldots, x_n)                                          (1.1)

    Subject to   g_i(x_1, x_2, \ldots, x_n) = b_i      for  i = 1, 2, \ldots, m    (1.2)

                 x_i \ge 0                             for  i = 1, 2, \ldots, n    (1.3)

                 x_i  is integer                       for  i = 1, 2, \ldots, n    (1.4)
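To make the formulation concrete, the following Python sketch encodes a small illustrative instance of (1.1)-(1.4) and solves it by brute-force enumeration. The particular objective, constraint functions, right-hand sides and search window are assumptions chosen for illustration only and are not taken from the paper.

```python
# Illustrative toy instance of (1.1)-(1.4); all data below are assumed for this sketch.
from itertools import product

def f(x1, x2):                 # nonlinear objective (1.1)
    return 3 * x1**2 + 2 * x2

def g1(x1, x2):                # nonlinear constraint functions (1.2)
    return x1**2 + x2

def g2(x1, x2):
    return x1 + x2**2

b = (5, 3)                     # right-hand sides b_i

best_val, best_x = None, None
for x1, x2 in product(range(6), repeat=2):   # x_i >= 0 and integer, (1.3)-(1.4); range(6) is an assumed finite window
    if g1(x1, x2) == b[0] and g2(x1, x2) == b[1]:
        if best_val is None or f(x1, x2) > best_val:
            best_val, best_x = f(x1, x2), (x1, x2)

print(best_x, best_val)        # (2, 1) with objective value 14
```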
There are four broad classes of solution methods available for problems (1.1)-(1.4): dynamic programming-based methods, branch and bound methods, linear approximation methods and hybrid methods.

Dynamic programming-based methods require that either the objective function (1.1) or the constraint set (1.2) be separable in the decision variables. Direct dynamic programming methods require that each of the constraints in constraint set (1.2) be separable, while the hypersurface algorithm of Cooper and Cooper [1-3] requires that the objective function (1.1) be separable.
Wei Shih [4] has developed a branch-and-bound method for the separable programming problem with the additional restriction that the component functions of the objective function satisfy the law of diminishing returns. This solution procedure was proven to yield the optimal solution; the proof was given by Mjelde [5]. Shih's procedure was reported to be faster than both dynamic programming and exhaustive search methods.
Attempts to linearize the nonlinear integer programming problem involve a radical increase in the number of variables and constraints. Methods to achieve more economical linear representations of 0-1 polynomials have been developed by Glover and Woolsey [6]. In spite of these developments, the linearization approach does not appear promising except for problems containing certain special structures.

Aust [7] suggested a dynamic programming and branch-and-bound hybrid approach for nonlinear integer programming problems with a separable objective function and separable constraints. The method involves partitioning the m constraints into p disjoint sets of constraints.
2. DYNAMIC PROGRAMMING METHODS.
For the single-constraint case, the direct dynamic programming-based methods for problems (1.1)-(1.4) give rise to one-dimensional dynamic programming problems (problems requiring one state variable per stage). Problems with multiple constraints give rise to multi-dimensional dynamic programming problems. The storage requirement for tables increases very rapidly as the number of constraints increases. This is referred to as the curse of dimensionality.

A number of methods have been proposed to reduce the problem of dimensionality, including the Lagrange multiplier method, the successive approximation method and the polynomial approximation method. A detailed discussion of these methods can be found in Cooper [8].
Cooper and Cooper [1] developed the hypersurface algorithm to reduce the dimensionality problem. The basic notion behind the algorithm is to search objective function hypersurfaces to see if they contain any feasible points. The lattice points for each hypersurface are found by solving a one-dimensional dynamic programming problem. The method requires a separable objective function but does not impose any restriction on the form of the constraints.
This paper suggests a method to reduce the storage requirement, which is a
serious problem in dynamic programming-based methods, especially for problems with
a large number of constraints.
3. REDUCTION OF DIMENSIONALITY BY CONSTRAINT AGGREGATION.
A nonlinear integer programming problem with several constraints can be reduced
to a problem with a single constraint if the original constraint set can be reduced to
an equivalent single constraint.
The following theorem provides a method of aggregating
both linear and nonlinear constraints with mild restrictions on the type of nonlinear
constraints.
THEOREM"
(See Babu and Ram [9] for a similar result in linear integer programming).
The system of equations (3.1) is equivalent to the equation (3.2).
n
Z
j=l
r..
=b.i
a.. x.
13
J
for
i
1,2,...,m
(3.1)
NONLINEAR INTEGER PROGRAMMING PROBLEM
m
aij bi, rij
r..
a..x. xJ
j=l ’J J
m
r.
(n pi
i=l
where
-
n
Y-
are integer
constants
813
(n pi
i--I
__(b
i)
(3.2)
x. are integer variables and
J
Pi
are distinct
prime numbers other than unity.
PROOF: Consider a solution (x_1, x_2, \ldots, x_n) to the system (3.1). Multiplying each equation in (3.1) by the constant \ln p_i and adding, we get equation (3.2).

We now prove that (3.2) implies (3.1). Assume (3.2) holds. Let B_i, i = 1, 2, \ldots, m, be the real numbers satisfying

    \sum_{j=1}^{n} a_{ij} x_j^{r_{ij}} = B_i        for  i = 1, 2, \ldots, m       (3.3)

Since the a_{ij}, r_{ij} and x_j are integers, the B_i are integers. Suppose B_i \ne b_i for i \in I, where I \subseteq \{1, 2, \ldots, m\}. Let I_1 = \{ i \in I : B_i > b_i \} and I_2 = \{ i \in I : B_i < b_i \}; I_1 and I_2 are clearly disjoint. Rewriting (3.2), we have

    \sum_{i=1}^{m} (\ln p_i) B_i = \sum_{i=1}^{m} (\ln p_i) b_i                    (3.4)

After cancelling like terms on both sides of (3.4),

    \sum_{i \in I_1} (\ln p_i)(B_i - b_i) = \sum_{i \in I_2} (\ln p_i)(b_i - B_i)  (3.5)

Exponentiating both sides of (3.5) gives \prod_{i \in I_1} p_i^{B_i - b_i} = \prod_{i \in I_2} p_i^{b_i - B_i}. Since the exponents (B_i - b_i), i \in I_1, and (b_i - B_i), i \in I_2, are positive integers, I_1 and I_2 are disjoint, and the p_i are distinct prime numbers other than unity, equation (3.5) cannot be satisfied. This implies that B_i = b_i for all i. Hence any solution (x_1, x_2, \ldots, x_n) to (3.2) will also be a solution to (3.1).
Thus, the system of equations (3.1) and the equation (3.2) are equivalent.
The above theorem gives a method of reducing a system of nonlinear constraints satisfying the following conditions to a single constraint:
(i)   Each of the constraints must be separable in the variables.
(ii)  The exponents of the variables must be integers.
(iii) The constraint coefficients and the right-hand-side constants in the constraints must be integers.
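A minimal numerical check of the aggregation is sketched below in Python. It reuses the toy constraints from the earlier example, takes p_1 = 2 and p_2 = 3 as the distinct primes, and verifies on a small grid that an integer point satisfies the single aggregated equation (3.2) exactly when it satisfies the original system (3.1). The data, the tolerance and the search window are assumptions made for illustration.

```python
# Sketch of the constraint aggregation of the theorem on assumed toy data:
#   x1^2 + x2   = 5      (constraint 1, multiplied by ln 2)
#   x1   + x2^2 = 3      (constraint 2, multiplied by ln 3)
import math
from itertools import product

primes = [2, 3]                              # distinct primes, one per constraint
b = [5, 3]                                   # right-hand sides b_i

def lhs_values(x1, x2):                      # left-hand sides of (3.1)
    return [x1**2 + x2, x1 + x2**2]

rhs = sum(math.log(p) * bi for p, bi in zip(primes, b))     # right-hand side of (3.2)

TOL = 1e-9                                   # tolerance for the irrational multipliers ln p_i
for x1, x2 in product(range(6), repeat=2):   # assumed finite search window
    lhs = sum(math.log(p) * g for p, g in zip(primes, lhs_values(x1, x2)))
    satisfies_aggregate = abs(lhs - rhs) < TOL
    satisfies_system = lhs_values(x1, x2) == b
    assert satisfies_aggregate == satisfies_system           # equivalence of (3.1) and (3.2)

print("the aggregated equation (3.2) and the system (3.1) agree on every grid point")
```

In exact arithmetic the equivalence rests on the uniqueness of prime factorization used in the proof; in floating point the tolerance anticipates the implementation issue discussed in Section 4.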
4. COMPUTATIONAL CONSIDERATIONS.
The use of the above theorem for problems (1.1)-(1.4) satisfying the conditions stated in Section 3 will result in a one-dimensional dynamic programming problem in place of a multi-dimensional problem. When the number of constraints in the original problem is large, the storage requirement for a computer implementation of the method is dramatically reduced. It must be mentioned here that in a computer implementation, the one-dimensional approach will involve additional table searching, as the state variables take on non-integer values and hence cannot be used directly as array indices. The irrational nature of the constraint multipliers \ln p_i causes errors in the evaluation of the state variable value. This problem can be circumvented by specifying a carefully chosen tolerance value.
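As one possible, purely hypothetical realization of these remarks, the Python sketch below runs a one-dimensional dynamic programming recursion over the aggregated constraint for the separable toy problem used earlier: the real-valued states are quantized so they can serve as table keys, and final states are accepted when they lie within a chosen tolerance of the aggregated right-hand side. The helper names, the toy data and the tolerance value are assumptions, not the authors' implementation.

```python
# One-dimensional DP over the aggregated constraint for a separable toy problem
# (assumed data):  maximize 3*x1^2 + 2*x2  s.t.  x1^2 + x2 = 5,  x1 + x2^2 = 3,
# with the two constraints folded into one via the multipliers ln 2 and ln 3.
import math

TOL = 1e-9                                   # carefully chosen tolerance (assumed value)

def key(state):                              # quantize a real state so it can index a table
    return round(state / TOL)

def solve(f_terms, c_terms, rhs, upper):
    # f_terms[j](x): objective contribution of x_j = x; c_terms[j](x): its share of (3.2)
    states = {key(0.0): (0.0, 0.0, [])}      # quantized state -> (state, value, partial solution)
    for f_j, c_j in zip(f_terms, c_terms):   # one DP stage per variable
        nxt = {}
        for s, v, xs in states.values():
            for x in range(upper + 1):
                s2, v2 = s + c_j(x), v + f_j(x)
                k = key(s2)
                if k not in nxt or nxt[k][1] < v2:    # keep the best value per state
                    nxt[k] = (s2, v2, xs + [x])
        states = nxt
    best = None
    for s, v, xs in states.values():
        if abs(s - rhs) < TOL and (best is None or v > best[0]):
            best = (v, xs)
    return best

rhs = math.log(2) * 5 + math.log(3) * 3      # aggregated right-hand side of (3.2)
f_terms = [lambda x: 3 * x**2, lambda x: 2 * x]
c_terms = [lambda x: math.log(2) * x**2 + math.log(3) * x,     # x1's share of (3.2)
           lambda x: math.log(2) * x + math.log(3) * x**2]     # x2's share of (3.2)
print(solve(f_terms, c_terms, rhs, upper=5))                   # -> (14.0, [2, 1])
```

Quantizing the state by the tolerance keeps the table one-dimensional at the cost of the additional table search mentioned above.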
REFERENCES
1. COOPER, L. and COOPER, M.W. Nonlinear Integer Programming, Computers and Mathematics with Applications, 1 (1975), 215-222.
2. COOPER, M.W. An Improved Algorithm for Nonlinear Integer Programming, Technical Report IEOR 77005, Southern Methodist University, Dallas, Texas (1977).
3. COOPER, M.W. The Use of Dynamic Programming Methodology for the Solution of a Class of Nonlinear Programming Problems, Naval Research Logistics Quarterly, 27 (1980), 89-95.
4. SHIH, WEI. A Branch and Bound Procedure for a Class of Discrete Resource Allocation Problems with Several Constraints, Operations Research Quarterly, 28 (1977), 439-451.
5. MJELDE, K.M. The Optimality of an Incremental Solution of a Problem Related to Distribution of Effort, Operations Research Quarterly, 26 (1975), 867-870.
6. GLOVER, F. and WOOLSEY, E. Converting the 0-1 Polynomial Programming Problem to a 0-1 Linear Program, Operations Research, 22 (1974), 180-182.
7. AUST, R.J. Dynamic Programming Branch and Bound Algorithm for Pure Integer Programming, Computers and Operations Research (1976), 27-28.
8. COOPER, L. and COOPER, M.W. Introduction to Dynamic Programming, Pergamon Press, Inc., New York (1981).
9. BABU, A.J.G. and RAM, B. On the Aggregation of Constraints in Integer Programs, Abstracts of American Mathematical Society, 2 (1981), 483.