Document 11070381

advertisement
LIBRARY
OF THE
MASSACHUSETTS INSTITUTE
OF TECHNOLOGY
ZiA^^mf
WORKING PAPER
ALFRED
P.
SLOAN SCHOOL OF MANAGEMENT
PARTITIONING ALGORITHMS
FOR A
CLASS OF KNAPSACK PROBLEMS
340-68
John F. Pierce*
MASSACHUSETTS
INSTITUTE OF TECHNOLOGY
50 MEMORIAL DRIVE
IDGE, MASSACHUSETTS
PARTITIONING ALGORITHMS
FOR A
CLASS OF KNAPSACK PROBLEMS
340-68
John F. Pierce*
Assistant Professor, Sloan School of Management, M.I.T.
Work reported herein was supported in part by
U.S. Plywood - Champion Papers, Inc.
Abstract
In this paper algorithms are presented for evaluating
the knapsack function for a class of two-dimensional knapsack
problems such as arises, for example,
in the solution of cutting
stock problems in staged guillotine cutting operations.
This
class includes as a special case the classical one-dimensional
knapsack problem.
In each algorithm evaluation proceeds from
the larger knapsack sizes to the smaller, exploiting for each
size the optimality of all partitions or descendents of an
optimal solution for
a
knapsack of that size.
All of the
algorithms discussed are based on the methods of combinatorial
programming and are reliable in the respect that if carried to
completion they guarantee optimal evaluations for knapsacks
of each size.
TABLE OF CONTENTS
INTRODUCTION
I
.
II
.
UNDERLYING THEORY AND IDEAS
10
III
.
CONTROLLED INTERVAL SEARCH PROCEDURES
38
CONTROLLED PARENT-GENERATION SEARCH PROCEDURES
43
THE K-STAGE KNAPSACK PROBLEM
53
CONCLUDING REMARKS
73
Appendix
A:
One-Dimensional Example
79
Appendix
B:
Two-Dimensional Example
83
Appendix
C:
Stage k-1 of Two-Dimensional Example
86
IV,
V.
VI
.
References
1
90
Introduction
I.
In its simplest form the knapsack problertr may be
described as follows. Given m items with "sizes"
1-,,
l2,-.-lui
^^'^
associated values V2^,V2,...,v
mine the non-negative integral number
put in
a
a^^
deter-
of each item to
knapsack of size L so as to realize
a
total
value F(L), where:
F(L)
=
maximum
Z
a. v.
^
^
i=-l
Subject to:
and
m
a.l.
1=1
.Z,
a.
>^
11—< L
,
integer
all
i
The function F(L) has been termed by Gilmore and
Gomory
[8
J
the knapsack function.
Besides arising in contexts wherein an assortment
of items are to be selectively packed into some form of
knapsack (e.g., spare parts in a flyaway kit, cargo in
rail car)
a
knapsack problems also arise in contexts in
which an assortment of items of differing values are to
be realized from a knapsack of given size, as in the
cutting of stock into rolls or sheets to match customer
demands.
In some contexts only a single dimension of
the items and knapsack is relevant for the problem,
as in
a case where rolls of paper of constant diameter and
given width are to be slit into rolls of the same diameter
1.
The name originally ascribed to this broad class of
optimization problems by Dant/ig [3 ,.
.
-2and smaller widths.
be considered,
In others,
two or more dimensions must
as in the loading of cargo subject both to
weight and volume limits, the cutting of stock into an
assortment of sheets of different widths and lengths, or
the dynamic budgeting of funds over a number of time periods
In some contexts knapsack problems arise not only directly
but indirectly as well, as for example,
in the course of
solving linear programming formulations of various stock
cutting problems.
In these latter cases it may be necessary to
solve literally hundreds of knapsack problems in the course of
solving a single programming problem.
The efficiency of
algorithms for the knapsack problem is therefore a critical
determinant of the size and comprehensiveness of the models
which can be employed in practice.
Although the results of
the present study are more general in their applicability
as will be apparent,
it is a problem of this type which has
provided the motivation for the study and will provide the
context
for purposes of discussion.
Specifically, the problem of underlying interest concerns
the cutting of rectangular sheets of stock of size W x
l
into
smaller sizes w.xl., i=l,2,....m so as to fulfill requirements
for
r.
1
sheets of size w.xl., i=l,2,...,m at minimum cost.
11
Following the approach suggested by Gilmore and Gomory
[l]
the problem can be formulated as a linear programming problem
in which there is a column vector
to each permissible
(a
,a^,...,a
m
12
)
corresponding
-3-
pattern for cutting up
a
sheet of size Wxl
the number of pieces of size
a
w-j^xlj^,
shown which yields
2
a
2
3
is
pattern is
rectangles of size w.xl,,
size W3XI3, 4 of size W4XI4,
a^^
i^=l,2,...m cut from
in Figure
For example,
single sheet.
where
,
of size
vi-jXl-],
2
of
and
of
2
If m=10 then in a problem in which this
size wgxlq.
constitutes
a
permissible cutting pattern there is
a
column vector (2,0,2,4,0,0,3,0,2,0) associated with this
pattern.
one generates column vectors only as
In practice
they become needed during the solution of the linear
programming problem^ rather than explicitly generate the
complete set of such patterns before commencing to solve
the problem.
a
accomplished by solving at each iteration
This is
knapsack problem of the form:
maximum
=
F(W,L)
(^l'^2'
•
Z a
i^l
.
^^^
-'^m'
Subject to the condition that
,aj^) correspond to a
(aj^, a_
,
.
.
.
permissible pattern for
cutting up a sheet of size Wxl
where
'if-
.
is the value of the dual variable associated with
requirement r^ at the present iteration and
maximum value realizeable from
2.
2
a
F(Vn;,I-)
rectangle of^
the
is
si.ie V7xT
.
In some instances the orientation of a required rectangle
such
is irrelevant;
Wj^>:l-L with respect to the sheet Wxl
cases it is sufficient for knapsack-problem purposes to
simply include among the original requirements a rectangle
of width lj_ and length w-l having the same value Vj^
m
.
-4In the present case the cutting patterns to be deemed
permissible are those which, in the terminology of Gilmore
and Gomory [7], can be obtained through staged guillotine
cutting.
By a guillotine cut is meant one in which the cut
begins at one side of the rectangle and traverses the stock
in a straight line to the opposite side.
By k-stage guillotine
cutting is meant cutting which takes place in k distinct stages,
each stage
in which cutting takes place lengthwise alternating
with a stage in which cutting takes place widthwise.
example. Figure
two stages:
1
For
illustrates a pattern which can be cut in
in the first stage the sheet is slit lengthwise
into three strips (plus edge trim)
and in the second each of
the strips is chopped individually across its v/idth to pro-
duce the desired sizes.
rvT
:^s:
L^r^ ^::T33rxs:: 3
'
w
r^
K
H
Figure
1.
Pattern obtained with exact,
2-stage guillotine cutting.
-5In some instances a triituning stage is permitted after
the completion of the k
stage, at which time the resulting
rectangles may be trimmed on one side to yield the required
This is termed the inexact case to distinguish it from
size.
the exact case in which all rectangles resulting at the com-
pletion of the k^^ stage must comply exactly with the
dimensions of the requirements without further cutting operations.
3-stage,
Figure
2
shows a pattern which can be produced with
inexact guillotine cutting; the pattern in Figure
on the other hand,
is realizeable by exact cutting.
\\\\\\ \\\
1,
3
-6in single-stage cutting when a strip of stock of length L,
is to be chopped up into a^ pieces of length
say,
1^,
i=l,2,...,m so as to maximize the value F{L).
This is
the basic one-dimensional knapsack problem stated in the
opening paragraph for which a number of dynamic programming
algorithms [2,5,7,8,14] and combinatorial programming algorithms [6,12] are available.
In the case of 2-stage guillotine cutting an optimal
pattern can be determined by solving the required knapsack
problem in two steps.
The first requires the evaluation of
an optimal cutting policy for chopping up a strip of any
width
wj^
(and length L)
which may result from cutting at the
first stage; the second step requires the determination of
an optimal plan for cutting strips at the first stage.
That
IS;
(i)
For all widths wi_ determine ?.:
the optimal
value F(L) obtainable by fitting rectangles
Wjxlj end to end into a strip of width Wj_
and length L, where Wj <^ w^ in the inexact case,
and Wj-Wj^ in the exact case.
For each i this
is a basic one-dimensional knapsack problem.
(ii)
Determine an optimal solution to the
basic one-dimensional knapsack problem
with F(W)=max Z ^x^^, subject to the
,
constraint Z
3.
a^Wj^ <
W
For the special case in which aj_=0,l for all
ences [ 4]
[10] and [ll].
.
i^
see refer-
-7-
When there are m different requirement widths Wj^ solution
of the knapsack problem for 2-stage guillotine cutting
implies the solution ol
one-dimensional type.
(ra
+ 1) knapsack problems of the basic
In practice this can actually be
accomplished in two knapsack calculations by using the
dynamic programming algorithm in reference
ordering the widths so that w,
W2 <
<
.
.
.
algorithm evaluates successively F-|(L), F2
the process of determining F
(L)
,
since, by
[7]
<
w
(L)
where FgCL)
this
,
,
.
.
.
,
Fjj^(L)
is the
value of a knapsack of size L using only the first
1
12
,
1
...,1
,
s
in
optimal
lengths,
Alternately, the problem can be solved by a
s
combinatorial programming algorithm
[isj
.
For 3-stage cutting; similarly, an optimal pattern can be
determined by solving the required knapsack problem in three
steps.
The first step now involves the determination of an
optimal policy for cutting at the third stage any sheet which
might result as a consequence of cutting at the second stage.
This requires the evaluation of the knapsack function F(y
for all y,
< y < W and for each strip length x=l^
,x)
for
which there is a requirement w^xl^ at the present (third)
4
stage.
4.
nondominated
Upon evaluation the set of
rectangle
if
We will say that size x_x y dominates size x x y
> F (y X)
X < X
y < y and F (y X
,
,
)
,
.
-8-
sizes X X y which result form the requirements for a 2-stage
problem (stages
1
solved in two st^»
and
as
2)
which remains, this problem being
described in the preceding paragraph.
Similarly, in k-stage guillotine cutting solution of the
knapsack problem begins at stage k and proceeds step by step
through stages k-1, k-2,...2, each step employing as the
appropriate requirements sizes w-xl. the non-dominated sizes
X X y determined at the previous step,
the procedure cul-
minating with the solution of the 2-stage knapsack problem.
In practice^ in k-stage cutting the number of requirements
and/or the number of stages feasibly considered is severely limited
computationally by the time expended in evaluating the knapsack
function F(y,x),
< y
<^
W,
x=lj_,
i^l,2,...,m.
One possibility,
presently, is to employ the dynamic programming algorithm of
reference
1]^
<_
F(lj^^,
I2 <
W)
[7]
•
•
•
which, upon ordering the lengths so that
^
1_,
evaluates in the single computation for
all of the desired values of F(y,x)
bility is to employ for each
1-
.
Another possi-
independently an efficient
dynamic programming algorithm such as Gilmore and Goraory's
algorithm l.B in reference
[7j
to evaluate F(1^,W) which
evaluation also yields the values F(y,
1^)
,
< y < w.
m
The
choice between these alternatives depends on the number of
the
-9-
different sizes m being considered.
5
At present computational
feasibility appears to be questionable even for 3-stage cutting
in problems having a moderate number of requirement sizes.
An alternative to these approaches to be considered
in this report is to develop algorithms based on combinatorial
programming rather than on dynamic programming considerations
for evaluating the function F(y,x)
at each stage k >
an
2,
alternative suggested by results to date indicating that for
both the one-stage and the two-stage knapsack problem there is
an algorithm of the former type which is generally faster
than those reported of the latter type.
5
In the following
sections we consider first the basic one-dimensional problem
(1);
in Section II we present the underlying theory and
ideas
to be used and in Sections III and IV two different algorithms
for evaluating the knapsack function.
Then in Section V
the results of Section II are neneralized for the two-
dimensional knapsack problem arising
at stage k and algorithms
discussed for evaluating the knapsa>_k function.
5.
The results of a comparative study of knapsack algorithms
will be presented in a subsequent report.
o
-10-
Underlying Theory and Ideas
II.
Throughout this section the problem to be considered is the
basic one-dimensional problem stated earlier:
l,,l^,.,..l
with 'lengths"
^
1
F(L)
2
'
and values v,,v„
1'
m
,v
2
m
determine:
,
m
maximum
=
given m items
V a.
.2,
.
1=1
^'^2'
1
1
m
m
Subject
and
to:
(a
,a
,
...,a
)
ill-'-'
Z l.a. by L(A),
1.
1
i
< L
^
(1)
integer
> o,
a.
for all L, o < L < L
a
.E,
1=1
all i
To facilitate discussion a solution
max
will commonly be denoted by A, its length
and its value Z v.a. by V(A).
ill-'''
To begin we note that germane to all of the algorithms
that will be discussed in this paper is the fact that from
an optimal solution A
~o
12
=
(a,,
a^,
a
...,
m
)
for length L one can
">
o
derive immediately an optimal solution for each of itsT]_ (a.+l)
partitions or descendents
Theorem
1:
—o
Or,
to state it more precisely:
12
=
If A
to
.
(a°
a°
.
.
,
a°)
with value F(L)
(1)
12
(a'
.
a',..., a')
m
m
,
is an optimal solution
then
A'
=
is an optimal solution to
A' such that
with value F(L'), for each —
m
a! <
1
6.
a.,
1
i=l,2,
,m,
where
L'
= Z
i=i
l.a'..
1
1
This same result has been derived earlier by Shapiro and
Wagner [14] who have considered also a class of multidimensional knapsack problems and a distribution-ofeffort problem.
(1)
-2
o
oo
.
-11To prove the theorem we suppose that for some descendent A'
there
—
exists
a
solution A with length S l.a. < Z l.a! and value
1
i
> Z v.a!
£ v.a.
ill
111
-1111
O
Z v.a* > F(L)
i=l,2,...,m, with length
,
.
A'
—o
l.a*
111
for any descendent
,
-'
value
L and
is an optimal
^
A'
—
suppose that an optimal
m
the sample problem 7
Then if this optimal solution is, say, A. as shown
of Appendix A.
we have at once the optimal solutions A_
~D
in Table A.l,
,
<
—
Y
solution for length L=ll has been obtained
,A
X
Hence there can exist no solution
As an illustration of Theorem 1,
A
1
contrary to the assumption that A
,
solution for length L.
better than
.
But then there exists a solution
.
/\
A*,a*^a. -a! +a
1
,
A.
_,, A.,
10
,
11
,
and hence the values F(9)=8, F(8)^7, F(6)=6,
and A
F(5)=3, F(3)-2 and F(2)=l.
Theorem
1
starting with L
immediately suggests the evaluation of F(L) by
max
and proceeding
in an order
'^
That is, we begin by solving
recording F{L
max
)=V(A-)
'
7.
in Table A.
-HD
generated and L -Zl
values F(L')=V'
with L-L
(1)
.
1
a
.
3
L
max
> L,
to get A
1
.
> L^>...>0
2
After
all descendents —
Al of A
—
are
andV'=Zl.a'. evaluated for each, and the
1
1
1
We then determine the largest value L
recorded.
For illustration the data for a problem with four lengths is
detailed in Appendix A.
In Table A.l are listed the 44 possible feasible solutions A. arrayed in lexicographically
decreasing order, where A^ is lexicographically smaller than
A if a?-a^, aS^as,
a|_] =^r_l> ^nd a| > a^ for some r,
o < r < m.
In Table A. 2 these same solutions are re-ordered
by length, L^ = Zlj_a-)
hnd arranged in lexicographically
Table A. 3 gives the function
within
length.
decreasing order
F{L), and finally, Table A. 4 gives a non-dominated schedule
of solutions A from which all values of F(L) can be realized.
-
.
.
,
,
.
,
-12for which F(L)
problem
has not been evaluated and solve the knapsack
for L=L, getting an optimal solution A
(1)
.
As before.
/\
we record F(L)=V(A
i
the values F(L')
.
),
generate all descendents A[ of A. and record
—1
—1
Continuing in this manner we finally reach
L=0 at which time we conclude by deriving from the table F(L)
a
schedule of dominating solutions by eliminating all entries k
for which there exists an entry
j
such that L > L
'^
F(L
)
^
- F{L.)
.
and
J
.
J
Within this general framework there are a number of
alternatives and considerations which arise.
We consider first
some further possibilities for reducing the number of values L
that need be evaluated by explicitly solving knapsack problem
To begin it is noted that while F(L)
< L "~
< L
max
values of L.
(1)
is defined for all
it can change value for only
a finite number of
'
For exact evaluation of F(L)
example, to evaluate the lengths L=6 26,
,
it is sufficient,
. . .
,k6
< L
and where
largest integer such that k6 —
max
,
6
for
where k is the
can be taken to be
the value of the largest factor b which is common to each of the
12m
values 1,, 1^,...,1
be a multiple of b.
,
since all lengths Z
,
x=l
In our example,
l.a.
11
must necessarily
for instance, b=l so that in
such a case all integer values of L would be considered.
Alternately, a more course grid might be used.
For instance,
precision of F(L) +A F were acceptable for every L then any
to the value (AF)/
[max (v./l )]
would be satisfactory.
case, however, we may set F(L)=0 for all L < min(l.)
i
5
if a
up
In any
-13-
Hence in evaluating F(L)
divisible by
(exactly)
only values
and
inin(l.)
>
'l
need be evaluated, and for some of these values
6
can be evaluated directly from other solutions without neces-
F(L)
sitating the solving of a separate knapsack problem as a consequence
Fortunately, there may be additional lengths L which
of Theorem 1.
can be evaluated without explicitly solving a knapsack problem for
length L.
For example, suppose that problem
(1)
has been solved
for a length L and an optimal solution A* obtained having value
11—
m
V*(L)
.
_
If L*=.Z, l.a. < L then A* constitutes ^n optimal solution
1=1
for all L,
L*
<^
L < L.
This fact while quite apparent" can be very
significant in reducing total problem-solving effort.
To facili-
tate future reference we state it in theorem form:
Theorem
2:
If A* is an optimal knapsack solution to problem
with constraint
11 <—
T.l.a.
(1)
L having value V and length
L*= yi.a*. then A* is an optimal solution to all
11
—
problems with constraint Zl.a
1
i
<
-
L for L* < L < C.
Thus in solving the one knapsack problem with length C we obtain
the evaluation of all points L*
^ L
< L together with those
associated with each descendent of A*.
In addition to these possibilities it is also possible to
determine directly the value of F(L)
Theorem
3:
Let the
m
for some L:
items be ordered so that
...>(I)
>(i)^^
— ^m
—
? —
({),
1
where
8.
111
(t).-v./l
•
Then solution —
A*
=
Since if there were a solution yielding V(L)
could not be optimal for L.
12'
(a* a*
,
>
V*
.
(f)
,
..
0)
and
then A*
,
.
-14all of its descendents are optimal for length
L* = SI. a*, where
1
L
9
12
a* = <1-/1^ - a>,
1
2
o-*0.
To prove that A* is optimal, suppose the contrary.
exists a solution —
A with SI.
Since l^at <
11
a,
SI. a* and V= Sv 3
<
—
a.
ill
.
11
.
.
= <L*/1 > so that necessarily a
l-ij^?
> V* = Sv a*
ill
ill
^
^'^
Suppose
•
llf222
Then an upper
bound on V is V
^'^
= a*.
Suppose there
= v a* + ({)^(l^a*)
.
But
since this bound is attained by A*, there exists no S with a =a*
Suppose, then, that a,=a? is v^(a* - b)
+
C})^
(l^b + l^a*)
cannot exceed V* since
length L*
.
b > 0.
t>,
(|)
<
(t^,
Then an upper bound on V
= v^a* + v^a* +
{(J)^-*})^)
•
which
(l^^b)
Therefore A* is optimal for
.
And finally, the descendents of A* are optimal as
direct consequence of Theorem
a
1.
Referring again to the problem in Appendix A for illustration,
it is seen that all lengths are ordered so that
Therefore a* = <1 /I -a> = <6/3
of A = (2,1,0,0)
(1,0,0,0)
-
a> =1.
values F(12) - 12, F(9) =
^'-^
>
'l)^
>
(2,0,0,0),
(1,1,0,0),
We thus have immediately the optimal
8,
F(6)
= 6 and F(3)
While highly desirable to extend Theorem
= 2.
3
to other cases
which can be evaluated directly, extension appears to lead to
rather specialized results which are cumbersome to implement.
Theorem 4 represents one such extension:
9.
0>,>
All feasible descendents
must therefore be optimal:
and (0,1,0,0).
>
We will denote by <g> the largest integer contained in g.
-15-
Theorem
Let the lengths be ordered so that 6, >
4:
where
i)
^
then A^
/I11
If
12
...,
= v.
.
1
=
a*,
(a*,
1
2
descendents are optimal, where a* < <(1.
and a*
s-1,
...,
1=^2,
[<(1
rain
s
,
s-1 )
(C)
^^j
-6
)-l
'j
+ l'
>
-
j
(6
j
+1
-6
)
j+2^
-6
a* ^1. ^+...(0.
j+3 j+3
J+1 s-1
<s
i<.j
for all j,
j+2
a*
)
-
a*
1
j
s
1
+
+2
+
^
s-1 s-1
1
s
inequalities:
(d)
^j + 1
-6
)»
j
-(!)
((t).
+ 3'
b
)
s
'J+1
1
s
s
,
s fl
(a*-a )1
s
s
that of A*.
V -V*
V*
a
-
1
a*
s
1
.
S-2S-2
^
1
<{)
s
,
a )1
-1
6
s-ls-l--!
+
-
+
[dv
^
s-2
)
1
+ a*
s
s
s—>
<
-
2
,
a*
d)
v
(J
,
s-1 s-1
a*6
dO
>
—
1
sss
,
^s-1 s-1
or d(0
s
s
(i)
,-6 )a*l
^s-1 ^s
s
.
s
.
-a
(a*
s
,
)
v
s
+
s
then
s-2:
+(a* 1
s-1 s-1J
+ a*v
s
=
]
Thirdly, suppose
s-1,.
]
+ t
[dl
•
^s-1
s
for
1
,0
^
s-ls-ls-2
^+
s-2
d < a*
—
—C
s-2^,
+
•
< d
~;
holds for d-1, we have simply the one condition,
>
s-2^ —
.
^-0
^ >
s-2
s-1Jl s-2 —
Since this inequality holds for all
1
.
-
for
value exceeding^
a
For V* —
> v" we then must have,
].
a*
s-l'^s-l s-1
(a*-a
[
ss— V*. since
-
then V -V*
+ a*l
,
V
,)+({)
s-1 s-1
^s-1
+ a*
s
-O
J -(a*s
ss-1
s-1 s-1
d0
)v
s
,
- a*
.
1
1
- a
^=a* ^-d;
s-2 s-2
a*
cannot have
A
—
,
•
-
- ((a*
s
(0
<
— V*
Therefore, suppose
a.=a* for i=l
"^^
~
-U
But since V
.
s
v*
=
Suppose
11
Suppose a
.
Then an upper bound on V is V
1,2,..., s-1.
,
2.
with value V > V*
there exists a solution —
?.
=
m
and where
,
'
To prove Theorem 4 we proceed as in the previous theorem.
i
6
—
)/l.>
/I >, b
satisfies the following set of (s-2)
b
.>
and all of its
0,...,0)
s
.
i
J
a*,
.
i, 2 <
< s,
->'—-'—'
for all
1.
>
,
.
j-1
>
~
(p_
1
(t)
,
s-1
(?
^s
)
when it
a*
((t)
-
^-(t)
J
s-2s-]
...
holds, v"
When this inequality
n
J
<
V*.
.
a*
1
s
.
s
-16-
Similarly we consider in turn
< a*,
a.
i
3-4
= s-3,
in
1:
each case an additional inequality of the same form results which
must be satisfied to insure V^ < V*
The result is thus the
.
inequalities as stated.
(s-2)
As indicated,
employ.
this result would appear rather cumbersome to
On the other hand, the potential savings in total
problem-solving time accruing through its use might be quite
significant in particular problems.
While Theorem 4 is not directly
applicable to the sample problem in Appendix A since 1^ > 1~, let
us assume for illustration that
have
1
>
>
1
1
were not in the problem.
1
so that Theorem 4 with s =
3
We then
is applicable.
From
the theorem we get the inequality
-(i))bl
^4^
4 4
>(())
^^1 -()))'l
^2'
1 - ^^2
{(t)
°^
("^1
- ^2)
^^ - ^^)
^4 <
^1 ^
1^
(
-
1
(2/3
Therefore a* < min [<1 /I >,6]
and all solutions
(0,1,0,1),
(2,0,0,0),
= 3,
-
F(3)
= 2,
6
_
2
(1,1,0,1),
= 9,
(1/3)
(1/6)
min (1,6)
(0,1,0,0), and (0,0,0,1)
values F(12) - 12, F(ll)
F(5)
2/3)
1/2)
(2)
are optimal.
F(8)
=
^
^
2
f^
1/3
so that A* = (2,1,0,1)
1,
(1,1,0,0)
= 8,
F(9)
=
(6)
7,
(1,0,0,1),
(1,0,0,0)
This yields the
F(6)
= 6,
= 1.
and F(2)
As an aside, reference to Table A.
3
shows that these values
are in fact the optimal values for the illustrative problem
10.
Some simplification is possible at the expense of the range
of a* by making use of the fact that for every j, l.a*
^
^.a*,^ +
j+1 j+1
1.
.
.
.
+
1
s
a* <
s
1
.
,
j-1
.
^
^
.
-17-
consisting of all four lengths including 1^.
the fact that
This follows from
can indeed be eliminated from the problem since
1
^2
it is dominated by 1^ and
45
42
1^ + 1^ = 1^ and v^ + v^ > v^
4
5
2
1^-.
Were this to be recognized at the outset
1
.
could be eliminated,
thus making applicable Theorem 4 rather than the weaker Theorem 3.
of course, discovery of facts of this nature
On the other hand,
is
We will refer again to this point later.
time-consuming.
With these possibilities of determining directly the value of
F(L,)
for some values of L, we return now to possibilities for
inferring the optimal value of other points from bounding con-
Theorem
siderations.
2
provided the simplest and most direct
result of a bounding nature but the possibilities are much more
We summarize them in theorem form for subse-
general, of course.
quent reference:
Theorem
5:
If U(I)
is any valid upper
any valid lower bound, then B(L) < F(L)
and B(L)
is
< U(L)
If B(L)
.
bound on the value of F(L)
=
U(I,)
then F(L)
= B(L)
and any
solution A* optimal for I* < L yielding value V*
is optimal
for length
=
B(L)
I
To develop specific bounds for use with Theorem
5
we note
first two theorems -'-' employed by Gilmore and Gomory [8].
Theorem
11.
6:
Let the lengths
^
Theorems
5
1.
1
be ordered so that
and 6 in reference
[8|,
p.
1054.
1—>({)„>
^2—
(j),
.
•
.>
— ^m
(j)
,
-18and let F.
.
.
denote the maximum value of
(L)
attainable for L using only lengths 1.,
Then for any L there exists for each
J
r,
.
< r < m,
1
such that
a length L
F(L)
,,..., 1,
k
+1
1.
J
(1)
= F^^^2^
for some L
(L^)
and L
< L
+
F^^^^^^^a^)
^-
L
< L.
As they have shown the theorem is satisfied by the value
L^==l
> r,
j
d
,+1 r+2^d r+2^+...+ld,
mm'
r+1 r+1
value
U
=
1 .d
j
.
for each
j
is generally much smaller than this
in practice.
In particular. Theorem 6 holds for r =
L
lb
r r
and d., although they have
for any positive integers b
found that for any r,L
where
such that for all L > L_
U
Therefore, for L > L ,F(L)
<L/1 >v
,
Theorem
7:
F(L-1
=
there exists an
211(2,
<(L-L„)/l-> v +F
=
F(L)
,
1:
+ v
)
„x (L.)
m)
2
.
Defining f(L)=F(L)-
.
the following important result is obtained:
There exists a length L
= f (L-1
f (1)
)
such that for all L > L
that is,
;
f (L)
,
is periodic of period
That is, if we define g (y)
n
=
F(nl,+y)
1
12.
-
is determined F(L)
,
g
n
nv-
i
1
then for all n such that nl, ~
> Lfore once g(y)
1
.
1
1
(y)
for all y,
= g(y)
< y <
for all y.
1,
L
There-
becomes known for all L=nl +y > L
A somewhat lower value L* for L follows from the fact that the
value of a- for each
may be limited to a. —
< d.-l in an
J
optimal
solution so that L*=l
(d
,-l)+...+l ^d -l)+e
^ _ra
r+1 r+1
l.hf-e,€-^o=L--CS.
j=r+l J
i
i
,
,
^
1
mm
.
.
-19-
For L <
Theorem
I
for particular lengths L by
may be possible to infer values F(L)
employing Theorem
Theorem
m
5
conjunction with the following:
^nd all y,
For all n >
8:
g ^^(y)
g„n (yj
1
< y <
For all y^ and y^,
.
<
However, it
of course does not apply.
7
(y,)
g„
n
1
g
,
<
(y)
< y^ < y^ < 1^,
•
Z
The proof follows in each case upon substituting g (y)= F(nl +y)
In
F(nlJ for
g
if F{nl +y)
F(
(n+1)
(y)
- F(nl )< F(
+ y)
1
In the first case,
,
-
(n+1)- 1^+y)
F(nl +y)
>
+
v
g
n
optimal solution for length L=nl +y then
therefore an optimal solution must have
i~
< y
< y^ <
F{nl,+y)+v
>
,
=
A'
^^i' ^2
=
*
'
*
1
only if
(a +1, a
a
^m^
,
.
.
.
,
a
+ v
is
;
nl'~n2
for all
follows directly from the fact that F(nl +/
1,
)
value at least as large,
That g (yj< g (y^)
.
1
^^ ^"
'
and has value F(nl +y)
feasible for length L-(n+l)l +y
and hence F((n+l)l,+y)
if and only
g n+i, (y)
F({n+1) 1^), or, if and
But if A
.
<
~
(y)
)
<
F(nl^ + y^)
If we now denote by g
respectively on g
n
9^
(y)
,
(y)
and g
n
(y)
an upper and lower bound
it follows from the theorem that
max
(y)==
n
^^s^^'^
^^^
^
< X < y
<
s
< n
and
g^(y)=
[{g3(x)
rain
X
>^
y
s
^
n
)
,
^^y]
-
(3)
-20-
Therefore upon using Theorem
s nv +g^(y)
In
B(L)
g
n
(y)
- 9
n
(y)
for L = nl
< nv
< F(L)
in particular F(L)
so that,
5
in
+ g"(y)
= U(L)
(4)
becomes known for all L such that
Thus in the example, for instance, we conclude
•
since F(13) = F(12) = 12 that g"(l)
n
that F(nl +1)
+ y.
= F(nl
for n=0,l,2:
)
=0
for n <
2
and therefore
F(7)=F(6) =6 and F(1)=F(0)=0.
Finally it is noted that during the generation and evaluation
A
of descendents it may be possible to identify additional values L
These possibilities are
that can be evaluated directly.
extensions of a descendent, where an extension of a solution A is
defined to be any vector —
A" for which a." —
>
1
Theorem
9:
a.
1
for all i=l,2,...,ra.
Let A be an optimal solution with value F(L) and
let
A'
be any descendent of A.
for any d > o, then F(L")
If F(L'
- F(L"-1)
=
)
=F{L' -d)
...
=
F(L"-d)
for every solution A" which is both an extension of
A'
and a descendent of A.
tion for length (L'-d)
If A* is an optimal solu-
then (A*4-A"-A')
solution for length L", L"-l,
.
.
is an optimal
.J^"-d.
The proof follows from the fact that an upper bound on F(l."-d)
U(L"-d)
= F(L")
= F(L')
+ S V.
j_
U{L"-d) = F(L' -d)
f-
But the solution (A*
(a."
1
1
V. (a. "-a. •)
1
^
A"
- A')
,
-
a.'), or since F(L'
)
is
=F(l
'
- a.
'
-d)
1
wnere L"=L' + 2
1. (a.
"
)
.
attains this bound and is feasible
-21for length
(L"-d)
is an optimal
since L* < L'-d; hence (A*+A"-A')
solution for length (L"-d)
be optimal for L", L"-l,
.
.
Similarly by Theorem
.
.
it must also
2
,L"-d+l
Incorporating these considerations into the general evaluation
procedure sketched earlier there emerges a procedure for evaluating
F(L)
< L <
—
—
.
'
L
,
max'
First a value of the
something as follows.
decrement 6 is selected and F{L) set to zero for all L < min(l.)
i
Then Theorem
3
and/or Theorem 4 is employed to determine directly
one or more parent solutions A^
.
Each parent together with its
descendents are then generated and evaluated to give an initial set
of entries F(L)
Next, the length L* beyond which f
.
(L)
is periodic
A
of g (y)
and g
n
F(L)=nv
+
^ n
1
g
n
(y)
(y)
,
for nl, + y = L are determined.
1
If g
n
(y)= g
(y)
n
in which case the process simply proceeds to the
next smaller unevaluated length L.
When
for which the bounds are not equal,
a
a
length L is encountered
knapsack problem is solved
for length L to determine an optimal solution A^
F(L)
the values
For the largest unevaluated length L < L
is evaluated.
= V* for all L* < L
^
L,
.
Upon setting
the descendents A^ of A^ not previ-
ously obtained are generated and evaluated, and the results
recorded in the table.
All of the newly evaluated points F(L)
bounds g
the table are now used to update
r^n
relevant n
ancl y.
(y)
and g
n
(y)
in
for all
The process then continues with th3 next
smaller unevaluated L, eventually terminatiig when all L have been
evaluated
Referring once again to the example in Appendix
A,
we begin
then
-22by selecting S^l,
+
1
+
(6-1)
setting F(0)-F(l)-0 and computing L* =
(3-1)
1
=
3
+ 25 + 4 - 32.
Next suppose Theorem
in this problem.
parent solution A
^
(2,1,0,0)
(2-1)
Since L* > 13 it is therefore
not possible to exploit the periodicity of f(L)
7
1
3
according to Theorem
This yields
is applied.
from which is generated the descend-
ents "~1
A,,A^,A-,
and A^,, resulting in the values F(12), F(9), F(6)
31
~D ~"11
Since g_(l) / 9^(1) we next solve the knapsack problem
and F(3).
for L=13 which, according to Table A. 3, yields A
1
n
^
for the next unevaluated length, L=ll.
.
.
Solving the single
knapsack problem for L=ll we get, according to Table A, 3, A
(1,1,0,1) with
^^'\q' \i'
F(3)
and
A= (1,0,0,
A
(2)
2)
V^9
:^3o'
.
-31 ^"^ -44 ^^^^ values F(9), F(8), F(6) and F(5),
Next a knapsack problem for L=10 is solved, yielding
with L=10 and V=8; and descendents A
(1)
=
and L =11; upon generating descendents we get
with values F(8), F(6
g (l)=0=g
=12
We thus record F(13)=12 and proceed to evaluate g
and L,=12.
and g
with value V
),
F(4)
and F(2).
,
A
,
A
and
Next for L=7 we find
so that F(7)=F(6), thereby completing the evaluation.
Upon eliminating the redundant entries we then get the schedule
of non-dominated entries of Table A, 4.
As is apparent from this example a potential shortcoming of
the present procedure is the extensive redundancy that can occur.
For example,
in che above procedure F(6) was evaluated three times.
-23-
One major source of this redundancy lies in the generation of
descendents.
For each parent A
determined during problem-solving
associated with every
we desire of course the values F(L')
A, ,A„,...,A
descendent A'; but if all descendents of all parents —
1 —2
s-1
—
discovered to date have been generated and evaluated, then we now
desire only those descendents
-^
A ,A ,...,
—1-2
or A
,.
—s-1
A'
—
which are not
a
descendent of
That is, we wish now to generate
and evaluate
^
the solutions included in the set of descendents D
only
^
which are
s
J
not included in the set D
S
,
s-1
is the set of solutions
,
where set D. is the set
J
containing
13
These are the solutions
A" with
—
]^
a'.'
> a.
for some
i
for each k,k=i
tion of descendents of A
—
s
a.
a'.'
,
2,
.
.
.
,
U,
k=l
S,
k and
and all of its descendents.
A.
<
1—1
•
,
for all
s-1
.
i
but with
To restrict genera-
possibility of
to these solutions _
A" one c-r
course is to simply check the k constraints continually during the
generation process to insure they remain satisfied.
Another pos-
sibility is to first determine what shall be termed the base
solution
(s)
of which A
—
is an extension.
The desired solutions
A" are then simply those solutions which are both an extension of
one or more base solutions and a descendent of A
—
.
To facilitate discussion of base solutions let us consider a
two-variable problem for which we suppose that two solutions
13.
Although redundancy in the generation of solutions A is hereby
eliminated, there may still be duplicity in the evaluation of
particular F(L) if there exist alternate optimum solutions for
length L.
-242
A =(a' a') and A =(a
Figure
a
,
2
)
have been determined.
As shown in
these solutions define intervals on the a
3
which have been labelled Y ,Y ,Y
,
X ,X
,
and X
and a
scales
We shall use
.
these same labels to refer also to the set of integer values con-
tained in the interval.
Considering first the solution A
,
symbol © the "or" operation and by
set of solutions S
(Y
© Y
© X
)
the "and" operation;
S
and by Y.
Y^ © Y^)
Similarly for A
© X
)
and S
consists of the set:
X^ = Y^ © Y^ © X^ - Y^ © X^ © X^
That is, for any solution A = (a ,a
of A. either a > a
the set of solutions
S
set of integer values of a
the
complementary to the set Y., then S
(X
then the
consists of all solutions contained in the set
Similarly if we denote by
.
complementary to
Y
suppose we denote by the
or a > a
)
.
the set S
consists of the solutions
of the solutions
the set of descendents D
which is not a descendent
(Y
© Y
)
© X
which results for both A
.
Therefore
and A
is
u S^ consisting of the solutions:
S
1
2
((Y^eY^)
©X^)
© (Y^© (X^©X2))
-(Y^©
X^^®
X^)
^ (Y^© X^)
Hence the additional descendents to be generated for A
having
-25-
"2
Figure
2
Intervals defined for
3.
and a
and A
by solutions A
=
(a^
2
,
a^
=
^^i
'
2
)
(2)
(1)
+
4-
2
a
a.
a 3-^
1
1
X^d)
Figure 4.
X^i2)
Intervals defined for a
by solutions A,
a
2
A^
—1—2
,
and
and —3
A_
^o
^
-26-
already generated those in D
the set contained in set D
((Y^eY^)
0X^)
is the set Y 0X
but not in set D
but not in the set:
Y ©Y 0X
>
,
= Y^e((Y2©Y^)
To continue, suppose now that solution A
having values as shown in Figure
1
2
2
2
and x!"©X
©Y
(Y
(x^ex^))
is determined
1
Substituting Y-i©Y
4.
© X^.
2
for Y
for X^ we now have set S_ consisting of the solutions
3
2
1
1
(X ©X^)
)
.
In the same way set D
finding the intersection of S
((Y2©Y^)
=
in the set
i.e.
14
(x^ex^))
(Y^0
this is of course
;
Y^ffi
© (X2©X^))
((Y2©Y2)
and D
is determined by
:
(Y2©((Y2©Y3)
{X^®X^))
(X2©X^)
© ((Y2©Y^)
)
© X^
© X^
{X'^BX^))
.
The set of additional descendents to be generated is the difference between the sets 5
and 5
,
X
(Y
)
To put the procedure in a more operational form it is noted
that in each of the terms in the expression 5
of permissible values of each variable
a.
1
the range or set
> b..
1—1
is of the form a.
It is therefore always possible to uniquely represent each term
as a vector P= (p, ,p_
in D
.
.
.
For example, the three terms
,p ).
1
in
P
,
(5)
2
1
can be expressed as P=(a +1,0), P =(a +l,a +1)
and
2
=(0,a +1); thus if a solution A is not to be a descendent of
A or A
—1-2'
.
a
14.
> a
1
+ 1;
it must satisfy at least one of three options:
-
a
> a
+1
and a
> a
1
This further reduces to (Y © Y^)
o
J
of the expression is
present purposes.
+ 1;
or a
2-3
© (X^ © X_)
2
> a^ +
1
.
Or,
but this form
preferred for
(5)
-27in other words, A must be an extension of at least one of the
vectors
P
P
,
and P
The vectors
.
will be termed bases or base
P
12
beinq denoted as
solutions, the set associated with D
s
For a single solution A^ the set
m vectors
V
(a,+
1,0,0,
s
IP
—
-,
j.
consists or tne
S
k
k
(0,a^ +1,0, ...,0),..., (0,0,..., a +1).
m
...,0),
)^
For compactness, however, the values a.+l will commonly be repre-
sented in a single vector H=(a
12
k
k
k
m
a_+l,...,a +1).
+1,
Finally, each term in the expression defining the subset of
solution;, contained in set D
12
T^ © ...
T,
values of a
.
:
J
T
m
n.,
,
s-1
but not in set 5
s
is of the form
where T. represents
the set of ^
permissible
'^
,
J
(where possibly n.=p.=0).
,,...p.
J+1
J
n.
J
the set of solutions
lA"
J
Therefore
J
defined by each term can be defined by
}
two vectors _
P and C,
_ where _P is the base solution with b.=n
J
all j, and C
- is a vector with c.=p.-n., all
J
The set
i.
J
J
,
J
-
(a"}
is
thus the set (P+Aj generated by the set {Aj of descendents of C.
For example, the set of solutions (Y
© X
)
that is in set 5
but not in set D^ is completely defined by base solution
2
(a^ +1,
1
a^ +1)
and vector C
=
(a
3
-a
2
,
a
3
-a
P^
=
1
)
With this notation we may sum up the discussion with the
following theorem:
Theorem
10:
Let A^,A
and let
D^_^,
s>
,
...,A
s-1
{P_
1,
)
be solutions to a knapsack problem (1)
be the set of bases for the set
where A^
=
(0,0,...,0), P°= (0,0,...0),
28and
is the set of solutions consisting of A
S
s
descendents.
s
12
Let H =(h,, h^,...,h
s
—
of solutions S
s-1
{P
h.
s
s
= a.
1
)
represent the set
+1 for all
s-1
p.
1
—<
h.
1
s
for all i.
The set of bases
(i)
of
s-1
{£
)
{P_
]
derived from base P^
-t
= p.,
s-1
each base P
(ii)
s-1
for
)
consists of the subset
not contained in {£
p.
{P
Then:
with the m base solutions £
setting
and let
i,
1
be the subset of the base solutions
)*
which
where
,
s
s
m
and its
m
s-1
i
9^
=
]
s-1
]*
£z
,
1.
..., £.->•••£
,
J
(p,
jP-,^
'^1"=^2
,
the set
together
,
•
-P
•
and p.
,
{P
s-1
m
=h
by
)
.
,
for
}*
The solutions in set D which are not included
s
in set D^_^, are those in the sets
{p
^~"'"+A
^~^
]
where there is one such set for each base
s-1
s-1
in the set
(P
—
tions in the set
IP
P
—
)*,
s-1
r
—t
+A
and where the solus-1
—
,
are the solutions
j
which are both a descendent of A
and an extension
—
of P^
s—
,
solutions
i.e. all solutions P
A.
t
s-1
s— 1
+ hu
descendent from (A -P^
—
"S
s—1
fo^
s-1
)
.
Proof follows by representing the indicated sets in terms of the
solutions A
,
A
and simplifying,
,
...,
A
,
performing the indicated set operations
and then representing the resulting sets in terms
of the indicated vectors just as was done in the preceding dis-
cussion.
-29-
In principle the number of base solutions in the set
may become quite large since each new solution A
of which A
m additional bases for each base B
—
{P_
)
give, rise to
is an extension.
—
it may be possible to eliminate some of
In practice, however,
them through dominance and feasibility considerations:
Theorem
11:
Let
P,
^c
k
k
=
(p,
P
,
1
for the set D
s
F(L), and let
Let
of L.
7,
^'
-^
k
v
k
]^
p.
^1
.
1
^
i"
k
,
.
•
p
,
2
m
at stage
s
.
)
be any base solution
in the evaluation of
L
be the largest unevaluated value
=
'I
i_
1.
be the length of P
p.
1
J.
Then base
its value.
and
can be eliminated
P,
—
from explicit consideration at stage
k
s
and all suc-
ceeding stages when any of the following conditions
are satisfied.
(a)
l^ > L
(b) y^
(c)
(d)
< F{£^)
\
< B(i^)
y,
<
y
and
i,
>
i
for any other base P
at the present stage.
where F(L)
is the
length L, and B(L)
maximum value obtainable for a knapsack of
is a lower bound on the
maximum attainable
value.
To prove the theorem consider first the present stage, s,
(a)
follows since there can exist no feasible solution A
—
having
-30-
length L
and
(d)
which P
< L of
S
is a descendent when
(b)
(c)
,
rL
all follow from the fact that for every feasible solu-
tion A to knapsack problem
a
> L.
£
K.
for length L which includes
(1)
descendent, there exists another feasible solution
A'
£
as
with value
larger than that of A which is determined from A by simply replacing the length
utilized by
£
P_
by P
,
,
in the case of
by the appropriate feasible solutions determining F{£.)
(d)
and
,
and B(i
)
can be eliminated for stage s.
k
It remains to show that no base solution P' for stage s+i
in cases
(b)
and (c)
Thus £
.
—
derived from
P,
—
s+ '
i
need be Present in the set
{p
—
}
follows immediately from the facts that every base
identical to or an extension of
(Theorem 10)
P
,
However, this
.
is either
p'
—
that any ex-
K.
tension of a nonfeasible solution is nonfeasible, and that any
extension of
a
dominated solution is dominated.
Referring once again to the example in Appendix A, we begin
by setting F(0)=F( IK) and by applying Theorem
and descendents A
and F(3)
.
A
,
A
=
and A
and values F(12), F(9), F
(0,0,0,0), yields the base vectors
as phown in Table A. 5.
that P
,
which yields A=(2,l 0,0)
3
(
6)
From A we determine, according to Theorem 10, H={3,2,1,1)
which, given P
P
,
as befbiE
Applying Theorem 11 with L
is nonfeasible and P
is dominated so that
{P_
,P.-,
P.
=
]
13
=
P, and
we find
{P.^jP^5.
Next we solve the knapsack problem for L=13 and, as before, get
solution A
with length 12 and value F(12)
.
Since we already have
-31this solution and its descendents we proceed as before to update
bounds and to solve
problem with L^ll, getting A^- (1,1,0,1)
a
with length 11 and value F(ll)
Examination of the set {£
.
)
so that the set of new descendshows —4
A. to be an extension of —4
P
.
for all descendents A of
ents to be generated is the set {P..+A),
(A4-P4)
=
(1,1,0,0):
(0,1,0,1)
(1,0,0,1),
(1,1,0,1),
H=(2,2,l,2) and determining the intersection with P
—6—7
—5
^
2
(P
}
^
(P
P^
,
)
.
Now L
again solving the knapsack problem we get A
length 10 and value F(10)
o
,
P^^, P ^
10
=
so that F(7)
and P
,
11
nonfeasible, so that
g
10 as before,
=
and
and
(1,0,0,2) with
P
the
(0,0,0,2) with values
Forming the intersection of H=(2, 1,1,3) with
and F(4).
we get ~~3
P^
=
Since A^ is an extension of
.
descendents generated become (1,0,0,2)
P^
we get bases
Since P^ is nonfeasible and P^ and P^ are both
dominated, we have
F(10)
(0,0,0,1)
Forming the vector
with values F(ll), F(8), F(5), and F(2).
P^,P^,P and P„.
—5—6—7—8
and
{P_
=
)
F(6)
=
,
all of which are dominated or
12
(P.-,
i
•
As before we next find that
which therefore completes the
evaluation
Up to this point the motivation for and use of base solutions
has been as a means through which redundancy could be reduced in
the generation of descendents.
In addition, however,
they can also
be used more directly to sometimes make the evaluation process
for a given knapsack length
the following theorem:
I,
= L
more efficient, as suggested by
-32-
Theorem
12:
A, ,A^,
Let —1—2
be solutions to a knapsack
^
A.,..., A
...,
—
—J
problem, where A. is an optimal solution for length
L{A.)
j=l,2,...,s; and let D
,
J
s
denote the set of
descendents for ~"1
A, ,A^,..., and A
—2
~s
Let B(L) and U(L)
.
as determined according to (4)
be the bounds on F(L)
on the basis of solutions in the set D .for all
s'
—< L —<
L
max
s
Let
.
—
for the set D
(P
—
J
be the set of base solutions
s
where base solution
,
P_
has length
iq and value 7q, and let L be any unevaluated length.
Then:
(i)
which is
If A is any solution in the set D
feasible for L=L and which maximizes the value
Z V. a
i
1
.
then:
,
1
qq
B(t)s max
U(L)
(ii)
+ B(L-i
(7
q---qq
< V{A)
))
< max(7
< B(L)
If D(L)
-^.f
q
.
then there exists an optimal
knapsack solution for L = L in the
(iii)
fU(L-i
(
7
B(L -
-
q
f
)
then
= U(L)
(P
*q
q
set D
-t-
A
-q
.
is
>
an optimal solution for L = L among solutions
in the set D
value B(L
,
s
-7)
is any solution with
where A
-q
and length L(A
~ q ~<(L
q
If U (L
)
-> B(L)then
(P
f
-q
solution among^ both sets D
A
"q
s
)
^
-
£
)
.
q
is an optimal
and D
.
s
))
-33If 7. + U(L-i.)
(iv)
J
J
s
solutions
P
~<
s
for any base
)
q
then there exists an optirnal
-q
"J
/
q
and P
.
+ B(L -
y
solution with value F(L) which is not an ex-
tension of base
s
P
.
-J
To prove the theorem it is noted that if A* is an optimal solution
for lenqth
L then by the definition of the sets D
^
-^
By the derivation of
be in one or the other.
< V(A*)
B(L)
value B(L)
if in set D
If in 5
.
—
it has value
(4)
s
then by Theorems 10 and 11 it
-q
U(L-
7
)
-q
)
IP.
1
If it is
•
'*
then it has value:
+ B(L-^
y
<
<
- V(A*)
-
)
q
q
f
7
q
so that if it is an extension of any base solution in
q
[P
s
,
is an extension of at least one base solution in
s
s
regardless of the set it is in, and strictly
< U(L)
an extension of P
it must
and 5
s
~~qqq
then V{A*)
> max
as stated in part
+B(L-i
{7
)
],
and V(A*)
~
Proof of Parts (ii),
(i).
follow immediately from part
(i)
max
<
"
\y
and
(iii)
qq
+U(L-i
(
iv)
)
],
then
and the definitions of B(L)
and U(L)
U(L)
To illustrate the use of the theorem let us consider again
the example as it was solved with the last procedure.
ing Theorem
3
Upon apply-
we got F(12)=12, F(9)=8, F(6)==6 ana F(3)=2, and upon
determining the base solutions and applying Theorem 11 we got
{P
—
)
=
PJ
—3—4
(p.,,
with
334
i^
=
5,
L = 13 we find that B(13)=12,
(7
+B(11)]
=
11,
=3,
7
£^
^
2
and 7_ -
U(13)=13, B(13)=max
and U(13) =max (73+U(8)
[
1.
4
)
,
+B(8)),
[(r
(74+U(ll)
)
Turning to
]
= 13.
Hence
-34-
there may possibly exist an optimal solution for
S
(in which case it must be an extension of
in the set
'l==13
since 7-.+U(8) <B( 13)
P_
)
Therefore we solve the knapsack problem for L=13, getting solution
A
Thus we
with L =12 and F(12)=12 which we have already obtained.
turn to the evaluation of L=ll and compute B(ll)=8, U(ll)=12,
=9=U(11)
B (11)
Since U(ll)
.
(£3+^3)^(0,0,1,0)
+
(1,0,0,0)
> B(ll)
and
and both solutions
(P4+A4)
=
+
(0,0,1,0)
(1,1,0,0)
have value U(ll), both constitute optimal solutions for 'l=11,
Generating descendents of both we get
yielding F(ll)=9.
F(8)=7, F(5)=3 and F(2)=l, and determining the new set of base
solutions and applying Theorem 11 we get
and 7=2.
B(10)-8,
o
=
(0,0,0,2)
.
,
)
=
{Fg
)
U(10)=9, B(10)=8 = U(10); again since U(10)
o
—
2
with i=4
Next we consider L=10 for which it is determined that
since solution (Pq+A.)
U(10)
{P
+
(1,0,0,0)
attains value
'»
(P^+A„)
^8 constitutes an optimal solution for L=10.
~8
ing descendents we get F(4)=2.
so that F(7)=F(6)
and
> B(10)
Generat-
As before we now find that
g, (1)
which therefore completes the evaluation.
Hence by directly employing the base solutions in the evaluation
process it has been possible in the example to reduce to one the
number of knapsack problems that needed to be solved explicitly
in evaluating F(L)
Returning at this point to the problem of redundancy, we
discussed earlier how a major source of redundancy can be
=0
.
-35-
eliminated through the use of base solutions in the generation of
Redundancy still occurs,
descendents of a new parent solution.
however, whenever in solving a knapsack problem explicitly the
resulting solution is the same as or the descendent of a solution
,
,
already obtained.
This occurred in the example just considered
when solution of the knapsack problem for L=13 yielded solution A.
with length L =12 and value F{12), a result already obtained
through use of Theorem
3.
To minimize redundancy arising in this manner two alternatives
will be considered.
One is to directly constrain the search space
when seeking the value F(L)
sidered
so that no feasible solution is con-
having the value F(L) of any length L already evaluated,
with the one possible exception of
a
solution with value F(L +
6
)
.
This approach eliminates considerable redundancy but may necessitate the solution of a large number of individual knapsack problems.
This approach is discussed briefly in Section III.
The other alternative to be considered consists in constraining
the search to solutions which are not descendent from any previ-
ously evaluated parent solution.
This approach fails to eliminate
redundancy completely in so far as an alternate optimum solution
may be generated for a value F(L)
already evaluated.
requires more bookkeeping to implement.
It also
However, its use may
possibly necessitate the solution of fewer individual knapsack
problems.
This approach will be discussed in more detail in Section Qt
-36-
Before turning to these alternatives it is noted that an
additional matter requiring consideration is the amount of effort
to be expended in trying to eliminated particular lengths
the problem before evaluation of F(L) commences.
length
1.
e.
1
from
.
For in general,
can be eliminated from the problem (i.e. set a
if there exist any set of non-negative
all solutions)
1
.
=
in
integers
such that:
2
•
11—<
e.l.
/.
1.
and
S
...
J
11—>
e.v.
v.
(6)
J
As noted earlier one reason for wishing to eliminate a
dominated length
1
.
is that when
1
.
^
j-1
>
1
.
j
,
+1
1
.
,
< 1
.
>
1
.
,
and
J+1
J
its presence
can significantly
potential
limit the f
^
^
ji
use of Theorem 4.
with
1
J-1
J
In the illustrative problem,
for instance,
eliminated from the problem the number of values F(L)
determined from the theorem is increased from
5
to 9
,
thus reduc-
ing the number of individual knapsack problems to be considered
explicitly.
Another reason is that the elimination of lengths
1.
can significantly affect the value of h*, the length beyond which
the function f(y) may be assumed periodic.
for instance, elimination of
1
would reduce L* from 32 co
simplifying the evaluation process.
elimination of such lengths
1.
In the sample problem,
See Footnote #5 on page
9
thus
Still a third is that the
can materially
affect the problem-
solving time required to solve a given knapsack problem
15.
7,
(1)
for a
-37-
single length L.
However, the identification of such dominated
lengths can be time-consuming
;
therefore in practice a trade-off
must be made in diverting problem-solving time from evaluation
per se to pre-analysis of the data.
This is a subject requiring
subsequent empirical investigation.
16.
Procedurally they can be identified, for instance, through
use of the algorithm given in Section III
.
-sell I.
Controlled Interval Search Procedures
As was suggested in the previous section, one further means
of minimizing redundancy in the evaluation of F(L)
using base solutions)
is to limit search
solutions such that F(L')
(in addition to
when evaluating F(L)
to
< V < F(L + 6), where V is the value of
a potential solution for a knapsack problem with length L, L is
the largest remaining unevaluated length and
evaluated length less than L.
l'
is the largest
By restricting search to solutions
with value V in the range F(L')
< V < F(l,
+5)
it is guaranteed
that no previously generated solution is rediscovered.
If in
this limited search an optimal solution A*is discovered then F{L)=
V{A*)
for all L, L(A*)
< L < L.
In such a case all previously
ungenerated descendents are generated and evaluated, and the new
set of base solutions determined.
On the other hand should no
feasible solution with value in this range be found for length
then F(L)
= F(L
-&)=...=
F(L').
I,
In either case bounds are
then updated and all solutions which can be inferred from these
bounds determined and evaluated.
The procedure then repeats for
the largest remaining unevaluated length L until all lengths have
been evaluated.
Throughout this procedure the knapsack problem being solved
explicitly in each instance is a constrained knapsack problem of
the form:
-39
-
Maximize: V = Zv a
Ill
.
^^^
Subject to: Zl.a. —
11 < L
< V <
V''"
and
where V
1
integer all
> 0,
a.
and V
u
v"
i
represent lower and upper bounds on the acceptable
Such a problem is readily solved with
value of the solution.
any of the existing one-dimensional knapsack algorithms, but a
combinatorial programming algorithm of the type described in
references
and
[6]
[12
is especially well-suited for the problem
J
since it explicitly utilizes the bounds V
to reduce search.
and V
To facilitate subsequent discussion a complete algorithm of this
type is stated below:
Order the lengths so that
(i)
v./l.
Set G
•
1
1
Set a^
(ii)
-
v"
=
\l )
'
-
V
^2 "
\
>
0, > 0^
1—
"~
2
...
—>
m
,
where
.
=
1
.
1
1
/
*
'
"
*'
"
^"
m
2
m-1
1 .a.)
(L- Z
J
V
1
/
If V > 0,
;
/
m
d = L -
"^
.
a
,
and V
^-
v a
Z
.,11
.
.
-
V
L
x=l
solution with value exceeding the lower
bound has been discovered.
go to step
1
.,11
1=1
'
a
"^
««
\
'
<
(iii)
J
i-.!
Go to step (iv)
.
Otherwise
(v)
As a lower bound we can always use, for example, the value
17.
V = F(L')
h, where h is the largest factor common to each of
v^
for the upper, of course, V =F(I*5),
the values v
.... v
m
-^
12
,
,
;
-4
(iv)
0-
If V = G terminate problem-solving since a solution A
with value equal to the upper bound has been discovered.
If V < G set G"
V =
= G - V,
0;
save A as the best
solution discovered so far and go to step
(v)
If there is an i,
(v)
< n»-l such that a.
1
<
i
be the largest such
i
and go to step (vi)
.
let
>
j
Otherwise
there exists no more candidate solutions so that
problem-solving is complete.
(vi)
Set
112
a"
= a,
IfV-v.
al = a„,
,
+
.
....
2
(d +
Go to (viii)
a'
j-1
= a.
<Ogoto
1.)
and
,
j-i'
a
'.
= a
J
(vii)
.
-
J
Otherwise set
.
m
1
V
\
=V-v.
V'
,
J
m
+ Z v.a'. , d'
j+1 ^ ^
J+1
= d +
-
1.
J
m
l.a!
.,11
S
(vii)
Set
step
(viii)
a'.
J
Go to step
(iii)
Jr
.
/
^
.
V =V-v.a.,
=0
=d-i- l.a. and return to
d'
J
J
'
J
J
(v)
If G = V
-
the problem.
V
no feasible solution has been found to
If G < V
u
-
V
1
discovered is A with value V
then the best solution
-
G.
1
-H 1-
From these steps it is clear that the more stringent the
bounds V
1
and V
u
solving effort.
the greater the potential savings
U{L)
B(L)
=
h and V
-(-
are the bounds defined in
(4)
or,
;
=
U(L)
=
[max
and U(t)
{B(L)
,
B(L)
in the event base solutions
m
are as defined
common to the values
12
v,
,
to use the bounds
and V^ = min [U(L), U(L)] where B(L)
+ hj
)
where B(L) and
,
are being employed in the evaluation process,
V
Theorem 12, and h is the largest factor
v^,
...,
v
m
.
Use of these more stringent
bounds, however, will not necessarily guarantee
algorithm
iri
problem-
therefore, be desirable to use the more
It may,
stringent bounds, V
m
a
more efficient
toto since in the event there is no feasible solution
with value in the range V
it becomes necessary to then
and V
ascertain the feasible solution which yielded the value
a task
- h)
(V
which may or may not offset the savings in search.
As an illustration let us consider the explicit solution of
the knapsack problem with L = 13 which arose in the example of
Section III. From the application of Theorem
B(13)
= F(12)
=
12,
we therefore get V
1
and since h
-
=13.
u
For V
since there exist no values F(L)
1
3
we have the bound
for the given values of v.
we employ the value
for L > 13.
-L
-
Commencing the
13
-
we proceed in order through steps (ii)
algorithm at step
(i)
(iii)
(vii)
(v)
,
(vi)
,
,
42-
,
and then to step (viii) where
(v)
problem-solving terminates: there exists no feasible solution for
L - 13 with value 13.
While the algorithm has been found to be quite efficient
in the form as stated
18
,
it is noted in conclusion that possibly
it might be made more efficient through one or more changes.
example, according to Theorem
maximum length L
1
-,...,
r+2
1
m
r
6
For
there exists for each length
that need ever be composed of lengths
1
a
1
,
in an optimal solution for a problem
of length
L,
'^
^
y
r
,-1-1
^-i-..., la.
_a
i.e.,L„>l
— r-t-1,a r-hl
r-t-2 r-i-2
m.m
Thus it might
prove
x^
J
useful to include a test to insure that Z
i=l,
j
l.a. >Li,
,11—0
2,
1=1
...,
m
-
1,
although this represents substantial testing time.
Alternately it might perhaps prove more efficient to simply check
that a
1
does not drop below the value
("(L
^
-
L-)/l
-i-
)
.99
...
1
9/.
>*
Either test is readily incorporated in the algorithm by modifying
slightly steps (ii)
and
(vi)
Another possibility is to impose
.
upper bounds on the variables a.: as was mentioned in Sectionll,
in no optimal solution need a.
1
llll
d.l.
18.
= b.l,
exceed the value
XX
for nonnegative
integers d.
^
and b..
(d
.
1
-
1)
,
where
Thus in steps
(ii)
f
The results of a comparative study of knapsack algorithms
will be presented in a subsequent report.
\
I
-43-
and
of the algorithm
i-1
(vi)
/ (L - 7
a
1
J
,
)
/I
a.
could be set to min [(d.
-
1)
Still other possibilities derive from the
].
i
J
results of Glover
[9
J
which may profitably be used.
These all,
however, remain possibilities for future investigation.
IV. Controlled Parent-Generation Search Procedures

As suggested in Section II, another means for minimizing redundancy in the evaluation of F(L) (in addition to the use of base solutions) is to control the search process so that in solving a knapsack problem for length L the solution which results is in no instance a descendent of any solution already determined. In this section a procedure of this type will be discussed which, like that in Section III, begins with the largest unevaluated length L and proceeds stage by stage toward length L = 0, but which, unlike that in Section III, always yields a feasible solution for each knapsack problem of length L explicitly solved, although its value may not exceed the bound B(L).
To be more explicit let us consider Theorem 13, upon which the underlying search process is based:

Theorem 13: Let A_1, A_2, ..., A_j, ..., A_s be solutions to knapsack problems in which A_j is optimal for length L(A_j) among all solutions in the set D̄_{j-1}, where D_{j-1} = S_1 ∪ S_2 ∪ ... ∪ S_{j-1} and S_k is the set of solutions consisting of A_k and all of its descendents. Let B(L) denote the largest value Σ_i v_i a_i attainable among solutions in the set D_{s-1} with length not exceeding L, for all unevaluated lengths L. And let A_s be an optimal solution in the complementary set D̄_{s-1} for the largest unevaluated length L̂. Then:

    (i)   F(L) = max [B(L), V(A_s)] for all L, L(A_s) ≤ L ≤ L̂;
    (ii)  F(L) = B(L) for all unevaluated L ≤ L̂ such that B(L) ≥ V(A_s);
    (iii) F(L') = max [B(L'), V(A')] for all descendents A' of A_s, for all lengths L(A') heretofore unevaluated.
Proof of the theorem follows from the fact that since the sets D_{s-1} and D̄_{s-1} are complementary, an optimal solution for every length L must be contained in at least one of the sets. For each unevaluated length L, B(L) constitutes the maximum value attainable among solutions in D_{s-1}; hence, for any unevaluated length L*, if V(L*) is the maximum value attainable among solutions in D̄_{s-1}, then F(L*) = max [B(L*), V(L*)]. Since solution A_s is optimal among solutions in D̄_{s-1} for length L̂, it is by Theorem 2 optimal among solutions in D̄_{s-1} for all lengths L(A_s) ≤ L* ≤ L̂, and therefore F(L*) can be evaluated for each length L(A_s) ≤ L* ≤ L̂, as stated in part (i). If A_s is optimal among solutions in the set D̄_{s-1}, then, by Theorem 1, all descendents A' are also optimal among solutions in the set D̄_{s-1}, and hence F(L') can be evaluated for all lengths L(A'), as stated in part (iii). Finally, proof of part (ii) of the theorem follows directly from the fact that V(A_s) is the maximum value attainable among solutions in the set D̄_{s-1}, and therefore if a higher value is attainable it must come from a solution in D_{s-1}, which must then be optimal.
With Theorem 13 as the basis the procedure may now be stated quite simply as follows. The procedure begins with L = L_max and proceeds stage by stage with the evaluation of the largest remaining unevaluated length L̂. For each length L̂ attempts are first made to evaluate F(L̂) by direct means, by bounding considerations, by use of base solutions, etc., as in the procedures of Sections II and III. When all of these preliminary attempts fail, F(L̂) is then evaluated through use of Theorem 13 by determining a solution A_s which is optimal for length L̂ among the solutions in the set D̄_{s-1}, where A_1, A_2, ..., A_{s-1} are the parent solutions which have been determined thus far in the evaluation process. Upon finding A_s, all of its previously ungenerated descendents are generated and all of the lengths identified in Theorem 13 are evaluated. The procedure then continues on with the next largest remaining unevaluated length until all lengths have been evaluated.
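The overall control structure can be pictured with the following schematic loop. This is only a sketch of the procedure just described, assuming a bundle of hypothetical helper routines (try_preliminary_evaluation for the direct, bounding and base-solution attempts; solve_constrained for problem (8) below; bound_B, value, and lengths_settled_by_theorem_13 for the quantities of Theorem 13); it is not the author's code.

```python
def evaluate_knapsack_function(L_max, items, helpers):
    """Schematic parent-generation evaluation of F(L) for L = L_max down to 0."""
    F = {}                         # evaluated values F(L)
    parents = []                   # parent solutions A_1, ..., A_{s-1}
    unevaluated = set(range(L_max + 1))
    while unevaluated:
        L_hat = max(unevaluated)
        value = helpers.try_preliminary_evaluation(L_hat, F, parents)
        if value is None:
            # Solve the constrained problem (8): best solution of length <= L_hat
            # that is not a descendent of any parent found so far.
            A_s = helpers.solve_constrained(L_hat, items, parents)
            parents.append(A_s)
            # Part (i) of Theorem 13 settles L_hat itself.
            value = max(helpers.bound_B(L_hat, parents), helpers.value(A_s))
            # Parts (ii) and (iii) may settle further lengths via the descendents of A_s.
            for L, v in helpers.lengths_settled_by_theorem_13(A_s, F, parents):
                F[L] = v
                unevaluated.discard(L)
        F[L_hat] = value
        unevaluated.discard(L_hat)
    return F
```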
Within this procedure there remains the task of solving the constrained knapsack problems which result when the preliminary attempts fail in the evaluation of F(L̂). In each such case the problem is of the form:

    Maximize     V = Σ_i v_i a_i

    Subject to:  Σ_i l_i a_i ≤ L̂
                 a_i > ā_i^k for some i,  all k = 1, 2, ..., s-1          (8)
                 a_1 ≥ W(L̂)
    and          0 ≤ a_i ≤ d_i - 1,  a_i integer,  all i,

where ā_i^k denotes the ith component of parent solution A_k, d_i - 1 is the upper bound on a_i, and W(L̂) = ⟨(L̂ - L_1)/l_1⟩⁺ is the lower bound on a_1, as discussed in Section III.
To solve the problem we can employ the combinatorial programming algorithm discussed in the preceding section. With the bounds V_u and V_l taken to be F(L̂ + δ) and 0, respectively, the only mandatory modification for solving the present problem (8) is to ascertain the feasibility of a solution A before retaining it as the best discovered so far. Thus in step (ii) of the algorithm V is initially set to 0. Step (iii) is then divided into (iii.a) and (iii.b), where (iii.b) becomes the present step (iii) and step (iii.a) becomes:

    (iii.a) If a_i > ā_i^k for some i, all k = 1, 2, ..., s-1, go to step (iii.b). Otherwise go to step (v).

With these changes the algorithm yields an optimal solution to (8) whenever a feasible solution exists.
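The feasibility check of step (iii.a), namely the requirement that the candidate solution not be a descendent of any parent found so far, amounts to the following test (a small sketch; the names are illustrative only):

```python
def violates_parent_constraints(a, parents):
    """Return True if candidate a is a descendent of some parent A_k, i.e. if
    for some k there is NO component with a_i > a_i^k (constraint of (8) violated)."""
    return any(all(ai <= pk_i for ai, pk_i in zip(a, parent)) for parent in parents)

# A candidate is retained as the best so far only if it is feasible:
# if not violates_parent_constraints(candidate, parents): ...
```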
For the present problem, however, it may be possible to improve the efficiency of the algorithm through the incorporation of one or more additional tests within the procedure. For example, consider the requirement that a_i > ā_i^k for some i. When this requirement is satisfied by, say, a_i = ā_i^k + 1, then an upper bound on the value of V becomes v_i(ā_i^k + 1) + φ_1(L̂ - l_i(ā_i^k + 1)), where φ_i = v_i/l_i. Choosing among all possible ways of satisfying a_i > ā_i^k the one which yields the highest upper bound, we have

    V^k = max_i { v_i(ā_i^k + 1) + φ_1(L̂ - l_i(ā_i^k + 1)) } ,

and V^k thus constitutes an upper bound on V given that constraint a_i > ā_i^k has yet to be satisfied for some i.
More generally, if at a given stage of enumeration the values a'_1, ..., a'_j have been specified and constraint a_i > ā_i^k still remains unsatisfied after the specification of a'_1, ..., a'_j, an upper bound on V at the completion of stage j is

    V_j^k = Σ_{r=1}^{j} v_r a'_r + φ_{j+1}(L̂ - Σ_{r=1}^{j} l_r a'_r) - min_{i>j} {(φ_{j+1} - φ_i)(l_i)(ā_i^k + 1)} .        (9)

If we denote by Q_j the set of indices k for the constraints a_i > ā_i^k not yet satisfied after the specification of a'_1, ..., a'_j, the bound when considering all constraints is

    V_j = Σ_{r=1}^{j} v_r a'_r + φ_{j+1}(L̂ - Σ_{r=1}^{j} l_r a'_r) - max_{k∈Q_j} { min_{i>j} [(φ_{j+1} - φ_i)(l_i)(ā_i^k + 1)] } .    (10)

For the discovery of a better feasible solution, then, it is necessary that V_j at stage j exceed the value of the best feasible solution discovered so far.
To put (10) in a more convenient form for implementation, let

    w_{ik} = (ā_i^k + 1) l_i ,
    W_{jk} = min { w_{ik} : i ≥ j, d_i > ā_i^k } ,   and
    E_{jk} = min { (φ_j - φ_i) w_{ik} : i ≥ j, d_i > ā_i^k } .

Note that relative to (9) and (10) the range of i has been changed from i > j to i ≥ j and restricted to those i for which d_i > ā_i^k. This is done because if constraint a_i > ā_i^k can be satisfied with a feasible choice of a_j itself, then (φ_j - φ_j) w_{jk} = 0 for i = j, and hence the computation of E_{jk} is of no value.
Upon substitution we then have

    V_j = Σ_{r=1}^{j} v_r a'_r + φ_{j+1}(L̂ - Σ_{r=1}^{j} l_r a'_r) - max_{k∈Q_j} { E_{j+1,k} } .        (11)

The more stringent bound (11) can then replace the bound Σ_{r=1}^{j} v_r a'_r + φ_{j+1}(L̂ - Σ_{r=1}^{j} l_r a'_r) employed in step (vi) of the algorithm.
Besides this possibility it may be possible to improve efficiency through use of the following test for feasibility:

    If  max_{k∈Q_j} { W_{j+1,k} } > (L̂ - Σ_{r=1}^{j} l_r a'_r),  no feasible solution with a_1 = a'_1, a_2 = a'_2, ..., a_j = a'_j exists.        (12)

This follows directly from the definition of W_{jk}, the smallest length required to satisfy the kth constraint if it has not been satisfied with a'_i, i ≤ j.
It is noted that in all cases it is sufficient to compute W_{jk} and E_{jk} for all j just once. However, as L̂ decreases with successive knapsack problems it may be desirable to re-evaluate these values, since with a smaller L̂ the effective value of d_i may be decreased for one or more i, thus increasing some of the values W_{jk} and E_{jk}.
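A compact sketch of the quantities w_{ik}, W_{jk}, E_{jk} and of the pruning tests (11) and (12) might look as follows. The indexing (0-based stages, a caps list as in the earlier sketch, and an `unsatisfied` set playing the role of Q_j) and the test for d_i > ā_i^k are assumptions made for illustration, not the paper's own code.

```python
def precompute_tables(lengths, values, caps, parents):
    """w[i][k], W[j][k], E[j][k] for items i, stages j (0-based) and parents k."""
    phi = [v / l for v, l in zip(values, lengths)]          # value densities, nonincreasing
    m, s = len(lengths), len(parents)
    w = [[(parents[k][i] + 1) * lengths[i] for k in range(s)] for i in range(m)]
    W = [[min((w[i][k] for i in range(j, m)
               if caps[i] is None or caps[i] >= parents[k][i] + 1),
              default=float("inf")) for k in range(s)] for j in range(m)]
    E = [[min(((phi[j] - phi[i]) * w[i][k] for i in range(j, m)
               if caps[i] is None or caps[i] >= parents[k][i] + 1),
              default=float("inf")) for k in range(s)] for j in range(m)]
    return W, E, phi

def prune(j, prefix, L_hat, lengths, values, phi, W, E, unsatisfied, best_value):
    """Apply tests (12) and (11) after a'_1, ..., a'_j (prefix, 0-based) are fixed."""
    if j >= len(lengths):
        return False                                         # nothing left to fix
    used_len = sum(l * a for l, a in zip(lengths[:j], prefix))
    used_val = sum(v * a for v, a in zip(values[:j], prefix))
    slack = L_hat - used_len
    if unsatisfied and max(W[j][k] for k in unsatisfied) > slack:
        return True                                          # test (12): infeasible
    bound = used_val + phi[j] * slack
    if unsatisfied:
        bound -= max(E[j][k] for k in unsatisfied)
    return bound <= best_value                               # test (11): cannot improve
```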
As an illustration, let us again consider the example in Appendix A and let us assume that, as a consequence of Theorem 3, the parent A_1 = (2, 1, 0, 0) and all of its feasible descendents A' have been determined. Assume also that the feasible, nondominated set of base solutions has been determined, {P^s} = {P_1, P_2}. The largest remaining unevaluated length L̂ is therefore 13, with its upper bound U(13). With the upper bounds d_i and the lower bound W(13) on a_1 determined earlier, the constraint for k = 1 cannot be satisfied with either a_1 or a_2, and the corresponding values w_{ik}, W_{jk} and E_{jk} then follow.

Commencing the enumeration, the first assignments which satisfy the constraint imposed by A_1, namely a'_1 = 1, a'_2 = 1, a'_3 = 0 and a'_4 = ⟨(13 - 9)/2⟩ = 2, give a solution value of 10, and we save A = (1, 1, 0, 2) as the best discovered so far. Backtracking, the remaining assignments, for example a'_2 = 0, a'_3 = 0 and a'_4 = ⟨(13 - 6)/2⟩ = 3 with value V = 9, are excluded since the bounds (10) and (11) no longer exceed 10. Hence the solution A = (1, 1, 0, 2) with value V = 10 and length L = 13 is optimal for this constrained knapsack problem.
Referring to the set {P^s} = {P_1, P_2}, we find that A is an extension of one of these bases. We thus generate the new, previously ungenerated descendents of A and evaluate each, getting V(11) = 9, V(10) = 8, V(8) = 7, V(7) = 4, V(5) = 3, V(4) = 2 and V(2) = 1. With these values it follows that F(13) = 12, F(11) = 9, F(10) = 8, F(8) = 7, F(7) = 6, F(5) = 3, F(4) = 2 and F(2) = 1, and the evaluation is complete.
In conclusion it is noted that another possibility for solving the constrained knapsack problem defined in (8) is to find for each nondominated base solution P_q an optimal extension (P_q + A*) such that L(P_q + A*) ≤ L̂, and then select that extension for which V(P_q + A*) is maximum. In each such case an optimal extension can be readily determined, for instance, by first applying the algorithm of Section III to the knapsack problem with length L = L̂ - L(P_q). The value F(L̂ - L(P_q)) thus becomes determined, together with all unevaluated descendents of A*, in addition to those values which eventually become determined through use of Theorem 13. The potential shortcoming of this approach, however, is that it may entail the explicit solution of knapsack problems whose values would otherwise become evaluated more efficiently as descendents of a parent solution.
V. The k-Stage Knapsack Problem

As was discussed in the introduction, when k ≤ 2 the knapsack problem for k-stage guillotine cutting is solved by solving one or more one-dimensional knapsack problems. For k > 2, however, it is necessary to first evaluate (k - 2) two-dimensional knapsack functions F(W, L), the results of which then define a 2-stage problem. In this section we consider the evaluation of F(W, L) for the k-stage problem when k > 2.

To begin let us consider explicitly the knapsack problem at stage k. Let the stock size be W_max x L_max and let the requirement sizes be w_i x l_i with values v_i, i = 1, 2, ..., m. For specificity let us suppose that cutting at the kth stage is performed lengthwise, so that we need consider strips of widths w̄_1 > w̄_2 > ... > w̄_t, t ≤ m, where each w̄_j equals w_i for some i. And finally, suppose that inexact cutting is permissible, so that in a strip of width w̄ all sizes w_i x l_i with w_i ≤ w̄ are allowable.
With these assumptions the knapsack problem at stage k is defined as follows: given m items with sizes w_i x l_i and values v_i, i = 1, 2, ..., m, determine

    F(W, L) = maximum  Σ_{i: w_i ≤ W} v_i a_i

    Subject to:  Σ_i l_i a_i ≤ L                                          (1.a)
    and          a_i ≥ 0, integer, all i,

for all L, 0 ≤ L ≤ L_max, and for each W = w̄_j, j = 1, 2, ..., t.
To solve problem (1.a) we can proceed in a manner similar to that employed in earlier sections for the one-dimensional problem. We begin with the width w̄_1 and, starting with L = L_max, evaluate in order of decreasing length L the values F(w̄_1, L), 0 ≤ L ≤ L_max. We then proceed in turn to widths w̄_2, w̄_3, ..., w̄_t, in each case evaluating F(w̄_j, L) for all L in order of decreasing length. As in the one-dimensional problem the attempt is made throughout problem-solving to determine the value F(W,L) of an unevaluated size W x L first by direct means, by bounding considerations, and/or by the use of base solutions. When such attempts fail a knapsack problem akin either to that treated in Section III or to that in Section IV is solved and the value of F(W,L) thereby determined. Should a parent solution A be determined in the process, all previously ungenerated descendents A' are determined and evaluated, and all other values F(W,L) which can be deduced from A and its descendents are determined. The procedure then continues on with the largest unevaluated length for the largest unevaluated strip width.

Within this general framework some of the results of the earlier sections can be applied without change, while others require modification or extension. Before further considering procedures for solving the problem, therefore, we first summarize the modifications required. These are stated without proof since they follow immediately from the original proofs with appropriate modification.
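The outer control flow just described can be pictured with the following schematic loop (strip widths in decreasing order, lengths in decreasing order within each width). The helper names are placeholders for the direct, bounding, and explicit-solution machinery of Sections II-IV and are not part of the paper itself.

```python
def evaluate_two_dimensional(widths, L_max, helpers):
    """Schematic evaluation order for F(W, L) in problem (1.a)."""
    F = {}
    for w in sorted(widths, reverse=True):          # w_1 > w_2 > ... > w_t
        for L in range(L_max, -1, -1):              # decreasing lengths
            if (w, L) in F:
                continue                            # already deduced from a parent
            value = helpers.try_direct_or_bounds(w, L, F)
            if value is None:
                parent = helpers.solve_explicitly(w, L, F)   # Section III or IV
                for (W_, L_), v in helpers.deduce_from(parent, F):
                    F[(W_, L_)] = v                 # parent and its descendents
                value = F.get((w, L), helpers.value(parent))
            F[(w, L)] = value
    return F
```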
Throughout this section we will denote by L(A) = Σ_i a_i l_i the length of a solution A = (a_1, a_2, ..., a_m), by V(A) = Σ_i a_i v_i its value, and by W(A) = max {w_i : a_i > 0} its width. Similarly we will denote by λ_k = Σ_i p_i^k l_i the length, by π_k = Σ_i p_i^k v_i the value, and by ω_k = max {w_i : p_i^k > 0} the width of a base solution P_k = (p_1^k, p_2^k, ..., p_m^k). We will assume that evaluation of F(W,L) proceeds from strip to strip in the order w̄_1 > w̄_2 > ... > w̄_t and, for a given strip, in order of decreasing length.
Turning first to equations used in earlier sections, the following modifications are required for the present problem:

    g_s(w̄_j, y) = max [ g_s(w̄_j, x) ],   0 ≤ x ≤ y,                                   (2.a)

    ḡ_s(w̄_j, y) = min [ ḡ_s(w̄_j, x), φ_1 y ],   x ≥ y,                                (3.a)

    B(w̄_j, L) = max [ (n v_1 + g_n(w̄_j, y)), B(w̄_{j-1}, L) ]
              ≤ F(w̄_j, L)
              ≤ min [ (n v_1 + ḡ_n(w̄_j, y)), U(w̄_{j-1}, L) ] = U(w̄_j, L),             (4.a)

where L = n l_1 + y and where w_1 x l_1 and v_1 refer to the size with w_1 ≤ w̄_j for which φ_1 = v_1/l_1 is maximum.

For equations (6) and (7) two different applications can be distinguished for the present problem. In one, if there exists a set of nonnegative integers {e_k} satisfying (6) in which e_k = 0 for all k such that w_k > w_i, then size w_i x l_i can be eliminated from consideration throughout the evaluation of F(W,L) for all W and L. In the other, if there exists a set {e_k} satisfying (6) in which e_k = 0 for all k such that w_k > w̄_j, where w_i ≤ w̄_j, then size w_i x l_i can be eliminated throughout the evaluation of F(w̄_j, L) for all L.
Turning to the theorems of the earlier sections, the relevant modifications for the present problem are as follows:

Theorem 1.a: If A is an optimal solution to problem (1.a) with knapsack size Ŵ x L̂, then A' is an optimal solution for all knapsacks of size W x L for which W(A') ≤ W ≤ Ŵ and L = L(A'), for every descendent A' of A.

Theorem 2.a: If A is an optimal solution to problem (1.a) with knapsack size Ŵ x L̂, then A is an optimal solution for all knapsacks of size W x L for which W(A) ≤ W ≤ Ŵ and L(A) ≤ L ≤ L̂.

Theorem 3.a: Let all sizes w_i x l_i with w_i ≤ ŵ be ordered so that φ_1 ≥ φ_2 ≥ ... ≥ φ_m, where φ_i = v_i/l_i. Then the solution A* = (a_1*, a_2*, ..., a_m*), where a_1* = ⟨L̂/l_1⟩ and a_i* = ⟨(L̂ - Σ_{r=1}^{i-1} l_r a_r*)/l_i⟩, together with all of its descendents A', is optimal for knapsacks of size W x L with W(A') ≤ W ≤ ŵ and L = L(A').

Theorem 4.a: See the modifications in Theorem 3.a.
Theorem 5:  No modification required.

Theorem 6:  Applies independently to the problem defined for each width w̄_j, j = 1, 2, ..., t.

Theorem 7:  Applies independently to the problem defined for each width w̄_j, j = 1, 2, ..., t.

Theorem 8:  Applies independently to the problem defined for each width w̄_j, j = 1, 2, ..., t.

Theorem 9:  Applies independently to the problem defined for each width w̄_j, j = 1, 2, ..., t.

Theorem 10: No modification required.
Theorem 11.a: Let P_k be any base solution for the set D_s in the evaluation of F(W,L), and let λ_k, ω_k and π_k be its length, width and value. Let ŵ be the largest width whose evaluation is incomplete, and let L̂ be the largest unevaluated length for this width. Then:

(i) Base P_k can be eliminated from consideration in all subsequent evaluation of F(W,L) when any of the following conditions is satisfied:
    (a) ω_k > ŵ;
    (b) ω_k = ŵ and λ_k > L̂;
    (c) ω_k = ŵ and π_k ≤ F(ŵ, λ_k);
    (d) ω_k = ŵ and π_k < B(ŵ, λ_k);
    (e) λ_k ≥ λ_j, ω_k ≥ ω_j and π_k ≤ π_j for any other base solution P_j in the set D_s.

(ii) Base P_k can be eliminated from consideration throughout the evaluation of F(ŵ,L), L ≤ L̂, when ω_k < ŵ and λ_k > L̂.
Theorem 12.a: Let A_1, A_2, ..., A_j, ..., A_s be solutions to knapsack problems (1.a) where A_j is optimal for a knapsack of size W(A_j) x L(A_j), and let D_s denote the set of descendents of A_1, A_2, ..., and A_s. Let B(w,L) and U(w,L) be bounds on F(w,L) as determined according to (4.a) on the basis of all solutions in the set D_s, and let {P_k^s} be the subset of base solutions which are feasible and nondominated for width w and length L. Then (i), (ii), (iii) and (iv) are the same as in Theorem 12, with the one-dimensional bounds replaced by the corresponding two-dimensional bounds, e.g. B(L) by B(w,L).
Theorem 13.a: Let A_1, A_2, ..., A_j, ..., A_s be solutions to knapsack problems (1.a) in which A_j is optimal among solutions in the set D̄_{j-1}, where D_{j-1} = S_1 ∪ S_2 ∪ ... ∪ S_{j-1} and S_k is the set of solutions consisting of A_k and its descendents. Let B(Ŵ,L) denote the largest value Σ_i v_i a_i attainable among solutions in the set D_{s-1} with length not exceeding L and width not exceeding Ŵ, and let A_s be an optimal solution in D̄_{s-1} for the largest unevaluated length L̂ of width ŵ. Then:

    (i)   F(W,L) = max [B(W,L), V(A_s)] for all sizes W x L with W(A_s) ≤ W ≤ ŵ and L(A_s) ≤ L ≤ L̂;
    (ii)  F(W,L) = B(W,L) for all sizes W x L, W ≤ ŵ and L ≤ L̂, such that B(W,L) ≥ V(A_s);
    (iii) F(W,L') = max [B(W,L'), V(A')] for all descendents A' of A_s, for all widths W(A') ≤ W ≤ ŵ.
Returning at this point to the discussion of the overall problem-solving procedure for the two-dimensional problem, there are numerous ways of synthesizing the results into an algorithm, as was the case in the one-dimensional problem. As one possibility, Figure 5 shows the flow chart of an algorithm which employs the procedure of Section III for coping with each one-dimensional problem requiring explicit consideration. For illustration, we use the algorithm to solve the sample problem given in Appendix B.

Beginning at block 1 in Figure 5 it is determined that w̄ = 6 and that {P^s | P̄^s} = {P_1 | ∅},^19 and, upon identifying the corresponding set of sizes in block 2, that δ = h = 1, where P_1 = (0, 0, 0, 0). Since min (l_i) = 2, F(6,1) = 0.

19. In the partitioned representation {P^s | P̄^s} of the set of bases, the subset P^s denotes those bases that must be considered for the present width w̄ and P̄^s those which can be eliminated during consideration of width w̄ but must be reconsidered for widths smaller than w̄. Here the set P̄^1 is the null set.
Proceeding, it is determined in block 9 that the variables are presently in the desired order and, upon application of Theorem 3.a, that A = (2, 1, 0, 0), which is an extension of P_1. Generating descendents in block 10 there result the solutions shown in Table B.1, with the values, lengths and widths indicated there. Employing the fact of Theorem 1.a that each descendent A' is optimal for all widths W(A') ≤ W ≤ w̄ for length L(A'), there result the values F(6,3), F(5,3), F(6,6), F(5,6), F(3,6), F(6,9), F(5,9), F(6,12), F(5,12) and F(3,12). These values are shown in Table B.2, where the 2 in the upper left corner of the cells signifies that the value is in the second group of values F(W,L) determined. The entry in Table B.3 corresponding to each F(W,L) identifies a solution in Table B.1 yielding that value. Resuming with the algorithm, formation of the vector H = (3, 2, 1, 1) completes the processing in block 10.

Proceeding to blocks 14 and 15, L̂ = 13, for which it is determined that B(6,13) = 12 and U(6,13) = 13. Since the bounds are unequal the algorithm proceeds to block 18, at which point the partitioned set of base solutions {P^{s+1} | P̄^{s+1}} is formed, the base solutions being as shown in Table B.4. Applying Theorem 11.a in block 20 it is found that one of these bases can be eliminated completely from the evaluation of F(W,L) and that another can be eliminated at least from the evaluation with w̄ = 6.
[Figure 5. Flow chart of the algorithm for the two-dimensional problem, employing the procedure of Section III for each one-dimensional knapsack problem requiring explicit solution. Its principal blocks: (1) order the widths w̄_1 > w̄_2 > ... > w̄_t, initialize the partitioned base set {P^1 | P̄^1} and set w̄ = w̄_1; (2) identify the set of sizes w_i x l_i for which w_i ≤ w̄; (3)-(4) determine the largest common factors δ and h of the corresponding lengths and values and set F(w̄,L) = 0 for all 0 ≤ L < min (l_i); (5)-(13) work through the lengths of the current width, recording solutions and moving to the next width when evaluation for the current width is complete; (14)-(17) determine the largest unevaluated length L̂, evaluate B(w̄,L̂) and U(w̄,L̂) by (4.a), and set F(w̄,L̂) = B(w̄,L̂) when the two bounds agree; (18)-(21) form the intersection of H with the base set (Theorem 10) and reduce the set of bases (Theorem 11.a); (22)-(25) evaluate the bounds B̄(w̄,L̂) and Ū(w̄,L̂) from the base solutions (Theorem 12.a) and test whether some base has an extension attaining U(w̄,L̂), in which case that extension is taken as a parent solution; (26)-(29) otherwise set V_l = max [B(L̂), B̄(L̂) + h] and V_u = min [U(L̂), Ū(L̂)], solve the residual knapsack problem by the procedure of Section III, record the resulting solutions, and set F(w̄,L) = B(w̄,L) for the lengths for which no feasible improvement exists.]
Therefore {P^{s+1} | P̄^{s+1}} reduces accordingly, and since the subset {P^{s+1}} is not empty the bounds B̄(6,13) = 11 and Ū(6,13) = 13 are evaluated, from which it follows that Ū(6,13) > B̄(6,13). The algorithm therefore proceeds to block 24 and then to block 26. At this point the algorithm of Section III is applied with L̂ = 13 and V_l = V_u = 13. The outcome is that no feasible solution exists for L = 13 with value 13, and hence F(6,13) = F(6,12) = 12.

Returning to block 14 we now have L̂ = 11, for which it is determined that B(6,11) = 8 and U(6,11) = 12. Since the intersection of H and {P^s | P̄^s} has been determined and no further reduction is possible, the algorithm proceeds to block 22 and evaluates B̄(6,11) = Ū(6,11) = 9. We next go to block 24, where it is learned from Theorem 12.a that V(P_3 + A_11) = V(P_4 + A_5) = Ū(6,11), so that both extensions (P_3 + A_11) = A_7 and (P_4 + A_5) = A_4 are optimal for F(6,11); the corresponding set of bases thus becomes {P_3, P_4}. Generating the descendents of both in block 10 and evaluating the resulting solutions, there result the eleven values of F(W,L) signified by a 4 in Table B.2. Recording these solutions and forming the vectors H = (2, 2, 1, 2) and H = (2, 1, 2, 1), the algorithm proceeds to block 14.
With L̂ = 10 the bounds B(6,10) = 8 and U(6,10) = 9 are evaluated and the algorithm proceeds to block 18. Determining the intersections of H = (2, 2, 1, 2) and H = (2, 1, 2, 1) with the current base sets, reducing the result, and then evaluating B̄(6,10) = Ū(6,10) = 8 = B(6,10), we proceed to block 19 and set F(6,10) = F(6,9). Returning to block 14 and resuming with L̂ = 7, the bounds B(6,7) = U(6,7) = 6 are determined, with the result F(6,7) = F(6,6). Again the algorithm returns to block 14. This time L̂ = 4, for which B(6,4) = 2 = U(6,4), so that F(6,4) = F(6,3). At this point there remains no unevaluated length for width 6, so we proceed to block 12, set w̄ = 5, and continue to block 2. At block 2 the corresponding set of sizes is identified and the values δ = h = 1 and F(5,1) = 0 determined.
With L̂ = 13 it is found in block 7 that the set {P^s} can be reduced to a single base; forming solution A in block 9, it is found not to be an extension of this base, so the algorithm proceeds to block 15, where it is determined that B(5,13) = U(5,13) = 12. Setting F(5,13) = F(5,12), it is then found, in turn, that B(5,10) = U(5,10), B(5,7) = U(5,7) and B(5,4) = U(5,4), after each of which the algorithm sets F(5,L) = B(5,L), completing the evaluation for w̄ = 5.
Continuing from block 12 with w̄ = 3, the algorithm identifies the corresponding set of sizes, determines the values δ = 2, h = 1 and L̂ = 10, and ascertains that no reduction in {P^s} is possible. Ordering the variables in block 9 and applying Theorem 3.a, the result is the solution A = (2, 0, 0, 2), which is an extension of one of the bases. Proceeding to block 10 the descendents of A are generated and the evaluations F(3,10), F(3,4) and F(2,4) obtained, as shown in Table B.2. Forming H = (3, 1, 1, 3) the algorithm returns to block 14.
Table B.2.
At this point evaluation for w=3 is complete and the algorithm
sets w=2,
values
and
identifies the set of sizes (l.), and determines the
4
5 = 2,
(P
—
)
h=l and L=^12.
Forming the intersection of H=(3,l,l,3)
there results the set
upon reduction
iP.-,
!
^^-
(P,
,P,
P,
,P,
—13^ ~14 —15^ ~16^
.
,
I
0')
which becomes
'
The procedure now determines directly
f-
the solution A=(0,0,0,6) which is found to be an extension of
P ^
—16
,
Generating and evaluating descendents there results the
values as shown in Table B.2.
Forming H= (1,1, 1,7) the algorithm
goes to block 14 and, upon finding all lengths evaluated, continues
to blocks 12 and 13 where it terminates.
An alternative to the algorithm in Figure 5 results by substituting the search procedure of Section IV for that of Section III, thereby replacing the evaluation scheme based on the controlled search of ranges of the value F(W,L) with a scheme based on the controlled generation of parent solutions. This is accomplished with the following modifications in Figure 5:

Block 10: Evaluate solutions on the basis of Theorem 13.a as well as on that of Theorems 1.a and 2.a.

Block 26: Replace the content of the block by simply, "Solve knapsack problem (Section IV)".

Block 29: Replace the content of the block by simply, "Set F(w̄,L) = B(w̄,L) for all unevaluated lengths L for width w̄. Record all solutions."
With this algorithm problem-solving proceeds quite differently for the example in Appendix B. As can be seen from the numbers in the upper right corner of the cells in Table B.2, the evaluation of F(W,L) is now obtained in five groups. In summary, these occur as follows. For w̄ = 6, min (l_i) = 2, so that F(6,1) = 0. Application of Theorem 3.a then yields the parent solution A = (2, 1, 0, 0), from which are generated the descendents shown in Table B.1, whose evaluation results in the entries of group 2 in Table B.2. At this point L̂ = 13 and the reduced set of base solutions is {P_3, P_4} (see Table B.4). In block 26, solution of the knapsack problem by the procedure of Section IV results in a parent solution which, being an extension of one of these bases, yields the further descendents listed in Table B.1. Evaluating these solutions in block 10 there result the entries of group 3 in Table B.2, which completes the evaluation for w̄ = 6. Proceeding to w̄ = 5 it is found that min (l_i) = 2, so that F(5,1) = 0. With the evaluation of w̄ = 5 complete the algorithm proceeds to w̄ = 3, which is also complete, and hence to w̄ = 2, for which it is determined that L̂ = 12. Forming the intersection of H with {P^s} there results the set {P_3, P_5, P_6, P_7, P_16 | P_2}, which reduces to a set containing P_16. Finally, determining the parent solution A = (0, 0, 0, 6) and generating its descendents, there result the entries in group 5, which completes the evaluation.
With both algorithms discussed for the two-dimensional problem, a more efficient procedure may result through use of one or more dominance tests based on (6). For example, in block 2 of Figure 5 a test can be applied to each size (w_i x l_i) to try to eliminate it from the evaluation of F(W,L) completely. Then for each width w̄_j tests can be applied to the sizes in the identified set {w_i x l_i} to try to eliminate one or more of them during the evaluation for width w̄_j, all sizes thereby eliminated then being reconsidered for width w̄_{j+1}. In this second application of (6) there is, in addition to the advantages associated with the elimination of dominated sizes mentioned in Section II, a savings in the generation of base solutions. For if size w_q x l_q has been dominated in all sets {w_i x l_i} for widths w̄_j > w_k,^20 then it can be shown that for all widths w̄_j > w_k size w_q x l_q can be ignored in forming the vector H and subsequently in determining the intersection of H and the set {P^s}; instead it is sufficient to simply add to the set {P^s}, at the beginning of the evaluation for width w_k, a unit base solution P = (0, 0, ..., 0, 1, 0, ..., 0), where the unit element occurs in the position representing size w_q x l_q.

20. If size w_i x l_i is dominated for width w_k, it is dominated for all widths larger than w_k.
Dominance considerations are also important in permitting a more compact representation in some problems, thereby making possible considerable economies in data manipulation. Since these possibilities tend to increase as problem-solving progresses from stage k to stages k-1, k-2, etc., let us consider briefly at this point the problem at stage k-1.
Upon completing the evaluation of the knapsack function F_k(W,L) for 0 ≤ L ≤ L_max and W = w̄_j, j = 1, 2, ..., t, at stage k, the problem at stage k-1 is to evaluate the function F_{k-1}(W,L) for L = l_1, l_2, ..., l_z and 0 ≤ W ≤ W_max. This problem is identical to that at stage k except that cutting now takes place widthwise. The sizes for stage k-1 are the nondominated sizes which result at stage k, i.e., the sizes w̄_j x L for which

    F_k(w̄_j, L) > F_k(w̄_{j+1}, L)    and    F_k(w̄_j, L) > F_k(w̄_j, L'),        (13)

where L' is the largest evaluated length of the same width that is smaller than L. For example, if we assume Table B.3 to be the function F_k(W,L) for stage k, then the sizes for stage (k-1) are those whose value in the Table is encircled.^21 With the sizes defined in (13) we can now proceed to solve the problem at stage (k-1) using any of the algorithms appropriate for stage k.

21. In Figure 5 this step can be inserted in block 7, preceding the reduction step.
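A small sketch of the screening rule (13) is given below, assuming F_k is available as a dictionary keyed by (width, length) and that the evaluated lengths for each width are known; the names and data layout are illustrative assumptions, not the paper's own notation.

```python
def nondominated_sizes(F_k, widths, lengths_by_width):
    """Return the sizes (w_j, L) passing both tests of (13)."""
    widths = sorted(widths, reverse=True)          # w_1 > w_2 > ... > w_t
    keep = []
    for j, w in enumerate(widths):
        next_w = widths[j + 1] if j + 1 < len(widths) else None
        evaluated = sorted(lengths_by_width[w])
        for idx, L in enumerate(evaluated):
            v = F_k[(w, L)]
            if v <= 0:
                continue
            # strictly better than the next narrower width at the same length
            if next_w is not None and v <= F_k.get((next_w, L), 0):
                continue
            # strictly better than the next smaller evaluated length of the same width
            if idx > 0 and v <= F_k[(w, evaluated[idx - 1])]:
                continue
            keep.append((w, L))
    return keep
```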
Returning at this point to the discussion of dominance considerations, it is noted that upon application of the dominance tests in block 2 the 14 sizes in the example will, for every length l_j, be reduced so that at most 3 sizes appear in the identified set {w'_i x l'_i}. Or, to state it more generally, if among the nondominated sizes in the set {w'_i x l'_i} there are t different widths w̄_1, w̄_2, ..., w̄_t and z different lengths l_1, l_2, ..., l_z, then there are at most t sizes to be considered for any length l_j. Since these sizes are always uniquely defined for a given length (i.e., for each width w̄_k the size in the set is the size w'_i x l'_i with w'_i = w̄_k and the largest length not exceeding l_j), it is therefore possible throughout problem-solving to represent the problem in terms of vectors H, P and A having only t elements rather than m. This can result in considerable economies in data manipulation.
To conclude, we consider the sample problem at stage (k-1) defined by the nondominated sizes at stage k in Table B.3. The data for the problem are given in Appendix C. Table C.1 lists the complete set of feasible solution vectors A = (a_1, a_2, a_3)^22 for the problem, showing that with the compact representation there are only 12 solution vectors even though there are 14 sizes in the problem. Table C.2 shows the base solutions generated in solving the problem, either by the algorithm employing the procedure of Section III or by that employing the procedure of Section IV.^23 Table C.3 gives the optimal values F_{k-1}(W,L) which result at cutting stage k-1, and Table C.4 identifies a solution vector A in Table C.1 which will yield the optimal value. Thus, for example, Table C.3 indicates that a sheet of size 8x11 arriving at stage (k-1) can be cut widthwise at stage k-1 and the resulting pieces cut lengthwise at stage k in a manner yielding a value of F_{k-1}(8,11) = 21. According to Table C.4 this is accomplished by cutting according to solution vector A_5 at stage k-1, where according to Table C.1 A_5 = (1, 2, 0). Scanning the list of 14 sizes with vector A_5 and length 11, it is found that solution A_5 implies 1 unit of size 2x10 and 2 units of size 3x10. Proceeding to Table B.3 it is seen that at stage k the unit of size 2x10 will be cut according to solution A_40 in Table B.1 and each of the units of size 3x10 according to the solution identified there. Therefore, putting these results together, it follows that if a sheet of size 8x11 is cut in the two stages according to a pattern such as that in Figure 6, then a maximum value will be obtained.

[Figure 6. A two-stage cutting pattern for the 8 x 11 sheet.]

22. a_i is the number of elements of width w_i in the solution A.

23. For this problem all evaluations F_{k-1}(W,L) are actually obtained through direct solutions, use of base solutions, and bounding considerations, and hence the procedures of Section III or IV are never employed.
VI. Concluding Remarks

In the preceding sections partitioning algorithms have been discussed for evaluating a class of knapsack functions which has been described in the context of k-stage guillotine cutting of sheets of stock. In some contexts there are additional attributes of the problem to be considered or constraints to be imposed besides those discussed so far. In conclusion we comment briefly on the application or adaptation of the partitioning algorithms in a few such cases.

At the one extreme there are problems which can be accommodated with no changes in the algorithms. This is the case for the problem mentioned earlier in which the orientation of sizes (lengthwise or widthwise) with respect to the stock sheet is immaterial. In this case one simply represents each rotatable size in the problem both as a unit of size w_i x l_i and as a unit of size l_i x w_i, both of value v_i, and proceeds with the algorithm in its existing form.

At the other extreme are variations of the problem which can be accommodated only with considerable modification of the algorithms. This is the case, for instance, with problems having one or more of the following types of constraints considered by Gilmore and Gomory [8]: a constraint on the maximum number of units into which a strip can be chopped or strips into which a sheet can be slit:
    Σ_i a_i ≤ K ;                                                          (14)

a limit on the maximum number of units of size w_i x l_i that can be cut from a strip, or of strips of width w_i that can be slit from a sheet:

    a_i ≤ d_i ,   i = 1, 2, ..., m ;                                       (15)

and a limit on the number of different sizes that can be obtained from a single strip, or number of different strips from a sheet:

    Σ_i δ(a_i) ≤ M ,   where δ(a_i) = 1 if a_i > 0 and 0 otherwise.        (16)

As in the case of dynamic programming approaches, the modifications required in the partitioning algorithms to accommodate these constraints will seemingly result in an increase in the problem-solving effort required to solve the problem. In the case of the partitioning algorithms, use of the underlying search strategies, the generation and evaluation of descendents, the use of base solutions, etc., all remain appropriate, but significant changes must be made in many of the details. These changes are similar for all three types of constraints (14), (15) and (16), and are summarized below for a problem with constraint (14):

    F(W, L, K) = maximum  Σ_{i: w_i ≤ W} v_i a_i

    Subject to:  Σ_i l_i a_i ≤ L
                 Σ_i a_i ≤ K                                               (1.b)
    and          a_i ≥ 0, integer, all i.
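To make concrete how constraint (14) enters, the following small enumeration sketch evaluates the strip-level analogue of (1.b), i.e. the best value of a strip of length L chopped into at most K pieces; it is meant only as a reference implementation for checking small instances, not as the partitioning algorithm itself, and the data in the example line are purely illustrative.

```python
from functools import lru_cache

def strip_knapsack_with_piece_limit(lengths, values, L, K):
    """Best value of a strip of length L chopped into at most K pieces,
    each piece i having lengths[i] and values[i] (constraint (14) at strip level)."""
    @lru_cache(maxsize=None)
    def f(remaining_length, remaining_pieces):
        best = 0
        if remaining_pieces == 0:
            return best
        for l_i, v_i in zip(lengths, values):
            if l_i <= remaining_length:
                best = max(best, v_i + f(remaining_length - l_i, remaining_pieces - 1))
        return best
    return f(L, K)

print(strip_knapsack_with_piece_limit((6, 3, 5, 2), (6, 2, 3, 1), 13, 3))
```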
A principal change in the algorithms is brought about as a consequence of the change in the optimality of descendents:

Theorem 1.b: If A is an optimal solution for problem (1.b) with knapsack size W x L and limit K, then A' is an optimal solution with value F(W(A'), L(A'), K') for all knapsacks with W(A') ≤ W, L = L(A'), and K' ≤ K - Σ_i a_i + Σ_i a'_i, for all descendents A' of A.

Since descendent A' is not necessarily optimal for limit K, a procedural choice arises between solving problems (1.b) with the limit set at K and, by ignoring constraint (14), solving problem (1.a). The first alternative assures that the value F(W,L,K) itself will always be determined, but yields only lower bounds B(W(A'), L(A'), K) on the values for descendents of the optimal solution; hence these points must each be subsequently evaluated themselves for optimal values. The second alternative, on the other hand, fails to guarantee that the value F(W,L,K) will be determined, but does yield an optimal value for all descendents A' for which Σ_i a'_i ≤ K, and yields an upper bound U(W(A'), L(A'), K) for all other descendents. The choice between these alternatives is highly dependent on the probability that an optimal solution to problem (1.a) will satisfy constraint (14), and thus may change during problem-solving.
In addition a number of other changes must be made in the algorithms. Theorems 3 and 4, concerning the direct determination of optimal solutions, must be revised to limit solutions to those which are feasible according to constraint (14). Theorems 6, 7 and 8 are invalid in the presence of (14), thus rendering invalid equations (2), (3) and (4). Consequently the bounds in (4.a) now reduce to the simpler forms B(w_j, L) and U(w_j, L). Similarly the dominance conditions given in (6) are now invalid and must be replaced by conditions appropriate to the constraints on the problem.^24 Theorem 10, concerning the generation of base solutions, remains valid, together with Theorem 13. However, both Theorem 11 concerning the reduction of the set {P^s} and Theorem 12 concerning the use of base solutions in evaluation require extensive revision based on the feasibility and dominance conditions appropriate to the constraints on the problem. Finally, the procedures of both Section III and Section IV must be slightly modified to handle constraint (14).

24. In the case of (14) and/or (16), condition (6) is replaced by the requirement that Σ_k e_k ≤ 1 and Σ_k e_k v_k ≥ v_i, as follows from Theorem 1.1 of reference [9]. In the case of (15), to condition (6) should be added the following: ⟨(L - l_i)/l_i⟩ + e_i ≤ d_i.
Between these extremes are problems which can be solved with small or moderate changes in the basic algorithms. We conclude with two illustrations in which a knapsack function is to be evaluated for a number of different stock sizes. In the first example the knapsack function F(W,L), 0 ≤ W ≤ W_max, 0 ≤ L ≤ L_max, is to be evaluated at each stage k for each of the stock sizes W_1 x L_1, W_2 x L_2 and W_3 x L_3 shown in Figure 7.

[Figure 7. Multiple stock sizes.]

This can be done efficiently by evaluating concomitantly at each stage k the points in the shaded area of the Figure. To accomplish this the basic algorithms are modified to simply make L_max at all times dependent on the width w̄ being considered when cutting lengthwise, and W_max at all times dependent on the length being considered when cutting widthwise. In the second illustration evaluation of the knapsack function F(L) is desired for (only) the lengths L_1, L_2, ..., L_n. This can be accomplished efficiently by modifying the basic algorithm employing the procedure of Section III to always select the largest unevaluated length from the set {L_1, L_2, ..., L_n}, evaluation being complete when the set becomes empty.
APPENDIX A: ONE-DIMENSIONAL EXAMPLE

Maximum knapsack size L_max = 13.

TABLE A.1  Problem data and potential knapsack solutions A = (a_1, a_2, a_3, a_4), listed in lexicographically decreasing order.
TABLE A.2  Potential knapsack solutions listed in decreasing order by length, lexicographically decreasing order within length.
TABLE A.3  Maximum knapsack value F(L) for a knapsack of length L.
TABLE A.4  Schedule of non-dominated optimal solutions.
TABLE A.5  Base solutions.

APPENDIX B: TWO-DIMENSIONAL EXAMPLE

Maximum knapsack size W = 8, L = 13.

TABLE B.1  Problem data and potential knapsack solutions A = (a_1, a_2, a_3, a_4), listed in lexicographically decreasing order.
TABLE B.2  Value of the knapsack function F(W,L).
TABLE B.3  Solutions of Table B.1 yielding the values F(W,L).
TABLE B.4  Base solutions P_k for the two-dimensional example, with their widths, lengths and values.

APPENDIX C: STAGE k-1 OF TWO-DIMENSIONAL EXAMPLE

Problem data: item lengths, widths and values.

TABLE C.1  Potential knapsack solutions A = (a_1, a_2, a_3) for stage k-1.
TABLE C.2  Base solutions generated in solving the stage k-1 problem.
TABLE C.3  Value of the knapsack function F_{k-1}(W,L).
TABLE C.4  Optimal solutions yielding F_{k-1}(W,L).
REFERENCES

1. Barnett, S. and G. J. Kynch, "Exact Solution of a Simple Cutting Problem", Operations Research, Vol. 15, No. 6 (1967), pp. 1051-1056.

2. Bellman, R., "Notes on the Theory of Dynamic Programming. IV: Maximization over Discrete Sets", Naval Research Logistics Quarterly, Vol. 3, No. 1-2 (March-June 1956), pp. 67-70.

3. Dantzig, G. B., "Discrete Variable Extremum Problems", Operations Research, Vol. 5, No. 2 (1957), pp. 266-276.

4. Fulkerson, D. R., "Flow Networks and Combinatorial Operations Research", Memorandum RM-3378, The Rand Corporation, October 1962.

5. Gilmore, P. C. and R. E. Gomory, "A Linear Programming Approach to the Cutting Stock Problem", Operations Research, Vol. 9 (1961), pp. 849-859.

6. Gilmore, P. C. and R. E. Gomory, "A Linear Programming Approach to the Cutting Stock Problem - Part II", Operations Research, Vol. 11 (1963), pp. 863-888.

7. Gilmore, P. C. and R. E. Gomory, "Multi-Stage Cutting Stock Problems of Two and More Dimensions", Operations Research, Vol. 13 (1965), pp. 94-120.

8. Gilmore, P. C. and R. E. Gomory, "The Theory and Computation of Knapsack Functions", Operations Research, Vol. 14 (1966), pp. 1045-1074.

9. Glover, F., "The Knapsack Problem: Some Relations for an Improved Algorithm", Management Sciences Research Report No. 28, Carnegie Institute of Technology, January 1965, 23 pp.

10. Kolesar, P., "A Branch and Bound Algorithm for the Knapsack Problem", Management Science, Vol. 13, No. 9 (1967), pp. 723-735.

11. Pandit, S. N. N., "The Loading Problem", Operations Research, Vol. 10, No. 5 (1962), pp. 639-646.

12. Pierce, J. F., Some Large Scale Production Scheduling Problems in the Paper Industry, Prentice-Hall, New York, 1964.

13. Pierce, J. F., "On the Solution of Integer Cutting Stock Problems by Combinatorial Programming - Part II", IBM Cambridge Scientific Center Report, June 1968.

14. Shapiro, J. and H. M. Wagner, "A Finite Renewal Algorithm for the Knapsack and Turnpike Models", Operations Research, Vol. 15, No. 2 (1967), pp. 319-341.

15. Weingartner, H. M. and D. N. Ness, "Methods for the Solution of the Multi-Dimensional 0/1 Knapsack Problem", Operations Research, Vol. 15, No. 1 (1967), pp. 83-103.