ALFRED P. SLOAN SCHOOL OF MANAGEMENT

WORKING PAPER

A PROBABILISTIC ANALYSIS OF
THE MULTIKNAPSACK VALUE FUNCTION

by

M. Meanti (*)
A.H.G. Rinnooy Kan (**)
L. Stougie (***)
C. Vercellis (*)

Working Paper 1611-84

MASSACHUSETTS INSTITUTE OF TECHNOLOGY
50 MEMORIAL DRIVE
CAMBRIDGE, MASSACHUSETTS 02139
A PROBABILISTIC ANALYSIS OF
THE MULTIKNAPSACK VALUE FUNCTION

by

M. Meanti (*)
A.H.G. Rinnooy Kan (**)
L. Stougie (***)
C. Vercellis (*)

Working Paper 1611-84    September 1984

(*) Dipartimento di Matematica, Universita di Milano, Italy
(**) Econometric Institute, Erasmus University Rotterdam, The Netherlands / Sloan School of Management, M.I.T., Cambridge, Massachusetts, U.S.A.
(***) Center for Mathematics and Computer Science, Amsterdam, The Netherlands

This research was partially supported by NSF Grant ECS-831-6224 and MPI Project "Matematica computazionale".
1. Introduction

The Multi Knapsack problem (MK) is defined as follows. Let each item $j$ ($j=1,2,\ldots,n$) require a certain amount $a_{ij}$ of space of knapsack $i$ ($i=1,2,\ldots,m$), and let $c_j$ denote the profit of having item $j$ in the knapsacks. Let $x_j$ be a zero-one variable, taking value one if item $j$ is included in the knapsacks, and zero otherwise. The capacities of the knapsacks are denoted by $b_1, b_2, \ldots, b_m$. This leads to the following optimization problem:

(MK)  $\max \sum_{j=1}^{n} c_j x_j$
      s.t. $\sum_{j=1}^{n} a_{ij} x_j \le b_i$  ($i=1,2,\ldots,m$)
           $x_j \in \{0,1\}$  ($j=1,2,\ldots,n$)
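As an illustration of the formulation (with made-up data, not taken from the paper), a tiny instance of MK can be solved by complete enumeration of the zero-one vectors:

```python
from itertools import product

def solve_mk(c, a, b):
    """Brute-force the multiknapsack optimum: max sum_j c[j]*x[j]
    subject to sum_j a[i][j]*x[j] <= b[i] for every knapsack i."""
    n, m = len(c), len(b)
    best_val, best_x = 0.0, (0,) * n
    for x in product((0, 1), repeat=n):
        if all(sum(a[i][j] * x[j] for j in range(n)) <= b[i] for i in range(m)):
            val = sum(c[j] * x[j] for j in range(n))
            if val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

# Tiny illustrative instance: n = 4 items, m = 2 knapsacks.
c = [0.9, 0.8, 0.5, 0.4]
a = [[0.5, 0.4, 0.3, 0.2],   # requirements in knapsack 1
     [0.3, 0.6, 0.2, 0.5]]   # requirements in knapsack 2
b = [1.0, 1.0]
z, x = solve_mk(c, a, b)
```

Enumeration is of course exponential in $n$; it is used here only to fix the notation $c_j$, $a_{ij}$, $b_i$.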
MK has been used to model problems in the areas of scheduling and capital budgeting /10/. The problem is known to be NP-hard /6/; it is a generalization of the standard knapsack problem ($m=1$). MK can be solved by a polynomial approximation scheme /5/, but a fully polynomial one cannot exist unless P=NP /9/.

In this paper, we are interested in analysing the behaviour of the optimal value of MK with respect to changing capacities of the knapsacks. More specifically, we would like to obtain an expression that represents the optimal value as a function of $b_i$ ($i=1,2,\ldots,m$) over a range of problem instances.
A stochastic model for the problem parameters $c_j$, $a_{ij}$ ($i=1,2,\ldots,m$, $j=1,2,\ldots,n$) will show that the sequence of optimal values, properly normalized, converges with probability one (wp1) to a function of the $b_i$'s as the number $n$ of items tends to infinity while the number $m$ of knapsacks is fixed.

A number of probabilistic analyses of approximate algorithms for the knapsack problem ($m=1$) have been conducted in the past (/1/, /2/, /7/, /11/). So far, the optimum value has not been asymptotically characterized as a function of the capacity $b$ of the knapsack. A similar comment applies to the probabilistic analysis developed in /5/ for MK, under a stochastic model less general than the one we consider here. In fact, our only requirement will be absolute continuity of the distributions involved; the results are unusually robust.

The main result in Section 3 is the asymptotic characterization of the optimum value as a function of $b_i$ ($i=1,2,\ldots,m$). This characterization is obtained by showing that the optimal values of the LP-relaxation of MK (denoted by MKLP) have a regular limiting behaviour holding wp1. The fact that the optimal values of MK and MKLP are asymptotic to each other follows from a result in Section 2.

Three other relevant results are obtained. To characterize the integer optimal solution itself, we shall prove that the zero-one vectors in a certain sequence are optimal solutions of MK infinitely often (i.o.). We also show that, relative to the optimal integer solution, at least one constraint has a slack variable whose expected value equals zero i.o. Finally, we prove that the sequence of optimal dual multipliers of MKLP also has a regular asymptotic behaviour, converging to a limit wp1.

In Section 4, the results obtained are applied to problem MK for the cases that $m=1$ and $m=2$ and for specific distributions of the parameters, so as to obtain explicit formulae for the optimal value of MK. They are depicted in Figures 1 and 3.

Finally, in Section 5, an approximation algorithm is proposed. It generalizes the greedy heuristic for MK (cf. /3/) and will be shown to be asymptotically optimal wp1.
2. Stochastic model and upper and lower bounds

Let us assume that the profit parameters $c_j$ ($j=1,2,\ldots,n$) are independent identically distributed (i.i.d.) random variables (r.v.), defined over a bounded interval in $\mathbb{R}$ with common distribution $F_c$. Analogously, for each $i=1,2,\ldots,m$, the requirement coefficients $a_{ij}$ ($j=1,2,\ldots,n$) are assumed to be i.i.d. r.v.'s with common distribution $F_i$ over a bounded interval.

In the sequel, r.v.'s will be underlined. Without loss of generality, the interval on which $c_j$, $a_{ij}$ are defined will be assumed to be $[0,1]$. We further assume that the distributions $F_c$, $F_i$ ($i=1,2,\ldots,m$) are absolutely continuous and that the corresponding densities $f_c$, $f_i$ ($i=1,2,\ldots,m$) are continuous and strictly positive over $(0,1)$.

It is reasonable to assume that the capacities $b_i$ grow proportionally to the number of items. Specifically, let $b_i = n\beta_i$ ($i=1,2,\ldots,m$) for $\beta \in V = \{\beta: 0 < \beta_i < E\,a_{i1}\ (i=1,2,\ldots,m)\}$. As remarked in /11/, if $\beta_i \ge E\,a_{i1}$ the $i$-th constraint would tend to be redundant, in the sense that it would be satisfied even if all items are included, with probability tending to one as $n \to \infty$.

Define

$z_n = \max\{\sum_{j=1}^{n} c_j x_j \mid \sum_{j=1}^{n} a_{ij} x_j \le n\beta_i\ (i=1,2,\ldots,m),\ x_j \in \{0,1\}\ (j=1,2,\ldots,n)\}$

to be the optimal value of MK and

$z_n^{LP} = \max\{\sum_{j=1}^{n} c_j x_j \mid \sum_{j=1}^{n} a_{ij} x_j \le n\beta_i\ (i=1,2,\ldots,m),\ 0 \le x_j \le 1\ (j=1,2,\ldots,n)\}$

to be the optimal value of MKLP.
By definition, $z_n \le z_n^{LP}$. Moreover, an optimal basic solution of MKLP has at most $m$ fractional components, and since each $c_j \le 1$, rounding these components down to zero yields a feasible solution of MK whose value is at least $z_n^{LP} - m$. This yields the following result.

LEMMA 2.1: For each realization of the stochastic parameters, $z_n^{LP} - m \le z_n \le z_n^{LP}$.
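For $m=1$ the two bounds of Lemma 2.1 are easy to check computationally: the optimum of MKLP is then attained by the classical fractional greedy rule (items in order of decreasing $c_j/a_j$, with a fractional share of the first item that does not fit). The following sketch compares $z_n$ and $z_n^{LP}$ on a small random instance; data and parameters are illustrative only:

```python
import random
from itertools import product

def knapsack_lp(c, a, cap):
    """LP optimum of a single 0-1 knapsack: fractional greedy on c_j / a_j."""
    order = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
    val = 0.0
    for j in order:
        if a[j] <= cap:
            cap -= a[j]
            val += c[j]
        else:
            val += c[j] * cap / a[j]  # fractional share of the split item
            break
    return val

def knapsack_ip(c, a, cap):
    """Integer optimum by enumeration (n is small here)."""
    best = 0.0
    for x in product((0, 1), repeat=len(c)):
        if sum(aj * xj for aj, xj in zip(a, x)) <= cap:
            best = max(best, sum(cj * xj for cj, xj in zip(c, x)))
    return best

rng = random.Random(0)
n = 12
c = [rng.random() for _ in range(n)]
a = [rng.random() for _ in range(n)]
cap = n * 0.25            # b = n * beta with beta = 1/4
z_lp = knapsack_lp(c, a, cap)
z = knapsack_ip(c, a, cap)
# Lemma 2.1 with m = 1: z_lp - 1 <= z <= z_lp
```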
3. Asymptotic characterization of the optimal value

In order to derive a function of $b_i$ which is asymptotic wp1 to $z_n^{LP}$, we consider the Lagrangean relaxation of the problem MKLP, defined as

$w_n(\lambda) = \max\{\sum_{i=1}^{m} \lambda_i b_i + \sum_{j=1}^{n} (c_j - \sum_{i=1}^{m} \lambda_i a_{ij}) x_j \mid 0 \le x_j \le 1\ (j=1,2,\ldots,n)\}$.
It is well known that, for every realization of the stochastic parameters, the function $w_n(\lambda)$ is convex over the region $\lambda \ge 0$. Moreover,

$\min_{\lambda \ge 0} w_n(\lambda) = z_n^{LP}$.   (3.1)

Let $\lambda_n^*$ be a vector of multipliers minimizing $w_n$. Define the r.v.'s

$x_j(\lambda) = 1$ if $c_j - \sum_{i=1}^{m} \lambda_i a_{ij} \ge 0$, and $x_j(\lambda) = 0$ otherwise  ($j=1,2,\ldots,n$).
Then

$w_n(\lambda) = \sum_{i=1}^{m} \lambda_i b_i + \sum_{j=1}^{n} (c_j - \sum_{i=1}^{m} \lambda_i a_{ij}) x_j(\lambda)$.

Define also

$L_n(\lambda) = \frac{1}{n} w_n(\lambda) = \sum_{i=1}^{m} \lambda_i \beta_i + \frac{1}{n} \sum_{j=1}^{n} (c_j - \sum_{i=1}^{m} \lambda_i a_{ij}) x_j(\lambda)$.

We first prove a preliminary lemma that establishes the almost sure convergence of $L_n(\lambda)$ to a non-random function $L(\lambda)$, defined by

$L(\lambda) = \sum_{i=1}^{m} \lambda_i \beta_i + E\,c\,x(\lambda) - \sum_{i=1}^{m} \lambda_i E\,a_i x(\lambda)$.

Here $E\,c\,x(\lambda)$ and $E\,a_i x(\lambda)$ are used to denote respectively the common values of $E[c_j x_j(\lambda)]$ and $E[a_{ij} x_j(\lambda)]$ ($j=1,2,\ldots,n$). We shall also write $E\,a\,x(\lambda)$ for the vector $(E\,a_1 x(\lambda), E\,a_2 x(\lambda), \ldots, E\,a_m x(\lambda))$.
We observe that the r.v.'s $\{c_j x_j(\lambda)\}$ ($j=1,2,\ldots,n$) are i.i.d., as well as the r.v.'s $\{a_{ij} x_j(\lambda)\}$ ($j=1,2,\ldots,n$) for any $i \in \{1,2,\ldots,m\}$. This property will be used throughout the paper, sometimes without being explicitly mentioned. In particular, it can be used to apply the strong law of large numbers to the sequence $L_n(\lambda)$ and obtain the following result.

LEMMA 3.1: For every $\lambda \ge 0$, $L_n(\lambda) \to L(\lambda)$ wp1.
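The convergence in Lemma 3.1 can be observed by simulation. For $m=1$ with $c_j, a_j$ uniform on $[0,1]$ (the model used in Section 4), note that $(c_j - \lambda a_j) x_j(\lambda) = (c_j - \lambda a_j)^+$, so that $L_n(\lambda) = \lambda\beta + \frac{1}{n}\sum_j (c_j - \lambda a_j)^+$ and $L(\lambda) = \lambda\beta + E(c - \lambda a)^+$. The sketch below (with arbitrary $\lambda$, $\beta$ and $n$) estimates both quantities, the expectation by a crude midpoint quadrature:

```python
import random

def L_n(lam, beta, c, a):
    """Sample Lagrangean for m = 1:
    L_n(lambda) = lambda*beta + (1/n) sum_j (c_j - lambda*a_j)^+,
    using x_j(lambda) = 1 iff c_j - lambda*a_j >= 0."""
    return lam * beta + sum(max(cj - lam * aj, 0.0) for cj, aj in zip(c, a)) / len(c)

def L(lam, beta, grid=400):
    """Limit function L(lambda) = lambda*beta + E (c - lambda*a)^+ for c, a ~ U[0,1],
    with the expectation evaluated by midpoint quadrature on the unit square."""
    h = 1.0 / grid
    exp_pos = sum(max((i + 0.5) * h - lam * (k + 0.5) * h, 0.0)
                  for i in range(grid) for k in range(grid)) * h * h
    return lam * beta + exp_pos

rng = random.Random(42)
lam, beta, n = 0.8, 0.3, 200_000
c = [rng.random() for _ in range(n)]
a = [rng.random() for _ in range(n)]
gap = abs(L_n(lam, beta, c, a) - L(lam, beta))
# gap should be small for large n (strong law of large numbers)
```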
The next two lemmas, whose proofs are given in the Appendix, describe some properties of the function $L(\lambda)$ which will turn out to be useful in the sequel.

LEMMA 3.2: $L(\lambda)$ is twice continuously differentiable and strictly convex. For each $\beta \in V$ it has a unique non-zero minimum $\lambda^* = \lambda^*(\beta)$ over the region $\lambda \ge 0$. The gradient is given by

$\nabla L(\lambda) = \beta - E\,a\,x(\lambda)$.

LEMMA 3.3: For $\lambda^*$ the following conditions are satisfied for $i=1,2,\ldots,m$:

(i) $\lambda_i^* (\beta_i - E\,a_i x(\lambda^*)) = 0$;

(ii) $E\,a_i x(\lambda^*) \le \beta_i$.
If we can prove that $L_n(\lambda_n^*) \to L(\lambda^*)$ wp1, then (3.1) would provide an asymptotic characterization of $z_n^{LP}$. To establish this convergence, we have to strengthen the result in Lemma 3.1 from pointwise to uniform convergence wp1. To do this, we shall apply a theorem from /12/ which states that pointwise convergence of a sequence of convex functions on a compact subset of their domain implies uniform convergence on this subset to a function that is also convex.

First, we have to show that the minima $\{\lambda_n^*\}$ ($n=1,2,\ldots$) and $\lambda^*$ are contained in a compact subset $S \subset \mathbb{R}_+^m$. In fact, we shall show that the minimization of the functions $L_n(\lambda)$ ($n=1,2,\ldots$) and $L(\lambda)$ over $\mathbb{R}_+^m$ is equivalent to their minimization over the set

$S = \{\lambda \mid \lambda_i \ge 0\ (i=1,2,\ldots,m),\ \sum_{i=1}^{m} \beta_i \lambda_i \le 1\}$.

LEMMA 3.4: For every realization of the stochastic parameters, the functions $L(\lambda)$ and $L_n(\lambda)$ ($n=1,2,\ldots$) attain their minimum within the set $S$.
Proof: It is easy to see that

$L_n(0) = \frac{1}{n} \sum_{j=1}^{n} c_j \le 1$.

For any other value of $\lambda$ it holds that

$L_n(\lambda) = \sum_{i=1}^{m} \lambda_i \beta_i + \frac{1}{n} \sum_{j=1}^{n} (c_j - \sum_{i=1}^{m} \lambda_i a_{ij}) x_j(\lambda) \ge \lambda\beta$,

since $x_j(\lambda)$ selects only nonnegative terms. Hence, for every $\lambda$ with $\lambda\beta > 1$ we have $L_n(\lambda) > 1 \ge L_n(0)$. Convexity of $L_n$ implies, together with the inequalities above, that $\lambda_n^* \in S$, $n=1,2,\ldots$. The same arguments applied to $L(\lambda)$ show that $\lambda^* \in S$. ∎

As a direct application of Rockafellar's theorem 10.8 /12/, we then obtain the following result.

LEMMA 3.5: $L_n(\lambda) \to L(\lambda)$ wp1, uniformly on $S$.
We are now in a position to prove the required result.

THEOREM 3.1: $L_n(\lambda_n^*) \to L(\lambda^*)$ wp1.

Proof: Uniform convergence wp1 on $S$ can be written as

$\Pr\{\forall \varepsilon > 0\ \exists n_0: \forall n \ge n_0\ \sup_{\lambda \in S} |L_n(\lambda) - L(\lambda)| < \varepsilon\} = 1$.   (3.2)

It is easy to see that

$|L_n(\lambda_n^*) - L(\lambda^*)| = |\min_{\lambda \in S} L_n(\lambda) - \min_{\lambda \in S} L(\lambda)| \le \sup_{\lambda \in S} |L_n(\lambda) - L(\lambda)|$.   (3.3)

The combination of (3.2) and (3.3) yields

$\Pr\{\forall \varepsilon > 0\ \exists n_0: \forall n \ge n_0\ |L_n(\lambda_n^*) - L(\lambda^*)| < \varepsilon\} = 1$,

which proves the required result. ∎
From (3.1) and Theorem 3.1 it follows that

$\frac{1}{n} z_n^{LP} \to L(\lambda^*)$ wp1.   (3.4)

Moreover, we know from Lemma 2.1 that, for each realization,

$\frac{1}{n} z_n^{LP} - \frac{m}{n} \le \frac{1}{n} z_n \le \frac{1}{n} z_n^{LP}$.   (3.5)

Combining (3.4) and (3.5) one can easily establish the following theorem.

THEOREM 3.2: $\frac{1}{n} z_n \to L(\lambda^*)$ wp1.

This latter result gives the required asymptotic characterization of the optimum of MK, holding almost surely. Observe that $L(\lambda^*)$ is actually a function of the right-hand sides $b_i$ ($i=1,2,\ldots,m$), implicitly defined by the minimization of $L(\lambda)$ over $S$. The problem of deriving a closed-form expression for this function, or at least of evaluating it numerically for different values of its arguments $b_i$ ($i=1,2,\ldots,m$), will be considered in the subsequent Section 4 for specific distributions $F_c$, $F_i$ ($i=1,\ldots,m$).
Whereas Theorem 3.2 is concerned with the behaviour of the optimum value $z_n$ of MK, the following result says something about the corresponding optimal solution.

THEOREM 3.3: The vector $x(\lambda^*) = (x_1(\lambda^*), x_2(\lambda^*), \ldots, x_n(\lambda^*))$ is optimal infinitely often (i.o.).

Proof: Define the following events: $Q_n = \{x(\lambda^*)$ is not optimal$\}$, $R_n = \{x(\lambda^*)$ is suboptimal$\}$ and $T_n = \{x(\lambda^*)$ is not feasible$\}$.
By definition, we have to show that the series $\sum_n \Pr\{Q_n\}$ is convergent. Obviously,

$\Pr\{Q_n\} \le \Pr\{R_n\} + \Pr\{T_n\}$.

We have

$\Pr\{R_n\} \le \Pr\{\exists \varepsilon > 0: \frac{1}{n} \sum_{j=1}^{n} c_j x_j(\lambda^*) \le \frac{1}{n} z_n^{LP} - \varepsilon\}$.

Since $\frac{1}{n}(\sum_{j=1}^{n} c_j x_j(\lambda^*) - z_n^{LP})$ converges completely to $0$, the series $\sum_n \Pr\{R_n\}$ converges for every $\varepsilon > 0$. Moreover,

$\Pr\{T_n\} = \Pr\{\exists i, \varepsilon > 0: \beta_i - \frac{1}{n} \sum_{j=1}^{n} a_{ij} x_j(\lambda^*) < -\varepsilon\} \le m \Pr\{\beta_i - \frac{1}{n} \sum_{j=1}^{n} a_{ij} x_j(\lambda^*) < -\varepsilon\}$,

so that the series $\sum_n \Pr\{T_n\}$ is also convergent, since from Lemma 3.3

$\beta_i - \frac{1}{n} \sum_{j=1}^{n} a_{ij} x_j(\lambda^*) \to \beta_i - E\,a_i x(\lambda^*) \ge 0$   ($i=1,2,\ldots,m$)

completely. ∎

As a consequence of Lemmas 3.2, 3.3 and Theorem 3.3, at least one constraint will be binding in expectation at the integer optimum, in the sense that the expected value of its slack equals zero i.o.
However, this property is not true in general for all constraints (cf. our comments for the case $m=2$ in Section 4).

To conclude this section, a stronger version of Theorem 3.1 will be proved, which extends the convergence result from the sequence of minima of the Lagrangean $L_n(\lambda)$ to the sequence of their arguments, showing that $\lambda_n^* \to \lambda^*$ wp1. This property indicates regular asymptotic behaviour of the sequence of optimal dual multipliers. Moreover, it will be useful in developing the approximation algorithm proposed in Section 5.
THEOREM 3.4: $\lambda_n^* \to \lambda^*$ wp1.

Proof: A Taylor series of $L(\lambda)$ around $\lambda^*$ can be written as

$L(\lambda_n^*) - L(\lambda^*) = (\lambda_n^* - \lambda^*) \nabla L(\lambda^*) + \frac{1}{2} (\lambda_n^* - \lambda^*) H(\bar\lambda_n) (\lambda_n^* - \lambda^*)$,   (3.6)

where $H(\bar\lambda_n)$ is the Hessian matrix of $L(\lambda)$ evaluated at a point $\bar\lambda_n$ lying on the line segment $(\lambda^*, \lambda_n^*)$ and therefore belonging to the convex set $S$. Positive definiteness of $H(\lambda)$ for every $\lambda$ (see the Appendix) implies strict positivity of its eigenvalues. Let $\alpha(\lambda)$ denote the smallest eigenvalue of $H(\lambda)$, and let $\alpha = \inf_{\lambda \in S} \alpha(\lambda)$; note that the continuity of $\alpha(\lambda)$ implies that $\alpha > 0$.

It can be easily shown that, for any $n$,

$\frac{1}{2} (\lambda_n^* - \lambda^*) H(\bar\lambda_n) (\lambda_n^* - \lambda^*) \ge \frac{\alpha}{2} \|\lambda_n^* - \lambda^*\|^2$.   (3.7)

Moreover, due to Lemma 3.3, we have

$(\lambda_n^* - \lambda^*) \nabla L(\lambda^*) = \lambda_n^* \nabla L(\lambda^*) - \lambda^* \nabla L(\lambda^*) = \lambda_n^* \nabla L(\lambda^*) \ge 0$.   (3.8)

Combining (3.6), (3.7) and (3.8), we obtain that

$L(\lambda_n^*) - L(\lambda^*) \ge \frac{\alpha}{2} \|\lambda_n^* - \lambda^*\|^2$.

Consequently, to prove convergence of $\lambda_n^*$ to $\lambda^*$, it remains to show that $L(\lambda_n^*) \to L(\lambda^*)$ wp1. Indeed,

$|L(\lambda_n^*) - L(\lambda^*)| \le |L(\lambda_n^*) - L_n(\lambda_n^*)| + |L_n(\lambda_n^*) - L(\lambda^*)|$,

and $|L(\lambda_n^*) - L_n(\lambda_n^*)| \to 0$ wp1 by Lemma 3.5, whereas $|L_n(\lambda_n^*) - L(\lambda^*)| \to 0$ wp1 by Theorem 3.1. ∎
4. Particular cases: m=1, m=2

The actual computation of $E\,c\,x(\lambda)$ and $E\,a\,x(\lambda)$ requires a specific choice of distribution for the stochastic parameters of MK. In this section, we assume that the profit coefficients $c_j$ as well as the requirements $a_{ij}$ are uniformly distributed over $[0,1]$. This assumption is essentially the same as in /5/, /7/, /11/.
We first present and discuss the results concerning MK with one constraint. In this case, straightforward calculations lead to the following formulae:

$E\,a\,x(\lambda) = \frac{1}{2} - \frac{\lambda}{3}$ if $\lambda \le 1$, and $E\,a\,x(\lambda) = \frac{1}{6\lambda^2}$ if $\lambda > 1$;

$E\,c\,x(\lambda) = \frac{1}{2} - \frac{\lambda^2}{6}$ if $\lambda \le 1$, and $E\,c\,x(\lambda) = \frac{1}{3\lambda}$ if $\lambda > 1$;

$L(\lambda) = \frac{\lambda^2}{6} + (\beta - \frac{1}{2})\lambda + \frac{1}{2}$ if $\lambda \le 1$, and $L(\lambda) = \lambda\beta + \frac{1}{6\lambda}$ if $\lambda > 1$;

$\lambda^* = \frac{1}{\sqrt{6\beta}}$ if $0 < \beta < \frac{1}{6}$, and $\lambda^* = \frac{3}{2} - 3\beta$ if $\frac{1}{6} \le \beta < \frac{1}{2}$.

Therefore, by Theorem 3.2,

$\frac{1}{n} z_n \to L(\lambda^*)$ wp1, where $L(\lambda^*) = \sqrt{2\beta/3}$ if $0 < \beta < \frac{1}{6}$, and $L(\lambda^*) = -\frac{3}{2}\beta^2 + \frac{3}{2}\beta + \frac{1}{8}$ if $\frac{1}{6} \le \beta < \frac{1}{2}$.

The graph of $L(\lambda^*)$ as a function of $\beta$ is shown in Figure 1. Notice that, in accordance with Lemmas 3.2 and 3.3, $E\,a\,x(\lambda^*) - \beta = 0$.
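The formulae above are easy to code and to cross-check: the two branches of $\lambda^*$ and of $L(\lambda^*)$ agree at $\beta = 1/6$ (both values of $L(\lambda^*)$ equal $1/3$ there), and $\lambda^*$ satisfies the stationarity condition $E\,a\,x(\lambda^*) = \beta$ of Lemma 3.2. A minimal sketch:

```python
import math

def lam_star(beta):
    """Minimizer lambda* of L(lambda), uniform m = 1 model, 0 < beta < 1/2."""
    return 1.0 / math.sqrt(6.0 * beta) if beta < 1.0 / 6.0 else 1.5 - 3.0 * beta

def L_star(beta):
    """Limit of z_n / n (Theorem 3.2), uniform m = 1 model."""
    if beta < 1.0 / 6.0:
        return math.sqrt(2.0 * beta / 3.0)
    return -1.5 * beta ** 2 + 1.5 * beta + 0.125

def E_ax(lam):
    """E a x(lambda) for c, a ~ U[0,1]."""
    return 0.5 - lam / 3.0 if lam <= 1.0 else 1.0 / (6.0 * lam ** 2)
```

Both branches of `E_ax(lam_star(beta))` return `beta` exactly, which is the stationarity condition $\nabla L(\lambda^*) = 0$.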
In the case that $m=2$, the expectations $E\,c\,x(\lambda)$, $E\,a_1 x(\lambda)$ and $E\,a_2 x(\lambda)$ take different forms over different regions of the $(\lambda_1, \lambda_2)$ plane. We will describe directly the corresponding functions $\lambda^* = \lambda^*(\beta)$ and $L(\lambda^*)$ defined on the $(\beta_1, \beta_2)$ plane, referring to Figure 2. Five regions A, B, C, D and E are distinguished; for $\beta_1 \le \beta_2$ they correspond, respectively, to the cases in which both components of $\lambda^*$ are at least one (A), one component of $\lambda^*$ vanishes (B and C), both components are positive with $\lambda_1^* + \lambda_2^* \le 1$ (D), and the remaining intermediate case (E). The values of $\lambda^*$ and $L(\lambda^*)$ in the corresponding regions where $\beta_1 > \beta_2$ can be obtained by exchanging $\beta_1$ with $\beta_2$ in the formulae given below.

Region A: here $E\,c\,x(\lambda) = \frac{1}{8\lambda_1\lambda_2}$ and $E\,a_i x(\lambda) = \frac{1}{24\lambda_i^2\lambda_k}$ ($k \ne i$), so that

$\lambda_1^* = \sqrt[3]{\frac{\beta_2}{24\beta_1^2}}$,  $\lambda_2^* = \sqrt[3]{\frac{\beta_1}{24\beta_2^2}}$,  $L(\lambda^*) = \sqrt[3]{\frac{9\beta_1\beta_2}{8}}$.

Regions B and C: one multiplier equals zero, and the other, together with $L(\lambda^*)$, is given by the $m=1$ formulae applied to the tighter capacity.

Region D: here $E\,a_1 x(\lambda) = \frac{1}{2} - \frac{\lambda_1}{3} - \frac{\lambda_2}{4}$ (and symmetrically for $E\,a_2 x(\lambda)$), so that the stationarity conditions are linear and

$\lambda_1^* = \frac{6 - 48\beta_1 + 36\beta_2}{7}$,  $\lambda_2^* = \frac{6 - 48\beta_2 + 36\beta_1}{7}$,

$L(\lambda^*) = E\,c\,x(\lambda^*) = \frac{1}{2} - \frac{(\lambda_1^*)^2 + (\lambda_2^*)^2}{6} - \frac{\lambda_1^* \lambda_2^*}{4}$.
In order to obtain closed-form expressions for the values $\lambda^*$ and $L(\lambda^*)$ for $\beta$ lying in E, an equation of either fourth or eighth degree should be solved explicitly. Yet, numerical evaluation is possible through the direct solution of the problem $\min_{\lambda \in S} L(\lambda)$, using an appropriate nonlinear programming routine.

A picture of the surface $L(\lambda^*)$, defined over the $(\beta_1, \beta_2)$ plane and evaluated either analytically or numerically, is presented in Figure 3.

By calculating $\beta_i - E\,a_i x(\lambda^*)$ ($i=1,2$) for the regions A, B, C and D and using the remark at the end of Theorem 3.3, one can verify that in A and D both constraints are i.o. binding in expectation at the optimal integer solution, while in B and C this is true only for the tighter constraint. This corresponds to intuition: when $\beta_2$ is sufficiently small with respect to $\beta_1$, as in B and C, the first constraint can be disregarded, so that MK with two constraints is reduced to a simple knapsack problem. To support this conclusion, observe that the values of $\lambda^*$ and $L(\lambda^*)$ obtained for the regions B and C are identical to the corresponding ones derived for the case $m=1$.

Similar calculations can be carried out for $m \ge 3$, even though in these cases only numerical approximation of $\lambda^*$ and $L(\lambda^*)$ is possible for many values of $\beta_i$. The computation of $L(\lambda)$ and $\nabla L(\lambda)$ can be seen to amount to integrating the density function over regions defined by linear inequalities. In many situations, closed-form expressions for these integrals can be derived in principle.
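The suggested numerical route can be sketched with a plain grid search standing in for a nonlinear programming routine. For the uniform model, $L(\lambda) = \lambda_1\beta_1 + \lambda_2\beta_2 + E(c - \lambda_1 a_1 - \lambda_2 a_2)^+$, and the expectation can be estimated by Monte Carlo on a fixed sample. As a check point we use $\beta_1 = \beta_2 = 1/4$: when $\lambda_1 + \lambda_2 \le 1$ one has $E\,a_1 x(\lambda) = \frac{1}{2} - \frac{\lambda_1}{3} - \frac{\lambda_2}{4}$ (and symmetrically), and the stationarity conditions then give $\lambda_1^* = \lambda_2^* = 3/7$ with $L(\lambda^*) = 11/28$. Sample size and grid step below are arbitrary choices:

```python
import random

def make_L_hat(beta, samples):
    """Monte Carlo estimate of L(lambda) = lambda.beta + E (c - lambda.a)^+ for m = 2,
    reusing one fixed sample of (c, a1, a2) across all evaluations."""
    def L_hat(l1, l2):
        pen = sum(max(c - l1 * a1 - l2 * a2, 0.0) for c, a1, a2 in samples)
        return l1 * beta[0] + l2 * beta[1] + pen / len(samples)
    return L_hat

rng = random.Random(7)
samples = [(rng.random(), rng.random(), rng.random()) for _ in range(4000)]
L_hat = make_L_hat((0.25, 0.25), samples)

# crude grid search over [0,1]^2 (the minimizer is known to lie inside)
step = 0.05
L_min, l1, l2 = min(
    (L_hat(i * step, k * step), i * step, k * step)
    for i in range(21) for k in range(21)
)
# expect l1, l2 near 3/7 ~ 0.43 and L_min near 11/28 ~ 0.393
```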
5. Probabilistic analysis of an approximation algorithm

The algorithm for solving the MK problem considered in this section belongs to the family of generalized greedy heuristics, which can be described as follows. Given $m$ positive weights $y_1, y_2, \ldots, y_m$, order the items according to decreasing ratios

$\rho_j = \frac{c_j}{\sum_{i=1}^{m} y_i a_{ij}}$   ($j=1,2,\ldots,n$).

The generalized greedy heuristic, denoted by $G(y)$, then selects items according to this order until the next item considered would violate one of the constraints if added to the knapsacks.
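The heuristic admits a direct transcription (variable names are ours; the tiny instance is illustrative):

```python
def greedy_G(y, c, a, b):
    """Generalized greedy heuristic G(y) for MK: sort items by decreasing
    rho_j = c_j / sum_i y_i * a_ij, add them in this order, and stop as soon
    as the next item considered would violate some capacity."""
    m, n = len(b), len(c)
    order = sorted(range(n),
                   key=lambda j: c[j] / sum(y[i] * a[i][j] for i in range(m)),
                   reverse=True)
    load = [0.0] * m
    value, chosen = 0.0, []
    for j in order:
        if any(load[i] + a[i][j] > b[i] for i in range(m)):
            break  # next item in the order violates a constraint: stop
        for i in range(m):
            load[i] += a[i][j]
        value += c[j]
        chosen.append(j)
    return value, chosen

# Illustrative instance with uniform weights y = (1, 1).
c = [0.9, 0.8, 0.5, 0.4]
a = [[0.5, 0.4, 0.3, 0.2],
     [0.3, 0.6, 0.2, 0.5]]
b = [1.0, 1.0]
val, chosen = greedy_G([1.0, 1.0], c, a, b)
```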
Let $z_n^G(y)$ indicate the value of the solution obtained by this heuristic. Obviously, the quality of the approximation $z_n^G(y)$ is affected by the choice of the weight vector $y$. The behaviour of $G(y)$ as a function of $y$ will be analyzed in a forthcoming paper. In particular, it is possible to show that (under a nondegeneracy assumption) the choice $y = \lambda_n^*$, where for each problem instance $\lambda_n^*$ is the vector of optimal dual multipliers, leads to a solution $z_n^G(\lambda_n^*)$ which is the same as the integer round-down of the optimal solution of MKLP.

A drawback of this choice seems to be that $y$ depends on the particular problem instance in a non-trivial way. Yet, in the light of Theorem 3.4, the choice $y = \lambda^*$, where $\lambda^*$ is defined (as in Section 3) to be the unique minimum of $L(\lambda)$, seems to be a reasonable alternative, at least in a probabilistic framework. In fact, we shall show that the generalized greedy heuristic corresponding to the weights $y = \lambda^*$ is asymptotically optimal wp1 with respect to the stochastic model of MK introduced in Section 2.
The following result is due to Hoeffding /8/, providing bounds on the probability that a normalized sum of r.v.'s deviates from its mean; it will be useful in what follows.

LEMMA 5.1: Let $X_1, X_2, \ldots, X_n$ be independent r.v.'s taking values in the interval $[0,1]$ and having finite mean and variance. Then, for $0 < t < 1$,

$\Pr\{\frac{1}{n}\sum_{j=1}^{n} X_j - \frac{1}{n} E[\sum_{j=1}^{n} X_j] \ge t\} \le e^{-2nt^2}$,   (5.1)

$\Pr\{\frac{1}{n}\sum_{j=1}^{n} X_j - \frac{1}{n} E[\sum_{j=1}^{n} X_j] \le -t\} \le e^{-2nt^2}$,   (5.2)

$\Pr\{|\frac{1}{n}\sum_{j=1}^{n} X_j - \frac{1}{n} E[\sum_{j=1}^{n} X_j]| \ge t\} \le 2 e^{-2nt^2}$.   (5.3)
Define the r.v.

$\mu_n = \inf\{\mu > 0: \sum_{j=1}^{n} a_{ij} x_j(\mu\lambda^*) \le b_i\ (i=1,2,\ldots,m)\}$.

Clearly, $\mu_n$ can be interpreted as the value of the ratio $\rho_j$ corresponding to the last item included in the knapsacks by the algorithm $G(\lambda^*)$.
We first establish a convergence result for the sequence of r.v.'s $\{\mu_n\}$.

THEOREM 5.1: $\mu_n \to 1$ wp1.

Proof: The proof will be split in two parts. We first show that $\mu_n \to \mu$ wp1, where

$\mu = \inf\{\mu \ge 0: E\,a_i x(\mu\lambda^*) \le \beta_i\ (i=1,2,\ldots,m)\}$.

We then prove that $\mu = 1$.
Let $I_n(\mu)$ and $I(\mu)$ be respectively the indicator functions of the sets

$\Omega_n = \{\mu \ge 0: \frac{1}{n}\sum_{j=1}^{n} a_{ij} x_j(\mu\lambda^*) \le \beta_i\ (i=1,2,\ldots,m)\}$

and

$\Omega = \{\mu \ge 0: E\,a_i x(\mu\lambda^*) \le \beta_i\ (i=1,2,\ldots,m)\}$.

We want to show that, for $\mu > 0$, $I_n(\mu) \to I(\mu)$ wp1. Assume first that there exists at least one index $k$, $1 \le k \le m$, for which $E\,a_k x(\mu\lambda^*) > \beta_k$, so that $I(\mu) = 0$. We then have
$\Pr\{|I_n(\mu) - I(\mu)| > \varepsilon\} = \Pr\{\frac{1}{n}\sum_{j=1}^{n} a_{ij} x_j(\mu\lambda^*) \le \beta_i\ (i=1,2,\ldots,m)\}$
$\le \Pr\{\frac{1}{n}\sum_{j=1}^{n} a_{kj} x_j(\mu\lambda^*) - E\,a_k x(\mu\lambda^*) \le \beta_k - E\,a_k x(\mu\lambda^*)\}$
$\le e^{-2n(\beta_k - E\,a_k x(\mu\lambda^*))^2}$,   (5.4)

where the last inequality is derived from (5.2) by applying Lemma 5.1 to the i.i.d. r.v.'s $a_{kj} x_j(\mu\lambda^*)$ ($j=1,2,\ldots,n$).
Suppose now that $I(\mu) = 1$. Then

$\Pr\{|I_n(\mu) - I(\mu)| > \varepsilon\} = \Pr\{\exists k: \frac{1}{n}\sum_{j=1}^{n} a_{kj} x_j(\mu\lambda^*) > \beta_k\}$
$\le m \Pr\{\frac{1}{n}\sum_{j=1}^{n} a_{kj} x_j(\mu\lambda^*) - E\,a_k x(\mu\lambda^*) > \beta_k - E\,a_k x(\mu\lambda^*)\}$
$\le m\,e^{-2n(\beta_k - E\,a_k x(\mu\lambda^*))^2}$,   (5.5)

again by Lemma 5.1. Combining (5.4) and (5.5), it follows that the series $\sum_n \Pr\{|I_n(\mu) - I(\mu)| > \varepsilon\}$ is convergent for $\mu \ge 0$ and any $\varepsilon > 0$, so that $I_n(\mu) \to I(\mu)$ wp1.
Turning now to the sequence $\{\mu_n\}$, we have that, for any $\varepsilon > 0$,

$\Pr\{|\mu_n - \mu| > \varepsilon\} \le \Pr\{\mu_n - \mu > \varepsilon\} + \Pr\{\mu - \mu_n > \varepsilon\}$
$\le \Pr\{I_n(\mu + \frac{\varepsilon}{2}) - I(\mu + \frac{\varepsilon}{2}) = -1\} + \Pr\{I_n(\mu - \frac{\varepsilon}{2}) - I(\mu - \frac{\varepsilon}{2}) = 1\}$,

so that $\mu_n \to \mu$ wp1.

To show that $\mu = 1$, we recall condition (ii) of Lemma 3.3,

$E\,a_i x(\lambda^*) \le \beta_i$   ($i=1,2,\ldots,m$),   (5.6)

noticing that it implies that $\mu \le 1$. Suppose now that $\mu < 1$. Since the functions $E\,a_i x(\lambda)$ ($i=1,2,\ldots,m$) are strictly decreasing in each component $\lambda_k$ ($k=1,2,\ldots,m$) (see the Appendix), $\mu < 1$ implies that all inequalities in (5.6) hold strictly. This in turn implies, by condition (i) of Lemma 3.3, that $\lambda^* = 0$, contradicting Lemma 3.2. ∎
We are now in a position to prove the final result.

THEOREM 5.2: $\frac{1}{n} z_n^G(\lambda^*) \to L(\lambda^*)$ wp1.

Proof: By definition of $\mu_n$ it follows that $\frac{1}{n} z_n^G(\lambda^*) = \frac{1}{n}\sum_{j=1}^{n} c_j x_j(\mu_n\lambda^*)$. Let $\varphi(\mu) = E\,c\,x(\mu\lambda^*)$. We have that, for $\varepsilon > 0$,

$\Pr\{|\frac{1}{n} z_n^G(\lambda^*) - L(\lambda^*)| > \varepsilon\} = \Pr\{|\frac{1}{n}\sum_{j=1}^{n} c_j x_j(\mu_n\lambda^*) - L(\lambda^*)| > \varepsilon\}$
$\le \Pr\{|\frac{1}{n}\sum_{j=1}^{n} c_j x_j(\mu_n\lambda^*) - \varphi(\mu_n)| > \frac{\varepsilon}{2}\} + \Pr\{|\varphi(\mu_n) - L(\lambda^*)| > \frac{\varepsilon}{2}\}$.   (5.7)

The function $\varphi(\mu)$ is continuous, so that $\varphi(\mu_n) \to \varphi(1) = E\,c\,x(\lambda^*) = L(\lambda^*)$ wp1 by Theorem 5.1. It follows that

$\sum_n \Pr\{|\varphi(\mu_n) - L(\lambda^*)| > \frac{\varepsilon}{2}\} < \infty$.   (5.8)

Moreover, letting $F_n(\mu)$ be the distribution of $\mu_n$, we have that

$\Pr\{|\frac{1}{n}\sum_{j=1}^{n} c_j x_j(\mu_n\lambda^*) - \varphi(\mu_n)| > \frac{\varepsilon}{2}\} = \int \Pr\{|\frac{1}{n}\sum_{j=1}^{n} c_j x_j(\mu\lambda^*) - \varphi(\mu)| > \frac{\varepsilon}{2}\}\, dF_n(\mu) \le 2\,e^{-n\varepsilon^2/2}$,   (5.9)

the last inequality being derived from (5.3), since Lemma 5.1 can be applied to the i.i.d. r.v.'s $c_j x_j(\mu\lambda^*)$.

Combining (5.7), (5.8) and (5.9), one finally obtains the required result. ∎

Further investigation of the family of heuristics $G(y)$ seems to be an interesting topic for future research. Although the results of this paper carry over immediately to the minimization version of MK (with $\ge$ replacing $\le$), preliminary investigations suggest that the heuristic's worst-case behaviour for these two models differs substantially.
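Theorem 5.2 can also be observed numerically. For $m=1$ the weight is a scalar, and the ratio $c_j/(y\,a_j)$ induces the same ordering for every $y > 0$, so $G(\lambda^*)$ is the classical greedy rule; by Section 4, with $\beta = 1/4$ the limit value is $L(\lambda^*) = 13/32$. A seeded experiment of our own, for illustration:

```python
import random

def greedy_value(c, a, cap):
    """m = 1 case of G(y): take items by decreasing c_j / a_j until the next
    item in the order would overflow the capacity (the scalar weight y > 0
    does not affect the ordering)."""
    val = 0.0
    for j in sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True):
        if a[j] > cap:
            break
        cap -= a[j]
        val += c[j]
    return val

rng = random.Random(123)
n, beta = 50_000, 0.25
c = [rng.random() for _ in range(n)]
a = [rng.random() for _ in range(n)]
z_G = greedy_value(c, a, n * beta)
# per-item value z_G / n should be near L(lambda*) = 13/32 = 0.40625
```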
References

/1/ G. d'Atri, Probabilistic analysis of the knapsack problem. Technical Report N. 1, Groupe de Recherche 22, Centre National de la Recherche Scientifique, Paris, 1978.

/2/ G. Ausiello, A. Marchetti-Spaccamela and M. Protasi, Probabilistic analysis of the solution of the knapsack problem. Proc. 10th IFIP Conference, Springer-Verlag, New York, 1982, 557-565.

/3/ G. Dobson, Worst-case analysis of greedy heuristics for integer programming with nonnegative data. Math. Oper. Res. 7 (1982), 515-531.

/4/ A.V. Fiacco and G.P. McCormick, Nonlinear programming: sequential unconstrained minimization techniques. Wiley, New York, 1968.

/5/ A.M. Frieze and M.R.B. Clarke, Approximation algorithms for the m-dimensional 0-1 knapsack problem: worst-case and probabilistic analyses. Eur. J. Oper. Res. 15 (1984), 100-109.

/6/ M.R. Garey and D.S. Johnson, Computers and intractability: a guide to the theory of NP-completeness. Freeman, San Francisco, 1979.

/7/ A.V. Goldberg and A. Marchetti-Spaccamela, On finding the exact solution of a 0-1 knapsack problem. Proc. 16th Ann. ACM Symp. on Theory of Computing, Association for Computing Machinery, New York, 1984, 359-368.

/8/ W. Hoeffding, Probability inequalities for sums of bounded random variables. J. Am. Stat. Ass. 58 (1963), 13-30.

/9/ B. Korte and R. Schrader, On the existence of fast approximation schemes. Report No. 80163-OR, Institut fur Okonometrie und Operations Research, Rheinische Friedrich-Wilhelms-Universitat, Bonn, 1980.

/10/ J. Lorie and L. Savage, Three problems in capital rationing. Journal of Business 28 (1955), 229-239.

/11/ G.S. Lueker, On the average difference between the solutions to linear and integer knapsack problems. Applied Probability-Computer Science, the Interface, Vol. 1, Birkhauser, 1982.

/12/ R.T. Rockafellar, Convex Analysis. Princeton University Press, Princeton, 1970.

/13/ L. Schwartz, Cours d'analyse, Vol. 1. Hermann, Paris, 1967.
Appendix

The purpose of this section is to prove Lemmas 3.2 and 3.3. To this end, the following two results are useful.

LEMMA A.1: The functions $E\,c\,x(\lambda)$, $E\,a_j x(\lambda)$, $j=1,2,\ldots,m$, are continuously differentiable and strictly decreasing with respect to each component $\lambda_k$ ($k=1,2,\ldots,m$).
Proof: It will be shown that the partial derivatives

$\frac{\partial E\,a_j x(\lambda)}{\partial \lambda_k}$

exist and are continuous and strictly negative, for each $\lambda \ge 0$. This implies the required result for $E\,a_j x(\lambda)$; similar arguments can be applied to prove the Lemma also for $E\,c\,x(\lambda)$. To simplify the notation, assume that $k=m$ without loss of generality. Let $a' = (a_1, a_2, \ldots, a_{m-1})$ and define

$D = \{(a,c) \mid 0 \le a \le 1,\ 0 \le c \le 1,\ c \ge \sum_{i=1}^{m} \lambda_i a_i\}$,

$D' = \{(a',c) \mid 0 \le a' \le 1,\ 0 \le c \le 1,\ c \ge \sum_{i=1}^{m-1} \lambda_i a_i\}$,

$D_m = \{a_m \mid 0 \le a_m \le 1,\ \lambda_m a_m \le c - \sum_{i=1}^{m-1} \lambda_i a_i\}$.

Let $F(a_1,\ldots,a_m,c) = F_c(c) \prod_{i=1}^{m} F_i(a_i)$ and $F'(a_1,\ldots,a_{m-1},c) = F_c(c) \prod_{i=1}^{m-1} F_i(a_i)$. By Fubini's theorem, we have that

$E\,a_j x(\lambda) = \int_D a_j\, dF = \int_{D'} \Big( \int_{D_m} a_j\, dF_m \Big)\, dF' = \int_{D'} U(\lambda_m)\, dF'$,   (A.1)

where $U(\lambda_m) = \int_{D_m} a_j\, dF_m$, and the integration over $D_m$ runs over $0 \le a_m \le (c - \sum_{i=1}^{m-1} \lambda_i a_i)/\lambda_m$ if $c - \sum_{i=1}^{m-1} \lambda_i a_i < \lambda_m$, and over $0 \le a_m \le 1$ otherwise.
For $0 < \lambda_m < c - \sum_{i=1}^{m-1} \lambda_i a_i$, the function $U(\lambda_m)$ is constant, hence continuously differentiable. For $\lambda_m > c - \sum_{i=1}^{m-1} \lambda_i a_i$, by the absolute continuity of $F_m$ it follows that $U(\lambda_m)$ is differentiable and

$\frac{dU}{d\lambda_m} = -\,a_j\, f_m\Big( \frac{c - \sum_{i=1}^{m-1} \lambda_i a_i}{\lambda_m} \Big)\, \frac{c - \sum_{i=1}^{m-1} \lambda_i a_i}{\lambda_m^2}$.   (A.2)

For each $(a',c)$ in $D'$, the function $\frac{dU}{d\lambda_m}$ is continuous for all $\lambda_m > 0$, except on the hyperplane $c - \sum_{i=1}^{m-1} \lambda_i a_i = \lambda_m$.
Therefore, the function $U(\lambda_m)$ is continuously differentiable for $\lambda_m > 0$ almost everywhere in $D'$ with respect to $dF'$. Since $D'$ does not depend on $\lambda_m$, it can be concluded that

$\frac{\partial E\,a_j x(\lambda)}{\partial \lambda_m} = \int_{D'} \frac{dU}{d\lambda_m}\, dF' = \int_{D'_o} \frac{dU}{d\lambda_m}\, dF'$   (A.3)

exists and is continuous for $\lambda_m > 0$, where

$D'_o = \{(a',c) \mid 0 \le a' \le 1,\ 0 \le c \le 1,\ 0 \le c - \sum_{i=1}^{m-1} \lambda_i a_i \le \lambda_m\}$,

since $\frac{dU}{d\lambda_m} = 0$ on $D' - D'_o$.
It remains to be shown that the right derivative of $E\,a_j x(\lambda)$ in $\lambda_m = 0$ exists and is equal to the limit of (A.3) for $\lambda_m \to 0^+$. To see this, define

$D'_a = \{a' \mid 0 \le a' \le 1,\ \sum_{i=1}^{m-1} \lambda_i a_i \le 1\}$,

$D'_c = \{c \mid 0 \le c \le 1,\ 0 \le c - \sum_{i=1}^{m-1} \lambda_i a_i \le \lambda_m\}$,

$D_a = \{a \mid 0 \le a \le 1,\ \sum_{i=1}^{m-1} \lambda_i a_i \le 1\}$.

We have, for $\lambda_m > 0$,

$\lim_{\lambda_m \to 0^+} \frac{\partial E\,a_j x(\lambda)}{\partial \lambda_m} = \lim_{\lambda_m \to 0^+} \int_{D'_a} \int_{D'_c} \frac{dU}{d\lambda_m}\, dF_c\, dF'_a$,   (A.4)

where $F'_a(a') = \prod_{i=1}^{m-1} F_i(a_i)$. Applying the substitution $c = \sum_{i=1}^{m-1} \lambda_i a_i + \lambda_m a_m$ to the inner integral in (A.4), one obtains

$\lim_{\lambda_m \to 0^+} \frac{\partial E\,a_j x(\lambda)}{\partial \lambda_m} = -\int_{D_a} a_j a_m\, f_c\Big(\sum_{i=1}^{m-1} \lambda_i a_i\Big)\, dF_a$,   (A.5)

where $F_a(a) = \prod_{i=1}^{m} F_i(a_i)$.

It can be easily verified that the right derivative of $E\,a_j x(\lambda)$ in $\lambda_m = 0$ exists and is equal to the last term in (A.5).

To conclude that $E\,a_j x(\lambda)$ is strictly decreasing with respect to $\lambda_m$, observe that $\frac{\partial E\,a_j x(\lambda)}{\partial \lambda_m}$ is strictly negative for every $\lambda \ge 0$. Indeed, it follows from (A.2), (A.3) and (A.5) that this derivative is equal to the integral of a strictly negative function over a set of positive measure. ∎
LEMMA A.2: For $k=1,2,\ldots,m$, the following relation holds:

$\frac{\partial E\,c\,x(\lambda)}{\partial \lambda_k} = \sum_{i=1}^{m} \lambda_i \frac{\partial E\,a_i x(\lambda)}{\partial \lambda_k}$.
Proof: Again, without loss of generality, assume $k=m$. Define

$D_a = \{a \mid 0 \le a \le 1,\ \sum_{i=1}^{m} \lambda_i a_i \le 1\}$,

$D_c = \{c \mid 0 \le c \le 1,\ c \ge \sum_{i=1}^{m} \lambda_i a_i\}$.

We have, by Lemma A.1,

$\frac{\partial E\,c\,x(\lambda)}{\partial \lambda_m} = \frac{\partial}{\partial \lambda_m} \int_{D_a} \int_{D_c} c\, dF_c\, dF_a = \int_{D_a} \frac{dU_c}{d\lambda_m}\, dF_a$,   (A.6)

where $U_c(\lambda_m) = \int_{D_c} c\, dF_c$. Hence, for $\lambda_m > 0$,

$\frac{dU_c}{d\lambda_m} = -\Big(\sum_{i=1}^{m} \lambda_i a_i\Big)\, a_m\, f_c\Big(\sum_{i=1}^{m} \lambda_i a_i\Big)$,

which, substituted in (A.6), gives

$\frac{\partial E\,c\,x(\lambda)}{\partial \lambda_m} = -\int_{D_a} \Big(\sum_{i=1}^{m} \lambda_i a_i\Big)\, a_m\, f_c\Big(\sum_{i=1}^{m} \lambda_i a_i\Big)\, dF_a$.   (A.7)

Applying the substitution $a_m = (c - \sum_{i=1}^{m-1} \lambda_i a_i)/\lambda_m$ to (A.3) and performing straightforward calculations, one obtains analogously

$\frac{\partial E\,a_i x(\lambda)}{\partial \lambda_m} = -\int_{D_a} a_i a_m\, f_c\Big(\sum_{k=1}^{m} \lambda_k a_k\Big)\, dF_a$   ($i=1,2,\ldots,m$),   (A.8)

where $D'$, $F'$ and $U$ are defined as in Lemma A.1. Multiplying (A.8) by $\lambda_i$ and summing over $i$ yields exactly the right-hand side of (A.7), so that the required result easily follows for $\lambda_m > 0$. Continuity of the partial derivatives implies that Lemma A.2 holds also for $\lambda_m = 0$. ∎
Lemmas 3.2 and 3.3 can now be proved.

Proof of Lemma 3.2: For $k=1,2,\ldots,m$, we have

$\frac{\partial L}{\partial \lambda_k} = \beta_k + \frac{\partial E\,c\,x(\lambda)}{\partial \lambda_k} - E\,a_k x(\lambda) - \sum_{i=1}^{m} \lambda_i \frac{\partial E\,a_i x(\lambda)}{\partial \lambda_k} = \beta_k - E\,a_k x(\lambda)$

by Lemmas A.1 and A.2. This implies that $L(\lambda)$ is continuously differentiable and that $\nabla L(\lambda) = \beta - E\,a\,x(\lambda)$. Moreover, we have

$\frac{\partial^2 L}{\partial \lambda_j \partial \lambda_k} = \frac{\partial}{\partial \lambda_j}\big(\beta_k - E\,a_k x(\lambda)\big) = -\frac{\partial E\,a_k x(\lambda)}{\partial \lambda_j} > 0$,

again by Lemma A.1. This implies that $L(\lambda)$ is twice continuously differentiable and strictly convex over the region $\lambda \ge 0$, as its Hessian matrix $H(\lambda)$ is positive definite for $\lambda \ge 0$. Since $L(\lambda)$ is strictly convex, if it has a minimum over the region $\lambda \ge 0$, that minimum must be unique. To see that at least one minimum exists, one can proceed as in the proof of Lemma 3.4. ∎

Proof of Lemma 3.3: Since $L(\lambda)$ is continuously differentiable, and the constraints $\lambda \ge 0$ satisfy the first-order constraint qualifications at $\lambda^*$ /4/, the Kuhn-Tucker conditions for optimality hold at $\lambda^*$ and lead immediately to (i) and (ii). ∎
[Figure 1: the graph of $L(\lambda^*)$ as a function of $\beta$ ($m=1$).]

[Figure 2: the regions A, B, C, D, E in the $(\beta_1, \beta_2)$ plane.]

[Figure 3: the surface $L(\lambda^*)$ over the $(\beta_1, \beta_2)$ plane.]