LIDS-P-1523
December 1985
Revised August 1986
Relaxation Methods for Network Flow Problems with Convex Arc Costs*

by

Dimitri P. Bertsekas†, Patrick A. Hosein†, and Paul Tseng†
Abstract

We consider the standard single commodity network flow problem with both linear and strictly convex, possibly nondifferentiable, arc costs. For the case where all the arc costs are strictly convex we study the convergence of a dual Gauss-Seidel type relaxation method that is well suited for parallel computation. We then extend this method to the case where some of the arc costs are linear. As a special case we recover the relaxation method for the linear minimum cost network flow problem proposed in Bertsekas [1] and Bertsekas and Tseng [2].

*Work supported by the National Science Foundation under grant NSF-ECS-8217668.

†The authors are with the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139.
1. Introduction

Consider a directed graph with set of nodes N and set of arcs A. We will write j ~ (i,k) to denote that the head and tail nodes of arc j are i and k respectively. The network incidence matrix is denoted by E and has elements e_ij given by

    e_ij =  1    if i is the head node of arc j
         = -1    if i is the tail node of arc j
         =  0    otherwise.                                            (1)

We denote by x_j the flow of arc j, and by d_i the deficit of node i, which is defined by

    d_i = Σ_{j∈A} e_ij x_j,    ∀ i∈N.                                  (2)

In words, d_i is the balance of the flow outgoing from i and the flow coming into i. The vectors with coordinates x_j and d_i are denoted x and d respectively. Thus equation (2) is written as

    d = Ex.                                                            (3)
In what follows the association of particular deficit vectors and flow vectors via (3) should be clear from the context.

Each arc j has associated with it a cost function f_j: R → (-∞, +∞]. We consider the problem of minimizing total cost subject to a conservation of flow constraint at each node:

    minimize f(x) = Σ_{j∈A} f_j(x_j)
    subject to x ∈ C                                                   (4)

where C is the circulation subspace

    C = {x | d_i = 0, ∀ i∈N} = {x | Ex = 0}.                           (5)
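The definitions (1)-(5) can be checked numerically. The following is a small illustrative sketch in Python (the three-node network and the flow vectors are hypothetical, not taken from the paper):

```python
# A minimal numeric illustration of (1)-(5), using the convention of this
# paper: e_ij = +1 if i is the head node of arc j ~ (i,k), so that d = Ex
# gives each node's outgoing minus incoming flow.

N = [0, 1, 2]                     # nodes
A = [(0, 1), (1, 2), (0, 2)]      # arc j ~ (i, k): head i, tail k

# Incidence matrix E per equation (1).
E = [[0.0] * len(A) for _ in N]
for j, (i, k) in enumerate(A):
    E[i][j] = 1.0                 # i is the head node of arc j
    E[k][j] = -1.0                # k is the tail node of arc j

def deficits(x):
    """d = Ex, equation (3)."""
    return [sum(E[i][j] * x[j] for j in range(len(A))) for i in N]

x = [2.0, 2.0, -1.0]              # flow 2 along 0->1->2, -1 on arc (0,2)
d = deficits(x)
print(d)                          # [1.0, 0.0, -1.0]: x is not a circulation

x_circ = [2.0, 2.0, -2.0]
print(deficits(x_circ))           # [0.0, 0.0, 0.0]: x_circ lies in C
```

Note that the deficits always sum to zero, since each arc contributes equal and opposite terms at its head and tail nodes.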
We make the following assumptions on f_j:

Assumption A: Each function f_j is convex, lower semicontinuous, and there exists at least one feasible solution for problem (4), i.e. the effective domain of f, dom(f) = {x | f(x) < +∞}, and the circulation subspace C have a nonempty intersection.

Assumption B: The conjugate convex function of each f_j, defined by

    g_j(t_j) = sup_{x_j} {t_j x_j - f_j(x_j)},                         (6)

is real valued, i.e. -∞ < g_j(t_j) < +∞ for all t_j∈R.
It follows that the set of points where f_j is real valued, i.e.

    dom(f_j) = {x_j | f_j(x_j) < +∞},

is a nonempty interval whose right and left endpoints (possibly +∞ or -∞) we denote by c_j and ℓ_j respectively,

    c_j = sup{x_j | f_j(x_j) < +∞},    ℓ_j = inf{x_j | f_j(x_j) < +∞}.

We call c_j and ℓ_j the upper and lower capacity bounds of f_j respectively. It is easily seen that Assumptions A and B imply that for every t_j there is some x_j∈dom(f_j) attaining the supremum in (6), and furthermore

    lim_{|x_j|→∞} f_j(x_j) = +∞.

It follows that the cost function of (4) has bounded level sets, and therefore (using also the lower semicontinuity of f) there exists at least one optimal flow vector.
Assumptions A and B are satisfied for example if f_j is of the form

    f_j(x_j) = f̃_j(x_j)    if ℓ_j ≤ x_j ≤ c_j
             = +∞           otherwise                                  (7)

where ℓ_j and c_j are given lower and upper bounds on the flow of arc j, and f̃_j is a real valued convex function on the real line R. In this case g_j(t_j) is linear for |t_j| large enough, with slopes ℓ_j and c_j as t_j approaches -∞ and +∞ respectively (see Figure 1).
[Figure 1: The graphs of f_j(x_j) and g_j(t_j); g_j is linear with slope ℓ_j for t_j large negative and with slope c_j for t_j large positive.]
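The asymptotic behavior of the conjugate can be checked numerically. The sketch below assumes a hypothetical cost of the form (7) with f̃_j(x) = x²/2 on [ℓ_j, c_j] = [-1, 3]; for this choice the supremum in (6) is attained at the clipped point, and the slopes of g_j far to the right and left are c_j and ℓ_j, as in Figure 1:

```python
# Conjugate (6) of f_j(x) = x**2/2 restricted to [l, c], a hypothetical
# instance of the form (7).  The maximizer of t*x - x**2/2 over [l, c]
# is clip(t, l, c), so g below is the exact conjugate.

l, c = -1.0, 3.0

def g(t):
    x = min(max(t, l), c)         # maximizer of t*x - x**2/2 over [l, c]
    return t * x - x * x / 2.0

# Slope of g between two distant points on each side:
slope_pos = (g(101.0) - g(100.0)) / 1.0
slope_neg = (g(-100.0) - g(-101.0)) / 1.0
print(slope_pos, slope_neg)       # 3.0 -1.0, i.e. the capacity bounds c and l
```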
Problem (4) is called the optimal distribution problem in Rockafellar [3]. The same reference develops in detail a duality theory (a refinement of what can be obtained from Fenchel's duality theorem) involving the dual problem

    minimize g(t) = Σ_{j∈A} g_j(t_j)
    subject to t ∈ C⊥                                                  (8)

where t is the vector with coordinates t_j, j∈A, and C⊥ is the orthogonal complement of C. We call t_j the tension of the arc j, and C⊥ the tension subspace. From (1)-(3) and (5) we have that t∈C⊥ if and only if there exist scalars p_i, i∈N, called prices, such that
    t_j = p_i - p_k,    ∀ j∈A with j ~ (i,k)                           (9)

or equivalently

    t = E^T p                                                          (10)

where E^T is the transpose of the network incidence matrix E, and p is the vector with coordinates p_i, i∈N. Therefore the dual problem (8) can also be written as

    minimize q(p)
    subject to no constraints on p                                     (11)

where q is the dual functional

    q(p) = Σ_{j~(i,k)} g_j(p_i - p_k).                                 (12)

As shown in [3], p. 349, Assumption A guarantees that there is no duality gap, in the sense that the primal and dual optimal costs are opposites of each other.
An important fact for the purposes of the present paper is that (in view of Assumption B above) the dual problem (11) is an unconstrained optimization problem. If each function f_j is strictly convex, the dual functional is also differentiable ([4], p. 253), and as a result unconstrained smooth optimization methods can be applied for its solution. This is particularly so since the gradient of the dual cost can be easily calculated. Indeed, when f_j is strictly convex, for every tension vector t there exists a unique flow vector x such that

    x_j = arg max_{z_j} {t_j z_j - f_j(z_j)},    ∀ j∈A                 (13)

and it can be shown ([4], p. 218) that x_j is the gradient of g_j at t_j:

    x_j = ∇g_j(t_j),    ∀ j∈A.                                         (14)

From (12) we see that for a given price vector p the partial derivatives of the dual functional q are given by

    ∂q(p)/∂p_i = Σ_{j∈A} e_ij ∇g_j(t_j),    ∀ i∈N.                     (15)

Equivalently (cf. (2)), the partial derivative ∂q(p)/∂p_i equals the deficit of node i when the arc flows x_j are the unique scalars defined by (13).
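The identity between ∂q/∂p_i and the deficit d_i can be verified by finite differences. The sketch below uses hypothetical quadratic costs f_j(x) = x²/2, for which g_j(t) = t²/2 and ∇g_j(t) = t, on a small hypothetical network:

```python
# Finite-difference check of (15) for quadratic costs f_j(x) = x**2/2,
# so that the flow of arc j ~ (i,k) equals its tension p_i - p_k.

A = [(0, 1), (1, 2), (0, 2)]            # arc j ~ (i, k), hypothetical network

def q(p):
    return sum((p[i] - p[k]) ** 2 / 2.0 for (i, k) in A)

def deficit(p, s):
    d = 0.0
    for (a, b) in A:
        t = p[a] - p[b]                  # tension (9); flow equals t here
        if a == s: d += t
        if b == s: d -= t
    return d

p = [1.0, -0.5, 0.25]
h = 1e-6
for i in range(3):
    ph = p[:]; ph[i] += h
    print(round((q(ph) - q(p)) / h, 3), round(deficit(p, i), 3))
```

The two printed columns agree: the numerical partial derivative of the dual cost matches the deficit at each node.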
The differentiability of the dual cost when the primal cost is strictly convex motivates a Gauss-Seidel type of algorithm whereby, given a price vector p, one calculates the flows x_j = ∇g_j(t_j), j∈A, chooses a node i with positive (negative) deficit, and decreases (increases) the corresponding coordinate p_i up to the point where the corresponding partial derivative ∂q/∂p_i becomes zero. (This amounts to minimizing the dual cost q along the coordinate p_i.) One then repeats the procedure iteratively. The algorithm above is attractive not only because of its simplicity, but also because it lends itself naturally to distributed computation. Indeed, minimization of the dual cost along different price coordinates can be carried out simultaneously by several processors. The algorithm can also be carried out and analyzed in an asynchronous format, as described in Bertsekas and El Baz [5]. Simulations of a synchronous, parallel Gauss-Seidel method of this type have shown remarkable speedup in computation time.
Relaxation methods of the Gauss-Seidel type for unconstrained optimization have been studied extensively [6]-[10]. Unfortunately, they typically require for convergence something like a strict convexity assumption on the cost minimized, as well as boundedness of its level sets (see [10] for a counterexample). However, the dual cost (12) always has unbounded level sets, since adding the same constant to all node prices leaves the cost unchanged. Even if we remove this degree of freedom by restricting the price of some special node to be zero (i.e. passing to a quotient space), the dual cost may still have unbounded level sets, and is not strictly convex when the functions f_j are nondifferentiable, as in the important special case of problem (7), where the f_j imply capacity constraints.
One contribution of the present paper (Section 2) is to show convergence of a flow sequence generated by the Gauss-Seidel relaxation method to the unique optimal solution of the primal problem (4). This result is new, and is remarkable in that it requires a rather unconventional method of proof. Furthermore, the method is relaxed in that the minimization along each coordinate need be done only approximately, and the nodes can be relaxed in arbitrary order; the only requirement is that each node is relaxed infinitely often. Convergence of the corresponding price vector sequence to some optimal solution of the dual problem (11) is also shown, assuming the dual has an optimal solution. This result improves on a result by Pang [11] (see also an earlier paper by Cottle and Pang [12]), which asserts convergence of the flow vector sequence. Pang's result applies to a more general convex minimization problem, where the primal cost function need not be separable and the linear constraints need not have a network structure. However, it requires exact minimization along each coordinate and contains no assertion on convergence of the price vector sequence; also the convexity assumption is strengthened, in that each f_j is assumed differentiable and strongly convex (rather than just strictly convex, as we assume). The paper by Cottle and Pang [12] asserts convergence of a subsequence of the flow vector sequence for a transportation problem with quadratic arc costs, but also uses a nondegeneracy assumption and places a restriction on the way relaxation is carried out.

When some of the arc cost functions f_j are not strictly convex, the dual cost is not differentiable, and the exact Gauss-Seidel method described above breaks down. However, Bertsekas [1], and Bertsekas and Tseng [2], have proposed relaxation methods that work with linear arc costs and cope with situations where minimization along a single coordinate is not possible. They allow line minimization along directions involving several coordinates, and are conceptually related to the Gauss-Seidel method described above. Computational experimentation with standard benchmark problems and a code named RELAX [1],[2] shows that these methods are very promising and outperform, in terms of computation time, some of the best primal simplex and primal dual codes currently available.

The second objective of this paper (Section 3) is to propose a new relaxation method that in some sense bridges the gap between the strictly convex arc cost Gauss-Seidel method described earlier and the linear arc cost method of [1],[2]. We show that this method works with both nonlinear (convex) and linear arc costs, and contains as special cases both methods described above. To our knowledge, the only other known algorithm for network problems with both linear and nonlinear (convex), possibly nondifferentiable, arc costs is Rockafellar's fortified descent method ([3], Ch. 9). Our algorithm relates in roughly the same way to the Bertsekas-Tseng relaxation method as Rockafellar's relates to the classical primal-dual method. The last section of the paper provides results of computational experimentation with codes implementing both of the relaxation algorithms proposed.
2. The Relaxation Method for Strictly Convex Arc Costs

In this section there will be a standing assumption that, in addition to Assumptions A and B, each f_j is strictly convex. Two important consequences of this assumption are that the optimal flow vector is unique, and that the conjugate functions g_j are differentiable (in addition to being real valued by Assumption B). Furthermore ([4, p. 218])

    ∇g_j(t_j) = arg max_{x_j} {t_j x_j - f_j(x_j)},                    (16)

so that we have, for all t_j,

    g_j(t_j) = t_j x_j - f_j(x_j)    with x_j = ∇g_j(t_j).

Indeed, it is easily verified (see also [3]) that ∇g_j(t_j) is the unique scalar x_j satisfying, together with t_j, the Complementary Slackness (CS) condition

    f_j⁻(x_j) ≤ t_j ≤ f_j⁺(x_j)                                        (17)

where f_j⁻ and f_j⁺ denote the left and right derivatives of f_j at x_j (see Fig. 2.1). These derivatives are defined in the usual way in the interior of dom(f_j).

[Figure 2.1: The graph of f_j, showing the slopes f_j⁻(x_j) and f_j⁺(x_j) of the left and right derivatives at a point x_j.]
When -∞ < ℓ_j < c_j we define f_j⁻(ℓ_j) = -∞ and f_j⁺(ℓ_j) = lim_{x_j↓ℓ_j} f_j⁺(x_j); when ℓ_j < c_j < +∞ we define f_j⁺(c_j) = +∞ and f_j⁻(c_j) = lim_{x_j↑c_j} f_j⁻(x_j). Finally, when ℓ_j = c_j we define f_j⁻(ℓ_j) = -∞ and f_j⁺(c_j) = +∞.

We define the deficit functions d_i(p) by

    d_i(p) = Σ_{j∈A} e_ij ∇g_j(t_j),    ∀ i∈N,    where t = E^T p,

and denote by d(p) the vector with coordinates d_i(p). Note that the definition of d(p) is identical to that given in (2), except that here we have used the strict convexity of f_j to express flow and deficit as functions of the dual price vector. In view of the form (12) of the dual functional, the relation above yields

    d_i(p) = ∂q(p)/∂p_i,    ∀ i∈N.

Since d_i(p) is a partial derivative of a differentiable convex function, we have that d_i is continuous and monotonically nondecreasing in the coordinate p_i.
We now define a Gauss-Seidel type of relaxation algorithm similar to the one sketched in Section 1, whereby at each iteration a node s with positive (negative) deficit d_s(p) is chosen, and p_s is decreased (increased) with the aim of decreasing the dual cost q(p). More formally, we initially choose a price vector p and a fixed scalar γ in the interval (0,1). Then we execute repeatedly the relaxation iteration described below.
Relaxation Iteration for Strictly Convex Arc Costs

If d_i(p) = 0 for all i∈N, then STOP. Else choose any node s and set δ = d_s(p).

If δ = 0, do nothing.
If δ > 0, then decrease p_s so that 0 ≤ d_s(p) ≤ γδ.
If δ < 0, then increase p_s so that γδ ≤ d_s(p) ≤ 0.
The only assumption we make regarding the order in which nodes are chosen for relaxation is the following:

Assumption C: Every node in N is chosen as the node s in the relaxation iteration an infinite number of times.

The relaxation iteration is well defined, in the sense that every step in the iteration is executable. To see this, suppose that δ = d_s(p) > 0 and there does not exist a λ < 0 such that d_s(p + λe_s) ≤ γδ, where e_s denotes the s-th coordinate vector. Then, using the definition of d_s, ℓ_j, and c_j, it is easily seen that

    lim_{λ→-∞} d_s(p + λe_s) = Σ_{j~(s,k)} ℓ_j - Σ_{j~(k,s)} c_j ≥ γδ > 0,

which implies that the flow deficit of node s is positive for any flow x within the upper and lower arc capacity bounds, and contradicts the existence of a feasible flow (Assumption A). An analogous argument can be made for the case where δ < 0.
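For concreteness, the iteration can be sketched in Python for the special case of hypothetical quadratic costs f_j(x) = x²/2, where ∇g_j(t) = t and the coordinate minimization can be carried out exactly (d_s driven to zero, a special case of the γ-relaxed step). The network and starting prices are hypothetical:

```python
# Runnable sketch of the relaxation iteration for quadratic costs
# f_j(x) = x**2/2, so the flow of arc j ~ (i,k) equals its tension
# p_i - p_k and exact coordinate minimization is a single Newton step.

A = [(0, 1), (1, 2), (2, 0), (0, 2)]    # arc j ~ (i, k), hypothetical network
n = 3

def deficit(p, s):
    # d_s(p) = sum_j e_sj * grad g_j(t_j), with grad g_j(t) = t here
    return sum((p[i] - p[k]) * ((i == s) - (k == s)) for (i, k) in A)

p = [2.0, -1.0, 0.5]
for sweep in range(200):                # relax nodes cyclically (Assumption C)
    s = sweep % n
    # d_s is linear in p_s with slope equal to the number of incident
    # arcs, so one Newton step performs the exact coordinate minimization.
    deg = sum(1 for (i, k) in A if s in (i, k))
    p[s] -= deficit(p, s) / deg

print(max(abs(deficit(p, i)) for i in range(n)) < 1e-9)   # True: deficits -> 0
```

As the propositions of this section predict, the deficits are driven to zero; the prices themselves converge only up to an additive constant, reflecting the degree of freedom noted in Section 1.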
In order to obtain our convergence result, we must show that the sequence of flow vectors generated by the relaxation algorithm approaches the circulation subspace C (given by (5)). The line of argument that we will use is as follows: We will lower bound the amount of improvement in the dual functional q per iteration by a positive quantity. We will then show that, if the sequence of flow vectors does not approach the circulation subspace, this quantity itself can be lower bounded by a positive constant, which implies that the optimal value of the dual functional is -∞. This will contradict the finiteness of the optimal primal cost.
We will
denote the price vector generated at the rth
iteration by pr
rth
can be lower bounded by a positive
,
r
iteration by sr
0 1,2...
and the node operated on at the
1
r
2
To simplify notation we will
t..
denote
ET r
r
r
We denote
by .r
the
vector with coordinates
following from the CS condition
symmetry
?radient of the dual cost jo at tr~
thie
primal
cost f~ at ,..
network we will
positively
For any
oriented
in
Y},
and Y
jrzA.
(16) or
while tr
directed
use Y+ to denote the set
~,
of
Note the
jr
is the
(17):
is a subgradient
of
cycle
Y of the
arcs {j
to denote Y-Yf
AIj is
We first show three preliminary results:

Proposition 2.1: We have, for all r such that p^{r+1} ≠ p^r [i.e. d_{s^r}(p^r) ≠ 0],

    q(p^r) - q(p^{r+1}) ≥ Σ_{j∈A} [f_j(x_j^{r+1}) - f_j(x_j^r) - (x_j^{r+1} - x_j^r) t_j^r]    (18)

with equality holding if exact line minimization is used [i.e. d_{s^r}(p^{r+1}) = 0].

Proof: Fix an index r and denote s = s^r and Δp = p_s^{r+1} - p_s^r. From (6), (12), and (16) we have

    q(p^r) = Σ_{j∈A} [t_j^r x_j^r - f_j(x_j^r)],    q(p^{r+1}) = Σ_{j∈A} [t_j^{r+1} x_j^{r+1} - f_j(x_j^{r+1})].

Therefore

    q(p^r) - q(p^{r+1}) = Σ_{j∈A} [f_j(x_j^{r+1}) - f_j(x_j^r) - (x_j^{r+1} - x_j^r) t_j^r] - Σ_{j∈A} (t_j^{r+1} - t_j^r) x_j^{r+1}
                        = Σ_{j∈A} [f_j(x_j^{r+1}) - f_j(x_j^r) - (x_j^{r+1} - x_j^r) t_j^r] - Δp d_s(p^{r+1}),

where the last step uses the fact that p^{r+1} differs from p^r only in the sth coordinate. If d_s(p^r) > 0 then Δp < 0 and d_s(p^{r+1}) ≥ 0, while if d_s(p^r) < 0 then Δp > 0 and d_s(p^{r+1}) ≤ 0. In either case -Δp d_s(p^{r+1}) ≥ 0, with equality if d_s(p^{r+1}) = 0, and (18) follows. Q.E.D.
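The inequality (18) can be checked numerically. The sketch below takes one inexact relaxation step (stopping short of the coordinate minimum) for hypothetical quadratic costs f_j(x) = x²/2 on a hypothetical three-node network, and compares the dual improvement with the right side of (18):

```python
# Numeric check of Proposition 2.1 for f_j(x) = x**2/2, so that
# x_j = grad g_j(t_j) = t_j and g_j(t) = t**2/2.

A = [(0, 1), (1, 2), (0, 2)]

def tensions(p): return [p[i] - p[k] for (i, k) in A]
def q(p): return sum(t * t / 2.0 for t in tensions(p))
def deficit(p, s):
    return sum((p[i] - p[k]) * ((i == s) - (k == s)) for (i, k) in A)

p0 = [1.0, 0.0, -1.0]
s = 0
deg = sum(1 for (i, k) in A if s in (i, k))
p1 = p0[:]
p1[s] -= 0.7 * deficit(p0, s) / deg     # partial step: d_s is not zeroed

t0, t1 = tensions(p0), tensions(p1)
x0, x1 = t0, t1                          # flows equal tensions here
bound = sum(x1[j] * x1[j] / 2 - x0[j] * x0[j] / 2 - (x1[j] - x0[j]) * t0[j]
            for j in range(len(A)))
improvement = q(p0) - q(p1)
print(improvement >= bound - 1e-12)      # True: inequality (18) holds
```

The gap between the two sides equals -Δp d_s(p¹), exactly as in the proof; it vanishes when the coordinate minimization is exact.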
Proposition 2.2: The sequence {x^r} is bounded.

Proof: We first note that, from (16), t_j^r is a subgradient of f_j at x_j^r, so each term of the sum in the right side of (18) is nonnegative, and the sequence {q(p^r)} is nonincreasing. We also note that the total absolute deficit does not increase at any iteration, i.e.

    Σ_{i∈N} |d_i(p^{r+1})| ≤ Σ_{i∈N} |d_i(p^r)|.

(This follows from the fact that a flow change on an arc reflects itself in a change of the deficit of its head node and an opposite change in the deficit of its tail node, while the deficit of the node s^r chosen for relaxation cannot increase in absolute value or change sign during the iteration.) In particular, the sequence {d(p^r)} is bounded.

We now argue by contradiction. Suppose that {x^r} is unbounded. Then, using the boundedness of {d(p^r)}, it is easily seen (passing into another subsequence if necessary) that there exists a directed cycle Y and a subsequence R such that

    x_j^r → +∞ for all j∈Y⁺, and x_j^r → -∞ for all j∈Y⁻, as r → ∞, r∈R.

Since g_j is real valued (Assumption B), f_j grows faster than every linear function, so x_j^r → +∞ implies f_j⁻(x_j^r) → +∞, and x_j^r → -∞ implies f_j⁺(x_j^r) → -∞. Since by the CS condition (17) we have f_j⁻(x_j^r) ≤ t_j^r ≤ f_j⁺(x_j^r), it follows that

    t_j^r → +∞ for all j∈Y⁺, and t_j^r → -∞ for all j∈Y⁻, as r → ∞, r∈R.

This is a contradiction, since t^r lies in the tension subspace C⊥, and therefore

    Σ_{j∈Y⁺} t_j^r - Σ_{j∈Y⁻} t_j^r = 0    for all r.

Q.E.D.
The next result is remarkable in that it shows that, under a mild restriction on the way the relaxation iteration is carried out (a restriction that is very easy to satisfy, and is typically satisfied in practice in an automatic manner), the sequence of price vectors approaches the dual optimal set. The result depends on the monotonicity of the functions ∇g_j.
Proposition 2.3: Given p∈R^|N|, let s be a node and let p̄ denote a dual price vector obtained by applying the relaxation iteration to p using node s. Assume in addition that

    when d_s(p) > 0 [d_s(p) < 0], p̄_s is chosen greater (less) than or equal to the largest (smallest) minimizing point of the dual cost along the sth coordinate.    (19)

Then for all k∈N and all optimal dual price vectors p* we have

    min{0, min_{i∈N} (p_i - p*_i)} ≤ p̄_k - p*_k ≤ max{0, max_{i∈N} (p_i - p*_i)}.    (20)

Note: Assumption (19) is automatically satisfied if the dual cost has a unique minimizing point along the line {p + αe_s | α∈R}; it restricts the choice of p̄_s only when the dual cost is constant along an interval of the sth coordinate.
Proof: Fix an optimal price vector p*, and let k be such that p_k - p*_k = max_{i∈N} {p_i - p*_i}. We have

    p_k - p_i ≥ p*_k - p*_i,    ∀ i ≠ k,

and, since each ∇g_j is a nondecreasing function,

    ∇g_j(p_k - p_i) ≥ ∇g_j(p*_k - p*_i),    ∀ j ~ (k,i),
    ∇g_j(p_i - p_k) ≤ ∇g_j(p*_i - p*_k),    ∀ j ~ (i,k).

Thus d_k(p) ≥ d_k(p*) = 0.

Assume now that d_s(p) < 0 (the assertion holds trivially when d_s(p) = 0, since then p̄ = p, and the proof is similar when d_s(p) > 0). Consider the vector p̂ defined by

    p̂_s = p*_s + max_{i∈N} {p_i - p*_i},    p̂_i = p_i for all i ≠ s.

Then

    max_{i∈N} {p̂_i - p*_i} = p̂_s - p*_s = max_{i∈N} {p_i - p*_i},

and, by the preceding argument, d_s(p̂) ≥ 0. Since d_s(p) < 0, it follows from assumption (19) and the monotonicity of d_s in the sth coordinate that p_s < p̄_s ≤ p̂_s, while p̄_i = p_i for all i ≠ s. The desired assertion (20) follows. Q.E.D.
Note that Proposition 2.3 implies, among other things, that if (19) is satisfied at all iterations and the dual problem has an optimal solution, then the sequence {p^r} generated by the relaxation method is bounded; moreover, if we can show that {p^r} accumulates at an optimal price vector, then {p^r} must converge to that price vector.
We are now ready to show our main result.

Proposition 2.4: Let {p^r, x^r} be a sequence generated by the relaxation method for strictly convex arc costs. Then:

a)  lim_{r→∞} d(p^r) = 0.    (21)

b)  lim_{r→∞} x^r = x*, where x* is the unique optimal flow vector.

c)  lim_{r→∞} q(p^r) = min_p q(p).

d)  If condition (19) is satisfied at each iteration, and the dual problem has an optimal solution, then

    lim_{r→∞} p^r = p*    (23)

where p* is some optimal price vector.
Proof: a) We first show that

    lim_{r→∞} d_{s^r}(p^r) = 0.    (24)

Indeed, if this is not so, there must exist an ε > 0 and a subsequence R such that |d_{s^r}(p^r)| ≥ ε for all r∈R. Without loss of generality we assume that d_{s^r}(p^r) ≥ ε for all r∈R. Since d_{s^r}(p^{r+1}) ≤ γ d_{s^r}(p^r), at the rth iteration some arc incident to node s^r must change its flow by at least Δ, where

    Δ = (1-γ)ε/|A|.

By passing to a subsequence if necessary, we may assume that this happens for the same arc j* for all r∈R, i.e.

    |x_{j*}^{r+1} - x_{j*}^r| ≥ Δ    for all r∈R.

Using the boundedness of {x^r} (Proposition 2.2), we may also assume that {x_{j*}^r}_R converges to some x̄_{j*}. Using Proposition 2.1, the convexity of f_{j*}, and the fact that every term of the sum in (18) is nonnegative, we have for all r∈R

    q(p^r) - q(p^{r+1}) ≥ f_{j*}(x_{j*}^{r+1}) - f_{j*}(x_{j*}^r) - (x_{j*}^{r+1} - x_{j*}^r) t_{j*}^r
                        ≥ min{ f_{j*}(x_{j*}^r + Δ) - f_{j*}(x_{j*}^r) - Δ f_{j*}⁺(x_{j*}^r),
                               f_{j*}(x_{j*}^r - Δ) - f_{j*}(x_{j*}^r) + Δ f_{j*}⁻(x_{j*}^r) }.

Taking the limit as r → ∞, r∈R, and using the convergence x_{j*}^r → x̄_{j*}, the strict convexity of f_{j*}, and the upper (lower) semicontinuity of f_{j*}⁺ (f_{j*}⁻), we obtain

    lim inf_{r→∞, r∈R} [q(p^r) - q(p^{r+1})] > 0.

Since {q(p^r)} is nonincreasing, this implies that lim_{r→∞} q(p^r) = -∞. But this is not possible, because from (6) and (12) we have

    q(p) ≥ -Σ_{j∈A} f_j(x_j)    for all p and all x∈C,

and the right side is finite for a feasible x (cf. Assumption A). Therefore (24) is proved by contradiction.
We now show (21). Choose any i∈N and any ε > 0, and let R be the set of indices r for which d_i(p^r) > 2ε. We claim that R cannot be infinite. To see this, take any r∈R and let r' be the first index with r' > r such that i = s^{r'} [such an index exists by Assumption C]. During the iterations r, r+1, ..., r'-1, node i is not chosen for relaxation, and its deficit can increase during an iteration only if a node s chosen for relaxation during that iteration is a neighbor of i; moreover, since a flow change on an arc reflects itself in opposite deficit changes at its end nodes, any such increase is bounded by the decrease of d_s(p^r) during the iteration, which, in view of (24), tends to zero. On the other hand, in view of (24), we have d_i(p^{r'}) < ε for all r∈R sufficiently large, so the deficit of node i must decrease by more than ε during the iterations r, ..., r'-1, and this decrease must be matched by a comparable decrease of the total absolute deficit Σ_{i∈N} |d_i(p^r)|. Since, as noted earlier, the total absolute deficit is nonnegative and cannot increase at any iteration, this cannot happen for infinitely many r∈R, and the claim follows. Since ε > 0 is arbitrary, we obtain lim sup_{r→∞} d_i(p^r) ≤ 0. Similarly we can show that lim inf_{r→∞} d_i(p^r) ≥ 0, and (21) follows.
b) Since x_j^r = ∇g_j(t_j^r) for all j and r, we have the CS condition

    f_j⁻(x_j^r) ≤ t_j^r ≤ f_j⁺(x_j^r),    ∀ j∈A, r = 0,1,...    (25)

If Y is any cycle, we have Σ_{j∈Y⁺} t_j^r - Σ_{j∈Y⁻} t_j^r = 0, so from (25) we obtain

    Σ_{j∈Y⁺} f_j⁻(x_j^r) ≤ Σ_{j∈Y⁻} f_j⁺(x_j^r).    (26)

Since {x^r} is bounded (cf. Proposition 2.2), let {x^r}_R be a subsequence converging to some x̄. From part a) we have x̄∈C. From (26), and the lower (upper) semicontinuity of f_j⁻ (f_j⁺), we obtain for all cycles Y

    Σ_{j∈Y⁺} f_j⁻(x̄_j) ≤ Σ_{j∈Y⁻} f_j⁺(x̄_j).

This implies that x̄ is an optimal flow ([3], Ch. 8), and therefore must be equal to the unique optimal flow x*. Since this is true for every convergent subsequence of the bounded sequence {x^r}, part b) follows.

c) From (6) and (12) we have q(p) + f(x) ≥ 0 for every p and every x∈C; in particular, q(p^r) ≥ -f(x*) for all r, and, since there is no duality gap, min_p q(p) = -f(x*). On the other hand, from (6) and (16), and using the fact that Σ_{j∈A} t_j^r x*_j = 0 (since x*∈C while t^r∈C⊥),

    q(p^r) = Σ_{j∈A} [t_j^r x_j^r - f_j(x_j^r)] = Σ_{j∈A} [t_j^r (x_j^r - x*_j) - f_j(x_j^r)].

In fact it is easily seen that we can construct a subsequence R such that, for every arc j, either {t_j^r}_R is bounded, or lim sup_{r∈R} t_j^r = +∞ and x*_j = c_j, or lim inf_{r∈R} t_j^r = -∞ and x*_j = ℓ_j [an unbounded sequence of tensions forces, through the CS condition (25) and the convergence x^r → x*, the corresponding flow to its capacity bound]. In the first case t_j^r (x_j^r - x*_j) → 0 along R; in the second case x_j^r - x*_j = x_j^r - c_j ≤ 0, so that t_j^r (x_j^r - x*_j) ≤ 0 for all r∈R sufficiently large, and similarly in the third case. Using also the lower semicontinuity of f_j, which yields lim sup_{r∈R} [-f_j(x_j^r)] ≤ -f_j(x*_j), we obtain by taking the limit along R

    lim_{r→∞} q(p^r) ≤ -f(x*) = min_p q(p).

This, together with the relation q(p^r) ≥ -f(x*) shown above, proves the desired result.

d) By Proposition 2.3, {p^r} is bounded. Let {p^r}_R be a subsequence converging to a price vector p̄, and let t̄ = E^T p̄. From (25) we have, for all r and all j∈A,

    f_j⁻(x_j^r) ≤ t_j^r ≤ f_j⁺(x_j^r),

so, using part b) together with the lower (upper) semicontinuity of f_j⁻ (f_j⁺), it follows that

    f_j⁻(x*_j) ≤ t̄_j ≤ f_j⁺(x*_j),    ∀ j∈A,

where x* is the optimal flow vector. Therefore t̄ satisfies, together with x*, the complementary slackness conditions, and p̄ must be an optimal dual price vector. Finally, Proposition 2.3 shows that {p^r} cannot have two different optimal price vectors as limit points, and the conclusion (23) follows. Q.E.D.
3. The Relaxation Method for Mixed Linear and Strictly Convex Arc Costs

We first introduce some terminology. We will say that a point b∈dom(f_j) is a breakpoint of f_j if f_j⁻(b) < f_j⁺(b). Note that the dual cost function g_j is separable and is piecewise either linear or strictly convex. Roughly speaking, each linear piece (breakpoint) of the primal cost function f_j corresponds to a breakpoint (linear piece) of g_j (see Fig. 3.1).

Assumption D: f_j⁺(x_j) > -∞ and f_j⁻(x_j) < +∞ for all x_j∈dom(f_j).

In the terminology of ([3], Ch. 8), Assumption D guarantees that every feasible primal solution is regularly feasible, and implies (together with Assumptions A and B) that the dual problem has an optimal solution ([3], p. 360).
For a given ε > 0, we say that x∈R^|A| and p∈R^|N| satisfy ε-Complementary Slackness (ε-CS for short) if

    f_j⁻(x_j) - ε ≤ t_j ≤ f_j⁺(x_j) + ε,    ∀ j∈A    (27)

where t = E^T p. For a given p, (27) defines upper and lower bounds on the flow vector, called ε-bounds:

    c_j^ε = max{x_j | f_j⁻(x_j) ≤ t_j + ε},    ℓ_j^ε = min{x_j | f_j⁺(x_j) ≥ t_j - ε}.    (28)

Then, for x and p with t = E^T p, ε-CS is equivalent to

    ℓ_j^ε ≤ x_j ≤ c_j^ε,    ∀ j∈A.    (29)

[Figure 3.1: The graphs of f_j(x_j) and g_j(t_j); breakpoints of f_j correspond to linear pieces of g_j and vice versa.]

For a given t_j, the bounds ℓ_j^ε and c_j^ε can be obtained from the graph of the subdifferential mapping of f_j as shown in Figures 3.2-3.3.
Intuition suggests that if x and p satisfy ε-CS, x is in the circulation subspace C, and ε is small, then both x and p should be near optimal. This idea will be made precise later, when we explore the near optimality properties of the solution generated by a relaxation algorithm.

The definition of ε-CS is related to the notion of ε-subgradient introduced in nondifferentiable optimization [13], and a similar idea is used in the fortified descent method of Rockafellar [3]. The latter, for a given p and t = E^T p, uses different lower and upper bounds on x_j, given by

    sup_{λ>0} [g_j(t_j) - g_j(t_j-λ) - ε]/λ    and    inf_{λ>0} [g_j(t_j+λ) - g_j(t_j) + ε]/λ.

Our bounds (28) seem simpler for implementation purposes, particularly when some of the cost functions f_j are linear within their effective domain.
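As an illustration of the bounds (28), consider a hypothetical arc with a linear cost f_j(x) = a x on [ℓ_j, c_j]; with the endpoint conventions for f_j⁻ and f_j⁺ of Section 2, the ε-bounds reduce to a comparison of the tension t_j with the slope a:

```python
# The eps-bounds (28) for a linear arc cost f_j(x) = a*x on [l, c]
# (a hypothetical instance).  For such an arc, f- equals a on (l, c]
# and -inf at l, while f+ equals a on [l, c) and +inf at c, so the
# bounds are step functions of the tension t.

def eps_bounds(a, l, c, t, eps):
    # c_eps = max{x : f-(x) <= t + eps}
    c_eps = c if t + eps >= a else l
    # l_eps = min{x : f+(x) >= t - eps}
    l_eps = l if t - eps <= a else c
    return l_eps, c_eps

a, l, c, eps = 2.0, 0.0, 5.0, 0.1
print(eps_bounds(a, l, c, 1.0, eps))   # (0.0, 0.0): t well below a forces x = l
print(eps_bounds(a, l, c, 2.0, eps))   # (0.0, 5.0): |t - a| <= eps leaves x free
print(eps_bounds(a, l, c, 3.0, eps))   # (5.0, 5.0): t well above a forces x = c
```

This is the sense in which (28) is simple to implement for linear arcs: the flow is unconstrained within [ℓ_j, c_j] only while the tension is within ε of the cost slope.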
[Figures 3.2-3.3: The ε-bounds ℓ_j^ε and c_j^ε read off the graph of f_j and of its subdifferential mapping.]

For a given x within the ε-bounds, we define the deficit of node i as in (2), and say that a sequence of nodes {n_1, n_2, ..., n_k} forms a flow augmenting path if

    d_{n_1} < 0,    d_{n_k} > 0,

and

    x_j < c_j^ε    if j ~ (n_m, n_{m+1}),    m∈{1,...,k-1},
    x_j > ℓ_j^ε    if j ~ (n_{m+1}, n_m),    m∈{1,...,k-1}.

We will call

    min{ -d_{n_1}, d_{n_k}, min{c_j^ε - x_j | j ~ (n_m, n_{m+1})}, min{x_j - ℓ_j^ε | j ~ (n_{m+1}, n_m)} }

the capacity of the path. The relaxation algorithm of this section uses the labeling method of Ford and Fulkerson [14] for finding flow augmenting paths, and for augmenting flow along them.
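The capacity of a flow augmenting path, as defined above, can be computed directly. The sketch below uses a hypothetical three-node path with given deficits, flows, and ε-bounds:

```python
# Capacity of a flow augmenting path {n_1, ..., n_k}, computed per the
# definition of this section.  Arcs are keyed by (head, tail) and map to
# their current flow and eps-bounds; all numbers are hypothetical.

def path_capacity(path, d, arcs):
    """arcs maps (i, k) -> (x_j, l_eps, c_eps)."""
    cap = min(-d[path[0]], d[path[-1]])      # the endpoint deficits
    for m in range(len(path) - 1):
        a, b = path[m], path[m + 1]
        if (a, b) in arcs:                   # oriented along the path: room to increase
            x, le, ce = arcs[(a, b)]
            cap = min(cap, ce - x)
        else:                                # oriented against the path: room to decrease
            x, le, ce = arcs[(b, a)]
            cap = min(cap, x - le)
    return cap

d = {0: -3.0, 1: 0.0, 2: 4.0}                # d_{n_1} < 0, d_{n_k} > 0
arcs = {(0, 1): (1.0, 0.0, 2.0),             # forward arc: c_eps - x = 1.0
        (2, 1): (2.5, 1.0, 4.0)}             # backward arc: x - l_eps = 1.5
print(path_capacity([0, 1, 2], d, arcs))     # 1.0
```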
For a given tension vector t∈C⊥ and any subset of nodes S, we define C_ε(S,t) by

    C_ε(S,t) = Σ_{j∈[S,N\S]} ℓ_j^ε - Σ_{j∈[N\S,S]} c_j^ε

where we use the notation

    [S,N\S] = {j | j ~ (i,k), i∈S, k∉S},    [N\S,S] = {j | j ~ (i,k), i∉S, k∈S}.

We also define the |N|-vector u(S) with coordinates

    u_i(S) = -1    if i∈S
    u_i(S) = 0     if i∉S.

The importance of these notions is due to the fact that C_ε(S,t) > 0 implies that u(S) is a dual descent direction at p, where p is any price vector satisfying E^T p = t. This follows from the fact that the directional derivative of q at p in the direction u(S), defined by

    q'(p; u(S)) = lim_{λ↓0} [q(p + λu(S)) - q(p)]/λ,

satisfies

    q'(p; u(S)) = Σ_{j∈[N\S,S]} c_j^0 - Σ_{j∈[S,N\S]} ℓ_j^0 ≤ -C_ε(S,t) < 0,

where ℓ_j^0 and c_j^0 are the ε-bounds corresponding to ε = 0, and we are making use of the fact that ℓ_j^ε ≤ ℓ_j^0 and c_j^0 ≤ c_j^ε for all j.
We now describe the relaxation algorithm. The algorithm is iterative and uses the ε-CS idea. The scalar ε ≥ 0 is kept fixed throughout the algorithm. At the beginning of each iteration we have a dual price vector p and a flow vector x satisfying ℓ_j^ε ≤ x_j ≤ c_j^ε for all j∈A. If x∈C then we terminate. Otherwise we use labeling to either find a flow augmenting path, in which case a flow augmentation is performed to bring x "closer" to C, or to find a dual descent direction, in which case a dual descent along this direction is performed. When each f_j is linear within its effective domain, and all problem data is integer, the algorithm with ε = 0 coincides with the relaxation method of [1],[2]. When each f_j is strictly convex and ε = 0, the algorithm coincides with the exact line minimization version of the algorithm of the previous section.

Relaxation Iteration
Step 0: Given p and x satisfying ℓ_j^ε ≤ x_j ≤ c_j^ε for all j∈A, let t and d be the corresponding tension and deficit vectors.

Step 1: Pick a node s such that d_s > 0. If no such node exists, terminate. Else set all nodes to be unlabeled and unscanned. Give the label "0" to node s, set S = ∅, and go to Step 2.

Step 2: Choose a labeled but unscanned node k. Set S ← S∪{k} and go to Step 3.

Step 3: Scan the label of the node k as follows: Give the label "k" to all unlabeled nodes m such that x_j < c_j^ε for j ~ (m,k), and to all unlabeled nodes m such that x_j > ℓ_j^ε for j ~ (k,m). If C_ε(S,t) > 0 then go to Step 5. Else, if for any of the nodes m labeled from k we have d_m < 0, then go to Step 4. Else go to Step 2.

Step 4 (Flow Augmentation Step): A flow augmenting path has been identified which starts at the node m (with d_m < 0) found in Step 3 and ends at the node s. The path can be constructed by tracing labels starting from m. Let β be the capacity of the path. Increase by β the flow of all the arcs on the path oriented in the direction from m to s, and decrease by β the flow of all other arcs on the path. Update the deficit vector d and return.

Step 5 (Dual Descent Step): Determine λ* such that

    q(p + λ*u(S)) = min_{λ≥0} {q(p + λu(S))}.

Set p ← p + λ*u(S). Update x, and update the bounds ℓ_j^ε and c_j^ε, so as to maintain the ε-CS condition ℓ_j^ε ≤ x_j ≤ c_j^ε, and return.
Validity and Finite Termination of the Relaxation Iteration

We will show that, under Assumption D, all steps in the Relaxation Iteration are executable and that the iteration terminates in a finite number of operations.

Steps 0, 1, and 3 are trivially executable. Step 2 is certainly executable on its first pass, since the node s is labeled but unscanned. To show that it remains executable on subsequent passes, we only need to verify that each time we go to Step 2 from Step 3 there always exists a labeled but unscanned node. Indeed, if in Step 2 all labeled nodes are also scanned, the rule for labeling ensures that x_j = ℓ_j^ε for all j∈[S,N\S] and x_j = c_j^ε for all j∈[N\S,S], and therefore

    Σ_{i∈S} d_i = Σ_{j∈[S,N\S]} x_j - Σ_{j∈[N\S,S]} x_j = Σ_{j∈[S,N\S]} ℓ_j^ε - Σ_{j∈[N\S,S]} c_j^ε = C_ε(S,t).

Since node s has positive deficit and all other labeled nodes have nonnegative deficits (otherwise we would have gone to Step 4), we obtain C_ε(S,t) > 0, in which case in the previous pass through Step 3 we would have branched to Step 5 rather than to Step 2.

Step 4 is executable since the labeling implies that a flow augmenting path exists from node m to node s, so a flow augmentation is possible. Step 5 is executable since C_ε(S,t) > 0 implies that u(S) is a dual descent direction at p. In that case the convexity of q implies that either there exists a stepsize λ* achieving the minimum of the dual cost along the direction u(S), or else

    q'(p + λu(S); u(S)) < 0,    ∀ λ ≥ 0.

In the latter case it can be easily seen, using the definition of ℓ_j and c_j, that either ℓ_j = -∞ for some j∈[S,N\S] or c_j = +∞ for some j∈[N\S,S], in which case Assumption A is violated, or f_j⁺(ℓ_j) = -∞ for some j∈[S,N\S] or f_j⁻(c_j) = +∞ for some j∈[N\S,S], in which case Assumption D is violated.

To complete the proof that the relaxation iteration terminates in a finite number of operations, we note that we cannot loop between Step 2 and Step 3 infinitely often, since the number of nodes is finite and the number of scanned nodes is increased by one each time we visit Step 3.
We next show that the relaxation algorithm, when applied in conjunction with an easily implementable labeling rule, terminates in a finite number of iterations. The proof is divided into two separate parts. The first part involves showing that the number of dual descent steps is necessarily finite. This is done by arguing that the dual cost tends to −∞ if the number of dual descent steps is infinite. The second part involves showing that the number of flow augmentations between successive dual descent steps is finite. This is done by choosing an appropriate labeling scheme for the relaxation algorithm and showing that the number of flow augmentations is finite under the chosen scheme. For this purpose we will propose two schemes: breadth-first search and arc discrimination.

We first show that the stepsize in each dual descent step is bounded from below by ε. Indeed, our definition of ε-CS was motivated primarily by this fact.
Proposition 3.1: The stepsize in each dual descent step is greater than ε.

Proof: Under Assumption B, q is subdifferentiable everywhere. Let S denote the subset of nodes corresponding to the descent direction generated by the relaxation iteration; in other words, the dual descent direction u is given by

u_i = 1 if i ∈ S,   u_i = 0 if i ∉ S,

and S satisfies C_E(S,t) > 0. Now consider the vectors p' = p + εu and t' = E^T p'. Then

t'_j = t_j − ε  if j ∈ [S,N\S],
t'_j = t_j + ε  if j ∈ [N\S,S],
t'_j = t_j      otherwise,

and from the definition of the ε-bounds it follows that the directional derivative of q at p' along u satisfies

q'(p'; u) ≤ −C_E(S,t) < 0.

Since q is convex, q'(p; u) < 0 and q'(p + εu; u) < 0 imply that q'(p + αu; u) < 0 for all α ∈ [0,ε]. Therefore the stepsize in a dual descent step is greater than ε. Q.E.D.
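A toy numerical illustration of the convexity argument just used (hypothetical data, not the paper's dual functional): for a convex piecewise-linear function of the stepsize, if the right derivative at ε is still negative, then every minimizing stepsize must exceed ε.

```python
def right_derivative(q, lam, h=1e-6):
    # one-sided numerical right derivative of q at lam
    return (q(lam + h) - q(lam)) / h

# convex piecewise-linear q with minimizer at lam = 2 (hypothetical example)
q = lambda lam: max(-lam, 0.5 * lam - 3.0)

eps = 1.0
assert right_derivative(q, 0.0) < 0 and right_derivative(q, eps) < 0
# by convexity the right derivative is nondecreasing, so q is strictly
# decreasing on [0, eps] and the minimizing stepsize exceeds eps
lam_min = min(range(0, 500), key=lambda k: q(k / 100.0)) / 100.0
assert lam_min > eps
```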
We will now use Proposition 3.1 to prove that the number of dual descent steps is necessarily finite. The following result is a first step in this direction.
Proposition 3.2: Let p^r denote the price vector generated by the relaxation algorithm just before the r-th dual descent step, r ∈ {0,1,2,...}, let t^r = E^T p^r, and let S^r denote the node subset corresponding to the descent direction at the r-th dual descent step. Then

q(p^r) − q(p^{r+1}) ≥ Σ_{j∈[S^r,N\S^r]} [g_j(t_j^r) − g_j(t_j^r − ε)] + Σ_{j∈[N\S^r,S^r]} [g_j(t_j^r) − g_j(t_j^r + ε)] > 0.   (32)

Proof: From the definition of u(S^r) we have

t_j^{r+1} = t_j^r − λ^r if j ∈ [S^r,N\S^r],  t_j^{r+1} = t_j^r + λ^r if j ∈ [N\S^r,S^r],  t_j^{r+1} = t_j^r otherwise,

where λ^r denotes the stepsize of the r-th dual descent step, and from Proposition 3.1 we have λ^r > ε. Since the stepsize minimizes the dual cost along the direction u(S^r), we have q(p^{r+1}) ≤ q(p^r + εu(S^r)), so that

q(p^r) − q(p^{r+1}) ≥ q(p^r) − q(p^r + εu(S^r)) = Σ_{j∈[S^r,N\S^r]} [g_j(t_j^r) − g_j(t_j^r − ε)] + Σ_{j∈[N\S^r,S^r]} [g_j(t_j^r) − g_j(t_j^r + ε)],

where the right side is positive because q'(p^r + εu(S^r); u(S^r)) < 0 (the strict inequality follows from Proposition 3.1) and q is convex. Q.E.D.

Proposition 3.3: Under Assumption D the number of dual descent steps is finite.

Proof: We argue by contradiction. Suppose that the number of dual descent steps is infinite. We denote the price vector, the tension vector, and the flow vector generated by the relaxation algorithm at the r-th dual descent step by p^r, t^r and x^r respectively. First we note the following property of the tension sequence {t^r}: there exists a subsequence R such that, for each arc j∈A, the sequence {t_j^r}_{r∈R} either is bounded, or tends to +∞, or tends to −∞ (such a subsequence exists since the number of arcs is finite). For every fixed Δ > 0 we then have, for each arc j∈A,

lim_{r→∞, r∈R} g_j^+(t_j^r + Δ) = lim_{r→∞, r∈R} g_j^−(t_j^r − Δ) = c_j  if {t_j^r}_{r∈R} tends to +∞,   (33)

lim_{r→∞, r∈R} g_j^+(t_j^r + Δ) = lim_{r→∞, r∈R} g_j^−(t_j^r − Δ) = ℓ_j  if {t_j^r}_{r∈R} tends to −∞,   (34)

while if {t_j^r}_{r∈R} is bounded then (33) and (34) trivially hold vacuously. We now partition the node set N into L+1 nonempty subsets N_0, N_1, ..., N_L as follows.
Consider a graph identical to the original except that all arcs j for which {t_j^r}_{r∈R} is bounded are discarded, and all arcs j for which {t_j^r}_{r∈R} tends to −∞ are reversed in their orientation. Since the sum of the tensions along a directed cycle is zero, we see that this graph is acyclic. The set N_0 is the set of nodes of this acyclic graph having no outgoing arcs. The set N_1 is obtained similarly after all arcs incident to N_0 in the acyclic graph have been discarded, etc. For a = 1,2,...,L we define the following arc sets:

A_a^+ = { j ~ (i,k) | i ∈ N_a, k ∈ ∪_{τ<a} N_τ },
A_a^- = { j ~ (i,k) | k ∈ N_a, i ∈ ∪_{τ<a} N_τ },

so that each pair (∪_{τ≥a} N_τ, ∪_{τ<a} N_τ) defines a cut in the network, an arc j belongs to some A_a^+ if and only if {t_j^r}_{r∈R} tends to +∞, and j belongs to some A_a^- if and only if {t_j^r}_{r∈R} tends to −∞.

Consider any fixed positive scalar Δ and any a ∈ {1,...,L}. From (33) and (34) we have

lim_{r→∞, r∈R} g_j^−(t_j^r − Δ) = c_j for all j ∈ A_a^+,  lim_{r→∞, r∈R} g_j^+(t_j^r + Δ) = ℓ_j for all j ∈ A_a^-.   (35)

Let u be given by

u_i = 1 if i ∈ ∪_{τ≥a} N_τ,  u_i = 0 otherwise.   (36)

Then it follows from (35) that

lim_{r→∞, r∈R} q'(p^r + Δu; u) = θ_a,  where θ_a = Σ_{j∈A_a^-} ℓ_j − Σ_{j∈A_a^+} c_j.   (37)

We will argue that θ_a = 0. Clearly we cannot have θ_a > 0, since this would imply that there does not exist a primal feasible solution, violating Assumption A. We also cannot have θ_a < 0, since then (37) and the convexity of q imply that for r sufficiently large, r∈R,

q(p^r + Δu) ≤ q(p^r) + Δθ_a/2,

which is not possible since Δ can be chosen arbitrarily large while q(p^r) is nonincreasing with r. This leaves only the possibility that θ_a = 0 for a = 1,...,L, which implies that there exists a feasible flow vector x̄ with

x̄_j = c_j for every j ∈ A_a^+,  x̄_j = ℓ_j for every j ∈ A_a^-,  a = 1,...,L

(cf. (33) and (34)).
Now we will bound from below the amount of dual cost improvement per dual descent step by a positive constant. Proposition 3.1 assures us that at each dual descent step the stepsize is greater than ε, and the dual cost is decreasing on the line segment connecting t^r and t^{r+1}. Consider the r-th dual descent step, let u^r denote the dual descent direction, and let v^r = E^T u^r; each v_j^r is 1, −1, or 0. Also define

J^+ = { j | {t_j^r}_{r∈R} tends to +∞ },  J^- = { j | {t_j^r}_{r∈R} tends to −∞ },  J^0 = { j | {t_j^r}_{r∈R} is bounded }.

For a fixed r∈R, the right derivative of q(p^r + Δu^r) with respect to Δ is

θ^r(Δ) = q'(p^r + Δu^r; u^r) = Σ_{j: v_j^r = 1} g_j^+(t_j^r + Δ) − Σ_{j: v_j^r = −1} g_j^−(t_j^r − Δ).   (38)

We consider two cases for the interval I = [ε/4, 3ε/4]:

(i) θ^r(Δ) assumes at most 2|A| distinct values for Δ ∈ I;

(ii) θ^r(Δ) assumes more than 2|A| distinct values for Δ ∈ I.

In case (i), since θ^r(Δ) is nondecreasing in Δ, it is constant on some subinterval I^r of I of length at least ε/(4|A|), so that q(p^r + Δu^r) is linear for Δ ∈ I^r; in particular, for each j∈J^0 with v_j^r ≠ 0, g_j(t_j^r + Δv_j^r) is linear with slope b_j v_j^r for Δ ∈ I^r, where b_j denotes some breakpoint of f_j. For each j∈J^0, {t_j^r}_{r∈R} is bounded and therefore the number of distinct linear pieces of g_j encountered during the course of the algorithm is finite. This, together with the fact that v^r is chosen from a finite set, implies that θ^r(Δ) (cf. (38)) need only assume one of a finite set of negative values over I^r, so that in case (i) we can bound the amount of dual cost improvement from below by a positive constant times ε/(4|A|). This implies that case (i) can occur for only a finite set of indexes r (for otherwise the dual cost tends to −∞, which, as argued below, is impossible), and we need only consider case (ii).

In case (ii) there exists, for each r, an arc j with v_j^r ≠ 0 such that the function h(Δ) = g_j(t_j^r + Δv_j^r) assumes at least three distinct right derivative values for Δ ∈ I; since v_j^r is 1 or −1, it follows that

g_j^−(η_1) < g_j^−(η_2) for at least two points η_1 < η_2 belonging to the interval [t_j^r − ε, t_j^r + ε].   (39)

By (33), (34) and Assumption D, for r∈R sufficiently large (39) can hold only for j ∈ J^0. Since {t_j^r}_{r∈R} is then bounded, by passing to a subsequence if necessary we can assume that (39) holds with the same j for all r∈R, that {t_j^r}_{r∈R} converges, and that η_1 and η_2 belong to a fixed interval L whose left and right end points a and b satisfy

g_j^−(a) < g_j^−(b)  and  L ⊂ [t_j^r − ε, t_j^r + ε] for all r∈R sufficiently large.   (40)

We then define

δ = g_j^−(b) − g_j^−(a) > 0.   (41)

Combining (39)-(41) with Proposition 3.2 and the convexity of the functions f_j, we obtain that for all r∈R sufficiently large

q(p^r) − q(p^{r+1}) ≥ μ,   (42)

where μ is a positive scalar depending on δ and on the length of L but not on r. Thus in either case the dual cost improvement per dual descent step is bounded from below by a positive constant along R; since the dual cost is nonincreasing from iteration to iteration, the dual cost tends to −∞, contradicting the fact that the flow problem with the given constraints is feasible (Assumption A), which bounds the dual cost from below. Therefore the number of dual descent steps is finite. Q.E.D.

The second part of our finite termination proof involves showing that the number of flow augmentations between successive dual descent steps is finite. Whether this is so depends on the choice of the labeling scheme: an arbitrary labeling algorithm may result in an infinite number of flow augmentations when the problem data, such as capacity bounds, are irrational, as shown by an example due to Ford and Fulkerson (see [15], p. 126). In practice the data is always rational, being stored on a finite precision machine, and therefore finite convergence is assured even if the labeling is done arbitrarily; a more specific scheme for labeling is necessary, however, to deal with the case of irrational data. Here we propose two such schemes: breadth-first search and arc discrimination.
Breadth-first search is a well-known scheme used in flow augmentation labeling, and can be easily implemented using a FIFO queue [16]. Between successive dual descent steps the ε-bounds remain unchanged, so the flow augmentations take place on a network with fixed arc flow bounds. For simplicity, we call a node with positive deficit a source and a node with negative deficit a sink; during the flow augmentations a source cannot become a sink or vice versa, but the set of sources and sinks may shrink as node deficits become zero. It was shown in [16] that, under breadth-first search, the number of flow augmentations is finite when there is a single fixed source and a single fixed sink. We now show that the same conclusion holds in our setting; to our knowledge this fact requires a nontrivial proof that has not been reported in the literature.

Proposition 3.4: If the labeling is done by breadth-first search, the number of flow augmentations between successive dual descent steps is finite.

Proof: We will assume the contrary and obtain a contradiction. Let P be the set of flow augmenting paths that repeat infinitely often; since all flow augmenting paths are simple, and therefore there are only a finite number of them, P is nonempty if the number of flow augmentations is infinite. We say that an arc belonging to a path p∈P is saturated in the direction of p if the flow of the arc is at the upper (lower) bound and the arc is oriented from the source of p to the sink of p (from the sink to the source, respectively). After a flow augmentation using p as the flow augmenting path, some arc of p becomes saturated in the direction of p. Let A_p denote the set of arcs of p that become saturated in the direction of p infinitely often; A_p is clearly nonempty. We will show, by induction, that A_p is empty when the labeling is done by breadth-first search, and thus obtain a contradiction.

Initialization: For all p∈P, every a∈A_p is at least one arc away from the sink of p. This is true since, if the arc of p incident to the sink t of p becomes saturated in the direction of p, it must remain saturated from then on: to unsaturate it, a flow augmentation would have to push flow through t in the reverse direction, which cannot occur since a sink cannot become a source.

Inductive step: Suppose that, for all p∈P, every a∈A_p is at least k arcs away from the sink of p. We will show that, for all p∈P, every a∈A_p is at least k+1 arcs away from the sink of p. Suppose the contrary; then there exist p∈P and an arc a∈A_p such that a is exactly k arcs away from the sink t of p. Since a becomes saturated in the direction of p infinitely often, there must be, infinitely often, a flow augmenting path p' that unsaturates a; denote by s' and t' the source and sink of p' respectively, and let p_1 denote the subpath of p' from a to t' (cf. Figure 3.4). Since the labeling is done by breadth-first search, during the iteration that generated p' node t would have been labeled before node t', and a path ending at t would have been generated as the flow augmenting path, unless the number of arcs on p_1 is strictly less than the number of arcs (k) on the subpath of p between a and t. Hence p_1 has strictly fewer than k arcs. Since p' is chosen from a finite set of simple paths, we may take p'∈P, and infinitely often the arc of p' that becomes saturated in the direction of p' is an arc of p_1; this arc must then belong to A_{p'}, and it is strictly fewer than k arcs away from the sink t' of p', so the inductive hypothesis is contradicted.

[Figure 3.4: the path p, with source s, sink t and arc a, and the unsaturating path p', with source s', sink t' and subpath p_1 from a to t'.]

Since the inductive hypothesis holds for all k, and the number of arcs on each flow augmenting path is at most |N|−1, it follows that A_p is empty for all p∈P, and the desired contradiction is obtained. Q.E.D.
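As an illustrative sketch (not the authors' Fortran implementation), breadth-first labeling with a FIFO queue scans nodes in the order in which they are labeled, so the augmenting path it returns has the fewest possible arcs. All names below are hypothetical.

```python
from collections import deque

def bfs_augmenting_path(adj, residual, source, sinks):
    """Breadth-first labeling with a FIFO queue of labeled-but-unscanned
    nodes; returns a fewest-arc augmenting path from source to a sink,
    or None if no sink can be labeled."""
    label = {source: None}              # labeled nodes -> predecessor
    queue = deque([source])             # FIFO: scan in labeling order
    while queue:
        i = queue.popleft()
        if i in sinks:                  # trace the path back to the source
            path = []
            while i is not None:
                path.append(i)
                i = label[i]
            return path[::-1]
        for k in adj[i]:
            if k not in label and residual.get((i, k), 0) > 0:
                label[k] = i
                queue.append(k)
    return None

# toy residual network: two parallel two-arc routes from node 0 to node 3
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
residual = {(0, 1): 1, (0, 2): 1, (1, 3): 1, (2, 3): 1}
```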
In the arc discrimination scheme, the order
in which nodes
are labeled and scanned is given by the following simple rule:
Each labeled but unscanned node records whether it
is
connected to an unlabeled neighbor by an arc whose flow is
strictly between the lower and upper bounds.
A node with
such a neighbor is scanned first.
The proof of finite convergence under this scheme is given in [17].
The implementation of the arc discrimination scheme requires more global information than breadth-first search. It is not known whether either scheme can still be shown to yield finite convergence when the relaxation algorithm is extended to operate on both positive and negative deficit nodes between successive dual descent steps, an extension which had been shown to be computationally beneficial in the case of linear cost problems.

Propositions 3.3 and 3.4 of this section show that the relaxation algorithm terminates after a finite number of iterations. Since the algorithm only terminates when all the node deficits have zero value, the final flow vector x must belong to C. Since ε-CS is maintained at all iterations of the algorithm, it follows that the final flow vector x and the final price vector p must satisfy ε-CS also. We next show that we can bring the cost of the solution generated by the relaxation algorithm arbitrarily close to the optimal cost by taking ε sufficiently small.
The main part of the argument is embodied in the next proposition.

Proposition 3.5: Let x and p satisfy ε-CS with x ∈ C, let t = E^T p, and for each arc j let ξ_j be a scalar such that g_j(t_j) = t_j ξ_j − f_j(ξ_j). Then

0 ≤ f(x) + q(p) ≤ ε Σ_{j∈A} |x_j − ξ_j|.

Proof: Take an arc j such that ξ_j ≤ x_j. By convexity of f_j we have

f_j(ξ_j) ≥ f_j(x_j) + f_j^−(x_j)(ξ_j − x_j) ≥ f_j(x_j) + (t_j + ε)(ξ_j − x_j),

where the second inequality follows from ε-CS. Hence

f_j(x_j) + g_j(t_j) = f_j(x_j) + t_j ξ_j − f_j(ξ_j) ≤ x_j t_j + ε(x_j − ξ_j).

The corresponding inequality is similarly obtained when x_j < ξ_j, so we have

f_j(x_j) + g_j(t_j) ≤ x_j t_j + ε|x_j − ξ_j|,  ∀ j∈A.

From the definition of g_j we also have

f_j(x_j) + g_j(t_j) ≥ x_j t_j,  ∀ j∈A.

By combining these two inequalities and adding over all j∈A we obtain

Σ_{j∈A} x_j t_j ≤ f(x) + q(p) ≤ Σ_{j∈A} x_j t_j + ε Σ_{j∈A} |x_j − ξ_j|.

Since x∈C we have Σ_{j∈A} x_j t_j = x·(E^T p) = (Ex)·p = 0, and the result follows. Q.E.D.
From Proposition 3.5 we can obtain a simple bound on the suboptimality of the solution in the special case where ℓ_j > −∞ and c_j < +∞ for all j∈A.

Corollary 3.5: Let x and p satisfy ε-CS. If x∈C, and ℓ_j > −∞ and c_j < +∞ for all j∈A, then

f(x) + q(p) ≤ ε Σ_{j∈A} (c_j − ℓ_j).
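A numerical sanity check of the per-arc inequality behind Proposition 3.5 and Corollary 3.5, on a single hypothetical arc with quadratic cost f(x) = x²/2 over [ℓ, c] and conjugate g(t) = sup_{ℓ≤x≤c} {tx − f(x)} (all numbers below are illustrative assumptions):

```python
l, c, eps = -2.0, 2.0, 0.1        # hypothetical bounds and tolerance

f = lambda x: 0.5 * x * x
def g(t):                          # conjugate over [l, c]; since f'(x) = x,
    xi = min(max(t, l), c)         # the sup is attained at xi = clamp(t)
    return t * xi - f(xi)

x, t = 1.0, 1.05                   # satisfies eps-CS: |t - f'(x)| <= eps
xi = min(max(t, l), c)             # the maximizing point of the conjugate

# per-arc duality-gap bound from the eps-CS argument
gap = f(x) + g(t) - x * t
assert 0.0 <= gap <= eps * abs(x - xi) <= eps * (c - l)
```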
For the general case we have:

Proposition 3.6: Let x(ε) and p(ε) denote any flow and price vector pair such that x(ε) and p(ε) satisfy ε-CS and x(ε) ∈ C. Then

f(x(ε)) + q(p(ε)) → 0 as ε → 0.

Proof: First we show that x(ε) remains bounded as ε → 0. Suppose not. Then there exist a sequence {ε_n} with ε_n → 0 and a directed cycle Y such that

x_j(ε_n) → +∞ for all j ∈ Y⁺ and x_j(ε_n) → −∞ for all j ∈ Y⁻,   (43)

where Y⁺ (Y⁻) denotes the set of arcs of Y whose orientation agrees (disagrees) with that of the cycle. By Assumption B, (43) implies that f_j^−(x_j(ε_n)) → +∞ for all j ∈ Y⁺ and f_j^+(x_j(ε_n)) → −∞ for all j ∈ Y⁻. Since, by ε-CS,

t_j(ε_n) ≥ f_j^−(x_j(ε_n)) − ε_n for all j ∈ Y⁺,  t_j(ε_n) ≤ f_j^+(x_j(ε_n)) + ε_n for all j ∈ Y⁻,

where t(ε_n) = E^T p(ε_n), the sum of the tensions along Y tends to +∞, contradicting the fact that the tensions of a tension vector sum to zero along a directed cycle. Therefore x(ε) remains bounded as ε → 0. Similarly, using Assumption B and the boundedness of x(ε) (if c_j < +∞ then by Assumption B f_j^+ tends to +∞ at c_j), we can argue that t_j(ε) is bounded from above, and likewise bounded from below, for all j∈A as ε → 0. This in turn implies (cf. the proof of Proposition 3.5, since the scalars ξ_j remain bounded when the t_j(ε) are bounded) that

f_j(x_j(ε)) + g_j(t_j(ε)) − x_j(ε) t_j(ε) → 0 for all j∈A as ε → 0,

and since Σ_{j∈A} x_j(ε) t_j(ε) = 0 for x(ε) ∈ C, summing over all j∈A completes our proof. Q.E.D.

Unfortunately, Proposition 3.6 does not tell us how small ε must be to achieve a certain degree of near optimality. In view of Corollary 3.5, when all the bounds ℓ_j and c_j are finite we can trivially obtain an a priori estimate on how small ε needs to be. In the general case we need some guess for ε; we can solve the problem first with a certain ε, evaluate the duality gap f(x(ε)) + q(p(ε)), and then decide, on the basis of this estimate of the quality of the primal and dual solutions, whether ε needs to be decreased.
4. Computational Experimentation

Two experimental Fortran codes implementing the methods of the paper were developed and tested on a VAX 11-750; both were compiled and run under the VMS version 3.6 operating system. The first code, named NRELAX, implements the relaxation method for strictly convex arc cost problems of Section 2. The second code, named MNRELAX, implements the method for mixed linear and strictly convex arc cost problems of Section 3. The test problems were generated using the public domain code NETGEN [18]; there are 40 problems in all. We tested these problems either in their "standard" (linear cost) form, or in a modified form whereby a quadratic cost was added to the linear cost of some or all of the arcs as discussed below. In order to test the efficiency of our coding, we ran MNRELAX with ε = 0 against the very efficient linear cost code RELAX (see [1], [2]) under identical conditions on the first 30 NETGEN benchmark problems. The two codes are close to being mathematically equivalent on linear cost problems, but MNRELAX uses floating point arithmetic. The results, shown in Table 1, appear to indicate that MNRELAX is coded fairly efficiently.

There were two issues that we wanted to clarify through the experimentation:

a) The effect of the parameter ε on the performance of MNRELAX.

b) The relative efficiency of NRELAX versus MNRELAX on problems with strictly convex arc costs.
Problem # | # of Nodes | # of Arcs | MNRELAX (ε = 0) | RELAX
(30 rows of timing data for problems 1-30, with 200 to 1000 nodes and 1300 to 6300 arcs)

Table 1: Times for NETGEN benchmark problems. MNRELAX uses ε = 0. Times in this and subsequent tables are in secs on a VAX 11-750. All codes are written in Fortran and compiled under VMS version 3.6.
An important issue was the choice of the parameter ε. With ε = 0 a large number of flow augmentations may be needed before a descent direction can be found; as ε increases, the intervals defined by the ε-bounds become larger, so descent directions are found more easily and the iterations become faster. On the other hand, a large value of ε leads to inaccurate solutions (cf. Proposition 3.5), and for some problems the time required for MNRELAX to terminate (which happens when the deficit of all nodes becomes sufficiently close to zero) increases with ε. Experiments, some of which are presented in Tables 2 and 3, showed that for most problems it appears best to operate MNRELAX with ε = 0, or with ε very close to zero, the exception being some very "difficult" problems discussed below. When ε = 0 and all arc costs are strictly convex, MNRELAX and NRELAX are mathematically equivalent; however, NRELAX is somewhat more efficient because of better coding, as shown in Table 3.
Finally, in Table 4 we show results obtained on some "difficult" problems with strictly convex arc costs. These problems were constructed by choosing the quadratic cost coefficients of some arcs to be very small relative to others, as described in Table 4. This is similar to a situation in nonlinear unconstrained minimization where the Hessian matrix of the cost function has some eigenvalues that are very small relative to other eigenvalues. For this class of problems MNRELAX with nonzero ε can outperform both NRELAX and MNRELAX with ε = 0. This is not surprising in view of the coordinate descent interpretation of the methods of this paper.
Problem # | # of Nodes | # of Arcs | MNRELAX (ε > 0) | MNRELAX (ε = 0) | Sign. digits of accuracy
(25 rows of timing data for problems 1-25)

Table 2: Times for NETGEN benchmark problems modified so that 50% of the arcs have an additional quadratic cost with coefficient from the range [5,10]. Numbers in parentheses, where present, indicate significant digits of accuracy of the answer. In MNRELAX ε is kept constant during the solution of each problem.
Problem # | # of Nodes | # of Arcs | MNRELAX (ε > 0) | MNRELAX (ε = 0) | NRELAX | Sign. digits of accuracy
(25 rows of timing data for problems 1-25)

Table 3: Times for NETGEN benchmark problems modified so that all arcs have an additional quadratic cost with coefficient from the range [5,10]. Numbers in parentheses, where present, indicate significant digits of accuracy of the answer. In MNRELAX ε is kept constant during solution of each problem.
Problem # | # of Nodes | # of Arcs | Small quad. coeff. | MNRELAX (ε > 0) | NRELAX
(8 rows of timing data for problems 1, 5, 7, 11, 15, 16, 18 and 24, with small coefficients ranging from .0001 to .01)

Table 4: Times for NETGEN benchmark problems modified so that all arcs have an additional quadratic term. In 50% of the arcs the quadratic cost coefficient was small as indicated. In the other 50% of the arcs the quadratic cost coefficient was from the range [5,10]. Numbers in parentheses, where present, indicate significant digits of accuracy of the answer. In MNRELAX ε is progressively decreased during solution of each problem.
Still, the methods of this paper are not very efficient for such problems; we do not know, however, of a better alternative. The version of MNRELAX that we found most efficient for these difficult problems is one whereby we start with a moderate value of ε, operate MNRELAX up to the point where the primal and dual cost values differ by a specified factor, then reduce ε by a factor (10 in our experiments) and repeat the process up to termination at a specified accuracy. A suitable initial value for ε was not easy to determine a priori, and it was necessary to do some initial experimentation to determine the proper starting value of ε for these problems. The conclusion is that, for problems of this type, the choice of ε can affect the solution time by a factor of several.
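The progressive ε-reduction just described can be sketched as a simple driver loop. Here solve_eps_relaxation is a hypothetical stand-in for one MNRELAX-style run; the mock solver below merely returns a gap proportional to ε, in the spirit of Corollary 3.5, and is not the paper's code.

```python
def eps_reduction_driver(solve_eps_relaxation, eps0, gap_tol, factor=10.0):
    """Start with a moderate eps, solve, and shrink eps by `factor`
    until the primal-dual gap f(x) + q(p) meets the target accuracy."""
    eps = eps0
    x, p, gap = solve_eps_relaxation(eps)
    while gap > gap_tol:
        eps /= factor              # reduce eps by the chosen factor
        x, p, gap = solve_eps_relaxation(eps)
    return x, p, eps, gap

# mock solver for illustration: gap proportional to eps (hypothetical)
mock = lambda eps: (None, None, 0.5 * eps)
x, p, eps, gap = eps_reduction_driver(mock, eps0=1.0, gap_tol=1e-3)
```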
References

[1] Bertsekas, D. P., "A Unified Framework for Primal-Dual Methods in Minimum Cost Network Flow Problems", Mathematical Programming, Vol. 32, No. 2, pp. 125-145, 1985.

[2] Bertsekas, D. P. and Tseng, P., "Relaxation Methods for Minimum Cost Network Flow Problems", LIDS Report P-1339, Mass. Institute of Technology, October 1983.

[3] Rockafellar, R. T., Network Flows and Monotropic Programming, Wiley-Interscience, 1983.

[4] Rockafellar, R. T., Convex Analysis, Princeton Univ. Press, 1970.

[5] Bertsekas, D. P. and El Baz, D., "Distributed Asynchronous Relaxation Methods for Convex Network Flow Problems", LIDS Report P-1417, Mass. Institute of Technology, October 1984; SIAM J. on Control and Optimization (to appear).

[6] Zangwill, W. I., Nonlinear Programming, Prentice-Hall, 1969.

[7] Sargent, R. W. H. and Sebastian, D. J., "On the Convergence of Sequential Minimization Algorithms", Journal of Optimization Theory and Applications, Vol. 12, pp. 567-575, 1973.

[8] Polak, E., Computational Methods in Optimization: A Unified Approach, Academic Press, 1971.

[9] Luenberger, D. G., Linear and Nonlinear Programming, Addison-Wesley, 1984.

[10] Powell, M. J. D., "On Search Directions for Minimization Algorithms", Mathematical Programming, Vol. 4, North-Holland Publishing, pp. 193-201, 1973.

[11] Pang, J. S., "On the Convergence of Dual Ascent Methods for Large-Scale Linearly Constrained Optimization Problems", unpublished manuscript, The University of Texas at Dallas, 1984.

[12] Cottle, R. W. and Pang, J. S., "On the Convergence of a Block Successive Over-Relaxation Method for a Class of Linear Complementarity Problems", Mathematical Programming Study 17, North-Holland Publishing, pp. 126-138, 1982.

[13] Bertsekas, D. P. and Mitter, S. K., "A Descent Numerical Method for Optimization Problems with Nondifferentiable Cost Functionals", SIAM J. Control, Vol. 11, No. 4, pp. 637-652, November 1973.

[14] Ford, L. R., Jr., and Fulkerson, D. R., Flows in Networks, Princeton Univ. Press, 1962.

[15] Papadimitriou, C. H. and Steiglitz, K., Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, 1982.

[16] Edmonds, J. and Karp, R., "Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems", Journal of the Association for Computing Machinery, Vol. 19, pp. 248-264, 1972.

[17] Tseng, P., "Relaxation Methods for Monotropic Programming Problems", Ph.D. Thesis, Operations Research Center, MIT, 1986.

[18] Klingman, D., Napier, A., Stutz, J., "NETGEN - A Program for Generating Large Scale (Un)capacitated Assignment, Transportation and Minimum Cost Network Problems", Management Science, Vol. 20, pp. 814-822, 1974.

[19] Zenios, S. and Mulvey, J. M., "Simulating a Distributed Synchronous Relaxation Method for Convex Network Problems", Working Paper, Dept. of Civil Engineering, Princeton Univ., Jan. 1985.