A Dual Version of Tardos's Algorithm for Linear Programming

by

James B. Orlin

Sloan W.P. No. 1686-85
July 1985
Revised June 1986
Abstract

Recently, Eva Tardos developed an algorithm for solving the linear
program min(cx: Ax = b, x >= 0) whose solution time is polynomial in
the size of A, independent of the sizes of c and b.  Her algorithm
focuses on the dual LP and employs an approximation of the cost
coefficients.  Here we adopt what may be called a 'dual approach' in
that it focuses on the primal LP.  This dual approach has some
significant differences from Tardos's approach which make the dual
approach conceptually simpler.

Subject Classification: 742.
Introduction

Recently, Eva Tardos [1986] developed a polynomial time algorithm for
the linear program min(cx: Ax = b, x >= 0) that runs in time
polynomial in m, n and log(Amax + 1), assuming that the four basic
arithmetic operations are each counted as one step.  Indeed, for
linear programs developed from combinatorial problems such as the
multi-commodity flow problem the algorithm is strongly polynomial,
i.e., the number of arithmetic steps is polynomial in the dimension of
the problem rather than in the size of the input.

Tardos's algorithm is based on a generalization of her strongly
polynomial algorithm for minimum cost network flows (Tardos [1985]).
Subsequently, Frank and Tardos [1985] have generalized Tardos's
approach so that it is valid for linear programs (with potentially an
exponential number of constraints) solved by the ellipsoidal
algorithm.
Here we present a 'dual' variant of her algorithm.  The major
algorithmic ideas are the same, viz., both algorithms solve a sequence
of approximated problems and in each instance identify at least one
variable that must be zero in an optimal solution.

Nevertheless, our algorithm does differ from Tardos's algorithm in
more significant ways than is indicated by the dualization.  On the
negative side, the approximated problems here have m times as many
bits as do the approximations used in Tardos's algorithm.  Thus under
the standard models of computation, our algorithm is less efficient
than Tardos's algorithm.  On the positive side, our algorithm is a
"single phase" algorithm and is more direct and conceptually simpler
than Tardos's "two-phase" algorithm.  The first phase of Tardos's
algorithm finds the optimizer face of the LP.  The second phase
constructs a basic feasible solution to the optimizer face.  Our
algorithm simultaneously constructs a feasible solution and an
optimum basic feasible solution.

Recently, Fujishige [1986] has developed a similar dual approach to
Tardos to solve the capacitated minimum cost flow problem.  His
algorithm results in an improvement over Tardos's algorithm in the
theoretical worst case running time, and provides a worst case time
comparable to that of Orlin [1985].  Fujishige's algorithm has
subsequently been improved upon by Galil and Tardos [1986].
The algorithm described below runs in O(m^{6.5} log(Amax + m + 1))
arithmetic steps.  The arithmetic operations of the algorithm include:
addition, subtraction, integer division by a constant, integer
multiplication by a constant and comparisons.  In the case that
log(Amax) is polynomially bounded in m and n (as, for example, in
multi-commodity network flow problems and in generalized covering and
set packing problems), the number of arithmetic operations is
polynomially bounded in m and n.  In such a case the algorithm is
called strongly polynomial as per Tardos [1986].

One naturally may question the existence of a strongly polynomial
algorithm for linear programs.  The real significance of the concept
does not lie in its practicality, since one would not in practice wish
to solve an LP in which log(Amax), log(bmax) or log(cmax) is
exponential in m and n.  Rather, strongly polynomial algorithms are
significant in the theoretical domain.
A key theoretical motivation for studying the question of 'strong
polynomiality' deals with the geometry of linear programs.  If we
compare the linear programs in which log(Amax) is polynomially bounded
to those in which log(Amax) is exponential in m and n, the difference
in terms of the structure of the polyhedra is subtle.  It is of
theoretical interest whether the facial structure of linear programs
can grow continually more complex as log(Amax) increases
exponentially.

Similarly, one may wish to know if the complexity of solving linear
programs stems in part from the large numbers involved.  Such is
apparently the case in integer programming with a bounded number of
variables.  Indeed, there is probably no strongly polynomial algorithm
for 2-variable singly constrained integer programs since such an
algorithm would imply that one could compute whether two numbers a and
b are relatively prime in O(1) arithmetic steps.  (Consider the
integer program min(0x: ax - by = 1; x, y >= 0 integer).)
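The coprimality connection can be checked by brute force; the sketch below (my own illustration, not part of the paper's algorithm) verifies that feasibility of min(0x: ax - by = 1; x, y >= 0 integer) coincides with gcd(a, b) = 1, as Bezout's identity guarantees.

```python
from math import gcd

def ip_feasible(a, b, bound=1000):
    """Brute-force check: does ax - by = 1 have a solution with
    x, y >= 0 integer?  (Illustrative only; searches x in [0, bound].)"""
    for x in range(bound + 1):
        # ax - by = 1  =>  y = (a*x - 1) / b must be a nonnegative integer
        if (a * x - 1) % b == 0 and (a * x - 1) // b >= 0:
            return True
    return False

# Feasibility coincides with gcd(a, b) = 1 on these small instances.
for a, b in [(3, 5), (4, 6), (7, 12), (10, 15)]:
    assert ip_feasible(a, b) == (gcd(a, b) == 1)
```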
1.  The Problem and the Algorithm

Consider the linear programming problem

    Minimize    cx
    Subject to  Ax >= b                                    (1)
                 x >= 0

where c is an integral n-vector, A is an integral m x n matrix, and b
is an integral m-vector.  We let Amax = max(|aij|: i in [1..m], j in
[1..n]).  In general, for any vector or matrix v, vmax denotes
max(|vi|: vi is a component of v).

The approximation method described below relies on the following two
properties of linear programs:
(1) if there is an optimal solution then there is an optimal solution
that is basic;

(2) the non-zero components of each basic solution may be obtained by
premultiplication of the right hand side vector b by the inverse of
some basis B of [A,-I].

FACT 1.  Let M = (Amax)^{m+1}(m)(m!).  Let B be any m x m invertible
submatrix of [A,-I] and let Ā = B^{-1}[A,-I].  Then each coefficient
of Ā has both its numerator and denominator bounded by M.

PROOF.  Using Cramer's rule, we may bound the coefficients of B^{-1}.
Let M' = (Amax)^m m!.  Then M' is an upper bound on determinants of
submatrices of A.  By Cramer's rule M' is an upper bound on the
numerator and denominator of each coefficient of B^{-1}.  Multiplying
B^{-1} by A does not increase the largest denominator and increases
the largest numerator by a factor of at most mAmax.

Rather than prove a number of additional facts and lemmas prior to the
algorithm, we will first describe the essential aspects of the
algorithm and subsequently prove that the total number of arithmetic
operations of the algorithm is O(m^{6.5} log(Amax + m + 1)).
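The bound M of Fact 1 is elementary to compute; the following sketch (my own illustration, with a hypothetical helper name) makes concrete how fast M grows, which is why only log M = O(m log(Amax + m)) enters the complexity bounds.

```python
from math import factorial

def fact1_bound(A_max, m):
    """M = (Amax)^(m+1) * m * m!, the Fact 1 bound on the numerators
    and denominators of the coefficients of B^{-1}[A, -I]."""
    return A_max ** (m + 1) * m * factorial(m)

M = fact1_bound(A_max=2, m=3)   # 2^4 * 3 * 3! = 16 * 3 * 6 = 288
assert M == 288
```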
The following "algorithm" is really an outline of our dual approach to
Tardos's algorithm.  Some of the key implementation details are
discussed in the Lemmas and Theorem that follow.  The value M refers
to the upper bound on Āmax defined in Fact 1.  Although it is not yet
apparent, it is very important that the algorithm will solve both the
LP and its dual.
ALGORITHM I.

STEP 1.  Form an approximation to (1) as follows.  Choose an m-vector
b' and an integer k >= 1 so that

    (i)  b'_j = floor(b_j / k)  for j in [1..m],  and

    (ii) either 2(Mm)^{2m} <= b'max < 4(Mm)^{2m} or else b = b' and
         bmax < 2(Mm)^{2m}.
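Step 1 can be sketched directly with exact integer arithmetic (a sketch only; `approximate_rhs` is a hypothetical helper name, and for simplicity it uses Python's built-in floor division rather than the comparison-based scheme of Lemma 1 below).

```python
def approximate_rhs(b, M, m):
    """Step 1: choose k and b' with b'_j = floor(b_j / k) so that either
    b'max lies in [2(Mm)^{2m}, 4(Mm)^{2m}) or else k = 1 and b' = b."""
    T = 2 * (M * m) ** (2 * m)          # threshold 2(Mm)^{2m}
    bmax = max(abs(bj) for bj in b)
    if bmax < T:
        return 1, list(b)               # k = 1, b' = b
    k = bmax // T                        # k = floor(bmax / 2(Mm)^{2m})
    return k, [bj // k for bj in b]      # componentwise floor(b_j / k)

# toy instance with M = m = 1, so the threshold is 2
k, bp = approximate_rhs([100, 7, -3], M=1, m=1)
assert (k, bp) == (50, [2, 0, -1])
```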
STEP 2.  Solve the linear program

    Minimize    cx
    Subject to  Ax - x_s = b'                              (2)
                x, x_s >= 0

Case 1.  Suppose there is no feasible solution to (2).  In this case,
there is also no feasible solution to (1) and the algorithm
terminates.  (A feasible solution x' to (1) would imply that x'/k is
feasible for (2), a contradiction.)

Case 2.  Suppose that the LP (2) is unbounded.  In this case there is
a solution y >= 0 such that Ay >= 0 and cy < 0.  If (1) is feasible,
then this vector y will also show that (1) is unbounded.  In this
case, we test (1) for feasibility by replacing c by 0 and returning to
Step 2.

Case 3.  Suppose that there is an optimal solution.  In this case we
let B be an optimal basis, and we go to Step 3.

STEP 3.  If B^{-1}b >= 0, then B is also an optimal basis for (1).
Otherwise partition B into non-empty subbases D and E so that the
'final tableau' with respect to (1) is as follows.

    Minimize    c̄_N x_N
    Subject to  x_D + N̄1 x_N = b̄_D                         (3)
                x_E + N̄2 x_N = b̄_E
                x_D, x_E, x_N >= 0

where b̄_D > 0, and in addition, for i in D and j in E,

    b̄_i > (Mm)^2 |b̄_j|.                                    (4)
STEP 4.  (Recursively) find a basic optimal solution x*_E, x*_N to

    Minimize    q(c̄_N x_N)
    Subject to  q(x_E + N̄2 x_N) = q b̄_E                    (5)
                x_E, x_N >= 0

where q is |DET(B)|, and thus all coefficients of (5) are integral.
If there is no feasible solution to (5), then there is also no
feasible solution to (1).  Otherwise, the following solution (6) is an
optimal basic feasible solution to (1):

    x_E = x*_E
    x_N = x*_N                                             (6)
    x_D = b̄_D - N̄1 x*_N

Moreover, if B is the optimal basis of (5), then c_B B^{-1} is an
optimal dual solution.
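The recovery of x_D in (6) is a single matrix-vector computation. A minimal sketch with exact rationals (my own illustration; `recover_solution` is a hypothetical name):

```python
from fractions import Fraction

def recover_solution(b_D, N1, x_N_star):
    """Equation (6): x_D = b_D - N1 x_N*, with x_E, x_N taken from the
    recursive solution of (5).  Exact rational arithmetic throughout."""
    x_D = []
    for i in range(len(b_D)):
        s = sum(Fraction(N1[i][j]) * x_N_star[j]
                for j in range(len(x_N_star)))
        x_D.append(Fraction(b_D[i]) - s)
    return x_D

# tiny example: b_D = (10, 8), N1 = [[1, 2], [0, 1]], x_N* = (1, 3)
x_D = recover_solution([10, 8], [[1, 2], [0, 1]],
                       [Fraction(1), Fraction(3)])
assert x_D == [Fraction(3), Fraction(5)]
```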
To prove that the above 'algorithm' is polynomial in m, n and
log(Amax) (or equivalently, in m, n and log M), we still need to fill
in a number of details; however, first let us look at the essential
ideas.
If one were very optimistic, one might expect that one could solve the
approximated problem (2) in Step 2 and obtain the optimal basis.
Unfortunately, it is conceivable that an optimal basis of (2) is not
primal feasible for (1).  In such a case, we can show that a splitting
such as in (3) is possible.  Moreover, the splitting is such that b̄_D
is so large relative to b̄_E that we can be assured that x_D > 0 in
some optimal solution.  Once we can be so assured, we may remove the
variables x_D from consideration and thus reduce the original m
constraint problem to a problem with fewer constraints.  We then
recursively apply the same algorithm to the smaller problem, and
within m iterations we are through.  (Similarly, we could have, in the
spirit of Tardos's algorithm, replaced the dual inequality constraints
yA_D <= c_D by equality constraints.)
LEMMA 1.  We may determine the integer k and the vector b' in Step 1
in O(m log M) arithmetic steps.

PROOF.  (This proof uses the same idea as Tardos, and is included for
completeness.)  We first determine bmax in at most m-1 comparisons.
If bmax < 2(Mm)^{2m} then we let k = 1 and b' = b.  Otherwise, we let
k = floor(bmax / 2(Mm)^{2m}) and we let b'_j = floor(b_j / k).  In the
case that k != 1, we may determine b' by using binary search in the
interval [0..4(Mm)^{2m}] to find the greatest integer d_j such that
b_j >= d_j k.  In this way, we reduce the problem of computing
floor(b_j / k) to multiplication problems in which the smaller
multiplier is less than 4(Mm)^{2m}.  Moreover, to divide by a number
less than 4(Mm)^{2m}, we may use standard algorithms which reduce such
a division to a sequence of log(4(Mm)^{2m}) arithmetic operations that
are limited to the basic ones allowed in our description of strongly
polynomial algorithms.  We also note that it is easy to show that if
k != 1, then 2(Mm)^{2m} <= b'max < 4(Mm)^{2m}.

LEMMA 2.  The optimal objective values of the linear programs (3) and
(5) are not unbounded.

PROOF.  c̄_N >= 0.
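The comparison-based division in the proof of Lemma 1 can be sketched as follows (my own illustration): the greatest d with dk <= b is located by binary search, using only comparisons, multiplication, and halving (integer division by the constant 2, one of the permitted operations).

```python
def floor_div(b, k, upper):
    """Find floor(b / k), i.e. the greatest d in [0..upper] with
    d*k <= b, by binary search, using only comparisons, multiplication,
    and division by the constant 2, as in the proof of Lemma 1."""
    lo, hi = 0, upper
    while lo < hi:
        mid = (lo + hi + 1) // 2     # halving: division by a constant
        if mid * k <= b:
            lo = mid                  # d = mid is still achievable
        else:
            hi = mid - 1              # d must be smaller than mid
    return lo

assert floor_div(100, 7, 10**6) == 14   # floor(100/7) = 14
assert floor_div(6, 7, 10**6) == 0
```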
LEMMA 3.  Either B^{-1}b >= 0, or else there is a partition of B into
non-empty subbases D and E as described in Step 3.

PROOF.  If B^{-1}b >= 0 then the result is trivially true.  Suppose
now that B^{-1}b is not >= 0.  In particular, k > 1 and bmax >
(Mm)^{2m}.  Let b̄ = B^{-1}b, and suppose without loss of generality
that |b̄_1| >= |b̄_2| >= ... >= |b̄_m|.  To prove the Lemma we will
show that

    b̄_1 > k(Mm)^{2m-1}                                     (7)

and we will also show that

    if b̄_j < 0 then |b̄_j| <= Mmk,                          (8)

where k is the value determined in Step 1.  From (7) and (8) and by
our assumption that b̄_j < 0 for some j,

    |b̄_1| / |b̄_m| > (Mm)^{2m-2}.                           (9)

Since

    |b̄_1| / |b̄_m| = prod_{j=1}^{m-1} |b̄_j| / |b̄_{j+1}|,

it follows that there is some index r such that

    |b̄_r| / |b̄_{r+1}| > (Mm)^2.                            (10)

If there is more than one index satisfying (10), then choose r to be
the least such index.  We may then select D to consist of basic
variables 1,...,r and select E to consist of basic variables
r+1,...,m.  By our choice of r, for j <= r we have |b̄_j| >=
b̄_1 (Mm)^{-2m+4} > k(Mm)^3, and thus by (8), b̄_j > 0 for j <= r.  To
complete the proof of the Lemma it suffices to prove that (7) and (8)
are true.

We now prove (7).  Since b = B b̄, bmax <= mAmax |b̄_1|, and thus
|b̄_1| >= bmax / mAmax > k(Mm)^{2m-1}.  This latter inequality is true
since bmax / k > (Mm)^{2m}.

We now prove (8).  Let D = (d_ij) = B^{-1}.  From Step 1, we have
b_j - k <= k b'_j <= b_j.  Moreover, B^{-1}b' >= 0.  Suppose
(B^{-1}b)_i < 0.  Then

    (B^{-1}b)_i = sum_{j=1}^m d_ij b_j
                >= sum_{j=1}^m d_ij k b'_j
                   - sum_{j=1}^m |d_ij| |b_j - k b'_j|      (11)
                >= -kMm.

The last inequality is valid since |d_ij| <= M.  This completes the
proof of Lemma 3.
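The choice of the split index r in the proof of Lemma 3 is a single scan over consecutive magnitude ratios; a sketch (my own illustration, with a hypothetical helper name):

```python
def split_index(b_bar, M, m):
    """Least r with |b_bar_r| / |b_bar_{r+1}| > (Mm)^2, where the
    entries are sorted by decreasing magnitude, as in inequality (10)
    of Lemma 3.  Returns None if no such gap exists (1-based r)."""
    gap = (M * m) ** 2
    for r in range(1, len(b_bar)):
        # test |b_r| > (Mm)^2 |b_{r+1}| using multiplication only
        if abs(b_bar[r - 1]) > gap * abs(b_bar[r]):
            return r          # D = basics 1..r, E = basics r+1..m
    return None

# magnitudes 1000, 900, 2, 1 with (Mm)^2 = 4: the first big drop is
# between the 2nd and 3rd entries, so r = 2
assert split_index([1000, 900, 2, 1], M=2, m=1) == 2
```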
LEMMA 4.  Any of the linear programs (5) recursively obtained by the
algorithm will have a constraint matrix qC̄ for which (qC̄)max <= M.
Moreover, for any basis F of qC̄, (F^{-1})max <= M.

PROOF.  The coefficients of qC̄ may be obtained by multiplying A by
Det(B) B^{-1} for some basis B.  Thus (qC̄)max <= M by Fact 1.  Assume
now that the columns and rows have been permuted so that

    B^{-1}A = [ I1   0   N̄11  N̄12 ]
              [ 0   I2   N̄21  N̄22 ]

where I1 and I2 are identity matrices of appropriate dimension.  We
may assume without loss of generality that F is disjoint from I2.  (We
may assume this by making duplicates of the columns in F ∩ I2.)  If we
now pivot so that F is transformed into the identity matrix, then I2
is transformed into F^{-1}.  Thus (F^{-1})max <= M by Fact 1.
LEMMA 5.  Algorithm I will either find some subproblem in Step 2 that
has no feasible solution or else it will determine the optimal
solution via equation (6).

PROOF.  If the algorithm finds any subproblem that is infeasible, then
there is no feasible solution to (1), because we have not created any
infeasibilities in the recursion (5) nor have we created any new
infeasibilities by rounding down b in Step 1.

Next we consider the case in which there is a feasible solution to
(1), and we prove our result inductively.  Assume that our recursive
algorithm has found an optimal solution (x*_E, x*_N) to LP (5).  We
assume without loss of generality that the solution is basic.  Thus
there is some basis F of (5) such that x*_F = F^{-1} b̄_E, and if i is
a basic variable of F, then x*_i <= Mmt, where t = (b̄_E)max.  Also by
Lemma 4, (N̄1)max <= M, and thus (N̄1 x*_N)max <= M^2 m^2 t.  By our
choice of the partition in Step 3, each coefficient b̄_i of b̄_D is
greater than M^2 m^2 t.  Thus b̄_D - N̄1 x*_N > 0, and the solution in
(6) is primal feasible.  Since it is also dual feasible, it is
optimal.

The key detail remaining is the implementation of Step 2.  Let ||.||
denote the sup norm.  We refine Step 2 as follows.

STEP 2'.  If ||c|| <= 4(Mm)^{2m}, then solve (2) using a minor
modification of Karmarkar's algorithm as described below.  If
||c|| > 4(Mm)^{2m}, then solve (2) using this algorithm recursively.

We will observe that the solution time in Step 2 is polynomial.  We
may solve (2) in polynomial time if ||c|| <= 4(Mm)^{2m} using our
slight modification of Karmarkar's [1984] algorithm.  If ||c|| >
4(Mm)^{2m}, then the dual of (2) is a special case of (1) in which the
cost coefficients are bounded by 4(Mm)^{2m}.  As such we may apply our
algorithm recursively.
One cannot use Karmarkar's original algorithm for the following
reason: in its original form Karmarkar's algorithm solves only the
primal problem, whereas in Step 2 above we need to solve both the
primal and dual problems.  One easy method of resolving this
difficulty is to replace b' by b' + c*, where the j-th component of c*
is ε^j for some sufficiently small positive number ε.  This is the
well known perturbation method which guarantees that any basis B that
is feasible is non-degenerate.  (By Cramer's rule, the fractional part
of each component of B^{-1}(b' + c*) is non-zero, and thus the
solution is non-degenerate.)  As such, we can use Karmarkar's
algorithm to identify an optimal primal solution and then use standard
techniques to move to an optimal basic solution.  Because of
non-degeneracy, the optimal basis B is also dual feasible, and thus B
induces both primal and dual optimal solutions.

This perturbation leads to an increase in the description of the right
hand side by O(m log M) bits.  However, in the case that bmax >
(Mm)^{2m}, this increase in the number of bits due to the perturbation
method is only a constant multiplicative factor of the number of bits
of b', and the algorithm is still polynomial.

An alternative method of resolving the difficulty in Step 2 is to
avoid the need for obtaining an optimal dual solution.  Instead, one
can use the Adler and Hochbaum [1985] extension of the ellipsoid
algorithm that runs in time polynomial in n, log(Amax) and log(cmax),
independent of the size of bmax.
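The perturbation b' + c* with c*_j = ε^j can be written down with exact rationals (a sketch; the value of ε here is illustrative, not the value the analysis would require):

```python
from fractions import Fraction

def perturb_rhs(b_prime, eps):
    """Replace b' by b' + c*, where the j-th component of c* is eps^j
    (1-based j): the classical perturbation that rules out degeneracy
    by making tied components of the right hand side distinct."""
    return [Fraction(bj) + Fraction(eps) ** (j + 1)
            for j, bj in enumerate(b_prime)]

# two equal components (5 and 5) become distinct after perturbation
b = perturb_rhs([3, 5, 5], eps=Fraction(1, 10))
assert b == [Fraction(31, 10), Fraction(501, 100), Fraction(5001, 1000)]
```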
A plausible method that does not seem to work is solving the primal
and dual problems simultaneously, e.g., one could reformulate (1) as

    minimize    cx - by
    subject to  Ax >= b
                A^t y <= c
                x, y >= 0

However, we cannot make the same reductions as stated in Step 2 in the
case that cmax is large, since c appears both in the objective
function and in the right-hand-side.

THEOREM 1.  Algorithm I solves the linear program (1) in
O(m^{6.5} log(m + Amax + 1)) arithmetic operations.

PROOF.  First, the optimality of the algorithm was shown in Lemma 5.
Next, each of the approximated linear programs solved in Step 2 by
Karmarkar's algorithm has an input size L = O(m log M), and thus the
number of arithmetic steps per linear program is O(m^{3.5} log M) =
O(m^{4.5} log(m + Amax + 1)).  (We have m + n variables rather than n
variables because of the slack variables; moreover, m + n <= 2m by
assumption, and so the running time of Karmarkar's algorithm is
O(m^{3.5} log M) arithmetic operations.  Karmarkar's bound includes
another factor of L to handle the precision.  We do not include this
factor of L since we are counting only the number of arithmetic
operations.)

If ||c|| <= 4(Mm)^{2m}, then the number of linear programs solved in
Step 2 is at most m because at least one basic variable is eliminated
in Step 3 at each iteration.  If ||c|| > 4(Mm)^{2m}, then to solve (2)
requires the solution of at most m linear programs.  Thus there are
O(m^2) linear programs all together for a total running time of
O(m^{5.5} log M) = O(m^{6.5} log(m + Amax + 1)) for solving the linear
programs.  It is easy to verify that the other arithmetic operations
also have time bounded in total by O(m^{5.5} log M).
Conclusions

As noted by Tardos, her algorithm treats a problem of theoretical
interest, i.e., LP's in which ||b|| > (mM)^{2m} or ||c|| > (mM)^{2m}.
Within this domain, Tardos's algorithm is a significant achievement.
Here we have shown that a dual version of Tardos's algorithm has a
notable conceptual advantage: viz, the algorithm itself is simpler in
that it solves linear programs in one rather than two phases.
Moreover, computationally the algorithm is superior in special cases,
such as for the transshipment problem.  (See Fujishige [1986] and
Galil and Tardos [1986].)

A major related theoretical question is: can we find an algorithm for
linear programs whose number of arithmetic operations is bounded in m
and n and independent of Amax?  For this more general problem, it is
not apparent how one can generalize Tardos's algorithm or the
algorithm presented here.

Acknowledgments

I wish to thank Dorit Hochbaum for pointing out that Karmarkar's
algorithm does not necessarily solve the dual LP.  I also wish to
thank Leslie Hall and an anonymous referee for suggestions on how to
improve the presentation.
References

I. Adler and D. Hochbaum (1985).  On an implementation of the
ellipsoidal algorithm, the complexity of which is independent of the
objective function.  In preparation.

A. Frank and E. Tardos (1985).  A combinatorial application of the
simultaneous approximation algorithm.  In preparation.

S. Fujishige (1986).  A capacity-rounding algorithm for the
minimum-cost circulation problem: a dual framework of the Tardos
algorithm.  Math Programming 35, 298-309.

Z. Galil and E. Tardos (1986).  An O(n^2 log n (m + n log n)) minimum
cost flow algorithm.  Submitted for publication.

N. Karmarkar (1984).  A new polynomial time algorithm for linear
programming.  Combinatorica 4, 373-395.

J. Orlin (1985).  Polynomial time simplex and non-simplex algorithms
for the minimum cost flow problem.  MIT, Sloan School working paper.

E. Tardos (1985).  A strongly polynomial minimum cost circulation
algorithm.  Combinatorica.  To appear.

E. Tardos (1986).  A strongly polynomial algorithm to solve
combinatorial linear programs.  Operations Research.  To appear.