Parametric Linear Programming and
Anti-Cycling Pivoting Rules
by
T.L. Magnanti*
and J.B. Orlin**
Sloan School of Management
M.I.T.
Sloan W.P. No. 1730-85
October 1985

*  Supported in part by grant ECS-8316224 from the Systems Theory and
   Operations Research Division of the National Science Foundation.
** Supported in part by Presidential Young Investigator grant
   8451517-ECS of the National Science Foundation.
ABSTRACT

The traditional perturbation (or lexicographic) methods for resolving degeneracy in linear programming impose decision rules that eliminate ties in the simplex ratio rule and, therefore, restrict the choice of exiting basic variables. Bland's combinatorial pivoting rule also restricts the choice of exiting variables. Using ideas from parametric linear programming, we develop anti-cycling pivoting rules that do not limit the choice of exiting variables beyond the simplex ratio rule. That is, any variable that ties for the ratio rule can leave the basis. A similar approach gives pivoting rules for the dual simplex method that do not restrict the choice of entering variables.
The primal simplex method for minimization problems permits an entering variable at each iteration to be any variable with a negative reduced cost and permits the exiting variable to be any variable that satisfies the minimum ratio rule. As is well-known, any implementation of the procedure is guaranteed to converge if the problem is nondegenerate. In addition, there are two well-known methods for resolving degeneracy. The first of these, the perturbation (or equivalently, the lexicographic) method, avoids cycling by refining the selection rule for the exiting variable (Charnes [1952], Dantzig [1951], Wolfe [1963]). The second method, the combinatorial rule developed by Bland [1977], avoids cycling by refining the selection rule for both the exiting and entering variables. This situation raises the following natural question: Is there a simplex pivoting procedure for avoiding cycling that does not restrict the minimum ratio rule choice of exiting variables?
In this note, we answer this question affirmatively by describing an anti-cycling rule based on a "homotopy principle" that avoids cycling by refining the selection rule for only the entering variable. We also describe an analogous dual pivoting procedure that avoids cycling by refining only the choice of exiting variables.
Our procedures are based upon a few elementary observations concerning parametric simplex methods. These observations may be of some importance in their own right, since they may shed light on some theoretical issues encountered in several recent analyses of average case performance of parametric simplex methods (e.g., Adler [1983], Borgwardt [1982], Haimovich [1983], Smale [1983]). In particular, whenever the probability distribution of a parametric linear program is chosen, as is frequently the case, so that the problem satisfies a property that we call dual nondegeneracy, then the parametric algorithm converges finitely even if it is degenerate.
Parametric Linear Programming
Consider the following parametric linear programming problem:
    Minimize      (c + θd)x
    subject to    Ax = b                          P(θ)
                  x ≥ 0,

where A is an m×n constraint matrix with (for notational convenience) full row rank. For a given value θ, we say that P(θ) is nearly dual nondegenerate if for each primal feasible basis B there is at most one nonbasic variable x_i whose reduced cost c̄_i + θd̄_i is 0. We say that the parametric problem P is dual nondegenerate if P(θ) is nearly dual nondegenerate for all θ ∈ ℝ.
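
To make the definition concrete, the following is a minimal numpy sketch (ours, not from the paper; the function names are illustrative only) that computes the reduced costs c̄ and d̄ for a given basis and checks the "at most one zero nonbasic reduced cost" condition at a particular θ:

    import numpy as np

    def reduced_costs(A, cost, basis):
        # cost - cost_B * B^{-1} A  for the basis given by column indices
        B = A[:, basis]
        y = np.linalg.solve(B.T, cost[basis])    # simplex multipliers
        return cost - y @ A

    def nearly_dual_nondegenerate_at(A, c, d, basis, theta, tol=1e-9):
        # True if at most one nonbasic reduced cost cbar_j + theta*dbar_j is zero.
        cbar = reduced_costs(A, c, basis)
        dbar = reduced_costs(A, d, basis)
        nonbasic = [j for j in range(A.shape[1]) if j not in set(basis)]
        zeros = [j for j in nonbasic if abs(cbar[j] + theta * dbar[j]) <= tol]
        return len(zeros) <= 1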
Consider the usual parametric simplex algorithm for solving P(θ) for all values of θ, starting with a basis that is optimal for all sufficiently large values of θ. In the case that P is dual nondegenerate, we show that the procedure will not cycle (without any perturbations). We then apply this result to give new primal simplex pivot rules that (1) are guaranteed to avoid cycling, and (2) rely only on the selection of the entering variable; i.e., any basic variable satisfying the minimum ratio rule may leave the basis.
The following procedure is a version of the usual parametric simplex method as applied to a minimization problem.

begin
    let B^0 be an optimal basis for P(θ) for all θ ≥ θ^0;
    let i = 1;
    while d̄ ≰ 0 (i.e., some d̄_j > 0) do
    begin
        let B = B^{i-1};
        let c̄, d̄ denote the reduced costs for the vectors c and d with respect to basis B;
        select index s so that -c̄_s/d̄_s = max {-c̄_j/d̄_j : d̄_j > 0};
        let θ^i = -c̄_s/d̄_s;
        if B^{-1}A_s ≤ 0 then quit (since P(θ) is unbounded from below for all θ < θ^i)
        else let B^i be obtained from B by pivoting in x_s and pivoting out any variable
            chosen by the usual simplex pivot rule (i.e., ties in the ratio rule can be
            broken arbitrarily);
        let i = i + 1;
    end
end.
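
As an illustration only, the procedure above can be sketched in code roughly as follows. This is our own minimal numpy sketch, not the authors' implementation; it assumes a feasible starting basis that is optimal for all sufficiently large θ and the dual nondegeneracy discussed below, and it recomputes the basis inverse densely at every step.

    import numpy as np

    def parametric_simplex(A, b, c, d, basis, tol=1e-9):
        # Sweep theta downward from a basis that is optimal for all large theta.
        # Returns (history, basis): history lists the breakpoints (theta_i, basis)
        # visited; basis is None if P(theta) becomes unbounded below.
        m, n = A.shape
        basis = list(basis)
        history = []
        while True:
            B = A[:, basis]
            cbar = c - np.linalg.solve(B.T, c[basis]) @ A    # reduced costs for c
            dbar = d - np.linalg.solve(B.T, d[basis]) @ A    # reduced costs for d
            candidates = [j for j in range(n) if j not in basis and dbar[j] > tol]
            if not candidates:          # dbar <= 0: basis optimal for all smaller theta
                return history, basis
            s = max(candidates, key=lambda j: -cbar[j] / dbar[j])   # entering variable
            theta_i = -cbar[s] / dbar[s]
            direction = np.linalg.solve(B, A[:, s])                 # B^{-1} A_s
            if np.all(direction <= tol):    # unbounded from below for theta < theta_i
                return history, None
            xB = np.linalg.solve(B, b)
            # usual ratio rule; ties may be broken arbitrarily
            ratios = [(xB[r] / direction[r], r) for r in range(m) if direction[r] > tol]
            _, leave = min(ratios)
            basis[leave] = s
            history.append((theta_i, list(basis)))

The returned history records the breakpoints θ^i and the bases B^i visited while θ is driven downward; any tie in the ratio rule is broken by whichever row the min happens to return, in line with the procedure above.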
We note that if d̄ ≤ 0 at any point in this algorithm, then c̄ + θd̄ ≥ c̄ + θ^{i-1}d̄ for all θ ≤ θ^{i-1} and hence B is an optimal basis for all θ ≤ θ^{i-1}.
The iterative modification of θ in the parametric programming procedure can be conceptualized differently, as expressed in the following observation.
REMARK 1. Suppose for some θ* that B = B^{i-1} is an optimal basis for problem P(θ*), and let θ^i be selected as in the "while loop" of the parametric algorithm when applied to this basis B (consequently, c̄ and d̄ are defined by B). Then θ^i = min {θ : B is optimal for P(θ)}.

PROOF. If θ < θ^i, then c̄_s + θd̄_s < 0 by our choice of θ^i, and B is non-optimal. Also, c̄_s + θ^i d̄_s = 0 and c̄_j + θ^i d̄_j ≥ 0 if d̄_j > 0, by our choice of s; and c̄_j + θ^i d̄_j ≥ c̄_j + θ*d̄_j if d̄_j ≤ 0, since θ^i ≤ θ*. But then each c̄_j + θ^i d̄_j ≥ 0, because B is an optimal basis for P(θ*). Since c̄_j + θ^i d̄_j = 0 for every variable j corresponding to a column from B, B is optimal for P(θ^i).

Let B^0 be an optimal basis for P(θ^0) and let B^i for i ≥ 1 be defined recursively as in the parametric procedure. Moreover, let c̄^i and d̄^i be the reduced costs, with respect to the basis B^{i-1}, for c and d of the variable x_s pivoted into B^{i-1} to obtain B^i.
PROPOSITION 1. For all i ≥ 0,
(i)   B^i is optimal in [θ^{i+1}, θ^i];
(ii)  if P(θ) is dual nondegenerate, then θ^{i+1} < θ^i;
(iii) c̄^i < 0 for all i such that θ^i > 0.

PROOF: Part (i) is a consequence of our previous remark, the fact that both B^i and B^{i-1} are optimal at θ^i, and the fact that {θ : B^i is optimal for P(θ)} is an interval.

We prove (ii) via a contradiction. Let i be selected so that θ^{i+1} = θ^i and either i = 0 or else θ^i < θ^{i-1}. Let x_t be the variable pivoted into basis B^i to obtain B^{i+1}, let x_p be the variable pivoted out of basis B^{i-1} to obtain B^i, and let ĉ and d̂ be the reduced costs for c and d with respect to B^i. Then x_t ≠ x_p because d̂_t > 0 and d̂_p < 0 (d̂_p < 0 since ĉ_p + θ^i d̂_p = 0 and ĉ_p + θd̂_p > 0 for θ < θ^i). Moreover, the assumption θ^{i+1} = θ^i implies that ĉ_t + θ^{i+1}d̂_t = ĉ_t + θ^i d̂_t = 0. But then ĉ_t + θ^i d̂_t = ĉ_p + θ^i d̂_p = 0, contradicting the near dual nondegeneracy of P(θ^i).

To see (iii), note that θ^i = -c̄^i/d̄^i and d̄^i > 0.
This proposition implies that the sequence of pivots generated by the parametric algorithm defines a simplex algorithm for the nonparametric problem minimize {cx : Ax = b, x ≥ 0}. We record this result as follows:

COROLLARY. Let B^0, ..., B^t be a sequence of bases obtained by the parametric algorithm, and let t be the first index for which θ^{t+1} is less than 0. Then
(i)   B^t is optimal for min {cx : Ax = b, x ≥ 0};
(ii)  the bases B^0, ..., B^t are distinct;
(iii) the pivot sequence satisfies the usual pivot selection rule, viz., the entering variable has a negative reduced cost for cost vector c.

REMARK 2. Note that Proposition 1 remains valid if we replace dual nondegeneracy with the slightly weaker assumption that the argmax {-c̄_j/d̄_j : d̄_j > 0} is unique and hence the index s is unique at each iteration of the parametric algorithm.
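
As a concrete illustration of the corollary, the hypothetical parametric_simplex sketch given after the procedure above can be driven with d equal to 1 on the nonbasic columns of a feasible starting basis; the data below are ours, chosen only for illustration:

    import numpy as np

    # Illustration only: relies on the parametric_simplex sketch given earlier,
    # with d = 1_N for the feasible slack basis {x3, x4}.
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0, 1.0]])
    b = np.array([4.0, 6.0])
    c = np.array([-3.0, -2.0, 0.0, 0.0])
    basis = [2, 3]                      # slack columns; feasible since b >= 0
    d = np.ones(4); d[basis] = 0.0      # d = 1_N: zero on B, one on N
    history, final_basis = parametric_simplex(A, b, c, d, basis)
    print(history, final_basis)
    # By the corollary: final_basis is optimal for min {cx : Ax = b, x >= 0}, the
    # bases visited are distinct, and every entering variable had cbar_j < 0.

On this small instance a single pivot occurs at θ^1 = 3 (x_1 enters, x_3 leaves), after which d̄ ≤ 0 and the sweep stops with the basis {x_1, x_4}, which is optimal for the nonparametric problem.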
An Application to a Primal Pivot Rule
We next show how to apply the previous proposition to define primal pivoting rules that, without any special provision for choosing the exiting variable, avoid cycling. Our previous results demonstrate that the parametric simplex method defines an anti-cycling pivot sequence for

    minimize      cx
    subject to    Ax = b                          (P)
                  x ≥ 0

whenever we can choose the objective function coefficients c and d so that
(i)  some basis B of A is optimal for all sufficiently large values of θ, and
(ii) P is dual nondegenerate.
To establish these two criteria, let B be any feasible basis for (P) and let N consist of the remaining columns of A. Then any cost vector d with zero components corresponding to the columns of B and with positive components corresponding to the columns of N will satisfy property (i). To ensure that P is dual nondegenerate, however, requires that we avoid ties in computing the entering variables from the parametric ratio test -c̄_s/d̄_s = max {-c̄_j/d̄_j : d̄_j > 0}. One natural approach is to use perturbations of c or d. For example, following the usual (nonparametric) perturbation theory of linear programming, we might perturb the nonbasic columns of c or d by distinct powers of ε for some small ε. Alternatively, we might use a variant of the familiar big M method: choose the nonbasic cost coefficients of d as distinct powers of M for some large constant M. These procedures lead to the following parametric objective functions:

    (1)  c + ε_N + θ1_N,
    (2)  c + θ(ε_N + 1_N),
    (3)  c + θ(1/ε)_N.

In these expressions, ε_N, 1_N, and (1/ε)_N denote vectors with zero components corresponding to columns in B and with components for all columns in N in, say, their natural order from A: ε_N has components ε, ε², ..., ε^{n-m}; 1_N has every component equal to 1; and (1/ε)_N is ε_N with ε replaced by 1/ε. The first two of these objective functions perturb c and d. The third objective function is the big M method alluded to prior to the last display with the choice M = 1/ε. Also, for each of these objective functions, for all 0 < ε < 1, B is a unique optimal basis for sufficiently large values of θ. Therefore, each one defines an anti-cycling pivot rule if P is dual nondegenerate.
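
For concreteness, a small numpy sketch (ours; the function name and interface are not from the paper) that builds the three perturbed objectives as (c-part, d-part) pairs, so that the parametric objective is c-part + θ·(d-part), might look as follows:

    import numpy as np

    def perturbed_objectives(c, basis, eps):
        # Build objectives (1)-(3) above as (c_part, d_part) pairs; the nonbasic
        # columns are taken in their natural order from A.
        n = len(c)
        nonbasic = [j for j in range(n) if j not in set(basis)]
        eps_N = np.zeros(n); one_N = np.zeros(n); inv_eps_N = np.zeros(n)
        for k, j in enumerate(nonbasic, start=1):
            eps_N[j] = eps ** k              # eps, eps^2, ..., eps^(n-m)
            one_N[j] = 1.0                   # 1_N
            inv_eps_N[j] = (1.0 / eps) ** k  # (1/eps)_N, i.e., eps_N with eps -> 1/eps
        c = np.asarray(c, dtype=float)
        return ((c + eps_N, one_N),          # (1)  c + eps_N + theta*1_N
                (c.copy(), one_N + eps_N),   # (2)  c + theta*(eps_N + 1_N)
                (c.copy(), inv_eps_N))       # (3)  c + theta*(1/eps)_N

Any of these pairs could then be handed to a parametric sweep such as the sketch given earlier; for small enough ε the entering-variable ratios become distinct, which is the point of the construction.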
To demonstrate this property, we first establish a preliminary result by a modification of the usual perturbation argument.
PROPOSITION 2. Let A_1, ..., A_m be the columns of some basis B, and let D = (D_1, D_2, ..., D_m) be any other basis, possibly containing some columns of B. Suppose that A = [B, N], that h = ε_N, and that h̄ = h - h_D D^{-1}A. (h_D is the subvector of h corresponding to the columns in D.) Then h̄_i and h̄_j are distinct nonzero polynomials in ε with zero constant terms whenever i and j are indices corresponding to distinct columns of A other than D_1, D_2, ..., D_m.
PROOF: We use a linear independence argument. First, let us duplicate the columns of D and consider the following partitioned matrix:

    M  =  [ B   N   D   ]
          [ 0   I   C_D ],

containing an (n-m) × (n-m) identity matrix I and a submatrix C_D corresponding to the columns D from [B, N]; each column of C_D is either a copy of a column of 0 or a copy of a column of the identity matrix I. We observe that the matrix M has rank n. Let Q = [0, I], and let ε denote the vector (ε, ε², ..., ε^{n-m}). Then h = εQ.

Next consider the following matrix M' obtained by pivoting on the basis D in M:

    M' =  [ D^{-1}B   D^{-1}N   I ]
          [ C_B       C_N       0 ].

Let Q' = [C_B, C_N] denote the submatrix of M' formed by its last n-m rows and first n columns. (By the pivot operation, Q' = Q - C_D D^{-1}[B, N].) Then Q' has full row rank n-m because M has full row rank n, and M' is obtained from M by pivoting. Moreover, the last m columns of M' are copies of columns among the first n columns. The column of Q' corresponding to each of these "original columns" must be a 0 vector. Deleting these m zero vectors leaves an (n-m) × (n-m) nonsingular submatrix Q''.

Finally, observe that h̄ = h - h_D D^{-1}A = εQ - εC_D D^{-1}A = εQ'. Consequently, the two polynomials h̄_i and h̄_j referred to in the statement of the proposition may be obtained from two distinct elements of the vector εQ'' and are thus distinct polynomials in ε with zero constant terms.
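
As a quick numerical sanity check of this construction (our own toy example, not from the paper), one can form Q' for a small A and a basis D and confirm that the columns outside D are nonzero and pairwise distinct, so the corresponding polynomials εQ'_j are distinct with zero constant terms:

    import numpy as np

    A = np.array([[1.0, 0.0, 2.0, 1.0],
                  [0.0, 1.0, 1.0, 3.0]])     # A = [B, N] with B = first two columns
    m, n = A.shape
    D_cols = [1, 2]                          # another basis D, sharing one column with B
    D = A[:, D_cols]
    Q = np.hstack([np.zeros((n - m, m)), np.eye(n - m)])   # h = (eps, eps^2, ...) @ Q
    Qprime = Q - Q[:, D_cols] @ np.linalg.solve(D, A)      # Q' = Q - C_D D^{-1} A
    outside = [j for j in range(n) if j not in D_cols]
    cols = [tuple(np.round(Qprime[:, j], 12)) for j in outside]
    assert all(any(abs(x) > 1e-12 for x in col) for col in cols)   # nonzero columns
    assert len(set(cols)) == len(cols)                             # pairwise distinct
    print(Qprime)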
Now consider the selection rule for the incoming variable at any point in the parametric simplex method. Assume that the current basis is D and that c̄ and d̄ are the reduced costs of c and d = 1_N with respect to this basis. As in Proposition 2, let h = ε_N. Then the ratio rule for choosing the incoming variable for the three objective functions (1), (2) and (3) becomes:

    (1')  max {-(c̄_j + h̄_j)/d̄_j : d̄_j > 0},
    (2')  max {-c̄_j/(d̄_j + h̄_j) : d̄_j > 0}, and
    (3')  max {-c̄_j/h̄_j(1/ε) : h̄_j(1/ε) > 0}.

In (3'), h̄_j(1/ε) denotes the polynomial in 1/ε obtained by replacing ε by 1/ε in h̄_j. By the usual perturbation argument, if ε is sufficiently small (i.e., ε < ε(D) for some constant ε(D) < 1), then a single index j gives the unique maximum in each of these ratios. Consequently, for all ε < min {ε(D) : D is a basis of A}, the entering variable is unique and Proposition 1 (see Remark 2 as well) and its Corollary apply. (Similarly, if we express the reduced cost of x_j for D as a function of ε and θ, then by Proposition 2, each reduced cost is a different nonconstant polynomial in ε and is linear in θ. Therefore, if ε is sufficiently small, the value of θ for which each reduced cost is 0 is different for all nonbasic variables j. That is, P is dual nondegenerate.)
Each of the perturbations (1), (2), and (3) leads to different pivot rules that can be interpreted as certain lexicographic selection procedures. Since our purpose in this note has been merely to establish the possibility of simplex pivot rules that do not restrict the leaving variable, we will not specify the details of these procedures, nor do we discuss their computation requirements (or claim that they are efficient).
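
Although the paper leaves those details unspecified, one possible concrete reading of the lexicographic selection for rule (1') is sketched below; this is our construction, not the authors' procedure, and the names (B_cols for the reference basis B defining d = 1_N and h = ε_N, D_cols for the current basis D) are ours. It compares each candidate's ratio as a polynomial in ε, i.e., by its value for all sufficiently small ε > 0, so no numerical ε has to be chosen:

    import numpy as np

    def entering_variable_rule_1prime(A, c, B_cols, D_cols, tol=1e-9):
        # Pick the entering variable by comparing -(cbar_j + hbar_j)/dbar_j
        # lexicographically in the coefficients of eps (constant term first).
        m, n = A.shape
        N_cols = [j for j in range(n) if j not in set(B_cols)]
        D = A[:, D_cols]
        Dinv_A = np.linalg.solve(D, A)
        def reduce_row(row):                  # row - row_D D^{-1} A
            return row - row[D_cols] @ Dinv_A
        cbar = reduce_row(np.asarray(c, dtype=float))
        d = np.zeros(n); d[N_cols] = 1.0      # d = 1_N
        dbar = reduce_row(d)
        Q = np.zeros((n - m, n)); Q[np.arange(n - m), N_cols] = 1.0   # h = eps @ Q
        Qbar = Q - Q[:, D_cols] @ Dinv_A      # hbar = eps @ Qbar (Proposition 2)
        best, best_key = None, None
        for j in range(n):
            if j in D_cols or dbar[j] <= tol:
                continue
            # leading term, then the eps, eps^2, ... coefficients of -hbar_j/dbar_j
            key = (-cbar[j] / dbar[j], *(-Qbar[:, j] / dbar[j]))
            if best is None or key > best_key:
                best, best_key = j, key
        return best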
In concluding this section, we might note that the parametrics (or homotopies) seem essential for the results given in this paper. Simply perturbing the objective function to avoid ties in the selection of an entering variable will not suffice. For example, in standard examples of cycling in the simplex method, there is no dual degeneracy and the choice of an entering variable is unique.
Dual Pivot Rules
Arguments similar to those used previously apply to right-hand side parametrics in a dual simplex algorithm. That is, consider the parametric problem

    minimize      cx
    subject to    Ax = b + θg                     P'(θ)
                  x ≥ 0.
We say that this problem is primal nondegenerate if for any dual feasible basis B (i.e., c - c_B B^{-1}A ≥ 0) and parameter value θ', there is at most one basic variable x_j whose value is zero. Then, assuming that B^0 is an optimal basis for sufficiently large values of the parameter θ and assuming primal nondegeneracy, we can mimic our earlier arguments to show that the usual parametric dual simplex method will (i) compute optimal solutions to P'(θ) for all θ ≤ θ^0, and (ii) for values of θ ≥ 0, define a dual simplex method (i.e., at any step i, the leaving variable x_r satisfies b̄_r = [(B^i)^{-1}b]_r < 0).
Consequently, the primal nondegeneracy assumption results in a finitely convergent dual simplex algorithm that permits any variable satisfying the dual minimum ratio rule to enter the basis. In order to ensure primal nondegeneracy, we can introduce right hand side perturbations; that is, consider right hand sides such as the following [here ε is a column vector defined by ε = (ε, ε², ..., ε^m)]:

    (i)  b + θg + ε,    or
    (ii) b + θ(g + ε).

Choosing the m-vector g so that (B^0)^{-1}g > 0 will ensure that B^0 is a unique optimal basis for sufficiently large values of θ and that this perturbation will ensure primal nondegeneracy.
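
A minimal numpy sketch of this right-hand-side perturbation (ours; the function names are illustrative), covering alternative (i) and the stated condition on g, could read:

    import numpy as np

    def perturbed_rhs(b, g, eps, theta):
        # Perturbation (i) above: b + theta*g + (eps, eps^2, ..., eps^m).
        m = len(b)
        eps_vec = np.array([eps ** (k + 1) for k in range(m)])
        return np.asarray(b, dtype=float) + theta * np.asarray(g, dtype=float) + eps_vec

    def g_makes_B0_optimal_for_large_theta(A, B0_cols, g):
        # Condition used above: (B0)^{-1} g > 0, so x_B = (B0)^{-1}(b + theta*g)
        # is positive, and B0 is the basis of choice, for all sufficiently large theta.
        B0 = A[:, list(B0_cols)]
        return bool(np.all(np.linalg.solve(B0, g) > 0))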
Acknowledgment
We are grateful to Robert Freund for a suggestion that led to an
improvement in the proof of Proposition 2.
References
Adler, I. (1983).
"The Expected number of pivots needed to solve
parametric linear programs and the efficency of the self-dual
simplex method," Technical Report, Department of Industrial
Engineering and Operations Research, University of California,
Berkeley, California.
Bland, R.G. (1977).
New finite pivoting rules for the simplex method.
Math. of Oper. Research
2, 103-107.
Borgwardt, K.H. (1982).
"The average number of steps required by the
simplex method is polynomial," Zeitschrift fur Operations
Research
26, 157-177.
Charnes, A. (1952).
Optimality and degeneracy in linear programming.
Econometrica
20, 160-170.
Dantzig, G.B. (1951).
Maximization of a linear function of variables
subject to linear inequalities.
In T.C. Koopmans (ed.), Activity
Analysis of Production and Allocation, John Wiley and Sons, Inc.,
New York.
Haimovich, M. (1983).
"The simplex method is very good! - On the
expected number of pivot steps and related properties of random
linear programs," Technical Report, Columbia University, New
York.
Smale, S. (1983).
"On the average number of steps of the simplex
method of linear programming," Mathematical Programming
27, 241-262.
Wolfe, P. (1963).
A technique for resolving degeneracy in linear
programming.
J. SIAM
11, 205-211.