AN ANALOG OF KARMARKAR'S ALGORITHM

AN ANALOG OF KARMARKAR'S ALGORITHM
FOR INEQUALITY CONSTRAINED LINEAR PROGRAMS,
WITH A 'NEW' CLASS OF PROJECTIVE TRANSFORMATIONS
FOR CENTERING A POLYTOPE
Robert M. Freund
Sloan W.P. No. 1921-87
August 1987
This research was done while the author was a Visiting Scientist at Cornell
University.
Research supported in part by ONR Contract N00014-87-K-0212.
Author's address:
Professor Robert M. Freund
M.I.T., Sloan School of Management
50 Memorial Drive
Cambridge, MA 02139
ABSTRACT
We present an algorithm, analogous to Karmarkar's algorithm, for the
linear programming problem:
maximize c^T x subject to Ax ≤ b, which works directly in the space of linear
inequalities. The main new idea in this algorithm is the simple construction of
a projective transformation of the feasible region that maps the current
iterate x̄ to the analytic center of the transformed feasible region.
Key words:
Linear Program, Projective Transformation, Polytope, Center,
Karmarkar's algorithm.
1. Introduction.  Karmarkar's algorithm [4] is designed to work in the space of feasible solutions to the system Ax = 0, e^T x = 1, x ≥ 0. Although any bounded system of linear inequalities Ax ≤ b can be linearly transformed into an instance of this particular system, it is more natural to work in the space of linear inequalities Ax ≤ b directly. From a researcher's point of view, the system Ax ≤ b is often easier to conceptualize and can be "seen" graphically, directly, if x is two- or three-dimensional, regardless of the number of constraints. Herein, we present an algorithm, analogous to Karmarkar's algorithm, that solves a linear program of the form: maximize c^T x subject to Ax ≤ b, under assumptions similar to those employed in Karmarkar's algorithm. Our algorithm performs a simple projective transformation of the feasible region that maps the current iterate x̄ to the analytic center of the transformed feasible region.
2. Notation.  Let e be the vector of ones, namely e = (1, 1, ..., 1)^T. If s is a vector in R^m, then S refers to the diagonal matrix whose diagonal entries are the components of s, i.e., S = diag(s_1, ..., s_m). Also, R^n_+ = {x ∈ R^n | x ≥ 0}.
3. The Algorithm.  Our interest is in solving the linear program

    P:   maximize    c^T x
         subject to  Ax ≤ b ,

where A is a matrix of size m × n. We assume that the feasible region X = {x ∈ R^n | Ax ≤ b} is bounded, has a nonempty interior, and that U, the optimal value of P, is known in advance. Furthermore, we assume that we have an initial point x^0 whose associated slack vector s^0 = b - Ax^0 satisfies:

    e^T S_0^{-1} A = 0   and   s^0 = e .                                   (1)

Condition (1) may appear to be restrictive, but we shall exhibit an elementary projective transformation that enables us to assume (1) holds without any loss of generality. The first condition of (1) is simply the necessary and sufficient condition for x^0 ∈ int X to be the center of the system of linear inequalities Ax ≤ b (see, e.g., Sonnevend [5]), which is reviewed below. The second condition is that the rows of Ax ≤ b have been scaled so that at x^0 the slack on each constraint is one. The algorithm is then stated as follows:
For k = 0, 1, ..., do:

Step k:  Define

    s^k = b - Ax^k ,   λ^k = (1/m) S_k^{-1} e ,   y^k = A^T λ^k ,   v_k = U - c^T x^k ,

and set

    Ā_k = S_k^{-1} A - e (y^k)^T ,    b̄_k = S_k^{-1} b - e (y^k)^T x^k ,
    c̄_k = c - v_k y^k ,               Ū_k = U - v_k (y^k)^T x^k .

Define the search direction

    d^k = (Ā_k^T Ā_k)^{-1} c̄_k / ( c̄_k^T (Ā_k^T Ā_k)^{-1} c̄_k )^{1/2}

and set

    x^{k+1} = x^k + α d^k / ( 1 + (y^k)^T (α d^k) ) ,

where α > 0 is a steplength, e.g., α = 1/3.
Note immediately that all of the effort lies in computing (Ā_k^T Ā_k)^{-1} c̄_k. Expanding, we see that

    Ā_k^T Ā_k = A^T S_k^{-2} A + y^k ( -e^T S_k^{-1} A + (e^T e)(y^k)^T ) - ( A^T S_k^{-1} e )(y^k)^T ,

which is computed as two rank-1 updates of the matrix A^T S_k^{-2} A. Thus, as in Karmarkar's algorithm, the computational burden lies in solving (A^T S_k^{-2} A)^{-1} c̄_k efficiently.
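To make Step k concrete, here is a minimal numerical sketch of one iteration, written directly from the formulas above. It is only an illustration: the function name, the toy interface (the data A, b, c, the known optimal value U, and an interior iterate x supplied by the caller), and the dense linear solve are our assumptions, not part of the paper.

```python
import numpy as np

def iterate(A, b, c, U, x, alpha=1.0 / 3.0):
    """One step of the algorithm of Section 3 (an illustrative sketch)."""
    m = A.shape[0]
    s = b - A @ x                      # slack vector s^k = b - A x^k (must be > 0)
    lam = (1.0 / m) / s                # lambda^k = (1/m) S_k^{-1} e
    y = A.T @ lam                      # y^k = A^T lambda^k
    v = U - c @ x                      # v_k = U - c^T x^k
    A_bar = A / s[:, None] - np.outer(np.ones(m), y)   # S_k^{-1} A - e (y^k)^T
    c_bar = c - v * y                  # c_bar_k = c - v_k y^k
    M = A_bar.T @ A_bar                # = A^T S_k^{-2} A plus two rank-1 updates
    w = np.linalg.solve(M, c_bar)      # (A_bar_k^T A_bar_k)^{-1} c_bar_k
    d = w / np.sqrt(c_bar @ w)         # normed search direction d^k
    step = alpha * d
    return x + step / (1.0 + y @ step) # x^{k+1}
```

The sketch forms only the quantities needed for the update; a serious implementation would exploit the rank-1 structure noted above rather than rebuilding Ā_k^T Ā_k from scratch at every iteration.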
To measure the performance of the algorithm, we will use the potential function

    F(x) = m ln( U - c^T x ) - Σ_{i=1}^m ln( b - Ax )_i ,

which is defined for all x in the interior of the feasible region.
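For reference, the potential function is equally simple to evaluate. This small sketch follows the same illustrative conventions as the code above and returns +∞ outside the region where F is defined.

```python
import numpy as np

def potential(A, b, c, U, x):
    """F(x) = m ln(U - c^T x) - sum_i ln(b - Ax)_i (illustrative sketch)."""
    s = b - A @ x
    if np.any(s <= 0) or U - c @ x <= 0:
        return np.inf                  # F is defined only on the interior with c^T x < U
    return A.shape[0] * np.log(U - c @ x) - np.sum(np.log(s))
```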
We will show below that this algorithm is an analog of Karmarkar's algorithm, by tracking how the algorithm performs in the slack space. Thus, let us rewrite P as

    P:   maximize    c^T x
          x,s
         subject to  Ax + s = b
                     s ≥ 0 ,

and define the primal feasible space as

    (X; S) = { (x; s) ∈ R^n × R^m | Ax + s = b, s ≥ 0 } .

We also define the slack space alone to be

    S = { s ∈ R^m | s ≥ 0, s = b - Ax for some x ∈ R^n } .
We first must develop some results regarding projective transformations of the spaces X and S.

4. A Class of Projective Transformations of X and S.  Suppose we have a given point x̄ in the interior of X, and let s̄ = b - Ax̄, so that Ax̄ + s̄ = b and s̄ > 0. Let v = U - c^T x̄, and let ȳ be a given vector that satisfies ȳ^T (x - x̄) < 1 for all x ∈ X. Then the projective transformation (z; r) = g(x; s) given by

    z = x̄ + (x - x̄) / ( 1 - ȳ^T (x - x̄) ) ,      r = S̄^{-1} s / ( 1 - ȳ^T (x - x̄) )           (2)

is well-defined for all (x; s) ∈ (X; S). Furthermore, direct substitution shows that for (x; s) ∈ (X; S), the z and r defined by (z; r) = g(x; s) must satisfy

    (c - vȳ)^T z ≤ U - vȳ^T x̄ ,      (S̄^{-1} A - eȳ^T) z + r = S̄^{-1} b - eȳ^T x̄ ,   and   r ≥ 0 .

Thus, we can define the linear program

    P_{ȳ,x̄}:   maximize    c̄^T z
               subject to  Āz + r = b̄
                           r ≥ 0 ,

where (z; r) = g(x; s) and

    Ā = S̄^{-1} A - eȳ^T ,    b̄ = S̄^{-1} b - eȳ^T x̄ ,    c̄ = c - vȳ ,    Ū = U - vȳ^T x̄ .       (3)

Associated with this linear system we define

    (Z; R) = { (z; r) ∈ R^n × R^m | Āz + r = b̄, r ≥ 0 }

and define Z and R analogously.
Note that the condition ȳ^T (x - x̄) < 1 holds for all x ∈ X if and only if ȳ lies in the interior of the polar of (X - x̄), defined as

    (X - x̄)° = { y ∈ R^n | y^T x ≤ 1 for all x ∈ (X - x̄) } .

It can be shown, because X is bounded and has a nonempty interior, that

    (X - x̄)° = { y ∈ R^n | y = A^T λ for some λ ≥ 0 satisfying λ^T s̄ = 1 } ,

and that y ∈ int (X - x̄)° if y = A^T λ for some λ > 0 satisfying λ^T s̄ = 1. In this case we have the following:

Lemma 1.  If ȳ = A^T λ where λ > 0 and λ^T s̄ = 1, then ȳ^T (x - x̄) < 1 for all x ∈ X (and hence the transformation (z; r) = g(x; s) is well-defined), and g has an inverse (x; s) = h(z; r) given by

    x = x̄ + (z - x̄) / ( 1 + ȳ^T (z - x̄) ) ,      s = S̄ r / ( 1 + ȳ^T (z - x̄) ) .              (4)

If (z; r) = g(x; s), then (x; s) ∈ (X; S) if and only if (z; r) ∈ (Z; R). Furthermore, any of the constraints in c^T x ≤ U, Ax ≤ b is satisfied at equality if and only if the corresponding constraint of c̄^T z ≤ Ū, Āz ≤ b̄ is satisfied at equality.
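Lemma 1 is easy to check numerically. The sketch below implements g and its claimed inverse h on a small randomly generated instance satisfying the hypotheses ȳ = A^T λ, λ > 0, λ^T s̄ = 1; the instance and all variable names are illustrative assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x, s, x_bar, s_bar, y_bar):
    t = 1.0 - y_bar @ (x - x_bar)          # 1 - y_bar^T (x - x_bar), positive on X
    return x_bar + (x - x_bar) / t, (s / s_bar) / t

def h(z, r, x_bar, s_bar, y_bar):
    t = 1.0 + y_bar @ (z - x_bar)          # inverse map (4)
    return x_bar + (z - x_bar) / t, (s_bar * r) / t

# illustrative data: any x_bar, slack s_bar > 0, and lambda > 0 with lambda^T s_bar = 1
A = rng.standard_normal((6, 2)); x_bar = rng.standard_normal(2)
s_bar = rng.uniform(0.5, 2.0, 6)
lam = rng.uniform(0.1, 1.0, 6); lam /= lam @ s_bar
y_bar = A.T @ lam
b = A @ x_bar + s_bar

x = x_bar + 0.1 * rng.standard_normal(2)   # a nearby point of the system
s = b - A @ x
z, r = g(x, s, x_bar, s_bar, y_bar)
x2, s2 = h(z, r, x_bar, s_bar, y_bar)
assert np.allclose(x, x2) and np.allclose(s, s2)   # h inverts g
```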
(The transformation g(·;·) is a slight modification of a projective transformation presented as an exercise in Grunbaum [3], page 48. Grunbaum notes that if X is a polytope containing the origin in its interior, then its polar X° is also a polytope containing the origin in its interior, and that for y ∈ int X°, the set {z ∈ R^n | z = x / (1 - y^T x) for some x ∈ X} results in a projective transformation of X. Our transformation g(·;·) is a translation of this transformation by x̄.)

Recall that the center of the system of inequalities Ax ≤ b is that point x that maximizes Π_{i=1}^m (b - Ax)_i, or equivalently, Σ_{i=1}^m ln (b - Ax)_i; see, e.g., Sonnevend [5]. Under our assumption of boundedness and full dimensionality, it is straightforward to show that x is the center of the system Ax ≤ b if and only if e^T S^{-1} A = 0, where s = b - Ax and s > 0.

We next construct a specific ȳ that will ensure that x̄ becomes the center of the projected polytope Z. If we define λ = (1/m) S̄^{-1} e and ȳ = A^T λ, then because λ > 0 and λ^T s̄ = 1, we know that ȳ lies in the interior of (X - x̄)°, and so ȳ can be used to define g(·;·). We have:

Lemma 2.  Let (x̄; s̄) ∈ (X; S) be given, and suppose s̄ > 0. Let λ = (1/m) S̄^{-1} e and ȳ = A^T λ. Then:

(i)   λ^T s̄ = 1 and ȳ^T (x - x̄) < 1 for all x ∈ X, and by defining g(·;·) by (2), we have (x̄; e) = g(x̄; s̄);

(ii)  x̄ is the center of the system Āz ≤ b̄, where Ā and b̄ are defined as in (3);

(iii) the set R is contained in the standard simplex in R^m, i.e., R ⊂ { r ∈ R^m | e^T r = m, r ≥ 0 } .
Part (i) of Lemma 2 is obvious. To see (ii), note that e is the slack vector associated with x̄ in P_{ȳ,x̄}, and so we must show that e^T Ā = 0. This derivation is

    e^T Ā = e^T ( S̄^{-1} A - eȳ^T ) = e^T S̄^{-1} A - (e^T e)(1/m) e^T S̄^{-1} A = 0 .

For part (iii), note that for any r ∈ R, e^T r = m, namely

    e^T r = e^T ( b̄ - Āz ) = e^T b̄ = e^T ( S̄^{-1} b - eȳ^T x̄ )
          = e^T S̄^{-1} b - (e^T e)(1/m) e^T S̄^{-1} A x̄ = e^T S̄^{-1} ( b - Ax̄ ) = e^T S̄^{-1} s̄ = m .
Lemma 2 demonstrates that the projective transformation g(·;·) transforms the slack space S to a subspace of the standard simplex, and transforms the current slack s̄ to the center e of the standard simplex. Thus the projective transformation g(·;·) corresponds to Karmarkar's projective transformation.
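The centering claims of Lemma 2 can also be verified numerically. The sketch below builds Ā and b̄ from an arbitrary interior point of a small random system and checks that the transformed slack at x̄ is e, that e^T Ā = 0, and that every transformed slack vector sums to m; the instance is again an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 8
A = rng.standard_normal((m, n))
x_bar = rng.standard_normal(n)
s_bar = rng.uniform(0.5, 2.0, m)          # any positive slack vector
b = A @ x_bar + s_bar                     # so x_bar is interior with slack s_bar

lam = (1.0 / m) / s_bar                   # lambda = (1/m) S_bar^{-1} e
y_bar = A.T @ lam                         # y_bar = A^T lambda
A_bar = A / s_bar[:, None] - np.outer(np.ones(m), y_bar)
b_bar = b / s_bar - np.ones(m) * (y_bar @ x_bar)

assert np.allclose(b_bar - A_bar @ x_bar, np.ones(m))   # transformed slack at x_bar is e
assert np.allclose(np.ones(m) @ A_bar, 0.0)             # e^T A_bar = 0: x_bar is the center
# any transformed slack r = b_bar - A_bar z sums to m (part (iii) of Lemma 2)
z = x_bar + 0.05 * rng.standard_normal(n)
assert np.isclose(np.ones(m) @ (b_bar - A_bar @ z), m)
```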
We also have:

Lemma 3.  The potential function of problem P_{ȳ,x̄} defined as

    G(z) = m ln( Ū - c̄^T z ) - Σ_{i=1}^m ln( b̄ - Āz )_i

differs from F(x) by the constant Σ_{i=1}^m ln( s̄_i ).

Thus, as in Karmarkar's algorithm, changes in the potential function are invariant under the projective transformation g(·;·).
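A quick numerical check of Lemma 3, under the same illustrative conventions as the earlier sketches; the value of U used here is an arbitrary number exceeding c^T x̄, since the identity G(z) - F(x) = Σ_i ln(s̄_i) is purely algebraic.

```python
import numpy as np

def F(A, b, c, U, x):
    """Potential m ln(U - c^T x) - sum_i ln(b - Ax)_i (illustrative sketch)."""
    s = b - A @ x
    return len(s) * np.log(U - c @ x) - np.sum(np.log(s))

rng = np.random.default_rng(2)
n, m = 2, 8
A = rng.standard_normal((m, n)); x_bar = rng.standard_normal(n)
s_bar = rng.uniform(0.5, 2.0, m); b = A @ x_bar + s_bar
c = rng.standard_normal(n); U = c @ x_bar + 1.0           # any U with U > c^T x_bar
v = U - c @ x_bar
lam = (1.0 / m) / s_bar; y_bar = A.T @ lam
A_bar = A / s_bar[:, None] - np.outer(np.ones(m), y_bar)  # data (3) of P_{y,x}
b_bar = b / s_bar - np.ones(m) * (y_bar @ x_bar)
c_bar = c - v * y_bar
U_bar = U - v * (y_bar @ x_bar)

x = x_bar + 0.02 * rng.standard_normal(n)                 # another interior point
t = 1.0 - y_bar @ (x - x_bar)
z = x_bar + (x - x_bar) / t                               # z-part of (z;r) = g(x;s)
lhs = F(A_bar, b_bar, c_bar, U_bar, z) - F(A, b, c, U, x)
assert np.isclose(lhs, np.sum(np.log(s_bar)))             # G(z) - F(x) = sum_i ln(s_bar_i)
```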
5. Analysis of the Algorithm.  Here we examine a particular iterate of the algorithm. To ease the notational burden, the subscript k is suppressed. Let x̄ be the current iterate. Then s̄ = b - Ax̄ > 0 and v = U - c^T x̄. The vector λ = (1/m) S̄^{-1} e and the vector ȳ = A^T λ are constructed, and Ā, b̄, c̄, Ū are given as in (3). The problem P is transformed to P_{ȳ,x̄}, and the slack vector s̄ is transformed by g(·;·) to the center e of the standard simplex, as in Karmarkar's algorithm. Changes in the potential function are invariant under this transformation, by Lemma 3.
We now need to show that the direction d given by

    d = (Ā^T Ā)^{-1} c̄ / ( c̄^T (Ā^T Ā)^{-1} c̄ )^{1/2}

corresponds to Karmarkar's search direction. Karmarkar's search direction is the projected gradient of the objective function in the transformed problem. Because the feasible region X of P is bounded, A (and hence Ā) has full column rank. Thus (Ā^T Ā)^{-1} exists, and so Āz + r = b̄ is equivalent to z = (Ā^T Ā)^{-1} Ā^T (b̄ - r). Substituting this last expression in P_{ȳ,x̄}, we form the equivalent linear program:

    minimize     ( Ā (Ā^T Ā)^{-1} c̄ )^T r
    subject to   ( I - Ā (Ā^T Ā)^{-1} Ā^T ) r = ( I - Ā (Ā^T Ā)^{-1} Ā^T ) b̄
                 r ≥ 0 .

(This transformation is also used in Gay [2].) The objective function vector Ā (Ā^T Ā)^{-1} c̄ of this program lies in the null space of the constraint matrix ( I - Ā (Ā^T Ā)^{-1} Ā^T ), so that the normed direction

    p = -Ā (Ā^T Ā)^{-1} c̄ / ( c̄^T (Ā^T Ā)^{-1} c̄ )^{1/2}

is Karmarkar's normed direction. In the space Z, this direction p corresponds to

    d = (Ā^T Ā)^{-1} c̄ / ( c̄^T (Ā^T Ā)^{-1} c̄ )^{1/2} ,

i.e., d is the unique direction satisfying Ād + p = 0. Thus we see that the direction d given in the algorithm corresponds to Karmarkar's normed direction p. Following Todd and Burrell [7], using a steplength of α = 1/3 will guarantee a decrease in the potential function F(x) of at least 1/5. Furthermore, as suggested in Todd and Burrell [7], performing a linesearch of F(·) on the line x̄ + αd, α > 0, is advantageous.

The next iterate, in Z space, is the point x̄ + αd. We projectively transform back to X space using the inverse transformation h(·;·) given by (4), to obtain the new iterate in X space, which is

    x̄ + αd / ( 1 + ȳ^T (αd) ) .
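The correspondence between d and Karmarkar's direction p can likewise be checked numerically. The sketch below verifies Ād + p = 0, e^T p = 0, and ||p|| = 1 on an illustrative random instance; all names and the choice of v are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 10
A = rng.standard_normal((m, n)); x_bar = rng.standard_normal(n)
s_bar = rng.uniform(0.5, 2.0, m); b = A @ x_bar + s_bar
c = rng.standard_normal(n); v = 1.0                       # v = U - c^T x_bar, any positive value here

lam = (1.0 / m) / s_bar; y_bar = A.T @ lam
A_bar = A / s_bar[:, None] - np.outer(np.ones(m), y_bar)
c_bar = c - v * y_bar

w = np.linalg.solve(A_bar.T @ A_bar, c_bar)
norm = np.sqrt(c_bar @ w)
d = w / norm                                              # direction used by the algorithm
p = -(A_bar @ w) / norm                                   # Karmarkar's normed direction in r-space

assert np.allclose(A_bar @ d + p, 0.0)                    # A_bar d + p = 0
assert np.isclose(np.ones(m) @ p, 0.0)                    # p stays in the slice e^T r = m
assert np.isclose(np.linalg.norm(p), 1.0)                 # p has unit length
```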
6. Remarks:

a. Getting Started.

We required, in order to start the algorithm, that the initial point x^0 satisfy condition (1). This condition requires that x^0 be the center of the system of inequalities Ax ≤ b and that the rows of this system be scaled so that b - Ax^0 = e. If our initial point x^0 does not satisfy this condition, we simply perform one projective transformation to transform the linear inequality system into the required form. That is, we set s^0 = b - Ax^0 and v_0 = U - c^T x^0, and define

    y^0 = (1/m) A^T S_0^{-1} e ,
    Ā_0 = S_0^{-1} A - e (y^0)^T ,      b̄_0 = S_0^{-1} b - e (y^0)^T x^0 ,
    c̄_0 = c - v_0 y^0 ,                 Ū_0 = U - v_0 (y^0)^T x^0 .

Then, exactly as in Lemma 2, x^0 is the center of the system Ā_0 x ≤ b̄_0, e = b̄_0 - Ā_0 x^0, and e^T Ā_0 = 0. Our initial linear program, of course, becomes

    maximize    c̄_0^T x
    subject to  Ā_0 x ≤ b̄_0 ,

with known optimal value Ū_0.
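The one-time transformation just described is mechanical; here is a sketch under the same illustrative conventions as the earlier code fragments (the function name and the random test instance are assumptions, not part of the paper).

```python
import numpy as np

def transform_to_standard_form(A, b, c, U, x0):
    """Recenter/rescale A x <= b so that x0 satisfies condition (1); illustrative sketch."""
    m = A.shape[0]
    s0 = b - A @ x0                      # must be strictly positive
    v0 = U - c @ x0
    y0 = (1.0 / m) * (A.T @ (1.0 / s0))  # y^0 = (1/m) A^T S_0^{-1} e
    A0 = A / s0[:, None] - np.outer(np.ones(m), y0)
    b0 = b / s0 - np.ones(m) * (y0 @ x0)
    c0 = c - v0 * y0
    U0 = U - v0 * (y0 @ x0)
    return A0, b0, c0, U0

# after the transformation, x0 is the center of A0 x <= b0 with unit slacks:
rng = np.random.default_rng(4)
A = rng.standard_normal((7, 2)); x0 = rng.standard_normal(2)
b = A @ x0 + rng.uniform(0.5, 2.0, 7)
c = rng.standard_normal(2); U = c @ x0 + 1.0              # illustrative U > c^T x0
A0, b0, c0, U0 = transform_to_standard_form(A, b, c, U, x0)
assert np.allclose(b0 - A0 @ x0, 1.0) and np.allclose(np.ones(7) @ A0, 0.0)
```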
b. Complexity and Inscribed/Circumscribed Ellipsoids.

The algorithm retains the exact polynomial complexity of Karmarkar's principal algorithm, namely it solves P in O(m^4 L) arithmetic operations. However, using Karmarkar's methodology for modifying (Ā_k^T Ā_k)^{-1}, the modified algorithm should solve P in O(m^{3.5} L) arithmetic operations. One of the constructions that Karmarkar's algorithm uses is that the ratio of the radius of the largest inscribed ball in the standard simplex to the radius of the smallest circumscribed ball is 1/(m - 1). In our system, this translates to the fact that if x̄ lies in the interior of the polytope X = {x ∈ R^n | Ax ≤ b}, then there are ellipsoids E_in and E_out, each centered at x̄, for which

    E_in ⊂ { z ∈ R^n | Āz ≤ b̄ } ⊂ E_out    and    (E_in - x̄) = ( 1/(m - 1) ) (E_out - x̄) ,

where, of course, {z ∈ R^n | Āz ≤ b̄} is the transformed polytope. This result was proven for the center of the system Ax ≤ b by Sonnevend [5]. We can explicitly characterize E_in and E_out by

    E_in  = { z ∈ R^n | (z - x̄)^T Ā^T Ā (z - x̄) ≤ m/(m - 1) }    and
    E_out = { z ∈ R^n | (z - x̄)^T Ā^T Ā (z - x̄) ≤ m(m - 1) } ,

where Ā = S̄^{-1} A - eȳ^T, ȳ = (1/m) A^T S̄^{-1} e, and s̄ = b - Ax̄.
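A direction-by-direction numerical check of the containments E_in ⊂ {z | Āz ≤ b̄} ⊂ E_out, on an illustrative bounded instance (the rows ±I are included only to guarantee boundedness of the feasible region; everything else is an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2
A = np.vstack([rng.standard_normal((5, n)), np.eye(n), -np.eye(n)])   # +-I rows keep X bounded
m = A.shape[0]
x_bar = rng.standard_normal(n)
s_bar = rng.uniform(0.5, 2.0, m)
b = A @ x_bar + s_bar                                                 # x_bar interior, slack s_bar

y_bar = (1.0 / m) * (A.T @ (1.0 / s_bar))                             # y_bar = (1/m) A^T S_bar^{-1} e
A_bar = A / s_bar[:, None] - np.outer(np.ones(m), y_bar)
b_bar = b / s_bar - np.ones(m) * (y_bar @ x_bar)
M = A_bar.T @ A_bar                                                   # quadratic form of E_in, E_out

for _ in range(200):
    u = rng.standard_normal(n); u /= np.linalg.norm(u)
    # a boundary point of E_in along u lies inside the transformed polytope
    z_in = x_bar + np.sqrt((m / (m - 1.0)) / (u @ M @ u)) * u
    assert np.all(A_bar @ z_in <= b_bar + 1e-9)
    # the boundary point of the transformed polytope along u lies inside E_out
    t_max = 1.0 / np.max(A_bar @ u)        # slack at x_bar is e, so this reaches the first facet
    z_out = x_bar + t_max * u
    assert (z_out - x_bar) @ M @ (z_out - x_bar) <= m * (m - 1.0) + 1e-9
```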
Using this ellipsoid construction, we could prove the complexity bound
for the algorithm directly, without resorting to Karmarkar's results.
But
inasmuch as this algorithm was developed to be an analog of Karmarkar's for
linear inequality systems, it is fitting to place it in this perspective.
c. Other Results.
The methodology presented herein can be used to translate other
results regarding Karmarkar's algorithm to the space of linear inequality
constraints.
Although we assume that the objective function value is known
in advance, this assumption can be relaxed.
The results on objective
function value bounding (see Todd and Burrell [7]
and
Anstreicher [1]),
dual variables (see Todd and Burrell [7] and Ye [8]), fractional
programming (Anstreicher [1]), and acceleration techniques (see Todd [6]),
all should carry over to the space of linear inequality constraints.
Acknowledgement
I would like to acknowledge Michael Todd for stimulating discussions
on this topic and for his reading of a draft of this paper.
REFERENCES
[1] K.M. Anstreicher. "A monotonic projective algorithm for fractional linear programming using dual variables." Algorithmica 1 (1986), 483-498.

[2] D. Gay. "A variant of Karmarkar's linear programming algorithm for problems in standard form." Mathematical Programming 37 (1987), 81-90.

[3] B. Grunbaum. Convex Polytopes. Wiley, New York (1967).

[4] N. Karmarkar. "A new polynomial time algorithm for linear programming." Combinatorica 4 (1984), 373-395.

[5] Gy. Sonnevend. "An 'analytical centre' for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming." Proc. 12th IFIP Conf. System Modelling, Budapest, 1985.

[6] M.J. Todd. "Improved bounds and containing ellipsoids in Karmarkar's linear programming algorithm." Manuscript, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York (1986).

[7] M.J. Todd and B.P. Burrell. "An extension of Karmarkar's algorithm for linear programming using dual variables." Algorithmica 1 (1986), 409-424.

[8] Y. Ye. "Dual approach of Karmarkar's algorithm and the ellipsoid method." Manuscript, Engineering-Economic Systems Department, Stanford University, Stanford, California (1987).