AN ANALOG OF KARMARKAR'S ALGORITHM FOR INEQUALITY CONSTRAINED LINEAR PROGRAMS, WITH A "NEW" CLASS OF PROJECTIVE TRANSFORMATIONS FOR CENTERING A POLYTOPE

by

Robert M. Freund

OR 166-87                                                     August 1987

This research was done while the author was a Visiting Scientist at Cornell University. Research supported in part by ONR Contract N00014-87-K-0212.

Author's address: Professor Robert M. Freund, M.I.T., Sloan School of Management, 50 Memorial Drive, Cambridge, MA 02139.

ABSTRACT

We present an algorithm, analogous to Karmarkar's algorithm, for the linear programming problem: maximize $c^T x$ subject to $Ax \le b$, which works directly in the space of linear inequalities. The main new idea in this algorithm is the simple construction of a projective transformation of the feasible region that maps the current iterate $\bar{x}$ to the analytic center of the transformed feasible region.

Key words: Linear Program, Projective Transformation, Polytope, Center, Karmarkar's algorithm.

1. Introduction.

Karmarkar's algorithm [4] is designed to work in the space of feasible solutions to the system $Ax = 0$, $e^T x = 1$, $x \ge 0$. Although any bounded system of linear inequalities $Ax \le b$ can be linearly transformed into an instance of this particular system, it is more natural to work in the space of linear inequalities $Ax \le b$ directly. From a researcher's point of view, the system $Ax \le b$ is often easier to conceptualize and can be "seen" graphically, directly, if $x$ is two- or three-dimensional, regardless of the number of constraints. Herein, we present an algorithm, analogous to Karmarkar's algorithm, that solves a linear program of the form: maximize $c^T x$ subject to $Ax \le b$, under assumptions similar to those employed in Karmarkar's algorithm. Our algorithm performs a simple projective transformation of the feasible region that maps the current iterate $\bar{x}$ to the analytic center of the transformed feasible region.

2. Notation.

Let $e$ be the vector of ones, namely $e = (1, 1, \ldots, 1)^T$. If $s$ is a vector in $R^m$, then $S$ refers to the diagonal matrix whose diagonal entries are the components of $s$. If $T$ is a subset of $R^n$, then $\mathrm{int}\, T$ refers to the interior of $T$.

3. The Algorithm.

Our interest is in solving the linear program

    P:  maximize   $c^T x$
        subject to $Ax \le b$,

where $A$ is a matrix of size $m \times n$. We assume that the feasible region $X = \{x \in R^n \mid Ax \le b\}$ has a nonempty interior and is bounded, and that $U$, the optimal value of P, is known in advance. Furthermore, we assume that we have an initial point $x_0 \in \mathrm{int}\, X$ whose associated slack vector $s_0 = b - Ax_0$ satisfies:

    $e^T S_0^{-1} A = 0$  and  $s_0 = e$.                                    (1)

Condition (1) may appear to be restrictive, but we shall exhibit an elementary projective transformation that enables us to assume, without any loss of generality, that (1) holds. The first condition of (1) is simply the necessary and sufficient condition for $x_0$ to be the center of the system of linear inequalities $Ax \le b$ (see, e.g., Sonnevend [5]), which is reviewed below. The second condition of (1) is that the rows of $A$ have been scaled so that at $x_0$ the slack on each constraint is one.
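As a small numerical illustration of condition (1), the following sketch checks both parts of (1) on a hypothetical toy instance, the box $-1 \le x_i \le 1$ in $R^2$ with $x_0 = 0$; Python/NumPy is used here purely for illustration, and the instance is not part of the development above.

```python
import numpy as np

# Hypothetical toy instance: the box -1 <= x_i <= 1 written as A x <= b.
# At x0 = 0 the slack vector is s0 = b - A x0 = e, and
# e^T S0^{-1} A = e^T A = 0, so condition (1) holds and x0 is the center.
A = np.vstack([np.eye(2), -np.eye(2)])      # m = 4 constraints, n = 2 variables
b = np.ones(4)
x0 = np.zeros(2)

s0 = b - A @ x0                             # slack vector s0
center_residual = A.T @ (np.ones(4) / s0)   # A^T S0^{-1} e; zero iff x0 is the center
print(np.allclose(s0, 1.0), np.allclose(center_residual, 0.0))   # True True
```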
The algorithm is then stated as follows: For $k = 0, 1, \ldots$, do:

Step k: Define

    $s_k = b - Ax_k$  and  $v_k = U - c^T x_k$,
    $y_k = (1/m)\, A^T S_k^{-1} e$,

and set

    $A_k = S_k^{-1} A - e y_k^T$,        $b_k = S_k^{-1} b - e y_k^T x_k$,
    $c_k = c - v_k y_k$,                 $U_k = U - v_k y_k^T x_k$.

Define the search direction

    $d_k = \dfrac{(A_k^T A_k)^{-1} c_k}{\sqrt{c_k^T (A_k^T A_k)^{-1} c_k}}$,

and set

    $x_{k+1} = x_k + \dfrac{\alpha d_k}{1 + y_k^T(\alpha d_k)}$,

where $\alpha > 0$ is a steplength, e.g., $\alpha = 1/3$.

Note immediately that all of the effort lies in computing $(A_k^T A_k)^{-1} c_k$. Expanding, we see that

    $A_k^T A_k = (S_k^{-1}A - e y_k^T)^T(S_k^{-1}A - e y_k^T) = A^T S_k^{-2} A + y_k(-e^T S_k^{-1}A + e^T e\, y_k^T) - (A^T S_k^{-1} e)\, y_k^T$,

which is computed as two rank-1 updates of the matrix $A^T S_k^{-2} A$. Thus, as in Karmarkar's algorithm, the computational burden lies in solving $(A^T S_k^{-2} A)^{-1} c_k$ efficiently.

To measure the performance of the algorithm, we will use the potential function

    $F(x) = m \ln(U - c^T x) - \sum_{i=1}^{m} \ln(b - Ax)_i$,

which is defined for all $x$ in the interior of the feasible region.

We will show below that this algorithm is an analog of Karmarkar's algorithm, by tracking how the algorithm performs in the slack space. Thus, let us rewrite P as

    $\bar{P}$:  maximize_{x,s}  $c^T x$
          subject to  $Ax + s = b$,  $s \ge 0$,

and define the primal feasible space as

    $(X; Y) = \{(x; s) \in R^n \times R^m \mid Ax + s = b,\ s \ge 0\}$.

We also define the slack space alone to be

    $Y = \{s \in R^m \mid s \ge 0,\ s = b - Ax \text{ for some } x \in R^n\}$.

We first must develop some results regarding projective transformations of the spaces $X$ and $Y$.

4. A Class of Projective Transformations of X and Y.

Suppose we have a given vector $\bar{x}$ in the interior of $X$, so that $\bar{s} = b - A\bar{x} > 0$. Let $y$ be a point that satisfies $y^T(x - \bar{x}) < 1$ for all $x \in X$. Then the projective transformation $(z; r) = g(x; s)$ given by

    $z = \bar{x} + \dfrac{x - \bar{x}}{1 - y^T(x - \bar{x})}$,    $r = \dfrac{\bar{S}^{-1} s}{1 - y^T(x - \bar{x})}$                    (2)

is well-defined for all $(x; s) \in (X; Y)$. Furthermore, direct substitution shows that for $v = U - c^T \bar{x}$ and $(x; s) \in (X; Y)$, the $z$ and $r$ defined by (2) must satisfy:

    $(\bar{S}^{-1}A - e y^T)\, z + r = \bar{S}^{-1} b - e y^T \bar{x}$,   $r \ge 0$,   and   $(c - v y)^T z \le U - v y^T \bar{x}$.

Thus, we can define the linear program

    $P_{y,\bar{x}}$:  maximize   $\bar{c}^T z$
              subject to $\bar{A} z + r = \bar{b}$,  $r \ge 0$,

where

    $\bar{A} = \bar{S}^{-1} A - e y^T$,
    $\bar{b} = \bar{S}^{-1} b - e y^T \bar{x}$,                                              (3)
    $\bar{c} = c - v y$,
    $\bar{U} = U - v y^T \bar{x}$.

Associated with this linear system we define

    $Z = \{z \in R^n \mid \bar{A} z \le \bar{b}\}$  and  $(Z; R) = \{(z; r) \in R^n \times R^m \mid \bar{A} z + r = \bar{b},\ r \ge 0\}$,

and define the slack space $R$ analogously.

Note that the condition $y^T(x - \bar{x}) < 1$ holds for all $x \in X$ if and only if $y$ lies in the interior of the polar of $(X - \bar{x})$, defined as

    $(X - \bar{x})^{\circ} = \{y \in R^n \mid y^T x \le 1 \text{ for all } x \in (X - \bar{x})\}$.

It can be shown, because $X$ is bounded and has a nonempty interior, that $(X - \bar{x})^{\circ}$ is bounded and has a nonempty interior as well, and that $y \in \mathrm{int}\,(X - \bar{x})^{\circ}$ (and hence $y^T(x - \bar{x}) < 1$ for all $x \in X$) if and only if $y = A^T \lambda$ where $\lambda > 0$ and $\lambda^T \bar{s} = 1$. In this case we have the following:

Lemma 1. If $y = A^T \lambda$ where $\lambda > 0$ and $\lambda^T \bar{s} = 1$, then the transformation $(z; r) = g(x; s)$ is well-defined, and has an inverse $(x; s) = h(z; r)$, given by

    $x = \bar{x} + \dfrac{z - \bar{x}}{1 + y^T(z - \bar{x})}$,    $s = \dfrac{\bar{S} r}{1 + y^T(z - \bar{x})}$.                    (4)

If $(z; r) = g(x; s)$, then $(x; s) \in (X; Y)$ if and only if $(z; r) \in (Z; R)$. Furthermore, any of the constraints in $c^T x \le U$, $Ax \le b$ is satisfied at equality if and only if the corresponding constraint of $\bar{c}^T z \le \bar{U}$, $\bar{A} z \le \bar{b}$ is satisfied at equality.
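To make Lemma 1 concrete, the following sketch implements $g(\cdot;\cdot)$ from (2) and $h(\cdot;\cdot)$ from (4) on a hypothetical toy polytope with a randomly generated $\lambda > 0$ normalized so that $\lambda^T \bar{s} = 1$, and checks numerically that the two maps are inverses and that the image satisfies the transformed system $\bar{A} z + r = \bar{b}$, $r \ge 0$. All data and names are illustrative assumptions.

```python
import numpy as np

# Toy check of Lemma 1: the box -1 <= x_i <= 1, an interior point xbar,
# and y = A^T lam with lam > 0 and lam^T sbar = 1.
rng = np.random.default_rng(0)
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
e = np.ones(4)

xbar = np.array([0.2, -0.3])
sbar = b - A @ xbar                      # slack at xbar (strictly positive)
lam = rng.random(4)
lam /= lam @ sbar                        # lam > 0 and lam^T sbar = 1
y = A.T @ lam

def g(x, s):
    """Projective transformation (2): (x; s) -> (z; r)."""
    t = 1.0 - y @ (x - xbar)
    return xbar + (x - xbar) / t, (s / sbar) / t

def h(z, r):
    """Inverse transformation (4): (z; r) -> (x; s)."""
    t = 1.0 + y @ (z - xbar)
    return xbar + (z - xbar) / t, sbar * r / t

x = np.array([0.5, 0.1])                 # any interior point of Ax <= b
s = b - A @ x
z, r = g(x, s)
x_back, s_back = h(z, r)
print(np.allclose(x, x_back), np.allclose(s, s_back))          # True True

# (z; r) satisfies the transformed system of (3): Abar z + r = bbar, r >= 0.
Abar = A / sbar[:, None] - np.outer(e, y)        # Sbar^{-1} A - e y^T
bbar = b / sbar - e * (y @ xbar)                 # Sbar^{-1} b - e y^T xbar
print(np.allclose(Abar @ z + r, bbar), np.all(r >= 0))          # True True
```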
(The transformation $g(\cdot;\cdot)$ is a slight modification of a projective transformation presented as an exercise in Grunbaum [3], on page 48. If $X$ is a polytope containing the origin in its interior, then its polar $X^{\circ}$ is also a polytope containing the origin in its interior. Grunbaum notes that a translation of $X^{\circ}$ by $y \in \mathrm{int}\, X^{\circ}$ results in a projective transformation of $X$ given by

    $Z = \left\{z \in R^n \;\middle|\; z = \dfrac{x}{1 - y^T x} \text{ for some } x \in X\right\}$.

Our transformation $g(\cdot;\cdot)$ is a translation of this transformation by $\bar{x}$.)

Recall that the center of the system of inequalities $Ax \le b$ is that point $x \in X$ that maximizes $\prod_{i=1}^{m} (b - Ax)_i$, or equivalently, $\sum_{i=1}^{m} \ln(b - Ax)_i$; see, e.g., Sonnevend [5]. Under our assumption of boundedness and full dimensionality, it is straightforward to show that $x$ is the center of the system $Ax \le b$ if and only if $e^T S^{-1} A = 0$, where $s = b - Ax$ and $s > 0$.

We next construct a specific $y$ that will ensure that $\bar{x}$ becomes the center of the projected polytope $Z$. If we define $\bar{\lambda} = (1/m)\bar{S}^{-1} e$ and $y = A^T \bar{\lambda}$, then because $\bar{\lambda} > 0$ and $\bar{\lambda}^T \bar{s} = 1$, $y$ lies in the interior of $(X - \bar{x})^{\circ}$, and so $y^T(x - \bar{x}) < 1$ for all $x \in X$. We have:

Lemma 2. Let $\bar{x} \in \mathrm{int}\, X$ be given, with $\bar{s} = b - A\bar{x} > 0$, and suppose $y = A^T \bar{\lambda}$ with $\bar{\lambda} = (1/m)\bar{S}^{-1} e$ is used to define $g(\cdot;\cdot)$. Then by (2), we have:

(i) $(\bar{x}; e) = g(\bar{x}; \bar{s})$,

(ii) $\bar{x}$ is the center of the system $\bar{A} z \le \bar{b}$, where $\bar{A}$, $\bar{b}$ are defined as in (3),

(iii) $R \subset \{r \in R^m \mid e^T r = m,\ r \ge 0\}$.

Part (i) of Lemma 2 is obvious. To see (ii), note that $e$ is the slack vector associated with $\bar{x}$ in $P_{y,\bar{x}}$, and so we must show that $e^T \bar{A} = 0$. This derivation is

    $e^T \bar{A} = e^T(\bar{S}^{-1}A - e y^T) = e^T \bar{S}^{-1}A - e^T e\,(1/m)\, e^T \bar{S}^{-1}A = 0$.

For part (iii), note that for any $(z; r) \in (Z; R)$,

    $e^T r = e^T(\bar{b} - \bar{A} z) = e^T \bar{b} = e^T(\bar{S}^{-1}b - e y^T \bar{x}) = e^T \bar{S}^{-1}b - e^T e\,(1/m)\, e^T \bar{S}^{-1}A\bar{x} = e^T \bar{S}^{-1}(b - A\bar{x}) = e^T e = m$.

The set $R$ is thus contained in the standard simplex in $R^m$, namely $\{r \in R^m \mid e^T r = m,\ r \ge 0\}$.

Lemma 2 demonstrates that the projective transformation $g(\cdot;\cdot)$ transforms the slack space $Y$ to a subspace of the standard simplex, and transforms the current slack $\bar{s}$ to the center $e$ of the standard simplex. Thus the projective transformation $g(\cdot;\cdot)$ corresponds to Karmarkar's projective transformation. We also have:

Lemma 3. The potential function of problem $P_{y,\bar{x}}$, defined as

    $G(z) = m \ln(\bar{U} - \bar{c}^T z) - \sum_{i=1}^{m} \ln(\bar{b} - \bar{A} z)_i$,

differs from $F(x)$ by the constant $\sum_{i=1}^{m} \ln(\bar{s}_i)$.

Thus, as in Karmarkar's algorithm, changes in the potential function are invariant under the projective transformation $g(\cdot;\cdot)$.
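Before turning to the analysis, the following sketch runs Step k of the algorithm of Section 3 on a hypothetical toy instance: maximize $x_1$ over the box $-1 \le x_i \le 1$, for which the optimal value $U = 1$ is known and condition (1) holds at $x_0 = 0$. The instance, the iteration count, and the variable names are illustrative assumptions; the steplength $\alpha = 1/3$ follows the text, and a linesearch on $F$ could be used instead.

```python
import numpy as np

# Toy run of the algorithm of Section 3 (Step k), on data chosen so that
# condition (1) holds at x0 = 0 and the optimal value U is known.
A = np.vstack([np.eye(2), -np.eye(2)])   # m = 4, n = 2; note e^T A = 0
b = np.ones(4)                           # b - A x0 = e at x0 = 0
c = np.array([1.0, 0.0])                 # maximize x_1
U = 1.0                                  # known optimal value of P
m = A.shape[0]
e = np.ones(m)
alpha = 1.0 / 3.0

def F(x):
    """Potential function F(x) = m ln(U - c^T x) - sum_i ln(b - A x)_i."""
    s = b - A @ x
    return m * np.log(U - c @ x) - np.sum(np.log(s))

x = np.zeros(2)
for k in range(6):
    s = b - A @ x                            # s_k
    v = U - c @ x                            # v_k
    y = A.T @ (1.0 / s) / m                  # y_k = (1/m) A^T S_k^{-1} e
    Ak = A / s[:, None] - np.outer(e, y)     # A_k = S_k^{-1} A - e y_k^T
    ck = c - v * y                           # c_k  (b_k, U_k are not needed here)
    w = np.linalg.solve(Ak.T @ Ak, ck)       # (A_k^T A_k)^{-1} c_k
    d = w / np.sqrt(ck @ w)                  # normalized search direction d_k
    x = x + alpha * d / (1.0 + y @ (alpha * d))
    print(k, x, F(x))                        # per the text, F should drop by >= 1/5 per step
```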
5. Analysis of the Algorithm.

Here we examine a particular iterate of the algorithm. To ease the notational burden, the subscript $k$ is suppressed. Let $\bar{x}$ be the current iterate. Then the vector $y = A^T \bar{\lambda}$ is constructed, where $\bar{\lambda} = (1/m)\bar{S}^{-1} e$, $\bar{s} = b - A\bar{x} > 0$, and $v = U - c^T \bar{x}$. The problem P is transformed to $P_{y,\bar{x}}$, where $\bar{A}$, $\bar{b}$, $\bar{c}$, $\bar{U}$ are given as in (3). The slack vector $\bar{s}$ is transformed by $g(\cdot;\cdot)$ to the center $e$ of the standard simplex, as in Karmarkar's algorithm. Changes in the potential function are invariant under this transformation, by Lemma 3.

We now need to show that the direction given in the algorithm, which is proportional to $(\bar{A}^T \bar{A})^{-1} \bar{c}$, corresponds to Karmarkar's search direction. Karmarkar's search direction is the projected gradient of the objective function in the transformed problem. Because the feasible region $X$ of P is bounded, $(\bar{A}^T \bar{A})^{-1}$ exists, and so $\bar{A}$ (and hence $A$) has full column rank. Thus $\bar{A} z + r = \bar{b}$ is equivalent to $z = (\bar{A}^T \bar{A})^{-1} \bar{A}^T (\bar{b} - r)$. Substituting this last expression in $P_{y,\bar{x}}$, we form the equivalent linear program:

    minimize    $(\bar{A}(\bar{A}^T \bar{A})^{-1} \bar{c})^T r$
    subject to  $(I - \bar{A}(\bar{A}^T \bar{A})^{-1} \bar{A}^T)\, r = (I - \bar{A}(\bar{A}^T \bar{A})^{-1} \bar{A}^T)\, \bar{b}$,
                $r \ge 0$.

(This transformation is also used in Gay [2].) The objective function vector $\bar{A}(\bar{A}^T \bar{A})^{-1} \bar{c}$ of this program lies in the null space of $(I - \bar{A}(\bar{A}^T \bar{A})^{-1} \bar{A}^T)$, so that the normed direction

    $p = \dfrac{-\bar{A}(\bar{A}^T \bar{A})^{-1} \bar{c}}{\sqrt{\bar{c}^T(\bar{A}^T \bar{A})^{-1} \bar{c}}}$

is Karmarkar's normed direction. In the space $X$, this direction $p$ corresponds to

    $d = \dfrac{(\bar{A}^T \bar{A})^{-1} \bar{c}}{\sqrt{\bar{c}^T(\bar{A}^T \bar{A})^{-1} \bar{c}}}$,

i.e., $d$ is the unique direction in $X$ satisfying $\bar{A} d + p = 0$. Thus we see that the direction $d$ given in the algorithm corresponds to Karmarkar's normed direction $p$. Following Todd and Burrell [7], using a steplength of $\alpha = 1/3$ will guarantee a decrease in the potential function $F(x)$ of at least 1/5. Furthermore, as suggested in Todd and Burrell [7], performing a linesearch of $F(\cdot)$ on the line $\bar{x} + \alpha d$, $\alpha \ge 0$, is advantageous.

The next iterate, in $z$-space, is the point $\bar{x} + \alpha d$. We projectively transform back to $x$-space using the inverse transformation $h(\cdot;\cdot)$ given by (4), to obtain the new iterate in $x$-space, which is

    $\bar{x} + \dfrac{\alpha d}{1 + y^T(\alpha d)}$.

6. Remarks:

a. Getting Started. We required, in order to start the algorithm, that the initial point $x_0$ must satisfy condition (1). This condition requires that $x_0$ be the center of the system of inequalities $Ax \le b$ and that the rows of this system be scaled so that $b - Ax_0 = e$. If our initial point $x_0$ does not satisfy this condition, we simply perform one projective transformation to transform the linear inequality system into the required form. That is, we set $s_0 = b - Ax_0$, $v_0 = U - c^T x_0$, and define

    $y_0 = (1/m)\, A^T S_0^{-1} e$,
    $A_0 = S_0^{-1} A - e y_0^T$,
    $b_0 = S_0^{-1} b - e y_0^T x_0$,
    $c_0 = c - v_0 y_0$,  and
    $U_0 = U - v_0 y_0^T x_0$.

Then, exactly as in Lemma 2, $e = b_0 - A_0 x_0$, and $x_0$ is the center of the system $A_0 x \le b_0$, i.e., $e^T A_0 = 0$. Our initial linear program, of course, becomes:

    maximize    $c_0^T x$
    subject to  $A_0 x \le b_0$,

whose optimal value is $U_0$.
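To complement Remark (a), here is a minimal sketch of the initializing transformation; the polytope, the interior point $x_0$, and the value of $U$ are hypothetical illustration data. It verifies that the transformed system satisfies condition (1) at $x_0$, i.e., $b_0 - A_0 x_0 = e$ and $e^T A_0 = 0$.

```python
import numpy as np

# Hypothetical instance: an arbitrary interior point x0 of Ax <= b (not the
# center).  Build (A0, b0, c0, U0) as in Remark (a) and check condition (1).
A = np.array([[ 2.0,  1.0],
              [-1.0,  1.0],
              [ 0.0, -1.0],
              [-1.0, -2.0]])
b = np.array([4.0, 2.0, 1.0, 3.0])
c = np.array([1.0, 1.0])
U = 10.0 / 3.0                # optimal value of max c^T x s.t. Ax <= b for this toy data

x0 = np.array([0.3, 0.2])     # an arbitrary interior point
m = A.shape[0]
e = np.ones(m)

s0 = b - A @ x0               # s0 > 0
v0 = U - c @ x0
y0 = A.T @ (1.0 / s0) / m     # y0 = (1/m) A^T S0^{-1} e
A0 = A / s0[:, None] - np.outer(e, y0)      # A0 = S0^{-1} A - e y0^T
b0 = b / s0 - e * (y0 @ x0)                 # b0 = S0^{-1} b - e y0^T x0
c0 = c - v0 * y0
U0 = U - v0 * (y0 @ x0)

print(np.allclose(b0 - A0 @ x0, e))   # True: slack of x0 is e in the new system
print(np.allclose(e @ A0, 0.0))       # True: x0 is the center of A0 x <= b0
```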
b. Complexity and Inscribed/Circumscribed Ellipsoids. The algorithm retains exactly the same polynomial complexity as Karmarkar's principal algorithm, namely it solves P in $O(m^4 L)$ arithmetic operations. However, using Karmarkar's methodology for modifying $(A^T S_k^{-2} A)$, the modified algorithm should solve P in $O(m^{3.5} L)$ arithmetic operations.

One of the constructions that Karmarkar's algorithm uses is that the ratio of the largest inscribed ball in the standard simplex, to the smallest circumscribed ball, is $1/(m-1)$. In our system, this translates to the fact that if $\bar{x}$ lies in the interior of $X = \{x \in R^n \mid Ax \le b\}$, then there are ellipsoids $E_{in}$ and $E_{out}$, each centered at $\bar{x}$, for which

    $E_{in} \subset Z \subset E_{out}$   and   $E_{in} - \bar{x} = \dfrac{1}{m-1}\,(E_{out} - \bar{x})$,

where $Z = \{z \in R^n \mid \bar{A} z \le \bar{b}\}$ is the transformed polytope. This result was proven for the center of $Ax \le b$ by Sonnevend [5]. We can explicitly characterize $E_{in}$ and $E_{out}$:

    $E_{in} = \{z \in R^n \mid (z - \bar{x})^T \bar{A}^T \bar{A}\,(z - \bar{x}) \le m/(m-1)\}$   and
    $E_{out} = \{z \in R^n \mid (z - \bar{x})^T \bar{A}^T \bar{A}\,(z - \bar{x}) \le m(m-1)\}$,

where $\bar{A} = (\bar{S}^{-1}A - e y^T)$, $y = (1/m)\, A^T \bar{S}^{-1} e$, and $\bar{s} = b - A\bar{x}$. Using this ellipsoid construction, we could prove the complexity bound for the algorithm directly, without resorting to Karmarkar's results. But inasmuch as this algorithm was developed to be an analog of Karmarkar's for linear inequality systems, it is fitting to place it in this perspective.

c. Other Results. The methodology presented herein can be used to translate other results regarding Karmarkar's algorithm to the space of linear inequality constraints. Although we assume that the optimal objective function value is known in advance, this assumption can be relaxed. The results on objective function value bounding (see Todd and Burrell [7] and Anstreicher [1]), dual variables (see Todd and Burrell [7] and Ye [8]), fractional programming (Anstreicher [1]), and acceleration techniques (see Todd [6]) all should carry over to the space of linear inequality constraints.

Acknowledgement

I would like to acknowledge Michael Todd for stimulating discussions on this topic, and for his reading of a draft of this paper.

REFERENCES

[1] K.M. Anstreicher, "A monotonic projective algorithm for fractional linear programming using dual variables," Algorithmica 1 (1986), 483-498.

[2] D. Gay, "A variant of Karmarkar's linear programming algorithm for problems in standard form," Mathematical Programming 37 (1987), 81-90.

[3] B. Grunbaum, Convex Polytopes, Wiley, New York (1967).

[4] N. Karmarkar, "A new polynomial time algorithm for linear programming," Combinatorica 4 (1984), 373-395.

[5] Gy. Sonnevend, "An 'analytical centre' for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming," Proc. 12th IFIP Conf. System Modelling, Budapest, 1985.

[6] M.J. Todd, "Improved bounds and containing ellipsoids in Karmarkar's linear programming algorithm," Manuscript, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York (1986).

[7] M.J. Todd and B.P. Burrell, "An extension of Karmarkar's algorithm for linear programming using dual variables," Algorithmica 1 (1986), 409-424.

[8] Y. Ye, "Dual approach of Karmarkar's algorithm and the ellipsoid method," Manuscript, Engineering-Economic Systems Department, Stanford University, Stanford, California (1987).