A Polynomial Algorithm for the Two-Variable Integer Programming Problem

RAVINDRAN KANNAN

Cornell University, Ithaca, New York

ABSTRACT. A polynomial-time algorithm is presented for solving the following two-variable integer programming problem: maximize c_1 x_1 + c_2 x_2 subject to a_{i1} x_1 + a_{i2} x_2 ≤ b_i, i = 1, 2, ..., n, and x_1, x_2 ≥ 0, integers, where the a_{ij}, c_j, and b_i are assumed to be nonnegative integers. This generalizes a result of Hirschberg and Wong, who developed a polynomial algorithm for the same problem with only one constraint (i.e., where n = 1). However, the techniques used here are quite different.

KEY WORDS AND PHRASES: integer programming, knapsack problem, polynomial algorithm, coefficient size, feasible region decomposition

CR CATEGORIES: 3.15, 5.25, 5.30, 5.40

This research was supported by Sonderforschungsbereich 21 (DFG), Institut für Ökonometrie und Operations Research, Universität Bonn, Bonn, West Germany, and by the National Science Foundation under Grant ENG 7609936 to Cornell University, Ithaca, New York.

Author's present address: Department of Electrical Engineering and Computer Science, Division of Computer Science, University of California at Berkeley, Berkeley, CA 94720.

Introduction

We consider the following integer programming problem:

(1)   maximize    c_1 x_1 + c_2 x_2
      subject to  a_{i1} x_1 + a_{i2} x_2 ≤ b_i,   i = 1, 2, ..., n,
      and         x_1, x_2 ≥ 0, integers,

where the a_{ij}, c_j, and b_i are assumed to be nonnegative integers. We first show that the solution to (1) can be obtained easily from the solutions to at most n problems, each of which is of the form

(2)   maximize    c_1 x_1 + c_2 x_2
      subject to  a_1 x_1 + a_2 x_2 ≤ b,        (2a)
                  f ≤ x_1 ≤ g,                  (2b)
      and         x_1, x_2 ≥ 0, integers,       (2c)

where all constants involved are positive integers; further, they can all be computed in time no greater than a fixed polynomial in the size of the input, which is assumed to be represented in binary encoding. It is also assumed that g ≤ ⌊b/a_1⌋; if not, we can replace g by ⌊b/a_1⌋. (Using different techniques, Hirschberg and Wong [1] have developed a polynomial algorithm to solve (2) with constraint (2b) deleted.)
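For concreteness, the sketch below (not from the paper; the function name and interface are illustrative) solves (1) by exhaustive enumeration. Its running time grows with the magnitudes of the b_i rather than with their bit lengths, so it is exponential in the input size, but it is a convenient reference against which a polynomial algorithm can be checked on small instances.

```python
def brute_force_ip(c, A, b):
    """Solve problem (1) by exhaustive search:
    maximize c[0]*x1 + c[1]*x2 subject to A[i][0]*x1 + A[i][1]*x2 <= b[i]
    for every i and x1, x2 >= 0 integers.

    Assumes nonnegative integer data and that each variable has a positive
    coefficient in at least one constraint (otherwise (1) is unbounded)."""
    def upper(j):
        # Tightest single-constraint upper bound on the (j+1)-st variable.
        return min(bi // row[j] for row, bi in zip(A, b) if row[j] > 0)

    best, best_val = (0, 0), 0          # (0, 0) is always feasible since b[i] >= 0
    for x1 in range(upper(0) + 1):
        for x2 in range(upper(1) + 1):
            if all(r[0] * x1 + r[1] * x2 <= bi for r, bi in zip(A, b)):
                val = c[0] * x1 + c[1] * x2
                if val > best_val:
                    best, best_val = (x1, x2), val
    return best, best_val
```

For example, brute_force_ip((3, 5), [(7, 3), (1, 4)], (19, 12)) returns ((0, 3), 15).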
1. Decomposition

THEOREM 1 (DECOMPOSITION THEOREM). The set S defined as

      S = {(x_1, x_2) : a_{i1} x_1 + a_{i2} x_2 ≤ b_i, i = 1, ..., n; x_1, x_2 ≥ 0},

where the a_{ij} and b_i are positive integers, can be expressed as ∪_{i=1}^{m} S_i, where m ≤ n and each S_i is of the form

      S_i = {(x_1, x_2) : f_i ≤ x_1 ≤ g_i; z_{i1} x_1 + z_{i2} x_2 ≤ s_i; x_1, x_2 ≥ 0}.

Here the z_{i1}, z_{i2}, s_i are positive integers, the f_i and g_i are nonnegative rationals, and all these constants can be computed in polynomial time from the a_{ij} and b_i.

Intuitively the theorem is quite obvious: one can draw the straight lines a_{i1} x_1 + a_{i2} x_2 = b_i, and by looking at the graph it is easy to determine the "tightest" constraint in each of several regions. Since there are at most n(n − 1)/2 points of intersection of these lines in the first quadrant, there are at most n(n − 1)/2 such regions. That this number can be reduced to n is also easy to see. What follows is a rigorous proof of the theorem.

Definition. A constraint a_{i_0 1} x_1 + a_{i_0 2} x_2 ≤ b_{i_0} is said to be binding for S in the region f ≤ x_1 ≤ g if, for f ≤ x_1 ≤ g, a_{i_0 1} x_1 + a_{i_0 2} x_2 ≤ b_{i_0} implies a_{i1} x_1 + a_{i2} x_2 ≤ b_i for each i = 1, 2, ..., n.

LEMMA 1. If a constraint is binding for S in the two regions f ≤ x_1 ≤ g and f' ≤ x_1 ≤ g', where f ≤ g ≤ f' ≤ g', then it is binding for S in the region f ≤ x_1 ≤ g'.

PROOF. Since S is convex, the lemma is obvious. □

Thus, in the decomposition of S into the S_i, we can assume that each of the n constraints of S is a binding constraint for at most one region. Hence there are at most n regions, i.e., at most n sets S_i.

PROOF OF THEOREM 1. The proof is by induction on n. For n = 1, S_1 = S; note that we can take f_1 = 0 and g_1 = b_1/a_{11}. Now suppose that a set S with n constraints can be decomposed into {S_i}_{i=1}^{m}, m ≤ n, and let

      S̄ = S ∩ {(x_1, x_2) : a_{n+1,1} x_1 + a_{n+1,2} x_2 ≤ b_{n+1}}

(we have added one more constraint). To decompose S̄ as in the theorem, we find the point of intersection (u_i, v_i) of a_{n+1,1} x_1 + a_{n+1,2} x_2 = b_{n+1} with z_{i1} x_1 + z_{i2} x_2 = s_i for each i = 1, 2, ..., m. (We are using the notation of Theorem 1 for S and the S_i.) Then for each i, i = 1, 2, ..., m, we do the following. If f_i ≤ u_i ≤ g_i, then the (n+1)st constraint is binding for S̄ on one segment of [f_i, g_i] and the constraint z_{i1} x_1 + z_{i2} x_2 ≤ s_i on the rest of [f_i, g_i]; it is easy to determine these segments. (See Figure 1, in which the (n+1)st constraint is binding on the segment [f_i, u_i] and the other constraint on the segment [u_i, g_i].) If, however, u_i ∉ [f_i, g_i], then either a_{n+1,1} x_1 + a_{n+1,2} x_2 ≤ b_{n+1} or z_{i1} x_1 + z_{i2} x_2 ≤ s_i is binding for the whole of [f_i, g_i] for S̄, and it is again easy to determine which. Thus we have decomposed S̄. By Lemma 1, there are at most n + 1 regions into which S̄ has been decomposed. It is also clear that this algorithm runs in O(n²) time. □

[Fig. 1: the line z_{i1} x_1 + z_{i2} x_2 = s_i and the line of the (n+1)st constraint, intersecting at (u_i, v_i) inside the region [f_i, g_i].]

Thus, to solve (1) we only need to find the maximum of c_1 x_1 + c_2 x_2 over the integer points in each set of the decomposition and then take the maximum among all these maxima.
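The proof above is constructive, but even the n(n − 1)/2-breakpoint observation already gives a polynomial-time decomposition. The sketch below is illustrative only: the function name, its interface, and the breakpoint-based construction are choices made here, not the paper's inductive procedure. It determines the binding constraint on each x_1-interval between consecutive breakpoints and merges adjacent intervals that share it, which by Lemma 1 leaves at most n pieces.

```python
from fractions import Fraction

def decompose(A, b):
    """Decompose S = {x1, x2 >= 0 : A[i][0]*x1 + A[i][1]*x2 <= b[i] for all i}
    as in Theorem 1.  Returns a list of pieces (f, g, i) meaning that for
    f <= x1 <= g the i-th constraint is binding for S.
    Assumes all A[i][j] and b[i] are positive integers."""
    n = len(A)
    g_max = min(Fraction(b[i], A[i][0]) for i in range(n))   # x1 ranges over [0, g_max]

    def binding(x1):
        # Index of the tightest (lowest) constraint line at abscissa x1.
        return min(range(n), key=lambda i: (b[i] - A[i][0] * x1) / A[i][1])

    # Candidate breakpoints: pairwise intersections of the constraint lines.
    cuts = {Fraction(0), g_max}
    for i in range(n):
        for j in range(i + 1, n):
            den = A[i][0] * A[j][1] - A[j][0] * A[i][1]
            if den != 0:
                u = Fraction(b[i] * A[j][1] - b[j] * A[i][1], den)
                if 0 < u < g_max:
                    cuts.add(u)
    cuts = sorted(cuts)

    # Between consecutive breakpoints one constraint stays binding; identify it
    # at the midpoint and merge adjacent intervals sharing it (Lemma 1).
    pieces = []
    for f, g in zip(cuts, cuts[1:]):
        i = binding((f + g) / 2)
        if pieces and pieces[-1][2] == i:
            pieces[-1] = (pieces[-1][0], g, i)
        else:
            pieces.append((f, g, i))
    return pieces
```

Exact rational arithmetic is used so that the endpoints f_i, g_i come out as the rationals promised by Theorem 1. For instance, decompose([(1, 3), (2, 1)], (6, 6)) reports that x_1 + 3x_2 ≤ 6 is binding for 0 ≤ x_1 ≤ 12/5 and 2x_1 + x_2 ≤ 6 for 12/5 ≤ x_1 ≤ 3.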
2. The Algorithm

We now need to solve a problem of the form

(3)   maximize    c_1 x_1 + c_2 x_2
      subject to  a_1 x_1 + a_2 x_2 ≤ b,        (3a)
                  L ≤ x_1 ≤ U,                  (3b)
      and         x_1, x_2 ≥ 0, integers,       (3c)

where

(4)   a_1, a_2, L, and U are nonnegative integers.

If any one of the following three conditions is met, then (3) is trivially solved:

(5)   c_1 ≤ 0 (or c_2 ≤ 0): x_1 (or x_2) can be set to its lowest permissible value and the resulting one-variable problem solved;
(6)   a_1 = 0 or a_2 = 0 or b ≤ 0;
(7)   L = U.

Our algorithm successively reduces (3) to other problems of the same form until one of the conditions (5), (6), or (7) is met. Assume to start with that none of these three conditions is satisfied. We can assume, without loss of generality, that a_1 U ≤ b; if not, we replace U by ⌊b/a_1⌋. The set of (x_1, x_2) satisfying (3a), (3b), and (3c) is the set of integer points inside the triangle T_2 and the rectangle T_1 shown in Figure 2. Maximizing a linear function over the integer points in the rectangle T_1 is trivial, so we need only consider the triangle T_2.

[Fig. 2: the line a_1 x_1 + a_2 x_2 = b, the rectangle T_1 over L ≤ x_1 ≤ U, and the triangle T_2 above it.]

It is clear that with the change of variables

(8)   x_1' = x_1 − L,      x_2' = x_2 − ⌊(b − a_1 U)/a_2⌋,

x_1' and x_2' are nonnegative for every integer point in T_2. Thus we can always reduce problem (3) to the following one:

(9)   maximize    c_1 x_1' + c_2 x_2'
      subject to  a_1 x_1' + a_2 x_2' ≤ b − ⌊(b − a_1 U)/a_2⌋ a_2 − a_1 L = b' (say),   (9a)
      and         x_1', x_2' ≥ 0, integers,                                             (9b)

where a_1, a_2, c_1, c_2 > 0 are integers.

If again b' ≤ 0, the problem is trivial, so assume b' > 0. After renumbering the variables, if necessary, assume that a_1 ≥ a_2. Let k = ⌊a_1/a_2⌋ and write a_1 = k a_2 + p. (Note that a_2 ≠ 0.) Then, since c_1, c_2 > 0, we may add the constraint x_2' ≥ ⌊(b' − a_1 x_1')/a_2⌋ to (9) without altering the set of optimal solutions. But

      ⌊(b' − a_1 x_1')/a_2⌋ = ⌊(b' − p x_1')/a_2⌋ − x_1' k     (in the context of (9a) and (9b))
                            ≥ ⌊b'/a_1⌋ k − x_1' k              (again in the same context, which forces x_1' ≤ ⌊b'/a_1⌋).

Therefore, the set of optimal solutions to (9) is the same as that of (10) below:

(10)  maximize    c_1 x_1' + c_2 x_2'
      subject to  a_1 x_1' + a_2 x_2' ≤ b',                    (10a)
                  x_1', x_2' ≥ 0, integers,                    (10b)
      and         x_2' ≥ (⌊b'/a_1⌋ − x_1') k.                  (10c)

Replacing x_2' ≥ 0 by x_1' ≤ ⌊b'/a_1⌋ (the two are interchangeable in the context of (10a) and (10c)) and substituting

(11)  x_2'' = x_2' − (⌊b'/a_1⌋ − x_1') k

into (10), we get

(12)  maximize    (c_1 − c_2 k) x_1' + c_2 x_2'' + c_2 k ⌊b'/a_1⌋
      subject to  p x_1' + a_2 x_2'' ≤ b' − k ⌊b'/a_1⌋ a_2,
                  0 ≤ x_1' ≤ ⌊b'/a_1⌋,  x_2'' ≥ 0, integers.

We now have a problem of the same form as (3), and it is clear that using (11) we can convert an optimal solution of (12) into one of (9). If (12) now satisfies (5), (6), or (7), we are done; otherwise, we iterate the same process on (12).

Note that to obtain (12) from (3) we need to perform only a constant number of multiplications and divisions, each of which takes at most O(d log d log log d) time [2], where d is the length of the input. Further, we claim that after two iterations the size of the lesser coefficient in the knapsack constraint is cut down by a factor of at least 2. This is because in (9) either p = a_1 − ⌊a_1/a_2⌋ a_2 ≤ a_2/2, in which case already in (12) the lesser coefficient is at most half of that in (3), or p > a_2/2. In this latter case a further iteration on (12) yields a problem with coefficients p and a_2 − ⌊a_2/p⌋ p = a_2 − p < a_2/2. Thus at most 2 log a_2 iterations are required before condition (6) is satisfied, and the problem is solved in O(d² log d log log d) time.

To summarize the above discussion, we present the algorithm for solving a problem of the form (3). The procedure below assumes that its input satisfies (4). We also assume that either the given problem is infeasible (in which case, by convention, we set the value of the optimal solution to −∞) or it has a finite optimal solution value. The procedure reports which of the two cases holds and, in the latter case, returns an optimal solution (x_1, x_2) as a column vector.

procedure KP(c_1, c_2, a_1, a_2, b, L, U)
begin
   if (5), (6), or (7) holds then return the solution after a trivial calculation;
   if U ≠ ∞ then U ← min(U, ⌊b/a_1⌋);
   comment We now reduce (3) to (9);
   if L ≠ 0 or U ≠ ∞ then
      return the better of the two solutions (U, ⌊(b − a_1 U)/a_2⌋) and
         KP(c_1, c_2, a_1, a_2, b − ⌊(b − a_1 U)/a_2⌋ a_2 − a_1 L, 0, ∞) + (L, ⌊(b − a_1 U)/a_2⌋);
   comment Now L = 0 and U = ∞. After ensuring that a_1 ≥ a_2, we recursively solve (12);
   if a_1 < a_2 then return [0 1; 1 0] · KP(c_2, c_1, a_2, a_1, b, 0, ∞);
   k ← ⌊a_1/a_2⌋;  p ← a_1 − k a_2;
   return [1 0; −k 1] · KP(c_1 − c_2 k, c_2, p, a_2, b − k ⌊b/a_1⌋ a_2, 0, ⌊b/a_1⌋) + (0, ⌊b/a_1⌋ k)
end
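The Python sketch below mirrors procedure KP as reconstructed above; it is not taken from the paper. Two details that the procedure leaves implicit are made explicit here: an implied upper bound ⌊b/a_1⌋ is supplied when L > 0 but U = ∞, and the triangle T_2 is taken at the level ⌈(b − a_1 U)/a_2⌉ so that the bound x_1 ≤ U remains implied after the translation (8). Infeasibility is reported as None rather than −∞.

```python
import math

def kp(c1, c2, a1, a2, b, L=0, U=math.inf):
    """Maximize c1*x1 + c2*x2 s.t. a1*x1 + a2*x2 <= b, L <= x1 <= U, x1, x2 >= 0 integers.

    Returns an optimal (x1, x2), or None if the problem is infeasible.
    Assumes a1, a2 >= 1 at the top level (so the optimum is finite); inside the
    recursion c1 may become nonpositive and the coefficient of x1 may become 0,
    exactly as in the reduction (9) -> (12)."""
    # Largest value x1 can take (with x2 = 0).
    x1_hi = min(U, b // a1) if a1 > 0 else U
    if b < 0 or L > x1_hi:
        return None                                    # infeasible

    # Trivial cases, conditions (5)-(7): one variable can be fixed, or x1 is forced.
    if c1 <= 0 or c2 <= 0 or a1 == 0 or L == x1_hi:
        cands = [L] if x1_hi == math.inf else [L, x1_hi]
        def complete(x1):                              # fill in x2 greedily
            return (x1, (b - a1 * x1) // a2 if c2 > 0 else 0)
        return max((complete(x1) for x1 in cands), key=lambda p: c1 * p[0] + c2 * p[1])

    # Reduce (3) to the knapsack form (9): clip U and split off the rectangle T1.
    if U != math.inf:
        U = min(U, b // a1)
    elif L != 0:
        U = b // a1                                    # implied bound, keeps the split finite
    if L != 0 or U != math.inf:
        corner = (U, (b - a1 * U) // a2)               # best corner of the rectangle T1
        h = -((a1 * U - b) // a2)                      # ceil((b - a1*U)/a2): x2-level of T2
        sub = kp(c1, c2, a1, a2, b - a2 * h - a1 * L, 0, math.inf)
        cands = [corner] + ([(sub[0] + L, sub[1] + h)] if sub is not None else [])
        return max(cands, key=lambda p: c1 * p[0] + c2 * p[1])

    # Now L = 0 and U = infinity; ensure a1 >= a2 by renumbering the variables.
    if a1 < a2:
        x2, x1 = kp(c2, c1, a2, a1, b)
        return (x1, x2)

    # Reduction (9) -> (12): a1 = k*a2 + p, substitute x2'' = x2 - (b//a1 - x1)*k.
    k, p, Ub = a1 // a2, a1 % a2, b // a1
    y1, y2 = kp(c1 - c2 * k, c2, p, a2, b - k * Ub * a2, 0, Ub)
    return (y1, y2 + (Ub - y1) * k)
```

The lesser constraint coefficient shrinks as in the Euclidean algorithm, so the recursion depth is O(log min(a_1, a_2)), matching the analysis above. On small instances the result can be checked against the brute-force reference from the Introduction; for example, kp(3, 5, 7, 3, 19) returns (0, 6), the same point found by brute_force_ip((3, 5), [(7, 3)], (19,)).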
3. Conclusions

In [1] it was conjectured that there is a polynomial algorithm that solves the general k-variable knapsack problem provided k is assumed to be a constant. The algorithm given in Section 2 for the two-variable case seems to be simpler than the one in [1], and hence is probably more amenable to efforts at generalizing it to any fixed number of variables. It has, however, the same time complexity as the latter. It is not clear that this algorithm is different from that given by Hirschberg and Wong, although it was arrived at in a rather different way. The referee suggests that it would be of interest to see whether the algorithm given here could in fact be proved to be identical to, or a simple modification of, that of Hirschberg and Wong.

REFERENCES

1. HIRSCHBERG, D.S., AND WONG, C.K. A polynomial algorithm for the knapsack problem in two variables. J. ACM 23, 1 (Jan. 1976), 147-154.
2. AHO, A.V., HOPCROFT, J.E., AND ULLMAN, J.D. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Mass., 1974.

RECEIVED JULY 1977; REVISED FEBRUARY 1979