ALFRED P. SLOAN SCHOOL OF MANAGEMENT
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
50 MEMORIAL DRIVE, CAMBRIDGE, MASSACHUSETTS 02139

A Fast and Simple Algorithm for the Maximum Flow Problem

Ravindra K. Ahuja* and James B. Orlin
Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02139, USA

Sloan W.P. No. 1905-87, June 1987

* On leave from Indian Institute of Technology, Kanpur 208 016, INDIA

Abstract

We present a simple sequential algorithm for the maximum flow problem on a directed network with n nodes, m arcs, and integral arc capacities bounded by U. Our algorithm runs in time O(nm + n^2 log U). Under the practical assumption that U is polynomially bounded in n, it runs in time O(nm + n^2 log n). This result improves the current best bound of O(nm log (n^2/m)), obtained by Goldberg and Tarjan, by a factor of log n for non-sparse and non-dense networks, without using any complex data structures.

Subject Classification: 484.

The maximum flow problem is one of the most fundamental problems in network flow theory and has been investigated extensively. The problem was first formulated by Ford and Fulkerson [1956], who also solved it using their well-known augmenting path algorithm. Since then a number of algorithms have been developed for this problem, as tabulated below. Here n is the number of nodes, m is the number of arcs, and U is an upper bound on the integral arc capacities.
#    Due to                                   Running Time
1    Ford and Fulkerson [1956]                O(nmU)
2    Edmonds and Karp [1972]                  O(nm^2)
3    Dinic [1970]                             O(n^2 m)
4    Karzanov [1974]                          O(n^3)
5    Cherkasky [1977]                         O(n^2 m^(1/2))
6    Malhotra, Kumar and Maheshwari [1978]    O(n^3)
7    Galil [1980]                             O(n^(5/3) m^(2/3))
8    Galil and Naamad [1980]                  O(nm log^2 n)
9    Shiloach and Vishkin [1982]              O(n^3)
10   Sleator and Tarjan [1983]                O(nm log n)
11   Tarjan [1984]                            O(n^3)
12   Gabow [1985]                             O(nm log U)
13   Goldberg [1985]                          O(n^3)
14   Goldberg and Tarjan [1986]               O(nm log (n^2/m))

Edmonds and Karp [1972] showed that if flows are augmented along shortest paths from source to sink, the algorithm of Ford and Fulkerson [1956] runs in time O(nm^2). Independently, Dinic [1970] introduced the concept of shortest path networks, called layered networks, and obtained an O(n^2 m) algorithm. This bound was improved to O(n^3) by Karzanov [1974], who introduced the concept of preflows in a layered network. A preflow is similar to a flow except that the amount flowing into a node may exceed the amount flowing out of it. Since then, researchers have improved the complexity of Dinic's algorithm for sparse networks by devising sophisticated data structures. Among these contributions, Sleator and Tarjan's [1983] dynamic tree data structure is the most attractive from a worst case point of view. The algorithms of Goldberg [1985] and of Goldberg and Tarjan [1986] were a novel departure from these approaches in the sense that they did not construct layered networks. Their algorithm contains the essence of Karzanov's method but does not maintain layered networks. It maintains a preflow and proceeds by pushing flow from nodes with positive excess to nodes estimated to be closer to the sink. To estimate which nodes are closer to the sink, it maintains for each node a distance label, which is a lower bound on the length of a shortest augmenting path from that node to the sink.
Distance labels are a better computational device than layered networks since these labels are simpler to understand, easier to manipulate, and easier to use in a parallel algorithm. Moreover, by cleverly implementing the dynamic tree data structure, Goldberg and Tarjan obtained the best computational complexity for sparse as well as dense networks. Bertsekas [1986] recently developed an algorithm for the minimum cost flow problem that simultaneously generalizes both the Goldberg-Tarjan algorithm and Bertsekas' [1979] auction algorithm for the assignment problem.

For problems with arc capacities polynomially bounded in n, our maximum flow algorithm is an improvement of Goldberg and Tarjan's algorithm and uses concepts of scaling introduced by Edmonds and Karp [1972] for the minimum cost flow problem and later extended by Gabow [1985] for other network optimization problems. The bottleneck operation in the straightforward implementation of Goldberg and Tarjan's algorithm is the number of "non-saturating pushes," which is O(n^2 m). However, by a very clever application of the dynamic tree data structure, they reduce the computational time to O(nm log (n^2/m)). We show that the number of non-saturating pushes can be reduced to O(n^2 log U) by using "excess scaling." Our algorithm performs O(log U) scaling iterations, and each scaling iteration requires O(n^2) non-saturating pushes. We push flows from nodes with sufficiently large excesses to nodes with sufficiently small excesses, while never allowing the excesses to become too large. Consequently, the computational time of our algorithm is O(nm + n^2 log U). Under the reasonable assumption that U is polynomially bounded in n, the algorithm runs in time O(nm + n^2 log n), which improves the bound of Goldberg and Tarjan's algorithm by a factor of log n for networks that are both non-sparse and non-dense. Moreover, our algorithm requires only elementary data structures and little computational overhead, so it is easy to implement and computationally attractive in practice.

Our algorithm and the bounds above assume the uniform model of computation, in which all arithmetic operations take O(1) steps. If the arc capacities are exponentially large numbers, this model is arguably inappropriate, and it is more realistic to adopt the logarithmic model of computation (as described by Cook and Reckhow [1973]), which counts the number of bit operations. In this model, most arithmetic steps take more than O(1) steps. Using the logarithmic model of computation and modifying our algorithm slightly to speed up arithmetic operations on large integers, we obtain an O(nm log n + n^2 log n log U) algorithm for the maximum flow problem. The corresponding time bound for the Goldberg-Tarjan algorithm is O(nm log (n^2/m) log U). Hence our algorithm surpasses Goldberg and Tarjan's algorithm by a factor of roughly m/n in this model; in other words, the relative worst case performance of our algorithm becomes increasingly superior as U increases in size. Our results on the logarithmic model of computation will be presented in Ahuja and Orlin [1987a].

Our maximum flow algorithm is difficult to make "massively parallel" since we push flow from one node at a time. Nevertheless, with d = ⌈m/n⌉ parallel processors we can obtain an O(n^2 (log U) d) time bound. Under the assumption that U is polynomially bounded in n, the algorithm runs in O(n^2 log n) time, which is comparable to the best available parallel time bounds, obtained by Shiloach and Vishkin [1982] and Goldberg and Tarjan [1986] using n parallel processors. Thus, our algorithm has an advantage in situations for which parallel processors are at a premium. Our work on parallel algorithms will be presented in Ahuja and Orlin [1987b].

1. Notation

Let G = (N, A) be a directed network with a positive integer capacity u_ij for every arc (i, j) in A. Let n = |N| and m = |A|. The source s and the sink t are two distinguished nodes of the network. We assume that for every arc (i, j) in A, the arc (j, i) is also contained in A, possibly with zero capacity. Let U = max {u_sj : (s, j) in A}. We further assume without loss of generality that the network does not contain multiple arcs.

A flow is a function x: A -> R satisfying

    sum_{j: (j,i) in A} x_ji  -  sum_{j: (i,j) in A} x_ij  =  0,    for all i in N - {s, t},      (1)

    sum_{j: (j,t) in A} x_jt  =  v,    for some v >= 0,                                          (2)

    0 <= x_ij <= u_ij,    for all (i, j) in A.                                                   (3)

The maximum flow problem is to determine a flow x for which v is maximized.

A preflow x is a function x: A -> R which satisfies (2), (3), and the following relaxation of (1):

    sum_{j: (j,i) in A} x_ji  -  sum_{j: (i,j) in A} x_ij  >=  0,    for all i in N - {s, t}.    (4)

The algorithms described in this paper maintain a preflow at each intermediate stage. For a given preflow x, we define for each node i in N - {s, t} the excess

    e_i  =  sum_{j: (j,i) in A} x_ji  -  sum_{j: (i,j) in A} x_ij.

A node with positive excess is referred to as an active node. The residual capacity of any arc (i, j) in A with respect to a given preflow x is given by r_ij = u_ij - x_ij + x_ji. The network consisting only of arcs with positive residual capacities is referred to as the residual network. Figure 1 illustrates these definitions.

[Figure 1 shows: (a) a network with arc capacities, in which node 1 is the source and node 4 is the sink (arcs with zero capacities are not shown); (b) the network with a preflow x; (c) the residual network with residual arc capacities.]
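These definitions translate directly into a few lines of code. The following sketch (Python; the function name and data layout are ours, not the paper's) computes the excesses e_i and the residual capacities r_ij = u_ij - x_ij + x_ji for a given preflow:

```python
def excesses_and_residuals(nodes, cap, x):
    """Given arc capacities `cap` and a preflow `x` (dicts keyed by
    arc (i, j)), return node excesses e(i) = inflow - outflow and
    residual capacities r(i, j) = (u_ij - x_ij) + x_ji."""
    e = {v: 0 for v in nodes}
    for (i, j), f in x.items():
        e[j] += f          # flow into j
        e[i] -= f          # flow out of i
    # every arc and its reversal gets a residual capacity
    arcs = set(cap) | {(j, i) for (i, j) in cap}
    r = {}
    for (i, j) in arcs:
        # unused capacity of (i, j) plus flow on (j, i) that can be cancelled
        r[(i, j)] = cap.get((i, j), 0) - x.get((i, j), 0) + x.get((j, i), 0)
    return e, r
```

For the source s, e_s is simply the negative of the total flow sent out; active nodes are exactly those i not in {s, t} with e[i] > 0.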
A distance function d: N -> Z+ for a preflow x is a function from the set of nodes to the non-negative integers. A distance function is valid if it also satisfies the following two conditions:

C1. d(t) = 0.

C2. d(i) <= d(j) + 1 for every arc (i, j) in A with r_ij > 0.

It is easy to demonstrate using induction that for a valid distance function, d(i) is a lower bound on the length of any path from node i to t in the residual network. If the distance label d(i) equals the length of a minimum path from i to t in the residual network, then we call the distance label exact. Our algorithm maintains a valid distance label for each node at each iteration, though not necessarily the exact distance labels. For example, in Figure 1(c), d = (0, 0, 0, 0) is a valid distance label, while d = (3, 1, 2, 0) represents the exact distance labels.

We define the arc adjacency list A(i) of a node i in N as the set of arcs directed out of node i, i.e., A(i) = {(i, k) in A : k in N}. Note that our definition of an adjacency list as a set of arcs differs from the more conventional definition as a set of nodes.

All logarithms in this paper are assumed to be of base 2 unless stated otherwise.

2. The Preflow-Push Algorithms

The preflow-push algorithms for the maximum flow problem maintain a preflow at every step and proceed by pushing the excess of nodes closer to the sink. The first preflow-push algorithm is due to Karzanov [1974]. Tarjan [1984] has suggested a simplified version of this algorithm. The recent algorithms of Goldberg [1985] and Goldberg and Tarjan [1986] are based on ideas similar to those presented in Tarjan [1984], but use distance labels to direct flows closer to the sink instead of constructing layered networks. We thus call their algorithm the distance-directed preflow-push algorithm.
In this section, we review the basic features of this algorithm, which for the sake of brevity we shall simply refer to as the preflow-push algorithm. Here we describe the 1-phase version of the preflow-push algorithm presented by Goldberg [1987]. The results in this section are due to Goldberg and Tarjan [1986].

All operations of the preflow-push algorithm are performed using only local information. At each iteration of the algorithm (except at initialization and at termination) the network contains at least one active node, i.e., a (non-source) node with positive excess. The goal of each iterative step is to choose some active node and to send its excess "closer" to the sink, with closer being judged with respect to the current distance labels. If the excess at this node cannot be sent to nodes with smaller distance labels, then the method increases the distance label of the node. The algorithm terminates when the network contains no active nodes. The preflow-push algorithm uses the following subroutines:

PRE-PROCESS. Send u_sj units of flow on each arc (s, j) in A(s). Perform a breadth first search on the residual network starting at t to determine an initial distance label d(i) for each node i != s, and let d(s) = n. (Alternatively, set d(t) = 0, d(i) = 1 for each i != s, t, and d(s) = n.)

SELECT. Select an active node i, i.e., a node i != s, t with e_i > 0.

PUSH(i). Select an arc (i, j) in A(i) with r_ij > 0 and d(i) = d(j) + 1. Send delta = min {e_i, r_ij} units of flow from node i to node j. We say that a push of delta units of flow on arc (i, j) is saturating if delta = r_ij and non-saturating otherwise.

RELABEL(i). Replace d(i) by min {d(j) + 1 : (i, j) in A(i) and r_ij > 0}.

Figure 2 contains the generic version of the preflow-push algorithm:

procedure PREFLOW-PUSH;
begin
    PRE-PROCESS;
    while the network contains an active node do
    begin
        SELECT an active node i;
        if there is an arc (i, j) in A(i) with r_ij > 0 and d(i) = d(j) + 1
        then PUSH(i) else RELABEL(i);
    end;
end;

Figure 2. The generic preflow-push algorithm.

Figure 3 illustrates the steps PUSH(i) and RELABEL(i) as applied to the network in Figure 1(a). Figure 3(a) specifies the preflow determined by PRE-PROCESS. The SELECT step selects node 2 for examination. Since arc (2, 4) has residual capacity r_24 = 1 and d(2) = d(4) + 1, the algorithm performs a (saturating) push of value delta = min {e_2, r_24} = 1. The push reduces the excess of node 2 by one unit. Arc (2, 4) is deleted from the residual network and arc (4, 2) is added to it. Since node 2 is still an active node, it can be selected again for further pushes. The arcs (2, 1) and (2, 3) have positive residual capacities, but they do not satisfy the distance condition. Hence the algorithm performs RELABEL(2), which gives the new distance label d'(2) = min {d(3) + 1, d(1) + 1} = min {2, 5} = 2.

The pre-process step accomplishes several important tasks. First, it causes the nodes adjacent to s to have positive excess, so that the algorithm can select some active node to begin with. (Otherwise, the algorithm could not get started.) Second, by saturating the arcs incident to s, we guarantee the feasibility of setting d(s) = n; since d(s) is a lower bound on the length of the minimum path from s to t, there is then no path from s to t in the residual network. Since the distances in d are non-decreasing (see Lemma 1 to follow), we are also guaranteed that in subsequent iterations the residual network will never contain a directed path from s to t, and so there can never be any need to push flow from s again. Third, since d(s) = n, the validity of the initial distance labels is immediate.

[Figure 3. Illustrations of the push and relabel steps: (a) the network with a preflow after the pre-processing step; (b) after the execution of step PUSH(2); (c) after the execution of step RELABEL(2). The distance labels shown are d(1) = 4, d(3) = 1, d(4) = 0, and d(2) = 1, with d'(2) = 2 after the relabel.]

In our improvement of the preflow-push algorithm, we need a few of the results given in Goldberg and Tarjan [1986]. Moreover, we include some of their proofs in order to make this presentation more self-contained.

Lemma 1. The generic preflow-push algorithm maintains a valid distance label at each step. Moreover, at each relabel step the distance label of a node strictly increases.

Proof. First note that the pre-process step constructs valid distance labels. Assume inductively that the distance function is valid prior to an operation, i.e., that it satisfies the validity conditions C1 and C2. A push operation on the arc (i, j) may create an additional arc (j, i) with positive residual capacity, and an additional condition d(j) <= d(i) + 1 needs to be satisfied. This condition remains satisfied since d(i) = d(j) + 1 by the property of the push operation. A push operation on arc (i, j) might delete this arc from the residual network, but this does not affect the validity of the distance function. During a relabel step, the new distance label of node i is d'(i) = min {d(j) + 1 : (i, j) in A(i) and r_ij > 0}, which is consistent with the validity conditions. Further, a relabel of node i is performed only when there is no arc (i, j) in A(i) with d(i) = d(j) + 1 and r_ij > 0. Hence d(i) < min {d(j) + 1 : (i, j) in A(i) and r_ij > 0} = d'(i), thereby proving the second part of the lemma.

Lemma 2. At any stage of the preflow-push algorithm, each node i with positive excess is connected to node s by a directed path from i to s in the residual network.

Proof. By the flow decomposition theory of Ford and Fulkerson [1962], any preflow x can be decomposed with respect to the original network G into non-negative flows along (i) paths from the source s to t, (ii) paths from s to active nodes, and (iii) flows around directed cycles. Let i be an active node. Then the flow decomposition of x must contain a path P from s to i. The reversal of P (P with the orientation of each arc reversed) is contained in the residual network G', and hence there is a directed path from i to s in G'.

Corollary 1. For each node i in N, d(i) < 2n.

Proof. The last time that node i was relabeled, it had a positive excess, and hence by Lemma 2 the residual network contained a path of length at most n - 1 from node i to node s. The fact that d(s) = n and condition C2 imply that d(i) <= d(s) + n - 1 < 2n.

Corollary 2. The number of relabel steps is at most 2n^2.

Proof. Each relabel step increases the distance label of a node by at least one, and by Corollary 1 no node can be relabeled more than 2n times.

Corollary 3. The number of saturating pushes is no more than nm.

Proof. Suppose that arc (i, j) becomes saturated at some iteration, at which d(i) = d(j) + 1. Then no more flow can be sent on (i, j) until flow is sent back from j to i, at which time d(j) = d(i) + 1 > d(i); so d(j) must increase by at least 2 between consecutive saturations of arc (i, j). Since d(j) < 2n, the arc (i, j) can become saturated at most n times, and the total number of arc saturations is no more than nm. (Recall that we assume that (i, j) and (j, i) are both in A.)
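In an implementation, the invariant of Lemma 1 is cheap to verify, since conditions C1 and C2 are purely local. A small illustrative helper (our naming, not from the paper):

```python
def is_valid_distance_labeling(d, residual_arcs, t):
    """C1: d(t) = 0.  C2: d(i) <= d(j) + 1 for every arc (i, j)
    with positive residual capacity (residual_arcs lists exactly those)."""
    if d[t] != 0:
        return False
    return all(d[i] <= d[j] + 1 for (i, j) in residual_arcs)
```

Such a check is useful as a debugging assertion after every push and relabel step.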
node by no more than at from . label of a sent and a positive excess, at becomes saturated increases d(j) no more than nm. is P had than Then no more flow can be can become saturated saturations than of relabel steps that arc 1). path P no node can be relabeled more than The number 3. Proof. which 1 a dj < dg + Each relabel step increases the distance Proof. i s < 2n. d(i) , hence the residual network contained a path of length The Let cycles. network G paths from (ii) t, , in G'. s For each node 1. The Proof. to i is to s around directed Thus Then . orientation of each arc reversed) there . [1962] to the original net^vork paths from source (i) and Fulkerson of Ford 1 of arc and (j, no more i) s. . 16 Lemma The number 3. non-saturating pushes of See Goldberg and Tarjan [1986] Proof. Lemma When Proof. maximum a the algorithm terminates, each so the final preflow excess; labels satisfy the conditions is a feasible flow. CI and C2 Zn-^m. . The algorithm terminates with 4. most is at and node flow. in N - {s, t) Further, since the distance d(s) = n it , follows termination the residual network contains no directed path from This condition is the classical termination criteria for the algorithm of Ford and Fulkerson [1962] The to pushes upon to s maximum t . flow . potential bottleneck operation in the preflow based algorithms Karzanov number has zero [1974] , Tarjan [1984], and Goldberg and Tarjan [1986] of non-saturating pushes, for the following reason: which dominates the number The saturating pushes cause is due the of saturating structural changes - they delete saturated arcs from the residual network. This bound pushes —no matter which order they are performed. The non-saturating in of 0(nm) on number observation leads to a the of saturating pushes do not change the structure of the residual network and seem more difficult to 0(n-^) [1986] bound. 
Goldberg [1985] reduced the number of non-saturating pushes to O(n^3) by examining the nodes in a specific order, and Goldberg and Tarjan [1986] reduced the average time per non-saturating push by using a sophisticated data structure. In the next section we show that by using scaling, we can dramatically reduce the number of non-saturating pushes to O(n^2 log U).

3. The Scaling Algorithm

Our maximum flow algorithm improves the generic preflow-push algorithm of Section 2 and uses "excess scaling" to reduce the number of non-saturating pushes from O(n^2 m) to O(n^2 log U). The basic idea is to push flow from active nodes with sufficiently large excesses to nodes with sufficiently small excesses, while never letting the excesses become too large.

The algorithm performs K = ⌈log U⌉ + 1 scaling iterations. For a scaling iteration, the excess-dominator is defined to be the least integer Δ that is a power of 2 and satisfies e_i <= Δ for all i in N. A new scaling iteration is considered to have begun whenever the excess-dominator decreases by a factor of at least 2. We show that after at most 8n^2 non-saturating pushes, the excess-dominator decreases by a factor of at least 2 and a new scaling iteration begins. After at most K scaling iterations, all node excesses drop to zero and we obtain a maximum flow.

In a scaling iteration, we assure that each non-saturating push sends at least Δ/2 units of flow and that the excess-dominator does not increase. To ensure that each non-saturating push sends at least Δ/2 units of flow, we consider only nodes with excess more than Δ/2, and among these nodes with large excess, we select a node with minimum distance label. In order to select an active node with excess more than Δ/2 and with minimum distance label among such nodes, we maintain the lists

    LIST(r) = {i in N : e_i > Δ/2 and d(i) = r},    for each r = 1, ..., 2n - 1.
These lists can be maintained in the form of either a linked stack or a linked queue (see, for example, Aho, Hopcroft and Ullman [1974]), which enables insertion and deletion of elements in O(1) time. The variable level indicates the smallest index r for which LIST(r) is non-empty.

As per Goldberg and Tarjan, we use the following data structure to efficiently select the eligible arc for pushing flow out of a node. We maintain with each node i a list A(i) of the arcs directed out of it. Arcs in each list can be arranged arbitrarily, but the order, once decided, remains unchanged throughout the algorithm. Each node i has a current arc (i, j), which is the current candidate for pushing flow out of i. Initially, the current arc of node i is the first arc in its arc list. The list is examined sequentially: whenever the current arc is found to be ineligible for pushing flow, the next arc in the arc list is made the current arc. When the arc list is completely examined, the node is relabeled and the current arc is again the first arc in its list.
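The current-arc discipline can be sketched as follows (an illustrative helper, not the paper's code): each node keeps an index into its fixed arc list, the scan for an eligible arc resumes where it left off, and the index is reset to the first arc only when the node is relabeled.

```python
def find_admissible(i, arcs, cur, d, r):
    """Scan node i's arc list arcs[i] from its current-arc index cur[i].
    Return an admissible arc (i, j) with r_ij > 0 and d(i) = d(j) + 1,
    or None if the list is exhausted (in which case the caller
    relabels i and resets cur[i] to 0)."""
    while cur[i] < len(arcs[i]):
        j = arcs[i][cur[i]]
        if r.get((i, j), 0) > 0 and d[i] == d[j] + 1:
            return (i, j)
        cur[i] += 1          # ineligible: advance to the next arc
    return None
```

Because each advance of cur[i] between two relabels of i is permanent, the total scanning cost per relabel of node i is O(|A(i)|), which is the accounting used in Theorem 2 below.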
The algorithm can be described formally as follows:

procedure MAX-FLOW;
begin
    PRE-PROCESS;
    K := ⌈log U⌉;
    for k := 1 to K + 1 do
    begin
        Δ := 2^(K - k + 1);
        for each i in N do
            if e_i > Δ/2 then add node i to LIST(d(i));
        level := 1;
        while level < 2n do
            if LIST(level) = empty then level := level + 1
            else begin
                select a node i from LIST(level);
                PUSH/RELABEL(i);
            end;
    end;
end;

procedure PUSH/RELABEL(i);
begin
    found := false;
    let (i, j) be the current arc of node i;
    while found = false and (i, j) != null do
        if d(i) = d(j) + 1 and r_ij > 0 then found := true
        else replace the current arc of node i by the next arc (i, j);
    if found = true
    then begin  {push}
        push min {e_i, r_ij, Δ - e_j} units of flow on arc (i, j)
            (the term Δ - e_j is ignored if j = s or j = t);
        update the residual capacities and the excesses;
        if (the updated excess) e_i <= Δ/2 then delete node i from LIST(d(i));
        if j != s and j != t and (the updated excess) e_j > Δ/2
            then add node j to LIST(d(j)) and set level := level - 1;
    end
    else begin  {relabel}
        delete node i from LIST(d(i));
        update d(i) := min {d(j) + 1 : (i, j) in A(i) and r_ij > 0};
        add node i to LIST(d(i));
        let the first arc of A(i) be the current arc of node i;
    end;
end;

4. Complexity of the Algorithm

In this section, we show that the distance-directed preflow-push algorithm with scaling correctly computes a maximum flow in O(nm + n^2 log U) time.

Lemma 5. The algorithm satisfies the following two conditions:

C3. Each non-saturating push from a node i to a node j sends at least Δ/2 units of flow.

C4. No excess increases above Δ (i.e., the excess-dominator does not increase subsequent to a push).

Proof. For every push on arc (i, j), we have e_i > Δ/2 and e_j <= Δ/2, since node i is a node with smallest distance label among nodes whose excess is more than Δ/2, and d(j) = d(i) - 1 < d(i) by the property of the push operation. Hence in a non-saturating push the algorithm sends min {e_i, Δ - e_j} >= min {Δ/2, Δ/2} = Δ/2 units of flow, proving C3. Further, a push operation increases the excess of node j only. Let e'_j be the excess at node j subsequent to the push. Then e'_j = e_j + min {e_i, r_ij, Δ - e_j} <= e_j + Δ - e_j = Δ. The node excesses thus remain less than or equal to Δ, proving C4.

While there are other ways of ensuring that the algorithm always satisfies the properties stated in conditions C3 and C4, pushing flow from a node with excess greater than Δ/2 and with minimum distance label among such nodes is a simple and efficient way of enforcing these conditions. With properties C3 and C4, the push operation may be viewed as a kind of "restrained greedy approach." Property C3 ensures that each push is sufficiently large, while property C4 imposes restraint so as to prevent any excess from exceeding Δ. Keeping the maximum excess below Δ is useful in "encouraging" the flow excesses to be distributed fairly equally among the nodes of the network, which should make it much easier for a number of nodes to send flow towards the sink. If many nodes were instead to send flow to a single node j, it is not likely that node j would be able to send the accumulated flow directly to the sink, and much of it would have to be returned from where it came. This distribution of excesses may have a major impact in practice as well as in theory.
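A compact executable sketch of the scaling loop (Python; simplified in that it selects a large-excess node of minimum distance label by scanning all nodes rather than through the LIST(r) buckets, so it illustrates conditions C3 and C4 but not the O(1) selection; all names are ours):

```python
from collections import defaultdict

def max_flow_excess_scaling(n, s, t, cap):
    """Excess-scaling preflow-push sketch. Nodes 0..n-1; cap maps
    arc (i, j) to a positive integer capacity."""
    r = defaultdict(int)
    adj = defaultdict(set)
    for (i, j), u in cap.items():
        r[(i, j)] += u
        adj[i].add(j)
        adj[j].add(i)
    d = [0] * n
    d[s] = n
    e = [0] * n
    for j in list(adj[s]):                    # PRE-PROCESS
        delta = r[(s, j)]
        r[(s, j)] -= delta
        r[(j, s)] += delta
        e[j] += delta
    U = max(cap.values())
    Delta = 1 << (U - 1).bit_length()         # least power of 2 with Delta >= U
    while Delta >= 1:                         # one scaling iteration per value
        while True:
            # active nodes with "large" excess e_i > Delta/2
            big = [v for v in range(n)
                   if v not in (s, t) and 2 * e[v] > Delta]
            if not big:
                break
            i = min(big, key=lambda v: d[v])  # minimum distance label
            for j in adj[i]:
                if r[(i, j)] > 0 and d[i] == d[j] + 1:
                    # push min{e_i, r_ij, Delta - e_j}  (C3 and C4)
                    amt = min(e[i], r[(i, j)])
                    if j not in (s, t):
                        amt = min(amt, Delta - e[j])
                    r[(i, j)] -= amt
                    r[(j, i)] += amt
                    e[i] -= amt
                    e[j] += amt
                    break
            else:                             # no admissible arc: relabel
                d[i] = min(d[j] + 1 for j in adj[i] if r[(i, j)] > 0)
        Delta //= 2                           # excess-dominator halves
    return e[t]
```

Note the push value min{e_i, r_ij, Δ - e_j}: the third term is what enforces C4, and it is dropped when j is the source or the sink, which are allowed to accumulate excess.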
Lemma 6. If each push satisfies conditions C3 and C4, then the number of non-saturating pushes per scaling iteration is at most 8n^2.

Proof. Consider the potential function F = sum_{i in N} e_i d(i). The initial value of F at the beginning of a scaling iteration is bounded by 2n^2 Δ, because e_i is bounded by Δ and d(i) is bounded by 2n. When the algorithm examines a node i, one of the following two cases must apply:

Case 1. The algorithm is unable to find an arc along which flow can be pushed. In this case no arc (i, j) in A(i) satisfies d(i) = d(j) + 1 and r_ij > 0, and the relabel step increases the distance label of node i by delta >= 1 units. This increases F by at most delta Δ units, since e_i <= Δ. Since the distance label of each node increases by less than 2n over all relabelings (by Corollary 1), the increases in F due to relabelings of nodes are bounded by 2n^2 Δ over a scaling iteration.

Case 2. The algorithm is able to identify an arc on which flow can be pushed, and so it performs either a saturating or a non-saturating push. In either case, F decreases. Moreover, each non-saturating push sends at least Δ/2 units of flow to a node with smaller distance label and hence decreases F by at least Δ/2 units. Since the initial value of F plus the increases in F cannot exceed 4n^2 Δ, this case cannot occur more than 8n^2 times. (A more careful analysis leads to a bound of 4n^2.)

Theorem 1. The procedure MAX-FLOW performs O(n^2 log U) non-saturating pushes.

Proof. The initial value of the excess-dominator is Δ = 2^⌈log U⌉ <= 2U. By Lemma 6, within 8n^2 non-saturating pushes the value of the excess-dominator decreases by a factor of at least 2 and a new scaling iteration begins. After 1 + ⌈log U⌉ such scaling iterations, Δ < 1, and by the integrality of the flows, e_i = 0 for all i in N - {s, t}. The algorithm thus obtains a feasible flow, which by Lemma 4 must be a maximum flow.

Theorem 2. The complexity of the maximum flow algorithm is O(nm + n^2 log U).

Proof. The complexity of the algorithm depends upon the number of executions of the while loop in the main program. Each execution of the loop either calls the procedure PUSH/RELABEL(i) or increases the variable level. Each call of PUSH/RELABEL(i) results in one of the following outcomes:

Case 1. A push is performed. The number of saturating pushes is O(nm) and the number of non-saturating pushes is O(n^2 log U), so this case occurs O(nm + n^2 log U) times. In each such execution, the effort to perform the push operation is O(1) plus the number of times the current arc of node i is replaced by the next arc in A(i); the latter effort is accounted for in Case 2.

Case 2. The distance label of node i goes up. By Corollary 2, this outcome can occur O(n) times for each node i. Computing the new distance label requires examining the arcs in A(i), and between consecutive relabelings of node i the current arc of node i is replaced at most |A(i)| times. Since each arc list A(i) is examined O(n) times, the total effort for all relabelings and all current-arc replacements is sum_{i in N} O(n) |A(i)| = O(nm).

Thus the algorithm calls PUSH/RELABEL(i) O(nm + n^2 log U) times, and the total effort for these operations is clearly O(nm + n^2 log U). The lists LIST(r) are stored as linked stacks and queues; hence the addition and deletion of any element takes O(1) time, and updating these lists is not a bottleneck operation. Finally, we need to bound the increases and decreases of the variable level. In each scaling iteration, the variable level is bounded below by 1 and bounded above by 2n, and it decreases by 1 only when a push is performed. Hence the number of decreases of level is bounded by the number of pushes, and the number of increases per scaling iteration is bounded by the number of decreases plus 2n. Since O(log U) scaling iterations are performed, updating level is bounded by the number of pushes plus O(n log U), which again is not a bottleneck operation.
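The counting behind Lemma 6 and Theorem 1 can be recapped in one chain of estimates (our summary, in the paper's notation):

```latex
\begin{align*}
F &= \sum_{i \in N} e_i\, d(i), \qquad
  F_{\text{initial}} \le 2n^2\Delta \quad (e_i \le \Delta,\ d(i) < 2n),\\
\text{relabel increases:}\quad &\textstyle\sum \Delta F \le 2n^2\Delta
  \quad (\text{each } d(i) \text{ rises less than } 2n \text{ in total, } e_i \le \Delta),\\
\text{each non-saturating push:}\quad &\Delta F \le -\Delta/2
  \quad (\text{at least } \Delta/2 \text{ units sent one level down, by C3}),\\
\#\{\text{non-saturating pushes}\} &\le
  \frac{2n^2\Delta + 2n^2\Delta}{\Delta/2} = 8n^2
  \quad \text{per scaling iteration},
\end{align*}
```

and summing over the 1 + ⌈log U⌉ scaling iterations gives the O(n^2 log U) total of Theorem 1.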
5. Refinements

As a practical matter, several modifications of the algorithm might improve its actual execution time without affecting its worst case complexity. We characterize the modifications as one of the following three types:

1. Modify the scale factor.
2. Allow some non-saturating pushes of small amount.
3. Try to locate nodes disconnected from the sink.

The algorithm in its present form uses a scale factor of 2, i.e., it reduces the excess-dominator by a factor of 2 in consecutive scaling iterations. However, in practice it might be better to scale the excess-dominator by some other fixed integer factor β >= 2. The excess-dominator is then defined to be the least power of β that is no less than the excess at any node, and property C3 becomes:

C3'. Each non-saturating push from a node i to a node j sends at least Δ/β units of flow.

The procedure MAX-FLOW can be altered easily to incorporate the scale factor β by letting LIST(r) = {i : e_i > Δ/β and d(i) = r}. The algorithm can be shown to run in O(nm + β n^2 log_β U) time for any fixed value of β >= 2. Clearly, from the worst case point of view, β = 2 is optimum. The best choice of the value of β in practice should be determined empirically.

Our algorithm as stated selects a node with excess more than Δ/2 and performs either a saturating or a non-saturating push. We could, however, keep pushing flow out of this node until either a relabel step occurs or its excess is reduced to zero. This variation might produce many saturating pushes from the same node and might even decrease its excess below Δ/2. Also, the algorithm as stated sends at least Δ/2 units of flow in every non-saturating push. The same complexity of the algorithm is obtained if only one out of every r non-saturating pushes, for some fixed r >= 1, sends at least Δ/2 units of flow.

One potential bottleneck in practice is the number of relabels. In particular, the algorithm "recognizes" that the residual network contains no path from node i to node t only when d(i) exceeds n - 2. Goldberg [1987] suggested that it may be desirable occasionally to perform a breadth first search so as to make the distance labels exact. He discovered that a judicious use of breadth first search could dramatically speed up the algorithm.

An alternative approach is to keep track of the number n_k of nodes whose distance label is k. If n_k decreases to 0 for some k, then all nodes with distance label greater than k have become disconnected from the sink in the residual network. (Once a node j is disconnected from the sink, it stays disconnected, since the shortest path from j to t is nondecreasing in length.) We would then avoid selecting such nodes; at the termination of the algorithm, the excesses of the active nodes disconnected from the sink are sent back to the source.

6. Summary and Conclusions

Our improvement of the distance-based preflow-push algorithm has several advantages over other algorithms for the maximum flow problem. First, our algorithm is the most efficient algorithm for the maximum flow problem under the reasonable assumption that U is polynomially bounded in n. (We can relax this assumption on U if we modify the algorithm slightly and adopt the more realistic logarithmic model of computation, as discussed in Ahuja and Orlin [1987a].)

Second, our algorithm has several intuitively appealing features that are not guaranteed in other preflow-push methods. We always try to push flow from a node with a large excess. (Gabow's scaling version of Dinic's algorithm always selects an augmentation along a path with a large capacity; this feature accounts for much of the efficiency of his method.) Also, we never allow too much excess to accumulate at a node. In addition, our algorithm relies on very simple data structures with little computational overhead. This feature contrasts dramatically with the algorithms that rely on dynamic trees.
(For example, for a problem with cost vector c, one would first solve the problem with the costs approximated by c/2^T for some integer T, then reoptimize for the problem with the costs approximated by c/2^(T-1), and so forth.) Our scaling method does not fit into this standard framework. In Ahuja and Orlin [1987a], we describe an alternate framework for scaling algorithms which includes all of the previous scaling algorithms as well as the algorithm described here, the algorithms of Goldberg and Tarjan [1987] for the minimum cost flow problem, and the algorithms of Gabow and Tarjan [1987] for the assignment and related problems.

Fourth, our algorithm appears to be quite efficient in practice. In Ahuja and Orlin [1987c], we empirically test a number of different algorithms for the maximum flow problem. From preliminary results, the algorithm presented here appears to be comparable to the best other maximum flow algorithm for a range of distributions. In addition, our algorithm involves only simple data structures, so it is relatively easy to encode.

Fifth, our algorithm can be implemented efficiently on a parallel computer with a small number of parallel processors. In particular, suppose that d = m/n and that log U = O(log n). Then our algorithm runs in O(n^2 log n) time on a parallel random access machine (PRAM) using only d parallel processors. This time bound is comparable to the time bound achieved by Goldberg and Tarjan [1986] using n parallel processors; thus we obtain a factor of n/d improvement in the utilization of computational resources. In addition, our model of computation is less restrictive. These results are discussed in Ahuja and Orlin [1987b].

Acknowledgements

We wish to thank John Bartholdi and Tom Magnanti for their suggestions, which led to improvements in the presentation. This research was supported in part by the Presidential Young Investigator Grant 8451517-ECS of the National Science Foundation.

REFERENCES

Aho, A.V., J.E. Hopcroft, and J.D. Ullman. 1974. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, MA.
Ahuja, R.K., and J.B. Orlin. 1987a. Scaling Algorithms for Combinatorial Optimization Problems and the Logarithmic Model of Computation. Work in progress.

Ahuja, R.K., and J.B. Orlin. 1987b. A Fast Parallel Maximum Flow Algorithm Using Few Parallel Processors. Work in progress.

Ahuja, R.K., and J.B. Orlin. 1987c. Numerical Investigation of Maximum Flow Algorithms. Work in progress.

Bertsekas, D.P. 1979. A Distributed Algorithm for the Assignment Problem. Unpublished Manuscript.

Bertsekas, D.P. 1986. Distributed Relaxation Methods for Linear Network Flow Problems. Proceedings of the 25th IEEE Conference on Decision and Control, Greece.

Cherkasky, R.V. 1977. Algorithm of Construction of Maximal Flow in Networks with Complexity of O(V^2 E^{1/2}) Operations. Mathematical Methods of Solution of Economical Problems 7, 112-125 (in Russian).

Cook, S.A., and R.A. Reckhow. 1973. Time Bounded Random Access Machines. J. Comput. System Sci. 7, 354-375.

Dinic, E.A. 1970. Algorithm for Solution of a Problem of Maximum Flow in Networks with Power Estimation. Soviet Math. Dokl. 11, 1277-1280.

Edmonds, J., and R.M. Karp. 1972. Theoretical Improvements in Algorithmic Efficiency for Network Flow Problems. J. Assoc. Comput. Mach. 19, 248-264.

Ford, L.R., and D.R. Fulkerson. 1956. Maximal Flow through a Network. Can. J. Math. 8, 399-404.

Ford, L.R., and D.R. Fulkerson. 1962. Flows in Networks. Princeton University Press, Princeton, New Jersey.

Gabow, H.N. 1985. Scaling Algorithms for Network Problems. J. Comput. System Sci. 31, 148-168.

Gabow, H.N., and R.E. Tarjan. 1987. Faster Scaling Algorithms for Graph Matching. Research Report, Computer Science Dept., Princeton University, Princeton, New Jersey.

Galil, Z. 1980. An O(V^{5/3} E^{2/3}) Algorithm for the Maximal Flow Problem. Acta Informatica 14, 221-242.

Galil, Z., and A. Naamad. 1980. An O(EV log^2 V) Algorithm for the Maximum Flow Problem. J. Comput. System Sci. 21, 203-217.

Goldberg, A.V. 1985. A New Max-Flow Algorithm.
Technical Report MIT/LCS/TM-291, Laboratory for Computer Science, M.I.T., Cambridge, MA.

Goldberg, A.V., and R.E. Tarjan. 1986. A New Approach to the Maximum Flow Problem. Proceedings of the Eighteenth Annual ACM Symposium on the Theory of Computing.

Goldberg, A.V., and R.E. Tarjan. 1987. Solving Minimum Cost Flow Problem by Successive Approximation. Proceedings of the Nineteenth Annual ACM Symposium on the Theory of Computing.

Goldberg, A.V. 1987. Efficient Graph Algorithms for Sequential and Parallel Computers. Unpublished Ph.D. Dissertation, Laboratory for Computer Science, M.I.T., Cambridge, MA.

Karzanov, A.V. 1974. Determining the Maximal Flow in a Network by the Method of Preflows. Soviet Math. Dokl. 15, 434-437.

Malhotra, V.M., M. Pramodh Kumar, and S.N. Maheshwari. 1978. An O(|V|^3) Algorithm for Finding Maximum Flows in Networks. Inform. Process. Lett. 7, 277-278.

Rock, H. 1980. Scaling Techniques for Minimal Cost Network Flows. In Discrete Structures and Algorithms, V. Page, Ed. Carl Hanser, München, 181-191.

Shiloach, Y. 1978. An O(nI log^2 I) Maximum-Flow Algorithm. Tech. Rep. STAN-CS-78-802, Computer Science Dept., Stanford University, Stanford, CA.

Shiloach, Y., and U. Vishkin. 1982. An O(n^2 log n) Parallel Max-Flow Algorithm. J. Algorithms 3, 128-146.

Sleator, D.D. 1980. An O(nm log n) Algorithm for Maximum Network Flow. Tech. Rep. STAN-CS-80-831, Computer Science Dept., Stanford University, Stanford, CA.

Sleator, D.D., and R.E. Tarjan. 1983. A Data Structure for Dynamic Trees. J. Comput. System Sci. 26, 362-391.

Tarjan, R.E. 1984. A Simple Version of Karzanov's Blocking Flow Algorithm. Operations Research Lett. 2, 265-268.