CS6234: Lecture 4, Linear Programming
  LP and the Simplex Algorithm: [PS82] Ch. 2
  Duality: [PS82] Ch. 3
  The Primal-Dual Algorithm: [PS82] Ch. 5
  Additional topics: reading/presentation by students
Lecture notes adapted from the Combinatorial Optimization course by Jorisk, Mathematics Dept, Maastricht University.
Hon Wai Leong, NUS (CS6234, Spring 2009). Copyright © 2009 by Leong Hon Wai.

Combinatorial Optimization, Chapter 5 [PS82]: The Primal-Dual Algorithm

Primal-Dual Method (overview)
[Flow diagram: starting from a feasible dual solution y, form the restricted primal; if its optimum is zero we have found a primal solution x and succeed; otherwise the restricted dual's optimal solution z is used to construct a better dual, and we repeat.]
(Source: CUHK, CS5160)

LP in standard form and its dual
  Primal (P):  min c'x   s.t.  Ax = b,  x ≥ 0.
  Dual (D):    max π'b   s.t.  π'A ≤ c',  π free.

Definition of the dual
Definition 3.1: Given an LP in general form, called the primal, the dual is defined as follows:
  Primal: min c'x                 Dual: max π'b
  a_i'x = b_i,  i ∈ M     ↔     π_i free
  x_j ≥ 0,      j ∈ N     ↔     π'A_j ≤ c_j

Complementary slackness
Theorem 3.4: A pair x, π, respectively feasible in a primal-dual pair, is optimal if and only if
  u_i = π_i (a_i'x − b_i) = 0   for all i,    (1)
  v_j = (c_j − π'A_j) x_j = 0   for all j.    (2)

Idea of a primal-dual algorithm
Suppose we have a dual feasible solution π. If we can find a primal feasible solution x such that x_j = 0 whenever c_j − π'A_j > 0, then:
  (1) holds because x is primal feasible, so a_i'x = b_i for all i;
  (2) holds because x_j = 0 whenever c_j − π'A_j > 0.
Thus the complementary slackness relations hold, and hence x and π are optimal solutions to the primal problem (P) and the dual problem (D) respectively.

Outline of the primal-dual algorithm
[Diagram: from (P) and a dual feasible π for (D), form the restricted primal (RP) and its dual (DRP); the optimal solution ρ of (DRP) gives the adjustment to π.]

Getting started: finding a dual feasible π
If c ≥ 0, then π = 0 is a dual feasible solution.
When c_j < 0 for some j, introduce a new variable x_{n+1} and the additional constraint
  x_1 + x_2 + … + x_n + x_{n+1} = b_{m+1}   (b_{m+1} large enough).
The dual (D) then becomes
  max  π'b + π_{m+1} b_{m+1}
  s.t. π'A_j + π_{m+1} ≤ c_j   for all j,
       π_{m+1} ≤ 0.
The solution π_i = 0, i = 1,…,m, together with π_{m+1} = min_j c_j < 0, is feasible for this dual.

Given a dual feasible solution π
Thus, we assume we have a dual feasible solution π. Consider the set J = { j : π'A_j = c_j }.
A feasible solution x to the primal (P) is optimal iff x_j = 0 for all j not in J. Hence, we aim for an x feasible in
  Σ_{j=1}^n a_ij x_j = b_i,   i = 1,…,m,
  x_j ≥ 0   for all j in J,
  x_j = 0   for all j not in J.

Restricted primal (RP)
  min  ξ = Σ_{i=1}^m x_i^a
  s.t. Σ_{j=1}^n a_ij x_j + x_i^a = b_i,   i = 1,…,m,
       x_j ≥ 0     for all j in J,
       x_j = 0     for all j not in J,
       x_i^a ≥ 0,   i = 1,…,m.            (RP)
If ξ_opt = 0, the corresponding optimal solution to (RP) yields an x which, together with π, satisfies the complementary slackness conditions. Thus, in the remainder we consider the case ξ_opt > 0.

The dual of the restricted primal
Restating (RP) with only the admissible columns:
  min  Σ_{i=1}^m x_i^a
  s.t. Σ_{j∈J} a_ij x_j + x_i^a = b_i,   i = 1,…,m,
       x_j ≥ 0     for all j in J,
       x_j = 0     for all j not in J,
       x_i^a ≥ 0,   i = 1,…,m.
Its dual, the restricted dual (DRP), is
  max  π'b
  s.t. π'A_j ≤ 0   for all j in J,
       π_i ≤ 1,     i = 1,…,m,
       π_i free,    i = 1,…,m.
We denote by ρ the optimal solution of this (DRP).
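Theorem 3.4 and the set J drive everything that follows, so here is a minimal numpy sketch (my own illustration, not from [PS82]) that checks conditions (1) and (2) for a given pair (x, π) of the standard-form primal and builds J = { j : π'A_j = c_j }. The function name, the tolerance `tol` and the tiny instance at the end are made up for illustration.

```python
import numpy as np

def complementary_slackness(A, b, c, x, pi, tol=1e-9):
    """Return (holds, J): whether (x, pi) satisfy conditions (1) and (2)
    of Theorem 3.4, and the set of admissible columns J."""
    A, b, c, x, pi = map(np.asarray, (A, b, c, x, pi))
    u = pi * (A @ x - b)          # condition (1): pi_i (a_i'x - b_i) = 0
    v = (c - pi @ A) * x          # condition (2): (c_j - pi'A_j) x_j = 0
    holds = np.all(np.abs(u) < tol) and np.all(np.abs(v) < tol)
    J = [j for j in range(A.shape[1]) if abs(c[j] - pi @ A[:, j]) < tol]
    return holds, J

# Tiny made-up instance: min x1 + 2 x2  s.t.  x1 + x2 = 1,  x >= 0.
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x = np.array([1.0, 0.0]); pi = np.array([1.0])     # an optimal primal-dual pair
print(complementary_slackness(A, b, c, x, pi))     # (True, [0])
```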
Comparison of (D) and (DRP)
  (D):    max π'b   s.t. π'A_j ≤ c_j,  j = 1,…,n;   π_i free,  i = 1,…,m.
  (DRP):  max π'b   s.t. π'A_j ≤ 0 for all j in J;   π_i ≤ 1 and otherwise free,  i = 1,…,m.

A primal-dual iteration
Let θ ∈ R, and consider π* = π + θρ. Assume θ is such that π* is feasible in (D). Then π*'b = π'b + θρ'b. By LP duality, ρ'b = ξ_opt > 0 (if ξ_opt were 0, x together with π would already satisfy the complementary slackness conditions). Thus, by choosing θ > 0 we get π*'b > π'b, yielding an improved solution of the dual (D).

For π* to be feasible in (D), it must hold that
  π*'A_j = π'A_j + θρ'A_j ≤ c_j,   j = 1,…,n.
From the definition of (DRP) it holds that ρ'A_j ≤ 0 for all j in J. Thus, if ρ'A_j ≤ 0 for all j, we can choose θ = +∞. But then (D) is unbounded, and hence (P) is infeasible (Theorem 5.1). Thus, we assume that ρ'A_j > 0 for some j not in J.

We are going to allow such a j to "enter the basis" of (D). Let θ_1 be the maximum value of θ such that
  π'A_j + θρ'A_j ≤ c_j   for all j not in J with ρ'A_j > 0,
that is, θ_1 = min { (c_j − π'A_j) / (ρ'A_j) : j not in J, ρ'A_j > 0 }.
For the minimizing j the dual constraint is satisfied at equality, and then by complementary slackness the primal variable x_j may take a positive value.
Define π* = π + θ_1 ρ. Then π*'b = π'b + θ_1 ρ'b > π'b.

The Primal-Dual Algorithm
  Input: a feasible solution π for (D).
  Output: an optimal solution x for (P), if one exists.
  infeasible := false; opt := false;
  while not (infeasible or opt) do
    J := { j : π'A_j = c_j }
    solve (RP), giving an optimal solution x (and the optimal solution ρ of (DRP), e.g. from the final simplex multipliers)
    if ξ_opt = 0 then opt := true
    else if ρ'A_j ≤ 0 for all j then infeasible := true
    else π := π + θ_1 ρ
  end while

Admissible columns
Definition: Given a solution π to the dual (D), let J = { j : π'A_j = c_j }. Then any column A_j with j ∈ J is called an admissible column.
Theorem 5.3: A column which is in the optimal basis of (RP) and admissible in some iteration of the primal-dual algorithm remains admissible at the start of the next iteration.

Proof. If A_j is in the optimal basis of (RP) then, by complementary slackness, d_j − ρ'A_j = 0, where d_j is the cost coefficient of x_j in the objective function of (RP). Since the original variables have zero cost in (RP), d_j = 0, and thus ρ'A_j = 0. This in turn implies that
  π*'A_j = π'A_j + θ_1 ρ'A_j = π'A_j = c_j,
since j was an admissible column. Thus j remains an admissible column.

Consequence
In each iteration, the optimal solution x of (RP) is also a basic feasible solution of the (RP) of the next iteration. Thus subsequent (RP)s can be solved by taking the optimal solution of the previous one as the starting point for the simplex iterations. Moreover, using an anti-cycling rule, the primal-dual algorithm is finite (see Theorem 5.4).
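The loop above can be prototyped directly with a general LP solver. Below is a hedged Python sketch (my own, not the book's pseudocode translated verbatim): it assumes b ≥ 0 (rows can be negated to arrange this), that the given π is dual feasible, and it solves (RP) and (DRP) as explicit LPs with scipy.optimize.linprog; the function name `primal_dual`, the tolerance `tol` and the iteration cap are my choices.

```python
import numpy as np
from scipy.optimize import linprog

def primal_dual(A, b, c, pi, tol=1e-9, max_iter=100):
    """Primal-dual loop for min c'x s.t. Ax = b, x >= 0, assuming b >= 0
    and that pi is feasible for the dual (D)."""
    A, b, c, pi = map(np.asarray, (A, b, c, pi))
    m, n = A.shape
    for _ in range(max_iter):
        J = np.flatnonzero(np.abs(c - pi @ A) < tol)        # admissible columns
        # (RP): min 1'x_a  s.t.  A_J x_J + x_a = b,  x_J >= 0,  x_a >= 0
        rp = linprog(np.concatenate([np.zeros(len(J)), np.ones(m)]),
                     A_eq=np.hstack([A[:, J], np.eye(m)]), b_eq=b,
                     method="highs")
        if rp.fun < tol:                                    # xi_opt = 0: done
            x = np.zeros(n)
            x[J] = rp.x[:len(J)]
            return "optimal", x, pi
        # (DRP): max rho'b  s.t.  rho'A_j <= 0 for j in J,  rho_i <= 1, rho free
        drp = linprog(-b,
                      A_ub=A[:, J].T if len(J) else None,
                      b_ub=np.zeros(len(J)) if len(J) else None,
                      bounds=[(None, 1)] * m, method="highs")
        rho = drp.x
        gain = rho @ A                                      # rho'A_j for every column
        outside = np.ones(n, dtype=bool)
        outside[J] = False                                  # only j not in J matter
        candidates = np.flatnonzero(outside & (gain > tol))
        if candidates.size == 0:                            # rho'A_j <= 0 for all j:
            return "infeasible", None, pi                   # (D) unbounded, (P) infeasible
        theta1 = np.min((c[candidates] - pi @ A[:, candidates]) / gain[candidates])
        pi = pi + theta1 * rho                              # improved dual solution
    return "iteration limit", None, pi

# On the tiny instance used earlier (c >= 0, so pi = 0 is dual feasible):
# primal_dual(np.array([[1., 1.]]), np.array([1.]),
#             np.array([1., 2.]), np.array([0.]))
# returns ("optimal", array([1., 0.]), array([1.]))
```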
A primal-dual method for the shortest path problem
The node-arc incidence matrix A has entries
  a_ik = +1 if arc k leaves node i,
         −1 if arc k enters node i,
          0 otherwise.
Primal (P):
  min  c'f
  s.t. Σ_k a_ik f_k = 0   for all i ≠ s, t,
       Σ_k a_sk f_k = 1,
       f ≥ 0.

Dual (D) of the shortest path problem
  max  π_s
  s.t. π_i − π_j ≤ c_ij   for all arcs (i,j),
       π free,  π_t = 0.
Admissible arcs: J = { arcs (i,j) : π_i − π_j = c_ij }.

Restricted primal (RP) of the shortest path problem
  min  Σ_i x_i^a
  s.t. Σ_{k∈J} a_ik f_k + x_i^a = 0   for all i ≠ s, t,
       Σ_{k∈J} a_sk f_k + x_s^a = 1,
       f_k ≥ 0   for all k,
       f_k = 0   for all k not in J,
       x_i^a ≥ 0   for all i.

Dual of the restricted primal (DRP)
  max  π_s
  s.t. π_i − π_j ≤ 0   for all arcs (i,j) in J,
       π_i ≤ 1   for all i,
       π_t = 0,  π free otherwise.

Solving (DRP)
Obviously π_s ≤ 1, and since we aim to maximize π_s we try π_s = 1. But then all nodes reachable from s by an admissible arc must also have π_i = 1, and this argument applies recursively. Similarly, since π_t = 0, all nodes from which the sink can be reached along admissible arcs must (recursively) have π_i ≤ 0, and they are set to 0.

Optimal solution ρ to (DRP)
[Figure: the optimal ρ has ρ_i = 1 on the nodes that cannot yet reach t along admissible arcs, in particular ρ_s = 1, and ρ_i = 0 on the nodes that can, including ρ_t = 0.]
If the sink can be reached from the source by a path of admissible arcs (so that π_s is forced to 0), then (RP) has optimal value zero and we are done.

Solving (DRP): finding θ_1
Let θ_1 be the maximum value of θ such that π'A_k + θρ'A_k ≤ c_k for all arcs k not in J with ρ'A_k > 0. Thus we must consider the arcs (i,j) with ρ_i − ρ_j = 1, and θ_1 is the minimum of c_ij − (π_i − π_j) over those arcs. (Interpretation below.)

Interpretation of a primal-dual iteration
We assume c_ij > 0 for all arcs (i,j). Then π_i = 0 for all i is feasible in (D), and is selected as the starting solution. In each iteration we select a non-admissible arc from a node i with ρ_i = 1 to a node j with ρ_j = 0. Since arc (i,j) is non-admissible, it must hold that π_i − π_j < c_ij.
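Before the worked example, here is a hedged Python sketch (my own, not code from the slides or [PS82]) of this specialised iteration: ρ is computed by the reachability argument above, θ_1 as the smallest slack of a crossing arc, and the update π ← π + θ_1 ρ is repeated until the sink is reachable along admissible arcs. The graph representation (`nodes`, a dict `arcs` of costs) and the assumption of nonnegative integer costs are mine. Each update lets at least one more node reach t along admissible arcs, which is why the loop terminates; read this way, the method is essentially Dijkstra's algorithm.

```python
def shortest_path_primal_dual(nodes, arcs, s, t):
    """Primal-dual method for shortest s-t paths.
    `arcs` maps (i, j) -> c_ij with nonnegative integer costs, so that
    pi = 0 is dual feasible and tightness can be tested by exact equality.
    Returns the final dual values (pi[s] is the shortest-path distance;
    the path can be traced along admissible arcs), or None if t is
    unreachable from s."""
    pi = {v: 0 for v in nodes}
    while True:
        admissible = {(i, j) for (i, j), cij in arcs.items()
                      if pi[i] - pi[j] == cij}
        # rho_i = 0 for every node that can reach t along admissible arcs
        # (backward search from t); rho_i = 1 for all other nodes.
        reach_t, frontier = {t}, [t]
        while frontier:
            j = frontier.pop()
            for (u, v) in admissible:
                if v == j and u not in reach_t:
                    reach_t.add(u)
                    frontier.append(u)
        if s in reach_t:
            return pi            # xi_opt = 0: pi[s] is the shortest distance
        rho = {v: 0 if v in reach_t else 1 for v in nodes}
        # theta_1: smallest slack c_ij - (pi_i - pi_j) over non-admissible
        # arcs going from the rho = 1 region into the rho = 0 region.
        slacks = [cij - (pi[i] - pi[j]) for (i, j), cij in arcs.items()
                  if rho[i] == 1 and rho[j] == 0]
        if not slacks:
            return None          # no arc can become admissible: (P) infeasible
        theta1 = min(slacks)
        for v in nodes:
            pi[v] += theta1 * rho[v]     # pi <- pi + theta_1 * rho
```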
Example
[Figure: a directed graph with source s, sink t and intermediate nodes 1, 2, 3, 4; the arc costs are shown on the slide.]

Initial dual feasible solution
  π_s = π_1 = π_2 = π_3 = π_4 = π_t = 0.   Admissible arcs: ∅.

Initial solution of (RP)
  x_s^a = 1, x_i^a = 0 for all i ≠ s; there are no variables f_k yet, since there are no admissible columns A_k.

First DRP
  ρ_s = ρ_1 = ρ_2 = ρ_3 = ρ_4 = 1, ρ_t = 0.
  θ_1 = c_3t + π_t − π_3 = 2.

Next solution of (RP)
  x_s^a = 1, x_i^a = 0 for all i ≠ s; the variable f_3t for the admissible column A_(3,t) may now enter, but the primal variables remain unchanged (and will remain unchanged until some arc leaving s becomes admissible).

Next dual feasible solution
  π_s = π_1 = π_2 = π_3 = π_4 = 2, π_t = 0.   Admissible arcs: (3,t).

Next DRP
  ρ_s = ρ_1 = ρ_2 = ρ_4 = 1, ρ_3 = ρ_t = 0.
  θ_1 = c_43 + π_3 − π_4 = 2.

Next dual feasible solution
  π_s = π_1 = π_2 = π_4 = 4, π_3 = 2, π_t = 0.   Admissible arcs: (3,t), (4,3).

Next DRP
  ρ_s = ρ_1 = ρ_2 = 1, ρ_3 = ρ_4 = ρ_t = 0.
  θ_1 = c_24 + π_4 − π_2 = 1 = c_13 + π_3 − π_1 (a tie).

Next dual feasible solution
  π_s = π_1 = π_2 = 5, π_4 = 4, π_3 = 2, π_t = 0.   Admissible arcs: (3,t), (4,3), (1,3), (2,4).

Next DRP
  ρ_s = 1, ρ_1 = ρ_2 = ρ_3 = ρ_4 = ρ_t = 0.
  θ_1 = c_s2 + π_2 − π_s = 1.

Next dual feasible solution
  π_s = 6, π_1 = π_2 = 5, π_4 = 4, π_3 = 2, π_t = 0.   Admissible arcs: (3,t), (4,3), (1,3), (2,4), (s,2).

Final solution of (RP)
  x_i^a = 0 for all i; f_s2 = f_24 = f_43 = f_3t = 1, all other f_ij = 0; objective value 0.
  This f is feasible in the primal (P), and since c'f = π'b it is optimal in (P)!
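The optimality claim on the last slide can be checked directly: the costs of the four arcs on the optimal path are recoverable from the θ_1 computations above (c_3t = 2, c_43 = 2, c_24 = 1, c_s2 = 1), and the final dual value is π_s = 6. The following lines (my own illustration) just add up c'f and compare it with π'b, which here equals π_s.

```python
# Arc costs on the optimal path, read off the theta_1 computations above.
path_costs = {("s", "2"): 1, ("2", "4"): 1, ("4", "3"): 2, ("3", "t"): 2}
flow_cost = sum(path_costs.values())   # c'f for the unit flow f on this path
pi_s = 6                               # final dual value at the source; pi'b = pi_s
assert flow_cost == pi_s               # c'f = pi'b, so f and pi are both optimal
print(flow_cost)                       # 6
```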
Combinatorialization
(P) has an integer cost vector c and a (0,1) right-hand side; (RP) has a (0,1) cost vector and right-hand side b. Therefore in (RP) the complexity comes not from the numbers but from the combinatorics. Similarly, (D) has a (0,1) vector b and an integer right-hand side c, while (DRP) has a (0,1) cost vector and right-hand side. Therefore in (DRP), too, the complexity comes not from the numbers but from the combinatorics. The primal-dual algorithm thus solves a problem with "numerical" complexity by repeatedly solving a problem with only "combinatorial" complexity. This concept, which Papadimitriou calls "combinatorialization", is frequently encountered in combinatorial optimization.

Additional notes from [CUHK] (CS5160, Lecture 20, March 28)

Primal-dual program
If there is a feasible primal solution x and a feasible dual solution y with equal objective values, then both are optimal solutions. The primal-dual method is an algorithm for constructing such a pair of solutions.

Optimality condition
Suppose there is a feasible primal solution x and a feasible dual solution y. How do we check that they are optimal? By weak duality, cx ≥ yAx ≥ yb (the LP formulations appear only as images on the original slides); x and y are both optimal exactly when neither inequality is strict, so the goal is to "avoid strict inequality".

Complementary slackness conditions
Primal complementary slackness condition: x(j) > 0 only if the j-th dual constraint is tight.
Dual complementary slackness condition: y(i) > 0 only if the i-th primal constraint is tight.

Primal-dual method
Start from a feasible dual solution. Search for a feasible primal solution satisfying the complementary slackness conditions; if there is none, improve the objective value of the dual solution, and repeat.

Restricted primal
Given a feasible dual solution y, how do we search for a feasible primal solution x that satisfies the complementary slackness conditions? Formulate this search as an LP itself: if j is not in J (the j-th dual constraint is not tight), then we need x(j) to be zero, and the conditions indexed by I play the corresponding role for the primal constraints (the exact formulas are images on the original slides). If the optimum of this restricted primal is zero, we are done.

Restricted dual
Suppose the objective of the restricted primal is not zero; what do we do? Then we want to find a better dual solution. A nonzero restricted primal means its dual, the restricted dual, has a solution z with positive value. Consider y + εz as the new dual solution: for small enough ε > 0 it is still feasible in the dual program and has a larger objective value.

General framework
[The same flow diagram as at the start: primal, restricted primal, restricted dual, construct a better dual, repeat until a primal solution x is found.]

Bipartite matching
Primal complementary slackness condition: a matched edge must be tight. Hungarian method: start from a feasible vertex cover (the dual solution); find a perfect matching using tight edges; if the restricted primal is nonzero (no such matching exists), consider y − εz, a better dual.

Remarks
The primal-dual method as described is not a polynomial-time method. It reduces the weighted problem to an unweighted one, so that the restricted primal linear program is easier to solve, and often there are combinatorial algorithms to solve it. Many combinatorial algorithms, such as those for max-flow, matching, min-cost flow, shortest path and spanning tree, can be derived within this framework.

Approximation algorithms
How do we adapt the primal-dual method for approximation algorithms? We want to construct a primal feasible solution x and a dual feasible solution y so that cx and by are "close", i.e. we avoid losing too much when relaxing the optimality condition.

Approximate complementary slackness conditions
The primal and dual complementary slackness conditions are relaxed so that tightness is required only up to a multiplicative factor; if x and y satisfy the relaxed conditions, then cx is within that factor of by. This is only a sufficient condition for an approximation guarantee.

Vertex cover
Primal complementary slackness condition: pick only vertices that go tight, i.e. x(v) > 0 only if Σ_{e incident to v} y(e) = cost(v). Just focus on this condition.
Dual complementary slackness condition (relaxed): y(e) > 0 only if at most two endpoints of e are in the cover, which always holds, so this condition imposes nothing. Together the two conditions imply a 2-approximation.

Algorithm (2-approximation for vertex cover)
  Initially x = 0, y = 0.
  While there is an uncovered edge: pick an uncovered edge e and raise y(e) until some vertex goes tight; add all tight vertices to the vertex cover.
  Output the vertex cover x.
Familiar? When every vertex has the same cost, this is the greedy matching 2-approximation. (A sketch of the algorithm in code follows below.)
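A hedged Python sketch (my own, not code from the CUHK slides) of the 2-approximation just described for weighted vertex cover; the names `cost` and `paid`, and the exact-equality test for "tight" (which assumes integer costs), are my choices.

```python
def vertex_cover_primal_dual(vertices, edges, cost):
    """Primal-dual 2-approximation for weighted vertex cover."""
    y = {e: 0 for e in edges}            # dual variable for each edge
    paid = {v: 0 for v in vertices}      # sum of y(e) over edges touching v
    cover = set()
    for (u, v) in edges:
        if u in cover or v in cover:
            continue                     # edge already covered
        # raise y(u, v) until u or v goes tight
        delta = min(cost[u] - paid[u], cost[v] - paid[v])
        y[(u, v)] += delta
        paid[u] += delta
        paid[v] += delta
        for w in (u, v):
            if paid[w] == cost[w]:       # tight vertex enters the cover
                cover.add(w)
    return cover

# Example (made up): a path a-b-c with costs 3, 1, 3; the optimal cover is {b}.
print(vertex_cover_primal_dual(
    ["a", "b", "c"], [("a", "b"), ("b", "c")], {"a": 3, "b": 1, "c": 3}))
```

The factor 2 follows the argument sketched on the slides: every vertex in the cover is tight, so its cost is fully paid by the y(e) of its incident edges, and each edge pays at most two cover vertices, hence the cover's cost is at most twice the dual value, which lower-bounds the optimum.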