Proof of Strong Duality, Complementary Slackness and Marginal Values.

Theorem. Let x* be an optimal solution to the primal and y* be an optimal solution to the dual, where

primal:  max c · x  subject to  Ax ≤ b,  x ≥ 0
dual:    min b · y  subject to  A^T y ≥ c,  y ≥ 0.

Then c · x* = b · y*.
Proof: Let A be an m × n matrix.
We obtain the Revised Simplex Formulas (our dictionaries!) by first writing Ax ≤ b as [A I] [x; x_S] = b, where we have n original variables (typically our decision variables) and m slack variables. Then, considering the variables split into the m basic variables and the n non-basic variables for some column basis B of [A I], we obtain the Revised Simplex Formulas:

x_B = B^{-1} b − B^{-1} A_N x_N
z = c_B^T B^{-1} b + (c_N^T − c_B^T B^{-1} A_N) x_N
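As a quick numeric illustration of these formulas, here is a minimal Python sketch. The LP data, the chosen basis, and all names below are invented for this example only:

    import numpy as np

    # Made-up LP: max 3x1 + 2x2  subject to  x1 + x2 <= 4,  x1 + 3x2 <= 6,  x >= 0
    A = np.array([[1.0, 1.0], [1.0, 3.0]])
    b = np.array([4.0, 6.0])
    c = np.array([3.0, 2.0])

    # Augmented matrix [A I] and cost vector for the variables (x1, x2, s1, s2)
    AI = np.hstack([A, np.eye(2)])
    cost = np.concatenate([c, np.zeros(2)])

    basic = [0, 3]       # assumed basis for this sketch: x1 and the slack s2
    nonbasic = [1, 2]    # non-basic variables: x2 and s1

    B = AI[:, basic]
    AN = AI[:, nonbasic]
    cB, cN = cost[basic], cost[nonbasic]
    Binv = np.linalg.inv(B)

    xB = Binv @ b                  # x_B = B^{-1} b, the basic feasible solution (x_N = 0)
    z = cB @ Binv @ b              # objective value of this basic solution
    zrow = cN - cB @ Binv @ AN     # z-row coefficients of the non-basic variables

    print(xB)    # [4. 2.]  -> x1 = 4, s2 = 2
    print(z)     # 12.0
    print(zrow)  # [-1. -3.] -> all <= 0, so this basis is already optimal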
Assume that the Simplex Method has pivoted to an optimal solution given by a basis B. Thus the current basic feasible solution x has c · x = c · x*, and x is given by x_B = B^{-1} b and x_N = 0. The value of the objective function for x is equal to c_B^T B^{-1} b = c · x = c · x* since x and x* are asserted to be optimal. The Simplex Method terminates if the coefficients in the z row are all ≤ 0 and so we assert

c_N^T − c_B^T B^{-1} A_N ≤ 0.

We can readily assert that

c_B^T − c_B^T B^{-1} B ≤ 0

since of course c_B^T − c_B^T B^{-1} B = 0. But now we have a symmetric expression for all variables, namely for all variables x_i

c_i − c_B^T B^{-1} A_i ≤ 0,

where A_i denotes the column of [A I] indexed by x_i. We regroup the variables into the original variables x and the slack variables x_S to obtain

c^T − c_B^T B^{-1} A ≤ 0

and

0^T − c_B^T B^{-1} I ≤ 0.

If we let

c_B^T B^{-1} = y^T,

we obtain

c^T − c_B^T B^{-1} A ≤ 0 implies c^T ≤ y^T A implies A^T y ≥ c

and

0^T − c_B^T B^{-1} I ≤ 0 implies 0^T ≤ y^T implies y ≥ 0.

This means that y is a feasible solution to the dual. Also we compute

b · y = y^T b = c_B^T B^{-1} b = c · x,

and so by Weak Duality x (as given above) is optimal to the primal and y is optimal to the dual.
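Strong Duality is also easy to check numerically. The following sketch (illustration only; the LP is the same made-up example as above, and scipy is used merely as a convenient solver) solves the primal and the dual separately and confirms that the optimal values agree:

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0], [1.0, 3.0]])
    b = np.array([4.0, 6.0])
    c = np.array([3.0, 2.0])

    # Primal: max c.x s.t. Ax <= b, x >= 0 (linprog minimizes, so negate c)
    primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

    # Dual: min b.y s.t. A^T y >= c, y >= 0 (rewrite A^T y >= c as -A^T y <= -c)
    dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

    print(-primal.fun, dual.fun)   # 12.0 12.0 -> c . x* = b . y*, as the theorem asserts
    print(primal.x, dual.x)        # x* = [4, 0], y* = [3, 0]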
The book has essentially this same standard proof, but the proof above has the virtue of showing the power of the matrix notation. Students will use the Revised Simplex Formulas in the Revised Simplex Method and become familiar with them, both by plugging in numbers and as matrix formulas. We use them below in the result concerning marginal values.
It is useful to apply this theorem with real numbers. In particular, in any final dictionary
(yielding an optimal solution) you may read off the optimal dual solution by reading the negatives
of the coefficients of the slack variables in the z row. This is what we called the ‘magic’ coefficients
in an earlier lecture. That the dual solution or ‘magic’ coefficients behave as promised can be shown
by a theorem of the alternative.
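Continuing the same made-up example (again only an illustrative sketch), the z-row coefficients of the slack variables are 0^T − c_B^T B^{-1} I = −y^T, so negating them recovers the optimal dual solution:

    import numpy as np

    cB = np.array([3.0, 0.0])                  # costs of the basic variables x1, s2
    B = np.array([[1.0, 0.0], [1.0, 1.0]])     # columns of [A I] for x1 and s2

    y = cB @ np.linalg.inv(B)                  # y^T = c_B^T B^{-1}
    slack_zrow = np.zeros(2) - y @ np.eye(2)   # z-row coefficients of the slack variables

    print(y)            # [3. 0.]  -> the optimal dual solution found above
    print(slack_zrow)   # [-3.  0.] -> negate these 'magic' coefficients to read off y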
Theorem (Complementary Slackness). Let x be a feasible solution to the primal and y be a feasible solution to the dual, where

primal:  max c · x  subject to  Ax ≤ b,  x ≥ 0
dual:    min b · y  subject to  A^T y ≥ c,  y ≥ 0.

Then x is optimal to the primal and y is optimal to the dual if and only if the conditions of Complementary Slackness hold:

(b_i − Σ_{j=1}^{n} a_{ij} x_j) y_i = x_{n+i} y_i = 0   for i = 1, 2, . . . , m

and

(Σ_{i=1}^{m} a_{ji} y_i − c_j) x_j = y_{m+j} x_j = 0   for j = 1, 2, . . . , n.
Proof: Since x and y are feasible, we have Ax ≤ b, x ≥ 0, A^T y ≥ c, and y ≥ 0. Now x ≥ 0 and A^T y ≥ c imply x^T A^T y ≥ x^T c, where x^T c = c · x. Also Ax ≤ b and y ≥ 0 imply y^T A x ≤ y^T b, where y^T b = b · y. We have

c · x = x^T c ≤ x^T A^T y = y^T A x ≤ y^T b = b · y,

using the fact that x^T A^T y is a 1 × 1 matrix and so x^T A^T y = (x^T A^T y)^T = y^T A x. Now by Strong Duality, x and y are both optimal to their respective Linear Programs if and only if c · x = b · y, and so if and only if x^T c = x^T A^T y and y^T A x = y^T b, and hence if and only if x · (A^T y − c) = 0 and y · (b − Ax) = 0. We note that

0 = x · (A^T y − c) = Σ_{j=1}^{n} x_j (Σ_{i=1}^{m} a_{ji} y_i − c_j),

and since x_j ≥ 0 and Σ_{i=1}^{m} a_{ji} y_i − c_j ≥ 0 for each j = 1, 2, . . . , n, we deduce that x_j (Σ_{i=1}^{m} a_{ji} y_i − c_j) ≥ 0 for each j = 1, 2, . . . , n. But then each of the n terms in the sum Σ_{j=1}^{n} x_j (Σ_{i=1}^{m} a_{ji} y_i − c_j) is nonnegative and yet the sum is 0, and so we deduce that each term is 0, namely x_j (Σ_{i=1}^{m} a_{ji} y_i − c_j) = 0 for each j = 1, 2, . . . , n, which is the second condition of Complementary Slackness. Similarly we deduce that y · (b − Ax) = 0 if and only if

y_i (b_i − Σ_{j=1}^{n} a_{ij} x_j) = y_i x_{n+i} = 0   for i = 1, 2, . . . , m.
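As a sanity check, here is a small sketch (same invented LP, with x = (4, 0) and y = (3, 0) from the earlier examples) verifying both Complementary Slackness conditions directly:

    import numpy as np

    A = np.array([[1.0, 1.0], [1.0, 3.0]])
    b = np.array([4.0, 6.0])
    c = np.array([3.0, 2.0])
    x = np.array([4.0, 0.0])   # optimal primal solution from the earlier sketch
    y = np.array([3.0, 0.0])   # optimal dual solution from the earlier sketch

    primal_terms = (b - A @ x) * y     # (b_i - sum_j a_ij x_j) * y_i for each i
    dual_terms = (A.T @ y - c) * x     # (sum_i a_ji y_i - c_j) * x_j for each j

    print(primal_terms)  # [0. 0.] -> the first condition holds
    print(dual_terms)    # [0. 0.] -> the second condition holds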
We can readily obtain the marginal values interpretation for the dual variables by using the Revised Simplex Formulas. We say that y_i is the 'marginal value' or 'shadow price' for resource i as given in the ith constraint of the primal

a_{i1} x_1 + a_{i2} x_2 + · · · + a_{in} x_n ≤ b_i.

If we alter b_i by Δb_i, then we expect the objective function to change by about y_i Δb_i. This is made more precise in the following theorem. Note that the marginal values interpretation is one of the reasons that Linear Programming is so useful.
Theorem. Consider the standard primal/dual pair of LPs. Let x* be an optimal solution for the primal with z* = c · x*. Let B be an optimal basis for the primal, so that y* = (c_B^T B^{-1})^T = (y_1*, y_2*, . . . , y_m*) is an optimal solution to the dual. Let b' = (Δ_1, Δ_2, . . . , Δ_m)^T and consider the altered primal as follows:

primal:          max c · x  subject to  Ax ≤ b,  x ≥ 0
altered primal:  max c · x  subject to  Ax ≤ b + b',  x ≥ 0

Then the optimal value of the objective function for the altered primal is ≤ z* + (Σ_{i=1}^{m} y_i* Δ_i), with equality holding when B^{-1}(b + b') ≥ 0.
Proof: We have c · x* = c_B · x_B = c_B^T B^{-1} b. You might as well imagine that in fact x* arises from the optimal basis B (the solution x_B* = B^{-1} b that arises from B is an optimal solution), but in general there can be many different optimal solutions, some not even arising from a basis/dictionary. Given that B is an optimal basis, we know the optimal value of the objective function can be written as z* = c_B^T B^{-1} b.
As in our proof of Strong Duality, we know that y = (c_B^T B^{-1})^T is a feasible solution to the dual, and also b · y = y^T b = z*. As we go from the primal to the altered primal, we note that y is still a feasible solution to the dual of the altered primal, since we have not changed A or c. The objective function value of y in the dual of the altered primal is

(b + b') · y = b · y + b' · y = z* + (Σ_{i=1}^{m} y_i* Δ_i)

and so by Weak Duality any optimal solution to the altered primal can have no larger objective function value.
If in addition we have B^{-1}(b + b') ≥ 0, then we have a basic feasible solution to the altered primal given by x_B = B^{-1}(b + b'), and the value of the objective function in the altered primal is

c_B · x_B = c_B^T B^{-1}(b + b') = c_B^T B^{-1} b + c_B^T B^{-1} b' = z* + Σ_{i=1}^{m} y_i* Δ_i.

Thus we have feasible solutions to the altered primal and to the dual of the altered primal that have the same objective function values, and so by Weak Duality we deduce that they are both optimal.
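The prediction is easy to test numerically. In the running made-up example y_1* = 3, so increasing b_1 by Δ_1 should increase the optimal value by 3Δ_1 as long as B^{-1}(b + b') ≥ 0. A sketch (again using scipy only as a convenient solver):

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0], [1.0, 3.0]])
    b = np.array([4.0, 6.0])
    c = np.array([3.0, 2.0])
    y_star = np.array([3.0, 0.0])    # optimal dual solution from the earlier sketches
    z_star = 12.0                    # optimal primal objective value

    delta = np.array([0.5, 0.0])     # a small change b' to the right-hand side
    altered = linprog(-c, A_ub=A, b_ub=b + delta, bounds=[(0, None)] * 2)

    predicted = z_star + y_star @ delta
    print(predicted, -altered.fun)   # 13.5 13.5 -> the marginal value prediction is exact here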
If there exists a non-degenerate optimal basic feasible solution given by an optimal basis B, then x_B = B^{-1} b > 0. (In matrix notation, we say v > 0 if each entry of the vector v is greater than 0.) You would immediately note that for b' relatively small in magnitude, you would expect that B^{-1}(b + b') ≥ 0. But also the dual solution y* is unique, and hence y* = (c_B^T B^{-1})^T. To see that the dual solution is unique we use Complementary Slackness (as we have done in quiz 3, for example) and find that the m basic variables being strictly positive yields m equations for the values of y. These equations are the jth equation of the dual if the variable x_j* > 0, and the equation y_i = 0 if the variable x_{n+i}* > 0. If you think of this in matrix form, we have the equations

B^T y = c_B

and hence the unique solution is y^T = c_B^T B^{-1}, as stated. If we do have degeneracy, then there may be other optimal dual solutions y*, and then the predictive value of the marginal values is diminished.
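For the running made-up example (sketch only), the basis consisting of x_1 and s_2 is non-degenerate, and solving B^T y = c_B does pin down the same dual solution as before:

    import numpy as np

    B = np.array([[1.0, 0.0], [1.0, 1.0]])   # columns of [A I] for the basic variables x1, s2
    cB = np.array([3.0, 0.0])

    # x1 > 0 forces the first dual constraint to be tight; s2 > 0 forces y2 = 0.
    # Together these equations are B^T y = c_B, a square nonsingular system:
    y = np.linalg.solve(B.T, cB)
    print(y)   # [3. 0.] -> the unique optimal dual solution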
I would note that for a vector b' large enough in magnitude and pointing in a certain direction, it is quite possible that the altered primal becomes infeasible.