Duality for linear programming
Illustration of the notion
• Consider an enterprise
  producing r items:
    f_k = demand for item k, k = 1,…, r
  using s components:
    h_l = availability of component l, l = 1,…, s
• The enterprise can use any of n processes (activities):
    x_j = level of use of process j, j = 1,…, n
    c_j = unit cost of using process j, j = 1,…, n
• Process j
  produces e_kj units of item k, k = 1,…, r
  uses g_lj units of component l, l = 1,…, s
  for each unit of its use.
Illustration of the notion
• The enterprise problem: determine the level of each process for satisfying
  the demands without exceeding the availabilities, in order to minimize the
  total production cost.
• Model

    min z = Σ_{j=1}^n c_j x_j
    s.t.  Σ_{j=1}^n e_kj x_j ≥ f_k    k = 1, 2,…, r (demands)
          Σ_{j=1}^n g_lj x_j ≤ h_l    l = 1, 2,…, s (availabilities)
          x_j ≥ 0                     j = 1, 2,…, n
Illustration of the notion
• A businessman makes an offer to buy all the components and to sell the
  items required by the enterprise to satisfy the demands.
• He must state proper unit prices (to be determined) to make the offer
  interesting for the enterprise:
    v_k = item unit price, k = 1, 2,…, r
    w_l = component unit price, l = 1, 2,…, s.
• Each price is attached to a constraint of the enterprise model:

    min z = Σ_{j=1}^n c_j x_j
    s.t.  Σ_{j=1}^n e_kj x_j ≥ f_k    k = 1, 2,…, r (demands)        ← v_k
          Σ_{j=1}^n g_lj x_j ≤ h_l    l = 1, 2,…, s (availabilities)  ← w_l
          x_j ≥ 0                     j = 1, 2,…, n
Illustration of the notion
• The businessman must state proper unit prices (to be determined) to make
  the offer interesting for the enterprise.
• To complete its analysis, the enterprise must verify that, for each process
  j, the cost of doing business with him is less than or equal to the cost of
  using process j. But the cost of doing business with him is equal to the
  difference between buying the items required and selling the components
  unused in order to simulate using one unit of process j (c_j):

    Σ_{k=1}^r e_kj v_k  −  Σ_{l=1}^s g_lj w_l  ≤  c_j
    (buying the items)     (selling the components)
Illustration of the notion
• For each process j, the prices must thus satisfy

    Σ_{k=1}^r e_kj v_k  −  Σ_{l=1}^s g_lj w_l  ≤  c_j
    (buying the items)     (selling the components)

• The businessman problem is to maximize his profit while keeping the prices
  competitive for the enterprise:

    max p = Σ_{k=1}^r f_k v_k − Σ_{l=1}^s h_l w_l
    s.t.  Σ_{k=1}^r e_kj v_k − Σ_{l=1}^s g_lj w_l ≤ c_j    j = 1, 2,…, n
          v_k ≥ 0    k = 1, 2,…, r
          w_l ≥ 0    l = 1, 2,…, s
Illustration of the notion
• The enterprise problem: multiply the availability constraints by −1, so that
  all the constraints are of the same (≥) type:

    min z = Σ_{j=1}^n c_j x_j
    s.t.   Σ_{j=1}^n e_kj x_j ≥ f_k     k = 1, 2,…, r (demands)
          −Σ_{j=1}^n g_lj x_j ≥ −h_l    l = 1, 2,…, s (availabilities)
           x_j ≥ 0                      j = 1, 2,…, n
In matrix notation, let E = [e_kj] be the r×n matrix whose column j is
(e_1j,…, e_rj)^T, and G = [g_lj] the s×n matrix whose column j is
(g_1j,…, g_sj)^T.

Enterprise problem

    min z = Σ_{j=1}^n c_j x_j
    s.t.   Σ_{j=1}^n e_kj x_j ≥ f_k     k = 1, 2,…, r (demands)
          −Σ_{j=1}^n g_lj x_j ≥ −h_l    l = 1, 2,…, s (availabilities)
           x_j ≥ 0                      j = 1, 2,…, n

Businessman problem

    max p = Σ_{k=1}^r f_k v_k − Σ_{l=1}^s h_l w_l
    s.t.  Σ_{k=1}^r e_kj v_k − Σ_{l=1}^s g_lj w_l ≤ c_j    j = 1, 2,…, n
          v_k ≥ 0    k = 1, 2,…, r
          w_l ≥ 0    l = 1, 2,…, s

Note that the constraint matrix of the businessman problem is built from the
transposes E^T (n×r) and G^T (n×s).
Primal

    min z = Σ_{j=1}^n c_j x_j
    s.t.   Σ_{j=1}^n e_kj x_j ≥ f_k     k = 1, 2,…, r (demands)
          −Σ_{j=1}^n g_lj x_j ≥ −h_l    l = 1, 2,…, s (availabilities)
           x_j ≥ 0                      j = 1, 2,…, n

or, in matrix form,

    min z = c^T x
    s.t.  [ E; −G ] x ≥ [ f; −h ]
          x ≥ 0

i.e., with A = [ E; −G ] and b = [ f; −h ],

    min c^T x
    s.t.  A x ≥ b
          x ≥ 0

Dual

    max p = Σ_{k=1}^r f_k v_k − Σ_{l=1}^s h_l w_l
    s.t.  Σ_{k=1}^r e_kj v_k − Σ_{l=1}^s g_lj w_l ≤ c_j    j = 1, 2,…, n
          v_k ≥ 0,  k = 1, 2,…, r;   w_l ≥ 0,  l = 1, 2,…, s

or, in matrix form with y = [ v; w ],

    max p = [ f^T  −h^T ] [ v; w ]
    s.t.  [ E^T  −G^T ] [ v; w ] ≤ c
          v, w ≥ 0

i.e.

    max b^T y
    s.t.  A^T y ≤ c
          y ≥ 0
Primal dual problems
Linear programming problem specified with equalities:

Primal problem:
    min c^T x
    s.t.  A x = b
          x ≥ 0

Dual problem:
    max b^T y
    s.t.  A^T y ≤ c
    (y unrestricted in sign)
Linear programming in standard form

Primal problem:
    min c^T x
    s.t.  A x ≥ b
          x ≥ 0

Dual problem:
    max b^T y
    s.t.  A^T y ≤ c
          y ≥ 0

To derive this dual, write the primal in standard form with slack variables s:

    min c^T x + 0^T s
    s.t.  A x − I s = b
          x ≥ 0, s ≥ 0

The dual of this equality-form problem is

    max b^T y
    s.t.  [ A^T; −I ] y ≤ [ c; 0 ]

i.e.

    max b^T y
    s.t.  A^T y ≤ c
          −I y ≤ 0

i.e.

    max b^T y
    s.t.  A^T y ≤ c
          y ≥ 0
Duality theorems
• It is easy to show that we can move from one pair of primal-dual problems
  to the other.
• It is also easy to show that the dual of the dual problem is the primal
  problem.
• Thus we prove the duality theorems using the pair where the primal is in
  standard form:

Primal:
    min c^T x
    s.t.  A x = b
          x ≥ 0

Dual:
    max b^T y
    s.t.  A^T y ≤ c
Duality theorems
• Weak duality theorem
  If x ∈ {x : Ax = b, x ≥ 0} (i.e., x is feasible for the primal problem) and
  if y ∈ {y : A^T y ≤ c} (i.e., y is feasible for the dual problem), then

    b^T y ≤ c^T x.

  Proof. Indeed, b^T y = x^T A^T y ≤ x^T c = c^T x,
  since A^T y ≤ c and x ≥ 0.
Duality theorems
• Corollary. If x* ∈ {x : Ax = b, x ≥ 0} and y* ∈ {y : A^T y ≤ c}, and if
  b^T y* = c^T x*, then x* and y* are optimal solutions for the primal and
  dual problems, respectively.

  Proof. It follows from the weak duality theorem that for any feasible
  solution x of the primal problem

    c^T x ≥ b^T y* = c^T x*.

  Consequently x* is an optimal solution of the primal problem.
  We can show the optimality of y* for the dual problem using a similar
  proof.
Duality theorems
• Strong duality theorem. If one of the two problems (primal or dual) has a
  finite-value optimal solution, then the other problem has the same property,
  and the optimal values of the two problems are equal. If one of the two
  problems is unbounded, then the feasible domain of the other problem is
  empty.

  Proof. The second part of the theorem follows directly from the weak
  duality theorem. Indeed, suppose that the primal problem is unbounded
  below, so that c^T x → −∞. For contradiction, suppose that the dual
  problem is feasible. Then there would exist a solution y ∈ {y : A^T y ≤ c},
  and from the weak duality theorem it would follow that b^T y ≤ c^T x; i.e.,
  b^T y would be a lower bound on the value of the primal objective function
  c^T x, a contradiction.
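The second statement can be observed on a tiny made-up instance (the instance and the use of scipy are illustrative assumptions): the primal min −x₁ s.t. x₁ − x₂ = 0, x ≥ 0 is unbounded below (take x₁ = x₂ → ∞), and its dual max 0·y s.t. y ≤ −1, −y ≤ 0 has an empty feasible domain.

```python
from scipy.optimize import linprog

# Primal: min -x1  s.t.  x1 - x2 = 0, x >= 0  (unbounded below).
res_p = linprog([-1.0, 0.0], A_eq=[[1.0, -1.0]], b_eq=[0.0],
                bounds=[(0, None), (0, None)], method="highs")

# Dual: max 0*y  s.t.  A^T y <= c, i.e.  y <= -1 and -y <= 0  (infeasible).
res_d = linprog([0.0], A_ub=[[1.0], [-1.0]], b_ub=[-1.0, 0.0],
                bounds=[(None, None)], method="highs")

print(res_p.status, res_d.status)  # scipy status codes: 3 = unbounded, 2 = infeasible
```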
Recall: the simplex multipliers
Denote by π ∈ R^m the vector specified by

    π^T = c_B^T B^{-1}.

Then

    c̄^T = c^T − c_B^T B^{-1} A

i.e.

    c̄^T = c^T − π^T A

or

    [c̄_1,…, c̄_n] = [c_1,…, c_n] − π^T [a_1,…, a_n]
    c̄_j = c_j − π^T a_j

where a_j denotes the jth column of the constraint matrix A.

π is the simplex multipliers vector associated with the basis B.
The vector π has one element associated with each row (constraint) of the
tableau.
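These formulas translate directly into a few lines of numpy; the data and basis below are a made-up example, not from the slides. π is obtained by solving B^T π = c_B rather than by inverting B.

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0]])
c = np.array([1.0, 2.0, 4.0])
basis = [0, 1]                      # indices of the columns forming the basis B

B = A[:, basis]
c_B = c[basis]
# pi^T = c_B^T B^{-1}  <=>  B^T pi = c_B
pi = np.linalg.solve(B.T, c_B)
c_bar = c - A.T @ pi                # relative costs: c_bar_j = c_j - pi^T a_j

print(pi, c_bar)
```

As expected, the relative costs of the basic variables come out as 0.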
Duality theorems
To prove the first part of the theorem, suppose that x* is an optimal
solution of the primal problem with value z*.
Let x_{j_1}, x_{j_2},…, x_{j_m} be the basic variables.
Let c_B = [c_{j_1}, c_{j_2},…, c_{j_m}]^T, and let π be the simplex
multipliers associated with the optimal basis. Recall that the relative costs
of the variables are specified as follows:

    c̄_j = c_j − π^T a_j    j = 1, 2,…, n

where a_j denotes the jth column of the matrix A.
The basic optimal solution satisfies the optimality criterion

    c̄_j = c_j − π^T a_j ≥ 0    j = 1, 2,…, n.

Consequently

    π^T a_j ≤ c_j    j = 1, 2,…, n.
Duality theorems
Since the basic optimal solution satisfies

    c̄_j = c_j − π^T a_j ≥ 0    j = 1, 2,…, n,

we have

    π^T a_j ≤ c_j    j = 1, 2,…, n

or

    a_j^T π ≤ c_j    j = 1, 2,…, n,

and in matrix format these relations read

    A^T π ≤ c.

This implies that

    π ∈ {y : A^T y ≤ c};

i.e., π is a feasible solution of the dual problem.
Duality theorems
Determine the value of the dual objective function for the dual feasible
solution π. Recall that

    π = (B^{-1})^T c_B.

It follows that

    b^T π = b^T (B^{-1})^T c_B = (B^{-1} b)^T c_B = (x_B*)^T c_B = z*.

Consequently, it follows from the corollary of the weak duality theorem that
π is an optimal solution of the dual problem, and that

    π^T b = z*.
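The conclusion of the proof, equal optimal values, can be checked numerically on a small instance; A, b, c below are made up, and linprog stands in for the simplex method (both are illustrative assumptions, not part of the slides).

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data for the standard-form primal-dual pair.
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])
c = np.array([1.0, 2.0, 1.0])

# Primal: min c'x  s.t.  Ax = b, x >= 0.
res_p = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")

# Dual: max b'y  s.t.  A'y <= c, y free  (passed as min -b'y).
res_d = linprog(-b, A_ub=A.T, b_ub=c,
                bounds=[(None, None)] * 2, method="highs")

z_star = res_p.fun          # optimal primal value
p_star = -res_d.fun         # optimal dual value
print(z_star, p_star)
```

Both values coincide, as the strong duality theorem asserts.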
Complementary slackness theory
• We now introduce new necessary and sufficient conditions for a pair of
  feasible solutions of the primal and of the dual to be optimal for each of
  these problems.
• Consider first the following pair of primal-dual problems:

Primal:
    min c^T x
    s.t.  A x = b
          x ≥ 0

Dual:
    max b^T y
    s.t.  A^T y ≤ c
Complementary slackness theory
• Complementary slackness theorem 1
  Let x and y be feasible solutions for the primal and the dual, respectively.
  Then x and y are optimal solutions for these problems if and only if, for
  all j = 1, 2,…, n,

    (i)  x_j > 0  ⇒  a_j^T y = c_j
    (ii) a_j^T y < c_j  ⇒  x_j = 0.

  Proof. (⇐) First we prove the sufficiency of the conditions. Assume that
  conditions (i) and (ii) are satisfied for all j = 1, 2,…, n. Then

    x_j [a_j^T y − c_j] = 0    j = 1, 2,…, n.

  Hence

    Σ_{j=1}^n x_j [a_j^T y − c_j] = 0.
Complementary slackness theory
  Hence

    0 = Σ_{j=1}^n x_j [a_j^T y − c_j] = Σ_{j=1}^n x_j a_j^T y − Σ_{j=1}^n x_j c_j.

  But

    Σ_{j=1}^n x_j a_j^T y = (x_1 a_1^T + x_2 a_2^T + … + x_n a_n^T) y
                          = [x_1, x_2,…, x_n] [a_1^T; a_2^T; …; a_n^T] y
                          = x^T A^T y.

  Consequently

    0 = Σ_{j=1}^n x_j a_j^T y − Σ_{j=1}^n x_j c_j = x^T A^T y − c^T x = b^T y − c^T x,

  i.e.

    b^T y = c^T x,

  and the corollary of the weak duality theorem implies that x and y are
  optimal solutions for the primal and the dual problems, respectively.
Complementary slackness theory
  (⇒) Now we prove the necessity of the conditions. Suppose that the
  solutions x and y are optimal solutions for the primal and the dual
  problems, respectively, so that

    b^T y = c^T x.

  Then, referring to the first part of the proof,

    Σ_{j=1}^n x_j [a_j^T y − c_j] = Σ_{j=1}^n x_j a_j^T y − Σ_{j=1}^n x_j c_j
                                  = x^T A^T y − c^T x = b^T y − c^T x = 0.

  Since

    x_j ≥ 0 and a_j^T y ≤ c_j,    j = 1, 2,…, n,

  each term of the sum is nonpositive; since the sum is zero, it follows that

    x_j [a_j^T y − c_j] = 0    j = 1, 2,…, n.

  The proof is completed.
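The theorem can be observed numerically: solve the primal and the dual of a small instance and check that x_j (a_j^T y − c_j) vanishes for every j. The data and the use of linprog are illustrative assumptions, not part of the slides.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative instance of the standard-form pair.
A = np.array([[1.0, 1.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([4.0, 2.0])
c = np.array([1.0, 3.0, 2.0])

x = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs").x
y = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2, method="highs").x

slack = A.T @ y - c            # a_j^T y - c_j <= 0 for a dual-feasible y
comp = x * slack               # complementary slackness: x_j (a_j^T y - c_j) = 0
print(x, y, comp)
```

For each j, either the primal variable is zero or the dual constraint is tight, exactly conditions (i) and (ii).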
Complementary slackness theory
• Now consider the other pair of primal-dual problems:

Primal:
    min c^T x
    s.t.  A x ≥ b
          x ≥ 0

Dual:
    max b^T y
    s.t.  A^T y ≤ c
          y ≥ 0

• Complementary slackness theorem 2
  Let x and y be feasible solutions for the primal and dual problems,
  respectively. Then x and y are optimal solutions of these problems if and
  only if
  for all j = 1, 2,…, n,
    (i)  x_j > 0  ⇒  a_j^T y = c_j
    (ii) a_j^T y < c_j  ⇒  x_j = 0
  and for all i = 1, 2,…, m,
    (iii) a_i x > b_i  ⇒  y_i = 0
    (iv)  y_i > 0  ⇒  a_i x = b_i
  where a_i denotes the ith row of A.
Complementary slackness theory
  Proof. This theorem is in fact a corollary of complementary slackness
  theorem 1. Transform the primal problem into standard form using the slack
  variables s_i, i = 1, 2,…, m:

    min c^T x                        min c^T x + 0^T s
    s.t.  A x ≥ b            →       s.t.  A x − I s = b
          x ≥ 0                            x ≥ 0, s ≥ 0

  The dual of the primal problem in standard form is

    max b^T y
    s.t.  [ A^T; −I ] y ≤ [ c; 0 ]

  i.e.

    max b^T y
    s.t.  A^T y ≤ c
          −I y ≤ 0  (that is, y ≥ 0).
Complementary slackness theory
  Apply the result of the preceding theorem to this pair of primal-dual
  problems:

    min c^T x + 0^T s                max b^T y
    s.t.  A x − I s = b              s.t.  A^T y ≤ c
          x ≥ 0, s ≥ 0                     −I y ≤ 0

  For j = 1, 2,…, n:
    (i)  x_j > 0  ⇒  a_j^T y = c_j
    (ii) a_j^T y < c_j  ⇒  x_j = 0
  and for i = 1, 2,…, m (the slack s_i has cost 0, and its column in the
  constraint matrix is the negative of the ith unit vector):
    (iii) s_i > 0  ⇒  y_i = 0
    (iv)  y_i > 0  ⇒  s_i = 0.
Complementary slackness theory
  For j = 1, 2,…, n:
    (i)  x_j > 0  ⇒  a_j^T y = c_j
    (ii) a_j^T y < c_j  ⇒  x_j = 0
  and for i = 1, 2,…, m:
    (iii) s_i > 0  ⇒  y_i = 0
    (iv)  y_i > 0  ⇒  s_i = 0.
  But s_i = a_i x − b_i, and then conditions (iii) and (iv) become
    (iii) a_i x > b_i  ⇒  y_i = 0
    (iv)  y_i > 0  ⇒  a_i x = b_i.
Dual simplex algorithm
• The dual simplex method is an iterative procedure to solve a linear
  programming problem in standard form:

    min z = c_1 x_1 + c_2 x_2 + … + c_n x_n
    s.t.  a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1
          a_21 x_1 + a_22 x_2 + … + a_2n x_n = b_2
          ⋮
          a_m1 x_1 + a_m2 x_2 + … + a_mn x_n = b_m
          x_j ≥ 0    j = 1, 2,…, n
Dual simplex algorithm
• At each iteration, a basic infeasible solution of the problem is available,
  except at the last iteration, while the relative costs of all variables are
  non negative.
• Example

    min z = 3/2 u + 1/2 h − 27
    s.t.  x + 1/4 u − 1/4 h = −6/4
              −1/4 u + p + 3/4 h = 15/2
          y + 1/12 u − 5/12 h = 13/2
          x, y, u, p, h ≥ 0

  The basic variables are x, p and y; the basic solution
  (x, p, y) = (−6/4, 15/2, 13/2) is infeasible, since x < 0.
Dual simplex algorithm
Analyse one iteration of the dual simplex algorithm, and suppose that the
current tableau satisfies

    c̄_j ≥ 0       j = 1, 2,…, n
    c̄_{j_i} = 0    i = 1, 2,…, m (the relative costs of the basic variables).

Leaving criterion
If b̄_i ≥ 0 for all i = 1, 2,…, m, then the solution is feasible and (since
c̄_j ≥ 0 for all j) optimal. The algorithm stops.
Leaving criterion
Otherwise, let b̄_r = min_{i=1,…,m} b̄_i < 0. If ā_rj ≥ 0 for all
j = 1, 2,…, n, then the feasible domain is empty. Indeed, since

    Σ_{j=1}^n ā_rj x_j ≥ 0  for all x ≥ 0,  and  b̄_r < 0,

it is not possible to find x ≥ 0 such that

    Σ_{j=1}^n ā_rj x_j = b̄_r.
Leaving criterion
Otherwise, let b̄_r = min_{i=1,…,m} b̄_i < 0. The basic variable x_{j_r} is
the leaving variable, and the pivot will be completed on an element of the
rth row.
Entering criterion
We select the entering variable x_s, with ā_rs < 0, in such a way that
  i)  the value of the leaving variable x_{j_r} increases when the value of
      x_s increases;
  ii) the relative costs of all the variables remain non negative when the
      pivot on ā_rs is completed to modify the tableau.
(When x_s increases from 0, each basic variable takes the value
b̄_i − ā_is x_s; in particular x_{j_r} = b̄_r − ā_rs x_s, which increases
since ā_rs < 0.)
Entering criterion
After the pivot on ā_rs, row r of the tableau becomes

    ā_r1/ā_rs,  ā_r2/ā_rs,  …,  1 (in column s),  …,  ā_rn/ā_rs  |  b̄_r/ā_rs

and the relative costs become

    c̄_j − (ā_rj/ā_rs) c̄_s    j = 1, 2,…, n.

• If ā_rj ≥ 0, then the value of c̄_j can only increase, since c̄_s ≥ 0 and
  ā_rs < 0.
• For all j such that ā_rj < 0, we have to enforce the non negativity of the
  relative cost by properly selecting the pivot element ā_rs.
Entering criterion
For all j such that ā_rj < 0, we have to enforce the non negativity of the
relative cost; i.e.,

    c̄_j − (ā_rj/ā_rs) c̄_s ≥ 0    for all j such that ā_rj < 0

⇔   c̄_j/ā_rj − c̄_s/ā_rs ≤ 0     for all j such that ā_rj < 0
    (dividing by ā_rj < 0 reverses the inequality)

⇔   c̄_j/ā_rj ≤ c̄_s/ā_rs        for all j such that ā_rj < 0.

Then the index s of the entering variable is such that

    c̄_s/ā_rs = max_{j=1,2,…,n} { c̄_j/ā_rj : ā_rj < 0 }

or, equivalently,

    |c̄_s/ā_rs| = min_{j=1,2,…,n} { |c̄_j/ā_rj| : ā_rj < 0 }.
Pivot
• To obtain the simplex tableau associated with the new basis, where the
  entering variable x_s replaces the leaving variable x_{j_r}, we complete the
  pivot on the element ā_rs < 0.

Example (continued)
• x is the leaving variable, and consequently the pivot is completed in the
  first row of the tableau.
• h is the entering variable, and consequently the pivot is completed on the
  element −1/4.
• After pivoting, all the right-hand sides of the tableau are non negative:
  this feasible solution is optimal.
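The two criteria can be put together into a tiny tableau implementation and run on the example above. This is a teaching sketch, not a production solver, and the tableau entries (in particular the signs, which did not survive extraction) are reconstructed from the pivot just described, so treat the numbers as illustrative.

```python
import numpy as np

def dual_simplex(T, basis):
    # T: tableau rows [A | b] for the m constraints, then a last row
    # [c_bar | -z].  Requires c_bar >= 0 (dual feasibility) on entry.
    m = T.shape[0] - 1
    while True:
        b = T[:m, -1]
        r = int(np.argmin(b))                 # leaving criterion: most negative b_r
        if b[r] >= -1e-9:
            return T, basis                   # feasible, hence optimal
        row = T[r, :-1]
        if np.all(row >= -1e-9):
            raise ValueError("empty feasible domain")
        # entering criterion: max of c_bar_j / a_rj over a_rj < 0
        cand = np.flatnonzero(row < -1e-9)
        s = cand[np.argmax(T[m, cand] / row[cand])]
        T[r] /= T[r, s]                       # pivot on a_rs < 0
        for i in range(m + 1):
            if i != r:
                T[i] -= T[i, s] * T[r]
        basis[r] = s

# Reconstructed tableau of the example (columns x, y, u, p, h; basics x, p, y).
T = np.array([
    [1.0, 0.0,  1/4,  0.0, -1/4,  -6/4],
    [0.0, 0.0, -1/4,  1.0,  3/4,  15/2],
    [0.0, 1.0, 1/12,  0.0, -5/12, 13/2],
    [0.0, 0.0,  3/2,  0.0,  1/2,  27.0],   # c_bar row; last entry is -z (z = -27)
])
T, basis = dual_simplex(T, [0, 3, 1])
print(basis, T[:3, -1], -T[3, -1])   # final basis, basic values, objective z
```

A single pivot (h enters, x leaves, pivot on −1/4) already yields non negative right-hand sides, so the algorithm stops with a feasible, hence optimal, solution.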
Convergence when the problem is non degenerate
• Non degeneracy assumption:
  the relative costs of the non basic variables are positive at each
  iteration.
• Theorem: Consider a linear programming problem in standard form

    min z = c^T x
    s.t.  A x = b
          x ≥ 0

  with c, x ∈ R^n, b ∈ R^m, and A an m×n matrix. If the matrix A is of full
  rank, and if the non degeneracy assumption is verified, then the dual
  simplex algorithm terminates in a finite number of iterations.
• Proof:
  Since the rank of the matrix A is equal to m, each basic solution includes
  m basic variables. But there is a finite number of ways to select m columns
  among the n columns of A to specify an m×m submatrix of A:

    (n choose m) = n! / (m! (n − m)!).

  The infeasible bases of A are a subset of these, so n!/(m!(n − m)!) is an
  upper bound on the number of infeasible bases of A.
• The influence of pivoting on the objective function during an iteration of
  the dual simplex: dividing row r by ā_rs, and then subtracting c̄_s times
  the resulting row from the z-row, yields

    z′ = z + Δz = z + c̄_s (b̄_r / ā_rs) > z,

  since b̄_r < 0, ā_rs < 0, and c̄_s > 0 under the non degeneracy assumption.
  Then z′ > z, and the value of the objective function increases strictly at
  each iteration.
  Consequently, the same basic infeasible solution cannot repeat during the
  completion of the dual simplex algorithm.
  Since the number of basic infeasible solutions is bounded, it follows that
  the dual simplex algorithm must be completed in a finite number of
  iterations.
Comparing the (primal) simplex alg. and the dual simplex alg.

Simplex alg.:
• Search in the feasible domain.
• Search for an entering variable to reduce the value of the objective
  function.
• Search for a leaving variable preserving the feasibility of the new
  solution.
• Stop when an optimal solution is found, or when the problem is not bounded
  below.

Dual simplex alg.:
• Search outside the feasible domain.
• Search for a leaving variable to eliminate a negative basic variable.
• Search for an entering variable preserving the non negativity of the
  relative costs.
• Stop when the solution becomes feasible, or when the problem is not
  feasible.