Sensitivity analysis in linear semi-infinite
programming via partitions
M.A. Goberna∗, T. Terlaky†, and M.I. Todorov‡
December 2006
Abstract
This paper provides sufficient conditions for the optimal value function of a given linear semi-infinite programming problem to depend
linearly on the size of the perturbations, when these perturbations are
directional, involve either the cost coefficients or the right-hand-side
function or both, and they are sufficiently small. Two kinds of partitions are considered. The first one concerns the effective domain of
the optimal value as a function of the cost coefficients, and consists
of maximal regions on which this value function is linear. The second
class of partitions considered in the paper concerns the index set of
the constraints through a suitable extension of the concept of optimal
partition from ordinary to semi-infinite linear programming. These
partitions provide convex sets, in particular segments, on which the
optimal value is a linear function of the size of the perturbations, for
the three types of perturbations considered in this paper.
Key words Sensitivity analysis, linear semi-infinite programming,
linear programming, optimal value function.
∗
Dept. of Statistics and Operations Research, Alicante University, 03071 Alicante,
Spain. E-Mail: mgoberna@ua.es. Research supported by MEC and FEDER, Grant
MTM2005-08572-C03-01.
†
Dept. of Computing and Software, McMaster University, Hamilton, ON, Canada. E-Mail: terlaky@mcmaster.ca. Research partially supported by NSERC, MITACS and the
Canada Research Chair Program.
‡
Dept. of Physics and Mathematics, UDLA, 72820 San Andrés Cholula, Puebla, Mexico. On leave from IMI-BAS, Sofia, Bulgaria. E-Mail: maxim.todorov@udlap.mx. Research partially supported by CONACyT of Mexico, Grant 44003.
1 Introduction
Given a linear semi-infinite programming (LSIP) problem and a perturbation
direction of the cost vector and/or the right-hand-side (RHS) function, we
give conditions guaranteeing the linearity of the optimal value function with
respect to the size of the perturbation provided this size is sufficiently small.
The preceding works are, first, a stream of papers on sensitivity analysis in
ordinary and parametric linear programming (LP) from an optimal partition
perspective ([1], [2], [4], [10], [6], [7], [11], [12], [13], [14], [15], [16], [17], [18])
and, second, the recent paper [8], where conditions are given for the linearity
(not only on segments) of the optimal value function of an LSIP problem with
respect to (non-simultaneous) perturbations of the cost vector or the RHS
function from a duality perspective.
We consider given a vector c ∈ Rn , two (possibly infinite) sets of indices,
U and V , such that U ∩ V = ∅ and U ≠ ∅, and two functions a : T → Rn
and b : T → R, where T := U ∪ V . We associate with the triple (a, b, c) ∈
(Rn )T × RT × Rn (the data) a primal nominal problem in Rn ,
P : Inf c′x
s.t. a′t x ≥ bt , t ∈ U,
a′t x = bt , t ∈ V,
which is assumed to be consistent, and its corresponding dual nominal problem in R(T ) (the linear space of generalized finite sequences, i.e., the functions
λ : T → R such that λt = 0 for all t ∈ T except maybe for a finite number
of indices),
D : Sup Σ_{t∈T} λt bt
s.t. Σ_{t∈T} λt at = c,
λt ≥ 0, t ∈ U.
These problems are called bounded when their optimal values, denoted by
v P and v D , are finite. In contrast with LP, in LSIP the boundedness of both
problems does not imply their solvability and v P = v D . We denote by F and
F ∗ (by Λ and Λ∗ ) the feasible and the optimal sets of P (of D, respectively).
We assume throughout that ∅ ≠ F ≠ Rn .
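When |T| is finite, the pair P − D above is an ordinary LP dual pair, for which boundedness does imply solvability and v P = v D. A minimal numerical sketch of this finite case (the data A, b, c are hypothetical, chosen only for illustration), assuming SciPy's HiGHS-based linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Finite-T instance of the pair P-D: Inf c'x s.t. a_t'x >= b_t, t in U = {0,1,2},
# with V empty. Hypothetical data, for illustration only.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # rows are a_t'
b = np.array([0.0, 0.0, 1.0])                         # b_t
c = np.array([1.0, 2.0])

# linprog expects A_ub x <= b_ub, so the >= constraints are sign-flipped.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2, method="highs")
x = res.x
lam = -res.ineqlin.marginals    # dual multipliers lambda_t >= 0

vP = c @ x                      # primal optimal value
vD = lam @ b                    # dual optimal value
print(vP, vD)                   # equal: no duality gap in LP
```

In the semi-infinite case (|T| infinite) this coincidence may fail, which is exactly what the conditions recalled below (closedness of K, c ∈ rint M) are for.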
If we replace c by z ∈ Rn in P and D we get parametric LSIP problems
whose optimal value depends on z. These optimal value functions, from Rn
to R̄ = R ∪ {±∞} , are concave, proper and (positively) homogeneous (a
function f : Rn → R̄ is called homogeneous if f (λz) = λf (z) for all z ∈ Rn
and λ > 0).
The size of the perturbations of c can be measured through the Euclidean
norm in Rn , ‖·‖ , with associated distance d. Concerning the perturbations of
b : T → R, we consider the linear space RT equipped with the pseudometric
δ (f, g) := supt∈T |f (t) − g (t)|, for f , g ∈ RT (we may have δ (f, g) = +∞).
The zero-vector in RT is denoted by 0T .
The canonical basis, the zero-vector and the open unit ball in Rn will be
denoted by {e1 , ..., en }, 0n and B (0n ; 1), respectively. For any set X ≠ ∅,
we denote by |X|, cl X, int X, rint X, conv X, cone X, aff X, span X and
X 0 the cardinality, the closure, the interior, the relative interior, the convex
hull, the convex conical hull (of X ∪{0n }), the affine hull, the linear hull, and
the positive polar of X, respectively. The dimension of a convex set X ⊆ Rn
will be denoted by dim X. A vector y ∈ Rn is a feasible direction at x ∈ X
if there exists ε > 0 such that x + εy ∈ X. The cone of feasible directions at
x will be denoted by D (X; x).
Now we summarize some basic concepts and results of LSIP theory that
will be used throughout (all these results can be found in [9]).
Let problem P be defined by the triple (a, b, c). Its characteristic cone is
K := cone { (at , bt ) , t ∈ T ; − (at , bt ) , t ∈ V ; (0n , −1) } .
The Farkas lemma establishes that u′x ≥ α for all x ∈ F if and only if
(u, α) ∈ cl K. Thus cl K only depends on F whereas Λ depends on K (and
so on the constraint system of P ). Given x ∈ F , the set of active indices at
x is T (x) := {t ∈ T | a′t x = bt }. Obviously, V ⊆ T (x). The active cone at x
is

A (x) := cone {at , t ∈ T (x) ; −at , t ∈ V } .

It is easy to see that x ∈ F ∗ if and only if c ∈ D (F ; x)0 and also that
A (x) ⊆ D (F ; x)0 for all x ∈ F . Consequently, if c ∈ A (x) (the KKT
condition) then x ∈ F ∗ , and the converse statement holds if K is closed.
A point x∗ ∈ F is a strongly unique optimal solution if there exists α > 0
such that c′x ≥ c′x∗ + α ‖x − x∗ ‖ for all x ∈ F (in which case F ∗ = {x∗ }).
This happens if and only if c ∈ int D (F ; x∗ )0 .
The weak duality theorem establishes that v D ≤ v P . The equality holds
if either K is closed or c ∈ rint M , where M := cone {at , t ∈ T ; −at , t ∈ V } is
the so-called first moment cone. Moreover, the first condition entails Λ∗ ≠ ∅
if Λ ≠ ∅, and the second one F ∗ ≠ ∅.
F is bounded if and only if M = Rn and F ∗ is bounded if and only if
c ∈ int M . Since M is invariant through the perturbations considered in this
paper, if the primal feasible set is bounded, the same is true for the perturbed
problems. The strong Slater condition (existence of x ∈ Rn and ε > 0 such
that a′t x ≥ bt + ε for all t ∈ U , and a′t x = bt for all t ∈ V ), together with the
linear independence of {at , t ∈ V } if V ≠ ∅, guarantees the solvability of the
problem obtained by replacing b with w ∈ RT provided δ (w, b) is sufficiently
small. Under both assumptions, the perturbed problems have zero duality
gap for sufficiently small perturbations of the data.
This paper is structured as follows. Section 2 shows that the effective
domain of any convex homogeneous function can be partitioned into maximal
relatively open convex cones where the function is linear (i.e., finite, convex
and concave) which are called linearity cones of the given function. Section
3 extends and analyzes the concepts of complementary solution and optimal
partition from LP to LSIP. Section 4 examines the linearity of the optimal
value functions associated with perturbations of c on convex sets (e.g., on
segments emanating from c and on relatively open convex cones) by means
of the theory developed in Section 2 (as both optimal value functions are
concave in the case of perturbations of c) and Section 3. Sections 5 and 6
give sufficient conditions for the optimal value function to depend linearly on
the size of the perturbations when the perturbed data are the RHS function
b or both parameters, c and b, respectively. These conditions are expressed
in terms of optimal partitions. Finally, Section 7 contains the conclusions.
2 Linearity cones of convex homogeneous functions
The effective domain of f : Rn → R is denoted by dom f . In this section
we prove that, if f is convex and homogeneous, then there exists a partition
of (dom f ) \ {0n } into maximal relatively open convex cones on which f is
linear.
Lemma 1 Let f : Rn → R be a convex function. Then the following statements hold:
(i) If A : Rn → Rn is a linear mapping, then f ◦ A is also convex. Moreover,
if f is homogeneous (linear), then f ◦ A is also homogeneous (linear,
respectively).
(ii) If C ⊂ Rn is convex and h is a linear function on Rn such that f (x) ≤
h (x) for all x ∈ C and f (x̄) = h (x̄) for a certain x̄ ∈ rint C, then
f (x) = h (x) for all x ∈ C.
(iii) If C and D are convex sets such that (rint C) ∩ (rint D) 6= ∅ and D ⊂
aff C, f is linear on D and f (x) = d0 x + δ for all x ∈ C, with d ∈ Rn
and δ ∈ R, then f (x) = d0 x + δ for all x ∈ D.
Proof : (i) It is immediate.
(ii) Since f − h : Rn → R is also convex, we can assume f (x) ≤ 0 for all
x ∈ C and f (x̄) = 0 for a certain x̄ ∈ rint C.
Take an arbitrary x ∈ C. Since x̄ ∈ rint C, there exists µ > 1 such
that z := (1 − µ) x + µx̄ ∈ C. Then we have x̄ = µ−1 z + (1 − µ−1 ) x, with
0 < µ−1 < 1, and by convexity of f we get

0 = f (x̄) ≤ µ−1 f (z) + (1 − µ−1 ) f (x) ≤ (1 − µ−1 ) f (x) .

Consequently, 0 ≤ f (x). Since we are assuming f (x) ≤ 0, we have f (x) = 0.
(iii) Take an arbitrary x ∈ D. Select a point x̄ ∈ (rint C) ∩ (rint D). Since
x ∈ D ⊂ (aff C) ∩ (aff D), based on the same arguments as in part (ii), there
exist an element z ∈ C ∩ D and ε, 0 < ε < 1, such that x̄ = εz + (1 − ε) x.
Taking into account that x̄, z ∈ C and the linearity of f on [x, z] ⊂ D,
we have

d′x̄ + δ = f (x̄) = εf (z) + (1 − ε) f (x) = ε (d′z + δ) + (1 − ε) f (x) ,

from which we get f (x) = d′x + δ. □
Lemma 2 Let C and D be two cones in Rn such that C is convex, relatively
open and C ∩ D ≠ ∅. Then C ⊂ C + D.
Proof : Let c ∈ C ∩ D. Given x ∈ C, since c, x ∈ C and this set is relatively
open, there exists µ > 1 such that y := (1 − µ) c + µx ∈ C. Then x =
µ−1 y + (1 − µ−1 ) c ∈ C + D. Hence C ⊂ C + D. □
Proposition 1 Let f : Rn → R be a convex homogeneous function. Let
{Ci , i ∈ I} be a finite family of relatively open convex cones containing
c ∈ Rn \ {0n } on which f is linear. Then f is linear on Σ_{i∈I} Ci .
Proof : Given J ⊂ I, J ≠ ∅, we denote CJ := Σ_{i∈J} Ci , which is also a
relatively open convex cone (the three properties are preserved by the sum)
containing c (because c = Σ_{i∈J} |J|−1 c ∈ CJ ). If ∅ ≠ H ⊊ J ⊂ I, then by
Lemma 2, CH ⊂ CH + C_{J\H} = CJ .
Let dim CI = m ≤ n. The case when m = 1 is trivial, so we suppose
that m ≥ 2. Let k be the minimum cardinality of the sets J ⊂ I such that
dim CJ = m. We can assume without loss of generality that dim CK = m,
where K = {1, ..., k} ⊂ I. Obviously, CK ⊂ CI .
First we show that

2 ≤ dim C1 < dim (C1 + C2 ) < ... < dim CK = m.    (1)

If dim C1 = 1, then span C1 = span {c} ⊂ span C2 and dim Σ_{i=2}^{k} Ci =
m, contradicting the definition of k. Analogously, if there exists j ∈ {1, ..., k − 1}
such that dim Σ_{i=1}^{j+1} Ci = dim Σ_{i=1}^{j} Ci , then span C_{j+1} ⊂ span Σ_{i=1}^{j} Ci
and we have dim Σ_{i∈K\{j+1}} Ci = m, contradicting again the definition of k.
Observe that (1) entails that k + 1 ≤ dim CK = m, span Ci ≠ span Cj
if i ≠ j, i, j = 1, 2, ..., k (since C1 , ..., Ck can be re-ordered arbitrarily), and
dim Σ_{i∈K\{j}} Ci < m for j = 1, 2, ..., k.
Now we select m vectors of Rn as follows. Let m0 = 1. Let m1 =
dim C1 ≥ 2 and let us select in C1 a set of m1 linearly independent vectors,
{v1 , ..., v_{m_1}}, where v1 = c. Now, let m2 = dim (C1 + C2 ) > m1 . Since
C1 ⊂ C1 + C2 (by Lemma 2), there exist wi ∈ C1 and vi ∈ C2 , i = m1 +
1, ..., m2 , such that {v1 , ..., v_{m_1}, w_{m_1+1} + v_{m_1+1}, ..., w_{m_2} + v_{m_2}} form a basis
of span (C1 + C2 ). Since wi ∈ span C1 , i = m1 + 1, ..., m2 , the system of
m2 vectors {v1 , ..., v_{m_2}} is also a basis of span (C1 + C2 ). By induction,
considering all the k cones, we obtain m_k = m linearly independent vectors

v1 , ..., v_{m_1}, v_{m_1+1}, ..., v_{m_2}, ..., v_{m_{k−1}+1}, ..., v_{m_k} ∈ CK ,

such that m_{i−1} < m_i , i = 1, 2, ..., k, and

CK ⊂ CI ⊂ span { v1 , ..., v_{m_1}, v_{m_1+1}, ..., v_{m_2}, ..., v_{m_{k−1}+1}, ..., v_{m_k} } .
Now, we define a new family of relatively open cones {B1 , ..., Bk } , containing c, as follows: B1 = C1 and Bi = Ci ∩ span { v1 , v_{m_{i−1}+1}, ..., v_{m_i} } ,
i = 2, ..., k (recall that any linear subspace is relatively open). Obviously,
BK := Σ_{i∈K} Bi is a relatively open convex cone such that c ∈ BK , dim BK =
m and BK ⊂ CK ⊂ CI .
Let A be a (non-singular) linear transformation on Rn such that Avi =
ei , i = 1, 2, ..., m. Therefore e1 ∈ ABi ⊂ span { e1 , e_{m_{i−1}+1}, ..., e_{m_i} } , i =
1, 2, ..., k, and all the sets

ABK ⊂ ACK ⊂ ACI ⊂ A span {v1 , ..., vm } = span {e1 , ..., em } = Rm × {0_{n−m}}

are relatively open, have the same dimension m and contain e1 .
The function g : Rn → R such that g = f ◦ A−1 is convex and homogeneous (by Lemma 1, part (i)), so that g (0n ) = 0. We denote d =
(d1 , ..., dm , 0, ..., 0) ∈ Rn , where dj := g (ej ), j = 1, ..., m. Observe that

d1 = g(e1 ) = f (A−1 e1 ) = f (v1 ) = f (c).    (2)
Given i = 1, 2, ..., k, since f is linear on Bi ⊂ Ci , g is also linear on

ABi ⊂ span { e1 , e_{m_{i−1}+1}, ..., e_{m_i} } ,

and we can express

g(x) = x1 d1 + Σ_{j=m_{i−1}+1}^{m_i} xj dj

for all x ∈ ABi and for all i = 1, 2, ..., k.
Now we prove that

g(y) ≤ d′y for all y ∈ ABK .    (3)

Take an arbitrary y ∈ ABK . Then we can write y = Σ_{i∈K} y^i , with
y^i ∈ ABi , i = 1, 2, ..., k. Let y^i = y^i_1 e1 + Σ_{j=m_{i−1}+1}^{m_i} y^i_j ej , i = 1, 2, ..., k. Then
we have

g(y) = g(k Σ_{i∈K} k^{−1} y^i ) = k g(Σ_{i∈K} k^{−1} y^i )
≤ Σ_{i∈K} g(y^i ) = Σ_{i∈K} ( y^i_1 d1 + Σ_{j=m_{i−1}+1}^{m_i} y^i_j dj )
= ( Σ_{i∈K} y^i_1 ) d1 + Σ_{i∈K} Σ_{j=m_{i−1}+1}^{m_i} y^i_j dj = d′y.
From (2), (3) and item (ii) of Lemma 1, recalling that e1 ∈ rint ABK =
ABK , we get

g(y) = d′y for all y ∈ ABK .    (4)

In order to extend (4) to the whole cone ACI , let us fix i ∈ I. Since we
have e1 ∈ (rint ACi ) ∩ (rint ABK ) = ACi ∩ ABK and ACi ⊂ Rm × {0_{n−m}} =
aff (ABK ), according to item (iii) of Lemma 1, formula (4) entails g(y) = d′y
for all y ∈ ACi .
Now, let us take an arbitrary point y ∈ ACI , whereby y = Σ_{i∈I} y^i , with
y^i ∈ ACi , i ∈ I. Since g is a convex homogeneous function we have

g(y) ≤ Σ_{i∈I} g(y^i ) = Σ_{i∈I} d′y^i = d′y.

Applying again item (ii) of Lemma 1, we conclude that g is linear on ACI .
Therefore f = g ◦ A is linear on A−1 (ACI ) = CI . □
Let us illustrate Proposition 1 with two simple examples.
Example 1 Consider the convex cones C1 = {x ∈ R3 | x1 = 0, x3 > 0} and
C2 = {x ∈ R3 | x2 = 0, x3 > 0}. They are relatively open and e3 ∈ C1 ∩ C2 .
Thus, any convex homogeneous function f : R3 → R which is linear on both
cones, C1 and C2 , is also linear on CI = C1 + C2 = {x ∈ R3 | x3 > 0}.
Concerning the objects used in the above proof, m = 3, k = 2, i.e., K = I =
{1, 2}, and we could choose v1 = e3 , v2 = e2 and v3 = e1 , so that A is the
symmetry in R3 with respect to the plane x1 = x3 . Then Bi = Ci , i = 1, 2,
BK = CI and so ABK = ACI = {y ∈ R3 | y1 > 0}.
Example 2 The function f (x) = max {−x1 , −x2 } is convex and homogeneous on R2 and it vanishes on the relatively open convex cones C1 = R++ ×
{0} and C2 = {0} × R++ , but it is not even linear on its sum C1 + C2 = R2++ .
Observe also that Ci ∩ (C1 + C2 ) = ∅ although Ci ⊂ cl (C1 + C2 ), i = 1, 2.
This example shows that the assumptions on the intersection of the relatively
open convex cones in Lemma 2 and Proposition 1 are not superfluous.
Consider also the convex cone C3 = R2+ . Obviously, C1 ∩ C3 ≠ ∅ but
C3 ⊄ C1 + C3 , so that Lemma 2 only guarantees that the relatively open
convex cone is contained in the sum of the two cones.
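Example 2 is easy to check numerically; the following sketch (plain NumPy, points u and v chosen arbitrarily in C1 and C2) makes the failure of linearity on C1 + C2 concrete:

```python
import numpy as np

# f from Example 2: convex and homogeneous, vanishing on
# C1 = R_++ x {0} and C2 = {0} x R_++.
def f(x):
    return max(-x[0], -x[1])

u = np.array([1.0, 0.0])   # u in C1
v = np.array([0.0, 1.0])   # v in C2
print(f(u), f(v))          # f vanishes (hence is linear) on each cone
print(f(u + v))            # -1.0: f(u + v) != f(u) + f(v), no linearity on C1 + C2
```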
Proposition 2 Let f : Rn → R be a convex homogeneous function and
let c ∈ Rn \ {0n } . Then there exists a largest relatively open convex cone
containing c on which f is linear.
Proof : Let 𝒞 := {Ci , i ∈ I} be the class of all relatively open convex cones
containing c on which f is linear. We shall prove that C := ∪_{i∈I} Ci ∈ 𝒞 (i.e.,
C is the maximum of 𝒞 for the inclusion).
Since f is linear on cone {c} \ {0n }, this is an element of 𝒞, so that I ≠ ∅.
Let us denote by 𝒥 the family of all nonempty finite subsets of I.
For each J ∈ 𝒥 , the sum CJ := Σ_{i∈J} Ci is a relatively open convex cone
containing c and so CJ ∈ 𝒞 by Proposition 1. Since 𝒞 ⊂ {CJ , J ∈ 𝒥 } ⊂ 𝒞, we
have C = ∪_{J∈𝒥} CJ . On the other hand, given {J, H} ⊂ 𝒥 such that J ⊂ H,
we have shown in Proposition 1 that

CJ ⊂ CH .    (5)
Now we show that C satisfies all the requirements.
C is a convex cone: The union of cones is a cone. On the other hand,
given x1 , x2 ∈ C, if xi ∈ CJi , i = 1, 2, taking J = J1 ∪ J2 ∈ J , (5) yields
xi ∈ CJ , i = 1, 2. Since CJ is convex, we have [x1 , x2 ] ⊂ CJ ⊂ C.
C is relatively open: Let x ∈ C and let y ∈ aff C. Then we can write

y = Σ_{i=1}^{m} λi yi , with m ∈ N, Σ_{i=1}^{m} λi = 1, and yi ∈ C, i = 1, ..., m.
By (5) there exists J ∈ J such that x, yi ∈ CJ , i = 1, ..., m. Since CJ is
relatively open, there exists µ > 1 such that µx + (1 − µ) y ∈ CJ ⊂ C. Then
x ∈ rint C.
f is linear on C: Let x1 , x2 ∈ C. Let J ∈ 𝒥 be such that x1 , x2 ∈ CJ . Since
f is linear on CJ , we have f ((1 − λ) x1 + λx2 ) = (1 − λ) f (x1 ) + λf (x2 ) for
all λ ∈ [0, 1]. □
Given a convex (concave) homogeneous function f , we define the linearity
cone of f at z ∈ (dom f ) \ {0n } as the largest relatively open convex cone
containing z on which f is linear (this definition is correct by Proposition 2).
We denote it by Cz .
Proposition 3 The linearity cones of a convex (concave) homogeneous function f : Rn → R constitute a partition of (dom f ) \ {0n } .
Proof : We denote by 𝒞z the family of all the relatively open convex
cones containing z ∈ (dom f ) \ {0n } on which f is linear. Obviously, Cz is
the maximum of 𝒞z for the inclusion.
Let us assume that the statement is not true. Then there exist z^1 , z^2 ∈ (dom f ) \ {0n }
such that C_{z^1} ∩ C_{z^2} ≠ ∅ and C_{z^1} ≠ C_{z^2} . Take an arbitrary z ∈ C_{z^1} ∩ C_{z^2} .
Since C_{z^1} , C_{z^2} ∈ 𝒞z , we have C_{z^1} , C_{z^2} ⊂ Cz , with C_{z^i} ⊊ Cz for some i = 1, 2.
Then C_{z^i} cannot be the linearity cone of f at z^i . □
3 Optimal partitions
Let us consider the primal LSIP problem P introduced in Section 1 and
its dual problem D. We associate with each primal-dual feasible solution,
(x, λ) ∈ F × Λ, the supporting sets σ (x) := {t ∈ U | a′t x > bt } and σ (λ) :=
{t ∈ U | λt > 0}. The couple (x, λ) ∈ F × Λ is called a complementary
solution of the pair P − D if σ (x) ∩ σ (λ) = ∅.
The next two results clarify the relationship between optimality and complementary solutions in LSIP (which is more complex than in LP).
Proposition 4 The pair (x, λ) ∈ F × Λ is a complementary solution of
P − D if and only if it is a primal-dual optimal solution and v D = v P . In
that case, the following statements are true:
(i) If x̄ ∈ F satisfies a′t x̄ = bt for all t ∈ σ (λ) , then x̄ ∈ F ∗ .
(ii) If λ̄ ∈ Λ satisfies λ̄t = 0 for all t ∈ σ (x) , then λ̄ ∈ Λ∗ .
Proof: Let (x, λ) be a complementary solution of P − D. Then σ (x) ∩
σ (λ) = ∅, i.e., λt (a′t x − bt ) = 0 for all t ∈ U . Since a′t x = bt for all t ∈ V , we
have Σ_{t∈T} λt (a′t x − bt ) = 0, so that

Σ_{t∈T} λt bt = ( Σ_{t∈T} λt at )′ x = c′x,

and the weak duality theorem yields the coincidence of the optimal values (i.e.,
v D = v P ), x ∈ F ∗ and λ ∈ Λ∗ . The converse statement is trivial.
Now we assume that (x, λ) is a complementary solution of P − D.
(i) Let x̄ ∈ F be such that a′t x̄ = bt for all t ∈ σ (λ) . Then we have

v D = Σ_{t∈T} λt bt = Σ_{t∈V ∪σ(λ)} λt bt = Σ_{t∈V ∪σ(λ)} λt a′t x̄
= Σ_{t∈T} λt a′t x̄ = ( Σ_{t∈T} λt at )′ x̄ = c′x̄ ≥ v P ,

and the conclusion is a consequence of the weak duality theorem.
(ii) Let λ̄ ∈ Λ be such that λ̄t = 0 for all t ∈ σ (x) . Then

v P = c′x = ( Σ_{t∈T} λ̄t at )′ x = Σ_{t∈T} λ̄t a′t x = Σ_{t∈T} λ̄t bt = Σ_{t∈V ∪σ(λ̄)} λ̄t bt ≤ v D ,

so that λ̄ ∈ Λ∗ again by the weak duality theorem. □
Corollary 1 Given a point x ∈ F, there exists λ̄ ∈ Λ such that (x, λ̄) is a
complementary solution of P − D if and only if x is an optimal solution of
some finite subproblem of P.
Proof: If (x, λ̄) is a complementary solution of P − D, by Proposition 4,

( Σ_{t∈T} λ̄t at )′ x = c′x = Σ_{t∈T} λ̄t bt , so that Σ_{t∈T} λ̄t (a′t x − bt ) = 0, i.e., c ∈ A (x).

Thus x is an optimal solution of the problem resulting from replacing U by σ (λ̄)
in P . Replacing in that problem {a′t x = bt , t ∈ V } by an equivalent finite
subsystem, we obtain an equivalent finite subproblem with optimal solution
x.
Conversely, assume that x is an optimal solution of the finite subproblem
of P obtained by substituting U and V with the finite subsets Ū and V̄ . Since the
KKT condition characterizes optimality in LP, there exists λ̄ ∈ R(T ) such that
λ̄t = 0 for all t ∈ T \ (Ū ∪ V̄ ), λ̄t ≥ 0 for all t ∈ Ū , Σ_{t∈T} λ̄t (a′t x − bt ) = 0,
and c = Σ_{t∈T} λ̄t at . Then it is easy to show that (x, λ̄) is a complementary
solution of P − D, again by Proposition 4. □
The constraint system of P is called locally Farkas-Minkowski (see [9,
Chapter 5] and references therein) if u′x ≥ α for all x ∈ F , with u′x̄ = α
for some x̄ ∈ F , implies that u′x ≥ α for every solution x of some finite
subsystem. This property is equivalent to asserting that, for every z ∈ Rn , if
x is an optimal solution of P (z), then it is also an optimal solution of some
finite subproblem of P (z) . Thus Corollary 1 gives two new characterizations
of this class of linear semi-infinite systems.
A triple (B, N, Z) ∈ (2^U )^3 is called an optimal partition if there exists
a complementary solution (x, λ) such that B = σ (x), N = σ (λ) and
Z = U \ (B ∪ N ) (for the sake of brevity we omit the problems and couples of
problems when they are implicit in the context). Obviously, the non-empty
elements of the tripartition (B, N, Z) give a partition of U (similar tripartitions have been used in [2] and [7] in order to extend the optimal partition
approach from LP to quadratic programming). We say that a tripartition
(B̄, N̄ , Z̄) is maximal if

B̄ = ∪_{x∈F ∗} σ(x), N̄ = ∪_{λ∈Λ∗} σ(λ) and Z̄ = U \ (B̄ ∪ N̄ ).

Note that the definition of the maximal partition implies that B ⊂ B̄ and
N ⊂ N̄ for every optimal partition (B, N, Z) . The uniqueness of the maximal
partition is a straightforward consequence of the definition. If there exist
optimal solutions x̄ ∈ F ∗ and λ̄ ∈ Λ∗ such that σ(x̄) = B̄ and σ(λ̄) = N̄ ,
then the maximal partition is called the maximal optimal partition.
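In the LP case the Goldman–Tucker theorem guarantees a strictly complementary solution, so the maximal partition is always optimal and has Z̄ = ∅; the semi-infinite examples below show this can fail. A sketch with a hypothetical LP (data chosen only for illustration), comparing the tripartition induced by one complementary solution with the maximal one:

```python
import numpy as np

# LP: Inf x1 + x2 s.t. x1 >= 0 (t=0), x2 >= 0 (t=1), x1 + x2 >= 1 (t=2).
# The optimal set is the segment {x >= 0 : x1 + x2 = 1}.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
T = range(3)

def tripartition(x, lam):
    """(B, N, Z) induced by the complementary solution (x, lam)."""
    B = {t for t in T if A[t] @ x > b[t] + 1e-12}   # sigma(x)
    N = {t for t in T if lam[t] > 1e-12}            # sigma(lam)
    return B, N, set(T) - B - N

lam = np.array([0.0, 0.0, 1.0])                     # optimal dual, sigma(lam) = {2}
print(tripartition(np.array([1.0, 0.0]), lam))      # ({0}, {2}, {1}): Z nonempty
print(tripartition(np.array([0.5, 0.5]), lam))      # ({0, 1}, {2}, set()): maximal
```

The second pair is strictly complementary, so its tripartition is the maximal optimal partition.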
Proposition 5 The maximal optimal partition exists if and only if v D = v P
and there exist x̄ ∈ F ∗ and λ̄ ∈ Λ∗ such that σ (x) ⊂ σ (x̄) and σ (λ) ⊂ σ (λ̄)
for all (x, λ) ∈ F ∗ × Λ∗ . In particular, if (B̄, N̄ , Z̄) is an optimal partition
such that Z̄ = ∅, then it is a maximal optimal partition.
Proof: The first statement is a straightforward consequence of Proposition 4.
Now, let (x̄, λ̄) be a complementary solution such that B̄ = σ (x̄), N̄ =
σ (λ̄), and B̄ ∪ N̄ = U (in which case (x̄, λ̄) is called a strictly complementary
solution of P − D). Let (B, N, Z) be an arbitrary optimal partition and
let (x, λ) be a complementary solution such that σ (x) = B and σ (λ) = N.
Again by Proposition 4, the pairs (x, λ̄) and (x̄, λ) are also complementary
solutions, so that B ∩ N̄ = ∅ and B̄ ∩ N = ∅, i.e., N ⊂ U \ B̄ = N̄ and
B ⊂ U \ N̄ = B̄. □
The next example illustrates the existence of maximal optimal partitions
(B̄, N̄ , Z̄) such that Z̄ ≠ ∅.
Example 3 Consider the problem P in R2 such that T = {−2, −1, 0, 1, ...} ,
the objective function is the null one, and the constraints are tx1 ≥ −1, for
t = 1, 2, ..., −x1 ≥ 0 (t = 0), x2 ≥ 0 (t = −1), and −x2 ≥ −1 (t = −2). We
have F ∗ = {0} × [0, 1] and Λ∗ = {0T } . It is easy to show that (T \ {0} , ∅, {0})
is the maximal optimal partition.
The solvability of P guarantees the existence of a point x̄ such that σ (x) ⊂
σ (x̄) for all x ∈ F ∗ due to the finite dimension of the space of variables (take
x̄ ∈ rint F ∗ ). Concerning D, if Λ∗ is the convex hull of a finite set, then
the arithmetic mean of that set, λ̄, satisfies σ (λ) ⊂ σ (λ̄) for all λ ∈ Λ∗ . Nevertheless,
v D = v P and primal-dual solvability do not guarantee the existence of the
maximal optimal partition, as the following example shows.
Example 4 Consider the following LSIP problem in R2 :

P : Inf x2
s.t. −x1 + x2 ≥ 0 (t = 1),
x1 + x2 ≥ 0 (t = 2),
x2 ≥ 0 (t = 3, 4, ...).

Obviously, v D = v P = 0, with F ∗ = {02 } . For r ∈ N we denote by λ^r
the indicator function of {r} . Since Λ∗ = Λ = conv { (λ^1 + λ^2 )/2, λ^3 , λ^4 , ... },
we have ∪_{λ∈Λ∗} σ(λ) = T and so the maximal partition (∅, T, ∅) cannot be optimal.
From Proposition 4, if (B, N, Z) is an optimal partition of P , a sufficient
optimality condition for x ∈ F (λ ∈ Λ) is that σ (x) ∩ N = ∅ (σ (λ) ∩ B = ∅,
respectively). When the maximal optimal partition exists, it provides the
weakest optimality criterion based on optimal partitions.
4 Perturbing c
The perturbed problems of P and D to be considered in this section are

P (z) : Inf z′x
s.t. a′t x ≥ bt , t ∈ U,
a′t x = bt , t ∈ V,

and

D (z) : Sup Σ_{t∈T} λt bt
s.t. Σ_{t∈T} λt at = z,
λt ≥ 0, t ∈ U,
where the parameter z ranges on Rn . We denote the optimal values of P (z)
and D (z) as v P (z) and v D (z), respectively (since Sections 4-6 deal with
optimal value functions of different parameters, in order to avoid confusion,
our notation makes explicit the corresponding argument, e.g., we write v P (z)
and v D (z) instead of just v P and v D ). With this notation, the effective
domain of v D (z) is the first moment cone M and the optimal values of the
nominal problem P and its dual D are v P (c) and v D (c), respectively. In [8,
Section 2 ] we have shown that v P (z) is linear on a certain neighborhood of c
(or on an open convex cone containing c) if and only if c ∈ int D (F ; x∗ )0 or,
equivalently, if and only if P has a strongly unique solution. Moreover, v P (z)
is linear on a segment emanating from c in the direction of d ∈ Rn \ {0n } if P
and D are solvable, with v D = v P , and the following problem is also solvable
and has zero duality gap:
Dd : Sup Σ_{t∈T} λt bt + µ v P (c)
s.t. Σ_{t∈T} λt at + µc = d,
λt ≥ 0, t ∈ U.

This is the case, in particular, if P is a bounded LP problem and d satisfies
inf {d′x | x ∈ F ∗ } ≠ −∞.
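The local linearity of v P (z) around c under strong uniqueness can be seen numerically in the LP case. A sketch with hypothetical data (A, b, c are illustrative; F = {x : x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 1} and x∗ = (1, 0)′ is strongly unique for c = (1, 2)′, so v P (z) = z1 near c):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical bounded LP with a strongly unique optimal solution.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])

def vP(z):
    # optimal value of P(z): Inf z'x s.t. A x >= b
    res = linprog(z, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2, method="highs")
    return res.fun

c = np.array([1.0, 2.0])
for d in ([0.1, 0.0], [0.0, 0.1], [-0.1, 0.2]):
    z = c + np.array(d)
    print(vP(z), z[0])    # equal: v_P(z) = z1 on a neighborhood of c
```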
Lemma 3 Let { (c^i , λ^i ), i ∈ I } ⊂ Rn × R(T ) and x ∈ Rn be such that (x, λ^i )
is a complementary solution of P (c^i ) − D (c^i ) for all i ∈ I. Then P (z) and
D (z) are solvable and

v P (z) = v D (z) = x′z for all z ∈ conv { c^i , i ∈ I } .    (6)
Proof : Let z ∈ conv { c^i , i ∈ I } . Then there exists µ ∈ R_+^{(I)} such that

z = Σ_{i∈I} µi c^i and Σ_{i∈I} µi = 1.

Since the feasible set is the same for P (z) and for all P (c^i ), i ∈ I, x is a
feasible solution of P (z) .
Consider the element λ^z := Σ_{i∈I} µi λ^i ∈ R(T ) . We shall prove that λ^z is
a feasible solution of D (z) . In fact, since λ^i is a feasible solution of D (c^i ),
we have λ^i_t ≥ 0 for all t ∈ U and Σ_{t∈T} λ^i_t at = c^i for all i ∈ I. Thus,
λ^z_t = Σ_{i∈I} µi λ^i_t ≥ 0 for all t ∈ U and

Σ_{t∈T} λ^z_t at = Σ_{i∈I} µi Σ_{t∈T} λ^i_t at = Σ_{i∈I} µi c^i = z.

Since σ (λ^z ) ⊂ ∪_{i∈I} σ (λ^i ) and σ (x) ∩ σ (λ^i ) = ∅ for all i ∈ I, we have
σ (x) ∩ σ (λ^z ) = ∅, i.e., (x, λ^z ) is a complementary solution of P (z). Then,
applying Proposition 4 to P (z), we conclude that v P (z) = v D (z) = z′x. □
Proposition 6 Let {c^i , i ∈ I} ⊂ Rn be such that there exists a common
optimal partition for the family of problems {P (c^i ) , i ∈ I} . Then v P (z) =
v D (z) is linear on conv {c^i , i ∈ I} .
Proof : Let (B, N, Z) be an optimal partition for P (c^i ) , for all i ∈ I, and
for each i ∈ I let (x^i , λ^i ) be a complementary solution of P (c^i ) − D (c^i ) such
that B = σ (x^i ) and N = σ (λ^i ). Select j ∈ I arbitrarily and let x = x^j .
According to the final remark in Section 3, x is an optimal solution of P (c^i ) ,
for all i ∈ I. Then, by Proposition 4, (x, λ^i ) is a complementary solution of
P (c^i ) − D (c^i ) , for all i ∈ I. Applying Lemma 3, P (z) and D (z) are solvable
and v P (z) = v D (z) = z′x for all z ∈ conv {c^i , i ∈ I} .
Under the additional assumption, since v P (z) is linear on conv {c^i , i ∈ I}
and this set is a neighborhood of c, P has a strongly unique solution. □
Under the assumption of Proposition 6, if c ∈ int conv {c^i , i ∈ I} (e.g.,
if all the problems P (c^i ) have the same maximal optimal partition), then
P has a strongly unique optimal solution. This is the case if there exists a
common optimal partition for all the problems P (z) such that z belongs to a
certain neighborhood of c. On the other hand, the next example shows that the linearity
of v P (z) = v D (z) on a neighborhood of c does not entail the existence of a
set {c^i , i ∈ I} as in Proposition 6.
Example 5 Let us consider the LSIP problem with index set Z:

P : Inf x1 + x2
s.t. tx1 ≥ −1, t = 1, 2, 3, ...,
−tx2 ≥ −1, t = 0, −1, −2, ....

Since the characteristic cone is K = {x ∈ R3 | x1 ≥ 0, x2 ≥ 0, x3 < 0} ∪ {03 },
F = R2+ , 02 is the strongly unique solution of P and v P (z) = 0 for all z ∈ R2+
(the effective domain of v P (z)). Given z ∈ R2+ , v D (z) ≤ v P (z) = 0
and the sequence {λ^r } ⊂ R_+^{(Z)} such that

λ^r_t = z1 /r if t = r, λ^r_t = z2 /r if t = −r, and λ^r_t = 0 otherwise,

is feasible for D (z) and satisfies Σ_{t∈Z} λ^r_t bt = −(z1 + z2 )/r → 0 as r → ∞, so we
also have v D (z) = 0 for all z ∈ R2+ although D (z) is only solvable when
z = 02 . Thus no complementary solution exists for P (z) − D (z) if z ≠ 02 . It is easy
to see that the maximal optimal partition of P (02 ) is (Z, ∅, ∅).
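The dual sequence of Example 5 can be checked numerically (the point z below is an arbitrary hypothetical element of R2+):

```python
import numpy as np

# Example 5: lambda^r is feasible for D(z) and its objective -(z1 + z2)/r -> 0,
# so vD(z) = 0 even though D(z) has no optimal solution for z != 0.
z = np.array([0.7, 1.3])
for r in (1, 10, 1000):
    lam_r = z[0] / r      # weight on t = r  (constraint r*x1 >= -1, a_t = (r, 0))
    lam_mr = z[1] / r     # weight on t = -r (constraint r*x2 >= -1, a_t = (0, r))
    lhs = lam_r * np.array([r, 0.0]) + lam_mr * np.array([0.0, r])
    assert np.allclose(lhs, z)    # dual feasibility: sum_t lam_t a_t = z
    obj = -(lam_r + lam_mr)       # b_t = -1 on the support of lambda^r
    print(r, obj)                 # objective -(z1 + z2)/r, tending to 0
```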
Corollary 2 Given d ∈ Rn , if there exists ε > 0 such that P (c + εd) and P
have a common optimal partition, then v P (z) = v D (z) is linear on [c, c + εd] .
Proof : Apply Proposition 6 to {c^1 , c^2 }, where c^1 := c and c^2 := c + εd. □
Example 6 Consider the primal LSIP problem

P : Inf c′x
s.t. − (cos t) x1 − (sin t) x2 ≥ −1, t ∈ [0, π/2] ,
x1 ≥ 0 (t = 2), x2 ≥ 0 (t = 3),

for three different cost vectors:

(a) c = (1, 1)′ . If z ∈ R2++ , there exists a unique complementary solution
of P (z) − D (z) : (02 , λ̄), where λ̄t = z1 if t = 2, λ̄t = z2 if t = 3, and
λ̄t = 0 otherwise. Since ([0, π/2] , {2, 3} , ∅) is a common optimal (actually
maximal) partition for P (z) , z ∈ R2++ , v P (z) = v D (z) is linear on R2++
by Proposition 6. In fact, v P (z) = v D (z) = 0 for all z ∈ R2++ (Figure 1
represents the graph of v P (z) = v D (z)).
(b) c = (1, 0)′ . P (c) has a maximal optimal partition, ([0, π/2] ∪ {3} , {2} , ∅)
(and two other optimal partitions). If d ∉ cone {c} and ε > 0 is sufficiently small, z := c + εd satisfies z1 > 0 and either z2 > 0 (in which
case the maximal partition of P (z) is ([0, π/2] , {2, 3} , ∅), as in (a)) or
z2 < 0. In this latter case the unique complementary solution is ((0, 1)′ , λ̄),
where λ̄t = −z2 if t = π/2, λ̄t = z1 if t = 2, and λ̄t = 0 otherwise.
Thus the maximal optimal partition of P (z) is ([0, π/2[ ∪ {3} , {π/2, 2} , ∅).
This implies that, for any d ∈ R2 , there exists ε > 0 such that v P (z) =
v D (z) is linear on [c, c + εd] .
(c) c = (−1, −1)′ . The unique complementary solution is (x^0 , λ^0 ) such that
x^0 = (1/√2) (1, 1)′ , λ^0_t = √2 if t = π/4, and λ^0_t = 0 otherwise,
so that the maximal optimal partition of P (−1, −1) is (B, N, ∅) where
B = ([0, π/2] \ {π/4}) ∪ {2, 3} and N = {π/4} . Given an arbitrary d ∈ R2 ,
c + ρd ∈ R2−− if ρ is sufficiently small. For such a ρ, the optimal set
of P (c + ρd) is F ∗ (c + ρd) = {x^ρ }, where x^ρ = −(c + ρd)/‖c + ρd‖ ∈ R2++ . There
exists a unique α ∈ ]0, π/2[ (depending on ρ) such that x^ρ = (cos α, sin α)′ .
Obviously, σ (x^ρ ) = ([0, π/2] \ {α}) ∪ {2, 3} . Similarly, the optimal set
of D (c + ρd) is Λ∗ (c + ρd) = {λ^ρ }, where λ^ρ_t = ‖c + ρd‖ if t = α and
λ^ρ_t = 0 otherwise.
Thus σ (x^ρ ) = B and σ (λ^ρ ) = N if and only if d ∈ span {c} . Observe
that, given d ∈ R2 , there exists ε > 0 such that v P (z) = v D (z) is linear
on [c, c + εd] if and only if d ∈ span {c} .
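Case (c) can be checked by discretizing T = [0, π/2]; this is a grid approximation, so the computed values match the exact ones only up to the mesh size. The perturbation direction d below is an arbitrary hypothetical choice:

```python
import numpy as np
from scipy.optimize import linprog

# Example 6(c), discretized: for z = c + rho*d in R^2_--, the optimal solution
# of P(z) is x^rho = -z/||z|| with value v_P(z) = -||z||.
z = np.array([-1.0, -1.0]) + 0.1 * np.array([0.3, -0.2])   # c + rho*d, rho = 0.1
t = np.linspace(0.0, np.pi / 2, 2001)                      # grid on [0, pi/2]
A = np.column_stack([-np.cos(t), -np.sin(t)])              # -(cos t)x1 - (sin t)x2 >= -1
A = np.vstack([A, np.eye(2)])                              # x1 >= 0 (t=2), x2 >= 0 (t=3)
b = np.concatenate([-np.ones(t.size), np.zeros(2)])

res = linprog(z, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2, method="highs")
print(res.fun, -np.linalg.norm(z))      # approximately equal
print(res.x, -z / np.linalg.norm(z))    # x^rho = -z/||z||, up to the grid error
```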
Figure 1 shows the existence of a partition of (dom v P (z)) \ {02 } =
R2 \ {02 } in relatively open convex cones on which v P (z) is linear. In fact,
since the hypograph of v P (z) is the convex cone cl K ([9, Theorem 8.1]),
v P (z) is a concave, proper, upper semi-continuous homogeneous function
and, according to Proposition 3, {C^P_z , z ∈ (dom v P (z)) \ {0n }}, where C^P_z
denotes the linearity cone of v P (z) at z, is a partition of (dom v P (z)) \ {0n }
in maximal regions of linearity.
In the particular case of Example 6, the partition associated with v P (z)
has infinitely many elements, e.g., C^P_{(1,1)} = R2++ , C^P_{(−1,−1)} = cone {(−1, −1)} \ {02 } ,
and C^P_{(1,0)} = cone {(1, 0)} \ {02 }. Observe that {C^P_z , z ∈ R2 \ {02 }} is a partition of R2 \ {02 } such that dim C^P_z = 1 if z ∈ [R2− ∪ (R+ × {0}) ∪ ({0} × R+ )] \ {02 },
and dim C^P_z = 2 otherwise.
Concerning v D (z) , it is also concave, proper and homogeneous. We denote by {C^D_z , z ∈ M \ {0n }} the corresponding partition. In Example 6,
v D (z) = v P (z) , so that both functions have the same partition. This is
not true in general, as the following example shows.
17
Example 7 Take n = 3, T = {t ∈ R^3 | t_1 + t_2 + t_3 = 1, t_i > 0, i = 1, 2, 3} ∪ {(1, 1, 0)}, and the constraints t_1 x_1 + t_2 x_2 + t_3 x_3 ≥ 0 for all t ≠ (1, 1, 0) and x_1 + x_2 ≥ −1 otherwise. Then the linearity cones of v^P(z) are the seven faces of dom(v^P(z)) = R^3_{+} different from {0_3}, whereas v^D(z) has only two linearity cones, R^3_{++} and cone{(1, 1, 0)} \ {0_3}.
Proposition 7 Let c ≠ 0_n. If d ∈ span C_c^P (d ∈ span C_c^D), then there exists ε > 0 such that v^P(z) (v^D(z), respectively) is linear on [c, c + εd].
Proof: If d ∈ span C_c^P, then there exists ε > 0 such that [c, c + εd] ⊂ C_c^P. Since v^P(z) is linear on C_c^P, the conclusion is immediate (the proof is the same for v^D(z)). □
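Proposition 7 can be illustrated numerically in the finite (LP) case. The sketch below is not from the paper: it uses a hypothetical polytope F (the unit square) and the fact that v^P(z) = min_{x∈F} z'x is concave, positively homogeneous, and linear on the normal cone of each vertex; these cones play the role of the linearity cones C_z^P.

```python
# Hypothetical illustration: for a polytope F, v(z) = min_{x in F} z'x is
# concave and positively homogeneous, and is linear on the normal cone of
# each vertex of F. F is the unit square here, chosen only for illustration.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def v(z):
    """v^P(z) = min_x z'x over F (the minimum is attained at a vertex)."""
    return min(z[0] * x[0] + z[1] * x[1] for x in vertices)

# c = (1, 1) lies in the interior of the cone where the vertex (0, 0) is
# optimal; for small eps, [c, c + eps*d] stays in that cone, so v is linear
# (indeed constant 0) on the segment, as Proposition 7 predicts.
c, d, eps = (1.0, 1.0), (1.0, -0.5), 0.25
vals = [v((c[0] + t * d[0], c[1] + t * d[1])) for t in (0.0, 0.5 * eps, eps)]
assert abs(vals[1] - 0.5 * (vals[0] + vals[2])) < 1e-12  # linear on the segment

# Far outside that cone the optimal vertex changes and linearity breaks:
assert v((1.0, -1.0)) == -1.0  # vertex (0, 1) is now optimal
```

The check on the midpoint value is exactly the linearity test of the proposition restricted to a segment inside C_c^P.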
5

Perturbing b

The perturbed problems in this section are

    P(w): Inf c'x
          s.t. a_t'x ≥ w_t, t ∈ U,
               a_t'x = w_t, t ∈ V,

and

    D(w): Sup Σ_{t∈T} λ_t w_t
          s.t. Σ_{t∈T} λ_t a_t = c,
               λ_t ≥ 0, t ∈ U,
with respective optimal values v^P(w) and v^D(w). With this notation, the optimal values of the nominal problem P and its dual D are v^P(b) and v^D(b), respectively. Observe that now v^P(w), v^D(w) : R^T → R, so that we cannot expect simple counterparts of the results in Section 4 unless |T| < ∞. In fact, in LP, v^P(w), v^D(w) : R^{|T|} → R are ordinary homogeneous convex functions, so that Proposition 7 applies (observe that the parameter is now the gradient of the objective function of D, as in Section 4 but with the roles of the problems exchanged). In such a case, if there exists x* ∈ F* such that {a_t, t ∈ T(x*)} is a basis of R^n, then v^P(w) = c'x(w) on a certain neighborhood of b, where x(w) is the unique solution of the system {a_t'x = w_t, t ∈ T(x*)} (by Cramer's rule). Then dim C_b^P = |T| and v^P(w) is linear on a certain neighborhood of b.
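The Cramer's-rule remark can be made concrete on a hypothetical 2-variable LP (all data below invented for illustration) in which the constraints active at x* form a basis of R^2, so x(w), and hence v^P(w) = c'x(w), depends linearly on w near b.

```python
# Hypothetical LP: Inf c'x s.t. a_t'x >= w_t, t = 1, 2, 3, with
# T(x*) = {1, 2} a basis of R^2 and constraint 3 inactive at the optimum.
a = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
c = (1.0, 1.0)
b = (1.0, 1.0, 1.0)  # nominal RHS; x* = (1, 1), constraint 3 has slack

def x_of_w(w):
    """Solve a_1'x = w_1, a_2'x = w_2 by Cramer's rule (T(x*) = {1, 2})."""
    (a11, a12), (a21, a22) = a[0], a[1]
    det = a11 * a22 - a12 * a21
    return ((w[0] * a22 - a12 * w[1]) / det,
            (a11 * w[1] - w[0] * a21) / det)

def vP(w):
    """v^P(w) = c'x(w) near b, where the active set stays {1, 2}."""
    x = x_of_w(w)
    return c[0] * x[0] + c[1] * x[1]

# v^P is linear on a neighborhood of b: midpoint test along d = (1, -2, 0).
d = (1.0, -2.0, 0.0)
w_mid = tuple(b[i] + 0.05 * d[i] for i in range(3))
w_end = tuple(b[i] + 0.10 * d[i] for i in range(3))
assert abs(vP(w_mid) - 0.5 * (vP(b) + vP(w_end))) < 1e-12
```

The perturbation is kept small enough that constraint 3 remains inactive, which is what keeps the basis, and therefore the linear formula for x(w), unchanged.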
If T is infinite, the first difficulty comes from the fact that the perturbations of w affect the feasible set of the primal problem, and possibly its consistency, and the second from the infinite dimension of R^T, which does not allow us to use Proposition 3. In [8, Section 2] it is shown that, if v^P(w) is linear on a certain neighborhood of b (in the pseudometric space (R^T, δ)), then D has at most one optimal solution (the converse is true under strong assumptions). Moreover, v^P(w) is linear on a segment emanating from b in the direction of a bounded function d ∈ R^T \ {0_T} if P and D are solvable with the same optimal value, the problem

    P_d: Inf c'x + v^P(b)y
         s.t. a_t'x + b_t y ≥ d_t, t ∈ U,
              a_t'x + b_t y = d_t, t ∈ V

is also solvable and has zero duality gap, and either there exists an optimal solution of P_d, (z*, y*), such that y* ≥ 0, or there exists an optimal solution of P, x*, such that either T(x*) = T or there exist two scalars μ and η such that 0 < μ ≤ a_t'x* − b_t ≤ η for all t ∉ T(x*). This is the case, in particular, if |T| < ∞ and P and D_d are both bounded.
Lemma 4 Let {(b^i, x^i), i ∈ I} ⊂ R^T × R^n and λ ∈ R^(T) be such that (x^i, λ) is a complementary solution of P(b^i) − D(b^i) for all i ∈ I. Then P(w) and D(w) are solvable and

    v^P(w) = v^D(w) = Σ_{t∈T} λ_t w_t  for all w ∈ conv{b^i, i ∈ I}.    (7)

Proof: Let w = Σ_{i∈I} μ_i b^i, with Σ_{i∈I} μ_i = 1 and μ ∈ R^(I)_{+}.
We shall prove that x^w := Σ_{i∈I} μ_i x^i is a feasible solution of P(w). In fact, given i ∈ I, we have a_t'x^i ≥ b^i_t for all t ∈ U and a_t'x^i = b^i_t for all t ∈ V, so that a_t'x^w ≥ w_t for all t ∈ U and a_t'x^w = w_t for all t ∈ V.
On the other hand, if t ∈ U satisfies a_t'x^w > w_t, i.e., Σ_{i∈I} μ_i (a_t'x^i − b^i_t) > 0, then there exists j ∈ I such that μ_j (a_t'x^j − b^j_t) > 0, so that a_t'x^j − b^j_t > 0. Since (x^j, λ) is a complementary solution of P(b^j), we must have λ_t = 0.
We have shown that the primal-dual feasible solution (x^w, λ) of P(w) is a complementary solution of that problem. Applying Proposition 4 we get the desired conclusion. □
Proposition 8 Let conv{b^i, i ∈ I} be such that all the problems P(b^i), i ∈ I, have the same optimal partition. Then v^P(w) = v^D(w) is linear on conv{b^i, i ∈ I}.
Proof: It is a straightforward consequence of Lemma 4. □
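A minimal sketch of how Proposition 8 operates, on a hypothetical LP chosen so that the optimal partition is the same for every w: for Inf x_1 + x_2 s.t. x_1 ≥ w_1, x_2 ≥ w_2, both constraints are active at the optimum x(w) = w with multipliers λ = (1, 1) for every w, so Lemma 4's formula (7) gives v^P(w) = λ'w on any conv{b^i}.

```python
# Hypothetical LP with a w-independent optimal partition: both constraint
# indices always carry positive multipliers, so lam = (1, 1) for every w
# and the optimum is x(w) = w.
lam = (1.0, 1.0)

def vP(w):
    """Optimal value of Inf x1 + x2 s.t. x >= w, i.e. v^P(w) = w1 + w2."""
    return w[0] + w[1]

b1, b2 = (1.0, 3.0), (5.0, -1.0)
for mu in (0.0, 0.25, 0.5, 1.0):
    w = tuple((1 - mu) * b1[i] + mu * b2[i] for i in range(2))
    # formula (7): v^P(w) = sum_t lambda_t w_t on conv{b1, b2} ...
    assert abs(vP(w) - (lam[0] * w[0] + lam[1] * w[1])) < 1e-12
    # ... hence linearity: v^P((1-mu) b1 + mu b2) = (1-mu) v^P(b1) + mu v^P(b2)
    assert abs(vP(w) - ((1 - mu) * vP(b1) + mu * vP(b2))) < 1e-12
```

The example is deliberately degenerate; its only purpose is to show the fixed multiplier λ doing the work in (7).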
In particular, if b ∈ int conv {bi , i ∈ I} (e.g., the maximal partition is the
same for all the problems P (w) such that w belongs to a certain neighborhood of b), then D has a unique optimal solution. We can have v P (w) =
v D (w) linear (or even constant) on a certain neighborhood of b such that no
optimal partition exists on that neighborhood.
20
Example 8 (Example 5 revisited) Let w ∈ R^T be such that

    δ(w, b) = sup_{t∈T} |w(t) + 1| < 1.

It is easy to see that −2 < w(t) < 0 for all t ∈ T. Thus P(w) and P have the same characteristic cone

    K = {x ∈ R^3 | x_1 ≥ 0, x_2 ≥ 0, x_3 < 0} ∪ {0_3},

in which case

    v^P(w) = sup{γ ∈ R | (1, 1, γ) ∈ cl K} = 0

and

    v^D(w) = sup{γ ∈ R | (1, 1, γ) ∈ K} = 0.

Since 0 ∉ {γ ∈ R | (1, 1, γ) ∈ K}, D(w) is not solvable and so P(w) has no complementary solution.
Corollary 3 Given d ∈ R^T, if there exists ε > 0 such that P(b + εd) has the same optimal partition as P, then v^P(w) = v^D(w) is linear on [b, b + εd].
Proof: It follows from Lemma 4. □
Let us mention that the recent paper [3] provides an upper bound for v^D(b) − v^D(w) when D(b) is consistent and P(w) is also consistent for all w in some neighborhood of b.
6

Perturbing c and b

The main advantage of the optimal partition approach is that it allows us to study the simultaneous perturbation of the cost and RHS coefficients. We denote by (z, w) the result of perturbing the vector (c, b) (called the rim data in the LP literature). To do this we consider the parametric problem

    P(z, w): Inf z'x
             s.t. a_t'x ≥ w_t, t ∈ U,
                  a_t'x = w_t, t ∈ V,

and its corresponding dual
    D(z, w): Sup Σ_{t∈T} λ_t w_t
             s.t. Σ_{t∈T} λ_t a_t = z,
                  λ_t ≥ 0, t ∈ U.
In order to describe the behavior of the value functions of these problems we define a class of functions after giving a brief motivation. Let L be a linear space and let ϕ : L^2 → R be a bilinear form on L. Let C = conv{v_i, i ∈ I} ⊂ L and let q_{ij} := ϕ(v_i, v_j), (i, j) ∈ I^2. Then any v ∈ C can be expressed as

    v = Σ_{i∈I} μ_i v_i,  Σ_{i∈I} μ_i = 1,  μ ∈ R^(I)_{+}.    (8)

Then we have

    ϕ(v, v) = Σ_{i,j∈I} μ_i μ_j q_{ij}.    (9)

Accordingly, given q : C → R, where C = conv{v_i, i ∈ I} ⊂ L, we say that q is quadratic on C if there exist real numbers q_{ij}, i, j ∈ I, such that (9) holds for all v ∈ C satisfying (8).
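The identity (9) is a direct consequence of bilinearity, and it can be sanity-checked numerically; the bilinear form and the points v_i below are hypothetical, chosen only for illustration.

```python
# Hypothetical bilinear form on R^2: phi(u, v) = u1*v1 + 2*u1*v2 + 3*u2*v2,
# and C = conv{v_1, v_2, v_3}. We verify (9): phi(v, v) = sum mu_i mu_j q_ij.
def phi(u, v):
    return u[0] * v[0] + 2.0 * u[0] * v[1] + 3.0 * u[1] * v[1]

V = [(1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
q = [[phi(vi, vj) for vj in V] for vi in V]  # q_ij = phi(v_i, v_j)

mu = (0.2, 0.3, 0.5)  # convex weights as in (8); they sum to 1
v = tuple(sum(mu[i] * V[i][k] for i in range(3)) for k in range(2))

lhs = phi(v, v)
rhs = sum(mu[i] * mu[j] * q[i][j] for i in range(3) for j in range(3))
assert abs(lhs - rhs) < 1e-12  # (9): phi(v, v) = sum_{i,j} mu_i mu_j q_ij
```

Expanding ϕ(Σ μ_i v_i, Σ μ_j v_j) term by term is exactly what the double sum computes, which is why the definition of "quadratic on C" only has to postulate the numbers q_{ij}.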
Proposition 9 Let {(c^i, b^i), i ∈ I} ⊂ R^n × R^T be such that there exists a common optimal partition for the family of problems P(c^i, b^i), i ∈ I. Then P(z, w) and D(z, w) are solvable and v^P(z, w) = v^D(z, w) on conv{c^i, i ∈ I} × conv{b^i, i ∈ I}, and v^P(z, w) is quadratic on conv{(c^i, b^i), i ∈ I}.
Moreover, if (c, b) ∈ conv{c^i, i ∈ I} × conv{b^i, i ∈ I}, then v^P(z, b) and v^P(c, w) are linear on conv{c^i, i ∈ I} and conv{b^i, i ∈ I}, respectively.
Proof: Let (B, N, Z) be a common optimal partition of P(c^i, b^i) for all i ∈ I. Let (z, w) ∈ conv{c^i, i ∈ I} × conv{b^i, i ∈ I}. Then we can write

    z = Σ_{i∈I} δ_i c^i,  w = Σ_{i∈I} γ_i b^i,  Σ_{i∈I} δ_i = Σ_{i∈I} γ_i = 1,  δ, γ ∈ R^(I)_{+}.    (10)

Let (x^i, λ^i) ∈ R^n × R^(T) be a complementary solution of P(c^i, b^i) − D(c^i, b^i), i ∈ I, corresponding to (B, N, Z). We shall prove that x := Σ_{i∈I} γ_i x^i and λ := Σ_{i∈I} δ_i λ^i constitute a complementary solution of P(z, w).
Since a_t'x^i ≥ b^i_t for all t ∈ U and a_t'x^i = b^i_t for all t ∈ V, we have a_t'x ≥ w_t for all t ∈ U and a_t'x = w_t for all t ∈ V, i.e., x is a feasible solution of P(z, w).
On the other hand, λ^i_t ≥ 0 for all t ∈ U and all i ∈ I entails λ_t ≥ 0 for all t ∈ U, whereas Σ_{t∈T} λ^i_t a_t = c^i for all i ∈ I implies Σ_{t∈T} λ_t a_t = z.
We have shown that (x, λ) is a primal-dual feasible solution. Moreover, if t ∈ U satisfies a_t'x > w_t, i.e., Σ_{i∈I} γ_i (a_t'x^i − b^i_t) > 0, then there exists j ∈ I such that a_t'x^j > b^j_t. Thus, by the assumption on the optimal partition of the family of problems, t ∈ B and so λ^i_t = 0 for all i ∈ I. Hence λ_t = 0 and (x, λ) turns out to be a complementary solution of P(z, w). Then, according to Proposition 4, applied to P(z, w), we have that P(z, w) and D(z, w) are solvable and v^P(z, w) = v^D(z, w). Since (x, λ) is a primal-dual optimal solution, we have

    v^P(z, w) = x'z = Σ_{t∈T} λ_t w_t = v^D(z, w).    (11)
Let q_{ij} = (c^i)'x^j, i, j ∈ I, and let C := conv{(c^i, b^i), i ∈ I}. Let (z, w) = Σ_{i∈I} μ_i (c^i, b^i), with Σ_{i∈I} μ_i = 1 and μ ∈ R^(I)_{+}. Then, since we can take δ_i = γ_i = μ_i in (10), (11) yields

    v^P(z, w) = (Σ_{i∈I} μ_i c^i)' (Σ_{j∈I} μ_j x^j) = Σ_{i,j∈I} μ_i μ_j q_{ij}.
Now assume that (c, b) ∈ conv{c^i, i ∈ I} × conv{b^i, i ∈ I}.
Let b = Σ_{i∈I} γ_i b^i, with Σ_{i∈I} γ_i = 1, γ ∈ R^(I)_{+}. Then x := Σ_{i∈I} γ_i x^i is constant and (11) yields v^P(z, b) = z'x for all z ∈ conv{c^i, i ∈ I}. Similarly, v^P(c, w) = Σ_{t∈T} λ_t w_t if w ∈ conv{b^i, i ∈ I}, with λ fixed, and this is a linear function of w. □
Obviously, if (c, b) ∈ int conv{(c^i, b^i), i ∈ I}, then v^P(z, w) = v^D(z, w) is quadratic on a neighborhood of (c, b). In particular, if the problems P(z, w) have a common optimal partition when (z, w) ranges over a certain neighborhood of (c, b), then we can assert that P has a strongly unique solution and D has a unique solution. In Example 5, v^P(z, w) = v^D(z, w) = 0 for all (z, w) such that δ(w, b) < 1 and ‖z − c‖ < 1. Nevertheless, the only perturbed problems which have an optimal partition are of the form P(0_n, w), so that the condition in Proposition 9 fails.
Corollary 4 Given (d, f) ∈ R^n × R^T, if there exists ε > 0 such that the problem P((c, b) + ε(d, f)) has the same maximal optimal partition as P, then v^P(z, w) = v^D(z, w) is quadratic on the interval [(c, b), (c, b) + ε(d, f)]. Moreover, v^P(z, b) (v^P(c, w)) is a linear function of z on [c, c + εd] (of w on [b, b + εf], respectively).
Proof: It is an immediate consequence of Proposition 9. □
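The pattern in Corollary 4, quadratic along a rim segment but linear in z or w separately, can be seen on a hypothetical LP with a constant optimal partition: for Inf z'x s.t. x_1 ≥ w_1, x_2 ≥ w_2 with z ∈ R^2_{++}, the optimum is x = w, so v^P(z, w) = z'w. All data below are invented for illustration.

```python
# Hypothetical LP: Inf z'x s.t. x >= w, z in R^2_{++}; then x(w) = w and
# v^P(z, w) = z'w, a bilinear (hence quadratic-along-segments) value.
def vP(z, w):
    return z[0] * w[0] + z[1] * w[1]

c, b = (1.0, 2.0), (3.0, 1.0)
d, f = (0.5, -0.25), (-1.0, 2.0)  # rim direction (d, f); z stays in R^2_{++}

def v_along(t):
    """v^P along the rim segment (c, b) + t (d, f)."""
    z = (c[0] + t * d[0], c[1] + t * d[1])
    w = (b[0] + t * f[0], b[1] + t * f[1])
    return vP(z, w)

# A quadratic in t has a constant second finite difference:
h = 0.1
dd = [v_along(t) - 2 * v_along(t + h) + v_along(t + 2 * h) for t in (0.0, 0.3)]
assert abs(dd[0] - dd[1]) < 1e-12

# With b fixed, v^P(., b) is linear in z (here additivity is checked):
assert abs(vP((2.0, 3.0), b) - (vP((1.0, 1.0), b) + vP((1.0, 2.0), b))) < 1e-12
```

The constant second difference is exactly the t² term d'f of v_along(t) = c'b + t(c'f + d'b) + t² d'f, the quadratic behavior asserted by the corollary.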
7

Conclusions
In this paper we examine the linearity of the primal and the dual optimal value functions (which can be different in LSIP) relative to the size of perturbations of the cost vector, the RHS vector, or both, on convex subsets of their effective domain. The new results on sensitivity analysis in LSIP in Sections 4-6 have been obtained by means of two different partition approaches whose fundamentals are developed in Sections 2 and 3:
1. Partition of the domain of the optimal value functions in maximal relatively open convex cones where they are linear (the so-called linearity cones). The partition corresponding to the primal value function only depends on the primal feasible set, whereas the one corresponding to the dual optimal value function depends on the constraints. The advantage of this approach is that it provides a significant insight into the optimal value functions. The inconveniences are, first, that this approach only applies to perturbations of c and, second, that computing linearity cones may be a difficult task in practice.
2. Optimal partitions of the index set of the inequality constraints. The advantage of this approach is that it yields sufficient conditions for the linearity of the optimal value functions on a variety of convex sets for the three types of perturbations considered in this paper. The multiplicity of optimal partitions and the possible lack of a maximal partition in LSIP is the main difficulty when checking these sufficient conditions in practice (at least in comparison with LP).
A third approach to sensitivity analysis in LSIP, valid for perturbations of b or c (but not both), has been sketched at the beginning of Sections 4 and 5, where we recall the corresponding extensions of Gauvin's formulae [5]. The main inconvenience of this approach is that it only provides linearity tests for the optimal value functions on segments, and its main advantage consists of the fact that these tests also provide directional derivatives in the direction of the corresponding segment.
References
[1] I. Adler and R. Monteiro, A geometric view of parametric linear programming, Algorithmica 8 (1992) 161-176.
[2] A. Berkelaar, C. Roos, and T. Terlaky, The optimal set and optimal partition approach to linear and quadratic programming, Recent Advances
in Sensitivity Analysis and Parametric Programming (T. Gal and H.
Greenberg, eds.), Kluwer, Dordrecht, 1997, pp.1-44.
[3] M. J. Cánovas, M. A. López, J. Parra, and F. J. Toledo, An asymptotic
approach to dual sensitivity in linear semi-infinite optimization, Tech.
Report, Dept. of Statistics and Operations Research, Alicante University, 2006.
[4] T. Gal, Postoptimal Analysis, Parametric Programming, and Related Topics: Degeneracy, Multicriteria Decision Making, Redundancy (2nd ed),
Walter de Gruyter & Co., New York, NY, 1995.
[5] J. Gauvin, Formulae for the sensitivity analysis of linear programming
problems, Approximation, Optimization and Mathematical Economics
(M. Lassonde, ed.), Physica-Verlag, Berlin, 2001, pp. 117-120.
[6] A. Ghaffari Hadigheh and T. Terlaky, Sensitivity analysis in linear optimization: invariant support set intervals, European J. of Oper. Res.
169 (2006) 1158-1175.
[7] A. Ghaffari Hadigheh, O. Romanko, and T. Terlaky, Sensitivity analysis in convex quadratic optimization: simultaneous perturbation of the
objective and right-hand-side vectors, Algorithmic Oper. Res., to appear.
[8] M. A. Goberna, S. Gómez, F. Guerra, and M. I. Todorov, Sensitivity
analysis in linear semi-infinite programming: perturbing cost and right-hand-side coefficients, European J. Oper. Res., online 2 May 2006.
[9] M. A. Goberna and M. A. López, Linear Semi-Infinite Optimization,
Wiley, Chichester, 1998.
[10] H. Greenberg, The use of the optimal partition in a linear programming
solution for postoptimal analysis, Oper. Res. Letters 15 (1994) 179-185.
[11] H. Greenberg, Matrix sensitivity analysis from an interior solution of a
linear program, INFORMS J. on Computing 11 (1999) 316-327.
[12] H. Greenberg, Simultaneous primal-dual right-hand-side sensitivity analysis from a strict complementary solution of a linear program, SIAM J.
Optimization 10 (2000) 427-442.
[13] H. Greenberg, A. Holder, C. Roos, and T. Terlaky, On the dimension of
the set of rim perturbations for optimal partition invariance, SIAM J.
Optimization 9 (1998) 207-216.
[14] B. Jansen, J. J. de Jong, C. Roos, and T. Terlaky, Sensitivity analysis in
linear programming: just be careful!, European J. Oper. Res. 101 (1997)
15-28.
[15] B. Jansen, C. Roos, and T. Terlaky, An interior point approach to
postoptimal and parametric analysis in linear programming, Tech. Report, Eötvös University, Budapest, Hungary, 1992.
[16] B. Jansen, C. Roos, T. Terlaky, and J.-Ph. Vial, Interior-point methodology for linear programming: duality, sensitivity analysis and computational aspects, Tech. Report 93-28, Delft University of Technology,
Faculty of Technical Mathematics and Computer Science, Delft, Netherlands, 1993.
[17] R. Monteiro and S. Mehrotra, A generalized parametric analysis approach
and its implication to sensitivity analysis in interior point methods,
Mathematical Programming 72 (1996) 65-82.
[18] C. Roos, T. Terlaky, and J.-Ph. Vial, Theory and Algorithms for Linear
optimization: An Interior Point Approach, Wiley, Chichester, 1997.
[19] C. Roos, T. Terlaky, and J.-Ph. Vial, Interior Point Methods for Linear
Optimization (2nd ed), Springer, New York, NY, 2006.