OPTIMAL VALUE BOUNDS IN NONLINEAR PROGRAMMING
WITH INTERVAL DATA
Milan Hladík
Charles University, Faculty of Mathematics and Physics, Department of Applied Mathematics
Malostranské nám. 25, 118 00, Prague, Czech Republic
E-mail: milan.hladik@matfyz.cz
Abstract. We are concerned with nonlinear programming problems whose input data vary within real compact intervals. The question is to determine bounds on the optimal values. We present a general approach in which, under certain assumptions, the lower and upper bounds are computable by solving two optimization problems. Even though these two optimization problems are hard to solve in general, we show that for some particular subclasses they can be reduced to easy problems. The subclasses considered are convex quadratic programming and posynomial geometric programming.
Keywords: interval systems, nonlinear programming, optimal value range, interval matrix.
1. Introduction
Mathematical programming is widely used to model practical problems. In real-world applications, however, input data are not always known exactly and are subject to diverse uncertainties. Various approaches have been developed to deal with uncertainty. Within this paper we assume that we are given interval estimates of the problem quantities.
We focus on nonlinear programming problems under interval uncertainty, and our aim is to compute the range of optimal values over all instances of the interval quantities. This knowledge provides the decision maker with useful information for making more appropriate decisions.
We generalize the result of (Hladík, 2007) on the optimal value range of interval linear programming problems. Our approach is applicable to various classes of nonlinear programs with interval data; however, better approximations are obtained when the duality gap is zero and dependencies between quantities do not occur. We discuss some special cases later in this paper.
First, we introduce some notation from interval analysis (Alefeld & Herzberger, 1983). An interval matrix is defined as $[A] := \{A \in \mathbb{R}^{m \times n} : \underline{A} \le A \le \overline{A}\}$, where $\underline{A} \le \overline{A}$ are known matrices. By convention, $n$-dimensional interval vectors are interval matrices of size $n \times 1$. By $\mathrm{mid}(A) := \frac{1}{2}(\underline{A} + \overline{A})$ and $\mathrm{rad}(A) := \frac{1}{2}(\overline{A} - \underline{A})$ we denote the midpoint and the radius of $[A]$, respectively.
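These entrywise operations are straightforward to implement; the following is a minimal sketch (plain Python, with nested lists standing in for matrices):

```python
# Midpoint and radius of an interval matrix [A] given by its endpoint
# matrices A_lo <= A_hi, computed entrywise:
#   mid(A) = (A_lo + A_hi) / 2,   rad(A) = (A_hi - A_lo) / 2.
def mid(A_lo, A_hi):
    return [[(lo + hi) / 2 for lo, hi in zip(row_lo, row_hi)]
            for row_lo, row_hi in zip(A_lo, A_hi)]

def rad(A_lo, A_hi):
    return [[(hi - lo) / 2 for lo, hi in zip(row_lo, row_hi)]
            for row_lo, row_hi in zip(A_lo, A_hi)]

A_lo = [[1, -1], [0, 2]]
A_hi = [[3, 1], [0, 4]]
print(mid(A_lo, A_hi))  # [[2.0, 0.0], [0.0, 3.0]]
print(rad(A_lo, A_hi))  # [[1.0, 1.0], [0.0, 1.0]]
```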
2. General approach
By an interval nonlinear program we mean the family of nonlinear programs
$$f(A, c) := \inf f_c(x) \ \text{subject to}\ F_A(x) \le 0, \qquad (1)$$
where $f_c : \mathbb{R}^n \to \mathbb{R}$ is a real function with input data $c$ varying inside an interval vector $[c]$, and $F_A : \mathbb{R}^n \to \mathbb{R}^m$ is a vector function with input data $A$ varying inside an interval matrix $[A]$. The lower and upper bounds of the optimal value are defined as
$$\underline{f} := \inf f(A, c) \ \text{subject to}\ A \in [A],\ c \in [c],$$
$$\overline{f} := \sup f(A, c) \ \text{subject to}\ A \in [A],\ c \in [c].$$
Let us consider a dual problem to (1) of the form
$$g(A, c) := \sup g_{A,c}(y) \ \text{subject to}\ G_{A,c}(y) \le 0,$$
where $g_{A,c} : \mathbb{R}^k \to \mathbb{R}$ and $G_{A,c} : \mathbb{R}^k \to \mathbb{R}^h$ are functions depending on $A \in [A]$ and $c \in [c]$. The weak duality property holds if
$$f(A, c) \ge g(A, c) \quad \forall A \in [A],\ c \in [c].$$
Strong duality means that a finite optimal value of one problem ensures the existence of an optimal solution of the other, and that their optimal objective values are equal. The most restrictive notion of zero duality gap refers to the equation
$$f(A, c) = g(A, c) \quad \forall A \in [A],\ c \in [c].$$
Suppose we are given a dual problem with the weak duality property. Strong duality may not hold, but a tighter approximation is achieved under the strong duality or zero duality gap assumption.
Next, denote the functions
$$\underline{f}(x) := \inf f_c(x) \ \text{subject to}\ c \in [c],$$
$$\overline{g}(y) := \sup g_{A,c}(y) \ \text{subject to}\ c \in [c],\ A \in [A],$$
and the so-called solution sets
$$M := \{x \in \mathbb{R}^n : F_A(x) \le 0 \ \text{for some}\ A \in [A]\},$$
$$N := \{y \in \mathbb{R}^k : G_{A,c}(y) \le 0 \ \text{for some}\ A \in [A],\ c \in [c]\}.$$
We are now ready to formulate our main result on how to compute the lower and upper bounds of the optimal value function.
Theorem 1. We have
$$\underline{f} = \inf \underline{f}(x) \ \text{subject to}\ x \in M.$$
If the zero duality gap is guaranteed, then
$$\overline{f} \le \sup \overline{g}(y) \ \text{subject to}\ y \in N. \qquad (2)$$
If the functions $g_{A,c}(y)$ and $G_{A,c}(y)$ have no interval parameter in common, then
$$\overline{f} \ge \sup \overline{g}(y) \ \text{subject to}\ y \in N. \qquad (3)$$
Proof.
1. (lower bound) From the definition of the lower bound it follows that
$$\underline{f} = \inf f(A, c) \ \text{s.t.}\ A \in [A],\ c \in [c]$$
$$= \inf f_c(x) \ \text{s.t.}\ A \in [A],\ c \in [c],\ F_A(x) \le 0$$
$$= \inf f_c(x) \ \text{s.t.}\ x \in M,\ c \in [c]$$
$$= \inf \underline{f}(x) \ \text{s.t.}\ x \in M.$$
2. (upper bound) Under the zero duality gap assumption, for every $A \in [A]$ and $c \in [c]$ one has
$$f(A, c) = g(A, c).$$
Maximizing both sides of the equation over $c \in [c]$ and $A \in [A]$ yields
$$\sup f(A, c) \ \text{s.t.}\ A \in [A],\ c \in [c] = \sup g(A, c) \ \text{s.t.}\ A \in [A],\ c \in [c]$$
$$= \sup g_{A,c}(y) \ \text{s.t.}\ A \in [A],\ c \in [c],\ G_{A,c}(y) \le 0$$
$$\le \sup \overline{g}(y) \ \text{s.t.}\ y \in N.$$
The proof of inequality (3) is a slight modification of the previous argument. From weak duality we have that
$$f(A, c) \ge g(A, c)$$
holds for every $A \in [A]$ and $c \in [c]$. Maximizing both sides of the inequality over $c \in [c]$ and $A \in [A]$ yields
$$\sup f(A, c) \ \text{s.t.}\ A \in [A],\ c \in [c] \ge \sup g(A, c) \ \text{s.t.}\ A \in [A],\ c \in [c]$$
$$= \sup g_{A,c}(y) \ \text{s.t.}\ A \in [A],\ c \in [c],\ G_{A,c}(y) \le 0$$
$$= \sup \overline{g}(y) \ \text{s.t.}\ y \in N,$$
where the last equality holds because $g_{A,c}$ and $G_{A,c}$ share no interval parameter. ■
It is a simple consequence of Theorem 1 that if the zero duality gap is assured and the functions $g_{A,c}(y)$ and $G_{A,c}(y)$ have no interval parameter in common, then
$$\overline{f} = \sup \overline{g}(y) \ \text{subject to}\ y \in N.$$
The applicability of Theorem 1 depends highly on how efficiently we are able to compute the functions $\underline{f}(x)$ and $\overline{g}(y)$ and the solution sets $M$ and $N$. In general this is a great challenge, but for some special nonlinear problems we quite easily obtain simple structures. In the following subsections we apply the main result to convex quadratic programming and posynomial geometric programming.
Similar results for these two cases were developed independently by (Liu & Wang, 2007) and (Liu, 2006), respectively; due to the space limitations of this paper we refer the reader to the original papers. Nevertheless, our approach is not only more general and applicable to other mathematical programming problems, but also more efficient. The differences are discussed in the subsections in detail.
2.1 Convex quadratic programming
Consider a quadratic programming problem
$$\inf x^T C x + d^T x \ \text{subject to}\ Ax \le b,\ x \ge 0,$$
where $C \in \mathbb{R}^{n \times n}$ is a positive semidefinite matrix. Its Dorn dual (Bazaraa et al., 1993) is
$$\sup\, -x^T C x - b^T u \ \text{subject to}\ 2Cx + A^T u + d \ge 0,\ u \ge 0.$$
First, suppose that the matrix $C$ is fixed, and $A$, $b$ and $d$ vary within given interval matrices $[A]$, $[b]$ and $[d]$, respectively. Bounds on the optimal value are computable by Theorem 1 as
$$\underline{f} = \inf x^T C x + \underline{d}^T x \ \text{subject to}\ \underline{A}x \le \overline{b},\ x \ge 0,$$
$$\overline{f} = \sup\, -x^T C x - \underline{b}^T u \ \text{subject to}\ 2Cx + \overline{A}^T u + \overline{d} \ge 0,\ u \ge 0,$$
where the feasible sets are described by using Theorem 2.22 from (Rohn, 2006).
Now we deal with the case when the matrix $C$ varies within a given interval matrix $[C]$. Since $x^T \underline{C} x + \underline{d}^T x \le x^T C x + d^T x$ for every $C \in [C]$, $d \in [d]$ and every feasible point $x$, the function $\underline{f}(x)$ is given by $\underline{f}(x) = x^T \underline{C} x + \underline{d}^T x$, and thus the lower bound is obtained accordingly by
$$\underline{f} = \inf x^T \underline{C} x + \underline{d}^T x \ \text{subject to}\ \underline{A}x \le \overline{b},\ x \ge 0. \qquad (4)$$
The upper bound can be approximated neither by (2), as the duality gap is not always zero, nor by (3), as the interval matrix $C$ appears in both the objective and the constraints. Nevertheless, a nonzero duality gap happens rarely, and we can formulate some sufficient conditions.
The only case in which the duality gap is nonzero is when both the primal and the dual problem are infeasible. Checking their infeasibility for all instances of the interval data is a hard problem; we can, however, use the following sufficient conditions:
1. From Theorem 2.26 in (Rohn, 2006) we have that if the inequality system $\overline{A}x \le \underline{b}$, $x \ge 0$ is feasible (has a solution), then the primal problem is feasible for all $A \in [A]$ and $b \in [b]$, and hence the duality gap is zero.
2. From Theorem 2.23 in (Rohn, 2006) we get that if the inequality system $2\underline{C}x^1 - 2\overline{C}x^2 + \underline{A}^T u + \underline{d} \ge 0$, $u, x^1, x^2 \ge 0$ is feasible, then the dual problem is feasible for all $A \in [A]$, $C \in [C]$ and $d \in [d]$, and hence the duality gap is zero.
3. The duality gap is zero if the matrix $C$ is positive definite for all $C \in [C]$.
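Condition 3 admits a cheap conservative check: for symmetric matrices, $\lambda_{\min}(\mathrm{mid}(C)) > \rho(\mathrm{rad}(C))$ is a well-known sufficient (not necessary) condition for every $C \in [C]$ to be positive definite, since $\lambda_{\min}(C) \ge \lambda_{\min}(\mathrm{mid}(C)) - \rho(\mathrm{rad}(C))$. A sketch assuming NumPy, with illustrative data:

```python
import numpy as np

# Sufficient test for positive definiteness of all symmetric C in [C]:
# every C = mid(C) + E with |E| <= rad(C) entrywise, hence
# lambda_min(C) >= lambda_min(mid(C)) - rho(rad(C)).
def all_pd_sufficient(C_lo, C_hi):
    C_mid = (C_lo + C_hi) / 2
    C_rad = (C_hi - C_lo) / 2
    lam_min = np.linalg.eigvalsh(C_mid)[0]          # smallest eigenvalue
    rho = np.abs(np.linalg.eigvalsh(C_rad)).max()   # spectral radius
    return lam_min > rho

# a small 2x2 interval matrix with only the (1,1) entry uncertain
C_lo = np.array([[2.0, -1.0], [-1.0, 2.0]])
C_hi = np.array([[3.0, -1.0], [-1.0, 2.0]])
print(all_pd_sufficient(C_lo, C_hi))  # True
```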
In the case when the duality gap is zero, we can approximate the upper bound of the optimal value function via (2) as follows:
$$\overline{f} \le \sup\, -x^T \mathrm{mid}(C) x + |x|^T \mathrm{rad}(C)\, |x| - \underline{b}^T u$$
$$\text{subject to}\ 2\,\mathrm{mid}(C) x + 2\,\mathrm{rad}(C)\, |x| + \overline{A}^T u + \overline{d} \ge 0,\ u \ge 0.$$
Notice that the feasible set is described by using the Gerlach theorem (see e.g. Rohn, 2006), and the objective function
$$\overline{g}(x, u) = \sup\, -x^T C x - b^T u \ \text{subject to}\ C \in [C],\ b \in [b]$$
$$= -x^T \mathrm{mid}(C) x + |x|^T \mathrm{rad}(C)\, |x| - \underline{b}^T u$$
is obtained by inspection.
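The "by inspection" step can be checked numerically: the inner maximum of $-x^T C x$ over $C \in [C]$ is attained entrywise at a vertex of $[C]$, and equals the closed form above. A small brute-force sketch (assuming NumPy; the interval data are illustrative):

```python
import itertools
import numpy as np

# Closed form of sup over C in [C] of -x^T C x:
#   -x^T mid(C) x + |x|^T rad(C) |x|.
def g_obj(x, C_mid, C_rad):
    ax = np.abs(x)
    return -x @ C_mid @ x + ax @ C_rad @ ax

# brute-force comparison over all vertex matrices of a 2x2 interval matrix
C_lo = np.array([[2.0, -1.0], [-1.0, 2.0]])
C_hi = np.array([[3.0, 0.0], [0.0, 2.5]])
C_mid, C_rad = (C_lo + C_hi) / 2, (C_hi - C_lo) / 2
x = np.array([1.0, -2.0])

best = max(
    -x @ np.array(V).reshape(2, 2) @ x
    for V in itertools.product(*[(lo, hi)
                                 for lo, hi in zip(C_lo.ravel(), C_hi.ravel())])
)
print(np.isclose(best, g_obj(x, C_mid, C_rad)))  # True
```

The maximum over each entry $C_{ij}$ is taken independently, which is why a vertex of the interval matrix always attains it.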
Nevertheless, a better result can be derived. As $x^T C x \le x^T \overline{C} x$ for every $C \in [C]$ and every feasible point $x$, the highest optimal value is obtained for $C = \overline{C}$. Hence
$$\overline{f} = \sup\, -x^T \overline{C} x - \underline{b}^T u \ \text{subject to}\ 2\overline{C}x + \overline{A}^T u + \overline{d} \ge 0,\ u \ge 0. \qquad (5)$$
If the zero duality gap is not guaranteed, we have only an approximation from below:
$$\overline{f} \ge \sup\, -x^T \overline{C} x - \underline{b}^T u \ \text{subject to}\ 2\overline{C}x + \overline{A}^T u + \overline{d} \ge 0,\ u \ge 0.$$
Both programs (4) and (5) are simple convex quadratic programming problems. In contrast to (Liu & Wang, 2007), we use far fewer variables, and we are also able to deal with uncertainty in the matrix $C$. Moreover, we take into account the possibility of a nonzero duality gap.
Example 1. Consider the convex quadratic program with input data given by
$$C = \begin{pmatrix} [2,3] & -1 \\ -1 & 2 \end{pmatrix},\quad d = \begin{pmatrix} [-5,-3] \\ [1,2] \end{pmatrix},\quad A = \begin{pmatrix} [1,2] & -1 \\ [-2,3] & [-1,0.5] \end{pmatrix},\quad b = \begin{pmatrix} [2,4] \\ [3,4] \end{pmatrix},$$
which is a slight modification of the example in (Liu & Wang, 2007).
By (4), the lower bound of the optimal value function is
$$\underline{f} = \inf x^T \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} x + \begin{pmatrix} -5 \\ 1 \end{pmatrix}^T x \ \text{subject to}\ \begin{pmatrix} 1 & -1 \\ -2 & -1 \end{pmatrix} x \le \begin{pmatrix} 4 \\ 4 \end{pmatrix},\ x \ge 0,$$
which yields $\underline{f} = -3.5$.
Now we verify that the duality gap is zero for all instances of the interval data. This is easily seen, as each of the three sufficient conditions is fulfilled. Thus the upper bound of the optimal value function is computable by (5), and we obtain
$$\overline{f} = \sup\, -x^T \begin{pmatrix} 3 & -1 \\ -1 & 2 \end{pmatrix} x - \begin{pmatrix} 2 \\ 3 \end{pmatrix}^T u \ \text{subject to}\ \begin{pmatrix} 6 & -2 \\ -2 & 4 \end{pmatrix} x + \begin{pmatrix} 2 & 3 \\ -1 & 0.5 \end{pmatrix} u + \begin{pmatrix} -3 \\ 2 \end{pmatrix} \ge 0,\ u \ge 0,$$
which yields $\overline{f} = -0.5$.
So the optimal value function varies in the interval $[-3.5, -0.5]$.
2.2 Posynomial geometric programming
A posynomial geometric program is a problem of the form (Bazaraa et al., 1993; Boţ et al., 2006; Liu, 2006)
$$\inf \sum_{i \in I_0} c_i \prod_{j=1}^{n} x_j^{a_{ij}}$$
$$\text{subject to}\ \sum_{i \in I_k} c_i \prod_{j=1}^{n} x_j^{a_{ij}} \le 1,\quad k = 1, \dots, m, \qquad (6)$$
$$x_j > 0,\quad j = 1, \dots, n.$$
By definition, $c_i > 0$ for all $i = 1, \dots, p$. The index sets number the terms sequentially, that is, they are of the form $I_0 = \{1, \dots, p_0\}$, $I_1 = \{p_0 + 1, \dots, p_1\}$, …, $I_m = \{p_{m-1} + 1, \dots, p\}$. The coefficients vary within given intervals as follows: $c_i \in [c_i]$, $\underline{c}_i > 0$, $i = 1, \dots, p$, and $a_{ij} \in [a_{ij}]$, $i = 1, \dots, p$, $j = 1, \dots, n$. The corresponding dual problem is
$$\sup \prod_{i=1}^{p} \left(\frac{c_i}{y_i}\right)^{y_i} \prod_{k=1}^{m} z_k^{z_k}$$
$$\text{subject to}\ \sum_{i \in I_0} y_i = 1,$$
$$\sum_{i \in I_k} y_i = z_k,\quad k = 1, \dots, m,$$
$$\sum_{i=1}^{p} a_{ij} y_i = 0,\quad j = 1, \dots, n,$$
$$y_i, z_k \ge 0,\quad i = 1, \dots, p,\ k = 1, \dots, m.$$
The duality gap is zero as long as the dual problem is feasible and the primal problem has an interior feasible point.
Now, suppose that only the coefficients $c_i$, $i = 1, \dots, p$, vary inside given intervals $[c_i]$; see (Liu, 2006). Computing the optimal value range is an easy task, as the lower bound is attained by (6) with $c_i = \underline{c}_i$, and the upper bound with $c_i = \overline{c}_i$.
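The reason is elementary: for fixed $x > 0$ every posynomial is increasing in each $c_i$, so each instance's objective, and hence the optimal value, is sandwiched between the $c = \underline{c}$ and $c = \overline{c}$ instances. A tiny illustration (plain Python; the data are made up):

```python
import math

def posynomial(c, a, x):
    """Evaluate sum_i c[i] * prod_j x[j] ** a[i][j] for x > 0."""
    return sum(ci * math.prod(xj ** aij for xj, aij in zip(x, ai))
               for ci, ai in zip(c, a))

a = [[1.0, -2.0], [0.5, 1.0]]          # exponents a_ij (fixed here)
c_lo, c_hi = [1.0, 2.0], [1.5, 2.5]    # interval coefficients [c_i]
c_mid = [1.2, 2.2]                     # some instance inside [c]

x = [0.7, 1.3]                         # any positive point
lo, v, hi = (posynomial(c, a, x) for c in (c_lo, c_mid, c_hi))
print(lo <= v <= hi)  # True
```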
The situation is more complicated when the coefficients $a_{ij}$, $i = 1, \dots, p$, $j = 1, \dots, n$, also vary inside some intervals $[a_{ij}]$. First, we focus on the lower bound.
To use Theorem 1 we have to determine the function $\underline{f}(x)$ and the set $M$. As the variable $x_j$ is positive for every $j = 1, \dots, n$, we immediately see that the parameters $c_i$, $i = 1, \dots, p$, attain their lower values $\underline{c}_i$. The parameter $a_{ij}$ attains its lower value if $x_j \ge 1$ and its upper value if $x_j \le 1$, which we formulate as $a_{ij} = \mathrm{mid}(a_{ij}) - \mathrm{rad}(a_{ij})\,\mathrm{sgn}(\log(x_j))$. Therefore the function $\underline{f}(x)$ can be written as
$$\underline{f}(x) = \sum_{i \in I_0} \underline{c}_i \prod_{j=1}^{n} x_j^{\mathrm{mid}(a_{ij}) - \mathrm{rad}(a_{ij})\,\mathrm{sgn}(\log(x_j))},$$
and the set $M$ is described analogously by the constraints
$$\sum_{i \in I_k} \underline{c}_i \prod_{j=1}^{n} x_j^{\mathrm{mid}(a_{ij}) - \mathrm{rad}(a_{ij})\,\mathrm{sgn}(\log(x_j))} \le 1,\quad k = 1, \dots, m,$$
$$x_j > 0,\quad j = 1, \dots, n.$$
According to Theorem 1, the lower bound of the optimal value function is
$$\underline{f} = \inf \underline{f}(x) \ \text{subject to}\ x \in M.$$
Notice that this optimization problem is not easy in general. What we can do, for instance, is decompose it into $2^n$ sub-problems according to the signs of $\log(x_j)$. Each such sub-problem is a geometric program restricted to an orthant.
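The decomposition can be sketched as follows: enumerate the $2^n$ sign patterns of $\log(x_j)$; each pattern fixes every exponent and yields one ordinary sub-problem on the corresponding region (plain Python, illustrative data):

```python
import itertools

# Each sign pattern s in {-1, +1}^n fixes sgn(log x_j) = s_j, hence the
# exponents a_ij = mid(a_ij) - rad(a_ij) * s_j, giving an ordinary
# geometric program on {x > 0 : x_j >= 1 if s_j = 1, x_j <= 1 if s_j = -1}.
def orthant_subproblems(a_lo, a_hi):
    n = len(a_lo[0])
    for s in itertools.product((-1, 1), repeat=n):
        exps = [[(lo + hi) / 2 - (hi - lo) / 2 * sj
                 for lo, hi, sj in zip(row_lo, row_hi, s)]
                for row_lo, row_hi in zip(a_lo, a_hi)]
        yield s, exps

# one term (p = 1), two variables (n = 2), interval exponents
subs = list(orthant_subproblems([[-1.0, 0.0]], [[2.0, 1.0]]))
print(len(subs))  # 4  (= 2^n with n = 2)
```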
Surprisingly, the upper bound of the optimal value function is much easier to compute than the lower bound. Suppose that the zero duality gap is assured for all instances of the interval data. By Theorem 1,
$$\overline{f} = \sup \overline{g}(y, z) \ \text{subject to}\ (y, z) \in N.$$
The function $\overline{g}(y, z)$ is computable as
$$\overline{g}(y, z) = \prod_{i=1}^{p} \left(\frac{\overline{c}_i}{y_i}\right)^{y_i} \prod_{k=1}^{m} z_k^{z_k}.$$
Using Theorem 2.13 in (Rohn, 2006), we get that the set $N$ is described by the system
$$\sum_{i \in I_0} y_i = 1,$$
$$\sum_{i \in I_k} y_i = z_k,\quad k = 1, \dots, m,$$
$$\sum_{i=1}^{p} \underline{a}_{ij} y_i \le 0,\quad j = 1, \dots, n,$$
$$\sum_{i=1}^{p} \overline{a}_{ij} y_i \ge 0,\quad j = 1, \dots, n,$$
$$y_i, z_k \ge 0,\quad i = 1, \dots, p,\ k = 1, \dots, m.$$
Thus, we obtain an optimization problem that is efficiently solvable. This problem has fewer variables than the optimization problem proposed by (Liu, 2006). Our approach also takes into account possible uncertainty in the exponents $a_{ij}$, $i = 1, \dots, p$, $j = 1, \dots, n$.
3. Conclusions
We proposed a general approach for computing the range of the optimal value function of nonlinear programming problems under interval uncertainty. Given an appropriate duality and assuming there are no dependencies between the interval quantities, we are able to determine the optimal value range in question by solving two optimization problems. These two problems are easy to solve at least for some particular classes of nonlinear programs. We discussed the approach for convex quadratic programs and posynomial geometric programs. We have seen, however, that exploiting the special structure of these cases yields better results than the general approach does. In these cases, we also obtained somewhat stronger results than the ones proposed in (Liu & Wang, 2007) and (Liu, 2006).
References
Alefeld, G., Herzberger, J. (1983) Introduction to Interval Computations, Academic Press, London.
Bazaraa, M. S., Sherali, H. D., Shetty, C. M. (1993) Nonlinear Programming: Theory and Algorithms, 2nd ed., Wiley, New York.
Boţ, R. I., Grad, S. M., Wanka, G. (2006) "Fenchel-Lagrange duality versus geometric duality in convex optimization", Journal of Optimization Theory and Applications, Vol. 129 No. 1, pp. 33-54.
Hladík, M. (2007) "Optimal value range in interval linear programming", submitted to Fuzzy Optimization and Decision Making.
Liu, S.-T. (2006) "Posynomial geometric programming with parametric uncertainty", European Journal of Operational Research, Vol. 168 No. 2, pp. 345-353.
Liu, S.-T., Wang, R.-T. (2007) "A numerical solution method to interval quadratic programming", Applied Mathematics and Computation, Vol. 189 No. 2, pp. 1274-1281.
Rohn, J. (2006) "Solvability of systems of interval linear equations and inequalities", in Fiedler, M., et al., Linear Optimization Problems with Inexact Data, Springer, New York, pp. 35-77.