NONDIFFERENTIABLE MINIMAX FRACTIONAL PROGRAMMING WITH SQUARE ROOT TERMS

Anton BĂTĂTORESCU, Vasile PREDA and Miruna BELDIMAN
University of Bucharest, Faculty of Mathematics and Computer Science
14 Academiei str., 010014 - Bucharest
E-mail: batator@fmi.unibuc.ro, preda@fmi.unibuc.ro
Abstract. We establish necessary and sufficient optimality conditions for a class of nondifferentiable minimax fractional programming problems with square root terms involving $(\eta,\rho,\theta)$-invex functions. Subsequently, we apply the optimality conditions to formulate a parametric dual problem, and we prove weak duality, strong duality, and strict converse duality theorems.
1. Introduction
Let us consider the following continuously differentiable mappings:
\[ f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}, \qquad h : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}, \qquad g : \mathbb{R}^n \to \mathbb{R}^p, \]
with $g = (g_1,\dots,g_p)$. We denote
\[ P = \{\, x \in \mathbb{R}^n \mid g_j(x) \le 0,\ j = 1,2,\dots,p \,\} \tag{1.1} \]
and consider $Y \subseteq \mathbb{R}^m$ to be a compact subset of $\mathbb{R}^m$. Let $B_r$, $r = 1,\dots,\alpha$, and $D_q$, $q = 1,\dots,\beta$, be $n \times n$ positive semidefinite matrices such that, for each $(x,y) \in P \times Y$, we have
\[ f(x,y) + \sum_{r=1}^{\alpha} \sqrt{x^{\top}B_r x} \ge 0 \qquad \text{and} \qquad h(x,y) - \sum_{q=1}^{\beta} \sqrt{x^{\top}D_q x} > 0. \]
In this paper we consider the following nondifferentiable minimax fractional programming problem:
\[ \text{(P)} \qquad \inf_{x\in P}\ \sup_{y\in Y}\ \frac{f(x,y)+\sum_{r=1}^{\alpha}\sqrt{x^{\top}B_r x}}{h(x,y)-\sum_{q=1}^{\beta}\sqrt{x^{\top}D_q x}}. \]
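To make the structure of (P) concrete, the following sketch evaluates the objective ratio and approximates the inf-sup by a brute-force grid search on a toy instance. All the data below ($f$, $h$, $g$, $B_1$, $D_1$, the compact set $Y$, and the grids) are assumptions chosen for illustration only; they are not taken from the paper.

```python
import numpy as np

# Toy instance of problem (P): n = 2, m = 1, alpha = beta = 1 (assumed data).
f = lambda x, y: np.dot(x, x) + y * x[0]      # f(x, y)
h = lambda x, y: 4.0 + y**2 - x[1]            # h(x, y), kept positive on P x Y
g = lambda x: np.array([x[0] + x[1] - 1.0])   # g(x) <= 0 defines P
B1 = np.array([[1.0, 0.0], [0.0, 2.0]])       # positive semidefinite
D1 = np.array([[0.5, 0.0], [0.0, 0.5]])       # positive semidefinite

def objective(x, y):
    # (f(x,y) + sqrt(x^T B_1 x)) / (h(x,y) - sqrt(x^T D_1 x))
    num = f(x, y) + np.sqrt(x @ B1 @ x)
    den = h(x, y) - np.sqrt(x @ D1 @ x)
    return num / den

def sup_over_Y(x, Y_grid):
    # inner supremum over the compact set Y, approximated on a grid
    return max(objective(x, y) for y in Y_grid)

Y_grid = np.linspace(-1.0, 1.0, 21)           # compact Y = [-1, 1]
xs = [np.array([a, b])
      for a in np.linspace(-1, 1, 41)
      for b in np.linspace(-1, 1, 41)
      if np.all(g(np.array([a, b])) <= 0)]    # feasible grid points of P
best = min(sup_over_Y(x, Y_grid) for x in xs) # outer infimum, approximated
print(best)
```

On this instance the numerator is nonnegative and the denominator positive over the grids, as the theory requires, and the grid minimum is attained at the origin.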
For $\alpha = \beta = 1$, this problem was studied by Lai et al. [14]; further, if $B_1 = D_1 = 0$, (P) is a differentiable minimax fractional programming problem, which has been studied by Chandra and Kumar [7] and by Liu and Wu [16]. Many authors have investigated optimality conditions and duality theorems for minimax (fractional) programming problems; for details, one can consult [1, 4, 14, 15, 21, 26]. Problems which contain square root terms were first studied by Mond [18]. Some extensions of Mond's results were obtained, for example, by Chandra et al. [5], Preda [20], Zhang and Mond [30], and Preda and Köller [22].
In an earlier work, under convexity assumptions, Schmittendorf [25] established necessary and sufficient optimality conditions for the problem
\[ \text{(P1)} \qquad \inf_{x\in P}\ \sup_{y\in Y}\ \psi(x,y), \]
where $\psi : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ is a continuously differentiable mapping. Later, Yadev and Mukherjee [28] employed the optimality conditions of Schmittendorf [25] to construct two dual problems, and they derived duality theorems for (convex) differentiable fractional minimax programming. In [7], Chandra and Kumar constructed two modified dual problems for which they proved duality theorems for (convex) differentiable fractional minimax programming. Liu and Wu [16] relaxed the convexity assumption in the sufficiency result of [7], employed the optimality conditions to construct one parametric dual and two parameter-free dual models, and established weak duality, strong duality, and strict converse duality theorems for a class of generalized minimax fractional programming problems involving generalized convex functions. Several authors have considered optimality and duality theorems for nondifferentiable nonconvex minimax fractional programming problems; one can consult [15, 26, 29].
In this paper, we present necessary and sufficient optimality conditions for problem (P), and we apply them to construct a parametric dual problem. Further, we establish weak duality, strong duality, and strict converse duality theorems for this pair of dual problems. Some definitions and notations are given in Section 2. In Section 3, necessary optimality conditions are proved, and we derive sufficient conditions under generalized convexity assumptions. Using the optimality conditions, in Section 4 we state the parametric dual problem and prove the duality results.
2. Notations and Preliminary Results
Throughout this paper, we denote by $\mathbb{R}^n$ the $n$-dimensional Euclidean space and by $\mathbb{R}^n_+$ its nonnegative orthant. Let us consider the set $P$ defined by (1.1), and for each $x \in P$ we define
\[ J(x) = \{\, j \in \{1,2,\dots,p\} \mid g_j(x) = 0 \,\}, \]
\[ Y(x) = \left\{\, z \in Y \;\middle|\; \frac{f(x,z)+\sum_{r=1}^{\alpha}\sqrt{x^{\top}B_r x}}{h(x,z)-\sum_{q=1}^{\beta}\sqrt{x^{\top}D_q x}} = \sup_{y\in Y} \frac{f(x,y)+\sum_{r=1}^{\alpha}\sqrt{x^{\top}B_r x}}{h(x,y)-\sum_{q=1}^{\beta}\sqrt{x^{\top}D_q x}} \,\right\}, \]

\[ K(x) = \left\{\, (s,t,\bar{y}) \in \mathbb{N}\times\mathbb{R}^s_+\times\mathbb{R}^{ms} \;\middle|\; 1 \le s \le n+1,\ \sum_{i=1}^{s} t_i = 1,\ \text{and } \bar{y}=(y_1,\dots,y_s)\in\mathbb{R}^{ms} \text{ with } y_i \in Y(x),\ i=1,\dots,s \,\right\}. \]

Since $f$ and $h$ are continuously differentiable functions and $Y$ is a compact set in $\mathbb{R}^m$, it follows that for each $x_0 \in P$ we have $Y(x_0) \neq \emptyset$, and for any $y_i \in Y(x_0)$ we denote
\[ k_0 = \frac{f(x_0,y_i)+\sum_{r=1}^{\alpha}\sqrt{x_0^{\top}B_r x_0}}{h(x_0,y_i)-\sum_{q=1}^{\beta}\sqrt{x_0^{\top}D_q x_0}}. \tag{2.1} \]
Let $A$ be an $m \times n$ matrix and let $M$, $M_i$, $i = 1,\dots,k$, be $n \times n$ symmetric positive semidefinite matrices.

Lemma 2.1 [27]. We have
\[ Ax \ge 0 \ \Longrightarrow\ c^{\top}x + \sum_{i=1}^{k} \sqrt{x^{\top}M_i x} \ge 0 \]
if and only if there exist $y \in \mathbb{R}^m_+$ and $v_i \in \mathbb{R}^n$, $i = 1,\dots,k$, such that
\[ v_i^{\top} M_i v_i \le 1,\quad i = 1,\dots,k, \qquad A^{\top}y = c + \sum_{i=1}^{k} M_i v_i. \]
If all $M_i = 0$, Lemma 2.1 becomes the well-known Farkas lemma.
We shall use the generalized Schwarz inequality [23]:
\[ x^{\top} M v \le \sqrt{x^{\top} M x}\,\sqrt{v^{\top} M v}. \tag{2.2} \]
We note that equality holds in (2.2) if $Mx = \lambda M v$ for some $\lambda \ge 0$.

Obviously, from (2.2), we have
\[ v^{\top} M v \le 1 \ \Longrightarrow\ x^{\top} M v \le \sqrt{x^{\top} M x}. \tag{2.3} \]
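A quick numerical illustration of (2.2) and (2.3); the random instance below is an assumption made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
M = A @ A.T                      # symmetric positive semidefinite
x = rng.standard_normal(n)
v = rng.standard_normal(n)

# Generalized Schwarz inequality (2.2): x^T M v <= sqrt(x^T M x) sqrt(v^T M v)
lhs = x @ M @ v
rhs = np.sqrt(x @ M @ x) * np.sqrt(v @ M @ v)
assert lhs <= rhs + 1e-12

# Consequence (2.3): if v^T M v <= 1, then x^T M v <= sqrt(x^T M x)
v_unit = v / np.sqrt(v @ M @ v)  # rescaled so that v_unit^T M v_unit = 1
assert x @ M @ v_unit <= np.sqrt(x @ M @ x) + 1e-12
```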
The following lemma is given by Schmittendorf [25] for problem (P1):

Lemma 2.2 [25]. Let $x_0$ be a solution of the minimax problem (P1) and suppose the vectors $\nabla g_j(x_0)$, $j \in J(x_0)$, are linearly independent. Then there exist a positive integer $s$, $1 \le s \le n+1$, real numbers $t_i \ge 0$, $i=1,\dots,s$, $\lambda_j \ge 0$, $j=1,\dots,p$, and vectors $y_i \in Y(x_0)$, $i=1,\dots,s$, such that
\[ \sum_{i=1}^{s} t_i \nabla_x \psi(x_0,y_i) + \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) = 0, \]
\[ \lambda_j g_j(x_0) = 0,\quad j=1,\dots,p, \qquad \sum_{i=1}^{s} t_i \neq 0. \]
We now give the definitions of $(\eta,\rho,\theta)$-quasi-invexity and $(\eta,\rho,\theta)$-pseudo-invexity as extensions of the invexity notion. The notion of invexity was introduced into optimization theory by Hanson [11], and the name of invex function was given by Craven [8]. Some extensions of invexity, such as pseudo-invexity, quasi-invexity, and $\rho$-invexity, $\rho$-pseudo-invexity, $\rho$-quasi-invexity, are presented, for example, in Craven and Glover [9], Kaul and Kaur [13], Preda [19], and Mititelu and Stancu-Minasian [17]. In this paper we shall use only the following notions:
Definition 2.1. A differentiable function $\varphi : C \subseteq \mathbb{R}^n \to \mathbb{R}$ is $(\eta,\rho,\theta)$-invex at $x_0 \in C$ if there exist functions $\eta : C \times C \to \mathbb{R}^n$, $\theta : C \times C \to \mathbb{R}_+$, and a real number $\rho \in \mathbb{R}$ such that, for all $x \in C$,
\[ \varphi(x) - \varphi(x_0) \ge \eta(x,x_0)^{\top}\nabla\varphi(x_0) + \rho\,\theta(x,x_0). \]
If $-\varphi$ is $(\eta,\rho,\theta)$-invex at $x_0 \in C$, then $\varphi$ is called $(\eta,\rho,\theta)$-incave at $x_0 \in C$. If the inequality holds strictly for $x \neq x_0$, then $\varphi$ is said to be strictly $(\eta,\rho,\theta)$-invex.

Definition 2.2. A differentiable function $\varphi : C \subseteq \mathbb{R}^n \to \mathbb{R}$ is $(\eta,\rho,\theta)$-pseudo-invex at $x_0 \in C$ if there exist functions $\eta : C\times C \to \mathbb{R}^n$, $\theta : C\times C \to \mathbb{R}_+$, and $\rho \in \mathbb{R}$ such that
\[ \eta(x,x_0)^{\top}\nabla\varphi(x_0) \ge -\rho\,\theta(x,x_0) \ \Longrightarrow\ \varphi(x) \ge \varphi(x_0), \quad \forall\, x \in C, \]
or, equivalently,
\[ \varphi(x) < \varphi(x_0) \ \Longrightarrow\ \eta(x,x_0)^{\top}\nabla\varphi(x_0) < -\rho\,\theta(x,x_0), \quad \forall\, x \in C. \]
If $-\varphi$ is $(\eta,\rho,\theta)$-pseudo-invex at $x_0 \in C$, then $\varphi$ is called $(\eta,\rho,\theta)$-pseudo-incave at $x_0 \in C$.

Definition 2.3. A differentiable function $\varphi : C \subseteq \mathbb{R}^n \to \mathbb{R}$ is strictly $(\eta,\rho,\theta)$-pseudo-invex at $x_0 \in C$ if there exist functions $\eta : C\times C \to \mathbb{R}^n$, $\theta : C\times C \to \mathbb{R}_+$, and $\rho \in \mathbb{R}$ such that
\[ \eta(x,x_0)^{\top}\nabla\varphi(x_0) \ge -\rho\,\theta(x,x_0) \ \Longrightarrow\ \varphi(x) > \varphi(x_0), \quad \forall\, x \in C,\ x \neq x_0. \]

Definition 2.4. A differentiable function $\varphi : C \subseteq \mathbb{R}^n \to \mathbb{R}$ is $(\eta,\rho,\theta)$-quasi-invex at $x_0 \in C$ if there exist functions $\eta : C\times C \to \mathbb{R}^n$, $\theta : C\times C \to \mathbb{R}_+$, and $\rho \in \mathbb{R}$ such that
\[ \varphi(x) \le \varphi(x_0) \ \Longrightarrow\ \eta(x,x_0)^{\top}\nabla\varphi(x_0) \le -\rho\,\theta(x,x_0), \quad \forall\, x \in C. \]
If $-\varphi$ is $(\eta,\rho,\theta)$-quasi-invex at $x_0 \in C$, then $\varphi$ is called $(\eta,\rho,\theta)$-quasi-incave at $x_0 \in C$.

If, in the above definitions, the corresponding property of a differentiable function $\varphi : C \subseteq \mathbb{R}^n \to \mathbb{R}$ is satisfied for any $x_0 \in C$, then we say that $\varphi$ has the respective $(\eta,\rho,\theta)$-characteristic on $C$.
3. Necessary and Sufficient Optimality Conditions

For any $x \in P$, let us denote the following index sets:
\[ B(x) = \{\, r \in \{1,2,\dots,\alpha\} \mid x^{\top}B_r x > 0 \,\}, \qquad D(x) = \{\, q \in \{1,2,\dots,\beta\} \mid x^{\top}D_q x > 0 \,\}, \]
\[ \overline{B}(x) = \{1,2,\dots,\alpha\}\setminus B(x) = \{\, r \mid x^{\top}B_r x = 0 \,\}, \qquad \overline{D}(x) = \{1,2,\dots,\beta\}\setminus D(x) = \{\, q \mid x^{\top}D_q x = 0 \,\}. \]
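In code, these index sets are straightforward to compute; the matrices and the point below are hypothetical data used only for illustration.

```python
import numpy as np

# Hypothetical data: alpha = 2 positive semidefinite matrices and a point x.
B = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
x = np.array([1.0, 0.0])

# B(x): indices r with x^T B_r x > 0; complement: indices with x^T B_r x = 0.
Bx  = [r for r, Br in enumerate(B, start=1) if x @ Br @ x > 0]
Bxc = [r for r, Br in enumerate(B, start=1) if x @ Br @ x == 0]
print(Bx, Bxc)   # the sets B(x) and its complement
```

The same pattern applies verbatim to the matrices $D_q$ and the sets $D(x)$, $\overline{D}(x)$.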
Using Lemma 2.2, we may prove the following necessary optimality conditions for problem (P).

Theorem 3.1 (Necessary Conditions). If $x_0$ is an optimal solution of problem (P) for which $\overline{B}(x_0) = \emptyset$, $\overline{D}(x_0) = \emptyset$, and the vectors $\nabla g_j(x_0)$, $j \in J(x_0)$, are linearly independent, then there exist $(s,t,\bar{y}) \in K(x_0)$, $k_0 \in \mathbb{R}_+$, $w_r \in \mathbb{R}^n$, $r=1,\dots,\alpha$, $v_q \in \mathbb{R}^n$, $q=1,\dots,\beta$, and $\lambda \in \mathbb{R}^p_+$ such that
\[ \sum_{i=1}^{s} t_i \left[ \nabla f(x_0,y_i) + \sum_{r=1}^{\alpha} B_r w_r - k_0\left( \nabla h(x_0,y_i) - \sum_{q=1}^{\beta} D_q v_q \right) \right] + \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) = 0, \tag{3.1} \]
\[ f(x_0,y_i) + \sum_{r=1}^{\alpha} \sqrt{x_0^{\top}B_r x_0} - k_0 \left( h(x_0,y_i) - \sum_{q=1}^{\beta} \sqrt{x_0^{\top}D_q x_0} \right) = 0, \quad \forall\, i=1,\dots,s, \tag{3.2} \]
\[ \sum_{j=1}^{p} \lambda_j g_j(x_0) = 0, \tag{3.3} \]
\[ t_i \ge 0, \qquad \sum_{i=1}^{s} t_i = 1, \tag{3.4} \]
\[ \begin{aligned} w_r^{\top}B_r w_r \le 1, \quad x_0^{\top}B_r w_r = \sqrt{x_0^{\top}B_r x_0}, &\qquad r = 1,\dots,\alpha, \\ v_q^{\top}D_q v_q \le 1, \quad x_0^{\top}D_q v_q = \sqrt{x_0^{\top}D_q x_0}, &\qquad q = 1,\dots,\beta. \end{aligned} \tag{3.5} \]
Proof. Since $\overline{B}(x_0) = \overline{D}(x_0) = \emptyset$, all the quadratic forms $x_0^{\top}B_r x_0$, $r=1,\dots,\alpha$, and $x_0^{\top}D_q x_0$, $q=1,\dots,\beta$, are positive, and since $f$ and $h$ are differentiable, the function
\[ \frac{f(x,y)+\sum_{r=1}^{\alpha}\sqrt{x^{\top}B_r x}}{h(x,y)-\sum_{q=1}^{\beta}\sqrt{x^{\top}D_q x}} \]
is differentiable with respect to $x$ in a neighborhood of $x_0$, for any given $y \in \mathbb{R}^m$. In Lemma 2.2 the differentiable function of (P1) is replaced by the objective (fractional) function of (P), and, as in the Kuhn-Tucker type formula, it follows that there exist a positive integer $s$, $1 \le s \le n+1$, and vectors $t' \in \mathbb{R}^s_+$, $\lambda \in \mathbb{R}^p_+$, $y_i \in Y(x_0)$, $i=1,\dots,s$, such that
\[ \sum_{i=1}^{s} \frac{t_i'}{h(x_0,y_i)-\sum_{q=1}^{\beta}\sqrt{x_0^{\top}D_q x_0}} \left[ \nabla f(x_0,y_i) + \sum_{r=1}^{\alpha} \frac{B_r x_0}{\sqrt{x_0^{\top}B_r x_0}} - k_0\left( \nabla h(x_0,y_i) - \sum_{q=1}^{\beta} \frac{D_q x_0}{\sqrt{x_0^{\top}D_q x_0}} \right) \right] + \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) = 0, \tag{3.6} \]
\[ \sum_{j=1}^{p} \lambda_j g_j(x_0) = 0, \tag{3.7} \]
\[ \sum_{i=1}^{s} t_i' > 0, \tag{3.8} \]
where $k_0$ is given by (2.1).
Now let us denote
\[ w_r = \frac{x_0}{\sqrt{x_0^{\top}B_r x_0}},\quad r=1,\dots,\alpha, \qquad v_q = \frac{x_0}{\sqrt{x_0^{\top}D_q x_0}},\quad q=1,\dots,\beta, \]
\[ t_i = \frac{t_i^0}{\sum_{i=1}^{s} t_i^0}, \qquad \text{where } t_i^0 = \frac{t_i'}{h(x_0,y_i)-\sum_{q=1}^{\beta}\sqrt{x_0^{\top}D_q x_0}}. \]
With these notations, equations (3.6) and (3.7) become
\[ \sum_{i=1}^{s} t_i \left[ \nabla f(x_0,y_i) + \sum_{r=1}^{\alpha} B_r w_r - k_0\left( \nabla h(x_0,y_i) - \sum_{q=1}^{\beta} D_q v_q \right) \right] + \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) = 0, \]
\[ \sum_{j=1}^{p} \lambda_j g_j(x_0) = 0, \]
where $\lambda \in \mathbb{R}^p_+$ and $t_i \ge 0$ for all $i=1,\dots,s$, with $\sum_{i=1}^{s} t_i = 1$. This proves (3.1)-(3.4).

Furthermore, it is easily verified that
\[ w_r^{\top}B_r w_r = 1, \quad x_0^{\top}B_r w_r = \sqrt{x_0^{\top}B_r x_0}, \quad \text{for any } r=1,\dots,\alpha, \]
\[ v_q^{\top}D_q v_q = 1, \quad x_0^{\top}D_q v_q = \sqrt{x_0^{\top}D_q x_0}, \quad \text{for any } q=1,\dots,\beta. \]
So relation (3.5) also holds, and the theorem is proved. $\Box$
We notice that, in the above theorem, all the square root terms were supposed to be positive at $x_0$. If at least one of $\overline{B}(x_0)$ or $\overline{D}(x_0)$ is not empty, then the functions involved in the objective of problem (P) are not differentiable at $x_0$. In this case, the necessary optimality conditions still hold under some additional assumptions.

For $x_0 \in P$ and $(s,t,\bar{y}) \in K(x_0)$ we define the vector
\[ \Gamma = \sum_{i=1}^{s} t_i \left[ \nabla f(x_0,y_i) + \sum_{r\in B(x_0)} \frac{B_r x_0}{\sqrt{x_0^{\top}B_r x_0}} - k_0\left( \nabla h(x_0,y_i) - \sum_{q\in D(x_0)} \frac{D_q x_0}{\sqrt{x_0^{\top}D_q x_0}} \right) \right]. \]
Now we define a set $Z_{\bar{y}}(x_0)$ as follows:
\[ Z_{\bar{y}}(x_0) = \{\, z \in \mathbb{R}^n \mid z^{\top}\nabla g_j(x_0) \le 0,\ j \in J(x_0),\ \text{and relation (3.9) holds} \,\}, \]
where
\[ z^{\top}\Gamma + \sum_{i=1}^{s} t_i \left( \sum_{r\in \overline{B}(x_0)} \sqrt{z^{\top}B_r z} + \sum_{q\in \overline{D}(x_0)} \sqrt{z^{\top}(k_0)^2 D_q z} \right) < 0. \tag{3.9} \]
If one of the index sets involved in the above expressions is empty, then the corresponding sum vanishes.
Using Lemma 2.1, we establish the following result:
Theorem 3.2. Let $x_0$ be an optimal solution of problem (P) such that at least one of $\overline{B}(x_0)$ or $\overline{D}(x_0)$ is not empty, and let $(s,t,\bar{y}) \in K(x_0)$ be such that $Z_{\bar{y}}(x_0) = \emptyset$. Then there exist vectors $w_r \in \mathbb{R}^n$, $r=1,\dots,\alpha$, $v_q \in \mathbb{R}^n$, $q=1,\dots,\beta$, and $\lambda \in \mathbb{R}^p_+$ which satisfy relations (3.1)-(3.5).

Proof. Using (2.1) we get (3.2), and relation (3.4) follows directly from the assumptions.
Since $Z_{\bar{y}}(x_0) = \emptyset$, for any $z \in \mathbb{R}^n$ with $z^{\top}\nabla g_j(x_0) \le 0$, $j \in J(x_0)$, we have
\[ z^{\top}\Gamma + \sum_{i=1}^{s} t_i \left( \sum_{r\in \overline{B}(x_0)} \sqrt{z^{\top}B_r z} + \sum_{q\in \overline{D}(x_0)} \sqrt{z^{\top}(k_0)^2 D_q z} \right) \ge 0. \]
Let us denote
\[ a = \sum_{i=1}^{s} t_i, \qquad b = \sum_{i=1}^{s} t_i k_0 = a k_0. \]
Now we apply Lemma 2.1, considering: the rows of the matrix $A$ to be the vectors $-\nabla g_j(x_0)$, $j \in J(x_0)$; $c = \Gamma$; and
\[ M_r^B = \begin{cases} a^2 B_r, & \text{if } r \in \overline{B}(x_0), \\ 0, & \text{if } r \in B(x_0), \end{cases} \qquad M_q^D = \begin{cases} b^2 D_q, & \text{if } q \in \overline{D}(x_0), \\ 0, & \text{if } q \in D(x_0). \end{cases} \]
It follows that there exist scalars $\lambda_j \ge 0$, $j \in J(x_0)$, and vectors $\bar{w}_r \in \mathbb{R}^n$, $r \in \overline{B}(x_0)$, $\bar{v}_q \in \mathbb{R}^n$, $q \in \overline{D}(x_0)$, such that
\[ -\sum_{j\in J(x_0)} \lambda_j \nabla g_j(x_0) = \Gamma + \sum_{r\in \overline{B}(x_0)} M_r^B \bar{w}_r + \sum_{q\in \overline{D}(x_0)} M_q^D \bar{v}_q \tag{3.10} \]
and
\[ \bar{w}_r^{\top} M_r^B \bar{w}_r \le 1,\quad r \in \overline{B}(x_0), \qquad \bar{v}_q^{\top} M_q^D \bar{v}_q \le 1,\quad q \in \overline{D}(x_0). \tag{3.11} \]
Since $g_j(x_0) = 0$ for $j \in J(x_0)$, we have $\lambda_j g_j(x_0) = 0$ for $j \in J(x_0)$. If $j \notin J(x_0)$, we put $\lambda_j = 0$. It follows that
\[ \sum_{j=1}^{p} \lambda_j g_j(x_0) = 0, \]
which shows that relation (3.3) holds.
Now we define
\[ w_r = \begin{cases} \dfrac{x_0}{\sqrt{x_0^{\top}B_r x_0}}, & \text{if } r \in B(x_0), \\[2mm] a\,\bar{w}_r, & \text{if } r \in \overline{B}(x_0), \end{cases} \qquad v_q = \begin{cases} \dfrac{x_0}{\sqrt{x_0^{\top}D_q x_0}}, & \text{if } q \in D(x_0), \\[2mm] b\,\bar{v}_q, & \text{if } q \in \overline{D}(x_0). \end{cases} \]
With these notations, equality (3.10) yields relation (3.1).

From (3.11) we get $w_r^{\top}B_r w_r \le 1$ for any $r = 1,\dots,\alpha$. Further, if $r \in \overline{B}(x_0)$, we have $x_0^{\top}B_r x_0 = 0$, which implies $B_r x_0 = 0$, and then $x_0^{\top}B_r w_r = 0 = \sqrt{x_0^{\top}B_r x_0}$. If $r \in B(x_0)$, we obviously have $x_0^{\top}B_r w_r = \sqrt{x_0^{\top}B_r x_0}$. The same arguments apply to the matrices $D_q$, so relation (3.5) holds. Therefore the theorem is proved. $\Box$
For convenience, if a point $x_0 \in P$ has the property that the vectors $\nabla g_j(x_0)$, $j \in J(x_0)$, are linearly independent and the set $Z_{\bar{y}}(x_0) = \emptyset$, then we say that $x_0$ satisfies a constraint qualification.

The results of Theorems 3.1 and 3.2 are necessary conditions for an optimal solution of problem (P). Actually, conditions (3.1)-(3.5) are also sufficient optimality conditions for (P), for which we state the following result involving generalized invex functions, whose assumptions are weaker than those used by Lai et al. in [14].
Theorem 3.3 (Sufficient Conditions). Let $x_0 \in P$ be a feasible solution of (P) and suppose there exist a positive integer $s$, $1 \le s \le n+1$, $y_i \in Y(x_0)$, $i=1,\dots,s$, $k_0 \in \mathbb{R}_+$ defined by (2.1), $t \in \mathbb{R}^s_+$, $w_r \in \mathbb{R}^n$, $r=1,\dots,\alpha$, $v_q \in \mathbb{R}^n$, $q=1,\dots,\beta$, and $\lambda \in \mathbb{R}^p_+$ such that relations (3.1)-(3.5) are satisfied. If any one of the following four conditions holds:

(a) $f(\cdot,y_i) + \sum_{r=1}^{\alpha} (\cdot)^{\top}B_r w_r$ is $(\eta,\rho_i,\theta)$-invex and $h(\cdot,y_i) - \sum_{q=1}^{\beta} (\cdot)^{\top}D_q v_q$ is $(\eta,\rho_i',\theta)$-incave for $i=1,\dots,s$, $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ is $(\eta,\rho_0,\theta)$-invex, and $\rho_0 + \sum_{i=1}^{s} t_i(\rho_i + \rho_i' k_0) \ge 0$;

(b) $\Phi(\cdot) \overset{\text{def}}{=} \sum_{i=1}^{s} t_i \left[ f(\cdot,y_i) + \sum_{r=1}^{\alpha} (\cdot)^{\top}B_r w_r - k_0\left( h(\cdot,y_i) - \sum_{q=1}^{\beta} (\cdot)^{\top}D_q v_q \right) \right]$ is $(\eta,\rho,\theta)$-invex, $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ is $(\eta,\rho_0,\theta)$-invex, and $\rho + \rho_0 \ge 0$;

(c) $\Phi(\cdot)$ is $(\eta,\rho,\theta)$-pseudo-invex, $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ is $(\eta,\rho_0,\theta)$-quasi-invex, and $\rho + \rho_0 \ge 0$;

(d) $\Phi(\cdot)$ is $(\eta,\rho,\theta)$-quasi-invex, $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ is strictly $(\eta,\rho_0,\theta)$-pseudo-invex, and $\rho + \rho_0 \ge 0$;

then $x_0$ is an optimal solution of (P).
Proof. On the contrary, let us suppose that $x_0$ is not an optimal solution of (P). Then there exists $x_1 \in P$ such that
\[ \sup_{y\in Y} \frac{f(x_1,y)+\sum_{r=1}^{\alpha}\sqrt{x_1^{\top}B_r x_1}}{h(x_1,y)-\sum_{q=1}^{\beta}\sqrt{x_1^{\top}D_q x_1}} < \sup_{y\in Y} \frac{f(x_0,y)+\sum_{r=1}^{\alpha}\sqrt{x_0^{\top}B_r x_0}}{h(x_0,y)-\sum_{q=1}^{\beta}\sqrt{x_0^{\top}D_q x_0}}. \]
We note that, for $y_i \in Y(x_0)$, $i=1,\dots,s$, we have
\[ \frac{f(x_0,y_i)+\sum_{r=1}^{\alpha}\sqrt{x_0^{\top}B_r x_0}}{h(x_0,y_i)-\sum_{q=1}^{\beta}\sqrt{x_0^{\top}D_q x_0}} = \sup_{y\in Y} \frac{f(x_0,y)+\sum_{r=1}^{\alpha}\sqrt{x_0^{\top}B_r x_0}}{h(x_0,y)-\sum_{q=1}^{\beta}\sqrt{x_0^{\top}D_q x_0}} = k_0. \]
Thus we have
\[ \frac{f(x_1,y_i)+\sum_{r=1}^{\alpha}\sqrt{x_1^{\top}B_r x_1}}{h(x_1,y_i)-\sum_{q=1}^{\beta}\sqrt{x_1^{\top}D_q x_1}} \le \sup_{y\in Y} \frac{f(x_1,y)+\sum_{r=1}^{\alpha}\sqrt{x_1^{\top}B_r x_1}}{h(x_1,y)-\sum_{q=1}^{\beta}\sqrt{x_1^{\top}D_q x_1}} < k_0, \quad \text{for } i=1,\dots,s. \]
It follows that
\[ f(x_1,y_i) + \sum_{r=1}^{\alpha}\sqrt{x_1^{\top}B_r x_1} - k_0\left( h(x_1,y_i) - \sum_{q=1}^{\beta}\sqrt{x_1^{\top}D_q x_1} \right) < 0, \quad \text{for } i=1,\dots,s. \tag{3.12} \]
Using relations (2.3), (3.5), (3.12), (3.2), and (3.4), we obtain
\[ \begin{aligned} \Phi(x_1) &= \sum_{i=1}^{s} t_i \left[ f(x_1,y_i) + \sum_{r=1}^{\alpha} x_1^{\top}B_r w_r - k_0\left( h(x_1,y_i) - \sum_{q=1}^{\beta} x_1^{\top}D_q v_q \right) \right] \\ &\le \sum_{i=1}^{s} t_i \left[ f(x_1,y_i) + \sum_{r=1}^{\alpha} \sqrt{x_1^{\top}B_r x_1} - k_0\left( h(x_1,y_i) - \sum_{q=1}^{\beta} \sqrt{x_1^{\top}D_q x_1} \right) \right] \\ &< 0 = \sum_{i=1}^{s} t_i \left[ f(x_0,y_i) + \sum_{r=1}^{\alpha} \sqrt{x_0^{\top}B_r x_0} - k_0\left( h(x_0,y_i) - \sum_{q=1}^{\beta} \sqrt{x_0^{\top}D_q x_0} \right) \right] \\ &= \sum_{i=1}^{s} t_i \left[ f(x_0,y_i) + \sum_{r=1}^{\alpha} x_0^{\top}B_r w_r - k_0\left( h(x_0,y_i) - \sum_{q=1}^{\beta} x_0^{\top}D_q v_q \right) \right] = \Phi(x_0). \end{aligned} \]
It follows that
\[ \Phi(x_1) < \Phi(x_0). \tag{3.13} \]
1. If hypothesis (a) holds, then for $i=1,\dots,s$ we have
\[ f(x_1,y_i) + \sum_{r=1}^{\alpha} x_1^{\top}B_r w_r - f(x_0,y_i) - \sum_{r=1}^{\alpha} x_0^{\top}B_r w_r \ge \eta(x_1,x_0)^{\top}\left( \nabla f(x_0,y_i) + \sum_{r=1}^{\alpha} B_r w_r \right) + \rho_i\,\theta(x_1,x_0) \tag{3.14} \]
and
\[ -h(x_1,y_i) + \sum_{q=1}^{\beta} x_1^{\top}D_q v_q + h(x_0,y_i) - \sum_{q=1}^{\beta} x_0^{\top}D_q v_q \ge -\eta(x_1,x_0)^{\top}\left( \nabla h(x_0,y_i) - \sum_{q=1}^{\beta} D_q v_q \right) + \rho_i'\,\theta(x_1,x_0). \tag{3.15} \]
Multiplying (3.14) by $t_i$ and (3.15) by $t_i k_0$, and summing up these inequalities, we obtain
\[ \Phi(x_1) - \Phi(x_0) \ge \eta(x_1,x_0)^{\top} \sum_{i=1}^{s} t_i \left[ \nabla f(x_0,y_i) + \sum_{r=1}^{\alpha} B_r w_r - k_0\left( \nabla h(x_0,y_i) - \sum_{q=1}^{\beta} D_q v_q \right) \right] + \sum_{i=1}^{s} t_i (\rho_i + k_0 \rho_i')\,\theta(x_1,x_0). \]
Further, by (3.1) and the $(\eta,\rho_0,\theta)$-invexity of $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$, we get
\[ \begin{aligned} \Phi(x_1) - \Phi(x_0) &\ge -\eta(x_1,x_0)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) + \sum_{i=1}^{s} t_i (\rho_i + k_0 \rho_i')\,\theta(x_1,x_0) \\ &\ge -\sum_{j=1}^{p} \lambda_j g_j(x_1) + \sum_{j=1}^{p} \lambda_j g_j(x_0) + \left( \rho_0 + \sum_{i=1}^{s} t_i (\rho_i + k_0 \rho_i') \right)\theta(x_1,x_0). \end{aligned} \]
Since $x_1 \in P$, we have $g_j(x_1) \le 0$, $j=1,\dots,p$, and using (3.3) it follows that
\[ \Phi(x_1) - \Phi(x_0) \ge \left( \rho_0 + \sum_{i=1}^{s} t_i (\rho_i + k_0 \rho_i') \right)\theta(x_1,x_0) \ge 0, \]
which contradicts inequality (3.13).
2. If hypothesis (b) holds, we have
\[ \Phi(x_1) - \Phi(x_0) \ge \eta(x_1,x_0)^{\top} \sum_{i=1}^{s} t_i \left[ \nabla f(x_0,y_i) + \sum_{r=1}^{\alpha} B_r w_r - k_0\left( \nabla h(x_0,y_i) - \sum_{q=1}^{\beta} D_q v_q \right) \right] + \rho\,\theta(x_1,x_0). \]
Using relation (3.1) and the $(\eta,\rho_0,\theta)$-invexity of $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$, we obtain
\[ \begin{aligned} \Phi(x_1) - \Phi(x_0) &\ge -\eta(x_1,x_0)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) + \rho\,\theta(x_1,x_0) \\ &\ge -\sum_{j=1}^{p} \lambda_j g_j(x_1) + \sum_{j=1}^{p} \lambda_j g_j(x_0) + (\rho + \rho_0)\,\theta(x_1,x_0) \ge 0, \end{aligned} \]
which contradicts inequality (3.13).
3. If hypothesis (c) holds, using the $(\eta,\rho,\theta)$-pseudo-invexity of $\Phi(\cdot)$, we get from (3.13) that
\[ \Phi(x_1) < \Phi(x_0) \ \Longrightarrow\ \eta(x_1,x_0)^{\top}\nabla\Phi(x_0) < -\rho\,\theta(x_1,x_0). \tag{3.16} \]
Using again relation (3.1), from (3.16) and $\rho + \rho_0 \ge 0$ we get
\[ \eta(x_1,x_0)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) > \rho\,\theta(x_1,x_0) \ge -\rho_0\,\theta(x_1,x_0). \tag{3.17} \]
Since $x_1 \in P$ implies $g_j(x_1) \le 0$, $j=1,\dots,p$, and $\lambda \in \mathbb{R}^p_+$, using (3.3) we have
\[ \sum_{j=1}^{p} \lambda_j g_j(x_1) \le 0 = \sum_{j=1}^{p} \lambda_j g_j(x_0). \tag{3.18} \]
Using the $(\eta,\rho_0,\theta)$-quasi-invexity of $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$, we get from the last relation
\[ \eta(x_1,x_0)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) \le -\rho_0\,\theta(x_1,x_0), \]
which contradicts inequality (3.17).
4. If hypothesis (d) holds, the $(\eta,\rho,\theta)$-quasi-invexity of $\Phi(\cdot)$ implies
\[ \Phi(x_1) \le \Phi(x_0) \ \Longrightarrow\ \eta(x_1,x_0)^{\top}\nabla\Phi(x_0) \le -\rho\,\theta(x_1,x_0). \]
From here, together with (3.1) and $\rho + \rho_0 \ge 0$, we have
\[ \eta(x_1,x_0)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) \ge \rho\,\theta(x_1,x_0) \ge -\rho_0\,\theta(x_1,x_0). \tag{3.19} \]
Since (3.18) is true, the strict $(\eta,\rho_0,\theta)$-pseudo-invexity of $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ implies
\[ \eta(x_1,x_0)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(x_0) < -\rho_0\,\theta(x_1,x_0), \]
which contradicts inequality (3.19). Therefore the proof of the theorem is complete. $\Box$
4. Duality

Let us consider the set $H(s,t,\bar{y})$ consisting of all $(z,\lambda,k,v,w) \in \mathbb{R}^n \times \mathbb{R}^p_+ \times \mathbb{R}_+ \times \mathbb{R}^{n\beta} \times \mathbb{R}^{n\alpha}$, where $v = (v_1,\dots,v_{\beta})$, $v_q \in \mathbb{R}^n$, $q=1,\dots,\beta$, and $w = (w_1,\dots,w_{\alpha})$, $w_r \in \mathbb{R}^n$, $r=1,\dots,\alpha$, which satisfy the following conditions:
\[ \sum_{i=1}^{s} t_i \left[ \nabla f(z,y_i) + \sum_{r=1}^{\alpha} B_r w_r - k\left( \nabla h(z,y_i) - \sum_{q=1}^{\beta} D_q v_q \right) \right] + \sum_{j=1}^{p} \lambda_j \nabla g_j(z) = 0, \tag{4.1} \]
\[ \sum_{i=1}^{s} t_i \left[ f(z,y_i) + \sum_{r=1}^{\alpha} z^{\top}B_r w_r - k\left( h(z,y_i) - \sum_{q=1}^{\beta} z^{\top}D_q v_q \right) \right] \ge 0, \tag{4.2} \]
\[ \sum_{j=1}^{p} \lambda_j g_j(z) \ge 0, \tag{4.3} \]
\[ (s,t,\bar{y}) \in K(z), \tag{4.4} \]
\[ w_r^{\top}B_r w_r \le 1,\quad r=1,\dots,\alpha; \qquad v_q^{\top}D_q v_q \le 1,\quad q=1,\dots,\beta. \tag{4.5} \]
The optimality conditions stated in the preceding section for the minimax problem (P) suggest that we define the following dual problem:
\[ \text{(DP)} \qquad \max_{(s,t,\bar{y})\in K(z)}\ \sup\,\{\, k \mid (z,\lambda,k,v,w) \in H(s,t,\bar{y}) \,\}. \]
If, for a triplet $(s,t,\bar{y}) \in K(z)$, the set $H(s,t,\bar{y}) = \emptyset$, then we define the supremum over $H(s,t,\bar{y})$ to be $-\infty$. Further, we denote
\[ \Psi(\cdot) = \sum_{i=1}^{s} t_i \left[ f(\cdot,y_i) + \sum_{r=1}^{\alpha} (\cdot)^{\top}B_r w_r - k\left( h(\cdot,y_i) - \sum_{q=1}^{\beta} (\cdot)^{\top}D_q v_q \right) \right]. \]
Now we can state the following weak duality theorem for (P) and (DP).

Theorem 4.1 (Weak Duality). Let $x \in P$ be a feasible solution of (P) and let $(z,\lambda,k,v,w,s,t,\bar{y})$ be a feasible solution of (DP). If any one of the following four conditions holds:

(a) $f(\cdot,y_i) + \sum_{r=1}^{\alpha} (\cdot)^{\top}B_r w_r$ is $(\eta,\rho_i,\theta)$-invex and $h(\cdot,y_i) - \sum_{q=1}^{\beta} (\cdot)^{\top}D_q v_q$ is $(\eta,\rho_i',\theta)$-incave for $i=1,\dots,s$, $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ is $(\eta,\rho_0,\theta)$-invex, and $\rho_0 + \sum_{i=1}^{s} t_i(\rho_i + \rho_i' k) \ge 0$;

(b) $\Psi(\cdot)$ is $(\eta,\rho,\theta)$-invex, $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ is $(\eta,\rho_0,\theta)$-invex, and $\rho + \rho_0 \ge 0$;

(c) $\Psi(\cdot)$ is $(\eta,\rho,\theta)$-pseudo-invex, $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ is $(\eta,\rho_0,\theta)$-quasi-invex, and $\rho + \rho_0 \ge 0$;

(d) $\Psi(\cdot)$ is $(\eta,\rho,\theta)$-quasi-invex, $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$ is strictly $(\eta,\rho_0,\theta)$-pseudo-invex, and $\rho + \rho_0 \ge 0$;

then
\[ \sup_{y\in Y} \frac{f(x,y)+\sum_{r=1}^{\alpha}\sqrt{x^{\top}B_r x}}{h(x,y)-\sum_{q=1}^{\beta}\sqrt{x^{\top}D_q x}} \ge k. \tag{4.6} \]
Proof. If we suppose, on the contrary, that
\[ \sup_{y\in Y} \frac{f(x,y)+\sum_{r=1}^{\alpha}\sqrt{x^{\top}B_r x}}{h(x,y)-\sum_{q=1}^{\beta}\sqrt{x^{\top}D_q x}} < k, \]
then we have, for all $y \in Y$,
\[ f(x,y) + \sum_{r=1}^{\alpha}\sqrt{x^{\top}B_r x} - k\left( h(x,y) - \sum_{q=1}^{\beta}\sqrt{x^{\top}D_q x} \right) < 0. \]
It follows that, for $t_i \ge 0$, $i=1,\dots,s$, with $\sum_{i=1}^{s} t_i = 1$,
\[ t_i \left[ f(x,y_i) + \sum_{r=1}^{\alpha}\sqrt{x^{\top}B_r x} - k\left( h(x,y_i) - \sum_{q=1}^{\beta}\sqrt{x^{\top}D_q x} \right) \right] \le 0, \quad i=1,\dots,s, \tag{4.7} \]
with at least one strict inequality, because $t = (t_1,\dots,t_s) \neq 0$.

Taking into account relations (2.3), (4.5), (4.7), and (4.2), we have
\[ \begin{aligned} \Psi(x) &= \sum_{i=1}^{s} t_i \left[ f(x,y_i) + \sum_{r=1}^{\alpha} x^{\top}B_r w_r - k\left( h(x,y_i) - \sum_{q=1}^{\beta} x^{\top}D_q v_q \right) \right] \\ &\le \sum_{i=1}^{s} t_i \left[ f(x,y_i) + \sum_{r=1}^{\alpha} \sqrt{x^{\top}B_r x} - k\left( h(x,y_i) - \sum_{q=1}^{\beta} \sqrt{x^{\top}D_q x} \right) \right] \\ &< 0 \le \sum_{i=1}^{s} t_i \left[ f(z,y_i) + \sum_{r=1}^{\alpha} z^{\top}B_r w_r - k\left( h(z,y_i) - \sum_{q=1}^{\beta} z^{\top}D_q v_q \right) \right] = \Psi(z), \end{aligned} \]
that is,
\[ \Psi(x) < \Psi(z). \tag{4.8} \]
1. If hypothesis (a) holds, then for $i=1,\dots,s$ we have
\[ f(x,y_i) + \sum_{r=1}^{\alpha} x^{\top}B_r w_r - f(z,y_i) - \sum_{r=1}^{\alpha} z^{\top}B_r w_r \ge \eta(x,z)^{\top}\left( \nabla f(z,y_i) + \sum_{r=1}^{\alpha} B_r w_r \right) + \rho_i\,\theta(x,z) \tag{4.9} \]
and
\[ -h(x,y_i) + \sum_{q=1}^{\beta} x^{\top}D_q v_q + h(z,y_i) - \sum_{q=1}^{\beta} z^{\top}D_q v_q \ge -\eta(x,z)^{\top}\left( \nabla h(z,y_i) - \sum_{q=1}^{\beta} D_q v_q \right) + \rho_i'\,\theta(x,z). \tag{4.10} \]
Multiplying (4.9) by $t_i$ and (4.10) by $t_i k$, and summing up these inequalities, we obtain
\[ \Psi(x) - \Psi(z) \ge \eta(x,z)^{\top} \sum_{i=1}^{s} t_i \left[ \nabla f(z,y_i) + \sum_{r=1}^{\alpha} B_r w_r - k\left( \nabla h(z,y_i) - \sum_{q=1}^{\beta} D_q v_q \right) \right] + \sum_{i=1}^{s} t_i (\rho_i + k\rho_i')\,\theta(x,z). \]
Further, by (4.1) and the $(\eta,\rho_0,\theta)$-invexity of $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$, we get
\[ \begin{aligned} \Psi(x) - \Psi(z) &\ge -\eta(x,z)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(z) + \sum_{i=1}^{s} t_i (\rho_i + k\rho_i')\,\theta(x,z) \\ &\ge -\sum_{j=1}^{p} \lambda_j g_j(x) + \sum_{j=1}^{p} \lambda_j g_j(z) + \left( \rho_0 + \sum_{i=1}^{s} t_i (\rho_i + k\rho_i') \right)\theta(x,z). \end{aligned} \]
Since $x \in P$, we have $g_j(x) \le 0$, $j=1,\dots,p$, and using (4.3) it follows that
\[ \Psi(x) - \Psi(z) \ge \left( \rho_0 + \sum_{i=1}^{s} t_i (\rho_i + k\rho_i') \right)\theta(x,z) \ge 0, \]
which contradicts inequality (4.8). Hence inequality (4.6) is true.

2. Similarly, one can prove the case of hypothesis (b).
3. If hypothesis (c) holds, using the $(\eta,\rho,\theta)$-pseudo-invexity of $\Psi(\cdot)$, we get from (4.8) that
\[ \eta(x,z)^{\top}\nabla\Psi(z) < -\rho\,\theta(x,z). \tag{4.11} \]
Consequently, relations (4.1), (4.11), and $\rho + \rho_0 \ge 0$ yield
\[ \eta(x,z)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(z) > \rho\,\theta(x,z) \ge -\rho_0\,\theta(x,z). \tag{4.12} \]
Because $x \in P$, $\lambda \in \mathbb{R}^p_+$, and (4.3) holds, we have
\[ \sum_{j=1}^{p} \lambda_j g_j(x) \le 0 \le \sum_{j=1}^{p} \lambda_j g_j(z). \]
Using the $(\eta,\rho_0,\theta)$-quasi-invexity of $\sum_{j=1}^{p} \lambda_j g_j(\cdot)$, we get from the last relation
\[ \eta(x,z)^{\top} \sum_{j=1}^{p} \lambda_j \nabla g_j(z) \le -\rho_0\,\theta(x,z), \]
which contradicts inequality (4.12).

4. The result under hypothesis (d) follows similarly, as in step 3.

Therefore the proof of the theorem is complete. $\Box$
Theorem 4.2 (Strong Duality). Let $x^*$ be an optimal solution of problem (P), and assume that $x^*$ satisfies a constraint qualification for problem (P). Then there exist $(s^*,t^*,y^*) \in K(x^*)$ and $(x^*,\lambda^*,k^*,v^*,w^*) \in H(s^*,t^*,y^*)$ such that $(x^*,\lambda^*,k^*,v^*,w^*,s^*,t^*,y^*)$ is a feasible solution of (DP). If the hypotheses of Theorem 4.1 are also satisfied, then $(x^*,\lambda^*,k^*,v^*,w^*,s^*,t^*,y^*)$ is an optimal solution of (DP), and the problems (P) and (DP) have the same optimal value.

Proof. By Theorems 3.1 and 3.2, there exist $(s^*,t^*,y^*) \in K(x^*)$ and $(x^*,\lambda^*,k^*,v^*,w^*) \in H(s^*,t^*,y^*)$ such that $(x^*,\lambda^*,k^*,v^*,w^*,s^*,t^*,y^*)$ is a feasible solution of (DP) and
\[ k^* = \frac{f(x^*,y_i^*)+\sum_{r=1}^{\alpha}\sqrt{(x^*)^{\top}B_r x^*}}{h(x^*,y_i^*)-\sum_{q=1}^{\beta}\sqrt{(x^*)^{\top}D_q x^*}}. \]
The optimality of this feasible solution for (DP) follows from Theorem 4.1. $\Box$
Theorem 4.3 (Strict Converse Duality). Let $x^*$ and $(\bar{z},\bar{\lambda},\bar{k},\bar{v},\bar{w},\bar{s},\bar{t},\bar{y})$ be optimal solutions of (P) and (DP), respectively, and assume that the hypotheses of Theorem 4.2 are fulfilled. If any one of the following three conditions holds:

(a) one of the functions $f(\cdot,\bar{y}_i) + \sum_{r=1}^{\alpha} (\cdot)^{\top}B_r \bar{w}_r$, $i=1,\dots,\bar{s}$, is strictly $(\eta,\rho_i,\theta)$-invex, or one of the functions $h(\cdot,\bar{y}_i) - \sum_{q=1}^{\beta} (\cdot)^{\top}D_q \bar{v}_q$, $i=1,\dots,\bar{s}$, is strictly $(\eta,\rho_i',\theta)$-incave, or $\sum_{j=1}^{p} \bar{\lambda}_j g_j(\cdot)$ is strictly $(\eta,\rho_0,\theta)$-invex, and $\rho_0 + \sum_{i=1}^{\bar{s}} \bar{t}_i(\rho_i + \rho_i'\bar{k}) \ge 0$;

(b) either $\sum_{i=1}^{\bar{s}} \bar{t}_i \left[ f(\cdot,\bar{y}_i) + \sum_{r=1}^{\alpha} (\cdot)^{\top}B_r \bar{w}_r - \bar{k}\left( h(\cdot,\bar{y}_i) - \sum_{q=1}^{\beta} (\cdot)^{\top}D_q \bar{v}_q \right) \right]$ is strictly $(\eta,\rho,\theta)$-invex or $\sum_{j=1}^{p} \bar{\lambda}_j g_j(\cdot)$ is strictly $(\eta,\rho_0,\theta)$-invex, and $\rho + \rho_0 \ge 0$;

(c) $\sum_{i=1}^{\bar{s}} \bar{t}_i \left[ f(\cdot,\bar{y}_i) + \sum_{r=1}^{\alpha} (\cdot)^{\top}B_r \bar{w}_r - \bar{k}\left( h(\cdot,\bar{y}_i) - \sum_{q=1}^{\beta} (\cdot)^{\top}D_q \bar{v}_q \right) \right]$ is strictly $(\eta,\rho,\theta)$-pseudo-invex and $\sum_{j=1}^{p} \bar{\lambda}_j g_j(\cdot)$ is $(\eta,\rho_0,\theta)$-quasi-invex, and $\rho + \rho_0 \ge 0$;

then $x^* = \bar{z}$; that is, $\bar{z}$ is an optimal solution of problem (P) and
\[ \sup_{y\in Y} \frac{f(\bar{z},y)+\sum_{r=1}^{\alpha}\sqrt{\bar{z}^{\top}B_r \bar{z}}}{h(\bar{z},y)-\sum_{q=1}^{\beta}\sqrt{\bar{z}^{\top}D_q \bar{z}}} = \bar{k}. \]

Proof. Suppose, on the contrary, that $x^* \neq \bar{z}$. From Theorem 4.2 we know that there exist $(s^*,t^*,y^*) \in K(x^*)$ and $(x^*,\lambda^*,k^*,v^*,w^*) \in H(s^*,t^*,y^*)$ such that $(x^*,\lambda^*,k^*,v^*,w^*,s^*,t^*,y^*)$ is a feasible solution of (DP) with the optimal value
\[ k^* = \sup_{y\in Y} \frac{f(x^*,y)+\sum_{r=1}^{\alpha}\sqrt{(x^*)^{\top}B_r x^*}}{h(x^*,y)-\sum_{q=1}^{\beta}\sqrt{(x^*)^{\top}D_q x^*}}. \]
Now we proceed as in the proof of Theorem 4.1, replacing $x$ by $x^*$ and $(z,\lambda,k,v,w,s,t,\bar{y})$ by $(\bar{z},\bar{\lambda},\bar{k},\bar{v},\bar{w},\bar{s},\bar{t},\bar{y})$; the strict invexity assumptions yield the strict inequality
\[ \sup_{y\in Y} \frac{f(x^*,y)+\sum_{r=1}^{\alpha}\sqrt{(x^*)^{\top}B_r x^*}}{h(x^*,y)-\sum_{q=1}^{\beta}\sqrt{(x^*)^{\top}D_q x^*}} > \bar{k}. \]
But this contradicts the fact that
\[ \sup_{y\in Y} \frac{f(x^*,y)+\sum_{r=1}^{\alpha}\sqrt{(x^*)^{\top}B_r x^*}}{h(x^*,y)-\sum_{q=1}^{\beta}\sqrt{(x^*)^{\top}D_q x^*}} = k^* = \bar{k}, \]
and we conclude that $x^* = \bar{z}$. Hence the proof of the theorem is complete. $\Box$
5. Special Cases

By specializing the results presented in this paper, we may retrieve some previous results obtained by other authors.

1. If $\alpha = \beta = 1$, we obtain the results of Lai et al. [14].

2. If $B_r = 0$, $r=1,\dots,\alpha$, and $D_q = 0$, $q=1,\dots,\beta$, we obtain the results of Chandra and Kumar [7] and of Liu and Wu [16], respectively.

3. If the set $Y$ is a singleton, $\alpha = 1$, $h \equiv 1$, and $D_q = 0$, $q=1,\dots,\beta$, we obtain the results presented in Mond [18], Chandra et al. [5], Preda [20], Zhang and Mond [30], and Preda and Köller [22], respectively.

4. If the set $Y$ is a singleton and $\alpha = \beta = 0$ (that is, there are no square root terms), then problem (P) becomes the standard fractional programming problem and the dual problem (DP) reduces to the well-known dual of Schaible [24].

5. For the case of generalized fractional programming [2, 3, 6, 10, 12], the set $Y$ can be taken to be the simplex $Y = \{\, y \in \mathbb{R}^m \mid y_i \ge 0,\ \sum_{i=1}^{m} y_i = 1 \,\}$, with $B_r = 0$, $r=1,\dots,\alpha$, $D_q = 0$, $q=1,\dots,\beta$, and
\[ \frac{f(x,y)}{h(x,y)} = \frac{\sum_{i=1}^{m} y_i f_i(x)}{\sum_{i=1}^{m} y_i h_i(x)}. \]
In this case the dual (DP) reduces to the dual problem of [2].
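In the simplex case of item 5, with all $h_i(x) > 0$ the ratio is linear fractional in $y$, so its supremum over the simplex is attained at a vertex and equals $\max_i f_i(x)/h_i(x)$. The following sketch checks this numerically; the values of $f_i(x)$ and $h_i(x)$ at a fixed $x$ are assumed data for illustration only.

```python
import numpy as np

f_vals = np.array([3.0, 1.0, 2.0])   # f_i(x) at a fixed x (assumed data)
h_vals = np.array([2.0, 1.0, 4.0])   # h_i(x) > 0 at the same x

# Evaluate the ratio over a fine grid of the simplex {y >= 0, sum y = 1}, m = 3.
best = -np.inf
steps = 60
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        y = np.array([i, j, steps - i - j]) / steps
        best = max(best, (y @ f_vals) / (y @ h_vals))

# The grid supremum matches the best vertex ratio max_i f_i / h_i.
print(best, max(f_vals / h_vals))
```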
References

[1] C.R. Bector and B.L. Bhatia, Sufficient optimality conditions and duality for a minmax problem, Utilitas Math., 27 (1985), 229-247.
[2] C.R. Bector, S. Chandra and M.K. Bector, Generalized fractional programming duality: a parametric approach, J. Optim. Theory Appl., 60 (1988), 243-260.
[3] C.R. Bector and S.K. Suneja, Duality in nondifferentiable generalized fractional programming, Asia-Pacific J. Oper. Res., 5 (1988), 134-139.
[4] C.R. Bector, S. Chandra and I. Husain, Second order duality for a minimax programming problem, Opsearch, 28 (1991), 249-263.
[5] S. Chandra, B.D. Craven and B. Mond, Generalized concavity and duality with a square root term, Optimization, 16 (1985), 653-662.
[6] S. Chandra, B.D. Craven and B. Mond, Generalized fractional programming duality: a ratio game approach, J. Austral. Math. Soc. Ser. B, 28 (1986), 170-180.
[7] S. Chandra and V. Kumar, Duality in fractional minimax programming, J. Austral. Math. Soc. Ser. A, 58 (1995), 376-386.
[8] B.D. Craven, Invex functions and constrained local minima, Bull. Austral. Math. Soc., 24 (1981), 357-366.
[9] B.D. Craven and B.M. Glover, Invex functions and duality, J. Austral. Math. Soc. Ser. A, 39 (1985), 1-20.
[10] J.P. Crouzeix, J.A. Ferland and S. Schaible, Duality in generalized fractional programming, Math. Progr., 27 (1983), 342-354.
[11] M.A. Hanson, On sufficiency of the Kuhn-Tucker conditions, J. Math. Anal. Appl., 80 (1981), 545-550.
[12] R. Jagannathan and S. Schaible, Duality in generalized fractional programming via Farkas' lemma, J. Optim. Theory Appl., 41 (1983), 417-424.
[13] R.N. Kaul and S. Kaur, Optimality criteria in nonlinear programming involving nonconvex functions, J. Math. Anal. Appl., 105 (1985), 104-112.
[14] H.C. Lai, J.C. Liu and K. Tanaka, Necessary and sufficient conditions for minimax fractional programming, J. Math. Anal. Appl., 230 (1999), 311-328.
[15] J.C. Liu, Optimality and duality for generalized fractional programming involving nonsmooth (F, ρ)-convex functions, Comput. Math. Appl., 32 (1996), 91-102.
[16] J.C. Liu and C.S. Wu, On minimax fractional optimality conditions with (F, ρ)-convexity, J. Math. Anal. Appl., 219 (1998), 36-51.
[17] S. Mititelu and I.M. Stancu-Minasian, Invexity at a point: generalizations and classification, Bull. Austral. Math. Soc., 48 (1993), 117-126.
[18] B. Mond, A class of nondifferentiable mathematical programming problems, J. Math. Anal. Appl., 46 (1974), 169-174.
[19] V. Preda, Generalized invexity for a class of programming problems, Stud. Cerc. Mat., 42 (1990), 304-316.
[20] V. Preda, On generalized convexity and duality with a square root term, Zeitschrift für Operations Research, 36 (1992), 547-563.
[21] V. Preda and A. Bătătorescu, On duality for minmax generalized B-vex programming involving n-set functions, Journal of Convex Analysis, 9(2) (2002), 609-623.
[22] V. Preda and E. Köller, Duality for a nondifferentiable programming problem with a square root term, Rev. Roumaine Math. Pures Appl., 45(5) (2000), 873-882.
[23] F. Riesz and B. Sz.-Nagy, Leçons d'analyse fonctionnelle, Akadémiai Kiadó, Budapest, 1972.
[24] S. Schaible, Fractional programming I. Duality, Management Sci., 22 (1976), 858-867.
[25] W.E. Schmittendorf, Necessary conditions and sufficient conditions for static minmax problems, J. Math. Anal. Appl., 57 (1977), 683-693.
[26] C. Singh, Optimality conditions for fractional minimax programming, J. Math. Anal. Appl., 100 (1984), 409-415.
[27] S.M. Sinha, An extension of a theorem on supports of a convex function, Management Sci., 16 (1966), 409-415.
[28] S.R. Yadev and R.N. Mukherjee, Duality for fractional minimax programming problems, J. Austral. Math. Soc. Ser. B, 31 (1990), 484-492.
[29] G.J. Zalmai, Optimality conditions and duality models for generalized fractional programming problems containing locally subdifferentiable and ρ-convex functions, Optimization, 32 (1995), 95-124.
[30] J. Zhang and B. Mond, Duality for a nondifferentiable programming problem, Bull. Austral. Math. Soc., 55 (1997), 29-44.