Geometric Programming Subject to System of Fuzzy Relation Inequalities

Available at http://pvamu.edu/aam
Applications and Applied Mathematics: An International Journal (AAM)
ISSN: 1932-9466
Appl. Appl. Math., Vol. 7, Issue 1 (June 2012), pp. 261 – 282

Elyas Shivanian and Mahdi Keshtkar
Department of Mathematics
Imam Khomeini International University
Qazvin, Iran

Esmaile Khorram
Faculty of Mathematics and Computer Science
Amirkabir University of Technology
Tehran, Iran

Received: March 15, 2011; Accepted: May 02, 2012
Abstract
In this paper, an optimization model with a geometric objective function is presented. Geometric programming is widely used: many objective functions in optimization problems can be analyzed by geometric programming, and we often encounter it in resource allocation, structure optimization, technology management, and related areas. On the other hand, fuzzy relation equalities and inequalities are also used in many areas. We present a geometric programming model with a monomial objective function subject to fuzzy relation inequality constraints with max-product composition. Simplification operations are given to accelerate the resolution of the problem by removing components that have no effect on the solution process. An algorithm and two practical examples are also presented to summarize and illustrate the steps of the problem resolution.

Keywords: Geometric programming; fuzzy relation equalities and inequalities; max-product composition

MSC 2010: 90C70, 94D05, 47S40
1. Introduction
Fuzzy relation equations (FRE), fuzzy relation inequalities (FRI) and their connected problems have been investigated by many researchers in both theoretical and applied areas: Han (2006), Hassanzadeh (2011), Di Nola (1984), Zener (1971), Fang and Puthenpura (1993), Higashi and Klir (1984), Guo et al. (1988), Shivanian and Khorram (2007), Shivanian and Khorram (2010), Khorram (2008), Abbasi-Molai (2010), Perfilieva and Novák (2007), Shieh (2007), Ghodousian and Khorram (2008). Sanchez (1977) started the development of the theory and applications of FRE, treated as a formalized model for non-precise concepts. Generally, FRE and FRI have a number of properties that make them suitable for formulating the uncertain information upon which many applied concepts are usually based. Applications of FRE and FRI can be seen in many areas, for instance fuzzy control, fuzzy decision making, system analysis, fuzzy modeling, fuzzy arithmetic, fuzzy symptom diagnosis, and especially fuzzy medical diagnosis (see Alayón et al. (2007), Berrached et al. (2002), Di Nola and Russo (2007), Zener (1971), Dubois and Prade (1980), Jian-Xin (2004), Loia (2005), Nobuhara et al. (2006), Pappis and Karacapilidis (1995), Pedrycz (1981), Perfilieva and Novák (2007), Vasantha et al. (2004), Homayouni et al. (2009)).
An interesting, extensively investigated kind of such problems is the optimization of objective functions over a region whose set of feasible solutions is defined by FRE or FRI constraints: Brouke et al. (1998), Fang and Li (1999), Guo and Xia (2006), Jian-Xin (2008), Shivanian (2007), Loetamonphong (2001), Wu (2008). Fang and Li (1999) solved the linear optimization problem with FRE constraints by considering the max-min composition. The max-min composition is commonly used when a system requires conservative solutions, in the sense that the goodness of one value cannot compensate for the badness of another value, Loetamonphong (2001). Recent results in the literature, however, show that the min operator is not always the best choice for the intersection operation. Instead, the max-product composition provided results better than or equivalent to the max-min composition in some applications, Alayón et al. (2007).

The fundamental result for fuzzy relation equations with max-product composition goes back to Pedrycz (1981). A recent study in this regard can be found in Brouke and Fisher (1998). They extended the study of an inverse solution of a system of fuzzy relation equations with max-product composition. They provided theoretical results for determining the complete sets of solutions as well as the conditions for the existence of resolutions. Their results showed that such complete sets of solutions can be characterized by one maximum solution and a number of minimal solutions. An optimization problem with max-product composition was studied by Loetamonphong and Fang, Loetamonphong (2001), which was improved by Jian-Xin by shrinking the search region, Jian-Xin (2008). The linear objective optimization problem with FRI was investigated by Zhang et al. (2003), where the fuzzy operator is considered as the max-min composition. Also, Guo and Xia presented an algorithm to accelerate the resolution of this problem, Guo and Xia (2006). Zener, Duffin and Peterson proposed geometric programming theory in 1961, Duffin et al. (1967), Peterson (1976). A large number of applications can be found in business administration, economic analysis, resource allocation, and environmental engineering, Zener (1971). In 1987, Cao proposed the fuzzy geometric programming problem, Cao (2001). He solved several problems of power systems, Cao (1999). Liu applied it to economic management, Liu (2004). Verma and Biswal have also applied the theory: Biswal (1992), Verma (1990). In view of the importance of geometric programming and the fuzzy relation equation in theory and applications, Yang and Cao have proposed fuzzy relation geometric programming and discussed optimal solutions for two kinds of objective functions based on the fuzzy max-product operator, Yang and Cao (2005a), Yang and Cao (2005b).

In this paper, we generalize the geometric programming of FRE with the max-product operator, Yang and Cao (2005b), by considering fuzzy relation inequalities instead of equations in the constraints. This problem can be formulated as follows:
$$\min Z=\max_{j=1,2,3,\dots,n}\{c_j\cdot x_j^{\gamma_j}\}$$
subject to
$$A\circ x\ge d^1,\qquad B\circ x\le d^2,\qquad x\in[0,1]^n, \tag{1}$$
where $c_j,\gamma_j\in\mathbb{R}$, $c_j>0$, and $A=(a_{ij})_{m\times n}$, $a_{ij}\in[0,1]$, and $B=(b_{ij})_{l\times n}$, $b_{ij}\in[0,1]$, are fuzzy matrices, $d^1=(d_i^1)_{m\times 1}\in[0,1]^m$ and $d^2=(d_i^2)_{l\times 1}\in[0,1]^l$ are fuzzy vectors, $c=(c_j)_{n\times 1}\in\mathbb{R}^n$ is the vector of cost coefficients, $x=(x_j)_{n\times 1}\in[0,1]^n$ is an unknown vector, and "$\circ$" denotes the fuzzy max-product operator, defined below. Problem (1) can be rewritten in detail as the following problem:
$$\min Z=\max_{j\in J}\{c_j\cdot x_j^{\gamma_j}\}$$
subject to
$$a_i\circ x\ge d_i^1,\quad i\in I^1=\{1,2,\dots,m\},$$
$$b_i\circ x\le d_i^2,\quad i\in I^2=\{1,2,\dots,l\}, \tag{2}$$
$$0\le x_j\le 1,\quad j\in J=\{1,2,\dots,n\},$$
where $a_i$ and $b_i$ are the $i$th rows of the matrices $A$ and $B$, respectively, and the constraints are expressed by the max-product operator definition as
$$a_i\circ x=\max_{j\in J}\{a_{ij}\cdot x_j\}\ge d_i^1,\quad i\in I^1,$$
$$b_i\circ x=\max_{j\in J}\{b_{ij}\cdot x_j\}\le d_i^2,\quad i\in I^2. \tag{3}$$
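To make the max-product composition in (3) concrete, the following short Python sketch (ours, not from the paper; function names are our own) evaluates $A\circ x$ and checks membership in the feasible set of problem (2):

```python
def max_product(A, x):
    """Max-product composition: (A o x)_i = max_j a_ij * x_j."""
    return [max(a * xj for a, xj in zip(row, x)) for row in A]

def feasible(A, d1, B, d2, x, eps=1e-12):
    """Check the constraints of problem (2): A o x >= d1 and B o x <= d2."""
    return (all(v >= di - eps for v, di in zip(max_product(A, x), d1)) and
            all(v <= di + eps for v, di in zip(max_product(B, x), d2)))
```

For instance, with $A=[[0.5,0.6]]$ and $x=[0.5,1.0]$, `max_product` returns `[0.6]`, since $\max\{0.5\cdot 0.5,\ 0.6\cdot 1.0\}=0.6$.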
In section 2, the set of feasible solutions of problem (2) and its properties are studied. A necessary condition and a sufficient condition are given for the feasibility of problem (2). In section 3, some simplification operations are presented to accelerate the resolution process. In section 4, an algorithm is introduced to solve the problem using the results of the previous sections, and two practical examples are given to illustrate the algorithm. Finally, a conclusion is stated in section 5.
2. The Characteristics of the Set of Feasible Solutions

Notation:
Throughout the paper, we use the following notation:
$$S(A,d^1)_i=\{x\in[0,1]^n : a_i\circ x\ge d_i^1\}\quad\text{for each } i\in I^1,$$
$$S(B,d^2)_i=\{x\in[0,1]^n : b_i\circ x\le d_i^2\}\quad\text{for each } i\in I^2,$$
$$S(A,d^1)=\bigcap_{i\in I^1}S(A,d^1)_i=\{x\in[0,1]^n : A\circ x\ge d^1\},$$
$$S(B,d^2)=\bigcap_{i\in I^2}S(B,d^2)_i=\{x\in[0,1]^n : B\circ x\le d^2\},$$
$$S(A,B,d^1,d^2)=S(A,d^1)\cap S(B,d^2)=\{x\in[0,1]^n : A\circ x\ge d^1,\ B\circ x\le d^2\}.$$
Corollary 1:
$x\in S(A,d^1)_i$ for each $i\in I^1$ if and only if there exists some $j_i\in J$ such that $x_{j_i}\ge\frac{d_i^1}{a_{ij_i}}$; similarly, $x\in S(B,d^2)_i$ for each $i\in I^2$ if and only if $x_j\le\frac{d_i^2}{b_{ij}}$, $\forall j\in J$.

Proof:
This follows directly from relations (3).
Lemma 1:
(a) $S(A,d^1)\neq\emptyset$ if and only if for each $i\in I^1$ there exists some $j_i\in J$ such that $a_{ij_i}\ge d_i^1$.
(b) If $S(A,d^1)\neq\emptyset$, then $\mathbf{1}=[1,1,\dots,1]^t_{1\times n}$ is the greatest element of the set $S(A,d^1)$.

Proof:
(a) Suppose $S(A,d^1)\neq\emptyset$ and $x\in S(A,d^1)$. Then $x\in S(A,d^1)_i$, $\forall i\in I^1$, and so for each $i\in I^1$ we have $x_{j_i}\ge\frac{d_i^1}{a_{ij_i}}$ for some $j_i\in J$ by Corollary 1. Since $x\in S(A,d^1)$ implies $x\in[0,1]^n$, it follows that $\frac{d_i^1}{a_{ij_i}}\le 1$, $\forall i\in I^1$, which implies that there is a $j_i\in J$ such that $a_{ij_i}\ge d_i^1$, $\forall i\in I^1$. Conversely, suppose that there exists some $j_i\in J$ such that $a_{ij_i}\ge d_i^1$, $\forall i\in I^1$. Set $x=\mathbf{1}=[1,1,\dots,1]^t_{1\times n}$; since $x\in[0,1]^n$ and $x_{j_i}=1\ge\frac{d_i^1}{a_{ij_i}}$, $\forall i\in I^1$, Corollary 1 gives $x\in S(A,d^1)_i$, $\forall i\in I^1$, and hence $x\in S(A,d^1)$.
(b) This follows from part (a) and Corollary 1.
Lemma 2:
(a) $S(B,d^2)\neq\emptyset$.
(b) The smallest element of the set $S(B,d^2)$ is $\mathbf{0}=[0,0,\dots,0]^t_{1\times n}$.

Proof:
Set $x=\mathbf{0}=[0,0,\dots,0]^t_{1\times n}$. Since $d_i^2\ge 0$ and $b_{ij}\ge 0$ (in the case $b_i=0$ the constraint is always satisfied and the claim is clear), we have $x_j=0\le\frac{d_i^2}{b_{ij}}$, $\forall i\in I^2$, $\forall j\in J$. Then Corollary 1 implies $x\in S(B,d^2)$, which proves parts (a) and (b).
Theorem 1 (Necessary Condition):
If $S(A,B,d^1,d^2)\neq\emptyset$, then for each $i\in I^1$ there exists $j\in J$ such that $a_{ij}\ge d_i^1$.

Proof:
Suppose $S(A,B,d^1,d^2)\neq\emptyset$. Since $S(A,B,d^1,d^2)=S(A,d^1)\cap S(B,d^2)$, we have $S(A,d^1)\neq\emptyset$, and the theorem follows from part (a) of Lemma 1.
Definition 1:
Set $\bar{x}=(\bar{x}_j)_{n\times 1}$, where
$$\bar{x}_j=\begin{cases}1, & b_{ij}\le d_i^2\ \ \forall i\in I^2,\\[4pt] \min\limits_{\{i=1,\dots,l\,:\,b_{ij}>d_i^2\}}\left\{\dfrac{d_i^2}{b_{ij}}\right\}, & \text{otherwise.}\end{cases}$$
Lemma 3:
If $S(B,d^2)\neq\emptyset$, then $\bar{x}$ is the greatest element of the set $S(B,d^2)$.

Proof:
See Shivanian and Khorram (2010).
Corollary 2:
$S(B,d^2)=\{x\in[0,1]^n : B\circ x\le d^2\}=[\mathbf{0},\bar{x}]$, in which $\bar{x}$ and $\mathbf{0}$ are as defined in Definition 1 and Lemma 2, respectively.

Proof:
Since $S(B,d^2)\neq\emptyset$, $\mathbf{0}$ and $\bar{x}$ are its unique smallest and greatest elements, respectively, by Lemmas 2 and 3. Let $x\in[\mathbf{0},\bar{x}]$. Then $x\in[0,1]^n$ and $x\le\bar{x}$, so $b_i\circ x\le b_i\circ\bar{x}\le d_i^2$, $\forall i\in I^2$, which implies $x\in S(B,d^2)$. Conversely, let $x\in S(B,d^2)$. By part (b) of Lemma 2, $\mathbf{0}\le x$, and also $x\in S(B,d^2)_i$, $\forall i\in I^2$. Then Corollary 1 requires $x_j\le\frac{d_i^2}{b_{ij}}$, $\forall i\in I^2$, $\forall j\in J$. Hence $x_j\le\bar{x}_j$, $\forall j\in J$, which means $x\le\bar{x}$. Therefore $x\in[\mathbf{0},\bar{x}]$.
Definition 2:
Let $J_i=\{j\in J : a_{ij}\ge d_i^1\}$, $i\in I^1$. For each $j\in J_i$, we define $\check{x}_i(j)=(\check{x}_i(j)_k)_{n\times 1}$ such that
$$\check{x}_i(j)_k=\begin{cases}\dfrac{d_i^1}{a_{ij}}, & k=j,\\[4pt] 0, & k\neq j.\end{cases}$$
Lemma 4:
Consider a fixed $i\in I^1$.
(a) If $d_i^1\neq 0$, then the vectors $\check{x}_i(j)$, $j\in J_i$, are the only minimal elements of $S(A,d^1)_i$.
(b) If $d_i^1=0$, then $\mathbf{0}$ is the smallest element of $S(A,d^1)_i$.

Proof:
(a) Suppose $j\in J_i$ and $i\in I^1$. Since $\check{x}_i(j)_j=\frac{d_i^1}{a_{ij}}$, we have $\check{x}_i(j)\in S(A,d^1)_i$ by Corollary 1. By contradiction, suppose $x\in S(A,d^1)_i$ with $x\le\check{x}_i(j)$ and $x\neq\check{x}_i(j)$. Then $x_j<\frac{d_i^1}{a_{ij}}$ and $x_k=0$ for $k\in J$, $k\neq j$. Hence, since $d_i^1\neq 0$, we have $x_{j'}<\frac{d_i^1}{a_{ij'}}$ for every $j'\in J$, and then $x\notin S(A,d^1)_i$ by Corollary 1, which is a contradiction. Thus each $\check{x}_i(j)$ is minimal. Moreover, every $x\in S(A,d^1)_i$ satisfies $x_j\ge\frac{d_i^1}{a_{ij}}$ for some $j\in J_i$ by Corollary 1, and therefore $x\ge\check{x}_i(j)$; hence the $\check{x}_i(j)$ are the only minimal elements.
(b) Clear from Corollary 1 and the fact that $x_j\ge 0$, $\forall j\in J$.
Corollary 3:
If $S(A,d^1)_i\neq\emptyset$, then $S(A,d^1)_i=\{x\in[0,1]^n : a_i\circ x\ge d_i^1\}=\bigcup_{j\in J_i}[\check{x}_i(j),\mathbf{1}]$, where $i\in I^1$ and $\check{x}_i(j)$ is as defined in Definition 2.

Proof:
If $S(A,d^1)_i\neq\emptyset$, then from Lemmas 1 and 4 the vector $\mathbf{1}$ is the maximum solution and the vectors $\check{x}_i(j)$, $j\in J_i$, are the minimal solutions in $S(A,d^1)_i$. Let $x\in\bigcup_{j\in J_i}[\check{x}_i(j),\mathbf{1}]$. Then $x\in[\check{x}_i(j),\mathbf{1}]$ for some $j\in J_i$, and so $x\in[0,1]^n$ and $x_j\ge\check{x}_i(j)_j=\frac{d_i^1}{a_{ij}}$ by Definition 2; hence $x\in S(A,d^1)_i$ by Corollary 1. Conversely, let $x\in S(A,d^1)_i$. Then there exists some $j'\in J$ such that $x_{j'}\ge\frac{d_i^1}{a_{ij'}}$ by Corollary 1. Since $x\in[0,1]^n$, we have $\frac{d_i^1}{a_{ij'}}\le 1$, and then $j'\in J_i$. Therefore $\check{x}_i(j')\le x\le\mathbf{1}$, which implies $x\in\bigcup_{j\in J_i}[\check{x}_i(j),\mathbf{1}]$.
Definition 3:
Let $e=(e(1),e(2),\dots,e(m))\in J_1\times J_2\times\cdots\times J_m$, i.e., $e(i)=j\in J_i$. We define $x(e)=(x(e)_j)_{n\times 1}$, in which
$$x(e)_j=\max_{i\in I_j^e}\{\check{x}_i(e(i))_j\}=\max_{i\in I_j^e}\left\{\frac{d_i^1}{a_{ij}}\right\}\ \text{ if } I_j^e\neq\emptyset, \qquad x(e)_j=0\ \text{ if } I_j^e=\emptyset,$$
where $I_j^e=\{i\in I^1 : e(i)=j\}$.
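For small instances, the vectors $x(e)$ of Definition 3 can be enumerated by brute force. The following Python sketch (ours, not from the paper; it is exponential in $m$, so it is for illustration only):

```python
from itertools import product

def candidate_vectors(A, d1):
    """Enumerate x(e) for every e in J_1 x ... x J_m (Definition 3),
    where J_i = {j : a_ij >= d1_i} as in Definition 2."""
    m, n = len(A), len(A[0])
    J = [[j for j in range(n) if A[i][j] >= d1[i]] for i in range(m)]
    vectors = []
    for e in product(*J):              # e[i] picks a column e(i) for row i
        x = [0.0] * n
        for i, j in enumerate(e):      # x(e)_j = max over I_j^e of d1_i / a_ij
            x[j] = max(x[j], d1[i] / A[i][j])
        vectors.append(x)
    return vectors
```

For a single row $A=[[0.5,0.8]]$ with $d^1=[0.4]$, the two candidates are $(0.8,0)$ and $(0,0.5)$.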
Corollary 4:
(a) If $d_i^1=0$ for some $i\in I^1$, then we can remove the $i$th row of matrix $A$ with no effect on the calculation of the vectors $x(e)$, $e\in J_I=J_1\times J_2\times\cdots\times J_m$.
(b) If $j\notin J_i$, $\forall i\in I^1$, then we can remove the $j$th column of the matrix $A$ before calculating the vectors $x(e)$, $e\in J_I$, and set $x(e)_j=0$ for each $e\in J_I$.

Proof:
(a) This follows from Definition 3 and part (b) of Lemma 4, because we will still obtain the minimal elements of $S(A,d^1)$.
(b) This follows directly from Definition 3.
Lemma 5:
Suppose $S(A,d^1)\neq\emptyset$. Then $S(A,d^1)=\bigcup_{x(e)\in X(e)}[x(e),\mathbf{1}]$, where $X(e)=\{x(e) : e\in J_I\}$.

Proof:
If $S(A,d^1)\neq\emptyset$, then $S(A,d^1)_i\neq\emptyset$, $\forall i\in I^1$. We first show that each $x(e)$ is a solution, i.e., $x(e)\in S(A,d^1)$. Suppose $i\in I^1$ is fixed and $e(i)=j\in J_i$. Then, by Definition 3,
$$x(e)_j=\max_{i'\in I_j^e}\left\{\frac{d_{i'}^1}{a_{i'j}}\right\}\ \ge\ \frac{d_i^1}{a_{ij}},$$
and so $a_{ij}\,x(e)_j\ge d_i^1$; hence $x(e)\in S(A,d^1)_i$ for every $i\in I^1$, and therefore $x(e)\in S(A,d^1)$. Moreover, from Corollary 3 and Definition 3,
$$S(A,d^1)=\bigcap_{i\in I^1}S(A,d^1)_i=\bigcap_{i\in I^1}\Big[\bigcup_{j\in J_i}[\check{x}_i(j),\mathbf{1}]\Big]=\bigcap_{i\in I^1}\Big[\bigcup_{e(i)\in J_i}[\check{x}_i(e(i)),\mathbf{1}]\Big]$$
$$=\bigcup_{e\in J_I}\Big[\bigcap_{i\in I^1}[\check{x}_i(e(i)),\mathbf{1}]\Big]=\bigcup_{e\in J_I}\Big[\max_{i\in I^1}\{\check{x}_i(e(i))\},\mathbf{1}\Big]=\bigcup_{e\in J_I}[x(e),\mathbf{1}]=\bigcup_{x(e)\in X(e)}[x(e),\mathbf{1}].$$

From Lemma 5, it is obvious that $S(A,d^1)=\bigcup_{x(e)\in X^0(e)}[x(e),\mathbf{1}]$ and $X^0(e)=S^0(A,d^1)$, where $X^0(e)$ and $S^0(A,d^1)$ are the sets of minimal solutions in $X(e)$ and $S(A,d^1)$, respectively.
Theorem 2:
If $S(A,B,d^1,d^2)\neq\emptyset$, then $S(A,B,d^1,d^2)=\bigcup_{x(e)\in X^0(e)}[x(e),\bar{x}]$.

Proof:
By using Corollary 2 and the result of Lemma 5, we have
$$S(A,B,d^1,d^2)=S(A,d^1)\cap S(B,d^2)=\Big\{\bigcup_{x(e)\in X^0(e)}[x(e),\mathbf{1}]\Big\}\cap[\mathbf{0},\bar{x}]=\bigcup_{x(e)\in X^0(e)}[x(e),\bar{x}],$$
and the proof is complete.
Corollary 5 (Necessary and Sufficient Condition):
$S(A,B,d^1,d^2)\neq\emptyset$ if and only if $\bar{x}\in S(A,d^1)$. Equivalently, $S(A,B,d^1,d^2)\neq\emptyset$ if and only if there exists some $e\in J_I$ such that $x(e)\le\bar{x}$.

Proof:
Suppose $S(A,B,d^1,d^2)\neq\emptyset$. Then $S(A,B,d^1,d^2)=\bigcup_{x(e)\in X^0(e)}[x(e),\bar{x}]$ by Theorem 2, so $\bar{x}\in S(A,B,d^1,d^2)$, and hence $\bar{x}\in S(A,d^1)$. Conversely, let $\bar{x}\in S(A,d^1)$. Meanwhile, we know $\bar{x}\in S(B,d^2)$; therefore $\bar{x}\in S(A,d^1)\cap S(B,d^2)=S(A,B,d^1,d^2)$.
3. Simplification Operations and the Resolution Algorithm

In order to solve problem (1), we first convert it into the two sub-problems below:
$$\text{(4a)}\qquad \min Z=\max_{j\in R^+}\{c_j\cdot x_j^{\gamma_j}\}\quad\text{s.t.}\quad A\circ x\ge d^1,\ B\circ x\le d^2,\ x\in[0,1]^n,$$
$$\text{(4b)}\qquad \min Z=\max_{j\in R^-}\{c_j\cdot x_j^{\gamma_j}\}\quad\text{s.t.}\quad A\circ x\ge d^1,\ B\circ x\le d^2,\ x\in[0,1]^n,$$
where $R^+=\{j\in J : \gamma_j\ge 0\}$ and $R^-=\{j\in J : \gamma_j<0\}$.

Lemma 6:
The optimal solution of problem (4b) is $\bar{x}$ of Definition 1.

Proof:
In the objective function of (4b), $\gamma_j<0$; therefore $x_j^{\gamma_j}$ is a monotone decreasing function of $x_j$ on the interval $0\le x_j\le 1$ for each $j\in R^-$, and so is $\max_{j\in R^-}\{c_j\cdot x_j^{\gamma_j}\}$. Hence $\bar{x}$ is an optimal solution, because $\bar{x}$ is the greatest element of the set $S(A,B,d^1,d^2)$.
Lemma 7:
An optimal solution of problem (4a) belongs to $X^0(e)$.

Proof:
In the objective function of (4a), $\gamma_j\ge 0$; therefore $x_j^{\gamma_j}$ is a monotone increasing function of $x_j$ on the interval $0\le x_j\le 1$ for each $j\in R^+$, and so is $\max_{j\in R^+}\{c_j\cdot x_j^{\gamma_j}\}$. Now suppose that $y\in S(A,B,d^1,d^2)$ is selected arbitrarily. Then there exists $x(e_0)\in X^0(e)$ such that $y\ge x(e_0)$. Since $\max_{j\in R^+}\{c_j\cdot x_j^{\gamma_j}\}$ is a monotone increasing function of each $x_j$,
$$\max_{j\in R^+}\{c_j\cdot y_j^{\gamma_j}\}\ \ge\ \max_{j\in R^+}\{c_j\cdot x(e_0)_j^{\gamma_j}\}.$$
Therefore, one of the elements of $X^0(e)$ is an optimal solution of problem (4a).
Theorem 3:
Assume that $x(e_0)$ is an optimal solution of problem (4a) (it need not be unique). Then the optimal solution of problem (1) is $x^*$, defined as follows:
$$x_j^*=\begin{cases}\bar{x}_j, & j\in R^-,\\ x(e_0)_j, & j\in R^+.\end{cases}$$

Proof:
Suppose $x\in S(A,B,d^1,d^2)$. Then, by Lemmas 6 and 7, we have
$$\max_{j\in J}\{c_j\cdot x_j^{\gamma_j}\}=\max\Big\{\max_{j\in R^+}\{c_j\cdot x_j^{\gamma_j}\},\ \max_{j\in R^-}\{c_j\cdot x_j^{\gamma_j}\}\Big\}$$
$$\ge\ \max\Big\{\max_{j\in R^+}\{c_j\cdot x(e_0)_j^{\gamma_j}\},\ \max_{j\in R^-}\{c_j\cdot \bar{x}_j^{\gamma_j}\}\Big\}=\max_{j\in J}\{c_j\cdot (x_j^*)^{\gamma_j}\}.$$
Therefore, $x^*$ is an optimal solution of problem (1), and the proof is complete.

To calculate $x^*$ it is sufficient, by Theorem 3, to find $\bar{x}$ and $x(e_0)$. While $\bar{x}$ is easily obtained from Definition 1, $x(e_0)$ is usually hard to find. Since $X^0(e)$ is obtained by pairwise comparison of the members of the set $X(e)$, the process of finding $X^0(e)$ is time-consuming if $X(e)$ has many members. Therefore, a simplification operation can accelerate the resolution of problem (4a) by removing the vectors $e\in J_I$ for which $x(e)$ cannot be optimal in (4a). One such operation is given by Corollary 4. Other operations are obtained from the theorems below.
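Once $\bar{x}$ and $x(e_0)$ are known, assembling $x^*$ per Theorem 3 is a one-liner. A Python sketch (ours, not from the paper):

```python
def combine_optimum(xbar, xe0, gamma):
    """Theorem 3: x*_j = xbar_j when gamma_j < 0, else x(e0)_j."""
    return [xbar[j] if gamma[j] < 0 else xe0[j] for j in range(len(gamma))]
```

For instance, with $\bar{x}=(0.4,1,\frac23,0.5)$, $x(e_0)=(0,0,\frac23,0)$ and $\gamma=(-\frac12,\frac32,\frac12,-2)$ (the data of Example 1 as we read it), this returns $(0.4,0,\frac23,0.5)$.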
Theorem 4:
The set of feasible solutions of problem (1), namely $S(A,B,d^1,d^2)$, is nonempty if and only if, for each $i\in I^1$, the set $\tilde{J}_i=\Big\{j\in J_i : \frac{d_i^1}{a_{ij}}\le\bar{x}_j\Big\}$ is nonempty, where $\bar{x}$ is defined by Definition 1.

Proof:
Suppose $S(A,B,d^1,d^2)\neq\emptyset$. From Corollary 5, $\bar{x}\in S(A,B,d^1,d^2)$, and then $\bar{x}\in S(A,d^1)_i$, $\forall i\in I^1$. Thus, for each $i\in I^1$ there exists some $j\in J$ such that $\bar{x}_j\ge\frac{d_i^1}{a_{ij}}$ by Corollary 1, which means $\tilde{J}_i\neq\emptyset$, $\forall i\in I^1$. Conversely, suppose $\tilde{J}_i\neq\emptyset$, $\forall i\in I^1$. Then for each $i\in I^1$ there exists some $j\in J$ such that $\bar{x}_j\ge\frac{d_i^1}{a_{ij}}$. Hence $\bar{x}\in S(A,d^1)_i$, $\forall i\in I^1$, from Corollary 1, which implies $\bar{x}\in S(A,d^1)$. These facts together with Lemma 3 imply $\bar{x}\in S(A,B,d^1,d^2)$, and therefore $S(A,B,d^1,d^2)\neq\emptyset$.
Theorem 5:
If $S(A,B,d^1,d^2)\neq\emptyset$, then $S(A,B,d^1,d^2)=\bigcup_{x(e)\in\tilde{X}(e)}[x(e),\bar{x}]$, where $\tilde{X}(e)=\{x(e) : e\in\tilde{J}_I=\tilde{J}_1\times\tilde{J}_2\times\cdots\times\tilde{J}_m\}$.

Proof:
By Theorem 2, it is sufficient to show that $x(e)\notin S(A,B,d^1,d^2)$ if $e\in J_I\setminus\tilde{J}_I$. Suppose $e\in J_I\setminus\tilde{J}_I$. Then there exist $i'\in I^1$ and $j'\in J_{i'}\setminus\tilde{J}_{i'}$ such that $e(i')=j'$ and $\bar{x}_{j'}<\frac{d_{i'}^1}{a_{i'j'}}$. Then $i'\in I_{j'}^e$, and by Definition 3 we have
$$x(e)_{j'}=\max_{i\in I_{j'}^e}\left\{\frac{d_i^1}{a_{ij'}}\right\}\ \ge\ \frac{d_{i'}^1}{a_{i'j'}}\ >\ \bar{x}_{j'}.$$
Therefore $x(e)\le\bar{x}$ does not hold, which implies $x(e)\notin S(A,B,d^1,d^2)$ by Theorem 2.

From the notation of Theorem 4, $\tilde{J}_i\subseteq J_i$, $\forall i\in I^1$, which implies $\tilde{X}(e)\subseteq X(e)$; also, $S^0(A,B,d^1,d^2)\subseteq\tilde{X}(e)$ by Theorem 5, in which $S^0(A,B,d^1,d^2)$ denotes the set of minimal elements of $S(A,B,d^1,d^2)$. Thus Theorem 5 reduces the region of search for finding the set $S^0(A,B,d^1,d^2)$.
Definition 4:
We define $\hat{J}_i=\{j : j\in R^-\ \text{and}\ j\in\tilde{J}_i\}$ for $i\in I^1$.
Theorem 6:
Suppose $x(e_0)$ is an optimal solution of (4a) and $\hat{J}_{i'}\neq\emptyset$ for some $i'\in I^1$. Then there exists $x(e)$ such that $e(i')\in\hat{J}_{i'}$, and $x(e)$ is also an optimal solution of (4a).

Proof:
Suppose $\hat{J}_{i'}\neq\emptyset$ for some $i'\in I^1$ and $e_0(i')=j'$. Define $e\in\tilde{J}_I$ such that $e(i')=k\in\hat{J}_{i'}$ and $e(i)=e_0(i)$ for each $i\in I^1$, $i\neq i'$. From Definition 3 we have
$$x(e_0)_{j'}=\max_{i\in I_{j'}^{e_0}}\left\{\frac{d_i^1}{a_{ij'}}\right\}\ \ge\ \max_{i\in I_{j'}^{e_0},\,i\neq i'}\left\{\frac{d_i^1}{a_{ij'}}\right\}=x(e)_{j'}.$$
Also, $x(e_0)_j=x(e)_j$ for each $j\in J$, $j\neq j',k$. Therefore, noting that $k\in R^-$, we have
$$\max_{j\in R^+}\{c_j\cdot x(e_0)_j^{\gamma_j}\}=\max\Big\{c_{j'}\cdot x(e_0)_{j'}^{\gamma_{j'}},\ \max_{j\in R^+,\,j\neq j'}\{c_j\cdot x(e_0)_j^{\gamma_j}\}\Big\}$$
$$\ge\ \max\Big\{c_{j'}\cdot x(e)_{j'}^{\gamma_{j'}},\ \max_{j\in R^+,\,j\neq j'}\{c_j\cdot x(e)_j^{\gamma_j}\}\Big\}=\max_{j\in R^+}\{c_j\cdot x(e)_j^{\gamma_j}\}.$$
Therefore $x(e)$ is also an optimal solution of (4a), and the proof is complete.

Corollary 6:
If $\hat{J}_i\neq\emptyset$ for some $i\in I^1$, then we can remove the $i$th row of matrix $A$ without any effect on finding an optimal solution of problem (4a).
Definition 5:
Let $j_1,j_2\in J$ with $\gamma_{j_1}\ge 0$ and $\gamma_{j_2}\ge 0$. We say $j_2$ dominates $j_1$ if and only if
(a) $j_1\in J_i$ implies $j_2\in J_i$, $\forall i\in I^1$;
(b) for each $i\in I^1$ such that $j_1\in J_i$, we have
$$c_{j_1}\cdot\Big(\frac{d_i^1}{a_{ij_1}}\Big)^{\gamma_{j_1}}\ \ge\ c_{j_2}\cdot\Big(\frac{d_i^1}{a_{ij_2}}\Big)^{\gamma_{j_2}}.$$

Theorem 7:
Suppose $x(e_0)$ is an optimal solution of (4a) and $j_2$ dominates $j_1$ for some $j_1,j_2\in R^+$ (note that $\gamma_{j_1}\ge 0$ and $\gamma_{j_2}\ge 0$). Then there exists $x(e)$ such that $I_{j_1}^e=\emptyset$, and $x(e)$ is also an optimal solution of (4a).
Proof:
Define $e=(e(i))_{m\times 1}$ such that
$$e(i)=\begin{cases}e_0(i), & i\notin I_{j_1}^{e_0},\\[2pt] j_2, & i\in I_{j_1}^{e_0}.\end{cases}$$
It is obvious that $I_{j_1}^{e}=\emptyset$ and then $x(e)_{j_1}=0$. Also, $x(e_0)_j=x(e)_j$ for each $j\in J$, $j\neq j_1,j_2$. From Definition 3, $x(e)_{j_2}=\frac{d_{i_0}^1}{a_{i_0j_2}}$ for some $i_0\in I_{j_2}^{e}=I_{j_2}^{e_0}\cup I_{j_1}^{e_0}$, and $x(e_0)_{j_2}\le x(e)_{j_2}=\frac{d_{i_0}^1}{a_{i_0j_2}}$.

Now, if $i_0\notin I_{j_1}^{e_0}$, then $x(e)_{j_2}=x(e_0)_{j_2}$ and
$$\max_{j\in R^+}\{c_j\cdot x(e_0)_j^{\gamma_j}\}=\max\Big\{c_{j_1}\cdot x(e_0)_{j_1}^{\gamma_{j_1}},\ \max_{j\in R^+,\,j\neq j_1}\{c_j\cdot x(e_0)_j^{\gamma_j}\}\Big\}\ \ge\ \max_{j\in R^+,\,j\neq j_1}\{c_j\cdot x(e_0)_j^{\gamma_j}\}=\max_{j\in R^+}\{c_j\cdot x(e)_j^{\gamma_j}\},$$
and the proof is complete in this case.

Otherwise, suppose $i_0\in I_{j_1}^{e_0}$. We show that $\max_{j\in R^+}\{c_j\cdot x(e_0)_j^{\gamma_j}\}\ge\max_{j\in R^+}\{c_j\cdot x(e)_j^{\gamma_j}\}$. Since
$$\max_{j\in R^+}\{c_j\cdot x(e_0)_j^{\gamma_j}\}=\max\Big\{c_{j_1}\cdot x(e_0)_{j_1}^{\gamma_{j_1}},\ c_{j_2}\cdot x(e_0)_{j_2}^{\gamma_{j_2}},\ \max_{j\in R^+,\,j\neq j_1,j_2}\{c_j\cdot x(e_0)_j^{\gamma_j}\}\Big\}$$
and
$$\max_{j\in R^+}\{c_j\cdot x(e)_j^{\gamma_j}\}=\max\Big\{c_{j_2}\cdot x(e)_{j_2}^{\gamma_{j_2}},\ \max_{j\in R^+,\,j\neq j_1,j_2}\{c_j\cdot x(e)_j^{\gamma_j}\}\Big\},$$
it is sufficient to show that $c_{j_1}\cdot x(e_0)_{j_1}^{\gamma_{j_1}}\ge c_{j_2}\cdot x(e)_{j_2}^{\gamma_{j_2}}$. By Definition 3, let $x(e_0)_{j_1}=\frac{d_{i'}^1}{a_{i'j_1}}$ for some $i'\in I_{j_1}^{e_0}$. Since $j_2$ dominates $j_1$, we have
$$c_{j_1}\cdot\Big(\frac{d_{i'}^1}{a_{i'j_1}}\Big)^{\gamma_{j_1}}\ \ge\ c_{j_2}\cdot\Big(\frac{d_{i'}^1}{a_{i'j_2}}\Big)^{\gamma_{j_2}},$$
which means $c_{j_1}\cdot x(e_0)_{j_1}^{\gamma_{j_1}}\ge c_{j_2}\cdot x(e)_{j_2}^{\gamma_{j_2}}$ if $i_0=i'$. Otherwise, suppose $i_0\neq i'$. Since $i_0\in I_{j_1}^{e_0}$ and $j_2$ dominates $j_1$, we have
$$c_{j_1}\cdot\Big(\frac{d_{i_0}^1}{a_{i_0j_1}}\Big)^{\gamma_{j_1}}\ \ge\ c_{j_2}\cdot\Big(\frac{d_{i_0}^1}{a_{i_0j_2}}\Big)^{\gamma_{j_2}}.$$
Also, by Definition 3, we have
$$x(e_0)_{j_1}=\max_{i\in I_{j_1}^{e_0}}\left\{\frac{d_i^1}{a_{ij_1}}\right\}\ \ge\ \frac{d_{i_0}^1}{a_{i_0j_1}},$$
and since $\gamma_{j_1}\ge 0$, this implies
$$\Big(\frac{d_{i'}^1}{a_{i'j_1}}\Big)^{\gamma_{j_1}}\ \ge\ \Big(\frac{d_{i_0}^1}{a_{i_0j_1}}\Big)^{\gamma_{j_1}}.$$
Therefore,
$$c_{j_1}\cdot x(e_0)_{j_1}^{\gamma_{j_1}}=c_{j_1}\cdot\Big(\frac{d_{i'}^1}{a_{i'j_1}}\Big)^{\gamma_{j_1}}\ \ge\ c_{j_1}\cdot\Big(\frac{d_{i_0}^1}{a_{i_0j_1}}\Big)^{\gamma_{j_1}}\ \ge\ c_{j_2}\cdot\Big(\frac{d_{i_0}^1}{a_{i_0j_2}}\Big)^{\gamma_{j_2}}=c_{j_2}\cdot x(e)_{j_2}^{\gamma_{j_2}},$$
which requires $c_{j_1}\cdot x(e_0)_{j_1}^{\gamma_{j_1}}\ge c_{j_2}\cdot x(e)_{j_2}^{\gamma_{j_2}}$. Hence $\max_{j\in R^+}\{c_j\cdot x(e_0)_j^{\gamma_j}\}\ge\max_{j\in R^+}\{c_j\cdot x(e)_j^{\gamma_j}\}$, and the proof is complete.
Corollary 7:
If $j_2$ dominates $j_1$ for some $j_1,j_2\in R^+$, then we can remove the $j_1$th column of the matrix $A$ without any effect on finding the optimal solution $x(e_0)$ of (4a).
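Definition 5's dominance test is mechanical to check. A Python sketch (ours, not from the paper; it assumes $d_i^1>0$ and $\gamma_{j_1},\gamma_{j_2}\ge 0$, as the definition requires):

```python
def dominates(j2, j1, A, d1, c, gamma):
    """Definition 5: j2 dominates j1 iff (a) every J_i containing j1 also
    contains j2, and (b) c_j1*(d1_i/a_ij1)**g_j1 >= c_j2*(d1_i/a_ij2)**g_j2
    for each such i.  Assumes d1_i > 0 and nonnegative exponents."""
    for i in range(len(A)):
        if A[i][j1] >= d1[i]:                     # j1 in J_i
            if A[i][j2] < d1[i]:                  # condition (a) fails
                return False
            lhs = c[j1] * (d1[i] / A[i][j1]) ** gamma[j1]
            rhs = c[j2] * (d1[i] / A[i][j2]) ** gamma[j2]
            if lhs < rhs:                         # condition (b) fails
                return False
    return True
```

For a single row $A=[[0.5,0.8]]$ with $d^1=[0.4]$, unit costs and unit exponents, column 2 dominates column 1 but not vice versa.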
4. Algorithm for Finding an Optimal Solution and Examples

Definition 6:
Consider problem (1). We call $\bar{A}=(\bar{a}_{ij})_{m\times n}$ and $\bar{B}=(\bar{b}_{ij})_{l\times n}$ the characteristic matrices of matrix $A$ and matrix $B$, respectively, where $\bar{a}_{ij}=\frac{d_i^1}{a_{ij}}$ for each $i\in I^1$ and $j\in J$, and $\bar{b}_{ij}=\frac{d_i^2}{b_{ij}}$ for each $i\in I^2$ and $j\in J$ (with the conventions $\frac{0}{0}=1$ and $\frac{k}{0}=\infty$ for $k>0$).
Algorithm:
Given problem (2):
1. Find the characteristic matrices $\bar{A}$ and $\bar{B}$ by Definition 6.
2. If there exists $i\in I^1$ such that $\bar{a}_{ij}>1$, $\forall j\in J$, then stop; problem (2) is infeasible (see Theorem 1).
3. Calculate $\bar{x}$ from $\bar{B}$ by Definition 1.
4. If there exists $i\in I^1$ such that $d_i^1=0$, then remove the $i$th row of matrix $\bar{A}$ (see part (a) of Corollary 4).
5. If $\bar{a}_{ij}>\bar{x}_j$, then set $\bar{a}_{ij}=0$, $i\in I^1$ and $j\in J$.
6. If there exists $i\in I^1$ such that $\bar{a}_{ij}=0$, $\forall j\in J$, then stop; problem (2) is infeasible (see Theorems 4 and 5).
7. If there exists $j'\in J$ such that $\bar{a}_{ij'}=0$, $\forall i\in I^1$, then remove the $j'$th column of matrix $\bar{A}$ (see part (b) of Corollary 4) and set $x(e_0)_{j'}=0$.
8. For each $i\in I^1$, if $\hat{J}_i\neq\emptyset$, then remove the $i$th row of matrix $\bar{A}$ (see Corollary 6).
9. Remove each column $j$ of $\bar{A}$ such that $j\in R^-$, and set $x(e_0)_j=0$.
10. If $j_2$ dominates $j_1$ ($j_1,j_2\in R^+$), then remove column $j_1$ from $\bar{A}$ (see Corollary 7) and set $x(e_0)_{j_1}=0$.
11. Let $J_i^{new}=\{j\in J_i : \bar{a}_{ij}\neq 0\}$ and $J_I^{new}=J_1^{new}\times J_2^{new}\times\cdots\times J_m^{new}$. Find the vectors $x(e)$, $e\in J_I^{new}$, by Definition 3 from $\bar{A}$, and find $x(e_0)$ by pairwise comparison of the vectors $x(e)$.
12. Find $x^*$ from Theorem 3.
Example 1:
We consider the problem below, which was given by J. H. Yang and B. Y. Cao, Yang and Cao (2005b), and solve it by the above algorithm. We will see that the results are the same.
$$\min Z=\max\Big\{0.4\cdot(x_1)^{-\frac12},\ 0.7\cdot(x_2)^{\frac32},\ 0.6\cdot(x_3)^{\frac12},\ 0.1\cdot(x_4)^{-2}\Big\}$$
subject to
$$\begin{bmatrix}0.5 & 0 & 0.6 & 0.8\\ 0.5 & 0.2 & 0 & 0.4\\ 0.2 & 0.1 & 0.3 & 0.2\end{bmatrix}\circ\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}=\begin{bmatrix}0.4\\0.2\\0.2\end{bmatrix},\qquad 0\le x_j\le 1,\ j=1,2,3,4.$$
This problem is a simple case of problem (1). The matrices $A$ and $B$ are equal in this problem, which means the constraints are $A\circ x\ge d$ and $A\circ x\le d$.

Step 1:
$$\bar{A}=\begin{bmatrix}0.8 & \infty & \frac23 & 0.5\\[2pt] 0.4 & 1 & \infty & 0.5\\[2pt] 1 & 2 & \frac23 & 1\end{bmatrix}.$$

Step 2: No row of $\bar{A}$ has all entries greater than 1, so the algorithm continues.

Step 3:
$$\bar{x}=[0.4,\ 1,\ \tfrac23,\ 0.5].$$

Step 4: No $d_i^1$ is zero; nothing to remove.

Step 5:
$$\bar{A}=\begin{bmatrix}0 & 0 & \frac23 & 0.5\\[2pt] 0.4 & 1 & 0 & 0.5\\[2pt] 0 & 0 & \frac23 & 0\end{bmatrix}.$$

Steps 6 and 7: There is no zero row and no zero column; nothing to do.

Step 8:
$\hat{J}_1=\{4\}$ and $\hat{J}_2=\{1,4\}$; therefore the first and second rows are removed. Then
$$\bar{A}=\begin{bmatrix}0 & 0 & \frac23 & 0\end{bmatrix}.$$

Step 9:
By this step, the first and fourth columns are removed ($R^-=\{1,4\}$), and the new matrix $\bar{A}$ is
$$\bar{A}=\begin{bmatrix}0 & \frac23\end{bmatrix}.$$

Step 10:
It is clear that $j_3$ dominates $j_2$ by Definition 5, so we remove the second column and obtain
$$\bar{A}=\begin{bmatrix}\frac23\end{bmatrix}.$$

Step 11:
$$x(e_0)=(0,\ 0,\ \tfrac23,\ 0).$$

Step 12:
$$x^*=(0.4,\ 0,\ \tfrac23,\ 0.5),$$
$$Z^*=\max\Big\{0.4\cdot(0.4)^{-\frac12},\ 0.7\cdot(0)^{\frac32},\ 0.6\cdot(\tfrac23)^{\frac12},\ 0.1\cdot(0.5)^{-2}\Big\}=\max\{0.63,\ 0,\ 0.48,\ 0.4\}=0.63.$$
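As a sanity check, the objective value reported in Example 1 can be recomputed numerically. A Python sketch (ours; the exponents are those we read off the problem statement):

```python
# Example 1 data: cost coefficients, exponents, and the optimal point x*.
c = [0.4, 0.7, 0.6, 0.1]
gamma = [-0.5, 1.5, 0.5, -2.0]
x_star = [0.4, 0.0, 2/3, 0.5]

# Z = max_j c_j * x_j^gamma_j, treating 0^g as 0 (here g > 0 whenever x_j = 0).
Z = max(cj * (xj ** gj if xj > 0 else 0.0)
        for cj, gj, xj in zip(c, gamma, x_star))
print(round(Z, 2))  # prints 0.63
```

The maximum is attained at the first term, $0.4\cdot(0.4)^{-1/2}\approx 0.63$, in agreement with the paper.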
Example 2:
Consider the problem below:
$$\min Z=\max\Big\{2\cdot(x_1)^{\frac12},\ 3\cdot(x_2)^{-1},\ (x_3)^{2},\ 3\cdot(x_4)^{-\frac12}\Big\}$$
subject to
$$\begin{bmatrix}0.5 & 0.8 & 0.35 & 0.25\\ 0.9 & 0.92 & 1 & 0.86\\ 0.2 & 1 & 0.45 & 0.8\\ 0.55 & 0.6 & 0.8 & 0.64\end{bmatrix}\circ\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}\ge\begin{bmatrix}0.4\\0.9\\0.8\\0.65\end{bmatrix},\qquad
\begin{bmatrix}0.6 & 0.5 & 0.1 & 0.1\\ 0.2 & 0.6 & 0.6 & 0.5\\ 0.5 & 0.9 & 0.8 & 0.4\end{bmatrix}\circ\begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix}\le\begin{bmatrix}0.48\\0.56\\0.72\end{bmatrix},$$
$$0\le x_j\le 1,\quad j=1,2,3,4.$$

Step 1:
The characteristic matrices $\bar{A}$ and $\bar{B}$ are as follows:
$$\bar{A}=\begin{bmatrix}0.8 & 0.5 & 1.14 & 1.6\\ 1 & 0.97 & 0.9 & 1.04\\ 4 & 0.8 & 1.77 & 1\\ 1.18 & 1.08 & 0.81 & 1.01\end{bmatrix},\qquad
\bar{B}=\begin{bmatrix}0.8 & 0.96 & 4.8 & 4.8\\ 2.8 & 0.93 & 0.93 & 1.12\\ 1.44 & 0.8 & 0.9 & 1.8\end{bmatrix}.$$

Step 2: No row of $\bar{A}$ has all entries greater than 1, so the algorithm continues.

Step 3:
$$\bar{x}=[0.8,\ 0.8,\ 0.9,\ 1].$$

Step 4: No $d_i^1$ is zero; nothing to remove.

Step 5:
By considering this step, matrix $\bar{A}$ is converted to the following:
$$\bar{A}=\begin{bmatrix}0.8 & 0.5 & 0 & 0\\ 0 & 0 & 0.9 & 0\\ 0 & 0.8 & 0 & 1\\ 0 & 0 & 0.81 & 0\end{bmatrix}.$$

Steps 6 and 7: There is no zero row and no zero column; nothing to do.

Step 8:
According to this step, since $\hat{J}_1=\{2\}$ and $\hat{J}_3=\{2,4\}$, we can remove the first and third rows; then we have
$$\bar{A}=\begin{bmatrix}0 & 0 & 0.9 & 0\\ 0 & 0 & 0.81 & 0\end{bmatrix}.$$

Step 9:
By this step, we remove the second and fourth columns ($R^-=\{2,4\}$) and set $x(e_0)_2=x(e_0)_4=0$; then
$$\bar{A}=\begin{bmatrix}0 & 0.9\\ 0 & 0.81\end{bmatrix}.$$

Step 10:
The first column is removed by this step:
$$\bar{A}=\begin{bmatrix}0.9\\ 0.81\end{bmatrix}.$$

Step 11:
$$x(e_0)=(0,\ 0,\ 0.9,\ 0).$$

Step 12:
By this, $x^*$ and $Z^*$ are calculated as follows:
$$x^*=(0,\ 0.8,\ 0.9,\ 1),$$
$$Z^*=\max\Big\{2\cdot(0)^{\frac12},\ 3\cdot(0.8)^{-1},\ (0.9)^{2},\ 3\cdot(1)^{-\frac12}\Big\}=3.75.$$
5. Conclusion
In this paper, we have studied geometric programming with fuzzy relation inequality constraints defined by the max-product operator. Since the difficulty of this problem lies in finding the minimal solutions, an algorithm optimizing the problem with the objective function $\min Z=\max_{j\in J}\{c_j\cdot x_j^{\gamma_j}\}$ has been presented, together with some simplification operations to accelerate the problem resolution. We have also given two examples to illustrate the proposed algorithm.
Acknowledgement
The authors would like to thank the three anonymous referees, whose careful review and helpful constructive comments have improved this paper.
REFERENCES
Abbasi-Molai, A. (2010). Fuzzy linear objective function optimization with fuzzy-valued max-product fuzzy relation inequality constraints, Mathematical and Computer Modelling, Vol. 51, No. 9-10.
Abbasi-Molai, A. (2010). Two new algorithms for solving optimization problems with one linear
objective function and finitely many constraints of fuzzy relation inequalities, Journal of
Computational and Applied Mathematics, Vol. 233, No. 8.
Alayón, S., Robertson, R., Warfield, S. K., and Ruiz-Alzola, J. (2007). A fuzzy system for
helping medical diagnosis of malformations of cortical development, Journal of Biomedical
Informatics, Vol. 40, No. 3.
Berrached, A., Beheshti, M., de Korvin, A. and Aló, R. (2002). Applying Fuzzy Relation
Equations to Threat Analysis, Proceedings of the 35th Hawaii International Conference on
System Sciences.
Biswal, M. P. (1992). Fuzzy programming technique to solve multi-objective geometric
programming problems. Fuzzy Sets and Systems, Vol. 51, No. 1.
Brouke, M. M. and Fisher, D. G. (1998). Solution algorithms for fuzzy relation equations with max-product composition, Fuzzy Sets and Systems, Vol. 94.
Cao, B. Y. (1999). Fuzzy geometric programming optimum seeking in power supply radius of
transformer substation, 1999 IEEE International Fuzzy Systems Conference Proceedings,
Vol. 3, Korea.
Cao, B. Y. (2001). Fuzzy geometric programming, Boston, Kluwer Academic Publishers.
Di Nola, A. and Russo, C. (2007). Lukasiewicz transform and its application to compression and reconstruction of digital images, Information Sciences, Vol. 177.
Di Nola, A., Pedrycz, W. and Sessa, S. (1984). Some theoretical aspects of fuzzy relation
equations describing fuzzy systems, Information Sciences, Vol. 34.
Dubois, D. and Prade, H. (1980). Fuzzy Sets and Systems: Theory and Applications, Academic Press, New York.
Duffin, R. J., Peterson, E. L. and Zener, C. (1967). Geometric Programming: Theory and Application, Wiley, New York.
Fang, S. C. and Li, G. (1999). Solving fuzzy relations equations with a linear objective function,
Fuzzy Sets and Systems, Vol. 103.
Fang, S. C. and Puthenpura, S. (1993). Linear Optimization and Extensions: Theory and
Algorithm, Prentice-Hall, Englewood Cliffs, NJ.
Ghodousian, A. and Khorram, E. (2008). Fuzzy linear optimization in the presence of the fuzzy
relation inequality constraints with max–min composition. Information Sciences, Vol. 178,
No. 2.
Guo, F. F. and Xia, Z. Q. (2006). An algorithm for solving optimization problems with one linear objective function and finitely many constraints of fuzzy relation inequalities, Fuzzy Optimization and Decision Making, Vol. 5.
Guo, S. Z., Wang, P. Z., Di Nola, A. and Sessa, S. (1988). Further contributions to the study of
finite fuzzy relation equations. Fuzzy Sets and systems, Vol. 26.
Han, S. C., Li, H. G. and Wang, J. Y. (2006). Resolution of finite fuzzy relation equations based
on strong pseudo-t-norms, Applied Mathematics Letters, Vol. 19, No. 8.
Hassanzadeh, R., Khorram, E., Mahdavi, I. and Mahdavi-Amiri, N. (2011). A genetic algorithm
for optimization problems with fuzzy relation constraints using max-product composition,
Applied Soft Computing, Vol. 11, No. 1.
Higashi, M. and Klir, G. J. (1984). Resolution of finite fuzzy relation equations, Fuzzy Sets and
Systems, Vol 13.
Homayouni, S. M., Hong, T. S. and Ismail, N. (2009). Development of genetic fuzzy logic
controllers for complex production systems, Computers & Industrial Engineering, Vol. 57,
No. 4.
Innocent, P. R. and John, R. I. (2004). Computer aided fuzzy medical diagnosis, Information
Sciences, Vol. 162, No. 2.
Jian-Xin, L. (2008). A new algorithm for minimizing a linear objective function with fuzzy
relation equation constraints, Fuzzy Sets and Systems, Vol. 159, No. 17.
Khorram, E. and Hassanzadeh, R. (2008). Solving nonlinear optimization problems subjected to
fuzzy relation equation constraints with max-average composition using a modified genetic
algorithm, Computers & Industrial Engineering, Vol.55, No. 1.
Liu, S. T. (2004). Fuzzy geometric programming approach to a fuzzy machining economics
model, International Journal of Production Research, Vol. 42, No. 16.
Loetamonphong, J. and Fang, S. C. (2001). Optimization of fuzzy relation equations with max-product composition, Fuzzy Sets and Systems, Vol. 118.
Loia, V. and Sessa, S. (2005). Fuzzy relation equations for coding and decoding processes of
images and videos, Information Sciences, Vol. 171.
Nobuhara, H., Bede, B. and Hirota, K. (2006). On various eigen fuzzy sets and their application
to image reconstruction, Information Sciences, Vol. 176.
Pappis, C. P. and Karacapilidis, N. I. (1995). Application of a similarity measure of fuzzy sets to
fuzzy relational equations, Fuzzy Sets and Systems, Vol. 75,No. 2.
Pedrycz, W. (1981). An approach to the analysis of fuzzy systems. Int. J. Control, Vol. 34.
Perfilieva, I. and Novák, V. (2007). System of fuzzy relation equations as a continuous model of
IF–THEN rules, Information Sciences, Vol. 177, No. 15-16.
Peterson, E. L. (1976). Geometric programming, SIAM Review, Vol. 18, No. 1.
Sanchez, E. (1977). Solution in composite fuzzy relation equations: Application to medical
diagnosis in Brouwerian logic, In Fuzzy Automata and Decision Processes, North-Holland,
New York.
Shieh, Bih-Sheue. (2007). Solutions of fuzzy relation equations based on continuous t-norms,
Information Sciences, Vol. 177, No. 19.
Shivanian, E. and Khorram, E. (2007). Optimization of linear objective function subject to Fuzzy
relation inequalities constraints with max-average composition, Iranian journal of fuzzy
system. Vol. 4, No. 2.
Shivanian, E. and Khorram, E. (2010). Optimization of linear objective function subject to Fuzzy
relation inequalities constraints with max-product composition, Iranian journal of fuzzy
system, Vol. 7, No. 3.
Vasantha Kandasamy, W. B. and Smarandache, F. (2004). Fuzzy Relational Maps and Neutrosophic Relational Maps, Hexis, Church Rock.
Verma, R. K. (1990). Fuzzy geometric programming with several objective functions, Fuzzy
Sets and Systems, Vol. 35. No. 1.
Wu, Y. K. (2008). Optimizing the geometric programming problem with single-term exponents
subject to max–min fuzzy relational equation constraints, Mathematical and Computer
Modelling, Vol. 47, No. 3-4.
Yang, J. H. and Cao, B. Y. (2005a). Geometric programming with fuzzy relation equation
constraints. IEEE International Fuzzy Systems Conference Proceedings, Reno, Nevada.
Yang, J. H. and Cao, B. Y. (2005b). Geometric programming with max-product fuzzy relation
equation constraints, Proceedings of the 24th North American Fuzzy Information Processing
Society, Ann Arbor, Michigan.
Zener, C. (1971). Engineering design by geometric programming, New York, Wiley.
Zhang, H. T., Dong, H. M. and Ren, R. H. (2003). Programming problem with fuzzy relation inequality constraints, Journal of Liaoning Normal University, Vol. 3.