J. Appl. Math. & Computing Vol. 17(2005), No. 1 - 2, pp. 121 - 144
MODIFIED GEOMETRIC PROGRAMMING PROBLEM AND
ITS APPLICATIONS
SAHIDUL ISLAM AND TAPAN KUMAR ROY∗
Abstract. In this paper, we propose unconstrained and constrained posynomial Geometric Programming (GP) problems with negative or positive integral degree of difficulty. The conventional GP approach is modified to solve some special types of GP problems. In the specific case where the degree of difficulty is negative, the normality and orthogonality conditions of the dual program give an overdetermined system of linear equations. In general no solution vector exists for this system of linear equations, but an approximate solution can be determined by the least square method and also by the max-min method. Here, the modified form of the geometric programming method is demonstrated, and the necessary theorems are derived for that purpose. Finally, these are illustrated by numerical examples and applications.
AMS Mathematics Subject Classification : 90B05, 90C10, 65K18
Key words and phrases : Geometric programming, residual vector, least square method, max-min method, modified geometric programming problem.
1. Introduction
Since the late 1960s, Geometric Programming (GP) has been known and used in various fields (such as OR and the engineering sciences). Duffin, Peterson and Zener [5] and Zener [14] discussed the basic theory of GP with engineering applications in their books. Another famous book on GP and its applications appeared in 1976 [3]. There are many references on applications and methods of GP in the survey paper by Ecker [6]. These works described GP with positive or zero degree
Received June 13, 2003. Revised April 17, 2004. ∗ Corresponding author.
This research was supported by a C.S.I.R. junior research fellowship in the Department of Mathematics, Bengal Engineering College (A Deemed University). This support is gratefully acknowledged.
c 2005 Korean Society for Computational & Applied Mathematics and Korean SIGGAM.
of difficulty. But there may be some GP problems with negative degree of difficulty; Sinha et al. [13] treated these theoretically. Abou-El-Ata and his group applied a modified form of GP to inventory models ([1], [7]). The present authors [8] also applied MGP with negative degree of difficulty to an inventory model.
In this paper we propose unconstrained/constrained GP and modified Geometric Programming (MGP) problems with negative or positive integral degree of difficulty. The modified form of the geometric programming method is demonstrated, and the necessary theorems are derived for that purpose. The least square and max-min methods are described to obtain approximate solutions of the dual variables of GP/MGP problems with negative degree of difficulty. Finally, these are illustrated by numerical examples and applications.
2. Unconstrained problem
2.1. Geometric Programming (GP) problem
Primal program :
The Primal Geometric Programming (PGP) problem is:

Minimize g(t) = Σ_{k=1}^{T0} C_k ∏_{j=1}^{m} t_j^{α_kj}    (1)

subject to

t_j > 0, (j = 1, 2, . . . , m).

Here C_k (> 0), (k = 1, 2, . . . , T0) and α_kj are any real numbers.
It is an unconstrained posynomial geometric programming problem with Degree of Difficulty (DD) = T0 − (m + 1).
Dual program :
The Dual Programming (DP) problem of (1) is:

Maximize ν(δ) = ∏_{k=1}^{T0} (C_k/δ_k)^{δ_k}

subject to

Σ_{k=1}^{T0} δ_k = 1,    (Normality condition)
Σ_{k=1}^{T0} α_kj δ_k = 0, (j = 1, 2, . . . , m),    (Orthogonality conditions)
δ_k > 0, (k = 1, 2, . . . , T0),    (Positivity conditions)    (2)
where δ = (δ1 , δ2 , . . . , δT0 )T .
Case I : For T0 ≥ m + 1, the dual program presents a system of linear equations for the dual variables, where the number of linear equations is less than or equal to the number of dual variables. One or more solution vectors exist for the dual variables [3].
Case II : For T0 < m + 1, the dual program presents a system of linear equations for the dual variables, where the number of linear equations is greater than the number of dual variables. In this case generally no solution vector exists for the dual variables. However, one can get an approximate solution vector for this system using either the Least Square (LS) or the Max-Min (MM) [4] method.
Once the optimal dual variable vector δ∗ is known, the corresponding values of the primal variable vector t are found from the following relations:
C_k ∏_{j=1}^{m} t_j^{α_kj} = δ_k∗ ν∗(δ∗), (k = 1, 2, . . . , T0).    (3)

Taking logarithms in (3), the T0 log-linear simultaneous equations are transformed as

Σ_{j=1}^{m} α_kj (log t_j) = log( δ_k∗ ν∗(δ∗) / C_k ), (k = 1, 2, . . . , T0).    (4)
It is a system of linear equations in x_j (= log t_j) for j = 1, 2, . . . , m. Since there are more primal variables t_j than the number of terms T0, many solutions t_j may exist. So, to find the optimal primal variables t_j, it remains to minimize the primal objective function with respect to the reduced m − T0 (≠ 0) variables. When m − T0 = 0, i.e., the number of primal variables equals the number of log-linear equations, the primal variables can be determined uniquely from the log-linear equations.
2.1.1. Residual vector
A system of linear equations (say m equations and n unknown variables with m > n) takes the form

Σ_{j=1}^{n} α_ij δ_j = b_i, (i = 1, 2, . . . , m)    (5)

i.e., Aδ = B, where A = [α_ij]_{m×n}, δ = [δ1, δ2, . . . , δn]^T and B = [b1, b2, . . . , bm]^T.
The residual vector is defined as

R = Aδ − B, where R = [r1, r2, . . . , rm]^T.    (6)
2.1.2. Least Square (LS) method
Approximate solutions for the system of linear equations (5) can be determined by the LS method [9] :
Find δ_j (j = 1, 2, . . . , n) which minimize R^T R = Σ_{i=1}^{m} r_i².
Setting the partial derivatives of Σ_{i=1}^{m} r_i² with respect to δ_j equal to zero, the following n normal equations are obtained:

Σ_{i=1}^{m}(α_i1²) δ1 + Σ_{i=1}^{m}(α_i1 α_i2) δ2 + . . . + Σ_{i=1}^{m}(α_i1 α_in) δn = Σ_{i=1}^{m}(α_i1 b_i),
Σ_{i=1}^{m}(α_i2 α_i1) δ1 + Σ_{i=1}^{m}(α_i2²) δ2 + . . . + Σ_{i=1}^{m}(α_i2 α_in) δn = Σ_{i=1}^{m}(α_i2 b_i),
. . . . . . . . .
Σ_{i=1}^{m}(α_in α_i1) δ1 + Σ_{i=1}^{m}(α_in α_i2) δ2 + . . . + Σ_{i=1}^{m}(α_in²) δn = Σ_{i=1}^{m}(α_in b_i).    (7)

It is a symmetric positive definite system of equations in δ_j and determines the components of the dual variables δ_j (j = 1, 2, . . . , n).
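In matrix form the normal equations (7) are just (AᵀA)δ = AᵀB. A minimal numerical sketch (the 3×2 system below is an illustrative example, not taken from the paper; NumPy is assumed to be available):

```python
import numpy as np

# Overdetermined system A δ = b, as in (5), with m = 3 equations
# and n = 2 unknowns (illustrative data).
A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
b = np.array([1.0, 0.0, 1.0])

# Normal equations (7): (A^T A) δ = A^T b
delta_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Same least-squares solution via a library routine, for comparison.
delta_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(delta_normal)                            # least-squares estimate of δ
print(np.allclose(delta_normal, delta_lstsq))  # True
```

Solving the n×n normal equations directly is fine at this scale; for larger or ill-conditioned systems a QR-based solver such as `lstsq` is the safer choice.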
2.1.3. Max-Min (MM) method
Approximate solutions for the system of linear equations (5) can also be determined by the MM method. By this method one obtains the dual variable vector δ for which the absolutely largest component of the residual vector R (= Aδ − B) becomes minimum, i.e.,

Min (Max(|r1|, |r2|, . . . , |rm|))    (8)

subject to

α_i1 δ1 + α_i2 δ2 + . . . + α_in δn − b_i = r_i, (i = 1, 2, . . . , m),
δ_j ≥ 0, (j = 1, 2, . . . , n).

Introducing an auxiliary variable r, the above problem (8) is equivalent to

Minimize r
subject to
α_i1 δ1 + α_i2 δ2 + . . . + α_in δn − b_i ≤ r,
−α_i1 δ1 − α_i2 δ2 − . . . − α_in δn + b_i ≤ r, (i = 1, 2, . . . , m),    (9)
δ_j ≥ 0, (j = 1, 2, . . . , n).

Denoting δ_j′ = δ_j/r, (j = 1, 2, . . . , n) and 1/r = δ0′, (9) becomes a Linear Programming Problem (LPP) as follows:

Maximize δ0′    (10)
subject to
α_i1 δ1′ + α_i2 δ2′ + . . . + α_in δn′ − b_i δ0′ ≤ 1,
−α_i1 δ1′ − α_i2 δ2′ − . . . − α_in δn′ + b_i δ0′ ≤ 1, (i = 1, 2, . . . , m),
δ_j′ ≥ 0, (j = 0, 1, 2, . . . , n).

After calculating δ0′, δ1′, δ2′, . . . , δn′ from this LPP (10), approximate solutions of the dual variables δ_j can be obtained.
So, applying either the LS or the MM method, the dual variables can be determined. Then the corresponding optimal dual objective value of (2) is found.
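As a sketch, formulation (9) can be fed directly to an LP solver: minimize r subject to ±(Aδ − b) ≤ r·1 and δ ≥ 0. The data below is the 4×3 dual system arising in Example 1 below, which happens to have an exact solution, so the minimized maximum residual is 0 (assumes SciPy is available):

```python
import numpy as np
from scipy.optimize import linprog

# Normality/orthogonality system A δ = b of Example 1.
A = np.array([[ 1.0,  1.0, 1.0],
              [ 1.0, -1.0, 0.0],
              [ 1.0, -1.0, 0.0],
              [-2.0,  1.0, 1.0]])
b = np.array([1.0, 0.0, 0.0, 0.0])
m, n = A.shape

# Decision variables: (δ1, ..., δn, r); objective: minimize r.
c = np.zeros(n + 1)
c[-1] = 1.0

# A δ - b <= r  and  -(A δ - b) <= r, written as A_ub x <= b_ub.
ones = np.ones((m, 1))
A_ub = np.block([[A, -ones], [-A, -ones]])
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n + 1), method="highs")
delta, r = res.x[:n], res.x[-1]
print(delta, r)   # δ ≈ (1/3, 1/3, 1/3), max residual r ≈ 0
```

This solves (9) directly rather than the fractional substitution (10); both give the same δ when the optimal r is positive, and the direct form also handles the consistent case r = 0.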
Example 1 (Unconstrained GP problem).

Min Z(t) = t1 t2 t3^{-2} + 2 t3 t1^{-1} t2^{-1} + 5 t3    (a)

subject to

t1, t2, t3 > 0.

Here DD = 3 − (3 + 1) = −1 (< 0). It is a posynomial geometric programming problem with negative degree of difficulty.
Geometric programming method :

Maximize ν(δ) = (1/δ1)^{δ1} (2/δ2)^{δ2} (5/δ3)^{δ3}    (b)

subject to

δ1 + δ2 + δ3 = 1,
δ1 − δ2 = 0,
δ1 − δ2 = 0,
−2δ1 + δ2 + δ3 = 0,
δ1, δ2, δ3 > 0.

It is a system of 4 linear equations with 3 unknown variables. Approximate solutions of this system of linear equations by the LS method are δ1∗ = 0.333, δ2∗ = 0.333, δ3∗ = 0.333, and the corresponding optimal dual objective value is ν∗(δ∗) = 6.463.
So for the primal decision variables, the following system of non-linear equations is found:

t1 t2 t3^{-2} = 2.1522,
2 t3 t1^{-1} t2^{-1} = 2.1522,
5 t3 = 2.1522,
and the corresponding system of log-linear equations is

x1 + x2 − 2x3 = 0.7664,
x3 − x1 − x2 = 0.0733,    (c)
x3 = −0.8431,

where x_i = log t_i (i = 1, 2, 3).
Solving this system of linear equations (c), the optimal primal variables t_i∗ (= e^{x_i∗}) (i = 1, 2, 3) are obtained. These are given in Table 1.
Non-Linear Programming (NLP) method :
From the Karush-Kuhn-Tucker necessary conditions, we have

t2 t3^{-2} − 2 t3 t1^{-2} t2^{-1} = 0,
t1 t3^{-2} − 2 t3 t1^{-1} t2^{-2} = 0,
−2 t1 t2 t3^{-3} + 2 t1^{-1} t2^{-1} + 5 = 0.

Solving this system of non-linear equations, the optimal solutions t1∗, t2∗, t3∗ are obtained. These are given in Table 1. It is seen that the GP method gives a better result than the NLP method [2].
Table 1
Optimal solutions of Example 1

Methods   t1∗      t2∗      t3∗     Z∗(t∗)
GP        0.3525   1.135    0.431   6.463016
NLP       0.6266   0.6383   0.431   6.463306
2.2. Modified geometric programming (MGP) problem
Primal program :
A special type (which is a separable function) of the Primal Geometric Programming (PGP) problem is

Minimize g(t) = Σ_{i=1}^{n} g_i(t) = Σ_{i=1}^{n} Σ_{k=1}^{T0} U_ik(t) = Σ_{i=1}^{n} Σ_{k=1}^{T0} C_ik ∏_{j=1}^{m} t_ij^{α_ikj}    (11)

subject to

t_ij > 0, (i = 1, 2, . . . , n; j = 1, 2, . . . , m).

Here C_ik (> 0) (k = 1, 2, . . . , T0) and α_ikj are real numbers.
It is an unconstrained posynomial geometric programming problem with DD = nT0 − (nm + 1).
Dual program :
According to Hariri et al. [7], the Dual Programming (DP) problem of (11) is as follows:

Maximize ν(δ) = ∏_{i=1}^{n} ∏_{k=1}^{T0} (c_ik/δ_ik)^{δ_ik}    (12)

subject to

Σ_{k=1}^{T0} δ_ik = 1,    (Normality conditions)
Σ_{k=1}^{T0} α_ikj δ_ik = 0, (j = 1, 2, . . . , m),    (Orthogonality conditions)
δ_ik > 0, (k = 1, 2, . . . , T0),    (Positivity conditions)

for i = 1, 2, . . . , n, where δ = (δ11, δ12, . . . , δ_ik, . . . , δ_nT0)^T.
Case I : For nT0 ≥ nm + n, the DP presents a system of linear equations for the dual variables. A solution vector for the dual variables exists [3].
Case II : For nT0 < nm + n, generally no solution vector for the dual variables exists. However, using either the LS or the MM method one can get an approximate solution vector for this system (as explained in 2.1.2 and 2.1.3).
Theorem 2.2.1. If t is a feasible vector for the unconstrained PGP (11) and if δ is a feasible vector for the corresponding DP (12), then

g(t) ≥ n ⁿ√ν(δ).    (Primal-Dual Inequality)
Proof. The expression for g(t) can be written as

g(t) = Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik [ C_ik ∏_{j=1}^{m} t_ij^{α_ikj} / δ_ik ].

Here the weights are δ_i1, δ_i2, . . . , δ_iT0 and the positive terms are

c_i1 ∏_{j=1}^{m} t_ij^{α_i1j} / δ_i1,  c_i2 ∏_{j=1}^{m} t_ij^{α_i2j} / δ_i2,  . . . ,  c_iT0 ∏_{j=1}^{m} t_ij^{α_iT0 j} / δ_iT0,  for i = 1, 2, . . . , n.

Now applying the weighted A.M.–G.M. inequality, we get

[ Σ_{i=1}^{n} ( c_i1 ∏_{j} t_ij^{α_i1j} + c_i2 ∏_{j} t_ij^{α_i2j} + . . . + c_iT0 ∏_{j} t_ij^{α_iT0 j} ) / Σ_{i=1}^{n} (δ_i1 + δ_i2 + . . . + δ_iT0) ]^{ Σ_{i=1}^{n} (δ_i1 + δ_i2 + . . . + δ_iT0) }
≥ ∏_{i=1}^{n} [ ( c_i1 ∏_{j} t_ij^{α_i1j} / δ_i1 )^{δ_i1} ( c_i2 ∏_{j} t_ij^{α_i2j} / δ_i2 )^{δ_i2} . . . ( c_iT0 ∏_{j} t_ij^{α_iT0 j} / δ_iT0 )^{δ_iT0} ]

or

[ g(t) / Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik ]^{ Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik } ≥ ∏_{i=1}^{n} ∏_{k=1}^{T0} [ c_ik ∏_{j=1}^{m} t_ij^{α_ikj} / δ_ik ]^{δ_ik}

or

( g(t)/n )^n ≥ ∏_{i=1}^{n} ∏_{k=1}^{T0} (c_ik/δ_ik)^{δ_ik} ∏_{j=1}^{m} t_ij^{ Σ_{k=1}^{T0} α_ikj δ_ik }

[since from the normality conditions of (12), Σ_{k=1}^{T0} δ_ik = 1 for i = 1, 2, . . . , n]

( g(t)/n )^n ≥ ∏_{i=1}^{n} ∏_{k=1}^{T0} (c_ik/δ_ik)^{δ_ik} = ν(δ)

[since from the orthogonality conditions of (12), Σ_{k=1}^{T0} α_ikj δ_ik = 0, (i = 1, 2, . . . , n; j = 1, 2, . . . , m)]

i.e., g(t) ≥ n ⁿ√ν(δ).
This completes the proof.
Theorem 2.2.2. If t∗ = (t_i1∗, . . . , t_im∗) for i = 1, 2, . . . , n is a solution to the PGP (11), then the corresponding DP (12) is consistent. Moreover, the vector δ∗ = (δ_i1∗, . . . , δ_iT0∗) (i = 1, 2, . . . , n) defined by

δ_ik∗ = u_ik(t∗) / g_i(t∗), (k = 1, 2, . . . , T0),

where u_ik(t∗) = c_ik ∏_{j=1}^{m} (t_ij∗)^{α_ikj} is the k-th term of g(t) for the i-th item, is a solution for the DP, and equality holds in the primal-dual inequality, i.e., g(t∗) = n ⁿ√ν(δ∗).
Proof. A typical term of g(t) is

u_ik(t) = c_ik t_i1^{α_ik1} t_i2^{α_ik2} . . . t_im^{α_ikm}, (i = 1, 2, . . . , n; k = 1, 2, . . . , T0).    (13)

So,

t_ij ∂u_ik(t)/∂t_ij = α_ikj u_ik.    (14)

Since t∗ = (t_i1∗, . . . , t_im∗) is a minimizer for g(t), it follows that

∂g(t∗)/∂t_ij = Σ_{k=1}^{T0} ∂u_ik(t∗)/∂t_ij = 0.    (15)

Using (14) we have

Σ_{k=1}^{T0} α_ikj u_ik(t∗) = 0.    (16)

Since g_i(t∗) > 0, we can divide both sides of (16) by g_i(t∗) to obtain

Σ_{k=1}^{T0} α_ikj u_ik(t∗)/g_i(t∗) = 0.    (17)

Consequently, if we define

δ_ik∗ = u_ik(t∗)/g_i(t∗),

then δ∗ = (δ_i1∗, . . . , δ_iT0∗) satisfies the orthogonality conditions for the DP. Also δ_ik∗ > 0, so the positivity conditions are satisfied. Finally,

Σ_{k=1}^{T0} δ_ik∗ = Σ_{k=1}^{T0} u_ik(t∗)/g_i(t∗) = g_i(t∗)/g_i(t∗) = 1, (i = 1, 2, . . . , n).

Hence the normality conditions hold. We conclude that the vector δ∗ is feasible for the dual problem. So the DP is consistent.
Now we have

Σ_{i=1}^{n} g_i(t∗) / Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik∗ = u_1k(t∗)/δ_1k∗ = u_2k(t∗)/δ_2k∗ = . . . = u_nk(t∗)/δ_nk∗, (k = 1, 2, . . . , T0).

Also

( g(t∗)/n )^n = ( Σ_{i=1}^{n} g_i(t∗) / Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik∗ )^{ Σ_{k=1}^{T0} (δ_1k∗ + δ_2k∗ + . . . + δ_nk∗) }
= ∏_{k=1}^{T0} ( u_1k(t∗)/δ_1k∗ )^{δ_1k∗} ( u_2k(t∗)/δ_2k∗ )^{δ_2k∗} . . . ( u_nk(t∗)/δ_nk∗ )^{δ_nk∗}
= ∏_{k=1}^{T0} ( c_1k/δ_1k∗ )^{δ_1k∗} ( c_2k/δ_2k∗ )^{δ_2k∗} . . . ( c_nk/δ_nk∗ )^{δ_nk∗}  [using the orthogonality conditions]
= ∏_{i=1}^{n} ∏_{k=1}^{T0} ( c_ik/δ_ik∗ )^{δ_ik∗} = ν(δ∗),

i.e., g(t∗) = n ⁿ√ν(δ∗).

Hence equality holds in the Primal-Dual Inequality. This implies that δ∗ is an optimal dual variable vector of the DP with components

δ_ik∗ = u_ik(t∗)/g_i(t∗), (i = 1, 2, . . . , n; k = 1, 2, . . . , T0).

This completes the proof.
We can get dual variables δ which optimize ν(δ) such that

u_ik(t)/δ_ik = Σ_{i=1}^{n} g_i(t) / Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik = n ⁿ√ν(δ) / n = ⁿ√ν(δ)  (following Theorem 2.2.2).

So,

u_ik(t) = δ_ik ⁿ√ν(δ),

i.e.,

c_ik ∏_{j=1}^{m} t_ij^{α_ikj} = δ_ik ⁿ√ν(δ), (i = 1, 2, . . . , n; k = 1, 2, . . . , T0).    (18)

Taking logarithms in (18), the linear simultaneous equations are transformed as

Σ_{j=1}^{m} α_ikj (log t_ij) = log( δ_ik ⁿ√ν(δ) / c_ik ).    (19)

It is a system of linear equations in x_ij (= log t_ij). Since there are more primal variables t_ij than the number of terms nT0, many solutions t_ij may exist. So, to find the optimal primal variables t_ij, it remains to minimize the primal objective function with respect to the reduced nm − nT0 (≠ 0) variables. When nm − nT0 = 0, the primal variables can be determined uniquely from the log-linear equations.
Example 2 (Unconstrained MGP problem).

Minimize Z(t) = t11 t21 t31^{-2} + 2 t31 t11^{-1} t21^{-1} + 5 t31 + t12 t22 t32^{-3} + 3 t32 t12^{-1} t22^{-2} + 4 t32

subject to

t_ij > 0, (i = 1, 2; j = 1, 2, 3).

This problem can be written as

Minimize Z(t) = Σ_{i=1}^{2} Σ_{k=1}^{3} C_ik ∏_{j=1}^{3} t_ij^{α_ikj}    (20)

subject to t_ij > 0, (i = 1, 2; j = 1, 2, 3), where C11 = 1, C12 = 2, C13 = 5, C21 = 1, C22 = 3, C23 = 4, and the exponents α_ikj are read off from the terms above.
The Dual Programming problem of MGP (20) is as follows:

Maximize ν(δ) = ∏_{i=1}^{2} ∏_{k=1}^{3} (c_ik/δ_ik)^{δ_ik}

subject to

δ11 + δ12 + δ13 = 1,
δ21 + δ22 + δ23 = 1,
δ11 − δ12 = 0,
δ11 − δ12 = 0,
−2δ11 + δ12 + δ13 = 0,
δ21 − δ22 = 0,
δ21 − 2δ22 = 0,
−3δ21 + δ22 + δ23 = 0,

where δ = (δ11, δ12, δ13, δ21, δ22, δ23)^T.
It is a system of eight linear equations with six unknown variables. Applying the LS method, the above system of linear equations reduces to

δ11 − δ12 = 0,
−2δ11 + δ12 + δ13 = 0,
δ11 + δ12 + δ13 = 1,
δ21 − 2δ22 = 0,
−3δ21 + δ22 + δ23 = 0,
δ21 + δ22 + δ23 = 1,
δ11, δ12, δ13, δ21, δ22, δ23 > 0.

Approximate solutions of this system of linear equations are

δ11∗ = 0.333, δ12∗ = 0.333, δ13∗ = 0.333,
δ21∗ = 0.25, δ22∗ = 0.125, δ23∗ = 0.625,
and the optimal dual objective value is ν∗(δ∗) = 13.17.
The value of the objective function is g(t∗) = 2√13.17 = 7.23. So Theorem 2.2.1 is verified.
The following system of non-linear equations gives the optimal primal variables:

t11 t21 t31^{-2} = 4.3856,
2 t31 t11^{-1} t21^{-1} = 4.3856,
5 t31 = 4.3856,
t12 t22 t32^{-3} = 3.2925,
3 t32 t12^{-1} t22^{-2} = 1.6463,
4 t32 = 8.2313.

Solving this system of non-linear equations, the optimal solutions are determined. The optimal solutions of this problem by MGP are given in Table 2. The optimal solutions of Example 2 by NLP are also given in Table 2. It is seen that the MGP method gives a better result than the NLP method, which is expected.
Table 2
Optimal solutions of Example 2

Methods   t11∗     t21∗     t31∗    t12∗    t22∗    t32∗     Z∗(t∗)
MGP       0.349    1.145    0.431   2.506   0.773   1.0489   7.23
NLP       0.6416   0.6235   0.431   1.39    1.393   1.0489   7.31
3. Constrained problem
3.1. Geometric Programming (GP) problem
Primal program :

Minimize g0(t) = Σ_{k=1}^{T0} C_0k ∏_{j=1}^{m} t_j^{α_0kj}    (21)

subject to

g_r(t) = Σ_{k=1+T_{r−1}}^{T_r} C_rk ∏_{j=1}^{m} t_j^{α_rkj} ≤ 1, (r = 1, 2, . . . , l),
t_j > 0, (j = 1, 2, . . . , m),

where C_0k (> 0) (k = 1, 2, . . . , T0), C_rk (> 0) and α_rkj (k = 1 + T_{r−1}, . . . , T_r; r = 0, 1, 2, . . . , l; j = 1, 2, . . . , m) are real numbers.
It is a constrained posynomial PGP problem. The number of terms in each posynomial constraint function varies, and it is denoted by T_r for each r = 0, 1, 2, . . . , l. Let T = T0 + T1 + T2 + . . . + Tl be the total number of terms in the primal program. The Degree of Difficulty is DD = T − (m + 1).
Dual program :
The dual programming problem of (21) is as follows:

Maximize ν(δ) = ∏_{r=0}^{l} ∏_{k=1+T_{r−1}}^{T_r} [ (c_rk/δ_rk) Σ_{s=1+T_{r−1}}^{T_r} δ_rs ]^{δ_rk}    (22)

subject to

Σ_{k=1}^{T0} δ_0k = 1,    (Normality condition)
Σ_{r=0}^{l} Σ_{k=1+T_{r−1}}^{T_r} α_rkj δ_rk = 0, (j = 1, 2, . . . , m),    (Orthogonality conditions)
δ_rk > 0, (r = 0, 1, 2, . . . , l; k = 1 + T_{r−1}, . . . , T_r).    (Positivity conditions)
Case I. For T ≥ m + 1, the dual program presents a system of linear equations for the dual variables. A solution vector exists for the dual variables [3].
Case II. For T < m + 1, generally no solution vector exists for the dual variables. However, one can get an approximate solution vector for this system using either the LS or the MM method discussed earlier.
The solution procedure for this GP problem is the same as for the unconstrained GP problem.
Example 3 (Constrained GP problem).

Minimize Z(t) = 20 t3^{-1} t4^{-1} + 30 t2^{-6} t4^2

subject to

t1^3 t4^2 + t1^{-1} t2^4 t3^2 t4^2 ≤ 12,
t1, t2, t3, t4 > 0.

It is a constrained posynomial geometric programming problem with degree of difficulty −1. This problem is also solved by GP and NLP as discussed before, and the optimal solutions are shown in Table 3.

Table 3
Optimal solutions of Example 3

Methods   t1∗    t2∗    t3∗    t4∗    Z∗(t∗)
GP        1.31   1.52   1.28   1.21   14.26
NLP       1.25   1.52   1.18   1.23   14.85
3.2. Modified Geometric Programming (MGP) problem
Primal program :

Minimize g0(t) = Σ_{i=1}^{n} g_i0(t) = Σ_{i=1}^{n} Σ_{k=1}^{T0} C_0ik ∏_{j=1}^{m} t_ij^{α_0ikj}    (23)

subject to

g_r(t) = Σ_{i=1}^{n} Σ_{k=T_{r−1}+1}^{T_r} C_rik ∏_{j=1}^{m} t_ij^{α_rikj} ≤ 1, (r = 1, 2, . . . , l),
t_ij > 0, (i = 1, 2, . . . , n; j = 1, 2, . . . , m),

where C_rik (> 0) and α_rikj (k = T_{r−1} + 1, . . . , T_r) are real numbers.
An associated Convex Geometric Programming (CGP) problem is

Minimize h0(x) = Σ_{i=1}^{n} h_i0(x) = Σ_{i=1}^{n} Σ_{k=1}^{T0} c_0ik exp( Σ_{j=1}^{m} α_0ikj x_ij )    (24)

subject to

h_r(x) − 1 = Σ_{i=1}^{n} Σ_{k=T_{r−1}+1}^{T_r} c_rik exp( Σ_{j=1}^{m} α_rikj x_ij ) − 1 ≤ 0, (r = 1, 2, . . . , l),

for x_ij (i = 1, 2, . . . , n; j = 1, 2, . . . , m).
Moreover, because t = e^x is a strictly increasing function, the GP and the CGP are equivalent in the sense that t∗ = (t_i1∗, t_i2∗, . . . , t_im∗) is a solution of GP (23) if and only if x∗ = (x_i1∗, x_i2∗, . . . , x_im∗) is a solution of CGP (24), where t_ij∗ = e^{x_ij∗}.
Dual program :
According to Hariri and Abou-El-Ata [7], the dual programming problem of (23) is

Maximize ν(δ) = ∏_{r=0}^{l} ∏_{i=1}^{n} ∏_{k=T_{r−1}+1}^{T_r} [ (c_ik/δ_ik) Σ_{s=1}^{n} δ_sk ]^{δ_ik}    (25)

subject to

Σ_{k=1}^{T0} δ_ik = 1,    (Normality conditions)
Σ_{k=1}^{T0} α_0ikj δ_ik + Σ_{r=1}^{l} Σ_{k=T_{r−1}+1}^{T_r} α_rikj δ_ik = 0, (j = 1, 2, . . . , m),    (Orthogonality conditions)
δ_ik > 0, (k = T_{r−1} + 1, . . . , T_r; r = 1, 2, . . . , l),    (Positivity conditions)

for i = 1, 2, . . . , n, where δ = (δ11, δ12, . . . , δ_1k, . . . , δ_nT0)^T.
Case I. For nT0 ≥ nm + n, the dual program presents a system of linear equations for the dual variables. A solution vector exists for the dual variables [3].
Case II. For nT0 < nm + n, generally no solution vector exists for the dual variables. However, one can get an approximate solution vector for this system using either the LS or the MM method.
The solution procedure for this MGP problem is the same as for the unconstrained MGP problem.
Example 4 (Constrained MGP problem).

Minimize Z(t) = 20 t31^{-1} t41^{-1} + 30 t21^{-6} t41^2 + 5 t32^{-1} t42^{-1} + 20 t22^{-6} t42^2

subject to

t11^3 t41^2 + t11^{-1} t21^4 t31^2 t41^2 + t12^3 t42^2 + t12^{-1} t22^4 t32^2 t42^2 ≤ 25,
t11, t21, t31, t41, t12, t22, t32, t42 > 0.

It is solved by the MGP and also by the NLP method; the optimal results are given in Table 4.

Table 4
Optimal solutions of Example 4

Methods   t11∗   t21∗   t31∗   t41∗   t12∗   t22∗   t32∗   t42∗   Z∗(t∗)
MGP       1.62   1.43   1.56   1.06   1.01   1.41   1.52   0.81   21.97
NLP       1.78   1.34   2.16   0.88   1.11   1.32   2.09   0.68   22.83
Theorem 3.2.1. If t is a feasible vector for the constrained PGP (23) and if δ is a feasible vector for the corresponding DP (25), then

g0(t) ≥ n ⁿ√ν(δ).    (Primal-Dual Inequality)
Proof. The expression for g0(t) can be written as

g0(t) = Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik [ c_0ik ∏_{j=1}^{m} t_ij^{α_0ikj} / δ_ik ].    (26)

We can apply the weighted A.M.–G.M. inequality to this expression for g0(t) and obtain

[ g0(t) / Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik ]^{ Σ_{i=1}^{n} Σ_{k=1}^{T0} δ_ik } ≥ ∏_{i=1}^{n} ∏_{k=1}^{T0} [ c_0ik ∏_{j=1}^{m} t_ij^{α_0ikj} / δ_ik ]^{δ_ik}

or, using the normality conditions,

( g0(t)/n )^n ≥ ∏_{i=1}^{n} ∏_{k=1}^{T0} (C_0ik/δ_ik)^{δ_ik} ∏_{j=1}^{m} t_ij^{ Σ_{k=1}^{T0} α_0ikj δ_ik }.    (27)

Again, g_r(t) can be written as

g_r(t) = Σ_{i=1}^{n} Σ_{k=T_{r−1}+1}^{T_r} δ_ik [ c_rik ∏_{j=1}^{m} t_ij^{α_rikj} / δ_ik ].    (28)

Applying the weighted A.M.–G.M. inequality in (28) we have

[ g_r(t) / Σ_{i=1}^{n} Σ_{k=T_{r−1}+1}^{T_r} δ_ik ]^{ Σ_{i} Σ_{k} δ_ik } ≥ ∏_{i=1}^{n} ∏_{k=T_{r−1}+1}^{T_r} [ c_rik ∏_{j=1}^{m} t_ij^{α_rikj} / δ_ik ]^{δ_ik}

and hence

( g_r(t) )^{ Σ_{i} Σ_{k} δ_ik } ≥ ∏_{i=1}^{n} ∏_{k=T_{r−1}+1}^{T_r} (c_rik/δ_ik)^{δ_ik} ( Σ_{s=1}^{n} δ_sk )^{δ_ik} ∏_{j=1}^{m} t_ij^{ Σ_{k=T_{r−1}+1}^{T_r} α_rikj δ_ik }, (r = 1, 2, . . . , l).

Using 1 ≥ g_r(t) (r = 1, 2, . . . , l), it follows that

1 ≥ ∏_{i=1}^{n} ∏_{k=T_{r−1}+1}^{T_r} (c_rik/δ_ik)^{δ_ik} ( Σ_{s=1}^{n} δ_sk )^{δ_ik} ∏_{j=1}^{m} t_ij^{ Σ_{k=T_{r−1}+1}^{T_r} α_rikj δ_ik }, (r = 1, 2, . . . , l).    (29)

Multiplying (27) and (29) over r we have

( g0(t)/n )^n ≥ ∏_{r=0}^{l} ∏_{i=1}^{n} ∏_{k=T_{r−1}+1}^{T_r} (C_ik/δ_ik)^{δ_ik} ( Σ_{s=1}^{n} δ_sk )^{δ_ik} · ∏_{j=1}^{m} t_ij^{ Σ_{k=1}^{T0} α_0ikj δ_ik + Σ_{r=1}^{l} Σ_{k=T_{r−1}+1}^{T_r} α_rikj δ_ik }.    (30)

Using the orthogonality conditions, the inequality (30) becomes

( g0(t)/n )^n ≥ ∏_{r=0}^{l} ∏_{i=1}^{n} ∏_{k=T_{r−1}+1}^{T_r} (c_ik/δ_ik)^{δ_ik} ( Σ_{s=1}^{n} δ_sk )^{δ_ik} = ν(δ),

i.e., g0(t) ≥ n ⁿ√ν(δ).
This completes the proof.
Theorem 3.2.2. Suppose that the constrained PGP (23) is super-consistent and that t∗ is a solution for the GP. Then the corresponding DP (25) is consistent and has a solution δ∗ which satisfies

g0(t∗) = n ⁿ√ν(δ∗)

and

δ_ik∗ = u_ik(t∗)/g_i0(t∗), (i = 1, 2, . . . , n; k = 1, 2, . . . , T0),
δ_ik∗ = λ_ir(δ∗) u_ik(t∗), (i = 1, 2, . . . , n; k = T_{r−1} + 1, . . . , T_r; r = 1, 2, . . . , l).
Proof. Since the GP is super-consistent, so is the associated CGP. Also, since the GP has a solution t∗ = (t_i1∗, t_i2∗, . . . , t_im∗), the associated CGP has a solution x∗ = (x_i1∗, x_i2∗, . . . , x_im∗) given by x_ij∗ = ln t_ij∗.
According to the Karush-Kuhn-Tucker (K-K-T) conditions, there is a vector λ∗ = (λ_i1∗, . . . , λ_il∗) such that

λ_ir∗ ≥ 0,    (31)
λ_ir∗ ( h_ir(x∗) − 1 ) = 0,    (32)
∂h_i0(x∗)/∂x_ij + Σ_{r=1}^{l} λ_ir∗ ∂h_ir(x∗)/∂x_ij = 0.    (33)

Because t_ij = e^{x_ij} for i = 1, 2, . . . , n; j = 1, 2, . . . , m, it follows that for r = 0, 1, 2, . . . , l

∂h_ir(x)/∂x_ij = ( ∂g_ir(t)/∂t_ij ) ( ∂t_ij/∂x_ij ) = ( ∂g_ir(t)/∂t_ij ) e^{x_ij}.

So, since e^{x_ij} = t_ij > 0, condition (33) is equivalent to

∂g_i0(t∗)/∂t_ij + Σ_{r=1}^{l} λ_ir∗ ∂g_ir(t∗)/∂t_ij = 0,    (34)

and hence (34) is equivalent to

t_ij∗ ∂g_i0(t∗)/∂t_ij + Σ_{r=1}^{l} λ_ir∗ t_ij∗ ∂g_ir(t∗)/∂t_ij = 0.    (35)

Now the terms of g_ir(t) are of the form

u_ik(t) = c_rik ∏_{j=1}^{m} t_ij^{α_rikj}.
It is clear that

t_ij∗ ∂g_ir(t∗)/∂t_ij = Σ_{k=T_{r−1}+1}^{T_r} α_rikj u_ik(t∗), (i = 1, 2, . . . , n; j = 1, 2, . . . , m; r = 1, 2, . . . , l).

So, (35) implies

Σ_{k=1}^{T0} α_0ikj u_ik(t∗) + Σ_{r=1}^{l} Σ_{k=T_{r−1}+1}^{T_r} λ_ir∗ α_rikj u_ik(t∗) = 0, (i = 1, 2, . . . , n; j = 1, 2, . . . , m).    (36)

If we divide the last equation by

g_i0(t∗) = Σ_{k=1}^{T0} u_ik(t∗),

we obtain

Σ_{k=1}^{T0} α_0ikj u_ik(t∗)/g_i0(t∗) + Σ_{r=1}^{l} Σ_{k=T_{r−1}+1}^{T_r} α_rikj λ_ir∗ u_ik(t∗)/g_i0(t∗) = 0.

Define the vector δ∗ by

δ_ik∗ = u_ik(t∗)/g_i0(t∗), (i = 1, 2, . . . , n; k = 1, 2, . . . , T0),
δ_ik∗ = λ_ir∗ u_ik(t∗)/g_i0(t∗), (i = 1, 2, . . . , n; k = T_{r−1} + 1, . . . , T_r; r = 1, 2, . . . , l).

Note that δ_ik∗ > 0 (i = 1, 2, . . . , n; k = 1, 2, . . . , T0), and that for each r ≥ 1, either δ_ik∗ > 0 for all k with T_{r−1} + 1 ≤ k ≤ T_r or δ_ik∗ = 0 for all k with T_{r−1} + 1 ≤ k ≤ T_r, according as the corresponding Karush-Kuhn-Tucker multiplier λ_ir∗ (i = 1, 2, . . . , n; r = 1, 2, . . . , l) is positive or zero.
Also observe that the vector δ∗ satisfies all of the m exponent constraint equations in the DP as well as the constraint

Σ_{k=1}^{T0} δ_ik∗ = Σ_{k=1}^{T0} u_ik(t∗)/g_i0(t∗) = g_i0(t∗)/g_i0(t∗) = 1.

Therefore, δ∗ = (δ_i1∗, . . . , δ_iT0∗) is a feasible vector for the DP. Hence the DP is consistent.
The Karush-Kuhn-Tucker multipliers λ_ir∗ are related to the corresponding λ_ir(δ∗) in the DP as follows:

λ_ir(δ∗) = Σ_{k=T_{r−1}+1}^{T_r} δ_ik∗ = Σ_{k=T_{r−1}+1}^{T_r} λ_ir∗ u_ik(t∗)/g_i0(t∗) = λ_ir∗ g_ir(t∗)/g_i0(t∗), (i = 1, 2, . . . , n; r = 1, 2, . . . , l).
The Karush-Kuhn-Tucker condition (32) becomes

λ_ir∗ ( g_ir(t∗) − 1 ) = 0.    (37)

So, we get λ_ir∗ g_ir(t∗) = λ_ir∗. Therefore, for r = 1, 2, . . . , l and k = T_{r−1} + 1, . . . , T_r, we see that

δ_ik∗ = λ_ir∗ u_ik(t∗)/g_i0(t∗) = λ_ir∗ g_ir(t∗) u_ik(t∗)/g_i0(t∗) = λ_ir(δ∗) u_ik(t∗).    (38)

The fact that δ∗ is feasible for the DP and t∗ is feasible for the GP implies that

g0(t∗) ≥ n ⁿ√ν(δ∗)

because of the Primal-Dual Inequality. Moreover, the values of δ_ik∗ (i = 1, 2, . . . , n; r = 1, 2, . . . , l; k = 1, 2, . . . , T_r) are precisely those that force equality in the Arithmetic-Geometric Mean inequalities that were used to obtain the duality inequality. Finally, equation (37) shows that either g_ir(t∗) = 1 or λ_ir∗ = 0 (i = 1, 2, . . . , n; r = 1, 2, . . . , l), and equation (38) shows that λ_ir∗ = 0 if and only if λ_ir(δ∗) = 0 (i = 1, 2, . . . , n; r = 1, 2, . . . , l). This means that the values of δ_ik∗ actually force equality in the Primal-Dual Inequality.
This completes the proof.
4. Application
4.1. GP problem (Grain-box problem)
Problem 1a.
“It has been decided to shift grain from a warehouse to a factory in an open rectangular box of length x1 meters, width x2 meters and height x3 meters. The bottom, sides and ends of the box cost $80, $10 and $20/m² respectively. It costs $1 for each round trip of the box. Assuming that the box will have no salvage value, find the minimum cost of transporting 80 m³ of grain.”
As stated, this problem can be formulated as the unconstrained GP problem

Minimize 80/(x1 x2 x3) + 40 x2 x3 + 20 x1 x3 + 80 x1 x2,    (39)
where x1 > 0, x2 > 0, x3 > 0.

It is an unconstrained posynomial geometric programming problem. The optimal dimensions of the box are x1∗ = 1 m, x2∗ = 1/2 m, x3∗ = 2 m, and the minimum total cost of this problem is $200.
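Problem 1a has degree of difficulty 4 − (3 + 1) = 0, so the dual variables are determined uniquely by the normality and orthogonality conditions, and the minimum cost falls straight out of the dual objective. A quick numerical check:

```python
import numpy as np

# Rows: normality; orthogonality for x1, x2, x3.
# Columns: δ for the terms 80/(x1 x2 x3), 40 x2 x3, 20 x1 x3, 80 x1 x2.
A = np.array([[ 1.0, 1.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0, 1.0],
              [-1.0, 1.0, 0.0, 1.0],
              [-1.0, 1.0, 1.0, 0.0]])
b = np.array([1.0, 0.0, 0.0, 0.0])
delta = np.linalg.solve(A, b)          # (0.4, 0.2, 0.2, 0.2)

C = np.array([80.0, 40.0, 20.0, 80.0])
cost = np.prod((C / delta) ** delta)   # dual objective = minimum cost
print(delta, cost)                     # ... ≈ 200.0
```

The dual weights say that at the optimum 40% of the cost goes to trips and 20% to each of the three material terms, which matches the optimal dimensions quoted above.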
Problem 1b.
Suppose that we now consider the following variant of the above problem (a similar discussion is given by Duffin, Peterson and Zener [5] in their book). It is required that the sides and bottom of the box be made from scrap material, but only 4 m² of this scrap material is available.
This variation of the problem leads us to the following constrained posynomial GP problem:

Minimize 80/(x1 x2 x3) + 40 x2 x3    (40)
subject to 2 x1 x3 + x1 x2 ≤ 4,
where x1 > 0, x2 > 0, x3 > 0.

Solving this constrained GP problem, we have the minimum total cost $95.24, and the optimal dimensions of the box are x1∗ = 1.58 m, x2∗ = 1.25 m, x3∗ = 0.63 m.
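Problem 1b also has degree of difficulty zero (T = 4 terms, m = 3 variables), so the dual system again pins the δ's down exactly. Normalizing the constraint to (1/2)x1x3 + (1/4)x1x2 ≤ 1, the constrained dual objective (22) reproduces the $95.24 (a sketch, with the term ordering chosen here):

```python
import numpy as np

# Terms: 80/(x1 x2 x3), 40 x2 x3 (objective); 0.5 x1 x3, 0.25 x1 x2 (constraint).
# Rows: normality over the objective δ's; orthogonality for x1, x2, x3.
A = np.array([[ 1.0, 1.0, 0.0, 0.0],
              [-1.0, 0.0, 1.0, 1.0],
              [-1.0, 1.0, 0.0, 1.0],
              [-1.0, 1.0, 1.0, 0.0]])
b = np.array([1.0, 0.0, 0.0, 0.0])
delta = np.linalg.solve(A, b)          # (2/3, 1/3, 1/3, 1/3)

C = np.array([80.0, 40.0, 0.5, 0.25])
lam = delta[2] + delta[3]              # sum of constraint δ's, as in (22)
weights = np.array([1.0, 1.0, lam, lam])
cost = np.prod((C * weights / delta) ** delta)
print(cost)                            # ≈ 95.24
```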
4.2. MGP problem (Multi-grain-box problem)
Problem 2a.
Suppose that grain is to be shifted from a warehouse to a factory in a finite number (say n) of open rectangular boxes of lengths x1i meters, widths x2i meters, and heights x3i meters (i = 1, 2, . . . , n). The bottom, sides and ends of each box cost $a_i, $b_i and $c_i/m² respectively. It costs $1 for each round trip of the boxes. Assuming that the boxes will have no salvage value, find the minimum cost of transporting d_i m³ of grain.
As stated, this problem can be formulated as an unconstrained MGP problem:

Minimize g(x) = Σ_{i=1}^{n} [ d_i/(x1i x2i x3i) + a_i x1i x2i + 2 b_i x1i x3i + 2 c_i x2i x3i ],    (41)
where x1i > 0, x2i > 0, x3i > 0 (i = 1, 2, . . . , n).

In particular, here we assume the transporting of d_i m³ of grain by three different open rectangular boxes whose bottom, side and end costs are given in Table 5. The problem is solved and the optimal results are shown in Table 6.
Table 5
Input data for Problem 2a

i-th Box   a_i ($/m²)   b_i ($/m²)   c_i ($/m²)   d_i (m³)
i = 1      80           10           20           80
i = 2      50           15           25           90
i = 3      40           5            30           70

Table 6
Optimal results of Problem 2a

i-th Box   x1i (meter)   x2i (meter)   x3i (meter)   g∗(x∗) ($)
i = 1      1.00          0.5           2.0
i = 2      1.2           0.72          1.2           572.23
i = 3      2.16          0.36          1.44
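Each box in (41) is an independent subproblem of degree of difficulty zero with the same exponent structure as Problem 1a, so the same dual weights (0.4, 0.2, 0.2, 0.2) apply to every box and the total minimum cost is just the sum of the per-box dual objectives. This reproduces Table 6's $572.23:

```python
import numpy as np

# Per-box data from Table 5: (d_i, a_i, b_i, c_i).
boxes = [(80.0, 80.0, 10.0, 20.0),
         (90.0, 50.0, 15.0, 25.0),
         (70.0, 40.0,  5.0, 30.0)]

# Dual weights for the terms d/(x1 x2 x3), a x1 x2, 2b x1 x3, 2c x2 x3;
# the same for every box, since the exponents are identical.
delta = np.array([0.4, 0.2, 0.2, 0.2])

total = 0.0
for d, a, bb, c in boxes:
    C = np.array([d, a, 2 * bb, 2 * c])
    total += np.prod((C / delta) ** delta)   # per-box minimum cost
print(round(total, 2))                       # 572.23
```

This separability is exactly what makes the MGP formulation attractive here: the n-box problem decomposes into n copies of the single-box dual.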
Problem 2b.
Suppose that we now consider the following variant of the above problem. It is required that the sides and bottoms of the boxes be made from scrap material, but only w m² of this scrap material is available.
This variation of the problem leads us to the following constrained modified geometric program:

Minimize f(x) = Σ_{i=1}^{n} [ d_i/(x1i x2i x3i) + 2 c_i x2i x3i ]    (42)
subject to Σ_{i=1}^{n} (2 x1i x3i + x1i x2i) ≤ w,
where x1i > 0, x2i > 0, x3i > 0 (i = 1, 2, . . . , n).

In particular, here we assume the transporting of d_i m³ of grain by three different open rectangular boxes. The end cost of each box is c_i $/m², and the amounts of grain transported by the three open rectangular boxes are d_i m³. The input data of this problem are given in Table 7. It is a constrained posynomial MGP problem. The optimal results of this problem are shown in Table 8.
Table 7
Input data for Problem 2b

i-th Box   c_i ($/m²)   d_i (m³)   w (m²)
i = 1      40           80
i = 2      30           90         15
i = 3      20           70

Table 8
Optimal results of Problem 2b

i-th Box   x1i (meter)   x2i (meter)   x3i (meter)   g∗(x∗) ($)
i = 1      2.33          1.14          0.57
i = 2      2.0           1.32          0.66          221.44
i = 3      1.49          1.47          0.74
5. Conclusion
We have discussed unconstrained and constrained GP and MGP problems with negative or positive integral degree of difficulty. The LS and MM methods are described to solve GP/MGP problems with negative degree of difficulty. Here, the modified form of the geometric programming method has been demonstrated, and the necessary theorems have been derived for that purpose. This technique can be applied to solve different decision-making problems (as in inventory [10] and other areas).
Acknowledgement
The authors wish to acknowledge the helpful comments and suggestions of the referees.
References
1. M. O. Abou-El-Ata and K. A. M. Kotb, Multi-item EOQ inventory model with varying holding cost under two restrictions: a geometric programming approach, Production Planning & Control 8(5) (1997), 608-611.
2. H. Basirzadeh, A. V. Kayyad and Effati, An approach for solving nonlinear problems, J.
Appl. Math. & Computing 9(2) (2002), 547-560.
3. C. S. Beightler and D. T. Philips, Applied Geometric Programming, Wiley, New York,
1976.
4. G. S. Beightler, D. T. Phillips and D. J. Wilde, Foundations of Optimization, Prentice-Hall,
Englewood Cliffs, NJ, 1979.
5. R. J. Duffin, E. L. Peterson and C. M. Zener, Geometric Programming Theory and Applications, Wiley, New York, 1967.
6. J. Ecker, Geometric programming : methods, computations and applications, SIAM Rev.
22(3) (1980), 338-362.
7. A. M. A. Hariri and M. O. Abou-El-Ata, Multi-item EOQ inventory model with varying holding cost under two restrictions: a geometric programming approach, Production Planning & Control 8(5) (1997), 608-611.
8. Sahidul Islam and T. K. Roy, An economic production quantity model with flexibility and
reliability consideration and demand dependent unit cost under a constraint : Geometric
Programming Approach, Proceedings of National Symposium Department of Mathematics,
University of Kalyani, India, 21-22th March, 2002, 71-82.
9. C. L. Lawson and R. J. Hanson, Solving Least Squares Problems, Prentice Hall, Inc.,
Englewood Cliffs, NJ, 1974.
10. A. K. Pal and B. Mandal, An EOQ Model for Deteriorating Inventory with Alternating Demand Rate, Korean J. Comput. & Appl. Math. 4(2) (1997), 397-408.
11. A. L. Peressini, F. E. Sullivan and J. J. Uhl, Jr., The Mathematics of Nonlinear Programming, Springer-Verlag, New York, 1993.
12. F. Scheid, Numerical Analysis, McGraw-Hill, New York, 1968.
13. S. B. Sinha, A. Biswas and M. P. Biswal, Geometric programming problems with negative
degrees of difficulty, European Journal of Operational Research, 28 (1987), 101-103.
14. C. Zener, Engineering Design by Geometric Programming, Wiley, New York, 1971.
Sahidul Islam received his B. Sc. (Hons.) from Rampurhat College under the University of Burdwan and his M. Sc. from Jadavpur University. In 2003, he received a Junior Research Fellowship from the Council of Scientific and Industrial Research at Bengal Engineering College (Deemed University). His research interests focus on optimization problems in information and fuzzy systems and related OR models. Some papers have been published and accepted in Proceedings of National Seminars.
T. K. Roy is a senior lecturer in Mathematics at Bengal Engineering College (Deemed University). His areas of research are fuzzy set theory and applications of OR in information and fuzzy systems. He has published papers in various international and national journals including European Journal of Operational Research, Computers & Operations Research, Production Planning and Control, and OPSEARCH. Some papers have also been published and accepted in Proceedings of international and national seminars.
Department of Mathematics, Bengal Engineering College (Deemed University), Howrah-711103, West Bengal, India
e-mail : sahidul@yahoo.ca
e-mail : roy t k@yahoo.co.in