IJMMS 25:9 (2001) 621–628
PII. S0161171201002721
http://ijmms.hindawi.com
© Hindawi Publishing Corp.
SENSITIVITY ANALYSIS FOR PARAMETRIC VECTOR
OPTIMIZATION PROBLEMS USING DIFFERENTIAL
EQUATIONS APPROACH
FATMA M. ALI
(Received 11 September 1998)
Abstract. A new method for obtaining sensitivity information for parametric vector optimization problems (VOP)$_v$ is presented, where the parameters appear in the objective functions and anywhere in the constraints. The method relies on a differential equations technique for solving multiobjective nonlinear programming problems, which is very effective in finding many local Pareto optimal solutions. The behavior of the local solutions under slight perturbation of the parameters in the neighborhood of their chosen initial values is studied using the technique of trajectory continuation. Finally, some examples are given to show the efficiency of the proposed method.
2000 Mathematics Subject Classification. Primary 90C29, 90C30, 90C31.
1. Introduction. A differential equations approach was presented as a new method for solving equality constrained nonlinear programming problems in [7]. This technique was extended in [1] to solve multiobjective nonlinear programming problems, convex or nonconvex, with equality or inequality constraints. There the multiobjective nonlinear programming problem is likewise transformed into a nonlinear autonomous system of differential equations; in fact, the asymptotically stable critical points of the differential system are constrained local Pareto optimal solutions of the original optimization problem. Recently, further results on equality or inequality constrained nonlinear programming problems with fuzzy parameters were obtained by this approach in [2].
In this paper, sensitivity information is obtained by using the autonomous system of differential equations corresponding to the vector optimization problem (VOP) (see [1]). This information coincides with the explicit representation of the first order partial derivatives of the local solution point and the associated Lagrange multipliers of the parametric problem [4]. The problem under consideration is a parametric vector optimization problem in which the parameters appear in the objective functions and anywhere in the constraints. The fundamental equations corresponding to the problem are described in Section 2. Using the technique of trajectory continuation [5, 6, 8], the behavior of the local solution under slight perturbation of the parameters in the neighborhood of their chosen initial values is discussed in Section 3. Finally, two illustrative examples are given in Section 4.
2. Problem formulation. A mathematical programming problem with general perturbations in the objective functions and anywhere in the constraints has the form

$$(\mathrm{VOP})_v:\ \min_{x\in\mathbb{R}^n}\ \big(f_1(x,v), f_2(x,v), \ldots, f_m(x,v)\big) \tag{2.1}$$

subject to $G(x,v)\le 0$, where $\mathbb{R}^n$ is an $n$-dimensional Euclidean space, and

$$f_j(x,v),\quad j=1,2,\ldots,m,\qquad G(x,v)=\big(g_1(x,v), g_2(x,v), \ldots, g_r(x,v)\big)^T \tag{2.2}$$

possess continuous first and second derivatives.
The corresponding problem with scalar objective and equality constraints [3] can be written in the form

$$P_k(\varepsilon):\ \min f_k(x,v) \tag{2.3}$$

subject to

$$F(\bar{x})\in\mathbb{R}^{m-1}:\ f_j(x,v)+s_j^2-\varepsilon_j=0,\quad j=1,2,\ldots,k-1,k+1,\ldots,m,$$
$$G(\bar{x})\in\mathbb{R}^{r}:\ g_i(x,v)+\xi_i^2=0,\quad i=1,2,\ldots,r, \tag{2.4}$$

where

$$\bar{x}=\big(x_1,\ldots,x_n,s_1,\ldots,s_{k-1},s_{k+1},\ldots,s_m,\xi_1,\ldots,\xi_r\big),\qquad \varepsilon=\big(\varepsilon_1,\ldots,\varepsilon_{k-1},\varepsilon_{k+1},\ldots,\varepsilon_m\big). \tag{2.5}$$
Assume that the matrices $A_1=\nabla_{\bar{x}}F(\bar{x})$ and $A_2=\nabla_{\bar{x}}G(\bar{x})$ have full rank.
From [3], it is well known that the optimal solution $\bar{x}^*$ of $P_k(\varepsilon)$ is an efficient solution of the vector optimization problem if one of the following conditions holds:
(i) $\bar{x}^*$ solves $P_k(\varepsilon)$ for every $k=1,2,\ldots,m$;
(ii) $\bar{x}^*$ is the unique optimal solution of $P_k(\varepsilon)$.
Recently, in [1] problem (2.4) was solved by a differential equations approach for fixed $v=0$, and the fundamental equations were

$$B\dot{\bar{x}} + \begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix} = -\nabla_{\bar{x}} f_k^T,\qquad A_1\dot{\bar{x}} = -F,\qquad A_2\dot{\bar{x}} = -G, \tag{2.6}$$

where $B$ is a symmetric nonsingular matrix of order $(n+m+r-1)\times(n+m+r-1)$.
From the autonomous system (2.6), we obtain

$$\dot{\bar{x}} = \phi(\bar{x}) = -PB^{-1}\nabla f_k^T - \tilde{P}\begin{pmatrix}F\\ G\end{pmatrix}, \tag{2.7a}$$

$$\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix} = D^{-1}\left[\begin{pmatrix}F\\ G\end{pmatrix} - \begin{pmatrix}A_1\\ A_2\end{pmatrix}B^{-1}\nabla f_k^T\right], \tag{2.7b}$$

where

$$P = I - P_1,\qquad P_1 = B^{-1}\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}D^{-1}\begin{pmatrix}A_1\\ A_2\end{pmatrix}, \tag{2.7c}$$

$$\tilde{P} = B^{-1}\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}D^{-1}, \tag{2.7d}$$

$$D(\bar{x}) = \begin{bmatrix} A_1B^{-1}A_1^T & A_1B^{-1}A_2^T\\ A_2B^{-1}A_1^T & A_2B^{-1}A_2^T\end{bmatrix} \tag{2.7e}$$

is a nonsingular matrix.
In [1], it was proved that the matrix $P(\bar{x})$ is a projection operator which projects any vector in $\mathbb{R}^{n+m+r-1}$ onto $M(\bar{x})$, where $M(\bar{x})$ is the tangent space of the system of constraints at $\bar{x}$,

$$M(\bar{x}) = \left\{ y\in\mathbb{R}^{n+m+r-1} : \begin{pmatrix}A_1\\ A_2\end{pmatrix}y = 0\right\}. \tag{2.8}$$
Also it was proved that if
(a) $\bar{x}^*$ is a regular point of the constraints,
(b) the second order sufficiency conditions are satisfied at $\bar{x}^*$,
(c) there are no degenerate constraints at $\bar{x}^*$,
then any trajectory starting from a point within some neighborhood of the local minimal point $\bar{x}^*$ converges to $\bar{x}^*$ as $t\to\infty$.
3. Sensitivity information. Using the technique of trajectory continuation [5, 6, 8], we discuss the behavior of the local solution $\bar{x}^*$ under slight perturbation of the parameters in the neighborhood of their chosen initial values.
The following existence theorem, which is based on the implicit function theorem [4], holds.
Theorem 3.1. Let $\bar{x}^*$ be a unique local solution of $(\mathrm{VOP})_{v=0}$ satisfying the assumptions (a)–(c). Then there exists a continuously differentiable vector valued function $\bar{x}(\cdot)$ defined in some neighborhood $N(v)$ of $v=0$ such that $\bar{x}(0)=\bar{x}^*$, and $\bar{x}(v)$ is a unique local solution of the problem $(\mathrm{VOP})_v$ satisfying the assumptions (a)–(c) for any $v\in N(v)$.
For any $v\in N(v)$ the fundamental equations corresponding to $(\mathrm{VOP})_v$ take the form

$$B\dot{\bar{x}}(v) + \begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix} = -\nabla_{\bar{x}} f_k^T,\qquad A_1\dot{\bar{x}}(v) = -F,\qquad A_2\dot{\bar{x}}(v) = -G, \tag{3.1}$$

and consequently,

$$\phi\big(\bar{x}(v)\big) = \dot{\bar{x}}(v) = -B^{-1}\left[\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix} + \nabla_{\bar{x}} f_k^T\right]. \tag{3.2}$$
From equation (2.7a), near $\bar{x}^*$ one can write

$$\frac{d\bar{x}(v)}{dt} = \phi\big(\bar{x}(v)\big) \approx \frac{\partial\phi(\bar{x}^*)}{\partial\bar{x}}\big(\bar{x}(v)-\bar{x}^*\big). \tag{3.3}$$

Proposition 3.2. If $B(\bar{x}) = \nabla^2 f_k^T + \begin{bmatrix}\gamma_1^T & \gamma_2^T\end{bmatrix}\begin{pmatrix}\nabla^2 F(\bar{x})\\ \nabla^2 G(\bar{x})\end{pmatrix}$, then $\partial\phi(\bar{x}^*)/\partial\bar{x} = -I$; that is, the local minimal point $\bar{x}^*$ is asymptotically stable.
Proof. In fact,

$$\begin{aligned}
\frac{\partial\phi(\bar{x}^*)}{\partial\bar{x}}
&= \frac{\partial}{\partial\bar{x}}\left[-B^{-1}\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix} - B^{-1}\nabla f_k^T\right]
= -\frac{\partial}{\partial\bar{x}}\left[B^{-1}\big(A_1^T\gamma_1 + A_2^T\gamma_2\big) + B^{-1}\nabla f_k^T\right]\\[0.5ex]
&= -\left[\frac{\partial B^{-1}}{\partial\bar{x}}A_1^T\gamma_1 + B^{-1}\nabla^2F^T\gamma_1 + B^{-1}A_1^T\frac{\partial\gamma_1}{\partial\bar{x}}
+ \frac{\partial B^{-1}}{\partial\bar{x}}A_2^T\gamma_2 + B^{-1}\nabla^2G^T\gamma_2 + B^{-1}A_2^T\frac{\partial\gamma_2}{\partial\bar{x}}
+ B^{-1}\nabla^2 f_k^T + \frac{\partial B^{-1}}{\partial\bar{x}}\nabla f_k^T\right]\\[0.5ex]
&= -B^{-1}\left[\nabla^2 f_k^T + \begin{bmatrix}\gamma_1^T & \gamma_2^T\end{bmatrix}\begin{pmatrix}\nabla^2F\\ \nabla^2G\end{pmatrix}\right]
- \frac{\partial B^{-1}}{\partial\bar{x}}\left[\nabla f_k^T + \begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix}\right]
- B^{-1}\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\partial\gamma_1/\partial\bar{x}\\ \partial\gamma_2/\partial\bar{x}\end{pmatrix}.
\end{aligned}\tag{3.4}$$

Differentiating (2.7b) with respect to $\bar{x}$ gives

$$\begin{pmatrix}\partial\gamma_1/\partial\bar{x}\\[0.5ex] \partial\gamma_2/\partial\bar{x}\end{pmatrix}
= -D^{-1}\begin{pmatrix}\dfrac{\partial\big(A_1B^{-1}A_1^T\big)}{\partial\bar{x}}\gamma_1 + \dfrac{\partial\big(A_1B^{-1}A_2^T\big)}{\partial\bar{x}}\gamma_2\\[1.5ex]
\dfrac{\partial\big(A_2B^{-1}A_1^T\big)}{\partial\bar{x}}\gamma_1 + \dfrac{\partial\big(A_2B^{-1}A_2^T\big)}{\partial\bar{x}}\gamma_2\end{pmatrix}
- D^{-1}\left[\begin{pmatrix}\nabla^2F\\ \nabla^2G\end{pmatrix}B^{-1}\nabla f_k^T
+ \begin{pmatrix}A_1\\ A_2\end{pmatrix}\frac{\partial B^{-1}}{\partial\bar{x}}\nabla f_k^T
+ \begin{pmatrix}A_1\\ A_2\end{pmatrix}B^{-1}\nabla^2 f_k^T
- \begin{pmatrix}A_1\\ A_2\end{pmatrix}\right].$$

At $\bar{x}^*$ we have $F = G = 0$ and $\nabla f_k^T + A_1^T\gamma_1 + A_2^T\gamma_2 = 0$; substituting $\nabla f_k^T = -\big(A_1^T\gamma_1 + A_2^T\gamma_2\big)$ above, the terms involving $\partial B^{-1}/\partial\bar{x}$ cancel and we are left with

$$\begin{pmatrix}\partial\gamma_1/\partial\bar{x}\\[0.5ex] \partial\gamma_2/\partial\bar{x}\end{pmatrix}
= -D^{-1}\begin{pmatrix}A_1\\ A_2\end{pmatrix}B^{-1}\left[\nabla^2 f_k^T + \begin{bmatrix}\gamma_1^T & \gamma_2^T\end{bmatrix}\begin{pmatrix}\nabla^2F\\ \nabla^2G\end{pmatrix}\right]
+ D^{-1}\begin{pmatrix}A_1\\ A_2\end{pmatrix}.\tag{3.5}$$
From (3.4) and (3.5), using again that $\nabla f_k^T + A_1^T\gamma_1 + A_2^T\gamma_2 = 0$ at $\bar{x}^*$ and that $B = \nabla^2 f_k^T + \begin{bmatrix}\gamma_1^T & \gamma_2^T\end{bmatrix}\begin{pmatrix}\nabla^2F\\ \nabla^2G\end{pmatrix}$, we obtain

$$\begin{aligned}
\frac{\partial\phi(\bar{x}^*)}{\partial\bar{x}}
&= -B^{-1}\left[\nabla^2 f_k^T + \begin{bmatrix}\gamma_1^T & \gamma_2^T\end{bmatrix}\begin{pmatrix}\nabla^2F\\ \nabla^2G\end{pmatrix}\right]
- B^{-1}\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\left[-D^{-1}\begin{pmatrix}A_1\\ A_2\end{pmatrix}B^{-1}B + D^{-1}\begin{pmatrix}A_1\\ A_2\end{pmatrix}\right]\\[0.5ex]
&= -B^{-1}B - B^{-1}\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\left[-D^{-1}\begin{pmatrix}A_1\\ A_2\end{pmatrix} + D^{-1}\begin{pmatrix}A_1\\ A_2\end{pmatrix}\right] = -I,
\end{aligned}\tag{3.6}$$
that is,

$$\lim_{t\to\infty}\bar{x}(t) = \bar{x}^*,\tag{3.7}$$

since near $\bar{x}^*$ the solution behaves like $\bar{x}(t) \approx \bar{x}^* + \big(\bar{x}(0)-\bar{x}^*\big)e^{-t}$; hence $\bar{x}^*$ is asymptotically stable.
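The contraction rate predicted by (3.3), (3.6), and (3.7) is easy to observe numerically. The following Python sketch, added here purely as an illustration (the test problem anticipates Example 4.1 below at $v = (1,0)$, where $\bar{x}^* = (-1,0,0)$ and $\gamma^* = 1/2$, so that $B = \nabla^2 f + \gamma^*\nabla^2 g = I$ at $\bar{x}^*$), integrates (2.7a) from a point near $\bar{x}^*$ and checks that the distance to $\bar{x}^*$ decays like $e^{-t}$:

import numpy as np
from scipy.integrate import solve_ivp

zstar = np.array([-1.0, 0.0, 0.0])   # local solution (x1, x2, s); gamma* = 1/2

def phi(t, z):
    # Right-hand side (2.7a) with B = I (the value of grad^2 f + gamma* grad^2 g
    # at zstar); problem data: f = x1, g = x1^2 + x2^2 - 1 + s^2 = 0.
    x1, x2, s = z
    g = x1**2 + x2**2 - 1.0 + s**2
    A = np.array([[2.0 * x1, 2.0 * x2, 2.0 * s]])   # gradient of the constraint
    grad_f = np.array([1.0, 0.0, 0.0])
    Dinv = np.linalg.inv(A @ A.T)                   # D of (2.7e), here 1x1
    P = np.eye(3) - A.T @ Dinv @ A                  # projector (2.7c)
    return -P @ grad_f - (A.T @ Dinv @ np.array([g])).ravel()

z0 = zstar + 1e-3 * np.array([1.0, -2.0, 0.5])      # small perturbation of zstar
ts = np.linspace(0.0, 5.0, 6)
sol = solve_ivp(phi, (0.0, 5.0), z0, t_eval=ts, rtol=1e-12, atol=1e-14)
d = np.linalg.norm(sol.y - zstar[:, None], axis=0)
print(d / d[0])                                     # tracks exp(-t)
print(np.exp(-ts))

The printed ratios should agree with $e^{-t}$ to within the size of the initial perturbation, in line with Proposition 3.2.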
To obtain sensitivity information for the first order estimation of solutions of a parametric optimization problem, we introduce the following system of differential equations:

$$B\frac{d\bar{x}}{dv} + \nabla_v\left(\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix}\right) = -\nabla_v\nabla_{\bar{x}} f_k^T, \tag{3.8}$$

$$\begin{pmatrix}A_1\\ A_2\end{pmatrix}\frac{d\bar{x}}{dv} = -\nabla_v\begin{pmatrix}F\\ G\end{pmatrix}. \tag{3.9}$$

Then, one can easily obtain

$$\frac{d\bar{x}(v)}{dv} = -PB^{-1}\left[\nabla_v\left(\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix}\right) + \nabla_v\nabla f_k^T\right] - \tilde{P}\,\nabla_v\begin{pmatrix}F\\ G\end{pmatrix}, \tag{3.10}$$

$$\nabla_v\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix} = D^{-1}\left[\nabla_v\begin{pmatrix}F\\ G\end{pmatrix} - \begin{pmatrix}A_1\\ A_2\end{pmatrix}B^{-1}\left(\nabla_v\nabla_{\bar{x}} f_k^T + \nabla_v\left(\begin{bmatrix}A_1^T & A_2^T\end{bmatrix}\begin{pmatrix}\gamma_1\\ \gamma_2\end{pmatrix}\right)\right)\right], \tag{3.11}$$

where $P$, $B$, $\tilde{P}$, and $D$ are as in (2.7).
After solving $(\mathrm{VOP})_v$, we may wish to answer the following question: if the problem $(\mathrm{VOP})_{v=0}$ is replaced by $(\mathrm{VOP})_v$, $v\in N(v)$, what is the new efficient solution of the perturbed problem, without solving it again?
With $(\bar{x},\gamma_1,\gamma_2) = \big(\bar{x}(v),\gamma_1(v),\gamma_2(v)\big)$ identically for $v$ near zero, under the assumptions of Theorem 3.1 and Proposition 3.2, a first order approximation of $\big(\bar{x}(v),\gamma_1(v),\gamma_2(v)\big)$ in the neighborhood of $v=0$ is given by
∗

  
dx (v)
x (v)
x∗


dv


   
 v + O v .
γ1 (v) = γ ∗  + 


  1 
(v)
γ
1


γ2 (v)
γ2∗
∇v
γ2 (v)

(3.12)
The sensitivity information (3.10) and (3.11) minimizes the computational effort needed for finding many efficient solutions of the parametric problem $(\mathrm{VOP})_v$, $v\in N(v)$.
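In computations, (3.10) and (3.11) need not be assembled through $P$, $\tilde{P}$, and $D$ explicitly: collecting (3.8) and (3.9) gives an equivalent bordered linear system for the pair $\big(d\bar{x}/dv, \nabla_v(\gamma_1,\gamma_2)\big)$. The following Python sketch, added here for illustration, solves that system for the data of Example 4.1 below, evaluated at $v = (1,0)$ where $\bar{x}^* = (-1,0,0)$ and $\gamma^* = 1/2$; since the constraint gradients there have no explicit $v$-dependence, the term $\nabla_v(A^T\gamma)$ reduces to $A^T\nabla_v\gamma$ and moves to the left-hand side:

import numpy as np

# Data of Example 4.1 at v = (1, 0): zbar* = (x1, x2, s) = (-1, 0, 0),
# gamma* = 1/2, and B = grad^2 f + gamma grad^2 g = 2*gamma*I = I.
B = np.eye(3)
A = np.array([[-2.0, 0.0, 0.0]])      # gradient of g = x1^2 + x2^2 - v1^2 + s^2
dv_grad_f = np.array([[0.0, 0.0],     # d/dv of grad f, where f = x1 + v2*x2
                      [0.0, 1.0],
                      [0.0, 0.0]])
dv_g = np.array([[-2.0, 0.0]])        # d/dv of the constraint value, at v1 = 1

# Bordered system collected from (3.8)-(3.9):
#   [ B  A^T ] [ dz/dv     ]       [ dv_grad_f ]
#   [ A   0  ] [ dgamma/dv ]  =  - [ dv_g      ]
n, r = B.shape[0], A.shape[0]
M = np.block([[B, A.T], [A, np.zeros((r, r))]])
rhs = -np.vstack([dv_grad_f, dv_g])
sol = np.linalg.solve(M, rhs)
print(sol[:n])   # dz/dv     = [[-1, 0], [0, -1], [0, 0]], as in (4.4)
print(sol[n:])   # dgamma/dv = [[-0.5, 0]],                as in (4.5)

Once these derivatives are available, (3.12) yields first order estimates of the efficient solution and multipliers of $(\mathrm{VOP})_v$ for all $v$ near the nominal value at essentially no extra cost.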
4. Illustrative examples. In this section we provide numerical examples to clarify
the theory developed in the paper.
Example 4.1 (see [4]).

$$P(v):\ \text{minimize } f(x,v) = x_1 + v_2x_2 \quad\text{subject to } g(x,v) = x_1^2 + x_2^2 - v_1^2 \le 0. \tag{4.1}$$

To illustrate the application of the general equations (3.10) and (3.11) for obtaining sensitivity information for the first order estimation of the solution of the parametric optimization problem, we proceed as follows.
Reformulate $P(v)$ with equality constraints in the form

$$P_1(v):\ \text{minimize } f(x,v) = x_1 + v_2x_2 \quad\text{subject to } g(\bar{x},v) = x_1^2 + x_2^2 - v_1^2 + s^2 = 0. \tag{4.2}$$
Formulate (3.10) and (3.11) with $B = \nabla^2 f^T + \gamma\nabla^2 g$; then

$$B^{-1} = \begin{bmatrix}\dfrac{1}{2\gamma} & 0 & 0\\[1ex] 0 & \dfrac{1}{2\gamma} & 0\\[1ex] 0 & 0 & \dfrac{1}{2\gamma}\end{bmatrix},\qquad
D^{-1} = \frac{\gamma}{2\big(x_1^2+x_2^2+s^2\big)},$$

$$\tilde{P}\nabla_v g = \frac{1}{2\big(x_1^2+x_2^2+s^2\big)}\begin{bmatrix}-2x_1v_1 & 0\\ -2x_2v_1 & 0\\ -2sv_1 & 0\end{bmatrix},\qquad
-P = \frac{1}{v_1^2}\begin{bmatrix}x_1^2-v_1^2 & x_1x_2 & sx_1\\ x_1x_2 & x_2^2-v_1^2 & sx_2\\ sx_1 & sx_2 & s^2-v_1^2\end{bmatrix},$$

$$-PB^{-1}\nabla_v\nabla_{\bar{x}} f_k^T = \begin{bmatrix}0 & \dfrac{x_1x_2}{2\gamma v_1^2}\\[1ex] 0 & -\dfrac{x_1^2+s^2}{2\gamma v_1^2}\\[1ex] 0 & \dfrac{sx_2}{2\gamma v_1^2}\end{bmatrix}, \tag{4.3}$$

where we used $x_1^2+x_2^2+s^2 = v_1^2$ on the constraint manifold. Consequently, we obtain

$$\frac{d\bar{x}}{dv} = \begin{bmatrix}\dfrac{x_1}{v_1} & \dfrac{x_1x_2}{2\gamma v_1^2}\\[1ex] \dfrac{x_2}{v_1} & -\dfrac{x_1^2+s^2}{2\gamma v_1^2}\\[1ex] \dfrac{s}{v_1} & \dfrac{sx_2}{2\gamma v_1^2}\end{bmatrix}, \tag{4.4}$$

$$\frac{d\gamma}{dv} = \begin{bmatrix}-\dfrac{\gamma}{v_1} & -\dfrac{x_2}{2v_1^2}\end{bmatrix}. \tag{4.5}$$
The sensitivity information obtained in (4.4) and (4.5) coincides completely with Fiacco's results in [4, page 106].
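As a quick numerical cross-check, one may re-solve the perturbed problem and compare it with the first order estimate (3.12) built from (4.4). The following Python sketch is an illustration added here (the perturbation size and the use of a general-purpose solver are choices made for the example, not part of the method):

import numpy as np
from scipy.optimize import minimize

def solve_P(v1, v2):
    # Re-solve min x1 + v2*x2 subject to x1^2 + x2^2 <= v1^2 numerically.
    cons = [{"type": "ineq", "fun": lambda x: v1**2 - x[0]**2 - x[1]**2}]
    res = minimize(lambda x: x[0] + v2 * x[1], x0=[-0.9, 0.1], constraints=cons)
    return res.x

x_star = np.array([-1.0, 0.0])      # solution of P(v) at v = (1, 0)
dx_dv = np.array([[-1.0, 0.0],      # (x1, x2) rows of (4.4) at v = (1, 0),
                  [0.0, -1.0]])     # where gamma = 1/2 and s = 0

dv = np.array([0.05, -0.02])        # small parameter perturbation
predicted = x_star + dx_dv @ dv     # first order estimate (3.12)
actual = solve_P(1.0 + dv[0], dv[1])
print(predicted)                    # [-1.05  0.02]
print(actual)                       # agrees with the estimate up to O(||dv||^2)

The agreement degrades only quadratically in $\|dv\|$, which is exactly what the first order expansion (3.12) promises.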
Example 4.2.

$$P(v):\ \text{minimize } \big(f_1(X,v), f_2(X,v)\big) \quad\text{subject to } G(X) = \big\{(x,y)\in\mathbb{R}^2 : x+y\le 1\big\}, \tag{4.6}$$

where $f_1(X,v) = x^2 + y^2 - v_1$ and $f_2(X,v) = x^2 - y^2 - v_2$. Consequently,

$$P_1(\varepsilon):\ \text{minimize } f_1(X,v) \quad\text{subject to } x+y\le 1,\quad x^2-y^2-v_2\le\varepsilon_2. \tag{4.7}$$

Start with the first choice $\varepsilon = \varepsilon_2 = 1$ as in [3] and formulate $P_1(\varepsilon)$ with equality constraints in the form

$$P_1(\varepsilon):\ \text{minimize } x^2+y^2-v_1 \quad\text{subject to}\quad F(\bar{X},v) = x^2-y^2-v_2+s^2-1 = 0,\qquad G(\bar{X}) = x+y+\xi^2-1 = 0, \tag{4.8}$$

where $\bar{X} = (x,y,s,\xi)$. Solve $P_1(\varepsilon)$ to find the unperturbed point at $v_1 = 0$, $v_2 = 0$.
To formulate (3.10) and (3.11), we obtain (writing $A_1 = \nabla_{\bar{X}}F = (2x, -2y, 2s, 0)$, $A_2 = \nabla_{\bar{X}}G = (1, 1, 0, 2\xi)$, and $|D|$ for the determinant of $D$ in (2.7e))

$$B^{-1} = \begin{bmatrix}\dfrac{1}{2(1+\gamma_1)} & 0 & 0 & 0\\[1ex] 0 & \dfrac{1}{2(1+\gamma_1)} & 0 & 0\\[1ex] 0 & 0 & \dfrac{1}{2\gamma_1} & 0\\[1ex] 0 & 0 & 0 & \dfrac{1}{2\gamma_2}\end{bmatrix},\qquad
D^{-1} = \frac{1}{|D|}\begin{bmatrix}\dfrac{1}{1+\gamma_1}+\dfrac{2\xi^2}{\gamma_2} & -\dfrac{x-y}{1+\gamma_1}\\[1.5ex] -\dfrac{x-y}{1+\gamma_1} & \dfrac{2\big(x^2+y^2\big)}{1+\gamma_1}+\dfrac{2s^2}{\gamma_1}\end{bmatrix},$$

$$\tilde{P} = \frac{1}{|D|}\begin{bmatrix}
\dfrac{x+y}{2(1+\gamma_1)^2}+\dfrac{2x\xi^2}{(1+\gamma_1)\gamma_2} & \dfrac{y(x+y)}{(1+\gamma_1)^2}+\dfrac{s^2}{\gamma_1(1+\gamma_1)}\\[1.5ex]
-\dfrac{x+y}{2(1+\gamma_1)^2}-\dfrac{2y\xi^2}{(1+\gamma_1)\gamma_2} & \dfrac{x(x+y)}{(1+\gamma_1)^2}+\dfrac{s^2}{\gamma_1(1+\gamma_1)}\\[1.5ex]
\dfrac{s}{\gamma_1(1+\gamma_1)}+\dfrac{2s\xi^2}{\gamma_1\gamma_2} & -\dfrac{s(x-y)}{\gamma_1(1+\gamma_1)}\\[1.5ex]
-\dfrac{\xi(x-y)}{(1+\gamma_1)\gamma_2} & \dfrac{2\xi\big(x^2+y^2\big)}{(1+\gamma_1)\gamma_2}+\dfrac{2s^2\xi}{\gamma_1\gamma_2}
\end{bmatrix}.\tag{4.9}$$
Thus, since $f_1$ contributes no mixed derivative ($\nabla_v\nabla_{\bar{X}}f_1^T = 0$) and $\nabla_v(F;G) = \begin{bmatrix}0 & -1\\ 0 & 0\end{bmatrix}$, we get

$$\frac{d\bar{X}}{dv} = -\tilde{P}\,\nabla_v\begin{pmatrix}F\\ G\end{pmatrix}
= \frac{1}{|D|}\begin{bmatrix}
0 & \dfrac{x+y}{2(1+\gamma_1)^2}+\dfrac{2x\xi^2}{(1+\gamma_1)\gamma_2}\\[1.5ex]
0 & -\dfrac{x+y}{2(1+\gamma_1)^2}-\dfrac{2y\xi^2}{(1+\gamma_1)\gamma_2}\\[1.5ex]
0 & \dfrac{s}{\gamma_1(1+\gamma_1)}+\dfrac{2s\xi^2}{\gamma_1\gamma_2}\\[1.5ex]
0 & -\dfrac{\xi(x-y)}{(1+\gamma_1)\gamma_2}
\end{bmatrix};\tag{4.10}$$

the first column vanishes because $v_1$ enters $P_1(\varepsilon)$ only as a constant shift of the objective.
References

[1] F. M. Ali, Technique for solving multiobjective nonlinear programming using differential equations approach, J. Inform. Optim. Sci. 18 (1997), no. 3, 351–357. MR 99a:90167. Zbl 919.90130.
[2] F. M. Ali, A differential equation approach to fuzzy non-linear programming problems, Fuzzy Sets and Systems 93 (1998), no. 1, 57–61. MR 99b:90157. Zbl 919.90143.
[3] V. Chankong and Y. Y. Haimes, Multiobjective Decision Making: Theory and Methodology, North-Holland Series in System Science and Engineering, vol. 8, North-Holland Publishing Co., Amsterdam, 1983. MR 86k:90002. Zbl 622.90002.
[4] A. V. Fiacco, Introduction to Sensitivity and Stability Analysis in Nonlinear Programming, Mathematics in Science and Engineering, vol. 165, Academic Press, Florida, 1983. MR 85b:90072. Zbl 543.90075.
[5] W. Hahn, Stability of Motion, translated from the German manuscript by Arne P. Baartz, Die Grundlehren der mathematischen Wissenschaften, vol. 138, Springer-Verlag, New York, 1967. MR 36#6716. Zbl 189.38503.
[6] J. LaSalle and S. Lefschetz, Stability by Liapunov's Direct Method, with Applications, Mathematics in Science and Engineering, vol. 4, Academic Press, New York, 1961. MR 24#A2712. Zbl 098.06102.
[7] H. Yamashita, A differential equation approach to nonlinear programming, Math. Programming 18 (1980), no. 2, 155–168. MR 81k:90090. Zbl 436.90094.
[8] T. Yoshizawa, Stability Theory by Liapunov's Second Method, Publications of the Mathematical Society of Japan, no. 9, The Mathematical Society of Japan, Tokyo, 1966. MR 34#7896. Zbl 144.10802.
Fatma M. Ali: Department of Mathematics, Faculty of Education, Ain Shams University, Roxy, Cairo, Egypt