Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 854125, 7 pages
http://dx.doi.org/10.1155/2013/854125
Research Article
Generalized Minimax Programming with
Nondifferentiable (𝐺, 𝛽)-Invexity
D. H. Yuan and X. L. Liu
Department of Mathematics, Hanshan Normal University, Guangdong 521041, China
Correspondence should be addressed to D. H. Yuan; dhyuan@hstc.edu.cn
Received 26 November 2012; Accepted 23 December 2012
Academic Editor: C. Conca
Copyright © 2013 D. H. Yuan and X. L. Liu. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
We consider the generalized minimax programming problem (P) in which the functions involved are locally Lipschitz (G, β)-invex. Not only G-sufficient but also G-necessary optimality conditions are established for problem (P). With the G-necessary optimality conditions and (G, β)-invexity on hand, we construct the dual problem (DI) for the primal problem (P) and prove duality results between problems (P) and (DI). These results extend several known results to a wider class of programs.
1. Introduction
Convexity plays a central role in many aspects of mathematical programming, including the analysis of stability, sufficient optimality conditions, and duality. Based on convexity assumptions, nonlinear programming problems can be solved efficiently. There have been many attempts to weaken the convexity assumptions in order to treat many practical problems. Therefore, many concepts of generalized convex functions have been introduced and applied to mathematical programming problems in the literature [1]. One of these concepts, invexity, was introduced by Hanson in [2]. Hanson showed that invexity shares with convexity the property that the Karush-Kuhn-Tucker conditions are sufficient for global optimality in nonlinear programming under invexity assumptions. Ben-Israel and Mond [3] introduced the concept of preinvex functions, which is a special case of invexity.
Recently, Antczak further extended invexity to G-invexity [4] for scalar differentiable functions and introduced new necessary optimality conditions for differentiable mathematical programming problems. Antczak also applied the G-invexity notion to develop sufficient optimality conditions and new duality results for differentiable mathematical programming problems. Furthermore, in a natural way, Antczak's definition of G-invexity was extended to the case of differentiable vector-valued functions. In [5], Antczak defined vector G-invex (G-incave) functions with respect to η and applied this vector G-invexity to develop optimality conditions for differentiable multiobjective programming problems with both inequality and equality constraints. He also established the so-called G-Karush-Kuhn-Tucker necessary optimality conditions for differentiable vector optimization problems under the Kuhn-Tucker constraint qualification [5]. With this vector G-invexity concept, Antczak proved new duality results for nonlinear differentiable multiobjective programming problems [6]. A number of new vector duality problems, such as the G-Mond-Weir, G-Wolfe, and G-mixed dual vector problems to the primal one, were also defined in [6].
In the last few years, many concepts of generalized convexity, including (p, r)-invexity [7], (F, ρ)-convexity [8], (F, α, ρ, d)-convexity [9], (C, α, ρ, d)-convexity [10], (φ, ρ)-invexity [11], V-r-invexity [12], and their extensions, have been introduced and applied to different mathematical programming problems. In particular, they have also been applied to minimax programming; see [13–17] for details. However, we have not found a paper which deals with the generalized minimax programming problem (P) under G-invexity or its generalizations.
Note that the function G ∘ f may not be differentiable even if the function G is differentiable. Yuan et al. [18] introduced the (G_f, β_f)-invexity concept for the locally Lipschitz
function f. This (G_f, β_f)-invexity extended Antczak's G-invexity concept to the nonsmooth case. In this paper, we deal with the nondifferentiable generalized minimax programming problem (P) with the vector (G, β)-invexity proposed in [18]. Here, the generalized minimax programming problem (P) is stated as follows:

min  sup_{y∈Y} φ(x, y)
subject to  g_j(x) ≤ 0,  j = 1, ..., m,     (P)

where Y is a compact subset of R^p, φ(·, ·) : R^n × R^p → R, and g_j(·) : R^n → R (j ∈ M). Let E_P be the set of feasible solutions of problem (P); in other words, E_P = {x ∈ R^n | g_j(x) ≤ 0, j ∈ M}. For convenience, let us define the following sets for every x ∈ E:

J(x) = {j ∈ M | g_j(x) = 0},
Y(x) = {y ∈ Y | φ(x, y) = sup_{z∈Y} φ(x, z)}.     (1)
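As a small illustration of these sets (this example is ours and does not appear in the original article), take n = p = 1, Y = [0, 1], φ(x, y) = xy, and a single constraint g_1(x) = x − 1, so that E = (−∞, 1]. Then

Y(x) = {1} for x > 0,  Y(0) = [0, 1],  Y(x) = {0} for x < 0,

and J(x) = {1} exactly when x = 1, while J(x) = ∅ for every other feasible x.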
The rest of the paper is organized as follows. In Section 2, we present concepts concerning nondifferentiable vector (G, β)-invexity. In Section 3, we present not only G-sufficient but also G-necessary optimality conditions for problem (P). Using the G-necessary optimality conditions and the (G, β)-invexity concept, the dual problem (DI) is formulated for the primal one (P), and duality results between the two problems are presented in Section 4.

2. Notations and Preliminaries

In this section, we provide some definitions and results that will be used in the sequel. The following convention for equalities and inequalities will be used throughout the paper. For any x = (x_1, x_2, ..., x_n)^T and y = (y_1, y_2, ..., y_n)^T, we define the following:

x > y if and only if x_i > y_i for i = 1, 2, ..., n;
x ≧ y if and only if x_i ≥ y_i for i = 1, 2, ..., n;
x ⩾ y if and only if x_i ≥ y_i for i = 1, 2, ..., n, but x ≠ y;
x ≯ y is the negation of x > y;
x ⩾̸ y is the negation of x ⩾ y.     (2)

Let R_+^n = {x ∈ R^n | x ≧ 0}, let Ṙ_+^n = {x ∈ R^n | x > 0}, and let X be a subset of R^n. For convenience, denote Q := {1, 2, ..., q}, Q* := {1, 2, ..., q*}, K := {1, 2, ..., k}, and M := {1, 2, ..., m}. Further, we recall some definitions and a lemma.

Definition 1 (see [19]). Let d ∈ R^n, let X be a nonempty subset of R^n, and let f : X → R. If

f^0(x; d) := lim sup_{y→x, μ↓0} (1/μ)(f(y + μd) − f(y))     (3)

exists, then f^0(x; d) is called the Clarke derivative of f at x in the direction d. If this limit superior exists for all d ∈ R^n, then f is called Clarke differentiable at x. The set

∂f(x) = {ζ | f^0(x; d) ≥ ⟨ζ, d⟩, ∀d ∈ R^n}     (4)

is called the Clarke subdifferential of f at x.

Note that if a given function f is locally Lipschitz, then the Clarke subdifferential ∂f(x) exists.

Lemma 2 (see [18]). Let ψ be a real-valued Lipschitz continuous function defined on X, and denote the image of X under ψ by I_ψ(X); let φ : I_ψ(X) → R be a differentiable function such that φ'(γ) is continuous on I_ψ(X) and φ'(γ) ≥ 0 for each γ ∈ I_ψ(X). Then the chain rule

(φ ∘ ψ)^0(x, d) = φ'(ψ(x)) ψ^0(x, d)     (5)

holds for each d ∈ R^n. Therefore,

∂(φ ∘ ψ)(x) = φ'(ψ(x)) ∂ψ(x).     (6)

Definition 3. Let f = (f_1, ..., f_k) be a vector-valued locally Lipschitz function defined on a nonempty set X ⊂ R^n. Consider the functions η : X × X → R^n, G_{f_i} : I_{f_i}(X) → R, and β_i^f : X × X → R_+ for i ∈ K. Moreover, G_{f_i} is strictly increasing on its domain I_{f_i}(X) for each i ∈ K. If

G_{f_i} ∘ f_i(x) − G_{f_i} ∘ f_i(u) ≥ (>) β_i^f(x, u) G'_{f_i}(f_i(u)) ⟨ζ_i, η(x, u)⟩,  ∀ζ_i ∈ ∂f_i(u),     (7)

holds for all x ∈ X (x ≠ u) and i ∈ K, then f is said to be (strictly) nondifferentiable vector (G_f, β_f)-invex at u on X (with respect to η) (or, shortly, (G_f, β_f)-invex at u on X), where G_f = (G_{f_1}, ..., G_{f_k}) and β_f := (β_1^f, β_2^f, ..., β_k^f). If f is (strictly) nondifferentiable vector (G_f, β_f)-invex at u on X (with respect to η) for all u ∈ X, then f is (strictly) nondifferentiable vector (G_f, β_f)-invex on X (with respect to η).
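The following small illustration of Definition 1 and Lemma 2 is ours and is not part of the original text. Take X = R, ψ(x) = |x|, and φ(t) = e^t, so that φ' is continuous and nonnegative on I_ψ(X) = [0, +∞). The Clarke subdifferential of ψ at the origin is ∂ψ(0) = [−1, 1], and Lemma 2 gives

∂(φ ∘ ψ)(0) = φ'(ψ(0)) ∂ψ(0) = e^0 [−1, 1] = [−1, 1].

Note that φ ∘ ψ = e^{|·|} fails to be differentiable at 0 even though φ is smooth; this is exactly the situation, mentioned in the Introduction, that motivates the nonsmooth (G, β)-invexity of Definition 3.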
3. Optimality Conditions

In this section, we first establish the G-necessary optimality conditions for problem (P) involving functions which are locally Lipschitz with respect to the variable x. For this purpose, we will need some additional assumptions on problem (P).

Condition 4. Assume the following:
(a) the set Y is compact;
(b) φ(x, y) and ∂_x φ(x, y) are upper semicontinuous at (x, y);
(c) φ(x, y) is locally Lipschitz in x, and this Lipschitz continuity is uniform for y in Y;
(d) φ(x, y) is regular in x; that is, φ_x^∘(x, y; ·) = φ_x'(x, y; ·);
(e) g_j, j ∈ M, are regular and locally Lipschitz at x_0.
Condition 5. For each η = (η_1, ..., η_m) ∈ R^m satisfying the conditions

η_j = 0, ∀j ∈ M \ J(x*),  η_j ≥ 0, ∀j ∈ J(x*),     (8)

the following implication holds:

z_j* ∈ ∂g_j(x*) (∀j ∈ M),  Σ_{j=1}^{m} η_j z_j* = 0  ⟹  η_j = 0, j ∈ M.     (9)

We will also use the following auxiliary programming problem (G-P):

min  sup_{y∈Y} G_φ ∘ φ(x, y)
s.t.  G_g ∘ g(x) := (G_{g_1} ∘ g_1(x), G_{g_2} ∘ g_2(x), ..., G_{g_m} ∘ g_m(x)) ≦ G_g(0),     (G-P)

where G_g(0) := (G_{g_1}(0), G_{g_2}(0), ..., G_{g_m}(0)). We denote E_{G-P} = {x ∈ R^n | G_g ∘ g(x) ≦ G_g(0)} and J'(x) := {j ∈ M : G_{g_j} ∘ g_j(x) = G_{g_j}(0)}. If the function G_{g_j} is strictly increasing on I_{g_j}(X) for each j ∈ M, then E_P = E_{G-P} and J(x) = J'(x). So, we denote the set of all feasible solutions and the set of active constraint indices for either (P) or (G-P) by E and J(x), respectively.

The following necessary optimality conditions are presented in [20].

Theorem 6 (necessary optimality conditions). Let x* be an optimal solution of (P). One also assumes that Conditions 4 and 5 hold. Then there exist a positive integer q* and vectors y_i ∈ Y(x*) together with scalars λ_i* (i ∈ Q*) and μ_j* (j ∈ M) such that

0 ∈ Σ_{i=1}^{q*} λ_i* ∂_x φ(x*, y_i) + Σ_{j=1}^{m} μ_j* ∂g_j(x*),
μ_j* g_j(x*) = 0,  μ_j* ≥ 0,  j ∈ M,
Σ_{i=1}^{q*} λ_i* = 1,  λ_i* ≥ 0,  i ∈ Q*.     (10)

Furthermore, if α is the number of nonzero λ_i* and β is the number of nonzero μ_j*, then

1 ≤ α + β ≤ n + 1.     (11)

Making use of Theorem 6, we can derive the following G-necessary conditions theorem for problem (P), namely Theorem 7; here we require the scalars λ_i* (i = 1, ..., q*) to be positive.

Theorem 7 (G-necessary optimality conditions). Let problem (P) satisfy Conditions 4 and 5, and let x* be an optimal solution of problem (P). Assume that G_φ is both continuously differentiable and strictly increasing on I_φ(X, Y). If G_{g_j} is both continuously differentiable and strictly increasing on I_{g_j}(X) with G'_{g_j}(g_j(x*)) > 0 for each j ∈ M, then there exist a positive integer q* (1 ≤ q* ≤ n + 1) and vectors y_i ∈ Y(x*) together with scalars λ_i* (i ∈ Q*) and μ_j* (j ∈ M) such that

0 ∈ Σ_{i=1}^{q*} λ_i* G'_φ(φ(x*, y_i)) ∂_x φ(x*, y_i) + Σ_{j=1}^{m} μ_j* G'_{g_j}(g_j(x*)) ∂g_j(x*),     (12)
μ_j* (G_{g_j} ∘ g_j(x*) − G_{g_j}(0)) = 0,  μ_j* ≥ 0,  j ∈ M,     (13)
Σ_{i=1}^{q*} λ_i* = 1,  λ_i* > 0,  i ∈ Q*.     (14)

Proof. Since x* is an optimal solution to problem (P), it is easy to see that x* is an optimal solution to problem (G-P). Moreover, it is easy to check that problem (G-P) satisfies the assumptions of Theorem 6. Therefore, we can choose y_i ∈ Y(x*), i ∈ Q*, with q* ≤ n + 1, such that they satisfy Theorem 6.
Now, for each y_i, we consider the following scalar programming problem (G-P_{y_i}):

min  G_φ ∘ φ(x, y_i)
subject to  G_{g_j} ∘ g_j(x) ≤ G_{g_j}(0),  j ∈ M.     (G-P_{y_i})

It is easy to see that x* is an optimal solution to problem (G-P_{y_i}). Thus, there exist λ_i > 0 and μ_j^i ≥ 0 (j ∈ M) such that

0 ∈ λ_i ∂_x(G_φ ∘ φ)(x*, y_i) + Σ_{j=1}^{m} μ_j^i ∂(G_{g_j} ∘ g_j)(x*),     (15)
μ_j^i (G_{g_j} ∘ g_j(x*) − G_{g_j}(0)) = 0,  j ∈ M.     (16)

So, we obtain from (15) that

0 ∈ Σ_{i=1}^{q*} λ_i ∂_x(G_φ ∘ φ)(x*, y_i) + Σ_{j=1}^{m} (Σ_{i=1}^{q*} μ_j^i) ∂(G_{g_j} ∘ g_j)(x*),     (17)

or

0 ∈ Σ_{i=1}^{q*} λ_i* ∂_x(G_φ ∘ φ)(x*, y_i) + Σ_{j=1}^{m} μ_j* ∂(G_{g_j} ∘ g_j)(x*),     (18)
where λ_i* = λ_i / Σ_{j=1}^{q*} λ_j and μ_j* = Σ_{i=1}^{q*} μ_j^i / Σ_{i=1}^{q*} λ_i. By Lemma 2, we have

∂_x(G_φ ∘ φ)(x*, y_i) = G'_φ(φ(x*, y_i)) ∂_x φ(x*, y_i),  i ∈ Q*,
∂(G_{g_j} ∘ g_j)(x*) = G'_{g_j}(g_j(x*)) ∂g_j(x*),  j ∈ M.     (19)

Now, from (18) and (19), we can deduce the required results.

Next, we derive G-sufficient optimality conditions for problem (P) under the assumption of the (G, β)-invexity proposed in [18].

Theorem 8 (G-sufficient optimality conditions). Let (x*, μ*, ν*, q*, λ*, y) satisfy conditions (12)–(14), where ν* = φ(x*, y_1) = ··· = φ(x*, y_{q*}); let G_φ be both continuously differentiable and strictly increasing on I_φ(X, Y); let G_{g_j} be both continuously differentiable and strictly increasing on I_{g_j}(X) for each j ∈ M. Assume that φ(·, y_i) is (G_φ, β_i^φ)-invex at x* on E for each i ∈ Q* and that g_j is (G_{g_j}, β_j^g)-invex at x* on E for each j ∈ M. Then x* is an optimal solution to (P).

Proof. Suppose, contrary to the result, that x* is not an optimal solution for problem (P). Hence, there exists x_0 ∈ E such that

sup_{y∈Y} φ(x_0, y) < φ(x*, y_i),  i ∈ Q*.     (20)

By the monotonicity of G_φ, we have

G_φ ∘ φ(x_0, y_i) < G_φ ∘ φ(x*, y_i),  i ∈ Q*.     (21)

Employing (13), (14), and the fact that

G_{g_j} ∘ g_j(x_0) ≤ G_{g_j}(0) = G_{g_j} ∘ g_j(x*),  j ∈ J(x*),     (22)

we can write the following statement:

Σ_{i=1}^{q*} λ_i* (G_φ ∘ φ(x_0, y_i) − G_φ ∘ φ(x*, y_i)) + Σ_{j=1}^{m} μ_j* (G_{g_j} ∘ g_j(x_0) − G_{g_j} ∘ g_j(x*)) < 0.     (23)

By the generalized invexity assumptions on φ(·, y_i) and g_j, we have

G_φ ∘ φ(x_0, y_i) − G_φ ∘ φ(x*, y_i) ≥ (>) β_i^φ(x_0, x*) G'_φ(φ(x*, y_i)) ⟨ξ_i^φ, η(x_0, x*)⟩,  ∀ξ_i^φ ∈ ∂_x φ(x*, y_i),  i ∈ Q*,
G_{g_j} ∘ g_j(x_0) − G_{g_j} ∘ g_j(x*) ≥ (>) β_j^g(x_0, x*) G'_{g_j}(g_j(x*)) ⟨ξ_j^g, η(x_0, x*)⟩,  ∀ξ_j^g ∈ ∂g_j(x*),  j ∈ M.     (24)

Employing (24) in (23), we have

Σ_{i=1}^{q*} λ_i* G'_φ(φ(x*, y_i)) ⟨ξ_i^φ, η(x_0, x*)⟩ + Σ_{j=1}^{m} μ_j* G'_{g_j}(g_j(x*)) ⟨ξ_j^g, η(x_0, x*)⟩ < 0,     (25)

or

⟨Σ_{i=1}^{q*} λ_i* G'_φ(φ(x*, y_i)) ξ_i^φ + Σ_{j=1}^{m} μ_j* G'_{g_j}(g_j(x*)) ξ_j^g, η(x_0, x*)⟩ < 0,     (26)

which implies that

0 ∉ Σ_{i=1}^{q*} λ_i* G'_φ(φ(x*, y_i)) ∂_x φ(x*, y_i) + Σ_{j=1}^{m} μ_j* G'_{g_j}(g_j(x*)) ∂g_j(x*).     (27)

This is a contradiction to condition (12).

Example 9. Let Y = [0, 1]. Define

φ(x, y) = e^{x+2y} if y ≥ x,  φ(x, y) = e^{2x+y} if y < x,   g(x) = |x − 3/2| − 1/2.     (28)

Then φ(·, y) is (log, 1)-invex at x = 1 for each y ∈ Y, g is 1-invex at x = 1, and

f(x) := sup_{y∈Y} φ(x, y) = e^{x+2} if x ≤ 1,  f(x) = e^{2x+1} if x > 1.     (29)

Since

log(φ(x, y)) = x + 2y if y ≥ x,  log(φ(x, y)) = 2x + y if y < x,     (30)

then

∂_x log(φ(x, y)) = {1} if y > x,  [1, 2] if y = x,  {2} if y < x.     (31)

Consider x_0 = 1. Since Y(x_0) = {1}, we can assume that q = 1. Therefore,

0 ∈ λ G'_φ(φ(1, 1)) ∂_x φ(1, 1) + μ ∂g(1) = λ [1, 2] − μ,     (32)

where λ = μ = 1. Now, from Theorem 8, we can say that x_0 = 1 is an optimal solution to (P).
4. Duality
Making use of the optimality conditions of the preceding section, we present the dual problem (DI) to the primal one (P) and establish G-weak, G-strong, and G-strict converse duality theorems. For convenience, we use the following notations:

K(x) = {(q, λ, y) ∈ N × Ṙ_+^q × R^{pq} | 1 ≤ q ≤ n + 1, λ = (λ_1, ..., λ_q) ∈ Ṙ_+^q with Σ_{i=1}^{q} λ_i = 1, and y = (y_1, ..., y_q) with y_i ∈ Y(x), i = 1, ..., q}.     (33)

H_1(q, λ, y) denotes the set of all triplets (z, μ, ν) ∈ R^n × R_+^m × R_+ satisfying

0 ∈ Σ_{i=1}^{q} λ_i G'_φ(φ(z, y_i)) ∂_z φ(z, y_i) + Σ_{j=1}^{m} μ_j G'_{g_j}(g_j(z)) ∂g_j(z),     (34)
φ(z, y_i) ≥ ν,  i = 1, 2, ..., q,     (35)
μ_j g_j(z) ≥ 0,  j ∈ M,     (36)
(q, λ, y) ∈ K(z).     (37)

Our dual problem (DI) can be stated as follows:

max_{(q,λ,y)∈K(z)}  sup_{(z,μ,ν)∈H_1(q,λ,y)} ν.     (DI)

Note that if H_1(q, λ, y) is empty for some triplet (q, λ, y) ∈ K(z), then we define sup_{(z,μ,ν)∈H_1(q,λ,y)} ν = −∞.
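To make the dual construction concrete, here is a small worked instance (ours, not part of the original article), built from the data of Example 9 with G_φ = log and with G_g taken to be the identity. Let z = 1, q = 1, λ = (1), y = (y_1) = (1), μ = (1), and ν = φ(1, 1) = e^3. Then

(34): 0 ∈ λ_1 G'_φ(φ(1, 1)) ∂_z φ(1, 1) + μ_1 G'_g(g(1)) ∂g(1) = [1, 2] + {−1} = [0, 1],
(35): φ(1, 1) = e^3 ≥ ν,   (36): μ_1 g(1) = 0 ≥ 0,   (37): (1, λ, y) ∈ K(1) since Y(1) = {1}.

Hence (z, μ, ν) ∈ H_1(1, λ, y), and the dual value ν = e^3 equals the optimal value sup_{y∈Y} φ(1, y) = e^3 of (P) at the point x_0 = 1 found in Example 9, as the strong duality theorem below (Theorem 11) suggests.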
Theorem 10 (G-weak duality). Let x and (z, μ, ν, q, λ, y) be (P)-feasible and (DI)-feasible, respectively; let G_φ be both continuously differentiable and strictly increasing on I_φ(X, Y); let G_{g_j} be both continuously differentiable and strictly increasing on I_{g_j}(X) for each j ∈ M. If φ(·, y_i) is (G_φ, β_i^φ)-invex at z for each i ∈ Q and g_j is (G_{g_j}, β_j^g)-invex at z for each j ∈ M, then

sup_{y∈Y} φ(x, y) ⩾ ν.     (38)

Proof. Suppose to the contrary that sup_{y∈Y} φ(x, y) < ν. Therefore, we obtain

φ(x, y) < ν ≤ φ(z, y_i),  y_i ∈ Y(z),  ∀y ∈ Y,  i ∈ Q.     (39)

Thus, we obtain from the monotonicity assumption on G_φ that

G_φ ∘ φ(x, y_i) < G_φ ∘ φ(z, y_i),  y_i ∈ Y(z),  i ∈ Q.     (40)

Again, we obtain from the monotonicity assumption on G_{g_j} and the fact that

g_j(x) ≤ 0,  j ∈ M,     (41)

that

G_{g_j} ∘ g_j(x) ≤ G_{g_j} ∘ g_j(z),  j ∈ M.     (42)

Hence,

Σ_{i=1}^{q} λ_i (G_φ ∘ φ(x, y_i) − G_φ ∘ φ(z, y_i)) / β_i^φ(x, z) + Σ_{j=1}^{m} μ_j (G_{g_j} ∘ g_j(x) − G_{g_j} ∘ g_j(z)) / β_j^g(x, z) < 0.     (43)

Similarly to the proof of Theorem 8, by (43) and the generalized invexity assumptions on φ(·, y_i) and g_j, we have

⟨Σ_{i=1}^{q} λ_i G'_φ(φ(z, y_i)) ξ_i + Σ_{j=1}^{m} μ_j G'_{g_j}(g_j(z)) ξ_j, η(x, z)⟩ < 0,     (44)

from which it follows that

0 ∉ Σ_{i=1}^{q} λ_i G'_φ(φ(z, y_i)) ∂_x φ(z, y_i) + Σ_{j=1}^{m} μ_j G'_{g_j}(g_j(z)) ∂g_j(z).     (45)

Thus, we have a contradiction to (34). So sup_{y∈Y} φ(x, y) ⩾ ν.

Theorem 11 (G-strong duality). Let problem (P) satisfy Conditions 4 and 5, and let x* be an optimal solution of problem (P). Suppose that G_φ is both continuously differentiable and strictly increasing on I_φ(X, Y) and that G_{g_j} is both continuously differentiable and strictly increasing on I_{g_j}(X) with G'_{g_j}(g_j(x*)) > 0 for each j ∈ M. If the hypothesis of Theorem 10 holds for all (DI)-feasible points (z, μ, ν, q, λ, y), then there exist (q*, λ*, y*) ∈ K(z) and (x*, μ*, ν*) ∈ H_1(q*, λ*, y*) such that (q*, λ*, y*, x*, μ*, ν*) is a (DI)-optimal solution, and the two problems (P) and (DI) have the same optimal values.

Proof. By Theorem 7, there exist ν* = φ(x*, y_i*), i = 1, ..., q*, satisfying the requirements specified in the theorem, such that (q*, λ*, y*, x*, μ*, ν*) is a (DI)-feasible solution; the optimality of this feasible solution for (DI) then follows from Theorem 10.

Theorem 12 (G-strict converse duality). Let x and (z, μ, ν, q, λ, y) be optimal solutions for (P) and (DI), respectively. Suppose that G_φ is both continuously differentiable and strictly increasing on I_φ(X, Y) and that G_{g_j} is both continuously differentiable and strictly increasing on I_{g_j}(X) for each j ∈ M. If φ(·, y_i) is (G_φ, β_i^φ)-invex at z for each i ∈ Q and g_j is (G_{g_j}, β_j^g)-invex at z for each j ∈ M, then x = z; that is, z is a (P)-optimal solution and sup_{y∈Y} φ(x, y) = ν.
Proof. Suppose to the contrary that x ≠ z. Similarly to the arguments in the proof of Theorem 8, there exist ξ_i^φ ∈ ∂_x φ(z, y_i) and ξ_j^g ∈ ∂g_j(z) such that

0 = ⟨Σ_{i=1}^{q} λ_i G'_φ(φ(z, y_i)) ξ_i^φ + Σ_{j=1}^{m} μ_j G'_{g_j}(g_j(z)) ξ_j^g, η(x, z)⟩
  < Σ_{i=1}^{q} λ_i (G_φ ∘ φ(x, y_i) − G_φ ∘ φ(z, y_i)) / β_i^φ(x, z) + Σ_{j=1}^{m} μ_j (G_{g_j} ∘ g_j(x) − G_{g_j} ∘ g_j(z)) / β_j^g(x, z),
Σ_{j=1}^{m} μ_j (G_{g_j} ∘ g_j(x) − G_{g_j} ∘ g_j(z)) / β_j^g(x, z) ≤ 0.     (46)

Therefore,

Σ_{i=1}^{q} λ_i (G_φ ∘ φ(x, y_i) − G_φ ∘ φ(z, y_i)) / β_i^φ(x, z) > 0.     (47)

From the above inequality, we can conclude that there exists i_0 ∈ Q such that

G_φ ∘ φ(x, y_{i_0}) − G_φ ∘ φ(z, y_{i_0}) > 0,     (48)

or

φ(x, y_{i_0}) > φ(z, y_{i_0}).     (49)

It follows that

sup_{y∈Y} φ(x, y) ≥ φ(x, y_{i_0}) > φ(z, y_{i_0}) ≥ ν.     (50)

On the other hand, we know from Theorem 10 that

sup_{y∈Y} φ(x, y) = ν.     (51)

This contradicts (50).

5. Conclusion

In this paper, we have discussed applications of (G, β)-invexity to a class of nonsmooth minimax programming problems (P). First, we established G-necessary optimality conditions for problem (P). Under nondifferentiable (G, β)-invexity assumptions, we have also derived the sufficiency of the G-necessary optimality conditions for the same problem. Further, we have constructed a dual model (DI) and derived G-duality results between problems (P) and (DI). Note that many researchers are interested in minimax programming under generalized invexity assumptions; see [1, 10, 11, 14–17]. However, we have not found results for minimax programming problems under G-invexity or its extensions. Hence, this work extends the applications of G-invexity to generalized minimax programming as well as to the nonsmooth case.

Acknowledgments

The authors are grateful to the referee for his valuable suggestions, which helped to improve this paper in its present form. This research is supported by the Science Foundation of Hanshan Normal University (LT200801).

References
[1] T. Antczak, “Generalized fractional minimax programming with B-(p, r)-invexity,” Computers & Mathematics with Applications, vol. 56, no. 6, pp. 1505–1525, 2008.
[2] M. A. Hanson, “On sufficiency of the Kuhn-Tucker conditions,” Journal of Mathematical Analysis and Applications, vol. 80, no. 2, pp. 545–550, 1981.
[3] A. Ben-Israel and B. Mond, “What is invexity?” Australian Mathematical Society Journal B, vol. 28, no. 1, pp. 1–9, 1986.
[4] T. Antczak, “New optimality conditions and duality results of G type in differentiable mathematical programming,” Nonlinear Analysis: Theory, Methods & Applications, vol. 66, no. 7, pp. 1617–1632, 2007.
[5] T. Antczak, “On G-invex multiobjective programming. Part I. Optimality,” Journal of Global Optimization, vol. 43, no. 1, pp. 97–109, 2009.
[6] T. Antczak, “On G-invex multiobjective programming. Part II. Duality,” Journal of Global Optimization, vol. 43, no. 1, pp. 111–140, 2009.
[7] T. Antczak, “(p, r)-invex sets and functions,” Journal of Mathematical Analysis and Applications, vol. 263, no. 2, pp. 355–379, 2001.
[8] V. Preda, “On efficiency and duality for multiobjective programs,” Journal of Mathematical Analysis and Applications, vol. 166, no. 2, pp. 365–377, 1992.
[9] Z. A. Liang, H. X. Huang, and P. M. Pardalos, “Optimality conditions and duality for a class of nonlinear fractional programming problems,” Journal of Optimization Theory and Applications, vol. 110, no. 3, pp. 611–619, 2001.
[10] D. H. Yuan, X. L. Liu, A. Chinchuluun, and P. M. Pardalos, “Nondifferentiable minimax fractional programming problems with (C, α, ρ, d)-convexity,” Journal of Optimization Theory and Applications, vol. 129, no. 1, pp. 185–199, 2006.
[11] M. V. Ştefănescu and A. Ştefănescu, “Minimax programming under new invexity assumptions,” Revue Roumaine de Mathématiques Pures et Appliquées, vol. 52, no. 3, pp. 367–376, 2007.
[12] T. Antczak, “The notion of V-r-invexity in differentiable multiobjective programming,” Journal of Applied Analysis, vol. 11, no. 1, pp. 63–79, 2005.
[13] H. C. Lai and J. C. Lee, “On duality theorems for a nondifferentiable minimax fractional programming,” Journal of Computational and Applied Mathematics, vol. 146, no. 1, pp. 115–126, 2002.
[14] T. Antczak, “Minimax programming under (p, r)-invexity,” European Journal of Operational Research, vol. 158, no. 1, pp. 1–19, 2004.
[15] I. Ahmad, S. Gupta, N. Kailey, and R. P. Agarwal, “Duality in nondifferentiable minimax fractional programming with B-(p, r)-invexity,” Journal of Inequalities and Applications, vol. 2011, article 75, 2011.
[16] T. Antczak, “Nonsmooth minimax programming under locally Lipschitz (Φ, ρ)-invexity,” Applied Mathematics and Computation, vol. 217, no. 23, pp. 9606–9624, 2011.
[17] S. K. Mishra and K. Shukla, “Nonsmooth minimax programming problems with V-r-invex functions,” Optimization, vol. 59, no. 1, pp. 95–103, 2010.
[18] D. H. Yuan, X. L. Liu, S. Y. Yang, and G. M. Lai, “Nondifferential mathematical programming involving (G, β)-invexity,” Journal of Inequalities and Applications, vol. 2012, article 256, 2012.
[19] F. H. Clarke, Optimization and Nonsmooth Analysis, John Wiley
& Sons, New York, NY, USA, 1983.
[20] M. Studniarski and A. W. A. Taha, “A characterization of strict
local minimizers of order one for nonsmooth static minmax
problems,” Journal of Mathematical Analysis and Applications,
vol. 259, no. 2, pp. 368–376, 2001.