Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 383692, 8 pages
http://dx.doi.org/10.1155/2013/383692
Research Article
Univex Interval-Valued Mapping with Differentiability and Its
Application in Nonlinear Programming
Lifeng Li,1,2 Sanyang Liu,1 and Jianke Zhang2
1 Department of Mathematics, School of Science, Xidian University, Xi’an 710126, China
2 Department of Mathematics, School of Science, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
Correspondence should be addressed to Sanyang Liu; liusanyang@126.com
Received 25 November 2012; Accepted 26 May 2013
Academic Editor: Lotfollah Najjar
Copyright © 2013 Lifeng Li et al. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Interval-valued univex functions are introduced for differentiable programming problems. Optimality and duality results are
derived for a class of generalized convex optimization problems with interval-valued univex functions.
1. Introduction
Incorporating uncertainty into optimization problems is an interesting research topic. The uncertainty may be interpreted as randomness, fuzziness, or interval-valued fuzziness. Optimization problems involving randomness are categorized as stochastic optimization problems, and those involving imprecision (fuzziness) are categorized as fuzzy optimization problems. To better match real situations, interval-valued optimization problems provide an alternative way of taking uncertainty into account; that is, the coefficients of interval-valued optimization problems are assumed to be closed intervals. Many approaches to interval-valued optimization problems have been explored in considerable detail; see, for example, [1–3]. Recently, Wu extended the concept of convexity for real-valued functions to LU-convexity for interval-valued functions and then established Karush-Kuhn-Tucker conditions [4–6] for optimization problems with interval-valued objective functions under the assumption of LU-convexity. Similar to the concept of a nondominated solution in vector optimization problems, Wu also proposed a solution concept for optimization problems with interval-valued objective functions based on a partial ordering on the set of all closed intervals; interval-valued Wolfe duality theory [7] and Lagrangian duality theory [8] for interval-valued optimization problems were then developed. More recently, Wu [9] studied duality theory for interval-valued linear programming problems.
In 1981, Hanson [10] introduced the concept of invexity and established Karush-Kuhn-Tucker type sufficient optimality conditions for a nonlinear programming problem. In [11], Kaul et al. considered a differentiable multiobjective programming problem involving generalized type I functions; they investigated Karush-Kuhn-Tucker type necessary and sufficient conditions and obtained duality results under generalized type I functions. The class of B-vex functions was introduced by Bector and Singh [12] as a generalization of convex functions, and duality results were established for vector-valued B-invex programming in [13]. Bector et al. [14] introduced the concept of univex functions as a generalization of the B-vex functions introduced by Bector et al. [15]. Combining the concepts of type I and univex functions,
Rueda et al. [16] gave optimality conditions and duality results
for several mathematical programming problems. Aghezzaf
and Hachimi [17] introduced classes of generalized type I
functions for a differentiable multiobjective programming
problem and derived some Mond-Weir type duality results
under the above generalized type I assumptions. Gulati et al. [18] introduced the concept of (F, α, ρ, d)-V-type I functions and also studied sufficient optimality conditions and duality for multiobjective programming problems.
This paper aims at extending the Karush-Kuhn-Tucker optimality conditions to nonconvex optimization problems with interval-valued functions. First, we extend the concept of
univexity for real-valued functions to interval-valued functions and present the concept of interval-valued univex functions. Then, Karush-Kuhn-Tucker optimality conditions are proposed for optimization problems with interval-valued functions under the assumption of interval-valued univexity.
2. Preliminaries
We denote by I the class of all closed intervals in R. A = [a^L, a^U] ∈ I denotes a closed interval, where a^L and a^U are the lower and upper bounds of A, respectively. For every a ∈ R, we write a = [a, a].
Definition 1. Let A = [a^L, a^U] and B = [b^L, b^U] be in I; one has

    (i) A + B = {a + b : a ∈ A and b ∈ B} = [a^L + b^L, a^U + b^U];
    (ii) −A = {−a : a ∈ A} = [−a^U, −a^L];
    (iii) A × B = {ab : a ∈ A and b ∈ B} = [min ab, max ab], where min ab = min{a^L b^L, a^L b^U, a^U b^L, a^U b^U} and max ab = max{a^L b^L, a^L b^U, a^U b^L, a^U b^U}.
Then, we can see that

    A − B = A + (−B) = [a^L − b^U, a^U − b^L],

    kA = {ka : a ∈ A} = [ka^L, ka^U] if k ≥ 0,  and  kA = [ka^U, ka^L] if k < 0,    (1)

where k is a real number.
Using the Hausdorff metric, Neumaier [19] defined the distance between two closed intervals A and B as

    d_H(A, B) = max{|a^L − b^L|, |a^U − b^U|}.    (2)
Definition 2. Let A = [a^L, a^U] and B = [b^L, b^U] be two closed intervals in R. One writes A ⪯ B if and only if a^L ≤ b^L and a^U ≤ b^U, and A ≺ B if and only if A ⪯ B and A ≠ B, that is, one of the following (a1), (a2), (a3) is satisfied:

    (a1) a^L < b^L and a^U ≤ b^U;
    (a2) a^L ≤ b^L and a^U < b^U;
    (a3) a^L < b^L and a^U < b^U.
Definition 3 (see [20]). Let A = [a^L, a^U] and B = [b^L, b^U] be two closed intervals. The gH-difference of A and B is defined by

    [a^L, a^U] ⊖_g [b^L, b^U] = [min(a^L − b^L, a^U − b^U), max(a^L − b^L, a^U − b^U)].    (3)

For example, [1, 3] ⊖_g [0, 3] = [0, 1] and [0, 3] ⊖_g [1, 3] = [−1, 0]. Moreover, for degenerate intervals, [a, a] ⊖_g [b, b] = [a − b, a − b] = a − b.
Proposition 4. (i) For every A, B ∈ I, A ⊖_g B always exists and A ⊖_g B ∈ I.
(ii) A ⊖_g B ⪯ 0 if and only if A ⪯ B.
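For concreteness, the interval arithmetic of Definition 1, the metric (2), the order of Definition 2, and the gH-difference (3) can be collected in a small Python class. This is an illustrative sketch of the preliminaries, not code from the paper; the class and method names are our own.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        """A closed interval [lo, hi] in R, i.e., an element of the class I."""
        lo: float
        hi: float

        def __add__(self, other):              # Definition 1(i)
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __neg__(self):                     # Definition 1(ii)
            return Interval(-self.hi, -self.lo)

        def __sub__(self, other):              # A - B = A + (-B), see (1)
            return self + (-other)

        def __mul__(self, other):              # Definition 1(iii)
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

        def scale(self, k):                    # kA in (1)
            return Interval(k * self.lo, k * self.hi) if k >= 0 else Interval(k * self.hi, k * self.lo)

        def hausdorff(self, other):            # d_H in (2)
            return max(abs(self.lo - other.lo), abs(self.hi - other.hi))

        def __le__(self, other):               # A ⪯ B in the sense of Definition 2
            return self.lo <= other.lo and self.hi <= other.hi

        def gh_sub(self, other):               # gH-difference (3)
            d_lo, d_hi = self.lo - other.lo, self.hi - other.hi
            return Interval(min(d_lo, d_hi), max(d_lo, d_hi))

    # The worked example after Definition 3: [1,3] ⊖_g [0,3] = [0,1] and [0,3] ⊖_g [1,3] = [-1,0].
    print(Interval(1, 3).gh_sub(Interval(0, 3)), Interval(0, 3).gh_sub(Interval(1, 3)))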
3. Interval-Valued Univex Functions
Definition 5 (interval-valued function). The function f : Ω → I is called an interval-valued function, where Ω ⊆ R^n. Then f(x) = f(x_1, ..., x_n) is a closed interval in R for each x ∈ Ω, and f(x) can also be written as f(x) = [f^L(x), f^U(x)], where f^L(x) and f^U(x) are two real-valued functions defined on Ω satisfying f^L(x) ≤ f^U(x) for every x ∈ Ω.

Definition 6 (continuity of an interval-valued function). The function f : Ω ⊆ R^n → I is said to be continuous at x ∈ Ω if both f^L(x) and f^U(x) are continuous at x.
The concept of the gH-derivative of a function f : (a, b) → I is given in [21].

Definition 7. Let x_0 ∈ (a, b) and h be such that x_0 + h ∈ (a, b); then the gH-derivative of a function f : (a, b) → I at x_0 is defined as

    f'(x_0) = lim_{h→0} (1/h)[f(x_0 + h) ⊖_g f(x_0)].    (4)

If f'(x_0) ∈ I exists, then we say that f is generalized Hukuhara differentiable (gH-differentiable, for short) at x_0.
Moreover, the following theorem was proved in [21].

Theorem 8. Let f : (a, b) → I be such that f(x) = [f^L(x), f^U(x)]. The function f(x) is gH-differentiable if and only if f^L(x) and f^U(x) are differentiable real-valued functions. Furthermore,

    f'(x) = [min{(f^L)'(x), (f^U)'(x)}, max{(f^L)'(x), (f^U)'(x)}].    (5)
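As a quick illustration of Theorem 8 (our own sketch, not from the paper), the gH-derivative can be approximated numerically from the endpoint functions; the helper name and the central-difference step are assumptions of this sketch.

    def gh_derivative(f_L, f_U, x, h=1e-6):
        """f'(x) per Theorem 8: [min((f^L)'(x), (f^U)'(x)), max((f^L)'(x), (f^U)'(x))]."""
        dL = (f_L(x + h) - f_L(x - h)) / (2 * h)   # central difference for (f^L)'(x)
        dU = (f_U(x + h) - f_U(x - h)) / (2 * h)   # central difference for (f^U)'(x)
        return (min(dL, dU), max(dL, dU))

    # For f(x) = [x^2, 2x^2 + 3]: at x = -1, (f^L)' = -2 and (f^U)' = -4, so f'(-1) = [-4, -2].
    print(gh_derivative(lambda x: x**2, lambda x: 2 * x**2 + 3, -1.0))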
Definition 9 (gradient of an interval-valued function). Let f(x) be an interval-valued function defined on Ω, where Ω is an open subset of R^n. Let D_{x_i} (i = 1, 2, ..., n) stand for partial differentiation with respect to the i-th variable x_i. Assume that f^L(x) and f^U(x) have continuous partial derivatives, so that D_{x_i} f^L(x) and D_{x_i} f^U(x) are continuous. For i = 1, 2, ..., n, define

    D_{x_i} f(x) = [min(D_{x_i} f^L(x), D_{x_i} f^U(x)), max(D_{x_i} f^L(x), D_{x_i} f^U(x))].    (6)

We then say that f(x) is differentiable at x, and we write

    ∇f(x) = (D_{x_1} f(x), D_{x_2} f(x), ..., D_{x_n} f(x))^t.    (7)

We call ∇f(x) the gradient of the interval-valued function f at x.
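Definition 9 translates directly into a numerical routine; the sketch below (our own, with assumed helper names and a central-difference step) computes the interval partials (6) and the gradient (7) for the function of Example 10 below.

    import numpy as np

    def interval_partial(f_L, f_U, x, i, h=1e-6):
        """D_{x_i} f(x) per (6), using central differences for f^L and f^U."""
        e = np.zeros_like(x)
        e[i] = h
        dL = (f_L(x + e) - f_L(x - e)) / (2 * h)
        dU = (f_U(x + e) - f_U(x - e)) / (2 * h)
        return (min(dL, dU), max(dL, dU))

    def interval_gradient(f_L, f_U, x):
        """The gradient (7): the vector of interval partials."""
        return tuple(interval_partial(f_L, f_U, x, i) for i in range(len(x)))

    # f(x) = [x1^2 + x2^2, 2x1^2 + 2x2^2 + 3] (Example 10); at x = (1, -2) this gives ([2, 4], [-8, -4]).
    f_L = lambda x: x[0]**2 + x[1]**2
    f_U = lambda x: 2 * x[0]**2 + 2 * x[1]**2 + 3
    print(interval_gradient(f_L, f_U, np.array([1.0, -2.0])))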
Example 10. Let f : R^2 → I be defined by f(x) = [x_1^2 + x_2^2, 2x_1^2 + 2x_2^2 + 3]. So f^L(x) = x_1^2 + x_2^2 and f^U(x) = 2x_1^2 + 2x_2^2 + 3, with D_{x_1} f^L(x) = 2x_1, D_{x_2} f^L(x) = 2x_2, D_{x_1} f^U(x) = 4x_1, and D_{x_2} f^U(x) = 4x_2. Thus,

    D_{x_1} f(x) = [2x_1, 4x_1] if x_1 ≥ 0,  and  D_{x_1} f(x) = [4x_1, 2x_1] if x_1 < 0;
    D_{x_2} f(x) = [2x_2, 4x_2] if x_2 ≥ 0,  and  D_{x_2} f(x) = [4x_2, 2x_2] if x_2 < 0.    (8)

Further,

    ∇^L f(x) = (2x_1, 2x_2)^t if x_1 ≥ 0, x_2 ≥ 0;  (2x_1, 4x_2)^t if x_1 ≥ 0, x_2 < 0;
               (4x_1, 2x_2)^t if x_1 < 0, x_2 ≥ 0;  (4x_1, 4x_2)^t if x_1 < 0, x_2 < 0,    (9)

    ∇^U f(x) = (4x_1, 4x_2)^t if x_1 ≥ 0, x_2 ≥ 0;  (4x_1, 2x_2)^t if x_1 ≥ 0, x_2 < 0;
               (2x_1, 4x_2)^t if x_1 < 0, x_2 ≥ 0;  (2x_1, 2x_2)^t if x_1 < 0, x_2 < 0.    (10)

Thus,

    ∇f(x) = ([2x_1, 4x_1], [2x_2, 4x_2])^t if x_1 ≥ 0, x_2 ≥ 0;  ([2x_1, 4x_1], [4x_2, 2x_2])^t if x_1 ≥ 0, x_2 < 0;
            ([4x_1, 2x_1], [2x_2, 4x_2])^t if x_1 < 0, x_2 ≥ 0;  ([4x_1, 2x_1], [4x_2, 2x_2])^t if x_1 < 0, x_2 < 0.    (11)

Remark 11. If f^L = f^U, then ∇f(x) for interval-valued functions reduces to the usual gradient ∇f(x) of a real-valued function f : Ω → R.

The concept of convexity plays an important role in optimization theory. In recent years, the concept of convexity has been generalized in several directions by using novel and innovative techniques. An important generalization of convex functions is the class of univex functions, introduced by Bector et al. [15].

Let K be a nonempty open set in R^n, and let f : K → R, η : K × K → R^n, Φ : R → R, and b : K × K × [0, 1] → R_+, b = b(x, y, λ). If the function f is differentiable, then b does not depend on λ; see [12] or [15].

Definition 12. A differentiable real-valued function f is said to be univex at y ∈ K with respect to η, Φ, b if, for all x ∈ K,

    b(x, y) Φ[f(x) − f(y)] ≥ η^t(x, y) ∇f(y).

Now let K be a nonempty open set in R^n, and let f : K → I be an interval-valued function, η : K × K → R^n, Φ : I → I, and b : K × K × [0, 1] → R_+, b = b(x, y, λ).

Definition 13 (interval-valued univex function). A differentiable interval-valued function f is said to be univex at y ∈ K with respect to η, Φ, b if, for all x ∈ K,

    b(x, y) Φ[f(x) ⊖_g f(y)] ⪰ η^t(x, y) ∇f(y).    (12)

Remark 14. (i) An interval-valued univex function is the extension of a univex function, obtained when f^L = f^U.
(ii) Φ : I → I can be deduced from a real-valued function φ : R → R by Φ(A) := {y ∈ R : φ(x) = y for some x ∈ A}.

Example 15. Consider the real-valued function φ_1 given by φ_1(x) = x + 1, x ∈ R; then we obtain Φ_1([a^L, a^U]) = [a^L + 1, a^U + 1]. If φ_2(x) = |x|, x ∈ R, then

    Φ_2([a^L, a^U]) = [a^L, a^U] if a^L ≥ 0;  [−a^U, −a^L] if a^U ≤ 0;  [0, max(−a^L, a^U)] if a^L < 0, a^U ≥ 0.    (13)

Example 16. Let f(x) = [x^2, 2x^2 + 3], x ∈ R, b = 1, η(x, y) = x − y, and Φ = Φ_2; then f(x) is univex with respect to b, η, and Φ.

Example 17. Let f(x) = [x^3, x^3 + 1], x ∈ R,

    b(x, y) = y^2 / (x − y) if x > y,  and  b(x, y) = 0 if x ≤ y;
    η(x, y) = x^2 + y^2 + xy if x ≥ y,  and  η(x, y) = x − y if x ≤ y.    (14)

Let Φ : I → I be defined by Φ([a^L, a^U]) = 3[a^L, a^U]; then f(x) is univex with respect to b, η, and Φ.
4. Optimality Criteria

Let f(x), g_1(x), ..., g_m(x) be differentiable interval-valued functions defined on a nonempty open set X ⊆ R^n. Throughout this paper, we consider the following primal problem (P):

    min  f(x)
    s.t. g_i(x) ⪯ 0,  i = 1, 2, ..., m.    (P)

Let P := {x ∈ X : g_i(x) ⪯ 0, i = 1, 2, ..., m}. We say that x* is an optimal solution of (P) if f(x) ⪰ f(x*) for all P-feasible x. In this section, we obtain sufficient conditions for a feasible solution x* to be an optimal solution of (P), in the form of the following theorems.
Theorem 18. Let x* be P-feasible. Suppose that

(i) there exist η, Φ_0, b_0, Φ_i, b_i, i = 1, 2, ..., m such that

    b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)] ⪰ η^t(x, x*) ∇f(x*),    (15)
    −b_i(x, x*) Φ_i[g_i(x*)] ⪰ η^t(x, x*) ∇g_i(x*)    (16)

for all feasible x;

(ii) there exists y* = (y_1, y_2, ..., y_m)^t ∈ R^m such that

    ∇f(x*) + Σ_{i=1}^m y_i ∇g_i(x*) = 0,    (17)
    y* ≥ 0.    (18)

Further, suppose that

    Φ_0(A) ⪰ 0 ⟹ A ⪰ 0,    (19)
    A ⪯ 0 ⟹ Φ_i(A) ⪰ 0,    (20)
    b_0(x, x*) > 0,  b_i(x, x*) > 0    (21)

for all feasible x. Then, x* is an optimal solution of (P).

Proof. Let x be P-feasible. Since x* is P-feasible,

    g_i(x*) ⪯ 0.    (22)

From (20), we conclude that

    Φ_i[g_i(x*)] ⪰ 0.    (23)

Thus,

    Φ_i^L[g_i(x*)] ≥ 0,  Φ_i^U[g_i(x*)] ≥ 0.    (24)

By (15) and Definition 2, we have

    {b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)]}^L ≥ {η^t(x, x*) ∇f(x*)}^L,
    {b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)]}^U ≥ {η^t(x, x*) ∇f(x*)}^U.    (25)

From (17),

    η^t(x, x*) ∇f(x*) + η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*) = 0.    (26)

It follows from Definition 2 that

    {η^t(x, x*) ∇f(x*)}^L + {η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^L = 0,
    {η^t(x, x*) ∇f(x*)}^U + {η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^U = 0.    (27)

It is equivalent to

    {η^t(x, x*) ∇f(x*)}^L = −{η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^L,
    {η^t(x, x*) ∇f(x*)}^U = −{η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^U.    (28)

From (16), we have

    {−b_i(x, x*) Φ_i[g_i(x*)]}^L ≥ {η^t(x, x*) ∇g_i(x*)}^L,
    {−b_i(x, x*) Φ_i[g_i(x*)]}^U ≥ {η^t(x, x*) ∇g_i(x*)}^U.    (29)

From Definition 1, we have

    −{b_i(x, x*) Φ_i[g_i(x*)]}^U ≥ {η^t(x, x*) ∇g_i(x*)}^L,
    −{b_i(x, x*) Φ_i[g_i(x*)]}^L ≥ {η^t(x, x*) ∇g_i(x*)}^U.    (30)

Thus,

    {b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)]}^L ≥ {η^t(x, x*) ∇f(x*)}^L
        = −{η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^L
        ≥ Σ_{i=1}^m y_i {b_i(x, x*) Φ_i[g_i(x*)]}^U
        ≥ Σ_{i=1}^m y_i {b_i(x, x*) Φ_i[g_i(x*)]}^L ≥ 0,    (31)

and

    {b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)]}^U ≥ {b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)]}^L ≥ 0.

So,

    b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)] ⪰ 0.    (32)

From (21), it follows that

    Φ_0[f(x) ⊖_g f(x*)] ⪰ 0.    (33)

By (19),

    f(x) ⊖_g f(x*) ⪰ 0.    (34)

From Proposition 4, it follows that

    f(x) ⪰ f(x*).    (35)

Therefore, x* is an optimal solution of (P).
Theorem 19. Let x* be P-feasible. Suppose that

(i) there exist η, Φ_0, b_0, Φ_i, b_i, i = 1, 2, ..., m such that

    η^t(x, x*) ∇f(x*) ⪰ 0 ⟹ b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)] ⪰ 0,    (36)
    −b_i(x, x*) Φ_i[g_i(x*)] ⪯ 0 ⟹ η^t(x, x*) ∇g_i(x*) ⪯ 0    (37)

for all feasible x;

(ii) there exists y* = (y_1, y_2, ..., y_m)^t ∈ R^m such that

    ∇f(x*) + Σ_{i=1}^m y_i ∇g_i(x*) = 0,    (38)
    y* ≥ 0.    (39)

Further, suppose that

    Φ_0(A) ⪰ 0 ⟹ A ⪰ 0,    (40)
    A ⪯ 0 ⟹ Φ_i(A) ⪰ 0,    (41)
    b_0(x, x*) > 0,  b_i(x, x*) > 0    (42)

for all feasible x. Then, x* is an optimal solution of (P).

Proof. Let x be P-feasible. Since x* is P-feasible, g_i(x*) ⪯ 0; from (41), we obtain that

    Φ_i[g_i(x*)] ⪰ 0.    (43)

Then, from (42) and (43), we have

    −b_i(x, x*) Φ_i[g_i(x*)] ⪯ 0.    (44)

By (37),

    η^t(x, x*) ∇g_i(x*) ⪯ 0.    (45)

Thus,

    −Σ_{i=1}^m y_i η^t(x, x*) ∇g_i(x*) ⪰ 0.    (46)

Then, we have

    −{Σ_{i=1}^m y_i η^t(x, x*) ∇g_i(x*)}^U = {−Σ_{i=1}^m y_i η^t(x, x*) ∇g_i(x*)}^L ≥ 0,
    −{Σ_{i=1}^m y_i η^t(x, x*) ∇g_i(x*)}^L = {−Σ_{i=1}^m y_i η^t(x, x*) ∇g_i(x*)}^U ≥ 0.    (47)

From (38) and Definition 2, it follows that

    {η^t(x, x*) ∇f(x*)}^L + {η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^L = 0,
    {η^t(x, x*) ∇f(x*)}^U + {η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^U = 0.    (48)

It is equivalent to

    {η^t(x, x*) ∇f(x*)}^L = −{η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^L,
    {η^t(x, x*) ∇f(x*)}^U = −{η^t(x, x*) Σ_{i=1}^m y_i ∇g_i(x*)}^U.    (49)

Therefore,

    {η^t(x, x*) ∇f(x*)}^L ≥ 0,  {η^t(x, x*) ∇f(x*)}^U ≥ 0.    (50)

From Definition 2, we obtain that

    η^t(x, x*) ∇f(x*) ⪰ 0.    (51)

By (36),

    b_0(x, x*) Φ_0[f(x) ⊖_g f(x*)] ⪰ 0.    (52)

Then, from (40) and (42), we have

    f(x) ⊖_g f(x*) ⪰ 0.    (53)

From Proposition 4, it follows that

    f(x) ⪰ f(x*).    (54)

Therefore, x* is an optimal solution of (P).

5. Duality

Consider the following dual problem:

    max  f(u)
    s.t. ∇f(u) + Σ_{i=1}^m y_i ∇g_i(u) = 0,
         y_i g_i(u) ⪰ 0,  i = 1, 2, ..., m,
         y_i ≥ 0,  i = 1, 2, ..., m.    (D)
Theorem 20 (weak duality). Let x be P-feasible, and let (u, y) be D-feasible. Assume that there exist η, Φ_0, b_0, Φ_i, b_i, i = 1, 2, ..., m such that

    b_0(x, u) Φ_0[f(x) ⊖_g f(u)] ⪰ η^t(x, u) ∇f(u),
    −b_i(x, u) Φ_i[g_i(u)] ⪰ η^t(x, u) ∇g_i(u)    (55)
at u;

    Φ_0(A) ⪰ 0 ⟹ A ⪰ 0,  b_0(x, u) > 0,  b_i(x, u) ≥ 0;    (56)

and Σ_{i=1}^m b_i(x, u) y_i Φ_i(g_i(u)) ⪰ 0. Then, f(x) ⪰ f(u).

Proof. It is similar to the proof of Theorem 18.

Theorem 21 (weak duality). Let x be P-feasible, and let (u, y) be D-feasible. Assume that there exist η, Φ_0, b_0, Φ_1, b_1 such that

    η^t(x, u) ∇f(u) ⪰ 0 ⟹ b_0(x, u) Φ_0[f(x) ⊖_g f(u)] ⪰ 0,    (57)
    −b_1(x, u) Φ_1[Σ_{i=1}^m y_i g_i(u)] ⪯ 0 ⟹ Σ_{i=1}^m y_i η^t(x, u) ∇g_i(u) ⪯ 0    (58)

at u;

    Φ_0(A) ⪰ 0 ⟹ A ⪰ 0,    (59)
    A ⪰ 0 ⟹ Φ_1(A) ⪰ 0,    (60)
    b_0(x, u) > 0,  b_1(x, u) ≥ 0.    (61)

Then, f(x) ⪰ f(u).

Proof. Since (u, y) is D-feasible, y_i g_i(u) ⪰ 0; from (60) and (61),

    −b_1(x, u) Φ_1[Σ_{i=1}^m y_i g_i(u)] ⪯ 0.    (62)

By (58),

    Σ_{i=1}^m y_i η^t(x, u) ∇g_i(u) ⪯ 0.    (63)

Thus,

    −Σ_{i=1}^m y_i η^t(x, u) ∇g_i(u) ⪰ 0.    (64)

Then, we have

    −{Σ_{i=1}^m y_i η^t(x, u) ∇g_i(u)}^U = {−Σ_{i=1}^m y_i η^t(x, u) ∇g_i(u)}^L ≥ 0,
    −{Σ_{i=1}^m y_i η^t(x, u) ∇g_i(u)}^L = {−Σ_{i=1}^m y_i η^t(x, u) ∇g_i(u)}^U ≥ 0.    (65)

Since (u, y) is D-feasible, we also obtain that

    ∇f(u) + Σ_{i=1}^m y_i ∇g_i(u) = 0.    (66)

So,

    η^t(x, u) ∇f(u) + η^t(x, u) Σ_{i=1}^m y_i ∇g_i(u) = 0.    (67)

By Definition 2, it follows that

    {η^t(x, u) ∇f(u)}^L + {η^t(x, u) Σ_{i=1}^m y_i ∇g_i(u)}^L = 0,
    {η^t(x, u) ∇f(u)}^U + {η^t(x, u) Σ_{i=1}^m y_i ∇g_i(u)}^U = 0.

Therefore,

    {η^t(x, u) ∇f(u)}^L = −{η^t(x, u) Σ_{i=1}^m y_i ∇g_i(u)}^L ≥ 0,
    {η^t(x, u) ∇f(u)}^U = −{η^t(x, u) Σ_{i=1}^m y_i ∇g_i(u)}^U ≥ 0.    (68)

Then,

    η^t(x, u) ∇f(u) ⪰ 0.    (69)

By (57),

    b_0(x, u) Φ_0[f(x) ⊖_g f(u)] ⪰ 0.    (70)

From (59) and (61),

    f(x) ⊖_g f(u) ⪰ 0.    (71)

Thus,

    f(x) ⪰ f(u).    (72)

Theorem 22 (strong duality). If x* is P-optimal and a constraint qualification is satisfied at x*, then there exists y* = (y_1, y_2, ..., y_m)^t ∈ R^m such that (x*, y*) is D-feasible and the values of the objective functions for (P) and (D) are equal at x* and (x*, y*), respectively. Furthermore, if for all P-feasible x and D-feasible (u, y) the hypotheses of Theorem 20 are satisfied, then (x*, y*) is D-optimal.

Proof. Since a constraint qualification is satisfied at x*, there exists y* ∈ R^m such that the following Kuhn-Tucker conditions are satisfied:

    ∇f(x*) + Σ_{i=1}^m y_i ∇g_i(x*) = 0,
    Σ_{i=1}^m y_i g_i(x*) = 0,
    y_i ≥ 0.    (73)

Therefore, (x*, y*) is D-feasible.
Suppose that (x*, y*) is not D-optimal. Then, there exists a D-feasible (u, y) such that f(u) ≻ f(x*). This contradicts Theorem 20. Therefore, (x*, y*) is D-optimal.
6. Numerical Example
Consider the following example:

    minimize  f(x) = [x_1 − sin(x_2) + 1, x_1 − sin(x_2) + 3]
    subject to
        g_1(x) = [sin(x_1) − 4 sin(x_2) − 2, sin(x_1) − 4 sin(x_2)] ⪯ 0,
        g_2(x) = [2 sin(x_1) + 7 sin(x_2) + x_1 − 7, 2 sin(x_1) + 7 sin(x_2) + x_1 − 6] ⪯ 0,
        g_3(x) = [2x_1 + 2x_2 − 5, 2x_1 + 2x_2 − 3] ⪯ 0,
        g_4(x) = [4x_1^2 + 4x_2^2 − 12, 4x_1^2 + 4x_2^2 − 9] ⪯ 0,
        g_5(x) = [−sin(x_1) − 1, −sin(x_1)] ⪯ 0,
        g_6(x) = [−sin(x_2) − 1, −sin(x_2)] ⪯ 0.    (74)
Note that the interval-valued objective function is univex with respect to b = 1, η(x, u) = x − u, and Φ([a^L, a^U]) = [a^L, a^U], and every g_i (i = 1, 2, ..., 6) is univex with respect to b = 1, Φ([a^L, a^U]) = [a^L, a^U], and

    η(x, u) = ((sin x_1 − sin u_1)/cos u_1, (sin x_2 − sin u_2)/cos u_2)^T,    (75)

where x = (x_1, x_2)^T and u = (u_1, u_2)^T.
It is easy to see that the problem satisfies the assumptions of Theorem 18. Then,

    (1, −cos x_2)^T + μ_1 (cos x_1, −4 cos x_2)^T + μ_2 (2 cos x_1 + 1, 7 cos x_2)^T + μ_3 (2, 2)^T
        + μ_4 (8x_1, 8x_2)^T + μ_5 (−cos x_1, 0)^T + μ_6 (0, −cos x_2)^T = (0, 0)^T.    (76)

After some algebraic calculations, we obtain x* = (0, sin^{-1}(6/7))^T and μ* = (0, 1/7, 0, 0, 10/7, 0)^T. Therefore, x* is an optimal solution.
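As a sanity check on this example (our own sketch, not part of the paper; variable names are assumed), the following Python snippet evaluates the stationarity condition (76) at x* = (0, sin^{-1}(6/7))^T with μ* = (0, 1/7, 0, 0, 10/7, 0)^T and also evaluates the upper endpoints of the constraints, which must all be ≤ 0 for feasibility.

    import numpy as np

    x1, x2 = 0.0, np.arcsin(6.0 / 7.0)           # candidate optimum x*
    mu = np.array([0, 1/7, 0, 0, 10/7, 0])       # multipliers reported in the text

    # Gradients of the endpoint functions at x* (identical for lower and upper endpoints here).
    grad_f = np.array([1.0, -np.cos(x2)])
    grad_g = np.array([
        [np.cos(x1), -4 * np.cos(x2)],           # g_1
        [2 * np.cos(x1) + 1, 7 * np.cos(x2)],    # g_2
        [2.0, 2.0],                              # g_3
        [8 * x1, 8 * x2],                        # g_4
        [-np.cos(x1), 0.0],                      # g_5
        [0.0, -np.cos(x2)],                      # g_6
    ])

    print("stationarity residual of (76):", grad_f + mu @ grad_g)   # should be approximately (0, 0)

    # Upper endpoints of g_1, ..., g_6 at x*; all must be <= 0.
    g_upper = np.array([
        np.sin(x1) - 4 * np.sin(x2),
        2 * np.sin(x1) + 7 * np.sin(x2) + x1 - 6,
        2 * x1 + 2 * x2 - 3,
        4 * x1**2 + 4 * x2**2 - 9,
        -np.sin(x1),
        -np.sin(x2),
    ])
    print("upper constraint values at x*:", g_upper)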
Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grant no. 60974082), and the Science Plan
Foundation of the Education Bureau of Shaanxi Province
(nos. 11JK1051, 2013JK1098, 2013JK1130, and 2013JK1182).
References
[1] C. Jiang, X. Han, G. R. Liu, and G. P. Liu, “A nonlinear interval number programming method for uncertain optimization
problems,” European Journal of Operational Research, vol. 188,
no. 1, pp. 1–13, 2008.
[2] S. Chanas and D. Kuchta, “Multiobjective programming in
optimization of interval objective functions—a generalized
approach,” European Journal of Operational Research, vol. 94,
no. 3, pp. 594–598, 1996.
[3] S.-T. Liu, “Posynomial geometric programming with interval
exponents and coefficients,” European Journal of Operational
Research, vol. 186, no. 1, pp. 7–27, 2008.
[4] H.-C. Wu, “The Karush-Kuhn-Tucker optimality conditions
in an optimization problem with interval-valued objective
function,” European Journal of Operational Research, vol. 176,
no. 1, pp. 46–59, 2007.
[5] H.-C. Wu, “The Karush-Kuhn-Tucker optimality conditions
in multiobjective programming problems with interval-valued
objective functions,” European Journal of Operational Research,
vol. 196, no. 1, pp. 49–60, 2009.
[6] H.-C. Wu, “On interval-valued nonlinear programming problems,” Journal of Mathematical Analysis and Applications, vol.
338, no. 1, pp. 299–316, 2008.
[7] H. C. Wu, “Wolfe duality for interval-valued optimization,”
Journal of Optimization Theory and Applications, vol. 138, no.
3, pp. 497–509, 2008.
[8] H. C. Wu, “Duality theory for optimization problems with
interval-valued objective functions,” Journal of Optimization
Theory and Applications, vol. 144, no. 3, pp. 615–628, 2010.
[9] H.-C. Wu, “Duality theory in interval-valued linear programming problems,” Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 298–316, 2011.
[10] M. A. Hanson, “On sufficiency of the Kuhn-Tucker conditions,”
Journal of Mathematical Analysis and Applications, vol. 80, no.
2, pp. 545–550, 1981.
[11] R. N. Kaul, S. K. Suneja, and M. K. Srivastava, “Optimality
criteria and duality in multiple-objective optimization involving generalized invexity,” Journal of Optimization Theory and
Applications, vol. 80, no. 3, pp. 465–482, 1994.
[12] C. R. Bector and C. Singh, “B-vex functions,” Journal of
Optimization Theory and Applications, vol. 71, no. 2, pp. 237–
253, 1991.
[13] C. R. Bector, M. K. Bector, A. Gill, and C. Singh, “Duality for
vector valued B-invex programming,” in Generalized Convexity
Proceedings, S. Komlosi, T. Rapcsak, and S. Schaible, Eds., pp.
358–373, Springer, Berlin, Germany, 1992.
[14] C. R. Bector, M. K. Bector, A. Gill, and C. Singh, “Univex sets,
functions and univex nonlinear programming,” in Generalized
Convexity Proceedings, S. Komlosi, T. Rapcsak, and S. Schaible,
Eds., pp. 3–18, Springer, Berlin, Germany, 1992.
[15] C. R. Bector, S. K. Suneja, and S. Gupta, “Univex functions
and Univex nonlinear programming,” in Proceedings of the
Administrative Sciences Association of Canada, pp. 115–124, 1992.
[16] N. G. Rueda, M. A. Hanson, and C. Singh, “Optimality and
duality with generalized convexity,” Journal of Optimization
Theory and Applications, vol. 86, no. 2, pp. 491–500, 1995.
[17] B. Aghezzaf and M. Hachimi, “Generalized invexity and duality
in multiobjective programming problems,” Journal of Global
Optimization, vol. 18, no. 1, pp. 91–101, 2000.
[18] T. R. Gulati, I. Ahmad, and D. Agarwal, “Sufficiency and duality
in multiobjective programming under generalized type I functions,” Journal of Optimization Theory and Applications, vol. 135,
no. 3, pp. 411–427, 2007.
[19] A. Neumaier, Interval Methods for Systems of Equations, Cambridge University Press, Cambridge, UK, 1990.
[20] L. Stefanini, “A generalization of Hukuhara difference for interval and fuzzy arithmetic,” in Soft Methods for Handling Variability and Imprecision, D. Dubois, M. A. Lubiano et al., Eds., vol.
48 of Series on Advances in Soft Computing, Springer, Berlin,
Germany, 2008.
[21] L. Stefanini and B. Bede, “Generalized Hukuhara differentiability of interval-valued functions and interval differential equations,” Nonlinear Analysis: Theory, Methods and Applications A,
vol. 71, no. 3-4, pp. 1311–1328, 2009.