Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 604369, 8 pages
http://dx.doi.org/10.1155/2013/604369
Research Article
Approximation for the Hierarchical Constrained Variational
Inequalities over the Fixed Points of Nonexpansive Semigroups
Li-Jun Zhu
School of Information and Calculation, Beifang University of Nationalities, Yinchuan 750021, China
Correspondence should be addressed to Li-Jun Zhu; zhulijun1995@sohu.com
Received 22 December 2012; Accepted 6 February 2013
Academic Editor: Jen-Chih Yao
Copyright © 2013 Li-Jun Zhu. This is an open access article distributed under the Creative Commons Attribution License, which
permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The purpose of the present paper is to study the hierarchical constrained variational inequality of finding a point $x^*$ such that $x^* \in \Omega$, $\langle (A-\gamma f)x^* - (I-B)Sx^*,\ x - x^* \rangle \ge 0$ for all $x \in \Omega$, where $\Omega$ is the set of solutions of the following variational inequality: $x^* \in \digamma$, $\langle (A-S)x^*,\ x - x^* \rangle \ge 0$ for all $x \in \digamma$, where $A$, $B$ are two strongly positive bounded linear operators, $f$ is a $\rho$-contraction, $S$ is a nonexpansive mapping, and $\digamma$ is the fixed point set of a nonexpansive semigroup $\{T(s)\}_{s \ge 0}$. We present a double net which converges hierarchically to an element of $\digamma$ that solves the above hierarchical constrained variational inequality.
1. Introduction
Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$. Recall that a self-mapping $f$ of $C$ is said to be contractive if there exists a constant $\rho \in [0, 1)$ such that $\|f(x) - f(y)\| \le \rho\|x - y\|$ for all $x, y \in C$. A mapping $T : C \to C$ is called nonexpansive if
\[
\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C. \tag{1}
\]
We denote by $\mathrm{Fix}(T)$ the set of fixed points of $T$; that is, $\mathrm{Fix}(T) = \{x \in C : Tx = x\}$. A bounded linear operator $B$ is called strongly positive on $H$ if there exists a constant $\tilde{\gamma} > 0$ such that
\[
\langle Bx, x \rangle \ge \tilde{\gamma}\|x\|^2, \quad \forall x \in H. \tag{2}
\]
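For a concrete feel for condition (2) (this example is not from the paper), note that on $H = \mathbb{R}^n$ a self-adjoint operator is strongly positive exactly when its smallest eigenvalue is positive, and that eigenvalue is the best constant $\tilde{\gamma}$. A minimal Python sketch, with the matrix chosen purely for illustration:

import numpy as np

# Hypothetical example: check strong positivity of a symmetric operator B on H = R^2.
# For a symmetric matrix, the best constant gamma_tilde in (2) is the smallest eigenvalue,
# and B is strongly positive exactly when that eigenvalue is positive.
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])
gamma_tilde = np.linalg.eigvalsh(B).min()
print("smallest eigenvalue of B:", gamma_tilde)   # positive, so B is strongly positive

# Spot-check the defining inequality <Bx, x> >= gamma_tilde * ||x||^2 on random vectors.
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=2)
    assert x @ B @ x >= gamma_tilde * (x @ x) - 1e-12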
It is well known that the variational inequality for an operator $\varphi : H \to H$ over a nonempty, closed, and convex set $C \subset H$ is to find a point $x^* \in C$ with the property
\[
\langle \varphi(x^*), y - x^* \rangle \ge 0, \quad \forall y \in C. \tag{3}
\]
The set of solutions of the variational inequality (3) is denoted by $\mathrm{VI}(C, \varphi)$. If the mapping $\varphi$ is a monotone operator, then we say that VI (3) is monotone. It is well known that if $\varphi$ is Lipschitzian and strongly monotone, then for small enough $\delta > 0$ the mapping $P_C(I - \delta\varphi)$ is a contraction on $C$, and so the sequence $\{x_n\}$ of Picard iterates, given by $x_n = P_C(I - \delta\varphi)x_{n-1}$ ($n \ge 1$), converges strongly to the unique solution of VI (3). This treatment of VI (3) with $\varphi$ strongly monotone and Lipschitzian originates with Yamada [1]. However, if $\varphi$ is only monotone (not strongly monotone), such iterative methods no longer apply to the VI.
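To make the strongly monotone case concrete, the following minimal sketch (toy data, not from the paper) runs the Picard iteration $x_n = P_C(I - \delta\varphi)x_{n-1}$ for a strongly monotone, Lipschitzian $\varphi(x) = Mx - b$ over a box $C$, where the projection is a componentwise clip and the step size $\delta$ is chosen small enough that $P_C(I - \delta\varphi)$ is a contraction.

import numpy as np

# A minimal sketch (toy data, not from the paper) of the Picard iteration
# x_n = P_C (I - delta * phi) x_{n-1} for a strongly monotone, Lipschitzian phi.
# Here phi(x) = M x - b with M symmetric positive definite, and C = [0,1] x [0,1],
# so P_C is a componentwise clip.
M = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 5.0])
phi = lambda x: M @ x - b

lo, hi = np.zeros(2), np.ones(2)
P_C = lambda x: np.clip(x, lo, hi)

L = np.linalg.eigvalsh(M).max()      # Lipschitz constant of phi
eta = np.linalg.eigvalsh(M).min()    # strong monotonicity constant of phi
delta = eta / L**2                   # small enough that P_C(I - delta*phi) is a contraction

x = np.zeros(2)
for _ in range(200):
    x = P_C(x - delta * phi(x))
print("approximate solution of VI(C, phi):", x)
# The limit x* satisfies <phi(x*), y - x*> >= 0 for all y in C.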
Many practical problems such as signal processing and
network resource allocation are formulated as the variational
inequality over the set of the solutions of some nonlinear
mappings (e.g., the fixed point set of nonexpansive mappings), and algorithms to solve these problems have been
proposed. Iterative algorithms have been presented for the
convex optimization problem with a fixed point constraint
along with proof that these algorithms strongly converge to
the unique solution of problems with a strongly monotone
operator. The strong monotonicity condition guarantees the
uniqueness of the solution. For some related works on the
variational inequalities, please see [2–23] and the references
therein. Particularly, variational inequality problems over the fixed points of nonexpansive mappings have been considered; the reader can consult [16, 24]. On the other hand,
we note that in the literature, nonlinear ergodic theorems
for nonexpansive semigroups have been considered by many
authors; see, for example, [25–32]. In this paper, we will
consider a general variational inequality problem whose constraint is a variational inequality over the fixed point set of a nonexpansive semigroup.
The purpose of the present paper is to study the hierarchical constrained variational inequality of finding a point $x^*$ such that
\[
x^* \in \Omega, \quad \langle (A - \gamma f)x^* - (I - B)Sx^*,\ x - x^* \rangle \ge 0, \quad \forall x \in \Omega, \tag{4}
\]
where $\Omega$ is the set of solutions of the following variational inequality:
\[
x^* \in \digamma, \quad \langle (A - S)x^*,\ x - x^* \rangle \ge 0, \quad \forall x \in \digamma, \tag{5}
\]
where $A$ and $B$ are two strongly positive bounded linear operators, $f$ is a $\rho$-contraction, $S$ is a nonexpansive mapping, and $\digamma$ is the fixed point set of a nonexpansive semigroup $\{T(s)\}_{s \ge 0}$. We present a double net which converges hierarchically to an element of $\digamma$ that solves the above hierarchical constrained variational inequality.

2. Preliminaries

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. The metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying
\[
\|x - P_C x\| = \inf_{y \in C}\|x - y\| =: d(x, C). \tag{6}
\]
It is well known that $P_C$ is a nonexpansive mapping and satisfies
\[
\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2, \quad \forall x, y \in H. \tag{7}
\]
Moreover, $P_C$ is characterized by the following property:
\[
\langle x - P_C x, y - P_C x \rangle \le 0, \quad \forall x \in H,\ y \in C. \tag{8}
\]
⟨𝐹π‘₯∗ , π‘₯ − π‘₯∗ ⟩ ≥ 0,
∀π‘₯ ∈ 𝐻, 𝑦 ∈ 𝐢.
(8)
Recall that a family {𝑇(𝑠)}𝑠≥0 of mappings of 𝐻 into itself
is called a nonexpansive semigroup if it satisfies the following
conditions:
(S1) 𝑇(0)π‘₯ = π‘₯ for all π‘₯ ∈ 𝐻;
(S2) 𝑇(𝑠 + 𝑑) = 𝑇(𝑠)𝑇(𝑑) for all 𝑠, 𝑑 ≥ 0;
(S3) ‖𝑇(𝑠)π‘₯ − 𝑇(𝑠)𝑦‖ ≤ β€–π‘₯ − 𝑦‖ for all π‘₯, 𝑦 ∈ 𝐻 and 𝑠 ≥ 0;
(S4) for all π‘₯ ∈ 𝐻, 𝑠 → 𝑇(𝑠)π‘₯ is continuous.
We denote by Fix (𝑇(𝑠)) the set of fixed points of 𝑇(𝑠) and
by ϝ the set of all common fixed points of {𝑇(𝑠)}𝑠≥0 ; that is,
ϝ = ⋂𝑠≥0 Fix (𝑇(𝑠)). It is known that ϝ is closed and convex.
We need the following lemmas for proving our main
results.
Lemma 1 (see [33]). Let 𝐢 be a nonempty bounded closed
convex subset of a Hilbert space 𝐻 and let {𝑇(𝑠)}𝑠≥0 be a
nonexpansive semigroup on 𝐢. Then, for every β„Ž ≥ 0,
σ΅„©σ΅„©
σ΅„©σ΅„© 1 𝑑
1 𝑑
σ΅„©
σ΅„©
lim sup σ΅„©σ΅„©σ΅„© ∫ 𝑇 (𝑠) π‘₯𝑑𝑠 − 𝑇 (β„Ž) ∫ 𝑇 (𝑠) π‘₯𝑑𝑠󡄩󡄩󡄩 = 0.
𝑑 → ∞ π‘₯∈𝐢 σ΅„©
σ΅„©σ΅„©
𝑑 0
󡄩𝑑 0
(9)
(10)
is equivalent to the dual variational inequality
π‘₯∗ ∈ 𝐢,
⟨𝐹π‘₯, π‘₯ − π‘₯∗ ⟩ ≥ 0,
∀π‘₯ ∈ 𝐢.
(11)
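A one-dimensional toy illustration of Lemma 3 (not from the paper): for the monotone, continuous map $F(x) = x - 1$ on $C = [0, 2]$, both the primal inequality (10) and the dual inequality (11) single out the same point $x^* = 1$.

import numpy as np

# One-dimensional toy check of Lemma 3 (illustration only): F(x) = x - 1 is monotone and
# continuous on C = [0, 2]; the primal VI (10) and the dual VI (11) share the solution x* = 1.
C = np.linspace(0.0, 2.0, 2001)
F = lambda x: x - 1.0
x_star = 1.0

primal_holds = all(F(x_star) * (x - x_star) >= -1e-12 for x in C)   # (10)
dual_holds   = all(F(x) * (x - x_star) >= -1e-12 for x in C)        # (11)
print(primal_holds, dual_holds)   # True True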
3. Main Results
Now we consider the following hierarchical variational inequality whose constraint is a variational inequality over the fixed point set of the nonexpansive semigroup $\{T(s)\}_{s \ge 0}$.

Problem 1. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to H$ be a $\rho$-contraction with coefficient $\rho \in [0, 1)$ and let $S : C \to C$ be a nonexpansive mapping. Let $\{T(s)\}_{s \ge 0}$ be a nonexpansive semigroup on $C$ and let $A, B : H \to H$ be two strongly positive bounded linear operators with coefficients $\tilde{\lambda}$ ($1 \le \tilde{\lambda} < 2$) and $\tilde{\gamma}$ ($0 < \tilde{\gamma} < 1$), respectively. Let $\gamma$ be a constant satisfying $0 < \gamma\rho < \tilde{\gamma}$. Our objective is to find $x^*$ such that
\[
x^* \in \Omega, \quad \langle (A - \gamma f)x^* - (I - B)Sx^*,\ x - x^* \rangle \ge 0, \quad \forall x \in \Omega, \tag{12}
\]
where $\Omega := \mathrm{VI}(\digamma, A - S)$ is the set of solutions of the following variational inequality:
\[
x^* \in \digamma, \quad \langle (A - S)x^*,\ x - x^* \rangle \ge 0, \quad \forall x \in \digamma. \tag{13}
\]

We observe that $(A - \gamma f) - (I - B)S$ is strongly monotone and Lipschitz continuous. In fact, we have
\[
\begin{aligned}
&\langle (A - \gamma f)x - (I - B)Sx - [(A - \gamma f)y - (I - B)Sy],\ x - y \rangle \\
&\quad = \langle Ax - Ay, x - y \rangle - \gamma\langle f(x) - f(y), x - y \rangle - \langle (I - B)(Sx - Sy), x - y \rangle \\
&\quad \ge \tilde{\lambda}\|x - y\|^2 - \gamma\rho\|x - y\|^2 - (1 - \tilde{\gamma})\|x - y\|^2
  = (\tilde{\lambda} - 1 + \tilde{\gamma} - \gamma\rho)\|x - y\|^2, \\[4pt]
&\|(A - \gamma f)x - (I - B)Sx - [(A - \gamma f)y - (I - B)Sy]\| \\
&\quad \le \|Ax - Ay\| + \gamma\|f(x) - f(y)\| + \|I - B\|\,\|Sx - Sy\| \\
&\quad \le \|A\|\,\|x - y\| + \gamma\rho\|x - y\| + (1 - \tilde{\gamma})\|x - y\|
  = (1 + \|A\| + \gamma\rho - \tilde{\gamma})\|x - y\|.
\end{aligned} \tag{14}
\]
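The lower bound in (14) can be spot-checked numerically. In the sketch below every operator is an illustrative assumption chosen to satisfy the hypotheses of Problem 1 (in particular $\|I - B\| \le 1 - \tilde{\gamma}$, which the estimate uses), so the constant $\tilde{\lambda} - 1 + \tilde{\gamma} - \gamma\rho$ works out to $0.9$:

import numpy as np

# Numeric spot-check (toy operators, not from the paper) of the lower bound in (14) on H = R^2.
# Assumed illustrative choices satisfying the hypotheses of Problem 1:
#   A = 1.5 I             (strongly positive, lambda_tilde = 1.5)
#   B = 0.6 I             (strongly positive, gamma_tilde = 0.6, so ||I - B|| = 0.4)
#   f(x) = 0.2 R(1) x + c (a rho-contraction with rho = 0.2),  gamma = 1
#   S(x) = R(0.5) x       (a rotation, hence nonexpansive)
def R(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

lam_t, gam_t, rho, gamma = 1.5, 0.6, 0.2, 1.0
A, B = lam_t * np.eye(2), gam_t * np.eye(2)
f = lambda x: rho * R(1.0) @ x + np.array([0.3, -0.7])
S = lambda x: R(0.5) @ x
Phi = lambda x: A @ x - gamma * f(x) - (np.eye(2) - B) @ S(x)   # (A - gamma f) - (I - B)S

bound = lam_t - 1 + gam_t - gamma * rho    # = 0.9 for these choices
rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    assert np.dot(Phi(x) - Phi(y), x - y) >= bound * np.dot(x - y, x - y) - 1e-10
print("strong monotonicity constant is at least", bound)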
By the strong monotonicity and Lipschitz continuity of $(A - \gamma f) - (I - B)S$ established in (14), the existence and the uniqueness of the solution to Problem 1 are guaranteed.

In order to solve the above hierarchical constrained variational inequality, we present the following double net.

Algorithm 4. Set $\kappa = 1/(\tilde{\lambda} + \tilde{\gamma} - \gamma\rho)$. Then $0 < \kappa \le 1$. For each $(s, t) \in (0, \kappa) \times (0, 1)$, we define a double net $\{x_{s,t}\}$ implicitly by
\[
x_{s,t} = P_C\!\left[ s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr)
  + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right],
  \quad (s, t) \in (0, \kappa) \times (0, 1). \tag{15}
\]
Note that this implicit algorithm is well defined. In fact, we define the mapping
\[
x \mapsto W_{s,t}(x) := P_C\!\left[ s\bigl(t\gamma f(x) + (I - tB)Sx\bigr)
  + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x\,d\nu \right]. \tag{16}
\]
This self-mapping is a contraction. As a matter of fact, we have
\[
\begin{aligned}
\|W_{s,t}(x) - W_{s,t}(y)\|
&= \left\| P_C\!\left[ s\bigl(t\gamma f(x) + (I - tB)Sx\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x\,d\nu \right]
  - P_C\!\left[ s\bigl(t\gamma f(y) + (I - tB)Sy\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)y\,d\nu \right] \right\| \\
&\le st\gamma\|f(x) - f(y)\| + s\|I - tB\|\,\|Sx - Sy\|
  + \|I - sA\|\left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x\,d\nu - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)y\,d\nu \right\| \\
&\le \bigl[ 1 - (\tilde{\lambda} - 1)s - (\tilde{\gamma} - \gamma\rho)st \bigr]\|x - y\|.
\end{aligned} \tag{17}
\]
Since $(s, t) \in (0, \kappa) \times (0, 1)$, we have $0 < 1 - (\tilde{\lambda} - 1)s - (\tilde{\gamma} - \gamma\rho)st < 1$. Hence, $W_{s,t}$ is a contraction. Therefore, by Banach's Contraction Principle, $W_{s,t}$ has a unique fixed point, which is denoted by $x_{s,t} \in C$.
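Since $x_{s,t}$ is the unique fixed point of the contraction $W_{s,t}$, it can be approximated in a concrete finite-dimensional setting simply by iterating $W_{s,t}$, which is what the following sketch does. Every concrete choice ($C$, $T(s)$, $S$, $A$, $B$, $f$, and the scaling $\lambda_s = 1/s$) is an illustrative assumption and not taken from the paper; here $T(s)$ is the planar rotation semigroup, whose only common fixed point is the origin, so $\digamma = \{0\}$.

import numpy as np

# A minimal sketch of the implicit double net (15): for fixed (s, t), x_{s,t} is the unique
# fixed point of the contraction W_{s,t}, so we approximate it by Picard iteration.
# Every concrete choice below (C, T, S, A, B, f, lambda_s) is an illustrative assumption.
def R(theta):
    c, s_ = np.cos(theta), np.sin(theta)
    return np.array([[c, -s_], [s_, c]])

# H = R^2, C = closed ball of radius 2, T(s) = rotation by angle s (a nonexpansive
# semigroup whose only common fixed point is the origin, so F = {0}).
def P_C(x, r=2.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

T = lambda v, x: R(v) @ x
S = lambda x: R(0.5) @ x                          # nonexpansive
lam_t, gam_t, rho, gamma = 1.5, 0.6, 0.2, 1.0
A, B = lam_t * np.eye(2), gam_t * np.eye(2)       # strongly positive, coefficients above
f = lambda x: rho * x + np.array([0.3, -0.7])     # rho-contraction

def cesaro(x, lam, m=400):
    # (1/lam) * integral_0^lam T(v) x dv, approximated by a midpoint rule
    vs = np.linspace(0.0, lam, m, endpoint=False) + lam / (2 * m)
    return np.mean([T(v, x) for v in vs], axis=0)

def W(x, s, t, lam_s):
    inner = s * (t * gamma * f(x) + (np.eye(2) - t * B) @ S(x)) \
            + (np.eye(2) - s * A) @ cesaro(x, lam_s)
    return P_C(inner)

def x_st(s, t, iters=200):
    lam_s = 1.0 / s              # assumed scaling, lambda_s -> infinity as s -> 0+
    x = np.ones(2)
    for _ in range(iters):       # Banach iteration; W(., s, t) is a contraction
        x = W(x, s, t, lam_s)
    return x

t = 0.5
for s in [0.4, 0.2, 0.1, 0.05]:  # s must lie in (0, kappa); kappa = 1/1.9 here
    print(s, x_st(s, t))

Printing $x_{s,t}$ for a fixed $t$ and decreasing $s$ shows the computed points approaching a point of $\digamma$, in line with the first assertion of Theorem 5 below.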
Next we study the behavior of the net $\{x_{s,t}\}$ as $s \to 0$ and $t \to 0$ successively.

Theorem 5. Assume that $\mathrm{VI}(\digamma, A - S) \neq \emptyset$. Then, for each fixed $t \in (0, 1)$, the net $\{x_{s,t}\}$ defined by (15) converges in norm, as $s \to 0{+}$, to a solution $x_t \in \digamma$. Moreover, as $t \to 0{+}$, the net $\{x_t\}$ converges in norm to the unique solution $x^*$ of Problem 1.

Proof. We first show that the net $\{x_{s,t}\}$ is bounded. Take $y^* \in \digamma$. From (15), we have
\[
\begin{aligned}
\|x_{s,t} - y^*\|
&= \left\| P_C\!\left[ s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right] - y^* \right\| \\
&\le \left\| s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr) + (I - sA)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - y^* \right\| \\
&= \Bigl\| st\gamma\bigl(f(x_{s,t}) - f(y^*)\bigr) + s(I - tB)\bigl(Sx_{s,t} - Sy^*\bigr)
  + s\bigl(t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\bigr)
  + (I - sA)\!\left( \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - y^* \right) \Bigr\| \\
&\le st\gamma\|f(x_{s,t}) - f(y^*)\| + s\|I - tB\|\,\|Sx_{s,t} - Sy^*\|
  + \|I - sA\|\left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - y^* \right\|
  + s\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\| \\
&\le st\gamma\rho\|x_{s,t} - y^*\| + s(1 - t\tilde{\gamma})\|x_{s,t} - y^*\| + (1 - s\tilde{\lambda})\|x_{s,t} - y^*\|
  + s\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\| \\
&= \bigl[ 1 - (\tilde{\lambda} - 1)s - (\tilde{\gamma} - \gamma\rho)st \bigr]\|x_{s,t} - y^*\|
  + s\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\|.
\end{aligned} \tag{18}
\]
Hence,
\[
\|x_{s,t} - y^*\| \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,
\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\|. \tag{19}
\]
It follows that, for each fixed $t \in (0, 1)$, $\{x_{s,t}\}$ is bounded.

Next, we show that $\lim_{s \to 0}\|T(\tau)x_{s,t} - x_{s,t}\| = 0$ for all $0 \le \tau < \infty$ and, consequently, that as $s \to 0{+}$ the entire net $\{x_{s,t}\}$ converges in norm to some $x_t \in \digamma$.
For each fixed $t \in (0, 1)$, we set $R_t := \dfrac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,\|t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\|$. It is clear that, for each fixed $t \in (0, 1)$, $\{x_{s,t}\} \subset B(y^*, R_t)$. Notice that
\[
\left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - y^* \right\| \le \|x_{s,t} - y^*\| \le R_t. \tag{20}
\]
Moreover, we observe that if $x \in B(y^*, R_t)$, then
\[
\|T(s)x - y^*\| \le \|T(s)x - T(s)y^*\| \le \|x - y^*\| \le R_t, \tag{21}
\]
that is, $B(y^*, R_t)$ is $T(s)$-invariant for all $s$. From (15), we deduce
\[
\begin{aligned}
\|T(\tau)x_{s,t} - x_{s,t}\|
&\le \left\| T(\tau)x_{s,t} - T(\tau)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right\|
  + \left\| T(\tau)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right\|
  + \left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - x_{s,t} \right\| \\
&\le 2\left\| x_{s,t} - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right\|
  + \left\| T(\tau)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right\| \\
&\le 2s\left\| t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t} - \frac{A}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right\|
  + \left\| T(\tau)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right\|.
\end{aligned} \tag{22}
\]
Since $\{x_{s,t}\}$ is bounded, $\{f(x_{s,t})\}$ and $\{Sx_{s,t}\}$ are also bounded. Then, from Lemma 1, we deduce, for all $0 \le \tau < \infty$ and fixed $t \in (0, 1)$, that
\[
\lim_{s \to 0}\|T(\tau)x_{s,t} - x_{s,t}\| = 0. \tag{23}
\]
Set $y_{s,t} = s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr) + (I - sA)\dfrac{1}{\lambda_s}\displaystyle\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu$ for all $(s, t) \in (0, \kappa) \times (0, 1)$. We then have $x_{s,t} = P_C y_{s,t}$ and, for any $y^* \in \digamma$,
\[
\begin{aligned}
x_{s,t} - y^* &= x_{s,t} - y_{s,t} + y_{s,t} - y^* \\
&= x_{s,t} - y_{s,t} + st\gamma\bigl(f(x_{s,t}) - f(y^*)\bigr) + s(I - tB)\bigl(Sx_{s,t} - Sy^*\bigr)
  + s\bigl(t\gamma f(y^*) + (I - tB)Sy^* - Ay^*\bigr)
  + (I - sA)\!\left( \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - y^* \right).
\end{aligned} \tag{24}
\]
Notice that, by the characterization (8) of the projection $P_C$,
\[
\langle x_{s,t} - y_{s,t},\ x_{s,t} - y^* \rangle \le 0. \tag{25}
\]
Thus, we have
\[
\begin{aligned}
\|x_{s,t} - y^*\|^2
&= \langle x_{s,t} - y_{s,t}, x_{s,t} - y^* \rangle
  + st\gamma\langle f(x_{s,t}) - f(y^*), x_{s,t} - y^* \rangle
  + s\langle (I - tB)(Sx_{s,t} - Sy^*), x_{s,t} - y^* \rangle \\
&\qquad + s\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\ x_{s,t} - y^* \rangle
  + \left\langle (I - sA)\!\left( \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - y^* \right),\ x_{s,t} - y^* \right\rangle \\
&\le st\gamma\|f(x_{s,t}) - f(y^*)\|\,\|x_{s,t} - y^*\|
  + s\|I - tB\|\,\|Sx_{s,t} - Sy^*\|\,\|x_{s,t} - y^*\|
  + \|I - sA\|\left\| \frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu - y^* \right\|\|x_{s,t} - y^*\| \\
&\qquad + s\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\ x_{s,t} - y^* \rangle \\
&\le st\gamma\rho\|x_{s,t} - y^*\|^2 + s(1 - t\tilde{\gamma})\|x_{s,t} - y^*\|^2 + (1 - s\tilde{\lambda})\|x_{s,t} - y^*\|^2
  + s\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\ x_{s,t} - y^* \rangle \\
&= \bigl[ 1 - (\tilde{\lambda} - 1)s - (\tilde{\gamma} - \gamma\rho)st \bigr]\|x_{s,t} - y^*\|^2
  + s\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\ x_{s,t} - y^* \rangle.
\end{aligned} \tag{26}
\]
So
\[
\|x_{s,t} - y^*\|^2 \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,
\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\ x_{s,t} - y^* \rangle, \quad y^* \in \digamma. \tag{27}
\]
Assume that $\{s_n\} \subset (0, 1)$ is such that $s_n \to 0$ as $n \to \infty$. By (27), we obtain immediately that
\[
\|x_{s_n,t} - y^*\|^2 \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,
\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\ x_{s_n,t} - y^* \rangle, \quad y^* \in \digamma. \tag{28}
\]
Since $\{x_{s_n,t}\}$ is bounded, there exists a subsequence $\{s_{n_i}\}$ of $\{s_n\}$ such that $\{x_{s_{n_i},t}\}$ converges weakly to a point $x_t$. From (23) and Lemma 2, we get $x_t \in \digamma$. We can substitute $x_t$ for $y^*$ in (28) to get
\[
\|x_{s_n,t} - x_t\|^2 \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,
\langle t\gamma f(x_t) + (I - tB)Sx_t - Ax_t,\ x_{s_n,t} - x_t \rangle. \tag{29}
\]
The weak convergence of $\{x_{s_n,t}\}$ to $x_t$ therefore implies that $x_{s_n,t} \to x_t$ strongly. This proves the relative norm-compactness of the net $\{x_{s,t}\}$ as $s \to 0{+}$ for each fixed $t \in (0, 1)$. In (28), we take the limit as $n \to \infty$ to get
\[
\|x_t - y^*\|^2 \le \frac{1}{(\tilde{\gamma} - \gamma\rho)t + \tilde{\lambda} - 1}\,
\langle t\gamma f(y^*) + (I - tB)Sy^* - Ay^*,\ x_t - y^* \rangle, \quad \forall y^* \in \digamma. \tag{30}
\]
In particular, $x_t$ solves the following variational inequality:
\[
x_t \in \digamma, \quad \langle Ay^* - t\gamma f(y^*) - (I - tB)Sy^*,\ y^* - x_t \rangle \ge 0, \quad \forall y^* \in \digamma. \tag{31}
\]
Note that the mapping $A - t\gamma f - (I - tB)S$ is monotone for all $t \in (0, 1)$, since
\[
\begin{aligned}
&\langle Ax - t\gamma f(x) - (I - tB)Sx - \bigl(Ay - t\gamma f(y) - (I - tB)Sy\bigr),\ x - y \rangle \\
&\quad = \langle Ax - Ay, x - y \rangle - t\gamma\langle f(x) - f(y), x - y \rangle - \langle (I - tB)(Sx - Sy), x - y \rangle \\
&\quad \ge \tilde{\lambda}\|x - y\|^2 - t\gamma\rho\|x - y\|^2 - (1 - t\tilde{\gamma})\|x - y\|^2
  = \bigl[\tilde{\lambda} - 1 + (\tilde{\gamma} - \gamma\rho)t\bigr]\|x - y\|^2 \ge 0.
\end{aligned} \tag{32}
\]
By Lemma 3, (31) is equivalent to its dual variational inequality
\[
x_t \in \digamma, \quad \langle Ax_t - t\gamma f(x_t) - (I - tB)Sx_t,\ y^* - x_t \rangle \ge 0, \quad \forall y^* \in \digamma. \tag{33}
\]
Next we show that, as $s \to 0{+}$, the entire net $\{x_{s,t}\}$ converges in norm to $x_t \in \digamma$. Assume that $x_{s_n',t} \to x_t'$, where $s_n' \to 0$. By the above argument, we deduce similarly that $x_t' \in \digamma$ solves the following variational inequality:
\[
x_t' \in \digamma, \quad \langle Ax_t' - t\gamma f(x_t') - (I - tB)Sx_t',\ y^* - x_t' \rangle \ge 0, \quad \forall y^* \in \digamma. \tag{34}
\]
In (33), we take $y^* = x_t'$ to get
\[
\langle Ax_t - t\gamma f(x_t) - (I - tB)Sx_t,\ x_t' - x_t \rangle \ge 0. \tag{35}
\]
In (34), we take $y^* = x_t$ to get
\[
\langle Ax_t' - t\gamma f(x_t') - (I - tB)Sx_t',\ x_t - x_t' \rangle \ge 0. \tag{36}
\]
Adding up (35) and (36) yields
\[
\langle Ax_t - Ax_t', x_t - x_t' \rangle - t\gamma\langle f(x_t) - f(x_t'), x_t - x_t' \rangle
 - \langle (I - tB)(Sx_t - Sx_t'), x_t - x_t' \rangle \le 0. \tag{37}
\]
At the same time, we note that
\[
\langle Ax_t - Ax_t', x_t - x_t' \rangle - t\gamma\langle f(x_t) - f(x_t'), x_t - x_t' \rangle
 - \langle (I - tB)(Sx_t - Sx_t'), x_t - x_t' \rangle
 \ge \bigl[\tilde{\lambda} - 1 + (\tilde{\gamma} - \gamma\rho)t\bigr]\|x_t - x_t'\|^2 \ge 0. \tag{38}
\]
Therefore, by (37) and (38), we deduce
\[
x_t' = x_t. \tag{39}
\]
Hence, the entire net $\{x_{s,t}\}$ converges in norm to $x_t \in \digamma$ as $s \to 0{+}$.

Now we show that, as $t \to 0{+}$, the net $\{x_t\}$ converges to the unique solution $x^*$ of Problem 1. In (33), we take any $y^* \in \Omega$ to deduce
\[
\langle Ax_t - t\gamma f(x_t) - (I - tB)Sx_t,\ y^* - x_t \rangle \ge 0. \tag{40}
\]
By virtue of the monotonicity of $A - S$ and the fact that $y^* \in \Omega$, we have
\[
\langle Ax_t - Sx_t,\ y^* - x_t \rangle \le \langle Ay^* - Sy^*,\ y^* - x_t \rangle \le 0. \tag{41}
\]
We can rewrite (40) as
\[
\langle t\bigl[Ax_t - \gamma f(x_t) - (I - B)Sx_t\bigr] + (1 - t)(Ax_t - Sx_t),\ y^* - x_t \rangle \ge 0. \tag{42}
\]
It follows from (41) and (42) that
\[
\langle Ax_t - \gamma f(x_t) - (I - B)Sx_t,\ y^* - x_t \rangle \ge 0, \quad \forall y^* \in \Omega. \tag{43}
\]
Hence,
\[
\begin{aligned}
\|x_t - y^*\|^2
&\le \langle x_t - y^*, x_t - y^* \rangle + \langle \gamma f(x_t) + (I - B)Sx_t - Ax_t,\ x_t - y^* \rangle \\
&= \gamma\langle f(x_t) - f(y^*), x_t - y^* \rangle + \langle (I - B)(Sx_t - Sy^*), x_t - y^* \rangle
  + \langle Ay^* - Ax_t, x_t - y^* \rangle
  + \langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\ x_t - y^* \rangle \\
&\le \gamma\rho\|x_t - y^*\|^2 + (1 - \tilde{\gamma})\|x_t - y^*\|^2 - \tilde{\lambda}\|x_t - y^*\|^2
  + \langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\ x_t - y^* \rangle \\
&= \bigl[1 - (\tilde{\lambda} + \tilde{\gamma} - \gamma\rho)\bigr]\|x_t - y^*\|^2
  + \langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\ x_t - y^* \rangle.
\end{aligned} \tag{44}
\]
Therefore,
\[
\|x_t - y^*\|^2 \le \frac{1}{\tilde{\lambda} + \tilde{\gamma} - \gamma\rho}\,
\langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\ x_t - y^* \rangle, \quad y^* \in \Omega. \tag{45}
\]
In particular,
\[
\|x_t - y^*\| \le \frac{1}{\tilde{\lambda} + \tilde{\gamma} - \gamma\rho}\,
\|\gamma f(y^*) + (I - B)Sy^* - Ay^*\|, \quad \forall t \in (0, 1), \tag{46}
\]
which implies that $\{x_t\}$ is bounded.

We next prove that $\omega_w(x_t) \subset \Omega$; namely, if $(t_n)$ is a null sequence in $(0, 1)$ such that $x_{t_n} \to x'$ weakly as $n \to \infty$, then $x' \in \Omega$. To see this, we use (33) to get
\[
\langle (A - S)x_t,\ y^* - x_t \rangle \ge \frac{t}{1 - t}\,
\langle \gamma f(x_t) + (I - B)Sx_t - Ax_t,\ y^* - x_t \rangle, \quad y^* \in \digamma. \tag{47}
\]
However, since $A - S$ is monotone,
\[
\langle (A - S)y^*,\ y^* - x_t \rangle \ge \langle (A - S)x_t,\ y^* - x_t \rangle. \tag{48}
\]
Combining the last two relations yields
\[
\langle (A - S)y^*,\ y^* - x_t \rangle \ge \frac{t}{1 - t}\,
\langle \gamma f(x_t) + (I - B)Sx_t - Ax_t,\ y^* - x_t \rangle, \quad y^* \in \digamma. \tag{49}
\]
Letting $t = t_n \to 0{+}$ as $n \to \infty$ in (49), we get
\[
\langle (A - S)y^*,\ y^* - x' \rangle \ge 0, \quad y^* \in \digamma. \tag{50}
\]
The equivalent dual variational inequality of (50) is
\[
\langle (A - S)x',\ y^* - x' \rangle \ge 0, \quad y^* \in \digamma. \tag{51}
\]
Namely, $x'$ is a solution of VI (13); hence $x' \in \Omega$.

We further prove that $x' = x^*$, the unique solution of VI (12). As a matter of fact, by (45) we have
\[
\|x_{t_n} - x'\|^2 \le \frac{1}{\tilde{\lambda} + \tilde{\gamma} - \gamma\rho}\,
\langle \gamma f(x') + (I - B)Sx' - Ax',\ x_{t_n} - x' \rangle, \quad x' \in \Omega. \tag{52}
\]
Therefore, the weak convergence of $\{x_{t_n}\}$ to $x'$ implies that $x_{t_n} \to x'$ in norm. Now we can let $t = t_n \to 0$ in (45) to get
\[
\langle \gamma f(y^*) + (I - B)Sy^* - Ay^*,\ y^* - x' \rangle \le 0, \quad \forall y^* \in \Omega, \tag{53}
\]
which is equivalent to its dual variational inequality
\[
\langle \gamma f(x') + (I - B)Sx' - Ax',\ y^* - x' \rangle \le 0, \quad \forall y^* \in \Omega. \tag{54}
\]
It turns out that $x' \in \Omega$ solves VI (12). By uniqueness, we have $x' = x^*$. This is sufficient to guarantee that $x_t \to x^*$ in norm as $t \to 0{+}$. This completes the proof.
Corollary 6. For each $(s, t) \in (0, \kappa) \times (0, 1)$, let $\{x_{s,t}\}$ be a double net defined by
\[
x_{s,t} = P_C\!\left[ s\bigl(t\gamma f(x_{s,t}) + (I - tB)Sx_{s,t}\bigr)
  + (1 - s)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right]. \tag{55}
\]
Then, for each fixed $t \in (0, 1)$, the net $\{x_{s,t}\}$ defined by (55) converges in norm, as $s \to 0{+}$, to a solution $x_t \in \digamma$. Moreover, as $t \to 0{+}$, the net $\{x_t\}$ converges in norm to $x^*$, which solves the following variational inequality:
\[
x^* \in \Omega, \quad \langle (I - \gamma f)x^* - (I - B)Sx^*,\ x - x^* \rangle \ge 0, \quad \forall x \in \Omega, \tag{56}
\]
where $\Omega$ is the set of solutions of the following variational inequality:
\[
x^* \in \digamma, \quad \langle (I - S)x^*,\ x - x^* \rangle \ge 0, \quad \forall x \in \digamma. \tag{57}
\]

Corollary 7. For each $(s, t) \in (0, \kappa) \times (0, 1)$, let $\{x_{s,t}\}$ be a double net defined by
\[
x_{s,t} = P_C\!\left[ s(1 - t)Sx_{s,t} + (1 - s)\frac{1}{\lambda_s}\int_0^{\lambda_s} T(\nu)x_{s,t}\,d\nu \right]. \tag{58}
\]
Then, for each fixed $t \in (0, 1)$, the net $\{x_{s,t}\}$ defined by (58) converges in norm, as $s \to 0{+}$, to a solution $x_t \in \digamma$. Moreover, as $t \to 0{+}$, the net $\{x_t\}$ converges to the minimum norm solution $x^*$ of the following variational inequality:
\[
x^* \in \digamma, \quad \langle (I - S)x^*,\ x - x^* \rangle \ge 0, \quad \forall x \in \digamma. \tag{59}
\]

Proof. In (55), we take $f = 0$ and $B = I$. Then (55) reduces to (58). Hence, the net $\{x_t\}$ generated by (58) converges in norm to $x^* \in \Omega$, which satisfies
\[
x^* \in \Omega, \quad \langle x^*, x - x^* \rangle \ge 0, \quad \forall x \in \Omega. \tag{60}
\]
This indicates that
\[
\|x^*\|^2 \le \langle x^*, x \rangle \le \|x^*\|\,\|x\|, \quad \forall x \in \Omega. \tag{61}
\]
Therefore, $x^*$ is the minimum norm solution of VI (59). This completes the proof.
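As a small companion to Corollary 7, the sketch below iterates the implicit relation (58) in the same kind of toy setting used earlier (a ball, the rotation semigroup, a fixed rotation $S$); all concrete choices, including $\lambda_s = 1/s$, are illustrative assumptions rather than data from the paper.

import numpy as np

# A sketch of the special case (58) (that is, f = 0 and B = I in (55)), in the same kind of
# toy setting as before; all concrete choices, including lambda_s = 1/s, are assumptions.
def R(a):
    c, s_ = np.cos(a), np.sin(a)
    return np.array([[c, -s_], [s_, c]])

P_C = lambda x, r=2.0: x if np.linalg.norm(x) <= r else (r / np.linalg.norm(x)) * x
T = lambda v, x: R(v) @ x      # rotation semigroup, common fixed point set F = {0}
S = lambda x: R(0.5) @ x       # nonexpansive

def cesaro(x, lam, m=400):
    # (1/lam) * integral_0^lam T(v) x dv, approximated by a midpoint rule
    vs = np.linspace(0.0, lam, m, endpoint=False) + lam / (2 * m)
    return np.mean([T(v, x) for v in vs], axis=0)

def x_st(s, t, iters=500):
    x = np.array([1.0, 1.0])
    for _ in range(iters):     # Picard iteration for the implicit relation (58)
        x = P_C(s * (1 - t) * S(x) + (1 - s) * cesaro(x, 1.0 / s))
    return x

# Letting s -> 0+ and then t -> 0+ singles out the minimum norm solution of (59);
# here F = {0}, so the computed points shrink toward the origin.
for t in [0.5, 0.1]:
    print(t, x_st(0.05, t))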
Acknowledgment

The author was supported in part by NSFC 71161001-G0105, NNSF of China (10671157 and 61261044), NGY2012097, and the Beifang University of Nationalities scientific research project.

References

[1] I. Yamada, "The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings," in Inherently Parallel Algorithms for Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, and S. Reich, Eds., pp. 473-504, Elsevier, Amsterdam, The Netherlands, 2001.
[2] L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, "Relaxed extragradient iterative methods for variational inequalities," Applied Mathematics and Computation, vol. 218, no. 3, pp. 1112-1123, 2011.
[3] L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, "Some iterative methods for finding fixed points and for solving constrained convex minimization problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 16, pp. 5286-5302, 2011.
[4] L.-C. Ceng, S.-M. Guu, and J.-C. Yao, "A general iterative method with strongly positive operators for general variational inequalities," Computers & Mathematics with Applications, vol. 59, no. 4, pp. 1441-1452, 2010.
[5] S.-S. Chang, H. W. Joseph Lee, and C. K. Chan, "A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization," Nonlinear Analysis: Theory, Methods & Applications, vol. 70, no. 9, pp. 3307-3319, 2009.
[6] S. S. Chang, H. W. Joseph Lee, C. K. Chan, and J. A. Liu, "A new method for solving a system of generalized nonlinear variational inequalities in Banach spaces," Applied Mathematics and Computation, vol. 217, no. 16, pp. 6830-6837, 2011.
[7] Y. J. Cho, I. K. Argyros, and N. Petrot, "Approximation methods for common solutions of generalized equilibrium, systems of nonlinear variational inequalities and fixed point problems," Computers & Mathematics with Applications, vol. 60, no. 8, pp. 2292-2301, 2010.
[8] Y. J. Cho, S. M. Kang, and X. Qin, "On systems of generalized nonlinear variational inequalities in Banach spaces," Applied Mathematics and Computation, vol. 206, no. 1, pp. 214-220, 2008.
[9] F. Cianciaruso, V. Colao, L. Muglia, and H.-K. Xu, "On an implicit hierarchical fixed point approach to variational inequalities," Bulletin of the Australian Mathematical Society, vol. 80, no. 1, pp. 117-124, 2009.
[10] V. Colao, G. L. Acedo, and G. Marino, "An implicit method for finding common solutions of variational inequalities and systems of equilibrium problems and fixed points of infinite family of nonexpansive mappings," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 7-8, pp. 2708-2715, 2009.
[11] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer, New York, NY, USA, 1984.
[12] P. Kumam, N. Petrot, and R. Wangkeeree, "Existence and iterative approximation of solutions of generalized mixed quasi-variational-like inequality problem in Banach spaces," Applied Mathematics and Computation, vol. 217, no. 18, pp. 7496-7503, 2011.
[13] M. Aslam Noor, "Some developments in general variational inequalities," Applied Mathematics and Computation, vol. 152, no. 1, pp. 199-277, 2004.
[14] M. A. Noor, "An extragradient algorithm for solving general nonconvex variational inequalities," Applied Mathematics Letters, vol. 23, no. 8, pp. 917-921, 2010.
[15] H.-K. Xu, "Viscosity method for hierarchical fixed point approach to variational inequalities," Taiwanese Journal of Mathematics, vol. 14, no. 2, pp. 463-478, 2010.
[16] I. Yamada, N. Ogura, and N. Shirakawa, "A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems," in Inverse Problems, Image Analysis, and Medical Imaging, vol. 313 of Contemporary Mathematics, pp. 269-305, American Mathematical Society, Providence, RI, USA, 2002.
[17] Y. Yao, Y. J. Cho, and R. Chen, "An iterative algorithm for solving fixed point problems, variational inequality problems and mixed equilibrium problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 7-8, pp. 3363-3373, 2009.
[18] Y. Yao, R. Chen, and H.-K. Xu, "Schemes for finding minimum-norm solutions of variational inequalities," Nonlinear Analysis: Theory, Methods & Applications, vol. 72, no. 7-8, pp. 3447-3456, 2010.
[19] Y. Yao, Y. J. Cho, and Y.-C. Liou, "Iterative algorithms for hierarchical fixed points problems and variational inequalities," Mathematical and Computer Modelling, vol. 52, no. 9-10, pp. 1697-1705, 2010.
[20] Y. Yao and J.-C. Yao, "On modified iterative method for nonexpansive mappings and monotone mappings," Applied Mathematics and Computation, vol. 186, no. 2, pp. 1551-1558, 2007.
[21] H. Zegeye, E. U. Ofoedu, and N. Shahzad, "Convergence theorems for equilibrium problem, variational inequality problem and countably infinite relatively quasi-nonexpansive mappings," Applied Mathematics and Computation, vol. 216, no. 12, pp. 3439-3449, 2010.
[22] H. Zegeye and N. Shahzad, "A hybrid scheme for finite families of equilibrium, variational inequality and fixed point problems," Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 1, pp. 263-272, 2011.
[23] L.-C. Zeng and J.-C. Yao, "Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems," Taiwanese Journal of Mathematics, vol. 10, no. 5, pp. 1293-1303, 2006.
[24] I. Yamada and N. Ogura, "Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings," Numerical Functional Analysis and Optimization, vol. 25, no. 7-8, pp. 619-655, 2004.
[25] N. Buong, "Strong convergence theorem for nonexpansive semigroups in Hilbert space," Nonlinear Analysis: Theory, Methods & Applications, vol. 72, no. 12, pp. 4534-4540, 2010.
[26] R. Chen and H. He, "Viscosity approximation of common fixed points of nonexpansive semigroups in Banach space," Applied Mathematics Letters, vol. 20, no. 7, pp. 751-757, 2007.
[27] R. Chen and Y. Song, "Convergence to common fixed point of nonexpansive semigroups," Journal of Computational and Applied Mathematics, vol. 200, no. 2, pp. 566-575, 2007.
[28] F. Cianciaruso, G. Marino, and L. Muglia, "Iterative methods for equilibrium and fixed point problems for nonexpansive semigroups in Hilbert spaces," Journal of Optimization Theory and Applications, vol. 146, no. 2, pp. 491-509, 2010.
[29] A. T. M. Lau, "Semigroup of nonexpansive mappings on a Hilbert space," Journal of Mathematical Analysis and Applications, vol. 105, no. 2, pp. 514-522, 1985.
[30] A. T.-M. Lau and W. Takahashi, "Fixed point properties for semigroup of nonexpansive mappings on Fréchet spaces," Nonlinear Analysis: Theory, Methods & Applications, vol. 70, no. 11, pp. 3837-3841, 2009.
[31] A. T.-M. Lau and Y. Zhang, "Fixed point properties of semigroups of non-expansive mappings," Journal of Functional Analysis, vol. 254, no. 10, pp. 2534-2554, 2008.
[32] H. Zegeye and N. Shahzad, "Strong convergence theorems for a finite family of nonexpansive mappings and semigroups via the hybrid method," Nonlinear Analysis: Theory, Methods & Applications, vol. 72, no. 1, pp. 325-329, 2010.
[33] T. Shimizu and W. Takahashi, "Strong convergence to common fixed points of families of nonexpansive mappings," Journal of Mathematical Analysis and Applications, vol. 211, no. 1, pp. 71-83, 1997.
[34] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 1990.
[35] X. Lu, H.-K. Xu, and X. Yin, "Hybrid methods for a class of monotone variational inequalities," Nonlinear Analysis: Theory, Methods & Applications, vol. 71, no. 3-4, pp. 1032-1041, 2009.