
Available online at www.tjnsa.com
J. Nonlinear Sci. Appl. 9 (2016), 61–74
Research Article
Algorithms for the variational inequalities and fixed
point problems
Yaqiang Liu^a, Zhangsong Yao^{b,∗}, Yeong-Cheng Liou^c, Li-Jun Zhu^d

^a School of Management, Tianjin Polytechnic University, Tianjin 300387, China.
^b School of Information Engineering, Nanjing Xiaozhuang University, Nanjing 211171, China.
^c Department of Information Management, Cheng Shiu University, Kaohsiung 833, Taiwan; and Center for General Education, Kaohsiung Medical University, Kaohsiung 807, Taiwan.
^d School of Mathematics and Information Science, Beifang University of Nationalities, Yinchuan 750021, China.
Communicated by Yeol Je Cho
Abstract
A system of variational inequality and fixed point problems is considered. Two algorithms are constructed which find the minimum-norm solution of this system of variational inequality and fixed point problems. ©2016 All rights reserved.
Keywords: Variational inequality, monotone mapping, nonexpansive mapping, fixed point, minimum norm.
2010 MSC: 47H05, 47H10, 47H17.
1. Introduction
Variational inequality problems were initially studied by Stampacchia [17] in 1964. Variational inequalities have applications in diverse disciplines such as physics, optimal control, optimization, mathematical programming, mechanics and finance; see [12], [16], [17], [29] and the references therein. The main purpose of this paper is to find the minimum norm solution of a system of variational inequality and fixed point problems.
Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$.
∗
Corresponding author
Email addresses: yani3115791@126.com (Yaqiang Liu), yaozhsong@163.com (Zhangsong Yao),
simplex_liou@hotmail.com (Yeong-Cheng Liou), zhulijun1995@sohu.com (Li-Jun Zhu)
Received 2015-05-12
Definition 1.1. A mapping $F : C \to H$ is called $\zeta$-inverse strongly monotone if there exists a real number $\zeta > 0$ such that
$$\langle Fx - Fy, x - y\rangle \ge \zeta\|Fx - Fy\|^2, \quad \forall x, y \in C.$$
Definition 1.2. A mapping $R : C \to H$ is called a $\kappa$-contraction if there exists a constant $\kappa \in [0, 1)$ such that $\|R(x) - R(y)\| \le \kappa\|x - y\|$ for all $x, y \in C$.
Definition 1.3. A mapping $N : C \to C$ is said to be nonexpansive if
$$\|Nx - Ny\| \le \|x - y\|, \quad \forall x, y \in C.$$
We use $\mathrm{Fix}(N)$ to denote the set of fixed points of $N$.
Definition 1.4. The metric projection $\mathrm{Proj}_C : H \to C$ assigns to each point $x \in H$ the unique point $\mathrm{Proj}_C x \in C$ satisfying
$$\|x - \mathrm{Proj}_C x\| = \inf_{y \in C}\|x - y\| =: d(x, C).$$
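In $\mathbb{R}^n$ the metric projection has a closed form for simple sets, which is useful for exercising the algorithms of this paper numerically. A minimal sketch (the box and ball below are our own illustrative choices, not sets from the paper):

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection onto the box C = [lo, hi]^n: componentwise clipping.
    return np.clip(x, lo, hi)

def proj_ball(x, r=1.0):
    # Metric projection onto the closed ball C = {y : ||y|| <= r}.
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

x = np.array([2.0, 0.0])
p = proj_ball(x)  # nearest point of the unit ball to x
# Variational characterization of the projection: <x - p, y - p> <= 0 for y in C
# (this is the property recorded in Lemma 2.1 below).
for y in [np.array([0.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]:
    assert np.dot(x - p, y - p) <= 1e-12
```
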
Let $F : C \to H$ be a nonlinear mapping. Recall that the classical variational inequality problem is to find $x^* \in C$ such that
$$\langle Fx^*, x - x^*\rangle \ge 0 \quad \text{for all } x \in C. \tag{1.1}$$
The set of solutions of the variational inequality (1.1) is denoted by $VI(F, C)$. The variational inequality problem has been extensively studied in the literature; for related works, see, e.g., [1]-[11], [13], [15], [19]-[28], [30]-[34] and the references therein. For finding an element of $\mathrm{Fix}(N) \cap VI(F, C)$, Takahashi and Toyoda [19] introduced the following iterative scheme:
$$x_{n+1} = \zeta_n x_n + (1 - \zeta_n)N\mathrm{Proj}_C(x_n - \eta_n F x_n), \quad n \ge 0, \tag{1.2}$$
where $\mathrm{Proj}_C$ is the metric projection of $H$ onto $C$, $\{\zeta_n\}$ is a sequence in $(0, 1)$, and $\{\eta_n\}$ is a sequence in $(0, 2\zeta)$. Takahashi and Toyoda showed that the sequence $\{x_n\}$ converges weakly to some $z \in \mathrm{Fix}(N) \cap VI(F, C)$.
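Scheme (1.2) is easy to exercise numerically. The following sketch uses our own toy data, not an example from the paper: $C$ is the closed unit ball, $N$ the identity, and $F(x) = x - a$, which is $1$-inverse strongly monotone, so that $\mathrm{Fix}(N) \cap VI(F, C) = \{a\}$ when $a \in C$:

```python
import numpy as np

def proj_ball(x, r=1.0):
    # Metric projection onto the closed ball of radius r.
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# Toy data: C = unit ball, N = identity, F(x) = x - a (1-inverse strongly monotone).
a = np.array([0.3, -0.2])       # a lies in C, so Fix(N) ∩ VI(F, C) = {a}
N = lambda x: x
F = lambda x: x - a

x = np.array([0.9, 0.0])
zeta, eta = 0.5, 0.5            # constant choices: zeta_n in (0,1), eta_n in (0, 2)
for _ in range(200):
    # One step of scheme (1.2).
    x = zeta * x + (1 - zeta) * N(proj_ball(x - eta * F(x)))

assert np.linalg.norm(x - a) < 1e-8
```

On this toy instance the iterates contract toward $a$ geometrically; in general the theorem only guarantees weak convergence.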
Subsequently, Nadezhkina and Takahashi [11] and Zeng and Yao [34] proposed so-called extragradient methods for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of a variational inequality problem.
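The benefit of the extragradient idea can be seen on a classical toy example (our own choice, not from the paper): $F(x_1, x_2) = (x_2, -x_1)$ is monotone and $1$-Lipschitz but not strongly monotone on $C = \mathbb{R}^2$, the unique solution of (1.1) is the origin, and a plain projected-gradient step diverges while the extragradient (predict-then-correct) step of Korpelevich [8] contracts:

```python
import numpy as np

# F is monotone (in fact skew-symmetric) and 1-Lipschitz; VI(F, R^2) = {0}.
F = lambda x: np.array([x[1], -x[0]])
eta = 0.5                            # step size in (0, 1/L)

def forward_step(x):
    # Plain projected-gradient step (Proj_C = identity since C = R^2).
    return x - eta * F(x)

def extragradient_step(x):
    # Korpelevich: predict with F(x), then correct with F at the predicted point.
    x_bar = x - eta * F(x)
    return x - eta * F(x_bar)

x = np.array([1.0, 0.0])
for _ in range(200):
    x = extragradient_step(x)
assert np.linalg.norm(x) < 1e-6      # extragradient converges to the solution 0

y = np.array([1.0, 0.0])
for _ in range(50):
    y = forward_step(y)
assert np.linalg.norm(y) > 10.0      # the plain step spirals outward
```

Each plain step multiplies the norm by $\sqrt{1 + \eta^2} > 1$, while each extragradient step multiplies it by $\sqrt{(1 - \eta^2)^2 + \eta^2} < 1$ for $\eta \in (0, 1)$.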
Recently, Ceng, Wang and Yao [3] considered the following general system of variational inequalities: find $(x^*, y^*) \in C \times C$ such that
$$\begin{cases} \langle \eta F y^* + x^* - y^*, x - x^*\rangle \ge 0, & \forall x \in C,\\ \langle \xi G x^* + y^* - x^*, x - y^*\rangle \ge 0, & \forall x \in C, \end{cases} \tag{SVI}$$
where $F, G : C \to H$ are two nonlinear mappings, $y^* = \mathrm{Proj}_C(x^* - \xi G x^*)$, and $\eta > 0$ and $\xi > 0$ are two constants. The solution set of (SVI) is denoted by $\Omega$.
If we take $F = G$, then (SVI) reduces to finding $x^* \in C$ such that
$$\begin{cases} \langle \eta F y^* + x^* - y^*, x - x^*\rangle \ge 0, & \forall x \in C,\\ \langle \xi F x^* + y^* - x^*, x - y^*\rangle \ge 0, & \forall x \in C, \end{cases}$$
which was introduced by Verma [20] (see also Verma [21]). Further, if we add the requirement that $x^* = y^*$, then (SVI) reduces to the classical variational inequality problem (1.1). For finding an element of $\mathrm{Fix}(N) \cap \Omega$, Ceng, Wang and Yao [3] introduced the following relaxed extragradient method:
$$\begin{cases} y^n = \mathrm{Proj}_C(x^n - \xi G x^n),\\ x^{n+1} = \zeta_n u + \beta_n x^n + \gamma_n N\mathrm{Proj}_C(y^n - \eta F y^n), \quad n \ge 0. \end{cases} \tag{1.3}$$
They proved strong convergence of the above method to some element in $\mathrm{Fix}(N) \cap \Omega$.
On the other hand, in many problems one needs to find a solution with minimum norm. A typical example is the least-squares solution to the constrained linear inverse problem; see [14].
It is our purpose in this paper to construct two methods, one implicit and one explicit, for finding the minimum-norm element in $\mathrm{Fix}(N) \cap \Omega$; namely, the unique solution $x^*$ of the quadratic minimization problem
$$x^* = \arg\min_{x \in \mathrm{Fix}(N) \cap \Omega} \|x\|^2.$$
We obtain two strong convergence theorems.
2. Preliminaries
Let C be a nonempty closed convex subset of H. The following lemmas are useful for our main results.
Lemma 2.1. Given $x \in H$ and $z \in C$:
(i) $z = \mathrm{Proj}_C x$ if and only if $\langle x - z, y - z\rangle \le 0$ for all $y \in C$.
(ii) $z = \mathrm{Proj}_C x$ if and only if $\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$ for all $y \in C$.
(iii) There holds the relation
$$\langle \mathrm{Proj}_C x - \mathrm{Proj}_C y, x - y\rangle \ge \|\mathrm{Proj}_C x - \mathrm{Proj}_C y\|^2 \quad \text{for all } x, y \in H.$$
Consequently, $\mathrm{Proj}_C$ is nonexpansive and monotone.
Lemma 2.2 ([3]). Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let the mapping $F : C \to H$ be $\zeta$-inverse strongly monotone. Then we have
$$\|(I - \eta F)x - (I - \eta F)y\|^2 \le \|x - y\|^2 + \eta(\eta - 2\zeta)\|Fx - Fy\|^2, \quad \forall x, y \in C.$$
In particular, if $0 \le \eta \le 2\zeta$, then $I - \eta F$ is nonexpansive.
Lemma 2.3 ([3]). $x^*$ is a solution of (SVI) if and only if $x^*$ is a fixed point of the mapping $U : C \to C$ defined by
$$U(x) = \mathrm{Proj}_C[\mathrm{Proj}_C(x - \xi G x) - \eta F\,\mathrm{Proj}_C(x - \xi G x)], \quad \forall x \in C,$$
where $y^* = \mathrm{Proj}_C(x^* - \xi G x^*)$. In particular, if the mappings $F, G : C \to H$ are $\zeta$-inverse strongly monotone and $\delta$-inverse strongly monotone, respectively, then the mapping $U$ is nonexpansive provided $\eta \in (0, 2\zeta)$ and $\xi \in (0, 2\delta)$.
Lemma 2.4 ([18]). Let $\{x^n\}$ and $\{y^n\}$ be bounded sequences in a Banach space $X$ and let $\{\delta_n\}$ be a sequence in $[0, 1]$ with $0 < \liminf_{n\to\infty}\delta_n \le \limsup_{n\to\infty}\delta_n < 1$. Suppose $x^{n+1} = (1 - \delta_n)y^n + \delta_n x^n$ for all integers $n \ge 0$ and $\limsup_{n\to\infty}(\|y^{n+1} - y^n\| - \|x^{n+1} - x^n\|) \le 0$. Then $\lim_{n\to\infty}\|y^n - x^n\| = 0$.
Lemma 2.5 ([10]). Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $N : C \to C$ be a nonexpansive mapping. Then the mapping $I - N$ is demiclosed; that is, if $\{x^n\}$ is a sequence in $C$ such that $x^n \to x^*$ weakly and $(I - N)x^n \to y$ strongly, then $(I - N)x^* = y$.
Lemma 2.6 ([23]). Assume $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n\gamma_n,$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that
(1) $\sum_{n=1}^{\infty}\gamma_n = \infty$;
(2) $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=1}^{\infty}|\delta_n\gamma_n| < \infty$.
Then $\lim_{n\to\infty}a_n = 0$.
3. Main results
In this section we introduce two schemes for finding the unique point $x^*$ which solves the quadratic minimization problem
$$\|x^*\|^2 = \min_{x \in \mathrm{Fix}(N) \cap \Omega} \|x\|^2. \tag{3.1}$$
Let C be a nonempty closed convex subset of a real Hilbert space H. Let R : C → H be a κ-contraction. Let
the mappings F, G : C → H be ζ-inverse strongly monotone and δ-inverse strongly monotone, respectively.
Suppose η ∈ (0, 2ζ) and ξ ∈ (0, 2δ). Let N : C → C be a nonexpansive mapping.
For each $t \in (0, 1)$, we study the following mapping $T_t$ given by
$$T_t x = N\mathrm{Proj}_C[tR(x) + (1 - t)\mathrm{Proj}_C(I - \eta F)\mathrm{Proj}_C(I - \xi G)x], \quad \forall x \in C.$$
Since the mappings $N$, $\mathrm{Proj}_C$, $I - \eta F$ and $I - \xi G$ are nonexpansive, one checks easily that $\|T_t x - T_t y\| \le [1 - (1 - \kappa)t]\|x - y\|$, which implies that $T_t$ is a contraction. Hence there exists a unique fixed point $x^t$ of $T_t$ in $C$, which satisfies
$$\begin{cases} z^t = \mathrm{Proj}_C(x^t - \xi G x^t),\\ y^t = \mathrm{Proj}_C(z^t - \eta F z^t),\\ x^t = N\mathrm{Proj}_C[tR(x^t) + (1 - t)y^t]. \end{cases} \tag{3.2}$$
In particular, if we take $R \equiv 0$, then (3.2) reduces to
$$\begin{cases} z^t = \mathrm{Proj}_C(x^t - \xi G x^t),\\ y^t = \mathrm{Proj}_C(z^t - \eta F z^t),\\ x^t = N\mathrm{Proj}_C[(1 - t)y^t]. \end{cases} \tag{3.3}$$
We next prove that the implicit methods (3.2) and (3.3) both converge.
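For a fixed $t$, the point $x^t$ of (3.3) can be computed by Picard iteration, since $T_t$ is a contraction with constant $1 - (1 - \kappa)t$ (here $\kappa = 0$ because $R \equiv 0$). The sketch below uses a deliberately degenerate toy of our own ($F = G = 0$ and $N = I$, so that $\Gamma = C$); for this instance every $x^t$ already equals the minimum-norm element of $C$:

```python
import numpy as np

lo = np.array([1.0, -2.0]); hi = np.array([2.0, -1.0])
proj = lambda x: np.clip(x, lo, hi)   # Proj_C for the box C = [1,2] x [-2,-1]

# Degenerate toy: F = G = 0 and N = I, so Gamma = Fix(N) ∩ Omega = C, and the
# minimum-norm element of C is (1, -1).
def T(t, x):                          # the mapping T_t of (3.3) with R ≡ 0
    z = proj(x)                       # z^t = Proj_C(x - xi*Gx)  with G = 0
    y = proj(z)                       # y^t = Proj_C(z - eta*Fz) with F = 0
    return proj((1 - t) * y)          # x^t = N Proj_C[(1-t) y^t] with N = I

def implicit_point(t, x0, iters=200):
    # T_t is a contraction with constant 1 - t, so Picard iteration converges
    # to its unique fixed point x^t (Banach fixed-point theorem).
    x = x0
    for _ in range(iters):
        x = T(t, x)
    return x

x_star = np.array([1.0, -1.0])        # minimum-norm element of Gamma
for t in [0.5, 0.1, 0.01]:
    xt = implicit_point(t, np.array([2.0, -2.0]))
    assert np.linalg.norm(xt - x_star) < 1e-6
```
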
Theorem 3.1. Suppose $\Gamma := \mathrm{Fix}(N) \cap \Omega \ne \emptyset$. Then the net $\{x^t\}$ generated by the implicit method (3.2) converges in norm, as $t \to 0^+$, to the unique solution $x^*$ of the following variational inequality:
$$x^* \in \Gamma, \quad \langle (I - R)x^*, x - x^*\rangle \ge 0, \quad x \in \Gamma. \tag{3.4}$$
In particular, if we take $R = 0$, then the net $\{x^t\}$ defined by (3.3) converges in norm, as $t \to 0^+$, to the minimum-norm element in $\Gamma$, namely the unique solution $x^*$ of the quadratic minimization problem
$$x^* = \arg\min_{x \in \Gamma}\|x\|^2.$$
Proof. First, we prove that $\{x^t\}$ is bounded. Take $u \in \Gamma$. From Lemma 2.3, we have $u = Nu$ and
$$u = \mathrm{Proj}_C[\mathrm{Proj}_C(u - \xi G u) - \eta F\,\mathrm{Proj}_C(u - \xi G u)]. \tag{3.5}$$
Put $v = \mathrm{Proj}_C(u - \xi G u)$. Then $u = \mathrm{Proj}_C(v - \eta F v)$. From Lemma 2.2, we note that
$$\|z^t - v\| = \|\mathrm{Proj}_C(x^t - \xi G x^t) - \mathrm{Proj}_C(u - \xi G u)\| \le \|x^t - u\|,$$
and
$$\|y^t - u\| = \|\mathrm{Proj}_C(z^t - \eta F z^t) - \mathrm{Proj}_C(v - \eta F v)\| \le \|z^t - v\|.$$
It follows from (3.2) that
$$\begin{aligned} \|x^t - u\| &= \|N\mathrm{Proj}_C[tR(x^t) + (1 - t)y^t] - N\mathrm{Proj}_C u\|\\ &\le \|t(R(x^t) - u) + (1 - t)(y^t - u)\|\\ &\le t\|R(x^t) - R(u)\| + t\|R(u) - u\| + (1 - t)\|y^t - u\|\\ &\le t\kappa\|x^t - u\| + t\|R(u) - u\| + (1 - t)\|x^t - u\|\\ &= [1 - (1 - \kappa)t]\|x^t - u\| + t\|R(u) - u\|, \end{aligned}$$
that is,
$$\|x^t - u\| \le \frac{\|R(u) - u\|}{1 - \kappa}.$$
Hence, $\{x^t\}$ is bounded, and so are $\{y^t\}$, $\{z^t\}$ and $\{R(x^t)\}$. Now we can choose a constant $M > 0$ such that
$$\sup_t\Big\{2\|R(x^t) - u\|\|y^t - u\| + \|R(x^t) - u\|^2,\ 2\xi\|x^t - z^t - (u - v)\|,\ 2\eta\|z^t - y^t + (u - v)\|,\ \|y^t - R(x^t)\|^2\Big\} \le M.$$
Since $F$ is $\zeta$-inverse strongly monotone and $G$ is $\delta$-inverse strongly monotone, we have from Lemma 2.2 that
$$\|y^t - u\|^2 = \|(I - \eta F)z^t - (I - \eta F)v\|^2 \le \|z^t - v\|^2 + \eta(\eta - 2\zeta)\|Fz^t - Fv\|^2, \tag{3.6}$$
and
$$\|z^t - v\|^2 = \|(I - \xi G)x^t - (I - \xi G)u\|^2 \le \|x^t - u\|^2 + \xi(\xi - 2\delta)\|Gx^t - Gu\|^2. \tag{3.7}$$
Combining (3.6) with (3.7), we get
$$\|y^t - u\|^2 \le \|x^t - u\|^2 + \eta(\eta - 2\zeta)\|Fz^t - Fv\|^2 + \xi(\xi - 2\delta)\|Gx^t - Gu\|^2. \tag{3.8}$$
From (3.2) and (3.8), we have
$$\begin{aligned} \|x^t - u\|^2 &\le \|(1 - t)(y^t - u) + t(R(x^t) - u)\|^2\\ &= (1 - t)^2\|y^t - u\|^2 + 2t(1 - t)\langle R(x^t) - u, y^t - u\rangle + t^2\|R(x^t) - u\|^2\\ &\le \|y^t - u\|^2 + tM\\ &\le \|x^t - u\|^2 + \eta(\eta - 2\zeta)\|Fz^t - Fv\|^2 + \xi(\xi - 2\delta)\|Gx^t - Gu\|^2 + tM, \end{aligned} \tag{3.9}$$
that is,
$$\eta(2\zeta - \eta)\|Fz^t - Fv\|^2 + \xi(2\delta - \xi)\|Gx^t - Gu\|^2 \le tM \to 0.$$
Since $\eta(2\zeta - \eta) > 0$ and $\xi(2\delta - \xi) > 0$, we derive
$$\lim_{t\to 0}\|Fz^t - Fv\| = 0 \quad \text{and} \quad \lim_{t\to 0}\|Gx^t - Gu\| = 0. \tag{3.10}$$
From Lemma 2.1 and (3.2), we obtain
$$\begin{aligned} \|z^t - v\|^2 &= \|\mathrm{Proj}_C(x^t - \xi G x^t) - \mathrm{Proj}_C(u - \xi G u)\|^2\\ &\le \langle (x^t - \xi G x^t) - (u - \xi G u), z^t - v\rangle\\ &= \tfrac{1}{2}\big[\|(x^t - \xi G x^t) - (u - \xi G u)\|^2 + \|z^t - v\|^2 - \|(x^t - u) - \xi(Gx^t - Gu) - (z^t - v)\|^2\big]\\ &\le \tfrac{1}{2}\big[\|x^t - u\|^2 + \|z^t - v\|^2 - \|(x^t - z^t) - \xi(Gx^t - Gu) - (u - v)\|^2\big]\\ &= \tfrac{1}{2}\big[\|x^t - u\|^2 + \|z^t - v\|^2 - \|x^t - z^t - (u - v)\|^2 + 2\xi\langle x^t - z^t - (u - v), Gx^t - Gu\rangle - \xi^2\|Gx^t - Gu\|^2\big], \end{aligned}$$
and
$$\begin{aligned} \|y^t - u\|^2 &= \|\mathrm{Proj}_C(z^t - \eta F z^t) - \mathrm{Proj}_C(v - \eta F v)\|^2\\ &\le \langle z^t - \eta F z^t - (v - \eta F v), y^t - u\rangle\\ &= \tfrac{1}{2}\big[\|z^t - \eta F z^t - (v - \eta F v)\|^2 + \|y^t - u\|^2 - \|z^t - \eta F z^t - (v - \eta F v) - (y^t - u)\|^2\big]\\ &\le \tfrac{1}{2}\big[\|z^t - v\|^2 + \|y^t - u\|^2 - \|z^t - y^t + (u - v)\|^2 + 2\eta\langle Fz^t - Fv, z^t - y^t + (u - v)\rangle - \eta^2\|Fz^t - Fv\|^2\big]\\ &\le \tfrac{1}{2}\big[\|x^t - u\|^2 + \|y^t - u\|^2 - \|z^t - y^t + (u - v)\|^2 + 2\eta\langle Fz^t - Fv, z^t - y^t + (u - v)\rangle\big]. \end{aligned}$$
Thus, we have
$$\|z^t - v\|^2 \le \|x^t - u\|^2 - \|x^t - z^t - (u - v)\|^2 + M\|Gx^t - Gu\|, \tag{3.11}$$
and
$$\|y^t - u\|^2 \le \|x^t - u\|^2 - \|z^t - y^t + (u - v)\|^2 + M\|Fz^t - Fv\|. \tag{3.12}$$
By (3.9) and (3.11), we have
$$\|x^t - u\|^2 \le \|y^t - u\|^2 + tM \le \|z^t - v\|^2 + tM \le \|x^t - u\|^2 - \|x^t - z^t - (u - v)\|^2 + (\|Gx^t - Gu\| + t)M.$$
It follows that
$$\|x^t - z^t - (u - v)\|^2 \le (\|Gx^t - Gu\| + t)M.$$
Since $\|Gx^t - Gu\| \to 0$, we deduce that
$$\lim_{t\to 0}\|x^t - z^t - (u - v)\| = 0. \tag{3.13}$$
From (3.9) and (3.12), we have
$$\|x^t - u\|^2 \le \|y^t - u\|^2 + tM \le \|x^t - u\|^2 - \|z^t - y^t + (u - v)\|^2 + (\|Fz^t - Fv\| + t)M.$$
It follows that
$$\|z^t - y^t + (u - v)\|^2 \le (\|Fz^t - Fv\| + t)M,$$
which implies that
$$\lim_{t\to 0}\|z^t - y^t + (u - v)\| = 0. \tag{3.14}$$
Thus, combining (3.13) with (3.14), we deduce that
$$\lim_{t\to 0}\|x^t - y^t\| = 0. \tag{3.15}$$
We note that
$$\|x^t - Ny^t\| = \|N\mathrm{Proj}_C[tR(x^t) + (1 - t)y^t] - N\mathrm{Proj}_C y^t\| \le tM \to 0.$$
Hence,
$$\|Ny^t - y^t\| \le \|Ny^t - x^t\| + \|x^t - y^t\| \to 0.$$
Therefore,
$$\|x^t - Nx^t\| \to 0. \tag{3.16}$$
At the same time, from (3.2) and Lemma 2.3, we have
$$\|x^t - U(x^t)\| = \|N\mathrm{Proj}_C[tR(x^t) + (1 - t)U(x^t)] - N\mathrm{Proj}_C[U(x^t)]\| \le tM \to 0. \tag{3.17}$$
Next we show that $\{x^t\}$ is relatively norm-compact as $t \to 0$. Let $\{t_n\} \subset (0, 1)$ be a sequence such that $t_n \to 0$ as $n \to \infty$. Put $x^n := x^{t_n}$ and $y^n := y^{t_n}$. From (3.15)–(3.17), we have
$$\|x^n - y^n\| \to 0, \quad \|x^n - Nx^n\| \to 0 \quad \text{and} \quad \|x^n - U(x^n)\| \to 0.$$
From (3.2), we get
$$\begin{aligned} \|x^t - u\|^2 &= \|N\mathrm{Proj}_C[tR(x^t) + (1 - t)y^t] - Nu\|^2\\ &\le \|y^t - u - ty^t + tR(x^t)\|^2\\ &= \|y^t - u\|^2 - 2t\langle y^t, y^t - u\rangle + 2t\langle R(x^t), y^t - u\rangle + t^2\|y^t - R(x^t)\|^2\\ &= \|y^t - u\|^2 - 2t\langle y^t - u, y^t - u\rangle - 2t\langle u, y^t - u\rangle\\ &\quad + 2t\langle R(x^t) - R(u), y^t - u\rangle + 2t\langle R(u), y^t - u\rangle + t^2\|y^t - R(x^t)\|^2\\ &\le [1 - 2(1 - \kappa)t]\|x^t - u\|^2 + 2t\langle R(u) - u, y^t - u\rangle + t^2\|y^t - R(x^t)\|^2. \end{aligned} \tag{3.18}$$
It follows that
$$\|x^t - u\|^2 \le \frac{1}{1 - \kappa}\langle u - R(u), u - y^t\rangle + \frac{t}{2(1 - \kappa)}\|y^t - R(x^t)\|^2 \le \frac{1}{1 - \kappa}\langle u - R(u), u - y^t\rangle + \frac{t}{2(1 - \kappa)}M.$$
In particular,
$$\|x^n - u\|^2 \le \frac{1}{1 - \kappa}\langle u - R(u), u - y^n\rangle + \frac{t_n}{2(1 - \kappa)}M, \quad u \in \Gamma. \tag{3.19}$$
By the boundedness of $\{x^n\}$, without loss of generality, we may assume that $\{x^n\}$ converges weakly to a point $x^* \in C$. It is clear that $y^n \to x^*$ weakly. Since $\|x^n - Nx^n\| \to 0$ and $\|x^n - U(x^n)\| \to 0$, Lemma 2.5 (applied to $N$ and to the nonexpansive mapping $U$) yields $x^* \in \mathrm{Fix}(N) \cap \mathrm{Fix}(U) = \Gamma$. We substitute $x^*$ for $u$ in (3.19) to get
$$\|x^n - x^*\|^2 \le \frac{1}{1 - \kappa}\langle x^* - R(x^*), x^* - y^n\rangle + \frac{t_n}{2(1 - \kappa)}M.$$
So the weak convergence of $\{y^n\}$ to $x^*$ implies that $x^n \to x^*$ strongly. This establishes the relative norm-compactness of the net $\{x^t\}$ as $t \to 0$. Taking the limit as $n \to \infty$ in (3.19), we get
$$\|x^* - u\|^2 \le \frac{1}{1 - \kappa}\langle u - R(u), u - x^*\rangle, \quad u \in \Gamma. \tag{3.20}$$
This implies that $x^*$ solves the following variational inequality:
$$x^* \in \Gamma, \quad \langle (I - R)u, u - x^*\rangle \ge 0, \quad u \in \Gamma,$$
which is equivalent to its dual variational inequality
$$x^* \in \Gamma, \quad \langle (I - R)x^*, u - x^*\rangle \ge 0, \quad u \in \Gamma.$$
Therefore, $x^* = (\mathrm{Proj}_\Gamma R)x^*$; that is, $x^*$ is the unique fixed point in $\Gamma$ of the contraction $\mathrm{Proj}_\Gamma R$. This is sufficient to conclude that the entire net $\{x^t\}$ converges in norm to $x^*$ as $t \to 0$.
Setting $R = 0$, (3.20) reduces to
$$\|x^* - u\|^2 \le \langle u, u - x^*\rangle, \quad u \in \Gamma.$$
Equivalently,
$$\|x^*\|^2 \le \langle x^*, u\rangle, \quad u \in \Gamma.$$
This implies that
$$\|x^*\| \le \|u\|, \quad u \in \Gamma.$$
Therefore, $x^*$ is the minimum-norm element in $\Gamma$. This completes the proof.
Below we introduce an explicit scheme for finding the minimum-norm element in Γ.
Theorem 3.2. Suppose $\Gamma := \mathrm{Fix}(N) \cap \Omega \ne \emptyset$. For arbitrarily given $x^0 \in C$, let the sequences $\{x^n\}$, $\{y^n\}$ and $\{z^n\}$ be generated iteratively by
$$\begin{cases} z^n = \mathrm{Proj}_C(x^n - \xi G x^n),\\ y^n = \mathrm{Proj}_C(z^n - \eta F z^n),\\ x^{n+1} = \delta_n x^n + (1 - \delta_n)N\mathrm{Proj}_C[\zeta_n R(x^n) + (1 - \zeta_n)y^n], \quad n \ge 0, \end{cases} \tag{3.21}$$
where $\{\zeta_n\}$ and $\{\delta_n\}$ are two sequences in $[0, 1]$ satisfying the following conditions:
(i) $\lim_{n\to\infty}\zeta_n = 0$ and $\sum_{n=0}^{\infty}\zeta_n = \infty$;
(ii) $0 < \liminf_{n\to\infty}\delta_n \le \limsup_{n\to\infty}\delta_n < 1$.
Then the sequence $\{x^n\}$ converges strongly to $x^*$, the unique solution of the variational inequality (3.4). In particular, if $R = 0$, then $x^*$ is the minimum-norm element in $\Gamma$.
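The explicit scheme (3.21) can be exercised on a toy instance of our own (not from the paper): $C = [-1, 1]^2$, $N = I$, and $F = G : x \mapsto x - a$ with $a$ in the interior of $C$, so that $\Gamma = \{a\}$ and the minimum-norm element of $\Gamma$ is $a$ itself:

```python
import numpy as np

proj = lambda x: np.clip(x, -1.0, 1.0)  # Proj_C for the box C = [-1,1]^2

a = np.array([0.3, -0.2])               # interior point of C
F = lambda x: x - a                     # 1-inverse strongly monotone
G = F
eta, xi = 0.5, 0.5                      # eta in (0,2), xi in (0,2)
N = lambda x: x                         # nonexpansive; here Gamma = {a}

x = np.array([0.9, 0.9])
for n in range(4000):
    zeta_n = 1.0 / (n + 1)              # condition (i): zeta_n -> 0, sum diverges
    delta_n = 0.5                       # condition (ii)
    z = proj(x - xi * G(x))
    y = proj(z - eta * F(z))
    # R = 0, so the scheme targets the minimum-norm element of Gamma = {a}.
    x = delta_n * x + (1 - delta_n) * N(proj((1 - zeta_n) * y))

assert np.linalg.norm(x - a) < 1e-2
```

Because $\zeta_n = 1/(n + 1)$ vanishes slowly, the error decays only like $O(\zeta_n)$ on this toy; faster-vanishing choices of $\zeta_n$ satisfying condition (i) trade off against the $\sum\zeta_n = \infty$ requirement.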
Proof. First, we prove that the sequences $\{x^n\}$, $\{y^n\}$ and $\{z^n\}$ are bounded. Take $u \in \Gamma$ and let $v = \mathrm{Proj}_C(u - \xi G u)$, so that $u = \mathrm{Proj}_C(v - \eta F v)$. From (3.21), we get
$$\|y^n - u\| = \|\mathrm{Proj}_C(z^n - \eta F z^n) - \mathrm{Proj}_C(v - \eta F v)\| \le \|z^n - v\| = \|\mathrm{Proj}_C(x^n - \xi G x^n) - \mathrm{Proj}_C(u - \xi G u)\| \le \|x^n - u\|,$$
and
$$\begin{aligned} \|x^{n+1} - u\| &= \|\delta_n(x^n - u) + (1 - \delta_n)(N\mathrm{Proj}_C[\zeta_n R(x^n) + (1 - \zeta_n)y^n] - u)\|\\ &\le \delta_n\|x^n - u\| + (1 - \delta_n)\|\zeta_n(R(x^n) - u) + (1 - \zeta_n)(y^n - u)\|\\ &\le \delta_n\|x^n - u\| + (1 - \delta_n)[\zeta_n\|R(x^n) - R(u)\| + \zeta_n\|R(u) - u\| + (1 - \zeta_n)\|y^n - u\|]\\ &\le \delta_n\|x^n - u\| + (1 - \delta_n)[\zeta_n\kappa\|x^n - u\| + \zeta_n\|R(u) - u\| + (1 - \zeta_n)\|x^n - u\|]\\ &= [1 - (1 - \kappa)(1 - \delta_n)\zeta_n]\|x^n - u\| + \zeta_n(1 - \delta_n)\|R(u) - u\|\\ &\le \max\Big\{\|x^n - u\|, \frac{\|R(u) - u\|}{1 - \kappa}\Big\}. \end{aligned}$$
By induction, we obtain, for all $n \ge 0$,
$$\|x^n - u\| \le \max\Big\{\|x^0 - u\|, \frac{\|R(u) - u\|}{1 - \kappa}\Big\}.$$
Hence, $\{x^n\}$ is bounded. Consequently, we deduce that $\{y^n\}$, $\{z^n\}$, $\{R(x^n)\}$, $\{Fz^n\}$ and $\{Gx^n\}$ are all bounded. Let $M > 0$ be a constant such that
$$\sup_n\Big\{\|y^n\| + \|R(x^n)\|,\ 2\|y^n - R(x^n)\|\|y^n - u\| + \|y^n - R(x^n)\|^2,\ \|x^n - u\| + \|x^{n+1} - u\|,\ 2\xi\|x^n - z^n - (u - v)\|,\ 2\eta\|z^n - y^n + (u - v)\|,\ 2\eta\|Fz^n - Fv\|\|z^n - y^n + (u - v)\|,\ 2\xi\|x^n - z^n - (u - v)\|\|Gx^n - Gu\|\Big\} \le M.$$
Next we show that $\lim_{n\to\infty}\|y^n - Ny^n\| = 0$.
Define $x^{n+1} = \delta_n x^n + (1 - \delta_n)u^n$ for all $n \ge 0$. It follows from (3.21) that
$$\begin{aligned} \|u^{n+1} - u^n\| &= \|N\mathrm{Proj}_C[\zeta_{n+1}R(x^{n+1}) + (1 - \zeta_{n+1})y^{n+1}] - N\mathrm{Proj}_C[\zeta_n R(x^n) + (1 - \zeta_n)y^n]\|\\ &\le \|\zeta_{n+1}R(x^{n+1}) + (1 - \zeta_{n+1})y^{n+1} - \zeta_n R(x^n) - (1 - \zeta_n)y^n\|\\ &\le \|y^{n+1} - y^n\| + \zeta_{n+1}(\|y^{n+1}\| + \|R(x^{n+1})\|) + \zeta_n(\|y^n\| + \|R(x^n)\|)\\ &\le \|\mathrm{Proj}_C(z^{n+1} - \eta F z^{n+1}) - \mathrm{Proj}_C(z^n - \eta F z^n)\| + M(\zeta_{n+1} + \zeta_n)\\ &\le \|z^{n+1} - z^n\| + M(\zeta_{n+1} + \zeta_n)\\ &= \|\mathrm{Proj}_C(x^{n+1} - \xi G x^{n+1}) - \mathrm{Proj}_C(x^n - \xi G x^n)\| + M(\zeta_{n+1} + \zeta_n)\\ &\le \|x^{n+1} - x^n\| + M(\zeta_{n+1} + \zeta_n). \end{aligned}$$
This together with (i) implies that
$$\limsup_{n\to\infty}\big(\|u^{n+1} - u^n\| - \|x^{n+1} - x^n\|\big) \le 0.$$
Hence, by Lemma 2.4, we get $\lim_{n\to\infty}\|u^n - x^n\| = 0$. Consequently,
$$\lim_{n\to\infty}\|x^{n+1} - x^n\| = \lim_{n\to\infty}(1 - \delta_n)\|u^n - x^n\| = 0.$$
By the convexity of the norm $\|\cdot\|$, we have
$$\begin{aligned} \|x^{n+1} - u\|^2 &= \|\delta_n(x^n - u) + (1 - \delta_n)(u^n - u)\|^2\\ &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)\|u^n - u\|^2\\ &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)\|y^n - u - \zeta_n(y^n - R(x^n))\|^2\\ &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)\|y^n - u\|^2 + \zeta_n M. \end{aligned} \tag{3.22}$$
From Lemma 2.2 and (3.21), we have
$$\|y^n - u\|^2 \le \|z^n - v\|^2 + \eta(\eta - 2\zeta)\|Fz^n - Fv\|^2 \le \|x^n - u\|^2 + \xi(\xi - 2\delta)\|Gx^n - Gu\|^2 + \eta(\eta - 2\zeta)\|Fz^n - Fv\|^2. \tag{3.23}$$
Substituting (3.23) into (3.22), we have
$$\begin{aligned} \|x^{n+1} - u\|^2 &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)[\|x^n - u\|^2 + \xi(\xi - 2\delta)\|Gx^n - Gu\|^2 + \eta(\eta - 2\zeta)\|Fz^n - Fv\|^2] + \zeta_n M\\ &= \|x^n - u\|^2 + (1 - \delta_n)\xi(\xi - 2\delta)\|Gx^n - Gu\|^2 + (1 - \delta_n)\eta(\eta - 2\zeta)\|Fz^n - Fv\|^2 + \zeta_n M. \end{aligned}$$
Therefore,
$$\begin{aligned} (1 - \delta_n)\eta(2\zeta - \eta)\|Fz^n - Fv\|^2 + (1 - \delta_n)\xi(2\delta - \xi)\|Gx^n - Gu\|^2 &\le \|x^n - u\|^2 - \|x^{n+1} - u\|^2 + \zeta_n M\\ &\le (\|x^n - u\| + \|x^{n+1} - u\|)\|x^n - x^{n+1}\| + \zeta_n M\\ &\le (\|x^n - x^{n+1}\| + \zeta_n)M. \end{aligned}$$
Since $\liminf_{n\to\infty}(1 - \delta_n)\eta(2\zeta - \eta) > 0$, $\liminf_{n\to\infty}(1 - \delta_n)\xi(2\delta - \xi) > 0$, $\|x^n - x^{n+1}\| \to 0$ and $\zeta_n \to 0$, we derive
$$\lim_{n\to\infty}\|Fz^n - Fv\| = 0 \quad \text{and} \quad \lim_{n\to\infty}\|Gx^n - Gu\| = 0.$$
From Lemma 2.1 and (3.21), we obtain
$$\begin{aligned} \|z^n - v\|^2 &= \|\mathrm{Proj}_C(x^n - \xi G x^n) - \mathrm{Proj}_C(u - \xi G u)\|^2\\ &\le \langle (x^n - \xi G x^n) - (u - \xi G u), z^n - v\rangle\\ &= \tfrac{1}{2}\big[\|(x^n - \xi G x^n) - (u - \xi G u)\|^2 + \|z^n - v\|^2 - \|(x^n - u) - \xi(Gx^n - Gu) - (z^n - v)\|^2\big]\\ &\le \tfrac{1}{2}\big[\|x^n - u\|^2 + \|z^n - v\|^2 - \|(x^n - z^n) - \xi(Gx^n - Gu) - (u - v)\|^2\big]\\ &= \tfrac{1}{2}\big[\|x^n - u\|^2 + \|z^n - v\|^2 - \|x^n - z^n - (u - v)\|^2 + 2\xi\langle x^n - z^n - (u - v), Gx^n - Gu\rangle - \xi^2\|Gx^n - Gu\|^2\big], \end{aligned}$$
and
$$\begin{aligned} \|y^n - u\|^2 &= \|\mathrm{Proj}_C(z^n - \eta F z^n) - \mathrm{Proj}_C(v - \eta F v)\|^2\\ &\le \langle z^n - \eta F z^n - (v - \eta F v), y^n - u\rangle\\ &= \tfrac{1}{2}\big[\|z^n - \eta F z^n - (v - \eta F v)\|^2 + \|y^n - u\|^2 - \|z^n - \eta F z^n - (v - \eta F v) - (y^n - u)\|^2\big]\\ &\le \tfrac{1}{2}\big[\|z^n - v\|^2 + \|y^n - u\|^2 - \|z^n - y^n + (u - v)\|^2 + 2\eta\langle Fz^n - Fv, z^n - y^n + (u - v)\rangle - \eta^2\|Fz^n - Fv\|^2\big]\\ &\le \tfrac{1}{2}\big[\|x^n - u\|^2 + \|y^n - u\|^2 - \|z^n - y^n + (u - v)\|^2 + 2\eta\langle Fz^n - Fv, z^n - y^n + (u - v)\rangle\big]. \end{aligned}$$
Thus, we deduce
$$\begin{aligned} \|z^n - v\|^2 &\le \|x^n - u\|^2 - \|x^n - z^n - (u - v)\|^2 + 2\xi\|x^n - z^n - (u - v)\|\|Gx^n - Gu\|\\ &\le \|x^n - u\|^2 - \|x^n - z^n - (u - v)\|^2 + M\|Gx^n - Gu\|, \end{aligned} \tag{3.24}$$
and
$$\|y^n - u\|^2 \le \|x^n - u\|^2 - \|z^n - y^n + (u - v)\|^2 + M\|Fz^n - Fv\|. \tag{3.25}$$
By (3.22) and (3.24), we have
$$\begin{aligned} \|x^{n+1} - u\|^2 &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)\|y^n - u\|^2 + \zeta_n M\\ &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)\|z^n - v\|^2 + \zeta_n M\\ &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)[\|x^n - u\|^2 - \|x^n - z^n - (u - v)\|^2 + M\|Gx^n - Gu\|] + \zeta_n M\\ &\le \|x^n - u\|^2 - (1 - \delta_n)\|x^n - z^n - (u - v)\|^2 + (\|Gx^n - Gu\| + \zeta_n)M. \end{aligned}$$
It follows that
$$(1 - \delta_n)\|x^n - z^n - (u - v)\|^2 \le (\|x^{n+1} - x^n\| + \|Gx^n - Gu\| + \zeta_n)M.$$
Since $\liminf_{n\to\infty}(1 - \delta_n) > 0$, $\zeta_n \to 0$, $\|x^{n+1} - x^n\| \to 0$ and $\|Gx^n - Gu\| \to 0$, we deduce that
$$\lim_{n\to\infty}\|x^n - z^n - (u - v)\| = 0.$$
From (3.22) and (3.25), we have
$$\begin{aligned} \|x^{n+1} - u\|^2 &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)\|y^n - u\|^2 + \zeta_n M\\ &\le \delta_n\|x^n - u\|^2 + (1 - \delta_n)[\|x^n - u\|^2 - \|z^n - y^n + (u - v)\|^2 + M\|Fz^n - Fv\|] + \zeta_n M\\ &\le \|x^n - u\|^2 - (1 - \delta_n)\|z^n - y^n + (u - v)\|^2 + (\|Fz^n - Fv\| + \zeta_n)M. \end{aligned}$$
It follows that
$$(1 - \delta_n)\|z^n - y^n + (u - v)\|^2 \le (\|x^{n+1} - x^n\| + \|Fz^n - Fv\| + \zeta_n)M, \tag{3.26}$$
which implies that
$$\lim_{n\to\infty}\|z^n - y^n + (u - v)\| = 0. \tag{3.27}$$
Thus, combining (3.27) with $\lim_{n\to\infty}\|x^n - z^n - (u - v)\| = 0$, we deduce that
$$\lim_{n\to\infty}\|x^n - y^n\| = 0.$$
Hence,
$$\|Ny^n - u^n\| = \|N\mathrm{Proj}_C y^n - N\mathrm{Proj}_C[\zeta_n R(x^n) + (1 - \zeta_n)y^n]\| \le \zeta_n M \to 0.$$
Therefore,
$$\|Ny^n - y^n\| \le \|Ny^n - u^n\| + \|u^n - x^n\| + \|x^n - y^n\| \to 0.$$
Next we prove that
$$\limsup_{n\to\infty}\langle x^* - R(x^*), x^* - y^n\rangle \le 0,$$
where $x^* = \mathrm{Proj}_\Gamma R(x^*)$.
Indeed, we can choose a subsequence $\{y^{n_i}\}$ of $\{y^n\}$ such that
$$\limsup_{n\to\infty}\langle x^* - R(x^*), x^* - y^n\rangle = \lim_{i\to\infty}\langle x^* - R(x^*), x^* - y^{n_i}\rangle.$$
Without loss of generality, we may further assume that $y^{n_i} \to z$ weakly; then it is clear that $z \in \Gamma$. Therefore,
$$\limsup_{n\to\infty}\langle x^* - R(x^*), x^* - y^n\rangle = \langle x^* - R(x^*), x^* - z\rangle \le 0.$$
From (3.21), we have
$$\begin{aligned} \|x^{n+1} - x^*\|^2 &\le \delta_n\|x^n - x^*\|^2 + (1 - \delta_n)\|\zeta_n(R(x^n) - x^*) + (1 - \zeta_n)(y^n - x^*)\|^2\\ &\le \delta_n\|x^n - x^*\|^2 + (1 - \delta_n)[(1 - \zeta_n)^2\|y^n - x^*\|^2 + 2\zeta_n(1 - \zeta_n)\langle R(x^n) - x^*, y^n - x^*\rangle + \zeta_n^2\|R(x^n) - x^*\|^2]\\ &= \delta_n\|x^n - x^*\|^2 + (1 - \delta_n)[(1 - \zeta_n)^2\|y^n - x^*\|^2 + 2\zeta_n(1 - \zeta_n)\langle R(x^n) - R(x^*), y^n - x^*\rangle\\ &\quad + 2\zeta_n(1 - \zeta_n)\langle R(x^*) - x^*, y^n - x^*\rangle + \zeta_n^2\|R(x^n) - x^*\|^2]\\ &\le [1 - 2(1 - \kappa)(1 - \delta_n)\zeta_n]\|x^n - x^*\|^2 + 2\zeta_n(1 - \zeta_n)(1 - \delta_n)\langle R(x^*) - x^*, y^n - x^*\rangle + (1 - \delta_n)\zeta_n^2 M\\ &= (1 - \bar{\gamma}_n)\|x^n - x^*\|^2 + \bar{\delta}_n\bar{\gamma}_n, \end{aligned}$$
where $\bar{\gamma}_n = 2(1 - \kappa)(1 - \delta_n)\zeta_n$ and
$$\bar{\delta}_n = \frac{1 - \zeta_n}{1 - \kappa}\langle R(x^*) - x^*, y^n - x^*\rangle + \frac{\zeta_n M}{2(1 - \kappa)}.$$
It is clear that $\sum_{n=0}^{\infty}\bar{\gamma}_n = \infty$ and $\limsup_{n\to\infty}\bar{\delta}_n \le 0$. Hence, all conditions of Lemma 2.6 are satisfied, and we immediately deduce that $x^n \to x^*$.
Finally, if we take $R = 0$, then by an argument similar to that of Theorem 3.1, we deduce immediately that $x^*$ is the minimum-norm element in $\Gamma$, i.e., the solution of (3.1). This completes the proof.
Acknowledgment
Yeong-Cheng Liou was supported in part by NSC 101-2628-E-230-001-MY3 and NSC 101-2622-E-230-005-CC3. Li-Jun Zhu was supported in part by the NNSF of China (61362033).
References
[1] J. Y. Bello Cruz, A. N. Iusem, Convergence of direct methods for paramonotone variational inequalities, Comput.
Optim. Appl., 46 (2010), 247–263. 1
[2] A. Cabot, Proximal point algorithm controlled by a slowly vanishing term: applications to hierarchical minimization, SIAM J. Optim., 15 (2005), 555–572.
[3] L. C. Ceng, C. Wang, J. C. Yao, Strong convergence theorems by a relaxed extragradient method for a general
system of variational inequalities, Math. Methods Oper. Res., 67 (2008), 375–390. 1, 2.2, 2.3
[4] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert
space, J. Optim. Theory Appl., 148 (2011), 318–335.
[5] S. S. Chang, H. W. Joseph Lee, C. K. Chan, J. A. Liu, A new method for solving a system of generalized nonlinear
variational inequalities in Banach spaces, Appl. Math. Comput., 217 (2011), 6830–6837.
[6] R. Chen, Y. Su, H. K. Xu, Regularization and iteration methods for a class of monotone variational inequalities,
Taiwanese J. Math., 13 (2009), 739–752.
[7] B. S. He, Z. H. Yang, X. M. Yuan, An approximate proximal-extragradient type method for monotone variational
inequalities, J. Math. Anal. Appl., 300 (2004), 362–374.
[8] G. M. Korpelevich, An extragradient method for finding saddle points and for other problems, Èkonom. i Mat.
Metody, 12 (1976), 747–756.
[9] P. Li, S. M. Kang, L. J. Zhu, Visco-resolvent algorithms for monotone operators and nonexpansive mappings, J.
Nonlinear Sci. Appl., 7 (2014), 325–344.
[10] X. Lu, H. K. Xu, X. Yin, Hybrid methods for a class of monotone variational inequalities, Nonlinear Anal., 71
(2009), 1032–1041. 2.5
[11] N. Nadezhkina, W. Takahashi, Weak convergence theorem by an extragradient method for nonexpansive mappings
and monotone mappings, J. Optim. Theory Appl., 128 (2006), 191–201. 1, 1
[12] M. A. Noor, Some developments in general variational inequalities, Appl. Math. Comput., 152 (2004), 199–277.
1
[13] J. W. Peng, J. C. Yao, Strong convergence theorems of iterative scheme based on the extragradient method for
mixed equilibrium problems and fixed point problems, Math. Comput. Modeling, 49 (2009), 1816–1828. 1
[14] A. Sabharwal, L. C. Potter, Convexly constrained linear inverse problems: iterative least-squares and regularization, IEEE Trans. Signal Process., 46 (1998), 2345–2352. 1
[15] P. Saipara, P. Chaipunya, Y. J. Cho, P. Kumam, On strong and ∆-convergence of modified S-iteration for
uniformly continuous total asymptotically nonexpansive mappings in CAT(κ) spaces, J. Nonlinear Sci. Appl., 8
(2015), 965–975. 1
[16] P. Shi, Equivalence of variational inequalities with Wiener-Hopf equations, Proc. Amer. Math. Soc., 111 (1991),
339–346. 1
[17] G. Stampacchia, Formes bilineaires coercitives sur les ensembles convexes, C.R. Acad. Sci. Paris, 258 (1964),
4413–4416. 1
[18] T. Suzuki, Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces,
Fixed Point Theory Appl., 2005 (2005), 103–123. 2.4
[19] W. Takahashi, M. Toyoda, Weak convergence theorems for nonexpansive mappings and monotone mappings, J.
Optim. Theory Appl., 118 (2003), 417–428. 1
[20] R. U. Verma, On a new system of nonlinear variational inequalities and associated iterative algorithms, Math.
Sci. Res. Hot-line, 3 (1999), 65–68. 1
[21] R. U. Verma, Iterative algorithms and a new system of nonlinear quasivariational inequalities, Adv. Nonlinear
Var. Inequal., 4 (2001), 117–124. 1
[22] U. Witthayarat, Y. J. Cho, P. Kumam, Approximation algorithm for fixed points of nonlinear operators and
solutions of mixed equilibrium problems and variational inclusion problems with applications, J. Nonlinear Sci.
Appl., 5 (2012), 475–494.
[23] H. K. Xu, An iterative approach to quadratic optimization, J. Optim. Theory Appl., 116 (2003), 659–678. 2.6
[24] H. K. Xu, A variable Krasnoselski-Mann algorithm and the multiple-set split feasibility problem, Inverse Problems,
22 (2006), 2021–2034.
[25] H. K. Xu, Viscosity method for hierarchical fixed point approach to variational inequalities, Taiwanese J. Math.,
14 (2010), 463–478.
[26] H. K. Xu, T. H. Kim, Convergence of hybrid steepest-descent methods for variational inequalities, J. Optim.
Theory Appl., 119 (2003), 185–201.
[27] I. Yamada, N. Ogura, Hybrid steepest descent method for the variational inequality problem over the fixed point
set of certain quasi-nonexpansive mappings, Numer. Funct. Anal. Optim., 25 (2005), 619–655.
[28] I. Yamada, N. Ogura, N. Shirakawa, A numerically robust hybrid steepest descent method for the convexly constrained generalized inverse problems, Inverse Problems, Image Analysis, and Medical Imaging, Contemp. Math.,
313 (2002), 269–305. 1
[29] J. C. Yao, Variational inequalities with generalized monotone operators, Math. Oper. Res., 19 (1994), 691–705. 1
[30] Y. Yao, Y. C. Liou, Weak and strong convergence of Krasnoselski-Mann iteration for hierarchical fixed point
problems, Inverse Problems, 24 (2008), 501–508. 1
[31] Y. Yao, Y. C. Liou, J. C. Yao, An iterative algorithm for approximating convex minimization problem, Appl.
Math. Comput., 188 (2007), 648–656.
[32] Y. Yao, J. C. Yao, On modified iterative method for nonexpansive mappings and monotone mappings, Appl. Math.
Comput., 186 (2007), 1551–1558.
[33] Z. Yao, L. J. Zhu, S. M. Kang, Y. C. Liou, Iterative algorithms with perturbations for Lipschitz pseudocontractive
mappings in Banach spaces, J. Nonlinear Sci. Appl., 8 (2015), 935–943.
[34] L. C. Zeng, J. C. Yao, Strong convergence theorem by an extragradient method for fixed point problems and
variational inequality problems, Taiwanese J. Math., 10 (2006), 1293–1303. 1, 1