Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 531912, 7 pages
http://dx.doi.org/10.1155/2013/531912
Research Article
An Extension of Subgradient Method for Variational Inequality
Problems in Hilbert Space
Xueyong Wang, Shengjie Li, and Xipeng Kou
College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China
Correspondence should be addressed to Xueyong Wang; yonggk@163.com
Received 26 October 2012; Revised 30 January 2013; Accepted 1 February 2013
Academic Editor: Guanglu Zhou
Copyright © 2013 Xueyong Wang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
An extension of the subgradient method for solving variational inequality problems is presented. A new iterative process, which relates the fixed point of a nonexpansive mapping to the current iterate, is generated. A weak convergence theorem is obtained for the three sequences generated by this process under mild conditions.
1. Introduction
Let Ω be a nonempty closed convex subset of a real Hilbert space H, and let f : Ω → Ω be a continuous mapping. The variational inequality problem, denoted by VI(f, Ω), is to find a vector x* ∈ Ω such that

⟨x − x*, f(x*)⟩ ≥ 0,  ∀x ∈ Ω.  (1)

Throughout the paper, let Ω* be the solution set of VI(f, Ω), which is assumed to be nonempty. In the special case when Ω is the nonnegative orthant, (1) reduces to the nonlinear complementarity problem: find a vector x* ∈ Ω such that

x* ≥ 0,  f(x*) ≥ 0,  f(x*)^T x* = 0.  (2)
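The complementarity conditions in (2) are easy to check directly in code. The one-dimensional instance below, f(x) = x − 1, is an illustrative assumption only; its unique solution is x* = 1, since x* > 0 and f(x*) = 0.

```python
import numpy as np

def is_ncp_solution(f, x, tol=1e-10):
    """Check the three conditions of (2): x >= 0, f(x) >= 0, and f(x)^T x = 0."""
    fx = f(x)
    return bool(np.all(x >= -tol) and np.all(fx >= -tol) and abs(fx @ x) <= tol)

# Toy instance (illustrative): f(x) = x - 1, whose unique solution is x* = 1.
f = lambda x: x - 1.0
print(is_ncp_solution(f, np.array([1.0])))  # True
print(is_ncp_solution(f, np.array([0.0])))  # False: f(0) = -1 < 0
print(is_ncp_solution(f, np.array([2.0])))  # False: f(2) * 2 = 2 != 0
```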
The variational inequality problem plays an important role in optimization theory and variational analysis. There are numerous applications of variational inequalities in mathematics as well as in equilibrium problems arising from engineering, economics, and other areas of real life; see [1-16] and the references therein. Many algorithms, which employ the projection onto the feasible set Ω of the variational inequality or onto some related sets in order to iteratively reach a solution, have been proposed to solve (1). Korpelevich [2] proposed an extragradient method for finding the saddle point of some special cases of the equilibrium problem. Solodov and Svaiter [3] extended the extragradient algorithm by replacing the set Ω with the intersection of two sets related to VI(f, Ω). In each iteration of the algorithm, the new vector x_{k+1} is calculated according to the following iterative scheme. Given the current vector x_k, compute r(x_k) = x_k − P_Ω(x_k − f(x_k)); if r(x_k) = 0, stop; otherwise, compute
π‘§π‘˜ = π‘₯π‘˜ − πœ‚π‘˜ π‘Ÿ (π‘₯π‘˜ ) ,
(3)
where πœ‚π‘˜ = π‘Ÿπ‘šπ‘˜ and π‘šπ‘˜ being the smallest nonnegative integer
π‘š satisfying
σ΅„©
σ΅„©2
βŸ¨π‘“ (π‘₯π‘˜ − π‘Ÿπ‘š π‘Ÿ (π‘₯π‘˜ )) , π‘Ÿ (π‘₯π‘˜ )⟩ ≥ π›Ώσ΅„©σ΅„©σ΅„©π‘Ÿ (π‘₯π‘˜ )σ΅„©σ΅„©σ΅„© ,
(4)
and then compute
π‘₯π‘˜+1 = 𝑃Ω∩π»π‘˜ (π‘₯π‘˜ ) ,
(5)
where π»π‘˜ = {π‘₯ ∈ 𝑅𝑛 | ⟨π‘₯ − π‘§π‘˜ , 𝑓(π‘§π‘˜ )⟩ ≤ 0}.
On the other hand, Nadezhkina and Takahashi [11] obtained x_{k+1} by the following iterative formula:

x_{k+1} = α_k x_k + (1 − α_k) S P_Ω(x_k − λ_k f(x_k)),  (6)

where {α_k}_{k=0}^∞ is a sequence in (0, 1), {λ_k}_{k=0}^∞ is a sequence of stepsizes, and S : Ω → Ω is a nonexpansive mapping. Denoting the fixed-point set of S by F(S) and assuming F(S) ∩ Ω* ≠ ∅, they proved that the sequence {x_k}_{k=0}^∞ converges weakly to some x* ∈ F(S) ∩ Ω*.
Motivated and inspired by the extragradient methods in [2, 3], in this paper we study further extragradient methods and analyze the weak convergence property of the three sequences generated by our method.
The rest of this paper is organized as follows. In Section 2,
we give some preliminaries and basic results. In Section 3, we
present an extragradient algorithm and then discuss the weak
convergence of the sequences generated by the algorithm. In
Section 4, we modify the extragradient algorithm and give its
convergence analysis.
2. Preliminary and Basic Results

Let H be a real Hilbert space with ⟨x, y⟩ denoting the inner product of the vectors x, y. Weak convergence and strong convergence of the sequence {x_k}_{k=0}^∞ to a point x are denoted by x_k ⇀ x and x_k → x, respectively. The identity mapping from Ω to itself is denoted by I.

For a vector x ∈ H, the orthogonal projection of x onto Ω, denoted by P_Ω(x), is defined as

P_Ω(x) := arg min {‖y − x‖ | y ∈ Ω}.  (7)
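For simple sets Ω the minimization in (7) has a closed form; for a box it is coordinate-wise clipping (an assumption about the shape of Ω made for illustration). A small sketch checking the minimal-distance property in (7) against random feasible points:

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = np.zeros(3), np.ones(3)

def proj_box(x):
    """P_Omega(x) for the box Omega = [lo, hi]: clipping solves (7) exactly."""
    return np.clip(x, lo, hi)

x = np.array([1.7, -0.4, 0.5])
px = proj_box(x)               # the projection is (1, 0, 0.5)
for _ in range(1000):
    y = rng.uniform(lo, hi)    # arbitrary y in Omega
    assert np.linalg.norm(px - x) <= np.linalg.norm(y - x) + 1e-12
print(px)
```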
The following lemma states some well-known properties of
the orthogonal projection operator.
Lemma 1. One has

(1) ⟨x − P_Ω(x), P_Ω(x) − y⟩ ≥ 0, ∀x ∈ R^n, ∀y ∈ Ω;  (8)

(2) ‖P_Ω(x) − P_Ω(y)‖ ≤ ‖x − y‖, ∀x, y ∈ R^n;  (9)

(3) ‖P_Ω(x) − y‖² ≤ ‖x − y‖² − ‖x − P_Ω(x)‖², ∀x ∈ R^n, ∀y ∈ Ω;  (10)

(4) ‖P_Ω(x) − P_Ω(y)‖² ≤ ‖x − y‖² − ‖P_Ω(x) − x + y − P_Ω(y)‖², ∀x, y ∈ R^n.  (11)

A mapping f is called monotone if

⟨x − y, f(x) − f(y)⟩ ≥ 0,  ∀x, y ∈ Ω.  (12)

A mapping f is called Lipschitz continuous if there exists an L ≥ 0 such that

‖f(x) − f(y)‖ ≤ L ‖x − y‖,  ∀x, y ∈ Ω.  (13)

We denote the normal cone of Ω at v ∈ Ω by

N_Ω(v) := {w ∈ H | ⟨v − u, w⟩ ≥ 0, ∀u ∈ Ω},  (14)

and define the operator T(v) as

T(v) := f(v) + N_Ω(v) if v ∈ Ω,  T(v) := ∅ if v ∉ Ω.  (15)

Then T is maximal monotone. It is well known that 0 ∈ T(v) if and only if v ∈ Ω*; for more details see, for example, [9] and the references therein.

A mapping S : Ω → Ω is called nonexpansive if

‖S(x) − S(y)‖ ≤ ‖x − y‖,  ∀x, y ∈ Ω.  (16)

The graph of f, denoted by G(f), is defined by

G(f) := {(x, y) ∈ Ω × Ω | y = f(x)},  (17)

and the fixed-point set of a mapping S, denoted by F(S), is defined by

F(S) := {x ∈ Ω | S(x) = x}.  (18)

The following lemma is established in Hilbert space and is well known as the Opial condition.

Lemma 2. For any sequence {x_k}_{k=0}^∞ ⊂ H that converges weakly to x (x_k ⇀ x), one has

lim inf_{k→∞} ‖x_k − x‖ < lim inf_{k→∞} ‖x_k − y‖,  ∀y ≠ x.  (19)

The next lemma is proposed in [10].

Lemma 3 (demiclosedness principle). Let Ω be a closed, convex subset of a real Hilbert space H, and let S : Ω → H be a nonexpansive mapping. Then I − S is demiclosed at y ∈ H; that is, for any sequence {x_k}_{k=0}^∞ ⊂ Ω such that x_k ⇀ x̃, x̃ ∈ Ω, and (I − S)x_k → y, one has (I − S)x̃ = y.

3. An Algorithm and Its Convergence Analysis

In this section, we give our algorithm and then discuss its convergence. First, we need the following definition.

Definition 4. For a vector x ∈ Ω, the projected residual function is defined as

r(x) := x − P_Ω(x − f(x)).  (20)

Obviously, we have x ∈ Ω* if and only if r(x) = 0. Now we describe our algorithm.

Algorithm A. Step 0. Take δ ∈ (0, 1), γ ∈ (0, 1), x_0 ∈ Ω, and k = 0.

Step 1. For the current iterate x_k ∈ Ω, compute

y_k = P_Ω(x_k − f(x_k)),  (21)

z_k = (1 − η_k) x_k + η_k y_k,  (22)

where η_k = γ^{n_k} and n_k is the smallest nonnegative integer n satisfying

⟨f(x_k − γ^n r(x_k)), r(x_k)⟩ ≥ δ ‖r(x_k)‖².  (23)

Compute

d_k = P_{H_k ∩ Ω}(x_k),  (24)

x_{k+1} = α_k x_k + (1 − α_k) S d_k,  (25)

where {α_k} ⊂ (a, b) (a, b ∈ (0, 1)), H_k = {x ∈ Ω | ⟨x − z_k, f(z_k)⟩ ≤ 0}, and S : Ω → Ω is a nonexpansive mapping.

Step 2. If ‖r(x_{k+1})‖ = 0, stop; otherwise, go to Step 1.
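To make the scheme concrete, here is a runnable sketch of Algorithm A on a toy problem. Every problem-specific choice is an illustrative assumption: Ω is the box [0,1]², f(x) = Ax − b is monotone because A is positive definite, S is the identity (nonexpansive, so F(S) ∩ Ω* ≠ ∅), and the projection onto H_k ∩ Ω is computed via the one-dimensional KKT condition for a halfspace-plus-box intersection, solved by bisection. For this data the unique VI solution is x* = (1/3, 1/3).

```python
import numpy as np

# Illustrative problem data (not from the paper).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: A @ x - b            # monotone since A is positive definite
S = lambda x: x                    # identity: nonexpansive, F(S) = H
lo, hi = np.zeros(2), np.ones(2)
P_omega = lambda x: np.clip(x, lo, hi)

def proj_box_cap_halfspace(x, z, g):
    """P_{H_k ∩ Omega}(x) for H_k = {y : <y - z, g> <= 0} and Omega the box.
    By KKT the answer is clip(x - lam*g) for some lam >= 0, found by bisection."""
    c = z @ g
    y = P_omega(x)
    if y @ g <= c + 1e-12:         # projection onto the box already lies in H_k
        return y
    lam_lo, lam_hi = 0.0, 1.0
    for _ in range(60):            # bracket the multiplier
        if P_omega(x - lam_hi * g) @ g <= c:
            break
        lam_hi *= 2.0
    for _ in range(80):            # bisection on the 1-D complementarity condition
        lam = 0.5 * (lam_lo + lam_hi)
        if P_omega(x - lam * g) @ g > c:
            lam_lo = lam
        else:
            lam_hi = lam
    return P_omega(x - lam_hi * g)

delta, gamma, alpha = 0.3, 0.5, 0.5   # delta, gamma in (0, 1); constant alpha_k
x = np.array([1.0, 1.0])              # x_0 in Omega
for k in range(500):
    y = P_omega(x - f(x))             # (21)
    r = x - y                         # projected residual (20)
    if np.linalg.norm(r) < 1e-10:     # Step 2 stopping test
        break
    n = 0                             # Armijo-type search (23)
    while f(x - gamma**n * r) @ r < delta * (r @ r) and n < 50:
        n += 1
    eta = gamma**n
    z = (1 - eta) * x + eta * y       # (22)
    d = proj_box_cap_halfspace(x, z, f(z))   # (24)
    x = alpha * x + (1 - alpha) * S(d)       # (25)

print(np.round(x, 3))                 # close to x* = (1/3, 1/3)
```

The bisection-based projection is exact only because Ω is a box here; for a general Ω one would need a dedicated solver for (24).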
Remark 5. The iterate d_k in Algorithm A is well defined according to [3] and can be interpreted as follows: if (23) is well defined, then d_k can be derived by the following scheme: compute

x̄_k = P_{H_k}(x_k),  d_k = P_{H_k ∩ Ω}(x_k).  (26)

For more details, see [3, 4].
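The projection onto the halfspace H_k has a simple closed form (the same one used later in (32)). A minimal sketch, where the vectors z and g below are arbitrary illustrative data:

```python
import numpy as np

def proj_halfspace(x, z, g):
    """P_H(x) for H = {y : <y - z, g> <= 0}: step back along g only if x violates H."""
    viol = max(0.0, (x - z) @ g)
    return x - (viol / (g @ g)) * g

z = np.array([0.5, 0.5])
g = np.array([1.0, 1.0])
x_out = np.array([1.0, 1.0])     # violates the constraint: <x - z, g> = 1 > 0
p = proj_halfspace(x_out, z, g)
print(p, (p - z) @ g)            # lands exactly on the boundary of H
```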
Now we investigate the weak convergence property of
our algorithm. First we recall the following result, which was
proposed by Schu [17].
Lemma 6. Let H be a real Hilbert space, let {α_k}_{k=0}^∞ ⊂ (a, b) (a, b ∈ (0, 1)) be a sequence of real numbers, and let {v_k}_{k=0}^∞ ⊂ H and {w_k}_{k=0}^∞ ⊂ H be such that

lim sup_{k→∞} ‖v_k‖ ≤ c,  lim sup_{k→∞} ‖w_k‖ ≤ c,  lim sup_{k→∞} ‖α_k v_k + (1 − α_k) w_k‖ = c  (27)

for some c ≥ 0. Then one has

lim_{k→∞} ‖v_k − w_k‖ = 0.  (28)

The following theorem is crucial in proving the boundedness of the sequence {x_k}_{k=0}^∞.

Theorem 7. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, F(S) ∩ Ω* ≠ ∅, and x* ∈ Ω*. Then for any sequences {x_k}_{k=0}^∞, {d_k}_{k=0}^∞ generated by Algorithm A, one has

‖d_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − d_k‖² + 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨z_k − d_k, f(z_k)⟩.  (29)

Proof. Letting x = x_k and y = x*, it follows from Lemma 1(10) that

‖P_{H_k ∩ Ω}(x_k) − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − P_{H_k ∩ Ω}(x_k)‖²,  (30)

that is,

‖d_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − d_k‖².  (31)

From (20)-(23) in Algorithm A, we get ⟨x_k − z_k, f(z_k)⟩ > 0, which means x_k ∉ H_k. So, by the definition of the projection operator and [3], we obtain

x̄_k = P_{H_k}(x_k) = x_k − (⟨x_k − z_k, f(z_k)⟩ / ‖f(z_k)‖²) f(z_k) = x_k − (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) f(z_k).  (32)

Substituting (32) into (31), we have

‖d_k − x*‖²
≤ ‖x_k − (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) f(z_k) − x*‖² − ‖x_k − (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) f(z_k) − d_k‖²
= ‖x_k − x*‖² − ‖x_k − d_k‖² − 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨d_k − x*, f(z_k)⟩
= ‖x_k − x*‖² − ‖x_k − d_k‖² + 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨x* − z_k, f(z_k)⟩
  + 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨z_k − d_k, f(z_k)⟩.  (33)

Since f is monotone, connecting with (1), we obtain

⟨z_k − x*, f(z_k)⟩ ≥ 0.  (34)

Thus

‖d_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − d_k‖² + 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨z_k − d_k, f(z_k)⟩,  (35)

which completes the proof.
Theorem 8. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, F(S) ∩ Ω* ≠ ∅, and x* ∈ F(S) ∩ Ω*. Then for any sequences {x_k}_{k=0}^∞, {y_k}_{k=0}^∞, {d_k}_{k=0}^∞ generated by Algorithm A, one has

‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − (1 − α_k) (η_k δ / ‖f(z_k)‖)² ‖x_k − y_k‖⁴,

‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − ((1 − α_k)/2) ‖x_k − d_k‖².  (36)

Furthermore,

lim_{k→∞} ‖x_k − y_k‖ = 0,  lim_{k→∞} ‖x_k − d_k‖ = 0,  lim_{k→∞} ‖y_k − d_k‖ = 0.  (37)
Proof. Using (22), we have

(η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨z_k − x_k, f(z_k)⟩ = (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨−η_k r(x_k), f(z_k)⟩ = −(η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖)².  (38)

By the Cauchy-Schwarz inequality,

2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨x_k − d_k, f(z_k)⟩ ≤ (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖)² + ‖x_k − d_k‖².  (39)

Therefore, splitting ⟨z_k − d_k, f(z_k)⟩ = ⟨z_k − x_k, f(z_k)⟩ + ⟨x_k − d_k, f(z_k)⟩ in (29) and using (38) and (39),

‖d_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − d_k‖² − 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖)² + (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖)² + ‖x_k − d_k‖²
= ‖x_k − x*‖² − (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖)²
≤ ‖x_k − x*‖² − (η_k δ / ‖f(z_k)‖)² ‖x_k − y_k‖⁴,  (40)

where the last inequality follows from (23) and r(x_k) = x_k − y_k. Then we have

‖x_{k+1} − x*‖² = ‖α_k x_k + (1 − α_k) S d_k − x*‖²
= ‖α_k (x_k − x*) + (1 − α_k)(S d_k − x*)‖²
≤ α_k ‖x_k − x*‖² + (1 − α_k) ‖d_k − x*‖²
≤ α_k ‖x_k − x*‖² + (1 − α_k) ‖x_k − x*‖² − (1 − α_k)(η_k δ / ‖f(z_k)‖)² ‖x_k − y_k‖⁴
= ‖x_k − x*‖² − (1 − α_k)(η_k δ / ‖f(z_k)‖)² ‖x_k − y_k‖⁴
≤ ‖x_k − x*‖²,  (41)

where the first inequality follows from the convexity of ‖·‖² and the fact that S is a nonexpansive mapping with x* ∈ F(S).

This means that {x_k}_{k=0}^∞ is bounded, and so is {z_k}_{k=0}^∞. Since f is continuous, there exists a constant M > 0 such that ‖f(z_k)‖ ≤ M for all k, and we then have

‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − (1 − α_k)(δ/M)² η_k² ‖x_k − y_k‖⁴.  (42)

So we know that there exists ξ ≥ 0 with lim_{k→∞} ‖x_k − x*‖ = ξ, and hence

lim_{k→∞} η_k ‖x_k − y_k‖ = 0,  (43)

which implies that lim_{k→∞} ‖x_k − y_k‖ = 0 or lim_{k→∞} η_k = 0.

If lim_{k→∞} ‖x_k − y_k‖ = 0, we get the conclusion. If lim_{k→∞} η_k = 0, we can deduce that the inequality (23) in Algorithm A is not satisfied for n_k − 1; that is, there exists k_0 such that, for all k ≥ k_0,

⟨f(x_k − γ⁻¹ η_k r(x_k)), r(x_k)⟩ < δ ‖r(x_k)‖².  (44)

Applying (8) by setting x = x_k − f(x_k) and y = x_k leads to

⟨x_k − f(x_k) − P_Ω(x_k − f(x_k)), P_Ω(x_k − f(x_k)) − x_k⟩ ≥ 0,  (45)

and therefore

⟨f(x_k), r(x_k)⟩ ≥ ‖r(x_k)‖²,  as r(x_k) = x_k − P_Ω(x_k − f(x_k)).  (46)

Passing onto the limit in (44) and (46), we get δ lim_{k→∞} ‖x_k − y_k‖² ≥ lim_{k→∞} ‖x_k − y_k‖²; since δ ∈ (0, 1), we obtain lim_{k→∞} ‖x_k − y_k‖ = 0.

On the other hand, using the Cauchy-Schwarz inequality again, we have

2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨x_k − d_k, f(z_k)⟩ ≤ 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖)² + (1/2) ‖x_k − d_k‖².  (47)

Therefore,

‖d_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − d_k‖² − 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖)² + 2 (η_k ⟨r(x_k), f(z_k)⟩ / ‖f(z_k)‖)² + (1/2) ‖x_k − d_k‖²
≤ ‖x_k − x*‖² − (1/2) ‖x_k − d_k‖².  (48)

Then we have

‖x_{k+1} − x*‖² = ‖α_k x_k + (1 − α_k) S d_k − x*‖²
≤ α_k ‖x_k − x*‖² + (1 − α_k) ‖d_k − x*‖²
≤ α_k ‖x_k − x*‖² + (1 − α_k) ‖x_k − x*‖² − ((1 − α_k)/2) ‖x_k − d_k‖²
= ‖x_k − x*‖² − ((1 − α_k)/2) ‖x_k − d_k‖².  (49)
Noting that π›Όπ‘˜ ∈ (π‘Ž, 𝑏) (π‘Ž, 𝑏 ∈ (0, 1)), it easily follows that
0 < (1 − 𝑏)/2 < (1 − π›Όπ‘˜ )/2 < (1 − π‘Ž)/2 < 1, which implies that
σ΅„©
σ΅„©
lim σ΅„©σ΅„©π‘₯π‘˜ − π‘‘π‘˜ σ΅„©σ΅„©σ΅„© = 0.
π‘˜→∞ σ΅„©
(50)
𝑒 ∈ 𝑇 (V) = 𝑓 (V) + 𝑁٠(V) ,
By the triangle inequality, we have
σ΅„©σ΅„©
σ΅„© σ΅„©
σ΅„© σ΅„©
σ΅„© σ΅„©
σ΅„©
σ΅„©σ΅„©π‘¦π‘˜ − π‘‘π‘˜ σ΅„©σ΅„©σ΅„© = σ΅„©σ΅„©σ΅„©π‘¦π‘˜ − π‘₯π‘˜ + π‘₯π‘˜ − π‘‘π‘˜ σ΅„©σ΅„©σ΅„© ≤ σ΅„©σ΅„©σ΅„©π‘¦π‘˜ − π‘₯π‘˜ σ΅„©σ΅„©σ΅„© + σ΅„©σ΅„©σ΅„©π‘₯π‘˜ − π‘‘π‘˜ σ΅„©σ΅„©σ΅„© .
(51)
Passing onto the limit in (51), we conclude
σ΅„©
σ΅„©
lim σ΅„©σ΅„©π‘¦π‘˜ − π‘‘π‘˜ σ΅„©σ΅„©σ΅„© = 0.
π‘˜→∞ σ΅„©
Second, we describe the details of π‘₯∗ ∈ Ω∗ .
Since π‘₯π‘˜π‘– ⇀ π‘₯∗ , using Theorem 8 we claim that π‘‘π‘˜π‘– ⇀ π‘₯∗
and π‘¦π‘˜π‘– ⇀ π‘₯∗ .
Letting (V, 𝑒) ∈ 𝐺(𝑇), we have
(52)
thus,
⟨V − π‘₯∗ , 𝑒 − 𝑓 (V)⟩ ≥ 0,
Proof. By Theorem 8, we know that {π‘₯π‘˜ }∞
π‘˜=0 is bound, which
∞
implies that there exists a subsequence {π‘₯π‘˜π‘– }∞
𝑖=0 of {π‘₯π‘˜ }π‘˜=0 that
∗
∗
converges weakly to some points π‘₯ ∈ 𝐹(𝑆) ∩ Ω .
First, we investigate some details of π‘₯∗ ∈ 𝐹(𝑆).
Letting π‘₯σΈ€  ∈ 𝐹(𝑆) ∩ Ω∗ , since 𝑆 is nonexpansive mapping,
from (29) we have
(53)
that is,
(54)
σ΅„©
σ΅„©
lim σ΅„©σ΅„©π‘†π‘‘π‘˜ − π‘₯π‘˜ σ΅„©σ΅„©σ΅„© = 0.
σ΅„©
σ΅„©
lim 󡄩󡄩𝑆π‘₯π‘˜ − π‘₯π‘˜ σ΅„©σ΅„©σ΅„© = 0,
which imply that π‘₯∗ ∈ 𝐹(𝑆) by Lemma 3.
≥ ⟨V − π‘‘π‘˜π‘– , 𝑓 (V)⟩ − ⟨π‘₯π‘˜π‘– − 𝑓 (π‘₯π‘˜π‘– ) − π‘¦π‘˜π‘– , π‘¦π‘˜π‘– − V⟩
= ⟨V − π‘‘π‘˜π‘– , 𝑓 (V) − 𝑓 (π‘‘π‘˜π‘– )⟩ + ⟨V − π‘‘π‘˜π‘– , 𝑓 (π‘‘π‘˜π‘– )⟩
(63)
− ⟨V − π‘¦π‘˜π‘– , π‘¦π‘˜π‘– − π‘₯π‘˜π‘– ⟩ − ⟨V − π‘¦π‘˜π‘– , 𝑓 (π‘₯π‘˜π‘– )⟩
≥ ⟨V − π‘‘π‘˜π‘– , 𝑓 (π‘‘π‘˜π‘– )⟩ − ⟨V − π‘¦π‘˜π‘– , π‘¦π‘˜π‘– − π‘₯π‘˜π‘– ⟩
where the last inequation follows from the monotone of 𝑓.
Since 𝑓 is continuous, by (37) we have
𝑖→∞
lim π‘₯π‘˜π‘– = lim π‘‘π‘˜π‘– . (64)
𝑖→∞
𝑖→∞
⟨V − π‘₯∗ , π‘’βŸ© ≥ 0.
(55)
(56)
(57)
(65)
∗
−1
As 𝑓 is maximal monotone, we have π‘₯ ∈ 𝑓 (0), which
implies that π‘₯∗ ∈ Ω∗ .
At last we show that such π‘₯∗ is unique.
∞
Let {π‘₯π‘˜π‘— }∞
𝑗=0 be another subsequence of {π‘₯π‘˜ }π‘˜=0 , such that
∗
∗
π‘₯π‘˜π‘— ⇀ π‘₯Δ . Then we conclude that π‘₯Δ ∈ 𝐹(𝑆) ∩ Ω∗ . Suppose
π‘₯Δ∗ =ΜΈ π‘₯∗ ; by Lemma 2 we have
σ΅„©
σ΅„©
lim σ΅„©σ΅„©π‘₯π‘˜ − π‘₯∗ σ΅„©σ΅„©σ΅„©
π‘˜→∞ σ΅„©
σ΅„©
σ΅„©
σ΅„©
σ΅„©
σ΅„©
σ΅„©
= lim inf σ΅„©σ΅„©σ΅„©σ΅„©π‘₯π‘˜π‘– −π‘₯∗ σ΅„©σ΅„©σ΅„©σ΅„© < lim inf σ΅„©σ΅„©σ΅„©σ΅„©π‘₯π‘˜π‘– −π‘₯Δ∗ σ΅„©σ΅„©σ΅„©σ΅„© = lim σ΅„©σ΅„©σ΅„©π‘₯π‘˜ − π‘₯Δ∗ σ΅„©σ΅„©σ΅„©
𝑖→∞
𝑖→∞
π‘˜→∞
σ΅„©σ΅„©
σ΅„©σ΅„©
σ΅„©σ΅„©
σ΅„©σ΅„©
σ΅„©
σ΅„©
= lim inf σ΅„©σ΅„©σ΅„©π‘₯π‘˜π‘— −π‘₯Δ∗ σ΅„©σ΅„©σ΅„© < lim inf σ΅„©σ΅„©σ΅„©π‘₯π‘˜π‘— −π‘₯∗ σ΅„©σ΅„©σ΅„© = lim σ΅„©σ΅„©σ΅„©π‘₯π‘˜ − π‘₯∗ σ΅„©σ΅„©σ΅„© ,
σ΅„© 𝑗→∞ σ΅„©
σ΅„© π‘˜→∞
𝑗→∞ σ΅„©
(66)
and then passing onto the limit in (57), we deduce that
π‘˜→∞ σ΅„©
≥ ⟨V − π‘‘π‘˜π‘– , 𝑓 (V)⟩
Passing onto the limit in (63), we obtain
By the triangle inequality, we have
σ΅„©σ΅„©
σ΅„© σ΅„©
σ΅„© σ΅„©
σ΅„©
󡄩󡄩𝑆π‘₯π‘˜ − π‘₯π‘˜ σ΅„©σ΅„©σ΅„© ≤ 󡄩󡄩󡄩𝑆π‘₯π‘˜ − π‘†π‘‘π‘˜ σ΅„©σ΅„©σ΅„© + σ΅„©σ΅„©σ΅„©π‘†π‘‘π‘˜ − π‘₯π‘˜ σ΅„©σ΅„©σ΅„©
σ΅„©
σ΅„© σ΅„©
σ΅„©
≤ σ΅„©σ΅„©σ΅„©π‘₯π‘˜ − π‘‘π‘˜ σ΅„©σ΅„©σ΅„© + σ΅„©σ΅„©σ΅„©π‘†π‘‘π‘˜ − π‘₯π‘˜ σ΅„©σ΅„©σ΅„© ,
⟨V − π‘‘π‘˜π‘– , π‘’βŸ©
lim 𝑓 (π‘₯π‘˜π‘– ) = lim 𝑓 (π‘¦π‘˜π‘– ) ,
From Lemma 6, it follows that
π‘˜→∞ σ΅„©
Note that 𝑒 − 𝑓(V) ∈ 𝑁٠(V) and π‘‘π‘˜ ∈ Ω, then
𝑖→∞
Then by (25) we have
σ΅„©
σ΅„©
= lim σ΅„©σ΅„©σ΅„©σ΅„©π‘₯π‘˜+1 − π‘₯σΈ€  σ΅„©σ΅„©σ΅„©σ΅„© = πœ‰.
π‘˜→∞
(62)
− ⟨V − π‘¦π‘˜π‘– , 𝑓 (π‘₯π‘˜π‘– )⟩ ,
Passing onto the limit in (53), we obtain
σ΅„©
σ΅„©
lim σ΅„©σ΅„©σ΅„©π›Όπ‘˜ (π‘₯π‘˜ − π‘₯σΈ€  ) + (1 − π›Όπ‘˜ ) (π‘†π‘‘π‘˜ − π‘₯σΈ€  )σ΅„©σ΅„©σ΅„©σ΅„©
π‘˜→∞ σ΅„©
(60)
⟨π‘₯π‘˜ − 𝑓 (π‘₯π‘˜ ) − 𝑃٠(π‘₯π‘˜ − 𝑓 (π‘₯π‘˜ )) , 𝑃٠(π‘₯π‘˜ − 𝑓 (π‘₯π‘˜ )) − V⟩ ≥ 0,
(61)
⟨π‘₯π‘˜ − 𝑓 (π‘₯π‘˜ ) − π‘¦π‘˜ , π‘¦π‘˜ − V⟩ ≥ 0.
Theorem 9. Let Ω be a nonempty, closed, and convex
subset of 𝐻, let 𝑓 be a monotone and 𝐿-Lipschitz continuous mapping, and 𝐹(𝑆) ∩ Ω∗ =ΜΈ 0. Then the sequences
∞
∞
{π‘₯π‘˜ }∞
π‘˜=0 , {π‘¦π‘˜ }π‘˜=0 , {π‘‘π‘˜ }π‘˜=0 generated by Algorithm A converge
weakly to the same point π‘₯∗ ∈ 𝐹(𝑆) ∩ Ω∗ , where π‘₯∗ =
limπ‘˜ → ∞ 𝑃𝐹(𝑆)∩Ω∗ (π‘₯π‘˜ ).
σ΅„©
σ΅„©
lim σ΅„©σ΅„©σ΅„©σ΅„©π‘†π‘‘π‘˜ − π‘₯σΈ€  σ΅„©σ΅„©σ΅„©σ΅„© ≤ πœ‰.
π‘˜→∞
∀ π‘₯∗ ∈ Ω.
Applying (8) by letting π‘₯ = π‘₯π‘˜ − 𝑓(π‘₯π‘˜ ), 𝑦 = V, we have
The proof is complete.
σ΅„©σ΅„©
σ΅„© σ΅„©
σ΅„© σ΅„©
σ΅„©
σ΅„©σ΅„©π‘†π‘‘π‘˜ − π‘₯σΈ€  σ΅„©σ΅„©σ΅„© ≤ σ΅„©σ΅„©σ΅„©π‘‘π‘˜ − π‘₯σΈ€  σ΅„©σ΅„©σ΅„© ≤ σ΅„©σ΅„©σ΅„©π‘₯π‘˜ − π‘₯σΈ€  σ΅„©σ΅„©σ΅„© .
σ΅„©
σ΅„© σ΅„©
σ΅„© σ΅„©
σ΅„©
𝑒 − 𝑓 (V) ∈ 𝑁٠(V) , (59)
(58)
which implies that limπ‘˜ → ∞ β€–π‘₯π‘˜ − π‘₯∗ β€– < limπ‘˜ → ∞ β€–π‘₯π‘˜ − π‘₯∗ β€–,
and this is a contradiction. Thus, π‘₯∗ = π‘₯Δ∗ , and the proof is
complete.
4. Further Study

In this section we propose an extension of Algorithm A, which is effective in practice. Similar to the investigation in Section 3, for a constant τ > 0, we define a new projected residual function as follows:

r(x, τ) := x − P_Ω(x − τ f(x)).  (67)

It is clear that the new projected residual function (67) degenerates into (20) by setting τ = 1.

Algorithm B. Step 0. Take δ ∈ (0, 1), γ ∈ (0, 1), η_{−1} > 0, θ > 1, x_0 ∈ Ω, and k = 0.

Step 1. For the current iterate x_k ∈ Ω, compute

y_k = P_Ω(x_k − τ_k f(x_k)),

z_k = (1 − η_k) x_k + η_k y_k,  (68)

where τ_k = min{θ η_{k−1}, 1}, η_k = γ^{n_k} τ_k, and n_k is the smallest nonnegative integer n satisfying

⟨f(x_k − γ^n τ_k r(x_k, τ_k)), r(x_k, τ_k)⟩ ≥ (δ/τ_k) ‖r(x_k, τ_k)‖².  (69)

Compute

d_k = P_{H_k ∩ Ω}(x_k),

x_{k+1} = α_k x_k + (1 − α_k) S d_k,  (70)

where {α_k} ⊂ (a, b) (a, b ∈ (0, 1)) and H_k = {x ∈ Ω | ⟨x − z_k, f(z_k)⟩ ≤ 0}.

Step 2. If ‖r(x_k, τ_k)‖ = 0, stop; otherwise, go to Step 1.

In the rest of this section, we discuss the weak convergence property of Algorithm B.

Lemma 10. For any τ > 0, one has

x* is the solution of VI(f, Ω) ⟺ x* = P_Ω(x* − τ f(x*)).  (71)

Therefore, solving the variational inequality is equivalent to finding a zero point of the projected residual function r(·, τ). Meanwhile, we know that r(x, τ) is a continuous function of x, as the projection mapping is nonexpansive.

Lemma 11. For any x ∈ R^n and τ₁ ≥ τ₂ > 0, it holds that

‖r(x, τ₁)‖ ≥ ‖r(x, τ₂)‖.  (72)

Theorem 12. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and F(S) ∩ Ω* ≠ ∅. Then for any sequences {x_k}_{k=0}^∞, {d_k}_{k=0}^∞ generated by Algorithm B, one has

‖d_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − d_k‖² + 2 (η_k ⟨r(x_k, τ_k), f(z_k)⟩ / ‖f(z_k)‖²) ⟨z_k − d_k, f(z_k)⟩.  (73)

Proof. The proof of this theorem is similar to that of Theorem 7, so we omit it.

Theorem 13. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and F(S) ∩ Ω* ≠ ∅. Then for any sequences {x_k}_{k=0}^∞, {y_k}_{k=0}^∞, {d_k}_{k=0}^∞ generated by Algorithm B, one has

‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − ((1 − α_k)/2) ‖x_k − d_k‖².  (74)

Furthermore,

lim_{k→∞} ‖x_k − y_k‖ = 0,  lim_{k→∞} ‖x_k − d_k‖ = 0,  lim_{k→∞} ‖y_k − d_k‖ = 0.  (75)

Proof. The proof of this theorem is similar to that of Theorem 8. The only difference is that (44) is substituted by

⟨f(x_k − γ⁻¹ η_k r(x_k, τ_k)), r(x_k, τ_k)⟩ < (δ/τ_k) ‖r(x_k, τ_k)‖²,  (76)

where (76) follows from Lemma 11 with τ₁ = 1 and τ₂ = τ_k.

Theorem 14. Let Ω be a nonempty, closed, and convex subset of H, let f be a monotone and L-Lipschitz continuous mapping, and F(S) ∩ Ω* ≠ ∅. Then the sequences {x_k}_{k=0}^∞, {y_k}_{k=0}^∞, {d_k}_{k=0}^∞ generated by Algorithm B converge weakly to the same point x* ∈ F(S) ∩ Ω*, where x* = lim_{k→∞} P_{F(S)∩Ω*}(x_k).
5. Conclusions

In this paper, we proposed an extension of the extragradient algorithm for solving monotone variational inequalities and established its weak convergence theorem. Algorithm B is effective in practice. Meanwhile, we pointed out that the limit point produced by our algorithm is also a fixed point of a given nonexpansive mapping.
Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant 11171362) and the Fundamental Research Funds for the Central Universities (Grant CDJXS12101103). The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped to improve the paper.
References
[1] R. W. Cottle, J.-S. Pang, and R. E. Stone, The Linear Complementarity Problem, Academic Press, Boston, Mass, USA, 1992.
[2] G. M. Korpelevich, “The extragradient method for finding
saddle points and other problems,” Matecon, vol. 12, pp. 747–
756, 1976.
[3] M. V. Solodov and B. F. Svaiter, “A new projection method for
variational inequality problems,” SIAM Journal on Control and
Optimization, vol. 37, no. 3, pp. 765–776, 1999.
[4] M. V. Solodov, “Stationary points of bound constrained minimization reformulations of complementarity problems,” Journal
of Optimization Theory and Applications, vol. 94, no. 2, pp. 449–
467, 1997.
[5] Y. J. Wang, N. H. Xiu, and J. Z. Zhang, “Modified extragradient
method for variational inequalities and verification of solution
existence,” Journal of Optimization Theory and Applications, vol.
119, no. 1, pp. 167–183, 2003.
[6] W. Takahashi and M. Toyoda, “Weak convergence theorems for
nonexpansive mappings and monotone mappings,” Journal of
Optimization Theory and Applications, vol. 118, no. 2, pp. 417–
428, 2003.
[7] E. H. Zarantonello, Projections on Convex Sets in Hilbert Space and Spectral Theory, Contributions to Nonlinear Functional Analysis, Academic Press, New York, NY, USA, 1971.
[8] Z. Opial, “Weak convergence of the sequence of successive
approximations for nonexpansive mappings,” Bulletin of the
American Mathematical Society, vol. 73, pp. 591–597, 1967.
[9] R. T. Rockafellar, “On the maximality of sums of nonlinear
monotone operators,” Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
[10] F. E. Browder, “Fixed-point theorems for noncompact mappings
in Hilbert space,” Proceedings of the National Academy of
Sciences of the United States of America, vol. 53, pp. 1272–1276,
1965.
[11] N. Nadezhkina and W. Takahashi, “Weak convergence theorem
by an extragradient method for nonexpansive mappings and
monotone mappings,” Journal of Optimization Theory and
Applications, vol. 128, no. 1, pp. 191–201, 2006.
[12] B. C. Eaves, “On the basic theorem of complementarity,”
Mathematical Programming, vol. 1, no. 1, pp. 68–75, 1971.
[13] B. S. He and L. Z. Liao, “Improvements of some projection
methods for monotone nonlinear variational inequalities,” Journal of Optimization Theory and Applications, vol. 112, no. 1, pp.
111–128, 2002.
[14] Y. Censor, A. Gibali, and S. Reich, “The subgradient extragradient method for solving variational inequalities in Hilbert space,”
Journal of Optimization Theory and Applications, vol. 148, no. 2,
pp. 318–335, 2011.
[15] Y. Censor, A. Gibali, and S. Reich, “Two extensions of Korpelevich’s extragradient method for solving the variational
inequality problem in Euclidean space,” Tech. Rep., 2010.
[16] Y. J. Wang, N. H. Xiu, and C. Y. Wang, “Unified framework
of extragradient-type methods for pseudomonotone variational
inequalities,” Journal of Optimization Theory and Applications,
vol. 111, no. 3, pp. 641–656, 2001.
[17] J. Schu, “Weak and strong convergence to fixed points of asymptotically nonexpansive mappings,” Bulletin of the Australian
Mathematical Society, vol. 43, no. 1, pp. 153–159, 1991.