Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2012, Article ID 892980, 12 pages
doi:10.1155/2012/892980
Research Article
A Proximal Point Method Involving Two
Resolvent Operators
Oganeditse A. Boikanyo
Department of Applied Sciences, Botswana International University of Science and Technology,
Private Bag 41, Gaborone, Botswana
Correspondence should be addressed to Oganeditse A. Boikanyo, boikanyoa@gmail.com
Received 29 May 2012; Accepted 1 August 2012
Academic Editor: Svatoslav Staněk
Copyright © 2012 Oganeditse A. Boikanyo. This is an open access article distributed under
the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We construct a sequence of proximal iterates that converges strongly under minimal assumptions
to a common zero of two maximal monotone operators in a Hilbert space. The algorithm
introduced in this paper puts together several proximal point algorithms under one framework.
Therefore, the results presented here generalize and improve many results related to the proximal
point algorithm which were announced recently in the literature.
1. Introduction
Let K1 and K2 be nonempty, closed, and convex subsets of a real Hilbert space H with
nonempty intersection and consider the following problem:

find an x ∈ H such that x ∈ K1 ∩ K2. (1.1)
In his 1933 paper, von Neumann showed that if K1 and K2 are subspaces of H, then the
method of alternating projections, defined by

H ∋ x_0 → x_1 = P_{K1} x_0 → x_2 = P_{K2} x_1 → x_3 = P_{K1} x_2 → x_4 = P_{K2} x_3 → . . . , (1.2)

converges strongly to a point in K1 ∩ K2 which is closest to the starting point x_0. The proof of
this classical result can be found, for example, in [1, 2]. Ever since von Neumann announced
his result, many researchers have dedicated their time to the study of the convex feasibility
problem (1.1). In his paper, Bregman [3] showed that if K1 and K2 are two arbitrary nonempty, closed,
and convex subsets of H with nonempty intersection, then the sequence (x_n) generated by
the method of alternating projections converges weakly to a point in K1 ∩ K2. The work of
Hundal [4] revealed that the method of alternating projections fails in general to converge
strongly; see also [5].
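The classical scheme (1.2) is easy to experiment with numerically. The sketch below uses two planes through the origin in R^3 (sets and starting point are illustrative choices, not taken from the paper) to show the strong convergence to the point of K1 ∩ K2 nearest to x_0 that von Neumann's theorem guarantees for subspaces.

```python
# Von Neumann's alternating projections for two subspaces of R^3:
# K1 = {z = 0} and K2 = {y = z}, so K1 ∩ K2 is the x-axis. The sets and
# starting point are illustrative choices, not taken from the paper.

def P_K1(v):
    # orthogonal projection onto the plane z = 0
    return (v[0], v[1], 0.0)

def P_K2(v):
    # orthogonal projection onto the plane y = z
    m = 0.5 * (v[1] + v[2])
    return (v[0], m, m)

x = (1.0, 2.0, 3.0)      # starting point x0
for _ in range(60):
    x = P_K2(P_K1(x))    # one sweep of (1.2)

# For subspaces, the iterates converge strongly to the point of
# K1 ∩ K2 closest to x0, here (1, 0, 0).
print(x)
```

Each sweep halves the distance to the x-axis here, so the geometric rate is visible after a few iterations.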
Recall that the projection operator coincides with the resolvent of a normal cone. Thus,
the method of alternating projections can be extended in a natural way as follows: given
x_0 ∈ H, define a sequence (x_n) iteratively by

x_{2n+1} = J^A_{β_n} x_{2n} + e_n for n = 0, 1, . . . ,
x_{2n} = J^B_{μ_n} x_{2n−1} + e′_n for n = 1, 2, . . . ,
(1.3)

for β_n, μ_n ∈ (0, ∞) and two maximal monotone operators A and B, where (e_n) and (e′_n) are
sequences of computational errors. Here J^A_μ := (I + μA)^{−1} is the resolvent of A. In this case,
problem (1.1) can be restated as

find an x ∈ D(A) ∩ D(B) such that x ∈ A^{−1}(0) ∩ B^{−1}(0). (1.4)
For e_n = 0 = e′_n and β_n = μ_n = μ > 0, Bauschke et al. [6] proved that the sequence generated
by the method of alternating resolvents (1.3) converges weakly to a point that solves
problem (1.4). In fact, they showed that such a sequence converges weakly to a point in
Fix(J^A_μ J^B_μ) provided that the fixed point set of the composition mapping J^A_μ J^B_μ is nonempty.
Note that strong convergence of this method fails in general; the same counterexample of
Hundal [4] applies. For a convergence analysis of algorithm (1.3) in the case when either of the
sequences of real numbers (β_n) and (μ_n) is not constant, and when the error sequences (e_n) and (e′_n) are not zero for all n ≥ 1, we refer the reader to [7].
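The alternating resolvent scheme (1.3) can be tried out with operators whose resolvents have closed forms. In the sketch below, the operators A(x) = x and B = ∂|·| on the real line are illustrative choices (not from the paper); their common zero is 0, and the errors are taken to be zero.

```python
# Alternating resolvents (1.3) with zero errors, for two simple maximal
# monotone operators on the real line (illustrative choices, not from the
# paper): A(x) = x, with resolvent J_beta(x) = x/(1 + beta), and B = ∂|·|,
# whose resolvent is soft-thresholding. Here A^{-1}(0) ∩ B^{-1}(0) = {0}.

def J_A(x, beta):
    # resolvent (I + beta*A)^{-1} for A(x) = x
    return x / (1.0 + beta)

def J_B(x, mu):
    # resolvent of B = ∂|·|: soft-thresholding with threshold mu
    return max(abs(x) - mu, 0.0) * (1.0 if x >= 0 else -1.0)

x = 5.0              # starting point x0
beta, mu = 1.0, 1.0  # constant parameters, as in Bauschke et al. [6]
for _ in range(20):
    x = J_B(x, mu)   # even step:  x_{2n}   = J_mu^B  x_{2n-1}
    x = J_A(x, beta) # odd step:   x_{2n+1} = J_beta^A x_{2n}

print(x)  # the iterates reach the common zero 0
```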
There are other papers in the literature that address strong convergence of a given
iterative process to solutions of (1.4). For example, several authors have discussed strong
convergence of an iterative process of the Halpern type to common solutions of a finite family
of maximal monotone operators in Hilbert spaces, or even of m-accretive operators in Banach
spaces. Among the most recent works in this direction is that of Hu and Liu [8]. They showed
that, under appropriate conditions, an iterative process of Halpern type defined by

x_{n+1} = α_n u + δ_n x_n + γ_n S_{r_n} x_n, n ≥ 0, (1.5)

where α_n, δ_n, γ_n ∈ (0, 1) with α_n + δ_n + γ_n = 1 for all n ≥ 0, u, x_0 ∈ H are given, and S_{r_n} :=
a_0 I + a_1 J^1_{r_n} + a_2 J^2_{r_n} + · · · + a_l J^l_{r_n} with J^i_r = (I + rA_i)^{−1}, a_i ∈ (0, 1) for i = 0, 1, . . . , l and Σ_{i=0}^{l} a_i = 1,
converges strongly to a point in ∩_{i=1}^{l} A_i^{−1}(0) nearest to u.
Suppose that we want to find solutions of problem (1.4) iteratively. Then we observe
that, when using the iterative process (1.5), one has to calculate two resolvents of maximal
monotone operators in order to find the next iterate. On the other hand, for algorithm (1.3),
one needs to calculate only one resolvent operator at each step. This clearly shows that,
theoretically, algorithm (1.5) requires more computational time than algorithm (1.3).
The only disadvantage of algorithm (1.3) is that it does not always converge strongly, and
the limit to which it converges is not characterized. This is not the case with algorithm (1.5).
Since weak convergence is not good enough for an effective algorithm, our purpose in this paper is to
modify algorithm (1.3) in such a way that strong convergence is guaranteed. More precisely,
for any two maximal monotone operators A and B, we define an iterative process in the
following way: for x_0, u ∈ H given, a sequence (x_n) is generated using the rule

x_{2n+1} = α_n u + δ_n x_{2n} + γ_n J^A_{β_n} x_{2n} + e_n for n = 0, 1, . . . , (1.6)
x_{2n} = J^B_{μ_n}(λ_n u + (1 − λ_n) x_{2n−1}) + e′_n for n = 1, 2, . . . , (1.7)

where α_n, δ_n, γ_n, λ_n ∈ (0, 1) with α_n + δ_n + γ_n = 1 and β_n, μ_n ∈ (0, ∞). We will also show
that algorithm (1.6), (1.7) contains several algorithms, such as the prox-Tikhonov method,
the Halpern-type proximal point algorithm, and the regularized proximal method, as special
cases. That is, with our algorithm, we are able to put several algorithms under one framework. Therefore, our main results improve, generalize, and unify many related results
announced recently in the literature.
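As a quick illustration of (1.6), (1.7), the sketch below runs the scheme with zero errors for two normal-cone operators on the real line, whose resolvents are interval projections; all operators and parameter choices are assumptions made for the example, not data from the paper.

```python
# A toy run of algorithm (1.6), (1.7) with zero errors, for the normal-cone
# operators A = N_{[0,2]} and B = N_{[1,3]} on the real line, whose
# resolvents are the projections (clips) onto those intervals. Then
# F = [1, 2], and the iterates should approach the point of F nearest to u.
# Operators and parameters are illustrative, not taken from the paper.

def clip(x, lo, hi):
    return max(lo, min(hi, x))

u = 5.0   # anchor point
x = 0.0   # x1, the starting point
for n in range(1, 5001):
    alpha = lam = 1.0 / (n + 2)        # alpha_n, lambda_n -> 0, non-summable
    delta = gamma = 0.5 * (1 - alpha)  # alpha_n + delta_n + gamma_n = 1
    # step (1.7): x_{2n} = J^B(lambda_n u + (1 - lambda_n) x_{2n-1})
    x_even = clip(lam * u + (1 - lam) * x, 1.0, 3.0)
    # step (1.6): x_{2n+1} = alpha_n u + delta_n x_{2n} + gamma_n J^A x_{2n}
    x = alpha * u + delta * x_even + gamma * clip(x_even, 0.0, 2.0)

print(x)  # approaches P_F(u) = 2
```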
2. Preliminary Results
In the sequel, H will be a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ‖ · ‖.
We recall that a map T : H → H is called nonexpansive if for every x, y ∈ H we have
‖Tx − Ty‖ ≤ ‖x − y‖. We say that a map T is firmly nonexpansive if for every x, y ∈ H we
have

‖Tx − Ty‖² ≤ ‖x − y‖² − ‖(I − T)x − (I − T)y‖². (2.1)
It is clear that firmly nonexpansive mappings are also nonexpansive. The converse need not
be true. The excellent book by Goebel and Reich [9] is recommended to the reader who is
interested in studying the properties of firmly nonexpansive mappings. An operator A : D(A) ⊂
H → 2^H is said to be monotone if

⟨x − x′, y − y′⟩ ≥ 0, ∀(x, y), (x′, y′) ∈ G(A), (2.2)

where G(A) = {(x, y) ∈ H × H : x ∈ D(A), y ∈ Ax} is the graph of A. In other words, an operator is monotone if its graph is a monotone subset of the product space H × H. An operator A
is called maximal monotone if, in addition to being monotone, its graph is not properly contained in the graph of any other monotone operator. Note that if A is maximal monotone, then
so is its inverse A^{−1}. For a maximal monotone operator A, the resolvent of A, defined by J^A_β :=
(I + βA)^{−1}, is well defined on the whole space H, is single-valued, and is firmly nonexpansive
for every β > 0. It is known that the Yosida approximation of A, the operator defined by
A_β := β^{−1}(I − J^A_β), where I is the identity operator, is maximal monotone for every β > 0. For
the properties of maximal monotone operators discussed above, we refer the reader to [10].
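For a concrete instance of these definitions: when A = ∂f with f(x) = |x| on the real line (an example operator chosen here for illustration, not from the paper), the resolvent J_β is the well-known soft-thresholding map, and the Yosida approximation A_β clips x/β to [−1, 1]. A minimal check:

```python
# For A = ∂|·| on the real line, the resolvent and Yosida approximation
# have closed forms: J_beta is soft-thresholding, and
# A_beta(x) = (x - J_beta x)/beta selects a subgradient of |·|.

def resolvent(x, beta):
    # J_beta^A = (I + beta*A)^{-1}: soft-thresholding with threshold beta
    return max(abs(x) - beta, 0.0) * (1.0 if x >= 0 else -1.0)

def yosida(x, beta):
    # A_beta = beta^{-1} (I - J_beta^A)
    return (x - resolvent(x, beta)) / beta

beta = 1.0
print(resolvent(3.0, beta))  # 2.0: 3 is shrunk toward 0 by beta
print(yosida(3.0, beta))     # 1.0, the subgradient of |.| at 3
print(yosida(0.5, beta))     # 0.5, which lies in the set ∂|.|(0) = [-1, 1]
```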
Notations. Given a sequence (x_n), we will use x_n → x to mean that (x_n) converges strongly
to x, whereas x_n ⇀ x will mean that (x_n) converges weakly to x. The weak ω-limit set of a
sequence (x_n) will be denoted by ω_w(x_n). That is,

ω_w(x_n) = {x ∈ H : x_{n_k} ⇀ x for some subsequence (x_{n_k}) of (x_n)}. (2.3)
The following lemmas will be useful in proving our main results. The first lemma is a
basic property of norms in Hilbert spaces.
Lemma 2.1. For all x, y ∈ H, one has

‖x + y‖² ≤ ‖y‖² + 2⟨x, x + y⟩. (2.4)
The next lemma is well known; it can be found, for example, in [10, page 20].
Lemma 2.2. Any maximal monotone operator A : D(A) ⊂ H → 2^H satisfies the demiclosedness
principle. In other words, given any two sequences (x_n) and (y_n) satisfying x_n → x and y_n ⇀ y
with (x_n, y_n) ∈ G(A), one has (x, y) ∈ G(A).
Lemma 2.3 (Xu [11]). For any x ∈ H and μ ≥ β > 0,

‖x − J^A_β x‖ ≤ 2‖x − J^A_μ x‖, (2.5)

where A : D(A) ⊂ H → 2^H is a maximal monotone operator.
We end this section with the following key lemmas.
Lemma 2.4 (Boikanyo and Moroşanu [12]). Let (s_n) be a sequence of nonnegative real numbers
satisfying

s_{n+1} ≤ (1 − α_n)(1 − λ_n)s_n + α_n b_n + λ_n c_n + d_n, n ≥ 0, (2.6)

where (α_n), (λ_n), (b_n), (c_n), and (d_n) satisfy the conditions: (i) α_n, λ_n ∈ (0, 1) with Π_{n=0}^{∞}(1 − α_n)(1 − λ_n) = 0, (ii) lim sup_{n→∞} b_n ≤ 0, (iii) lim sup_{n→∞} c_n ≤ 0, and (iv) d_n ≥ 0 for all n ≥ 0 with Σ_{n=0}^{∞} d_n < ∞.
Then lim_{n→∞} s_n = 0.
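A quick numerical sanity check of the conclusion of Lemma 2.4, with sequences chosen purely for illustration (they are not from the paper) so that conditions (i)-(iv) hold:

```python
# Numerical illustration of Lemma 2.4: with alpha_n = lambda_n = 1/(n+2),
# b_n = c_n = 1/(n+1) -> 0, and summable d_n = 2^{-n}, the recursion (2.6)
# (taken here with equality, the worst case) drives s_n to 0.
# All sequences are illustrative choices, not from the paper.

s = 10.0
for n in range(200000):
    alpha = lam = 1.0 / (n + 2)
    b = c = 1.0 / (n + 1)
    d = 0.5 ** n if n < 60 else 0.0    # summable perturbation
    s = (1 - alpha) * (1 - lam) * s + alpha * b + lam * c + d

print(s)  # close to 0, as the lemma predicts
```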
Remark 2.5. Note that if lim_{n→∞} α_n = 0, then Π_{n=0}^{∞}(1 − α_n) = 0 if and only if Σ_{n=0}^{∞} α_n = ∞.
Lemma 2.6 (Maingé [13]). Let (s_n) be a sequence of real numbers that does not decrease at infinity,
in the sense that there exists a subsequence (s_{n_j}) of (s_n) such that s_{n_j} < s_{n_j+1} for all j ≥ 0. Define an
integer sequence (τ(n))_{n≥n_0} by

τ(n) = max{n_0 ≤ k ≤ n : s_k < s_{k+1}}. (2.7)

Then τ(n) → ∞ as n → ∞ and, for all n ≥ n_0,

max{s_{τ(n)}, s_n} ≤ s_{τ(n)+1}. (2.8)
3. Main Results
We first begin by giving a strong convergence result associated with the exact iterative process

v_{2n+1} = α_n u + δ_n v_{2n} + γ_n J^A_{β_n} v_{2n} for n = 0, 1, . . . , (3.1)
v_{2n} = J^B_{μ_n}(λ_n u + (1 − λ_n) v_{2n−1}) for n = 1, 2, . . . , (3.2)

where α_n, δ_n, γ_n, λ_n ∈ (0, 1) with α_n + δ_n + γ_n = 1, β_n, μ_n ∈ (0, ∞), and v_0, u ∈ H are given. The
proof of the following theorem makes use of some ideas of the papers [12–15].
Theorem 3.1. Let A : D(A) ⊂ H → 2^H and B : D(B) ⊂ H → 2^H be maximal monotone operators
with A^{−1}(0) ∩ B^{−1}(0) =: F ≠ ∅. For arbitrary but fixed vectors v_0, u ∈ H, let (v_n) be the sequence
generated by (3.1), (3.2), where α_n, δ_n, γ_n, λ_n ∈ (0, 1) with α_n + δ_n + γ_n = 1, and β_n, μ_n ∈ (0, ∞).
Assume that (i) lim_{n→∞} α_n = 0, γ_n ≥ γ for some γ > 0, and lim_{n→∞} λ_n = 0, (ii) either Σ_{n=0}^{∞} α_n = ∞
or Σ_{n=0}^{∞} λ_n = ∞, and (iii) β_n ≥ β and μ_n ≥ μ for some β, μ > 0. Then (v_n) converges strongly to the
point of F nearest to u.
Proof. Let p ∈ F. Then from (3.2) and the fact that the resolvent operator of a maximal
monotone operator B is nonexpansive, we have

‖v_{2n} − p‖ ≤ ‖λ_n(u − p) + (1 − λ_n)(v_{2n−1} − p)‖ ≤ λ_n‖u − p‖ + (1 − λ_n)‖v_{2n−1} − p‖. (3.3)

Again using the fact that the resolvent operator J^A_{β_n} : H → H is nonexpansive, we have from
(3.1)

‖v_{2n+1} − p‖ ≤ α_n‖u − p‖ + δ_n‖v_{2n} − p‖ + γ_n‖J^A_{β_n} v_{2n} − p‖
≤ α_n‖u − p‖ + (1 − α_n)‖v_{2n} − p‖
≤ (α_n + (1 − α_n)λ_n)‖u − p‖ + (1 − α_n)(1 − λ_n)‖v_{2n−1} − p‖
= (1 − (1 − α_n)(1 − λ_n))‖u − p‖ + (1 − α_n)(1 − λ_n)‖v_{2n−1} − p‖, (3.4)
where the last inequality follows from (3.3). Using a simple induction argument, we get

‖v_{2n+1} − p‖ ≤ (1 − Π_{k=1}^{n}(1 − α_k)(1 − λ_k))‖u − p‖ + ‖v_1 − p‖ Π_{k=1}^{n}(1 − α_k)(1 − λ_k). (3.5)

This shows that the subsequence (v_{2n+1}) of (v_n) is bounded. In view of (3.3), the subsequence
(v_{2n}) is also bounded. Hence the sequence (v_n) must be bounded.
Now from the firmly nonexpansive property of J^A_{β_n} : H → H, we have for any p ∈ F

‖J^A_{β_n} v_{2n} − p‖² ≤ ‖v_{2n} − p‖² − ‖v_{2n} − J^A_{β_n} v_{2n}‖², (3.6)

which in turn gives

2⟨v_{2n} − p, J^A_{β_n} v_{2n} − p⟩ = ‖v_{2n} − p‖² + ‖J^A_{β_n} v_{2n} − p‖² − ‖v_{2n} − J^A_{β_n} v_{2n}‖²
≤ 2‖v_{2n} − p‖² − 2‖v_{2n} − J^A_{β_n} v_{2n}‖². (3.7)
Again by using the firmly nonexpansive property of the resolvent J^A_{β_n} : H → H, we see that

‖δ_n(v_{2n} − p) + γ_n(J^A_{β_n} v_{2n} − p)‖²
= δ_n²‖v_{2n} − p‖² + γ_n²‖J^A_{β_n} v_{2n} − p‖² + 2γ_n δ_n⟨v_{2n} − p, J^A_{β_n} v_{2n} − p⟩
≤ (1 − α_n)²‖v_{2n} − p‖² − γ_n(γ_n + 2δ_n)‖v_{2n} − J^A_{β_n} v_{2n}‖². (3.8)
Now from (3.1) and Lemma 2.1, we have

‖v_{2n+1} − p‖² ≤ ‖δ_n(v_{2n} − p) + γ_n(J^A_{β_n} v_{2n} − p)‖² + 2α_n⟨u − p, v_{2n+1} − p⟩
≤ (1 − α_n)‖v_{2n} − p‖² − ε‖v_{2n} − J^A_{β_n} v_{2n}‖² + 2α_n⟨u − p, v_{2n+1} − p⟩, (3.9)

where ε > 0 is such that γ_n(γ_n + 2δ_n) ≥ ε. On the other hand, we observe that (3.2) is equivalent
to

v_{2n} − p + μ_n B v_{2n} ∋ λ_n(u − p) + (1 − λ_n)(v_{2n−1} − p). (3.10)
Multiplying this inclusion scalarly by 2(v_{2n} − p) and using the monotonicity of B, we obtain

2‖v_{2n} − p‖² ≤ 2λ_n⟨u − p, v_{2n} − p⟩ + 2(1 − λ_n)⟨v_{2n−1} − p, v_{2n} − p⟩
= (1 − λ_n)(‖v_{2n−1} − p‖² + ‖v_{2n} − p‖² − ‖v_{2n} − v_{2n−1}‖²) + 2λ_n⟨u − p, v_{2n} − p⟩, (3.11)

which implies that

‖v_{2n} − p‖² ≤ (1 − λ_n)(‖v_{2n−1} − p‖² − ‖v_{2n} − v_{2n−1}‖²) + 2λ_n⟨u − p, v_{2n} − p⟩. (3.12)
Using this inequality in (3.9), we get

‖v_{2n+1} − p‖² ≤ (1 − α_n)(1 − λ_n)‖v_{2n−1} − p‖² − ε‖v_{2n} − J^A_{β_n} v_{2n}‖²
+ 2α_n⟨u − p, v_{2n+1} − p⟩ − (1 − α_n)(1 − λ_n)‖v_{2n} − v_{2n−1}‖²
+ 2λ_n(1 − α_n)⟨u − p, v_{2n} − p⟩. (3.13)

If we denote s_n := ‖v_{2n−1} − P_F u‖², then we have for some positive constant M

s_{n+1} − s_n + ‖v_{2n} − v_{2n−1}‖² + ε‖v_{2n} − J^A_{β_n} v_{2n}‖² ≤ (α_n + λ_n)M. (3.14)
We now show that (s_n) converges to zero. For this purpose, we consider two possible
cases for the sequence (s_n).
Case 1. (s_n) is eventually decreasing (i.e., there exists N ≥ 0 such that (s_n) is decreasing for all
n ≥ N). In this case, (s_n) is convergent. Letting n → ∞ in (3.14), we get

lim_{n→∞} ‖v_{2n} − v_{2n−1}‖ = 0 = lim_{n→∞} ‖v_{2n} − J^A_{β_n} v_{2n}‖. (3.15)

Now using the second part of (3.15) and the fact that α_n → 0 as n → ∞, we get

‖v_{2n+1} − v_{2n}‖ ≤ α_n‖u − J^A_{β_n} v_{2n}‖ + (α_n + γ_n)‖J^A_{β_n} v_{2n} − v_{2n}‖ → 0, (3.16)
as n → ∞. Also, we have the following from Lemma 2.3 and the first part of (3.15):

‖v_{2n} − J^A_β v_{2n}‖ ≤ 2‖v_{2n} − J^A_{β_n} v_{2n}‖ → 0, (3.17)

as n → ∞. Since A_β, where A_β denotes the Yosida approximation of A, is demiclosed, it
follows that ω_w(v_{2n}) ⊂ A^{−1}(0). On the other hand, from the nonexpansive property of the
resolvent operator of B, we get

‖v_{2n} − J^B_μ v_{2n}‖ ≤ 2‖v_{2n} − J^B_{μ_n} v_{2n}‖
≤ 2(λ_n‖u − v_{2n−1}‖ + ‖v_{2n−1} − v_{2n}‖), (3.18)

where the first inequality follows from Lemma 2.3. Since B_μ is demiclosed, passing to the limit in the above inequality yields ω_w(v_{2n}) ⊂ B^{−1}(0), showing that
ω_w(v_{2n}) ⊂ F := A^{−1}(0) ∩ B^{−1}(0). Therefore, there is a subsequence (v_{2n_k}) of (v_{2n}) converging
weakly to some z ∈ F such that

lim sup_{n→∞} ⟨u − P_F u, v_{2n} − P_F u⟩ = lim sup_{k→∞} ⟨u − P_F u, v_{2n_k} − P_F u⟩
= ⟨u − P_F u, z − P_F u⟩ ≤ 0, (3.19)

where the above inequality follows from the characterization of the projection operator. Note
that by virtue of (3.16), we have

lim sup_{n→∞} ⟨u − P_F u, v_{2n+1} − P_F u⟩ ≤ 0 (3.20)

as well. Now, we derive from (3.13)

‖v_{2n+1} − P_F u‖² ≤ (1 − α_n)(1 − λ_n)‖v_{2n−1} − P_F u‖² + 2α_n⟨u − P_F u, v_{2n+1} − P_F u⟩
+ 2λ_n(1 − α_n)⟨u − P_F u, v_{2n} − P_F u⟩. (3.21)
Using Lemma 2.4, we get ‖v_{2n+1} − P_F u‖ → 0 as n → ∞. Passing to the limit in (3.12), we also
get ‖v_{2n} − P_F u‖ → 0 as n → ∞. Therefore, we derive ‖v_n − P_F u‖ → 0 as n → ∞. This proves
the result for the case when (s_n) is eventually decreasing.
Case 2. (s_n) is not eventually decreasing; that is, there is a subsequence (s_{n_j}) of (s_n) such that
s_{n_j} < s_{n_j+1} for all j ≥ 0. We then define an integer sequence (τ(n))_{n≥n_0} as in Lemma 2.6, so that
s_{τ(n)} ≤ s_{τ(n)+1} for all n ≥ n_0. Then from (3.14), it follows that

lim_{n→∞} ‖v_{2τ(n)} − v_{2τ(n)−1}‖ = 0 = lim_{n→∞} ‖v_{2τ(n)} − J^A_{β_{τ(n)}} v_{2τ(n)}‖. (3.22)

We also derive from (3.1)

‖v_{2τ(n)+1} − v_{2τ(n)}‖ ≤ α_{τ(n)}‖u − J^A_{β_{τ(n)}} v_{2τ(n)}‖ + (α_{τ(n)} + γ_{τ(n)})‖J^A_{β_{τ(n)}} v_{2τ(n)} − v_{2τ(n)}‖ → 0, (3.23)
as n → ∞. In a similar way as in Case 1, we derive ω_w(v_{2τ(n)}) ⊂ F. Consequently,

lim sup_{n→∞} ⟨u − P_F u, v_{2τ(n)} − P_F u⟩ ≤ 0. (3.24)

Note that from (3.21) we have, for some positive constant K,

‖v_{2n+1} − P_F u‖² ≤ (1 − α_n)(1 − λ_n)‖v_{2n−1} − P_F u‖² + α_n K‖v_{2n+1} − v_{2n}‖
+ 2(λ_n(1 − α_n) + α_n)⟨u − P_F u, v_{2n} − P_F u⟩. (3.25)
Therefore, for all n ≥ n_0, we have

s_{τ(n)+1} ≤ (1 − α_{τ(n)})(1 − λ_{τ(n)})s_{τ(n)} + α_{τ(n)} K‖v_{2τ(n)+1} − v_{2τ(n)}‖
+ 2(λ_{τ(n)}(1 − α_{τ(n)}) + α_{τ(n)})⟨u − P_F u, v_{2τ(n)} − P_F u⟩. (3.26)

Since s_{τ(n)} ≤ s_{τ(n)+1} for all n ≥ n_0, we have

s_{τ(n)+1} ≤ (α_{τ(n)} K‖v_{2τ(n)+1} − v_{2τ(n)}‖)/(λ_{τ(n)}(1 − α_{τ(n)}) + α_{τ(n)}) + 2⟨u − P_F u, v_{2τ(n)} − P_F u⟩
≤ K‖v_{2τ(n)+1} − v_{2τ(n)}‖ + 2⟨u − P_F u, v_{2τ(n)} − P_F u⟩. (3.27)

Letting n → ∞ in the above inequality, we see that s_{τ(n)+1} → 0. Hence from (2.8) it follows
that s_n → 0 as n → ∞. That is, v_{2n+1} → P_F u as n → ∞. Furthermore, for some positive
constant C, we have from (3.12)

‖v_{2n} − P_F u‖² ≤ (1 − λ_n)‖v_{2n−1} − P_F u‖² + λ_n C, (3.28)

which implies that v_{2n} → P_F u as n → ∞. Hence, we have v_n → P_F u as n → ∞. This
completes the proof of the theorem.
We are now in a position to give a strong convergence result for the inexact iteration
process (1.6), (1.7). For the error sequences, we will use the 14 conditions established in [12].
Theorem 3.2. Let A : D(A) ⊂ H → 2^H and B : D(B) ⊂ H → 2^H be maximal monotone operators
with A^{−1}(0) ∩ B^{−1}(0) =: F ≠ ∅. For arbitrary but fixed vectors x_0, u ∈ H, let (x_n) be the sequence
generated by (1.6), (1.7), where α_n, δ_n, γ_n, λ_n ∈ (0, 1) with α_n + δ_n + γ_n = 1 and β_n, μ_n ∈ (0, ∞).
Assume that lim_{n→∞} α_n = 0, γ_n ≥ γ for some γ > 0 and lim_{n→∞} λ_n = 0, either Σ_{n=0}^{∞} α_n = ∞ or
Σ_{n=0}^{∞} λ_n = ∞, and β_n ≥ β and μ_n ≥ μ for some β, μ > 0. Then (x_n) converges strongly to the point of F
nearest to u, provided that any of the following conditions is satisfied:
(a) Σ_{n=0}^{∞} ‖e_n‖ < ∞ and Σ_{n=1}^{∞} ‖e′_n‖ < ∞;
(b) Σ_{n=0}^{∞} ‖e_n‖ < ∞ and ‖e′_n‖/α_n → 0;
(c) Σ_{n=0}^{∞} ‖e_n‖ < ∞ and ‖e′_n‖/λ_n → 0;
(d) ‖e_n‖/α_n → 0 and Σ_{n=1}^{∞} ‖e′_n‖ < ∞;
(e) ‖e_n‖/λ_n → 0 and Σ_{n=1}^{∞} ‖e′_n‖ < ∞;
(f) ‖e_n‖/α_n → 0 and ‖e′_n‖/α_n → 0;
(g) ‖e_n‖/α_n → 0 and ‖e′_n‖/λ_n → 0;
(h) ‖e_n‖/λ_n → 0 and ‖e′_n‖/α_n → 0;
(i) ‖e_n‖/λ_n → 0 and ‖e′_n‖/λ_n → 0;
(j) ‖e_n‖/α_n → 0 and ‖e′_n‖/α_{n−1} → 0;
(k) ‖e_{n−1}‖/λ_n → 0 and ‖e′_n‖/α_{n−1} → 0;
(l) ‖e_{n−1}‖/λ_n → 0 and ‖e′_n‖/λ_n → 0;
(m) Σ_{n=0}^{∞} ‖e_n‖ < ∞ and ‖e′_n‖/α_{n−1} → 0;
(n) ‖e_{n−1}‖/λ_n → 0 and Σ_{n=1}^{∞} ‖e′_n‖ < ∞.
Proof. Taking note of Theorem 3.1, it suffices to show that ‖x_n − v_n‖ → 0 as n → ∞. Since the
resolvent of B is nonexpansive, we derive from (1.7) and (3.2) the following:

‖x_{2n} − v_{2n}‖ ≤ (1 − λ_n)‖x_{2n−1} − v_{2n−1}‖ + ‖e′_n‖. (3.29)

Similarly, from (1.6) and (3.1), we have

‖x_{2n+1} − v_{2n+1}‖ ≤ δ_n‖x_{2n} − v_{2n}‖ + γ_n‖J^A_{β_n} x_{2n} − J^A_{β_n} v_{2n}‖ + ‖e_n‖
≤ (1 − α_n)‖x_{2n} − v_{2n}‖ + ‖e_n‖. (3.30)

Substituting (3.29) into (3.30) yields

‖x_{2n+1} − v_{2n+1}‖ ≤ (1 − α_n)(1 − λ_n)‖x_{2n−1} − v_{2n−1}‖ + ‖e_n‖ + ‖e′_n‖. (3.31)

Therefore, if the error sequences satisfy any of the conditions (a)–(i), then it readily follows
from Lemma 2.4 that ‖x_{2n+1} − v_{2n+1}‖ → 0 as n → ∞. Passing to the limit in (3.29), we derive
‖x_{2n} − v_{2n}‖ → 0 as well. If the error sequences satisfy any of the conditions (j)–(n), then from
(3.29) and (3.30), we have

‖x_{2n} − v_{2n}‖ ≤ (1 − α_{n−1})(1 − λ_n)‖x_{2n−2} − v_{2n−2}‖ + ‖e_{n−1}‖ + ‖e′_n‖. (3.32)

Then Lemma 2.4 guarantees that ‖x_{2n} − v_{2n}‖ → 0 as n → ∞. Passing to the limit in (3.30),
we derive ‖x_{2n+1} − v_{2n+1}‖ → 0 as well. This completes the proof of the theorem.
Note that when B = ∂I_H, where ∂I_H is the subdifferential of the indicator function of H,
and λ_n = 0 = e′_n for all n ≥ 1, algorithm (1.6), (1.7) reduces to the contraction proximal
point method which was introduced by Yao and Noor in 2008 [16]. Such a method is given
by

x_{n+1} = α_n u + δ_n x_n + γ_n J^A_{β_n} x_n + e_n for n = 1, 2, . . . , (3.33)

where we have used the notation x_n := x_{2n−1}. Here (β_n) is a sequence in (0, ∞) and α_n, δ_n, γ_n ∈
(0, 1) with α_n + δ_n + γ_n = 1. For this method, we have the following strong convergence result.
Corollary 3.3. Let A : D(A) ⊂ H → 2^H be a maximal monotone operator with A^{−1}(0) =: S ≠ ∅. For
arbitrary but fixed vectors x_1, u ∈ H, let (x_n) be the sequence generated by (3.33), where α_n, δ_n, γ_n ∈
(0, 1) with α_n + δ_n + γ_n = 1 and β_n ∈ (0, ∞). Assume that lim_{n→∞} α_n = 0 with Σ_{n=1}^{∞} α_n = ∞, γ_n ≥ γ
for some γ > 0, and β_n ≥ β for some β > 0. If either Σ_{n=1}^{∞} ‖e_n‖ < ∞ or ‖e_n‖/α_n → 0, then (x_n) converges strongly to the point of S nearest to u.
Corollary 3.3 generalizes and unifies many results announced recently in the literature,
such as [7, Theorem 4], [16, Theorem 3.3], [17, Theorem 2], and [18, Theorem 3.1]. We also
recover [15, Theorem 1].
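For a concrete feel of how (3.33) behaves, the following toy run uses the maximal monotone operator A(x) = x − 1 on the real line (an illustrative choice, not from the paper), whose resolvent is J_β(y) = (y + β)/(1 + β) and whose only zero is 1, so S = {1}:

```python
# A toy run of the contraction proximal point method (3.33) with zero
# errors, for A(x) = x - 1 on the real line: J_beta(y) = (y + beta)/(1 + beta)
# and S = A^{-1}(0) = {1}. Operator and parameters are illustrative choices.

u = 3.0     # anchor point
x = 0.0     # starting point x1
beta = 1.0  # beta_n >= beta > 0
for n in range(1, 5001):
    alpha = 1.0 / (n + 1)              # alpha_n -> 0, sum alpha_n = infinity
    delta = gamma = 0.5 * (1 - alpha)  # alpha_n + delta_n + gamma_n = 1
    x = alpha * u + delta * x + gamma * (x + beta) / (1 + beta)

print(x)  # approaches the zero of A, i.e. 1
```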
Remark 3.4. We refer the reader to the paper [12] for another generalization of the method
(3.33).
In the case when A = ∂I_H, where ∂I_H is the subdifferential of the indicator function
of H, and α_n = 0 = e_n for all n ≥ 1, algorithm (1.6), (1.7) reduces to the regularization
method

x_{n+1} = J^B_{μ_n}(λ_n u + (1 − λ_n)x_n) + e′_n for n = 1, 2, . . . , (3.34)

where we have used the notation x_n := x_{2n}. In this case, we have the following strong
convergence result, which improves results given in the papers [11, 19–21].
Corollary 3.5. Let B : D(B) ⊂ H → 2^H be a maximal monotone operator with B^{−1}(0) =: S ≠ ∅. For
arbitrary but fixed vectors x_1, u ∈ H, let (x_n) be the sequence generated by (3.34), where λ_n ∈ (0, 1)
and μ_n ∈ (0, ∞). Assume that lim_{n→∞} λ_n = 0 with Σ_{n=1}^{∞} λ_n = ∞ and μ_n ≥ μ for some μ > 0. If either
Σ_{n=1}^{∞} ‖e′_n‖ < ∞ or ‖e′_n‖/λ_n → 0, then (x_n) converges strongly to the point of S nearest to u.
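A toy run of the regularization method (3.34) can again use B = ∂|·|, so that S = B^{−1}(0) = {0} and P_S u = 0; the operator, parameters, and starting point below are illustrative choices, not from the paper, and the errors are zero.

```python
# A toy run of the regularization method (3.34) with zero errors, for
# B = ∂|·| on the real line: S = B^{-1}(0) = {0}, so the limit P_S(u) = 0.
# The resolvent J_mu of ∂|·| is soft-thresholding.

def J_B(y, mu):
    # resolvent of ∂|·|: soft-thresholding with threshold mu
    return max(abs(y) - mu, 0.0) * (1.0 if y >= 0 else -1.0)

u = 10.0
x = 5.0   # starting point x1
mu = 1.0  # mu_n >= mu > 0
for n in range(1, 201):
    lam = 1.0 / (n + 1)  # lambda_n -> 0, sum lambda_n = infinity
    x = J_B(lam * u + (1 - lam) * x, mu)

print(x)  # reaches P_S(u) = 0 once lam*u + (1 - lam)*x falls below mu
```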
It is worth mentioning that the regularization method is a generalization of the prox-Tikhonov method introduced by Lehdili and Moudafi [22]; see [11]. We also mention that for
λ_n → 0 and e′_n → 0, the regularization method (3.34) is equivalent to the inexact Halpern-type proximal point algorithm; see [23]. Therefore, Corollary 3.5 also improves many results
given in the papers [15, 19, 22, 24–26] related to the inexact Halpern-type proximal point
algorithm.
References
[1] H. H. Bauschke, E. Matoušková, and S. Reich, “Projection and proximal point methods: convergence results and counterexamples,” Nonlinear Analysis, vol. 56, no. 5, pp. 715–738, 2004.
[2] E. Kopecká and S. Reich, “A note on the von Neumann alternating projections algorithm,” Journal of Nonlinear and Convex Analysis, vol. 5, no. 3, pp. 379–386, 2004.
[3] L. M. Bregman, “The method of successive projection for finding a common point of convex sets,” Soviet Mathematics. Doklady, vol. 6, pp. 688–692, 1965.
[4] H. S. Hundal, “An alternating projection that does not converge in norm,” Nonlinear Analysis, vol. 57, no. 1, pp. 35–61, 2004.
[5] E. Matoušková and S. Reich, “The Hundal example revisited,” Journal of Nonlinear and Convex Analysis, vol. 4, no. 3, pp. 411–427, 2003.
[6] H. H. Bauschke, P. L. Combettes, and S. Reich, “The asymptotic behavior of the composition of two resolvents,” Nonlinear Analysis, vol. 60, no. 2, pp. 283–301, 2005.
[7] O. A. Boikanyo and G. Moroşanu, “Four parameter proximal point algorithms,” Nonlinear Analysis, vol. 74, no. 2, pp. 544–555, 2011.
[8] L. Hu and L. Liu, “A new iterative algorithm for common solutions of a finite family of accretive operators,” Nonlinear Analysis, vol. 70, no. 6, pp. 2344–2351, 2009.
[9] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, vol. 83 of Monographs and Textbooks in Pure and Applied Mathematics, Marcel Dekker, New York, NY, USA, 1984.
[10] G. Moroşanu, Nonlinear Evolution Equations and Applications, vol. 26 of Mathematics and its Applications, D. Reidel, Dordrecht, The Netherlands, 1988.
[11] H.-K. Xu, “A regularization method for the proximal point algorithm,” Journal of Global Optimization, vol. 36, no. 1, pp. 115–125, 2006.
[12] O. A. Boikanyo and G. Moroşanu, “Strong convergence of the method of alternating resolvents,” Journal of Nonlinear and Convex Analysis. In press.
[13] P.-E. Maingé, “Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization,” Set-Valued Analysis, vol. 16, no. 7-8, pp. 899–912, 2008.
[14] O. A. Boikanyo and G. Moroşanu, “A generalization of the regularization proximal point method,” Journal of Nonlinear Analysis and Application, vol. 2012, Article ID jnaa-00129, 6 pages, 2012.
[15] F. Wang and H. Cui, “On the contraction-proximal point algorithms with multi-parameters,” Journal of Global Optimization. In press.
[16] Y. Yao and M. A. Noor, “On convergence criteria of generalized proximal point algorithms,” Journal of Computational and Applied Mathematics, vol. 217, no. 1, pp. 46–55, 2008.
[17] O. A. Boikanyo and G. Moroşanu, “Multi parameter proximal point algorithms,” Journal of Nonlinear and Convex Analysis, vol. 13, no. 2, pp. 221–231, 2012.
[18] Y. Yao and N. Shahzad, “Strong convergence of a proximal point algorithm with general errors,” Optimization Letters, vol. 6, no. 4, pp. 621–628, 2012.
[19] O. A. Boikanyo and G. Moroşanu, “Inexact Halpern-type proximal point algorithm,” Journal of Global Optimization, vol. 51, no. 1, pp. 11–26, 2011.
[20] Y. Song and C. Yang, “A regularization method for the proximal point algorithm,” Journal of Global Optimization, vol. 43, no. 1, pp. 171–174, 2009.
[21] F. Wang, “A note on the regularized proximal point algorithm,” Journal of Global Optimization, vol. 50, no. 3, pp. 531–535, 2011.
[22] N. Lehdili and A. Moudafi, “Combining the proximal algorithm and Tikhonov regularization,” Optimization, vol. 37, no. 3, pp. 239–252, 1996.
[23] O. A. Boikanyo and G. Moroşanu, “A proximal point algorithm converging strongly for general errors,” Optimization Letters, vol. 4, no. 4, pp. 635–641, 2010.
[24] S. Kamimura and W. Takahashi, “Approximating solutions of maximal monotone operators in Hilbert spaces,” Journal of Approximation Theory, vol. 106, no. 2, pp. 226–240, 2000.
[25] G. Marino and H.-K. Xu, “Convergence of generalized proximal point algorithms,” Communications on Pure and Applied Analysis, vol. 3, no. 4, pp. 791–808, 2004.
[26] H.-K. Xu, “Iterative algorithms for nonlinear operators,” Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.