Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 813635, 5 pages
http://dx.doi.org/10.1155/2013/813635
Research Article
Regularization Method for the Approximate Split Equality
Problem in Infinite-Dimensional Hilbert Spaces
Rudong Chen, Junlei Li, and Yijie Ren
Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
Correspondence should be addressed to Rudong Chen; chenrd@tjpu.edu.cn
Received 31 March 2013; Accepted 12 April 2013
Academic Editor: Yisheng Song
Copyright © 2013 Rudong Chen et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We study the approximate split equality problem (ASEP) in the framework of infinite-dimensional Hilbert spaces. Let H1, H2, and H3 be infinite-dimensional real Hilbert spaces, let C ⊂ H1 and Q ⊂ H2 be two nonempty closed convex sets, and let A : H1 → H3 and B : H2 → H3 be two bounded linear operators. The ASEP in infinite-dimensional Hilbert spaces is to minimize the function f(x, y) = (1/2)‖Ax − By‖² over x ∈ C and y ∈ Q. Recently, Moudafi and Byrne proposed several algorithms for solving the split equality problem and proved their convergence; note, however, that their algorithms converge only weakly in infinite-dimensional Hilbert spaces. In this paper, we use the regularization method to establish a single-step iterative algorithm for solving the ASEP in infinite-dimensional Hilbert spaces and show that the sequence generated by this algorithm converges strongly to the minimum-norm solution of the ASEP. Note that, by taking B = I in the ASEP, we recover the approximate split feasibility problem (ASFP).
1. Introduction
Let C ⊆ R^N and Q ⊆ R^M be nonempty closed convex sets, and let A and B be J × N and J × M real matrices, respectively. The split equality problem (SEP) in finite-dimensional Hilbert spaces is to find x ∈ C and y ∈ Q such that Ax = By; the approximate split equality problem (ASEP) in finite-dimensional Hilbert spaces is to minimize the function f(x, y) = (1/2)‖Ax − By‖² over x ∈ C and y ∈ Q. When J = M and B = I, the SEP reduces to the well-known split feasibility problem (SFP) and the ASEP becomes the approximate split feasibility problem (ASFP). For information on the split feasibility problem, please see [1–9].
In this paper, we work in the framework of infinite-dimensional Hilbert spaces. Let H1, H2, and H3 be infinite-dimensional real Hilbert spaces, let C ⊂ H1 and Q ⊂ H2 be two nonempty closed convex sets, and let A : H1 → H3 and B : H2 → H3 be two bounded linear operators. The ASEP in infinite-dimensional Hilbert spaces is

to minimize the function f(x, y) = (1/2)‖Ax − By‖² over x ∈ C and y ∈ Q.  (1)
Very recently, for solving the SEP, Moudafi introduced the following alternating CQ-algorithm (ACQA) in [10]:
π‘₯π‘˜+1 = 𝑃𝐢 (π‘₯π‘˜ − π›Ύπ‘˜ 𝐴∗ (𝐴π‘₯π‘˜ − π΅π‘¦π‘˜ )) ,
π‘¦π‘˜+1 = 𝑃𝑄 (π‘¦π‘˜ + π›Ύπ‘˜ 𝐡∗ (𝐴π‘₯π‘˜+1 − π΅π‘¦π‘˜ )) .
(2)
Then, he proved the weak convergence of the sequence {(x_k, y_k)} to a solution of the SEP provided that the solution set Γ := {(x, y) ∈ C × Q : Ax = By} is nonempty and some conditions on the sequence of positive parameters (γ_k) are satisfied.
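As a concrete finite-dimensional illustration of the ACQA iteration (2), consider the sketch below. The data are illustrative assumptions, not part of the original analysis: identity matrices for A and B, box constraint sets whose projections are `np.clip`, and a fixed step `gamma` chosen below 1/max(ρ(AᵀA), ρ(BᵀB)) in place of Moudafi's conditions on (γ_k).

```python
import numpy as np

def acqa(A, B, proj_C, proj_Q, x0, y0, gamma, iters=200):
    """Alternating CQ-algorithm (2): alternating projected-gradient
    steps on x and y for f(x, y) = (1/2)||Ax - By||^2."""
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(iters):
        x = proj_C(x - gamma * A.T @ (A @ x - B @ y))
        y = proj_Q(y + gamma * B.T @ (A @ x - B @ y))  # uses the updated x
    return x, y

# Toy SEP: C = [0,1]^2, Q = [0.5,2]^2, A = B = I, so any x = y in
# [0.5,1]^2 solves Ax = By.
A = B = np.eye(2)
proj_C = lambda v: np.clip(v, 0.0, 1.0)
proj_Q = lambda v: np.clip(v, 0.5, 2.0)
x, y = acqa(A, B, proj_C, proj_Q, np.zeros(2), np.full(2, 2.0), gamma=0.5)
print(np.linalg.norm(A @ x - B @ y))   # residual is numerically zero
```

On this toy instance both blocks settle at the common point (1, 1), so the residual ‖Ax − By‖ vanishes.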
The ACQA involves two projections P_C and P_Q and, hence, may be hard to implement when one of them fails to have a closed-form expression. So, Moudafi proposed the following relaxed alternating CQ-algorithm (RACQA) in [11]:
π‘₯π‘˜+1 = π‘ƒπΆπ‘˜ (π‘₯π‘˜ − 𝛾𝐴∗ (𝐴π‘₯π‘˜ − π΅π‘¦π‘˜ )) ,
π‘¦π‘˜+1 = π‘ƒπ‘„π‘˜ (π‘¦π‘˜ + 𝛽𝐡∗ (𝐴π‘₯π‘˜+1 − π΅π‘¦π‘˜ )) ,
(3)
where πΆπ‘˜ , π‘„π‘˜ were defined in [11], and then he proved the
weak convergence of the sequence {π‘₯π‘˜ , π‘¦π‘˜ } to a solution of the
SEP.
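The point of the relaxation is that the sets C_k and Q_k of [11] are half-spaces containing C and Q (built from subgradients of convex functions describing the sets), and projecting onto a half-space has a closed form. The helper below sketches that closed-form projection; the half-space description {z : ⟨a, z⟩ ≤ b} is an illustrative assumption.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection onto the half-space {z : <a, z> <= b},
    the kind of relaxed set RACQA projects onto."""
    excess = a @ x - b
    if excess <= 0:
        return x                       # x already lies in the half-space
    return x - (excess / (a @ a)) * a  # shift back along the normal a

p = proj_halfspace(np.array([3.0, 4.0]), np.array([1.0, 0.0]), 1.0)
print(p)   # [1. 4.]
```

Points already inside the half-space are returned unchanged, so the mapping is a genuine metric projection.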
In [12], Byrne and Moudafi considered and studied algorithms for solving the approximate split equality problem (ASEP), which can be regarded as covering both the consistent and the inconsistent case of the SEP. There, they proposed the following simultaneous iterative algorithm (SSEA):

x_{k+1} = P_C(x_k − γ_k A^T(Ax_k − By_k)),
y_{k+1} = P_Q(y_k + γ_k B^T(Ax_k − By_k)),  (4)

where ε ≤ γ_k ≤ 2/ρ(G^T G) − ε, with G = [A  −B] and ρ(G^T G) the spectral radius of G^T G. They then proposed the relaxed SSEA (RSSEA) and a perturbed version of the SSEA (PSSEA) for solving the ASEP, and they proved their convergence. Furthermore, they used these algorithms to solve the approximate split feasibility problem (ASFP), which is a special case of the ASEP. Note that the projected Landweber algorithm is used as a tool in that article.
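A sketch of the SSEA (4): here G = [A  −B] is formed explicitly and a fixed step inside (0, 2/ρ(GᵀG)) is used. The matrices and box constraint sets are the same illustrative assumptions as before, not data from the original article.

```python
import numpy as np

def ssea(A, B, proj_C, proj_Q, x0, y0, iters=500):
    """Simultaneous split-equality algorithm (4): both blocks are
    updated from the same residual A x_k - B y_k."""
    G = np.hstack([A, -B])                   # G = [A  -B]
    rho = np.linalg.eigvalsh(G.T @ G).max()  # spectral radius of G^T G
    gamma = 1.0 / rho                        # fixed step inside (0, 2/rho)
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(iters):
        r = A @ x - B @ y                    # shared residual
        x, y = proj_C(x - gamma * A.T @ r), proj_Q(y + gamma * B.T @ r)
    return x, y

A = B = np.eye(2)
proj_C = lambda v: np.clip(v, 0.0, 1.0)
proj_Q = lambda v: np.clip(v, 0.5, 2.0)
x, y = ssea(A, B, proj_C, proj_Q, np.zeros(2), np.full(2, 2.0))
print(x, y)   # both blocks approach (1, 1), a common point with Ax = By
```

Unlike the ACQA sketch, both updates here use the residual at iterate k, which is what makes the scheme simultaneous rather than alternating.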
Note that the algorithms proposed by Moudafi and Byrne converge only weakly in infinite-dimensional Hilbert spaces. In this paper, we use the regularization method to establish a single-step iterative algorithm for solving the ASEP in infinite-dimensional Hilbert spaces, and we prove its strong convergence.
2. Preliminaries

Let H be a real Hilbert space with inner product ⟨·, ·⟩ and norm ‖·‖, and let K be a nonempty closed convex subset of H. Recall that the projection from H onto K, denoted by P_K, is defined in such a way that, for each x ∈ H, P_K x is the unique point in K with the property

‖x − P_K x‖ = min {‖x − y‖ : y ∈ K}.  (5)

The following important properties of projections are useful to our study.

Proposition 1. Given x ∈ H and z ∈ K:

(a) z = P_K x if and only if ⟨x − z, y − z⟩ ≤ 0 for all y ∈ K;

(b) ⟨P_K u − P_K v, u − v⟩ ≥ ‖P_K u − P_K v‖² for all u, v ∈ H.

Definition 2. A mapping T : H → H is said to be

(a) nonexpansive if

‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ H;  (6)

(b) firmly nonexpansive if 2T − I is nonexpansive, or equivalently,

⟨Tx − Ty, x − y⟩ ≥ ‖Tx − Ty‖², ∀x, y ∈ H.  (7)

Alternatively, T is firmly nonexpansive if and only if T can be expressed as

T = (1/2)(I + S),  (8)

where S : H → H is nonexpansive. It is well known that projections are (firmly) nonexpansive.

Definition 3. Let T be a nonlinear operator whose domain is D(T) ⊆ H and whose range is R(T) ⊆ H.

(a) T is said to be monotone if

⟨Tx − Ty, x − y⟩ ≥ 0, ∀x, y ∈ D(T).  (9)

(b) Given a number β > 0, T is said to be β-strongly monotone if

⟨Tx − Ty, x − y⟩ ≥ β‖x − y‖², ∀x, y ∈ D(T).  (10)

(c) Given a number L > 0, T is said to be L-Lipschitz if

‖Tx − Ty‖ ≤ L‖x − y‖, ∀x, y ∈ D(T).  (11)

Lemma 4 (see [13]). Assume that {a_n} is a sequence of nonnegative real numbers such that

a_{n+1} ≤ (1 − γ_n)a_n + γ_n δ_n, n ≥ 0,  (12)

where {γ_n} and {δ_n} are sequences of real numbers such that

(i) {γ_n} ⊂ (0, 1) and Σ_{n=0}^∞ γ_n = ∞;

(ii) either lim sup_{n→∞} δ_n ≤ 0 or Σ_{n=1}^∞ γ_n |δ_n| < ∞.

Then lim_{n→∞} a_n = 0.

3. Regularization Method for the ASEP

Let S = C × Q. Define

G = [A  −B],  ω = (x, y)^T.  (13)

The ASEP can now be reformulated as finding ω ∈ S that minimizes ‖Gω‖ over ω ∈ S. Therefore, solving the ASEP (1) is equivalent to solving the following minimization problem:

min_{ω∈S} f(ω) = (1/2)‖Gω‖²,  (14)

which is generally ill-posed. We consider its Tikhonov regularization (for more details about Tikhonov approximation, please see [8, 14] and the references therein)

min_{ω∈S} f_ε(ω) = (1/2)‖Gω‖² + (ε/2)‖ω‖²,  (15)

where ε > 0 is the regularization parameter. The regularized minimization (15) has a unique solution, denoted by ω_ε. Assume that the minimization (14) is consistent, and let ω_min be its minimum-norm solution; namely, ω_min ∈ Γ (Γ being the solution set of the minimization (14)) has the property

‖ω_min‖ = min {‖ω‖ : ω ∈ Γ}.  (16)
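To make the regularized problem (15) and the minimum-norm solution (16) concrete, the sketch below computes ω_ε by projected gradient on a toy problem whose data are illustrative assumptions: A = B = (1), C = [1, 2], Q = [0, 3], so that Γ = {(t, t) : 1 ≤ t ≤ 2} and ω_min = (1, 1). The step size 1/(‖G‖² + ε) is a practical choice, larger than the conservative bound used in the convergence analysis.

```python
import numpy as np

# Toy ASEP: A = B = (1), C = [1, 2], Q = [0, 3].  The solution set of
# (14) is {(t, t) : 1 <= t <= 2}, so the minimum-norm solution is (1, 1).
G = np.array([[1.0, -1.0]])                              # G = [A  -B]
proj_S = lambda w: np.clip(w, [1.0, 0.0], [2.0, 3.0])    # P_S on C x Q

def omega_eps(eps, w, tol=1e-10, max_iter=100_000):
    """Approximate the unique solution of the regularized problem (15)
    for a fixed eps > 0 by projected-gradient iterations."""
    gamma = 1.0 / (np.linalg.norm(G, 2) ** 2 + eps)      # practical step
    for _ in range(max_iter):
        w_next = proj_S(w - gamma * (G.T @ (G @ w) + eps * w))
        if np.linalg.norm(w_next - w) < tol:
            break
        w = w_next
    return w

w = np.array([2.0, 3.0])
for eps in (0.5, 0.1, 0.01, 0.001):
    w = omega_eps(eps, w)        # warm-start as eps decreases
    print(eps, w)                # w drifts toward omega_min = (1, 1)
```

As ε shrinks, ω_ε moves toward (1, 1), numerically illustrating the selection of the minimum-norm element of Γ.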
The following result is easily proved.
Proposition 5. If the minimization (14) is consistent, then the strong limit lim_{ε→0} ω_ε exists and is the minimum-norm solution of the minimization (14).

Proof. For any ω ∈ Γ, we have

f(ω) + (ε/2)‖ω_ε‖² ≤ f(ω_ε) + (ε/2)‖ω_ε‖² = f_ε(ω_ε) ≤ f_ε(ω) = f(ω) + (ε/2)‖ω‖².  (17)

It follows that, for all ε > 0 and ω ∈ Γ,

‖ω_ε‖ ≤ ‖ω‖.  (18)

Therefore, ω_ε is bounded. Assume that ε_j → 0 is such that ω_{ε_j} ⇀ ω*. Then the weak lower semicontinuity of f implies that, for any ω ∈ S,

f(ω*) ≤ lim inf_{j→∞} f(ω_{ε_j}) ≤ lim inf_{j→∞} f_{ε_j}(ω_{ε_j}) ≤ lim inf_{j→∞} f_{ε_j}(ω) = lim inf_{j→∞} [f(ω) + (ε_j/2)‖ω‖²] = f(ω).  (19)

This means that ω* ∈ Γ. Since the norm is weakly lower semicontinuous, we get from (18) that ‖ω*‖ ≤ ‖ω‖ for all ω ∈ Γ; hence, ω* = ω_min. This is sufficient to ensure that ω_ε ⇀ ω_min. To obtain the strong convergence, noting that (18) holds for ω_min, we compute

‖ω_ε − ω_min‖² = ‖ω_ε‖² − 2⟨ω_ε, ω_min⟩ + ‖ω_min‖² ≤ 2(‖ω_min‖² − ⟨ω_ε, ω_min⟩).  (20)

Since ω_ε ⇀ ω_min, we get ω_ε → ω_min in norm. This completes the proof.

Next we show that ω_min can be obtained in two steps. First, observing that the gradient

∇f_ε = ∇f + εI = G^T G + εI  (21)

is (ε + ‖G‖²)-Lipschitz and ε-strongly monotone, the mapping P_S(I − γ∇f_ε) is a contraction with coefficient

√(1 − γ(2ε − γ(‖G‖² + ε)²))  (≤ √(1 − εγ) ≤ 1 − (1/2)εγ),  (22)

where

0 < γ ≤ ε/(‖G‖² + ε)².  (23)

Indeed, observe that

‖P_S(I − γ∇f_ε)(x) − P_S(I − γ∇f_ε)(y)‖²
≤ ‖(I − γ∇f_ε)(x) − (I − γ∇f_ε)(y)‖²
= ‖x − y‖² − 2γ⟨∇f_ε(x) − ∇f_ε(y), x − y⟩ + γ²‖∇f_ε(x) − ∇f_ε(y)‖²
≤ (1 − 2γε + γ²(ε + ‖G‖²)²)‖x − y‖²
= [1 − γ(2ε − γ(ε + ‖G‖²)²)]‖x − y‖²
≤ (1 − εγ)‖x − y‖².  (24)

Note that ω_ε is a fixed point of the mapping P_S(I − γ∇f_ε) for any γ > 0 satisfying (23) and can be obtained as the limit as n → ∞ of the sequence of Picard iterates

ω_{n+1}^ε = P_S(I − γ∇f_ε)ω_n^ε.  (25)

Secondly, letting ε → 0 yields ω_ε → ω_min in norm. It is interesting to know whether these two steps can be combined to obtain ω_min in a single step. The following result, motivated by Xu [8], shows that for suitable choices of γ and ε the minimum-norm solution ω_min can indeed be obtained in a single step. We now state and prove the main result of this paper.

Theorem 6. Assume that the minimization problem (14) is consistent. Define a sequence ω_n by the iterative algorithm

ω_{n+1} = P_S(I − γ_n∇f_{ε_n})ω_n = P_S((1 − ε_nγ_n)ω_n − γ_n G^T G ω_n),  (26)

where ε_n and γ_n satisfy the following conditions:

(i) 0 < γ_n ≤ ε_n/(‖G‖² + ε_n)² for all (large enough) n;

(ii) ε_n → 0 and γ_n → 0;

(iii) Σ_{n=1}^∞ ε_nγ_n = ∞;

(iv) (|γ_{n+1} − γ_n| + γ_n|ε_{n+1} − ε_n|)/(ε_{n+1}γ_{n+1})² → 0.

Then ω_n converges in norm to the minimum-norm solution of the minimization problem (14).

Proof. Note that for any γ satisfying (23), ω_ε is a fixed point of the mapping P_S(I − γ∇f_ε). For each n, let z_n be the unique fixed point of the contraction

T_n := P_S(I − γ_n∇f_{ε_n}).  (27)

Then z_n = ω_{ε_n}, and so

z_n → ω_min in norm.  (28)

Thus, to prove the theorem, it suffices to prove that

‖ω_{n+1} − z_n‖ → 0.  (29)
Noting that T_n has contraction coefficient (1 − (1/2)ε_nγ_n), we have

‖ω_{n+1} − z_n‖ = ‖T_nω_n − T_nz_n‖ ≤ (1 − (1/2)ε_nγ_n)‖ω_n − z_n‖ ≤ (1 − (1/2)ε_nγ_n)‖ω_n − z_{n−1}‖ + ‖z_n − z_{n−1}‖.  (30)

We now estimate

‖z_n − z_{n−1}‖ = ‖T_nz_n − T_{n−1}z_{n−1}‖ ≤ ‖T_nz_n − T_nz_{n−1}‖ + ‖T_nz_{n−1} − T_{n−1}z_{n−1}‖ ≤ (1 − (1/2)ε_nγ_n)‖z_n − z_{n−1}‖ + ‖T_nz_{n−1} − T_{n−1}z_{n−1}‖.  (31)

This implies that

‖z_n − z_{n−1}‖ ≤ (2/(ε_nγ_n))‖T_nz_{n−1} − T_{n−1}z_{n−1}‖.  (32)

However, since z_n is bounded, we have, for an appropriate constant M > 0,

‖T_nz_{n−1} − T_{n−1}z_{n−1}‖
= ‖P_S(I − γ_n∇f_{ε_n})z_{n−1} − P_S(I − γ_{n−1}∇f_{ε_{n−1}})z_{n−1}‖
≤ ‖(I − γ_n∇f_{ε_n})z_{n−1} − (I − γ_{n−1}∇f_{ε_{n−1}})z_{n−1}‖
= ‖γ_n∇f_{ε_n}(z_{n−1}) − γ_{n−1}∇f_{ε_{n−1}}(z_{n−1})‖
= ‖(γ_n − γ_{n−1})∇f_{ε_n}(z_{n−1}) + γ_{n−1}(∇f_{ε_n}(z_{n−1}) − ∇f_{ε_{n−1}}(z_{n−1}))‖
≤ |γ_n − γ_{n−1}| ‖∇f(z_{n−1}) + ε_nz_{n−1}‖ + γ_{n−1}|ε_n − ε_{n−1}| ‖z_{n−1}‖
≤ (|γ_n − γ_{n−1}| + γ_{n−1}|ε_n − ε_{n−1}|)M.  (33)

Combining (30), (32), and (33), we obtain

‖ω_{n+1} − z_n‖ ≤ (1 − (1/2)ε_nγ_n)‖ω_n − z_{n−1}‖ + ((1/2)ε_nγ_n)β_n,  (34)

where

β_n = 4M(|γ_n − γ_{n−1}| + γ_{n−1}|ε_n − ε_{n−1}|)/(ε_nγ_n)² → 0.  (35)

Now applying Lemma 4 to (34) and using the conditions (ii)–(iv), we conclude that ‖ω_{n+1} − z_n‖ → 0; therefore, ω_n → ω_min in norm.

Remark 7. Note that ε_n = n^{−δ} and γ_n = n^{−σ} with 0 < δ < σ < 1 and σ + 2δ < 1 satisfy the conditions (i)–(iv).

Remark 8. We can express the algorithm (26) in terms of x and y, and we get

x_{n+1} = P_C((1 − ε_nγ_n)x_n − γ_nA^T(Ax_n − By_n)),
y_{n+1} = P_Q((1 − ε_nγ_n)y_n + γ_nB^T(Ax_n − By_n)).  (36)

The whole sequence (x_n, y_n) generated by the algorithm (36) converges strongly to the minimum-norm solution of the ASEP (1) provided that the ASEP (1) is consistent and ε_n and γ_n satisfy the conditions (i)–(iv).

Remark 9. Now we apply the algorithm (36) to solve the ASFP. Letting B = I, the iteration (36) becomes

x_{n+1} = P_C((1 − ε_nγ_n)x_n − γ_nA^T(Ax_n − y_n)),
y_{n+1} = P_Q((1 − ε_nγ_n)y_n + γ_n(Ax_n − y_n)).  (37)

This algorithm is different from the algorithms that have been proposed to solve the ASFP, but it does solve the ASFP.
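A runnable sketch of the single-step scheme (36) with the parameter choice of Remark 7. The one-dimensional data (A = B = (1), C = [1, 2], Q = [0, 3]) and the exponents δ = 0.25, σ = 0.45 are illustrative assumptions; because the steps are diminishing, the drift toward the minimum-norm solution (1, 1) is slow, which is why only a loose tolerance can be expected after finitely many iterations.

```python
import numpy as np

def asep_single_step(A, B, proj_C, proj_Q, x, y, n_iter,
                     delta=0.25, sigma=0.45):
    """Single-step regularized algorithm (36) with eps_n = n^-delta and
    gamma_n = n^-sigma as in Remark 7 (0 < delta < sigma < 1 and
    sigma + 2*delta < 1)."""
    for n in range(1, n_iter + 1):
        eps_n, gamma_n = n ** -delta, n ** -sigma
        r = A @ x - B @ y                   # common residual A x_n - B y_n
        x = proj_C((1 - eps_n * gamma_n) * x - gamma_n * (A.T @ r))
        y = proj_Q((1 - eps_n * gamma_n) * y + gamma_n * (B.T @ r))
    return x, y

# Toy ASEP: solutions are {(t, t) : 1 <= t <= 2}; minimum-norm solution (1, 1).
A = B = np.eye(1)
proj_C = lambda v: np.clip(v, 1.0, 2.0)
proj_Q = lambda v: np.clip(v, 0.0, 3.0)
x, y = asep_single_step(A, B, proj_C, proj_Q,
                        np.array([2.0]), np.array([3.0]), n_iter=100_000)
print(x, y)   # slowly approaches the minimum-norm solution (1, 1)
```

In contrast to the ω_ε computation for a fixed ε, here the regularization is driven to zero inside the iteration itself, which is exactly the single-step character of Theorem 6.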
4. Conclusion

In this paper, we considered the ASEP in infinite-dimensional Hilbert spaces, which has broad applicability in modeling significant real-world problems. We used the regularization method to propose a single-step iterative algorithm and showed that the sequence generated by this algorithm converges strongly to the minimum-norm solution of the ASEP (1). We also gave an algorithm for solving the ASFP in Remark 9.
Acknowledgments
The authors wish to thank the referees for their helpful
comments and suggestions. This research was supported by
NSFC Grant no. 11071279.
References
[1] Y. Censor and T. Elfving, “A multiprojection algorithm using
Bregman projections in a product space,” Numerical Algorithms,
vol. 8, no. 2–4, pp. 221–239, 1994.
[2] C. Byrne, “Iterative oblique projection onto convex sets and the
split feasibility problem,” Inverse Problems, vol. 18, no. 2, pp. 441–
453, 2002.
[3] C. Byrne, “A unified treatment of some iterative algorithms in
signal processing and image reconstruction,” Inverse Problems,
vol. 20, no. 1, pp. 103–120, 2004.
[4] Q. Yang, “The relaxed CQ algorithm solving the split feasibility
problem,” Inverse Problems, vol. 20, no. 4, pp. 1261–1266, 2004.
[5] B. Qu and N. Xiu, “A note on the CQ algorithm for the split
feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1655–
1665, 2005.
[6] B. Qu and N. Xiu, “A new halfspace-relaxation projection
method for the split feasibility problem,” Linear Algebra and its
Applications, vol. 428, no. 5-6, pp. 1218–1229, 2008.
[7] J. Zhao and Q. Yang, “Several solution methods for the split
feasibility problem,” Inverse Problems, vol. 21, no. 5, pp. 1791–
1799, 2005.
[8] H.-K. Xu, “Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces,” Inverse Problems, vol. 26, no. 10, Article ID 105018, 17 pages, 2010.
[9] L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, “Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem,” Nonlinear Analysis: Theory, Methods & Applications, vol. 75, no. 4, pp. 2116–2125, 2012.
[10] A. Moudafi, “Alternating CQ-algorithm for convex feasibility and split fixed-point problems,” Journal of Nonlinear and Convex Analysis, 2013.
[11] A. Moudafi, “A relaxed alternating CQ-algorithm for convex feasibility problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 79, pp. 117–121, 2013.
[12] C. Byrne and A. Moudafi, “Extensions of the CQ algorithm for the split feasibility and split equality problems,” preprint hal-00776640, version 1, 2013.
[13] H.-K. Xu and T. H. Kim, “Convergence of hybrid steepest-descent methods for variational inequalities,” Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 185–201, 2003.
[14] A. Moudafi, “Coupling proximal algorithm and Tikhonov
method,” Nonlinear Times and Digest, vol. 1, no. 2, pp. 203–209,
1994.