Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2012, Article ID 259813, 9 pages
doi:10.1155/2012/259813
Research Article
A Regularized Gradient Projection Method for
the Minimization Problem
Yonghong Yao,1 Shin Min Kang,2 Wu Jigang,3 and Pei-Xia Yang1

1 Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
2 Department of Mathematics and the RINS, Gyeongsang National University, Jinju 660-701, Republic of Korea
3 School of Computer Science and Software, Tianjin Polytechnic University, Tianjin 300387, China

Correspondence should be addressed to Shin Min Kang, smkang@gnu.ac.kr
Received 22 November 2011; Accepted 8 December 2011
Academic Editor: Yeong-Cheng Liou
Copyright © 2012 Yonghong Yao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We investigate the following regularized gradient projection algorithm $x_{n+1} = P_C[(I - \gamma_n(\nabla f + \alpha_n I))x_n]$, $n \ge 0$. Under some different control conditions, we prove that this gradient projection algorithm converges strongly to the minimum norm solution of the minimization problem $\min_{x \in C} f(x)$.
1. Introduction
Let C be a nonempty closed and convex subset of a real Hilbert space H. Let f : H → R be a
real-valued convex function.
Consider the following constrained convex minimization problem:
$$\min_{x \in C} f(x). \tag{1.1}$$
Assume that (1.1) is consistent, that is, it has a solution, and we use $\Omega$ to denote its solution set. If $f$ is Fréchet differentiable, then $x^* \in C$ solves (1.1) if and only if $x^*$ satisfies the following optimality condition:
$$\langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{1.2}$$
where $\nabla f$ denotes the gradient of $f$. Note that (1.2) can be rewritten as
$$\langle x^* - (x^* - \nabla f(x^*)), x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1.3}$$
This shows that the minimization (1.1) is equivalent to the fixed point problem
$$P_C[x^* - \gamma \nabla f(x^*)] = x^*, \tag{1.4}$$
where $\gamma > 0$ is any constant and $P_C$ is the nearest point projection from $H$ onto $C$. By using this relationship, the gradient-projection algorithm is usually applied to solve the minimization problem (1.1). This algorithm generates a sequence $\{x_n\}$ through the recursion
$$x_{n+1} = P_C[x_n - \gamma_n \nabla f(x_n)], \quad n \ge 0, \tag{1.5}$$
where the initial guess $x_0 \in C$ is chosen arbitrarily and $\{\gamma_n\}$ is a sequence of stepsizes which may be chosen in different ways. The gradient-projection algorithm (1.5) is a powerful tool for solving constrained convex optimization problems and has been well studied in the case of constant stepsizes $\gamma_n = \gamma$ for all $n$. The reader can refer to [1–9] and the references therein. A minimal numerical sketch of (1.5) follows.
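For readers who want to experiment, here is a minimal Python sketch of the recursion (1.5); the quadratic objective, the box constraint, and the constant stepsize $1/L$ are illustrative assumptions on our part, not taken from the paper.

```python
import numpy as np

def gradient_projection(grad_f, project, x0, stepsize, num_iters=1000):
    """Recursion (1.5): x_{n+1} = P_C(x_n - gamma_n * grad_f(x_n))."""
    x = x0
    for n in range(num_iters):
        x = project(x - stepsize(n) * grad_f(x))
    return x

# Illustrative instance: f(x) = 0.5 * ||A x - b||^2 over the box C = [0, 1]^2.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: A.T @ (A @ x - b)        # gradient of f; Lipschitz with L = ||A^T A||
L = np.linalg.norm(A.T @ A, 2)              # spectral norm gives the Lipschitz constant
project = lambda x: np.clip(x, 0.0, 1.0)    # closed-form P_C for a box: clipping
x_min = gradient_projection(grad_f, project, np.zeros(2), lambda n: 1.0 / L)
```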
It is known [3] that if $f$ has a Lipschitz continuous and strongly monotone gradient, then the sequence $\{x_n\}$ can be strongly convergent to a minimizer of $f$ in $C$. If the gradient of $f$ is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only be weakly convergent if $H$ is infinite dimensional. In order to get strong convergence, Xu [10] studied the following regularized method:
$$x_{n+1} = P_C[(I - \gamma_n(\nabla f + \alpha_n I))x_n], \quad n \ge 0, \tag{1.6}$$
where the sequences $\{\alpha_n\} \subset (0, 1)$ and $\{\gamma_n\} \subset (0, \infty)$ satisfy the following conditions:
(1) $0 < \gamma_n \le \alpha_n/(L + \alpha_n)^2$ for all $n$;
(2) $\lim_{n \to \infty} \alpha_n = 0$;
(3) $\sum_{n=0}^{\infty} \alpha_n \gamma_n = \infty$;
(4) $\lim_{n \to \infty} (|\gamma_n - \gamma_{n-1}| + |\alpha_n \gamma_n - \alpha_{n-1}\gamma_{n-1}|)/(\alpha_n \gamma_n)^2 = 0$.
Xu [10] proved that the sequence $\{x_n\}$ converges strongly to a minimizer of (1.1). One concrete choice of such sequences is sketched below.
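These conditions leave considerable freedom. As a hedged illustration (the decay exponent $-1/4$ and the value of $L$ are our own choices, not prescribed in [10]), the sequences $\alpha_n = (n+1)^{-1/4}$ and $\gamma_n = \alpha_n/(L+\alpha_n)^2$ appear to satisfy conditions (1)-(4), which the following Python snippet spot-checks numerically:

```python
import numpy as np

L_const = 2.0                                # assumed Lipschitz constant of grad f
alpha = lambda n: (n + 1.0) ** -0.25         # alpha_n -> 0 slowly (condition (2))
gamma = lambda n: alpha(n) / (L_const + alpha(n)) ** 2   # condition (1) with equality

n = np.arange(1.0, 1e6)
ag = alpha(n) * gamma(n)                     # alpha_n * gamma_n ~ n^{-1/2}, so the series diverges
print(ag.sum())                              # partial sums grow without bound (condition (3))
ratio = (np.abs(np.diff(gamma(n))) + np.abs(np.diff(ag))) / ag[1:] ** 2
print(ratio[-1])                             # tends to 0, consistent with condition (4)
```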
Motivated by Xu's work, in the present paper, we further investigate the regularized gradient projection method (1.6). Under some different control conditions, we prove that this gradient projection algorithm converges strongly to the minimum norm solution of the minimization problem (1.1).
2. Preliminaries
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. A mapping $T : C \to C$ is called nonexpansive if
$$\|Tx - Ty\| \le \|x - y\|, \quad \forall x, y \in C. \tag{2.1}$$
We will use $\mathrm{Fix}(T)$ to denote the set of fixed points of $T$, that is, $\mathrm{Fix}(T) = \{x \in C : x = Tx\}$. A mapping $T : C \to C$ is said to be $\nu$-inverse strongly monotone ($\nu$-ism) if there exists a constant $\nu > 0$ such that
$$\langle x - y, Tx - Ty \rangle \ge \nu \|Tx - Ty\|^2, \quad \forall x, y \in C. \tag{2.2}$$
Recall that the nearest point (or metric) projection from $H$ onto $C$, denoted $P_C$, assigns, to each $x \in H$, the unique point $P_C x \in C$ with the property
$$\|x - P_C x\| = \inf\{\|x - y\| : y \in C\}. \tag{2.3}$$
It is well known that the metric projection $P_C$ of $H$ onto $C$ has the following basic properties:
(a) $\|P_C x - P_C y\| \le \|x - y\|$ for all $x, y \in H$;
(b) $\langle x - y, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2$ for every $x, y \in H$;
(c) $\langle x - P_C x, y - P_C x \rangle \le 0$ for all $x \in H$, $y \in C$. A small computational illustration of these projections is given below.
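Although $P_C$ is defined abstractly, it admits closed forms for simple sets. The following hedged Python sketch (the box and the unit ball are our illustrative choices, not taken from the paper) implements two such projections and numerically spot-checks property (c):

```python
import numpy as np

def project_box(x, lo, hi):
    """P_C for the box C = [lo, hi]^d: componentwise clipping."""
    return np.clip(x, lo, hi)

def project_ball(x, radius=1.0):
    """P_C for the ball C = {y : ||y|| <= radius}: radial rescaling."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

# Spot-check of property (c): <x - P_C x, y - P_C x> <= 0 for every y in C.
rng = np.random.default_rng(0)
x = 5.0 * rng.normal(size=3)                  # a point typically outside the unit ball
p = project_ball(x)
for _ in range(100):
    y = project_ball(rng.normal(size=3))      # an arbitrary point of C
    assert np.dot(x - p, y - p) <= 1e-12      # property (c) holds numerically
```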
Next we adopt the following notation:
(i) $x_n \to x$ means that $x_n$ converges strongly to $x$;
(ii) $x_n \rightharpoonup x$ means that $x_n$ converges weakly to $x$;
(iii) $\omega_w(x_n) := \{x : \exists\, x_{n_j} \rightharpoonup x\}$ is the weak $\omega$-limit set of the sequence $\{x_n\}$.
Lemma 2.1 (see [11]). Given $T : H \to H$, let $V = I - T$ be the complement of $T$, and let $S : H \to H$ be given. Then:
(a) $T$ is nonexpansive if and only if $V$ is $(1/2)$-ism;
(b) if $S$ is $\nu$-ism, then, for $\gamma > 0$, $\gamma S$ is $(\nu/\gamma)$-ism;
(c) $S$ is averaged if and only if the complement $I - S$ is $\nu$-ism for some $\nu > 1/2$.
Lemma 2.2 (see [12], demiclosedness principle). Let $C$ be a closed and convex subset of a Hilbert space $H$, and let $T : C \to C$ be a nonexpansive mapping with $\mathrm{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in $C$ weakly converging to $x$ and if $\{(I - T)x_n\}$ converges strongly to $y$, then
$$(I - T)x = y. \tag{2.4}$$
In particular, if $y = 0$, then $x \in \mathrm{Fix}(T)$.
Lemma 2.3 (see [13]). Let $\{x_n\}$ and $\{y_n\}$ be bounded sequences in a Banach space $X$, and let $\{\beta_n\}$ be a sequence in $[0, 1]$ with
$$0 < \liminf_{n \to \infty} \beta_n \le \limsup_{n \to \infty} \beta_n < 1. \tag{2.5}$$
Suppose that
$$x_{n+1} = (1 - \beta_n)y_n + \beta_n x_n \tag{2.6}$$
for all $n \ge 0$ and
$$\limsup_{n \to \infty} \big(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\big) \le 0. \tag{2.7}$$
Then $\lim_{n \to \infty} \|y_n - x_n\| = 0$.
Lemma 2.4 (see [14]). Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - \gamma_n)a_n + \delta_n, \tag{2.8}$$
where $\{\gamma_n\}$ is a sequence in $(0, 1)$ and $\{\delta_n\}$ is a sequence such that
(1) $\sum_{n=1}^{\infty} \gamma_n = \infty$;
(2) $\limsup_{n \to \infty} \delta_n/\gamma_n \le 0$ or $\sum_{n=1}^{\infty} |\delta_n| < \infty$.
Then $\lim_{n \to \infty} a_n = 0$.
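As a quick numerical illustration of Lemma 2.4 (the particular sequences $\gamma_n = 1/(n+1)$ and $\delta_n = \gamma_n/(n+1)$ are our own toy choices satisfying conditions (1) and (2)), the decay can be observed directly:

```python
# Toy check of Lemma 2.4: iterate the recursion (2.8) taken with equality.
a = 1.0
for n in range(1, 10**6):
    gamma_n = 1.0 / (n + 1)          # sum of gamma_n diverges (condition (1))
    delta_n = gamma_n / (n + 1)      # delta_n / gamma_n -> 0 (condition (2))
    a = (1 - gamma_n) * a + delta_n
print(a)                             # a_n is driven toward 0, as the lemma predicts
```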
3. Main Result
In this section, we will state and prove our main result.
Theorem 3.1. Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to \mathbb{R}$ be a real-valued Fréchet differentiable convex function. Assume $\Omega \neq \emptyset$ and that the gradient $\nabla f$ is $L$-Lipschitzian. Let $\{x_n\}$ be a sequence generated by the following hybrid gradient projection algorithm:
$$x_{n+1} = P_C[(I - \gamma_n(\nabla f + \alpha_n I))x_n], \quad n \ge 0, \tag{3.1}$$
where the sequences $\{\alpha_n\} \subset (0, 1)$ and $\{\gamma_n\} \subset (0, 2/(L + 2\alpha_n))$ satisfy the following conditions:
(1) $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(2) $0 < \liminf_{n \to \infty} \gamma_n \le \limsup_{n \to \infty} \gamma_n < 2/L$ and $\lim_{n \to \infty} (\gamma_{n+1} - \gamma_n) = 0$.
Then the sequence $\{x_n\}$ generated by (3.1) converges strongly to a minimizer $\tilde{x}$ of (1.1).
Proof. Note that the Lipschitz condition implies that the gradient $\nabla f$ is $(1/L)$-ism [10]. Then we have
$$
\begin{aligned}
\big\|P_C[(I - \gamma(\nabla f + \alpha I))x] - P_C[(I - \gamma(\nabla f + \alpha I))y]\big\|^2
&\le \big\|(I - \gamma(\nabla f + \alpha I))x - (I - \gamma(\nabla f + \alpha I))y\big\|^2 \\
&= \big\|(1 - \alpha\gamma)(x - y) - \gamma(\nabla f(x) - \nabla f(y))\big\|^2 \\
&= (1 - \alpha\gamma)^2\|x - y\|^2 - 2(1 - \alpha\gamma)\gamma \langle x - y, \nabla f(x) - \nabla f(y) \rangle \\
&\quad + \gamma^2\|\nabla f(x) - \nabla f(y)\|^2 \\
&\le (1 - \alpha\gamma)^2\|x - y\|^2 - 2(1 - \alpha\gamma)\gamma \frac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2 \\
&\quad + \gamma^2\|\nabla f(x) - \nabla f(y)\|^2.
\end{aligned} \tag{3.2}
$$
If $\gamma \in (0, 2/(L + 2\alpha))$, then $2(1 - \alpha\gamma)\gamma(1/L) \ge \gamma^2$. It follows that
$$\big\|P_C[(I - \gamma(\nabla f + \alpha I))x] - P_C[(I - \gamma(\nabla f + \alpha I))y]\big\|^2 \le (1 - \alpha\gamma)^2\|x - y\|^2. \tag{3.3}$$
Thus,
$$\big\|P_C[(I - \gamma(\nabla f + \alpha I))x] - P_C[(I - \gamma(\nabla f + \alpha I))y]\big\| \le (1 - \alpha\gamma)\|x - y\|, \tag{3.4}$$
for all $x, y \in C$.
Take any $x^* \in \Omega$. Since $x^* \in C$ solves the minimization problem (1.1) if and only if $x^*$ solves the fixed-point equation $x^* = P_C(I - \gamma\nabla f)x^*$ for any fixed positive number $\gamma$, we have $x^* = P_C(I - \gamma_n\nabla f)x^*$ for all $n \ge 0$. From (3.1) and (3.4), we get
$$
\begin{aligned}
\|x_{n+1} - x^*\| &= \big\|P_C[(I - \gamma_n(\nabla f + \alpha_n I))x_n] - P_C(I - \gamma_n\nabla f)x^*\big\| \\
&\le \big\|P_C[(I - \gamma_n(\nabla f + \alpha_n I))x_n] - P_C[(I - \gamma_n(\nabla f + \alpha_n I))x^*]\big\| \\
&\quad + \big\|P_C[(I - \gamma_n(\nabla f + \alpha_n I))x^*] - P_C(I - \gamma_n\nabla f)x^*\big\| \\
&\le (1 - \alpha_n\gamma_n)\|x_n - x^*\| + \alpha_n\gamma_n\|x^*\| \\
&\le \max\{\|x_n - x^*\|, \|x^*\|\}.
\end{aligned} \tag{3.5}
$$
Thus, we deduce by induction that
$$\|x_n - x^*\| \le \max\{\|x_0 - x^*\|, \|x^*\|\}. \tag{3.6}$$
This indicates that the sequence $\{x_n\}$ is bounded.
Since the gradient $\nabla f$ is $(1/L)$-ism, $\gamma_n\nabla f$ is $(1/(\gamma_n L))$-ism. So by Lemma 2.1, $I - \gamma_n\nabla f$ is $(\gamma_n L/2)$-averaged; that is, $I - \gamma_n\nabla f = (1 - \gamma_n L/2)I + (\gamma_n L/2)T$ for some nonexpansive mapping $T$. Since $P_C$ is $(1/2)$-averaged, $P_C = (I + S)/2$ for some nonexpansive mapping $S$. Then, we can rewrite $x_{n+1}$ as
$$
\begin{aligned}
x_{n+1} &= \frac{1}{2}(I - \gamma_n(\nabla f + \alpha_n I))x_n + \frac{1}{2}S(I - \gamma_n(\nabla f + \alpha_n I))x_n \\
&= \frac{1}{2}\big(x_n - \gamma_n\nabla f(x_n)\big) - \frac{1}{2}\gamma_n\alpha_n x_n + \frac{1}{2}S(I - \gamma_n(\nabla f + \alpha_n I))x_n \\
&= \frac{2 - \gamma_n L}{4}x_n + \frac{\gamma_n L}{4}Tx_n - \frac{1}{2}\gamma_n\alpha_n x_n + \frac{1}{2}S(I - \gamma_n(\nabla f + \alpha_n I))x_n \\
&= \frac{2 - \gamma_n L}{4}x_n + \frac{2 + \gamma_n L}{4}y_n,
\end{aligned} \tag{3.7}
$$
where
$$y_n = \frac{4}{2 + \gamma_n L}\left[\frac{\gamma_n L}{4}Tx_n - \frac{1}{2}\gamma_n\alpha_n x_n + \frac{1}{2}S(I - \gamma_n(\nabla f + \alpha_n I))x_n\right]. \tag{3.8}$$
It follows that
$$
\begin{aligned}
\|y_{n+1} - y_n\|
&= \bigg\| \frac{4}{2+\gamma_{n+1}L}\Big[\frac{\gamma_{n+1}L}{4}Tx_{n+1} - \frac{1}{2}\gamma_{n+1}\alpha_{n+1}x_{n+1} + \frac{1}{2}S(I-\gamma_{n+1}(\nabla f+\alpha_{n+1}I))x_{n+1}\Big] \\
&\qquad - \frac{4}{2+\gamma_n L}\Big[\frac{\gamma_n L}{4}Tx_n - \frac{1}{2}\gamma_n\alpha_n x_n + \frac{1}{2}S(I-\gamma_n(\nabla f+\alpha_n I))x_n\Big] \bigg\| \\
&\le \frac{4}{2+\gamma_{n+1}L}\bigg\|\Big[\frac{\gamma_{n+1}L}{4}Tx_{n+1} - \frac{1}{2}\gamma_{n+1}\alpha_{n+1}x_{n+1} + \frac{1}{2}S(I-\gamma_{n+1}(\nabla f+\alpha_{n+1}I))x_{n+1}\Big] \\
&\qquad - \Big[\frac{\gamma_n L}{4}Tx_n - \frac{1}{2}\gamma_n\alpha_n x_n + \frac{1}{2}S(I-\gamma_n(\nabla f+\alpha_n I))x_n\Big]\bigg\| \\
&\quad + \bigg|\frac{4}{2+\gamma_{n+1}L} - \frac{4}{2+\gamma_n L}\bigg| \, \bigg\|\frac{\gamma_n L}{4}Tx_n - \frac{1}{2}\gamma_n\alpha_n x_n + \frac{1}{2}S(I-\gamma_n(\nabla f+\alpha_n I))x_n\bigg\| \\
&\le \frac{4}{2+\gamma_{n+1}L}\bigg[\bigg\|\frac{\gamma_{n+1}L}{4}Tx_{n+1} - \frac{\gamma_n L}{4}Tx_n\bigg\| + \frac{1}{2}\gamma_{n+1}\alpha_{n+1}\|x_{n+1}\| + \frac{1}{2}\gamma_n\alpha_n\|x_n\| \\
&\qquad + \frac{1}{2}\big\|(I-\gamma_{n+1}(\nabla f+\alpha_{n+1}I))x_{n+1} - (I-\gamma_n(\nabla f+\alpha_n I))x_n\big\|\bigg] \\
&\quad + \bigg|\frac{4}{2+\gamma_{n+1}L} - \frac{4}{2+\gamma_n L}\bigg| \, \bigg\|\frac{\gamma_n L}{4}Tx_n - \frac{1}{2}\gamma_n\alpha_n x_n + \frac{1}{2}S(I-\gamma_n(\nabla f+\alpha_n I))x_n\bigg\|.
\end{aligned} \tag{3.9}
$$
Now we choose a constant $M$ such that
$$\sup_n\left\{\|x_n\|,\; L\|Tx_n\|,\; \|\nabla f(x_n)\|,\; \bigg\|\frac{\gamma_n L}{4}Tx_n - \frac{1}{2}\gamma_n\alpha_n x_n + \frac{1}{2}S(I-\gamma_n(\nabla f+\alpha_n I))x_n\bigg\|\right\} \le M. \tag{3.10}$$
We have the following estimates:
$$
\begin{aligned}
\bigg\|\frac{\gamma_{n+1}L}{4}Tx_{n+1} - \frac{\gamma_n L}{4}Tx_n\bigg\|
&\le \frac{\gamma_{n+1}L}{4}\|Tx_{n+1} - Tx_n\| + \frac{|\gamma_{n+1} - \gamma_n|L}{4}\|Tx_n\| \\
&\le \frac{\gamma_{n+1}L}{4}\|x_{n+1} - x_n\| + |\gamma_{n+1} - \gamma_n|M, \\
\big\|(I-\gamma_{n+1}(\nabla f+\alpha_{n+1}I))x_{n+1} - (I-\gamma_n(\nabla f+\alpha_n I))x_n\big\|
&\le \big\|(I-\gamma_{n+1}\nabla f)x_{n+1} - (I-\gamma_{n+1}\nabla f)x_n\big\| + |\gamma_{n+1} - \gamma_n|\|\nabla f(x_n)\| \\
&\quad + \gamma_{n+1}\alpha_{n+1}\|x_{n+1}\| + \gamma_n\alpha_n\|x_n\| \\
&\le \|x_{n+1} - x_n\| + \big(|\gamma_{n+1} - \gamma_n| + \gamma_{n+1}\alpha_{n+1} + \gamma_n\alpha_n\big)M.
\end{aligned} \tag{3.11}
$$
Thus, we deduce
$$
\begin{aligned}
\|y_{n+1} - y_n\| &\le \frac{4}{2+\gamma_{n+1}L}\bigg[\frac{\gamma_{n+1}L}{4}\|x_{n+1}-x_n\| + |\gamma_{n+1}-\gamma_n|M + \frac{1}{2}\gamma_{n+1}\alpha_{n+1}M + \frac{1}{2}\gamma_n\alpha_n M \\
&\qquad + \frac{1}{2}\|x_{n+1}-x_n\| + \frac{1}{2}\big(|\gamma_{n+1}-\gamma_n| + \gamma_{n+1}\alpha_{n+1} + \gamma_n\alpha_n\big)M\bigg] + \frac{4L|\gamma_{n+1}-\gamma_n|}{(2+\gamma_{n+1}L)(2+\gamma_n L)}M \\
&\le \|x_{n+1}-x_n\| + \frac{6\big(|\gamma_{n+1}-\gamma_n| + \gamma_{n+1}\alpha_{n+1} + \gamma_n\alpha_n\big)}{2+\gamma_{n+1}L}M + \frac{4L|\gamma_{n+1}-\gamma_n|}{(2+\gamma_{n+1}L)(2+\gamma_n L)}M.
\end{aligned} \tag{3.12}
$$
Note that $\alpha_n \to 0$ and $\gamma_{n+1} - \gamma_n \to 0$. Hence,
$$\limsup_{n \to \infty} \big(\|y_{n+1} - y_n\| - \|x_{n+1} - x_n\|\big) \le 0. \tag{3.13}$$
By Lemma 2.3, it follows that
$$\lim_{n \to \infty} \|y_n - x_n\| = 0. \tag{3.14}$$
Consequently, from (3.7),
$$\lim_{n \to \infty} \|x_{n+1} - x_n\| = \lim_{n \to \infty} \frac{2 + \gamma_n L}{4}\|y_n - x_n\| = 0. \tag{3.15}$$
Now we show that the weak limit set $\omega_w(x_n) \subset \Omega$. Choose any $\hat{x} \in \omega_w(x_n)$. Since $\{x_n\}$ is bounded, there must exist a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $x_{n_j} \rightharpoonup \hat{x}$. At the same time, the real number sequence $\{\gamma_{n_j}\}$ is bounded. Thus, there exists a subsequence $\{\gamma_{n_{j_i}}\}$ of $\{\gamma_{n_j}\}$ which converges to $\gamma$. Without loss of generality, we may assume that $\gamma_{n_j} \to \gamma$. Note that $0 < \liminf_{n \to \infty} \gamma_n \le \limsup_{n \to \infty} \gamma_n < 2/L$, so $\gamma \in (0, 2/L)$; that is, $\gamma_{n_j} \to \gamma \in (0, 2/L)$ as $j \to \infty$.
Next, we only need to show that $\hat{x} \in \Omega$. First, from (3.15) we have $x_{n_j+1} - x_{n_j} \to 0$. Then, we have
$$
\begin{aligned}
\big\|x_{n_j} - P_C(I - \gamma\nabla f)x_{n_j}\big\|
&\le \|x_{n_j} - x_{n_j+1}\| + \big\|x_{n_j+1} - P_C(I - \gamma_{n_j}\nabla f)x_{n_j}\big\| + \big\|P_C(I - \gamma_{n_j}\nabla f)x_{n_j} - P_C(I - \gamma\nabla f)x_{n_j}\big\| \\
&= \|x_{n_j} - x_{n_j+1}\| + \big\|P_C[(I - \gamma_{n_j}(\nabla f + \alpha_{n_j} I))x_{n_j}] - P_C(I - \gamma_{n_j}\nabla f)x_{n_j}\big\| \\
&\quad + \big\|P_C(I - \gamma_{n_j}\nabla f)x_{n_j} - P_C(I - \gamma\nabla f)x_{n_j}\big\| \\
&\le \|x_{n_j} - x_{n_j+1}\| + \alpha_{n_j}\gamma_{n_j}\|x_{n_j}\| + |\gamma_{n_j} - \gamma|\,\|\nabla f(x_{n_j})\| \longrightarrow 0.
\end{aligned} \tag{3.16}
$$
Since $\gamma \in (0, 2/L)$, $P_C(I - \gamma\nabla f)$ is nonexpansive. It then follows from Lemma 2.2 (demiclosedness principle) that $\hat{x} \in \mathrm{Fix}(P_C(I - \gamma\nabla f))$. Hence, $\hat{x} \in \Omega$ because $\Omega = \mathrm{Fix}(P_C(I - \gamma\nabla f))$. So, $\omega_w(x_n) \subset \Omega$.
Finally, we prove that $x_n \to \tilde{x}$, where $\tilde{x} = P_\Omega(0)$ is the minimum norm solution of (1.1). First, we show that $\liminf_{n \to \infty} \langle \tilde{x}, x_n - \tilde{x} \rangle \ge 0$, equivalently $\limsup_{n \to \infty} \langle -\tilde{x}, x_n - \tilde{x} \rangle \le 0$. Observe that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ satisfying
$$\lim_{j \to \infty} \langle \tilde{x}, x_{n_j} - \tilde{x} \rangle = \liminf_{n \to \infty} \langle \tilde{x}, x_n - \tilde{x} \rangle. \tag{3.17}$$
Since $\{x_{n_j}\}$ is bounded, there exists a subsequence $\{x_{n_{j_i}}\}$ of $\{x_{n_j}\}$ such that $x_{n_{j_i}} \rightharpoonup \hat{x}$. Without loss of generality, we assume that $x_{n_j} \rightharpoonup \hat{x}$; by the argument above, $\hat{x} \in \Omega$. Since $\tilde{x} = P_\Omega(0)$, property (c) of the projection gives
$$\liminf_{n \to \infty} \langle \tilde{x}, x_n - \tilde{x} \rangle = \lim_{j \to \infty} \langle \tilde{x}, x_{n_j} - \tilde{x} \rangle = \langle \tilde{x}, \hat{x} - \tilde{x} \rangle \ge 0. \tag{3.18}$$
Since $\gamma_n < 2/(L + 2\alpha_n)$, we have $\gamma_n/(1 - \alpha_n\gamma_n) < 2/L$, so $I - [\gamma_n/(1 - \alpha_n\gamma_n)]\nabla f$ is nonexpansive. By using property (b) of $P_C$, we have
$$
\begin{aligned}
\|x_{n+1} - \tilde{x}\|^2 &= \big\langle P_C[(I - \gamma_n(\nabla f + \alpha_n I))x_n] - P_C[\tilde{x} - \gamma_n\nabla f(\tilde{x})], x_{n+1} - \tilde{x} \big\rangle \\
&\le \big\langle (I - \gamma_n(\nabla f + \alpha_n I))x_n - (\tilde{x} - \gamma_n\nabla f(\tilde{x})), x_{n+1} - \tilde{x} \big\rangle \\
&= (1 - \alpha_n\gamma_n)\left\langle \Big(I - \frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f\Big)x_n - \Big(I - \frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f\Big)\tilde{x}, x_{n+1} - \tilde{x} \right\rangle - \alpha_n\gamma_n\langle \tilde{x}, x_{n+1} - \tilde{x} \rangle \\
&\le (1 - \alpha_n\gamma_n)\left\|\Big(I - \frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f\Big)x_n - \Big(I - \frac{\gamma_n}{1 - \alpha_n\gamma_n}\nabla f\Big)\tilde{x}\right\| \|x_{n+1} - \tilde{x}\| - \alpha_n\gamma_n\langle \tilde{x}, x_{n+1} - \tilde{x} \rangle \\
&\le (1 - \alpha_n\gamma_n)\|x_n - \tilde{x}\|\,\|x_{n+1} - \tilde{x}\| - \alpha_n\gamma_n\langle \tilde{x}, x_{n+1} - \tilde{x} \rangle \\
&\le \frac{1 - \alpha_n\gamma_n}{2}\|x_n - \tilde{x}\|^2 + \frac{1}{2}\|x_{n+1} - \tilde{x}\|^2 - \alpha_n\gamma_n\langle \tilde{x}, x_{n+1} - \tilde{x} \rangle.
\end{aligned} \tag{3.19}
$$
It follows that
$$\|x_{n+1} - \tilde{x}\|^2 \le (1 - \alpha_n\gamma_n)\|x_n - \tilde{x}\|^2 + 2\alpha_n\gamma_n\langle -\tilde{x}, x_{n+1} - \tilde{x} \rangle. \tag{3.20}$$
From Lemma 2.4, (3.18), and (3.20), we deduce that $x_n \to \tilde{x}$. This completes the proof.
Remark 3.2. We obtain the strong convergence of the regularized gradient projection method (3.1) under some different control conditions.
Remark 3.3. From the proof of the result, we observe that our algorithm (3.1) converges to a special solution $\tilde{x}$ of the minimization problem (1.1). As a matter of fact, this special solution $\tilde{x}$ is the minimum-norm solution of the minimization problem (1.1). Finding the minimum-norm solution of a practical problem is interesting work due to its applications. A typical example is the least-squares solution to the constrained linear inverse problem; see, for example, [15]. For some related works on the minimum-norm solution and the minimization problems, please see [16–22]. A small numerical illustration of the minimum-norm behavior follows.
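To make this remark concrete, here is a hedged numerical sketch (the underdetermined least-squares instance, the box constraint, and the stepsize choices $\alpha_n = 1/n$, $\gamma_n = 1/(L + 2\alpha_n)$ are our own illustrative assumptions) in which (3.1) is run on a problem whose minimizers form a line; the iterates approach the minimum-norm minimizer:

```python
import numpy as np

A = np.array([[1.0, 1.0]])                     # underdetermined: minimizers form a line
b = np.array([1.0])
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)                 # Lipschitz constant of grad f
project = lambda x: np.clip(x, -10.0, 10.0)    # P_C for a large box containing the minimizers

x = np.array([5.0, -3.0])
for n in range(1, 200001):
    alpha_n = 1.0 / n                          # alpha_n -> 0 and sum alpha_n = inf (condition (1))
    gamma_n = 1.0 / (L + 2.0 * alpha_n)        # gamma_n in (0, 2/(L + 2 alpha_n)) (condition (2))
    x = project(x - gamma_n * (grad_f(x) + alpha_n * x))   # recursion (3.1)

print(x)                                       # approaches [0.5, 0.5], the minimum-norm minimizer
print(np.linalg.pinv(A) @ b)                   # the pseudoinverse yields the same point
```

Note that the plain recursion (1.5) started at the same point would stop at whichever minimizer it first reaches; the vanishing regularization $\alpha_n x$ is what steers the iterates toward the minimum-norm solution.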
Acknowledgments
Y. Yao was supported in part by the Colleges and Universities Science and Technology Development Foundation (20091003) of Tianjin, NSFC 11071279, and NSFC 71161001-G0105. W. Jigang was supported in part by NSFC 61173032.
References
[1] P. H. Calamai and J. J. Moré, "Projected gradient methods for linearly constrained problems," Mathematical Programming, vol. 39, no. 1, pp. 93–116, 1987.
[2] E. M. Gafni and D. P. Bertsekas, "Two-metric projection methods for constrained optimization," SIAM Journal on Control and Optimization, vol. 22, no. 6, pp. 936–964, 1984.
[3] E. S. Levitin and B. T. Polyak, "Constrained minimization methods," USSR Computational Mathematics and Mathematical Physics, vol. 6, no. 5, pp. 1–50, 1966.
[4] B. T. Polyak, Introduction to Optimization, Optimization Software, New York, NY, USA, 1987.
[5] A. Ruszczyński, Nonlinear Optimization, Princeton University Press, Princeton, NJ, USA, 2006.
[6] M. Su and H.-K. Xu, "Remarks on the gradient-projection algorithm," Journal of Nonlinear Analysis and Optimization, vol. 1, pp. 35–43, 2010.
[7] C. Wang and N. Xiu, "Convergence of the gradient projection method for generalized convex minimization," Computational Optimization and Applications, vol. 16, no. 2, pp. 111–120, 2000.
[8] N. Xiu, C. Wang, and J. Zhang, "Convergence properties of projection and contraction methods for variational inequality problems," Applied Mathematics and Optimization, vol. 43, no. 2, pp. 147–168, 2001.
[9] N. Xiu, C. Wang, and L. Kong, "A note on the gradient projection method with exact stepsize rule," Journal of Computational Mathematics, vol. 25, no. 2, pp. 221–230, 2007.
[10] H.-K. Xu, "Averaged mappings and the gradient-projection algorithm," Journal of Optimization Theory and Applications, vol. 150, no. 2, pp. 360–378, 2011.
[11] C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
[12] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, vol. 28 of Cambridge Studies in Advanced Mathematics, Cambridge University Press, Cambridge, UK, 1990.
[13] T. Suzuki, "Strong convergence theorems for infinite families of nonexpansive mappings in general Banach spaces," Fixed Point Theory and Applications, vol. 2005, no. 1, pp. 103–123, 2005.
[14] H.-K. Xu, "Iterative algorithms for nonlinear operators," Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
[15] A. Sabharwal and L. C. Potter, "Convexly constrained linear inverse problems: iterative least-squares and regularization," IEEE Transactions on Signal Processing, vol. 46, no. 9, pp. 2345–2352, 1998.
[16] Y. Yao, R. Chen, and Y.-C. Liou, "A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem," Mathematical & Computer Modelling, vol. 55, no. 3-4, pp. 1506–1515, 2012.
[17] Y. Yao, R. Chen, and H.-K. Xu, "Schemes for finding minimum-norm solutions of variational inequalities," Nonlinear Analysis: Theory, Methods & Applications, vol. 72, no. 7-8, pp. 3447–3456, 2010.
[18] Y. Yao, Y. J. Cho, and Y.-C. Liou, "Algorithms of common solutions for variational inclusions, mixed equilibrium problems and fixed point problems," European Journal of Operational Research, vol. 212, no. 2, pp. 242–250, 2011.
[19] Y. Yao, Y. J. Cho, and P.-X. Yang, "An iterative algorithm for a hierarchical problem," Journal of Applied Mathematics, vol. 2012, Article ID 320421, 13 pages, 2012.
[20] Y. Yao, Y.-C. Liou, and S. M. Kang, "Two-step projection methods for a system of variational inequality problems in Banach spaces," Journal of Global Optimization. In press.
[21] Y. Yao, M. A. Noor, and Y.-C. Liou, "Strong convergence of a modified extra-gradient method to the minimum-norm solution of variational inequalities," Abstract and Applied Analysis, vol. 2012, Article ID 817436, 9 pages, 2012.
[22] Y. Yao and N. Shahzad, "Strong convergence of a proximal point algorithm with general errors," Optimization Letters. In press.