Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 531912, 7 pages
http://dx.doi.org/10.1155/2013/531912

Research Article
An Extension of Subgradient Method for Variational Inequality Problems in Hilbert Space

Xueyong Wang, Shengjie Li, and Xipeng Kou
College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China
Correspondence should be addressed to Xueyong Wang; yonggk@163.com

Received 26 October 2012; Revised 30 January 2013; Accepted 1 February 2013
Academic Editor: Guanglu Zhou

Copyright © 2013 Xueyong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. An extension of the subgradient method for solving variational inequality problems is presented. A new iterative process, which combines the fixed point of a nonexpansive mapping with the current iterate, is generated. A weak convergence theorem is obtained for the three sequences generated by this process under some mild conditions.

1. Introduction

Let Ω be a nonempty closed convex subset of a real Hilbert space H, and let F : Ω → Ω be a continuous mapping. The variational inequality problem, denoted by VI(F, Ω), is to find a vector x* ∈ Ω such that

⟨x − x*, F(x*)⟩ ≥ 0, ∀x ∈ Ω. (1)

Throughout the paper, Ω* denotes the solution set of VI(F, Ω), which is assumed to be nonempty. In the special case where Ω is the nonnegative orthant, (1) reduces to the nonlinear complementarity problem: find a vector x* such that

x* ≥ 0, F(x*) ≥ 0, F(x*)ᵀx* = 0. (2)

The variational inequality problem plays an important role in optimization theory and variational analysis.
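As a concrete instance of (1) and (2), the following sketch (our illustration, not from the paper) checks the complementarity conditions for a small affine mapping F(x) = Mx + q on the nonnegative orthant of R²; the data M, q and the candidate solution x* = (1, 0) are assumptions chosen for the example.

```python
import numpy as np

# Assumed toy NCP data: F(x) = M x + q with M positive definite,
# so F is monotone; Omega is the nonnegative orthant of R^2.
M = 2.0 * np.eye(2)
q = np.array([-2.0, 1.0])
F = lambda x: M @ x + q

x_star = np.array([1.0, 0.0])   # candidate solution of (2)

# Conditions (2): x* >= 0, F(x*) >= 0, and F(x*)^T x* = 0.
assert np.all(x_star >= 0) and np.all(F(x_star) >= 0)
assert F(x_star) @ x_star == 0.0
```

Here F(x*) = (0, 1), so for every x ≥ 0 we have ⟨x − x*, F(x*)⟩ = x₂ ≥ 0, i.e., x* also solves (1), illustrating the equivalence of the two formulations on the orthant.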
There are numerous applications of variational inequalities in mathematics as well as in equilibrium problems arising from engineering, economics, and other areas of real life; see [1–16] and the references therein. Many algorithms that employ the projection onto the feasible set Ω of the variational inequality, or onto some related sets, in order to iteratively reach a solution have been proposed to solve (1). Korpelevich [2] proposed an extragradient method for finding the saddle point of some special cases of the equilibrium problem. Solodov and Svaiter [3] extended the extragradient algorithm by replacing the set Ω with the intersection of two sets related to VI(F, Ω). In each iteration of their algorithm, the new iterate x_{k+1} is calculated according to the following scheme. Given the current iterate x_k, compute r(x_k) = x_k − P_Ω(x_k − F(x_k)); if r(x_k) = 0, stop; otherwise, compute

z_k = x_k − η_k r(x_k), (3)

where η_k = γ^{m_k} and m_k is the smallest nonnegative integer m satisfying

⟨F(x_k − γ^m r(x_k)), r(x_k)⟩ ≥ δ‖r(x_k)‖², (4)

and then compute

x_{k+1} = P_{Ω∩H_k}(x_k), (5)

where H_k = {x ∈ Rⁿ | ⟨x − z_k, F(z_k)⟩ ≤ 0}. On the other hand, Nadezhkina and Takahashi [11] obtained x_{k+1} by the iterative formula

x_{k+1} = α_k x_k + (1 − α_k) S P_Ω(x_k − λ_k F(x_k)), (6)

where {α_k}_{k=0}^∞ is a sequence in (0, 1), {λ_k}_{k=0}^∞ is a sequence of stepsizes, and S : Ω → Ω is a nonexpansive mapping. Denoting the fixed point set of S by F(S) and assuming F(S) ∩ Ω* ≠ ∅, they proved that the sequence {x_k}_{k=0}^∞ converges weakly to some x* ∈ F(S) ∩ Ω*.

Motivated and inspired by the extragradient methods in [2, 3], in this paper we study further extragradient methods and analyze the weak convergence of the three sequences generated by our method. The rest of this paper is organized as follows. In Section 2, we give some preliminaries and basic results.
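To make the Solodov–Svaiter step concrete, the following sketch (our illustration; the mapping F, the set Ω, and all numerical values are assumptions) carries out the line search (4) and verifies the geometric role of the halfspace in (5): H_k contains the solution of the toy problem, while the current iterate lies strictly outside it.

```python
import numpy as np

# Assumed toy problem: F(x) = 2x + q is monotone on Omega = R^2_+,
# and VI(F, Omega) has the unique solution x_star = (1, 1).
q = np.array([-2.0, -2.0])
F = lambda x: 2.0 * x + q
proj = lambda x: np.maximum(x, 0.0)        # P_Omega for the orthant

delta, gamma = 0.5, 0.5
x = np.zeros(2)                            # current iterate x_k
r = x - proj(x - F(x))                     # projected residual r(x_k)

m = 0                                      # Armijo-type line search (4)
while np.dot(F(x - gamma**m * r), r) < delta * np.dot(r, r):
    m += 1
eta = gamma**m                             # here eta = 0.25
z = x - eta * r                            # z_k as in (3)

# H_k = {y : <y - z_k, F(z_k)> <= 0} separates x_k from the solution:
assert np.dot(x - z, F(z)) > 0                        # x_k lies outside H_k
assert np.dot(np.array([1.0, 1.0]) - z, F(z)) <= 0    # x_star lies in H_k
```

Projecting x_k onto Ω ∩ H_k therefore discards a halfspace known not to contain the solution, which is the mechanism behind (5).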
In Section 3, we present an extragradient algorithm and then discuss the weak convergence of the sequences generated by the algorithm. In Section 4, we modify the extragradient algorithm and give its convergence analysis.

2. Preliminaries and Basic Results

Let H be a real Hilbert space with ⟨x, y⟩ denoting the inner product of the vectors x, y. Weak convergence and strong convergence of a sequence {x_k}_{k=0}^∞ to a point x are denoted by x_k ⇀ x and x_k → x, respectively. The identity mapping from Ω to itself is denoted by I. For a vector x ∈ H, the orthogonal projection of x onto Ω, denoted by P_Ω(x), is defined as

P_Ω(x) := arg min {‖y − x‖ | y ∈ Ω}. (7)

The following lemma states some well-known properties of the orthogonal projection operator.

Lemma 1. One has

(1) ⟨x − P_Ω(x), P_Ω(x) − y⟩ ≥ 0, ∀x ∈ H, ∀y ∈ Ω; (8)

(2) ‖P_Ω(x) − P_Ω(y)‖ ≤ ‖x − y‖, ∀x, y ∈ H; (9)

(3) ‖P_Ω(x) − y‖² ≤ ‖x − y‖² − ‖x − P_Ω(x)‖², ∀x ∈ H, ∀y ∈ Ω; (10)

(4) ‖P_Ω(x) − P_Ω(y)‖² ≤ ‖x − y‖² − ‖P_Ω(x) − x + y − P_Ω(y)‖², ∀x, y ∈ H. (11)

A mapping F is called monotone if

⟨x − y, F(x) − F(y)⟩ ≥ 0, ∀x, y ∈ Ω. (12)

A mapping F is called Lipschitz continuous if there exists an L ≥ 0 such that

‖F(x) − F(y)‖ ≤ L‖x − y‖, ∀x, y ∈ Ω. (13)

We denote the normal cone of Ω at v ∈ Ω by

N_Ω(v) := {w ∈ H | ⟨v − u, w⟩ ≥ 0, ∀u ∈ Ω}, (14)

and define the operator T by

T(v) := F(v) + N_Ω(v) if v ∈ Ω, and T(v) := ∅ if v ∉ Ω. (15)

Then T is maximal monotone, and it is well known that 0 ∈ T(v) if and only if v ∈ Ω*. For more details see, for example, [9] and the references therein.

A mapping S : Ω → Ω is called nonexpansive if

‖S(x) − S(y)‖ ≤ ‖x − y‖, ∀x, y ∈ Ω. (16)

The graph of T, denoted by G(T), is defined by

G(T) := {(v, u) | v ∈ Ω, u ∈ T(v)}, (17)

and the fixed point set of a mapping S, denoted by F(S), is defined by

F(S) := {x ∈ Ω | S(x) = x}. (18)

The following lemma is established in Hilbert space and is well known as the Opial condition.

Lemma 2. For any sequence {x_k}_{k=0}^∞ ⊂ H that converges weakly to x (x_k ⇀ x), one has

lim inf_{k→∞} ‖x_k − x‖ < lim inf_{k→∞} ‖x_k − y‖, ∀y ≠ x. (19)

The next lemma is proposed in [10].

Lemma 3 (Demiclosedness principle). Let Ω be a closed, convex subset of a real Hilbert space H, and let S : Ω → H be a nonexpansive mapping. Then I − S is demiclosed at y ∈ H; that is, for any sequence {x_k}_{k=0}^∞ ⊂ Ω such that x_k ⇀ x̄ with x̄ ∈ Ω and (I − S)x_k → y, one has (I − S)x̄ = y.

3. An Algorithm and Its Convergence Analysis

In this section, we give our algorithm and then discuss its convergence. First, we need the following definition.

Definition 4. For a vector x ∈ Ω, the projected residual function is defined as

r(x) := x − P_Ω(x − F(x)). (20)

Obviously, x ∈ Ω* if and only if r(x) = 0.

Now we describe our algorithm.

Algorithm A.

Step 0. Take δ ∈ (0, 1), γ ∈ (0, 1), x_0 ∈ Ω, and k = 0.

Step 1. For the current iterate x_k ∈ Ω, compute

y_k = P_Ω(x_k − F(x_k)), (21)

z_k = (1 − η_k) x_k + η_k y_k, (22)

where η_k = γ^{m_k} and m_k is the smallest nonnegative integer m satisfying

⟨F(x_k − γ^m r(x_k)), r(x_k)⟩ ≥ δ‖r(x_k)‖². (23)

Compute

t_k = P_{H_k∩Ω}(x_k), (24)

x_{k+1} = α_k x_k + (1 − α_k) S t_k, (25)

where {α_k} ⊂ (a, b) with a, b ∈ (0, 1), H_k = {x ∈ Ω | ⟨x − z_k, F(z_k)⟩ ≤ 0}, and S : Ω → Ω is a nonexpansive mapping.

Step 2. If ‖r(x_{k+1})‖ = 0, stop; otherwise, return to Step 1.
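The steps above can be sketched numerically as follows (our illustration, not the authors' code). All problem data are assumptions: Ω is the nonnegative orthant of R², F(x) = 2x + q is monotone and Lipschitz, and we take S = I, which is trivially nonexpansive, so the fixed-point constraint is vacuous. The exact projection onto H_k ∩ Ω in (24) is approximated by Dykstra's alternating projections, an implementation choice of ours; the paper only assumes the exact projection is available.

```python
import numpy as np

# Assumed toy data: F(x) = 2x + q is monotone and Lipschitz on
# Omega = R^2_+, and VI(F, Omega) has the unique solution (1, 1).
q = np.array([-2.0, -2.0])
F = lambda x: 2.0 * x + q
proj_omega = lambda x: np.maximum(x, 0.0)        # P_Omega for the orthant
residual = lambda x: x - proj_omega(x - F(x))    # r(x) of Definition 4

def proj_halfspace(x, z, g):
    # Projection onto H = {x : <x - z, g> <= 0}.
    v = np.dot(x - z, g)
    return x if v <= 0.0 else x - (v / np.dot(g, g)) * g

def proj_intersection(x, z, g, inner=50):
    # P_{H_k ∩ Omega} via Dykstra's alternating projections -- our
    # implementation choice; the paper assumes the exact projection.
    p = np.zeros_like(x)
    s = np.zeros_like(x)
    for _ in range(inner):
        y = proj_halfspace(x + p, z, g)
        p = x + p - y
        x = proj_omega(y + s)
        s = y + s - x
    return x

def algorithm_a(x, delta=0.5, gamma=0.5, alpha=0.5, S=lambda t: t, iters=200):
    # S = I is trivially nonexpansive, so F(S) ∩ Omega* = Omega* here.
    for _ in range(iters):
        r = residual(x)
        if np.linalg.norm(r) < 1e-12:                 # Step 2 stopping rule
            break
        y = proj_omega(x - F(x))                      # (21)
        m, eta = 0, 1.0                               # Armijo-type search (23)
        while np.dot(F(x - eta * r), r) < delta * np.dot(r, r) and m < 60:
            m += 1
            eta = gamma ** m
        z = (1.0 - eta) * x + eta * y                 # (22)
        t = proj_intersection(x, z, F(z))             # (24): P_{H_k ∩ Omega}(x_k)
        x = alpha * x + (1.0 - alpha) * S(t)          # (25)
    return x

x = algorithm_a(np.zeros(2))                          # converges toward (1, 1)
```

On this toy problem the iterates approach the solution at a linear rate, consistent with the weak (here, finite-dimensional, hence strong) convergence established below; the line search always terminates because ⟨F(x), r(x)⟩ ≥ ‖r(x)‖² > δ‖r(x)‖² whenever r(x) ≠ 0.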
Remark 5. The iterate t_k is well defined in Algorithm A according to [3] and can be interpreted as follows: if (23) is well defined, then t_k can be obtained by the two-stage scheme

x̄_k = P_{H_k}(x_k),  t_k = P_{H_k∩Ω}(x̄_k). (26)

For more details, see [3, 4].

Now we investigate the weak convergence of our algorithm. First, we recall the following result, which was proposed by Schu [17].

Lemma 6. Let H be a real Hilbert space, let {α_k}_{k=0}^∞ ⊂ (a, b) (a, b ∈ (0, 1)) be a sequence of real numbers, and let {v_k}_{k=0}^∞ ⊂ H and {w_k}_{k=0}^∞ ⊂ H be sequences such that

lim sup_{k→∞} ‖v_k‖ ≤ c,  lim sup_{k→∞} ‖w_k‖ ≤ c, (27)

lim_{k→∞} ‖α_k v_k + (1 − α_k) w_k‖ = c (28)

for some c ≥ 0. Then one has lim_{k→∞} ‖v_k − w_k‖ = 0.

The following theorem is crucial in proving the boundedness of the sequence {x_k}_{k=0}^∞.

Theorem 7. Let Ω be a nonempty, closed, and convex subset of H, let F be a monotone and L-Lipschitz continuous mapping with F(S) ∩ Ω* ≠ ∅, and let x* ∈ Ω*. Then for the sequences {x_k}_{k=0}^∞ and {t_k}_{k=0}^∞ generated by Algorithm A, one has

‖t_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − t_k‖² + 2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] ⟨z_k − t_k, F(z_k)⟩. (29)

Proof. Let x = x_k and y = x*; note that x* ∈ H_k ∩ Ω. It follows from Lemma 1 (10) that

‖P_{H_k∩Ω}(x_k) − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − P_{H_k∩Ω}(x_k)‖², (30)

that is,

‖t_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − t_k‖². (31)

From (20)–(23) in Algorithm A we get ⟨x_k − z_k, F(z_k)⟩ = η_k ⟨r(x_k), F(z_k)⟩ ≥ δη_k‖r(x_k)‖² > 0, which means x_k ∉ H_k. So, by the definition of the projection operator and [3], we obtain

x̄_k = P_{H_k}(x_k) = x_k − [⟨x_k − z_k, F(z_k)⟩ / ‖F(z_k)‖²] F(z_k) = x_k − [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] F(z_k). (32)

Substituting (32) into (31) (applied at x̄_k, since t_k = P_{H_k∩Ω}(x̄_k) by Remark 5), we have

‖t_k − x*‖² ≤ ‖x_k − [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] F(z_k) − x*‖² − ‖x_k − [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] F(z_k) − t_k‖²
= ‖x_k − x*‖² − ‖x_k − t_k‖² − 2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] ⟨t_k − x*, F(z_k)⟩. (33)

Since F is monotone and z_k ∈ Ω, combining with (1) we obtain

⟨z_k − x*, F(z_k)⟩ ≥ 0. (34)

Writing ⟨t_k − x*, F(z_k)⟩ = ⟨t_k − z_k, F(z_k)⟩ + ⟨z_k − x*, F(z_k)⟩ and using (34), we thus get

‖t_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − t_k‖² + 2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] ⟨z_k − t_k, F(z_k)⟩, (35)

which completes the proof.

Theorem 8. Let Ω be a nonempty, closed, and convex subset of H, let F be a monotone and L-Lipschitz continuous mapping, let F(S) ∩ Ω* ≠ ∅, and let x* ∈ F(S) ∩ Ω*. Then for the sequences {x_k}_{k=0}^∞, {y_k}_{k=0}^∞, {t_k}_{k=0}^∞ generated by Algorithm A, one has

‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − (1 − α_k) [η_k δ / ‖F(z_k)‖]² ‖x_k − y_k‖⁴,
‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − [(1 − α_k)/2] ‖x_k − t_k‖². (36)

Furthermore,

lim_{k→∞} ‖x_k − y_k‖ = 0,  lim_{k→∞} ‖x_k − t_k‖ = 0,  lim_{k→∞} ‖y_k − t_k‖ = 0. (37)

Proof. Using (22), we have z_k − x_k = −η_k r(x_k), so

[η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] ⟨z_k − x_k, F(z_k)⟩ = [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] ⟨−η_k r(x_k), F(z_k)⟩ = −[η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖]². (38)

By the Cauchy–Schwarz inequality,

2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] ⟨x_k − t_k, F(z_k)⟩ ≤ [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖]² + ‖x_k − t_k‖². (39)

Hence, splitting ⟨z_k − t_k, F(z_k)⟩ = ⟨z_k − x_k, F(z_k)⟩ + ⟨x_k − t_k, F(z_k)⟩ in (29) and using (38), (39), and (23),

‖t_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − t_k‖² − 2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖]² + [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖]² + ‖x_k − t_k‖²
= ‖x_k − x*‖² − [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖]²
≤ ‖x_k − x*‖² − [η_k δ / ‖F(z_k)‖]² ‖x_k − y_k‖⁴, (40)

where the last step uses r(x_k) = x_k − y_k and ⟨r(x_k), F(z_k)⟩ ≥ δ‖r(x_k)‖² from (23). Then we have

‖x_{k+1} − x*‖² = ‖α_k x_k + (1 − α_k) S t_k − x*‖² = ‖α_k (x_k − x*) + (1 − α_k)(S t_k − x*)‖²
≤ α_k ‖x_k − x*‖² + (1 − α_k) ‖t_k − x*‖²
≤ α_k ‖x_k − x*‖² + (1 − α_k) ‖x_k − x*‖² − (1 − α_k) [η_k δ / ‖F(z_k)‖]² ‖x_k − y_k‖⁴
= ‖x_k − x*‖² − (1 − α_k) [η_k δ / ‖F(z_k)‖]² ‖x_k − y_k‖⁴
≤ ‖x_k − x*‖², (41)

where the first inequality follows from the convexity of ‖·‖² and the fact that S is nonexpansive with S x* = x*.

This means that {x_k}_{k=0}^∞ is bounded, and so is {z_k}_{k=0}^∞. Since F is continuous, there exists a constant N > 0 such that ‖F(z_k)‖ ≤ N for all k, and hence

‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − (1 − α_k) (δ/N)² η_k² ‖x_k − y_k‖⁴. (42)

So the sequence {‖x_k − x*‖} is nonincreasing; there exists c ≥ 0 with lim_{k→∞} ‖x_k − x*‖ = c, and hence

lim_{k→∞} η_k ‖x_k − y_k‖ = 0, (43)

which implies that lim_{k→∞} ‖x_k − y_k‖ = 0 or lim_{k→∞} η_k = 0. If lim_{k→∞} ‖x_k − y_k‖ = 0, we get the conclusion. If lim_{k→∞} η_k = 0, we can deduce that the inequality (23) in Algorithm A is not satisfied for m_k − 1; that is, there exists k_0 such that for all k ≥ k_0,

⟨F(x_k − γ^{−1} η_k r(x_k)), r(x_k)⟩ < δ ‖r(x_k)‖². (44)

Applying (8) with x = x_k − F(x_k) and y = x_k leads to

⟨x_k − F(x_k) − P_Ω(x_k − F(x_k)), P_Ω(x_k − F(x_k)) − x_k⟩ ≥ 0. (45)

Therefore

⟨F(x_k), r(x_k)⟩ ≥ ‖r(x_k)‖², as r(x_k) = x_k − P_Ω(x_k − F(x_k)). (46)

Passing to the limit in (44) and (46), noting that γ^{−1}η_k → 0, we get lim_{k→∞} ‖x_k − y_k‖² ≤ δ lim_{k→∞} ‖x_k − y_k‖²; since δ ∈ (0, 1), we obtain lim_{k→∞} ‖x_k − y_k‖ = 0.

On the other hand, using the Cauchy–Schwarz inequality again, we have

2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖²] ⟨x_k − t_k, F(z_k)⟩ ≤ 2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖]² + (1/2) ‖x_k − t_k‖². (47)

Therefore,

‖t_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − t_k‖² − 2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖]² + 2 [η_k ⟨r(x_k), F(z_k)⟩ / ‖F(z_k)‖]² + (1/2) ‖x_k − t_k‖²
= ‖x_k − x*‖² − (1/2) ‖x_k − t_k‖². (48)

Then we have

‖x_{k+1} − x*‖² = ‖α_k x_k + (1 − α_k) S t_k − x*‖²
≤ α_k ‖x_k − x*‖² + (1 − α_k) ‖t_k − x*‖²
≤ α_k ‖x_k − x*‖² + (1 − α_k) ‖x_k − x*‖² − [(1 − α_k)/2] ‖x_k − t_k‖²
= ‖x_k − x*‖² − [(1 − α_k)/2] ‖x_k − t_k‖². (49)

Noting that α_k ∈ (a, b) with a, b ∈ (0, 1), it easily follows that 0 < (1 − b)/2 < (1 − α_k)/2 < (1 − a)/2 < 1, which implies that

lim_{k→∞} ‖x_k − t_k‖ = 0. (50)

By the triangle inequality, we have

‖y_k − t_k‖ = ‖y_k − x_k + x_k − t_k‖ ≤ ‖y_k − x_k‖ + ‖x_k − t_k‖. (51)

Passing to the limit in (51), we conclude that

lim_{k→∞} ‖y_k − t_k‖ = 0. (52)

The proof is complete.

Theorem 9. Let Ω be a nonempty, closed, and convex subset of H, let F be a monotone and L-Lipschitz continuous mapping, and let F(S) ∩ Ω* ≠ ∅. Then the sequences {x_k}_{k=0}^∞, {y_k}_{k=0}^∞, {t_k}_{k=0}^∞ generated by Algorithm A converge weakly to the same point x* ∈ F(S) ∩ Ω*, where x* = lim_{k→∞} P_{F(S)∩Ω*}(x_k).

Proof. By Theorem 8, we know that {x_k}_{k=0}^∞ is bounded, which implies that there exists a subsequence {x_{k_j}}_{j=0}^∞ of {x_k}_{k=0}^∞ that converges weakly to some point x*; we will show that x* ∈ F(S) ∩ Ω*. First, we prove that x* ∈ F(S).
Let x′ ∈ F(S) ∩ Ω*. Since S is a nonexpansive mapping, from (31) we have

‖S t_k − x′‖ = ‖S t_k − S x′‖ ≤ ‖t_k − x′‖ ≤ ‖x_k − x′‖. (53)

As in (41), the sequence {‖x_k − x′‖} is nonincreasing, so there exists c ≥ 0 with lim_{k→∞} ‖x_k − x′‖ = c. Passing to the limit in (53), we obtain

lim sup_{k→∞} ‖S t_k − x′‖ ≤ c. (54)

Then by (25) we have

lim_{k→∞} ‖α_k (x_k − x′) + (1 − α_k)(S t_k − x′)‖ = lim_{k→∞} ‖x_{k+1} − x′‖ = c. (55)

From Lemma 6, applied with v_k = x_k − x′ and w_k = S t_k − x′, it follows that

lim_{k→∞} ‖S t_k − x_k‖ = 0. (56)

By the triangle inequality, we have

‖S x_k − x_k‖ ≤ ‖S x_k − S t_k‖ + ‖S t_k − x_k‖ ≤ ‖x_k − t_k‖ + ‖S t_k − x_k‖, (57)

and then passing to the limit in (57), we deduce that

lim_{k→∞} ‖S x_k − x_k‖ = 0, (58)

which implies that x* ∈ F(S) by Lemma 3.

Second, we show that x* ∈ Ω*. Since x_{k_j} ⇀ x*, using Theorem 8 we claim that t_{k_j} ⇀ x* and y_{k_j} ⇀ x*. Let (v, u) ∈ G(T); then

u ∈ T(v) = F(v) + N_Ω(v), (59)

so that

u − F(v) ∈ N_Ω(v). (60)

Since t_{k_j} ∈ Ω, the definition of the normal cone gives

⟨v − t_{k_j}, u − F(v)⟩ ≥ 0. (61)

Applying (8) with x = x_{k_j} − F(x_{k_j}) and y = v, we have

⟨x_{k_j} − F(x_{k_j}) − P_Ω(x_{k_j} − F(x_{k_j})), P_Ω(x_{k_j} − F(x_{k_j})) − v⟩ ≥ 0, that is, ⟨x_{k_j} − F(x_{k_j}) − y_{k_j}, y_{k_j} − v⟩ ≥ 0. (62)

Combining (61) and (62), we obtain

⟨v − t_{k_j}, u⟩ ≥ ⟨v − t_{k_j}, F(v)⟩
≥ ⟨v − t_{k_j}, F(v)⟩ − ⟨x_{k_j} − F(x_{k_j}) − y_{k_j}, y_{k_j} − v⟩
= ⟨v − t_{k_j}, F(v) − F(t_{k_j})⟩ + ⟨v − t_{k_j}, F(t_{k_j})⟩ − ⟨v − y_{k_j}, y_{k_j} − x_{k_j}⟩ − ⟨v − y_{k_j}, F(x_{k_j})⟩
≥ ⟨v − t_{k_j}, F(t_{k_j})⟩ − ⟨v − y_{k_j}, y_{k_j} − x_{k_j}⟩ − ⟨v − y_{k_j}, F(x_{k_j})⟩, (63)

where the last inequality follows from the monotonicity of F. Since F is continuous, by (37) we have

lim_{j→∞} F(x_{k_j}) = lim_{j→∞} F(y_{k_j}),  lim_{j→∞} x_{k_j} = lim_{j→∞} t_{k_j}. (64)

Passing to the limit in (63), we obtain

⟨v − x*, u⟩ ≥ 0. (65)

As T is maximal monotone and (v, u) ∈ G(T) is arbitrary, we have x* ∈ T^{−1}(0), which implies that x* ∈ Ω*.

At last, we show that such an x* is unique. Let {x_{l_j}}_{j=0}^∞ be another subsequence of {x_k}_{k=0}^∞ such that x_{l_j} ⇀ x*_Δ. Then, by the same argument, x*_Δ ∈ F(S) ∩ Ω*. Suppose x*_Δ ≠ x*; by Lemma 2 we have

lim_{k→∞} ‖x_k − x*‖ = lim inf_{j→∞} ‖x_{k_j} − x*‖ < lim inf_{j→∞} ‖x_{k_j} − x*_Δ‖ = lim_{k→∞} ‖x_k − x*_Δ‖
= lim inf_{j→∞} ‖x_{l_j} − x*_Δ‖ < lim inf_{j→∞} ‖x_{l_j} − x*‖ = lim_{k→∞} ‖x_k − x*‖, (66)

which implies lim_{k→∞} ‖x_k − x*‖ < lim_{k→∞} ‖x_k − x*‖, a contradiction. Thus x*_Δ = x*, and the proof is complete.

4. Further Study

In this section we propose an extension of Algorithm A, which is effective in practice. Similar to the investigation in Section 3, for a constant μ > 0 we define a new projected residual function as follows:

r(x, μ) := x − P_Ω(x − μ F(x)). (67)

It is clear that the new projected residual function (67) reduces to (20) by setting μ = 1.

Algorithm B.

Step 0. Take δ ∈ (0, 1), γ ∈ (0, 1), μ_{−1} > 0, θ > 1, x_0 ∈ Ω, and k = 0.

Step 1. For the current iterate x_k ∈ Ω, compute

y_k = P_Ω(x_k − μ_k F(x_k)),  z_k = (1 − η_k) x_k + η_k y_k, (68)

where μ_k = min{θμ_{k−1}, 1}, η_k = γ^{m_k} μ_k, and m_k is the smallest nonnegative integer m satisfying

⟨F(x_k − γ^m μ_k r(x_k, μ_k)), r(x_k, μ_k)⟩ ≥ (δ/μ_k) ‖r(x_k, μ_k)‖². (69)

Compute

t_k = P_{H_k∩Ω}(x_k),  x_{k+1} = α_k x_k + (1 − α_k) S t_k, (70)

where {α_k} ⊂ (a, b) (a, b ∈ (0, 1)) and H_k = {x ∈ Ω | ⟨x − z_k, F(z_k)⟩ ≤ 0}.

Step 2. If ‖r(x_k, μ_k)‖ = 0, stop; otherwise, return to Step 1.

In the rest of this section, we discuss the weak convergence property of Algorithm B.

Lemma 10. For any μ > 0, one has

x* is a solution of VI(F, Ω) ⇔ x* = P_Ω(x* − μ F(x*)). (71)

Therefore, solving the variational inequality is equivalent to finding a zero of the projected residual function r(·, μ). Meanwhile, r(x, μ) is a continuous function of x, as the projection mapping is nonexpansive.

Lemma 11. For any x ∈ H and μ₁ ≥ μ₂ > 0, it holds that

‖r(x, μ₁)‖ ≥ ‖r(x, μ₂)‖. (72)

Theorem 12. Let Ω be a nonempty, closed, and convex subset of H, let F be a monotone and L-Lipschitz continuous mapping with F(S) ∩ Ω* ≠ ∅, and let x* ∈ Ω*. Then for the sequences {x_k}_{k=0}^∞, {t_k}_{k=0}^∞ generated by Algorithm B, one has

‖t_k − x*‖² ≤ ‖x_k − x*‖² − ‖x_k − t_k‖² + 2 [η_k ⟨r(x_k, μ_k), F(z_k)⟩ / ‖F(z_k)‖²] ⟨z_k − t_k, F(z_k)⟩. (73)

Proof. The proof is similar to that of Theorem 7, so we omit it.

Theorem 13. Let Ω be a nonempty, closed, and convex subset of H, let F be a monotone and L-Lipschitz continuous mapping, and let F(S) ∩ Ω* ≠ ∅. Then for the sequences {x_k}_{k=0}^∞, {y_k}_{k=0}^∞, {t_k}_{k=0}^∞ generated by Algorithm B, one has

‖x_{k+1} − x*‖² ≤ ‖x_k − x*‖² − [(1 − α_k)/2] ‖x_k − t_k‖². (74)

Furthermore,

lim_{k→∞} ‖x_k − y_k‖ = 0,  lim_{k→∞} ‖x_k − t_k‖ = 0,  lim_{k→∞} ‖y_k − t_k‖ = 0. (75)

Proof. The proof is similar to that of Theorem 8. The only difference is that (44) is replaced by

⟨F(x_k − γ^{−1} η_k r(x_k, μ_k)), r(x_k, μ_k)⟩ < (δ/μ_k) ‖r(x_k, μ_k)‖², (76)

where (76) follows from Lemma 11 with μ₁ = 1 and μ₂ = μ_k.

Theorem 14. Let Ω be a nonempty, closed, and convex subset of H, let F be a monotone and L-Lipschitz continuous mapping, and let F(S) ∩ Ω* ≠ ∅. Then the sequences {x_k}_{k=0}^∞, {y_k}_{k=0}^∞, {t_k}_{k=0}^∞ generated by Algorithm B converge weakly to the same point x* ∈ F(S) ∩ Ω*, where x* = lim_{k→∞} P_{F(S)∩Ω*}(x_k).

5. Conclusions

In this paper, we proposed an extension of the extragradient algorithm for solving monotone variational inequalities and established a weak convergence theorem for it. Algorithm B is effective in practice. Meanwhile, we pointed out that the weak limit of the sequences generated by our algorithm is also a fixed point of a given nonexpansive mapping.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (Grant 11171362) and the Fundamental Research Funds for the Central Universities (Grant CDJXS12101103). The authors thank the anonymous reviewers for their valuable comments and suggestions, which helped to improve the paper.

References

[1] R. W. Cottle, J.-S. Pang, and R. E. Stone, The Linear Complementarity Problem, Academic Press, Boston, Mass, USA, 1992.
[2] G. M. Korpelevich, "The extragradient method for finding saddle points and other problems," Matecon, vol. 12, pp. 747–756, 1976.
[3] M. V. Solodov and B. F. Svaiter, "A new projection method for variational inequality problems," SIAM Journal on Control and Optimization, vol. 37, no. 3, pp. 765–776, 1999.
[4] M. V. Solodov, "Stationary points of bound constrained minimization reformulations of complementarity problems," Journal of Optimization Theory and Applications, vol. 94, no. 2, pp. 449–467, 1997.
[5] Y. J. Wang, N. H. Xiu, and J. Z. Zhang, "Modified extragradient method for variational inequalities and verification of solution existence," Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 167–183, 2003.
[6] W. Takahashi and M. Toyoda, "Weak convergence theorems for nonexpansive mappings and monotone mappings," Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 417–428, 2003.
[7] E. H. Zarantonello, Projections on Convex Sets in Hilbert Space and Spectral Theory, Contributions to Nonlinear Functional Analysis, Academic Press, New York, NY, USA, 1971.
[8] Z. Opial, "Weak convergence of the sequence of successive approximations for nonexpansive mappings," Bulletin of the American Mathematical Society, vol. 73, pp. 591–597, 1967.
[9] R. T. Rockafellar, "On the maximality of sums of nonlinear monotone operators," Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.
[10] F. E. Browder, "Fixed-point theorems for noncompact mappings in Hilbert space," Proceedings of the National Academy of Sciences of the United States of America, vol. 53, pp. 1272–1276, 1965.
[11] N. Nadezhkina and W. Takahashi, "Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings," Journal of Optimization Theory and Applications, vol. 128, no. 1, pp. 191–201, 2006.
[12] B. C. Eaves, "On the basic theorem of complementarity," Mathematical Programming, vol. 1, no. 1, pp. 68–75, 1971.
[13] B. S. He and L. Z. Liao, "Improvements of some projection methods for monotone nonlinear variational inequalities," Journal of Optimization Theory and Applications, vol. 112, no. 1, pp. 111–128, 2002.
[14] Y. Censor, A. Gibali, and S. Reich, "The subgradient extragradient method for solving variational inequalities in Hilbert space," Journal of Optimization Theory and Applications, vol. 148, no. 2, pp. 318–335, 2011.
[15] Y. Censor, A. Gibali, and S. Reich, "Two extensions of Korpelevich's extragradient method for solving the variational inequality problem in Euclidean space," Tech. Rep., 2010.
[16] Y. J. Wang, N. H. Xiu, and C. Y. Wang, "Unified framework of extragradient-type methods for pseudomonotone variational inequalities," Journal of Optimization Theory and Applications, vol. 111, no. 3, pp. 641–656, 2001.
[17] J. Schu, "Weak and strong convergence to fixed points of asymptotically nonexpansive mappings," Bulletin of the Australian Mathematical Society, vol. 43, no. 1, pp. 153–159, 1991.