Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 942315, 8 pages
http://dx.doi.org/10.1155/2013/942315

Research Article
Solving the Variational Inequality Problem Defined on Intersection of Finite Level Sets

Songnian He (1,2) and Caiping Yang (1)
(1) College of Science, Civil Aviation University of China, Tianjin 300300, China
(2) Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China

Correspondence should be addressed to Songnian He; hesongnian2003@yahoo.com.cn
Received 3 March 2013; Accepted 14 April 2013
Academic Editor: Simeon Reich
Copyright © 2013 S. He and C. Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. Consider the variational inequality VI$(C,F)$ of finding a point $x^* \in C$ satisfying the property $\langle Fx^*, x - x^* \rangle \ge 0$ for all $x \in C$, where $C$ is the intersection of finitely many level sets of convex functions defined on a real Hilbert space $H$ and $F : H \to H$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. Relaxed and self-adaptive iterative algorithms are devised for computing the unique solution of VI$(C,F)$. Since our algorithm avoids calculating the projection $P_C$ directly (it replaces $P_C$ by computing several sequences of projections onto half-spaces containing the original domain $C$) and needs no information about the constants $L$ and $\eta$, its implementation is very easy. To prove the strong convergence of our algorithms, a new lemma is established, which can also be used as a fundamental tool for solving other nonlinear problems.
1. Introduction

The variational inequality problem can be formulated mathematically as the problem of finding a point $x^* \in C$ with the property
\[ \langle Fx^*, x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{1} \]
where $H$ is a real Hilbert space with inner product $\langle \cdot,\cdot \rangle$ and norm $\|\cdot\|$, $C$ is a nonempty closed convex subset of $H$, and $F : C \to H$ is a nonlinear operator. Since its inception by Stampacchia [1] in 1964, the variational inequality problem VI$(C,F)$ has received much attention due to its applications in a large variety of problems arising in structural analysis, economics, optimization, operations research, and the engineering sciences; see [1-23] and the references therein.

Using the projection technique, one can easily show that VI$(C,F)$ is equivalent to a fixed-point problem (see, e.g., [15]).

Lemma 1. $x^* \in C$ is a solution of VI$(C,F)$ if and only if $x^*$ satisfies the fixed-point relation
\[ x^* = P_C(I - \lambda F)x^*, \tag{2} \]
where $\lambda > 0$ is an arbitrary constant, $P_C$ is the orthogonal projection onto $C$, and $I$ is the identity operator on $H$.

Recall that an operator $F : C \to H$ is called monotone if
\[ \langle Fx - Fy, x - y \rangle \ge 0, \quad \forall x, y \in C. \tag{3} \]
Moreover, a monotone operator $F$ is called strictly monotone if equality holds in (3) only when $x = y$. It is easy to see that VI$(C,F)$ has at most one solution if $F$ is strictly monotone. For the variational inequality (1), $F$ is generally assumed to be Lipschitzian and strongly monotone on $C$; that is, for some constants $L, \eta > 0$, $F$ satisfies the conditions
\[ \|Fx - Fy\| \le L\|x - y\|, \qquad \langle Fx - Fy, x - y \rangle \ge \eta\|x - y\|^2, \quad \forall x, y \in C. \tag{4} \]
In this case, $F$ is also called an $L$-Lipschitzian and $\eta$-strongly monotone operator. The following simple result is easy to verify.

Lemma 2. Assume that $F$ satisfies conditions (4) and that $t$ and $\lambda$ are constants with $t \in (0,1)$ and $\lambda \in (0, 2\eta/L^2)$, respectively. Let $T_\lambda = P_C(I - \lambda F)$ (or $I - \lambda F$) and $T_{\lambda,t} = P_C(I - t\lambda F)$ (or $I - t\lambda F$). Then $T_\lambda$ and $T_{\lambda,t}$ are contractions with coefficients $1 - \tau$ and $1 - t\tau$, respectively, where $\tau = \frac{1}{2}\lambda(2\eta - \lambda L^2)$.

Using Banach's contraction mapping principle, the following well-known result can be obtained easily from Lemmas 1 and 2.

Theorem 3. Assume that $F$ satisfies conditions (4). Then VI$(C,F)$ has a unique solution. Moreover, for any $0 < \lambda < 2\eta/L^2$, the sequence $(x_n)$ with initial guess $x_0 \in C$ defined recursively by
\[ x_{n+1} = P_C(I - \lambda F)x_n, \quad n \ge 0, \tag{5} \]
converges strongly to the unique solution of VI$(C,F)$.

However, algorithm (5) has two evident weaknesses. On the one hand, it involves the mapping $P_C$, and the computation of a projection onto a closed convex subset is generally difficult. If $C$ is the intersection of finitely many closed convex subsets of $H$, that is, $C = \bigcap_{i=1}^m C_i \ (\ne \emptyset)$, where each $C_i$ $(i = 1, \dots, m)$ is a closed convex subset of $H$, then the computation of $P_C$ is much more difficult still. On the other hand, the determination of the stepsize $\lambda$ depends on the constants $L$ and $\eta$. This means that in order to implement algorithm (5), one first has to compute (or estimate) $L$ and $\eta$, which is sometimes not easy in practice.

To overcome these weaknesses of algorithm (5), a new relaxed and self-adaptive algorithm is proposed in this paper to solve VI$(C,F)$, where $C$ is the intersection of finitely many level sets of convex functions defined on $H$ and $F : H \to H$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. Our method replaces $P_C$ by computing finitely many sequences of projections onto half-spaces containing the original set $C$ and selects the stepsizes in a self-adaptive way. The implementation of our algorithm therefore avoids computing $P_C$ directly and needs no information about $L$ and $\eta$.

The rest of this paper is organized as follows. Some useful lemmas are collected in the next section; in particular, a new lemma is established in order to prove the strong convergence of our algorithms, which can also be used as a fundamental tool for solving other nonlinear problems related to fixed points. In the last section, a relaxed algorithm (for the case where $L$ and $\eta$ are known) and a relaxed self-adaptive algorithm (for the case where $L$ and $\eta$ are not known) are proposed, and their strong convergence theorems are proved.

2. Preliminaries

Throughout the rest of this paper, we denote by $H$ a real Hilbert space and by $I$ the identity operator on $H$. If $f : H \to \mathbb{R}$ is a differentiable functional, we denote by $\nabla f$ the gradient of $f$. We will also use the following notation:
(i) $\to$ denotes strong convergence;
(ii) $\rightharpoonup$ denotes weak convergence;
(iii) $\omega_w(x_n) = \{x \mid \exists (x_{n_j}) \subset (x_n) \text{ such that } x_{n_j} \rightharpoonup x\}$ denotes the weak $\omega$-limit set of $(x_n)$.

Recall the following elementary inequality, which is well known and in common use.

Lemma 4. For all $x, y \in H$,
\[ \|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle. \tag{6} \]

Recall that a mapping $T : H \to H$ is said to be nonexpansive if
\[ \|Tx - Ty\| \le \|x - y\|, \quad x, y \in H, \tag{7} \]
and $T : H \to H$ is said to be firmly nonexpansive if, for $x, y \in H$,
\[ \|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2. \tag{8} \]

The following characterizations of firmly nonexpansive mappings can be found in [7] or [24].

Lemma 5. Let $T : H \to H$ be an operator. The following statements are equivalent:
(i) $T$ is firmly nonexpansive;
(ii) $I - T$ is firmly nonexpansive;
(iii) $\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle$ for all $x, y \in H$.
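As a concrete illustration of the basic scheme (5) in Theorem 3, the following sketch runs the projected iteration on an assumed toy instance in $\mathbb{R}^2$: $C$ is the closed unit ball (whose projection has a closed form) and $F(x) = x - b$ with $b = (2,0)$, so $L = \eta = 1$ and the unique solution is $x^* = P_C(b) = (1,0)$. The instance and all names are illustrative, not taken from the paper.

```python
import math

def proj_ball(x, r=1.0):
    # Closed-form projection onto the closed ball of radius r at the origin.
    nrm = math.hypot(x[0], x[1])
    if nrm <= r:
        return x
    return (r * x[0] / nrm, r * x[1] / nrm)

def F(x, b=(2.0, 0.0)):
    # F(x) = x - b is 1-Lipschitzian and 1-strongly monotone (L = eta = 1),
    # so any step size lam in (0, 2*eta/L**2) = (0, 2) is admissible.
    return (x[0] - b[0], x[1] - b[1])

def solve_vi(x0, lam=0.5, iters=100):
    # The Picard iteration x_{n+1} = P_C (I - lam F) x_n of Theorem 3.
    x = x0
    for _ in range(iters):
        fx = F(x)
        x = proj_ball((x[0] - lam * fx[0], x[1] - lam * fx[1]))
    return x

x_star = solve_vi((0.0, 0.0))
# for this instance the unique solution of VI(C, F) is P_C(b) = (1, 0)
```

Note that this only works because $P_C$ is explicit for a ball; the relaxed algorithms of Section 3 exist precisely for the case where it is not.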
We know that the orthogonal projection $P_C$ from $H$ onto a nonempty closed convex subset $C \subset H$, defined by
\[ P_C x := \arg\min_{y \in C} \|x - y\|^2, \quad x \in H, \tag{9} \]
is a typical example of a firmly nonexpansive mapping [7]. It is well known [7] that $P_C x$ is characterized, for $x \in H$, by
\[ P_C x \in C, \quad \langle x - P_C x, y - P_C x \rangle \le 0, \quad \forall y \in C. \tag{10} \]

The following lemma [25] is often used when analyzing the strong convergence of algorithms for nonlinear problems, such as fixed points of nonlinear mappings, variational inequalities, and split feasibility problems. In fact, this lemma has been regarded as a fundamental tool for solving nonlinear problems related to fixed points.

Lemma 6 (see [25]). Assume $(a_n)$ is a sequence of nonnegative real numbers such that
\[ a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n \delta_n, \quad n \ge 0, \tag{11} \]
where $(\gamma_n)$ is a sequence in $(0,1)$ and $(\delta_n)$ is a sequence in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(ii) $\limsup_{n\to\infty} \delta_n \le 0$ or $\sum_{n=0}^{\infty} |\gamma_n \delta_n| < \infty$.
Then $\lim_{n\to\infty} a_n = 0$.

In this paper, inspired and encouraged by an idea in [26], we obtain the following lemma. Its key role in the proofs of our main results will be illustrated in the next section, which suggests that it may become a new fundamental tool for solving nonlinear problems related to fixed points.

Lemma 7. Assume $(s_n)$ is a sequence of nonnegative real numbers such that
\[ s_{n+1} \le (1 - \gamma_n) s_n + \gamma_n \delta_n + \beta_n, \quad n \ge 0, \tag{12} \]
\[ s_{n+1} \le s_n - \eta_n + \alpha_n, \quad n \ge 0, \tag{13} \]
where $(\gamma_n)$ is a sequence in $(0,1)$, $(\eta_n)$ is a sequence of nonnegative real numbers, and $(\delta_n)$, $(\alpha_n)$, $(\beta_n)$ are three sequences in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(ii) $\lim_{n\to\infty} \alpha_n = 0$;
(iii) $\lim_{k\to\infty} \eta_{n_k} = 0$ implies $\limsup_{k\to\infty} \delta_{n_k} \le 0$ for any subsequence $(n_k) \subset (n)$;
(iv) $\limsup_{n\to\infty} (\beta_n/\gamma_n) \le 0$.
Then $\lim_{n\to\infty} s_n = 0$.

Proof. Following and generalizing an idea in [26], we distinguish two cases to prove $s_n \to 0$ as $n \to \infty$.

Case 1. $(s_n)$ is eventually decreasing (i.e., there exists $N \ge 0$ such that $s_n > s_{n+1}$ holds for all $n \ge N$). In this case, $(s_n)$ must be convergent, and from (13) it follows that
\[ \eta_n \le (s_n - s_{n+1}) + \alpha_n. \tag{14} \]
Letting $n \to \infty$ in (14) and noting condition (ii) yields $\eta_n \to 0$ as $n \to \infty$. Using condition (iii), we get $\limsup_{n\to\infty} \delta_n \le 0$. Noting this together with conditions (i) and (iv), we obtain $s_n \to 0$ by applying Lemma 6 to (12).

Case 2. $(s_n)$ is not eventually decreasing. Hence, we can find an integer $n_0$ such that $s_{n_0} \le s_{n_0+1}$. Let us now define
\[ J_n := \{ n_0 \le k \le n : s_k \le s_{k+1} \}, \quad n > n_0. \tag{15} \]
Obviously $J_n$ is nonempty and satisfies $J_n \subseteq J_{n+1}$. Let
\[ \tau(n) := \max J_n, \quad n > n_0. \tag{16} \]
It is clear that $\tau(n) \to \infty$ as $n \to \infty$ (otherwise $(s_n)$ would be eventually decreasing). It is also clear that $s_{\tau(n)} \le s_{\tau(n)+1}$ for all $n > n_0$. Moreover,
\[ s_n \le s_{\tau(n)+1}, \quad \forall n > n_0. \tag{17} \]
In fact, if $\tau(n) = n$, inequality (17) is trivial; if $\tau(n) = n-1$, then $\tau(n)+1 = n$, and (17) is also trivial. If $\tau(n) < n-1$, then there exists an integer $j \ge 2$ such that $\tau(n)+j = n$; thus we deduce from the definition of $\tau(n)$ that
\[ s_{\tau(n)+1} > s_{\tau(n)+2} > \cdots > s_{\tau(n)+j} = s_n, \tag{18} \]
and inequality (17) holds again.

Since $s_{\tau(n)} \le s_{\tau(n)+1}$ for all $n > n_0$, it follows from (14) that
\[ 0 \le \eta_{\tau(n)} \le \alpha_{\tau(n)} \longrightarrow 0, \tag{19} \]
so that $\eta_{\tau(n)} \to 0$ as $n \to \infty$ using condition (ii). Due to condition (iii), this implies that
\[ \limsup_{n\to\infty} \delta_{\tau(n)} \le 0. \tag{20} \]
Noting $s_{\tau(n)} \le s_{\tau(n)+1}$ for all $n > n_0$ again, it follows from (12) that
\[ s_{\tau(n)} \le \delta_{\tau(n)} + \frac{\beta_{\tau(n)}}{\gamma_{\tau(n)}}. \tag{21} \]
Combining (20), (21), and condition (iv) yields
\[ \limsup_{n\to\infty} s_{\tau(n)} \le 0, \tag{22} \]
and hence $s_{\tau(n)} \to 0$ as $n \to \infty$. This together with (13) implies that
\[ s_{\tau(n)+1} \le s_{\tau(n)} - \eta_{\tau(n)} + \alpha_{\tau(n)} \longrightarrow 0, \tag{23} \]
which together with (17), in turn, implies that $s_n \to 0$ as $n \to \infty$.

The following result is just the special case of Lemma 7 in which $\beta_n = 0$ for all $n \ge 0$.

Lemma 8. Assume $(s_n)$ is a sequence of nonnegative real numbers such that
\[ s_{n+1} \le (1 - \gamma_n) s_n + \gamma_n \delta_n, \quad n \ge 0, \qquad s_{n+1} \le s_n - \eta_n + \alpha_n, \quad n \ge 0, \tag{24} \]
where $(\gamma_n)$ is a sequence in $(0,1)$, $(\eta_n)$ is a sequence of nonnegative real numbers, and $(\delta_n)$ and $(\alpha_n)$ are two sequences in $\mathbb{R}$ such that
(i) $\sum_{n=0}^{\infty} \gamma_n = \infty$;
(ii) $\lim_{n\to\infty} \alpha_n = 0$;
(iii) $\lim_{k\to\infty} \eta_{n_k} = 0$ implies $\limsup_{k\to\infty} \delta_{n_k} \le 0$ for any subsequence $(n_k) \subset (n)$.
Then $\lim_{n\to\infty} s_n = 0$.

Recall that a function $f : H \to \mathbb{R}$ is called convex if
\[ f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda) f(y), \quad \forall \lambda \in (0,1), \ \forall x, y \in H. \tag{25} \]
A differentiable function $f$ is convex if and only if
\[ f(z) \ge f(x) + \langle \nabla f(x), z - x \rangle, \quad \forall z \in H. \tag{26} \]
Recall that an element $g \in H$ is said to be a subgradient of $f : H \to \mathbb{R}$ at $x$ if
\[ f(z) \ge f(x) + \langle g, z - x \rangle, \quad \forall z \in H. \tag{27} \]
A function $f : H \to \mathbb{R}$ is said to be subdifferentiable at $x$ if it has at least one subgradient at $x$. The set of subgradients of $f$ at the point $x$ is called the subdifferential of $f$ at $x$ and is denoted by $\partial f(x)$. Relation (27) is called the subdifferential inequality of $f$ at $x$. A function $f$ is called subdifferentiable if it is subdifferentiable at all $x \in H$. If a function $f$ is differentiable and convex, then its gradient and subgradient coincide.

Recall that a function $f : H \to \mathbb{R}$ is said to be weakly lower semicontinuous (w-lsc) at $x$ if $x_n \rightharpoonup x$ implies
\[ f(x) \le \liminf_{n\to\infty} f(x_n). \tag{28} \]
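The mechanism behind Lemma 6 (and hence Lemma 8, where $\beta_n = 0$) is easy to check numerically. The following toy run, with the illustrative choices $\gamma_n = 1/(n+2)$ (so $\sum \gamma_n = \infty$) and $\delta_n = 1/(n+1)$ (so $\limsup \delta_n \le 0$), drives the recursion, taken with equality, to zero; the function name and parameters are assumptions for the demonstration only.

```python
def xu_recursion(s0=5.0, n_max=200_000):
    # Iterate s_{n+1} = (1 - gamma_n) * s_n + gamma_n * delta_n with
    # gamma_n = 1/(n+2) (condition (i)) and delta_n = 1/(n+1) (condition (ii)),
    # so Lemma 6 predicts s_n -> 0.
    s = s0
    for n in range(n_max):
        gamma = 1.0 / (n + 2)
        delta = 1.0 / (n + 1)
        s = (1.0 - gamma) * s + gamma * delta
    return s

final = xu_recursion()
# for these choices s_n decays roughly like (s0 + log n) / n
```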
3. Iterative Algorithms

In this section, we consider iterative algorithms for solving the variational inequality (1) in the particular case where the closed convex subset $C$ is the intersection of finitely many level sets of convex functions:
\[ C = \bigcap_{i=1}^{m} \{ x \in H : c_i(x) \le 0 \}, \tag{29} \]
where $m$ is a positive integer and each $c_i : H \to \mathbb{R}$ $(i = 1, \dots, m)$ is a convex function. We always assume that each $c_i$ $(i = 1, \dots, m)$ is subdifferentiable on $H$ and that each $\partial c_i$ $(i = 1, \dots, m)$ is a bounded operator (i.e., bounded on bounded sets). It is worth noting that every convex function defined on a finite-dimensional Hilbert space is subdifferentiable and its subdifferential operator is a bounded operator (see [27, Corollary 7.9]). We also assume that $F : H \to H$ is an $L$-Lipschitzian and $\eta$-strongly monotone operator. It is well known that in this case VI$(C,F)$ has a unique solution, henceforth denoted by $x^*$.

Without loss of generality, we will consider only the case $m = 2$; that is, $C = C_1 \cap C_2$, where
\[ C_1 = \{x \in H : c_1(x) \le 0\}, \qquad C_2 = \{x \in H : c_2(x) \le 0\}. \tag{30} \]
All of our results can be extended easily to the general case.

The computation of a projection onto a closed convex subset is generally difficult. To overcome this difficulty, Fukushima [21] suggested calculating the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. This idea was followed by Yang [28] and López et al. [29], who introduced relaxed CQ algorithms for solving the split feasibility problem in finite-dimensional and infinite-dimensional Hilbert spaces, respectively. It was also used by Censor et al. [30] in the subgradient extragradient method for solving variational inequalities in a Hilbert space.
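The half-spaces used in this relaxation are attractive because their projections have a closed form: projecting $y$ onto $\{x : c(x_n) + \langle \xi, x - x_n \rangle \le 0\}$, with $\xi$ a subgradient of $c$ at $x_n$, only requires one inner product and one scaling. A minimal sketch follows; the helper name and the toy level set $c(x) = x_1 - 1$ are illustrative assumptions, not notation from the paper.

```python
def proj_halfspace(y, xn, c_xn, xi):
    # Projection of y onto the half-space {x : c(xn) + <xi, x - xn> <= 0},
    # where xi is a subgradient of c at xn (assumed nonzero whenever the
    # inequality is violated at y).
    res = c_xn + sum(g * (a - b) for g, a, b in zip(xi, y, xn))
    if res <= 0:
        return tuple(y)                  # y already lies in the half-space
    nrm2 = sum(g * g for g in xi)
    return tuple(a - res * g / nrm2 for g, a in zip(xi, y))

# Toy level set c(x) = x[0] - 1: the half-space built at xn = (3, 0) with
# subgradient xi = (1, 0) is {x : x[0] <= 1}, and projecting (3, 0) onto it
# gives (1, 0).
p = proj_halfspace((3.0, 0.0), (3.0, 0.0), 2.0, (1.0, 0.0))
```

Because the half-space contains the level set, this projection never excludes feasible points, which is exactly what the relaxation exploits.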
We are now in a position to introduce a relaxed algorithm for computing the unique solution $x^*$ of VI$(C,F)$, where $C = C_1 \cap C_2$ with $C_1, C_2$ given as in (30). This scheme applies to the case where $L$ and $\eta$ are easy to determine.

Algorithm 1. Choose an arbitrary initial guess $x_0 \in H$. The sequence $(x_n)$ is constructed via the formula
\[ x_{n+1} = P_{C_n^2} P_{C_n^1} (I - t_n \lambda F) x_n, \quad n \ge 0, \tag{31} \]
where
\[ C_n^1 = \{x \in H : c_1(x_n) \le \langle \xi_n^1, x_n - x \rangle\}, \qquad C_n^2 = \{x \in H : c_2(P_{C_n^1} x_n) \le \langle \xi_n^2, P_{C_n^1} x_n - x \rangle\}, \tag{32} \]
with $\xi_n^1 \in \partial c_1(x_n)$ and $\xi_n^2 \in \partial c_2(P_{C_n^1} x_n)$; the sequence $(t_n)$ lies in $(0,1)$, and $\lambda$ is a constant such that $\lambda \in (0, 2\eta/L^2)$.

We now analyze the strong convergence of Algorithm 1, which also illustrates the application of Lemma 7 (or Lemma 8).

Theorem 9. Assume that $t_n \to 0$ $(n \to \infty)$ and $\sum_{n=1}^{+\infty} t_n = +\infty$. Then the sequence $(x_n)$ generated by Algorithm 1 converges strongly to the unique solution $x^*$ of VI$(C,F)$.

Proof. Firstly, we verify that $(x_n)$ is bounded. Indeed, it is easy to see from the subdifferential inequality and the definitions of $C_n^1$ and $C_n^2$ that $C_n^1 \supset C_1$ and $C_n^2 \supset C_2$ hold for all $n \ge 0$, and hence $C_n^1 \cap C_n^2 \supset C_1 \cap C_2 = C$. Since the projection operators $P_{C_n^1}$ and $P_{C_n^2}$ are nonexpansive, we obtain from (31), Lemma 2, and Lemma 4 that
\[ \begin{aligned} \|x_{n+1} - x^*\|^2 &= \|P_{C_n^2}P_{C_n^1}(I - t_n\lambda F)x_n - P_{C_n^2}P_{C_n^1}x^*\|^2 \\ &\le \|(I - t_n\lambda F)x_n - (I - t_n\lambda F)x^* - t_n\lambda Fx^*\|^2 \\ &\le (1 - \tau t_n)\|x_n - x^*\|^2 - 2 t_n \lambda \langle Fx^*, x_n - x^* - t_n\lambda F x_n \rangle \\ &\le (1 - \tau t_n)\|x_n - x^*\|^2 - 2 t_n\lambda \langle Fx^*, x_n - x^* \rangle + 2 t_n^2 \lambda^2 \|Fx^*\| \|Fx_n\|, \end{aligned} \tag{33} \]
where $\tau = \frac{1}{2}\lambda(2\eta - \lambda L^2)$, and
\[ \begin{aligned} \|x_{n+1} - x^*\|^2 &= \|P_{C_n^2}P_{C_n^1}(I - t_n\lambda F)x_n - P_{C_n^2}P_{C_n^1}(I - t_n\lambda F)x^* + P_{C_n^2}P_{C_n^1}(I - t_n\lambda F)x^* - P_{C_n^2}P_{C_n^1}x^*\|^2 \\ &\le (1 - \tau t_n)\|x_n - x^*\|^2 + 2\langle P_{C_n^2}P_{C_n^1}(I - t_n\lambda F)x^* - P_{C_n^2}P_{C_n^1}x^*,\ x_{n+1} - x^* \rangle \\ &\le (1 - \tau t_n)\|x_n - x^*\|^2 + 2 t_n\lambda \|Fx^*\| \|x_{n+1} - x^*\| \\ &\le (1 - \tau t_n)\|x_n - x^*\|^2 + \frac{1}{4}\tau t_n \|x_{n+1} - x^*\|^2 + \frac{4 t_n \lambda^2}{\tau}\|Fx^*\|^2. \end{aligned} \tag{34} \]
Consequently,
\[ \|x_{n+1} - x^*\|^2 \le \frac{1 - \tau t_n}{1 - (1/4)\tau t_n}\|x_n - x^*\|^2 + \frac{(3/4)\tau t_n}{1 - (1/4)\tau t_n}\cdot\frac{16\lambda^2}{3\tau^2}\|Fx^*\|^2. \tag{35} \]
It turns out that
\[ \|x_{n+1} - x^*\| \le \max\Big\{\|x_n - x^*\|,\ \frac{4\lambda}{\sqrt{3}\,\tau}\|Fx^*\|\Big\}, \tag{36} \]
and inductively
\[ \|x_n - x^*\| \le \max\Big\{\|x_0 - x^*\|,\ \frac{4\lambda}{\sqrt{3}\,\tau}\|Fx^*\|\Big\}, \tag{37} \]
which means that $(x_n)$ is bounded. Obviously, $(Fx_n)$ is also bounded.

Secondly, since a projection is firmly nonexpansive, we obtain
\[ \begin{aligned} \|P_{C_n^2}P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x^*\|^2 &\le \|P_{C_n^1}x_n - P_{C_n^1}x^*\|^2 - \|P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x_n\|^2 \\ &\le \|x_n - x^*\|^2 - \|x_n - P_{C_n^1}x_n\|^2 - \|P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x_n\|^2; \end{aligned} \tag{38} \]
thus we also have
\[ \begin{aligned} \|x_{n+1} - x^*\|^2 &= \|P_{C_n^2}P_{C_n^1}(I - t_n\lambda F)x_n - P_{C_n^2}P_{C_n^1}x_n + P_{C_n^2}P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x^*\|^2 \\ &\le \|P_{C_n^2}P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x^*\|^2 + 2 t_n\lambda \|Fx_n\| \cdot \|x_n - x^*\| + t_n^2\lambda^2\|Fx_n\|^2 \\ &\le \|P_{C_n^2}P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x^*\|^2 + t_n M, \end{aligned} \tag{39} \]
where $M$ is a positive constant such that $M \ge \sup_n \{2\lambda\|Fx_n\| \cdot \|x_n - x^*\| + t_n\lambda^2\|Fx_n\|^2\}$. The combination of (38) and (39) leads to
\[ \|x_{n+1} - x^*\|^2 \le \|x_n - x^*\|^2 - \|x_n - P_{C_n^1}x_n\|^2 - \|P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x_n\|^2 + t_n M. \tag{40} \]
Setting
\[ \begin{aligned} \gamma_n &= \tau t_n, \qquad s_n = \|x_n - x^*\|^2, \qquad \alpha_n = M t_n, \\ \delta_n &= -\frac{2\lambda}{\tau}\langle Fx^*, x_n - x^*\rangle + \frac{2 t_n \lambda^2}{\tau}\|Fx^*\|\|Fx_n\|, \\ \eta_n &= \|x_n - P_{C_n^1}x_n\|^2 + \|P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x_n\|^2, \end{aligned} \tag{41} \]
then (33) and (40) can be rewritten, respectively, as
\[ s_{n+1} \le (1 - \gamma_n)s_n + \gamma_n\delta_n, \qquad s_{n+1} \le s_n - \eta_n + \alpha_n. \tag{42} \]
Finally, observing that the conditions $t_n \to 0$ and $\sum_{n=1}^\infty t_n = \infty$ imply $\alpha_n \to 0$ and $\sum_{n=1}^\infty \gamma_n = \infty$, respectively, in order to complete the proof using Lemma 7 (or Lemma 8), it suffices to verify that
\[ \lim_{k\to\infty} \eta_{n_k} = 0 \tag{43} \]
implies
\[ \limsup_{k\to\infty} \delta_{n_k} \le 0 \tag{44} \]
for any subsequence $(n_k) \subset (n)$.

In fact, if $\eta_{n_k} \to 0$ as $k \to \infty$, then $\|x_{n_k} - P_{C_{n_k}^1}x_{n_k}\| \to 0$ and $\|P_{C_{n_k}^1}x_{n_k} - P_{C_{n_k}^2}P_{C_{n_k}^1}x_{n_k}\| \to 0$ hold. Since $\partial c_1$ and $\partial c_2$ are bounded on bounded sets, there are two positive constants $\kappa_1$ and $\kappa_2$ such that $\|\xi_n^1\| \le \kappa_1$ and $\|\xi_n^2\| \le \kappa_2$ for all $n \ge 0$ (noting that $(P_{C_n^1}x_n)$ is also bounded, due to the fact that $\|P_{C_n^1}x_n - x^*\| = \|P_{C_n^1}x_n - P_{C_n^1}x^*\| \le \|x_n - x^*\|$). From (32) and the trivial facts that $P_{C_{n_k}^1}x_{n_k} \in C_{n_k}^1$ and $P_{C_{n_k}^2}P_{C_{n_k}^1}x_{n_k} \in C_{n_k}^2$, it follows that
\[ c_1(x_{n_k}) \le \langle \xi_{n_k}^1, x_{n_k} - P_{C_{n_k}^1}x_{n_k} \rangle \le \kappa_1 \|x_{n_k} - P_{C_{n_k}^1}x_{n_k}\|, \tag{45} \]
\[ c_2(P_{C_{n_k}^1}x_{n_k}) \le \langle \xi_{n_k}^2, P_{C_{n_k}^1}x_{n_k} - P_{C_{n_k}^2}P_{C_{n_k}^1}x_{n_k} \rangle \le \kappa_2 \|P_{C_{n_k}^1}x_{n_k} - P_{C_{n_k}^2}P_{C_{n_k}^1}x_{n_k}\|. \tag{46} \]
Now if $x' \in \omega_w(x_{n_k})$, and $x_{n_k} \rightharpoonup x'$ without loss of generality, then the w-lsc of $c_1$ and (45) imply that
\[ c_1(x') \le \liminf_{k\to\infty} c_1(x_{n_k}) \le 0. \tag{47} \]
This means that $x' \in C_1$. On the other hand, noting $\|x_{n_k} - P_{C_{n_k}^1}x_{n_k}\| \to 0$, we can assert that $P_{C_{n_k}^1}x_{n_k} \rightharpoonup x'$, and we have from the w-lsc of $c_2$ and (46) that
\[ c_2(x') \le \liminf_{k\to\infty} c_2(P_{C_{n_k}^1}x_{n_k}) \le 0. \tag{48} \]
This, in turn, implies that $x' \in C_2$. Hence $x' \in C_1 \cap C_2$, and therefore $\omega_w(x_{n_k}) \subset C_1 \cap C_2 = C$. Noting that $x^*$ is the unique solution of VI$(C,F)$, it turns out that
\[ \limsup_{k\to\infty} \Big\{-\frac{2\lambda}{\tau}\langle Fx^*, x_{n_k} - x^*\rangle\Big\} = -\frac{2\lambda}{\tau}\liminf_{k\to\infty}\langle Fx^*, x_{n_k} - x^*\rangle = -\frac{2\lambda}{\tau}\inf_{w\in\omega_w(x_{n_k})}\langle Fx^*, w - x^*\rangle \le 0. \tag{49} \]
Since $t_n \to 0$ and $(Fx_n)$ is bounded, it is now easy to see that $\limsup_{k\to\infty}\delta_{n_k} \le 0$, which completes the proof.
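To make Algorithm 1 concrete, here is a small self-contained sketch on an assumed toy instance in $\mathbb{R}^2$: $F(x) = x - b$ with $b = (2,2)$ (so $L = \eta = 1$ and any $\lambda \in (0,2)$ is admissible), $c_1(x) = \|x\|^2 - 1$, and $c_2(x) = -x_1$, so that $C = \{x : \|x\| \le 1,\ x_1 \ge 0\}$ and the unique solution is $x^* = P_C(b) = (1/\sqrt{2}, 1/\sqrt{2})$. The instance and all helper names are illustrative assumptions, not part of the paper.

```python
def proj_relaxed(y, xn, c_val, xi):
    # Closed-form projection of y onto {x : c(xn) + <xi, x - xn> <= 0}.
    res = c_val + sum(g * (a - b_) for g, a, b_ in zip(xi, y, xn))
    if res <= 0:
        return tuple(y)
    nrm2 = sum(g * g for g in xi)
    return tuple(a - res * g / nrm2 for g, a in zip(xi, y))

# Toy instance: F(x) = x - b with b = (2, 2), c1(x) = |x|^2 - 1, c2(x) = -x[0];
# the unique solution is x* = P_C(b) = (1/sqrt(2), 1/sqrt(2)).
b = (2.0, 2.0)
lam = 1.0                                # any lam in (0, 2*eta/L**2) = (0, 2)
x = (0.0, 0.0)
for n in range(100):
    t = 1.0 / (n + 1)                    # t_n -> 0 and sum t_n = infinity
    y = tuple(u - t * lam * (u - v) for u, v in zip(x, b))
    grad1 = (2.0 * x[0], 2.0 * x[1])     # gradient of c1 at x_n
    c1_val = x[0] ** 2 + x[1] ** 2 - 1.0
    p = proj_relaxed(y, x, c1_val, grad1)       # projection onto C_n^1
    q = proj_relaxed(x, x, c1_val, grad1)       # P_{C_n^1} x_n, used by C_n^2
    x = proj_relaxed(p, q, -q[0], (-1.0, 0.0))  # projection onto C_n^2
```

Only half-space projections and one subgradient evaluation per constraint are used per step; no projection onto $C$ itself, and no knowledge of $L$ or $\eta$ beyond an admissible $\lambda$, is ever needed.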
Observe that in Algorithm 1 the determination of the stepsize $\lambda$ still depends on the constants $L$ and $\eta$; this means that in order to implement Algorithm 1, one first has to estimate $L$ and $\eta$, which is sometimes not easy in practice. To overcome this difficulty, we furthermore introduce a so-called relaxed and self-adaptive algorithm, that is, a modification of Algorithm 1 in which the stepsize is selected in a self-adaptive way that has no connection with the constants $L$ and $\eta$.

Algorithm 2. Choose an arbitrary initial guess $x_0 \in H$ and an arbitrary element $x_1 \in H$ such that $x_1 \ne x_0$. Assume that the $n$th iterate $x_n$ $(n \ge 1)$ has been constructed; calculate the $(n+1)$th iterate $x_{n+1}$ via the formula
\[ x_{n+1} = P_{C_n^2} P_{C_n^1}(I - t_n \lambda_n F) x_n, \quad n \ge 1, \tag{50} \]
where $C_n^1$ and $C_n^2$ are given as in (32), the sequence $(t_n)$ is in $(0,1)$, and the sequence $(\lambda_n)$ is determined via the relation
\[ \lambda_n = \begin{cases} \dfrac{\langle Fx_n - Fx_{n-1},\ x_n - x_{n-1}\rangle}{\|Fx_n - Fx_{n-1}\|^2}, & \text{if } x_n \ne x_{n-1}, \\[2mm] \lambda_{n-1}, & \text{if } x_n = x_{n-1}, \end{cases} \quad n \ge 1. \tag{51} \]

Firstly, we show that the sequence $(x_n)$ is well defined. By the strong monotonicity of $F$, $x_1 \ne x_0$ implies $Fx_1 \ne Fx_0$, so $\lambda_1$ is well defined via the first formula of (51). Consequently, $\lambda_n$ $(n \ge 2)$ is well defined inductively according to (51), and thus the sequence $(x_n)$ is also well defined.

Next, we estimate $(\lambda_n)$ roughly. If $x_n \ne x_{n-1}$ (that is, $Fx_n \ne Fx_{n-1}$), set
\[ \eta_n = \frac{\langle Fx_n - Fx_{n-1},\ x_n - x_{n-1}\rangle}{\|x_n - x_{n-1}\|^2}, \qquad L_n = \frac{\|Fx_n - Fx_{n-1}\|}{\|x_n - x_{n-1}\|}, \quad n \ge 1. \tag{52} \]
Obviously, it turns out that
\[ \eta \le \eta_n = \frac{\langle Fx_n - Fx_{n-1},\ x_n - x_{n-1}\rangle}{\|x_n - x_{n-1}\|^2} \le \frac{\|Fx_n - Fx_{n-1}\|}{\|x_n - x_{n-1}\|} = L_n \le L. \tag{53} \]
Consequently,
\[ \frac{\eta}{L^2} \le \lambda_n = \frac{\eta_n}{L_n^2} \le \frac{1}{\eta_n} \le \frac{1}{\eta}. \tag{54} \]
By the definition of $(\lambda_n)$, we can assert that (54) holds for all $n \ge 1$. Lemma 7 (or Lemma 8) is also important for the proof of the strong convergence of Algorithm 2.

Theorem 10. Assume that $t_n \to 0$ $(n \to \infty)$ and $\sum_{n=1}^{+\infty} t_n = +\infty$. Then the sequence $(x_n)$ generated by Algorithm 2 converges strongly to the unique solution $x^*$ of VI$(C,F)$.

Proof. Set $\gamma_n = t_n\lambda_n$ and $\beta_n = \frac{1}{2}(2\eta - \gamma_n L^2)$. Observing $t_n \to 0$ and (54), there exists some positive integer $N_0$ such that
\[ 0 < \gamma_n < \frac{\eta}{L^2}, \quad n \ge N_0, \tag{55} \]
and consequently
\[ \beta_n \ge \frac{1}{2}\eta, \quad n \ge N_0. \tag{56} \]
Using Lemma 2, we have from (55) that $P_{C}(I - \gamma_n F)$ (and so is $I - \gamma_n F$) is a contraction with coefficient $1 - \gamma_n\beta_n$. This implies that, for all $n \ge N_0$,
\[ \begin{aligned} \|x_{n+1} - x^*\|^2 &= \|P_{C_n^2}P_{C_n^1}(I - \gamma_n F)x_n - P_{C_n^2}P_{C_n^1}x^*\|^2 \\ &\le \|(I - \gamma_n F)x_n - (I - \gamma_n F)x^* - \gamma_n Fx^*\|^2 \\ &\le (1 - \gamma_n\beta_n)\|x_n - x^*\|^2 - 2\gamma_n \langle Fx^*, x_n - x^* - \gamma_n Fx_n\rangle \\ &\le (1 - \gamma_n\beta_n)\|x_n - x^*\|^2 - 2\gamma_n \langle Fx^*, x_n - x^*\rangle + 2\gamma_n^2\|Fx^*\|\|Fx_n\|, \end{aligned} \tag{57} \]
and
\[ \begin{aligned} \|x_{n+1} - x^*\|^2 &= \|P_{C_n^2}P_{C_n^1}(I - \gamma_n F)x_n - P_{C_n^2}P_{C_n^1}(I - \gamma_n F)x^* + P_{C_n^2}P_{C_n^1}(I - \gamma_n F)x^* - P_{C_n^2}P_{C_n^1}x^*\|^2 \\ &\le (1 - \gamma_n\beta_n)\|x_n - x^*\|^2 + 2\langle P_{C_n^2}P_{C_n^1}(I - \gamma_n F)x^* - P_{C_n^2}P_{C_n^1}x^*,\ x_{n+1} - x^*\rangle \\ &\le (1 - \gamma_n\beta_n)\|x_n - x^*\|^2 + 2\gamma_n\|Fx^*\|\|x_{n+1} - x^*\| \\ &\le (1 - \gamma_n\beta_n)\|x_n - x^*\|^2 + \frac{\gamma_n\beta_n}{4}\|x_{n+1} - x^*\|^2 + \frac{4\gamma_n}{\beta_n}\|Fx^*\|^2, \end{aligned} \tag{58} \]
and hence
\[ \|x_{n+1} - x^*\|^2 \le \frac{1 - \gamma_n\beta_n}{1 - (1/4)\gamma_n\beta_n}\|x_n - x^*\|^2 + \frac{(3/4)\gamma_n\beta_n}{1 - (1/4)\gamma_n\beta_n}\cdot\frac{16}{3\beta_n^2}\|Fx^*\|^2. \tag{59} \]
Using (56), it turns out that
\[ \|x_{n+1} - x^*\| \le \max\Big\{\|x_n - x^*\|,\ \frac{8}{\sqrt{3}\,\eta}\|Fx^*\|\Big\}, \quad n \ge N_0, \tag{60} \]
and inductively
\[ \|x_n - x^*\| \le \max\Big\{\|x_{N_0} - x^*\|,\ \frac{8}{\sqrt{3}\,\eta}\|Fx^*\|\Big\}, \quad n \ge N_0, \tag{61} \]
which means that $(x_n)$ is bounded, and so is $(Fx_n)$.

By an argument similar to that leading to (38)-(40), we have
\[ \|x_{n+1} - x^*\|^2 \le \|x_n - x^*\|^2 - \|x_n - P_{C_n^1}x_n\|^2 - \|P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x_n\|^2 + \gamma_n M, \tag{62} \]
where $M$ is a positive constant. Setting
\[ \begin{aligned} s_n &= \|x_n - x^*\|^2, \qquad \alpha_n = M\gamma_n, \\ \delta_n &= -\frac{2}{\beta_n}\langle Fx^*, x_n - x^*\rangle + \frac{2\gamma_n}{\beta_n}\|Fx^*\|\|Fx_n\|, \\ \eta_n &= \|x_n - P_{C_n^1}x_n\|^2 + \|P_{C_n^1}x_n - P_{C_n^2}P_{C_n^1}x_n\|^2, \end{aligned} \tag{63} \]
then (57) and (62) can be rewritten, respectively, as
\[ s_{n+1} \le (1 - \gamma_n\beta_n)s_n + \gamma_n\beta_n\delta_n, \qquad s_{n+1} \le s_n - \eta_n + \alpha_n. \tag{64} \]
Clearly, $t_n \to 0$ and $\sum_{n=1}^\infty t_n = \infty$, together with (54) and (56), imply that $\alpha_n \to 0$ and $\sum_{n=1}^\infty \gamma_n\beta_n = \infty$. By an argument very similar to the proof of Theorem 9, it is not difficult to verify that
\[ \lim_{k\to\infty} \eta_{n_k} = 0 \tag{65} \]
implies
\[ \limsup_{k\to\infty} \delta_{n_k} \le 0 \tag{66} \]
for any subsequence $(n_k) \subset (n)$. Thus we can complete the proof by using Lemma 7 (or Lemma 8).

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 11201476) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.

References

[1] G. Stampacchia, "Formes bilinéaires coercitives sur les ensembles convexes," Comptes Rendus de l'Académie des Sciences, vol. 258, pp. 4413-4416, 1964.
[2] C. Baiocchi and A. Capelo, Variational and Quasivariational Inequalities, John Wiley & Sons, New York, NY, USA, 1984.
[3] A. Bnouhachem, "A self-adaptive method for solving general mixed variational inequalities," Journal of Mathematical Analysis and Applications, vol. 309, no. 1, pp. 136-150, 2005.
[4] H. Brezis, Opérateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert, North-Holland, Amsterdam, The Netherlands, 1973.
[5] R. W. Cottle, F. Giannessi, and J. L. Lions, Variational Inequalities and Complementarity Problems: Theory and Application, John Wiley & Sons, New York, NY, USA, 1980.
[6] M. Fukushima, "Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems," Mathematical Programming A, vol. 53, no. 1, pp. 99-110, 1992.
[7] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, vol. 83, Marcel Dekker, New York, NY, USA, 1984.
[8] F. Giannessi, A. Maugeri, and P. M. Pardalos, Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models, Kluwer Academic, Dordrecht, The Netherlands, 2001.
[9] R. Glowinski, J. L. Lions, and R. Trémolières, Numerical Analysis of Variational Inequalities, vol. 8, North-Holland, Amsterdam, The Netherlands, 1981.
[10] P. T. Harker and J. S. Pang, "Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications," Mathematical Programming B, vol. 48, no. 2, pp. 161-220, 1990.
[11] B. S. He, "A class of implicit methods for monotone variational inequalities," Reports of the Institute of Mathematics 95-1, Nanjing University, Nanjing, China, 1995.
[12] B. S. He and L. Z. Liao, "Improvements of some projection methods for monotone nonlinear variational inequalities," Journal of Optimization Theory and Applications, vol. 112, no. 1, pp. 111-128, 2002.
[13] B. S. He, Z. H. Yang, and X. M. Yuan, "An approximate proximal-extragradient type method for monotone variational inequalities," Journal of Mathematical Analysis and Applications, vol. 300, no. 2, pp. 362-374, 2004.
[14] S. He and H. K. Xu, "Variational inequalities governed by boundedly Lipschitzian and strongly monotone operators," Fixed Point Theory, vol. 10, no. 2, pp. 245-258, 2009.
[15] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, SIAM, Philadelphia, Pa, USA, 2000.
[16] J. L. Lions and G. Stampacchia, "Variational inequalities," Communications on Pure and Applied Mathematics, vol. 20, pp. 493-519, 1967.
[17] H. K. Xu and T. H. Kim, "Convergence of hybrid steepest-descent methods for variational inequalities," Journal of Optimization Theory and Applications, vol. 119, no. 1, pp. 185-201, 2003.
[18] H. K. Xu, "Viscosity approximation methods for nonexpansive mappings," Journal of Mathematical Analysis and Applications, vol. 298, no. 1, pp. 279-291, 2004.
[19] H. Yang and M. G. H. Bell, "Traffic restraint, road pricing and network equilibrium," Transportation Research B, vol. 31, no. 4, pp. 303-314, 1997.
[20] I. Yamada, "The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings," in Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications, D. Butnariu, Y. Censor, and S. Reich, Eds., vol. 8, pp. 473-504, North-Holland, Amsterdam, The Netherlands, 2001.
[21] M. Fukushima, "A relaxed projection method for variational inequalities," Mathematical Programming, vol. 35, no. 1, pp. 58-70, 1986.
[22] L. C. Ceng, Q. H. Ansari, and J. C. Yao, "Mann-type steepest-descent and modified hybrid steepest-descent methods for variational inequalities in Banach spaces," Numerical Functional Analysis and Optimization, vol. 29, no. 9-10, pp. 987-1033, 2008.
[23] L. C. Ceng, M. Teboulle, and J. C. Yao, "Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed-point problems," Journal of Optimization Theory and Applications, vol. 146, no. 1, pp. 19-31, 2010.
[24] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge University Press, Cambridge, UK, 1990.
[25] H. K. Xu, "Iterative algorithms for nonlinear operators," Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240-256, 2002.
[26] P. E. Maingé, "A hybrid extragradient-viscosity method for monotone operators and fixed point problems," SIAM Journal on Control and Optimization, vol. 47, no. 3, pp. 1499-1515, 2008.
[27] H. H. Bauschke and J. M. Borwein, "On projection algorithms for solving convex feasibility problems," SIAM Review, vol. 38, no. 3, pp. 367-426, 1996.
[28] Q. Yang, "The relaxed CQ algorithm solving the split feasibility problem," Inverse Problems, vol. 20, no. 4, pp. 1261-1266, 2004.
[29] G. López, V. Martín-Márquez, F. Wang, and H. K. Xu, "Solving the split feasibility problem without prior knowledge of matrix norms," Inverse Problems, vol. 28, no. 8, article 085004, 2012.
[30] Y. Censor, A. Gibali, and S. Reich, "The subgradient extragradient method for solving variational inequalities in Hilbert space," Journal of Optimization Theory and Applications, vol. 148, no. 2, pp. 318-335, 2011.