J. Inequal. Pure and Appl. Math., Volume 8 (2007), Issue 1, Article 6, 9 pp. (http://jipam.vu.edu.au/)

GENERAL SYSTEM OF STRONGLY PSEUDOMONOTONE NONLINEAR VARIATIONAL INEQUALITIES BASED ON PROJECTION SYSTEMS

R. U. VERMA
Department of Mathematics
University of Toledo
Toledo, Ohio 43606, USA
verma99@msn.com

Received 26 February, 2006; accepted 11 December, 2006
Communicated by R.N. Mohapatra

ABSTRACT. Let K1 and K2, respectively, be nonempty closed convex subsets of real Hilbert spaces H1 and H2. The approximation-solvability of a generalized system of nonlinear variational inequality (SNVI) problems, based on the convergence of projection methods, is discussed. The SNVI problem is stated as follows: find an element (x*, y*) ∈ K1 × K2 such that

⟨ρS(x*, y*), x − x*⟩ ≥ 0, ∀x ∈ K1 and for ρ > 0,
⟨ηT(x*, y*), y − y*⟩ ≥ 0, ∀y ∈ K2 and for η > 0,

where S : K1 × K2 → H1 and T : K1 × K2 → H2 are nonlinear mappings.

Key words and phrases: Strongly pseudomonotone mappings, Approximation solvability, Projection methods, System of nonlinear variational inequalities.

2000 Mathematics Subject Classification. 49J40, 65B05, 47H20.

1. INTRODUCTION

Projection-like methods have in general been among the most fundamental techniques for establishing the convergence analysis of solutions to problems arising in several fields, such as complementarity theory, convex quadratic programming, and variational problems. There exists a vast literature on the approximation-solvability of several classes of variational/hemivariational inequalities in different space settings. The author [6, 7] introduced and studied a new system of nonlinear variational inequalities in Hilbert space settings; this class encompasses several classes of nonlinear variational inequality problems. In this paper we intend to explore, based on a general system of projection-like methods, the approximation-solvability of a system of nonlinear strongly pseudomonotone variational inequalities in Hilbert spaces.
The results obtained extend/generalize the results in [1], [5]–[7] to the case of a strongly pseudomonotone system of nonlinear variational inequalities. The approximation solvability of this system can also be established using the resolvent operator technique, but in the more relaxed setting of Hilbert spaces. For more details, we refer the reader to [1]–[10].

Let H1 and H2 be two real Hilbert spaces with inner product ⟨·, ·⟩ and norm ‖·‖. Let S : K1 × K2 → H1 and T : K1 × K2 → H2 be any mappings on K1 × K2, where K1 and K2 are nonempty closed convex subsets of H1 and H2, respectively. We consider a system of nonlinear variational inequality (abbreviated as SNVI) problems: determine an element (x*, y*) ∈ K1 × K2 such that

(1.1) ⟨ρS(x*, y*), x − x*⟩ ≥ 0, ∀x ∈ K1,
(1.2) ⟨ηT(x*, y*), y − y*⟩ ≥ 0, ∀y ∈ K2,

where ρ, η > 0. The SNVI (1.1)–(1.2) problem is equivalent to the following projection formulas:

x* = PK[x* − ρS(x*, y*)] for ρ > 0,
y* = QK[y* − ηT(x*, y*)] for η > 0,

where PK is the projection of H1 onto K1 and QK is the projection of H2 onto K2.

We note that the SNVI (1.1)–(1.2) problem extends the NVI problem: determine an element x* ∈ K1 such that

(1.3) ⟨S(x*), x − x*⟩ ≥ 0, ∀x ∈ K1.

Also, we note that the SNVI (1.1)–(1.2) problem is equivalent to a system of nonlinear complementarities (abbreviated as SNC): find an element (x*, y*) ∈ K1 × K2 such that S(x*, y*) ∈ K1*, T(x*, y*) ∈ K2*, and

(1.4) ⟨ρS(x*, y*), x*⟩ = 0 for ρ > 0,
(1.5) ⟨ηT(x*, y*), y*⟩ = 0 for η > 0,

where K1* and K2*, respectively, are the polar cones to K1 and K2, defined by

K1* = {f ∈ H1 : ⟨f, x⟩ ≥ 0, ∀x ∈ K1},
K2* = {g ∈ H2 : ⟨g, y⟩ ≥ 0, ∀y ∈ K2}.

Now, we recall some auxiliary results and notions crucial to the problem on hand.

Lemma 1.1. For an element z ∈ H, we have x ∈ K and ⟨x − z, y − x⟩ ≥ 0, ∀y ∈ K, if and only if x = PK(z).

Lemma 1.2 ([3]).
Let {α^k}, {β^k}, and {γ^k} be three nonnegative sequences such that

α^{k+1} ≤ (1 − t^k)α^k + β^k + γ^k for k = 0, 1, 2, ...,

where t^k ∈ [0, 1], Σ_{k=0}^∞ t^k = ∞, β^k = o(t^k), and Σ_{k=0}^∞ γ^k < ∞. Then α^k → 0 as k → ∞.

A mapping T : H → H from a Hilbert space H into H is called monotone if

⟨T(x) − T(y), x − y⟩ ≥ 0 for all x, y ∈ H.

The mapping T is (r)-strongly monotone if for each x, y ∈ H we have

⟨T(x) − T(y), x − y⟩ ≥ r‖x − y‖² for a constant r > 0.

This implies that ‖T(x) − T(y)‖ ≥ r‖x − y‖, that is, T is (r)-expansive, and when r = 1 it is expansive. The mapping T is called (s)-Lipschitz continuous (or Lipschitzian) if there exists a constant s ≥ 0 such that

‖T(x) − T(y)‖ ≤ s‖x − y‖, ∀x, y ∈ H.

T is called (µ)-cocoercive if for each x, y ∈ H we have

⟨T(x) − T(y), x − y⟩ ≥ µ‖T(x) − T(y)‖² for a constant µ > 0.

Clearly, every (µ)-cocoercive mapping T is (1/µ)-Lipschitz continuous. We can easily see that the following implications among monotonicity, strong monotonicity and expansiveness hold:

strong monotonicity ⇒ expansiveness
        ⇓
   monotonicity

T is called relaxed (γ)-cocoercive if there exists a constant γ > 0 such that

⟨T(x) − T(y), x − y⟩ ≥ (−γ)‖T(x) − T(y)‖², ∀x, y ∈ H.

T is said to be (r)-strongly pseudomonotone if there exists a positive constant r such that

⟨T(y), x − y⟩ ≥ 0 ⇒ ⟨T(x), x − y⟩ ≥ r‖x − y‖², ∀x, y ∈ H.

T is said to be relaxed (γ, r)-cocoercive if there exist constants γ, r > 0 such that

⟨T(x) − T(y), x − y⟩ ≥ (−γ)‖T(x) − T(y)‖² + r‖x − y‖².

Clearly, this implies that ⟨T(x) − T(y), x − y⟩ ≥ (−γ)‖T(x) − T(y)‖², that is, T is relaxed (γ)-cocoercive. T is said to be relaxed (γ, r)-pseudococoercive if there exist positive constants γ and r such that

⟨T(y), x − y⟩ ≥ 0 ⇒ ⟨T(x), x − y⟩ ≥ (−γ)‖T(x) − T(y)‖² + r‖x − y‖², ∀x, y ∈ H.
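The recursion of Lemma 1.2 can be illustrated numerically. The sketch below is not part of the paper; the particular sequences t^k = 1/(k+1) (so Σ t^k = ∞), β^k = 1/(k+1)² = o(t^k), and γ^k = 2^(−k) (summable) are illustrative choices satisfying the lemma's hypotheses, under which α^k is driven toward 0:

```python
# Illustrative check of Lemma 1.2 with assumed sequences (see lead-in):
# alpha^{k+1} <= (1 - t^k) alpha^k + beta^k + gamma^k forces alpha^k -> 0.
alpha = 1.0
for k in range(100_000):
    t = 1.0 / (k + 1)        # t^k in [0, 1], sum t^k diverges
    beta = t / (k + 1)       # beta^k = o(t^k)
    gamma = 0.5 ** k         # sum gamma^k < infinity
    alpha = (1 - t) * alpha + beta + gamma

print(alpha)                 # small: alpha^k has decayed toward 0
```

The decay is slow (roughly log(k)/k for these choices) because t^k shrinks; after 100 000 steps α^k is below 10⁻³.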
Thus, we have the following implications:

(r)-strong monotonicity   ⇒   strong (r)-pseudomonotonicity
          ⇓                               ⇓
relaxed (γ, r)-cocoercivity     relaxed (γ, r)-pseudococoercivity

2. GENERAL PROJECTION METHODS

This section deals with the convergence of projection methods in the context of the approximation-solvability of the SNVI (1.1)–(1.2) problem.

Algorithm 2.1. For an arbitrarily chosen initial point (x^0, y^0) ∈ K1 × K2, compute the sequences {x^k} and {y^k} such that

x^{k+1} = (1 − a^k − b^k)x^k + a^k PK[x^k − ρS(x^k, y^k)] + b^k u^k,
y^{k+1} = (1 − α^k − β^k)y^k + α^k QK[y^k − ηT(x^k, y^k)] + β^k v^k,

where PK is the projection of H1 onto K1, QK is the projection of H2 onto K2, ρ, η > 0 are constants, S : K1 × K2 → H1 and T : K1 × K2 → H2 are any two mappings, and u^k and v^k, respectively, are bounded sequences in K1 and K2. The sequences {a^k}, {b^k}, {α^k}, and {β^k} are in [0, 1] with (k ≥ 0)

0 ≤ a^k + b^k ≤ 1, 0 ≤ α^k + β^k ≤ 1.

Algorithm 2.2. For an arbitrarily chosen initial point (x^0, y^0) ∈ K1 × K2, compute the sequences {x^k} and {y^k} such that

x^{k+1} = (1 − a^k − b^k)x^k + a^k PK[x^k − ρS(x^k, y^k)] + b^k u^k,
y^{k+1} = (1 − a^k − b^k)y^k + a^k QK[y^k − ηT(x^k, y^k)] + b^k v^k,

where PK is the projection of H1 onto K1, QK is the projection of H2 onto K2, ρ, η > 0 are constants, S : K1 × K2 → H1 and T : K1 × K2 → H2 are any two mappings, and u^k and v^k, respectively, are bounded sequences in K1 and K2. The sequences {a^k} and {b^k} are in [0, 1] with (k ≥ 0)

0 ≤ a^k + b^k ≤ 1.

Algorithm 2.3. For an arbitrarily chosen initial point (x^0, y^0) ∈ K1 × K2, compute the sequences {x^k} and {y^k} such that

x^{k+1} = (1 − a^k)x^k + a^k PK[x^k − ρS(x^k, y^k)],
y^{k+1} = (1 − a^k)y^k + a^k QK[y^k − ηT(x^k, y^k)],

where PK is the projection of H1 onto K1, QK is the projection of H2 onto K2, ρ, η > 0 are constants, and S : K1 × K2 → H1 and T : K1 × K2 → H2 are any two mappings.
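As an illustration only (not from the paper), Algorithm 2.3 can be run on a toy SNVI in R × R with K1 = K2 = [0, ∞). The mappings S(x, y) = 2x + 0.1y − 1 and T(x, y) = 2y + 0.1x − 1, the step sizes ρ = η = 0.3, and the constant relaxation parameter a^k = 0.5 are all assumptions chosen for the example (these S and T are strongly monotone, hence strongly pseudomonotone, and Lipschitz continuous in each variable):

```python
# Toy run of Algorithm 2.3 (illustrative problem; see lead-in for assumptions).

def P(z):
    """Projection of a real z onto K = [0, inf)."""
    return max(z, 0.0)

def S(x, y):
    return 2.0 * x + 0.1 * y - 1.0

def T(x, y):
    return 2.0 * y + 0.1 * x - 1.0

rho = eta = 0.3
a = 0.5                          # constant a^k
x, y = 5.0, 3.0                  # arbitrary starting point in K1 x K2
for k in range(200):
    # Both updates use the current iterate (x^k, y^k), as in Algorithm 2.3.
    x, y = ((1 - a) * x + a * P(x - rho * S(x, y)),
            (1 - a) * y + a * P(y - eta * T(x, y)))

# The iterates approach the unique solution x* = y* = 1/2.1, which satisfies
# the fixed-point characterization x* = P[x* - rho*S(x*, y*)].
print(round(x, 4), round(y, 4))  # 0.4762 0.4762
```

Here the solution lies in the interior of K, so the variational inequalities reduce to S = T = 0; the same loop applies to any closed convex K once P is replaced by the corresponding projection.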
The sequence {a^k} is in [0, 1] for k ≥ 0.

We consider, based on Algorithm 2.2, the approximation solvability of the SNVI (1.1)–(1.2) problem involving strongly pseudomonotone and Lipschitz continuous mappings in Hilbert space settings.

Theorem 2.1. Let H1 and H2 be two real Hilbert spaces and let K1 and K2, respectively, be nonempty closed convex subsets of H1 and H2. Let S : K1 × K2 → H1 be strongly (r)-pseudomonotone and (µ)-Lipschitz continuous in the first variable, and let S be (ν)-Lipschitz continuous in the second variable. Let T : K1 × K2 → H2 be strongly (s)-pseudomonotone and (β)-Lipschitz continuous in the second variable, and let T be (τ)-Lipschitz continuous in the first variable. Let ‖·‖* denote the norm on H1 × H2 defined by

‖(x, y)‖* = ‖x‖ + ‖y‖, ∀(x, y) ∈ H1 × H2.

In addition, let

θ = √(1 − 2ρr + ρ + ρµ²/2 + ρ²µ²) + ητ < 1,
σ = √(1 − 2ηs + η + ηβ²/2 + η²β²) + ρν < 1,

let (x*, y*) ∈ K1 × K2 form a solution to the SNVI (1.1)–(1.2) problem, and let the sequences {x^k} and {y^k} be generated by Algorithm 2.2. Furthermore, let

(i) ⟨S(x*, y^k), x^k − x*⟩ ≥ 0;
(ii) ⟨T(x^k, y*), y^k − y*⟩ ≥ 0;
(iii) 0 ≤ a^k + b^k ≤ 1;
(iv) Σ_{k=0}^∞ a^k = ∞ and Σ_{k=0}^∞ b^k < ∞;
(v) 0 < ρ < 2r/µ² and 0 < η < 2s/β².

Then the sequence {(x^k, y^k)} converges to (x*, y*).

Proof. Since (x*, y*) ∈ K1 × K2 forms a solution to the SNVI (1.1)–(1.2) problem, it follows that

x* = PK[x* − ρS(x*, y*)] and y* = QK[y* − ηT(x*, y*)].
Applying Algorithm 2.2, we have

(2.1) ‖x^{k+1} − x*‖
 = ‖(1 − a^k − b^k)x^k + a^k PK[x^k − ρS(x^k, y^k)] + b^k u^k − (1 − a^k − b^k)x* − a^k PK[x* − ρS(x*, y*)] − b^k x*‖
 ≤ (1 − a^k − b^k)‖x^k − x*‖ + a^k ‖PK[x^k − ρS(x^k, y^k)] − PK[x* − ρS(x*, y*)]‖ + M b^k
 ≤ (1 − a^k)‖x^k − x*‖ + a^k ‖x^k − x* − ρ[S(x^k, y^k) − S(x*, y^k) + S(x*, y^k) − S(x*, y*)]‖ + M b^k
 ≤ (1 − a^k)‖x^k − x*‖ + a^k ‖x^k − x* − ρ[S(x^k, y^k) − S(x*, y^k)]‖ + a^k ρ‖S(x*, y^k) − S(x*, y*)‖ + M b^k,

where M = max{sup_k ‖u^k − x*‖, sup_k ‖v^k − y*‖} < ∞. Since S is strongly (r)-pseudomonotone and (µ)-Lipschitz continuous in the first variable, and S is (ν)-Lipschitz continuous in the second variable, we have in light of (i) that

(2.2) ‖x^k − x* − ρ[S(x^k, y^k) − S(x*, y^k)]‖²
 = ‖x^k − x*‖² − 2ρ⟨S(x^k, y^k) − S(x*, y^k), x^k − x*⟩ + ρ²‖S(x^k, y^k) − S(x*, y^k)‖²
 = ‖x^k − x*‖² − 2ρ⟨S(x^k, y^k), x^k − x*⟩ + 2ρ⟨S(x*, y^k), x^k − x*⟩ + ρ²‖S(x^k, y^k) − S(x*, y^k)‖²
 ≤ ‖x^k − x*‖² − 2ρr‖x^k − x*‖² + ρ²µ²‖x^k − x*‖² + 2ρ⟨S(x*, y^k), x^k − x*⟩
 = [1 − 2ρr + ρ²µ²]‖x^k − x*‖² + 2ρ⟨S(x*, y^k), x^k − x*⟩.

On the other hand, we have

2ρ⟨S(x*, y^k), x^k − x*⟩ ≤ ρ[‖S(x*, y^k)‖² + ‖x^k − x*‖²]

and

(2.3) ‖S(x*, y^k)‖²
 = (1/2){‖S(x*, y^k) − S(x^k, y^k)‖² − 2[‖S(x^k, y^k)‖² − (1/2)‖S(x^k, y^k) + S(x*, y^k)‖²]}
 ≤ (1/2){‖S(x*, y^k) − S(x^k, y^k)‖² − 2[‖S(x^k, y^k)‖² − (1/2)‖S(x^k, y^k) − S(x*, y^k)‖²]}
 ≤ (1/2)‖S(x*, y^k) − S(x^k, y^k)‖²
 ≤ (µ²/2)‖x^k − x*‖²,

where ‖S(x^k, y^k)‖² − (1/2)‖S(x^k, y^k) − S(x*, y^k)‖² > 0.

Therefore, we get

2ρ⟨S(x*, y^k), x^k − x*⟩ ≤ (ρ + ρµ²/2)‖x^k − x*‖².

It follows that

‖x^k − x* − ρ[S(x^k, y^k) − S(x*, y^k)]‖² ≤ [1 − 2ρr + ρ + ρµ²/2 + (ρµ)²]‖x^k − x*‖².
As a result, we have

(2.4) ‖x^{k+1} − x*‖ ≤ (1 − a^k)‖x^k − x*‖ + a^k θ‖x^k − x*‖ + a^k ρν‖y^k − y*‖ + M b^k,

where θ = √(1 − 2ρr + ρ + ρµ²/2 + ρ²µ²). Similarly, we have

(2.5) ‖y^{k+1} − y*‖ ≤ (1 − a^k)‖y^k − y*‖ + a^k σ‖y^k − y*‖ + a^k ητ‖x^k − x*‖ + M b^k,

where σ = √(1 − 2ηs + η + ηβ²/2 + η²β²). It follows from (2.4) and (2.5) that

(2.6) ‖x^{k+1} − x*‖ + ‖y^{k+1} − y*‖
 ≤ (1 − a^k)‖x^k − x*‖ + a^k θ‖x^k − x*‖ + a^k ητ‖x^k − x*‖ + M b^k + (1 − a^k)‖y^k − y*‖ + a^k σ‖y^k − y*‖ + a^k ρν‖y^k − y*‖ + M b^k
 = [1 − (1 − δ)a^k](‖x^k − x*‖ + ‖y^k − y*‖) + 2M b^k,

where δ = max{θ + ητ, σ + ρν} and H1 × H2 is a Banach space under the norm ‖·‖*. If we set α^k = ‖x^k − x*‖ + ‖y^k − y*‖, t^k = (1 − δ)a^k, γ^k = 2M b^k, and β^k = 0 for k = 0, 1, 2, ..., in Lemma 1.2, and apply (iii) and (iv), we conclude that ‖x^k − x*‖ + ‖y^k − y*‖ → 0 as k → ∞. Hence ‖x^{k+1} − x*‖ + ‖y^{k+1} − y*‖ → 0. Consequently, the sequence {(x^k, y^k)} converges strongly to (x*, y*), a solution to the SNVI (1.1)–(1.2) problem. This completes the proof.

Note that the proof of the following theorem follows rather directly, without using Lemma 1.2.

Theorem 2.2. Let H1 and H2 be two real Hilbert spaces and let K1 and K2, respectively, be nonempty closed convex subsets of H1 and H2. Let S : K1 × K2 → H1 be strongly (r)-pseudomonotone and (µ)-Lipschitz continuous in the first variable, and let S be (ν)-Lipschitz continuous in the second variable. Let T : K1 × K2 → H2 be strongly (s)-pseudomonotone and (β)-Lipschitz continuous in the second variable, and let T be (τ)-Lipschitz continuous in the first variable. Let ‖·‖* denote the norm on H1 × H2 defined by

‖(x, y)‖* = ‖x‖ + ‖y‖, ∀(x, y) ∈ H1 × H2.
In addition, let

θ = √(1 − 2ρr + ρ + ρµ²/2 + ρ²µ²) + ητ < 1,
σ = √(1 − 2ηs + η + ηβ²/2 + η²β²) + ρν < 1,

let (x*, y*) ∈ K1 × K2 form a solution to the SNVI (1.1)–(1.2) problem, and let the sequences {x^k} and {y^k} be generated by Algorithm 2.3. Furthermore, let

(i) ⟨S(x*, y^k), x^k − x*⟩ ≥ 0;
(ii) ⟨T(x^k, y*), y^k − y*⟩ ≥ 0;
(iii) 0 ≤ a^k ≤ 1;
(iv) Σ_{k=0}^∞ a^k = ∞;
(v) 0 < ρ < 2r/µ² and 0 < η < 2s/β².

Then the sequence {(x^k, y^k)} converges strongly to (x*, y*).

Proof. Since (x*, y*) ∈ K1 × K2 forms a solution to the SNVI (1.1)–(1.2) problem, it follows that

x* = PK[x* − ρS(x*, y*)] and y* = QK[y* − ηT(x*, y*)].

Applying Algorithm 2.3, we have

(2.7) ‖x^{k+1} − x*‖
 = ‖(1 − a^k)x^k + a^k PK[x^k − ρS(x^k, y^k)] − (1 − a^k)x* − a^k PK[x* − ρS(x*, y*)]‖
 ≤ (1 − a^k)‖x^k − x*‖ + a^k ‖PK[x^k − ρS(x^k, y^k)] − PK[x* − ρS(x*, y*)]‖
 ≤ (1 − a^k)‖x^k − x*‖ + a^k ‖x^k − x* − ρ[S(x^k, y^k) − S(x*, y^k) + S(x*, y^k) − S(x*, y*)]‖
 ≤ (1 − a^k)‖x^k − x*‖ + a^k ‖x^k − x* − ρ[S(x^k, y^k) − S(x*, y^k)]‖ + a^k ρ‖S(x*, y^k) − S(x*, y*)‖.

Since S is strongly (r)-pseudomonotone and (µ)-Lipschitz continuous in the first variable, and S is (ν)-Lipschitz continuous in the second variable, we have in light of (i) that

(2.8) ‖x^k − x* − ρ[S(x^k, y^k) − S(x*, y^k)]‖²
 = ‖x^k − x*‖² − 2ρ⟨S(x^k, y^k) − S(x*, y^k), x^k − x*⟩ + ρ²‖S(x^k, y^k) − S(x*, y^k)‖²
 = ‖x^k − x*‖² − 2ρ⟨S(x^k, y^k), x^k − x*⟩ + 2ρ⟨S(x*, y^k), x^k − x*⟩ + ρ²‖S(x^k, y^k) − S(x*, y^k)‖²
 ≤ ‖x^k − x*‖² − 2ρr‖x^k − x*‖² + ρ²µ²‖x^k − x*‖² + 2ρ⟨S(x*, y^k), x^k − x*⟩
 = [1 − 2ρr + ρ²µ²]‖x^k − x*‖² + 2ρ⟨S(x*, y^k), x^k − x*⟩.

On the other hand, we have

2ρ⟨S(x*, y^k), x^k − x*⟩ ≤ ρ[‖S(x*, y^k)‖² + ‖x^k − x*‖²]
and

(2.9) ‖S(x*, y^k)‖²
 = (1/2){‖S(x*, y^k) − S(x^k, y^k)‖² − 2[‖S(x^k, y^k)‖² − (1/2)‖S(x^k, y^k) + S(x*, y^k)‖²]}
 ≤ (1/2){‖S(x*, y^k) − S(x^k, y^k)‖² − 2[‖S(x^k, y^k)‖² − (1/2)‖S(x^k, y^k) − S(x*, y^k)‖²]}
 ≤ (1/2)‖S(x*, y^k) − S(x^k, y^k)‖²
 ≤ (µ²/2)‖x^k − x*‖²,

where ‖S(x^k, y^k)‖² − (1/2)‖S(x^k, y^k) − S(x*, y^k)‖² > 0.

Therefore, we get

2ρ⟨S(x*, y^k), x^k − x*⟩ ≤ (ρ + ρµ²/2)‖x^k − x*‖².

It follows that

‖x^k − x* − ρ[S(x^k, y^k) − S(x*, y^k)]‖² ≤ [1 − 2ρr + ρ + ρµ²/2 + (ρµ)²]‖x^k − x*‖².

As a result, we have

(2.10) ‖x^{k+1} − x*‖ ≤ (1 − a^k)‖x^k − x*‖ + a^k θ‖x^k − x*‖ + a^k ρν‖y^k − y*‖,

where θ = √(1 − 2ρr + ρ + ρµ²/2 + ρ²µ²). Similarly, we have

(2.11) ‖y^{k+1} − y*‖ ≤ (1 − a^k)‖y^k − y*‖ + a^k σ‖y^k − y*‖ + a^k ητ‖x^k − x*‖,

where σ = √(1 − 2ηs + η + ηβ²/2 + η²β²). It follows from (2.10) and (2.11) that

(2.12) ‖x^{k+1} − x*‖ + ‖y^{k+1} − y*‖
 ≤ (1 − a^k)‖x^k − x*‖ + a^k θ‖x^k − x*‖ + a^k ητ‖x^k − x*‖ + (1 − a^k)‖y^k − y*‖ + a^k σ‖y^k − y*‖ + a^k ρν‖y^k − y*‖
 = [1 − (1 − δ)a^k](‖x^k − x*‖ + ‖y^k − y*‖)
 ≤ Π_{j=0}^k [1 − (1 − δ)a^j](‖x^0 − x*‖ + ‖y^0 − y*‖),

where δ = max{θ + ητ, σ + ρν} and H1 × H2 is a Banach space under the norm ‖·‖*.

Since δ < 1 and Σ_{k=0}^∞ a^k is divergent, it follows that

lim_{k→∞} Π_{j=0}^k [1 − (1 − δ)a^j] = 0.

Therefore, ‖x^{k+1} − x*‖ + ‖y^{k+1} − y*‖ → 0, and consequently the sequence {(x^k, y^k)} converges strongly to (x*, y*), a solution to the SNVI (1.1)–(1.2) problem. This completes the proof.

REFERENCES

[1] S.S. CHANG, Y.J. CHO AND J.K. KIM, On the two-step projection methods and applications to variational inequalities, Mathematical Inequalities and Applications, accepted.

[2] Z. LIU, J.S. UME AND S.M.
KANG, Generalized nonlinear variational-like inequalities in reflexive Banach spaces, Journal of Optimization Theory and Applications, 126(1) (2005), 157–174.

[3] L.S. LIU, Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces, Journal of Mathematical Analysis and Applications, 194 (1995), 114–127.

[4] H. NIE, Z. LIU, K.H. KIM AND S.M. KANG, A system of nonlinear variational inequalities involving strongly monotone and pseudocontractive mappings, Advances in Nonlinear Variational Inequalities, 6(2) (2003), 91–99.

[5] R.U. VERMA, Nonlinear variational and constrained hemivariational inequalities, ZAMM: Z. Angew. Math. Mech., 77(5) (1997), 387–391.

[6] R.U. VERMA, Generalized convergence analysis for two-step projection methods and applications to variational problems, Applied Mathematics Letters, 18 (2005), 1286–1292.

[7] R.U. VERMA, Projection methods, algorithms and a new system of nonlinear variational inequalities, Computers and Mathematics with Applications, 41 (2001), 1025–1031.

[8] R. WITTMANN, Approximation of fixed points of nonexpansive mappings, Archiv der Mathematik, 58 (1992), 486–491.

[9] E. ZEIDLER, Nonlinear Functional Analysis and its Applications I, Springer-Verlag, New York, 1986.

[10] E. ZEIDLER, Nonlinear Functional Analysis and its Applications II/B, Springer-Verlag, New York, 1990.