Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 891232, 25 pages
http://dx.doi.org/10.1155/2013/891232

Research Article

Relaxed Extragradient Methods with Regularization for General System of Variational Inequalities with Constraints of Split Feasibility and Fixed Point Problems

L. C. Ceng,1 A. Petruşel,2 and J. C. Yao3,4

1 Department of Mathematics, Shanghai Normal University, and Scientific Computing Key Laboratory of Shanghai Universities, Shanghai 200234, China
2 Department of Applied Mathematics, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
3 Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung 807, Taiwan
4 Department of Applied Mathematics, National Sun Yat-Sen University, Kaohsiung 804, Taiwan

Correspondence should be addressed to J. C. Yao; yaojc@kmu.edu.tw

Received 1 September 2012; Accepted 15 October 2012

Academic Editor: Julian López-Gómez

Copyright © 2013 L. C. Ceng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We suggest and analyze relaxed extragradient iterative algorithms with regularization for finding a common element of the solution set of a general system of variational inequalities, the solution set of a split feasibility problem, and the fixed point set of a strictly pseudocontractive mapping defined on a real Hilbert space. The relaxed extragradient methods with regularization are based on the well-known successive approximation method, the extragradient method, the viscosity approximation method, and the regularization method. Strong convergence of the proposed algorithms is established under mild conditions. Our results supplement, improve, extend, and develop the corresponding results in the very recent literature.

1. Introduction

Let H be a real Hilbert space whose inner product and norm are denoted by ⟨⋅, ⋅⟩ and ‖⋅‖, respectively. Let K be a nonempty closed convex subset of H. The (nearest point or metric) projection from H onto K is denoted by P_K. We write x_n ⇀ x to indicate that the sequence {x_n} converges weakly to x and x_n → x to indicate that the sequence {x_n} converges strongly to x.

Let C and Q be nonempty closed convex subsets of infinite-dimensional real Hilbert spaces H1 and H2, respectively. The split feasibility problem (SFP) is to find a point x* with the property

\[ x^* \in C, \qquad Ax^* \in Q, \tag{1} \]

where A ∈ B(H1, H2) and B(H1, H2) denotes the family of all bounded linear operators from H1 to H2.

In 1994, the SFP was first introduced by Censor and Elfving [1], in finite-dimensional Hilbert spaces, for modeling inverse problems which arise from phase retrieval and from medical image reconstruction. A number of image reconstruction problems can be formulated as the SFP; see, for example, [2] and the references therein. Recently, it was found that the SFP can also be applied to study intensity-modulated radiation therapy; see, for example, [3–5] and the references therein. In the recent past, a wide variety of iterative methods have been used in signal processing and image reconstruction and for solving the SFP; see, for example, [2–12] and the references therein. A special case of the SFP is the following convex constrained linear inverse problem [13] of finding an element x such that

\[ x \in C, \qquad Ax = b. \tag{2} \]
It has been extensively investigated in the literature using the projected Landweber iterative method [14]. Comparatively, the SFP has received much less attention so far, due to the complexity resulting from the set Q. Therefore, whether the various versions of the projected Landweber iterative method [14] can be extended to solve the SFP remains an interesting open topic. For example, it is not yet clear whether the dual approach to (1.2) of [15] can be extended to the SFP. The original algorithm given in [1] involves the computation of the inverse A^{-1} (assuming the existence of the inverse of A) and thus has not become popular. A seemingly more popular algorithm that solves the SFP is the CQ algorithm of Byrne [2, 7], which is a gradient-projection method (GPM) in convex minimization and is also a special case of the proximal forward-backward splitting method [16]. The CQ algorithm involves only the computation of the projections P_C and P_Q onto the sets C and Q, respectively, and is therefore implementable in the case where P_C and P_Q have closed-form expressions, for example, when C and Q are closed balls or half-spaces. However, it remains a challenge how to implement the CQ algorithm in the case where the projections P_C and/or P_Q fail to have closed-form expressions, even though the (weak) convergence of the algorithm can still be proved theoretically.

Very recently, Xu [6] continued the study of the CQ algorithm and its convergence. He applied Mann's algorithm to the SFP and proposed an averaged CQ algorithm which was proved to be weakly convergent to a solution of the SFP. He also established a strong convergence result, which shows that the minimum-norm solution can be obtained. Furthermore, Korpelevič [17] introduced the so-called extragradient method for finding a solution of a saddle point problem and proved that the sequences generated by the proposed iterative algorithm converge to a solution of the saddle point problem.

Throughout this paper, assume that the SFP is consistent, that is, the solution set Γ of the SFP is nonempty. Let f : H1 → ℝ be a continuously differentiable function. The minimization problem

\[ \min_{x \in C} f(x) := \frac{1}{2}\|Ax - P_Q Ax\|^2 \tag{3} \]

is ill posed. Therefore, Xu [6] considered the following Tikhonov regularization problem:

\[ \min_{x \in C} f_\alpha(x) := \frac{1}{2}\|Ax - P_Q Ax\|^2 + \frac{1}{2}\alpha\|x\|^2, \tag{4} \]

where α > 0 is the regularization parameter. The regularized minimization (4) has a unique solution, which is denoted by x_α. The following results are easy to prove.

Proposition 1 (see [18, Proposition 3.1]). Given x* ∈ H1, the following statements are equivalent:

(i) x* solves the SFP;

(ii) x* solves the fixed point equation

\[ P_C(I - \lambda \nabla f)x^* = x^*, \tag{5} \]

where λ > 0, ∇f = A*(I − P_Q)A, and A* is the adjoint of A;

(iii) x* solves the variational inequality problem (VIP) of finding x* ∈ C such that

\[ \langle \nabla f(x^*), x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{6} \]

It is clear from Proposition 1 that

\[ \Gamma = \operatorname{Fix}(P_C(I - \lambda \nabla f)) = \operatorname{VI}(C, \nabla f) \tag{7} \]

for all λ > 0, where Fix(P_C(I − λ∇f)) and VI(C, ∇f) denote the set of fixed points of P_C(I − λ∇f) and the solution set of the VIP (6), respectively.

Proposition 2 (see [18]).
There hold the following statements: (i) the gradient ∇𝑓𝛼 = ∇𝑓 + 𝛼𝐼 = 𝐴∗ (𝐼 − 𝑃𝑄) 𝐴 + 𝛼𝐼 (8) is (𝛼 + ‖𝐴‖2 )-Lipschitz continuous and 𝛼-strongly monotone; (ii) the mapping 𝑃𝐶(𝐼 − 𝜆∇𝑓𝛼 ) is a contraction with coefficient 2 1 √ 1 − 𝜆 (2𝛼 − 𝜆(‖𝐴‖2 + 𝛼) ) (≤ √1 − 𝛼𝜆 ≤ 1 − 𝛼𝜆) , 2 (9) 2 where 0 < 𝜆 ≤ 𝛼/(‖𝐴‖2 + 𝛼) ; (iii) if the SFP is consistent, then the strong lim𝛼 → 0 𝑥𝛼 exists and is the minimum-norm solution of the SFP. Very recently, by combining the regularization method and extragradient method due to Nadezhkina and Takahashi [19], Ceng et al. [18] proposed an extragradient algorithm with regularization and proved that the sequences generated by the proposed algorithm converge weakly to an element of Fix(𝑆) ∩ Γ, where 𝑆 : 𝐶 → 𝐶 is a nonexpansive mapping. Theorem 3 (see [18, Theorem 3.1]). Let 𝑆 : 𝐶 → 𝐶 be a nonexpansive mapping such that Fix (𝑆) ∩ Γ ≠ 0. Let {𝑥𝑛 } and {𝑦𝑛 } the sequences in 𝐶 generated by the following extragradient algorithm: 𝑥0 = 𝑥 ∈ 𝐶 𝑐ℎ𝑜𝑠𝑒𝑛 𝑎𝑟𝑏𝑖𝑡𝑟𝑎𝑟𝑖𝑙𝑦, 𝑦𝑛 = 𝑃𝐶 (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑥𝑛 )) , 𝑥𝑛+1 = 𝛽𝑛 𝑥𝑛 + (1 − 𝛽𝑛 ) 𝑆𝑃𝐶 (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑦𝑛 )) , ∀𝑛 ≥ 0, (10) 2 where ∑∞ 𝑛=0 𝛼𝑛 < ∞, {𝜆 𝑛 } ⊂ [𝑎, 𝑏] for some 𝑎, 𝑏 ∈ (0, 1/‖𝐴‖ ) and {𝛽𝑛 } ⊂ [𝑐, 𝑑] for some 𝑐, 𝑑 ∈ (0, 1). Then, both the sequences {𝑥𝑛 } and {𝑦𝑛 } converge weakly to an element 𝑥̂ ∈ Fix(𝑆) ∩ Γ. On the other hand, assume that 𝐶 is a nonempty closed convex subset of H and 𝐴 : 𝐶 → H is a mapping. The classical variational inequality problem (VIP) is to find 𝑥∗ ∈ 𝐶 such that ⟨𝐴𝑥∗ , 𝑥 − 𝑥∗ ⟩ ≥ 0, ∀𝑥 ∈ 𝐶. (11) It is now well known that the variational inequalities are equivalent to the fixed point problems, the origin of which Abstract and Applied Analysis 3 can be traced back to Lions and Stampacchia [20]. This alternative formulation has been used to suggest and analyze projection iterative method for solving variational inequalities under the conditions that the involved operator must be strongly monotone and Lipschitz continuous. Related to the variational inequalities, we have the problem of finding the fixed points of nonexpansive mappings or strict pseudocontraction mappings, which is the current interest in functional analysis. Several people considered a unified approach to solve variational inequality problems and fixed point problems; see, for example, [21–28] and the references therein. A mapping 𝐴 : 𝐶 → H is said to be an 𝛼-inverse strongly monotone if there exists 𝛼 > 0 such that 2 ⟨𝐴𝑥 − 𝐴𝑦, 𝑥 − 𝑦⟩ ≥ 𝛼𝐴𝑥 − 𝐴𝑦 , ∀𝑥, 𝑦 ∈ 𝐶. (12) A mapping 𝑆 : 𝐶 → 𝐶 is said to be 𝑘-strictly pseudocontractive if there exists 0 ≤ 𝑘 < 1 such that 2 2 𝑆𝑥 − 𝑆𝑦 ≤ 𝑥 − 𝑦 2 + 𝑘(𝐼 − 𝑆) 𝑥 − (𝐼 − 𝑆) 𝑦 , [23] introduced and considered the following problem of finding (𝑥∗ , 𝑦∗ ) ∈ 𝐶 × 𝐶 such that ⟨𝜇1 𝐵1 𝑦∗ + 𝑥∗ − 𝑦∗ , 𝑥 − 𝑥∗ ⟩ ≥ 0, ∀𝑥 ∈ 𝐶, ⟨𝜇2 𝐵2 𝑥∗ + 𝑦∗ − 𝑥∗ , 𝑥 − 𝑦∗ ⟩ ≥ 0, ∀𝑥 ∈ 𝐶, which is called a general system of variational inequalities (GSVI), which 𝜇1 > 0 and 𝜇2 > 0 are two constants. The set of solutions of problem (14) is denoted by GSVI(𝐶, 𝐵1 , 𝐵2 ). In particular, if 𝐵1 = 𝐵2 , then problem (14) reduces to the new system of variational inequalities, introduced and studied by Verma [32]. Recently, Ceng et al. [23] transformed problem (14) into a fixed point problem in the following way. Lemma 4 (see [23]). For given 𝑥, 𝑦 ∈ 𝐶, (𝑥, 𝑦) is a solution of problem (14) if and only if 𝑥 is a fixed point of the mapping 𝐺 : 𝐶 → 𝐶 defined by 𝐺 (𝑥) = 𝑃𝐶 [𝑃𝐶 (𝑥 − 𝜇2 𝐵2 𝑥) −𝜇1 𝐵1 𝑃𝐶 (𝑥 − 𝜇2 𝐵2 𝑥)] , ∀𝑥, 𝑦 ∈ 𝐶. (13) In this case, we also say that 𝑆 is a 𝑘-strict pseudo-contraction. 
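To make Theorem 3 concrete, the following is a minimal numerical sketch of the regularized extragradient scheme (10), assuming that C and Q are boxes (so that P_C and P_Q have closed-form expressions, as in the CQ algorithm discussion above) and taking S to be the identity, which is trivially nonexpansive. The operator A, the box bounds, and the parameter choices below are illustrative assumptions only and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of the regularized extragradient scheme (10) for the SFP,
# under illustrative assumptions: C and Q are boxes, S = I.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))                  # A in B(H1, H2), here R^5 -> R^3
proj_C = lambda x: np.clip(x, -1.0, 1.0)         # P_C: projection onto the box C
proj_Q = lambda y: np.clip(y, -0.5, 0.5)         # P_Q: projection onto the box Q
S = lambda x: x                                  # a nonexpansive mapping S

def grad_f_alpha(x, alpha):
    """Regularized gradient (8): A*(I - P_Q)A x + alpha x."""
    Ax = A @ x
    return A.T @ (Ax - proj_Q(Ax)) + alpha * x

x = proj_C(rng.standard_normal(5))               # x_0 chosen in C
lam = 0.5 / np.linalg.norm(A, 2) ** 2            # lambda_n in [a, b] subset of (0, 1/||A||^2)
beta = 0.5                                       # beta_n in [c, d] subset of (0, 1)
for n in range(300):
    alpha_n = 1.0 / (n + 1) ** 2                 # sum_n alpha_n < infinity
    y = proj_C(x - lam * grad_f_alpha(x, alpha_n))
    x = beta * x + (1 - beta) * S(proj_C(x - lam * grad_f_alpha(y, alpha_n)))

print("SFP residual ||Ax - P_Q(Ax)|| =", np.linalg.norm(A @ x - proj_Q(A @ x)))
```

Under these choices the hypotheses of Theorem 3 are met (∑ α_n < ∞, λ_n ∈ (0, 1/‖A‖²), β_n ∈ (0, 1)), and the printed residual ‖Ax − P_Q(Ax)‖ measures how close the final iterate is to solving the SFP for this sample problem.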
In particular, whenever 𝑘 = 0, 𝑆 becomes a nonexpansive mapping from 𝐶 into itself. It is clear that every inverse strongly monotone mapping is a monotone and Lipschitz continuous mapping. We denote by VI(𝐶, 𝐴) and Fix(𝑆) the solution set of problem (11) and the set of all fixed points of 𝑆, respectively. For finding an element of Fix(𝑆) ∩ VI(𝐶, 𝐴) under the assumption that a set 𝐶 ⊂ H is nonempty, closed, and convex, a mapping 𝑆 : 𝐶 → 𝐶 is nonexpansive, and a mapping 𝐴 : 𝐶 → H is 𝛼-inverse strongly monotone, Takahashi and Toyoda [29] introduced an iterative scheme and studied the weak convergence of the sequence generated by the proposed scheme to a point of Fix(𝑆) ∩ VI(𝐶, 𝐴). Recently, Iiduka and Takahashi [30] presented another iterative scheme for finding an element of Fix(𝑆)∩VI(𝐶, 𝐴) and showed that the sequence generated by the scheme converges strongly to 𝑃Fix(𝑆)∩VI(𝐶,𝐴) 𝑢, where 𝑢 is the initially chosen point in the iterative scheme and 𝑃𝐾 denotes the metric projection of H onto 𝐾. Based on Korpelevič’s extragradient method [17], Nadezhkina and Takahashi [19] introduced an iterative process for finding an element of Fix(𝑆) ∩ VI(𝐶, 𝐴) and proved the weak convergence of the sequence to a point of Fix(𝑆) ∩ VI(𝐶, 𝐴). Zeng and Yao [27] presented an iterative scheme for finding an element of Fix(𝑆) ∩ VI(𝐶, 𝐴) and proved that two sequences generated by the method converges strongly to an element of Fix(𝑆) ∩ VI(𝐶, 𝐴). Recently, Bnouhachem et al. [31] suggested and analyzed an iterative scheme for finding a common element of the fixed point set Fix(𝑆) of a nonexpansive mapping 𝑆 and the solution set VI(𝐶, 𝐴) of the variational inequality (11) for an inverse strongly monotone mapping 𝐴 : 𝐶 → H. Furthermore, as a much more general generalization of the classical variational inequality problem (11), Ceng et al. (14) ∀𝑥 ∈ 𝐶, (15) where 𝑦 = 𝑃𝐶(𝑥 − 𝜇2 𝐵2 𝑥). In particular, if the mapping 𝐵𝑖 : 𝐶 → H is 𝛽𝑖 -inverse strongly monotone for 𝑖 = 1, 2, then the mapping 𝐺 is nonexpansive provided 𝜇𝑖 ∈ (0, 2𝛽𝑖 ) for 𝑖 = 1, 2. Utilizing Lemma 4, they introduced and studied a relaxed extragradient method for solving problem (14). Throughout this paper, the set of fixed points of the mapping 𝐺 is denoted by Ξ. Based on the relaxed extragradient method and viscosity approximation method, Yao et al. [26] proposed and analyzed an iterative algorithm for finding a common solution of the GSVI (14) and the fixed point problem of a strictly pseudo-contractive mapping 𝑆 : 𝐶 → 𝐶. Subsequently, Ceng et al. [33] further presented and analyzed an iterative scheme for finding a common element of the solution set of the VIP (11), the solution set of the GSVI (14), and the fixed point set of a strictly pseudo-contractive mapping 𝑆 : 𝐶 → 𝐶. Theorem 5 (see [33, Theorem 3.1]). Let 𝐶 be a nonempty closed convex subset of a real Hilbert space H. Let 𝐴 : 𝐶 → H be 𝛼-inverse strongly monotone and 𝐵𝑖 : 𝐶 → H be 𝛽𝑖 -inverse strongly monotone for 𝑖 = 1, 2. Let 𝑆 : 𝐶 → 𝐶 be a 𝑘-strictly pseudo-contractive mapping such that Fix(𝑆)∩Ξ∩V𝐼(𝐶, 𝐴) ≠ 0. Let 𝑄 : 𝐶 → 𝐶 be a 𝜌-contraction with 𝜌 ∈ [0, 1/2). 
For 𝑢𝑛 } be given 𝑥0 ∈ 𝐶 arbitrarily, let the sequences {𝑥𝑛 }, {𝑢𝑛 }, {̃ generated by the relaxed extragradient iterative scheme: 𝑢𝑛 = 𝑃𝐶 [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] , 𝑢̃𝑛 = 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 𝐴𝑢𝑛 ) , 𝑢𝑛 ) , 𝑦𝑛 = 𝛼𝑛 𝑄𝑥𝑛 + (1 − 𝛼𝑛 ) 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 𝐴̃ 𝑥𝑛+1 = 𝛽𝑛 𝑥𝑛 + 𝛾𝑛 𝑦𝑛 + 𝛿𝑛 𝑆𝑦𝑛 , (16) ∀𝑛 ≥ 0, where 𝜇𝑖 ∈ (0, 2𝛽𝑖 ) for 𝑖 = 1, 2, and the following conditions hold for five sequences {𝜆 𝑛 } ⊂ (0, 𝛼] and {𝛼𝑛 }, {𝛽𝑛 }, {𝛾𝑛 }, {𝛿𝑛 } ⊂ [0, 1]: 4 Abstract and Applied Analysis (i) 𝛽𝑛 + 𝛾𝑛 + 𝛿𝑛 = 1 and (𝛾𝑛 + 𝛿𝑛 )𝑘 ≤ 𝛾𝑛 for all 𝑛 ≥ 0; (ii) lim𝑛 → ∞ 𝛼𝑛 = 0 and ∑∞ 𝑛=0 𝛼𝑛 = ∞; (iii) 0 < lim inf 𝑛 → ∞ 𝛽𝑛 ≤ lim sup𝑛 → ∞ 𝛽𝑛 < 1 and lim inf 𝑛 → ∞ 𝛿𝑛 > 0; (iv) lim𝑛 → ∞ (𝛾𝑛+1 /(1 − 𝛽𝑛+1 ) − 𝛾𝑛 /(1 − 𝛽𝑛 )) = 0; (v) 0 < lim inf 𝑛 → ∞ 𝜆 𝑛 ≤ lim sup𝑛 → ∞ 𝜆 𝑛 < 𝛼 and lim𝑛 → ∞ |𝜆 𝑛+1 − 𝜆 𝑛 | = 0. Then the sequences {𝑥𝑛 }, {𝑢𝑛 }, {̃ 𝑢𝑛 } converge strongly to the same point 𝑥 = 𝑃Fix(𝑆)∩Ξ∩V𝐼(𝐶,𝐴) 𝑄𝑥 if and only if lim𝑛 → ∞ ‖𝑢𝑛+1 − 𝑢𝑛 ‖ = 0. Furthermore, (𝑥, 𝑦) is a solution of the GSVI (14), where 𝑦 = 𝑃𝐶(𝑥 − 𝜇2 𝐵2 𝑥). Motivated and inspired by the research going on this area, we propose and analyze the following relaxed extragradient iterative algorithms with regularization for finding a common element of the solution set of the GSVI (14), the solution set of the SFP (1), and the fixed point set of a strictly pseudocontractive mapping 𝑆 : 𝐶 → 𝐶. Algorithm 6. Let 𝜇𝑖 ∈ (0, 2𝛽𝑖 ) for 𝑖 = 1, 2, {𝛼𝑛 } ⊂ (0, ∞), {𝜆 𝑛 } ⊂ (0, 1/‖𝐴‖2 ) and {𝜎𝑛 }, {𝛽𝑛 }, {𝛾𝑛 }, {𝛿𝑛 } ⊂ [0, 1] such that 𝛽𝑛 + 𝛾𝑛 + 𝛿𝑛 = 1 for all 𝑛 ≥ 0. For given 𝑥0 ∈ 𝐶 arbitrarily, 𝑢𝑛 } be the sequences generated by the following let {𝑥𝑛 }, {𝑢𝑛 }, {̃ relaxed extragradient iterative scheme with regularization: 𝑢𝑛 = 𝑃𝐶 [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] , 𝑢̃𝑛 = 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 )) , 𝑢𝑛 )) , 𝑦𝑛 = 𝜎𝑛 𝑄𝑥𝑛 + (1 − 𝜎𝑛 ) 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑥𝑛+1 = 𝛽𝑛 𝑥𝑛 + 𝛾𝑛 𝑦𝑛 + 𝛿𝑛 𝑆𝑦𝑛 , (17) Let 𝐾 be a nonempty, closed, and convex subset of a real Hilbert space H. Now we present some known results and definitions which will be used in the sequel. The metric (or nearest point) projection from H onto 𝐾 is the mapping 𝑃𝐾 : H → 𝐾 which assigns to each point 𝑥 ∈ H the unique point 𝑃𝐾 𝑥 ∈ 𝐾 satisfying the property (19) The following properties of projections are useful and pertinent to our purpose. Proposition 8 (see [34]). For given 𝑥 ∈ H and 𝑧 ∈ 𝐾: Under mild assumptions, it is proven that the sequences {𝑥𝑛 }, {𝑢𝑛 }, {̃ 𝑢𝑛 } converge strongly to the same point 𝑥 = 𝑃Fix(𝑆)∩Ξ∩Γ 𝑄𝑥 if and only if lim𝑛 → ∞ ‖𝑢𝑛+1 − 𝑢𝑛 ‖ = 0. Furthermore, (𝑥, 𝑦) is a solution of the GSVI (14), where 𝑦 = 𝑃𝐶(𝑥 − 𝜇2 𝐵2 𝑥). Algorithm 7. Let 𝜇𝑖 ∈ (0, 2𝛽𝑖 ) for 𝑖 = 1, 2, {𝛼𝑛 } ⊂ (0, ∞), {𝜆 𝑛 } ⊂ (0, 1/‖𝐴‖2 ) and {𝜎𝑛 }, {𝜏𝑛 }, {𝛽𝑛 }, {𝛾𝑛 }, {𝛿𝑛 } ⊂ [0, 1] such that 𝜎𝑛 +𝜏𝑛 ≤ 1 and 𝛽𝑛 +𝛾𝑛 +𝛿𝑛 = 1 for all 𝑛 ≥ 0. For given 𝑥0 ∈ 𝐶 arbitrarily, let {𝑥𝑛 }, {𝑦𝑛 }, {𝑧𝑛 } be the sequences generated by the following relaxed extragradient iterative scheme with regularization: 𝑧𝑛 = 𝑃𝐶 (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑥𝑛 )) , 𝑦𝑛 (i) 𝑧 = 𝑃𝐾 𝑥 ⇔ ⟨𝑥 − 𝑧, 𝑦 − 𝑧⟩ ≤ 0, ∀𝑦 ∈ 𝐾; (ii) 𝑧 = 𝑃𝐾 𝑥 ⇔ ‖𝑥 − 𝑧‖2 ≤ ‖𝑥 − 𝑦‖2 − ‖𝑦 − 𝑧‖2 , ∀𝑦 ∈ 𝐾; (iii) ⟨𝑃𝐾 𝑥 − 𝑃𝐾 𝑦, 𝑥 − 𝑦⟩ ≥ ‖𝑃𝐾 𝑥 − 𝑃𝐾 𝑦‖2 , ∀𝑦 ∈ H, which hence implies that 𝑃𝐾 is nonexpansive and monotone. Definition 9. 
A mapping 𝑇 : H → H is said to be (a) nonexpansive if 𝑇𝑥 − 𝑇𝑦 ≤ 𝑥 − 𝑦 , ∀𝑥, 𝑦 ∈ H; (18) −𝜇1 𝐵1 𝑃𝐶 (𝑧𝑛 − 𝜇2 𝐵2 𝑧𝑛 )] , 𝑥𝑛+1 ∀𝑛 ≥ 0, Also, under appropriate conditions, it is shown that the sequences {𝑥𝑛 }, {𝑦𝑛 }, {𝑧𝑛 } converge strongly to the same point (20) (b) firmly nonexpansive if 2𝑇 − 𝐼 is nonexpansive, or equivalently, 2 ⟨𝑥 − 𝑦, 𝑇𝑥 − 𝑇𝑦⟩ ≥ 𝑇𝑥 − 𝑇𝑦 , = 𝜎𝑛 𝑄𝑥𝑛 + 𝜏𝑛 𝑃𝐶 (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 )) = 𝛽𝑛 𝑥𝑛 + 𝛾𝑛 𝑦𝑛 + 𝛿𝑛 𝑆𝑦𝑛 , 2. Preliminaries 𝑥 − 𝑃𝐾 𝑥 = inf 𝑥 − 𝑦 =: 𝑑 (𝑥, 𝐾) . 𝑦∈𝐾 ∀𝑛 ≥ 0. + (1 − 𝜎𝑛 − 𝜏𝑛 ) 𝑃𝐶 [𝑃𝐶 (𝑧𝑛 − 𝜇2 𝐵2 𝑧𝑛 ) 𝑥 = 𝑃Fix(𝑆)∩Ξ∩Γ 𝑄𝑥 if and only if lim𝑛 → ∞ ‖𝑧𝑛+1 − 𝑧𝑛 ‖ = 0. Furthermore, (𝑥, 𝑦) is a solution of the GSVI (14), where 𝑦 = 𝑃𝐶(𝑥 − 𝜇2 𝐵2 𝑥). Note that both [6, Theorem 5.7] and [18, Theorem 3.1] are weak convergence results for solving the SFP (1). Beyond question our strong convergence results are very interesting and quite valuable. Because our relaxed extragradient iterative schemes (17) and (18) with regularization involve a contractive self-mapping 𝑄, a 𝑘-strictly pseudo-contractive self-mapping 𝑆 and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [6, Theorem 5.7] and [18, Theorem 3.1], respectively. Furthermore, the relaxed extragradient iterative scheme (16) is extended to develop our relaxed extragradient iterative schemes (17) and (18) with regularization. All in all, our results represent the modification, supplementation, extension, and improvement of [6, Theorem 5.7], [18, Theorem 3.1], and [33, Theorem 3.1]. ∀𝑥, 𝑦 ∈ H; (21) alternatively, 𝑇 is firmly nonexpansive if and only if 𝑇 can be expressed as 𝑇= 1 (𝐼 + 𝑆) , 2 (22) where 𝑆 : H → H is nonexpansive; projections are firmly nonexpansive. Abstract and Applied Analysis 5 Definition 10. Let 𝑇 be a nonlinear operator with domain 𝐷(𝑇) ⊆ H and range 𝑅(𝑇) ⊆ H. (a) 𝑇 is said to be monotone if ⟨𝑥 − 𝑦, 𝑇𝑥 − 𝑇𝑦⟩ ≥ 0, (v) If the mappings {𝑇𝑖 }𝑁 𝑖=1 are averaged and have a common fixed point, then 𝑁 ∀𝑥, 𝑦 ∈ 𝐷 (𝑇) . (23) (b) Given a number 𝛽 > 0, 𝑇 is said to be 𝛽-strongly monotone if 2 (24) ⟨𝑥 − 𝑦, 𝑇𝑥 − 𝑇𝑦⟩ ≥ 𝛽𝑥 − 𝑦 , ∀𝑥, 𝑦 ∈ 𝐷 (𝑇) . (c) Given a number 𝜈 > 0, 𝑇 is said to be 𝜈-inverse strongly monotone (𝜈-ism) if 2 ⟨𝑥 − 𝑦, 𝑇𝑥 − 𝑇𝑦⟩ ≥ 𝜈𝑇𝑥 − 𝑇𝑦 , ∀𝑥, 𝑦 ∈ 𝐷 (𝑇) . (25) It can be easily seen that if 𝑆 is nonexpansive, then 𝐼 − 𝑆 is monotone. It is also easy to see that a projection 𝑃𝐾 is 1-ism. Inverse strongly monotone (also referred to as cocoercive) operators have been applied widely in solving practical problems in various fields, for instance, in traffic assignment problems; see, for example, [35, 36]. Definition 11. A mapping 𝑇 : H → H is said to be an averaged mapping if it can be written as the average of the identity 𝐼 and a nonexpansive mapping, that is, 𝑇 ≡ (1 − 𝛼) 𝐼 + 𝛼𝑆, (26) where 𝛼 ∈ (0, 1) and 𝑆 : H → His nonexpansive. More precisely, when the last equality holds, we say that 𝑇 is 𝛼averaged. Thus firmly nonexpansive mappings (in particular, projections) are 1/2-averaged maps. Proposition 12 (see [7]). Let 𝑇 : H → H be a given mapping. (i) 𝑇 is nonexpansive if and only if the complement 𝐼 − 𝑇 is 1/2-ism. (ii) If 𝑇 is 𝜈-ism, then for 𝛾 > 0, 𝛾𝑇 is 𝜈/𝛾-ism. (iii) 𝑇 is averaged if and only if the complement 𝐼 − 𝑇 is 𝜈-ism for some 𝜈 > 1/2. Indeed, for 𝛼 ∈ (0, 1), 𝑇 is 𝛼-averaged if and only if 𝐼 − 𝑇 is 1/2𝛼-ism. Proposition 13 (see [7, 37]). Let 𝑆, 𝑇, 𝑉 : H → H be given operators. (i) If 𝑇 = (1 − 𝛼)𝑆 + 𝛼𝑉 for some 𝛼 ∈ (0, 1) and if 𝑆 is averaged and 𝑉 is nonexpansive, then 𝑇 is averaged. 
(ii) 𝑇 is firmly nonexpansive if and only if the complement 𝐼 − 𝑇 is firmly nonexpansive. (iii) If 𝑇 = (1 − 𝛼)𝑆 + 𝛼𝑉 for some 𝛼 ∈ (0, 1) and if 𝑆 is firmly nonexpansive and 𝑉 is nonexpansive, then 𝑇 is averaged. (iv) The composite of finitely many averaged mappings is averaged. That is, if each of the mappings {𝑇𝑖 }𝑁 𝑖=1 is averaged, then so is the composite 𝑇1 ∘ 𝑇2 ∘ ⋅ ⋅ ⋅ ∘ 𝑇𝑁. In particular, if 𝑇1 is 𝛼1 -averaged and 𝑇2 is 𝛼2 -averaged, where 𝛼1 , 𝛼2 ∈ (0, 1), then the composite 𝑇1 ∘ 𝑇2 is 𝛼averaged, where 𝛼 = 𝛼1 + 𝛼2 − 𝛼1 𝛼2 . ⋂ Fix (𝑇𝑖 ) = Fix (𝑇1 ⋅ ⋅ ⋅ 𝑇𝑁) . (27) 𝑖=1 The notation Fix(𝑇) denotes the set of all fixed points of the mapping 𝑇, that is, Fix(𝑇) = {𝑥 ∈ H : 𝑇𝑥 = 𝑥}. It is clear that, in a real Hilbert space H, 𝑆 : 𝐶 → 𝐶 is 𝑘-strictly pseudo-contractive if and only if there holds the following inequality: ⟨𝑆𝑥 − 𝑆𝑦, 𝑥 − 𝑦⟩ 2 2 1 − 𝑘 ≤ 𝑥 − 𝑦 − (𝐼 − 𝑆) 𝑥 − (𝐼 − 𝑆) 𝑦 , 2 ∀𝑥, 𝑦 ∈ 𝐶. (28) This immediately implies that if 𝑆 is a 𝑘-strictly pseudocontractive mapping, then 𝐼 − 𝑆 is (1 − 𝑘)/2-inverse strongly monotone; for further detail, we refer to [38] and the references therein. It is well known that the class of strict pseudo-contractions strictly includes the class of nonexpansive mappings. In order to prove the main result of this paper, the following lemmas will be required. Lemma 14 (see [39]). Let {𝑥𝑛 } and {𝑦𝑛 } be bounded sequences in a Banach space 𝑋 and let {𝛽𝑛 } be a sequence in [0, 1] with 0 < lim inf 𝑛 → ∞ 𝛽𝑛 ≤ lim sup𝑛 → ∞ 𝛽𝑛 < 1. Suppose 𝑥𝑛+1 = (1−𝛽𝑛 )𝑦𝑛 +𝛽𝑛 𝑥𝑛 for all integers 𝑛 ≥ 0 and lim sup𝑛 → ∞ (‖𝑦𝑛+1 − 𝑦𝑛 ‖ − ‖𝑥𝑛+1 − 𝑥𝑛 ‖) ≤ 0. Then, lim𝑛 → ∞ ‖𝑦𝑛 − 𝑥𝑛 ‖ = 0. Lemma 15 (see [38, Proposition 2.1]). Let 𝐶 be a nonempty closed convex subset of a real Hilbert space H and 𝑆 : 𝐶 → 𝐶 a mapping. (i) If 𝑆 is a 𝑘-strict pseudo-contractive mapping, then 𝑆 satisfies the Lipschitz condition 1 + 𝑘 𝑥 − 𝑦 , 𝑆𝑥 − 𝑆𝑦 ≤ 1−𝑘 ∀𝑥, 𝑦 ∈ 𝐶. (29) (ii) If 𝑆 is a 𝑘-strict pseudo-contractive mapping, then the mapping 𝐼 − 𝑆 is semiclosed at 0, that is, if {𝑥𝑛 } is a sequence in 𝐶 such that 𝑥𝑛 → 𝑥̃ weakly and (𝐼 − 𝑆)𝑥𝑛 → 0 strongly, then (𝐼 − 𝑆)𝑥̃ = 0. (iii) If 𝑆 is 𝑘-(quasi-)strict pseudo-contraction, then the fixed point set Fix (𝑆) of 𝑆 is closed and convex so that the projection 𝑃 Fix (𝑆) is well defined. The following lemma plays a key role in proving strong convergence of the sequences generated by our algorithms. Lemma 16 (see [34]). Let {𝑎𝑛 } be a sequence of nonnegative real numbers satisfying the property 𝑎𝑛+1 ≤ (1 − 𝑠𝑛 ) 𝑎𝑛 + 𝑠𝑛 𝑡𝑛 + 𝑟𝑛 , where {𝑠𝑛 } ⊂ (0, 1] and {𝑡𝑛 } are such that ∀𝑛 ≥ 0, (30) 6 Abstract and Applied Analysis (i) ∑∞ 𝑛=0 𝑠𝑛 = ∞; (ii) either lim sup𝑛 → ∞ 𝑡𝑛 ≤ 0 or ∑∞ 𝑛=0 |𝑠𝑛 𝑡𝑛 | < ∞; (iii) ∑∞ 𝑛=0 𝑟𝑛 < ∞ where 𝑟𝑛 ≥ 0, ∀𝑛 ≥ 0. Then, lim𝑛 → ∞ 𝑎𝑛 = 0. Lemma 17 (see [26]). Let 𝐶 be a nonempty closed convex subset of a real Hilbert space H. Let 𝑆 : 𝐶 → 𝐶 be a 𝑘-strictly pseudo-contractive mapping. Let 𝛾 and 𝛿 be two nonnegative real numbers such that (𝛾 + 𝛿)𝑘 ≤ 𝛾. Then 𝛾 (𝑥 − 𝑦) + 𝛿 (𝑆𝑥 − 𝑆𝑦) ≤ (𝛾 + 𝛿) 𝑥 − 𝑦 , ∀𝑥, 𝑦 ∈ 𝐶. (31) The following lemma is an immediate consequence of an inner product. Lemma 18. In a real Hilbert space H, there holds the inequality 2 2 𝑥 + 𝑦 ≤ ‖𝑥‖ + 2 ⟨𝑦, 𝑥 + 𝑦⟩ , ∀𝑥, 𝑦 ∈ H. (32) Let 𝐾 be a nonempty closed convex subset of a real Hilbert space H and let 𝐹 : 𝐾 → H be a monotone mapping. The variational inequality problem (VIP) is to find 𝑥 ∈ 𝐾 such that ⟨𝐹𝑥, 𝑦 − 𝑥⟩ ≥ 0, ∀𝑦 ∈ 𝐾. (33) The solution set of the VIP is denoted by VI(𝐾, 𝐹). It is well known that 𝑥 ∈ VI (𝐾, 𝐹) ⇐⇒ 𝑥 = 𝑃𝐾 (𝑥 − 𝜆𝐹𝑥) , ∀𝜆 > 0. 𝐹𝑣 + 𝑁𝐾 𝑣, if 𝑣 ∈ 𝐾, 0, if 𝑣 ∉ 𝐾. 
(vi) 0 < lim inf 𝑛 → ∞ 𝜆 𝑛 ≤ lim sup𝑛 → ∞ 𝜆 𝑛 < 1/‖𝐴‖2 and lim𝑛 → ∞ |𝜆 𝑛+1 − 𝜆 𝑛 | = 0. Then the sequences {𝑥𝑛 }, {𝑢𝑛 }, {̃ 𝑢𝑛 } converge strongly to the same point 𝑥 = 𝑃Fix(𝑆)∩Ξ∩Γ 𝑄𝑥 if and only if lim𝑛 → ∞ ‖𝑢𝑛+1 − 𝑢𝑛 ‖ = 0. Furthermore, (𝑥, 𝑦) is a solution of the GSVI (14), where 𝑦 = 𝑃𝐶(𝑥 − 𝜇2 𝐵2 𝑥). Proof. First, taking into account 0 < lim inf 𝑛 → ∞ 𝜆 𝑛 ≤ lim sup𝑛 → ∞ 𝜆 𝑛 < 1/‖𝐴‖2 , without loss of generality we may assume that {𝜆 𝑛 } ⊂ [𝑎, 𝑏] for some 𝑎, 𝑏 ∈ (0, 1/‖𝐴‖2 ). Now, let us show that 𝑃𝐶(𝐼 − 𝜆∇𝑓𝛼 ) is 𝜁-averaged for each 𝜆 ∈ (0, 2/(𝛼 + ‖𝐴‖2 )), where 𝜁= 2 + 𝜆 (𝛼 + ‖𝐴‖2 ) 4 . (37) Indeed, it is easy to see that ∇𝑓 = 𝐴∗ (𝐼 − 𝑃𝑄)𝐴 is 1/‖𝐴‖2 ism, that is, 1 2 ⟨∇𝑓 (𝑥) − ∇𝑓 (𝑦) , 𝑥 − 𝑦⟩ ≥ ∇𝑓 (𝑥) − ∇𝑓 (𝑦) . (38) ‖𝐴‖2 Observe that (𝛼 + ‖𝐴‖2 ) ⟨∇𝑓𝛼 (𝑥) − ∇𝑓𝛼 (𝑦) , 𝑥 − 𝑦⟩ 2 = (𝛼 + ‖𝐴‖2 ) [𝛼𝑥 − 𝑦 (35) + ⟨∇𝑓 (𝑥) − ∇𝑓 (𝑦) , 𝑥 − 𝑦⟩ ] 2 = 𝛼2 𝑥 − 𝑦 Define 𝑇𝑣 = { (i) ∑∞ 𝑛=0 𝛼𝑛 < ∞; (ii) 𝛽𝑛 + 𝛾𝑛 + 𝛿𝑛 = 1 and (𝛾𝑛 + 𝛿𝑛 )𝑘 ≤ 𝛾𝑛 for all 𝑛 ≥ 0; (iii) lim𝑛 → ∞ 𝜎𝑛 = 0 and ∑∞ 𝑛=0 𝜎𝑛 = ∞; (iv) 0 < lim inf 𝑛 → ∞ 𝛽𝑛 ≤ lim sup𝑛 → ∞ 𝛽𝑛 < 1 and lim inf 𝑛 → ∞ 𝛿𝑛 > 0; (v) lim𝑛 → ∞ (𝛾𝑛+1 /(1 − 𝛽𝑛+1 ) − 𝛾𝑛 /(1 − 𝛽𝑛 )) = 0; (34) A set-valued mapping 𝑇 : H → 2H is called monotone if for all 𝑥, 𝑦 ∈ H, 𝑓 ∈ 𝑇𝑥 and 𝑔 ∈ 𝑇𝑦 imply that ⟨𝑥 − 𝑦, 𝑓 − 𝑔⟩ ≥ 0. A monotone set-valued mapping 𝑇 : H → 2H is called maximal if its graph Gph(𝑇) is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping 𝑇 : H → 2H is maximal if and only if for (𝑥, 𝑓) ∈ H × H, ⟨𝑥 − 𝑦, 𝑓 − 𝑔⟩ ≥ 0 for every (𝑦, 𝑔) ∈ Gph(𝑇) implies that 𝑓 ∈ 𝑇𝑥. Let 𝐹 : 𝐾 → H be a monotone and Lipschitz continuous mapping and let 𝑁𝐾 𝑣 be the normal cone to 𝐾 at 𝑣 ∈ 𝐾, that is, 𝑁𝐾 𝑣 = {𝑤 ∈ H : ⟨𝑣 − 𝑢, 𝑤⟩ ≥ 0, ∀𝑢 ∈ 𝐾} . Theorem 19. Let 𝐶 be a nonempty closed convex subset of a real Hilbert space H1 . Let 𝐴 ∈ 𝐵(H1 , H2 ), and let 𝐵𝑖 : 𝐶 → H1 be 𝛽𝑖 -inverse strongly monotone for 𝑖 = 1, 2. Let 𝑆 : 𝐶 → 𝐶 be a 𝑘-strictly pseudo-contractive mapping such that Fix(𝑆) ∩ Ξ ∩ Γ ≠ 0. Let 𝑄 : 𝐶 → 𝐶 be a 𝜌-contraction with 𝜌 ∈ [0, 1/2). 𝑢𝑛 } For given 𝑥0 ∈ 𝐶 arbitrarily, let the sequences {𝑥𝑛 }, {𝑢𝑛 }, {̃ be generated by the relaxed extragradient iterative algorithm (17) with regularization, where 𝜇𝑖 ∈ (0, 2𝛽𝑖 ) for 𝑖 = 1, 2, {𝛼𝑛 } ⊂ (0, ∞), {𝜆 𝑛 } ⊂ (0, 1/‖𝐴‖2 ) and {𝜎𝑛 }, {𝛽𝑛 }, {𝛾𝑛 }, {𝛿𝑛 } ⊂ [0, 1] such that (36) It is known that in this case the mapping 𝑇 is maximal monotone, and 0 ∈ 𝑇𝑣 if and only if 𝑣 ∈ VI(𝐾, 𝐹); see further details, one refers to [40] and the references therein. 3. Main Results In this section, we first prove the strong convergence of the sequences generated by the relaxed extragradient iterative algorithm (17) with regularization. 2 + 𝛼 ⟨∇𝑓 (𝑥) − ∇𝑓 (𝑦) , 𝑥 − 𝑦⟩ + 𝛼‖𝐴‖2 𝑥 − 𝑦 + ‖𝐴‖2 ⟨∇𝑓 (𝑥) − ∇𝑓 (𝑦) , 𝑥 − 𝑦⟩ 2 ≥ 𝛼2 𝑥 − 𝑦 + 2𝛼 ⟨∇𝑓 (𝑥) − ∇𝑓 (𝑦) , 𝑥 − 𝑦⟩ 2 + ∇𝑓 (𝑥) − ∇𝑓 (𝑦) 2 = 𝛼 (𝑥 − 𝑦) + ∇𝑓 (𝑥) − ∇𝑓 (𝑦) 2 = ∇𝑓𝛼 (𝑥) − ∇𝑓𝛼 (𝑦) . (39) Abstract and Applied Analysis 7 Hence, it follows that ∇𝑓𝛼 = 𝛼𝐼 + 𝐴∗ (𝐼 − 𝑃𝑄)𝐴 is 1/(𝛼 + ‖𝐴‖2 )-ism. Thus, 𝜆∇𝑓𝛼 is 1/𝜆(𝛼 + ‖𝐴‖2 )-ism according to Proposition 12(ii). By Proposition 12(iii) the complement 𝐼 − 𝜆∇𝑓𝛼 is 𝜆(𝛼 + ‖𝐴‖2 )/2-averaged. 
Therefore, noting that 𝑃𝐶 is 1/2-averaged, and utilizing Proposition 13(iv), we know that for each 𝜆 ∈ (0, 2/(𝛼 + ‖𝐴‖2 )), 𝑃𝐶(𝐼 − 𝜆∇𝑓𝛼 ) is 𝜁-averaged with 𝜁= = 2 2 1 𝜆 (𝛼 + ‖𝐴‖ ) 1 𝜆 (𝛼 + ‖𝐴‖ ) + − ⋅ 2 2 2 2 2 + 𝜆 (𝛼 + ‖𝐴‖2 ) 4 (40) 𝑛≥0 𝑛≥0 2 2 ̃ 𝑢𝑛 − 𝑝 = 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑢𝑛 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓) 𝑝 = 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑢𝑛 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 2 +𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓) 𝑝 2 ≤ 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑢𝑛 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 + 2 ⟨𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 ∈ (0, 1) . This shows that 𝑃𝐶(𝐼 − 𝜆∇𝑓𝛼 ) is nonexpansive. Furthermore, for {𝜆 𝑛 } ⊂ [𝑎, 𝑏] with 𝑎, 𝑏 ∈ (0, 1/‖𝐴‖2 ), we have 𝑎 ≤ inf 𝜆 𝑛 ≤ sup 𝜆 𝑛 ≤ 𝑏 < Utilizing Lemma 18 we also have 1 1 = lim . 2 𝑛 → ∞ 𝛼𝑛 + ‖𝐴‖2 ‖𝐴‖ (41) −𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓) 𝑝, 𝑢̃𝑛 − 𝑝⟩ 2 ≤ 𝑢𝑛 − 𝑝 + 2 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓) 𝑝 𝑢̃𝑛 − 𝑝 2 ≤ 𝑢𝑛 − 𝑝 + 2 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 − (𝐼 − 𝜆 𝑛 ∇𝑓) 𝑝 𝑢̃𝑛 − 𝑝 Without loss of generality we may assume that 𝑎 ≤ inf 𝜆 𝑛 ≤ sup 𝜆 𝑛 ≤ 𝑏 < 𝑛≥0 𝑛≥0 1 , 𝛼𝑛 + ‖𝐴‖2 ∀𝑛 ≥ 0. (42) Consequently, it follows that for each integer 𝑛 ≥ 0, 𝑃𝐶(𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) is 𝜁𝑛 -averaged with 2 2 1 𝜆 𝑛 (𝛼𝑛 + ‖𝐴‖ ) 1 𝜆 𝑛 (𝛼𝑛 + ‖𝐴‖ ) 𝜁𝑛 = + − ⋅ 2 2 2 2 = 2 + 𝜆 𝑛 (𝛼𝑛 + ‖𝐴‖2 ) 4 (43) This immediately implies that 𝑃𝐶(𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) is nonexpansive for all 𝑛 ≥ 0. Next we divide the remainder of the proof into several steps. Step 1. {𝑥𝑛 } is bounded. Indeed, take an arbitrary 𝑝 ∈ Fix(𝑆) ∩ Ξ ∩ Γ. Then, we get 𝑆𝑝 = 𝑝, 𝑃𝐶(𝐼 − 𝜆∇𝑓)𝑝 = 𝑝 for 𝜆 ∈ (0, 2/‖𝐴‖2 ), and (44) From (17) it follows that ̃ 𝑢𝑛 − 𝑝 = 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑢𝑛 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓) 𝑝 ≤ 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑢𝑛 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 + 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓) 𝑝 ≤ 𝑢𝑛 − 𝑝 + (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑝 − (𝐼 − 𝜆 𝑛 ∇𝑓) 𝑝 ≤ 𝑢𝑛 − 𝑝 + 𝜆 𝑛 𝛼𝑛 𝑝 . (45) (46) For simplicity, we write 𝑞 = 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝) , ∈ (0, 1) . 𝑝 = 𝑃𝐶 [𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝) − 𝜇1 𝐵1 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝)] . 2 = 𝑢𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 . 𝑥̃𝑛 = 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) , 𝑢𝑛 = 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 )) , (47) for each 𝑛 ≥ 0. Then 𝑦𝑛 = 𝜎𝑛 𝑄𝑥𝑛 + (1 − 𝜎𝑛 )𝑢𝑛 for each 𝑛 ≥ 0. Since 𝐵𝑖 : 𝐶 → H1 is 𝛽𝑖 -inverse strongly monotone for 𝑖 = 1, 2 and 0 < 𝜇𝑖 < 2𝛽𝑖 for 𝑖 = 1, 2, we know that for all 𝑛 ≥ 0, 2 𝑢𝑛 − 𝑝 = 𝑃𝐶 [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) 2 −𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] − 𝑝 = 𝑃𝐶 [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] 2 −𝑃𝐶 [𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝) − 𝜇1 𝐵1 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝)] ≤ [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] 2 − [𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝) − 𝜇1 𝐵1 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝)] = [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝)] 2 −𝜇1 [𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝐵1 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝)] 2 ≤ 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝) − 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) 2 − 𝐵1 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝) 8 Abstract and Applied Analysis 2 ≤ (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − (𝑝 − 𝜇2 𝐵2 𝑝) + 2 ⟨𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) − 𝑢̃𝑛 , 𝑢𝑛 − 𝑢̃𝑛 ⟩ 2 − 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 + 2𝜆 𝑛 𝛼𝑛 ⟨𝑝, 𝑝 − 𝑢̃𝑛 ⟩ . (49) 2 = (𝑥𝑛 − 𝑝) − 𝜇2 (𝐵2 𝑥𝑛 − 𝐵2 𝑝) 2 − 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 Further, by Proposition 8(i), we have 2 2 ≤ 𝑥𝑛 − 𝑝 − 𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 ⟨𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) − 𝑢̃𝑛 , 𝑢𝑛 − 𝑢̃𝑛 ⟩ 2 − 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 2 ≤ 𝑥𝑛 − 𝑝 . = ⟨𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 ) − 𝑢̃𝑛 , 𝑢𝑛 − 𝑢̃𝑛 ⟩ (48) 𝑢𝑛 ) , 𝑢𝑛 − 𝑢̃𝑛 ⟩ + ⟨𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 ) − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) , 𝑢𝑛 − 𝑢̃𝑛 ⟩ ≤ ⟨𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 ) − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ Furthermore, by Proposition 8(ii), we have ≤ 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 ) − ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) 𝑢𝑛 − 𝑢̃𝑛 2 2 𝑢𝑛 ) − 𝑝 𝑢𝑛 − 𝑝 ≤ 𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 2 − 𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) − 𝑢𝑛 2 2 = 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢𝑛 ≤ 𝜆 𝑛 (𝛼𝑛 + ‖𝐴‖2 ) 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑢̃𝑛 . 
+ 2𝜆 𝑛 ⟨∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) , 𝑝 − 𝑢𝑛 ⟩ 2 2 = 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢𝑛 + 2𝜆 𝑛 ( ⟨∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) − ∇𝑓𝛼𝑛 (𝑝) , 𝑝 − 𝑢̃𝑛 ⟩ + ⟨∇𝑓𝛼𝑛 (𝑝) , 𝑝 − 𝑢̃𝑛 ⟩ + ⟨∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) , 𝑢̃𝑛 − 𝑢𝑛 ⟩ ) 2 2 ≤ 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢𝑛 + 2𝜆 𝑛 (⟨∇𝑓𝛼𝑛 (𝑝) , 𝑝 − 𝑢̃𝑛 ⟩ 𝑢𝑛 ) , 𝑢̃𝑛 − 𝑢𝑛 ⟩) + ⟨∇𝑓𝛼𝑛 (̃ 2 2 = 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢𝑛 + 2𝜆 𝑛 [ ⟨(𝛼𝑛 𝐼 + ∇𝑓) 𝑝, 𝑝 − 𝑢̃𝑛 ⟩ 𝑢𝑛 ) , 𝑢̃𝑛 − 𝑢𝑛 ⟩] + ⟨∇𝑓𝛼𝑛 (̃ 2 2 ≤ 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢𝑛 + 2𝜆 𝑛 [𝛼𝑛 ⟨𝑝, 𝑝 − 𝑢̃𝑛 ⟩ + ⟨∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) , 𝑢̃𝑛 − 𝑢𝑛 ⟩] 2 2 = 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢̃𝑛 2 − 2 ⟨𝑢𝑛 − 𝑢̃𝑛 , 𝑢̃𝑛 − 𝑢𝑛 ⟩ − 𝑢̃𝑛 − 𝑢𝑛 + 2𝜆 𝑛 [𝛼𝑛 ⟨𝑝, 𝑝 − 𝑢̃𝑛 ⟩ + ⟨∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) , 𝑢̃𝑛 − 𝑢𝑛 ⟩] 2 2 2 = 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢̃𝑛 − 𝑢̃𝑛 − 𝑢𝑛 So, from (45) we obtain 2 𝑢𝑛 − 𝑝 2 2 2 ≤ 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢̃𝑛 − 𝑢̃𝑛 − 𝑢𝑛 + 2 ⟨𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) − 𝑢̃𝑛 , 𝑢𝑛 − 𝑢̃𝑛 ⟩ + 2𝜆 𝑛 𝛼𝑛 ⟨𝑝, 𝑝 − 𝑢̃𝑛 ⟩ 2 2 ≤ 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢̃𝑛 2 − 𝑢̃𝑛 − 𝑢𝑛 + 2𝜆 𝑛 (𝛼𝑛 + ‖𝐴‖2 ) × 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑢̃𝑛 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 2 2 ≤ 𝑢𝑛 − 𝑝 − 𝑢𝑛 − 𝑢̃𝑛 2 − 𝑢̃𝑛 − 𝑢𝑛 2 2 + 𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) 𝑢𝑛 − 𝑢̃𝑛 2 + 𝑢̃𝑛 − 𝑢𝑛 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 2 = 𝑢𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 2 2 + (𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) − 1) 𝑢𝑛 − 𝑢̃𝑛 2 ≤ 𝑢𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 2 ≤ 𝑢𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 [ 𝑢𝑛 − 𝑝 + 𝜆 𝑛 𝛼𝑛 𝑝] 2 ≤ 𝑢𝑛 − 𝑝 (50) Abstract and Applied Analysis 9 2 + 4𝜆 𝑛 𝛼𝑛 𝑝 𝑢𝑛 − 𝑝 + 4𝜆2𝑛 𝛼𝑛2 𝑝 Now, we claim that 2 = (𝑢𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝) . (51) 𝑄𝑝 − 𝑝 } 𝑥𝑛+1 − 𝑝 ≤ max {𝑥0 − 𝑝 , 1−𝜌 (54) 𝑛 + 2𝑏 𝑝 ∑ 𝛼𝑗 . Hence it follows from (48) and (51) that 𝑦𝑛 − 𝑝 = 𝜎𝑛 (𝑄𝑥𝑛 − 𝑝) + (1 − 𝜎𝑛 ) (𝑢𝑛 − 𝑝) ≤ 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (1 − 𝜎𝑛 ) 𝑢𝑛 − 𝑝 ≤ 𝜎𝑛 (𝑄𝑥𝑛 − 𝑄𝑝 + 𝑄𝑝 − 𝑝) + (1 − 𝜎𝑛 ) (𝑢𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝) ≤ 𝜎𝑛 (𝜌 𝑥𝑛 − 𝑝 + 𝑄𝑝 − 𝑝) + (1 − 𝜎𝑛 ) (𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝) ≤ (1 − (1 − 𝜌) 𝜎𝑛 ) 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑝 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 = (1 − (1 − 𝜌) 𝜎𝑛 ) 𝑥𝑛 − 𝑝 𝑄𝑝 − 𝑝 + (1 − 𝜌) 𝜎𝑛 + 2𝜆 𝑛 𝛼𝑛 𝑝 1−𝜌 𝑗=0 As a matter of fact, if 𝑛 = 0, then it is clear that (54) is valid, that is, 0 𝑄𝑝 − 𝑝 } + 2𝑏 𝑝 ∑𝛼𝑗 . 𝑥1 − 𝑝 ≤ max {𝑥0 − 𝑝 , 1−𝜌 𝑗=0 (55) Assume that (54) holds for 𝑛 ≥ 1, that is, (52) 𝑛−1 𝑄𝑝 − 𝑝 } + 2𝑏 𝑝 ∑ 𝛼𝑗 . 𝑥𝑛 − 𝑝 ≤ max {𝑥0 − 𝑝 , 1−𝜌 𝑗=0 (56) Then, we conclude from (53) and (56) that 𝑥𝑛+1 − 𝑝 𝑄𝑝 − 𝑝 ≤ max {𝑥𝑛 − 𝑝 , } + 2𝑏 𝑝 𝛼𝑛 1−𝜌 𝑄𝑝 − 𝑝 ≤ max {𝑥𝑛 − 𝑝 , } + 2𝜆 𝑛 𝛼𝑛 𝑝 . 1−𝜌 Since (𝛾𝑛 + 𝛿𝑛 )𝑘 ≤ 𝛾𝑛 for all 𝑛 ≥ 0, utilizing Lemma 17 we obtain from (52) 𝑥𝑛+1 − 𝑝 = 𝛽𝑛 (𝑥𝑛 − 𝑝) + 𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝) ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝) ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 𝑦𝑛 − 𝑝 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 𝑄𝑝 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) [max {𝑥𝑛 − 𝑝 , } + 2𝜆 𝑛 𝛼𝑛 𝑝] 1−𝜌 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 { 𝑄𝑝 − 𝑝 ≤ max { max {𝑥0 − 𝑝 , } 1−𝜌 { 𝑛−1 𝑄𝑝 − 𝑝 } +2𝑏 𝑝 ∑ 𝛼𝑗 , } + 2𝑏 𝑝 𝛼𝑛 1 − 𝜌 𝑗=0 } 𝑄𝑝 − 𝑝 ≤ max {𝑥0 − 𝑝 , } 1−𝜌 (57) 𝑛−1 + 2𝑏 𝑝 ∑ 𝛼𝑗 + 2𝑏 𝑝 𝛼𝑛 𝑗=0 𝑄𝑝 − 𝑝 = max {𝑥0 − 𝑝 , } 1−𝜌 𝑛 + 2𝑏 𝑝 ∑𝛼𝑗 . 𝑗=0 𝑄𝑝 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) max {𝑥𝑛 − 𝑝 , } + 2𝜆 𝑛 𝛼𝑛 𝑝 1−𝜌 𝑄𝑝 − 𝑝 ≤ max {𝑥𝑛 − 𝑝 , } + 2𝑏 𝑝 𝛼𝑛 . 1−𝜌 (53) By induction, we conclude that (54) is valid. Hence, {𝑥𝑛 } is bounded. Since 𝑃𝐶, ∇𝑓𝛼𝑛 , 𝐵1 and 𝐵2 are Lipschitz continuous, 𝑢𝑛 }, {𝑢𝑛 }, {𝑦𝑛 } and {𝑥̃𝑛 } are bounded, it is easy to see that {𝑢𝑛 }, {̃ where 𝑥̃𝑛 = 𝑃𝐶(𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) for all 𝑛 ≥ 0. Step 2. lim𝑛 → ∞ ‖𝑥𝑛+1 − 𝑥𝑛 ‖ = 0. 10 Abstract and Applied Analysis Indeed, define 𝑥𝑛+1 = 𝛽𝑛 𝑥𝑛 + (1 − 𝛽𝑛 )𝑤𝑛 for all 𝑛 ≥ 0. It follows that 𝑤𝑛+1 − 𝑤𝑛 = = 2 −𝜇1 [𝐵1 𝑃𝐶 (𝑥𝑛+1 −𝜇2 𝐵2 𝑥𝑛+1 )−𝐵1 𝑃𝐶 (𝑥𝑛 −𝜇2 𝐵2 𝑥𝑛 )] 2 ≤ 𝑃𝐶 (𝑥𝑛+1 − 𝜇2 𝐵2 𝑥𝑛+1 ) − 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) 𝑥𝑛+2 − 𝛽𝑛+1 𝑥𝑛+1 𝑥𝑛+1 − 𝛽𝑛 𝑥𝑛 − 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝛾𝑛+1 𝑦𝑛+1 + 𝛿𝑛+1 𝑆𝑦𝑛+1 𝛾𝑛 𝑦𝑛 + 𝛿𝑛 𝑆𝑦𝑛 − 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝛾 (𝑦 − 𝑦𝑛 ) + 𝛿𝑛+1 (𝑆𝑦𝑛+1 − 𝑆𝑦𝑛 ) = 𝑛+1 𝑛+1 1 − 𝛽𝑛+1 +( 𝛾𝑛+1 𝛾𝑛 − )𝑦 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 +( 𝛿𝑛+1 𝛿𝑛 − ) 𝑆𝑦𝑛 . 1 − 𝛽𝑛+1 1 − 𝛽𝑛 = [𝑃𝐶 (𝑥𝑛+1 − 𝜇2 𝐵2 𝑥𝑛+1 ) − 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] − 𝜇1 (2𝛽1 − 𝜇1 ) (58) Since (𝛾𝑛 + 𝛿𝑛 )𝑘 ≤ 𝛾𝑛 for all 𝑛 ≥ 0, utilizing Lemma 17 we have 𝛾𝑛+1 (𝑦𝑛+1 − 𝑦𝑛 ) + 𝛿𝑛+1 (𝑆𝑦𝑛+1 − 𝑆𝑦𝑛 ) (59) ≤ (𝛾𝑛+1 + 𝛿𝑛+1 ) 𝑦𝑛+1 − 𝑦𝑛 . Next, we estimate ‖𝑦𝑛+1 − 𝑦𝑛 ‖. 
Observe that 𝑢̃𝑛+1 − 𝑢̃𝑛 = 𝑃𝐶 (𝑢𝑛+1 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (𝑢𝑛+1 )) −𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 )) ≤ 𝑃𝐶 (𝐼 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 ) 𝑢𝑛+1 −𝑃𝐶 (𝐼 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 ) 𝑢𝑛 + 𝑃𝐶 (𝐼 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 ) 𝑢𝑛 − 𝑃𝐶 (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑢𝑛 ≤ 𝑢𝑛+1 − 𝑢𝑛 + (𝐼 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 ) 𝑢𝑛 − (𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) 𝑢𝑛 = 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 𝐼 + ∇𝑓) 𝑢𝑛 − 𝜆 𝑛 (𝛼𝑛 𝐼 + ∇𝑓) 𝑢𝑛 ≤ 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑢𝑛 ) + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢𝑛 , (60) 2 𝑢𝑛+1 − 𝑢𝑛 = 𝑃𝐶 [𝑃𝐶 (𝑥𝑛+1 −𝜇2 𝐵2 𝑥𝑛+1 )−𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛+1 −𝜇2 𝐵2 𝑥𝑛+1 )] 2 −𝑃𝐶 [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] ≤ [𝑃𝐶 (𝑥𝑛+1 − 𝜇2 𝐵2 𝑥𝑛+1 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛+1 − 𝜇2 𝐵2 𝑥𝑛+1 )] 2 − [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] 2 × 𝐵1 𝑃𝐶 (𝑥𝑛+1 − 𝜇2 𝐵2 𝑥𝑛+1 ) − 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) 2 ≤ 𝑃𝐶 (𝑥𝑛+1 − 𝜇2 𝐵2 𝑥𝑛+1 ) − 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) 2 ≤ (𝑥𝑛+1 − 𝜇2 𝐵2 𝑥𝑛+1 ) − (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) 2 = (𝑥𝑛+1 − 𝑥𝑛 ) − 𝜇2 (𝐵2 𝑥𝑛+1 − 𝐵2 𝑥𝑛 ) 2 2 ≤ 𝑥𝑛+1 − 𝑥𝑛 − 𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑥𝑛+1 − 𝐵2 𝑥𝑛 2 ≤ 𝑥𝑛+1 − 𝑥𝑛 , (61) and hence 𝑢𝑛+1 − 𝑢𝑛 = 𝑃𝐶 (𝑢𝑛+1 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (̃ 𝑢𝑛+1 )) 𝑢𝑛 )) −𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ ≤ (𝑢𝑛+1 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (̃ 𝑢𝑛+1 )) − (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 )) = (𝐼 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 ) 𝑢𝑛+1 − (𝐼 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 ) 𝑢𝑛 + 𝜆 𝑛+1 (∇𝑓𝛼𝑛+1 (𝑢𝑛+1 ) − ∇𝑓𝛼𝑛+1 (𝑢𝑛 )) −𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (̃ 𝑢𝑛+1 ) + 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) ≤ (𝐼 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 ) 𝑢𝑛+1 − (𝐼 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 ) 𝑢𝑛 + 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (𝑢𝑛+1 ) − ∇𝑓𝛼𝑛+1 (𝑢𝑛 ) + 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (̃ 𝑢𝑛+1 ) − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) ≤ 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 𝐼 + ∇𝑓) (̃ 𝑢𝑛+1 ) − 𝜆 𝑛 (𝛼𝑛 𝐼 + ∇𝑓) (̃ 𝑢𝑛 ) ≤ 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 𝑢̃𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢̃𝑛 + 𝜆 𝑛+1 ∇𝑓 (̃ 𝑢𝑛+1 ) − 𝜆 𝑛 ∇𝑓 (̃ 𝑢𝑛 ) ≤ 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 𝑢̃𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢̃𝑛 𝑢𝑛 ) 𝑢𝑛+1 ) − ∇𝑓 (̃ + 𝜆 𝑛+1 ∇𝑓 (̃ 𝑢𝑛 ) + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (̃ Abstract and Applied Analysis 11 𝛾𝑛 𝛾 = 𝑦𝑛+1 − 𝑦𝑛 + 𝑛+1 − (𝑦 + 𝑆𝑦 ) 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝑛 ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) (𝑢𝑛+1 − 𝑢𝑛 + 𝑢̃𝑛+1 − 𝑢̃𝑛 ) 𝑢𝑛 ) + 𝜆 𝑛+1 𝛼𝑛+1 𝑢̃𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢̃𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (̃ + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 𝛾 𝛾𝑛 + 𝑛+1 − (𝑦 + 𝑆𝑦 ) . 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝑛 ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 𝑢̃𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢̃𝑛 𝑢𝑛 ) + 𝜆 𝑛+1 ‖𝐴‖2 𝑢̃𝑛+1 − 𝑢̃𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (̃ ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) × (𝑢𝑛+1 − 𝑢𝑛 + 𝑢̃𝑛+1 − 𝑢̃𝑛 ) + 𝜆 𝑛+1 𝛼𝑛+1 𝑢̃𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢̃𝑛 𝑢𝑛 ) . + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (̃ (65) (62) Combining (60) with (61), we get ̃ 𝑢𝑛+1 − 𝑢̃𝑛 ≤ 𝑢𝑛+1 − 𝑢𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑢𝑛 ) + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢𝑛 ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑢𝑛 ) + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢𝑛 . From (60), we deduce from condition (vi) that lim𝑛 → ∞ ‖̃ 𝑢𝑛+1 − 𝑢̃𝑛 ‖ = 0. Since {𝑥𝑛 }, {𝑢𝑛 }, {̃ 𝑢𝑛 }, {𝑦𝑛 } and {𝑢𝑛 } are bounded, it follows from conditions (i), (iii), (v), and (vi) that (63) lim sup (𝑤𝑛+1 − 𝑤𝑛 − 𝑥𝑛+1 − 𝑥𝑛 ) 𝑛→∞ ≤ lim sup {𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) 𝑛→∞ × (𝑢𝑛+1 − 𝑢𝑛 + 𝑢̃𝑛+1 − 𝑢̃𝑛 ) This together with (62) implies that 𝑦𝑛+1 − 𝑦𝑛 = 𝑢𝑛+1 + 𝜎𝑛+1 (𝑄𝑥𝑛+1 − 𝑢𝑛+1 ) −𝑢𝑛 − 𝜎𝑛 (𝑄𝑥𝑛 − 𝑢𝑛 ) ≤ 𝑢𝑛+1 − 𝑢𝑛 + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) (𝑢𝑛+1 − 𝑢𝑛 + 𝑢̃𝑛+1 − 𝑢̃𝑛 ) 𝑢𝑛 ) + 𝜆 𝑛+1 𝛼𝑛+1 𝑢̃𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢̃𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (̃ + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 . (64) Hence it follows from (58), (59), and (64) that 𝑤𝑛+1 − 𝑤𝑛 𝛾 (𝑦 − 𝑦𝑛 ) + 𝛿𝑛+1 (𝑆𝑦𝑛+1 − 𝑆𝑦𝑛 ) ≤ 𝑛+1 𝑛+1 1 − 𝛽𝑛+1 𝛾 𝛾𝑛 𝛿𝑛+1 𝛿𝑛 + 𝑛+1 − − 𝑦𝑛 + 𝑆𝑦𝑛 1 − 𝛽𝑛+1 1 − 𝛽𝑛 1 − 𝛽𝑛+1 1 − 𝛽𝑛 ≤ 𝛾𝑛+1 + 𝛿𝑛+1 𝑦 − 𝑦𝑛 1 − 𝛽𝑛+1 𝑛+1 𝛾 𝛾𝑛 + 𝑛+1 − (𝑦𝑛 + 𝑆𝑦𝑛 ) 1 − 𝛽𝑛+1 1 − 𝛽𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 𝑢̃𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑢̃𝑛 𝑢𝑛 ) + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (̃ + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 𝛾 𝛾𝑛 + 𝑛+1 − (𝑦 + 𝑆𝑦 )} = 0. 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝑛 (66) Hence by Lemma 14 we get lim𝑛 → ∞ ‖𝑤𝑛 − 𝑥𝑛 ‖ = 0. Thus, lim 𝑥 𝑛 → ∞ 𝑛+1 − 𝑥𝑛 = lim (1 − 𝛽𝑛 ) 𝑤𝑛 − 𝑥𝑛 = 0. 𝑛→∞ (67) Step 3. 
lim𝑛 → ∞ ‖𝐵2 𝑥𝑛 − 𝐵2 𝑝‖ = 0, lim𝑛 → ∞ ‖𝐵1 𝑥̃𝑛 − 𝐵1 𝑞‖ = 0, and lim𝑛 → ∞ ‖𝑢𝑛 − 𝑢̃𝑛 ‖ = 0, where 𝑞 = 𝑃𝐶(𝑝 − 𝜇2 𝐵2 𝑝). Indeed, utilizing Lemma 17 and the convexity of ‖ ⋅ ‖2 , we obtain from (17), (48), and (51) that 2 𝑥𝑛+1 − 𝑝 2 = 𝛽𝑛 (𝑥𝑛 − 𝑝) + 𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝) 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 1 2 + (𝛾𝑛 + 𝛿𝑛 ) [𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝)] 𝛾𝑛 + 𝛿𝑛 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 𝑦𝑛 − 𝑝 12 Abstract and Applied Analysis 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 Indeed, observe that 2 2 + (𝛾𝑛 + 𝛿𝑛 ) [𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (1 − 𝜎𝑛 ) 𝑢𝑛 − 𝑝 ] 𝑢𝑛 − 𝑢̃𝑛 = 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 )) − 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 )) ≤ (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 )) − (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 )) (71) = 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ) − ∇𝑓𝛼𝑛 (𝑢𝑛 ) ≤ 𝜆 𝑛 (𝛼𝑛 + ‖𝐴‖2 ) 𝑢̃𝑛 − 𝑢𝑛 . 2 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 𝑢𝑛 − 𝑝 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 2 + (𝛾𝑛 + 𝛿𝑛 ) [𝑢𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 2 2 + (𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) − 1) 𝑢𝑛 − 𝑢̃𝑛 ] 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 2 2 × [𝑥𝑛 − 𝑝 − 𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 2 − 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 2 2 + (𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) − 1) 𝑢𝑛 − 𝑢̃𝑛 ] 2 2 ≤ 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 − (𝛾𝑛 + 𝛿𝑛 ) This together with ‖̃ 𝑢𝑛 − 𝑢𝑛 ‖ → 0 implies that lim𝑛 → ∞ ‖𝑢𝑛 − 𝑢̃𝑛 ‖ = 0 and hence lim𝑛 → ∞ ‖𝑢𝑛 − 𝑢𝑛 ‖ = 0. By firm nonexpansiveness of 𝑃𝐶, we have ̃ 2 𝑥𝑛 − 𝑞 2 = 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝) 2 × [𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 ≤ ⟨(𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − (𝑝 − 𝜇2 𝐵2 𝑝) , 𝑥̃𝑛 − 𝑞⟩ 2 + 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 + (1 − 𝜆2𝑛 (𝛼𝑛 = 2 + ‖𝐴‖ ) ) 𝑢𝑛 − 𝑢̃𝑛 ] 2 2 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 . (68) ≤ Therefore, 2 (𝛾𝑛 + 𝛿𝑛 ) [𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 = 2 + 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 Step 4. lim𝑛 → ∞ ‖𝑆𝑦𝑛 − 𝑦𝑛 ‖ = 0. 1 2 2 2 [𝑥 −𝑝 + 𝑥̃ −𝑞 − 𝑥 − 𝑥̃ − (𝑝−𝑞) 2 𝑛 𝑛 𝑛 𝑛 2 −𝜇22 𝐵2 𝑥𝑛 − 𝐵2 𝑝 ] 2 2 2 ≤ 𝑥𝑛 − 𝑝 − 𝑥𝑛+1 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 2 ≤ (𝑥𝑛 − 𝑝 + 𝑥𝑛+1 − 𝑝) 𝑥𝑛 − 𝑥𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 . (69) 𝑛→∞ 1 2 2 [𝑥 − 𝑝 + 𝑥̃𝑛 − 𝑞 2 𝑛 2 −(𝑥𝑛 − 𝑥̃𝑛 ) − 𝜇2 (𝐵2 𝑥𝑛 − 𝐵2 𝑝) − (𝑝 − 𝑞) ] + 2𝜇2 ⟨𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) , 𝐵2 𝑥𝑛 − 𝐵2 𝑝⟩ 2 2 + (1 − 𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) ) 𝑢𝑛 − 𝑢̃𝑛 ] Since 𝛼𝑛 → 0, 𝜎𝑛 → 0, ‖𝑥𝑛 − 𝑥𝑛+1 ‖ → 0, lim inf 𝑛 → ∞ (𝛾𝑛 + 𝛿𝑛 ) > 0 and {𝜆 𝑛 } ⊂ [𝑎, 𝑏] for some 𝑎, 𝑏 ∈ (0, 1/‖𝐴‖2 ), it follows that lim 𝐵 𝑥̃ − 𝐵1 𝑞 = 0, lim 𝑢 − 𝑢̃𝑛 = 0, 𝑛→∞ 𝑛 𝑛→∞ 1 𝑛 (70) lim 𝐵2 𝑥𝑛 − 𝐵2 𝑝 = 0. 1 2 2 [𝑥 − 𝑝 − 𝜇2 (𝐵2 𝑥𝑛 − 𝐵2 𝑝) + 𝑥̃𝑛 − 𝑞 2 𝑛 2 −(𝑥𝑛 − 𝑝) − 𝜇2 (𝐵2 𝑥𝑛 − 𝐵2 𝑝) − (𝑥̃𝑛 − 𝑞) ] ≤ 1 2 2 [𝑥𝑛 − 𝑝 + 𝑥̃𝑛 − 𝑞 2 2 − 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) +2𝜇2 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑥𝑛 − 𝐵2 𝑝] , (72) that is, ̃ 2 2 2 𝑥𝑛 − 𝑞 ≤ 𝑥𝑛 − 𝑝 − 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) (73) + 2𝜇2 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 . Abstract and Applied Analysis 13 Moreover, using the argument technique similar to the above one, we derive + 2𝜇1 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 + 2 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑝 . 
≤ ⟨(𝑥̃𝑛 − 𝜇1 𝐵1 𝑥̃𝑛 ) − (𝑞 − 𝜇1 𝐵1 𝑞) , 𝑢𝑛 − 𝑝⟩ 1 2 2 = [𝑥̃𝑛 − 𝑞 − 𝜇1 (𝐵1 𝑥̃𝑛 − 𝐵1 𝑞) + 𝑢𝑛 − 𝑝 2 2 −(𝑥̃𝑛 − 𝑞) − 𝜇1 (𝐵1 𝑥̃𝑛 − 𝐵1 𝑞) − (𝑢𝑛 − 𝑝) ] 1 2 2 ≤ [𝑥̃𝑛 − 𝑞 + 𝑢𝑛 − 𝑝 2 2 −(𝑥̃𝑛 − 𝑢𝑛 ) − 𝜇1 (𝐵1 𝑥̃𝑛 − 𝐵1 𝑞) + (𝑝 − 𝑞) ] (76) Thus from (17) and (76) it follows that 2 𝑥𝑛+1 − 𝑝 2 = 𝛽𝑛 (𝑥𝑛 − 𝑝) + 𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝) 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 𝑦𝑛 − 𝑝 1 2 2 2 [𝑥̃ − 𝑞 + 𝑢𝑛 − 𝑝 − 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 2 𝑛 2 2 = 𝛽𝑛 𝑥𝑛 − 𝑝 + (1 − 𝛽𝑛 ) 𝑦𝑛 − 𝑝 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 2𝜇1 ⟨𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) , 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞⟩ 2 2 + (1 − 𝛽𝑛 ) [𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (1 − 𝜎𝑛 ) 𝑢𝑛 − 𝑝 ] 2 −𝜇12 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 ] ≤ + 2𝜇2 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 2 − 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 2 𝑢𝑛 − 𝑝 2 = 𝑃𝐶 (𝑥̃𝑛 − 𝜇1 𝐵1 𝑥̃𝑛 ) − 𝑃𝐶 (𝑞 − 𝜇1 𝐵1 𝑞) = 2 2 ≤ 𝑥𝑛 − 𝑝 − 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 1 2 2 2 [𝑥̃ − 𝑞 + 𝑢𝑛 − 𝑝 − 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 2 𝑛 +2𝜇1 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 ] , 2 + (1 − 𝛽𝑛 ) 𝑢𝑛 − 𝑝 (74) 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 2 2 + (1 − 𝛽𝑛 ) [𝑥𝑛 − 𝑝 − 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) that is, 2 2 2 𝑢𝑛 − 𝑝 ≤ 𝑥̃𝑛 − 𝑞 − 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) (75) + 2𝜇1 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 . Utilizing (46), (73), and (75), we have 2 𝑢𝑛 − 𝑝 2 = 𝑢̃𝑛 − 𝑝 + 𝑢𝑛 − 𝑢̃𝑛 2 ≤ 𝑢̃𝑛 − 𝑝 + 2 ⟨𝑢𝑛 − 𝑢̃𝑛 , 𝑢𝑛 − 𝑝⟩ 2 ≤ 𝑢̃𝑛 − 𝑝 + 2 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑝 2 ≤ 𝑢𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 + 2 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑝 2 2 ≤ 𝑥̃𝑛 − 𝑞 − 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) + 2𝜇1 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 + 2 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑝 + 2𝜇2 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 2 − 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) + 2𝜇1 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 +2 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑝 ] , (77) which hence implies that 2 2 (1−𝛽𝑛 ) [𝑥𝑛 − 𝑥̃𝑛 −(𝑝−𝑞) + 𝑥̃𝑛 −𝑢𝑛 +(𝑝−𝑞) ] 2 2 2 ≤ 𝑥𝑛 − 𝑝 − 𝑥𝑛+1 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (1 − 𝛽𝑛 ) [2𝜇2 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 + 2𝜇1 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 + 2 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑝 ] 14 Abstract and Applied Analysis ≤ (𝑥𝑛 − 𝑝 + 𝑥𝑛+1 − 𝑝) show that 𝑝̂ ∈ Γ. As a matter of fact, since ‖𝑥𝑛 − 𝑢𝑛 ‖ → 0, ‖̃ 𝑢𝑛 −𝑢𝑛 ‖ → 0 and ‖𝑥𝑛 −𝑦𝑛 ‖ → 0, we deduce that 𝑥𝑛𝑖 → 𝑝̂ weakly and 𝑢̃𝑛𝑖 → 𝑝̂ weakly. Let 2 × 𝑥𝑛 − 𝑥𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 2𝜇2 𝑥𝑛 − 𝑥̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑥𝑛 − 𝐵2 𝑝 + 2𝜇1 𝑥̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑥̃𝑛 − 𝐵1 𝑞 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑢̃𝑛 − 𝑝 + 2 𝑢𝑛 − 𝑢̃𝑛 𝑢𝑛 − 𝑝 . (78) Since lim sup𝑛 → ∞ 𝛽𝑛 < 1, {𝜆 𝑛 } ⊂ [𝑎, 𝑏], 𝛼𝑛 → 0, 𝜎𝑛 → 0, ‖𝐵2 𝑥𝑛 − 𝐵2 𝑝‖ → 0, ‖𝐵1 𝑥̃𝑛 − 𝐵1 𝑞‖ → 0, ‖𝑢𝑛 − 𝑢̃𝑛 ‖ → 0 and ‖𝑥𝑛+1 − 𝑥𝑛 ‖ → 0, it follows from the boundedness of 𝑢𝑛 }, and {𝑢𝑛 } that {𝑥𝑛 }, {𝑥̃𝑛 }, {𝑢𝑛 }, {̃ lim 𝑥 − 𝑥̃𝑛 − (𝑝 − 𝑞) = 0, 𝑛→∞ 𝑛 lim 𝑥̃ − 𝑢𝑛 + (𝑝 − 𝑞) = 0. 𝑛→∞ 𝑛 lim 𝑥 − 𝑢𝑛 = 0. 𝑛→∞ 𝑛 𝑛→∞ 𝑤 − ∇𝑓 (𝑣) ∈ 𝑁𝐶𝑣. (88) So, we have ⟨𝑣 − 𝑢, 𝑤 − ∇𝑓 (𝑣)⟩ ≥ 0, ∀𝑢 ∈ 𝐶. 𝑢̃𝑛 = 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 )) , (89) 𝑣 ∈ 𝐶, (90) ⟨𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 ) − 𝑢̃𝑛 , 𝑢̃𝑛 − 𝑣⟩ ≥ 0, (91) 𝑢̃𝑛 − 𝑢𝑛 + ∇𝑓𝛼𝑛 (𝑢𝑛 )⟩ ≥ 0. 𝜆𝑛 (92) we have (80) and, hence, Since lim 𝑆𝑦 − 𝑦𝑛 = 0. 𝑛→∞ 𝑛 (87) On the other hand, from 𝑛→∞ it follows that lim 𝑆𝑦𝑛 − 𝑥𝑛 = 0, 𝑤 ∈ 𝑇𝑣 = ∇𝑓 (𝑣) + 𝑁𝐶𝑣, and hence (79) This together with ‖𝑦𝑛 − 𝑢𝑛 ‖ ≤ 𝜎𝑛 ‖𝑄𝑥𝑛 − 𝑢𝑛 ‖ → 0 implies that lim 𝑥𝑛 − 𝑦𝑛 = 0. (81) 𝛿𝑛 (𝑆𝑦𝑛 − 𝑥𝑛 ) ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝛾𝑛 𝑦𝑛 − 𝑥𝑛 , (86) where 𝑁𝐶𝑣 = {𝑤 ∈ H1 : ⟨𝑣 − 𝑢, 𝑤⟩ ≥ 0, ∀𝑢 ∈ 𝐶}. Then, 𝑇 is maximal monotone and 0 ∈ 𝑇𝑣 if and only if 𝑣 ∈ VI(𝐶, ∇𝑓); see [40] for more details. Let (𝑣, 𝑤) ∈ Gph(𝑇). Then, we have Consequently, it immediately follows that lim 𝑥 − 𝑢𝑛 = 0, 𝑛→∞ 𝑛 ∇𝑓 (𝑣) + 𝑁𝐶𝑣, if 𝑣 ∈ 𝐶, 0, if 𝑣 ∉ 𝐶, 𝑇𝑣 = { ⟨𝑣 − 𝑢̃𝑛 , Therefore, from 𝑤 − ∇𝑓 (𝑣) ∈ 𝑁𝐶𝑣, 𝑢̃𝑛𝑖 ∈ 𝐶, (93) we have (82) ⟨𝑣 − 𝑢̃𝑛𝑖 , 𝑤⟩ ≥ ⟨𝑣 − 𝑢̃𝑛𝑖 , ∇𝑓 (𝑣)⟩ ≥ ⟨𝑣 − 𝑢̃𝑛𝑖 , ∇𝑓 (𝑣)⟩ (83) Step 5. lim sup𝑛 → ∞ ⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ ≤ 0 where 𝑥 = 𝑃Fix(𝑆)∩Ξ∩Γ 𝑄𝑥. Indeed, since {𝑥𝑛 } is bounded, there exists a subsequence {𝑥𝑛𝑖 } of {𝑥𝑛 } such that lim sup ⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ = lim ⟨𝑄𝑥 − 𝑥, 𝑥𝑛𝑖 − 𝑥⟩ . 
(84) 𝑖→∞ 𝑛→∞ Also, since 𝐻 is reflexive and {𝑦𝑛 } is bounded, without loss of generality we may assume that 𝑦𝑛𝑖 → 𝑝̂ weakly for some 𝑝̂ ∈ 𝐶. First, it is clear from Lemma 15 that 𝑝̂ ∈ Fix(𝑆). Now let us show that 𝑝̂ ∈ Ξ. We note that 𝑥𝑛 − 𝐺 (𝑥𝑛 ) = 𝑥𝑛 − 𝑃𝐶 [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] = 𝑥𝑛 − 𝑢𝑛 → 0 (𝑛 → ∞) , (85) where 𝐺 : 𝐶 → 𝐶 is defined as such that in Lemma 4. According to Lemma 15 we obtain 𝑝̂ ∈ Ξ. Further, let us − ⟨𝑣 − 𝑢̃𝑛𝑖 , 𝑢̃𝑛𝑖 − 𝑢𝑛𝑖 𝜆 𝑛𝑖 + ∇𝑓𝛼𝑛 (𝑢𝑛𝑖 )⟩ 𝑖 = ⟨𝑣 − 𝑢̃𝑛𝑖 , ∇𝑓 (𝑣)⟩ − ⟨𝑣 − 𝑢̃𝑛𝑖 , 𝑢̃𝑛𝑖 − 𝑢𝑛𝑖 𝜆 𝑛𝑖 + ∇𝑓 (𝑢𝑛𝑖 )⟩ − 𝛼𝑛𝑖 ⟨𝑣 − 𝑢̃𝑛𝑖 , 𝑢𝑛𝑖 ⟩ 𝑢𝑛𝑖 )⟩ = ⟨𝑣 − 𝑢̃𝑛𝑖 , ∇𝑓 (𝑣) − ∇𝑓 (̃ 𝑢𝑛𝑖 ) − ∇𝑓 (𝑢𝑛𝑖 )⟩ + ⟨𝑣 − 𝑢̃𝑛𝑖 , ∇𝑓 (̃ − ⟨𝑣 − 𝑢̃𝑛𝑖 , 𝑢̃𝑛𝑖 − 𝑢𝑛𝑖 𝜆 𝑛𝑖 ⟩ − 𝛼𝑛𝑖 ⟨𝑣 − 𝑢̃𝑛𝑖 , 𝑢𝑛𝑖 ⟩ ≥ ⟨𝑣 − 𝑢̃𝑛𝑖 , ∇𝑓 (̃ 𝑢𝑛𝑖 ) − ∇𝑓 (𝑢𝑛𝑖 )⟩ − ⟨𝑣 − 𝑢̃𝑛𝑖 , 𝑢̃𝑛𝑖 − 𝑢𝑛𝑖 𝜆 𝑛𝑖 ⟩ − 𝛼𝑛𝑖 ⟨𝑣 − 𝑢̃𝑛𝑖 , 𝑢𝑛𝑖 ⟩ . (94) Abstract and Applied Analysis 15 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑥 Hence, we get ̂ 𝑤⟩ ≥ 0, ⟨𝑣 − 𝑝, as 𝑖 → ∞. (95) Since 𝑇 is maximal monotone, we have 𝑝̂ ∈ 𝑇−1 0, and, hence, 𝑝̂ ∈ VI(𝐶, ∇𝑓). Thus it is clear that 𝑝̂ ∈ Γ. Therefore, 𝑝̂ ∈ Fix(𝑆) ∩ Ξ ∩ Γ. Consequently, in terms of Proposition 8(i) we obtain from (84) that lim sup ⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ + (𝛾𝑛 + 𝛿𝑛 ) [ (1 − 𝜎𝑛 ) 2 × (𝑥𝑛 − 𝑥 + 2𝜆 𝑛 𝛼𝑛 ‖𝑥‖ 𝑢̃𝑛 − 𝑥) +2𝜎𝑛 ⟨𝑄𝑥𝑛 − 𝑥, 𝑦𝑛 − 𝑥⟩ ] 2 = (1 − (𝛾𝑛 + 𝛿𝑛 ) 𝜎𝑛 ) 𝑥𝑛 − 𝑥 + (𝛾𝑛 + 𝛿𝑛 ) 2𝜎𝑛 ⟨𝑄𝑥𝑛 − 𝑥, 𝑦𝑛 − 𝑥⟩ 𝑛→∞ = lim ⟨𝑄𝑥 − 𝑥, 𝑥𝑛𝑖 − 𝑥⟩ = ⟨𝑄𝑥 − 𝑥, 𝑝̂ − 𝑥⟩ ≤ 0. (96) 𝑖→∞ 2 ≤ (1 − (𝛾𝑛 + 𝛿𝑛 ) 𝜎𝑛 ) 𝑥𝑛 − 𝑥 Step 6. lim𝑛 → ∞ ‖𝑥𝑛 − 𝑥‖ = 0. Indeed, from (48) and (51) it follows that 2 2 𝑢𝑛 − 𝑥 ≤ 𝑢𝑛 − 𝑥 + 2𝜆 𝑛 𝛼𝑛 ‖𝑥‖ 𝑢̃𝑛 − 𝑥 2 ≤ 𝑥𝑛 − 𝑥 + 2𝜆 𝑛 𝛼𝑛 ‖𝑥‖ 𝑢̃𝑛 − 𝑥 . + (𝛾𝑛 + 𝛿𝑛 ) 2𝜆 𝑛 𝛼𝑛 ‖𝑥‖ 𝑢̃𝑛 − 𝑥 + (𝛾𝑛 + 𝛿𝑛 ) 2𝜎𝑛 ⟨𝑄𝑥𝑛 − 𝑥, 𝑦𝑛 − 𝑥⟩ + 2𝜆 𝑛 𝛼𝑛 ‖𝑥‖ 𝑢̃𝑛 − 𝑥 (97) 2 ≤ (1 − (𝛾𝑛 + 𝛿𝑛 ) 𝜎𝑛 ) 𝑥𝑛 − 𝑥 + (𝛾𝑛 + 𝛿𝑛 ) 2𝜎𝑛 2 × [𝜌𝑥𝑛 − 𝑥 + ⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ + 𝑄𝑥𝑛 − 𝑥 𝑦𝑛 − 𝑥𝑛 ] Note that + 2𝜆 𝑛 𝛼𝑛 ‖𝑥‖ 𝑢̃𝑛 − 𝑥 ⟨𝑄𝑥𝑛 − 𝑥, 𝑦𝑛 − 𝑥⟩ = ⟨𝑄𝑥𝑛 − 𝑥, 𝑥𝑛 − 𝑥⟩ + ⟨𝑄𝑥𝑛 − 𝑥, 𝑦𝑛 − 𝑥𝑛 ⟩ 2 = [1 − (1 − 2𝜌) (𝛾𝑛 + 𝛿𝑛 ) 𝜎𝑛 ] 𝑥𝑛 − 𝑥 + (𝛾𝑛 + 𝛿𝑛 ) 2𝜎𝑛 = ⟨𝑄𝑥𝑛 − 𝑄𝑥, 𝑥𝑛 − 𝑥⟩ + ⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ × [⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ + 𝑄𝑥𝑛 − 𝑥 𝑦𝑛 − 𝑥𝑛 ] (98) + ⟨𝑄𝑥𝑛 − 𝑥, 𝑦𝑛 − 𝑥𝑛 ⟩ 2 ≤ 𝜌𝑥𝑛 − 𝑥 + ⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ + 𝑄𝑥𝑛 − 𝑥 𝑦𝑛 − 𝑥𝑛 . Utilizing Lemmas 17 and 18, we obtain from (48) and the convexity of ‖ ⋅ ‖2 2 𝑥𝑛+1 − 𝑥 2 = 𝛽𝑛 (𝑥𝑛 − 𝑥) + 𝛾𝑛 (𝑦𝑛 − 𝑥) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑥) 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑥 1 2 + (𝛾𝑛 + 𝛿𝑛 ) [𝛾𝑛 (𝑦𝑛 − 𝑥) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑥)] 𝛾𝑛 + 𝛿𝑛 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑥 + (𝛾𝑛 + 𝛿𝑛 ) 𝑦𝑛 − 𝑥 2 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑥 + (𝛾𝑛 + 𝛿𝑛 ) [(1 − 𝜎𝑛 ) 𝑢𝑛 − 𝑥 + 2𝜎𝑛 ⟨𝑄𝑥𝑛 − 𝑥, 𝑦𝑛 − 𝑥⟩ ] + 2𝜆 𝑛 𝛼𝑛 ‖𝑥‖ 𝑢̃𝑛 − 𝑥 2 = [1 − (1 − 2𝜌) (𝛾𝑛 + 𝛿𝑛 ) 𝜎𝑛 ] 𝑥𝑛 − 𝑥 + (1 − 2𝜌) 2 [⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ + 𝑄𝑥𝑛 − 𝑥 𝑦𝑛 − 𝑥𝑛 ] 1 − 2𝜌 + 2𝜆 𝑛 𝛼𝑛 ‖𝑥‖ 𝑢̃𝑛 − 𝑥 . × 𝜎𝑛 (99) Note that lim inf 𝑛 → ∞ (1 − 2𝜌)(𝛾𝑛 + 𝛿𝑛 ) > 0. It follows that ∑∞ 𝑛=0 (1 − 2𝜌)(𝛾𝑛 + 𝛿𝑛 )𝜎𝑛 = ∞. It is clear that 2 [⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ + 𝑄𝑥𝑛 − 𝑥 𝑦𝑛 − 𝑥𝑛 ] lim sup ≤ 0, 1 − 2𝜌 𝑛→∞ (100) because lim sup𝑛 → ∞ ⟨𝑄𝑥 − 𝑥, 𝑥𝑛 − 𝑥⟩ ≤ 0 and lim𝑛 → ∞ ‖𝑥𝑛 − 𝑦𝑛 ‖ = 0. In addition, note also that {𝜆 𝑛 } ⊂ [𝑎, 𝑏], ∑∞ 𝑛=0 𝛼𝑛 < 2𝜆 𝛼 ‖𝑥‖‖̃ 𝑢𝑛 − ∞ and {̃ 𝑢𝑛 } is bounded. Hence we get ∑∞ 𝑛 𝑛 𝑛=0 𝑥‖ < ∞. Therefore, all conditions of Lemma 16 are satisfied. Consequently, we immediately deduce that ‖𝑥𝑛 − 𝑥‖ → 0 as 𝑛 → ∞. This completes the proof. Corollary 20. Let 𝐶 be a nonempty closed convex subset of a real Hilbert space H1 . Let 𝐴 ∈ 𝐵(H1 , H2 ) and 𝐵𝑖 : 𝐶 → H1 be 𝛽𝑖 -inverse strongly monotone for 𝑖 = 1, 2. Let 𝑆 : 𝐶 → 𝐶 16 Abstract and Applied Analysis be a 𝑘-strictly pseudo-contractive mapping such that Fix (𝑆) ∩ Ξ ∩ Γ ≠ 0. For fixed 𝑢 ∈ 𝐶 and given 𝑥0 ∈ 𝐶 arbitrarily, let the 𝑢𝑛 } be generated iteratively by sequences {𝑥𝑛 }, {𝑢𝑛 }, {̃ Proof. In Corollary 20, put 𝐵1 = 𝐵2 = 0 and 𝛾𝑛 = 0. 
Then, Ξ = 𝐶, 𝛽𝑛 + 𝛿𝑛 = 1 for all 𝑛 ≥ 0, and the iterative scheme (101) is equivalent to 𝑢𝑛 = 𝑃𝐶 [𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 ) − 𝜇1 𝐵1 𝑃𝐶 (𝑥𝑛 − 𝜇2 𝐵2 𝑥𝑛 )] , 𝑢𝑛 = 𝑥𝑛 , 𝑢̃𝑛 = 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 )) , (101) 𝑢𝑛 )) , 𝑦𝑛 = 𝜎𝑛 𝑢 + (1 − 𝜎𝑛 ) 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑥𝑛+1 = 𝛽𝑛 𝑥𝑛 + 𝛾𝑛 𝑦𝑛 + 𝛿𝑛 𝑆𝑦𝑛 , 𝑢𝑛 )) , 𝑦𝑛 = 𝜎𝑛 𝑢 + (1 − 𝜎𝑛 ) 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ ∀𝑛 ≥ 0, 𝑢𝑛+1 = 𝛽𝑛 𝑥𝑛 + 𝛿𝑛 𝑆𝑦𝑛 , where 𝜇𝑖 ∈ (0, 2𝛽𝑖 ) for 𝑖 = 1, 2, and the following conditions hold for six sequences {𝛼𝑛 } ⊂ (0, ∞), {𝜆 𝑛 } ⊂ (0, 1/‖𝐴‖2 ) and {𝜎𝑛 }, {𝛽𝑛 }, {𝛾𝑛 }, {𝛿𝑛 } ⊂ [0, 1]: (i) ∑∞ 𝑛=0 𝛼𝑛 < ∞; (ii) 𝛽𝑛 + 𝛾𝑛 + 𝛿𝑛 = 1 and (𝛾𝑛 + 𝛿𝑛 )𝑘 ≤ 𝛾𝑛 for all 𝑛 ≥ 0; (iii) lim𝑛 → ∞ 𝜎𝑛 = 0 and ∑∞ 𝑛=0 𝜎𝑛 = ∞; (iv) 0 < lim inf 𝑛 → ∞ 𝛽𝑛 ≤ lim sup𝑛 → ∞ 𝛽𝑛 < 1 and lim inf 𝑛 → ∞ 𝛿𝑛 > 0; (v) lim𝑛 → ∞ (𝛾𝑛+1 /(1 − 𝛽𝑛+1 ) − 𝛾𝑛 /(1 − 𝛽𝑛 )) = 0; 2 (vi) 0 < lim inf 𝑛 → ∞ 𝜆 𝑛 ≤ lim sup𝑛 → ∞ 𝜆 𝑛 < 1/‖𝐴‖ and lim𝑛 → ∞ |𝜆 𝑛+1 − 𝜆 𝑛 | = 0. Then the sequences {𝑥𝑛 }, {𝑢𝑛 }, {̃ 𝑢𝑛 } converge strongly to the same point 𝑥 = 𝑃 Fix (𝑆)∩Ξ∩Γ 𝑢 if and only if lim𝑛 → ∞ ‖𝑢𝑛+1 − 𝑢𝑛 ‖ = 0. Furthermore, (𝑥, 𝑦) is a solution of the GSVI (14), where 𝑦 = 𝑃𝐶(𝑥 − 𝜇2 𝐵2 𝑥). Next, utilizing Corollary 20 we give the following improvement and extension of the main result in [18] (i.e., [18, Theorem 3.1]). Corollary 21. Let 𝐶 be a nonempty closed convex subset of a real Hilbert space H1 . Let 𝐴 ∈ 𝐵(H1 , H2 ) and 𝑆 : 𝐶 → 𝐶 be a nonexpansive mapping such that Fix (𝑆) ∩ Γ ≠ 0. For fixed 𝑢𝑛 } 𝑢 ∈ 𝐶 and given 𝑥0 ∈ 𝐶 arbitrarily, let the sequences {𝑢𝑛 }, {̃ be generated iteratively by (103) ∀𝑛 ≥ 0. This is is equivalent to (102). Since 𝑆 is a nonexpansive mapping, 𝑆 must be a 𝑘-strictly pseudo-contractive mapping with 𝑘 = 0. In this case, it is easy to see that conditions (i)– (vi) in Corollary 20 all are satisfied. Therefore, in terms of Corollary 20, we obtain the desired result. Now, we are in a position to prove the strong convergence of the sequences generated by the relaxed extragradient iterative algorithm (18) with regularization. Theorem 22. Let 𝐶 be a nonempty closed convex subset of a real Hilbert space H1 . Let 𝐴 ∈ 𝐵(H1 , H2 ) and 𝐵𝑖 : 𝐶 → H1 be 𝛽𝑖 -inverse strongly monotone for 𝑖 = 1, 2. Let 𝑆 : 𝐶 → 𝐶 be a 𝑘-strictly pseudocontractive mapping such that Fix (𝑆) ∩ Ξ ∩ Γ ≠ 0. Let 𝑄 : 𝐶 → 𝐶 be a 𝜌-contraction with 𝜌 ∈ [0, 1/2). For given 𝑥0 ∈ 𝐶 arbitrarily, let the sequences {𝑥𝑛 }, {𝑦𝑛 }, {𝑧𝑛 } be generated by the relaxed extragradient iterative algorithm (18) with regularization, where 𝜇𝑖 ∈ (0, 2𝛽𝑖 ) for 𝑖 = 1, 2, {𝛼𝑛 } ⊂ (0, ∞), {𝜆 𝑛 } ⊂ (0, 1/‖𝐴‖2 ) and {𝜎𝑛 }, {𝜏𝑛 }, {𝛽𝑛 }, {𝛾𝑛 }, {𝛿𝑛 } ⊂ [0, 1] such that (i) ∑∞ 𝑛=0 𝛼𝑛 < ∞; (ii) 𝜎𝑛 + 𝜏𝑛 ≤ 1, 𝛽𝑛 + 𝛾𝑛 + 𝛿𝑛 = 1 and (𝛾𝑛 + 𝛿𝑛 )𝑘 ≤ 𝛾𝑛 for all 𝑛 ≥ 0; (iii) lim𝑛 → ∞ 𝜎𝑛 = 0 and ∑∞ 𝑛=0 𝜎𝑛 = ∞; (iv) 0 < lim inf 𝑛 → ∞ 𝜏𝑛 ≤ lim sup𝑛 → ∞ 𝜏𝑛 < 1 and lim𝑛 → ∞ |𝜏𝑛+1 − 𝜏𝑛 | = 0; (v) 0 < lim inf 𝑛 → ∞ 𝛽𝑛 ≤ lim sup𝑛 → ∞ 𝛽𝑛 < 1 and lim inf 𝑛 → ∞ 𝛿𝑛 > 0; 𝑢̃𝑛 = 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 )) , (vi) lim𝑛 → ∞ (𝛾𝑛+1 /(1 − 𝛽𝑛+1 ) − 𝛾𝑛 /(1 − 𝛽𝑛 )) = 0; 𝑢𝑛+1 = 𝛽𝑛 𝑢𝑛 + (1 − 𝛽𝑛 ) (102) × 𝑆 [𝜎𝑛 𝑢 + (1 − 𝜎𝑛 ) 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (̃ 𝑢𝑛 ))] , 𝑢̃𝑛 = 𝑃𝐶 (𝑢𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑢𝑛 )) , ∀𝑛 ≥ 0, where the following conditions hold for four sequences {𝛼𝑛 } ⊂ (0, ∞), {𝜆 𝑛 } ⊂ (0, 1/‖𝐴‖2 ) and {𝜎𝑛 }, {𝛽𝑛 } ⊂ [0, 1]: (i) ∑∞ 𝑛=0 𝛼𝑛 < ∞; (ii) lim𝑛 → ∞ 𝜎𝑛 = 0 and ∑∞ 𝑛=0 𝜎𝑛 = ∞; (iii) 0 < lim inf 𝑛 → ∞ 𝛽𝑛 ≤ lim sup𝑛 → ∞ 𝛽𝑛 < 1; (iv) 0 < lim inf 𝑛 → ∞ 𝜆 𝑛 ≤ lim sup𝑛 → ∞ 𝜆 𝑛 < 1/‖𝐴‖2 and lim𝑛 → ∞ |𝜆 𝑛+1 − 𝜆 𝑛 | = 0. Then the sequences {𝑢𝑛 }, {̃ 𝑢𝑛 } converge strongly to the same point 𝑥 = 𝑃 Fix (𝑆)∩Γ 𝑢 if and only if lim𝑛 → ∞ ‖𝑢𝑛+1 − 𝑢𝑛 ‖ = 0. 
(vii) 0 < lim inf 𝑛 → ∞ 𝜆 𝑛 ≤ lim sup𝑛 → ∞ 𝜆 𝑛 < 1/‖𝐴‖2 and lim𝑛 → ∞ |𝜆 𝑛+1 − 𝜆 𝑛 | = 0. Then the sequences {𝑥𝑛 }, {𝑦𝑛 }, {𝑧𝑛 } converge strongly to the same point 𝑥 = 𝑃 Fix (𝑆)∩Ξ∩Γ 𝑄𝑥 if and only if lim𝑛 → ∞ ‖𝑧𝑛+1 − 𝑧𝑛 ‖ = 0. Furthermore, (𝑥, 𝑦) is a solution of the GSVI (14), where 𝑦 = 𝑃𝐶(𝑥 − 𝜇2 𝐵2 𝑥). Proof. First, taking into account 0 < lim inf 𝑛 → ∞ 𝜆 𝑛 ≤ lim sup𝑛 → ∞ 𝜆 𝑛 < 1/‖𝐴‖2 , without loss of generality we may assume that {𝜆 𝑛 } ⊂ [𝑎, 𝑏] for some 𝑎, 𝑏 ∈ (0, 1/‖𝐴‖2 ). Repeating the same argument as that in the proof of Theorem 19, we can show that 𝑃𝐶(𝐼 − 𝜆∇𝑓𝛼 ) is 𝜁-averaged for each 𝜆 ∈ (0, 2/(𝛼 + ‖𝐴‖2 )), where 𝜁 = (2 + 𝜆(𝛼 + ‖𝐴‖2 ))/4. Further, repeating the same argument as that in the proof of Theorem 19, we can also show that for each integer 𝑛 ≥ Abstract and Applied Analysis 17 0, 𝑃𝐶(𝐼 − 𝜆 𝑛 ∇𝑓𝛼𝑛 ) is 𝜁𝑛 -averaged with 𝜁𝑛 = (2 + 𝜆 𝑛 (𝛼𝑛 + ‖𝐴‖2 ))/4 ∈ (0, 1). Next we divide the remainder of the proof into several steps. Step 1. {𝑥𝑛 } is bounded. Indeed, take 𝑝 ∈ Fix(𝑆) ∩ Ξ ∩ Γ arbitrarily. Then 𝑆𝑝 = 𝑝, 𝑃𝐶(𝐼 − 𝜆∇𝑓)𝑝 = 𝑝 for 𝜆 ∈ (0, 2/‖𝐴‖2 ), and 𝑝 = 𝑃𝐶 [𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝) − 𝜇1 𝐵1 𝑃𝐶 (𝑝 − 𝜇2 𝐵2 𝑝)] . (104) Utilizing the arguments similar to those of (45) and (46) in the proof of Theorem 19, from (18) we can obtain 𝑧𝑛 − 𝑝 ≤ 𝑥𝑛 − 𝑝 + 𝜆 𝑛 𝛼𝑛 𝑝 , 2 2 𝑧𝑛 − 𝑝 ≤ 𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 . (105) (106) For simplicity, we write 𝑞 = 𝑃𝐶(𝑝 − 𝜇2 𝐵2 𝑝), 𝑧̃𝑛 = 𝑃𝐶(𝑧𝑛 − 𝜇2 𝐵2 𝑧𝑛 ), 𝑢𝑛 = 𝑃𝐶 [𝑃𝐶 (𝑧𝑛 − 𝜇2 𝐵2 𝑧𝑛 ) −𝜇1 𝐵1 𝑃𝐶 (𝑧𝑛 − 𝜇2 𝐵2 𝑧𝑛 )] , (107) 𝑢𝑛 = 𝑃𝐶 (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 )) , for each 𝑛 ≥ 0. Then 𝑦𝑛 = 𝜎𝑛 𝑄𝑥𝑛 +𝜏𝑛 𝑢𝑛 +(1−𝜎𝑛 −𝜏𝑛 )𝑢𝑛 for each 𝑛 ≥ 0. Since 𝐵𝑖 : 𝐶 → H1 is 𝛽𝑖 -inverse strongly monotone and 0 < 𝜇𝑖 < 2𝛽𝑖 for 𝑖 = 1, 2, utilizing the argument similar to that of (48) in the proof of Theorem 19, we can obtain that for all 𝑛 ≥ 0, 2 2 2 𝑢𝑛 − 𝑝 ≤ 𝑧𝑛 − 𝑝 − 𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 2 2 − 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞 ≤ 𝑧𝑛 − 𝑝 . (108) Furthermore, utilizing Proposition 8(i)-(ii) and the argument similar to that of (51) in the proof of Theorem 19, from (105) we obtain 2 𝑢𝑛 − 𝑝 2 ≤ 𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 2 2 + (𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) − 1) 𝑥𝑛 − 𝑧𝑛 2 ≤ 𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 2 ≤ 𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 [𝑥𝑛 − 𝑝 + 𝜆 𝑛 𝛼𝑛 𝑝] 2 2 ≤ 𝑥𝑛 − 𝑝 + 4𝜆 𝑛 𝛼𝑛 𝑝 𝑥𝑛 − 𝑝 + 4𝜆2𝑛 𝛼𝑛2 𝑝 2 = (𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝) . (109) Hence it follows from (105), (108), and (109) that 𝑦𝑛 − 𝑝 = 𝜎𝑛 (𝑄𝑥𝑛 − 𝑝) +𝜏𝑛 (𝑢𝑛 − 𝑝) + (1 − 𝜎𝑛 − 𝜏𝑛 ) (𝑢𝑛 − 𝑝) ≤ 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 𝜏𝑛 𝑢𝑛 − 𝑝 + (1 − 𝜎𝑛 − 𝜏𝑛 ) 𝑢𝑛 − 𝑝 ≤ 𝜎𝑛 (𝜌 𝑥𝑛 − 𝑝 + 𝑄𝑝 − 𝑝) + 𝜏𝑛 𝑢𝑛 − 𝑝 + (1 − 𝜎𝑛 − 𝜏𝑛 ) 𝑧𝑛 − 𝑝 ≤ 𝜎𝑛 (𝜌 𝑥𝑛 − 𝑝 + 𝑄𝑝 − 𝑝) + 𝜏𝑛 (𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝) + (1 − 𝜎𝑛 − 𝜏𝑛 ) (𝑥𝑛 − 𝑝 + 𝜆 𝑛 𝛼𝑛 𝑝) = (1 − (1 − 𝜌) 𝜎𝑛 ) 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑝 − 𝑝 + [2𝜏𝑛 + (1 − 𝜎𝑛 − 𝜏𝑛 )] 𝜆 𝑛 𝛼𝑛 𝑝 ≤ (1 − (1 − 𝜌) 𝜎𝑛 ) 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑝 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 = (1 − (1 − 𝜌) 𝜎𝑛 ) 𝑥𝑛 − 𝑝 𝑄𝑝 − 𝑝 + (1 − 𝜌) 𝜎𝑛 + 2𝜆 𝑛 𝛼𝑛 𝑝 1−𝜌 𝑄𝑝 − 𝑝 ≤ max {𝑥𝑛 − 𝑝 , } + 2𝜆 𝑛 𝛼𝑛 𝑝 . 1−𝜌 (110) Since (𝛾𝑛 + 𝛿𝑛 )𝑘 ≤ 𝛾𝑛 for all 𝑛 ≥ 0, utilizing Lemma 17 we obtain from (110) 𝑥𝑛+1 − 𝑝 = 𝛽𝑛 (𝑥𝑛 − 𝑝) + 𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝) ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝) ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 𝑦𝑛 − 𝑝 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 𝑄𝑝 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) [max {𝑥𝑛 − 𝑝 , } + 2𝜆 𝑛 𝛼𝑛 𝑝] 1−𝜌 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 𝑄𝑝 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) max {𝑥𝑛 − 𝑝 , } + 2𝜆 𝑛 𝛼𝑛 𝑝 1−𝜌 𝑄𝑝 − 𝑝 ≤ max {𝑥𝑛 − 𝑝 , } + 2𝑏 𝑝 𝛼𝑛 . 1−𝜌 (111) 18 Abstract and Applied Analysis Repeating the same argument as that of (54) in the proof of Theorem 19, by induction we can prove that 𝑛 𝑄𝑝 − 𝑝 } + 2𝑏 𝑝 ∑𝛼𝑗 . 𝑥𝑛+1 − 𝑝 ≤ max {𝑥0 − 𝑝 , 1−𝜌 𝑗=0 (112) Thus, {𝑥𝑛 } is bounded. Since 𝑃𝐶, ∇𝑓𝛼𝑛 , 𝐵1 and 𝐵2 are Lipschitz continuous, it is easy to see that {𝑧𝑛 }, {𝑢𝑛 }, {𝑢𝑛 }, {𝑦𝑛 }, and {̃𝑧𝑛 } are bounded, where 𝑧̃𝑛 = 𝑃𝐶(𝑧𝑛 − 𝜇2 𝐵2 𝑧𝑛 ) for all 𝑛 ≥ 0. 
Step 2. lim𝑛 → ∞ ‖𝑥𝑛+1 − 𝑥𝑛 ‖ = 0. Indeed, define 𝑥𝑛+1 = 𝛽𝑛 𝑥𝑛 + (1 − 𝛽𝑛 )𝑤𝑛 for all 𝑛 ≥ 0. Then, utilizing the arguments similar to those of (58)–(61) in the proof of Theorem 19, we can obtain that 𝑤𝑛+1 − 𝑤𝑛 = (113) +( (due to Lemma 17) 𝑧𝑛+1 − 𝑧𝑛 ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑥𝑛 ) + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑥𝑛 , (114) 2 𝑢𝑛+1 − 𝑢𝑛 2 ≤ 𝑃𝐶 (𝑧𝑛+1 −𝜇2 𝐵2 𝑧𝑛+1 )−𝑃𝐶 (𝑧𝑛 −𝜇2 𝐵2 𝑧𝑛 ) − 𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑃𝐶 (𝑧𝑛+1 − 𝜇2 𝐵2 𝑧𝑛+1 ) 2 − 𝐵1 𝑃𝐶 (𝑧𝑛 − 𝜇2 𝐵2 𝑧𝑛 ) 2 ≤ (𝑧𝑛+1 − 𝜇2 𝐵2 𝑧𝑛+1 ) − (𝑧𝑛 − 𝜇2 𝐵2 𝑧𝑛 ) 2 ≤ 𝑧𝑛+1 − 𝑧𝑛 2 − 𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑧𝑛+1 − 𝐵2 𝑧𝑛 2 ≤ 𝑧𝑛+1 − 𝑧𝑛 . (115) So, we have 𝑢𝑛+1 − 𝑢𝑛 = 𝑃𝐶 (𝑥𝑛+1 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (𝑧𝑛+1 )) −𝑃𝐶 (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 )) ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (𝑧𝑛+1 ) − ∇𝑓𝛼𝑛+1 (𝑧𝑛 ) + 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (𝑧𝑛 ) − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 ) ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) 𝑧𝑛+1 − 𝑧𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 𝐼 + ∇𝑓) 𝑧𝑛 − 𝜆 𝑛 (𝛼𝑛 𝐼 + ∇𝑓) 𝑧𝑛 ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝜆 𝑛+1 (𝛼𝑛+1 + ‖𝐴‖2 ) 𝑧𝑛+1 − 𝑧𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑧𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑧𝑛 ) + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑧𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑧𝑛 ) . (116) 𝛾𝑛+1 𝛾𝑛 − )𝑦 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝛿𝑛+1 𝛿𝑛 − ) 𝑆𝑦𝑛 , 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝛾𝑛+1 (𝑦𝑛+1 − 𝑦𝑛 ) + 𝛿𝑛+1 (𝑆𝑦𝑛+1 − 𝑆𝑦𝑛 ) ≤ (𝛾𝑛+1 + 𝛿𝑛+1 ) 𝑦𝑛+1 − 𝑦𝑛 , − [𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (𝑧𝑛 ) − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 )] ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝑧𝑛+1 − 𝑧𝑛 𝛾𝑛+1 (𝑦𝑛+1 − 𝑦𝑛 ) + 𝛿𝑛+1 (𝑆𝑦𝑛+1 − 𝑆𝑦𝑛 ) 1 − 𝛽𝑛+1 +( = 𝑥𝑛+1 − 𝑥𝑛 − 𝜆 𝑛+1 [∇𝑓𝛼𝑛+1 (𝑧𝑛+1 ) − ∇𝑓𝛼𝑛+1 (𝑧𝑛 )] ≤ (𝑥𝑛+1 − 𝜆 𝑛+1 ∇𝑓𝛼𝑛+1 (𝑧𝑛+1 )) − (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 )) This together with (114) implies that 𝑦𝑛+1 − 𝑦𝑛 = 𝜎𝑛+1 (𝑄𝑥𝑛+1 − 𝑢𝑛+1 ) + 𝜏𝑛+1 𝑢𝑛+1 + (1 − 𝜏𝑛+1 ) 𝑢𝑛+1 −𝜎𝑛 (𝑄𝑥𝑛 − 𝑢𝑛 ) − 𝜏𝑛 𝑢𝑛 − (1 − 𝜏𝑛 ) 𝑢𝑛 ≤ 𝜏𝑛+1 𝑢𝑛+1 − 𝜏𝑛 𝑢𝑛 + (1 − 𝜏𝑛+1 ) 𝑢𝑛+1 − (1 − 𝜏𝑛 ) 𝑢𝑛 + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 ≤ 𝜏𝑛+1 − 𝜏𝑛 𝑢𝑛+1 + 𝜏𝑛 𝑢𝑛+1 − 𝑢𝑛 + 𝜏𝑛+1 − 𝜏𝑛 𝑢𝑛+1 + (1 − 𝜏𝑛 ) 𝑢𝑛+1 − 𝑢𝑛 + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 ≤ 𝜏𝑛 [ 𝑥𝑛+1 − 𝑥𝑛 + 𝑧𝑛+1 − 𝑧𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑧𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑧𝑛 ) ] + (1 − 𝜏𝑛 ) 𝑧𝑛+1 − 𝑧𝑛 + 𝜏𝑛+1 − 𝜏𝑛 (𝑢𝑛+1 + 𝑢𝑛+1 ) + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝑧𝑛+1 − 𝑧𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑧𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑧𝑛 ) + 𝜏𝑛+1 − 𝜏𝑛 (𝑢𝑛+1 + 𝑢𝑛+1 ) + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 . (117) Hence it follows from (113), and (117) that 𝑤𝑛+1 − 𝑤𝑛 𝛾 (𝑦 − 𝑦𝑛 ) + 𝛿𝑛+1 (𝑆𝑦𝑛+1 − 𝑆𝑦𝑛 ) ≤ 𝑛+1 𝑛+1 1 − 𝛽𝑛+1 Abstract and Applied Analysis 19 𝛾 𝛾𝑛 + 𝑛+1 − 𝑦 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝛿 𝛿𝑛 + 𝑛+1 − 𝑆𝑦 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 𝑦𝑛 − 𝑝 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 2 2 2 × [𝜎𝑛 𝑄𝑥𝑛 −𝑝 +𝜏𝑛 𝑢𝑛 −𝑝 +(1−𝜎𝑛 −𝜏𝑛 ) 𝑢𝑛 −𝑝 ] 𝛾𝑛+1 + 𝛿𝑛+1 𝑦 − 𝑦𝑛 1 − 𝛽𝑛+1 𝑛+1 𝛾 𝛾𝑛 + 𝑛+1 − (𝑦 + 𝑆𝑦 ) 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝑛 𝛾𝑛 𝛾 = 𝑦𝑛+1 − 𝑦𝑛 + 𝑛+1 − (𝑦 + 𝑆𝑦 ) 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝑛 ≤ 𝑥𝑛+1 − 𝑥𝑛 + 𝑧𝑛+1 − 𝑧𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑧𝑛 + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑧𝑛 ) + 𝜏𝑛+1 − 𝜏𝑛 (𝑢𝑛+1 + 𝑢𝑛+1 ) + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 𝛾 𝛾𝑛 + 𝑛+1 − (𝑦 + 𝑆𝑦 ) . 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝑛 ≤ (118) Since {𝑥𝑛 }, {𝑦𝑛 }, {𝑧𝑛 }, {𝑢𝑛 }, and {𝑢𝑛 } are bounded, it follows from conditions (i), (iii), (iv), (vi), and (vii) that lim sup (𝑤𝑛+1 − 𝑤𝑛 − 𝑥𝑛+1 − 𝑥𝑛 ) 𝑛→∞ + 𝜆 𝑛+1 − 𝜆 𝑛 ∇𝑓 (𝑧𝑛 ) + 𝜏𝑛+1 − 𝜏𝑛 (𝑢𝑛+1 + 𝑢𝑛+1 ) + 𝜎𝑛+1 𝑄𝑥𝑛+1 − 𝑢𝑛+1 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑢𝑛 𝛾 𝛾𝑛 + 𝑛+1 − (𝑦 + 𝑆𝑦 ) } = 0. 1 − 𝛽𝑛+1 1 − 𝛽𝑛 𝑛 𝑛 (119) Hence by Lemma 14 we get lim𝑛 → ∞ ‖𝑤𝑛 − 𝑥𝑛 ‖ = 0. Thus, − 𝑥𝑛 = lim (1 − 𝛽𝑛 ) 𝑤𝑛 − 𝑥𝑛 = 0. 𝑛→∞ (120) Step 3. lim𝑛 → ∞ ‖𝐵2 𝑧𝑛 − 𝐵2 𝑝‖ = 0, lim𝑛 → ∞ ‖𝐵1 𝑧̃𝑛 − 𝐵1 𝑞‖ = 0, and lim𝑛 → ∞ ‖𝑥𝑛 − 𝑧𝑛 ‖ = 0, where 𝑞 = 𝑃𝐶(𝑝 − 𝜇2 𝐵2 𝑝). 
Indeed, utilizing Lemma 17 and the convexity of ‖ ⋅ ‖2 , we obtain from (18) and (106)–(109) that 2 𝑥𝑛+1 − 𝑝 2 = 𝛽𝑛 (𝑥𝑛 − 𝑝) + 𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝) 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 1 2 × [𝛾𝑛 (𝑦𝑛 − 𝑝) + 𝛿𝑛 (𝑆𝑦𝑛 − 𝑝)] 𝛾𝑛 + 𝛿𝑛 2 × {𝜏𝑛 [𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 2 2 + (𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) − 1) 𝑥𝑛 − 𝑧𝑛 ] 2 2 + (1−𝜏𝑛 ) [𝑧𝑛 −𝑝 −𝜇2 (2𝛽2 −𝜇2 ) 𝐵2 𝑧𝑛 −𝐵2 𝑝 2 −𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞 ] } 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 2 × {𝜏𝑛 [𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 2 2 + (𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) − 1) 𝑥𝑛 − 𝑧𝑛 ] 2 + (1 − 𝜏𝑛 ) [𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 2 − 𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 2 −𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞 ] } ≤ lim sup { 𝑧𝑛+1 − 𝑧𝑛 + 𝜆 𝑛+1 𝛼𝑛+1 − 𝜆 𝑛 𝛼𝑛 𝑧𝑛 𝑛→∞ lim 𝑥 𝑛 → ∞ 𝑛+1 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 2 2 + (𝛾𝑛 + 𝛿𝑛 ) [𝜏𝑛 𝑢𝑛 − 𝑝 + (1 − 𝜏𝑛 ) 𝑢𝑛 − 𝑝 ] 2 2 ≤ 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 2 2 = 𝛽𝑛 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + (𝛾𝑛 + 𝛿𝑛 ) 2 × {𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 2 2 − 𝜏𝑛 (1 − 𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) ) 𝑥𝑛 − 𝑧𝑛 − (1 − 𝜏𝑛 ) 2 × [𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 2 +𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞 ] } 2 2 ≤ 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 2 2 − (𝛾𝑛 + 𝛿𝑛 ) {𝜏𝑛 (1 − 𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) ) 𝑥𝑛 − 𝑧𝑛 2 + (1 − 𝜏𝑛 ) [𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 2 +𝜇1 (2𝛽1 −𝜇1 ) 𝐵1 𝑧̃𝑛 −𝐵1 𝑞 ] } . (121) Therefore, 2 2 (𝛾𝑛 + 𝛿𝑛 ) {𝜏𝑛 (1 − 𝜆2𝑛 (𝛼𝑛 + ‖𝐴‖2 ) ) 𝑥𝑛 − 𝑧𝑛 2 + (1 − 𝜏𝑛 ) [𝜇2 (2𝛽2 − 𝜇2 ) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 20 Abstract and Applied Analysis 2 +𝜇1 (2𝛽1 − 𝜇1 ) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞 ] } 2 2 ≤ 𝑥𝑛 − 𝑝 − 𝑥𝑛+1 − 𝑝 2 2 × [𝑧̃𝑛 − 𝑞 − 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) ≤ (𝑥𝑛 − 𝑝 + 𝑥𝑛+1 − 𝑝) 𝑥𝑛 − 𝑥𝑛+1 +2𝜇1 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞] 2 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 . (122) Since 𝛼𝑛 → 0, 𝜎𝑛 → 0, ‖𝑥𝑛 − 𝑥𝑛+1 ‖ → 0, lim inf 𝑛 → ∞ (𝛾𝑛 + 𝛿𝑛 ) > 0, {𝜆 𝑛 } ⊂ [𝑎, 𝑏], and 0 < lim inf 𝑛 → ∞ 𝜏𝑛 ≤ lim sup𝑛 → ∞ 𝜏𝑛 < 1, it follows that lim 𝐵 𝑧̃ − 𝐵1 𝑞 = 0, 𝑛→∞ 1 𝑛 lim 𝐵 𝑧 − 𝐵2 𝑝 = 0. 𝑛→∞ 2 𝑛 2 ≤ 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 2 + 𝜏𝑛 (𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝) + (1 − 𝜏𝑛 ) 2 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 lim 𝑥 − 𝑧𝑛 = 0, 𝑛→∞ 𝑛 2 + (1 − 𝜏𝑛 ) 𝑢𝑛 − 𝑝 2 ≤ 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 2 + 𝜏𝑛 (𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝) + (1 − 𝜏𝑛 ) 2 2 × {𝑧𝑛 − 𝑝 − 𝑧𝑛 − 𝑧̃𝑛 − (𝑝 − 𝑞) + 2𝜇2 𝑧𝑛 − 𝑧̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 (123) Step 4. lim𝑛 → ∞ ‖𝑆𝑦𝑛 − 𝑦𝑛 ‖ = 0. Indeed, observe that 𝑢𝑛 − 𝑧𝑛 = 𝑃𝐶 (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 )) − 𝑃𝐶 (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑥𝑛 )) ≤ (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 )) − (𝑥𝑛 − 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑥𝑛 )) = 𝜆 𝑛 ∇𝑓𝛼𝑛 (𝑧𝑛 ) − ∇𝑓𝛼𝑛 (𝑥𝑛 ) ≤ 𝜆 𝑛 (𝛼𝑛 + ‖𝐴‖2 ) 𝑧𝑛 − 𝑥𝑛 . (124) This together with ‖𝑧𝑛 − 𝑥𝑛 ‖ → 0 implies that lim𝑛 → ∞ ‖𝑢𝑛 − 𝑧𝑛 ‖ = 0 and hence lim𝑛 → ∞ ‖𝑢𝑛 − 𝑥𝑛 ‖ = 0. Utilizing the arguments similar to those of (73) and (75) in the proof of Theorem 19 we can prove that 2 2 2 ̃ 𝑧𝑛 − 𝑞 ≤ 𝑧𝑛 − 𝑝 − 𝑧𝑛 − 𝑧̃𝑛 − (𝑝 − 𝑞) + 2𝜇2 𝑧𝑛 − 𝑧̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 , 2 2 2 𝑢𝑛 − 𝑝 ≤ 𝑧̃𝑛 − 𝑞 − 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) + 2𝜇1 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞 . (125) Utilizing (106), (109), (125), we have 2 𝑦𝑛 − 𝑝 2 = 𝜎𝑛 (𝑄𝑥𝑛 − 𝑝) + 𝜏𝑛 (𝑢𝑛 − 𝑝) + (1 − 𝜎𝑛 − 𝜏𝑛 ) (𝑢𝑛 − 𝑝) 2 2 2 ≤ 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 𝜏𝑛 𝑢𝑛 − 𝑝 + (1 − 𝜎𝑛 − 𝜏𝑛 ) 𝑢𝑛 − 𝑝 2 2 ≤ 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 𝜏𝑛 𝑢𝑛 − 𝑝 2 − 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) +2𝜇1 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞 } 2 ≤ 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 2 + 𝜏𝑛 (𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝) + (1 − 𝜏𝑛 ) 2 × {𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 2 − 𝑧𝑛 − 𝑧̃𝑛 − (𝑝 − 𝑞) + 2𝜇2 𝑧𝑛 − 𝑧̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 2 − 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) +2𝜇1 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞} 2 2 ≤ 𝑥𝑛 − 𝑝 + 𝜎𝑛 𝑄𝑥𝑛 − 𝑝 + 2𝜆 𝑛 𝛼𝑛 𝑝 𝑧𝑛 − 𝑝 + 2𝜇2 𝑧𝑛 − 𝑧̃𝑛 − (𝑝 − 𝑞) 𝐵2 𝑧𝑛 − 𝐵2 𝑝 + 2𝜇1 𝑧̃𝑛 − 𝑢𝑛 + (𝑝 − 𝑞) 𝐵1 𝑧̃𝑛 − 𝐵1 𝑞 2 2 − (1−𝜏𝑛 ) (𝑧𝑛 −̃𝑧𝑛 −(𝑝 − 𝑞) + 𝑧̃𝑛 −𝑢𝑛 +(𝑝−𝑞) ) . 
Step 4. lim_{n→∞} ‖S y_n − y_n‖ = 0.

Indeed, observe that

‖u_n − z_n‖ = ‖P_C(x_n − λ_n ∇f_{α_n}(z_n)) − P_C(x_n − λ_n ∇f_{α_n}(x_n))‖ ≤ ‖(x_n − λ_n ∇f_{α_n}(z_n)) − (x_n − λ_n ∇f_{α_n}(x_n))‖ = λ_n ‖∇f_{α_n}(z_n) − ∇f_{α_n}(x_n)‖ ≤ λ_n(α_n + ‖A‖²) ‖z_n − x_n‖. (124)

This together with ‖z_n − x_n‖ → 0 implies that lim_{n→∞} ‖u_n − z_n‖ = 0 and hence lim_{n→∞} ‖u_n − x_n‖ = 0. Utilizing the arguments similar to those of (73) and (75) in the proof of Theorem 19, we can prove that

‖z̃_n − q‖² ≤ ‖z_n − p‖² − ‖z_n − z̃_n − (p − q)‖² + 2μ_2 ‖z_n − z̃_n − (p − q)‖ ‖B_2 z_n − B_2 p‖,
‖ũ_n − p‖² ≤ ‖z̃_n − q‖² − ‖z̃_n − ũ_n + (p − q)‖² + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖. (125)

Utilizing (106), (109), and (125), we have

‖y_n − p‖² = ‖σ_n(Q x_n − p) + τ_n(u_n − p) + (1 − σ_n − τ_n)(ũ_n − p)‖²
≤ σ_n ‖Q x_n − p‖² + τ_n ‖u_n − p‖² + (1 − σ_n − τ_n) ‖ũ_n − p‖²
≤ σ_n ‖Q x_n − p‖² + τ_n ‖u_n − p‖² + (1 − τ_n) ‖ũ_n − p‖²
≤ σ_n ‖Q x_n − p‖² + τ_n (‖x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖) + (1 − τ_n) [‖z̃_n − q‖² − ‖z̃_n − ũ_n + (p − q)‖² + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖]
≤ σ_n ‖Q x_n − p‖² + τ_n (‖x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖) + (1 − τ_n) {‖z_n − p‖² − ‖z_n − z̃_n − (p − q)‖² + 2μ_2 ‖z_n − z̃_n − (p − q)‖ ‖B_2 z_n − B_2 p‖ − ‖z̃_n − ũ_n + (p − q)‖² + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖}
≤ σ_n ‖Q x_n − p‖² + τ_n (‖x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖) + (1 − τ_n) {‖x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖ − ‖z_n − z̃_n − (p − q)‖² + 2μ_2 ‖z_n − z̃_n − (p − q)‖ ‖B_2 z_n − B_2 p‖ − ‖z̃_n − ũ_n + (p − q)‖² + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖}
≤ ‖x_n − p‖² + σ_n ‖Q x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖ + 2μ_2 ‖z_n − z̃_n − (p − q)‖ ‖B_2 z_n − B_2 p‖ + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖ − (1 − τ_n) (‖z_n − z̃_n − (p − q)‖² + ‖z̃_n − ũ_n + (p − q)‖²). (126)

Thus from (18) and (126) it follows that

‖x_{n+1} − p‖² = ‖β_n(x_n − p) + γ_n(y_n − p) + δ_n(S y_n − p)‖²
≤ β_n ‖x_n − p‖² + (γ_n + δ_n) ‖y_n − p‖²
= β_n ‖x_n − p‖² + (1 − β_n) ‖y_n − p‖²
≤ β_n ‖x_n − p‖² + (1 − β_n) {‖x_n − p‖² + σ_n ‖Q x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖ + 2μ_2 ‖z_n − z̃_n − (p − q)‖ ‖B_2 z_n − B_2 p‖ + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖ − (1 − τ_n) (‖z_n − z̃_n − (p − q)‖² + ‖z̃_n − ũ_n + (p − q)‖²)}
≤ ‖x_n − p‖² + σ_n ‖Q x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖ + 2μ_2 ‖z_n − z̃_n − (p − q)‖ ‖B_2 z_n − B_2 p‖ + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖ − (1 − β_n)(1 − τ_n) (‖z_n − z̃_n − (p − q)‖² + ‖z̃_n − ũ_n + (p − q)‖²), (127)

which hence implies that

(1 − β_n)(1 − τ_n) (‖z_n − z̃_n − (p − q)‖² + ‖z̃_n − ũ_n + (p − q)‖²)
≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + σ_n ‖Q x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖ + 2μ_2 ‖z_n − z̃_n − (p − q)‖ ‖B_2 z_n − B_2 p‖ + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖
≤ (‖x_n − p‖ + ‖x_{n+1} − p‖) ‖x_n − x_{n+1}‖ + σ_n ‖Q x_n − p‖² + 2λ_n α_n ‖p‖ ‖z_n − p‖ + 2μ_2 ‖z_n − z̃_n − (p − q)‖ ‖B_2 z_n − B_2 p‖ + 2μ_1 ‖z̃_n − ũ_n + (p − q)‖ ‖B_1 z̃_n − B_1 q‖. (128)

Since lim sup_{n→∞} β_n < 1, lim sup_{n→∞} τ_n < 1, {λ_n} ⊂ [a, b], α_n → 0, σ_n → 0, ‖B_2 z_n − B_2 p‖ → 0, ‖B_1 z̃_n − B_1 q‖ → 0, and ‖x_{n+1} − x_n‖ → 0, it follows from the boundedness of {x_n}, {ũ_n}, {z_n}, and {z̃_n} that

lim_{n→∞} ‖z_n − z̃_n − (p − q)‖ = 0,   lim_{n→∞} ‖z̃_n − ũ_n + (p − q)‖ = 0. (129)

Consequently, it immediately follows that

lim_{n→∞} ‖z_n − ũ_n‖ = 0,   lim_{n→∞} ‖ũ_n − u_n‖ = 0. (130)

Also, note that

‖y_n − u_n‖ ≤ σ_n ‖Q x_n − u_n‖ + (1 − σ_n − τ_n) ‖ũ_n − u_n‖ → 0. (131)

This together with ‖x_n − u_n‖ → 0 implies that

lim_{n→∞} ‖x_n − y_n‖ = 0. (132)

Since

‖δ_n(S y_n − x_n)‖ ≤ ‖x_{n+1} − x_n‖ + γ_n ‖y_n − x_n‖, (133)

it follows that

lim_{n→∞} ‖S y_n − x_n‖ = 0  and hence  lim_{n→∞} ‖S y_n − y_n‖ = 0. (134)

Step 5. lim sup_{n→∞} ⟨Qx̄ − x̄, x_n − x̄⟩ ≤ 0, where x̄ = P_{Fix(S)∩Ξ∩Γ} Qx̄.

Indeed, since {x_n} is bounded, there exists a subsequence {x_{n_i}} of {x_n} such that

lim sup_{n→∞} ⟨Qx̄ − x̄, x_n − x̄⟩ = lim_{i→∞} ⟨Qx̄ − x̄, x_{n_i} − x̄⟩. (135)

Also, since H_1 is reflexive and {x_n} is bounded, without loss of generality we may assume that x_{n_i} ⇀ p̂ weakly for some p̂ ∈ C. Taking into account that ‖x_n − y_n‖ → 0 and ‖x_n − z_n‖ → 0 as n → ∞, we deduce that y_{n_i} ⇀ p̂ and z_{n_i} ⇀ p̂ weakly.

First, it is clear from Lemma 15 and ‖S y_n − y_n‖ → 0 that p̂ ∈ Fix(S). Now let us show that p̂ ∈ Ξ. Note that

‖z_n − G(z_n)‖ = ‖z_n − P_C[P_C(z_n − μ_2 B_2 z_n) − μ_1 B_1 P_C(z_n − μ_2 B_2 z_n)]‖ = ‖z_n − ũ_n‖ → 0 (n → ∞), (136)

where G : C → C is defined as in Lemma 4. According to Lemma 15 we get p̂ ∈ Ξ. Further, let us show that p̂ ∈ Γ. As a matter of fact, define

Tv = ∇f(v) + N_C v if v ∈ C,   Tv = ∅ if v ∉ C, (137)

where N_C v = {w ∈ H_1 : ⟨v − u, w⟩ ≥ 0, ∀u ∈ C}. Then T is maximal monotone and 0 ∈ Tv if and only if v ∈ VI(C, ∇f); see [40] for more details. Let (v, w) ∈ Gph(T); then w − ∇f(v) ∈ N_C v, and hence

⟨v − u, w − ∇f(v)⟩ ≥ 0, ∀u ∈ C. (138)

Observe that

z_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)), v ∈ C. (139)

Utilizing the arguments similar to those of Step 5 in the proof of Theorem 19, we can prove that

⟨v − p̂, w⟩ ≥ 0. (140)

Since T is maximal monotone, we have p̂ ∈ T⁻¹0 and, hence, p̂ ∈ VI(C, ∇f). Thus it is clear that p̂ ∈ Γ. Therefore, p̂ ∈ Fix(S) ∩ Ξ ∩ Γ. Consequently, in terms of Proposition 8(i), we obtain from (135) that

lim sup_{n→∞} ⟨Qx̄ − x̄, x_n − x̄⟩ = lim_{i→∞} ⟨Qx̄ − x̄, x_{n_i} − x̄⟩ = ⟨Qx̄ − x̄, p̂ − x̄⟩ ≤ 0. (141)
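For completeness, we recall the variational characterization of the metric projection that yields the final inequality in (141) (this is, in our reading, the content of Proposition 8(i)): writing Ω := Fix(S) ∩ Ξ ∩ Γ, the point x̄ = P_Ω Qx̄ is characterized by

⟨Qx̄ − x̄, z − x̄⟩ ≤ 0 for all z ∈ Ω,

so, in particular, taking z = p̂ ∈ Ω gives ⟨Qx̄ − x̄, p̂ − x̄⟩ ≤ 0.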
Step 6. lim_{n→∞} ‖x_n − x̄‖ = 0.

Indeed, observe that

⟨Q x_n − x̄, y_n − x̄⟩ = ⟨Q x_n − x̄, x_n − x̄⟩ + ⟨Q x_n − x̄, y_n − x_n⟩
= ⟨Q x_n − Qx̄, x_n − x̄⟩ + ⟨Qx̄ − x̄, x_n − x̄⟩ + ⟨Q x_n − x̄, y_n − x_n⟩
≤ ρ ‖x_n − x̄‖² + ⟨Qx̄ − x̄, x_n − x̄⟩ + ‖Q x_n − x̄‖ ‖y_n − x_n‖. (142)

Utilizing Lemmas 17 and 18, we obtain from (106)–(109) and the convexity of ‖·‖² that

‖x_{n+1} − x̄‖² = ‖β_n(x_n − x̄) + γ_n(y_n − x̄) + δ_n(S y_n − x̄)‖²
≤ β_n ‖x_n − x̄‖² + (γ_n + δ_n) ‖[γ_n(y_n − x̄) + δ_n(S y_n − x̄)] / (γ_n + δ_n)‖²
≤ β_n ‖x_n − x̄‖² + (γ_n + δ_n) ‖y_n − x̄‖²
≤ β_n ‖x_n − x̄‖² + (γ_n + δ_n) [‖τ_n(u_n − x̄) + (1 − σ_n − τ_n)(ũ_n − x̄)‖² + 2σ_n ⟨Q x_n − x̄, y_n − x̄⟩]
≤ β_n ‖x_n − x̄‖² + (γ_n + δ_n) [τ_n ‖u_n − x̄‖² + (1 − σ_n − τ_n) ‖ũ_n − x̄‖² + 2σ_n ⟨Q x_n − x̄, y_n − x̄⟩]
≤ β_n ‖x_n − x̄‖² + (γ_n + δ_n) [τ_n ‖u_n − x̄‖² + (1 − σ_n − τ_n) ‖z_n − x̄‖² + 2σ_n ⟨Q x_n − x̄, y_n − x̄⟩]
≤ β_n ‖x_n − x̄‖² + (γ_n + δ_n) [τ_n (‖x_n − x̄‖² + 2λ_n α_n ‖x̄‖ ‖z_n − x̄‖) + (1 − σ_n − τ_n)(‖x_n − x̄‖² + 2λ_n α_n ‖x̄‖ ‖z_n − x̄‖) + 2σ_n ⟨Q x_n − x̄, y_n − x̄⟩]
= β_n ‖x_n − x̄‖² + (γ_n + δ_n) [(1 − σ_n)(‖x_n − x̄‖² + 2λ_n α_n ‖x̄‖ ‖z_n − x̄‖) + 2σ_n ⟨Q x_n − x̄, y_n − x̄⟩]
= (1 − (γ_n + δ_n) σ_n) ‖x_n − x̄‖² + (γ_n + δ_n)(1 − σ_n) 2λ_n α_n ‖x̄‖ ‖z_n − x̄‖ + (γ_n + δ_n) 2σ_n ⟨Q x_n − x̄, y_n − x̄⟩
≤ (1 − (γ_n + δ_n) σ_n) ‖x_n − x̄‖² + (γ_n + δ_n) 2σ_n [ρ ‖x_n − x̄‖² + ⟨Qx̄ − x̄, x_n − x̄⟩ + ‖Q x_n − x̄‖ ‖y_n − x_n‖] + 2λ_n α_n ‖x̄‖ ‖z_n − x̄‖
= [1 − (1 − 2ρ)(γ_n + δ_n) σ_n] ‖x_n − x̄‖² + (1 − 2ρ)(γ_n + δ_n) σ_n × [2/(1 − 2ρ)] [⟨Qx̄ − x̄, x_n − x̄⟩ + ‖Q x_n − x̄‖ ‖y_n − x_n‖] + 2λ_n α_n ‖x̄‖ ‖z_n − x̄‖. (143)

Note that lim inf_{n→∞} (1 − 2ρ)(γ_n + δ_n) > 0. It follows that Σ_{n=0}^∞ (1 − 2ρ)(γ_n + δ_n) σ_n = ∞. It is clear that

lim sup_{n→∞} [2/(1 − 2ρ)] [⟨Qx̄ − x̄, x_n − x̄⟩ + ‖Q x_n − x̄‖ ‖y_n − x_n‖] ≤ 0, (144)

because lim sup_{n→∞} ⟨Qx̄ − x̄, x_n − x̄⟩ ≤ 0 and lim_{n→∞} ‖x_n − y_n‖ = 0. In addition, note also that {λ_n} ⊂ [a, b], Σ_{n=0}^∞ α_n < ∞, and {z_n} is bounded. Hence we get Σ_{n=0}^∞ 2λ_n α_n ‖x̄‖ ‖z_n − x̄‖ < ∞. Therefore, all conditions of Lemma 16 are satisfied. Consequently, we immediately deduce that ‖x_n − x̄‖ → 0 as n → ∞. In the meantime, taking into account that ‖x_n − y_n‖ → 0 and ‖x_n − z_n‖ → 0 as n → ∞, we infer that

lim_{n→∞} ‖y_n − x̄‖ = lim_{n→∞} ‖z_n − x̄‖ = 0. (145)

This completes the proof.

Corollary 23. Let C be a nonempty closed convex subset of a real Hilbert space H_1. Let A ∈ B(H_1, H_2) and let B_i : C → H_1 be β_i-inverse strongly monotone for i = 1, 2. Let S : C → C be a k-strictly pseudocontractive mapping such that Fix(S) ∩ Ξ ∩ Γ ≠ ∅. For fixed u ∈ C and given x_0 ∈ C arbitrarily, let the sequences {x_n}, {y_n}, {z_n} be generated iteratively by

z_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
y_n = σ_n u + τ_n P_C(x_n − λ_n ∇f_{α_n}(z_n)) + (1 − σ_n − τ_n) P_C[P_C(z_n − μ_2 B_2 z_n) − μ_1 B_1 P_C(z_n − μ_2 B_2 z_n)],
x_{n+1} = β_n x_n + γ_n y_n + δ_n S y_n, ∀n ≥ 0, (146)

where μ_i ∈ (0, 2β_i) for i = 1, 2, {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 1/‖A‖²), and {σ_n}, {τ_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] such that

(i) Σ_{n=0}^∞ α_n < ∞;
(ii) σ_n + τ_n ≤ 1, β_n + γ_n + δ_n = 1, and (γ_n + δ_n) k ≤ γ_n for all n ≥ 0;
(iii) lim_{n→∞} σ_n = 0 and Σ_{n=0}^∞ σ_n = ∞;
(iv) 0 < lim inf_{n→∞} τ_n ≤ lim sup_{n→∞} τ_n < 1 and lim_{n→∞} |τ_{n+1} − τ_n| = 0;
(v) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1 and lim inf_{n→∞} δ_n > 0;
(vi) lim_{n→∞} (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)) = 0;
(vii) 0 < lim inf_{n→∞} λ_n ≤ lim sup_{n→∞} λ_n < 1/‖A‖² and lim_{n→∞} |λ_{n+1} − λ_n| = 0.

Then the sequences {x_n}, {y_n}, {z_n} converge strongly to the same point x̄ = P_{Fix(S)∩Ξ∩Γ} u if and only if lim_{n→∞} ‖z_{n+1} − z_n‖ = 0. Furthermore, (x̄, ȳ) is a solution of the GSVI (14), where ȳ = P_C(x̄ − μ_2 B_2 x̄).
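To make scheme (146) concrete, the following is a minimal NumPy sketch of one sweep of the iteration for the illustrative case in which C and Q are boxes, so that P_C and P_Q have closed-form expressions. It is our own sketch, not the authors' code, and every name in it (project_C, grad_f_alpha, iterate_146, and so on) is ours; the operators B_1, B_2 and the mapping S must be supplied by the user as callables satisfying the hypotheses of Corollary 23.

```python
import numpy as np

def project_C(x, lo=-1.0, hi=1.0):
    # Illustrative closed-form projection onto a box playing the role of C.
    return np.clip(x, lo, hi)

def project_Q(y, lo=-1.0, hi=1.0):
    # Illustrative closed-form projection onto a box playing the role of Q.
    return np.clip(y, lo, hi)

def grad_f_alpha(x, A, alpha):
    # Gradient of the Tikhonov-regularized SFP objective:
    #   grad f_alpha(x) = alpha*x + A^T (I - P_Q) A x.
    Ax = A @ x
    return alpha * x + A.T @ (Ax - project_Q(Ax))

def iterate_146(x, A, B1, B2, S, u, alpha, lam, sigma, tau, beta, gamma, delta, mu1, mu2):
    # One step of scheme (146): returns x_{n+1} given x_n.
    z = project_C(x - lam * grad_f_alpha(x, A, alpha))        # z_n
    u_n = project_C(x - lam * grad_f_alpha(z, A, alpha))      # extragradient step
    z_tld = project_C(z - mu2 * B2(z))                        # inner GSVI step
    u_tld = project_C(z_tld - mu1 * B1(z_tld))                # outer GSVI step
    y = sigma * u + tau * u_n + (1.0 - sigma - tau) * u_tld   # y_n
    return beta * x + gamma * y + delta * S(y)                # x_{n+1}
```

As an example of admissible parameter sequences (our own choice, which, as we read conditions (i)–(vii), satisfies them): α_n = 1/(n+1)², σ_n = 1/(n+2), τ_n ≡ 1/2, β_n ≡ 1/2, λ_n ≡ λ ∈ (0, 1/‖A‖²), and constants γ_n ≡ γ, δ_n ≡ δ > 0 with β_n + γ + δ = 1 and (γ + δ) k ≤ γ.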
Corollary 24. Let C be a nonempty closed convex subset of a real Hilbert space H_1. Let A ∈ B(H_1, H_2) and let B_i : C → H_1 be β_i-inverse strongly monotone for i = 1, 2. Let S : C → C be a nonexpansive mapping such that Fix(S) ∩ Ξ ∩ Γ ≠ ∅. Let Q : C → C be a ρ-contraction with ρ ∈ [0, 1/2). For given x_0 ∈ C arbitrarily, let the sequences {x_n}, {y_n}, {z_n} be generated iteratively by

z_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
y_n = σ_n Q x_n + τ_n P_C(x_n − λ_n ∇f_{α_n}(z_n)) + (1 − σ_n − τ_n) P_C[P_C(z_n − μ_2 B_2 z_n) − μ_1 B_1 P_C(z_n − μ_2 B_2 z_n)],
x_{n+1} = β_n x_n + γ_n y_n + δ_n S y_n, ∀n ≥ 0, (147)

where μ_i ∈ (0, 2β_i) for i = 1, 2, {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 1/‖A‖²), and {σ_n}, {τ_n}, {β_n}, {γ_n}, {δ_n} ⊂ [0, 1] such that

(i) Σ_{n=0}^∞ α_n < ∞;
(ii) σ_n + τ_n ≤ 1, β_n + γ_n + δ_n = 1, and (γ_n + δ_n) k ≤ γ_n for all n ≥ 0;
(iii) lim_{n→∞} σ_n = 0 and Σ_{n=0}^∞ σ_n = ∞;
(iv) 0 < lim inf_{n→∞} τ_n ≤ lim sup_{n→∞} τ_n < 1 and lim_{n→∞} |τ_{n+1} − τ_n| = 0;
(v) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1 and lim inf_{n→∞} δ_n > 0;
(vi) lim_{n→∞} (γ_{n+1}/(1 − β_{n+1}) − γ_n/(1 − β_n)) = 0;
(vii) 0 < lim inf_{n→∞} λ_n ≤ lim sup_{n→∞} λ_n < 1/‖A‖² and lim_{n→∞} |λ_{n+1} − λ_n| = 0.

Then the sequences {x_n}, {y_n}, {z_n} converge strongly to the same point x̄ = P_{Fix(S)∩Ξ∩Γ} Qx̄ if and only if lim_{n→∞} ‖z_{n+1} − z_n‖ = 0. Furthermore, (x̄, ȳ) is a solution of the GSVI (14), where ȳ = P_C(x̄ − μ_2 B_2 x̄).

Next, utilizing Corollary 23, one gives the following improvement and extension of the main result in [18] (i.e., [18, Theorem 3.1]).

Corollary 25. Let C be a nonempty closed convex subset of a real Hilbert space H_1. Let A ∈ B(H_1, H_2) and let S : C → C be a nonexpansive mapping such that Fix(S) ∩ Γ ≠ ∅. For fixed u ∈ C and given x_0 ∈ C arbitrarily, let the sequences {x_n}, {z_n} be generated iteratively by

z_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
x_{n+1} = β_n x_n + (1 − β_n) S[σ_n u + τ_n P_C(x_n − λ_n ∇f_{α_n}(z_n)) + (1 − σ_n − τ_n) z_n], ∀n ≥ 0, (148)

where {α_n} ⊂ (0, ∞), {λ_n} ⊂ (0, 1/‖A‖²), and {σ_n}, {τ_n}, {β_n} ⊂ [0, 1] such that

(i) Σ_{n=0}^∞ α_n < ∞;
(ii) σ_n + τ_n ≤ 1 for all n ≥ 0;
(iii) lim_{n→∞} σ_n = 0 and Σ_{n=0}^∞ σ_n = ∞;
(iv) 0 < lim inf_{n→∞} τ_n ≤ lim sup_{n→∞} τ_n < 1 and lim_{n→∞} |τ_{n+1} − τ_n| = 0;
(v) 0 < lim inf_{n→∞} β_n ≤ lim sup_{n→∞} β_n < 1;
(vi) 0 < lim inf_{n→∞} λ_n ≤ lim sup_{n→∞} λ_n < 1/‖A‖² and lim_{n→∞} |λ_{n+1} − λ_n| = 0.

Then the sequences {x_n}, {z_n} converge strongly to the same point x̄ = P_{Fix(S)∩Γ} u if and only if lim_{n→∞} ‖z_{n+1} − z_n‖ = 0.

Proof. In Corollary 23, put B_1 = B_2 = 0 and γ_n = 0. Then Ξ = C, β_n + δ_n = 1, P_C[P_C(z_n − μ_2 B_2 z_n) − μ_1 B_1 P_C(z_n − μ_2 B_2 z_n)] = z_n, and the iterative scheme (146) is equivalent to

z_n = P_C(x_n − λ_n ∇f_{α_n}(x_n)),
y_n = σ_n u + τ_n P_C(x_n − λ_n ∇f_{α_n}(z_n)) + (1 − σ_n − τ_n) z_n,
x_{n+1} = β_n x_n + δ_n S y_n, ∀n ≥ 0. (149)

This is equivalent to (148). Since S is a nonexpansive mapping, S must be a k-strictly pseudocontractive mapping with k = 0. In this case, it is easy to see that conditions (i)–(vii) in Corollary 23 are all satisfied. Therefore, in terms of Corollary 23, we obtain the desired result.
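To illustrate the reduction just carried out, here is an analogous minimal sketch (again ours, with our own names, under the same box-projection assumption as before) of one step of the simpler scheme (148), in which the GSVI part disappears and S is nonexpansive:

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Stand-in for both P_C and P_Q (illustrative assumption: C and Q are boxes).
    return np.clip(x, lo, hi)

def grad_f_alpha(x, A, alpha):
    # grad f_alpha(x) = alpha*x + A^T (I - P_Q) A x
    Ax = A @ x
    return alpha * x + A.T @ (Ax - project_box(Ax))

def iterate_148(x, A, S, u, alpha, lam, sigma, tau, beta):
    # One step of scheme (148): x_{n+1} = beta*x_n + (1 - beta)*S(y_n), where
    # y_n = sigma*u + tau*P_C(x_n - lam*grad_f_alpha(z_n)) + (1 - sigma - tau)*z_n.
    z = project_box(x - lam * grad_f_alpha(x, A, alpha))
    y = (sigma * u
         + tau * project_box(x - lam * grad_f_alpha(z, A, alpha))
         + (1.0 - sigma - tau) * z)
    return beta * x + (1.0 - beta) * S(y)
```

With S taken to be the identity mapping, this can be read as an anchored (Halpern-type) variant of the regularized extragradient iteration for the SFP alone.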
Remark 26. Our Theorems 19 and 22 improve, extend, and develop [6, Theorem 5.7], [18, Theorem 3.1], and [33, Theorem 3.1] in the following aspects.

(i) Because both [6, Theorem 5.7] and [18, Theorem 3.1] are weak convergence results for solving the SFP, our Theorems 19 and 22, as strong convergence results, are of particular interest and value.

(ii) The problem of finding an element of Fix(S) ∩ Ξ ∩ Γ in our Theorems 19 and 22 is more general than the corresponding problems in [6, Theorem 5.7] and [18, Theorem 3.1], respectively.

(iii) The relaxed extragradient iterative method for finding an element of Fix(S) ∩ Ξ ∩ VI(C, A) in [33, Theorem 3.1] is extended to develop the relaxed extragradient method with regularization for finding an element of Fix(S) ∩ Ξ ∩ Γ in our Theorem 19.

(iv) The proofs of our Theorems 19 and 22 are very different from that of [33, Theorem 3.1], because our argument depends to a great extent on Lemma 16, on the restriction imposed on the regularization parameter sequence {α_n}, and on the properties of the averaged mappings P_C(I − λ_n ∇f_{α_n}).

(v) Because our iterative schemes (17) and (18) involve a contractive self-mapping Q, a k-strictly pseudocontractive self-mapping S, and several parameter sequences, they are more flexible and more subtle than the corresponding ones in [6, Theorem 5.7] and [18, Theorem 3.1], respectively.

Acknowledgment

The work of L. C. Ceng was partially supported by the National Science Foundation of China (11071169) and the Ph.D. Program Foundation of the Ministry of Education of China (20123127110002). The work of J. C. Yao was partially supported by Grant NSC 99-2115-M-037-002-MY3 of Taiwan. For A. Petruşel, this work was possible with the financial support of a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-ID-PCE-2011-3-0094.

References

[1] Y. Censor and T. Elfving, "A multiprojection algorithm using Bregman projections in a product space," Numerical Algorithms, vol. 8, no. 2–4, pp. 221–239, 1994.
[2] C. Byrne, "Iterative oblique projection onto convex sets and the split feasibility problem," Inverse Problems, vol. 18, no. 2, pp. 441–453, 2002.
[3] Y. Censor, T. Bortfeld, B. Martin, and A. Trofimov, "A unified approach for inversion problems in intensity-modulated radiation therapy," Physics in Medicine & Biology, vol. 51, pp. 2353–2365, 2006.
[4] Y. Censor, T. Elfving, N. Kopf, and T. Bortfeld, "The multiple-sets split feasibility problem and its applications for inverse problems," Inverse Problems, vol. 21, no. 6, pp. 2071–2084, 2005.
[5] Y. Censor, A. Motova, and A. Segal, "Perturbed projections and subgradient projections for the multiple-sets split feasibility problem," Journal of Mathematical Analysis and Applications, vol. 327, no. 2, pp. 1244–1256, 2007.
[6] H.-K. Xu, "Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces," Inverse Problems, vol. 26, no. 10, article 105018, 2010.
[7] C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
[8] B. Qu and N. Xiu, "A note on the CQ algorithm for the split feasibility problem," Inverse Problems, vol. 21, no. 5, pp. 1655–1665, 2005.
[9] H.-K. Xu, "A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem," Inverse Problems, vol. 22, no. 6, pp. 2021–2034, 2006.
[10] Q. Yang, "The relaxed CQ algorithm solving the split feasibility problem," Inverse Problems, vol. 20, no. 4, pp. 1261–1266, 2004.
[11] J. Zhao and Q. Yang, "Several solution methods for the split feasibility problem," Inverse Problems, vol. 21, no. 5, pp. 1791–1799, 2005.
[12] M. I. Sezan and H. Stark, "Applications of convex projection theory to image recovery in tomography and related areas," in Image Recovery: Theory and Applications, H. Stark, Ed., pp. 415–462, Academic Press, Orlando, Fla, USA, 1987.
[13] B. Eicke, "Iteration methods for convexly constrained ill-posed problems in Hilbert space," Numerical Functional Analysis and Optimization, vol. 13, no. 5-6, pp. 413–429, 1992.
[14] L. Landweber, "An iteration formula for Fredholm integral equations of the first kind," American Journal of Mathematics, vol. 73, pp. 615–624, 1951.
[15] L. C. Potter and K. S. Arun, "A dual approach to linear inverse problems with convex constraints," SIAM Journal on Control and Optimization, vol. 31, no. 4, pp. 1080–1092, 1993.
[16] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Modeling & Simulation, vol. 4, no. 4, pp. 1168–1200, 2005.
[17] G. M. Korpelevič, "An extragradient method for finding saddle points and for other problems," Èkonomika i Matematicheskie Metody, vol. 12, no. 4, pp. 747–756, 1976.
[18] L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, "An extragradient method for solving split feasibility and fixed point problems," Computers & Mathematics with Applications, vol. 64, no. 4, pp. 633–642, 2012.
[19] N. Nadezhkina and W. Takahashi, "Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings," Journal of Optimization Theory and Applications, vol. 128, no. 1, pp. 191–201, 2006.
[20] J.-L. Lions and G. Stampacchia, "Variational inequalities," Communications on Pure and Applied Mathematics, vol. 20, pp. 493–519, 1967.
[21] L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, "Viscosity approximation methods for generalized equilibrium problems and fixed point problems," Journal of Global Optimization, vol. 43, no. 4, pp. 487–502, 2009.
[22] L.-C. Ceng and S. Huang, "Modified extragradient methods for strict pseudo-contractions and monotone mappings," Taiwanese Journal of Mathematics, vol. 13, no. 4, pp. 1197–1211, 2009.
[23] L.-C. Ceng, C.-Y. Wang, and J.-C. Yao, "Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities," Mathematical Methods of Operations Research, vol. 67, no. 3, pp. 375–390, 2008.
[24] L.-C. Ceng and J.-C. Yao, "An extragradient-like approximation method for variational inequality problems and fixed point problems," Applied Mathematics and Computation, vol. 190, no. 1, pp. 205–215, 2007.
[25] L.-C. Ceng and J.-C. Yao, "Relaxed viscosity approximation methods for fixed point problems and variational inequality problems," Nonlinear Analysis A, vol. 69, no. 10, pp. 3299–3309, 2008.
[26] Y. Yao, Y.-C. Liou, and S. M. Kang, "Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method," Computers & Mathematics with Applications, vol. 59, no. 11, pp. 3472–3480, 2010.
[27] L.-C. Zeng and J.-C. Yao, "Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems," Taiwanese Journal of Mathematics, vol. 10, no. 5, pp. 1293–1303, 2006.
[28] N. Nadezhkina and W. Takahashi, "Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings," SIAM Journal on Optimization, vol. 16, no. 4, pp. 1230–1241, 2006.
[29] W. Takahashi and M. Toyoda, "Weak convergence theorems for nonexpansive mappings and monotone mappings," Journal of Optimization Theory and Applications, vol. 118, no. 2, pp. 417–428, 2003.
[30] H. Iiduka and W. Takahashi, "Strong convergence theorems for nonexpansive mappings and inverse-strongly monotone mappings," Nonlinear Analysis A, vol. 61, no. 3, pp. 341–350, 2005.
[31] A. Bnouhachem, M. Aslam Noor, and Z. Hao, "Some new extragradient iterative methods for variational inequalities," Nonlinear Analysis A, vol. 70, no. 3, pp. 1321–1329, 2009.
[32] R. U. Verma, "On a new system of nonlinear variational inequalities and associated iterative algorithms," Mathematical Sciences Research Hot-Line, vol. 3, no. 8, pp. 65–68, 1999.
[33] L.-C. Ceng, Q. H. Ansari, and J.-C. Yao, "Relaxed extragradient iterative methods for variational inequalities," Applied Mathematics and Computation, vol. 218, no. 3, pp. 1112–1123, 2011.
[34] H.-K. Xu, "Iterative algorithms for nonlinear operators," Journal of the London Mathematical Society, vol. 66, no. 1, pp. 240–256, 2002.
[35] D. P. Bertsekas and E. M. Gafni, "Projection methods for variational inequalities with application to the traffic assignment problem," Mathematical Programming Study, no. 17, pp. 139–159, 1982.
[36] D. Han and H. K. Lo, "Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities," European Journal of Operational Research, vol. 159, no. 3, pp. 529–544, 2004.
[37] P. L. Combettes, "Solving monotone inclusions via compositions of nonexpansive averaged operators," Optimization, vol. 53, no. 5-6, pp. 475–504, 2004.
[38] G. Marino and H.-K. Xu, "Weak and strong convergence theorems for strict pseudo-contractions in Hilbert spaces," Journal of Mathematical Analysis and Applications, vol. 329, no. 1, pp. 336–346, 2007.
[39] T. Suzuki, "Strong convergence of Krasnoselskii and Mann's type sequences for one-parameter nonexpansive semigroups without Bochner integrals," Journal of Mathematical Analysis and Applications, vol. 305, no. 1, pp. 227–239, 2005.
[40] R. T. Rockafellar, "On the maximality of sums of nonlinear monotone operators," Transactions of the American Mathematical Society, vol. 149, pp. 75–88, 1970.