Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 762165, 6 pages, http://dx.doi.org/10.1155/2013/762165

Research Article
Methods for Solving Generalized Nash Equilibrium

Biao Qu and Jing Zhao
School of Management, Qufu Normal University, Rizhao, Shandong 276826, China
Correspondence should be addressed to Biao Qu; qubiao001@163.com
Received 12 October 2012; Revised 5 January 2013; Accepted 17 January 2013
Academic Editor: Ya Ping Fang
Copyright © 2013 B. Qu and J. Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The generalized Nash equilibrium problem (GNEP) is an extension of the standard Nash equilibrium problem (NEP), in which each player's strategy set may depend on the rival players' strategies. In this paper, we present two descent-type methods. The algorithms are based on a reformulation of the GNEP, via the Nikaido-Isoda function, as an unconstrained optimization problem. We prove that our algorithms are globally convergent, and the convergence analysis does not rely on conditions guaranteeing that every stationary point of the optimization problem is a solution of the GNEP.

1. Introduction

The generalized Nash equilibrium problem (GNEP for short) is an extension of the standard Nash equilibrium problem (NEP for short), in which the strategy set of each player depends on the strategies of all the other players as well as on his own strategy. The GNEP has recently attracted much attention due to its applications in various fields such as mathematics, computer science, economics, and engineering [1-11]. For more details, we refer the reader to a recent survey paper by Facchinei and Kanzow [3] and the references therein.

Let us first recall the definition of the GNEP. There are $N$ players labelled by an integer $\nu = 1, \ldots, N$.
Each player $\nu$ controls the variables $x^\nu \in \mathbb{R}^{n_\nu}$. Let $x = (x^1, \ldots, x^N)^T$ be the vector formed by all these decision variables, where $n := n_1 + n_2 + \cdots + n_N$. To emphasize the $\nu$th player's variables within the vector $x$, we sometimes write $x = (x^\nu, x^{-\nu})^T \in \mathbb{R}^n$, where $x^{-\nu}$ denotes all the other players' variables. In the game, each player $\nu$ controls the variables $x^\nu$ and tries to minimize a cost function $\theta_\nu(x^\nu, x^{-\nu})$ subject to the constraint $(x^\nu, x^{-\nu})^T \in X$ with $x^{-\nu}$ given exogenously, where $X$ is a common strategy set. A vector $x^* := (x^{*,1}, \ldots, x^{*,N})^T$ is called a solution of the GNEP, or a generalized Nash equilibrium, if for each player $\nu = 1, \ldots, N$, $x^{*,\nu}$ solves the following optimization problem with $x^{*,-\nu}$ fixed:

\[
\min_{x^\nu} \ \theta_\nu(x^\nu, x^{*,-\nu}) \quad \text{s.t.} \quad (x^\nu, x^{*,-\nu}) \in X. \tag{1}
\]

If $X$ is the Cartesian product of certain sets $X_\nu \subseteq \mathbb{R}^{n_\nu}$, that is, $X = X_1 \times X_2 \times \cdots \times X_N$, then the GNEP reduces to the standard Nash equilibrium problem.

Throughout this paper, we make the following assumption.

Assumption 1. (a) The set $X$ is nonempty, closed, and convex. (b) Each cost function $\theta_\nu$ is continuously differentiable and, as a function of $x^\nu$ alone, convex.

A basic tool for both the theoretical and the numerical solution of (generalized) Nash equilibrium problems is the Nikaido-Isoda function defined as

\[
\Psi(x, y) = \sum_{\nu=1}^{N} \left[ \theta_\nu(x^\nu, x^{-\nu}) - \theta_\nu(y^\nu, x^{-\nu}) \right]. \tag{2}
\]

The name Ky Fan function is also used in the literature; see [12, 13]. In the following, we state a definition taken from [9].

Definition 1. A point $x^*$ is a normalized Nash equilibrium of the GNEP if $\max_{y \in X} \Psi(x^*, y) = 0$ holds, where $\Psi(x, y)$ is the Nikaido-Isoda function defined in (2).

In order to overcome the nondifferentiability of the value function associated with the mapping $\Psi(x, y)$, von Heusinger and Kanzow [8] used a simple regularization of the Nikaido-Isoda function.
For a parameter $\alpha > 0$, the following regularized Nikaido-Isoda function was considered:

\[
\Psi_\alpha(x, y) = \sum_{\nu=1}^{N} \left[ \theta_\nu(x^\nu, x^{-\nu}) - \theta_\nu(y^\nu, x^{-\nu}) \right] - \frac{\alpha}{2}\|x - y\|^2. \tag{3}
\]

Since, under Assumption 1, $\Psi_\alpha(x, y)$ is strongly concave in $y$, the maximization problem

\[
\max_y \ \Psi_\alpha(x, y) \quad \text{s.t.} \quad y \in X \tag{4}
\]

has a unique solution for each $x$, denoted by $y_\alpha(x)$. The corresponding value function is then defined by

\[
V_\alpha(x) = \max_{y \in X} \Psi_\alpha(x, y) = \Psi_\alpha(x, y_\alpha(x)). \tag{5}
\]

Let $\beta > \alpha > 0$ be a given parameter. The corresponding value function is defined analogously by

\[
V_\beta(x) = \max_{y \in X} \Psi_\beta(x, y) = \Psi_\beta(x, y_\beta(x)). \tag{6}
\]

Define

\[
V_{\alpha\beta}(x) = V_\alpha(x) - V_\beta(x). \tag{7}
\]

In [8], the following important properties of the function $V_{\alpha\beta}(x)$ have been proved.

Theorem 2. The following statements hold:
(a) $V_{\alpha\beta}(x) \ge 0$ for any $x \in \mathbb{R}^n$;
(b) $x^*$ is a normalized Nash equilibrium of the GNEP if and only if $V_{\alpha\beta}(x^*) = 0$;
(c) $V_{\alpha\beta}(x)$ is continuously differentiable on $\mathbb{R}^n$, with

\[
\begin{aligned}
\nabla V_{\alpha\beta}(x) &= \nabla V_\alpha(x) - \nabla V_\beta(x)\\
&= \sum_{\nu=1}^{N}\left[\nabla\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right]
+ \begin{pmatrix}
\nabla_{x^1}\theta_1(y_\alpha(x)^1, x^{-1}) - \nabla_{x^1}\theta_1(y_\beta(x)^1, x^{-1})\\
\vdots\\
\nabla_{x^N}\theta_N(y_\alpha(x)^N, x^{-N}) - \nabla_{x^N}\theta_N(y_\beta(x)^N, x^{-N})
\end{pmatrix}\\
&\quad - \alpha\left(x - y_\alpha(x)\right) + \beta\left(x - y_\beta(x)\right). \tag{8}
\end{aligned}
\]

From Theorem 2, we know that the normalized Nash equilibria of the GNEP are precisely the global minima of the smooth unconstrained optimization problem (see [5])

\[
\min_{x \in \mathbb{R}^n} V_{\alpha\beta}(x) \tag{9}
\]

with zero optimal value.

In this paper, we develop two new descent methods for finding a normalized Nash equilibrium of the GNEP by solving the optimization problem (9). The key to our methods is a strategy for adjusting $\alpha$ and $\beta$ when a stationary point of $V_{\alpha\beta}(x)$ is not a solution of the GNEP. We will show that our algorithms are globally convergent to a normalized Nash equilibrium under an appropriate assumption on the cost functions, which is not stronger than the one considered in [8].

The organization of the paper is as follows. In Section 2, we state the main assumption underlying our algorithms and present some examples of GNEPs satisfying it. In Section 3, we derive some useful properties of the function $V_{\alpha\beta}(x)$. In Section 4, we formally state our algorithms and prove that they are both globally convergent to a normalized Nash equilibrium.

2. Main Assumption

In order to construct our algorithms and guarantee their convergence, we make the following assumption.

Assumption 2. For any $\beta > \alpha > 0$ and $x \in \mathbb{R}^n$, if $y_\alpha(x) \neq y_\beta(x)$, we have

\[
\sum_{\nu=1}^{N}\left(\nabla\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x) - y_\alpha(x)\right)
\ \ge\ \sum_{\nu=1}^{N}\left(\nabla_{x^\nu}\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla_{x^\nu}\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right). \tag{10}
\]

We next consider three examples which satisfy Assumption 2.

Example 3. Consider the case in which all the cost functions are separable, that is,

\[
\theta_\nu(x) = f_\nu(x^\nu) + g_\nu(x^{-\nu}), \tag{11}
\]

where $f_\nu : \mathbb{R}^{n_\nu} \to \mathbb{R}$ is convex and $g_\nu : \mathbb{R}^{n - n_\nu} \to \mathbb{R}$. A simple calculation shows that, for any $x \in \mathbb{R}^n$, we have

\[
\begin{aligned}
\sum_{\nu=1}^{N}\left(\nabla\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x) - y_\alpha(x)\right)
&= \sum_{\nu=1}^{N}\left(\nabla f_\nu(y_\beta(x)^\nu) - \nabla f_\nu(y_\alpha(x)^\nu)\right)^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right),\\
\sum_{\nu=1}^{N}\left(\nabla_{x^\nu}\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla_{x^\nu}\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right)
&= \sum_{\nu=1}^{N}\left(\nabla f_\nu(y_\beta(x)^\nu) - \nabla f_\nu(y_\alpha(x)^\nu)\right)^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right). \tag{12}
\end{aligned}
\]

Hence Assumption 2 holds (with equality).

Example 4. Consider the case where the cost function $\theta_\nu(x)$ is quadratic, that is,

\[
\theta_\nu(x) = \frac{1}{2}(x^\nu)^T A_{\nu\nu} x^\nu + \sum_{\mu=1,\ \mu\neq\nu}^{N} (x^\nu)^T A_{\nu\mu} x^\mu \tag{13}
\]

for $\nu = 1, \ldots, N$. We have

\[
\begin{aligned}
\sum_{\nu=1}^{N}\left(\nabla_{x^\nu}\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla_{x^\nu}\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right)
&= \sum_{\nu=1}^{N}\left\langle y_\beta(x)^\nu - y_\alpha(x)^\nu,\ A_{\nu\nu}\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right)\right\rangle,\\
\sum_{\nu=1}^{N}\left(\nabla\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x) - y_\alpha(x)\right)
&= \left\langle y_\beta(x) - y_\alpha(x),\
\begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1N}\\
A_{21} & A_{22} & \cdots & A_{2N}\\
\vdots & \vdots & \ddots & \vdots\\
A_{N1} & A_{N2} & \cdots & A_{NN}
\end{pmatrix}
\left(y_\beta(x) - y_\alpha(x)\right)\right\rangle. \tag{14}
\end{aligned}
\]

Therefore, if the matrix

\[
\begin{pmatrix}
0 & A_{12} & \cdots & A_{1N}\\
A_{21} & 0 & \cdots & A_{2N}\\
\vdots & \vdots & \ddots & \vdots\\
A_{N1} & A_{N2} & \cdots & 0
\end{pmatrix} \tag{15}
\]

is positive semidefinite, Assumption 2 is satisfied.

In the following example, we show the relationship between our assumption and the one considered in [8]. There, for any $\beta > \alpha > 0$ and a given $x \in \mathbb{R}^n$ with $y_\alpha(x) \neq y_\beta(x)$, the inequality

\[
\sum_{\nu=1}^{N}\left(\nabla_{x^\nu}\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla_{x^\nu}\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right) > 0 \tag{16}
\]

is required to hold.

Example 5. Consider the GNEP with $N = 2$, $X = \{x \in \mathbb{R}^2 : x_1 \ge 1,\ x_2 \ge 1,\ x_1 + x_2 \le 10\}$, and the cost functions $\theta_1(x) = x_1 x_2$ and $\theta_2(x) = -x_1 x_2$.
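For this game, the Nikaido-Isoda function (2) is linear in $y$, so its inner maximization over the polytope $X$ is attained at one of the three vertices of $X$. The following minimal sketch (our own illustration, not part of the paper; the helper name `is_normalized_equilibrium` is hypothetical) uses this to test whether a candidate point is a normalized equilibrium in the sense of Definition 1:

```python
# For theta1 = x1*x2 and theta2 = -x1*x2, the Nikaido-Isoda function is
#   Psi(x, y) = x2*(x1 - y1) + x1*(y2 - x2),
# which is linear in y, so max_{y in X} Psi(x, y) over the triangle
# X = {x : x1 >= 1, x2 >= 1, x1 + x2 <= 10} is attained at a vertex.
VERTICES = [(1.0, 1.0), (1.0, 9.0), (9.0, 1.0)]

def psi(x, y):
    """Nikaido-Isoda function of the game in Example 5."""
    return x[1] * (x[0] - y[0]) + x[0] * (y[1] - x[1])

def is_normalized_equilibrium(x):
    # x is a normalized equilibrium iff max_{y in X} Psi(x, y) = 0 (Definition 1);
    # for x in X the maximum is always >= Psi(x, x) = 0.
    return max(psi(x, v) for v in VERTICES) == 0.0

print(is_normalized_equilibrium((1.0, 9.0)))  # True
print(is_normalized_equilibrium((2.0, 2.0)))  # False
```

The vertex enumeration replaces a general-purpose linear-programming solver, which would be needed if $X$ had more facets.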
The point $x^* = (1, 9)^T$ is the unique normalized Nash equilibrium. For any $x \in \mathbb{R}^2$, we have

\[
\sum_{\nu=1}^{2}\left(\nabla\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x) - y_\alpha(x)\right) = 0, \tag{17}
\]
\[
\sum_{\nu=1}^{2}\left(\nabla_{x^\nu}\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla_{x^\nu}\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right)^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right) = 0. \tag{18}
\]

Therefore Assumption 2 holds, but (16) does not hold for any $\beta > \alpha > 0$.

3. Properties of $V_{\alpha\beta}(x)$

Lemma 6. For any $\beta > \alpha > 0$ and $x \in \mathbb{R}^n$, we have

\[
V_{\alpha\beta}(x) \ge \frac{\beta-\alpha}{2}\left\|x - y_\beta(x)\right\|^2 + \frac{\alpha}{2}\left\|y_\alpha(x) - y_\beta(x)\right\|^2, \tag{19}
\]
\[
V_{\alpha\beta}(x) \le \frac{\beta-\alpha}{2}\left\|x - y_\alpha(x)\right\|^2 - \frac{\beta}{2}\left\|y_\alpha(x) - y_\beta(x)\right\|^2. \tag{20}
\]

Proof.
Since $y_\alpha(x)$ satisfies the optimality condition for problem (4), evaluated at the feasible point $y_\beta(x) \in X$, we have

\[
\sum_{\nu=1}^{N}\left[\nabla_{x^\nu}\theta_\nu(y_\alpha(x)^\nu, x^{-\nu}) - \alpha\left(x^\nu - y_\alpha(x)^\nu\right)\right]^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right) \ge 0. \tag{21}
\]

In a similar way, it follows that $y_\beta(x)$ satisfies

\[
\sum_{\nu=1}^{N}\left[\nabla_{x^\nu}\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \beta\left(x^\nu - y_\beta(x)^\nu\right)\right]^T\left(y_\alpha(x)^\nu - y_\beta(x)^\nu\right) \ge 0. \tag{22}
\]

Since $\theta_\nu(x)$, as a function of $x^\nu$ alone, is convex, combining the gradient inequalities with (21) and (22) gives

\[
\sum_{\nu=1}^{N}\left[\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right] - \alpha\left(x - y_\alpha(x)\right)^T\left(y_\beta(x) - y_\alpha(x)\right) \ge 0, \tag{23}
\]
\[
\sum_{\nu=1}^{N}\left[\theta_\nu(y_\alpha(x)^\nu, x^{-\nu}) - \theta_\nu(y_\beta(x)^\nu, x^{-\nu})\right] - \beta\left(x - y_\beta(x)\right)^T\left(y_\alpha(x) - y_\beta(x)\right) \ge 0, \tag{24}
\]

respectively. Thus, using the definition of $V_{\alpha\beta}(x)$ and (23), we have

\[
\begin{aligned}
V_{\alpha\beta}(x) &= \sum_{\nu=1}^{N}\left[\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right] + \frac{\beta}{2}\left\|x - y_\beta(x)\right\|^2 - \frac{\alpha}{2}\left\|x - y_\alpha(x)\right\|^2\\
&\ge \alpha\left(x - y_\alpha(x)\right)^T\left(y_\beta(x) - y_\alpha(x)\right) + \frac{\beta}{2}\left\|x - y_\beta(x)\right\|^2 - \frac{\alpha}{2}\left\|x - y_\beta(x) + y_\beta(x) - y_\alpha(x)\right\|^2\\
&= \frac{\beta-\alpha}{2}\left\|x - y_\beta(x)\right\|^2 + \frac{\alpha}{2}\left\|y_\alpha(x) - y_\beta(x)\right\|^2, \tag{25}
\end{aligned}
\]

which is (19). Similarly, using the definition of $V_{\alpha\beta}(x)$ and (24), we have

\[
V_{\alpha\beta}(x) \le \frac{\beta-\alpha}{2}\left\|x - y_\alpha(x)\right\|^2 - \frac{\beta}{2}\left\|y_\alpha(x) - y_\beta(x)\right\|^2, \tag{26}
\]

which is (20). The proof is complete.

Lemma 7. Assume $X$ is bounded.
For any $\beta > \alpha > 0$ and $x \in \mathbb{R}^n$, we have

\[
\limsup_{\beta' \to \infty,\ \alpha' \to 0} \frac{V_{\alpha'\beta'}(x)}{\beta' - \alpha'} \le \frac{V_{\alpha\beta}(x)}{\beta - \alpha}. \tag{27}
\]

Proof. We have from (19) that

\[
\frac{2V_{\alpha\beta}(x)}{\beta - \alpha} \ge \left\|x - y_\beta(x)\right\|^2 + \frac{\alpha}{\beta - \alpha}\left\|y_\alpha(x) - y_\beta(x)\right\|^2 \ge \left\|x - y_\beta(x)\right\|^2. \tag{28}
\]

By the definition of $V_{\alpha\beta}(x)$, we have

\[
\frac{2V_{\alpha'\beta'}(x)}{\beta' - \alpha'} = \frac{2\sum_{\nu=1}^{N}\left[\theta_\nu(y_{\beta'}(x)^\nu, x^{-\nu}) - \theta_\nu(y_{\alpha'}(x)^\nu, x^{-\nu})\right]}{\beta' - \alpha'} + \frac{\beta'\left\|x - y_{\beta'}(x)\right\|^2 - \alpha'\left\|x - y_{\alpha'}(x)\right\|^2}{\beta' - \alpha'}. \tag{29}
\]

Since $y_{\alpha'}(x) \in X$, $y_{\beta'}(x) \in X$, and $X$ is bounded, the first term in (29) vanishes as $\beta' \to \infty$ and $\alpha' \to 0$; using also that $\|x - y_{\beta'}(x)\|$ is nonincreasing in $\beta'$, we get

\[
\limsup_{\beta' \to \infty,\ \alpha' \to 0} \frac{2V_{\alpha'\beta'}(x)}{\beta' - \alpha'} \le \left\|x - y_\beta(x)\right\|^2 \le \frac{2V_{\alpha\beta}(x)}{\beta - \alpha}. \tag{30}
\]

This completes the proof.

Equation (8) and Assumption 2 yield

\[
\begin{aligned}
\nabla V_{\alpha\beta}(x)^T\left(y_\beta(x) - y_\alpha(x)\right)
&= \sum_{\nu=1}^{N}\left[\nabla\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right]^T\left(y_\beta(x) - y_\alpha(x)\right)\\
&\quad - \sum_{\nu=1}^{N}\left[\nabla_{x^\nu}\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \nabla_{x^\nu}\theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right]^T\left(y_\beta(x)^\nu - y_\alpha(x)^\nu\right)\\
&\quad - \left[\alpha\left(x - y_\alpha(x)\right) - \beta\left(x - y_\beta(x)\right)\right]^T\left(y_\beta(x) - y_\alpha(x)\right)\\
&\ge -\left[\alpha\left(x - y_\alpha(x)\right) - \beta\left(x - y_\beta(x)\right)\right]^T\left(y_\beta(x) - y_\alpha(x)\right) =: p_{\alpha\beta}(x) \ge 0, \tag{31}
\end{aligned}
\]

where the nonnegativity of $p_{\alpha\beta}(x)$ follows from inequalities (23) and (24), whose sum equals $p_{\alpha\beta}(x)$. In particular, either $p_{\alpha\beta}(x)$ is above a tolerance $\varepsilon > 0$, in which case $y_\alpha(x) - y_\beta(x)$ is a direction of sufficient descent for $V_{\alpha\beta}(x)$ at $x$, or else, as we show in the lemma below, $x$ is an approximate solution of the GNEP with accuracy depending on $\varepsilon$, $\alpha$, and $\beta$. This result will lead to our methods.

Lemma 8. For any $\beta > \alpha > 0$ and $x \in \mathbb{R}^n$, we have

\[
\left\|x - y_\beta(x)\right\| \le \sqrt{\frac{2V_{\alpha\beta}(x)}{\beta - \alpha}}, \tag{32}
\]
\[
\gamma_{\alpha\beta}(x) \le V_\alpha(x) \le \gamma_{\alpha\beta}(x) + p_{\alpha\beta}(x) + \frac{\alpha}{2}\left\|y_\alpha(x) - y_\beta(x)\right\|^2, \tag{33}
\]

where $\gamma_{\alpha\beta}(x) = \sum_{\nu=1}^{N}\left[\theta_\nu(x^\nu, x^{-\nu}) - \theta_\nu(y_\beta(x)^\nu, x^{-\nu})\right] - (\alpha/2)\left\|x - y_\beta(x)\right\|^2$.

Proof. Inequality (32) follows immediately from (19) in Lemma 6. The definition of $V_\alpha(x)$ implies that

\[
V_\alpha(x) \ge \sum_{\nu=1}^{N}\left[\theta_\nu(x^\nu, x^{-\nu}) - \theta_\nu(y_\beta(x)^\nu, x^{-\nu})\right] - \frac{\alpha}{2}\left\|x - y_\beta(x)\right\|^2 = \gamma_{\alpha\beta}(x), \tag{34}
\]

which proves the first inequality in (33).
Since $p_{\alpha\beta}(x)$ is the sum of the nonnegative quantity $\sum_{\nu=1}^{N}\left[\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right] - \alpha\left(x - y_\alpha(x)\right)^T\left(y_\beta(x) - y_\alpha(x)\right)$ and another nonnegative quantity (see (23) and (24)), we have

\[
p_{\alpha\beta}(x) \ge \sum_{\nu=1}^{N}\left[\theta_\nu(y_\beta(x)^\nu, x^{-\nu}) - \theta_\nu(y_\alpha(x)^\nu, x^{-\nu})\right] - \alpha\left(x - y_\alpha(x)\right)^T\left(y_\beta(x) - y_\alpha(x)\right). \tag{35}
\]

Thus,

\[
V_\alpha(x) \le \gamma_{\alpha\beta}(x) + p_{\alpha\beta}(x) + \frac{\alpha}{2}\left\|y_\alpha(x) - y_\beta(x)\right\|^2, \tag{36}
\]

which is the second inequality in (33). This completes the proof.

4. Two Methods for Solving the GNEP

In this section, we introduce two methods for solving the GNEP, motivated by the D-gap function scheme for solving monotone variational inequalities [14, 15]. We first formally describe our methods and then analyze their convergence using Lemma 8.

Algorithm 9. Choose an arbitrary initial point $x^0 \in \mathbb{R}^n$ and any $\beta_0 > \alpha_0 > 0$. Choose sequences of numbers $\varepsilon_k > 0$, $\eta_k \ge 0$, $\zeta_k \in [0, 1)$, $k = 1, 2, \ldots$, such that

\[
\lim_{k \to \infty}\varepsilon_k = 0, \qquad \lim_{k \to \infty}\frac{\eta_k}{1 - \zeta_k} = 0, \qquad \sum_{k=1}^{\infty}(1 - \zeta_k) = \infty. \tag{37}
\]

For $k = 1, 2, \ldots$, we iterate the following.

Iteration k. Choose any $0 < \alpha_k \le (1/2)\alpha_{k-1}$ and any $\beta_k \ge 2\beta_{k-1}$, and choose a point $\tilde{x}^k \in \mathbb{R}^n$ satisfying

\[
\frac{V_{\alpha_k\beta_k}(\tilde{x}^k)}{\beta_k - \alpha_k} \le \eta_k + \zeta_k \frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1} - \alpha_{k-1}}. \tag{38}
\]

Apply a descent method to the unconstrained minimization of the function $V_{\alpha_k\beta_k}$, with $\tilde{x}^k$ as the starting point and using $y_{\alpha_k}(x) - y_{\beta_k}(x)$ as a safeguard descent direction at $x$, until the method generates an $x \in \mathbb{R}^n$ satisfying $p_{\alpha_k\beta_k}(x) \le \varepsilon_k$. The resulting $x$ is denoted by $x^k$.

Theorem 10. Assume $X$ is bounded. Let $\{x^k, \alpha_k, \beta_k, \varepsilon_k, \eta_k, \zeta_k\}_{k=0,1,2,\ldots}$ be generated by Algorithm 9. Then $\{x^k\}$ is bounded; $\beta_k \to \infty$; $\alpha_k \to 0$; and every cluster point of $\{x^k\}$ is a normalized Nash equilibrium of the GNEP.

Proof. Denote $r_k = V_{\alpha_k\beta_k}(x^k)/(\beta_k - \alpha_k)$. Since the descent method does not increase $V_{\alpha_k\beta_k}$, (38) gives $r_k \le \eta_k + \zeta_k r_{k-1}$ for $k = 1, 2, \ldots$,
and it follows from (37) that $r_k \to 0$ ([16], Lemma 3). For each $k \in \{1, 2, \ldots\}$, we have from Lemma 8 that (32) and (33) hold with $\alpha = \alpha_k$, $\beta = \beta_k$, $x = x^k$. This, together with $p_{\alpha_k\beta_k}(x^k) \le \varepsilon_k$ and $\|y_{\alpha_k}(x^k) - y_{\beta_k}(x^k)\| \le \operatorname{diam}(X)$, yields

\[
\left\|x^k - y_{\beta_k}(x^k)\right\| \le \sqrt{2r_k}, \qquad \gamma^k \le V_{\alpha_k}(x^k) \le \gamma^k + \varepsilon_k + \frac{\alpha_k \operatorname{diam}(X)^2}{2}, \tag{39}
\]

where $\gamma^k = \sum_{\nu=1}^{N}\left[\theta_\nu(x^{k,\nu}, x^{k,-\nu}) - \theta_\nu(y_{\beta_k}(x^k)^\nu, x^{k,-\nu})\right] - (\alpha_k/2)\|x^k - y_{\beta_k}(x^k)\|^2$ and $\operatorname{diam}(X) = \max_{x, y \in X}\|x - y\|$. Since $r_k \to 0$, the first inequality in (39) implies that $\{x^k\}$ is bounded. Moreover, it also implies $\gamma^k \to 0$. Since $\alpha_k \to 0$, the last two inequalities in (39) yield $V_{\alpha_k}(x^k) \to 0$. Since, for each $y \in X$, we have $V_{\alpha_k}(x^k) \ge \Psi(x^k, y) - (\alpha_k/2)\|x^k - y\|^2$, this yields $0 \ge \Psi(x^\infty, y)$ for each cluster point $x^\infty$ of $\{x^k\}$. Thus, each cluster point $x^\infty$ is a normalized Nash equilibrium of the GNEP. This completes the proof.

Algorithm 11. Choose any $x^0 \in \mathbb{R}^n$, any $\beta_0 > \alpha_0 > 0$, and two sequences of nonnegative numbers $\eta_k$, $\varepsilon_k$, $k = 1, 2, \ldots$, such that

\[
\eta_k + \varepsilon_k > 0 \ \ \forall k, \qquad \sum_{k=1}^{\infty}\eta_k < \infty, \qquad \sum_{k=1}^{\infty}\varepsilon_k < \infty. \tag{40}
\]

Choose any continuous function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ with $\varphi(t) = 0 \Leftrightarrow t = 0$. For $k = 1, 2, \ldots$, we iterate the following.

Iteration k. Choose any $0 < \alpha_k \le (1/2)\alpha_{k-1}$ and then choose $\beta_k \ge 2\beta_{k-1}$ satisfying

\[
\frac{V_{\alpha_k\beta_k}(x^{k-1})}{\beta_k - \alpha_k} \le (1 + \eta_k)\frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1} - \alpha_{k-1}} + \varepsilon_k. \tag{41}
\]

Apply a descent method to the unconstrained minimization of the function $V_{\alpha_k\beta_k}$, with $x^{k-1}$ as the starting point. We assume the descent method has the property that the amount of descent achieved at $x$ per step is bounded away from zero whenever $x$ is bounded and $\|\nabla V_{\alpha_k\beta_k}(x)\|$ is bounded away from zero.
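An inner descent loop of the kind both algorithms rely on can be sketched as follows. This is our own illustration, not the paper's implementation: we assume a hypothetical separable game in the sense of Example 3, $\theta_\nu(x) = (x_\nu - c_\nu)^2 + g_\nu(x_{-\nu})$ on the box $X = [0, 2]^2$, for which $y_\alpha(x)$ has a closed form and, by Danskin's theorem, $\nabla V_{\alpha\beta}(x) = \beta(x - y_\beta(x)) - \alpha(x - y_\alpha(x))$.

```python
import numpy as np

# Toy separable game (an assumption for illustration): theta_nu(x) =
# (x_nu - c_nu)^2 + g_nu(x_{-nu}) on X = [0, 2]^2.  The g_nu terms cancel in
# the Nikaido-Isoda function, so y_alpha(x) decouples per coordinate.
C = np.array([1.0, 0.5])           # per-player targets; the equilibrium is x* = C

def y_reg(x, a):
    """Closed-form maximizer y_a(x) of the regularized Nikaido-Isoda function,
    i.e. argmin_y sum_nu (y_nu - c_nu)^2 + (a/2)||y - x||^2 over the box."""
    return np.clip((2.0 * C + a * x) / (2.0 + a), 0.0, 2.0)

def V(x, a):
    """Regularized value function V_a(x) = Psi_a(x, y_a(x))."""
    y = y_reg(x, a)
    return np.sum((x - C) ** 2 - (y - C) ** 2) - 0.5 * a * np.dot(x - y, x - y)

def descend(x, alpha, beta, tol=1e-10, max_iter=500):
    """Armijo gradient descent on V_{alpha beta}, preferring the safeguard
    direction y_alpha(x) - y_beta(x) whenever it is a descent direction."""
    for _ in range(max_iter):
        gap = V(x, alpha) - V(x, beta)          # V_{alpha beta}(x) >= 0
        if gap <= tol:
            break
        ya, yb = y_reg(x, alpha), y_reg(x, beta)
        g = beta * (x - yb) - alpha * (x - ya)  # grad V_{alpha beta}(x), by Danskin
        d = ya - yb                             # safeguard direction, cf. (31)
        if g @ d > -1e-12 * np.linalg.norm(g) * np.linalg.norm(d):
            d = -g                              # fall back to steepest descent
        t = 1.0                                 # Armijo backtracking line search
        while V(x + t * d, alpha) - V(x + t * d, beta) > gap + 0.1 * t * (g @ d):
            t *= 0.5
        x = x + t * d
    return x

x = descend(np.array([2.0, 2.0]), 0.1, 10.0)
print(x)  # approaches the normalized equilibrium (1, 0.5)
```

For this toy game the safeguard direction points straight at the equilibrium, so the loop contracts quickly; the fallback to $-\nabla V_{\alpha\beta}(x)$ covers points where the safeguard direction fails the descent test.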
Then, either the method generates, in a finite number of steps, an $x$ satisfying

\[
\left\|\nabla V_{\alpha_k\beta_k}(x)\right\| \le \varphi\!\left(\frac{V_{\alpha_k\beta_k}(x)}{\beta_k - \alpha_k}\right), \tag{42}
\]

which we denote by $x^k$, or else $V_{\alpha_k\beta_k}(x)$ must decrease towards zero, in which case any cluster point of the generated sequence solves the GNEP.

Theorem 12. Assume $X$ is bounded. Let $\{x^k, \alpha_k, \beta_k, \eta_k, \varepsilon_k\}_{k=0,1,2,\ldots}$ be generated by Algorithm 11.
(a) Suppose $x^k$ is obtained for all $k$. Then $\{x^k\}$ is bounded; $\beta_k \to \infty$; $\alpha_k \to 0$; and every cluster point of $\{x^k\}$ is a normalized Nash equilibrium of the GNEP.
(b) Suppose $x^k$ is not obtained for some $k$. Then the descent method generates a bounded sequence of points $x$ with $V_{\alpha_k\beta_k}(x) \to 0$, so every cluster point of this sequence solves the GNEP.

Proof. (a) Since we use a descent method at iteration $k$ to obtain $x^k$ from $x^{k-1}$, we have $V_{\alpha_k\beta_k}(x^k) \le V_{\alpha_k\beta_k}(x^{k-1})$, so (41) yields

\[
\frac{V_{\alpha_k\beta_k}(x^k)}{\beta_k - \alpha_k} \le (1 + \eta_k)\frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1} - \alpha_{k-1}} + \varepsilon_k. \tag{43}
\]

Denote $r_k = V_{\alpha_k\beta_k}(x^k)/(\beta_k - \alpha_k)$. Then (43) can be written as $r_k \le (1 + \eta_k)r_{k-1} + \varepsilon_k$ for $k = 1, 2, \ldots$. Using $r_k \ge 0$ and (40), it follows that the sequence $\{r_k\}$ converges to some $r \ge 0$ ([16], Lemma 2). Since (32) implies

\[
\left\|x^k - y_{\beta_k}(x^k)\right\| \le \sqrt{2r_k} \quad \forall k, \tag{44}
\]

the sequence $\{x^k\}$ is bounded.

We claim that $r = 0$. Suppose the contrary. Then, for all $k$ sufficiently large, $r_k \ge r/2$, and hence

\[
\frac{r}{2} \le r_k = \frac{\sum_{\nu=1}^{N}\left[\theta_\nu(y_{\beta_k}(x^k)^\nu, x^{k,-\nu}) - \theta_\nu(y_{\alpha_k}(x^k)^\nu, x^{k,-\nu})\right]}{\beta_k - \alpha_k} + \frac{-(\alpha_k/2)\left\|x^k - y_{\alpha_k}(x^k)\right\|^2 + (\beta_k/2)\left\|x^k - y_{\beta_k}(x^k)\right\|^2}{\beta_k - \alpha_k}. \tag{45}
\]

Since, by the construction of the algorithm, $\beta_k \to \infty$ and $\alpha_k \to 0$, and $\{x^k\}$ is bounded (as are $y_{\alpha_k}(x^k)$ and $y_{\beta_k}(x^k)$), we get

\[
0 < \frac{r}{2} \le \liminf_{k \to \infty} \frac{1}{2}\left\|x^k - y_{\beta_k}(x^k)\right\|^2. \tag{46}
\]

Then $\lim_{k \to \infty} \beta_k\left\|x^k - y_{\beta_k}(x^k)\right\| = \infty$, and therefore, in view of (8),

\[
\lim_{k \to \infty}\left\|\nabla V_{\alpha_k\beta_k}(x^k)\right\| = \infty. \tag{47}
\]
Since $x^k$ satisfies (42), we have $\|\nabla V_{\alpha_k\beta_k}(x^k)\| \le \varphi(r_k)$ for all $k$; since $\varphi$ is continuous and $r_k \to r$, the sequence $\{\varphi(r_k)\}$ converges, which contradicts (47). Hence $r = 0$.

For each $k \in \{1, 2, \ldots\}$, we have from Lemma 8 that (33) holds with $\alpha = \alpha_k$, $\beta = \beta_k$, $x = x^k$, and from (31) and the fact $V_{\alpha\beta}(x) \ge 0$ that

\[
p_{\alpha_k\beta_k}(x^k) \le t_k := \nabla V_{\alpha_k\beta_k}(x^k)^T\left(y_{\beta_k}(x^k) - y_{\alpha_k}(x^k)\right). \tag{48}
\]

This, together with $\|y_{\alpha_k}(x^k) - y_{\beta_k}(x^k)\| \le \operatorname{diam}(X)$, yields

\[
\gamma^k \le V_{\alpha_k}(x^k) \le \gamma^k + t_k + \frac{\alpha_k \operatorname{diam}(X)^2}{2}, \tag{49}
\]

where $\gamma^k = \sum_{\nu=1}^{N}\left[\theta_\nu(x^{k,\nu}, x^{k,-\nu}) - \theta_\nu(y_{\beta_k}(x^k)^\nu, x^{k,-\nu})\right] - (\alpha_k/2)\|x^k - y_{\beta_k}(x^k)\|^2$ and $\operatorname{diam}(X) = \max_{x, y \in X}\|x - y\|$. Since $r_k \to 0$, (44) implies that $\{x^k\}$ is bounded and, moreover, that $\gamma^k \to 0$. Also, we have $\|\nabla V_{\alpha_k\beta_k}(x^k)\| \le \varphi(r_k) \to 0$, so $t_k \to 0$. From $\alpha_k \to 0$, (49), $\gamma^k \to 0$, and $t_k \to 0$, we get $V_{\alpha_k}(x^k) \to 0$. Since, for each $y \in X$, we have from the definition of $V_\alpha(x)$ that

\[
V_{\alpha_k}(x^k) \ge \Psi(x^k, y) - \frac{\alpha_k}{2}\left\|x^k - y\right\|^2, \tag{50}
\]

this yields $0 \ge \Psi(x^\infty, y)$ for each cluster point $x^\infty$ of $\{x^k\}$. Thus, each cluster point $x^\infty$ is a normalized Nash equilibrium of the GNEP.

(b) It is easy to prove that $V_{\alpha_k\beta_k}(x) \to 0$ along the sequence generated by the descent method; hence every cluster point $x^\infty$ of this sequence is a normalized Nash equilibrium of the GNEP. The proof is complete.

Acknowledgments

This research was partly supported by the National Natural Science Foundation of China (11271226, 10971118) and the Promotive Research Fund for Excellent Young and Middle-Aged Scientists of Shandong Province (BS2010SF010).

References

[1] A. Dreves and C. Kanzow, "Nonsmooth optimization reformulations characterizing all solutions of jointly convex generalized Nash equilibrium problems," Computational Optimization and Applications, vol. 50, no. 1, pp. 23-48, 2011.
[2] A. Dreves, C. Kanzow, and O. Stein, "Nonsmooth optimization reformulations of player convex generalized Nash equilibrium problems," Journal of Global Optimization, vol. 53, no. 4, pp. 587-614, 2012.
[3] F. Facchinei and C. Kanzow, "Generalized Nash equilibrium problems," Annals of Operations Research, vol. 175, pp. 177-211, 2010.
[4] F. Facchinei, A. Fischer, and V. Piccialli, "On generalized Nash games and variational inequalities," Operations Research Letters, vol. 35, no. 2, pp. 159-164, 2007.
[5] F. Facchinei and C. Kanzow, "Generalized Nash equilibrium problems," 4OR: A Quarterly Journal of Operations Research, vol. 5, no. 3, pp. 173-210, 2007.
[6] D. Han, H. Zhang, G. Qian, and L. Xu, "An improved two-step method for solving generalized Nash equilibrium problems," European Journal of Operational Research, vol. 216, no. 3, pp. 613-623, 2012.
[7] P. T. Harker, "Generalized Nash games and quasi-variational inequalities," European Journal of Operational Research, vol. 54, no. 1, pp. 81-94, 1991.
[8] A. von Heusinger and C. Kanzow, "Optimization reformulations of the generalized Nash equilibrium problem using Nikaido-Isoda-type functions," Computational Optimization and Applications, vol. 43, no. 3, pp. 353-377, 2009.
[9] B. Panicucci, M. Pappalardo, and M. Passacantando, "On solving generalized Nash equilibrium problems via optimization," Optimization Letters, vol. 3, no. 3, pp. 419-435, 2009.
[10] B. Qu and J. G. Jiang, "On the computation of normalized Nash equilibrium for generalized Nash equilibrium problem," Journal of Convergence Information Technology, vol. 7, no. 22, pp. 16-21, 2012.
[11] J. Zhang, B. Qu, and N. Xiu, "Some projection-like methods for the generalized Nash equilibria," Computational Optimization and Applications, vol. 45, no. 1, pp. 89-109, 2010.
[12] S. D. Flåm and A. S. Antipin, "Equilibrium programming using proximal-like algorithms," Mathematical Programming, vol. 78, no. 1, pp. 29-41, 1997.
[13] S. D. Flåm and A. Ruszczyński, "Noncooperative convex games: computing equilibrium by partial regularization," Working Paper 94-42, Laxenburg, Austria, 1994.
[14] M. V. Solodov and P. Tseng, "Some methods based on the D-gap function for solving monotone variational inequalities," Computational Optimization and Applications, vol. 17, no. 2-3, pp. 255-277, 2000.
[15] B. Qu, C. Y. Wang, and J. Z. Zhang, "Convergence and error bound of a method for solving variational inequality problems via the generalized D-gap function," Journal of Optimization Theory and Applications, vol. 119, no. 3, pp. 535-552, 2003.
[16] B. T. Polyak, Introduction to Optimization, Translations Series in Mathematics and Engineering, Optimization Software, New York, NY, USA, 1987.