Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 762165, 6 pages
http://dx.doi.org/10.1155/2013/762165
Research Article
Methods for Solving Generalized Nash Equilibrium
Biao Qu and Jing Zhao
School of Management, Qufu Normal University, Rizhao, Shandong 276826, China
Correspondence should be addressed to Biao Qu; qubiao001@163.com
Received 12 October 2012; Revised 5 January 2013; Accepted 17 January 2013
Academic Editor: Ya Ping Fang
Copyright © 2013 B. Qu and J. Zhao. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The generalized Nash equilibrium problem (GNEP) is an extension of the standard Nash equilibrium problem (NEP), in which each player's strategy set may depend on the rival players' strategies. In this paper, we present two descent-type methods. The algorithms are based on a reformulation of the generalized Nash equilibrium problem, via the Nikaido-Isoda function, as an unconstrained optimization problem. We prove that our algorithms are globally convergent, and the convergence analysis is not based on conditions guaranteeing that every stationary point of the optimization problem is a solution of the GNEP.
1. Introduction
The generalized Nash equilibrium problem (GNEP for short)
is an extension of the standard Nash equilibrium problem
(NEP for short), in which the strategy set of each player
depends on the strategies of all the other players as well as
on his own strategy. The GNEP has recently attracted much
attention due to its applications in various fields like mathematics, computer science, economics, and engineering [1–11].
For more details, we refer the reader to a recent survey paper
by Facchinei and Kanzow [3] and the references therein.
Let us first recall the definition of the GNEP. There are N players, labelled by an integer v = 1, ..., N. Each player v controls the variables x^v ∈ R^{n_v}. Let x = (x^1, ..., x^N)^T be the vector formed by all these decision variables, where n := n_1 + n_2 + ... + n_N. To emphasize the v-th player's variables within the vector x, we sometimes write x = (x^v, x^{-v})^T ∈ R^n, where x^{-v} denotes all the other players' variables. In the game, each player controls the variables x^v and tries to minimize a cost function θ_v(x^v, x^{-v}) subject to the constraint (x^v, x^{-v})^T ∈ X, with x^{-v} given as exogenous, where X is a common strategy set. A vector x^* := (x^{*,1}, ..., x^{*,N})^T is called a solution of the GNEP, or a generalized Nash equilibrium, if for each player v = 1, ..., N, x^{*,v} solves the following optimization problem with x^{*,-v} fixed:
\[
\min_{x^v}\ \theta_v(x^v, x^{*,-v}) \quad \text{s.t.}\quad (x^v, x^{*,-v}) \in X. \tag{1}
\]
If X is defined as the Cartesian product of sets X_v ⊆ R^{n_v}, that is, X = X_1 × X_2 × ⋯ × X_N, then the GNEP reduces to the standard Nash equilibrium problem.

Throughout this paper, we make the following assumption.
Assumption 1. (a) The set X is nonempty, closed, and convex. (b) The utility function θ_v is continuously differentiable and, as a function of x^v alone, convex.
A basic tool for both the theoretical and the numerical solution of (generalized) Nash equilibrium problems is the Nikaido-Isoda function, defined as
\[
\Psi(x,y) = \sum_{v=1}^{N}\bigl[\theta_v(x^v, x^{-v}) - \theta_v(y^v, x^{-v})\bigr]. \tag{2}
\]
The name Ky Fan function is also sometimes found in the literature; see [12, 13]. In the following, we state a definition which we have taken from [9].
Definition 1. x^* is a normalized Nash equilibrium of the GNEP if max_{y∈X} Ψ(x^*, y) = 0 holds, where Ψ(x, y) denotes the Nikaido-Isoda function defined in (2).
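To make the definitions concrete, here is a small pure-Python sketch (an illustrative brute-force check, not part of the paper's machinery) that evaluates the Nikaido-Isoda function for the two-player game θ_1(x) = x^1x^2, θ_2(x) = −x^1x^2 on X = {x^1 ≥ 1, x^2 ≥ 1, x^1 + x^2 ≤ 10} — the game that reappears as Example 5 below — and verifies max_y Ψ(x^*, y) = 0 at x^* = (1, 9) by grid search:

```python
def psi(x, y, costs):
    """Nikaido-Isoda function: sum over players v of
    theta_v(x) - theta_v(y^v, x^{-v})."""
    total = 0.0
    for v, theta in enumerate(costs):
        x_swapped = list(x)
        x_swapped[v] = y[v]          # replace only player v's variable by y^v
        total += theta(x) - theta(x_swapped)
    return total

costs = [lambda z: z[0] * z[1],      # theta_1
         lambda z: -z[0] * z[1]]     # theta_2

# Candidate equilibrium and a grid over X = {x1 >= 1, x2 >= 1, x1 + x2 <= 10}.
x_star = (1.0, 9.0)
best = max(psi(x_star, (a / 10.0, b / 10.0), costs)
           for a in range(10, 101) for b in range(10, 101)
           if a / 10.0 + b / 10.0 <= 10.0)
print(best)  # 0.0: no player can improve, so x_star passes the test
```

Here the grid maximum 0 is attained at y = x^*; a positive maximum at any feasible y would certify that x^* is not a normalized Nash equilibrium.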
In order to overcome the nondifferentiability of the mapping Ψ(x, y), von Heusinger and Kanzow [8] used a simple regularization of the Nikaido-Isoda function. For a parameter α > 0, the following regularized Nikaido-Isoda function was considered:
\[
\Psi_\alpha(x,y) = \sum_{v=1}^{N}\bigl[\theta_v(x^v, x^{-v}) - \theta_v(y^v, x^{-v})\bigr] - \frac{\alpha}{2}\|x-y\|^2. \tag{3}
\]
Since, under Assumption 1, Ψ_α(x, y) is strongly concave in y, the maximization problem
\[
\max_{y}\ \Psi_\alpha(x,y) \quad \text{s.t.}\quad y \in X \tag{4}
\]
has a unique solution for each x, denoted by y_α(x). The corresponding value function is then defined by
\[
V_\alpha(x) = \max_{y\in X} \Psi_\alpha(x,y) = \Psi_\alpha(x, y_\alpha(x)). \tag{5}
\]
Let β > α > 0 be a given parameter. The corresponding value function is then defined by
\[
V_\beta(x) = \max_{y\in X} \Psi_\beta(x,y) = \Psi_\beta(x, y_\beta(x)). \tag{6}
\]
Define
\[
V_{\alpha\beta}(x) = V_\alpha(x) - V_\beta(x). \tag{7}
\]
In [8], the following important properties of the function V_{αβ}(x) have been proved.

Theorem 2. The following statements hold:

(a) V_{αβ}(x) ≥ 0 for any x ∈ R^n;

(b) x^* is a normalized Nash equilibrium of the GNEP if and only if V_{αβ}(x^*) = 0;

(c) V_{αβ}(x) is continuously differentiable on R^n, and
\[
\begin{aligned}
\nabla V_{\alpha\beta}(x) &= \nabla V_\alpha(x) - \nabla V_\beta(x)\\
&= \sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v, x^{-v}) - \nabla\theta_v(y_\alpha(x)^v, x^{-v})\bigr]\\
&\quad + \begin{pmatrix}\nabla_{x^1}\theta_1(y_\alpha(x)^1, x^{-1}) - \nabla_{x^1}\theta_1(y_\beta(x)^1, x^{-1})\\ \vdots\\ \nabla_{x^N}\theta_N(y_\alpha(x)^N, x^{-N}) - \nabla_{x^N}\theta_N(y_\beta(x)^N, x^{-N})\end{pmatrix}\\
&\quad - \alpha\bigl(x - y_\alpha(x)\bigr) + \beta\bigl(x - y_\beta(x)\bigr). 
\end{aligned}\tag{8}
\]

From Theorem 2, we know that the normalized Nash equilibria of the GNEP are precisely the global minima of the smooth unconstrained optimization problem (see [5])
\[
\min_{x\in R^n}\ V_{\alpha\beta}(x) \tag{9}
\]
with zero optimal value.

In this paper, we develop two new descent methods for finding a normalized Nash equilibrium of the GNEP by solving the optimization problem (9). The key to our methods is a strategy for adjusting α and β when a stationary point of V_{αβ}(x) is not a solution of the GNEP. We will show that our algorithms are globally convergent to a normalized Nash equilibrium under an appropriate assumption on the cost functions, which is not stronger than the one considered in [8].

The organization of the paper is as follows. In Section 2, we state the main assumption underlying our algorithms and present some examples of GNEPs satisfying it. In Section 3, we derive some useful properties of the function V_{αβ}(x). In Section 4, we formally state our algorithms and prove that they are both globally convergent to a normalized Nash equilibrium.

2. Main Assumption

In order to construct our algorithms and guarantee their convergence, we make the following assumption.

Assumption 2. For any β > α > 0 and x ∈ R^n, if y_α(x) ≠ y_β(x), we have
\[
\begin{aligned}
&\sum_{v=1}^{N}\bigl(\nabla\theta_v(y_\beta(x)^v, x^{-v}) - \nabla\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x) - y_\alpha(x)\bigr)\\
&\quad \ge \sum_{v=1}^{N}\bigl(\nabla_{x^v}\theta_v(y_\beta(x)^v, x^{-v}) - \nabla_{x^v}\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr).
\end{aligned}\tag{10}
\]

We next consider three examples which satisfy Assumption 2.

Example 3. Let us consider the case in which all the cost functions are separable, that is,
\[
\theta_v(x) = f_v(x^v) + g_v(x^{-v}), \tag{11}
\]
where f_v : R^{n_v} → R is convex and g_v : R^{n−n_v} → R. A simple calculation shows that, for any x ∈ R^n, we have
\[
\begin{aligned}
&\sum_{v=1}^{N}\bigl(\nabla_{x^v}\theta_v(y_\beta(x)^v, x^{-v}) - \nabla_{x^v}\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr)\\
&\quad = \sum_{v=1}^{N}\bigl(\nabla f_v(y_\beta(x)^v) - \nabla f_v(y_\alpha(x)^v)\bigr)^T \bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr),\\
&\sum_{v=1}^{N}\bigl(\nabla\theta_v(y_\beta(x)^v, x^{-v}) - \nabla\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x) - y_\alpha(x)\bigr)\\
&\quad = \sum_{v=1}^{N}\bigl(\nabla f_v(y_\beta(x)^v) - \nabla f_v(y_\alpha(x)^v)\bigr)^T \bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr).
\end{aligned}\tag{12}
\]
Hence Assumption 2 holds.
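As a numerical illustration of Theorem 2(a) and (b), the sketch below approximates V_α, V_β, and the difference V_{αβ} = V_α − V_β for the game of Example 5 below by brute-force grid maximization (an illustrative stand-in: since Ψ_α(x, ·) is strongly concave, a real implementation would solve the inner problem (4) with a convex solver):

```python
def grid(step=0.05):
    # feasible points of X = {y1 >= 1, y2 >= 1, y1 + y2 <= 10}
    n = int(9.0 / step) + 1
    for i in range(n):
        for j in range(n):
            y1, y2 = 1.0 + i * step, 1.0 + j * step
            if y1 + y2 <= 10.0 + 1e-12:
                yield (y1, y2)

def psi_alpha(x, y, alpha):
    # regularized Nikaido-Isoda function (3) for theta1 = x1*x2, theta2 = -x1*x2
    t1 = x[0] * x[1] - y[0] * x[1]       # theta1(x) - theta1(y1, x2)
    t2 = -x[0] * x[1] + x[0] * y[1]      # theta2(x) - theta2(x1, y2)
    reg = 0.5 * alpha * ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)
    return t1 + t2 - reg

def V(x, alpha):
    return max(psi_alpha(x, y, alpha) for y in grid())

x_star = (1.0, 9.0)     # the normalized Nash equilibrium of Example 5
x_other = (3.0, 4.0)    # a feasible non-equilibrium point
alpha, beta = 0.5, 2.0
print(V(x_star, alpha) - V(x_star, beta))    # ~ 0   (Theorem 2(b))
print(V(x_other, alpha) - V(x_other, beta))  # > 0   (Theorem 2(a))
```

The first difference is (numerically) zero only at the normalized equilibrium; at any other feasible point V_{αβ} is strictly positive, which is exactly the property that problem (9) exploits.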
Example 4. Consider the case where the cost function θ_v(x) is quadratic, that is,
\[
\theta_v(x) = \frac{1}{2}(x^v)^T A_{vv}\, x^v + \sum_{\mu=1,\,\mu\ne v}^{N} (x^v)^T A_{v\mu}\, x^\mu \tag{13}
\]
for v = 1, ..., N. We have
\[
\begin{aligned}
&\sum_{v=1}^{N}\bigl(\nabla_{x^v}\theta_v(y_\beta(x)^v, x^{-v}) - \nabla_{x^v}\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr)\\
&\quad = \sum_{v=1}^{N}\bigl\langle y_\beta(x)^v - y_\alpha(x)^v,\ A_{vv}\bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr)\bigr\rangle,\\
&\sum_{v=1}^{N}\bigl(\nabla\theta_v(y_\beta(x)^v, x^{-v}) - \nabla\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x) - y_\alpha(x)\bigr)\\
&\quad = \Bigl\langle y_\beta(x) - y_\alpha(x),\ \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N}\\ A_{21} & A_{22} & \cdots & A_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ A_{N1} & A_{N2} & \cdots & A_{NN}\end{pmatrix}\bigl(y_\beta(x) - y_\alpha(x)\bigr)\Bigr\rangle.
\end{aligned}\tag{14}
\]
Therefore, if the matrix
\[
\begin{pmatrix} 0 & A_{12} & \cdots & A_{1N}\\ A_{21} & 0 & \cdots & A_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ A_{N1} & A_{N2} & \cdots & 0 \end{pmatrix} \tag{15}
\]
is positive semidefinite, Assumption 2 is satisfied.

In the following example, we show the relationship between our assumption and the one considered in [8]. There, for any β > α > 0 and a given x ∈ R^n with y_α(x) ≠ y_β(x), it is required that the inequality
\[
\sum_{v=1}^{N}\bigl(\nabla\theta_v(y_\beta(x)^v, x^{-v}) - \nabla\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x) - y_\alpha(x)\bigr) > 0 \tag{16}
\]
holds.

Example 5. Consider the GNEP with N = 2,
\[
X = \{x \in R^2 : x^1 \ge 1,\ x^2 \ge 1,\ x^1 + x^2 \le 10\}, \tag{17}
\]
and the cost functions θ_1(x) = x^1 x^2 and θ_2(x) = −x^1 x^2. The point x^* = (1, 9)^T is the unique normalized Nash equilibrium. For any x ∈ R^2, we have
\[
\begin{aligned}
&\sum_{v=1}^{2}\bigl(\nabla\theta_v(y_\beta(x)^v, x^{-v}) - \nabla\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x) - y_\alpha(x)\bigr) = 0,\\
&\sum_{v=1}^{2}\bigl(\nabla_{x^v}\theta_v(y_\beta(x)^v, x^{-v}) - \nabla_{x^v}\theta_v(y_\alpha(x)^v, x^{-v})\bigr)^T \bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr) = 0.
\end{aligned}\tag{18}
\]
Therefore Assumption 2 holds, but (16) does not hold for any β > α > 0.

3. Properties of V_{αβ}(x)

Lemma 6. For any β > α > 0 and x ∈ R^n, we have
\[
V_{\alpha\beta}(x) \ge \frac{\beta-\alpha}{2}\|x - y_\beta(x)\|^2 + \frac{\alpha}{2}\|y_\alpha(x) - y_\beta(x)\|^2, \tag{19}
\]
\[
V_{\alpha\beta}(x) \le \frac{\beta-\alpha}{2}\|x - y_\alpha(x)\|^2 - \frac{\beta}{2}\|y_\alpha(x) - y_\beta(x)\|^2. \tag{20}
\]

Proof. Since y_α(x) satisfies the optimality condition for (4), we have
\[
\sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\alpha(x)^v, x^{-v}) - \alpha\bigl(x^v - y_\alpha(x)^v\bigr)\bigr]^T \bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr) \ge 0. \tag{21}
\]
In a similar way, it follows that y_β(x) satisfies
\[
\sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\beta(x)^v, x^{-v}) - \beta\bigl(x^v - y_\beta(x)^v\bigr)\bigr]^T \bigl(y_\alpha(x)^v - y_\beta(x)^v\bigr) \ge 0. \tag{22}
\]
Since θ_v(x), as a function of x^v alone, is convex, we have
\[
\sum_{v=1}^{N}\bigl[\theta_v(y_\beta(x)^v, x^{-v}) - \theta_v(y_\alpha(x)^v, x^{-v})\bigr] - \alpha\bigl(x - y_\alpha(x)\bigr)^T\bigl(y_\beta(x) - y_\alpha(x)\bigr) \ge 0, \tag{23}
\]
\[
\sum_{v=1}^{N}\bigl[\theta_v(y_\alpha(x)^v, x^{-v}) - \theta_v(y_\beta(x)^v, x^{-v})\bigr] - \beta\bigl(x - y_\beta(x)\bigr)^T\bigl(y_\alpha(x) - y_\beta(x)\bigr) \ge 0, \tag{24}
\]
respectively. Thus, using the definition of V_{αβ}(x) and (23), we have
\[
\begin{aligned}
V_{\alpha\beta}(x) &= \sum_{v=1}^{N}\bigl[\theta_v(y_\beta(x)^v, x^{-v}) - \theta_v(y_\alpha(x)^v, x^{-v})\bigr] + \frac{\beta}{2}\|x - y_\beta(x)\|^2 - \frac{\alpha}{2}\|x - y_\alpha(x)\|^2\\
&\ge \alpha\bigl(x - y_\alpha(x)\bigr)^T\bigl(y_\beta(x) - y_\alpha(x)\bigr) + \frac{\beta}{2}\|x - y_\beta(x)\|^2 - \frac{\alpha}{2}\|x - y_\beta(x) + y_\beta(x) - y_\alpha(x)\|^2\\
&= \frac{\beta-\alpha}{2}\|x - y_\beta(x)\|^2 + \frac{\alpha}{2}\|y_\alpha(x) - y_\beta(x)\|^2,
\end{aligned}\tag{25}
\]
which is (19). Similarly, using the definition of V_{αβ}(x) and (24), we have
\[
V_{\alpha\beta}(x) \le \frac{\beta-\alpha}{2}\|x - y_\alpha(x)\|^2 - \frac{\beta}{2}\|y_\alpha(x) - y_\beta(x)\|^2. \tag{26}
\]
The proof is complete.

Lemma 7. Assume X is bounded. For any β > α > 0 and x ∈ R^n, we have
\[
\limsup_{\beta'\to\infty,\ \alpha'\to 0}\ \frac{V_{\alpha'\beta'}(x)}{\beta'-\alpha'} \le \frac{V_{\alpha\beta}(x)}{\beta-\alpha}. \tag{27}
\]

Proof. We have from (19) that
\[
\frac{2V_{\alpha\beta}(x)}{\beta-\alpha} \ge \|x - y_\beta(x)\|^2 + \frac{\alpha}{\beta-\alpha}\|y_\alpha(x) - y_\beta(x)\|^2 \ge \|x - y_\beta(x)\|^2. \tag{28}
\]
By the definition of V_{αβ}(x), we have
\[
\frac{2V_{\alpha'\beta'}(x)}{\beta'-\alpha'} = \frac{2\sum_{v=1}^{N}\bigl[\theta_v(y_{\beta'}(x)^v, x^{-v}) - \theta_v(y_{\alpha'}(x)^v, x^{-v})\bigr] - \alpha'\|x - y_{\alpha'}(x)\|^2 + \beta'\|x - y_{\beta'}(x)\|^2}{\beta'-\alpha'}. \tag{29}
\]
Since y_{α'}(x) ∈ X, y_{β'}(x) ∈ X, and X is bounded, we get that
\[
\limsup_{\beta'\to\infty,\ \alpha'\to 0}\ \frac{V_{\alpha'\beta'}(x)}{\beta'-\alpha'} \le \frac{1}{2}\|x - y_\beta(x)\|^2 \le \frac{V_{\alpha\beta}(x)}{\beta-\alpha}. \tag{30}
\]
This completes the proof.

Equation (8) and Assumption 2 yield
\[
\begin{aligned}
\nabla V_{\alpha\beta}(x)^T\bigl(y_\beta(x) - y_\alpha(x)\bigr) &= \sum_{v=1}^{N}\bigl[\nabla\theta_v(y_\beta(x)^v, x^{-v}) - \nabla\theta_v(y_\alpha(x)^v, x^{-v})\bigr]^T \bigl(y_\beta(x) - y_\alpha(x)\bigr)\\
&\quad - \sum_{v=1}^{N}\bigl[\nabla_{x^v}\theta_v(y_\beta(x)^v, x^{-v}) - \nabla_{x^v}\theta_v(y_\alpha(x)^v, x^{-v})\bigr]^T \bigl(y_\beta(x)^v - y_\alpha(x)^v\bigr)\\
&\quad - \bigl[\alpha\bigl(x - y_\alpha(x)\bigr) - \beta\bigl(x - y_\beta(x)\bigr)\bigr]^T \bigl(y_\beta(x) - y_\alpha(x)\bigr)\\
&\ge -\bigl[\alpha\bigl(x - y_\alpha(x)\bigr) - \beta\bigl(x - y_\beta(x)\bigr)\bigr]^T \bigl(y_\beta(x) - y_\alpha(x)\bigr)\\
&=: u_{\alpha\beta}(x) \ge 0,
\end{aligned}\tag{31}
\]
where the nonnegativity of u_{αβ}(x) follows from the inequalities (23) and (24). In particular, either u_{αβ}(x) is above a tolerance ε > 0, in which case y_α(x) − y_β(x) is a direction of sufficient descent for V_{αβ}(x) at x, or else, as we show in the lemma below, x is an approximate solution of the GNEP with accuracy depending on ε, α, β. This result will lead to our methods.

Lemma 8. For any β > α > 0 and x ∈ R^n, we have
\[
\|x - y_\beta(x)\| \le \sqrt{\frac{2V_{\alpha\beta}(x)}{\beta-\alpha}}, \tag{32}
\]
\[
\gamma_{\alpha\beta}(x) \le V_\alpha(x) \le \gamma_{\alpha\beta}(x) + u_{\alpha\beta}(x) + \frac{\alpha}{2}\|y_\alpha(x) - y_\beta(x)\|^2, \tag{33}
\]
where γ_{αβ}(x) = Σ_{v=1}^{N} [θ_v(x^v, x^{−v}) − θ_v(y_β(x)^v, x^{−v})] − (α/2)‖x − y_β(x)‖².

Proof. Inequality (32) follows immediately from (19) in Lemma 6. The definition of V_α(x) implies that
\[
V_\alpha(x) \ge \sum_{v=1}^{N}\bigl[\theta_v(x^v, x^{-v}) - \theta_v(y_\beta(x)^v, x^{-v})\bigr] - \frac{\alpha}{2}\|x - y_\beta(x)\|^2 = \gamma_{\alpha\beta}(x), \tag{34}
\]
which proves the first inequality in (33).

Since u_{αβ}(x) is the sum of the nonnegative quantity Σ_{v=1}^{N} [θ_v(y_β(x)^v, x^{−v}) − θ_v(y_α(x)^v, x^{−v})] − α(x − y_α(x))^T(y_β(x) − y_α(x)) and another nonnegative quantity (see (23) and (24)), we have
\[
u_{\alpha\beta}(x) \ge \sum_{v=1}^{N}\bigl[\theta_v(y_\beta(x)^v, x^{-v}) - \theta_v(y_\alpha(x)^v, x^{-v})\bigr] - \alpha\bigl(x - y_\alpha(x)\bigr)^T\bigl(y_\beta(x) - y_\alpha(x)\bigr). \tag{35}
\]
Thus,
\[
V_\alpha(x) \le \gamma_{\alpha\beta}(x) + u_{\alpha\beta}(x) + \frac{\alpha}{2}\|y_\alpha(x) - y_\beta(x)\|^2, \tag{36}
\]
which is the second inequality in (33). This completes the proof.
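The descent property behind (31) can also be observed numerically. The following sketch (the Example 5 game again; the inner maximizers y_α(x) and y_β(x) are found by brute-force grid search, so all quantities are approximate) checks that a small step from a non-equilibrium point x along d = y_α(x) − y_β(x) decreases V_{αβ}:

```python
def argmax_psi(x, alpha, step=0.05):
    # brute-force maximizer of the regularized Nikaido-Isoda function over
    # X = {y1 >= 1, y2 >= 1, y1 + y2 <= 10} (theta1 = x1*x2, theta2 = -x1*x2)
    best, best_y = None, None
    n = int(9.0 / step) + 1
    for i in range(n):
        for j in range(n):
            y = (1.0 + i * step, 1.0 + j * step)
            if y[0] + y[1] > 10.0 + 1e-12:
                continue
            val = (x[0] * x[1] - y[0] * x[1]) \
                + (-x[0] * x[1] + x[0] * y[1]) \
                - 0.5 * alpha * ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2)
            if best is None or val > best:
                best, best_y = val, y
    return best, best_y

def V_ab(x, alpha, beta):
    return argmax_psi(x, alpha)[0] - argmax_psi(x, beta)[0]

x = (3.0, 4.0)                 # not an equilibrium
alpha, beta = 0.5, 2.0
y_a = argmax_psi(x, alpha)[1]
y_b = argmax_psi(x, beta)[1]
d = (y_a[0] - y_b[0], y_a[1] - y_b[1])   # candidate descent direction
step_len = 1e-2
x_t = (x[0] + step_len * d[0], x[1] + step_len * d[1])
print(V_ab(x_t, alpha, beta) < V_ab(x, alpha, beta))  # True: V decreased
```

The step length and grid resolution here are illustrative choices; the algorithms in Section 4 pair this direction with the stopping quantity u_{αβ}(x) instead of a fixed step.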
4. Two Methods for Solving the GNEP
In this section, we introduce two methods for solving the
GNEP, motivated by the D-gap function scheme for solving
monotone variational inequalities [14, 15]. We first formally
describe our methods below and then analyze their convergence using Lemma 8.
Algorithm 9. Choose an arbitrary initial point x^0 ∈ R^n and any β_0 > α_0 > 0. Choose any sequences of numbers ε_k > 0, η_k ≥ 0, λ_k ∈ [0, 1), k = 1, 2, ..., such that
\[
\lim_{k\to\infty}\varepsilon_k = \lim_{k\to\infty}\frac{\eta_k}{1-\lambda_k} = 0, \qquad \sum_{k=1}^{\infty}(1-\lambda_k) = \infty. \tag{37}
\]
For k = 1, 2, ..., we iterate the following.

Iteration k. Choose any 0 < α_k ≤ (1/2)α_{k−1}. Choose any β_k ≥ 2β_{k−1} and x^k ∈ R^n satisfying
\[
\frac{V_{\alpha_k\beta_k}(x^k)}{\beta_k-\alpha_k} \le \eta_k + \lambda_k\, \frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1}-\alpha_{k-1}}. \tag{38}
\]
Apply a descent method to the unconstrained minimization of the function V_{α_kβ_k}, with x^k as the starting point and using y_{α_k}(x) − y_{β_k}(x) as a safeguard descent direction at x, until the method generates an x ∈ R^n satisfying u_{α_kβ_k}(x) ≤ ε_k. The resulting x is denoted by x^k.
Theorem 10. Assume X is bounded. Let {x^k, α_k, β_k, ε_k, η_k, λ_k}_{k=0,1,2,...} be generated by Algorithm 9. Then {x^k} is bounded, β_k → ∞, α_k → 0, and every cluster point of {x^k} is a normalized Nash equilibrium of the GNEP.

Proof. Denote a^k = V_{α_kβ_k}(x^k)/(β_k − α_k). By (38), we have a^k ≤ η_k + λ_k a^{k−1} for k = 1, 2, ..., and it follows from (37) that a^k → 0 ([16], Lemma 3). For each k ∈ {1, 2, ...}, we have from Lemma 8 that (32) and (33) hold with α = α_k, β = β_k, x = x^k. This, together with u_{α_kβ_k}(x^k) ≤ ε_k and ‖y_{α_k}(x^k) − y_{β_k}(x^k)‖ ≤ diam(X), yields
\[
\|x^k - y_{\beta_k}(x^k)\| \le \sqrt{2a^k}, \qquad \gamma^k \le V_{\alpha_k}(x^k) \le \gamma^k + \varepsilon_k + \frac{\alpha_k \operatorname{diam}(X)^2}{2}, \tag{39}
\]
where γ^k = Σ_{v=1}^{N} [θ_v(x^{k,v}, x^{k,−v}) − θ_v(y_{β_k}(x^k)^v, x^{k,−v})] − (α_k/2)‖x^k − y_{β_k}(x^k)‖² and diam(X) = max_{x,y∈X} ‖x − y‖. Since a^k → 0, the first inequality in (39) implies that {x^k} is bounded; moreover, it also implies γ^k → 0. Since α_k → 0, the last two inequalities in (39) yield V_{α_k}(x^k) → 0. Since, for each y ∈ X, we have V_{α_k}(x^k) ≥ Ψ(x^k, y) − (α_k/2)‖x^k − y‖², this yields 0 ≥ Ψ(x^∞, y) for each cluster point x^∞ of {x^k}. Thus, each cluster point x^∞ is a normalized Nash equilibrium of the GNEP. This completes the proof.
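The driving recursion in this proof, a^k ≤ η_k + λ_k a^{k−1} under condition (37), forces a^k → 0 by [16], Lemma 3. A minimal numerical sketch, with the illustrative (not the paper's) choices λ_k = 1 − 1/k and η_k = 1/k², runs the recursion with equality, its slowest admissible decrease:

```python
# a^k <= eta_k + lambda_k * a^{k-1}: worst case, taken with equality.
# lambda_k = 1 - 1/k  => sum of (1 - lambda_k) = sum 1/k diverges;
# eta_k = 1/k**2      => eta_k / (1 - lambda_k) = 1/k -> 0,
# so the conditions in (37) hold.
a = 5.0                        # a^0: an arbitrary nonnegative start
for k in range(1, 20001):
    lam = 1.0 - 1.0 / k
    eta = 1.0 / k ** 2
    a = eta + lam * a
print(a)  # small: the sequence is driven to zero even though lambda_k -> 1
```

For these particular sequences one can check that a^k = H_k / k (the k-th harmonic number divided by k), so a^k → 0 at a roughly (log k)/k rate.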
Algorithm 11. Choose any x^0 ∈ R^n, any β_0 > α_0 > 0, and two sequences of nonnegative numbers ρ_k, η_k, k = 1, 2, ..., such that
\[
\eta_k + \rho_k > 0 \ \ \forall k, \qquad \sum_{k=1}^{\infty}\rho_k < \infty, \qquad \sum_{k=1}^{\infty}\eta_k < \infty. \tag{40}
\]
Choose any continuous function φ : R_+ → R_+ with φ(t) = 0 ⇔ t = 0. For k = 1, 2, ..., we iterate the following.

Iteration k. Choose any 0 < α_k ≤ (1/2)α_{k−1} and then choose β_k ≥ 2β_{k−1} satisfying
\[
\frac{V_{\alpha_k\beta_k}(x^{k-1})}{\beta_k-\alpha_k} \le (1+\rho_k)\,\frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1}-\alpha_{k-1}} + \eta_k. \tag{41}
\]
Apply a descent method to the unconstrained minimization of the function V_{α_kβ_k}, with x^{k−1} as the starting point. We assume the descent method has the property that the amount of descent achieved at x per step is bounded away from zero whenever x is bounded and ‖∇V_{α_kβ_k}(x)‖ is bounded away from zero. Then either the method generates, in a finite number of steps, an x satisfying
\[
\|\nabla V_{\alpha_k\beta_k}(x)\| \le \varphi\!\left(\frac{V_{\alpha_k\beta_k}(x)}{\beta_k-\alpha_k}\right), \tag{42}
\]
which we denote by x^k, or else V_{α_kβ_k}(x) must decrease towards zero, in which case every cluster point of the generated sequence solves the GNEP.
Theorem 12. Assume X is bounded. Let {x^k, α_k, β_k, ρ_k, η_k}_{k=0,1,2,...} be generated by Algorithm 11.
(a) Suppose x^k is obtained for all k. Then {x^k} is bounded, β_k → ∞, α_k → 0, and every cluster point of {x^k} is a normalized Nash equilibrium of the GNEP.

(b) Suppose x^k is not obtained for some k. Then the descent method generates a bounded sequence of points x with V_{α_kβ_k}(x) → 0, so every cluster point of this sequence solves the GNEP.
Proof. (a) Since we use a descent method at iteration k to obtain x^k from x^{k−1}, we have V_{α_kβ_k}(x^k) ≤ V_{α_kβ_k}(x^{k−1}), so (41) yields
\[
\frac{V_{\alpha_k\beta_k}(x^k)}{\beta_k-\alpha_k} \le (1+\rho_k)\,\frac{V_{\alpha_{k-1}\beta_{k-1}}(x^{k-1})}{\beta_{k-1}-\alpha_{k-1}} + \eta_k. \tag{43}
\]
Denote a^k = V_{α_kβ_k}(x^k)/(β_k − α_k). Then (43) can be written as a^k ≤ (1 + ρ_k)a^{k−1} + η_k for k = 1, 2, .... Using a^k ≥ 0 and (40), it follows that the sequence {a^k} converges to some a ≥ 0 ([16], Lemma 2). Since (32) implies
\[
\|x^k - y_{\beta_k}(x^k)\| \le \sqrt{2a^k} \quad \forall k, \tag{44}
\]
the sequence {x^k} is bounded.

We claim that a = 0. Suppose the contrary. Then, for all k sufficiently large, it holds that a^k ≥ a/2, that is,
\[
\frac{a}{2} \le \frac{\sum_{v=1}^{N}\bigl[\theta_v(y_{\beta_k}(x^k)^v, x^{k,-v}) - \theta_v(y_{\alpha_k}(x^k)^v, x^{k,-v})\bigr] - \frac{\alpha_k}{2}\|x^k - y_{\alpha_k}(x^k)\|^2 + \frac{\beta_k}{2}\|x^k - y_{\beta_k}(x^k)\|^2}{\beta_k-\alpha_k}. \tag{45}
\]
Since, by the construction of the algorithm, β_k → ∞ and α_k → 0, and {x^k} is bounded (as are y_{α_k}(x^k) and y_{β_k}(x^k)), we get
\[
0 < \frac{a}{2} \le \liminf_{k\to\infty}\ \|x^k - y_{\beta_k}(x^k)\|^2. \tag{46}
\]
Then lim_{k→∞} β_k‖x^k − y_{β_k}(x^k)‖ = ∞, and hence, in view of (8),
\[
\lim_{k\to\infty}\|\nabla V_{\alpha_k\beta_k}(x^k)\| = \infty. \tag{47}
\]
Since x^k satisfies (42), we have ‖∇V_{α_kβ_k}(x^k)‖ ≤ φ(a^k) for all k, and this contradicts the convergence of {φ(a^k)} (recall that φ is a continuous function). Hence, a = 0.

For each k ∈ {1, 2, ...}, we have from Lemma 8 that (33) holds with α = α_k, β = β_k, x = x^k, and from (31) that
\[
u_{\alpha_k\beta_k}(x^k) \le \varepsilon^k := \nabla V_{\alpha_k\beta_k}(x^k)^T\bigl(y_{\beta_k}(x^k) - y_{\alpha_k}(x^k)\bigr). \tag{48}
\]
This, together with ‖y_{α_k}(x^k) − y_{β_k}(x^k)‖ ≤ diam(X), yields
\[
\gamma^k \le V_{\alpha_k}(x^k) \le \gamma^k + \varepsilon^k + \frac{\alpha_k \operatorname{diam}(X)^2}{2}, \tag{49}
\]
where γ^k = Σ_{v=1}^{N} [θ_v(x^{k,v}, x^{k,−v}) − θ_v(y_{β_k}(x^k)^v, x^{k,−v})] − (α_k/2)‖x^k − y_{β_k}(x^k)‖² and diam(X) = max_{x,y∈X} ‖x − y‖. Since a^k → 0, (44) implies that {x^k} is bounded and, moreover, that γ^k → 0. Also, we have ‖∇V_{α_kβ_k}(x^k)‖ ≤ φ(a^k) → 0, so ε^k → 0. From α_k → 0, (49), γ^k → 0, and ε^k → 0, we get V_{α_k}(x^k) → 0. Since, for each y ∈ X, we have from the definition of V_α(x) that
\[
V_{\alpha_k}(x^k) \ge \Psi(x^k, y) - \frac{\alpha_k}{2}\|x^k - y\|^2, \tag{50}
\]
this yields 0 ≥ Ψ(x^∞, y) for each cluster point x^∞ of {x^k}. Thus, each cluster point x^∞ is a normalized Nash equilibrium of the GNEP.

(b) It is easy to prove that V_{α_kβ_k}(x) → 0 along the sequence generated by the descent method. Hence every cluster point x^∞ of this sequence is a normalized Nash equilibrium of the GNEP.

The proof is complete.
Acknowledgments
This research was partly supported by the National Natural Science Foundation of China (11271226, 10971118) and the Promotive Research Fund for Excellent Young and Middle-Aged Scientists of Shandong Province (BS2010SF010).
References
[1] A. Dreves and C. Kanzow, “Nonsmooth optimization reformulations characterizing all solutions of jointly convex generalized
Nash equilibrium problems,” Computational Optimization and
Applications, vol. 50, no. 1, pp. 23–48, 2011.
[2] A. Dreves, C. Kanzow, and O. Stein, “Nonsmooth optimization
reformulations of player convex generalized Nash equilibrium
problems,” Journal of Global Optimization, vol. 53, no. 4, pp.
587–614, 2012.
[3] F. Facchinei and C. Kanzow, “Generalized Nash equilibrium
problems,” Annals of Operations Research, vol. 175, pp. 177–211,
2010.
[4] F. Facchinei, A. Fischer, and V. Piccialli, “On generalized Nash
games and variational inequalities,” Operations Research Letters,
vol. 35, no. 2, pp. 159–164, 2007.
[5] F. Facchinei and C. Kanzow, “Generalized Nash equilibrium
problems,” A Quarterly Journal of Operations Research, vol. 5,
no. 3, pp. 173–210, 2007.
[6] D. Han, H. Zhang, G. Qian, and L. Xu, “An improved two-step
method for solving generalized Nash equilibrium problems,”
European Journal of Operational Research, vol. 216, no. 3, pp.
613–623, 2012.
[7] P. T. Harker, “Generalized Nash games and quasi-variational
inequalities,” European Journal of Operational Research, vol. 54,
no. 1, pp. 81–94, 1991.
[8] A. von Heusinger and C. Kanzow, “Optimization reformulations of the generalized Nash equilibrium problem using
Nikaido-Isoda-type functions,” Computational Optimization
and Applications, vol. 43, no. 3, pp. 353–377, 2009.
[9] B. Panicucci, M. Pappalardo, and M. Passacantando, “On solving generalized Nash equilibrium problems via optimization,”
Optimization Letters, vol. 3, no. 3, pp. 419–435, 2009.
[10] B. Qu and J. G. Jiang, “On the computation of normalized Nash equilibrium for generalized Nash equilibrium problem,” Journal of Convergence Information Technology, vol. 7, no. 22, pp. 16–21, 2012.
[11] J. Zhang, B. Qu, and N. Xiu, “Some projection-like methods for
the generalized Nash equilibria,” Computational Optimization
and Applications, vol. 45, no. 1, pp. 89–109, 2010.
[12] S. D. Flåm and A. S. Antipin, “Equilibrium programming using
proximal-like algorithms,” Mathematical Programming, vol. 78,
no. 1, pp. 29–41, 1997.
[13] S. D. Flåm and A. Ruszczyński, “Noncooperative convex games:
computing equilibrium by partial regularization,” Working
Paper 94–42, Laxenburg, Austria, 1994.
[14] M. V. Solodov and P. Tseng, “Some methods based on the Dgap function for solving monotone variational inequalities,”
Computational Optimization and Applications, vol. 17, no. 2-3,
pp. 255–277, 2000.
[15] B. Qu, C. Y. Wang, and J. Z. Zhang, “Convergence and error
bound of a method for solving variational inequality problems
via the generalized D-gap function,” Journal of Optimization
Theory and Applications, vol. 119, no. 3, pp. 535–552, 2003.
[16] B. T. Polyak, Introduction to Optimization, Translations Series
in Mathematics and Engineering, Optimization Software, New
York, NY, USA, 1987.