Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 423040, 8 pages
http://dx.doi.org/10.1155/2013/423040
Research Article
A New Gap Function for Vector Variational Inequalities with an Application
Hui-qiang Ma,1 Nan-jing Huang,2 Meng Wu,3 and Donal O’Regan4
1 School of Economics, Southwest University for Nationalities, Chengdu, Sichuan 610041, China
2 Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, China
3 Business School, Sichuan University, Chengdu, Sichuan 610064, China
4 School of Mathematics, Statistics and Applied Mathematics, National University of Ireland, Galway, Ireland
Correspondence should be addressed to Nan-jing Huang; nanjinghuang@hotmail.com
Received 29 May 2013; Accepted 4 June 2013
Academic Editor: Gue Myung Lee
Copyright © 2013 Hui-qiang Ma et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We consider a vector variational inequality in a finite-dimensional space. A new gap function is proposed, and an equivalent
optimization problem for the vector variational inequality is also provided. Under some suitable conditions, we prove that the
gap function is directionally differentiable and that any point satisfying the first-order necessary optimality condition for the
equivalent optimization problem solves the vector variational inequality. As an application, we use the new gap function to
reformulate a stochastic vector variational inequality as a deterministic optimization problem. We solve this optimization problem
by employing the sample average approximation method. The convergence of optimal solutions of the approximation problems is
also investigated.
1. Introduction
The vector variational inequality (VVI for short), which was
first proposed by Giannessi [1], has been widely investigated
by many authors (see [2–9] and the references therein). VVI
can be used to model a range of vector equilibrium problems arising in economics, traffic networks, and migration (see [1]).
One approach for solving a VVI is to transform it into
an equivalent optimization problem by using a gap function.
A gap function was first introduced to study optimization problems and has become a powerful tool in the study of convex optimization problems. Gap functions were later introduced in the study of scalar variational inequalities: a gap function reformulates a scalar variational inequality as an equivalent optimization problem, so that effective solution methods and algorithms for optimization problems can be used to find solutions of variational inequalities. Recently,
many authors extended the theory of gap functions to VVI
and vector equilibrium problems (see [2, 4, 6–9]). In this
paper, we present a new gap function for VVI and reformulate
it as an equivalent optimization problem. We also prove that
the gap function is directionally differentiable and that any
point satisfying the first-order necessary optimality condition
for the equivalent optimization problem solves the VVI.
In many practical problems, the problem data involve some uncertain factors. In order to reflect these uncertainties, stochastic vector variational inequalities are needed.
Recently, stochastic scalar variational inequalities have
received a lot of attention in the literature (see [10–20]).
The ERM (expected residual minimization) method was
proposed by Chen and Fukushima [11] in the study of
stochastic complementarity problems. They formulated a
stochastic linear complementarity problem (SLCP) as a minimization problem which minimizes the expectation of an NCP function (also called a residual function) of the SLCP and
regarded a solution of the minimization problem as a solution
of SLCP. This method is the so-called expected residual
minimization method. Following the ideas of Chen and
Fukushima [11], Zhang and Chen [20] considered stochastic
nonlinear complementarity problems. Luo and Lin [18, 19]
generalized the expected residual minimization method to
solve a stochastic linear and/or nonlinear variational inequality problem. However, in comparison to stochastic scalar
variational inequalities, there are very few results in the
literature on stochastic vector variational inequalities. In this
paper, we consider a deterministic reformulation for the
stochastic vector variational inequality (SVVI). Our focus is
on the expected residual minimization (ERM) method for the
stochastic vector variational inequality. It is well known that a VVI is more complicated than a scalar variational inequality, and VVIs model many practical problems. Therefore, it is meaningful and interesting to study stochastic vector variational inequalities.
The rest of this paper is organized as follows. In Section 2,
some preliminaries are given. In Section 3, a new gap function
for VVI is constructed and some suitable conditions are
given to ensure that the new gap function is directionally
differentiable and that any point satisfying the first-order
necessary condition of optimality for the new optimization
problem solves the vector variational inequality. In Section 4,
the stochastic VVI is presented and the new gap function
is used to reformulate SVVI as a deterministic optimization
problem.
2. Preliminaries

In this section, we will introduce some basic notations and preliminary results.

Throughout this paper, denote by x^T the transpose of a vector or matrix x, by |⋅| the Euclidean norm of a vector or matrix, and by ⟨⋅, ⋅⟩ the inner product in R^n. Let K be a nonempty, closed, and convex subset of R^n, let F_i : R^n → R^n (i = 1, ..., p) be mappings, and let F := (F_1, ..., F_p)^T. The vector variational inequality is to find a vector x* ∈ K such that

$$\left( \langle F_1(x^*), y - x^* \rangle, \ldots, \langle F_p(x^*), y - x^* \rangle \right)^T \notin -\operatorname{int} \mathbb{R}^p_+, \quad \forall y \in K, \tag{1}$$

where R^p_+ is the nonnegative orthant of R^p and int R^p_+ denotes the interior of R^p_+. Denote by S the solution set of VVI (1) and by S_ξ the solution set of the following scalar variational inequality (VI_ξ): find a vector x* ∈ K such that

$$\Big\langle \sum_{j=1}^{p} \xi_j F_j(x^*),\, y - x^* \Big\rangle \ge 0, \quad \forall y \in K, \tag{2}$$

where ξ ∈ B := {ξ ∈ R^p_+ : ∑_{j=1}^p ξ_j = 1}.

Definition 1. A function φ_ξ : K → R is said to be a gap function for VI_ξ (2) if it satisfies the following properties:
(i) φ_ξ(x) ≥ 0, for all x ∈ K;
(ii) φ_ξ(x*) = 0 iff x* solves VI_ξ (2).

Definition 2. A function ψ : K → R is said to be a gap function for VVI (1) if it satisfies the following properties:
(i) ψ(x) ≥ 0, for all x ∈ K;
(ii) ψ(x*) = 0 iff x* solves VVI (1).

Suppose that G is an n × n symmetric positive definite matrix. Let

$$\varphi_\xi(x) := \max_{y \in K} \Big\{ \Big\langle \sum_{j=1}^{p} \xi_j F_j(x),\, x - y \Big\rangle - \frac{1}{2}\, |x - y|_G^2 \Big\}, \tag{3}$$

where |x|_G^2 := ⟨x, Gx⟩. Note that

$$\sqrt{\lambda_{\min}}\,|x| \le |x|_G \le \sqrt{\lambda_{\max}}\,|x|, \tag{4}$$

where λ_min and λ_max are the smallest and largest eigenvalues of G, respectively. It was shown in [21] that the maximum in (3) is attained at

$$H_\xi(x) := \operatorname{Proj}_{K,G}\Big( x - G^{-1} \sum_{j=1}^{p} \xi_j F_j(x) \Big), \tag{5}$$

where Proj_{K,G}(x) is the projection of the point x onto the closed convex set K with respect to the norm |⋅|_G. Thus,

$$\varphi_\xi(x) = \Big\langle \sum_{j=1}^{p} \xi_j F_j(x),\, x - H_\xi(x) \Big\rangle - \frac{1}{2}\, |x - H_\xi(x)|_G^2. \tag{6}$$

Lemma 3. The projection operator Proj_{K,G}(⋅) is nonexpansive; that is,

$$\big| \operatorname{Proj}_{K,G}(x) - \operatorname{Proj}_{K,G}(y) \big|_G \le |x - y|_G, \quad \forall x, y \in \mathbb{R}^n. \tag{7}$$

Lemma 4. The function φ_ξ is a gap function for VI_ξ (2), and x* ∈ K solves VI_ξ (2) iff it solves the following optimization problem:

$$\min_{x \in K} \varphi_\xi(x). \tag{8}$$

The gap function φ_ξ is also called the regularized gap function for VI_ξ. When F_j (j = 1, ..., p) are continuously differentiable, we have the following results.

Lemma 5. If F_j (j = 1, ..., p) are continuously differentiable, then φ_ξ is also continuously differentiable in x, and its gradient is given by

$$\nabla_x \varphi_\xi(x) = \sum_{j=1}^{p} \xi_j F_j(x) - \Big( \sum_{j=1}^{p} \xi_j \nabla_x F_j(x) - G \Big) \big( H_\xi(x) - x \big). \tag{9}$$

Lemma 6. Assume that F_j are continuously differentiable and that the Jacobian matrices ∇_x F_j(x) are positive definite for all x ∈ K (j = 1, ..., p). If x is a stationary point of problem (8), that is,

$$\langle \nabla_x \varphi_\xi(x),\, y - x \rangle \ge 0, \quad \forall y \in K, \tag{10}$$

then it solves VI_ξ (2).
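To make the construction (3)–(6) concrete, the following minimal Python sketch evaluates φ_ξ via the projection formula. Purely for illustration, it assumes G = I (so |⋅|_G is the Euclidean norm and Proj_{K,G} is the ordinary Euclidean projection) and K a box, for which the projection is componentwise clipping; the function names are ours, not from the paper.

```python
# Illustrative sketch of the regularized gap function (3)-(6).
# Assumptions (ours, not the paper's): G = I and K = [lo, hi]^n a box,
# so Proj_{K,G} reduces to componentwise clipping.
import numpy as np

lo, hi = 0.0, 1.0  # bounds of the hypothetical box K

def proj_K(z):
    """Euclidean projection onto K = [lo, hi]^n."""
    return np.clip(z, lo, hi)

def phi(xi, x, F_list):
    """phi_xi(x) via (5)-(6); F_list holds the maps F_j : R^n -> R^n,
    and xi is a weight vector in the simplex B."""
    Fx = sum(w * F(x) for w, F in zip(xi, F_list))  # sum_j xi_j F_j(x)
    H = proj_K(x - Fx)                              # H_xi(x), cf. (5) with G = I
    r = x - H
    return float(Fx @ r - 0.5 * r @ r)              # cf. (6) with |.|_G = |.|
```

For a general symmetric positive definite G, Proj_{K,G} is itself a small quadratic program, so `proj_K` would be replaced by a QP solve.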
3. A New Gap Function for VVI and Its Properties

In this section, based on the regularized gap function φ_ξ for VI_ξ (2), we construct a new gap function for VVI (1) and establish some properties under some mild conditions. Let

$$\psi(x) := \min_{\xi \in B} \varphi_\xi(x). \tag{11}$$

Before showing that ψ is a gap function for VVI, we first present a useful result.

Lemma 7. The following assertion is true:

$$S = \bigcup_{\xi \in B} S_\xi. \tag{12}$$

Proof. Suppose that x* ∈ ∪_{ξ∈B} S_ξ. Then, there exists a ξ ∈ B such that x* ∈ S_ξ and

$$\sum_{j=1}^{p} \xi_j \langle F_j(x^*), y - x^* \rangle = \Big\langle \sum_{j=1}^{p} \xi_j F_j(x^*),\, y - x^* \Big\rangle \ge 0, \quad \forall y \in K. \tag{13}$$

For any fixed y ∈ K, since ξ ∈ B, there exists a j ∈ {1, ..., p} such that

$$\langle F_j(x^*), y - x^* \rangle \ge 0, \tag{14}$$

and so

$$\left( \langle F_1(x^*), y - x^* \rangle, \ldots, \langle F_p(x^*), y - x^* \rangle \right)^T \notin -\operatorname{int} \mathbb{R}^p_+. \tag{15}$$

Thus, we have

$$\left( \langle F_1(x^*), y - x^* \rangle, \ldots, \langle F_p(x^*), y - x^* \rangle \right)^T \notin -\operatorname{int} \mathbb{R}^p_+, \quad \forall y \in K. \tag{16}$$

This implies that x* ∈ S and ∪_{ξ∈B} S_ξ ⊂ S.

Conversely, suppose that x* ∈ S. Then, we have

$$\left( \langle F_1(x^*), y - x^* \rangle, \ldots, \langle F_p(x^*), y - x^* \rangle \right)^T \notin -\operatorname{int} \mathbb{R}^p_+, \quad \forall y \in K, \tag{17}$$

and so

$$\left\{ \left( \langle F_1(x^*), y - x^* \rangle, \ldots, \langle F_p(x^*), y - x^* \rangle \right)^T : y \in K \right\} \cap \left( -\operatorname{int} \mathbb{R}^p_+ \right) = \emptyset. \tag{18}$$

Since K is convex, the set on the left-hand side of (18) is convex (it is the image of K under an affine map), and so the separation theorems (Theorems 11.1 and 11.3 of [22]) yield

$$\inf_{y \in K} \Big\{ \sum_{j=1}^{p} b_j \langle F_j(x^*), y - x^* \rangle \Big\} \ge \sup_{y \in -\operatorname{int} \mathbb{R}^p_+} \langle b, y \rangle, \tag{19}$$

where b = (b_1, ..., b_p) ≠ 0. Moreover, we have b ≥ 0. In fact, if b_j < 0 for some j ∈ {1, ..., p}, then we have

$$\sup_{y \in -\operatorname{int} \mathbb{R}^p_+} \langle b, y \rangle = +\infty. \tag{20}$$

On the other hand,

$$\inf_{y \in K} \Big\{ \sum_{j=1}^{p} b_j \langle F_j(x^*), y - x^* \rangle \Big\} \le \sum_{j=1}^{p} b_j \langle F_j(x^*), x^* - x^* \rangle = 0 < \sup_{y \in -\operatorname{int} \mathbb{R}^p_+} \langle b, y \rangle, \tag{21}$$

which contradicts (19). Thus, b ≥ 0. This implies that, for any z ∈ K,

$$\sum_{j=1}^{p} b_j \langle F_j(x^*), z - x^* \rangle \ge \inf_{y \in K} \Big\{ \sum_{j=1}^{p} b_j \langle F_j(x^*), y - x^* \rangle \Big\} \ge \sup_{y \in -\operatorname{int} \mathbb{R}^p_+} \langle b, y \rangle = 0. \tag{22}$$

Taking ξ = b / (∑_{j=1}^p b_j), we have ξ ∈ B and

$$\Big\langle \sum_{j=1}^{p} \xi_j F_j(x^*),\, z - x^* \Big\rangle \ge 0, \quad \forall z \in K, \tag{23}$$

which implies that x* ∈ S_ξ and S ⊂ ∪_{ξ∈B} S_ξ.
This completes the proof.

Now, we can prove that ψ is a gap function for VVI (1).

Theorem 8. The function ψ given by (11) is a gap function for VVI (1). Hence, x* ∈ K solves VVI (1) iff it solves the following optimization problem:

$$\min_{x \in K} \psi(x). \tag{24}$$

Proof. Note that, for any ξ ∈ B, φ_ξ(x) given by (3) is a gap function for VI_ξ (2). It follows from Definition 1 that φ_ξ(x) ≥ 0 for all x ∈ K, and hence ψ(x) = min_{ξ∈B} φ_ξ(x) ≥ 0 for all x ∈ K.

Assume that ψ(x*) = 0 for some x* ∈ K. From (6), it is easy to see that φ_ξ(x*) is continuous in ξ. Since B is a closed and bounded set, there exists a vector $\bar{\xi}$ ∈ B such that ψ(x*) = $\varphi_{\bar{\xi}}(x^*)$, which implies that $\varphi_{\bar{\xi}}(x^*) = 0$ and $x^* \in S_{\bar{\xi}}$. It follows from Lemma 7 that x* solves VVI (1).

Suppose that x* solves VVI (1). From Lemma 7, it follows that there exists a vector $\bar{\xi}$ ∈ B such that $x^* \in S_{\bar{\xi}}$, and so $\varphi_{\bar{\xi}}(x^*) = 0$. Since φ_ξ(x*) ≥ 0 for all ξ ∈ B, we have ψ(x*) = min_{ξ∈B} φ_ξ(x*) = 0.

Thus, (11) is a gap function for VVI (1). The last assertion is obvious from the definition of a gap function.
This completes the proof.
Since πœ“ is constructed based on the regularized gap
function for VIπœ‰ , we wish to call it a regularized gap function
for VVI (1). Theorem 8 indicates that in order to get a solution
of VVI (1), we only need to solve problem (24). In what
follows, we will discuss some properties of the regularized gap
function πœ“.
Theorem 9. If F_j (j = 1, ..., p) are continuously differentiable, then the regularized gap function ψ is directionally differentiable in any direction d ∈ R^n, and its directional derivative ψ′(x; d) is given by

$$\psi'(x; d) = \inf_{\xi \in B(x)} \langle \nabla_x \varphi_\xi(x),\, d \rangle, \tag{25}$$

where $B(x) := \{\bar{\xi} \in B : \varphi_{\bar{\xi}}(x) = \min_{\xi \in B} \varphi_\xi(x)\}$.
Proof. Since the projection operator is nonexpansive, φ_ξ(x) is continuous in (ξ, x). Thus, B(x) is nonempty for any x ∈ K. From Lemma 5, it follows that φ_ξ(x) is continuously differentiable in x and that

$$\nabla_x \varphi_\xi(x) = \sum_{j=1}^{p} \xi_j F_j(x) - \Big( \sum_{j=1}^{p} \xi_j \nabla_x F_j(x) - G \Big) \big( H_\xi(x) - x \big) \tag{26}$$

is continuous in (ξ, x). It follows from Theorem 1 of [23] that ψ is directionally differentiable in any direction d ∈ R^n and that its directional derivative ψ′(x; d) is given by

$$\psi'(x; d) = \inf_{\xi \in B(x)} \langle \nabla_x \varphi_\xi(x),\, d \rangle. \tag{27}$$

This completes the proof.
By the directional differentiability of ψ shown in Theorem 9, the first-order necessary condition of optimality for problem (24) can be stated as

$$\psi'(x; y - x) \ge 0, \quad \forall y \in K. \tag{28}$$
If one wishes to solve VVI (1) via the optimization problem (24), one needs its global optimal solution. However, since the function ψ is in general nonconvex, it is difficult to find a global optimal solution. Fortunately, we can prove that any point satisfying the first-order condition of optimality is a solution of VVI (1). Thus, existing optimization algorithms can be used to find a solution of VVI (1).
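For a numerical stationarity test based on (28), a one-sided difference quotient is the appropriate surrogate, since ψ is directionally differentiable but not necessarily differentiable; the sketch below builds on `psi_of_x` above.

```python
# One-sided difference approximation of the directional derivative
# psi'(x; d) of (25); a forward (not central) quotient is used because
# psi need not be differentiable at x.
def dpsi(x, d, F_list, t=1e-6):
    return (psi_of_x(x + t * d, F_list)[0] - psi_of_x(x, F_list)[0]) / t
```

Since ψ′(x; ⋅) is an infimum of linear functions, the map y ↦ ψ′(x; y − x) is concave, so for a polytopal K its minimum over y ∈ K is attained at a vertex; checking that `dpsi(x, y - x, F_list)` is nonnegative (up to tolerance) at the vertices is therefore a practical way to test (28).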
Theorem 10. Assume that F_j are continuously differentiable and that the Jacobian matrices ∇_x F_j(x) are positive definite for all x ∈ K (j = 1, ..., p). If x* ∈ K satisfies the first-order condition of optimality (28), then x* solves VVI (1).
Proof. It follows from Theorem 9 and (28) that, for some ξ ∈ B(x*),

$$\langle \nabla_x \varphi_\xi(x^*),\, y - x^* \rangle \ge 0 \tag{29}$$

holds for any y ∈ K. This implies that x* is a stationary point of problem (8). It follows from Lemma 6 that x* solves VI_ξ (2). From Lemma 7, we see that x* solves VVI (1). This completes the proof.
From Theorems 8 and 10, it is easy to get the following
corollary.
Corollary 11. Assume that the conditions in Theorem 10 are
all satisfied. If π‘₯∗ ∈ 𝐾 satisfies the first-order condition of
optimality (28), then π‘₯∗ is a global optimal solution of problem
(24).
4. Stochastic Vector Variational Inequality
In this section, we consider the stochastic vector variational
inequality (SVVI). First, we present a deterministic reformulation for SVVI by employing the ERM method and the
regularized gap function. Second, we solve this reformulation
by the sample average approximation (SAA) method.
In many important practical applications, the functions F_j (j = 1, ..., p) involve some random factors or uncertainties. Let (Ω, F, P) be a probability space. Taking the randomness into account, we get a stochastic vector variational inequality problem (SVVI): find a vector x* ∈ K such that
$$\left( \langle F_1(x^*, \omega), y - x^* \rangle, \ldots, \langle F_p(x^*, \omega), y - x^* \rangle \right)^T \notin -\operatorname{int} \mathbb{R}^p_+, \quad \forall y \in K, \ \text{a.s.}, \tag{30}$$
where 𝐹𝑗 : R𝑛 × Ω → R𝑛 are mappings and a.s. is the
abbreviation for “almost surely” under the given probability
measure 𝑃.
Because of the random element πœ”, we cannot generally
find a vector π‘₯∗ ∈ 𝐾 such that (30) holds almost surely.
That is, (30) is not well defined if we think of solving
(30) before knowing the realization πœ”. Therefore, in order
to get a reasonable resolution, an appropriate deterministic
reformulation for SVVI becomes an important issue in the
study of the considered problem. In this section, we will
employ the ERM method to solve (30).
Define

$$\psi(x, \omega) := \min_{\xi \in B} \varphi_\xi(x, \omega), \tag{31}$$

where

$$\varphi_\xi(x, \omega) := \max_{y \in K} \Big\{ \Big\langle \sum_{j=1}^{p} \xi_j F_j(x, \omega),\, x - y \Big\rangle - \frac{1}{2}\, |x - y|_G^2 \Big\}. \tag{32}$$
The maximum in (32) is attained at

$$H_\xi(x, \omega) := \operatorname{Proj}_{K,G}\Big( x - G^{-1} \sum_{j=1}^{p} \xi_j F_j(x, \omega) \Big), \tag{33}$$

and

$$\varphi_\xi(x, \omega) = \Big\langle \sum_{j=1}^{p} \xi_j F_j(x, \omega),\, x - H_\xi(x, \omega) \Big\rangle - \frac{1}{2}\, |x - H_\xi(x, \omega)|_G^2. \tag{34}$$

The ERM reformulation is given as follows:

$$\min_{x \in K} \Theta(x) := \mathbb{E}[\psi(x, \omega)], \tag{35}$$

where E is the expectation operator.

Note that the objective function Θ(x) contains the mathematical expectation. In practical applications, it is in general very difficult to calculate E[ψ(x, ω)] in a closed form. Thus, we will have to approximate it through discretization. One of the most popular discretization approaches is the sample average approximation method. In general, for an integrable function φ : Ω → R, we approximate the expected value E[φ(ω)] with the sample average (1/N_k) ∑_{ω_i ∈ Ω_k} φ(ω_i), where ω_1, ..., ω_{N_k} are independently and identically distributed random samples of ω and Ω_k := {ω_1, ..., ω_{N_k}}. By the strong law of large numbers, we get the following lemma.

Lemma 12. If φ(ω) is integrable, then

$$\lim_{k \to \infty} \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \phi(\omega_i) = \mathbb{E}[\phi(\omega)] \tag{36}$$

holds with probability one.

Let Θ_k(x) := (1/N_k) ∑_{ω_i ∈ Ω_k} ψ(x, ω_i). Applying the above technique, we get the following approximation of (35):

$$\min_{x \in K} \Theta_k(x). \tag{37}$$
In the rest of this section, we focus on the case

$$F_j(x, \omega) := M_j(\omega)\, x + Q_j(\omega) \quad (j = 1, \ldots, p), \tag{38}$$

where M_j : Ω → R^{n×n} and Q_j : Ω → R^n are measurable functions such that

$$\mathbb{E}\big[ |M_j(\omega)|^2 \big] < +\infty, \qquad \mathbb{E}\big[ |Q_j(\omega)|^2 \big] < +\infty, \qquad j = 1, \ldots, p. \tag{39}$$

This condition implies that

$$\mathbb{E}\Big[ \Big( \sum_{j=1}^{p} |M_j(\omega)| \Big)^{2} \Big] < +\infty, \qquad \mathbb{E}\Big[ \Big( \sum_{j=1}^{p} |M_j(\omega)| \Big) \Big( \sum_{j=1}^{p} |Q_j(\omega)| \Big) \Big] < +\infty, \tag{40}$$

and, for any scalar c,

$$\mathbb{E}\Big[ \sum_{j=1}^{p} \big( |M_j(\omega)|\, c + |Q_j(\omega)| \big) \Big] < +\infty. \tag{41}$$
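Everything above can be combined into a small end-to-end experiment for the affine case (38). The sketch below uses entirely hypothetical data (p = 2, n = 3, Gaussian perturbations for M_j(ω) and Q_j(ω)), reuses `psi_of_x` from Section 3 with K = [0, 1]^n and G = I, and minimizes Θ_k as in (37).

```python
# End-to-end ERM/SAA sketch for the affine case (38); all data hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
p, n, N_k = 2, 3, 200
M0 = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(p)]
q0 = [rng.standard_normal(n) for _ in range(p)]

def sample_F_list():
    """One realization omega_i: the affine maps x -> M_j x + Q_j of (38)."""
    return [(lambda x, M=M0[j] + 0.05 * rng.standard_normal((n, n)),
                       q=q0[j] + 0.05 * rng.standard_normal(n): M @ x + q)
            for j in range(p)]

F_samples = [sample_F_list() for _ in range(N_k)]   # the sample set Omega_k

def Theta_k(x):
    """Objective of (37), built from psi_of_x of Section 3."""
    return float(np.mean([psi_of_x(x, F_list)[0] for F_list in F_samples]))

res = minimize(Theta_k, x0=np.full(n, 0.5), method="SLSQP",
               bounds=[(0.0, 1.0)] * n)             # minimize over K = [0,1]^n
print(res.x, res.fun)
```

Theorem 16 below justifies this procedure: as N_k → ∞, accumulation points of minimizers of (37) solve the ERM problem (35).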
The following results will be useful in the proof of the convergence result.

Lemma 13. Let f, g : R^p → R_+ be continuous. If min_{ξ∈B} f(ξ) < +∞ and min_{ξ∈B} g(ξ) < +∞, then

$$\Big| \min_{\xi \in B} f(\xi) - \min_{\xi \in B} g(\xi) \Big| \le \max_{\xi \in B} |f(\xi) - g(\xi)|. \tag{42}$$

Proof. Without loss of generality, we assume that min_{ξ∈B} f(ξ) ≤ min_{ξ∈B} g(ξ). Let $\bar{\xi}$ minimize f and $\hat{\xi}$ minimize g, respectively. Hence, $f(\bar{\xi}) \le g(\hat{\xi})$ and $g(\hat{\xi}) \le g(\bar{\xi})$. Thus,

$$\Big| \min_{\xi \in B} f(\xi) - \min_{\xi \in B} g(\xi) \Big| = g(\hat{\xi}) - f(\bar{\xi}) \le g(\bar{\xi}) - f(\bar{\xi}) \le \max_{\xi \in B} |f(\xi) - g(\xi)|. \tag{43}$$

This completes the proof.

Lemma 14. When F_j(x, ω) = M_j(ω)x + Q_j(ω) (j = 1, ..., p), the function φ_ξ(x, ω) is continuously differentiable in x almost surely, and its gradient is given by

$$\nabla_x \varphi_\xi(x, \omega) = \sum_{j=1}^{p} \xi_j F_j(x, \omega) - \Big( \sum_{j=1}^{p} \xi_j M_j(\omega) - G \Big) \big( H_\xi(x, \omega) - x \big). \tag{44}$$

Proof. The proof is the same as that of Theorem 3.2 in [21], so we omit it here.

Lemma 15. For any x ∈ K, one has

$$|x - H_\xi(x, \omega)| \le \frac{2}{\lambda_{\min}} \sum_{j=1}^{p} |F_j(x, \omega)|. \tag{45}$$

Proof. For any fixed ω ∈ Ω, φ_ξ(x, ω) is the gap function of the following scalar variational inequality: find a vector x* ∈ K such that

$$\Big\langle \sum_{j=1}^{p} \xi_j F_j(x^*, \omega),\, y - x^* \Big\rangle \ge 0, \quad \forall y \in K. \tag{46}$$
Hence, φ_ξ(x, ω) ≥ 0 for all x ∈ K. From (4) and (34), we have

$$\begin{aligned} \frac{1}{2}\, |x - H_\xi(x, \omega)|_G^2 &\le \Big\langle \sum_{j=1}^{p} \xi_j F_j(x, \omega),\, x - H_\xi(x, \omega) \Big\rangle \\ &\le \Big| \sum_{j=1}^{p} \xi_j F_j(x, \omega) \Big|\; |x - H_\xi(x, \omega)| \\ &\le \frac{1}{\sqrt{\lambda_{\min}}} \sum_{j=1}^{p} |F_j(x, \omega)|\; |x - H_\xi(x, \omega)|_G, \end{aligned} \tag{47}$$

and so

$$|x - H_\xi(x, \omega)|_G \le \frac{2}{\sqrt{\lambda_{\min}}} \sum_{j=1}^{p} |F_j(x, \omega)|. \tag{48}$$

It follows from (4) that

$$|x - H_\xi(x, \omega)| \le \frac{1}{\sqrt{\lambda_{\min}}}\, |x - H_\xi(x, \omega)|_G \le \frac{2}{\lambda_{\min}} \sum_{j=1}^{p} |F_j(x, \omega)|. \tag{49}$$

This completes the proof.

Now, we obtain the convergence of optimal solutions of problem (37) in the following theorem.

Theorem 16. Let {x^k} be a sequence of optimal solutions of problem (37). Then, any accumulation point of {x^k} is an optimal solution of problem (35).
Proof. Let x* be an accumulation point of {x^k}. Without loss of generality, we assume that x^k itself converges to x* as k tends to infinity. It is obvious that x* ∈ K. At first, we will show that

$$\lim_{k \to \infty} \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^k, \omega_i) = \mathbb{E}[\psi(x^*, \omega)]. \tag{50}$$

From Lemma 12, it suffices to show that

$$\lim_{k \to \infty} \Big| \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^k, \omega_i) - \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^*, \omega_i) \Big| = 0. \tag{51}$$

It follows from Lemma 13 and the mean-value theorem that

$$\begin{aligned} \Big| \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^k, \omega_i) - \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^*, \omega_i) \Big| &\le \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} |\psi(x^k, \omega_i) - \psi(x^*, \omega_i)| \\ &= \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \Big| \min_{\xi \in B} \varphi_\xi(x^k, \omega_i) - \min_{\xi \in B} \varphi_\xi(x^*, \omega_i) \Big| \\ &\le \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \max_{\xi \in B} |\varphi_\xi(x^k, \omega_i) - \varphi_\xi(x^*, \omega_i)| \\ &\le \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \max_{\xi \in B} |\nabla_x \varphi_\xi(y^{k,i}, \omega_i)|\; |x^k - x^*|, \end{aligned} \tag{52}$$

where y^{k,i} = λ_{k,i} x^k + (1 − λ_{k,i}) x* with λ_{k,i} ∈ [0, 1]. Because lim_{k→+∞} x^k = x*, there exists a constant C such that |x^k| ≤ C for each k. By the definition of y^{k,i}, we have |y^{k,i}| ≤ C. Hence, for any ξ ∈ B and ω_i ∈ Ω_k,

$$\begin{aligned} |\nabla_x \varphi_\xi(y^{k,i}, \omega_i)| &= \Big| \sum_{j=1}^{p} \xi_j F_j(y^{k,i}, \omega_i) - \Big( \sum_{j=1}^{p} \xi_j M_j(\omega_i) - G \Big) \big( H_\xi(y^{k,i}, \omega_i) - y^{k,i} \big) \Big| \\ &\le \sum_{j=1}^{p} |\xi_j|\, |F_j(y^{k,i}, \omega_i)| + \Big( \sum_{j=1}^{p} |\xi_j|\, |M_j(\omega_i)| + |G| \Big) \big| H_\xi(y^{k,i}, \omega_i) - y^{k,i} \big| \\ &\le \sum_{j=1}^{p} |F_j(y^{k,i}, \omega_i)| + \frac{2}{\lambda_{\min}} \Big( \sum_{j=1}^{p} |M_j(\omega_i)| + |G| \Big) \sum_{j=1}^{p} |F_j(y^{k,i}, \omega_i)| \\ &\le \Big( \frac{2}{\lambda_{\min}} \sum_{j=1}^{p} |M_j(\omega_i)| + \frac{2}{\lambda_{\min}} |G| + 1 \Big) \sum_{j=1}^{p} \big( |M_j(\omega_i)|\, C + |Q_j(\omega_i)| \big) \\ &= \frac{2C}{\lambda_{\min}} \Big( \sum_{j=1}^{p} |M_j(\omega_i)| \Big)^{2} + \Big( \frac{2}{\lambda_{\min}} |G| + 1 \Big) \sum_{j=1}^{p} \big( |M_j(\omega_i)|\, C + |Q_j(\omega_i)| \big) \\ &\quad + \frac{2}{\lambda_{\min}} \Big( \sum_{j=1}^{p} |M_j(\omega_i)| \Big) \Big( \sum_{j=1}^{p} |Q_j(\omega_i)| \Big), \end{aligned} \tag{53}$$

where the second inequality is from Lemma 15. Thus,

$$\begin{aligned} \Big| \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^k, \omega_i) - \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^*, \omega_i) \Big| &\le \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \max_{\xi \in B} |\nabla_x \varphi_\xi(y^{k,i}, \omega_i)|\; |x^k - x^*| \\ &\le \frac{2C}{\lambda_{\min}}\, |x^k - x^*|\, \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \Big( \sum_{j=1}^{p} |M_j(\omega_i)| \Big)^{2} \\ &\quad + |x^k - x^*| \Big( \frac{2}{\lambda_{\min}} |G| + 1 \Big) \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \sum_{j=1}^{p} \big( |M_j(\omega_i)|\, C + |Q_j(\omega_i)| \big) \\ &\quad + \frac{2}{\lambda_{\min}}\, |x^k - x^*|\, \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \Big( \sum_{j=1}^{p} |M_j(\omega_i)| \Big) \Big( \sum_{j=1}^{p} |Q_j(\omega_i)| \Big). \end{aligned} \tag{54}$$

From (40) and (41), each term in the last inequality above converges to zero, and so

$$\lim_{k \to \infty} \Big| \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^k, \omega_i) - \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^*, \omega_i) \Big| = 0. \tag{55}$$

Since

$$\lim_{k \to \infty} \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^*, \omega_i) = \mathbb{E}[\psi(x^*, \omega)], \tag{56}$$

(50) is true.

Now, we are in a position to show that x* is an optimal solution of problem (35). Since x^k solves problem (37) for each k, we have that, for any x ∈ K,

$$\frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x^k, \omega_i) \le \frac{1}{N_k} \sum_{\omega_i \in \Omega_k} \psi(x, \omega_i). \tag{57}$$

Letting k → ∞ above, we get from Lemma 12 and (50) that

$$\mathbb{E}[\psi(x^*, \omega)] \le \mathbb{E}[\psi(x, \omega)], \tag{58}$$

which means that x* is an optimal solution of problem (35).
This completes the proof.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11171237, 11101069, and 71101099), by the Construction Foundation of Southwest University for Nationalities for the subject of applied economics (2011XWDS0202), and by Sichuan University (SKQY201330).

References
[1] F. Giannessi, Ed., Vector Variational Inequalities and Vector
Equilibria, vol. 38, Kluwer Academic, Dordrecht, The Netherlands, 2000.
[2] S. J. Li and C. R. Chen, “Stability of weak vector variational
inequality,” Nonlinear Analysis: Theory, Methods & Applications,
vol. 70, no. 4, pp. 1528–1535, 2009.
[3] Y. H. Cheng and D. L. Zhu, “Global stability results for the weak
vector variational inequality,” Journal of Global Optimization,
vol. 32, no. 4, pp. 543–550, 2005.
[4] X. Q. Yang and J. C. Yao, “Gap functions and existence of
solutions to set-valued vector variational inequalities,” Journal
of Optimization Theory and Applications, vol. 115, no. 2, pp. 407–
417, 2002.
[5] N.-J. Huang and Y.-P. Fang, “On vector variational inequalities
in reflexive Banach spaces,” Journal of Global Optimization, vol.
32, no. 4, pp. 495–505, 2005.
[6] S. J. Li, H. Yan, and G. Y. Chen, “Differential and sensitivity
properties of gap functions for vector variational inequalities,”
Mathematical Methods of Operations Research, vol. 57, no. 3, pp.
377–391, 2003.
[7] K. W. Meng and S. J. Li, “Differential and sensitivity properties
of gap functions for Minty vector variational inequalities,”
Journal of Mathematical Analysis and Applications, vol. 337, no.
1, pp. 386–398, 2008.
[8] N. J. Huang, J. Li, and J. C. Yao, “Gap functions and existence of
solutions for a system of vector equilibrium problems,” Journal
of Optimization Theory and Applications, vol. 133, no. 2, pp. 201–
212, 2007.
[9] G.-Y. Chen, C.-J. Goh, and X. Q. Yang, “On gap functions for
vector variational inequalities,” in Vector Variational Inequalities
and Vector Equilibria, vol. 38, pp. 55–72, Kluwer Academic,
Dordrecht, The Netherlands, 2000.
[10] R. P. Agdeppa, N. Yamashita, and M. Fukushima, “Convex expected residual models for stochastic affine variational
inequality problems and its application to the traffic equilibrium
problem,” Pacific Journal of Optimization, vol. 6, no. 1, pp. 3–19,
2010.
[11] X. Chen and M. Fukushima, “Expected residual minimization
method for stochastic linear complementarity problems,” Mathematics of Operations Research, vol. 30, no. 4, pp. 1022–1038,
2005.
[12] X. Chen, C. Zhang, and M. Fukushima, “Robust solution of
monotone stochastic linear complementarity problems,” Mathematical Programming, vol. 117, no. 1-2, pp. 51–80, 2009.
[13] H. Fang, X. Chen, and M. Fukushima, “Stochastic 𝑅0 matrix linear complementarity problems,” SIAM Journal on Optimization,
vol. 18, no. 2, pp. 482–506, 2007.
[14] H. Jiang and H. Xu, “Stochastic approximation approaches to
the stochastic variational inequality problem,” IEEE Transactions on Automatic Control, vol. 53, no. 6, pp. 1462–1475, 2008.
[15] G.-H. Lin and M. Fukushima, “New reformulations for stochastic nonlinear complementarity problems,” Optimization Methods & Software, vol. 21, no. 4, pp. 551–564, 2006.
[16] C. Ling, L. Qi, G. Zhou, and L. Caccetta, “The 𝑆𝐢1 property of
an expected residual function arising from stochastic complementarity problems,” Operations Research Letters, vol. 36, no. 4,
pp. 456–460, 2008.
[17] M.-J. Luo and G.-H. Lin, “Stochastic variational inequality
problems with additional constraints and their applications in
supply chain network equilibria,” Pacific Journal of Optimization, vol. 7, no. 2, pp. 263–279, 2011.
[18] M. J. Luo and G. H. Lin, “Expected residual minimization
method for stochastic variational inequality problems,” Journal
of Optimization Theory and Applications, vol. 140, no. 1, pp. 103–
116, 2009.
[19] M. J. Luo and G. H. Lin, “Convergence results of the ERM
method for nonlinear stochastic variational inequality problems,” Journal of Optimization Theory and Applications, vol. 142,
no. 3, pp. 569–581, 2009.
[20] C. Zhang and X. Chen, “Stochastic nonlinear complementarity
problem and applications to traffic equilibrium under uncertainty,” Journal of Optimization Theory and Applications, vol. 137,
no. 2, pp. 277–295, 2008.
[21] M. Fukushima, “Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems,” Mathematical Programming, vol. 53, no. 1, pp. 99–
110, 1992.
[22] R. T. Rockafellar, Convex Analysis, Princeton University Press,
Princeton, NJ, USA, 1970.
[23] W. Hogan, “Directional derivatives for extremal-value functions with applications to the completely convex case,” Operations Research, vol. 21, pp. 188–209, 1973.