
Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 497586, 7 pages
http://dx.doi.org/10.1155/2013/497586
Research Article
Properties of Expected Residual Minimization Model for
a Class of Stochastic Complementarity Problems
Mei-Ju Luo¹ and Yuan Lu²
¹ School of Mathematics, Liaoning University, Liaoning 110036, China
² School of Sciences, Shenyang University, Liaoning 110044, China
Correspondence should be addressed to Mei-Ju Luo; meiju luo@yahoo.cn
Received 31 January 2013; Revised 10 May 2013; Accepted 14 May 2013
Academic Editor: Farhad Hosseinzadeh Lotfi
Copyright © 2013 M.-J. Luo and Y. Lu. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The expected residual minimization (ERM) model, which minimizes an expected residual function defined by an NCP function, has been studied in the literature for solving stochastic complementarity problems. In this paper, we first give the definitions of the stochastic P-function, stochastic P_0-function, and stochastic uniformly P-function. Furthermore, conditions under which the function is a stochastic P(P_0)-function are considered. We then study the boundedness of the solution set and global error bounds of the expected residual functions defined by the Fischer-Burmeister (FB) function and the "min" function. The conclusion indicates that solutions of the ERM model are robust in the sense that they may have minimum sensitivity with respect to random parameter variations in stochastic complementarity problems. Finally, we employ quasi-Monte Carlo methods and derivative-free methods to solve the ERM model.
1. Introduction
Given a vector-valued function F : R^n × Ω → R^n, the stochastic complementarity problem, denoted by SCP(F(x, ω)), is to find a vector x* such that

x* ≥ 0,  F(x*, ω) ≥ 0,  (x*)^T F(x*, ω) = 0,  ω ∈ Ω a.s., (1)

where ω ∈ Ω ⊆ R^m is a random vector with given probability distribution P and "a.s." means "almost surely" under the given probability measure. In particular, when F is an affine function of x for any ω, that is,

F(x, ω) = M(ω)x + q(ω),  ω ∈ Ω, (2)
where 𝑀(πœ”) ∈ R𝑛×𝑛 and π‘ž(πœ”) ∈ R𝑛 , the SCP(𝐹(π‘₯, πœ”)) is
called stochastic linear complementarity problems, denoted
by SLCP(𝑀(πœ”), π‘ž(πœ”)). Correspondingly, problem (1) is called
stochastic nonlinear complementarity problem, denoted by
SNCP(𝐹(π‘₯, πœ”)) if 𝐹 can not be denoted by an affine function
of π‘₯ for any πœ”. The deterministic problems, which are called
complementarity problems (denoted by CP(𝐹(π‘₯))), have
been intensively studied. More information about theoretical
analysis, numerical algorithms and applications especially in
economics and engineering can be found in comprehensive
books [1, 2].
In practical applications, some elements may involve stochastic factors. In fact, due to stochastic factors, the function value of F depends not only on x but also on random vectors. Hence, problem (1) does not in general have a solution for almost all ω ∈ Ω. To solve such problems, researchers focus on giving reasonable deterministic reformulations of SCP(F(x, ω)). Certainly, different deterministic formulations may yield different solutions that are optimal in different senses. In the study of SCP(F(x, ω)), three types of formulations have been proposed: the expected value (EV) formulation, the expected residual minimization (ERM) formulation, and the SMPEC formulation [3].
The EV formulation is studied by Gürkan et al. [4]. The problem considered in [4] is actually a stochastic variational inequality problem. When applied to the SCP(F(x, ω)), the EV model can be stated as follows:

x* ≥ 0,  E[F(x*, ω)] ≥ 0,  (x*)^T E[F(x*, ω)] = 0, (3)

where E means expectation with respect to ω.
The ERM model was first proposed by Chen and Fukushima [5] for solving the SLCP(M(ω), q(ω)). By employing an NCP function φ, the SCP(F(x, ω)) (1) is transformed equivalently into the stochastic equations

Φ(x, ω) = 0,  ω ∈ Ω a.s., (4)

where Φ : R^n × Ω → R^n is defined by

Φ(x, ω) := (φ(x_1, F_1(x, ω)), ..., φ(x_n, F_n(x, ω)))^T, (5)

and x_i denotes the ith component of the vector x. Here φ : R^2 → R is an NCP function, which has the property

φ(a, b) = 0  ⟺  a ≥ 0, b ≥ 0, ab = 0. (6)

Then the ERM formulation for (1) is given by

min_{x ∈ R^n_+} θ(x) := E[‖Φ(x, ω)‖^2]. (7)

The NCP functions employed in [5] include the Fischer-Burmeister function, which is defined by

φ_FB(a, b) := √(a^2 + b^2) − (a + b), (8)

and the min function

φ_min(a, b) := min{a, b}. (9)

In particular, it is known [6, 7] that the following relation holds between these two functions:

(2/(√2 + 2)) |φ_min(a, b)| ≤ |φ_FB(a, b)| ≤ (√2 + 2) |φ_min(a, b)|. (10)
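To make these ingredients concrete, the following minimal Python sketch implements φ_FB, φ_min, the residual Φ in (5), and the expected residual θ in (7) for a finite sample space; it also checks relation (10) numerically on random points. The two-scenario matrices M(ω), vectors q(ω), and probabilities below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def phi_fb(a, b):
    """Fischer-Burmeister NCP function (8)."""
    return np.sqrt(a ** 2 + b ** 2) - (a + b)

def phi_min(a, b):
    """'min' NCP function (9)."""
    return np.minimum(a, b)

def theta(x, scenarios, probs, F, phi):
    """Expected residual theta(x) = E[||Phi(x, omega)||^2] from (5) and (7),
    for a finite sample space {omega^i} with probabilities p_i."""
    return sum(p * np.sum(phi(x, F(x, w)) ** 2)
               for w, p in zip(scenarios, probs))

# Toy SLCP (2): F(x, omega) = M(omega) x + q(omega) with two scenarios.
# All data below are illustrative assumptions, not taken from the paper.
M = [np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([[1.0, 0.5], [0.5, 3.0]])]
q = [np.array([-1.0, 2.0]), np.array([0.5, -1.0])]
F = lambda x, w: M[w] @ x + q[w]
scenarios, probs = [0, 1], [0.6, 0.4]

x = np.array([0.3, 0.0])
print("theta_FB(x)  =", theta(x, scenarios, probs, F, phi_fb))
print("theta_min(x) =", theta(x, scenarios, probs, F, phi_min))

# Numerical check of relation (10): the ratio |phi_FB| / |phi_min|
# should stay within [2/(sqrt(2)+2), sqrt(2)+2] wherever phi_min != 0.
a, b = np.random.randn(10000), np.random.randn(10000)
mask = np.abs(phi_min(a, b)) > 1e-8
ratio = np.abs(phi_fb(a, b))[mask] / np.abs(phi_min(a, b))[mask]
print("ratio range:", ratio.min(), ratio.max(),
      " bounds:", 2 / (np.sqrt(2) + 2), np.sqrt(2) + 2)
```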
As observed in [5], the ERM formulations with different NCP functions may have different properties. Subsequently, the ERM formulation for SCP(F(x, ω)) has been studied in [6, 8–13]. Note that Fang et al. [8] propose a new concept of stochastic matrix: M(·) is called a stochastic R_0 matrix if

P{ω : x ≥ 0, M(ω)x ≥ 0, x^T M(ω)x = 0} = 1  ⟹  x = 0. (11)

Moreover, Zhang and Chen [11] introduce a new concept of stochastic R_0 function, which can be regarded as a generalization of the stochastic R_0 matrix given in [8].
Throughout this paper, we suppose that the sample space Ω is a nonempty and compact set and that the function F(x, ω) is continuous with respect to x and ω. In addition, we will use the following notation: I(x) = {i : x_i = 0} and J(x) = {i : x_i ≠ 0} for a given vector x ∈ R^n; ⟨l, u⟩ represents the set {l, l + 1, ..., u} for natural numbers l and u with l < u; x_+ = max{x, 0} for any given vector x; and ‖·‖ refers to the Euclidean norm.
The remainder of the paper is organized as follows. In Section 2, we introduce the concepts of a stochastic P-function, a stochastic P_0-function, and a stochastic uniformly P-function, which can be regarded as generalizations of the deterministic P-function, P_0-function, and uniformly P-function, or as extensions of the stochastic P matrix and stochastic P_0 matrix [14]. In addition, some properties of stochastic P(P_0)-functions are given. In Section 3, we show sufficient conditions for the solution set of the ERM problem to be nonempty and bounded. In Section 4, we discuss error bounds of SCP(F(x, ω)). In Section 5, an algorithm is given to solve the ERM model. We then give conclusions in Section 6.

2. Stochastic P(P_0)-Function

It is well known that the P-function, P_0-function, and uniformly P-function play an important role in the theory of nonlinear complementarity problems [1]. We will introduce new concepts of stochastic P-, P_0-, and uniformly P-functions, which can be regarded as generalizations of their deterministic forms or as extensions of the stochastic P matrix and stochastic P_0 matrix.
Definition 1 (see [14]). M(·) is called a stochastic P(P_0)-matrix if, for every x ≠ 0 in R^n, there exists i ∈ J(x) such that

P{ω : x_i (M(ω)x)_i > 0 (≥ 0)} > 0. (12)

Definition 2. A function F : R^n × Ω → R^n is a stochastic P(P_0)-function if, for every x ≠ y in R^n, there exists i ∈ J(x, y), i ∈ ⟨1, n⟩, such that

x_i ≠ y_i,  P{ω : (x_i − y_i)(F_i(x, ω) − F_i(y, ω)) > 0 (≥ 0)} > 0. (13)

Definition 3. A function F : R^n × Ω → R^n is a stochastic uniformly P-function if there exists a positive constant α such that, for every x ≠ y in R^n, there exists i ∈ J(x, y), i ∈ ⟨1, n⟩, with

P{ω : (x_i − y_i)(F_i(x, ω) − F_i(y, ω)) ≥ α‖x − y‖^2} > 0. (14)

Clearly, every stochastic uniformly P-function must be a stochastic P-function, which in turn must be a stochastic P_0-function. We further cite the definition of "equicoercive" from [11]. More information about this definition can be found in [11].

Definition 4 (see [11]). We say that F : R^n × Ω → R^n is equicoercive on D ⊆ R^n if, for any {x^k} ⊆ D satisfying ‖x^k‖ → ∞, the existence of {ω^k} ⊆ supp Ω with lim_{k→∞} F_i(x^k, ω^k) = ∞ (lim_{k→∞} (−F_i(x^k, ω^k))_+ = ∞) for some i ∈ ⟨1, n⟩ implies that

P{ω : lim_{k→∞} F_i(x^k, ω) = ∞} > 0  (P{ω : lim_{k→∞} (−F_i(x^k, ω))_+ = ∞} > 0), (15)
where

supp Ω := {ω̄ ∈ Ω : ∫_{B(ω̄, ν) ∩ Ω} dF(ω) > 0 for any ν > 0} (16)

and B(ω̄, ν) := {ω : ‖ω − ω̄‖ < ν} and F(ω) is the distribution function of ω. More details about supp Ω are included in [8].

Proposition 5. If F is a stochastic P_0-function, then F + εx is a stochastic P-function for every ε > 0.

Proof. From the definition of a stochastic P_0-function, for every x ≠ y there exist i ∈ J(x, y), i ∈ ⟨1, n⟩, such that

x_i ≠ y_i,  P{ω : (x_i − y_i)(F_i(x, ω) − F_i(y, ω)) ≥ 0} > 0. (17)

Hence, we have

P{ω : (x_i − y_i)(F_i(x, ω) + εx_i − (F_i(y, ω) + εy_i)) > 0}
  = P{ω : (x_i − y_i)(F_i(x, ω) − F_i(y, ω)) + ε(x_i − y_i)^2 > 0}
  ≥ P{ω : (x_i − y_i)(F_i(x, ω) − F_i(y, ω)) ≥ 0} > 0. (18)

This proposition gives the relationship between the stochastic P_0-function and the stochastic P-function.

Proposition 6. Let F be an affine function of x for any ω ∈ Ω, defined by (2). Then F is a stochastic P(P_0)-function if and only if M(·) is a stochastic P(P_0) matrix.

Proof. By the definition of a stochastic P(P_0)-function, for every x ≠ y there exist i ∈ J(x, y), i ∈ ⟨1, n⟩, such that

x_i ≠ y_i,  P{ω : (x_i − y_i)(F_i(x, ω) − F_i(y, ω)) > 0 (≥ 0)} > 0, (19)

which is equivalent to

x_i ≠ y_i,  P{ω : (x_i − y_i)(M(ω)(x − y))_i > 0 (≥ 0)} > 0, (20)

when F is defined by (2). Set z = x − y; then z ≠ 0, and formulation (20) holds if and only if, for every z ≠ 0, there exists i ∈ J(z) such that

P{ω : z_i (M(ω)z)_i > 0 (≥ 0)} > 0. (21)

Hence, M(·) is a stochastic P(P_0) matrix.
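To illustrate Definition 1 and Proposition 6, the following sketch empirically tests whether a matrix family M(·) with a finite sample space behaves as a stochastic P(P_0)-matrix by sampling vectors x ≠ 0; a sampling check of this kind can only refute, never prove, the property. The two scenarios and their probabilities are hypothetical data chosen for illustration.

```python
import numpy as np

def is_stochastic_P_matrix(Ms, probs, trials=2000, seed=0, strict=True):
    """Empirical test of Definition 1 for a finite sample space.

    Ms: list of matrices M(omega^i); probs: their probabilities.
    For each sampled x != 0 we look for an index i in J(x) = {i : x_i != 0}
    with P{omega : x_i * (M(omega) x)_i > 0 (>= 0)} > 0.
    Returns False as soon as a counterexample x is found.
    """
    rng = np.random.default_rng(seed)
    n = Ms[0].shape[0]
    cmp = (lambda t: t > 0) if strict else (lambda t: t >= 0)
    for _ in range(trials):
        x = rng.standard_normal(n)
        ok = False
        for i in np.nonzero(x)[0]:
            # probability mass of scenarios where x_i (M(omega) x)_i > 0 (or >= 0)
            mass = sum(p for M, p in zip(Ms, probs) if cmp(x[i] * (M @ x)[i]))
            if mass > 0:
                ok = True
                break
        if not ok:
            return False   # this x witnesses failure of the stochastic P (P_0) property
    return True            # no counterexample found among the sampled points

# Two-scenario example (illustrative data only).
Ms = [np.array([[2.0, -1.0], [0.0, 1.0]]), np.array([[1.0, 0.0], [-1.0, 3.0]])]
probs = [0.5, 0.5]
print("stochastic P matrix (sampled check):", is_stochastic_P_matrix(Ms, probs))
```

By Proposition 6, the same check applied to M(·) also indicates whether the affine F(x, ω) = M(ω)x + q(ω) is a stochastic P(P_0)-function.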
Proposition 7. F is a stochastic P(P_0)-function if and only if there exists ω̄ ∈ supp Ω such that F(·, ω̄) is a P(P_0)-function.

Proof. For the "if" part, suppose on the contrary that F is not a stochastic P(P_0)-function; then there exist x̃, ỹ, x̃ ≠ ỹ in R^n, such that for any i ∈ ⟨1, n⟩,

x̃_i ≠ ỹ_i,  P{ω : (x̃_i − ỹ_i)(F_i(x̃, ω) − F_i(ỹ, ω)) > 0 (≥ 0)} = 0. (22)

On the other hand, since F(·, ω̄) is a P(P_0)-function, for x̃, ỹ there exist i ∈ J(x̃, ỹ), i ∈ ⟨1, n⟩, such that

(x̃_i − ỹ_i)(F_i(x̃, ω̄) − F_i(ỹ, ω̄)) > 0 (≥ 0). (23)

Notice that ω̄ ∈ supp Ω; by the definition of supp Ω in (16), we have

P{ω : (x̃_i − ỹ_i)(F_i(x̃, ω) − F_i(ỹ, ω)) > 0 (≥ 0)} > 0. (24)

This contradicts formulation (22). Therefore, F is a stochastic P(P_0)-function.
Now for the "only if" part, suppose on the contrary that there does not exist ω̄ ∈ supp Ω such that F(·, ω̄) is a P(P_0)-function. Then for any i ∈ ⟨1, n⟩ and ω ∈ supp Ω, there exist x, y, x ≠ y in R^n, such that

(x_i − y_i)(F_i(x, ω) − F_i(y, ω)) ≤ 0 (< 0), (25)

which means that

x_i ≠ y_i,  P{ω ∈ supp Ω : (x_i − y_i)(F_i(x, ω) − F_i(y, ω)) > 0 (≥ 0)} = 0. (26)

By the definition of supp Ω in (16), we have P{ω ∈ Ω \ supp Ω} = 0. Hence, formulation (26) is equivalent to

x_i ≠ y_i,  P{ω ∈ Ω : (x_i − y_i)(F_i(x, ω) − F_i(y, ω)) > 0 (≥ 0)} = 0, (27)

which contradicts Definition 2. Therefore, there exists ω̄ ∈ supp Ω such that F(·, ω̄) is a P(P_0)-function.

Theorem 8. Suppose that f(x) := E[F(x, ω)] is a P(P_0)-function. Then F is a stochastic P(P_0)-function.

Proof. Suppose on the contrary that F is not a stochastic P(P_0)-function; then there exist x̃, ỹ, x̃ ≠ ỹ in R^n, such that for any i ∈ ⟨1, n⟩,

x̃_i ≠ ỹ_i,  P{ω : (x̃_i − ỹ_i)(F_i(x̃, ω) − F_i(ỹ, ω)) > 0 (≥ 0)} = 0. (28)

This means that

(x̃_i − ỹ_i)(F_i(x̃, ω) − F_i(ỹ, ω)) ≤ 0 (< 0) (29)
always holds for any i ∈ ⟨1, n⟩ and ω ∈ Ω. Furthermore, following from (29), we have

E[(x̃_i − ỹ_i)(F_i(x̃, ω) − F_i(ỹ, ω))] ≤ 0 (< 0), (30)

that is,

(x̃_i − ỹ_i)(f_i(x̃) − f_i(ỹ)) ≤ 0 (< 0), (31)

which contradicts the definition of a P(P_0)-function. Therefore, F is a stochastic P(P_0)-function.

Note that the EV model for stochastic complementarity problems has at most one solution (possibly no solution) if f(x) := E[F(x, ω)] is a P(P_0)-function.

3. Boundedness of Solution Set

Theorem 9. Suppose that F is a stochastic uniformly P-function and F is equicoercive on R^n. Then the solution set of the ERM model (7) defined by φ_min and φ_FB is nonempty and bounded.

Proof. Suppose on the contrary that the ERM model defined by φ_min is not bounded. Thus there exist a sequence {x^k} ⊂ R^n_+ with ‖x^k‖ → ∞ (k → ∞) and a constant c ∈ R_+ such that

θ(x^k) ≤ c  for all k. (32)

Define the index set I ⊆ {1, ..., n} by

I := {i | {x_i^k} is unbounded}. (33)

By assumption, we have I ≠ ∅. We now define a sequence {y^k} ⊆ R^n as follows:

y_i^k := { 0 if i ∈ I;  x_i^k if i ∉ I }. (34)

From the definition of y^k and the fact that F is a stochastic uniformly P-function, we obtain that for any x^k, y^k there exists i such that

P{ω : (x_i^k − y_i^k)(F_i(x^k, ω) − F_i(y^k, ω)) ≥ α‖x^k − y^k‖^2} > 0, (35)

and hence there are ω^k ∈ supp Ω satisfying

(x_i^k − y_i^k)(F_i(x^k, ω^k) − F_i(y^k, ω^k)) ≥ α‖x^k − y^k‖^2. (36)

Take subsequences x^{k_i}, y^{k_i} such that the corresponding subscript in (36) is j. Noting that j ∈ I and taking (36) into account, we have

α ∑_{j∈I} (x_j^{k_i})^2 ≤ x_j^{k_i} (F_j(x^{k_i}, ω^{k_i}) − F_j(y^{k_i}, ω^{k_i})) ≤ √(∑_{j∈I} (x_j^{k_i})^2) · √(∑_{j∈I} |F_j(x^{k_i}, ω^{k_i}) − F_j(y^{k_i}, ω^{k_i})|^2), (37)

from which we get

α √(∑_{j∈I} (x_j^{k_i})^2) ≤ √(∑_{j∈I} |F_j(x^{k_i}, ω^{k_i}) − F_j(y^{k_i}, ω^{k_i})|^2). (38)

By definition, the sequence {y^{k_i}} remains bounded. From the continuity of F, it follows that the sequence {F_j(y^{k_i}, ω^{k_i})} is also bounded for every j ∈ I. Hence, taking a limit in (38), we obtain that there is at least one index j ∈ I such that

x_j^{k_i} → ∞,  F_j(x^{k_i}, ω^{k_i}) → ∞. (39)

Since F is equicoercive on R^n, we have

P{ω : lim_{k_i→∞} F_j(x^{k_i}, ω) = ∞} > 0. (40)

Let

Ω_1 = {ω : lim_{k_i→∞} min(x_j^{k_i}, F_j(x^{k_i}, ω)) = ∞}. (41)

Then P{Ω_1} > 0. By Fatou's lemma [15], we have

E_{Ω_1}[lim inf_{k_i→∞} (min(x_j^{k_i}, F_j(x^{k_i}, ω)))^2] ≤ lim inf_{k_i→∞} E_{Ω_1}[(min(x_j^{k_i}, F_j(x^{k_i}, ω)))^2]. (42)

Since

lim inf_{k_i→∞} (min(x_j^{k_i}, F_j(x^{k_i}, ω)))^2 = ∞ (43)

on Ω_1 and P{Ω_1} > 0, the left-hand side of formulation (42) is infinite. Therefore,

lim inf_{k_i→∞} E_{Ω_1}[(min(x_j^{k_i}, F_j(x^{k_i}, ω)))^2] = ∞. (44)

Moreover, it is easy to find

θ(x^{k_i}) = E[‖Φ(x^{k_i}, ω)‖^2] ≥ E_{Ω_1}[(min(x_j^{k_i}, F_j(x^{k_i}, ω)))^2] → ∞ (45)

as k_i → ∞. This contradicts formulation (32). Hence, the solution set of the ERM model (7) defined by φ_min is nonempty and bounded. Similar results for φ_FB can be obtained from relation (10).

4. Robust Solution
As we have shown, both the EV model and the ERM model give decisions via a deterministic formulation. However, the decisions may not be the best, or may even be infeasible, for each individual event. In fact, we should take risk into account to make
an a priori decision in many cases. Naturally, it is necessary to know how good or how bad the decision we have made can be. In this section, we study the robustness of solutions of the ERM model. Let SOL(F(x, ω)) denote the solution set of SCP(F(x, ω)), and define the distance from a point x to the set SOL(F(x, ω)) by

dist(x, SOL(F(x, ω))) := inf_{x' ∈ SOL(F(x,ω))} ‖x − x'‖. (46)

Theorem 10. Assume that Ω = {ω^1, ω^2, ..., ω^N} ⊂ R^m and that ω takes the values ω^1, ..., ω^N with respective probabilities p_1, ..., p_N. Furthermore, suppose that for every ω ∈ Ω, F(x, ω) is a uniformly P-function and Lipschitz continuous with respect to x. Then there is a constant C > 0 such that

E[dist(x, SOL(F(x, ω)))] ≤ C · √θ(x), (47)

where θ(x) is defined by φ_min or φ_FB.

Proof. For any fixed ω^i, since F(x, ω^i) is a uniformly P-function and Lipschitz continuous, it follows from Corollary 3.19 of [16] that CP(F(x, ω^i)) has a unique solution x̂(ω^i) and that there exists a constant C_i such that

‖x − x̂(ω^i)‖ ≤ C_i ‖min{x, F(x, ω^i)}‖. (48)

Letting C := ((√2 + 2)/2) max{C_1, ..., C_N}, we have

E^2[dist(x, SOL(F(x, ω)))]
  = E^2[‖x − x̂(ω^i)‖]
  ≤ E[‖x − x̂(ω^i)‖^2]
  ≤ ∑_{i=1}^N p_i · C_i^2 ‖min{x, F(x, ω^i)}‖^2
  ≤ ∑_{i=1}^N p_i · C_i^2 · ((√2 + 2)/2)^2 ∑_{j=1}^n (√(F_j^2(x, ω^i) + x_j^2) − (F_j(x, ω^i) + x_j))^2
  ≤ C^2 ∑_{i=1}^N p_i ∑_{j=1}^n (√(F_j^2(x, ω^i) + x_j^2) − (F_j(x, ω^i) + x_j))^2
  = C^2 θ(x), (49)

where the first inequality follows from the Cauchy-Schwarz inequality, the second inequality follows from formulation (48), and the third inequality follows from relation (10). This completes the proof of the theorem.

Theorem 10 particularly shows that for the solution x* of (7),

E[dist(x*, SOL(F(x, ω)))] ≤ C · √θ(x*). (50)

This inequality indicates that the expected distance to the solution set SOL(F(x, ω)) for ω ∈ Ω is also likely to be small at the solution x* of (7). In other words, we may expect that a solution of the ERM formulation (7) has minimum sensitivity with respect to random parameter variations in SCP(F(x, ω)). In this sense, solutions of (7) can be regarded as robust solutions of SCP(F(x, ω)).
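As a concrete illustration of bound (47), consider a one-dimensional SLCP whose scenario problems CP(F(·, ω^i)) can be solved by hand, so that both E[dist(x, SOL(F(x, ω)))] and √θ(x) are computable exactly. The data below are an assumed toy example, not taken from the paper; for this example the constant C = 1 already works.

```python
import numpy as np

# One-dimensional toy SLCP: F(x, omega) = x - omega, with omega in {1, 2}
# and probabilities {0.5, 0.5}.  (Illustrative data, not from the paper.)
omegas = np.array([1.0, 2.0])
probs  = np.array([0.5, 0.5])

def theta_min(x):
    """theta(x) = E[phi_min(x, F(x, omega))^2] as in (7) with the min function."""
    return float(np.sum(probs * np.minimum(x, x - omegas) ** 2))

def expected_dist(x):
    """E[dist(x, SOL(F(x, omega)))]: for this F, CP(F(., omega)) has the unique
    solution x_hat(omega) = omega, so the distance is |x - omega|."""
    return float(np.sum(probs * np.abs(x - omegas)))

# Compare the two sides of (47) on a grid; here E|x - omega| <= sqrt(E(x - omega)^2),
# so C = 1 suffices for this particular example.
for x in np.linspace(0.0, 3.0, 7):
    print(f"x = {x:4.2f}   E[dist] = {expected_dist(x):6.4f}   "
          f"sqrt(theta) = {np.sqrt(theta_min(x)):6.4f}")
```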
5. Quasi-Monte Carlo and Derivative-Free Methods for Solving the ERM Model

Note that the ERM model (7) involves an expectation, which is generally difficult to evaluate exactly. Hence, in this section we first employ a quasi-Monte Carlo method for numerical integration to obtain approximation problems of (7). We then consider derivative-free methods to solve these approximation problems.
By the quasi-Monte Carlo method, we obtain the following approximation problem of (7):

min_{x ∈ R^n_+} θ_N(x) := (1/N) ∑_{ω^i ∈ Ω_N} ‖Φ(x, ω^i)‖^2 ρ(ω^i), (51)

where Ω_N = {ω^i | i = 1, 2, ..., N} is a set of observations generated by a quasi-Monte Carlo method such that Ω_N ⊆ Ω, and ρ(ω) stands for the probability density function.
In the rest of this paper, we assume that the probability density function ρ is continuous on Ω. For each N, θ_N(x) is a continuously differentiable function. We denote by x^N the optimal solutions of the approximation problems (51). We are interested in the situation where the first-order derivatives of θ_N(x) cannot be explicitly calculated or approximated.
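The following sketch shows one way to assemble the quasi-Monte Carlo approximation θ_N in (51), under the assumption that Ω is the unit box [0, 1]^m, that the observations are Halton points, and that ρ ≡ 1; the affine F below is an illustrative choice, not a specification from the paper.

```python
import numpy as np

def van_der_corput(n, base):
    """First n terms of the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        k, f, x = i + 1, 1.0, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def halton(n, dim):
    """Simple Halton point set in [0, 1]^dim (a basic quasi-Monte Carlo rule)."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    return np.column_stack([van_der_corput(n, b) for b in primes])

def theta_N(x, F, phi, N, dim, rho=lambda w: 1.0):
    """Approximation (51): (1/N) * sum over QMC points omega^i of
    ||Phi(x, omega^i)||^2 * rho(omega^i).  Omega is assumed to be [0,1]^dim
    with density rho; both are assumptions made for this sketch."""
    points = halton(N, dim)
    return sum(np.sum(phi(x, F(x, w)) ** 2) * rho(w) for w in points) / N

# Example with an affine F(x, omega) = M(omega) x + q(omega)  (illustrative data).
phi_min = np.minimum
M = lambda w: np.array([[1.0 + w[0], 0.2], [0.2, 2.0 + w[1]]])
q = lambda w: np.array([w[0] - 0.5, -w[1]])
F = lambda x, w: M(w) @ x + q(w)

x = np.array([0.4, 0.1])
for N in (16, 64, 256, 1024):
    print(N, theta_N(x, F, phi_min, N, dim=2))
```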
Condition 1. Given a point x^0 ≥ 0, the level set

L := {x ≥ 0 | f(x) ≤ f(x^0)} (52)

is compact.

Condition 2. If {x_k^N} and {y_k^N} are sequences of points such that x_k^N ≥ 0, y_k^N ≥ 0 converge to some x^N and I_k^N ⊆ I(x^N) := {i | x_i^N = 0} for all k, then

{dist(T_{I_k^N}(x_k^N), T_{I_k^N}(y_k^N))} → 0, (53)

where dist(T_1, T_2) = max_{t_1 ∈ T_1 : ‖t_1‖ = 1} {min_{t_2 ∈ T_2} ‖t_1 − t_2‖} and T_{I_k^N}(x) := {d_k^N ∈ R^n | d_{k,i}^N ≥ 0, ∀i ∈ I_k^N}.

Condition 3. For every x^N ≥ 0 there exist scalars δ > 0 and η > 0 such that

min_{z ≥ 0} ‖z − x‖ ≤ η ∑_{i=1}^n max(−x_i, 0),  ∀x ∈ {x ∈ R^n | ‖x − x^N‖ ≤ δ}. (54)

Condition 4. Given x_k^N and ε_k^N > 0, the set of search directions

D_k^N = {d_k^{N,j}, j = 1, ..., r_k^N},  with ‖d_k^{N,j}‖ = 1, (55)
satisfies that r_k^N is uniformly bounded and cone{D_k^N} = T(x_k^N; ε_k^N). Here,

cone{D_k^N} = {d_k^{N,1} β_1 + ··· + d_k^{N,r_k^N} β_{r_k^N} : β_1 ≥ 0, ..., β_{r_k^N} ≥ 0},
T(x_k^N; ε_k^N) = {d_k^N ∈ R^n | d_{k,i}^N ≥ 0, ∀i : x_{k,i}^N ≤ ε_k^N}. (56)

Under Conditions 1, 2, and 3, and by choosing D_k^N satisfying Condition 4 with ε_k^N → 0, the iterates generated by the following algorithm have at least one cluster point that is a stationary point of (51) for each N.
Algorithm 11. Parameters: x_0^N ≥ 0, α̃_0^N > 0, γ^N > 0, θ_1^N ∈ (0, 1), θ_2^N ∈ (0, 1), ε_0^N > 0.

Step 1. Set k = 0.

Step 2. Choose a set of directions D_k^N = {d_k^{N,j}, j = 1, ..., r_k^N} satisfying Condition 4.

Step 3.
(a) Set j = 1, y_k^{N,j} = x_k^N, α̃_k^{N,j} = α̃_k^N.
(b) Compute the maximum stepsize ᾱ_k^{N,j} such that y_{i,k}^{N,j} + ᾱ_k^{N,j} d_{i,k}^{N,j} ≥ 0 for all i. Set α̂_k^{N,j} = min{ᾱ_k^{N,j}, α̃_k^{N,j}}.
(c) If α̂_k^{N,j} > 0 and θ_N(ŷ_k^{N,j}) ≤ θ_N(y_k^{N,j}) − γ(α̂_k^{N,j})^2, where ŷ_k^{N,j} := y_k^{N,j} + α̂_k^{N,j} d_k^{N,j}, set α_k^{N,j} = α̂_k^{N,j}, y_k^{N,j+1} = ŷ_k^{N,j}, and α̃_k^{N,j+1} = α_k^{N,j}; otherwise set α_k^{N,j} = 0, y_k^{N,j+1} = y_k^{N,j}, and α̃_k^{N,j+1} = θ_1^N α̃_k^{N,j}.
(d) If α_k^{N,j} = ᾱ_k^{N,j}, set ε_{k+1}^N = ε_k^N, and go to Step 4.
(e) If j < r_k^N, set j = j + 1, and go to Step 3(b). Otherwise, set ε_{k+1}^N = θ_2^N ε_k^N and go to Step 4.

Step 4. Find x_{k+1}^N ≥ 0 such that θ_N(x_{k+1}^N) ≤ θ_N(y_k^{N,j+1}). Set α̃_{k+1}^N = α̃_k^{N,j+1}, r̃_k = j, k = k + 1, and go to Step 2.
For this algorithm, it is easy to prove that if {x_k^N} is the sequence produced by the algorithm under Conditions 1–4, then {x_k^N} is bounded and has at least one cluster point which is a stationary point of problem (51) for each N.
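A simplified sketch in the spirit of Algorithm 11 is given below: a feasible, derivative-free line-search method over x ≥ 0 that uses only values of θ_N, with signed coordinate directions playing the role of D_k and the sufficient-decrease test of Step 3(c). The trial-point update, stopping rule, and parameter values are assumptions filled in for this sketch and are not taken verbatim from the paper.

```python
import numpy as np

def derivative_free_min(theta, x0, alpha0=1.0, gamma=1e-4,
                        t1=0.5, t2=0.5, eps0=1.0, max_iter=200, tol=1e-8):
    """Simplified derivative-free sketch for min theta(x) over x >= 0.

    Mirrors the structure of Algorithm 11: trial points along a set of
    directions, a sufficient-decrease test theta(y + a d) <= theta(y) - gamma*a**2,
    and shrinking stepsizes on failure.  Details not visible in the source
    (stopping rule, direction set) are assumptions of this sketch.
    """
    x = np.maximum(np.asarray(x0, dtype=float), 0.0)
    n = x.size
    alpha, eps = alpha0, eps0
    for _ in range(max_iter):
        y = x.copy()
        moved = False
        # D_k: signed coordinate directions (a simple positively spanning set;
        # a stricter choice would drop -e_i for indices with x_i near 0,
        # as Condition 4 requires).
        for d in [s * e for e in np.eye(n) for s in (+1.0, -1.0)]:
            a = alpha
            while a > tol:
                trial = y + a * d
                # feasibility (trial >= 0) plus the Step 3(c) decrease test
                if np.all(trial >= 0) and theta(trial) <= theta(y) - gamma * a ** 2:
                    y, moved = trial, True
                    break
                a *= t1    # shrink the tentative stepsize, as in the "otherwise" branch
        x = y
        if not moved:
            alpha *= t1    # no direction gave sufficient decrease
            eps *= t2
            if alpha < tol:
                break
    return x

# Apply it to a theta_N built as in (51); a tiny two-scenario example (assumed data).
phi_min = np.minimum
scenarios = [np.array([0.2, 0.7]), np.array([0.8, 0.3])]
F = lambda x, w: (np.array([[1.0 + w[0], 0.2], [0.2, 2.0 + w[1]]]) @ x
                  + np.array([w[0] - 0.5, -w[1]]))
theta = lambda x: sum(np.sum(phi_min(x, F(x, w)) ** 2) for w in scenarios) / len(scenarios)

x_star = derivative_free_min(theta, x0=[1.0, 1.0])
print("approximate ERM solution:", x_star, " theta =", theta(x_star))
```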
6. Conclusions
The SCP(F(x, ω)) has a wide range of applications in engineering and economics. Therefore, it is meaningful and interesting to study this problem. In this paper, we give the definitions of the stochastic P-function, stochastic P_0-function, and stochastic uniformly P-function, which can be regarded as generalizations of the deterministic formulations or as extensions of the stochastic R_0 function given in [11]. Moreover, we consider conditions under which the function is a stochastic P(P_0)-function. Furthermore, we show that the involved function being a stochastic uniformly P-function and equicoercive [11] is a sufficient condition for the solution set of the expected residual minimization problem to be nonempty and bounded. We also illustrate that the ERM formulation produces robust solutions with minimum sensitivity in violation of feasibility with respect to random parameter variations in SCP(F(x, ω)). Finally, we employ a quasi-Monte Carlo method to obtain approximation problems of (7) for numerical integration and consider derivative-free methods to solve these approximation problems.
Acknowledgments
This work was supported by NSFC Grants nos. 11226238 and 11226230, by the Predeclaration Fund of State Project of Liaoning University 2012 (2012LDGY01), and by the University Scientific Research Projects of the Education Department of Liaoning Province 2012 (2012427).
References
[1] F. Facchinei and J. S. Pang, Finite-Dimensional Variational
Inequalities and Complementarity Problems, Springer, New
York, NY, USA, 2003.
[2] R. W. Cottle, J.-S. Pang, and R. E. Stone, The Linear Complementarity Problem, Computer Science and Scientific Computing,
Academic Press, Boston, Mass, USA, 1992.
[3] G.-H. Lin and M. Fukushima, “New reformulations for stochastic nonlinear complementarity problems,” Optimization Methods & Software, vol. 21, no. 4, pp. 551–564, 2006.
[4] G. Gürkan, A. Y. Özge, and S. M. Robinson, “Sample-path
solution of stochastic variational inequalities,” Mathematical
Programming, vol. 84, no. 2, pp. 313–333, 1999.
[5] X. Chen and M. Fukushima, “Expected residual minimization
method for stochastic linear complementarity problems,” Mathematics of Operations Research, vol. 30, no. 4, pp. 1022–1038,
2005.
[6] X. Chen, C. Zhang, and M. Fukushima, “Robust solution of
monotone stochastic linear complementarity problems,” Mathematical Programming, vol. 117, no. 1-2, pp. 51–80, 2009.
[7] P. Tseng, “Growth behavior of a class of merit functions for the
nonlinear complementarity problem,” Journal of Optimization
Theory and Applications, vol. 89, no. 1, pp. 17–37, 1996.
[8] H. Fang, X. Chen, and M. Fukushima, "Stochastic R_0 matrix linear complementarity problems," SIAM Journal on Optimization, vol. 18, no. 2, pp. 482–506, 2007.
[9] G.-H. Lin, X. Chen, and M. Fukushima, “New restricted
NCP functions and their applications to stochastic NCP and
stochastic MPEC,” Optimization, vol. 56, no. 5-6, pp. 641–953,
2007.
[10] C. Ling, L. Qi, G. Zhou, and L. Caccetta, "The SC^1 property of an expected residual function arising from stochastic complementarity problems," Operations Research Letters, vol. 36, no. 4, pp. 456–460, 2008.
[11] C. Zhang and X. Chen, “Stochastic nonlinear complementarity
problem and applications to traffic equilibrium under uncertainty,” Journal of Optimization Theory and Applications, vol. 137,
no. 2, pp. 277–295, 2008.
Journal of Applied Mathematics
[12] C. Zhang and X. Chen, “Smoothing projected gradient method
and its application to stochastic linear complementarity problems,” SIAM Journal on Optimization, vol. 20, no. 2, pp. 627–649,
2009.
[13] G. L. Zhou and L. Caccetta, “Feasible semismooth Newton
method for a class of stochastic linear complementarity problems,” Journal of Optimization Theory and Applications, vol. 139,
no. 2, pp. 379–392, 2008.
[14] X. L. Li, H. W. Liu, and Y. K. Huang, "Stochastic P matrix and P_0 matrix linear complementarity problem," Journal of Systems Science and Mathematical Sciences, vol. 31, no. 1, pp. 123–128, 2011.
[15] K. L. Chung, A Course in Probability Theory, Academic Press,
New York, NY, USA, 2nd edition, 1974.
[16] B. Chen and P. T. Harker, “Smooth approximations to nonlinear
complementarity problems,” SIAM Journal on Optimization,
vol. 7, no. 2, pp. 403–420, 1997.