Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 105930, 11 pages
http://dx.doi.org/10.1155/2013/105930
Research Article
On Set-Valued Complementarity Problems
Jinchuan Zhou,1 Jein-Shan Chen,2 and Gue Myung Lee3
1 Department of Mathematics, School of Science, Shandong University of Technology, Zibo 255049, China
2 Department of Mathematics, National Taiwan Normal University, Taipei 11677, Taiwan
3 Department of Applied Mathematics, Pukyong National University, Busan 608-737, Republic of Korea
Correspondence should be addressed to Jein-Shan Chen; jschen@math.ntnu.edu.tw
Received 18 September 2012; Accepted 14 December 2012
Academic Editor: Mohamed Tawhid
Copyright © 2013 Jinchuan Zhou et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper investigates the set-valued complementarity problem (SVCP), which possesses rather different features from the classical complementarity problem because the index set is not fixed but depends on x. We compare the set-valued complementarity problem with the classical complementarity problem and analyze the solution set of the SVCP. Moreover, properties of merit functions for the SVCP, such as level boundedness and error bounds, are studied. Finally, some possible research directions are discussed.
1. Motivations and Preliminaries
The set-valued complementarity problem (SVCP) is to find x ∈ R^n such that

x ≥ 0,  y ≥ 0,  x^T y = 0,  for some y ∈ Θ(x),   (1)
where Θ : R^n_+ ⇉ R^n is a set-valued mapping. The set-valued complementarity problem plays an important role in the sensitivity analysis of complementarity problems [1] and in economic equilibrium problems [2]. However, there has been very little study on set-valued complementarity problems compared to classical complementarity problems. In fact, the SVCP (1) can be recast as follows, denoted by SVNCP(F, Ω): find x ∈ R^n such that

x ≥ 0,  F(x, w) ≥ 0,  x^T F(x, w) = 0,  for some w ∈ Ω(x),   (2)

where F : R^n × R^m → R^n and Ω : R^n ⇉ R^m is a set-valued mapping. To see this, if letting

Θ(x) = ⋃_{w∈Ω(x)} {F(x, w)},   (3)

then (1) reduces to (2). Conversely, if F(x, w) = w and Ω(x) = Θ(x), then (2) takes the form of (1).
The SVNCP(F, Ω) given as in (2) provides a unified framework for several interesting and important problems in optimization, as described below.
(i) Nonlinear complementarity problem [1], which is to find x ∈ R^n such that

x ≥ 0,  F(x) ≥ 0,  ⟨x, F(x)⟩ = 0.   (4)

This corresponds to F(x, w) := F(x) + w and Ω(x) = {0} for all x ∈ R^n. In other words, the set-valued complementarity problem reduces to the classical complementarity problem in this case.
(ii) Extended linear complementarity problem [3, 4], which is to find x, w ∈ R^n such that

x ≥ 0,  w ≥ 0,  x^T w = 0,  with M_1 x − M_2 w ∈ P,   (5)

where M_1, M_2 ∈ R^{m×n} and P ⊆ R^m is a polyhedron. This corresponds to F(x, w) = w and Ω(x) = {w | M_1 x − M_2 w ∈ P}. In particular, when P = {q}, it further reduces to the horizontal linear complementarity problem, and to the usual linear complementarity problem when, in addition, M_2 is the identity matrix.
(iii) Implicit complementarity problem [5], which is to find x, w ∈ R^n and z ∈ R^m such that

x ≥ 0,  w ≥ 0,  x^T w = 0,  with F(x, w, z) = 0,   (6)

where F : R^{2n+m} → R^l. This can be rewritten as

x ≥ 0,  w ≥ 0,  x^T w = 0,  with w satisfying F(x, w, z) = 0 for some z.   (7)

This is clearly an SVNCP(F, Ω) where F(x, w) = w and Ω(x) = ⋃_{z∈R^m} {w | F(x, w, z) = 0}.

(iv) Mixed nonlinear complementarity problem, which is to find x ∈ R^n and w ∈ R^m such that

x ≥ 0,  F(x, w) ≥ 0,  ⟨x, F(x, w)⟩ = 0,  with G(x, w) = 0.   (8)

This is an SVNCP(F, Ω), corresponding to Ω(x) = {w | G(x, w) = 0}. Note that the mixed nonlinear complementarity problem is a natural extension of the Karush-Kuhn-Tucker (KKT) conditions for the following nonlinear programming problem:

min f(x)
s.t. g_i(x) ≤ 0,  i = 1, 2, ..., m,
     h_j(x) = 0,  j = 1, ..., l.   (9)

To see this, we first write out the KKT conditions as follows:

∇f(x) + Σ_{i=1}^{m} λ_i ∇g_i(x) + Σ_{j=1}^{l} μ_j ∇h_j(x) = 0,
λ ≥ 0,  g(x) ≤ 0,  ⟨λ, g(x)⟩ = 0,
h(x) = 0,   (10)

where g(x) := (g_1(x), ..., g_m(x)), h(x) := (h_1(x), ..., h_l(x)), and λ := (λ_1, ..., λ_m). Then, letting w := (λ, μ), F(x, w) := −g(x), and

G(x, w) := ( ∇f(x) + Σ_{i=1}^{m} λ_i ∇g_i(x) + Σ_{j=1}^{l} μ_j ∇h_j(x) ; h(x) )   (11)

implies that the KKT system (10) becomes a mixed complementarity problem.

Besides the above various complementarity problems, SVNCP(F, Ω) is closely related to the quasi-variational inequality, a special case of the extended general variational inequalities [6, 7], and to min-max programming, as elaborated below.

(v) Quasi-variational inequality [2]. Given a point-to-point map F from R^n to itself and a point-to-set map K from R^n into subsets of R^n, the quasi-variational inequality QVI(K, F) is to find a vector x ∈ K(x) such that

⟨F(x), y − x⟩ ≥ 0,  ∀ y ∈ K(x).   (12)

It is well known that QVI(K, F) reduces to the classical nonlinear complementarity problem when K(x) is independent of x, say, K(x) = R^n_+ for all x. Now, let us explain why it is related to SVNCP(F, Ω). To this end, given x ∈ R^n, we define I(x) = {i | F_i(x) > 0} and let

K(x) = {x | x_i ≥ 0 for i ∈ I \ I(x) and x_i = 0 for i ∈ I(x)}.   (13)

Clearly, 0 ∈ K(x), which says ⟨x, F(x)⟩ ≤ 0 by taking y = 0 in (12). Note that x ≥ 0 because x ∈ K(x). Next, we claim that F_i(x) ≥ 0 for all i = 1, 2, ..., n. It is enough to consider the case where i ∈ I \ I(x). In such a case, by taking y = β e_i in (12) with β an arbitrary positive scalar, we have β F_i(x) ≥ F(x)^T x. Since β can be made sufficiently large, this implies that F_i(x) ≥ 0. Thus, we obtain F(x)^T x ≥ 0. In summary, in this case QVI(K, F) becomes

x ≥ 0,  F(x) ≥ 0,  x^T F(x) = 0,  with x ∈ K(x),   (14)

which is an SVNCP(F, Ω).

(vi) Min-max programming [8], which is to solve the following problem:

min_{x∈R^n_+} max_{w∈Ω} f(x, w),   (15)

where f : R^n × Ω → R is a continuously differentiable function and Ω is a compact subset of R^m. First, we define ψ(x) := max_{w∈Ω} f(x, w). Although ψ is not necessarily Fréchet differentiable, it is directionally differentiable (even semismooth); see [9]. Now, let us check the first-order necessary conditions for problem (15). In fact, if x* is a local minimizer of (15), then

ψ'(x*; x − x*) = max_{w∈Ω(x*)} ⟨∇_x f(x*, w), x − x*⟩ ≥ 0,  ∀ x ∈ R^n_+,   (16)

which is equivalent to

inf_{x∈R^n_+} max_{w∈Ω(x*)} ⟨∇_x f(x*, w), x − x*⟩ = 0,   (17)

where Ω(x) means the active set at x, that is, Ω(x) := {w ∈ Ω | ψ(x) = f(x, w)}. At first glance, formula (17) is not related to SVNCP(F, Ω). Nonetheless, we will show that if Ω is convex and the function f(x, ·) is concave over Ω, then the first-order necessary conditions form an SVNCP(F, Ω); see the proposition below.

Proposition 1. Let Ω be a nonempty, compact, and convex set in R^m. Suppose that, for each x, the function f(x, ·) is concave over Ω. If x* is a local optimal solution of (15), then there exists w* ∈ Ω(x*) such that

x* ≥ 0,  ∇_x f(x*, w*) ≥ 0,  ⟨∇_x f(x*, w*), x*⟩ = 0.   (18)
Proof. Note first that for each x the inner problem

ψ(x) := max_{w∈Ω} f(x, w)   (19)

is a concave optimization problem, since f(x, ·) is concave and Ω is convex. This ensures that Ω(x), which denotes the optimal solution set of (19), is convex as well. Now we claim that the function

h(w) := ⟨∇_x f(x*, w), x − x*⟩   (20)

is concave over Ω(x*). Indeed, for w_1, w_2 ∈ Ω(x*) and α ∈ [0, 1], we have

h(α w_1 + (1 − α) w_2)
 = ⟨∇_x f(x*, α w_1 + (1 − α) w_2), x − x*⟩
 = lim_{t↓0} [ f(x* + t(x − x*), α w_1 + (1 − α) w_2) − f(x*, α w_1 + (1 − α) w_2) ] / t
 = lim_{t↓0} [ f(x* + t(x − x*), α w_1 + (1 − α) w_2) − ψ(x*) ] / t
 ≥ lim_{t↓0} [ α f(x* + t(x − x*), w_1) + (1 − α) f(x* + t(x − x*), w_2) − ψ(x*) ] / t
 = lim_{t↓0} α [ f(x* + t(x − x*), w_1) − f(x*, w_1) ] / t + lim_{t↓0} (1 − α) [ f(x* + t(x − x*), w_2) − f(x*, w_2) ] / t
 = α ⟨∇_x f(x*, w_1), x − x*⟩ + (1 − α) ⟨∇_x f(x*, w_2), x − x*⟩
 = α h(w_1) + (1 − α) h(w_2),   (21)

where we use the fact that α w_1 + (1 − α) w_2 ∈ Ω(x*) (since Ω(x*) is convex) and f(x*, w) = ψ(x*) for all w ∈ Ω(x*). On the other hand, applying the Min-Max Theorem [10, Corollary 37.3.2] to (17) yields

max_{w∈Ω(x*)} inf_{x∈R^n_+} ⟨∇_x f(x*, w), x − x*⟩ = 0.   (22)

Hence, for arbitrary ε > 0, we can find w_ε ∈ Ω(x*) such that

inf_{x∈R^n_+} ⟨∇_x f(x*, w_ε), x − x*⟩ ≥ −ε,   (23)

that is,

⟨∇_x f(x*, w_ε), x − x*⟩ ≥ −ε,  ∀ x ∈ R^n_+.   (24)

In particular, plugging x = 0 into (24) implies

⟨∇_x f(x*, w_ε), x*⟩ ≤ ε.   (25)

Since Ω is bounded and Ω(x*) is closed, we can assume, without loss of generality, that w_ε → w* ∈ Ω(x*) as ε → 0. Thus, taking the limit in (25) gives

⟨∇_x f(x*, w*), x*⟩ ≤ 0.   (26)

Now, let x = x* + k e_i ∈ R^n_+. It follows from (24) that

(∇_x f(x*, w_ε))_i ≥ −ε/k,   (27)

which implies that (∇_x f(x*, w_ε))_i ≥ 0 by letting k → ∞, and hence (∇_x f(x*, w*))_i ≥ 0 for all i = 1, 2, ..., n, that is, ∇_x f(x*, w*) ≥ 0. This together with (26) means that ⟨∇_x f(x*, w*), x*⟩ = 0. Thus, (18) holds.
From all the above, we have seen that SVNCP(F, Ω) given as in (2) covers a range of optimization problems. Therefore, in this paper, we mainly focus on SVNCP(F, Ω). Due to its equivalence to SVCP (1), our analysis and results for SVNCP(F, Ω) can be carried over to SVCP (1). This paper is organized as follows. In Section 1, the connection between SVNCP(F, Ω) and various optimization problems is introduced. We recall some background materials in Section 2. Besides comparing the set-valued complementarity problem with the classical complementarity problem, we analyze the solution set of SVCP in Section 3. Moreover, properties of merit functions for SVCP, such as level boundedness and error bounds, are studied in Section 4. Finally, some possible research directions are discussed.

A few words about the notation used throughout the paper. For any x, y ∈ R^n, the inner product is denoted by x^T y or ⟨x, y⟩. We write x ≥ y (or x > y) if x_i ≥ y_i (or x_i > y_i) for all i = 1, 2, ..., n. Let e be the vector with all components being 1, and let e_i be the i-th row of the identity matrix. Denote N_∞ := ⋃_{n=1}^{∞} {{n, n+1, ...}}. While SVNCP(F, Ω) means the set-valued nonlinear complementarity problem (2), SVLCP(M, q, Ω) denotes the linear case, that is, F(x, w) = M(w)x + q(w), where M : R^m → R^{n×n} and q : R^m → R^n. For a continuously differentiable function F : R^n × R^m → R^l, we denote the l × n Jacobian matrix of partial derivatives of F at (x, w) with respect to x by J_x F(x, w), whereas the transposed Jacobian is denoted by ∇_x F(x, w). For a mapping H : R^n → R^m, define

lim inf_{x→x̄} H(x) := ( lim inf_{x→x̄} H_1(x), lim inf_{x→x̄} H_2(x), ..., lim inf_{x→x̄} H_m(x) )^T.   (28)

Given a set-valued mapping M : R^n ⇉ R^m, define

lim sup_{x→x̄} M(x) := {u | ∃ x_n → x̄, ∃ u_n → u with u_n ∈ M(x_n)},
lim inf_{x→x̄} M(x) := {u | ∀ x_n → x̄, ∃ u_n → u with u_n ∈ M(x_n)}.   (29)
We say that M is outer semicontinuous at x̄ if

lim sup_{x→x̄} M(x) ⊂ M(x̄),   (30)

and inner semicontinuous at x̄ if

lim inf_{x→x̄} M(x) ⊃ M(x̄).   (31)

We say that M is continuous at x̄ if it is both outer semicontinuous and inner semicontinuous at x̄. For more details about these notions, please refer to [9, 11]. Throughout this paper, we always assume that the set-valued mapping Ω : R^n ⇉ R^m is closed-valued; that is, Ω(x) is closed for all x ∈ R^n [11, Chapter 1].

2. Focus on SVLCP(M, q, Ω)

It is well known that various matrix classes play different roles in the theory of the linear complementarity problem, such as the P-matrix, S-matrix, Q-matrix, and Z-matrix; see [1, 12] for more details. Here we recall some of them which will be needed in the subsequent analysis.

Definition 2. A matrix M ∈ R^{n×n} is said to be an S-matrix if there exists x ∈ R^n such that

x > 0,  Mx > 0.   (32)

Note that M ∈ R^{n×n} is an S-matrix if and only if the classical linear complementarity problem LCP(M, q) is feasible for all q ∈ R^n; see [12, Prop. 3.1.5]. Moreover, the above condition in Definition 2 is equivalent to

x ≥ 0,  Mx > 0;   (33)

see [13, Remark 2.2]. However, such equivalence fails to hold for the corresponding conditions in the set-valued complementarity problem. In other words,

x > 0,  M(w)x > 0,  for some w ∈ Ω(x)   (34)

is not equivalent to

x ≥ 0,  M(w)x > 0,  for some w ∈ Ω(x).   (35)

It is clear that (34) implies (35). But the converse implication does not hold, as illustrated in Example 3.

Example 3. Let

M(w) = ( w  0 ; 0  w ),   Ω(x) = { {0, 1},  x = (1, 0) ∈ R^2 ; {0},  otherwise. }   (36)

If M(w)x > 0, then w = 1, and this case holds only when x = (1, 0). Therefore, (35) is satisfied, but (34) is not.

We point out that the set-valued mapping Ω(x) in Example 3 is indeed outer semicontinuous. A natural question arises: what happens if Ω(x) is inner semicontinuous? The answer is given in Theorem 4 below.
π‘₯ ≥ 0,
𝐻 (π‘₯) > 0.
(43)
There is another point worthy of pointing out. We
mentioned that the classical linear complementarity problem
LCP(𝑀, π‘ž) is feasible for all π‘ž ∈ R𝑛 if and only if 𝑀 ∈ R𝑛×𝑛 is
a 𝑆-matrix; that is, there exists π‘₯ ∈ R𝑛 , such that
π‘₯ > 0,
𝑀π‘₯ > 0.
(44)
Is there any analogous result in the set-valued set? Yes, we
have an answer for it in Theorem 5 below.
Theorem 5. Consider the set-valued linear complementarity problem SVLCP(M, q, Ω). If there exists x ∈ R^n such that

x ≥ 0,  M(w)x > 0,  for some w ∈ ⋂_{Ñ∈N_∞} ⋃_{n∈Ñ} Ω(nx),   (45)

then SVLCP(M, q, Ω) is feasible for every q : R^m → R^n that is bounded from below.

Proof. Let q be any mapping from R^m to R^n that is bounded from below; that is, there exists β ∈ R such that q(w) ≥ βe. Suppose that x_0 and w_0 satisfy (45), which means

x_0 ≥ 0,  M(w_0)x_0 > 0,  w_0 ∈ ⋂_{Ñ∈N_∞} ⋃_{n∈Ñ} Ω(n x_0).   (46)

Then, for any Ñ ∈ N_∞, we have w_0 ∈ ⋃_{n∈Ñ} Ω(n x_0). In particular, we observe the following:

(1) if taking Ñ = {1, 2, ...}, then there exists n_1 such that w_0 ∈ Ω(n_1 x_0);
(2) if taking Ñ = {n_1 + 1, ...}, then there exists n_2 with n_2 > n_1 such that w_0 ∈ Ω(n_2 x_0).

Repeating the above process yields a sequence {n_k} such that w_0 ∈ Ω(n_k x_0) and n_k → ∞. Since M(w_0)x_0 > 0, there exists α > 0 such that M(w_0)x_0 > αe. Taking k large enough to satisfy n_k > max{−β/α, 0} gives α n_k e > −βe ≥ −q(w). Then, it implies that

M(w_0) n_k x_0 > α n_k e ≥ −q(w),   (47)

and hence

n_k x_0 ≥ 0,  M(w_0)(n_k x_0) + q(w) > 0,  w_0 ∈ Ω(n_k x_0),   (48)

which says that n_k x_0 is a feasible point of SVLCP(M, q, Ω).

Definition 6. A matrix M ∈ R^{n×n} is said to be a P-matrix if all its principal minors are positive, or equivalently [12, Theorem 3.3.4],

∀ x ≠ 0,  ∃ k ∈ {1, 2, ..., n} such that x_k (Mx)_k > 0.   (49)

From [12, Corollary 3.3.5], we know that every P-matrix is an S-matrix. In other words, if M satisfies (49), then the following system is solvable:

x ≥ 0,  Mx > 0.   (50)

The corresponding conditions in the set-valued complementarity problem are

∀ x ≠ 0,  ∃ k ∈ {1, ..., n} such that x_k (M(w)x)_k > 0,  for some w ∈ Ω(x),   (51)

x ≥ 0,  M(w)x > 0,  for some w ∈ Ω(x).   (52)

Example 7 shows that the aforementioned implication is no longer valid in the set-valued complementarity problem.

Example 7. Let

M(w) = ( w  0 ; 0  −w ),   Ω(x) = { −1,  x_1 = 0 ; 1,  otherwise. }   (53)

For x_1 ≠ 0, we have M(w)x = (x_1, −x_2), and hence x_1 (M(w)x)_1 = x_1^2 > 0. For x_1 = 0, we know x_2 ≠ 0 (since x ≠ 0), which says M(w)x = (−x_1, x_2), and hence x_2 (M(w)x)_2 = x_2^2 > 0. Therefore, condition (51) is satisfied. But condition (52) fails to hold, because M(w)x = (x_1, −x_2) or (−x_1, x_2). Hence, M(w)x > 0 implies that x_2 < 0 or x_1 < 0, which contradicts x ≥ 0.

Definition 8. A matrix M ∈ R^{n×n} is said to be semimonotone if

∀ 0 ≠ x ≥ 0 ⟹ ∃ x_k > 0 such that (Mx)_k ≥ 0.   (54)

For the classical linear complementarity problem, we know that M is semimonotone if and only if LCP(M, q) with q > 0 has a unique solution (the zero solution); see [12, Theorem 3.9.3]. One may wonder whether such a fact still holds in the set-valued case. Before answering it, we need to know how to generalize the concept of semimonotonicity to its corresponding definition in the set-valued case.

Definition 9. The set of matrices {M(w) | w ∈ Ω(x)} is said to be

(a) strongly semimonotone if for any nonzero x ≥ 0,

∃ x_k > 0 such that (M(w)x)_k ≥ 0,  ∀ w ∈ Ω(x);   (55)

(b) weakly semimonotone if for any nonzero x ≥ 0,

∃ x_k > 0 such that (M(w)x)_k ≥ 0,  for some w ∈ Ω(x).   (56)

Unlike the classical linear complementarity problem, here are the parallel results for the set-valued linear complementarity problem in which strong (weak) semimonotonicity plays a role.

Theorem 10. For the SVLCP(M, q, Ω), the following statements hold.

(a) If the set of matrices {M(w) | w ∈ Ω(x)} is strongly semimonotone, then for any positive mapping q, that is, q(w) > 0 for all w, SVLCP(M, q, Ω) has zero as its unique solution.

(b) If SVLCP(M, q, Ω) with q(w) > 0 has zero as its unique solution, then the set of matrices {M(w) | w ∈ Ω(x)} is weakly semimonotone.

Proof. (a) It is clear that, for any positive mapping q, x = 0 is a solution of SVLCP(M, q, Ω). Suppose there is another, nonzero solution x, that is, there exists w ∈ Ω(x) such that

x ≥ 0,  M(w)x + q(w) ≥ 0,  x^T (M(w)x + q(w)) = 0.   (57)
It follows from (55) that there exists k ∈ {1, 2, ..., n} such that x_k > 0 and (M(w)x)_k ≥ 0, and hence (M(w)x + q(w))_k > 0, which contradicts (57).

(b) Suppose {M(w) | w ∈ Ω(x)} is not weakly semimonotone. Then, there exists a nonzero x ≥ 0 such that, for all k ∈ I_+(x) := {i | x_i > 0}, we have (M(w)x)_k < 0 for all w ∈ Ω(x). Choose w̄ ∈ Ω(x). Let q(w) = 1 for all w ≠ w̄ and

q_k(w̄) = { −(M(w̄)x)_k,  k ∈ I_+(x) ; max{(M(w̄)x)_k, 0} + 1,  otherwise. }   (58)

Therefore, q(w) > 0 for all w. According to the above construction, we have

x ≥ 0,  M(w̄)x + q(w̄) ≥ 0,  x^T (M(w̄)x + q(w̄)) = 0,  with w̄ ∈ Ω(x),   (59)

that is, the nonzero vector x is a solution of SVLCP(M, q, Ω), which is a contradiction.

Theorem 10(b) says that weak semimonotonicity is a necessary condition for zero being the unique solution of SVLCP(M, q, Ω). However, it is not a sufficient condition; see Example 11.

Example 11. Let

M(w) = ( −w  1  0 ; 0  0  1 ; 1  0  0 ),   Ω(x) = {0, 1}.   (60)

For any nonzero x = (x_1, x_2, x_3) ≥ 0, we have M(0)x = (x_2, x_3, x_1) ≥ 0, so {M(w) | w ∈ Ω(x)} is weakly semimonotone. If we plug in q = (1, 1, 1), then by a simple calculation x = (1, 0, 0) satisfies

x ≥ 0,  M(1)x + q ≥ 0,  x^T (M(1)x + q) = 0,   (61)

which means that SVLCP(M, q, Ω) has a nonzero solution. We also notice that the set-valued mapping Ω(x) is even continuous in Example 11.
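The claim in Example 11 is easy to confirm numerically; the following sketch checks weak semimonotonicity (56) on a random sample of nonnegative vectors and verifies that x = (1, 0, 0) solves the SVLCP with q = (1, 1, 1).

```python
import numpy as np

M = lambda w: np.array([[-w, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
Omega = [0.0, 1.0]
q = np.array([1.0, 1.0, 1.0])

# weak semimonotonicity (56) on random nonzero x >= 0
rng = np.random.default_rng(0)
weak = all(any(np.any((x > 0) & (M(w) @ x >= 0)) for w in Omega)
           for x in rng.random((1000, 3)))

# x = (1, 0, 0) is a nonzero solution of SVLCP(M, q, Omega) via w = 1
x = np.array([1.0, 0.0, 0.0])
y = M(1.0) @ x + q                       # equals (0, 1, 2)
print(weak, np.all(x >= 0) and np.all(y >= 0) and abs(x @ y) < 1e-12)
```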
So far, we have seen some major differences between the classical complementarity problem and the set-valued complementarity problem. Such phenomena undoubtedly confirm that it is an interesting, important, and challenging task to study the set-valued complementarity problem, which, to some extent, is the main motivation of this paper.

To close this section, we introduce some other concepts which will be used later. A function f : R^n → R is level bounded if the level set {x | f(x) ≤ α} is bounded for all α ∈ R. The metric projection of x onto a closed convex subset A ⊂ R^n is denoted by Π_A(x), that is, Π_A(x) := arg min_{y∈A} ‖x − y‖. The distance function is defined as dist(x, A) := ‖x − Π_A(x)‖.

3. Properties of Solution Sets

Recently, many authors have studied other classes of complementarity problems in which another type of vector w ∈ Ω is involved, for example, the stochastic complementarity problem [14–17], which is to find x ∈ R^n such that

x ≥ 0,  F(x, w) ≥ 0,  x^T F(x, w) = 0,  a.e. w ∈ Ω,   (62)

where w is a random vector in a given probability space, and the semi-infinite complementarity problem [18], which is to find x ∈ R^n such that

x ≥ 0,  F(x, w) ≥ 0,  x^T F(x, w) = 0,  ∀ w ∈ Ω,   (63)

which we denote by SINCP(F, Ω). In addition, the following two complementarity problems are introduced in [18]: find x ∈ R^n such that

x ≥ 0,  F_min(x) ≥ 0,  x^T F_min(x) = 0,
x ≥ 0,  F_max(x) ≥ 0,  x^T F_max(x) = 0,   (64)

where

F_min(x) := ( min_{w∈Ω} F_1(x, w), ..., min_{w∈Ω} F_n(x, w) )^T,   (65)

F_max(x) := ( max_{w∈Ω} F_1(x, w), ..., max_{w∈Ω} F_n(x, w) )^T.   (66)

These two problems are denoted by NCP(F_min) and NCP(F_max), respectively. Is there any relationship among their solution sets? In order to further describe such relationships, we adopt the following notations:

(i) SOL(F, Ω) means the solution set of SVNCP(F, Ω),
(ii) SOL(M, q, Ω) means the solution set of SVLCP(M, q, Ω),
(iii) SOL̂(F, Ω) means the solution set of SINCP(F, Ω),
(iv) SOL(F_min) means the solution set of NCP(F_min),
(v) SOL(F_max) means the solution set of NCP(F_max).

Besides, for the purpose of comparison, we restrict Ω(x) to be fixed; that is, there exists a subset Ω in R^m such that Ω(x) = Ω for all x ∈ R^n.

It is easy to see that the solution set of SINCP(F, Ω) is ⋂_{w∈Ω} SOL(F_w), but that of SVNCP(F, Ω) is ⋃_{w∈Ω} SOL(F_w), where F_w(x) := F(x, w). Hence, the solution set of SINCP(F, Ω) is included in that of SVNCP(F, Ω). In other words, we have

SOL̂(F, Ω) ⊆ SOL(F, Ω).   (67)

The inclusion (67) can be strict, as shown in Example 12.

Example 12. Let F(x, w) = (w, 1) and Ω(x) = [0, 1]. Then, we can verify that SOL̂(F, Ω) = {(0, 0)}, whereas SOL(F, Ω) = {x | x_1 ≥ 0, x_2 = 0}.
However, the solution sets of SVNCP(F, Ω), NCP(F_min), and NCP(F_max) are not included in each other. This is illustrated in Examples 13 and 14.

Example 13. SOL(F_min) ⊄ SOL(F, Ω) and SOL(F, Ω) ⊄ SOL(F_min).

(a) Let F(x, w) = (1 − w, w) and Ω = [0, 1]. Then, SOL(F_min) = R^2_+ and SOL(F, Ω) = ⋃_{w∈Ω} SOL(F_w) = {(x_1, x_2)^T | x_1 ≥ 0, x_2 ≥ 0, and x_1 x_2 = 0}.

(b) Let F(x, w) = (w − 1, x_2) and Ω = [0, 1]. Then, SOL(F_min) = ∅ and SOL(F, Ω) = {(x_1, x_2) | x_1 ≥ 0, x_2 = 0}.

Example 14. SOL(F_max) ⊄ SOL(F, Ω) and SOL(F, Ω) ⊄ SOL(F_max).

(a) Let F(x, w) = (w − 1, −w) and Ω = [0, 1]. Then, SOL(F_max) = R^2_+ and SOL(F, Ω) = ∅.

(b) Let F(x, w) = (w, −w) and Ω = [0, 1]. Then, SOL(F_max) = {(x_1, x_2) | x_1 = 0, x_2 ≥ 0} and SOL(F, Ω) = R^2_+.

Similarly, Example 15 shows that the solution sets of NCP(F_max) and NCP(F_min) cannot be included in each other.

Example 15. SOL(F_max) ⊄ SOL(F_min) and SOL(F_min) ⊄ SOL(F_max).

(a) Let F(x, w) = (w − 1, 0) and Ω = [0, 1]. Then, SOL(F_min) = ∅ and SOL(F_max) = R^2_+.

(b) Let F(x, w) = (w, w) and Ω = [0, 1]. Then, SOL(F_min) = R^2_+ and SOL(F_max) = {(0, 0)}.
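These non-inclusions can be confirmed by exhibiting witness points. The sketch below (a quick check, not part of the paper) does this for Example 13, with Ω = [0, 1] discretized.

```python
import numpy as np

W = np.linspace(0.0, 1.0, 101)                     # discretization of Omega = [0, 1]

def in_SOL_F_Omega(x, F, tol=1e-9):
    """x in SOL(F, Omega): x >= 0 and some w with F(x,w) >= 0, x^T F(x,w) = 0."""
    return np.all(x >= -tol) and any(
        np.all(F(x, w) >= -tol) and abs(x @ F(x, w)) <= tol for w in W)

def in_SOL_Fmin(x, F, tol=1e-9):
    Fmin = np.min([F(x, w) for w in W], axis=0)     # componentwise minimum, as in (65)
    return np.all(x >= -tol) and np.all(Fmin >= -tol) and abs(x @ Fmin) <= tol

# Example 13(a): F(x, w) = (1 - w, w)
F = lambda x, w: np.array([1.0 - w, w])
x = np.array([1.0, 1.0])                            # witness point
print(in_SOL_Fmin(x, F), in_SOL_F_Omega(x, F))      # True False: SOL(F_min) not in SOL(F, Omega)

# Example 13(b): F(x, w) = (w - 1, x2)
G = lambda x, w: np.array([w - 1.0, x[1]])
y = np.array([1.0, 0.0])                            # witness point
print(in_SOL_F_Omega(y, G), in_SOL_Fmin(y, G))      # True False: SOL(F, Omega) not in SOL(F_min)
```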
In spite of these, we obtain some results which describe the relationships among them.

Theorem 16. Let Ω(x) = Ω for all x ∈ R^n. Then, we have

(a) SOL(F, Ω) ∩ {x | F_min(x) ≥ 0} ⊆ SOL(F_min);
(b) SOL(F_max) ∩ {x | F(x, w) ≥ 0 for some w ∈ Ω} ⊆ SOL(F, Ω);
(c) SOL(F_min) ∩ {x | x^T F_max(x) ≤ 0} = SOL(F_max) ∩ {x | F_min(x) ≥ 0} ⊆ SOL(F, Ω).

Proof. Parts (a) and (b) follow immediately from the fact that

x^T F_min(x) ≤ x^T F(x, w) ≤ x^T F_max(x),  ∀ w ∈ Ω, x ∈ R^n_+.   (68)

Part (c) follows from (67), since the two sets on the left side of (c) equal SOL̂(F, Ω) by [18].

For further characterizing the solution sets, we recall that for a set-valued mapping M : R^n ⇉ R^m, its inverse mapping (see [9, Chapter 5]) is defined as

M^{−1}(y) := {x | y ∈ M(x)}.   (69)

Theorem 17. For SVNCP(F, Ω), we have

SOL(F, Ω) = ⋃_{w∈R^m} [ SOL(F_w) ∩ Ω^{−1}(w) ].   (70)

Proof. In fact, the desired result follows from

SOL(F, Ω) = {x | x ∈ SOL(F_w) and w ∈ Ω(x) for some w ∈ R^m}
 = {x | x ∈ SOL(F_w) and x ∈ Ω^{−1}(w) for some w ∈ R^m}
 = ⋃_{w∈R^m} [ SOL(F_w) ∩ Ω^{−1}(w) ],   (71)

where the second equality is due to the definition of the inverse mapping given above.
4. Merit Functions for SVNCP and SVLCP

It is well known that one of the important approaches for solving complementarity problems is to transform them into a system of equations or an unconstrained optimization problem via NCP functions or merit functions. Hence, we turn our attention in this section to merit functions for SVNCP(F, Ω) and SVLCP(M, q, Ω).

A function φ : R^2 → R is called an NCP function if it satisfies

φ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0.   (72)

For example, the natural residual φ_NR(a, b) = min{a, b} and the Fischer-Burmeister function φ_FB(a, b) = √(a^2 + b^2) − (a + b) are popular NCP functions. Please also refer to [19] for a detailed survey of existing NCP functions. In addition, a real-valued function f : R^n → R is called a merit (or residual) function for a complementarity problem if f(x) ≥ 0 for all x ∈ R^n and f(x) = 0 if and only if x is a solution of the complementarity problem. Given an NCP function φ, we define

r(x, w) := ‖Φ(x, F(x, w))‖,  where Φ(x, y) := (φ(x_1, y_1), ..., φ(x_n, y_n)).   (73)

Then, it is not hard to verify that the function given by

r(x) := min_{w∈Ω(x)} r(x, w)   (74)

is a merit function for SVNCP(F, Ω). Note that the merit function (74) is rather different from the traditional one, because the index set is not fixed, but depends on x.

We say that a merit function r(x) provides a global error bound with modulus c > 0 if

dist(x, SOL(F, Ω)) ≤ c · r(x),  ∀ x ∈ R^n.   (75)

For more information about error bounds, please see [20], which is an excellent survey on this issue.
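To make (73) and (74) concrete, here is a small sketch of the Fischer-Burmeister based merit function r(x) when Ω(x) is approximated by a finite sample; the particular F and Ω below are illustrative placeholders only.

```python
import numpy as np

phi_fb = lambda a, b: np.sqrt(a**2 + b**2) - (a + b)      # Fischer-Burmeister NCP function

def residual(x, F, w):
    """r(x, w) = || Phi(x, F(x, w)) || as in (73)."""
    return np.linalg.norm(phi_fb(x, F(x, w)))

def merit(x, F, Omega):
    """r(x) = min over sampled w in Omega(x) of r(x, w), as in (74)."""
    return min(residual(x, F, w) for w in Omega(x))

# illustrative linear case F(x, w) = M(w) x + q(w)
M = lambda w: np.array([[2.0, w], [w, 2.0]])
q = lambda w: np.array([-1.0, w])
F = lambda x, w: M(w) @ x + q(w)
Omega = lambda x: np.linspace(0.0, 1.0, 11)

x = np.array([0.5, 0.0])
print(merit(x, F, Omega))    # 0 here: x = (0.5, 0) solves the problem with w = 0
```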
Theorem 18. Assume that there exists a set Ω ⊂ R^m such that Ω(x) = Ω for all x ∈ R^n, and that for each w ∈ Ω, r(x, w) provides a global error bound for NCP(F_w) with modulus η(w) > 0, that is,

dist(x, SOL(F_w)) ≤ η(w) r(x, w),  ∀ x ∈ R^n.   (76)

In addition, if

η := max_{w∈Ω} η(w) < +∞,   (77)

then r(x) = min_{w∈Ω} r(x, w) provides a global error bound for SVNCP(F, Ω) with modulus η.

Proof. Notice that if Ω(x) = Ω for all x ∈ R^n, then

Ω^{−1}(w) = { R^n,  w ∈ Ω ; ∅,  w ∉ Ω. }   (78)

It then follows from Theorem 17 that

SOL(F, Ω) = ⋃_{w∈R^m} [ SOL(F_w) ∩ Ω^{−1}(w) ] = ⋃_{w∈Ω} SOL(F_w).   (79)

Therefore,

dist(x, SOL(F, Ω)) = dist(x, ⋃_{w∈Ω} SOL(F_w))
 ≤ min_{w∈Ω} dist(x, SOL(F_w))
 ≤ min_{w∈Ω} η(w) · r(x, w)
 ≤ min_{w∈Ω} [ max_{w'∈Ω} η(w') ] · r(x, w)
 = max_{w'∈Ω} η(w') · min_{w∈Ω} r(x, w)
 = η · r(x).   (80)

Thus, the proof is complete.

One may ask when condition (77) is satisfied. Indeed, condition (77) is satisfied if

(i) Ω is a finite set;
(ii) F(x, w) = M(w)x + q(w), where M(w) is continuous, and for each w ∈ Ω the matrix M(w) is a P-matrix. In this case the modulus η(w) has an explicit formula, that is,

η(w) = max_{d∈[0,1]^n} ‖(I − D + D M(w))^{−1}‖,  where D := diag(d);   (81)

see [21, 22]. Hence, we see that

η = max_{w∈Ω} max_{d∈[0,1]^n} ‖(I − D + D M(w))^{−1}‖   (82)

is well defined, because M(w) is continuous and Ω is compact.

For simplification of notation, we write x → ∞ instead of ‖x‖ → ∞. We now introduce the following definitions, which are similar to (29):

lim sup_{x→∞} M(x) := {u | ∃ x_n → ∞, ∃ u_n → u with u_n ∈ M(x_n)},
lim inf_{x→∞} M(x) := {u | ∀ x_n → ∞, ∃ u_n → u with u_n ∈ M(x_n)}.   (83)

Definition 19. For SVLCP(M, q, Ω), the set of matrices {M(w) | w ∈ Ω(x)} is said to have the limit-R_0 property if

x ≥ 0,  M(w)x ≥ 0,  x^T M(w)x = 0  for some w ∈ lim sup_{x→∞} Ω(x)  ⟹  x = 0.   (84)

In the case of a linear complementarity problem, that is, when Ω(x) is a fixed single-point set, Definition 19 coincides with that of an R_0-matrix.

Theorem 20. For SVLCP(M, q, Ω), suppose that there exists a bounded set Ω such that Ω(x) ⊂ Ω for all x ∈ R^n, and M(w) and q(w) are continuous on Ω. If the set of matrices {M(w) | w ∈ Ω(x)} has the limit-R_0 property, then the merit function r(x) = min_{w∈Ω(x)} ‖min{x, M(w)x + q(w)}‖ is level bounded.

Proof. We argue by contradiction. Suppose there exists a sequence {x_n} satisfying ‖x_n‖ → ∞ with r(x_n) bounded. Then,

r(x_n)/‖x_n‖ = min_{w∈Ω(x_n)} ‖ min{ x_n/‖x_n‖, M(w) x_n/‖x_n‖ + q(w)/‖x_n‖ } ‖
 = ‖ min{ x_n/‖x_n‖, M(w_n) x_n/‖x_n‖ + q(w_n)/‖x_n‖ } ‖,   (85)

where we assume the minimum is attained at w_n ∈ Ω(x_n), whose existence is ensured by the compactness of Ω(x_n), since Ω(x) is closed and Ω is bounded. Taking a subsequence if necessary, we can assume that {x_n/‖x_n‖} and {w_n} are both convergent, with x̄ and w̄ denoting their respective limit points. Thus, we have

w̄ ∈ lim sup_{n→∞} Ω(x_n) ⊂ lim sup_{x→∞} Ω(x).   (86)

Now, taking the limit in (85) yields

‖min{x̄, M(w̄)x̄}‖ = 0,   (87)

where we have used the fact that q(w_n)/‖x_n‖ → 0, because q is continuous and w_n ∈ Ω is bounded. This contradicts (84), since x̄ is a nonzero vector.
Note that condition (84) is equivalent to

⋃_{w ∈ lim sup_{x→∞} Ω(x)} SOL(M(w)) = {0},   (88)

which is also equivalent to saying that each matrix M(w) for w ∈ lim sup_{x→∞} Ω(x) is an R_0-matrix.

Theorem 21. For SVLCP(M, q, Ω), suppose that there exists a compact set Ω such that Ω(x) ⊂ Ω for all x ∈ R^n, and M(w) and q(w) are continuous on Ω. If r(x) = min_{w∈Ω(x)} ‖min{x, M(w)x + q(w)}‖ is level bounded, then the following implication holds:

x ≥ 0,  M(w)x ≥ 0,  x^T M(w)x = 0  for some w ∈ ⋂_{Ñ∈N_∞} ⋃_{n∈Ñ} Ω(nx)  ⟹  x = 0.   (89)

Proof. Suppose that there exist a nonzero vector x_0 and w_0 ∈ ⋂_{Ñ∈N_∞} ⋃_{n∈Ñ} Ω(n x_0) such that

x_0 ≥ 0,  M(w_0)x_0 ≥ 0,  x_0^T M(w_0)x_0 = 0.   (90)

Similar to the argument in Theorem 5, there exists a sequence {n_k} with n_k → ∞ and w_0 ∈ Ω(n_k x_0). Hence,

r(n_k x_0) = min_{w∈Ω(n_k x_0)} ‖min{n_k x_0, n_k M(w)x_0 + q(w)}‖
 ≤ ‖min{n_k x_0, n_k M(w_0)x_0 + q(w_0)}‖
 ≤ Σ_{i=1}^{n} | min{n_k (x_0)_i, n_k (M(w_0)x_0)_i + q_i(w_0)} |.   (91)

Next, we proceed by discussing the following two cases.

Case 1. For (x_0)_i > 0, we have (M(w_0)x_0)_i = 0 from (90). Since max_{w∈Ω} q_i(w) is finite, due to the compactness of Ω and the continuity of q(w), we have n_k (x_0)_i > max_{w∈Ω} q_i(w) for k sufficiently large. Therefore, we obtain

| min{n_k (x_0)_i, n_k (M(w_0)x_0)_i + q_i(w_0)} | = | q_i(w_0) |.   (92)

Case 2. For (x_0)_i = 0, by a simple calculation, we have

| min{n_k (x_0)_i, n_k (M(w_0)x_0)_i + q_i(w_0)} | { = 0,  if n_k (M(w_0)x_0)_i + q_i(w_0) ≥ 0 ; ≤ | q_i(w_0) |,  if n_k (M(w_0)x_0)_i + q_i(w_0) < 0, }   (93)

where the inequality in the latter case comes from the fact that q_i(w_0) ≤ n_k (M(w_0)x_0)_i + q_i(w_0) < 0. Thus,

r(n_k x_0) ≤ Σ_{i=1}^{n} | q_i(w_0) |.   (94)

This contradicts the level boundedness of r(x), since n_k x_0 → ∞.

The above conclusion is equivalent to saying that, for each w ∈ ⋂_{Ñ∈N_∞} ⋃_{n∈Ñ} Ω(nx), the matrix M(w) is an R_0-matrix.

Finally, let us discuss a special case where the set-valued mapping Ω(x) has an explicit form, for example, Ω(x) = {w | H(x, w) = 0 and G(x, w) ≥ 0}, where H : R^n × R^m → R^{l_1} and G : R^n × R^m → R^{l_2}. Then, the solution set can be further characterized.

Theorem 22. If Ω(x) := {w | G(x, w) ≥ 0, H(x, w) = 0}, then

SOL(F, Ω) = ⋃_{w∈R^m} { x | (x, 0, α) ∈ SOL(Θ_w) with α ∈ R^{l_2}_{++} and 0 ∈ R^{l_1} },   (95)

where Θ_w : R^n → R^{n+l_1+l_2} is defined as Θ_w(x) := (F(x, w), G(x, w), H(x, w)) and R^{l_2}_{++} := {α ∈ R^{l_2} | α_i > 0 for all i = 1, ..., l_2}.

Proof. Note that problem (2) is to find w ∈ R^m and x ∈ R^n such that

x ≥ 0,  F(x, w) ≥ 0,  ⟨F(x, w), x⟩ = 0,  G(x, w) ≥ 0,  H(x, w) = 0,   (96)

namely, to find w ∈ R^m and x ∈ R^n satisfying

x ≥ 0,  F(x, w) ≥ 0,  ⟨F(x, w), x⟩ = 0,
0 ≥ 0,  G(x, w) ≥ 0,  0 · G(x, w) = 0,
α > 0,  H(x, w) ≥ 0,  ⟨α, H(x, w)⟩ = 0.   (97)

In other words,

(x, 0, α) ≥ 0,  (F(x, w), G(x, w), H(x, w)) ≥ 0,  (x, 0, α)^T (F(x, w), G(x, w), H(x, w)) = 0.   (98)

Then, the desired result follows.

The foregoing result indicates that the set-valued complementarity problem is different from the classical complementarity problem, since it requires that some components of the (extended) solution be positive or zero, which is not required in the classical complementarity problem.

Moreover, the set-valued complementarity problem can be further reformulated as an equation, that is, finding x ∈ R^n and w ∈ R^m satisfying

Γ(x, w) = ( ξ(x, w), H(x, w), dist^2(G(x, w) | R^{l_2}_+) ) = 0,   (99)

where ξ(x, w) = (1/2)‖Φ_FB(x, F(x, w))‖^2. Note that when A is a closed convex set, θ(x) := dist^2(x, A) is continuously differentiable, and ∇θ(x) = 2(x − Π_A(x)). This fact, together with the continuous differentiability of ‖φ_FB‖^2, implies the following immediately.
Theorem 23. Suppose that G and H are continuously differentiable and that φ is the Fischer-Burmeister function. Then Γ is continuously differentiable, and its Jacobian is given by

JΓ(x, w) = ( J_x ξ(x, w),  J_w ξ(x, w) ;
 J_x H(x, w),  J_w H(x, w) ;
 2(G(x, w) − Π_{R^{l_2}_+}(G(x, w)))^T J_x G(x, w),  2(G(x, w) − Π_{R^{l_2}_+}(G(x, w)))^T J_w G(x, w) ),   (100)

where

J_x ξ(x, w) = Φ_FB(x, F(x, w))^T [ D_a(x, F(x, w)) + D_b(x, F(x, w)) J_x F(x, w) ],
J_w ξ(x, w) = Φ_FB(x, F(x, w))^T D_b(x, F(x, w)) J_w F(x, w).   (101)

Here D_a(x, F(x, w)) and D_b(x, F(x, w)) denote the sets of n × n diagonal matrices diag(a_1(x, F(x, w)), ..., a_n(x, F(x, w))) and diag(b_1(x, F(x, w)), ..., b_n(x, F(x, w))), respectively, and

(a_i(x, F(x, w)), b_i(x, F(x, w))) = { (x_i, F_i(x, w)) / √(x_i^2 + F_i^2(x, w)) − (1, 1),  if (x_i, F_i(x, w)) ≠ 0 ;
 ∈ ⋃_{θ∈[0,2π]} {(cos θ, sin θ)} − (1, 1),  if (x_i, F_i(x, w)) = 0. }   (102)
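As an illustration of the reformulation (99), the following sketch evaluates Γ(x, w) for user-supplied F, G, H; a nonlinear-equation or least-squares solver could then be applied to Γ. The specific functions below are placeholders, not from the paper.

```python
import numpy as np

phi_fb = lambda a, b: np.sqrt(a**2 + b**2) - (a + b)    # Fischer-Burmeister function

def Gamma(x, w, F, G, H):
    """Residual map (99): zero exactly at solutions with w in Omega(x)."""
    xi = 0.5 * np.sum(phi_fb(x, F(x, w))**2)            # xi(x, w) = 1/2 ||Phi_FB||^2
    dist_sq = np.sum(np.minimum(G(x, w), 0.0)**2)       # dist^2(G(x, w) | R^{l2}_+)
    return np.concatenate(([xi], np.atleast_1d(H(x, w)), [dist_sq]))

# placeholder data: Omega(x) = {w | G = w >= 0, H = w - 1 = 0} = {1}
F = lambda x, w: np.array([x[0] - w, x[1] + w])
G = lambda x, w: np.array([w])
H = lambda x, w: np.array([w - 1.0])

x, w = np.array([1.0, 0.0]), 1.0
print(Gamma(x, w, F, G, H))      # (0, 0, 0): x = (1, 0) with w = 1 solves this instance
```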
5. Further Discussions

In this paper, we have paid much attention to set-valued complementarity problems, which possess rather different features from those of classical complementarity problems. As suggested by one referee, we here briefly discuss the relation between stochastic variational inequalities and set-valued complementarity problems. Given F : R^n × Ξ → R^n, X_ξ ⊂ R^n, and Ξ ⊂ R^l, a set representing future states of knowledge, the stochastic variational inequality is to find x ∈ X_ξ such that

(y − x)^T F(x, ξ) ≥ 0,  ∀ y ∈ X_ξ, ξ ∈ Ξ.   (103)

If X_ξ = R^n_+, then the stochastic variational inequality reduces to the stochastic complementarity problem as follows:

x ≥ 0,  F(x, ξ) ≥ 0,  x^T F(x, ξ) = 0,  ξ ∈ Ξ.   (104)

The optimization problem corresponding to the stochastic complementarity problem is

min_{x∈R^n_+} E{‖Φ(x, F(x, ξ))‖}.   (105)

When Ξ is a discrete set, say Ξ := {ξ_1, ξ_2, ..., ξ_v}, then

E{‖Φ(x, F(x, ξ))‖} = Σ_{i=1}^{v} P(ξ_i) ‖Φ(x, F(x, ξ_i))‖,   (106)

where P(ξ_i) is the probability of ξ_i. If the optimal value of (105) is zero, then it follows from (106) that (104) coincides with

x ≥ 0,  F(x, ξ_i) ≥ 0,  x^T F(x, ξ_i) = 0,  ∀ ξ_i ∈ Ξ satisfying P(ξ_i) > 0.   (107)
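For a discrete Ξ, the expected residual (106) is straightforward to evaluate; a minimal sketch with placeholder data follows.

```python
import numpy as np

phi_fb = lambda a, b: np.sqrt(a**2 + b**2) - (a + b)

def expected_residual(x, F, xis, probs):
    """E{ ||Phi(x, F(x, xi))|| } as in (106) for a discrete distribution."""
    return sum(p * np.linalg.norm(phi_fb(x, F(x, xi))) for xi, p in zip(xis, probs))

# placeholder data: F(x, xi) = x + xi * e with scenarios xi in {0, 1}
F = lambda x, xi: x + xi * np.ones_like(x)
print(expected_residual(np.zeros(2), F, xis=[0.0, 1.0], probs=[0.5, 0.5]))   # 0: x = 0 solves both scenarios
```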
When Ξ is a continuous set, then

E{‖Φ(x, F(x, ξ))‖} = ∫_Ξ ‖Φ(x, F(x, ξ))‖ P(ξ) dξ,   (108)

where P(ξ) is the density function. In this case, (104) takes the form of

x ≥ 0,  F(x, ξ) ≥ 0,  x^T F(x, ξ) = 0,  a.e. ξ ∈ Ξ,   (109)

or, equivalently, there exists a subset Ξ_0 ⊂ Ξ with P(Ξ_0) = 0 such that

x ≥ 0,  F(x, ξ) ≥ 0,  x^T F(x, ξ) = 0,  ∀ ξ ∈ Ξ \ Ξ_0.   (110)

Hence the stochastic complementarity problem is, to a certain extent, a semi-infinite complementarity problem (SICP).

Due to some major differences between set-valued complementarity problems and classical complementarity problems, there are still many interesting, important, and challenging questions for further investigation, to name a few.

(i) How to extend other important concepts used in classical linear complementarity problems to the set-valued case (like P_0, P_*, Z, Q, Q_0, S, copositive, and column sufficient matrices, etc.)?

(ii) How to propose an effective algorithm to solve (99)?

(iii) Can we provide some sufficient conditions to ensure the existence of solutions? One possible direction is to use fixed-point theory. In fact, the set-valued complementarity problem is to find x ∈ R^n such that

x = max{0, x − F(x, w)} = Π_{R^n_+}(x − F(x, w))  for some w ∈ Ω(x),   (111)
that is,

x ∈ Π_{R^n_+}(x − F̃(x)),   (112)

where F̃(x) := ⋃_{w∈Ω(x)} {F(x, w)}. Note that (112) is a fixed-point inclusion for the set-valued mapping Π_{R^n_+}(I − F̃), where I denotes the identity mapping.
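A crude way to exploit (111) and (112) numerically is a projection iteration that, at each step, picks the best sampled w ∈ Ω(x). This is only a heuristic sketch under the assumption that Ω(x) can be sampled; it is not an algorithm proposed in the paper.

```python
import numpy as np

def projection_iteration(x0, F, Omega, steps=200, tol=1e-10):
    """Heuristic: iterate x <- Pi_{R^n_+}(x - F(x, w)), choosing w to minimize the step length."""
    x = np.maximum(x0, 0.0)
    for _ in range(steps):
        candidates = [np.maximum(x - F(x, w), 0.0) for w in Omega(x)]   # projections as in (111)
        x_next = min(candidates, key=lambda y: np.linalg.norm(y - x))
        if np.linalg.norm(x_next - x) <= tol:
            break
        x = x_next
    return x

# toy linear instance (illustrative only): F(x, w) = M(w) x + q(w)
M = lambda w: np.array([[1.0, 0.0], [0.0, 1.0 + w]])
q = lambda w: np.array([-1.0, w])
F = lambda x, w: M(w) @ x + q(w)
Omega = lambda x: np.linspace(0.0, 1.0, 11)

print(projection_iteration(np.array([2.0, 2.0]), F, Omega))   # converges to (1, 0), a solution for w = 0
```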
Acknowledgments
The authors would like to thank the three referees for their careful reading and suggestions, which helped to improve this paper. J. Zhou is supported by the National Natural Science Foundation of China (11101248, 11271233), the Shandong Province Natural Science Foundation (ZR2010AQ026, ZR2012AM016), and the Young Teacher Support Program of Shandong University of Technology. J.-S. Chen is supported by the National Science Council of Taiwan. G. M. Lee is supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MEST) (no. 2012-0006236).
References
[1] F. Facchinei and J. S. Pang, Finite-Dimensional Variational
Inequalities and Complementarity Problems, Volume I and II,
Springer, New York, NY, USA, 2003.
[2] J. S. Pang and M. Fukushima, “Quasi-variational inequalities,
generalized Nash equilibria, and multi-leader-follower games,”
Computational Management Science, vol. 2, no. 1, pp. 21–56,
2005.
[3] M. S. Gowda, “On the extended linear complementarity problem,” Mathematical Programming B, vol. 72, no. 1, pp. 33–50,
1996.
[4] O. L. Mangasarian and J. S. Pang, “The extended linear complementarity problem,” SIAM Journal on Matrix Analysis and
Application, vol. 16, pp. 359–368, 1995.
[5] J. S. Pang, “The implicit complementarity problem,” in Nonlinear Programming, O. L. Mangasarian, R. R. Meyer, and S. M.
Robinson, Eds., Academic Press, New York, NY, USA, 4 edition,
1981.
[6] M. A. Noor, “Some developments in general variational
inequalities,” Applied Mathematics and Computation, vol. 152,
no. 1, pp. 199–277, 2004.
[7] M. A. Noor, “Extended general variational inequalities,” Applied
Mathematics Letters, vol. 22, no. 2, pp. 182–185, 2009.
[8] E. Polak, Optimization: Algorithms and Consistent Approximation, Springer, New York, NY, USA, 1997.
[9] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis,
Springer, New York, NY, USA, 1998.
[10] R. T. Rockafellar, Convex Analysis, Princeton University Press,
Princeton, NJ, USA, 1970.
[11] J. P. Aubin and H. Frankowska, Set-Valued Analysis, Birkhauser,
Boston, Mass, USA, 1990.
[12] R. Cottle, J. S. Pang, and R. Stone, The Linear Complementarity
Problem, Academic Press, New York, NY, USA, 1992.
[13] M. Fiedler and V. Pták, “Some generalizations of positive
definiteness and monotonicity,” Numerische Mathematik, vol. 9,
no. 2, pp. 163–172, 1966.
[14] X. Chen and M. Fukushima, “Expected residual minimization
method for stochastic linear complementarity problems,” Mathematics of Operations Research, vol. 30, no. 4, pp. 1022–1038,
2005.
[15] X. Chen, C. Zhang, and M. Fukushima, “Robust solution of
monotone stochastic linear complementarity problems,” Mathematical Programming, vol. 117, no. 1-2, pp. 51–80, 2009.
[16] H. Fang, X. Chen, and M. Fukushima, “Stochastic R_0 matrix
linear complementarity problems,” SIAM Journal on Optimization, vol. 18, no. 2, pp. 482–506, 2007.
[17] C. Zhang and X. Chen, “Smoothing projected gradient method
and its application to stochastic linear complementarity problems,” SIAM Journal on Optimization, vol. 20, no. 2, pp. 627–649,
2009.
[18] J. C. Zhou, N. H. Xiu, and J. S. Chen, “Solution properties
and error bounds for semi-infinite complementarity problems,”
Journal of Industrial and Management Optimization, vol. 9, pp.
99–115, 2013.
[19] A. Galantai, “Properties and construction of NCP functions,”
Computational Optimization and Applications, vol. 52, pp. 805–
824, 2012.
[20] J. S. Pang, “Error bounds in mathematical programming,”
Mathematical Programming B, vol. 79, no. 1–3, pp. 299–332, 1997.
[21] X. Chen and S. Xiang, “Computation of error bounds for Pmatrix linear complementarity problems,” Mathematical Programming, vol. 106, no. 3, pp. 513–525, 2006.
[22] S. A. Gabriel and J. J. Moré, “Smoothing of mixed complementarity problems,” in Complementarity and Variational
Problems, M. C. Ferris and J. S. Pang, Eds., SIAM Publications,
Philadelphia, Pa, USA, 1997.