SIAM J. CONTROL OPTIM.
Vol. 50, No. 4, pp. 2486–2514
c 2012 Society for Industrial and Applied Mathematics
VARIATIONAL INEQUALITIES FOR SET-VALUED VECTOR
FIELDS ON RIEMANNIAN MANIFOLDS: CONVEXITY OF THE
SOLUTION SET AND THE PROXIMAL POINT ALGORITHM∗
CHONG LI† AND JEN-CHIH YAO‡
Abstract. We consider variational inequality problems for set-valued vector fields on general
Riemannian manifolds. The existence results of the solution, convexity of the solution set, and the
convergence property of the proximal point algorithm for the variational inequality problems for
set-valued mappings on Riemannian manifolds are established. Applications to convex optimization
problems on Riemannian manifolds are provided.
Key words. variational inequalities, Riemannian manifold, monotone vector fields, proximal
point algorithm, convexity of solution set
AMS subject classifications. Primary, 49J40; Secondary, 58D17
DOI. 10.1137/110834962
1. Introduction. Various problems posed on manifolds arise in many natural
contexts. Classical examples are given by some numerical problems such as eigenvalue
problems and invariant subspace computations, constrained minimization problems,
and boundary value problems on manifolds, etc.; see, for example, [1, 13, 17, 26, 37,
38, 39]. Recent interests are focused on extending some classical and important results
for solving these problems on linear spaces to the setting of manifolds. For example,
some numerical methods such as Newton’s method, the conjugate gradient method,
the trust-region method, and their modifications for optimization problems on linear
spaces are extended to the Riemannian manifold setting [2, 11, 16, 24]. A theory of
subdifferential calculus for functions defined on Riemannian manifolds is developed
in [3, 20], where these results are applied to show existence and uniqueness of viscosity
solutions to Hamilton–Jacobi equations defined on Riemannian manifolds and to study
constrained optimization problems and nonclassical problems of calculus of variations
on Riemannian manifolds. The important notions of monotonicity in linear spaces
were extended to Riemannian manifolds and have been studied extensively in [21, 22,
27, 28, 29, 30, 31, 40], while weak sharp minima for constrained optimization problems
on Riemannian manifolds are explored recently in [23], where various notions of weak
sharp minima are extended and their complete characterizations are established.
Variational inequalities on R^n are powerful tools for studying constrained optimization problems and equilibrium problems, as well as complementarity problems, and
have been studied extensively; see, for example, the survey [18] and the book [32].
Variational inequality problems on Riemannian manifolds were first introduced
and studied in [26] by Németh for univalued vector fields on Hadamard manifolds.
∗ Received by the editors May 23, 2011; accepted for publication (in revised form) May 18, 2012;
published electronically August 28, 2012.
http://www.siam.org/journals/sicon/50-4/83496.html
† Department of Mathematics, Zhejiang University, Hangzhou 310027, People’s Republic of China
(cli@zju.edu.cn). This author’s research was partially supported by the National Natural Science
Foundation of China (grant 11171300) and by Zhejiang Provincial Natural Science Foundation of
China under grant Y6110006.
‡ Center for General Education, Kaohsiung Medical University, Kaohsiung 80702, Taiwan (yaojc@
kmu.edu.tw). This author’s research was partially supported by the National Science Council of
Taiwan under grant NSC 99-2115-M-037-002-MY3.
Németh established in [26] some basic results on existence and uniqueness of the
solution for variational inequality problems on Hadamard manifolds and proposed an
open problem on how to extend the existence and uniqueness results from Hadamard
manifolds to Riemannian manifolds. This problem was solved completely in [25]
by Li et al. The famous proximal point algorithm for optimization problems and for variational inequality problems on Hilbert spaces was extended to the setting of Hadamard manifolds in [14] and [21], respectively, where the well-posedness and
convergence results of the proximal point algorithm on Hadamard manifolds were
established. Another basic and interesting problem for variational inequalities is the
convexity of the solution set, which, unlike in the linear cases, seems nontrivial even
for univalued vector fields on Hadamard manifolds.
Our interest in the present paper is the study of variational inequality problems for
set-valued vector fields on general Riemannian manifolds (not necessarily Hadamard
manifolds). The purpose of the present paper is to explore the existence of the solution and the convexity of the solution set, as well as the convergence property of the
proximal point algorithm for the variational inequality problems for set-valued mappings on Riemannian manifolds. Compared with the corresponding ones for linear
spaces, the problems considered here for the general Riemannian manifold case are
much more complicated, and most of the known techniques in the linear space setting do
not work; for example, even in a Hadamard manifold we do not have the following
property (which holds trivially in a linear space and plays a crucial role in the study
of the convexity problem of the solution set): The image of any linear segment in
the tangent space under an exponential map is a geodesic segment in the underlying
manifold. Most of the main results obtained in the present paper extend and improve
the corresponding ones for univalued vector fields on Hadamard manifolds and/or
Riemannian manifolds, while the convexity results of solution sets are completely new
for set-valued vector fields on Hadamard and Riemannian manifolds.
The paper is organized as follows. The next section contains the necessary notation, notions, and preliminary results. The existence and uniqueness of the solution
set of the variational inequality problems for set-valued mappings on general Riemannian manifolds are established in section 3. In sections 4 and 5, the convexity results
of the solution set and the convergence results of the proximal point algorithm of the
variational inequality problems are provided for set-valued mappings on Riemannian
manifolds of curvature bounded above. Applications of our results on convergence of
the proximal point algorithm to convex optimization problems on Riemannian manifolds are presented in section 6.
2. Notation and preliminary results. We begin with some necessary notation, notions, and preliminary results about Riemannian manifolds that will be used
in the next sections. The readers are referred to some textbooks for details, for example, [5, 12, 33, 36].
Let M be a finite-dimensional Riemannian manifold with the Levi–Civita connection ∇ on M. Let x ∈ M, and let T_x M denote the tangent space at x to M. We denote by ⟨·, ·⟩_x the scalar product on T_x M with the associated norm ‖·‖_x, where the subscript x is sometimes omitted. For x, y ∈ M, let c : [0, 1] → M be a piecewise smooth curve joining x to y. Then the arc-length of c is defined by l(c) := ∫_0^1 ‖ċ(t)‖ dt, while the Riemannian distance from x to y is defined by d(x, y) := inf_c l(c), where
the infimum is taken over all piecewise smooth curves c : [0, 1] → M joining x to y.
Recall that a curve c : [0, 1] → M joining x to y is a geodesic if
(2.1) c(0) = x, c(1) = y, and ∇_ċ ċ = 0 on [0, 1],
and a geodesic c : [0, 1] → M joining x to y is minimal if its arc-length equals the Riemannian distance between x and y. By the Hopf–Rinow theorem (cf. [12]), if M
is additionally complete and connected, then (M, d) is a complete metric space, and
there is at least one minimal geodesic joining x to y. Moreover, the exponential map
at x expx : Tx M → M is well-defined on Tx M . Clearly, a curve c : [0, 1] → M is a
minimal geodesic joining x to y if and only if there exists a vector v ∈ Tx M such that
‖v‖ = d(x, y) and c(t) = exp_x(tv) for each t ∈ [0, 1].
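On the unit sphere S² ⊂ R³ (a complete Riemannian manifold), the exponential map has the closed form exp_x(v) = cos(‖v‖)x + sin(‖v‖)v/‖v‖, so the characterization above can be checked numerically. The following sketch is an illustration only (it is not from the paper): it verifies that c(t) = exp_x(tv) with ‖v‖ = d(x, y) reaches y at t = 1 and has arc-length equal to the great-circle distance.

```python
import numpy as np

def exp_map(x, v):
    """Exponential map on the unit sphere S^2: exp_x(v) for a tangent vector v at x."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

def dist(x, y):
    """Riemannian (great-circle) distance on S^2."""
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])

# Initial velocity of the minimal geodesic from x to y: the tangent direction
# toward y, scaled so that ||v|| = d(x, y).
u = y - np.dot(y, x) * x                 # project y onto the tangent space at x
v = dist(x, y) * u / np.linalg.norm(u)

assert np.allclose(exp_map(x, v), y, atol=1e-10)   # c(1) recovers y

# The curve t -> exp_x(tv) has constant speed ||v||, so its length is d(x, y).
ts = np.linspace(0.0, 1.0, 201)
pts = np.array([exp_map(x, t * v) for t in ts])
length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
assert abs(length - dist(x, y)) < 1e-3
```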
Let γ be a geodesic. We use P_{γ,·,·} to denote the parallel transport on the tangent bundle T M (defined below) along γ with respect to ∇, which is defined by
P_{γ,γ(b),γ(a)}(v) := V(γ(b)) for any a, b ∈ R and v ∈ T_{γ(a)} M,
where V is the unique vector field satisfying ∇_{γ̇(t)} V = 0 for all t and V(γ(a)) = v.
Then, for any a, b ∈ R, Pγ,γ(b),γ(a) is an isometry from Tγ(a) M to Tγ(b) M . We will
write Py,x instead of Pγ,y,x in the case when γ is a minimal geodesic joining x to y
and no confusion arises.
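Parallel transport is rarely available in closed form, but along a great circle of S² it is: the component of a tangent vector along the geodesic's velocity rotates with that velocity, while the component normal to the geodesic's plane is left unchanged. The sketch below is an illustration only (not from the paper); it checks the isometry property of P_{γ,γ(b),γ(a)} stated above.

```python
import numpy as np

def transport(x, u, t, w):
    """Parallel transport on S^2 of a tangent vector w at x along the unit-speed
    geodesic c(s) = cos(s) x + sin(s) u (u a unit tangent at x), from s = 0 to s = t."""
    a = np.dot(w, u)                      # component along the geodesic direction
    perp = w - a * u                      # component normal to the geodesic plane
    u_t = -np.sin(t) * x + np.cos(t) * u  # velocity of c at time t
    return a * u_t + perp

x = np.array([1.0, 0.0, 0.0])
u = np.array([0.0, 1.0, 0.0])             # unit tangent at x
w = np.array([0.0, 0.3, 0.8])             # an arbitrary tangent vector at x

t = 1.2
wt = transport(x, u, t, w)
ct = np.cos(t) * x + np.sin(t) * u        # the point c(t) to which w is transported

# Isometry: the norm is preserved ...
assert abs(np.linalg.norm(wt) - np.linalg.norm(w)) < 1e-12
# ... and the transported vector is again tangent to the sphere at c(t).
assert abs(np.dot(wt, ct)) < 1e-12
```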
For a Banach space or a Riemannian manifold Z, we use B_Z(p, r) and B̄_Z(p, r) to denote, respectively, the open metric ball and the closed metric ball centered at p with radius r, that is,
B_Z(p, r) = {q ∈ Z : d(p, q) < r} and B̄_Z(p, r) = {q ∈ Z : d(p, q) ≤ r}.
We often omit the subscript Z if no confusion arises. Recall that, for a point x ∈ M, the convexity radius at x is defined by
(2.2) r_x := sup{r > 0 : each ball in B(x, r) is strongly convex and each geodesic in B(x, r) is minimal}.
Clearly, if M is a Hadamard manifold, then r_x = +∞ for each x ∈ M. Let
T M := ⋃_{x∈M} ({x} × T_x M)
denote the tangent bundle. Then T M is a Riemannian manifold with the natural
differential structure and the Riemannian metric; see, for example, [12]. Thus, the
following proposition can be proved by a direct application of the definition of parallel
transports (cf. [21, Lemma 2.4] in the case when M is a Hadamard manifold).
Proposition 2.1. Let z_0 ∈ M, and define the mapping P_{z_0} : T M → T_{z_0} M by
P_{z_0}(z, u) := P_{z_0,z} u for each (z, u) ∈ T M.
Then P_{z_0} is continuous on ⋃_{x∈B(z_0, r_{z_0})} ({x} × T_x M).
We denote by Γx,y the set of all geodesics c := γxy : [0, 1] → M satisfying (2.1).
Note that each c ∈ Γx,y can be extended naturally to a geodesic defined on R in the
case when M is complete. Definition 2.2 below presents the notions of the different
kinds of convexities, where items (a) and (b) are known in [41], while item (c) is
known in [7]; see also [23, 25].
Definition 2.2. Let A be a nonempty subset of M . The set A is said to be
(a) weakly convex if, for any p, q ∈ A, there is a minimal geodesic of M joining p
to q lying in A;
(b) strongly convex if, for any p, q ∈ A, there is just one minimal geodesic of M
joining p to q and it is in A;
(c) locally convex if, for any p ∈ A, there is ε > 0 such that A ∩ B(p, ε) is strongly convex.
Clearly, the following implications hold for a nonempty set A in M:
(2.3) strong convexity =⇒ weak convexity =⇒ local convexity.
Remark 2.1. Recall (cf. [36]) that M is a Hadamard manifold if it is a simply connected and complete Riemannian manifold of nonpositive sectional curvature. In a Hadamard manifold, the geodesic between any two points is unique, and the exponential map at each point of M is a global diffeomorphism. Therefore, all the convexities in a Hadamard manifold coincide and are simply called convexity.
Recall from [39, p. 110] (see also [36, Remark V.4.2]) that a point o ∈ M is called
a pole of M if expo : To M → M is a global diffeomorphism, which implies that for
each point x ∈ M , the geodesic of M joining o to x is unique. For the existence result
of the solutions of the problem (3.1), the notion of the weak pole of A in the following
definition was introduced in [25].
Definition 2.3. A point o ∈ A is called a weak pole of A if for each x ∈ A, the
minimal geodesic of M joining o to x is unique and lies in A.
Clearly, any subset with a weak pole is connected. Let p ∈ A. For the following proposition, which is known in [36, pp. 169–171], we use ρ(p) to denote the supremum of the radii r such that the set A ∩ B(p, r) is strongly convex, that is,
ρ(p) := sup{r > 0 : A ∩ B(p, r) is strongly convex}.
Proposition 2.4. Suppose that M is a complete Riemannian manifold. Let
A ⊆ M be a nonempty closed connected and locally convex subset. Then, there exists
a connected (embedded) k-dimensional totally geodesic submanifold N of M possessing
the following properties:
(i) A = N̄ and γ_{xy}([0, 1)) ⊂ N for any p ∈ A, x ∈ B(p, ρ(p)) ∩ N, y ∈ B(p, ρ(p)) ∩ A, and γ_{xy} ∈ Γ_{x,y};
(ii) γ_{xy}(t) ∉ A for any y ∉ N and any t ∈ (1, t_0], where t_0 > 1 is such that γ_{xy}([0, t_0]) ⊂ B(p, ρ(p)).
Following [36, p. 171], the sets intR A := N and ∂R A := A \ N are called the
(relative) interior and the (relative) boundary of A, respectively. Let int A denote the
topological interior of A, that is, x ∈ int A if and only if B(x, δ) ⊆ A for some δ > 0.
Some useful properties of the interiors are given in the following proposition. Let x̄ ∈ A. We use F_x̄A to denote the set of all feasible directions of A at x̄, defined by
(2.4) F_x̄A := {v ∈ T_x̄M : there exists t̄ > 0 such that exp_x̄ tv ∈ A for all t ∈ (0, t̄)}.
Remark 2.2. Assume that M is complete and A is locally convex. Following [36,
p. 171], we define the set C(x̄) by
C(x̄) := {v ∈ T_x̄M \ {0} : there exists t ∈ (0, r_x̄) such that exp_x̄ t(v/‖v‖) ∈ N} ∪ {0}.
Then one checks by definition that
(2.5) C(x̄) ⊆ F_x̄A ⊆ C̄(x̄) for any x̄ ∈ A
and
(2.6) C(x̄) = F_x̄A for any x̄ ∈ N.
Proposition 2.5. Suppose that M is a complete Riemannian manifold and A
is a closed connected and locally convex subset of M . Let N = intR A. Then the
following assertions hold:
(i) x̄ ∈ intR A ⇐⇒ Fx̄ A = Tx̄ N ⇐⇒ B(x̄, δ) ∩ expx̄ (Tx̄ N ) ⊆ A for some δ > 0.
(ii) x̄ ∈ int A ⇐⇒ Fx̄ A = Tx̄ M.
(iii) If int A = ∅, then intR A = int A.
(iv) If x̄ ∈ intR A is a weak pole of A, then γxx̄ ((0, 1]) ⊆ intR A for any x ∈ A.
Proof. In view of Remark 2.2, one easily sees that assertion (i) holds by [36,
pp. 169–171], while assertion (ii) is clear by the definition of the interior. As for
assertion (iii), we note that if int A ≠ ∅ and x̄ ∈ int A, then T_x̄N = T_x̄M by (ii), and so N is an m-dimensional totally geodesic submanifold of M, where m := dim M. This means that, for any x ∈ N, T_xN is of dimension m, and so T_xN = T_xM. Thus (iii) is seen
to hold by assertion (ii). Finally, assertion (iv) is known in [25, Proposition 4.3].
3. Existence and uniqueness results. Throughout the whole paper, we always assume that M is a complete Riemannian manifold. Let A ⊆ M be a nonempty set, and let Γ^A_{x,y} denote the set of all γ_{xy} ∈ Γ_{x,y} such that γ_{xy} ⊆ A. Let V : A → 2^{TM} be a set-valued vector field on A, that is, ∅ ≠ V(x) ⊆ T_x M for each x ∈ A. The variational inequality problem considered here on the Riemannian manifold M is to find x̄ ∈ A such that
(3.1) ∃ v̄ ∈ V(x̄) satisfying ⟨v̄, γ̇_{x̄y}(0)⟩ ≥ 0 for each y ∈ A and each γ_{x̄y} ∈ Γ^A_{x̄,y}.
Any point x̄ ∈ A satisfying (3.1) is called a solution of the variational inequality
problem (3.1), and the set of all solutions is denoted by S(V, A).
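For orientation, when M = R^n the geodesic from x̄ to y is the segment t ↦ x̄ + t(y − x̄), so γ̇_{x̄y}(0) = y − x̄ and (3.1) reduces to the classical variational inequality: find x̄ ∈ A and v̄ ∈ V(x̄) with ⟨v̄, y − x̄⟩ ≥ 0 for all y ∈ A. The sketch below is an illustration only (the field V and set A are hypothetical, not from the paper); it locates S(V, A) by brute force for a single-valued field on an interval.

```python
import numpy as np

def is_solution(x, V, A, tol=1e-9):
    """Check the Euclidean VI condition <V(x), y - x> >= 0 for all y in A."""
    return all(V(x) * (y - x) >= -tol for y in A)

A = np.linspace(0.0, 3.0, 3001)        # the constraint set A = [0, 3], discretized
V = lambda x: x - 1.0                  # V is the gradient of f(x) = (x - 1)^2 / 2

solutions = [x for x in A if is_solution(x, V, A)]
# The solution set is the singleton {1}: there V vanishes, while at every
# other point of [0, 3] the field has a feasible direction of decrease.
assert len(solutions) == 1 and abs(solutions[0] - 1.0) < 1e-9
```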
The following theorem on the existence of solutions of the variational problem
(3.1) for continuous (univalued) vector fields on A was proved in [25], which plays an
important role for the study of this section.
Theorem 3.1. Let A ⊂ M be a compact, locally convex subset of M , and let
V be a continuous vector field on A. Suppose that A has a weak pole o ∈ intR A.
Then S(V, A) = ∅, that is, the variational inequality problem (3.1) admits at least one
solution x̄.
Write A_R := A ∩ B(o, R), where R > 0 and o ∈ A. Consider the following variational inequality problem: Find x_R ∈ A_R and v_R ∈ V(x_R) such that
(3.2) ⟨v_R, γ̇_{x_R y}(0)⟩ ≥ 0 for each y ∈ A_R and each γ_{x_R y} ∈ Γ^{A_R}_{x_R,y}.
The following proposition establishes the relationship between the solutions for
problems (3.2) and (3.1), which will be used frequently for our study in what follows.
Proposition 3.2. Let A ⊂ M be a nonempty subset, and let V be a set-valued
vector field on A. Then x̄ ∈ A is a solution of the problem (3.1) if and only if there
exist R > 0 and a point o ∈ A such that x̄ ∈ B(o, R) and x̄ is a solution of the problem
(3.2), that is,
(3.3)
x̄ ∈ S(V, A) ⇐⇒ [∃R > 0, o ∈ A such that x̄ ∈ S(V, AR ) ∩ B(o, R)].
Proof. The necessity part is trivial. Hence we need only prove the sufficiency part. To this end, suppose that there exist o ∈ A and R > 0 such that x̄ ∈ S(V, A_R) ∩ B(o, R). Let y ∈ A and γ_{x̄y} ∈ Γ^A_{x̄,y}. Then there exists δ > 0 such that
δ‖γ̇_{x̄y}(0)‖ < min{r_x̄, R − d(o, x̄)},
where r_x̄ is the convexity radius at x̄. Set z := exp_x̄(δγ̇_{x̄y}(0)). Then z ∈ A_R, and the minimal geodesic γ_{x̄z} ∈ Γ_{x̄,z} is unique, is contained in A_R, and satisfies γ̇_{x̄z}(0) = δγ̇_{x̄y}(0). Since x̄ ∈ S(V, A_R), it follows that there exists v̄ ∈ V(x̄) such that ⟨v̄, δγ̇_{x̄y}(0)⟩ ≥ 0. Consequently, ⟨v̄, γ̇_{x̄y}(0)⟩ ≥ 0. This shows that x̄ ∈ S(V, A) because y ∈ A and γ_{x̄y} ∈ Γ^A_{x̄,y} are arbitrary.
Below we will extend the existence theorem to the case of set-valued vector fields
on A. Let (X, dX ) and (Y, dY ) be metric spaces. Let X × Y be the product metric
space endowed with the product metric d defined by
d((x1 , y1 ), (x2 , y2 )) := dX (x1 , x2 ) + dY (y1 , y2 )
for any (x1 , y1 ), (x2 , y2 ) ∈ X × Y.
Let Γ be a set-valued mapping from X to Y . Let gph(Γ) denote the graph of Γ defined
by
gph(Γ) := {(x, y) ∈ X × Y : y ∈ Γ(x)}.
Recall that Γ is upper semicontinuous on X if for any x0 ∈ X and any open set U
containing Γ(x0 ), there exists a neighborhood B(x0 , δ) of x0 such that Γ(x) ⊆ U for
any x ∈ B(x0 , δ).
In the following definition, we extend this notion to set-valued vector fields on
manifolds; see, for example, [21].
Definition 3.3. Let A ⊂ M be a nonempty subset and V be a set-valued vector
field on A. Let x0 ∈ A. V is said to be
(a) upper semicontinuous at x0 if for any open set W satisfying V (x0 ) ⊆ W ⊆
Tx0 M , there exists an open neighborhood U (x0 ) of x0 such that Px0 ,x V (x) ⊆ W for
any x ∈ U (x0 ) ∩ A.
(b) upper Kuratowski semicontinuous at x0 if for any sequences {xk } ⊆ A and
{uk } ⊂ T M with each uk ∈ V (xk ), relations limk→∞ xk = x0 and limk→∞ uk = u0
imply u0 ∈ V (x0 ).
(c) upper semicontinuous (resp., upper Kuratowski semicontinuous) on A if it is
upper semicontinuous (resp., upper Kuratowski semicontinuous) at each x0 ∈ A.
Remark 3.1. By definition, the following assertions hold:
(i) The upper semicontinuity implies the upper Kuratowski semicontinuity. The
converse is also true if A is compact and V is compact-valued.
(ii) A set-valued vector field V on A is upper semicontinuous at x0 ∈ A if and
only if the set-valued mapping V : A → 2T M is upper semicontinuous at x0 ; that is,
for any open subset W of T M containing V (x0 ), there exists a neighborhood U (x0 )
of x0 such that V (x) ⊆ W for all x ∈ U (x0 ) ∩ A.
The following proposition is known in [4, Theorem 1].
Proposition 3.4. Let X be a metric space without isolated points, and let Y be a normed linear space. Let Γ : X → 2^Y be an upper semicontinuous mapping such that Γ(x) is compact and convex for each x ∈ X. Then for each ε > 0 there exists a continuous function f : X → Y such that gph(f) ⊆ U(gph(Γ), ε), where U(gph(Γ), ε) is the ε-neighborhood of gph(Γ):
U(gph(Γ), ε) := {(x, y) ∈ X × Y : d((x, y), gph(Γ)) < ε}.
Consider a compact subset A of M, and let {B(x_i, r_i) : i = 1, 2, . . . , m} be an open cover of A, that is, A ⊆ ⋃_{i=1}^m B(x_i, r_i), where, for each i = 1, 2, . . . , m, r_i := r_{x_i}/2 and r_{x_i} is the convexity radius at x_i. Fixing such an open cover and x ∈ A, we denote by I(x) the index set such that i ∈ I(x) if and only if x ∈ B(x_i, r_i).
Let V(A) denote the set of all upper Kuratowski semicontinuous set-valued vector
fields V satisfying that V (x) is compact and convex for each x ∈ A. By Proposition
3.4, we can verify the following existence result of ε-approximations for set-valued vector fields on A.
Lemma 3.5. Suppose that A is a connected compact subset of M and that V ∈ V(A). Let ε > 0. Then, there exists a continuous vector field V_ε such that for any x ∈ A, there exist a subset {x_i^ε : i ∈ I(x)} ⊆ A and a point y^ε ∈ conv{P_{x,x_i} P_{x_i,x_i^ε} V(x_i^ε) : i ∈ I(x)} satisfying
(3.4) ‖V_ε(x) − y^ε‖ < ε and d(x_i^ε, x) < ε for each i ∈ I(x).
Proof. By the well-known finite partition of unity theorem (see [12], for example) and without loss of generality, we may assume that there exist m nonnegative continuous functions {ψ_i : A → R : i = 1, . . . , m} such that
(3.5) supp(ψ_i) := cl{x ∈ A : ψ_i(x) ≠ 0} ⊆ B(x_i, r_i)
and
(3.6) Σ_{i=1}^m ψ_i(x) = 1 for each x ∈ A.
Let i ∈ {1, 2, . . . , m} and consider the mapping F_i : A ∩ B(x_i, r_i) → 2^{T_{x_i} M} defined by
F_i(x) := P_{x_i,x} V(x) for each x ∈ A ∩ B(x_i, r_i).
Then F_i is upper semicontinuous by Proposition 2.1, and F_i(x) is compact and convex for each x ∈ A ∩ B(x_i, r_i). Thus, by Proposition 3.4, there exists a continuous mapping f̃_i^ε : A ∩ B(x_i, r_i) → T_{x_i} M such that
(3.7) gph(f̃_i^ε) ⊆ U(gph(F_i), ε).
Let f_i^ε : A → T M be defined by f_i^ε(x) := f̃_i^ε(x) for each x ∈ A ∩ B(x_i, r_i) and f_i^ε(x) := 0 otherwise. Define the vector field V_ε : A → T M by
V_ε(x) := Σ_{i=1}^m ψ_i(x) P_{x,x_i} f_i^ε(x) for each x ∈ A.
It is easy to check that V_ε is continuous on A. Now we prove that V_ε is as desired. To do this, let x ∈ A and let I⁺(x) be the subset of {1, 2, . . . , m} such that ψ_i(x) > 0 for i ∈ I⁺(x). Then I⁺(x) ⊆ I(x) by (3.5). Furthermore, we have by (3.7) that, for each i ∈ I⁺(x), there exist x_i^ε ∈ A ∩ B(x_i, r_i) and y_i^ε ∈ F_i(x_i^ε) such that
(3.8) d(x, x_i^ε) + ‖f_i^ε(x) − y_i^ε‖ < ε.
Write y^ε := Σ_{i∈I⁺(x)} ψ_i(x) P_{x,x_i} y_i^ε. Then y^ε ∈ conv{P_{x,x_i} P_{x_i,x_i^ε} V(x_i^ε) : i ∈ I⁺(x)} by the definition of F_i. Moreover, by (3.8), together with the definitions of V_ε(x) and y^ε, one sees that x_i^ε ∈ B(x, ε) for each i ∈ I⁺(x) and
‖V_ε(x) − y^ε‖ ≤ Σ_{i∈I⁺(x)} ψ_i(x)‖f_i^ε(x) − y_i^ε‖ < ε;
hence (3.4) is shown and the proof is complete.
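The normalization behind (3.5) and (3.6) can be illustrated in one dimension (an illustration only; the hat functions and the set A = [0, 1] ⊂ R below are hypothetical, not from the paper): given nonnegative bumps supported in the covering balls, dividing each by their sum produces a partition of unity.

```python
import numpy as np

centers = np.array([0.0, 0.5, 1.0])     # "ball" centers x_i on the line
r = 0.6                                  # common radius r_i

def psi_raw(i, x):
    """A hat function supported in B(x_i, r), before normalization."""
    return np.maximum(0.0, 1.0 - abs(x - centers[i]) / r)

def partition(i, x):
    """Normalized partition of unity: psi_i / sum_j psi_j (the sum is > 0 on A)."""
    total = sum(psi_raw(j, x) for j in range(len(centers)))
    return psi_raw(i, x) / total

for x in np.linspace(0.0, 1.0, 101):
    # (3.6): the functions sum to 1 on A = [0, 1] ...
    assert abs(sum(partition(i, x) for i in range(3)) - 1.0) < 1e-12
    # ... and (3.5): each one vanishes outside B(x_i, r).
    for i in range(3):
        if abs(x - centers[i]) >= r:
            assert partition(i, x) == 0.0
```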
Theorem 3.6. Let A ⊂ M be a compact and locally convex subset of M . Suppose
that A has a weak pole o ∈ intR A and that V ∈ V(A). Then the variational inequality
problem (3.1) admits at least one solution.
Proof. Fix an open cover {B(x_i, r_i) : i = 1, 2, . . . , m} of A and let n ∈ N. Then, applying Lemma 3.5 with 1/n in place of ε, we obtain a continuous vector field V_n : A → T M with the properties stated in Lemma 3.5 (with V_n in place of V_ε). Applying Theorem 3.1 to V_n, we have that S(V_n, A) ≠ ∅. Take x̄_n ∈ S(V_n, A). Then,
we can choose
(3.9) {x_i^n : i ∈ I(x̄_n)} ⊆ B(x̄_n, 1/n) and v_n ∈ conv{P_{x̄_n,x_i} P_{x_i,x_i^n} V(x_i^n) : i ∈ I(x̄_n)}
satisfying
(3.10) ‖V_n(x̄_n) − v_n‖ < 1/n.
Without loss of generality, we may assume that I(x̄_n) = {1, 2, . . . , m̄} for some natural number m̄ ≤ m and v_n := Σ_{i=1}^{m̄} t_i^n P_{x̄_n,x_i} P_{x_i,x_i^n} v_i^n with each v_i^n ∈ V(x_i^n), each t_i^n ≥ 0, and Σ_{i=1}^{m̄} t_i^n = 1.
Recalling that A is compact, V is upper semicontinuous, and each V (x) is compact, we see that, for each i = 1, . . . , m̄, the sequence {vin : n ∈ N} is bounded. Thus,
without loss of generality, we may assume that
(3.11) x̄_n → x̄, t_i^n → t_i, and v_i^n → v_i for each i = 1, . . . , m̄
(using subsequences if necessary). This, together with (3.9), implies that
(3.12) d(x_i^n, x̄) ≤ d(x_i^n, x̄_n) + d(x̄_n, x̄) → 0 for each i = 1, . . . , m̄.
It follows from Proposition 2.1 and (3.11) that P_{x̄_n,x_i} P_{x_i,x_i^n} v_i^n → v_i for each i. Furthermore, we have that v_i ∈ V(x̄) for each i = 1, . . . , m̄ since V is upper Kuratowski semicontinuous, and so
(3.13) v := lim_{n→∞} v_n ∈ V(x̄),
as V(x̄) is closed and convex.
Consider the variational inequality problem (3.2) with R = rx̄ /2 and o = x̄. Since
x̄n ∈ S(Vn , A) ⊆ S(Vn , AR ) and x̄n → x̄, it follows that, for n large enough,
⟨V_n(x̄_n), exp^{-1}_{x̄_n} y⟩ ≥ 0 for each y ∈ A_R.
Since V_n(x̄_n) → v by (3.10) and (3.13), and since exp^{-1}_{x̄_n} y → exp^{-1}_{x̄} y (noting that x̄_n → x̄), we conclude by taking limits that
⟨v, exp^{-1}_{x̄} y⟩ ≥ 0 for each y ∈ A_R.
This shows that x̄ ∈ S(V, AR ) as v ∈ V (x̄), and so x̄ ∈ S(V, A) by Proposition 3.2.
The proof is complete.
Note that if A is a convex subset of a Hadamard manifold M, then intR A is nonempty and each point of A is a weak pole of A. Hence the following corollary, which extends
the corresponding existence result in [26] to the setting of set-valued vector fields,
now is a direct consequence of Theorem 3.6.
Corollary 3.7. Suppose that M is a Hadamard manifold, and let A ⊂ M be
a compact convex set. Let V ∈ V(A). Then the variational inequality problem (3.1)
admits at least one solution.
To extend the existence result on solutions of the variational inequality problem
(3.1) to the case when A is not necessarily bounded, we introduce the coerciveness
condition for set-valued vector fields on Riemannian manifolds. Recall that γ̄ox denotes the unique minimal geodesic joining o to x when o is a weak pole of A and
x ∈ A.
Definition 3.8. Let A ⊂ M be a subset of M containing a weak pole o of A. Let V be a set-valued vector field on A. V is said to satisfy the coerciveness condition if
(3.14) sup_{v_0∈V(o), v_x∈V(x)} (⟨v_x, γ̄˙_{xo}(0)⟩ − ⟨v_0, γ̄˙_{xo}(1)⟩) / d(o, x) → −∞ as d(o, x) → +∞ for x ∈ A.
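In R^n with weak pole o, the minimal geodesic γ̄_{xo}(t) = x + t(o − x) has constant velocity γ̄˙_{xo}(0) = γ̄˙_{xo}(1) = o − x, so for the single-valued field V(x) = x with o = 0 the quotient in (3.14) equals ⟨x, −x⟩/‖x‖ = −‖x‖, which tends to −∞. The sketch below is an illustration only (the field V is hypothetical, not from the paper).

```python
import numpy as np

V = lambda x: x                 # a single-valued, strongly monotone field on R^3
o = np.zeros(3)                 # the weak pole

def coercive_quotient(x):
    """The quotient in (3.14) for single-valued V; here V(o) = 0."""
    gdot0 = o - x               # initial velocity of the geodesic from x to o
    gdot1 = o - x               # final velocity (a straight line: constant)
    num = np.dot(V(x), gdot0) - np.dot(V(o), gdot1)
    return num / np.linalg.norm(x - o)

# The quotient equals -||x||, hence it tends to -infinity as d(o, x) -> +infinity.
for r in (1.0, 10.0, 100.0):
    x = r * np.array([1.0, 0.0, 0.0])
    assert abs(coercive_quotient(x) + r) < 1e-9
```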
Definition 3.9. We say that a locally closed convex set A has the BCC (bounded
convex cover) property if there exists o ∈ A such that, for any R > 0, there exists a
locally convex compact subset of M containing A ∩ B(o, R).
Clearly, a locally closed convex set A has the BCC property if and only if, for any
bounded subset A0 ⊆ A, there exists a locally convex compact subset of M containing
A0 . Moreover, by [25, Proposition 4.5], if M is complete and if the sectional curvature
of M is nonnegative everywhere or nonpositive everywhere, then any locally convex
closed subset of M has the BCC property.
Theorem 3.10. Let A ⊂ M be a locally convex closed set with a weak pole
o ∈ intR A, and let V ∈ V(A) satisfy the coerciveness condition. Suppose that A has
the BCC property. Then the variational inequality problem (3.1) admits at least one solution.
Proof. Take H > |V(o)| := inf_{v∈V(o)} ‖v‖. Then, by the assumed coerciveness condition, there is R > 0 such that
(3.15) sup_{v_0∈V(o), v_x∈V(x)} (⟨v_x, γ̄˙_{xo}(0)⟩ − ⟨v_0, γ̄˙_{xo}(1)⟩) ≤ −H d(o, x) for each x ∈ A with d(o, x) ≥ R.
Hence, for each x ∈ A with d(o, x) ≥ R, one has that
(3.16) sup_{v_x∈V(x)} ⟨v_x, γ̄˙_{xo}(0)⟩ ≤ −H d(o, x) + |V(o)| · ‖γ̄˙_{xo}(1)‖ = (|V(o)| − H) d(o, x) < 0.
By assumption, there exists a locally convex compact subset KR of M containing
A ∩ B(o, R). Then ÂR := A ∩ KR ⊆ M is a locally convex compact set. Moreover,
note that, for any x ∈ ÂR , the unique minimal geodesic γ̄ox joining o to x lies in
B(o, R) and thus in A ∩ B(o, R) because o is a weak pole of A. This implies that γ̄ox
is in K_R and then in Â_R. Hence o ∈ intR Â_R is a weak pole of Â_R. Thus Theorem 3.6 applies (with Â_R in place of A) and yields S(V, Â_R) ≠ ∅.
Since A_R ⊆ Â_R and since, by (3.16), every solution of the problem on Â_R lies in B(o, R) and hence in A_R, it follows that S(V, Â_R) ⊆ S(V, A_R). Furthermore, by (3.16), we have that S(V, A_R) ⊆ B(o, R). It then follows from Proposition 3.2 that S(V, Â_R) ⊆ S(V, A_R) ⊆ S(V, A). This, together with the fact that S(V, Â_R) ≠ ∅, completes the
proof.
Thus the following corollary follows directly from Theorem 3.10.
Corollary 3.11. Let A ⊂ M be a locally convex closed set with a weak pole
o ∈ intR A, and let V ∈ V(A) satisfy the coerciveness condition. Suppose that M is
complete and that the sectional curvature of M is nonnegative everywhere or nonpositive everywhere. Then the variational inequality problem (3.1) admits at least one
solution.
Below we consider the uniqueness problem of the solution of the variational inequality problem (3.1). The notions of monotonicity on Riemannian manifolds in the
following definition are important tools for the study of the uniqueness problems and
have been extensively studied in [8, 9, 10, 15, 19, 27, 28, 29, 30] for univalued vector
fields and in [21, 22, 40] for set-valued vector fields.
Definition 3.12. Let A ⊂ M be a nonempty weakly convex set and let V be a
set-valued vector field on A. The vector field V is said to be
(1) monotone on A if for any x, y ∈ A and γ_{xy} ∈ Γ^A_{x,y} the following inequality holds:
(3.17) ⟨v_x, γ̇_{xy}(0)⟩ − ⟨v_y, γ̇_{xy}(1)⟩ ≤ 0 for any v_x ∈ V(x), v_y ∈ V(y);
(2) strictly monotone on A if for any distinct x, y ∈ A and γ_{xy} ∈ Γ^A_{x,y} the following inequality holds:
(3.18) ⟨v_x, γ̇_{xy}(0)⟩ − ⟨v_y, γ̇_{xy}(1)⟩ < 0 for any v_x ∈ V(x), v_y ∈ V(y);
(3) strongly monotone on A if there exists ρ > 0 such that for any x, y ∈ A and γ_{xy} ∈ Γ^A_{x,y} the following inequality holds:
(3.19) ⟨v_x, γ̇_{xy}(0)⟩ − ⟨v_y, γ̇_{xy}(1)⟩ ≤ −ρ l²(γ_{xy}) for any v_x ∈ V(x), v_y ∈ V(y).
Now we have the following uniqueness result on the solution of problem (3.1).
Theorem 3.13. Let A0 ⊆ A ⊂ M . Suppose that A0 is weakly convex and V
is strictly monotone on A0 satisfying S(V, A) ⊆ A0 . Then the variational inequality
problem (3.1) admits at most one solution. In particular, if A is weakly convex and
V is strictly monotone on A, then the variational inequality problem (3.1) admits at
most one solution.
Proof. Suppose on the contrary that problem (3.1) admits two distinct solutions
x̄ and ȳ. Then x̄, ȳ ∈ A_0. Since A_0 is weakly convex, there exists a minimal geodesic γ_{x̄ȳ} ∈ Γ^{A_0}_{x̄,ȳ}. Then there exist v̄ ∈ V(x̄) and ū ∈ V(ȳ) such that
(3.20) ⟨v̄, γ̇_{x̄ȳ}(0)⟩ ≥ 0 and ⟨ū, −γ̇_{x̄ȳ}(1)⟩ ≥ 0.
It follows that
⟨v̄, γ̇_{x̄ȳ}(0)⟩ − ⟨ū, γ̇_{x̄ȳ}(1)⟩ ≥ 0.
This contradicts the strict monotonicity of V on A0 , and the proof is complete.
It is routine to verify that if V is strongly monotone, then it satisfies the coerciveness condition. Therefore, the following corollary is straightforward.
Corollary 3.14. Let A ⊂ M be a closed weakly convex set with a weak pole
o ∈ int A, and let V ∈ V(A) be a strongly monotone vector field on A. Suppose that
A has the BCC property (e.g., M is complete and the sectional curvature of M is
nonnegative everywhere or nonpositive everywhere). Then the variational inequality
problem (3.1) admits a unique solution.
4. Convexity of solution sets. As assumed in the previous section, let A ⊆ M
be a nonempty closed subset, and let V be a set-valued vector field on A. This section
is devoted to the study of the convexity problem of the solution set of the variational
inequality problem. For this purpose, we introduce the notions of r-convexity and
prove some related lemmas.
Definition 4.1. Let A be a nonempty subset of M and let r ∈ (0, +∞). The set
A is said to be
(a) weakly r-convex if, for any p, q ∈ A with d(p, q) < r, there is a minimal geodesic of M joining p to q lying in A;
(b) r-convex if, for any p, q ∈ A with d(p, q) < r, there is just one minimal
geodesic of M joining p to q and it is in A.
Let κ ≥ 0, and assume that M has sectional curvature bounded above by κ. Riemannian manifolds with sectional curvature bounded above by κ possess some useful properties, which are listed in the following proposition. Note
by [6, Theorem 1A.6] that any Riemannian manifold of the curvature bounded above
by κ is a CAT(κ) space (the reader is referred to [6] for details). Thus Proposition 4.2
below is a direct consequence of [6, Propositions 1.4 and 1.7]. Recall that the model space M^n_κ is the Euclidean space E^n if κ = 0 and, if κ > 0, the Riemannian manifold obtained from the n-dimensional sphere (S^n, d) by multiplying the distance function by the constant 1/√κ. Then M^n_κ is of constant curvature κ, and the following “law of cosines” in M^n_κ holds:
(4.1) d(x̄, ȳ)² = d(x̄, z̄)² + d(ȳ, z̄)² − 2d(x̄, z̄)d(ȳ, z̄) cos ᾱ if κ = 0, and
cos(√κ d(x̄, ȳ)) = cos(√κ d(x̄, z̄)) cos(√κ d(ȳ, z̄)) + sin(√κ d(x̄, z̄)) sin(√κ d(ȳ, z̄)) cos ᾱ if κ > 0,
where x̄, ȳ, z̄ ∈ M^n_κ and ᾱ := ∠x̄z̄ȳ is the angle at z̄ between the two geodesics joining z̄ to x̄ and to ȳ; see, for example, [6, p. 24]. By definition, a geodesic triangle Δ(x, y, z) in M consists
of three points {x, y, z} in M (the vertices of Δ(x, y, z)) and three geodesic segments
joining each pair of vertices (the edges of Δ(x, y, z)). Recall that a triangle Δ(x̄, ȳ, z̄) in the model space M²_κ is said to be a comparison triangle for Δ(x, y, z) ⊂ M if
d(x, y) = d(x̄, ȳ), d(x, z) = d(x̄, z̄), and d(z, y) = d(z̄, ȳ).
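The κ > 0 case of (4.1) can be verified directly on the unit sphere S² = M²₁ (so κ = 1 and √κ = 1): with distances given by arccos of dot products and the angle ᾱ at z̄ computed between the initial directions of the two geodesics, the identity holds to machine precision. The sketch below is a numerical check only, not from the paper.

```python
import numpy as np

def dist(p, q):
    """Great-circle distance on the unit sphere."""
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def angle_at(z, p, q):
    """Angle at z between the geodesics from z to p and from z to q."""
    up = p - np.dot(p, z) * z            # initial directions: tangential parts
    uq = q - np.dot(q, z) * z
    up, uq = up / np.linalg.norm(up), uq / np.linalg.norm(uq)
    return np.arccos(np.clip(np.dot(up, uq), -1.0, 1.0))

rng = np.random.default_rng(2)
for _ in range(100):
    # three random points on S^2 (degenerate configurations have probability zero)
    x, y, z = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 3)))
    a = angle_at(z, x, y)
    # Law of cosines (4.1) with kappa = 1:
    lhs = np.cos(dist(x, y))
    rhs = (np.cos(dist(x, z)) * np.cos(dist(y, z))
           + np.sin(dist(x, z)) * np.sin(dist(y, z)) * np.cos(a))
    assert abs(lhs - rhs) < 1e-7
```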
Define D_κ := π/√κ if κ > 0 and D_κ := +∞ if κ = 0.
Proposition 4.2. Suppose that M is of the curvature bounded above by κ. Then
the following assertions hold:
(i) For any two points x, y ∈ M with d(x, y) < Dκ , there exists one and only one
minimizing geodesic joining x to y.
(ii) For any z ∈ M, the distance function x ↦ d(z, x) is convex on B(z, Dκ/2); that is, for any two points x, y ∈ B(z, Dκ/2) and any γxy ∈ Γx,y, the function t ↦ d(z, γxy(t)) is convex on [0, 1].
(iii) Any ball in M with radius less than Dκ/2 is strongly convex; in particular, we have that rx ≥ Dκ/2 for each x ∈ M, where rx is the convexity radius defined by (2.2).
(iv) For any geodesic triangle Δ(x, y, z) in M with d(x, y) + d(y, z) + d(z, x) <
2Dκ , there exists a comparison triangle Δ̄(x̄, ȳ, z̄) in Mκ2 for Δ(x, y, z) such that, for
γyz ∈ Γy,z and γ̄ȳz̄ ∈ Γȳ,z̄,
d(γyz(t), x) ≤ d(γ̄ȳz̄(t), x̄)   for each t ∈ [0, 1],
and
∠xzy ≤ ∠x̄z̄ȳ,
where ∠xzy and ∠x̄z̄ ȳ denote the inner angles at z and z̄ of triangles Δ(x, y, z) and
Δ̄(x̄, ȳ, z̄), respectively.
Clearly, by assertion (i) of Proposition 4.2, if M is of the curvature bounded above
by κ, then a subset of M is weakly Dκ -convex if and only if it is Dκ -convex.
Let z ∈ A and λ > 0. Define the vector field Vλ,z : A → 2^{TM} by
Vλ,z(x) := λV(x) − Ez(x)   for each x ∈ A,
where Ez is the vector field defined by
(4.2)   Ez(x) := {u ∈ exp_x^{-1} z : ‖u‖ = d(x, z) and exp_x tu ∈ A for all t ∈ [0, 1]}   for each x ∈ A.
Note that Ez(x) ≠ ∅ for any z, x ∈ A with d(x, z) < r if A is weakly r-convex. Moreover, if A is r-convex, then we have that
(4.3)   Ez(x) = exp_x^{-1} z   for any z, x ∈ A with d(x, z) < r.
Consider the variational inequality problem (3.1) with Vλ,z in place of V, and recall that S(Vλ,z, A) is its solution set. We then define the set-valued map Jλ : M → 2^A by
Jλ(z) := S(Vλ,z, A)   for each z ∈ A;
that is, x ∈ Jλ(z) if and only if there exist v ∈ V(x) and u ∈ Ez(x) such that
(4.4)   ⟨λv − u, γ̇xy(0)⟩ ≥ 0   for each y ∈ A and each γxy ∈ Γ^A_{x,y}.
Clearly, the following equivalence holds for any z ∈ A:
(4.5)   z ∈ S(V, A) ⟺ z ∈ Jλ(z).
Throughout this section, we assume that V is monotone on A. For a subset Z of a normed space, it is convenient to use the notation |Z| for the distance from the origin to Z:
|Z| := inf_{u∈Z} ‖u‖   if Z ≠ ∅,   and   |Z| := 0   if Z = ∅.
Lemma 4.3. Let r ∈ (0, +∞), and let A be a weakly r-convex subset of M. Let λ > 0 and x ∈ A be such that λ|V(x)| < r. Then the following assertion holds:
(4.6)   B(x, r) ∩ Jλ(x) ⊆ B(x, λ|V(x)|).
In particular, if A is weakly convex, then we have that
(4.7)   Jλ(x) ⊆ B(x, λ|V(x)|).
Proof. Let z ∈ B(x, r) ∩ Jλ(x). Since A is weakly r-convex, it follows by definition that there exist v ∈ V(z) and u ∈ E_x(z) such that (4.4) holds. Let γ_{zx} be the geodesic defined by γ_{zx}(t) := exp_z tu for each t ∈ [0, 1]. Then, by (4.2), d(x, z) = ‖u‖, γ̇_{zx}(0) = u, and γ_{zx} ∈ Γ^A_{z,x}. This, together with (4.4), implies that
(4.8)   d²(x, z) = ⟨u, u⟩ = ⟨u, γ̇_{zx}(0)⟩ ≤ ⟨λv, γ̇_{zx}(0)⟩.
Since V is monotone, it follows that
d²(x, z) ≤ ⟨λv, γ̇_{zx}(0)⟩ ≤ ⟨λv_x, γ̇_{zx}(1)⟩ ≤ λ‖v_x‖ d(x, z)   for each v_x ∈ V(x).
Hence d(x, z) ≤ λ|V(x)|, and the proof is complete.
Lemma 4.4. Suppose that M is of sectional curvature bounded above by κ and that A is Dκ-convex. Let y0 ∈ A and λ > 0 be such that λ|V(y0)| < Dκ/2. Then Jλ(y0) is nonempty.
Proof. Let R > 0 be such that
λ|V(y0)| < R < r := Dκ/2.
Then AR := B(y0, R) ∩ A is compact and locally convex. Take o ∈ AR such that d(y0, o) < (r − R)/2. Then, for each x ∈ AR,
d(x, o) ≤ d(x, y0) + d(y0, o) ≤ R + (r − R)/2 < r.
By Proposition 4.2(i) and (ii), the minimal geodesic joining o to x is unique and lies in AR. This means that o is a weak pole of AR. Thus Theorem 3.6 is applicable, and we conclude that S(Vλ,y0, AR) ≠ ∅. By Lemma 4.3 (applied to AR in place of A), we have that
d(yR, y0) ≤ λ|V(y0)| < R   for any yR ∈ S(Vλ,y0, AR).
This, together with Proposition 3.2 (applied to o := y0), implies that S(Vλ,y0, AR) ⊆ S(Vλ,y0, A) = Jλ(y0). This shows that Jλ(y0) ≠ ∅, and the proof is complete.
Lemma 4.5. Suppose that M is of sectional curvature bounded above by κ and that A is Dκ-convex. Let y0 ∈ A, x0 ∈ S(V, A), and λ > 0 be such that
(4.9)   λ|V(y0)| + d(y0, x0) < Dκ/2.
Then the following estimates hold for each z ∈ Jλ(y0):
(4.10)   d(y0, x0)² ≥ d(z, x0)² + d(y0, z)²   if κ = 0;
         cos(√κ d(y0, x0)) ≤ cos(√κ d(z, x0)) cos(√κ d(y0, z))   if κ > 0,
and
(4.11)   d(z, x0) ≤ d(y0, x0).
Proof. Without loss of generality, we may assume that κ = 1, and thus Dκ = π. Let z ∈ Jλ(y0). By assumption (4.9), d(y0, x0) < π/2 and λ|V(y0)| < π/2. Since A is π-convex, it follows from Lemma 4.3 that
(4.12)   d(y0, z) ≤ λ|V(y0)| < π/2.
Thus one sees that if (4.10) is shown, then, since d(y0, z) < π/2 by (4.12), cos d(y0, x0) ≤ cos d(z, x0), and hence (4.11) holds. Thus we need only prove (4.10). To this end, note by (4.12) that
d(z, x0) ≤ d(z, y0) + d(x0, y0) ≤ λ|V(y0)| + d(x0, y0).
Therefore,
(4.13)   d(x0, y0) + d(y0, z) + d(z, x0) ≤ 2(λ|V(y0)| + d(x0, y0)) < π.
Consider the geodesic triangle Δ(x0, y0, z) with vertices x0, y0, z. By Proposition 4.2(i), the geodesic joining each pair of vertices of Δ(x0, y0, z) is unique. Let α := ∠x0zy0 denote the angle of the geodesic triangle at the vertex z; that is, α satisfies
(4.14)   ⟨exp_z^{-1} x0, exp_z^{-1} y0⟩ = ‖exp_z^{-1} x0‖ ‖exp_z^{-1} y0‖ cos α.
By Proposition 4.2(iv), there exists a comparison triangle Δ̄(x̄0, ȳ0, z̄) in the model space S² such that
(4.15)   d(x̄0, ȳ0) = d(x0, y0),   d(x̄0, z̄) = d(x0, z),   and   d(ȳ0, z̄) = d(y0, z).
Moreover, we have that α ≤ ᾱ := ∠x̄0z̄ȳ0, the spherical angle at z̄ of the comparison triangle. By (4.1), we have that
(4.16)   cos d(x̄0, ȳ0) = cos d(x̄0, z̄) cos d(ȳ0, z̄) + sin d(x̄0, z̄) sin d(ȳ0, z̄) cos ᾱ.
In view of (4.15), to complete the proof it suffices to show that cos ᾱ ≤ 0. Note that the unique geodesic γ_{zx0} joining z to x0 belongs to Γ^A_{z,x0} and that, by (4.3), E_{y0}(z) = exp_z^{-1} y0 is a singleton. Note also that x0 ∈ S(V, A) ⊆ A and z ∈ Jλ(y0) ⊆ A. It follows from the definitions that there exist v_{x0} ∈ V(x0) and v_z ∈ V(z) such that
⟨v_{x0}, exp_{x0}^{-1} z⟩ ≥ 0   and   ⟨λv_z − exp_z^{-1} y0, exp_z^{-1} x0⟩ ≥ 0.
This, together with the monotonicity of V, implies that
⟨exp_z^{-1} y0, exp_z^{-1} x0⟩ ≤ ⟨λv_z, exp_z^{-1} x0⟩ ≤ −λ⟨v_{x0}, exp_{x0}^{-1} z⟩ ≤ 0.
Combining this with (4.14) gives cos α ≤ 0. Since ᾱ ≥ α, we have cos ᾱ ≤ cos α ≤ 0, and the proof is complete.
The main theorem of this section is as follows.
Theorem 4.6. Suppose that M is of sectional curvature bounded above by κ and that A is closed and Dκ-convex. Then the solution set S(V, A) is Dκ-convex.
Proof. Let x1, x2 ∈ S(V, A) be such that d(x1, x2) < Dκ. Let c ∈ Γ_{x1,x2} be the minimal geodesic, and write y0 := c(1/2). To complete the proof, it suffices to show that y0 ∈ S(V, A). Note that d(xi, y0) < Dκ/2 for each i = 1, 2. We can take λ > 0 such that
λ|V(y0)| + d(xi, y0) < Dκ/2   for each i = 1, 2.
Since A is Dκ-convex, Lemmas 4.4 and 4.5 are applicable, and we obtain Jλ(y0) ≠ ∅ and
(4.17)   d(z, xi) ≤ d(y0, xi)   for each z ∈ Jλ(y0) and i = 1, 2.
Take z0 ∈ Jλ(y0), and let γi be the minimal geodesic joining z0 to xi for i = 1, 2. Define
γ(t) := γ1(1 − 2t) for t ∈ [0, 1/2]   and   γ(t) := γ2(2t − 1) for t ∈ [1/2, 1].
Then, applying (4.17), we estimate the length of γ by
l(γ) = l(γ1) + l(γ2) = d(z0, x1) + d(z0, x2) ≤ d(x1, y0) + d(y0, x2) = d(x1, x2) ≤ l(γ).
This implies that γ is a minimal geodesic joining x1 and x2, and hence γ = c thanks to Proposition 4.2(i). It follows that z0 ∈ c([0, 1]) and thus y0 = z0 ∈ Jλ(y0) (as d(z0, x1) = d(z0, x2)). Consequently, y0 ∈ S(V, A) by (4.5), and the proof is complete.
Note that a Hadamard manifold is of sectional curvature bounded above by κ = 0 and Dκ = +∞. Therefore, the above theorem immediately yields the following corollary, which was claimed in [26] for any continuous univalued vector field V, although the proof provided there is not correct.
Corollary 4.7. Suppose that M is a Hadamard manifold and that A is convex.
Then the solution set S(V, A) is convex.
For a general Riemannian manifold, we have the following result.
Theorem 4.8. Suppose that M is a complete Riemannian manifold and A is
locally convex. Then the solution set S(V, A) is locally convex.
Proof. Without loss of generality, we assume that S(V, A) is nonempty. Let x̄ ∈ S(V, A). Then there exists 0 < r̄ < r_{x̄} such that B(x̄, r̄) ∩ A is strongly convex. Since B(x̄, r̄) is compact, the sectional curvature on B(x̄, r̄) is bounded, and so there exists some κ > 0 such that the sectional curvature on B(x̄, r̄) is bounded above by κ. This, together with Proposition 2.4, implies that N := B(x̄, r̄) is a totally geodesic submanifold of M with curvature bounded above by κ. Moreover, by Proposition 2.5, T_xN = T_xM for each x ∈ N. This means that the restriction V|_N of V to N is a vector field on N.
Let 0 < R < r̄ and set A_R := B(x̄, R) ∩ A. Then A_R ⊆ N is compact in N. Consider the variational inequality problem (3.1) with N and V|_N in place of M and V, and let S_N(V|_N, A_R) denote the corresponding solution set. Then
S_N(V|_N, A_R) ∩ B(x̄, R) = S(V, A_R) ∩ B(x̄, R) ⊇ S(V, A) ∩ B(x̄, R),
where the equality holds because N is a totally geodesic submanifold of M. Furthermore, by Proposition 3.2, we have that S(V, A_R) ∩ B(x̄, R) ⊆ S(V, A) ∩ B(x̄, R); hence S_N(V|_N, A_R) ∩ B(x̄, R) = S(V, A) ∩ B(x̄, R). Note that A_R is strongly convex in N and thus is Dκ-convex. Theorem 4.6 can therefore be applied (to A_R and N in place of A and M) to conclude that S_N(V|_N, A_R) is Dκ-convex in N. This, together with Proposition 4.2, yields that S_N(V|_N, A_R) ∩ B(x̄, r) is strongly convex in N for any 0 < r ≤ Dκ/2. Taking 0 < r ≤ min{R, Dκ/2} and noting that N is a totally geodesic submanifold of M, we conclude that S(V, A) ∩ B(x̄, r) = S_N(V|_N, A_R) ∩ B(x̄, r) is strongly convex in M. Since x̄ ∈ S(V, A) was arbitrary, S(V, A) is locally convex.
5. Proximal point methods. As in the previous section, we assume throughout this section that A ⊆ M is a nonempty closed subset and that V ∈ V(A) is monotone on A. This section is devoted to the study of the convergence of the proximal point algorithm for the variational inequality problem (3.1). Let x0 ∈ A and {λn} ⊂ (0, +∞). The proximal point algorithm (with initial point x0) considered in this section is defined as follows: letting n = 0, 1, 2, . . . and having xn, choose xn+1 such that
(5.1)   xn+1 ∈ J_{λn}(xn)   for each n ∈ N.
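To make the scheme concrete, here is a minimal numeric sketch (our addition, not from the paper) of iteration (5.1) in the simplest Hadamard setting M = R^d with A = M, where exp_x^{-1} z = z − x and the step reduces to the classical resolvent equation 0 = λn V(xn+1) − (xn − xn+1). The field V(x) = Qx − b and the data below are purely illustrative:

```python
import numpy as np

# Sketch of iteration (5.1) in the Euclidean case M = R^d, A = M (a Hadamard
# manifold, kappa = 0), where exp_x^{-1} z = z - x and the step becomes the
# resolvent equation 0 = lam * V(x_next) - (x - x_next).  For the monotone
# affine field V(x) = Q x - b (Q symmetric positive semidefinite) this is the
# linear system (I + lam * Q) x_next = x + lam * b.  (Q, b are illustrative.)
def proximal_point(Q, b, x0, lambdas):
    d = len(x0)
    xs = [np.asarray(x0, dtype=float)]
    for lam in lambdas:
        xs.append(np.linalg.solve(np.eye(d) + lam * Q, xs[-1] + lam * b))
    return xs

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
# The constant choice lam_n = 1 satisfies the divergence condition (5.16).
xs = proximal_point(Q, b, x0=[10.0, -5.0], lambdas=[1.0] * 60)
print(xs[-1])  # approaches the singularity of V, i.e. the solution of Qx = b
```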
Let x ∈ M and r > 0. Recall from [12, p. 72] that B(x, r) is a totally normal ball around x if there is η > 0 such that, for each y ∈ B(x, r), we have exp_y(B(0, η)) ⊇ B(x, r) and exp_y(·) is a diffeomorphism on B(0, η) ⊂ T_yM. The supremum of the radii r of totally normal balls around x is denoted by r(x); that is,
(5.2)   r(x) := sup{r > 0 : B(x, r) is a totally normal ball around x}.
We call r(x) the totally normal radius of x. By [12, Theorem 3.7], the totally normal radius r(x) is well defined and positive. The next proposition is taken from [40, Lemma 3.6] and is useful in what follows.
Proposition 5.1. Let z ∈ M, and let x, y ∈ B(z, r(z)) with x ≠ y. Then, for any u ∈ T_xM and v ∈ T_yM, one has that
(5.3)   (d/ds) d(exp_x su, exp_y sv)|_{s=0} = (1/d(x, y)) (⟨−u, exp_x^{-1} y⟩ + ⟨v, −exp_y^{-1} x⟩).
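In the Euclidean model M = R² (where exp_x u = x + u and exp_x^{-1} y = y − x), formula (5.3) can be checked against a finite difference; the following numeric sketch is our addition, with arbitrarily chosen points and tangent vectors:

```python
import numpy as np

# Check (5.3) by finite differences in the Euclidean model M = R^2, where
# exp_x u = x + u and exp_x^{-1} y = y - x.  Points and tangents are arbitrary.
def dist(p, q):
    return np.linalg.norm(p - q)

x, y = np.array([1.0, 0.0]), np.array([0.0, 2.0])
u, v = np.array([0.3, -0.2]), np.array([-0.1, 0.5])

# right-hand side of (5.3)
rhs = (np.dot(-u, y - x) + np.dot(v, -(x - y))) / dist(x, y)

# left-hand side: central difference of s -> d(exp_x su, exp_y sv) at s = 0
h = 1e-6
lhs = (dist(x + h * u, y + h * v) - dist(x - h * u, y - h * v)) / (2 * h)
print(abs(lhs - rhs))  # agreement up to discretization error
```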
We still need the following two lemmas, the first of which was proved in [34].
Recall that κ ≥ 0.
Lemma 5.2. Suppose that M is of curvature bounded above by κ. Let x, y, z ∈ M be such that √κ max{d(x, y), d(x, z), d(z, y)} ≤ r < π/2. Then the following assertion holds:
(5.4)   d(exp_x s exp_x^{-1} z, exp_y s exp_y^{-1} z) ≤ (sin((1 − s)r)/sin r) d(x, y)   for each s ∈ [0, 1].
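The estimate (5.4) can be probed numerically on the unit sphere S² ⊂ R³, which has constant curvature κ = 1; the following sketch (our addition) samples s ∈ [0, 1] for three nearby randomly chosen points:

```python
import numpy as np

# Probe estimate (5.4) on the unit sphere S^2 (constant curvature kappa = 1),
# using the explicit exponential map and its inverse; points are random but
# kept close to the north pole so that max pairwise distance <= r < pi/2.
def sph_dist(p, q):
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))

def sph_log(p, q):                     # exp_p^{-1} q for q != +-p
    w = q - np.dot(p, q) * p
    return sph_dist(p, q) * w / np.linalg.norm(w)

def sph_exp(p, v):
    t = np.linalg.norm(v)
    return p if t == 0 else np.cos(t) * p + np.sin(t) * v / t

rng = np.random.default_rng(0)
base = np.array([0.0, 0.0, 1.0])
x, y, z = (sph_exp(base, 0.2 * rng.standard_normal(3) * [1, 1, 0])
           for _ in range(3))
r = max(sph_dist(x, y), sph_dist(x, z), sph_dist(z, y))
assert r < np.pi / 2

for s in np.linspace(0.0, 1.0, 11):
    lhs = sph_dist(sph_exp(x, s * sph_log(x, z)), sph_exp(y, s * sph_log(y, z)))
    rhs = np.sin((1 - s) * r) / np.sin(r) * sph_dist(x, y)
    assert lhs <= rhs + 1e-6
print("inequality (5.4) holds at the sampled values of s")
```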
Lemma 5.3. Let z ∈ A, and let Ez be the set-valued vector field defined by (4.2). Suppose that M is of sectional curvature bounded above by κ and that A is Dκ-convex. Then −Ez is strongly monotone on B(z, r̄) for each 0 < r̄ < Dκ/4. Consequently, −Ez is strictly monotone on B(z, Dκ/4).
Proof. By Proposition 4.2, for any x, y ∈ B(z, Dκ/2), the geodesic joining x to y is unique and exp_x^{-1} y is a singleton. In particular, the vector field Ez, given by Ez(x) = exp_x^{-1} z, is univalued on B(z, Dκ/2). Let 0 < r̄ < Dκ/4, and let x, y ∈ B(z, r̄). It suffices to show that, if the geodesic joining x to y is contained in A, then the following inequality holds:
(5.5)   ⟨−exp_x^{-1} z, exp_x^{-1} y⟩ + ⟨−exp_y^{-1} z, exp_y^{-1} x⟩ ≤ −2√κ r̄ cot(2√κ r̄) d²(x, y).
To show (5.5), note by definition that r(z) ≥ Dκ/4, and so B(z, r̄) ⊆ B(z, r(z)). Thus Proposition 5.1 is applicable, and we get that
(5.6)   (d/ds) d(exp_x s exp_x^{-1} z, exp_y s exp_y^{-1} z)|_{s=0} = (1/d(x, y)) (⟨−exp_x^{-1} z, exp_x^{-1} y⟩ + ⟨exp_y^{-1} z, −exp_y^{-1} x⟩).
On the other hand, let r := 2√κ r̄. Then
√κ max{d(x, y), d(y, z), d(z, x)} ≤ r < π/2.
Thus we may apply Lemma 5.2 to conclude that
(d/ds) d(exp_x s exp_x^{-1} z, exp_y s exp_y^{-1} z)|_{s=0} ≤ lim_{s→0+} (1/s) (sin((1 − s)r)/sin r − 1) d(x, y) = −(r cot r) d(x, y).
This, together with (5.6), implies that
(1/d(x, y)) (⟨−exp_x^{-1} z, exp_x^{-1} y⟩ + ⟨exp_y^{-1} z, −exp_y^{-1} x⟩) ≤ −(r cot r) d(x, y).
Hence (5.5) holds, and the proof is complete.
The following corollary is useful.
Corollary 5.4. Suppose that M is of sectional curvature bounded above by κ and that A is weakly convex. Let y0 ∈ A and λ > 0 be such that λ|V(y0)| < Dκ/4. Then Jλ(y0) is a singleton.
Proof. Let A0 := A ∩ B(y0, Dκ/4). By Lemmas 4.3 and 4.4, we have that ∅ ≠ Jλ(y0) ⊆ A0. Furthermore, Vλ,y0 is strictly monotone on A0 because −E_{y0} is strictly monotone on B(y0, Dκ/4) by Lemma 5.3 and V is monotone on A by assumption. Hence the assumptions of Theorem 3.13 are satisfied, and the conclusion follows.
Recall that the proximal point algorithm (5.1) is well-defined if, for each n ∈ N, J_{λn}(xn) is a singleton. For the following proposition, which provides a sufficient condition ensuring the well-definedness of the algorithm, we define inductively the sequence of (set-valued) mappings {J^n} by
J^{n+1} := J_{λn} ∘ J^n   for each n = 0, 1, 2, . . . ,
where J^0 := I, the identity mapping.
Proposition 5.5. Suppose that M is of sectional curvature bounded above by κ and that A is weakly convex. Let x0 ∈ A be such that
(5.7)   λn |V(J^n(x0))| ≤ Dκ/4   for each n = 0, 1, 2, . . . .
Then algorithm (5.1) is well-defined.
Proof. By assumption (5.7),
λ0 |V(x0)| = λ0 |V(J^0(x0))| ≤ Dκ/4.
Thus, applying Corollary 5.4 (to λ0 and x0 in place of λ and y0), we get that J_{λ0}(x0) is a singleton, and so x1 is well-defined. Similarly, we have from (5.7) again that
λ1 |V(x1)| = λ1 |V(J^1(x0))| ≤ Dκ/4,
and so x2 is also well-defined by Corollary 5.4. Thus the well-definedness of algorithm (5.1) can be established by mathematical induction, and the proof is complete.
Remark 5.1. Let {λn} ⊂ (0, +∞). We have the following assertions:
(a) If M is a Hadamard manifold, then algorithm (5.1) is well-defined because Dκ = +∞ and condition (5.7) holds automatically.
(b) If algorithm (5.1) generates a sequence {xn} satisfying
(5.8)   λn |V(xn)| ≤ Dκ/4   for each n = 0, 1, 2, . . . ,
then condition (5.7) holds and algorithm (5.1) is well-defined.
Recall that a sequence {xn} in a complete metric space X is Fejér convergent to F ⊆ X if
d(xn+1, y) ≤ d(xn, y)   for each y ∈ F and each n = 0, 1, 2, . . . .
The following theorem provides a local convergence result for the proximal point algorithm.
Theorem 5.6. Suppose that M is of sectional curvature bounded above by κ and that A is weakly convex. Let V ∈ V(A) be a monotone vector field satisfying S(V, A) ≠ ∅. Let x0 ∈ A be such that d(x0, S(V, A)) < Dκ/8 and the algorithm (5.1) is well-defined. Suppose that {λn} satisfies both condition (5.8) and
(5.9)   lim inf_{n→∞} d(xn+1, xn)/λn = 0.
Then {xn } converges to a solution of the variational inequality problem (3.1).
Proof. Write F := S(V, A) ∩ B(x0, Dκ/4). Then F ≠ ∅. Let x̂0 ∈ S(V, A) be such that d(x0, x̂0) < Dκ/8. We first prove that the following two assertions hold for any x ∈ F and any n = 0, 1, . . . :
(5.10)   cos(√κ d(xn, x)) ≤ cos(√κ d(xn+1, x)) cos(√κ d(xn+1, xn)),
(5.11)   d(xn+1, x0) ≤ 2 d(x̂0, x0) < Dκ/4   and   d(xn+1, x) ≤ d(xn, x) ≤ d(x0, x).
For this purpose, let x ∈ F. Then
λ0 |V(x0)| + d(x0, x) < Dκ/2.
Hence Lemma 4.5 is applicable (with x0, x in place of y0, x0). Thus, by (4.10) and (4.11), we obtain that
cos(√κ d(x0, x)) ≤ cos(√κ d(x1, x)) cos(√κ d(x1, x0))   and   d(x1, x) ≤ d(x0, x).
That is, assertion (5.10) and the second assertion in (5.11) are shown for n = 0. In particular, we have that d(x1, x̂0) ≤ d(x̂0, x0) because x̂0 ∈ F by the choice of x̂0. It follows that
d(x1, x0) ≤ d(x1, x̂0) + d(x̂0, x0) ≤ 2 d(x̂0, x0) < Dκ/4,
and the first assertion in (5.11) is also proved. Thus assertions (5.10) and (5.11) hold for any x ∈ F and n = 0, and one can use mathematical induction to show that they hold for any x ∈ F and any n = 0, 1, . . . .
To verify the convergence of the sequence {xn}, we note that {xn} is Fejér convergent to F by (5.11); hence {xn} is bounded. This, together with assumption (5.9), implies that there exists a subsequence {n_k} such that x_{n_k} → x̄ for some x̄ ∈ A and
(5.12)   ‖exp_{x_{n_k+1}}^{-1} x_{n_k}‖ / λ_{n_k} = d(x_{n_k+1}, x_{n_k}) / λ_{n_k} → 0.
Then, by the first assertion in (5.11), we have
d(x̄, x0) = lim_k d(x_{n_k}, x0) ≤ 2 d(x̂0, x0) < Dκ/4;
hence x̄ ∈ B(x0, Dκ/4). Moreover, we conclude by (5.10) and (5.11) that
lim_{k→∞} cos(√κ d(x_{n_k+1}, x_{n_k})) = 1,
and so lim_{k→∞} d(x_{n_k+1}, x_{n_k}) = 0. This means that x_{n_k+1} → x̄. Let R := Dκ/4 and o := x̄. Without loss of generality, we may assume that {x_{n_k}} and {x_{n_k+1}} are contained in B(x̄, R). By definition, x_{n_k+1} ∈ J_{λ_{n_k}}(x_{n_k}), and so there exists v_{n_k+1} ∈ V(x_{n_k+1}) such that
(5.13)   ⟨λ_{n_k} v_{n_k+1} − exp_{x_{n_k+1}}^{-1} x_{n_k}, exp_{x_{n_k+1}}^{-1} z⟩ ≥ 0   for each z ∈ A_R := A ∩ B(o, R).
Since x_{n_k+1} → x̄ and each V(x_{n_k+1}) is compact, it follows that {v_{n_k+1}} is bounded. Thus we may assume that v_{n_k+1} → v̄ for some v̄ ∈ T_{x̄}M (passing to subsequences if necessary). Noting that V is upper Kuratowski semicontinuous at x̄, we have that v̄ ∈ V(x̄). Letting k → ∞, we get from (5.12) and (5.13) that
⟨v̄, exp_{x̄}^{-1} z⟩ ≥ 0   for each z ∈ A_R.
This shows that x̄ ∈ S(V, A_R). Hence x̄ ∈ S(V, A) by Proposition 3.2, and so x̄ ∈ F. Applying the second assertion in (5.11), we obtain that the sequence {d(xn, x̄)} is monotone, which, together with lim_k d(x_{n_k}, x̄) = 0, shows that {xn} converges to x̄ and completes the proof.
Theorem 5.7. Suppose that M is of sectional curvature bounded above by κ and that A is weakly convex. Let V ∈ V(A) be a monotone vector field satisfying S(V, A) ≠ ∅. Let x0 ∈ A be such that d(x0, S(V, A)) < Dκ/8 and algorithm (5.1) is well-defined. Suppose that {λn} ⊂ (0, +∞) satisfies
(5.14)   Σ_{n=0}^∞ λn² = ∞   and   λn |V(xn)| ≤ Dκ/4   for each n = 0, 1, . . . .
Then the sequence {xn} converges to a solution of the variational inequality problem (3.1).
Proof. By Theorem 5.6, it suffices to verify that (5.9) holds. To this end, let n ∈ N, and assume that κ = 1 for simplicity. Let F := S(V, A) ∩ B(x0, Dκ/4), and let x ∈ F. Applying Lemma 4.5, we have that (5.10) and (5.11) hold. Clearly, (5.10) is equivalent to the following inequality:
(5.15)   sin²(d(xn, x)/2) ≥ sin²(d(xn+1, x)/2) + cos d(xn+1, x) sin²(d(xn+1, xn)/2).
By (5.11), d(xn+1, x) ≤ π/4; hence
cos d(xn+1, x) sin²(d(xn+1, xn)/2) ≥ (√2/2) (d(xn+1, xn)/π)² = (√2/(2π²)) d(xn+1, xn)².
Combining this with (5.15) yields that
Σ_{n=0}^∞ λn² (d(xn+1, xn)/λn)² = Σ_{n=0}^∞ d(xn+1, xn)² ≤ √2 π² Σ_{n=0}^∞ (sin²(d(xn, x)/2) − sin²(d(xn+1, x)/2)) < ∞.
Therefore, (5.9) follows because Σ_{n=0}^∞ λn² = ∞.
Combining Theorem 5.7 and Proposition 5.5, together with Remark 5.1(b), we
immediately get the following corollary.
Corollary 5.8. Suppose that M is of sectional curvature bounded above by κ and that A is weakly convex. Let V ∈ V(A) be a monotone vector field with S(V, A) ≠ ∅. Let x0 ∈ A be such that d(x0, S(V, A)) < Dκ/8. Suppose that {λn} ⊂ (0, +∞) satisfies condition (5.7) and
(5.16)   Σ_{n=0}^∞ λn² = ∞.
Then algorithm (5.1) is well-defined, and the generated sequence {xn} converges to a solution of the variational inequality problem (3.1).
In the special case when M is a Hadamard manifold, condition (5.7) and the condition d(x0, S(V, A)) < Dκ/8 hold automatically. Therefore the following corollary is immediate; it extends [21, Theorem 5.12], which was proved for univalued continuous monotone vector fields V under the stronger assumption that the sequence {λn} satisfies inf_n λn > 0.
Corollary 5.9. Suppose that M is a Hadamard manifold and that A is convex. Let V ∈ V(A) be a monotone vector field with S(V, A) ≠ ∅. Let x0 ∈ A, and suppose that {λn} ⊂ (0, +∞) satisfies condition (5.16). Then algorithm (5.1) is well-defined, and the generated sequence {xn} converges to a solution of the variational inequality problem (3.1).
Consider the special case when A = M. Then the proximal point algorithm (5.1) reduces to the following algorithm for finding a singularity of V:
(5.17)   0 ∈ λn V(xn+1) − E_{xn}(xn+1)   for each n = 0, 1, 2, . . . ,
which, on a Hadamard manifold M, is equivalent to the following:
(5.18)   0 ∈ λn V(xn+1) − exp_{xn+1}^{-1} xn   for each n = 0, 1, 2, . . . .
This algorithm was presented and studied in [21] for set-valued vector fields on Hadamard manifolds. The following corollary, which is a direct consequence of Theorem 5.7, extends [21, Corollary 4.8], which was proved on a Hadamard manifold M under the stronger assumption that the sequence {λn} satisfies inf_n λn > 0.
Corollary 5.10. Suppose that M is of sectional curvature bounded above by κ. Let V ∈ V(M) be a monotone vector field with V^{-1}(0) := {x ∈ M : 0 ∈ V(x)} ≠ ∅. Let x0 ∈ M be such that d(x0, V^{-1}(0)) < Dκ/8, and let {λn} ⊂ (0, +∞) satisfy conditions (5.7) and (5.16). Then the sequence {xn} generated by algorithm (5.17) is well-defined and converges to a point x̄ ∈ V^{-1}(0).
A natural question is whether there exists a sequence {λn} of positive numbers satisfying conditions (5.7) and (5.16). The following remark answers this question affirmatively.
Remark 5.2. Let x0 ∈ A be such that d(x0, S(V, A)) < Dκ/8, and set Λ := sup{‖v‖ : v ∈ V(x), x ∈ B(x0, Dκ/4)}. Let {λn} ⊂ (0, Dκ/(4Λ)) satisfy (5.16). Then, by the proofs of Proposition 5.5 and Theorem 5.6, algorithm (5.1) is well-defined, and the generated sequence {xn} is contained in B(x0, Dκ/4). This, together with the choice of {λn} and the definition of Λ, means that conditions (5.7) and (5.16) are satisfied.
6. Application to convex optimization. As illustrated in [10] and the book [35], many nonconvex optimization problems on Euclidean space can be reformulated as convex problems on suitable Riemannian manifolds. This section applies the convergence results for the proximal point algorithm established in the previous section to convex optimization problems on Riemannian manifolds.
Let A ⊂ M be a closed weakly convex set, and let x̄ ∈ A. Recall that a vector v ∈ T_{x̄}M is tangent to A if there is a smooth curve γ : [0, ε] → A such that γ(0) = x̄ and γ'(0) = v. The collection T_{x̄}A of all tangent vectors to A at x̄ is a convex cone in the space T_{x̄}M and is called the tangent cone of A at x̄; see [39, p. 71]. The normal cone of A at x̄ is then defined by
N_A(x̄) := {u ∈ T_{x̄}M : ⟨u, v⟩ ≤ 0 for each v ∈ T_{x̄}A}.
Recall that F_{x̄}A denotes the set of all feasible directions defined by (2.4); that is,
F_{x̄}A = {v ∈ T_{x̄}M : there exists t̄ > 0 such that exp_{x̄} tv ∈ A for any t ∈ (0, t̄)}.
Using the notation E_{x̄}(·) defined by (4.2) and noting (4.3), one sees that
F_{x̄}A = cone(E_{x̄}(A)).
Moreover, by [23, Proposition 3.4], F_{x̄}A ⊆ T_{x̄}A ⊆ cl(F_{x̄}A). Therefore we have that
N_A(x̄) = (F_{x̄}A)° = {u ∈ T_{x̄}M : ⟨u, v⟩ ≤ 0 for each v ∈ E_{x̄}(A)}.
Let f : M → R̄ be a proper function, and let D(f) denote its domain; that is,
D(f) := {x ∈ M : f(x) < +∞}.
We use Γ^f_{x,y} to denote the set of all γ_{xy} ∈ Γ_{x,y} such that γ_{xy} ⊆ D(f). In the following definition, we introduce the notion of convex functions; it is taken from [39], where it was defined on a totally convex subset.
Definition 6.1. Let f : M → R̄ be a proper function with weakly convex domain D(f). The function f is said to be
(a) weakly convex if, for any x, y ∈ D(f), there is a minimal geodesic γ_{xy} ∈ Γ^f_{x,y} such that the composition f ∘ γ_{xy} is convex on [0, 1];
(b) convex if, for any x, y ∈ D(f) and any geodesic γ_{xy} ∈ Γ^f_{x,y}, the composition f ∘ γ_{xy} is convex on [0, 1].
Suppose that f : M → R̄ is a proper weakly convex function. Let x ∈ D(f) and v ∈ T_xM. The directional derivative of f at x in the direction v and the subdifferential of f at x are, respectively, defined by
f'(x; v) := lim_{t→0+} (f(exp_x tv) − f(x))/t
and
∂f(x) := {u ∈ T_xM : ⟨u, v⟩ ≤ f'(x; v) for all v ∈ T_xM}.
Then f'(x; ·) is proper and sublinear on T_xM, with domain
(6.1)   D(f'(x; ·)) = F_x(D(f)),
and
(6.2)   ∂f(x) = ∂(f'(x; ·))(0).
In particular, let δ_A denote the delta function defined by δ_A(x) = 0 if x ∈ A and δ_A(x) = +∞ otherwise. Then one has that
(6.3)   ∂δ_A(x) = N_A(x)   for each x ∈ A.
Furthermore, by Proposition 5.1 (applied with 0 in place of u), we get the subdifferential formula for the distance function d(z, ·) for fixed z ∈ M:
(6.4)   ∂d(z, ·)(x) = {−exp_x^{-1} z / d(z, x)}   for any x ∈ B(z, r(z)) \ {z}.
The following proposition shows that the subdifferential of a convex function is monotone and upper Kuratowski semicontinuous. The proof of assertion (ii) was given in [39, p. 71], while the proofs of the other assertions are standard and follow from the definitions.
Proposition 6.2. Suppose that f : M → R̄ is a proper weakly convex function. Then the following assertions hold:
(i) ∂f : M → 2^{TM} is upper Kuratowski semicontinuous, and ∂f(x) is convex for each x ∈ M.
(ii) If x ∈ int D(f), then ∂f(x) is nonempty, compact, and convex.
(iii) If f is convex, then ∂f : M → 2^{TM} is monotone.
Consequently, if A ⊆ int D(f), then ∂f ∈ V(A).
The following proposition, which was proved in [23, Proposition 4.3], provides sufficient conditions ensuring the sum rule for subdifferentials.
Proposition 6.3. Let f, g : M → R̄ be proper weakly convex functions such that f + g is weakly convex. Then the following formula holds:
(6.5)   ∂(f + g)(x) = ∂f(x) + ∂g(x)   for each x ∈ int D(f) ∩ D(g).
Consider the optimization problem
(6.6)   min_{x∈A} f(x).
The solution set of the optimization problem (6.6) is denoted by S_f(A). Throughout this section, we assume that
(6.7)   A, f, and f + δ_A are weakly convex, and A ⊆ int D(f).
Thus, by Proposition 6.3 and (6.3), the following sum rule holds:
(6.8)   ∂(f + δ_A)(x) = ∂f(x) + N_A(x)   for each x ∈ A.
The following proposition describes the equivalence between the optimization problem (6.6) and the variational inequality problem (3.1).
Proposition 6.4. Under assumption (6.7), we have that S_f(A) = S(∂f, A).
Proof. We first note the following chain of equivalences:
x̄ ∈ S(∂f, A) ⟺ 0 ∈ ∂f(x̄) + N_A(x̄) ⟺ 0 ∈ ∂(f + δ_A)(x̄) ⟺ (f + δ_A)'(x̄; v) ≥ 0 for all v ∈ T_{x̄}M,
where the first and the last equivalences hold by definition, while the second holds by formula (6.8). Thus, to complete the proof, it suffices to show that x̄ is a solution of the optimization problem (6.6) if and only if
(6.9)   (f + δ_A)'(x̄; v) ≥ 0   for each v ∈ T_{x̄}M.
The "only if" part is trivial because, when x̄ solves (6.6),
(f + δ_A)(exp_{x̄} tv) − (f + δ_A)(x̄) ≥ 0
holds for each t ∈ [0, 1] and v ∈ T_{x̄}M. To prove the "if" part, let x ∈ A. Then there exists a geodesic γ ∈ Γ^{f+δ_A}_{x̄,x} such that the function (f + δ_A) ∘ γ is convex on [0, 1]. Set v := γ'(0); it follows that
f(x) − f(x̄) ≥ (f + δ_A)'(x̄; v) ≥ 0.
This means that x̄ is a solution of the optimization problem (6.6), and the proof is complete.
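In the Euclidean case M = R, the equivalence S_f(A) = S(∂f, A) of Proposition 6.4 can be illustrated with a toy instance (our addition, with an arbitrarily chosen f and A):

```python
import numpy as np

# Toy instance of Proposition 6.4 in the Euclidean case M = R: for A = [0, 1]
# and f(x) = (x - 2)^2, a point solves the variational inequality
# f'(x) * (y - x) >= 0 for all y in A exactly when it minimizes f over A.
A = np.linspace(0.0, 1.0, 201)
f = lambda x: (x - 2.0) ** 2
df = lambda x: 2.0 * (x - 2.0)

vi_solutions = [x for x in A if all(df(x) * (y - x) >= -1e-12 for y in A)]
minimizer = A[np.argmin(f(A))]
print(vi_solutions, minimizer)  # both characterizations single out 1.0
```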
Consider the following proximal point algorithm, with initial point x0 ∈ A, for the convex optimization problem (6.6):
(6.10)   xn+1 ∈ argmin{ f(x) + (1/(2λn)) d(xn, x)² : x ∈ A }   for each n ∈ N.
Remark 6.1. By assumption (6.7), one sees that, for each z ∈ A and λ > 0,
argmin{ f(x) + (1/(2λ)) d(z, x)² : x ∈ A } ≠ ∅.
This means that algorithm (6.10) generates at least one sequence {xn}.
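For intuition, in the Euclidean specialization M = A = R with f(x) = |x| (our addition, not an example from the paper), the proximal step in (6.10) has the classical closed "soft-thresholding" form, so the algorithm can be run exactly:

```python
import math

# Run (6.10) exactly in the Euclidean specialization M = A = R, f(x) = |x|:
# the proximal step has the closed soft-thresholding form
#   argmin_x |x| + (x - z)^2 / (2 lam) = sign(z) * max(|z| - lam, 0).
def prox_abs(z, lam):
    return math.copysign(max(abs(z) - lam, 0.0), z)

x, trace = 5.0, [5.0]
for lam in [1.0] * 10:   # constant lam_n; Sum lam_n diverges as in (6.11)
    x = prox_abs(x, lam)
    trace.append(x)
print(trace)  # reaches the minimizer 0 after five steps and stays there
```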
Theorem 6.5. Suppose that M is of sectional curvature bounded above by κ and that A is a weakly convex subset of M. In addition to assumption (6.7), we further suppose that f : M → R̄ is a convex function such that S_f(A) ≠ ∅. Let x0 ∈ A be such that d(x0, S_f(A)) < Dκ/8, and let {xn} be a sequence generated by algorithm (6.10). Suppose that {λn} ⊂ (0, +∞) satisfies
(6.11)   Σ_{n=0}^∞ λn = ∞   and   λn |∂f(xn)| < Dκ/4   for each n = 0, 1, . . . .
Then algorithm (6.10) is well-defined, and the sequence {xn} converges to a solution of the minimization problem (6.6).
Proof. Let V := ∂f, and consider the variational inequality problem (3.1). Then V ∈ V(A) is monotone by Proposition 6.2, and S(V, A) = S_f(A) by Proposition 6.4. Below we verify that the sequence {xn} coincides with a sequence generated by algorithm (5.1). To do this, let n ∈ N and consider the function g_n : M → R̄ defined by
g_n(x) := (1/(2λn)) d(xn, x)²   if x ∈ B(xn, Dκ/2),   and   g_n(x) := +∞   otherwise.
By Proposition 4.2(ii), g_n is convex on M, and D(g_n) = B(xn, Dκ/2) is strongly convex. Moreover, using (6.4) if x ≠ xn, and by definition if x = xn, we have that
(6.12)   ∂g_n(x) = −(1/λn) exp_x^{-1} xn   for each x ∈ B(xn, Dκ/2).
Let γ_n ∈ Γ^{f+δ_A}_{xn, xn+1} be defined by γ_n(t) = exp_{xn} t v_n for each t ∈ [0, 1] and some v_n ∈ T_{xn}M with ‖v_n‖ = d(xn, xn+1) such that f ∘ γ_n is convex on [0, 1]. Then
f(xn+1) − f(xn) ≥ f'(xn; v_n) ≥ ⟨u, v_n⟩ ≥ −‖u‖ ‖v_n‖   for each u ∈ ∂f(xn).
Hence
(6.13)   λn (f(xn) − f(xn+1)) ≤ λn |∂f(xn)| ‖v_n‖ = λn |∂f(xn)| d(xn, xn+1).
By (6.10), one checks that
(6.14)   f(xn+1) + (1/(2λn)) d(xn, xn+1)² ≤ f(xn).
This, together with (6.13) and (6.11), implies that
(6.15)   d(xn, xn+1) ≤ 2λn |∂f(xn)| < Dκ/2;
that is, xn+1 ∈ B(xn, Dκ/2). This means that xn+1 is a minimizer of f + g_n on A. Note by Proposition 4.2 that D(f + g_n) = D(f) ∩ D(g_n) is strongly convex; this implies that f + g_n is convex. Note also that xn+1 ∈ D(f) ∩ int D(g_n). Then we can apply Proposition 6.3 and (6.12) to conclude that
∂(f + g_n)(x) = ∂f(x) + ∂g_n(x) = ∂f(x) − (1/λn) exp_x^{-1} xn   for each x ∈ B(xn, Dκ/2).
Set A_R := A ∩ B(o, R) with o := xn and d(xn, xn+1) < R < Dκ/2. Then it follows from Proposition 6.4 that xn+1 ∈ S(V_{λn,xn}, A_R), where V_{λn,xn} is given on A_R by
(6.16)   V_{λn,xn}(·) = λn ∂(f + g_n)(·) = λn ∂f(·) − exp_{(·)}^{-1} xn.
Since xn+1 ∈ B(o, R), it follows from Proposition 3.2 that xn+1 ∈ S(V_{λn,xn}, A); that is, xn+1 ∈ J_{λn}(xn). Therefore, the sequence {xn} coincides with one generated by algorithm (5.1). Since (5.8) holds by assumption (6.11), it follows from Remark 5.1(b) that algorithm (5.1), and thus algorithm (6.10), is well-defined. It remains to show that the sequence {xn} converges to a solution of the minimization problem (6.6). Note by (6.14) that
Σ_{n=0}^{+∞} λn (d(xn, xn+1)/λn)² ≤ 2 Σ_{n=0}^{+∞} (f(xn) − f(xn+1)) < +∞.
This, together with (6.11), implies that (5.9) holds. Hence Theorem 5.6 is applicable, and the sequence {xn} converges to a point in S(V, A) = S_f(A). The proof is complete.
7. Conclusion. By developing a new approach, we have established results on the existence of solutions, the convexity of the solution set, and the convergence of the proximal point algorithm for variational inequality problems for set-valued vector fields on general Riemannian manifolds. To the best of our knowledge, all of the known important works in this direction, as mentioned in the introduction, treat univalued vector fields on Hadamard manifolds, except the work in [25], which was carried out on Riemannian manifolds but was concerned only with the existence problem for univalued vector fields. In particular, ours seems to be the first paper to explore the proximal point algorithm on general Riemannian manifolds and to establish convexity results for the solution sets of variational inequality problems; the latter results seem new even for univalued vector fields on Hadamard manifolds. Below we provide two examples which illustrate that the results of the present paper are applicable while those in [14, 26] are not (and neither are those in [21, 25]).
Example 7.1. Let

M = H := {(y1, y2) ∈ R² : y2 > 0}

be the Poincaré plane endowed with the Riemannian metric defined by

g11 = g22 = 1/y2²,   g12 = 0   for each (y1, y2) ∈ H.
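As a numerical aside (our addition, not part of the paper), this metric induces the classical hyperbolic distance cosh d(p, q) = 1 + |p − q|²/(2 p2 q2), a standard half-plane formula; one can check that distances add along the semicircles centered on the y1-axis, consistent with their role as geodesics of H.

```python
import math

def dist_H(p, q):
    # hyperbolic distance in the upper half-plane H:
    # cosh d(p, q) = 1 + ((p1-q1)^2 + (p2-q2)^2) / (2 p2 q2)
    num = (p[0] - q[0])**2 + (p[1] - q[1])**2
    return math.acosh(1.0 + num / (2.0 * p[1] * q[1]))

# sanity check on a vertical semiline: d((0,1), (0,e)) = 1
assert abs(dist_H((0.0, 1.0), (0.0, math.e)) - 1.0) < 1e-12

# additivity along a semicircle (y1 - b)^2 + y2^2 = r^2, y2 > 0 (here b = 2, r = 3)
b, r = 2.0, 3.0
p1, p2, p3 = [(b + r*math.cos(t), r*math.sin(t)) for t in (2.5, 1.8, 0.9)]
d12, d23, d13 = dist_H(p1, p2), dist_H(p2, p3), dist_H(p1, p3)
assert abs((d12 + d23) - d13) < 1e-9  # p2 lies between p1 and p3 on the arc
print("additive along the semicircle:", d12 + d23, "=", d13)
```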
It is well known that H is a Hadamard manifold (cf. [39, p. 86]). Taking x̄ = (ȳ1, ȳ2) ∈ H, we get Tx̄H = R². Moreover, the geodesics of the Poincaré plane are the semilines Ca : y1 = a, y2 > 0 and the semicircles Cb,r : (y1 − b)² + y2² = r², y2 > 0. Let A ⊆ H and the set-valued vector field V : H ⇒ R² be defined, respectively, by

(7.1)   A := {(y1, y2) ∈ R² : y1 ∈ [1, 3], 2 ≤ (y1 − 2)² + y2² ≤ 10}
and
V (y1, y2) :=
    (2y1y2, y2² − y1²)                                      if y1² + y2² > 3,
    {(2ty1y2, t(y2² + 1/y2 − y1²) − 1/y2) : t ∈ [0, 1]}     if 1 ≤ y1² + y2² ≤ 3,
    (0, −1/y2)                                              if y1² + y2² < 1.
Then V (x) is compact and convex for each x ∈ H. Furthermore, V is upper Kuratowski semicontinuous on H, and thus V ∈ V(A). Note that A is compact and
convex. Thus Corollary 3.7 is applicable to conclude that the variational inequality
problem (3.1) admits at least one solution. In fact, one can check by definition that
the solution set satisfies

S(V, A) ⊇ {(1, y2) ∈ A : y2 ∈ [1, √2]}.
Since V is not univalued, the corresponding existence results in [26] or [25] are not
applicable.
Next we consider the vector field V ∈ V(A) defined as follows. Let V1 be defined
by

V1(y1, y2) :=
    (2y1y2, y2² − y1²)                               if y1² + y2² > 2,
    {(2ty1y2, t(y2² + 2 − y1²) − 2) : t ∈ [0, 1]}    if y1² + y2² = 2,
    (0, −2)                                          if y1² + y2² < 2.
By [39, p. 301], the function f defined on H by

f(y1, y2) := max{(y1² + y2²)/y2, 2/y2}   for any (y1, y2) ∈ H

is convex, and by [39, p. 297], one checks that V1 is equal to the subdifferential of f.
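This identification can be sanity-checked numerically (our illustration, not from the paper): since the metric is conformal with g11 = g22 = 1/y2², the Riemannian gradient of a smooth function is y2² times its Euclidean gradient, so away from the circle y1² + y2² = 2 the formula for V1 should be recovered by finite differences.

```python
import math

def f(y1, y2):
    # f = max{(y1^2 + y2^2)/y2, 2/y2}; the two branches coincide on y1^2 + y2^2 = 2
    return max((y1*y1 + y2*y2)/y2, 2.0/y2)

def riem_grad_f(y1, y2, h=1e-6):
    # In the metric g11 = g22 = 1/y2^2 the Riemannian gradient is y2^2 times
    # the Euclidean one (conformal rescaling); central finite differences.
    gx = (f(y1 + h, y2) - f(y1 - h, y2)) / (2*h)
    gy = (f(y1, y2 + h) - f(y1, y2 - h)) / (2*h)
    return (y2*y2*gx, y2*y2*gy)

for (y1, y2) in [(1.5, 1.5), (2.0, 0.5)]:      # y1^2 + y2^2 > 2: first branch active
    g = riem_grad_f(y1, y2)
    assert abs(g[0] - 2*y1*y2) < 1e-4 and abs(g[1] - (y2*y2 - y1*y1)) < 1e-4
for (y1, y2) in [(0.3, 0.4), (0.5, 1.0)]:      # y1^2 + y2^2 < 2: second branch active
    g = riem_grad_f(y1, y2)
    assert abs(g[0]) < 1e-4 and abs(g[1] + 2.0) < 1e-4
print("V1 matches grad f on both smooth regions")
```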
Hence, by [21, Theorem 5.1], V1 is monotone and upper Kuratowski semicontinuous on
H. Let PC denote the projection onto C, where C := {(y1, y2) ∈ H : y1² + y2² ≤ 3} ⊆ H
is a closed convex subset. Then we define V by

V(y1, y2) := exp_(y1,y2)^{−1} PC(y1, y2) + V1(y1, y2)   for any (y1, y2) ∈ H.

By [41], PC is nonexpansive on H, and it follows from [31] that the vector field
(y1, y2) → exp_(y1,y2)^{−1} PC(y1, y2) is monotone and continuous on H. Consequently, we
have that V ∈ V(A) is monotone. Consider the variational inequality problem (3.1)
with V given as above and A defined by (7.1). Then we conclude from Corollaries 3.7
and 4.7 that S(V, A) is nonempty and convex. Again, neither the results in [26] nor
the results in [25] are applicable.
Example 7.2. Let

M = S² := {(y1, y2, y3) ∈ R³ : y1² + y2² + y3² = 1}

be the 2-dimensional unit sphere; see [39, p. 84] for more details. Denoting z1 := (0, 0, 1) and z−1 := (0, 0, −1), observe that the manifold S² \ {z1, z−1} can be parameterized by Φ : (0, π) × [0, 2π] ⊂ R² → S² \ {z1, z−1}, defined as Φ(θ, ϕ) := (y1, y2, y3)^T for each θ ∈ (0, π) and ϕ ∈ [0, 2π] with

y1 := sin θ cos ϕ,   y2 := sin θ sin ϕ,   y3 := cos θ.
It is easy to see from the above that (S2 \ {z1 , z−1 }, Φ−1 ) is a system of coordinates
around x whenever x ∈ S2 \ {z1 , z−1 }. Then the Riemannian metric on S2 \ {z1 , z−1 }
is given by
g11 = 1,   g12 = 0,   g22 = sin² θ   for each θ ∈ (0, π) and ϕ ∈ [0, 2π].
The geodesics of S² \ {z1, z−1} are the great circles or semicircles. Furthermore,
we have the curvature κ = 1 and thus Dκ = π.
Let Δ := Δ(a, b, c) be the geodesic triangle in the first quadrant of S² with
vertices a := (1, 0, 0), b := (√2/2, √2/2, 0), c := (√3/2, 0, 1/2) and edges γa,b, γb,c, γc,a. Let
A be the set in the first quadrant of S² with its boundary ∂A = Δ. Clearly, A is
closed and weakly convex. Define the function f : S² → R by

f(y) := −d(y, Δ) if y ∈ A, and f(y) := +∞ otherwise,

where d(y, Δ) denotes the distance from y to Δ. By [36, Chapter 5, Lemma 3.3], one
sees that f is convex on A. Set d := 1 + √3 + √5. Below we show that
(7.2)   ȳ := (d/√(2 + d²), 1/√(2 + d²), 1/√(2 + d²)) ∈ S(f, A).
(Indeed, we could further show that S(f, A) = {ȳ}.) To do this, let ∠a and ∠b denote
the corresponding angles of the triangle at vertices a and b, respectively. Then the
geodesic segments in A bisecting them are ca and cb , which are, respectively, defined
by
(7.3)   ca : y2 = y3   and   cb : y1 = y2 + (√3 + √5)y3   for any (y1, y2, y3) ∈ A.
It is routine to check that ȳ is the intersection of the geodesics ca and cb . Note that
the nearest point to ȳ from each edge of the triangle is in the interior of the edge.
Thus, using the law of cosines (4.1) (with κ = 1), we conclude that the distances from
ȳ to all edges of the triangle are equal, that is, d(ȳ, γa,b ) = d(ȳ, γb,c ) = d(ȳ, γc,a ).
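The equidistance is easy to confirm numerically (an illustration we add here; it uses distances to the full great circles containing the edges, which agree with the distances to the edges since, as noted above, the nearest points lie in the interiors of the edges).

```python
import math

sqrt = math.sqrt
d = 1 + sqrt(3) + sqrt(5)
n = sqrt(2 + d*d)
ybar = (d/n, 1/n, 1/n)                      # the candidate solution from (7.2)

# vertices of the geodesic triangle
a = (1.0, 0.0, 0.0)
b = (sqrt(2)/2, sqrt(2)/2, 0.0)
c = (sqrt(3)/2, 0.0, 0.5)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dist_to_great_circle(p, u, v):
    # spherical distance from unit vector p to the great circle through u and v:
    # arcsin(|p . n| / |n|) with normal n = u x v
    nv = cross(u, v)
    nn = sqrt(sum(x*x for x in nv))
    return math.asin(abs(sum(p[i]*nv[i] for i in range(3))) / nn)

assert abs(sum(x*x for x in ybar) - 1.0) < 1e-12                       # ybar on S^2
assert abs(ybar[1] - ybar[2]) < 1e-12                                  # ybar on c_a
assert abs(ybar[0] - (ybar[1] + (sqrt(3) + sqrt(5))*ybar[2])) < 1e-12  # ybar on c_b
d_ab = dist_to_great_circle(ybar, a, b)
d_bc = dist_to_great_circle(ybar, b, c)
d_ca = dist_to_great_circle(ybar, c, a)
assert max(d_ab, d_bc, d_ca) - min(d_ab, d_bc, d_ca) < 1e-12           # equidistant
print("common distance to the three edges:", d_ab)
```

In fact all three distances equal arcsin(1/√(2 + d²)), which the assertions confirm to machine precision.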
Let x ∈ A, and let y0 ∈ γa,b be the nearest point to ȳ from the edge γa,b , that
is, d(ȳ, y0 ) = d(ȳ, γa,b ). Without loss of generality, we assume that x is located in
the set bounded by the triangle Δ(a, y0, ȳ). Consider the triangle Δ(a, y0, ŷ), where
ŷ ∈ A is the intersection of the geodesic segment joining y0 to ȳ and the geodesic
line determined by a and x. Let x0 ∈ γa,b be the point corresponding to x satisfying
d(a, x0)/d(a, y0) = d(a, x)/d(a, ŷ). We claim that
(7.4)   d(x, x0) ≤ d(ŷ, y0).
Granting this, we have that

d(x, Δ) ≤ d(x, x0) ≤ d(ŷ, y0) < d(ȳ, y0) = d(ȳ, Δ);

hence, f(x) ≥ f(ȳ), and (7.2) is proved as x ∈ A is arbitrary.
To verify (7.4), let a, b ∈ (0, π/2) and α ∈ [0, π/2). Consider the function h on [0, 1]
defined by

(7.5)   h(t) := cos at cos bt + sin at sin bt cos α   for each t ∈ [0, 1].

Then, by elementary calculus, one checks that h is decreasing (and strictly decreasing
if α ≠ 0) on [0, 1]. Now take a := d(a, y0), b := d(a, ŷ), and let α be the inner angle
at vertex a of the triangle Δ(a, y0, ŷ). By the choice of the points a, b, c, one sees
that a, b ∈ (0, π/2) and α ∈ [0, π/2). Writing t := d(a, x0)/d(a, y0) = d(a, x)/d(a, ŷ) and applying the law
of cosines (4.1) with κ = 1, we conclude that

cos d(x, x0) = h(t) ≥ h(1) = cos d(ŷ, y0)
(noting that at = d(a, x0) and bt = d(a, x)). Thus (7.4) is seen to hold. Moreover,
since f is Lipschitz continuous on A with modulus 1 (cf. [23, Proposition 3.1]), we
have that |∂f(y)| ≤ 1 for each y ∈ A. Let {λn} ⊆ [0, π/4] be such that ∑_{n=0}^{∞} λn = ∞,
and let x0 ∈ int A be such that d(x0, ȳ) < π/8. Note that there exists a closed convex
subset A0 ⊆ int A (= int D(f)) such that x0, ȳ ∈ A0. Thus, Theorem 6.5 is applicable
(to A0 in place of A) to conclude that the sequence {xn} generated by algorithm
(6.10) is well-defined and converges to the unique solution ȳ. Clearly, M = S² is not
a Hadamard manifold; hence, the results in [14] and [21] are not applicable.
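As a final numerical aside (our own addition, not part of the paper), the monotonicity of the auxiliary function h from (7.5), which drove the verification of (7.4), can be checked on a grid for sample parameters a, b ∈ (0, π/2) and α ∈ [0, π/2).

```python
import math

def h(t, a, b, alpha):
    # h from (7.5): cos(at)cos(bt) + sin(at)sin(bt)cos(alpha)
    return math.cos(a*t)*math.cos(b*t) + math.sin(a*t)*math.sin(b*t)*math.cos(alpha)

# check the nonincreasing property on [0, 1] for several sampled (a, b, alpha)
for (a, b, alpha) in [(0.7, 1.2, 0.5), (1.0, 0.3, 0.0), (1.4, 1.4, 1.2)]:
    vals = [h(k/100.0, a, b, alpha) for k in range(101)]
    assert all(vals[k] >= vals[k+1] - 1e-12 for k in range(100))
print("h is nonincreasing on [0,1] for all sampled parameters")
```

The grid check is of course not a proof; differentiating h shows h'(t) ≤ 0 on [0, 1] for all such parameters, with strict inequality for t > 0 when α ≠ 0.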
Acknowledgments. The authors are grateful to both anonymous reviewers for
valuable suggestions and remarks which improved the quality of the paper.
REFERENCES
[1] R. Adler, J. P. Dedieu, J. Margulies, M. Martens, and M. Shub, Newton’s method on
Riemannian manifolds and a geometric model for the human spine, IMA J. Numer. Anal.,
22 (2002), pp. 359–390.
[2] F. Alvarez, J. Bolte, and J. Munier, A unifying local convergence result for Newton’s method
in Riemannian manifolds, Found. Comput. Math., 8 (2008), pp. 197–226.
[3] D. Azagra, J. Ferrera, and F. López-Mesas, Nonsmooth analysis and Hamilton-Jacobi
equations on Riemannian manifolds, J. Funct. Anal., 220 (2005), pp. 304–361.
[4] G. Beer, Approximate selections for upper semicontinuous convex valued multifunctions, J.
Approx. Theory, 39 (1983), pp. 172–184.
[5] W. M. Boothby, An Introduction to Differentiable Manifolds and Riemannian Geometry, 2nd
ed., Academic Press, New York, 1986.
[6] M. Bridson and A. Haefliger, Metric Spaces of Non-positive Curvature, Springer-Verlag,
Berlin, Heidelberg, 1999.
[7] J. Cheeger and D. Gromoll, On the structure of complete manifolds of nonnegative curvature, Ann. Math., 96 (1972), pp. 413–443.
VARIATIONAL INEQUALITIES ON RIEMANNIAN MANIFOLDS
2513
[8] J. X. Da Cruz Neto, O. P. Ferreira, and L. R. Lucambio Pérez, Contributions to the
study of monotone vector fields, Acta Math. Hungar., 94 (2002), pp. 307–320.
[9] J. X. Da Cruz Neto, O. P. Ferreira, and L. R. Lucambio Pérez, Monotone point-to-set
vector fields, J. Geom. Appl., 5 (2000), pp. 69–79.
[10] J. X. Da Cruz Neto, O. P. Ferreira, L. R. Lucambio Pérez, and S. Z. Németh, Convex and
monotone-transformable mathematical programming problems and a proximal-like point
method, J. Global Optim., 35 (2006), pp. 53–69.
[11] J. P. Dedieu, P. Priouret, and G. Malajovich, Newton’s method on Riemannian manifolds:
Covariant alpha theory, IMA J. Numer. Anal., 23 (2003), pp. 395–419.
[12] M. P. DoCarmo, Riemannian Geometry, Birkhäuser Boston, Boston, MA, 1992.
[13] A. Edelman, T. A. Arias, and S. T. Smith, The geometry of algorithms with orthogonality
constraints, SIAM J. Matrix Anal. Appl., 20 (1998), pp. 303–353.
[14] O. P. Ferreira and P. R. Oliveira, Proximal point algorithm on Riemannian manifolds,
Optimization, 51 (2002), pp. 257–270.
[15] O. P. Ferreira, L. R. Lucambio Pérez, and S. Z. Németh, Singularities of monotone vector
fields and an extragradient-type algorithm, J. Global Optim., 31 (2005), pp. 133–151.
[16] O. P. Ferreira and B. F. Svaiter, Kantorovich’s theorem on Newton’s method in Riemannian
manifolds, J. Complexity, 18 (2002), pp. 304–329.
[17] D. Gabay, Minimizing a differentiable function over a differentiable manifold, J. Optim. Theory Appl., 37 (1982), pp. 177–219.
[18] P. T. Harker and J. S. Pang, Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications, Math. Programming,
48 (1990), pp. 161–220.
[19] G. Isac and S. Z. Németh, Scalar and Asymptotic Scalar Derivatives. Theory and Applications, Springer Optim. Appl. 13, Springer, New York, 2008.
[20] Y. S. Ledyaev and Q. J. Zhu, Nonsmooth analysis on smooth manifolds, Trans. Amer. Math.
Soc., 359 (2007), pp. 3687–3732.
[21] C. Li, G. López, and V. Martín-Márquez, Monotone vector fields and the proximal point
algorithm on Hadamard manifolds, J. London Math. Soc., 79 (2009), pp. 663–683.
[22] C. Li, G. López, V. Martín-Márquez, and J. H. Wang, Resolvents of set-valued monotone
vector fields in Hadamard manifolds, Set-Valued Var. Anal., 19 (2011), pp. 361–383.
[23] C. Li, B. S. Mordukhovich, J. Wang, and J.-C. Yao, Weak sharp minima on Riemannian
manifolds, SIAM J. Optim., 21 (2011), pp. 1523–1560.
[24] C. Li and J. H. Wang, Newton’s method on Riemannian manifolds: Smale’s point estimate
theory under the γ-condition, IMA J. Numer. Anal., 26 (2006), pp. 228–251.
[25] S. L. Li, C. Li, Y. C. Liou, and J. C. Yao, Existence of solutions for variational inequalities
on Riemannian manifolds, Nonlinear Anal., 71 (2009), pp. 5695–5706.
[26] S. Z. Németh, Variational inequalities on Hadamard manifolds, Nonlinear Anal., 52 (2003),
pp. 1491–1498.
[27] S. Z. Németh, Five kinds of monotone vector fields, Pure Appl. Math., 9 (1999), pp. 417–428.
[28] S. Z. Németh, Monotone vector fields, Publ. Math., 54 (1999), pp. 437–449.
[29] S. Z. Németh, Geodesic monotone vector fields, Lobachevskii J. Math., 5 (1999), pp. 13–28
(electronic).
[30] S. Z. Németh, Homeomorphisms and monotone vector fields, Publ. Math., 58 (2001), pp. 707–
716.
[31] S. Z. Németh, Monotonicity of the complementary vector field of a non-expansive map, Acta
Math. Hungar., 84 (1999), pp. 189–197.
[32] M. Patriksson, Nonlinear Programming and Variational Inequality Problems: A Unified Approach, Kluwer Academic Publishers, Dordrecht, 1999.
[33] P. Petersen, Riemannian Geometry, 2nd ed., Grad. Texts Math. 171, Springer, New York,
2006.
[34] B. Piatek, Halpern iteration in CAT(κ) spaces, Acta Math. Sin. (Engl. Ser.), 27 (2011),
pp. 635–646.
[35] T. Rapcsák, Smooth Nonlinear Optimization in Rn , Kluwer Academic Publishers, Dordrecht,
1997.
[36] T. Sakai, Riemannian Geometry, Transl. Math. Monogr. 149, American Mathematical Society,
Providence, RI, 1992.
[37] S. Smith, Optimization techniques on Riemannian manifolds, in Hamiltonian and Gradient
Flows, Algorithms and Control, Fields Instit. Commun. 3, American Mathematical Society,
Providence, RI, 1994, pp. 113–136.
[38] S. T. Smith, Geometric Optimization Methods for Adaptive Filtering, Ph.D. thesis, Division
of Applied Science, Harvard University, Cambridge, MA, 1993.
2514
CHONG LI AND JEN-CHIH YAO
[39] C. Udrişte, Convex Functions and Optimization Methods on Riemannian Manifolds, Kluwer
Academic Publishers, Dordrecht, 1994.
[40] J. H. Wang, G. López, V. Martín-Márquez, and C. Li, Monotone and accretive vector fields
on Riemannian manifolds, J. Optim. Theory Appl., 146 (2010), pp. 691–708.
[41] R. Walter, On the metric projections onto convex sets in Riemannian spaces, Arch. Math.,
25 (1974), pp. 91–98.