Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 780107, 5 pages
http://dx.doi.org/10.1155/2013/780107
Research Article
A New Smoothing Nonlinear Conjugate Gradient Method for
Nonsmooth Equations with Finitely Many Maximum Functions
Yuan-yuan Chen1,2 and Shou-qiang Du1
1 College of Mathematics, Qingdao University, Qingdao 266071, China
2 School of Management, University of Shanghai for Science and Technology, Shanghai 200093, China
Correspondence should be addressed to Yuan-yuan Chen; usstchenyuanyuan@163.com
Received 8 March 2013; Accepted 25 March 2013
Academic Editor: Yisheng Song
Copyright © 2013 Y.-y. Chen and S.-q. Du. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
The nonlinear conjugate gradient method is of particular importance for solving unconstrained optimization problems. Nonsmooth equations with finitely many maximum functions arise in the study of complementarity problems, constrained nonlinear programming problems, and many problems in engineering and mechanics. Smoothing methods for solving nonsmooth equations, complementarity problems, and stochastic complementarity problems have been studied for decades. In this paper, we present a new smoothing nonlinear conjugate gradient method for nonsmooth equations with finitely many maximum functions. The new method guarantees that any accumulation point of the iterative sequence it generates is a Clarke stationary point of the merit function for nonsmooth equations with finitely many maximum functions.
1. Introduction
In this paper, we consider the following nonsmooth equations with finitely many maximum functions:
$$\max_{j \in J_1}\{h_{1j}(x)\} = 0, \quad \ldots, \quad \max_{j \in J_n}\{h_{nj}(x)\} = 0, \tag{1}$$
where $h_{ij} : R^n \to R$, $i = 1, \ldots, n$, $j \in J_i$, are continuously differentiable functions and the $J_i$ are finite index sets. Denote
$$h_i(x) = \max_{j \in J_i}\{h_{ij}(x)\}, \quad i = 1, \ldots, n, \qquad H(x) = (h_1(x), \ldots, h_n(x))^T; \tag{2}$$
then (1) can be reformulated as
$$H(x) = 0, \tag{3}$$
where $H : R^n \to R^n$ is nonsmooth and $J_i(x) = \{j \in J_i \mid h_{ij}(x) = h_i(x)\}$, $i = 1, \ldots, n$. Nonsmooth equations have been studied for decades; they arise in the study of optimal control, variational inequality and complementarity problems, equilibrium problems, and engineering mechanics [1–3], because many practical problems, such as stochastic complementarity problems, variational inequality problems, KKT systems of constrained nonlinear programming problems, and many equilibrium problems, can be reformulated as (1). In the past few years, there has been growing interest in the study of (1) (see, e.g., [4, 5]). Due to their simplicity and global convergence, iterative methods such as the nonsmooth Levenberg-Marquardt method, the generalized Newton method, and smoothing methods for solving nonsmooth equations have been widely studied [4–11].
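To fix ideas, here is a minimal Python sketch that evaluates $H(x)$ from (2) and the merit function $\frac{1}{2}\|H(x)\|^2$ introduced as (6) below; the 2-by-2 instance and the particular functions $h_{ij}$ are hypothetical illustrations, not data from the paper.

```python
import numpy as np

# Hypothetical 2x2 instance of (1): each component of H is the maximum of
# finitely many continuously differentiable functions h_ij.
h = [
    [lambda x: x[0] ** 2 - 1.0, lambda x: x[0] + x[1]],   # h_{1j}, j in J_1
    [lambda x: x[1] - 0.5,      lambda x: -x[0] * x[1]],  # h_{2j}, j in J_2
]

def H(x):
    """H(x) = (h_1(x), ..., h_n(x))^T with h_i(x) = max_{j in J_i} h_ij(x); cf. (2)."""
    return np.array([max(h_ij(x) for h_ij in h_i) for h_i in h])

def merit(x):
    """Merit function f(x) = 0.5 * ||H(x)||^2; cf. (6) below."""
    Hx = H(x)
    return 0.5 * float(Hx @ Hx)

x = np.array([0.3, -0.2])
print(H(x), merit(x))
```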
In this paper, we give a new smoothing nonlinear conjugate gradient method for (1). In the following section, we recall some definitions and some background on nonlinear conjugate gradient methods, and we present the new smoothing nonlinear conjugate gradient method for (1), which guarantees that any accumulation point of the iterative sequence is a Clarke stationary point of the merit function for (1). In the last section, some discussions are given.
Notation. In the following, a quantity with a subscript $k$ denotes that quantity evaluated at $x_k$, $\|\cdot\|$ denotes the 2-norm, and $R_{++} = \{t \mid t > 0\}$.
2. Preliminaries and New Method

In this section, we first give some definitions and some background on the nonlinear conjugate gradient method. We then propose the new method for (1) and give the convergence analysis.

In the following, we give two definitions, which will be used in this paper.

Definition 1. Let $F : R^n \to R$ be a locally Lipschitz function. Then $F$ is almost everywhere differentiable. Denote by $D_F$ the set of points where $F$ is differentiable; then the generalized gradient of $F$ at $x$ in the sense of Clarke is
$$\partial F(x) = \operatorname{conv}\Bigl\{\lim_{x_k \to x,\; x_k \in D_F} \nabla F(x_k)\Bigr\}, \tag{4}$$
where conv denotes the convex hull.

Definition 2. Let $r : R^n \to R$ be a locally Lipschitz continuous function. We call $\tilde r : R^n \times R_{++} \to R$ a smoothing function of $r$ if $\tilde r(\cdot, \mu)$ is continuously differentiable for any fixed $\mu \in R_{++}$ and
$$\lim_{x \to \hat x,\; \mu \downarrow 0} \tilde r(x, \mu) = r(\hat x). \tag{5}$$
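To make Definition 2 concrete, one standard smoothing of a finite max (an illustrative choice in the spirit of [9, 10], not necessarily the one intended by the authors) is the exponential smoothing $\tilde r(x, \mu) = \mu \ln \sum_j e^{h_j(x)/\mu}$, which is smooth for fixed $\mu > 0$ and satisfies $r(x) \le \tilde r(x, \mu) \le r(x) + \mu \ln |J|$, so (5) holds. A minimal numerical check:

```python
import numpy as np

def r(x):
    """r(x) = max{x, x^2}: locally Lipschitz but nonsmooth at the kinks."""
    return max(x, x * x)

def r_tilde(x, mu):
    """Exponential smoothing mu*ln(e^{x/mu} + e^{x^2/mu}): continuously
    differentiable in x for fixed mu > 0, with r(x) <= r_tilde(x, mu)
    <= r(x) + mu*ln(2), so condition (5) of Definition 2 holds."""
    return mu * np.logaddexp(x / mu, (x * x) / mu)

for mu in (1.0, 1e-1, 1e-3):
    print(mu, r_tilde(1.0, mu) - r(1.0))  # gap shrinks like mu*ln(2)
```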
In the following, we will give the new method for (1). In order to describe the method clearly, we divide this section into two parts. In Case 1, we give the new nonlinear conjugate gradient method for a smooth objective function. In Case 2, we give the new smoothing nonlinear conjugate gradient method for a nonsmooth objective function.

Denote
$$f(x) = \frac{1}{2}\|H(x)\|^2, \tag{6}$$
where $H$ is defined in (3). Then (1) is equivalent to the following unconstrained minimization problem with zero optimal value:
$$\min_{x \in R^n} f(x). \tag{7}$$

Case 1. In this case, we assume that $f$ is a continuously differentiable function. Then (7) is a standard unconstrained optimization problem. There are many methods for solving unconstrained optimization problems, such as the Newton method, the nonlinear conjugate gradient method, and quasi-Newton methods [12–16]. Here, based on [13, 14], we give a new nonlinear conjugate gradient method to solve (7). The iteration for solving (7) is given by
$$x_{k+1} = x_k + \alpha_k d_k, \tag{8}$$
where $d_k$ is the direction and $\alpha_k > 0$ is a step size; in this paper, we use the Wolfe type line search [14]. Compute $\alpha_k > 0$ such that
$$f(x_k + \alpha_k d_k) - f(x_k) \le -\rho \alpha_k^2 \|d_k\|^2, \qquad g(x_k + \alpha_k d_k)^T d_k \ge -2\sigma \alpha_k \|d_k\|^2, \tag{9}$$
where $\rho, \sigma \in (0, 1)$, $\rho < \sigma$, and $g$ is the gradient of $f$. The direction is defined by
$$d_k = \begin{cases} -g_k, & \text{if } k = 1, \\ -g_k + \beta_k d_{k-1}, & \text{if } k \ge 2, \end{cases} \tag{10}$$
where
$$\beta_k = \frac{g_k^T y_{k-1}}{d_{k-1}^T z_{k-1}} - \delta \frac{\|y_{k-1}\|^2}{(d_{k-1}^T z_{k-1})^2}\, g_k^T d_{k-1}, \tag{11}$$
with $z_{k-1} = y_{k-1} + t_{k-1} s_{k-1}$, $t_{k-1} = \epsilon_0 + \max\{0, -s_{k-1}^T y_{k-1} / s_{k-1}^T s_{k-1}\}$, $\epsilon_0 > 0$, $\delta > 1/4$, $y_{k-1} = g_k - g_{k-1}$, and $s_{k-1} = x_k - x_{k-1}$.

Now we give the new method for (7) as follows.

Method 1. Consider the following steps.

Step 0. Given $x_0 \in R^n$, set $d_0 = -g_0$ and $k := 0$; if $\|g_0\| = 0$, then stop.

Step 1. Find $\alpha_k > 0$ satisfying (9); $x_{k+1}$ is given by (8).

Step 2. Compute $d_{k+1}$ by (10). Set $k := k + 1$, and go to Step 1.
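The following Python sketch illustrates Method 1 under these assumptions. The direction update implements (10) and (11); the step size search, however, is a plain backtracking stand-in that enforces only the decrease half of (9), so this is an illustration rather than the exact procedure of the paper.

```python
import numpy as np

def method1(fun, grad, x0, rho=0.4, delta=0.3, eps0=1e-6, tol=1e-8,
            max_iter=500):
    """Sketch of Method 1: iteration (8) with direction (10)-(11).
    Note delta > 1/4 and eps0 > 0 as required by (11)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Backtracking stand-in enforcing only the decrease half of (9).
        alpha = 1.0
        while fun(x + alpha * d) - fun(x) > -rho * alpha ** 2 * (d @ d):
            alpha *= 0.5
            if alpha < 1e-16:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        t = eps0 + max(0.0, -(s @ y) / (s @ s))  # t_{k-1} in (11)
        z = y + t * s                            # z_{k-1} in (11)
        beta = (g_new @ y) / (d @ z) - delta * (y @ y) / (d @ z) ** 2 * (g_new @ d)
        d = -g_new + beta * d                    # direction update (10)
        x, g = x_new, g_new
    return x

# Usage on a smooth strongly convex quadratic:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
print(method1(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, [1.0, 1.0]))
```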
We also need the following assumptions.

Assumption 3. Consider the following:
(i) $L = \{x \in R^n \mid f(x) \le f(x_0)\}$ is level bounded;
(ii) in a neighborhood $U$ of $L$, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \le L\|x - y\|, \quad x, y \in U. \tag{12}$$

In the following, we give the global convergence analysis of Method 1. Firstly, we give some lemmas.

Lemma 4. Let $\{x_k\}$ be generated by Method 1; then
$$g_k^T d_k \le -\Bigl(1 - \frac{1}{4\delta}\Bigr)\|g_k\|^2, \tag{13}$$
where $\delta > 1/4$.

By [13, Theorem 2.1], we have Lemma 4.

Lemma 5 (see [14]). Suppose that Assumption 3 holds and $\alpha_k$ is computed by (9); then
$$\sum_{k=1}^{\infty} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \tag{14}$$
By Lemmas 4 and 5, we have Lemma 6.

Lemma 6. Suppose that Assumption 3 holds and $\alpha_k$ is determined by (9); then
$$\sum_{k=1}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2} < +\infty. \tag{15}$$

Now, we give the global convergence theorem for Method 1.

Theorem 7. Suppose that Assumption 3 holds and $\{x_k\}$ is generated by Method 1; then
$$\liminf_{k \to \infty} \|g_k\| = 0. \tag{16}$$

Proof. Suppose by contradiction that there exists $\epsilon > 0$ such that
$$\|g_k\| \ge \epsilon \tag{17}$$
holds for all $k \ge 1$. By
$$d_{k-1}^T z_{k-1} \ge \epsilon_0\, d_{k-1}^T s_{k-1} \tag{18}$$
and (11), we have
$$|\beta_k| \le \frac{L\epsilon_0 + \delta L^2}{\epsilon_0^2} \cdot \frac{\|g_k\|}{\|d_{k-1}\|}, \qquad \|d_k\| \le \|g_k\| + \frac{L\epsilon_0 + \delta L^2}{\epsilon_0^2}\|g_k\|. \tag{19}$$
Denoting $m = 1 + (L\epsilon_0 + \delta L^2)/\epsilon_0^2$, we get $\|d_k\|^2 \le m^2\|g_k\|^2$. Then we have
$$\sum_{k=1}^{\infty} \frac{\|g_k\|^4}{\|d_k\|^2} \ge \sum_{k=1}^{\infty} \frac{\epsilon^2}{m^2} = +\infty, \tag{20}$$
which contradicts (15). So we get the theorem.

Case 2. In this case, we assume that $f$ is a nonsmooth function; then (7) is a nonsmooth unconstrained optimization problem. Denote by $\tilde f(x, \mu)$ the smoothing function for $f(x)$ and set $\tilde g_k = \nabla_x \tilde f(x_k, \mu_k)$; then $\alpha_k > 0$ is computed by
$$\tilde f(x_k + \alpha_k d_k, \mu_k) \le \tilde f(x_k, \mu_k) - \rho \alpha_k^2 \|d_k\|^2, \qquad \tilde g_{k+1}^T d_k \ge -2\sigma \alpha_k \|d_k\|^2, \tag{21}$$
where $0 < \rho < \sigma < 1$. The direction is defined by
$$d_k = \begin{cases} -\tilde g_k, & \text{if } k = 1, \\ -\tilde g_k + \beta_k d_{k-1}, & \text{if } k \ge 2, \end{cases} \tag{22}$$
where
$$\beta_k = \frac{\tilde g_k^T \tilde y_{k-1}}{d_{k-1}^T \tilde z_{k-1}} - \delta \frac{\|\tilde y_{k-1}\|^2}{(d_{k-1}^T \tilde z_{k-1})^2}\, \tilde g_k^T d_{k-1}, \tag{23}$$
with $\tilde z_{k-1} = \tilde y_{k-1} + t_{k-1} s_{k-1}$, $t_{k-1} = \epsilon_0 + \max\{0, -s_{k-1}^T \tilde y_{k-1} / s_{k-1}^T s_{k-1}\}$, $\epsilon_0 > 0$, $\delta > 1/4$, $\tilde y_{k-1} = \tilde g_k - \tilde g_{k-1}$, and $s_{k-1} = x_k - x_{k-1}$.

We give the following smoothing nonlinear conjugate gradient method.

Method 2. Given $x_0 \in R^n$, $\mu_0 > 0$, $\gamma > 0$, and $\gamma_1 \in (0, 1)$.

Step 1. Find $\alpha_k > 0$ satisfying (21); $x_{k+1}$ is given by (8).

Step 2. If $\|\nabla_x \tilde f(x_{k+1}, \mu_k)\| \ge \gamma \mu_k$, then set $\mu_{k+1} = \mu_k$; otherwise, let $\mu_{k+1} \le \gamma_1 \mu_k$.

Step 3. Compute $d_{k+1}$ by (22). Set $k := k + 1$, and go to Step 1.
Theorem 8. Suppose that $\tilde f(\cdot, \mu)$ satisfies Assumption 3 for each fixed $\mu > 0$. Then the sequence $\{x_k\}$ generated by Method 2 satisfies
$$\lim_{k \to \infty} \mu_k = 0, \qquad \liminf_{k \to \infty} \|\nabla_x \tilde f(x_k, \mu_{k-1})\| = 0. \tag{24}$$

Proof. If the set $\{k \mid \mu_{k+1} \le \gamma_1 \mu_k\}$ is finite, then there exists a fixed $K$ such that $\|\nabla_x \tilde f(x_k, \mu_{k-1})\| \ge \gamma \mu_{k-1}$ for all $k > K$. Denoting $\mu = \mu_k$ for $k > K$, because $\tilde f(\cdot, \mu)$ is a smooth function, the method reduces to Method 1 with $f(x) = \tilde f(x, \mu)$. By Theorem 7, we get
$$\liminf_{k \to \infty} \|\nabla_x \tilde f(x_k, \mu)\| = 0, \tag{25}$$
so $\|\nabla_x \tilde f(x_k, \mu_{k-1})\| \ge \gamma \mu_{k-1}$ for all $k > K$ is impossible. Then the set $\{k \mid \mu_{k+1} \le \gamma_1 \mu_k\}$ is infinite, and
$$\lim_{k \to \infty} \mu_k = 0. \tag{26}$$
Since $\{k \mid \mu_{k+1} \le \gamma_1 \mu_k\}$ is infinite, we can write it as $\{k_0, k_1, \ldots\}$ with $k_0 < k_1 < \cdots$. Then we have
$$\liminf_{i \to \infty} \|\nabla_x \tilde f(x_{k_i+1}, \mu_{k_i})\| \le \gamma \lim_{i \to \infty} \mu_{k_i}. \tag{27}$$
By
$$\lim_{i \to \infty} \mu_{k_i} = 0, \tag{28}$$
we have
$$\liminf_{k \to \infty} \|\nabla_x \tilde f(x_k, \mu_{k-1})\| = 0. \tag{29}$$
So we complete the proof.

Remark 9. From the result of Theorem 8, we know that, for some kinds of smoothing functions [9, 10], any accumulation point $x^\star$ of $\{x_k\}$ generated by Method 2 is a Clarke stationary point of $f$; that is, $0 \in \partial f(x^\star)$.

3. Some Discussions

In this paper, we give a new smoothing nonlinear conjugate gradient method for (1). The new method guarantees that any accumulation point is a Clarke stationary point of the merit function for (1).
Discussion 1. In Methods 1 and 2, we can use any line search that is well defined, provided that the search directions are descent directions.
Discussion 2. There are some kinds of smoothing functions which satisfy Assumption 3 for fixed $\mu > 0$ (see [9, 10]). When the smoothing function of $f$ has the gradient consistency property, any accumulation point of the sequence $\{x_k\}$ generated by Method 2 is a Clarke stationary point. Under some assumptions, we can also use the methods in [15] to solve nonsmooth equations with finitely many maximum functions (1).
Discussion 3. The new method can also be used for solving the nonlinear complementarity problem (NCP). By the F-B function
$$\varphi(a, b) = \sqrt{a^2 + b^2} - (a + b), \tag{30}$$
we know that the NCP is equivalent to $\psi(x) = 0$, where $\psi$ is a continuously differentiable function, so we can use the smooth version, Method 1, to solve it. Method 2 can also be used to solve the vertical complementarity problems
$$H_i(x) \ge 0, \quad i = 1, \ldots, m, \qquad \prod_{i=1}^{m} H_i^j(x) = 0, \quad j = 1, \ldots, n, \tag{31}$$
where the $H_i : R^n \to R^n$ are continuously differentiable functions.
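As a small illustration of this discussion, assuming the standard componentwise use of the F-B function (30) with a hypothetical linear map $F$, the merit function $\psi(x) = \frac{1}{2}\sum_i \varphi(x_i, F_i(x))^2$ is smooth and vanishes exactly at solutions of the NCP:

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function (30): phi(a, b) = sqrt(a^2 + b^2) - (a + b)."""
    return np.sqrt(a * a + b * b) - (a + b)

def ncp_merit(x, F):
    """psi(x) = 0.5 * sum_i phi(x_i, F_i(x))^2: zero exactly at NCP solutions,
    and continuously differentiable even though phi itself is not."""
    Phi = fb(x, F(x))
    return 0.5 * float(Phi @ Phi)

# Hypothetical linear complementarity data: F(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
print(ncp_merit(np.array([0.5, 0.0]), lambda x: M @ x + q))
```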
Discussion 4. The new method can also be used for solving Hamilton-Jacobi-Bellman (HJB) equations (see [17]). The HJB equation is to find $x$ such that
$$\max_{1 \le i \le k}\{L^i x - l^i\} = 0 \quad \text{in } \Theta, \qquad x = 0 \quad \text{on } \partial\Theta, \tag{32}$$
where $\Theta$ is a bounded domain in $R^d$ and $L^i$, $i = 1, \ldots, k$, are second-order elliptic operators. HJB equations arise in stochastic control and are often used to solve finance and control problems. By the finite element method, we can obtain the following discrete HJB equation: find $x \in R^n$ such that
$$\max_{1 \le j \le k}\{A^j x - \tau^j\} = 0, \tag{33}$$
where $A^j \in R^{n \times n}$ and $\tau^j \in R^n$, $j = 1, \ldots, k$.
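A brief sketch showing how (33) fits the form (1): each component of the residual below is a finite max of affine, hence smooth, functions; the matrices and vectors are hypothetical data.

```python
import numpy as np

def hjb_residual(x, As, taus):
    """Residual of the discrete HJB equation (33): the componentwise maximum
    over j of A^j x - tau^j, a finite max of affine (smooth) functions."""
    return np.max(np.stack([A @ x - tau for A, tau in zip(As, taus)]), axis=0)

# Hypothetical data with k = 2 operators on R^2:
As = [np.eye(2), np.array([[2.0, 0.0], [0.0, 0.5]])]
taus = [np.array([0.1, 0.2]), np.array([0.0, 0.3])]
print(hjb_residual(np.array([0.2, 0.4]), As, taus))
```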
Discussion 5. We can also consider using the new method to solve the general variational inequality problem (see [18]). The general variational inequality problem is to compute $x \in R^n$ such that
$$q(x) \in X, \qquad (y - q(x))^T p(x) \ge 0, \quad \text{for any } y \in X, \tag{34}$$
where $p, q : R^n \to R^n$ are two continuously differentiable functions and $X \subseteq R^n$ is a closed convex set. Problem (34) is denoted by GVI(X, p, q) in [18]. GVI(X, p, q) is a generalization of complementarity problems, nonlinear variational inequality problems, and general nonlinear complementarity problems. The variational inequality problem is to compute $x \in X$ such that
$$(y - x)^T p(x) \ge 0, \quad \text{for any } y \in X. \tag{35}$$
The general nonlinear complementarity problem is to compute $x \in R^n$ such that
$$q(x) \ge 0, \qquad p(x) \ge 0, \qquad q(x)^T p(x) = 0. \tag{36}$$
We can rewrite (34) as the following nonsmooth equation:
$$q(x) - P_X[q(x) - p(x)] = 0, \tag{37}$$
where $P_X$ is the projection operator onto $X$ under the Euclidean norm. General nonlinear complementarity problems are widely used in solving engineering and economic problems; for example, under some conditions, the $n$-person noncooperative game problem can be reformulated as (37) (see [19]). Therefore, how to use our new method to solve general nonlinear complementarity problems and the $n$-person noncooperative game problem would be an interesting topic and deserves further investigation.
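When $X$ is a box, the projection $P_X$ in (37) is a componentwise clip, so the residual of the nonsmooth equation is easy to evaluate; a sketch with hypothetical $p$ and $q$:

```python
import numpy as np

def gvi_residual(x, p, q, lo, hi):
    """Residual of (37), q(x) - P_X[q(x) - p(x)], for a box X = [lo, hi]^n;
    the Euclidean projection P_X onto a box is a componentwise clip."""
    return q(x) - np.clip(q(x) - p(x), lo, hi)

# Hypothetical smooth p and q on R^2 with X = [0, 1]^2:
p = lambda x: np.array([x[0] - 0.2, x[1] + 0.1])
q = lambda x: x
print(gvi_residual(np.array([0.4, 0.0]), p, q, 0.0, 1.0))
```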
Acknowledgments

This work was supported by the National Science Foundation of China (11101231, 70971070), a Project of Shandong Province Higher Educational Science and Technology Program (J10LA05), and the International Cooperation Program for Excellent Lecturers of 2011 by the Shandong Provincial Education Department.
References
[1] L. Qi and P. Tseng, “On almost smooth functions and piecewise
smooth functions,” Nonlinear Analysis, vol. 67, no. 3, pp. 773–
794, 2007.
[2] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. 1, Springer, New
York, NY, USA, 2003.
[3] D. Sun and J. Han, “Newton and quasi-Newton methods for
a class of nonsmooth equations and related problems,” SIAM
Journal on Optimization, vol. 7, no. 2, pp. 463–480, 1997.
[4] Y. Gao, “Newton methods for solving two classes of nonsmooth
equations,” Applications of Mathematics, vol. 46, no. 3, pp. 215–
229, 2001.
[5] S.-Q. Du and Y. Gao, “A parametrized Newton method for
nonsmooth equations with finitely many maximum functions,”
Applications of Mathematics, vol. 54, no. 5, pp. 381–390, 2009.
[6] N. Yamashita and M. Fukushima, “Modified Newton methods
for solving a semismooth reformulation of monotone complementarity problems,” Mathematical Programming B, vol. 76, no.
3, pp. 469–491, 1997.
[7] L. Q. Qi and J. Sun, “A nonsmooth version of Newton’s method,”
Mathematical Programming A, vol. 58, no. 3, pp. 353–367, 1993.
[8] Y. Elfoutayeni and M. Khaladi, “Using vector divisions in solving the linear complementarity problem,” Journal of Computational and Applied Mathematics, vol. 236, no. 7, pp. 1919–1925,
2012.
[9] X. Chen and W. Zhou, “Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex
minimization,” SIAM Journal on Imaging Sciences, vol. 3, no. 4,
pp. 765–790, 2010.
[10] X. Chen, “Smoothing methods for nonsmooth, nonconvex
minimization,” Mathematical Programming B, vol. 134, no. 1, pp.
71–99, 2012.
[11] M. C. Ferris, O. L. Mangasarian, and J. S. Pang, Complementarity: Applications, Algorithms and Extensions, Kluwer Academic,
Dordrecht, The Netherlands, 2001.
[12] Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method
with a strong global convergence property,” SIAM Journal on
Optimization, vol. 10, no. 1, pp. 177–182, 1999.
[13] Z. Dai and F. Wen, “Global convergence of a modified Hestenes-Stiefel nonlinear conjugate gradient method with Armijo line
search,” Numerical Algorithms, vol. 59, no. 1, pp. 79–93, 2012.
[14] C.-Y. Wang, Y.-Y. Chen, and S.-Q. Du, “Further insight into the
Shamanskii modification of Newton method,” Applied Mathematics and Computation, vol. 180, no. 1, pp. 46–52, 2006.
[15] Y. Y. Chen and S. Q. Du, “Nonlinear conjugate gradient methods
with Wolfe type line search,” Abstract and Applied Analysis, vol.
2013, Article ID 742815, 5 pages, 2013.
[16] Y. Xiao, H. Song, and Z. Wang, “A modified conjugate gradient
algorithm with cyclic Barzilai-Borwein steplength for unconstrained optimization,” Journal of Computational and Applied
Mathematics, vol. 236, no. 13, pp. 3101–3110, 2012.
[17] S. Zhou and Z. Zou, “A relaxation scheme for Hamilton-Jacobi-Bellman equations,” Applied Mathematics and Computation,
vol. 186, no. 1, pp. 806–813, 2007.
[18] X. Chen, L. Qi, and D. Sun, “Global and superlinear convergence
of the smoothing Newton method and its application to
general box constrained variational inequalities,” Mathematics
of Computation, vol. 67, no. 222, pp. 519–540, 1998.
[19] M. C. Ferris and J. S. Pang, “Engineering and economic applications of complementarity problems,” SIAM Review, vol. 39,
no. 4, pp. 669–713, 1997.