Hindawi Publishing Corporation
International Journal of Stochastic Analysis
Volume 2014, Article ID 201491, 17 pages
http://dx.doi.org/10.1155/2014/201491
Research Article
The Relationship between the Stochastic
Maximum Principle and the Dynamic Programming in
Singular Control of Jump Diffusions
Farid Chighoub and Brahim Mezerdi
Laboratory of Applied Mathematics, University Mohamed Khider, P.O. Box 145, 07000 Biskra, Algeria
Correspondence should be addressed to Brahim Mezerdi; bmezerdi@yahoo.fr
Received 7 September 2013; Revised 28 November 2013; Accepted 3 December 2013; Published 9 January 2014
Academic Editor: Agnès Sulem
Copyright © 2014 F. Chighoub and B. Mezerdi. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
The main objective of this paper is to explore the relationship between the stochastic maximum principle (SMP in short) and
dynamic programming principle (DPP in short), for singular control problems of jump diffusions. First, we establish necessary
as well as sufficient conditions for optimality by using the stochastic calculus of jump diffusions and some properties of singular
controls. Then, we give, under smoothness conditions, a useful verification theorem and we show that the solution of the adjoint
equation coincides with the spatial gradient of the value function, evaluated along the optimal trajectory of the state equation.
Finally, using these theoretical results, we explicitly solve an example on an optimal harvesting strategy for a geometric Brownian
motion with jumps.
1. Introduction
In this paper, we consider a mixed classical-singular control
problem, in which the state evolves according to a stochastic
differential equation, driven by a Poisson random measure
and an independent multidimensional Brownian motion, of
the following form:
𝑑π‘₯𝑑 = 𝑏 (𝑑, π‘₯𝑑 , 𝑒𝑑 ) 𝑑𝑑 + 𝜎 (𝑑, π‘₯𝑑 , 𝑒𝑑 ) 𝑑𝐡𝑑
Μƒ (𝑑𝑑, 𝑑𝑒) + 𝐺𝑑 π‘‘πœ‰π‘‘ ,
+ ∫ 𝛾 (𝑑, π‘₯𝑑− , 𝑒𝑑 , 𝑒) 𝑁
𝐸
(1)
π‘₯0 = π‘₯,
where $b$, $\sigma$, $\gamma$, and $G$ are given deterministic functions and $x$ is the initial state. The control variable is a suitable process $(u,\xi)$, where $u : [0,T]\times\Omega \to A_1 \subset \mathbb{R}^d$ is the usual classical absolutely continuous control and $\xi : [0,T]\times\Omega \to A_2 = ([0,\infty))^m$ is the singular control, which is an increasing process, continuous on the right with limits on the left, with $\xi_{0^-} = 0$. The performance functional has the form
$$J(u,\xi) = E\left[\int_0^T f(t,x_t,u_t)\,dt + \int_0^T k(t)\,d\xi_t + g(x_T)\right]. \tag{2}$$
The objective of the controller is to choose a pair $(u^\star,\xi^\star)$ of adapted processes in order to maximize the performance functional.
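For the reader who wishes to experiment numerically, the dynamics (1) and the functional (2) can be approximated by an Euler scheme on a time grid. The following sketch is purely illustrative: the coefficients playing the roles of $b$, $\sigma$, $\gamma$, $G$, $f$, $g$, $k$ and all parameters are toy choices, not taken from this paper; the classical control is held constant and the singular control is approximated by constant nonnegative increments.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_J(u, dxi, T=1.0, n=200, n_paths=2000,
               sigma=0.2, lam=1.0, jump=0.1, G=-1.0, k=0.5, x0=1.0):
    """Euler approximation of a scalar version of the controlled jump
    diffusion (1), with b(t,x,u) = -x + u, multiplicative noise and jumps,
    and a Monte Carlo estimate of the gain (2) with the toy choices
    f(t,x,u) = -(x^2 + u^2)/2 and g(x) = -x^2/2."""
    dt = T / n
    x = np.full(n_paths, x0)
    J = np.zeros(n_paths)
    for _ in range(n):
        J += -(x**2 + u**2) / 2 * dt + k * dxi       # running gain + singular term
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
        dN = rng.poisson(lam * dt, n_paths)          # Poisson jump counts
        # drift + diffusion + compensated jump part + singular control
        x = x + (-x + u) * dt + sigma * x * dB \
              + jump * x * (dN - lam * dt) + G * dxi
    J += -x**2 / 2                                   # terminal gain g(x_T)
    return float(J.mean())

J0 = simulate_J(u=0.0, dxi=0.0)  # no control effort, no harvesting
```

Scanning such estimates over a grid of constant controls gives a crude lower envelope of the value; it is only a sanity check, not a substitute for the optimality conditions derived below.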
In the first part of our present work, we investigate
the question of necessary as well as sufficient optimality
conditions, in the form of a Pontryagin stochastic maximum
principle. In the second part, we give under regularity
assumptions, a useful verification theorem. Then, we show
that the adjoint process coincides with the spatial gradient of
the value function, evaluated along the optimal trajectory of
the state equation. Finally, using these theoretical results, we
explicitly solve an example on an optimal harvesting strategy for a geometric Brownian motion with jumps. Note that
our results extend those in [1, 2] to the jump diffusion
setting. Moreover, we generalize the results in [3, 4] by allowing
both classical and singular controls, at least in the complete
information setting. Note that in our control problem, there
are two types of jumps for the state process, the inaccessible
ones which come from the Poisson martingale part and
the predictable ones which come from the singular control
part. The inclusion of these jump terms introduces a major
difference with respect to the case without singular control.
Stochastic control problems of singular type have received
considerable attention, due to their wide applicability in
a number of different areas; see [4–8]. In most cases,
the optimal singular control problem was studied through
dynamic programming principle; see [9], where it was shown
in particular that the value function is continuous and is the
unique viscosity solution of the HJB variational inequality.
The one-dimensional problems of the singular type,
without the classical control, have been studied by many
authors. It was shown that the value function satisfies a
variational inequality, which gives rise to a free boundary
problem, and the optimal state process is a diffusion reflected
at the free boundary. Bather and Chernoff [10] were the first
to formulate such a problem. Beneš et al. [11] explicitly solved
a one-dimensional example by observing that the value
function in their example is twice continuously differentiable.
This regularity property is called the principle of smooth fit.
The optimal control can be constructed by using the reflected
Brownian motion; see Lions and Sznitman [12] for more
details. Applications to irreversible investment, industry
equilibrium, and portfolio optimization under transaction
costs can be found in [13]. A problem of optimal harvesting
from a population in a stochastic crowded environment is
proposed in [14] to represent the size of the population at
time 𝑑 as the solution of the stochastic logistic differential
equation. The two-dimensional problem that arises in portfolio selection models, under proportional transaction costs,
is of singular type and has been considered by Davis and
Norman [15]. The case of diffusions with jumps is studied
by Øksendal and Sulem [8]. For further contributions on
singular control problems and their relationship with optimal
stopping problems, the reader is referred to [4, 5, 7, 16, 17].
The stochastic maximum principle is another powerful tool for solving stochastic control problems. The first
result that covers singular control problems was obtained
by Cadenillas and Haussmann [18], in which they consider
linear dynamics, convex cost criterion, and convex state
constraints. A first-order weak stochastic maximum principle
was developed via convex perturbations method for both
absolutely continuous and singular components by Bahlali
and Chala [1]. The second-order stochastic maximum principle for nonlinear SDEs with a controlled diffusion matrix
was obtained by Bahlali and Mezerdi [19], extending the
Peng maximum principle [20] to singular control problems.
A similar approach has been used by Bahlali et al. in [21], to
study the stochastic maximum principle in relaxed-singular
optimal control in the case of uncontrolled diffusion. Bahlali
et al. in [22] discuss the stochastic maximum principle in
singular optimal control in the case where the coefficients
are Lipschitz continuous in π‘₯, provided that the classical
derivatives are replaced by the generalized ones. See also the
recent paper by Øksendal and Sulem [4], where Malliavin
International Journal of Stochastic Analysis
calculus techniques have been used to define the adjoint
process.
Stochastic control problems in which the system is
governed by a stochastic differential equation with jumps,
without the singular part, have been also studied, both by
the dynamic programming approach and by the Pontryagin
maximum principle. The HJB equation associated with these
problems is a nonlinear second-order parabolic integro-differential equation. Pham [23] studied a mixed optimal
stopping and stochastic control of jump diffusion processes
by using the viscosity solutions approach. Some verification
theorems of various types of problems for systems governed
by this kind of SDEs are discussed by Øksendal and Sulem
[8]. Some results that cover the stochastic maximum principle
for controlled jump diffusion processes are discussed in [3,
24, 25]. In [3] the sufficient maximum principle and the
link with the dynamic programming principle are given
by assuming the smoothness of the value function. Let us
mention that in [24] the verification theorem is established
in the framework of viscosity solutions, and the relationship between the adjoint processes and some generalized
gradients of the value function is obtained. Note that Shi
and Wu [24] extend the results of [26] to jump diffusions.
See also [27] for systematic study of the continuous case.
The second-order stochastic maximum principle for optimal
controls of nonlinear dynamics, with jumps and convex state
constraints, was developed via spike variation method, by
Tang and Li [25]. These conditions are described in terms of
two adjoint processes, which are linear backward SDEs. Such
equations have important applications in hedging problems
[28]. Existence and uniqueness for solutions to BSDEs with
jumps and nonlinear coefficients have been treated by Tang
and Li [25] and Barles et al. [29]. The link with integral-partial
differential equations is studied in [29].
The plan of the paper is as follows. In Section 2, we
give some preliminary results and notations. The purpose of
Section 3 is to derive necessary as well as sufficient optimality
conditions. In Section 4, we give, under regularity assumptions, a verification theorem for the value function. Then, we
prove that the adjoint process is equal to the derivative of the
value function evaluated at the optimal trajectory, extending
in particular [2, 3]. An example is solved explicitly by
using the theoretical results.
2. Assumptions and Problem Formulation
The purpose of this section is to introduce some notations,
which will be needed in the subsequent sections. In all what
follows, we are given a probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\le T},P)$, such that $\mathcal{F}_0$ contains the $P$-null sets, $\mathcal{F}_T = \mathcal{F}$ for an arbitrarily fixed time horizon $T$, and $(\mathcal{F}_t)_{t\le T}$ satisfies the usual conditions. We assume that $(\mathcal{F}_t)_{t\le T}$ is generated by a $d$-dimensional standard Brownian motion $B$ and an independent jump measure $N$ of a Lévy process $\eta$, on $[0,T]\times E$, where $E \subset \mathbb{R}^m\setminus\{0\}$ for some $m \ge 1$. We denote by $(\mathcal{F}^B_t)_{t\le T}$ (resp., $(\mathcal{F}^N_t)_{t\le T}$) the $P$-augmentation of the natural filtration of $B$ (resp., $N$). We assume that the compensator of $N$ has the form $\mu(dt,de) = \nu(de)\,dt$, for some $\sigma$-finite Lévy measure $\nu$ on $E$, endowed with its Borel $\sigma$-field $\mathcal{B}(E)$. We suppose that
$\int_E 1\wedge|e|^2\,\nu(de) < \infty$ and set $\tilde N(dt,de) = N(dt,de) - \nu(de)\,dt$, the compensated jump martingale random measure of $N$. Obviously, we have
$$\mathcal{F}_t = \sigma\left[\int_{A\times(0,s]} N(dr,de);\ s \le t,\ A \in \mathcal{B}(E)\right] \vee \sigma\left[B_s;\ s \le t\right] \vee \mathcal{N}, \tag{3}$$
where $\mathcal{N}$ denotes the totality of $P$-null sets and $\sigma_1\vee\sigma_2$ denotes the $\sigma$-field generated by $\sigma_1\cup\sigma_2$.

Notation. Any element $x \in \mathbb{R}^n$ will be identified with a column vector with $n$ components, and its norm is $|x| = |x_1| + \cdots + |x_n|$. The scalar product of any two vectors $x$ and $y$ on $\mathbb{R}^n$ is denoted by $xy = \sum_{i=1}^n x_i y_i$. For a function $h$, we denote by $h_x$ (resp., $h_{xx}$) the gradient or Jacobian (resp., the Hessian) of $h$ with respect to the variable $x$.

Given $s < t$, let us introduce the following spaces.

(i) $\mathbb{L}^2_{\nu}(E;\mathbb{R}^n)$ or $\mathbb{L}^2_\nu$ is the set of square integrable functions $l(\cdot) : E \to \mathbb{R}^n$ such that
$$\|l(e)\|^2_{\mathbb{L}^2_{\nu}(E;\mathbb{R}^n)} := \int_E |l(e)|^2\,\nu(de) < \infty. \tag{4}$$

(ii) $\mathbb{S}^2([s,t];\mathbb{R}^n)$ is the set of $\mathbb{R}^n$-valued adapted cadlag processes $P$ such that
$$\|P\|_{\mathbb{S}^2([s,t];\mathbb{R}^n)} := \mathbb{E}\Big[\sup_{r\in[s,t]} |P_r|^2\Big]^{1/2} < \infty. \tag{5}$$

(iii) $\mathbb{M}^2([s,t];\mathbb{R}^n)$ is the set of progressively measurable $\mathbb{R}^n$-valued processes $Q$ such that
$$\|Q\|_{\mathbb{M}^2([s,t];\mathbb{R}^n)} := \mathbb{E}\Big[\int_s^t |Q_r|^2\,dr\Big]^{1/2} < \infty. \tag{6}$$

(iv) $\mathbb{L}^2_{\nu}([s,t];\mathbb{R}^n)$ is the set of $\mathcal{B}([0,T]\times\Omega)\otimes\mathcal{B}(E)$ measurable maps $R : [0,T]\times\Omega\times E \to \mathbb{R}^n$ such that
$$\|R\|_{\mathbb{L}^2_{\nu}([s,t];\mathbb{R}^n)} := \mathbb{E}\Big[\int_s^t\int_E |R_r(e)|^2\,\nu(de)\,dr\Big]^{1/2} < \infty. \tag{7}$$

To avoid heavy notations, we omit the subscript $([s,t];\mathbb{R}^n)$ in these notations when $(s,t) = (0,T)$.

Let $T$ be a fixed strictly positive real number; $A_1$ is a closed convex subset of $\mathbb{R}^n$ and $A_2 = ([0,\infty))^m$. Let us define the class of admissible control processes $(u,\xi)$.

Definition 1. An admissible control is a pair of measurable, adapted processes $u : [0,T]\times\Omega \to A_1$ and $\xi : [0,T]\times\Omega \to A_2$, such that

(1) $u$ is a predictable process, $\xi$ is of bounded variation, nondecreasing, right continuous with left-hand limits, and $\xi_{0^-} = 0$,

(2) $\mathbb{E}\big[\sup_{t\in[0,T]} |u_t|^2 + |\xi_T|^2\big] < \infty$.

We denote by $\mathcal{U} = \mathcal{U}_1\times\mathcal{U}_2$ the set of all admissible controls. Here $\mathcal{U}_1$ (resp., $\mathcal{U}_2$) represents the set of the admissible controls $u$ (resp., $\xi$).

Assume that, for $(u,\xi) \in \mathcal{U}$, $t \in [0,T]$, the state $x_t$ of our system is given by
$$dx_t = b(t,x_t,u_t)\,dt + \sigma(t,x_t,u_t)\,dB_t + \int_E \gamma(t,x_{t^-},u_t,e)\,\tilde N(dt,de) + G_t\,d\xi_t, \qquad x_0 = x, \tag{8}$$
where $x \in \mathbb{R}^n$ is given, representing the initial state.

Let
$$b : [0,T]\times\mathbb{R}^n\times A_1 \to \mathbb{R}^n, \quad \sigma : [0,T]\times\mathbb{R}^n\times A_1 \to \mathbb{R}^{n\times d}, \quad \gamma : [0,T]\times\mathbb{R}^n\times A_1\times E \to \mathbb{R}^n, \quad G : [0,T] \to \mathbb{R}^{n\times m} \tag{9}$$
be measurable functions.

Notice that the jump of a singular control $\xi \in \mathcal{U}_2$ at any jumping time $\tau$ is defined by $\Delta\xi_\tau = \xi_\tau - \xi_{\tau^-}$, and we let
$$\xi^c_t = \xi_t - \sum_{0<\tau\le t} \Delta\xi_\tau \tag{10}$$
be the continuous part of $\xi$.

We distinguish between the jumps of $x_\tau$ caused by the jump of $N(\tau,e)$, defined by
$$\Delta_N x_\tau := \int_E \gamma(\tau,x_{\tau^-},u_\tau,e)\,N(\{\tau\},de) = \begin{cases} \gamma(\tau,x_{\tau^-},u_\tau,e) & \text{if } \eta \text{ has a jump of size } e \text{ at } \tau, \\ 0 & \text{otherwise}, \end{cases} \tag{11}$$
and the jump of $x_\tau$ caused by the singular control $\xi$, denoted by $\Delta_\xi x_\tau := G_\tau\,\Delta\xi_\tau$. In the above, $N(\{\tau\},\cdot)$ represents the jump in the Poisson random measure occurring at time $\tau$. In particular, the general jump of the state process at $\tau$ is given by $\Delta x_\tau = x_\tau - x_{\tau^-} = \Delta_\xi x_\tau + \Delta_N x_\tau$.

If $\varphi$ is a continuous real function, we let
$$\Delta_\xi\varphi(x_\tau) := \varphi(x_\tau) - \varphi(x_{\tau^-} + \Delta_N x_\tau). \tag{12}$$
The expression (12) defines the jump in the value of $\varphi(x_\tau)$ caused by the jump of $x$ at $\tau$. We emphasize that the possible jumps in $x_\tau$ coming from the Poisson measure are not included in $\Delta_\xi\varphi(x_\tau)$.
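The decomposition $\Delta x_\tau = \Delta_\xi x_\tau + \Delta_N x_\tau$ and the quantity (12) can be traced concretely at a single jump time. The functions `gamma` and `phi` and all numerical values below are hypothetical illustrations, not data from the paper.

```python
# One-dimensional illustration of the jump decomposition (10)-(12).
gamma = lambda x, e: 0.1 * x * e     # jump coefficient (illustrative)
G = -1.0                             # singular-control coefficient (illustrative)
phi = lambda x: x**2                 # a smooth test function

x_minus = 2.0                        # x_{tau-}
e = 0.5                              # size of the Levy jump at tau
d_xi = 0.3                           # Delta xi_tau

delta_N = gamma(x_minus, e)          # Delta_N x_tau, as in (11)
delta_xi = G * d_xi                  # Delta_xi x_tau = G_tau * Delta xi_tau
x_tau = x_minus + delta_N + delta_xi # Delta x_tau = Delta_xi x + Delta_N x

# (12): jump of phi(x) caused by the singular control only
delta_xi_phi = phi(x_tau) - phi(x_minus + delta_N)
```

Note that `delta_xi_phi` compares $\varphi$ after the full jump with $\varphi$ after the Poisson part alone, which is exactly the convention emphasized below (12).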
Suppose that the performance functional has the form
$$J(u,\xi) = \mathbb{E}\left[\int_0^T f(t,x_t,u_t)\,dt + g(x_T) + \int_0^T k_t\,d\xi_t\right], \quad \text{for } (u,\xi) \in \mathcal{U}, \tag{13}$$
where $f : [0,T]\times\mathbb{R}^n\times A_1 \to \mathbb{R}$, $g : \mathbb{R}^n \to \mathbb{R}$, and $k : [0,T] \to ([0,\infty))^m$, with $k_t\,d\xi_t = \sum_{l=1}^m k^l_t\,d\xi^l_t$.

An admissible control $(u^\star,\xi^\star)$ is optimal if
$$J(u^\star,\xi^\star) = \sup_{(u,\xi)\in\mathcal{U}} J(u,\xi). \tag{14}$$

Let us assume the following.

(H$_1$) The maps $b$, $\sigma$, $\gamma$, and $f$ are continuously differentiable with respect to $(x,u)$, and $g$ is continuously differentiable in $x$.

(H$_2$) The derivatives $b_x$, $b_u$, $\sigma_x$, $\sigma_u$, $\gamma_x$, $\gamma_u$, $f_x$, $f_u$, and $g_x$ are continuous in $(x,u)$ and uniformly bounded.

(H$_3$) $b$, $\sigma$, $\gamma$, and $f$ are bounded by $K_1(1+|x|+|u|)$, and $g$ is bounded by $K_1(1+|x|)$, for some $K_1 > 0$.

(H$_4$) For all $(u,e) \in A_1\times E$, the map
$$(x,\zeta) \in \mathbb{R}^n\times\mathbb{R}^n \longmapsto a(t,x,u,\zeta;e) := \zeta^{\mathrm T}\big(\gamma_x(t,x,u,e) + I_d\big)\zeta \tag{15}$$
satisfies, uniformly in $(x,\zeta) \in \mathbb{R}^n\times\mathbb{R}^n$,
$$a(t,x,u,\zeta;e) \ge |\zeta|^2 K_2^{-1}, \quad \text{for some } K_2 > 0. \tag{16}$$

(H$_5$) $G$ and $k$ are continuous and bounded.

3. The Stochastic Maximum Principle

Let us first define the usual Hamiltonian associated to the control problem by
$$H(t,x,u,p,q,X(\cdot)) = f(t,x,u) + p\,b(t,x,u) + \sum_{j=1}^d q^j\sigma^j(t,x,u) + \int_E X(e)\,\gamma(t,x,u,e)\,\nu(de), \tag{17}$$
where $(t,x,u,p,q,X(\cdot)) \in [0,T]\times\mathbb{R}^n\times A_1\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\times\mathbb{L}^2_\nu$, and $q^j$ and $\sigma^j$, for $j = 1,\dots,d$, denote the $j$th columns of the matrices $q$ and $\sigma$, respectively.

Let $(u^\star,\xi^\star)$ be an optimal control and let $x^\star$ be the corresponding optimal trajectory. Then, we consider a triple $(p,q,r(\cdot))$ of square integrable adapted processes associated with $(u^\star,x^\star)$, with values in $\mathbb{R}^n\times\mathbb{R}^{n\times d}\times\mathbb{R}^n$, such that
$$dp_t = -H_x(t,x^\star_t,u^\star_t,p_t,q_t,r_t(\cdot))\,dt + q_t\,dB_t + \int_E r_t(e)\,\tilde N(dt,de), \qquad p_T = g_x(x^\star_T). \tag{18}$$

3.1. Necessary Conditions of Optimality. The purpose of this section is to derive necessary conditions of optimality satisfied by an optimal control, assuming that the solution exists. The proof is based on convex perturbations for both absolutely continuous and singular components of the optimal control and on some estimates of the state processes. Note that our results generalize [1, 2, 21] to systems with jumps.

Theorem 2 (necessary conditions of optimality). Let $(u^\star,\xi^\star)$ be an optimal control maximizing the functional $J$ over $\mathcal{U}$, and let $x^\star$ be the corresponding optimal trajectory. Then there exists an adapted process $(p,q,r(\cdot)) \in \mathbb{S}^2\times\mathbb{M}^2\times\mathbb{L}^2_\nu$, which is the unique solution of the BSDE (18), such that the following conditions hold.

(i) For all $v \in A_1$,
$$H_u(t,x^\star_t,u^\star_t,p_t,q_t,r_t(\cdot))(v - u^\star_t) \le 0, \quad dt\text{-a.e., } P\text{-a.s.} \tag{19}$$

(ii) For all $t \in [0,T]$, with probability 1,
$$k^i_t + G^i_t p_t \le 0, \quad \text{for } i = 1,\dots,m, \tag{20}$$
$$\sum_{i=1}^m \mathbf{1}_{\{k^i_t + G^i_t p_t < 0\}}\,d\xi^{\star ci}_t = 0, \tag{21}$$
$$k^i_t + G^i_t\big(p_{t^-} + \Delta_N p_t\big) \le 0, \quad \text{for } i = 1,\dots,m, \tag{22}$$
$$\sum_{i=1}^m \mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) < 0\}}\,\Delta\xi^{\star i}_t = 0, \tag{23}$$
where $\Delta_N p_t = \int_E r_t(e)\,N(\{t\},de)$.

In order to prove Theorem 2, we present some auxiliary results.

3.1.1. Variational Equation. Let $(v,\xi) \in \mathcal{U}$ be such that $(u^\star + v, \xi^\star + \xi) \in \mathcal{U}$. The convexity of the control domain ensures that, for $\varepsilon \in (0,1)$, the control $(u^\star + \varepsilon v, \xi^\star + \varepsilon\xi)$ is also in $\mathcal{U}$. We denote by $x^\varepsilon$ the solution of the SDE (8) corresponding to the control $(u^\star + \varepsilon v, \xi^\star + \varepsilon\xi)$. Then by standard arguments from stochastic calculus, it is easy to check the following estimate.

Lemma 3. Under assumptions (H$_1$)–(H$_5$), one has
$$\lim_{\varepsilon\to 0} \mathbb{E}\Big[\sup_{t\in[0,T]} \big|x^\varepsilon_t - x^\star_t\big|^2\Big] = 0. \tag{24}$$

Proof. From assumptions (H$_1$)–(H$_5$), we get, by using the Burkholder-Davis-Gundy inequality,
$$\mathbb{E}\Big[\sup_{t\in[0,T]} \big|x^\varepsilon_t - x^\star_t\big|^2\Big] \le K\int_0^T \mathbb{E}\Big[\sup_{\tau\in[0,s]} \big|x^\varepsilon_\tau - x^\star_\tau\big|^2\Big]\,ds + K\varepsilon^2\Big(\int_0^T \mathbb{E}\Big[\sup_{\tau\in[0,s]} |v_\tau|^2\Big]\,ds + \mathbb{E}|\xi_T|^2\Big). \tag{25}$$
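When the Lévy measure is supported on finitely many jump sizes, the integral in the Hamiltonian (17) reduces to a weighted sum, and the first-order condition (19) can be checked directly. All coefficients below are illustrative one-dimensional choices, not the paper's model; the sketch only demonstrates that, at the maximizing value of the control, the variational inequality (19) holds.

```python
import numpy as np

E_pts = np.array([-1.0, 1.0])    # jump sizes carrying the (illustrative) Levy mass
nu_w = np.array([0.3, 0.7])      # weights nu({e})

f   = lambda x, u: -(x**2 + u**2) / 2
b   = lambda x, u: -x + u
sig = lambda x, u: 0.2 * x
gam = lambda x, u, e: 0.1 * x * e

def H(x, u, p, q, r):
    """Hamiltonian (17) in dimension one, with a two-point Levy measure."""
    return f(x, u) + p * b(x, u) + q * sig(x, u) \
        + float(np.sum(r(E_pts) * gam(x, u, E_pts) * nu_w))

x, p, q = 1.0, 0.8, 0.1
r = lambda e: 0.05 * e

# Here H is strictly concave in u with H_u(u) = p - u, so the maximizer is u = p.
u_star = p
H_u = lambda u: p - u
vals = [H_u(u_star) * (v - u_star) for v in np.linspace(-2.0, 2.0, 9)]
```

Since the diffusion and jump coefficients above do not depend on $u$, the control enters only through $f$ and $b$, which makes the maximizer explicit; in general the condition must be checked with all four terms.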
From Definition 1 and Gronwall's lemma, the result follows immediately by letting $\varepsilon$ go to zero.

We define the process $z_t = z^{u^\star,v,\xi}_t$ by
$$\begin{aligned}
dz_t ={}& \big\{b_x(t,x^\star_t,u^\star_t)z_t + b_u(t,x^\star_t,u^\star_t)v_t\big\}\,dt + \sum_{j=1}^d \big\{\sigma^j_x(t,x^\star_t,u^\star_t)z_t + \sigma^j_u(t,x^\star_t,u^\star_t)v_t\big\}\,dB^j_t \\
&+ \int_E \big\{\gamma_x(t,x^\star_{t^-},u^\star_t,e)z_{t^-} + \gamma_u(t,x^\star_{t^-},u^\star_t,e)v_t\big\}\,\tilde N(dt,de) + G_t\,d\xi_t, \qquad z_0 = 0.
\end{aligned} \tag{26}$$
From (H$_2$) and Definition 1, one can find a unique solution $z$ which solves the variational equation (26), and the following estimate holds.

Lemma 4. Under assumptions (H$_1$)–(H$_5$), it holds that
$$\lim_{\varepsilon\to 0} \mathbb{E}\left|\frac{x^\varepsilon_t - x^\star_t}{\varepsilon} - z_t\right|^2 = 0. \tag{27}$$

Proof. Let
$$\Gamma^\varepsilon_t = \frac{x^\varepsilon_t - x^\star_t}{\varepsilon} - z_t. \tag{28}$$
We denote $x^{\mu,\varepsilon}_t = x^\star_t + \mu\varepsilon(\Gamma^\varepsilon_t + z_t)$ and $u^{\mu,\varepsilon}_t = u^\star_t + \mu\varepsilon v_t$, for notational convenience; note that $x^{1,\varepsilon} = x^\varepsilon$ and $u^{1,\varepsilon} = u^\varepsilon := u^\star + \varepsilon v$. Then we have immediately that $\Gamma^\varepsilon_0 = 0$ and $\Gamma^\varepsilon_t$ satisfies the following SDE:
$$\begin{aligned}
d\Gamma^\varepsilon_t ={}& \Big\{\tfrac1\varepsilon\big(b(t,x^\varepsilon_t,u^\varepsilon_t) - b(t,x^\star_t,u^\star_t)\big) - \big(b_x(t,x^\star_t,u^\star_t)z_t + b_u(t,x^\star_t,u^\star_t)v_t\big)\Big\}\,dt \\
&+ \Big\{\tfrac1\varepsilon\big(\sigma(t,x^\varepsilon_t,u^\varepsilon_t) - \sigma(t,x^\star_t,u^\star_t)\big) - \big(\sigma_x(t,x^\star_t,u^\star_t)z_t + \sigma_u(t,x^\star_t,u^\star_t)v_t\big)\Big\}\,dB_t \\
&+ \int_E \Big\{\tfrac1\varepsilon\big(\gamma(t,x^\varepsilon_{t^-},u^\varepsilon_t,e) - \gamma(t,x^\star_{t^-},u^\star_t,e)\big) - \big(\gamma_x(t,x^\star_{t^-},u^\star_t,e)z_{t^-} + \gamma_u(t,x^\star_{t^-},u^\star_t,e)v_t\big)\Big\}\,\tilde N(dt,de).
\end{aligned} \tag{29}$$
Since the derivatives of the coefficients are bounded, and from Definition 1, it is easy to verify by Gronwall's inequality that $\Gamma^\varepsilon \in \mathbb{S}^2$ and
$$\begin{aligned}
\mathbb{E}|\Gamma^\varepsilon_t|^2 \le{}& K\,\mathbb{E}\int_0^t \Big|\int_0^1 b_x(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)\Gamma^\varepsilon_s\,d\mu\Big|^2 ds + K\,\mathbb{E}\int_0^t \Big|\int_0^1 \sigma_x(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)\Gamma^\varepsilon_s\,d\mu\Big|^2 ds \\
&+ K\,\mathbb{E}\int_0^t\int_E \Big|\int_0^1 \gamma_x(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s,e)\Gamma^\varepsilon_s\,d\mu\Big|^2 \nu(de)\,ds + K\,\mathbb{E}|\rho^\varepsilon_t|^2,
\end{aligned} \tag{30}$$
where
$$\begin{aligned}
\rho^\varepsilon_t ={}& -\int_0^t b_x(s,x^\star_s,u^\star_s)z_s\,ds - \int_0^t \sigma_x(s,x^\star_s,u^\star_s)z_s\,dB_s - \int_0^t\int_E \gamma_x(s,x^\star_{s^-},u^\star_s,e)z_{s^-}\,\tilde N(ds,de) \\
&- \int_0^t b_u(s,x^\star_s,u^\star_s)v_s\,ds - \int_0^t \sigma_u(s,x^\star_s,u^\star_s)v_s\,dB_s - \int_0^t\int_E \gamma_u(s,x^\star_{s^-},u^\star_s,e)v_s\,\tilde N(ds,de) \\
&+ \int_0^t\int_0^1 b_x(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)z_s\,d\mu\,ds + \int_0^t\int_0^1 \sigma_x(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)z_s\,d\mu\,dB_s + \int_0^t\int_E\int_0^1 \gamma_x(s,x^{\mu,\varepsilon}_{s^-},u^{\mu,\varepsilon}_s,e)z_{s^-}\,d\mu\,\tilde N(ds,de) \\
&+ \int_0^t\int_0^1 b_u(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)v_s\,d\mu\,ds + \int_0^t\int_0^1 \sigma_u(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)v_s\,d\mu\,dB_s + \int_0^t\int_E\int_0^1 \gamma_u(s,x^{\mu,\varepsilon}_{s^-},u^{\mu,\varepsilon}_s,e)v_s\,d\mu\,\tilde N(ds,de).
\end{aligned} \tag{31}$$
Since $b_x$, $\sigma_x$, and $\gamma_x$ are bounded, then
$$\mathbb{E}|\Gamma^\varepsilon_t|^2 \le M\,\mathbb{E}\int_0^t |\Gamma^\varepsilon_s|^2\,ds + M\,\mathbb{E}|\rho^\varepsilon_t|^2, \tag{32}$$
where $M$ is a generic constant depending on the constants $K$, $\nu(E)$, and $T$. We conclude from Lemma 3 and the dominated convergence theorem that $\lim_{\varepsilon\to 0}\rho^\varepsilon_t = 0$. Hence (27) follows from Gronwall's lemma by letting $\varepsilon$ go to 0. This completes the proof.
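With the Brownian and Poisson terms switched off, the variational equation (26) reduces to the classical first-order sensitivity ODE $z' = b_x(x^\star,u^\star)z + b_u(x^\star,u^\star)v$, and the convergence stated in Lemma 4 can be observed by a finite-difference check. The drift and controls below are illustrative choices.

```python
import numpy as np

def flow(eps, x0=0.5, T=1.0, n=4000):
    """Euler integration of x' = sin(x) + u (a toy drift with bounded
    derivative), its perturbation by u + eps*v, and the sensitivity ODE
    z' = cos(x)*z + v, i.e. (26) without the noise and jump terms."""
    dt = T / n
    u = lambda t: 0.1
    v = lambda t: np.cos(3.0 * t)
    x, xe, z = x0, x0, 0.0
    for i in range(n):
        t = i * dt
        x_n  = x  + (np.sin(x)  + u(t)) * dt                # nominal state
        xe_n = xe + (np.sin(xe) + u(t) + eps * v(t)) * dt   # perturbed state
        z_n  = z  + (np.cos(x) * z + v(t)) * dt             # first variation
        x, xe, z = x_n, xe_n, z_n
    return x, xe, z

eps = 1e-4
x, xe, z = flow(eps)
err = abs((xe - x) / eps - z)   # should vanish as eps -> 0, mirroring Lemma 4
```

Halving `eps` roughly halves `err`, which is the deterministic shadow of the $L^2$ convergence (27).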
3.1.2. Variational Inequality. Let $\Phi$ be the solution of the linear matrix equation, for $0 \le s < t \le T$,
$$d\Phi_{s,t} = b_x(t,x^\star_t,u^\star_t)\Phi_{s,t}\,dt + \sum_{j=1}^d \sigma^j_x(t,x^\star_t,u^\star_t)\Phi_{s,t}\,dB^j_t + \int_E \gamma_x(t,x^\star_{t^-},u^\star_t,e)\Phi_{s,t^-}\,\tilde N(dt,de), \qquad \Phi_{s,s} = I_d, \tag{33}$$
where $I_d$ is the $n\times n$ identity matrix. This equation is linear, with bounded coefficients, so it admits a unique strong
solution. Moreover, the condition (H$_4$) ensures that the tangent process $\Phi$ is invertible, with an inverse $\Psi$ satisfying suitable integrability conditions.

From Itô's formula, we can easily check that $d(\Phi_{s,t}\Psi_{s,t}) = 0$ and $\Phi_{s,s}\Psi_{s,s} = I_d$, where $\Psi$ is the solution of the following equation:
$$\begin{aligned}
d\Psi_{s,t} ={}& -\Psi_{s,t}\Big\{b_x(t,x^\star_t,u^\star_t) - \sum_{j=1}^d \sigma^j_x(t,x^\star_t,u^\star_t)\sigma^j_x(t,x^\star_t,u^\star_t) - \int_E \gamma_x(t,x^\star_t,u^\star_t,e)\,\nu(de)\Big\}\,dt \\
&- \sum_{j=1}^d \Psi_{s,t}\sigma^j_x(t,x^\star_t,u^\star_t)\,dB^j_t - \Psi_{s,t^-}\int_E \big(\gamma_x(t,x^\star_{t^-},u^\star_t,e) + I_d\big)^{-1}\gamma_x(t,x^\star_{t^-},u^\star_t,e)\,N(dt,de), \qquad \Psi_{s,s} = I_d,
\end{aligned} \tag{34}$$
so $\Psi = \Phi^{-1}$. If $s = 0$, we simply write $\Phi_{0,t} = \Phi_t$ and $\Psi_{0,t} = \Psi_t$. By the integration by parts formula ([8, Lemma 3.6]), we can see that the solution of (26) is given by $z_t = \Phi_t\eta_t$, where $\eta_t$ is the solution of the stochastic differential equation
$$\begin{aligned}
d\eta_t ={}& \Psi_t\Big\{b_u(t,x^\star_t,u^\star_t)v_t - \sum_{j=1}^d \sigma^j_x(t,x^\star_t,u^\star_t)\sigma^j_u(t,x^\star_t,u^\star_t)v_t - \int_E \gamma_u(t,x^\star_t,u^\star_t,e)v_t\,\nu(de)\Big\}\,dt \\
&+ \sum_{j=1}^d \Psi_t\sigma^j_u(t,x^\star_t,u^\star_t)v_t\,dB^j_t + \Psi_{t^-}\int_E \big(\gamma_x(t,x^\star_{t^-},u^\star_t,e) + I_d\big)^{-1}\gamma_u(t,x^\star_{t^-},u^\star_t,e)v_t\,N(dt,de) \\
&+ \Psi_t G_t\,d\xi_t - \Psi_t\int_E \big(\gamma_x(t,x^\star_t,u^\star_t,e) + I_d\big)^{-1}\gamma_x(t,x^\star_t,u^\star_t,e)\,N(\{t\},de)\,G_t\,\Delta\xi_t, \qquad \eta_0 = 0.
\end{aligned} \tag{35}$$
Let us introduce the following convex perturbation of the optimal control $(u^\star,\xi^\star)$, defined by
$$(u^{\star,\varepsilon},\xi^{\star,\varepsilon}) = (u^\star + \varepsilon v,\ \xi^\star + \varepsilon\xi), \tag{36}$$
for some $(v,\xi) \in \mathcal{U}$ and $\varepsilon \in (0,1)$. Since $(u^\star,\xi^\star)$ is an optimal control, $\varepsilon^{-1}(J(u^\varepsilon,\xi^\varepsilon) - J(u^\star,\xi^\star)) \le 0$. Thus a necessary condition for optimality is that
$$\lim_{\varepsilon\to 0} \varepsilon^{-1}\big(J(u^\varepsilon,\xi^\varepsilon) - J(u^\star,\xi^\star)\big) \le 0. \tag{37}$$
The rest of this subsection is devoted to the computation of the above limit. We will see that the expression (37) leads to a precise description of the optimal control $(u^\star,\xi^\star)$ in terms of the adjoint process. First, it is easy to prove the following lemma.

Lemma 5. Under assumptions (H$_1$)–(H$_5$), one has
$$I = \lim_{\varepsilon\to 0} \varepsilon^{-1}\big(J(u^\varepsilon,\xi^\varepsilon) - J(u^\star,\xi^\star)\big) = \mathbb{E}\Big[\int_0^T \big\{f_x(s,x^\star_s,u^\star_s)z_s + f_u(s,x^\star_s,u^\star_s)v_s\big\}\,ds + g_x(x^\star_T)z_T + \int_0^T k_t\,d\xi_t\Big]. \tag{38}$$

Proof. We use the same notations as in the proof of Lemma 4. First, we have
$$\begin{aligned}
\varepsilon^{-1}\big(J(u^\varepsilon,\xi^\varepsilon) - J(u^\star,\xi^\star)\big) ={}& \mathbb{E}\Big[\int_0^T\int_0^1 \big\{f_x(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)z_s + f_u(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)v_s\big\}\,d\mu\,ds \\
&+ \int_0^1 g_x(x^{\mu,\varepsilon}_T)z_T\,d\mu + \int_0^T k_t\,d\xi_t\Big] + \beta^\varepsilon_t,
\end{aligned} \tag{39}$$
where
$$\beta^\varepsilon_t = \mathbb{E}\Big[\int_0^T\int_0^1 f_x(s,x^{\mu,\varepsilon}_s,u^{\mu,\varepsilon}_s)\Gamma^\varepsilon_s\,d\mu\,ds + \int_0^1 g_x(x^{\mu,\varepsilon}_T)\Gamma^\varepsilon_T\,d\mu\Big]. \tag{40}$$
By using Lemma 4, and since the derivatives $f_x$, $f_u$, and $g_x$ are bounded, we have $\lim_{\varepsilon\to 0}\beta^\varepsilon_t = 0$. Then the result follows by letting $\varepsilon$ go to 0 in the above equality.

Substituting $z_t = \Phi_t\eta_t$ in (38) leads to
$$I = \mathbb{E}\Big[\int_0^T \big\{f_x(s,x^\star_s,u^\star_s)\Phi_s\eta_s + f_u(s,x^\star_s,u^\star_s)v_s\big\}\,ds + g_x(x^\star_T)\Phi_T\eta_T + \int_0^T k_t\,d\xi_t\Big]. \tag{41}$$
Consider the right continuous version of the square integrable martingale
$$M_t := \mathbb{E}\Big[\int_0^T f_x(s,x^\star_s,u^\star_s)\Phi_s\,ds + g_x(x^\star_T)\Phi_T \,\Big|\, \mathcal{F}_t\Big]. \tag{42}$$
By the Itô representation theorem [30], there exist two processes $Q = (Q^1,\dots,Q^d)$, where $Q^j \in \mathbb{M}^2$ for $j = 1,\dots,d$, and $U(\cdot) \in \mathbb{L}^2_\nu$, satisfying
$$M_t = \mathbb{E}\Big[\int_0^T f_x(s,x^\star_s,u^\star_s)\Phi_s\,ds + g_x(x^\star_T)\Phi_T\Big] + \sum_{j=1}^d \int_0^t Q^j_s\,dB^j_s + \int_0^t\int_E U_s(e)\,\tilde N(ds,de). \tag{43}$$
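Dropping the noise and jump terms in (33) and (34) leaves two matrix ODEs, $d\Phi = b_x\Phi\,dt$ and $d\Psi = -\Psi b_x\,dt$, whose product stays equal to the identity; this mirrors the relation $d(\Phi_{s,t}\Psi_{s,t}) = 0$ above. The matrix $A(t)$ below is a hypothetical stand-in for $b_x(t,x^\star_t,u^\star_t)$.

```python
import numpy as np

def A(t):
    # illustrative bounded Jacobian playing the role of b_x(t, x*, u*)
    return np.array([[0.0, 1.0],
                     [-1.0, -0.3 * np.cos(t)]])

n, T = 20000, 2.0
dt = T / n
Phi = np.eye(2)   # tangent process, Phi_{0,0} = I
Psi = np.eye(2)   # its inverse,     Psi_{0,0} = I
for i in range(n):
    t = i * dt
    Phi = Phi + A(t) @ Phi * dt   # forward equation, (33) without noise
    Psi = Psi - Psi @ A(t) * dt   # inverse equation, (34) without noise

defect = float(np.max(np.abs(Psi @ Phi - np.eye(2))))
```

The residual `defect` shrinks linearly with the step size, consistent with the exact identity $\Psi_t\Phi_t = I$ for the continuous flows.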
Let us denote $y^\star_t = M_t - \int_0^t f_x(s,x^\star_s,u^\star_s)\Phi_s\,ds$. The adjoint variable is the process defined by
$$\begin{aligned}
p_t &= y^\star_t\Psi_t, \\
q^j_t &= Q^j_t\Psi_t - p_t\,\sigma^j_x(t,x^\star_t,u^\star_t), \quad \text{for } j = 1,\dots,d, \\
r_t(e) &= U_t(e)\Psi_t\big(\gamma_x(t,x^\star_t,u^\star_t,e) + I_d\big)^{-1} + p_t\Big(\big(\gamma_x(t,x^\star_t,u^\star_t,e) + I_d\big)^{-1} - I_d\Big).
\end{aligned} \tag{44}$$

Theorem 6. Under assumptions (H$_1$)–(H$_5$), one has
$$\begin{aligned}
I ={}& \mathbb{E}\Big[\int_0^T \Big\{f_u(s,x^\star_s,u^\star_s) + p_s b_u(s,x^\star_s,u^\star_s) + \sum_{j=1}^d q^j_s\sigma^j_u(s,x^\star_s,u^\star_s) + \int_E r_s(e)\gamma_u(s,x^\star_s,u^\star_s,e)\,\nu(de)\Big\}v_s\,ds \\
&+ \sum_{i=1}^m \int_0^T \{k^i_s + G^i_s p_s\}\,d\xi^{ci}_s + \sum_{i=1}^m \sum_{0<s\le T} \{k^i_s + G^i_s(p_{s^-} + \Delta_N p_s)\}\,\Delta\xi^i_s\Big].
\end{aligned} \tag{45}$$

Proof. From the integration by parts formula ([8, Lemma 3.5]), and by using the definition of $p_t$, $q^j_t$ for $j = 1,\dots,d$, and $r_t(\cdot)$, we can easily check that
$$\begin{aligned}
\mathbb{E}[y_T\eta_T] ={}& \mathbb{E}\Big[\int_0^T \Big\{p_t b_u(t,x^\star_t,u^\star_t) + \sum_{j=1}^d q^j_t\sigma^j_u(t,x^\star_t,u^\star_t) + \int_E r_t(e)\gamma_u(t,x^\star_t,u^\star_t,e)\,\nu(de)\Big\}v_t\,dt \\
&- \int_0^T f_x(t,x^\star_t,u^\star_t)\Phi_t\eta_t\,dt + \sum_{i=1}^m \Big(\int_0^T G^i_t p_t\,d\xi^{ci}_t + \sum_{0<t\le T} G^i_t(p_{t^-} + \Delta_N p_t)\,\Delta\xi^i_t\Big)\Big].
\end{aligned} \tag{46}$$
Also we have
$$I = \mathbb{E}\Big[y_T\eta_T + \int_0^T f_x(t,x^\star_t,u^\star_t)\Phi_t\eta_t\,dt + \int_0^T f_u(t,x^\star_t,u^\star_t)v_t\,dt + \int_0^T k_t\,d\xi_t\Big]; \tag{47}$$
substituting (46) in (47), the result follows.

3.1.3. Adjoint Equation and Maximum Principle. Since (37) is true for all $(v,\xi) \in \mathcal{U}$ and $I \le 0$, we can easily deduce the following result.

Theorem 7. Let $(u^\star,\xi^\star)$ be the optimal control of the problem (14) and denote by $x^\star$ the corresponding optimal trajectory; then the following inequality holds:
$$\begin{aligned}
\mathbb{E}\Big[&\int_0^T H_u(t,x^\star_t,u^\star_t,p_t,q_t,r_t(\cdot))(v_t - u^\star_t)\,dt + \int_0^T \{k_t + G_t p_t\}\,d(\xi - \xi^\star)^c_t \\
&+ \sum_{0<t\le T} \{k_t + G_t(p_{t^-} + \Delta_N p_t)\}\,\Delta(\xi - \xi^\star)_t\Big] \le 0,
\end{aligned} \tag{48}$$
where the Hamiltonian $H$ is defined by (17) and the adjoint variable $(p,q^j,r(\cdot))$, for $j = 1,\dots,d$, is given by (44).

Now, we are ready to give the proof of Theorem 2.

Proof of Theorem 2. (i) Let us assume that $(u^\star,\xi^\star)$ is an optimal control for the problem (14), so that inequality (48) is valid for every $(v,\xi)$. If we choose $\xi = \xi^\star$ in inequality (48), we see that for every measurable, $\mathcal{F}_t$-adapted process $v : [0,T]\times\Omega \to A_1$,
$$\mathbb{E}\Big[\int_0^T H_u(t,x^\star_t,u^\star_t,p_t,q_t,r_t(\cdot))(v_t - u^\star_t)\,dt\Big] \le 0. \tag{49}$$
For $v \in \mathcal{U}_1$ define
$$A^v = \Big\{(t,\omega) \in [0,T]\times\Omega \ \text{such that}\ H_u(t,x^\star_t,u^\star_t,p_t,q_t,r_t(\cdot))(v_t - u^\star_t) > 0\Big\}. \tag{50}$$
Obviously $A^v_t \in \mathcal{F}_t$, for each $t \in [0,T]$. Let us define $\tilde v \in \mathcal{U}_1$ by
$$\tilde v_t(\omega) = \begin{cases} v_t, & \text{if } (t,\omega) \in A^v, \\ u^\star_t, & \text{otherwise.} \end{cases} \tag{51}$$
If $\lambda\otimes P(A^v) > 0$, where $\lambda$ denotes the Lebesgue measure, then
$$\mathbb{E}\Big[\int_0^T H_u(t,x^\star_t,u^\star_t,p_t,q_t,r_t(\cdot))(\tilde v_t - u^\star_t)\,dt\Big] > 0, \tag{52}$$
which contradicts (49), unless $\lambda\otimes P(A^v) = 0$. Hence the conclusion follows.

(ii) If instead we choose $v = u^\star$ in inequality (48), we obtain that for every measurable, $\mathcal{F}_t$-adapted process $\xi : [0,T]\times\Omega \to A_2$, the following inequality holds:
$$\mathbb{E}\Big[\int_0^T \{k_t + G_t p_t\}\,d(\xi - \xi^\star)^c_t + \sum_{0<t\le T} \{k_t + G_t(p_{t^-} + \Delta_N p_t)\}\,\Delta(\xi - \xi^\star)_t\Big] \le 0. \tag{53}$$
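Conditions (20) and (21) state that $k_t + G_t p_t$ stays nonpositive and that the continuous part of the optimal singular control may only increase on the set where this quantity vanishes. The following discrete toy path (the sequence `p` below is an arbitrary illustration, not a solution of BSDE (18)) makes the support condition concrete.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
k, G = 0.5, 1.0
# an arbitrary adjoint-like path chosen so that k + G*p <= 0 everywhere
p = -0.5 - 0.3 * np.maximum(np.sin(6.0 * t), 0.0)

slack = k + G * p                        # condition (20): slack <= 0
# singular control increases only where the slack vanishes, as in (21)
dxi = np.where(np.isclose(slack, 0.0), 0.01, 0.0)

cond20 = bool(np.all(slack <= 1e-12))
mass_off_support = float(np.sum(dxi[slack < -1e-12]))  # should be zero
```

This is the reflection picture behind the theorem: away from the boundary set $\{k + Gp = 0\}$ the singular control is idle, and it acts only to keep the state on that set.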
In particular, for $i = 1,\dots,m$, we put $\xi^i_t = \xi^{\star i}_t + \mathbf{1}_{\{k^i_t + G^i_t p_t > 0\}}\lambda(t)$. Since the Lebesgue measure $\lambda$ is regular, the purely discontinuous part $(\xi^i_t - \xi^{\star i}_t)^d = 0$. Obviously, the relation (53) can be written as
$$\sum_{i=1}^m \mathbb{E}\Big[\int_0^T \{k^i_t + G^i_t p_t\}\,d(\xi^i - \xi^{\star i})^c_t + \int_0^T \{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t)\}\,d(\xi^i - \xi^{\star i})^d_t\Big] = \sum_{i=1}^m \mathbb{E}\Big[\int_0^T \{k^i_t + G^i_t p_t\}\,\mathbf{1}_{\{k^i_t + G^i_t p_t > 0\}}\,d\lambda(t)\Big] > 0. \tag{54}$$
This contradicts (53) unless, for every $i \in \{1,\dots,m\}$, $\lambda\otimes P\{k^i_t + G^i_t p_t > 0\} = 0$. This proves (20).

Let us prove (21). Define $d\xi^i_t = \mathbf{1}_{\{k^i_t + G^i_t p_{t^-} > 0\}}\,d\xi^{\star i}_t + \mathbf{1}_{\{k^i_t + G^i_t p_{t^-} \le 0\}}\,d\xi^{\star di}_t$, for $i = 1,\dots,m$; then we have $d(\xi^i - \xi^{\star i})^c_t = -\mathbf{1}_{\{k^i_t + G^i_t p_t \le 0\}}\,d\xi^{\star ci}_t$ and $d\xi^{di}_t = d\xi^{\star di}_t$. Hence, we can rewrite (53) as
$$-\sum_{i=1}^m \mathbb{E}\Big[\int_0^T \{k^i_t + G^i_t p_t\}\,\mathbf{1}_{\{k^i_t + G^i_t p_t \le 0\}}\,d\xi^{\star ci}_t\Big] \le 0. \tag{55}$$
On the other hand, the integrand in (55) is nonpositive, so the left-hand side is nonnegative. By comparing with (53) we get
$$\sum_{i=1}^m \mathbb{E}\Big[\int_0^T \{k^i_t + G^i_t p_t\}\,\mathbf{1}_{\{k^i_t + G^i_t p_t \le 0\}}\,d\xi^{\star ci}_t\Big] = 0; \tag{56}$$
then we conclude that
$$\sum_{i=1}^m \int_0^T \{k^i_t + G^i_t p_t\}\,\mathbf{1}_{\{k^i_t + G^i_t p_t \le 0\}}\,d\xi^{\star ci}_t = 0, \tag{57}$$
that is, $d\xi^{\star c}$ carries no mass on the set where $k^i_t + G^i_t p_t < 0$, which is (21).

Expressions (22) and (23) are proved by using the same techniques. First, for each $i \in \{1,\dots,m\}$ and $t \in [0,T]$ fixed, we define $\xi^i_s = \xi^{\star i}_s + \delta_t(s)\mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) > 0\}}$, where $\delta_t$ denotes the Dirac unit mass at $t$. Since $\delta_t$ is a discrete measure, $(\xi^i_s - \xi^{\star i}_s)^c = 0$ and $(\xi^i_s - \xi^{\star i}_s)^d = \delta_t(s)\mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) > 0\}}$. Hence
$$\mathbb{E}\Big[\sum_{i=1}^m \{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t)\}\,\mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) > 0\}}\Big] > 0, \tag{58}$$
which contradicts (53), unless for every $i \in \{1,\dots,m\}$ and $t \in [0,T]$ we have
$$P\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) > 0\} = 0. \tag{59}$$
This proves (22). Next, let $\xi$ be defined by
$$d\xi^i_t = \mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) \ge 0\}}\,d\xi^{\star i}_t + \mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) < 0\}}\,d\xi^{\star ci}_t. \tag{60}$$
Then the relation (53) can be written as
$$\sum_{i=1}^m \mathbb{E}\Big[\sum_{0<t\le T} -\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t)\}\,\mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) < 0\}}\,\Delta\xi^{\star i}_t\Big] \le 0, \tag{61}$$
while each summand is nonnegative; hence
$$\mathbb{E}\Big[\sum_{i=1}^m \sum_{0<t\le T} \{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t)\}\,\mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) < 0\}}\,\Delta\xi^{\star i}_t\Big] = 0. \tag{62}$$
By the fact that $k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) < 0$ on the indicated set and $\Delta\xi^{\star i}_t \ge 0$, we get
$$\sum_{i=1}^m \mathbf{1}_{\{k^i_t + G^i_t(p_{t^-} + \Delta_N p_t) < 0\}}\,\Delta\xi^{\star i}_t = 0. \tag{63}$$
Thus (23) holds. The proof is complete.

Now, by applying Itô's formula to $y^\star_t\Psi_t$, it is easy to check that the processes defined by relation (44) satisfy BSDE (18), called the adjoint equation.

3.2. Sufficient Conditions of Optimality. It is well known that in the classical case (without the singular part of the control), the sufficient condition of optimality is of significant importance in the stochastic maximum principle, in the sense that it allows one to compute optimal controls. This result states that, under some concavity conditions, maximizing the Hamiltonian leads to an optimal control.

In this section, we focus on proving the sufficient maximum principle for mixed classical-singular stochastic control problems, where the state of the system is governed by a stochastic differential equation with jumps, allowing both classical and singular controls.

Theorem 8 (sufficient condition of optimality in integral form). Let $(u^\star,\xi^\star)$ be an admissible control and denote by $x^\star$ the associated controlled state process. Let $(p,q,r(\cdot))$ be the unique solution of BSDE (18). Assume that $(x,u) \mapsto H(t,x,u,p_t,q_t,r_t(\cdot))$ and $x \mapsto g(x)$ are concave functions. Moreover, suppose that for all $t \in [0,T]$, $v \in A_1$, and $\xi \in \mathcal{U}_2$,
$$\mathbb{E}\Big[\int_0^T H_u(t,x^\star_t,u^\star_t,p_t,q_t,r_t(\cdot))(v_t - u^\star_t)\,dt + \int_0^T \{k_t + G_t p_t\}\,d(\xi - \xi^\star)^c_t + \sum_{0<t\le T} \{k_t + G_t(p_{t^-} + \Delta_N p_t)\}\,\Delta(\xi - \xi^\star)_t\Big] \le 0. \tag{64}$$
Then $(u^\star,\xi^\star)$ is an optimal control.

Proof. For convenience, we will use the following notations throughout the proof:
$$\Theta^\star(t) = \Theta(t,x^\star_t,u^\star_t,p_t,q_t,r_t(\cdot)), \qquad \Theta(t) = \Theta(t,x_t,u_t,p_t,q_t,r_t(\cdot)), \quad \text{for } \Theta = H, H_x, H_u,$$
π›Ώπœ™ (𝑑) = πœ™ (𝑑, π‘₯𝑑⋆ , 𝑒𝑑⋆ ) − πœ™ (𝑑, π‘₯𝑑 , 𝑒𝑑 ) ,
for πœ™ = 𝑏, 𝜎, πœŽπ‘— , 𝑗 = 1, . . . , 𝑛, 𝑓
𝛿𝛾 (𝑑, 𝑒) = 𝛾 (𝑑, π‘₯𝑑⋆ , 𝑒𝑑⋆ , 𝑒) − 𝛾 (𝑑, π‘₯𝑑 , 𝑒𝑑 , 𝑒) ,
𝑇
0
(65)
+ ∑ 𝐺𝑑 (𝑝𝑑− + Δ π‘π‘π‘‘ ) Δ(πœ‰ − πœ‰β‹† )𝑑 ] .
0<𝑑≤𝑇
⋆
, 𝑒𝑑⋆ , 𝑒) − 𝛾 (𝑑, π‘₯𝑑− , 𝑒𝑑 , 𝑒) .
𝛿𝛾− (𝑑, 𝑒) = 𝛾 (𝑑, π‘₯𝑑−
(68)
Let (𝑒, πœ‰) be an arbitrary admissible pair, and consider the
difference
𝐽 (𝑒⋆ , πœ‰β‹† ) − 𝐽 (𝑒, πœ‰)
𝑇
𝑇
0
0
= E [∫ 𝛿𝑓 (𝑑) 𝑑𝑑 + ∫ π‘˜π‘‘ 𝑑(πœ‰β‹† − πœ‰)𝑑 ]
(66)
+ E [𝑔 (π‘₯𝑇⋆ ) − 𝑔 (π‘₯𝑇 )] .
By the fact that (𝑝, π‘žπ‘— , π‘Ÿ(⋅)) ∈ S2 × M2 × L2] for 𝑗 =
1, . . . , 𝑛, we deduce that the stochastic integrals with respect
to the local martingales have zero expectation. Due to the
concavity of the Hamiltonian 𝐻, the following holds
E [𝑔 (π‘₯𝑇⋆ ) − 𝑔 (π‘₯𝑇 )]
𝑇
≥ E [∫ {− (𝐻⋆ (𝑑) − 𝐻 (𝑑)) + 𝐻𝑒⋆ (𝑑) (𝑒𝑑⋆ − 𝑒𝑑 )} 𝑑𝑑]
We first note that, by concavity of 𝑔, we conclude that
E [𝑔 (π‘₯𝑇⋆ ) − 𝑔 (π‘₯𝑇 )]
0
𝑛
𝑇{
𝑗
+ E [∫ {𝑝𝑑 (𝛿𝑏 (𝑑)) + ∑ (π›ΏπœŽπ‘— (𝑑)) π‘žπ‘‘
0
𝑗=1
[ {
}
+ ∫ (𝛿𝛾 (𝑑, 𝑒)) π‘Ÿπ‘‘ (𝑒) ] (𝑑𝑒) } 𝑑𝑑]
𝐸
} ]
≥ E [(π‘₯𝑇⋆ − π‘₯𝑇 ) 𝑔π‘₯ (π‘₯𝑇⋆ )] = E [(π‘₯𝑇⋆ − π‘₯𝑇 ) 𝑝𝑇 ]
𝑇
𝑇
0
0
⋆
− π‘₯𝑑− ) 𝑑𝑝𝑑 + ∫ 𝑝𝑑− 𝑑 (π‘₯𝑑⋆ − π‘₯𝑑 )]
= E [∫ (π‘₯𝑑−
𝑇 𝑛
𝑐
+ E [ ∫ 𝐺𝑑 𝑝𝑑 𝑑(πœ‰ − πœ‰β‹† )𝑑
𝑇
𝑐
+ E [ ∫ 𝐺𝑑𝑇 𝑝𝑑 𝑑(πœ‰ − πœ‰β‹† )𝑑
0
+ E [∫ ∑ (π›ΏπœŽ
0
[ 𝑗=1
𝑗
𝑗
(𝑑)) π‘žπ‘‘ 𝑑𝑑
(67)
+ ∑ 𝐺𝑑T (𝑝𝑑− + Δ π‘π‘π‘‘ ) Δ(πœ‰ − πœ‰β‹† )𝑑 ] .
0<𝑑≤𝑇
𝑇
+ ∫ ∫ (𝛿𝛾− (𝑑, 𝑒)) π‘Ÿπ‘‘ (𝑒) 𝑁 (𝑑𝑑, 𝑑𝑒) ]
0 𝐸
]
+ E [ ∑ 𝐺𝑑 (Δ π‘π‘π‘‘ ) Δ(πœ‰ − πœ‰β‹† )𝑑 ] ,
(69)
The definition of the Hamiltonian 𝐻 and (64) leads to
𝐽(𝑒⋆ , πœ‰β‹† )−𝐽(𝑒, πœ‰) ≥ 0, which means that (𝑒⋆ , πœ‰β‹† ) is an optimal
control for the problem (14).
0<𝑑≤𝑇
which implies that
𝐸 [𝑔 (π‘₯𝑇⋆ ) − 𝑔 (π‘₯𝑇 )]
𝑇
≥ E [∫ (π‘₯𝑑⋆ − π‘₯𝑑 ) (−𝐻π‘₯⋆ (𝑑)) 𝑑𝑑]
0
𝑛
𝑇{
𝑗}
+ E [∫ {𝑝𝑑 (𝛿𝑏 (𝑑)) + ∑ (π›ΏπœŽπ‘— (𝑑)) π‘žπ‘‘ } 𝑑𝑑]
0
𝑗=1
[ {
} ]
𝑇
+ E [∫ ∫ (𝛿𝛾− (𝑑, 𝑒)) π‘Ÿπ‘‘ (𝑒) 𝑁 (𝑑𝑑, 𝑑𝑒)]
0
𝐸
𝑇
+ E [∫ {(π‘₯𝑑⋆ − π‘₯𝑑 ) π‘žπ‘‘ + (π›ΏπœŽ (𝑑)) 𝑝𝑑 } 𝑑𝐡𝑑 ]
0
𝑇
+ E [∫ ∫
0
𝐸
⋆
{(π‘₯𝑑−
− π‘₯𝑑− ) π‘Ÿπ‘‘ (𝑒) + 𝑝𝑑− (𝛿𝛾− (𝑑, 𝑒))}
Μƒ (𝑑𝑑, 𝑑𝑒) ]
×𝑁
The expression (64) is a sufficient condition of optimality
in integral form. We want to rewrite this inequality in a
suitable form for applications. This is the objective of the
following theorem which could be seen as a natural extension
of [2, Theorem 2.2] to the jump setting and [3, Theorem 2.1]
to mixed regular-singular control problems.
Theorem 9 (sufficient conditions of optimality). Let (𝑒⋆ , πœ‰β‹† )
be an admissible control and π‘₯⋆ the associated controlled state
process. Let (𝑝, π‘ž, π‘Ÿ(⋅)) be the unique solution of 𝐡𝑆𝐷𝐸 (18). Let
one assume that (π‘₯, 𝑒) → 𝐻(𝑑, π‘₯, 𝑒, 𝑝𝑑 , π‘žπ‘‘ , π‘Ÿπ‘‘ (⋅)) and π‘₯ →
𝑔(π‘₯) are concave functions. If in addition one assumes that
(i) for all 𝑑 ∈ [0, 𝑇], V ∈ 𝐴 1
𝐻 (𝑑, π‘₯𝑑⋆ , 𝑒𝑑⋆ , 𝑝𝑑 , π‘žπ‘‘ , π‘Ÿπ‘‘ (⋅)) = sup 𝐻 (𝑑, π‘₯𝑑⋆ , V, 𝑝𝑑 , π‘žπ‘‘ , π‘Ÿπ‘‘ (⋅)) ,
V∈𝐴 1
𝑑𝑑—π‘Ž.𝑒., P—π‘Ž.𝑠;
(70)
(ii) for all $t \in [0,T]$, with probability 1,

\[
k_t^i + G_t^i p_t \le 0, \quad \text{for } i = 1, \dots, m, \tag{71}
\]
\[
\sum_{i=1}^m 1_{\{k_t^i + G_t^i p_t < 0\}}\, d\xi_t^{\star c i} = 0, \tag{72}
\]
\[
k_t^i + G_t^i (p_{t^-} + \Delta_N p_t) \le 0, \quad \text{for } i = 1, \dots, m, \tag{73}
\]
\[
\sum_{i=1}^m 1_{\{k_t^i + G_t^i (p_{t^-} + \Delta_N p_t) < 0\}}\, \Delta\xi_t^{\star i} = 0. \tag{74}
\]

Then $(u^\star, \xi^\star)$ is an optimal control.

Proof. Using (71) and (72) yields

\[
E\left[\int_0^T \{k_t + G_t^T p_t\}\, d\xi_t^{\star c}\right]
= E\left[\sum_{i=1}^m \int_0^T \{k_t^i + G_t^i p_t\}\, d\xi_t^{\star c i}\right] = 0. \tag{75}
\]

The same computations applied to (73) and (74) imply

\[
E\left[\sum_{0<t\le T} \{k_t + G_t^T (p_{t^-} + \Delta_N p_t)\}\, \Delta\xi_t^\star\right] = 0. \tag{76}
\]

Hence, from Definition 1, we have the following inequality:

\[
E\left[\int_0^T \{k_t + G_t^T p_t\}\, d(\xi - \xi^\star)_t^c
+ \sum_{0<t\le T} \{k_t + G_t^T (p_{t^-} + \Delta_N p_t)\}\, \Delta(\xi - \xi^\star)_t\right] \le 0. \tag{77}
\]

The desired result follows from Theorem 8.

4. Relation to Dynamic Programming

In this section, we come back to the control problem studied in the previous section. We recall a verification theorem, which is useful to compute optimal controls. Then we show that the adjoint process defined in Section 3, as the unique solution to the BSDE (18), can be expressed as the gradient of the value function, which solves the HJB variational inequality.

4.1. A Verification Theorem. Let $x_s^{t,x}$ be the solution of the controlled SDE (8), for $s \ge t$, with initial value $x_t = x$. To put the problem in a Markovian framework, so that dynamic programming applies, we define the performance criterion

\[
J^{(u,\xi)}(t,x) = E\left[\int_t^T f(s, x_s, u_s)\, ds + \int_t^T k_s\, d\xi_s + g(x_T) \mid x_t = x\right]. \tag{78}
\]

Since our objective is to maximize this functional, the value function of the singular control problem becomes

\[
V(t,x) = \sup_{(u,\xi) \in \mathcal{U}} J^{(u,\xi)}(t,x). \tag{79}
\]

If we do not apply any singular control, then the infinitesimal generator $\mathcal{A}^u$ associated with (8), acting on functions $\varphi$, coincides on $C_b^2(\mathbb{R}^n; \mathbb{R})$ with the parabolic integro-differential operator $\mathcal{A}^u$ given by

\[
\begin{aligned}
\mathcal{A}^u \varphi(t,x) &= \sum_{i=1}^n b^i(t,x,u)\, \frac{\partial \varphi}{\partial x^i}(t,x)
+ \frac{1}{2} \sum_{i,j=1}^n a^{ij}(t,x,u)\, \frac{\partial^2 \varphi}{\partial x^i\, \partial x^j}(t,x) \\
&\quad + \int_E \Big\{\varphi(t, x + \gamma(t,x,u,e)) - \varphi(t,x)
- \sum_{i=1}^n \gamma^i(t,x,u,e)\, \frac{\partial \varphi}{\partial x^i}(t,x)\Big\}\, \nu(de),
\end{aligned} \tag{80}
\]

where $a^{ij} = \sum_{h=1}^d \sigma^{ih} \sigma^{jh}$ denotes the generic term of the symmetric matrix $\sigma\sigma^T$. The variational inequality associated with the singular control problem is

\[
\max\Big\{\sup_u H_1(t, x, (W, \partial_t W, W_x, W_{xx})(t,x), u),\
H_2^l(t, x, W_x(t,x)),\ l = 1, \dots, m\Big\} = 0, \quad \text{for } (t,x) \in [0,T] \times O, \tag{81}
\]
\[
W(T,x) = g(x), \quad \forall x \in O. \tag{82}
\]

$H_1$ and $H_2^l$, for $l = 1, \dots, m$, are given by

\[
\begin{gathered}
H_1(t, x, (W, \partial_t W, W_x, W_{xx})(t,x), u) = \frac{\partial W}{\partial t}(t,x) + \mathcal{A}^u W(t,x) + f(t,x,u), \\
H_2^l(t, x, W_x(t,x)) = \sum_{i=1}^n \frac{\partial W}{\partial x^i}(t,x)\, G_t^{il} + k_t^l.
\end{gathered} \tag{83}
\]

We start with the definition of classical solutions of the variational inequality (81).

Definition 10. Consider a function $W \in C^{1,2}([0,T] \times O)$, and define the nonintervention region by

\[
C(W) = \Big\{(t,x) \in [0,T] \times O :
\max_{1 \le l \le m} \Big[\sum_{i=1}^n \frac{\partial W}{\partial x^i}(t,x)\, G_t^{il} + k_t^l\Big] < 0\Big\}. \tag{84}
\]
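As a numerical aside, the operator in (80) can be evaluated approximately for simple data; the sketch below does this in one dimension for a Lévy measure supported on finitely many atoms, using central finite differences for the derivatives. The coefficients, the atom, and the test point are illustrative assumptions, not taken from the paper.

```python
# Numerical sketch (illustrative, not from the paper): evaluate the
# integro-differential generator of (80) in one dimension, for a Levy
# measure nu supported on finitely many atoms [(e, weight), ...].

def generator(phi, t, x, u, b, sigma, gamma, nu_atoms, h=1e-5):
    """Approximate (A^u phi)(t, x): drift and diffusion terms by central
    finite differences, the jump term by a finite sum over the atoms of nu."""
    dphi = (phi(t, x + h) - phi(t, x - h)) / (2.0 * h)
    d2phi = (phi(t, x + h) - 2.0 * phi(t, x) + phi(t, x - h)) / (h * h)
    local = b(t, x, u) * dphi + 0.5 * sigma(t, x, u) ** 2 * d2phi
    jump = sum(w * (phi(t, x + gamma(t, x, u, e)) - phi(t, x)
                    - gamma(t, x, u, e) * dphi)
               for e, w in nu_atoms)
    return local + jump

# Sanity check with geometric-Levy-type coefficients (assumed values):
# for phi(t, x) = x the diffusion and compensated jump terms cancel,
# so A^u phi reduces to the drift mu * x.
mu, sig, theta = 0.05, 0.2, 0.1
nu_atoms = [(0.5, 1.0)]          # nu = 1.0 * delta_{0.5}, an assumption
val = generator(lambda t, x: x, 0.0, 2.0, None,
                lambda t, x, u: mu * x,
                lambda t, x, u: sig * x,
                lambda t, x, u, e: theta * x * e,
                nu_atoms)
print(val)
```

The same routine, applied to a candidate $W$, also lets one test pointwise whether $(t,x)$ satisfies the equation part of the variational inequality (81) inside the region (84).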
We say that $W$ is a classical solution of (81) if

\[
\frac{\partial W}{\partial t}(t,x) + \sup_u \{\mathcal{A}^u W(t,x) + f(t,x,u)\} = 0, \quad \forall (t,x) \in C(W), \tag{85}
\]
\[
\sum_{i=1}^n \frac{\partial W}{\partial x^i}(t,x)\, G_t^{il} + k_t^l \le 0, \quad \forall (t,x) \in [0,T] \times O,\ \text{for } l = 1, \dots, m, \tag{86}
\]
\[
\frac{\partial W}{\partial t}(t,x) + \mathcal{A}^u W(t,x) + f(t,x,u) \le 0, \quad \text{for every } (t,x,u) \in [0,T] \times O \times A_1. \tag{87}
\]

The following verification theorem is very useful to compute explicitly the value function and the optimal control, at least when the value function is sufficiently smooth.

Theorem 11. Let $W$ be a classical solution of (81) with the terminal condition (82), such that for some constants $c_1 \ge 1$, $c_2 \in (0,\infty)$, $|W(t,x)| \le c_2(1 + |x|^{c_1})$. Then, for all $(t,x) \in [0,T] \times O$ and $(u,\xi) \in \mathcal{U}$,

\[
W(t,x) \ge J^{(u,\xi)}(t,x). \tag{88}
\]

Furthermore, if there exists $(u^\star, \xi^\star) \in \mathcal{U}$ such that, with probability 1,

\[
(t, x_t^\star) \in C(W), \quad \text{Lebesgue-almost every } t \le T, \tag{89}
\]
\[
u_t^\star \in \arg\max_u \{\mathcal{A}^u W(t, x_t^\star) + f(t, x_t^\star, u)\}, \tag{90}
\]
\[
\sum_{l=1}^m \Big\{\sum_{i=1}^n \frac{\partial W}{\partial x^i}(t, x_t^\star)\, G_t^{il} + k_t^l\Big\}\, d\xi_t^{\star c l} = 0, \tag{91}
\]
\[
\Delta_\xi W(t, x_t^\star) + \sum_{l=1}^m k_t^l\, \Delta\xi_t^{\star l} = 0, \tag{92}
\]

for all jumping times $t$ of $\xi_t^\star$, then it follows that $W(t,x) = J^{(u^\star,\xi^\star)}(t,x)$.

Proof. See [8, Theorem 5.2].

In the following, we present an example of optimal harvesting from a geometric Brownian motion with jumps; see, for example, [5, 8].

Example 12. Consider a population of size $X = \{X_t : t \ge 0\}$ which evolves according to the geometric Lévy process

\[
dX_t = \mu X_t\, dt + \sigma X_t\, dB_t + \theta X_{t^-} \int_{\mathbb{R}_+} e\, \tilde N(dt,de) - d\xi_t, \quad \text{for } t \in [0,T], \qquad X_{0^-} = x > 0, \tag{93}
\]

where $T := \inf\{t \ge 0 : X_t = 0\}$ is the time of complete depletion, $\gamma \in (0,1)$, and $\mu, \sigma, \varkappa, \theta$ are positive constants with $\sigma^2/2 + \theta \int_{\mathbb{R}_+} e\, \nu(de) \le \mu < \varkappa$. An admissible harvesting strategy $\xi_t$ is assumed to be nonnegative, nondecreasing, and right-continuous, to satisfy $E|\xi_T|^2 < \infty$ with $\xi_{0^-} = 0$, and to be such that $X_t > 0$. We denote by $\Pi(x)$ the class of such strategies. Here $\xi_t$ is the total number of individuals harvested up to time $t$. If we define the price per unit harvested at time $t$ by $k(t) = e^{-\varkappa t}$ and the utility rate obtained when the size of the population at $t$ is $X_t$ by $e^{-\varkappa t} X_t^\gamma$, then the objective is to maximize the expected total time-discounted value of the harvested individuals starting with a population of size $x$; that is, for any $\xi$,

\[
J(\xi) = E\left[\int_0^T e^{-\varkappa t} X_t^\gamma\, dt + \int_{[0,T)} e^{-\varkappa t}\, d\xi_t\right]. \tag{94}
\]

The value function of the singular control problem becomes

\[
\phi(t,x) = \sup_{\xi \in \Pi(t,x)} J^\xi(t,x). \tag{95}
\]

Note that the definition of $\Pi(t,x)$ is similar to that of $\Pi(x)$, except that the starting time is $t$ and the state at $t$ is $x$.

If we guess that the nonintervention region $C$ has the form $C = \{(t,x) : 0 < x < b\}$ for some barrier point $b > 0$, then the equation (85) takes the form

\[
0 = \frac{\partial \Phi}{\partial t}(t,x) + \mu x\, \frac{\partial \Phi}{\partial x}(t,x) + \frac{1}{2}\sigma^2 x^2\, \frac{\partial^2 \Phi}{\partial x^2}(t,x)
+ \int_{\mathbb{R}_+} \Big\{\Phi(t, x(1+\theta e)) - \Phi(t,x) - \theta x e\, \frac{\partial \Phi}{\partial x}(t,x)\Big\}\, \nu(de)
+ x^\gamma \exp(-\varkappa t), \tag{96}
\]

for $0 < x < b$. We try a solution $\Phi$ of the form

\[
\Phi(t,x) = \Psi(x)\exp(-\varkappa t); \tag{97}
\]

hence

\[
\mathcal{A}\Phi(t,x) = \exp(-\varkappa t)\, \mathcal{A}^0 \Psi(x), \tag{98}
\]

where $\Psi$ is the fundamental solution of the ordinary integro-differential equation

\[
-\varkappa \Psi(x) + \mu x \Psi'(x) + \frac{1}{2}\sigma^2 x^2 \Psi''(x)
+ \int_{\mathbb{R}_+} \{\Psi(x(1+\theta e)) - \Psi(x) - \theta x e\, \Psi'(x)\}\, \nu(de) + x^\gamma = 0. \tag{99}
\]

Taking $\Psi(x) = A x^\rho + K x^\gamma$, for an arbitrary constant $A$, we get

\[
\mathcal{A}\Phi(t,x) = \big(A\, h_1(\rho)\, x^\rho + h_2(\gamma)\, x^\gamma\big)\exp(-\varkappa t), \tag{100}
\]
where

\[
\begin{aligned}
h_1(\rho) &= \frac{1}{2}\sigma^2 \rho^2 + \Big(\mu - \frac{1}{2}\sigma^2\Big)\rho
+ \int_{\mathbb{R}_+} \{(1+\theta e)^\rho - 1 - \theta e \rho\}\, \nu(de) - \varkappa, \\
h_2(\gamma) &= K\Big(\frac{1}{2}\sigma^2 \gamma^2 + \Big(\mu - \frac{1}{2}\sigma^2\Big)\gamma
+ \int_{\mathbb{R}_+} \{(1+\theta e)^\gamma - 1 - \theta e \gamma\}\, \nu(de) - \varkappa\Big) + 1.
\end{aligned} \tag{101}
\]

Note that $h_1(1) = \mu - \varkappa < 0$ and $\lim_{\rho\to\infty} h_1(\rho) = \infty$; then there exists $\rho > 1$ such that $h_1(\rho) = 0$. The constant $K$ is given by

\[
K = -\Big(\frac{1}{2}\sigma^2 \gamma^2 + \Big(\mu - \frac{1}{2}\sigma^2\Big)\gamma
+ \int_{\mathbb{R}_+} \{(1+\theta e)^\gamma - 1 - \theta e \gamma\}\, \nu(de) - \varkappa\Big)^{-1}. \tag{102}
\]

Outside $C$ we require that $\Psi(x) = x + B$, where $B$ is a constant to be determined. This suggests that the value function must be of the form

\[
\Phi(t,x) =
\begin{cases}
(Ax^\rho + Kx^\gamma)\exp(-\varkappa t) & \text{for } 0 < x < b, \\
(x+B)\exp(-\varkappa t) & \text{for } x \ge b.
\end{cases} \tag{103}
\]

Assuming the smooth fit principle at the point $b$, the reflection threshold is

\[
b = \Big(\frac{K\gamma(1-\gamma)}{A\rho(\rho-1)}\Big)^{1/(\rho-\gamma)}, \tag{104}
\]

where

\[
A = \frac{1 - K\gamma\, b^{\gamma-1}}{\rho\, b^{\rho-1}}. \tag{105}
\]

Since $\gamma < 1$ and $\rho > 1$, we deduce that $b > 0$.

To construct the optimal control $\xi^\star$, we consider the stochastic differential equation

\[
dX_t^\star = \mu X_t^\star\, dt + \sigma X_t^\star\, dB_t + \theta X_{t^-}^\star \int_{\mathbb{R}_+} e\, \tilde N(dt,de) - d\xi_t^\star, \quad t \ge 0, \tag{106}
\]
\[
X_t^\star \le b, \quad t \ge 0, \tag{107}
\]
\[
1_{\{X_t^\star < b\}}\, d\xi_t^{\star c} = 0, \tag{108}
\]
\[
1_{\{X_{t^-}^\star + \Delta_N X_t^\star \le b\}}\, \Delta\xi_t^\star = 0, \tag{109}
\]

and if this is the case, then

\[
\Delta\xi_t^\star = \min\{l > 0 : X_{t^-}^\star + \Delta_N X_t^\star - l = b\}. \tag{110}
\]

Continuity of $\Phi$ at $b$ then gives

\[
B = Ab^\rho + Kb^\gamma - b. \tag{111}
\]

Arguing as in [7], we can adapt Theorem 15 in [16] to obtain an identification of the optimal harvesting strategy as a local time of a reflected jump diffusion process. Then, the system (106)-(109) defines the so-called Skorokhod problem, whose solution is a pair $(X_t^\star, \xi_t^\star)$, where $X_t^\star$ is a jump diffusion process reflected at $b$.

The conditions (89)-(92) ensure the existence of an increasing process $\xi_t^\star$ such that $X_t^\star$ stays in $C$ for all times $t$. If the initial size $x \le b$, then $\xi_t^\star$ is nondecreasing and its continuous part $\xi_t^{\star c}$ increases only when $X_t^\star = b$, so as to ensure that $X_t^\star \le b$. On the other hand, we only have $\Delta\xi_t^\star > 0$ if either the initial size $x > b$, in which case $\Delta\xi_0^\star = x - b$, or $X_t^\star$ jumps out of the nonintervention region through the random measure $N$, that is, $X_{t^-}^\star + \Delta_N X_t^\star > b$. In these cases $\Delta\xi_t^\star > 0$ acts immediately to bring $X_t^\star$ back to $b$.

It is easy to verify that if $(X^\star, \xi^\star)$ is a solution of the Skorokhod problem (106)-(109), then $(X^\star, \xi^\star)$ is an optimal solution of the problem (93)-(94). By the construction of $\xi^\star$ and $\Phi$, all the conditions of the verification Theorem 11 are satisfied. More precisely, the value function along the optimal state reads

\[
\Phi(t, X_t^\star) = (A X_t^{\star\rho} + K X_t^{\star\gamma})\exp(-\varkappa t), \quad \text{for all } t \in [0,T].
\]

4.2. Link between the SMP and DPP. Compared with the stochastic maximum principle, one would expect the solution $(p, q, r(\cdot))$ of the BSDE (18) to correspond to the derivatives of the classical solution of the variational inequalities (81)-(82). This is confirmed by the following theorem, which extends [3, Theorem 3.1] to control problems with a singular component and [2, Theorem 3.3] to diffusions with jumps.

Theorem 13. Let $W$ be a classical solution of (81), with the terminal condition (82). Assume that $W \in C^{1,3}([0,T] \times O)$, with $O = \mathbb{R}^n$, and that there exists $(u^\star, \xi^\star) \in \mathcal{U}$ such that the conditions (89)-(92) are satisfied. Then the solution of the BSDE (18) is given by

\[
\begin{aligned}
p_t &= W_x(t, x_t^\star), \\
q_t &= W_{xx}(t, x_t^\star)\, \sigma(t, x_t^\star, u_t^\star), \\
r_t(\cdot) &= W_x(t, x_t^\star + \gamma(t, x_t^\star, u_t^\star, e)) - W_x(t, x_t^\star).
\end{aligned} \tag{112}
\]

Proof. Throughout the proof, we will use the following abbreviations: for $i,j = 1,\dots,n$ and $h = 1,\dots,d$,

\[
\begin{gathered}
\phi_1(t) = \phi_1(t, x_t^\star, u_t^\star), \quad \text{for } \phi_1 = b^i, \sigma^i, \sigma^{ih}, \sigma, a^{ij},
\frac{\partial b^i}{\partial x^k}, \frac{\partial \sigma}{\partial x^k}, \frac{\partial a^{ij}}{\partial x^k}, \frac{\partial \sigma^{ih}}{\partial x^k}, \frac{\partial f}{\partial x^k}, \\
\phi_2(t,e) = \phi_2(t, x_t^\star, u_t^\star, e), \quad \text{for } \phi_2 = \gamma, \gamma^i, \frac{\partial \gamma^i}{\partial x^k}, \frac{\partial \gamma}{\partial x^k}, \\
\gamma_-(t,e) = \gamma(t, x_{t^-}^\star, u_t^\star, e), \qquad \gamma_-^i(t,e) = \gamma^i(t, x_{t^-}^\star, u_t^\star, e).
\end{gathered} \tag{113}
\]
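Before following the computation, a brief numerical aside on Example 12: the constants $\rho$, $K$, $b$, and $A$ from (101)-(105) can be pinned down numerically once $\nu$ is specified. The sketch below is illustrative only: the parameter values and the single-atom Lévy measure $\nu = \lambda\,\delta_{e_0}$ are assumptions, and $b$, $A$ are obtained from the smooth-fit conditions $\Psi'(b) = 1$, $\Psi''(b) = 0$ in closed form.

```python
# Illustrative computation of the constants of the harvesting example:
# rho solves h1(rho) = 0 as in (101), K follows (102), and b, A solve the
# smooth-fit system behind (104)-(105) in closed form. The parameters and
# the single-atom Levy measure nu = lam * delta_{e0} are assumptions.
mu, sig, kap, theta, gam = 0.05, 0.2, 0.10, 0.02, 0.5
lam, e0 = 1.0, 0.5
# standing assumption of the example: sigma^2/2 + theta*int e nu(de) <= mu < kappa
assert sig ** 2 / 2 + theta * lam * e0 <= mu < kap

def h1(r):
    """Left-hand side of (101) evaluated at r, for the atomic measure."""
    jump = lam * ((1.0 + theta * e0) ** r - 1.0 - theta * e0 * r)
    return 0.5 * sig ** 2 * r ** 2 + (mu - 0.5 * sig ** 2) * r + jump - kap

K = -1.0 / h1(gam)            # (102) is the same expression at r = gamma

# h1(1) = mu - kappa < 0 and h1 -> infinity, so bisection finds rho > 1
lo, hi = 1.0, 2.0
while h1(hi) < 0.0:
    hi *= 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h1(mid) < 0.0 else (lo, mid)
rho = 0.5 * (lo + hi)

# Psi'(b) = 1 and Psi''(b) = 0 (smooth fit) give b, then A as in (105)
b = (K * gam * (rho - gam) / (rho - 1.0)) ** (1.0 / (1.0 - gam))
A = (1.0 - K * gam * b ** (gam - 1.0)) / (rho * b ** (rho - 1.0))

def psi_p(x):                  # Psi'(x)
    return A * rho * x ** (rho - 1.0) + K * gam * x ** (gam - 1.0)

def psi_pp(x):                 # Psi''(x)
    return (A * rho * (rho - 1.0) * x ** (rho - 2.0)
            + K * gam * (gam - 1.0) * x ** (gam - 2.0))

print(rho, K, b, A)
```

The closed form for $b$ used here is equivalent to (104) combined with (105), obtained by eliminating $A$ between the two smooth-fit equations.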
πœπ‘…β‹† 𝑛
πœ•2 π‘Š
(𝑠, π‘₯𝑠⋆ ) πœŽπ‘– (𝑠) 𝑑𝐡𝑠
π‘˜ πœ•π‘₯𝑖
πœ•π‘₯
𝑖=1
From ItoΜ‚’s rule applied to the semimartingale (πœ•π‘Š/
πœ•π‘₯π‘˜ )(𝑑, π‘₯𝑑⋆ ), one has
+∫ ∑
πœ•π‘Š ⋆ ⋆
(𝜏 , π‘₯ ⋆ )
πœ•π‘₯π‘˜ 𝑅 πœπ‘…
+∫ ∫ {
𝑑
πœπ‘…β‹†
𝑑
πœ•π‘Š
⋆
(𝑠, π‘₯𝑠−
+ 𝛾− (𝑠, 𝑒))
πœ•π‘₯π‘˜
𝐸
⋆
=
πœπ‘…
πœ•2 π‘Š
πœ•π‘Š
(𝑑, π‘₯𝑑⋆ ) + ∫
(𝑠, π‘₯𝑠⋆ ) 𝑑𝑠
π‘˜
π‘˜
πœ•π‘₯
𝑑 πœ•π‘ πœ•π‘₯
−
πœπ‘…β‹† 𝑛
πœπ‘…β‹† 𝑛
πœ•2 π‘Š
⋆
) 𝑑π‘₯𝑠⋆𝑖
+ ∫ ∑ π‘˜ 𝑖 (𝑠, π‘₯𝑠−
πœ•π‘₯
πœ•π‘₯
𝑑 𝑖=1
πœπ‘…β‹†
1
∫
2 𝑑
+
𝑛
∑ π‘Žπ‘–π‘— (𝑠)
𝑖,𝑗=1
πœπ‘…β‹†
+∫ ∫ {
𝑑
𝐸
𝑑
πœ•3 π‘Š
(𝑠, π‘₯𝑠⋆ ) 𝑑𝑠
πœ•π‘₯π‘˜ πœ•π‘₯𝑖 πœ•π‘₯𝑗
+ ∑ Δπœ‰
𝑑<𝑠≤πœπ‘…β‹†
πœ•π‘Š
(𝑠, π‘₯𝑠⋆ ) .
πœ•π‘₯π‘˜
(116)
𝑛
πœ•2 π‘Š
⋆
(𝑠, π‘₯𝑠−
) 𝛾−𝑖 (𝑠, 𝑒)} 𝑁 (𝑑𝑠, 𝑑𝑒)
π‘˜
𝑖
𝑖=1 πœ•π‘₯ πœ•π‘₯
−∑
𝑑<𝑠≤πœπ‘…β‹†
π‘š
πœ•2 π‘Š
⋆
(𝑠,
π‘₯
)
𝐺𝑠𝑖𝑙 π‘‘πœ‰π‘ β‹†π‘π‘™
∑
𝑠
π‘˜ πœ•π‘₯𝑖
πœ•π‘₯
𝑖=1
𝑙=1
+∫ ∑
πœ•π‘Š
πœ•π‘Š
⋆
⋆
(𝑠, π‘₯𝑠−
+ 𝛾− (𝑠, 𝑒)) − π‘˜ (𝑑, π‘₯𝑠−
)
πœ•π‘₯π‘˜
πœ•π‘₯
+ ∑ {Δ πœ‰
πœ•π‘Š
⋆
Μƒ (𝑑𝑠, 𝑑𝑒)
(𝑠, π‘₯𝑠−
)} 𝑁
πœ•π‘₯π‘˜
Let πœ‰π‘ β‹†π‘ denotes the continuous part of πœ‰π‘ β‹† ; that is, πœ‰π‘ β‹†π‘ = πœ‰π‘ β‹† −
∑𝑑<𝑠≤πœπ‘…β‹† Δπœ‰π‘ β‹†π‘™ . Then, we can easily show that
πœπ‘…β‹† 𝑛
πœ•2 π‘Š
(𝑠, π‘₯𝑠⋆ ) 𝐺𝑠𝑖𝑙 π‘‘πœ‰π‘ β‹†π‘π‘™
π‘˜ πœ•π‘₯𝑖
πœ•π‘₯
𝑖=1
∫ ∑
𝑑
πœ•π‘Š
(𝑠, π‘₯𝑠⋆ )
πœ•π‘₯π‘˜
πœπ‘…β‹† 𝑛
πœ•2 π‘Š
(𝑠, π‘₯𝑠⋆ ) 𝐺𝑠𝑖𝑙 1{(𝑠,π‘₯𝑠⋆ )∈𝐷𝑙 } π‘‘πœ‰π‘ β‹†π‘π‘™
π‘˜ πœ•π‘₯𝑖
πœ•π‘₯
𝑖=1
=∫ ∑
𝑑
𝑛
πœ•2 π‘Š
⋆
−∑ π‘˜ 𝑖 (𝑠, π‘₯𝑠−
) Δ πœ‰ π‘₯𝑠⋆𝑖 } ,
πœ•π‘₯
πœ•π‘₯
𝑖=1
πœπ‘…β‹† 𝑛
where πœβ‹† is defined as in Theorem 11, and the sum is taken
over all jumping times 𝑠 of πœ‰β‹† . Note that
𝑑
For every (𝑑, π‘₯) ∈ 𝐷𝑙 , using (88) we have
𝑛
𝑛
πœ•2 π‘Š
πœ•π‘Š
πœ•
𝑖𝑙
=
{
π‘₯)
𝐺
∑
(𝑑,
(𝑑, π‘₯) 𝐺𝑑𝑖𝑙 + π‘˜π‘ π‘™ } = 0,
𝑑
𝑖
π‘˜ πœ•π‘₯𝑖
π‘˜
πœ•π‘₯
πœ•π‘₯
πœ•π‘₯
𝑖=1
𝑖=1
∑
π‘š
𝑙=1
πœ•2 π‘Š
(𝑠, π‘₯𝑠⋆ ) 𝐺𝑠𝑖𝑙 1{(𝑠,π‘₯𝑠⋆ )∈𝐢𝑙 } π‘‘πœ‰π‘ β‹†π‘π‘™ .
π‘˜ πœ•π‘₯𝑖
πœ•π‘₯
𝑖=1
+∫ ∑
(114)
⋆𝑖
+ Δ π‘π‘₯𝑠⋆𝑖 ) = ∑𝐺𝑠𝑖𝑙 Δπœ‰π‘ β‹†π‘™ ,
Δ πœ‰ π‘₯𝑠⋆𝑖 = π‘₯𝑠⋆𝑖 − (π‘₯𝑠−
(115)
for 𝑙 = 1, . . . , π‘š.
(118)
for 𝑖 = 1, . . . , 𝑛,
⋆𝑙
where Δπœ‰π‘ β‹†π‘™ = πœ‰π‘ β‹†π‘™ − πœ‰π‘ −
is a pure jump process. Then, we can
rewrite (114) as follows:
+∫ {
𝑑
𝑛
πœ•2 π‘Š
πœ•2 π‘Š
⋆
𝑖
(𝑠,
π‘₯
)
+
𝑏
(𝑠, π‘₯𝑠⋆ )
∑
(𝑠)
𝑠
π‘˜ πœ•π‘₯𝑖
πœ•π‘ πœ•π‘₯π‘˜
πœ•π‘₯
𝑖=1
+
πœπ‘…β‹† 𝑛 π‘š
𝑑
πœ•π‘Š
(𝑑, π‘₯𝑑⋆ )
πœ•π‘₯π‘˜
πœπ‘…β‹†
This proves
πœ•2 π‘Š
(𝑠, π‘₯𝑠⋆ ) 𝐺𝑠𝑖𝑙 1{(𝑠,π‘₯𝑠⋆ )∈𝐷𝑙 } π‘‘πœ‰π‘ β‹†π‘π‘™ = 0.
π‘˜ πœ•π‘₯𝑖
πœ•π‘₯
𝑖=1 𝑙=1
∫ ∑∑
πœ•π‘Š ⋆ ⋆
(𝜏 , π‘₯ ⋆ )
πœ•π‘₯π‘˜ 𝑅 πœπ‘…
=
πœ•3 π‘Š
1 𝑛 𝑖𝑗
∑ π‘Ž (𝑠) π‘˜ 𝑖 𝑗 (𝑠, π‘₯𝑠⋆ )
2 𝑖,𝑗=1
πœ•π‘₯ πœ•π‘₯ πœ•π‘₯
πœ•π‘Š
πœ•π‘Š
⋆
+ ∫ ( π‘˜ (𝑠, π‘₯𝑠⋆ + 𝛾 (𝑠, 𝑒)) − π‘˜ (𝑠, π‘₯𝑠−
)
πœ•π‘₯
πœ•π‘₯
𝐸
𝑛
2
πœ•π‘Š
(𝑠, π‘₯𝑠⋆ ) 𝛾𝑖 (𝑠, 𝑒)) ] (𝑑𝑒)} 𝑑𝑠
π‘˜
𝑖
𝑖=1 πœ•π‘₯ πœ•π‘₯
−∑
(117)
(119)
Furthermore, for every (𝑑, π‘₯) ∈ 𝐢𝑙 and 𝑙 = 1, . . . , π‘š, we have
∑𝑛𝑖=1 (πœ•π‘Š/πœ•π‘₯π‘˜ πœ•π‘₯𝑖 )(𝑑, π‘₯)𝐺𝑑𝑖𝑙 < 0.
⋆𝑐𝑙
But (91) implies that ∑π‘š
𝑙=1 1{(𝑠,π‘₯𝑠⋆ )∈𝐢𝑙 } π‘‘πœ‰π‘  = 0; thus
πœπ‘…β‹† 𝑛 π‘š
πœ•2 π‘Š
(𝑠, π‘₯𝑠⋆ ) 𝐺𝑠𝑖𝑙 1{(𝑠,π‘₯𝑠⋆ )∈𝐢𝑙 } π‘‘πœ‰π‘ β‹†π‘π‘™ = 0.
π‘˜ πœ•π‘₯𝑖
πœ•π‘₯
𝑖=1 𝑙=1
∫ ∑∑
𝑑
(120)
The mean value theorem yields
Δπœ‰
πœ•π‘Š
πœ•π‘Š
(𝑠, π‘₯𝑠⋆ ) = ( π‘˜ ) (𝑠, 𝑦 (𝑠)) Δ πœ‰ π‘₯𝑠⋆ ,
π‘˜
πœ•π‘₯
πœ•π‘₯ π‘₯
(121)
⋆
where 𝑦(𝑠) is some point on the straight line between π‘₯𝑠−
+
⋆
⋆
π‘˜
Δ π‘π‘₯𝑠 and π‘₯𝑠 , and (πœ•π‘Š/πœ•π‘₯ )π‘₯ represents the gradient matrix
of πœ•π‘Š/πœ•π‘₯π‘˜ . To prove that the right-hand side of the above
equality vanishes, it is enough to check that if $\Delta\xi_s^{\star l} > 0$ then $\sum_{i=1}^n (\partial^2 W/\partial x^k\, \partial x^i)(s, y(s))\, G_s^{il} = 0$, for $l = 1, \dots, m$. It is clear by (92) that

\[
0 = \Delta_\xi W(s, x_s^\star) + \sum_{l=1}^m k_s^l\, \Delta\xi_s^{\star l}
= \sum_{l=1}^m \Big\{\sum_{i=1}^n \frac{\partial W}{\partial x^i}(s, y(s))\, G_s^{il} + k_s^l\Big\}\, \Delta\xi_s^{\star l}. \tag{122}
\]

Since $\Delta\xi_s^{\star l} > 0$, then $(s, y(s)) \in D_l$, for $l = 1, \dots, m$. According to (88), we obtain

\[
\sum_{i=1}^n \frac{\partial^2 W}{\partial x^k\, \partial x^i}(s, y(s))\, G_s^{il}
= \frac{\partial}{\partial x^k}\Big\{\sum_{i=1}^n \frac{\partial W}{\partial x^i}(s, y(s))\, G_s^{il} + k_s^l\Big\} = 0. \tag{123}
\]

This shows that

\[
\sum_{t<s\le \tau_R^\star} \Delta_\xi \frac{\partial W}{\partial x^k}(s, x_s^\star) = 0. \tag{124}
\]

On the other hand, define

\[
\begin{aligned}
A(t,x,u) &= \frac{\partial W}{\partial t}(t,x) + \sum_{i=1}^n b^i(t,x,u)\, \frac{\partial W}{\partial x^i}(t,x)
+ \frac{1}{2} \sum_{i,j=1}^n a^{ij}(t,x,u)\, \frac{\partial^2 W}{\partial x^i\, \partial x^j}(t,x) + f(t,x,u) \\
&\quad + \int_E \Big\{W(t, x + \gamma(t,x,u,e)) - W(t,x)
- \sum_{i=1}^n \gamma^i(t,x,u,e)\, \frac{\partial W}{\partial x^i}(t,x)\Big\}\, \nu(de). 
\end{aligned} \tag{125}
\]

If we differentiate $A(t,x,u)$ with respect to $x^k$ and evaluate the result at $(x,u) = (x_t^\star, u_t^\star)$, we deduce easily from (84), (89), and (90) that

\[
\begin{aligned}
&\frac{\partial^2 W}{\partial t\, \partial x^k}(t, x_t^\star)
+ \sum_{i=1}^n b^i(t)\, \frac{\partial^2 W}{\partial x^k\, \partial x^i}(t, x_t^\star)
+ \frac{1}{2} \sum_{i,j=1}^n a^{ij}(t)\, \frac{\partial^3 W}{\partial x^k\, \partial x^i\, \partial x^j}(t, x_t^\star) \\
&\qquad + \int_E \Big\{\frac{\partial W}{\partial x^k}(t, x_t^\star + \gamma(t,e)) - \frac{\partial W}{\partial x^k}(t, x_t^\star)
- \sum_{i=1}^n \gamma^i(t,e)\, \frac{\partial^2 W}{\partial x^k\, \partial x^i}(t, x_t^\star)\Big\}\, \nu(de) \\
&= -\sum_{i=1}^n \frac{\partial b^i}{\partial x^k}(t)\, \frac{\partial W}{\partial x^i}(t, x_t^\star)
- \frac{1}{2} \sum_{i,j=1}^n \frac{\partial a^{ij}}{\partial x^k}(t)\, \frac{\partial^2 W}{\partial x^i\, \partial x^j}(t, x_t^\star)
- \frac{\partial f}{\partial x^k}(t) \\
&\qquad - \int_E \sum_{i=1}^n \frac{\partial \gamma^i}{\partial x^k}(t,e)\, \Big(\frac{\partial W}{\partial x^i}(t, x_t^\star + \gamma(t,e)) - \frac{\partial W}{\partial x^i}(t, x_t^\star)\Big)\, \nu(de). 
\end{aligned} \tag{126}
\]

Finally, substituting (119), (120), (124), and (126) into (116) yields

\[
\begin{aligned}
d\Big(\frac{\partial W}{\partial x^k}(t, x_t^\star)\Big)
&= -\Big\{\sum_{i=1}^n \frac{\partial b^i}{\partial x^k}(t)\, \frac{\partial W}{\partial x^i}(t, x_t^\star)
+ \frac{1}{2} \sum_{i,j=1}^n \frac{\partial a^{ij}}{\partial x^k}(t)\, \frac{\partial^2 W}{\partial x^i\, \partial x^j}(t, x_t^\star)
+ \frac{\partial f}{\partial x^k}(t) \\
&\qquad + \int_E \sum_{i=1}^n \frac{\partial \gamma^i}{\partial x^k}(t,e)\, \Big(\frac{\partial W}{\partial x^i}(t, x_t^\star + \gamma(t,e)) - \frac{\partial W}{\partial x^i}(t, x_t^\star)\Big)\, \nu(de)\Big\}\, dt \\
&\quad + \sum_{i=1}^n \frac{\partial^2 W}{\partial x^k\, \partial x^i}(t, x_t^\star)\, \sigma^i(t)\, dB_t
+ \int_E \Big\{\frac{\partial W}{\partial x^k}(t, x_{t^-}^\star + \gamma_-(t,e)) - \frac{\partial W}{\partial x^k}(t, x_{t^-}^\star)\Big\}\, \tilde N(dt,de). 
\end{aligned} \tag{127}
\]

The continuity of $\partial W/\partial x^k$ leads to

\[
\lim_{R\to\infty} \frac{\partial W}{\partial x^k}(\tau_R^\star, x_{\tau_R^\star}^\star)
= \frac{\partial W}{\partial x^k}(T, x_T^\star) = \frac{\partial g}{\partial x^k}(x_T^\star), \quad \text{for each } k = 1, \dots, n. \tag{128}
\]

Clearly,

\[
\frac{1}{2} \sum_{i,j=1}^n \frac{\partial a^{ij}}{\partial x^k}(t)\, \frac{\partial^2 W}{\partial x^i\, \partial x^j}(t, x_t^\star)
= \frac{1}{2} \sum_{i,j=1}^n \frac{\partial}{\partial x^k}\Big(\sum_{h=1}^d \sigma^{ih}(t)\, \sigma^{jh}(t)\Big)\, \frac{\partial^2 W}{\partial x^i\, \partial x^j}(t, x_t^\star)
= \sum_{j=1}^n \sum_{h=1}^d \Big(\sum_{i=1}^n \sigma^{ih}(t)\, \frac{\partial^2 W}{\partial x^i\, \partial x^j}(t, x_t^\star)\Big)\, \frac{\partial \sigma^{jh}}{\partial x^k}(t). \tag{129}
\]
(𝑝⋆ , π‘žβ‹† , π‘Ÿβ‹† (⋅)) of the following adjoint equation, for all 𝑑 ∈
[0, 𝑇)
Now, from (17) we have
πœ•π»
(𝑑, π‘₯, 𝑒, 𝑝, π‘ž, π‘Ÿ (⋅))
πœ•π‘₯π‘˜
𝑑𝑝𝑑⋆ = − (πœ‡π‘π‘‘β‹† + πœŽπ‘žπ‘‘β‹† + πœƒ ∫ π‘’π‘Ÿπ‘‘β‹† (𝑒) ] (𝑑𝑒)
R+
𝑛
πœ•π‘π‘–
= ∑ π‘˜ (𝑑, π‘₯, 𝑒) 𝑝𝑖
𝑖=1 πœ•π‘₯
⋆𝛾−1
+𝛾𝑋𝑑
𝑑 𝑛
πœ•π‘“
πœ•πœŽπ‘–β„Ž
+ ∑ ∑ π‘˜ (𝑑, π‘₯, 𝑒) π‘žπ‘–β„Ž + π‘˜ (𝑑, π‘₯, 𝑒)
πœ•π‘₯
β„Ž=1 𝑖=1 πœ•π‘₯
(130)
R+
𝑛
πœ•π›Ύπ‘–
(𝑑, π‘₯, 𝑒, 𝑒) π‘Ÿπ‘– (𝑒) ] (𝑑𝑒) .
π‘˜
𝐸 𝑖=1 πœ•π‘₯
The π‘˜th coordinate π‘π‘‘π‘˜ of the adjoint process 𝑝𝑑 satisfies
πœ•π»
(𝑑, π‘₯𝑑⋆ , 𝑒𝑑⋆ , 𝑝𝑑 , π‘žπ‘‘ , π‘Ÿπ‘‘ (⋅)) 𝑑𝑑
πœ•π‘₯π‘˜
π‘˜
Μƒ (𝑑𝑑, 𝑑𝑒) ,
+ π‘žπ‘‘π‘˜ 𝑑𝐡𝑑 + ∫ π‘Ÿπ‘‘−
(𝑒) 𝑁
𝐸
π‘π‘‡π‘˜
+ exp (−πœ˜π‘‘) ≤ 0,
∀𝑑,
(135)
1{−𝑝𝑑⋆ +exp(−πœ˜π‘‘)<0} π‘‘πœ‰π‘‘β‹†π‘ = 0,
(136)
⋆
+ Δ π‘π‘π‘‘β‹† ) + exp (−πœ˜π‘‘) ≤ 0,
− (𝑝𝑑−
(137)
1{−(𝑝𝑑−⋆ +Δ π‘ 𝑝𝑑⋆ )+exp(−πœ˜π‘‘)<0} Δπœ‰π‘‘β‹† = 0.
(138)
Since 𝑔 = 0, we assume the transversality condition
for 𝑑 ∈ [0, 𝑇] ,
E [𝑝𝑇⋆ (𝑋𝑇⋆ − 𝑋𝑇 )] ≤ 0.
(139)
⋆
+ Δ π‘π‘π‘‘β‹† = 𝑝𝑑⋆ , and
We remark that Δ πœ‰ 𝑝𝑑⋆ = 0; then 𝑝𝑑−
the condition (138) reduces to
πœ•π‘”
= π‘˜ (π‘₯𝑇⋆ ) ,
πœ•π‘₯
(131)
with π‘žπ‘‘π‘˜ 𝑑𝐡𝑑 = ∑π‘‘β„Ž=1 π‘žπ‘‘π‘˜β„Ž π‘‘π΅π‘‘β„Ž . Hence, the uniqueness of the
solution of (131) and relation (128) allows us to get
π‘π‘‘π‘˜ =
(134)
⋆
Μƒ (𝑑𝑑, 𝑑𝑒) ,
+ π‘žπ‘‘β‹† 𝑑𝐡𝑑 + ∫ π‘Ÿπ‘‘−
(𝑒) 𝑁
−𝑝𝑑⋆
+∫ ∑
π‘‘π‘π‘‘π‘˜ = −
exp (−πœ˜π‘ ) ) 𝑑𝑑
πœ•π‘Š
(𝑑, π‘₯𝑑⋆ ) ,
πœ•π‘₯π‘˜
1{−𝑝𝑑⋆ +exp(−πœ˜π‘‘)<0} Δπœ‰π‘‘β‹† = 0.
We use the relation between the value function and the
solution (𝑝⋆ , π‘žβ‹† , π‘Ÿβ‹† (𝑒)) of the adjoint equation along the
optimal state. We prove that the solution of the adjoint
equation is represented as
β‹†πœŒ−1
𝑝𝑑⋆ = (π΄πœŒπ‘‹π‘‘
𝑛
πœ•2 π‘Š
(𝑑, π‘₯𝑑⋆ ) πœŽπ‘–β„Ž (𝑑) ,
π‘˜ πœ•π‘₯𝑖
πœ•π‘₯
𝑖=1
π‘žπ‘‘π‘˜β„Ž = ∑
(132)
(140)
⋆𝛾−1
+ 𝐾𝛾𝑋𝑑
β‹†πœŒ−1
π‘žπ‘‘β‹† = 𝜎 (𝐴𝜌 (𝜌 − 1) 𝑋𝑑
) exp (−πœ˜π‘‘) ,
⋆𝛾−1
+ 𝐾𝛾 (𝛾 − 1) 𝑋𝑑
) exp (−πœ˜π‘‘) ,
β‹†πœŒ−1
π‘Ÿπ‘‘β‹† (𝑒) = (𝐴𝜌 ((1 + πœƒπ‘’)𝜌−1 − 1) 𝑋𝑑
πœ•π‘Š
πœ•π‘Š
π‘˜
⋆
⋆
π‘Ÿπ‘‘−
+ 𝛾 (𝑑, 𝑒)) − π‘˜ (𝑑, π‘₯𝑑−
),
(⋅) = π‘˜ (𝑑, π‘₯𝑑−
πœ•π‘₯
πœ•π‘₯
⋆𝛾−1
+𝐾𝛾 ((1 + πœƒπ‘’)𝛾−1 − 1) 𝑋𝑑
) exp (−πœ˜π‘‘)
(141)
where π‘žπ‘‘π‘˜β„Ž
is the generic element of the matrix π‘žπ‘‘ and π‘₯𝑑⋆
is the
optimal solution of the controlled SDE (8).
Example 14. We return to the same example in the previous
section.
Now, we illustrate a verification result for the maximum
principle. We suppose that 𝑇 is a fixed time. In this case the
Hamiltonian gets the form
𝛾
𝐻 (𝑑, 𝑋𝑑 , 𝑝𝑑 , π‘žπ‘‘ , π‘Ÿπ‘‘ (⋅)) = πœ‡π‘‹π‘‘ 𝑝𝑑 + πœŽπ‘‹π‘‘ π‘žπ‘‘ + 𝑋𝑑 (−πœ˜π‘‘)
+ πœƒπ‘‹π‘‘− ∫ π‘’π‘Ÿπ‘‘ (𝑒) ] (𝑑𝑒) .
for all 𝑑 ∈ [0, 𝑇).
β‹†πœŒ−1
+
To see this, we differentiate the process (π΄πœŒπ‘‹π‘‘
⋆𝛾−1
𝐾𝛾𝑋𝑑 ) exp(−πœ˜π‘‘) using ItoΜ‚’s rule for semimartingales and
by using the same procedure as in the proof of Theorem 13.
Then, the conclusion follows readily from the verification of
(135), (136), and (139). First, an explicit formula for 𝑋𝑑 is given
in [4] by
𝑋𝑑 = π‘’πœ‡π‘‘ 𝑀𝑑 {π‘₯ − ( ∫
(133)
R+
Let πœ‰β‹† be a candidate for an optimal control, and let 𝑋⋆ be
the corresponding state process with corresponding solution
[0,𝑑)
𝑀𝑠−1 exp (−πœ‡π‘ ) π‘‘πœ‰π‘ 
+ ∑ 𝑀𝑠−1 𝛽𝑠 exp (−πœ‡π‘ ) Δπœ‰π‘  )} ,
0<𝑠≤𝑑
for 𝑑 ∈ [0, 𝑇] ,
(142)
where $\beta_t = (\int_{\mathbb{R}_+} \theta e\, N(\{t\},de))(1 + \int_{\mathbb{R}_+} \theta e\, N(\{t\},de))^{-1}$, and $M_t$ is the geometric Lévy process defined by

\[
M_t = \exp\Big\{\Big(-\frac{1}{2}\sigma^2 + \int_{\mathbb{R}_+} \{\ln(1+\theta e) - \theta e\}\, \nu(de)\Big)t
+ \sigma B_t + \int_0^t \int_{\mathbb{R}_+} \ln(1+\theta e)\, \tilde N(ds,de)\Big\}. \tag{143}
\]

From the representation (142) and by the fact that $X_{T\wedge t}^\star \le x M_{T\wedge t}\exp(\mu(T\wedge t))$, we get

\[
1 - \frac{X_{T\wedge t}}{X_{T\wedge t}^\star}
\le \frac{1}{x}\Big(\int_{[0,T\wedge t)} M_s^{-1} \exp(-\mu s)\, d\xi_s
+ \sum_{0<s\le T\wedge t} M_s^{-1} \beta_s \exp(-\mu s)\, \Delta\xi_s\Big) < \infty, \tag{144}
\]

hence

\[
\begin{aligned}
E[p_{T\wedge t}^\star (X_{T\wedge t}^\star - X_{T\wedge t})]
&\le E\Big[\big((A\rho X_{T\wedge t}^{\star\rho} + K\gamma X_{T\wedge t}^{\star\gamma})\exp(-\varkappa(T\wedge t))\big)^2\Big]^{1/2} \\
&\quad \times E\Big[\Big(\frac{1}{x}\Big(\int_{[0,T\wedge t)} M_s^{-1}\exp(-\mu s)\, d\xi_s
+ \sum_{0<s\le T\wedge t} M_s^{-1}\beta_s \exp(-\mu s)\, \Delta\xi_s\Big)\Big)^2\Big]^{1/2}. 
\end{aligned} \tag{145}
\]

By the dominated convergence theorem, we obtain (139) by sending $t$ to infinity in (145). A simple computation shows that the conditions (135)-(138) are consequences of (107)-(109). This shows in particular that the pair $(X_t^\star, \xi_t^\star)$ satisfies the sufficient optimality conditions, and hence it is optimal. This completes the proof of the following result.

Theorem 15. Suppose that $\sigma^2/2 + \theta \int_{\mathbb{R}_+} e\, \nu(de) \le \mu < \varkappa$ and $e \ge 0$, $d\nu$-a.e. If the strategy $\xi^\star$ is chosen such that the corresponding solution of the adjoint equation is given by (141), then this choice is optimal.

Remark 16. This example shows in particular that the relationship between the stochastic maximum principle and dynamic programming can be very useful for solving explicitly constrained backward stochastic differential equations with a transversality condition.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the referees and the associate editor for valuable suggestions that led to a substantial improvement of the paper. This work has been partially supported by the Direction Générale de la Recherche Scientifique et du Développement Technologique (Algeria), under Contract no. 051/PNR/UMKB/2011, and by the French Algerian Cooperation Program, Tassili 13 MDU 887.

References
[1] S. Bahlali and A. Chala, “The stochastic maximum principle
in optimal control of singular diffusions with non linear
coefficients,” Random Operators and Stochastic Equations, vol.
13, no. 1, pp. 1–10, 2005.
[2] K. Bahlali, F. Chighoub, and B. Mezerdi, “On the relationship
between the stochastic maximum principle and dynamic programming in singular stochastic control,” Stochastics, vol. 84,
no. 2-3, pp. 233–249, 2012.
[3] N. C. Framstad, B. Øksendal, and A. Sulem, “Sufficient stochastic maximum principle for the optimal control of jump diffusions and applications to finance,” Journal of Optimization
Theory and Applications, vol. 121, pp. 77–98, 2004, Erratum in
Journal of Optimization Theory and Applications, vol. 124, no. 2,
pp. 511–512, 2005.
[4] B. Øksendal and A. Sulem, “Singular stochastic control and
optimal stopping with partial information of ItoΜ‚-Lévy processes,” SIAM Journal on Control and Optimization, vol. 50, no.
4, pp. 2254–2287, 2012.
[5] L. H. R. Alvarez and T. A. Rakkolainen, “On singular stochastic
control and optimal stopping of spectrally negative jump
diffusions,” Stochastics, vol. 81, no. 1, pp. 55–78, 2009.
[6] W. H. Fleming and H. M. Soner, Controlled Markov Processes
and Viscosity Solutions, vol. 25, Springer, New York, NY, USA,
1993.
[7] N. C. Framstad, B. Øksendal, and A. Sulem, “Optimal consumption and portfolio in a jump diffusion market with proportional
transaction costs,” Journal of Mathematical Economics, vol. 35,
no. 2, pp. 233–257, 2001, Arbitrage and control problems in
finance.
[8] B. Øksendal and A. Sulem, Applied Stochastic Control of Jump
Diffusions, Springer, Berlin, Germany, 2005.
[9] U. G. Haussmann and W. Suo, “Singular optimal stochastic
controls. II. Dynamic programming,” SIAM Journal on Control
and Optimization, vol. 33, no. 3, pp. 937–959, 1995.
[10] J. A. Bather and H. Chernoff, “Sequential decision in the control
of a spaceship (finite fuel),” Journal of Applied Probability, vol.
49, pp. 584–604, 1967.
[11] V. E. Beneš, L. A. Shepp, and H. S. Witsenhausen, “Some
solvable stochastic control problems,” Stochastics, vol. 4, no. 1,
pp. 39–83, 1980/81.
[12] P.-L. Lions and A.-S. Sznitman, “Stochastic differential equations with reflecting boundary conditions,” Communications on
Pure and Applied Mathematics, vol. 37, no. 4, pp. 511–537, 1984.
[13] F. M. Baldursson and I. Karatzas, “Irreversible investment and
industry equilibrium,” Finance and Stochastics, vol. 1, pp. 69–89,
1997.
[14] E. M. Lungu and B. Øksendal, “Optimal harvesting from a
population in a stochastic crowded environment,” Mathematical
Biosciences, vol. 145, no. 1, pp. 47–75, 1997.
[15] M. H. A. Davis and A. R. Norman, “Portfolio selection with
transaction costs,” Mathematics of Operations Research, vol. 15,
no. 4, pp. 676–713, 1990.
[16] M. Chaleyat-Maurel, N. El Karoui, and B. Marchal, “Réflexion
discontinue et systeΜ€mes stochastiques,” The Annals of Probability, vol. 8, no. 6, pp. 1049–1067, 1980.
[17] P. L. Chow, J.-L. Menaldi, and M. Robin, “Additive control of
stochastic linear systems with finite horizon,” SIAM Journal on
Control and Optimization, vol. 23, no. 6, pp. 858–899, 1985.
[18] A. Cadenillas and U. G. Haussmann, “The stochastic maximum principle for a singular control problem,” Stochastics and
Stochastics Reports, vol. 49, no. 3-4, pp. 211–237, 1994.
[19] S. Bahlali and B. Mezerdi, “A general stochastic maximum
principle for singular control problems,” Electronic Journal of
Probability, vol. 10, pp. 988–1004, 2005.
[20] S. G. Peng, “A general stochastic maximum principle for optimal
control problems,” SIAM Journal on Control and Optimization,
vol. 28, no. 4, pp. 966–979, 1990.
[21] S. Bahlali, B. Djehiche, and B. Mezerdi, “The relaxed stochastic
maximum principle in singular optimal control of diffusions,”
SIAM Journal on Control and Optimization, vol. 46, no. 2, pp.
427–444, 2007.
[22] K. Bahlali, F. Chighoub, B. Djehiche, and B. Mezerdi, “Optimality necessary conditions in singular stochastic control problems
with nonsmooth data,” Journal of Mathematical Analysis and
Applications, vol. 355, no. 2, pp. 479–494, 2009.
[23] H. Pham, “Optimal stopping of controlled jump diffusion processes: a viscosity solution approach,” Journal of Mathematical
Systems, Estimation, and Control, vol. 8, pp. 1–27, 1998.
[24] J.-T. Shi and Z. Wu, “Relationship between MP and DPP for the
stochastic optimal control problem of jump diffusions,” Applied
Mathematics and Optimization, vol. 63, no. 2, pp. 151–189, 2011.
[25] S. J. Tang and X. J. Li, “Necessary conditions for optimal control
of stochastic systems with random jumps,” SIAM Journal on
Control and Optimization, vol. 32, no. 5, pp. 1447–1475, 1994.
[26] X. Y. Zhou, “A unified treatment of maximum principle and
dynamic programming in stochastic controls,” Stochastics and
Stochastics Reports, vol. 36, no. 3-4, pp. 137–161, 1991.
[27] J. Yong and X. Y. Zhou, Stochastic Controls, Hamiltonian Systems
and HJB Equations, vol. 43, Springer, New York, NY, USA, 1999.
[28] A. Eyraud-Loisel, “Backward stochastic differential equations
with enlarged filtration: option hedging of an insider trader in
a financial market with jumps,” Stochastic Processes and their
Applications, vol. 115, no. 11, pp. 1745–1763, 2005.
[29] G. Barles, R. Buckdahn, and E. Pardoux, “BSDEs and
integral-partial differential equations,” Stochastics and Stochastics Reports, vol. 60, no. 1-2, pp. 57–83, 1997.
[30] N. Ikeda and S. Watanabe, Stochastic Differential Equations and
Diffusion Processes, vol. 24, North-Holland, Amsterdam, The
Netherlands, 2nd edition, 1989.