Hindawi Publishing Corporation
International Journal of Stochastic Analysis
Volume 2014, Article ID 201491, 17 pages
http://dx.doi.org/10.1155/2014/201491

Research Article

The Relationship between the Stochastic Maximum Principle and the Dynamic Programming in Singular Control of Jump Diffusions

Farid Chighoub and Brahim Mezerdi

Laboratory of Applied Mathematics, University Mohamed Khider, P.O. Box 145, 07000 Biskra, Algeria

Correspondence should be addressed to Brahim Mezerdi; bmezerdi@yahoo.fr

Received 7 September 2013; Revised 28 November 2013; Accepted 3 December 2013; Published 9 January 2014

Academic Editor: Agnès Sulem

Copyright © 2014 F. Chighoub and B. Mezerdi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The main objective of this paper is to explore the relationship between the stochastic maximum principle (SMP in short) and the dynamic programming principle (DPP in short) for singular control problems of jump diffusions. First, we establish necessary as well as sufficient conditions for optimality by using the stochastic calculus of jump diffusions and some properties of singular controls. Then, we give, under smoothness conditions, a useful verification theorem, and we show that the solution of the adjoint equation coincides with the spatial gradient of the value function, evaluated along the optimal trajectory of the state equation. Finally, using these theoretical results, we solve explicitly an example, on an optimal harvesting strategy, for a geometric Brownian motion with jumps.

1. Introduction

In this paper, we consider a mixed classical-singular control problem, in which the state evolves according to a stochastic differential equation driven by a Poisson random measure and an independent multidimensional Brownian motion, of the following form:
$$dx_t = b(t, x_t, u_t)\,dt + \sigma(t, x_t, u_t)\,dB_t + \int_E \gamma(t, x_{t-}, u_t, e)\,\tilde N(dt, de) + G_t\,d\xi_t, \qquad x_0 = x, \quad (1)$$
where $b$, $\sigma$, $\gamma$, and $G$ are given deterministic functions and $x$ is the initial state. The control variable is a suitable process $(u, \xi)$, where $u : [0,T] \times \Omega \to A_1 \subset \mathbb{R}^k$ is the usual classical absolutely continuous control and $\xi : [0,T] \times \Omega \to A_2 = ([0,\infty))^m$ is the singular control, which is an increasing process, continuous on the right with limits on the left, with $\xi_{0-} = 0$. The performance functional has the form
$$J(u, \xi) = E\left[\int_0^T f(t, x_t, u_t)\,dt + \int_0^T k(t)\,d\xi_t + g(x_T)\right]. \quad (2)$$
The objective of the controller is to choose a couple $(u^*, \xi^*)$ of adapted processes in order to maximize the performance functional.

In the first part of the present work, we investigate necessary as well as sufficient optimality conditions, in the form of a Pontryagin stochastic maximum principle. In the second part, we give, under regularity assumptions, a useful verification theorem. Then, we show that the adjoint process coincides with the spatial gradient of the value function, evaluated along the optimal trajectory of the state equation. Finally, using these theoretical results, we solve explicitly an example on the optimal harvesting strategy for a geometric Brownian motion with jumps. Note that our results extend those in [1, 2] to the jump diffusion setting. Moreover, we generalize the results in [3, 4] by allowing both classical and singular controls, at least in the complete information setting.
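The controlled dynamics (1) and the cost (2) can be made concrete with a small simulation. The following sketch is not part of the paper: it discretizes a one-dimensional special case by an Euler scheme, realizing the compensated Poisson integral through sampled jump counts and adding the singular term as the increments of a nondecreasing path. All coefficients (the drift $b$, diffusion $\sigma$, jump coefficient $\gamma$, the constant control $u$, the price $k$, the direction $G$, and the singular path $\xi$) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x0=1.0, T=1.0, n=1000, lam=2.0):
    """One Euler path of a 1-d special case of eq. (1), plus the cost of eq. (2).

    Toy coefficients (assumptions, not the paper's): b = 0.05*x + u,
    sigma = 0.2*x, gamma = 0.1*x, f = -u**2, k = 1, g(x) = x, G = -1.
    """
    dt = T / n
    b = lambda x, u: 0.05 * x + u
    sig = lambda x, u: 0.2 * x
    gam = lambda x: 0.1 * x
    u, G, k = 0.1, -1.0, 1.0
    dxi = np.zeros(n)
    dxi[n // 2] = 0.3                      # one singular "push" at t = T/2
    x, cost = x0, 0.0
    for i in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        dN = rng.poisson(lam * dt)         # Poisson jump count on [t, t+dt)
        cost += -u**2 * dt + k * dxi[i]    # int f dt + int k dxi
        x += (b(x, u) * dt + sig(x, u) * dB
              + gam(x) * (dN - lam * dt)   # integral against the compensated measure
              + G * dxi[i])                # singular term G_t dxi_t
    return x, cost + x                     # add terminal reward g(x_T) = x_T

x_T, J_hat = simulate()
```

Averaging `J_hat` over many such paths would give a Monte Carlo estimate of $J(u, \xi)$ for this fixed (non-optimized) control pair.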
Note that in our control problem there are two types of jumps for the state process: the inaccessible ones, which come from the Poisson martingale part, and the predictable ones, which come from the singular control part. The inclusion of these jump terms introduces a major difference with respect to the case without singular control.

Stochastic control problems of singular type have received considerable attention, due to their wide applicability in a number of different areas; see [4–8]. In most cases, the optimal singular control problem has been studied through the dynamic programming principle; see [9], where it was shown in particular that the value function is continuous and is the unique viscosity solution of the HJB variational inequality.

One-dimensional problems of singular type, without the classical control, have been studied by many authors. It was shown that the value function satisfies a variational inequality, which gives rise to a free boundary problem, and that the optimal state process is a diffusion reflected at the free boundary. Bather and Chernoff [10] were the first to formulate such a problem. Beneš et al. [11] explicitly solved a one-dimensional example by observing that the value function in their example is twice continuously differentiable. This regularity property is called the principle of smooth fit. The optimal control can be constructed by using reflected Brownian motion; see Lions and Sznitman [12] for more details. Applications to irreversible investment, industry equilibrium, and portfolio optimization under transaction costs can be found in [13]. A problem of optimal harvesting from a population in a stochastic crowded environment is proposed in [14], where the size of the population at time $t$ is represented as the solution of a stochastic logistic differential equation.

The two-dimensional problem that arises in portfolio selection models under proportional transaction costs is of singular type and has been considered by Davis and Norman [15]. The case of diffusions with jumps was studied by Øksendal and Sulem [8]. For further contributions on singular control problems and their relationship with optimal stopping problems, the reader is referred to [4, 5, 7, 16, 17].

The stochastic maximum principle is another powerful tool for solving stochastic control problems. The first result covering singular control problems was obtained by Cadenillas and Haussmann [18], who considered linear dynamics, a convex cost criterion, and convex state constraints. A first-order weak stochastic maximum principle was developed via the convex perturbation method, for both the absolutely continuous and the singular components, by Bahlali and Chala [1]. The second-order stochastic maximum principle for nonlinear SDEs with a controlled diffusion matrix was obtained by Bahlali and Mezerdi [19], extending the Peng maximum principle [20] to singular control problems. A similar approach was used by Bahlali et al. in [21] to study the stochastic maximum principle for relaxed-singular optimal control in the case of an uncontrolled diffusion. Bahlali et al. in [22] discuss the stochastic maximum principle in singular optimal control in the case where the coefficients are Lipschitz continuous in $x$, provided that the classical derivatives are replaced by generalized ones. See also the recent paper by Øksendal and Sulem [4], where Malliavin calculus techniques have been used to define the adjoint process.

Stochastic control problems in which the system is governed by a stochastic differential equation with jumps, without the singular part, have also been studied, both by the dynamic programming approach and by the Pontryagin maximum principle. The HJB equation associated with these problems is a nonlinear second-order parabolic integro-differential equation. Pham [23] studied a mixed optimal stopping and stochastic control problem for jump diffusion processes by using the viscosity solutions approach. Verification theorems for various types of problems governed by this kind of SDE are discussed by Øksendal and Sulem [8]. Results covering the stochastic maximum principle for controlled jump diffusion processes are discussed in [3, 24, 25]. In [3] the sufficient maximum principle and the link with the dynamic programming principle are given under the assumption that the value function is smooth. Let us mention that in [24] the verification theorem is established in the framework of viscosity solutions, and the relationship between the adjoint processes and some generalized gradients of the value function is obtained. Note that Shi and Wu [24] extend the results of [26] to jump diffusions; see also [27] for a systematic study of the continuous case. The second-order stochastic maximum principle for optimal controls of nonlinear dynamics with jumps and convex state constraints was developed via the spike variation method by Tang and Li [25]. These conditions are described in terms of two adjoint processes, which solve linear backward SDEs. Such equations have important applications in hedging problems [28]. Existence and uniqueness of solutions to BSDEs with jumps and nonlinear coefficients have been treated by Tang and Li [25] and Barles et al. [29]. The link with integral-partial differential equations is studied in [29].

The plan of the paper is as follows. In Section 2, we give some preliminary results and notations. The purpose of Section 3 is to derive necessary as well as sufficient optimality conditions. In Section 4, we give, under regularity assumptions, a verification theorem for the value function.
Then, we prove that the adjoint process is equal to the derivative of the value function evaluated along the optimal trajectory, extending in particular [2, 3]. Finally, an example is solved explicitly by using these theoretical results.

2. Assumptions and Problem Formulation

The purpose of this section is to introduce some notations, which will be needed in the subsequent sections. In all that follows, we are given a probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \le T}, P)$, such that $\mathcal{F}_0$ contains the $P$-null sets, $\mathcal{F}_T = \mathcal{F}$ for an arbitrarily fixed time horizon $T$, and $(\mathcal{F}_t)_{t \le T}$ satisfies the usual conditions. We assume that $(\mathcal{F}_t)_{t \le T}$ is generated by a $d$-dimensional standard Brownian motion $B$ and an independent jump measure $N$ of a Lévy process $\eta$, on $[0,T] \times E$, where $E \subset \mathbb{R}^{\ell} \setminus \{0\}$ for some $\ell \ge 1$. We denote by $(\mathcal{F}_t^B)_{t \le T}$ (resp., $(\mathcal{F}_t^N)_{t \le T}$) the $P$-augmentation of the natural filtration of $B$ (resp., $N$). We assume that the compensator of $N$ has the form $\mu(dt, de) = \nu(de)\,dt$, for some $\sigma$-finite Lévy measure $\nu$ on $E$, endowed with its Borel $\sigma$-field $\mathcal{B}(E)$. We suppose that $\int_E 1 \wedge |e|^2\,\nu(de) < \infty$ and set $\tilde N(dt, de) = N(dt, de) - \nu(de)\,dt$, the compensated jump martingale random measure of $N$. Obviously, we have
$$\mathcal{F}_t = \sigma\left[\int\int_{A \times (0,s]} N(dr, de);\ s \le t,\ A \in \mathcal{B}(E)\right] \vee \sigma\left[B_s;\ s \le t\right] \vee \mathcal{N}, \quad (3)$$
where $\mathcal{N}$ denotes the totality of $P$-null sets and $\sigma_1 \vee \sigma_2$ denotes the $\sigma$-field generated by $\sigma_1 \cup \sigma_2$.

Notation. Any element $x \in \mathbb{R}^n$ will be identified with a column vector with $n$ components, and its norm is $|x| = |x^1| + \cdots + |x^n|$. The scalar product of any two vectors $x$ and $y$ on $\mathbb{R}^n$ is denoted by $xy$, or $\sum_{i=1}^n x^i y^i$. For a function $h$, we denote by $h_x$ (resp., $h_{xx}$) the gradient or Jacobian (resp., the Hessian) of $h$ with respect to the variable $x$.

Given $s < t$, let us introduce the following spaces.

(i) $\mathbb{L}^2_{\nu}(E; \mathbb{R}^n)$, or $\mathbb{L}^2_{\nu}$, is the set of square integrable functions $l(\cdot) : E \to \mathbb{R}^n$ such that
$$\|l(e)\|^2_{\mathbb{L}^2_{\nu}(E;\mathbb{R}^n)} := \int_E |l(e)|^2\,\nu(de) < \infty. \quad (4)$$

(ii) $\mathcal{S}^2([s,t]; \mathbb{R}^n)$ is the set of $\mathbb{R}^n$-valued adapted càdlàg processes $P$ such that
$$\|P\|_{\mathcal{S}^2([s,t];\mathbb{R}^n)} := E\left[\sup_{r \in [s,t]} |P_r|^2\right]^{1/2} < \infty. \quad (5)$$

(iii) $\mathcal{M}^2([s,t]; \mathbb{R}^n)$ is the set of progressively measurable $\mathbb{R}^n$-valued processes $Q$ such that
$$\|Q\|_{\mathcal{M}^2([s,t];\mathbb{R}^n)} := E\left[\int_s^t |Q_r|^2\,dr\right]^{1/2} < \infty. \quad (6)$$

(iv) $\mathbb{L}^2_{\nu}([s,t]; \mathbb{R}^n)$ is the set of $\mathcal{B}([0,T] \times \Omega) \otimes \mathcal{B}(E)$ measurable maps $R : [0,T] \times \Omega \times E \to \mathbb{R}^n$ such that
$$\|R\|_{\mathbb{L}^2_{\nu}([s,t];\mathbb{R}^n)} := E\left[\int_s^t \int_E |R_r(e)|^2\,\nu(de)\,dr\right]^{1/2} < \infty. \quad (7)$$

To avoid heavy notations, we omit the subscript $([s,t]; \mathbb{R}^n)$ in these notations when $(s,t) = (0,T)$.

Let $T$ be a fixed strictly positive real number, let $A_1$ be a closed convex subset of $\mathbb{R}^k$, and let $A_2 = ([0,\infty))^m$. Let us define the class of admissible control processes $(u, \xi)$.

Definition 1. An admissible control is a pair of measurable, adapted processes $u : [0,T] \times \Omega \to A_1$ and $\xi : [0,T] \times \Omega \to A_2$ such that

(1) $u$ is a predictable process, and $\xi$ is of bounded variation, nondecreasing, right continuous with left-hand limits, with $\xi_{0-} = 0$;

(2) $E\left[\sup_{t \in [0,T]} |u_t|^2 + |\xi_T|^2\right] < \infty$.

We denote by $\mathcal{U} = \mathcal{U}_1 \times \mathcal{U}_2$ the set of all admissible controls; here $\mathcal{U}_1$ (resp., $\mathcal{U}_2$) represents the set of admissible controls $u$ (resp., $\xi$). Assume that, for $(u, \xi) \in \mathcal{U}$ and $t \in [0,T]$, the state $x_t$ of our system is given by
$$dx_t = b(t, x_t, u_t)\,dt + \sigma(t, x_t, u_t)\,dB_t + \int_E \gamma(t, x_{t-}, u_t, e)\,\tilde N(dt, de) + G_t\,d\xi_t, \qquad x_0 = x, \quad (8)$$
where $x \in \mathbb{R}^n$ is given, representing the initial state. Let
$$b : [0,T] \times \mathbb{R}^n \times A_1 \to \mathbb{R}^n, \quad \sigma : [0,T] \times \mathbb{R}^n \times A_1 \to \mathbb{R}^{n \times d}, \quad \gamma : [0,T] \times \mathbb{R}^n \times A_1 \times E \to \mathbb{R}^n, \quad G : [0,T] \to \mathbb{R}^{n \times m} \quad (9)$$
be measurable functions.

Notice that the jump of a singular control $\xi \in \mathcal{U}_2$ at any jumping time $s$ is defined by $\Delta\xi_s = \xi_s - \xi_{s-}$, and we let
$$\xi_t^c = \xi_t - \sum_{0 < s \le t} \Delta\xi_s \quad (10)$$
be the continuous part of $\xi$.
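A discretized illustration (ours, not the paper's) of the decomposition (10): a nondecreasing càdlàg path built from an absolutely continuous drift plus finitely many jumps, whose continuous part is recovered by subtracting the accumulated jumps. The rate and the jump times/sizes below are illustrative assumptions.

```python
import numpy as np

# Nondecreasing cadlag singular control: xi_t = RATE*t + sum of jumps up to t.
RATE = 0.3
JUMPS = ((0.4, 0.5), (0.8, 0.2))   # pairs (jump time tau, jump size Delta xi_tau)

def xi(t):
    """Value of the singular control path at time t (right-continuous)."""
    return RATE * t + sum(size for tau, size in JUMPS if tau <= t)

def xi_c(t):
    """Continuous part, eq. (10): xi_t minus the accumulated jumps Delta xi_s, s <= t."""
    return xi(t) - sum(size for tau, size in JUMPS if tau <= t)

# right-continuity with a left limit at the jump time tau = 0.4:
left_limit = xi(0.4 - 1e-9)
jump_size = xi(0.4) - left_limit   # recovers Delta xi_{0.4} ~ 0.5
```

Here the design choice is to store the path symbolically (drift plus jump list), so the split (10) is exact; on a sampled grid one would instead have to separate "large" increments heuristically.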
We distinguish between the jumps of $x_s$ caused by the jump of $N(s, \cdot)$, defined by
$$\Delta_N x_s := \int_E \gamma(s, x_{s-}, u_s, e)\,N(\{s\}, de) = \begin{cases} \gamma(s, x_{s-}, u_s, e) & \text{if } \eta \text{ has a jump of size } e \text{ at } s, \\ 0 & \text{otherwise,} \end{cases} \quad (11)$$
and the jump of $x_s$ caused by the singular control $\xi$, denoted by $\Delta_\xi x_s := G_s\,\Delta\xi_s$. In the above, $N(\{s\}, \cdot)$ represents the jump in the Poisson random measure occurring at time $s$. In particular, the general jump of the state process at $s$ is given by $\Delta x_s = x_s - x_{s-} = \Delta_\xi x_s + \Delta_N x_s$.

If $\varphi$ is a continuous real function, we let
$$\Delta_\xi \varphi(x_s) := \varphi(x_s) - \varphi(x_{s-} + \Delta_N x_s). \quad (12)$$
The expression (12) defines the jump in the value of $\varphi(x_s)$ caused by the jump of $x$ at $s$. We emphasize that the possible jumps of $x_s$ coming from the Poisson measure are not included in $\Delta_\xi \varphi(x_s)$.

Suppose that the performance functional has the form
$$J(u, \xi) = E\left[\int_0^T f(t, x_t, u_t)\,dt + g(x_T) + \int_0^T k_t\,d\xi_t\right], \quad \text{for } (u, \xi) \in \mathcal{U}, \quad (13)$$
where $f : [0,T] \times \mathbb{R}^n \times A_1 \to \mathbb{R}$, $g : \mathbb{R}^n \to \mathbb{R}$, and $k : [0,T] \to ([0,\infty))^m$, with $k_t\,d\xi_t = \sum_{i=1}^m k_t^i\,d\xi_t^i$. An admissible control $(u^*, \xi^*)$ is optimal if
$$J(u^*, \xi^*) = \sup_{(u,\xi) \in \mathcal{U}} J(u, \xi). \quad (14)$$

Let us assume the following.

(H1) The maps $b$, $\sigma$, $\gamma$, and $f$ are continuously differentiable with respect to $(x, u)$, and $g$ is continuously differentiable in $x$.

(H2) The derivatives $b_x$, $b_u$, $\sigma_x$, $\sigma_u$, $\gamma_x$, $\gamma_u$, $f_x$, $f_u$, and $g_x$ are continuous in $(x, u)$ and uniformly bounded.

(H3) $b$, $\sigma$, $\gamma$, and $f$ are bounded by $K_1(1 + |x| + |u|)$, and $g$ is bounded by $K_1(1 + |x|)$, for some $K_1 > 0$.

(H4) For all $(u, e) \in A_1 \times E$, the map
$$(x, \zeta) \in \mathbb{R}^n \times \mathbb{R}^n \longmapsto d(t, x, u, e; \zeta) := \zeta^{\mathsf T}\left(\gamma_x(t, x, u, e) + I_n\right)\zeta \quad (15)$$
satisfies, uniformly in $(x, \zeta) \in \mathbb{R}^n \times \mathbb{R}^n$,
$$d(t, x, u, e; \zeta) \ge |\zeta|^2 K_2^{-1}, \quad \text{for some } K_2 > 0. \quad (16)$$

(H5) $G$ and $k$ are continuous and bounded.

3. The Stochastic Maximum Principle

Let us first define the usual Hamiltonian associated to the control problem by
$$H(t, x, u, p, q, X(\cdot)) = f(t, x, u) + p\,b(t, x, u) + \sum_{j=1}^d q^j \sigma^j(t, x, u) + \int_E X(e)\,\gamma(t, x, u, e)\,\nu(de), \quad (17)$$
where $(t, x, u, p, q, X(\cdot)) \in [0,T] \times \mathbb{R}^n \times A_1 \times \mathbb{R}^n \times \mathbb{R}^{n \times d} \times \mathbb{L}^2_\nu$, and $\sigma^j$, $q^j$, for $j = 1, \ldots, d$, denote the $j$th columns of the matrices $\sigma$ and $q$, respectively.

Let $(u^*, \xi^*)$ be an optimal control and let $x^*$ be the corresponding optimal trajectory. Then, we consider a triple $(p, q, r(\cdot))$ of square integrable adapted processes associated with $(u^*, x^*)$, with values in $\mathbb{R}^n \times \mathbb{R}^{n \times d} \times \mathbb{R}^n$, such that
$$dp_t = -H_x(t, x_t^*, u_t^*, p_t, q_t, r_t(\cdot))\,dt + q_t\,dB_t + \int_E r_t(e)\,\tilde N(dt, de), \qquad p_T = g_x(x_T^*). \quad (18)$$

3.1. Necessary Conditions of Optimality. The purpose of this section is to derive necessary optimality conditions, satisfied by an optimal control, assuming that the solution exists. The proof is based on convex perturbations of both the absolutely continuous and the singular components of the optimal control, and on some estimates of the state processes. Note that our results generalize [1, 2, 21] to systems with jumps.

Theorem 2 (necessary conditions of optimality). Let $(u^*, \xi^*)$ be an optimal control maximizing the functional $J$ over $\mathcal{U}$, and let $x^*$ be the corresponding optimal trajectory. Then there exists an adapted process $(p, q, r(\cdot)) \in \mathcal{S}^2 \times \mathcal{M}^2 \times \mathbb{L}^2_\nu$, which is the unique solution of the BSDE (18), such that the following conditions hold.

(i) For all $v \in A_1$,
$$H_u(t, x_t^*, u_t^*, p_t, q_t, r_t(\cdot))\left(v_t - u_t^*\right) \le 0, \quad dt\text{-a.e., } P\text{-a.s.} \quad (19)$$

(ii) For all $t \in [0,T]$, with probability 1,
$$k_t^i + G_t^i p_t \le 0, \quad \text{for } i = 1, \ldots, m, \quad (20)$$
$$\sum_{i=1}^m 1_{\{k_t^i + G_t^i p_t < 0\}}\,d\xi_t^{*ci} = 0, \quad (21)$$
$$k_t^i + G_t^i\left(p_{t-} + \Delta_N p_t\right) \le 0, \quad \text{for } i = 1, \ldots, m, \quad (22)$$
$$\sum_{i=1}^m 1_{\{k_t^i + G_t^i (p_{t-} + \Delta_N p_t) < 0\}}\,\Delta\xi_t^{*i} = 0, \quad (23)$$
where $\Delta_N p_t = \int_E r_t(e)\,N(\{t\}, de)$.

In order to prove Theorem 2, we present some auxiliary results.

3.1.1. Variational Equation. Let $(v, \eta) \in \mathcal{U}$ be such that $(u^* + v, \xi^* + \eta) \in \mathcal{U}$. The convexity of the control domain ensures that, for $\varepsilon \in (0,1)$, the control $(u^* + \varepsilon v, \xi^* + \varepsilon \eta)$ is also in $\mathcal{U}$. We denote by $x^\varepsilon$ the solution of the SDE (8) corresponding to the control $(u^* + \varepsilon v, \xi^* + \varepsilon \eta)$. Then, by standard arguments from stochastic calculus, it is easy to check the following estimate.

Lemma 3. Under assumptions (H1)–(H5), one has
$$\lim_{\varepsilon \to 0} E\left[\sup_{t \in [0,T]} \left|x_t^\varepsilon - x_t^*\right|^2\right] = 0. \quad (24)$$

Proof. From assumptions (H1)–(H5) we get, by using the Burkholder–Davis–Gundy inequality,
$$E\left[\sup_{t \in [0,T]} \left|x_t^\varepsilon - x_t^*\right|^2\right] \le K \int_0^T E\left[\sup_{r \in [0,s]} \left|x_r^\varepsilon - x_r^*\right|^2\right] ds + K\varepsilon^2 \left(\int_0^T E\left[\sup_{r \in [0,s]} |v_r|^2\right] ds + E|\eta_T|^2\right). \quad (25)$$
From Definition 1 and Gronwall's lemma, the result follows immediately by letting $\varepsilon$ go to zero.

We define the process $z_t = z_t^{u^*, v, \eta}$ by
$$dz_t = \left\{b_x(t, x_t^*, u_t^*)\,z_t + b_u(t, x_t^*, u_t^*)\,v_t\right\} dt + \sum_{j=1}^d \left\{\sigma_x^j(t, x_t^*, u_t^*)\,z_t + \sigma_u^j(t, x_t^*, u_t^*)\,v_t\right\} dB_t^j + \int_E \left\{\gamma_x(t, x_{t-}^*, u_t^*, e)\,z_{t-} + \gamma_u(t, x_{t-}^*, u_t^*, e)\,v_t\right\} \tilde N(dt, de) + G_t\,d\eta_t, \qquad z_0 = 0. \quad (26)$$
From (H2) and Definition 1, one can find a unique solution $z$ of the variational equation (26), and the following estimate holds.

Lemma 4. Under assumptions (H1)–(H5), it holds that
$$\lim_{\varepsilon \to 0} E\left|\frac{x_t^\varepsilon - x_t^*}{\varepsilon} - z_t\right|^2 = 0. \quad (27)$$

Proof. Let
$$\Gamma_t^\varepsilon = \frac{x_t^\varepsilon - x_t^*}{\varepsilon} - z_t. \quad (28)$$
We denote $x_t^{\varepsilon,\theta} = x_t^* + \theta\varepsilon(\Gamma_t^\varepsilon + z_t)$ and $u_t^{\varepsilon,\theta} = u_t^* + \theta\varepsilon v_t$, for notational convenience. Then we have immediately that $\Gamma_0^\varepsilon = 0$, and $\Gamma_t^\varepsilon$ satisfies the following SDE:
$$d\Gamma_t^\varepsilon = \left\{\tfrac{1}{\varepsilon}\left(b(t, x_t^\varepsilon, u_t^\varepsilon) - b(t, x_t^*, u_t^*)\right) - \left(b_x(t, x_t^*, u_t^*)\,z_t + b_u(t, x_t^*, u_t^*)\,v_t\right)\right\} dt + \left\{\tfrac{1}{\varepsilon}\left(\sigma(t, x_t^\varepsilon, u_t^\varepsilon) - \sigma(t, x_t^*, u_t^*)\right) - \left(\sigma_x(t, x_t^*, u_t^*)\,z_t + \sigma_u(t, x_t^*, u_t^*)\,v_t\right)\right\} dB_t + \int_E \left\{\tfrac{1}{\varepsilon}\left(\gamma(t, x_{t-}^\varepsilon, u_t^\varepsilon, e) - \gamma(t, x_{t-}^*, u_t^*, e)\right) - \left(\gamma_x(t, x_{t-}^*, u_t^*, e)\,z_{t-} + \gamma_u(t, x_{t-}^*, u_t^*, e)\,v_t\right)\right\} \tilde N(dt, de). \quad (29)$$
Since the derivatives of the coefficients are bounded, and from Definition 1, it is easy to verify by Gronwall's inequality that $\Gamma^\varepsilon \in \mathcal{S}^2$ and
$$E\left|\Gamma_t^\varepsilon\right|^2 \le KE\int_0^t \left|\int_0^1 b_x(s, x_s^{\varepsilon,\theta}, u_s^{\varepsilon,\theta})\,\Gamma_s^\varepsilon\,d\theta\right|^2 ds + KE\int_0^t \left|\int_0^1 \sigma_x(s, x_s^{\varepsilon,\theta}, u_s^{\varepsilon,\theta})\,\Gamma_s^\varepsilon\,d\theta\right|^2 ds + KE\int_0^t \int_E \left|\int_0^1 \gamma_x(s, x_s^{\varepsilon,\theta}, u_s^{\varepsilon,\theta}, e)\,\Gamma_s^\varepsilon\,d\theta\right|^2 \nu(de)\,ds + KE\left|\rho_t^\varepsilon\right|^2, \quad (30)$$
where
$$\rho_t^\varepsilon = \int_0^t \int_0^1 \left(b_x(s, x_s^{\varepsilon,\theta}, u_s^{\varepsilon,\theta}) - b_x(s, x_s^*, u_s^*)\right) z_s\,d\theta\,ds + \int_0^t \int_0^1 \left(\sigma_x(s, x_s^{\varepsilon,\theta}, u_s^{\varepsilon,\theta}) - \sigma_x(s, x_s^*, u_s^*)\right) z_s\,d\theta\,dB_s + \int_0^t \int_E \int_0^1 \left(\gamma_x(s, x_{s-}^{\varepsilon,\theta}, u_s^{\varepsilon,\theta}, e) - \gamma_x(s, x_{s-}^*, u_s^*, e)\right) z_{s-}\,d\theta\,\tilde N(ds, de) + \int_0^t \int_0^1 \left(b_u(s, x_s^{\varepsilon,\theta}, u_s^{\varepsilon,\theta}) - b_u(s, x_s^*, u_s^*)\right) v_s\,d\theta\,ds + \int_0^t \int_0^1 \left(\sigma_u(s, x_s^{\varepsilon,\theta}, u_s^{\varepsilon,\theta}) - \sigma_u(s, x_s^*, u_s^*)\right) v_s\,d\theta\,dB_s + \int_0^t \int_E \int_0^1 \left(\gamma_u(s, x_{s-}^{\varepsilon,\theta}, u_s^{\varepsilon,\theta}, e) - \gamma_u(s, x_{s-}^*, u_s^*, e)\right) v_s\,d\theta\,\tilde N(ds, de). \quad (31)$$
Since $b_x$, $\sigma_x$, and $\gamma_x$ are bounded,
$$E\left|\Gamma_t^\varepsilon\right|^2 \le M E\int_0^t \left|\Gamma_s^\varepsilon\right|^2 ds + M E\left|\rho_t^\varepsilon\right|^2, \quad (32)$$
where $M$ is a generic constant depending on the constants $K$, $\nu(E)$, and $T$. We conclude from Lemma 3 and the dominated convergence theorem that $\lim_{\varepsilon \to 0} E|\rho_t^\varepsilon|^2 = 0$. Hence (27) follows from Gronwall's lemma by letting $\varepsilon$ go to 0. This completes the proof.

3.1.2. Variational Inequality. Let $\Phi$ be the solution of the linear matrix equation, for $0 \le s < t \le T$,
$$d\Phi_{s,t} = b_x(t, x_t^*, u_t^*)\,\Phi_{s,t}\,dt + \sum_{j=1}^d \sigma_x^j(t, x_t^*, u_t^*)\,\Phi_{s,t}\,dB_t^j + \int_E \gamma_x(t, x_{t-}^*, u_t^*, e)\,\Phi_{s,t-}\,\tilde N(dt, de), \qquad \Phi_{s,s} = I_n, \quad (33)$$
where $I_n$ is the $n \times n$ identity matrix. This equation is linear with bounded coefficients; hence it admits a unique strong solution. Moreover, condition (H4) ensures that the tangent process $\Phi$ is invertible, with an inverse $\Psi$ satisfying suitable integrability conditions. From Itô's formula, we can easily check that $d(\Phi_{s,t}\Psi_{s,t}) = 0$ and $\Phi_{s,s}\Psi_{s,s} = I_n$, where $\Psi$ is the solution of the following equation:
$$d\Psi_{s,t} = -\Psi_{s,t}\left\{b_x(t, x_t^*, u_t^*) - \sum_{j=1}^d \sigma_x^j(t, x_t^*, u_t^*)\,\sigma_x^j(t, x_t^*, u_t^*) - \int_E \gamma_x(t, x_t^*, u_t^*, e)\,\nu(de)\right\} dt - \sum_{j=1}^d \Psi_{s,t}\,\sigma_x^j(t, x_t^*, u_t^*)\,dB_t^j - \Psi_{s,t-}\int_E \left(\gamma_x(t, x_{t-}^*, u_t^*, e) + I_n\right)^{-1} \gamma_x(t, x_{t-}^*, u_t^*, e)\,N(dt, de), \qquad \Psi_{s,s} = I_n, \quad (34)$$
so that $\Psi = \Phi^{-1}$. If $s = 0$ we simply write $\Phi_{0,t} = \Phi_t$ and $\Psi_{0,t} = \Psi_t$. By the integration by parts formula ([8, Lemma 3.6]), one can see that the solution of (26) is given by $z_t = \Phi_t \alpha_t$, where $\alpha_t$ is the solution of the stochastic differential equation
$$d\alpha_t = \Psi_t\left\{b_u(t, x_t^*, u_t^*)\,v_t - \sum_{j=1}^d \sigma_x^j(t, x_t^*, u_t^*)\,\sigma_u^j(t, x_t^*, u_t^*)\,v_t - \int_E \gamma_u(t, x_t^*, u_t^*, e)\,v_t\,\nu(de)\right\} dt + \sum_{j=1}^d \Psi_t\,\sigma_u^j(t, x_t^*, u_t^*)\,v_t\,dB_t^j + \Psi_{t-}\int_E \left(\gamma_x(t, x_{t-}^*, u_t^*, e) + I_n\right)^{-1} \gamma_u(t, x_{t-}^*, u_t^*, e)\,v_t\,N(dt, de) + \Psi_t G_t\,d\eta_t - \Psi_t \int_E \left(\gamma_x(t, x_t^*, u_t^*, e) + I_n\right)^{-1} \gamma_x(t, x_t^*, u_t^*, e)\,N(\{t\}, de)\,G_t\,\Delta\eta_t, \qquad \alpha_0 = 0. \quad (35)$$

Let us introduce the following convex perturbation of the optimal control $(u^*, \xi^*)$, defined by
$$(u^{*,\varepsilon}, \xi^{*,\varepsilon}) = (u^* + \varepsilon v,\ \xi^* + \varepsilon \eta), \quad (36)$$
for some $(v, \eta) \in \mathcal{U}$ and $\varepsilon \in (0,1)$. Since $(u^*, \xi^*)$ is an optimal control, $\varepsilon^{-1}(J(u^\varepsilon, \xi^\varepsilon) - J(u^*, \xi^*)) \le 0$. Thus a necessary condition for optimality is that
$$\lim_{\varepsilon \to 0} \varepsilon^{-1}\left(J(u^\varepsilon, \xi^\varepsilon) - J(u^*, \xi^*)\right) \le 0. \quad (37)$$
The rest of this subsection is devoted to the computation of the above limit. We will see that the expression (37) leads to a precise description of the optimal control $(u^*, \xi^*)$ in terms of the adjoint process. First, it is easy to prove the following lemma.

Lemma 5. Under assumptions (H1)–(H5), one has
$$I := \lim_{\varepsilon \to 0} \varepsilon^{-1}\left(J(u^\varepsilon, \xi^\varepsilon) - J(u^*, \xi^*)\right) = E\left[\int_0^T \left\{f_x(t, x_t^*, u_t^*)\,z_t + f_u(t, x_t^*, u_t^*)\,v_t\right\} dt + g_x(x_T^*)\,z_T + \int_0^T k_t\,d\eta_t\right]. \quad (38)$$

Proof. We use the same notations as in the proof of Lemma 4.
First, we have
$$\varepsilon^{-1}\left(J(u^\varepsilon, \xi^\varepsilon) - J(u^*, \xi^*)\right) = E\left[\int_0^T \int_0^1 \left\{f_x(t, x_t^{\varepsilon,\theta}, u_t^{\varepsilon,\theta})\,z_t + f_u(t, x_t^{\varepsilon,\theta}, u_t^{\varepsilon,\theta})\,v_t\right\} d\theta\,dt + \int_0^1 g_x(x_T^{\varepsilon,\theta})\,z_T\,d\theta + \int_0^T k_t\,d\eta_t\right] + J_t^\varepsilon, \quad (39)$$
where
$$J_t^\varepsilon = E\left[\int_0^T \int_0^1 f_x(t, x_t^{\varepsilon,\theta}, u_t^{\varepsilon,\theta})\,\Gamma_t^\varepsilon\,d\theta\,dt + \int_0^1 g_x(x_T^{\varepsilon,\theta})\,\Gamma_T^\varepsilon\,d\theta\right]. \quad (40)$$
By using Lemma 4, and since the derivatives $f_x$, $f_u$, and $g_x$ are bounded, we have $\lim_{\varepsilon \to 0} J_t^\varepsilon = 0$. The result then follows by letting $\varepsilon$ go to 0 in the above equality.

Substituting $z_t = \Phi_t \alpha_t$ in (38) leads to
$$I = E\left[\int_0^T \left\{f_x(t, x_t^*, u_t^*)\,\Phi_t \alpha_t + f_u(t, x_t^*, u_t^*)\,v_t\right\} dt + g_x(x_T^*)\,\Phi_T \alpha_T + \int_0^T k_t\,d\eta_t\right]. \quad (41)$$

Consider the right continuous version of the square integrable martingale
$$M_t := E\left[\int_0^T f_x(s, x_s^*, u_s^*)\,\Phi_s\,ds + g_x(x_T^*)\,\Phi_T \,\Big|\, \mathcal{F}_t\right]. \quad (42)$$
By the Itô representation theorem [30], there exist two processes $Q = (Q^1, \ldots, Q^d)$, where $Q^j \in \mathcal{M}^2$ for $j = 1, \ldots, d$, and $R(\cdot) \in \mathbb{L}^2_\nu$, satisfying
$$M_t = E\left[\int_0^T f_x(s, x_s^*, u_s^*)\,\Phi_s\,ds + g_x(x_T^*)\,\Phi_T\right] + \sum_{j=1}^d \int_0^t Q_s^j\,dB_s^j + \int_0^t \int_E R_s(e)\,\tilde N(ds, de). \quad (43)$$

Let us denote $y_t^* = M_t - \int_0^t f_x(s, x_s^*, u_s^*)\,\Phi_s\,ds$. The adjoint variable is the process defined by
$$p_t = y_t^*\,\Psi_t, \qquad q_t^j = Q_t^j\,\Psi_t - p_t\,\sigma_x^j(t, x_t^*, u_t^*), \quad \text{for } j = 1, \ldots, d, \qquad r_t(e) = R_t(e)\,\Psi_t\left(\gamma_x(t, x_t^*, u_t^*, e) + I_n\right)^{-1} + p_t\left(\left(\gamma_x(t, x_t^*, u_t^*, e) + I_n\right)^{-1} - I_n\right). \quad (44)$$

Theorem 6. Under assumptions (H1)–(H5), one has
$$I = E\left[\int_0^T \left\{f_u(s, x_s^*, u_s^*) + p_s\,b_u(s, x_s^*, u_s^*) + \sum_{j=1}^d q_s^j\,\sigma_u^j(s, x_s^*, u_s^*) + \int_E r_s(e)\,\gamma_u(s, x_s^*, u_s^*, e)\,\nu(de)\right\} v_s\,ds + \sum_{i=1}^m \int_0^T \left\{k_s^i + G_s^i p_s\right\} d\eta_s^{ci} + \sum_{i=1}^m \sum_{0 < s \le T} \left\{k_s^i + G_s^i\left(p_{s-} + \Delta_N p_s\right)\right\} \Delta\eta_s^i\right]. \quad (45)$$

Proof. From the integration by parts formula ([8, Lemma 3.5]), and by using the definition of $p_t$, $q_t^j$ for $j = 1, \ldots, d$, and $r_t(\cdot)$, we can easily check that
$$E\left[y_T^* \alpha_T\right] = E\left[\int_0^T \left\{p_t\,b_u(t, x_t^*, u_t^*) + \sum_{j=1}^d q_t^j\,\sigma_u^j(t, x_t^*, u_t^*) + \int_E r_t(e)\,\gamma_u(t, x_t^*, u_t^*, e)\,\nu(de)\right\} v_t\,dt - \int_0^T f_x(t, x_t^*, u_t^*)\,\Phi_t \alpha_t\,dt + \sum_{i=1}^m \left(\int_0^T G_t^i p_t\,d\eta_t^{ci} + \sum_{0 < t \le T} G_t^i\left(p_{t-} + \Delta_N p_t\right)\Delta\eta_t^i\right)\right]. \quad (46)$$
Also we have
$$I = E\left[y_T^* \alpha_T + \int_0^T f_x(t, x_t^*, u_t^*)\,\Phi_t \alpha_t\,dt + \int_0^T f_u(t, x_t^*, u_t^*)\,v_t\,dt + \int_0^T k_t\,d\eta_t\right]; \quad (47)$$
substituting (46) into (47), the result follows.

Now, by applying Itô's formula to $y_t^*\Psi_t$, it is easy to check that the processes defined by (44) satisfy the BSDE (18), called the adjoint equation.

3.1.3. Adjoint Equation and Maximum Principle. Since (37) is true for all $(v, \eta) \in \mathcal{U}$ and $I \le 0$, we can easily deduce the following result.

Theorem 7. Let $(u^*, \xi^*)$ be the optimal control of the problem (14) and denote by $x^*$ the corresponding optimal trajectory. Then, for all $(v, \eta) \in \mathcal{U}$, the following inequality holds:
$$E\left[\int_0^T H_u(t, x_t^*, u_t^*, p_t, q_t, r_t(\cdot))\left(v_t - u_t^*\right) dt + \int_0^T \left\{k_t + G_t p_t\right\} d(\eta - \xi^*)_t^c + \sum_{0 < t \le T} \left\{k_t + G_t\left(p_{t-} + \Delta_N p_t\right)\right\} \Delta(\eta - \xi^*)_t\right] \le 0, \quad (48)$$
where the Hamiltonian $H$ is defined by (17), and the adjoint variable $(p, q^j, r(\cdot))$, $j = 1, \ldots, d$, is given by (44).

Now, we are ready to give the proof of Theorem 2.

Proof of Theorem 2. (i) Let us assume that $(u^*, \xi^*)$ is an optimal control for the problem (14), so that inequality (48) is valid for every $(v, \eta)$. If we choose $\eta = \xi^*$ in (48), we see that for every measurable, $\mathcal{F}_t$-adapted process $v : [0,T] \times \Omega \to A_1$,
$$E\left[\int_0^T H_u(t, x_t^*, u_t^*, p_t, q_t, r_t(\cdot))\left(v_t - u_t^*\right) dt\right] \le 0. \quad (49)$$
For $v \in \mathcal{U}_1$ define
$$A^v = \left\{(t, \omega) \in [0,T] \times \Omega \ \text{such that}\ H_u(t, x_t^*, u_t^*, p_t, q_t, r_t(\cdot))\left(v_t - u_t^*\right) > 0\right\}. \quad (50)$$
Obviously $A_t^v \in \mathcal{F}_t$, for each $t \in [0,T]$. Let us define $\tilde v \in \mathcal{U}_1$ by
$$\tilde v_t(\omega) = \begin{cases} v_t, & \text{if } (t, \omega) \in A^v, \\ u_t^*, & \text{otherwise.} \end{cases} \quad (51)$$
If $\lambda \otimes P(A^v) > 0$, where $\lambda$ denotes the Lebesgue measure, then
$$E\left[\int_0^T H_u(t, x_t^*, u_t^*, p_t, q_t, r_t(\cdot))\left(\tilde v_t - u_t^*\right) dt\right] > 0, \quad (52)$$
which contradicts (49), unless $\lambda \otimes P(A^v) = 0$. Hence the conclusion follows.

(ii) If instead we choose $v = u^*$ in inequality (48), we obtain that for every measurable, $\mathcal{F}_t$-adapted process $\eta : [0,T] \times \Omega \to A_2$, the following inequality holds:
$$E\left[\int_0^T \left\{k_t + G_t p_t\right\} d(\eta - \xi^*)_t^c + \sum_{0 < t \le T} \left\{k_t + G_t\left(p_{t-} + \Delta_N p_t\right)\right\} \Delta(\eta - \xi^*)_t\right] \le 0. \quad (53)$$
In particular, for $i = 1, \ldots, m$, put $d\eta_t^i = d\xi_t^{*i} + 1_{\{k_t^i + G_t^i p_t > 0\}}\,dt$. The added part is absolutely continuous, so the purely discontinuous part of $\eta^i - \xi^{*i}$ vanishes, and the relation (53) can be written as
$$\sum_{i=1}^m E\left[\int_0^T \left\{k_t^i + G_t^i p_t\right\} d(\eta^i - \xi^{*i})_t\right] = \sum_{i=1}^m E\left[\int_0^T \left\{k_t^i + G_t^i p_t\right\} 1_{\{k_t^i + G_t^i p_t > 0\}}\,dt\right]. \quad (54)$$
If $\lambda \otimes P\{k_t^i + G_t^i p_t > 0\} > 0$ for some $i$, the right-hand side of (54) is strictly positive, which contradicts (53). Hence, for every $i \in \{1, \ldots, m\}$, $\lambda \otimes P\{k_t^i + G_t^i p_t > 0\} = 0$. This proves (20).

Let us prove (21). Define $\eta$ by $d\eta_t^i = 1_{\{k_t^i + G_t^i p_{t-} > 0\}}\,d\xi_t^{*i} + 1_{\{k_t^i + G_t^i p_{t-} \le 0\}}\,d\xi_t^{*di}$, for $i = 1, \ldots, m$; then $d(\eta^i - \xi^{*i})_t = -1_{\{k_t^i + G_t^i p_t \le 0\}}\,d\xi_t^{*ci}$ and $\Delta\eta_t^i = \Delta\xi_t^{*i}$. Hence the relation (53) yields
$$-\sum_{i=1}^m E\left[\int_0^T \left\{k_t^i + G_t^i p_t\right\} 1_{\{k_t^i + G_t^i p_t \le 0\}}\,d\xi_t^{*ci}\right] \le 0. \quad (55)$$
Since the integrand $\{k_t^i + G_t^i p_t\}\,1_{\{k_t^i + G_t^i p_t \le 0\}}$ is nonpositive, by comparing with (55) we get
$$\sum_{i=1}^m E\left[\int_0^T \left\{k_t^i + G_t^i p_t\right\} 1_{\{k_t^i + G_t^i p_t \le 0\}}\,d\xi_t^{*ci}\right] = 0, \quad (56)$$
and then we conclude that
$$\sum_{i=1}^m \int_0^T 1_{\{k_t^i + G_t^i p_t < 0\}}\,d\xi_t^{*ci} = 0, \quad (57)$$
which is (21).

Expressions (22) and (23) are proved by using the same techniques. First, for each $i \in \{1, \ldots, m\}$ and $t \in [0,T]$ fixed, we define $\eta_s^i = \xi_s^{*i} + \delta_t(s)\,1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) > 0\}}$, where $\delta_t$ denotes the Dirac unit mass at $t$. Since $\delta_t$ is a discrete measure, $(\eta^i - \xi^{*i})^c = 0$ and $\Delta(\eta^i - \xi^{*i})_s = \delta_t(s)\,1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) > 0\}}$. Hence, if $P\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) > 0\} > 0$,
$$E\left[\sum_{i=1}^m \left\{k_t^i + G_t^i\left(p_{t-} + \Delta_N p_t\right)\right\} 1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) > 0\}}\right] > 0, \quad (58)$$
which contradicts (53), unless for every $i \in \{1, \ldots, m\}$ and $t \in [0,T]$ we have
$$P\left\{k_t^i + G_t^i\left(p_{t-} + \Delta_N p_t\right) > 0\right\} = 0. \quad (59)$$
This proves (22). Next, let $\eta$ be defined by
$$d\eta_t^i = 1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) \ge 0\}}\,d\xi_t^{*i} + 1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) < 0\}}\,d\xi_t^{*ci}. \quad (60)$$
Then $\Delta(\eta - \xi^*)_t^i = -1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) < 0\}}\,\Delta\xi_t^{*i}$, and the relation (53) can be written as
$$\sum_{i=1}^m E\left[-\sum_{0 < t \le T} \left\{k_t^i + G_t^i\left(p_{t-} + \Delta_N p_t\right)\right\} 1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) < 0\}}\,\Delta\xi_t^{*i}\right] \le 0. \quad (61)$$
By the fact that $k_t^i + G_t^i(p_{t-} + \Delta_N p_t) < 0$ and $\Delta\xi_t^{*i} \ge 0$ on the set considered, each summand on the left-hand side of (61) is nonnegative, so
$$\sum_{i=1}^m E\left[\sum_{0 < t \le T} \left\{k_t^i + G_t^i\left(p_{t-} + \Delta_N p_t\right)\right\} 1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) < 0\}}\,\Delta\xi_t^{*i}\right] = 0, \quad (62)$$
and hence
$$\sum_{i=1}^m 1_{\{k_t^i + G_t^i(p_{t-} + \Delta_N p_t) < 0\}}\,\Delta\xi_t^{*i} = 0. \quad (63)$$
Thus (23) holds. The proof is complete.

3.2. Sufficient Conditions of Optimality. It is well known that in the classical case (without the singular part of the control), the sufficient condition of optimality is of significant importance in the stochastic maximum principle, in the sense that it allows one to compute optimal controls. This result states that, under some concavity conditions, maximizing the Hamiltonian leads to an optimal control. In this section, we focus on proving the sufficient maximum principle for mixed classical-singular stochastic control problems, where the state of the system is governed by a stochastic differential equation with jumps, allowing both classical and singular controls.

Theorem 8 (sufficient condition of optimality in integral form). Let $(u^*, \xi^*)$ be an admissible control and denote by $x^*$ the associated controlled state process. Let $(p, q, r(\cdot))$ be the unique solution of the BSDE (18). Assume that $(x, u) \mapsto H(t, x, u, p_t, q_t, r_t(\cdot))$ and $x \mapsto g(x)$ are concave functions. Moreover, suppose that for all $t \in [0,T]$, $v \in A_1$, and $\eta \in \mathcal{U}_2$,
$$E\left[\int_0^T H_u(t, x_t^*, u_t^*, p_t, q_t, r_t(\cdot))\left(v_t - u_t^*\right) dt + \int_0^T \left\{k_t + G_t p_t\right\} d(\eta - \xi^*)_t^c + \sum_{0 < t \le T} \left\{k_t + G_t\left(p_{t-} + \Delta_N p_t\right)\right\} \Delta(\eta - \xi^*)_t\right] \le 0. \quad (64)$$
Then $(u^*, \xi^*)$ is an optimal control.

Proof. For convenience, we will use the following notations throughout the proof:
$$\Theta^*(t) = \Theta(t, x_t^*, u_t^*, p_t, q_t, r_t(\cdot)), \qquad \Theta(t) = \Theta(t, x_t, u_t, p_t, q_t, r_t(\cdot)), \quad \text{for } \Theta = H, H_x, H_u,$$
$$\delta\varphi(t) = \varphi(t, x_t^*, u_t^*) - \varphi(t, x_t, u_t), \quad \text{for } \varphi = f, b, \sigma^j,\ j = 1, \ldots, d,$$
$$\delta\gamma(t, e) = \gamma(t, x_t^*, u_t^*, e) - \gamma(t, x_t, u_t, e), \qquad \delta\gamma_-(t, e) = \gamma(t, x_{t-}^*, u_t^*, e) - \gamma(t, x_{t-}, u_t, e). \quad (65)$$

Let $(u, \xi)$ be an arbitrary admissible pair, and consider the difference
$$J(u^*, \xi^*) - J(u, \xi) = E\left[\int_0^T \delta f(t)\,dt + \int_0^T k_t\,d(\xi^* - \xi)_t\right] + E\left[g(x_T^*) - g(x_T)\right]. \quad (66)$$
We first note that, by concavity of $g$ and the integration by parts formula,
$$E\left[g(x_T^*) - g(x_T)\right] \ge E\left[(x_T^* - x_T)\,g_x(x_T^*)\right] = E\left[(x_T^* - x_T)\,p_T\right] = E\left[\int_0^T (x_{t-}^* - x_{t-})\,dp_t + \int_0^T p_{t-}\,d(x_t^* - x_t)\right] + E\left[\int_0^T \sum_{j=1}^d \delta\sigma^j(t)\,q_t^j\,dt + \int_0^T \int_E \delta\gamma_-(t, e)\,r_t(e)\,N(dt, de)\right] + E\left[\int_0^T G_t p_t\,d(\xi^* - \xi)_t^c + \sum_{0 < t \le T} G_t\left(p_{t-} + \Delta_N p_t\right)\Delta(\xi^* - \xi)_t\right], \quad (67)$$
which implies, since the stochastic integrals with respect to the local martingales have zero expectation (by the fact that $(p, q^j, r(\cdot)) \in \mathcal{S}^2 \times \mathcal{M}^2 \times \mathbb{L}^2_\nu$ for $j = 1, \ldots, d$), that
$$E\left[g(x_T^*) - g(x_T)\right] \ge E\left[\int_0^T (x_t^* - x_t)\left(-H_x^*(t)\right) dt\right] + E\left[\int_0^T \left\{p_t\,\delta b(t) + \sum_{j=1}^d \delta\sigma^j(t)\,q_t^j\right\} dt + \int_0^T \int_E \delta\gamma(t, e)\,r_t(e)\,\nu(de)\,dt\right] + E\left[\int_0^T G_t p_t\,d(\xi^* - \xi)_t^c + \sum_{0 < t \le T} G_t\left(p_{t-} + \Delta_N p_t\right)\Delta(\xi^* - \xi)_t\right]. \quad (68)$$
On the other hand, by the definition (17) of the Hamiltonian,
$$\delta f(t) = \left(H^*(t) - H(t)\right) - p_t\,\delta b(t) - \sum_{j=1}^d \delta\sigma^j(t)\,q_t^j - \int_E \delta\gamma(t, e)\,r_t(e)\,\nu(de),$$
and, due to the concavity of the Hamiltonian $H$ in $(x, u)$,
$$H^*(t) - H(t) - H_x^*(t)\left(x_t^* - x_t\right) \ge H_u^*(t)\left(u_t^* - u_t\right).$$
Combining this with (66) and (68), we obtain
$$J(u^*, \xi^*) - J(u, \xi) \ge E\left[\int_0^T H_u^*(t)\left(u_t^* - u_t\right) dt + \int_0^T \left\{k_t + G_t p_t\right\} d(\xi^* - \xi)_t^c + \sum_{0 < t \le T} \left\{k_t + G_t\left(p_{t-} + \Delta_N p_t\right)\right\} \Delta(\xi^* - \xi)_t\right]. \quad (69)$$
The right-hand side of (69) is the negative of the left-hand side of (64) evaluated at $v = u$ and $\eta = \xi$; hence (64) leads to $J(u^*, \xi^*) - J(u, \xi) \ge 0$, which means that $(u^*, \xi^*)$ is an optimal control for the problem (14).

The expression (64) is a sufficient condition of optimality in integral form. We want to rewrite this inequality in a suitable form for applications. This is the objective of the following theorem, which can be seen as a natural extension of [2, Theorem 2.2] to the jump setting and of [3, Theorem 2.1] to mixed regular-singular control problems.
Theorem 9 (sufficient conditions of optimality). Let $(u^\star,\xi^\star)$ be an admissible control and $x^\star$ the associated controlled state process. Let $(p,q,r(\cdot))$ be the unique solution of the BSDE (18). Assume that $(x,u)\mapsto H(t,x,u,p_t,q_t,r_t(\cdot))$ and $x\mapsto g(x)$ are concave functions. If in addition one assumes that

(i) for all $t\in[0,T]$, $v\in A_1$,

$$H(t,x_t^\star,u_t^\star,p_t,q_t,r_t(\cdot)) = \sup_{v\in A_1} H(t,x_t^\star,v,p_t,q_t,r_t(\cdot)), \quad dt\text{-a.e., } P\text{-a.s.}; \quad (70)$$

(ii) for all $t\in[0,T]$, with probability 1,

$$k_t^i + G_t^{i\top} p_t \le 0, \quad \text{for } i=1,\dots,m, \quad (71)$$

$$\sum_{i=1}^m 1_{\{k_t^i + G_t^{i\top} p_t < 0\}}\, d\xi_t^{\star c,i} = 0, \quad (72)$$

$$k_t^i + G_t^{i\top}(p_{t-} + \Delta_N p_t) \le 0, \quad \text{for } i=1,\dots,m, \quad (73)$$

$$\sum_{i=1}^m 1_{\{k_t^i + G_t^{i\top}(p_{t-} + \Delta_N p_t) < 0\}}\, \Delta\xi_t^{\star i} = 0; \quad (74)$$

then $(u^\star,\xi^\star)$ is an optimal control.

Proof. Using (71) and (72) yields

$$E\Big[\int_0^T \{k_t + G_t^\top p_t\}\, d\xi_t^{\star c}\Big] = \sum_{i=1}^m E\Big[\int_0^T \{k_t^i + G_t^{i\top} p_t\}\, d\xi_t^{\star c,i}\Big] = 0. \quad (75)$$

The same computation applied to (73) and (74) implies

$$E\Big[\sum_{0<t\le T} \{k_t + G_t^\top (p_{t-} + \Delta_N p_t)\}\, \Delta\xi_t^\star\Big] = 0. \quad (76)$$

Hence, from Definition 1, we have the inequality

$$E\Big[\int_0^T \{k_t + G_t^\top p_t\}\, d(\xi-\xi^\star)_t^c + \sum_{0<t\le T} \{k_t + G_t^\top (p_{t-} + \Delta_N p_t)\}\, \Delta(\xi-\xi^\star)_t\Big] \le 0. \quad (77)$$

The desired result follows from Theorem 8.

4. Relation to Dynamic Programming

In this section, we come back to the control problem studied in the previous section. We recall a verification theorem, which is useful to compute optimal controls. Then we show that the adjoint process defined in Section 3, as the unique solution to the BSDE (18), can be expressed as the gradient of the value function, which solves the HJB variational inequality.

4.1. A Verification Theorem. Let $x_s^{t,x}$ be the solution of the controlled SDE (8) for $s\ge t$, with initial value $x_t=x$. To put the problem in a Markovian framework, so that we can apply dynamic programming, we define the performance criterion

$$J^{(u,\xi)}(t,x) = E\Big[\int_t^T f(s,x_s,u_s)\, ds + \int_t^T k_s\, d\xi_s + g(x_T) \,\Big|\, x_t=x\Big]. \quad (78)$$

Since our objective is to maximize this functional, the value function of the singular control problem becomes

$$V(t,x) = \sup_{(u,\xi)\in\mathcal{U}} J^{(u,\xi)}(t,x). \quad (79)$$

If we do not apply any singular control, then the infinitesimal generator $\mathcal{A}^u$ associated with (8), acting on functions $\varphi$, coincides on $C_b^2(\mathbb{R}^n;\mathbb{R})$ with the parabolic integro-differential operator $\mathcal{A}^u$ given by

$$\mathcal{A}^u \varphi(t,x) = \sum_{i=1}^n b^i(t,x,u)\, \frac{\partial\varphi}{\partial x^i}(t,x) + \frac12 \sum_{i,j=1}^n a^{ij}(t,x,u)\, \frac{\partial^2\varphi}{\partial x^i \partial x^j}(t,x) + \int_E \Big\{\varphi(t,x+\gamma(t,x,u,e)) - \varphi(t,x) - \sum_{i=1}^n \gamma^i(t,x,u,e)\,\frac{\partial\varphi}{\partial x^i}(t,x)\Big\}\, \nu(de), \quad (80)$$

where $a^{ij} = \sum_{h=1}^n \sigma^{ih}\sigma^{jh}$ denotes the generic term of the symmetric matrix $\sigma\sigma^\top$.
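For intuition, the integro-differential generator (80) can be evaluated numerically in the one-dimensional case. The sketch below is only an illustration: the coefficients and the single-atom Lévy measure $\nu = \lambda\,\delta_{z_0}$ are assumed parameters, not values taken from the paper. Derivatives are approximated by central differences, and the result is compared with the closed-form value of the generator on the test function $\varphi(x)=x^2$.

```python
# Numerical sketch of the generator (80) for a 1-D jump diffusion
# dx = mu*x dt + sigma*x dB + theta*x*z N(dt,dz), with the illustrative
# Levy measure nu = lam * delta_{z0} (a single jump size z0).
mu, sigma, theta, lam, z0 = 0.03, 0.20, 0.10, 0.10, 1.0  # assumed parameters

def generator(phi, x, h=1e-4):
    """Approximate (A^u phi)(x) from (80) by central differences."""
    d1 = (phi(x + h) - phi(x - h)) / (2.0 * h)             # phi'(x)
    d2 = (phi(x + h) - 2.0 * phi(x) + phi(x - h)) / h**2   # phi''(x)
    drift = mu * x * d1
    diffusion = 0.5 * (sigma * x) ** 2 * d2
    # integral term: phi(x + gamma) - phi(x) - gamma * phi'(x),
    # with jump amplitude gamma = theta * x * z0 and mass lam
    jump = lam * (phi(x * (1.0 + theta * z0)) - phi(x) - theta * x * z0 * d1)
    return drift + diffusion + jump

x = 2.0
numeric = generator(lambda y: y * y, x)
# For phi(x) = x^2 the generator is known in closed form:
exact = (2.0 * mu + sigma**2 + lam * theta**2 * z0**2) * x**2
```

Since central differences are exact for quadratics up to rounding, the two values agree to high precision; the same routine can probe the sign conditions (85) and (87) for a candidate solution.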
The variational inequality associated with the singular control problem is

$$\max\Big\{\sup_u H^1\big(t,x,(W,W_t,W_x,W_{xx})(t,x),u\big),\; H^{2,i}\big(t,x,W_x(t,x)\big),\ i=1,\dots,m\Big\} = 0, \quad \text{for } (t,x)\in[0,T]\times\mathcal{O}, \quad (81)$$

$$W(T,x) = g(x), \quad \forall x\in\mathcal{O}. \quad (82)$$

$H^1$ and $H^{2,i}$, for $i=1,\dots,m$, are given by

$$H^1\big(t,x,(W,W_t,W_x,W_{xx})(t,x),u\big) = \frac{\partial W}{\partial t}(t,x) + \mathcal{A}^u W(t,x) + f(t,x,u), \quad (83)$$

$$H^{2,i}\big(t,x,W_x(t,x)\big) = \sum_{j=1}^n \frac{\partial W}{\partial x^j}(t,x)\, G_t^{ji} + k_t^i.$$

We start with the definition of classical solutions of the variational inequality (81).

Definition 10. Consider a function $W\in C^{1,2}([0,T]\times\mathcal{O})$, and define the nonintervention region by

$$C(W) = \Big\{(t,x)\in[0,T]\times\mathcal{O} :\ \max_{1\le i\le m} \sum_{j=1}^n \Big\{\frac{\partial W}{\partial x^j}(t,x)\, G_t^{ji} + k_t^i\Big\} < 0\Big\}. \quad (84)$$

We say that $W$ is a classical solution of (81) if

$$\frac{\partial W}{\partial t}(t,x) + \sup_u \{\mathcal{A}^u W(t,x) + f(t,x,u)\} = 0, \quad \forall (t,x)\in C(W), \quad (85)$$

$$\sum_{j=1}^n \frac{\partial W}{\partial x^j}(t,x)\, G_t^{ji} + k_t^i \le 0, \quad \forall (t,x)\in[0,T]\times\mathcal{O},\ \text{for } i=1,\dots,m, \quad (86)$$

$$\frac{\partial W}{\partial t}(t,x) + \mathcal{A}^u W(t,x) + f(t,x,u) \le 0, \quad (87)$$

for every $(t,x,u)\in[0,T]\times\mathcal{O}\times A_1$.

The following verification theorem is very useful for computing explicitly the value function and the optimal control, at least when the value function is sufficiently smooth.

Theorem 11. Let $W$ be a classical solution of (81) with the terminal condition (82), such that for some constants $c_1\ge 1$, $c_2\in(0,\infty)$, $|W(t,x)|\le c_2(1+|x|^{c_1})$. Then, for all $(t,x)\in[0,T]\times\mathcal{O}$ and $(u,\xi)\in\mathcal{U}$,

$$W(t,x)\ge J^{(u,\xi)}(t,x). \quad (88)$$

Furthermore, if there exists $(u^\star,\xi^\star)\in\mathcal{U}$ such that, with probability 1,

$$(t,x_t^\star)\in C(W), \quad \text{for Lebesgue-almost every } t\le T, \quad (89)$$

$$u_t^\star \in \arg\max_u \{\mathcal{A}^u W(t,x_t^\star) + f(t,x_t^\star,u)\}, \quad (90)$$

$$\sum_{i=1}^m \Big\{\sum_{j=1}^n \frac{\partial W}{\partial x^j}(t,x_t^\star)\, G_t^{ji} + k_t^i\Big\}\, d\xi_t^{\star c,i} = 0, \quad (91)$$

$$\Delta_\xi W(t,x_t^\star) + \sum_{i=1}^m k_t^i\, \Delta\xi_t^{\star i} = 0, \quad (92)$$

for all jumping times $t$ of $\xi_t^\star$, then it follows that $W(t,x) = J^{(u^\star,\xi^\star)}(t,x)$.

Proof. See [8, Theorem 5.2].

In the following, we present an example of optimal harvesting from a geometric Brownian motion with jumps; see, for example, [5, 8].

Example 12. Consider a population of size $X=\{X_t : t\ge 0\}$ which evolves according to the geometric Lévy process

$$dX_t = \mu X_t\, dt + \sigma X_t\, dB_t + \theta X_{t-} \int_{\mathbb{R}_+} z\, \tilde N(dt,dz) - d\xi_t, \quad \text{for } t\in[0,T_s], \qquad X_{0-} = x > 0, \quad (93)$$

where $T_s := \inf\{t\ge 0 : X_t = 0\}$ is the time of complete depletion, $\gamma\in(0,1)$, and $\mu$, $\sigma$, $\theta$, $\rho$ are positive constants with $\sigma^2/2 + \theta\int_{\mathbb{R}_+} z\,\nu(dz) \le \mu < \rho$. An admissible harvesting strategy $\xi_t$ is assumed to be nonnegative, nondecreasing, and continuous on the right, satisfying $E|\xi_{T_s}|^2 < \infty$, with $\xi_{0-}=0$ and $X_t > 0$. We denote by $\Pi(x)$ the class of such strategies. Here $\xi_t$ is the total number of individuals harvested up to time $t$. If we define the price per unit harvested at time $t$ by $\pi(t) = e^{-\rho t}$, and the utility rate obtained when the size of the population at $t$ is $X_t$ by $e^{-\rho t} X_t^\gamma$, then the objective is to maximize the expected total time-discounted value of the harvested individuals starting from a population of size $x$; that is, for any $\xi$ define

$$J(\xi) = E\Big[\int_0^{T_s} e^{-\rho t} X_t^\gamma\, dt + \int_{[0,T_s)} e^{-\rho t}\, d\xi_t\Big], \quad (94)$$

and the value function

$$V(t,x) = \sup_{\xi\in\Pi(t,x)} J^\xi(t,x). \quad (95)$$

Note that the definition of $\Pi(t,x)$ is similar to that of $\Pi(x)$, except that the starting time is $t$ and the state at $t$ is $x$. If we guess that the nonintervention region $C$ has the form $C = \{(t,x) : 0 < x < b\}$ for some barrier point $b>0$, then (85) takes the form

$$0 = \frac{\partial\Phi}{\partial t}(t,x) + \mu x \frac{\partial\Phi}{\partial x}(t,x) + \frac12 \sigma^2 x^2 \frac{\partial^2\Phi}{\partial x^2}(t,x) + \int_{\mathbb{R}_+} \Big\{\Phi(t,x(1+\theta z)) - \Phi(t,x) - \theta x z\, \frac{\partial\Phi}{\partial x}(t,x)\Big\}\, \nu(dz) + x^\gamma e^{-\rho t}, \quad (96)$$

for $0<x<b$.
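Substituting a power function $\Psi(x)=x^s$ into the time-stationary part of (96) reduces the integro-differential equation to an algebraic condition, because the operator maps $x^s$ to a constant multiple of $x^s$; this is what makes a power-type ansatz work in the next step. The quick check below (assumed parameters and an assumed single-atom Lévy measure $\nu=\lambda\,\delta_{z_0}$, purely for illustration) confirms this scaling property numerically.

```python
# Check that the stationary operator in (96) acts on powers x**s as
# multiplication by a constant h(s). Parameters are illustrative, and
# nu = lam * delta_{z0} is an assumed single-jump-size Levy measure.
mu, sigma, theta, rho, lam, z0 = 0.03, 0.20, 0.10, 0.08, 0.10, 1.0

def L(s, x):
    """Stationary part of (96) applied to Psi(x) = x**s (exact derivatives)."""
    psi, dpsi, d2psi = x**s, s * x**(s - 1), s * (s - 1) * x**(s - 2)
    return (-rho * psi + mu * x * dpsi + 0.5 * sigma**2 * x**2 * d2psi
            + lam * ((x * (1 + theta * z0))**s - psi - theta * z0 * x * dpsi))

s = 2.0
ratio_a = L(s, 1.5) / 1.5**s   # L(x^s) / x^s at two sample points
ratio_b = L(s, 4.0) / 4.0**s
# Both ratios should equal the constant
# h(s) = sigma^2 s^2/2 + (mu - sigma^2/2) s
#        + lam*((1+theta*z0)^s - 1 - theta*z0*s) - rho.
h_s = (0.5 * sigma**2 * s**2 + (mu - 0.5 * sigma**2) * s
       + lam * ((1 + theta * z0)**s - 1 - theta * z0 * s) - rho)
```

The ratios coincide and equal $h(s)$, so solving the equation inside the nonintervention region amounts to locating the zeros of $h$.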
We try a solution $\Phi$ of the form

$$\Phi(t,x) = \Psi(x)\, e^{-\rho t}; \quad (97)$$

hence

$$\mathcal{A}\Phi(t,x) = e^{-\rho t}\, \mathcal{A}_0 \Psi(x), \quad (98)$$

where $\Psi$ is the fundamental solution of the ordinary integro-differential equation

$$-\rho\Psi(x) + \mu x \Psi'(x) + \frac12 \sigma^2 x^2 \Psi''(x) + \int_{\mathbb{R}_+} \{\Psi(x(1+\theta z)) - \Psi(x) - \theta x z\, \Psi'(x)\}\, \nu(dz) + x^\gamma = 0. \quad (99)$$

We notice that, for $\Psi(x) = A x^r + K x^\gamma$ with an arbitrary constant $A$, we get

$$\mathcal{A}\Phi(t,x) = \big(A x^r h_1(r) + x^\gamma h_2(\gamma)\big)\, e^{-\rho t}, \quad (100)$$

where

$$h_1(r) = \frac12\sigma^2 r^2 + \Big(\mu - \frac12\sigma^2\Big) r + \int_{\mathbb{R}_+} \{(1+\theta z)^r - 1 - \theta z r\}\, \nu(dz) - \rho,$$

$$h_2(\gamma) = K\Big(\frac12\sigma^2\gamma^2 + \Big(\mu - \frac12\sigma^2\Big)\gamma + \int_{\mathbb{R}_+} \{(1+\theta z)^\gamma - 1 - \theta z\gamma\}\, \nu(dz) - \rho\Big) + 1. \quad (101)$$

Note that $h_1(1) = \mu - \rho < 0$ and $\lim_{r\to\infty} h_1(r) = \infty$; then there exists $r>1$ such that $h_1(r)=0$. The constant $K$ is given by

$$K = -\Big(\frac12\sigma^2\gamma^2 + \Big(\mu - \frac12\sigma^2\Big)\gamma + \int_{\mathbb{R}_+} \{(1+\theta z)^\gamma - 1 - \theta z\gamma\}\, \nu(dz) - \rho\Big)^{-1}, \quad (102)$$

so that $h_2(\gamma)=0$. Outside $C$ we require that $\Psi(x) = x + B$, where $B$ is a constant to be determined.
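With $r$ determined by $h_1(r)=0$ and $K$ by (102), the remaining free-boundary constants follow from the smooth-fit relations (104)-(105) imposed below at the barrier $b$. The sketch that follows is an illustrative computation, not part of the paper's argument: the Lévy measure is an assumed single atom $\nu=\lambda\,\delta_{z_0}$, the parameter values are assumptions chosen to satisfy $\sigma^2/2+\theta\lambda z_0\le\mu<\rho$, and $b$ is obtained by combining the first- and second-order smooth-fit conditions into one closed-form expression.

```python
# Numerical sketch of the free-boundary constants under the illustrative
# Levy measure nu = lam * delta_{z0}; all parameter values are assumptions.
mu, sigma, theta, rho, gam, lam, z0 = 0.03, 0.20, 0.10, 0.08, 0.5, 0.10, 1.0

def h1(r):
    # h_1 from (101)
    return (0.5 * sigma**2 * r**2 + (mu - 0.5 * sigma**2) * r
            + lam * ((1 + theta * z0)**r - 1 - theta * z0 * r) - rho)

# h_1(1) = mu - rho < 0 and h_1(r) -> infinity, so bisect on [1, 50]
lo, hi = 1.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if h1(mid) > 0:
        hi = mid
    else:
        lo = mid
r = 0.5 * (lo + hi)

# K from (102): the bracket there is exactly h_1 evaluated at gam
K = -1.0 / h1(gam)

# Combining the smooth-fit relations (C1: A r b^{r-1} + K gam b^{gam-1} = 1,
# C2: A r(r-1) b^{r-2} + K gam(gam-1) b^{gam-2} = 0) gives b in closed form:
b = (K * gam * (r - gam) / (r - 1)) ** (1.0 / (1.0 - gam))
A = (1.0 - K * gam * b**(gam - 1)) / (r * b**(r - 1))   # as in (105)
B = A * b**r + K * b**gam - b
```

The assertions below confirm that the computed constants satisfy $h_1(r)\approx 0$ and both smooth-fit conditions at $b$.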
This suggests that the value function must be of the form

$$\Phi(t,x) = \begin{cases} (A x^r + K x^\gamma)\, e^{-\rho t}, & \text{for } 0<x<b, \\ (x+B)\, e^{-\rho t}, & \text{for } x\ge b. \end{cases} \quad (103)$$

Assuming the smooth fit principle at the point $b$, the reflection threshold is

$$b = \Big(\frac{K\gamma(1-\gamma)}{A r(r-1)}\Big)^{1/(r-\gamma)}, \quad (104)$$

where

$$A = \frac{1 - K\gamma\, b^{\gamma-1}}{r\, b^{r-1}}, \qquad B = A b^r + K b^\gamma - b. \quad (105)$$

Since $\gamma<1$ and $r>1$, we deduce that $b>0$. To construct the optimal control $\xi^\star$, we consider the stochastic differential equation

$$dX_t^\star = \mu X_t^\star\, dt + \sigma X_t^\star\, dB_t + \theta X_{t-}^\star \int_{\mathbb{R}_+} z\, \tilde N(dt,dz) - d\xi_t^\star, \quad (106)$$

$$X_t^\star \le b, \quad t\ge 0, \quad (107)$$

$$1_{\{X_t^\star < b\}}\, d\xi_t^{\star c} = 0, \quad (108)$$

$$1_{\{X_{t-}^\star + \Delta_N X_t^\star \le b\}}\, \Delta\xi_t^\star = 0, \quad (109)$$

and if $X_{t-}^\star + \Delta_N X_t^\star > b$, then

$$\Delta\xi_t^\star = \min\{l>0 : X_{t-}^\star + \Delta_N X_t^\star - l = b\}. \quad (110)$$

Arguing as in [7], we can adapt Theorem 15 in [16] to obtain an identification of the optimal harvesting strategy as a local time of a reflected jump diffusion process. The system (106)-(109) then defines the so-called Skorokhod problem, whose solution is a pair $(X_t^\star, \xi_t^\star)$, where $X_t^\star$ is a jump diffusion process reflected at $b$. The conditions (89)-(92) ensure the existence of an increasing process $\xi_t^\star$ such that $X_t^\star$ stays in $\bar C$ for all times $t$. If the initial size $x\le b$, then $\xi_t^\star$ is nondecreasing and its continuous part $\xi_t^{\star c}$ increases only when $X_t^\star = b$, so as to ensure that $X_t^\star \le b$. On the other hand, we only have $\Delta\xi_t^\star > 0$ if the initial size $x>b$, in which case $\Delta\xi_0^\star = x-b$, or if $X_t^\star$ jumps out of the nonintervention region through the random measure $N$, that is, $X_{t-}^\star + \Delta_N X_t^\star > b$. In these cases we apply $\Delta\xi_t^\star > 0$ immediately to bring $X_t^\star$ back to $b$. It is easy to verify that, if $(X^\star,\xi^\star)$ is a solution of the Skorokhod problem (106)-(109), then $(X^\star,\xi^\star)$ is an optimal solution of the problem (93)-(94). By the construction of $\xi^\star$ and $\Phi$, all the conditions of the verification Theorem 11 are satisfied. More precisely, the value function along the optimal state reads

$$\Phi(t, X_t^\star) = \big(A\, (X_t^\star)^r + K\, (X_t^\star)^\gamma\big)\, e^{-\rho t}, \quad \text{for all } t\in[0,T_s].$$

4.2. Link between the SMP and DPP. Compared with the stochastic maximum principle, one would expect the solution $(p,q,r(\cdot))$ of the BSDE (18) to correspond to the derivatives of the classical solution of the variational inequalities (81)-(82). This is confirmed by the following theorem, which extends [3, Theorem 3.1] to control problems with a singular component and [2, Theorem 3.3] to diffusions with jumps.

Theorem 13.
Let $V$ be a classical solution of (81) with the terminal condition (82). Assume that $V\in C^{1,3}([0,T]\times\mathcal{O})$, with $\mathcal{O}=\mathbb{R}^n$, and that there exists $(u^\star,\xi^\star)\in\mathcal{U}$ such that the conditions (89)-(92) are satisfied. Then the solution of the BSDE (18) is given by

$$p_t = V_x(t,x_t^\star), \quad (111)$$

$$q_t = V_{xx}(t,x_t^\star)\, \sigma(t,x_t^\star,u_t^\star), \qquad r_t(\cdot) = V_x\big(t,x_t^\star + \gamma(t,x_t^\star,u_t^\star,\cdot)\big) - V_x(t,x_t^\star). \quad (112)$$

Proof. Throughout the proof, we use the following abbreviations: for $i,j=1,\dots,n$ and $h=1,\dots,n$,

$$\phi_1(t) = \phi_1(t,x_t^\star,u_t^\star), \quad \text{for } \phi_1 = b^i,\ \sigma^i,\ \sigma^{ih},\ f,\ a^{ij},\ \frac{\partial b^i}{\partial x^j},\ \frac{\partial f}{\partial x^j},\ \frac{\partial a^{ij}}{\partial x^j},\ \frac{\partial \sigma^{ih}}{\partial x^j},$$

$$\phi_2(t,e) = \phi_2(t,x_t^\star,u_t^\star,e), \quad \text{for } \phi_2 = \gamma,\ \gamma^i,\ \frac{\partial\gamma^i}{\partial x^j},\ \frac{\partial\gamma}{\partial x^j},$$

$$\gamma_-(t,e) = \gamma(t,x_{t-}^\star,u_t^\star,e), \qquad \gamma_-^i(t,e) = \gamma^i(t,x_{t-}^\star,u_t^\star,e). \quad (113)$$

From Itô's rule applied to the semimartingale $(\partial V/\partial x^i)(t,x_t^\star)$, one has

$$\begin{aligned} \frac{\partial V}{\partial x^i}(s,x_s^\star) = {} & \frac{\partial V}{\partial x^i}(t,x_t^\star) + \int_t^s \Big\{\frac{\partial^2 V}{\partial r\,\partial x^i}(r,x_r^\star) + \sum_{j=1}^n b^j(r)\,\frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_r^\star) + \frac12\sum_{j,k=1}^n a^{jk}(r)\,\frac{\partial^3 V}{\partial x^i\partial x^j\partial x^k}(r,x_r^\star) \\ & \quad + \int_E \Big(\frac{\partial V}{\partial x^i}(r,x_r^\star+\gamma(r,e)) - \frac{\partial V}{\partial x^i}(r,x_r^\star) - \sum_{j=1}^n \frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_r^\star)\,\gamma^j(r,e)\Big)\nu(de)\Big\}\,dr \\ & + \int_t^s \sum_{j=1}^n \frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_r^\star)\,\sigma^j(r)\,dB_r + \int_t^s\int_E \Big\{\frac{\partial V}{\partial x^i}(r,x_{r-}^\star+\gamma_-(r,e)) - \frac{\partial V}{\partial x^i}(r,x_{r-}^\star)\Big\}\,\tilde N(dr,de) \\ & + \int_t^s \sum_{j=1}^n\sum_{l=1}^m \frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_r^\star)\, G_r^{jl}\, d\xi_r^{\star c,l} + \sum_{t<r\le s} \Big\{\Delta_\xi \frac{\partial V}{\partial x^i}(r,x_r^\star) - \sum_{j=1}^n \frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_{r-}^\star)\,\Delta_\xi x_r^{\star j}\Big\}, \end{aligned} \quad (114)$$

where the sum is taken over all jumping times $r$ of $\xi^\star$ and

$$\Delta_\xi x_r^{\star j} = x_r^{\star j} - (x_{r-}^{\star j} + \Delta_N x_r^{\star j}) = \sum_{l=1}^m G_r^{jl}\, \Delta\xi_r^{\star l}, \quad \text{for } j=1,\dots,n, \quad (115)$$

$\Delta_N x_r^\star$ being the jump caused by the random measure $N$. Here $\xi^{\star c}$ denotes the continuous part of $\xi^\star$, that is, $\xi^{\star c} = \xi^\star - \sum_{t<r\le s}\Delta\xi_r^\star$; with this notation the singular contribution in (114) splits into the $d\xi^{\star c}$-integral and the jump sum (116).

We first show that the singular terms vanish. Write

$$\int_t^s \sum_{j,l} \frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_r^\star)\,G_r^{jl}\,d\xi_r^{\star c,l} = \int_t^s \sum_{j,l} \frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_r^\star)\,G_r^{jl}\,\big(1_{\{(r,x_r^\star)\in D^l\}} + 1_{\{(r,x_r^\star)\in C^l\}}\big)\,d\xi_r^{\star c,l}. \quad (117)$$

For every $(t,x)\in D^l$, the function $x\mapsto \sum_j (\partial V/\partial x^j)(t,x)\,G_t^{jl} + k_t^l$ vanishes and, by (86), attains there its maximum value; hence

$$\sum_{j=1}^n \frac{\partial^2 V}{\partial x^i\partial x^j}(t,x)\,G_t^{jl} = \frac{\partial}{\partial x^i}\Big\{\sum_{j=1}^n \frac{\partial V}{\partial x^j}(t,x)\,G_t^{jl} + k_t^l\Big\} = 0. \quad (118)$$

This proves

$$\int_t^s \sum_{j=1}^n\sum_{l=1}^m \frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_r^\star)\,G_r^{jl}\,1_{\{(r,x_r^\star)\in D^l\}}\,d\xi_r^{\star c,l} = 0. \quad (119)$$

Furthermore, for every $(t,x)\in C^l$ we have $\sum_{j=1}^n (\partial V/\partial x^j)(t,x)\,G_t^{jl} + k_t^l < 0$, and (91) implies that $\int 1_{\{(r,x_r^\star)\in C^l\}}\,d\xi_r^{\star c,l} = 0$; thus

$$\int_t^s \sum_{j=1}^n\sum_{l=1}^m \frac{\partial^2 V}{\partial x^i\partial x^j}(r,x_r^\star)\,G_r^{jl}\,1_{\{(r,x_r^\star)\in C^l\}}\,d\xi_r^{\star c,l} = 0. \quad (120)$$

The mean value theorem yields

$$\Delta_\xi \frac{\partial V}{\partial x^i}(r,x_r^\star) = \Big(\frac{\partial V}{\partial x^i}\Big)_x (r,y(r))\, \Delta_\xi x_r^\star, \quad (121)$$

where $y(r)$ is some point on the straight line between $x_{r-}^\star + \Delta_N x_r^\star$ and $x_r^\star$, and $(\partial V/\partial x^i)_x$ represents the gradient of $\partial V/\partial x^i$. To prove that the right-hand side vanishes, it is enough to check that if $\Delta\xi_r^{\star l} > 0$, then $\sum_j (\partial^2 V/\partial x^i\partial x^j)(r,y(r))\,G_r^{jl} = 0$, for $l=1,\dots,m$. It is clear by (92) that

$$0 = \Delta_\xi V(r,x_r^\star) + \sum_{l=1}^m k_r^l\,\Delta\xi_r^{\star l} = \sum_{l=1}^m \Big\{\sum_{j=1}^n \frac{\partial V}{\partial x^j}(r,y(r))\,G_r^{jl} + k_r^l\Big\}\,\Delta\xi_r^{\star l}. \quad (122)$$

Since each summand is nonpositive by (86), if $\Delta\xi_r^{\star l} > 0$, then $(r,y(r))\in D^l$. According to (118), we obtain

$$\sum_{j=1}^n \frac{\partial^2 V}{\partial x^i\partial x^j}(r,y(r))\,G_r^{jl} = \frac{\partial}{\partial x^i}\Big\{\sum_{j=1}^n \frac{\partial V}{\partial x^j}(r,y(r))\,G_r^{jl} + k_r^l\Big\} = 0. \quad (123)$$

This shows that

$$\sum_{t<r\le s} \Delta_\xi \frac{\partial V}{\partial x^i}(r,x_r^\star) = 0. \quad (124)$$

On the other hand, define

$$A(t,x,u) = \frac{\partial V}{\partial t}(t,x) + \sum_{j=1}^n b^j(t,x,u)\,\frac{\partial V}{\partial x^j}(t,x) + \frac12\sum_{j,k=1}^n a^{jk}(t,x,u)\,\frac{\partial^2 V}{\partial x^j\partial x^k}(t,x) + f(t,x,u) + \int_E \Big\{V(t,x+\gamma(t,x,u,e)) - V(t,x) - \sum_{j=1}^n \gamma^j(t,x,u,e)\,\frac{\partial V}{\partial x^j}(t,x)\Big\}\,\nu(de). \quad (125)$$

If we differentiate $A(t,x,u)$ with respect to $x^i$ and evaluate the result at $(x,u) = (x_t^\star,u_t^\star)$, we deduce easily from (85), (89), and (90) that

$$\begin{aligned} & \frac{\partial^2 V}{\partial t\,\partial x^i}(t,x_t^\star) + \sum_{j} b^j(t)\,\frac{\partial^2 V}{\partial x^i\partial x^j}(t,x_t^\star) + \frac12\sum_{j,k} a^{jk}(t)\,\frac{\partial^3 V}{\partial x^i\partial x^j\partial x^k}(t,x_t^\star) \\ & \qquad + \int_E \Big\{\frac{\partial V}{\partial x^i}(t,x_t^\star+\gamma(t,e)) - \frac{\partial V}{\partial x^i}(t,x_t^\star) - \sum_j \gamma^j(t,e)\,\frac{\partial^2 V}{\partial x^i\partial x^j}(t,x_t^\star)\Big\}\,\nu(de) \\ & \quad = -\sum_j \frac{\partial b^j}{\partial x^i}(t)\,\frac{\partial V}{\partial x^j}(t,x_t^\star) - \frac12\sum_{j,k} \frac{\partial a^{jk}}{\partial x^i}(t)\,\frac{\partial^2 V}{\partial x^j\partial x^k}(t,x_t^\star) - \frac{\partial f}{\partial x^i}(t) \\ & \qquad - \int_E \sum_j \frac{\partial \gamma^j}{\partial x^i}(t,e)\Big\{\frac{\partial V}{\partial x^j}(t,x_t^\star+\gamma(t,e)) - \frac{\partial V}{\partial x^j}(t,x_t^\star)\Big\}\,\nu(de). \end{aligned} \quad (126)$$

Finally, substituting (119), (120), (124), and (126) into (114)-(116) yields

$$\begin{aligned} d\Big(\frac{\partial V}{\partial x^i}(t,x_t^\star)\Big) = {} & -\Big\{\sum_j \frac{\partial b^j}{\partial x^i}(t)\,\frac{\partial V}{\partial x^j}(t,x_t^\star) + \frac12\sum_{j,k} \frac{\partial a^{jk}}{\partial x^i}(t)\,\frac{\partial^2 V}{\partial x^j\partial x^k}(t,x_t^\star) + \frac{\partial f}{\partial x^i}(t) \\ & \qquad + \int_E \sum_j \frac{\partial\gamma^j}{\partial x^i}(t,e)\Big(\frac{\partial V}{\partial x^j}(t,x_t^\star+\gamma(t,e)) - \frac{\partial V}{\partial x^j}(t,x_t^\star)\Big)\,\nu(de)\Big\}\,dt \\ & + \sum_j \frac{\partial^2 V}{\partial x^i\partial x^j}(t,x_t^\star)\,\sigma^j(t)\,dB_t + \int_E \Big\{\frac{\partial V}{\partial x^i}(t,x_{t-}^\star+\gamma_-(t,e)) - \frac{\partial V}{\partial x^i}(t,x_{t-}^\star)\Big\}\,\tilde N(dt,de). \end{aligned} \quad (127)$$

The continuity of $\partial V/\partial x^i$ leads to

$$\lim_{t\to T} \frac{\partial V}{\partial x^i}(t,x_t^\star) = \frac{\partial V}{\partial x^i}(T,x_T^\star) = \frac{\partial g}{\partial x^i}(x_T^\star), \quad \text{for each } i=1,\dots,n. \quad (128)$$

Clearly,

$$\frac12\sum_{j,k} \frac{\partial a^{jk}}{\partial x^i}(t)\,\frac{\partial^2 V}{\partial x^j\partial x^k}(t,x_t^\star) = \frac12\sum_{j,k} \frac{\partial}{\partial x^i}\Big(\sum_{h=1}^n \sigma^{jh}(t)\,\sigma^{kh}(t)\Big)\frac{\partial^2 V}{\partial x^j\partial x^k}(t,x_t^\star) = \sum_{j=1}^n\sum_{h=1}^n \Big(\sum_{k=1}^n \sigma^{kh}(t)\,\frac{\partial^2 V}{\partial x^j\partial x^k}(t,x_t^\star)\Big)\frac{\partial \sigma^{jh}}{\partial x^i}(t). \quad (129)$$

Now, from (17) we have

$$\frac{\partial H}{\partial x^i}(t,x,u,p,q,r(\cdot)) = \sum_{j=1}^n \frac{\partial b^j}{\partial x^i}(t,x,u)\,p^j + \sum_{h=1}^n\sum_{j=1}^n \frac{\partial \sigma^{jh}}{\partial x^i}(t,x,u)\,q^{jh} + \frac{\partial f}{\partial x^i}(t,x,u) + \int_E \sum_{j=1}^n \frac{\partial\gamma^j}{\partial x^i}(t,x,u,e)\,r^j(e)\,\nu(de). \quad (130)$$

The $i$th coordinate $p_t^i$ of the adjoint process $p_t$ satisfies

$$dp_t^i = -\frac{\partial H}{\partial x^i}(t,x_t^\star,u_t^\star,p_t,q_t,r_t(\cdot))\,dt + q_t^i\,dB_t + \int_E r_{t-}^i(e)\,\tilde N(dt,de), \qquad p_T^i = \frac{\partial g}{\partial x^i}(x_T^\star), \quad (131)$$

with $q_t^i\,dB_t = \sum_{h=1}^n q_t^{ih}\,dB_t^h$. Hence, the uniqueness of the solution of (131) and relation (128) allow us to conclude that

$$p_t^i = \frac{\partial V}{\partial x^i}(t,x_t^\star), \qquad q_t^{ih} = \sum_{j=1}^n \frac{\partial^2 V}{\partial x^i\partial x^j}(t,x_t^\star)\,\sigma^{jh}(t), \qquad r_t^i(\cdot) = \frac{\partial V}{\partial x^i}(t,x_{t-}^\star+\gamma_-(t,\cdot)) - \frac{\partial V}{\partial x^i}(t,x_{t-}^\star), \quad (132)$$

where $q_t^{ih}$ is the generic element of the matrix $q_t$ and $x_t^\star$ is the optimal solution of the controlled SDE (8).

Example 14. We return to the example of the previous section; we now illustrate a verification result for the maximum principle. We suppose that $T$ is a fixed time. In this case the Hamiltonian takes the form

$$H(t,X_t,p_t,q_t,r_t(\cdot)) = \mu X_t\,p_t + \sigma X_t\,q_t + X_t^\gamma\,e^{-\rho t} + \theta X_{t-}\int_{\mathbb{R}_+} z\,r_t(z)\,\nu(dz). \quad (133)$$

Let $\xi^\star$ be a candidate for an optimal control, let $X^\star$ be the corresponding state process, and let $(p^\star,q^\star,r^\star(\cdot))$ be the corresponding solution of the following adjoint equation: for all $t\in[0,T)$,

$$dp_t^\star = -\Big(\mu p_t^\star + \sigma q_t^\star + \theta\int_{\mathbb{R}_+} z\,r_t^\star(z)\,\nu(dz) + \gamma\, (X_t^\star)^{\gamma-1} e^{-\rho t}\Big)\,dt + q_t^\star\,dB_t + \int_{\mathbb{R}_+} r_{t-}^\star(z)\,\tilde N(dt,dz), \quad (134)$$

$$-p_t^\star + e^{-\rho t} \le 0, \quad \forall t, \quad (135)$$

$$1_{\{-p_t^\star + e^{-\rho t} < 0\}}\, d\xi_t^{\star c} = 0, \quad (136)$$

$$-(p_{t-}^\star + \Delta_N p_t^\star) + e^{-\rho t} \le 0, \quad (137)$$

$$1_{\{-(p_{t-}^\star + \Delta_N p_t^\star) + e^{-\rho t} < 0\}}\, \Delta\xi_t^\star = 0. \quad (138)$$

Since $g=0$, we assume the transversality condition

$$E[p_T^\star\,(\xi_T^\star - \xi_T)] \le 0. \quad (139)$$

We remark that $\Delta_\xi p_t^\star = 0$; then $p_{t-}^\star + \Delta_N p_t^\star = p_t^\star$, and the condition (138) reduces to

$$1_{\{-p_t^\star + e^{-\rho t} < 0\}}\, \Delta\xi_t^\star = 0. \quad (140)$$

We use the relation between the value function and the solution $(p^\star,q^\star,r^\star(z))$ of the adjoint equation along the optimal state, and prove that the solution of the adjoint equation is represented as

$$\begin{aligned} p_t^\star & = \big(A r\,(X_t^\star)^{r-1} + K\gamma\,(X_t^\star)^{\gamma-1}\big)\,e^{-\rho t}, \\ q_t^\star & = \sigma\big(A r(r-1)\,(X_t^\star)^{r-1} + K\gamma(\gamma-1)\,(X_t^\star)^{\gamma-1}\big)\,e^{-\rho t}, \\ r_t^\star(z) & = \big(A r\,((1+\theta z)^{r-1}-1)\,(X_t^\star)^{r-1} + K\gamma\,((1+\theta z)^{\gamma-1}-1)\,(X_t^\star)^{\gamma-1}\big)\,e^{-\rho t}, \end{aligned} \quad (141)$$

for all $t\in[0,T)$. To see this, we differentiate the process $(A r\,(X_t^\star)^{r-1} + K\gamma\,(X_t^\star)^{\gamma-1})\,e^{-\rho t}$ using Itô's rule for semimartingales, following the same procedure as in the proof of Theorem 13. The conclusion then follows readily from the verification of (135), (136), and (139).
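The reflection mechanism (106)-(110) is easy to visualize with a discretized path: whenever the population would exceed the barrier, the overshoot is harvested immediately. The Euler-type sketch below is an illustration only; the parameter values, the barrier level, and the compound-Poisson jump specification (rate $\lambda$, fixed jump size $z_0$) are all assumptions, not quantities derived in the paper.

```python
import math, random

# Euler-type sketch of the Skorokhod system (106)-(110): harvest just enough
# to keep the population at or below the barrier b. Parameters, the barrier,
# and the compound-Poisson jumps are illustrative assumptions.
random.seed(0)
mu, sigma, theta, lam, z0 = 0.03, 0.20, 0.10, 0.10, 1.0
b, x0, T, n = 2.0, 1.5, 1.0, 2000
dt = T / n

X, xi = x0, 0.0
if X > b:          # initial harvest when starting above the barrier:
    xi += X - b    # Delta xi_0 = x0 - b, as in the discussion of (110)
    X = b
path, harvest = [X], [xi]
for _ in range(n):
    dB = random.gauss(0.0, math.sqrt(dt))
    jump = z0 if random.random() < lam * dt else 0.0   # compound Poisson step
    X = X + mu * X * dt + sigma * X * dB + theta * X * jump
    if X > b:      # reflection: harvest the overshoot, cf. (109)-(110)
        xi += X - b
        X = b
    path.append(X)
    harvest.append(xi)
```

By construction the simulated pair mimics the defining properties of the Skorokhod solution: the state never exceeds $b$, and the harvesting process is nondecreasing and increases only at the barrier.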
First, an explicit formula for $X_t$ is given in [4] by

$$X_t = \Gamma_t\Big\{x - \Big(\int_{[0,t)} \Gamma_s^{-1} e^{-\rho s}\, d\xi_s^c + \sum_{0<s\le t} \Gamma_s^{-1}\beta_s\, e^{-\rho s}\, \Delta\xi_s\Big)\Big\}, \quad \text{for } t\in[0,T], \quad (142)$$

where $\beta_t = \big(\int_{\mathbb{R}_+} \theta z\, N(\{t\},dz)\big)\big(1 + \int_{\mathbb{R}_+} \theta z\, N(\{t\},dz)\big)^{-1}$, and $\Gamma_t$ is the geometric Lévy process defined by

$$\Gamma_t = \exp\Big\{\Big(\mu - \frac{\sigma^2}{2} + \int_{\mathbb{R}_+} \{\ln(1+\theta z) - \theta z\}\,\nu(dz)\Big)t + \sigma B_t + \int_0^t\int_{\mathbb{R}_+} \ln(1+\theta z)\,\tilde N(ds,dz)\Big\}. \quad (143)$$

From the representation (142) and the fact that $X_{T\wedge t}^\star \le x\,\Gamma_{T\wedge t}\,e^{\rho(T\wedge t)}$, we get

$$1 - \frac{X_{T\wedge t}^\star}{x\,\Gamma_{T\wedge t}} \le \frac1x\Big(\int_{[0,T\wedge t)} \Gamma_s^{-1} e^{-\rho s}\, d\xi_s^c + \sum_{0<s\le T\wedge t} \Gamma_s^{-1}\beta_s\, e^{-\rho s}\, \Delta\xi_s\Big) < \infty; \quad (144)$$

hence, by the Cauchy-Schwarz inequality,

$$E[p_{T\wedge t}^\star\,(\xi_{T\wedge t}^\star - \xi_{T\wedge t})] \le E\Big[\Big(\big(A r\,(X_{T\wedge t}^\star)^{r} + K\gamma\,(X_{T\wedge t}^\star)^{\gamma}\big)\,e^{-\rho(T\wedge t)}\Big)^2\Big]^{1/2} \times E\Big[\Big(\frac1x\Big(\int_{[0,T\wedge t)} \Gamma_s^{-1} e^{-\rho s}\, d\xi_s^c + \sum_{0<s\le T\wedge t} \Gamma_s^{-1}\beta_s\, e^{-\rho s}\, \Delta\xi_s\Big)\Big)^2\Big]^{1/2}. \quad (145)$$

By the dominated convergence theorem, we obtain (139) by sending $t$ to infinity in (145). A simple computation shows that the conditions (135)-(138) are consequences of (107)-(109). This shows in particular that the pair $(X_t^\star,\xi_t^\star)$ satisfies the sufficient optimality conditions, and hence it is optimal. This completes the proof of the following result.

Theorem 15. Suppose that $\sigma^2/2 + \theta\int_{\mathbb{R}_+} z\,\nu(dz) \le \mu < \rho$ and $z\ge 0$ $\nu$-a.e. If the strategy $\xi^\star$ is chosen such that the corresponding solution of the adjoint process is given by (141), then this choice is optimal.

Remark 16. In this example, it is shown in particular that the relationship between the stochastic maximum principle and dynamic programming can be very useful for solving explicitly constrained backward stochastic differential equations with a transversality condition.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the referees and the associate editor for valuable suggestions that led to a substantial improvement of the paper. This work has been partially supported by the Direction Générale de la Recherche Scientifique et du Développement Technologique (Algeria), under Contract no. 051/PNR/UMKB/2011, and by the French Algerian Cooperation Program, Tassili 13 MDU 887.

References

[1] S. Bahlali and A. Chala, "The stochastic maximum principle in optimal control of singular diffusions with non linear coefficients," Random Operators and Stochastic Equations, vol. 13, no. 1, pp. 1–10, 2005.
[2] K. Bahlali, F. Chighoub, and B. Mezerdi, "On the relationship between the stochastic maximum principle and dynamic programming in singular stochastic control," Stochastics, vol. 84, no. 2-3, pp. 233–249, 2012.
[3] N. C. Framstad, B. Øksendal, and A. Sulem, "Sufficient stochastic maximum principle for the optimal control of jump diffusions and applications to finance," Journal of Optimization Theory and Applications, vol. 121, pp. 77–98, 2004; erratum in Journal of Optimization Theory and Applications, vol. 124, no. 2, pp. 511–512, 2005.
[4] B. Øksendal and A. Sulem, "Singular stochastic control and optimal stopping with partial information of Itô-Lévy processes," SIAM Journal on Control and Optimization, vol. 50, no. 4, pp. 2254–2287, 2012.
[5] L. H. R. Alvarez and T. A. Rakkolainen, "On singular stochastic control and optimal stopping of spectrally negative jump diffusions," Stochastics, vol. 81, no. 1, pp. 55–78, 2009.
[6] W. H. Fleming and H. M. Soner, Controlled Markov Processes and Viscosity Solutions, vol. 25, Springer, New York, NY, USA, 1993.
[7] N. C. Framstad, B. Øksendal, and A. Sulem, "Optimal consumption and portfolio in a jump diffusion market with proportional transaction costs," Journal of Mathematical Economics, vol. 35, no. 2, pp. 233–257, 2001, Arbitrage and control problems in finance.
[8] B. Øksendal and A. Sulem, Applied Stochastic Control of Jump Diffusions, Springer, Berlin, Germany, 2005.
[9] U. G. Haussmann and W. Suo, "Singular optimal stochastic controls. II. Dynamic programming," SIAM Journal on Control and Optimization, vol. 33, no. 3, pp. 937–959, 1995.
[10] J. A. Bather and H. Chernoff, "Sequential decisions in the control of a spaceship (finite fuel)," Journal of Applied Probability, vol. 4, pp. 584–604, 1967.
[11] V. E. Beneš, L. A. Shepp, and H. S. Witsenhausen, "Some solvable stochastic control problems," Stochastics, vol. 4, no. 1, pp. 39–83, 1980/81.
[12] P.-L. Lions and A.-S. Sznitman, "Stochastic differential equations with reflecting boundary conditions," Communications on Pure and Applied Mathematics, vol. 37, no. 4, pp. 511–537, 1984.
[13] F. M. Baldursson and I. Karatzas, "Irreversible investment and industry equilibrium," Finance and Stochastics, vol. 1, pp. 69–89, 1997.
[14] E. M. Lungu and B. Øksendal, "Optimal harvesting from a population in a stochastic crowded environment," Mathematical Biosciences, vol. 145, no. 1, pp. 47–75, 1997.
[15] M. H. A. Davis and A. R. Norman, "Portfolio selection with transaction costs," Mathematics of Operations Research, vol. 15, no. 4, pp. 676–713, 1990.
[16] M. Chaleyat-Maurel, N. El Karoui, and B. Marchal, "Réflexion discontinue et systèmes stochastiques," The Annals of Probability, vol. 8, no. 6, pp. 1049–1067, 1980.
[17] P. L. Chow, J.-L. Menaldi, and M. Robin, "Additive control of stochastic linear systems with finite horizon," SIAM Journal on Control and Optimization, vol. 23, no. 6, pp. 858–899, 1985.
[18] A. Cadenillas and U. G. Haussmann, "The stochastic maximum principle for a singular control problem," Stochastics and Stochastics Reports, vol. 49, no. 3-4, pp. 211–237, 1994.
[19] S. Bahlali and B. Mezerdi, "A general stochastic maximum principle for singular control problems," Electronic Journal of Probability, vol. 10, pp. 988–1004, 2005.
[20] S. G. Peng, "A general stochastic maximum principle for optimal control problems," SIAM Journal on Control and Optimization, vol. 28, no. 4, pp. 966–979, 1990.
[21] S. Bahlali, B. Djehiche, and B. Mezerdi, "The relaxed stochastic maximum principle in singular optimal control of diffusions," SIAM Journal on Control and Optimization, vol. 46, no. 2, pp. 427–444, 2007.
[22] K. Bahlali, F. Chighoub, B. Djehiche, and B. Mezerdi, "Optimality necessary conditions in singular stochastic control problems with nonsmooth data," Journal of Mathematical Analysis and Applications, vol. 355, no. 2, pp. 479–494, 2009.
[23] H. Pham, "Optimal stopping of controlled jump diffusion processes: a viscosity solution approach," Journal of Mathematical Systems, Estimation, and Control, vol. 8, pp. 1–27, 1998.
[24] J.-T. Shi and Z. Wu, "Relationship between MP and DPP for the stochastic optimal control problem of jump diffusions," Applied Mathematics and Optimization, vol. 63, no. 2, pp. 151–189, 2011.
[25] S. J. Tang and X. J. Li, "Necessary conditions for optimal control of stochastic systems with random jumps," SIAM Journal on Control and Optimization, vol. 32, no. 5, pp. 1447–1475, 1994.
[26] X. Y. Zhou, "A unified treatment of maximum principle and dynamic programming in stochastic controls," Stochastics and Stochastics Reports, vol. 36, no. 3-4, pp. 137–161, 1991.
[27] J. Yong and X. Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, vol. 43, Springer, New York, NY, USA, 1999.
[28] A. Eyraud-Loisel, "Backward stochastic differential equations with enlarged filtration: option hedging of an insider trader in a financial market with jumps," Stochastic Processes and Their Applications, vol. 115, no. 11, pp. 1745–1763, 2005.
[29] G. Barles, R. Buckdahn, and E. Pardoux, "BSDEs and integral-partial differential equations," Stochastics and Stochastics Reports, vol. 60, no. 1-2, pp. 57–83, 1997.
[30] N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes, vol. 24, North-Holland, Amsterdam, The Netherlands, 2nd edition, 1989.