Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 761306, 7 pages
http://dx.doi.org/10.1155/2013/761306

Research Article
Nonzero-Sum Stochastic Differential Game between Controller and Stopper for Jump Diffusions

Yan Wang,1,2 Aimin Song,2 Cheng-De Zheng,2 and Enmin Feng1
1 School of Mathematical Sciences, Dalian University of Technology, Dalian 116023, China
2 School of Science, Dalian Jiaotong University, Dalian 116028, China

Correspondence should be addressed to Yan Wang; wymath@163.com

Received 5 February 2013; Accepted 7 May 2013
Academic Editor: Ryan Loxton

Copyright © 2013 Yan Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider a nonzero-sum stochastic differential game which involves two players, a controller and a stopper. The controller chooses a control process, and the stopper selects the stopping rule which halts the game. The game is studied in a jump diffusion setting within the class of Markov controls. Using a dynamic programming approach, we prove a verification theorem in terms of variational inequality-Hamilton-Jacobi-Bellman (VIHJB) equations for the solutions of the game. Furthermore, we apply the verification theorem to characterize a Nash equilibrium of the game in a specific example.

1. Introduction

In this paper we study a nonzero-sum stochastic differential game with two players, a controller and a stopper. The state $X(\cdot)$ of the game evolves according to a stochastic differential equation driven by jump diffusions. The controller chooses the control process $u(\cdot)$, which enters the drift and volatility of $X(\cdot)$ at each time $t$, and the stopper decides the duration of the game in the form of a stopping rule $\tau$ for the process $X(\cdot)$. Each player aims to maximize his own expected payoff.
In order to illustrate the motivation and the application background of this game, we present a model from finance.

Example 1. Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge 0}, P)$ be a filtered probability space, let $B(t)$ be an $m$-dimensional Brownian motion, and let $\tilde N(dt,dz) = (\tilde N_1(dt,dz), \dots, \tilde N_l(dt,dz))$ be $l$ independent compensated Poisson random measures independent of $B(t)$; that is, for $j = 1, \dots, l$,
$$\tilde N_j(dt,dz) = N_j(dt,dz) - \nu_j(dz)\,dt, \quad t\in[0,\infty),$$
where $\nu_j$ is the Lévy measure of a Lévy process $\eta_j(t)$ with jump measure $N_j$ such that $E[\eta_j^2(t)] < \infty$ for all $t$. $\{\mathcal{F}_t\}_{t\ge 0}$ is the filtration generated by $B(t)$ and $\tilde N(dt,dz)$ (as usual augmented with all the $P$-null sets). We refer to [1, 2] for more information about Lévy processes.

We first define a financial market model as follows. Suppose that there are two investment possibilities:

(1) a risk-free asset (e.g., a bond), whose unit price $S_0(t)$ at time $t$ is given by
$$dS_0(t) = r(t) S_0(t)\,dt, \quad S_0(0) = 1; \tag{1}$$

(2) a risky asset (e.g., a stock), whose unit price $S_1(t)$ at time $t$ is given by
$$dS_1(t) = S_1(t^-)\Big[\mu(t)\,dt + \sigma(t)\,dB(t) + \int_{\mathbb{R}} \gamma(t,z)\,\tilde N(dt,dz)\Big], \quad S_1(0) > 0, \tag{2}$$

where $r(t)$ is $\mathcal{F}_t$-adapted with $\int_0^T |r(t)|\,dt < \infty$ a.s., $T > 0$ is a fixed given constant, and $\mu, \sigma, \gamma$ are $\mathcal{F}_t$-predictable processes satisfying $\gamma(t,z) > -1$ for a.a. $t, z$, a.s., and
$$\int_0^T \Big\{|\mu(t)| + \sigma^2(t) + \int_{\mathbb{R}} \gamma^2(t,z)\,\nu(dz)\Big\}\,dt < \infty \quad \forall T < \infty. \tag{3}$$

Assume that an investor hires a portfolio manager to manage his wealth $X(t)$ from investments. The manager (the controller) can choose a portfolio $u(t)$, which represents the proportion of the total wealth $X(t)$ invested in the stock at time $t$. The investor (the stopper) can halt the wealth process $X(t)$ by selecting a stopping rule $\tau : C[0,T] \to [0,T]$.
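Before turning to the wealth dynamics, the risky-asset equation (2) can be illustrated numerically. The sketch below is not part of the paper's model specification: it assumes a constant drift $\mu$, volatility $\sigma$, a single Poisson jump component with intensity `lam`, and a fixed relative jump size `gamma` (all hypothetical choices; the paper allows predictable, $z$-dependent coefficients), and uses a plain Euler scheme.

```python
import math
import random

# Minimal Euler sketch of the risky-asset dynamics (2),
#   dS1 = S1(t-)[mu dt + sigma dB + gamma d(compensated N)],
# under the simplifying assumptions stated above (constant coefficients,
# one jump component with fixed relative jump size).
def simulate_s1(s0, mu, sigma, gamma, lam, T, n, rng):
    dt = T / n
    s = s0
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))          # Brownian increment
        dN = 1 if rng.random() < lam * dt else 0    # Poisson increment
        # compensated jump contribution: gamma * (dN - lam*dt)
        s *= 1.0 + mu * dt + sigma * dB + gamma * (dN - lam * dt)
    return s

rng = random.Random(1)
print(simulate_s1(1.0, 0.08, 0.2, -0.1, 0.5, 1.0, 1000, rng))
```

With $\sigma = 0$ and no jumps the scheme reduces to compounding the drift, so $S_1(T) \approx S_1(0)e^{\mu T}$, which gives a quick sanity check of the discretization.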
Then the dynamics of the corresponding wealth process $X(t) = X^u(t)$ are
$$dX^u(t) = X^u(t^-)\Big\{[(1-u(t))r(t) + u(t)\mu(t)]\,dt + u(t)\sigma(t)\,dB(t) + u(t^-)\int_{\mathbb{R}} \gamma(t,z)\,\tilde N(dt,dz)\Big\}, \quad X^u(0) = x > 0 \tag{4}$$
(see, e.g., [3–5]). We require that $u(t^-)\gamma(t,z) > -1$ for a.a. $t, z$, a.s., and that
$$\int_0^T \Big\{|(1-u(t))r(t)| + |u(t)\mu(t)| + u^2(t)\sigma^2(t) + u^2(t)\int_{\mathbb{R}} \gamma^2(t,z)\,\nu(dz)\Big\}\,dt < \infty \quad \text{a.s.} \tag{5}$$

At the terminal time $\tau$, the stopper gives the controller a payoff $C(X(\tau))$, where $C$ is a deterministic mapping. The controller therefore aims to maximize his utility of the form
$$\mathcal{J}_x^1(u,\tau) = E^x\Big[e^{-\delta\tau} U_1(C(X(\tau))) - \int_0^{\tau} e^{-\delta t}\, h(t, X(t), u_t)\,dt\Big], \tag{6}$$
where $\delta > 0$ is the discount rate, $h$ is a cost function, and $U_1$ is the controller's utility function. We denote by $E^x$ the expectation with respect to $P^x$, the probability law of $X(t)$ starting at $x$. Meanwhile, it is the stopper's objective to choose the stopping time $\tau$ so that his own utility
$$\mathcal{J}_x^2(u,\tau) = E^x\big[e^{-\delta\tau} U_2\{X(\tau) - C(X(\tau))\}\big] \tag{7}$$
is maximized, where $U_2$ is the stopper's utility function. Since this game is typically nonzero-sum, we seek a Nash equilibrium, namely a pair $(u^*, \tau^*)$ such that
$$\mathcal{J}_x^1(u^*,\tau^*) \ge \mathcal{J}_x^1(u,\tau^*) \quad \forall u, \qquad \mathcal{J}_x^2(u^*,\tau^*) \ge \mathcal{J}_x^2(u^*,\tau) \quad \forall \tau. \tag{8}$$
This means that the choice $u^*$ is optimal for the controller when the stopper uses $\tau^*$, and vice versa. The game (4) and (8) is a nonzero-sum stochastic differential game between a controller and a stopper. The existence of a Nash equilibrium shows that, by an appropriate stopping-rule design $\tau^*$, the stopper can induce the controller to choose the best portfolio he can. Similarly, by applying a suitable portfolio $u^*$, the controller can force the stopper to stop the employment relationship at a time of the controller's choosing.

There have been significant advances in the research on stochastic differential games of control and stopping. For example, in [6–9] the authors considered zero-sum stochastic differential games of mixed type, in which each of the two players chooses a strategy composed of a control $u(\cdot)$ and a stopping time $\tau$. Under appropriate conditions, they constructed a saddle pair of optimal strategies. For the nonzero-sum case, games of mixed type were discussed in [6, 10]; there the authors obtained Nash equilibria, rather than saddle pairs, of strategies [10]. Moreover, the papers [11–17] considered zero-sum stochastic differential games between a controller and a stopper, where one player (the controller) chooses a control process $u(\cdot)$ and the other (the stopper) chooses a stopping time $\tau$; one player tries to maximize the reward and the other to minimize it, and a saddle pair for the game was presented.

In this paper, we study a nonzero-sum stochastic differential game between a controller and a stopper. The two players have different payoffs, and each aims to maximize his own. The game is considered in a jump diffusion context under a Markov control condition. We prove a verification theorem in terms of VIHJB equations for the game and use it to characterize Nash equilibria. Our setup and approach are related to [3, 17], but those games differ from ours: in [3] the authors studied stochastic differential games between two controllers, while [17] treats a zero-sum stochastic differential game between a controller and a stopper.

The paper is organized as follows. In the next section, we formulate the nonzero-sum stochastic differential game between a controller and a stopper and prove a general verification theorem. In Section 3, we apply the general results of Section 2 to characterize the solutions of a specific game. Finally, we conclude the paper in Section 4.

2. A Verification Theorem for the Nonzero-Sum Stochastic Differential Game between Controller and Stopper

Suppose the state $Y(t) = Y^u(t)$ at time $t$ is given by the following stochastic differential equation:
$$dY(t) = \alpha(Y(t), u_0(t))\,dt + \beta(Y(t), u_0(t))\,dB(t) + \int_{\mathbb{R}^k} \theta(Y(t^-), u_1(t^-,z), z)\,\tilde N(dt,dz), \quad Y(0) = y \in \mathbb{R}^k, \tag{9}$$
where $\alpha : \mathbb{R}^k \times U \to \mathbb{R}^k$, $\beta : \mathbb{R}^k \times U \to \mathbb{R}^{k\times m}$, and $\theta : \mathbb{R}^k \times U \times \mathbb{R}^k \to \mathbb{R}^{k\times l}$ are given functions, and $U$ denotes a given subset of $\mathbb{R}^p$. We regard $u_0(t) = u_0(t,\omega)$ and $u_1(t,z) = u_1(t,z,\omega)$ as the control processes, assumed to be càdlàg, $\mathcal{F}_t$-adapted, and with values $u_0(t) \in U$, $u_1(t,z) \in U$ for a.a. $t, z, \omega$; we put $u(t) = (u_0(t), u_1(t,z))$. Then $Y(t) = Y^{(u)}(t)$ is a controlled jump diffusion (see [18] for more information about stochastic control of jump diffusions).

Fix an open solvency set $\mathcal{S} \subset \mathbb{R}^k$ and let
$$\tau_{\mathcal{S}} = \inf\{t > 0 : Y(t) \notin \mathcal{S}\} \tag{10}$$
be the bankruptcy time, that is, the first time at which the process $Y(t)$ exits the solvency set $\mathcal{S}$. Similar optimal control problems, in which the terminal time is governed by a stopping criterion, are considered in [19–21] in the deterministic case.

Let $f_i : \mathbb{R}^k \times U \to \mathbb{R}$ and $g_i : \mathbb{R}^k \to \mathbb{R}$, $i = 1, 2$, be given functions. Let $\mathcal{A}$ be a family of admissible controls, contained in the set of $u(\cdot)$ such that (9) has a unique strong solution and
$$E^y\Big[\int_0^{\tau_{\mathcal{S}}} |f_i(Y_t, u_t)|\,dt\Big] < \infty, \quad i = 1, 2, \tag{11}$$
for all $y \in \mathcal{S}$, where $E^y$ denotes expectation given that $Y(0) = y \in \mathbb{R}^k$. Denote by $\Gamma$ the set of all stopping times $\tau \le \tau_{\mathcal{S}}$. Moreover, we assume that
$$\text{the family } \{g_i^-(Y(\tau)) ; \tau \in \Gamma\} \text{ is uniformly integrable, for } i = 1, 2. \tag{12}$$

Then for $u \in \mathcal{A}$ and $\tau \in \Gamma$ we define the performance functionals as follows:
$$\mathcal{J}_i^y(u,\tau) = E^y\Big[\int_0^{\tau} f_i(Y(t), u(t))\,dt + g_i(Y(\tau))\Big], \quad i = 1, 2. \tag{13}$$
We interpret $g_i(Y(\tau))$ as $0$ if $\tau = \infty$. We may regard $\mathcal{J}_1^y(u,\tau)$ as the payoff to the controller, who chooses $u$, and $\mathcal{J}_2^y(u,\tau)$ as the payoff to the stopper, who decides $\tau$.

Definition 2 (Nash equilibrium). A pair $(u^*, \tau^*) \in \mathcal{A} \times \Gamma$ is called a Nash equilibrium for the stochastic differential game (9) and (13) if the following hold:
$$\mathcal{J}_1^y(u^*,\tau^*) \ge \mathcal{J}_1^y(u,\tau^*) \quad \forall u \in \mathcal{A},\ y \in \mathcal{S}, \tag{14}$$
$$\mathcal{J}_2^y(u^*,\tau^*) \ge \mathcal{J}_2^y(u^*,\tau) \quad \forall \tau \in \Gamma,\ y \in \mathcal{S}. \tag{15}$$
Condition (14) states that if the stopper chooses $\tau^*$, it is optimal for the controller to use $u^*$. Similarly, condition (15) states that if the controller uses $u^*$, it is optimal for the stopper to choose $\tau^*$. Thus $(u^*, \tau^*)$ is an equilibrium point in the sense that neither player has a reason to deviate from it as long as the other player does not.

We restrict ourselves to Markov controls; that is, we assume that $u_0(t) = \tilde u_0(Y(t))$ and $u_1(t,z) = \tilde u_1(Y(t), z)$. As is customary, we do not distinguish between $u_0$ and $\tilde u_0$, or $u_1$ and $\tilde u_1$. The controls $u_0$ and $u_1$ can then simply be identified with functions $\tilde u_0(y)$ and $\tilde u_1(y,z)$, where $y \in \mathcal{S}$ and $z \in \mathbb{R}^k$.

When the control $u$ is Markovian, the corresponding process $Y^{(u)}(t)$ becomes a Markov process with generator $A^u$ given, for $\varphi \in C^2(\mathbb{R}^k)$, by
$$A^u\varphi(y) = \sum_{i=1}^k \alpha_i(y, u_0(y))\,\frac{\partial\varphi}{\partial y_i}(y) + \frac{1}{2}\sum_{i,j=1}^k (\beta\beta^T)_{i,j}(y, u_0(y))\,\frac{\partial^2\varphi}{\partial y_i\,\partial y_j}(y) + \sum_{j=1}^l \int_{\mathbb{R}} \Big\{\varphi\big(y + \theta^{(j)}(y, u_1(y,z), z)\big) - \varphi(y) - \nabla\varphi(y)\,\theta^{(j)}(y, u_1(y,z), z)\Big\}\,\nu_j(dz), \tag{16}$$
where $\nabla\varphi = (\partial\varphi/\partial y_1, \dots, \partial\varphi/\partial y_k)$ is the gradient of $\varphi$ and $\theta^{(j)}$ is the $j$th column of the $k \times l$ matrix $\theta$.

Now we can state the main result of this section.

Theorem 3 (verification theorem for the game (9) and (13)). Suppose there exist two functions $\varphi_i : \bar{\mathcal{S}} \to \mathbb{R}$, $i = 1, 2$, such that

(i) $\varphi_i \in C^1(\mathcal{S}^0) \cap C(\bar{\mathcal{S}})$, $i = 1, 2$, where $\bar{\mathcal{S}}$ is the closure of $\mathcal{S}$ and $\mathcal{S}^0$ is the interior of $\mathcal{S}$;

(ii) $\varphi_i \ge g_i$ on $\mathcal{S}$, $i = 1, 2$.

Define the following continuation regions:
$$D_i = \{y \in \mathcal{S} : \varphi_i(y) > g_i(y)\}, \quad i = 1, 2. \tag{17}$$
Suppose $Y(t) = Y^u(t)$ spends $0$ time on $\partial D_i$ a.s., $i = 1, 2$; that is,

(iii) $E^y\big[\int_0^{\tau_{\mathcal{S}}} \chi_{\partial D_i}(Y(t))\,dt\big] = 0$ for all $y \in \mathcal{S}$, $u \in \mathcal{A}$, $i = 1, 2$;

(iv) $\partial D_i$ is a Lipschitz surface, $i = 1, 2$;

(v) $\varphi_i \in C^2(\mathcal{S} \setminus \partial D_i)$, with the second-order derivatives of $\varphi_i$ locally bounded near $\partial D_i$, $i = 1, 2$;

(vi) $D_1 = D_2 =: D$;

(vii) there exists $\hat u \in \mathcal{A}$ such that, for $i = 1, 2$,
$$A^{\hat u}\varphi_i(y) + f_i(y, \hat u(y)) = \sup_{u \in \mathcal{A}}\{A^u\varphi_i(y) + f_i(y, u(y))\} \begin{cases} = 0, & y \in D, \\ \le 0, & y \in \mathcal{S} \setminus D; \end{cases} \tag{18}$$

(viii) $E^y\big[|\varphi_i(Y(\tau))| + \int_0^{\tau} |A^u\varphi_i(Y^u(t))|\,dt\big] < \infty$, $i = 1, 2$, for all $u \in \mathcal{A}$, $\tau \in \Gamma$.

For $u \in \mathcal{A}$ define
$$\tau_D = \tau_D^u = \inf\{t > 0 : Y^u(t) \notin D\} < \infty \tag{19}$$
and, in particular,
$$\hat\tau_D = \tau_D^{\hat u} = \inf\{t > 0 : Y^{\hat u}(t) \notin D\} < \infty. \tag{20}$$
Suppose that

(ix) the family $\{\varphi_i(Y(\tau)) ; \tau \in \Gamma, \tau \le \tau_D\}$ is uniformly integrable, for all $u \in \mathcal{A}$ and $y \in \mathcal{S}$, $i = 1, 2$.

Then $(\hat\tau_D, \hat u) \in \Gamma \times \mathcal{A}$ is a Nash equilibrium for the game (9) and (13), and
$$\varphi_1(y) = \sup_{u \in \mathcal{A}} \mathcal{J}_1^y(u, \hat\tau_D) = \mathcal{J}_1^y(\hat u, \hat\tau_D), \tag{21}$$
$$\varphi_2(y) = \sup_{\tau \in \Gamma} \mathcal{J}_2^y(\hat u, \tau) = \mathcal{J}_2^y(\hat u, \hat\tau_D). \tag{22}$$

Proof. From (i), (iv), and (v) we may assume, by an approximation theorem (see Theorem 2.1 in [18]), that $\varphi_i \in C^2(\mathcal{S})$, $i = 1, 2$. For a given $u \in \mathcal{A}$ we define, with $Y(t) = Y^u(t)$,
$$\tau_D = \tau_D^u = \inf\{t > 0 : Y^u(t) \notin D\}. \tag{23}$$
In particular, let $\hat u$ be as in (vii). Then
$$\hat\tau_D = \tau_D^{\hat u} = \inf\{t > 0 : Y^{\hat u}(t) \notin D\}. \tag{24}$$

We first prove that (21) holds. Let $\hat\tau_D \in \Gamma$ be as in (24). For arbitrary $u \in \mathcal{A}$, by (vii) and the Dynkin formula for jump diffusions (see Theorem 1.24 in [18]) we have
$$\varphi_1(y) = E^y\Big[\int_0^{\hat\tau_D \wedge m} -A^u\varphi_1(Y(t))\,dt + \varphi_1(Y(\hat\tau_D \wedge m))\Big] \ge E^y\Big[\int_0^{\hat\tau_D \wedge m} f_1(Y(t), u(t))\,dt + \varphi_1(Y(\hat\tau_D \wedge m))\Big], \tag{25}$$
where $m = 1, 2, \dots$. Therefore, by (11), (12), (i), (ii), (viii), and the Fatou lemma,
$$\varphi_1(y) \ge \liminf_{m\to\infty} E^y\Big[\int_0^{\hat\tau_D \wedge m} f_1(Y(t), u(t))\,dt + g_1(Y(\hat\tau_D \wedge m))\Big] \ge E^y\Big[\int_0^{\hat\tau_D} f_1(Y(t), u(t))\,dt + g_1(Y(\hat\tau_D))\Big] = \mathcal{J}_1^y(u, \hat\tau_D). \tag{26}$$
Since this holds for all $u \in \mathcal{A}$, we have
$$\varphi_1(y) \ge \sup_{u \in \mathcal{A}} \mathcal{J}_1^y(u, \hat\tau_D). \tag{27}$$
In particular, applying the Dynkin formula to $u = \hat u$ we get an equality, that is,
$$\varphi_1(y) = E^y\Big[\int_0^{\hat\tau_D \wedge m} -A^{\hat u}\varphi_1(\tilde Y(t))\,dt + \varphi_1(\tilde Y(\hat\tau_D \wedge m))\Big] = E^y\Big[\int_0^{\hat\tau_D \wedge m} f_1(\tilde Y(t), \hat u(t))\,dt + \varphi_1(\tilde Y(\hat\tau_D \wedge m))\Big], \tag{28}$$
where $\tilde Y(t) = Y^{\hat u}(t)$ and $m = 1, 2, \dots$. Hence we deduce that
$$\varphi_1(y) = \mathcal{J}_1^y(\hat u, \hat\tau_D). \tag{29}$$
Since we always have
$$\mathcal{J}_1^y(\hat u, \hat\tau_D) \le \sup_{u \in \mathcal{A}} \mathcal{J}_1^y(u, \hat\tau_D), \tag{30}$$
we conclude, by combining (27), (29), and (30), that
$$\varphi_1(y) = \mathcal{J}_1^y(\hat u, \hat\tau_D) = \sup_{u \in \mathcal{A}} \mathcal{J}_1^y(u, \hat\tau_D), \tag{31}$$
which is (21).

Next we prove that (22) holds. Let $\hat u \in \mathcal{A}$ be as in (vii). For $\tau \in \Gamma$, by the Dynkin formula and (vii), we have
$$E^y[\varphi_2(\tilde Y(\tau_m))] = \varphi_2(y) + E^y\Big[\int_0^{\tau_m} A^{\hat u}\varphi_2(\tilde Y(t))\,dt\Big] \le \varphi_2(y) - E^y\Big[\int_0^{\tau_m} f_2(\tilde Y(t), \hat u(t))\,dt\Big], \tag{32}$$
where $\tau_m = \tau \wedge m$, $m = 1, 2, \dots$. Letting $m \to \infty$ gives, by (11), (12), (i), (ii), (viii), and the Fatou lemma,
$$\varphi_2(y) \ge \liminf_{m\to\infty} E^y\Big[\int_0^{\tau_m} f_2(\tilde Y(t), \hat u(t))\,dt + \varphi_2(\tilde Y(\tau_m))\Big] \ge E^y\Big[\int_0^{\tau} f_2(\tilde Y(t), \hat u(t))\,dt + g_2(\tilde Y(\tau))\,\chi_{\{\tau<\infty\}}\Big] = \mathcal{J}_2^y(\hat u, \tau). \tag{33}$$
The inequality (33) holds for all $\tau \in \Gamma$. Then we have
$$\varphi_2(y) \ge \sup_{\tau \in \Gamma} \mathcal{J}_2^y(\hat u, \tau). \tag{34}$$
Similarly, applying the above argument to the pair $(\hat u, \hat\tau_D)$ we get an equality in (34), that is,
$$\varphi_2(y) = \mathcal{J}_2^y(\hat u, \hat\tau_D). \tag{35}$$
We always have
$$\mathcal{J}_2^y(\hat u, \hat\tau_D) \le \sup_{\tau \in \Gamma} \mathcal{J}_2^y(\hat u, \tau). \tag{36}$$
Therefore, combining (34), (35), and (36), we get
$$\varphi_2(y) = \mathcal{J}_2^y(\hat u, \hat\tau_D) = \sup_{\tau \in \Gamma} \mathcal{J}_2^y(\hat u, \tau), \tag{37}$$
which is (22). The proof is completed.

3. An Example

In this section we come back to Example 1 and use Theorem 3 to study the solutions of the game (4) and (8). Here and in the following, all processes are assumed to be one-dimensional for simplicity.
To put the game (4) and (8) into the framework of Section 2, we define the process $Y(t) = (Y_0(t), Y_1(t))$, $Y(0) = y = (s, y_1)$, by
$$dY_0(t) = dt, \quad Y_0(0) = s \in \mathbb{R}; \qquad dY_1(t) = dX(t) = Y_1(t^-)\Big\{[(1-u(t))r + u(t)\mu]\,dt + u(t)\sigma\,dB(t) + u(t^-)\int_{\mathbb{R}} \gamma(z)\,\tilde N(dt,dz)\Big\}, \quad Y_1(0) = x = y_1, \tag{38}$$
where, in this example, $r$, $\mu$, $\sigma$ are constants and $\gamma = \gamma(z)$ is deterministic. The performance functionals (6) of the controller and (7) of the stopper can then be formulated as follows:
$$\mathcal{J}_1^y(u,\tau) = E^{s,y_1}\Big[e^{-\delta(s+\tau)} U_1(C(Y_1(\tau))) - \int_0^{\tau} e^{-\delta(s+t)}\, h(Y_0(t), Y_1(t), u(t))\,dt\Big], \qquad \mathcal{J}_2^y(u,\tau) = E^{s,y_1}\big[e^{-\delta(s+\tau)} U_2(Y_1(\tau) - C(Y_1(\tau)))\big]. \tag{39}$$

In this case the generator $A^u$ in (16) has the form
$$A^u\varphi(s,x) = \frac{\partial\varphi}{\partial s} + [(1-u)rx + u\mu x]\frac{\partial\varphi}{\partial x} + \frac{1}{2}x^2u^2\sigma^2\frac{\partial^2\varphi}{\partial x^2} + \int_{\mathbb{R}_0}\Big\{\varphi(s, x + xu\gamma(z)) - \varphi(s,x) - xu\gamma(z)\frac{\partial\varphi}{\partial x}(s,x)\Big\}\,\nu(dz). \tag{40}$$

To obtain a possible Nash equilibrium $(\hat u, \hat\tau) \in \mathcal{A} \times \Gamma$ for the game (4) and (8), according to Theorem 3 it is necessary to find a subset $D$ of $\mathcal{S} = \mathbb{R}_+^2 := [0,\infty)^2$ and functions $\varphi_i(s,x)$, $i = 1, 2$, such that

(i) $\varphi_1(s,x) = e^{-\delta s}U_1(C(x))$ and $\varphi_2(s,x) = e^{-\delta s}U_2(x - C(x))$, for all $(s,x) \in \mathcal{S} \setminus D$;

(ii) $\varphi_1(s,x) \ge e^{-\delta s}U_1(C(x))$ and $\varphi_2(s,x) \ge e^{-\delta s}U_2(x - C(x))$, for all $(s,x) \in \mathcal{S}$;

(iii) $A^u\varphi_1(s,x) - h(s,x,u) \le 0$ and $A^u\varphi_2(s,x) \le 0$, for all $(s,x) \in \mathcal{S} \setminus D$ and all $u$;

(iv) there exists $\hat u$ such that $A^{\hat u}\varphi_1(s,x) - h(s,x,\hat u) = A^{\hat u}\varphi_2(s,x) = 0$, for all $(s,x) \in D$.

Imposing the first-order condition on $A^u\varphi_1(s,x) - h(s,x,u)$ and on $A^u\varphi_2(s,x)$, we get the following equations for the optimal control $\hat u$:
$$(\mu x - rx)\frac{\partial\varphi_1}{\partial x}(s,x) + x^2\sigma^2\hat u\,\frac{\partial^2\varphi_1}{\partial x^2}(s,x) + \int_{\mathbb{R}} x\gamma(z)\Big[\frac{\partial\varphi_1}{\partial x}(s, x + x\hat u\gamma(z)) - \frac{\partial\varphi_1}{\partial x}(s,x)\Big]\nu(dz) - \frac{\partial h}{\partial u}(s,x,\hat u) = 0,$$
$$(\mu x - rx)\frac{\partial\varphi_2}{\partial x}(s,x) + x^2\sigma^2\hat u\,\frac{\partial^2\varphi_2}{\partial x^2}(s,x) + \int_{\mathbb{R}} x\gamma(z)\Big[\frac{\partial\varphi_2}{\partial x}(s, x + x\hat u\gamma(z)) - \frac{\partial\varphi_2}{\partial x}(s,x)\Big]\nu(dz) = 0. \tag{41}$$
With $\hat u$ as in (41), we put
$$A^{\hat u}\varphi_i(s,x) = \frac{\partial\varphi_i}{\partial s} + [(1-\hat u)rx + \hat u\mu x]\frac{\partial\varphi_i}{\partial x} + \frac{1}{2}x^2\hat u^2\sigma^2\frac{\partial^2\varphi_i}{\partial x^2} + \int_{\mathbb{R}_0}\Big\{\varphi_i(s, x + x\hat u\gamma(z)) - \varphi_i(s,x) - x\hat u\gamma(z)\frac{\partial\varphi_i}{\partial x}(s,x)\Big\}\,\nu(dz), \quad i = 1, 2. \tag{42}$$

Thus, we may reduce the game (4) and (8) to the problem of solving a family of nonlinear variational-integro inequalities. We summarize as follows.

Theorem 4. Suppose there exist $\hat u$ satisfying (41) and two $C^1$ functions $\varphi_i$, $i = 1, 2$, such that

(1) $D = \{(s,x) : \varphi_2(s,x) > e^{-\delta s}U_2(x - C(x))\} = \{(s,x) : \varphi_1(s,x) > e^{-\delta s}U_1(C(x))\}$; (43)

(2) $\varphi_i \in C^2(D)$, $i = 1, 2$;

(3) $\varphi_2(s,x) = e^{-\delta s}U_2(x - C(x))$ and $\varphi_1(s,x) = e^{-\delta s}U_1(C(x))$ for all $(s,x) \in \mathcal{S} \setminus D$;

(4) $A^u\varphi_1(s,x) - h(s,x,u) \le 0$ and $A^u\varphi_2(s,x) \le 0$ for all $(s,x) \in \mathcal{S} \setminus D$ and all $u \in \mathcal{A}$;

(5) $A^{\hat u}\varphi_1(s,x) - h(s,x,\hat u) = 0$ for all $(s,x) \in D$, where $A^{\hat u}\varphi_1$ is given by (42);

(6) $A^{\hat u}\varphi_2(s,x) = 0$ for all $(s,x) \in D$, where $A^{\hat u}\varphi_2$ is given by (42).

Then the pair $(\hat u, \hat\tau)$ is a Nash equilibrium of the stochastic differential game (4) and (8), where
$$\hat\tau = \inf\{t > 0 : X^{\hat u}(t) \notin D\}. \tag{44}$$
Moreover, the corresponding equilibrium performances are
$$\varphi_1(s,x) = \mathcal{J}_1^{(s,x)}(\hat u, \hat\tau), \qquad \varphi_2(s,x) = \mathcal{J}_2^{(s,x)}(\hat u, \hat\tau). \tag{45}$$

In this paper we will not discuss general solutions of this family of nonlinear variational-integro inequalities. Instead we discuss a solution in the special case when
$$\gamma(t,z) = 0, \qquad h(s,x,u) = \frac{u^2}{2}. \tag{46}$$
Let us try functions $\varphi_i$, $i = 1, 2$, of the form
$$\varphi_i(s,x) = e^{-\delta s}\psi_i(x) \tag{47}$$
and a continuation region $D$ of the form
$$D = \{(s,x) : 0 < x < x_0\} \quad \text{for some } x_0 > 0. \tag{48}$$
Then we have
$$A^u\varphi_i(s,x) = e^{-\delta s} A^u\psi_i(x), \tag{49}$$
where
$$A^u\psi_i(x) = -\delta\psi_i(x) + [(1-u)rx + u\mu x]\psi_i'(x) + \frac{1}{2}x^2u^2\sigma^2\psi_i''(x). \tag{50}$$
By conditions (1) and (3) in Theorem 4, we get
$$\psi_1(x) = U_1(C(x)),\ x \ge x_0; \quad \psi_1(x) > U_1(C(x)),\ 0 < x < x_0; \qquad \psi_2(x) = U_2(x - C(x)),\ x \ge x_0; \quad \psi_2(x) > U_2(x - C(x)),\ 0 < x < x_0. \tag{51}$$
From conditions (4), (5), and (6) of Theorem 4, we get the candidate $\hat u$ for the optimal control as follows:
$$\hat u = \operatorname*{Argmax}_{u\in\mathcal{A}}\Big\{A^u\psi_1(x) - \frac{u^2}{2}\Big\} = \operatorname*{Argmax}_{u\in\mathcal{A}}\Big\{-\delta\psi_1(x) + [(1-u)rx + u\mu x]\psi_1'(x) + \frac{1}{2}x^2u^2\sigma^2\psi_1''(x) - \frac{u^2}{2}\Big\} = \frac{(\mu x - rx)\psi_1'(x)}{1 - x^2\sigma^2\psi_1''(x)}, \tag{52}$$
$$\hat u = \operatorname*{Argmax}_{u\in\mathcal{A}}\{A^u\psi_2(x)\} = \operatorname*{Argmax}_{u\in\mathcal{A}}\Big\{-\delta\psi_2(x) + [(1-u)rx + u\mu x]\psi_2'(x) + \frac{1}{2}x^2u^2\sigma^2\psi_2''(x)\Big\} = -\frac{(\mu x - rx)\psi_2'(x)}{x^2\sigma^2\psi_2''(x)}. \tag{53}$$
Let $\psi_i(x) = \tilde\psi_i(x)$ on $0 < x < x_0$, $i = 1, 2$. By condition (5) in Theorem 4, we have $A^{\hat u}\tilde\psi_1(x) - \hat u^2/2 = 0$ for $0 < x < x_0$. Substituting (52) into $A^{\hat u}\tilde\psi_1(x) - \hat u^2/2 = 0$, we obtain
$$-\delta\tilde\psi_1(x) + \Big[rx + \frac{(\mu x - rx)^2\,\tilde\psi_1'(x)}{1 - x^2\sigma^2\tilde\psi_1''(x)}\Big]\tilde\psi_1'(x) - \frac{\big[(\mu x - rx)\tilde\psi_1'(x)\big]^2}{2\big(1 - x^2\sigma^2\tilde\psi_1''(x)\big)} = 0. \tag{54}$$
Similarly, we obtain $A^{\hat u}\tilde\psi_2(x) = 0$ for $0 < x < x_0$ by condition (6) in Theorem 4, and substituting (53) into $A^{\hat u}\tilde\psi_2(x) = 0$ gives
$$-\delta\tilde\psi_2(x) + rx\,\tilde\psi_2'(x) - \frac{\big[(\mu x - rx)\tilde\psi_2'(x)\big]^2}{2x^2\sigma^2\tilde\psi_2''(x)} = 0. \tag{55}$$
Therefore, we conclude that
$$\psi_1(x) = \begin{cases} U_1(C(x)), & x \ge x_0, \\ \tilde\psi_1(x), & 0 < x < x_0, \end{cases} \tag{56}$$
$$\psi_2(x) = \begin{cases} U_2(x - C(x)), & x \ge x_0, \\ \tilde\psi_2(x), & 0 < x < x_0, \end{cases} \tag{57}$$
where $\tilde\psi_1(x)$ and $\tilde\psi_2(x)$ are the solutions of (54) and (55), respectively. According to Theorem 4, we use the continuity and differentiability of $\psi_i$, $i = 1, 2$, at $x = x_0$ to determine $x_0$; that is,
$$\psi_1(x_0) = U_1(C(x_0)), \quad \psi_1'(x_0) = U_1'(C(x_0))\,C'(x_0), \qquad \psi_2(x_0) = U_2(x_0 - C(x_0)), \quad \psi_2'(x_0) = U_2'(x_0 - C(x_0))\,(1 - C'(x_0)). \tag{58}$$

At the end of this section, we summarize the above results in the following theorem.

Theorem 5. Let $\psi_i$, $i = 1, 2$, and $x_0$ be the solutions of (54)–(58). Then the pair $(\hat u, \hat\tau)$ given by
$$\hat u = \frac{(\mu x - rx)\psi_1'(x)}{1 - x^2\sigma^2\psi_1''(x)} = -\frac{(\mu x - rx)\psi_2'(x)}{x^2\sigma^2\psi_2''(x)}, \qquad \hat\tau = \inf\{t > 0 : X^{\hat u}(t) \ge x_0\} \tag{59}$$
is a Nash equilibrium of the game (4) and (8). The corresponding equilibrium performances are
$$\varphi_i(s,x) = e^{-\delta s}\psi_i(x), \quad i = 1, 2. \tag{60}$$

4. Conclusion

A verification theorem is obtained for the general nonzero-sum stochastic differential game between a controller and a stopper. In the special case of a quadratic cost, we use this theorem to characterize a Nash equilibrium. However, the question of the existence and uniqueness of Nash equilibria for the game remains open; it will be considered in our subsequent work.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grants 61273022 and 11171050.

References

[1] J. Bertoin, Lévy Processes, Cambridge University Press, Cambridge, UK, 1996.
[2] D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge University Press, Cambridge, UK, 2003.
[3] S. Mataramvura and B. Øksendal, “Risk minimizing portfolios and HJBI equations for stochastic differential games,” Stochastics, vol. 80, no. 4, pp. 317–337, 2008.
[4] R. J. Elliott and T. K. Siu, “On risk minimizing portfolios under a Markovian regime-switching Black-Scholes economy,” Annals of Operations Research, vol. 176, pp. 271–291, 2010.
[5] G. Wang and Z. Wu, “Mean-variance hedging and forward-backward stochastic differential filtering equations,” Abstract and Applied Analysis, vol. 2011, Article ID 310910, 20 pages, 2011.
[6] I. Karatzas and W. Sudderth, “Stochastic games of control and stopping for a linear diffusion,” in Random Walk, Sequential Analysis and Related Topics: A Festschrift in Honor of Y. S. Chow, A. Hsiung, Z. Ying, and C. H. Zhang, Eds., pp. 100–117, World Scientific Publishers, 2006.
[7] S. Hamadène, “Mixed zero-sum stochastic differential game and American game options,” SIAM Journal on Control and Optimization, vol. 45, no. 2, pp. 496–518, 2006.
[8] M. K. Ghosh, K. S. Mallikarjuna Rao, and D. Sheetal, “Differential games of mixed type with control and stopping times,” Nonlinear Differential Equations and Applications, vol. 16, no. 2, pp. 143–158, 2009.
[9] M. K. Ghosh and K. S. Mallikarjuna Rao, “Existence of value in stochastic differential games of mixed type,” Stochastic Analysis and Applications, vol. 30, no. 5, pp. 895–905, 2012.
[10] I. Karatzas and Q. Li, “BSDE approach to non-zero-sum stochastic differential games of control and stopping,” in Stochastic Processes, Finance and Control, pp. 105–153, World Scientific Publishers, 2012.
[11] J.-P. Lepeltier, “On a general zero-sum stochastic game with stopping strategy for one player and continuous strategy for the other,” Probability and Mathematical Statistics, vol. 6, no. 1, pp. 43–50, 1985.
[12] I. Karatzas and W. D. Sudderth, “The controller-and-stopper game for a linear diffusion,” The Annals of Probability, vol. 29, no. 3, pp. 1111–1127, 2001.
[13] A. Weerasinghe, “A controller and a stopper game with degenerate variance control,” Electronic Communications in Probability, vol. 11, pp. 89–99, 2006.
[14] I. Karatzas and I.-M. Zamfirescu, “Martingale approach to stochastic differential games of control and stopping,” The Annals of Probability, vol. 36, no. 4, pp. 1495–1527, 2008.
[15] E. Bayraktar, I. Karatzas, and S. Yao, “Optimal stopping for dynamic convex risk measures,” Illinois Journal of Mathematics, vol. 54, no. 3, pp. 1025–1067, 2010.
[16] E. Bayraktar and V. R. Young, “Proving regularity of the minimal probability of ruin via a game of stopping and control,” Finance and Stochastics, vol. 15, no. 4, pp. 785–818, 2011.
[17] F. Baghery, S. Haadem, B. Øksendal, and I. Turpin, “Optimal stopping and stochastic control differential games for jump diffusions,” Stochastics, vol. 85, no. 1, pp. 85–97, 2013.
[18] B. Øksendal and A. Sulem, Applied Stochastic Control of Jump Diffusions, Springer, Berlin, Germany, 2nd edition, 2007.
[19] Q. Lin, R. Loxton, K. L. Teo, and Y. H. Wu, “A new computational method for a class of free terminal time optimal control problems,” Pacific Journal of Optimization, vol. 7, no. 1, pp. 63–81, 2011.
[20] Q. Lin, R. Loxton, K. L. Teo, and Y. H. Wu, “Optimal control computation for nonlinear systems with state-dependent stopping criteria,” Automatica, vol. 48, no. 9, pp. 2116–2129, 2012.
[21] C. Jiang, Q. Lin, C. Yu, K. L. Teo, and G.-R. Duan, “An exact penalty method for free terminal time optimal control problem with continuous inequality constraints,” Journal of Optimization Theory and Applications, vol. 154, no. 1, pp. 30–53, 2012.