Hindawi Publishing Corporation
Abstract and Applied Analysis
Volume 2013, Article ID 761306, 7 pages
http://dx.doi.org/10.1155/2013/761306
Research Article
Nonzero-Sum Stochastic Differential Game between
Controller and Stopper for Jump Diffusions
Yan Wang,1,2 Aimin Song,2 Cheng-De Zheng,2 and Enmin Feng1
1 School of Mathematical Sciences, Dalian University of Technology, Dalian 116023, China
2 School of Science, Dalian Jiaotong University, Dalian 116028, China
Correspondence should be addressed to Yan Wang; wymath@163.com
Received 5 February 2013; Accepted 7 May 2013
Academic Editor: Ryan Loxton
Copyright © 2013 Yan Wang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We consider a nonzero-sum stochastic differential game which involves two players, a controller and a stopper. The controller chooses a control process, and the stopper selects the stopping rule which halts the game. The game is studied in a jump diffusion setting under the Markov control condition. By a dynamic programming approach, we give a verification theorem in terms of variational inequality Hamilton-Jacobi-Bellman (VIHJB) equations for the solutions of the game. Furthermore, we apply the verification theorem to characterize a Nash equilibrium of the game in a specific example.
1. Introduction
In this paper we study a nonzero-sum stochastic differential game with two players: a controller and a stopper. The state X(·) of the game evolves according to a stochastic differential equation driven by jump diffusions. The controller chooses the control process u(·), which enters the drift and volatility of X(·) at time t, and the stopper decides the duration of the game in the form of a stopping rule τ for the process X(·). The objective of each player is to maximize his own expected payoff.
To illustrate the motivation and the applied background of this game, we present a model from finance.
Example 1. Let (Ω, F, {F_t}_{t≥0}, P) be a filtered probability space, let B(t) be a k-dimensional Brownian motion, and let Ñ(dt, dz) = (Ñ_1(dt, dz), ..., Ñ_k(dt, dz)) be k independent compensated Poisson random measures independent of B(t), t ∈ [0, ∞). For i = 1, ..., k, Ñ_i(dt, dz) = N_i(dt, dz) − ν_i(dz) dt, where ν_i is the Lévy measure of a Lévy process η_i(t) with jump measure N_i such that E[η_i²(t)] < ∞ for all t. {F_t}_{t≥0} is the filtration generated by B(t) and Ñ(dt, dz) (as usual augmented with all the P-null sets). We refer to [1, 2] for more information about Lévy processes.
We first define a financial market model as follows. Suppose that there are two investment possibilities:

(1) a risk-free asset (e.g., a bond), with unit price S_0(t) at time t given by

dS_0(t) = r(t) S_0(t) dt,  S_0(0) = 1;  (1)

(2) a risky asset (e.g., a stock), with unit price S_1(t) at time t given by

dS_1(t) = S_1(t^−) [ b(t) dt + σ(t) dB(t) + ∫_R γ(t, z) Ñ(dt, dz) ],  S_1(0) > 0,  (2)

where r(t) is F_t-adapted with ∫_0^T |r(t)| dt < ∞ a.s., T > 0 is a fixed given constant, and b, σ, γ are F_t-predictable processes satisfying γ(t, z) > −1 for a.a. t, z, a.s. and

∫_0^T { |b(t)| + σ²(t) + ∫_R γ²(t, z) ν(dz) } dt < ∞  for all T < ∞.  (3)
Assume that an investor hires a portfolio manager to manage his wealth X(t) from investments. The manager (controller) can choose a portfolio u(t), which represents the proportion of the total wealth X(t) invested in the stock at time t. The investor (stopper) can halt the wealth process X(t) by selecting a stopping rule τ : C[0, T] → [0, T]. Then the dynamics of the corresponding wealth process X(t) = X^u(t) are

dX^u(t) = X^u(t^−) { [(1 − u(t)) r(t) + u(t) b(t)] dt + u(t) σ(t) dB(t) + u(t^−) ∫_R γ(t, z) Ñ(dt, dz) },
X^u(0) = x > 0  (4)

(see, e.g., [3–5]). We require that u(t^−) γ(t, z) > −1 for a.a. t, z, a.s. and that

∫_0^T { |(1 − u(t)) r(t)| + |u(t) b(t)| + u²(t) σ²(t) + u²(t) ∫_R γ²(t, z) ν(dz) } dt < ∞  a.s.  (5)
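For intuition, the wealth SDE (4) can be simulated by an Euler-Maruyama scheme. The sketch below is illustrative only: it assumes constant coefficients r, b, σ, a constant portfolio u, and no jumps (γ = 0); all numeric values are made up for the example.

```python
import numpy as np

def simulate_wealth(x0=1.0, u=0.5, r=0.03, b=0.08, sigma=0.2,
                    T=1.0, n_steps=252, seed=0):
    """Euler-Maruyama sketch of the wealth SDE (4) with constant
    coefficients and no jumps (gamma = 0)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = (1.0 - u) * r + u * b        # (1 - u) r + u b
        dB = rng.normal(0.0, np.sqrt(dt))    # Brownian increment
        x[k + 1] = x[k] * (1.0 + drift * dt + u * sigma * dB)
    return x

path = simulate_wealth()
```

With daily steps the multiplicative update stays positive for any realistic draw, mirroring the positivity of the exact solution of (4).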
At the terminal time τ, the stopper gives the controller a payoff C(X(τ)), where C : C[0, T] → R is a deterministic mapping. The controller therefore aims to maximize his utility of the following form:

J_x^1(u, τ) = E_x [ e^{−δτ} U_1(C(X(τ))) − ∫_0^τ e^{−δt} h(t, X(t), u_t) dt ],  (6)

where δ > 0 is the discounting rate, h is a cost function, and U_1 is the controller's utility function. We denote by E_x the expectation with respect to P_x, the probability law of X(t) starting at x.

Meanwhile, it is the stopper's objective to choose the stopping time τ such that his own utility

J_x^2(u, τ) = E_x [ e^{−δτ} U_2(X(τ) − C(X(τ))) ]  (7)

is maximized, where U_2 is the stopper's utility function.
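The controller's payoff (6) can be estimated by Monte Carlo once concrete choices are fixed. The sketch below is purely illustrative: it takes U_1 = log, a proportional payoff C(x) = fee·x, quadratic cost h = u²/2, a constant portfolio u, no jumps, and a deterministic stopping time τ; none of these choices come from the paper beyond the general form of (6).

```python
import numpy as np

def controller_payoff(u, tau=1.0, x0=1.0, r=0.03, b=0.08, sigma=0.2,
                      delta=0.05, fee=0.1, n_paths=50000, seed=1):
    """Monte Carlo sketch of the controller's payoff (6) with
    illustrative choices: U1 = log, C(x) = fee * x, h = u^2 / 2,
    constant portfolio u, gamma = 0, deterministic stopping at tau."""
    rng = np.random.default_rng(seed)
    mu = (1.0 - u) * r + u * b
    # exact terminal wealth of (4) with gamma = 0 (geometric Brownian motion)
    z = rng.normal(0.0, 1.0, n_paths)
    x_tau = x0 * np.exp((mu - 0.5 * (u * sigma) ** 2) * tau
                        + u * sigma * np.sqrt(tau) * z)
    utility = np.exp(-delta * tau) * np.log(fee * x_tau)
    # running cost: integral of e^{-delta t} u^2/2 dt over [0, tau]
    cost = (u ** 2 / 2.0) * (1.0 - np.exp(-delta * tau)) / delta
    return float(utility.mean() - cost)
```

For u = 0 the wealth is deterministic, so the estimator reduces to the closed-form value e^{−δτ}(log 0.1 + rτ), a convenient sanity check.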
As this game is typically nonzero-sum, we seek a Nash equilibrium, namely, a pair (u*, τ*) such that

J_x^1(u*, τ*) ≥ J_x^1(u, τ*),  for all u,
J_x^2(u*, τ*) ≥ J_x^2(u*, τ),  for all τ.  (8)

This means that the choice u* is optimal for the controller when the stopper uses τ*, and vice versa.

The game (4) and (8) is a nonzero-sum stochastic differential game between a controller and a stopper. The existence of a Nash equilibrium shows that, by an appropriate stopping rule design τ*, the stopper can induce the controller to choose the best portfolio he can. Similarly, by applying a suitable portfolio u*, the controller can force the stopper to stop the employment relationship at a time of the controller's choosing.

There have been significant advances in the research of stochastic differential games of control and stopping. For example, in [6–9] the authors considered zero-sum stochastic differential games of mixed type, with both control and stopping available to the two players. In these games, each player chooses a strategy composed of a control u(·) and a stopping time τ. Under appropriate conditions, the authors constructed a saddle pair of optimal strategies. For the nonzero-sum case, games of mixed type were discussed in [6, 10]; there the authors presented Nash equilibria, rather than saddle pairs, of strategies [10]. Moreover, the papers [11–17] considered a zero-sum stochastic differential game between a controller and a stopper, where one player (the controller) chooses a control process u(·) and the other (the stopper) chooses a stopping time τ; one player tries to maximize the reward and the other to minimize it. A saddle pair for the game was presented.

In this paper, we study a nonzero-sum stochastic differential game between a controller and a stopper. The controller and the stopper have different payoffs, and each aims to maximize his own payoff. The game is considered in a jump diffusion context under the Markov control condition. We prove a verification theorem in terms of VIHJB equations for the game to characterize Nash equilibria.

Our setup and approach are related to [3, 17]. However, their games are different from ours. In [3], the authors studied stochastic differential games between two controllers. The work in [17] was carried out for a zero-sum stochastic differential game between a controller and a stopper.

The paper is organized as follows: in the next section, we formulate the nonzero-sum stochastic differential game between a controller and a stopper and prove a general verification theorem. In Section 3, we apply the general results obtained in Section 2 to characterize the solutions of a special game. Finally, we conclude the paper in Section 4.

2. A Verification Theorem for the Nonzero-Sum Stochastic Differential Game between Controller and Stopper

Suppose the state Y(t) = Y^u(t) at time t is given by the following stochastic differential equation:

dY(t) = α(Y(t), u_0(t)) dt + β(Y(t), u_0(t)) dB(t) + ∫_{R^k_0} θ(Y(t^−), u_1(t^−, z), z) Ñ(dt, dz),
Y(0) = y ∈ R^k,  (9)

where α : R^k × U → R^k, β : R^k × U → R^{k×k}, and θ : R^k × U × R^k → R^{k×k} are given functions, and U denotes a given subset of R^p.

We regard u_0(t) = u_0(t, ω) and u_1(t, z) = u_1(t, z, ω) as the control processes, assumed to be càdlàg, F_t-adapted, and with values u_0(t) ∈ U, u_1(t, z) ∈ U for a.a. t, z, ω. We put u(t) = (u_0(t), u_1(t, z)). Then Y(t) = Y^{(u)}(t) is a controlled jump diffusion (see [18] for more information about stochastic control of jump diffusions).
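In one dimension and without jumps, the generator of a controlled diffusion like (9) reduces to A^u φ(y) = α(y, u) φ′(y) + (1/2) β(y, u)² φ″(y). A minimal sketch with illustrative linear coefficients (the names a, sigma and the quadratic test function are not from the paper):

```python
def generator_1d(phi_d1, phi_d2, alpha, beta, y, u):
    """One-dimensional, jump-free instance of the generator:
    A^u phi(y) = alpha(y, u) phi'(y) + 0.5 beta(y, u)^2 phi''(y)."""
    return alpha(y, u) * phi_d1(y) + 0.5 * beta(y, u) ** 2 * phi_d2(y)

# illustrative linear coefficients (assumptions, not the paper's model)
a, sigma = 0.1, 0.3
alpha = lambda y, u: u * a * y       # controlled drift
beta = lambda y, u: sigma * y        # volatility

# test function phi(y) = y^2 and its derivatives
phi_d1 = lambda y: 2.0 * y
phi_d2 = lambda y: 2.0

y, u = 2.0, 0.5
val = generator_1d(phi_d1, phi_d2, alpha, beta, y, u)
# closed form for this choice: 2*u*a*y^2 + sigma^2*y^2 = 0.76
```

The full generator (16) below adds the integro-differential jump term; the diffusion part is exactly this expression.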
Fix an open solvency set S ⊂ R^k. Let

τ_S = inf { t > 0; Y(t) ∉ S }  (10)

be the bankruptcy time; τ_S is the first time at which the stochastic process Y(t) exits the solvency set S. Similar optimal control problems, in which the terminal time is governed by a stopping criterion, are considered in [19–21] in the deterministic case.

Let f_i : R^k × K → R and g_i : R^k → R, i = 1, 2, be given functions. Let A be a family of admissible controls, contained in the set of u(·) such that (9) has a unique strong solution and

E^y [ ∫_0^{τ_S} |f_i(Y(t), u(t))| dt ] < ∞,  i = 1, 2,  (11)

for all y ∈ S, where E^y denotes expectation given that Y(0) = y ∈ R^k. Denote by Γ the set of all stopping times τ ≤ τ_S. Moreover, we assume that

the family { g_i^−(Y(τ)); τ ∈ Γ } is uniformly integrable, for i = 1, 2.  (12)

Then for u ∈ A and τ ∈ Γ we define the performance functionals

J_i^y(u, τ) = E^y [ ∫_0^τ f_i(Y(t), u(t)) dt + g_i(Y(τ)) ],  i = 1, 2.  (13)

We interpret g_i(Y(τ)) as 0 if τ = ∞. We may regard J_1^y(u, τ) as the payoff to the controller, who controls u, and J_2^y(u, τ) as the payoff to the stopper, who decides τ.

Definition 2 (Nash equilibrium). A pair (u*, τ*) ∈ A × Γ is called a Nash equilibrium for the stochastic differential game (9) and (13) if the following holds:

J_1^y(u*, τ*) ≥ J_1^y(u, τ*),  for all u ∈ U, y ∈ S,  (14)
J_2^y(u*, τ*) ≥ J_2^y(u*, τ),  for all τ ∈ Γ, y ∈ S.  (15)

Condition (14) states that if the stopper chooses τ*, it is optimal for the controller to use the control u*. Similarly, condition (15) states that if the controller uses u*, it is optimal for the stopper to decide τ*. Thus, (u*, τ*) is an equilibrium point in the sense that neither player has a reason to deviate from it, as long as the other player does not.

We restrict ourselves to Markov controls; that is, we assume that u_0(t) = ũ_0(Y(t)) and u_1(t, z) = ũ_1(Y(t), z). As customary, we do not distinguish between u_0 and ũ_0, or u_1 and ũ_1. The controls u_0 and u_1 can then simply be identified with functions ũ_0(y) and ũ_1(y, z), where y ∈ S and z ∈ R^k.

When the control u is Markovian, the corresponding process Y^{(u)}(t) becomes a Markov process with the generator A^u, acting on φ ∈ C²(R^k), given by

A^u φ(y) = Σ_{i=1}^k α_i(y, u_0(y)) ∂φ/∂y_i (y) + (1/2) Σ_{i,j=1}^k (ββ^T)_{i,j}(y, u_0(y)) ∂²φ/∂y_i∂y_j (y)
+ Σ_{j=1}^k ∫_R { φ(y + θ^{(j)}(y, u_1(y, z), z)) − φ(y) − ∇φ(y) θ^{(j)}(y, u_1(y, z), z) } ν_j(dz),  (16)

where ∇φ = (∂φ/∂y_1, ..., ∂φ/∂y_k) is the gradient of φ, and θ^{(j)} is the jth column of the k × k matrix θ.

Now we can state the main result of this section.

Theorem 3 (verification theorem for game (9) and (13)). Suppose there exist two functions φ_i : S̄ → R, i = 1, 2, such that

(i) φ_i ∈ C¹(S⁰) ∩ C(S̄), i = 1, 2, where S̄ is the closure of S and S⁰ is the interior of S;
(ii) φ_i ≥ g_i on S, i = 1, 2.

Define the following continuation regions:

D_i = { y ∈ S; φ_i(y) > g_i(y) },  i = 1, 2.  (17)

Suppose Y(t) = Y^u(t) spends 0 time on ∂D_i a.s., i = 1, 2; that is,

(iii) E^y [ ∫_0^{τ_S} χ_{∂D_i}(Y(t)) dt ] = 0 for all y ∈ S, u ∈ U, i = 1, 2;
(iv) ∂D_i is a Lipschitz surface, i = 1, 2;
(v) φ_i ∈ C²(S \ ∂D_i), and the second-order derivatives of φ_i are locally bounded near ∂D_i, i = 1, 2;
(vi) D_1 = D_2 =: D;
(vii) there exists û ∈ A such that, for i = 1, 2,

A^û φ_i(y) + f_i(y, û(y)) = sup_{u ∈ A} { A^u φ_i(y) + f_i(y, u(y)) }  { = 0, y ∈ D;  ≤ 0, y ∈ S \ D; }  (18)

(viii) E^y [ |φ_i(Y(τ))| + ∫_0^τ |A^u φ_i(Y^u(t))| dt ] < ∞, i = 1, 2, for all u ∈ U, τ ∈ Γ.

For u ∈ A define

τ_D = τ_D^u = inf { t > 0; Y^u(t) ∉ D } < ∞,  (19)

and, in particular,

τ̂_D = τ_D^û = inf { t > 0; Y^û(t) ∉ D } < ∞.  (20)
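Given candidate functions, the variational-inequality structure of conditions (ii) and (vii) can be checked numerically on a grid: the supremum over u must vanish on the continuation region D = {φ > g} and be nonpositive on its complement. The sketch below shows only the checking logic, with toy one-dimensional functions; H plays the role of A^u φ + f, and every concrete function here is an illustrative assumption.

```python
import numpy as np

def check_vi(H, phi, g, ys, us, tol=1e-8):
    """Grid sketch of the verification conditions: on D = {phi > g}
    require sup_u H(y, u) = 0; elsewhere require sup_u H(y, u) <= 0."""
    ok = True
    for y in ys:
        sup_h = max(H(y, u) for u in us)
        if phi(y) > g(y) + tol:      # y in the continuation region D
            ok = ok and abs(sup_h) <= tol
        else:                        # y in the stopping region
            ok = ok and sup_h <= tol
    return bool(ok)

# toy data: D = {y < 1}; sup_u H is 0 there and -0.2 outside
H = lambda y, u: -(u - 0.3) ** 2 + (0.0 if y < 1 else -0.2)
phi = lambda y: 1.0 + max(0.0, 1.0 - y)
g = lambda y: 1.0
ys = np.linspace(0.0, 2.0, 21)
us = np.linspace(0.0, 1.0, 101)
result = check_vi(H, phi, g, ys, us)
```

A candidate violating (18), e.g. a Hamiltonian with positive supremum inside D, makes the check fail, which is exactly the intended use of the routine.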
Μ‚ = π‘Œπ‘’Μ‚ (𝑑) and π‘š = 1, 2, . . .. Hence we deduce that
where π‘Œ(𝑑)
Suppose that
(ix) the family {πœ™π‘– (π‘Œ(𝜏)); 𝜏 ∈ Γ, 𝜏 ≤ 𝜏𝐷} is uniformly
integrable, for all 𝑒 ∈ A and 𝑦 ∈ S, 𝑖 = 1, 2.
Then (Μ‚
𝜏𝐷, 𝑒̂) ∈ Γ × A is a Nash equilibrium for game (9), (13),
and
𝑒,Μ‚
𝜏𝐷
πœ™1 (𝑦) = supJ1
𝑒∈A
𝑒̂,Μ‚
𝜏𝐷
(𝑦) = J1
𝑒̂,Μ‚
𝜏𝐷
πœ™2 (𝑦) = supJ𝑒2Μ‚,𝜏 (𝑦) = J2
𝜏∈Γ
(21)
(𝑦) .
(22)
Proof. From (i), (iv), and (v) we may assume by an approximation theorem (see Theorem 2.1 in [18]) that πœ™π‘– ∈ 𝐢2 (S),
𝑖 = 1, 2.
For a given 𝑒 ∈ A, we define, with π‘Œ(𝑑) = π‘Œπ‘’ (𝑑),
(23)
In particular, let 𝑒̂ be as in (vii). Then,
𝑒̂
= inf {𝑑 > 0; π‘Œπ‘’Μ‚ (𝑑) ∉ 𝐷} .
πœΜ‚π· = 𝜏𝐷
πœΜ‚π· ∧π‘š
0
≥ 𝐸𝑦 [∫
πœΜ‚π· ∧π‘š
0
𝑒∈A
we conclude by combining (27), (29), and (30) that
𝑦
𝑒∈A
𝑓1 (π‘Œ (𝑑) , 𝑒 (𝑑)) 𝑑𝑑 + πœ™1 (π‘Œ (Μ‚
𝜏𝐷 ∧ π‘š))] ,
0
πœπ‘š
Μ‚ (𝑑) , 𝑒̂ (𝑑)) 𝑑𝑑] ,
≤ πœ™2 (𝑦) − 𝐸𝑦 [∫ 𝑓2 (π‘Œ
0
0
𝑓1 (π‘Œ (𝑑) , 𝑒 (𝑑)) 𝑑𝑑+πœ™1 (π‘Œ (Μ‚
𝜏𝐷 ∧ π‘š))]
πœ™2 (𝑦) ≥ lim inf 𝐸𝑦 [∫
π‘š→∞
0
0
𝑦
J2
𝑒, 𝜏) .
(Μ‚
(33)
The inequality (33) holds for all 𝜏 ∈ Γ. Then we have
𝑦
𝑒, 𝜏) .
πœ™2 (𝑦) ≥ supJ2 (Μ‚
𝜏∈Γ
0
𝑦
(26)
(𝑒, πœΜ‚π·) .
𝑦
𝑒, πœΜ‚π·) .
πœ™2 (𝑦) = J2 (Μ‚
(27)
In particular, applying the Dynkin’s formula to 𝑒 = 𝑒̂ we get
an equality, that is,
πœ™1 (𝑦) = 𝐸𝑦 [∫
πœΜ‚π· ∧π‘š
0
𝑦
= 𝐸 [∫
πœΜ‚π· ∧π‘š
0
Μ‚ (𝑑)) 𝑑𝑑 + πœ™1 (π‘Œ
Μ‚ (Μ‚
−𝐴𝑒̂ πœ™1 (π‘Œ
𝜏𝐷 ∧ π‘š))]
(28)
(35)
We always have
𝑦
𝑦
J2 (Μ‚
𝑒, πœΜ‚π·) ≤ supJ2 (Μ‚
𝑒, 𝜏) .
𝜏∈Γ
(36)
Therefore, combining (34), (35), and (36) we get
𝑦
Μ‚ (𝑑) , 𝑒̂ (𝑑)) 𝑑𝑑 + πœ™1 (π‘Œ
Μ‚ (Μ‚
𝑓1 (π‘Œ
𝜏𝐷 ∧ π‘š))] ,
(34)
Similarly, applying the above argument to the pair (Μ‚
𝑒, πœΜ‚π·) we
get an equality in (34), that is,
Since this holds for all 𝑒 ∈ A, we have
πœ™1 (𝑦) ≥
Μ‚ (𝑑) , 𝑒̂ (𝑑)) 𝑑𝑑 + πœ™2 (π‘Œ
Μ‚ (πœπ‘š ))]
𝑓2 (π‘Œ
𝜏
πœΜ‚π·
𝑦
supJ1
𝑒∈A
𝜏∧π‘š
Μ‚ (𝑑) , 𝑒̂ (𝑑)) 𝑑𝑑 + 𝑔2 (π‘Œ
Μ‚ (𝜏)) πœ’{𝜏<∞} ]
≥ 𝐸𝑦 [∫ 𝑓2 (π‘Œ
≥ 𝐸𝑦 [∫ 𝑓1 (π‘Œ (𝑑) , 𝑒 (𝑑)) 𝑑𝑑 + 𝑔1 (π‘Œ (Μ‚
𝜏𝐷))]
= J1 (𝑒, πœΜ‚π·) .
(32)
where πœπ‘š = 𝜏 ∧ π‘š; π‘š = 1, 2, . . ..
Letting π‘š → ∞ gives, by (11), (12), (i), (ii), (viii), and the
Fatou Lemma,
=
πœ™1 (𝑦)
π‘š→∞
(31)
which is (21).
Next we prove that (22) holds. Let 𝑒̂ ∈ A be as in (vii).
For 𝜏 ∈ Γ, by the Dynkin’s formula and (vii), we have
(24)
where π‘š = 1, 2, . . .. Therefore, by (11), (12), (i), (ii), (viii), and
the Fatou lemma,
≥ lim inf 𝐸 [∫
𝑦
𝑒, πœΜ‚π·) = supJ1 (𝑒, πœΜ‚π·) ,
πœ™1 (𝑦) = J1 (Μ‚
πœπ‘š
−𝐴𝑒 πœ™1 (π‘Œ (𝑑)) 𝑑𝑑 + πœ™1 (π‘Œ (Μ‚
𝜏𝐷 ∧ π‘š))]
πœΜ‚π· ∧π‘š
(30)
Μ‚ (πœπ‘š ))] = πœ™2 (𝑦) + 𝐸𝑦 [∫ 𝐴𝑒̂ πœ™2 (π‘Œ
Μ‚ (𝑑)) 𝑑𝑑]
𝐸𝑦 [πœ™2 (π‘Œ
(25)
𝑦
𝑦
𝑒, πœΜ‚π·) ≤ supJ1 (𝑒, πœΜ‚π·) ,
J1 (Μ‚
We first prove that (21) holds. Let πœΜ‚π· ∈ Γ be as in (24). For
arbitrary 𝑒 ∈ A, by (vii) and the Dynkin’s formula for jump
diffusion (see Theorem 1.24 in [18]) we have
πœ™1 (𝑦) = 𝐸𝑦 [∫
(29)
Since we always have
𝑦
(𝑦) ,
𝑒
𝜏𝐷 = 𝜏𝐷
= inf {𝑑 > 0; π‘Œπ‘’ (𝑑) ∉ 𝐷} .
𝑦
πœ™1 (𝑦) = J1 (Μ‚
𝑒, πœΜ‚π·) .
𝑦
𝑒, πœΜ‚π·) = supJ2 (Μ‚
𝑒, 𝜏) ,
πœ™2 (𝑦) = J2 (Μ‚
𝜏∈Γ
which is (22). The proof is completed.
(37)
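The proof rests on the Dynkin formula E^y[φ(Y_t)] = φ(y) + E^y[∫_0^t A φ(Y_s) ds]. A quick Monte Carlo sanity check of this identity, in a toy setting that is not the paper's (Y a standard Brownian motion, φ(y) = y², so A φ = 1 and E[φ(Y_t)] should equal t for Y_0 = 0):

```python
import numpy as np

def dynkin_check(t=1.0, n_paths=200000, n_steps=100, seed=2):
    """Monte Carlo sanity check of the Dynkin formula: for
    phi(y) = y^2 and Y a standard Brownian motion started at 0,
    A phi = 1, hence E[phi(Y_t)] = phi(0) + t = t."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    y_t = increments.sum(axis=1)          # Y_t = B_t, exact in law
    return float((y_t ** 2).mean())       # estimate of E[phi(Y_t)]

est = dynkin_check()
```

With 2·10^5 paths the standard error of the estimator is about 0.003, so the estimate lands close to the exact value 1.0.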
3. An Example

In this section we come back to Example 1 and use Theorem 3 to study the solutions of game (4) and (8). Here and in the following, all processes are assumed to be one-dimensional for simplicity.

To put game (4) and (8) into the framework of Section 2, we define the process Y(t) = (Y_0(t), Y_1(t)), Y(0) = y = (s, y_1), by

dY_0(t) = dt,  Y_0(0) = s ∈ R,
dY_1(t) = dX(t) = Y_1(t^−) { [(1 − u(t)) r(t) + u(t) b(t)] dt + u(t) σ(t) dB(t) + u(t^−) ∫_R γ(t, z) Ñ(dt, dz) },  Y_1(0) = x = y_1.  (38)

Then the performance functionals (6) of the controller and (7) of the stopper can be formulated as follows:

J_1^y(u, τ) = E^{s, y_1} [ e^{−δ(s+τ)} U_1(C(Y_1(τ))) − ∫_0^τ e^{−δ(s+t)} h(Y_0(t), Y_1(t), u(t)) dt ],
J_2^y(u, τ) = E^{s, y_1} [ e^{−δ(s+τ)} U_2(Y_1(τ) − C(Y_1(τ))) ].  (39)

In this case the generator A^u in (16) has the form

A^u φ(s, x) = ∂φ/∂s + [(1 − u) rx + ubx] ∂φ/∂x + (1/2) x² u² σ² ∂²φ/∂x²
+ ∫_{R_0} { φ(s, x + xuγ(z)) − φ(s, x) − xuγ(z) ∂φ/∂x } ν(dz).  (40)

To obtain a possible Nash equilibrium (û, τ̂) ∈ A × Γ for game (4) and (8), according to Theorem 3 it is necessary to find a subset D of S = R²_+ := [0, ∞)² and functions φ_i(s, x), i = 1, 2, such that

(i) φ_1(s, x) = e^{−δs} U_1(C(x)) and φ_2(s, x) = e^{−δs} U_2(x − C(x)), for all (s, x) ∈ S \ D;
(ii) φ_1(s, x) ≥ e^{−δs} U_1(C(x)) and φ_2(s, x) ≥ e^{−δs} U_2(x − C(x)), for all (s, x) ∈ S;
(iii) A^u φ_1(s, x) − h(s, x, u) ≤ 0 and A^u φ_2(s, x) ≤ 0, for all (s, x) ∈ S \ D and all u;
(iv) there exists û such that A^û φ_1(s, x) − h(s, x, û) = A^û φ_2(s, x) = 0, for all (s, x) ∈ D.

Imposing the first-order condition on A^u φ_1(s, x) − h(s, x, u) and A^u φ_2(s, x), we get the following equations for the optimal control û:

(bx − rx) ∂φ_1/∂x (s, x) + x² σ² û ∂²φ_1/∂x² (s, x) + ∫_R xγ(z) [ ∂φ_1/∂x (s, x + xûγ(z)) − ∂φ_1/∂x (s, x) ] ν(dz) − ∂h/∂u (s, x, û) = 0,
(bx − rx) ∂φ_2/∂x (s, x) + x² σ² û ∂²φ_2/∂x² (s, x) + ∫_R xγ(z) [ ∂φ_2/∂x (s, x + xûγ(z)) − ∂φ_2/∂x (s, x) ] ν(dz) = 0.  (41)

With û as in (41), we put

A^û φ_i(s, x) = ∂φ_i/∂s + [(1 − û) rx + ûbx] ∂φ_i/∂x + (1/2) x² û² σ² ∂²φ_i/∂x²
+ ∫_{R_0} { φ_i(s, x + xûγ(z)) − φ_i(s, x) − xûγ(z) ∂φ_i/∂x } ν(dz),  i = 1, 2.  (42)

Thus, we may reduce game (4) and (8) to the problem of solving a family of nonlinear variational-integro inequalities. We summarize this as follows.

Theorem 4. Suppose there exist û satisfying (41) and two C¹ functions φ_i, i = 1, 2, such that

(1) D = { (s, x) : φ_2(s, x) > e^{−δs} U_2(x − C(x)) } = { (s, x) : φ_1(s, x) > e^{−δs} U_1(C(x)) };  (43)
(2) φ_i ∈ C²(D), i = 1, 2;
(3) φ_2(s, x) = e^{−δs} U_2(x − C(x)) and φ_1(s, x) = e^{−δs} U_1(C(x)) for all (s, x) ∈ S \ D;
(4) A^u φ_1(s, x) − h(s, x, u) ≤ 0 and A^u φ_2(s, x) ≤ 0 for all (s, x) ∈ S \ D and all u ∈ A;
(5) A^û φ_1(s, x) − h(s, x, û) = 0 for all (s, x) ∈ D, where A^û φ_1 is given by (42);
(6) A^û φ_2(s, x) = 0 for all (s, x) ∈ D, where A^û φ_2 is given by (42).

Then the pair (û, τ̂) is a Nash equilibrium of the stochastic differential game (4) and (8), where

τ̂ = inf { t > 0; Y^û(t) ∉ D }.  (44)
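In the special case γ = 0, h = u²/2 treated below, the objective behind the first-order condition (41) is a concave quadratic in u whenever 1 − x²σ²ψ″(x) > 0, and the candidate control comes out in closed form. A small numerical check, with illustrative values for ψ, ψ′, ψ″ (they are stand-in constants, not a solution of the paper's equations), comparing the closed form against a grid search:

```python
import numpy as np

def u_hat(x, psi1, psi2, r=0.03, b=0.08, sigma=0.2):
    """Closed-form candidate control for gamma = 0, h = u^2/2:
    u = (b - r) x psi' / (1 - x^2 sigma^2 psi'')."""
    return (b - r) * x * psi1 / (1.0 - x ** 2 * sigma ** 2 * psi2)

def objective(u, x, psi0, psi1, psi2,
              r=0.03, b=0.08, sigma=0.2, delta=0.05):
    """A^u psi(x) - u^2/2 for given values psi(x), psi'(x), psi''(x)."""
    return (-delta * psi0 + ((1 - u) * r * x + u * b * x) * psi1
            + 0.5 * x ** 2 * u ** 2 * sigma ** 2 * psi2 - u ** 2 / 2.0)

# illustrative values; psi2 < 0 keeps the objective strictly concave in u
x, psi0, psi1, psi2 = 2.0, 1.0, 0.5, -0.3
uh = u_hat(x, psi1, psi2)
grid = np.linspace(-2.0, 2.0, 100001)
vals = objective(grid, x, psi0, psi1, psi2)
u_grid = grid[np.argmax(vals)]
```

The grid maximizer agrees with the closed-form û up to the grid spacing, confirming that the first-order condition picks out the maximum.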
Moreover, the corresponding equilibrium performances are

φ_1(s, x) = J_1^{(s,x)}(û, τ̂),  φ_2(s, x) = J_2^{(s,x)}(û, τ̂).  (45)

In this paper we will not discuss general solutions of this family of nonlinear variational-integro inequalities. Instead, we discuss a solution in the special case when

γ(t, z) = 0,  h(s, x, u) = u²/2.  (46)

Let us try functions φ_i, i = 1, 2, of the form

φ_i(s, x) = e^{−δs} ψ_i(x),  (47)

and a continuation region D of the form

D = { (s, x); x < x_0 }  for some x_0 > 0.  (48)

Then we have

A^u φ_i(s, x) = e^{−δs} A^u ψ_i(x),  (49)

where

A^u ψ_i(x) = −δψ_i(x) + [(1 − u) rx + ubx] ψ_i′(x) + (1/2) x² u² σ² ψ_i″(x).  (50)

By conditions (1) and (3) in Theorem 4, we get

ψ_1(x) = U_1(C(x)), x ≥ x_0;  ψ_1(x) > U_1(C(x)), 0 < x < x_0;
ψ_2(x) = U_2(x − C(x)), x ≥ x_0;  ψ_2(x) > U_2(x − C(x)), 0 < x < x_0.  (51)

From conditions (4), (5), and (6) of Theorem 4, we get the candidate û for the optimal control as follows:

û = Argmax_{u ∈ A} { A^u ψ_1(x) − u²/2 } = Argmax_{u ∈ A} { −δψ_1(x) + [(1 − u) rx + ubx] ψ_1′(x) + (1/2) x² u² σ² ψ_1″(x) − u²/2 } = (bx − rx) ψ_1′(x) / (1 − x² σ² ψ_1″(x)),  (52)

û = Argmax_{u ∈ A} { A^u ψ_2(x) } = Argmax_{u ∈ A} { −δψ_2(x) + [(1 − u) rx + ubx] ψ_2′(x) + (1/2) x² u² σ² ψ_2″(x) } = −(bx − rx) ψ_2′(x) / (x² σ² ψ_2″(x)).  (53)

Equating the two expressions for û gives

(bx − rx) ψ_1′(x) / (1 − x² σ² ψ_1″(x)) = −(bx − rx) ψ_2′(x) / (x² σ² ψ_2″(x)).  (54)

Let ψ_i(x) = ψ̃_i(x) on 0 < x < x_0, i = 1, 2. By condition (5) in Theorem 4, we have A^û ψ̃_1(x) − û²/2 = 0 for 0 < x < x_0. Substituting (52) into A^u ψ̃_1(x) − u²/2 = 0, we obtain

−δψ̃_1(x) + [ rx + (bx − rx)² ψ̃_1′(x) / (1 − x² σ² ψ̃_1″(x)) ] ψ̃_1′(x) + [ (bx − rx) ψ̃_1′(x) ]² [ x² σ² ψ̃_1″(x) − 1 ] / ( 2 (1 − x² σ² ψ̃_1″(x))² ) = 0.  (56)

Similarly, we obtain A^û ψ̃_2(x) = 0 for 0 < x < x_0 by condition (6) in Theorem 4, and substituting (53) into A^u ψ̃_2(x) = 0 gives

−δψ̃_2(x) + rx ψ̃_2′(x) − [ (bx − rx) ψ̃_2′(x) ]² / ( 2 x² σ² ψ̃_2″(x) ) = 0.  (57)

Therefore, we conclude that

ψ_1(x) = { U_1(C(x)), x ≥ x_0;  ψ̃_1(x), 0 < x < x_0 },
ψ_2(x) = { U_2(x − C(x)), x ≥ x_0;  ψ̃_2(x), 0 < x < x_0 },  (55)

where ψ̃_1(x) and ψ̃_2(x) are the solutions of (56) and (57), respectively.

According to Theorem 4, we use the continuity and differentiability of ψ_i, i = 1, 2, at x = x_0 to determine x_0; that is,

ψ_1(x_0) = U_1(C(x_0)),  ψ_1′(x_0) = U_1′(C(x_0)) C′(x_0),
ψ_2(x_0) = U_2(x_0 − C(x_0)),  ψ_2′(x_0) = U_2′(x_0 − C(x_0)) (1 − C′(x_0)).  (58)

At the end of this section, we summarize the above results in the following theorem.

Theorem 5. Let ψ_i, i = 1, 2, and x_0 be the solutions of equations (56)–(58). Then the pair (û, τ̂) given by

û = (bx − rx) ψ_1′(x) / (1 − x² σ² ψ_1″(x)) = −(bx − rx) ψ_2′(x) / (x² σ² ψ_2″(x)),
τ̂ = inf { t > 0; X(t) ≥ x_0 }  (59)

is a Nash equilibrium of game (4) and (8). The corresponding equilibrium performances are

φ_i(s, x) = e^{−δs} ψ_i(x),  i = 1, 2.  (60)
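Under the equilibrium of Theorem 5, the stopper's rule is a first-passage time of the wealth process to the threshold x_0. A simulation sketch of this hitting time, with constant illustrative coefficients, no jumps, and an Euler discretization (the threshold and parameter values are assumptions for the example, not solutions of (56)–(58)):

```python
import numpy as np

def hit_time(x0_level=1.2, x_init=1.0, u=0.05, r=0.03, b=0.08,
             sigma=0.2, dt=1.0 / 252, t_max=50.0, seed=3):
    """Sketch of the equilibrium stopping rule in (59): stop the
    first time the simulated wealth process reaches x0_level."""
    rng = np.random.default_rng(seed)
    mu = (1.0 - u) * r + u * b      # drift under constant portfolio u
    x, t = x_init, 0.0
    while t < t_max:
        if x >= x0_level:
            return t                # threshold reached: stop here
        x *= 1.0 + mu * dt + u * sigma * rng.normal(0.0, np.sqrt(dt))
        t += dt
    return t_max                    # not hit within the horizon

tau = hit_time()
```

Repeating this over many seeds would give a Monte Carlo picture of the stopping distribution; with positive drift the threshold is reached in finite time on essentially every path.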
4. Conclusion
A verification theorem is obtained for the general stochastic
differential game between a controller and a stopper. In
the special case of quadratic cost, we use this theorem to
characterize the Nash equilibrium. However, the question of
the existence and uniqueness of Nash equilibrium for the
game remains open. It will be considered in our subsequent
work.
Acknowledgment
This work was supported by the National Natural Science Foundation of China under Grants 61273022 and 11171050.
References
[1] J. Bertoin, Lévy Processes, Cambridge University Press, Cambridge, UK, 1996.
[2] D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge University Press, Cambridge, UK, 2003.
[3] S. Mataramvura and B. Øksendal, “Risk minimizing portfolios
and HJBI equations for stochastic differential games,” Stochastics, vol. 80, no. 4, pp. 317–337, 2008.
[4] R. J. Elliott and T. K. Siu, “On risk minimizing portfolios under
a Markovian regime-switching Black-Scholes economy,” Annals
of Operations Research, vol. 176, pp. 271–291, 2010.
[5] G. Wang and Z. Wu, “Mean-variance hedging and forward-backward stochastic differential filtering equations,” Abstract and Applied Analysis, vol. 2011, Article ID 310910, 20 pages, 2011.
[6] I. Karatzas and W. Sudderth, “Stochastic games of control and
stopping for a linear diffusion,” in Random Walk, Sequential
Analysis and Related Topics: A Festschrift in Honor of Y.S. Chow,
A. Hsiung, Z. L. Ying, and C. H. Zhang, Eds., pp. 100–117, World
Scientific Publishers, 2006.
[7] S. Hamadène, “Mixed zero-sum stochastic differential game and American game options,” SIAM Journal on Control and Optimization, vol. 45, no. 2, pp. 496–518, 2006.
[8] M. K. Ghosh, M. K. S. Rao, and D. Sheetal, “Differential games
of mixed type with control and stopping times,” Nonlinear
Differential Equations and Applications, vol. 16, no. 2, pp. 143–
158, 2009.
[9] M. K. Ghosh and K. S. Mallikarjuna Rao, “Existence of value in
stochastic differential games of mixed type,” Stochastic Analysis
and Applications, vol. 30, no. 5, pp. 895–905, 2012.
[10] I. Karatzas and Q. Li, “BSDE approach to non-zero-sum
stochastic differential games of control and stopping,” in
Stochastic Processes, Finance and Control, pp. 105–153, World
Scientific Publishers, 2012.
[11] J.-P. Lepeltier, “On a general zero-sum stochastic game with
stopping strategy for one player and continuous strategy for the
other,” Probability and Mathematical Statistics, vol. 6, no. 1, pp.
43–50, 1985.
[12] I. Karatzas and W. D. Sudderth, “The controller-and-stopper
game for a linear diffusion,” The Annals of Probability, vol. 29,
no. 3, pp. 1111–1127, 2001.
[13] A. Weerasinghe, “A controller and a stopper game with degenerate variance control,” Electronic Communications in Probability,
vol. 11, pp. 89–99, 2006.
[14] I. Karatzas and I.-M. Zamfirescu, “Martingale approach to
stochastic differential games of control and stopping,” The
Annals of Probability, vol. 36, no. 4, pp. 1495–1527, 2008.
[15] E. Bayraktar, I. Karatzas, and S. Yao, “Optimal stopping for
dynamic convex risk measures,” Illinois Journal of Mathematics,
vol. 54, no. 3, pp. 1025–1067, 2010.
[16] E. Bayraktar and V. R. Young, “Proving regularity of the
minimal probability of ruin via a game of stopping and control,”
Finance and Stochastics, vol. 15, no. 4, pp. 785–818, 2011.
[17] F. Baghery, S. Haadem, B. Øksendal, and I. Turpin, “Optimal stopping and stochastic control differential games for jump diffusions,” Stochastics, vol. 85, no. 1, pp. 85–97, 2013.
[18] B. Øksendal and A. Sulem, Applied Stochastic Control of Jump
Diffusions, Springer, Berlin, Germany, 2nd edition, 2007.
[19] Q. Lin, R. Loxton, K. L. Teo, and Y. H. Wu, “A new computational method for a class of free terminal time optimal control
problems,” Pacific Journal of Optimization, vol. 7, no. 1, pp. 63–
81, 2011.
[20] Q. Lin, R. Loxton, K. L. Teo, and Y. H. Wu, “Optimal control
computation for nonlinear systems with state-dependent stopping criteria,” Automatica, vol. 48, no. 9, pp. 2116–2129, 2012.
[21] C. Jiang, Q. Lin, C. Yu, K. L. Teo, and G.-R. Duan, “An exact
penalty method for free terminal time optimal control problem
with continuous inequality constraints,” Journal of Optimization
Theory and Applications, vol. 154, no. 1, pp. 30–53, 2012.