J. Appl. Prob. 38, 464–481 (2001)
Printed in Israel
 Applied Probability Trust 2001
AN EXPLICIT SOLUTION TO AN OPTIMAL
STOPPING PROBLEM WITH REGIME SWITCHING
XIN GUO,∗ IBM Research Division and University of Alberta
Abstract
We investigate an optimal stopping time problem which arises from pricing Russian
options (i.e. perpetual look-back options) on a stock whose price fluctuations are modelled
by adjoining a hidden Markov process to the classical Black–Scholes geometric Brownian
motion model. By extending the technique of smooth fit to allow jump discontinuities,
we obtain an explicit closed-form solution. It gives a non-standard application of the
well-known smooth fit principle where the optimal strategy involves jumping over the
optimal boundary and by an arbitrary overshoot. Based on the optimal stopping analysis,
an arbitrage-free price for Russian options under the hidden Markov model is derived.
Keywords: Hidden Markov process; martingale; optimal stopping; regime switching;
Russian options
AMS 2000 Subject Classification: Primary 60G40; 62L10; 93E20
1. Introduction
In [6], we proposed a model for the fluctuations of a single stock price Xt by incorporating
the existence of inside information in the following form
dXt = Xt µε(t) dt + Xt σε(t) dWt ,
(1.1)
where Wt is the standard Wiener process, ε(t) is a Markov process, independent of Wt , which
represents the state of information in the investor community. For each state i, there is a known
drift parameter µi and a known volatility parameter σi . The pair (µε(t) , σε(t) ) takes different
values when ε(t) is in different states.
In this paper we consider the problem of pricing Russian options, based on this model.
The term 'Russian option' was coined by Shepp and Shiryayev in [12]; it is a perpetual look-back
option. The owner of the option can choose any exercise date, represented by the stopping time
τ (0 ≤ τ ≤ ∞) and gets a payoff of either s (a fixed constant) or the maximum stock price
achieved up to the exercise date, whichever is larger, discounted by e−rτ , where r is a fixed
number.
To price Russian options, there is a closely related optimal stopping time problem. Let
X = {Xt , t ≥ 0} be the price process for a stock with X0 = x > 0, and
St = max{ s, sup_{0≤u≤t} X_u },
where s > x is a given constant. The problem is to compute the value of V ,
V = sup_τ E_{x,s}[e^{−rτ} S_τ],    (1.2)
Received 7 December 1999; revision received 6 October 2000.
∗ Postal address: T. J. Watson Research Center, P.O. Box 218, IBM, Yorktown Heights, NY 10598, USA.
Email address: xinguo@us.ibm.com
where τ is a stopping time with respect to the filtration F_t^X = σ{X_s, s ≤ t}, meaning that no
clairvoyance is allowed.
When µε(t) and σε(t) are constants, equation (1.1) for stock price fluctuations Xt reduces to
the Black–Scholes geometric Brownian motion model, for which the corresponding optimal
stopping time problem was explicitly solved by Shepp and Shiryayev in [12]. Building on their
result, Duffie and Harrison derived a unique arbitrage-free price for the Russian option [3]: the
value is finite when the dividend payout rate is strictly positive, but is infinite otherwise.
For the general case of the market model, valuing Russian options is more complicated. The
model is incomplete (see [6]) in the sense that there are not enough financial instruments to
duplicate a portfolio and there does not exist a unique martingale measure under which the price
of the option can be derived by taking the corresponding discounted expectation. Therefore,
we will take two steps to solve the pricing problem.
First, we start our exploration by computing V in (1.2). This is an optimal stopping problem
with an infinite time horizon and with state space {(ε, x, s) | x ≤ s}. The key is to find the
so-called 'free boundary' x = f(s, µ0, µ1, σ0, σ1, λ0, λ1) such that if x ≤ f(s, µ0, µ1, σ0, σ1, λ0, λ1) we
should stop immediately and exercise the option, while if s ≥ x > f(s, µ0, µ1, σ0, σ1, λ0, λ1)
we should keep observing the underlying stock fluctuations. By extending the technique of
the ‘principle of smooth fit’, which should be attributed to Kolmogorov and Chernoff (see,
for instance, [1]), to allow discontinuous jumps, we obtain a closed-form solution. We will
show that, when the hidden Markov process ε(t) switches from one state to another, there is a
discontinuous jump over the boundary, which is also called ‘regime switching’. The proof of
the result is via martingale theory (Section 3).
Secondly, based on this result, we investigate the problem of pricing Russian options. By
introducing the idea of a ‘ticket’, we succeed in completing the market model and in finding
one equivalent martingale measure, under which the ‘fair price’ of the Russian option under the
new model finds its home in economics (Section 6).
Not surprisingly, the success in obtaining an explicit closed-form solution relies heavily
on the Markov structure of (Xt /St , ε(t)). Therefore, we discuss, via the methodology of
the Markov dynamic programming, a detailed derivation of the closed-form solution. Out of
curiosity, we will also provide examples where Markov structure provides no clue to any explicit
solutions (Sections 4 and 5).
2. Problems and solutions
2.1. Problems
Throughout the paper, for explicitness, we will concentrate on the two-state case in which
ε(t) alternates between 0 and 1 such that
µε(t) = µ0 when ε(t) = 0,  µε(t) = µ1 when ε(t) = 1,
σε(t) = σ0 when ε(t) = 0,  σε(t) = σ1 when ε(t) = 1,
where σ0 ≠ σ1.
Moreover, let λi denote the rate of leaving state i and τi the time to leave state i; then
P(τi > t) = e^{−λi t},  i = 0, 1.    (2.1)
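The two-state dynamics above are easy to simulate. The following sketch (the parameter values are illustrative and not taken from the paper) generates one path of (Xt, ε(t)) by an Euler scheme, switching states with probability 1 − e^{−λi Δt} per step, and records the running maximum St:

```python
import numpy as np

def simulate_regime_switching_gbm(x0=1.0, mu=(0.05, -0.02), sigma=(0.2, 0.4),
                                  lam=(1.0, 2.0), T=1.0, n=10_000, seed=0):
    """Euler scheme for dX_t = X_t mu_{eps(t)} dt + X_t sigma_{eps(t)} dW_t,
    where eps(t) is a two-state Markov chain with leaving rates lam[i]."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    eps = np.empty(n + 1, dtype=int)
    x[0], eps[0] = x0, 0
    dW = rng.normal(0.0, np.sqrt(dt), n)
    for k in range(n):
        i = eps[k]
        x[k + 1] = x[k] * (1.0 + mu[i] * dt + sigma[i] * dW[k])
        # leave state i over [t, t+dt] with probability 1 - exp(-lam_i dt), cf. (2.1)
        eps[k + 1] = 1 - i if rng.random() < 1.0 - np.exp(-lam[i] * dt) else i
    return x, eps

x, eps = simulate_regime_switching_gbm()
s = np.maximum.accumulate(x)   # running maximum S_t (here with s = x0)
```

The pair (x, s) sampled this way is the raw material for the optimal stopping problem (1.2).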
We will illustrate that for a K-state case (K > 2), an algebraic equation of order 2K must
be solved. Let
dXt = Xt µε(t) dt + Xt σε(t) dWt ,
where µε(t) , σε(t) , W (t) are as defined above, and
St = max{ s, sup_{0≤u≤t} X_u },
with s ≥ X0 = x fixed. Equation (1.2) can then be rewritten as
V = V^*(x, s, µ0, µ1, σ0, σ1, λ) = sup_τ E_{x,s}[e^{−rτ} S_τ].    (2.2)
2.2. Solutions
Now define
V^*|_{ε(0)=i} = sup_τ E_{x,s}[e^{−rτ} S_τ | ε(0) = i];
let
G0(c0, c1) = ( (1 − β1)c0^{β1}     (1 − β2)c0^{β2}     (1 − β3)c0^{β3}     (1 − β4)c0^{β4}
               (1 − β1)l1 c0^{β1}  (1 − β2)l2 c0^{β2}  (1 − β3)l3 c0^{β3}  (1 − β4)l4 c0^{β4} )
  × (  β2β3β4 Δ(2,3,4)    (β2β3 + β2β4 + β3β4)Δ(2,3,4)   −(β2 + β3 + β4)Δ(2,3,4)   −Δ(2,3,4)
      −β1β3β4 Δ(1,3,4)   −(β1β3 + β1β4 + β3β4)Δ(1,3,4)    (β1 + β3 + β4)Δ(1,3,4)    Δ(1,3,4)
       β1β2β4 Δ(1,2,4)    (β1β2 + β1β4 + β2β4)Δ(1,2,4)   −(β1 + β2 + β4)Δ(1,2,4)   −Δ(1,2,4)
      −β1β2β3 Δ(1,2,3)   −(β1β2 + β1β3 + β2β3)Δ(1,2,3)    (β1 + β2 + β3)Δ(1,2,3)    Δ(1,2,3) )
  × (1, 0, g1, g2)^T,
and
G1(c1, c0) = ( (1 − β1)c1^{β1}     (1 − β2)c1^{β2}     (1 − β3)c1^{β3}     (1 − β4)c1^{β4}
               (1 − β1)l1 c1^{β1}  (1 − β2)l2 c1^{β2}  (1 − β3)l3 c1^{β3}  (1 − β4)l4 c1^{β4} )
  × [the same 4 × 4 matrix as above]
  × (1, 0, ĝ1, ĝ2)^T,
where γ1 < γ2 are the (real) roots of the equation
(r + λ1) = γµ1 + ½γ(γ − 1)σ1²,
γ̂1 < γ̂2 are the (real) roots of the equation
(r + λ0) = γµ0 + ½γ(γ − 1)σ0²,
the βi are the (real) roots of the quartic equation
[(r + λ0) − βµ0 − ½β(β − 1)σ0²][(r + λ1) − βµ1 − ½β(β − 1)σ1²] = λ0λ1,    (2.3)
Δ(i, j, k) = (βi − βj)(βi − βk)(βj − βk) for i < j < k,
and
g1 = −(2/σ0²) { λ0 [ r/((r + λ1)(γ2 − γ1)) (γ2 (c0/c1)^{γ1} − γ1 (c0/c1)^{γ2}) + λ1/(r + λ1) ] − r − λ0 },
g2 = −(2/σ0²) λ0 rγ1γ2/((r + λ1)(γ2 − γ1)) ((c0/c1)^{γ1} − (c0/c1)^{γ2})
     − ((2µ0 + σ0²)/σ0²) { λ0 [ r/((r + λ1)(γ2 − γ1)) (γ2 (c0/c1)^{γ1} − γ1 (c0/c1)^{γ2}) + λ1/(r + λ1) ] − r − λ0 },
ĝ1 = −(2/σ1²) { λ1 [ r/((r + λ0)(γ̂2 − γ̂1)) (γ̂2 (c1/c0)^{γ̂1} − γ̂1 (c1/c0)^{γ̂2}) + λ0/(r + λ0) ] − r − λ1 },
ĝ2 = −(2/σ1²) λ1 rγ̂1γ̂2/((r + λ0)(γ̂2 − γ̂1)) ((c1/c0)^{γ̂1} − (c1/c0)^{γ̂2})
     − ((2µ1 + σ1²)/σ1²) { λ1 [ r/((r + λ0)(γ̂2 − γ̂1)) (γ̂2 (c1/c0)^{γ̂1} − γ̂1 (c1/c0)^{γ̂2}) + λ0/(r + λ0) ] − r − λ1 }.
Theorem 2.1. (i) Assume that r > max{0, µ0, µ1} and λi ≠ 0, and suppose there exist 0 < c1 <
c0 < 1 such that G0(c0, c1) = 0; then
V^*|_{ε(0)=i} = Vi(x, s) ≥ s
such that
V0(x, s) = s,  x < c0 s,
V0(x, s) = Σ_{i=1}^4 ai x^{βi} s^{1−βi},  c0 s < x < s,    (2.4)
and
V1(x, s) = s,  0 < x ≤ c1 s,
V1(x, s) = rs/((r + λ1)(γ2 − γ1)) [γ2 (x/(c1 s))^{γ1} − γ1 (x/(c1 s))^{γ2}] + λ1 s/(r + λ1),  c1 s < x < c0 s,
V1(x, s) = Σ_{i=1}^4 bi x^{βi} s^{1−βi},  c0 s < x < s,    (2.5)
where the ai satisfy
( l1    l2    l3    l4
  β1l1  β2l2  β3l3  β4l4
  1     1     1     1
  β1    β2    β3    β4 ) (a1, a2, a3, a4)^T
= ( r/((r + λ1)(γ2 − γ1)) (γ2 (c0/c1)^{γ1} − γ1 (c0/c1)^{γ2}) + λ1/(r + λ1)
    rγ1γ2/((r + λ1)(γ2 − γ1)) ((c0/c1)^{γ1} − (c0/c1)^{γ2})
    1
    0 )    (2.6)
and bi = li ai, where
li = (r + λ0 − βi µ0 − ½βi(βi − 1)σ0²)/λ0.
Otherwise, there exist 0 < c0 < c1 < 1 so that G1(c1, c0) = 0 and
V^*|_{ε(0)=i} = Vi(x, s) ≥ s
such that

V1(x, s) = s,  x < c1 s,
V1(x, s) = Σ_{i=1}^4 ai x^{βi} s^{1−βi},  c1 s < x < s,    (2.7)
and
V0(x, s) = s,  0 < x ≤ c0 s,
V0(x, s) = rs/((r + λ0)(γ̂2 − γ̂1)) [γ̂2 (x/(c0 s))^{γ̂1} − γ̂1 (x/(c0 s))^{γ̂2}] + λ0 s/(r + λ0),  c0 s < x < c1 s,
V0(x, s) = Σ_{i=1}^4 bi x^{βi} s^{1−βi},  c1 s < x < s,    (2.8)
with the ai given by

( l̂1    l̂2    l̂3    l̂4
  β1l̂1  β2l̂2  β3l̂3  β4l̂4
  1     1     1     1
  β1    β2    β3    β4 ) (a1, a2, a3, a4)^T
= ( r/((r + λ0)(γ̂2 − γ̂1)) (γ̂2 (c1/c0)^{γ̂1} − γ̂1 (c1/c0)^{γ̂2}) + λ0/(r + λ0)
    rγ̂1γ̂2/((r + λ0)(γ̂2 − γ̂1)) ((c1/c0)^{γ̂1} − (c1/c0)^{γ̂2})
    1
    0 )    (2.9)
Figure 1. The continuation regions and free boundaries x = c0 s (for ε = 0) and x = c1 s (for ε = 1), in the (x, s)-plane with 0 < x ≤ s.
Figure 2. The free boundaries x = c1 s and x = c0 s dividing 0 < x ≤ s into the stopping region, the transient region and the continuation region.
and bi = l̂i ai, where
l̂i = (r + λ1 − βi µ1 − ½βi(βi − 1)σ1²)/λ1.
(ii) Define Ωi = {(x, s) | ci s ≤ x ≤ s}. Then the optimal stopping time τi will be the first time
to leave Ωi at state i, that is,
τi = inf{ t ≥ 0 : Xt/St = ci, ε(t) = i }.
The region Ωi is called the 'continuation region'; see Figure 1.
The interesting part of Theorem 2.1 is the so-called 'regime switching': namely, when
ε(t) changes state, there is an instantaneous jump. We define the 'transient region' to be
the region {(x, s) | c1 s ≤ x ≤ c0 s} when c1 < c0, and, when c0 < c1, to be the region
{(x, s) | c0 s ≤ x ≤ c1 s}; see Figure 2.
When li = 1, and consequently l̂i = 1, we have λ0 λ1 = 0, and Theorem 2.1 is the same
as in [12]. Evidently, the ai and bi, and hence V0, V1, are uniquely determined by c0 and c1,
because the leftmost 4 × 4 matrix in (2.6) is a Vandermonde matrix, which is invertible.
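Once c0 and c1 are fixed, (2.6) is a plain 4 × 4 linear system. A minimal numerical sketch, with hypothetical values for the βi, the li and the right-hand side (none of these numbers come from the paper), showing that the ai, and hence the bi, are pinned down uniquely:

```python
import numpy as np

# Hypothetical roots beta_i of the quartic (2.3) and multipliers l_i;
# rhs stands in for the right-hand side of (2.6), which depends on (c0, c1).
beta = np.array([-1.5, -0.3, 1.2, 2.8])
l = np.array([0.7, 0.9, 1.1, 1.4])
rhs = np.array([1.05, 0.10, 1.0, 0.0])    # placeholder values only

# Rows of the system (2.6): l_i, beta_i*l_i, 1, beta_i
M = np.vstack([l, beta * l, np.ones(4), beta])
a = np.linalg.solve(M, rhs)               # unique for generic distinct beta_i
b = l * a                                  # b_i = l_i a_i, as in the theorem
```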
With some calculation, it is not hard to see that equations (2.4) and (2.5) satisfy the
following: on the region Ω0 = {(x, s) | c0 s < x < s},
(r + λ0)V0 = µ0 x ∂V0/∂x + ½σ0²x² ∂²V0/∂x² + λ0V1,
(r + λ1)V1 = µ1 x ∂V1/∂x + ½σ1²x² ∂²V1/∂x² + λ1V0;    (2.10)
on the region Ω1 = {(x, s) | c1 s < x < c0 s},
(r + λ1)V1 = µ1 x ∂V1/∂x + ½σ1²x² ∂²V1/∂x² + λ1 s,
and
V1(x, s)|_{x=c0 s} = rs/((r + λ1)(γ2 − γ1)) [γ2 (c0/c1)^{γ1} − γ1 (c0/c1)^{γ2}] + λ1 s/(r + λ1),
∂V1/∂x|_{x=c0 s} = rγ1γ2/(c0 (r + λ1)(γ2 − γ1)) [(c0/c1)^{γ1} − (c0/c1)^{γ2}],
V0(x, s)|_{x=c0 s} = s    (smooth fit),
∂V0/∂x|_{x=c0 s} = 0    (smooth fit),
∂V1(x, s)/∂s|_{x=s} = 0    (smooth fit),
∂V0(x, s)/∂s|_{x=s} = 0    (smooth fit).
This is crucial in the derivation of equations (2.4) and (2.5). The basic idea is the
'principle of smooth fit'.
Remark 2.1. If λ0 , λ1 , σ0 , σ1 are positive, then the equation
[(r + λ0 ) − βµ0 − 21 β(β − 1)σ02 ][(r + λ1 ) − βµ1 − 21 β(β − 1)σ12 ] = λ0 λ1
has four real roots.
The proof of this is straightforward: let
f (β) = [(r + λ0 ) − βµ0 − 21 β(β − 1)σ02 ][(r + λ1 ) − βµ1 − 21 β(β − 1)σ12 ] − λ0 λ1 ,
and let θ1 , θ2 be the roots of the corresponding quadratic equation,
(r + λ0 ) − θµ0 − 21 θ(θ − 1)σ02 = 0.
Clearly, the continuous function f(β) satisfies f(0) > 0, f(−∞) > 0, f(∞) > 0, and
f(θi) = −λ0λ1 < 0 for i = 1, 2. Since θ1θ2 = −2(r + λ0)/σ0² < 0, it follows that the
equation f(β) = 0 has four distinct real roots.
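Remark 2.1 can also be checked numerically: with illustrative positive parameters (not taken from the paper), the quartic f(β) = 0 indeed has four distinct real roots, two negative and two positive:

```python
import numpy as np

r, mu0, mu1, s0, s1, l0, l1 = 0.1, 0.05, -0.02, 0.2, 0.4, 1.0, 2.0  # illustrative

# Each factor (r + l_i) - b*mu_i - 0.5*b*(b-1)*s_i^2 is the quadratic
# -0.5*s_i^2*b^2 + (0.5*s_i^2 - mu_i)*b + (r + l_i) in b.
q0 = np.array([-0.5 * s0**2, 0.5 * s0**2 - mu0, r + l0])
q1 = np.array([-0.5 * s1**2, 0.5 * s1**2 - mu1, r + l1])
f = np.polymul(q0, q1)
f[-1] -= l0 * l1            # subtract lambda0*lambda1 from the constant term
roots = np.roots(f)
beta = np.sort(roots.real)  # Remark 2.1: all four roots are real
```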
3. Proof of Theorem 2.1
In order to prove that V0 , V1 given in Theorem 2.1 are indeed optimal, it suffices to verify
that
Yt = e−rt V (Xt , St , ε(t))
is a uniformly integrable supermartingale and that the optimal stopping time is finite with
probability 1. To see this, observe that if Yt is a supermartingale and Vi (x, s) ≥ s, then we
have
Ex,s e−rτ Sτ ≤ Ex,s e−rτ V (Xτ , Sτ , ε(τ )) = Ex,s Yτ
≤ Ex,s Y0 = V (X0 , S0 , ε(0))
= Vi (x, s),
and, if we take the supremum over all such τ , we obtain for all 0 ≤ x ≤ s,
V ∗ |ε(0)=i ≤ Vi (x, s).
On the other hand, once we know that Y_{t∧τi} is a uniformly integrable martingale, we have, for any
fixed t,
E_{x,s} Y_{t∧τi} = E_{x,s} Y0.
Up to time τi, the process Yt is a positive uniformly integrable local martingale.
In fact, Yt is a uniformly integrable martingale. To see this, recall that any continuous local
martingale with an integrable quadratic variation is a martingale (see [9, 1.5.24]). Therefore, it
is sufficient to check that
E⟨Y⟩_∞ < ∞.
Since St increases only when Xt = St with Vs(s, s) = 0, and dSt = 0 for all Xt < St, by Itô's
formula and Fubini's theorem, we have
E⟨Y⟩_∞ = E ∫_0^∞ e^{−2rt} Vx²(Xt, St, ε(t)) Xt² σ_{ε(t)}² dt
       ≤ K²σ² E ∫_0^∞ e^{−2rt} Xt² dt
       ≤ K²σ² ∫_0^∞ e^{−2ζt} dt
       < ∞,
where σ = max_i σi and K < ∞ is an upper bound of Vx(Xt, St, ε(t)), which is a linear
combination of terms (Xt/St)^{α−1} for constants α and hence is bounded on the compact set
0 < ci ≤ Xt/St ≤ 1 (from (2.7) and (2.9)). The last inequality uses the fact that ζ =
r − max{µ0, µ1} > 0 and E[Xt²] < e^{2µt} with µ = max{µ0, µ1}.
Now, using the optional stopping theorem [10] with the stopping time τi, we get
V^*|_{ε(0)=i}(x, s) ≥ Vi(x, s).
Proposition 3.1. The process
Yt = e^{−rt} V(Xt, St, ε(t))
is a supermartingale.
The following differentiation rule will facilitate our proof (see [4]).
Lemma 3.1. Suppose that X is a process with values in R^n, each of whose components X^i is a
semimartingale, and suppose that F is a real-valued, twice continuously differentiable function on R^n;
then F(Xt) is a semimartingale and
F(Xt) = F(X0) + Σ_{i=1}^n ∫_{(0,t]} (∂F/∂xi)(X_{s−}) dX_s^i
       + ½ Σ_{i,j=1}^n ∫_0^t (∂²F/∂xi∂xj)(X_{s−}) d⟨X^{i,c}, X^{j,c}⟩_s
       + Σ_{0<s≤t} [ F(X_s) − F(X_{s−}) − Σ_{i=1}^n (∂F/∂xi)(X_{s−}) ΔX_s^i ].
Proof of Proposition 3.1. The idea of the proof is as follows. Let t > 0, h > 0 and
ΔYt = Y_{t+h} − Yt. Notice that
ΔXt ≈ µ_{ε(t)} Xt h + σ_{ε(t)} Xt ΔWt,  where ΔWt = W_{t+h} − Wt.
Then
E[ΔYt | Ft] = E[Y_{t+h} − Yt | Ft^X],
which is actually
E[e^{−r(t+h)} V(X_{t+h}, S_{t+h}, ε_{t+h}) − e^{−rt} V(Xt, St, εt) | Ft]
= E[e^{−r(t+h)} V(X_{t+h}, S_{t+h}, ε_{t+h}) 1_{(ε_{t+h}=εt)} − e^{−rt} V(Xt, St, εt) | Ft]
  + E[e^{−r(t+h)} V(X_{t+h}, S_{t+h}, ε_{t+h}) 1_{(ε_{t+h}≠εt)} | Ft]
= E[e^{−rt} e^{−λ_{εt} h} e^{−rh} (V(Xt, St, εt) + V'(Xt, St, εt)ΔXt + ½V''(Xt, St, εt)(ΔXt)²)
  − e^{−rt} V(Xt, St, εt) | Ft]
  + E[e^{−rt} (1 − e^{−λ_{εt} h}) e^{−rh} (V(Xt, St, ε̄t) + V'(Xt, St, ε̄t)ΔXt + ½V''(Xt, St, ε̄t)(ΔXt)²) | Ft]
= E[e^{−rt} (1 − λ_{εt} h − rh)(V(Xt, St, εt) + V'(Xt, St, εt)µ_{εt}Xt h
  + ½V''(Xt, St, εt)σ_{εt}²Xt² h + o(h)) − e^{−rt} V(Xt, St, εt) | Ft]
  + E[e^{−rt} e^{−rh} λ_{εt} h (V(Xt, St, ε̄t) + O(h)) | Ft]
= E[h e^{−rt} ( −(λ_{εt} + r)V(Xt, St, εt) + V'(Xt, St, εt)µ_{εt}Xt
  + ½V''(Xt, St, εt)σ_{εt}²Xt² + λ_{εt} V(Xt, St, ε̄t) ) + o(h) | Ft],
where ε̄t = 1 − εt denotes the other state and the primes denote partial derivatives with
respect to the first variable.
More rigorously, recall that for any Markov process Z(t) with generator A, f(Z(t)) −
∫_0^t A f(Z(s)) ds is a martingale when f is twice continuously differentiable. Notice that ε(t)
has generator
( −λ0   λ0
   λ1  −λ1 ),
and take f(ε) = ε, so that f(0) = 0, f(1) = 1; therefore
M(t) = ε(t) − ∫_0^t {λ0 χ0(ε(s)) − λ1 χ1(ε(s))} ds
is a martingale, where χi(ε) = 1 if ε = i and 0 otherwise. In consequence, we can write
ε(t) = ε(0) + ∫_0^t {λ0 χ0(ε(s)) − λ1 χ1(ε(s))} ds + M(t).
Therefore, applying the differentiation rule, we get
d[e^{−rt} V(Xt, St, ε(t))]
= e^{−rt} [ −rV dt + Vx(Xt, St, ε(t)) dXt + ½σ_{ε(t)}²Xt² Vxx(Xt, St, ε(t)) dt + Vs(Xt, St, ε(t)) dSt ]
  + e^{−rt} d( Σ_{s≤t} [V(Xs, Ss, ε(s)) − V(Xs, Ss, ε(s−))] ).
Furthermore, since dε(s) = −1 when ε(s) = 0 and ε(s−) = 1 (and dε(s) = +1 when ε(s) = 1
and ε(s−) = 0), we can rewrite the sum in the above equation as
Σ_{s≤t} [V(Xs, Ss, ε(s)) − V(Xs, Ss, ε(s−))]
= ∫_0^t [V(Xs, Ss, 1 − ε(s−)) − V(Xs, Ss, ε(s−))] (−1)^{ε(s−)} dε(s).
Thus, effectively, there is another martingale, say M̄t, such that
Yt = Y0 + M̄t + ∫_0^t e^{−rs} { −rV + µ_{ε(s)}Xs Vx + ½σ_{ε(s)}²Xs² Vxx
  + (−1)^{ε(s−)} [λ0 χ0(ε(s)) − λ1 χ1(ε(s))] [V(Xs, Ss, 1 − ε(s−)) − V(Xs, Ss, ε(s−))] } ds
  + ∫_0^t e^{−rs} Vs(Xs, Ss, ε(s)) dSs.
Together with (2.10), we see that, whenever we start from time t, E[dYt | Ft] ≤ 0; hence Yt is a
supermartingale in the region 0 < Xt ≤ ci St. (Here ci is chosen according to the starting state
of ε(t).) In the region ci St ≤ Xt ≤ St, because St grows only when Xt = St and Vs(s, s) = 0,
we see that Yt is a positive local martingale.
Proposition 3.2. The martingale Yt is uniformly integrable.
The following lemma gives us the key (see [10]).
Lemma 3.2. (Doob's L^p-inequality.) If X is a right-continuous martingale or positive
submartingale indexed by an interval T of R and X^* = sup_t |Xt|, then, for p > 1,
λ^p P[X^* ≥ λ] ≤ sup_t E[|Xt|^p].
Proof of Proposition 3.2. By (2.4), (2.5), (2.7) and (2.9), the Vi s are bounded from above
when s is fixed and therefore the uniform integrability of Yt is equivalent to that of Zt = e−rt St .
Thus, it is sufficient to show that there exists a p > 1 such that
sup_{0≤t<∞} E|Zt|^p < ∞.
For fixed t, using integration by parts,
E|Zt|^p = ∫_0^∞ P(Zt^p > y) dy = ∫_0^∞ P(Zt > y^{1/p}) dy.
Let ζ = r − max{µ0, µ1}; recalling that r > max{0, µ0, µ1}, we have ζ > 0, and
P(Zt > y^{1/p}) = P( e^{−rt} sup_{0≤s≤t} exp( ∫_0^s σε dWu + ∫_0^s µε du − ∫_0^s ½σε² du ) > y^{1/p} )
≤ P( sup_{0≤s≤t} exp( ∫_0^s σε dWu − ∫_0^s ½σε² du ) > e^{ζt} y^{1/p} )
≤ E[ ( exp( ∫_0^t σε dWu − ∫_0^t ½σε² du ) )^{p'} ] e^{−p'ζt} / y^{p'/p}
≤ exp( t p'(p'σ² − σ² − 2ζ)/2 ) / y^{p'/p},
where 1 < p' is some constant to be chosen. The first inequality holds because, if P{X ≥ Y} =
1, then P{X > y} ≥ P{Y > y}. The second inequality is from Doob's L^p inequality. When
ζ > 0, we can choose a p' > 1 such that ((p')² − p')σ² − 2ζ < 0, so that the exponential factor is
at most 1 for all t ≥ 0, and then choose an appropriate 1 < p < p' such that E|Zt|^p is dominated,
uniformly in t, by 1 + ∫_1^∞ y^{−p'/p} dy, which is finite. Hence,
sup_{0≤t<∞} E|Zt|^p < ∞.
Proposition 3.3. We have
P(τi < ∞) = 1.
Proof. Since the stopping time is the first time at which the process (Xt, St) reaches
the line x = ci s when starting from state i, we can actually prove a slightly more general
statement, viz. P{τ = ∞} = 0, where τ is the first time t such that
max_{0≤u≤t} Xu / Xt ≥ 1/c
with 0 < c < 1. This is equivalent to proving that P{τ > T} → 0 as T → ∞.
Noticing that
P{τ > T} = P( ∫_u^t (µε − ½σε²) ds + ∫_u^t σε dBs ≥ log c for all 0 ≤ u ≤ t ≤ T ),
the independence of Xs and Xt/Xs implies that
P{τ > 2T} ≤ P( { ∫_u^t (µε − ½σε²) ds + ∫_u^t σε dBs ≥ log c for all 0 ≤ u ≤ t ≤ T }
             ∩ { ∫_u^t (µε − ½σε²) ds + ∫_u^t σε dBs ≥ log c for all T ≤ u ≤ t ≤ 2T } )
           ≤ (P{τ > T})².
Since a mixture of Gaussian processes is unbounded, P{τ > T} < 1, and the proposition follows.
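Proposition 3.3 says that the drawdown time τ = inf{t : Xt/St ≤ c} is almost surely finite. A quick Monte Carlo sanity check; the parameters, the discretized switching and the finite horizon are numerical shortcuts of mine, not part of the proof:

```python
import numpy as np

def drawdown_time(c=0.8, T=20.0, n=20_000, mu=(0.05, -0.02),
                  sigma=(0.2, 0.4), lam=(1.0, 2.0), seed=0):
    """First time the simulated path satisfies X_t / S_t <= c
    (equivalently max_{u<=t} X_u / X_t >= 1/c); np.inf if not hit before T."""
    rng = np.random.default_rng(seed)
    dt = T / n
    logx, smax, i = 0.0, 0.0, 0
    for k in range(n):
        logx += (mu[i] - 0.5 * sigma[i]**2) * dt + sigma[i] * np.sqrt(dt) * rng.normal()
        smax = max(smax, logx)
        if logx - smax <= np.log(c):
            return (k + 1) * dt
        if rng.random() < 1.0 - np.exp(-lam[i] * dt):  # leave state i
            i = 1 - i
    return np.inf

taus = [drawdown_time(seed=s) for s in range(20)]
```

With these (illustrative) parameters, every simulated path records a finite drawdown time well inside the horizon, consistent with the geometric decay of P{τ > T}.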
4. Markov structure of (Xt /St , ε(t))
In [13], by the observation that (Xt /St ) is a Markov process when Xt is a geometric
Brownian motion, the derivation of the Russian option pricing problem was simplified and
the free boundary problem amounted to finding a threshold of the Markov process (Xt /St ).
Only minor modifications are needed to apply the same methodology to our case in validating
that (Xt/St, ε(t)) is Markovian: we need only replace the classical Wiener space in [13],
generated by (Ω, F, F^W = (Ft^W)_{t≥0}, P), with (Ω, F, F^X = (Ft^X)_{t≥0}, P).
Proposition 4.1. The process {ψt^{ε(t)}} is Markovian with respect to (Ω, F, F^X, P), where
ψt^{ε(t)} = max{ max_{u≤t} Xu^{ε(u)}, X0^{ε(0)} ψ0^{ε(0)} } / Xt^{ε(t)}.
For the detailed proof of Proposition 4.1, interested readers are referred to [13].
Although Proposition 4.1 is true, it is not clear in general how far the Markovian property
can carry us. Shepp [11] gave an example in which he considers the optimal stopping problem
V(w, m) = sup_τ E_{w,m}[Mτ e^{−rτ}],
where Wt is a standard Brownian motion and Mt = max{m, sup_{0≤u≤t} Wu}. In this case, the
fact that Wt − Mt is Markovian offers no clue to any explicit solution. We conclude that the tractability
of our problem is contingent upon the linearity of the payoff function as well. Gerber and Shiu
addressed in [5] various pricing problems with linear payoff functions.
5. Derivation of the solution—revisiting the problem in a discrete model
The martingale proof provided in Section 3 is stunningly simple, yet unrevealing. To gain a
deeper understanding of the structure of the solution, we resort to a seemingly crude numerical
simulation, through which the path to the answer is painlessly unveiled.
In [6], we provided several ways of discretizing the market model, one of which we now
describe. Let (Xn, ε(n)) represent the two-dimensional Markov process of the stock price and
the state of the market at time n. It satisfies the recursion
Xn = ηn^{(ε(n), ε(n−1))} Xn−1,
where the ηn^{(i,j)} are i.i.d. random variables taking the value uj with probability
pj (χ(i, 1−j) + (−1)^{χ(i,1−j)} e^{−λj h}) and the value 1/uj with probability (1 − pj)(χ(i, 1−j) +
(−1)^{χ(i,1−j)} e^{−λj h}), where i, j ∈ {0, 1} and
χ(i, j) = 1 if i = j,  χ(i, j) = 0 if i ≠ j.
For fixed T, we divide the interval [0, T] into N sub-intervals such that T = Nh and let
ui = e^{σi √h},    pi = (µi h + σi √h − ½σi² h) / (2σi √h),    d = e^{−rh},    li = e^{−λi h}.
In other words, (Xn, ε(n)) is a random walk taking values on the set (u0^m u1^k x, i) with i ∈ {0, 1},
m, k ∈ {0, ±1, ±2, . . . }.
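For concreteness, these quantities are straightforward to tabulate (the model inputs below are illustrative only):

```python
import numpy as np

r, h = 0.1, 1e-4                                    # illustrative inputs
mu = np.array([0.05, -0.02])
sigma = np.array([0.2, 0.4])
lam = np.array([1.0, 2.0])

sq = np.sqrt(h)
u = np.exp(sigma * sq)                              # up factors u_i
p = (mu * h + sigma * sq - 0.5 * sigma**2 * h) / (2 * sigma * sq)
d = np.exp(-r * h)                                  # one-step discount
l = np.exp(-lam * h)                                # stay probabilities l_i
```

Note that 2p_i − 1 = (µ_i h − ½σ_i²h)/(σ_i√h), which is the drift relation used in the Taylor expansion in the proof of Proposition 5.3.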
Now consider the Markov chain X = ((Xn, ε(n)), Fn, P). We can restate the Russian option
pricing problem as the following dynamic programming problem: let
W0(x, s) = s,    Z0(x, s) = s,
Wm(x, s) = max{ Wm−1(x, s),
    d l0 p0 Wm−1(u0 x, max{u0 x, s}) + d l0 q0 Wm−1(x/u0, max{x/u0, s})
    + d (1 − l0) p1 Zm−1(u1 x, max{u1 x, s}) + d (1 − l0) q1 Zm−1(x/u1, max{x/u1, s}) },
Zm(x, s) = max{ Zm−1(x, s),
    d l1 p1 Zm−1(u1 x, max{u1 x, s}) + d l1 q1 Zm−1(x/u1, max{x/u1, s})
    + d (1 − l1) p0 Wm−1(u0 x, max{u0 x, s}) + d (1 − l1) q0 Wm−1(x/u0, max{x/u0, s}) },
where qi = 1 − pi.
Theorem 5.1. When r > max{0, µ0, µ1}, the solution to problem (2.2) is
V0(x, s) = lim_{h→0} lim_{n→∞} Wn(x, s),    V1(x, s) = lim_{h→0} lim_{n→∞} Zn(x, s).
The proof of Theorem 5.1 is by several steps:
Proof. The proof requires the following four propositions, an application of the principle of
smooth fit, and reference to [6, Theorem 4.2].
Proposition 5.1. We have
Wm (xu, su) = uWm (x, s),
Zm (xu, su) = uZm (x, s).
This simple, yet critical, proposition requires no proof.
Proposition 5.2. The functions V0 and V1 are increasing with respect to x, and V0 (x, s) ≥ s,
V1 (x, s) ≥ s.
Proposition 5.3. In the common continuation region, where V0, V1 are C² smooth, V0, V1
satisfy
(r + λ0)V0(x, s) = µ0 x ∂V0/∂x + ½σ0²x² ∂²V0/∂x² + λ0 V1(x, s),
(r + λ1)V1(x, s) = µ1 x ∂V1/∂x + ½σ1²x² ∂²V1/∂x² + λ1 V0(x, s).    (5.1)
Proof. Notice that if, for some m > 1,
d l0 p0 Wm−1(u0 x, max{u0 x, s}) + d l0 q0 Wm−1(x/u0, max{x/u0, s})
  + d (1 − l0) p1 Zm−1(u1 x, max{u1 x, s}) + d (1 − l0) q1 Zm−1(x/u1, max{x/u1, s}) ≥ Wm−1(x, s),
d l1 p1 Zm−1(u1 x, max{u1 x, s}) + d l1 q1 Zm−1(x/u1, max{x/u1, s})
  + d (1 − l1) p0 Wm−1(u0 x, max{u0 x, s}) + d (1 − l1) q0 Wm−1(x/u0, max{x/u0, s}) ≥ Zm−1(x, s),
then the same holds for all n > m. Therefore, if the limit function V0(x, s) is larger than s, then there
exists some m such that
d l0 p0 Wm−1(u0 x, max{u0 x, s}) + d l0 q0 Wm−1(x/u0, max{x/u0, s})
  + d (1 − l0) p1 Zm−1(u1 x, max{u1 x, s}) + d (1 − l0) q1 Zm−1(x/u1, max{x/u1, s}) ≥ Wm−1(x, s).
In consequence, we have
d l0 p0 V0(u0 x, max{u0 x, s}) + d l0 q0 V0(x/u0, max{x/u0, s})
  + d (1 − l0) p1 V1(u1 x, max{u1 x, s}) + d (1 − l0) q1 V1(x/u1, max{x/u1, s}) = V0(x, s)
for all (x, s) where V0(x, s) > s. Similarly, for those (x, s) such that V1(x, s) > s, we have
d l1 p1 V1(u1 x, max{u1 x, s}) + d l1 q1 V1(x/u1, max{x/u1, s})
  + d (1 − l1) p0 V0(u0 x, max{u0 x, s}) + d (1 − l1) q0 V0(x/u0, max{x/u0, s}) = V1(x, s).
Since V0 and V1 are C² on the continuation region, by Taylor expansion,
V(e^{σ√h} x, s) = V((1 + σ√h + ½σ²h + o(h))x, s)
               = V(x + Δx, s)
               = V(x, s) + Δx Vx(x, s) + ½(Δx)² Vxx(x, s) + o(h)
               = V(x, s) + (σ√h + ½σ²h) x Vx(x, s) + ½σ²h x² Vxx(x, s) + o(h),
and similarly for V(e^{−σ√h} x, s) with σ√h replaced by −σ√h; hence
V(x, s) = (1 − rh) p [V(x, s) + (σ√h + ½σ²h) x Vx(x, s) + ½x²σ²h Vxx(x, s)]
        + (1 − rh) q [V(x, s) + (−σ√h + ½σ²h) x Vx(x, s) + ½x²σ²h Vxx(x, s)] + o(h)
        = (1 − rh) [V(x, s) + ((p − q)σ√h + ½σ²h) x Vx(x, s) + ½x²σ²h Vxx(x, s)] + o(h).
Recalling that
pi − qi = (µi h − ½σi² h) / (σi √h),
we obtain, on the continuation region, the following equations as anticipated:
(r + λ0)V0(x, s) = µ0 x ∂V0/∂x + ½σ0²x² ∂²V0/∂x² + λ0 V1(x, s),
(r + λ1)V1(x, s) = µ1 x ∂V1/∂x + ½σ1²x² ∂²V1/∂x² + λ1 V0(x, s).
Proposition 5.4. If r > max{0, µ0, µ1}, and the µi, λi and σi are positive, then there exist 0 <
c0, c1 < 1 such that Vi(x, s) = s when x < ci s, and Vi(x, s) > s when ci s < x < s
(i = 0, 1).
Proof. We give a proof by contradiction. Evidently, V0(x, s) ≥ s and V1(x, s) ≥ s. If, for
all 0 < x ≤ s, V0(x, s) > s and V1(x, s) > s, then, by Proposition 5.3, in the continuation
region V0 and V1 are of the form
V0(x, s) = Σ_{i=1}^4 ai(s) x^{βi},    V1(x, s) = Σ_{i=1}^4 bi(s) x^{βi},
where the βi satisfy (2.3).
Moreover, it is not hard to verify that if
V0(x, 1) = Σ_{i=1}^4 ai x^{βi},    V1(x, 1) = Σ_{i=1}^4 bi x^{βi}
are solutions to (5.1) at s = 1, then so are sV0(x/s, 1), sV1(x/s, 1). Hence, ai(s) is actually
of the form ai s^{1−βi} and bi(s) = bi s^{1−βi}. We conclude that if the boundaries between the
continuation region and the stopping region exist, they should be straight lines x = ci s.
Furthermore, V0(x, s) ≥ s and ∂V0/∂x ≥ 0; therefore a1 and a2 cannot both vanish:
otherwise, when x → 0, V0 → 0, but since V0 ≥ s, this would be a contradiction.
Now suppose that a1 ≠ 0. Since
V0(x, s) = Σ_{i=1}^4 ai x^{βi},
when x → 0 the leading term is x^{β1}, and V0 > s implies that a1 is positive. However,
since V0' ≥ 0 almost everywhere, the same argument applied to V0'(x, s) = Σ_{i=1}^4 ai βi x^{βi−1}
indicates that a1 < 0, a contradiction. Therefore, at least one of V0 and V1 should be equal to
s when x is close to 0. In other words, at least one of c0 and c1 exists.
Now suppose that c0 exists but c1 does not, i.e. when 0 < x < c0 s, V0(x, s) = s and V1(x, s) >
s; then
d l1 p1 V1(u1 x, max{u1 x, s}) + d l1 q1 V1(x/u1, max{x/u1, s})
  + d (1 − l1) p0 V0(u0 x, max{u0 x, s}) + d (1 − l1) q0 V0(x/u0, max{x/u0, s}) = V1(x, s).
By a Taylor expansion,
(r + λ1)V1 = µ1 x V1' + ½σ1²x² V1'' + λ1 s,
which implies that
V1(x, s) = A1 x^{γ1} + A2 x^{γ2} + λ1 s/(r + λ1),
where γ1, γ2 are the solutions of the corresponding characteristic equation
(r + λ1) = γµ1 + ½γ(γ − 1)σ1².
Hence, γ1γ2 < 0. Repeating the same argument, we conclude that it is impossible to have
V1(x, s) ≥ s and ∂V1/∂x ≥ 0. Therefore, c1 also exists. So the two boundaries x = c0 s
and x = c1 s divide the state space {(x, s) | 0 < x ≤ s} into three parts: what we have termed the
stopping region, the common continuation region and the transient region.
Continuing the proof of Theorem 5.1, we now apply the principle of smooth fit (see [14] for
details). Consider the transient region
Ω1 = {(x, s) | c1 s ≤ x ≤ c0 s},
where V0(x, s) ≡ s. The principle of smooth fit implies that
V1(x, s)|_{x=c1 s} = s,    ∂V1/∂x|_{x=c1 s} = 0,
and
(r + λ1)V1 = µ1 x V1' + ½σ1²x² V1'' + λ1 s.
Thus, we can explicitly derive V1 in Ω1 in terms of c1:
V1(x, s) = rs/((r + λ1)(γ2 − γ1)) [γ2 (x/(c1 s))^{γ1} − γ1 (x/(c1 s))^{γ2}] + λ1 s/(r + λ1),    (5.2)
where γ1, γ2 satisfy
(r + λ1) = γµ1 + ½γ(γ − 1)σ1²;
hence,
γ1 = [ −(µ1 − ½σ1²) − √((µ1 − ½σ1²)² + 2σ1²(r + λ1)) ] / σ1²,
γ2 = [ −(µ1 − ½σ1²) + √((µ1 − ½σ1²)² + 2σ1²(r + λ1)) ] / σ1².
Therefore,
V1(x, s)|_{x=c0 s} = rs/((r + λ1)(γ2 − γ1)) [γ2 (c0/c1)^{γ1} − γ1 (c0/c1)^{γ2}] + λ1 s/(r + λ1),
∂V1/∂x|_{x=c0 s} = rγ1γ2/(c0 (r + λ1)(γ2 − γ1)) [(c0/c1)^{γ1} − (c0/c1)^{γ2}],
V0(x, s)|_{x=c0 s} = s,
∂V0/∂x|_{x=c0 s} = 0.
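The closed form (5.2) is easy to verify numerically: at x = c1 s the expression returns s with zero slope, as smooth fit demands (the parameters and the value of c1 below are illustrative, not computed from the paper):

```python
import numpy as np

r, mu1, sigma1, lam1 = 0.1, -0.02, 0.4, 2.0        # illustrative
a = mu1 - 0.5 * sigma1**2
disc = np.sqrt(a**2 + 2 * sigma1**2 * (r + lam1))
g1, g2 = (-a - disc) / sigma1**2, (-a + disc) / sigma1**2  # gamma_1 < 0 < gamma_2

def V1_transient(x, s, c1):
    """Equation (5.2): V_1 on the transient region, in terms of the threshold c1."""
    y = x / (c1 * s)
    return (r * s / ((r + lam1) * (g2 - g1))) * (g2 * y**g1 - g1 * y**g2) \
           + lam1 * s / (r + lam1)
```

At y = 1 the bracket collapses to γ2 − γ1, so V1(c1 s, s) = s exactly, and differentiating kills both terms, which is the smooth-fit pair of conditions.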
With respect to the boundary conditions on x = s, it is conceivable (see also [5]) that if the
current stock price x is very close to s, almost surely Xt will exceed s before it is optimal
for the payee to demand payment. Therefore, V(x, s) does not depend on the precise value of
s, which implies that
∂V0/∂s (x, s)|_{x=s} = 0    and    ∂V1/∂s (x, s)|_{x=s} = 0.
Combining this with Propositions 5.1–5.4 and [6, Theorem 4.2], we reach the conclusion of
Theorem 5.1.
Note. The last two boundary conditions on x = s effectively guarantee that Yt is a submartingale.
6. Arbitrage-free price for Russian options
In [6], we proposed one way to complete the market model. At each time t, there is a
market for a 'ticket': a security that pays one unit of account (say, one dollar) at the next time
τ(t) = inf{u > t : ε(u) ≠ ε(t)} that the Markov chain ε changes state. That contract then
becomes worthless (it has no future dividends), and a new contract is issued that pays at the next
change of state, and so on.
It is not hard to see that, under natural pricing, this will complete the market, and provide
unique arbitrage-free prices to the Russian (and European, and any other) options on the
underlying risky asset (see [2], [7] and [8]). Moreover, the Russian option price will be given
by our solution in Theorem 2.1 if we re-interpret µi by supposing that µ(i) = r − δ(i), where
δ(i) is the rate of continual dividend payout, per dollar of underlying price.
Acknowledgements
I am grateful to Larry Shepp for suggesting this project to me. Lester Dubins, Darrell Duffie,
Richard Gundy, Michael Harrison, Chris Heyde and Dan Ocone provided valuable comments
and advice. Thanks to the referee for kindly pointing out a missing step in the original proof
of the theorem. I gratefully acknowledge support from DIMACS and the hospitality of the
University of California at Berkeley.
References
[1] Chernoff, H. (1961). Sequential tests for the mean of a normal distribution. In Proc. 4th. Berkeley Symp. Statist.
Prob. Vol. 1. University of California Press, Berkeley, pp. 79–91.
[2] Delbaen, F. and Schachermayer, W. (1994). A general version of the fundamental theorem of asset pricing.
Math. Ann. 300, 463–520.
[3] Duffie, D. and Harrison, M. (1993). Arbitrage pricing of perpetual lookback options. Ann. Appl. Prob. 3,
641–651.
[4] Elliott, R. (1982). Stochastic Calculus and Applications. Springer, Berlin.
[5] Gerber, H. U. and Shiu, E. S. W. (1996). Martingale approach to pricing perpetual American options on two
stocks. Math. Finance 6, 303–322.
[6] Guo, X. (2001). Information and option pricing. Quant. Finance 1, 38–44.
[7] Harrison, M. and Kreps, D. (1979). Martingales and arbitrage in multiperiod securities markets. J. Econom.
Theory 20, 381–408.
[8] Harrison, M. and Pliska, S. (1981). Martingales and stochastic integrals in the theory of continuous trading.
Stoch. Proc. Appl. 11, 215–260.
[9] Karatzas, I. and Shreve, S. (1998). Brownian Motion and Stochastic Calculus. Springer, Berlin.
[10] Revuz, D. and Yor, M. (1991). Continuous Martingales and Brownian Motion. Springer, Berlin.
[11] Shepp, L. A. (1999). A model for stock price fluctuations based on information. Preprint, Rutgers University.
[12] Shepp, L. A. and Shiryaev, A. N. (1993). The Russian option: reduced regret. Ann. Appl. Prob. 3, 631–640.
[13] Shepp, L. A. and Shiryaev, A. N. (1994). A new look at pricing of the ‘Russian options’. Theory Prob. Appl.
39, 103–119.
[14] Van Moerbeke, P. L. J. (1976). On optimal stopping and free boundary problems. Arch. Rational Mechanics
Anal. 60, 101–148.