Enlargements of Filtrations
Monique Jeanblanc
Jena, June 2010
These notes are not complete and should be considered as a preliminary version of a full text. Their aim is to support the lectures given by Monique Jeanblanc and Giorgia Callegaro at University El Manar, Tunis, in March 2010, and later by Monique Jeanblanc at Jena, in June 2010. Sections marked with a ♠ are used only in very specific (but important) cases and can be skipped on a first reading.
A large part of these notes can be found in the book
Mathematical Methods in Financial Markets,
by Jeanblanc, M., Yor M., and Chesney, M. (Springer), indicated as [3M]. We thank Springer Verlag
for allowing us to reproduce part of this volume.
Many examples related to Credit risk can be found in
Credit Risk Modeling
by Bielecki, T.R., Jeanblanc, M. and Rutkowski, M., CSFI lecture note series, Osaka University
Press, 2009, indicated as [BJR].
I thank all the participants of the Jena workshop for their remarks, questions and comments.
Chapter 1
Theory of Stochastic Processes
1.1 Background
In this section, we recall some facts from the theory of stochastic processes. Proofs can be found, for example, in Dellacherie [18], Dellacherie and Meyer [22] and Rogers and Williams [68].
As usual, we start with a filtered probability space (Ω, F, F, P) where F = (Ft , t ≥ 0) is a given
filtration satisfying the usual conditions, and F = F∞ .
1.1.1 Path properties
A process X is continuous if, for almost all ω, the map t → Xt (ω) is continuous. The process is
continuous on the right with limits on the left (in short càdlàg following the French acronym1 ) if,
for almost all ω, the map t → Xt (ω) is càdlàg.
Definition 1.1.1 A process X is increasing if X0 = 0, X is right-continuous, and Xs ≤ Xt , a.s.
for s ≤ t.
Definition 1.1.2 Let (Ω, F, F, P) be a filtered probability space. The process X is F-adapted if for
any t ≥ 0, the random variable Xt is Ft -measurable.
The natural filtration FX of a stochastic process X is the smallest filtration F which satisfies the
usual hypotheses and such that X is F-adapted. We shall write in short (with an abuse of notation)
FtX = σ(Xs , s ≤ t).
Exercise 1.1.3 Write carefully the definition of the filtration F so that the right-continuity assumption is satisfied. Starting from a filtration F⁰ which is not right-continuous, define the smallest right-continuous filtration F which contains F⁰. C
1.1.2 Predictable and optional σ-algebra
If τ and ϑ are two stopping times, the stochastic interval ]]ϑ, τ ]] is the set {(ω, t) : ϑ(ω) < t ≤ τ (ω)}.
The optional σ-algebra O is the σ-algebra generated on F × B by the stochastic intervals
[[τ, ∞[[ where τ is an F-stopping time.
1 In French, continuous on the right is continu à droite, and with limits on the left is admettant des limites à
gauche. We shall also use càd for continuous on the right. The use of this acronym comes from P-A. Meyer.
The predictable σ-algebra P is the σ-algebra generated on F × B by the stochastic intervals
]]ϑ, τ ]] where ϑ and τ are two F-stopping times such that ϑ ≤ τ .
A process X is said to be F-predictable (resp. F-optional) if the map (ω, t) → Xt (ω) is P-measurable (resp. O-measurable).
Example 1.1.4 An adapted càg process is predictable.
Proposition 1.1.5 Let F be a given filtration (called the reference filtration).
• The optional σ-algebra O is the σ-algebra on R+ × Ω generated by càdlàg F-adapted processes
(considered as mappings on R+ × Ω).
• The predictable σ-algebra P is the σ-algebra on R+ × Ω generated by the F-adapted càg (or
continuous) processes. The inclusion P ⊂ O holds.
These two σ-algebras P and O are equal if all F-martingales are continuous. In general
O = P ∨ σ(∆M, M describing the set of martingales) .
A process is said to be predictable (resp. optional) if it is measurable with respect to the
predictable (resp. optional) σ-field. If X is a predictable (resp. optional) process and T a stopping
time, then the stopped process X T is also predictable (resp. optional). Every process which is
càg and adapted is predictable, every process which is càd and adapted is optional. If X is a
càdlàg adapted process, then (Xt− , t ≥ 0) is a predictable process.
A stopping time T is predictable if and only if the process (11{t<T } = 1 − 11{T ≤t} , t ≥ 0)
is predictable, that is if and only if the stochastic interval [[0, T [[= {(ω, t) : 0 ≤ t < T (ω)} is
predictable. Note that O = P if and only if any stopping time is predictable. A stopping time T
is totally inaccessible if P(T = S < ∞) = 0 for all predictable stopping times S. See Dellacherie
[18], Dellacherie and Meyer [20] and Elliott [26] for related results.
If τ is an F-stopping time, the σ-algebra of events prior to τ , Fτ , and the σ-algebra of events
strictly prior to τ Fτ − are defined as:
Fτ = {A ∈ F∞ : A ∩ {τ ≤ t} ∈ Ft , ∀t}.
Fτ − is the smallest σ-algebra which contains F0 and all the sets of the form A ∩ {t < τ }, t > 0 for
A ∈ Ft .
If X is F-progressively measurable and τ a F-stopping time, then the r.v. Xτ is Fτ -measurable
on the set {τ < ∞}.
If τ is a random time (i.e., a non-negative r.v.), the σ-algebras Fτ and Fτ− are defined as
Fτ = σ(Yτ , Y F-optional), Fτ− = σ(Yτ , Y F-predictable) .
Exercise 1.1.6 Give a simple example of a predictable process which is not left-continuous.
Hint: Deterministic functions are predictable. C

1.1.3 Doob-Meyer decomposition ♠
An adapted process X is said to be of class² (D) if the collection XT 11{T<∞}, where T ranges over all stopping times, is uniformly integrable.
If Z is a non-negative supermartingale of class (D), there exists a unique increasing, integrable and predictable process A such that Zt = E(A∞ − At |Ft ). In particular, any non-negative supermartingale of class (D) can be written as Zt = Mt − At where M is a uniformly integrable martingale.
² Class (D) is in honor of Doob.
1.1.4 Semi-martingales
We recall that an F-adapted process X is an F-semi-martingale if X = M + A where M is an F-local
martingale and A an F-adapted process with finite variation. If there exists a decomposition with a
process A which is predictable, the decomposition X = M + A where M is an F-martingale and A
an F-predictable process with finite variation is unique and X is called a special semi-martingale.
If X is continuous, the process A is continuous.
In general, if G = (Gt , t ≥ 0) is a filtration larger than F = (Ft , t ≥ 0), i.e., Ft ⊂ Gt , ∀t ≥ 0 (we
shall write F ⊂ G), it is not true that an F-martingale remains a martingale in the filtration G. It
is not even true that F-martingales remain G-semi-martingales.
Example 1.1.7 (a) Let Gt = F∞ . Then, the only F-martingales which are G-martingales are
constants.
(b)♠ An interesting example is Azéma's martingale µ, defined as follows. Let B be a Brownian motion and gt = sup{s ≤ t : Bs = 0}. The process
µt = (sgn Bt ) √(t − gt ), t ≥ 0,
is a martingale in its own filtration. This discontinuous Fµ-martingale is not an F^B-martingale; it is not even an F^B-semi-martingale.
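As an added numerical illustration (a sketch, not part of the original notes; all names are ad hoc), one can simulate a Brownian path on a grid and build Azéma's process from the last zero before each time:

```python
import numpy as np

# Build Azema's process mu_t = sgn(B_t) * sqrt(t - g_t) from a simulated
# Brownian path, where g_t is the last zero (sign change) before t.
rng = np.random.default_rng(42)
n_steps, T = 100_000, 1.0
dt = T / n_steps
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
t = np.linspace(0.0, T, n_steps + 1)

# index of the last sign change (approximate zero) before each time
zero_idx = np.where(np.sign(B[1:]) != np.sign(B[:-1]))[0]
last_zero = np.zeros(n_steps + 1, dtype=int)
last_zero[zero_idx + 1] = zero_idx + 1
np.maximum.accumulate(last_zero, out=last_zero)

mu = np.sign(B) * np.sqrt(t - t[last_zero])
print("E[mu_T] ~", mu[-1], " (a single path; averaging paths gives ~0)")
```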
We recall an important (but difficult) result due to Stricker [74].
Proposition 1.1.8 Let F and G be two filtrations such that F ⊂ G. If X is a G-semimartingale
which is F-adapted, then it is also an F-semimartingale.
Exercise 1.1.9 Let N be a Poisson process (i.e., a process with stationary and independent increments, such that the law of Nt is a Poisson law with parameter λt). Prove that the process M defined as Mt = Nt − λt is a martingale and that the process Mt² − λt = (Nt − λt)² − λt is also a martingale.
Prove that, for any θ ∈ [0, 1],
Nt = θ(Nt − λt) + (1 − θ)Nt + θλt
is a decomposition of the semi-martingale N. Which one has a predictable finite variation process? C
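A quick numerical sanity check of the first part (an added sketch, not in the original notes; all parameter values are ad hoc): by Monte Carlo, E(Mt ) ≈ 0 and E(Mt²) ≈ λt.

```python
import numpy as np

# Monte Carlo check: for a Poisson process N with intensity lam,
# M_t = N_t - lam*t satisfies E[M_t] = 0 and E[M_t^2] = lam*t.
rng = np.random.default_rng(0)
lam, t, n_paths = 2.0, 3.0, 1_000_000

N_t = rng.poisson(lam * t, size=n_paths)   # N_t ~ Poisson(lam*t)
M_t = N_t - lam * t                        # compensated process at time t

print("E[M_t]           ~", M_t.mean())                  # close to 0
print("E[M_t^2] - lam*t ~", (M_t**2).mean() - lam * t)   # close to 0
```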
Exercise 1.1.10 Prove that τ is an H-stopping time, where H is the natural filtration of the process Ht = 11{τ≤t} , and that τ is a G-stopping time, where G = F ∨ H, for any filtration F. C
Exercise 1.1.11 Prove that if M is a G-martingale and is F-adapted, then M is an F-martingale.
Prove that, if M is a G-martingale, then M̂ defined as M̂t = E(Mt |Ft ) is an F-martingale. C
Exercise 1.1.12 Prove that, if G = F ∨ F̃ where F̃ is independent of F, then any F-martingale remains a G-martingale.
Prove that, if F is generated by a Brownian motion W, and if there exists a probability Q equivalent to P such that F̃ is independent of F under Q, then any (P, F)-martingale remains a (P, G)-semi-martingale. C
Stochastic Integration
If X = M + A is a semi-martingale and Y a (bounded) predictable process, we denote by Y⋆X the stochastic integral
(Y⋆X)t = ∫₀ᵗ Ys dXs = ∫₀ᵗ Ys dMs + ∫₀ᵗ Ys dAs .
Integration by Parts ♠
Any semi-martingale X admits a decomposition as the sum of a martingale M and a bounded variation process. The martingale part admits a decomposition M = Mᶜ + Mᵈ where Mᶜ is continuous and Mᵈ is a discontinuous martingale. The process Mᶜ is denoted in the literature by Xᶜ (even if this notation is misleading!). The optional Itô formula is
f(Xt ) = f(X0 ) + ∫₀ᵗ f′(Xs− )dXs + ½ ∫₀ᵗ f″(Xs )d⟨Xᶜ⟩s + Σ_{0<s≤t} [f(Xs ) − f(Xs− ) − f′(Xs− )∆Xs ] .
If U and V are two finite variation processes, the Stieltjes integration by parts formula can be written as follows:
Ut Vt = U0 V0 + ∫_{]0,t]} Vs dUs + ∫_{]0,t]} Us− dVs        (1.1.1)
      = U0 V0 + ∫_{]0,t]} Vs− dUs + ∫_{]0,t]} Us− dVs + Σ_{s≤t} ∆Us ∆Vs .
As a partial check, one can verify that the jump process of the left-hand side, i.e., Ut Vt − Ut− Vt− , is equal to the jump process of the right-hand side, i.e., Vt− ∆Ut + Ut− ∆Vt + ∆Ut ∆Vt .
The general integration by parts formula is
Xt Yt = X0 Y0 + ∫₀ᵗ Xs− dYs + ∫₀ᵗ Ys− dXs + [X, Y]t .        (1.1.2)
For bounded variation processes,
[U, V]t = Σ_{s≤t} ∆Us ∆Vs .
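For finite variation pure-jump paths, formula (1.1.1) is a telescoping identity and can be checked exactly on a grid; the following sketch (an addition to these notes, with ad hoc names) does so:

```python
import numpy as np

# Pathwise check of the second line of (1.1.1) for two pure-jump
# finite variation processes:
# U_t V_t = U_0 V_0 + sum V_{s-} dU_s + sum U_{s-} dV_s + sum dU_s dV_s.
rng = np.random.default_rng(1)
n = 10_000
dU = rng.choice([0.0, 1.0, -1.0], size=n, p=[0.9, 0.05, 0.05])
dV = rng.choice([0.0, 1.0, -1.0], size=n, p=[0.9, 0.05, 0.05])
U = np.concatenate([[0.0], np.cumsum(dU)])   # U_0 = 0, then cumulative jumps
V = np.concatenate([[0.0], np.cumsum(dV)])

lhs = U[-1] * V[-1] - U[0] * V[0]
rhs = np.sum(V[:-1] * dU) + np.sum(U[:-1] * dV) + np.sum(dU * dV)
print(lhs, rhs)   # the two numbers agree (telescoping identity)
```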
Let X be a continuous local martingale. The predictable quadratic variation process of X is the continuous increasing process ⟨X⟩ such that X² − ⟨X⟩ is a local martingale.
Let X and Y be two continuous local martingales. The predictable covariation process is the continuous finite variation process ⟨X, Y⟩ such that XY − ⟨X, Y⟩ is a local martingale.
Let X and Y be two local martingales. The covariation process is the finite variation process [X, Y] such that
(i) XY − [X, Y] is a local martingale,
(ii) ∆[X, Y]t = ∆Xt ∆Yt .
The process [X] = [X, X] is non-decreasing.
The predictable covariation process is (if it exists) the predictable finite variation process ⟨X, Y⟩ such that XY − ⟨X, Y⟩ is a local martingale.

Exercise 1.1.13 Prove that if X and Y are continuous, ⟨X, Y⟩ = [X, Y].
Prove that if M is the compensated martingale of a Poisson process with intensity λ, then [M] = N and ⟨M⟩t = λt. C
1.1.5 Change of probability

Brownian filtration
Let F be a Brownian filtration and L a strictly positive F-martingale with L0 = 1, and define dQ|Ft = Lt dP|Ft . Then,
B̃t := Bt − ∫₀ᵗ (1/Ls ) d⟨B, L⟩s
is a (Q, F)-Brownian motion. More generally, if M is an F-martingale,
M̃t := Mt − ∫₀ᵗ (1/Ls ) d⟨M, L⟩s
is a (Q, F)-martingale.
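As an added illustration (a sketch under the simplest possible choice, not part of the original text), take L to be the exponential martingale Lt = exp(θBt − ½θ²t); then (1/Ls )d⟨B, L⟩s = θ ds, so B̃t = Bt − θt, and the statement can be checked by importance weighting:

```python
import numpy as np

# Monte Carlo sketch of the Girsanov statement with L_t = exp(theta*B_t - theta^2 t/2):
# under dQ = L_t dP, B~_t = B_t - theta*t is a Brownian motion.
rng = np.random.default_rng(2)
theta, t, n_paths = 0.7, 1.0, 500_000

B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)        # B_t under P
L_t = np.exp(theta * B_t - 0.5 * theta**2 * t)         # Radon-Nikodym density
B_tilde = B_t - theta * t                              # candidate Q-Brownian motion

# E_Q[B~_t] = E_P[L_t * B~_t] should vanish; E_Q[B~_t^2] should be t.
print("E_Q[B~_t]   ~", np.mean(L_t * B_tilde))
print("E_Q[B~_t^2] ~", np.mean(L_t * B_tilde**2), " (should be close to", t, ")")
```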
General case ♠
More generally, let F be a filtration and L a strictly positive F-martingale with L0 = 1, and define dQ|Ft = Lt dP|Ft . Then, if M is an F-martingale,
M̃t := Mt − ∫₀ᵗ (1/Ls ) d[M, L]s
is a (Q, F)-martingale. If the predictable covariation process ⟨M, L⟩ exists,
Mt − ∫₀ᵗ (1/Ls− ) d⟨M, L⟩s
is a (Q, F)-martingale.
1.1.6 Dual Predictable Projections
In this section, after recalling some basic facts about optional and predictable projections, we introduce the concept of a dual predictable projection³, which leads to the fundamental notion of predictable compensators. We recommend the survey paper of Nikeghbali [62].
Definitions
Let X be a bounded (or positive) process, and F a given filtration. The optional projection of X is the unique optional process (o)X which satisfies: for any F-stopping time τ,
E(Xτ 11{τ<∞} ) = E( (o)Xτ 11{τ<∞} ) .        (1.1.3)
For any F-stopping time τ, let Γ ∈ Fτ and apply the equality (1.1.3) to the stopping time τΓ = τ 11Γ + ∞ 11Γᶜ . We get the reinforced identity:
E(Xτ 11{τ<∞} |Fτ ) = (o)Xτ 11{τ<∞} .
In particular, if A is an increasing process, then, for s ≤ t:
E( (o)At − (o)As |Fs ) = E(At − As |Fs ) ≥ 0 .        (1.1.4)
Note that, for any t, E(Xt |Ft ) = (o)Xt . However, E(Xt |Ft ) is defined almost surely for any t; thus
uncountably many null sets are involved, hence, a priori, E(Xt |Ft ) is not a well-defined process
whereas (o)X takes care of this difficulty.
Comment 1.1.14 Let us comment on the difficulty here. If X is an integrable random variable, the quantity E(X|Ft ) is defined a.s., i.e., if Xt = E(X|Ft ) and X̃t = E(X|Ft ), then P(Xt = X̃t ) = 1. That means that, for any fixed t, there exists a negligible set Ωt such that Xt (ω) = X̃t (ω) for ω ∉ Ωt . For processes, we introduce the following definition: the process X is a modification of Y if, for any t, P(Xt = Yt ) = 1. However, one needs a stronger assumption to be able to compare functionals of the processes. The process X is indistinguishable from (or a version of) Y if {ω : Xt (ω) = Yt (ω), ∀t} is a measurable set and P(Xt = Yt , ∀t) = 1. If X and Y are modifications of each other and are a.s. continuous, they are indistinguishable.

³ See Dellacherie [18] for the notion of dual optional projection.
A difficult, but important result (see Dellacherie [17], p. 73) states: let X and Y be two optional (resp. predictable) processes. If for every finite stopping time (resp. predictable stopping time) τ, Xτ = Yτ a.s., then the processes X and Y are indistinguishable.
Likewise, the predictable projection of X is the unique predictable process (p)X such that, for any F-predictable stopping time τ,
E(Xτ 11{τ<∞} ) = E( (p)Xτ 11{τ<∞} ) .        (1.1.5)
As above, this identity reinforces as
E(Xτ 11{τ<∞} |Fτ− ) = (p)Xτ 11{τ<∞} ,
for any F-predictable stopping time τ (see Section 1.1 for the definition of Fτ− ).
Example 1.1.15 Let ϑi , i = 1, 2 be two stopping times such that ϑ1 ≤ ϑ2 and Z a bounded r.v. Let X = Z 11]]ϑ1 ,ϑ2 ]] . Then (o)X = U 11]]ϑ1 ,ϑ2 ]] and (p)X = V 11]]ϑ1 ,ϑ2 ]] , where U (resp. V) is the right-continuous (resp. left-continuous) version of the martingale (E(Z|Ft ), t ≥ 0).
Let τ and ϑ be two stopping times such that ϑ ≤ τ and X a positive process. If A is an increasing optional process, then
E(∫_ϑ^τ Xt dAt ) = E(∫_ϑ^τ (o)Xt dAt ) .
If A is an increasing predictable process, then, since 11]]ϑ,τ]] (t) is predictable,
E(∫_ϑ^τ Xt dAt ) = E(∫_ϑ^τ (p)Xt dAt ) .
The notion of interest in this section is that of dual predictable projection, which we define as follows:

Proposition 1.1.16 Let (At , t ≥ 0) be an integrable increasing process (not necessarily F-adapted). There exists a unique integrable F-predictable increasing process (At(p) , t ≥ 0), called the dual predictable projection of A, such that
E(∫₀^∞ Ys dAs ) = E(∫₀^∞ Ys dAs(p) )
for any positive F-predictable process Y.
In the particular case where At = ∫₀ᵗ as ds, one has
At(p) = ∫₀ᵗ (p)as ds .        (1.1.6)
Proof: See Dellacherie [18], Chapter V, Dellacherie and Meyer [22], Chapter 6, paragraph (73), page 148, or Protter [67], Chapter 3, Section 5. The integrability of (At(p) , t ≥ 0) results from the definition, since for Y = 1 one obtains E(A∞(p) ) = E(A∞ ). ¤
This definition extends to the difference between two integrable (for simplicity) increasing processes. The terminology “dual predictable projection” refers to the fact that it is the random measure dAt (ω) which is relevant when performing that operation. Note that the predictable projection of an increasing process is not necessarily increasing, whereas its dual predictable projection is.
If X is bounded and A has integrable variation (not necessarily adapted), then
E((X⋆A(p) )∞ ) = E(( (p)X⋆A)∞ ) .
This is equivalent to: for s < t,
E(At − As |Fs ) = E(At(p) − As(p) |Fs ) .        (1.1.7)
(p)
If A is F-adapted (not necessarily predictable), then (At − At , t ≥ 0) is an F-martingale. In that
(p)
case, At is also called the predictable compensator of A.
(p)
Example 1.1.17 If N is a Poisson process, At = λt. If X is a Lévy process and f a P
positive function with compact support which does not contain 0, the predictable compensator of s≤t f (∆Xs )
R
is thν, f i = t f (x)ν(dx)
More generally, from Proposition 1.1.16 and (1.1.7), the process
(o)
A − A(p) is a martingale.
Proposition 1.1.18 If A is increasing, the process (o)A is a sub-martingale and A(p) is the predictable increasing process in the Doob-Meyer decomposition of the sub-martingale (o)A.
In a general setting, the predictable projection of an increasing process A is a sub-martingale whereas the dual predictable projection is an increasing process. The predictable projection and the dual predictable projection of an increasing process A are equal if and only if ( (p)At , t ≥ 0) is increasing.
It will also be convenient to introduce the following terminology:

Definition 1.1.19 If τ is a random time, we call the F-predictable compensator associated with τ the F-dual predictable projection Aτ of the increasing process 11{τ≤t} . This dual predictable projection Aτ satisfies
E(Yτ ) = E(∫₀^∞ Ys dAτs )        (1.1.8)
for any positive, predictable process Y.
It follows that the process 11{τ≤t} − Aτt is an F-martingale.
Lemma 1.1.20 Assume that the random time τ avoids the F-stopping times, i.e., P(τ = ϑ) = 0 for
any F-stopping time ϑ, and that F-martingales are continuous.
Then, the F-dual predictable projection of the process Ht : = 11τ ≤t , denoted Aτ , is continuous.
Proof: Indeed, if ϑ is a jump time of Aτ , it is an F-stopping time, hence is predictable, and
E(Aτϑ − Aτϑ− ) = E(11τ =ϑ ) = 0 ;
the continuity of Aτ follows.
¤
Lemma 1.1.21 Assume that the random time τ avoids the F-stopping times (condition (A)) and let aτ be the dual optional projection of 11{τ≤t} . Then Aτ = aτ , and Aτ is continuous.
Assume that all F-martingales are continuous (condition (C)). Then aτ is continuous and consequently Aτ = aτ . Under conditions (C) and (A), Zt = P(τ > t|Ft ) is continuous.
Proof: Recall that under (C) the predictable and optional σ-fields are equal. See Dellacherie and Meyer [22] or Nikeghbali [62]. ¤
Lemma 1.1.22 Let τ be a finite random time such that its associated Azéma supermartingale Zτ is continuous. Then τ avoids F-stopping times.
Proof: See Coculescu and Nikeghbali. ¤
Proposition 1.1.23 The Doob-Meyer decomposition of the super-martingale Ztτ = P(τ > t|Ft ) is
Ztτ = E(Aτ∞ |Ft ) − Aτt = µτt − Aτt
where µτt := E(Aτ∞ |Ft ).
Proof: From the definition of the dual predictable projection, for any predictable process Y, one has
E(Yτ ) = E(∫₀^∞ Yu dAτu ) .
Let t be fixed and Ft ∈ Ft . Then the process Yu = Ft 11{t<u} , u ≥ 0, is F-predictable. Then
E(Ft 11{t<τ} ) = E(Ft (Aτ∞ − Aτt )) .
It follows that E(Aτ∞ |Ft ) = Ztτ + Aτt . ¤
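As an elementary illustration (an added example, not in the original text): if τ is an exponential time with parameter λ, independent of F, then Ztτ = P(τ > t|Ft ) = e^{−λt} and Aτt = 1 − e^{−λt} ; indeed, for any positive predictable Y, E(Yτ ) = E(∫₀^∞ Ys λe^{−λs} ds) = E(∫₀^∞ Ys dAτs ). The proposition can then be checked by hand: µτt = E(Aτ∞ |Ft ) = 1 = Ztτ + Aτt .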
Comment 1.1.24 ♠ It can be proved that the martingale
µτt := E(Aτ∞ |Ft ) = Aτt + Ztτ
is BMO. We recall that a continuous uniformly integrable martingale M belongs to the BMO space if there exists a constant m such that
E(⟨M⟩∞ − ⟨M⟩τ |Fτ ) ≤ m
for any stopping time τ. It can be proved (see, e.g., Dellacherie and Meyer [22], Chapter VII) that the space BMO is the dual of H¹, the space of martingales such that E(sup |Mt |) < ∞.
Example
We now present an example of computation of a dual predictable projection. Let (Bs , s ≥ 0) be an F-Brownian motion starting from 0 and Bs(ν) = Bs + νs. Let G(ν) be the filtration generated by the process (|Bs(ν) |, s ≥ 0) (which coincides with the one generated by (Bs(ν) )²). We now compute the decomposition of the semi-martingale (B(ν) )² in the filtration G(ν) and the dual predictable projection (with respect to G(ν) ) of the finite variation process ∫₀ᵗ Bs(ν) ds.
Itô's lemma provides us with the decomposition of the process (B(ν) )² in the filtration F:
(Bt(ν) )² = 2 ∫₀ᵗ Bs(ν) dBs + 2ν ∫₀ᵗ Bs(ν) ds + t .
To obtain the decomposition in the filtration G(ν) , we remark that
E(e^{νBs} |Fs^{|B|} ) = cosh(νBs ) (= cosh(ν|Bs |)),
which leads, thanks to Girsanov's Theorem, to the equality
E(Bs + νs|Fs^{|B|} ) = E(Bs e^{νBs} |Fs^{|B|} ) / E(e^{νBs} |Fs^{|B|} ) = Bs tanh(νBs ) = ψ(νBs )/ν ,        (1.1.9)
where ψ(x) = x tanh(x). We now come back to equality (1.1.9). Due to (1.1.6), we have just shown that:
the dual predictable projection of 2ν ∫₀ᵗ Bs(ν) ds is 2 ∫₀ᵗ ψ(νBs(ν) ) ds .        (1.1.10)
As a consequence,
(Bt(ν) )² − 2 ∫₀ᵗ ψ(νBs(ν) ) ds − t
is a G(ν) -martingale with increasing process 4 ∫₀ᵗ (Bs(ν) )² ds. Hence, there exists a G(ν) -Brownian motion β such that
(Bt + νt)² = 2 ∫₀ᵗ |Bs + νs| dβs + 2 ∫₀ᵗ ψ(ν(Bs + νs)) ds + t .        (1.1.11)
Exercise 1.1.25 Let M be a càdlàg martingale. Prove that its predictable projection is (Mt− , t ≥ 0). C

Exercise 1.1.26 Let X be a measurable process such that E(∫₀ᵗ |Xs |ds) < ∞ and let Yt = ∫₀ᵗ Xs ds. Prove that (o)Yt − ∫₀ᵗ (o)Xs ds is an F-martingale (RW). C
Exercise 1.1.27 Prove that if X is bounded and Y is predictable, then (p)(Y X) = Y (p)X (RW, p. 349). C
Exercise 1.1.28 Prove that, more generally than (1.1.10), the dual predictable projection of ∫₀ᵗ f(Bs(ν) ) ds is ∫₀ᵗ E(f(Bs(ν) )|Gs(ν) ) ds and that
E(f(Bs(ν) )|Gs(ν) ) = ( f(Bs(ν) ) e^{νBs(ν)} + f(−Bs(ν) ) e^{−νBs(ν)} ) / (2 cosh(νBs(ν) )) . C
Exercise 1.1.29 Prove that, if (αs , s ≥ 0) is an increasing process and X a positive measurable process, then
(∫₀· Xs dαs )(p) = ∫₀· (p)Xs dαs .
In particular,
(∫₀· Xs ds)(p) = ∫₀· (p)Xs ds . C
1.1.7 Itô-Kunita-Wentzell formula
We recall here the Itô-Kunita-Wentzell formula (see Kunita [55]). Let Ft (x) be a family of stochastic processes, continuous in (t, x) ∈ (R+ × Rᵈ) a.s., and satisfying the following conditions:
(i) for each t > 0, x → Ft (x) is C² from Rᵈ to R,
(ii) for each x, (Ft (x), t ≥ 0) is a continuous semimartingale
dFt (x) = Σ_{j=1}^n ftj (x) dMtj ,
where M j are continuous semimartingales, and f j (x) are stochastic processes continuous in (t, x),
such that for every s > 0, the map x → fsj (x) is C 1 , and for every x, f j (x) is an adapted process.
Let X = (X¹, · · · , Xᵈ) be a continuous semimartingale. Then
Ft (Xt ) = F0 (X0 ) + Σ_{j=1}^n ∫₀ᵗ fsj (Xs ) dMsj + Σ_{i=1}^d ∫₀ᵗ (∂Fs /∂xi )(Xs ) dXsi
        + Σ_{i=1}^d Σ_{j=1}^n ∫₀ᵗ (∂fsj /∂xi )(Xs ) d⟨M j , X i ⟩s + ½ Σ_{i,k=1}^d ∫₀ᵗ (∂²Fs /(∂xi ∂xk ))(Xs ) d⟨X k , X i ⟩s .

1.2 Some Important Exercises
Exercise 1.2.1 Let B be a Brownian motion, F its natural filtration and Mt = sup_{s≤t} Bs . Prove that, for t < 1,
E(f(M1 )|Ft ) = F(1 − t, Bt , Mt )
with
F(s, a, b) = √(2/(πs)) ( f(b) ∫₀^{b−a} e^{−u²/(2s)} du + ∫_b^∞ f(u) exp(−(u − a)²/(2s)) du ) .
Hint: Note that
sup_{s≤1} Bs = sup_{s≤t} Bs ∨ sup_{t≤s≤1} Bs = sup_{s≤t} Bs ∨ (M̂_{1−t} + Bt )
where M̂s = sup_{u≤s} B̂u for B̂u = B_{u+t} − Bt . C
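The identity of Exercise 1.2.1 can be tested numerically (an added sketch, not in the original notes). Taking f(x) = x and using the tower property, E(M1 ) = E(F(1 − t, Bt , Mt )), where E(M1 ) = E|B1 | = √(2/π):

```python
import numpy as np
from scipy.integrate import quad

# Monte Carlo + quadrature check of Exercise 1.2.1 with f(x) = x:
# by the tower property, E[f(M_1)] should equal E[F(1-t, B_t, M_t)].
rng = np.random.default_rng(3)
f = lambda x: x

def F(s, a, b):
    """F(s, a, b) of Exercise 1.2.1, evaluated by numerical quadrature."""
    i1 = f(b) * quad(lambda u: np.exp(-u**2 / (2 * s)), 0, b - a)[0]
    i2 = quad(lambda u: f(u) * np.exp(-(u - a)**2 / (2 * s)), b, np.inf)[0]
    return np.sqrt(2 / (np.pi * s)) * (i1 + i2)

t, n_steps, n_paths = 0.5, 500, 2_000
dt = t / n_steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
B_t, M_t = B[:, -1], np.maximum(B.max(axis=1), 0.0)   # M_t includes B_0 = 0

lhs = np.sqrt(2 / np.pi)          # E[M_1] = E|B_1| for f(x) = x
rhs = np.mean([F(1 - t, a, b) for a, b in zip(B_t, M_t)])
print(lhs, rhs)                   # close, up to Monte Carlo and grid bias
```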
Exercise 1.2.2 Let ϕ be a C¹ function, B a Brownian motion and Mt = sup_{s≤t} Bs . Prove that the process
ϕ(Mt ) − (Mt − Bt )ϕ′(Mt )
is a local martingale.
Hint: Apply Doob's Theorem to the martingale h(Mt )(Mt − Bt ) + ∫_{Mt}^∞ h(u) du. C
Exercise 1.2.3 A Useful Lemma: Doob's Maximal Identity.
Let M be a positive continuous martingale such that M0 = x.
(i) Prove that if lim_{t→∞} Mt = 0, then
P(sup Mt > a) = (x/a) ∧ 1        (1.2.1)
and sup Mt =law x/U, where U is a random variable with a uniform law on [0, 1].
(ii) Conversely, if sup Mt =law x/U, show that M∞ = 0.
Hint: Apply Doob's optional sampling theorem to Ta ∧ t and prove, passing to the limit when t goes to infinity, that
x = E(MTa ) = aP(Ta < ∞) = aP(sup Mt ≥ a) . C
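A Monte Carlo sketch (added here) of (1.2.1), using the positive continuous martingale Mt = x exp(Bt − t/2), which tends to 0 a.s.:

```python
import numpy as np

# Check P(sup M > a) ~ x/a for M_t = x*exp(B_t - t/2), simulated on a grid.
rng = np.random.default_rng(4)
x, a, T, n_steps, n_paths = 1.0, 3.0, 50.0, 2_000, 50_000
dt = T / n_steps

log_m = np.full(n_paths, np.log(x))     # log M_0
run_max = log_m.copy()                  # running maximum of log M
for _ in range(n_steps):
    log_m += rng.normal(-0.5 * dt, np.sqrt(dt), size=n_paths)
    np.maximum(run_max, log_m, out=run_max)

# Should be close to (x/a) ^ 1 = 1/3, up to time-truncation and grid bias.
print(np.mean(np.exp(run_max) > a), x / a)
```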
Exercise 1.2.4 Let F ⊂ G and dQ|Gt = Lt dP|Gt , where L is continuous. Prove that dQ|Ft = ℓt dP|Ft where ℓt = E(Lt |Ft ). Prove that any (G, Q)-martingale can be written as MtP − ∫₀ᵗ d⟨M P , L⟩s /Ls where M P is a (G, P)-martingale. C
Exercise 1.2.5 Prove that, for any (bounded) process a (not necessarily adapted),
Mt := E(∫₀ᵗ au du|Ft ) − ∫₀ᵗ E(au |Fu )du
is an F-martingale. Extend the result to the case ∫₀· Xs dαs where (αs , s ≥ 0) is an increasing predictable process and X a positive measurable process. C
Hint: Compute E(Mt − Ms |Fs ) = E(∫ₛᵗ au du − ∫ₛᵗ E(au |Fu )du|Fs ).
Chapter 2
Enlargement of Filtrations
From the end of the seventies, Barlow, Jeulin and Yor started a systematic study of the problem
of enlargement of filtrations: namely which F-martingales M remain G-semi-martingales and if it is
the case, what is the semi-martingale decomposition of M in G?
There are mainly two kinds of enlargement of filtration:
• Initial enlargement of filtrations: in that case, Gt = Ft ∨ σ(L) where L is a r.v. (or, more generally, Gt = Ft ∨ F̃ where F̃ is a σ-algebra).
• Progressive enlargement of filtrations, where Gt = Ft ∨ Ht with H the natural filtration of Ht = 11{τ≤t} , where τ is a random time (or, more generally, Gt = Ft ∨ F̃t where F̃ is another filtration).
Up to now, three lecture notes volumes have been dedicated to this question: Jeulin [46], Jeulin
and Yor [50] and Mansuy and Yor [60]. See also related chapters in the books of Protter [67] and
Dellacherie, Maisonneuve and Meyer [19] and Yor [80]. Some important papers are Brémaud and
Yor [13], Barlow [7], Jacod [41, 40] and Jeulin and Yor [47], and the survey papers of Nikeghbali [62].
See also the theses of Amendinger [1], Ankirchner [4], and Song [72] and the papers of Ankirchner
et al. [5] and Yoeurp [78].
Very few studies exist in the case Gt = Ft ∨ F̃t . One exception is F̃t = σ(Jt ) where Jt = inf_{s≥t} Xs and X is a three-dimensional Bessel process.
These results are extensively used in finance to study two specific problems occurring in insider trading: the existence of arbitrages using strategies adapted to the large filtration, and the change of price dynamics when an F-martingale is no longer a G-martingale. They are also a cornerstone for the study of default risk.
An incomplete list of authors concerned with enlargement of filtration in finance for insider
trading is: Ankirchner [5, 4], Amendinger [2], Amendinger et al. [3], Baudoin [8], Corcuera et al.
[15], Eyraud-Loisel [30], Florens and Fougère [31], Gasbarra et al. [33], Grorud and Pontier [34],
Hillairet [36], Imkeller [37], Karatzas and Pikovsky [51], Wu [77] and Kohatsu-Higa and Øksendal
[54].
Di Nunno et al. [24], Imkeller [38], Imkeller et al. [39], Kohatsu-Higa [52, 53] have introduced
Malliavin calculus to study the insider trading problem. We shall not discuss this approach here.
Enlargement theory is also used to study asymmetric information, see, e.g., Föllmer et al. [32]
and progressive enlargement is an important tool for the study of default in the reduced form
approach by Bielecki et al. [10, 11, 12], Elliott et al. [28] and Kusuoka [56]. We start with the case where F-martingales remain G-martingales; in that case, there is a complete characterization of when this property holds. Then, we study a particular example: Brownian and Poisson bridges. In
the following step, we study initial enlargement, where Gt = Ft ∨ σ(L) for a random variable L.
The goal is to give conditions such that F-martingales remain G-semi-martingale and, in that case,
to give the G-semi-martingale decomposition of the F-martingales. Then, we answer the same questions for progressive enlargements where Gt = Ft ∨ σ(τ ∧ t) for a non-negative random variable τ.
Only sufficient conditions are known. The study of a particular and important case of last passage
times is presented.
2.1 Immersion of Filtrations
Let F and G be two filtrations such that F ⊂ G. Our aim is to study some conditions which ensure
that F-martingales are G-semi-martingales, and one can ask in a first step whether all F-martingales
are G-martingales. This last property is equivalent to E(ζ|Ft ) = E(ζ|Gt ), for any t and ζ ∈ L1 (F∞ ).
Let us study a simple example where G = F∨σ(ζ) where ζ ∈ L1 (F∞ ) and ζ is not F0 -measurable.
Obviously E(ζ|Gt ) = ζ is a G-martingale and E(ζ|Ft ) is an F-martingale. However E(ζ|G0 ) 6=
E(ζ|F0 ), and some F-martingales are not G-martingales (as, for example E(ζ|Ft )).
2.1.1 Definition
The filtration F is said to be immersed in G if any square integrable F-martingale is a G-martingale
(Tsirel’son [75], Émery [29]). This is also referred to as the (H) hypothesis by Brémaud and Yor
[13] which was defined as:
(H) Every F-square integrable martingale is a G-square integrable martingale.
Proposition 2.1.1 Hypothesis (H) is equivalent to any of the following properties:
(H1) ∀ t ≥ 0, the σ-fields F∞ and Gt are conditionally independent given Ft , i.e., ∀ t ≥ 0, ∀ Gt ∈
L2 (Gt ), ∀ F ∈ L2 (F∞ ), E(Gt F |Ft ) = E(Gt |Ft )E(F |Ft ).
(H2) ∀ t ≥ 0, ∀ Gt ∈ L1 (Gt ), E(Gt |F∞ ) = E(Gt |Ft ).
(H3) ∀ t ≥ 0, ∀ F ∈ L1 (F∞ ), E(F |Gt ) = E(F |Ft ).
In particular, (H) holds if and only if every F-local martingale is a G-local martingale.
Proof:
• (H) ⇒ (H1). Let F ∈ L²(F∞ ) and assume that hypothesis (H) is satisfied. This implies that the martingale Ft = E(F |Ft ) is a G-martingale such that F∞ = F, hence Ft = E(F |Gt ). It follows that for any t and any Gt ∈ L²(Gt ):
E(F Gt |Ft ) = E(Gt E(F |Gt )|Ft ) = E(Gt E(F |Ft )|Ft ) = E(Gt |Ft )E(F |Ft )
which is exactly (H1).
• (H1) ⇒ (H2). Let F ∈ L²(F∞ ) and Gt ∈ L²(Gt ). Under (H1),
E(F E(Gt |Ft )) = E(E(F |Ft )E(Gt |Ft )) = E(E(F Gt |Ft )) = E(F Gt )
(the second equality uses (H1)), which is (H2).
• (H2) ⇒ (H3). Let F ∈ L²(F∞ ) and Gt ∈ L²(Gt ). If (H2) holds, then
E(Gt E(F |Ft )) = E(F E(Gt |Ft )) = E(F Gt )
(the first equality uses (H2)), which implies (H3). The general case follows by approximation.
• Obviously (H3) implies (H). ¤
In particular, under (H), if W is an F-Brownian motion, then it is a G-martingale with bracket t, since such a bracket does not depend on the filtration. Hence, it is a G-Brownian motion. It is important to note that ∫₀ᵗ ψs dWs is a local martingale for a G-adapted process ψ satisfying the usual integrability conditions.
A trivial (but useful) example for which (H) is satisfied is G = F ∨ F̃ where F and F̃ are two filtrations such that F̃∞ is independent of F∞ .
Exercise 2.1.2 Assume that (H) holds, and that W is an F-Brownian motion. Prove that W is a
G-Brownian motion without using the bracket.
Hint: either use the fact that W and (Wt² − t, t ≥ 0) are F-, hence G-martingales, or the fact that, for any λ, exp(λWt − ½λ²t) is an F-, hence a G-martingale. C
Exercise 2.1.3 Prove that, if F is immersed in G, then, for any t, Ft = Gt ∩ F∞ .
Show that, if τ ∈ F∞ , immersion holds if and only if τ is an F-stopping time. Hint: if A ∈ Gt ∩ F∞ ,
then 11A = E(11A |F∞ ) = E(11A |Ft )
C
2.1.2 Change of probability
Of course, the notion of immersion depends strongly on the probability measure, and in particular,
is not stable by change of probability. See Subsection 2.4.6 for a counter example. We now study in
which setup the immersion property is preserved under change of probability.
Proposition 2.1.4 Assume that the filtration F is immersed in G under P, and let Q be equivalent
to P, with Q|Gt = Lt P|Gt where L is assumed to be F-adapted. Then, F is immersed in G under Q.
Proof: Let N be an (F, Q)-martingale; then (Nt Lt , t ≥ 0) is an (F, P)-martingale, and since F is immersed in G under P, (Nt Lt , t ≥ 0) is a (G, P)-martingale, which implies that N is a (G, Q)-martingale. ¤
Note that, if one defines a change of probability on F with an F-martingale L, one cannot extend this change of probability to G by setting Q|Gt = Lt P|Gt , since, in general, L fails to be a G-martingale.
In the next proposition, we do not assume that the Radon-Nikodým density is F-adapted.
Proposition 2.1.5 Assume that F is immersed in G under P, and let Q be equivalent to P with Q|Gt = Lt P|Gt where L is a G-martingale, and define ℓt = E(Lt |Ft ). Assume that all F-martingales are continuous and that L is continuous. Then, F is immersed in G under Q if and only if the (G, P)-local martingale
∫₀ᵗ dLs /Ls − ∫₀ᵗ dℓs /ℓs =: L(L)t − L(ℓ)t
is orthogonal to the set of all (F, P)-local martingales.
Proof: We prove that any (F, Q)-martingale is a (G, Q)-martingale. Every (F, Q)-martingale M^Q may be written as
Mt^Q = Mt^P − ∫₀ᵗ d⟨M^P , ℓ⟩s /ℓs
where M^P is an (F, P)-martingale. By hypothesis, M^P is a (G, P)-martingale and, from Girsanov's theorem, Mt^P = Nt^Q + ∫₀ᵗ d⟨M^P , L⟩s /Ls where N^Q is a (G, Q)-martingale. It follows that
Mt^Q = Nt^Q + ∫₀ᵗ d⟨M^P , L⟩s /Ls − ∫₀ᵗ d⟨M^P , ℓ⟩s /ℓs = Nt^Q + ∫₀ᵗ d⟨M^P , L(L) − L(ℓ)⟩s .
Thus M^Q is a (G, Q)-martingale if and only if ⟨M^P , L(L) − L(ℓ)⟩ = 0. Here L denotes the stochastic logarithm, the inverse of the stochastic exponential. ¤
Proposition 2.1.6 Let P be a probability measure, and
Q|Gt = Lt P|Gt ; Q|Ft = ℓt P|Ft .
Then, hypothesis (H) holds under Q if and only if
E(XLT |Gt )/Lt = E(XℓT |Ft )/ℓt , ∀T, ∀X ≥ 0, X ∈ FT , ∀t < T .        (2.1.1)
Proof: Note that, for X ∈ FT ,
EQ (X|Gt ) = (1/Lt ) EP (XLT |Gt ), EQ (X|Ft ) = (1/ℓt ) EP (XℓT |Ft ),
and that, from the monotone class theorem, (H) holds under Q if and only if, ∀T, ∀X ∈ FT , ∀t ≤ T, one has EQ (X|Gt ) = EQ (X|Ft ). ¤
Comment 2.1.7 The (H) hypothesis was studied by Brémaud and Yor [13] and Mazziotto and
Szpirglas [61], and in a financial setting by Kusuoka [56], Elliott et al. [28] and Jeanblanc and
Rutkowski [42, 43].
Exercise 2.1.8 Prove that, if F is immersed in G under P and if Q is a probability equivalent to P, then any (Q, F)-semi-martingale is a (Q, G)-semi-martingale. Let
Q|Gt = Lt P|Gt ; Q|Ft = ℓt P|Ft ,
and let X be a (Q, F)-martingale. Assuming that F is a Brownian filtration and that L is continuous, prove that
Xt + ∫₀ᵗ ( (1/ℓs ) d⟨X, ℓ⟩s − (1/Ls ) d⟨X, L⟩s )
is a (G, Q)-martingale.
In the general case, prove that
Xt + ∫₀ᵗ (Ls− /Ls ) ( (1/ℓs− ) d[X, ℓ]s − (1/Ls− ) d[X, L]s )
is a (G, Q)-martingale. See Jeulin and Yor [48]. C
Exercise 2.1.9 Assume that Ft(L) = Ft ∨ σ(L) where L is a random variable. Find under which conditions on L the immersion property holds. C

Exercise 2.1.10 Construct an example where some F-martingales are G-martingales, but not all F-martingales are G-martingales. C

Exercise 2.1.11 Assume that F ⊂ G̃ where (H) holds for F and G̃. Let τ be a G̃-stopping time. Prove that (H) holds for F and Fτ = F ∨ H where Ht = σ(τ ∧ t). Let G be such that F ⊂ G ⊂ G̃. Prove that F is immersed in G. C
2.2 Bridges

2.2.1 The Brownian Bridge
Rather than studying ab initio the general problem of initial enlargement, we discuss an interesting example. Let us start with a BM (Bt , t ≥ 0) and its natural filtration F^B. Define a new filtration as Ft^(B1) = Ft^B ∨ σ(B1 ). In this filtration, the process (Bt , t ≥ 0) is no longer a martingale. It is easy to be convinced of this by looking at the process (E(B1 |Ft^(B1) ), t ≤ 1): this process is identically equal to B1 , not to Bt , hence (Bt , t ≥ 0) is not an F^(B1)-martingale. However, (Bt , t ≥ 0) is an F^(B1)-semi-martingale, as follows from Proposition 2.2.2 below.
Before giving this proposition, we recall some facts on the Brownian bridge.
The Brownian bridge (bt , 0 ≤ t ≤ 1) is defined as the conditioned process (Bt , t ≤ 1|B1 = 0). Note that Bt = (Bt − tB1 ) + tB1 where, from the Gaussian property, the process (Bt − tB1 , t ≤ 1) and the random variable B1 are independent. Hence (bt , 0 ≤ t ≤ 1) =law (Bt − tB1 , 0 ≤ t ≤ 1). The Brownian bridge process is a Gaussian process, with zero mean and covariance function s(1 − t), s ≤ t. Moreover, it satisfies b0 = b1 = 0.
We can represent the Brownian bridge between 0 and y during the time interval [0, 1] as
(Bt − tB1 + ty; t ≤ 1) .
More generally, the Brownian bridge between x and y during the time interval [0, T] may be expressed as
(x + Bt − (t/T)BT + (t/T)(y − x); t ≤ T) ,
where (Bt ; t ≤ T) is a standard BM starting from 0.
Exercise 2.2.1 Prove that ∫₀^{t∧1} (B1 − Bs )/(1 − s) ds is convergent.
Prove that, for 0 ≤ s < t ≤ 1, E(Bt − Bs |B1 − Bs ) = ((t − s)/(1 − s))(B1 − Bs ). C
Decomposition of the BM in the enlarged filtration F^(B1)

Proposition 2.2.2 Let Ft^(B1) = ∩_{ε>0} F_{t+ε} ∨ σ(B1 ). The process
βt = Bt − ∫₀^{t∧1} (B1 − Bs )/(1 − s) ds
is an F^(B1)-martingale, and an F^(B1)-Brownian motion. In other words,
Bt = βt + ∫₀^{t∧1} (B1 − Bs )/(1 − s) ds
is the decomposition of B as an F^(B1)-semi-martingale.
Proof: Note that the definition of F^(B1) is chosen to satisfy the right-continuity assumption. We shall write Ft^(B1) = Ft ∨ σ(B1 ) = Ft ∨ σ(B1 − Bt ). Then, since Fs is independent of (B_{s+h} − Bs , h ≥ 0), one has
E(Bt − Bs |Fs^(B1) ) = E(Bt − Bs |B1 − Bs ) .
For t < 1,
E(∫ₛᵗ (B1 − Bu )/(1 − u) du |Fs^(B1) ) = ∫ₛᵗ (1/(1 − u)) E(B1 − Bu |B1 − Bs ) du
= ∫ₛᵗ (1/(1 − u)) (B1 − Bs − E(Bu − Bs |B1 − Bs )) du
= ∫ₛᵗ (1/(1 − u)) (B1 − Bs − ((u − s)/(1 − s))(B1 − Bs )) du
= (B1 − Bs ) ∫ₛᵗ (1/(1 − s)) du = ((t − s)/(1 − s))(B1 − Bs ) .
It follows that
E(βt − βs |Fs^(B1) ) = 0,
hence β is an F^(B1)-martingale (and an F^(B1)-Brownian motion) (WHY?). ¤
It follows that if M is an F-local martingale such that ∫₀¹ (1/√(1 − s)) d|⟨M, B⟩|s is finite, then
M̂t = Mt − ∫₀^{t∧1} ((B1 − Bs )/(1 − s)) d⟨M, B⟩s
is an F^(B1)-local martingale.
Comment 2.2.3 The singularity of (B1 − Bt )/(1 − t) at t = 1, i.e., the fact that (B1 − Bt )/(1 − t) is not square integrable between 0 and 1, prevents a Girsanov measure change transforming the (P, F^(B1))-semi-martingale B into a (Q, F^(B1))-martingale.
We obtain that the standard Brownian bridge b is a solution of the following stochastic equation (take care about the change of notation):
dbt = −(bt /(1 − t)) dt + dWt ; 0 ≤ t < 1, b0 = 0 .
The solution of the above equation is bt = (1 − t) ∫₀ᵗ dWs /(1 − s), which is a Gaussian process with zero mean and covariance s(1 − t), s ≤ t.
Exercise 2.2.4 Using the notation of Proposition 2.2.2, prove that B1 and β are independent. Check that the projection of β on F^B is equal to B.
Hint: The F^(B1)-BM β is independent of F0^(B1) . C
Exercise 2.2.5 Consider the SDE
dXt = −(Xt /(1 − t)) dt + dWt ; 0 ≤ t < 1, X0 = 0.
1. Prove that
Xt = (1 − t) ∫₀ᵗ dWs /(1 − s) ; 0 ≤ t < 1.
2. Prove that (Xt , t ≥ 0) is a Gaussian process. Compute its expectation and its covariance.
3. Prove that lim_{t→1} Xt = 0. C
Exercise 2.2.6 (See Jeulin and Yor [49]) Let Xt = ∫₀ᵗ ϕs dBs where ϕ is predictable and such that ∫₀ᵗ ϕs² ds < ∞. Prove that the following assertions are equivalent:
1. X is an F^(B1)-semimartingale with decomposition
Xt = ∫₀ᵗ ϕs dβs + ∫₀^{t∧1} ϕs (B1 − Bs )/(1 − s) ds
2. ∫₀¹ |ϕs | |B1 − Bs |/(1 − s) ds < ∞
3. ∫₀¹ |ϕs |/√(1 − s) ds < ∞ . C
2.2.2 Poisson Bridge
Let N be a Poisson process with constant intensity λ, FtN = σ(Ns , s ≤ t) its natural filtration and T > 0 a fixed time. The process Mt = Nt − λt is a martingale. Let Gt∗ = σ(Ns , s ≤ t; NT ) be the natural filtration of N enlarged with the terminal value NT of the process N.

Proposition 2.2.7 Assume that λ = 1. The process
ηt = Mt − ∫₀^{t∧T} (MT − Ms )/(T − s) ds
is a G∗-martingale with predictable bracket, for t ≤ T,
Λt = ∫₀ᵗ (NT − Ns )/(T − s) ds .
0
Proof: For 0 < s < t < T ,
E(Nt − Ns |Gs∗ ) = E(Nt − Ns |NT − Ns ) =
t−s
(NT − Ns )
T −s
where the last equality follows from the fact that, if X and Y are independent with Poisson laws
with parameters µ and ν respectively, then
P(X = k|X + Y = n) =
where α =
µ
. Hence,
µ+ν
µZ t
¶
NT − Nu ∗
E
du
|Gs
=
T −u
s
Z
t
s
Z
t
=
s
Z
=
s
t
n!
αk (1 − α)n−k
k!(n − k)!
du
(NT − Ns − E(Nu − Ns |Gs∗ ))
T −u
µ
¶
u−s
du
NT − Ns −
(NT − Ns )
T −u
T −s
t−s
du
(NT − Ns ) =
(NT − Ns ) .
T −s
T −s
Therefore,
Z
E(Nt − Ns −
s
t
NT − Nu
t−s
t−s
du|Gs∗ ) =
(NT − Ns ) −
(NT − Ns ) = 0
T −u
T −s
T −s
22
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
and the result follows.
¤
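The key conditional-law fact used in the proof can be checked by simulation (an added sketch with ad hoc parameters): conditionally on NT − Ns = n, the increment Nt − Ns is binomial with parameters n and (t − s)/(T − s).

```python
import numpy as np

# Check E(N_t - N_s | N_T - N_s = m) = (t-s)/(T-s) * m for a Poisson process.
rng = np.random.default_rng(6)
lam, s, t, T, n = 1.0, 1.0, 2.0, 5.0, 1_000_000

X = rng.poisson(lam * (t - s), size=n)   # N_t - N_s
Y = rng.poisson(lam * (T - t), size=n)   # N_T - N_t, independent of X
Z = X + Y                                # N_T - N_s

for m in range(5):
    sel = Z == m
    if sel.any():
        print(m, X[sel].mean(), (t - s) / (T - s) * m)  # columns 2 and 3 agree
```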
Comment 2.2.8 Poisson bridges are studied in Jeulin and Yor [49]. This kind of enlargement of
filtration is used for modelling insider trading in Elliott and Jeanblanc [27], Grorud and Pontier [34]
and Kohatsu-Higa and Øksendal [54].
Exercise 2.2.9 Prove that, for any enlargement of filtration, the compensated martingale M remains a semi-martingale.
Hint: M has bounded variation. C
Exercise 2.2.10 Prove that any FN -martingale is a G∗ -semimartingale.
C
Exercise 2.2.11 Prove that
ηt = Nt − ∫₀^{t∧T} (NT − Ns )/(T − s) ds − (t − T)⁺
and that
⟨η⟩t = ∫₀^{t∧T} (NT − Ns )/(T − s) ds + (t − T)⁺ .
Therefore, (ηt , t ≥ 0) is a compensated G∗-Poisson process, time-changed by ∫₀ᵗ (NT − Ns )/(T − s) ds, i.e., ηt = M̃(∫₀ᵗ (NT − Ns )/(T − s) ds), where (M̃(t), t ≥ 0) is a compensated Poisson process. C
Exercise 2.2.12 A process X fulfills the harness property if
E( (Xt − Xs )/(t − s) | F_{s₀], [T} ) = (XT − X_{s₀} )/(T − s₀)
for s₀ ≤ s < t ≤ T, where F_{s₀], [T} = σ(Xu , u ≤ s₀ , u ≥ T). Prove that a process with the harness property satisfies
E(Xt | F_{s], [T} ) = ((T − t)/(T − s)) Xs + ((t − s)/(T − s)) XT ,
and conversely. Prove that, if X satisfies the harness property, then, for any fixed T,
MtT = Xt − ∫₀ᵗ (XT − Xu )/(T − u) du , t < T,
is an F_{t], [T}-martingale, and conversely. See [3M] for more comments. C

2.2.3 Insider trading
Brownian Bridge
We consider a first example. Let
dSt = St (µ dt + σ dBt ),
where µ and σ are constants, be the price of a risky asset. Assume that the riskless asset has a constant interest rate r.
The wealth of an agent is
dXt = rXt dt + π̂t (dSt − rSt dt) = rXt dt + πt σXt (dBt + θ dt), X0 = x,
where θ = (µ − r)/σ and πt = π̂t St /Xt is assumed to be an F^B-adapted process. Here π̂ is the number of shares of the risky asset, and π the proportion of wealth invested in the risky asset. It follows that
ln(XTπ,x ) = ln x + ∫₀ᵀ (r − ½ πs² σ² + θπs σ) ds + ∫₀ᵀ σπs dBs .
Then,
E(ln(XTπ,x )) = ln x + ∫₀ᵀ E(r − ½ πs² σ² + θπs σ) ds .
The solution of max_π E(ln(XTπ,x )) is πs = θ/σ, and
sup_π E(ln(XTπ,x )) = ln x + T (r + ½ θ²) .
Note that, if the coefficients r, µ and σ are F-adapted, the same computation leads to
sup_π E(ln(XTπ,x )) = ln x + ∫₀ᵀ E(rt + ½ θt² ) dt
where θt = (µt − rt )/σt .
We now enlarge the filtration with S1 (or equivalently, with B1 (WHY?)). In the enlarged filtration, setting, for t < 1, αt = (B1 − Bt )/(1 − t), the dynamics of S are
dSt = St ((µ + σαt )dt + σ dβt ) ,
and the dynamics of the wealth are
dXt = rXt dt + πt σXt (dβt + θ̃t dt), X0 = x,
with θ̃t = (µ − r)/σ + αt . The solution of max E(ln(XTπ,x )) is πs = θ̃s /σ.
Then, for T < 1,
ln(XTπ,x,∗ ) = ln x + ∫₀ᵀ (r + ½ θ̃s² ) ds + ∫₀ᵀ σπs dβs ,
E(ln(XTπ,x,∗ )) = ln x + ∫₀ᵀ (r + ½(θ² + E(αs² ) + 2θE(αs ))) ds = ln x + (r + ½θ²)T + ½ ∫₀ᵀ E(αs² ) ds,
where we have used the fact that E(αt ) = 0 (if the coefficients r, µ and σ are F-adapted, αt is orthogonal to Ft , hence E(αt θt ) = 0). Let
V^F (x) = max{E(ln(XTπ,x )) ; π is F-admissible},
V^G (x) = max{E(ln(XTπ,x )) ; π is G-admissible}.
Then V^G (x) = V^F (x) + ½ E(∫₀ᵀ αs² ds) = V^F (x) − ½ ln(1 − T).
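The insider's gain can be checked numerically (an added sketch, not in the original notes): since E(αs² ) = 1/(1 − s), one has ½ E(∫₀ᵀ αs² ds) = −½ ln(1 − T).

```python
import numpy as np

# Monte Carlo check of (1/2) E int_0^T alpha_s^2 ds = -(1/2) ln(1-T),
# where alpha_t = (B_1 - B_t)/(1 - t).
rng = np.random.default_rng(7)
T, n_steps, n_paths = 0.8, 400, 20_000
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B1 = B[:, -1] + rng.normal(0.0, np.sqrt(1.0 - T), size=n_paths)

s = np.arange(n_steps) * dt                          # left grid points
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])
alpha = (B1[:, None] - B_left) / (1.0 - s)

print(0.5 * (alpha**2).sum(axis=1).mean() * dt, -0.5 * np.log(1.0 - T))
```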
If T = 1, the value function is infinite: there is an arbitrage opportunity, and there does not exist an e.m.m. such that the discounted price process (e^{−rt} St , t ≤ 1) is a G-martingale. However, for any ε ∈ ]0, 1], there exists a uniformly integrable G-martingale L defined as
dLt = −((µ − r + σαt )/σ) Lt dβt , t ≤ 1 − ε, L0 = 1 ,
such that, setting dQ|Gt = Lt dP|Gt , the process (e^{−rt} St , t ≤ 1 − ε) is a (Q, G)-martingale.
This is the main point in the theory of insider trading, where the knowledge of the terminal value of the underlying asset creates an arbitrage opportunity, which is effective at time 1.
It is important to mention that, in both cases, the wealth of the investor is Xt e^{−rt} = x + ∫₀ᵗ π̂s d(Ss e^{−rs} ). The insider has a larger class of portfolios, and in order to give a meaning to the stochastic integral for processes π which are not adapted with respect to the semi-martingale S, one has to give the decomposition of this semi-martingale in the larger filtration.
Exercise 2.2.13 Prove carefully that there does not exist any e.m.m. in the enlarged filtration. Make precise the arbitrage opportunity. C
Poisson Bridge
We suppose that the interest rate is null and that the risky asset has dynamics
dSt = St− (µdt + σdWt + φdMt )
where M is the compensated martingale of a standard Poisson process. Let (Xt , t ≥ 0) be the wealth of an un-informed agent whose portfolio is described by (πt ), the proportion of wealth invested in the asset S at time t. Then
dXt = πt Xt− (µdt + σdWt + φdMt ) .        (2.2.1)
Then,
Xt = x exp( ∫₀ᵗ πs (µ − φλ)ds + ∫₀ᵗ σπs dWs − ½ ∫₀ᵗ σ²πs² ds + ∫₀ᵗ ln(1 + πs φ)dNs ) .
Assuming that the stochastic integrals with respect to W and M are martingales,
E[ln(XT )] = ln(x) + ∫₀ᵀ E(µπs − ½ σ²πs² + λ(ln(1 + φπs ) − φπs )) ds .
Our aim is to solve
V (x) = sup_π E(ln(XTx,π )) .
We can then maximize the quantity under the integral sign for each s and ω. The maximum attainable wealth for the uninformed agent is obtained using the constant strategy π̃ for which
π̃µ + λ[ln(1 + π̃φ) − π̃φ] − ½ π̃²σ² = sup_π [πµ + λ[ln(1 + πφ) − πφ] − ½ π²σ²] .
Hence
π̃ = (1/(2σ²φ)) ( µφ − φ²λ − σ² ± √((µφ − φ²λ − σ²)² + 4σ²φµ) ) .
The quantity under the square root is equal to (µφ − φ²λ + σ²)² + 4σ²φ²λ and is non-negative. The sign to be used depends on the sign of quantities related to the parameters. The optimal π̃ is the only root such that 1 + φπ̃ > 0. Solving the equation (2.2.1), it can be proved that the optimal wealth is X̃t = x(L̃t )⁻¹ where
dL̃t = L̃t− ( −σπ̃ dWt + (1/(1 + φπ̃) − 1) dMt )
is the Radon-Nikodym density of an equivalent martingale measure. In this incomplete market, we thus obtain the utility equivalent martingale measure defined by Davis [16] and the duality approach (see Kramkov and Schachermayer).
We assume that the informed agent knows NT from time 0. Therefore, his wealth evolves according to the dynamics
dXt∗ = πt Xt−∗ [(µ + φ(Λt − λ))dt + σdWt + φdMt∗ ]
where Λ is given in Proposition 2.2.7. Exactly the same computations as above can be carried out. In fact these only require changing µ to (µ + φ(Λt − λ)) and the intensity of the jumps from λ to Λt . The optimal portfolio π∗ is now such that µ − λφ + φΛs /(1 + π∗φ) − π∗σ² = 0 and is given by
πs∗ = (1/(2σ²φ)) ( µφ − φ²λ − σ² ± √((µφ − φ²λ + σ²)² + 4σ²φ²Λs ) ) .
The optimal wealth is Xt∗ = x(Lt∗ )⁻¹ where
dLt∗ = Lt−∗ ( −σπs∗ dWt + (1/(1 + φπs∗ ) − 1) dMt∗ ) .
Whereas the optimal portfolio of the uninformed agent is constant, the optimal portfolio of the informed agent is time-varying and jumps as soon as a jump occurs in the prices.
The informed agent must maximize at each (s, ω) the quantity
πµ + Λs (ω) ln(1 + πφ) − λπφ − ½ π²σ² .
Consequently,
sup_π [πµ + Λs ln(1 + πφ) − λπφ − ½ π²σ²] ≥ π̃µ + Λs ln(1 + π̃φ) − λπ̃φ − ½ π̃²σ² .
Now, E[Λs ] = λ, so
sup_π E(ln XT∗ ) = ln x + sup_π ∫₀ᵀ E(πµ + Λs ln(1 + πφ) − λπφ − ½ π²σ²) ds
≥ ln x + ∫₀ᵀ (π̃µ + λ ln(1 + π̃φ) − λπ̃φ − ½ π̃²σ²) ds = E(ln X̃T ) .
Therefore, the maximum expected wealth for the informed agent is greater than that of the uninformed agent. This is obvious because the informed agent can use any strategy available to the
uninformed agent.
Exercise 2.2.14 Solve the same problem for a power utility function. C

2.3 Initial Enlargement
Let F be a Brownian filtration generated by B. We consider Ft(L) = Ft ∨ σ(L) where L is a real-valued random variable. More precisely, in order to satisfy the usual hypotheses, redefine
Ft(L) = ∩_{ε>0} {F_{t+ε} ∨ σ(L)} .
Proposition 2.3.1 One has
(i) Every F(L) -optional process Y is of the form Yt (ω) = yt (ω, L(ω)) for some F ⊗ B(Rd )-optional
process (yt (ω, u), t ≥ 0).
(ii) Every F(L) -predictable process Y is of the form Yt (ω) = yt (ω, L(ω)) where (t, ω, u) 7→ yt (ω, u) is
a P(F) ⊗ B(Rd )-measurable function.
We shall now simply write yt (L) for yt (ω, L(ω)).
We recall that there exists a family of regular conditional distributions Pt (ω, dx) such that Pt (·, A)
is a version of E(11{L∈A} |Ft ) and for any ω, Pt (ω, ·) is a probability on R.
2.3.1 Jacod's criterion
In what follows, for y(u) a family of martingales and X a martingale, we shall write ⟨y(L), X⟩ for ⟨y(u), X⟩|u=L .
Proposition 2.3.2 (Jacod's Criterion.) Suppose that, for each t < T, Pt (ω, dx) ≪ ν(dx) where ν is the law of L and T is a fixed horizon T ≤ ∞. Then, every F-semi-martingale (Xt , t < T) is also an F(L)-semi-martingale.
Moreover, if Pt (ω, dx) = pt (ω, x)ν(dx) and if X is an F-martingale, the process
X̃t = Xt − ∫₀ᵗ d⟨p·(L), X⟩s /ps− (L) , t < T,
is an F(L)-martingale. In other words, the decomposition of the F(L)-semi-martingale X is
Xt = X̃t + ∫₀ᵗ d⟨p·(L), X⟩s /ps− (L) .
Proof: In a first step, we show that, for any θ, the process p(θ) is an F-martingale. One has to show that, for ζs ∈ bFs and s < t,
E(pt (θ)ζs ) = E(ps (θ)ζs ) .
This follows from
E(E(11{L>θ} |Ft )ζs ) = E(E(11{L>θ} |Fs )ζs ) .
In a second step, we assume that F-martingales are continuous, and that X and p are square integrable. In that case, ⟨p·(L), X⟩ exists. Let Fs be a bounded Fs-measurable random variable and h : R → R a bounded Borel function. Then the variable Fs h(L) is Fs(L)-measurable, and if a decomposition of the form Xt = X̃t + ∫₀ᵗ dKu (L) holds, the martingale property of X̃ should imply that E(Fs h(L)(X̃t − X̃s )) = 0, hence
E(Fs h(L)(Xt − Xs )) = E(Fs h(L) ∫ₛᵗ dKu (L)) .
We can write:
E(Fs h(L)(Xt − Xs )) = E(Fs (Xt − Xs ) ∫_{−∞}^{∞} h(θ)pt (θ)ν(dθ))
= ∫_R h(θ)E(Fs (Xt pt (θ) − Xs ps (θ)))ν(dθ)
= ∫_R h(θ)E(Fs ∫ₛᵗ d⟨X, p(θ)⟩v )ν(dθ)
where the first equality comes from conditioning w.r.t. Ft , the second from the martingale property of p(θ), and the third from the fact that both X and p(θ) are square-integrable F-martingales. Moreover:
E(Fs h(L) ∫ₛᵗ dKv (L)) = E(Fs ∫_R h(θ)(∫ₛᵗ dKv (θ))pt (θ)ν(dθ))
= ∫_R h(θ)E(Fs ∫ₛᵗ pv (θ)dKv (θ))ν(dθ)
where the first equality comes from the definition of p, and the second from the martingale property of p(θ). By equalization of these two quantities, we obtain that it is necessary to have dKu (θ) = d⟨X, p(θ)⟩u /pu (θ). ¤
The stability of initial times under a change of probability is rather obvious.
Regularity Conditions ♠
One of the major difficulties is to prove the existence of a universal càdlàg martingale version of the family of densities. Fortunately, results of Jacod [40] or Stricker and Yor [73] help us to solve this technical problem. See also [1] for a detailed discussion. We emphasize that these results are the most important part of enlargement of filtration theory.
Jacod ([40], Lemmes 1.8 and 1.10) establishes the existence of a universal càdlàg version of the density process in the following sense: there exists a non-negative function pt (ω, θ), càdlàg in t, optional w.r.t. the filtration F̂ on Ω̂ = Ω × R+ generated by Ft ⊗ B(R+ ), such that
• for any θ, p·(θ) is an F-martingale; moreover, denoting ζθ = inf{t : pt− (θ) = 0}, then p·(θ) > 0 and p− (θ) > 0 on [0, ζθ ), and p·(θ) = 0 on [ζθ , ∞).
• For any bounded family (Yt (ω, θ), t ≥ 0) measurable w.r.t. P(F) ⊗ B(R+ ), the F-predictable projection of the process Yt (ω, L(ω)) is the process Yt(p) = ∫ pt− (θ)Yt (θ)ν(dθ).
• Let m be a continuous F-local martingale. There exists a P(F̂)-measurable function k such that
⟨p(θ), m⟩ = (k(θ)p(θ)− ) ⋆ ⟨m⟩, P-a.s. on ∩n {t ≤ Rnθ }
where Rnθ = inf{t : pt− (θ) ≤ 1/n}.
• Let m be a local F-martingale. There exist a predictable increasing process A and a P(F̂)-measurable function k such that
⟨p(θ), m⟩t = ∫₀ᵗ ks (θ)ps− (θ) dAs .
If m is locally square integrable, one can choose A = ⟨m⟩.
Lemma 2.3.3 The process p(L) does not vanish on [0, T[.

Corollary 2.3.4 Let Z be a random variable taking only a countable number of values. Then every F-semimartingale is an F(Z)-semimartingale.
Proof: Denote by
η(dx) = Σ_{k=1}^∞ P(Z = xk ) δ_{xk} (dx) ,
where δ_{xk} (dx) is the Dirac measure at xk , the law of Z. Then Pt (ω, dx) is absolutely continuous with respect to η with Radon-Nikodym density
pt (x) = Σ_{k=1}^∞ (P(Z = xk |Ft )/P(Z = xk )) 11{x=xk } .
Now the result follows from Jacod's theorem. ¤
Exercise 2.3.5 Assume that F is a Brownian filtration. Then E(∫₀ᵗ d⟨p·(L), X⟩s /ps− (L)|Ft ) is an F-martingale.
Answer: Note that the result is obvious from the decomposition theorem! Indeed, taking expectations w.r.t. Ft of the two sides of
X̃t = Xt − ∫₀ᵗ d⟨p·(L), X⟩s /ps− (L)
leads to
E(X̃t |Ft ) = Xt − E(∫₀ᵗ d⟨p·(L), X⟩s /ps− (L)|Ft ),
and E(X̃t |Ft ) is an F-martingale.
Our aim is to check this directly. Writing dpt (u) = pt (u)σt (u)dWt and dXt = xt dWt , we note that P(L ∈ R|Fs ) = ∫_R ps (u)ν(du) = 1 implies that
∫_R ps (u)ν(du) = ∫_R p0 (u)ν(du) + ∫₀ᵗ dWs ∫_R σs (u)ps (u)ν(du) = 1 + ∫₀ᵗ dWs ∫_R σs (u)ps (u)ν(du),
hence ∫_R σs (u)ps (u)ν(du) = 0. The process E(∫₀ᵗ d⟨p·(L), X⟩s /ps− (L)|Ft ) is equal to a martingale plus ∫₀ᵗ E(σs (L)xs |Fs )ds = ∫₀ᵗ ds xs ∫_R σs (u)ps (u)ν(du) = 0. C
Exercise 2.3.6 Prove that if there exists a probability Q∗ equivalent to P such that, under Q∗, the r.v. L is independent of F∞ , then every (P, F)-semi-martingale X is also a (P, F(L))-semi-martingale. C
2.3.2 Yor's Method
We follow here Yor [80]. We assume that F is a Brownian filtration. For a bounded Borel function f, let (λt (f), t ≥ 0) be the continuous version of the martingale (E(f(L)|Ft ), t ≥ 0). There exists a predictable kernel λt (dx) such that
λt (f) = ∫_R λt (dx)f(x) .
From the predictable representation property applied to the martingale E(f(L)|Ft ), there exists a predictable process λ̂(f) such that
λt (f) = E(f(L)) + ∫₀ᵗ λ̂s (f)dBs .
Proposition 2.3.7 We assume that there exists a predictable kernel λ̂t (dx) such that
dt a.s., λ̂t (f) = ∫_R λ̂t (dx)f(x) .
Assume furthermore that, dt × dP a.s., the measure λ̂t (dx) is absolutely continuous with respect to λt (dx):
λ̂t (dx) = ρ(t, x)λt (dx) .
Then, if X is an F-martingale, there exists an F(L)-martingale X̂ such that
Xt = X̂t + ∫₀ᵗ ρ(s, L) d⟨X, B⟩s .
Sketch of the proof: Let X be an F-martingale, f a given bounded Borel function and Ft = E(f(L)|Ft ). From the hypothesis,
Ft = E(f(L)) + ∫₀ᵗ λ̂s (f)dBs .
Then, for As ∈ Fs , s < t:
E(11As f(L)(Xt − Xs )) = E(11As (Ft Xt − Fs Xs )) = E(11As (⟨F, X⟩t − ⟨F, X⟩s ))
= E(11As ∫ₛᵗ d⟨X, B⟩u λ̂u (f))
= E(11As ∫ₛᵗ d⟨X, B⟩u ∫_R λu (dx)f(x)ρ(u, x)) .
Therefore, Vt = ∫₀ᵗ ρ(u, L) d⟨X, B⟩u satisfies
E(11As f(L)(Xt − Xs )) = E(11As f(L)(Vt − Vs )) .
It follows that, for any Gs ∈ Fs(L) ,
E(11Gs (Xt − Xs )) = E(11Gs (Vt − Vs )) ,
hence, (Xt − Vt , t ≥ 0) is an F(L)-martingale. ¤
Let us write the result of Proposition 2.3.7 in terms of Jacod's criterion. If λt (dx) = pt (x)ν(dx), then
λt (f) = ∫ pt (x)f(x)ν(dx) .
Hence,
d⟨λ·(f), B⟩t = λ̂t (f)dt = ∫ f(x) dt ⟨p·(x), B⟩t ν(dx),
and therefore
λ̂t (dx) dt = dt ⟨p·(x), B⟩t ν(dx) = (dt ⟨p·(x), B⟩t /pt (x)) λt (dx) .
In the case where λt (dx) = Φ(t, x)dx, with Φ > 0, it is possible to find ψ such that
Φ(t, x) = Φ(0, x) exp(∫₀ᵗ ψ(s, x)dBs − ½ ∫₀ᵗ ψ²(s, x)ds)
and it follows that λ̂t (dx) = ψ(t, x)λt (dx). Then, if X is an F-martingale of the form Xt = x + ∫₀ᵗ xs dBs , the process (Xt − ∫₀ᵗ ds xs ψ(s, L), t ≥ 0) is an F(L)-martingale.
2.3.3 Examples
We now give some examples, taken from Mansuy and Yor [60], in a Brownian set-up, for which we use the preceding results. Here, B is a standard Brownian motion.
► Enlargement with B1 . We compare the results obtained in Subsection 2.2.1 and the method presented in Subsection 2.3.2. Let L = B1 . From the Markov property,
E(g(B1 )|Ft ) = E(g(B1 − Bt + Bt )|Ft ) = Fg (Bt , 1 − t)
where Fg (y, 1 − t) = ∫ g(x)P(1 − t; y, x)dx and P(s; y, x) = (1/√(2πs)) exp(−(x − y)²/(2s)). It follows that
λt (dx) = (1/√(2π(1 − t))) exp(−(x − Bt )²/(2(1 − t))) dx .
Then
λt (dx) = pt (x)P(B1 ∈ dx) = pt (x) (1/√(2π)) e^{−x²/2} dx
with
pt (x) = (1/√(1 − t)) exp(−(x − Bt )²/(2(1 − t)) + x²/2) .
From Itô's formula,
dt pt (x) = pt (x) ((x − Bt )/(1 − t)) dBt .
(This can be considered as a partial check of the martingale property of (pt (x), t ≥ 0).) It follows that d⟨p(x), B⟩t = pt (x) ((x − Bt )/(1 − t)) dt, hence
Bt = B̃t + ∫₀ᵗ (B1 − Bs )/(1 − s) ds .
Note that, in the notation of Proposition 2.3.7, one has
λ̂t (dx) = ((x − Bt )/(1 − t)) (1/√(2π(1 − t))) exp(−(x − Bt )²/(2(1 − t))) dx .
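As an added sanity check of the density computation: pt (x) is an F-martingale with p0 (x) = 1, so E(pt (x)) = 1 for every t < 1 and every x (Chapman-Kolmogorov in disguise); a Monte Carlo verification:

```python
import numpy as np

# For the enlargement with L = B_1, the density
# p_t(x) = (1-t)^{-1/2} exp(-(x - B_t)^2/(2(1-t)) + x^2/2)
# is an F-martingale with p_0(x) = 1, hence E[p_t(x)] = 1.
rng = np.random.default_rng(8)
n_paths = 1_000_000

for t, x in [(0.3, 0.5), (0.7, -1.0), (0.9, 2.0)]:
    B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
    p = np.exp(-(x - B_t)**2 / (2 * (1 - t)) + x**2 / 2) / np.sqrt(1 - t)
    print(f"t={t}, x={x}:  E[p_t(x)] ~ {p.mean():.4f}")   # close to 1
```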
► Enlargement with M^B = sup_{s≤1} Bs . From Exercise 1.2.1,
E(f(M^B )|Ft ) = F(1 − t, Bt , MtB )
where MtB = sup_{s≤t} Bs , with
F(s, a, b) = √(2/(πs)) ( f(b) ∫₀^{b−a} e^{−u²/(2s)} du + ∫_b^∞ f(u) e^{−(u−a)²/(2s)} du )
and, denoting by δy the Dirac measure at y,
λt (dy) = √(2/(π(1 − t))) { δy (MtB ) ∫₀^{MtB −Bt} exp(−u²/(2(1 − t))) du + 11{y>MtB } exp(−(y − Bt )²/(2(1 − t))) dy } .
Hence, by applying Itô's formula,
λ̂t (dy) = √(2/(π(1 − t))) { δy (MtB ) exp(−(MtB − Bt )²/(2(1 − t))) + 11{y>MtB } ((y − Bt )/(1 − t)) exp(−(y − Bt )²/(2(1 − t))) dy } .
It follows that
ρ(t, x) = 11{x>MtB } (x − Bt )/(1 − t) + 11{MtB =x} (1/√(1 − t)) (Φ′/Φ)((x − Bt )/√(1 − t))
with Φ(x) = √(2/π) ∫₀ˣ e^{−u²/2} du.
► Enlargement with Z := ∫₀^∞ ϕ(s)dBs where ∫₀^∞ ϕ²(s)ds < ∞, for a deterministic function ϕ. The above method applies step by step: it is easy to compute λt (dx) since, conditionally on Ft , Z is Gaussian, with mean mt = ∫₀ᵗ ϕ(s) dBs and variance σt² = ∫ₜ^∞ ϕ²(s) ds. The absolute continuity requirement is satisfied with
ρ(x, t) = ϕ(t) (x − mt )/σt² .
But here, we have to impose an extra integrability condition. For example, if we assume that
∫₀ᵗ |ϕ(s)|/σs ds < ∞,
then B is an F(Z)-semimartingale with canonical decomposition
Bt = B̃t + ∫₀ᵗ ds (ϕ(s)/σs² ) (∫ₛ^∞ ϕ(u) dBu ) .
As a particular case, we may take Z = B_{t₀} for some fixed t₀ , and we recover the results for the Brownian bridge.
More examples can be found in Jeulin [46] and Mansuy and Yor [60].
2.3.4
Change of Probability
We assume that Pt , the conditional law of L given Ft , is equivalent to ν for any t > 0, so that the
hypotheses of Proposition 2.3.2 hold and that p does not vanishes (on the support of ν).
Lemma 2.3.8 The process 1/pt (L), t ≥ 0 is an F(L) -martingale. Let, for any t, dQ = 1/pt (L)dP
(L)
on Ft . Under Q, the r.v. L is independent of F∞ . Moreover,
Q|Ft = P|Ft , Q|σ(L) = P|σ(L)
Proof: Let Zt (L) = 1/pt (L). In a first step, we prove that Z is an F(L) -martingale. Indeed,
(L)
E(Zt (L)|Fs ) = Zs (L) if (and only if) E(Zt (L)h(L)As ) = E(Zs (L)h(L)As ) for any (bounded)
Borel function h and any Fs -measurable (bounded) random variable As . From definition of p, one
has
Z
Z
E(Zt (L)h(L)As ) = E( Zt (x)h(x)pt (x)ν(dx)As ) = E( h(x)ν(dx)As )
R
Z R
=
h(x)ν(dx)E(As ) = E(As ) E(h(L))
R
The particular case t = s leads to E(Zs (L)h(L)As ) = E(h(L)) E(As ), hence E(Zs (L)h(L)As ) =
E(Zt (L)h(L)As ). Note that, since p0 (x) = 1, one has E(1/pt (L)) = 1/p0 (L) = 1.
Now, we prove the required independence. From the above,
EQ (h(L)As ) = EP (Zs (L)h(L)As ) = EP (h(L)) EP (As )
For h = 1 (resp. At = 1), one obtains EQ (As ) = EP (As ) (resp. EQ (h(L)) = EP (h(L))) and we are
done.
¤
This fact appears in Song [72] and plays an important rôle in Grorud and Pontier [35].
Lemma 2.3.9 Let m be a (P, F) martingale. The process mL defined by mL
t = mt /pt (L) is a
(P, F(L) )-martingale and satisfies E(mL
t |Ft ) = mt .
(L)
Proof: To establish the martingale property, it suffices to check that for s < t and A ∈ Fs , one
L
has E(mL
t 11A ) = E(ms 11A ), which is equivalent to EQ (mt 11A ) = EQ (ms 11A ). The last equality follows
from the fact that the (F, P) martingale m is also a (F, Q) martingale (indeed P and Q coincide on F),
hence a (F(L) , Q) martingale (by independence of L and F under Q. Bayes criteria shows that mL )
is a (P, F(L) )-martingale. Noting that E(1/pt (L)|Ft ) = 1 (take As = 1 and h = 1 in the preceeding
proof), the equality
E(mL
t |Ft ) = mt E(1/pt (L)|Ft ) = mt
ends the proof.
¤
Of course, the reverse holds true: if there exists a probability equivalent to P such that, under Q,
the r.v. L is independent to F∞ , then (P, F)-martingales are (P, F(L) )-semi martingales (WHY?).
Exercise 2.3.10 Prove that (Yt (L), t ≥ 0) is a (P, F(L) )-martingale if and only if Yt (x)pt (x) is a
family of F-martingales.
C
32
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
Exercise 2.3.11 Let F be a Brownian filtration. Prove that, if X is a square integrable (P, F(L) )martingale, then, there exists a function h and a process ψ such that
Z
t
ψs (L)dBs
Xt = h(L) +
0
C
Equivalent Martingale Measures
We consider a financial market, where some assets, with prices S are traded, as well as a riskless asset
S 0 . The dynamics of S under the so-called historical probability are assumed to be semi-martingales
(WHY?). We denote by F the filtration generated by prices and by QF the set of probability
measures, equivalent to P on F such that the discounted prices Se are F-(local) martingales. We now
enlarge the filtration and consider prices as F(L) semi-martingales (assuming that it is the case). We
denote by Q(L) the set of probability measures, equivalent to P on F(L) such that the discounted
prices Se are F(L) -(local) martingales.
We assume that there exists a probability Q equivalent to P such that, under Q, the r.v. L is
independent to F∞ . Then, Q(L) is equal to the set of probability measures, equivalent to Q on F(L)
such that the discounted prices Se are F(L) -(local) martingales. Let P∗ ∈ QF and L its RN density
(i.e., dP∗ |Ft = Lt dP|Ft . The probability Q∗ defined as
dQ∗ |F (L) = Lt dQ|F (L) = Lt
t
t
1
dP|F(L)
t
pt (τ )
(L)
e is a (P, F)-martingale, hence SL/p(L)
e
belongs to QF . Indeed, Se being a (P∗ , F)-martingale, SL
is
(L)
∗
(L)
b
e
a (P, F )-martingale and S is a (Q , F )-martingale. The probability Q defined as
b (L) = h(τ )Lt
dQ|
F
t
where EP (h(τ )) = 1 belongs also to QF
(L)
1
dP|F(L)
t
pt (τ )
.
(L)
Conversely, if Q∗ ∈ QF , with RN density L∗ , then Q defined as dQ|Ft = E(L∗t |Ft )dP|Ft
belongs to QF . Indeed (if the interest rate is null), SL∗ is a (Q∗ , F(L) )-martingale, hence S`∗ is a
(Q∗ , F(L) )-martingale, where `∗t = E(L∗t |Ft ). It follows that S`∗ is a (Q∗ , F)-martingale.
2.3.5
An Example from Shrinkage ♠
Let W = (Wt )te0 be a Brownian motion defined on the probability space (Ω, G, P), and τ be a
random time, independent of W and such that P(τ > t) = e−λt , for all t ≥ 0 and some λ > 0 fixed.
We define X = (Xt )t≥0 as the process
µ³
¶
σ2 ´
+
Xt = x exp a + b −
t − b(t − τ ) + σ Wt
2
where a, and x, σ, b are some given strictly positive constants. It is easily shown that the process
X solves the stochastic differential equation
¢
¡
dXt = Xt a + b 11{τ >t} dt + Xt σ dWt .
Let us take as F = (Ft )t≥0 the natural filtration of the process X, that is, Ft = σ(Xs | 0 ≤ s ≤ t)
Rt
Rt
for t ≥ 0 (note that F is smaller than FW ∨ σ(τ )). From Xt = x + 0 (a + b 11{τ >s} )ds + 0 Xs σ dWs
it follows that (from Exercise 1.2.5)
2.3. INITIAL ENLARGEMENT
33
¡
¢
dXt = Xt a + b Gt dt + dmart
Identifying the brackets, one has dmart = Xt σdW̄t where W̄ is a martingale with bracket t, hence
is a BM. It follows that the process X admits the following representation in its own filtration
¡
¢
dXt = Xt a + b Gt dt + Xt σ dW̄t .
Here, G = (Gt )t≥0 is the Azéma supermartingale given by Gt = P(τ > t | Ft ) and the innovation
process W̄ = (W̄t )t≥0 defined by
W̄t = Wt +
b
σ
Z
t
¡
0
¢
11{τ >s} − Gs ds
is a standard F-Brownian motion. It is easily shown using the arguments based on the notion of
strong solutions of stochastic differential equations (see, e.g. [58, Chapter IV]) that the natural
filtration of W̄ coincides with F. It follows from [58, Chapter IX] that the process G solves the
stochastic differential equation
dGt = −λGt dt +
b
Gt (1 − Gt ) dW̄t .
σ
(2.3.1)
Observe that the process n = (nt )t≥0 with nt = eλt Gt admits the representation
dnt = d(eλt Gt ) =
b λt
e Gt (1 − Gt ) dW̄t
σ
and thus, n is an F-martingale (to establish the true martingale property, note that the process
(Gt (1−Gt ))t≥0 is bounded). The equality (2.3.1) provides the (additive) Doob-Meyer decomposition
of the supermartingale G, while Gt = (Gt eλt ) e−λt gives its multiplicative decomposition. It follows
from these decompositions that the F-intensity rate of τ is λ, so that, the process M = (Mt )t≥0 with
Mt = 11τ ≤t − λ(t ∧ τ ) is a G-martingale.
It follows from the definition of the conditional survival probability process G and the fact that
(Gt eλt )t≥0 is a martingale that the expression
P(τ > u | Ft ) = E[P(τ > u | Fu ) | Ft ] = E[Gu eλu | Ft ] e−λu = Gt eλ(t−u)
holds for 0 ≤ t < u.
From the standard arguments in [71, Chapter IV, Section 4], resulting from the application of
Bayes’ formula, we obtain that the conditional survival probability process can be expressed as
P(τ > u | Ft ) = 1 −
Zu∧t −λu
Yu∧t
+
e
Yt
Yt
for all t, u ≥ 0. Here, the process Y = (Yt )t≥0 is defined by
Z
t
Yt =
Zs λe−λs ds + Zt e−λt
(2.3.2)
0
and the process Z = (Zt )t≥0 is given by
¶
µ ³
Xt
2a + b − σ 2 ´
b
ln
−
t
.
Zt = exp
σ2
x
2
Moreover, by standard computations, we see that
dZt =
b
Zt (bGt dt + σ dWt )
σ2
(2.3.3)
34
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
and hence, the process 1/Y = (1/Yt )t≥0 , or its equivalent (eλt Gt /Zt )t≥0 , admits the representation
³1´
³ eλt G ´
b Gt
t
d
=d
=−
dW̄t
Yt
Zt
σ Yt
and thus, it is an F-local martingale (in fact, an F-martingale since, for any u ≥ 0, one has
1
eλu
=
P(τ > u | Ft )
Yt
Zu
for u > t). Hence, for each u ≥ 0 fixed, it can be checked that the process Z/Y = (Zt /Yt )t≥0 defines
an F-martingale, as it must be (this process being equal to (eλt Gt )t≥0 ).
We also get the representation
Z
∞
P(τ > u | Ft ) =
αt (v) η(dv)
u
for t, u ≥ 0, and hence, the density of τ is given by
αt (u) =
Zu∧t
Yt
where η(du) ≡ P(τ ∈ du) = λe−λu du. In particular, from (2.3.2), we have
Z ∞
αt (u) η(du) = 1
0
as expected.
2.4
Progressive Enlargement
We now consider a different case of enlargement, more precisely we assume that τ is a finite random
time, i.e., a finite non-negative random variable, and we denote
Gt = Ftτ := ∩²>0 {Ft+² ∨ σ(τ ∧ (t + ²))} .
We define the right-continuous process H as
Ht = 11{τ ≤t} .
We denote by H = (Ht , t ≥ 0) its natural filtration. With this notation, G = H ∨ F. Note that τ
is an H-stopping time, hence a G-stopping time. (In fact, H is the smallest filtration making τ a
stopping time).
For a general random time τ , it is not true that F-martingales are Fτ -semi-martingales. Here
is an example: due to the separability of the Brownian filtration, there exists a bounded random
variable τ such that F∞ = σ(τ ). Hence, Fττ+t = F∞ , ∀t so that the Fτ -martingales are constant
after τ . Consequently, F-martingales are not Fτ -semi-martingales.
We shall denote Ztτ = P(τ > t|Ft ) and, if there is no ambiguity, we shall also use the notation
Gt = P(τ > t|Ft ). The process P(τ > t|Ft ) is a super-martingale. Therefore, it admits a Doob-Meyer
decomposition.
Lemma 2.4.1 (i) Let τ be a positive random time and
P(τ > t|Ft ) = µτt − Aτt
2.4. PROGRESSIVE ENLARGEMENT
35
the Doob-Meyer decomposition of the super-martingale Gt = P(τ > t|Ft ) (the process Aτ is the
F-predictable compensator of H, see Definition ??). Then, for any F-predictable positive process Y ,
µZ ∞
¶
E(Yτ ) = E
Yu dAτu .
0
(ii)
ÃZ
T
E(Yτ 11t<τ ≤T |Ft ) = E
t
!
Yu dAτu |Ft
ÃZ
T
= −E
t
!
Yu dZuτ |Ft
Proof: For any process càdlàg Y of the form Yu = ys 11]s,t] (u) with ys ∈ bFs , one has
E(Yτ ) = E(ys 11]s,t] (τ )) = E(ys (At − As )) .
The result follows from MCT.
See also Proposition 1.1.23.
¤
Proposition 2.4.2 Any Gt measurable random variable admits a decomposition as yt 11t<τ +yt (τ )11τ ≤t
where yt is Ft -measurable, and, for any u ≤ t , yt (u) is Ft -measurable.
Any G-predictable process Y admits a decomposition as Yt = yt 11t≤τ + yt (τ )11τ <t where y is Fpredictable, and, for any u ≤ t , y(u) is F-predictable.
Any G-optional process Y admits a decomposition as Yt = yt 11t<τ + yt (τ )11τ ≤t where y is F-optional,
and, for any u ≤ t , y(u) is F-optional.
Proof: See Pham [65].
¤
Proposition 2.4.3 The process µτ is a BMO martingale
Proof: From Doob-Meyer decomposition, since G is bounded, µ is a square integrable martingale.¤
2.4.1
General Facts
A Larger Filtration
We also introduce the filtration Gτ defined as
et ∈ Ft : A ∩ {τ > t} = A
et ∩ {τ > t} }
Gtτ = {A ∈ F∞ ∨ σ(τ ) : ∃A
Note that, on t < τ , one has Gtτ = Ftτ = Ft and Gtτ = F∞ ∨ σ(τ ) on τ < t.
Lemma 2.4.4 Let ϑ be a Gτ -stopping time, then, there exists an F-stopping time ϑe such that
ϑ ∧ τ = ϑe ∧ τ
Let X ∈ F∞ . A càdlàg version of E(X|Gtτ ) is
1
11t<τ E(X11t<τ |Ft ) + X11t≥τ
P(τ > t|Ft )
If Mt = E(X|Gtτ ), then the process Mt− is
1 p
(X11]]0,τ ]] ) + X11]]τ,∞]]
Gt−
36
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
Conditional Expectations
Proposition 2.4.5 For any G-predictable process Y , there exists an F-predictable process y such
that Yt 11{t≤τ } = yt 11{t≤τ } . Under the condition ∀t, P(τ ≤ t|Ft ) < 1, the process (yt , t ≥ 0) is unique.
Proof: We refer to Dellacherie [23] and Dellacherie et al. [19], page 186. The process y may be
recovered as the ratio of the F-predictable projections of Yt 11{t≤τ } and 11{t≤τ } .
¤
Lemma 2.4.6 Key Lemma: Let X ∈ FT be an integrable r.v. Then, for any t ≤ T ,
E(X11{τ <T } |Gt ) = 11{t<τ }
E(XGT |Ft )
Gt
Proof: On the set {t < τ }, any Gt measurable random variable is equal to an Ft -measurable random
variable, therefore
E(X11{τ <T } |Gt ) = 11{t<τ } yt
where yt is Ft -measurable. Taking conditional expectation w.r.t. Ft , we get yt =
(it can be proved that P(t < τ |Ft ) does not vanishes on the set {t < τ }.)
E(Yt 11{t<τ } |Ft )
.
P(t < τ |Ft )
¤
Exercise 2.4.7 Prove that
{τ > t} ⊂ {Ztτ > 0} =: At
(where the inclusion is up to a negligible set.
Hint: P(Act ∩ {τ > t}) = 0
2.4.2
(2.4.1)
C
Before τ
In this section, we assume that F-martingales are continuous and that τ avoids F-stopping times.
It is proved in Yor [79] that if X is an F-martingale then the processes Xt∧τ and Xt (1 − Ht ) are
Fτ semi-martingales. Furthermore, the decompositions of the F-martingales in the filtration Fτ are
known up to time τ (Jeulin and Yor [47]).
Proposition 2.4.8 Every F-martingale M stopped at time τ is an Fτ -semi-martingale with canonical decomposition
Z t∧τ
dhM, µτ is
f
Mt∧τ = Mt +
,
τ
Zs−
0
f is an Fτ -local martingale.
where M
Proof: Let Ys be an Fsτ -measurable random variable. There exists an Fs -measurable random
variable ys such that Ys 11{s≤τ } = ys 11{s≤τ } , hence, if M is an F-martingale, for s < t,
E(Ys (Mt∧τ − Ms∧τ )) = E(Ys 11{s<τ } (Mt∧τ − Ms∧τ ))
= E(ys 11{s<τ } (Mt∧τ − Ms∧τ ))
¡
¢
= E ys (11{s<τ ≤t} (Mτ − Ms ) + 11{t<τ } (Mt − Ms ))
From the definition of Z τ (see also Definition ??),
¡
E ys 11{s<τ ≤t} Mτ
¢
µ Z t
¶
τ
= −E ys
Mu dZu .
s
2.4. PROGRESSIVE ENLARGEMENT
37
From integration by parts formula
Z t
Z t
Z t
Mu dZuτ −Ms Zsτ +Ztτ Mt =
Zuτ dMu +hM, Z τ it −hM, Z τ is =
Zuτ dMu +hM, µτ it −hM, µτ is .
s
s
s
We have also
¡
¢
E ys (11{s<τ ≤t} Ms
=
¡
¢
E ys 11{t<τ } (Mt − Ms )) =
E (ys Ms (Zsτ − Ztτ ))
E (ys Ztτ (Mt − Ms )))
hence, from the martingale property of M
E(Ys (Mt∧τ − Ms∧τ )) = E(ys (hM, µτ it − hM, µτ is ))
¶
µ Z t
¶
µ Z t
dhM, µτ iu τ
dhM, µτ iu
= E ys
Zu = E y s
E(11{u<τ } |Fu )
Zuτ
Zuτ
s
s
µ Z t
¶
µ Z t∧τ
¶
dhM, µτ iu
dhM, µτ iu
= E ys
1
1
=
E
y
.
s
{u<τ }
Zuτ
Zuτ
s
s∧τ
The result follows.
See Jeulin for the general case.
2.4.3
¤
Basic Results
Proposition 2.4.9 The process
Z
t∧τ
M t = Ht −
0
dAτu
Gu−
is a G-martingale.
For any bounded G-predictable process Y , the process
Z t∧τ
Ys
τ
Yt 11τ ≤t −
τ dAs
Zs−
0
is a G-martingale.
The process Lt = (1 − Ht )/Gt is a G -martingale.
Proof: We give the proof in the case where G is continuous. Let s < t. We proceed in two steps,
using the Doob-Meyer decomposition of G as Gt = µτt − Aτt .
First step: we prove
1
E(Ht − Hs |Gs ) = 11s<τ
E(Aτt − Aτs |Fs )
Gs
Indeed,
E(Ht − Hs |Gs )
= P(s < τ ≤ t|Gs ) = 11s<τ
= 11s<τ
In a second step, we prove that
Z
E(
t∧τ
s∧τ
1
E(Gs − Gt |Fs )
Gs
1
E(Aτt − Aτs |Fs ) .
Gs
1
dAτu
|Gs ) = 11s<τ
E(Aτt − Aτs |Fs )
Gu
Gs
One has
Z
t∧τ
E(
s∧τ
dAτu
|Gs ) = 11s<τ E(
Gu
Z
s
t∧τ
dAτu
1
|Gs ) = 11s<τ
E(11s<τ
Gu
Gs
Z
s
t∧τ
dAτu
|Fs )
Gu
38
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
Setting Kt =
Z
Rt
dAτu
,
s Gu
t∧τ
E(11s<τ
s
dAτu
|Fs ) =
Gu
=
Z ∞
E(11s<τ Kt∧τ |Fs ) = −E(
Kt∧u dGu |Fs )
s
Z t
Z t
Z t
−E(
Ku dGu + Kt Gt |Fs ) = E(
Gu dKu |Fs ) = E(
dAτu |Fs )
s
s
s
where we have used integration by parts formula to obtain
Z t
Z t
Z t
−
Ku dGu + Kt Gt = Ks Gt +
Gu dKu =
Gu dKu
s
s
−Λt
Note that, if Gt = Nt Dt = Nt e
martingale.
s
is the multiplicative decomposition of G, then Ht − Λt∧τ is a
¤
Lemma 2.4.10 The process λ satisfies
1 P(t < τ < t + h|Ft )
.
h→0 h
P(t < τ |Ft )
λt = lim
Proof: The martingale property of M implies that
Z t+h
E(11t<τ <t+h |Gt ) −
E((1 − Hs )λs |Gt )ds = 0
t
It follows that, by projection on Ft
Z
t+h
P(t < τ < t + h|Ft ) =
λs P(s < τ |Ft )ds .
t
¤
The reverse is know as Aven lemma [6].
Lemma 2.4.11 Let (Ω, Gt , P) be a filtered probability space and N be a counting process. Assume
that E(Nt ) < ∞ for any t. Let (hn , n ≥ 1) be a sequence of real numbers converging to 0, and
(n)
Yt
=
1
E(Nt+hn − Nt |Gt )
hn
Assume that there exists λt and yt non-negative G-adapted processes such that
(n)
(i) For any t, lim Yt
= λt
(ii) For any t, there exists for almost all ω an n0 = n0 (t, ω) such that
|Ys(n) − λs (ω)| ≤ ys (ω) , s ≤ t, n ≥ n0 (t, ω)
(iii)
Rt
0
ys ds < ∞, ∀t
Then, Nt −
Rt
0
λs ds is a G-martingale.
Definition 2.4.12 In the case where the process dAu = au du, the process λt =
et = 11t<τ λt is the G-intensity, and the process
F-intensity of τ , and λ
Z
Ht −
Z
t
λs ds = Ht −
0
is a G-martingale.
Z
t∧τ
t
(1 − Hs )λs ds = Ht −
0
0
es ds
λ
at
Gt−
is called the
2.4. PROGRESSIVE ENLARGEMENT
39
The intensity process is the basic tool to model default risk.
Exercise 2.4.13 Prove that if X is a (square-integrable) F-martingale, XL is a G -martingale,
where L is defined in Proposition 2.4.9.
Hint: See [BJR] for proofs.
C
Exercise 2.4.14 Let τ be a random time with density f , and G(s) = P(τ > s) =
R t∧τ f (s)
that the process Mt = Ht − 0 G(s)
ds is a H-martingale.
R∞
s
f (u)du. Prove
C
Restricting the information
Suppose from now on that Fet ⊂ Ft and define the σ-algebra Get = Fet ∨ Ht and the associated hazard
e t = − ln(G
e t ) with
process Γ
e t = P(t < τ |Fet ) = E(Gt |Fet ) .
G
Let Ft = Zt + At be the F-Doob-Meyer decomposition of the F-submartingale
F and assume
Rt
that A is absolutely continuous with respect to Lebesgue’s measure: At = 0 as ds. The process
e
e
et = E(At |Fet ) is a F-submartingale
A
and its F-Doob-Meyer
decomposition is
et = zet + α
A
et .
Hence, setting Zet = E(Zt |Fet ), the sub-martingale
Fet = P(t ≥ τ |Fet ) = E(Ft |Fet )
e
admits a F-Doob-Meyer
decomposition as
Fet = Zet + zet + α
et
where Zet + zet is the martingale part. From Exercise 1.2.5, α
et =
Z
Ht −
0
t∧τ
α
es
1 − Fes
Rt
0
E(as |Fes )ds. It follows that
ds
e
e
e s , and not ”as one could
is a G-martingale
and that the F-intensity
of τ is equal to E(as |Fes )/G
e and F, this proof can
think” to E(as /Gs |Fes ). Note that even if (H) hypothesis holds between F
e
e
not be simplified since F is increasing but not F-predictable (there is no raison for Fet to have an
intensity).
This result can be directly proved thanks to Bremaud’s following result (a consequence of Exercise
Rt
R
es ds is a G-martingale, then Ht − t E(λ
es |Ges )ds is a G-martingale.
e
1.2.5 ): if Ht − 0 λ
Since
0
E(11{s≤τ } λs |Ges ) =
=
it follows that Ht −
R t∧τ
0
11{s≤τ }
E(11{s≤τ } λs |Fes )
es
G
11{s≤τ }
11{s≤τ }
E(Gs λs |Fes ) =
E(as |Fes )
e
es
Gs
G
e
e s ds is a G-martingale,
E(as |Fes )/G
and we are done.
40
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
Survival Process
Lemma 2.4.15 The super-martingale G admits a multiplicative decomposition as Gt = Nt Dt where
D is a decreasing process and N a local martingale.
Proof: In the proof, we assume that G is continuous. Assuming that the supermartingale G = µ−A
admits a multiplicative decomposition Gt = Dt Nt where N is a local-martingale and D a decreasing
process, we obtain that dGt = Nt dDt + Dt dNt is a Doob-Meyer decomposition of G. Hence,
dNt =
1
1
dµt , dDt = −Dt dAt ,
Dt
Gt
therefore,
Z
t
Dt = exp −
0
1
dAs
Gs
Λt
Conversely, if G admits the Doob-Meyer
R t 1 decomposition µ − A, then Gt = Nt Dt with Dt = e ,
Λ being the intensity process Λt = − 0 Gs dAs .
¤
Exercise 2.4.16 We consider, as in the paper of Biagini, Lee and Rheinlander a mortality bond, a
R τ ∧T
financial instrument with payoff Y = 0
Gs ds. We assume that G is continuous.
1. Compute, in the case r = 0, the price Yt of the mortality bond (it will be convenient to
RT
introduce mt = E( 0 G2s ds|Ft ). Is the process m a (P, F) martingale? a (P, G)-martingale?
2. Determine α, β and γ so that
dYt = αt dMt + βt (dmt −
1
1
dhm, Zit ) + γt (dZt −
dhZit )
Gt
Gt
3. Determine the price D(t, T ) of a defaultable zero-coupon bond with maturity T , i.e. a financial
asset with terminal payoff 11T <τ . Give the dynamics of this price.
4. We now assume that F is a Brownian filtration, and that
dSt = St (bdt + σdWt )
Explain how one can hedge the mortality bond.
Hint: From the key lemma,
ÃZ
!
τ ∧T
E
Gs ds|Gt
0
1
E(11t<τ
= 11t<τ
Gt
Z
=
=
Z
E(
Z
E(
T
0
u
dFu
t
Z
T
u∧T
dFu
Gs ds|Ft )
0
Z
∞
T
dFu
T
Gs ds|Ft )
0
Z
u
dFu
t
Z
∞
Z
Gs ds|Ft ) + E(
0
T
Gs ds|Ft ) + E(GT
Gs ds|Ft )
0
0
From integration by parts formula, setting ζt =
Z
GT ζT = Gt ζt +
t
Rt
0
Gs ds
0
t
Z
τ
Gs ds|Ft ) + 11τ ≤t
and from the second part of key lemma
Z τ ∧T
Z
I := E(11t<τ
Gs ds|Ft ) = E(
0
Z
τ ∧T
Gs ds
Z
T
T
ζs dGs +
Gs dζs
t
2.4. PROGRESSIVE ENLARGEMENT
hence
Z
I = Gt ζt + E(
t
41
T
Z
G2s ds)
t
= Gt ζt −
0
G2s ds + mt
and
µZ
It
:= E(I|Gt ) = 11t<τ
0
Z
=
t
1
Gs ds +
Gt
µ
¶¶
Z t
Z
2
mt −
Gs ds
+ 11τ ≤t
τ
it 11t<τ + 11τ ≤t
Z
Gu du = it 11t<τ +
0
0
t
Gu du
0
Z
s
dHs
0
τ
Gu du
0
Differentiating this expression leads to
dIt
=
µ
¶µ
¶
Z t
Z t
1
1
−dZt + dAt +
(
Gs ds − it )dHt + (1 − Ht ) 2 mt −
G2s ds
dhZit
Gt
Gt
0
0
¶
µ
Z t
1
1
1
+(1 − Ht )
dhm, Git +
dmt − 2 (mt −
G2s ds)dhm, Git
G2t
Gt
Gt
0
t
After some simplifications, and using the fact that dMt = dHt − (1 − Ht ) dA
Gt
dIt =
1
(mt −
Gt
Z
0
t
G2s ds)dMt +(1−Ht )
1
1
1
(dmt − dhm, Zit )−(1−Ht ) 2 (mt −
Gt
Gt
Gt
Z
0
t
G2s ds)(dZt −
1
dhZit )
Gt
C
2.4.4
Immersion Setting
Characterization of Immersion
Let us first investigate the case where the (H) hypothesis holds.
Lemma 2.4.17 In the progressive enlargement setting, (H) holds between F and G if and only if
one of the following equivalent conditions holds:
(i) ∀(t, s), s ≤ t,
(ii) ∀t,
P(τ ≤ s|F∞ ) = P(τ ≤ s|Ft ),
P(τ ≤ t|F∞ ) = P(τ ≤ t|Ft ).
(2.4.2)
Proof: If (ii) holds, then (i) holds too. If (i) holds, F∞ and σ(t ∧ τ ) are conditionally independent
given Ft . The property follows. This result can also be found in Dellacherie and Meyer [21].
¤
Note that, if (H) holds, then (ii) implies that the process P(τ ≤ t|Ft ) is increasing.
Exercise 2.4.18 Prove that if H and F are immersed in G, and if any F martingale is continuous,
then τ and F∞ are independent.
C
Exercise 2.4.19 Assume that immersion property holds and let yt (u) be a family of F-martingales.
Prove that, for t > s,
11τ ≤s E(yt (τ )|Gs ) = 11τ ≤s ys (τ )
C
42
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
Norros’s lemma
Proposition 2.4.20 Assume that G = (Gt )t≥0 is continuous and G∞ = 0 and let Λ be the increasing predictable process such that Mt = Ht − Λt∧τ is a martingale. Then the following conclusions
hold:
(i) the variable Λτ has standard exponential law (with parameter 1);
(ii) if F is immersed in G, then the variable Λτ is independent of F∞ .
Proof: (i) Consider the process X = (Xt , t ≥ 0), defined by:
Xt = (1 + z)Ht e−zΛt∧τ
for all t ≥ 0 and any z > 0 fixed. Then, applying the integration by parts formula, we get:
dXt = z e−zΛt∧τ dMt .
(2.4.3)
Hence, by virtue of the assumption that z > 0, it follows from (2.4.3) that X is also a G-martingale,
so that:
¯ ¤
£
E (1 + z)Ht e−zΛt∧τ ¯ Gs = (1 + z)Hs e−zΛs∧τ
(2.4.4)
holds for all 0 ≤ s ≤ t. In view of the implied by z > 0 uniform integrability of X, we may let t go
to infinity in (2.4.4). Setting s equal to zero in (2.4.4), we therefore obtain:
£
¤
E (1 + z) e−zΛτ = 1 .
This means that the Laplace transform of Λτ is the same as one of a standard exponential variable
and thus proves the claim.
(ii) Recall that, under immersion property, G is decreasing and dΛ = dG/G. Applying the
change-of-variable formula, we get:
Z
t
e−zΛt∧τ = 1 + z
e−zΛs
0
11(τ >s)
dGs
Gs
(2.4.5)
for all t ≥ 0 and any z > 0 fixed. Then, taking conditional expectations under Ft from both parts
of expression (2.4.5) and applying Fubini’s theorem, we obtain from the immersion of F in G that:
¯ ¤
£
E e−zΛt∧τ ¯ Ft = 1 + z
Z
t
0
Z
t
=1+z
·
11(τ >s)
E e−zΛs
Gs
e−zΛs
0
Z
=1+z
t
¯ ¸
¯
¯ Ft dGs
(2.4.6)
P(τ > s | Ft )
dGs
Gs
e−zΛs dGs
0
for all t ≥ 0. Hence, using the fact that Λt = − ln Gt , we see from (2.4.6) that:
£
¤
E e−zΛt∧τ | Ft = 1 +
¢
z ¡
(Gt )1+z − (G0 )1+z
1+z
holds for all t ≥ 0. Letting t go to infinity and using the assumption G0 = 1, as well as the fact that
G∞ = 0 (P-a.s.), we therefore obtain:
£
¤
E e−zΛτ | F∞ =
that signifies the desired assertion. ¤
1
1+z
2.4. PROGRESSIVE ENLARGEMENT
43
G-martingales versus F martingales
Proposition 2.4.21 Assume that F is immersed in G under P. Let Z G be a G-adapted, P-integrable
process given by the formula
ZtG = Zt 11τ >t + Zt (τ )11τ ≤t ,
∀ t ∈ R+ ,
(2.4.7)
where:
(i) the projection of Z G onto F, which is defined by
ZtF := EP (ZtG |Ft ) = Zt P(τ > t|Ft ) + EP (Zt (τ )11τ ≤t |Ft ),
is a (P, F)-martingale,
(ii) for any fixed u ∈ R+ , the process (Zt (u), t ∈ [u, ∞)) is a (P, F)-martingale.
Then the process Z G is a (P, G)-martingale.
Proof: Let us take s < t. Then
EP (ZtG |Gs )
= EP (Zt 11τ >t |Gs ) + EP (Zt (τ )11s<τ ≤t |Gs ) + EP (Zt (τ )11τ ≤s |Gs )
1
= 11s<τ
(EP (Zt Gt |Fs ) + EP (Zt (τ )11s<τ ≤t |Fs )) + EP (Zt (τ )11τ ≤s |Gs )
Gs
On the one hand,
EP (Zt (τ )11τ ≤s |Gs ) = 11τ ≤s Zs (τ )
(2.4.8)
Indeed, it suffices to prove the previous equality for Zt (u) = h(u)Xt where X is an F-martingale. In
that case,
EP (Xt h(τ )11τ ≤s |Gs ) = 11τ ≤s h(τ )EP (Xt |Gs ) = 11τ ≤s h(τ )EP (Xt |Fs ) = 11τ ≤s h(τ )Xs = 11τ ≤s Zs (τ )
In the other hand, from (i)
EP (Zt Gt + Zt (τ )11τ ≤t |Fs ) = Zs Gs + EP (Zs (τ )11τ ≤s |Fs )
It follows that
EP (ZtG |Gs ) = 11s<τ
1
(Zs Gs + EP ((Zs (τ ) − Zt (τ ))11τ ≤s |Fs )) + 11τ ≤s Zs (τ )
Gs
It remains to check that
EP ((Zt (τ ) − Zs (τ ))11τ ≤s |Fs ) = 0
which follows from
EP (Zt (τ )11τ ≤s |Fs ) = EP (Zt (τ )11τ ≤s |Gs |Fs ) = EP (Zs (τ )11τ ≤s |Fs )
where we have used (2.4.8).
¤
Exercise 2.4.22 (A different proof of Norros’ result) Suppose that
P(τ ≤ t|F∞ ) = 1 − e−Γt
where Γ is an arbitrary continuous strictly increasing F-adapted process. Prove, using the inverse
of Γ that the random variable Γτ is independent of F∞ , with exponential law of parameter 1.
Hint: Suppose that
P (τ ≤ t|F∞ ) = 1 − e−Γt
where Γ is an arbitrary continuous strictly increasing F-adapted process. Let us set Θ : = Γτ . Then
{t < Θ} = {t < Γτ } = {Ct < τ },
where C is the right inverse of Γ, so that ΓCt = t. Therefore
P (Θ > u|F∞ ) = e−ΓCu = e−u .
We have thus established the required properties, namely, the probability law of Θ and its independence of the σ-field F∞ . Furthermore, τ = inf{t : Γt > Γτ } = inf{t : Γt > Θ}.
C
44
2.4.5
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
Cox processes
We present here an particular case where immersion property holds. We deal here with the basic
and important model for credit risk.
Let (Ω, G, P) be a probability space endowed with a filtration F. A nonnegative F-adapted process
λ is given. We assume that there exists, on the space (Ω, G, P), a random variable Θ, independent of
F∞ , with an exponential law: P(Θ ≥ t) = e−t . We define the default time τ as the first time when
Rt
the increasing process Λt = 0 λs ds is above the random level Θ, i.e.,
τ = inf {t ≥ 0 : Λt ≥ Θ}.
In particular, using the increasing property of Λ, one gets {τ > s} = {Λs < Θ}. We assume that
Λt < ∞, ∀t, Λ∞ = ∞, hence τ is a real-valued r.v..
Comment 2.4.23 (i) In order to construct the r.v. Θ, one needs to enlarge the probability space
as follows. Let (Ω̂, F̂, P̂) be an auxiliary probability space with a r.v. Θ with exponential law. We
introduce the product probability space (Ω∗ , G ∗ , Q∗ ) = (Ω × Ω̂, F∞ ⊗ F̂, P ⊗ P̂).
(ii) Another construction for the default time τ is to choose τ = inf {t ≥ 0 : ÑΛt = 1}, where
Rt
Λt = 0 λs ds and Ñ is a Poisson process with intensity 1, independent of the filtration F. This
second method is in fact equivalent to the first. Cox processes are used in a great number of studies
(see, e.g., [57]).
(iii) We shall see that, in many cases, one can restrict attention to Cox processes models.
Conditional Expectations
Lemma 2.4.24 The conditional distribution function of τ given the σ-field Ft is for t ≥ s
³
´
P(τ > s|Ft ) = exp − Λs .
Proof: The proof follows from the equality {τ > s} = {Λs < Θ}. From the independence assumption and the Ft -measurability of Λs for s ≤ t, we obtain
¯ ´
³
³
´
¯
P(τ > s|Ft ) = P Λs < Θ ¯ Ft = exp − Λs .
In particular, we have
P(τ ≤ t|Ft ) = P(τ ≤ t|F∞ ),
(2.4.9)
and, for t ≥ s, P(τ > s|Ft ) = P(τ > s|Fs ). Let us notice that the process Ft = P(τ ≤ t|Ft ) is here
an increasing process, as the right-hand side of 2.4.9 is.
¤
Corollary 2.4.25 In the Cox process modeling, immersion property holds for F and G.
Remark 2.4.26 If the process λ is not non-negative, we get,
{τ > s} = {sup Λu < Θ} ,
u≤s
hence for s < t
P(τ > s|Ft ) = exp(− sup Λu ) .
u≤s
More generally, some authors define the default time as
τ = inf {t ≥ 0 : Xt ≥ Θ}
where X is a given F-semi-martingale. Then, for s ≤ t
P(τ > s|Ft ) = exp(− sup Xu ) .
u≤s
2.4. PROGRESSIVE ENLARGEMENT
45
Exercise 2.4.27 Prove that τ is independent of F∞ if and only if λ is a deterministic function. C
Exercise 2.4.28 Compute P(τ > t|Ft ) in the case when Θ is a non negative random variable,
independent of F∞ , with cumulative distribution function F .
C
Exercise 2.4.29 Prove that, if P(τ > t|Ft ) is continuous and strictly decreasing, then there exists
Θ independent of F∞ such that τ = inf{t : Λt > Θ}.
C
Exercise 2.4.30 Write the Doob-Meyer and the multiplicative decomposition of G.
C
Exercise 2.4.31 Show how one can compute P(τ > t|Ft ) when
τ = inf{t : Xt > Θ}
where X is an F-adapted process, not necessarily increasing, and Θ independent of F∞ . Does
immersion property still holds? Same questions if Θ is not independent of F∞ .
C
2.4.6
Successive Enlargements
Proposition 2.4.32 Let τ1 < τ2 a.s. , Hi be the filtration generated by the default process Hti =
11τi ≤t , and G = F ∨ H1 ∨ H2 . Then, the two following assertions are equivalent:
(i) F is immersed in G
(ii) F is immersed in F ∨ H1 and F ∨ H1 is immersed in G.
Proof: (this result was obtained by Ehlers and Schönbucher [25], we give here a slightly different
proof.) The only fact to check is that if F is immersed in G, then F ∨ H1 is immersed in G, or that
1
P(τ2 > t|Ft ∨ Ht1 ) = P(τ2 > t|F∞ ∨ H∞
)
This is equivalent to, for any h, and any A∞ ∈ F∞
E(A∞ h(τ1 )11τ2 >t ) = E(A∞ h(τ1 )P(τ2 > t|Ft ∨ Ht1 ))
We spilt this equality in two parts. The first equality
E(A∞ h(τ1 )11τ1 >t 11τ2 >t ) = E(A∞ h(τ1 )11τ1 >t P(τ2 > t|Ft ∨ Ht1 ))
is obvious since 11τ1 >t 11τ2 >t = 11τ1 >t and 11τ1 >t P(τ2 > t|Ft ∨ Ht1 ) = 11τ1 >t . Since F is immersed in G,
one has E(A∞ |Gt ) = E(A∞ |Ft ) and it follows (WHY?) that E(A∞ |Gt ) = E(A∞ |Ft ∨ Ht1 ), therefore
E(A∞ h(τ1 )11τ2 >t≥τ1 )
= E(E(A∞ |Gt )h(τ1 )11τ2 >t≥τ1 )
= E(E(A∞ |Ft ∨ Ht1 )h(τ1 )11τ2 >t≥τ1 )
= E(E(A∞ |Ft ∨ Ht1 )E(h(τ1 )11τ2 >t≥τ1 |Ft ∨ Ht1 ))
= E(A∞ E(h(τ1 )11τ2 >t≥τ1 |Ft ∨ Ht1 ))
Lemma 2.4.33 Norros Lemma.
Let τi , i = 1, · · · , n be n finite-valued random times and Gt = Ht1 ∨ · · · ∨ Htn . Assume that
(i) P (τi = τj ) = 0, ∀i 6= j
(ii) there exists continuous processes Λi such that Mti = Hti − Λit∧τi are G-martingales
then, the r.v’s Λiτi are independent with exponential law.
46
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
i
i
Proof: For any µi > −1 the processes Lit = (1 + µi )Ht e−µi Λt∧τi , solution of
dLit = Lit− µi dMti
are uniformly integrable martingales. Moreover, these martingales have no common jumps, and are
i
Q
orthogonal. Hence E( i (1 + µi )e−µi Λτi ) = 1, which implies
E(
Y
i
e−µi Λτi ) =
Y
(1 + µi )−1
i
i
hence the independence property.
AJOUTER F
¤
As an application, let us study the particular case of Poisson process. Let τ1 and τ2 are the two
first jumps of a Poisson process, we have
½ −λt
e
for s < t
G(t, s) =
e−λs (1 + λ(s − t)) for s > t
with partial derivatives
½
∂1 G(t, s)
=
−λe−λt for t > s
−λe−λs for s > t
and
½
h(t, s) =
½
k(t, s) =
½
, ∂2 G(t, s)
=
0 for t > s
−λ2 e−λs (s − t) for s > t
½
1 for t > s
t
s for s > t
, ∂1 h(t, s)
0 for t > s
1 − e−λ(s−t) for s > t
=
, ∂2 k(t, s) =
0 for t > s
1
s for s > t
½
0 for t > s
λe−λ(s−t) for s > t
Then, one obtains U1 = τ1 et U2 = τ2 − τ1
Martingales associated with a default time in the filtration generated by two default
processes
We present the computation of the martingales associated with the times τi in different filtrations.
In particular, we shall obtain the computation of the intensities in various filtrations.
We have established that, if F is a given reference filtration and if Gt = P(τ > t|Ft ) is the Azéma
Rt
supermartingale admitting a Doob-Meyer decomposition Gt = Zt − 0 as ds, then the process
Z
Ht −
0
t∧τ
as
ds ,
Gs
is a G-martingale, where G = F ∨ H and Ht = σ(t ∧ τ ). We now consider the case where two default
times τi , i = 1; 2 are given. We assume that
G(t, s) := P(τ1 > t, τ2 > s)
is twice differentiable and does not vanish. We denote by Hti = σ(Hsi , s ≤ t) the filtration generated
by he default process H i and by H = H1 ∨ H2 the filtration generated by the two default processes.
Our aim is to give the compensator of H 1 in the filtration H1 and in the filtration H.
We apply the general result to the case F = H2 and H = H1 . Let
1|2
Gt
= P(τ1 > t|Ht2 )
2.4. PROGRESSIVE ENLARGEMENT
47
1|2
be the Azéma supermartingale of τ1 in the filtration H2 , with Doob-Meyer decomposition Gt
R t 1|2
1|2
Zt − 0 as ds where Z 1|2 is a H2 -martingale. Then, the process
Z
Ht1 −
1|2
is a G-martingale. The process At
=
R t∧τ1
0
t∧τ1
a1|2
s
1|2
Gs
1|2
as
1|2
Gs
0
=
ds
ds is the G-adapted compensator of H 1 . The same
methodology can be applied for the compensator of H 2 .
Proposition 2.4.34 The process M 1,G defined as
Z t∧τ1
Z t∧τ1 ∧τ2
f (s, τ2 )
∂1 G(s, s)
ds +
ds
Mt1,G := Ht1 +
G(s, s)
t∧τ1 ∧τ2 ∂2 G(s, τ2 )
0
is a G-martingale.
Proof: Some easy computation enables us to write
1|2
Gt
=
P(τ1 > t|Ht2 ) = Ht2 P(τ1 > t|τ2 ) + (1 − Ht2 )
=
Ht2 h(t, τ2 ) + (1 − Ht2 )ψ(t)
where
h(t, v) =
∂2 G(t, v)
;
∂2 G(0, v)
P(τ1 > t, τ2 > t)
P(τ2 > t)
(2.4.10)
ψ(t) = G(t, t)/G(0, t).
Function t → ψ(t) and process t → h(t, τ2 ) are continuous and of finite variation, hence integration
by parts rule leads to
1|2
dGt
= h(t, τ2 )dHt2 + Ht2 ∂1 h(t, τ2 )dt + (1 − Ht2 )ψ 0 (t)dt − ψ(t)dHt2
¡
¢
= (h(t, τ2 ) − ψ(t)) dHt2 + Ht2 ∂1 h(t, τ2 ) + (1 − Ht2 )ψ 0 (t) dt
µ
¶
¢
¡
∂2 G(t, τ2 )
G(t, t)
=
−
dHt2 + Ht2 ∂1 h(t, τ2 ) + (1 − Ht2 )ψ 0 (t) dt
∂2 G(0, τ2 ) G(0, t)
From the computation of the Stieljes integral, we can write
¶
µ
¶
Z Tµ
G(t, t)
∂2 G(t, τ2 )
G(τ2 , τ2 ) ∂2 G(τ2 , τ2 )
−
dHt2 =
−
1{τ2 ≤T }
G(0, t) ∂2 G(0, τ2 )
G(0, τ2 )
∂2 G(0, τ2 )
0
¶
Z Tµ
G(t, t)
∂2 G(t, t)
dHt2
=
−
G(0, t) ∂2 G(0, t)
0
and substitute it in the expression of dG1|2 :
µ
¶
¡
¢
∂2 G(t, t)
G(t, t)
1|2
dGt =
−
dHt2 + Ht2 ∂1 h(t, τ2 ) + (1 − Ht2 )ψ 0 (t) dt
∂2 G(0, t) G(0, t)
We now use that
¡
¢ ∂2 G(0, t)
dHt2 = dMt2 − 1 − Ht2
dt
G(0, t)
where M 2 is a H2 -martingale, and we get the H2 − Doob-Meyer decomposition of G1|2 :
¶
µ
¶
µ
¡
¢ G(t, t)
G(t, t)
∂2 G(t, t) ∂2 G(0, t)
∂2 G(t, t)
1|2
2
2
−
dMt − 1 − Ht
−
dt
dGt
=
∂2 G(0, t) G(0, t)
G(0, t) ∂2 G(0, t)
G(0, t)
¡ 2
¢
+ Ht ∂1 h(t, τ2 ) + (1 − Ht2 )ψ 0 (t) dt
48
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
and from
µ
ψ 0 (t) =
∂2 G(t, t)
G(t, t)
−
∂2 G(0, t) G(0, t)
¶
∂2 G(0, t) ∂1 G(t, t)
+
G(0, t)
G(0, t)
we conclude
µ
1|2
dGt
=
∂2 G(t, t)
G(t, t)
−
G(0, t) ∂2 G(0, t)
¶
dMt2
µ
¶
2
(1)
2 ∂1 G(t, t)
+ Ht ∂1 h (t, τ2 ) + (1 − Ht )
dt
G(0, t)
From (2.4.10), the process G1|2 has a single jump of size
1|2
Gt
=
∂2 G(t,t)
∂2 G(0,t)
−
G(t,t)
G(0,t) .
From (2.4.10),
G(t, t)
= ψ(t)
G(0, t)
on the set τ2 > t, and its bounded variation part is ψ 0 (t). The hazard process has a non null
G(t,t)
G(t,t)
martingale part, except if G(0,t)
= ∂∂22G(0,t)
(this is the case if the default are independent). Hence,
(H) hypothesis is not satisfied in a general setting between Hi and G. It follows that the intensity
1 G(s,s)
1 h(s,τ2 )
of τ1 in the G-filtration is ∂G(s,s)
on the set {t < τ2 ∧ τ1 } and ∂h(s,τ
on the set {τ2 < t < τ1 }. It
2)
can be proved that the intensity of τ1 ∧ τ2 is
∂1 G(s, s) ∂2 G(s, s)
g(t)
+
=
G(s, s)
G(s, s)
G(t, t)
where
Exercise 2.4.35 Prove that Hi , i = 1, 2 is immersed in H1 ∨ H2 if and only if τi , i = 1, 2 are
independent.
C
Kusuoka counter example
Kusuoka [56] presents a counter-example of the stability of H hypothesis under a change of probability, based on two independent random times τ1 and τ2 given on some probability space (Ω, G, P)
R t∧τ
and admiting a density w.r.t. Lebesgue’s measure. The process Mt1 = Ht1 − 0 1 λ1 (u) du, where
Ht1 = 11{t≥τ1 } and λi is the deterministic intensity function of τi under P, is a (P, Hi ) and a (P, G)i ∈ds
martingale (Recall that λi (s)ds = P(τ
P(τi >s ) . Let us set dQ | Gt = Lt dP | Gt , where
Z
Lt = 1 +
0
t
Lu− κu dMu1
for some G-predictable process κ satisfying κ > 1 (WHY?), where G = H1 ∨ H2 . We set F = H1
and H = H2 . Manifestly, immersion hypothesis holds under P. Let
Z t∧τ1
b1 (u) du
ct1 = Ht1 −
M
λ
0
Z t∧τ1
ft1 = Ht1 −
M
λ1 (u)(1 + κu ) du
0
b
where λ(u)du
= Q(τ1 ∈ du)/Q(τ1 > u) is deterministic. It is easy to see that, under Q, the
c1 is a (Q, H1 )-martingale and M
f1 is a (Q, G) martingale. The process M
c1 is not a (Q, G)process M
martingale (WHY?), hence, immersion does not hold under Q.
Exercise 2.4.36 Compute Q(τ1 > t|Ht2 ).
C
2.4. PROGRESSIVE ENLARGEMENT
2.4.7
49
Ordered times
Assume that τi , i = 1, . . . , n are n random times admitting a density process with respect to a
reference filtration F. Let σi , i = 1, . . . , n be the sequence of ordered random times and G(k) =
(i)
F ∨ H(1) · · · ∨ H(k) where H(i) = (Ht = σ(t ∧ σi ), t ≥ 0). The G(k) -intensity of σk is the positive
Rt
(k)
G(k) -adapted process λk such that (Mt := 11{σk ≤t} − 0 λks ds, t ≥ 0) is a G(k) -martingale. The
G(k) -martingale M (k) is stopped at σk and the G(k) -intensity of σk satisfies λkt = 0 on {t ≥ σk }.
The following lemma shows the G(k) -intensity of σk coincides with its G(n) -intensity.
Lemma 2.4.37 For any k, a G(k) -martingale stopped at σk is a G(n) -martingale.
Proof: Let Y be a G(k) martingale stopped at σk , i.e. Yt = Yt∧σk for any t. Then it is also a
(k)
(k)
(n)
(n)
(Gt∧σk )t≥0 -martingale. Since Gt∧σk = Gt∧σk , we shall prove that the (Gt∧σk )t≥0 -martingale Y is also
a G(n) -martingale.
(n)
Let A ∈ Gt . For any T > t, we have E[YT 11A ] = E[YT 11A∩{σk ≥t} ] + E[YT 11A∩{σk <t} ]. On the one
(n)
(n)
hand, since A ∩ {σk ≥ t} ∈ Gσk ∧t and Y is a (Gt∧σk )t≥0 -martingale, we have E[YT 11A∩{σk ≥t} ] =
E[Yt 11A∩{σk ≥t} ]. On the other hand, since Y is stopped at σk , E[YT 11A∩{σk <t} ] = E[Yt 11A∩{σk <t} ].
Combining the two terms yields E[YT 11A ] = E[Yt 11A ]. Hence Y is a G(n) -martingale.
¤
The following is a familiar result in the literature.
Proposition 2.4.38 Assume that the G(k) -intensity λk of σk exists for all k ∈ Θ. Then the loss
intensity is the sum of the intensities of σk , i.e.
λL =
n
X
λk , a.s..
(2.4.11)
k=1
Rt
Proof: Since (11{σk ≤t} − 0 λks ds, t ≥ 0) is a G(k) -martingale stopped at σk , it is a G(n) -martingale.
R t Pn
Pn
k
We have by taking the sum that (Lt − 0 k=1 λks ds, t ≥ 0) is a G(n) -martingale. So λL
t =
k=1 λt
for all t ≥ 0.
¤
2.4.8
Pseudo-stopping Times ♠
As we have mentioned, if (H) holds, the process (Ztτ , t ≥ 0) is a decreasing process. The converse
is not true. The decreasing property of Z τ is closely related with the definition of pseudo-stopping
times, a notion developed from D. Williams example (see Example 2.4.41 below).
Definition 2.4.39 A random time τ is a pseudo-stopping time if, for any bounded F-martingale m,
E(mτ ) = m0 .
Proposition 2.4.40 The random time τ is a pseudo-stopping time if and only if one of the following
equivalent properties holds:
• For any local F-martingale m, the process (mt∧τ , t ≥ 0) is a local Fτ -martingale,
• Aτ∞ = 1,
• µτt = 1, ∀t ≥ 0,
• The process Z τ is a decreasing F-predictable process.
50
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
Proof: The implication d ⇒ a is a consequence of Jeulin lemma. The implication a ⇒ b follows
from the properties of the compensator Aτ : indeed
Z ∞
E(mτ ) = E(
mu dAτu ) = E(m∞Aτ∞ ) = m0
0
implies that
Aτ∞
= 1 We refer to Nikeghbali and Yor [63].
¤
Example 2.4.41 The first example of a pseudo-stopping time was given by Williams [76]. Let B
be a Brownian motion and define the stopping time T1 = inf{t : Bt = 1} and the random time
ϑ = sup{t < T1 : Bt = 0}. Set
τ = sup{s < θ : Bs = MsB }
where MsB is the running maximum of the Brownian motion. Then, τ is a pseudo-stopping time.
Note that E(Bτ ) is not equal to 0; this illustrates the fact we cannot take any martingale in Definition
2.4.39. The martingale (Bt∧T1 , t ≥ 0) is neither bounded, nor uniformly integrable. In fact, since
the maximum MθB (=Bτ ) is uniformly distributed on [0, 1], one has E(Bτ ) = 1/2.
2.4.9
Honest Times
There exists an interesting class of random times τ such that F-martingales are Fτ -semi-martingales.
Definition 2.4.42 A random time τ is honest if τ is equal to an Ft -measurable random variable
on τ < t.
Example 2.4.43 (i) Let t fixed and gt = sup{s < t : Bs = 0} and set τ = g1 . Then, g1 = gt on
{g1 < t}.
(ii) Let X be an adapted continuous process and X ∗ = sup Xs , Xt∗ = sups≤t Xs . The random time
τ = inf{s : Xs = X ∗ }
is honest. Indeed, on the set {τ < t}, one has τ inf{s : Xs = Xs∗ }.
Proposition 2.4.44 ( Jeulin [46] ) A random time τ is honest if it is the end of a predictable set,
i.e., τ (ω) = sup{t : (t, ω) ∈ Γ}, where Γ is an F-predictable set.
In particular, an honest time is F∞ -measurable. If X is a transient diffusion, the last passage
time Λa (see Proposition 3.1.1) is honest.
Lemma 2.4.45 The process Y is Fτ -predictable if and only if there exist two F predictable processes
y and ye such that
Yt = yt 11t≤τ + yet 11t>τ .
Let X ∈ L1 . Then a càdlàg version of the martingale Xt = E [X|Ftτ ] is given by:
Xt =
1
1
E [ξ1t<τ |Ft ] 1t<τ +
E [ξ1t≥τ |Ft ] 1t≥τ .
τ
Zt
1 − Ztτ
Every Fτ optional process decomposes as
L11[0,τ [ + J11[τ ] + K11]τ,∞[ ,
where L and K are F-optional processes and where J is a F progressively measurable process.
2.4. PROGRESSIVE ENLARGEMENT
51
See Jeulin [46] for a proof.
Lemma 2.4.46 (Azéma) Let τ be an honest time which avoids F-stopping times. Then:
(i) Aτ∞ has an exponential law with parameter 1.
(ii) The measure dAτt is carried by {t : Ztτ = 1}
(iii) τ = sup{t : 1 − Zt = 1}
(iv)Aτ∞ = Aττ
Proposition 2.4.47 Let τ be honest. We assume that τ avoids stopping times. Then, if M is an
f such that
F-local martingale, there exists an Fτ -local martingale M
Z t∧τ
Z τ ∨t
dhM, µτ is
dhM, µτ is
f
Mt = Mt +
−
.
τ
τ
Zs−
1 − Zs−
0
τ
Proof: Let M be an F-martingale which belongs to H1 and Gs ∈ Fsτ . We define a G-predictable
process Y as Yu = 11Gs 11]s,t] (u). For s < t, one has, using the decomposition of G-predictable
processes:
µZ ∞
¶
E(11Gs (Mt − Ms )) = E
Yu dMu
µZ0 τ
¶
µZ ∞
¶
= E
yu dMu + E
yeu dMu .
Noting that
Rt
0
0
yeu dMu is a martingale yields E
E(11Gs (Mt − Ms ))
=
=
By integration by parts, setting Nt =
Rt
¡R ∞
0
µZ
τ
¢
yeu dMu = 0,
¶
(yu − yeu )dMu
µZ0 ∞
¶
Z v
τ
E
dAv
(yu − yeu )dMu .
τ
E
0
0
(yu − yeu )dMu , we get
µZ
τ
τ
E(11Gs (Mt − Ms )) = E(N∞ A∞ ) = E(N∞ µ∞ ) = E
0
¶
∞
τ
(yu − yeu )dhM, µ iu
.
0
Now, it remains to note that
µZ ∞ µ
¶¶
dhM, µτ iu
dhM, µτ iu
E
Yu
11{u≤τ } −
11{u>τ }
Zu−
1 − Zu−
0
µZ ∞ µ
¶¶
dhM, µτ iu
dhM, µτ iu
= E
yu
11{u≤τ } − yeu
11{u>τ }
Zu−
1 − Zu−
¶
µZ0 ∞
= E
(yu dhM, µτ iu − yeu dhM, µτ iu )
0
µZ ∞
¶
τ
= E
(yu − yeu ) dhM, µ iu
0
to conclude the result in the case M ∈ H1 . The general result follows by localization.
¤
Example 2.4.48 Let W be a Brownian motion, and τ = g1 , the last time when the BM reaches
0 before time 1, i.e., τ = sup{t ≤ 1 : Wt = 0}. Using the computation of Z g1 in the following
Subsection 3.3 and applying Proposition 2.4.47, we obtain the decomposition of the Brownian motion
in the enlarged filtration
¶
µ
Z t
sgn(Ws )
Φ0
|Ws |
f
√
√
Wt = Wt −
11[0,τ ] (s)
ds
1
−
Φ
1−s
1−s
0
¶
µ
Z t 0
|W |
Φ
√ s
ds
+11{τ ≤t} sgn(W1 )
1−s
τ Φ
52
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
where Φ(x) =
q R
2 x
π
0
exp(−u2 /2)du.
Comment 2.4.49 Note that the random time τ presented in Subsection 3.4 is not the end of a
predictable set, hence, is not honest. However, F-martingales are semi-martingales in the progressive
enlarged filtration: it suffices to note that F-martingales are semi-martingales in the filtration initially
enlarged with W1 .
Exercise 2.4.50 Prove that any F-stopping time is honest
C
Exercise 2.4.51 Prove that
Z
E(
t∧τ
0
dhM, µτ is
−
τ
Zs−
Z
τ ∨t
τ
dhM, µτ is
Ft )
τ
1 − Zs−
is an F-martingale.
2.4.10
C
An example (Nikeghbali and Yor ♠
This section is a part of [64]).
Definition 2.4.52 An F-local martingale N belongs to the class (C0 ), if it is strictly positive, with
no positive jumps, and limt→∞ Nt = 0.
Let N be a local martingale which belongs to the class (C0 ), with N0 = x. Let St = sups≤t Ns . We
consider:
g
=
sup {t ≥ 0 :
Nt = S∞ } = sup {t ≥ 0 :
St − Nt = 0} .
(2.4.12)
Lemma 2.4.53 [Doob’s maximal identity]
For any a > 0, we have:
P (S∞ > a) =
³x´
a
∧ 1.
x
is a uniform random variable on (0, 1).
S∞
For any stopping time ϑ, denoting S ϑ = supu≥ϑ Nu :
µ
¶
¡
¢
Nϑ
P S ϑ > a|Fϑ =
∧ 1,
a
(2.4.13)
In particular,
Hence
(2.4.14)
Nϑ
is also a uniform random variable on (0, 1), independent of Fϑ .
Sϑ
Proof: The first part was done in Exercise 1.2.3. The second part is an application of the first one
for the martingale Nϑ+t , t ≥ 0) and the filtration (Fϑ+t , t ≥ 0).
¤
Without loss of generality, we restrict our attention to the case x = 1.
Proposition 2.4.54 The submartingale Gt ≡ P (g > t | Ft ) admits a multiplicative decomposition
t
as Gt = N
St , t ≥ 0.
Proof: We have the following equalities between sets
{g > t}
= {∃ u > t : Su = Nu } = {∃ u > t : St ≤ Nu }
½
¾
=
sup Nu ≥ St = {S t ≥ Ss }.
u≥t
Hence, from (2.4.14), we get: P (g > t | Ft ) =
Nt
St .¤
2.4. PROGRESSIVE ENLARGEMENT
53
Proposition 2.4.55 Let N be a local martingale such that its supremum process S is continuous
(this is Rthe case if N is in the class C0 ). Let f be a locally bounded Borel function and define
x
F (x) = 0 dyf (y). Then, Xt := F (St ) − f (St ) (St − Nt ) is a local martingale and:
Z
t
f (Ss ) dNs + F (S0 ) ,
F (St ) − f (St ) (St − Nt ) =
(2.4.15)
0
Proof: In the case of a Brownian motion (i.e., N = B), this was done in Exercise 1.2.2. In the case
of continuous martingales, if F is C 2 ,
Z
F (St ) − f (St ) (St − Nt )
t
= F (St ) −
f (Ss ) dSs +
0
Z t
+
(Ss − Ns )f 0 (Ss )dSs
Z
t
f (Ss ) dNs
0
0
Rt
The last integral is null, because dS is carried by {S − N = 0} and 0 f (Ss ) dSs = F (St ) − F (S0 ).
For the general case, we refer the reader to [64].
¤
σ(S∞ )
Let us define the new filtration Ft
. Since g = inf {t : Nt = S∞ } ;, the random variable g is an
Fσ(S∞ ) -stopping time.
Consequently:
σ(S )
Ftg ⊂ Ft ∞ .
Proposition 2.4.56 For any Borel bounded or positive function f , we have:
E (f (S∞ ) |Ft ) =
µ
¶ Z Nt /St
µ ¶
Nt
Nt
f (St ) 1 −
+
dxf
St
x
0
Proof: In the following, U is a random variable, which follows the standard uniform law and which
is independent of Ft .
¡ ¡
¢
¢
E (f (S∞ ) |Ft ) = E f St ∨ S t |Ft
¡
¢
¡ ¡ ¢
¢
= E f (St ) 11{St ≥S t } |Ft + E f S t 11{St <S t } |Ft
¡
¢
¡ ¡ ¢
¢
= f (St ) P St ≥ S t |Ft + E f S t 11{St <S t } |Ft
µ
¶
µ µ ¶
¶
Nt
Nt
= f (St ) P U ≤
|Ft + E f
11{U < Nt } |Ft
St
St
U
µ
¶ Z Nt /St
µ ¶
Nt
Nt
= f (St ) 1 −
+
dxf
.
St
x
0
¤
We now show that E (f (S∞ ) |Ft ) is of the form 2.4.15 A straightforward change of variable in the
last integral also gives:
µ
¶
Z ∞
Nt
f (y)
E (f (S∞ ) |Ft ) = f (St ) 1 −
+ Nt
dy 2
St
y
St
µ
¶
Z ∞
Nt
f (y)
= f (St ) 1 −
+ Nt
dy 2
St
y
St
µZ ∞
¶
Z ∞
f (y)
f (y) f (St )
= St
dy 2 − (St − Nt )
dy 2 −
.
y
y
St
St
St
Hence,
E (f (S∞ ) |Ft ) = H (1) + H (St ) − h (St ) (St − Nt ) ,
54
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
with
Z
∞
H (x) = x
dy
x
and
Z
∞
h (x) = hf (x) ≡
x
f (y)
,
y2
f (y) f (x)
dy 2 −
=
y
x
Z
∞
x
dy
(f (y) − f (x)) .
y2
Moreover, from the Azéma-Yor type formula 2.4.15, we have the following representation of E (f (S∞ ) |Ft )
as a stochastic integral:
Z t
h (Ss ) dNs .
E (f (S∞ ) |Ft ) = E (f (S∞ )) +
0
´
³
Moreover, there exist two families of random measures (λt (dx))t≥0 and λ̇t (dx)
λt (dx)
λ̇t (dx)
t≥0
, with
µ
¶
Nt
dx
1−
δSt (dx) + Nt 11{x>St } 2
St
x
1
dx
= − δSt (dx) + 11{x>St } 2 ,
St
x
=
such that
Z
E (f (S∞ ) |Ft ) = λt (f )
=
λt (dx) f (x)
Z
λ̇t (f )
=
λ̇t (dx) f (x) .
Finally, we notice that there is an absolute continuity relationship between λt (dx) and λ̇t (dx);
more precisely,
λ̇t (dx) = λt (dx) ρ (x, t) ,
with
ρ (x, t) =
−1
1
11{St =x} +
11{St <x} .
St − Nt
Nt
Theorem ¡2.4.57 Let
¢ N be a local martingale in the class C0 (recall N0 = 1). Then, the pair of
filtrations F, Fσ(S∞ ) satisfies the (H0 ) hypothesis and every F local martingale X is an Fσ(S∞ )
semimartingale with canonical decomposition:
Z t
Z t
dhX, N is
dhX, N is
e
Xt = Xt +
11{g>s}
−
11{g≤s}
,
N
S
s−
∞ − Ns−
0
0
e is a Fσ(S∞ ) -local martingale.
where X
Proof: We can first assume that X is in H1 ; the general case follows by localization. Let Ks be
an Fs measurable set, and take t > s. Then, for any bounded test function f , λt (f ) is a bounded
martingale, hence in BM O, and we have:
E (11Ks f (S∞ ) (Xt − Xs ))
=
=
=
=
=
E (11Ks (λt (f ) Xt − λs (f ) Xs ))
E (11Ks (hλ (f ) , Xit − hλ (f ) , Xis ))
µ
µZ t
¶¶
E 11Ks
λ̇u (f ) dhX, N iu
s
µ
µZ t Z
¶¶
E 11Ks
λu (dx) ρ (x, u) f (x) dhX, N iu
s
µ
µZ t
¶¶
E 11Ks
dhX, N iu ρ (S∞ , u)
.
s
2.4. PROGRESSIVE ENLARGEMENT
But we also have:
ρ (S∞ , t) =
55
−1
1
11{St =S∞ } +
11{St <S∞ } .
St − Nt
Nt
It now suffices to use the fact that S is constant after g and g is the first time when S∞ = St , or in
other words:
11{S∞ >St } = 11{g>t} , and 11{S∞ =St } = 11{g≤t} .
Corollary 2.4.58 The pair of filtrations (F, Fg ) satisfies the (H0 ) hypothesis. Moreover, every Flocal martingale X decomposes as:
Z t
Z t
dhX, N is
dhX, N is
e
11{g>s}
−
11{g≤s}
,
Xt = Xt +
N
S∞ − Ns
s
0
0
e is an Fg -local martingale.
where X
Proof: Let X be an F-martingale which is in H1 ; the general case follows by localization.
Z t
Z t
dhX, N is
dhX, N is
e
Xt = Xt +
11{g>s}
−
11{g≤s}
,
N
S∞ − Ns
s
0
0
e denotes an Fσ(S∞ ) martingale. Thus, X,
e which is equal to:
where X
µZ t
¶
Z t
dhX, N is
dhX, N is
Xt −
11{g>s}
−
11{g≤s}
, ,
Ns
S∞ − Ns
0
0
σ(S∞ )
is Fg adapted (recall that Ftg ⊂ Ft
), and hence it is an Fg -martingale.
These results extend to honest times:
Theorem 2.4.59 Let L be an honest time. Then, under the conditions (CA), the supermartingale
Zt = P (L > t | Ft ) admits the following additive and multiplicative representations: there exists a
continuous and nonnegative local martingale N , with N0 = 1 and limt→∞ Nt = 0, such that:
Zt = P (L > t | Ft ) =
Zt
Nt
St
= Mt − At .
where these two representations are related as follows:
¶
µZ t
Z
1 t d < M >s
dMs
−
, St = exp (At ) ;
Nt = exp
Zs
2 0
Zs2
0
Z t
dNs
Mt = 1 +
= E (log S∞ | Ft ) , At = log St .
0 Ss
Remark 2.4.60 From the Doob-Meyer (additive) decomposition of Z, we have 1 − Zt = (1 − mt ) +
ln St . From Skorokhod’s reflection lemma (See [3M]) we deduce that
ln St = sup ms − 1
s≤t
Exercise 2.4.61 Prove that exp(λBt −
λ2
2 t)
belongs to (C0 ).
C
56
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
2.5
Initial Times
We assume that τ is a random time satisfying Jacod’s criteria (see Proposition 2.3.2)
Z ∞
P(τ > θ|Ft ) =
pt (u)ν(du)
θ
where ν is the law of τ . Moreover, in this section, we assume that p is strictly positive. We call
initial times the random variables that enjoy that property.
2.5.1
Progressive enlargement with initial times
As seen in Proposition 2.3.2, if τ s an initial time and X is an F-martingale, it is a F(τ ) = F∨σ(τ )-semi
martingale with decomposition
¯
Z t
dhX, p(θ)iu ¯¯
e
Xt = Xt +
pu− (θ) ¯θ=τ
0
e is an F ∨ σ(τ )-martingale
where X
Since X is a F(τ ) -semimartingale, and is G = Fτ adapted, it is a G-semimartingale. We are now
looking for its representation. By projection on G, it follows that
¯
µZ t
¶
dhX, p(θ)is ¯¯
e
Xt = E(Xt |Gt ) + E
|Gt
ps− (θ) ¯θ=τ
0
et |Gt ) is a G-martingale.
The process E(X
Proposition 2.5.1 Let X be a F-martingale. Then, it is a G-semi-martingale with decomposition
¯
Z t
Z t∧τ
dhX, p(θ)is ¯¯
dhX, Gis
b
+
Xt = Xt +
Gs−
ps− (θ) ¯θ=τ
t∧τ
0
b is a G-martingale.
where X
Proof: We give the proof in the case where F is a Brownian filtration. Then, for any θ, there exists
two predictable processes γ(θ) and x such that
dXt = xt dWt
dt pt (θ) = pt (θ)γt (θ)dWt
¯
Rt
s¯
and dhX, p(θ)it = xt pt (θ)γt (θ)dt. It follows that 0 dhX,p(θ)i
p − (θ) ¯
θ=τ
s
µZ
E
0
t
dhX, p(θ)is
|θ=τ |Gt
ps− (θ)
¶
µZ
=
0
xs γs (τ )ds. We write
¶
t∧τ
=E
Rt
xs γs (θ)ds|θ=τ |Gt
Z
t
+
0
t∧τ
dhX, p(θ)is
ps− (θ)
From Exercise 1.2.5,
µZ
E
¶
t∧τ
xs γs (τ )ds|Gt
0
Z
= mt +
t
xs E(11s<τ γs (τ )|Gs )ds
0
where m is a G-martingale. From key lemma
E(11s<τ γs (τ )|Gs )
1
1
E(11<τ γs (τ )|Fs ) = 11s<τ
E(
Gs
Gs
Z ∞
1
= 11s<τ
γs (u)ps (u)ν(du)
Gs s
Z
∞
γs (u)ps (u)ν(du)|Fs )
= 11s<τ
s
2.5. INITIAL TIMES
57
Now, we compute
dhX, Gi. In order to compute this bracket, one needs the martingale part of G
R∞
From Gt = t pt (u)ν(du), one deduces that
µZ ∞
¶
dGt =
γt (u)pt (u)ν(du) dWt − pt (t)ν(dt)
t
The last property can be obtained from Itô-Kunita-Wentcell (this can also be checked by hands:
indeed it is straightforward
to check that Gt + ∈t0 ps (S)ν(dS) is a martingale). It follows that
R∞
dhX, Git = xt ( t γt (u)pt (u)ν(du))dt, hence the result.
Proposition 2.5.2 A càdlàg process Y G is a G-martingale if (and only if ) there exist an F-adapted
càdlàg process y and an Ft ⊗ B(R+ )-optional process yt (.) such that
YtG = yt 11{τ >t} + yt (τ )11{τ ≤t}
and that
• E(YtG |Ft ) is an F-local martingale;
• For nay θ, (yt (θ)pt (θ), t ≥ θ) is an F-martingale.
Proof: We prove only the ’if’ part. Let Q defined as dQ|F (τ ) =
t
1
Lt dP|Gt
1
pt (τ ) dP|Ft(τ ) .
It follows that
(or, equivalently, dP|Gt = Lt dQ|Gt where EQ (pt (τ )|Gt )) . The process Y is a (P, G)
dQ|Gt =
martingale if and only if Y L is a (Q, G)- martingale where
Lt
= EQ (pt (τ )|Gt ) = 11t<τ `t + 11τ ≤t pt (τ )
1
(with `t = Q(t<τ
) EQ (11t<τ pt (τ )|Ft )) so that Yt Lt = 11t<τ yt `t + 11τ ≤t pt (τ )yt (τ ). Under Q, τ and F∞
are independent, and obviously F is immersed in G. From Proposition 2.4.21, the process Y L is a
(G, Q) martingale if pt (u)yt (u), t ≥ u is, for any u, an (F, Q)-martingale and if EQ (Yt Lt |Ft ) is an
(F, Q)-martingale. We now note that, from Bayes rule, EQ (Yt Lt |Ft ) = EQ (Lt |Ft )EP (Yt |Ft ) and that
Z ∞
EQ (Lt |Ft ) = EQ (pt (τ )|Ft ) =
pt (u)ν(du) = 1
0
therefore, EQ (Yt Lt |Ft ) = EP (Yt |Ft ). Note that
Z
EP (YtG |Ft ) = yt Gt +
and that
Z
yt (s)pt (s)ν(ds), t ≥ 0
0
Z
t
yt (s)pt (s)ds −
0
t
t
ys (s)ps (s)ν(ds), t ≥ 0
0
is a martingale (WHY?) so that the condition EP (Yt |Ft ) is a martingale is equivalent to
Z
yt Gt +
t
ys (s)ps (s)ν(ds), t ≥ 0
0
is a martingale.
¤
Theorem 2.5.3 Let ZtG = zt 11{τ >t} + zt (τ )11{τ ≤t} be a positive G-martingale with Z0G = 1 and let
Rt
ZtF = zt Gt + 0 zt (u)pt (u)ν(du) be its F projection.
Let Q be the probability measure defined on Gt by dQ = ZtG dP.
zt (θ)
Then, for t ≥ θ pQ
t (θ) = pt (θ) Z F , and:
t
zt
(i) the Q-conditional survival process is defined by GQ
t = Gt F
Zt
58
CHAPTER 2. ENLARGEMENT OF FILTRATIONS
(ii) the (F, Q)-intensity process is λF,Q
= λFt
t
zt (t)
, dt- a.s.;
zt−
(iii) LF,Q is the (F, Q)-local martingale
= LFt
LF,Q
t
zt
exp
ZtF
Z
t
0
(λF,Q
− λFs )ds
s
Proof: From change of probability
Q(τ > θ|Ft ) =
1
E(ZtG |Ft )
EP (ZtG 11τ >θ |Ft )
1
1
= F EP (zt (τ )11τ >θ |Ft ) = F
Zt
Zt
Z
∞
zt (u)pt (u)ν(du)
θ
The form of the survival process follows immediately from the fact that
Z ∞
zt Gt +
zt (u)pt (u)ν(du)
t
is a martingale (or by an immediate application of Bayes formula). The form of the intensity is
obvious. The form of L is obtained from Bayes formula.
¤
Exercise 2.5.4 Prove that the change of probability measure generated by the two processes
zt = (LFt )−1 ,
zt (θ) =
pθ (θ)
pt (θ)
provides a model where the immersion property holds true, and where the intensity processes does
not change
C
Exercise 2.5.5 Check that
Z
E(
0
is an F-martingale.
Check that that
t∧τ
dhX, Gis
−
Gs−
Z
E(
t
0
Z
t
t∧τ
¯
dhX, p(θ)is ¯¯
|Ft )
ps− (θ) ¯θ=τ
¯
dhX, p(θ)is ¯¯
|Gt )
ps− (θ) ¯θ=τ
is a G martingale.
C
Rt
Exercise 2.5.6 Let λ be a positive F-adapted process and Λt = 0 λs ds and Θ be a strictly positive
R∞
random variable such that there exists a family γt (u) such that P(Θ > θ|Ft ) = θ γt (u)du.
Let τ = inf{t > 0 : Λt ≥ Θ}.Prove that the density of τ is given by
pt (θ) = λθ γt (Λθ ) if t ≥ θ
and
pt (θ) = E[λθ γθ (Λθ )|Ft ] if t < θ.
Conversely, if we are given a density p, prove that it is possible to construct a threshold Θ such that
τ has p as density.
C
Application: Defaultable Zero-Coupon Bonds
A defaultable zero-coupon with maturity T associated with the default time τ is an asset which pays
one monetary unit at time T if (and only if) the default has not occurred before T . We assume that
P is the pricing measure. By definition, the risk-neutral price under P of the T -maturity defaultable
zero-coupon bond with zero recovery equals, for every t ∈ [0, T ],
¡
¢
EP NT e−ΛT | Ft
P(τ > T | Ft )
= 11{τ >t}
(2.5.1)
D(t, T ) := P(τ > T | Gt ) = 11{τ >t}
Gt
Gt
2.5. INITIAL TIMES
59
where N is the martingalepart in the multiplicative decomposition of G (see Proposition ). Using
(2.5.1), we obtain
D(t, T )
:=
P(τ > T | Gt ) = 11{τ >t}
1
EP (NT e−ΛT | Ft )
Nt e−Λt
However, using a change of probability, one can get rid of the martingale part N, assuming that there exists p such that
$$P(\tau>\theta\mid\mathcal F_t) = \int_\theta^\infty p_t(u)\,du.$$
Let $P^*$ be defined as
$$dP^*|_{\mathcal G_t} = Z^*_t\,dP|_{\mathcal G_t},$$
where $Z^*$ is the (P,G)-martingale defined as
$$Z^*_t = {\bf 1}_{\{t<\tau\}} + {\bf 1}_{\{t\ge\tau\}}\,\lambda_\tau e^{-\Lambda_\tau}\,\frac{N_t}{p_t(\tau)}.$$
Note that $dP^*|_{\mathcal F_t} = E_P(Z^*_t\mid\mathcal F_t)\,dP|_{\mathcal F_t} = N_t\,dP|_{\mathcal F_t}$ and that $P^*$ and P coincide on $\mathcal G_\tau$.
Indeed,
$$E_P(Z^*_t\mid\mathcal F_t) = G_t + \int_0^t\lambda_ue^{-\Lambda_u}\,\frac{N_t}{p_t(u)}\,p_t(u)\,du = N_te^{-\Lambda_t} + N_t\int_0^t\lambda_ue^{-\Lambda_u}\,du = N_te^{-\Lambda_t} + N_t\big(1-e^{-\Lambda_t}\big) = N_t.$$
Then, for t > θ,
$$P^*(\theta<\tau\mid\mathcal F_t) = \frac{1}{N_t}\,E_P\big(Z^*_t{\bf 1}_{\{\theta<\tau\}}\mid\mathcal F_t\big) = \frac{1}{N_t}\,E_P\Big({\bf 1}_{\{t<\tau\}} + {\bf 1}_{\{t\ge\tau>\theta\}}\,\lambda_\tau e^{-\Lambda_\tau}\,\frac{N_t}{p_t(\tau)}\,\Big|\,\mathcal F_t\Big)$$
$$= \frac{1}{N_t}\Big(N_te^{-\Lambda_t} + \int_\theta^t\lambda_ue^{-\Lambda_u}\,\frac{N_t}{p_t(u)}\,p_t(u)\,du\Big) = \frac{1}{N_t}\Big(N_te^{-\Lambda_t} + N_t\big(e^{-\Lambda_\theta}-e^{-\Lambda_t}\big)\Big) = e^{-\Lambda_\theta},$$
which proves that immersion holds true under $P^*$, and the intensity of τ is the same under P and $P^*$. It follows that
$$E_P\big(X{\bf 1}_{\{T<\tau\}}\mid\mathcal G_t\big) = E^*\big(X{\bf 1}_{\{T<\tau\}}\mid\mathcal G_t\big) = {\bf 1}_{\{t<\tau\}}\,\frac{1}{e^{-\Lambda_t}}\,E^*\big(e^{-\Lambda_T}X\mid\mathcal F_t\big).$$
Note that, while the intensity is the same under P and $P^*$, its dynamics under $P^*$ will involve a change of driving process, since P and $P^*$ do not coincide on $\mathcal F_\infty$.
Let us now study the pricing of a recovery. Let Z be an F-predictable bounded process. Then
$$E_P\big(Z_\tau{\bf 1}_{\{t<\tau\le T\}}\mid\mathcal G_t\big) = {\bf 1}_{\{t<\tau\}}\,\frac{1}{G_t}\,E_P\Big(-\int_t^TZ_u\,dG_u\,\Big|\,\mathcal F_t\Big) = {\bf 1}_{\{t<\tau\}}\,\frac{1}{G_t}\,E_P\Big(\int_t^TZ_uN_u\lambda_ue^{-\Lambda_u}\,du\,\Big|\,\mathcal F_t\Big)$$
$$= E^*\big(Z_\tau{\bf 1}_{\{t<\tau\le T\}}\mid\mathcal G_t\big) = {\bf 1}_{\{t<\tau\}}\,\frac{1}{e^{-\Lambda_t}}\,E^*\Big(\int_t^TZ_u\lambda_ue^{-\Lambda_u}\,du\,\Big|\,\mathcal F_t\Big).$$
The problem is more difficult for the pricing of a recovery paid at maturity, i.e., for $X\in\mathcal F_T$:
$$E_P\big(X{\bf 1}_{\{\tau<T\}}\mid\mathcal G_t\big) = E_P(X\mid\mathcal G_t) - E_P\big(X{\bf 1}_{\{\tau>T\}}\mid\mathcal G_t\big) = E_P(X\mid\mathcal G_t) - {\bf 1}_{\{\tau>t\}}\,\frac{1}{e^{-\Lambda_t}}\,E^*\big(Xe^{-\Lambda_T}\mid\mathcal F_t\big)$$
$$= E_P(X\mid\mathcal G_t) - {\bf 1}_{\{\tau>t\}}\,\frac{1}{N_te^{-\Lambda_t}}\,E_P\big(XN_Te^{-\Lambda_T}\mid\mathcal F_t\big).$$
Since immersion holds true under $P^*$,
$$E^*\big(X{\bf 1}_{\{\tau<T\}}\mid\mathcal G_t\big) = E^*(X\mid\mathcal G_t) - {\bf 1}_{\{\tau>t\}}\,\frac{1}{e^{-\Lambda_t}}\,E^*\big(Xe^{-\Lambda_T}\mid\mathcal F_t\big) = E^*(X\mid\mathcal F_t) - {\bf 1}_{\{\tau>t\}}\,\frac{1}{e^{-\Lambda_t}}\,E^*\big(Xe^{-\Lambda_T}\mid\mathcal F_t\big).$$
If the two quantities $E_P(X{\bf 1}_{\{\tau<T\}}\mid\mathcal G_t)$ and $E^*(X{\bf 1}_{\{\tau<T\}}\mid\mathcal G_t)$ were the same, this would imply $E_P(X\mid\mathcal G_t) = E^*(X\mid\mathcal F_t)$, which is impossible: it would lead to $E_P(X\mid\mathcal G_t) = E_P(X\mid\mathcal F_t)$, i.e., immersion would hold under P. Hence, the non-immersion property is important when evaluating a recovery paid at maturity ($P^*$ and P do not coincide on $\mathcal F_\infty$).
2.5.2 Survival Process ♠
Proposition 2.5.7 Let $(\Omega,\mathcal F,\mathbb F,P)$ be a given filtered probability space. Assume that N is a (P,F)-local martingale and Λ an F-adapted increasing process such that $0<N_te^{-\Lambda_t}<1$ for t > 0 and $N_0 = 1 = e^{-\Lambda_0}$. Then, there exists (on an extended probability space) a probability Q which satisfies $Q|_{\mathcal F_t} = P|_{\mathcal F_t}$ and a random time τ such that $Q(\tau>t\mid\mathcal F_t) = N_te^{-\Lambda_t}$.
Proof: We give the proof when F is a Brownian filtration and $\Lambda_t = \int_0^t\lambda_u\,du$. For the general case, see [45, 44]. We shall construct, using a change of probability, a conditional probability $G_t(u)$ which admits the given survival process (i.e., $G_t(t) = G_t = N_te^{-\Lambda_t}$). From the conditional probability, one can deduce a density process, hence one can construct a random time admitting $G_t(u)$ as conditional probability. We are looking for conditional probabilities with a particular form (the idea is linked with the results obtained in Subsection 2.3.5). Let us start with a model in which $P(\tau>t\mid\mathcal F_t) = e^{-\Lambda_t}$, where $\Lambda_t = \int_0^t\lambda_s\,ds$, and let N be an F-local martingale such that $0\le N_te^{-\Lambda_t}\le1$.
The goal is to prove that there exists a G-martingale L such that, setting $dQ = L\,dP$:
(i) $Q|_{\mathcal F_\infty} = P|_{\mathcal F_\infty}$;
(ii) $Q(\tau>t\mid\mathcal F_t) = N_te^{-\Lambda_t}$.
The G-adapted process L,
$$L_t = \ell_t{\bf 1}_{\{t<\tau\}} + \ell_t(\tau){\bf 1}_{\{\tau\le t\}},$$
is a martingale if, for any u, $(\ell_t(u),\,t\ge u)$ is a martingale and if $E(L_t\mid\mathcal F_t)$ is an F-martingale. Then, (i) is satisfied if
$$1 = E(L_t\mid\mathcal F_t) = \ell_te^{-\Lambda_t} + \int_0^t\ell_t(u)\lambda_ue^{-\Lambda_u}\,du,$$
and (ii) implies that $\ell = N$ and $\ell_t(t) = \ell_t$. We are now reduced to finding a family of martingales $(\ell_t(u),\,t\ge u)$ such that
$$\ell_u(u) = N_u,\qquad 1 = N_te^{-\Lambda_t} + \int_0^t\ell_t(u)\lambda_ue^{-\Lambda_u}\,du.$$
We restrict our attention to families ℓ of the form $\ell_t(u) = X_tY_u$, $t\ge u$, where X is a martingale such that
$$X_tY_t = N_t,\qquad 1 = N_te^{-\Lambda_t} + X_t\int_0^tY_u\lambda_ue^{-\Lambda_u}\,du.$$
It is easy to show that
$$Y_t = Y_0 + \int_0^te^{\Lambda_u}\,d\Big(\frac{1}{X_u}\Big).$$
In the Brownian filtration case, there exists a process ν such that $dN_t = \nu_tN_t\,dW_t$, and the positive martingale X is of the form $dX_t = x_tX_t\,dW_t$. Then, integration by parts implies
$$d(X_tY_t) = Y_t\,dX_t - e^{\Lambda_t}\,\frac{1}{X_t}\,dX_t = x_t\big(X_tY_t-e^{\Lambda_t}\big)\,dW_t = dN_t,$$
and we are led to choose
$$x_t = \frac{\nu_tG_t}{G_t-1}.$$

2.5.3 Generalized Cox construction
In this subsection, we generalize the Cox construction to a setting where the threshold Θ is no longer independent of $\mathcal F_\infty$. Instead, we make the assumption that Θ admits a non-trivial density with respect to F. Clearly, the H-hypothesis is no longer satisfied, and we shall explore the density in this case.
Let λ be a strictly positive F-adapted process, and $\Lambda_t = \int_0^t\lambda_s\,ds$. Let Θ be a strictly positive random variable whose conditional distribution w.r.t. F admits a density w.r.t. the Lebesgue measure, i.e., there exists a family of $\mathcal F_t\otimes\mathcal B(\mathbb R^+)$-measurable functions $\gamma_t(u)$ such that $P(\Theta>\theta\mid\mathcal F_t) = \int_\theta^\infty\gamma_t(u)\,du$. Let $\tau = \inf\{t>0:\Lambda_t\ge\Theta\}$. We say that a random time τ constructed in such a setting is given by a generalized Cox construction.
Proposition 2.5.8 Let τ be given by a generalized Cox construction. Then τ admits the density
$$p_t(\theta) = \lambda_\theta\,\gamma_t(\Lambda_\theta)\ \text{ if }t\ge\theta,\qquad p_t(\theta) = E\big[\lambda_\theta\,\gamma_\theta(\Lambda_\theta)\mid\mathcal F_t\big]\ \text{ if }t<\theta.\qquad(2.5.2)$$
Proof: By definition and by the fact that Λ is strictly increasing and absolutely continuous, we have, for $t\ge\theta$,
$$P(\tau>\theta\mid\mathcal F_t) = P(\Theta>\Lambda_\theta\mid\mathcal F_t) = \int_{\Lambda_\theta}^\infty\gamma_t(u)\,du = \int_\theta^\infty\gamma_t(\Lambda_u)\,d\Lambda_u = \int_\theta^\infty\gamma_t(\Lambda_u)\,\lambda_u\,du,$$
which implies $p_t(\theta) = \lambda_\theta\gamma_t(\Lambda_\theta)$. The martingale property of p gives the whole density.
¤
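The density formula (2.5.2) can be checked numerically. The following sketch is ours and degenerates the setting so that F carries no information: λ is deterministic and Θ is independent of F with the non-exponential density $\gamma(u) = ue^{-u}$ (so $\gamma_t(u) = \gamma(u)$, and only the density formula itself is being tested).

```python
import numpy as np

# Check of (2.5.2) in a degenerate setting: lam deterministic, Theta ~ Gamma(2,1)
# independent of F, so gamma_t(u) = gamma(u) = u*exp(-u).  Then
# tau = inf{t: lam*t >= Theta} = Theta/lam has density lam*(lam*t)*exp(-lam*t).
rng = np.random.default_rng(8)
lam, n = 0.5, 1_000_000

theta = rng.gamma(2.0, 1.0, size=n)      # threshold with density u*exp(-u)
tau = theta / lam                        # generalized Cox construction

counts, edges = np.histogram(tau, bins=100, range=(0.0, 25.0))
dens = counts / (n * np.diff(edges))     # histogram normalized over all samples
centers = 0.5 * (edges[:-1] + edges[1:])
target = lam * (lam * centers) * np.exp(-lam * centers)
print(np.abs(dens - target).max())       # small, up to Monte Carlo error
```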
Conversely, if we are given a density p, and hence an associated process $\Lambda_t = \int_0^t\lambda_s\,ds$ with $\lambda_s = \frac{p_s(s)}{G_s}$, then it is possible to construct a random time τ by a generalized Cox construction, that is, to find a threshold Θ such that τ has p as density. We denote by $\Lambda^{-1}$ the inverse of the strictly increasing process Λ.
Proposition 2.5.9 Let τ be a random time with density $(p_t(\theta),\,t\ge0)$, $\theta\ge0$. We let $\Lambda_t = \int_0^t\frac{p_s(s)}{G_s}\,ds$ and $\Theta = \Lambda_\tau$. Then Θ has a density γ with respect to F given by
$$\gamma_t(\theta) = E\Big[p_{t\vee\Lambda^{-1}_\theta}\big(\Lambda^{-1}_\theta\big)\,\frac{1}{\lambda_{\Lambda^{-1}_\theta}}\,\Big|\,\mathcal F_t\Big].\qquad(2.5.3)$$
Obviously, $\tau = \inf\{t>0:\Lambda_t\ge\Theta\}$. In particular, if the H-hypothesis holds, then Θ follows the exponential distribution with parameter 1 and is independent of F.
Proof: We set $\Theta = \Lambda_\tau$ and compute the density of Θ w.r.t. F:
$$P(\Theta>\theta\mid\mathcal F_t) = P(\Lambda_\tau>\theta\mid\mathcal F_t) = P(\tau>t,\,\Lambda_\tau>\theta\mid\mathcal F_t) + P(\tau\le t,\,\Lambda_\tau>\theta\mid\mathcal F_t)$$
$$= E\Big[-\int_t^\infty{\bf 1}_{\{\Lambda_u>\theta\}}\,dG_u\,\Big|\,\mathcal F_t\Big] + \int_0^t{\bf 1}_{\{\Lambda_u>\theta\}}\,p_t(u)\,du = E\Big[\int_t^\infty{\bf 1}_{\{\Lambda_u>\theta\}}\,p_u(u)\,du\,\Big|\,\mathcal F_t\Big] + \int_0^t{\bf 1}_{\{\Lambda_u>\theta\}}\,p_t(u)\,du,\qquad(2.5.4)$$
where the last equality comes from the fact that $(G_t+\int_0^tp_u(u)\,du,\,t\ge0)$ is an F-martingale. Note that, since the process Λ is continuous and strictly increasing, so is its inverse. Hence
$$E\Big[\int_\theta^\infty p_{\Lambda^{-1}_s\vee t}\big(\Lambda^{-1}_s\big)\,\frac{1}{\lambda_{\Lambda^{-1}_s}}\,ds\,\Big|\,\mathcal F_t\Big] = E\Big[\int_{\Lambda^{-1}_\theta}^\infty p_{s\vee t}(s)\,\frac{1}{\lambda_s}\,d\Lambda_s\,\Big|\,\mathcal F_t\Big] = E\Big[\int_0^\infty{\bf 1}_{\{s>\Lambda^{-1}_\theta\}}\,p_{s\vee t}(s)\,ds\,\Big|\,\mathcal F_t\Big] = E\Big[\int_0^\infty{\bf 1}_{\{\Lambda_s>\theta\}}\,p_{s\vee t}(s)\,ds\,\Big|\,\mathcal F_t\Big],$$
which equals $P(\Theta>\theta\mid\mathcal F_t)$ by comparing with (2.5.4). So the density of Θ w.r.t. F is given by (2.5.3).
In the particular case where the H-hypothesis holds,
$$p_t(u) = p_u(u) = \lambda_uG_u = \lambda_ue^{-\Lambda_u}\quad\text{when }t\ge u,$$
and (2.5.4) equals
$$P(\Theta>\theta\mid\mathcal F_t) = E\Big[\int_0^\infty{\bf 1}_{\{\Lambda_u>\theta\}}\,p_u(u)\,du\,\Big|\,\mathcal F_t\Big] = E\Big[\int_0^\infty{\bf 1}_{\{\Lambda_u>\theta\}}\,\lambda_ue^{-\Lambda_u}\,du\,\Big|\,\mathcal F_t\Big] = E\Big[\int_0^\infty{\bf 1}_{\{x>\theta\}}\,e^{-x}\,dx\,\Big|\,\mathcal F_t\Big] = e^{-\theta},$$
which completes the proof. ¤

2.5.4 Carthagese Filtrations
In Carthage, one can find three levels of different civilisations. This has led us to make use of three levels of filtrations, and a change of measure. We consider the three filtrations
$$\mathbb F\subset\mathbb G = \mathbb F\vee\mathbb H\subset\mathbb F^{(\tau)} = \mathbb F\vee\sigma(\tau),$$
and we assume that the F-conditional law of τ is equivalent to the law of τ. This framework will allow us to give new (and simple) proofs of some results on enlargement of filtrations. We refer the reader to [14] for details.
Semi-martingale properties
We denote by Q the probability measure defined on $\mathcal F^{(\tau)}_t$ as
$$dQ|_{\mathcal F^{(\tau)}_t} = \frac{1}{p_t(\tau)}\,dP|_{\mathcal F^{(\tau)}_t} = L_t\,dP|_{\mathcal F^{(\tau)}_t},$$
where $L_t = \frac{1}{p_t(\tau)}$ is a $(P,\mathbb F^{(\tau)})$-martingale. Hence
$$dQ|_{\mathcal G_t} = \ell_t\,dP|_{\mathcal G_t},$$
where ℓ is the (P,G)-martingale
$$\ell_t = E_P(L_t\mid\mathcal G_t) = {\bf 1}_{\{t<\tau\}}\,\frac{1}{G_t}\int_t^\infty\nu(du) + {\bf 1}_{\{\tau\le t\}}\,\frac{1}{p_t(\tau)},$$
where $G_t = P(\tau>t\mid\mathcal F_t) = \int_t^\infty p_t(u)\,\nu(du)$, and, obviously,
$$dP|_{\mathcal F^{(\tau)}_t} = p_t(\tau)\,dQ|_{\mathcal F^{(\tau)}_t} = \Lambda_t\,dQ|_{\mathcal F^{(\tau)}_t},$$
where $\Lambda = 1/L$ is a $(Q,\mathbb F^{(\tau)})$-martingale, and
$$dP|_{\mathcal G_t} = E_Q(p_t(\tau)\mid\mathcal G_t)\,dQ|_{\mathcal G_t} = \lambda_t\,dQ|_{\mathcal G_t}$$
with
$$\lambda_t = E_Q(p_t(\tau)\mid\mathcal G_t) = {\bf 1}_{\{t<\tau\}}\,\frac{1}{G(t)}\int_t^\infty p_t(u)\,\nu(du) + {\bf 1}_{\{\tau\le t\}}\,p_t(\tau) = {\bf 1}_{\{t<\tau\}}\,\frac{G_t}{G(t)} + {\bf 1}_{\{\tau\le t\}}\,p_t(\tau) = \frac{1}{\ell_t}$$
a (Q,G)-martingale, where
$$G(t) = Q(\tau>t\mid\mathcal F_t) = Q(\tau>t) = P(\tau>t) = \int_t^\infty\nu(du).$$
Assume now that ν is absolutely continuous with respect to the Lebesgue measure, ν(du) = ν(u)du (with a slight abuse of notation). We know, from the general study, that $M^P_t := H_t-\int_0^{t\wedge\tau}\lambda^P_s\,ds$ with $\lambda^P_t = \frac{p_t(t)\nu(t)}{G_t}$ is a (P,G)-martingale, and that $M^Q_t := H_t-\int_0^{t\wedge\tau}\lambda^Q_s\,ds$ with $\lambda^Q_t = \frac{\nu(t)}{G(t)}$ is a (Q,G)-martingale.
The predictable increasing process A such that $H_t-A_{t\wedge\tau}$ is a $(Q,\mathbb F^{(\tau)})$-martingale is H itself (indeed, τ is $\mathbb F^{(\tau)}$-predictable).
Let now m be a (P,F)-martingale. Then it is a (Q,F)-martingale, hence a (Q,G)- and a $(Q,\mathbb F^{(\tau)})$-martingale. It follows that mL is a $(P,\mathbb F^{(\tau)})$-martingale and mℓ a (P,G)-martingale. Writing m = (mL)/L and m = (mℓ)/ℓ and making use of the integration by parts formula leads to the decomposition of m as an $\mathbb F^{(\tau)}$- and as a G-semi-martingale.
Representation theorem
Using the same methodology, one can check that, if there exists a martingale µ (possibly multidimensional) enjoying the predictable representation property in (P,F), then the pair $(\widehat\mu,M)$ possesses the predictable representation property in (P,G), where $\widehat\mu$ is the martingale part of the G-semi-martingale µ.
Chapter 3
Last Passage Times ♠
We now present the study of the law (and the conditional law) of some last passage times for diffusion processes. In this section, W is a standard Brownian motion and F is its natural filtration. These random times have been studied in Jeanblanc and Rutkowski [42] as theoretical examples of default times, in Imkeller [37] as examples of insider private information and, from a purely mathematical point of view, in Pitman and Yor [66] and Salminen [69].
We now show that, in a diffusion setup, the Doob-Meyer decomposition of the Azéma supermartingale may be computed explicitly for some random times τ.
3.1 Last Passage Time of a Transient Diffusion
Proposition 3.1.1 Let X be a transient homogeneous diffusion such that $X_t\to+\infty$ when $t\to\infty$, and s a scale function such that $s(+\infty) = 0$ (hence, s(x) < 0 for $x\in\mathbb R$) and $\Lambda_y = \sup\{t:X_t = y\}$ the last time that X hits y. Then,
$$P_x(\Lambda_y>t\mid\mathcal F_t) = \frac{s(X_t)}{s(y)}\wedge1.$$
Proof: We follow Pitman and Yor [66] and Yor [80], p. 48, and use the fact that, under the hypotheses of the proposition, one can choose a scale function such that s(x) < 0 and $s(+\infty) = 0$ (see Sharpe [70]). Observe that
$$P_x(\Lambda_y>t\mid\mathcal F_t) = P_x\Big(\inf_{u\ge t}X_u<y\,\Big|\,\mathcal F_t\Big) = P_x\Big(\sup_{u\ge t}(-s(X_u))>-s(y)\,\Big|\,\mathcal F_t\Big) = P_{X_t}\Big(\sup_{u\ge0}(-s(X_u))>-s(y)\Big) = \frac{s(X_t)}{s(y)}\wedge1,$$
where we have used the Markov property of X and the fact that, if M is a continuous local martingale with $M_0 = 1$, $M_t\ge0$, and $\lim_{t\to\infty}M_t = 0$, then
$$\sup_{t\ge0}M_t\stackrel{\mathrm{law}}{=}\frac{1}{U},$$
where U has a uniform law on [0,1] (see Exercise 1.2.3).
¤
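The key ingredient of the proof, Doob's maximal identity $\sup_tM_t\stackrel{\mathrm{law}}{=}1/U$, is easy to test numerically. A minimal sketch (our own, not from the notes), with the exponential martingale $M_t = \exp(W_t-t/2)$ on a truncated, discretized horizon (which slightly biases the supremum downwards):

```python
import numpy as np

# Doob's maximal identity: for M_t = exp(W_t - t/2) (M_0 = 1, M_t -> 0),
# sup_t M_t = 1/U in law, i.e. P(sup M >= x) = 1/x for x >= 1.
rng = np.random.default_rng(1)
n_paths, n_steps, T = 10_000, 2_000, 40.0
dt = T / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
t = dt * np.arange(1, n_steps + 1)
M = np.exp(W - t / 2)
sup_M = np.maximum(1.0, M.max(axis=1))   # include the initial value M_0 = 1

for x in (1.5, 2.0, 4.0):
    print(x, (sup_M >= x).mean(), 1 / x)
```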
Lemma 3.1.2 The $\mathbb F^X$-predictable compensator A associated with the random time $\Lambda_y$ is the process A defined as $A_t = -\frac{1}{2s(y)}\,L^{s(y)}_t(Y)$, where $L^{s(y)}(Y)$ is the local time process at level s(y) of the continuous martingale $Y = s(X)$.
Proof: From $x\wedge y = x-(x-y)^+$, Proposition 3.1.1 and Tanaka's formula, it follows that
$$\frac{s(X_t)}{s(y)}\wedge1 = M_t + \frac{1}{2s(y)}\,L^{s(y)}_t(Y) = M_t + \frac{1}{s(y)}\,\ell^y_t(X),$$
where M is a martingale. The required result is then easily obtained.
[To do: state Tanaka's formula. Here and below, $p^{(m)}_t(x,y)$ denotes the transition density of X with respect to its speed measure m; see also Exercise 3.6.4.]
We deduce the law of the last passage time:
$$P_x(\Lambda_y>t) = \Big(\frac{s(x)}{s(y)}\wedge1\Big) + \frac{1}{s(y)}\,E_x\big(\ell^y_t(X)\big) = \Big(\frac{s(x)}{s(y)}\wedge1\Big) + \frac{1}{s(y)}\int_0^tdu\;p^{(m)}_u(x,y).$$
Hence, for x < y,
$$P_x(\Lambda_y\in dt) = -\frac{dt}{s(y)}\,p^{(m)}_t(x,y) = -\frac{\sigma^2(y)s'(y)}{2s(y)}\,p_t(x,y)\,dt.\qquad(3.1.1)$$
For x > y, we have to add a mass at point 0 equal to
$$1-\Big(\frac{s(x)}{s(y)}\wedge1\Big) = 1-\frac{s(x)}{s(y)} = 1-P_x(T_y<\infty).$$
Example 3.1.3 Last Passage Time for a Transient Bessel Process: For a Bessel process of dimension δ > 2 and index ν (see [3M] Chapter 6), starting from 0,
$$P^\delta_0(\Lambda_a<t) = P^\delta_0\Big(\inf_{u\ge t}R_u>a\Big) = P^\delta_0\Big(\sup_{u\ge t}R_u^{-2\nu}<a^{-2\nu}\Big) = P^\delta_0\Big(\frac{R_t^{-2\nu}}{U}<a^{-2\nu}\Big) = P^\delta_0\big(a^{2\nu}<UR_t^{2\nu}\big) = P^\delta_0\Big(\frac{a^2}{R_1^2U^{1/\nu}}<t\Big).$$
Thus, the r.v. $\Lambda_a\stackrel{\mathrm{law}}{=}\frac{a^2}{R_1^2U^{1/\nu}}$ is distributed as $\frac{a^2}{2\gamma(\nu+1)\beta_{\nu,1}}\stackrel{\mathrm{law}}{=}\frac{a^2}{2\gamma(\nu)}$, where γ(ν) is a gamma variable with parameter ν:
$$P(\gamma(\nu)\in dt) = {\bf 1}_{\{t\ge0\}}\,\frac{t^{\nu-1}e^{-t}}{\Gamma(\nu)}\,dt.$$
Hence,
$$P^\delta_0(\Lambda_a\in dt) = {\bf 1}_{\{t\ge0\}}\,\frac{1}{t\,\Gamma(\nu)}\Big(\frac{a^2}{2t}\Big)^\nu e^{-a^2/(2t)}\,dt.$$
We might also find this result directly from the general formula (3.1.1).
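A quick numerical sanity check of this example (our own sketch): sample $\Lambda_a = a^2/(2\gamma(\nu))$ from a gamma variable and compare the histogram with the stated density; here ν = 1/2, i.e. dimension δ = 3.

```python
import numpy as np
from math import gamma as Gamma

# Example 3.1.3: Lambda_a = a^2/(2*gamma(nu)) in law.  Sample the gamma variable
# and compare with the stated density of Lambda_a.
rng = np.random.default_rng(9)
nu, a, n = 0.5, 1.0, 1_000_000

samples = a**2 / (2 * rng.gamma(nu, 1.0, size=n))

counts, edges = np.histogram(samples, bins=100, range=(0.05, 20.0))
dens = counts / (n * np.diff(edges))     # histogram normalized over all samples
centers = 0.5 * (edges[:-1] + edges[1:])
target = (a**2 / (2 * centers))**nu * np.exp(-a**2 / (2 * centers)) / (centers * Gamma(nu))
print(np.abs(dens - target).max())       # small, up to Monte Carlo/binning error
```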
Proposition 3.1.4 For H a positive predictable process,
$$E_x\big(H_{\Lambda_y}\mid\Lambda_y = t\big) = E_x(H_t\mid X_t = y),$$
and, for y > x,
$$E_x\big(H_{\Lambda_y}\big) = \int_0^\infty P_x(\Lambda_y\in dt)\,E_x(H_t\mid X_t = y).$$
In the case x > y,
$$E_x\big(H_{\Lambda_y}\big) = H_0\Big(1-\frac{s(x)}{s(y)}\Big) + \int_0^\infty P_x(\Lambda_y\in dt)\,E_x(H_t\mid X_t = y).\qquad(3.1.2)$$
Proof: We have shown in the previous Proposition 3.1.1 that
$$P_x(\Lambda_y>t\mid\mathcal F_t) = \frac{s(X_t)}{s(y)}\wedge1.$$
From the Itô-Tanaka formula,
$$\frac{s(X_t)}{s(y)}\wedge1 = \frac{s(x)}{s(y)}\wedge1 + \frac{1}{s(y)}\int_0^t{\bf 1}_{\{X_u>y\}}\,ds(X_u) + \frac{1}{2s(y)}\,L^{s(y)}_t(s(X)).$$
It follows, using Lemma 2.4.1, that
$$E_x\big(H_{\Lambda_y}\big) = -\frac{1}{2s(y)}\,E_x\Big(\int_0^\infty H_u\,d_uL^{s(y)}_u(s(X))\Big) = -\frac{1}{2s(y)}\,E_x\Big(\int_0^\infty E_x(H_u\mid X_u = y)\,d_uL^{s(y)}_u(s(X))\Big).$$
Therefore, replacing $H_u$ by $H_ug(u)$, we get
$$E_x\big(H_{\Lambda_y}g(\Lambda_y)\big) = -\frac{1}{2s(y)}\,E_x\Big(\int_0^\infty g(u)\,E_x(H_u\mid X_u = y)\,d_uL^{s(y)}_u(s(X))\Big).\qquad(3.1.3)$$
Consequently, from (3.1.3), we obtain
$$P_x(\Lambda_y\in du) = -\frac{1}{2s(y)}\,d_uE_x\big(L^{s(y)}_u(s(X))\big)\quad\text{and}\quad E_x\big(H_{\Lambda_y}\mid\Lambda_y = t\big) = E_x(H_t\mid X_t = y).$$
¤
Exercise 3.1.5 Let X be a drifted Brownian motion with positive drift ν and $\Lambda^{(\nu)}_y$ its last passage time at level y. Prove that
$$P_x\big(\Lambda^{(\nu)}_y\in dt\big) = \frac{\nu}{\sqrt{2\pi t}}\,\exp\Big(-\frac{1}{2t}\big(x-y+\nu t\big)^2\Big)\,dt,$$
and that
$$P_x\big(\Lambda^{(\nu)}_y = 0\big) = \begin{cases}1-e^{-2\nu(x-y)} & \text{for }x>y,\\ 0 & \text{for }x<y.\end{cases}$$
Prove, using time inversion, that, for x = 0,
$$\Lambda^{(\nu)}_y\stackrel{\mathrm{law}}{=}\frac{1}{T^{(y)}_\nu},$$
where $T^{(b)}_a = \inf\{t:B_t+bt = a\}$. See Madan et al. [59].
C
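The time-inversion identity can be verified numerically: $T^{(y)}_\nu$ is an inverse Gaussian variable (mean ν/y, shape ν²), which numpy samples directly. A minimal sketch (ours), comparing the histogram of $1/T^{(y)}_\nu$ with the stated density for x = 0:

```python
import numpy as np

# Time inversion: Lambda_y^{(nu)} = 1/T_nu^{(y)} in law (x = 0), where
# T_a^{(b)} = inf{t: B_t + b*t = a} is inverse Gaussian IG(mean=a/b, shape=a^2),
# sampled by rng.wald.
rng = np.random.default_rng(2)
nu, y, n = 1.0, 2.0, 1_000_000

T = rng.wald(nu / y, nu**2, size=n)    # samples of T_nu^{(y)}
lam_samples = 1.0 / T                  # candidate samples of Lambda_y^{(nu)}

counts, edges = np.histogram(lam_samples, bins=100, range=(0.05, 10.0))
dens = counts / (n * np.diff(edges))   # histogram normalized over all samples
centers = 0.5 * (edges[:-1] + edges[1:])
target = nu / np.sqrt(2 * np.pi * centers) * np.exp(-(y - nu * centers)**2 / (2 * centers))
print(np.abs(dens - target).max())     # small, up to Monte Carlo error
```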
3.2 Last Passage Time Before Hitting a Level
Let $X_t = x+\sigma W_t$, where the initial value x is positive and σ is a positive constant. We consider, for 0 < a < x, the last passage time at the level a before hitting the level 0, given as $g^a_{T_0}(X) = \sup\{t\le T_0:X_t = a\}$, where
$$T_0 = T_0(X) = \inf\{t\ge0:X_t = 0\}.$$
(In a financial setting, $T_0$ can be interpreted as the time of bankruptcy.) Then, setting $\alpha = (a-x)/\sigma$, $T_{-x/\sigma}(W) = \inf\{t:W_t = -x/\sigma\}$ and $d^\alpha_t(W) = \inf\{s\ge t:W_s = \alpha\}$, we have
$$P_x\big(g^a_{T_0}(X)\le t\mid\mathcal F_t\big) = P\big(d^\alpha_t(W)>T_{-x/\sigma}(W)\mid\mathcal F_t\big)$$
on the set $\{t<T_{-x/\sigma}(W)\}$. It is easy to prove that
$$P\big(d^\alpha_t(W)<T_{-x/\sigma}(W)\mid\mathcal F_t\big) = \Psi\big(W_{t\wedge T_{-x/\sigma}(W)},\alpha,-x/\sigma\big),$$
where the function $\Psi(\cdot,a,b):\mathbb R\to\mathbb R$ equals, for a > b,
$$\Psi(y,a,b) = P_y\big(T_a(W)>T_b(W)\big) = \begin{cases}(a-y)/(a-b) & \text{for }b<y<a,\\ 0 & \text{for }a<y,\\ 1 & \text{for }y<b.\end{cases}$$
(See Proposition ?? for the computation of Ψ.) Consequently, on the set $\{T_0(X)>t\}$, we have
$$P_x\big(g^a_{T_0}(X)\le t\mid\mathcal F_t\big) = \frac{(\alpha-W_{t\wedge T_0})^+}{a/\sigma} = \frac{(\alpha-W_t)^+}{a/\sigma} = \frac{(a-X_t)^+}{a}.\qquad(3.2.1)$$
As a consequence, applying Tanaka's formula, we obtain the following result.
Lemma 3.2.1 Let $X_t = x+\sigma W_t$, where σ > 0. The F-predictable compensator associated with the random time $g^a_{T_0}(X)$ is the process A defined as $A_t = \frac{\sigma}{2a}\,L^\alpha_{t\wedge T_{-x/\sigma}(W)}(W)$, where $L^\alpha(W)$ is the local time of the Brownian motion W at level $\alpha = (a-x)/\sigma$.
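Formula (3.2.1) has a simple gambler's-ruin reading: on $\{t<T_0,\,X_t = z\}$, the event $\{g^a_{T_0}(X)\le t\}$ means that X, restarted from z, hits 0 before returning to a, which has probability $(a-z)^+/a$. A minimal simulation sketch (ours):

```python
import numpy as np

# Gambler's-ruin reading of (3.2.1): started at z in (0,a), the probability of
# hitting 0 before a is (a - z)/a.  Vectorized Euler simulation of the BM.
rng = np.random.default_rng(3)
a, z, dt, n_paths = 1.0, 0.4, 1e-4, 20_000

x = np.full(n_paths, z)
active = np.ones(n_paths, dtype=bool)
while active.any():
    x[active] += rng.normal(0.0, np.sqrt(dt), size=active.sum())
    active &= (x > 0.0) & (x < a)     # freeze paths that have exited (0, a)

print((x <= 0.0).mean(), (a - z) / a)  # both approx 0.6
```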
3.3 Last Passage Time Before Maturity
In this subsection, we study the last passage time at level a of a diffusion process X before the fixed horizon (maturity) T. We start with the case where X = W is a Brownian motion starting from 0 and where the level a is null:
$$g_T = \sup\{t\le T:W_t = 0\}.$$
Lemma 3.3.1 The F-predictable compensator associated with the random time $g_T$ equals
$$A_t = \sqrt{\frac{2}{\pi}}\int_0^{t\wedge T}\frac{dL_s}{\sqrt{T-s}},$$
where L is the local time at level 0 of the Brownian motion W.
Proof: It suffices to give the proof for T = 1, and we work with t < 1. Let G be a standard Gaussian variable. Then
$$P\Big(\frac{a^2}{G^2}>1-t\Big) = \Phi\Big(\frac{|a|}{\sqrt{1-t}}\Big),$$
where $\Phi(x) = \sqrt{\frac{2}{\pi}}\int_0^x\exp\big(-\frac{u^2}{2}\big)\,du$. For t < 1, the set $\{g_1\le t\}$ is equal to $\{d_t>1\}$. It follows (see [3M]) that
$$P(g_1\le t\mid\mathcal F_t) = \Phi\Big(\frac{|W_t|}{\sqrt{1-t}}\Big).$$
Then, the Itô-Tanaka formula combined with the identity $x\Phi'(x)+\Phi''(x) = 0$ leads to
$$P(g_1\le t\mid\mathcal F_t) = \int_0^t\Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,d\Big(\frac{|W_s|}{\sqrt{1-s}}\Big) + \frac12\int_0^t\frac{ds}{1-s}\,\Phi''\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)$$
$$= \int_0^t\Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,\frac{\mathrm{sgn}(W_s)}{\sqrt{1-s}}\,dW_s + \int_0^t\Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,\frac{dL_s}{\sqrt{1-s}} = \int_0^t\Phi'\Big(\frac{|W_s|}{\sqrt{1-s}}\Big)\,\frac{\mathrm{sgn}(W_s)}{\sqrt{1-s}}\,dW_s + \sqrt{\frac{2}{\pi}}\int_0^t\frac{dL_s}{\sqrt{1-s}}.$$
It follows that the F-predictable compensator associated with $g_1$ is
$$A_t = \sqrt{\frac{2}{\pi}}\int_0^t\frac{dL_s}{\sqrt{1-s}},\qquad(t<1).$$
¤
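Taking expectations in $P(g_1\le t\mid\mathcal F_t) = \Phi(|W_t|/\sqrt{1-t})$ recovers the arcsine law $P(g_1\le t) = \frac{2}{\pi}\arcsin\sqrt t$, which the following minimal sketch (ours, on a discrete grid, so zeros between grid points are missed) checks by simulation:

```python
import numpy as np

# Arcsine law check: g_1 = last zero of W before 1 (grid approximation via
# sign changes); P(g_1 <= t) should equal (2/pi)*arcsin(sqrt(t)).
rng = np.random.default_rng(4)
n_paths, n_steps = 10_000, 1_000
dt = 1.0 / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

change = np.signbit(W[:, 1:]) != np.signbit(W[:, :-1])
idx = np.arange(1, n_steps)
g1 = np.where(change, idx * dt, 0.0).max(axis=1)   # last sign change (0 if none)

for t in (0.25, 0.5, 0.75):
    print(t, (g1 <= t).mean(), 2 / np.pi * np.arcsin(np.sqrt(t)))
```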
These results can be extended to the last time before T when the Brownian motion reaches the level α, i.e., $g^\alpha_T = \sup\{t\le T:W_t = \alpha\}$, where we set $\sup(\emptyset) = T$. The predictable compensator associated with $g^\alpha_T$ is
$$A_t = \sqrt{\frac{2}{\pi}}\int_0^{t\wedge T}\frac{dL^\alpha_s}{\sqrt{T-s}},$$
where $L^\alpha$ is the local time of W at level α.
We now study the case where $X_t = x+\mu t+\sigma W_t$, with constant coefficients μ and σ > 0. Let
$$g^a_1(X) = \sup\{t\le1:X_t = a\} = \sup\{t\le1:\nu t+W_t = \alpha\},$$
where $\nu = \mu/\sigma$ and $\alpha = (a-x)/\sigma$. Setting
$$V_t = \alpha-\nu t-W_t = (a-X_t)/\sigma,$$
we obtain, using standard computations (see [3M]),
$$P\big(g^a_1(X)\le t\mid\mathcal F_t\big) = \big(1-e^{\nu V_t}H(\nu,|V_t|,1-t)\big)\,{\bf 1}_{\{T_0(V)\le t\}},$$
where
$$H(\nu,y,s) = e^{-\nu y}\,\mathcal N\Big(\frac{\nu s-y}{\sqrt s}\Big) + e^{\nu y}\,\mathcal N\Big(\frac{-\nu s-y}{\sqrt s}\Big).$$
Using Itô's lemma, we obtain the decomposition of $1-e^{\nu V_t}H(\nu,|V_t|,1-t)$ as a semi-martingale $M_t+C_t$.
We note that C increases only on the set $\{t:X_t = a\}$. Indeed, setting $g = g^a_1(X)$, for any predictable process H, one has
$$E(H_g) = E\Big(\int_0^\infty H_s\,dC_s\Big);$$
hence, since $X_g = a$,
$$0 = E\big({\bf 1}_{\{X_g\ne a\}}\big) = E\Big(\int_0^\infty{\bf 1}_{\{X_s\ne a\}}\,dC_s\Big).$$
Therefore, $dC_t = \kappa_t\,dL^a_t(X)$ and, since $L^a(X)$ increases only at points such that $X_t = a$ (i.e., $V_t = 0$), one has
$$\kappa_t = H'_x(\nu,0,1-t).$$
The martingale part is given by $dM_t = m_t\,dW_t$, where
$$m_t = e^{\nu V_t}\big(\nu H(\nu,|V_t|,1-t)-\mathrm{sgn}(V_t)\,H'_x(\nu,|V_t|,1-t)\big).$$
Therefore, the predictable compensator associated with $g^a_1(X)$ is
$$A_t = \int_0^t\frac{H'_x(\nu,0,1-s)}{e^{\nu V_s}H(\nu,0,1-s)}\,dL^a_s(X).$$
Exercise 3.3.2 The aim of this exercise is to compute, for t < T < 1, the quantity $E\big(h(W_T){\bf 1}_{\{T<g_1\}}\mid\mathcal G_t\big)$, which is the price of the claim $h(S_T)$ with barrier condition ${\bf 1}_{\{T<g_1\}}$.
Prove that
$$E\big(h(W_T){\bf 1}_{\{T<g_1\}}\mid\mathcal F_t\big) = E\big(h(W_T)\mid\mathcal F_t\big) - E\Big(h(W_T)\,\Phi\Big(\frac{|W_T|}{\sqrt{1-T}}\Big)\,\Big|\,\mathcal F_t\Big),$$
where $\Phi(x) = \sqrt{\frac{2}{\pi}}\int_0^x\exp\big(-\frac{u^2}{2}\big)\,du$. Define $k(w) = h(w)\Phi\big(|w|/\sqrt{1-T}\big)$. Prove that $E\big(k(W_T)\mid\mathcal F_t\big) = \widetilde k(t,W_t)$, where
$$\widetilde k(t,a) = E\big(k(W_{T-t}+a)\big) = \frac{1}{\sqrt{2\pi(T-t)}}\int_{\mathbb R}h(u)\,\Phi\Big(\frac{|u|}{\sqrt{1-T}}\Big)\exp\Big(-\frac{(u-a)^2}{2(T-t)}\Big)\,du.$$
C
3.4 Absolutely Continuous Compensator
From the preceding computations, the reader might think that the F-predictable compensator is always singular w.r.t. the Lebesgue measure. This is not the case, as we show now. We are indebted to Michel Émery for this example.
Let W be a Brownian motion and let $\tau = \sup\{t\le1:W_1-2W_t = 0\}$, that is, the last time before 1 when the Brownian motion is equal to half of its terminal value at time 1. Then
$$\{\tau\le t\} = \Big\{\inf_{t\le s\le1}2W_s\ge W_1\ge0\Big\}\cup\Big\{\sup_{t\le s\le1}2W_s\le W_1\le0\Big\}.$$
The quantity
$$P(\tau\le t,\,W_1\ge0\mid\mathcal F_t) = P\Big(\inf_{t\le s\le1}2W_s\ge W_1\ge0\,\Big|\,\mathcal F_t\Big)$$
can be evaluated using the equalities
$$\Big\{\inf_{t\le s\le1}W_s\ge\frac{W_1}{2}\ge0\Big\} = \Big\{\inf_{t\le s\le1}(W_s-W_t)\ge\frac{W_1}{2}-W_t\ge-W_t\Big\} = \Big\{\inf_{0\le u\le1-t}\widetilde W_u\ge\frac{\widetilde W_{1-t}}{2}-\frac{W_t}{2}\ge-W_t\Big\},$$
where $(\widetilde W_u = W_{t+u}-W_t,\,u\ge0)$ is a Brownian motion independent of $\mathcal F_t$. It follows that
$$P\Big(\inf_{t\le s\le1}W_s\ge\frac{W_1}{2}\ge0\,\Big|\,\mathcal F_t\Big) = \Psi(1-t,W_t),$$
where, denoting $M_s = \sup_{u\le s}W_u$,
$$\Psi(s,x) = P\Big(\inf_{0\le u\le s}\widetilde W_u\ge\frac{\widetilde W_s}{2}-\frac{x}{2}\ge-x\Big) = P\big(2M_s-W_s\le x,\ W_s\le x\big) = P\Big(2M_1-W_1\le\frac{x}{\sqrt s},\ W_1\le\frac{x}{\sqrt s}\Big).$$
The same kind of computation leads to
$$P\Big(\sup_{t\le s\le1}2W_s\le W_1\le0\,\Big|\,\mathcal F_t\Big) = \Psi(1-t,-W_t).$$
The quantity Ψ(s,x) can now be computed from the joint law of the maximum and of the process at time 1; however, we prefer to use Pitman's theorem (see [3M]): let $\widetilde U$ be a r.v. uniformly distributed on [−1,+1], independent of $R_1 := 2M_1-W_1$; then
$$P\big(2M_1-W_1\le y,\ W_1\le y\big) = P\big(R_1\le y,\ \widetilde UR_1\le y\big) = \frac12\int_{-1}^1P(R_1\le y,\ uR_1\le y)\,du.$$
For y > 0,
$$\frac12\int_{-1}^1P(R_1\le y,\ uR_1\le y)\,du = \frac12\int_{-1}^1P(R_1\le y)\,du = P(R_1\le y);$$
for y < 0,
$$\frac12\int_{-1}^1P(R_1\le y,\ uR_1\le y)\,du = 0.$$
Therefore,
$$P(\tau\le t\mid\mathcal F_t) = \Psi(1-t,W_t) + \Psi(1-t,-W_t) = \rho\Big(\frac{|W_t|}{\sqrt{1-t}}\Big),$$
where
$$\rho(y) = P(R_1\le y) = \sqrt{\frac{2}{\pi}}\int_0^yx^2e^{-x^2/2}\,dx.$$
Then $Z_t = P(\tau>t\mid\mathcal F_t) = 1-\rho\big(\frac{|W_t|}{\sqrt{1-t}}\big)$. We can now apply Tanaka's formula to the function ρ. Noting that $\rho'(0) = 0$, the contribution to the Doob-Meyer decomposition of Z of the local time of W at level 0 is 0. Furthermore, the increasing process A of the Doob-Meyer decomposition of Z is given by
$$dA_t = \Big(\frac12\,\rho''\Big(\frac{|W_t|}{\sqrt{1-t}}\Big)\,\frac{1}{1-t} + \frac12\,\rho'\Big(\frac{|W_t|}{\sqrt{1-t}}\Big)\,\frac{|W_t|}{\sqrt{(1-t)^3}}\Big)\,dt = \sqrt{\frac{2}{\pi}}\,\frac{1}{1-t}\,\frac{|W_t|}{\sqrt{1-t}}\,e^{-W_t^2/(2(1-t))}\,dt.$$
We note that A may be obtained as the dual predictable projection on the Brownian filtration of the process $A^{(W_1)}_s$, $s\le1$, where $(A^{(x)}_s,\,s\le1)$ is the compensator of τ under the law of the Brownian bridge $P^{(1)}_{0\to x}$.
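Émery's example lends itself to a direct Monte Carlo check (our sketch). Since $\rho(y) = P(R_1\le y)$ with $R_1\stackrel{\mathrm{law}}{=}|N(0,I_3)|$ (the norm of a three-dimensional Gaussian), the identity $P(\tau\le t) = E\big[\rho(|W_t|/\sqrt{1-t})\big]$ can be tested without any special function:

```python
import numpy as np

# Emery's example: tau = sup{t<=1 : W_1 - 2W_t = 0}.  Check
# P(tau <= t) = E[rho(|W_t|/sqrt(1-t))] = P(R_1 <= |W_t|/sqrt(1-t)),
# with R_1 = |N(0,I_3)| sampled independently of W.
rng = np.random.default_rng(5)
n_paths, n_steps = 10_000, 1_000
dt = 1.0 / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

Y = W[:, -1:] - 2 * W                    # Y_s = W_1 - 2W_s on the grid
change = np.signbit(Y[:, 1:]) != np.signbit(Y[:, :-1])
idx = np.arange(1, n_steps)
tau = np.where(change, idx * dt, 0.0).max(axis=1)  # last grid zero of Y

R1 = np.linalg.norm(rng.normal(size=(n_paths, 3)), axis=1)
for t in (0.25, 0.5, 0.75):
    k = int(t * n_steps) - 1             # grid index of time t
    zeta = np.abs(W[:, k]) / np.sqrt(1 - t)
    print(t, (tau <= t).mean(), (R1 <= zeta).mean())
```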
3.5 Time When the Supremum is Reached
Let W be a Brownian motion, $M_t = \sup_{s\le t}W_s$, and let τ be the time when the supremum on the interval [0,1] is reached, i.e.,
$$\tau = \inf\{t\le1:W_t = M_1\} = \sup\{t\le1:M_t-W_t = 0\}.$$
Let us denote by ζ the positive continuous semimartingale
$$\zeta_t = \frac{M_t-W_t}{\sqrt{1-t}},\qquad t<1.$$
Let $F_t = P(\tau\le t\mid\mathcal F_t)$. Since $F_t = \Phi(\zeta_t)$, where $\Phi(x) = \sqrt{\frac{2}{\pi}}\int_0^x\exp\big(-\frac{u^2}{2}\big)\,du$ (see the Exercise in Chapter 4 of [3M]), using Itô's formula we obtain the canonical decomposition of F as follows:
$$F_t = \int_0^t\Phi'(\zeta_u)\,d\zeta_u + \frac12\int_0^t\Phi''(\zeta_u)\,\frac{du}{1-u}\ \stackrel{(i)}{=}\ -\int_0^t\Phi'(\zeta_u)\,\frac{dW_u}{\sqrt{1-u}} + \sqrt{\frac{2}{\pi}}\int_0^t\frac{dM_u}{\sqrt{1-u}}\ \stackrel{(ii)}{=}\ U_t+\widetilde F_t,$$
where $U_t = -\int_0^t\Phi'(\zeta_u)\,\frac{dW_u}{\sqrt{1-u}}$ is a martingale and $\widetilde F$ is a predictable increasing process. To obtain (i), we have used $x\Phi'+\Phi'' = 0$; to obtain (ii), we have used $\Phi'(0) = \sqrt{2/\pi}$ and the fact that the process M increases only on the set
$$\{u\in[0,t]:M_u = W_u\} = \{u\in[0,t]:\zeta_u = 0\}.$$
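Since $M_t-W_t\stackrel{\mathrm{law}}{=}|W_t|$, taking expectations in $F_t = \Phi(\zeta_t)$ yields the classical arcsine law $P(\tau\le t) = \frac{2}{\pi}\arcsin\sqrt t$ for the location of the maximum; a minimal simulation sketch (ours):

```python
import numpy as np

# Arcsine law for the argmax: tau = argmax of W on [0,1];
# P(tau <= t) = (2/pi)*arcsin(sqrt(t)).
rng = np.random.default_rng(6)
n_paths, n_steps = 10_000, 1_000
dt = 1.0 / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
tau = (W.argmax(axis=1) + 1) * dt

for t in (0.1, 0.5, 0.9):
    print(t, (tau <= t).mean(), 2 / np.pi * np.arcsin(np.sqrt(t)))
```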
3.6 Last Passage Times for Particular Martingales
We now study the Azéma supermartingale associated with the random time L, a last passage time
or the end of a predictable set Γ, i.e.,
L(ω) = sup{t : (t, ω) ∈ Γ} .
Proposition 3.6.1 Let L be the end of a predictable set. Assume that all the F-martingales are continuous and that L avoids the F-stopping times. Then, there exists a continuous and nonnegative local martingale N, with $N_0 = 1$ and $\lim_{t\to\infty}N_t = 0$, such that
$$Z_t = P(L>t\mid\mathcal F_t) = \frac{N_t}{\Sigma_t},$$
where $\Sigma_t = \sup_{s\le t}N_s$. The Doob-Meyer decomposition of Z is $Z_t = m_t-A_t$, and the following relations hold:
$$\frac{N_t}{\Sigma_t} = \exp\Big(\int_0^t\frac{dm_s}{Z_s}-\frac12\int_0^t\frac{d\langle m\rangle_s}{Z_s^2}\Big),\qquad \Sigma_t = \exp(A_t),\qquad m_t = 1+\int_0^t\frac{dN_s}{\Sigma_s} = E\big(\ln\Sigma_\infty\mid\mathcal F_t\big).$$
Proof: As recalled previously, the Doob-Meyer decomposition of Z reads $Z_t = m_t-A_t$ with m and A continuous, and $dA_t$ is carried by $\{t:Z_t = 1\}$. Then, for $t<T_0 := \inf\{t:Z_t = 0\}$,
$$-\ln Z_t = -\Big(\int_0^t\frac{dm_s}{Z_s}-\frac12\int_0^t\frac{d\langle m\rangle_s}{Z_s^2}\Big) + A_t.$$
From Skorokhod's reflection lemma we deduce that
$$A_t = \sup_{u\le t}\Big(\int_0^u\frac{dm_s}{Z_s}-\frac12\int_0^u\frac{d\langle m\rangle_s}{Z_s^2}\Big).$$
Introducing the local martingale N defined by
$$N_t = \exp\Big(\int_0^t\frac{dm_s}{Z_s}-\frac12\int_0^t\frac{d\langle m\rangle_s}{Z_s^2}\Big),$$
it follows that $Z_t = \frac{N_t}{\Sigma_t}$ and
$$\Sigma_t = \sup_{u\le t}N_u = \exp\Big(\sup_{u\le t}\Big(\int_0^u\frac{dm_s}{Z_s}-\frac12\int_0^u\frac{d\langle m\rangle_s}{Z_s^2}\Big)\Big) = e^{A_t}.$$
¤
The following three exercises are from the work of Bentata and Yor [9].
Exercise 3.6.2 Let M be a positive martingale such that $M_0 = 1$ and $\lim_{t\to\infty}M_t = 0$. Let $a\in[0,1[$ and define $G_a = \sup\{t:M_t = a\}$. Prove that
$$P(G_a\le t\mid\mathcal F_t) = \Big(1-\frac{M_t}{a}\Big)^+.$$
Assume that, for every t > 0, the law of the r.v. $M_t$ admits a density $(m_t(x),\,x\ge0)$ such that $(t,x)\mapsto m_t(x)$ may be chosen continuous on $(0,\infty)^2$, that $d\langle M\rangle_t = \sigma_t^2\,dt$, and that there exists a jointly continuous function $(t,x)\mapsto\theta_t(x) = E(\sigma_t^2\mid M_t = x)$ on $(0,\infty)^2$. Prove that
$$P(G_a\in dt) = \Big(1-\frac{M_0}{a}\Big)^+\delta_0(dt) + {\bf 1}_{\{t>0\}}\,\frac{1}{2a}\,\theta_t(a)\,m_t(a)\,dt.$$
Hint: use Tanaka's formula to prove that the result is equivalent to $d_tE\big(L^a_t(M)\big) = \theta_t(a)\,m_t(a)\,dt$, where L is the Tanaka-Meyer local time.
C
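A minimal numerical sketch (ours) of the first identity, with $M_t = \exp(W_t-t/2)$: compare the empirical law of $G_a$ with $E[(1-M_t/a)^+]$ computed from the same paths (the horizon is truncated, which is harmless here since M is then far below a with high probability).

```python
import numpy as np

# Check P(G_a <= t) = E[(1 - M_t/a)^+] for M_t = exp(W_t - t/2) and
# G_a = sup{t: M_t = a}, approximated on a grid up to T = 40.
rng = np.random.default_rng(7)
a, n_paths, n_steps, T = 0.5, 5_000, 2_000, 40.0
dt = T / n_steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
t_grid = dt * np.arange(1, n_steps + 1)
M = np.exp(W - t_grid / 2)

above = M >= a
Ga = np.where(above, t_grid, 0.0).max(axis=1)   # ~ last visit to level a

for t in (1.0, 4.0, 10.0):
    k = int(t / dt) - 1
    print(t, (Ga <= t).mean(), np.maximum(1 - M[:, k] / a, 0.0).mean())
```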
Exercise 3.6.3 Let B be a Brownian motion and
$$T^{(\nu)}_a = \inf\{t:B_t+\nu t = a\},\qquad G^{(\nu)}_a = \sup\{t:B_t+\nu t = a\}.$$
Prove that
$$\big(T^{(\nu)}_a,\,G^{(\nu)}_a\big)\stackrel{\mathrm{law}}{=}\Big(\frac{1}{G^{(a)}_\nu},\,\frac{1}{T^{(a)}_\nu}\Big).$$
Give the law of the pair $\big(T^{(\nu)}_a,\,G^{(\nu)}_a\big)$.
C
Exercise 3.6.4 Let X be a transient diffusion such that
$$P_x(T_0<\infty) = 0,\quad x>0,\qquad P_x\big(\lim_{t\to\infty}X_t = \infty\big) = 1,\quad x>0,$$
and denote by s the scale function satisfying $s(0^+) = -\infty$, $s(\infty) = 0$. Prove that, for all x, t > 0,
$$P_x(G_y\in dt) = \frac{-1}{2s(y)}\,p^{(m)}_t(x,y)\,dt,$$
where $p^{(m)}$ is the transition density w.r.t. the speed measure m.
C
Bibliography
[1] J. Amendinger. Initial enlargement of filtrations and additional information in financial markets. PhD thesis, Technische Universität Berlin, 1999.
[2] J. Amendinger. Martingale representation theorems for initially enlarged filtrations. Stochastic
Processes and their Appl., 89:101–116, 2000.
[3] J. Amendinger, P. Imkeller, and M. Schweizer. Additional logarithmic utility of an insider.
Stochastic Processes and their Appl., 75:263–286, 1998.
[4] S. Ankirchner. Information and Semimartingales. PhD thesis, Humboldt Universität Berlin,
2005.
[5] S. Ankirchner, S. Dereich, and P. Imkeller. Enlargement of filtrations and continuous Girsanov-type embeddings. In C. Donati-Martin, M. Emery, A. Rouault, and Ch. Stricker, editors,
Séminaire de Probabilités XL, volume 1899 of Lecture Notes in Mathematics. Springer-Verlag,
2007.
[6] T. Aven. A theorem for determining the compensator of a counting process. Scandinavian
Journal of Statistics, 12:69–72, 1985.
[7] M.T. Barlow. Study of filtration expanded to include an honest time. Z. Wahr. Verw. Gebiete,
44:307–323, 1978.
[8] F. Baudoin. Modeling anticipations on financial markets. In Paris-Princeton Lectures on Mathematical Finance 2002, volume 1814 of Lecture Notes in Mathematics, pages 43–92. Springer-Verlag, 2003.
[9] A. Bentata and M. Yor. From Black-Scholes and Dupire formulae to last passage times of local
martingales. Preprint, 2008.
[10] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Stochastic methods in credit risk modelling,
valuation and hedging. In M. Frittelli and W. Runggaldier, editors, CIME-EMS Summer School
on Stochastic Methods in Finance, Bressanone, volume 1856 of Lecture Notes in Mathematics.
Springer, 2004.
[11] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Market completeness under constrained trading. In A.N. Shiryaev, M.R. Grossinho, P.E. Oliveira, and M.L. Esquivel, editors, Stochastic
Finance, Proceedings of International Lisbon Conference, pages 83–107. Springer, 2005.
[12] T.R. Bielecki, M. Jeanblanc, and M. Rutkowski. Completeness of a reduced-form credit risk
model with discontinuous asset prices. Stochastic Models, 22:661–687, 2006.
[13] P. Brémaud and M. Yor. Changes of filtration and of probability measures. Z. Wahr. Verw.
Gebiete, 45:269–295, 1978.
[14] G. Callegaro, M. Jeanblanc, and B. Zargari. Carthagese Filtrations. Preprint, 2010.
[15] J.M. Corcuera, P. Imkeller, A. Kohatsu-Higa, and D. Nualart. Additional utility of insiders
with imperfect dynamical information. Finance and Stochastics, 8:437–450, 2004.
[16] M.H.A. Davis. Option Pricing in Incomplete Markets. In M.A.H. Dempster and S.R. Pliska,
editors, Mathematics of Derivative Securities, Publication of the Newton Institute, pages 216–
227. Cambridge University Press, 1997.
[17] C. Dellacherie. Un exemple de la théorie générale des processus. In P-A. Meyer, editor,
Séminaire de Probabilités IV, volume 124 of Lecture Notes in Mathematics, pages 60–70.
Springer-Verlag, 1970.
[18] C. Dellacherie. Capacités et processus stochastiques, volume 67 of Ergebnisse. Springer, 1972.
[19] C. Dellacherie, B. Maisonneuve, and P-A. Meyer. Probabilités et Potentiel, chapitres XVII-XXIV, Processus de Markov (fin). Compléments de calcul stochastique. Hermann, Paris, 1992.
[20] C. Dellacherie and P-A. Meyer. Probabilités et Potentiel, chapitres I-IV. Hermann, Paris, 1975.
English translation: Probabilities and Potentiel, chapters I-IV, North-Holland, (1982).
[21] C. Dellacherie and P-A. Meyer. A propos du travail de Yor sur les grossissements des tribus.
In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Séminaire de Probabilités XII, volume 649
of Lecture Notes in Mathematics, pages 69–78. Springer-Verlag, 1978.
[22] C. Dellacherie and P-A. Meyer. Probabilités et Potentiel, chapitres V-VIII. Hermann, Paris,
1980. English translation : Probabilities and Potentiel, chapters V-VIII, North-Holland, (1982).
[23] D.M. Delong. Crossing probabilities for a square root boundary for a Bessel process. Comm.
Statist A-Theory Methods, 10:2197–2213, 1981.
[24] G. Di Nunno, T. Meyer-Brandis, B. Øksendal, and F. Proske. Optimal portfolio for an insider
in a market driven by Lévy processes. Quantitative Finance, 6:83–94, 2006.
[25] P. Ehlers and Ph. Schönbucher. Background filtrations and canonical loss processes for top-down
models of portfolio credit risk. Finance and Stochastics, 13:79–103, 2009.
[26] R.J. Elliott. Stochastic Calculus and Applications. Springer, Berlin, 1982.
[27] R.J. Elliott and M. Jeanblanc. Incomplete markets and informed agents. Mathematical Methods of Operations Research, 50:475–492, 1998.
[28] R.J. Elliott, M. Jeanblanc, and M. Yor. On models of default risk. Math. Finance, 10:179–196,
2000.
[29] M. Emery. Espaces probabilisés filtrés : de la théorie de Vershik au mouvement Brownien,
via des idées de Tsirelson. In Séminaire Bourbaki, 53ième année, volume 282, pages 63–83.
Astérisque, 2002.
[30] A. Eyraud-Loisel. BSDE with enlarged filtration: option hedging of an insider trader in a
financial market with jumps. Stochastic Processes and their Appl., 115:1745–1763, 2005.
[31] J-P. Florens and D. Fougere. Noncausality in continuous time. Econometrica, 64:1195–1212,
1996.
[32] H. Föllmer, C-T Wu, and M. Yor. Canonical decomposition of linear transformations of two
independent Brownian motions motivated by models of insider trading. Stochastic Processes
and their Appl., 84:137–164, 1999.
[33] D. Gasbarra, E. Valkeila, and L. Vostrikova. Enlargement of filtration and additional information in pricing models: a Bayesian approach. In Yu. Kabanov, R. Liptser, and J. Stoyanov,
editors, From Stochastic Calculus to Mathematical Finance: The Shiryaev Festschrift, pages
257–286, 2006.
[34] A. Grorud and M. Pontier. Insider trading in a continuous time market model. International
Journal of Theoretical and Applied Finance, 1:331–347, 1998.
[35] A. Grorud and M. Pontier. Asymmetrical information and incomplete markets. International
Journal of Theoretical and Applied Finance, 4:285–302, 2001.
[36] C. Hillairet. Comparison of insiders' optimal strategies depending on the type of side-information. Stochastic Processes and their Appl., 115:1603–1627, 2005.
[37] P. Imkeller. Random times at which insiders can have free lunches. Stochastics and Stochastics
Reports, 74:465–487, 2002.
[38] P. Imkeller. Malliavin's calculus in insider models: additional utility and free lunches. Mathematical Finance, 13:153–169, 2003.
[39] P. Imkeller, M. Pontier, and F. Weisz. Free lunch and arbitrage possibilities in a financial
market model with an insider. Stochastic Processes and their Appl., 92:103–130, 2001.
[40] J. Jacod. Calcul stochastique et Problèmes de martingales, volume 714 of Lecture Notes in
Mathematics. Springer-Verlag, Berlin, 1979.
[41] J. Jacod. Grossissement initial, hypothèse (H′) et théorème de Girsanov. In Séminaire de Calcul
Stochastique 1982-83, volume 1118 of Lecture Notes in Mathematics. Springer-Verlag, 1987.
[42] M. Jeanblanc and M. Rutkowski. Modeling default risk: an overview. In Mathematical Finance: Theory and Practice, Fudan University, pages 171–269. Modern Mathematics Series,
Higher Education Press, Beijing, 2000.
[43] M. Jeanblanc and M. Rutkowski. Modeling default risk: Mathematical tools. Fixed Income
and Credit risk modeling and Management, New York University, Stern School of Business,
Statistics and Operations Research Department, Workshop, 2000.
[44] M. Jeanblanc and S. Song. Default times with given survival probability and their F-martingale
decomposition formula. Working Paper, 2010.
[45] M. Jeanblanc and S. Song. Explicit model of default time with given survival process. Working
Paper, 2010.
[46] Th. Jeulin. Semi-martingales et grossissement de filtration, volume 833 of Lecture Notes in
Mathematics. Springer-Verlag, 1980.
[47] Th. Jeulin and M. Yor. Grossissement d’une filtration et semi-martingales : formules explicites.
In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Séminaire de Probabilités XII, volume 649
of Lecture Notes in Mathematics, pages 78–97. Springer-Verlag, 1978.
[48] Th. Jeulin and M. Yor. Nouveaux résultats sur le grossissement des tribus. Ann. Scient. Ec.
Norm. Sup., 11:429–443, 1978.
[49] Th. Jeulin and M. Yor. Inégalité de Hardy, semimartingales et faux-amis. In P-A. Meyer, editor,
Séminaire de Probabilités XIII, volume 721 of Lecture Notes in Mathematics, pages 332–359.
Springer-Verlag, 1979.
[50] Th. Jeulin and M. Yor, editors. Grossissements de filtrations: exemples et applications, volume
1118 of Lecture Notes in Mathematics. Springer-Verlag, 1985.
[51] I. Karatzas and I. Pikovsky. Anticipative portfolio optimization. Adv. Appl. Prob., 28:1095–
1122, 1996.
[52] A. Kohatsu-Higa. Enlargement of filtrations and models for insider trading. In J. Akahori,
S. Ogawa, and S. Watanabe, editors, Stochastic Processes and Applications to Mathematical
Finance, pages 151–166. World Scientific, 2004.
[53] A. Kohatsu-Higa. Enlargement of filtration. In Paris-Princeton Lecture on Mathematical Finance, 2004, volume 1919 of Lecture Notes in Mathematics, pages 103–172. Springer-Verlag,
2007.
[54] A. Kohatsu-Higa and B. Øksendal. Enlargement of filtration and insider trading. Preprint,
2004.
[55] H. Kunita. Some extensions of Itô’s formula. In J. Azéma and M. Yor, editors, Séminaire de
Probabilités XV, volume 850 of Lecture Notes in Mathematics, pages 118–141. Springer-Verlag,
1981.
[56] S. Kusuoka. A remark on default risk models. Adv. Math. Econ., 1:69–82, 1999.
[57] D. Lando. On Cox processes and credit risky securities. Review of Derivatives Research, 2:99–
120, 1998.
[58] R.S.R. Liptser and A.N. Shiryaev. Statistics of Random Processes. Springer, 2nd printing 2001,
1977.
[59] D. Madan, B. Roynette, and M. Yor. An alternative expression for the Black-Scholes formula
in terms of Brownian first and last passage times. Preprint, Université Paris 6, 2008.
[60] R. Mansuy and M. Yor. Random Times and (Enlargements of) Filtrations in a Brownian Setting, volume 1873 of Lecture Notes in Mathematics. Springer, 2006.
[61] G. Mazziotto and J. Szpirglas. Modèle général de filtrage non linéaire et équations différentielles
stochastiques associées. Ann. Inst. H. Poincaré, 15:147–173, 1979.
[62] A. Nikeghbali. An essay on the general theory of stochastic processes. Probability Surveys,
3:345–412, 2006.
[63] A. Nikeghbali and M. Yor. A definition and some properties of pseudo-stopping times. The
Annals of Probability, 33:1804–1824, 2005.
[64] A. Nikeghbali and M. Yor. Doob’s maximal identity, multiplicative decompositions and enlargements of filtrations. In D. Burkholder, editor, Joseph Doob: A Collection of Mathematical
Articles in his Memory, volume 50, pages 791–814. Illinois Journal of Mathematics, 2007.
[65] H. Pham. Stochastic control under progressive enlargement of filtrations and applications to
multiple defaults risk management. Preprint, 2009.
[66] J.W. Pitman and M. Yor. Bessel processes and infinitely divisible laws. In D. Williams, editor,
Stochastic integrals, LMS Durham Symposium, Lect. Notes 851, pages 285–370. Springer, 1980.
[67] Ph. Protter. Stochastic Integration and Differential Equations. Springer, Berlin, Second edition,
2005.
[68] L.C.G. Rogers and D. Williams. Diffusions, Markov processes and Martingales, Vol 1. Foundations. Cambridge University Press, Cambridge, second edition, 2000.
[69] P. Salminen. On last exit decomposition of linear diffusions. Studia Sci. Math. Hungar., 33:251–
262, 1997.
[70] M.J. Sharpe. Some transformations of diffusion by time reversal. The Annals of Probability,
8:1157–1162, 1980.
[71] A.N. Shiryaev. Optimal Stopping Rules. Springer, Berlin, 1978.
[72] S. Song. Grossissement de filtration et problèmes connexes. Thesis, Paris IV University, 1997.
[73] C. Stricker and M. Yor. Calcul stochastique dépendant d’un paramètre. Z. Wahrsch. Verw.
Gebiete, 45(2):109–133, 1978.
[74] Ch. Stricker. Quasi-martingales, martingales locales, semimartingales et filtration naturelle. Z.
Wahr. Verw. Gebiete, 39:55–63, 1977.
[75] B. Tsirel’son. Within and beyond the reach of Brownian innovations. In Proceedings of the
International Congress of Mathematicians, pages 311–320. Documenta Mathematica, 1998.
[76] D. Williams. A non-stopping time with the optional-stopping property. Bull. London Math.
Soc., 34:610–612, 2002.
[77] C-T. Wu. Construction of Brownian motions in enlarged filtrations and their role in mathematical models of insider trading. Thèse de doctorat d'état, Humboldt-Universität Berlin, 1999.
[78] Ch. Yoeurp. Grossissement de filtration et théorème de Girsanov généralisé. In Th. Jeulin and
M. Yor, editors, Grossissements de filtrations: exemples et applications, volume 1118. Springer,
1985.
[79] M. Yor. Grossissement d’une filtration et semi-martingales : théorèmes généraux. In C. Dellacherie, P-A. Meyer, and M. Weil, editors, Séminaire de Probabilités XII, volume 649 of Lecture
Notes in Mathematics, pages 61–69. Springer-Verlag, 1978.
[80] M. Yor. Some Aspects of Brownian Motion, Part II: Some Recent Martingale Problems. Lectures
in Mathematics. ETH Zürich. Birkhäuser, Basel, 1997.
Index
(H) Hypothesis, 16
Brownian
  bridge, 19
Class D, 4
Compensator
  predictable, 9
Decomposition
  Doob-Meyer, 55, 72
  multiplicative, 55, 72
Doob-Meyer decomposition, 4
Dual predictable projection, 69
Filtration
  natural, 3
First time
  when the supremum is reached, 71
Gaussian
  process, 19
Honest time, 50
Immersion, 16
Initial time, 56
Integration by parts
  for Stieltjes, 6
Last passage time, 65
  before hitting a level, 67
  before maturity, 68
  of a transient diffusion, 65
Multiplicative decomposition, 55, 72
Norros lemma, 42, 45
Predictable
  compensator, 9
Process
  adapted, 3
  increasing, 3
Projection
  dual predictable, 8
  optional, 7
  predictable, 8
Pseudo stopping time, 49
Random time
  honest, 50
Running maximum, 50
Semi-martingale, 5
Special semi-martingale, 5