MAT4701

=============================== 22/01-2010
Curriculum Outline
(1) Basic definitions on Brownian motion
(2) The Itô-integral construction.
(3) Itô’s formula
(4) Stochastic differential equations
(5) Markov property
(6) Dynkin’s formula
(7) Partial differential equations
(8) Optimal stopping
Brownian Motion

BM: with step size $|\Delta x| = \sqrt{\Delta t}$,
$$B(t) = \lim_{\Delta t \to 0} \sum_{k=1}^{t/\Delta t} (\pm 1)_k \sqrt{\Delta t} \quad (= B_t)$$
When computing, $\Delta t$ can be set equal to e.g. $\frac{1}{10000}$.
Basic properties of Brownian Motion
There exists a probability space (Ω, F , P ) and a stochastic process Bt :
Ω → R, where the process has the following properties:
(1): B0 (ω) = 0
(2): For all ω, t → Bt (ω) is continuous.
(3): If 0 ≤ t1 ≤ t2 ≤ . . ., then Bt1 , Bt2 − Bt1 , Bt3 − Bt2 , . . . are independent.
(Which means the increments are independent).
(4): $E[B_t(\omega)] = 0$ for all t.
(5): $E[(B_t - B_s)^2] = t - s$ for all $t \ge s$.
(6): For all $s \le t$, $B_t - B_s$ is normally distributed, $N(0, t - s)$, where $t - s$ is
the variance.
The (±1) steps in the BM have equal probability, and the sum of N random
variables with expected value 0 also has expected value 0. That is why the
mean of the normal distribution is 0.
$$\operatorname{Var}(B_t) = \operatorname{Var}\Big(\sum_{k=1}^{t/\Delta t} (\pm 1)_k \sqrt{\Delta t}\Big) = \sum_{k=1}^{t/\Delta t} \operatorname{Var}\big((\pm 1)_k \sqrt{\Delta t}\big) = \sum_{k=1}^{t/\Delta t} \operatorname{Var}\big((\pm 1)_k\big)\big(\sqrt{\Delta t}\big)^2 = \sum_{k=1}^{t/\Delta t} (1)(\Delta t) = \frac{t}{\Delta t}\cdot \Delta t = t$$
This is basically a verification of property (5), where s = 0.
We know this is a normal distribution due to the central limit theorem.
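To make the scaling concrete, here is a minimal Python sketch (not part of the original notes) that builds B_t from the ±1 random walk above and checks that Var(B_t) ≈ t. The step size 1/10000 follows the suggestion above; the seed and sample size are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
t, dt = 1.0, 1.0 / 10000          # horizon and step size, as suggested above
n_steps = int(t / dt)
n_paths = 5000

# Each increment is (+-1) * sqrt(dt); summing the increments gives B_t.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps)) * np.sqrt(dt)
B_t = steps.sum(axis=1)

print("mean of B_t    :", B_t.mean())   # should be close to 0
print("variance of B_t:", B_t.var())    # should be close to t = 1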
Stochastic integration
Riemann integration uses the same basic idea that we want to use in
stochastic integration:
$$\int_a^b f(x)\,dx \approx \sum_i f(x_i^*)\,\Delta x_i$$
For a stochastic integral, with $X_t(\omega)$ a stochastic process and $B_t(\omega)$ a Brownian
motion,
$$\int_a^b X_t(\omega)\,dB_t(\omega) \approx \sum_i X_{t_i^*}(\omega)\big(B_{t_{i+1}} - B_{t_i}\big)$$
We want the limit to exist as $\max_i |t_{i+1} - t_i| \to 0$. But this turns out to be
"impossible" (except in some very specific cases).
Example.
$$0 = t_0 \le t_1 \le t_2 \le \ldots \le t_N = T$$
We consider the sum
$$\sum_i B_{t_i^*}\big(B_{t_{i+1}} - B_{t_i}\big)$$
• Case 1. $t_i^* = t_i$ (leftmost point).
$$E\Big[\sum_i B_{t_i}\big(B_{t_{i+1}} - B_{t_i}\big)\Big] = \sum_i E\big[B_{t_i}\big(B_{t_{i+1}} - B_{t_i}\big)\big] =$$
Since $B_{t_i}$ and $B_{t_{i+1}} - B_{t_i}$ are independent (independent increments),
$$\sum_i E\big[B_{t_i}\big]\, E\big[B_{t_{i+1}} - B_{t_i}\big] = 0$$
• Case 2. $t_i^* = t_{i+1}$ (rightmost point).
$$E\Big[\sum_i B_{t_{i+1}}\big(B_{t_{i+1}} - B_{t_i}\big)\Big] =$$
In this case $B_{t_{i+1}}$ and $B_{t_{i+1}} - B_{t_i}$ are not independent. However, we can use
a little trick and add/subtract $B_{t_i}$:
$$E\Big[\sum_i \big(B_{t_{i+1}} - B_{t_i} + B_{t_i}\big)\big(B_{t_{i+1}} - B_{t_i}\big)\Big] = E\Big[\sum_i \big(B_{t_{i+1}} - B_{t_i}\big)^2\Big] + E\Big[\sum_i B_{t_i}\big(B_{t_{i+1}} - B_{t_i}\big)\Big]$$
The second term on the right-hand side is zero, like we showed in case 1. For the first,
$$\sum_i E\big[(B_{t_{i+1}} - B_{t_i})^2\big] = \sum_i \Delta t_i = T$$
If the limit of $\sum_i B_{t_i^*}(B_{t_{i+1}} - B_{t_i})$ exists as $\max_i |t_{i+1} - t_i| \to 0$, then the limit
must have expected value both 0 and T (and everything in between), which is
impossible.
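A quick numerical illustration of the two cases (a sketch, not from the notes; the grid size and seed are arbitrary): the left-endpoint sums average to about 0, the right-endpoint sums to about T, so no single limit can serve both.

import numpy as np

rng = np.random.default_rng(1)
T, N, n_paths = 1.0, 1000, 20000
dB = rng.normal(0.0, np.sqrt(T / N), size=(n_paths, N))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # B at t_i
B_right = B                                              # B at t_{i+1}

left_sum = (B_left * dB).sum(axis=1)
right_sum = (B_right * dB).sum(axis=1)
print("E[left sum]  ~", left_sum.mean())    # about 0
print("E[right sum] ~", right_sum.mean())   # about T = 1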
Solution
Basic idea: Fix the approximation point.
(1) t∗i = ti (Itô-integral).
(2) $t_i^* = \frac{t_i + t_{i+1}}{2}$ (Stratonovich integral).
Sigma-algebras: $\mathcal{F}$, a σ-algebra:
$(\Sigma_1)$ $\emptyset \in \mathcal{F}$
$(\Sigma_2)$ $F \in \mathcal{F} \Rightarrow F^c \in \mathcal{F}$
$(\Sigma_3)$ $F_1, F_2, \ldots \in \mathcal{F} \Rightarrow \bigcup_{i=1}^{\infty} F_i \in \mathcal{F}$
σ-algebras generated by sets
C - a collection/family of sets.
σ(C) = the smallest σ-algebra containing all the sets in C.
One says σ(C) is the σ-algebra generated by C.
Random variables.
X : Ω → R.
σ(X) = the smallest σ-algebra containing all sets of the form $F = X^{-1}(A)$,
where A is an arbitrary Borel set.
σ−algebras generated by a stochastic process
Xt : Ω → R
$\sigma(X_s, s \le t)$ = the smallest σ-algebra containing all sets of the form
$$\{\omega \mid X_{t_1} \in A_1,\ X_{t_2} \in A_2,\ \ldots,\ X_{t_n} \in A_n\}$$
where $t_1, t_2, \ldots, t_n \le t$ and $A_1, \ldots, A_n$ are arbitrary Borel sets.
Important: Ft = σ(Bs , s ≤ t). The σ−algebra generated by the Brownian
motion. A stochastic process is called Ft -adapted iff Xt is Ft −measurable for
all t.
What does it "mean" that Xt is Ft-adapted?
Result: If Xt is Ft-measurable, then we can find a sequence $f_t^n$ such that
$$X_t = \lim_{n \to \infty} f_t^n \quad \text{a.e.}$$
where
$$f_t^n = \sum_{i=1}^{N_n} g_1^{(i)}(B_{t_1^n})\, g_2^{(i)}(B_{t_2^n}) \cdots g_m^{(i)}(B_{t_m^n}),$$
where each g is an arbitrary Borel function and all times are ≤ t.
Hence if ω is known (i.e. what sample path we are on) we can in principle
find the value of Xt from the values $B_s(\omega)$, $0 \le s \le t$. Conclusion: Ft contains
all "information" up to time t.
For instance
$$X_t = B_t \sin\big(B_{t/2}\big) + B_{t/3}^2$$
is Ft-measurable since all the times appearing are less than or equal to t.
$$X_t = B_{2t}$$
is not Ft-measurable since it contains information from the "future" (2t > t).
Itô-integral. Construction.
$$\int_S^T f(t, \omega)\,dB_t(\omega)$$
We choose S, T. Normally S = 0 and T = T.
V = V[S, T] is the class of functions $f(t, \omega) : [S, T] \times \Omega \to \mathbb{R}$ such that:
(i) $f(t, \omega)$ is $\mathcal{B} \times \mathcal{F}$-measurable
(ii) $f(t, \omega)$ is Ft-adapted
(iii) $E\big[\int_S^T f(t, \omega)^2\,dt\big] < \infty$ (Lebesgue integral)
Elementary functions
φ ∈ V is elementary iff
$$\varphi(t, \omega) = \sum_{j=1}^{N} e_j(\omega)\, I_{[t_j, t_{j+1})}(t)$$
where I is the indicator function and
$$S = t_1 \le t_2 \le \ldots \le t_N = T.$$
Note: $e_j(\omega)$ must be $\mathcal{F}_{t_j}$-measurable. The Itô integral of an elementary
function is defined as:
$$\int_S^T \varphi(t, \omega)\,dB_t(\omega) = \sum_j e_j(\omega)\big(B_{t_{j+1}} - B_{t_j}\big)$$
Itô-isometry.
This holds for any object in V. We will prove it for an elementary function.
$$E\Big[\Big(\int_S^T \varphi(t, \omega)\,dB_t(\omega)\Big)^2\Big] = E\Big[\int_S^T \varphi^2(t, \omega)\,dt\Big]$$
The left side is a stochastic integral, whereas the right side is a normal
Lebesgue integral.
Proof:
$$E\Big[\Big(\int_S^T \varphi(t, \omega)\,dB_t(\omega)\Big)^2\Big] =$$
Substituting the definition of the Itô integral,
$$E\Big[\Big(\sum_j e_j(\omega)\big(B_{t_{j+1}} - B_{t_j}\big)\Big)^2\Big] = E\Big[\sum_{i,j} e_i(\omega)\, e_j(\omega)\big(B_{t_{i+1}} - B_{t_i}\big)\big(B_{t_{j+1}} - B_{t_j}\big)\Big] =$$
From here on, we consider three separate cases.
• Case 1. i < j. Then:
$e_i(\omega)$ is $\mathcal{F}_{t_j}$-measurable,
$e_j(\omega)$ is $\mathcal{F}_{t_j}$-measurable,
$B_{t_{i+1}} - B_{t_i}$ is $\mathcal{F}_{t_j}$-measurable,
since $\mathcal{F}_{t_i} \subset \mathcal{F}_{t_j}$ and $t_{i+1} \le t_j$ when i < j.
Then $B_{t_{j+1}} - B_{t_j}$ is independent of the rest of the expression:
$$E\Big[\sum_{i<j} e_j(\omega)\, e_i(\omega)\big(B_{t_{i+1}} - B_{t_i}\big)\big(B_{t_{j+1}} - B_{t_j}\big)\Big] = \sum_{i<j} E\Big[e_j(\omega)\, e_i(\omega)\big(B_{t_{i+1}} - B_{t_i}\big)\Big]\, E\big[B_{t_{j+1}} - B_{t_j}\big] = 0$$
• Case 2. i > j. Then $B_{t_{i+1}} - B_{t_i}$ is independent of the rest, and everything
cancels the same way.
• Case 3. i = j.
$$E\Big[\Big(\int_S^T \varphi(t, \omega)\,dB_t(\omega)\Big)^2\Big] = E\Big[\sum_{i=1}^{N} e_i(\omega)^2\big(B_{t_{i+1}} - B_{t_i}\big)^2\Big] = \sum_{i=1}^{N} E\big[e_i(\omega)^2\big]\, E\big[(B_{t_{i+1}} - B_{t_i})^2\big] = \sum_{i=1}^{N} E\big[e_i(\omega)^2\big]\,\Delta t_i = E\Big[\sum_{i=1}^{N} e_i(\omega)^2\,\Delta t_i\Big] = E\Big[\int_S^T \varphi^2(t, \omega)\,dt\Big]$$
Q.E.D.
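As a sanity check of the isometry, here is a sketch (not from the notes) with the concrete choice f(t, ω) = Bt, which is in V: for S = 0 both sides should equal ∫₀ᵀ t dt = T²/2.

import numpy as np

rng = np.random.default_rng(2)
T, N, n_paths = 1.0, 1000, 20000
dB = rng.normal(0.0, np.sqrt(T / N), size=(n_paths, N))
B = np.cumsum(dB, axis=1)
B_prev = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoint, Ito

lhs = ((B_prev * dB).sum(axis=1) ** 2).mean()             # E[(int B dB)^2]
rhs = (B_prev ** 2).mean(axis=0).sum() * (T / N)          # E[int B^2 dt]
print(lhs, rhs, T**2 / 2)   # all approximately 0.5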
Lemma
If f ∈ V, then there exists a sequence of elementary functions φn , n = 1, 2, . . .
such that
$$E\Big[\int_S^T (f - \varphi_n)^2\,dt\Big] \to 0 \quad \text{as } n \to \infty.$$
Usage of the lemma:
Definition of the Itô-integral for f ∈ V: find a sequence φn as above and put
$$\int_S^T f\,dB_t = \lim_{n \to \infty} \int_S^T \varphi_n\,dB_t$$
On the right we are integrating an elementary function, i.e. we have a sum.
Questions:
(1) Why is this well defined?
(2) Why does the limit exist in the first place?
Answers:
(2) Let
$$\psi_n = \int_S^T \varphi_n\,dB_t$$
We consider:
$$E\big[(\psi_n - \psi_m)^2\big] = E\Big[\Big(\int_S^T \varphi_n\,dB_t - \int_S^T \varphi_m\,dB_t\Big)^2\Big] =$$
The difference between elementary functions is a new elementary function, and
the integral of elementary functions is linear. Apply the Itô isometry:
$$E\Big[\Big(\int_S^T (\varphi_n - \varphi_m)\,dB_t\Big)^2\Big] = E\Big[\int_S^T (\varphi_n - \varphi_m)^2\,dt\Big] = E\Big[\int_S^T (\varphi_n - f + f - \varphi_m)^2\,dt\Big] \le$$
We use $(x + y)^2 \le 4x^2 + 4y^2$:
$$E\Big[\int_S^T 4(\varphi_n - f)^2 + 4(f - \varphi_m)^2\,dt\Big] \to 0$$
From the lemma we know this tends to 0.
This proves that ψn is a Cauchy sequence in $L^2(\Omega)$. From measure
theory we know that all Lᵖ spaces are complete. Since L² is complete, the
Cauchy sequence converges. Therefore the limit exists.
Answers:
(1) Why is it well defined? Let $\varphi_n \to f$ and $\Phi_n \to f$. By the Itô isometry,
$$E\Big[\Big(\int_S^T \varphi_n\,dB_t - \int_S^T \Phi_n\,dB_t\Big)^2\Big] = E\Big[\Big(\int_S^T (\varphi_n - \Phi_n)\,dB_t\Big)^2\Big] = E\Big[\int_S^T (\varphi_n - \Phi_n)^2\,dt\Big] = E\Big[\int_S^T (\varphi_n - f + f - \Phi_n)^2\,dt\Big] \le$$
Again, since $(x + y)^2 \le 4x^2 + 4y^2 = 4(x^2 + y^2)$,
$$\le 4\, E\Big[\int_S^T (\varphi_n - f)^2\,dt + \int_S^T (f - \Phi_n)^2\,dt\Big] \to 0$$
This means they must converge to the same object.
Proof of Lemma.
• Step 1:
Assume that g ∈ V is bounded and continuous. Put
$$\varphi_n(t, \omega) = \sum_j g(t_j, \omega)\, I_{[t_j, t_{j+1})}(t).$$
Since g is continuous for each ω, $\int_S^T (g - \varphi_n)^2\,dt \to 0$ pointwise in ω as n → ∞.
Moreover,
$$\int_S^T (g - \varphi_n)^2\,dt \le \int_S^T (2C)^2\,dt \le \widetilde{C}$$
where |g| ≤ C since g is bounded, and $\widetilde{C}$ is some constant. So, by the
bounded convergence theorem we know that
$$E\Big[\int_S^T (g - \varphi_n)^2\,dt\Big] \to 0.$$
• Step 2:
What if f is not continuous? Assume h ∈ V is bounded, but not necessarily
continuous. Then we can find gn ∈ V bounded and continuous such that
$$E\Big[\int_S^T (h - g_n)^2\,dt\Big] \to 0 \quad \text{as } n \to \infty.$$
Proof:
Let ψ ≥ 0 be a function such that
(i) ψ(x) = 0 for x ≤ −1 and x ≥ 0,
(ii) ψ is continuous,
(iii) $\int_{-\infty}^{\infty} \psi(x)\,dx = 1$ (a probability density).
Put $\psi_n(x) = n\psi(nx)$, n = 1, 2, . . . (an approximation of unity) and consider
$$g_n(t, \omega) = \int_0^t \psi_n(s - t)\, h(s, \omega)\,ds. \qquad (*)$$
Then gn is continuous (trivial; Lebesgue integrals are continuous in the
upper limit), bounded (because each ψn in (*) integrates to 1), and adapted
(which is very difficult to prove). Because ψ has its support in [−1, 0], the kernel
$\psi_n(s - t)$ vanishes unless s ≤ t, so gn only uses values of h up to time t.
================================ 29/01-2010
Exercise 2.3
If {Hi} is a family of σ-algebras, prove that $H = \bigcap_{i \in I} H_i$ is a σ-algebra.
a) We check that the three properties of a σ-algebra are met.
$$\forall i \in I:\ \emptyset \in H_i \ \Rightarrow\ \emptyset \in \bigcap_{i \in I} H_i \qquad (\Sigma_1)$$
$$F \in \bigcap_{i \in I} H_i \ \Rightarrow\ \forall i \in I,\ F \in H_i \ \Rightarrow\ \forall i \in I,\ F^c \in H_i \ \Rightarrow\ F^c \in \bigcap_{i \in I} H_i \qquad (\Sigma_2)$$
$$F_k \in \bigcap_{i \in I} H_i,\ k = 1, 2, \ldots \ \Rightarrow\ \forall i \in I,\ F_k \in H_i,\ k = 1, 2, \ldots \ \Rightarrow\ \forall i \in I,\ \bigcup_{k=1}^{\infty} F_k \in H_i \ \Rightarrow\ \bigcup_{k=1}^{\infty} F_k \in \bigcap_{i \in I} H_i \qquad (\Sigma_3)$$
We see the three properties are fulfilled. Thus $H = \bigcap_{i \in I} H_i$ is a σ-algebra.
b) We are going to show by counterexample that the union of σ-algebras
generally isn't a σ-algebra. Let
$$\mathcal{F}_1 = \big\{\emptyset, \{1\}, \{2, 3\}, \{1, 2, 3\}\big\}, \qquad \mathcal{F}_2 = \big\{\emptyset, \{2\}, \{1, 3\}, \{1, 2, 3\}\big\},$$
which both are σ-algebras. We examine the union:
$$\mathcal{F}_1 \cup \mathcal{F}_2 = \big\{\emptyset, \{1\}, \{2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\}\big\}$$
We have $\{1\} \in \mathcal{F}_1 \cup \mathcal{F}_2$ and $\{2\} \in \mathcal{F}_1 \cup \mathcal{F}_2$ but $\{1, 2\} \notin \mathcal{F}_1 \cup \mathcal{F}_2$; that is, it
isn't closed under unions, so $G = \bigcup_{i \in I} \mathcal{F}_i$ is not necessarily a σ-algebra.
Exercise 2.6
Let A1, A2, . . . be sets in F such that $\sum_{k=1}^{\infty} P(A_k) < \infty$. Show that:
$$P\Big(\bigcap_{m=1}^{\infty} \bigcup_{k=m}^{\infty} A_k\Big) = 0.$$
We define
$$B_m = \bigcup_{k=m}^{\infty} A_k$$
so that
$$B_1 = \bigcup_{k=1}^{\infty} A_k,\quad B_2 = \bigcup_{k=2}^{\infty} A_k,\ \ldots$$
We see that Bm gets smaller, so we have that $B_m \searrow \bigcap_{m=1}^{\infty} B_m$. By property
(4) of measures,
$$\lim_{m \to \infty} P(B_m) = P\Big(\bigcap_{m=1}^{\infty} B_m\Big)$$
Below we repeat the properties of measures.
Measures
Let µ be a measure on (Ω, F). Then:
(1) If Ai (1 ≤ i ≤ n) are disjoint and n is finite,
$$\mu\Big(\bigcup_{i=1}^{n} A_i\Big) = \sum_{i=1}^{n} \mu(A_i)$$
(2) $\forall A, B \in \mathcal{F}:\ A \subset B \Rightarrow \mu(A) \le \mu(B)$
(3) If Bn is increasing in F and convergent to B ∈ F, then µ(B) = limₙ µ(Bn). In
other words:
$$B_n \nearrow B \ \Rightarrow\ \mu(B_n) \to \mu(B)$$
(4) Similarly, if µ(C1) < +∞,
$$C_n \searrow C \ \Rightarrow\ \mu(C_n) \searrow \mu(C),\quad n \to \infty$$
So again, by property (4) of measures, we know that:
$$P\Big(\bigcap_{m=1}^{\infty} \bigcup_{k=m}^{\infty} A_k\Big) = P\Big(\bigcap_{m=1}^{\infty} B_m\Big) = \lim_{m \to \infty} P(B_m) = \lim_{m \to \infty} P\Big(\bigcup_{k=m}^{\infty} A_k\Big) \le \varlimsup_{m \to \infty} \sum_{k=m}^{\infty} P(A_k)$$
using countable subadditivity, and where $\varlimsup$ is the lim sup (which always
exists). Now since $\sum_{k=1}^{\infty} P(A_k) < \infty$, the tail $\sum_{k=m}^{\infty} P(A_k)$ of the convergent
series tends to 0 as m → ∞. Result proven.
Exercise 2.7
a) G1, G2, . . . , Gn are disjoint subsets of Ω such that
$$\Omega = \bigcup_{i=1}^{n} G_i.$$
We want to prove that the family G consisting of ∅ together with all unions of
some (or all) of the sets Gi is a σ-algebra on Ω. We verify the three
requirements for σ-algebras.
∅ ∈ G by construction. $(\Sigma_1)$
$$F = \bigcup_{k=1}^{p} G_{i_k} \in \mathcal{G} \ \Rightarrow\ F^c = \Big(\bigcup_{k=1}^{p} G_{i_k}\Big)^c = \bigcap_{k=1}^{p} G_{i_k}^c = \text{the union of the remaining } G_i\text{'s} \in \mathcal{G} \qquad (\Sigma_2)$$
(since the Gi partition Ω).
$$F_1, F_2, \ldots \in \mathcal{G} \ \Rightarrow\ F_1 \cup F_2 \cup \ldots \in \mathcal{G} \qquad (\Sigma_3)$$
which is satisfied by the nature of this exercise: a union of unions of Gi's is
again a union of Gi's.
b) Prove that any finite σ-algebra is of the type described in a).
Let F be a finite σ-algebra of subsets of Ω. For every x ∈ Ω, define
$$F_x = \bigcap \big\{F \in \mathcal{F} \mid x \in F\big\}$$
(Since F is finite, this is a finite intersection.) Fx ∈ F, and Fx is the smallest
set in F which contains x: if A ∈ F and x ∈ A, then Fx ⊆ A (since Fx is the
intersection, i.e. the smallest).
For given x, y ∈ Ω, either Fx ∩ Fy = ∅ or Fx = Fy.
We argue by contradiction: assume that Fx ∩ Fy ≠ ∅ and Fx ≠ Fy,
in other words Fx \ Fy ≠ ∅. Two cases:
1) x ∈ Fx ∩ Fy. Then Fx ∩ Fy ∈ F and Fx ∩ Fy is strictly contained in Fx. But
this is impossible (Fx is the smallest set in F containing x).
2) x ∈ Fx \ Fy. Then Fx \ Fy is a set in F which contains x, and Fx \ Fy
is strictly smaller than Fx, but that is impossible.
Let x1, . . . , xm ∈ Ω be such that Fx1, . . . , Fxm form a partition of Ω. Then any
F ∈ F is a disjoint union of some of the Fxi's, and we have the situation
in a).
c) We have X : Ω → R, F-measurable. Then
$$\forall C \in \mathbb{R}:\ X^{-1}(\{C\}) = \{\omega \in \Omega \mid X(\omega) = C\} \in \mathcal{F}$$
(by definition of the measurability of X). Therefore X takes each constant
value C on a finite union of the sets Fi = Fxi (as in part b).
Exercise 2.10
Xt is stationary if $X_t \sim X_{t+h}$ for all h > 0 (i.e. they have the same distribution).
$$\forall t \ge 0:\ B_{t+h} - B_t \sim N(0, h)\ (h \text{ is the variance}) \ \Rightarrow\ \{B_{t+h} - B_t\}_{h \ge 0} \text{ has the same distribution } \forall t \ \Rightarrow\ B_t \text{ has stationary increments.}$$
Exercise 2.14
K ⊂ Rⁿ has zero n-dimensional Lebesgue measure. We shall prove that
the expected total length of time that Bt spends in K is zero. This expected
time is given by (Eˣ means that the Brownian motion begins at x, i.e. B0 = x):
$$E^x\Big[\int_0^\infty I_{\{B_t \in K\}}\,dt\Big] = \int_0^\infty E^x\big[I_{\{B_t \in K\}}\big]\,dt = \int_0^\infty P^x\big(B_t \in K\big)\,dt = \int_0^\infty (2\pi t)^{-n/2} \int_K e^{-\frac{|x - y|^2}{2t}}\,dy\,dt = 0,$$
since K has zero Lebesgue measure.
Exercise 2.18
a) Let Ω = {1, 2, 3, 4, 5} and U = {{1, 2, 3}, {3, 4, 5}}. Find the σ-algebra
generated by U, that is, the smallest σ-algebra on Ω containing U:
$$H_U = \big\{\emptyset, \{1, 2, 3\}, \{3, 4, 5\}, \{4, 5\}, \{1, 2\}, \{1, 2, 4, 5\}, \{3\}, \{1, 2, 3, 4, 5\}\big\}$$
b) If X : Ω → R and
$$X(1) = X(2) = 0,\quad X(3) = 10,\quad X(4) = X(5) = 1,$$
then X is measurable with respect to HU since for every Borel set B ⊂ R:
$$X^{-1}(B) = \begin{cases} \emptyset & \text{if } 0 \notin B,\ 10 \notin B,\ 1 \notin B \\ \{1, 2\} & \text{if } 0 \in B,\ 10 \notin B,\ 1 \notin B \\ \{3\} & \text{if } 10 \in B,\ 0 \notin B,\ 1 \notin B \\ \{4, 5\} & \text{if } 1 \in B,\ 0 \notin B,\ 10 \notin B \\ \{1, 2, 3\} & \text{if } 0 \in B,\ 10 \in B,\ 1 \notin B \\ \ldots & \text{etc.} \end{cases}$$
and each of these sets lies in HU.
c) We define Y : Ω → R by
$$Y(1) = 0,\quad Y(2) = Y(3) = Y(4) = Y(5) = 1$$
and want to find the σ-algebra generated by Y (which must be the smallest
possible σ-algebra):
$$H_Y = \big\{\emptyset, \{1\}, \{2, 3, 4, 5\}, \{1, 2, 3, 4, 5\}\big\}$$
Exercise 2.19
(Ω, F, µ) is a probability space and p ∈ [1, ∞]. A sequence of functions
$\{f_n\}_{n=1}^{\infty}$, where $f_n \in L^p(\mu)$ (the Lebesgue space), is called a Cauchy sequence
if
$$\|f_n - f_m\|_p \to 0 \quad \text{as } m, n \to \infty.$$
The sequence is convergent if there exists an $f \in L^p(\mu)$ such that $f_n \to f$ in
$L^p(\mu)$. We are going to prove that all convergent sequences are Cauchy sequences.
When a sequence is convergent, $\exists f \in L^p(\mu)$ such that $\|f_n - f\|_p \to 0$ as
n → ∞. By the triangle inequality,
$$\|f_n - f_m\|_p = \|f_n - f + f - f_m\|_p \le \|f_n - f\|_p + \|f_m - f\|_p \xrightarrow{n, m \to \infty} 0,$$
which means that fn is a Cauchy sequence.
Additional exercises 1 - Brownian Motion
Properties of Brownian Motion
Exercise 1
1) If t ≥ s, Bs and Bt − Bs are independent.
2) $E[B_t^2] = t$, ∀t.
3) $B_t \sim N(0, t)$.
4) $X \sim N(0, \sigma^2) \Rightarrow$
$$E[f(X)] = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{\mathbb{R}} f(x)\, e^{-\frac{x^2}{2\sigma^2}}\,dx$$
Let t ≥ s and compute
$$E\big[B_s^2 B_t - B_s^3\big]$$
Hint: Bt = Bt − Bs + Bs.
$$E\big[B_s^2 B_t - B_s^3\big] = E\big[B_s^2 B_t\big] - E\big[B_s^3\big] = E\big[B_s^2 (B_t - B_s + B_s)\big] - E\big[B_s^3\big] = E\big[B_s^2 (B_t - B_s)\big] + E\big[B_s^3\big] - E\big[B_s^3\big] = E\big[B_s^2\big]\, E\big[B_t - B_s\big] = s \cdot 0 = 0$$
Exercise 2
t ≥ s. Compute $E[B_t B_s]$. Hint: Bt = Bt − Bs + Bs.
$$E\big[B_t B_s\big] = E\big[(B_t - B_s + B_s) B_s\big] = E\big[B_s (B_t - B_s)\big] + E\big[B_s^2\big] = E\big[B_s\big]\, E\big[B_t - B_s\big] + s = 0 \cdot 0 + s = s$$
Exercise 3
Show that
$$E\big[(B_t - B_s)^2\big] = t - s$$
Hint: see property 2 for Brownian motion and Exercise 2.
$$E\big[(B_t - B_s)^2\big] = E\big[B_t^2 - 2 B_t B_s + B_s^2\big] = E\big[B_t^2\big] - 2E\big[B_t B_s\big] + E\big[B_s^2\big] = t - 2s + s = t - s$$
Exercise 4
Show that
$$E\big[B_r^4\big] = 3r^2$$
Hint: use integration by parts to prove that
$$E\big[B_r^4\big] = 3r\, E\big[B_r^2\big]$$
We think of f(Br) = (Br)⁴, so we can use property (4) for Brownian motion
with f(x) = x⁴:
$$E\big[B_r^4\big] = \frac{1}{\sqrt{2\pi r}} \int_{\mathbb{R}} x^4 e^{-\frac{x^2}{2r}}\,dx \quad (= I)$$
We use integration by parts,
$$\int u v' = uv - \int u' v,$$
with
$$u = x^3 \ \Rightarrow\ u' = 3x^2, \qquad v' = x e^{-\frac{x^2}{2r}} \ \Rightarrow\ v = -r e^{-\frac{x^2}{2r}}.$$
We get
$$I = \frac{1}{\sqrt{2\pi r}} \Big[-r x^3 e^{-\frac{x^2}{2r}}\Big]_{\mathbb{R}} + 3r \int_{\mathbb{R}} \frac{1}{\sqrt{2\pi r}}\, x^2 e^{-\frac{x^2}{2r}}\,dx$$
By property 5) of Brownian motion, we note that
$$\int_{\mathbb{R}} \frac{1}{\sqrt{2\pi r}}\, x^2 e^{-\frac{x^2}{2r}}\,dx = E\big[B_r^2\big]$$
For the uv part, we first note that
$$-r x^3 e^{-\frac{x^2}{2r}} = \frac{-r x^3}{e^{\frac{x^2}{2r}}},$$
which, applying L'Hôpital's rule (the exponential dominates every power), gives
$$\lim_{x \to \infty} \frac{-r x^3}{e^{\frac{x^2}{2r}}} = \lim_{x \to \infty} \frac{-3r x^2}{\frac{x}{r}\, e^{\frac{x^2}{2r}}} = \lim_{x \to \infty} \frac{-3r^2 x}{e^{\frac{x^2}{2r}}} = 0,$$
and similarly when x → −∞. Thus the boundary term vanishes, and
$$E\big[B_r^4\big] = I = 0 + 3r\, E\big[B_r^2\big] = 3r\, E\big[B_r^2\big]$$
We can now simply apply property 2 of Brownian motion, and we have shown
that
$$E\big[B_r^4\big] = 3r^2$$
Exercise 5
Assume t ≥ s. Compute
$$E\big[B_s^4 B_t^2 - 2 B_t B_s^5 + B_s^6\big]$$
We use that Bt = Bt − Bs + Bs:
$$E\big[B_s^4 B_t^2 - 2 B_t B_s^5 + B_s^6\big] = E\big[B_s^4 (B_t - B_s + B_s)^2\big] - E\big[2 B_t B_s^5\big] + E\big[B_s^6\big]$$
Some intermediate calculations:
$$(B_t - B_s + B_s)^2 = (B_t - B_s)^2 + 2 B_s (B_t - B_s) + B_s^2 = (B_t - B_s)^2 + 2 B_s B_t - 2 B_s^2 + B_s^2 = (B_t - B_s)^2 + 2 B_s B_t - B_s^2$$
This in turn shows that
$$B_s^4 (B_t - B_s + B_s)^2 = B_s^4 \big((B_t - B_s)^2 + 2 B_t B_s - B_s^2\big) = B_s^4 (B_t - B_s)^2 + 2 B_t B_s^5 - B_s^6$$
Returning to the exercise:
$$E\big[B_s^4 (B_t - B_s)^2 + 2 B_t B_s^5 - B_s^6\big] - 2E\big[B_t B_s^5\big] + E\big[B_s^6\big] = E\big[B_s^4 (B_t - B_s)^2\big] + 2E\big[B_t B_s^5\big] - E\big[B_s^6\big] - 2E\big[B_t B_s^5\big] + E\big[B_s^6\big] = E\big[B_s^4 (B_t - B_s)^2\big] = E\big[B_s^4\big]\, E\big[(B_t - B_s)^2\big] = 3s^2 (t - s),$$
where the last step follows from Exercises 3 and 4.
=============================== 05/02-2010
Step 1: g ∈ V is bounded and continuous.
$$\varphi_n(t, \omega) = \sum_j g(t_j, \omega)\, I_{(t_j, t_{j+1})}(t) \ \Longrightarrow\ E\Big[\int_S^T (g - \varphi_n)^2\,dt\Big] \to 0$$
Step 2: h ∈ V is bounded.
$$g_n(t, \omega) = \int_0^t \psi_n(s - t)\, h(s, \omega)\,ds \ \Longrightarrow\ E\Big[\int_S^T (h - g_n)^2\,dt\Big] \to 0$$
Step 3: f ∈ V.
$$f_n(t, \omega) = \begin{cases} -n & \text{if } f(t, \omega) < -n \\ f(t, \omega) & \text{if } -n \le f(t, \omega) \le n \\ n & \text{if } f(t, \omega) > n \end{cases} \ \Longrightarrow\ E\Big[\int_S^T (f - f_n)^2\,dt\Big] \to 0$$
It is clear that $f_n(t, \omega) \to f(t, \omega)$ pointwise in ω. A trivial observation is that
$(f - f_n)^2 \le 4f^2$, so by dominated convergence (with dominating function 4f²)
we have the stated convergence.
Lemma
If f ∈ V[S, T] then we can find a sequence of elementary functions φn such that
$$E\Big[\int_S^T (f - \varphi_n)^2\,dt\Big] \to 0 \qquad (*)$$
Proof: Begin with step 3, then use step 2 and finally step 1.
Definition of the Itô-integral (Repetition)
$$\int_S^T f(s, \omega)\,dB_s(\omega) = \lim_{n \to \infty} \int_S^T \varphi_n(s, \omega)\,dB_s(\omega) \quad (\text{limit in } L^2)$$
where (the $e_j(\omega)$ are step functions):
$$\int_S^T \varphi_n(s, \omega)\,dB_s(\omega) = \sum_j e_j(\omega)\big(B_{t_{j+1}} - B_{t_j}\big).$$
We proved last time that the limit always exists (since L² is complete, and
every Cauchy sequence converges), and is independent of the choice of the
approximating sequence φn.
Properties of the Itô-integral
$$E\Big[\Big(\int_S^T f(t, \omega)\,dB_t\Big)^2\Big]$$
By definition:
$$= \lim_{n \to \infty} E\Big[\Big(\int_S^T \varphi_n(t, \omega)\,dB_t(\omega)\Big)^2\Big]$$
Elementary functions satisfy the Itô isometry, so
$$= \lim_{n \to \infty} E\Big[\int_S^T \varphi_n^2(t, \omega)\,dt\Big]$$
and the Brownian motion has disappeared. But as we saw in (∗), φn → f in L²,
so
$$= E\Big[\int_S^T f^2(t, \omega)\,dt\Big]$$
To summarize:
$$E\Big[\Big(\int_S^T f(t, \omega)\,dB_t\Big)^2\Big] = E\Big[\int_S^T f^2(t, \omega)\,dt\Big]$$
Note that the Itô isometry is valid for all functions in V, not just elementary ones!
Further properties
Mostly straightforward to verify from the definition.
(1)
$$\int_S^T f\,dB_t = \int_S^U f\,dB_t + \int_U^T f\,dB_t \quad (S < U < T)$$
(2)
$$\int_S^T (cf + g)\,dB_t = c \int_S^T f\,dB_t + \int_S^T g\,dB_t$$
(3)
$$E\Big[\int_S^T f\,dB_t\Big] = 0,$$
since
$$E\Big[\int_S^T \varphi_n(t, \omega)\,dB_t\Big] = E\Big[\sum_j e_j\big(B_{t_{j+1}} - B_{t_j}\big)\Big] = \sum_j E\big[e_j(\omega)\big]\, E\big[B_{t_{j+1}} - B_{t_j}\big] = \sum_j E\big[e_j(\omega)\big] \cdot 0 = 0$$
(4)
$$\int_S^T f\,dB_t \ \text{ is } \mathcal{F}_T\text{-measurable},$$
since no time values s, t in the expression for the integral exceed T.
Martingale properties
A filtration is a family of σ-algebras
$$\mathcal{M} = \{\mathcal{M}_t\}_{t \ge 0}$$
(i.e. we have a σ-algebra ∀t) such that:
(i) Mt ⊂ F, where F is the biggest σ-algebra on Ω.
(ii) If s < t, then Ms ⊆ Mt (as time increases we get a bigger σ-algebra).
A martingale Mt is a stochastic process with the properties:
M1) Mt is Mt-measurable.
M2) $E[|M_t|] < \infty$ ∀t.
M3) $E[M_s \mid \mathcal{M}_t] = M_t$ when s ≥ t.
The first two properties of martingales are quite obvious. The third
property is the one most used in calculations.
Example
Bt is an Ft-martingale. We check the properties.
M1) Bt is Ft-measurable; this follows from the definition.
M2) We use the Hölder inequality:
$$E\big[|B_t|\big] \le E\big[|B_t|^2\big]^{1/2} = \sqrt{t} < \infty$$
M3) For s ≥ t,
$$E\big[B_s \mid \mathcal{F}_t\big] = E\big[(B_s - B_t + B_t) \mid \mathcal{F}_t\big]$$
Here Bs − Bt is independent of Ft. The conditional expectation is linear, so:
$$= E\big[B_s - B_t \mid \mathcal{F}_t\big] + E\big[B_t \mid \mathcal{F}_t\big]$$
Since Bs − Bt is independent of Ft and Bt is Ft-measurable, we get
$$= E\big[B_s - B_t\big] + B_t = 0 + B_t = B_t$$
Doob’s martingale inequality
$$P\Big(\sup_{0 \le t \le T} |M_t| \ge \lambda\Big) \le \frac{1}{\lambda^p}\, E\big[|M_T|^p\big]$$
where p ≥ 1, T ≥ 0 and λ > 0. This inequality holds whenever t → Mt(ω) is
continuous (which we accept without proof). We use this to get uniform
convergence. This result is true for all such martingales Mt. (We note that this
inequality says something about all Mt with t ≤ T.)
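A small numerical illustration (a sketch, not from the notes; the discretized maximum only approximates the supremum): for Mt = Bt and p = 2 the inequality reads P(sup |Bt| ≥ λ) ≤ E[B_T²]/λ² = T/λ².

import numpy as np

rng = np.random.default_rng(3)
T, N, n_paths, lam = 1.0, 1000, 20000, 2.0
dB = rng.normal(0.0, np.sqrt(T / N), size=(n_paths, N))
B = np.cumsum(dB, axis=1)

p_sup = (np.abs(B).max(axis=1) >= lam).mean()   # P(sup |B_t| >= lambda)
bound = T / lam**2                              # Doob bound with p = 2
print("P(sup|B| >= 2) ~", p_sup, " <= bound", bound)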
Theorem
Let f ∈ V[0, T]. Then there exists a t-continuous version of $\int_0^t f(s, \omega)\,dB_s$,
for t ∈ [0, T].
What do we mean by a version? Xt and Yt are versions of each other if
$$P\big(\{\omega \mid X_t(\omega) = Y_t(\omega)\}\big) = 1 \quad \forall t$$
(they can differ on a set of measure 0, and this set can be different for
each t).
Proof of theorem:
This is a difficult proof. The idea is to consider:
$$I_n(t) = \int_0^t \sum_{j=1}^{N_n} e_j^n(\omega)\, I_{(t_j^n, t_{j+1}^n)}(s)\,dB_s \qquad (**)$$
We choose this since
$$\int_0^t f(s, \omega)\,dB_s \approx \int_0^t \varphi_n(s, \omega)\,dB_s,$$
which in turn is defined by (∗∗). We want to prove that the sequence (∗∗)
has a uniformly convergent subsequence. Fact from real analysis: if the fn are
all continuous functions and fn → f uniformly, then f is continuous.
Lemma
For each n, t → In(t) is an Ft-martingale. (Property i: it is Ft-measurable
since the times in the expression are less than or equal to t. Property ii:
$E[|I_n(t)|]$ is finite.) Property iii needs some verification. We want to show
that, for s ≥ t:
$$E\big[I_n(s) \mid \mathcal{F}_t\big] = I_n(t)$$
We write In(s) as two integrals:
$$E\big[I_n(s) \mid \mathcal{F}_t\big] = E\Big[\int_0^t \varphi_n(u)\,dB_u + \int_t^s \varphi_n(u)\,dB_u \ \Big|\ \mathcal{F}_t\Big]$$
Since $\int_0^t \varphi_n(u)\,dB_u$ is Ft-measurable, it equals itself under the
conditioning. Hence, by linearity of the conditional expectation:
$$= I_n(t) + E\Big[\int_t^s \varphi_n(u)\,dB_u \ \Big|\ \mathcal{F}_t\Big] \qquad (L1)$$
Now we have a slight problem, since $\int_t^s \varphi_n(u)\,dB_u$ is not independent of Ft.
However, we can rewrite it:
$$E\Big[\int_t^s \varphi_n(u)\,dB_u \ \Big|\ \mathcal{F}_t\Big] = E\Big[\sum_{t \le t_j^{(n)} \le s} e_j^{(n)}\big(B_{t_{j+1}^{(n)}} - B_{t_j^{(n)}}\big) \ \Big|\ \mathcal{F}_t\Big] \qquad (***)$$
In this form, we see that $\big(B_{t_{j+1}^{(n)}} - B_{t_j^{(n)}}\big)$ is independent of $\mathcal{F}_{t_j^{(n)}}$. We will
use two facts about conditional expectations:
$$\mathcal{A} \subseteq \mathcal{B} \ \Rightarrow\ E\big[E[X \mid \mathcal{B}] \mid \mathcal{A}\big] = E\big[X \mid \mathcal{A}\big],$$
and, if X is A-measurable,
$$E\big[XY \mid \mathcal{A}\big] = X\, E\big[Y \mid \mathcal{A}\big].$$
Returning to (∗∗∗), we use these properties and get:
$$E\Big[\sum_{t \le t_j^{(n)} \le s} e_j^{(n)}\, E\big[B_{t_{j+1}^{(n)}} - B_{t_j^{(n)}} \mid \mathcal{F}_{t_j^{(n)}}\big] \ \Big|\ \mathcal{F}_t\Big]$$
Now $\big(B_{t_{j+1}^{(n)}} - B_{t_j^{(n)}}\big)$ is independent of $\mathcal{F}_{t_j^{(n)}}$, so:
$$E\big[B_{t_{j+1}^{(n)}} - B_{t_j^{(n)}} \mid \mathcal{F}_{t_j^{(n)}}\big] = E\big[B_{t_{j+1}^{(n)}} - B_{t_j^{(n)}}\big] = 0$$
Since this is equal to zero, every term in the sum vanishes. Returning to (L1),
we get:
$$E\big[I_n(s) \mid \mathcal{F}_t\big] = I_n(t) + 0 = I_n(t).$$
Thus we have shown that all
$$I_n(t) = \int_0^t \varphi_n(s, \omega)\,dB_s(\omega)$$
(which are t-continuous) are Ft -martingales. For such I’s, In − Im are also
Ft -martingales (and so are any linear combinations of martingales).
We use Doob's martingale inequality with p = 2, λ = ǫ and get
$$P\Big(\sup_{0 \le t \le T} |I_n(t, \omega) - I_m(t, \omega)| \ge \epsilon\Big) \le \frac{1}{\epsilon^2}\, E\big[|I_n - I_m|^2\big]$$
We have φn → f ∈ V. By construction of the Itô integral, (In) is a Cauchy
sequence, so
$$\frac{1}{\epsilon^2}\, E\big[|I_n - I_m|^2\big] \to 0 \quad \text{as } m, n \to \infty.$$
Then we can find a subsequence $I_{n_k}$: choosing k and setting ǫ = 2⁻ᵏ,
$$P\Big(\sup_{0 \le t \le T} |I_{n_{k+1}} - I_{n_k}| > 2^{-k}\Big) \le 2^{-k}.$$
We set:
$$A_k = \Big\{\omega \ \Big|\ \sup_{0 \le t \le T} |I_{n_{k+1}} - I_{n_k}| > 2^{-k}\Big\}.$$
Summing over all k:
$$\sum_{k=1}^{\infty} P(A_k) \le \sum_{k=1}^{\infty} 2^{-k} = 1 < \infty$$
We use the Borel-Cantelli lemma: if $\sum_{k=1}^{\infty} P(A_k) < \infty$, then
$$P\Big(\bigcap_{m=1}^{\infty} \bigcup_{k=m}^{\infty} A_k\Big) = 0,$$
where
$$\bigcap_{m=1}^{\infty} \bigcup_{k=m}^{\infty} A_k = \big\{\omega \mid \omega \text{ is an element of infinitely many } A_k\text{'s}\big\}.$$
Then almost all ω lie in $\big(\bigcap_{m=1}^{\infty} \bigcup_{k=m}^{\infty} A_k\big)^c$; in other words, with
probability 1 we are not inside that set.
If $\omega \in \big(\bigcap_{m=1}^{\infty} \bigcup_{k=m}^{\infty} A_k\big)^c$, there exists a number k1(ω) such that if
k ≥ k1(ω), then ω ∉ Ak. But this means that
$$\sup_{0 \le t \le T} |I_{n_{k+1}} - I_{n_k}| \le 2^{-k}$$
for all k ≥ k1(ω), and the bound does not depend on t. That is, we have uniform
convergence: $I_{n_k}(t) \to \int_0^t f(s, \omega)\,dB_s$ uniformly, so $\int_0^t f(s, \omega)\,dB_s$ is
t-continuous except for a set of ω's with measure 0. On that set we can simply
define the integral of f to be 0. Theorem proven!
We usually assume we are working with the continuous version.
Corollary
The mapping $t \mapsto \int_0^t f(s, \omega)\,dB_s$ is an Ft-martingale (because the In(t) are
martingales and $I_n(t) \to \int_0^t f(s, \omega)\,dB_s$ in L²).
Proof:
We want to show that:
$$E\Big[\Big(E\Big[\int_0^t f\,dB \ \Big|\ \mathcal{F}_s\Big] - \int_0^s f\,dB\Big)^2\Big] = 0.$$
If this is true, $t \mapsto \int_0^t f\,dB$ is a martingale. We begin with the expression
and work our way through. We add and subtract $E\big[\int_0^t \varphi_n\,dB \mid \mathcal{F}_s\big]$, and use
the linearity of conditional expectation:
$$= E\Big[\Big(E\Big[\int_0^t f\,dB - \int_0^t \varphi_n\,dB \ \Big|\ \mathcal{F}_s\Big] + E\Big[\int_0^t \varphi_n\,dB \ \Big|\ \mathcal{F}_s\Big] - \int_0^s f\,dB\Big)^2\Big]$$
Since In is a martingale, $E\big[\int_0^t \varphi_n\,dB \mid \mathcal{F}_s\big] = \int_0^s \varphi_n\,dB$, so this equals
$$E\Big[\Big(E\Big[\int_0^t (f - \varphi_n)\,dB \ \Big|\ \mathcal{F}_s\Big] + \int_0^s (\varphi_n - f)\,dB\Big)^2\Big] \le 4\Big(E\Big[E\Big[\int_0^t (f - \varphi_n)\,dB \ \Big|\ \mathcal{F}_s\Big]^2\Big] + E\Big[\Big(\int_0^s (\varphi_n - f)\,dB\Big)^2\Big]\Big)$$
Since φn → f, by the Itô isometry and the construction,
$$\le 4\Big(E\Big[\Big(\int_0^t (f - \varphi_n)\,dB\Big)^2\Big] + E\Big[\Big(\int_0^s (\varphi_n - f)\,dB\Big)^2\Big]\Big),$$
which tends to 0 as n → ∞. Corollary proven.
Extensions of the Itô-integral
First: we have assumed that f ∈ V (measurable, adapted, etc.). Ft-adaptedness
can be relaxed. Idea: assume that Bt is Ht-measurable and an Ht-martingale.
Then it is possible to construct $\int_0^t f\,dB$ when f is Ht-adapted (where Ht
is "big").
Why is this important?
Assume B1 and B2 are two independent Brownian motions. Let
$$\mathcal{H}_t = \sigma\big(B_1(s),\ s \le t;\ B_2(s),\ s \le t\big),$$
i.e. a filtration generated by both Brownian motions. We wish to calculate
$$\int B_1\,dB_2 \quad \text{or} \quad \int B_2\,dB_1,$$
and the first extension covers this.
Second: we have assumed
$$E\Big[\int_0^t f^2(s, \omega)\,ds\Big] < \infty.$$
This assumption can be relaxed to the case where
$$P\Big(\int_0^t f^2(s, \omega)\,ds < \infty\Big) = 1.$$
The Itô/Stratonovich Integrals
Itô integral:
$$\int f\,dB \ \Leftarrow\ \sum_j f(t_j)\big(B_{t_{j+1}} - B_{t_j}\big)$$
Stratonovich integral (denoted with a ◦):
$$\int f \circ dB \ \Leftarrow\ \sum_j f\Big(\frac{t_j + t_{j+1}}{2}\Big)\big(B_{t_{j+1}} - B_{t_j}\big)$$
Assume that the Bⁿ(t, ω) are continuously differentiable and Bⁿ(t, ω) → B(t, ω)
uniformly.
For each ω we can solve a DE (differential equation), i.e. find Xt such that
(Xt depends on n; this is omitted so we can skip the indices):
$$\frac{dX_t}{dt} = b(t, X_t) + \sigma(t, X_t)\,\frac{dB_t^{(n)}}{dt}$$
If we have found such an Xt, we must have (integrating both sides):
$$X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,\frac{dB_s^{(n)}}{ds}\,ds = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dB_s^{(n)}$$
We try to pass this to the limit, i.e. we let n → ∞ (possible to prove). Since
$B_s^{(n)}$ is differentiable and Bⁿ(t, ω) → B(t, ω) as n → ∞:
$$X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s) \circ dB_s,$$
where we note that we get the Stratonovich integral in the limit. This seems
to indicate that the Stratonovich integral is the "natural choice". However,
the limit can also be written
$$X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \frac{1}{2}\frac{\partial \sigma}{\partial x}(s, X_s)\,\sigma(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dB_s,$$
and this time we have the Itô integral. This means that:
$$\int_0^t \frac{1}{2}\frac{\partial \sigma}{\partial x}(s, X_s)\,\sigma(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dB_s = \int_0^t \sigma(s, X_s) \circ dB_s$$
Suppose
$$X_t = X_0 + \int_0^t b(s, X_s)\,ds + \int_0^t \sigma(s, X_s) \circ dB_s$$
and that this is a "good model" for a natural phenomenon (like stock prices).
If we put
$$\tilde{b}(t, X_t) = b(t, X_t) + \frac{1}{2}\frac{\partial \sigma}{\partial x}(t, X_t)\,\sigma(t, X_t),$$
then the Itô version
$$X_t = X_0 + \int_0^t \tilde{b}(s, X_s)\,ds + \int_0^t \sigma(s, X_s)\,dB_s$$
is an equally "good model".
Whatever you can do with the Stratonovich integral you can do with Itô and
vice versa. Itô is occasionally easier to use, and is generally preferred over
the Stratonovich integral.
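A numerical sketch of the difference (not in the notes; grid, seed and the use of the endpoint average as the "midpoint" value of B are my choices): for the integrand B itself, the Itô sums approximate ½B_T² − ½T while the Stratonovich sums approximate ½B_T², the gap of T/2 being an instance of the correction term above.

import numpy as np

rng = np.random.default_rng(4)
T, N = 1.0, 100000
dB = rng.normal(0.0, np.sqrt(T / N), size=N)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito = np.sum(B[:-1] * dB)                      # left endpoint: f(t_j) * dB_j
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)    # midpoint value of B
print("Ito sum      :", ito, " vs ", 0.5 * B[-1]**2 - 0.5 * T)
print("Stratonovich :", strat, " vs ", 0.5 * B[-1]**2)
print("difference   :", strat - ito, " ~ T/2 =", T / 2)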
=============================== 12/02-2010
Exercise 3.1
Using the definition of the Itô integral, we want to prove that
$$\int_0^t s\,dB_s = t B_t - \int_0^t B_s\,ds.$$
We have the partition
$$0 = t_0 < \ldots < t_j < t_{j+1} < \ldots < t_n = t$$
and
$$\Delta B_j = B_{t_{j+1}} - B_{t_j}.$$
This means:
$$t B_t = \sum_j \Delta(t_j B_{t_j}) = \sum_j \big(t_{j+1} B_{t_{j+1}} - t_j B_{t_j}\big).$$
We note that:
$$\Delta(t_j B_{t_j}) = t_{j+1} B_{t_{j+1}} - t_j B_{t_j} = t_{j+1} B_{t_{j+1}} - t_j B_{t_{j+1}} + t_j B_{t_{j+1}} - t_j B_{t_j} = B_{t_{j+1}}\big(t_{j+1} - t_j\big) + t_j\big(B_{t_{j+1}} - B_{t_j}\big) = B_{t_{j+1}}\,\Delta t_j + t_j\,\Delta B_j$$
That is,
$$t B_t = \sum_j t_j\,\Delta B_j + \sum_j B_{t_{j+1}}\,\Delta t_j,$$
and, when we let $\max_j \Delta t_j \to 0$, we get
$$t B_t = \int_0^t s\,dB_s + \int_0^t B_s\,ds \ \Longrightarrow\ \int_0^t s\,dB_s = t B_t - \int_0^t B_s\,ds.$$
Exercise 3.3
If Xt : Ω → R is a stochastic process, we let $H_t = H_t^{(X)}$ denote the σ-algebra
generated by $\{X_s;\ s \le t\}$, i.e. $\{H_t^{(X)}\}_{t \ge 0}$ is the filtration of the process
$\{X_t\}_{t \ge 0}$.
(a) Verify: if Xt is a martingale w.r.t. some filtration $\{N_t\}_{t \ge 0}$, then Xt is
also a martingale w.r.t. its own filtration.
An important observation here is that the filtration Ht generated by the
process is the smallest possible filtration, so Ht ⊆ Nt. Using this, we
know that for s > t,
$$E\big[X_s \mid H_t\big] = E\big[E[X_s \mid N_t] \mid H_t\big] = E\big[X_t \mid H_t\big] = X_t$$
The two first properties of martingales are apparent since Xt already is a
martingale.
————————————
An important property of conditional expectation:
$$E\big[E[X \mid \mathcal{F}]\big] = E[X] \qquad (\star)$$
Proof: ∀B ∈ F,
$$\int_B E[X(\omega) \mid \mathcal{F}]\,dP(\omega) = \int_B X(\omega)\,dP(\omega)$$
For B = Ω:
$$E\big[E[X(\omega) \mid \mathcal{F}]\big] = E[X]$$
————————————
(b) Show that if Xt is a martingale w.r.t. Ht, then
$$E[X_t] = E[X_0] \quad \forall t \ge 0.$$
Since Xt is a martingale, we know that $E[X_t \mid H_0] = X_0$. So,
$$E[X_0] = E\big[E[X_t \mid H_0]\big] \stackrel{(\star)}{=} E[X_t].$$
(c) The property in (b) is automatic if the process is a martingale. The
converse is not necessarily true. This is apparent if we consider the process
Xt = Bt³. Then $E[B_t^3] = 0 = E[B_0^3]$, but Xt is not a martingale. We verify
this by checking property M3 for martingales. Assume s > t:
$$E\big[B_s^3 \mid \mathcal{F}_t\big] = E\big[(B_s - B_t)^3 + 3 B_s^2 B_t - 3 B_t^2 B_s + B_t^3 \mid \mathcal{F}_t\big]$$
We can factor out the Bt's, since they are Ft-measurable, and we have
$E[(B_s - B_t)^3] = 0$; this is separately verified below, after 3.18. Expanding
$B_s^2 = (B_s - B_t + B_t)^2$ and simplifying, one finds
$$E\big[B_s^3 \mid \mathcal{F}_t\big] = 3 B_t\, E\big[(B_s - B_t)^2\big] + B_t^3 = 3 B_t (s - t) + B_t^3 \ne B_t^3.$$
This means
$$E\big[X_s \mid \mathcal{F}_t\big] \ne X_t$$
for this process, and then it cannot be a martingale.
Exercise 3.4
We are going to determine whether these processes are martingales.
(i) Xt = Bt + 4t.
The first property of martingales is often trivial, so when you check if
something is a martingale you usually begin with property 2. In this case,
even property 2 is quite obvious, so we proceed to the third property. ∀s < t:
$$E\big[B_t + 4t \mid \mathcal{F}_s\big] = 4t + E\big[B_t \mid \mathcal{F}_s\big] = 4t + B_s \ne 4s + B_s \qquad (M_3)$$
The property is not met, and therefore this process is not a martingale.
(ii) Xt = Bt²
$$E[X_t] = E[B_t^2] = t \ne 0 = E[B_0^2] = E[X_0]$$
From 3.3b) we know that any martingale must have the same expected value
for all t. This is not the case here, so this is not a martingale.
(iii) $X_t = t^2 B_t - 2\int_0^t s B_s\,ds$.
Property M2 is fulfilled, and we consider M3. For s < t:
$$E\big[X_t \mid \mathcal{F}_s\big] = E\big[t^2 B_t \mid \mathcal{F}_s\big] - 2E\Big[\int_0^t u B_u\,du \ \Big|\ \mathcal{F}_s\Big] = t^2 B_s - 2E\Big[\int_0^s u B_u\,du \ \Big|\ \mathcal{F}_s\Big] - 2E\Big[\int_s^t u B_u\,du \ \Big|\ \mathcal{F}_s\Big]$$
By splitting the integral, the first part is Fs-measurable. For the integral
from s to t we may interchange the conditional expectation and the integral:
$$= t^2 B_s - 2\int_0^s u B_u\,du - 2\int_s^t u\, E\big[B_u \mid \mathcal{F}_s\big]\,du = t^2 B_s - 2\int_0^s u B_u\,du - 2 B_s \int_s^t u\,du$$
$$= t^2 B_s - 2\int_0^s u B_u\,du - 2 B_s \cdot \tfrac{1}{2}\big(t^2 - s^2\big) = t^2 B_s - 2\int_0^s u B_u\,du - t^2 B_s + s^2 B_s = s^2 B_s - 2\int_0^s u B_u\,du = X_s$$
Property M3 is met, and Xt is a martingale.
(iv) Xt = B1(t)B2(t) (a 2-dimensional Brownian motion). Assume s < t, and
try to verify the property
$$E\big[B_1(t) B_2(t) \mid \mathcal{F}_s\big] = B_1(s) B_2(s). \qquad (M_3)$$
Some intermediate calculations, using a little trick:
$$\big(B_1(t) - B_1(s)\big)\big(B_2(t) - B_2(s)\big) = B_1(t) B_2(t) - B_1(t) B_2(s) - B_1(s) B_2(t) + B_1(s) B_2(s)$$
Rearranging, and adding and subtracting B1(s)B2(s), this can be rewritten in
the equivalent form:
$$B_1(t) B_2(t) = \big(B_1(t) - B_1(s)\big)\big(B_2(t) - B_2(s)\big) + B_1(s)\big(B_2(t) - B_2(s)\big) + B_2(s)\big(B_1(t) - B_1(s)\big) + B_1(s) B_2(s)$$
Now, returning to the exercise, we can rewrite $E[B_1(t) B_2(t) \mid \mathcal{F}_s]$, when
s < t, as
$$E\big[(B_1(t) - B_1(s))(B_2(t) - B_2(s)) \mid \mathcal{F}_s\big] + \underbrace{E\big[B_1(s)(B_2(t) - B_2(s)) \mid \mathcal{F}_s\big]}_{= 0} + \underbrace{E\big[B_2(s)(B_1(t) - B_1(s)) \mid \mathcal{F}_s\big]}_{= 0} + E\big[B_1(s) B_2(s) \mid \mathcal{F}_s\big]$$
We use that B1(s)B2(s) is Fs-measurable, so the last term equals B1(s)B2(s).
For the first term we have independence (the increments are independent of Fs,
and the two Brownian motions are independent of each other), so:
$$E\big[B_1(t) - B_1(s)\big]\, E\big[B_2(t) - B_2(s)\big] + B_1(s) B_2(s) = B_1(s) B_2(s)$$
Thus we have shown this is a martingale.
Exercise 3.5
Let Mt = Bt² − t. Prove this is a martingale w.r.t. the filtration Ft.
Properties M1 and M2 are obvious. We check M3 for t > s, using
Bt = Bt − Bs + Bs:
$$E\big[M_t \mid \mathcal{F}_s\big] = E\big[B_t^2 - t \mid \mathcal{F}_s\big] = E\big[B_t^2 \mid \mathcal{F}_s\big] - t = E\big[(B_t - B_s)^2 - B_s^2 + 2 B_t B_s \mid \mathcal{F}_s\big] - t$$
$$= E\big[(B_t - B_s)^2\big] - E\big[B_s^2 \mid \mathcal{F}_s\big] + 2E\big[B_t B_s \mid \mathcal{F}_s\big] - t = (t - s) - B_s^2 + 2 B_s^2 - t = B_s^2 - s = M_s$$
All the properties for a martingale are met, and Mt is a martingale.
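A minimal Monte Carlo sanity check (a sketch, not from the notes; s = 0.5, t = 1 and the test function are arbitrary choices). A necessary condition for the martingale property is E[(Mt − Ms) h(Bs)] = 0 for Fs-measurable test functions h; we try h(x) = x².

import numpy as np

rng = np.random.default_rng(5)
n = 200000
B_s = rng.normal(0.0, np.sqrt(0.5), size=n)          # B at s = 0.5
B_t = B_s + rng.normal(0.0, np.sqrt(0.5), size=n)    # B at t = 1.0
M_s = B_s**2 - 0.5
M_t = B_t**2 - 1.0

print("E[M_t] =", M_t.mean(), " E[M_s] =", M_s.mean())            # both ~ 0
print("E[(M_t - M_s) * B_s^2] =", ((M_t - M_s) * B_s**2).mean())  # ~ 0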
Exercise 3.18
Prove that
$$M_t = \exp\big\{\sigma B_t - \tfrac{1}{2}\sigma^2 t\big\}$$
is an Ft-martingale.
Properties M1 and M2 are simple, and we verify M3 for t > s:
$$E\big[e^{\sigma B_t - \frac{1}{2}\sigma^2 t} \mid \mathcal{F}_s\big] =$$
We add and subtract σBs in the exponent:
$$E\big[e^{\sigma B_t - \sigma B_s + \sigma B_s - \frac{1}{2}\sigma^2 t} \mid \mathcal{F}_s\big] = E\big[e^{\sigma(B_t - B_s)}\, e^{\sigma B_s - \frac{1}{2}\sigma^2 t} \mid \mathcal{F}_s\big] =$$
We have measurability and independence, and can use the properties of
conditional expectation:
$$e^{\sigma B_s - \frac{1}{2}\sigma^2 t}\, E\big[e^{\sigma(B_t - B_s)}\big]$$
Using the Laplace transform (moment generating function) of a normal variable,
$E\big[e^{\sigma(B_t - B_s)}\big] = e^{\frac{1}{2}\sigma^2 (t - s)}$:
$$e^{\sigma B_s - \frac{1}{2}\sigma^2 t}\, e^{\frac{1}{2}\sigma^2 (t - s)} = e^{\sigma B_s - \frac{1}{2}\sigma^2 s} = M_s$$
All the properties are met, and Mt is a martingale.
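The same style of check for the exponential martingale (a sketch; σ and the times are arbitrary choices): E[Mt] should equal M0 = 1 for every t, which is exactly the Laplace-transform identity used above.

import numpy as np

rng = np.random.default_rng(6)
sigma, n = 1.5, 500000
for t in (0.5, 1.0, 2.0):
    B_t = rng.normal(0.0, np.sqrt(t), size=n)
    M_t = np.exp(sigma * B_t - 0.5 * sigma**2 * t)
    print("t =", t, " E[M_t] ~", M_t.mean())   # all approximately 1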
Verification for 3.3c): $E[(B_t - B_s)^3] = 0$.
$$E\big[(B_t - B_s)^3\big] = E\big[B_t^3 - B_s^3 - 3 B_t^2 B_s + 3 B_t B_s^2\big] = E\big[B_t^3\big] - E\big[B_s^3\big] - 3E\big[B_t^2 B_s\big] + 3E\big[B_t B_s^2\big]$$
Here the expectations of the first two terms are zero (see below). Writing
Bt = Bt − Bs + Bs in the two remaining terms and expanding, every term is of
the form $B_s^k$ with k odd or $B_s^k (B_t - B_s)$, and all of these have expectation 0
by the vanishing odd moments and independence. Hence $E[(B_t - B_s)^3] = 0$.
In the course of this verification, we used $E[B_t^3] = 0$. This requires some
justification. If we regard $E[B_t^3]$ as E[f(X)] for f(x) = x³, we can use
property 4 of Brownian motion (on page 14):
$$E\big[B_t^3\big] = \int_{\mathbb{R}} x^3\, \frac{e^{-\frac{x^2}{2t}}}{\sqrt{2\pi t}}\,dx \quad (= I)$$
We integrate by parts with
$$u = x^2 \ \Rightarrow\ u' = 2x, \qquad v' = \frac{x e^{-\frac{x^2}{2t}}}{\sqrt{2\pi t}} \ \Rightarrow\ v = -t\, \frac{e^{-\frac{x^2}{2t}}}{\sqrt{2\pi t}},$$
so that, with $I = \int u v' = [uv] - \int u' v$:
$$I = \Big[\frac{-t x^2 e^{-\frac{x^2}{2t}}}{\sqrt{2\pi t}}\Big]_{\mathbb{R}} + \frac{2t}{\sqrt{2\pi t}} \int_{\mathbb{R}} x\, e^{-\frac{x^2}{2t}}\,dx$$
The boundary term tends to zero, so we are left with
$$I = 0 + \frac{2t}{\sqrt{2\pi t}} \Big[-t\, e^{-\frac{x^2}{2t}}\Big]_{\mathbb{R}} = 0,$$
because the exponential function with a negative square in the exponent tends
to zero for both large positive and large negative x. Therefore $E[B_t^3] = 0$.
Exercise 3.8
Let Y be a real-valued random variable on (Ω, F, P) such that
$$E[|Y|] < \infty.$$
We define
$$M_t = E[Y \mid \mathcal{F}_t],\quad t \ge 0. \qquad (3.8.1)$$
a) We want to show that Mt is an Ft-martingale. By definition M2 holds,
and M1 is trivial. We show that the third property holds, for s ≤ t:
$$E\big[M_t \mid \mathcal{F}_s\big] \stackrel{(3.8.1)}{=} E\big[E[Y \mid \mathcal{F}_t] \mid \mathcal{F}_s\big]$$
The filtration Ft is increasing, i.e. Fs ⊆ Ft, so
$$E\big[E[Y \mid \mathcal{F}_t] \mid \mathcal{F}_s\big] = E\big[Y \mid \mathcal{F}_s\big] = M_s,$$
and we are done. Mt is a martingale.
(b) Conversely, we let Mt, t ≥ 0, be a real-valued Ft-martingale such that
$$\sup_{t \ge 0} E\big[|M_t|^p\big] < \infty \quad \text{for some } p > 1.$$
Using Corollary 7, we want to show that there exists Y ∈ L¹(P) such that
$$M_t = E[Y \mid \mathcal{F}_t].$$
Corollary 7 states that ∃M ∈ L¹(P) such that $M_s \to M$ a.e. (P) as s → ∞ and
$$\int |M_s - M|\,dP \to 0 \quad \text{as } s \to \infty.$$
We claim that
$$E\big[\lim_{s \to \infty} M_s \mid \mathcal{F}_t\big] = \lim_{s \to \infty} E\big[M_s \mid \mathcal{F}_t\big] = M_t.$$
To justify moving the limit outside, recall the definition of conditional
expectation: for F ∈ Ft,
$$\int_F E[X(\omega) \mid \mathcal{F}_t]\,dP = \int_F X(\omega)\,dP.$$
Then, using the L¹-convergence of Ms to M:
$$\int_F E\big[\lim_{s \to \infty} M_s \mid \mathcal{F}_t\big]\,dP = \int_F \lim_{s \to \infty} M_s\,dP = \lim_{s \to \infty} \int_F M_s\,dP = \lim_{s \to \infty} \int_F E\big[M_s \mid \mathcal{F}_t\big]\,dP,$$
so with Y = M we get $M_t = E[Y \mid \mathcal{F}_t]$.
Additional exercises 2 - Itô Isometry
Exercise 1
Calculate
$$E\Big[\Big(\int_0^t (1 + B_s)\,dB_s\Big)^2\Big].$$
Hint: apply the Itô isometry,
$$E\Big[\Big(\int_S^T f(t, \omega)\,dB_t(\omega)\Big)^2\Big] = \int_S^T E\big[f^2(t, \omega)\big]\,dt.$$
We recognise the integrand as f(s, ω) = 1 + Bs. Squaring this, we get:
$$f^2 = (1 + B_s)(1 + B_s) = 1 + 2 B_s + B_s^2 \ \Rightarrow\ E[f^2] = E[1] + 2E[B_s] + E[B_s^2] = 1 + 0 + s = 1 + s$$
$$\Rightarrow\ \int_0^t (1 + s)\,ds = \Big[s + \frac{1}{2}s^2\Big]_0^t = t + \frac{1}{2}t^2$$
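A Monte Carlo check of this value (a sketch, using left-endpoint Itô sums; grid and seed are arbitrary): for t = 1 the answer is t + t²/2 = 1.5.

import numpy as np

rng = np.random.default_rng(7)
t, N, n_paths = 1.0, 1000, 20000
dB = rng.normal(0.0, np.sqrt(t / N), size=(n_paths, N))
B = np.cumsum(dB, axis=1)
B_prev = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])

I = ((1.0 + B_prev) * dB).sum(axis=1)   # int (1 + B_s) dB_s, Ito sum
print("E[I^2] ~", (I**2).mean(), " exact:", t + 0.5 * t**2)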
Exercise 2
Calculate
$$E\Big[\Big(\int_0^t B_s^2\,dB_s\Big)^2\Big].$$
We apply the Itô isometry to f(s, ω) = Bs²:
$$f^2 = B_s^4 \ \Rightarrow\ E[f^2] = E[B_s^4] = 3s^2$$
$$\int_0^t E[f^2]\,ds = \int_0^t 3s^2\,ds = \big[s^3\big]_0^t = t^3$$
Exercise 3
Calculate
$$E\Big[\Big(\int_0^t \sin(B_s)\,dB_s\Big)^2 + \Big(\int_0^t \cos(B_s)\,dB_s\Big)^2\Big].$$
We can split the expected value over the two terms and apply the Itô
isometry to each function:
$$f = \sin(B_s) \Rightarrow f^2 = \sin^2(B_s), \qquad g = \cos(B_s) \Rightarrow g^2 = \cos^2(B_s)$$
$$\int_0^t E\big[\sin^2(B_s)\big]\,ds + \int_0^t E\big[\cos^2(B_s)\big]\,ds = \int_0^t E\big[\sin^2(B_s) + \cos^2(B_s)\big]\,ds$$
Using $\cos^2(\alpha) + \sin^2(\alpha) = 1$:
$$\int_0^t E[1]\,ds = \int_0^t 1\,ds = \big[s\big]_0^t = t$$
Exercise 4
Calculate
$$E\Big[\int_0^t 1\,dB_s \cdot \int_0^t B_s^2\,dB_s\Big]$$
Hint: $xy = \frac{1}{4}\big((x + y)^2 - (x - y)^2\big)$.
Using the hint, we set $X = \int_0^t 1\,dB_s$ and $Y = \int_0^t B_s^2\,dB_s$:
$$E[XY] = \frac{1}{4} E\big[(X + Y)^2 - (X - Y)^2\big] = \frac{1}{4} E\big[(X + Y)^2\big] - \frac{1}{4} E\big[(X - Y)^2\big]$$
Substituting back, we now have:
$$\frac{1}{4} E\Big[\Big(\int_0^t (1 + B_s^2)\,dB_s\Big)^2\Big] - \frac{1}{4} E\Big[\Big(\int_0^t (1 - B_s^2)\,dB_s\Big)^2\Big]$$
We apply the Itô isometry:
$$\frac{1}{4}\int_0^t E\big[(1 + B_s^2)^2\big]\,ds - \frac{1}{4}\int_0^t E\big[(1 - B_s^2)^2\big]\,ds \qquad (4.I)$$
We square the functions:
$$(1 + B_s^2)^2 = 1 + 2 B_s^2 + B_s^4, \qquad (1 - B_s^2)^2 = 1 - 2 B_s^2 + B_s^4$$
Distribute and take the expectation:
$$E\big[1 + 2 B_s^2 + B_s^4\big] = 1 + 2s + 3s^2, \qquad E\big[1 - 2 B_s^2 + B_s^4\big] = 1 - 2s + 3s^2$$
Returning to the integral (4.I), we have:
$$\frac{1}{4}\int_0^t \big(1 + 2s + 3s^2\big)\,ds - \frac{1}{4}\int_0^t \big(1 - 2s + 3s^2\big)\,ds = \frac{1}{4}\int_0^t \big(1 + 2s + 3s^2 - 1 + 2s - 3s^2\big)\,ds = \frac{1}{4}\int_0^t 4s\,ds = \int_0^t s\,ds = \Big[\frac{1}{2}s^2\Big]_0^t = \frac{1}{2}t^2$$
Exercise 5
Prove that
$$E\Big[\int_S^T f(t, \omega)\,dB_t \cdot \int_S^T g(t, \omega)\,dB_t\Big] = \int_S^T E\big[f(t, \omega)\, g(t, \omega)\big]\,dt$$
Applying the hint from Exercise 4, with $X = \int_S^T f(t, \omega)\,dB_t$ and
$Y = \int_S^T g(t, \omega)\,dB_t$, we get:
$$E[XY] = \frac{1}{4}\Big(E\big[(X + Y)^2\big] - E\big[(X - Y)^2\big]\Big) = \frac{1}{4}\Big(E\Big[\Big(\int_S^T \big(f(t, \omega) + g(t, \omega)\big)\,dB_t\Big)^2\Big] - E\Big[\Big(\int_S^T \big(f(t, \omega) - g(t, \omega)\big)\,dB_t\Big)^2\Big]\Big)$$
By the Itô isometry:
$$= \frac{1}{4}\Big(\int_S^T E\big[(f(t, \omega) + g(t, \omega))^2\big]\,dt - \int_S^T E\big[(f(t, \omega) - g(t, \omega))^2\big]\,dt\Big) = \frac{1}{4}\int_S^T E\big[(f + g)^2 - (f - g)^2\big]\,dt$$
We can now use the hint the other way, and arrive at the desired result:
$$\int_S^T E\big[f(t, \omega)\, g(t, \omega)\big]\,dt$$
=============================== 19/02-2010
Itô's Formula. Consider the integral
$$\int_0^t B_s\,dB_s$$
where we make the approximation
$$\varphi_n(s, \omega) = \sum_j B_{t_j}\, I_{(t_j, t_{j+1})}(s).$$
Clearly φn ∈ V. We must prove that the difference is negligible:
$$E\Big[\int_0^t \big(B_s - \varphi_n(s)\big)^2\,ds\Big] = E\Big[\sum_j \int_{t_j}^{t_{j+1}} \big(B_s - B_{t_j}\big)^2\,ds\Big] \stackrel{\text{Fubini}}{=} \sum_j \int_{t_j}^{t_{j+1}} E\big[(B_s - B_{t_j})^2\big]\,ds = \sum_j \int_{t_j}^{t_{j+1}} (s - t_j)\,ds = \sum_j \frac{1}{2}\Delta t_j^2 \le \frac{1}{2}\max_j(\Delta t_j) \sum_j \Delta t_j = \frac{1}{2}\max_j(\Delta t_j)\, t \to 0$$
The functions are arbitrarily close to each other.
Lemma 1. For $\Delta B_j = B_{t_{j+1}} - B_{t_j}$, we know that
$$\sum_j (\Delta B_j)^2 \to t \quad \text{in } L^2.$$
Lemma 2.
$$A(B - A) = \frac{1}{2}\big(B^2 - A^2 - (B - A)^2\big)$$
Returning to the integral:
$$\int_0^t B_s\,dB_s = \lim_{n \to \infty} \int_0^t \varphi_n\,dB_s = \lim_{n \to \infty} \sum_j B_{t_j}\big(B_{t_{j+1}} - B_{t_j}\big)$$
We now have the same form as in Lemma 2 (with $A = B_{t_j}$, $B = B_{t_{j+1}}$). We get:
$$\lim_{n \to \infty} \frac{1}{2}\sum_j \Big(B_{t_{j+1}}^2 - B_{t_j}^2 - (\Delta B_j)^2\Big)$$
where we see that $\sum_j \big(B_{t_{j+1}}^2 - B_{t_j}^2\big)$ is a telescoping sum. Since B0 = 0,
we use Lemma 1 and get:
$$\lim_{n \to \infty} \frac{1}{2}\big(B_t^2 - 0^2\big) - \frac{1}{2}\sum_j (\Delta B_j)^2 = \frac{1}{2}B_t^2 - \frac{1}{2}t$$
We need more efficient machinery to calculate Itô integrals. We are
going to introduce the Itô formula. First some notation:
$$X_t = X_0 + \int_0^t u(s, \omega)\,ds + \int_0^t v(s, \omega)\,dB_s$$
This is called an Itô process, or a stochastic integral. The shorthand
notation is
$$dX_t = u\,dt + v\,dB_t.$$
Since we don't have any differentiation theory for functions of Brownian
motion, this is a slightly dubious equation, but by it we really mean the
full expression for Xt above.
Itô’s Formula
Let $g(t, x) \in C^2\big([0, T] \times \mathbb{R}\big)$, and let Xt be an Itô process. Then
$Y_t = g(t, X_t)$ is again an Itô process, and
$$dY_t = \frac{\partial g}{\partial t}(t, X_t)\,dt + \frac{\partial g}{\partial x}(t, X_t)\,dX_t + \frac{1}{2}\frac{\partial^2 g}{\partial x^2}(t, X_t)\,(dX_t)^2. \qquad (I.1)$$
As before $dX_t = u\,dt + v\,dB_t$, but we have to compute $(dX_t)^2$. We have a set
of formal rules we must use:
$$dt \cdot dt = dt \cdot dB_t = dB_t \cdot dt = 0, \qquad dB_t \cdot dB_t = dt.$$
This last little fact is central to much of the Itô calculus! Multiplying out
the parenthesis, we end up with a normal Lebesgue integral:
$$(dX_t)^2 = (u\,dt + v\,dB_t)^2 = u^2 (dt)^2 + 2uv\,dt\,dB_t + v^2 (dB_t)^2 = 0 + 0 + v^2\,dt = v^2\,dt$$
Proof:
We assume that $g, \frac{\partial g}{\partial x}, \frac{\partial g}{\partial t}, \frac{\partial^2 g}{\partial x^2}, \frac{\partial^2 g}{\partial x \partial t}, \frac{\partial^2 g}{\partial t^2}$ are bounded functions. The proof
can later be extended to the unbounded case. We also assume u and v are
elementary functions; this too can be extended to arbitrary objects.
As a telescoping sum, we get:
$$g(t, X_t) = g(0, X_0) + \sum_j \Big(g(t_{j+1}, X_{t_{j+1}}) - g(t_j, X_{t_j})\Big)$$
We write this as a 2nd order Taylor expansion:
$$= g(0, X_0) + \sum_j \frac{\partial g}{\partial t}\Delta t_j + \sum_j \frac{\partial g}{\partial x}\Delta X_j + \frac{1}{2}\sum_j \frac{\partial^2 g}{\partial t^2}(\Delta t_j)^2 + \sum_j \frac{\partial^2 g}{\partial t \partial x}(\Delta t_j)(\Delta X_j) + \frac{1}{2}\sum_j \frac{\partial^2 g}{\partial x^2}(\Delta X_j)^2 + \sum_j R_j$$
Now we examine each of the six terms in the expression individually and
see what happens.
(1)
$$\sum_j \frac{\partial g}{\partial t}(t_j, X_{t_j})\,\Delta t_j \to \int_0^t \frac{\partial g}{\partial t}(s, X_s)\,ds$$
We approximated this as a Riemann sum (integral), which we can do since
we have a bounded, continuous function.
(2)
$$\sum_j \frac{\partial g}{\partial x}\Delta X_j = \sum_j \frac{\partial g}{\partial x}\big(u_j \Delta t_j + v_j \Delta B_j\big) \to \int_0^t \frac{\partial g}{\partial x}(s, X_s)\,u(s)\,ds + \int_0^t \frac{\partial g}{\partial x}(s, X_s)\,v(s)\,dB_s$$
We did the same here: approximated by Riemann integrals.
(3)
$$\frac{1}{2}\sum_j \frac{\partial^2 g}{\partial t^2}(\Delta t_j)^2 \le C \sum_j (\Delta t_j)^2 \to 0,$$
since the derivative is bounded by some C ∈ R.
(4)
$$\Big|\sum_j \frac{\partial^2 g}{\partial t \partial x}(\Delta t_j)(\Delta X_j)\Big| \le \sum_j \Big|\frac{\partial^2 g}{\partial t \partial x}\Big|\, |\Delta t_j|\, |\Delta X_j|$$
To ensure this expression dominates the one above, we use absolute values
everywhere. Since the derivative is bounded, we can again use a constant:
$$\le \widetilde{C} \sum_j |\Delta t_j|\big(|u_j||\Delta t_j| + |v_j||\Delta B_j|\big) \le C \sum_j \big(|\Delta t_j|^2 + |\Delta t_j||\Delta B_j|\big)$$
The term with $|\Delta t_j|^2$, as in term (3), tends to 0. We continue with just the
last term, and consider
$$E\Big[\Big(\sum_j |\Delta t_j||\Delta B_j|\Big)^2\Big].$$
We use the double-sum argument:
$$E\Big[\sum_{i,j} |\Delta t_i||\Delta B_i||\Delta t_j||\Delta B_j|\Big] = \sum_{i,j} |\Delta t_i||\Delta t_j|\, E\big[|\Delta B_i||\Delta B_j|\big]$$
where the deterministic factors are moved outside. Splitting into i ≠ j and
i = j,
$$= \sum_{i \ne j} |\Delta t_i||\Delta t_j|\, E\big[|\Delta B_i|\big]\, E\big[|\Delta B_j|\big] + \sum_{i = j} |\Delta t_i|^2\, E\big[|\Delta B_i|^2\big]$$
Now we use $E[|\Delta B_i|^2] = \Delta t_i$ and the Hölder inequality $E[|f|] \le E[f^2]^{1/2}$.
This yields:
$$\le \sum_{i \ne j} |\Delta t_i||\Delta t_j|\sqrt{|\Delta t_i|}\sqrt{|\Delta t_j|} + \sum_{i = j} |\Delta t_i|^2\,|\Delta t_i|$$
Since $|\Delta t_i| \to 0$, these sums of higher powers of Δti tend to 0. Merging the
double sum back, and using the same general argument, we get:
$$\Big(\sum_i |\Delta t_i|\sqrt{\Delta t_i}\Big)^2 \to 0$$
We now consider the fifth term, which is the important one:
$$\sum_j \frac{\partial^2 g}{\partial x^2}(\Delta X_j)^2 = \sum_j \frac{\partial^2 g}{\partial x^2}\big(u_j \Delta t_j + v_j \Delta B_j\big)^2 = \sum_j \frac{\partial^2 g}{\partial x^2}\Big(u_j^2 (\Delta t_j)^2 + 2 u_j v_j \Delta t_j \Delta B_j + v_j^2 (\Delta B_j)^2\Big) \qquad (5)$$
Here $\sum_j u_j^2 (\Delta t_j)^2 \to 0$ and $\sum_j 2 u_j v_j \Delta t_j \Delta B_j \to 0$, so all we have to
consider is the last part:
$$\sum_j \frac{\partial^2 g}{\partial x^2}\, v_j^2 (\Delta B_j)^2$$
$(\Delta B_j)^2$ is very closely related to Δtj in L². We consider the difference
$$E\Big[\Big(\sum_j \frac{\partial^2 g}{\partial x^2}\, v_j^2 (\Delta B_j)^2 - \sum_j \frac{\partial^2 g}{\partial x^2}\, v_j^2\,\Delta t_j\Big)^2\Big] = E\Big[\Big(\sum_j \frac{\partial^2 g}{\partial x^2}\, v_j^2 \big(\Delta B_j^2 - \Delta t_j\big)\Big)^2\Big] \qquad (F1)$$
We use the double-sum argument:
$$= E\Big[\sum_{i,j} \frac{\partial^2 g}{\partial x^2}(t_i, X_{t_i})\, \frac{\partial^2 g}{\partial x^2}(t_j, X_{t_j})\, v_i^2 v_j^2 \big(\Delta B_i^2 - \Delta t_i\big)\big(\Delta B_j^2 - \Delta t_j\big)\Big]$$
We consider the case i < j. Then everything is $\mathcal{F}_{t_j}$-measurable except
$\Delta B_j^2 - \Delta t_j$, which is independent of the rest. We also use that
$E[\Delta B_j^2 - \Delta t_j] = 0$, and everything cancels to 0. We have the exact same
result for j < i. Thus we only have to worry about the case i = j. Since the
derivative and v are bounded:
$$E\Big[\sum_i \Big(\frac{\partial^2 g}{\partial x^2}\Big)^2 v_i^4 \big(\Delta B_i^2 - \Delta t_i\big)^2\Big] \le C\, E\Big[\sum_i \big(\Delta B_i^2 - \Delta t_i\big)^2\Big].$$
However, the differences $(\Delta B_i^2 - \Delta t_i)^2$ are arbitrarily small in L², so:
$$C\, E\Big[\sum_i \big(\Delta B_i^2 - \Delta t_i\big)^2\Big] \to 0$$
The difference between the sum with $(dB_t)^2$ and with dt is negligible, which is
connected to the rule $(dB_t)^2 = dt$. The fifth term in our original expression
is arbitrarily close to the same term with Δti, and this term converges to a
Riemann integral. In summary, the fifth term can be written as a Riemann
integral:
$$\sum_j \frac{\partial^2 g}{\partial x^2}\, v_j^2 (\Delta B_j)^2 \to \int_0^t \frac{\partial^2 g}{\partial x^2}(s, X_s)\, v^2(s, \omega)\,ds$$
The last term is the remainder term $R_j = o\big(|\Delta t_j|^2, |\Delta X_j|^2\big)$, and this term
tends to zero.
In conclusion, this means that:
$$Y_t = g(t, X_t) = g(0, X_0) + \int_0^t \frac{\partial g}{\partial t}(s, X_s)\,ds + \int_0^t \frac{\partial g}{\partial x}(s, X_s)\,u(s)\,ds + \int_0^t \frac{\partial g}{\partial x}(s, X_s)\,v(s)\,dB_s + \frac{1}{2}\int_0^t \frac{\partial^2 g}{\partial x^2}(s, X_s)\,v^2(s, \omega)\,ds$$
Using $(dX_t)^2 = v^2\,dt$ and $dX_t = u\,dt + v\,dB_t$ we get our conclusion, and we
have proved the Itô formula:
$$Y_t = g(0, X_0) + \int_0^t \frac{\partial g}{\partial t}(s, X_s)\,ds + \int_0^t \frac{\partial g}{\partial x}(s, X_s)\,dX_s + \frac{1}{2}\int_0^t \frac{\partial^2 g}{\partial x^2}(s, X_s)\,(dX_s)^2 \qquad \text{Q.E.D.}$$
We consider a few examples involving the formula. We begin by computing the
integral we started with, just to see how much easier it is with the formula.
Example
Calculate
$$\int_0^t B_s\,dB_s.$$
We use the Itô process Xt = Bt, or in other words:
$$dX_t = 0\,dt + 1\,dB_t.$$
We have to choose a twice differentiable function, and if we evaluate
the integral as a normal integral we get a reasonable choice for g. So we set
$g(t, x) = \frac{1}{2}x^2$.
Now we apply Itô's formula, whose general form is:
$$dg(t, X_t) = \frac{\partial g}{\partial t}\,dt + \frac{\partial g}{\partial x}\,dX_t + \frac{1}{2}\frac{\partial^2 g}{\partial x^2}\,(dX_t)^2.$$
We calculate each of these parts. With $g(t, x) = \frac{1}{2}x^2$:
$$\frac{\partial g}{\partial t}(t, x) = 0 \ \Longrightarrow\ \frac{\partial g}{\partial t}(t, X_t)\,dt = 0$$
$$\frac{\partial g}{\partial x}(t, x) = x \ \Longrightarrow\ \frac{\partial g}{\partial x}(t, X_t)\,dX_t = B_t\,dB_t$$
(substituting Xt for x, where Xt = Bt in this case). Finally,
$$\frac{\partial^2 g}{\partial x^2}(t, x) = 1,$$
and with Xt = Bt, $(dX_t)^2 = (dB_t)^2 = dt$ by the formal rules of Itô calculus. So:
$$\frac{1}{2}\frac{\partial^2 g}{\partial x^2}\,(dX_t)^2 = \frac{1}{2}(1)\,dt = \frac{1}{2}\,dt$$
We can now write out the Itô formula for this integral. Substituting our
results above, recalling that $g(t, X_t) = \frac{1}{2}B_t^2$, we have
$$d\Big(\frac{1}{2}B_t^2\Big) = 0 + B_t\,dB_t + \frac{1}{2}\,dt = B_t\,dB_t + \frac{1}{2}\,dt$$
Since Yt = g(t, Xt) is an Itô process, we now have the short form
$$dY_t = u\,dt + v\,dB_t,$$
and expanding this to the full form results in:
$$Y_t = Y_0 + \int_0^t u\,ds + \int_0^t v\,dB_s.$$
In this example, this means the short version
$$d\Big(\frac{1}{2}B_t^2\Big) = B_t\,dB_t + \frac{1}{2}\,dt$$
in expanded form is:
$$\frac{1}{2}B_t^2 = \frac{1}{2}B_0^2 + \int_0^t \frac{1}{2}\,ds + \int_0^t B_s\,dB_s$$
or, equivalently, using that B0 = 0,
$$\int_0^t B_s\,dB_s = \frac{1}{2}B_t^2 - \frac{1}{2}t,$$
which is the solution to the problem.
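A pathwise numerical check (a sketch, not from the notes; grid and seed are arbitrary): the Itô sums of ∫B dB along a single discretized path should match ½Bt² − ½t up to discretization error.

import numpy as np

rng = np.random.default_rng(8)
t, N = 1.0, 100000
dB = rng.normal(0.0, np.sqrt(t / N), size=N)
B = np.concatenate([[0.0], np.cumsum(dB)])

integral = np.sum(B[:-1] * dB)   # left-endpoint Ito sum
print("Ito sum :", integral)
print("formula :", 0.5 * B[-1]**2 - 0.5 * t)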
Example 2
We want to compute
$$\int_0^t s\,dB_s.$$
This is somewhat different since we now have a deterministic function s in
the integral. As is usually the case, we set Xt = Bt, and we make the not so
obvious choice g(t, x) = tx. We calculate the needed partial derivatives:
$$g(t, x) = tx \ \Longrightarrow\ \frac{\partial g}{\partial t} = x, \quad \frac{\partial g}{\partial x} = t, \quad \frac{\partial^2 g}{\partial x^2} = 0$$
Using Itô's formula, we get
$$d\big(g(t, X_t)\big) = \frac{\partial g}{\partial t}\,dt + \frac{\partial g}{\partial x}\,dX_t + \frac{1}{2}\frac{\partial^2 g}{\partial x^2}\,(dX_t)^2$$
$$d(t B_t) = B_t\,dt + t\,dB_t + 0$$
We recognise this as the shorthand version of a stochastic process with
$u\,dt = B_t\,dt$ and $v\,dB_t = t\,dB_t$. Expanding gives
$$t B_t = 0 \cdot B_0 + \int_0^t B_s\,ds + \int_0^t s\,dB_s \ \Longrightarrow\ \int_0^t s\,dB_s = t B_t - \int_0^t B_s\,ds,$$
and we have solved the integral. This can also be seen as a version of
integration by parts. We also see that we have simplified a stochastic integral
to a multiplication and a Lebesgue integral, which is easier to compute.
Multi-dimensional Itô-formula
This is for when we have more than one Brownian motion. For m ∈ N, we have
the m-dimensional vector
$$B_t = \big(B_1(t), B_2(t), \ldots, B_m(t)\big).$$
For n stochastic processes we have:
$$dX_1 = u_1\,dt + v_{11}\,dB_1 + v_{12}\,dB_2 + \ldots + v_{1m}\,dB_m$$
$$dX_2 = u_2\,dt + v_{21}\,dB_1 + v_{22}\,dB_2 + \ldots + v_{2m}\,dB_m$$
$$\vdots$$
$$dX_n = u_n\,dt + v_{n1}\,dB_1 + v_{n2}\,dB_2 + \ldots + v_{nm}\,dB_m$$
which we can simply write as
$$dX_t = u\,dt + v\,dB_t$$
where
$$X_t = \begin{pmatrix} X_1 \\ \vdots \\ X_n \end{pmatrix}, \quad u = \begin{pmatrix} u_1 \\ \vdots \\ u_n \end{pmatrix}, \quad v = \begin{pmatrix} v_{11} & \ldots & v_{1m} \\ \vdots & \ddots & \vdots \\ v_{n1} & \ldots & v_{nm} \end{pmatrix}, \quad dB_t = \begin{pmatrix} dB_1 \\ \vdots \\ dB_m \end{pmatrix},$$
and where we define g(t, x) for t ∈ [0, ∞) and x ∈ Rⁿ.
Moving on to the multi-dimensional Itô formula, we get Yt = g(t, Xt) with
$$dY_k = \frac{\partial g_k}{\partial t}\,dt + \sum_{i=1}^{n} \frac{\partial g_k}{\partial x_i}(t, X_t)\,dX_i + \frac{1}{2}\sum_{i,j=1}^{n} \frac{\partial^2 g_k}{\partial x_i \partial x_j}\,(dX_i)(dX_j)$$
This is not so different from the one-dimensional case, except we have sums.
In this case we also have formal rules, which transfer easily from the
one-dimensional case:
$$dB_i\,dB_j = \begin{cases} dt & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}$$
Example
We will calculate the first row for a 2-dimensional stochastic process:
$$X_t = \big(B_1(t), B_2(t)\big), \qquad Y_t = \begin{pmatrix} t B_1(t)^3 B_2(t)^2 \\ e^t B_2(t) \end{pmatrix}, \qquad g(t, x_1, x_2) = \begin{pmatrix} t x_1^3 x_2^2 \\ e^t x_2 \end{pmatrix}.$$
Each of the rows in Y and g corresponds to Y1, Y2, g1 and g2.
We will calculate dY1:
$$dY_1 = \frac{\partial g_1}{\partial t}\,dt + \frac{\partial g_1}{\partial x_1}\,dX_1 + \frac{\partial g_1}{\partial x_2}\,dX_2 + \frac{1}{2}\frac{\partial^2 g_1}{\partial x_1^2}\,(dX_1)^2 + \frac{1}{2}\frac{\partial^2 g_1}{\partial x_2 \partial x_1}\,(dX_2)(dX_1) + \frac{1}{2}\frac{\partial^2 g_1}{\partial x_1 \partial x_2}\,(dX_1)(dX_2) + \frac{1}{2}\frac{\partial^2 g_1}{\partial x_2^2}\,(dX_2)^2$$
We observe that X1 = B1(t) and X2 = B2(t). By the formal rules stated above,
$dB_1\,dB_2 = dB_2\,dB_1 = 0$, so two of the terms in the expression above become
0. We are left with:
$$dY_1 = \frac{\partial g_1}{\partial t}\,dt + \frac{\partial g_1}{\partial x_1}\,dX_1 + \frac{\partial g_1}{\partial x_2}\,dX_2 + \frac{1}{2}\frac{\partial^2 g_1}{\partial x_1^2}\,(dX_1)^2 + \frac{1}{2}\frac{\partial^2 g_1}{\partial x_2^2}\,(dX_2)^2$$
where we can use that $(dX_i)^2 = dt$, and we have reduced the problem to:
$$dY_1 = \frac{\partial g_1}{\partial t}\,dt + \frac{\partial g_1}{\partial x_1}\,dX_1 + \frac{\partial g_1}{\partial x_2}\,dX_2 + \frac{1}{2}\frac{\partial^2 g_1}{\partial x_1^2}\,dt + \frac{1}{2}\frac{\partial^2 g_1}{\partial x_2^2}\,dt$$
Now we evaluate the partial derivatives of $g_1(t, x_1, x_2) = t x_1^3 x_2^2$ at
(t, B1, B2):
$$\frac{\partial g_1}{\partial t} = B_1^3 B_2^2, \quad \frac{\partial g_1}{\partial x_1} = 3t B_1^2 B_2^2, \quad \frac{\partial^2 g_1}{\partial x_1^2} = 6t B_1 B_2^2, \quad \frac{\partial g_1}{\partial x_2} = 2t B_1^3 B_2, \quad \frac{\partial^2 g_1}{\partial x_2^2} = 2t B_1^3$$
We substitute these into the expression and get:
$$dY_1 = B_1^3 B_2^2\,dt + 3t B_1^2 B_2^2\,dB_1 + 2t B_1^3 B_2\,dB_2 + \frac{1}{2}\,6t B_1 B_2^2\,dt + \frac{1}{2}\,2t B_1^3\,dt$$
$$= \big(B_1^3 B_2^2 + 3t B_1 B_2^2 + t B_1^3\big)\,dt + 3t B_1^2 B_2^2\,dB_1 + 2t B_1^3 B_2\,dB_2$$
This is the first row in the solution Yt.
Martingale Representation
First some preparation:
Doob-Dynkin Lemma.
Let X and Y be two random variables on Ω, taking values in Rⁿ. Assume
that Y is σ(X)-measurable. Then there exists a (Borel-measurable) function f
such that Y = f(X); this is the only way Y can be measurable with respect to
that σ-algebra.
The Fourier Inversion Theorem.
Given a function φ, we get a new function by performing the Fourier transform:
$$\hat{\varphi}(x) \stackrel{\text{def}}{=} \Big(\frac{1}{\sqrt{2\pi}}\Big)^n \int_{\mathbb{R}^n} \varphi(y)\, e^{-i x \cdot y}\,dy$$
The theorem says we get the same function back when we perform the inverse
Fourier transform on the new function. That is,
$$\varphi(x) = \Big(\frac{1}{\sqrt{2\pi}}\Big)^n \int_{\mathbb{R}^n} \hat{\varphi}(y)\, e^{i x \cdot y}\,dy$$
if φ and $\hat{\varphi}$ are both in $L^1(\mathbb{R}^n)$. Strictly speaking the equality is only
true for φ(−x), but for our applications it doesn't matter.
Lemma
If $g \in L^2(\mathcal{F}_T)$, where FT is the σ-algebra generated by the Brownian motion,
then this function can be approximated by objects of the form
$$\varphi\big(B_{t_1}, B_{t_2}, \ldots, B_{t_n}\big)$$
where $\varphi \in C_0^\infty(\mathbb{R}^n)$ (infinitely differentiable with compact support).
Lemma (difficult)
Consider objects of the form
$$\exp\Big\{\int_0^T h(t)\,dB_t - \frac{1}{2}\int_0^T h^2(t)\,dt\Big\}$$
where h ∈ L²[0, T] is a deterministic function with no dependency on ω. The
linear span of such functions is dense in L²(Ω, FT).
Proof:
Assume that
$$\int_\Omega g(\omega) \exp\Big\{\int_0^T h(t)\,dB_t - \frac{1}{2}\int_0^T h^2(t)\,dt\Big\}\,dP(\omega) = 0$$
for all h ∈ L²[0, T]. Since $\int h^2(t)\,dt$ is constant w.r.t. dP(ω), we can think
of this as $e^{a - b} = e^a / e^b$, i.e. factor it out and remove it. We get
$$\int_\Omega g(\omega) \exp\Big\{\int_0^T h(t)\,dB_t\Big\}\,dP(\omega) = 0.$$
Now we put
$$h(t) = \lambda_1 I_{(0, t_1)}(t) + \lambda_2 I_{(0, t_2)}(t) + \ldots + \lambda_n I_{(0, t_n)}(t),$$
which obviously is in L² since it is bounded. Since
$$\int_0^T \lambda\, I_{(0, a)}(t)\,dB_t = \lambda B_a,$$
we now have
$$\int_\Omega g(\omega) \exp\big\{\lambda_1 B_{t_1} + \lambda_2 B_{t_2} + \ldots + \lambda_n B_{t_n}\big\}\,dP(\omega) = 0$$
for all $\lambda = (\lambda_1, \ldots, \lambda_n) \in \mathbb{R}^n$. We now define a function on Cⁿ:
$$G(z) = \int_\Omega g(\omega) \exp\big\{z_1 B_{t_1} + \ldots + z_n B_{t_n}\big\}\,dP(\omega),$$
which is analytic, indeed an entire function. If the function G(z) is zero on
the real subspace, it must be zero everywhere, i.e. G(z) ≡ 0. This is clear if
we look at the series expansion of an entire function:
$$f(z) = \sum_n a_n z^n, \quad a_n = \frac{f^{(n)}(0)}{n!} = 0 \ \Longrightarrow\ f(z) = 0.$$
Now we let $\varphi \in C_0^\infty(\mathbb{R}^n)$ be arbitrary and consider
$$\int_\Omega \varphi\big(B_{t_1}, B_{t_2}, \ldots, B_{t_n}\big)\, g(\omega)\,dP(\omega),$$
and do the Fourier inversion:
$$\int_\Omega \Big(\frac{1}{\sqrt{2\pi}}\Big)^n \int_{\mathbb{R}^n} \hat{\varphi}(y)\, e^{i y_1 B_{t_1} + i y_2 B_{t_2} + \ldots + i y_n B_{t_n}}\,dy\; g(\omega)\,dP(\omega)$$
where we used $(x_1, \ldots, x_n) = (B_{t_1}, \ldots, B_{t_n})$ in the Fourier inversion
theorem. We can now apply Fubini's theorem and interchange the integral signs:
$$\Big(\frac{1}{\sqrt{2\pi}}\Big)^n \int_{\mathbb{R}^n} \hat{\varphi}(y) \underbrace{\int_\Omega e^{i y_1 B_{t_1} + i y_2 B_{t_2} + \ldots + i y_n B_{t_n}}\, g(\omega)\,dP(\omega)}_{G(z) \text{ with } z_j = i y_j}\,dy$$
Since G(z) is 0 everywhere, so is this entire expression!
Conclusion:
If
$$\int_\Omega g(\omega) \exp\Big\{\int_0^T h(t)\,dB_t - \frac{1}{2}\int_0^T h^2(t)\,dt\Big\}\,dP(\omega) = 0$$
for all h ∈ L²[0, T], then for any $\varphi \in C_0^\infty(\mathbb{R}^n)$:
$$\int_\Omega \varphi\big(B_{t_1}, \ldots, B_{t_n}\big)\, g(\omega)\,dP = 0 \ \Longrightarrow\ \int_\Omega g^2(\omega)\,dP = 0,$$
since g can be approximated by functions $\varphi(B_{t_1}, \ldots, B_{t_n})$. Since
$g \in L^2(\mathcal{F}_T)$, this gives g = 0 in L².
Now L² is a Hilbert space (inner product space), and the inner product in L²
is $\int g\varphi\,dP$. We have just shown that the only g orthogonal to the linear
span of $\exp\{\int h\,dB - \frac{1}{2}\int h^2\,dt\}$ is g = 0, and this in turn means the
span is dense.
To sum up, any function g(ω) in L²(FT) can be approximated by a linear
combination of the exponentials $\exp\{\int h\,dB - \frac{1}{2}\int h^2\,dt\}$.
=============================== 26/02-2010
Problem 6
For the partition $0 = t_1 \le t_2 \le \ldots \le t_N = T$, prove that
$$E\Big[\Big(T - \sum_{i=1}^{N} (\Delta B_i)^2\Big)^2\Big] = 2\sum_{i=1}^{N} (\Delta t_i)^2$$
Multiplying out the parenthesis, we get
$$E[T^2] - 2T\, E\Big[\sum_{i=1}^{N} (\Delta B_i)^2\Big] + E\Big[\Big(\sum_{i=1}^{N} (\Delta B_i)^2\Big)^2\Big]$$
We use the hint $\big(\sum_{i=1}^{N} a_i\big)^2 = \sum_{i,j=1}^{N} a_i a_j$:
$$T^2 - 2T \sum_{i=1}^{N} \Delta t_i + E\Big[\sum_{i,j=1}^{N} (\Delta B_i)^2 (\Delta B_j)^2\Big]$$
We note that $\sum_i \Delta t_i$ is a telescoping series which results in
$t_N - t_1 = T - 0 = T$. We get $T^2 - 2T^2 = -T^2$:
$$-T^2 + E\Big[\sum_{i,j=1}^{N} (\Delta B_i)^2 (\Delta B_j)^2\Big]$$
We can rewrite the double sum using a little summation manipulation:
$$-T^2 + \sum_{i=1}^{N} E\big[(\Delta B_i)^4\big] + 2\sum_{i<j} E\big[(\Delta B_i)^2 (\Delta B_j)^2\big]$$
Using the fourth moment of Brownian increments and independence:
$$-T^2 + 3\sum_{i=1}^{N} (\Delta t_i)^2 + 2\sum_{i<j} E\big[(\Delta B_i)^2\big]\, E\big[(\Delta B_j)^2\big] = -T^2 + 3\sum_{i=1}^{N} (\Delta t_i)^2 + 2\sum_{i<j} \Delta t_i\,\Delta t_j$$
$$= -T^2 + 2\sum_{i=1}^{N} (\Delta t_i)^2 + \Big(\sum_{i=1}^{N} (\Delta t_i)^2 + 2\sum_{i<j} \Delta t_i\,\Delta t_j\Big)$$
Using the same summation manipulation as above, just the other way,
$$= -T^2 + 2\sum_{i=1}^{N} (\Delta t_i)^2 + \sum_{i,j=1}^{N} \Delta t_i\,\Delta t_j = -T^2 + 2\sum_{i=1}^{N} (\Delta t_i)^2 + \Big(\sum_{i=1}^{N} \Delta t_i\Big)^2$$
The last term is again a telescoping series: it sums to T, and we square it.
$$-T^2 + 2\sum_{i=1}^{N} (\Delta t_i)^2 + T^2 = 2\sum_{i=1}^{N} (\Delta t_i)^2 \qquad \text{Q.E.D.}$$
Exercise 4.1
Given Yt, we want to write it as $dY_t = u\,dt + v\,dB_t$.
a) For Yt = Bt² = g(t, Bt) we see that g(t, x) = x². Differentiating:
$$\frac{\partial g}{\partial t} = 0, \quad \frac{\partial g}{\partial x} = 2x, \quad \frac{\partial^2 g}{\partial x^2} = 2$$
By Itô's formula:
$$dY_t = d(B_t^2) = 0\,dt + 2B_t\,dB_t + \frac{1}{2}\cdot 2\,(dB_t)^2 = 1\,dt + 2B_t\,dB_t$$
b) $Y_t = 2 + t + e^{B_t}$, where we see $g(t, x) = 2 + t + e^x$.
$$\frac{\partial g}{\partial t} = 1, \quad \frac{\partial g}{\partial x} = e^x, \quad \frac{\partial^2 g}{\partial x^2} = e^x$$
Itô's formula:
$$dY_t = 1\,dt + e^{B_t}\,dB_t + \frac{1}{2}e^{B_t}(dB_t)^2 = \Big(1 + \frac{1}{2}e^{B_t}\Big)dt + e^{B_t}\,dB_t$$
c) $Y_t = B_1^2(t) + B_2^2(t)$. This time $g(t, x, y) = x^2 + y^2$.
$$\frac{\partial g}{\partial t} = 0, \quad \frac{\partial g}{\partial x} = 2x, \quad \frac{\partial^2 g}{\partial x^2} = 2, \quad \frac{\partial g}{\partial y} = 2y, \quad \frac{\partial^2 g}{\partial y^2} = 2, \quad \frac{\partial^2 g}{\partial y \partial x} = 0$$
By the multi-dimensional Itô formula:
$$dY_t = 2\,dt + 2B_1(t)\,dB_1(t) + 2B_2(t)\,dB_2(t)$$
d) $Y_t = (t_0 + t, B_t)$.
$$dY_t = (dt, dB_t) = \begin{pmatrix} 1 \\ 0 \end{pmatrix} dt + \begin{pmatrix} 0 \\ 1 \end{pmatrix} dB_t$$
e) $Y_t = \big(B_1(t) + B_2(t) + B_3(t),\ B_2^2(t) - B_1(t) B_3(t)\big)$.
$$dY_1(t) = dB_1(t) + dB_2(t) + dB_3(t)$$
$$dY_2(t) = 2B_2(t)\,dB_2(t) + dt - B_3(t)\,dB_1(t) - B_1(t)\,dB_3(t) - dB_1(t)\,dB_3(t)$$
and since $dB_1(t)\,dB_3(t) = 0$:
$$dY_t = \begin{pmatrix} dY_1 \\ dY_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} dt + \begin{pmatrix} 1 & 1 & 1 \\ -B_3 & 2B_2 & -B_1 \end{pmatrix} \begin{pmatrix} dB_1 \\ dB_2 \\ dB_3 \end{pmatrix}$$
Exercise 4.2
We set $Y_t = \frac{1}{3}B_t^3$, which implies $g(t, x) = \frac{1}{3}x^3$:
$$\frac{\partial g}{\partial t} = 0, \quad \frac{\partial g}{\partial x} = x^2, \quad \frac{\partial^2 g}{\partial x^2} = 2x$$
Itô's formula:
$$d\Big(\frac{1}{3}B_t^3\Big) = B_t^2\,dB_t + \frac{1}{2}\cdot 2B_t\,dt = B_t^2\,dB_t + B_t\,dt$$
$$\frac{1}{3}B_t^3 = \frac{1}{3}B_0^3 + \int_0^t B_s^2\,dB_s + \int_0^t B_s\,ds \ \Longrightarrow\ \int_0^t B_s^2\,dB_s = \frac{1}{3}B_t^3 - \int_0^t B_s\,ds$$
Exercise 4.3
We want to prove that $d(X_t Y_t) = X_t\,dY_t + Y_t\,dX_t + dX_t \cdot dY_t$. In this case
g(t, x, y) = xy. We calculate the derivatives:
$$\frac{\partial g}{\partial t} = 0, \quad \frac{\partial g}{\partial x} = y, \quad \frac{\partial^2 g}{\partial x^2} = 0, \quad \frac{\partial g}{\partial y} = x, \quad \frac{\partial^2 g}{\partial y^2} = 0, \quad \frac{\partial^2 g}{\partial y \partial x} = 1$$
so that
$$\frac{\partial g}{\partial x}(t, X_t, Y_t) = Y_t, \quad \frac{\partial g}{\partial y}(t, X_t, Y_t) = X_t, \quad \frac{\partial^2 g}{\partial y \partial x}(t, X_t, Y_t) = 1$$
Applying Itô's multi-dimensional formula:
$$d(X_t Y_t) = Y_t\,dX_t + X_t\,dY_t + \frac{1}{2}\big(dX_t\,dY_t + dY_t\,dX_t\big) = Y_t\,dX_t + X_t\,dY_t + dX_t\,dY_t$$
From this we can deduce the integration by parts formula:
$$X_t Y_t = X_0 Y_0 + \int_0^t Y_s\,dX_s + \int_0^t X_s\,dY_s + \int_0^t dX_s\,dY_s$$
$$\int_0^t X_s\,dY_s = X_t Y_t - X_0 Y_0 - \int_0^t Y_s\,dX_s - \int_0^t dX_s\,dY_s$$
Exercise 4.5
For a Brownian motion Bt ∈ R with B0 = 0 we define $\beta_k(t) = E[B_t^k]$ for
k = 0, 1, 2, . . . and t ≥ 0. Using Itô's formula, we want to prove that
$$\beta_k(t) = \frac{1}{2}k(k - 1)\int_0^t \beta_{k-2}(s)\,ds, \quad k \ge 2.$$
Yt = Bt^k implies f(t, x) = xᵏ:
$$\frac{\partial f}{\partial t} = 0, \quad \frac{\partial f}{\partial x} = k x^{k-1}, \quad \frac{\partial^2 f}{\partial x^2} = k(k - 1)x^{k-2}$$
Itô's formula:
$$dY_t = d(B_t^k) = 0 + k B_t^{k-1}\,dB_t + \frac{1}{2}k(k - 1) B_t^{k-2}\,(dB_t)^2 = \frac{k(k - 1)}{2} B_t^{k-2}\,dt + k B_t^{k-1}\,dB_t$$
Since $Y_0 = B_0^k = 0$, the integral form is
$$B_t^k = \frac{k(k - 1)}{2}\int_0^t B_s^{k-2}\,ds + k\int_0^t B_s^{k-1}\,dB_s$$
Recalling that the expectation of an Itô integral is 0:
$$\beta_k(t) = E[B_t^k] = E\Big[\frac{k(k - 1)}{2}\int_0^t B_s^{k-2}\,ds + k\int_0^t B_s^{k-1}\,dB_s\Big] = \frac{k(k - 1)}{2}\int_0^t E[B_s^{k-2}]\,ds + 0 = \frac{k(k - 1)}{2}\int_0^t \beta_{k-2}(s)\,ds \qquad \text{Q.E.D.}$$
a) We want to show that $E[B_t^4] = 3t^2$ and compute $E[B_t^6]$.
$E[B_t^4]$: let $Y_t = B_t^4 \Rightarrow f(t, x) = x^4$, so
$$\frac{\partial f}{\partial t} = 0, \quad \frac{\partial f}{\partial x} = 4x^3, \quad \frac{\partial^2 f}{\partial x^2} = 12x^2$$
$$dY_t = 4B_t^3\,dB_t + \frac{1}{2}\cdot 12 B_t^2\,dt \ \Longrightarrow\ B_t^4 = \int_0^t 4B_s^3\,dB_s + 6\int_0^t B_s^2\,ds$$
Now we take the expectation:
$$E[B_t^4] = 6\int_0^t E[B_s^2]\,ds = 6\int_0^t s\,ds = \Big[\frac{6}{2}s^2\Big]_0^t = 3t^2$$
For $E[B_t^6]$ we use $Y_t = B_t^6$ and the implied function $f(t, x) = x^6$:
$$\frac{\partial f}{\partial t} = 0, \quad \frac{\partial f}{\partial x} = 6x^5, \quad \frac{\partial^2 f}{\partial x^2} = 30x^4$$
$$dY_t = 6B_t^5\,dB_t + \frac{1}{2}\cdot 30 B_t^4\,dt \ \Longrightarrow\ B_t^6 = 6\int_0^t B_s^5\,dB_s + 15\int_0^t B_s^4\,ds$$
Since the expectation of a stochastic integral is 0, and using the first part:
$$E[B_t^6] = 15\int_0^t E[B_s^4]\,ds = 45\int_0^t s^2\,ds = \Big[\frac{45}{3}s^3\Big]_0^t = 15t^3$$
b) We want to show that for k ∈ N:
$$E[B_t^{2k+1}] = 0 \quad \text{and} \quad E[B_t^{2k}] = \frac{(2k)!\, t^k}{2^k k!}$$
Using the formula proved above, and iterating it,
$$E[B_t^{2k+1}] = \beta_{2k+1}(t) = \frac{1}{2}(2k + 1)(2k)\int_0^t \beta_{2k-1}(t_1)\,dt_1 = \frac{1}{2}(2k + 1)(2k)\int_0^t \frac{1}{2}(2k - 1)(2k - 2)\int_0^{t_1} \beta_{2k-3}(t_2)\,dt_2\,dt_1 = \ldots$$
Following this pattern, we work our way back until we arrive at β1 in the
innermost integral:
$$\frac{1}{2}(2k + 1)(2k)\cdot\frac{1}{2}(2k - 1)(2k - 2) \cdots \frac{1}{2}\cdot 3 \cdot 2 \int_0^t \int_0^{t_1} \cdots \int_0^{t_{k-1}} E[B_s]\,ds\,dt_{k-1} \cdots dt_1$$
Since E[Bt] = 0, this becomes $\frac{(2k+1)!}{2^k} \cdot 0 = 0$. In other words, the
expectation of an odd power of a Brownian motion is always zero.
For the other part we use the same idea:
$$E[B_t^{2k}] = \beta_{2k}(t) = \frac{1}{2}(2k)(2k - 1)\int_0^t \beta_{2k-2}(t_1)\,dt_1 = \frac{1}{2}(2k)(2k - 1)\int_0^t \frac{1}{2}(2k - 2)(2k - 3)\int_0^{t_1} \beta_{2k-4}(t_2)\,dt_2\,dt_1 = \ldots$$
We work our way back to β2:
$$\frac{1}{2}(2k)(2k - 1)\cdot\frac{1}{2}(2k - 2)(2k - 3) \cdots \frac{1}{2}\cdot 2 \cdot 1 \int_0^t \int_0^{t_1} \cdots \int_0^{t_{k-1}} E[B_s^2]\,ds\,dt_{k-1} \cdots dt_1$$
This time the innermost integrand is $E[B_s^2] = s$, which we can integrate. For
each step:
$$s \ \Rightarrow\ \frac{s^2}{2} \ \Rightarrow\ \frac{s^3}{6} \ \Rightarrow\ \frac{s^4}{24} \ \Rightarrow\ \ldots \ \Rightarrow\ \frac{t^k}{k!}$$
Combining, we get:
$$E[B_t^{2k}] = \frac{(2k)!}{2^k} \cdot \frac{t^k}{k!} = \frac{(2k)!\, t^k}{2^k k!} \qquad \text{Q.E.D.}$$
Using this formula we can make easy verifications:
$$E[B_t^4] \stackrel{k=2}{=} \frac{(2k)!\, t^k}{2^k k!} = \frac{4!\, t^2}{2^2\, 2!} = \frac{24 t^2}{8} = 3t^2$$
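A quick Monte Carlo verification of the even-moment formula (a sketch, not from the notes; t, sample size and seed are arbitrary):

import numpy as np
from math import factorial

rng = np.random.default_rng(9)
t, n = 1.3, 2000000
B_t = rng.normal(0.0, np.sqrt(t), size=n)
for k in (1, 2, 3):
    mc = (B_t ** (2 * k)).mean()
    exact = factorial(2 * k) * t**k / (2**k * factorial(k))
    print("2k =", 2 * k, " MC:", round(mc, 3), " exact:", round(exact, 3))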
Exercise 4.11
We are going to use Itô's formula to prove that the following processes are
martingales.

a) Xt = e^(t/2) cos(Bt). The implied function is f(t, x) = e^(t/2) cos x.

∂f/∂t = (1/2)e^(t/2) cos x,   ∂f/∂x = −e^(t/2) sin x,   ∂²f/∂x² = −e^(t/2) cos x

Itô's formula:

dXt = (1/2)e^(t/2) cos(Bt)dt − e^(t/2) sin(Bt)dBt − (1/2)e^(t/2) cos(Bt)(dBt)²

The two dt-terms cancel, so

dXt = −e^(t/2) sin(Bt)dBt

This is a martingale, since the u(t, ω)dt part of the integral is 0.
b) Xt = e^(t/2) sin(Bt). We use f(t, x) = e^(t/2) sin x.

∂f/∂t = (1/2)e^(t/2) sin x,   ∂f/∂x = e^(t/2) cos x,   ∂²f/∂x² = −e^(t/2) sin x

Itô's formula:

dXt = (1/2)e^(t/2) sin(Bt)dt + e^(t/2) cos(Bt)dBt − (1/2)e^(t/2) sin(Bt)(dBt)²

dXt = e^(t/2) cos(Bt)dBt

Again, the u part is 0, and this is a martingale.
c) Xt = (Bt + t)e^(−Bt − t/2). Using f(t, x) = (x + t)e^(−x − t/2) = xe^(−x−t/2) + te^(−x−t/2):

∂f/∂t = −(1/2)xe^(−x−t/2) + e^(−x−t/2) − (t/2)e^(−x−t/2)

∂f/∂x = e^(−x−t/2) − xe^(−x−t/2) − te^(−x−t/2)

∂²f/∂x² = −e^(−x−t/2) − e^(−x−t/2) + xe^(−x−t/2) + te^(−x−t/2)
        = xe^(−x−t/2) − 2e^(−x−t/2) + te^(−x−t/2)

Itô's formula (to simplify the notation we write Z = −Bt − t/2):

dXt = (−(1/2)Bt e^Z + e^Z − (t/2)e^Z)dt + (e^Z − Bt e^Z − te^Z)dBt + (1/2)(Bt e^Z − 2e^Z + te^Z)(dBt)²

dXt = (e^Z − Bt e^Z − te^Z)dBt

Once again the u part of the integral is 0, and this is a martingale.
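A martingale started at X0 = 1 keeps E[Xt] = 1 for all t. Here is a small simulation sketch of part a) (not in the notes; grid and sample sizes are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    paths, steps, T = 200_000, 200, 2.0
    dt = T / steps
    B = np.zeros(paths)
    for step in range(1, steps + 1):
        B += rng.normal(0.0, np.sqrt(dt), size=paths)  # advance all paths
        if step % 50 == 0:
            t = step * dt
            X = np.exp(t / 2) * np.cos(B)  # X_t = e^{t/2} cos(B_t)
            print(f"t={t:.1f}: E[X_t] ~ {X.mean():.4f} (should stay near 1)")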
Exercise 4.13
A stochastic process dXt = u(t, ω)dt + v(t, ω)dBt is a martingale if u(t, ω) = 0.
When we have a process with u ≠ 0 we can construct a martingale from
Xt by multiplying with the exponential martingale Mt, so that Yt = Xt Mt is a
martingale. The exponential martingale is

Mt = exp{ −∫₀ᵗ u(s, ω)dBs − (1/2)∫₀ᵗ u²(s, ω)ds }

As we verified in exercise 4.3,

dYt = Mt dXt + Xt dMt + dXt dMt.      (+)

We know dXt, and we want to compute dMt. To shorten the notation we let
Z(t) = −∫₀ᵗ u dBs − (1/2)∫₀ᵗ u² ds, so Mt = e^(Z(t)). By the Itô formula,

dMt = e^(Z(t)) dZ(t) + (1/2)e^(Z(t)) (dZ(t))²

where

dZ(t) = −u dBt − (1/2)u² dt      (++)

so

(dZ(t))² = u²(dBt)² + u³ dt dBt + (1/4)u⁴(dt)².

Using that dt·dt = dBt·dt = dt·dBt = 0 and dBt·dBt = dt, this reduces to
(dZ(t))² = u² dt. Substituting this in (++), we now have

dMt = e^(Z(t))( −u dBt − (1/2)u² dt ) + (1/2)e^(Z(t)) u² dt

dMt = e^(Z(t))( −u dBt − (1/2)u² dt + (1/2)u² dt ) = −e^(Z(t)) u dBt = −Mt u dBt.

Now we know what dMt is, so we can return to the equation (+). We
remember that dXt = u dt + dBt.

dYt = Mt dXt + Xt dMt + dXt dMt
    = Mt(u dt + dBt) + Xt(−Mt u dBt) + (u dt + dBt)(−Mt u dBt)
    = Mt u dt + Mt dBt − Xt Mt u dBt − Mt u² dt dBt − Mt u (dBt)²

Using (dBt)² = dt and dt·dBt = 0, the Mt u dt terms cancel:

dYt = Mt dBt − Xt Mt u dBt = Mt(1 − Xt u(t, ω))dBt

The Lebesgue integral part is 0, and this is a martingale. Result verified.
Exercise 4.14
For each of the cases we want to find the process f(t, ω) ∈ V[0, T] such that

F(ω) = E[F] + ∫₀ᵀ f(t, ω)dBt(ω).

a) F(ω) = BT(ω).

BT(ω) = ∫₀ᵀ dBt(ω)

b) F(ω) = ∫₀ᵀ Bt dt. Integration by parts gives

∫₀ᵀ Bt dt = T·BT − ∫₀ᵀ t dBt = ∫₀ᵀ (T − t) dBt

(since T·BT = ∫₀ᵀ T dBt), and E[F] = 0.

c) F(ω) = BT²(ω).

d(Bt²) = 2Bt dBt + dt  ⇒  BT² = T + 2∫₀ᵀ Bt dBt

d) F(ω) = BT³.

d(Bt³) = 3Bt² dBt + 3Bt dt

BT³ = ∫₀ᵀ 3Bt² dBt + 3∫₀ᵀ Bt dt = ∫₀ᵀ (3Bt² + 3(T − t)) dBt

(using b) for the last integral).

e) F(ω) = e^(BT(ω)). Using f(t, x) = e^(x − t/2),

d( e^(Bt − t/2) ) = e^(Bt − t/2) dBt

e^(BT − T/2) = 1 + ∫₀ᵀ e^(Bt − t/2) dBt

e^(BT) = e^(T/2) + ∫₀ᵀ e^(Bt + (T−t)/2) dBt

Because

E[e^(αBT)] = ∫_R e^(αx) e^(−x²/2T)/√(2πT) dx = e^(α²T/2),

so E[F] = E[e^(BT)] = e^(T/2).

f) F(ω) = sin(BT(ω)).

d( e^(t/2) sin Bt ) = e^(t/2) cos(Bt)dBt

e^(T/2) sin(BT) = ∫₀ᵀ e^(t/2) cos(Bt)dBt

sin BT = ∫₀ᵀ e^((t−T)/2) cos(Bt)dBt
Additional exercises 3 - Itô's Formula
Exercise 1
Find dYt and write the integral expansion.

Yt = tBt² ⇒ g(t, x) = tx²,   Xt = Bt

∂g/∂t = x²,   ∂g/∂x = 2xt,   ∂²g/∂x² = 2t

dYt = Bt² dt + 2tBt dBt + (1/2)·2t dt

dYt = (t + Bt²)dt + 2tBt dBt

Seeing that Y0 = 0,

Yt = ∫₀ᵗ (s + Bs²)ds + ∫₀ᵗ 2sBs dBs
Exercise 2
Find dYt and write the integral expansion.

Yt = e^(αBt) ⇒ g(t, x) = e^(αx),   Xt = Bt

∂g/∂t = 0,   ∂g/∂x = αe^(αx),   ∂²g/∂x² = α²e^(αx)

dYt = αe^(αBt) dBt + (1/2)α²e^(αBt) dt

Y0 = e^(αB0) = e⁰ = 1.

Yt = 1 + α ∫₀ᵗ e^(αBs) dBs + (1/2)α² ∫₀ᵗ e^(αBs) ds
Exercise 3
Calculate y(t) = E[e^(αBt)]. Show that this is:

y(t) = 1 + (α²/2) ∫₀ᵗ y(s)ds

Hint: Use exercise 2.
From 2 we know that e^(αBt) = Yt. Using the expression we found,

E[e^(αBt)] = E[Yt] = E[ 1 + α ∫₀ᵗ e^(αBs) dBs + (1/2)α² ∫₀ᵗ e^(αBs) ds ]

= 1 + αE[ ∫₀ᵗ e^(αBs) dBs ] + (1/2)α² E[ ∫₀ᵗ e^(αBs) ds ]

The expectation of a stochastic integral is 0, so this equals

1 + (1/2)α² E[ ∫₀ᵗ e^(αBs) ds ] = 1 + (1/2)α² ∫₀ᵗ E[e^(αBs)]ds

Now we have the expectation we began with inside the integral, but this time it
depends on s!

y(t) = E[e^(αBt)] = 1 + (1/2)α² ∫₀ᵗ E[e^(αBs)]ds = 1 + (1/2)α² ∫₀ᵗ y(s)ds
Exercise 4
We are going to show that y′ = (α²/2)y.
From exercise 3 we know that

y(t) = 1 + (α²/2) ∫₀ᵗ y(s)ds

Introducing F(s) as the anti-derivative of y(s), by the fundamental theorem
of calculus we have

y(t) = 1 + (α²/2)( F(t) − F(0) ).

Noting that F(0) is a constant, we proceed and differentiate both sides:

y′(t) = (α²/2)F′(t) = (α²/2)y(t)

since F is the antiderivative of y. Now we can solve the differential
equation to find E[e^(αBt)]:

y′(t)/y(t) = α²/2

∫₀ᵗ y′(s)/y(s) ds = ∫₀ᵗ (α²/2) ds

[ln y(s)]₀ᵗ = (α²/2)t

Using that y(0) = 1, and ln(1) = 0:

ln y(t) = (α²/2)t  ⇒  y(t) = e^(α²t/2)

In summary, E[e^(αBt)] = e^(α²t/2).
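A quick Monte Carlo check of this identity (not part of the notes; α, t and the sample size are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(2)
    alpha, t = 0.8, 1.5
    B_t = rng.normal(0.0, np.sqrt(t), size=2_000_000)
    print("MC estimate :", np.exp(alpha * B_t).mean())
    print("closed form :", np.exp(alpha**2 * t / 2))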
Exercise 5
Find dYt and write the integral expansion.

X(t) = (B1(t), B2(t), B3(t))   and   Yt = e^t B1(t)B2(t)B3(t)

The function implied is g(t, x1, x2, x3) = e^t x1 x2 x3. Finding all the partial
derivatives:

∂g/∂t = e^t x1 x2 x3,  ∂g/∂x1 = e^t x2 x3,  ∂g/∂x2 = e^t x1 x3,  ∂g/∂x3 = e^t x1 x2

∂²g/∂x1∂x2 = e^t x3,  ∂²g/∂x1∂x3 = e^t x2,  ∂²g/∂x2∂x3 = e^t x1

∂²g/∂x1² = ∂²g/∂x2² = ∂²g/∂x3² = 0

Before we apply this in the Itô formula, we note that all the second derivatives
with respect to the same variable are 0, and all the mixed second-derivative terms
cancel to zero since dBi dBj = 0 when i ≠ j. So,

dYt = e^t B1(t)B2(t)B3(t)dt + e^t B2(t)B3(t)dB1(t) + e^t B1(t)B3(t)dB2(t) + e^t B1(t)B2(t)dB3(t)

Yt = ∫₀ᵗ e^s B1 B2 B3 ds + ∫₀ᵗ e^s B2 B3 dB1 + ∫₀ᵗ e^s B1 B3 dB2 + ∫₀ᵗ e^s B1 B2 dB3
Exercise 6
Find dYt and write the integral expansion.

Xt = (B1(t), B2(t))   and   Yt = B1(t)² B2(t)².

For g(t, x1, x2) = x1² x2² we find the derivatives:

∂g/∂t = 0,   ∂g/∂x1 = 2x1 x2²,   ∂g/∂x2 = 2x1² x2

∂²g/∂x1² = 2x2²,   ∂²g/∂x2² = 2x1²,   ∂²g/∂x1∂x2 = 4x1 x2

d( B1² B2² ) = 0 + 2B1 B2² dB1 + 2B1² B2 dB2 + (1/2)( 2B2² dt + 2B1² dt )

d( B1² B2² ) = (B1² + B2²)dt + 2B1 B2² dB1 + 2B1² B2 dB2

B1(t)² B2(t)² = ∫₀ᵗ (B1(s)² + B2(s)²)ds + 2∫₀ᵗ B1(s)B2(s)² dB1(s) + 2∫₀ᵗ B1(s)² B2(s) dB2(s)
Exercise 7
Find u and v such that dXt = u dt + v dBt when

Xt = (y1, y2)ᵀ = (t, e^(2Bt))ᵀ.

For y1 = t:   ∂y1/∂t = 1,   ∂y1/∂x = 0,   ∂²y1/∂x² = 0,   so d(y1) = 1·dt + 0.

For y2 = e^(2x):   ∂y2/∂t = 0,   ∂y2/∂x = 2e^(2x),   ∂²y2/∂x² = 4e^(2x),   so

d(y2) = 0 + 2e^(2Bt) dBt + (1/2)·4e^(2Bt) dt = 2e^(2Bt) dt + 2e^(2Bt) dBt

Our result is

u = ( 1, 2e^(2Bt) )ᵀ,   v = ( 0, 2e^(2Bt) )ᵀ.
=============================== 05/03-2010
Last time: if a function g is orthogonal to all functions of the form
exp{ ∫₀ᵀ h(t)dBt − (1/2)∫₀ᵀ h²(t)dt } (orthogonal w.r.t. the inner product in L²),
then g ≡ 0. In other words, the linear combinations of such functions are
dense in L², i.e. their span fills the entire space.

Itô's Representation theorem
Let F ∈ L²(F_T, P), which is a random variable, not a process. Then there
exists a unique stochastic process f(t, ω) ∈ V such that

F(ω) = E[F] + ∫₀ᵀ f(t, ω)dBt

Proof: (1) Assume

F = exp{ ∫₀ᵀ h(t)dBt − (1/2)∫₀ᵀ h²(t)dt }

We shorten the notation by defining Xt = ∫₀ᵗ h(s)dBs − (1/2)∫₀ᵗ h²(s)ds, at the
same time finding the differential which we will need, dXt = h(t)dBt − (1/2)h²(t)dt,
and introduce

Yt = exp{ ∫₀ᵗ h(s)dBs − (1/2)∫₀ᵗ h²(s)ds } = e^(Xt).

By Itô's formula:

dYt = e^(Xt) dXt + (1/2)e^(Xt)(dXt)²

dYt = e^(Xt)( h(t)dBt − (1/2)h²(t)dt + (1/2)h²(t)dt )

dYt = e^(Xt) h(t)dBt

Writing this in the expanded integral form:

Yt = Y0 + ∫₀ᵗ e^(Xs) h(s)dBs

By fixing t = T, we have

F = YT = 1 + ∫₀ᵀ e^(Xt) h(t)dBt

Since the expectation of a stochastic integral is zero we get E[F] = 1 + 0 = 1, so

F = E[F] + ∫₀ᵀ e^(Xt) h(t)dBt

which means f(t, ω) = e^(Xt) h(t). So (1) is ok if F = exp{XT}.

We now consider a linear combination of such functions Fk, k = 1, 2, . . .,
where Fk = exp{ ∫₀ᵀ hk(t)dBt − (1/2)∫₀ᵀ hk²(t)dt }.

F = Σ ak Fk = Σ ak ( E[Fk] + ∫₀ᵀ fk(t, ω)dBt )
  = Σ ak E[Fk] + ∫₀ᵀ Σ ak fk(t, ω)dBt
  = E[F] + ∫₀ᵀ f(t, ω)dBt

where f(t, ω) = Σ ak fk(t, ω). Since the Fk's are dense, any function can be
approximated by them. In other words, F ∈ L²(F_T, P) is arbitrary, so there
exists a sequence Fn of such linear combinations with Fn → F.

F = lim_{n→∞} Fn = lim_{n→∞} ( E[Fn] + ∫₀ᵀ fn(t, ω)dBt )      (⋆⋆)
Now the following question arises: why does fn → f? This needs some
justification.

E[(Fn − Fm)²] = E[( E[Fn] + ∫₀ᵀ fn(t, ω)dBt − E[Fm] − ∫₀ᵀ fm(t, ω)dBt )²]

= E[( E[Fn − Fm] + ∫₀ᵀ (fn − fm)dBt )²]

= E[Fn − Fm]² + ∫₀ᵀ E[(fn − fm)²]dt

(the cross term vanishes because an Itô integral has expectation zero, and
the last equality is the Itô isometry). So, since E[(Fn − Fm)²] → 0 and, by
Hölder's inequality, E[Fn − Fm]² ≤ E[(Fn − Fm)²], the first term must also
tend to zero. Then we are left with

∫₀ᵀ E[(fn − fm)²]dt → 0,

which means (fn) is Cauchy in L²([0, T] × Ω), and since L² is complete,
fn → f. So returning to (⋆⋆) we now know that

F = E[F] + ∫₀ᵀ f(t, ω)dBt.

We have shown existence, and now we want to show uniqueness. We assume

F = E[F] + ∫₀ᵀ f1(t, ω)dBt   and   F = E[F] + ∫₀ᵀ f2(t, ω)dBt

Subtracting one from the other:

0 = ∫₀ᵀ (f1 − f2)dBt

On both sides we now have elements in L², and we measure the L²-norm:

0 = E[( ∫₀ᵀ (f1 − f2)dBt )²] = ∫₀ᵀ E[(f1 − f2)²]dt

And again, this means we have f1 = f2 in L²([0, T] × Ω). Some notes on
this: the L²-norm is defined, for some object H, as ‖H‖² = ∫ H² dP. For
our applications P = the Lebesgue measure (t-variable) × the probability
measure on Ω. The expectation of the integral squared is the integral over
both of these.
Martingale Representation Theorem
Assume Mt is an Ft-martingale. Then there exists a unique process g(s, ω) such
that Mt = E[M0] + ∫₀ᵗ g(s, ω)dBs for all t. To be martingales, we need pure Itô
integrals. As opposed to the representation theorem above, this time the t
does not need to be fixed.

Proof:
Pick and fix t. By our previous result,

Mt = E[Mt] + ∫₀ᵗ f^(t)(s, ω)dBs.      (+)

(Martingales only require a finite L¹-norm; here we additionally assume a
finite L²-norm so that the previous result applies.)

We want to show uniqueness even across different times. By the properties
of conditional expectation:

E[X | A] = E[ E[X | B] | A ]  for A ⊂ B,      (++)
and if A = {∅, Ω} then E[X | A] = E[X].

Since Mt is a martingale we have, using (++),

E[Mt] = E[ E[Mt | F0] ] = E[M0]

So,

Mt = E[M0] + ∫₀ᵗ f^(t)(s, ω)dBs.

Assuming 0 ≤ t1 < t2:

Mt1 = E[Mt2 | Ft1]                                      (by def. of martingales)
    = E[ E[M0] + ∫₀^(t2) f^(t2)(s, ω)dBs | Ft1 ]        (Itô's rep. thm.)
    = E[M0] + ∫₀^(t1) f^(t2)(s, ω)dBs                   (martingale property*)

At the same time,

Mt1 = E[M0] + ∫₀^(t1) f^(t1)(s, ω)dBs                   (Itô's rep. thm.)

* all pure Itô integrals are martingales.

Now, by the uniqueness property, they must be equal:

f^(t2)(s, ω) = f^(t1)(s, ω)  for s ≤ t1.

Let ti = i for i = 1, 2, . . . and define g(s, ω) = f^(ti)(s, ω) if 0 ≤ s ≤ ti, which
is well defined. If t ∈ [0, ∞) is arbitrary, choose i ∈ N such that 0 ≤ t ≤ ti. Then

Mt = E[M0] + ∫₀ᵗ f^(t)(s, ω)dBs = E[M0] + ∫₀ᵗ f^(ti)(s, ω)dBs = E[M0] + ∫₀ᵗ g(s, ω)dBs      (Q.E.D)
Stochastic Differential Equations (SDE)
We consider expressions of the form

dXt/dt = b(t, Xt) + σ(t, Xt)·dBt/dt
         (ordinary DE)  (stochastic part)

Since Bt is nowhere differentiable (even though it is continuous), we look at the
form

dXt = b(t, Xt)dt + σ(t, Xt)dBt

but this we recognise as the shorthand form of

Xt = X0 + ∫₀ᵗ b(s, Xs)ds + ∫₀ᵗ σ(s, Xs)dBs.

We now ask ourselves: is it possible to find a solution Xt? As opposed to
conventional differential equations, we don't work with the derivatives but the
integrals, so a more proper name would probably be integral equations.
We'll look at an example.

Geometric Brownian motion

dXt = rXt dt + αXt dBt

We begin by only considering the ordinary differential equation dXt = rXt dt,
which we can think of as y′ = ry with solution y = Ce^(rx). We then
consider g(t, x) = ln x. By Itô's formula,

d(ln(Xt)) = (1/Xt)dXt + (1/2)(−1/Xt²)(dXt)²

d(ln(Xt)) = (r dt + α dBt) − (1/2)(1/Xt²)α²Xt² dt

d(ln(Xt)) = (r − (1/2)α²)dt + α dBt

This is the general solution process. Informally you guess a possible
solution and apply Itô's formula.

ln(Xt) = ln(X0) + ∫₀ᵗ (r − (1/2)α²)ds + ∫₀ᵗ α dBs

ln(Xt) = ln(X0) + (r − (1/2)α²)t + αBt

since ∫₀ᵗ dBs = Bt. Applying the exponential function to both sides, we find our
solution:

Xt = X0 e^((r − α²/2)t + αBt).

These solutions can be checked by using Itô's formula and seeing whether the
result matches the initial problem.
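One way to do such a check numerically is to simulate the SDE directly and compare it with the closed form on the same Brownian path. A minimal sketch (not from the notes; parameter values and grid are arbitrary choices, with a playing the role of α):

    import numpy as np

    rng = np.random.default_rng(3)
    r, a, X0, T, n = 0.1, 0.3, 1.0, 1.0, 1000
    dt = T / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)

    X_em, B = X0, 0.0
    for k in range(n):
        X_em += r * X_em * dt + a * X_em * dB[k]  # Euler-Maruyama step
        B += dB[k]

    X_exact = X0 * np.exp((r - 0.5 * a**2) * T + a * B)
    print(f"Euler-Maruyama: {X_em:.5f}   closed form: {X_exact:.5f}")

The two values should agree up to the discretisation error of the scheme.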
For the general form

Xt = X0 + ∫₀ᵗ b(s, Xs)ds + ∫₀ᵗ σ(s, Xs)dBs,

when does this equation have a unique solution? In order to be sure of this,
we have to impose some weak conditions.

Theorem
Let T > 0, b : [0, T] × Rⁿ → Rⁿ and σ : [0, T] × Rⁿ → Rⁿˣᵐ.

Condition (1)

|b(t, x)| + |σ(t, x)| ≤ C(1 + |x|)

for all x ∈ Rⁿ, some constant C ∈ R and 0 ≤ t ≤ T. This condition gives us
the restriction that there is at most linear growth. A counter-example if we
do not impose this is dXt = Xt² dBt.

Condition (2)

|b(t, x) − b(t, y)| + |σ(t, x) − σ(t, y)| ≤ D|x − y|

for x, y ∈ Rⁿ and some D ∈ R (a Lipschitz condition). This is similar to a
bound on the derivative, but is more general.

If these conditions are met, there exists a unique, t-continuous and Ft-measurable
solution to

dXt = b(t, Xt)dt + σ(t, Xt)dBt

when X0 = Z and Z is independent of F∞. Now we can prove this.
68
Uniqueness.
bt be two solutions. We will show that these are equal in L2 .
Let Xt and X
b t )2 ] =
E[(Xt − X
Z t
Z t
Z t
Z t
h
2 i
b
bs )dBs
E Z+ b(s, Xs )ds+ σ(s, Xs )dBs − Z+ b(s, Xs )ds+ σ(s, X
0
=E
h Z
t
b(s, Xs )ds −
0
=E
≤ 4E
0
h Z
0
h Z
t
0
t
Z
0
0
t
bs )ds +
b(s, X
Z
t
σ(s, Xs )dBs −
0
bs ) ds +
b(s, Xs ) − b(s, X
0
Z
t
0
Z
t
0
bs )dBs
σ(s, X
2 i
b
σ(s, Xs ) − σ(s, Xs ) dBs
h Z
2 i
b
b(s, Xs )−b(s, Xs ) ds +4E
2 i
2 i
b
σ(s, Xs )−σ(s, Xs ) dBs
t
0
(5.1)
Now we use that
Z t
2 Z t 2
bs )ds ≤
b(s, Xs ) − b(s, X
bs )ds
b(s, Xs ) − b(s, X
0
(Cond. 2)
≤
0
Z
t
0
2
b
D Xs − Xs ds
(Hölder)
≤
2
D t
Z
t
0
bs |2 ds
|Xs − X
(5.2)
The
last step due to Hölder’s inequality isn’t quite obvious. It states:
R
|f g| ≤ kf kLp kgkLq where p1 + 1q = 1. So if we let p = q = 2:
Z
0
t
|f |ds =
So
Z
0
t
|f · 1|ds ≤
Z
t
0
|f |ds =⇒
Z
t
2
0
Z
|f | ds
21
t
0
|f |ds
2
Z
t
2
1 ds
0
≤t·
Z
0
21
=
t
|f |2ds.
Returning to (5.1), we’ll work on the other term.
h Z t
2 i
bs )dBs
E
σ(s, Xs ) − σ(s, X
0
Itô isometry.
hZ t
2 i
bs ) ds
=E
σ(s, Xs ) − σ(s, X
0
2 i
hZ t
b
≤E
σ(s, Xs ) − σ(s, Xs ) ds
0
69
Z
0
t
1
|f |2ds · t 2
hZ t
i
Xs − X
bs 2 ds
≤D E
2
(5.3)
0
Using (5.1), (5.2) and (5.3), and that A ≤ B ⇒ E[A] ≤ E[B], we know that

E[|Xt − X̂t|²] ≤ 4D²t ∫₀ᵗ E[|Xs − X̂s|²]ds + 4D² ∫₀ᵗ E[|Xs − X̂s|²]ds

We now define g(t) = E[|Xt − X̂t|²]. As we have just shown,
g(t) ≤ 4D²(t + 1) ∫₀ᵗ g(s)ds. We also see that g(0) = 0. We now use Gronwall's
inequality:

If f(t) ≤ C1 + C2 ∫₀ᵗ f(s)ds, then f(t) ≤ C1 e^(C2 t).

This is just a differential equation with an inequality, solved with an
integrating factor. We know that g(t) ≥ 0, but here C1 = 0, so Gronwall
gives g(t) ≤ 0. The conclusion is g(t) = 0, or in other words
E[|Xt − X̂t|²] = 0 for 0 ≤ t ≤ T. Given a t, we have Xt = X̂t in L², a.s. The
next question is whether they are equal for all times simultaneously. This is
answered by the (perhaps counter-intuitive) implication

P(Xt = X̂t) = 1 for all t ∈ [0, T]  ⇒  P(Xt = X̂t for all t ∈ Q ∩ [0, T]) = 1,

where Q is the set of rational numbers. This builds upon a result from
measure theory:

P( ∩_{n=1}^∞ An ) = lim_{n→∞} P(An) = 1   (when each P(An) = 1).

The processes are equal at all rational times and continuous on the interval
[0, T]. This again implies that

P(Xt = X̂t for all t ∈ [0, T]) = 1,

i.e. the two processes are equal for all times t in [0, T].
Existence.
We introduce a technique for solving differential equations called Picard
iteration. For the differential equation y′ = y with the initial condition
y(0) = 1, the basic technique is to repeatedly calculate the equation

y(t) = y(0) + ∫₀ᵗ y(s)ds.

y1(t) = 1 (initial condition)
y2(t) = 1 + ∫₀ᵗ y1(s)ds = 1 + ∫₀ᵗ 1 ds = 1 + t
y3(t) = 1 + ∫₀ᵗ y2(s)ds = 1 + ∫₀ᵗ (1 + s)ds = 1 + t + t²/2
...
y_(n+1)(t) = 1 + ∫₀ᵗ yn(s)ds = 1 + t + t²/2 + t³/6 + ... + tⁿ/n!

The pattern is obvious, and lim_{n→∞} yn = e^t, which we can easily see is the
correct solution. We now try this method on the stochastic integral equation

dXt = b(t, Xt)dt + σ(t, Xt)dBt,  with X0 = Z.

X1(t, ω) = Z(ω) (initial condition)
X2(t, ω) = Z + ∫₀ᵗ b(s, X1)ds + ∫₀ᵗ σ(s, X1)dBs
...
X_(n+1)(t, ω) = Z + ∫₀ᵗ b(s, Xn)ds + ∫₀ᵗ σ(s, Xn)dBs

Now we show that Xn → X, which means that

X = Z + ∫₀ᵗ b(s, X)ds + ∫₀ᵗ σ(s, X)dBs,

i.e. a solution exists. We know that if (Xn) is a Cauchy sequence it is also
convergent.
=============================== 12/03-2010
Exercise 5.4
Solve the stochastic differential equations.

(i)

    [dX1]   [1]      [1  0 ] [dB1]
    [dX2] = [0] dt + [0  X1] [dB2]

Solving separately:

dX1(t) = dt + dB1(t)  ⇒  X1(t) = X1(0) + t + B1(t)

dX2(t) = X1 dB2(t)  ⇒  dX2(t) = { X1(0) + t + B1(t) } dB2(t)

∫₀ᵗ dX2(s) = ∫₀ᵗ X1(s)dB2(s)

⇒ X2(t) = X2(0) + X1(0)B2(t) + ∫₀ᵗ s dB2(s) + ∫₀ᵗ B1(s)dB2(s)

(ii)

dX(t) = X(t)dt + dB(t)

Multiplying with the integrating factor e^(−t) (by Itô's formula on
f(t, x) = e^(−t)x, where ∂f/∂t = −e^(−t)x and ∂f/∂x = e^(−t)):

d( e^(−t)Xt ) = −e^(−t)Xt dt + e^(−t)dXt = −e^(−t)Xt dt + e^(−t)(Xt dt + dBt) = e^(−t)dBt

e^(−t)Xt = X0 + ∫₀ᵗ e^(−s)dBs

Xt = X0 e^t + ∫₀ᵗ e^(t−s)dBs

This is analogous to the deterministic equation dX(t) = X(t)dt, where
∫₀ᵗ dX(s)/X(s) = ∫₀ᵗ ds gives log X(t) − log X(0) = t, i.e. X(t) = X(0)e^t.
With x(t) = ye^t, i.e. y = x(t)e^(−t), we get dy = e^(−t)dBt (by Itô's formula),
so y(t) = x(0) + ∫₀ᵗ e^(−s)dBs. Solving for x(t), and noting from this
relation that x(0) = y(0):

x(t) = e^t ( X(0) + ∫₀ᵗ e^(−s)dBs )      (E5.1)

(iii)

dX(t) = −X(t)dt + e^(−t)dB(t)

Solving the same way as (ii):

d( e^t Xt ) = e^t Xt dt + e^t dXt = e^t Xt dt + e^t( −Xt dt + e^(−t)dBt ) = dBt

e^t Xt = X0 + Bt  ⇒  Xt = e^(−t)X0 + e^(−t)Bt
Exercise 5.5
a) Solve this to get the Ornstein-Uhlenbeck process:

dXt = µXt dt + σ dBt

Here we have a = µ, so we multiply with the integrating factor e^(−at) = e^(−µt).

d( e^(−µt)Xt ) = −µe^(−µt)Xt dt + e^(−µt)dXt = −µe^(−µt)Xt dt + e^(−µt)( µXt dt + σ dBt ) = e^(−µt)σ dBt

e^(−µt)Xt = X0 + ∫₀ᵗ e^(−µs)σ dBs

Xt = e^(µt)X0 + ∫₀ᵗ e^(µ(t−s))σ dBs

b) Calculating the expectation and variance.

E[Xt] = E[e^(µt)X0] + 0 = e^(µt)X0   (X0 constant)

Var[Xt] = E[Xt²] − E[Xt]²
= E[(e^(µt)X0)²] + E[( ∫₀ᵗ e^(µ(t−s))σ dBs )²] + 2E[ e^(µt)X0 ∫₀ᵗ e^(µ(t−s))σ dBs ] − E[Xt]²

The first term factors out and cancels against E[Xt]², and the cross term is 0.
By the Itô isometry,

Var[Xt] = ∫₀ᵗ e^(2µ(t−s))σ² ds = σ²e^(2µt) ∫₀ᵗ e^(−2µs) ds
        = σ²e^(2µt) [ e^(−2µs)/(−2µ) ]₀ᵗ = (σ²/2µ)( e^(2µt) − 1 )
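A simulation sketch checking the mean and variance above (not in the notes; µ, σ, X0 and the grid are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma, X0, T = -0.5, 0.4, 1.0, 2.0
    steps, paths = 400, 100_000
    dt = T / steps
    X = np.full(paths, X0)
    for _ in range(steps):
        X += mu * X * dt + sigma * rng.normal(0.0, np.sqrt(dt), size=paths)

    print("mean: MC ~", X.mean(), "  exact:", X0 * np.exp(mu * T))
    print("var : MC ~", X.var(), "  exact:", sigma**2 / (2 * mu) * (np.exp(2 * mu * T) - 1))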
Exercise 5.6
We are going to solve the stochastic differential equation

dYt = r dt + αYt dBt

by multiplying both sides of the equation with the integrating factor

Ft = exp{ −αBt + (1/2)α²t }.

In this case we have a = α, but for the stochastic integral, so the factor
exp{−αBt + (1/2)α²t} comes out of Itô's formula. With

f(t, x, y) = e^(−αx + α²t/2) y = Zy   (writing Z = e^(−αBt + α²t/2)),

∂f/∂t = (1/2)α²Zy,   ∂f/∂x = −αZy,   ∂²f/∂x² = α²Zy,
∂f/∂y = Z,   ∂²f/∂y² = 0,   ∂²f/∂x∂y = ∂²f/∂y∂x = −αZ

By the Itô formula the process can be written as

d( e^(−αBt + α²t/2) Yt ) = (1/2)α²ZYt dt − αZYt dBt + Z dY(t)
   + (1/2)α²ZYt (dBt)² − αZ (dBt)( r dt + αYt dBt )
 = (1/2)α²ZYt dt − αZYt dBt + Z( r dt + αYt dBt ) + (1/2)α²ZYt dt − α²ZYt dt
 = r e^(−αBt + α²t/2) dt

so that

Ft Yt = Y0 + r ∫₀ᵗ e^(−αBs + α²s/2) ds,   i.e.

Yt = e^(αBt − α²t/2) ( Y0 + r ∫₀ᵗ e^(−αBs + α²s/2) ds ).

Where does the integrating factor come from? Consider the homogeneous
equation

dY(t) = αY(t)dB(t)      (E5.2)

and f(x) = log x, with df/dx = 1/x and d²f/dx² = −1/x²:

d log(Y(t)) = (1/Y(t))dY(t) − (1/2)(dY(t))²/Y(t)²

By E5.2 we know what dY(t) is:

d log(Y(t)) = αY(t)dB(t)/Y(t) − (1/2)α²Y(t)²dt/Y(t)² = α dB(t) − (1/2)α² dt

Y(t) = Y(0)e^(αBt − α²t/2)  ⇒  Y(0)/Y(t) = e^(−αBt + α²t/2),

which is exactly the factor Ft.
Exercise 5.7
a) We are going to solve the SDE

dX(t) = (m − Xt)dt + σ dBt

Here the m − Xt implies a = −1, so we use the integrating factor e^t.

d( e^t Xt ) = e^t Xt dt + e^t( m dt − Xt dt + σ dBt ) = e^t m dt + e^t σ dBt

e^t Xt = X0 + m ∫₀ᵗ e^s ds + σ ∫₀ᵗ e^s dBs

Xt = X0 e^(−t) + e^(−t)m(e^t − 1) + σ ∫₀ᵗ e^(s−t) dBs
   = e^(−t)(X0 − m) + m + σ ∫₀ᵗ e^(s−t) dBs

b) Calculating the expectation and variance.

E[Xt] = e^(−t)(X0 − m) + m

Var[Xt] = ( e^(−t)(X0 − m) + m )² + σ² ∫₀ᵗ e^(2s)e^(−2t) ds − ( e^(−t)(X0 − m) + m )²
        = (σ²/2)( 1 − e^(−2t) )
Exercise 5.8
Calculating a multi-dimensional SDE:

dX1(t) = X2(t)dt + α dB1(t)
dX2(t) = −X1(t)dt + β dB2(t)

In matrix form,

    [dX1]   [ 0  1] [X1]      [α  0] [dB1]
    [dX2] = [−1  0] [X2] dt + [0  β] [dB2]
             (= g)             (= M)

Using the integrating factor e^(−gt) (a matrix exponential), with a = −g:

d( e^(−gt)X(t) ) = −g e^(−gt)X(t)dt + e^(−gt)dX(t) = e^(−gt)M dB(t)

e^(−gt)X(t) = X(0) + ∫₀ᵗ e^(−gs)M dB(s)  ⇒  X(t) = e^(gt)X(0) + ∫₀ᵗ e^(g(t−s))M dB(s)

Using the infinite series expansion of the exponential function,

exp(tg) = I + tg + (t²/2!)g² + ...

and observing that g² = −I (the negative of the identity matrix),

exp(tg) = I + tg − (t²/2!)I − (t³/3!)g + (t⁴/4!)I + ...
        = g( t − t³/3! + t⁵/5! − ... ) + I( 1 − t²/2! + t⁴/4! − ... )
        = g sin(t) + I cos(t)

X(t) = ( g sin t + I cos t )X(0) + ∫₀ᵗ { g sin(t − s) + I cos(t − s) } M dBs

     = [ cos t   sin t] [X1(0)]   + ∫₀ᵗ [ cos(t−s)   sin(t−s)] [α  0] [dB1(s)]
       [−sin t   cos t] [X2(0)]         [−sin(t−s)   cos(t−s)] [0  β] [dB2(s)]

Multiplying the matrices:

[X1(t)]   [ cos t·X1(0) + sin t·X2(0) + α∫₀ᵗ cos(t−s)dB1 + β∫₀ᵗ sin(t−s)dB2 ]
[X2(t)] = [−sin t·X1(0) + cos t·X2(0) − α∫₀ᵗ sin(t−s)dB1 + β∫₀ᵗ cos(t−s)dB2 ]
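The identity exp(tg) = g sin(t) + I cos(t) is easy to verify numerically (a small sketch, not from the notes; the value of t is an arbitrary choice):

    import numpy as np
    from scipy.linalg import expm

    g = np.array([[0.0, 1.0], [-1.0, 0.0]])  # rotation generator, g @ g = -I
    t = 0.7
    lhs = expm(t * g)                            # matrix exponential
    rhs = g * np.sin(t) + np.eye(2) * np.cos(t)  # closed form derived above
    print(np.allclose(lhs, rhs))                 # True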
Exercise 5.17 - Gronwall's inequality
If

υ(t) ≤ C + A ∫₀ᵗ υ(s)ds   for 0 ≤ t ≤ T,

prove that υ(t) ≤ C exp(At).

We define

w(t) = ∫₀ᵗ υ(s)ds  ⇒  w′(t) = υ(t) ≤ C + Aw(t)

We now want to prove that w(t) ≤ (C/A)( exp(At) − 1 ).

Consider first the corresponding equality, w′(t) = C + Aw(t). We solve this
with variation of parameters. The homogeneous equation:

w′(t) = Aw(t)  ⇒  w′(t)/w(t) = A  ⇒  ∫₀ᵗ w′(s)/w(s) ds = A ∫₀ᵗ ds

[log w(s)]₀ᵗ = At  ⇒  w(t) = De^(At)  ⇒  D = w(t)e^(−At)      (5.17.1)

Letting D vary:

D′ = w′(t)e^(−At) − Aw(t)e^(−At) = ( C + Aw(t) )e^(−At) − Aw(t)e^(−At) = Ce^(−At)

D = C ∫₀ᵗ e^(−As)ds = C[ −e^(−As)/A ]₀ᵗ = C( 1 − e^(−At) )/A

Now we have found D, and inserting it back in (5.17.1):

w(t) = C( (1 − e^(−At))/A )e^(At) = (C/A)( e^(At) − 1 )

But we solved the equality instead of the inequality we originally had; a
comparison argument shows the solution of the inequality is dominated by
the solution of the equality. So we have shown that

w(t) ≤ (C/A)( e^(At) − 1 )

and therefore

υ(t) ≤ C + Aw(t) ≤ C + C( e^(At) − 1 ) = Ce^(At)      (Q.E.D)
Additional exercises 4 - Martingale properties and SDEs
Problem 1
For t ≥ s compute:

(i) E[(Bs + Bt²)e^(Bt) | Ft]
All the Brownian motion terms at time t are Ft-measurable, and since s ≤ t so
are those at time s. Everything in the expectation is Ft-measurable, so

E[(Bs + Bt²)e^(Bt) | Ft] = (Bs + Bt²)e^(Bt).

(ii) E[e^(Bt−Bs)Bs² | Fs]
Here Bs² is Fs-measurable so we can factor it out. Next we note that Bt − Bs
is independent of Fs so we can take the expectation without the conditioning:

E[e^(Bt−Bs)Bs² | Fs] = Bs² E[e^(Bt−Bs) | Fs] = Bs² E[e^(Bt−Bs)] = Bs² e^((t−s)/2)

(iii) E[Bt² | Fs]
Nothing here is Fs-measurable. Instead we use that Bt = (Bt − Bs) + Bs:

E[Bt² | Fs] = E[((Bt − Bs) + Bs)² | Fs]
            = E[(Bt − Bs)² | Fs] + 2E[Bs(Bt − Bs) | Fs] + E[Bs² | Fs]
            = E[(Bt − Bs)²] + 2Bs E[Bt − Bs] + Bs²

We can split the expectation over the sum. In the first term we have
independence. In the second term we can factor out Bs since it is measurable,
and use independence to omit the Fs. In the last term everything is measurable.

= (t − s) + 0 + Bs² = t − s + Bs²
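A quick simulation check of (iii) (not in the notes; s, t and the conditioned value b are arbitrary choices) — condition on B_s = b and average B_t² over the independent increment:

    import numpy as np

    rng = np.random.default_rng(5)
    s, t, b = 1.0, 3.0, 0.8
    B_t = b + rng.normal(0.0, np.sqrt(t - s), size=1_000_000)
    print("MC ~", (B_t**2).mean(), "   formula:", t - s + b**2)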
Problem 2
Prove that Bt³ − 3tBt is a martingale using the definition.

E[Bt³ − 3tBt | Fs] = E[Bt³ | Fs] − 3tE[Bt | Fs]    (linearity)
                   = E[Bt³ | Fs] − 3tBs             (Bt is a martingale)

Considering the first term, using Bt = (Bt − Bs) + Bs and recalling that
(a + b)³ = a³ + 3a²b + 3ab² + b³:

E[Bt³ | Fs] = E[((Bt − Bs) + Bs)³ | Fs]
  = E[(Bt − Bs)³ | Fs] + 3E[(Bt − Bs)²Bs | Fs] + 3E[(Bt − Bs)Bs² | Fs] + E[Bs³ | Fs]
  = E[(Bt − Bs)³] + 3Bs E[(Bt − Bs)²] + 3Bs² E[Bt − Bs] + Bs³

(we have just used measurability and independence). Using this,

E[Bt³ | Fs] = 3Bs(t − s) + Bs³

E[Bt³ − 3tBt | Fs] = E[Bt³ | Fs] − 3tBs = 3tBs − 3sBs + Bs³ − 3tBs = Bs³ − 3sBs      (Q.E.D)
Problem 3
We are going to prove that Bt³ − 3tBt is a martingale using the Itô formula.
Defining f(t, x) = x³ − 3tx we calculate the partial derivatives:

∂f/∂t = −3x,   ∂f/∂x = 3x² − 3t,   ∂²f/∂x² = 6x

Using the Itô formula with f(t, Bt), and writing Yt = Bt³ − 3tBt:

d( Bt³ − 3tBt ) = −3Bt dt + (3Bt² − 3t)dBt + (1/2)·6Bt (dBt)²
               = −3Bt dt + (3Bt² − 3t)dBt + 3Bt dt = (3Bt² − 3t)dBt

Noting that Y0 = 0, we get

Yt = Y0 + 3∫₀ᵗ (Bs² − s)dBs = 3∫₀ᵗ (Bs² − s)dBs

From this we can see there isn't a deterministic (dt) part to this integral, which
implies that it is a martingale. We also have to check that the expectation of
the square of the integrand is finite:

E[(Bs² − s)²] = E[Bs⁴] + s² − 2sE[Bs²] = 3s² + s² − 2s² = 2s² < ∞      (Q.E.D)
Problem 4
We know that E[|Y|] < ∞ and want to use this to prove that Xt = E[Y | Ft]
is a martingale.

E[Xt | Fs] = E[ E[Y | Ft] | Fs ] = E[Y | Fs] = Xs      (since Fs ⊂ Ft)

Moreover E[|Xt|] ≤ E[|Y|] < ∞, so Xt is a martingale.

Recap of the rules of Conditional Expectation:
(1) E[Y | Fs] = Y                      if Y is Fs-measurable.
(2) E[Y | Fs] = E[Y]                   if Y is independent of Fs.
(3) E[ E[Y | Fs] | Ft ] = E[Y | Ft]    if Ft ⊂ Fs.
Problem 5
For Xt = e^(Bt) prove that

dXt = (1/2)Xt dt + Xt dBt

We define f(t, x) = e^x.

∂f/∂t = 0,   ∂f/∂x = e^x,   ∂²f/∂x² = e^x

Itô's formula with f(t, Bt):

dXt = 0 + e^(Bt)dBt + (1/2)e^(Bt)(dBt)² = (1/2)Xt dt + Xt dBt      (Q.E.D)
Problem 6
a) Let

Xt = e^(−ρT) e^(σBt − σ²t/2) = e^(−ρT) e^(σBt) e^(−σ²t/2) = Z(t)

Compute dXt and find E[Xt].
We define f(t, x) = e^(−ρT) e^(σx − σ²t/2).

∂f/∂t = −(1/2)σ²Z(t),   ∂f/∂x = σZ(t),   ∂²f/∂x² = σ²Z(t)

Itô's formula with f(t, Bt):

dXt = −(1/2)σ²Z(t)dt + σZ(t)dBt + (1/2)σ²Z(t)(dBt)² = σZ(t)dBt

dXt = σe^(−ρT) e^(σBt − σ²t/2) dBt

We note that X0 = e^(−ρT).

Xt = e^(−ρT) + σ ∫₀ᵗ e^(−ρT) e^(σBs − σ²s/2) dBs

Since the expectation of a stochastic integral is 0, we get

E[Xt] = E[e^(−ρT)] + 0 = e^(−ρT)

b) Using the Xt from a), we define φ(t, ω) such that

XT = E[XT] + ∫₀ᵀ φ(t, ω)dBt.

Is φ(t, ω) unique?

XT = e^(−ρT) + ∫₀ᵀ σe^(−ρT) e^(σBt − σ²t/2) dBt,   so   φ(t, ω) = σe^(−ρT) e^(σBt − σ²t/2).

By the uniqueness condition for SDEs, we know that the stochastic part
(the function σ(t, x)) is unique if it is Lipschitz continuous. We know the
exponential function is continuous, and on the interval [0, T] it is even
Lipschitz continuous, so φ(t, ω) is unique.
=============================== 19/03-2010
Picard Iteration
n = 1: X1(t, ω) = Z.

X2 = X1 + ∫₀ᵗ b(s, Z)ds + ∫₀ᵗ σ(s, Z)dBs

n > 1:

X_(n+1)(t) = Z + ∫₀ᵗ b(s, Xn(s))ds + ∫₀ᵗ σ(s, Xn(s))dBs

We can show that

E[|X_(n+1) − Xn|²] ≤ C1 E[ ∫₀ᵗ |Xn − X_(n−1)|²ds ]

For n = 1:

E[|X2 − X1|²] = E[( ∫₀ᵗ b(s, Z)ds + ∫₀ᵗ σ(s, Z)dBs )²]
  ≤ 4E[| ∫₀ᵗ b(s, Z)ds |²] + 4E[| ∫₀ᵗ σ(s, Z)dBs |²]

First term: Hölder's inequality. Second term: Itô isometry.

≤ 4E[ t ∫₀ᵗ |b(s, Z)|²ds ] + 4E[ ∫₀ᵗ |σ(s, Z)|²ds ]

By the growth condition from the existence theorem,

|b(t, x)| + |σ(t, x)| ≤ C(1 + |x|)  for all t, x,

so:

E[|X2 − X1|²] ≤ 4tE[ ∫₀ᵗ C²(1 + |Z|)²ds ] + 4E[ ∫₀ᵗ C²(1 + |Z|)²ds ]

We now have integrands that don't depend on s, and we can calculate the
integrals explicitly:

= 4tE[ C²t(1 + |Z|)² ] + 4E[ C²(1 + |Z|)² t ]

The random variable Z depends on ω, so we must keep the expectation. Using
(1 + |Z|)² ≤ 4 + 4|Z|²:

≤ 4t²C²·4E[1 + |Z|²] + 4tC²·4E[1 + |Z|²]

We assume E[|Z|²] < ∞, i.e. we have a finite L²-norm. Thus, for some constant
C̃ (and t in a bounded interval):

E[|X2 − X1|²] ≤ C̃ t      (7.1)
For n = 2:

(7.1) ⇒ E[ ∫₀ᵗ |X3(t) − X2(t)|²ds ] ≤ C1 E[ ∫₀ᵗ |X2 − X1|²ds ] ≤ C1 C̃ ∫₀ᵗ s ds = C1 C̃ t²/2

With the same approach we can show that

E[|X4(t) − X3(t)|²] ≤ C2 C̃ t³/6

By induction,

E[|X_(n+1) − Xn|²] ≤ C3^(n+1) t^(n+1)/(n + 1)!,   where C3^(n+1) t^(n+1)/(n + 1)! → 0 as n → ∞.

We now wish to show that Xn(t, ω) is Cauchy. Consider

( E[ ∫₀ᵗ |Xn − Xm|²ds ] )^(1/2) = ‖Xn − Xm‖_(L²)

We write Xn − Xm as a telescoping sum and use the triangle inequality:

‖Xn − Xm‖_(L²) ≤ Σ_(i=m)^(n−1) ‖X_(i+1) − Xi‖_(L²)

because what we really have is

Σ_(i=m)^(n−1) ‖X_(i+1) − Xi‖_(L²) = Σ_(i=m)^(n−1) ( E[ ∫₀ᵗ |X_(i+1) − Xi|²ds ] )^(1/2)

Now, using the previous estimate,

≤ Σ_(i=m)^(n−1) ( ∫₀ᵗ C3^(i+1) s^(i+1)/(i + 1)! ds )^(1/2)      (7.2)

We can calculate the integral in (7.2):

(7.2) = Σ_(i=m)^(n−1) ( C3^(i+1) t^(i+2)/(i + 2)! )^(1/2)

Because of the factorials in the expression, the full series is convergent. So

‖Xn − Xm‖_(L²) ≤ (7.2),

and when m, n → ∞, (7.2) tends to zero, so the difference becomes arbitrarily
small. This means Picard iteration converges to an X that solves the
differential equation.
Now it remains to show that Xt = lim_(n→∞) Xn(t, ω) is t-continuous. (The
L²-limit doesn't automatically have continuity, because we only have almost
sure equality.)

X(t, ω) = Z + ∫₀ᵗ b(s, Xs)ds + ∫₀ᵗ σ(s, Xs)dBs

The first integral is always t-continuous; the stochastic integral has a
t-continuous version. We define a new process with a t-continuous version of
∫₀ᵗ σ(s, Xs)dBs:

X̃(t, ω) = Z + ∫₀ᵗ b(s, X̃s)ds + ∫₀ᵗ σ(s, X̃s)dBs

By definition this process will be t-continuous, and it will also solve the SDE.

So, for the initial condition X(0, ω) = Z, where E[|Z|²] < ∞, and if for all
x, y and 0 ≤ t ≤ T we have (as 5.2.1 and 5.2.2 in Øksendal):

|b(t, x) − b(t, y)| ≤ C|x − y|,   |σ(t, x) − σ(t, y)| ≤ C|x − y|
|b(t, x)| ≤ C(1 + |x|),          |σ(t, x)| ≤ C(1 + |x|)      (7.3)

and if Z is independent of F∞ (the σ-algebra generated by the whole Brownian
motion), then the SDE Xt = Z + ∫₀ᵗ b(s, Xs)ds + ∫₀ᵗ σ(s, Xs)dBs, i.e.

dXt = b(t, Xt)dt + σ(t, Xt)dBt,

has a unique, t-continuous solution.
Strong and Weak solutions (p. 72).
Strong solution: Bt is given and we find a t-continuous solution which is
pathwise unique.
Weak solution: we find a pair (Xt, Bt) that solves the equation.

Example from the book: dXt = sign(Xt)dBt. This does not satisfy the
conditions in (7.3), so it has no strong solution.

Suppose we choose two different Brownian motions Bt^(1) and Bt^(2), and solve
the equation the usual way:

dXt^(1) = b(t, Xt^(1))dt + σ(t, Xt^(1))dBt^(1)

But now we solve the same equation with Bt^(2):

dXt^(2) = b(t, Xt^(2))dt + σ(t, Xt^(2))dBt^(2)

The solutions will be different for each ω, but these are both weak solutions
of the same equation.

Theorem
If Xt^(1) and Xt^(2) are weak solutions of the same equation, Xt^(1) and Xt^(2)
have the same distribution.

Weak solutions are interesting in various situations, e.g. where you shift the
Brownian motion. Doing so alters the solution, but still gives us the same
distribution. This will be important for future applications.
Itô diffusions.
We view two different kinds of stochastic differential equations:

SDE:        dXt = b(t, Xt)dt + σ(t, Xt)dBt
Diffusion:  dXt = b(Xt)dt + σ(Xt)dBt

A diffusion is the special case where one doesn't allow t-variation in the
coefficients (so every diffusion is an SDE). Note: if Xt is not a diffusion,
we can define Yt ∈ R^(n+1) by

Yt = (t, Xt)ᵀ.

Then Yt is a diffusion.

Example

dXt = sin(t)Xt dt + cos²(t)Xt dBt

This is a nice, solvable SDE. Set

Yt = (Y1, Y2)ᵀ = (t, Xt)ᵀ

dYt = ( 1, sin(Y1)Y2 )ᵀ dt + ( 0, cos²(Y1)Y2 )ᵀ dBt

Now we don't have any t-variation in the coefficients; they only depend on Yt:

dYt = b̃(Yt)dt + σ̃(Yt)dBt

This is a typical assignment for the final exam! It is also important for
control theory and optimal stopping.
Time-homogeneity

dXt = b(Xt)dt + σ(Xt)dBt,   t ≥ s

As is quite common for diffusions, this one begins at s, with Xs = x ∈ R.
Notation-wise we write this as Xt^(s,x), t ≥ s, which tells us that this process
starts at time s and at that time the value is the constant x.

We now compare X_(s+h)^(s,x), h ≥ 0, with X_h^(0,x). They will give different
solutions, but we are going to show that they both have the same distribution.
It is sufficient to show that they are weak solutions of the same SDE.

X_(s+h)^(s,x) = x + ∫_s^(s+h) b(Xu^(s,x))du + ∫_s^(s+h) σ(Xu^(s,x))dBu

(change of variable)

= x + ∫₀^h b(X_(s+u)^(s,x))du + ∫₀^h σ(X_(s+u)^(s,x))d(B_(u+s) − Bs)

After the change of variable we have a new Brownian motion, which we call
B̃t = B_(t+s) − Bs. With X0 = x we have

dXt = b(Xt)dt + σ(Xt)dB̃t

But X_h^(0,x) solves

dXt = b(Xt)dt + σ(Xt)dBt

with X0 = x. We have the same equation with different Brownian motions.
By the theorem these are two weak solutions of the same differential equation,
and they have the same distribution.
The Markov property for Itô Diffusions

E^x[ f(X_(t+h)) | Ft ] = E^(Xt)[ f(Xh) ]

Now what does this equation tell us? E^x means that the starting point is
x (X0 = x). For every y we can calculate g(y) = E^y[f(Xh^y)]; we get a
deterministic function of y. Then:

(1) E^(Xt)[f(Xh^x)] = g(Xt) is, by definition, an Ft-measurable random variable;
(2) E^x[f(X_(t+h)^x) | Ft] is also an Ft-measurable random variable.

Both sides of the equation give us an Ft-measurable random variable, and the
Markov property means that (1) = (2).

Proof.
Let Xt = Xt^(0,x). We have

Xt = x + ∫₀ᵗ b(Xu)du + ∫₀ᵗ σ(Xu)dBu = It

But then:

Xs = x + ∫₀ˢ b(Xu)du + ∫₀ˢ σ(Xu)dBu = Is

We assume t ≥ s and subtract the integrals, Xt − Xs = It − Is:

Xt = Xs + ∫_s^t b(Xu)du + ∫_s^t σ(Xu)dBu      (7.4)

where Xs = Xs^(0,x). We define the function F = F(x, s, t, ω) by

F(x, s, t, ω) = Xt^(s,x)(ω),

so the process begins at x at time s and travels along the path ω until time t.

(7.4) ⇒ Xt^(s, Xs^(0,x)) = Xt^(0,x).

These are the same, i.e. it makes no difference whether we begin at 0 or
restart at Xs^(0,x). Thus,

Xt^(0,x) = F(Xs^(0,x), s, t, ω)      (7.5)

The left side of the Markov property will use (7.5) with s = t and t = t + h:

E^x[ f(X_(t+h)) | Ft ] = E^x[ f(F(Xt^(0,x), t, t + h, ω)) | Ft ]      (7.5⋆)

On the right side:

E^(Xt^(0,x))[ f(Xh) ] = E[ f(F(y, 0, h, ω)) ]_(y = Xt^(0,x))

where the subscript denotes the value we substitute for y at the end.

Now we choose and fix t and t + h. Consider the function

g(y, ω) = f(F(y, t, t + h, ω)).      (7.6)

We can approximate g with Σ_(k=1)^m φk(y)ψk(ω). This gives us

(7.5⋆) = E^x[ g(Xt^(0,x), ω) | Ft ] ≈ E^x[ Σ_(k=1)^m φk(Xt^(0,x))ψk(ω) | Ft ]

Since Xt^(0,x) is Ft-measurable, so is φk(Xt^(0,x)):

= Σ_(k=1)^m E^x[ φk(Xt^(0,x))ψk(ω) | Ft ] = Σ_(k=1)^m φk(Xt^(0,x)) E^x[ ψk(ω) | Ft ]

We put φk back into the expectation, substituting y for Xt^(0,x) and treating
y as a constant:

Σ_(k=1)^m E^x[ φk(y)ψk(ω) | Ft ]_(y=Xt^(0,x)) = E^x[ Σ_(k=1)^m φk(y)ψk(ω) | Ft ]_(y=Xt^(0,x))

Using (7.6) this can be approximated:

≈ E^x[ g(y, ω) | Ft ]_(y=Xt^(0,x))

By the definition (7.6), g(y, ω) only depends on the path between t and t + h,
which means g(y, ·) is independent of Ft! So

= E^x[ g(y, ω) ]_(y=Xt^(0,x))

We have thus established the equality

E^x[ f(X_(t+h)) | Ft ] = E[ f(F(y, t, t + h, ω)) ]_(y=Xt^(0,x))

Now, since the time-shifted process has the same distribution, it has the same
expectation — we are just using time homogeneity and shifting things in time.
We can rewrite this as

= E[ f(F(y, 0, h, ω)) ]_(y=Xt^(0,x)) = E^x[ f(Xh^(0,y)) ]_(y=Xt^(0,x))
Stopping times.

τ = τ(ω)

A stopping time depends on the path ω. (Very relevant for the final exam!)
Definition: τ is an Ft-stopping time if

A = { ω | τ(ω) ≤ t } ∈ Ft

for all t. (It should be possible to determine whether τ(ω) ≤ t or not based
on the information Ft available at time t.)

Example.
Consider the first exit time from some open set U. The stopping time is
defined as

τ_U := inf{ t > 0 : Xt ∉ U }

Proof (that τ_U is a stopping time): From topology, U = ∪_(m=1)^∞ Km where
the Km's are closed. Then

{ ω | τ_U ≤ t } = ∩_(m=1)^∞ ∪_(r∈Q, r<t) { ω | Xr ∉ Km }      (7.7)

Since Xr with r < t is Ft-measurable (and the r's form a countable set),
(7.7) ∈ Ft. (Recall that a countable number of set operations will always keep
you within the σ-algebra.)
The Strong Markov Property for Itô diffusions.

E^x[ f(X_(τ+h)) | F_τ ] = E^(X_τ)[ f(Xh) ],   h ≥ 0

What is F_τ? It is the σ-algebra generated by B_(min(τ,s))(ω) for 0 ≤ s ≤ t
(and sometimes t = ∞). Take for example s = 2, τ = τ_(0,2) and B_(min(τ,2))(ω).
=============================== 09/04-2010
Itô diffusion generators
Let f : Rⁿ → R be a function and consider the limit

Af(x) = lim_(t↓0) ( E^x[f(Xt)] − f(x) ) / t

where x ∈ Rⁿ. We have D_A(x), which is the set of functions such that the
limit exists at x. However, the real set of interest is D_A = ∩_x D_A(x).
It may not look it, but the generator is a crucial part of the theory of
solving SDEs.

Preliminary result.
Let Yt be an Itô process, let τ be an Ft-stopping time with E[τ] < ∞, and
let g be a bounded Borel function. Then

E[ ∫₀^τ g(Yt)dBt ] = 0.

In other words, the basic "expectation of an Itô integral is zero" result can
be extended to a random upper limit τ in the integral.

Proof:
Define τk = min(τ, k) (a very usual course of action in proofs involving a
random stopping time). We note that τ = lim_(k→∞) τk.

E[ ∫₀^(τk) g(Ys)dBs ] = E[ ∫₀^k I_(s≤τk)(ω) g(Ys) dBs ] = 0

since both I_(s≤τk) and g(Ys) are Ft-measurable. It is pretty straightforward
to show that

∫₀^(τk) g(Ys)dBs → ∫₀^τ g(Ys)dBs

in L². Consult the textbook. The difference is arbitrarily small, so they can
be regarded as the same object, and the proof is complete.

Let

Yt = x + ∫₀ᵗ u(s, ω)ds + ∫₀ᵗ v(s, ω)dBs

We assume f ∈ C₀²(Rⁿ), which means it is twice differentiable and has compact
support (it is 0 outside some bounded region). We also assume that E[τ] < ∞.
By Itô's formula:

df(Yt) = Σ_(i=1)^n (∂f/∂xi)( ui dt + vi dBt ) + (1/2)Σ_(i,j=1)^n (∂²f/∂xi∂xj)(vvᵀ)_(ij) dt
since (dYt dYtᵀ)_(ij) = (vvᵀ)_(ij) dt. We can get rid of the dBt term by taking
the expectation on both sides. Writing out the full form:

f(Yτ) = f(x) + ∫₀^τ [ Σ_(i=1)^n (∂f/∂xi)( ui ds + vi dBs ) + (1/2)Σ_(i,j=1)^n (∂²f/∂xi∂xj)(vvᵀ)_(ij) ds ]

E[f(Yτ)] = f(x) + E[ ∫₀^τ ( Σ_(i=1)^n (∂f/∂xi)(Ys)ui(s, ω) + (1/2)Σ_(i,j=1)^n (∂²f/∂xi∂xj)(Ys)(vvᵀ)_(ij)(s, ω) ) ds ]

We can now use this in the generator. For the Itô diffusion

dXt = b(Xt)dt + σ(Xt)dBt

we get

Af(x) = lim_(t↓0) ( E^x[f(Xt)] − f(x) ) / t

= lim_(t↓0) (1/t) E[ ∫₀ᵗ ( Σ_(i=1)^n (∂f/∂xi)(Xs)bi(Xs) + (1/2)Σ_(i,j=1)^n (∂²f/∂xi∂xj)(Xs)(σσᵀ)_(ij)(Xs) ) ds ]

If g is continuous, we have

lim_(t→0) (1/t) ∫₀ᵗ g(s)ds = lim_(t→0) g(t)/1 = g(0)      (L'Hôpital)

Conclusion:
If f ∈ C₀², then Af(x) is defined and

Af(x) = Σ_(i=1)^n bi(x)·(∂f/∂xi)(x) + (1/2)Σ_(i,j=1)^n (σσᵀ)_(ij)(x)·(∂²f/∂xi∂xj)(x)

In addition we have also proved what is called Dynkin's formula:

E^x[f(Xτ)] = f(x) + E^x[ ∫₀^τ Af(Xt)dt ]

where Af(x) is the generator defined above.

Notation: if f ∈ C², we define

Lf(x) = Σ_(i=1)^n bi(x)·(∂f/∂xi)(x) + (1/2)Σ_(i,j=1)^n (σσᵀ)_(ij)(x)·(∂²f/∂xi∂xj)(x).
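A Monte Carlo sketch of Dynkin's formula (not from the notes): for Brownian motion and f(x) = x² we have Af = (1/2)f″ = 1, so E^0[f(B_τ)] = 0 + E^0[τ] with τ the exit time of U = (−1, 1); both sides should be ≈ 1 (step size and path count are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(6)
    dt, paths = 1e-3, 20_000
    B = np.zeros(paths)          # all paths start at x = 0
    tau = np.zeros(paths)
    alive = np.ones(paths, dtype=bool)
    t = 0.0
    while alive.any():
        t += dt
        B[alive] += rng.normal(0.0, np.sqrt(dt), size=alive.sum())
        hit = alive & (np.abs(B) >= 1.0)
        tau[hit] = t
        alive &= ~hit
    print("E[f(B_tau)] ~", (B**2).mean())   # left side of Dynkin
    print("f(0)+E[tau] ~", tau.mean())      # right side, since Af = 1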
The only difference is that the L-operator doesn't require compact support, so
it is defined on a bigger set of functions than Af(x). For A we needed f ∈ C₀²,
which means the function is zero outside some compact set. When that is not
the case, you can use Dynkin's formula on a truncated function. The
requirements f ∈ C₀², E[τ] < ∞ are very important for Dynkin's formula.

In summary, we have the generator

Af(x) = lim_(t↓0) ( E[f(Xt)] − f(x) ) / t,

the "L" operator

Lf(x) = Σ_(i=1)^n bi(x)·(∂f/∂xi)(x) + (1/2)Σ_(i,j=1)^n (σσᵀ)_(ij)(x)·(∂²f/∂xi∂xj)(x),

and a third variation, called the characteristic operator, defined as

Af(x) = lim_(U↓x) ( E^x[f(X_(τ_U))] − f(x) ) / E^x[τ_U]

where U runs through a sequence of open sets shrinking down to x.

"Traps" are points the diffusion cannot escape. E.g.

dXt = Xt dt + Xt dBt

with the start point x = 0: the constant process Xt ≡ 0 satisfies this
equation, so 0 is a trap. In general, x is a trap for Xt if

P^x( Xt = x for all t ) = 1.

Lemma (Dynkin 1965).
If x is not a trap for Xt, there exists an open set U with x ∈ U such that
E^x[τ_U] < ∞.

Explanation: if a point is not a trap, we can draw a circle around it, and the
escape time of the process, i.e. the time the process uses to reach a point
outside U, is finite (in expectation).

Let f ∈ C². If x is a trap, then the characteristic operator gives Af(x) = 0
(it reads (0 − 0)/∞). Choose a U with x ∈ U which is open and bounded. We
modify f to f0 such that f0 ∈ C₀² and f = f0 on U. Then Af(x) = Af0(x) =
Lf0(x) = Lf(x). If x is a trap, then Af(x) = Lf(x) = 0: the operators
coincide on traps.
If x is not a trap, we let U be bounded, and we look at the difference

( E^x[f(X_(τ_U))] − f(x) ) / E^x[τ_U] − Lf(x).

If this difference is arbitrarily small, then Af(x) = Lf(x). By Dynkin's
formula:

( E^x[ ∫₀^(τ_U) Lf(Xs)ds ] ) / E^x[τ_U] − Lf(x)

= ( E^x[ ∫₀^(τ_U) Lf(Xs)ds ] ) / E^x[τ_U] − ( Lf(x)·E^x[τ_U] ) / E^x[τ_U]

= ( E^x[ ∫₀^(τ_U) Lf(Xs)ds ] − E^x[ ∫₀^(τ_U) Lf(x)ds ] ) / E^x[τ_U]

= ( E^x[ ∫₀^(τ_U) ( Lf(Xs) − Lf(x) )ds ] ) / E^x[τ_U]

When U ↓ x the numerator in the expression above goes to 0 relative to the
denominator. We can conclude that if f ∈ C², then Af(x) = Lf(x) for all x.

Proof of the last step: choose an ε > 0. Choose U so small that
|Lf(y) − Lf(x)| < ε for all y ∈ U. On the interval (0, τ_U) we have Xs ∈ U
(we can't have left U yet). This means

|Lf(Xs) − Lf(x)| < ε  for all s ∈ (0, τ_U),

so

| E^x[ ∫₀^(τ_U) ( Lf(Xs) − Lf(x) )ds ] | / E^x[τ_U]
  ≤ E^x[ ∫₀^(τ_U) |Lf(Xs) − Lf(x)|ds ] / E^x[τ_U]
  < ε E^x[ ∫₀^(τ_U) 1 ds ] / E^x[τ_U] = ε.

A typical question in the final exam: given a well-behaved diffusion, calculate
Lf(x).

Example.
dXt = 2dt + 4dBt. What is the L-operator?

Lf(x) = 2·∂f/∂x + (1/2)·4²·∂²f/∂x² = 2f′(x) + 8f′′(x)

For dXt = 2x dt + 4x dBt:

Lf(x) = 2xf′(x) + (1/2)(16x²)f′′(x) = 2xf′(x) + 8x²f′′(x)

To find when this equals zero, we get an Euler equation.
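These L-operator computations are easy to reproduce symbolically. A small sketch (not from the notes), using a helper function L that encodes the one-dimensional operator:

    import sympy as sp

    x, g = sp.symbols("x gamma", positive=True)

    def L(expr, b, sigma):
        """L-operator for the 1-d diffusion dX = b dt + sigma dB."""
        return b * sp.diff(expr, x) + sp.Rational(1, 2) * sigma**2 * sp.diff(expr, x, 2)

    f = sp.Function("f")(x)
    print(L(f, 2, 4))          # 2 f'(x) + 8 f''(x)
    print(L(f, 2 * x, 4 * x))  # 2x f'(x) + 8x^2 f''(x)
    # The Euler equation Lf = 0 with the power ansatz f = x^gamma:
    print(sp.simplify(L(x**g, 2 * x, 4 * x) / x**g))  # 2*gamma + 8*gamma*(gamma - 1)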
Optimal Stopping
Let Xt ∈ Rⁿ be the state of a system at time t.

Example

Xt = ( x1(t), x2(t), x3(t) )ᵀ,
where x1(t) is the bank account, x2(t) the value of asset 1 and x3(t) the
value of asset 2.

Let g : Rⁿ → R be a reward function: g(x1, x2, x3) = x1 + #1·x2 + #2·x3,
where #1 is the number of units of asset 1 and #2 the number of units of
asset 2. The function g(Xt) is the total value at time t. We also have the
vector x ∈ Rⁿ, which is the start value (initial condition) at t = 0.

The stop value is g(Xτ), where τ is a stopping time. We also have the
accumulated utility ∫₀^τ f(Xt)dt, which is the utility of having Xt from
time 0 to τ. The total reward is ∫₀^τ f(Xt)dt + g(Xτ).

The Stopping Problem.
Find a stopping time τ* such that

E^x[ ∫₀^(τ*) f(Xt)dt + g(X_(τ*)) ] = sup_τ E^x[ ∫₀^τ f(Xt)dt + g(Xτ) ],

where x is the starting point (which is essential in this kind of problem). We
write

Φ(x) = E^x[ ∫₀^(τ*) f(Xt)dt + g(X_(τ*)) ].

This value will vary with x (the solution methods give us Φ(x)).

Technical modification.
We include time in the process. The utility is ∫₀^τ f(t, Xt)dt + g(τ, Xτ),
where we can use, as an example, ∫₀^τ e^(−ρt)Xt dt, where e^(−ρt) is the
discounting of Xt.

Since time dependence isn't allowed in a diffusion, we use the trick of
rewriting it. With Yt = (t, Xt) we get

dYt = ( dt, dXt )ᵀ = ( 1, b(Xt) )ᵀ dt + ( 0, σ(Xt) )ᵀ dBt

Finding the L-operator: by the definition we get, for the X-part,

L_X f(x) = Σ_(i=1)^n bi(x)·(∂f/∂xi)(x) + (1/2)Σ_(i,j=1)^n (σσᵀ)_(ij)(x)·(∂²f/∂xi∂xj)(x)

But we need the operator for Y, which has an extra variable:

L_Y f(s, x) = ∂f/∂s + L_X f(x),

so for the extended process we only get one extra term in the L-operator.
For some starting point (s, x), i.e. Xs = x, which means y = (s, x), we get

Φ(y) = sup_τ E^y[ ∫₀^τ f(Ys)ds + g(Yτ) ].

How does this vary with y? There are two methods of solving this.

(1) By hiding the f in the g in various ways, so that you reduce the problem to

Φ(y) = sup_τ E^y[ g(Yτ) ],

and solve this by finding the smallest superharmonic majorant of g.

(2) The verification method.
We define a "solvency region" G, which is the region where the system hasn't
gone bankrupt — or: the region where Y still has a value. We define
τ_G = inf{ t > 0 : Yt ∉ G }. We assume

E^y[ ∫₀^(τ_G) f⁻(Yt)dt ] < ∞

(the function f can't be too negative too often). We want to solve

Φ(y) = sup_(τ ≤ τ_G) E^y[ ∫₀^τ f(Yt)dt + g(Yτ) ]
The Verification theorem
We want to find a function φ(y) and a continuation region D, D = { y ∈
G | φ(y) > g(y) }, such that the following requirements are met (∂D is the
border of D):

(i)    L_Y φ + f = 0 on D
(ii)   φ ≥ g on G\D
(iii)  φ ∈ C¹(G) ∩ C(Ḡ)
(iv)   Lφ + f ≤ 0 on G\D
(v)    φ ∈ C² on G\∂D
(vi)   τ_D = inf{ t > 0 : Yt ∉ D } < ∞ a.s.
(vii)  E^y[ ∫₀^(τ_G) I_(∂D)(Yt)dt ] = 0 for y ∈ G
(viii) the family { φ(Yτ) | τ ≤ τ_D } must be uniformly integrable
(ix)   ∂D must be piecewise C¹

If all these requirements are met, φ(y) solves the stopping problem.
Example

dXt = rXt dt + αXt dBt,   X0 = x > 0.

This is geometric Brownian motion with a positive initial value — the standard
model for a stock.
We assume there is a transaction cost a > 0 on selling, and we include the
discount factor e^(−ρt), with inflation rate ρ. We have the solvency region
G = R₊\{0} = (0, ∞): when the capital hits 0 you are insolvent, i.e. bankrupt
or broke. We also have D = { x | 0 < x < x0 }, where x0 is the critical value
at which we should sell the stock (an unknown value which we must find).

Φ(s, x) = sup_(τ ≤ τ_G) E^((s,x))[ e^(−ρτ)(Xτ − a) ]  ⇒  f = 0

We get f = 0 since there is no integral in the expectation. Since time is
implicitly included in e^(−ρτ), we extend the process and include time:

Lφ = ∂φ/∂s + rx·∂φ/∂x + (1/2)α²x²·∂²φ/∂x²      (b = rx, σσᵀ = α²x²)

We now look at the requirements from the verification theorem.

(i) Lφ + f = 0  ⇒  ∂φ/∂s + rx·∂φ/∂x + (1/2)α²x²·∂²φ/∂x² = 0

We assume φ(s, x) = e^(−ρs)ψ(x), and substitute into the expression:

−ρe^(−ρs)ψ(x) + rxe^(−ρs)ψ′(x) + (1/2)α²x²e^(−ρs)ψ′′(x) = 0

We can multiply both sides with e^(ρs), and get an Euler equation:

−ρψ(x) + rxψ′(x) + (1/2)α²x²ψ′′(x) = 0

All Euler equations are solved with a power function: ψ(x) = x^γ. Substituting:

−ρx^γ + rx·γx^(γ−1) + (1/2)α²x²·γ(γ − 1)x^(γ−2) = 0

−ρx^γ + rγx^γ + (1/2)α²γ(γ − 1)x^γ = 0

−ρ + rγ + (1/2)α²γ(γ − 1) = 0

We have r: the payoff, α: the volatility, ρ: the inflation, and γ: an unknown
value. We can multiply both sides with 2 and rewrite as a second degree
polynomial:

α²γ² + (2r − α²)γ − 2ρ = 0

γ = ( α² − 2r ± √( (2r − α²)² + 8ρα² ) ) / (2α²)  ⇒  γ1 > 0, γ2 < 0

The general solution is ψ(x) = C1 x^(γ1) + C2 x^(γ2). But γ2 < 0 means
C2 x^(γ2) → ∞ as x → 0, which is impossible, so C2 = 0. The only possibility is

ψ(x) = Cx^(γ1).

Since we must have continuity over ∂D, we have the requirement ψ(x0) = x0 − a,
so we must have ψ(x) = (x0 − a)(x/x0)^(γ1), which leaves the only unknown as
x0. So on D (since that is where the differential equation is satisfied):

φ(s, x) = e^(−ρs)(x0 − a)(x/x0)^(γ1).

With requirement (iii) in the verification theorem, φ ∈ C¹(G), we can find x0:

φ(s, x) = e^(−ρs)(x0 − a)(x/x0)^(γ1)   if x ≤ x0
φ(s, x) = e^(−ρs)(x − a)               if x > x0

Since φ ∈ C¹, the two expressions should coincide in the x-derivative at x0:

∂φ/∂x = e^(−ρs)(x0 − a)γ1(x/x0)^(γ1−1)·(1/x0)   if x ≤ x0
∂φ/∂x = e^(−ρs)                                 if x > x0

φ ∈ C¹ ⇒ (x0 − a)γ1·(1/x0) = 1 ⇒ x0 = γ1 x0 − γ1 a ⇒

x0(1 − γ1) = −γ1 a  ⇒  x0 = γ1 a/(γ1 − 1)

We have found a candidate for φ(s, x). For x0 = γ1 a/(γ1 − 1):

φ(s, x) = e^(−ρs)(x0 − a)(x/x0)^(γ1)   if x ≤ x0
φ(s, x) = e^(−ρs)(x − a)               if x > x0

By construction we know that (i), (ii) and (iii) from the verification theorem
are fulfilled, but what about the rest? Consider (iv), Lφ ≤ 0 on G\D, which in
this case translates to (0, ∞)\(0, x0) = [x0, ∞):

φ(s, x) = e^(−ρs)(x − a)  ⇒

Lφ = ∂φ/∂s + rx·∂φ/∂x + (1/2)α²x²·∂²φ/∂x²
   = −ρe^(−ρs)(x − a) + rxe^(−ρs)·1 + 0
   = e^(−ρs)( ρa + (r − ρ)x ) ≤ 0?

This is not true if r ≥ ρ, but if r < ρ it is ok. It can be shown that all the
other requirements are satisfied as long as r < ρ.
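For concreteness, here is a small numerical sketch (not from the notes) computing γ1 and the selling threshold x0 = γ1·a/(γ1 − 1) for sample parameter values with r < ρ (the values themselves are arbitrary choices):

    import numpy as np

    r, alpha, rho, a = 0.05, 0.3, 0.08, 1.0
    disc = np.sqrt((2 * r - alpha**2) ** 2 + 8 * rho * alpha**2)
    gamma1 = (alpha**2 - 2 * r + disc) / (2 * alpha**2)  # positive root
    x0 = gamma1 * a / (gamma1 - 1)
    print(f"gamma_1 = {gamma1:.4f}   x0 = {x0:.4f}")

With these values γ1 ≈ 1.28 and x0 ≈ 4.6, i.e. you hold the stock until it has grown well past the transaction cost.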
=============================== 23/04-2010

Φ(x) = E^x[ ∫₀^τ f(Xt)dt + g(Xτ) ]

We want to find the τ that maximizes this function.
First we try Dynkin's formula directly, with the assumptions φ ∈ C₀² and
E[τ] < ∞:

E^y[φ(Yτ)] = φ(y) + E^y[ ∫₀^τ Lφ(Yt)dt ]  ≤  φ(y) − E^y[ ∫₀^τ f(Yt)dt ]      (by iv)

φ(y) ≥ E^y[ ∫₀^τ f(Yt)dt ] + E^y[φ(Yτ)] ≥ E^y[ ∫₀^τ f(Yt)dt + g(Yτ) ]      (by iv, ii)

This tells us that if we find such a φ(y), then φ(y) ≥ Φ(y).

A more rigorous approach.
We define a sequence of stopping times τR = min( R, inf{ t > 0 : |Yt| ≥ R } ).
With this definition we are ensured a bounded stopping time, and the stopped
process stays within a disk of radius R ≥ 0.
We find a sequence of C²-functions such that φj ≥ φ, φj = φ if
dist(Yt, ∂D) ≥ 1/j, and Lφj + f ≤ K for all j. We define τ ∧ τR := min(τ, τR).
This way Y_(τ∧τR) can never get outside the disk of radius R. By Dynkin's
formula:

E^y[ φj(Y_(τ∧τR)) ] = φj(y) + E^y[ ∫₀^(τ∧τR) Lφj(Yt)dt ]

By the assumption Lφj + f ≤ K, i.e. Lφj ≤ −f + K:

≤ φj(y) + E^y[ ∫₀^(τ∧τR) −f(Yt)dt + ∫₀^(τ∧τR) I_(dist(Yt,∂D)<1/j) K dt ]

≤ φj(y) + E^y[ ∫₀^(τ∧τR) ( −f(Yt) + K·I_(dist(Yt,∂D)<1/j) )dt ]

There are two cases we must consider, since φj = φ if dist(Yt, ∂D) ≥ 1/j: in
other words, if dist(Yt, ∂D) ≥ 1/j then Lφj(Yt) ≤ −f(Yt), and if
dist(Yt, ∂D) < 1/j then Lφj(Yt) ≤ −f(Yt) + K. Rearranging,

φj(y) ≥ E^y[ ∫₀^(τ∧τR) f(Yt)dt + φj(Y_(τ∧τR)) ] − K E^y[ ∫₀^(τ∧τR) I_(dist(Yt,∂D)<1/j) dt ]

If we now assume y ∉ ∂D and let j → ∞ (by the assumption in the theorem, the
last term will tend to zero):

φ(y) ≥ E^y[ ∫₀^(τ∧τR) f(Yt)dt + φ(Y_(τ∧τR)) ]

We now have the same inequality as we got from Dynkin's formula, except with
τ ∧ τR. If we let R → ∞ then τ ∧ τR → τ, and we arrive at exactly the same
result as in the introductory "false" proof — which turns out to give the
correct result!

This is actually a common method of proving results of this nature. You start
by applying Dynkin's formula, which gives the inequality you need. You can
then use adjusted functions (as we did above) to make the argument rigorous.

Exam: this last proof is one which you are required to know for the final exam
(point 11 on central results). What is expected? You are required to know that
you use Dynkin's formula to prove the result, and to know that you use some
approximations. However, the more details you remember, the better.

We have now shown that φ(y) ≥ Φ(y) under the assumption y ∉ ∂D. Now it
remains to show that φ(y) ≤ Φ(y).

If y ∈ D, then Lφ + f = 0 (by the verification theorem). Even though we don't
have compactness, we apply Dynkin's formula:

E^y[ φ(Y_(τD)) ] = φ(y) + E^y[ ∫₀^(τD) Lφ(Yt)dt ] = φ(y) − E^y[ ∫₀^(τD) f(Yt)dt ]

This means that

φ(y) = E^y[ ∫₀^(τD) f(Yt)dt + φ(Y_(τD)) ] ≤ E^y[ ∫₀^(τD) f(Yt)dt + g(Y_(τD)) ] ≤ Φ(y)

The last inequality follows since we have one specific τ on the left side,
whereas Φ is the supremum over all τ's.

Rigorous approach (sketch):
We define a sequence of stopping times

τ(i) = inf{ t > 0 : Yt ∈ D, dist(Yt, ∂D) = 1/i }

We define τ(i) ∧ τR as the minimum of these, and then we have compact support
as well as E[τ] < ∞. We let i → ∞ and get the same result as using Dynkin's
formula directly.

We have now proved that φ(y) ≤ Φ(y) is always true, and that φ(y) ≥ Φ(y) is
true if y ∉ ∂D. From this we can conclude that φ(y) = Φ(y) for all y except
possibly y ∈ ∂D.

However, by using the first parts of chapter 10 in the textbook, we can use
that the least superharmonic majorant construction implies that Φ is at least
downward semi-continuous. This means that Φ(y*) ≤ lim Φ(yj) when yj → y*, and
Φ(yj) = φ(yj) (since the yj are not on the border). This implies that
Φ(y*) ≤ lim φ(yj).

If φ is continuous, then φ(yj) → φ(y*) as j → ∞, so Φ(y*) ≤ φ(y*) for
y* ∈ ∂D. We thus have Φ(y) ≤ φ(y) whenever we have downward semi-continuity
(which is a weaker requirement than continuity).
Control Theory

dXt = b(t, Xt, ut)dt + σ(t, Xt, ut)dBt

We choose ut = u(t, ω) (which is stochastic). It is chosen from a set U, and we
stop the system when (t, Xt) leaves some specified region G. We introduce the
value function:

J^u(s, x) = E^((s,x))[ ∫_s^(τ_G) F(r, Xr, ur)dr + K(τ_G, X_(τ_G))·I_(τ_G<∞) ]

where F is the cumulative utility and K is the final utility. The stopping
rule τ_G is decided before we start. We want to choose ut as well as possible;
we try to influence Xt to achieve the highest possible value function.

Illustration.
Let's say we have two investments, a secure investment ∆p1 = r1 p1 ∆t and a
risky investment ∆p2 = r2 p2 ∆t + αp2 ∆Bt. We have Xt, the wealth at time t.
We introduce the control parameter ut = the share of Xt invested in the risky
asset. Then the system is:

∆Xt = r1 Xt(1 − ut)∆t + r2 Xt ut ∆t + αXt ut ∆Bt

Since we choose ut before we "invest" it, it will be Ft-measurable and thus
independent of ∆Bt. We get the Itô integral as a limit:

dXt = ( r1(1 − ut) + r2 ut )Xt dt + αXt ut dBt

when ∆t → 0, and we have a diffusion.

Stopping time τ_G. Principle:
we stop after a constant time T0 (e.g. 1 year, 5 years) or when we go bankrupt
(Xt = 0):

G = { (s, z) | s < T0 and z > 0 }

Technical preparation (a general process).

dXt = b(t, Xt, ut)dt + σ(t, Xt, ut)dBt

As in the case of optimal stopping we can hide the t: Yt = (t, Xt),

dYt = b̃(Yt, ut)dt + σ̃(Yt, ut)dBt.

The simple trick is just:

dXt = rXt dt + αXt dBt  ⇒  dYt = ( 1, rXt )ᵀ dt + ( 0, αXt )ᵀ dBt

For diffusions we have the characteristic operators.

Markov Control (when u(t, ω) = u(Yt)).
In general u(t, ω) can be Mt-measurable, where Mt = σ(Ys : s ≤ t) (which is
all the historic information on how the price developed). When we only use Yt
and not the preceding information — e.g. we only use the stock price Yt and
not the earlier information on how it fell/rose to that price — we are using a
Markov control. After all, a Markov process is memoryless.
In these cases we arrive at the system equation

dYt = b̃(Yt, u(Yt))dt + σ̃(Yt, u(Yt))dBt

For a chosen ut this becomes a diffusion with "L"-operator

Lf(y) = ∂f/∂s + Σ_(i=1)^n bi(y)·(∂f/∂xi) + Σ_(i,j=1)^n aij(y)·(∂²f/∂xi∂xj).

Important for the solution: what do a and b look like?

bi(y) = b̃i(y, u(y)),   aij(y) = (1/2)(σσᵀ)_(ij)(y, u(y))

Important: ut must be chosen for this to have any meaning.

Pointwise values (v is a constant):

L^v f(y) = ∂f/∂s + Σ_(i=1)^n b̃i(y, v)·(∂f/∂xi) + Σ_(i,j=1)^n (1/2)(σσᵀ)_(ij)(y, v)·(∂²f/∂xi∂xj)

Lf(y) = L^(u(y)) f(y).
Theorem - Hamilton-Jacobi-Bellman I (HJB I).
Let

Φ(y) = sup{ J^u(y) | u = u(Yt) a Markov control }.

If there exists an optimal Markov control u* and Φ(y) ∈ C²(G), we must have:

(i)   sup_(v∈U) { F^v(y) + L^v Φ(y) } = 0 for y ∈ G
(ii)  Φ(y) = K(y) for y ∈ ∂G
(iii) F^(u*(y))(y) + L^(u*(y)) Φ(y) = 0 for all y.

Verification Theorem for optimal control (HJB II).
Suppose φ ∈ C²(G) ∩ C(Ḡ), i.e. twice differentiable on G and continuous on the
closure, satisfies

(i)  F^v(y) + L^v φ(y) ≤ 0 for all y ∈ G and all v ∈ U
(ii) lim_(t→τ_G) φ(Yt) = K(Y_(τ_G))·I_(τ_G<∞)

Then φ(y) ≥ J^u(y) for all Markov controls u and all y ∈ G. We include
I_(τ_G<∞) because, if the stopping time is infinite, we do not take into
account what happens at the end.
If in addition we can find a function u0(y) such that

(iii) F^(u0)(y) + L^(u0) φ(y) = 0,

we have solved the problem:

u0(y) = u*(y)   and   φ(y) = sup_(u Markov) J^u(y).

The proof is similar to that for optimal stopping.

Proof.
Let u = u(y) be an arbitrary Markov control. We define

τR = min( R, τ_G, inf{ t > 0 : |Yt| ≥ R } ).

Applying Dynkin's formula:

E^y[φ(Y_(τR))] = φ(y) + E^y[ ∫₀^(τR) L^u φ(Yr)dr ]  ≤  φ(y) − E^y[ ∫₀^(τR) F^u(Yr)dr ]      (by i)

So, for all finite R,

φ(y) ≥ E^y[ φ(Y_(τR)) + ∫₀^(τR) F^u(Yr)dr ].

This is still true when R → ∞, since τR → τ_G:

φ(y) ≥ E^y[ φ(Y_(τ_G))·I_(τ_G<∞) + ∫₀^(τ_G) F^u(Yr)dr ].

Since u is arbitrary,

φ(y) ≥ J^u(y) for all u  ⇒  φ(y) ≥ Φ(y).      (Q.E.D)
If (iii) holds, then also φ(y) ≤ Φ(y), and then φ(y) = Φ(y). In more detail,
using (iii): choose u = u0(y), and use Dynkin's formula:

E^y[ φ(Y_(τR)) ] = φ(y) − E^y[ ∫₀^(τR) F^(u0)(Yr)dr ].

We let R → ∞:

E^y[ φ(Y_(τ_G)) ] = φ(y) − E^y[ ∫₀^(τ_G) F^(u0)(Yr)dr ]

φ(y) = E^y[ ∫₀^(τ_G) F^(u0)(Yr)dr + K(Y_(τ_G))·I_(τ_G<∞) ]

This means that u0 produces a value that is exactly equal to φ(y). Since u0
is one specific control, there might be one that is even better, so
φ(y) ≤ Φ(y) (which concludes the complete proof).
Theorem
Under certain conditions the optimal Markov control also happens to be the
optimal control among all possible controls. Assume there exists an optimal
Markov control u* such that:

(i)   All points on the border of G are regular with respect to Yt
      (i.e. when the process hits the border it immediately continues out of
      the region, and does not dwell on the border points).
(ii)  The mapping y → J^(u*)(y) is bounded and lies in C²(G) ∩ C(Ḡ).
(iii) E^y[ J^(u*)(Yτ) + ∫₀^τ L^u J^(u*)(Yt)dt ] < ∞

(where (iii) is valid for all stopping times τ ≤ τ_G and all u ∈ U). When
these three conditions are met, u* is the optimal control among all adapted
controls, i.e. all u(t, ω) that are Ft-measurable. Since we require the
optimal control to be a Markov control, u*(Yt), this means that the control
cannot depend on any previous information like, for instance, an integral does
(even though the integral is Ft-measurable).
Example.
Consider a geometric brownian motion.
dXt = rut Xt dt + αut Xt dBt
We have (in this case K = 0):
Φ(s, x) = sup E
u
(s,x)
hZ
s
102
∞
i
e−ρt f (Xt )dt .
What is the L-operator? We find:
1
a(s, x, v) = α2 v 2 x2
2
b(s, x, v) = r · v · x,
where v is the control and a(s, x, v) = 21 σσijT .
Lv f (s, x) =
By HJB I:
∂f
1
∂2f
∂f
+ rvx
+ α2 v 2 x2 2
∂s
∂x 2
∂x
sup F v (s, x) + Lv Φ(s, x) = 0
v∈R
which is equivalent to
n
∂Φ
∂Φ 1 2 2 2 ∂ 2 Φ o
sup e−ρt f (x) +
+ rvx
+ α v x
=0
∂s
∂x 2
∂x2
v∈R
♣
If the double derivative term is positive the sup will be infinite, so we must
2
have ∂∂xΦ2 ≤ 0. If we have the double derivative strictly less than 0 we get a
parable with a max point.
If we differentiate ♣ with regards to v and set it equal to zero we find the
maximum value.
rx
r ∂Φ
∂2Φ
∂Φ
+ α2 vx2 2 = 0 =⇒ v = u∗ = − ∂x∂ 2 Φ
∂x
∂x
α2 x ∂x2
We now substitute this v into ♣; by HJB, F^{u*} + L^{u*}Φ = 0:
e^{−ρs} f(x) + ∂Φ/∂s + r x ( −r ∂Φ/∂x / (α² x ∂²Φ/∂x²) ) ∂Φ/∂x + ½ α² x² ( r² (∂Φ/∂x)² / (α⁴ x² (∂²Φ/∂x²)²) ) ∂²Φ/∂x² = 0
e^{−ρs} f(x) + ∂Φ/∂s − (r²/α²) (∂Φ/∂x)² / (∂²Φ/∂x²) + (r²/(2α²)) (∂Φ/∂x)² / (∂²Φ/∂x²) = 0
e^{−ρs} f(x) + ∂Φ/∂s − (r²/(2α²)) (∂Φ/∂x)² / (∂²Φ/∂x²) = 0.
Multiply both sides with α² ∂²Φ/∂x²:
α² (∂²Φ/∂x²) ( e^{−ρs} f(x) + ∂Φ/∂s ) − ½ r² (∂Φ/∂x)² = 0.
We now have a second order non-linear partial differential equation. The next step is to set Φ(s, x) = e^{−ρs} ψ(x), which removes the time variable and reduces the problem to the non-linear ordinary differential equation
α² ψ''(x) ( f(x) − ρ ψ(x) ) − ½ r² ψ'(x)² = 0
(solved by e.g. numerical methods or simulation).
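The last step above suggests solving the reduced equation numerically. As a complement (not part of the original lecture), here is a minimal Monte Carlo sketch in Python that estimates the performance J^u(0, x) = E[ ∫_0^T e^{−ρt} f(X_t) dt ] of a fixed constant Markov control u by Euler-Maruyama; the payoff rate f(x) = √x, the truncated horizon T and all parameter values are illustrative assumptions.

import numpy as np

# Hedged sketch: estimate J^u(0, x0) = E[ int_0^T e^{-rho t} f(X_t) dt ]
# for the controlled GBM  dX = r u X dt + alpha u X dB,  via Euler-Maruyama.
# f, u, T and the parameters below are illustrative assumptions only.
rng = np.random.default_rng(0)

r, alpha, rho = 0.05, 0.3, 0.5
x0, T, n_steps, n_paths = 1.0, 20.0, 2000, 20000
dt = T / n_steps

def f(x):
    return np.sqrt(np.maximum(x, 0.0))    # example payoff rate (assumed)

def estimate_J(u):
    """Monte Carlo estimate of J^u(0, x0) for a constant control u."""
    X = np.full(n_paths, x0)
    J = np.zeros(n_paths)
    for k in range(n_steps):
        t = k * dt
        J += np.exp(-rho * t) * f(X) * dt            # accumulate the integral
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
        X += r * u * X * dt + alpha * u * X * dB     # Euler-Maruyama step
    return J.mean()

for u in (0.0, 0.5, 1.0):
    print(f"u = {u:3.1f}:  J approx {estimate_J(u):.4f}")

Comparing such estimates over a grid of controls gives a crude numerical check of the candidate u* derived above.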
=============================== 29/04-2010
10.1
In the optimal stopping problems below we want to find the supremum g* and the optimal stopping time τ*, if they exist.
a) g*(x) = sup_τ E^x[B_τ²].
We use the iteration g_n(x) = sup_{t∈S_n} E^x[g_{n−1}(B_t)], where S_n = { k·2^{−n} | 0 ≤ k ≤ 4^n }, n ∈ N, and g* = lim_{n→∞} g_n. Since E^x[B_t²] = x² + t,
sup_{t∈S_n} E^x[B_t²] = sup_{t∈S_n} (x² + t) = x² + 2^n → ∞ as n → ∞.
So g*(x) = ∞ and D = { x : g(x) < g*(x) } = R. Since the value grows without bound there is no optimal stopping time, so τ* does not exist.
b) g*(x) = sup_τ E^x[|B_τ|^p].
Here g(x) = |x|^p. Since E^x[|B_t|^p] grows like c_p t^{p/2} as t → ∞ (with c_p = E[|B_1|^p]),
sup_{t∈S_n} E^x[|B_t|^p] → ∞ as n → ∞.
Infinite growth, so g* = ∞ and τ* does not exist.
c) g*(x) = sup_τ E^x[e^{−B_τ²}].
Since B_t ∼ N(x, t) under P^x,
E^x[e^{−B_t²}] = (1/√(1+2t)) e^{−x²/(1+2t)} ≤ 1,
and by stopping at the first hitting time of 0 (which is finite a.s.) the reward is exactly e^0 = 1. So g*(x) = 1 and D = { x | e^{−x²} < 1 } = R \ {0}.
τ* = inf{ t > 0 | B_t ∉ R \ {0} } = inf{ t > 0 | B_t = 0 }.
d) g*(s, x) = sup_τ E^{(s,x)}[ e^{−ρ(s+τ)} · ½(e^{B_τ} + e^{−B_τ}) ].
We define g(s, x) = e^{−ρs} · ½(e^x + e^{−x}) = e^{−ρs} cosh(x), so
∂g/∂x = e^{−ρs} · ½(e^x − e^{−x}),
∂²g/∂x² = e^{−ρs} · ½(e^x + e^{−x}).
The characteristic operator:
A g(s, x) = ∂g/∂s + ½ ∂²g/∂x² = −ρ e^{−ρs} · ½(e^x + e^{−x}) + ½ e^{−ρs} · ½(e^x + e^{−x}) = (½ − ρ) g(s, x).
If ρ < ½, then A g(s, x) > 0 everywhere. Then g*(s, x) = ∞, τ* does not exist and D is the whole space.
If ρ ≥ ½:
sup_{t∈S_n} E^{(s,x)}[ e^{−ρ(s+t)} · ½(e^{B_t} + e^{−B_t}) ] = sup_{t∈S_n} e^{−ρ(s+t)} · ½( e^{t/2 + x} + e^{t/2 − x} ) = sup_{t∈S_n} e^{(½−ρ)t} e^{−ρs} cosh(x),
where we used E^x[e^{αB_t}] = e^{αx + ½α²t}. Since ½ − ρ ≤ 0, the supremum is attained at t = 0:
g* = e^{−ρs} cosh(x),
D = { (s, x) : g(s, x) < g*(s, x) } = ∅,
τ* = inf{ t > 0 : Y_t ∉ ∅ } = 0,
i.e. it is optimal to stop immediately.
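The computations in a)-d) rest on the moment formulas E^x[B_t²] = x² + t, E^x[e^{αB_t}] = e^{αx + ½α²t} and E^x[e^{−B_t²}] = (1+2t)^{−1/2} e^{−x²/(1+2t)}. A quick Monte Carlo sanity check of all three (a sketch, not part of the original notes):

import numpy as np

# Check the moment formulas used in exercise 10.1.
rng = np.random.default_rng(1)
x, t, a, n = 0.7, 2.0, 0.5, 1_000_000
B_t = x + np.sqrt(t) * rng.normal(size=n)        # B_t ~ N(x, t) under P^x

print(np.mean(B_t**2),           x**2 + t)
print(np.mean(np.exp(a * B_t)),  np.exp(a*x + 0.5*a*a*t))
print(np.mean(np.exp(-B_t**2)),  np.exp(-x*x/(1 + 2*t)) / np.sqrt(1 + 2*t))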
Additional exercises 6 - Optimal stopping/Control Theory
Problem 1
We have the optimal stopping problem
Φ(s, x) = sup_{τ≥0} E^{(s,x)}[ ∫_0^τ e^{−2t} X_t dt ]
where dX_t = dB_t with the initial condition X_s = x ∈ R. The solvency region G is the set of real numbers R.
(a) For a twice differentiable function φ ∈ C²(R) and Y_t = (t, X_t) we want to find the L-operator Lφ(s, x). By the definition:
Lφ(s, x) = ∂φ/∂s + b(x) ∂φ/∂x + ½ σ²(x) ∂²φ/∂x².
From the given process we see that b(x) = 0 and σ(x) = 1, so
Lφ(s, x) = ∂φ/∂s + ½ ∂²φ/∂x².
(b) We assume that the continuation region has the form D = {(s, x) | x > x₀}, and that the candidate φ(s, x) for the solution has the form
φ(s, x) = e^{−2s}( C e^{−2x} + ½ x ) if x > x₀, and φ(s, x) = 0 if x ≤ x₀.
We are going to verify that for any C ∈ R, φ(s, x) satisfies Lφ(s, x) + f(s, x) = 0 if x ∈ D. When x is in this region we use the first branch of φ. The function f is the integrand from the given problem:
f(s, x) = e^{−2s} x.
Using the expression for the L-operator from a), where
∂φ/∂s = −2 e^{−2s}( C e^{−2x} + ½ x ),
∂φ/∂x = e^{−2s}( −2C e^{−2x} + ½ ),
∂²φ/∂x² = e^{−2s}( 4C e^{−2x} ),
we get
Lφ(s, x) = −2 e^{−2s}( C e^{−2x} + ½ x ) + ½ · 4 e^{−2s} C e^{−2x} = −2C e^{−2x−2s} − e^{−2s} x + 2C e^{−2x−2s} = −e^{−2s} x.
So,
Lφ(s, x) + f(s, x) = −e^{−2s} x + e^{−2s} x = 0.
(Q.E.D)
(c) We are going to use the high contact principle to find values for C and x₀ such that φ is a C¹ function. This means that φ and ∂φ/∂x must be continuous at x₀. Value matching, φ(s, x₀) = 0:
e^{−2s}( C e^{−2x₀} + ½ x₀ ) = 0
C e^{−2x₀} = −½ x₀
C = −½ x₀ e^{2x₀}.
We use this expression for C in the derivative. Smooth fit, ∂φ/∂x (s, x₀) = 0:
e^{−2s}( −2C e^{−2x₀} + ½ ) = 0
2C e^{−2x₀} = ½
2( −½ x₀ e^{2x₀} ) e^{−2x₀} = ½
−x₀ = ½
x₀ = −½ =⇒ C = ¼ e^{−1}.
(d) We are going to verify that φ(s, x) > g(s, x) for (s, x) ∈ D and φ(s, x) ≥ g(s, x) everywhere. The function g is the non-integrated function in the optimal stopping problem, which in this case is 0.
In D, i.e. for x = a > −½, we must verify that
φ(s, a) > g(s, a) = 0 ⟺ e^{−2s}( ¼ e^{−1} e^{−2a} + ½ a ) > 0 ⟺ 1/(4 e^{1+2a}) + a/2 > 0.
If a ≥ 0 this is trivial. At a = −½ we have equality,
1/(4 e^{1+2(−½)}) − ¼ = ¼ − ¼ = 0,
but strictly greater values when a > −½, since the derivative ½( 1 − e^{−(1+2a)} ) is positive there. For x ∉ D we have φ(s, x) = 0, so φ(s, x) ≥ g(s, x) holds since both are 0.
(e) We want to verify that Lφ(s, x) + f(s, x) ≤ 0 everywhere. As we showed in b), equality holds for all x ∈ D. When x ∉ D we have φ ≡ 0, so Lφ(s, x) = 0 and
Lφ(s, x) + f(s, x) = 0 + e^{−2s} x ≤ 0,
which holds since the exponential function is always positive and x ∉ D means x ≤ −½ < 0.
(f) We have verified (i)-(iv) in the verification theorem, and we conclude that φ(s, x) = Φ(s, x) is the solution to the optimal stopping problem.
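As a sanity check of this solution (not part of the original notes), the sketch below simulates dX_t = dB_t from X_0 = x, stops at τ = inf{ t : X_t ≤ −½ }, accumulates ∫_0^τ e^{−2t} X_t dt, and compares the Monte Carlo average with the candidate φ(0, x) = ¼ e^{−1} e^{−2x} + ½ x; the step size and time horizon are illustrative choices.

import numpy as np

# Monte Carlo check of Problem 1: stop when X_t <= x0 = -1/2 and compare
# the average payoff with phi(0, x) = (1/4) e^{-1} e^{-2x} + x/2.
rng = np.random.default_rng(2)

x, x0 = 0.0, -0.5
dt, t_max, n_paths = 1e-2, 20.0, 20000

X = np.full(n_paths, x)
J = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)       # paths that have not stopped yet
t = 0.0
while t < t_max and alive.any():
    J[alive] += np.exp(-2*t) * X[alive] * dt     # accumulate e^{-2t} X_t dt
    X[alive] += np.sqrt(dt) * rng.normal(size=alive.sum())
    alive &= (X > x0)                            # freeze paths once X <= x0
    t += dt

phi = 0.25 * np.exp(-1) * np.exp(-2*x) + 0.5*x
print("Monte Carlo:", J.mean(), "  candidate phi(0, x):", phi)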
Problem 2
We have the control problem
Φ(s, x) = sup_{u(t)} E^{(s,x)}[ ∫_s^T e^{−ρt} ln[u(t)] dt + e^{−ρT} X(T) ], s < T,
where dX_t = (µX_t − u(t)) dt + σX_t dB_t and X_0 = x > 0.
(a) Let φ be a twice differentiable function and let L^v denote the differential operator for Y_t = (t, X_t) using the control v. What is L^v φ(s, x)? The differential operator has the same form as in (a) of Problem 1. In this problem we have (σ is a constant)
b(s, x, v) = µx − v, σ(s, x, v) = σx,
so
L^v φ(s, x) = ∂φ/∂s + (µx − v) ∂φ/∂x + ½ σ² x² ∂²φ/∂x².
(b) By the HJB equations, a candidate should satisfy
sup_{v≥0} { F^v(s, x) + L^v φ(s, x) } = 0, φ(T, x) = e^{−ρT} x.  (*)
We assume that the supremum above is attained at a point where v ↦ F^v(s, x) + L^v φ(s, x) is differentiable. We are going to prove that v = ( e^{ρs} ∂φ/∂x )^{−1}.
The function F^v is the integrand in the control problem:
F^v(s, x) = e^{−ρs} ln[v].
Using part (a):
∂F^v/∂v = e^{−ρs}/v, ∂(L^v φ)/∂v = −∂φ/∂x.
So, setting the derivative of F^v + L^v φ with respect to v equal to zero,
e^{−ρs}/v = ∂φ/∂x =⇒ v = ( e^{ρs} ∂φ/∂x )^{−1}.
(c) We assume that the solution is of the form φ(s, x) = f(s)x + g(s), where f and g are functions of s only. We are going to use (*) to prove that f must satisfy
f′(s) + µ f(s) = 0, f(T) = e^{−ρT},
and solve this equation to find f.
Using the differential operator and the φ(s, x) given in (c), we get
∂φ/∂s = f′(s)x + g′(s), ∂φ/∂x = f(s), ∂²φ/∂x² = 0.
So
L^v φ(s, x) = f′(s)x + g′(s) + (µx − v) f(s).
By (*),
F^v + L^v φ = e^{−ρs} ln[v] + f′(s)x + g′(s) + µx f(s) − v f(s) = 0.
Since this must hold for every x, the coefficient of x and the x-free part must vanish separately. The x-coefficient gives
f′(s) + µ f(s) = 0,
and the terminal condition φ(T, x) = e^{−ρT} x gives f(T) = e^{−ρT}. This linear ODE has the solution
f(s) = e^{−ρT} e^{µ(T−s)}.
(The x-free terms, e^{−ρs} ln[v] + g′(s) − v f(s) = 0, then determine g.)
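The x-coefficient argument can be checked symbolically. A minimal sketch (assuming sympy is available; the check itself is not part of the original notes):

import sympy as sp

# Problem 2(c): with phi = f(s) x + g(s) and the optimal v = e^{-rho s}/f(s),
# the x-coefficient of F^v + L^v phi must vanish, giving f'(s) + mu f(s) = 0.
s, x, mu, rho, sigma = sp.symbols('s x mu rho sigma', positive=True)
f, g = sp.Function('f'), sp.Function('g')

phi = f(s)*x + g(s)
v = sp.exp(-rho*s) / f(s)                        # candidate optimal control
L_phi = sp.diff(phi, s) + (mu*x - v)*sp.diff(phi, x) \
        + sp.Rational(1, 2)*sigma**2*x**2*sp.diff(phi, x, 2)
hjb = sp.exp(-rho*s)*sp.log(v) + L_phi

print(sp.expand(hjb).coeff(x, 1))                # -> mu*f(s) + f'(s)
print(sp.dsolve(sp.Derivative(f(s), s) + mu*f(s), f(s)))
# f(s) = C1*exp(-mu*s); the condition f(T) = e^{-rho T} then gives
# C1 = e^{-rho T + mu T}, i.e. f(s) = e^{-rho T} e^{mu (T - s)} as above.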
=============================== 30/04-2010
Levy's Characterising Theorem.
Let X_t be a continuous n-dimensional process on (Ω, H, Q) and let χ_t be the σ-algebra generated by {X_s, s ≤ t}. If:
(i) X_t is a χ_t-martingale w.r.t Q,
(ii) X_i(t)X_j(t) − δ_{ij} t is a χ_t-martingale w.r.t Q,
then X_t is a Brownian motion. Recall that δ_{ij} = 1 if i = j and δ_{ij} = 0 otherwise.
Change of Measure (Measure Theory).
Let P be a measure and M(ω) ≥ 0 a random variable with M(ω) ∈ L¹ (which means it has a finite expectation). Define a new measure Q by dQ = M dP; that is, for the measure Q of some set A we have
Q(A) = ∫_A M(ω) dP(ω) = ∫ I_A(ω) M(ω) dP(ω).
So P(A) = 0 ⇒ Q(A) = 0.
Radon-Nikodym's Theorem.
If P, Q are two measures such that P(A) = 0 ⇒ Q(A) = 0, there exists an M ≥ 0, M ∈ L¹, such that dQ = M dP.
Notation: If P(A) = 0 ⇒ Q(A) = 0, we write Q << P: Q is absolutely continuous with respect to P (informally said, if P is small then so is Q). A notational reminder is M = dQ/dP.
Conditional Expectation with a Change of Measure.
Let dQ = M dP and F ⊂ H. We add the weak assumptions that E_Q[|X|] < ∞ and E_P[M|F] > 0. Then
E_Q[X|F] = E_P[MX|F] / E_P[M|F].
Proof.
Let H ∈ F. Important: in general, E[X|F] is the unique F-measurable random variable such that ∫_H E[X|F] dµ = ∫_H X dµ for all H ∈ F (this is the definition of conditional expectation).
∫_H E_P[XM|F] dP = (Def.) ∫_H XM dP = ∫_H X dQ = (Def.) ∫_H E_Q[X|F] dQ = (Rad.-Nik.) ∫_H E_Q[X|F] · M dP.
The integrand E_Q[X|F] · M is not necessarily F-measurable (because of M). We can treat it as the X in the definition:
= (Def.) ∫_H E_P[ E_Q[X|F] · M | F ] dP.
Now E_Q[X|F] is F-measurable and can be factored out of the conditional expectation:
= ∫_H E_Q[X|F] E_P[M|F] dP.
Both factors in the integrand are now F-measurable. We have shown that
∫_H E_P[XM|F] dP = ∫_H E_Q[X|F] E_P[M|F] dP for all H ∈ F
=⇒ E_P[XM|F] = E_Q[X|F] E_P[M|F]
=⇒ E_Q[X|F] = E_P[XM|F] / E_P[M|F].
(Q.E.D)
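On a finite probability space the formula can be verified by direct computation. A minimal sketch with an assumed 4-point example, where F is generated by the partition {0,1}, {2,3}:

import numpy as np

# Verify E_Q[X|F] = E_P[MX|F] / E_P[M|F] on a toy 4-point space.
P = np.array([0.1, 0.2, 0.3, 0.4])
M = np.array([2.0, 0.5, 1.0, 1.0])
M = M / (P * M).sum()                  # normalise so that E_P[M] = 1
Q = P * M                              # dQ = M dP
X = np.array([1.0, 2.0, 3.0, 4.0])

for A in (np.array([0, 1]), np.array([2, 3])):         # atoms of F
    lhs = (Q[A] * X[A]).sum() / Q[A].sum()             # E_Q[X | A]
    rhs = (P[A]*M[A]*X[A]).sum() / (P[A]*M[A]).sum()   # E_P[MX|A] / E_P[M|A]
    print(lhs, rhs)                                    # equal on each atom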
Girsanov's Theorem I
Let dY_t = a(t, ω) dt + dB_t, where a is some arbitrary drift coefficient and σ = 1. We assume that t ≤ T, Y_0 = 0 and T ≤ ∞ (constant). B_t is an n-dimensional Brownian motion. We define
M_t = exp( −∫_0^t a(s, ω) dB_s − ½ ∫_0^t a²(s, ω) ds ).
We assume M_t is an F_t-martingale w.r.t P (and martingales must have E[|M_t|] < ∞).
We define Q by dQ = M_T dP, which is a new probability measure on the same space. Then Q is a probability measure and Y_t is a Brownian motion under Q! (We have effectively removed the drift a and simplified matters. This is of great importance to mathematical finance, since we begin with P, a market with some drift, and can remove the drift with the Girsanov theorem.)
Proof.
We first check that Q is a probability measure:
Q(Ω) = ∫_Ω 1 dQ = ∫_Ω M_T dP = E_P[M_T] = E_P[M_T | F_0] = M_0 = 1 (since M_t is a martingale),
where F_0 = {∅, Ω} is the trivial σ-algebra, and
M_0 = exp( −∫_0^0 a(s, ω) dB_s − ½ ∫_0^0 a²(s, ω) ds ) = e^0 = 1.
So Q is a probability measure since M is a martingale.
We set K_t = M_t · Y_t. Working in the original probability measure P, we apply Ito's formula (for a product):
dK_i(t) = M_t dY_i(t) + Y_i(t) dM_t + dY_i(t) dM_t.
We find dM_t. If we consider M_t = e^{X_t} and use Ito's formula on e^x, we get
dM_t = de^{X_t} = e^{X_t} dX_t + ½ e^{X_t} (dX_t · dX_t) = M_t dX_t + ½ M_t (dX_t dX_t).
We have to take into account that B_s is not 1-dimensional and a = α is a vector, so we rewrite M_t:
M_t = exp( −∫_0^t Σ_{k=1}^n α_k(s, ω) dB_k(s) − ½ ∫_0^t Σ_{k=1}^n α_k²(s, ω) ds ).
Using that dB_s ds = ds dB_s = ds² = 0, dB_i dB_j = 0 when i ≠ j and dB_i dB_i = ds:
dM_s = M_s( −Σ_{k=1}^n α_k(s, ω) dB_k(s) − ½ Σ_{k=1}^n α_k²(s, ω) ds ) + ½ M_s Σ_{k=1}^n α_k²(s, ω) ds
dM_s = −M_s Σ_{k=1}^n α_k(s, ω) dB_k(s).
Because dM_t is a purely stochastic integral, M_t is a martingale. We also know that dY_t = α dt + dB_t, so
dK_i(t) = M_t dY_i(t) + Y_i(t) dM_t + dY_i(t) dM_t
= M_t( α_i(t, ω) dt + dB_i(t) ) + Y_i(t) M_t( −Σ_{k=1}^n α_k(t, ω) dB_k(t) ) + ( α_i(t, ω) dt + dB_i(t) ) M_t( −Σ_{k=1}^n α_k dB_k(t) )
= M_t α_i dt + M_t dB_i(t) − M_t Y_i(t) Σ_{k=1}^n α_k dB_k(t) + (⋆).
The dt · dB products vanish since dB_t dt = dt dB_t = 0, so the term marked with a star is
(⋆) = −M_t Σ_{k=1}^n α_k dB_k(t) dB_i(t) = −M_t α_i dt,
since dB_k dB_i = δ_{ki} dt. This cancels the M_t α_i dt term. Thus
dK_i(t) = M_t( dB_i(t) − Y_i(t) Σ_{k=1}^n α_k dB_k(t) ).
We can see that we now have a process that consists only of the stochastic dB(t)'s, so K_i(t) is a martingale w.r.t P. We now want to show that Y is a martingale w.r.t Q. We know that dY_t = α dt + dB_t. We assume s ≤ t ≤ T.
E_Q[Y_t | F_s] = E_P[Y_t · M_T | F_s] / E_P[M_T | F_s] = E_P[Y_t · M_T | F_s] / M_s.
In the first step we used the formula for conditional expectation under a change of measure; in the second step we used that M is a martingale. We now use the tower property of conditional expectation (in reverse):
E_P[Y_t M_T | F_s] / M_s = E_P[ E_P[Y_t M_T | F_t] | F_s ] / M_s = E_P[ Y_t E_P[M_T | F_t] | F_s ] / M_s = E_P[Y_t M_t | F_s] / M_s.
Here we factored Y_t out of the inner conditional expectation because it is F_t-measurable, and then used that M_t is a martingale. Using that K_t = Y_t M_t and that K_t is a P-martingale:
E_P[Y_t M_t | F_s] / M_s = K_s / M_s = Y_s M_s / M_s = Y_s,
and we have shown that Y_t is a martingale w.r.t Q. Q.E.D.
We use the same technique to show that Y_i(t)Y_j(t) − δ_{ij} t is also a Q-martingale. By Levy's characterising theorem, Y_t is then a Brownian motion under Q, so dY_t = dB̃_t where B̃_t is a Brownian motion on Q.
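A Monte Carlo illustration of Girsanov I (a sketch; the constant drift a and all numbers are assumptions, not from the lecture): simulate Y_T = aT + B_T under P, reweight with M_T, and check that under Q the terminal value has mean ≈ 0 and variance ≈ T, as it should for a Brownian motion.

import numpy as np

# Girsanov I with constant drift a:  M_T = exp(-a B_T - a^2 T / 2).
# Under dQ = M_T dP the process Y_t = a t + B_t is a Brownian motion.
rng = np.random.default_rng(3)

a, T, n = 0.8, 1.0, 1_000_000
B_T = np.sqrt(T) * rng.normal(size=n)         # B_T under P
Y_T = a*T + B_T
M_T = np.exp(-a*B_T - 0.5*a*a*T)              # density dQ/dP on F_T

w = M_T / M_T.mean()                          # self-normalised weights
mean_Q = np.mean(w * Y_T)
var_Q = np.mean(w * Y_T**2) - mean_Q**2
print(mean_Q, var_Q)                          # ~ 0 and ~ T = 1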
Girsanov's Theorem II
We have the process
dY_t = β(t, ω) dt + θ(t, ω) dB_t, t ≤ T,
where the securities are Y_t ∈ Rⁿ, β ∈ Rⁿ, θ ∈ R^{n×m}, and B_t ∈ R^m gives the degrees of freedom generating the securities.
Assume there are two processes u(t, ω) and α(t, ω) such that
θ(t, ω) u(t, ω) = β(t, ω) − α(t, ω)
(which is a very central equation in mathematical finance). We define
M_t = exp( −∫_0^t u(s, ω) dB_s − ½ ∫_0^t u²(s, ω) ds )
and dQ(ω) = M_T dP on F_T. If M_T is a martingale, then Q is a probability measure (proved in the same way as in Girsanov I), and
B̂_t := ∫_0^t u(s, ω) ds + B_t
is a Brownian motion and
dY_t = α(t, ω) dt + θ(t, ω) dB̂_t
(which is also an important equation, and which we get if the assumptions are true). Alternative forms:
dB̂_t = u(t, ω) dt + dB_t, dB_t = dB̂_t − u(t, ω) dt.
Proof.
That Q is a probability measure is verified as in the proof of Girsanov I.
dY_t = β(t, ω) dt + θ(t, ω)( dB̂_t − u(t, ω) dt )
= ( β(t, ω) − θ(t, ω) u(t, ω) ) dt + θ(t, ω) dB̂_t.
Since θ(t, ω) u(t, ω) = β(t, ω) − α(t, ω):
= ( β(t, ω) − β(t, ω) + α(t, ω) ) dt + θ(t, ω) dB̂_t
= α(t, ω) dt + θ(t, ω) dB̂_t.
In Girsanov I we proved that a process of the form dY_t = u(t, ω) dt + dB_t becomes a Brownian motion under Q; applied to
dB̂_t = u(t, ω) dt + dB_t,
this shows that B̂_t is indeed a Brownian motion under Q.
Mathematical Finance
We consider a mathematical market (with n securities).
dX_0 = ρ(t, ω) X_0(t) dt, X_0(0) = 1,
is the bank account with the stochastic interest rate ρ(t, ω).
dX_i = µ_i(t, ω) dt + σ_i(t, ω) dB_t (i = 1, 2, …, n)
is the price process for each of the securities, where the term with µ_i(t, ω) is the general drift. We have an m-dimensional B_t, so
σ_i(t, ω) dB_t := Σ_{j=1}^m σ_{ij}(t, ω) dB_j(t).
We have a normalised market if and only if X_0(t) = 1 for all times t, which is equivalent to saying that ρ(t, ω) = 0.
A portfolio is an (n + 1)-dimensional F_t-adapted process
θ(t, ω) = ( θ_0(t, ω), θ_1(t, ω), …, θ_n(t, ω) ),
which we interpret as how many units of each asset we hold. The value of the portfolio is defined as
V(t, ω) = Σ_{i=0}^n θ_i(t, ω) X_i(t).
We have a self-financing portfolio if and only if
(i) dV_t = θ(t, ω) dX_t.
This is not a particularly obvious result. The short explanation is that the change in the value is determined solely by the changes in the asset prices. A more in-depth explanation is that when we are at time t_i we can only use information from t < t_i; after we have made our investment choices, the value evolves as dV_t = θ(t, ω) dX_t. Another requirement for a self-financing portfolio is
(ii) ∫_0^t [ | θ_0(s) ρ(s) X_0(s) + Σ_{i=1}^n θ_i(s) µ_i(s) | + Σ_{j=1}^m ( Σ_{i=1}^n θ_i(s) σ_{ij}(s) )² ] ds < ∞ a.s.,
which basically means the θ's can't be too big.
If a market isn't normalised, we can normalise it. We define
X̄_i(t) = X_0(t)^{−1} X_i(t), 0 ≤ i ≤ n.
Then X̄_0(t) = X_0(t)^{−1} X_0(t) = 1 and
X̄ = (1, X̄_1, …, X̄_n)
is a normalised vector.
If we use X_0(t) = e^{∫_0^t ρ(s,ω) ds}, then
dX_0(t) = e^{∫_0^t ρ(s,ω) ds} ρ(t, ω) dt = X_0(t) ρ(t, ω) dt.
If we assume the interest rate is known, then X_0^{−1} is known. Define
ξ(t) := X_0(t)^{−1} = e^{−∫_0^t ρ(s,ω) ds}.
We have a normalised process when X̄_i(t) = ξ(t) X_i(t). By the chain rule:
dξ(t) = −ρ(t, ω) ξ(t) dt.
By Ito's formula for a product:
dX̄_i(t) = ξ(t) dX_i(t) + X_i(t) dξ(t) + dX_i(t) dξ(t).
Since dξ(t) only consists of dt's, the last term becomes zero:
= ξ(t) dX_i(t) − X_i(t) ξ(t) ρ(t, ω) dt
= ξ(t)( dX_i(t) − ρ(t, ω) X_i(t) dt ).
Assume that θ is self-financing for X(t). Is the portfolio also self-financing in the normalised market? Yes; the calculation follows. By the definitions of V̄ and X̄,
V̄(t) = θ(t) X̄(t) = ξ(t) θ(t) X(t) = ξ(t) V(t).
We apply Ito's formula:
dV̄(t) = ξ(t) dV(t) + V(t) dξ(t) + dV(t) dξ(t),
where the term dξ dV = 0 since ξ only has a dt term. Using dV = θ dX and dξ = −ρξ dt:
dV̄(t) = ξ(t) θ(t) dX_t − ρ(t) ξ(t) V(t) dt
= ξ(t) θ(t) dX_t − ρ(t) ξ(t) θ(t) X_t dt (using V = θX)
= θ(t) ξ(t)( dX_t − ρ(t) X_t dt ) = θ(t) dX̄_t.
In conclusion, the normalised market is self-financing if the market is:
dV̄(t) = θ(t) dX̄(t).
Self-financing by "adapting" the bank account.
Choose θ_1(t), θ_2(t), …, θ_n(t) arbitrarily (which means you can buy whatever you like of the securities). Define
A(t) = Σ_{i=1}^n [ ∫_0^t θ_i(s) dX_i(s) − θ_i(t) X_i(t) ].
If
θ_0(t) = V^θ(0) + ξ(t) A(t) + ∫_0^t ρ(s) A(s) ξ(s) ds,
then θ is self-financing. (This is also the only choice of θ_0 for which θ is self-financing.)
Admissible Portfolios
A portfolio is called admissible if there exists a constant K < ∞ such that
V^θ(t, ω) ≥ −K a.s. for (t, ω) ∈ [0, T) × Ω.
This is necessary to ensure that we cannot construct money machines that generate infinite income.
Fatou's Lemma
Let f_1, f_2, …, f_n, … be a sequence of non-negative functions; f_i(x) ≥ 0 for all x and all i. Let f(x) = lim inf_{n→∞} f_n(x) (the limit of the running infima). Then
∫ f dµ ≤ lim inf_{n→∞} ∫ f_n dµ
(and if this also holds for −f we get an equality), and likewise for conditional expectations:
E[f | H] ≤ lim inf_{n→∞} E[f_n | H].
Actually the only requirement we need is lower-bounded functions: since f ≥ −K can be written as f + K ≥ 0, we can define g = f + K and use the lemma.
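A standard example showing that the inequality can be strict is f_n = n · 1_{(0, 1/n)} on [0, 1]: every ∫ f_n dµ = 1 while lim inf f_n = 0 pointwise. A small numerical sketch (the example itself is a textbook one, not from the lecture):

import numpy as np

# Fatou: int liminf f_n = 0  <  1 = liminf int f_n  for f_n = n*1_{(0,1/n)}.
xs = np.linspace(0.0, 1.0, 100_001)
dx = xs[1] - xs[0]

def f(n, x):
    return np.where((x > 0) & (x < 1.0/n), float(n), 0.0)

print([np.sum(f(n, xs)) * dx for n in (10, 100, 1000)])   # each ~ 1

running_min = f(1, xs)
for n in range(2, 2001):                # approximates the pointwise liminf
    running_min = np.minimum(running_min, f(n, xs))
print(np.sum(running_min) * dx)         # ~ 1/2000, tending to 0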
=============================== 07/05-2010
Arbitrage
An admissible portfolio is called an arbitrage if and only if
(i) V^θ(0) = 0,
(ii) V^θ(T) ≥ 0 a.s.,
(iii) P[ V^θ(T) > 0 ] > 0.
Markets that are arbitrage free
Assume we can find an equivalent measure Q ∼ P (that is, Q << P and P << Q) such that the normalised market is a local martingale w.r.t Q. In this case X(t) does not admit an arbitrage.
We have worked a lot with the stochastic integral
∫_0^T σ(t, ω) dB_t,
which has the important requirement
E[ ∫_0^T σ²(t, ω) dt ] < ∞,
or σ ∈ V, which is necessary to be able to use the Ito isometry and to use that the expected value of the stochastic integral is zero.
In mathematical finance, σ, which is the volatility, can sometimes become too large for this requirement. To prevent the integral from diverging to infinity, we restrict the process by introducing stopping times. With this it is possible to discuss local martingales.
Local Martingale
X_t is a local martingale if there exists an increasing sequence of stopping times τ_k, k = 1, 2, …, such that
a) τ_k → ∞ a.s.
b) X_{min(t, τ_k)} is a martingale for all k.
Proof
Assume θ is an arbitrage.
(i) V̄^θ(T) ≥ 0 a.s. w.r.t P ⇒ V̄^θ(T) ≥ 0 a.s. w.r.t Q, since Q << P. That is,
P[ V̄^θ(T) < 0 ] = 0 =⇒ Q[ V̄^θ(T) < 0 ] = 0.
This means that the non-negativity requirement for an arbitrage is satisfied under both Q and P.
(ii) P[ V̄^θ(T) > 0 ] > 0 =⇒ Q[ V̄^θ(T) > 0 ] > 0, since P << Q, so the last requirement is also satisfied. By both (i) and (ii) we must have a positive expectation: E_Q[ V̄^θ(T) ] > 0.
Now there are slightly different approaches for martingales and local martingales. For normal martingales, by the property of self-financing portfolios,
dV̄^θ(t) = θ(t) dX̄_t = ξ(t) θ(t) σ(t, ω) dB̃_t =⇒ E_Q[ V̄^θ(T) ] = 0,
since the expectation of a stochastic integral is zero for martingales. We have a similar proof method for local martingales.
If X̄_t is a local Q-martingale, then so is V̄^θ(t) (which can be seen from dV̄^θ(t) = θ(t) dX̄_t = ξ(t) θ(t) σ(t, ω) dB̃_t). We recall that if θ is admissible, V̄^θ(t) has a lower bound: V̄^θ(t) ≥ −K a.s. Because of this we can apply Fatou's lemma (for t ≥ s):
E_Q[ V̄^θ_t | F_s ] = E_Q[ lim_{k→∞} V̄^θ_{min(t,τ_k)} | F_s ] ≤ lim inf_{k→∞} E_Q[ V̄^θ_{min(t,τ_k)} | F_s ] = lim inf_{k→∞} V̄^θ_{min(s,τ_k)} = V̄^θ_s,
where we used that each V̄^θ_{min(t,τ_k)} is a martingale; so V̄^θ is a supermartingale under Q.
If we take s = 0 and t = T, the final time, we have
E_Q[ V̄^θ_T ] ≤ V̄^θ_0 = 0.
However, from the requirements for an arbitrage we have
E_Q[ V̄^θ_T ] > 0,
which is the complete opposite of what holds for local (and normal) martingales. In conclusion we see that arbitrage and the (local) martingale property are mutually exclusive.
Theorem (Closely related to Girsanov's theorem)
Let
X̂(t, ω) = (X_1, X_2, …, X_n)
be the vector of the securities (not including X_0, the secure bank account). If there exists a process u(t, ω) ∈ V which satisfies the equation
σ(t, ω) u(t, ω) = µ(t, ω) − ρ(t, ω) X̂(t, ω)
(where µ is the drift and σ is the volatility), and we have
E[ exp( ½ ∫_0^T u²(t, ω) dt ) ] < ∞
(the Novikov condition, which is only true for "small" u), then the market does not have an arbitrage.
Conversely, if there does not exist an arbitrage, we can find an F_t-adapted u(t, ω) that solves the equation
σ(t, ω) u(t, ω) = µ(t, ω) − ρ(t, ω) X̂(t, ω).
If σ is invertible and the dimensions match up, we can determine u(t, ω) directly.
Proof.
We can assume ρ = 0 (X has an arbitrage if and only if the normalised X̄ has an arbitrage). We define Y_t by dY_t = u(t, ω) dt + dB_t. We use Girsanov's theorem with
M_T = exp( −∫_0^T u(t, ω) dB_t − ½ ∫_0^T u²(t, ω) dt )
and dQ = M_T dP, so that Y_t is a Q-Brownian motion. (The exponential function is never zero, so we can divide by it, and we have
dP = M_T^{−1} dQ ⇒ P ∼ Q.)
We have
E[ exp( ½ ∫_0^T u²(t, ω) dt ) ] < ∞,
so u is "small" enough. Before we continue, we note that dB_t = dY_t − u(t, ω) dt. We have (with ρ = 0)
dX̂ = µ(t, ω) dt + σ(t, ω) dB_t = µ(t, ω) dt + σ(t, ω)( dY_t − u(t, ω) dt )
= µ(t, ω) dt + σ(t, ω) dY_t − σ(t, ω) u(t, ω) dt = µ(t, ω) dt + σ(t, ω) dY_t − µ(t, ω) dt
=⇒ dX̂ = σ(t, ω) dY_t,
where Y_t is a Q-Brownian motion, and therefore also a Q-martingale, and in particular a local Q-martingale. (X_0 = 1, and constants are martingales as well.) By the previous result the market then has no arbitrage.
To prove the converse, we have to use some topology.
Hilbert space theory.
Let S be a closed subspace of some Hilbert space H (for instance R² as a subspace of R³). For every given point µ ∈ H, there exists a unique element w ∈ S which is the closest point to µ. If we define v = µ − w, then v ∈ S^⊥ (v is orthogonal to the space S).
Now to prove the converse: let S ⊂ Rⁿ be the subspace generated by σ(t, ω)u for u ∈ R^m. (For ρ = 0 we want to come as close to µ as possible.) We will not include the proof here, but we end up with v = µ − w = 0. (This result is used in the textbook, but not mentioned explicitly.)
Attainable claims and Completeness.
Assume
E[ exp( ½ ∫_0^T u²(s, ω) ds ) ] < ∞
(so u is "small"). We define Q on F_T:
dQ = exp( −∫_0^T u(t, ω) dB_t − ½ ∫_0^T u²(t, ω) dt ) dP.
Then B̃_t = ∫_0^t u(s, ω) ds + B_t is an F_t-martingale, and an F_t-Brownian motion w.r.t Q (Girsanov's theorem), and any F(ω) ∈ L²(F_T, Q) has a representation (Ito's representation theorem):
F(ω) = E_Q[F] + ∫_0^T φ(t, ω) dB̃_t,
where F is a stochastic variable and
E_Q[ ∫_0^T φ²(t, ω) dt ] < ∞.
(By Ito's representation theorem, any L² variable can be expressed with an Ito integral.) In applications, these stochastic variables are the claims of options. A natural question is whether we can construct portfolios that replicate F(ω), which is what mathematical finance is all about.
Lemma
Assume we can find a u(t, ω) that solves
σ(t, ω) u(t, ω) = µ(t, ω) − ρ(t, ω) X̂(t, ω).
Further assume that
E[ exp( ½ ∫_0^T u²(s, ω) ds ) ] < ∞.
Then the normalised bank account satisfies dX̄_0(t) = 0, the normalised securities satisfy
dX̄_i(t) = ξ(t) σ_i(t, ω) dB̃_t,
and the normalised value is
dV̄^θ(t) = Σ_{i=1}^n θ_i(t, ω) dX̄_i(t) = ξ(t)( θ_1, …, θ_n ) σ(t, ω) dB̃(t).
Proof
Follows from Girsanov's theorem. (We get a local martingale and are only left with dB̃ terms, which makes computations substantially easier.)
European T-claim
Definition: A European T-claim is a downward bounded, F_T-measurable stochastic variable F ∈ L²(Q), where Q is the risk-neutral measure.
F is called attainable if there exists an admissible portfolio θ a) such that
F(ω) = V_z^θ(T) = z + ∫_0^T θ(t) dX_t,
where z is the starting value of the portfolio. The integral comes from the self-financing condition written out in integral form:
dV_t = θ(t) dX_t ⟺ V_T = V_0 + ∫_0^T θ(t) dX_t;
and b) such that
V̄_z^θ(t) = z + ∫_0^t ξ(s)( θ_1, …, θ_n ) σ(s, ω) dB̃_s.
A market is complete if and only if all T-claims are attainable.
Theorem
The market is complete if and only if σ(t, ω) has a left inverse Λ(t, ω); that is, there exists an F_t-adapted Λ(t, ω) such that Λ(t, ω) · σ(t, ω) = I_m (the m × m identity matrix), when we have m Brownian motions.
When this is the case, and σ is an n × m matrix:
σ(t, ω) u(t, ω) = µ(t, ω) − ρ(t, ω) X̂(t, ω)
Λ(t, ω) σ(t, ω) u(t, ω) = Λ[ µ(t, ω) − ρ(t, ω) X̂(t, ω) ]
u(t, ω) = Λ[ µ(t, ω) − ρ(t, ω) X̂(t, ω) ],
so we can calculate u directly.
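Numerically, when σ has full column rank a left inverse is Λ = (σᵀσ)^{−1}σᵀ. A sketch with made-up numbers (all values are illustrative assumptions; if σu = µ − ρX̂ is exactly solvable this recovers u, otherwise Λ(µ − ρX̂) is the least-squares solution):

import numpy as np

# Left inverse of an n x m matrix with full column rank, and the induced
# market price of risk u = Lambda (mu - rho * X_hat).  Made-up numbers.
sigma = np.array([[0.2, 0.0],
                  [0.1, 0.3],
                  [0.0, 0.4]])             # n = 3 securities, m = 2 BMs
mu = np.array([0.06, 0.05, 0.07])          # drifts
rho = 0.02                                 # interest rate
X_hat = np.array([1.0, 1.0, 1.0])          # current security prices

Lam = np.linalg.inv(sigma.T @ sigma) @ sigma.T
print(np.round(Lam @ sigma, 12))           # the 2 x 2 identity: a left inverse
print(Lam @ (mu - rho * X_hat))            # u(t, omega)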
Proof
Assume there exists a left inverse. We need to find an admissible portfolio θ and a starting value z such that
(i) V_z^θ(t) = z + ∫_0^t θ(s) dX_s, 0 ≤ t ≤ T,
(ii) V̄_z^θ(t) is a martingale, and (iii) V_z^θ(T) = F(ω).
Now we find the θ's. First we note that ρ ≥ 0 and K ≥ ρ, so K ≥ ρ ≥ 0. Since the normalising factor is defined as
ξ(t) = e^{−∫_0^t ρ(s,ω) ds},
we can use the bounds on ρ to see that
e^{−KT} ≤ e^{−∫_0^t ρ(s,ω) ds} ≤ 1.
In other words, the normalising factor cannot exceed 1 and cannot be arbitrarily small.
We also note that
F(ω) ∈ L² ⟺ ξ(T) F(ω) ∈ L²;
if F is an L² stochastic variable, then so is the normalised version. By Ito's representation theorem,
ξ(T) F(ω) = E_Q[ ξ(T) F(ω) ] + ∫_0^T φ(t, ω) dB̃_t,
so we want z = E_Q[ ξ(T) F(ω) ] and φ(s, ω) = ξ(s)( θ_1, …, θ_n ) σ(s, ω).
If we set ( θ_1, …, θ_n ) = ξ(t)^{−1} φ(t, ω) Λ(t, ω):
ξ(t)( θ_1, …, θ_n ) σ(t, ω) = ξ(t) ξ(t)^{−1} φ(t, ω) Λ(t, ω) σ(t, ω) = φ(t, ω).
If we can construct φ(t, ω), it is easy to see what the portfolio is supposed to look like as it progresses.
From the requirements on admissible portfolios, we need a lower bound; this is not so obvious in this case. θ is admissible only if V^θ is downward bounded:
ξ(t) V_z^θ(t) = E_Q[ ξ(T) V_z^θ(T) | F_t ] = E_Q[ ξ(T) F | F_t ].
By definition a T-claim is downward bounded, so ξ(T)F is downward bounded, so E_Q[ ξ(T)F | F_t ] is downward bounded, which means that ξ(t) V_z^θ(t) is downward bounded, which finally implies that V_z^θ(t) is downward bounded (using that whenever X ≥ Y, then E[X|A] ≥ E[Y|A]). Alternatively,
ξ(t) V_z^θ(t) ≥ −C ⇒ V_z^θ(t) ≥ −C/ξ(t) ≥ −C/e^{−KT} = −D.
Pricing of options
A European option on a claim F is a guarantee to get paid F at the time t = T > 0. How much do we need to pay for this guarantee at time t = 0?
Some simple principles (not mathematically rigorous conclusions, just common sense):
(1) If there exists a portfolio θ such that V_{−y}^θ(T) + F(ω) ≥ 0 a.s., a buyer will be willing to pay y.
(2) If there exists a portfolio θ̃ such that V_z^{θ̃}(T) ≥ F(ω), you should be willing to sell the claim for z: starting with the amount z received and trading in the portfolio θ̃, you end with at least as much money as the claim pays out, so this is a good deal for the seller.
If the following intervals do not overlap, it is impossible to agree on a price:
Buy-interval: (−∞, y_max]
Sell-interval: [z_min, ∞)
An agreement can only be reached if y_max ≥ z_min. (In the Black-Scholes formula these values coincide.) If we have equality, we have also determined the price of the option.
Before we go to the next theorem, we define the essential infimum, ess inf: we remove all sets of measure zero, consider only the "realistic" values, and take the ordinary infimum of those.
Theorem
Let F be a European T-claim. Then ess inf ξ(T)F(ω) is the largest constant K such that ξ(T)F(ω) ≥ K a.s. Moreover,
ess inf ξ(T)F(ω) ≤ y_max ≤ E_Q[ ξ(T)F ] ≤ z_min ≤ ∞.
Proof
We set y = ess inf ξ(T)F(ω) and
θ = ( −y, 0, …, 0 )
(we have everything in the bank). So V̄_{−y}^θ(t) = −y for all t, and
V̄_{−y}^θ(T) + ξ(T)F(ω) = ξ(T)F(ω) − y ≥ 0 a.s.
Because y = ess inf ξ(T)F(ω), it follows that ess inf ξ(T)F(ω) ≤ y_max.
For the next inequality, we assume y ∈ (−∞, y_max]. Then there exists some θ such that V_{−y}^θ(T) + F(ω) ≥ 0 a.s. For a self-financing portfolio:
−y + ∫_0^T θ(s) dX_s + F(ω) ≥ 0 =⇒ −y + ∫_0^T θ(s) dX_s ≥ −F(ω) a.s.
Normalising and using Girsanov's theorem:
−y + ∫_0^T ξ(s) θ(s) σ(s) dB̃_s ≥ −ξ(T)F(ω).
Assuming the stochastic integral is a martingale, taking E_Q gives
−y ≥ −E_Q[ ξ(T)F(ω) ] =⇒ y ≤ E_Q[ ξ(T)F(ω) ].
For a local martingale (which, being downward bounded, is a supermartingale with E_Q[ ∫_0^T ξ(s) θ(s) σ(s) dB̃_s ] ≤ 0),
−y ≥ −y + E_Q[ ∫_0^T ξ(s) θ(s) σ(s) dB̃_s ] ≥ −E_Q[ ξ(T)F(ω) ],
and we reach the same conclusion. Since every y in the buy-interval satisfies y ≤ E_Q[ ξ(T)F(ω) ], we get y_max ≤ E_Q[ ξ(T)F(ω) ].
For the next inequality, assume z is such that
z + ∫_0^T θ(s) dX_s ≥ F(ω).
We normalise and use Girsanov's theorem:
z + ∫_0^T ξ(s) θ(s) σ(s) dB̃_s ≥ ξ(T)F(ω).
For normal martingales,
z + E_Q[ ∫_0^T ξ(s) θ(s) σ(s) dB̃_s ] = z ≥ E_Q[ ξ(T)F(ω) ].
For local martingales,
z ≥ z + E_Q[ ∫_0^T ξ(s) θ(s) σ(s) dB̃_s ] ≥ E_Q[ ξ(T)F(ω) ],
so in both cases
z_min ≥ E_Q[ ξ(T)F(ω) ].
Lemma
If the market is complete, we always have
y_max = E_Q[ ξ(T)F(ω) ] = z_min.
Proof
Completeness means we can replicate the claim F. In other words, we can find a y such that
−y + ∫_0^T θ(s) dX(s) = −F(ω).
Doing the same calculations as in the theorem above, we now always get equality; in a complete market the relevant stochastic integrals are martingales with expected value 0. Normalising and using Girsanov's theorem:
−y + ∫_0^T ξ(s) θ(s) σ(s) dB̃(s) = −ξ(T)F(ω)
=⇒ −y = −E_Q[ ξ(T)F(ω) ] =⇒ y = E_Q[ ξ(T)F(ω) ] =⇒ y_max = E_Q[ ξ(T)F(ω) ].
Since the y's are less than or equal to the expected value, and we found a candidate that was equal, the maximum of all the y's equals it as well. Similarly,
z + ∫_0^T ξ(s) θ(s) σ(s) dB̃_s = ξ(T)F(ω) =⇒ z_min = E_Q[ ξ(T)F(ω) ]
by the same argument.
We have seen how to find the price of a European option. The question now is how to find the portfolio when the price is given. By Ito's representation theorem,
ξ(T)F(ω) = z + ∫_0^T φ(t, ω) dB̃_t.
We know that ξ(t) θ(t) σ(t) = φ(t), which we solve with respect to θ.
One weakness of stochastic analysis based on Brownian motion is that it does not take jumps into account, which is important since an option price can be completely different before and after a jump. A way to take jumps into account is to use Levy processes instead.
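To make the pricing formula y_max = E_Q[ ξ(T)F ] = z_min concrete, here is a hedged Monte Carlo sketch for a one-security market with constant coefficients (a Black-Scholes-type setting; all parameter values are assumptions). Under Q the drift of the security becomes the interest rate (Girsanov), so we simulate the risk-neutral dynamics directly and average the discounted claim.

import numpy as np

# Price E_Q[ xi(T) F ] for the call F = max(X_T - K, 0), with
# dX = mu X dt + sigma X dB and constant rate rho; under Q the drift is rho.
rng = np.random.default_rng(4)

x0, K, T = 100.0, 100.0, 1.0
rho, sigma, n = 0.03, 0.2, 1_000_000

B_T = np.sqrt(T) * rng.normal(size=n)
X_T = x0 * np.exp((rho - 0.5*sigma**2)*T + sigma*B_T)   # GBM under Q
F = np.maximum(X_T - K, 0.0)
print(np.exp(-rho*T) * F.mean())   # ~ 9.4, the Black-Scholes value here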