STK4510
—————————————————————————
27/08-2010
Purpose: price and hedge financial derivatives (options).
Example:
Credit derivatives: options that pay money if someone goes bankrupt.
Example:
European call options.
Toolbox: Stochastic Analysis
- Differentiation wrt a stochastic process: Ito's formula.
- Integration wrt stochastic processes: the Ito integral.
From MAT2700 we explored discrete markets (binomial models); we now move on to continuous time, with continuous stochastic processes S_t, t ≥ 0.
Section 2.1 + 2.2 Brownian Motion (Oksendal)
Def 2.1.4
A stochastic process is a family of random variables parametrized over time,
{X_t}_{t∈[0,T]},
defined on a probability space (Ω, P).
Events for stochastic processes
Let B ⊆ R^n and X_t : Ω → R^n. Then
X_t^{-1}(B) = {ω ∈ Ω | X_t(ω) ∈ B}.
For instance, if X_t is the temperature at day/time t and B is "temperature over 20 °C", then X_t^{-1}(B) is one event. The probability of an event A ⊆ Ω is denoted by P(A).
Def 2.1.1
A probability P is a function on ”all events” A ⊆ Ω with values between 0
and 1 such that
(i) P(∅) = 0, P(Ω) = 1.
(ii) If A_1, A_2, ... are disjoint (and countably many) subsets of Ω, which means A_i ∩ A_j = ∅ when i ≠ j, then
P(⋃_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i).
Problem: It may not be possible to assign a probability P(A) to all subsets A ⊆ Ω. In simple cases, like the "trivial" case Ω = {ω_1, ω_2} where S_0 can go two ways, we can set A_1 = ∅, A_2 = {ω_1}, A_3 = {ω_2} and A_4 = {ω_1, ω_2}. For this simple Ω we can set P(A_1) = P(∅) = 0 and P(A_4) = P(Ω) = 1 and, say, P(A_2) = P({ω_1}) = 0.3 and finally P(A_3) = P({ω_2}) = 1 − P({ω_1}) = 0.7.
When defining a probability P in the continuous case, we can get paradoxes unless we have some type of structure on Ω.
Def 2.1.1 (cont.)
A σ-algebra (the σ stands for the s in summation) F is a collection/family of events A ⊆ Ω where
(i) ∅ ∈ F
(ii) If A ∈ F ⇒ Ac ∈ F where Ac = Ω\A
(iii) A_1, A_2, ... ∈ F ⇒ ⋃_{i=1}^∞ A_i ∈ F.
So in 2.1.1, P is a function on the σ-algebra F .
A probability space (Ω, F , P )
To have a specific Ω and P you are forced to define an F.
In finance: F represents the potential states of the future. We use this to find the option price, and even how prices will evolve in the future.
Examples:
F = {∅, Ω}: the trivial σ-algebra.
F = P(Ω), the power set: all subsets of Ω.
Ω = R. We choose F = the smallest σ-algebra that contains all open subsets
of R.
Property 1: A countable intersection of σ-algebras is again a σ-algebra.
Property 2: P(R) contains all open subsets of R. The smallest σ-algebra containing all the open sets is strictly smaller than P(R) and has a name: the Borel σ-algebra, usually denoted B_R (and B_{R^n} on R^n).
P(A) = ∫_A (1/√(2π)) e^{−x²/2} dx,   A ∈ B_R.
This integral isn't defined for all A ⊆ R, so we must use the Borel σ-algebra.
The probability space is (R, B_R, P).
Given (Ω, F, P), a random variable X is F-measurable if X^{-1}(U) ∈ F for every U ∈ B_R. In practice this means one can find the probability P(X^{-1}(U)).
F_X is the σ-algebra generated by X: that is, the smallest σ-algebra that contains all events X^{-1}(U), where U ∈ B_R.
Trivially, X becomes F_X-measurable. Using F_X means we lose the probabilities that take into account the extreme prices (which was a contributing factor to the 2009 financial crisis). So F_X ⊆ F, i.e. it is a smaller σ-algebra.
Def.
A probability space (Ω, F, P) is called complete (not in the Cauchy sense) if all subsets B ⊆ A of events A ∈ F with P(A) = 0 also belong to F.
Brownian motion
Brownian motion is partly derived from the heat equation
∂u(t, x)/∂t = ½ ∂²u(t, x)/∂x²,
where we define u(0, x) = f(x). This equation has the solution
u(t, x) = ∫_R f(y) (1/√(2πt)) e^{−(x−y)²/(2t)} dy = E[f(B_t) | B_0 = x].
If we define
p(t, x, y) = (1/√(2πt)) e^{−(x−y)²/(2t)}
and, for 0 ≤ t_1 ≤ t_2 ≤ ... ≤ t_k and F_1, ..., F_k ∈ B_R, set
ν_{t_1,...,t_k}(F_1 × ··· × F_k) = ∫_{F_1×···×F_k} p(t_1, x, x_1) p(t_2 − t_1, x_1, x_2) ··· p(t_k − t_{k−1}, x_{k−1}, x_k) dx_1 ··· dx_k,
where x is the starting point, then we have transition probabilities.
Def
A Brownian motion (BM) {B_t}_{t∈[0,∞)} is a stochastic process with finite-dimensional distributions, for all k ∈ N,
P^x(B_{t_1} ∈ F_1, ..., B_{t_k} ∈ F_k) = ν_{t_1,...,t_k}(F_1 × ··· × F_k),
where B_0 = x, i.e. P^x(B_0 = x) = 1. (The left side is the probability of a sample of the BM.)
Question: Does BM exist? Does there exist an (Ω, F, P) such that we can define BM?
Luckily, Kolmogorov has already proved that they do. We have existence
by Kolmogorov’s extension theorem.
Theorem 2.1.5
For all t_1, ..., t_k ∈ [0, ∞), k ∈ N, suppose
(i) ν_{t_{σ(1)},...,t_{σ(k)}}(F_1 × ··· × F_k) = ν_{t_1,...,t_k}(F_{σ^{-1}(1)} × ··· × F_{σ^{-1}(k)})
(if we change the times, we change the order of the sets), where σ is any permutation of {1, ..., k}. (The permutations of e.g. {1, 2} are {1, 2} and {2, 1}.)
(ii) ν_{t_1,...,t_k}(F_1 × ··· × F_k) = ν_{t_1,...,t_k,t_{k+1},...,t_{k+m}}(F_1 × ··· × F_k × R × ··· × R)
(add m copies of R).
If these two conditions hold, there exists a probability space (Ω, F, P) and a stochastic process {X_t}_{t∈[0,∞)} such that
P(X_{t_1} ∈ F_1, ..., X_{t_k} ∈ F_k) = ν_{t_1,...,t_k}(F_1 × ··· × F_k).
Using Kolmogorov's extension theorem with k = 2 and a normal distribution, we can look at the distribution of the 2-dimensional BM. The proof of (ii) is easy, but very messy.
Properties of BM
We consider the BM B_t with B_0 = x. Then
P^x(B_t ∈ F) = ∫_F p(t, x, y) dy,
which is equivalent to saying B_t is a normally distributed random variable, B_t ∼ N(x, t) (expectation, variance). The more time progresses, the bigger the variations around x.
Increments
We study B_t − B_s, for t > s.
E[B_t − B_s] = 0.
E[(B_t − B_s)²] = E[B_t²] − 2E[B_t B_s] + E[B_s²] = t − 2E[B_t B_s] + s = t − 2s + s = t − s.
We will show in an exercise that E[B_t B_s] = s (the smaller index).
So B_t − B_s ∼ N(0, t − s), which means the increments are stationary (in time).
For disjoint time intervals, u ≤ v ≤ s ≤ t, the difference B_t − B_s is independent of B_v − B_u. We say BM has independent increments. We can see this in the cross-covariance:
E[(B_v − B_u)(B_t − B_s)] = E[B_v B_t] − E[B_v B_s] − E[B_u B_t] + E[B_u B_s] = v − v − u + u = 0.
In summary: Brownian motion has the following properties:
(1) Independent increments.
(2) Stationary increments.
(3) Normally distributed increments.
Properties (1) and (2) alone define a bigger class called Lévy processes.
We will assume the returns of a stock satisfy the three requirements. This is necessary if stock prices are to be modelled with BM.
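These increment properties are easy to check numerically. Below is a minimal simulation sketch (my own addition, not from the lecture; all parameter values are arbitrary) that samples B_s and the increment B_t − B_s directly and verifies the mean, variance, fourth moment, and the covariance rule E[B_t B_s] = s:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, t = 1_000_000, 0.4, 0.8

Bs = rng.normal(0.0, np.sqrt(s), n)         # B_s ~ N(0, s)
incr = rng.normal(0.0, np.sqrt(t - s), n)   # B_t - B_s ~ N(0, t-s), independent of B_s
Bt = Bs + incr

print(incr.mean(), incr.var())              # ~ 0 and ~ t - s = 0.4
print((incr**4).mean(), 3 * (t - s)**2)     # normality check: 4th moment = 3(t-s)^2
print(np.mean(Bt * Bs), s)                  # E[B_t B_s] = s, the smaller index
```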
Brownian motion as a function of time
We fix an ω ∈ Ω and consider the path t ↦ B_t(ω).
Is this path continuous?
Def 2.2.2 Copies of stochastic processes
Assume X_t and Y_t are two stochastic processes on (Ω, F, P). Then X_t is called a version/modification of Y_t if
P({ω | X_t(ω) = Y_t(ω)}) = 1 for all t.
Kolmogorov's continuity theorem
Suppose X_t is a stochastic process such that for all T > 0 there exist positive constants α, β, D such that
E[|X_t − X_s|^α] ≤ D|t − s|^{1+β}
(almost like Hölder continuity). Then there exists a modification Y_t of X_t which is continuous/has continuous paths.
Exercise: Show that, matching α = 4, β = 1, D = 3,
E[(B_t − B_s)^4] = 3|t − s|².
Brownian motion has continuous paths. However, the paths are nowhere differentiable (that is, almost every path is non-differentiable). The concept dB_t/dt does not make sense.
—————————————————————————
03/09-2010
Black & Scholes-model
We will primarily use a Geometric Brownian motion (GBM) as a model for
the price dynamics of financial assets
S_t = S_0 e^{μt + σB_t}
where t is time, B_t the BM, μ the drift and σ the diffusion (volatility).
• We define the return as
A_t = (S_{t+1} − S_t)/S_t = (S_0 e^{μ(t+1)+σB_{t+1}} − S_0 e^{μt+σB_t}) / (S_0 e^{μt+σB_t}) = e^{μ+σ(B_{t+1}−B_t)} − 1.
A_t is stationary, lognormal (shifted by −1) and independent.
• Another related concept is the log-return (logarithmic return):
A_t = ln(S_{t+1}/S_t) = μ + σ(B_{t+1} − B_t).
The log-return is stationary, independent and normal: A_t ∼ N(μ, σ²). We will sometimes use the log-returns instead of the returns.
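As a small illustration (my own sketch; μ, σ, S_0 are arbitrary), one can sample S_1 from the GBM model and compute both kinds of return on a unit time grid; the log-returns come out N(μ, σ²):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, S0, n = 0.05, 0.2, 100.0, 1_000_000

B1 = rng.normal(0.0, 1.0, n)                 # B_1 - B_0 ~ N(0, 1) on a unit grid
S1 = S0 * np.exp(mu * 1.0 + sigma * B1)      # S_1 = S_0 exp(mu*1 + sigma*B_1)

ret = S1 / S0 - 1.0                          # simple return: lognormal shifted by -1
logret = np.log(S1 / S0)                     # log-return: normal
print(logret.mean(), logret.var())           # ~ mu = 0.05, ~ sigma^2 = 0.04
print(ret.mean(), np.exp(mu + 0.5 * sigma**2) - 1.0)  # lognormal mean, shifted by -1
```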
Portfolios
We have a stock price S_t at time t and some trading strategy X_t, with GBM as the underlying model. Consider the expression X_{s_i}(S_{i+1} − S_i), where X_{s_i} is the position in the stock, S_i the stock price at time i, and X_{s_i}(S_{i+1} − S_i) the loss/gain from i to i + 1. The accumulated profit over the entire period is
Σ_{i=0}^{n} X_{s_i}(S_{i+1} − S_i) → ∫_0^t X_s dS_s   (n → ∞),
but we have not defined this integral yet. That will be our next step.
First we reduce this integral to one with respect to the BM:
∫_0^t X_s dB_s.
How do we integrate wrt BM? We will consult Oksendal.
Section 3.1 - Ito Integral
Defining ∫_0^t X_s dB_s is our goal, and first we need to understand the limit n → ∞ from the sum to the integral. When we know that, we will know which processes we can integrate.
Elementary Processes (Stochastic)
We define
φ_s(ω) = Σ_{i=0}^{n−1} e_i(ω) I_{[s_i, s_{i+1})}(s),
where each e_i(ω) is a given number for a fixed ω, i.e. a random variable. For a fixed path ω, φ_s(ω) is a piecewise constant (step) function of s (figure omitted).
Definition
We define the stochastic integral of an elementary process as the finite sum
∫_0^t φ_s dB_s = Σ_{i=0}^{n−1} e_i (B_{s_{i+1}} − B_{s_i}).
Definition 3.1.2
F_t is the σ-algebra generated by BM up to time t, that is, the smallest σ-algebra on Ω that contains all sets
{ω ∈ Ω | B_{s_1} ∈ F_1, B_{s_2} ∈ F_2, ..., B_{s_k} ∈ F_k}
with F_i ∈ B_R, i = 1, ..., k, over all choices s_1 < s_2 < ... < s_k ≤ t.
Example
k = 1, s_1 = t, F_1 = (0, ∞). In the figure (omitted), one path ω_1 ends below 0 and another ω_2 above, so
ω_1 ∉ {ω ∈ Ω | B_t ∈ (0, ∞)} ∋ ω_2.
The increasing family {F_t} is called a filtration: if s ≤ t, then F_s ⊆ F_t. This is used for option pricing dynamics and to define a class of integrable stochastic processes.
Definition 3.1.3
A stochastic process X_t is called F_t-adapted if
{ω ∈ Ω | X_t ∈ F} ∈ F_t   for all F ∈ B_R.
At time t, we don't look into the future, just the past. (X_t is F_t-measurable for all t.)
A non-example is the process X_t = B_{t+1}, which is not F_t-adapted since it takes values from the future. A simple example is X_t = B_t.
Now assume φ_s is an F_s-adapted elementary process. Then
E[∫_0^t φ_s dB_s] = Σ_{i=0}^{n−1} E[e_i (B_{s_{i+1}} − B_{s_i})] = Σ_{i=0}^{n−1} E[e_i] E[B_{s_{i+1}} − B_{s_i}] = 0,
where e_i is F_{s_i}-measurable while the increment B_{s_{i+1}} − B_{s_i} is independent of F_{s_i}; this independence lets the expectation factor. The expectation of a stochastic integral is thus always 0.
Ito Isometry
We assume that ∫_0^t E[φ_s²] ds < ∞. We will derive the Ito isometry.
E[(∫_0^t φ_s dB_s)²] = Σ_{i,j=0}^{n−1} E[e_i e_j (B_{s_{i+1}} − B_{s_i})(B_{s_{j+1}} − B_{s_j})].
We split the sum into the two cases i = j and i < j; the latter case is counted twice, by symmetry, so we can disregard the case j < i:
Σ_{i=0}^{n−1} E[e_i² (B_{s_{i+1}} − B_{s_i})²] + 2 Σ_{i<j} E[e_i e_j (B_{s_{i+1}} − B_{s_i})(B_{s_{j+1}} − B_{s_j})].
Since we have independent increments, the expectation in the second term can be split, so we get
E[e_i e_j (B_{s_{i+1}} − B_{s_i})] E[B_{s_{j+1}} − B_{s_j}] = 0.
The first term also factors by independence: E[e_i² (B_{s_{i+1}} − B_{s_i})²] = E[e_i²] E[(B_{s_{i+1}} − B_{s_i})²], and the second factor is just the variance of the increment of the BM, which is s_{i+1} − s_i. So we have
= Σ_{i=0}^{n−1} E[e_i²](s_{i+1} − s_i) + 0 = ∫_0^t E[φ_s²] ds.
In short, we have proved that
E[(∫_0^t φ_s dB_s)²] = ∫_0^t E[φ_s²] ds,
which is known as the important Ito isometry property.
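A quick Monte Carlo sanity check of the isometry (my own sketch, not from the lecture), with integrand φ_s = B_s, for which ∫_0^t E[B_s²] ds = t²/2. The integral is approximated by the elementary-process sum with left endpoints, which is exactly what keeps the integrand adapted:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, t = 100_000, 200, 1.0
dt = t / n_steps

B = np.zeros(n_paths)     # B_{s_i}, the left endpoint of the current interval
I = np.zeros(n_paths)     # running sum of e_i (B_{s_{i+1}} - B_{s_i}) with e_i = B_{s_i}
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    I += B * dB           # left endpoint: the integrand is F_{s_i}-measurable
    B += dB
print(np.mean(I))         # ~ 0: expectation of an Ito integral
print(np.mean(I**2))      # ~ int_0^t E[B_s^2] ds = t^2/2 = 0.5
```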
Definition 3.1.4 - Ito-integrable processes
Let X_s be a stochastic process with the properties
(a) X_s is F_s-adapted for s ≤ t,
(b) ∫_0^t E[X_s²] ds < ∞ (a finiteness restriction),
(c) (s, ω) ↦ X_s(ω) is B_R × F-measurable. (This property is a mathematical formality and we assume it is always true in this course.)
Then X_s is Ito-integrable on [0, t].
Example
Take X_s = B_s, the BM itself. We verify the first two properties.
(a) Immediately verified: B_s is adapted.
(b) ∫_0^t E[B_s²] ds = ∫_0^t s ds = ½t² < ∞.
Both properties are verified: BM is Ito-integrable.
In functional analysis we could say that the Ito isometry gives an isometry between two L²-spaces.
Claim
There exists a sequence of Ito-integrable elementary processes {φ_s^n}_{n=1}^∞ such that
∫_0^t E[(X_s − φ_s^n)²] ds → 0 as n → ∞
for a given Ito-integrable process X_s. (Proved in Oksendal.)
This result says that any Ito-integrable stochastic process can be approximated by elementary processes.
The Ito isometry gives a mapping
E[(∫_0^t φ_s dB_s)²] = ∫_0^t E[φ_s²] ds,
which is an equality between norms in L²(P) and in L²(P × ds).
Both these spaces are Hilbert spaces, so all Cauchy sequences have a limit.
To show that the approximating sequence converges, we take two approximations and show that they are arbitrarily close to each other. Consider
∫_0^t E[(φ_s^n − φ_s^m)²] ds.
Pointwise,
(φ_s^n − φ_s^m)² = (φ_s^n − X_s + X_s − φ_s^m)² ≤ 2(φ_s^n − X_s)² + 2(X_s − φ_s^m)²,
so
∫_0^t E[(φ_s^n − φ_s^m)²] ds ≤ 2 ∫_0^t E[(φ_s^n − X_s)²] ds + 2 ∫_0^t E[(X_s − φ_s^m)²] ds → 0
as m, n tend to ∞, by the claim.
We now turn to the integrals of these elementary processes:
E[(∫_0^t φ_s^n dB_s − ∫_0^t φ_s^m dB_s)²].
The difference between two elementary processes is again an elementary process. After collecting into one integral, we apply the Ito isometry:
E[(∫_0^t (φ_s^n − φ_s^m) dB_s)²] = ∫_0^t E[(φ_s^n − φ_s^m)²] ds → 0 as n, m → ∞.
Thus {∫_0^t φ_s^n dB_s}_{n=1}^∞ is a Cauchy sequence of random variables in L²(P).
Side remark: If Y_n is a sequence of random variables converging to Y in variance, E[(Y_n − Y)²] → 0 as n → ∞, then Y_n is a Cauchy sequence: E[|Y_n − Y_m|²] → 0 as n, m → ∞.
We have a Cauchy sequence. Does it converge to something? Yes! L²(P) is a Hilbert space, hence complete, and all Cauchy sequences in complete spaces have a limit.
Conclusion: There exists a random variable with finite variance such that
lim_{n→∞} ∫_0^t φ_s^n dB_s = ∫_0^t X_s dB_s,
and we have verified the existence of stochastic integrals. (We have no explicit construction.)
—————————————————————————
10/09-2010
Conditional Expectation & Martingales
Given (Ω, F, P) and s > t, we will study conditions under which
E[X_s | knowledge of X up to time t] = X_t,
where "knowledge of X up to time t" is encoded in the σ-algebra F_t.
Appendix B (Oksendal) B.1
Let H be a σ-algebra on Ω, where H ⊆ F. Assume X is a random variable such that E[|X|] < ∞. Then E[X|H] is a random variable such that
(1) E[X|H] is H-measurable,
(2) E[E[X|H] | H] = E[X|H] for all events H ∈ H.
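On a finite Ω the conditional expectation is just a per-event average, which makes the definition concrete. A minimal sketch (my own, with hypothetical numbers): Ω has four equally likely outcomes and H is generated by the partition {{ω_0, ω_1}, {ω_2, ω_3}}:

```python
import numpy as np

# Omega = {w0, w1, w2, w3}, all equally likely; X is a random variable on Omega.
p = np.full(4, 0.25)
X = np.array([1.0, 3.0, 2.0, 6.0])

# H is the sigma-algebra generated by the partition {w0, w1} | {w2, w3}.
partition = [np.array([0, 1]), np.array([2, 3])]

# E[X|H] is constant on each block of the partition: E[X I_H] / P(H).
EXH = np.empty(4)
for block in partition:
    EXH[block] = (X[block] * p[block]).sum() / p[block].sum()

print(EXH)                              # [2. 2. 4. 4.] : H-measurable (property 1)
print((EXH * p).sum(), (X * p).sum())   # both 3.0      : E[E[X|H]] = E[X]
```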
Existence and Uniqueness
First we write down the conditional expectation given a single event H (with P(H) > 0):
E[X|H] = E[X I_H] / P(H).
We define, for all H ∈ H,
Q(H) = E[X|H] P(H) = E[X I_H] = ∫_H X(ω) dP(ω).
Q is a (signed) measure on (Ω, H), being absolutely continuous wrt P.
Measure Theory: Radon-Nikodym Theorem
There exists a unique H-measurable Z such that
Q(H) = ∫_H Z(ω) dP(ω).
(Note: Z ≠ X in general, because X is only F-measurable.) We define E[X|H] := Z. From definition B.1, property 1) is obviously okay. For 2):
E[E[X|H] | H] = E[Z|H] = E[Z I_H]/P(H) = (∫_H Z dP)/P(H) = (∫_H X dP)/P(H) = E[X|H].
Theorem B.2 - Properties of Conditional Expectation
• (a) Linearity:
E[aX + bY |H] = aE[X|H] + bE[Y |H].
• (b)
E[E[X|H]] = E[X].
Proof
Using B.1 property 2) with H = Ω.
• (c) If X is H-measurable, then
E[X|H] = X.
Proof: By construction of the conditional expectation (Z and X must coincide on H-sets).
• (d) If X is independent of H, then E[X|H] = E[X]: no longer a random variable, but a constant.
Proof
First we establish what independence means here. X is said to be independent of H when (just like P(A ∩ B) = P(A)P(B)) the event
X^{-1}(A) = {ω ∈ Ω : X(ω) ∈ A},   A ∈ B_R,
is independent of every H ∈ H (for all A's). We check that the constant E[X] satisfies the properties of B.1:
(1) is okay, since a constant is measurable wrt any σ-algebra (both ∅ and Ω are included).
(2) By independence,
E[X I_H] = E[X] E[I_H] = E[X] P(H),
which means
E[X|H] = E[X I_H]/P(H) = E[X],
and so
E[E[X|H] | H] = E[E[X] | H] = E[X].
• (e) (Important property). If Y is H-measurable, then
E[Y · X|H] = Y E[X|H].
We can factor out the measurable variables.
Proof
By arguing as in «Ito integration», we consider Y = I_H, H ∈ H (and afterwards we show it for Y = Σ_j e_j I_{H_j} and approximate).
(1) E[X|H] is H-measurable by definition, Y is H-measurable by assumption, and the product is then also H-measurable.
(2) For G ∈ H:
E[Y · E[X|H] | G] = E[I_H · E[X|H] | G] = E[I_H · E[X|H] · I_G]/P(G)
= E[E[X|H] · I_{H∩G}]/P(G) = E[X · I_{H∩G}]/P(G) = E[I_H X · I_G]/P(G) = E[Y X|G].
Definition 3.2.2
A stochastic process X_t is a martingale wrt F_t if
(1) X_t is F_t-adapted,
(2) E[|X_t|] < ∞ for all t ≥ 0 (or t ∈ [0, T]),
(3) Martingale property (very important): for all s ≥ t, E[X_s|F_t] = X_t. This property is the only mathematical fact we need to know to price options, in addition to a few financial assumptions.
Example
Brownian motion is a martingale. We verify this by checking the 3 properties.
(1) B_t is F_t-adapted.
(2) E[|B_t|] < ∞, since B_t ∼ N(0, t).
(3) For s ≥ t, using (a) and (c):
E[B_s|F_t] = E[B_s − B_t + B_t|F_t] = E[B_s − B_t|F_t] + E[B_t|F_t] = E[B_s − B_t|F_t] + B_t.
The independent-increments property of BM implies that B_s − B_t is independent of B_u for all u ≤ t, which again implies that B_s − B_t is independent of F_t. By (d),
= E[B_s − B_t] + B_t = B_t.
Consequence - Expectation of a martingale
Let X_t be a martingale (M.p = martingale property). By (b) and the M.p,
E[X_t] = E[E[X_t|F_0]] = E[X_0].
The expectation always equals its value at time 0, which means the expectation is constant.
Are Ito integrals martingales?
Before we answer this we require a «double conditional» property of
conditional expectation.
Theorem B.3
If G ⊆ H, where G and H are σ-algebras, then
E[E[X|H] | G] = E[X|G].
(G has less information, i.e. it is a coarser filter.) This is known as the «Tower property». The proof is left to the reader.
Now, returning to the question, we define
M_t = ∫_0^t X_s dB_s,
where X_s is Ito-integrable. Is M_t an F_t-martingale? We restrict ourselves to elementary processes:
X_u = φ_n(u) = Σ_j e_j I_{(s_j, s_{j+1}]}(u).
For s ≥ t,
E[M_s|F_t] = E[∫_0^s φ_n(u) dB_u | F_t].
Using s ≥ t, linearity of integrals and then linearity of conditional expectations,
= E[∫_0^t φ_n(u) dB_u | F_t] + E[∫_t^s φ_n(u) dB_u | F_t].
By property c) of conditional expectations, the first term is the F_t-measurable variable M_t:
= M_t + E[Σ_{s_j ≥ t} e_j (B_{s_{j+1}} − B_{s_j}) | F_t].
Now we use the double conditioning. For s_j ≥ t, F_{s_j} ⊇ F_t, because the filtration is increasing:
= M_t + E[ E[ Σ_{s_j ≥ t} e_j (B_{s_{j+1}} − B_{s_j}) | F_{s_j} ] | F_t ].
By property e), the F_{s_j}-measurable e_j can be factored out:
= M_t + E[ Σ_{s_j ≥ t} e_j E[B_{s_{j+1}} − B_{s_j} | F_{s_j}] | F_t ].
By independence, E[B_{s_{j+1}} − B_{s_j} | F_{s_j}] = E[B_{s_{j+1}} − B_{s_j}] = 0, so
= M_t.
The opposite is also true: martingales (with finite variance) are also Ito integrals, which is known as the martingale representation theorem. There exist martingales that cannot be represented as Ito integrals.
Martingales are important to us because, for M_t = ∫_0^t X_s dB_s, the stochastic process X_s can be a hedging strategy. We can prove its existence.
—————————————————————————
17/09-2010
Ito's Formula
To motivate the formula, we begin by considering the chain rule from classical analysis:
d f(g(t))/dt = f′(g(t)) g′(t).
What would this look like if g(t) = B_t? It wouldn't exist, because the derivative of BM, g′(t), doesn't exist. However, the BM does have an integral, so we can move things around and get an alternative expression:
f(g(t)) = f(g(0)) + ∫_0^t f′(g(s)) g′(s) ds,
with g(0) = B_0 = 0. If we read g′(s) ds as dg(s), we now ask ourselves whether
f(B_t) = f(B_0) + ∫_0^t f′(B_s) dB_s.
This turns out not to be the case.
Counterexample
As we showed from the definition,
∫_0^t B_s dB_s = ½B_t² − ½t.
So if we let f(x) = ½x², the naive chain rule above would give
∫_0^t f′(B_s) dB_s = f(B_t),
but this misses the "correction term" −½t.
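The missing correction term is visible in simulation. A sketch of mine (arbitrary grid): compute the left-endpoint sum Σ B_{s_i}(B_{s_{i+1}} − B_{s_i}) on a fine grid and compare it path by path with ½B_t² and with ½B_t² − ½t:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, t = 100_000, 500, 1.0
dt = t / n_steps

B = np.zeros(n_paths)
I = np.zeros(n_paths)                 # running sum B_{s_i}(B_{s_{i+1}} - B_{s_i})
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    I += B * dB
    B += dB

print(np.mean((I - 0.5 * B**2)**2))              # NOT small: naive chain rule fails
print(np.mean((I - (0.5 * B**2 - 0.5 * t))**2))  # ~ 0: Ito's correction term -t/2
```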
Ito's formula
We will look at the formula for f(t, X_t), where X_t is some Ito process.
Def 4.1.1
An Ito process X(t) is defined as
X(t) = X(0) + ∫_0^t u(s) ds + ∫_0^t v(s) dB(s),
where v(t) is Ito-integrable and u(t) is F_t-adapted with ∫_0^t |u(s)| ds < ∞ (such a process is also known as a semimartingale). We note that the Ito process is F_t-adapted.
Additional note: in STK4510 we only impose that v(t) be Ito-integrable, that is E[∫_0^t v²(s) ds] < ∞, but this condition can be weakened to P(∫_0^t v²(s) ds < ∞) = 1; we won't need this.
Example
For X_t = ½B_t², is this an Ito process? Using our direct calculations from earlier,
X_t = ½B_t² = ∫_0^t ½ ds + ∫_0^t B_s dB_s.
This is the required form, so X_t is an Ito process.
Example
X_t = B_t. Is Brownian motion an Ito process? Yes, we simply set
X_t = B_t = ∫_0^t 1 dB_s
(with u(s) = 0, v(s) = 1).
Notation
Instead of writing an Ito process in integral form, we usually just write the differential form
dX_t = u_t dt + v_t dB_t.
This is equivalent to
X_t = X_0 + ∫_0^t u_s ds + ∫_0^t v_s dB_s.
Heuristic interpretation: dX_t = X_{t+dt} − X_t, the infinitesimal change.
Theorem 4.1.2
Let X_t be an Ito process,
dX_t = u_t dt + v_t dB_t,
and f ∈ C^{1,2} (which means f is once differentiable wrt time t and twice differentiable wrt space x). Then f(t, X_t) is again an Ito process, where
df(t, X_t) = ∂f/∂t (t, X_t) dt + ∂f/∂x (t, X_t) dX_t + ½ ∂²f/∂x² (t, X_t) (dX_t)².
Ignoring the second-derivative term, we see we have the classical chain rule. For dX_t and (dX_t)² we just substitute the original Ito process.
For these kinds of calculations we have some very important rules:
(dt)² = dt dB_t = dB_t dt = 0,   (dB_t)² = dt.
When we insert the expressions for dX_t and (dX_t)², we get
df(t, X_t) = ∂f/∂t (t, X_t) dt + ∂f/∂x (t, X_t)(u_t dt + v_t dB_t) + ½ ∂²f/∂x² (t, X_t)(u_t dt + v_t dB_t)².
Using the calculation rules above, we can shorten the last term:
(u_t dt + v_t dB_t)² = u_t²(dt)² + v_t²(dB_t)² + 2u_t v_t dt dB_t = 0 + v_t² dt + 0 = v_t² dt.
We collect the dt-terms and the dB_t-terms:
df(t, X_t) = (∂f/∂t + u_t ∂f/∂x + ½ v_t² ∂²f/∂x²) dt + v_t ∂f/∂x dB_t.
A small remark: we do not know if the term v_t ∂f/∂x is Ito-integrable; however, it is integrable in the weaker sense that we are "ignoring".
In finance we think of f as the possible payoff; then v_t ∂f/∂x becomes the hedging strategy, and the dt-term will give the price.
Example
We will look at a process we have already worked with,
∫_0^t B_s dB_s,
which we know equals ½B_t² − ½t. We want to use Ito's formula this time. How do we use the chain rule and make an educated guess, like we do in the classical integral?
We guess f(t, x) = x² and use X_t = B_t, so dX_t = dB_t:
∂f/∂t = 0,   ∂f/∂x = 2x,   ∂²f/∂x² = 2.
Inserting into Ito's formula,
df(t, X_t) = 0 + 2B_t dB_t + ½ · 2 · (dB_t)² = dt + 2B_t dB_t.
Integral form:
B_t² = ∫_0^t 1 ds + ∫_0^t 2B_s dB_s = t + 2 ∫_0^t B_s dB_s.
On the far right we have the exact integral we are looking for, so shifting terms we get
∫_0^t B_s dB_s = ½B_t² − ½t.
Proof - Sketch of the proof of Ito's formula
We only prove this for the 2-dimensional version (f a function of t and x), and for X_t = B_t. We have
df(t, B_t) = ∂f/∂t (t, B_t) dt + ∂f/∂x (t, B_t) dB_t + ½ ∂²f/∂x² (t, B_t) dt.
Integral form:
f(t, B_t) = f(0, B_0) + ∫_0^t ∂f/∂t (s, B_s) ds + ∫_0^t ∂f/∂x (s, B_s) dB_s + ½ ∫_0^t ∂²f/∂x² (s, B_s) ds.
We are going to use Taylor's formula, and begin by observing how we can write f(t, B_t) as a (telescoping) sum:
f(t, B_t) = f(0, B_0) + Σ_j [f(t_{j+1}, B_{t_{j+1}}) − f(t_j, B_{t_j})]
for some partition 0 = t_0 < t_1 < ... < t_{n−1} < t_n = t. We Taylor expand each term in this sum around the point (t_j, B_{t_j}):
f(t, B_t) = f(0, B_0) + Σ_{j=0}^{n−1} ∂f/∂t Δt_j + Σ_{j=0}^{n−1} ∂f/∂x ΔB_{t_j}
+ ½ Σ_{j=0}^{n−1} ∂²f/∂t² (Δt_j)² + Σ_{j=0}^{n−1} ∂²f/∂t∂x Δt_j ΔB_{t_j} + ½ Σ_{j=0}^{n−1} ∂²f/∂x² (ΔB_{t_j})² + Σ_{j=0}^{n−1} R_j,
where the derivatives are evaluated at (t_j, B_{t_j}) and the R_j's are remainder terms of higher order in Δt_j and ΔB_{t_j}.
We work with the assumption f ∈ C_b^{1,2}, that is, f (and its derivatives) are bounded. Taking the limit,
Σ_{j=0}^{n−1} ∂f/∂t Δt_j → ∫_0^t ∂f/∂t ds   (n → ∞),
Σ_{j=0}^{n−1} ∂f/∂x ΔB_{t_j} → ∫_0^t ∂f/∂x dB_s.
Both of these sums also converge in variance. Moving on to the other terms, we first check the variance of the mixed sum in the Taylor expansion:
E[(Σ_{j=0}^{n−1} ∂²f/∂t∂x Δt_j ΔB_{t_j})²].
By squaring the sum, we get a double sum. By independent increments, most terms are zero, except when a term is multiplied by itself. We can rewrite this as
= Σ_{j=0}^{n−1} E[(∂²f/∂t∂x)² (Δt_j)² (ΔB_{t_j})²]
(to shorten the notation, ∂²f/∂t∂x = ∂²f/∂t∂x(t_j, B_{t_j})). Now, by independent increments and adaptedness of ∂²f/∂t∂x,
= Σ_{j=0}^{n−1} (Δt_j)² E[(∂²f/∂t∂x)²] Δt_j ≤ max_j (Δt_j)² · Σ_{j=0}^{n−1} E[(∂²f/∂t∂x)²] Δt_j.
As we let n → ∞, we get
Σ_{j=0}^{n−1} E[(∂²f/∂t∂x)²] Δt_j → ∫_0^t E[(∂²f/∂t∂x)²] ds,
but then we also get Δt_j → 0, and so max_j (Δt_j)² → 0. Thus
Σ_{j=0}^{n−1} ∂²f/∂t∂x Δt_j ΔB_{t_j} → 0   (n → ∞, in variance).
Since Δt_j → 0, then so does (Δt_j)², so we can immediately conclude that
½ Σ_{j=0}^{n−1} ∂²f/∂t² (Δt_j)² → 0   (n → ∞).
By this reasoning, all the remainder terms also tend to 0, since they all contain factors (Δt_j)² or Δt_j ΔB_{t_j}:
Σ_{j=0}^{n−1} R_j → 0   (n → ∞).
Now we only have to consider the last sum. From the properties of the integral,
Σ_{j=0}^{n−1} ∂²f/∂x² Δt_j → ∫_0^t ∂²f/∂x² ds   (n → ∞).
But instead of showing directly that the last sum in the Taylor expansion converges to this integral, we show that the difference between the two sums becomes arbitrarily small. We define
a(t) = ∂²f/∂x²(t, B_t),   a_j = ∂²f/∂x²(t_j, B_{t_j}).
By the triangle inequality,
|∫_0^t a(s) ds − Σ_{j=0}^{n−1} a_j (ΔB_{t_j})²| ≤ |∫_0^t a(s) ds − Σ_{j=0}^{n−1} a_j Δt_j| + |Σ_{j=0}^{n−1} a_j Δt_j − Σ_{j=0}^{n−1} a_j (ΔB_{t_j})²|.
Since the first term tends to 0 by the definition of the integral, it is sufficient to show that
Σ_{j=0}^{n−1} a_j ((ΔB_{t_j})² − Δt_j) → 0   (n → ∞)
in variance. Taking the expectation of the expression squared,
E[(Σ_{j=0}^{n−1} a_j ((ΔB_{t_j})² − Δt_j))²] = Σ_{i,j=0}^{n−1} E[a_i a_j ((ΔB_{t_i})² − Δt_i)((ΔB_{t_j})² − Δt_j)].
If i ≠ j we use the independent-increments property; for i < j we get
E[a_i a_j ((ΔB_{t_i})² − Δt_i)] · E[(ΔB_{t_j})² − Δt_j] = 0,
since the last factor is 0,
so if i ≠ j we get 0 contribution (the exact same argument applies for j < i). We only need i = j, so
Σ_{j=0}^{n−1} E[a_j² ((ΔB_{t_j})² − Δt_j)²] = Σ_{j=0}^{n−1} E[a_j²] E[((ΔB_{t_j})² − Δt_j)²]
= Σ_{j=0}^{n−1} E[a_j²] ( E[(ΔB_{t_j})⁴] − 2Δt_j E[(ΔB_{t_j})²] + (Δt_j)² ),
with E[(ΔB_{t_j})⁴] = 3(Δt_j)² and E[(ΔB_{t_j})²] = Δt_j. So in the parenthesis we get
3(Δt_j)² − 2(Δt_j)² + (Δt_j)² = 2(Δt_j)²,
so we have
= 2 Σ_{j=0}^{n−1} E[a_j²] (Δt_j)² ≤ 2 max_{0≤j≤n−1} Δt_j · Σ_{j=0}^{n−1} E[a_j²] Δt_j,
and when we let n → ∞, max_j Δt_j → 0 while Σ_j E[a_j²] Δt_j → ∫_0^t E[a(s)²] ds, which is finite. Hence the expression goes to zero and the proof is concluded.
—————————————————————————
24/09-2010
Pricing and Hedging Options
Setup: We have a market consisting of
stock: dS_t = αS_t dt + σS_t dB_t,
bond: dR_t = rR_t dt,
option: dP_t = ?
In the stock, α is called the drift and σ the volatility.
The option gives the holder a payoff f(S_T) at exercise time T, for some “nice”
function f . An example is a call option f (x) = max(x − K, 0). Note: we have
PT = f (ST ).
Portfolios consist of stocks, bonds and options. (Also called a strategy).
a_t: number of stocks at time t.
b_t: amount of money in the bank.
c_t: number of options at time t.
All three are F_t-adapted. At time t the value of the portfolio is given by
V_t = a_t S_t + b_t R_t + c_t P_t.
Definition
The process V_t is self-financing if
dV_t = a_t dS_t + b_t dR_t + c_t dP_t.
Interpretation: no withdrawal or insertion of money from/into the portfolio.
Definition
A self-financing portfolio Vt is called an arbitrage possibility if V0 = 0, VT ≥ 0
and P (VT ≤ 0) = 0. In other words: we do nothing and earn something for
sure.
Now we can find the price of the option, dP_t.
Black, Scholes and Merton ∼1970.
Assumptions
(1) There exists a self-financing strategy (a, b, 0) such that V_T = f(S_T).
(2) The price of the option, P_t = C(t, S_t), can be expressed as an (unknown) function C of time t and stock price S_t.
Now, to avoid any possibility of arbitrage, we must have V_t = P_t. Using this and assumption (2), we apply Ito's formula and get
dP_t = dV_t = C_t dt + C_x dS_t + ½C_xx (dS_t)²
= (C_t + αS_t C_x + ½σ²S_t² C_xx) dt + σC_x S_t dB_t.   (1)
This is the differential form of the price of the portfolio (and the option price). We can go further using the assumption that V_t is self-financing. From assumption (1),
dV_t = dP_t = a_t dS_t + b_t dR_t = a_t(αS_t dt + σS_t dB_t) + b_t rR_t dt
= (a_t αS_t + b_t rR_t) dt + a_t σS_t dB_t.   (2)
Setting equations (1) and (2) equal to each other, the dB_t-terms must be equal (if they are not, we can scale one against the other), and σ and S_t cancel out, so we have the equality
a_t = C_x(t, S_t)   (delta hedging).
(When we look at the option price C and ask how sensitive it is to the underlying stock price, we differentiate wrt x: C_x. This is delta hedging.)
Bond position
Since we have no arbitrage, we use the definition
V_t = P_t = a_t S_t + b_t R_t.
Solving this for b_t (with a_t = C_x and V_t = C(t, S_t)) we get
b_t = (C(t, S_t) − S_t C_x(t, S_t)) / R_t.   (3)
We can now properly compare the dt-terms in equations (1) and (2):
C_t + ½σ²S_t² C_xx = rR_t b_t,
and, substituting b_t as in equation (3),
C_t + ½σ²S_t² C_xx = rC − rS_t C_x,
so we get
C_t + rS_t C_x + ½σ²S_t² C_xx = rC.
With the boundary condition we have a complete PDE, often called the B&S PDE:
C_t(t, x) + rxC_x(t, x) + ½σ²x²C_xx(t, x) = rC(t, x),
C(T, x) = f(x),   x ≥ 0, 0 ≤ t ≤ T.
We notice that in the PDE there is no dependence on the drift α, so only
the uncertainty matters. The PDE above was solved by Fourier analysis. We
will briefly show how they solved it.
Recall ex. 5
u(t, x) = E[f(x + B(t))] ⟹ ∂u/∂t = ½ ∂²u/∂x²,   u(0, x) = f(x).
With this result in mind, we consider
u(t, x) = E[f(B_T)|B_t = x].
We have the BM starting in x at time t. The BM B_T is then normally distributed with expectation x and variance T − t, so we write B_T | B_t = x ∼ N(x, T − t). Hence we have the normal density function
u(t, x) = (1/√(2π(T − t))) ∫_R f(y) e^{−(y−x)²/(2(T−t))} dy.
We have used a convolution product between f and the Gaussian kernel (functional analysis).
Following the exercise 5 computations, we have the well-known (backward) heat equation
∂u/∂t + ½ ∂²u/∂x² = 0,   u(T, x) = f(x).
It turns out that u solves this PDE. We now modify u so we get the right side of the B&S PDE:
v(t, x) = e^{−r(T−t)} u(t, x),   v(T, x) = u(T, x) = f(x).
The derivatives wrt x of v are just e^{−r(T−t)} times those of u. For t:
∂v/∂t = rv(t, x) + e^{−r(T−t)} ∂u/∂t = rv(t, x) + e^{−r(T−t)}(−½ ∂²u/∂x²) = rv(t, x) − ½ ∂²v/∂x².
So
∂v/∂t + ½ ∂²v/∂x² = rv,   v(T, x) = f(x).
This PDE is solved by
v(t, x) = e^{−r(T−t)} E[f(B_T)|B_t = x].
(This was solved by Bachelier, 1900.) We still lack the terms rxC_x and ½σ²x²C_xx.
We replace the BM with another process, so that the density function solves the full PDE. We consider a process Z_s instead of B_s, with
dZ_s = rZ_s ds + σZ_s dB_s,   s ≥ t,
where we can see the close analogy to the normal stock price process. We assume Z_t = x (and we note that Z_s is a GBM with drift r and volatility σ).
Claim (which is left to the student to show):
C(t, x) = e^{−r(T−t)} E[f(Z_T)|Z_t = x].
Before we continue we will solve exercise 2.
Exercise 2
S_t = S_0 e^{μt+σB_t}.
(a) Ito's formula gives
dS_t = (μ + ½σ²)S_t dt + σS_t dB_t.
(b) GBM is usually written
dS_t = αS_t dt + σS_t dB_t.
This is an equation where S_t is unknown. Integral form:
S_t = S_0 + ∫_0^t αS_u du + ∫_0^t σS_u dB_u.
This is a (very specific) stochastic differential equation (SDE).
Is S_t = S_0 e^{αt+σB_t} a solution? NO!
Is
S_t = S_0 e^{(α−½σ²)t+σB_t}
a solution? YES. (We note that the expected log-return is α − ½σ².)
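One can also see numerically why the −½σ² belongs in the exponent. A sketch of mine (arbitrary parameters): integrate the SDE with the Euler–Maruyama scheme along one Brownian path and compare the endpoint with both candidate solutions driven by the same path:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, sigma, S0, T, n = 0.1, 0.3, 1.0, 1.0, 100_000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), n)
BT = dB.sum()                                   # B_T for this path

# Euler-Maruyama for dS = alpha*S dt + sigma*S dB: S_{k+1} = S_k (1 + alpha dt + sigma dB_k).
S_em = S0 * np.prod(1.0 + alpha * dt + sigma * dB)

no_corr = S0 * np.exp(alpha * T + sigma * BT)                        # candidate: NO
with_corr = S0 * np.exp((alpha - 0.5 * sigma**2) * T + sigma * BT)   # candidate: YES
print(S_em, with_corr, no_corr)   # S_em matches with_corr, not no_corr
```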
Drawing analogies to this exercise, we use Ito's formula on Z_s, recalling that Z_t = x:
Z_s = x e^{(r−½σ²)(s−t)+σ(B_s−B_t)}.
We can write, or define, σ(B_s − B_t) as σB_s^{t,0}, which means the BM at time s, starting in 0 at time t (figure omitted). We now look at
ln Z_s = ln x + (r − ½σ²)(s − t) + σB_s^{t,0}.
At time T we have
ln Z_T = ln x + (r − ½σ²)(T − t) + σB_T^{t,0}.
We have
ln Z_T | Z_t = x ∼ N(ln x + (r − ½σ²)(T − t), σ²(T − t)),
and we see that the logarithm of Z_T is normally distributed, which means Z_T is lognormally distributed.
C(t, x) = e^{−r(T−t)} E[f(e^{ln Z_T})|Z_t = x].
Using what we know about ln Z_T:
= e^{−r(T−t)} ∫_{−∞}^{∞} f(e^y) (1/√(2πσ²(T − t))) exp( −(y − ln x − (r − ½σ²)(T − t))² / (2σ²(T − t)) ) dy.
This should satisfy the conditions in the B&S PDE. The B&S formula was derived using this PDE and f(x) = max(x − K, 0) in the expression for C(t, x).
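Since ln Z_T is normal, the representation C(t, x) = e^{−r(T−t)} E[f(Z_T)|Z_t = x] is easy to evaluate by Monte Carlo. A sketch of mine for the call payoff f(x) = max(x − K, 0) (all parameter values arbitrary), compared against the closed-form B&S call price:

```python
import numpy as np
from math import erf, exp, log, sqrt

r, sigma, x, K, tau = 0.05, 0.2, 100.0, 95.0, 1.0    # tau = T - t

# Monte Carlo: ln Z_T | Z_t = x ~ N(ln x + (r - sigma^2/2) tau, sigma^2 tau).
rng = np.random.default_rng(5)
lnZT = log(x) + (r - 0.5 * sigma**2) * tau \
       + sigma * sqrt(tau) * rng.normal(size=2_000_000)
mc = exp(-r * tau) * np.maximum(np.exp(lnZT) - K, 0.0).mean()

# Closed-form Black-Scholes call price for comparison.
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))     # standard normal cdf
d1 = (log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
d2 = d1 - sigma * sqrt(tau)
bs = x * Phi(d1) - K * exp(-r * tau) * Phi(d2)
print(mc, bs)        # agree to roughly 2-3 decimals
```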
—————————————————————————
15/10-2010
Exercises
2.1
Calculate the expectation of S_t given in (2.1):
E[S_t] = E[S_0 e^{μt+σB_t}] = S_0 e^{μt} E[e^{σB_t}] = S_0 e^{μt} e^{½σ²t} = S_0 e^{(μ+½σ²)t},
where we used the formula
E[e^{αB_t}] = e^{½α²t}.
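The formula E[e^{αB_t}] = e^{½α²t} (the normal mgf, cf. Exercise 1 below) is quickly confirmed by sampling; a small sketch of mine with arbitrary α and t:

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, t = 0.7, 2.0
Bt = rng.normal(0.0, np.sqrt(t), 2_000_000)    # B_t ~ N(0, t)
print(np.exp(alpha * Bt).mean())               # ~ exp(alpha^2 t / 2)
print(np.exp(0.5 * alpha**2 * t))              # = 1.6323...
```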
3.1
We are going to show that
E[(Σ_{i=1}^n X_{s_i}(B_{s_{i+1}} − B_{s_i}))²] = Σ_{i=1}^n E[X_{s_i}²](s_{i+1} − s_i).
Writing out the sum on the left side,
E[(X_{s_1}(B_{s_2} − B_{s_1}) + ··· + X_{s_n}(B_{s_{n+1}} − B_{s_n})) · (X_{s_1}(B_{s_2} − B_{s_1}) + ··· + X_{s_n}(B_{s_{n+1}} − B_{s_n}))].
When we multiply the two expressions, we get a new set of sums, and because of the linearity of the expectation we can distribute it over the sums. There are two different cases we must consider. When the subscripts are equal we get
E[X_{s_i}(B_{s_{i+1}} − B_{s_i}) · X_{s_i}(B_{s_{i+1}} − B_{s_i})] = E[X_{s_i}²(B_{s_{i+1}} − B_{s_i})²],
and using independence and the normal variance of a Brownian increment,
= E[X_{s_i}²] E[(B_{s_{i+1}} − B_{s_i})²] = E[X_{s_i}²](s_{i+1} − s_i).
The other case is when they are not equal. For j > i,
E[X_{s_i}(B_{s_{i+1}} − B_{s_i}) · X_{s_j}(B_{s_{j+1}} − B_{s_j})].
Using that B_{s_{j+1}} − B_{s_j} is independent from the rest,
E[X_{s_i}(B_{s_{i+1}} − B_{s_i}) X_{s_j}] · E[B_{s_{j+1}} − B_{s_j}] = 0.
In short, we only have to pay attention to the terms where the subscripts are equal (the squared terms), and when we take the sum of all of them we are done:
E[(Σ_{i=1}^n X_{s_i}(B_{s_{i+1}} − B_{s_i}))²] = Σ_{i=1}^n E[X_{s_i}²](s_{i+1} − s_i).
3.2
We are going to verify the expectation and variance of an Ito integral. By definition of the Ito integral,
E[∫_0^t X_s dB_s] = lim_{n→∞} Σ_{i=1}^{n−1} E[X_{s_i}(B_{s_{i+1}} − B_{s_i})] = lim_{n→∞} Σ_{i=1}^{n−1} E[X_{s_i}] E[B_{s_{i+1}} − B_{s_i}] = 0.
Var(∫_0^t X_s dB_s) = E[(∫_0^t X_s dB_s)²] = E[(lim_{n→∞} Σ_{i=1}^{n−1} X_{s_i}(B_{s_{i+1}} − B_{s_i}))²].
Using the result from 3.1,
= lim_{n→∞} Σ_{i=1}^{n} E[X_{s_i}²](s_{i+1} − s_i) → ∫_0^t E[X_s²] ds.
As an alternative, we could just have used the Ito isometry directly.
3.3
As X_s and Y_s are Ito-integrable processes, they are both adapted and the squared processes are finite in L². We must show that aX_s + bY_s is adapted and that ∫_0^t E[(aX_s + bY_s)²] ds < ∞.
A linear combination of two adapted processes yields a new adapted process, so the first property follows directly.
If x ≥ y, then
(x + y)² = x² + 2xy + y² ≤ x² + 2x² + x² = 4x²,
and similarly if y ≥ x. To be safe we can use that (x + y)² ≤ 4x² + 4y². Now we want to show that the variance is finite:
∫_0^t E[(aX_s + bY_s)²] ds ≤ 4a² ∫_0^t E[X_s²] ds + 4b² ∫_0^t E[Y_s²] ds < ∞,
since both terms are finite. Thus the two properties of Ito integrability are met.
For X_s = 1 we check the same properties. As was discussed, a process Y_s = f(B_s) is adapted, so we can simply set f(x) = 1 and it is adapted. The other property is easy:
∫_0^t E[1²] ds = t < ∞.
3.4
Using (dB_t)² = dt and (dt)² = dt dB_t = dB_t dt = 0, we'll show that (3.11) ⇒ (3.10). Rewriting (3.11),
df(t, X_t) = ∂f/∂t (t, X_t) dt + ∂f/∂x (t, X_t) dX_t + ½ ∂²f/∂x² (t, X_t)(dX_t)²,
where we replace
dX_t = u_t dt + v_t dB_t
and
(dX_t)² = u_t²(dt)² + 2u_t v_t dt dB_t + v_t²(dB_t)² = v_t² dt,
which becomes
df(t, X_t) = ∂f/∂t (t, X_t) dt + ∂f/∂x (t, X_t)(u_t dt + v_t dB_t) + ½ ∂²f/∂x² (t, X_t) v_t² dt.
Collecting the deterministic and stochastic parts,
df(t, X_t) = (∂f/∂t (t, X_t) + ∂f/∂x (t, X_t) u_t + ½ ∂²f/∂x² (t, X_t) v_t²) dt + ∂f/∂x (t, X_t) v_t dB_t,
which is equation (3.10). QED.
Exercises 03. Sep
Exercise 1
We have a normally distributed random variable X, and will find its first four moments and the moment generating function.
The first moment, the expected value or mean, is defined as
μ = E[X] = ∫_Ω X dP
for some probability space (Ω, F, P). We define the second moment, the variance, in terms of the first:
σ² = Var(X) = E[X²] − E[X]² = E[X²] − μ².
The third moment, the skewness, is
γ_1 = (E[X³] − 3μσ² − μ³)/σ³,
and the fourth moment, the kurtosis, is given by
γ_2 = E[(X − μ)⁴]/σ⁴.
Finding the mgf: since X is normally distributed, we have X = μ + σY for a standard normally distributed Y.
M_X(t) = E[exp(tX)] = E[exp(t(μ + σY))] = E[exp(μt) exp(σtY)] = exp(μt) E[exp(σtY)]
= exp(μt) (1/√(2π)) ∫_R exp(tσy) exp(−y²/2) dy = exp(μt + ½σ²t²).
From this, we can derive explicit expressions for the moments. In general,
E[X^n] = M_X^{(n)}(0),
so the first moment can be calculated as
E[X] = M_X′(0) = [exp(μt + ½σ²t²)(μ + σ²t)]_{t=0} = (1)(μ + 0) = μ.
For the coming calculations, we set Z(t) = μt + ½σ²t², so we have M_X(t) = e^{Z(t)}. We note that Z′(t) = μ + σ²t, Z″(t) = σ² and Z^{(n)}(t) = 0 for n ≥ 3.
We calculate the second moment:
E[X²] = M_X″(0) = [(Z′(t)e^{Z(t)})′]_{t=0} = [Z″(t)e^{Z(t)} + (Z′(t))² e^{Z(t)}]_{t=0}.
Now we use that Z(0) = 0, Z′(0) = μ, Z″(0) = σ², and we get
E[X²] = σ² + μ² ⟹ σ² = E[X²] − μ².
For the third moment,
E[X³] = [(Z″(t)e^{Z(t)} + (Z′(t))² e^{Z(t)})′]_{t=0} = [Z′Z″e^{Z} + 2Z′Z″e^{Z} + (Z′)³e^{Z}]_{t=0}
= μσ² + 2μσ² + μ³ = 3μσ² + μ³.
The approach is similar for the fourth moment.
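A quick numerical check of these moment formulas against samples (my own sketch; μ and σ arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 1.5, 0.8
X = rng.normal(mu, sigma, 4_000_000)

print(X.mean(), mu)                               # E[X] = mu
print((X**2).mean(), sigma**2 + mu**2)            # E[X^2] = sigma^2 + mu^2
print((X**3).mean(), 3*mu*sigma**2 + mu**3)       # E[X^3] = 3 mu sigma^2 + mu^3
print(((X - mu)**4).mean(), 3*sigma**4)           # E[(X-mu)^4] = 3 sigma^4 for a normal
```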
Exercise 3
• (a) We want to show that the power set P(Ω) is a σ-algebra on Ω. To do this we must verify the three properties of σ-algebras for the power set. Formally, the power set is
P(Ω) = {A | A ⊆ Ω}.
(Σ1) Since Ω ⊆ Ω, we have Ω ∈ P(Ω). Verified.
(Σ2) For A ∈ P(Ω) we have A ⊆ Ω. For A^c we have A^c = Ω\A ⊆ Ω, so A^c ∈ P(Ω). Verified.
(Σ3) For A_1, A_2, ... ⊆ Ω, we have ⋃_{n∈N} A_n ⊆ Ω since none of the sets can contain elements from outside Ω. Since it is a subset of Ω it is by definition in P(Ω). Verified.
All requirements are met, which means P(Ω) is a σ-algebra on Ω.
• (b) For the σ-algebras F_1 and F_2 on Ω, I will verify that F_1 ∩ F_2 is also a σ-algebra. As in (a) we verify this by checking the three properties.
(Σ1) By assumption Ω ∈ F_1 and Ω ∈ F_2, since they are σ-algebras. Since Ω is included in both families, Ω ∈ F_1 ∩ F_2. The first property is met.
(Σ2) For a set A ∈ F_1 ∩ F_2, we must establish that A^c ∈ F_1 ∩ F_2:
A ∈ F_1 ∩ F_2 ⟹ A ∈ F_1 and A ∈ F_2.
Since these families are σ-algebras,
A^c ∈ F_1 and A^c ∈ F_2 ⟹ A^c ∈ F_1 ∩ F_2.
(Σ3) For A_1, A_2, ... ∈ F_1 ∩ F_2 we must verify that ⋃_{n∈N} A_n ∈ F_1 ∩ F_2:
A_1, A_2, ... ∈ F_1 ∩ F_2 ⟹ A_1, A_2, ... ∈ F_1 and A_1, A_2, ... ∈ F_2.
Since these are each σ-algebras,
⋃_{n∈N} A_n ∈ F_1 and ⋃_{n∈N} A_n ∈ F_2 ⟹ ⋃_{n∈N} A_n ∈ F_1 ∩ F_2.
We have verified the three properties, so F_1 ∩ F_2 is a σ-algebra.
Exercise 4
We have the real line R and the Borel σ-algebra B_R. For a subset A ∈ B_R we define
P(A) = ∫_A φ(x) dx,
where φ(x) = (2π)^{−1/2} exp(−x²/2). We will show that (R, B_R, P) is a probability space.
We have a measurable space whenever we have a set with a corresponding σ-algebra, which we already have. We must verify that the function P is a measure, and that it is a probability measure.
The probability measure must be non-negative; since the integrand φ is non-negative, this is automatic. We must have P(∅) = 0, which is also true since the integral over the empty set is 0.
When we look at a countable collection of disjoint sets {A_i}, we must check that P(⋃_i A_i) = Σ_i P(A_i):
P(⋃_i A_i) = ∫_{⋃_i A_i} φ(x) dx = ∫_R φ(x) I_{⋃_i A_i} dx.
For disjoint sets, the indicator function has the property I_{⋃_i A_i} = Σ_i I_{A_i}, so
= ∫_R φ(x) Σ_i I_{A_i} dx = Σ_i ∫_R φ(x) I_{A_i} dx = Σ_i ∫_{A_i} φ(x) dx = Σ_i P(A_i).
We have confirmed that P is a measure. Lastly we check that the integral over the entire set is 1, i.e. P(R) = 1:
P(R) = (1/√(2π)) ∫_{−∞}^{∞} e^{−x²/2} dx = (1/√(2π)) · √(2π) = 1.
We have made all the verifications, and this space is indeed a probability space.
Exercise 6
For a Brownian motion B_t, show that
E[|B_t − B_s|⁴] = 3|t − s|².
Exercises 17. Sep
Exercise 3.1 [Oksendal]
Using the definition of Ito integrals, we are to prove
∫_0^t s dB_s = tB_t − ∫_0^t B_s ds.
By the definition, we have a partition 0 = s_0 < s_1 < ... < s_n = t, and we can write the integral as a sum:
∫_0^t s dB_s ≈ Σ_j s_j (B_{s_{j+1}} − B_{s_j}).
Using a hint we are given, we know that
Σ_j (s_{j+1} B_{s_{j+1}} − s_j B_{s_j}) = Σ_j s_j (B_{s_{j+1}} − B_{s_j}) + Σ_j B_{s_{j+1}}(s_{j+1} − s_j),
where we recognise the first term on the right as the sum representation of our integral. Shifting terms we get
Σ_j s_j (B_{s_{j+1}} − B_{s_j}) = Σ_j (s_{j+1} B_{s_{j+1}} − s_j B_{s_j}) − Σ_j B_{s_{j+1}}(s_{j+1} − s_j).
The first term on the right is a telescoping sum (s_0 = 0 and s_n = t):
Σ_j (s_{j+1} B_{s_{j+1}} − s_j B_{s_j}) = s_n B_{s_n} − s_0 B_{s_0} = tB_t − 0 · B_0 = tB_t.
Finally, letting n → ∞ (so the mesh of the partition goes to 0), the sums converge to their integral representations, and we arrive at the result:
∫_0^t s dB_s = tB_t − ∫_0^t B_s ds.
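The discrete identity above is exact for every path, so a numerical check (my own sketch) reproduces it to floating-point accuracy when the dB-sum uses left endpoints s_j and the ds-sum uses the right-endpoint values B_{s_{j+1}}:

```python
import numpy as np

rng = np.random.default_rng(8)
n, t = 1_000_000, 1.0
dt = t / n

dB = rng.normal(0.0, np.sqrt(dt), n)
B = dB.cumsum()                      # B[j] = B_{s_{j+1}}
s_left = np.arange(n) * dt           # s_j, the left endpoints

lhs = np.sum(s_left * dB)            # approximates int_0^t s dB_s
rhs = t * B[-1] - np.sum(B * dt)     # t*B_t - int_0^t B_s ds (right-endpoint sum)
print(lhs, rhs)                      # equal up to floating-point error
```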
Exercise 3.4 [Oksendal]
We will verify whether the following processes are martingales (using definition 3.2.2 from 10/09), and must verify the three properties. X_t is a martingale if (i) it is F_t-adapted, (ii) E[|X_t|] < ∞, and (iii) the important martingale property: for t ≥ s, E[X_t|F_s] = X_s. Alternatively, one can just check whether E[X_t] = E[X_0]; if this fails, X_t is not a martingale (but it holding does not mean that we have a martingale process).
•(i) X_t = B_t + 4t. This is obviously finite and F_t-adapted. We check the martingale property. Assume t ≥ s:
E[X_t|F_s] = E[B_t + 4t|F_s] = E[B_t|F_s] + 4t = B_s + 4t ≠ B_s + 4s = X_s.
We can also see that E[X_t] = 4t ≠ 0 = E[X_0], so X_t is not a martingale.
•(ii) X_t = B_t². Adapted, and E[|B_t²|] = t < ∞. Checking the martingale property:
E[X_t|F_s] = E[B_t²|F_s] = E[(B_t − B_s + B_s)²|F_s] = E[(B_t − B_s)² + 2(B_t − B_s)B_s + B_s²|F_s]
= E[(B_t − B_s)²|F_s] + 2E[B_t B_s|F_s] − 2E[B_s²|F_s] + E[B_s²|F_s].
First term: independence. Second term: B_s is measurable and can be factored out, and B_t is a martingale. Third and fourth terms: measurability.
= E[(B_t − B_s)²] + 2B_s² − 2B_s² + B_s² = t − s + B_s² ≠ X_s.
Alternatively, E[B_0²] = 0 ≠ t = E[B_t²], which means this cannot be a martingale.
•(iii) X_t = t²B_t − 2∫_0^t sB_s ds. This is obviously adapted and finite, so we check the final property:
E[X_t|F_s] = E[t²B_t − 2∫_0^t uB_u du | F_s] = t²E[B_t|F_s] − 2E[∫_0^s uB_u du | F_s] − 2E[∫_s^t uB_u du | F_s].
Using that B_t is a martingale, independence and measurability, we get
= t²B_s − 2∫_0^s uB_u du − 2∫_s^t u E[B_u|F_s] du = t²B_s − 2∫_0^s uB_u du − 2B_s ∫_s^t u du
= t²B_s − B_s(t² − s²) − 2∫_0^s uB_u du = s²B_s − 2∫_0^s uB_u du = X_s.
By our calculations, this is a martingale.
Exercise 3.5 [Oksendal]
We will prove that M_t = B_t² − t is a martingale. It is adapted. It is finite. Verifying the martingale property, let t ≥ s:
E[M_t|F_s] = E[B_t² − t|F_s] = E[B_t²|F_s] − t.
We repeat the calculations from 3.4(ii):
= t − s + B_s² − t = B_s² − s = M_s.
Thus, M_t is a martingale, by direct verification.
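A simulation sketch of mine illustrating the necessary condition E[X_t] = E[X_0] from 3.4: for the martingale M_t = B_t² − t the sample mean stays near 0 for every t, while for the non-martingale B_t + 4t it drifts like 4t:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1_000_000
for t in (0.25, 0.5, 1.0):
    Bt = rng.normal(0.0, np.sqrt(t), n)          # B_t ~ N(0, t)
    print(t, (Bt**2 - t).mean(), (Bt + 4*t).mean())
    # M_t = B_t^2 - t: mean ~ 0 for all t (constant, as for a martingale)
    # X_t = B_t + 4t:  mean ~ 4t (grows, so not a martingale)
```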
Exercise 3.6 [Oksendal]
Proving N_t = B_t³ − 3tB_t is a martingale. By inspection this is adapted and finite. We will verify the martingale property for t ≥ s:
E[N_t|F_s] = E[B_t³ − 3tB_t|F_s] = E[B_t³|F_s] − 3tE[B_t|F_s].
Some intermediate calculations, for x = B_t − B_s and y = B_s:
B_t³ = (B_t − B_s + B_s)³ = (x + y)³ = (x + y)(x² + 2xy + y²) = x³ + 3x²y + 3xy² + y³
= (B_t − B_s)³ + 3(B_t − B_s)²B_s + 3(B_t − B_s)B_s² + B_s³.
We do not need to go any further, since we can use independence and measurability to work our way through this expression. Taking the conditional expectation wrt the filtration F_s:
E[(B_t − B_s)³|F_s] + 3E[(B_t − B_s)²B_s|F_s] + 3E[(B_t − B_s)B_s²|F_s] + E[B_s³|F_s].
We have measurability:
E[(B_t − B_s)³|F_s] + 3B_s E[(B_t − B_s)²|F_s] + 3B_s² E[B_t − B_s|F_s] + B_s³.
Independence:
E[(B_t − B_s)³] + 3B_s E[(B_t − B_s)²] + 3B_s² E[B_t − B_s] + B_s³.
Using that the odd-order expectations are 0, and the even one is t − s:
E[B_t³|F_s] = 3B_s(t − s) + B_s³ = 3tB_s − 3sB_s + B_s³.
Returning to the original exercise, we had
E[B_t³|F_s] − 3tE[B_t|F_s] = 3tB_s − 3sB_s + B_s³ − 3tB_s = B_s³ − 3sB_s = N_s.
N_t is a martingale.
Additional Exercise
Show that
M(t) = M(0) exp(−½σ²t + σB(t))
is a martingale wrt the filtration F_t for the BM process B(t).
Exercises 24. Sep
Exercise 1
Let
dX(t) = (μ − αX(t)) dt + σ dB(t).
Find the dynamics dS(t), where S(t) = exp(X(t)). We first define g(t, x) = e^x.
Exercise 2