Brownian Motion
March 15, 2013

Contents
1 The Gaussian Process
2 Markov Processes
3 Markov Transition Functions
4 Strong Markov Property
5 Brownian Martingales
6 Donsker's Theorem

1 The Gaussian Process
Definition 1.1. Brownian Motion
A Brownian motion is a stochastic process {B_t}_{t≥0} on R with B_0 = 0 s.t.
• t ↦ B_t is continuous a.s.
• For any strictly increasing sequence {t_i}_{i=1}^N the increments {B_{t_i} − B_{t_{i−1}}}_{i=2}^N are independent.
• For 0 ≤ s < t we have that B_t − B_s ∼ N(0, t − s).
Definition 1.2. d-Dimensional Brownian Motion
{B_t}_{t≥0} with B_t = (B_t^(1), ..., B_t^(d)) is a d-dimensional Brownian motion if each B^(i) is an independent Brownian motion on R.
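The definitions above are easy to experiment with numerically. The following minimal sketch (not part of the original notes; the function name is our own) simulates a d-dimensional Brownian path by summing independent N(0, dt) increments, which is exactly the finite-dimensional content of Definitions 1.1 and 1.2.

```python
import numpy as np

def brownian_path(T=1.0, n_steps=1000, d=1, rng=None):
    """Simulate a d-dimensional Brownian motion on [0, T] at n_steps + 1
    equally spaced times, using independent N(0, dt) increments."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_steps, d))
    path = np.vstack([np.zeros((1, d)), np.cumsum(increments, axis=0)])
    times = np.linspace(0.0, T, n_steps + 1)
    return times, path

t, B = brownian_path(T=1.0, n_steps=1000, d=2)
print(B[-1])  # B_1, approximately N(0, I_2)
```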
Definition 1.3. Characteristic Function
For a random variable Z the characteristic function is
ϕ_Z(θ) := E[e^{iθZ}]
Definition 1.4. Gaussian Vector
Z = (Z_1, ..., Z_d) ∈ R^d is a Gaussian vector if every linear combination ∑_{i=1}^d λ_i Z_i is a Gaussian random variable in R, i.e. has density
f(x) = e^{−(x−µ)²/(2σ²)} / (√(2π) σ)
Corollary 1.1. The characteristic function of a Gaussian random variable N(µ, σ²) is
ϕ_Z(θ) = e^{iθµ − σ²θ²/2}
Proof. Let Z ∼ N(µ, σ²). Then
ϕ_Z(θ) = E[e^{iθZ}]
= ∫_{−∞}^{∞} e^{iθz} e^{−(z−µ)²/(2σ²)} / (√(2π) σ) dz
= (1/(√(2π) σ)) ∫_{−∞}^{∞} e^{−((z−µ)² − 2σ²iθz)/(2σ²)} dz
= (1/(√(2π) σ)) ∫_{−∞}^{∞} e^{−(z² − 2µz + µ² − 2σ²iθz)/(2σ²)} dz
= (1/(√(2π) σ)) ∫_{−∞}^{∞} e^{−((z − (µ + σ²iθ))² + σ⁴θ² − 2µσ²iθ)/(2σ²)} dz
= e^{−σ²θ²/2 + µiθ} ∫_{−∞}^{∞} e^{−(z − (µ + σ²iθ))²/(2σ²)} / (√(2π) σ) dz
= e^{−σ²θ²/2 + µiθ}
since the remaining (complex-shifted) Gaussian integral equals 1.
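As a quick sanity check of Corollary 1.1 (a sketch, not from the notes), one can compare the empirical characteristic function of Gaussian samples against e^{iθµ − σ²θ²/2}:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.5, 2.0
z = rng.normal(mu, sigma, size=200_000)

for theta in [0.3, 1.0, 2.0]:
    empirical = np.mean(np.exp(1j * theta * z))            # E[e^{i theta Z}]
    exact = np.exp(1j * theta * mu - sigma**2 * theta**2 / 2)
    print(theta, empirical, exact)
```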
Definition 1.5. Gaussian Process
The stochastic process {X_t}_{t≥0} is Gaussian if (X_{t_1}, ..., X_{t_N}) is a Gaussian vector for any choice of {t_i}_{i=1}^N.
Lemma 1.1. The following are facts about Gaussian random variables:
• Z ∼ N(µ, σ²), c > 0 =⇒ cZ ∼ N(cµ, c²σ²)
• Z_i ∼ N(µ_i, σ_i²), i = 1, 2, independent, then Z_1 + Z_2 ∼ N(µ_1 + µ_2, σ_1² + σ_2²)
• If Z_k ∼ N(µ_k, σ_k²) for k ∈ N and Z_k converges a.s. to some Z, then the limits µ = lim_{k→∞} µ_k and σ² = lim_{k→∞} σ_k² exist and Z ∼ N(µ, σ²); in particular an a.s. limit of Gaussians is Gaussian.
• ϕ_Z(θ) determines the law of Z
• Z_1 ⊥ Z_2 ⇐⇒ E[e^{iθ_1 Z_1} e^{iθ_2 Z_2}] = E[e^{iθ_1 Z_1}] E[e^{iθ_2 Z_2}] ∀θ_1, θ_2
• If lim_{k→∞} ϕ_{Z_k}(θ) = ϕ_Z(θ) for all θ then Z_k converges to Z in distribution.
• The law of the Gaussian vector Z = (Z_1, ..., Z_N) is determined by the quantities E[Z_k], E[Z_i Z_j]
• If (Z_1, Z_2) is a Gaussian vector then Z_1, Z_2 are independent iff E[Z_1 Z_2] = E[Z_1]E[Z_2] (joint Gaussianity is essential here)
Lemma 1.2. A Brownian motion is a Gaussian process.
Proof. We need to check that ∑_{i=1}^N λ_i B_{t_i} is Gaussian for any choice of {λ_i, t_i}_{i=1}^N.
We can write
∑_{i=1}^N λ_i B_{t_i} = ∑_{i=1}^N µ_i (B_{t_i} − B_{t_{i−1}})
where µ_i = ∑_{j=i}^N λ_j, t_0 = 0 and B_0 = 0.
This form is a sum of independent Gaussian random variables and hence is itself Gaussian.
Lemma 1.3. The Brownian Bridge: X_t := B_t − tB_1 for t ∈ [0, 1] is a Gaussian process.
Proof. We need to check that for any choice of {λ_i, t_i}_{i=1}^N the sum ∑_{i=1}^N λ_i (B_{t_i} − t_i B_1) is Gaussian.
This sum can be written as
∑_{i=1}^N λ_i (B_{t_i} − t_i B_1) = ∑_{i=1}^N λ_i B_{t_i} − (∑_{i=1}^N λ_i t_i) B_1 = ∑_{i=1}^{N+1} λ̂_i B_{t_i}
where λ̂_i = λ_i for i ≤ N, λ̂_{N+1} = −∑_{i=1}^N λ_i t_i and t_{N+1} = 1.
This is the same summation form as for the Brownian motion and hence is Gaussian.
Lemma 1.4. The Ornstein–Uhlenbeck process: X_t := e^{−Ct} B_{e^{2Ct}} for C > 0 is a Gaussian process.
Proof. We need to check that for any choice of {λ_i, t_i}_{i=1}^N the sum ∑_{i=1}^N λ_i e^{−Ct_i} B_{e^{2Ct_i}} is Gaussian.
Write λ̂_i := λ_i e^{−Ct_i} and t̂_i := e^{2Ct_i}. We then have that the sum is of the form
∑_{i=1}^N λ_i e^{−Ct_i} B_{e^{2Ct_i}} = ∑_{i=1}^N λ̂_i B_{t̂_i}
and using the same rearrangement trick as in the Brownian motion case we have the desired result.
Lemma 1.5. For a Brownian motion B_t we have that
E[B_t B_s] = min(t, s)
Proof. WLOG let 0 ≤ s < t, then we have that
E[B_t B_s] = E[(B_t − B_s)B_s + B_s²]
= E[(B_t − B_s)(B_s − B_0) + (B_s − B_0)²]
= E[B_t − B_s] E[B_s − B_0] + E[(B_s − B_0)²]        (by independence of increments)
= 0 + s
Theorem 1.1. If X = {Xt }t≥0 is a Gaussian process with continuous paths s.t.
• E[Xt ] = 0 ∀t
• E[Xt Xs ] = min(s, t)
then X is a Brownian motion.
Proof. We already have that the paths are continuous a.s. and the expectation is zero so it remains to
show that the increments Xt − Xs are independent Gaussians with zero mean and variance t − s.
By definition of a Gaussian process Xt − Xs is Gaussian. Let 0 ≤ s < t
E[Xt − Xs ] = E[Xt ] − E[Xs ]
=0−0
=0
Var[X_t − X_s] = E[(X_t − X_s)²] − E[X_t − X_s]²
= E[X_t² + X_s² − 2X_t X_s]
= E[X_t²] + E[X_s²] − 2E[X_t X_s]
= t + s − 2 min{t, s}
= t − s
We now need to show that the increments are independent. It suffices to show that X_{t_1} − X_{t_2} and X_{t_3} − X_{t_4} are independent for 0 ≤ t_4 < t_3 ≤ t_2 < t_1. We have E[X_{t_1} − X_{t_2}] E[X_{t_3} − X_{t_4}] = 0 and
E[(X_{t_1} − X_{t_2})(X_{t_3} − X_{t_4})] = E[X_{t_1}X_{t_3} − X_{t_1}X_{t_4} − X_{t_2}X_{t_3} + X_{t_2}X_{t_4}]
= E[X_{t_1}X_{t_3}] − E[X_{t_1}X_{t_4}] − E[X_{t_2}X_{t_3}] + E[X_{t_2}X_{t_4}]
= t_3 − t_4 − t_3 + t_4
= 0
so the covariance matches the product of the means, which suffices for independence since the increments are jointly Gaussian.
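The covariance characterisation is convenient to test by simulation; this sketch (not from the notes) estimates E[B_s B_t] across many discretised paths and compares it with min(s, t):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 50_000, 100
dt = 1.0 / n_steps
# rows: paths, columns: times k/n_steps for k = 1..n_steps
B = np.cumsum(rng.normal(0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

s, t = 0.3, 0.7
i, j = int(s * n_steps) - 1, int(t * n_steps) - 1
print(np.mean(B[:, i] * B[:, j]), min(s, t))  # both ≈ 0.3
```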
Lemma 1.6. If B_t is a Brownian motion then
I = ∫_0^1 B_t dt
is a Gaussian N(0, 1/3) random variable.
Proof. I is the a.s. limit as N → ∞ of
(1/N) ∑_{k=1}^N B_{k/N} = (1/N) ∑_{k=1}^N (N − k + 1)(B_{k/N} − B_{(k−1)/N})
The terms in the second sum are independent Gaussian random variables, hence each Riemann sum is Gaussian; moreover the almost sure limit of Gaussians is Gaussian, so I is Gaussian.
E[I] = E[∫_0^1 B_t dt] = ∫_0^1 E[B_t] dt = 0
Var[I] = E[I²]
= E[∫_0^1 B_s ds ∫_0^1 B_t dt]
= ∫_0^1 ∫_0^1 E[B_t B_s] ds dt        (by Fubini)
= ∫_0^1 ∫_0^1 min{t, s} ds dt
= ∫_0^1 (∫_0^t s ds + ∫_t^1 t ds) dt
= ∫_0^1 t²/2 dt + ∫_0^1 (t − t²) dt
= 1/6 + 1/2 − 1/3
= 1/3
So providing the use of Fubini is valid we have the desired result. We need that
E[∫_0^1 ∫_0^1 |B_t||B_s| ds dt] < ∞
We have that
E[∫_0^1 ∫_0^1 |B_t||B_s| ds dt] = ∫_0^1 ∫_0^1 E[|B_t||B_s|] ds dt        (since the integrand is positive)
≤ ∫_0^1 ∫_0^1 √(E[B_t²] E[B_s²]) ds dt        (by Cauchy–Schwarz)
= ∫_0^1 ∫_0^1 √(ts) ds dt
< 1
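A Monte Carlo check of Lemma 1.6 (a sketch): approximating I by the Riemann sums used in the proof, the sample mean should be near 0 and the sample variance near 1/3.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 50_000, 200
dt = 1.0 / n_steps
B = np.cumsum(rng.normal(0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
I = B.mean(axis=1)  # Riemann-sum approximation of ∫_0^1 B_t dt per path

print(I.mean(), I.var())  # ≈ 0 and ≈ 1/3
```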
Lemma 1.7. Scaling Lemma:
If B is a Brownian motion and c > 0 then X_t := (1/c) B_{c²t} for t ≥ 0 is also a Brownian motion.
Proof. It suffices to check that X has continuous paths, E[X_t] = 0, E[X_t X_s] = min{t, s} and that X is Gaussian.
X_t is a continuous function of a Brownian motion, hence clearly has continuous paths.
E[X_t] = (1/c) E[B_{c²t}] = 0 since B is a Brownian motion.
Let s < t, then:
E[X_s X_t] = E[(1/c) B_{c²s} (1/c) B_{c²t}]
= (1/c²) E[B_{c²s} B_{c²t}]
= (1/c²) c²s
= s
Finally
∑_{k=1}^N λ_k X_{t_k} = ∑_{k=1}^N (λ_k/c) B_{c²t_k}
which is Gaussian since B is a Gaussian process.
Lemma 1.8. Time Inversion Lemma:
If B is a Brownian motion then
X_t := t B_{1/t} for t ≠ 0,        X_0 := 0
is also a Brownian motion.
Proof. It suffices to check that X has continuous paths, E[X_t] = 0, E[X_t X_s] = min{t, s} and that X is Gaussian.
∑_{k=1}^N λ_k X_{t_k} = ∑_{k=1}^N λ_k t_k B_{1/t_k}
which is Gaussian since B is a Gaussian process.
E[X_t] = t E[B_{1/t}] = 0 since B is a Brownian motion.
Let 0 < s < t, then 1/t < 1/s, so
E[X_s X_t] = E[s B_{1/s} t B_{1/t}]
= st E[B_{1/s} B_{1/t}]
= st (1/t)
= s
t and B_{1/t} are continuous for t > 0, hence t B_{1/t} has continuous paths for t > 0, so it suffices to show that lim_{t→0+} t B_{1/t} = 0 a.s. We can express this event through rational times:
{lim_{t→0+} X_t = 0} = {lim_{q→0+, q∈Q} X_q = 0}        (by continuity of paths for t > 0)
= {∀n ∈ N, ∃m ∈ N : q ∈ Q ∩ (0, 1/m] =⇒ |X_q| < 1/n}
= ⋂_{n∈N} ⋃_{m∈N} ⋂_{q∈Q∩(0,1/m]} {|X_q| < 1/n}
Therefore
P(lim_{t→0+} X_t = 0) = P(⋂_{n∈N} ⋃_{m∈N} ⋂_{q∈Q∩(0,1/m]} {|X_q| < 1/n})
= lim_{n→∞} P(⋃_{m∈N} ⋂_{q∈Q∩(0,1/m]} {|X_q| < 1/n})        (since the sets are decreasing in n)
= lim_{n→∞} lim_{m→∞} P(⋂_{q∈Q∩(0,1/m]} {|X_q| < 1/n})        (since the sets are increasing in m)
= lim_{n→∞} lim_{m→∞} lim_{k→∞} P(⋂_{j=1}^k {|X_{q_j}| < 1/n})        (where q_j are the ordered elements of Q ∩ (0, 1/m])
= lim_{n→∞} lim_{m→∞} lim_{k→∞} P(⋂_{j=1}^k {|B_{q_j}| < 1/n})        (X and B have the same finite-dimensional distributions)
= P(lim_{t→0+} B_t = 0) = 1
Corollary 1.2. If B is a Brownian motion then
lim_{t→∞} B_t/t^α = 0        ∀α > 1/2
lim sup_{t→∞} B_t/√t = ∞
lim inf_{t→∞} B_t/√t = −∞
Lemma 1.9. Gaussian Tails:
For a > 0
(1/a)(1 − 1/a²) e^{−a²/2} ≤ ∫_a^∞ e^{−z²/2} dz ≤ (1/a) e^{−a²/2}
Proof. For z > a > 0 we have that z/a > 1, so
∫_a^∞ e^{−z²/2} dz ≤ ∫_a^∞ (z/a) e^{−z²/2} dz = (1/a) e^{−a²/2}
So indeed the upper bound holds.
Notice that, differentiating −(1/x) e^{−x²/2},
(1/a) e^{−a²/2} = ∫_a^∞ (e^{−x²/2} + (1/x²) e^{−x²/2}) dx
So
∫_a^∞ e^{−x²/2} dx = (1/a) e^{−a²/2} − ∫_a^∞ (1/x²) e^{−x²/2} dx
≥ (1/a) e^{−a²/2} − (1/a³) ∫_a^∞ x e^{−x²/2} dx        (since 1/x² = x/x³ ≤ x/a³ for x ≥ a)
= (1/a) e^{−a²/2} − (1/a³) e^{−a²/2}
= (1/a)(1 − 1/a²) e^{−a²/2}
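Both tail bounds are easy to inspect numerically; here the exact tail ∫_a^∞ e^{−z²/2} dz = √(π/2) erfc(a/√2) is compared against the two bounds (a sketch using only the standard library):

```python
import math

def gaussian_tail(a):
    # exact value of the integral ∫_a^∞ e^{-z^2/2} dz
    return math.sqrt(math.pi / 2) * math.erfc(a / math.sqrt(2))

for a in [1.0, 2.0, 3.0, 5.0]:
    lower = (1 / a) * (1 - 1 / a**2) * math.exp(-a**2 / 2)
    upper = (1 / a) * math.exp(-a**2 / 2)
    print(a, lower, gaussian_tail(a), upper)
```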
Lemma 1.10. Borel–Cantelli:
Let {A_i}_{i=1}^∞ ⊆ F:
• If ∑_{i=1}^∞ P(A_i) < ∞ then P(A_i i.o.) = 0
• If {A_i}_{i=1}^∞ are independent and ∑_{i=1}^∞ P(A_i) = ∞ then P(A_i i.o.) = 1
Proof.
• E[∑_{i=1}^∞ χ_{A_i}] = ∑_{i=1}^∞ E[χ_{A_i}]        (by monotone convergence, since χ ≥ 0)
= ∑_{i=1}^∞ P(A_i)
< ∞
So P-a.s. only finitely many of the events occur.
• Let Z = ∑_{i=1}^∞ χ_{A_i} be the total number of events that occur; it suffices to show that E[e^{−Z}] = 0.
E[e^{−∑_{i=1}^∞ χ_{A_i}}] = E[∏_{i=1}^∞ e^{−χ_{A_i}}]
= ∏_{i=1}^∞ E[e^{−χ_{A_i}}]        (by independence)
= ∏_{i=1}^∞ (P(A_i^c) + e^{−1} P(A_i))
= ∏_{i=1}^∞ (1 − (1 − e^{−1}) P(A_i))
≤ ∏_{i=1}^∞ e^{−(1−e^{−1}) P(A_i)}
= e^{−(1−e^{−1}) ∑_{i=1}^∞ P(A_i)}
= 0
Lemma 1.11. Reflection Principle:
For a ≥ 0 we have that
P(sup_{s≤t} B_s ≥ a) = 2P(B_t ≥ a)
Proof. Let Ω_0 := {sup_{s≤t} B_s ≥ a} be the event that the Brownian motion reaches level a before time t.
The sets {B_t > a}, {B_t = a}, {B_t < a} form a partition, so
P(Ω_0) = P(Ω_0 ∩ {B_t > a}) + P(Ω_0 ∩ {B_t = a}) + P(Ω_0 ∩ {B_t < a})
P(Ω_0 ∩ {B_t = a}) = 0, so we don't need to worry about this event.
P({B_t < a}|Ω_0) = P({B_t > a}|Ω_0), since once the path reaches the level a, the remaining path is (by the strong Markov property) a Brownian motion, which is symmetric about a, so paths ending above a and below a are equally likely.
This gives us that P(Ω_0 ∩ {B_t < a}) = P(Ω_0 ∩ {B_t > a}). Noting that {B_t > a} ⊆ Ω_0, we indeed have
P(sup_{s≤t} B_s ≥ a) = 2P(B_t ≥ a)
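A Monte Carlo illustration of the reflection principle (a sketch; the discrete grid slightly undercounts the true running maximum):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, a = 20_000, 500, 1.0
dt = 1.0 / n_steps
B = np.cumsum(rng.normal(0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

lhs = np.mean(B.max(axis=1) >= a)   # P(sup_{s<=1} B_s >= a)
rhs = 2 * np.mean(B[:, -1] >= a)    # 2 P(B_1 >= a)
print(lhs, rhs)                     # both ≈ 0.317; increase n_steps to reduce grid bias
```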
Theorem 1.2. Law of the Iterated Logarithm
Let ψ(t) = √(2t log(log(t))). Then
lim sup_{t→∞} B_t/ψ(t) = 1
lim inf_{t→∞} B_t/ψ(t) = −1
Proof. By symmetry of B it suffices to show just the first limit. To do this we show lim sup_{t→∞} B_t/ψ(t) ≤ 1 and then lim sup_{t→∞} B_t/ψ(t) ≥ 1.
• lim sup_{t→∞} B_t/ψ(t) ≤ 1
Notice that this inequality is equivalent to saying that ∀ε > 0 we have B_t/ψ(t) ≤ 1 + ε for sufficiently large t. Since B_t/√t is a standard Gaussian,
P(B_t > (1 + ε)ψ(t)) = ∫_{(1+ε)ψ(t)/√t}^∞ e^{−z²/2}/√(2π) dz
≤ e^{−(1+ε)² log(log(t))} / ((1 + ε)√(4π log(log(t))))        (using the Gaussian tails lemma with a = (1 + ε)√(2 log log t))
Choose a sequence t_n = θ^n for θ > 1. We want to control the probability that the bound is exceeded at these values t_n:
P(B_{θ^n} > (1 + ε)ψ(θ^n)) ≤ e^{−(1+ε)² log(log(θ^n))} / ((1 + ε)√(4π log(log(θ^n))))
= e^{−(1+ε)² log(n log(θ))} / ((1 + ε)√(4π log(n log(θ))))
≤ C(θ, ε) n^{−(1+ε)²}
where C is some constant depending on θ, ε.
By Borel–Cantelli we have that P(B_{θ^n} > (1 + ε)ψ(θ^n) i.o.) = 0 since
∑_{n=1}^∞ C(θ, ε) n^{−(1+ε)²} < ∞
hence almost surely the bound is exceeded at the points θ^n for at most finitely many n.
We also need to show that the process does not exceed the bound between θ^n and θ^{n+1} for sufficiently large n. By the reflection principle
P(sup_{s≤θ^n} B_s > (1 + ε)ψ(θ^n)) = 2P(B_{θ^n} > (1 + ε)ψ(θ^n)) ≤ 2C(θ, ε) n^{−(1+ε)²}
So by Borel–Cantelli there are a.s. only finitely many intervals [θ^n, θ^{n+1}) on which the Brownian motion exceeds the bound. We therefore have that eventually, for t ∈ [θ^n, θ^{n+1}),
B_t/ψ(t) ≤ (1 + ε) ψ(θ^{n+1})/ψ(θ^n)
= (1 + ε) √(θ^{n+1} log((n+1) log(θ)) / (θ^n log(n log(θ))))
= (1 + ε) √θ √(log((n+1) log(θ)) / log(n log(θ)))
and
lim_{n→∞} √(log((n+1) log(θ)) / log(n log(θ))) = 1
Moreover we can choose ε arbitrarily small and θ arbitrarily close to 1, hence indeed lim sup_{t→∞} B_t/ψ(t) ≤ 1 a.s.
• lim sup_{t→∞} B_t/ψ(t) ≥ 1
Notice that this inequality is equivalent to saying that ∀ε > 0 we have B_t/ψ(t) ≥ 1 − ε for arbitrarily large t, i.e. infinitely often.
Again set t_n = θ^n for θ > 1 and write
A_n = {B_{θ^n} − B_{θ^{n−1}} > (1 − ε)√(1 − θ^{−1}) ψ(θ^n)}
which are independent by independence of increments; also B_{θ^n} − B_{θ^{n−1}} ∼ N(0, θ^n − θ^{n−1}).
Using the lower bound in the Gaussian tails lemma we have P(A_n) ≥ C(θ, ε) n^{−(1−ε)²}, so ∑_n P(A_n) = ∞; since the A_n are independent, by Borel–Cantelli A_n occurs infinitely often almost surely.
On A_n, using the upper bound (applied to −B) to control B_{θ^{n−1}} ≥ −(1 + ε)ψ(θ^{n−1}), with ψ(θ^{n−1}) ≈ √(θ^{−1}) ψ(θ^n) for large n, this gives us that
lim sup_{n→∞} B_{θ^n}/ψ(θ^n) ≥ (1 − ε)√(1 − θ^{−1}) − (1 + ε)√(θ^{−1})
but we can take ε arbitrarily small and θ arbitrarily large, so we have that lim sup_{t→∞} B_t/ψ(t) ≥ 1.
Definition 1.6. Exceptional Point:
An exceptional point for the Brownian motion B is a point at which B is differentiable.
Definition 1.7. Hölder Continuity:
f : [0, ∞) → R is α-Hölder continuous for α ∈ (0, 1] at t ∈ [0, ∞) if ∃M, δ > 0 s.t.
|f_{t+s} − f_t| ≤ M|s|^α        ∀|s| < δ.
Lemma 1.12. If X ∼ N(0, N^{−1}) and C > 0 then
P(|X| ≤ CN^{−1}) ≤ (2C/√(2π)) (1/√N)
Proof. Writing X = Z/√N with Z ∼ N(0, 1),
P(|X| ≤ CN^{−1}) = P(|Z| ≤ CN^{−1/2})
= ∫_{−CN^{−1/2}}^{CN^{−1/2}} e^{−z²/2}/√(2π) dz
≤ (2CN^{−1/2}/√(2π)) max_z e^{−z²/2}
= (2C/√(2π)) (1/√N)
Proposition 1.1. Let Ω_{M,δ} := {∃t ∈ [0, 1] : |B_{t+s} − B_t| ≤ M|s| ∀|s| < δ} be the event that B is 1-Hölder continuous (with constants M, δ) at some point t ∈ [0, 1].
Then P(Ω_{M,δ}) = 0.
Proof. Suppose ∃t ∈ [k/N, (k+1)/N) s.t. |B_{t+s} − B_t| ≤ M|s| ∀|s| ≤ δ.
We can choose N sufficiently large such that (k+i)/N ∈ [t, t+δ] for i = 1, ..., 4. We then have that
|B_{(k+i)/N} − B_t| ≤ iM/N        for i = 1, 2, 3, 4
and hence by the triangle inequality
|B_{(k+2)/N} − B_{(k+1)/N}| ≤ |B_{(k+2)/N} − B_t| + |B_{(k+1)/N} − B_t| ≤ 3M/N
|B_{(k+3)/N} − B_{(k+2)/N}| ≤ |B_{(k+3)/N} − B_t| + |B_{(k+2)/N} − B_t| ≤ 5M/N
|B_{(k+4)/N} − B_{(k+3)/N}| ≤ |B_{(k+4)/N} − B_t| + |B_{(k+3)/N} − B_t| ≤ 7M/N
Since B is a Brownian motion we have that
B_{(k+i+1)/N} − B_{(k+i)/N} ∼ N(0, N^{−1})        for i = 1, 2, 3
and these three increments are independent. So, by Lemma 1.12,
P(Ω_{M,δ}) ≤ P(⋃_{k=1}^{N−3} ⋂_{i=1}^{3} {|B_{(k+i+1)/N} − B_{(k+i)/N}| ≤ (2i+1)M/N})
≤ ∑_{k=1}^{N−3} ∏_{i=1}^{3} P(|B_{(k+i+1)/N} − B_{(k+i)/N}| ≤ (2i+1)M/N)
≤ ∑_{k=1}^{N−3} C N^{−3/2}
≤ C/√N
and lim_{N→∞} C/√N = 0. So indeed, since N can be chosen arbitrarily large, P(Ω_{M,δ}) = 0.
Proposition 1.2. Let α > 1/2 and Ω_{M,δ} := {∃t ∈ [0, 1] : |B_{t+s} − B_t| ≤ M|s|^α ∀|s| < δ} be the event that B is α-Hölder continuous at some point t ∈ [0, 1].
Then P(Ω_{M,δ}) = 0.
Proof. Suppose ∃t ∈ [k/N, (k+1)/N) s.t. |B_{t+s} − B_t| ≤ M|s|^α ∀|s| ≤ δ.
Let ρ be an integer large enough that (α − 1/2)(ρ − 1) > 1, e.g. ρ = ⌈2 + 2/(2α − 1)⌉.
We can choose N sufficiently large such that (k+i)/N ∈ [t, t+δ] for i = 1, ..., ρ. We then have that
|B_{(k+i)/N} − B_t| ≤ M(i/N)^α ≤ ρ^α M/N^α        for i = 1, ..., ρ
and hence, as before,
|B_{(k+i+1)/N} − B_{(k+i)/N}| ≤ |B_{(k+i+1)/N} − B_t| + |B_{(k+i)/N} − B_t| ≤ 2ρ^α M/N^α
Since B is a Brownian motion we have that
B_{(k+i+1)/N} − B_{(k+i)/N} ∼ N(0, N^{−1})        for i = 1, ..., ρ − 1
and these increments are independent. So, by Lemma 1.12 (applied with C = 2ρ^α M N^{1−α}),
P(Ω_{M,δ}) ≤ P(⋃_{k=1}^{N−ρ+1} ⋂_{i=1}^{ρ−1} {|B_{(k+i+1)/N} − B_{(k+i)/N}| ≤ 2ρ^α M/N^α})
≤ ∑_{k=1}^{N−ρ+1} ∏_{i=1}^{ρ−1} P(|B_{(k+i+1)/N} − B_{(k+i)/N}| ≤ 2ρ^α M/N^α)
≤ ∑_{k=1}^{N−ρ+1} C N^{−(α−1/2)(ρ−1)}
≤ C N^{1−(α−1/2)(ρ−1)}
which tends to 0 as N → ∞ by the choice of ρ. So indeed, since N can be chosen arbitrarily large, P(Ω_{M,δ}) = 0.
Corollary 1.3.
P(⋃_{M=1}^∞ ⋃_{N=1}^∞ Ω_{M,1/N}) = 0
In particular B almost surely has no exceptional points, i.e. is nowhere differentiable, since differentiability at t would imply 1-Hölder continuity at t for some M, δ.
Theorem 1.3. Kolmogorov Continuity Theorem:
Suppose {X_t}_{t∈[0,1]} has continuous paths and satisfies
E[|X_t − X_s|^p] ≤ C|t − s|^{1+γ}
for some γ, p > 0. Then X has α-Hölder continuous paths for α ∈ (0, γ/p).
Corollary 1.4. A Brownian motion has α-Hölder continuous paths for any α < 1/2.
Proof. Let B be a Brownian motion, then
E[|B_t − B_s|^p] = E[|X|^p]        X ∼ N(0, t − s)
= E[|Y|^p] |t − s|^{p/2}        Y ∼ N(0, 1)
= C_p |t − s|^{p/2}
So the Kolmogorov continuity theorem applies to a Brownian motion with 1 + γ = p/2, giving α-Hölder paths for α < γ/p = 1/2 − 1/p; letting p → ∞, the theorem holds for every α < 1/2.
Lemma 1.13. Markov's Inequality:
If Z ≥ 0 is a random variable and a > 0 then
P(Z ≥ a) ≤ E[Z]/a
Proof.
E[Z] ≥ E[Z I_{Z≥a}]        (since Z ≥ 0)
≥ E[a I_{Z≥a}]
= a E[I_{Z≥a}]
= a P(Z ≥ a)
Corollary 1.5. Suppose {X_t}_{t∈[0,1]} has continuous paths and satisfies
E[|X_t − X_s|^p] ≤ C|t − s|^{1+γ}
for some γ, p > 0. Then
P(|X_t − X_s| ≥ a) ≤ C|t − s|^{1+γ}/a^p
Proof.
P(|X_t − X_s| ≥ a) = P(|X_t − X_s|^p ≥ a^p)
≤ E[|X_t − X_s|^p]/a^p        (by Markov)
≤ C|t − s|^{1+γ}/a^p        (by assumption)
Proposition 1.3. For α < 1/2 we have that
P(B is α-Hölder continuous at every t ∈ [0, 1]) = 1
Proof. Let
A_N = ⋃_{k=1}^{2^N} {|B_{k/2^N} − B_{(k−1)/2^N}| > 2^{−Nα}}
be the event that some dyadic increment at scale 2^{−N} is larger than 2^{−Nα}. Then
P(A_N) ≤ ∑_{k=1}^{2^N} P(|B_{k/2^N} − B_{(k−1)/2^N}| > 2^{−Nα})
≤ 2^N C (2^{−N})^{1+γ} 2^{Nαp}        (by Corollary 1.5 with a = 2^{−Nα})
= C 2^{−N(γ − αp)}
Since B is a Brownian motion, 1 + γ = p/2, and as we have assumed α < 1/2 we can choose p large enough that γ − αp = p(1/2 − α) − 1 > 0.
By Borel–Cantelli we then have
∑_{N=1}^∞ P(A_N) < ∞
hence A_N occurs only finitely often, so for all sufficiently large N every dyadic increment at scale 2^{−N} is at most 2^{−Nα}.
So on the countable grid of dyadic points D = {x ∈ Q : x = k/2^m, k, m ∈ N} the Brownian motion is α-Hölder continuous (chaining increments over dyadic scales). We need to extend this to the whole of [0, 1], but this follows by continuity of paths.
2 Markov Processes
Definition 2.1. Independent:
• The σ-fields F, G on (Ω, P) are independent if
P(A ∩ B) = P(A)P(B)        ∀A ∈ F, B ∈ G
• Random variables X, Y are independent if
P(X ∈ A, Y ∈ B) = P(X ∈ A)P(Y ∈ B)        ∀A, B measurable
Lemma 2.1. Random variables X, Y are independent iff
E[f(X)g(Y)] = E[f(X)]E[g(Y)]
∀f, g bounded measurable functions.
Proof. Suppose E[f(X)g(Y)] = E[f(X)]E[g(Y)] ∀f, g bounded measurable functions.
Then let A, B be measurable sets and set f = I_A, g = I_B, which are bounded and measurable. Then
P(X ∈ A, Y ∈ B) = E[I_A(X) I_B(Y)] = E[I_A(X)]E[I_B(Y)] = P(X ∈ A)P(Y ∈ B)
Conversely, suppose X, Y are independent and let f, g be bounded measurable functions. There exist sequences of simple functions
f_n = ∑_{i=0}^n λ_i I_{A_i},        g_m = ∑_{j=0}^m γ_j I_{B_j}
s.t. lim_{n→∞} f_n = f and lim_{m→∞} g_m = g, so by the mechanics of measure theory (bounded convergence) the result follows:
E[f(X)g(Y)] = E[lim_{n→∞} f_n(X) lim_{m→∞} g_m(Y)]
= lim_{n→∞} lim_{m→∞} E[f_n(X) g_m(Y)]
= lim_{n→∞} lim_{m→∞} ∑_{i=0}^n ∑_{j=0}^m λ_i γ_j E[I_{A_i}(X) I_{B_j}(Y)]
= lim_{n→∞} lim_{m→∞} ∑_{i=0}^n ∑_{j=0}^m λ_i γ_j P(X ∈ A_i, Y ∈ B_j)
= lim_{n→∞} lim_{m→∞} ∑_{i=0}^n ∑_{j=0}^m λ_i γ_j P(X ∈ A_i) P(Y ∈ B_j)
= lim_{n→∞} lim_{m→∞} ∑_{i=0}^n ∑_{j=0}^m λ_i γ_j E[I_{A_i}(X)] E[I_{B_j}(Y)]
= lim_{n→∞} lim_{m→∞} E[∑_{i=0}^n λ_i I_{A_i}(X)] E[∑_{j=0}^m γ_j I_{B_j}(Y)]
= E[lim_{n→∞} f_n(X)] E[lim_{m→∞} g_m(Y)]
= E[f(X)]E[g(Y)]
Definition 2.2. σ-Field Generated by X:
For a random variable X : Ω → R we say that
σ(X) := {X^{−1}(A) : A ∈ B(R)}
is the σ-field generated by X.
Corollary 2.1. For random variables X, Y we have that X ⊥ Y ⇐⇒ σ(X) ⊥ σ(Y).
Theorem 2.1. Markov Property for Brownian Motion:
If B is a Brownian motion and t_0 > 0, define X_t := B_{t_0+t} − B_{t_0}.
Then we have that
σ({X_t}_{t≥0}) ⊥ σ({B_t}_{t∈[0,t_0]})
Definition 2.3. π-System:
A collection A of subsets is called a π-system if it is closed under finite intersections.
Lemma 2.2. If F_0, G_0 are π-systems such that
P(A ∩ B) = P(A)P(B)        ∀A ∈ F_0, B ∈ G_0
then
P(A ∩ B) = P(A)P(B)        ∀A ∈ σ(F_0), B ∈ σ(G_0)
Corollary 2.2. A Brownian motion has the Markov property.
Proof. It suffices to check that {X_{t_k}}_{k=1}^N ⊥ {B_{s_r}}_{r=1}^M for any finite collections of times s_r ≤ t_0 =: T, so we check
E[e^{∑_{r=1}^M λ_r B_{s_r}} e^{∑_{k=1}^N µ_k X_{t_k}}] = E[e^{∑_{r=1}^M λ_r B_{s_r}} e^{∑_{k=1}^N µ_k (B_{T+t_k} − B_T)}]
= E[e^{∑_{r=1}^M λ̂_r (B_{s_r} − B_{s_{r−1}}) + ∑_{k=1}^N µ̂_k (B_{T+t_k} − B_{T+t_{k−1}})}]        (rewriting in terms of increments as in Lemma 1.2)
= E[e^{∑_{r=1}^M λ̂_r (B_{s_r} − B_{s_{r−1}})}] E[e^{∑_{k=1}^N µ̂_k (B_{T+t_k} − B_{T+t_{k−1}})}]
= E[e^{∑_{r=1}^M λ_r B_{s_r}}] E[e^{∑_{k=1}^N µ_k X_{t_k}}]
by independence of increments.
Definition 2.4. Local Maximum:
t ∈ R_+ is a local maximum of a Brownian motion B if ∃ε > 0 s.t. B_t > B_s        ∀s ∈ (t − ε, t + ε) \ {t}
Proposition 2.1. If B is a Brownian motion then:
• a < b < c < d implies that P(max_{[a,b]} B_t ≠ max_{[c,d]} B_t) = 1
• P(∃! t* ∈ [a, b] : B_{t*} = max_{[a,b]} B_t) = 1
• Local maxima are dense in [0, ∞)
Proof.
• Write X_t = B_{c+t} − B_c and define
Y = max_{[c,d]} B_t − B_c = max_{[0,d−c]} X_t
which is independent of σ(B_s : s ≤ c) by the Markov property. Similarly define
Z = max_{[a,b]} B_t − B_c
which is σ(B_s : s ≤ c) measurable. Then it suffices to show that P(Y ≠ Z) = 1.
By the Markov property Y, Z are independent; moreover Y has a density (by the reflection principle), so letting ν(dy) = P(Y ∈ dy), µ(dz) = P(Z ∈ dz), for measurable ϕ : R² → R we have
E[ϕ(Y, Z)] = ∫∫ ϕ(y, z) ν(dy) µ(dz)
by independence. So let ϕ(y, z) = I_{y=z}:
P(Y = Z) = E[I_{Y=Z}] = ∫∫ I_{y=z} ν(dy) µ(dz) = ∫ ν({z}) µ(dz) = 0
since ν has no atoms.
• By contradiction suppose ∃t_1 ≠ t_2 ∈ [a, b] such that B_{t_1} = B_{t_2} = max_{[a,b]} B_t.
Then we can find c, d ∈ Q such that a ≤ t_1 < c < d < t_2 ≤ b.
But P(max_{[a,c]} B_t = max_{[d,b]} B_t) = 0 for any fixed a < c < d < b by the first part, and since there are only countably many such pairs of rationals we must have that
P(⋃_{c,d∈Q∩[a,b], c<d} {max_{[a,c]} B_t = max_{[d,b]} B_t}) = 0
• Choose an interval [a, a + n^{−1}] for some n ∈ N, a ∈ Q.
By the previous part P(∃! t* ∈ [a, a + n^{−1}] : B_{t*} = max_{[a,a+n^{−1}]} B_t) = 1, and (again by the first part, comparing with neighbouring intervals) this maximiser is a.s. not an endpoint, so it is a local maximum.
Any interval contains some such interval of the form [a, a + n^{−1}], and since there are countably many of these we indeed have that local maxima are dense.
Definition 2.5. Brownian Filtration:
For a Brownian motion B the Brownian filtration at t is defined as
F_t^B := σ(B_s : s ≤ t)
Definition 2.6. Germ Field:
For a Brownian motion B the germ field at t is defined as
F_{t+}^B := ⋂_{s>t} F_s^B
Corollary 2.3. An alternative version of the Markov property: for fixed T ∈ R_+, setting X_t := B_{T+t} − B_T, we have
σ(X_t : t ≥ 0) ⊥ F_{T+}^B
Corollary 2.4. F_{0+}^B is P-trivial.
Proof. For T = 0 we have that {B_t : t ≥ 0} ⊥ F_{0+}^B.
Since F_{0+}^B ⊆ σ(B_t : t ≥ 0), this gives F_{0+}^B ⊥ F_{0+}^B, which can only occur if F_{0+}^B is P-trivial: for A ∈ F_{0+}^B, P(A) = P(A ∩ A) = P(A)², so P(A) ∈ {0, 1}.
Theorem 2.2. σ({X_s}_{s≥0}) ⊥ F_{t+}^B where X_s := B_{t+s} − B_t.
Proof. Take A ∈ F_{t+}^B and B ∈ σ({X_s}_{s≥0}).
It suffices to choose events in π-systems generating these σ-fields, so choose
B = {X_{s_1} ∈ C_1, ..., X_{s_m} ∈ C_m}
Now it suffices to check that
E[e^{i∑_{k=1}^m λ_k X_{s_k}} e^{iθχ_A}] = E[e^{i∑_{k=1}^m λ_k X_{s_k}}] E[e^{iθχ_A}]
For ε > 0 let X_s^ε := B_{t+s+ε} − B_{t+ε}; then X^ε ⊥ F_{t+}^B for any ε > 0 (since F_{t+}^B ⊆ F_{t+ε}^B and the simple Markov property applies at time t + ε), so
E[e^{i∑_{k=1}^m λ_k X_{s_k}^ε} e^{iθχ_A}] = E[e^{i∑_{k=1}^m λ_k X_{s_k}^ε}] E[e^{iθχ_A}]
By the dominated convergence theorem and continuity of paths, letting ε → 0,
lim_{ε→0} E[e^{i∑_{k=1}^m λ_k X_{s_k}^ε}] E[e^{iθχ_A}] = E[e^{i∑_{k=1}^m λ_k X_{s_k}}] E[e^{iθχ_A}]
lim_{ε→0} E[e^{i∑_{k=1}^m λ_k X_{s_k}^ε} e^{iθχ_A}] = E[e^{i∑_{k=1}^m λ_k X_{s_k}} e^{iθχ_A}]
so indeed the statement holds.
3 Markov Transition Functions
Definition 3.1. Markov Transition Kernel:
p : (0, ∞) × R^d × B(R^d) → [0, 1] is a Markov transition kernel if
• A ↦ p_t(x, A) is a probability measure.
• x ↦ p_t(x, A) is measurable.
Definition 3.2. Markov Process:
A process {X_t}_{t≥0} on R^d with transition kernel p_t started from x is a Markov process if
P(⋂_{i=1}^n {X_{t_i} ∈ A_i}) = ∫_{A_1} ... ∫_{A_n} ∏_{i=1}^n p_{t_i − t_{i−1}}(x_{i−1}, dx_i)
where x_0 := x, t_0 := 0.
Corollary 3.1. From simple measure theory we have that for bounded measurable f : (R^d)^n → R and a Markov process X we get
E[f(X_{t_1}, ..., X_{t_n})] = ∫ ... ∫ f(x_1, ..., x_n) ∏_{i=1}^n p_{t_i − t_{i−1}}(x_{i−1}, dx_i)
Lemma 3.1. B_t^x := x + B_t is a Markov process started at x with kernel p_t(x, dy) = q_t(y − x) dy, where q_t(z) is the Gaussian density with mean zero and variance t.
Proof.
P(⋂_{i=1}^n {B_{t_i}^x ∈ dx_i}) = P(⋂_{i=1}^n {B_{t_i}^x − B_{t_{i−1}}^x ∈ d(x_i − x_{i−1})})
= ∏_{i=1}^n P(B_{t_i}^x − B_{t_{i−1}}^x ∈ d(x_i − x_{i−1}))        (by independence of increments)
= ∏_{i=1}^n q_{t_i − t_{i−1}}(x_i − x_{i−1}) dx_i
By integrating both sides over the A_i we have the required result.
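For this Gaussian kernel, the semigroup (Chapman–Kolmogorov) property p_{t+s}(x, dw) = ∫ p_t(x, dy) p_s(y, dw) can be verified directly by numerical integration; a sketch (not from the notes), with q as defined in the lemma:

```python
import numpy as np

def q(t, z):
    """Gaussian transition density with mean 0 and variance t."""
    return np.exp(-z**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

x, w, t, s = 0.5, 1.3, 0.4, 0.7
y = np.linspace(-15, 15, 20001)
dy = y[1] - y[0]

lhs = np.sum(q(t, y - x) * q(s, w - y)) * dy   # ∫ q_t(y−x) q_s(w−y) dy
rhs = q(t + s, w - x)
print(lhs, rhs)                                # agree to high precision
```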
Lemma 3.2. A process {X_t}_{t≥0} with kernel p is a Markov process iff ∀s < t we have
P(X_t ∈ A | F_s^X) = p_{t−s}(X_s, A)
Remark 3.1. For a probability space (Ω, F, P) we have that L²(Ω, F, P) := {X : Ω → R | E[X²] < ∞} is an inner product space with inner product ⟨X, Y⟩ = E[XY] and norm ||X||_{L²} := √(E[X²]).
Definition 3.3. Conditional Expectation:
For probability space (Ω, F, P) and sub-σ-field G ⊆ F we say that Y = E[X|G] if Y is G-measurable and ⟨X − Y, Z⟩ = 0 whenever Z ∈ L²(Ω, G, P).
Intuitively E[X|G] is the closest G-measurable function to X.
Lemma 3.3. For X ∈ L²(Ω, F, P) there exists a unique Y ∈ L²(Ω, G, P) s.t. ⟨X − Y, Z⟩ = 0 ∀Z ∈ L²(Ω, G, P).
Corollary 3.2. For X ∈ L²(Ω, F, P) and sub-σ-field G ⊆ F, TFAE:
• ⟨X − Y, Z⟩ = 0        ∀Z ∈ L²(Ω, G, P)
• E[(X − Y)Z] = 0        ∀Z ∈ L²(Ω, G, P)
• E[XZ] = E[YZ]        ∀Z ∈ L²(Ω, G, P)
• E[Xχ_A] = E[Yχ_A]        ∀A ∈ G
• ∫_A X(ω) dP(ω) = ∫_A Y(ω) dP(ω)        ∀A ∈ G
Lemma 3.4. Suppose X is G-measurable, Z ⊥ G and ϕ : R² → R is bounded measurable. Then:
E[ϕ(X, Z)|G] = E[ϕ(x, Z)]|_{x=X}
Proposition 3.1. If (Ω, F, P) is a probability space and G, H are sub-σ-algebras of F then:
• E[E[X|G]] = E[X]
• If X is G-measurable then E[X|G] = X a.s.
• E[a_1 X_1 + a_2 X_2 | G] = a_1 E[X_1|G] + a_2 E[X_2|G]
• If X ≥ 0 then E[X|G] ≥ 0 a.s.
• If X_n ≥ 0 is an increasing sequence of random variables converging to X a.s. then E[X_n|G] converges to E[X|G] a.s.
• If X_n is a sequence of positive random variables then E[lim inf_{n→∞} X_n | G] ≤ lim inf_{n→∞} E[X_n|G] a.s.
• If |X_n(ω)| ≤ |V(ω)| ∀n with E[V] < ∞ and X_n converges to X a.s. then E[X_n|G] converges to E[X|G]
• If c : R → R is convex and E[|c(X)|] < ∞ then E[c(X)|G] ≥ c(E[X|G])
• If H ⊆ G then E[E[X|G]|H] = E[X|H] a.s.
• If Z is G-measurable and bounded then E[ZX|G] = Z E[X|G]
• If H ⊥ σ(σ(X), G) then E[X|σ(G, H)] = E[X|G] a.s.
Lemma 3.5. X is a Markov process iff
E[f(X_{t_1}, ..., X_{t_n})] = ∫ ... ∫ f(x_{t_1}, ..., x_{t_n}) ∏_{i=1}^n p_{t_i − t_{i−1}}(x_{t_{i−1}}, dx_{t_i})
for any f bounded measurable.
Proof. By measure theory it suffices to show this for
f(X_{t_1}, ..., X_{t_n}) = ∏_{i=1}^n ϕ_i(X_{t_i})
for {ϕ_i}_{i=1}^n bounded measurable.
By induction:
E[∏_{i=1}^n ϕ_i(X_{t_i})] = E[E[∏_{i=1}^n ϕ_i(X_{t_i}) | F_{t_{n−1}}^X]]
= E[∏_{i=1}^{n−1} ϕ_i(X_{t_i}) E[ϕ_n(X_{t_n}) | F_{t_{n−1}}^X]]
= E[∏_{i=1}^{n−1} ϕ_i(X_{t_i}) ∫ ϕ_n(x_{t_n}) p_{t_n − t_{n−1}}(X_{t_{n−1}}, dx_{t_n})]
= ∫ ... ∫ ∏_{i=1}^n ϕ_i(x_{t_i}) p_{t_i − t_{i−1}}(x_{t_{i−1}}, dx_{t_i})
4 Strong Markov Property
Definition 4.1. Filtration:
Collection of σ-fields {Ft }t≥0 is a filtration on (Ω, F, P) if Fs ⊆ Ft ⊆ F for any s ≤ t
Definition 4.2. Stopping Time:
T : Ω → R_+ is a stopping time for filtration {F_t}_{t≥0} if {T ≤ t} ∈ F_t ∀t ∈ R_+
Definition 4.3. Hitting Time:
For A ⊆ R we write the hitting time of the Brownian motion B as
T_A := inf{t ≥ 0 : B_t ∈ A}
(for a single level a we write T_a := inf{t ≥ 0 : B_t = a}).
Lemma 4.1. If C is closed then {T_C ≤ t} ∈ F_t^B.
Proof.
{T_C ≤ t} = ⋂_{n=1}^∞ ⋃_{q∈Q∩[0,t]} {inf_{r∈C} |B_q − r| < 1/n} ∈ F_t^B
Lemma 4.2. For a stopping time T with respect to filtration {F_t}_{t≥0}, define
F_T := {A : A ∩ {T ≤ t} ∈ F_t ∀t ≥ 0}
Then:
• F_T is a σ-field
• If S ≤ T are stopping times with respect to {F_t}_{t≥0} then F_S ⊆ F_T
• T is F_T measurable.
Proof.
• ∅ ∩ {T ≤ t} = ∅ ∈ F_t and Ω ∩ {T ≤ t} = {T ≤ t} ∈ F_t.
A ∈ F_T =⇒ A^c ∩ {T ≤ t} = {T ≤ t} \ (A ∩ {T ≤ t}) ∈ F_t.
{A_i}_{i=1}^∞ ∈ F_T =⇒ (⋂_{i=1}^∞ A_i) ∩ {T ≤ t} = ⋂_{i=1}^∞ (A_i ∩ {T ≤ t}) ∈ F_t.
• Let A ∈ F_S. Since S ≤ T we have {T ≤ t} ⊆ {S ≤ t}, so
A ∩ {T ≤ t} = A ∩ {S ≤ t} ∩ {T ≤ t}
Now A ∩ {S ≤ t} ∈ F_t and {T ≤ t} ∈ F_t, and F_t is a σ-field, so A ∩ {T ≤ t} ∈ F_t, i.e. A ∈ F_T.
• It suffices to check that {T ≤ s} ∈ F_T ∀s ∈ R, by the π-systems lemma:
{T ≤ s} ∩ {T ≤ t} = {T ≤ min{s, t}} ∈ F_{min{s,t}} ⊆ F_t
Theorem 4.1. Strong Markov Property:
If B is a Brownian motion, F_t^B = σ({B_s}_{s≤t}) and T is a stopping time with respect to {F_t^B}_{t≥0} with T < ∞ a.s., then:
X_s := B_{T+s} − B_T is a Brownian motion independent of F_T^B.
Proof. We will split the proof into two cases:
• Suppose T takes only a discrete set of values {t_k}_{k=1}^∞, so Ω = ⋃_{k=1}^∞ {T = t_k}.
We want to show that X_t := B_{T+t} − B_T ⊥ F_T^B. Choose A ∈ F_T^B and let F be bounded measurable.
E[F({B_{T+t} − B_T}_{t≥0}) χ_A] = ∑_{k=1}^∞ E[χ_{A∩{T=t_k}} F({B_{t_k+t} − B_{t_k}}_{t≥0})]
But, by the Markov property, we have that F({B_{t_k+t} − B_{t_k}}_{t≥0}) ⊥ F_{t_k}^B, and
A ∩ {T = t_k} = (A ∩ {T ≤ t_k}) \ (A ∩ {T < t_k}) ∈ F_{t_k}^B
So χ_{A∩{T=t_k}} ⊥ F({B_{t_k+t} − B_{t_k}}_{t≥0}), hence we have that
E[F({B_{T+t} − B_T}_{t≥0}) χ_A] = ∑_{k=1}^∞ E[χ_{A∩{T=t_k}}] E[F({B_{t_k+t} − B_{t_k}}_{t≥0})]
= ∑_{k=1}^∞ E[χ_{A∩{T=t_k}}] E[F({B_t}_{t≥0})]
= E[χ_A] E[F({B_t}_{t≥0})]
So indeed the claim holds when T can only take a countable number of values.
• For general T we approximate T using discrete stopping times, so define
T_N := ∑_{j=1}^∞ (j/N) I_{T ∈ [(j−1)/N, j/N)}
which is the smallest multiple of 1/N strictly larger than T.
Notice that lim_{N→∞} T_N = T and that T_N > T for any N.
We need to check that T_N is a stopping time, so let t ∈ [(j−1)/N, j/N):
{T_N ≤ t} = {T_N ≤ (j−1)/N} = {T < (j−1)/N} = ⋃_{s∈Q∩[0,(j−1)/N)} {T ≤ s} ∈ F_{(j−1)/N}^B ⊆ F_t^B
So indeed T_N is a stopping time, hence from the previous case {B_{T_N+t} − B_{T_N}}_{t≥0} is a Brownian motion independent of F_{T_N}^B.
Moreover T_N > T, so F_T^B ⊆ F_{T_N}^B.
Let A ∈ F_T^B; then, using continuity of paths and the dominated convergence theorem,
E[e^{i∑_{k=1}^M θ_k (B_{T+s_k} − B_T)} χ_A] = lim_{N→∞} E[e^{i∑_{k=1}^M θ_k (B_{T_N+s_k} − B_{T_N})} χ_A]
= lim_{N→∞} E[e^{i∑_{k=1}^M θ_k (B_{T_N+s_k} − B_{T_N})}] P(A)
= E[e^{i∑_{k=1}^M θ_k (B_{T+s_k} − B_T)}] P(A)
So indeed we have that B_{T+s} − B_T is a Brownian motion independent of F_T^B.
Proposition 4.1. If 0 ≤ c ≤ b ≤ a then T_a − T_b ⊥ T_c and T_a − T_b has the same distribution as T_{a−b}.
Proof. X_t := B_{T_b+t} − b is a Brownian motion independent of F_{T_b}^B by the strong Markov property (note B_{T_b} = b).
Since T_c is measurable with respect to F_{T_c} ⊆ F_{T_b}, we have that T_c is independent of X, which is a measurable function of {B_t}_{t≥T_b}.
Moreover T_a − T_b = inf{t : X_t = a − b}, which clearly has the same distribution as T_{a−b}.
Definition 4.4. Isolated Point:
For a set A we say that t ∈ A is isolated if ∃ε > 0 such that (t − ε, t + ε) ∩ A = {t}.
Lemma 4.3. If A ⊆ R is closed, non-empty and has no isolated points then it is uncountable.
Proof. {0, 1}^N is uncountable; in particular the subset of sequences which do not end in an infinite run of zeros is still uncountable, and each element is distinct. We want to form an injection from this set into A.
Since A ≠ ∅ and has no isolated points we can choose t_0 ≠ t_1 ∈ A.
Write B_0 = B(t_0, ε_0) ∩ A, B_1 = B(t_1, ε_1) ∩ A, where ε_0, ε_1 are chosen small enough that B_0 ∩ B_1 = ∅.
Since A has no isolated points neither do B_0, B_1, so we can find t_{00} ≠ t_{01} ∈ B_0 and t_{10} ≠ t_{11} ∈ B_1.
We can then choose ε_{00}, ε_{01}, ε_{10}, ε_{11} small enough that B_{00} = B(t_{00}, ε_{00}) ∩ A, B_{01} = B(t_{01}, ε_{01}) ∩ A, B_{10} = B(t_{10}, ε_{10}) ∩ A, B_{11} = B(t_{11}, ε_{11}) ∩ A are disjoint.
Repeating this inductively, choosing radii ε → 0 (and closed balls, so the nested intersections are non-empty by completeness), each sequence a = (a_0, a_1, ...) is mapped to a unique point
t_a ∈ ⋂_{i=0}^∞ B_{a_0,...,a_i}
and distinct sequences give distinct points, so A is uncountable.
Proposition 4.2. Let Z := {t ≥ 0 : B_t = 0} and let L denote Lebesgue measure. Then
• L(Z) = 0 a.s.
• Z has no isolated points a.s.
• Z is uncountable a.s.
Proof.
• L(Z) = ∫_0^∞ χ_{t∈Z} dt = ∫_0^∞ χ_{B_t=0} dt, so
E[L(Z)] = ∫_0^∞ P(B_t = 0) dt = 0        (by Tonelli)
So indeed L(Z) = 0 a.s.
• Let τ_s := inf{t ≥ s : B_t = 0} and Ẑ = {τ_s : s ∈ Q_+}.
B_{τ_s+t} is a Brownian motion by the strong Markov property, so by the law of the iterated logarithm (at small times, via time inversion) it hits zero at times arbitrarily close to 0; hence τ_s cannot be isolated in Z.
Suppose τ ∈ Z \ Ẑ. Then ∃{s_n}_{n=1}^∞ ∈ Q converging to τ from the left, each with corresponding τ_{s_n} < τ (strictly, since τ ∉ Ẑ).
But s_n ≤ τ_{s_n} < τ, so we must have that τ_{s_n} converges to τ, and hence τ cannot be isolated.
• Z is closed by continuity of B, non-empty (0 ∈ Z) and, by the previous part, has no isolated points, hence Z is uncountable by Lemma 4.3.
Lemma 4.4. Arcsine Laws for Brownian Motion:
If B is a Brownian motion on the interval [0, 1] then
• Let M be the unique time at which B attains its maximum, B_M = sup_{t∈[0,1]} B_t. Then
P(M ∈ dt) = dt / (π√(t(1 − t)))
• Let T be the last time B attains the value zero, T = sup{t ∈ [0, 1] : B_t = 0}. Then
P(T ∈ dt) = dt / (π√(t(1 − t)))
• Let L be the time B spends above zero, L = ∫_0^1 χ_{B_s>0} ds. Then
P(L ∈ dt) = dt / (π√(t(1 − t)))
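The common arcsine distribution has CDF (2/π) arcsin(√t); a Monte Carlo sketch (not from the notes) for the occupation time L:

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 20_000, 500
dt = 1.0 / n_steps
B = np.cumsum(rng.normal(0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
L = np.mean(B > 0, axis=1)          # occupation time of (0, ∞) per path

for t in [0.1, 0.25, 0.5, 0.75, 0.9]:
    print(t, np.mean(L <= t), 2 / np.pi * np.arcsin(np.sqrt(t)))
```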
Theorem 4.2. Lévy's Theorem:
If B is a Brownian motion and S_t := sup_{s≤t} B_s then X_t := S_t − B_t defines a reflected Brownian motion, i.e. X has the same law as |B|.
5 Brownian Martingales
Definition 5.1. Martingale:
For a filtration {F_t}_{t≥0} on probability space (Ω, F, P) a process {M_t}_{t≥0} is called a martingale if:
• E[|M_t|] < ∞ ∀t
• {M_t}_{t≥0} is adapted to {F_t}_{t≥0}
• E[M_t|F_s] = M_s        ∀s ≤ t
Theorem 5.1. Optional Stopping Theorem (version 1):
If {M_t}_{t≥0} is a bounded martingale with continuous paths on {F_t}_{t≥0} and T a bounded stopping time, then
E[M_T] = E[M_0]
Proof. We shall split the proof into two cases:
• Case 1) ∃{t_i}_{i=1}^n ⊆ R_+ such that t_i < t_{i+1} ≤ K ∀i and T ∈ {t_i}_{i=1}^n. Using {T = t_k} ∈ F_{t_k} and the martingale property E[M_K | F_{t_k}] = M_{t_k},
E[M_T] = ∑_{k=1}^n E[M_{t_k} χ_{T=t_k}]
= ∑_{k=1}^n E[M_K χ_{T=t_k}]
= E[M_K]
= E[M_0]
• Case 2) For general T, define
T_N := ∑_{j=1}^∞ (j/N) χ_{T∈[(j−1)/N, j/N)}
which is a sequence of discrete bounded stopping times and therefore satisfies Case 1.
T_N → T as N → ∞ with T_N > T, so lim_{N→∞} M_{T_N} = M_T by continuity of paths.
By bounded convergence we have that:
E[M_T] = lim_{N→∞} E[M_{T_N}] = E[M_0]
Corollary 5.1. For a < 0 < b we have that
P(T_a < T_b) = b/(b − a)
where T_a = inf{t ≥ 0 : B_t = a}, T_b = inf{t ≥ 0 : B_t = b}.
Proof. B_t is a martingale and T := min{T_a, T_b} is a stopping time, so for fixed N, T ∧ N is a bounded stopping time, and the stopped process is bounded.
By the optional stopping theorem we have that
E[B_{T∧N}] = 0
|B_{T∧N}| ≤ max{|a|, |b|} for any N, so by the dominated convergence theorem we have that
E[B_T] = lim_{N→∞} E[B_{T∧N}] = 0
which gives that:
0 = E[B_T]
= a P(T_a < T_b) + b P(T_b < T_a)
= a P(T_a < T_b) + b (1 − P(T_a < T_b))
= b + P(T_a < T_b)(a − b)
so P(T_a < T_b) = b/(b − a).
Corollary 5.2. If T = min{T_a, T_b} for a < 0 < b then
E[T] = |ab|
Proof. First, B_t² − t is a martingale:
E[B_t² − t | F_s] = E[(B_t − B_s)² + 2B_s(B_t − B_s) + B_s² − t | F_s]
= (t − s) + 0 + B_s² − t
= B_s² − s
So by the optional stopping theorem
E[B_{T∧N}² − T ∧ N] = 0
|B_{T∧N}²| ≤ max{b², a²}, so by the dominated convergence theorem lim_{N→∞} E[B_{T∧N}²] = E[B_T²].
Furthermore by the monotone convergence theorem we have that lim_{N→∞} E[T ∧ N] = E[T].
So we get that
E[B_T²] − E[T] = lim_{N→∞} E[B_{T∧N}² − T ∧ N] = 0
E[T] = E[B_T²]
= b² P(T_b < T_a) + a² P(T_a < T_b)
= b² (−a)/(b − a) + a² b/(b − a)
= −ab = |ab|
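Both Corollary 5.1 and Corollary 5.2 can be checked by simulating paths until they exit (a, b) (a sketch, not from the notes; the positive time step introduces a small overshoot bias):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, dt, n_paths = -1.0, 2.0, 1e-3, 50_000
x = np.zeros(n_paths)               # current positions
T = np.zeros(n_paths)               # accumulated times
alive = np.ones(n_paths, dtype=bool)

while alive.any():
    x[alive] += rng.normal(0.0, np.sqrt(dt), size=alive.sum())
    T[alive] += dt
    alive &= (a < x) & (x < b)      # dead paths stay dead

print(np.mean(x <= a), b / (b - a))   # P(T_a < T_b) = 2/3
print(np.mean(T), abs(a * b))         # E[T] = |ab| = 2
```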
Corollary 5.3. If θ > 0 and a > 0 then
E[e^{−(θ²/2) T_a}] = e^{−θa}
Proof. e^{θB_t − (θ²/2)t} is a martingale and T_a ∧ N is a bounded stopping time, hence
E[e^{θB_{T_a∧N} − (θ²/2)(T_a∧N)}] = E[e^{θB_0}] = 1
Notice that e^{−(θ²/2)(T_a∧N)} ≤ 1 for any N and e^{θB_{T_a∧N}} ≤ e^{θa} (since B_t ≤ a for t ≤ T_a), so by the dominated convergence theorem (and T_a < ∞ a.s.) we have that
E[e^{θB_{T_a} − (θ²/2) T_a}] = lim_{N→∞} E[e^{θB_{T_a∧N} − (θ²/2)(T_a∧N)}] = 1
and since B_{T_a} = a we indeed have that
E[e^{−(θ²/2) T_a}] = e^{−θa}
Corollary 5.4. Let c, a > 0, X_t := B_t − ct and T_a := inf{t ≥ 0 : X_t = a}. Then
P(T_a < ∞) = e^{−2ca}
Proof. For θ > 0 we have that e^{θB_t − (θ²/2)t} is a martingale. Moreover
e^{θB_t − (θ²/2)t} = e^{θ(X_t + ct) − (θ²/2)t} = e^{θX_t + (θc − θ²/2)t}
is a martingale on F_t^X.
T_a is a stopping time, so T_a ∧ N is a bounded stopping time.
e^{θX_{T_a∧N}} ≤ e^{θa} since θ > 0 and X_{T_a∧N} ≤ a; moreover e^{(θc − θ²/2)(T_a∧N)} ≤ 1 so long as θc ≤ θ²/2. Taking θ = 2c (so that θc − θ²/2 = 0) we have, since X_t → −∞ a.s. on {T_a = ∞} (strong law of large numbers),
lim_{N→∞} e^{θX_{T_a∧N} + (θc − θ²/2)(T_a∧N)} = e^{θa} χ_{T_a<∞}
So by the dominated convergence theorem we have that
E[e^{θa} χ_{T_a<∞}] = lim_{N→∞} E[e^{θX_{T_a∧N} + (θc − θ²/2)(T_a∧N)}] = E[e^{θX_0}] = 1
and hence
P(T_a < ∞) = E[χ_{T_a<∞}] = e^{−θa} = e^{−2ca}
Proposition 5.1. Lots of Brownian Martingales:
If B is a d-dimensional Brownian motion, f ∈ C^{2,1} and f, ∂f/∂t, ∂f/∂x_i, ∂²f/∂x_i∂x_j are of at most exponential growth, then
M_t := f(B_t^x, t) − ∫_0^t (∂f/∂s (B_s^x, s) + ½ Δf(B_s^x, s)) ds
is a martingale.
Proof.
• Integrability: f has at most exponential growth, hence ∃C_0, C_1 s.t.
|f(B_t^x, t)| ≤ C_0 e^{C_1|B_t^x|}
So we have that (written in one dimension for notational ease; coordinates are handled identically)
E[|f(B_t^x, t)|] ≤ C_0 E[e^{C_1|B_t^x|}]
≤ C_0 E[e^{C_1 B_t^x} + e^{−C_1 B_t^x}]
= C_0 (e^{C_1 x + C_1²t/2} + e^{−C_1 x + C_1²t/2})
< ∞
Similarly ∂f/∂t and Δf are of at most exponential growth, so ∃C_3, C_4 s.t.
|∂f/∂s (B_s^x, s) + ½ Δf(B_s^x, s)| ≤ C_3 e^{C_4|B_s^x|}
In particular we have that
E[|∫_0^t (∂f/∂s + ½ Δf)(B_s^x, s) ds|] ≤ E[∫_0^t C_3 e^{C_4|B_s^x|} ds]
≤ ∫_0^t C_3 (e^{C_4 x + C_4²s/2} + e^{−C_4 x + C_4²s/2}) ds
≤ 2t C_3 e^{C_4|x| + C_4²t/2}
< ∞
So indeed E[|M_t|] < ∞.
• Martingale property: WLOG let x = 0; the general case is a shift of this one, and since we have at most exponential growth this is merely a change of constants. Write
Lf := ∂f/∂t + ½ Δf
Then, with X_r := B_{s+r} − B_s ⊥ F_s^B,
E[M_t | F_s^B] = E[f(B_t, t) − ∫_0^t Lf(B_r, r) dr | F_s^B]
= M_s + E[f(B_t, t) − f(B_s, s) − ∫_s^t Lf(B_r, r) dr | F_s^B]
= M_s + E[f(z + X_{t−s}, t) − f(z, s) − ∫_0^{t−s} Lf(z + X_r, s + r) dr]|_{z=B_s}
The result follows if we can show that the argument of the expectation is null; in particular, defining g(y, u) := f(z + y, u + s), we want to show that
E[g(X_u, u)] − g(0, 0) = E[∫_0^u Lg(X_r, r) dr]
By Fubini (valid since E[|M_t|] < ∞) this is equivalent to showing
E[g(X_u, u)] − g(0, 0) = ∫_0^u E[Lg(X_r, r)] dr
and differentiating both sides in u shows this is equivalent to
d/du E[g(X_u, u)] = E[Lg(X_u, u)]
So letting
φ(x, u) = e^{−|x|²/(2u)} / (2πu)^{d/2}
be the density of X_u, and recalling that φ satisfies the heat equation ∂φ/∂u = ½ Δφ, we get
d/du E[g(X_u, u)] = d/du ∫_{R^d} g(x, u) φ(x, u) dx
= ∫_{R^d} (φ ∂g/∂u + g ∂φ/∂u)(x, u) dx
= ∫_{R^d} (φ ∂g/∂u + g ½ Δφ)(x, u) dx
= ∫_{R^d} φ(x, u) (∂g/∂u + ½ Δg)(x, u) dx        (integrating by parts twice; boundary terms vanish since the exponential growth of g is beaten by the Gaussian decay of φ)
= E[Lg(X_u, u)]
In particular, if f is such that Δf = 0 or ∂f/∂s + ½ Δf = 0, then f(B_t^x, t) is a martingale.
Theorem 5.2. Dirichlet Problem:
Suppose D ⊂ R^d is open and bounded and u ∈ C²(R^d) solves
Δu(x) = 0        x ∈ D
u(x) = f(x)        x ∈ ∂D
for some boundary temperature f. Then
u(x) = E[f(B_T^x)]
where T = inf{t ≥ 0 : B_t^x ∈ ∂D}.
Proof. By lots of Brownian martingales we have that
u(B_t^x) − ∫_0^t ½ Δu(B_s^x) ds
is a martingale, since we can modify u outside D to ensure the derivatives are of at most exponential growth.
By the optional stopping theorem we have that
u(x) = E[u(B_{T∧N}^x) − ∫_0^{T∧N} ½ Δu(B_s^x) ds]
By the first condition on u (Δu = 0 on D, and B_s^x ∈ D for s < T) we have that
∫_0^{T∧N} ½ Δu(B_s^x) ds = 0
Moreover u is bounded on the closure of D, since that closure is compact and u is continuous. By DCT (and T < ∞ a.s.) we have that
u(x) = lim_{N→∞} E[u(B_{T∧N}^x)] = E[f(B_T^x)]
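The representation u(x) = E[f(B_T^x)] suggests a "walk on spheres" Monte Carlo solver: since the exit point of Brownian motion from a ball centred at its starting point is uniform on the bounding sphere (by symmetry; compare the sphere-averaging property below), one can jump straight to such an exit point and repeat until close to ∂D. A sketch (names and parameters are our own) for the unit disk, with the harmonic boundary data f(x, y) = x² − y², whose solution inside is u = f itself:

```python
import numpy as np

def walk_on_spheres(x0, n_paths=20_000, eps=1e-4, rng=None):
    """Estimate u(x0) = E[f(B_T^x0)] for the Dirichlet problem on the
    unit disk by jumping to uniform points on maximal inscribed circles."""
    rng = np.random.default_rng() if rng is None else rng
    f = lambda p: p[:, 0]**2 - p[:, 1]**2          # boundary data (harmonic)
    p = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    dist = 1.0 - np.linalg.norm(p, axis=1)         # distance to the unit circle
    while (active := dist > eps).any():
        theta = rng.uniform(0.0, 2 * np.pi, size=active.sum())
        step = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        p[active] += dist[active, None] * step
        dist = 1.0 - np.linalg.norm(p, axis=1)
    return f(p).mean()

x0 = (0.3, 0.4)
print(walk_on_spheres(x0, rng=np.random.default_rng(8)),
      x0[0]**2 - x0[1]**2)   # both ≈ -0.07
```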
Corollary 5.5. If B is a d-dimensional Brownian motion for d ≥ 3 then
P(T_a < T_b) = (1/|x|^{d−2} − 1/b^{d−2}) / (1/a^{d−2} − 1/b^{d−2})
where T_a = inf{t ≥ 0 : |B_t^x| = a}, T_b = inf{t ≥ 0 : |B_t^x| = b} for 0 < a < |x| < b.
Proof. Let u(x) = |x|^{−(d−2)}; then Δu(x) = 0 whenever x ∈ R^d \ {0}.
Write D = {x ∈ R^d : a < |x| < b} and consider the Dirichlet problem on D with T = T_a ∧ T_b.
This gives us that:
u(x) = E[1/|B_T^x|^{d−2}]
= (1/a^{d−2}) P(T_a < T_b) + (1/b^{d−2}) P(T_b < T_a)
= (1/a^{d−2}) P(T_a < T_b) + (1/b^{d−2})(1 − P(T_a < T_b))
By the definition of u(x) we therefore have, by rearranging,
P(T_a < T_b) = (1/|x|^{d−2} − 1/b^{d−2}) / (1/a^{d−2} − 1/b^{d−2})
Corollary 5.6. For d ≥ 3 we have that (letting b → ∞ in Corollary 5.5)
P(T_a < ∞) = (a/|x|)^{d−2}
Corollary 5.7. For d ≥ 3 we have that |B_t| → ∞ a.s.
Proof. If |B_t| ↛ ∞ then ∃K < ∞ and times {t_N}_{N=1}^∞ ↑ ∞ such that |B_{t_N}| ≤ K.
Let T_N = inf{t ≥ 0 : |B_t| ≥ N}; then by the law of the iterated logarithm T_N < ∞ a.s.
If we define X_t^{(N)} := B_{T_N+t} − B_{T_N}, which is a Brownian motion by the strong Markov property, then by the previous corollary (started from modulus N)
P(B enters B(0, K) after time T_N) = (K/N)^{d−2}
so
P(⋂_{N≥K} {B enters B(0, K) after time T_N}) ≤ inf_{N≥K} (K/N)^{d−2} = 0
In particular
P(⋃_{K=1}^∞ ⋂_{N≥K} {B enters B(0, K) after time T_N}) = 0
and the event {|B_t| ↛ ∞} is contained in this union, so indeed |B_t| → ∞ a.s.
Corollary 5.8. If B is a 2-dimensional Brownian motion then
P(T_a < T_b) = (log(b) − log(|x|)) / (log(b) − log(a))
where T_a = inf{t ≥ 0 : |B_t^x| = a}, T_b = inf{t ≥ 0 : |B_t^x| = b} for 0 < a < |x| < b.
Proof. Let u(x) = log(|x|); then Δu(x) = 0 whenever x ∈ R² \ {0}.
Write D = {x ∈ R² : a < |x| < b} and consider the Dirichlet problem on D with T = T_a ∧ T_b.
This gives us that:
u(x) = E[log(|B_T^x|)]
= log(a) P(T_a < T_b) + log(b) P(T_b < T_a)
= log(a) P(T_a < T_b) + log(b)(1 − P(T_a < T_b))
By the definition of u(x) we therefore have, by rearranging,
P(T_a < T_b) = (log(b) − log(|x|)) / (log(b) − log(a))
Corollary 5.9. A 2-dimensional Brownian motion hits any non-empty open ball with probability 1, but the probability that it hits a specific point is 0.
Furthermore the 2-dimensional Lebesgue measure of {B_t}_{t≥0} is 0, but the path is dense in R².
Theorem 5.3. Poisson Problem:
Suppose D ⊂ R^d is open and bounded and u ∈ C²(R^d) solves
½ Δu(x) = −g(x)        x ∈ D
u(x) = 0        x ∈ ∂D
for some interior temperature function g. Then
u(x) = E[∫_0^T g(B_s^x) ds]
where T = inf{t ≥ 0 : B_t^x ∈ ∂D}.
Proof. By modifying u outside D we can satisfy the lots of Brownian martingales proposition with
u(B_t^x) − ∫_0^t ½ Δu(B_s^x) ds
which is therefore a martingale.
By the optional stopping theorem we have that
u(x) = E[u(B_{T∧N}^x) − ∫_0^{T∧N} ½ Δu(B_s^x) ds]
= E[u(B_{T∧N}^x)] + E[∫_0^{T∧N} g(B_s^x) ds]
u is bounded on the closure of D, hence by DCT we get that
lim_{N→∞} E[u(B_{T∧N}^x)] = E[u(B_T^x)] = 0
since u(x) = 0 on the boundary.
Similarly
∫_0^{T∧N} g(B_s^x) ds ≤ ||g||_∞ T
and E[T] < ∞, so DCT gives us that
lim_{N→∞} E[∫_0^{T∧N} g(B_s^x) ds] = E[∫_0^T g(B_s^x) ds]
So indeed we get the desired result.
We can extend the Dirichlet and Poisson problems to hold for u ∈ C²(D) ∩ C(D̄) by using the case we have proven on D_ε = {x ∈ D : d(x, D^c) > ε} and taking the limit as ε → 0.
Theorem 5.4. If u ∈ C^{1,2}([0, ∞) × R^d) is of at most exponential growth and is a solution to
∂u/∂t = ½ Δu        t > 0, x ∈ R^d
u(0, x) = f(x)        x ∈ R^d
then
u(t, x) = E[f(B_t^x)] = ∫_{R^d} f(y) e^{−|x−y|²/(2t)} / (2πt)^{d/2} dy
Proof. If we fix t > 0 then
u(B_s^x, t − s) − ∫_0^s (−∂u/∂t + ½ Δu)(B_r^x, t − r) dr
is a martingale in s by lots of Brownian martingales; moreover since ∂u/∂t = ½ Δu the integral term is null, so u(B_s^x, t − s) is a martingale for s ∈ [0, t].
By the optional stopping theorem (at the bounded stopping time s = t) we have that
u(t, x) = E[u(B_t^x, 0)] = E[f(B_t^x)]
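The probabilistic representation gives a one-line Monte Carlo solver for the heat equation. For f(x) = cos(x) the exact solution is u(t, x) = e^{−t/2} cos(x), which makes a convenient test (a sketch, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(6)

def heat_mc(t, x, f=np.cos, n_samples=1_000_000):
    """u(t, x) = E[f(B_t^x)] solves du/dt = (1/2) d^2u/dx^2, u(0, .) = f."""
    return f(x + np.sqrt(t) * rng.normal(size=n_samples)).mean()

t, x = 0.8, 1.1
print(heat_mc(t, x), np.exp(-t / 2) * np.cos(x))   # both ≈ 0.304
```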
Theorem 5.5. For D ⊆ R^d, if u ∈ C^{1,2}([0, ∞) × D) is of at most exponential growth and is a solution to
∂u/∂t = ½ Δu        t > 0, x ∈ D
u(0, x) = f(x)        x ∈ D
u(t, x) = g(x)        x ∈ ∂D, t > 0
then
u(t, x) = E[g(B_T^x) I_{T≤t} + f(B_t^x) I_{T>t}]
Proof. If we fix t > 0 then, as before,
u(B_s^x, t − s) − ∫_0^s (−∂u/∂t + ½ Δu)(B_r^x, t − r) dr
is a martingale by lots of Brownian martingales; moreover since ∂u/∂t = ½ Δu the integral term is null, so u(B_s^x, t − s) is a martingale for s ∈ [0, t].
T ∧ t is a bounded stopping time, so the optional stopping theorem gives us that
u(t, x) = E[u(B_{T∧t}^x, t − (T ∧ t))] = E[χ_{T≤t} g(B_T^x) + χ_{T>t} f(B_t^x)]
Definition 5.2. Convex:
φ : R → R is convex if
φ(x) = sup{L(x) : L ≤ φ, L affine}
Proposition 5.2. Jensen's Inequality:
If E[|X|] < ∞ and φ : R → R is convex then both
E[φ(X)] ≥ φ(E[X])
E[φ(X)|F] ≥ φ(E[X|F])
hold a.s.
Corollary 5.10. If {M_t}_{t≥0} is a martingale, φ : R → R is convex and E[|φ(M_t)|] < ∞, then for s ≤ t
E[φ(M_t)|F_s] ≥ φ(E[M_t|F_s]) = φ(M_s)
i.e. φ(M_t) is a submartingale.
Lemma 5.1. Fatou:
If {X_n}_{n=1}^∞ is a sequence of positive random variables then
E[lim inf_n X_n] ≤ lim inf_n E[X_n]
Lemma 5.2. If {M_t}_{t≥0} is a martingale with continuous paths, φ : R → R_+ is convex with E[φ(M_K)] < ∞, and T ≤ K is a bounded stopping time, then
E[φ(M_T)] ≤ E[φ(M_K)]
Proof. We shall split the proof into the discrete and general cases:
• Case 1)
Suppose T ∈ {t_r}_{r=1}^m, the discrete times T can take, ordered such that t_r < t_{r+1} ≤ K. Using Corollary 5.10 (φ(M) is a submartingale) and {T = t_r} ∈ F_{t_r},
E[φ(M_T)] = ∑_{r=1}^m E[φ(M_{t_r}) χ_{T=t_r}]
≤ ∑_{r=1}^m E[φ(M_K) χ_{T=t_r}]
= E[φ(M_K)]
• Case 2)
Suppose T is a general bounded stopping time; then find a sequence of discrete bounded stopping times {T_n}_{n=1}^∞ such that T_n ≥ T_{n+1} ≥ T and lim_{n→∞} T_n = T.
From Case 1 we have that for any n
E[φ(M_{T_n})] ≤ E[φ(M_K)]
and by continuity of paths, continuity of φ and Fatou's lemma we have that
E[φ(M_T)] = E[lim inf_n φ(M_{T_n})] ≤ lim inf_n E[φ(M_{T_n})] ≤ E[φ(M_K)]
Theorem 5.6. Optional Stopping Theorem (version 2):
If {M_t}_{t≥0} is a martingale on {F_t}_{t≥0} with continuous paths and T a bounded stopping time then
E[M_T] = E[M_0]
Proof. From version 1 we have that the statement holds when {M_t}_{t≥0} is bounded or when T only takes discrete values.
Write (x)_+ := max{0, x}, which is convex, and note that
x = (x − L)_+ − (−L − x)_+ + max{min{x, L}, −L}
In particular M_T = (M_T − L)_+ − (−L − M_T)_+ + max{min{M_T, L}, −L}.
Find a sequence of discrete bounded stopping times {T_n}_{n=1}^∞ such that T_n ≥ T_{n+1} ≥ T, T_n ≤ K and lim_{n→∞} T_n = T.
Fix ε > 0 and choose L sufficiently large such that (using Lemma 5.2 with the convex positive functions (x − L)_+ and (−L − x)_+, for T_n and for T alike)
E[(M_{T_n} − L)_+] ≤ E[(M_K − L)_+] < ε
and similarly
E[(−L − M_{T_n})_+] ≤ E[(−L − M_K)_+] < ε
We can then choose N sufficiently large such that
|E[max{min{M_{T_n}, L}, −L}] − E[max{min{M_T, L}, −L}]| ≤ ε
for any n ≥ N, since max{min{·, L}, −L} is bounded and continuous and M_{T_n} → M_T a.s. Moreover
E[max{min{M_T, L}, −L}] = E[M_T] − E[(M_T − L)_+] + E[(−L − M_T)_+]
so
|E[max{min{M_{T_n}, L}, −L}] − E[M_T]| ≤ 3ε
Since T_N is discrete, from version 1 we have that
E[M_0] = E[M_{T_N}] = E[(M_{T_N} − L)_+] − E[(−L − M_{T_N})_+] + E[max{min{M_{T_N}, L}, −L}]
So
|E[M_0] − E[M_T]| ≤ 5ε
Since ε > 0 was chosen arbitrarily we indeed have that the statement holds.
Definition 5.3. Regular:
y ∈ ∂D is regular for u : D → R and f : ∂D → R if for any sequence {x_n}_{n=1}^∞ ∈ D such that lim_{n→∞} x_n = y we have that lim_{n→∞} u(x_n) = f(y).
Definition 5.4. Ball Averaging:
u : D → R satisfies ball averaging if for any x ∈ D and any ε > 0 s.t. B(x, ε) ⊂ D we have that
u(x) = (1/V_{B(x,ε)}) ∫_{B(x,ε)} u(y) dy
where V_{B(x,ε)} is the volume of the ball B(x, ε).
Definition 5.5. Sphere Averaging:
u : D → R satisfies sphere averaging if for any x ∈ D and any ε > 0 s.t. B(x, ε) ⊂ D we have that
u(x) = (1/S_{B(x,ε)}) ∫_{∂B(x,ε)} u(y) dy
where S_{B(x,ε)} is the surface area of the sphere ∂B(x, ε).
Lemma 5.3. If u solves the Dirichlet problem then it satisfies sphere averaging.
Proof. Let S_ε := inf{t ≥ 0 : B_t^x ∈ ∂B(x, ε)} be the first time the Brownian motion leaves the ball B(x, ε); then X_t := B_{S_ε+t}^x − B_{S_ε}^x is a Brownian motion independent of F_{S_ε} by the strong Markov property. Furthermore let T' := inf{t > 0 : X_t + B_{S_ε}^x ∈ ∂D} be the time between leaving the ball for the first time and leaving D.
We then have that
u(x) = E[f(B_T^x)]
= E[f(B_{S_ε}^x + X_{T'})]
= E[E[f(B_{S_ε}^x + X_{T'}) | F_{S_ε}]]
= E[E[f(z + X_{T'})]|_{z=B_{S_ε}^x}]
= E[u(B_{S_ε}^x)]
which is the sphere average, since B_{S_ε}^x is uniformly distributed on ∂B(x, ε) by symmetry.
Lemma 5.4. If u(x) := E[f(B_T^x)] ∈ C^∞(D) with Δu = 0 on D and every y ∈ ∂D is regular, then u solves the Dirichlet problem.
Lemma 5.5. If there exists a cone C ⊂ D^c with vertex y ∈ ∂D then y is regular.
6 Donsker's Theorem
For {Z_i}_{i=1}^∞ i.i.d. random variables with E[Z_i] = 0, Var[Z_i] = 1 we denote by
S_n := ∑_{i=1}^n Z_i
the discrete time Markov chain representing the position of a particle at time n, and by
X_t^{(N)} := S_{Nt}/√N
(defined at the grid times t = k/N and interpolated in between) the rescaled version of S_n.
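As an illustration of what the theorem below asserts (a sketch, not from the notes): a functional of the rescaled walk, here the running maximum of a simple ±1 walk, should match the corresponding Brownian quantity, which by the reflection principle is P(sup_{t≤1} B_t ≥ a) = 2(1 − Φ(a)) = erfc(a/√2).

```python
import math
import numpy as np

rng = np.random.default_rng(7)
N, n_paths, a = 1000, 10_000, 1.0
steps = rng.choice([-1.0, 1.0], size=(n_paths, N))   # Z_i: mean 0, variance 1
X = np.cumsum(steps, axis=1) / math.sqrt(N)          # X^{(N)} at times k/N

walk = np.mean(X.max(axis=1) >= a)                   # P(max_k X_{k/N} >= a)
bm = math.erfc(a / math.sqrt(2))                     # 2(1 - Phi(a)) ≈ 0.3173
print(walk, bm)
```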
Definition 6.1. E-Valued Random Variable:
If (Ω, F, P) is a probability space, (E, d) a metric space and X : Ω → E is measurable, then X is called an E-valued random variable.
Lemma 6.1. Let B(C[0,1]) denote the σ-algebra generated by Borel sets on C[0,1]. If F_0 is the π-system formed by the sets {f ∈ C[0,1] : ⋂_{i=1}^n {f(t_i) ∈ O_i}} for open sets {O_i}_{i=1}^n and times {t_i}_{i=1}^n, then
σ(F_0) = B(C[0,1])
Definition 6.2. Convergence In Distribution:
If {X_n}_{n=1}^∞, X are random variables then X_n →d X if
lim_{n→∞} E[F(X_n)] = E[F(X)]        ∀F ∈ C_b.
Theorem 6.1. Continuous Mapping Theorem:
If X_n →d X on metric space (E, d) and G : (E, d) → (Ẽ, d̃) is continuous then
G(X_n) →d G(X)        on (Ẽ, d̃)
Proof. Let F : Ẽ → R be continuous and bounded; then F ∘ G is continuous and bounded, hence
lim_{n→∞} E[F(G(X_n))] = E[F(G(X))]
so indeed
G(X_n) →d G(X)
We denote Disc(F) := {x ∈ E : F is discontinuous at x}.
Theorem 6.2. Extended Continuous Mapping Principle:
If X_n →d X on (E, d) and F : E → R is bounded measurable s.t. P(X ∈ Disc(F)) = 0 then
F(X_n) →d F(X)
Lemma 6.2. Skorokhod:
If Z is a random variable with E[Z] = 0 and E[Z²] < ∞ then there exists a stopping time T < ∞ a.s. such that B_T has the same distribution as Z and E[T] = E[Z²].
Proof. Firstly suppose that Z only takes values in {a, b} where a < 0 ≤ b.
Then letting T = T_{a,b} := inf{t ≥ 0 : B_t ∈ {a, b}} and using Corollaries 5.1 and 5.2 we have that
P(B_T = a) = b/(b − a),        P(B_T = b) = −a/(b − a),        E[T] = −ab
and E[Z] = 0 forces P(Z = a), P(Z = b) to take exactly these values, so T satisfies the requirements of the lemma.
For general Z let α < 0 ≤ β be random variables independent of B; the aim is to use T_{α,β}, choosing an appropriate joint law ν(da, db) = P(α ∈ da, β ∈ db) so that B_{T_{α,β}} matches Z.
Write µ_+(dz) = P(Z ∈ dz) for z ≥ 0 and µ_−(dz) = P(Z ∈ dz) for z < 0.
For z ≥ 0 we want that
µ_+(dz) = P(B_{T_{α,β}} ∈ dz)
We have that
P(B_{T_{α,β}} ∈ dz) = E[P(B_{T_{α,β}} ∈ dz | σ(α, β))]
= E[(−α/(β − α)) χ_{β∈dz}]        (from the two-point case)
= ∫∫ (−a/(b − a)) χ_{b∈dz} ν(da, db)
= ∫ (−a/(z − a)) ν(da, dz)
So choose
ν(da, db) = (b − a) µ_+(db) π(da)
for a measure π on (−∞, 0) with ∫ (−a) π(da) = 1, which gives us that
P(B_{T_{α,β}} ∈ dz) = ∫ (−a) µ_+(dz) π(da) = µ_+(dz)
as required.
For z < 0 we want that
µ_−(dz) = P(B_{T_{α,β}} ∈ dz)
We have that
P(B_{T_{α,β}} ∈ dz) = E[P(B_{T_{α,β}} ∈ dz | σ(α, β))]
= E[(β/(β − α)) χ_{α∈dz}]        (from the two-point case)
= ∫∫ (b/(b − a)) χ_{a∈dz} ν(da, db)
= ∫ (b/(b − z)) ν(dz, db)
= (∫ b µ_+(db)) π(dz)
By choosing
π(dz) = µ_−(dz) / ∫ x µ_+(dx)
we have that P(B_{T_{α,β}} ∈ dz) = µ_−(dz) as required, hence we have the joint law
ν(da, db) = (b − a) µ_+(db) µ_−(da) / ∫ x µ_+(dx)
By construction we have that P(B_{T_{α,β}} ∈ dz) = P(Z ∈ dz), so it remains to show that:
1. ∫ (−a) π(da) = 1 (the normalization used above);
2. E[T_{α,β}] = E[Z²];
3. ∫∫ ν(da, db) = 1.
These hold as follows:
1. 0 = E[Z] = ∫ x µ_+(dx) + ∫ x µ_−(dx), hence
∫ x µ_+(dx) = ∫ (−x) µ_−(dx)
So indeed
∫ (−a) π(da) = ∫ (−a) µ_−(da) / ∫ b µ_+(db) = 1
2. Notice that
E[Z²] = ∫ x² µ_+(dx) + ∫ x² µ_−(dx)
So, using E[T_{a,b}] = −ab in the two-point case, we have that
E[T_{α,β}] = E[−αβ]
= ∫∫ (−ab) ν(da, db)
= ∫∫ (−ab)(b − a) µ_+(db) µ_−(da) / ∫ x µ_+(dx)
= [∫∫ (−a) b² µ_+(db) µ_−(da) + ∫∫ a² b µ_+(db) µ_−(da)] / ∫ x µ_+(dx)
= [∫ (−a) µ_−(da) ∫ b² µ_+(db) + ∫ a² µ_−(da) ∫ b µ_+(db)] / ∫ x µ_+(dx)
= ∫ b² µ_+(db) + ∫ a² µ_−(da)        (using ∫ (−a) µ_−(da) = ∫ x µ_+(dx))
= E[Z²]
3. Similarly
∫∫ ν(da, db) = ∫∫ (b − a) µ_+(db) µ_−(da) / ∫ x µ_+(dx)
= [∫ µ_−(da) ∫ b µ_+(db) + ∫ (−a) µ_−(da) ∫ µ_+(db)] / ∫ x µ_+(dx)
= ∫ µ_−(da) + ∫ µ_+(db)
= 1
Theorem 6.3. Donsker's Theorem:
If {Z_i}_{i=1}^∞ are i.i.d. random variables with E[Z_i] = 0, Var[Z_i] = 1,
S_n := ∑_{i=1}^n Z_i
is the discrete time Markov chain representing the position of a particle at time n, and
X_t^{(N)} := S_{Nt}/√N
is the rescaled version of S_n, then X^{(N)} converges in distribution to a Brownian motion as N → ∞.
Proof. By Skorokhod we can find random variables α, β such that E[T_{α,β}] = 1 and B_{T_{α,β}} is equal to Z_1 in distribution.
Take i.i.d. copies {(α_i, β_i)}_{i=1}^∞ independent of B, then let T_1 = inf{t ≥ 0 : B_t ∈ {α_1, β_1}} and
T_{K+1} = inf{t ≥ T_K : B_t − B_{T_K} ∈ {α_{K+1}, β_{K+1}}}
which are stopping times, so that (B_{T_K})_{K≥1} has the same law as (S_K)_{K≥1}.
We can then let
B_t^{(N)} := B_{Nt}/√N
which is a Brownian motion by the scaling lemma. Furthermore let Ω_{N,ε} = {||X^{(N)} − B^{(N)}||_∞ > ε}; then we want to show that
lim_{N→∞} P(Ω_{N,ε}) = 0
since, given this, for a fixed uniformly continuous bounded function F we have that
|E[F(X^{(N)})] − E[F(B^{(N)})]| ≤ E[|F(X^{(N)}) − F(B^{(N)})|]
≤ 2||F||_∞ P(Ω_{N,ε}) + E[|F(X^{(N)}) − F(B^{(N)})| χ_{Ω_{N,ε}^c}]
So for fixed η > 0, by uniform continuity we can choose ε > 0 small enough such that
E[|F(X^{(N)}) − F(B^{(N)})| χ_{Ω_{N,ε}^c}] ≤ η
and N large enough such that
2||F||_∞ P(Ω_{N,ε}) ≤ η
So indeed we shall have convergence in distribution (uniformly continuous F suffice by Theorem 6.4).
Hence it remains to show that P(Ω_{N,ε}) → 0, which breaks down into two parts:
1. Since
X_{K/N}^{(N)} = S_K/√N = B_{T_K}/√N = B_{T_K/N}^{(N)}
we need that B_{T_K/N}^{(N)} ≈ B_{K/N}^{(N)}, i.e. that T_K ≈ K.
2. We also require that X_t^{(N)} − B_t^{(N)} is small on the intervals t ∈ [K/N, (K+1)/N).
These indeed hold as follows:
1. T_1, T_2 − T_1, T_3 − T_2, ... are i.i.d. with mean 1, so by the strong law of large numbers T_N/N converges to 1 almost surely.
Furthermore max_{K≤N} |T_K − K|/N converges to 0 almost surely as N → ∞, so writing
Ω'_{N,δ} = {max_{K=1,...,N} |T_K − K|/N ≥ δ}
gives us that lim_{N→∞} P(Ω'_{N,δ}) = 0.
2. Suppose ||X^{(N)} − B^{(N)}||_∞ > ε; then ∃t ∈ [0, 1] s.t. |X_t^{(N)} − B_t^{(N)}| ≥ ε.
WLOG we can let K ≤ Nt < K + 1; then either
|B_t^{(N)} − B_{T_K/N}^{(N)}| ≥ ε        or        |B_t^{(N)} − B_{T_{K+1}/N}^{(N)}| ≥ ε
and on the complement of Ω'_{N,δ} the times T_K/N, T_{K+1}/N are within δ + N^{−1} of t. So we have that
P(||X^{(N)} − B^{(N)}||_∞ > ε) ≤ P(Ω'_{N,δ}) + P(|B_s^{(N)} − B_t^{(N)}| > ε for some |s − t| ≤ δ + N^{−1})
But B^{(N)} is a Brownian motion, so for N sufficiently large
P(|B_s^{(N)} − B_t^{(N)}| > ε for some |s − t| ≤ δ + N^{−1}) ≤ P(|B_s − B_t| > ε for some s, t ∈ [0, 2], |s − t| ≤ 2δ)
By choosing δ sufficiently small and using uniform continuity of paths on compacts, P(|B_s − B_t| > ε for some |s − t| ≤ 2δ) can be made sufficiently small; then taking the limit N → ∞ we get the desired result.
Lemma 6.3. If {X_n}_{n=1}^∞, {Y_n}_{n=1}^∞ are sequences of random variables on the same probability space such that X_n converges to X in distribution and Y_n converges to 0 in distribution, then X_n + Y_n converges to X in distribution.
Proof. It suffices to show that the characteristic functions converge.
|E[e^{iθ(X_n+Y_n)}] − E[e^{iθX}]| ≤ |E[e^{iθ(X_n+Y_n)} − e^{iθX_n}]| + |E[e^{iθX_n} − e^{iθX}]|
lim_{n→∞} |E[e^{iθX_n} − e^{iθX}]| = 0
|E[e^{iθ(X_n+Y_n)} − e^{iθX_n}]| = |E[e^{iθX_n}(e^{iθY_n} − 1)]|
≤ E[|e^{iθX_n}||e^{iθY_n} − 1|]
= E[|e^{iθY_n} − 1|]
≤ √(E[|e^{iθY_n} − 1|²])        (by Cauchy–Schwarz)
= √(2 E[1 − cos(θY_n)])
and lim_{n→∞} E[1 − cos(θY_n)] = 0 since Y_n → 0 in distribution and 1 − cos is bounded and continuous.
So indeed X_n + Y_n converges to X in distribution.
Lemma 6.4. If X_n is a sequence of random variables such that lim_{n→∞} E[|X_n|] = 0 then X_n converges to 0 in distribution.
Proof. It suffices to show that the characteristic functions converge to 1.
Recall that |e^{ix} − 1| ≤ |x|, so:
|E[e^{iθX_n} − 1]| ≤ E[|θX_n|] = |θ| E[|X_n|] → 0
Theorem 6.4. If {X_n}_{n=1}^∞, X are E-valued random variables and (E, d) is a metric space, then the following are equivalent:
1. lim_{n→∞} E[F(X_n)] = E[F(X)]        ∀F : E → R bounded and continuous.
2. lim_{n→∞} E[F(X_n)] = E[F(X)]        ∀F : E → R bounded and uniformly continuous.
3. lim sup_{n→∞} P(X_n ∈ A) ≤ P(X ∈ A) for any A closed.
4. lim inf_{n→∞} P(X_n ∈ A) ≥ P(X ∈ A) for any A open.
5. lim_{n→∞} P(X_n ∈ A) = P(X ∈ A) whenever P(X ∈ ∂A) = 0.
6. lim_{n→∞} E[F(X_n)] = E[F(X)] for all bounded, measurable F : E → R s.t. P(X ∈ Disc(F)) = 0.
Proof.
• 1 =⇒ 2)
If F is uniformly continuous then it is also continuous, hence the claim is immediate.
• 2 =⇒ 3)
Let A be closed and for ε > 0 define
F_ε(x) = (1 − d(x, A)/ε)_+
which are uniformly continuous, bounded functions such that (A being closed)
lim_{ε→0} F_ε(x) = χ_A(x)
Since F_ε ≤ 1, by the dominated convergence theorem we have that
lim_{ε→0} E[F_ε(X)] = ∫ χ_A dP = P(X ∈ A)
So we have that
P(X ∈ A) = lim_{ε→0} E[F_ε(X)]
= lim_{ε→0} lim_{n→∞} E[F_ε(X_n)]
≥ lim sup_{n→∞} E[χ_A(X_n)]        (since F_ε ≥ χ_A)
= lim sup_{n→∞} P(X_n ∈ A)
• 3 =⇒ 4)
Let A be open; then A^c is closed, so:
P(X ∈ A) = 1 − P(X ∈ A^c)
≤ 1 − lim sup_{n→∞} P(X_n ∈ A^c)
= lim inf_{n→∞} P(X_n ∈ A)
• 3, 4 =⇒ 5)
If P(X ∈ ∂A) = 0 then P(X ∈ Ā) = P(X ∈ A) = P(X ∈ A°), so we have that
lim sup_{n→∞} P(X_n ∈ A) ≤ lim sup_{n→∞} P(X_n ∈ Ā)
≤ P(X ∈ Ā)
= P(X ∈ A)
= P(X ∈ A°)
≤ lim inf_{n→∞} P(X_n ∈ A°)
≤ lim inf_{n→∞} P(X_n ∈ A)
≤ lim sup_{n→∞} P(X_n ∈ A)
so all the inequalities are equalities.
• 5 =⇒ 6)
Notice that 5 is the specific case of 6 where F = χ_A, since Disc(χ_A) = ∂A, so we have 6 for such indicator functions.
For general bounded measurable F fix ε > 0 and choose an increasing sequence {α_i}_{i=1}^{m+1} spanning the range of F with |α_i − α_{i+1}| < ε, and define the ε-approximation of F as
F_ε(x) = ∑_{i=1}^m α_i χ_{F^{−1}((α_i, α_{i+1}])}(x)
Notice that p_α := P(F(X) = α) > 0 for at most countably many α, since {α : p_α > 1/n} has at most n elements. Letting Q = {α : p_α > 0} we can choose the {α_i} outside Q, so that P(F(X) = α_i) = 0 for each i.
So if we let A_i = F^{−1}((α_i, α_{i+1}]) = {x : F(x) ∈ (α_i, α_{i+1}]} then we have that
Disc(χ_{A_i}) = ∂A_i ⊆ Disc(F) ∪ {x : F(x) = α_i} ∪ {x : F(x) = α_{i+1}}
and P(X ∈ Disc(F)) = P(F(X) = α_i) = P(F(X) = α_{i+1}) = 0, hence P(X ∈ ∂A_i) = 0, so 5 holds with F = χ_{A_i} and
lim_{n→∞} E[χ_{A_i}(X_n)] = E[χ_{A_i}(X)]
We therefore get that
lim_{n→∞} E[F_ε(X_n)] = lim_{n→∞} ∑_{i=1}^m α_i E[χ_{A_i}(X_n)] = ∑_{i=1}^m α_i E[χ_{A_i}(X)] = E[F_ε(X)]
Furthermore ||F − F_ε||_∞ ≤ ε, but ε was chosen arbitrarily, hence indeed lim_{n→∞} E[F(X_n)] = E[F(X)].
• 6 =⇒ 1)
If F is bounded and continuous then F is measurable with Disc(F) = ∅, hence the claim is immediate.