1.
(i) Since X(t) and Y(t) are Gaussian, it is enough to show that the two processes have the same mean and covariance as W(t). Clearly, we have EX(t) = 0 and EY(t) = 0. Furthermore,
E(X(t)X(s)) = (1/c) E(W(ct)W(cs)) = (1/c) min(ct, cs) = min(t, s).
Similarly,
E(Y(t)Y(s)) = E(W(c + t)W(c + s)) − E(W(c + t)W(c)) − E(W(c)W(c + s)) + EW(c)²
= min(t + c, c + s) − c = min(t, s).
[7] UNSEEN
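As a quick numerical sanity check of (i) (a sketch, not part of the model answer), one can sample W exactly at the handful of times needed and compare the empirical covariances of X(t) = c^{−1/2}W(ct) and Y(t) = W(c + t) − W(c) with min(t, s); the constants c, t, s and the seed below are arbitrary choices.

```python
import numpy as np

# Sample W exactly at the required times via independent Gaussian increments,
# then compare the empirical covariances with min(t, s).
rng = np.random.default_rng(0)
c, t, s = 4.0, 0.7, 0.3                          # arbitrary test values (s < t < 1 < c)
times = [c * s, c * t, c, c + s, c + t]          # increasing for these constants
n = 1_000_000
incs = [rng.normal(0.0, np.sqrt(d), n) for d in np.diff([0.0] + times)]
W = dict(zip(times, np.cumsum(incs, axis=0)))    # exact samples of W at each time
X_t, X_s = W[c * t] / np.sqrt(c), W[c * s] / np.sqrt(c)
Y_t, Y_s = W[c + t] - W[c], W[c + s] - W[c]
print(np.mean(X_t * X_s), np.mean(Y_t * Y_s), min(t, s))
# all three numbers agree up to Monte Carlo error
```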
(ii) W(t) is an N(0, t) random variable. We use the formula φ(λ) = e^{−λ²t/2} for the characteristic function of a Gaussian N(0, t) variable (with λ = 1) to obtain
Ee^{iW(t)} = e^{−t/2}.
(iii) W(t) + W(s) is Gaussian, since W(t) and W(s) are jointly Gaussian. We have that E(W(t) + W(s)) = 0 and
E(W(t) + W(s))² = t + s + 2 min(t, s).
We use the formula for the characteristic function of a Gaussian random variable to obtain
Ee^{i(W(t)+W(s))} = exp(−t/2 − s/2 − min(t, s)).
[6] SEEN SIMILAR
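For (ii) and (iii), a Monte Carlo estimate of the two characteristic functions can be compared directly with the closed-form answers; this again is only an illustrative sketch, with arbitrary t > s.

```python
import numpy as np

# Monte Carlo check of E e^{iW(t)} = e^{-t/2} and
# E e^{i(W(t)+W(s))} = exp(-t/2 - s/2 - min(t, s)).
rng = np.random.default_rng(1)
t, s, n = 1.3, 0.4, 500_000                      # requires t > s
Ws = rng.normal(0.0, np.sqrt(s), n)              # W(s)
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), n)     # W(t) = W(s) + indep. increment
print(np.mean(np.exp(1j * Wt)), np.exp(-t / 2))
print(np.mean(np.exp(1j * (Wt + Ws))), np.exp(-t / 2 - s / 2 - min(t, s)))
```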
(iv) B(t) is a mean-zero Gaussian process with variance
EB(t)² = t(1 − t).
Hence, the probability density function is
p(x, t) = (1/√(2πt(1 − t))) exp(−x²/(2t(1 − t))).
[7] SEEN SIMILAR
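A sketch for (iv), assuming B(t) is the standard Brownian bridge on [0, 1] realised as B(t) = W(t) − tW(1); this construction is an assumption here, as the question may define B(t) differently.

```python
import numpy as np

# Empirical mean and variance of B(t) = W(t) - t W(1) at a fixed time t,
# compared with the claimed variance t(1 - t).
rng = np.random.default_rng(2)
n, t = 1_000_000, 0.3
W_t = rng.normal(0.0, np.sqrt(t), n)             # W(t)
W_1 = W_t + rng.normal(0.0, np.sqrt(1 - t), n)   # W(1) = W(t) + indep. increment
B_t = W_t - t * W_1
print(B_t.mean(), B_t.var(), t * (1 - t))        # variance should be ~ t(1 - t)
```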
2.
(i) We have that Xj = Yj. We set h(Yj) = f(Xj) − Ef(Y0) = f(Yj) − Ef(Y0). We calculate
E((1/N) Σ_{j=0}^{N−1} f(Xj) − Ef(Y0))² = E((1/N) Σ_{j=0}^{N−1} h(Yj))²
= (1/N²) Σ_{j=0}^{N−1} Σ_{k=0}^{N−1} E(h(Yj)h(Yk))
= (1/N²) Σ_{j=0}^{N−1} Σ_{k=0}^{N−1} E(h(Yk)²) δjk
= (1/N²) Σ_{k=0}^{N−1} E(h(Yk)²)
= C/N ≤ E|f(Y0)|²/N → 0.
In the above we used our assumption E(f(Y0))² < +∞ together with the triangle inequality.
[10] SEEN SIMILAR
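The C/N decay of the mean-square error can also be seen numerically; the sketch below takes the Yj i.i.d. standard Gaussian and f(x) = x², both arbitrary choices consistent with Eh(Yj)h(Yk) = Eh(Yk)² δjk and E(f(Y0))² < +∞.

```python
import numpy as np

# Mean-square error of the running average of f(Y_j) for growing N;
# mse * N should stay roughly constant (= Var f(Y_0) = 2 for f(x) = x^2).
rng = np.random.default_rng(3)
f = np.square
n_reps = 10_000
for N in (10, 100, 1000):
    Y = rng.normal(size=(n_reps, N))
    avg = f(Y).mean(axis=1)
    mse = np.mean((avg - 1.0) ** 2)   # E f(Y_0) = E Y_0^2 = 1 here
    print(N, mse, mse * N)
```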
(ii) We have
EX(t) = ∫_0^t EY(s) ds = 0.
The fact that X(t) is Gaussian follows from the Gaussianity of the process Y(t) and a Riemann sum approximation of the integral ∫_0^t Y(s) ds. In particular, for arbitrary n and t1, . . . , tn, we have that the random vector
(∫_0^{t1} Y(s) ds, ∫_0^{t2} Y(s) ds, . . . , ∫_0^{tn} Y(s) ds)
is approximately equal to
(Σ_{j=1}^{k1} Y(sj) Δsj, Σ_{j=1}^{k2} Y(sj) Δsj, . . . , Σ_{j=1}^{kn} Y(sj) Δsj),
which is a Gaussian random vector, since linear combinations of Gaussian random variables are Gaussian. Hence, all finite-dimensional distributions of X(t) are Gaussian and the process X(t) is Gaussian.
To calculate the covariance, write m = min(t, s) and M = max(t, s). Since the integrand is symmetric in p and q,
E(X(t)X(s)) = ∫_0^t ∫_0^s e^{−|p−q|} dp dq = ∫_0^M ∫_0^m e^{−|p−q|} dp dq
= ∫_0^m ∫_0^m e^{−|p−q|} dp dq + ∫_m^M ∫_0^m e^{−|p−q|} dp dq
= 2 ∫_0^m dq ∫_0^q e^{−(q−p)} dp + ∫_m^M dq ∫_0^m e^{−(q−p)} dp
= 2(m − 1 + e^{−m}) + (e^{m} − 1)(e^{−m} − e^{−M}).
Hence,
E(X(t)X(s)) = 2 min(t, s) + e^{−min(t,s)} + e^{−max(t,s)} − e^{−|t−s|} − 1.
[10] SEEN SIMILAR
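The closed form can be checked against a direct quadrature of the double integral; the grid size and the values of t and s below are arbitrary.

```python
import numpy as np

# Midpoint-rule approximation of the integral of e^{-|p-q|} over [0,t] x [0,s],
# compared with 2 min(t,s) + e^{-min(t,s)} + e^{-max(t,s)} - e^{-|t-s|} - 1.
t, s, n = 1.7, 0.6, 2000
p = (np.arange(n) + 0.5) * t / n
q = (np.arange(n) + 0.5) * s / n
integral = np.exp(-np.abs(p[:, None] - q[None, :])).sum() * (t / n) * (s / n)
closed = (2 * min(t, s) + np.exp(-min(t, s)) + np.exp(-max(t, s))
          - np.exp(-abs(t - s)) - 1)
print(integral, closed)   # agree to several digits
```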
3.
(i) The generator of the process is
L = a d/dx + (σ²/2) d²/dx².
The backward Kolmogorov equation is
∂f/∂t = Lf = a ∂f/∂x + (σ²/2) ∂²f/∂x².
The forward Kolmogorov equation is
∂p/∂t = L*p = −a ∂p/∂x + (σ²/2) ∂²p/∂x².
The stochastic differential equation is
dXt = a dt + σ dWt .
[5] UNSEEN
(ii) The solution of the SDE is
Xt = x0 + at + σW(t).
The mean and variance are
EXt = x0 + at
and
E(Xt − EXt)² = σ²t.
[5] SEEN SIMILAR
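A short Euler-Maruyama simulation reproduces this mean and variance; this is a sketch with arbitrary parameter values, and for constant coefficients the scheme is in fact exact in distribution at the grid times.

```python
import numpy as np

# Euler-Maruyama for dX_t = a dt + sigma dW_t; the sample mean and variance
# at time t should match x0 + a t and sigma^2 t.
rng = np.random.default_rng(4)
a, sigma, x0, t = 1.5, 0.8, 0.2, 2.0
n_paths, n_steps = 100_000, 500
dt = t / n_steps
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X += a * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
print(X.mean(), x0 + a * t)    # mean
print(X.var(), sigma**2 * t)   # variance
```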
(iii) Xt is a Gaussian process with mean x0 + at and variance σ²t. Hence, the transition probability density (i.e. the solution of the Fokker-Planck equation with initial condition p(x, 0|x0, 0) = δ(x − x0)) is
p(x, t|x0, 0) = (1/√(2πσ²t)) exp(−|x − x0 − at|²/(2σ²t)).
[5] SEEN SIMILAR
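One can also verify by finite differences that this density satisfies the forward Kolmogorov equation from (i); the evaluation point and step size below are arbitrary.

```python
import numpy as np

# Check that p_t + a p_x - (sigma^2/2) p_xx ~ 0 for the Gaussian transition density.
a, sigma, x0 = 1.5, 0.8, 0.0

def p(x, t):
    return (np.exp(-(x - x0 - a * t) ** 2 / (2 * sigma**2 * t))
            / np.sqrt(2 * np.pi * sigma**2 * t))

x, t, h = 0.4, 0.7, 1e-4
p_t = (p(x, t + h) - p(x, t - h)) / (2 * h)
p_x = (p(x + h, t) - p(x - h, t)) / (2 * h)
p_xx = (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / h**2
print(p_t - (-a * p_x + 0.5 * sigma**2 * p_xx))   # ~ 0 up to O(h^2)
```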
(iv) The stationary Fokker-Planck equation is
−a ∂x p + (σ²/2) ∂x²p = 0, p(0) = p(1).
We multiply this equation by p(x), integrate over [0, 1], integrate by parts and use the periodic
boundary conditions to obtain
∫_0^1 |∂x p|² dx = 0.
Hence, the unique normalized solution of the stationary Fokker-Planck equation is
ps(x) = 1.
[5] SEEN SIMILAR
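One way to see this numerically (a sketch; identifying the periodic process on [0, 1] with Xt mod 1 in the standard way): for large t the histogram of (x0 + at + σW(t)) mod 1 is flat.

```python
import numpy as np

# Long-time histogram of X_t mod 1; all bin heights should be close to 1.
rng = np.random.default_rng(5)
a, sigma, x0, t = 1.5, 0.8, 0.3, 50.0
X = (x0 + a * t + sigma * np.sqrt(t) * rng.normal(size=200_000)) % 1.0
hist, _ = np.histogram(X, bins=10, range=(0.0, 1.0), density=True)
print(hist)
```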
4.
(i) The generator is
L = (1/2) d²/dx²
in [0, 1], equipped with Neumann boundary conditions. The backward and forward Kolmogorov equations are
∂f/∂t = (1/2) ∂²f/∂x²
and
∂p/∂t = (1/2) ∂²p/∂x².
Both equations are posed on [0, 1] equipped with Neumann boundary conditions.
[5] SEEN SIMILAR
(ii) We have to solve the initial-boundary value problem
∂p/∂t = (1/2) ∂²p/∂x², ∂x p(0, t) = ∂x p(1, t) = 0, p(x, 0) = δ(x − x0).
The boundary conditions are satisfied by functions of the form cos(nπx). We look for a solution
in the form of a cosine Fourier series
p(x, t) = (1/2) a0 + Σ_{n=1}^∞ an(t) cos(nπx).
From the initial conditions we obtain
an(0) = 2 ∫_0^1 cos(nπx) δ(x − x0) dx = 2 cos(nπx0).
We substitute the expansion into the PDE and use the orthonormality of the Fourier basis to
obtain the equations for the Fourier coefficients:
ȧn = −(n²π²/2) an,
from which we deduce that
an(t) = an(0) e^{−n²π²t/2}.
Consequently,
p(x, t|x0, 0) = 1 + 2 Σ_{n=1}^∞ cos(nπx0) cos(nπx) e^{−n²π²t/2}.
[5] SEEN SIMILAR
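Assuming (as the Neumann boundary conditions suggest) that the underlying process is Brownian motion reflected at 0 and 1, the series can be checked against a folded-Gaussian histogram; the values of x0 and t and the truncation at 200 terms are arbitrary.

```python
import numpy as np

# Fold x0 + W(t) into [0, 1] by reflection and compare the histogram with the
# cosine series 1 + 2 sum cos(n pi x0) cos(n pi x) e^{-n^2 pi^2 t / 2}.
rng = np.random.default_rng(6)
x0, t = 0.25, 0.1
Z = x0 + np.sqrt(t) * rng.normal(size=500_000)
R = Z % 2.0
X = np.minimum(R, 2.0 - R)                       # reflection map onto [0, 1]
xs = np.linspace(0.025, 0.975, 20)               # bin midpoints
series = 1 + 2 * sum(np.cos(n * np.pi * x0) * np.cos(n * np.pi * xs)
                     * np.exp(-n**2 * np.pi**2 * t / 2) for n in range(1, 200))
hist, _ = np.histogram(X, bins=20, range=(0.0, 1.0), density=True)
print(np.max(np.abs(hist - series)))             # small (Monte Carlo + binning error)
```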
(iii) The stationary Fokker-Planck equation is
∂²ps/∂x² = 0, ∂x ps(0) = ∂x ps(1) = 0.
The unique normalized solution to this boundary value problem is ps(x) = 1. Indeed, we multiply the equation by ps(x), integrate by parts and use the boundary conditions to obtain
∫_0^1 |dps/dx|² dx = 0,
from which it follows that ps(x) = 1. Alternatively, by taking the limit of p(x, t|x0, 0) as t → ∞ we obtain the invariant distribution:
lim_{t→∞} p(x, t|x0, 0) = 1.
[5] SEEN SIMILAR
(iv) We calculate
E(W(t)W(0)) = ∫_0^1 ∫_0^1 x x0 p(x, t|x0, 0) ps(x0) dx dx0
= ∫_0^1 ∫_0^1 x x0 (1 + 2 Σ_{n=1}^∞ cos(nπx0) cos(nπx) e^{−n²π²t/2}) dx dx0
= 1/4 + (8/π⁴) Σ_{n=0}^{+∞} e^{−(2n+1)²π²t/2}/(2n + 1)⁴,
where we used ∫_0^1 x cos(nπx) dx = ((−1)^n − 1)/(n²π²), which vanishes for even n.
[5] SEEN SIMILAR
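As a consistency check (a sketch, not required by the question): at t = 0 the stationary density is uniform, so EW(0)² = ∫_0^1 x² dx = 1/3, and the formula indeed reduces to 1/4 + (8/π⁴)(π⁴/96) = 1/3.

```python
import numpy as np

# Evaluate the series at t = 0 and compare with E W(0)^2 = 1/3.
tail = (8 / np.pi**4) * sum(1.0 / (2 * n + 1) ** 4 for n in range(10_000))
print(0.25 + tail, 1.0 / 3.0)   # sum of 1/(2n+1)^4 equals pi^4/96
```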