Chapter 9
Brownian Motion
9.1. If $X$ and $Y$ are independent, then for all bounded, continuous $f, g : \mathbf{R}^d \to \mathbf{R}$,
$$E[f(X)g(Y)] = E[f(X)] \cdot E[g(Y)].$$
Apply this with $f(x) = \exp(iu \cdot x)$ and $g(y) = \exp(iv \cdot y)$ to obtain half of the result. Conversely, suppose that the preceding identity holds for the stated $f$ and $g$, for all $u, v \in \mathbf{R}^d$. Let $(X', Y')$ be a pair of independent random variables such that $X'$ has the same distribution as $X$ and $Y'$ has the same distribution as $Y$. By the previous part,
$$\begin{aligned}
E\left[e^{iu\cdot X' + iv\cdot Y'}\right] &= E\left[e^{iu\cdot X'}\right] E\left[e^{iv\cdot Y'}\right] \\
&= E\left[e^{iu\cdot X}\right] E\left[e^{iv\cdot Y}\right] \\
&= E\left[e^{iu\cdot X + iv\cdot Y}\right];
\end{aligned}$$
the last identity holds by our assumption. Therefore, the uniqueness theorem for Fourier transforms shows that the distribution of $(X, Y)$ is the same as that of $(X', Y')$. In particular, $X$ and $Y$ are independent.
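Numerically, the characteristic-function criterion can be seen at work with a small Monte Carlo sketch (the distributions, sample size, and test points $u, v$ below are illustrative assumptions, not from the text): for an independent pair, the empirical joint characteristic function factors, up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Illustrative independent pair: X ~ N(0, 1), Y ~ Exp(1); u, v are test points.
X = rng.standard_normal(n)
Y = rng.exponential(1.0, n)
u, v = 0.7, -1.3

joint = np.mean(np.exp(1j * (u * X + v * Y)))                        # E[e^{i(uX + vY)}]
product = np.mean(np.exp(1j * u * X)) * np.mean(np.exp(1j * v * Y))  # E[e^{iuX}] E[e^{ivY}]

print(abs(joint - product))  # ~0 up to Monte Carlo error of order n^{-1/2}
```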
To derive Corollary 9.7, note that the covariance matrix of $(X_1, \dots, X_n, Y_1, \dots, Y_m)$ has the form
$$Q = \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix},$$
where $Q_1$ is the covariance matrix of $(X_1, \dots, X_n)$ and $Q_2$ is that of $(Y_1, \dots, Y_m)$.
In particular, for all $\alpha \in \mathbf{R}^n$ and $\beta \in \mathbf{R}^m$,
$$\begin{aligned}
E\left[e^{i(\alpha,\beta)\cdot(X_1,\dots,X_n,Y_1,\dots,Y_m)}\right] &= \exp\left(-\tfrac{1}{2}(\alpha,\beta)\cdot Q(\alpha,\beta)\right) \\
&= \exp\left(-\tfrac{1}{2}\alpha\cdot Q_1\alpha - \tfrac{1}{2}\beta\cdot Q_2\beta\right) \\
&= E\left[e^{i\alpha\cdot(X_1,\dots,X_n)}\right] \times E\left[e^{i\beta\cdot(Y_1,\dots,Y_m)}\right].
\end{aligned}$$
The asserted independence follows from the first part.
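Here is a minimal simulation sketch of Corollary 9.7 (the block sizes and the particular matrices $Q_1, Q_2$ are my own choices): for a centered Gaussian vector with block-diagonal covariance, the empirical cross-covariance between the two blocks is near zero, in line with the factorization above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative block covariances; any positive-definite Q1, Q2 would do.
Q1 = np.array([[2.0, 0.5],
               [0.5, 1.0]])                 # covariance of (X1, X2)
Q2 = np.array([[1.5]])                      # covariance of (Y1,)
Q = np.block([[Q1, np.zeros((2, 1))],
              [np.zeros((1, 2)), Q2]])      # block-diagonal Q

G = rng.multivariate_normal(np.zeros(3), Q, size=100_000)
X, Y = G[:, :2], G[:, 2:]

# The empirical cross-covariance E[X_i Y_j] matches the zero off-diagonal block.
print(X.T @ Y / len(G))
```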
9.2. By definition, $E[\exp(i\alpha \cdot G^n)] = \exp(-\frac{1}{2}\alpha \cdot Q^n\alpha)$ for all $\alpha \in \mathbf{R}^k$, where $Q^n_{ij} = E[G^n_i G^n_j]$. Therefore, $\lim_{n\to\infty} E[\exp(i\alpha \cdot G^n)] = \exp(-\frac{1}{2}\alpha \cdot Q\alpha)$. Evidently, $Q$ is nonnegative-definite because each $Q^n$ is. Also, $Q$ is symmetric. Therefore, $\exp(-\frac{1}{2}\alpha \cdot Q\alpha)$ is the characteristic function of a Gaussian vector with covariance matrix $Q$ (Theorem 9.5). The convergence theorem for characteristic functions finishes the proof.
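For a concrete (assumed) instance of the convergence, take $Q^n = Q + n^{-1}I$; the Gaussian characteristic functions with covariance $Q^n$ converge pointwise to the one with covariance $Q$:

```python
import numpy as np

Q = np.array([[1.0, 0.3],
              [0.3, 2.0]])
alpha = np.array([0.5, -1.0])

def gaussian_cf(alpha, cov):
    # Characteristic function of a centered Gaussian vector at the point alpha.
    return np.exp(-0.5 * alpha @ cov @ alpha)

for n in (1, 10, 100, 1000):
    Qn = Q + np.eye(2) / n                   # assumed sequence with Qn -> Q
    print(n, gaussian_cf(alpha, Qn), gaussian_cf(alpha, Q))
```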
9.3. A random vector is centered Gaussian if and only if every linear combination of its coordinates is a one-dimensional centered Gaussian. Every linear combination of $(G_{m+1}, \dots, G_n)$ is also a linear combination of $(G_1, \dots, G_n)$, with zero coefficients on the first $m$ coordinates, and hence is a one-dimensional centered Gaussian. This proves that $(G_{m+1}, \dots, G_n)$ is Gaussian.
The remainder will be posted soon.
9.4. Let $S = \pm 1$ with probability $\frac{1}{2}$ each, and let $Z$ be an independent $N(0,1)$ random variable. Define $X_1 := Z$ and $X_2 := S|Z|$. Then $X_2 \sim N(0,1)$ as well. Indeed, because of independence,
$$P\{X_2 \in A\} = \tfrac{1}{2}P\{|Z| \in A\} + \tfrac{1}{2}P\{-|Z| \in A\} = P\{Z \in A\},$$
where the last identity holds because, by symmetry of the $N(0,1)$ law, $\operatorname{sign}(Z)$ is independent of $|Z|$ and equals $\pm 1$ with probability $\frac{1}{2}$ each.
If $(X_1, X_2)$ had a two-dimensional Gaussian distribution, then all linear combinations of $X_1$ and $X_2$ would be Gaussian. In particular, $X_1 + X_2$ would have a normal distribution. But $X_1 + X_2 = Z + S|Z|$ vanishes precisely when $S = 1$ and $Z \le 0$, or $S = -1$ and $Z \ge 0$. Therefore,
$$P\{X_1 + X_2 = 0\} = P\{S = 1\}P\{Z \le 0\} + P\{S = -1\}P\{Z \ge 0\} = \frac{1}{2}.$$
Consequently, $X_1 + X_2$ cannot have a normal distribution [because $0 < \frac{1}{2} < 1$, whereas a normal random variable puts mass $0$ or $1$ at any single point]. It follows that $(X_1, X_2)$ does not have a two-dimensional Gaussian distribution.
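A quick simulation sketch of this counterexample (sample size and seed are arbitrary choices): both coordinates behave as $N(0,1)$ marginals, yet $X_1 + X_2$ has an atom at $0$ of mass roughly $\frac{1}{2}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

Z = rng.standard_normal(n)
S = rng.choice([-1.0, 1.0], size=n)   # independent fair random sign

X1 = Z
X2 = S * np.abs(Z)

print(X2.mean(), X2.std())            # ~0 and ~1: X2 is N(0, 1) as well
print(np.mean(X1 + X2 == 0.0))        # ~0.5: an atom no normal law can have
```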
9.5. Because $W$ is a.s. continuous, $I(t) = \int_0^t W(s)\,ds$ is a Riemann integral, and is continuously differentiable in $t$ (fundamental theorem of calculus). We can also approximate $I(t)$ by Riemann sums:
$$I(t) = \lim_{N\to\infty} \frac{t}{N} \sum_{i=1}^{N} W\!\left(\frac{it}{N}\right) \qquad \text{a.s.}$$
Because the vector $(W(t/N), W(2t/N), \dots, W(t))$ is Gaussian, $I(t)$ is a limit of linear combinations of Gaussians, whence it is Gaussian (Problem 9.2).
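As a sanity check (the grid size and path count are arbitrary, and the variance value $t^3/3$ is the standard double integral of $\min(u,v)$, not computed in the text), one can simulate the Riemann sums and compare moments:

```python
import numpy as np

rng = np.random.default_rng(3)
t, N, paths = 2.0, 1_000, 20_000
dt = t / N

# Brownian paths on the grid it/N, via cumulative sums of N(0, dt) increments.
W = np.cumsum(np.sqrt(dt) * rng.standard_normal((paths, N)), axis=1)

# Riemann sums (t/N) * sum_i W(it/N), one per path.
I = dt * W.sum(axis=1)

# I(t) is centered Gaussian with variance t^3/3 (double integral of min(u, v)).
print(I.mean(), I.var(), t**3 / 3)
```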
9.7. Evidently, $-W$ and $\{tW(1/t)\}_{t>0}$ are centered Gaussian processes [because $W$ is a centered Gaussian process]. It remains to compute covariances. First,
$$E\left[\{-W(s)\}\{-W(t)\}\right] = E[W(s)W(t)] = \min(s,t).$$
Also,
$$E\left[sW\!\left(\frac{1}{s}\right)\cdot tW\!\left(\frac{1}{t}\right)\right] = st\,\min\!\left(\frac{1}{s},\frac{1}{t}\right) = \min(s,t).$$
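A simulation sketch of the time-inversion covariance (the particular $s, t$ are arbitrary choices): sample the pair $(W(1/t), W(1/s))$ from its Gaussian law via an independent increment and estimate the covariance of $sW(1/s)$ and $tW(1/t)$.

```python
import numpy as np

rng = np.random.default_rng(4)
s, t, n = 0.5, 2.0, 1_000_000          # arbitrary fixed times with s < t

# Sample (W(b), W(a)) at times b = 1/t < a = 1/s using an independent increment.
a, b = 1 / s, 1 / t
Wb = np.sqrt(b) * rng.standard_normal(n)
Wa = Wb + np.sqrt(a - b) * rng.standard_normal(n)

# E[sW(1/s) * tW(1/t)] should equal min(s, t).
print(np.mean((s * Wa) * (t * Wb)), min(s, t))
```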