

A17469W1
DEGREE OF MASTER OF SCIENCE
Mathematical and Computational Finance
Numerical Methods
Hilary Term 2023
Thursday 12 January 2023, 9:30 a.m. - 11:00 a.m.
Candidates must attempt the following
• TWO questions from Numerical Methods
You may attempt as many questions as you wish.
The best two answers will count toward the total mark.
Do not turn over until told that you may do so.
Numerical Methods
1. (a) Let Z denote a random variable with probability density p and let g : R −→ R denote a
payoff function. Explain the difference between computing the expectation of g(Z)
via (1) Quadrature / Integration and via (2) Monte Carlo simulation.
Give an example of a modeling setup where Monte Carlo simulation is more favourable
than quadrature. Explain your answer.
(b) (i) Name three possible methods for generating standard normal random variables from
uniform random variables.
(ii) Let X and Y be independent N (0, 1)-distributed random variables. Define a random
variable Z such that Cov(X, Z) = ρ for some constant ρ ∈ (−1, 0).
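For illustration of part (b), a minimal Python sketch, assuming Box–Muller as one of the generation methods and the construction Z := ρX + √(1 − ρ²)Y for part (ii); the value ρ = −0.5 and the sample size are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(0)

# Box-Muller: two independent U(0,1) samples -> two independent N(0,1) samples
u1, u2 = rng.random(100_000), rng.random(100_000)
x = np.sqrt(-2.0 * np.log(1.0 - u1)) * np.cos(2.0 * np.pi * u2)
y = np.sqrt(-2.0 * np.log(1.0 - u1)) * np.sin(2.0 * np.pi * u2)

# One possible choice for (b)(ii): for independent N(0,1) X and Y,
# Z = rho*X + sqrt(1 - rho^2)*Y is N(0,1) with Cov(X, Z) = rho.
rho = -0.5  # hypothetical value in (-1, 0)
z = rho * x + np.sqrt(1.0 - rho**2) * y

print(np.cov(x, z)[0, 1])  # sample covariance, close to rho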
(c) Let V1 be a real valued random variable and let F denote its cumulative distribution
function. Assume that the inverse function F⁻¹ exists and is strictly increasing. Consider
a uniform random variable U on [0, 1] and set V2 := F⁻¹(U).
• Show that the cumulative distribution function of V2 coincides with F.
• Now let (W_t)_{t⩾0} denote a standard Brownian motion on R. Consider V1 = σW_t for
some σ > 0 and t > 0, and let U denote a uniform random variable on [0, 1]. Find a
mapping G such that G(U) ∼ V1. Explain your answer. [You can use the expression
Φ for the cdf of a standard normal distribution in your answer.]
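For illustration of part (c), a minimal inverse-transform sketch, using the fact that V1 = σW_t ∼ N(0, σ²t); the parameter values are arbitrary, and scipy's norm.ppf plays the role of Φ⁻¹.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma, t = 0.2, 1.0          # hypothetical parameters
u = rng.random(100_000)      # U(0,1) samples

# G(U) := sigma * sqrt(t) * Phi^{-1}(U) has the same law as sigma * W_t ~ N(0, sigma^2 t)
v = sigma * np.sqrt(t) * norm.ppf(u)

print(v.std(), sigma * np.sqrt(t))   # sample standard deviation vs. the theoretical value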
(d) Consider the following Stochastic Differential Equation on R
dS_t = µ(S_t) dt + σ(S_t) dW_t,   S_0 = s > 0,   for t > 0,   (1)
where (W_t)_{t⩾0} denotes a standard Brownian motion and µ(·) and σ(·) are smooth functions.
Assume that there exists a unique solution to SDE (1).
(i) State the Euler scheme for simulating (1).
(ii) State sufficient assumptions (without proof) on the coefficients of the SDE (1) that
ensure strong convergence of the Euler scheme.
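For illustration of part (d)(i), a minimal Euler–Maruyama sketch for (1); the coefficient functions and parameter values below are hypothetical examples.

import numpy as np

def euler_path(mu, sigma, s0, T, N, rng):
    # One Euler path of dS_t = mu(S_t) dt + sigma(S_t) dW_t on [0, T] with N steps
    dt = T / N
    s = np.empty(N + 1)
    s[0] = s0
    for n in range(N):
        dW = np.sqrt(dt) * rng.standard_normal()
        s[n + 1] = s[n] + mu(s[n]) * dt + sigma(s[n]) * dW
    return s

rng = np.random.default_rng(2)
# hypothetical smooth coefficients: mu(s) = 0.05 s, sigma(s) = 0.2 s
path = euler_path(lambda s: 0.05 * s, lambda s: 0.2 * s, s0=1.0, T=1.0, N=250, rng=rng)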
(e) Let V = E[f (ST )] denote the value of an option with payoff f ∈ C(R) at time T > 0 for an
asset S := (S_t)_{t⩾0} whose dynamics are as in (1). Let S̄ denote the discrete approximation
of S via the Euler scheme. We set V̄ := E[f(S̄_T)] and let Ŷ := (1/L) Σ_{i=1}^{L} f(S̄_T^i) denote the
corresponding Monte Carlo approximation for V̄.
Let L ∈ N indicate the number of samples in the Monte Carlo approximation and N ∈ N
the number of time steps in the Euler scheme.
(i) Decompose E[(Ŷ − V)²], the mean squared error of the estimate Ŷ for V, into a bias
and a variance term. How can the bias term be reduced? Explain your answer.
(ii) Explain how this decomposition can be used to determine the optimal number L of
samples and N of time steps to minimize the mean squared error given the constraint
L × N = C on the computational cost for some constant C ∈ N+ .
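For orientation on part (e), a brief worked version of the decomposition under standard assumptions (weak order one for the Euler scheme; the constants c1 and c2 are not specified by the question and only indicate the orders):

E[(Ŷ − V)²] = (E[Ŷ] − V)² + Var(Ŷ) = (V̄ − V)² + Var(f(S̄_T))/L ≈ c1/N² + c2/L.

With the cost constraint L × N = C one can substitute L = C/N and minimise c1/N² + c2 N/C over N, balancing the bias and variance contributions.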
2. (a) Consider for some x > 0 and T > 0 the random variable
X_T = x exp((r − σ²/2) T + σ W_T),   (2)
where W denotes a standard Brownian motion on R. Consider a continuous payoff function
f ∈ C(R) and let Ẑ^N := (1/N) Σ_{i=1}^{N} f(X_T^i) denote the Monte Carlo estimator for the
quantity E[f(X_T)], where the X_T^i are i.i.d. samples from the distribution of the random
variable X_T and where N denotes the number of samples.
(i) In what sense does the estimator Ẑ^N converge to the expectation E[f(X_T)] as N → ∞?
State a theorem that implies convergence of the Monte Carlo estimator Ẑ^N.
(ii) Let E_N denote the quantity
E_N := E[(E[f(X_T)] − Ẑ^N)²].
Show that E_N = (1/N) σ_X, where σ_X := (1/N) Σ_{i=1}^{N} E[(f(X_T^i) − E[f(X_T)])²].
(iii) What is the convergence rate for the root mean squared error of the estimator Ẑ^N?
(iv) If f (x) = (x − K)+ for some K > 0, indicate another way to calculate E[f (XT )].
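For illustration of part (a), a minimal Monte Carlo sketch for (2) together with the closed-form expectation of the call payoff f(x) = (x − K)+, which is one possible answer to (iv); all parameter values are hypothetical.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x, r, sigma, T, K, N = 1.0, 0.03, 0.2, 1.0, 1.0, 200_000   # hypothetical parameters

# exact samples of X_T = x * exp((r - sigma^2/2) T + sigma W_T)
w_T = np.sqrt(T) * rng.standard_normal(N)
x_T = x * np.exp((r - 0.5 * sigma**2) * T + sigma * w_T)

z_N = np.mean(np.maximum(x_T - K, 0.0))      # Monte Carlo estimator Z^N of E[(X_T - K)^+]

# closed form (undiscounted Black-Scholes-type formula) for E[(X_T - K)^+]
d1 = (np.log(x / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
closed_form = x * np.exp(r * T) * norm.cdf(d1) - K * norm.cdf(d2)

print(z_N, closed_form)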
(b) (i) Explain the technique of Fast Approximations discussed in the lecture as an alternative
to direct Monte Carlo approximation.
(ii) In what sense is the approximation you described in part (i) of your answer faster
than direct Monte Carlo approximation?
(iii) Name two possible methods that can be applied to speed up Monte Carlo simulation
and explain how they accelerate / improve simulation.
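For part (b)(iii), a minimal sketch of one variance-reduction method, antithetic variates, applied to the random variable in (2); all parameter values are hypothetical.

import numpy as np

rng = np.random.default_rng(4)
x, r, sigma, T, K, N = 1.0, 0.03, 0.2, 1.0, 1.0, 100_000   # hypothetical parameters

def payoff(w):
    # call payoff evaluated at X_T driven by the normal draw w
    return np.maximum(x * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * w) - K, 0.0)

z = rng.standard_normal(N)
plain = payoff(z)                       # plain Monte Carlo samples
anti = 0.5 * (payoff(z) + payoff(-z))   # antithetic pairs: reuse -Z for the second draw

print(plain.mean(), anti.mean())        # both estimate E[(X_T - K)^+]
print(plain.var(), anti.var())          # the antithetic pairs have a smaller variance per pair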
(c) Consider the following stochastic differential equation on the interval [0, T ]
dS_t = b(S_t) dt + a(S_t) dW_t,   S_0 = x > 0,   (3)
for a standard Brownian motion (W_t)_{t⩾0} and smooth functions a(·) and b(·).
(i) Determine the functions a(·) and b(·) in (3) such that ST ∼ XT holds [where XT
denotes the random variable in equation (2)]. Specify the Euler Scheme S̄ for (3) in this
special case.
(ii) In the special case described in part (i) can you give an example of a modelling scenario
that would require simulation of (3) [as opposed to an approximation via (2)]?
(iii) Explain the notions of weak convergence and strong convergence of a numerical scheme
S̄ for a process S of the form (3). Does weak convergence and/or strong convergence hold
for the Euler scheme in the special case described in (i)?
If yes, state the (weak / strong) convergence rate [without proof]; if not, explain why not.
(iv) Recall the weak convergence rate and the strong convergence rate [without proof] for
the Milstein scheme for the SDE (3) in the same special case that we considered in (i).
Does the Milstein scheme have better convergence properties than the Euler scheme?
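For part (c), a minimal sketch of one Euler step and one Milstein step, assuming the coefficients b(s) = r s and a(s) = σ s (the identification that part (i) asks for); parameter values are hypothetical.

import numpy as np

def euler_step(s, r, sigma, dt, dW):
    # Euler step for dS = r S dt + sigma S dW
    return s + r * s * dt + sigma * s * dW

def milstein_step(s, r, sigma, dt, dW):
    # Milstein step adds the correction 0.5 * a(s) * a'(s) * (dW^2 - dt);
    # here a(s) = sigma * s, so a'(s) = sigma
    return s + r * s * dt + sigma * s * dW + 0.5 * sigma**2 * s * (dW**2 - dt)

rng = np.random.default_rng(5)
s, r, sigma, dt = 1.0, 0.03, 0.2, 1.0 / 250   # hypothetical parameters
dW = np.sqrt(dt) * rng.standard_normal()
print(euler_step(s, r, sigma, dt, dW), milstein_step(s, r, sigma, dt, dW))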
(d) Determine the Kolmogorov backward PDE for u(t, s) = E[(ST − K)+ |St = s] where the
dynamics of S are given by (3) and specify the function u(t, s) at final time t = T .
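For orientation on part (d), the general shape of a Kolmogorov backward equation for dynamics of the form (3), with the payoff entering through the condition at t = T, is

∂u/∂t (t, s) + b(s) ∂u/∂s (t, s) + (1/2) a(s)² ∂²u/∂s² (t, s) = 0,   u(T, s) = (s − K)+.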
3. (a) Suppose that Y is a binary random variable taking values {1, 0} with probabilities p
and (1 − p) respectively, for a small parameter p ∈ [0, 1].
(i) Determine the expectation E[Y], the variance Var[Y], and the relative error √Var[Y] / E[Y]
for this random variable and calculate these quantities in the case p = 1/10.
(ii) Suppose we want to estimate the expectation of the random variable Y via (1/N) Σ_{i=1}^{N} Y^i,
where the Y^i are i.i.d. samples from the distribution of Y. Determine the number N of samples
needed such that the relative error of our estimator is less than some ϵ > 0. Calculate
this quantity for the values p = 1/10 and ϵ = 1/100.
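A brief worked version of part (a), reading the relative error of the estimator in (ii) as √(Var[Y]/N) / E[Y]:

E[Y] = p, Var[Y] = p(1 − p), √Var[Y] / E[Y] = √((1 − p)/p); for p = 1/10 these are 0.1, 0.09 and 3.
Requiring √(Var[Y]/N) / E[Y] < ϵ gives N > (1 − p)/(p ϵ²); for p = 1/10 and ϵ = 1/100 this is N > 0.9 / (0.1 · 10⁻⁴) = 90 000.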
(b) (i) List two possible methods to reduce the variance of a Monte Carlo estimator.
(ii) Why is it desirable in financial modeling to reduce the variance of an estimator? What
are two possible ways an estimator can be improved if the variance is reduced?
(iii) Let X be a real valued random variable and f ∈ C(R) a payoff function. Suppose
that we want to approximate E[f(X)] using the estimator f̂ := f̄ − λ(ḡ − E[g(X)]), where
λ ∈ R and g is another payoff function, for which the value E[g(X)] is known, and where
f̄, ḡ denote f̄ = (1/L) Σ_{i=1}^{L} f(X^i) and ḡ = (1/L) Σ_{i=1}^{L} g(X^i) respectively, where the X^i are
i.i.d. samples from the distribution of X.
State conditions for which the variance of the estimator f̂ is lower than the variance of f̄.
(iv) Consider the expression
E[f(X)] = E_h[f(X) r(X)] ≈ (1/L) Σ_{l=1}^{L} f(X^l) r(X^l).
Explain the role of r and L in this approximation in the context of importance sampling.
What types of importance sampling did we discuss in the lecture?
Describe a financial example where importance sampling is typically used.
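For part (b)(iv), a minimal importance-sampling sketch for a rare-event payoff; the original density p is N(0, 1), the sampling density h is a shifted normal, r = p/h is the likelihood ratio, and the threshold, shift and sample size are arbitrary choices for the example.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
K, L, mu_h = 4.0, 100_000, 4.0        # hypothetical threshold, number of samples, shift

def f(x):
    # payoff that is nonzero only in the far right tail under N(0,1)
    return np.maximum(x - K, 0.0)

# plain Monte Carlo under the original density p = N(0,1)
x_p = rng.standard_normal(L)
plain = f(x_p)

# importance sampling: draw X^l from h = N(mu_h, 1) and weight by r = p/h
x_h = mu_h + rng.standard_normal(L)
r = norm.pdf(x_h) / norm.pdf(x_h, loc=mu_h)
weighted = f(x_h) * r

print(plain.mean(), weighted.mean())   # both estimate E[f(X)]
print(plain.var(), weighted.var())     # the importance-sampling variance is far smaller here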
(c) What are greeks and what are they needed for in financial modeling? Give two examples
of greeks introduced in the lecture. Name a possible method to calculate greeks.
(d) Suppose that the following estimate Ŷ is used to approximate the sensitivity ∂V/∂θ:
Ŷ = (1/(2N∆θ)) Σ_{i=1}^{N} (X^(i)(θ + ∆θ) − X^(i)(θ − ∆θ)).
Derive an expression for the variance of the estimator Ŷ if we use:
(i) independent samples for bumped variables (ii) the same samples for bumped variables.
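For part (d), a sketch contrasting the two choices, assuming the hypothetical case where X(θ) is a call payoff under a lognormal model and θ is the initial spot, so that Ŷ approximates a delta by central differences; parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(7)
r, sigma, T, K, N, dtheta = 0.03, 0.2, 1.0, 1.0, 100_000, 0.01   # hypothetical parameters

def X(theta, w):
    # X(theta): call payoff when the initial spot is theta and W_T = w
    return np.maximum(theta * np.exp((r - 0.5 * sigma**2) * T + sigma * w) - K, 0.0)

theta = 1.0
w1 = np.sqrt(T) * rng.standard_normal(N)
w2 = np.sqrt(T) * rng.standard_normal(N)

indep = (X(theta + dtheta, w1) - X(theta - dtheta, w2)) / (2 * dtheta)    # (i) independent samples
common = (X(theta + dtheta, w1) - X(theta - dtheta, w1)) / (2 * dtheta)   # (ii) the same samples

print(indep.mean(), common.mean())          # both approximate dV/dtheta
print(indep.var() / N, common.var() / N)    # reusing the samples gives a much smaller variance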
(e) Consider for m ∈ N the independent and identically distributed random variables
ξ_m =  1   with probability p
      −1   with probability p        (4)
       0   with probability 1 − 2p
for some p ∈ [0, 1], and consider for the time grid {0, ∆t, 2∆t, . . .} and space discretisation
∆x ∈ R+ a process X^∆ described recursively by
X^∆_0 = 0   and   X^∆_{(m+1)∆t} = X^∆_{m∆t} + ξ_m ∆x,   for m ∈ N.   (5)
(i) Determine the mean and variance of X^∆_{m∆t} and calculate P(X^∆_{(m+1)∆t} = m∆x).
(ii) For which (admissible) values of p is Var(X^∆_{∆t}) = ∆t satisfied?
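A minimal simulation sketch of the random walk (5); the values of p, ∆t and ∆x below are arbitrary and chosen only for illustration.

import numpy as np

rng = np.random.default_rng(8)
p, dt, dx, M, paths = 0.25, 0.01, 0.2, 100, 50_000   # hypothetical parameters

# xi_m takes the values +1, -1, 0 with probabilities p, p, 1 - 2p
xi = rng.choice([1, -1, 0], size=(paths, M), p=[p, p, 1 - 2 * p])
X = np.concatenate([np.zeros((paths, 1)), np.cumsum(xi * dx, axis=1)], axis=1)

m = 10
print(X[:, m].mean(), X[:, m].var())   # compare with the exact values 0 and 2 * p * m * dx**2
print(2 * p * m * dx**2)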
(f) Recall the heat equation on R
∂u/∂t = (1/2) ∂²u/∂x².   (6)
Furthermore, consider the scheme
u^{m−1}_n = (1 + 2p) u^m_n − p (u^m_{n+1} + u^m_{n−1}),   (7)
with initial condition u^0 = 0, boundary conditions u^m_{−N} = u^m_N = 0 and with 2p = ∆t/∆x².
Finally, consider the following Algorithm:
Decide whether each of the following statements is true or false (T/F) and explain your answer:
(i) The scheme (7) is a finite difference scheme corresponding to the heat equation (6).
(ii) The Algorithm above describes a finite difference scheme to the heat equation (6).
(iii) The Algorithm above corresponds to the finite difference scheme (7).
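For comparison with (7), a sketch of the standard explicit finite-difference scheme for the heat equation (6); the grid sizes and the initial profile are arbitrary illustration choices, and p = ∆t/(2∆x²) matches the definition 2p = ∆t/∆x² above.

import numpy as np

N_space, M_time = 50, 200          # hypothetical grid sizes
dx = 0.1
dt = 0.5 * dx**2                   # chosen so that p = dt / (2 dx^2) = 0.25 (stable regime)
p = dt / (2 * dx**2)

x = dx * np.arange(-N_space, N_space + 1)
u = np.exp(-x**2)                  # some initial profile u(0, x)

for _ in range(M_time):
    u_new = u.copy()
    # explicit scheme for u_t = 0.5 u_xx:
    # u^{m+1}_n = (1 - 2p) u^m_n + p (u^m_{n+1} + u^m_{n-1})
    u_new[1:-1] = (1 - 2 * p) * u[1:-1] + p * (u[2:] + u[:-2])
    u_new[0] = u_new[-1] = 0.0     # boundary conditions u^m_{-N} = u^m_{N} = 0
    u = u_new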