Questions

(1) Monetary risk measures: properties and applications to the evaluation of
risk of a portfolio. Diversification and VaR.
A monetary risk measure is a way to quantify the risk associated with a financial position X, which is a random variable X : Ω → R (Ω is the state space of all possible outcomes). We define the space of all financial positions to be a linear space X (which contains the constants), endowed with the supremum norm ‖X‖ := sup_{ω∈Ω} |X(ω)|.
Def. We define a monetary risk measure as a function ρ : X → R with the
following two properties:
Monotonicity:
X ≤ Y (i.e. X(ω) ≤ Y(ω) for all ω ∈ Ω) =⇒ ρ(X) ≥ ρ(Y)
Cash invariance (translation invariance):
X ∈ X, a ∈ R =⇒ ρ(X + a) = ρ(X) − a
Two important additional properties for monetary risk measures are:
Convexity:
For X, Y ∈ X and an arbitrary λ ∈ (0, 1), we have:
ρ(λX + (1 − λ)Y) ≤ λρ(X) + (1 − λ)ρ(Y)
Positive homogeneity:
For all λ ≥ 0 and X ∈ X,
ρ(λX) = λρ(X)
A convex monetary risk measure that is positively homogeneous is called coherent.
Convexity has the interpretation that diversification is rewarded. Under positive homogeneity, convexity is equivalent to subadditivity: ρ(X + Y) ≤ ρ(X) + ρ(Y), which means that if we, e.g., have a financial position consisting of investments from different sectors, the total risk is bounded above by the sum of the individual risks.
Examples of monetary risk measures:
Worst-case risk measure:
ρ_max(X) = − inf_{ω∈Ω} X(ω),   ∀X ∈ X.
VaR - Value at Risk
For a given probability level α ∈ (0, 1):
VaR_α(X) = inf{ m ∈ R | P(m + X < 0) ≤ α }
VaR is probably the most commonly used monetary risk measure, but it has a weakness: it is not convex (nor subadditive in general), and may therefore penalize diversification of the financial position.
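A minimal numerical sketch of the definition above: VaR_α can be estimated from simulated samples of a position X as the negative empirical α-quantile. The Gaussian P&L distributions and parameters below are illustrative assumptions (for Gaussian positions VaR happens to be subadditive; for skewed or heavy-tailed positions VaR(X + Y) ≤ VaR(X) + VaR(Y) can fail, which is the weakness mentioned above).

# Empirical VaR sketch (assumed Gaussian P&L samples, illustrative only)
import numpy as np

def value_at_risk(samples, alpha):
    """Empirical VaR_alpha(X) = inf{m : P(m + X < 0) <= alpha},
    i.e. the negative alpha-quantile of the samples of X."""
    return -np.quantile(samples, alpha)

rng = np.random.default_rng(0)
X = rng.normal(loc=0.01, scale=0.2, size=100_000)   # hypothetical P&L of one position
Y = rng.normal(loc=0.01, scale=0.2, size=100_000)   # a second, independent position

alpha = 0.05
print("VaR_0.05(X)     :", value_at_risk(X, alpha))
print("VaR(X) + VaR(Y) :", value_at_risk(X, alpha) + value_at_risk(Y, alpha))
print("VaR(X + Y)      :", value_at_risk(X + Y, alpha))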
(2) Monetary risk measures and acceptance set. Acceptable positions and risk
measures.
From a risk measure ρ, we can construct a corresponding acceptance set. We say that X is an acceptable position if ρ(X) ≤ 0 (chosen by convention). We define the acceptance set of the m.r.m ρ to be:
A_ρ := { X ∈ X | ρ(X) ≤ 0 }
This set is always non-empty, and inf{ m ∈ R | m ∈ A_ρ } > −∞. If X ∈ A_ρ is an acceptable position and Y ∈ X is another position with Y ≥ X, then Y ∈ A_ρ as well (by monotonicity).
If we only have the set A, we can reconstruct the m.r.m from it:
ρ(X) = inf{ m ∈ R | m + X ∈ A }.
Important properties for the m.r.m also transfer over to the acceptance set.
ρ convex ⇐⇒ A convex
ρ pos. homogeneous ⇐⇒ A is a cone
ρ coherent ⇐⇒ A is a convex cone
A is a cone means that:
λ ≥ 0, X ∈ A ⇒ λX ∈ A
We can also go the other way: for a given set A ⊂ X of acceptable positions, we construct the function:
ρ_A(X) := inf{ m ∈ R | m + X ∈ A }
ρ_A is then a monetary risk measure, and properties of the set A transfer over to the associated risk measure ρ_A:
A is convex =⇒ ρ_A is a convex m.r.m
A is a cone =⇒ ρ_A is positively homogeneous
A is a convex cone =⇒ ρ_A is a coherent m.r.m
We always have A ⊆ A_{ρ_A}, and if A in addition satisfies a closure property, then A = A_{ρ_A}.
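A minimal numerical sketch of the construction ρ_A(X) = inf{m | m + X ∈ A}: the infimum over the cash amount m can be found by bisection. The acceptance set used below, A = {X : E[1 − e^{−X}] ≥ 0} (a utility-based set), is an illustrative assumption; for this particular A the construction recovers the entropic risk measure ρ(X) = log E[e^{−X}], which gives a closed form to check against.

# rho_A from an acceptance set by bisection (assumed utility-based A, illustrative only)
import numpy as np

def acceptable(samples, m):
    """Empirical version of 'm + X is in A' for A = {X : E[1 - exp(-X)] >= 0}."""
    return np.mean(1.0 - np.exp(-(m + samples))) >= 0.0

def rho_from_acceptance_set(samples, lo=-50.0, hi=50.0, tol=1e-8):
    """rho_A(X) = inf{m : m + X in A}; acceptability is monotone in m, so bisect."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if acceptable(samples, mid):
            hi = mid
        else:
            lo = mid
    return hi

rng = np.random.default_rng(1)
X = rng.normal(loc=0.0, scale=1.0, size=200_000)
print("rho_A(X) by bisection :", rho_from_acceptance_set(X))
print("log E[exp(-X)]        :", np.log(np.mean(np.exp(-X))))   # closed form for this A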
(3) Monetary risk measures on bounded measurable functions: representation
results for convex and coherent monetary risk measures in terms of finitely
additive probabilities.
X is the space of all bounded, measurable functions on the measurable space (Ω, F) (a Banach space when endowed with the supremum norm ‖ · ‖). We denote the space of finitely additive, normalized set functions (i.e. finitely additive probability measures) on (Ω, F) by M_{1,f} = M_{1,f}(Ω, F).
Def. Let A_1, . . . , A_N be mutually disjoint sets in the σ-algebra F. A mapping Q : F → R is a finitely additive probability measure if:
(i) Q(∅) = 0
(ii) Q( ⋃_{i=1}^N A_i ) = Σ_{i=1}^N Q(A_i)
(iii) Q(Ω) = 1
For X ∈ X, we denote the integral by the expectation: ∫ X dQ = E_Q[X].
We have the following representation theorem:
Thm. Any convex risk measure ρ on X has the form:
ρ(X) = max_{Q∈M_{1,f}} ( E_Q[−X] − α_min(Q) ),   X ∈ X
where the minimal penalty function is given by:
α_min(Q) := sup_{Y∈A_ρ} E_Q[−Y],   Q ∈ M_{1,f}
(the representation can also hold for other penalty functions α, and any such α satisfies α ≥ α_min).
If ρ is a coherent risk measure, then α_min only takes the values 0 and +∞, so the representation reduces to:
ρ(X) = max_Q E_Q[−X]
where the maximum is taken over the Q ∈ M_{1,f} with α_min(Q) = 0.
Proof Sketch
For a convex ρ we prove the representation by showing ρ(X) ≥ sup_Q{· · ·} and ρ(X) ≤ sup_Q{· · ·}, which gives equality. For the ≥-inequality we start with X ∈ X and define X′ := X + ρ(X). By cash invariance:
ρ(X′) = ρ(X + ρ(X)) = ρ(X) − ρ(X) = 0 =⇒ X′ ∈ A_ρ
From the definition of the penalty function:
α_min(Q) = sup_{Z∈A_ρ} E_Q[−Z] ≥ E_Q[−X′] = E_Q[−X − ρ(X)] = E_Q[−X] − ρ(X)
so α_min(Q) ≥ E_Q[−X] − ρ(X), i.e. ρ(X) ≥ E_Q[−X] − α_min(Q) for every Q, and hence
ρ(X) ≥ sup_{Q∈M_{1,f}} { E_Q[−X] − α_min(Q) }.
For ≤ we use functional analysis to construct a specific probability measure Q̂ for which the reverse inequality holds. If ρ is coherent, the acceptance set A_ρ is a cone. Since 0 ∈ A_ρ, we get α_min(Q) = sup_{Z∈A_ρ} E_Q[−Z] ≥ 0, and if α_min(Q) > 0, then scaling (λZ ∈ A_ρ for all λ ≥ 0) forces α_min(Q) = +∞. Hence α_min only takes the values 0 and +∞, and on the measures where it is finite it equals 0, which gives the coherent representation above.
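A minimal numerical sketch of the representation theorem: on a finite state space, ρ(X) = max_Q ( E_Q[−X] − α(Q) ) can be evaluated directly over a small family of scenario measures. The states, measures and penalty values below are illustrative assumptions; the cash-invariance check at the end follows from the representation since the penalty does not depend on X.

# Robust representation over a finite scenario set (illustrative assumptions)
import numpy as np

X = np.array([3.0, 1.0, -2.0, -5.0])                 # payoff of the position in 4 states

scenarios = {                                         # candidate measures Q
    "P":        np.array([0.25, 0.25, 0.25, 0.25]),
    "stressed": np.array([0.10, 0.10, 0.30, 0.50]),
    "benign":   np.array([0.40, 0.40, 0.15, 0.05]),
}
alpha = {"P": 0.0, "stressed": 1.0, "benign": 0.0}    # penalty alpha(Q) for each scenario

def rho(X):
    """Convex risk measure via its robust representation (max over the scenarios)."""
    return max(q @ (-X) - alpha[name] for name, q in scenarios.items())

print("rho(X)     =", rho(X))
print("rho(X + 2) =", rho(X + 2.0), " (cash invariance: equals rho(X) - 2)")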
(4) Monetary risk measures on bounded measurable functions: representation
results for convex and coherent monetary risk measures in terms of (countably
additive) probabilities.
With M1 := M1 (Ω, F ), we denote the space of all σ-additive probability
measures:
Def. Let A_1, A_2, . . . be mutually disjoint sets in the σ-algebra F. A mapping Q : F → R is a σ-additive probability measure if:
(i) Q(∅) = 0
(ii) Q( ⋃_{i=1}^∞ A_i ) = Σ_{i=1}^∞ Q(A_i)
(iii) Q(Ω) = 1
For σ-additive measures, convex monetary risk measures ρ can be represented by a penalty function α which is infinite outside the set M_1, so that effectively:
ρ(X) = sup_{Q∈M_1} ( E_Q[−X] − α(Q) )    (⋆)
Thm. When this representation holds, ρ satisfies the following two continuity properties:
(a) Continuity from above:
X_n ↓ X =⇒ ρ(X_n) ↑ ρ(X)
or, equivalently,
(b) Lower semicontinuity: for bounded sequences X_n → X,
ρ(X) ≤ lim inf_{n→∞} ρ(X_n)
Proof Sketch
We show (⋆) ⇒ (b), (b) ⇒ (a) and (a) ⇒ (b). By dominated convergence (applied under each Q ∈ M_1):
(⋆) ⇒ ρ(X) ≤ lim inf_{n→∞} ρ(X_n), which is (b).
(b) ⇒ (a): If X_n ↓ X, then X_n ≥ X, so by monotonicity ρ(X_n) ≤ ρ(X), and the sequence ρ(X_n) is increasing; hence
lim_{n→∞} ρ(X_n) ≤ ρ(X).
But by (b), ρ(X) ≤ lim inf_{n→∞} ρ(X_n), so lim_{n→∞} ρ(X_n) = ρ(X).
(a) ⇒ (b): X_n → X is a bounded sequence, so we define Y_m := sup_{n≥m} X_n, which is a decreasing sequence with Y_m ↓ X. By monotonicity, Y_n ≥ X_n ⇒ ρ(Y_n) ≤ ρ(X_n), and since Y_n ↓ X, continuity from above gives
ρ(X) = lim_{n→∞} ρ(Y_n) ≤ lim inf_{n→∞} ρ(X_n).
Coherent risk measures satisfy the representation
ρ(X) = sup_{Q∈Q_max} E_Q[−X]
where Q_max := {Q ∈ M_1 | α_min(Q) = 0} ⊆ M_1, i.e. the supremum is taken exactly over the measures with zero (minimal) penalty.
⋆ (5) Convex monetary risk measures on L∞ : definitions, representations
and equivalent properties.
Def. We fix a probability measure P on the measurable space (Ω, F ) and set
the space of all financial positions X = L∞ := L∞ (Ω, F , P )
Def. Absolutely continuous: a measure Q is absolutely continuous wrt P ,
denoted Q ≪ P , if for A ∈ F (the σ-algebra):
P (A) = 0 =⇒ Q(A) = 0.
We define the set of probability measures:
M_1(P) = M_1(Ω, F, P) := { Q ∈ M_1(Ω, F) | Q ≪ P } ⊆ M_1(Ω, F)
Thm. For X ∈ L∞, a convex m.r.m ρ has a representation in terms of Q ∈ M_1(P) or, equivalently, the representation with the minimal penalty function restricted to the space of probability measures M_1(P):
ρ(X) = sup_{Q∈M_1(P)} ( E_Q[−X] − α_min(Q) )
When this holds, we have the following equivalent properties of ρ:
• ρ is continuous from above: if X_n ↓ X P-a.s., then ρ(X_n) ↑ ρ(X).
• ρ satisfies the Fatou property: for a bounded sequence X_n → X P-a.s., ρ(X) ≤ lim inf_{n→∞} ρ(X_n).
• ρ is lower semicontinuous with respect to the weak* topology on L∞.
• The acceptance set A_ρ ⊂ L∞ is weak*-closed.
⋆ (6) Risk measures on L^p, p ∈ [1, ∞]: definitions and representation properties in terms of dual spaces.
We work in the framework where X = L^p for p ∈ [1, ∞], and consider the corresponding dual space of continuous linear functionals ℓ:
X′ = { ℓ : X → R | ℓ is linear and continuous }
(the identifications of the dual spaces below are isometric isomorphisms, i.e. they preserve the norm).
For p ∈ (1, ∞) and X = L^p, the dual is X′ = (L^p)′ = L^q, where 1/p + 1/q = 1 (p and q are a conjugate pair). For p = 1, X = L^1, the dual is X′ = (L^1)′ = L^∞, and for p = ∞, X = L^∞, the dual is X′ = ba(P), the space of bounded, finitely additive signed measures that are absolutely continuous w.r.t. P. (In general L^1 ⊆ ba(P).)
By the Riesz representation theorem, such functionals can be written as expectations, ℓ(X) = E[ZX] with Z in the dual space, and vice versa.
Thm. For a convex and lower semicontinuous (lsc) ρ : X → R we have the representation in terms of the dual space:
ρ(X) = sup_{ℓ∈X′} ( ℓ(−X) − α(ℓ) ),   X ∈ X.
Conversely, any ρ with such a representation is convex and lsc (the two statements are equivalent).
⋆ (7) Stochastic integration: martingales and local martingales. Basic
elements.
On a probability space (Ω, F, P), we have the process {X_t}_{t∈[0,T]}, written as X : Ω × [0, T] → R. Properties:
1) Measurability: ∀A ∈ B(R), the preimage X^{−1}(A) lies in the product σ-algebra F ⊗ B([0, T]).
2) Adaptedness: a process Xt is adapted if ∀t ∈ [0, T ], Xt is Ft -measurable.
3) Progressive measurability: a stronger property than adaptedness; for every t, the restriction of X to Ω × [0, t] is F_t ⊗ B([0, t])-measurable. Any left- or right-continuous adapted process is progressively measurable, and an adapted, measurable process has a progressively measurable version.
4) Predictability: in a sense, the value at a time t is announced by the preceding time points.
Martingale
A stochastic process {Mt }t∈[0,T ] is a martingale if it satisfies the following
three properties:
M1) M_t is F_t-measurable (adapted) for all t.
M2 ) E[|Mt |] < ∞ for all t.
M3 ) The martingale property. For s ≤ t:
E[Mt | Fs ] = Ms
Random Time, Stopping Times and Optional Times
A random time is a random variable τ : Ω → [0, ∞]. We call a random time τ a stopping time if, for all t:
{τ ≤ t} ∈ F_t
which we interpret as: whether the event {τ ≤ t} has occurred is known at time t (the notion of stopping time depends on the filtration).
Local Martingale
For the probability space (Ω, F, P) with the filtration F, the process M_t is called a local martingale if there exists a sequence of F-stopping times {τ_k}_k such that:
Mloc1) The stopping times are a.s. increasing: P(τ_k ≤ τ_{k+1}) = 1.
Mloc2) The stopping times diverge a.s.: P(τ_k → ∞ as k → ∞) = 1.
Mloc3) The stopped process M_{t∧τ_k} is a martingale for every k.
For the usual Itô integral we have the assumption that the integrand φ satisfies E[∫_0^T φ²(t) dt] < ∞, but sometimes we relax this assumption to P(∫_0^T φ²(t) dt < ∞) = 1 (in Øksendal, φ ∈ W). Under this relaxed assumption the stochastic integral is only a local martingale (and we do not know whether it is integrable).
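A minimal Monte Carlo sketch of the two facts used above for a square-integrable integrand: the Itô integral has zero expectation (martingale property) and satisfies the Itô isometry. The integrand φ_t = W_t, the grid and the sample size are illustrative assumptions.

# Martingale property and Ito isometry of the stochastic integral (illustrative sketch)
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, T = 200_000, 200, 1.0
dt = T / n_steps

W = np.zeros(n_paths)
stoch_int = np.zeros(n_paths)       # running value of  int_0^t phi_s dW_s  with phi = W
sum_phi2 = np.zeros(n_paths)        # running value of  int_0^t phi_s^2 ds

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    stoch_int += W * dW             # phi evaluated at the left endpoint (adaptedness)
    sum_phi2 += W**2 * dt
    W += dW

print("E[ int_0^T W dW ]     :", stoch_int.mean(), "  (martingale property: ~0)")
print("E[ (int_0^T W dW)^2 ] :", (stoch_int**2).mean())
print("E[ int_0^T W^2 dt ]   :", sum_phi2.mean(), "  (Ito isometry: both ~T^2/2 = 0.5)")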
Uniform Integrability
{Y_t}_{t∈[0,T]} is uniformly integrable if:
lim_{c→∞} sup_t E[ |Y_t| 1_{|Y_t|>c} ] = 0
(8) BSDEs definition, existence (and uniqueness) of a (strong) solution.
For BSDEs we have the general setting: a Brownian motion Wt , a probability
space (Ω, F , P ) with the filtration F generated by the BM. We always work
with a fixed, finite time horizon [0, T ].
S²(0, T): the set of real-valued, progressively measurable processes Y such that:
E[ sup_{0≤t≤T} |Y_t|² ] < ∞
H²(0, T): the set of progressively measurable processes Z such that:
E[ ∫_0^T |Z_t|² dt ] < ∞.
For a BSDE we are given a pair (ξ, g), where ξ is called the terminal/final condition and the function g is called the driver/generator. We assume ξ is an F_T-measurable random variable in L², and for g we assume: (i) g(t, y, z) is progressively measurable for all y, z; (ii) g(·, 0, 0) ∈ H²(0, T); (iii) g is Lipschitz continuous in (y, z), i.e. there exists a constant C_g such that:
|g(t, y_1, z_1) − g(t, y_2, z_2)| ≤ C_g |y_1 − y_2| + C_g |z_1 − z_2|
A BSDE is an equation of the form:
−dY_t = g(t, Y_t, Z_t) dt − Z_t dW_t,   Y_T = ξ
A solution to the BSDE is a pair (Y, Z) ∈ S²(0, T) × H²(0, T) that satisfies:
Y_t = ξ + ∫_t^T g(s, Y_s, Z_s) ds − ∫_t^T Z_s dW_s
for 0 ≤ t ≤ T.
Thm. Given a BSDE with a pair (ξ, g) that satisfies the assumptions, there
exists a unique solution (Y, Z).
Proof sketch
Fix (U, V) ∈ S²(0, T) × H²(0, T) and write ḡ_s := g(s, U_s, V_s). We define a martingale, and by the martingale representation theorem (MRT):
M_t := E[ ξ + ∫_0^T ḡ_s ds | F_t ]  =⇒  M_t = M_0 + ∫_0^t Z_s dW_s,   Z ∈ H²(0, T)
Define:
Y_t := E[ ξ + ∫_t^T ḡ_s ds | F_t ]
     = E[ ξ + ∫_t^T ḡ_s ds ± ∫_0^t ḡ_s ds | F_t ]
     = E[ ξ + ∫_0^T ḡ_s ds | F_t ] − ∫_0^t ḡ_s ds
Recognising the conditional expectation as the martingale M_t:
     = M_0 + ∫_0^t Z_s dW_s − ∫_0^t ḡ_s ds
From this we observe that:
Y_T = M_0 + ∫_0^T Z_s dW_s − ∫_0^T ḡ_s ds = ξ
Continuing:
Y_t = M_0 + ∫_0^t Z_s dW_s − ∫_0^t ḡ_s ds
    = M_0 + ∫_0^T Z_s dW_s − ∫_0^T ḡ_s ds − ∫_t^T Z_s dW_s + ∫_t^T ḡ_s ds
    = ξ + ∫_t^T ḡ_s ds − ∫_t^T Z_s dW_s
where in the second line the first three terms equal Y_T = ξ.
We verify that Y ∈ S²(0, T): use that |a + b + c|² ≤ 3(|a|² + |b|² + |c|²) and show that the expectation of the supremum is finite. The first two terms are fine since ξ ∈ L² and ḡ ∈ H²(0, T), and the supremum of the stochastic integral of Z is controlled by Doob's inequality.
The final step is to show that the solution pair (Y, Z) of the BSDE exists and is unique. We show that the map Φ : S²(0, T) × H²(0, T) → S²(0, T) × H²(0, T), with (Y, Z) = Φ(U, V) given by:
Y_t = ξ + ∫_t^T g(s, U_s, V_s) ds − ∫_t^T Z_s dW_s
is a contraction. We define a norm:
‖(Y, Z)‖_β = ( E[ ∫_0^T e^{βs} (Y_s² + Z_s²) ds ] )^{1/2}
Under this norm, which for β large enough is equivalent to the standard norm, we can derive the inequality
‖Φ(U^1, V^1) − Φ(U^2, V^2)‖_β ≤ (1/2) ‖(U^1 − U^2, V^1 − V^2)‖_β
which shows that Φ is a strict contraction on the complete Banach space. By Banach's fixed point theorem a fixed point exists and is unique, which in turn proves that the solution to the BSDE exists and is unique.
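A minimal numerical sketch of how such a solution can be computed in a Markovian case, where ξ = Φ(W_T): the BSDE is solved by backward induction on a binomial (random-walk) approximation of W, with Z read off as a discrete difference quotient. This scheme is not part of the notes, so treat it as an assumption; the driver g(t, y, z) = a·y, the terminal function Φ and the grid are chosen so that the answer can be checked in closed form (for this driver, Y_0 = e^{aT} E[Φ(W_T)]).

# Backward induction for a Markovian BSDE on a binomial approximation of W (sketch)
import numpy as np

T, N, a = 1.0, 400, 0.5
dt = T / N
dx = np.sqrt(dt)                      # random-walk step: dW is approximated by +-sqrt(dt)

def g(t, y, z):
    return a * y                      # hypothetical driver, Lipschitz in (y, z)

def Phi(x):
    return np.maximum(x, 0.0)         # terminal condition xi = Phi(W_T)

# terminal layer: node k at time step N corresponds to W ~ (2k - N) * dx, k = 0..N
Y = Phi((2 * np.arange(N + 1) - N) * dx)

for i in range(N - 1, -1, -1):        # backward induction over time steps
    up, down = Y[1:i + 2], Y[0:i + 1]
    EY = 0.5 * (up + down)            # conditional expectation E_i[Y_{i+1}]
    Z = (up - down) / (2.0 * dx)      # E_i[Y_{i+1} dW] / dt
    Y = EY + g(i * dt, EY, Z) * dt    # explicit Euler step for the driver term

print("Y_0 from the scheme :", Y[0])
print("closed form check   :", np.exp(a * T) * np.sqrt(T / (2.0 * np.pi)))   # e^{aT} E[max(W_T, 0)]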
(9) Linear BSDEs.
For a pair (ξ, g) where g is linear, with time-dependent coefficients A_t, B_t and C_t, where A_t and B_t are bounded and progressively measurable and C_t ∈ H²(0, T), we have a linear BSDE, which is of the form:
−dY_t = ( A_t Y_t + B_t Z_t + C_t ) dt − Z_t dW_t,   Y_T = ξ
Thm. The unique solution to the linear BSDE is given by:
Γ_t Y_t = E[ Γ_T ξ + ∫_t^T Γ_s C_s ds | F_t ]
for t ∈ [0, T], where Γ_t comes from the corresponding linear SDE:
dΓ_t = Γ_t A_t dt + Γ_t B_t dW_t,   Γ_0 = 1
Proof
Apply Itô's product formula to Γ_t Y_t (using the stochastic calculus rules dt² = dt dW_t = dW_t dt = 0, dW_t² = dt):
d(Γ_t Y_t) = Y_t dΓ_t + Γ_t dY_t + dΓ_t dY_t
Inserting the dynamics:
d(Γ_t Y_t) = Γ_t Y_t A_t dt + Γ_t Y_t B_t dW_t − Γ_t ( A_t Y_t + B_t Z_t + C_t ) dt + Γ_t Z_t dW_t + Γ_t B_t Z_t dt
           = −Γ_t C_t dt + Γ_t ( Y_t B_t + Z_t ) dW_t
Integral form:
Γ_t Y_t = Γ_0 Y_0 − ∫_0^t Γ_s C_s ds + ∫_0^t Γ_s ( Y_s B_s + Z_s ) dW_s
Recalling that Γ_0 = 1 and rearranging:
Γ_t Y_t + ∫_0^t Γ_s C_s ds = Y_0 + ∫_0^t Γ_s ( Y_s B_s + Z_s ) dW_s
The right-hand side is a local martingale, and using the Burkholder–Davis–Gundy inequality with p = 1 we control it through its quadratic variation [Pham 9]:
E[ ⟨ ∫_0^· Γ_s ( Y_s B_s + Z_s ) dW_s ⟩_t^{1/2} ] = E[ ( ∫_0^t Γ_s² ( Y_s B_s + Z_s )² ds )^{1/2} ]
With some work we can show this is finite, so we can conclude that the local martingale is a true martingale. So, by the martingale property (and using Y_T = ξ):
Γ_t Y_t + ∫_0^t Γ_s C_s ds = E[ Γ_T ξ + ∫_0^T Γ_s C_s ds | F_t ]  =⇒
Γ_t Y_t = E[ Γ_T ξ + ∫_0^T Γ_s C_s ds | F_t ] − ∫_0^t Γ_s C_s ds
        = E[ Γ_T ξ + ∫_t^T Γ_s C_s ds | F_t ]
We find Z by applying the MRT to the martingale.
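A minimal Monte Carlo sketch of the formula at t = 0: since Γ_0 = 1, Y_0 = E[ Γ_T ξ + ∫_0^T Γ_s C_s ds ]. The constant coefficients A, B, C and the terminal condition ξ = W_T are illustrative assumptions; for them a direct Gaussian/Girsanov computation gives Y_0 = e^{AT} B T + C (e^{AT} − 1)/A, which serves as a check.

# Monte Carlo evaluation of the linear-BSDE solution at t = 0 (illustrative assumptions)
import numpy as np

A, B, C, T = 0.1, 0.3, 0.2, 1.0
n_paths, n_steps = 200_000, 200
dt = T / n_steps

rng = np.random.default_rng(3)
W = np.zeros(n_paths)
Gamma = np.ones(n_paths)              # dGamma = Gamma (A dt + B dW), Gamma_0 = 1
integral = np.zeros(n_paths)          # running value of int_0^T Gamma_s * C ds

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    integral += Gamma * C * dt
    Gamma *= np.exp((A - 0.5 * B**2) * dt + B * dW)   # exact step of the stochastic exponential
    W += dW

Y0_mc = np.mean(Gamma * W + integral)                  # xi = W_T
Y0_cf = np.exp(A * T) * B * T + C * (np.exp(A * T) - 1.0) / A

print("Y_0 by Monte Carlo :", Y0_mc)
print("Y_0 closed form    :", Y0_cf)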
(10) Comparison theorem for BSDEs
The Comparison Theorem gives us the opportunity to compare BSDEs. Suppose we have two pairs of terminal conditions and drivers satisfying the usual conditions, (ξ^1, g^1) and (ξ^2, g^2), with corresponding solutions (Y^1, Z^1) and (Y^2, Z^2), satisfying the following assumptions:
• ξ^1 ≤ ξ^2 a.s.
• g^1(t, Y_t^1, Z_t^1) ≤ g^2(t, Y_t^1, Z_t^1)  dt × dP-a.e.
• g^2(t, Y_t^1, Z_t^1) ∈ H²(0, T)
Then Y_t^1 ≤ Y_t^2 for all t ∈ [0, T]. In addition: if Y_0^2 ≤ Y_0^1, then we have equality Y_t^1 = Y_t^2, and if P(ξ^1 < ξ^2) > 0, or we have a strict inequality between the drivers g^1 < g^2, then Y_0^1 < Y_0^2.
Proof
We define Ȳ_t := Y_t^2 − Y_t^1, Z̄_t := Z_t^2 − Z_t^1 and ḡ_t := g^2(t, Y_t^1, Z_t^1) − g^1(t, Y_t^1, Z_t^1), and use the shorthand notation g^{ijk} := g^i(t, Y_t^j, Z_t^k). We examine the BSDE satisfied by the difference. Writing it out:
−dȲ_t = ( g^{222} − g^{111} ) dt − ( Z_t^2 − Z_t^1 ) dW_t = ( g^{222} − g^{111} ) dt − Z̄_t dW_t,   Ȳ_T = ξ^2 − ξ^1 =: ξ̄
We work with the dt-term and rewrite this BSDE as a linear BSDE. Adding and subtracting g^{212} and g^{211}:
g^{222} − g^{111} = g^{222} − g^{111} ± g^{212} ± g^{211}
                  = ( g^{222} − g^{212} ) + ( g^{212} − g^{211} ) + ( g^{211} − g^{111} )
                  = [ ( g^{222} − g^{212} ) / ( Y_t^2 − Y_t^1 ) ] ( Y_t^2 − Y_t^1 ) + [ ( g^{212} − g^{211} ) / ( Z_t^2 − Z_t^1 ) ] ( Z_t^2 − Z_t^1 ) + ḡ_t
                  = Δy_t Ȳ_t + Δz_t Z̄_t + ḡ_t
where Δy_t and Δz_t denote the two difference quotients (set to 0 when the corresponding denominator vanishes; they are bounded by the Lipschitz constant of g^2). Hence:
−dȲ_t = ( Δy_t Ȳ_t + Δz_t Z̄_t + ḡ_t ) dt − Z̄_t dW_t
This is a linear BSDE with A_t = Δy_t, B_t = Δz_t and C_t = ḡ_t. The solution of a linear BSDE is:
Γ_t Ȳ_t = E[ Γ_T ξ̄ + ∫_t^T Γ_s ḡ_s ds | F_t ]
with the corresponding SDE:
dΓ_t = Γ_t ( Δy_t dt + Δz_t dW_t ),   Γ_0 = 1
We see that Γ_t is a stochastic exponential, so Γ_t > 0. From our assumptions ξ̄ ≥ 0 and ḡ_t ≥ 0, so Ȳ_t ≥ 0 for all t ∈ [0, T], i.e. Y_t^1 ≤ Y_t^2.
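As a brief side remark (a standard computation, not written out above): solving dΓ_t = Γ_t( Δy_t dt + Δz_t dW_t ), Γ_0 = 1, by applying Itô's formula to log Γ_t gives the stochastic exponential
Γ_t = exp( ∫_0^t ( Δy_s − ½ (Δz_s)² ) ds + ∫_0^t Δz_s dW_s ) > 0,
which makes the strict positivity used in the sign argument explicit.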
(11) BSDEs, g-expectations, conditional g-expectations: definitions and
properties
For a BSDE with terminal condition and driver (ξ, g) satisfying the BSDE assumptions, we have the unique solution given by:
Y_t = Y_t^ξ = ξ + ∫_t^T g(s, Y_s, Z_s) ds − ∫_t^T Z_s dW_s.
From the BSDEs we can construct a non-linear expectation, called the g-expectation, where g is the driver of the BSDE.
Def. For X ∈ L²(F_T) and the unique solution (Y_t, Z_t) of the BSDE with data (X, g), we define the g-expectation as:
E_g[X] := Y_0^X = X + ∫_0^T g(s, Y_s, Z_s) ds − ∫_0^T Z_s dW_s
and for any t ∈ [0, T], the conditional g-expectation of X given F_t is defined as:
E_g[X | F_t] := Y_t^X = X + ∫_t^T g(s, Y_s, Z_s) ds − ∫_t^T Z_s dW_s
(and obviously E_g[X | F_0] = E_g[X]).
We lose linearity, but retain the other important properties of expectations.
Thm.
1) Monotonicity (proved by the comparison theorem):
X_1 ≤ X_2 =⇒ E_g[X_1 | F_t] ≤ E_g[X_2 | F_t]
2) "Constant" preservation. For X ∈ L²(F_t):
E_g[X | F_t] = X
3) Time consistency / tower property. If s ≤ t:
E_g[ E_g[X | F_t] | F_s ] = E_g[X | F_s]
4) Indicator-function homogeneity. For A ∈ F_t:
E_g[1_A X | F_t] = 1_A E_g[X | F_t]
With the additional assumption that g is independent of y, we also have additivity for η ∈ L²(F_t):
E_g[X + η | F_t] = E_g[X | F_t] + η
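A short verification sketch (standard, but not written out here) of why constancy and the additivity above follow from uniqueness of the BSDE solution on [t, T]; the first point uses the extra assumption g(·, y, 0) = 0 stated in question (13):
• Constancy: for X ∈ L²(F_t), the pair (Y_s, Z_s) := (X, 0), s ∈ [t, T], satisfies Y_s = X + ∫_s^T g(u, X, 0) du − ∫_s^T 0 dW_u = X, so by uniqueness E_g[X | F_t] = Y_t = X.
• Additivity: if g does not depend on y and (Y, Z) solves the BSDE with terminal condition X, then for η ∈ L²(F_t) the pair (Y_s + η, Z_s), s ∈ [t, T], satisfies Y_s + η = (X + η) + ∫_s^T g(u, Z_u) du − ∫_s^T Z_u dW_u, so by uniqueness on [t, T] we get E_g[X + η | F_t] = Y_t + η = E_g[X | F_t] + η.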
(which is important for risk measures). Finally, we also mention g-martingales:
Def. An F-adapted stochastic process X_t, t ∈ [0, T], with E[X_t²] < ∞ for all t, is a g-martingale (resp. g-sub/supermartingale) if for every s ≤ t we have:
E_g[X_t | F_s] = X_s   (resp. ≥, ≤)
⋆ (12) Monotonic limit theorem for BSDEs.
We consider a sequence of pairs of terminal points and drivers (Xi , g) for
i = 1, 2, . . ., where g is a fixed driver and Xi is a sequence of terminal points.
For each of these pairs, we get a sequence of solutions (Y^i, Z^i) satisfying:
Y_t^i = Y_T^i + ∫_t^T g(s, Y_s^i, Z_s^i) ds − ∫_t^T Z_s^i dW_s
where X_i = Y_T^i. We assume the solutions satisfy Y_t^i ↑ Y_t P-a.s. for all t as i → ∞, with Y ∈ H²(0, T). Under these assumptions, we have the following theorem:
Thm. There exists a Z ∈ H²(0, T) such that:
Y_t = X + ∫_t^T g(s, Y_s, Z_s) ds − ∫_t^T Z_s dW_s
(where X = Y_T).
(13) Risk measures via g-expectations: properties.
With the usual assumptions on g(t, y, z): it is progressively measurable for all y, z; g(·, 0, 0) ∈ H²(0, T); g satisfies the Lipschitz condition (with constants ν and μ); and additionally g(·, y, 0) = 0 for all y, we define the risk measure via the g-expectation:
ρ_g(X) := E_g[−X],   ∀X ∈ X
The two main properties for mrm are satisfied:
Monotonicity: the g-expectation is monotone (by the comparison theorem), so for X_1, X_2 ∈ X with X_1 ≤ X_2 we have E_g[X_1] ≤ E_g[X_2]. Hence:
X_1 ≤ X_2 =⇒ −X_1 ≥ −X_2 =⇒ ρ_g(X_1) = E_g[−X_1] ≥ E_g[−X_2] = ρ_g(X_2)
Cash invariance: for m ∈ R and X ∈ X we have, using the additivity of the g-expectation (which holds when g is independent of y):
ρ_g(X + m) = E_g[−X − m] = E_g[−X] − m = ρ_g(X) − m
Also, for a constant c ∈ R we have ρ_g(c) = E_g[−c] = −c, with the special case ρ_g(0) = 0, so ρ_g is always normalized.
Properties of the driver g determine the properties of the risk measure.
Thm. If g is convex in (y, z), then Eg [·] and Eg [· | Ft ] are convex.
Proof.
For an arbitrary λ ∈ (0, 1) and X_1, X_2 ∈ L²(F_T) we define:
Λ := λX_1 + (1 − λ)X_2
Since g is convex:
g(t, λy_1 + (1 − λ)y_2, λz_1 + (1 − λ)z_2) ≤ λ g(t, y_1, z_1) + (1 − λ) g(t, y_2, z_2)
so:
λ E_g[X_1 | F_t] + (1 − λ) E_g[X_2 | F_t] = λ Y_t^{X_1} + (1 − λ) Y_t^{X_2}
= E[ Λ + ∫_t^T ( λ g(s, Y_s^{X_1}, Z_s^{X_1}) + (1 − λ) g(s, Y_s^{X_2}, Z_s^{X_2}) ) ds | F_t ]
and by convexity the integrand dominates g(s, λY_s^{X_1} + (1 − λ)Y_s^{X_2}, λZ_s^{X_1} + (1 − λ)Z_s^{X_2}), so the pair ( λY^{X_1} + (1 − λ)Y^{X_2}, λZ^{X_1} + (1 − λ)Z^{X_2} ) is a supersolution of the BSDE with terminal condition Λ and driver g. By the comparison theorem it dominates Y^Λ, hence:
λ E_g[X_1 | F_t] + (1 − λ) E_g[X_2 | F_t] ≥ Y_t^Λ = E_g[λX_1 + (1 − λ)X_2 | F_t]
Thm. If g is positively homogeneous, then for all t ∈ [0, T] and α > 0, E_g[αX | F_t] = α E_g[X | F_t].
Proof
Comparing:
E_g[αX | F_t] = Y_t^{αX} = E[ αX + ∫_t^T g(s, Y_s^{αX}, Z_s^{αX}) ds | F_t ]
              = α · E[ X + ∫_t^T (1/α) g(s, Y_s^{αX}, Z_s^{αX}) ds | F_t ]
Since g is positively homogeneous:
              = α · E[ X + ∫_t^T g(s, Y_s^{αX}/α, Z_s^{αX}/α) ds | F_t ]
Without α we have:
E_g[X | F_t] = Y_t^X = E[ X + ∫_t^T g(s, Y_s^X, Z_s^X) ds | F_t ]
So (Y^{αX}/α, Z^{αX}/α) and (Y^X, Z^X) both solve the BSDE with terminal condition X and driver g, and by uniqueness of the solution they coincide; hence α E_g[X | F_t] = E_g[αX | F_t].
We can combine both these theorems:
Thm. If g is sub-linear (convex and pos.homog.) then Eg [·] and Eg [· | Ft ] are
sub-linear.
(14) Risk measures via g-expectations: Black-Scholes market pricing and risk
evaluation.
In the Black and Scholes market, B&S, we have the risky asset St with the
GBM-dynamics:
dSt = µt St dt + σt St dWt
S0 > 0
We have a time-dependent portfolio π_t, with the corresponding value process:
dV_t = π_t dS_t = π_t μ_t S_t dt + π_t σ_t S_t dW_t
where π_t is constructed so that it replicates X ∈ L²(F_T), i.e. V_T = X. We recall the general form of a BSDE:
−dY_t = g(t, Y_t, Z_t) dt − Z_t dW_t,   Y_T = ξ
and compare this to what we have in this case:
−dV_t = −π_t μ_t S_t dt − π_t σ_t S_t dW_t,   V_T = X
We see that Z_t = π_t σ_t S_t, which implies g(t, y, z) = −(μ_t/σ_t) z (linear and independent of y). From this we have a BSDE with terminal condition and driver (X, g), with solution Y:
Y_0^X = E_g[X] = ρ_g(−X),   Y_0^X = V_0^π
so the risk ρ_g(X) = E_g[−X] is the initial value of the strategy replicating −X, which is connected to the arbitrage-free price of the claim (by the linear BSDE representation, V_0^π equals the risk-neutral expectation E_Q[X] when there is no interest rate).
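A minimal Monte Carlo sketch of this connection: with constant μ, σ (and zero interest rate), the linear-BSDE/Girsanov representation gives Y_0 = E[Γ_T X] with Γ_T = exp(−θ W_T − θ²T/2), θ = μ/σ. For X = (S_T − K)^+ this should match the Black–Scholes call price with r = 0. All parameter values below are illustrative assumptions.

# Replication value via the Girsanov weight vs. the Black-Scholes formula (r = 0, sketch)
import numpy as np
from math import log, sqrt
from statistics import NormalDist

mu, sigma, S0, K, T = 0.08, 0.25, 100.0, 105.0, 1.0
theta = mu / sigma
rng = np.random.default_rng(4)

W_T = rng.normal(0.0, sqrt(T), size=1_000_000)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * W_T)   # stock under P
X = np.maximum(S_T - K, 0.0)                                  # claim to replicate
Gamma_T = np.exp(-theta * W_T - 0.5 * theta**2 * T)           # stochastic exponential (dQ/dP)

Y0_mc = np.mean(Gamma_T * X)                                  # linear BSDE value = E_Q[X]

N = NormalDist().cdf                                          # Black-Scholes price with r = 0
d1 = (log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * N(d1) - K * N(d2)

print("Y_0 via Gamma-weighted expectation :", Y0_mc)
print("Black-Scholes price (r = 0)        :", bs_price)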
(15) Dynamic risk measures: definition and properties.
When X ∈ X is a risky position and we want to monitor the risk over the
entire time [0, T ], we use a risk measure that changes over time: the dynamic
risk measure.
Def. A dynamic risk measure is a family of mappings, depending on t ∈ [0, T ]
that satisfies the three properties:
(a) ∀t, ρt : X → L0 (Ft )
(b) ρ0 is a static risk measure.
(c) ρT (X) = −X when X ∈ X is FT -measurable.
Properties of static risk measures are transferred to dynamic risk measures.
Dynamic convexity:
∀t ∈ [0, T ] : ρt is convex
Dynamic positivity (related to monotonicity)
∀t ∈ [0, T ] :
X ≥ 0 =⇒ ρt (X) ≤ ρt (0)
Dynamic constancy
∀t ∈ [0, T ] :
c ∈ R =⇒ ρt (c) = −c
Dynamic translability (a stronger cash invariance, since it applies to random variables and not just constants):
∀t ∈ [0, T], ∀X ∈ X and ∀Y ∈ X with Y F_t-measurable:
ρ_t(X + Y) = ρ_t(X) − Y
Dynamic sub-linearity:
∀t ∈ [0, T ], α ≥ 0, ∀X, Y ∈ X : ρt (αX) = αρt (X), ρt (X+Y ) ≤ ρt (X)+ρt (Y )
We have convexity and coherency for dynamic risk measures as well:
• A dynamic risk measure is coherent if it satisfies dynamic positivity,
translability and sub-linearity.
• A dynamic risk measure is convex if it satisfies dynamic convexity and
ρt (0) = 0 for all t ∈ [0, T ].
• A dynamic risk measure is time-consistent if for all t ∈ [0, T], for all X ∈ X and for all A ∈ F_t:
ρ_0(X 1_A) = ρ_0(−ρ_t(X) 1_A)
(16) Dynamic risk measures represented in terms of (countably additive)
probabilities and discussion on time-consistency.
Recall that M_1(P) = {Q ∈ M_1 | Q ≪ P}. We have the dynamic convex risk measure:
ρ_t(X) := ess sup_{Q∈M_1(P)} ( E_Q[−X | F_t] − α_t(Q) )
where we require inf_{Q∈M_1(P)} α_t(Q) = 0. (It satisfies dynamic positivity, constancy and dynamic translability/cash invariance.)
We have the dynamic coherent risk measure:
ρ_t(X) := ess sup_{Q∈M_1(P)} E_Q[−X | F_t]
To verify that these are dynamic monetary risk measures, we check the three defining properties. Both measures map X into L⁰(F_t) for each t (since X ∈ X is bounded and we only take conditional expectations), and for t = 0 we recover the static representation theorems seen previously. We verify the third property, beginning with the coherent risk measure. For X ∈ X (which is F_T-measurable):
ρ_T(X) = ess sup_{Q∈M_1(P)} E_Q[−X | F_T] = ess sup_{Q∈M_1(P)} (−X) = −X
For the convex risk measure we use this together with the requirement on α_T:
ρ_T(X) = −X + ess sup_{Q∈M_1(P)} {−α_T(Q)} = −X − inf_{Q∈M_1(P)} α_T(Q) = −X
And we have verified the three properties.
Both of these are dynamic monetary risk measures, but neither of them is time-consistent in general. For time consistency we require the set M_1(P) to be m-stable (stable under pasting), which means that "combining" two measures from M_1(P) by pasting them together keeps us inside the set.
(17) Dynamic risk measures via g-conditional-expectation: properties.
Set X = L²(F_T); we have the Brownian filtration F and a driver g for a BSDE satisfying all the conditions that guarantee existence and uniqueness of the solution.
Def. We define the dynamic risk measure from the g-expectation as:
ρ_t^g(X) := E_g[−X | F_t],   X ∈ L²(F_T), t ∈ [0, T]
which has the following properties:
a) ρ_t^g(X) is F_t-measurable for all t ∈ [0, T].
b) ρ_0^g is a static risk measure.
c) ρ_T^g(X) = −X.
Prop. The dynamic risk measure ρ_t^g satisfies the following properties:
(i) Continuous time homogeneity: for 0 ≤ s ≤ t ≤ T,
ρ_s^g(X) = ρ_s^g(−ρ_t^g(X)),   ∀X ∈ L²(F_T)
since
ρ_s^g(−ρ_t^g(X)) = E_g[ E_g[−X | F_t] | F_s ] = E_g[−X | F_s] = ρ_s^g(X)
(ii) If there is a t ∈ [0, T] such that ρ_t^g(X) ≤ ρ_t^g(Y), then:
∀s ∈ [0, t]: ρ_s^g(X) ≤ ρ_s^g(Y)
since, for s ∈ [0, t], we apply the comparison theorem to the BSDEs (g, E_g[−X | F_t]) and (g, E_g[−Y | F_t]):
ρ_s^g(X) = E_g[−X | F_s] = E_g[ E_g[−X | F_t] | F_s ] ≤ E_g[ E_g[−Y | F_t] | F_s ] = E_g[−Y | F_s] = ρ_s^g(Y)
(iii) If g is sub-linear, then ρ_t^g is coherent and time-consistent.
(iv) If g is convex, then ρ_t^g is convex and time-consistent, and additionally satisfies dynamic positivity, constancy and translability.
(18) To which extent dynamic risk measures are generated by conditional
g-expectation?
From RG: Almost any dynamic coherent or dynamic convex risk measure
comes from a conditional g-expectation.
Prop. Let ρ_t be a dynamic, time-consistent and coherent (resp. convex and cash invariant) risk measure defined on X = L²(F_T). If E[X] := ρ_0(−X) for X ∈ L²(F_T) is:
(i) strictly monotone: X ≥ Y =⇒ E[X] ≥ E[Y], with equality only if X = Y a.s.;
(ii) E^μ-dominated: E[X + Y] − E[X] ≤ E^μ[Y], ∀X, Y ∈ L²(F_T), where E^μ is the g-expectation with driver μ|z|;
then there exists a unique g satisfying the following conditions:
(A) Lipschitz continuity
(B) g(·, y, z) ∈ L²(dP × dt) for all y, z.
(C) g(t, y, 0) = 0, P-a.s.
(D) g is independent of y.
(E) |g(t, z)| ≤ μ|z|.
With this g, the dynamic risk measure is generated by the g-expectation:
ρ_0(X) = E[−X] = E_g[−X]
ρ_t(X) = E_g[−X | F_t],   ∀t ∈ [0, T], ∀X ∈ L²(F_T)