Lecture: Continuous Time Models with
Investment Applications
Simon Gilchrist
Boston University and NBER
EC 745
Fall, 2013
Brownian Motion
Brownian motion (Wiener process): Continuous time stochastic
process with three properties:
Markov process: the probability distribution for all future values
depends only on its current value.
Independent increments: the probability distribution for the change in
the process over any interval is independent of that over any other
non-overlapping time interval.
Changes in the process over any finite interval are normally
distributed with a variance that increases linearly with the time
interval.
Formal Definition
If z(t) is a Wiener process, then any change in z, ∆z,
corresponding to a time interval ∆t satisfies the following
conditions:
∆z = ε_t √∆t
ε_t ∼ N(0, 1)
E(ε_t ε_{t+s}) = 0 for t ≠ s
Intuition: Consider the change in z(t) over a finite interval T. Divide T into
n = T/∆t increments:
∆z = z(s + T) − z(s) = Σ_{i=1}^{n} ε_i √∆t
E(∆z) = 0
V(∆z) = n∆t = T
Brownian motion with drift
Brownian motion with drift:
dx = αdt + σdz
where dz is a Wiener process.
Over any finite interval ∆t, ∆x is normally distributed with
E(∆x) = α∆t, V(∆x) = σ²∆t.
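As a quick check, here is a minimal Python simulation sketch (Euler discretization, illustrative parameter values) showing that the sample mean and variance of the change over an interval T match αT and σ²T:

    import numpy as np

    # Minimal sketch: simulate Brownian motion with drift, dx = alpha*dt + sigma*dz,
    # with an Euler discretization. Parameter values are illustrative assumptions.
    rng = np.random.default_rng(0)
    alpha, sigma = 0.1, 0.3        # drift and volatility (assumed)
    T, dt = 1.0, 1e-3              # horizon and step size
    n_steps, n_paths = int(T / dt), 10_000

    dz = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)   # dz = eps * sqrt(dt)
    dx = alpha * dt + sigma * dz
    x_T = dx.sum(axis=1)           # total change over the interval T

    print("mean of x_T:", x_T.mean(), "theory:", alpha * T)
    print("var  of x_T:", x_T.var(),  "theory:", sigma**2 * T)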
Random walk representation of Brownian motion:
Show that dx is the limit of a discrete time random walk with
drift.
Suppose
∆x = +∆h with prob p
   = −∆h with prob q = 1 − p
then
E(∆x) = (p − q)∆h
and
V(∆x) = E(∆x²) − E(∆x)²
      = (1 − (p − q)²)∆h²
      = 4pq∆h²
Binomial distribution
Let a time interval t have n = t/∆t discrete steps. Then x_t − x_0
is the outcome of a series of n independent trials, with a step of +∆h (a success)
occurring with prob p and −∆h (a failure) occurring with prob q = 1 − p.
So x_t − x_0 has a binomial distribution with:
E(x_t − x_0) = n(p − q)∆h = t(p − q)∆h/∆t
and
V(x_t − x_0) = n(1 − (p − q)²)∆h² = 4pqt∆h²/∆t
Random walk representation of Brownian motion:
Choose ∆h, p, q so that the mean and variance of x_t − x_0 depend
only on t and not on the step size ∆t or the jump ∆h:
∆h = σ√∆t
p = (1/2)[1 + (α/σ)√∆t],  q = (1/2)[1 − (α/σ)√∆t]
then
p − q = (α/σ)√∆t = (α/σ²)∆h
This implies
E(x_t − x_0) = t (α/σ²)(∆h²/∆t) = αt
and
V(x_t − x_0) = σ²t (1 − (α²/σ²)∆t)
so
lim_{∆t→0} V(x_t − x_0) = σ²t.
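A short Python sketch of this binomial approximation (illustrative parameters) shows the simulated mean and variance of x_t − x_0 approaching αt and σ²t as ∆t shrinks:

    import numpy as np

    # Minimal sketch: binomial random-walk approximation to Brownian motion with
    # drift. Parameter values are illustrative assumptions.
    rng = np.random.default_rng(1)
    alpha, sigma, t = 0.2, 0.5, 1.0

    for dt in (1e-1, 1e-2, 1e-3):
        n = int(t / dt)
        dh = sigma * np.sqrt(dt)
        p = 0.5 * (1 + (alpha / sigma) * np.sqrt(dt))
        steps = np.where(rng.random((20_000, n)) < p, dh, -dh)   # +dh w.p. p, -dh w.p. q
        x = steps.sum(axis=1)
        print(f"dt={dt:g}: mean={x.mean():.3f} (alpha*t={alpha * t}), "
              f"var={x.var():.3f} (sigma^2*t={sigma**2 * t})")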
Comments:
Brownian motion is limit of discrete time random walk where
mean and variance are independent of step-size ∆t and jump
∆h. This limiting process has the property that variance grows
linearly per unit of time.
For any finite interval, total distance travelled is infinite as
∆t → 0 :
|∆x| = ∆h, so E|∆x| = ∆h and
n E|∆x| = (t/∆t)∆h = tσ/√∆t → ∞
Brownian motion is not differentiable in the conventional sense:
∆x/∆t = ∆h/∆t → ∞ as ∆t → 0
so dx/dt does not exist and we cannot compute E(dx/dt). We can,
however, compute E(dx) and (1/dt)E(dx).
Ito Processes
Generalize Brownian motion (Ito processes):
dx = a(x, t)dt + b(x, t)dz
where dz is a Wiener process and a(x, t), b(x, t) are non-random
functions of the state.
E(dx) = a(x, t)dt
so a(x, t) is instantaneous rate of drift.
Instantaneous variance:
V(dx) = E(dx²) − E(dx)²
      = a(x, t)²dt² + 2E(a(x, t)b(x, t)dt dz) + b(x, t)²var(dz)
The first two terms are of order dt² and dt^(3/2), so that
V(dx) = b(x, t)²var(dz) = b(x, t)²dt
Example 1: Geometric Brownian motion:
Let
dx = αxdt + σxdz
If x is a geometric Brownian motion then F(x) = ln(x) is a
Brownian motion with drift:
dF = (α − 0.5σ²)dt + σdz
This implies that
ln(x_t/x_0) ∼ N((α − 0.5σ²)t, σ²t)
Using properties of the log-normal we have
E(x_t) = x_0 e^(αt)
V(x_t) = x_0² e^(2αt)(e^(σ²t) − 1)
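A quick Monte Carlo sketch (illustrative parameters, exact log-normal simulation) confirms these moment formulas and the drift α − 0.5σ² of the log:

    import numpy as np

    # Minimal sketch: simulate geometric Brownian motion dx = alpha*x dt + sigma*x dz
    # using its exact log-normal solution. Parameter values are illustrative.
    rng = np.random.default_rng(2)
    alpha, sigma, t, x0 = 0.05, 0.2, 2.0, 1.0
    z = rng.standard_normal(100_000) * np.sqrt(t)             # z_t ~ N(0, t)
    x_t = x0 * np.exp((alpha - 0.5 * sigma**2) * t + sigma * z)

    print("E(x_t):", x_t.mean(), "theory:", x0 * np.exp(alpha * t))
    print("V(x_t):", x_t.var(),  "theory:", x0**2 * np.exp(2 * alpha * t) * (np.exp(sigma**2 * t) - 1))
    print("E[ln(x_t/x0)]/t:", np.log(x_t / x0).mean() / t, "theory:", alpha - 0.5 * sigma**2)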
Present values
Also we have the present value expression:
E ∫_0^∞ x(t)e^(−rt) dt = ∫_0^∞ x_0 e^(−(r−α)t) dt = x_0/(r − α)
The drift rate α can be interpreted as the dividend growth rate.
Example 2: Continuous time AR(1) (Ornstein-Uhlenbeck)
Let
dx = η(µ − x)dt + σdz
Then
E(x_t) = µ + (x_0 − µ)e^(−ηt) → µ as t → ∞
V(x_t) = (σ²/(2η))(1 − e^(−2ηt)) → σ²/(2η) as t → ∞
As η → ∞, x converges to the constant µ (the stationary variance
σ²/(2η) goes to zero). Need to adjust both σ and η to vary the
degree of mean reversion.
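A small Euler-simulation sketch (illustrative parameters) reproduces these moments:

    import numpy as np

    # Minimal sketch: Euler simulation of the Ornstein-Uhlenbeck process
    # dx = eta*(mu - x)dt + sigma*dz. Parameter values are illustrative assumptions.
    rng = np.random.default_rng(3)
    eta, mu, sigma, x0 = 0.5, 1.0, 0.3, 3.0
    t, dt = 10.0, 1e-2
    n_steps = int(t / dt)

    x = np.full(50_000, x0)
    for _ in range(n_steps):
        dz = rng.standard_normal(x.shape) * np.sqrt(dt)
        x += eta * (mu - x) * dt + sigma * dz

    print("E(x_t):", x.mean(), "theory:", mu + (x0 - mu) * np.exp(-eta * t))
    print("V(x_t):", x.var(),  "theory:", sigma**2 / (2 * eta) * (1 - np.exp(-2 * eta * t)))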
Ito’s Lemma:
Ito process is continuous but not differentiable.
What about functions of x, F (x, t) where
dx = a(x, t)dt + b(x, t)dz
Consider a Taylor series expansion of F(x, t) (ignoring higher order
derivatives in t):
dF = (∂F/∂x)dx + (∂F/∂t)dt + (1/2)(∂²F/∂x²)(dx)² + (1/6)(∂³F/∂x³)(dx)³ + ...
We want to keep all terms of order dt:
dx contains terms of order dt and √dt
(dx)² = b(x, t)²dt + higher order terms
Ito’s Lemma
This implies
dF = (∂F/∂t)dt + (∂F/∂x)dx + (1/2)(∂²F/∂x²)(dx)²
   = [∂F/∂t + a(x, t)(∂F/∂x) + (1/2)b(x, t)²(∂²F/∂x²)]dt
     + b(x, t)(∂F/∂x)dz
Taking expectations we have:
E(dF) = [∂F/∂t + a(x, t)(∂F/∂x) + (1/2)b(x, t)²(∂²F/∂x²)]dt
Because of uncertainty, the term (1/2)b(x, t)²(∂²F/∂x²) is of first order.
I.e., owing to Jensen's inequality, if the function is concave at x,
uncertainty lowers the expected value of dF.
Example: Geometric Brownian motion
Let
dx = αxdt + σxdz
Let F(x) = ln(x). Then
dF = [a(x)(∂F/∂x) + (1/2)b(x)²(∂²F/∂x²)]dt + b(x)(∂F/∂x)dz
   = [αx(1/x) + (1/2)σ²x²(−1/x²)]dt + σx(1/x)dz
   = (α − (1/2)σ²)dt + σdz
The log of a geometric Brownian motion is a Brownian motion
with drift.
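The Jensen term matters for the drift. A small Monte Carlo sketch (with an illustrative concave function F(x) = √x and GBM dynamics, assumed parameter values) checks the Ito drift aF' + (1/2)b²F'' against a one-step simulation:

    import numpy as np

    # Minimal sketch: numerically verify the Ito drift a*F' + 0.5*b^2*F'' for the
    # (concave) function F(x) = sqrt(x) when dx = alpha*x dt + sigma*x dz.
    # The choice of F and all parameter values are illustrative assumptions.
    rng = np.random.default_rng(4)
    alpha, sigma, x0, dt = 0.1, 0.4, 2.0, 1e-4

    dz = rng.standard_normal(2_000_000) * np.sqrt(dt)
    x1 = x0 + alpha * x0 * dt + sigma * x0 * dz            # one Euler step of x
    dF = np.sqrt(x1) - np.sqrt(x0)

    mc_drift = dF.mean() / dt
    ito_drift = (alpha / 2 - sigma**2 / 8) * np.sqrt(x0)   # a*F' + 0.5*b^2*F''
    naive_drift = (alpha / 2) * np.sqrt(x0)                # drops the Jensen (F'') term

    print("Monte Carlo E(dF)/dt:", mc_drift)
    print("Ito drift:           ", ito_drift)
    print("Drift without F'' term:", naive_drift)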
Dynamic programming in continuous time:
Start with discrete time problem:
F(x, t) = max_u { π(x, u, t)∆t + (1/(1 + ρ∆t)) E[F(x′, t + ∆t)|x, u] }
where π(x, u, t) is the flow profit given state x and policy u.
Rearrange to get
ρ∆tF(x, t) = max_u { π(x, u, t)∆t + E[F(x′, t + ∆t) − F(x, t)|x, u] }
Divide by ∆t and take the limit as ∆t → 0:
ρF(x, t) = max_u { π(x, u, t) + (1/dt) E[dF |x, u] }
Suppose x follows an Ito process:
dx = a(x, u, t)dt + b(x, u, t)dz
then, up to o(∆t),
E[F(x′, t + ∆t) − F(x, t)|x, u] = [F_t(x, t) + a(x, u, t)F_x(x, t)
                                   + (1/2)b²(x, u, t)F_xx(x, t)]∆t
We now have that the return equation satisfies:
ρF(x, t) = max_u { π(x, u, t) + F_t(x, t) + a(x, u, t)F_x(x, t)
                   + (1/2)b²(x, u, t)F_xx(x, t) }
Hamilton-Jacobi-Bellman equation
If there is an infinite horizon and a(·) and b(·) do not depend
explicitly on time, then the value F(x) satisfies the ordinary
differential equation:
ρF(x) = max_u { π(x, u) + a(x, u)F'(x) + (1/2)b²(x, u)F''(x) }
This is the continuous time equivalent of the Bellman equation.
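As an illustration, the following sketch checks this equation for a simple uncontrolled case (assumed: π(x) = x, GBM dynamics, and the candidate value F(x) = x/(ρ − α)); all parameter values are hypothetical:

    import numpy as np

    # Minimal sketch: verify the HJB equation rho*F = pi + a*F' + 0.5*b^2*F'' for an
    # illustrative uncontrolled example with pi(x) = x, a(x) = alpha*x, b(x) = sigma*x,
    # and candidate value F(x) = x/(rho - alpha). Parameters are assumptions.
    rho, alpha, sigma = 0.08, 0.03, 0.2

    x = np.linspace(0.5, 5.0, 10)
    F = x / (rho - alpha)
    Fp = np.full_like(x, 1.0 / (rho - alpha))   # F'(x)
    Fpp = np.zeros_like(x)                      # F''(x)

    lhs = rho * F
    rhs = x + alpha * x * Fp + 0.5 * sigma**2 * x**2 * Fpp
    print(np.allclose(lhs, rhs))                # True: the candidate satisfies the HJB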
Optimal stopping problem: Discrete time
Let π(x) denote flow profit of a machine.
Let Ω(x) denote the terminal payoff.
Assume that π(x) − (ρ/(1 + ρ))Ω(x) is increasing in x.
Assume that the distribution function Φ(x′|x) exhibits first-order
stochastic dominance (i.e. an increase in x shifts the probability
distribution of x′ to the right) – examples: AR(1), random walk.
Optimal policy
Value function:
F(x) = max{ Ω(x); π(x) + (1/(1 + ρ)) E[F(x′)|x] }
Solution: stop if x < x∗ for some value x∗ to be determined.
Optimal stopping problem in continuous time
Assume Ito process for x :
dx = a(x)dt + b(x)dz
Profit relative to flow value of terminal payoff
π(x) − ρΩ(x)
is increasing in x.
Return function:
F(x) = max{ Ω(x); π(x)dt + (1/(1 + ρdt)) E[F(x + dx)|x] }
Solution: stop if x < x∗ for some value x∗ to be determined.
Value on continuation region
If x > x∗ the return function satisfies
ρF(x) = π(x) + (1/dt) E(dF)
which implies:
ρF(x) = π(x) + a(x)F'(x) + (1/2)b²(x)F''(x)   for x > x*
Because x∗ is endogenous, we need two boundary conditions to
solve this differential equation.
Optimality conditions
Value matching:
F (x∗ ) = Ω(x∗ )
Smooth pasting:
Fx (x∗ ) = Ωx (x∗ )
Suppose Ω(x) = 0. At boundary:
0 = π(x*) + (1/2)b²(x*)F''(x*)
Since F''(x*) > 0 (the option value makes F convex near the boundary),
this implies we wait until π(x) ≤ π(x*) < 0 before
stopping (it is worthwhile to incur some losses before closing down
the machine).
Heuristic argument for smooth pasting:
Suppose Fx < Ωx at x∗ . We have upward kink. Then there exists
an x∗∗ > x∗ such that Ω(x∗∗ ) > F (x∗∗ ) and we should stop at
x∗∗ .
Suppose Fx > Ωx at x∗ . We have downward kink. Then payoff
is convex at optimum and there is value to waiting and
determining what the realized value of x will be.
Smooth pasting – slightly less heuristic
Assume a = 1, b = 1.
Over the interval ∆t, x rises by ∆h with prob p = (1/2)[1 + √∆t] and
falls by ∆h with prob q = (1/2)[1 − √∆t].
Consider the alternative policy of waiting until ∆t to take action.
Return from waiting
Return is
G = π(x*)∆t + (1/(1 + ρ∆t))[pF(x* + ∆h) + qΩ(x* − ∆h)]
  = π(x*)∆t + (1/(1 + ρ∆t))[p(F(x*) + F_x(x*)∆h) + q(Ω(x*) − Ω_x(x*)∆h)]
    + higher-order terms
Let ∆t → 0, but recognize that ∆t is of order ∆h² and goes to
zero faster than ∆h.
Applying the value matching condition, we get
G = F(x*) + (1/2)[F_x(x*) − Ω_x(x*)]∆h > F(x*)
whenever F_x(x*) > Ω_x(x*), contradicting the optimality of acting at x*;
hence smooth pasting, F_x(x*) = Ω_x(x*), must hold.
Option to invest
Assume the value of a project evolves according to geometric
Brownian motion (log-normal dividends):
dV = αV dt + σV dz
The project manager can pay I to exercise an option and get V.
Let F(V) denote the value of the investment opportunity:
F(V) = max_T E[(V_T − I)e^(−ρT)]
Here V_T denotes the payoff to investing at date T.
Assume α < ρ (otherwise wait forever).
Deterministic case: (σ = 0)
Value of the project at time t, given initial value V_0:
V(t) = V_0 e^(αt)
Value of investing at time T is therefore:
F(V) = (V e^(αT) − I)e^(−ρT)
Suppose α < 0. In this case, invest now if V > I. Otherwise,
never invest.
Suppose 0 < α < ρ. F (V ) > 0 even if V < I since V is growing
exponentially.
Optimality
First-order-condition:
dF(V)/dT = −(ρ − α)V e^(−(ρ−α)T) + ρI e^(−ρT) = 0
Invest now (set T = 0) if
V > V* = (ρ/(ρ − α)) I > I
Suppose instead that
(ρ/(ρ − α)) I > V > I
The project has positive net present value at T = 0 but you should
still wait to invest. Intuition: the cost of investing is discounted at a
higher rate (ρ) than the benefit, which is discounted at ρ − α.
Solution
The solution is therefore
T* = max{ (1/α) ln[ ρI/((ρ − α)V) ], 0 }
and
F(V) = (αI/(ρ − α)) [ (ρ − α)V/(ρI) ]^(ρ/α)   for V < V*
     = V − I                                    for V > V*
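A minimal sketch (assumed parameter values) of these deterministic-case formulas; it also checks value matching F(V*) = V* − I:

    import numpy as np

    # Minimal sketch of the deterministic (sigma = 0) case: optimal investment date
    # T* and option value F(V). Parameter values are illustrative assumptions.
    rho, alpha, I = 0.10, 0.04, 1.0
    V_star = rho / (rho - alpha) * I

    def T_star(V):
        return max(np.log(rho * I / ((rho - alpha) * V)) / alpha, 0.0)

    def F(V):
        if V > V_star:
            return V - I
        return (alpha * I / (rho - alpha)) * ((rho - alpha) * V / (rho * I)) ** (rho / alpha)

    print("V* =", V_star)                                   # invest immediately above this value
    print("T*(V=1.2) =", T_star(1.2))                       # I < V < V*: positive NPV, but wait
    print("F(V*) =", F(V_star), " V* - I =", V_star - I)    # value matching at the threshold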
Stochastic case:(σ > 0).
When is it optimal to invest I in return for an asset worth V ?
Assume V follows a geometric Brownian motion:
dV = αV dt + σV dz
Investment rule: invest once V exceeds an optimal cutoff V*.
Continuation region:
Value of investment project determined by capital gain.
ρF dt = E (dF )
Continuation value
Apply Ito's Lemma:
dF = F'(V)dV + (1/2)F''(V)(dV)²
V is a geometric Brownian motion, so:
dF = αV F'(V)dt + σV F'(V)dz + (1/2)σ²V²F''(V)dt
Take expectations:
E(dF) = αV F'(V)dt + (1/2)σ²V²F''(V)dt
Bellman's equation holds in the continuation region:
ρF(V) = αV F'(V) + (1/2)σ²V²F''(V)
Solution
We are looking for a solution to the differential equation
ρF(V) = αV F'(V) + (1/2)σ²V²F''(V)
which holds on the continuation region V < V*.
Because V* is endogenous, we have a free-boundary problem.
Boundary conditions
Value matching:
F (V ∗ ) = V ∗ − I
Smooth pasting
F'(V*) = 1
We also have
F (0) = 0
i.e. if V = 0, geometric Brownian motion keeps it at zero, so the option is worthless.
Waiting to invest
Rewriting the value matching condition we have
V ∗ − F (V ∗ ) = I
This implies that manager will wait to invest, even if the project
has positive net present value.
Reasons:
Dividend growth (as in non-stochastic case)
Uncertainty – higher uncertainty raises the option value F (V ∗ )
and delays investment.
Explicit solution:
Guess:
F(V) = A V^(β1)
Value matching implies
A (V*)^(β1) = V* − I
Smooth pasting implies
β1 A (V*)^(β1 − 1) = 1
Combining these we get
V* = (β1/(β1 − 1)) I
and
A = (V* − I)/(V*)^(β1)
Solving for coefficients
We now need to solve for β1. Plug the guess into the differential
equation.
Let δ = ρ − α. The equation is satisfied if β is a root of
(1/2)σ²β(β − 1) + (ρ − δ)β − ρ = 0
There are two roots to this equation. The positive root satisfies
β1 = 1/2 − (ρ − δ)/σ² + √[ ((ρ − δ)/σ² − 1/2)² + 2ρ/σ² ] > 1
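A short Python sketch (illustrative parameter values) computes β1, the threshold V*, and the coefficient A, and checks the comparative statics with respect to σ discussed next:

    import numpy as np

    # Minimal sketch: beta1, threshold V*, and coefficient A for the option to invest.
    # Parameter values (rho, delta = rho - alpha, sigma, I) are illustrative assumptions.
    rho, delta, sigma, I = 0.08, 0.04, 0.2, 1.0

    def beta1(rho, delta, sigma):
        a = 0.5 - (rho - delta) / sigma**2
        return a + np.sqrt(a**2 + 2 * rho / sigma**2)

    b1 = beta1(rho, delta, sigma)
    V_star = b1 / (b1 - 1) * I
    A = (V_star - I) / V_star**b1

    # beta1 should solve 0.5*sigma^2*b*(b - 1) + (rho - delta)*b - rho = 0
    print("quadratic residual:", 0.5 * sigma**2 * b1 * (b1 - 1) + (rho - delta) * b1 - rho)
    print("beta1 =", b1, " V* =", V_star, " A =", A)

    # Higher sigma lowers beta1 and raises V*: more uncertainty, wait longer
    for s in (0.1, 0.2, 0.4):
        b = beta1(rho, delta, s)
        print(f"sigma={s}: beta1={b:.3f}, V*={b / (b - 1) * I:.3f}")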
Comparative statics:
β1 is decreasing in σ, so V* is increasing in σ – as
uncertainty increases, we wait longer to invest. (The wedge
between V* and I increases.)
β1 is increasing in δ, so V* is decreasing in δ – as the
growth-adjusted discount rate for profits increases, we invest sooner.
β1 is decreasing in ρ (holding δ constant). The more we discount
costs relative to growth-adjusted benefits, the longer we wait to
invest.
Limiting behavior:
As σ → ∞, V ∗ → ∞
As σ → 0, if α > 0:
β1 → ρ/(ρ − δ) and V* → (ρ/δ)I > I
As σ → 0, if α ≤ 0:
β1 → ∞ and V* → I
Implications for user cost:
Assume the profit flow of a machine is a geometric Brownian motion:
dπ = απdt + σπdz
The value of the profit stream is
V_t = E_t ∫_t^∞ π_s e^(−ρ(s−t)) ds = π_t/(ρ − α)
Investment rule is
π_t > π* = (β1/(β1 − 1))(ρ − α)I > (ρ − α)I
The quadratic equation implies
(β1/(β1 − 1))(ρ − α)I = (ρ + (1/2)σ²β1)I
Critical value of profits satisfies:
π* = (ρ + (1/2)σ²β1)I > ρI
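Continuing with hypothetical numbers, a short sketch compares the uncertainty-adjusted hurdle rate ρ + (1/2)σ²β1 with the frictionless user cost ρ and checks the identity above:

    import numpy as np

    # Minimal sketch: hurdle rate pi*/I = rho + 0.5*sigma^2*beta1 versus the
    # frictionless user cost rho. Parameter values are illustrative assumptions.
    rho, alpha, I = 0.08, 0.04, 1.0

    for sigma in (0.1, 0.2, 0.4):
        a = 0.5 - alpha / sigma**2                      # here rho - delta = alpha
        b1 = a + np.sqrt(a**2 + 2 * rho / sigma**2)     # positive root of the quadratic
        pi_star = (rho + 0.5 * sigma**2 * b1) * I
        # identity check: beta1/(beta1 - 1)*(rho - alpha)*I equals pi_star
        print(f"sigma={sigma}: pi*={pi_star:.4f}, "
              f"b1/(b1-1)*(rho-alpha)*I={b1 / (b1 - 1) * (rho - alpha) * I:.4f}, rho*I={rho * I}")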
Comments:
Uncertainty increases the hurdle rate – the effective user cost of
capital that should be applied when evaluating a given project.
With no uncertainty, the user cost is ρ and does not depend on α.
In other words, without uncertainty we may still wait to invest, but
the decision is based on standard user cost arguments –
i.e. waiting for the flow value of profits to exceed the flow cost of
the investment.
Abel and Eberly:
Generalized adjustment cost framework to include fixed costs
and irreversibility.
Operating profit: π(kt , εt ) where kt is current capital and εt is a
random shock to profits. Assume πk > 0, πkk ≤ 0.
Shock process:
dεt = µ(εt )dt + σ(εt )dz
where z is a Brownian motion.
Capital accumulation:
dkt = (It − δkt )dt
Investment costs:
Purchase/sale costs:
P_k^+ I for I ≥ 0 and P_k^- I for I < 0, with P_k^+ > P_k^-
Adjustment costs: continuous, strictly convex, twice
differentiable. Minimized at I = 0.
Fixed costs: non-negative, incurred whenever I ≠ 0.
Augmented adjustment cost function:
Express the augmented adjustment cost function as
vC(I, K)
where
v = 1 if I ≠ 0
  = 0 if I = 0
Limits:
lim_{I→0−} C(I, K) = lim_{I→0+} C(I, K) = C(0, K)
where C(0, K) is the fixed cost.
Also:
C_I(0, K)^+ ≥ 0
and
C_I(0, K)^+ ≥ C_I(0, K)^−
Value of the firm:
Hamilton-Jacobi-Bellman equation:
rV(k, ε) = max_{I,v} { π(k, ε) − vC(I, k) + (1/dt)E(dV) }
Taylor expansion:
dV = V_k dk + V_ε dε + (1/2)V_kk (dk)² + (1/2)V_εε (dε)² + V_kε dk dε + ...
Note that
dk = (I − δk)dt
so that
(dk)² = o(dt)
Also
(dε)² = σ²(ε)dt + o(dt)
Expected firm value
This implies
dV = V_k(I − δk)dt + V_ε(µ(ε)dt + σ(ε)dz) + (1/2)V_εε σ²(ε)dt
Taking expectations:
E(dV) = [V_k(I − δk) + V_ε µ(ε) + (1/2)V_εε σ²(ε)]dt
Bellman’s equation
Let q = V_k. Then
rV = max_{I,v} { π(k, ε) − vC(I, k) + q(I − δk) + µ(ε)V_ε + (1/2)σ²(ε)V_εε }
Bellman's equation says that we can choose I, v to solve
max_{I,v} [qI − vC(I, k)]
Optimal investment
First consider v = 1. Let
Ψ(q, k) = max_I [qI − C(I, k)]
Let I*(q, k) satisfy:
C_I(I*(q, k), k) = q   for q < C_I(0, k)^− or q > C_I(0, k)^+
I*(q, k) = 0           for C_I(0, k)^− < q < C_I(0, k)^+
Comments:
C_II > 0 implies that I*(q, k) is strictly increasing in q over the
range of action.
If C(I, k) is differentiable at I = 0, then C_I(I*(q, k), k) = q for all q
(no range of inaction).
If C(I, k) is non-differentiable at zero, we have a range of inaction:
I*(q, k) < 0 if q < C_I(0, k)^−
         = 0 if C_I(0, k)^− < q < C_I(0, k)^+
         > 0 otherwise
These results imply that I*(q, k) is non-decreasing in q.
Optimal choice of v
If v = 0, then I = 0 and
qI − vC(I, k) = 0.
If v = 1,
Ψ(q, k) = qI*(q, k) − C(I*(q, k), k).
Look at the shape of Ψ(q, k):
Ψ_q(q, k) = I*(q, k) < 0 if q < C_I(0, k)^−
          = 0 if C_I(0, k)^− < q < C_I(0, k)^+
          = I*(q, k) > 0 otherwise
Also
Ψ_qq(q, k) = I*_q(q, k) > 0 in the action range
Result: Ψ(q, k) is convex in q and attains its minimum on the interval
C_I(0, k)^− < q < C_I(0, k)^+.
Optimal policy:
Let q1 ≤ q2 be the roots of Ψ(q, k) = 0. The optimal policy is then:
Î(q, k) = I*(q, k) < 0 if q < q1
Î(q, k) = 0            if q1 < q < q2
Î(q, k) = I*(q, k) > 0 if q > q2
Possible cases:
Unique root: occurs with no fixed costs and differentiability.
Implies no range of inaction.
Exactly two roots: only occurs with fixed costs. Implies a range of
inaction.
Continuum of roots: occurs if there is no fixed cost but
non-differentiability. Implies a range of inaction.
Comments:
The range of inaction depends on the adjustment costs, not on the π function or
the ε process.
If there are fixed costs or non-differentiability of C(I, k) at I = 0, we
have a non-degenerate range of inaction.
With fixed costs the optimal policy Î(q, k) will have a
discontinuity.
Solving for q:
Differentiate the Bellman equation with respect to k:
rV_k = π_k(k, ε) − v̂C_k(Î, k) − δq + q_k(Î − δk) + µ(ε)V_εk + (1/2)σ²(ε)V_εεk
Get E(dq) using Ito's Lemma:
E(dq) = [q_k(Î − δk) + µ(ε)q_ε + (1/2)σ²(ε)q_εε]dt
Here we use the fact that q = V_k, so q_ε = V_kε and q_εε = V_εεk. From the
Bellman equation we now have:
(r + δ)q = π_k(k, ε) − v̂C_k(Î, k) + (1/dt)E(dq)
i.e. the required return on the marginal unit of capital equals its
marginal product plus the expected capital gain.
Solution
Lemma: Suppose x_t is a diffusion process and a > 0. Then
x_t = E_t ∫_0^∞ g_{t+s} e^(−as) ds
is a solution to
(1/dt)E_t(dx) − ax_t + g_t = 0
This implies
q_t = E_t ∫_0^∞ [π_k(k_{t+s}, ε_{t+s}) − v̂_{t+s}C_k(Î_{t+s}, k_{t+s})] e^(−(r+δ)s) ds
So q_t is the expected present discounted value of the marginal
return to capital.
Relating q to observables:
If π(k, ε) and C(I, k) are linearly homogeneous (in k and in (I, k),
respectively), then
q_0 = V_0/k_0
To show this, consider
(1/dt)E(d(qk)) = (1/dt)E(dq) k + q (1/dt)dk
              = [(r + δ)q − π_k + v̂C_k(Î, k)]k + q(I − δk)
Apply linear homogeneity to get
(1/dt)E(d(qk)) = rqk − (π − v̂C(Î, k)) + qÎ − v̂C_I Î
If Î > 0, v̂ = 1 and q = C_I; if Î = 0, v̂ = 0 and both terms vanish. So the
last two terms cancel and
(1/dt)E(d(qk)) = rqk − (π − v̂C(Î, k))
Again apply the lemma:
q_0 k_0 = E_0 ∫_0^∞ [π(k_s, ε_s) − v̂_s C(Î_s, k_s)] e^(−rs) ds = V_0
Example: Linear homogeneity and fixed cost = bK.
Assume
C(I, k) = kC(I/k, 1) = kG(I/k)
Then
I/k = G'^(-1)(q) < 0 if q ≤ q1
    = 0              if q1 < q < q2
    = G'^(-1)(q) > 0 if q ≥ q2
Example of q :
π(k, p) = max_L { pL^α k^(1−α) − wL } = hp^θ k
where
h = (1 − α)α^(α/(1−α)) w^(−α/(1−α)) > 0,   θ = 1/(1 − α) > 1
Then
q_t = hE_t ∫_0^∞ p_{t+s}^θ e^(−(r+δ)s) ds
Specific example
Suppose
dp = σp dz
which implies
ln(p_{t+s}/p_t) ∼ N(−0.5σ²s, σ²s)
and
E_t(p_{t+s}^θ) = p_t^θ e^((1/2)θ(θ − 1)σ²s)
so that
q_t = hp_t^θ / (r + δ − 0.5θ(θ − 1)σ²).
In this case, as σ increases, Tobin's Q will increase. So will
investment.
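A brief sketch (with hypothetical values for the labor share, wage, and discount parameters) evaluates this formula and illustrates that q rises with σ:

    import numpy as np

    # Minimal sketch: marginal q under dp = sigma*p*dz, using
    #   q_t = h * p^theta / (r + delta - 0.5*theta*(theta - 1)*sigma^2).
    # All parameter values below are illustrative assumptions.
    r, delta, labor_share, w, p = 0.06, 0.08, 0.6, 1.0, 1.0

    theta = 1.0 / (1.0 - labor_share)                   # theta = 1/(1 - alpha) > 1
    h = (1 - labor_share) * labor_share**(labor_share / (1 - labor_share)) \
        * w**(-labor_share / (1 - labor_share))

    for sigma in (0.0, 0.05, 0.10):
        denom = r + delta - 0.5 * theta * (theta - 1) * sigma**2
        assert denom > 0, "requires r + delta > 0.5*theta*(theta-1)*sigma^2"
        q = h * p**theta / denom
        print(f"sigma={sigma}: q={q:.3f}")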
Intuition
Profit functions are convex in prices. Although the total profit
function is linearly homogeneous, given quasi-fixed capital,
the flexibility of labor implies convexity with respect to
prices. Thus, a mean-preserving spread in the price implies
higher variable profits and therefore more investment.
Comment: this is true even if there are irreversibilities and fixed
costs to investing.
Why? In this model, fixed costs are flow fixed costs, i.e.
whenever investment is non-zero, you must pay the cost. There
isn't really an option-to-invest aspect to the model. (Hence the range
of inaction does not depend explicitly on the stochastic process
ε.) This model does, however, nicely illustrate the point made by Hartman,
Abel and others that investment may increase with uncertainty
owing to the fact that profit functions are convex in prices.
References:
Dixit, A. and R. Pindyck, Investment under Uncertainty, Princeton
University Press, 1994. Chapters 2-5.
Abel, Andrew and Janice Eberly, "A Unified Model of Investment
under Uncertainty", AER, 1994.
Pindyck, Robert, "Irreversibility, Uncertainty and Investment",
Journal of Economic Literature, Vol. XXIX, 1991, 1110-1148.
Sødal, Sigbjørn, "A Simplified Exposition of Smooth Pasting",
Economics Letters, 58, 1998, 217-223.