Envelope Theorems for Non-Smooth and
Non-Concave Optimization*
Andrew Clausen†
University of Pennsylvania
Carlo Strub‡
University of St. Gallen
First version: June. This version: April.
Abstract
We study general dynamic programming problems with continuous and discrete choices and general constraints. The value functions may have kinks arising (1) at indifference points between discrete choices and (2) at constraint boundaries. Nevertheless, we establish a general envelope theorem: first-order conditions are necessary at interior optimal choices. We only assume differentiability of the utility function with respect to the continuous choices. The continuous choice may be from any Banach space and the discrete choice from any non-empty set.
Keywords: Envelope theorem, differentiability, dynamic programming, discrete choice, nonsmooth analysis
*
We are very grateful to Guy Arie, Martin Barbie, Christian Bayer, Aleksander Berentsen, Carlos Carillo-Tudela, Harold Cole, Russell Cooper, Thomas Davoine, Jesús Fernández-Villaverde, Marcus Hagedorn, Christian Hellwig, Leo Kaas, Andreas Kleiner, Dirk Krueger, Tzuo Hann Law, Yvan Lengwiler, Alexander Ludwig, George Mailath, Steven Matthews, Guido Menzio, Nirav Mehta, Leonard Mirman, Kurt Mitman, Georg
Nöldeke, Nicola Pavoni, Andrew Postlewaite, Kevin Reffett, José Rodrigues-Neto, Felipe Saffie, Florian
Scheuer, Karl Schlag, Ilya Segal, Shouyong Shi, Lones Smith, Kyungchul Song, Xi Weng and Mark Wright
for fruitful discussions.
†
clausen@sas.upenn.edu
‡
cs@carlostrub.ch


1 Introduction
Optimization problems that involve both discrete and continuous choices are common in
economics. Examples include the trade-off between consumption and savings alongside
the discrete decision of whether to work, accept a job offer, declare bankruptcy, go to college, or enroll children in child care.¹ In addition, we show how non-smooth optimization
problems, such as capital adjustment in the presence of fixed costs, may be recast as mixed
continuous and discrete choice problems. In the absence of lotteries or other smoothing
mechanisms, such problems create kinks in the value function where agents are indifferent
between two discrete choices. A second type of kink arises when constraints become binding. As a result, the value function is non-differentiable, non-concave, and may even lack
directional derivatives. Can first-order conditions be applied under such circumstances?
This paper provides two general envelope theorems. The first relates to static optimization problems. Figure 1a illustrates an example where an investor maximizes his profit by choosing the size of his investment, c, and the product, d1 or d2, to invest in. The investor takes the upper envelope over the two per-product profits f and maximizes it with respect to the continuous choice c. We assume f(·, d) is differentiable for each discrete choice d ∈ {d1, d2}. Observe in the figure that the upper envelope has only downward kinks but no upward kinks. Moreover, maxima may not occur at downward kinks. Therefore, our static envelope theorem concludes that interior maxima only occur at differentiable points. In other words, at an investment level where the investor is indifferent between the two products, he strictly prefers to increase the investment and choose product d1, or decrease the investment and choose product d2. Amir, Mirman and Perkins (1991) and Milgrom and Segal (2002) provide special cases of this theorem under the assumptions of supermodularity and equidifferentiability, respectively.²
Our second envelope theorem applies this intuition to dynamic settings. When an agent makes both discrete and continuous choices subject to some constraint, the value function has (potentially infinitely many) kinks. In Figure 1b, c represents effort, and the two curves represent the payoffs from attending college or not. As before, discrete choices may lead to downward kinks. In addition, binding constraints may lead to upward (or downward) kinks. Nevertheless, we show that at interior optimal choices, the value function is differentiable; the agent never chooses a savings level where he is indifferent between college or not.³ More specifically, our theorem applies if the choice is an optimal one-period interior choice,
¹ Eckstein and Wolpin (1989), Rust (1994), or Aguirregabiria and Mira (2010) list many more examples.
² A totally different approach by Renou and Schlag does not study derivatives at all but uses the weaker notion of "ordients" (ordered gradients).
³ In complementary work, Rincón-Zapatero and Santos (2009) study the differentiability of value functions at boundary choices.

which means the agent is able to increase or decrease his continuous choice today without
changing any other choices. For example, this condition is met if the agent can increase or
decrease his savings today without changing his college or future savings decisions.
Previous envelope theorems in dynamic settings do not accommodate discrete choices and impose additional assumptions. Mirman and Zilcha (1975) and Benveniste and Scheinkman (1979) impose concavity assumptions, and Amir et al. (1991) assume supermodularity.
[Figure 1: Mixed continuous and discrete choice problems. Panel (a), Static Problem: the per-product profits f(c, d1) and f(c, d2) and their upper envelope F(c), which has a downward kink (discrete choice) where the two curves cross. Panel (b), Constrained Dynamic Problem: the value function V(c), with an upward kink at a binding constraint and a downward kink at a discrete-choice indifference point.]
The concept of directional derivatives is central to the proofs of previous envelope theorems. However, in our fully general setting, directional derivatives may not exist everywhere, so a new approach is required. We apply Fréchet sub- and superdifferentials, and their one-dimensional analogues, which we call Dini sub- and superdifferentials.⁴ They capture what we think of as upward and downward kinks.
This paper is organized as follows: Section 2 states our envelope theorems. All proofs go into Section 3, which contains additional general lemmata on kinks and upper envelopes. There, we also discuss the relationship of our results to previous publications. Section 4 illustrates the breadth of applications of our envelope theorems to non-smooth and non-concave dynamic programming problems. The proofs of the Banach space versions of our theorems are in the appendix. Nevertheless, we recommend reading them as they are more elegant (but less intuitive) than the standard versions.
⁴ The terminology "Dini sub- and superdifferential" does not appear to be widespread. On the other hand, "Fréchet sub- and superdifferential" is a standard generalization of the convex analysis notion of "subdifferential" to non-convex functions.


2 Theorems
An agent makes a continuous choice c ∈ C and a discrete choice d ∈ D. Initially, we require
the continuous choice set C to be a subset of R; Appendix A generalizes all theorems to
allow C to be a subset of any Banach space. We allow the discrete choice set D to be any nonempty set, e.g. a finite set such as {full time work, part time work, not work}, a continuous
space such as R2 , or an infinite dimensional space such as C[0, 1].
Definition . We say F is the upper envelope of {f (·, d)}d∈D if F (c) = supd∈D f (c, d).
Our static envelope theorem asserts that non-differentiable points are never optimal choices. An agent is never indifferent between two discrete choices after making an optimal continuous choice (unless the discrete choices are locally equivalent).

Theorem 1. Suppose F is the upper envelope of a (possibly infinite) set of differentiable functions {f(·, d)}. If (ĉ, d̂) maximizes f, and ĉ ∈ int(C), then F is differentiable at ĉ and satisfies the first-order condition F′(ĉ) = f_c(ĉ, d̂) = 0.
Note that this theorem requires that the supremum be attained at the optimal choice ĉ, but
not elsewhere.
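As a concrete numerical illustration of the theorem above (the two per-product profit functions below are our own invented example, not from the paper), the upper envelope of two differentiable functions has a downward kink where they cross, yet the interior maximum lies at a differentiable point where the first-order condition holds:

```python
import numpy as np

# Two invented, differentiable per-product profit functions of investment c.
def f1(c):  # profit from product d1
    return -(c - 2.0) ** 2 + 4.0

def f2(c):  # profit from product d2
    return -0.5 * (c - 0.5) ** 2 + 3.0

def F(c):   # upper envelope F(c) = max_d f(c, d)
    return np.maximum(f1(c), f2(c))

# The two profits cross near c ≈ 0.95, where F has a downward kink.
grid = np.linspace(0.0, 4.0, 40001)
c_hat = grid[np.argmax(F(grid))]

# The maximum sits at the smooth peak of f1, away from the kink, and the
# first-order condition F'(ĉ) = f_c(ĉ, d1) = 0 holds there.
eps = 1e-6
slope = (F(c_hat + eps) - F(c_hat - eps)) / (2 * eps)
print(c_hat, slope)   # ĉ ≈ 2.0, slope ≈ 0.0
```

The kink near c ≈ 0.95 is never selected: moving slightly right and choosing d1, or slightly left and choosing d2, is strictly better.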
Our second result builds on Theorem 1 to study dynamic programming problems with
continuous and discrete choices. In every period, the agent makes a continuous and discrete
choice (c′ , d′ ) based on the state variable (c, d) consisting of the previous period’s choices.
We denote the set of possible states by Ω. The agent may only make choices that satisfy the
constraint
(c, c′ , d, d′ ) ∈ Γ.
It will be convenient to write
Γ (c, ·; d, ·) = {(c′ , d′ ) : (c, c′ , d, d′ ) ∈ Γ}
Γ (c, ·; d, d′ ) = {c′ : (c, c′ , d, d′ ) ∈ Γ}
Γ (·, c′ ; d, d′ ) = {c : (c, c′ , d, d′ ) ∈ Γ} .
Let us assume that the agent has a feasible choice at every state, i.e. Γ(c, ·; d, ·) ⊆ Ω is non-empty for all (c, d) ∈ Ω.
Problem . Consider the following dynamic programming problem:
V (c, d) =
sup
u (c, c′ ; d, d′ ) + β V (c′ , d′ ) ,
(c′ ,d′ )∈Γ(c,·;d,·)

()
where the domain of V is Ω. We assume that u(·, c′ ; d, d′ ) and u(c, ·; d, d′ ) are differentiable
on int(Γ(·, c′ ; d, d′ )) and int(Γ(c, ·; d, d′ )), respectively.⁵⁶
There are two sources of non-differentiability in Problem 1. As before, the value function may have downward kinks at states where the agent is indifferent between two discrete choices. In addition, Problem 1 introduces constraints. Hence, the value function may have upward (or downward) kinks at states where the agent makes a boundary choice but prefers an interior choice at some nearby states. As in Theorem 1, our approach is to focus on differentiability at optimal choices away from boundaries.
We define interior choices as follows. First, the continuous component of the choice (c′, d′) must satisfy the standard requirement that it is in the interior of today's feasible set. In addition, a second, more subtle, requirement is necessary. Suppose that the agent plans to choose (c′′, d′′) tomorrow after choosing (c′, d′) today. If the agent were to change c′ a little bit, then (c′′, d′′) might become infeasible. If (c′′, d′′) is a particularly good choice for the agent tomorrow, the agent would effectively be constrained to choices today which make (c′′, d′′) feasible tomorrow. Therefore, the notion of interior choice must take into account the constraint imposed by the subsequent choice of the agent. We require that (c′′, d′′) remains feasible after all sufficiently small changes in c′.⁷ These two considerations lead to the following definition.
Definition . e choice c′ is a one-period interior choice with respect to (c, c′′ , d, d′ , d′′ ) if
(i) c′ ∈ int(Γ(c, ·; d, d′ )) and
(ii) c′ ∈ int(Γ(·, c′′ ; d′ , d′′ )).
We establish that the value function is differentiable at optimal interior choices.

Theorem 2. Suppose (ĉ′, d̂′) and (ĉ′′, d̂′′) are optimal choices at states (c, d) and (ĉ′, d̂′), respectively. If ĉ′ is a one-period interior choice with respect to (c, ĉ′′, d, d̂′, d̂′′), then V(·, d̂′) is differentiable at ĉ′ and satisfies the first-order condition

    −u_{c′}(c, ĉ′; d, d̂′) = β V_c(ĉ′, d̂′) = β u_c(ĉ′, ĉ′′; d̂′, d̂′′).
⁵ Since we neither study nor require the existence of optimal policies or value functions, we do not impose conditions such as β ∈ (0, 1). In particular, if the value function takes infinite values, then there are no maxima and the conditions for our theorems are violated.
⁶ This notation accommodates non-stationary problems. For example, the discrete choice set D could be constructed as D = ∪_{t=0}^∞ D_t, where each pair of sets D_t and D_{t′} is disjoint, and Γ(c, c′; d, ·) ⊆ D_{t+1} for all d ∈ D_t and all c, c′ ∈ C.
⁷ This second condition is somewhat familiar. Benveniste and Scheinkman (1979) require that (c, ĉ′) ∈ int(Γ), where ĉ′ is an optimal choice at state c. Their condition is used to establish a stronger version of subdifferentiability. However, this part of their proof only requires c ∈ int(Γ(·, ĉ′)), which is similar to our second condition.

The optimal one-period interior choice condition of the theorem is unusual, as it requires the existence of an optimal continuous choice ĉ′′ tomorrow (as well as today). Nevertheless, this condition is quite weak in three regards. First, it is only relevant when the problem has constraints; it is automatically satisfied if all continuous choices are made from open sets. Second, the condition does not require the optimal continuous choice tomorrow, ĉ′′, to be an interior choice. Third, the condition does not require the value function to be differentiable at tomorrow's optimal choice (or anywhere else). Even if the agent chooses a kink point tomorrow, the theorem still applies so long as it is feasible for him to change today's choice ĉ′ without changing his other choices. Still, the one-period interior choice condition may fail, and we explore how to apply Theorem 2 in such cases in Section 4.
A natural extension of Problem 1 is the following version of a dynamic programming problem that incorporates stochastic shocks.

Problem 2. Consider the following stochastic dynamic programming problem:

    V(c, d, θ) = sup_{(c′,d′) ∈ Γ(c,·;d,·;θ)} u(c, c′; d, d′; θ) + β ∑_{θ′∈Θ} π(θ′ | θ) V(c′, d′, θ′),

where the domain of V is Ω × Θ. We assume that u(·, c′; d, d′; θ) and u(c, ·; d, d′; θ) are differentiable on int(Γ(·, c′; d, d′; θ)) and int(Γ(c, ·; d, d′; θ)), respectively.
The following theorem establishes that the value function is differentiable at optimal choices. It is a stochastic version of Theorem 2.

Definition 3. The choice c′ is a stochastic one-period interior choice with respect to (c, c′′(·), d, d′, d′′(·)) at θ if

(i) c′ ∈ int(Γ(c, ·; d, d′; θ)) and

(ii) c′ ∈ int(Γ(·, c′′(θ′); d′, d′′(θ′); θ′)) for all θ′.

Theorem 3. Suppose (ĉ′, d̂′) are optimal choices following (c, d, θ) in Problem 2, and (ĉ′′(·), d̂′′(·)) are optimal policies for the following period's choices as a function of θ′. If ĉ′ is a stochastic one-period interior choice with respect to (c, ĉ′′(·), d, d̂′, d̂′′(·)), then V(·, d̂′, θ′) is differentiable at ĉ′ for each θ′ and satisfies the first-order condition

    −u_{c′}(c, ĉ′; d, d̂′; θ) = β ∑_{θ′} π(θ′ | θ) V_c(ĉ′, d̂′, θ′) = β ∑_{θ′} π(θ′ | θ) u_c(ĉ′, ĉ′′(θ′); d̂′, d̂′′(θ′); θ′).
We omit the proof of this theorem, as it is a straightforward generalization of Theorem 2. The main difference is that there is a convex combination of value functions in the Bellman equation, rather than one single value function. This requires a simple generalization of the upcoming Lemma 2 part (iii) to finite sums. Generalizing to continuous random variables would require generalizing this lemma to integrals.


3 Proofs
3.1 Classification of Non-Differentiable Points
This section develops a classification of non-differentiable points of functions. We define upward and downward kinks in terms of (Dini) sub- and superderivatives. Then, we show that every non-differentiable point is either an upward or a downward kink, and provide a lemma on important algebraic operations.
Intuitively, we would like to define an upward kink as a point where the slope approaching from the left is greater than the slope approaching from the right (see Figure 2a). Downward kinks would have the converse property.
[Figure 2: Classifying non-differentiable points. Panel (a): upward and downward kinks. Panel (b): the bouncing ball function.]
However, we cannot use directional derivatives because they may not exist. For instance, consider the bouncing ball function F depicted in Figure 2b as the upper envelope of a countable set of parabolas {f(·, d)}_{d∈D}, where

    f(c, d) = −(1/|d|) (c − d) (c − d/2)   and   D = { s/2ⁿ : s ∈ {−1, 1}, n ∈ ℕ }.
This function has directional derivatives everywhere except at c = 0. In particular, the right directional derivative at c = 0,

    lim_{Δc→0⁺} [F(Δc) − F(0)] / Δc,

does not exist because the slope oscillates between 0 and (√2 − 1)²/2. We resolve this problem by taking limits inferior and superior of the slope, which always exist. According to our

classification, c = 0 is a downward kink but not an upward kink.⁸
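A short numerical check of this claim (the probe sequences below are our own choice): along Δc = 2⁻ᵐ the difference quotient of F at 0 keeps returning to 0, while along Δc = 2⁻ᵐ/√2 it stays at (√2 − 1)²/2 = 3/2 − √2, so the right directional derivative cannot exist:

```python
import numpy as np

def f(c, d):
    # One parabola of the bouncing ball family, with roots at d and d/2.
    return -(1.0 / abs(d)) * (c - d) * (c - d / 2.0)

def F(c):
    # Upper envelope over d = s / 2^n; truncating at n = 79 is harmless here
    # because only the parabolas with d near c can be positive.
    ds = [2.0 ** -n for n in range(80)] + [-(2.0 ** -n) for n in range(80)]
    return max(0.0, max(f(c, d) for d in ds))  # the supremum at c = 0 is 0

# Difference quotients [F(Δc) - F(0)] / Δc along two sequences Δc -> 0+.
q_peaks = [F(2.0 ** -m / np.sqrt(2)) / (2.0 ** -m / np.sqrt(2)) for m in range(5, 25)]
q_roots = [F(2.0 ** -m) / 2.0 ** -m for m in range(5, 25)]

print(max(q_peaks))   # ≈ 3/2 - sqrt(2) = (sqrt(2) - 1)^2 / 2 ≈ 0.0858
print(max(q_roots))   # ≈ 0: the quotient keeps returning to zero
```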
Definition . e (Dini) sub- and superdifferentials of f at c ∈ int(C) are
{
}
f (c + ∆c) − f (c)
f (c + ∆c) − f (c)
∂D f (c) = m ∈ R : lim sup
≤ m ≤ lim inf
∆c→0+
∆c
∆c
∆c→0−
{
}
f (c + ∆c) − f (c)
f (c + ∆c) − f (c)
D
∂ f (c) = m ∈ R : lim inf
≥ m ≥ lim sup
.
∆c→0−
∆c
∆c
∆c→0+
If ∂D f (c) is non-empty, then we say f is (Dini) subdifferentiable at c. Similarly, if ∂ D f (c) is
non-empty, then we say f is (Dini) superdifferentiable at c.
Definition . If f is not subdifferentiable at c, then we say it has an upward kink at c. Similarly, if f is not superdifferentiable at c, then we say it has a downward kink at c.
The following lemma establishes that a non-differentiable point of a function can be classified as either an upward kink or a downward kink.

Lemma 1 (Differentiability). A function f : ℝ → ℝ is differentiable at c if and only if f is both sub- and superdifferentiable at c. Moreover, if f is differentiable at c, then {f′(c)} = ∂_D f(c) = ∂^D f(c).
Proof. e forward direction is straightforward. For the reverse direction, suppose that f is
both sub- and superdifferentiable at c, so that there exist m∗ ∈ ∂D f (c) and m∗ ∈ ∂ D f (c).
From the definitions,
f (c + ∆c) − f (c)
f (c + ∆c) − f (c)
≤ m∗ ≤ lim inf
∆c→0+
∆c
∆c
∆c→0−
f (c + ∆c) − f (c)
f (c + ∆c) − f (c)
lim inf
≥ m∗ ≥ lim sup
.
−
∆c→0
∆c
∆c
∆c→0+
lim sup
Since infima are weakly less than suprema, going clockwise, each expression is weakly less
than the following one. erefore, all of the expressions are equal. us, f is differentiable
at c with f ′ (c) = m∗ = m∗ .
The following lemma provides some calculus properties of sub- and superdifferentials. Part (iii) provides a sufficient condition for the differentiability of a sum of functions, and plays an important role in the proof of Theorem 2.
⁸ In similar examples, there are points that are both upward and downward kinks. For example, the function f(x) = x sin(1/x) has an upward and a downward kink at x = 0.

Lemma  (Differential Calculus). e following statements are true at any c (along with their
superdifferentiable counterparts):
(i) If g and h are subdifferentiable, then so is g + h.
(ii) g is subdifferentiable if and only if −g is superdifferentiable.
(iii) If g and h are subdifferentiable and g + h is superdifferentiable, then g, h, and g + h
are differentiable.
Proof.
(i) Let m_g ∈ ∂_D g(c) and m_h ∈ ∂_D h(c). The subadditivity of the limit superior allows us to write

    lim sup_{Δc→0⁻} [(g + h)(c + Δc) − (g + h)(c)]/Δc ≤ lim sup_{Δc→0⁻} [g(c + Δc) − g(c)]/Δc + lim sup_{Δc→0⁻} [h(c + Δc) − h(c)]/Δc ≤ m_g + m_h,

and the analogous right limit inferior inequality gives lim inf_{Δc→0⁺} [(g + h)(c + Δc) − (g + h)(c)]/Δc ≥ m_g + m_h. Therefore, m_g + m_h ∈ ∂_D (g + h)(c).

(ii) Trivial.

(iii) From part (i), g + h is subdifferentiable, and hence differentiable by Lemma 1. From part (ii), −g is superdifferentiable, and part (i) implies h = (g + h) + (−g) is superdifferentiable. Therefore, Lemma 1 implies h is differentiable. The same argument, with the roles of g and h exchanged, shows that g is differentiable.
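A toy numerical illustration of parts (i) and (iii) of the lemma above (our own example): the subderivatives of g(x) = |x| and h(x) = x add up, and since the sum is visibly non-differentiable at 0, part (iii), read in contrapositive, correctly rules out its superdifferentiability:

```python
def g(x):            # subdifferentiable at 0 with sub-differential [-1, 1]
    return abs(x)

def h(x):            # differentiable at 0, hence subdifferentiable with {1}
    return x

def s(x):            # part (i): the sum g + h is again subdifferentiable
    return g(x) + h(x)

eps = 1e-8
left = (s(0.0) - s(-eps)) / eps     # quotient as Δc -> 0-:  0.0
right = (s(eps) - s(0.0)) / eps     # quotient as Δc -> 0+:  2.0
print(left, right)  # sub-differential of g + h at 0 is [0, 2] = [-1, 1] + {1}

# Part (iii) in contrapositive: g + h is not differentiable at 0 (left != right),
# and g, h are subdifferentiable, so g + h cannot be superdifferentiable --
# otherwise part (iii) would force differentiability.
```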
History: e notions of a Dini sub- and superdifferentials of one-dimensional functions
are special cases of Fréchet sub- and superdifferentials of functions on Banach spaces. For
simplicity, the body of our paper uses Dini sub- and superdifferentials (generalizations of
all results are in Appendix A). However, we will discuss the history here in terms of the
non-smooth analysis literature which focuses on Fréchet sub- and superdifferentials.
e notions of Fréchet sub- and superdifferentials generalize classical notions from
convex analysis to non-convex functions. However, according to Kruger (), previous
work in mathematics has not applied these concepts because of “rather poor calculus” as
∂F (f + g)(x) ̸= ∂F f (x) + ∂F g(x). Our approach appears to be novel: we simultaneously
study sub- and superdifferentiability of functions to establish full differentiability.
e notions of Fréchet sub- and superdifferentials defined in Appendix A are standard,
and appear in Schirotzek (, Chapter ), although Fréchet superdifferentials only appear in a two-page section on Hamilton-Jacobi equations. e special case of Dini sub
and superdifferentials is non-standard. e pioneering papers that lead to these definitions are Clarke (, ), Penot (, ), and Bazaraa, Goode and Nashed ().
Lemma A. – the Banach space version of Lemma  – appears without proof as Proposition . in Kruger (), but does not appear in Schirotzek. Parts (i) and (ii) of Lemma A.
– the Banach space version of Lemma  – appear without proof in the discussion on pages 
and  of Schirotzek, respectively. We believe part (iii) is novel.
. Proof of eorem 
To establish eorem , we prove that
(i) non-differentiable points are either upward or downward kinks or both (Lemma ),
(ii) optimal choices may not occur at downward kinks (Lemma , Figure a), and
(iii) upper envelopes may not contain upward kinks (Lemma , Figure b).
Lemma . If ĉ ∈ int(C) is a maximum of g : R → R, then g is superdifferentiable at ĉ with
0 ∈ ∂ D g(ĉ).
Proof. Since ĉ is a maximum, the slope on the le is weakly positive, and the slope on the
right is weakly negative. In other words, for any ∆c > 0,
g (ĉ − ∆c) − g (c)
g (ĉ + ∆c) − g (c)
≥0≥
.
−∆c
∆c
Taking limits gives
lim inf
−
∆c→0
g (ĉ + ∆c) − g (c)
g (ĉ + ∆c) − g (c)
≥ 0 ≥ lim sup
,
∆c
∆c
∆c→0+
which establishes 0 ∈ ∂ D g(ĉ).
Lemma . If F is the upper envelope of a (possibly infinite) set of differentiable functions
ˆ then F is subdifferentiable at c with fc (c, d)
ˆ ∈
{f (·, d)}, and c ∈ int(C), and F (c) = f (c, d),
∂D F (c).
Proof. Since dˆ is an optimal choice at c but (perhaps) not at c + ∆c,
(
)
( )
ˆ
f c + ∆c, d − f c, dˆ ≤ F (c + ∆c) − F (c) .

[Figure 3: Illustration of the proof of Theorem 1. Panel (a) illustrates Lemma 3: no maximum occurs at a downward kink. Panel (b) illustrates Lemma 4: an upper envelope has no upward kinks.]
Dividing by Δc > 0, and taking limits, gives

    f_c(c, d̂) ≤ lim inf_{Δc→0⁺} [F(c + Δc) − F(c)]/Δc.

Similarly, dividing by Δc < 0 and taking limits gives

    f_c(c, d̂) ≥ lim sup_{Δc→0⁻} [F(c + Δc) − F(c)]/Δc.

Therefore, f_c(c, d̂) ∈ ∂_D F(c).
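The lemma just proved can be checked numerically even at a kink (a toy example of ours): at a crossing point of two differentiable functions, the derivative of each maximizing function is a Dini subderivative of the envelope, which has a downward kink there:

```python
# Toy upper envelope of two differentiable functions crossing at c* = 1.
def f1(c):
    return 2.0 * c            # slope 2 at the crossing

def f2(c):
    return 1.0 + c            # slope 1 at the crossing

def F(c):
    return max(f1(c), f2(c))  # kink at c* = 1, where f1 = f2 = 2

eps = 1e-8
left = (F(1.0) - F(1.0 - eps)) / eps    # slope from the left: f2 is active
right = (F(1.0 + eps) - F(1.0)) / eps   # slope from the right: f1 is active
print(left, right)  # ≈ 1.0 2.0: the sub-differential of F at 1 is [1, 2],
                    # containing f2'(1) = 1 and f1'(1) = 2; the super-
                    # differential is empty (a downward kink).
```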
We are now ready to prove Theorem 1, which is restated here.

Theorem 1. Suppose F is the upper envelope of a (possibly infinite) set of differentiable functions {f(·, d)}. If (ĉ, d̂) maximizes f, and ĉ ∈ int(C), then F is differentiable at ĉ and satisfies the first-order condition F′(ĉ) = f_c(ĉ, d̂) = 0.

Proof. Lemmata 3 and 4 establish that F is super- and subdifferentiable at ĉ with 0 ∈ ∂^D F(ĉ) and f_c(ĉ, d̂) ∈ ∂_D F(ĉ). Applying Lemma 1, we conclude that F is differentiable at ĉ with F′(ĉ) = 0 = f_c(ĉ, d̂).
History: We sketch the history of the proof steps (i)–(iii). Mirman and Zilcha (1975) introduced (iii) in the context of a growth model. Instead of using (ii), they ensure that there are no downward kinks by assuming that the objective is jointly concave in all choices. Their proofs are based on directional derivatives, which exist everywhere on concave functions.⁹ Benveniste and Scheinkman (1979) generalize their theorem.
⁹ Rockafellar (1970) proves the existence of directional derivatives of concave functions, which he traces as far back as Stolz (1893), who describes it as a standard result from geometry.
Amir et al. (1991) introduced the proof strategy of (i)–(iii), also in the context of a growth model. To ensure that directional derivatives exist in step (i), they impose a supermodularity assumption on the underlying objective function.
Milgrom and Segal (2002) were the first to notice that this logic applies without any topological or monotonicity assumptions on the discrete choice set D. To ensure that directional derivatives exist in step (i), they assumed that {f(·, d)}_{d∈D} is an equidifferentiable set of functions.¹⁰ Their theorem generalizes Clarke (1975), which in turn generalizes Danskin (1967).
. Proof of eorem 
In this section, we prove eorem  which is restated here:
eorem . Suppose (ĉ′ , dˆ′ ) and (ĉ′′ , dˆ′′ ) are optimal choices at states (c, d) and (ĉ′ , dˆ′ ), respectively. If ĉ′ is a one-period interior choice with respect to (c, ĉ′′ , d, dˆ′ , dˆ′′ ), then V (·, dˆ′ ) is
differentiable at ĉ′ and satisfies the first-order condition
−uc′ (c, ĉ′ ; d, dˆ′ ) = β Vc (ĉ′ , dˆ′ ) = β uc (ĉ′ , ĉ′′ ; dˆ′ , dˆ′′ ).
e following lemma implies that V (·, dˆ′ ) is subdifferentiable at the optimal choice ĉ′ when
ĉ′ ∈ int(Γ(·, ĉ′′ ; dˆ′ , dˆ′′ )). In other words, upward kinks may only arise when a constraint
binds on today’s choice; upward kinks in the value function at future dates do not propagate
backwards. Note that the lemma is written with different timing, and is applicable in a more
general setting than the theorem.
Lemma . Suppose (ĉ′ , dˆ′ ) are optimal choices given (c, d) in Problem . If c ∈ int(Γ(·, ĉ′ ; d, dˆ′ )),
then the value function V (·, d) is subdifferentiable at c with uc (c, ĉ′ ; d, dˆ′ ) ∈ ∂D V (c, d).
Proof. Since (ĉ′, d̂′) is an optimal choice at (c, d) and remains feasible after small changes in c, we have, for all c + Δc ∈ Γ(·, ĉ′; d, d̂′), the inequality

    V(c + Δc, d) − V(c, d) ≥ [u(c + Δc, ĉ′; d, d̂′) + β V(ĉ′, d̂′)] − [u(c, ĉ′; d, d̂′) + β V(ĉ′, d̂′)]
                           = u(c + Δc, ĉ′; d, d̂′) − u(c, ĉ′; d, d̂′).

Since c ∈ int(Γ(·, ĉ′; d, d̂′)), this inequality holds for all Δc in an open neighborhood of 0. Dividing both sides by Δc > 0 and taking limits gives

    lim inf_{Δc→0⁺} [V(c + Δc, d) − V(c, d)]/Δc ≥ u_c(c, ĉ′; d, d̂′).
¹⁰ A set of functions {f(·, d)}_{d∈D} is equidifferentiable at c if [f(c + Δc, d) − f(c, d)]/Δc converges uniformly over d ∈ D as Δc → 0.

Similarly, dividing both sides by Δc < 0 and taking limits gives

    lim sup_{Δc→0⁻} [V(c + Δc, d) − V(c, d)]/Δc ≤ u_c(c, ĉ′; d, d̂′).

Therefore, V(·, d) is subdifferentiable at c with u_c(c, ĉ′; d, d̂′) ∈ ∂_D V(c, d).
It remains to show that V(·, d̂′) is superdifferentiable at the optimal choice ĉ′. The Bellman equation in Problem 1 may be decomposed into the recursive equations

    v(c′; c, d) = sup_{d′ ∈ Γ(c,c′;d,·)} u(c, c′; d, d′) + β V(c′, d′)        (2a)

    V(c, d) = sup_{c′ ∈ C} v(c′; c, d)   s.t. c′ ∈ Γ(c, ·; d, d′) for some d′ ∈ D.        (2b)

Our approach is to strip away the operations on the right side of (2a) until we arrive at V, showing that each expression is superdifferentiable at each step. Surprisingly, the subdifferentiability of V(·, d̂′) established above plays a key role.
Since Theorem 2 requires that the optimal choice ĉ′ lies in int(Γ(c, ·; d, d̂′)), Lemma 3 implies that v(·; c, d) is superdifferentiable at ĉ′, and 0 is a superderivative.
The right side of (2a) can be written as

    G(c′) = sup_{d′ ∈ Γ(c,c′;d,·)} g(c′, d′)        (3a)

    g(c′, d′) = u(c, c′; d, d′) + β V(c′, d′).        (3b)

Part (i) of the following lemma establishes that g(·, d̂′) is also superdifferentiable at ĉ′ with a superderivative of 0. The lemma applies to general static optimization problems, and may be applied by setting f = g and F = G.
Lemma . Suppose F is the upper envelope of a set of functions {f (·, d)}, and that F (c∗ ) =
f (c∗ , d∗ ).
(i) If F is superdifferentiable at c∗ , then f (·, d∗ ) is also superdifferentiable at c∗ with ∂ D f (c∗ , d∗ ) ⊇
∂ D F (c∗ ).
(ii) If f (·, d∗ ) is subdifferentiable at c∗ , then F is also subdifferentiable at c∗ with ∂D f (c∗ , d∗ ) ⊆
∂D F (c∗ ).

Proof. We only present the proof of part (i), as the proof of part (ii) is analogous. Since F is superdifferentiable at c*, there is some slope m^* ∈ ∂^D F(c*) with

    lim inf_{Δc→0⁻} [F(c* + Δc) − F(c*)]/Δc ≥ m^* ≥ lim sup_{Δc→0⁺} [F(c* + Δc) − F(c*)]/Δc.

Since F(c) ≥ f(c, d*), we know that

    F(c* + Δc) − F(c*) ≥ f(c* + Δc, d*) − f(c*, d*).

Dividing by Δc > 0 and taking limits, we find that

    m^* ≥ lim sup_{Δc→0⁺} [F(c* + Δc) − F(c*)]/Δc ≥ lim sup_{Δc→0⁺} [f(c* + Δc, d*) − f(c*, d*)]/Δc.

Along with the analogous inequality on the left, this establishes m^* ∈ ∂^D f(c*, d*).
So far, we have established that g(·, d̂′) is superdifferentiable at ĉ′ and that each term in its sum is subdifferentiable. Therefore, Lemma 2 part (iii) implies g(·, d̂′) and V(·, d̂′) are differentiable at ĉ′. We also established that 0 is a superderivative of g(·, d̂′) and u_c(ĉ′, ĉ′′; d̂′, d̂′′) is a subderivative of V(·, d̂′), so these are in fact the derivatives. The equality of Theorem 2 follows, and this completes the proof.
History: Lemma  is a straightforward generalization of Lemma , whose history is discussed above. e early envelope theorems for dynamic programming problems (Mirman
and Zilcha (, Lemma ) and Benveniste and Scheinkman ()) imposed concavity
assumptions to establish a form of superdifferentiability to complete the proof. In particular, part (ii) of Lemma  – which we did not use to prove the theorem – is reminiscent of
Benveniste and Scheinkman (, Lemma ). e proof of Amir et al. (, Lemma .)
has a similar structure to our eorem . However, in their setting the flow value is differentiable, so their proof is simpler.

4 Applications
To illustrate how broadly our theorems may be applied, we present two examples. The first is a classical dynamic programming problem with binary labor choice. This is a straightforward application of Theorem 2. The second is a capital adjustment problem with fixed costs. In this application, the optimal one-period interior choice condition fails, but nevertheless Theorem 2 may be applied.

Binary Labor Choice: Consider the following dynamic programming problem with consumption, savings, and a discrete labor choice:

    W(a) = max_{(c, a′, ℓ) ∈ ℝ³}  u(c, ℓ) + β W(a′)
    s.t.  c ≥ 0,  a′ ≥ 0,  ℓ ∈ {0, 1},
          c + a′ = R a + w ℓ,        (4)

where u(·, ℓ) is differentiable and β ∈ (0, 1). To apply Theorem 2, we reformulate the problem as follows:

    W̃(a, L) = max_{(a′, L′) ∈ Γ(a,·;L,·)}  u(R a + w L′ − a′, L′) + β W̃(a′, L′),

    where Γ = {(a, a′; L, L′) : (a, L) ∈ Ω, (a′, L′) ∈ Ω, R a + w L′ − a′ ≥ 0}
    and Ω = [0, ∞) × {0, 1}.

Note the abuse of notation: L′ = ℓ is the labor supplied today. This means that W̃ does not depend on its second argument L (i.e. W̃(·, L) = W for all L), as yesterday's labor choice is not payoff-relevant today.
Suppose L̂′, â′, and ĉ are optimal choices given a, and that ĉ′ is an optimal choice given â′.¹¹ The optimal one-period interior choice condition of Theorem 2 is that (i) ĉ > 0 and â′ > 0, and (ii) ĉ′ > 0. This condition is very weak in the context of this problem. If the agent's preferences satisfy the Inada condition that the marginal utility of consumption at c = 0 is infinite, then ĉ > 0 and ĉ′ > 0 are satisfied, leaving only the standard requirement that the savings choice â′ must be interior. If this condition is met, then the theorem establishes that W̃(·, L̂′) is differentiable at â′, and the first-order condition

    u_c(ĉ, L̂′) = β W̃_a(â′, L̂′) = β W′(â′)

is satisfied.
To summarize, W exhibits two types of kinks: those arising at savings choices where the agent is indifferent between working or not, and those arising where constraints change from non-binding to binding. Kinks at indifference points cannot be optimal choices and are therefore irrelevant. Kinks at constraint boundaries are only relevant when tomorrow's choice may become infeasible after a small change in today's choice. The lower bound of saving nothing tomorrow is irrelevant, because saving nothing is feasible regardless of today's choices. The upper bound of saving everything is also irrelevant when the agent's preferences satisfy the Inada condition, because consuming nothing is suboptimal. Therefore, neither type of kink interferes with the application of first-order conditions.
¹¹ ĉ and ĉ′ are short-hand for ĉ = R a + w L̂′ − â′ and ĉ′ = R â′ + w L̂′′ − â′′.
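The reformulated problem can also be solved numerically. The sketch below is ours, with invented functional forms and parameters (log utility with work disutility χ = 0.4, β = 0.95, R = 1.02, w = 1); it iterates the Bellman operator on a grid and checks the first-order condition in Euler-equation form, u_c(ĉ) = βR u_c(ĉ′), at an interior optimal savings choice:

```python
import numpy as np

# Invented illustration: log utility minus work disutility chi, beta * R < 1.
beta, R, w, chi = 0.95, 1.02, 1.0, 0.4
grid = np.linspace(0.0, 10.0, 401)               # grid for assets a and a'

def u(c, ell):
    # Utility of consumption c and labor ell; infeasible c <= 0 is penalized.
    return np.where(c > 1e-12, np.log(np.maximum(c, 1e-12)) - chi * ell, -1e10)

W = np.zeros(grid.size)
for _ in range(400):                             # value function iteration
    best = np.full(grid.size, -np.inf)
    for ell in (0, 1):                           # discrete labor choice
        cons = R * grid[:, None] + w * ell - grid[None, :]   # c(a, a')
        best = np.maximum(best, (u(cons, ell) + beta * W[None, :]).max(axis=1))
    W = best

def policy(a):
    # Optimal (consumption, savings index, labor) at state a.
    cand = [(u(R * a + w * ell - grid, ell) + beta * W, ell) for ell in (0, 1)]
    vals, ell_star = max(cand, key=lambda t: t[0].max())
    j = int(vals.argmax())
    return R * a + w * ell_star - grid[j], j, ell_star

# Euler-equation check at an interior state: u'(c0) = beta * R * u'(c1).
c0, j, _ = policy(grid[200])                     # today, at a = 5.0
c1, _, _ = policy(grid[j])                       # tomorrow, at a' = grid[j]
euler_gap = abs(1.0 / c0 - beta * R / c1) * c0   # relative Euler-equation error
print(0 < j < grid.size - 1, round(euler_gap, 3))  # interior choice, small gap
```

The residual `euler_gap` is small only up to the grid spacing; the point of the exercise is that the optimal savings choice lands at a differentiable point of W, so the first-order condition is meaningful despite the labor-choice kinks.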

Fixed Costs of Capital Adjustment: In many markets, there are fixed costs associated with adjusting capital stocks. For example, expanding office space involves searching for a new building, transporting furniture, and so on. Khan and Thomas (2008) and Bachmann, Caballero and Engel study the impact of fixed costs on investment over the business cycle. They apply the envelope theorem of Benveniste and Scheinkman (1979) to the benchmark model without fixed costs, but fall back on numerical methods in the general case. We show that the value function is differentiable at all optimal capital adjustment levels. More generally, this application illustrates how to analyze problems in which the optimal one-period interior choice condition of Theorem 2 fails. The techniques explored here are also applicable to more general adjustment costs, as well as irreversible investment and problems with bid-ask spreads.
A firm has access to a production technology that allows it to use k units of capital to produce f(k) units of output, where f is differentiable. The market price of the output is normalized to 1. The capital stock k depreciates at rate δ. The firm may adjust the capital stock at any time, but this requires a fixed cost of c units of output. If it decides to pay this fixed cost, then it may buy or sell units of capital at a price of pk. The firm discounts future profits at rate β, and has the following dynamic programming problem:

W(k) = max { f(k) + β W((1 − δ) k),   max_{k′ ≥ 0} f(k) − c − pk (k′/(1 − δ) − k) + β W(k′) }.
We assume that there is some return to investment and that the firm prefers not to allow the capital stock to depreciate to nothing, i.e.

−c + max_k ∑_{t=0}^{∞} β^t f((1 − δ)^t k) > ∑_{t=0}^{∞} β^t f(0).
The value function W has downward kinks at capital levels k where the firm is indifferent between making an adjustment or not. Moreover, when the firm changes from non-adjustment to adjustment (in either direction), it pays a fixed cost which might cause a downward jump in its profits. This could potentially lead to upward kinks in the value function. Below, we apply Theorem  to establish: if k̂′ > 0 is an optimal choice given k that involves an adjustment (i.e. k̂′ ≠ (1 − δ) k), then the value function W is differentiable at k̂′ and satisfies the first-order condition

pk/(1 − δ) = β W′(k̂′).
The firm sets the marginal cost of adjustment equal to the marginal future benefit. Since neither depends on the prior capital stock, there is no history dependence in the capital stock choice once an adjustment decision has been made. Therefore, the firm repeats finite cycles in which the capital stock depreciates and is replenished periodically according to the Euler equation.
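A value-function-iteration sketch (ours, not from the paper; f(k) = k^α and every parameter value below are illustrative assumptions) makes both observations concrete: conditional on adjusting, the target capital stock is the same for every current stock, and at the target the first-order condition pk/(1 − δ) = β W′(k̂′) holds numerically even though W has kinks elsewhere.

```python
import numpy as np

# Illustrative fixed-cost adjustment problem (our parameterization, not the
# paper's): f(k) = k**alpha, depreciation delta, fixed cost c, capital price pk.
alpha, beta, delta, pk, c = 0.3, 0.95, 0.1, 1.0, 0.15
grid = np.linspace(0.01, 4.0, 801)
fk = grid ** alpha
W = np.zeros_like(grid)

for _ in range(5000):
    # No adjustment: capital simply depreciates (np.interp clamps at the ends).
    no_adjust = fk + beta * np.interp((1 - delta) * grid, grid, W)
    # Adjustment: only -pk*k'/(1-delta) + beta*W(k') depends on k', so the
    # maximizer -- the adjustment target -- is independent of the current k.
    gain = -pk * grid / (1 - delta) + beta * W
    adjust = fk - c + pk * grid + gain.max()
    W_new = np.maximum(no_adjust, adjust)
    if np.max(np.abs(W_new - W)) < 1e-10:
        W = W_new
        break
    W = W_new

i = int(np.argmax(gain))                    # target k', conditional on adjusting
k_target = grid[i]
W_prime = (W[i + 1] - W[i - 1]) / (grid[i + 1] - grid[i - 1])
print(f"target k' = {k_target:.3f}: beta*W'(k') = {beta * W_prime:.4f}, "
      f"pk/(1-delta) = {pk / (1 - delta):.4f}")
```

The two printed numbers compare β W′(k̂′), computed by central differences, with pk/(1 − δ); they agree up to grid error, and the target does not vary with the current stock by construction of `gain`.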
We reformulate the problem into the notation of Problem :

W̃(k, a) = { max_{a′ ∈ {0,1}}  f(k) + β W̃((1 − δ) k, a′)                                    if a = 0,
           { max_{k′ ≥ 0, a′ ∈ {0,1}}  f(k) − c − pk (k′/(1 − δ) − k) + β W̃(k′, a′)    if a = 1.
The optimal one-period interior choice condition of Theorem  is satisfied if (i) a = 1 and k̂′ > 0, and (ii) â′ = 1. In other words, Theorem  applies if the agent makes two adjustments in a row. However, the condition is violated if there is no adjustment in the following period, because a small change in the capital level k̂′ chosen today would imply a small change in the capital level (1 − δ) k̂′ "chosen" tomorrow. Nevertheless, we establish that W is differentiable at any optimal capital choice k̂′ > 0.
In the case that the firm waits before readjusting, we may still apply Theorem  by bundling the waiting periods together with the first adjustment period into one single period. If the firm waits n periods before readjusting, the optimal capital level k̂′ maximizes¹²

f(k) − c − pk (k′/(1 − δ) − k) + β f(k′) + · · · + β^n f((1 − δ)^{n−1} k′) + β^{n+1} W̃((1 − δ)^n k′, 1).
Equivalently, the optimal capital level n periods into the future, K̂′, maximizes

f(k) − c − pk (K′/(1 − δ)^{n+1} − k) + β f(K′/(1 − δ)^n) + · · · + β^n f(K′/(1 − δ)) + β^{n+1} W̃(K′, 1).
We may reinterpret Theorem  by treating the flow utility function as all of the terms before the continuation value.¹³ Now, the optimal one-period interior choice condition is that (i) a = 1 and K̂′ > 0, and (ii) an adjustment will be made in the reformulated "tomorrow." By construction, adjustments are made in both periods, so this condition is met. Therefore, Theorem  implies that W̃(·, 1) is differentiable at K̂′ = (1 − δ)^n k̂′.
¹² We assumed earlier that the firm does not find it profitable to allow the capital stock to depreciate to nothing.
¹³ This interpretation of the problem is non-stationary, in that the first period has a different flow value function from the subsequent periods. As discussed earlier, Theorem  generalizes easily to non-stationary problems, because the discrete choice can include a time index.
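The change of variables behind this bundling argument can be sanity-checked numerically. The sketch below (ours, not from the paper) maximizes the bundled objective once over k′ and once over the rescaled choice K′ = (1 − δ)^n k′ used in the displayed objective; the stand-in continuation value W̃ and all parameter values are illustrative assumptions.

```python
import numpy as np

# Numerical sanity check (ours, not from the paper) of the rescaling
# K' = (1 - delta)**n * k'. The stand-in continuation value Wtil and all
# parameter values are made up for illustration.
beta, delta, n = 0.95, 0.1, 3
pk, c, k = 1.0, 0.15, 1.0
f = lambda x: x ** 0.3
Wtil = lambda x: 5.0 * np.log(1.0 + x)      # illustrative continuation value

def bundled(kp):
    """Objective of the bundled period, as a function of today's choice k'."""
    flows = sum(beta ** (t + 1) * f((1 - delta) ** t * kp) for t in range(n))
    return (f(k) - c - pk * (kp / (1 - delta) - k)
            + flows + beta ** (n + 1) * Wtil((1 - delta) ** n * kp))

grid = np.linspace(0.01, 5.0, 50_001)
k_hat = grid[np.argmax(bundled(grid))]                      # maximize over k'
K_hat = grid[np.argmax(bundled(grid / (1 - delta) ** n))]   # maximize over K'

# The two parameterizations pick out the same plan:
assert abs(K_hat - (1 - delta) ** n * k_hat) < 1e-3
```

Because the two parameterizations describe the same set of plans, the maximizers coincide after rescaling, which is all the reinterpretation requires.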
Next, we establish that W is subdifferentiable at k̂′. The continuation value W is bounded below by the value function from choosing non-adjustment for n periods,

H(k′) = f(k′) + · · · + β^n f((1 − δ)^{n−1} k′) + β^{n+1} W̃((1 − δ)^n k′, 1).

By construction, H(k̂′) = W(k̂′), and H inherits differentiability at k̂′ from W̃(·, 1) (as established above). Therefore, part (ii) of Lemma  implies that W is subdifferentiable at k̂′.
Finally, since k̂′ maximizes

−pk k′/(1 − δ) + β W(k′),
this objective is superdifferentiable at k̂′ by Lemma . Moreover, each term is subdifferentiable, so the objective is differentiable by Lemma . Thus, W may be expressed as the difference of two functions that are differentiable at k̂′. This completes the proof that at any optimal choice k̂′ > 0, the first-order condition

pk/(1 − δ) = β W′(k̂′)

is satisfied.
Numerical Analysis: Our results may be useful for numerical analysis. Fella () applies our theorems in his generalization of the endogenous grid method of Carroll (). He finds that his method is substantially faster and more accurate than discretization methods.
A  Banach Space Version
For many dynamic programming problems, there are several continuous choices (we already accommodated arbitrary “discrete” choice spaces above). We generalize our concepts
and results to multidimensional spaces, which we number in the same way as in the main
text for ease of reference.
Let (X, ∥·∥) be a Banach space (for example, X could be Rn). We denote its topological dual space by

X∗ = {ϕ : X → R such that ϕ is linear and continuous}.

The standard notion of differentiability in Banach spaces is due to Fréchet.
Definition A.. A function f : X → R is Fréchet differentiable at x if there is some ϕ∗ ∈ X ∗
such that
f (x + ∆x) − f (x) − ϕ∗ ∆x
= 0.
∆x→0
∥∆x∥
lim
ϕ∗ is called the Fréchet derivative of f at x, and may be written as f ′ (x) or fx (x).
eorem A.. Suppose F is the upper envelope of a (possibly infinite) set of Fréchet differˆ maximizes f , then F is Fréchet differentiable at ĉ with
entiable functions {f (·, d)}. If (ĉ, d)
′
ˆ = 0.
F (ĉ) = fc (ĉ, d)
The statement of the generalization of Theorem  is identical to the original, apart from the use of Fréchet derivatives.
eorem A.. Suppose (ĉ′ , ĉ′′ , dˆ′ , dˆ′′ ) are optimal choices following (c, d) in Problem  (in
which the utility functions are Fréchet differentiable in the analogous way). If ĉ′ is a one-period
interior choice with respect to (c, ĉ′′ , d, dˆ′ , dˆ′′ ), then V (·, dˆ′ ) is Fréchet differentiable at ĉ′ and
satisfies the first-order condition
− uc′ (c, ĉ′ ; d, dˆ′ ) = β Vc (ĉ′ , dˆ′ ) = β uc (ĉ′ , ĉ′′ ; dˆ′ , dˆ′′ ).
()
Notice that the following proofs are shorter than the standard proofs (because we do not have to deal with left and right limits); however, this comes at the cost of a loss in economic intuition. We keep the order and numbering similar to the proofs in Section , but we omit all the surrounding text discussing these results.
Definition A.. e Fréchet subdifferential of f : X → R is
{
}
f (x + ∆x) − f (x) − ϕ∗ ∆x
∗
∗
∂F f (x) = ϕ ∈ X : lim inf
≥0 ,
∆x→0
∥∆x∥
and f is Fréchet subdifferentiable if ∂F f (x) is non-empty. Similarly, the Fréchet super-differential of f is
{
}
f (x + ∆x) − f (x) − ϕ∗ ∆x
F
∗
∗
∂ f (x) = ϕ ∈ X : lim sup
≤0 ,
∥∆x∥
∆x→0
and f is Fréchet superdifferentiable if ∂ F f (x) is non-empty.
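In one dimension, these definitions reduce to sign conditions on difference quotients, which can be explored numerically. The sketch below (ours, not from the paper) uses a crude two-point proxy for the liminf/limsup at x = 0 and the standard examples |x| and −|x|: the former is subdifferentiable but not superdifferentiable at the kink, the latter the reverse.

```python
import numpy as np

# Two-point finite-difference proxy (ours, not from the paper) for the
# Frechet sub/superdifferential conditions at x = 0, for a candidate slope phi:
#   subgradient:   liminf  [f(dx) - f(0) - phi*dx] / |dx| >= 0
#   supergradient: limsup  [f(dx) - f(0) - phi*dx] / |dx| <= 0
# Checking only dx = -eps and dx = +eps is a crude stand-in for the limits,
# but it is exact for piecewise-linear functions such as |x|.

def quotients(f, phi, eps=1e-8):
    dxs = np.array([-eps, eps])
    return (f(dxs) - f(0.0) - phi * dxs) / np.abs(dxs)

def is_subgradient(f, phi):
    return quotients(f, phi).min() >= -1e-12

def is_supergradient(f, phi):
    return quotients(f, phi).max() <= 1e-12

absval = np.abs
neg_absval = lambda x: -np.abs(x)

# |x| is subdifferentiable at 0 (any slope in [-1, 1] is a subgradient) ...
assert all(is_subgradient(absval, phi) for phi in (-1.0, 0.0, 1.0))
# ... but admits no supergradient there, so it is not differentiable at 0.
assert not any(is_supergradient(absval, phi) for phi in np.linspace(-2, 2, 41))
# -|x| is the mirror image: superdifferentiable with no subgradient.
assert is_supergradient(neg_absval, 0.0)
assert not any(is_subgradient(neg_absval, phi) for phi in np.linspace(-2, 2, 41))
```

This also previews the lemma below: a function that is differentiable at a point must be both sub- and superdifferentiable there, so |x| cannot be differentiable at 0.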
For completeness, we prove the following standard result which generalizes Lemma .
Lemma A.. A function f : X → R is Fréchet differentiable if and only if it is both Fréchet
sub- and superdifferentiable.
Proof. It is straightforward to show that differentiable functions are sub- and superdifferentiable. Conversely, suppose f is both Fréchet sub- and superdifferentiable, so that
ϕ∗ ∈ ∂F f (x) and ϕ∗ ∈ ∂ F f (x). en from the definitions,
f (x + ∆x) − f (x) − ϕ∗ ∆x
≥ 0,
∆x→0
∥∆x∥
f (x + ∆x) − f (x) − ϕ∗ ∆x
≤ 0.
lim sup
∥∆x∥
∆x→0
lim inf
The second inequality may be rewritten as

lim inf_{∆x→0} −[f(x + ∆x) − f(x) − ϕ^∗ ∆x] / ∥∆x∥ ≥ 0.

From the superadditivity of limits inferior, we deduce

lim inf_{∆x→0} { [f(x + ∆x) − f(x) − ϕ_∗ ∆x] / ∥∆x∥ − [f(x + ∆x) − f(x) − ϕ^∗ ∆x] / ∥∆x∥ } ≥ 0,
lim inf_{∆x→0} (ϕ^∗ − ϕ_∗) ∆x / ∥∆x∥ ≥ 0.

But this final inequality is only satisfied when ϕ^∗ = ϕ_∗. Therefore, the Fréchet sub- and superdifferentials coincide on a singleton, which must be the Fréchet derivative.
Lemma A.. e following statements are true at any c:
(i) If g and h are Fréchet subdifferentiable, then so is g + h.
(ii) g is Fréchet subdifferentiable if and only if −g is Fréchet superdifferentiable.
(iii) If g and h are Fréchet subdifferentiable and g + h is Fréchet superdifferentiable, then g,
h, and g + h are Fréchet differentiable.
Lemma A.. If ĉ ∈ int(C) is a maximum of f : C → R, then f is superdifferentiable at ĉ
with 0 ∈ ∂ F f (ĉ).
Proof. Since ĉ is a maximum, f (ĉ+∆c)−f (ĉ) ≤ 0 for sufficiently small ∆c ∈ X. Dividing
by ∥∆c∥ and taking limits gives
lim sup
∆c→0
f (ĉ + ∆c) − f (ĉ)
≤ 0.
∥∆c∥
erefore 0 ∈ ∂ F f (ĉ).

Lemma A.. If F is the upper envelope of a set of Fréchet differentiable functions {f (·, d)},
and c∗ ∈ int(C), and F (c∗ ) = f (c∗ , d∗ ), then F is subdifferentiable with fc (c∗ , d∗ ) ∈
∂F F (c∗ ).
Proof. If F (c∗ ) = f (c∗ , d∗ ), then we have
f (c∗ + ∆c, d∗ ) − f (c∗ , d∗ ) ≤ F (c∗ + ∆c) − F (c∗ ) .
Subtracting ϕ∆c, dividing by ∥∆c∥, and taking limits on both sides gives
lim inf
∆c→0
f (c∗ + ∆c, d∗ ) − f (c∗ ) − ϕ ∆c
F (c∗ + ∆c) − F (c∗ ) − ϕ ∆c
≤ lim inf
.
∆c→0
∥∆c∥
∥∆c∥
Aer setting ϕ = fc (c∗ , d∗ ), the le side is zero. erefore, the right side is non-negative,
so fc (c∗ , d∗ ) ∈ ∂F F (c∗ ).
Lemma A.. Suppose (ĉ′ , dˆ′ ) are optimal choices given (c, d) in Problem . If c ∈ int(Γ(·, ĉ′ ; d, dˆ′ )),
then the value function V (·, d) is subdifferentiable at c with uc (c, ĉ′ ; d, dˆ′ ) ∈ ∂D V (c, d).
Proof. is proof is omitted; it is straightforward to adapt the proof of Lemma  using the
technique in the proof of Lemma .
Lemma A.. Suppose F is the upper envelope of the set of functions {f (·, d)}, and that
F (c∗ ) = f (c∗ , d∗ ).
(i) If F is Fréchet superdifferentiable at c∗ , then f (·, d∗ ) is also Fréchet superdifferentiable
at c∗ .
(ii) If f (·, d∗ ) is Fréchet subdifferentiable at c∗ , then F is also Fréchet subdifferentiable at
c∗ .
Proof. We only provide a proof for part (i). Since F is Fréchet superdifferentiable at c∗ , there
is some ϕ∗ ∈ ∂ F F (c∗ ) with
lim sup
∆x→0
F (x + ∆x) − F (x) − ϕ∗ ∆x
≤ 0.
∥∆x∥
∗
Since F (c) ≥ f (c, d ), we know that
F (c∗ + ∆c) − F (c∗ ) ≥ f (c∗ + ∆c, d∗ ) − f (c∗ , d∗ ) .
Subtracting ϕ∗ ∆c, dividing by ∥∆c∥ and taking limits on both sides yields
lim sup
∆x→0
F (c∗ + ∆c) − F (c∗ ) − ϕ∗ ∆c
f (c∗ + ∆c, d∗ ) − f (c∗ , d∗ ) − ϕ∗ ∆c
≥ lim sup
.
∥∆c∥
∥∆c∥
∆x→0
From the first inequality, the le side is less than 0, which establishes that ϕ∗ ∈ ∂ F f (c∗ , d∗ ).

References

Aguirregabiria, V. and Mira, P. (). Dynamic discrete choice structural models: A survey. Journal of Econometrics,  (), –.

Amir, R., Mirman, L. J. and Perkins, W. R. (). One-sector nonclassical optimal growth: Optimality conditions and comparative dynamics. International Economic Review,  (), –.

Bachmann, R., Caballero, R. J. and Engel, E. M. (). Lumpy Investment in Dynamic General Equilibrium. Working paper, Yale University.

Bazaraa, M. S., Goode, J. J. and Nashed, M. Z. (). On the cones of tangents with applications to mathematical programming. Journal of Optimization Theory and Applications, , –.

Benveniste, L. M. and Scheinkman, J. A. (). On the differentiability of the value function in dynamic models of economics. Econometrica,  (), –.

Carroll, C. D. (). The method of endogenous gridpoints for solving dynamic stochastic optimization problems. Economics Letters,  (), –.

Clarke, F. H. (). Generalized gradients and applications. Transactions of the American Mathematical Society, , –.

— (). A new approach to Lagrange multipliers. Mathematics of Operations Research,  (), –.

Danskin, J. M. (). The theory of max-min, with applications. SIAM Journal on Applied Mathematics,  (), –.

Eckstein, Z. and Wolpin, K. I. (). The specification and estimation of dynamic stochastic discrete choice models: A survey. The Journal of Human Resources,  (), –.

Fella, G. (). A Generalized Endogenous Grid Method for Non-Concave Problems. Working Paper , Queen Mary, University of London, School of Economics and Finance.

Khan, A. and Thomas, J. K. (). Idiosyncratic shocks and the role of nonconvexities in plant and aggregate investment dynamics. Econometrica,  (), –.

Kruger, A. Y. (). On Fréchet subdifferentials. Journal of Mathematical Sciences, , –.

Milgrom, P. and Segal, I. (). Envelope theorems for arbitrary choice sets. Econometrica,  (), –.

Mirman, L. J. and Zilcha, I. (). On optimal growth under uncertainty. Journal of Economic Theory,  (), –.

Penot, J.-P. (). Sous-différentiels de fonctions numériques non convexes. Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, Série A, , –.

— (). Calcul sous-différentiel et optimisation. Journal of Functional Analysis,  (), –.

Renou, L. and Schlag, K. H. (). Ordients: Optimization and comparative statics without utility functions, mimeo.

Rincón-Zapatero, J. P. and Santos, M. S. (). Differentiability of the value function without interiority assumptions. Journal of Economic Theory,  (), –.

Rockafellar, R. T. (). Convex Analysis. Princeton University Press.

Rust, J. (). Dynamic programming. In S. N. Durlauf and L. E. Blume (eds.), The New Palgrave Dictionary of Economics, Basingstoke: Palgrave Macmillan.

Schirotzek, W. (). Nonsmooth Analysis. Springer.

Stolz, O. (). Grundzüge der Differential- und Integralrechnung. Leipzig: B. G. Teubner.