Penn Institute for Economic Research
Department of Economics
University of Pennsylvania
3718 Locust Walk
Philadelphia, PA 19104-6297
pier@econ.upenn.edu
http://economics.sas.upenn.edu/pier
PIER Working Paper 14-005
“Cautious Expected Utility and the Certainty Effect”
by
Simone Cerreia-Vioglio, David Dillenberger,
and Pietro Ortoleva
http://ssrn.com/abstract=2397353
Cautious Expected Utility and the Certainty Effect∗
Simone Cerreia-Vioglio†
David Dillenberger‡
Pietro Ortoleva§
February 2014
Abstract
Many violations of the Independence axiom of Expected Utility can be
traced to subjects’ attraction to risk-free prospects. The key axiom in this
paper, Negative Certainty Independence (Dillenberger, 2010), formalizes this
tendency. Our main result is a utility representation of all preferences over
monetary lotteries that satisfy Negative Certainty Independence together with
basic rationality postulates. Such preferences can be represented as if the agent
were unsure of how to evaluate a given lottery p; instead, she has in mind a
set of possible utility functions over outcomes and displays a cautious behavior: she computes the certainty equivalent of p with respect to each possible
function in the set and picks the smallest one. The set of utilities is unique in
a well-defined sense. We show that our representation can also be derived from
a ‘cautious’ completion of an incomplete preference relation.
JEL: D80, D81
Keywords: Preferences under risk, Allais paradox, Negative Certainty Independence, Incomplete preferences, Cautious Completion, Multi-Utility representation.
∗ We thank Faruk Gul, Fabio Maccheroni, Massimo Marinacci, Efe Ok, Wolfgang Pesendorfer, Gil Riella, and Todd Sarver for very useful comments and suggestions. The co-editor and four anonymous referees provided valuable comments that improved the paper significantly. We greatly thank Selman Erol for his help in the early stages of the project. Cerreia-Vioglio gratefully acknowledges the financial support of ERC (advanced grant, BRSCDP-TEA). Ortoleva gratefully acknowledges the financial support of NSF grant SES-1156091.
† Department of Decision Sciences, Università Bocconi. Email: simone.cerreia@unibocconi.it.
‡ Department of Economics, University of Pennsylvania. Email: ddill@sas.upenn.edu.
§ Department of Economics, Columbia University. Email: pietro.ortoleva@columbia.edu.
1 Introduction
Despite its ubiquitous presence in economic analysis, the paradigm of Expected Utility
is often violated in choices between risky prospects. While such violations have been
documented in many different experiments, a specific preference pattern emerges as
one of the most prominent: the tendency of people to favor certain (risk-free) options–
the so-called Certainty Effect (Kahneman and Tversky, 1979). This is shown, for
example, in the classic Common Ratio Effect, one of the Allais paradoxes, in which
subjects face the following two choice problems:
1. A choice between A and B, where A is a degenerate lottery which yields $3000
for sure and B is a lottery that yields $4000 with probability 0.8 and $0 with
probability 0.2.
2. A choice between C and D, where C is a lottery that yields $3000 with probability 0.25 and $0 with probability 0.75, and D is a lottery that yields $4000
with probability 0.2 and $0 with probability 0.8.
The typical result is that the vast majority of subjects choose A in problem 1
and D in problem 2,1 in violation of Expected Utility and in particular of its key
postulate, the Independence axiom. To see this, note that prospects C and D are the
0.25:0.75 mixture of prospects A and B, respectively, with the lottery that gives $0
for sure. This means that the only pairs of choices consistent with Expected Utility
are (A, C) and (B, D). Additional evidence of the Certainty Effect includes, among
many others, Allais’ Common Consequence Effect, as well as the experiments of
Cohen and Jaffray (1988), Conlisk (1989), Dean and Ortoleva (2012b), and Andreoni
and Sprenger (2012).2
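
To make the mixture relationship above concrete, here is a small Python check (ours, not part of the paper) that writes each prospect as a probability distribution over the three monetary outcomes and verifies that C and D are the 0.25:0.75 mixtures of A and B with the lottery that gives $0 for sure.

```python
# Prospects from the Common Ratio example, written as probability
# distributions over the three monetary outcomes.
A = {0: 0.00, 3000: 1.00, 4000: 0.00}        # $3000 for sure
B = {0: 0.20, 3000: 0.00, 4000: 0.80}        # $4000 w.p. 0.8, $0 w.p. 0.2
C = {0: 0.75, 3000: 0.25, 4000: 0.00}        # $3000 w.p. 0.25, $0 w.p. 0.75
D = {0: 0.80, 3000: 0.00, 4000: 0.20}        # $4000 w.p. 0.2, $0 w.p. 0.8
delta_0 = {0: 1.00, 3000: 0.00, 4000: 0.00}  # $0 for sure

def mix(lam, p, q):
    """The mixture lam*p + (1 - lam)*q of two lotteries."""
    return {x: lam * p[x] + (1 - lam) * q[x] for x in p}

def same(p, q, tol=1e-12):
    return all(abs(p[x] - q[x]) < tol for x in p)

# C and D are the 0.25:0.75 mixtures of A and B with delta_0, which is why
# Independence allows only the choice pairs (A, C) and (B, D).
assert same(mix(0.25, A, delta_0), C)
assert same(mix(0.25, B, delta_0), D)
```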
Following these observations, Dillenberger (2010) suggests a way to define the
Certainty Effect behaviorally, by introducing an axiom, called Negative Certainty
Independence (NCI), that is designed precisely to capture this tendency. To illustrate,
note that in the example above, prospect A is a risk-free option and it is chosen by
most subjects over the risky prospect B. However, once both options are mixed,
1 This example is taken from Kahneman and Tversky (1979). Of 95 subjects, 80% choose A over B, 65% choose D over C, and more than half choose the pair A and D. These findings have been replicated many times (see footnote 2).
2 A comprehensive reference to the evidence on the Certainty Effect can be found in Peter Wakker’s annotated bibliography, posted at http://people.few.eur.nl/wakker/refs/webrfrncs.doc.
leading to problem 2, the first option is no longer risk-free, and its mixture (C) is
now judged worse than the mixture of the other option (D). Intuitively, after the
mixture one of the options is no longer risk-free, which reduces its appeal. Following
this intuition, axiom NCI states that for any two lotteries p and q, any number λ in
[0, 1], and any lottery δx that yields the prize x for sure, if p is preferred to δx then
λp + (1 − λ) q is preferred to λδx + (1 − λ) q. That is, if the sure outcome x is not
enough to compensate the decision maker (henceforth DM) for the risky prospect
p, then mixing it with any other lottery, thus eliminating its certainty appeal, will
not result in the mixture of δx being more attractive than the corresponding mixture
of p. NCI is weaker than the Independence axiom, and in particular it permits
Independence to fail when the Certainty Effect is present – allowing the DM to favor
certainty, but ruling out the converse behavior. For example, in the context of the
Common Ratio questions above, NCI allows the typical choice (A, D), but is not
compatible with the opposite violation of Independence, (B, C). It is easy to see
how NCI is also consistent with most other experiments that document the Certainty
Effect.
The goal of this paper is to characterize the class of continuous, monotone, and
complete preference relations, defined on lotteries over some interval of monetary
prizes, that satisfy NCI. That is, we aim to characterize a new class of preferences
that are consistent with the Certainty Effect, together with very basic rationality
postulates. A characterization of all preferences that satisfy NCI could be useful
for a number of reasons. First, it provides a way of categorizing some of the existing
decision models that can accommodate the Certainty Effect (see Section 5 for details).
Second, a representation for a very general class of preferences that allow for the
Certainty Effect can help to delimit the potential consequences of such violations of
Expected Utility. For example, a statement that a certain type of strategic or market outcome is not possible for any preference relation satisfying NCI would be easier
to obtain given a representation theorem. Lastly, while Dillenberger (2010) did not
provide a utility representation of the preferences that satisfy NCI, he showed that
this property is linked not only to the Certainty Effect, but also to behavioral patterns
in dynamic settings, such as preferences for one-shot resolution of uncertainty. Hence,
characterizing this class will have implications also in different domains of choice.
Our main result is that any continuous, monotone, and complete preference relation over monetary lotteries satisfies NCI if and only if it can be represented as
follows: there exists a set W of strictly increasing (Bernoulli) utility functions over
outcomes, such that the value of any lottery p is given by
V(p) = inf_{v ∈ W} c(p, v),

where c(p, v) is the certainty equivalent of lottery p calculated using the utility function v. That is, if we denote by Ep(v) the expected utility of p with respect to v, then c(p, v) = v^{-1}(Ep(v)).
We call this representation a Cautious Expected Utility representation and interpret it as follows. The DM acts as if she were unsure how to exactly evaluate
each given lottery: she does not have one, but a set of possible utility functions over
monetary outcomes. She then reacts to this multiplicity using a form of caution:
she evaluates each lottery according to the lowest possible certainty equivalent corresponding to some function in the set.3 The use of certainty equivalents allows the DM
to compare different evaluations of the same lottery, each corresponding to a different
utility function, by bringing them to the same “scale” – dollar amounts. Note that,
by definition, c (δx , v) = x for all v ∈ W. Therefore, while the DM acts with caution
when evaluating general lotteries, such caution does not play a role when evaluating
degenerate ones. This leads to the Certainty Effect.
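
As a minimal illustration of how such a representation is evaluated (a sketch of ours, not the paper's implementation), the following Python snippet computes V(p) = inf_{v ∈ W} c(p, v) for a finite set W over a finite-support lottery; the two utilities used, a square root and a shifted logarithm, are arbitrary placeholders. The last line illustrates why caution plays no role for degenerate lotteries: every utility assigns a sure prize its own certainty equivalent.

```python
import math

def expected_utility(lottery, u):
    """E_p(v) for a lottery given as (prize, probability) pairs."""
    return sum(prob * u(prize) for prize, prob in lottery)

def certainty_equivalent(lottery, u, u_inv):
    """c(p, v) = v^{-1}(E_p(v))."""
    return u_inv(expected_utility(lottery, u))

def cautious_value(lottery, utilities):
    """V(p): the smallest certainty equivalent over the (finite) set of utilities."""
    return min(certainty_equivalent(lottery, u, u_inv) for u, u_inv in utilities)

# Placeholder set W: a square-root and a shifted-log utility, each with its inverse.
W = [
    (math.sqrt, lambda e: e ** 2),
    (lambda x: math.log(1.0 + x), lambda e: math.exp(e) - 1.0),
]

p = [(0.0, 0.5), (10_000.0, 0.5)]            # the 50/50 lottery from the text
print(cautious_value(p, W))                   # the lowest of the two certainty equivalents
print(cautious_value([(3_500.0, 1.0)], W))    # a sure prize x gets V = x (up to rounding)
```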
While the representation theorem discussed above characterizes a complete preference relation, the interpretation of Cautious Expected Utility is linked with the notion
of a completion of incomplete preferences. Consider a DM who has an incomplete
preference relation over lotteries, which is well-behaved (that is, a reflexive, transitive,
monotone, and continuous binary relation that satisfies Independence). Suppose that
the DM is asked to choose between two options that the original preference relation
is unable to compare; she then needs to choose a rule to complete her ranking. While
there are many ways to do so, the DM may like to follow what we call a Cautious
Completion: if the original relation is unable to compare a lottery p with a degenerate
lottery δx , then in the completion the DM opts for the latter – “when in doubt, go
with certainty.” Theorem 5 shows that there always exists a unique Cautious Completion, which admits a Cautious Expected Utility representation.4 That is, it shows
3 As we discuss in Section 5, this interpretation of cautious behavior is also provided in Cerreia-Vioglio (2009).
4 In fact, we will show that it admits a Cautious Expected Utility representation with the same set of utilities that can be used to represent the original incomplete preference relation (in the sense of the Expected Multi-Utility representation of Dubra et al. (2004)).
that Cautious Completions must generate preferences that satisfy NCI – leading to
another way to interpret this property.
We conclude by assessing the empirical performance of our model and its relation
with the existing literature. We will argue that not only does our model suggest a useful way of interpreting existing empirical evidence and derive new theoretical predictions,
but it can also accommodate some additional evidence on the Certainty Effect (e.g.,
the presence of Allais-type behavior with large stakes but not with small ones), which
poses difficulties to many popular alternative models.
The remainder of the paper is organized as follows. Section 2 presents the axiomatic structure, states the main representation theorem, and discusses the uniqueness properties of the representation. Section 3 characterizes risk attitudes and comparative risk aversion. Section 4 presents the result on the completion of incomplete
preference relations. Section 5 surveys related theoretical models. Section 6 discusses
experimental evidence. Section 7 concludes. All proofs appear in the Appendices.
2 The Model

2.1 Framework
Consider a compact interval [w, b] ⊂ R of monetary prizes. Let ∆ be the set of
lotteries (Borel probability measures) over [w, b], endowed with the topology of weak
convergence. We denote by x, y, z generic elements of [w, b] and by p, q, r generic
elements of ∆. We denote by δx ∈ ∆ the degenerate lottery (Dirac measure at x)
that gives the prize x ∈ [w, b] with certainty. The primitive of our analysis is a binary
relation < over ∆. The symmetric and asymmetric parts of < are denoted by ∼ and
≻, respectively. The certainty equivalent of a lottery p ∈ ∆ is a prize xp ∈ [w, b] such
that δxp ∼ p.
We start by imposing the following basic axioms on <.
Axiom 1 (Weak Order). The relation < is complete and transitive.
Axiom 2 (Continuity). For each q ∈ ∆, the sets {p ∈ ∆ : p < q} and {p ∈ ∆ : q < p}
are closed.
Axiom 3 (Weak Monotonicity). For each x, y ∈ [w, b], x ≥ y if and only if δx < δy .
The three axioms above are standard postulates. Weak Order is a common assumption of rationality. Continuity is needed to represent < through a continuous
utility function. Finally, under the interpretation of ∆ as monetary lotteries, Weak
Monotonicity simply implies that more money is better than less.
2.2 Negative Certainty Independence (NCI)
We now discuss the axiom which is the core assumption of our work. As we have
previously mentioned, much of the evidence against Expected Utility arises from experiments in which one of the lotteries is degenerate, that is, yields a certain prize for
sure. For example, recall Allais’ Common Ratio Effect: subjects choose between A
and B, where A = δ3000 and B = 0.8δ4000 + 0.2δ0 . They also choose between C and
D, where C = 0.25δ3000 + 0.75δ0 and D = 0.2δ4000 + 0.8δ0 . The typical finding is that
the majority of subjects tend to systematically violate Expected Utility by choosing
the pair A and D. Kahneman and Tversky (1979) called this pattern of behavior the
Certainty Effect. The next axiom, introduced in Dillenberger (2010), captures the
Certainty Effect with the following relaxation of the Independence axiom.5
Axiom 4 (Negative Certainty Independence). For each p, q ∈ ∆, x ∈ [w, b], and
λ ∈ [0, 1],
p < δx ⇒ λp + (1 − λ)q < λδx + (1 − λ)q.   (NCI)
The NCI axiom states that if the sure outcome x is not enough to compensate the
DM for the risky prospect p, then mixing it with any other lottery, thus eliminating
its certainty appeal, will not result in the mixture of x being more attractive than the
corresponding mixture of p. In particular, xp , the certainty equivalent of p, might not
be enough to compensate for p when part of a mixture.6 In this sense NCI captures
the Certainty Effect. When applied to the Common Ratio experiment, NCI only
posits that if B is chosen in the first problem, then D must be chosen in the second
one. Specifically, it allows the DM to choose the pair A and D, in line with the typical
pattern of choice. Coherently with this interpretation, NCI captures the Certainty
Effect as defined by Kahneman and Tversky (1979) – except that, as opposed to the
5 Recall that a binary relation < satisfies Independence if and only if for each p, q, r ∈ ∆ and for each λ ∈ (0, 1], we have p < q if and only if λp + (1 − λ)r < λq + (1 − λ)r.
6 We show in Appendix A (Proposition 5) that our axioms imply that < preserves First Order Stochastic Dominance and thus for each lottery p ∈ ∆ there exists a unique certainty equivalent xp.
Figure 1: NCI and the Marschak-Machina triangle
latter, NCI applies also to lotteries with three or more possible outcomes, and can
thus be applied to broader evidence such as Allais’ Common Consequence Effect.7
Besides capturing the Certainty Effect, NCI has static implications that put additional structure on preferences over ∆. For example, NCI (in addition to the other
basic axioms) implies Convexity: for each p, q ∈ ∆, if p ∼ q then λp + (1 − λ) q < q
for all λ ∈ [0, 1]. To see this, assume p ∼ q and apply NCI twice to obtain that
λp + (1 − λ)q < λδxp + (1 − λ)q < λδxp + (1 − λ)δxq = δxq ∼ q, where the last equality uses xp = xq (which holds since p ∼ q). NCI thus suggests
weak preference for randomization between indifferent lotteries. Furthermore, note
that by again applying NCI twice we obtain that p ∼ δx implies λp + (1 − λ) δx ∼ p
for all λ ∈ [0, 1], which means neutrality towards mixing a lottery with its certainty
equivalent.
To further illustrate the restrictions on preferences imposed by NCI, it will be
useful to discuss its implications for the shape of indifference curves in any Marschak-Machina triangle, which represents all lotteries over three fixed outcomes x3 > x2 >
x1 (see Figure 1). NCI implies three restrictions on these curves: (i) Convexity
implies that all curves must be convex ; (ii) the bold indifference curve through the
7 Kahneman and Tversky (1979, p. 267) define the Certainty Effect as the requirement that for each x, y ∈ [w, b] and α, β ∈ (0, 1), if αδy + (1 − α)δ0 is indifferent to δx then αβδy + (1 − αβ)δ0 is preferred to βδx + (1 − β)δ0. Notice that this immediately follows from NCI.
origin (which represents the lottery δx2 ) is linear, due to neutrality towards mixing a
lottery with its certainty equivalent; and (iii) this bold indifference curve is also the
steepest, that is, its slope relative to the (p1 , p3 ) coordinates exceeds that of any other
indifference curve through any point in the triangle.8 Since, as explained by Machina
(1982), the slope of an indifference curve expresses local attitude towards risk (with
a greater slope corresponding to higher local risk aversion), this last property captures
the Certainty Effect by, loosely speaking, requiring that local risk aversion is at its
peak when it involves a degenerate lottery. In Section 6 we show that this pattern of
indifference curves is consistent with a variety of experimental evidence on decision
making under risk.9
2.3 Representation Theorem
Before stating our representation theorem, we introduce some notation. We say that
a function V : ∆ → R represents < when p < q if and only if V (p) ≥ V (q).
Denote by U the set of continuous and strictly increasing functions v from [w, b] to
R. We endow U with the topology induced by the sup-norm. For each lottery p
and function v ∈ U, we denote by Ep (v) the expected utility of p with respect to v.
The certainty equivalent of lottery p calculated using the utility function v is thus
c(p, v) = v^{-1}(Ep(v)) ∈ [w, b].
Definition 1. Let < be a binary relation on ∆ and W a subset of U. The set W is a
Cautious Expected Utility representation of < if and only if the function V : ∆ → R,
defined by
V(p) = inf_{v ∈ W} c(p, v)   ∀p ∈ ∆,
represents <. We say that W is a Continuous Cautious Expected Utility representation if and only if V is also continuous.
We now present our main representation theorem.
8 The steepest middle slope property is formally derived in Lemma 3 of Dillenberger (2010).
9 NCI also has implications in non-static settings. Dillenberger (2010) shows that in the context of recursive and time-neutral, non-Expected Utility preferences over compound lotteries, NCI is equivalent to an intrinsic aversion to receiving partial information – a property which he termed preferences for one-shot resolution of uncertainty. Dillenberger (2010) also shows that in the context of preferences over information structures, NCI characterizes all non-Expected Utility preferences for which, when applied recursively, perfect information is always the most valuable information system.
Theorem 1. Let < be a binary relation on ∆. The following statements are equivalent:
(i) The relation < satisfies Weak Order, Continuity, Weak Monotonicity, and Negative Certainty Independence;
(ii) There exists a Continuous Cautious Expected Utility representation of <.
According to a Cautious Expected Utility representation, the DM has a set W of
possible utility functions over monetary outcomes. Each of these functions is strictly
increasing, i.e., agrees that “more money is better”. These utility functions, however,
may have different curvatures: it is as if the DM is unsure how to evaluate each
lottery. The DM then reacts to this multiplicity with caution: she evaluates each
lottery p by using the utility function that returns the lowest certainty equivalent.
That is, there are two key components in the representation: (i) the agent is unsure
about how to evaluate each lottery (having a non-degenerate set W); and (ii) she
acts conservatively and uses the most cautious criterion at hand (captured by the inf
aggregator).
As a concrete example, suppose that the DM needs to evaluate the lottery p, which
pays either $0 or $10,000, both equally likely. The DM may reasonably find it difficult
to give a precise answer, but, instead, finds it conceivable that her certainty equivalent
of p falls in the range [$3,500, $4,500], and that this interval is tight (that is, the end
points are plausible evaluations). This is the first component of the representation:
the DM has a set of plausible valuations that she considers. Nevertheless, when asked
how much she would be willing to pay in order to obtain p, she is cautious and answers
at most $3,500. This is the second component of the representation. Note that if W
contains only one element then the model reduces to standard Expected Utility. Note
also that, since each u ∈ W is strictly increasing, the model preserves monotonicity
with respect to First Order Stochastic Dominance.
An important feature of the representation above is that the DM uses the utility
function that minimizes the certainty equivalent of a lottery, instead of just minimizing its expected utility.10 The reason is that comparing certainty equivalents means
bringing the evaluation under each utility function to a unified measure, amounts
of money, where a meaningful comparison is possible. It is then easy to see how
10 This alternative model is studied in Maccheroni (2002). We discuss it in detail in Section 5.
the representation in Theorem 1 leads to the Certainty Effect: while the DM acts
with caution when evaluating general lotteries, caution does not play a role when
evaluating degenerate ones – no matter which utility function is used, the certainty
equivalent of a degenerate lottery that yields the prize x for sure is simply x.
Example 1. Let [w, b] ⊆ [0, ∞) and W = {u, v} where
u(x) = − exp(−βx), β > 0; and v(x) = x^α, α ∈ (0, 1).
That is, u (resp., v) displays constant absolute (resp., relative) risk aversion. Furthermore, if the interval [w, b] is large enough (b > (1−α)/β > w), then u and v are not
ranked in terms of risk aversion, that is, there exist p and q such that the smallest
certainty equivalent for p (resp., q) corresponds to u (resp., v). This functional form
can easily address the Common Ratio Effect.11
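
The following Python sketch (ours, not part of the paper) reproduces the calculation reported in footnote 11 for this example, using the closed-form certainty equivalents c(p, u) = −ln(−Ep(u))/β and c(p, v) = Ep(v)^{1/α}.

```python
import math

alpha, beta = 0.8, 0.0002                  # parameters from footnote 11

def ce_u(lottery):                          # u(x) = -exp(-beta * x)  (CARA)
    e = sum(prob * -math.exp(-beta * x) for x, prob in lottery)
    return -math.log(-e) / beta

def ce_v(lottery):                          # v(x) = x ** alpha       (CRRA)
    e = sum(prob * x ** alpha for x, prob in lottery)
    return e ** (1.0 / alpha)

def V(lottery):                             # Cautious Expected Utility with W = {u, v}
    return min(ce_u(lottery), ce_v(lottery))

p = [(4000, 0.8), (0, 0.2)]
q = [(4000, 0.2), (0, 0.8)]
r = [(3000, 0.25), (0, 0.75)]

print(round(V(p)))   # ~2904, attained by u: so delta_3000 (value 3000) beats p
print(round(V(q)))   # ~535, attained by v
print(round(V(r)))   # ~530, attained by v: so q beats r
```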
Example 1 shows that one could address experimental evidence related to the
Certainty Effect using a set W that includes only two utility functions. The key
feature of this example is not the specific functional forms used, but rather the fact
that there is no unique v in W which minimizes the certainty equivalents for all
lotteries. If this were the case, then only v would matter and behavior would coincide
with Expected Utility. This implies, for example, that if all utilities in W have
constant relative risk aversion (that is, vi ∈ W only if vi(x) = x^{αi} for some αi ∈ (0, 1)), then preferences will be indistinguishable from Expected Utility with a coefficient of relative risk aversion equal to 1 − min_j αj. (See Section 2.6, where we suggest some convenient parametric classes of utility functions that can be used in applications.)
The discussion above further suggests that the Cautious Expected Utility model
treats the Certainty Effect as a local property, which is not necessarily invariant to
changes in the stakes involved. To illustrate, consider again Example 1 and note
that v has a higher coefficient of (absolute or relative) risk aversion than u for all
outcomes below (1−α)/β.
11 For example, let α = 0.8 and β = 0.0002. Let p = 0.8δ4000 + 0.2δ0 , q = 0.2δ4000 + 0.8δ0 , and r = 0.25δ3000 + 0.75δ0 . Direct calculations show that V(p) = c(p, u) ≈ 2904 < 3000 = V(δ3000), but V(q) = c(q, v) ≈ 535 > 530 ≈ c(r, v) = V(r). We have δ3000 ≻ p but q ≻ r.
Therefore, when restricted to lotteries with outcomes only in [w, (1−α)/β], preferences are Expected Utility with Bernoulli index v, but they violate
Expected Utility for larger stakes. This is compatible with experimental evidence
that suggests that Allais-type behavior is mostly prominent when the stakes are high
(or, more precisely, when there is a large gap between the best and worst possible
outcome). The fact that our model can accommodate this evidence is one of its
distinctive features, which we discuss in detail in Section 6.
The interpretation of the Cautious Expected Utility representation is different
from some of the most prominent existing non-Expected Utility models. For example,
the common interpretation of the Rank Dependent Utility model of Quiggin (1982) is
that the DM knows her utility function but she distorts probabilities. By contrast, in
a Cautious Expected Utility representation the DM takes probabilities at face value,
but she is unsure of which utility function to use, and applies caution by using the
most conservative one in the set. In Section 5, we point out that not only do the two models have different interpretations, but they also entail stark differences in behavior:
the only preference relation that is compatible with both models is Expected Utility.
Lastly, we note that the use of the most conservative utility in a set is reminiscent
of the Maxmin Expected Utility of Gilboa and Schmeidler (1989) under ambiguity,
in which the DM has not one, but a set of probabilities, and evaluates acts using the
worst probability in the set. Our model can be seen as a corresponding model under
risk. This analogy with Maxmin Expected Utility will then be strengthened by our
analysis in Section 4, where we argue that both models can be derived from extending
incomplete preferences using a cautious rule.
In the next subsection we will outline the main steps in the proof of Theorem 1.
There is one notion that is worth discussing independently, since it plays a major role
in the analysis of all subsequent sections. We introduce a derived preference relation,
denoted <0 , which is the largest subrelation of the original preference < that satisfies
the Independence axiom. Formally, define <0 on ∆ by
p <0 q ⇐⇒ λp + (1 − λ)r < λq + (1 − λ)r   ∀λ ∈ (0, 1], ∀r ∈ ∆.   (1)
In the context of choice under risk, this derived relation was proposed and characterized by Cerreia-Vioglio (2009). It parallels a notion introduced in the context of
choice under ambiguity by Ghirardato et al. (2004) (see also Cerreia-Vioglio et al.
(2011a)). This binary relation, which contains the comparisons over which the DM
abides by the precepts of Expected Utility, is often interpreted as including the comparisons that the DM is confident in making. We refer to <0 as the Linear Core of
<. Note that, by definition, if the original preference relation < satisfies NCI, then
p ≮0 δx implies δx ≻ p. That is, whenever the DM is not confident to declare p better
than the certain outcome x, the original relation will rank δx strictly above p. This
intuition will be our starting point in Section 4, where we discuss the idea of Cautious
Completions. Lastly, as we will see in Section 2.5, <0 will allow us to identify the
uniqueness properties of the set W in a Cautious Expected Utility representation.
2.4 Proof Sketch of Theorem 1
In what follows we discuss the main intuition of the proof of Theorem 1; a complete
proof, which includes the many omitted details, appears in Appendix B. We focus
here only on the sufficiency of the axioms for the representation.
Step 1. Define the Linear Core of < . As we have discussed above, we introduce the
binary relation <0 on ∆ defined in (1).
Step 2. Find the set W ⊆ U that represents <0 . By Cerreia-Vioglio (2009), <0
is reflexive and transitive (but possibly incomplete), continuous, and satisfies Independence. In particular, there exists a set W of continuous functions on [w, b] that
constitutes an Expected Multi-Utility representation of <0 , that is, p <0 q if and only
if Ep (v) ≥ Eq (v) for all v ∈ W (see Dubra et al., 2004). Since < satisfies Weak
Monotonicity and NCI, <0 also satisfies Weak Monotonicity. For this reason, the set
W can be chosen to be composed only of strictly increasing functions.
Step 3. Representation of <. We show that < admits a certainty equivalent representation, i.e., there exists V : ∆ → R such that V represents < and V (δx ) = x for
all x ∈ [w, b].
Step 4. Relation between < and <0 . We note that (i) < is a completion of <0 , i.e.,
p <0 q implies p < q; and (ii) for each p ∈ ∆ and for each x ∈ [w, b], p ≮0 δx implies
δx ≻ p. The latter is an immediate implication of NCI.
Step 5. Final step. We conclude the proof by showing that we must have V (p) =
inf_{v∈W} c(p, v) for all p ∈ ∆. For each p, find x ∈ [w, b] such that p ∼ δx , which means
V (p) = V (δx ) = x. First note that we must have V (p) = x ≤ inf_{v∈W} c(p, v). If this were not the case, then we would have x > c(p, v) for some v ∈ W, which means,
by Step 2, p ≮0 δx . But by Step 4(ii) we would obtain δx ≻ p, contradicting δx ∼ p.
Second, we must have V (p) = x ≥ inf_{v∈W} c(p, v): if this were not the case, then we would have x < inf_{v∈W} c(p, v). We could then find y such that x < y < inf_{v∈W} c(p, v),
which, by Step 2, would yield p <0 δy . By Step 4 (i), we could conclude that p <
δy ≻ δx , contradicting p ∼ δx .
2.5 Uniqueness and Properties of the Set of Utilities
We now discuss the uniqueness properties of a set of utilities W in a Cautious Expected Utility representation of <. To do so, we define the set of normalized utility
functions Unor = {v ∈ U : v(w) = 0, v(b) = 1}, and, without loss of generality,
confine our attention to a normalized Cautious Expected Utility representation, that
is, we further require W ⊆ Unor . Even with this normalization, we are bound to find
uniqueness properties only ‘up to’ the closed convex hull: if two sets share the same
closed convex hull, then they must generate the same representation, as proved in the
following proposition. Denote by co (W) the closed convex hull of a set W ⊆ Unor .
Proposition 1. If W, W′ ⊆ Unor are such that co(W) = co(W′) then

inf_{v∈W} c(p, v) = inf_{v∈W′} c(p, v)   ∀p ∈ ∆.
Moreover, it is easy to see how W will in general not be unique, even up to the
closed convex hull, as we can always add redundant utility functions that will never
achieve the infimum. In particular, consider any set W in a Cautious Expected Utility
representation and add to it a function v̄ which is a continuous, strictly increasing,
and strictly convex transformation of some other function u ∈ W. The set W ∪ {v̄}
will give a Cautious Expected Utility representation of the same preference relation,
as the function v̄ will never be used in the representation.12
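
To see the redundancy concretely, here is a small numerical check (ours, with placeholder utilities): take u(x) = √x and v̄(x) = x, which is the strictly increasing, strictly convex transformation v̄ = u² of u; the certainty equivalent under v̄ is never below the one under u, so adding v̄ to W leaves the infimum unchanged.

```python
import math, random

def ce_sqrt(lottery):                        # u(x) = sqrt(x)
    return sum(p * math.sqrt(x) for x, p in lottery) ** 2

def ce_identity(lottery):                    # v_bar(x) = x = (sqrt(x))**2, a strictly
    return sum(p * x for x, p in lottery)    # convex transformation of u

random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(0.0, 100.0), random.uniform(0.0, 100.0)
    prob = random.random()
    lottery = [(x1, prob), (x2, 1.0 - prob)]
    # c(p, v_bar) >= c(p, u) for every lottery, so v_bar never attains the infimum.
    assert ce_identity(lottery) >= ce_sqrt(lottery) - 1e-9
```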
Once we remove these redundant utilities, we can identify a unique (up to the closed convex hull) set of utilities. In particular, for each preference relation that admits a Continuous Cautious Expected Utility representation, there exists a set Ŵ such that any other Cautious Expected Utility representation W of these preferences is such that co(Ŵ) ⊆ co(W). In this sense Ŵ is a ‘minimal’ set of utilities. Moreover, the set Ŵ will have a natural interpretation in our setup: it constitutes a unique
12 Since u ∈ W and c(p, u) ≤ c(p, v̄) for all p ∈ ∆, there will not be a lottery p such that inf_{v∈W∪{v̄}} c(p, v) = c(p, v̄) < inf_{v∈W} c(p, v).
(up to the closed convex hull) Expected Multi-Utility representation of the Linear
Core <0 , the derived preference relation defined in (1). In terms of uniqueness, if
two sets constitute a Continuous Cautious Expected Utility representation of < and
an Expected Multi-Utility representation of <0 , then their closed convex hull must
coincide. This is formalized in the following result.
Theorem 2. Let < be a binary relation on ∆ that satisfies Weak Order, Continuity,
Weak Monotonicity, and Negative Certainty Independence. Then there exists Ŵ ⊆ Unor such that

(i) The set Ŵ is a Continuous Cautious Expected Utility representation of <;

(ii) If W ⊆ Unor is a Cautious Expected Utility representation of <, then co(Ŵ) ⊆ co(W);

(iii) The set Ŵ is an Expected Multi-Utility representation of <0 , that is,

p <0 q ⇐⇒ Ep(v) ≥ Eq(v)   ∀v ∈ Ŵ.

Moreover, Ŵ is unique up to the closed convex hull.
2.6 Parametric Sets of Utilities and Elicitation
In applied work, it is common to specify a parametric class of utility functions and
estimate the relevant parameters. The purpose of this subsection is to suggest some
parametric classes that are compatible with Cautious Expected Utility representation.
We then remark on the issue of how to elicit the set of utilities from a finite data set.
The discussion in Section 2.3 suggests that our model coincides with Expected
Utility if we focus on a set W that includes only utilities with constant absolute (or
relative) risk aversion. More generally, if preferences are not Expected Utility and W
contains only functions from the same parametric class, then the level of risk aversion
within this class must depend on more than a single parameter. We now suggest
two examples of parsimonious families of utility functions for which risk attitude is
characterized by the values of only two parameters; this property could be useful in
empirical estimations.
The first example is the increasingly popular family of Expo-Power utility functions (Saha (1993)), which generalizes both constant absolute and constant relative
risk aversion, given by
u(x) = 1 − exp(−λx^θ), with λ ≠ 0, θ ≠ 0, and λθ > 0.
This functional form has been applied in a variety of fields, such as finance, intertemporal choice, and agricultural economics. Holt and Laury (2002) show that it provides a good fit to experimental data involving both low and high stakes.
The second example is the set of Pareto utility functions, given by
u(x) = 1 − (1 + x/γ)^{−κ}, with γ > 0 and κ > 0.
Ikefuji et al. (2012) show that a Pareto utility function has some desirable properties. If u is Pareto, then the coefficient of absolute risk aversion is −u''(x)/u'(x) = (κ + 1)/(x + γ), which is increasing in κ and decreasing in γ. Therefore, for a large enough interval [w, b], if κu > κv and γu > γv then u and v are not ranked in terms of risk aversion.
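
For completeness, the stated coefficient can be verified by a routine differentiation (our addition; the calculation is not spelled out in the paper):

```latex
u'(x)  = \frac{\kappa}{\gamma}\Bigl(1+\tfrac{x}{\gamma}\Bigr)^{-\kappa-1},
\qquad
u''(x) = -\frac{\kappa(\kappa+1)}{\gamma^{2}}\Bigl(1+\tfrac{x}{\gamma}\Bigr)^{-\kappa-2},
\qquad\text{hence}\qquad
-\frac{u''(x)}{u'(x)} = \frac{\kappa+1}{\gamma\bigl(1+\tfrac{x}{\gamma}\bigr)} = \frac{\kappa+1}{x+\gamma}.
```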
We conclude with a brief remark on the issue of elicitation. If one could observe
the certainty equivalents for all lotteries, then the whole preference relation would be
recovered and the set W identified (up to its uniqueness properties) – but this requires
an infinite number of observations. With a finite data set, one can approximate, or
partially recover, the set W as follows. Note that if a function v assigns to some
lottery p a certainty equivalent that is smaller than the one observed in the data
(i.e., c(p, v) < xp ), then v cannot belong to W. Therefore, by observing the certainty
equivalents of a finite number of lotteries, one could exclude a set of possible utility
functions and approximate the set W ‘from above.’ It is easy to see that the set
thus obtained would necessarily contain the ‘true’ one, and that as the number of
observations increases, the set will shrink to coincide with W (or, more precisely,
with a version of W up to uniqueness). Such elicitation would be significantly faster
if, as is often the case in empirical work, one is willing to assume that utility functions
come from a specific parametric class, such as the ones described above.
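
A sketch of the ‘from above’ elicitation just described (ours; the Expo-Power grid, the observations, and the numbers are placeholders, not data from the paper): a candidate utility is discarded as soon as it assigns some observed lottery a certainty equivalent below the observed one.

```python
import math
import itertools

def expo_power_ce(lottery, lam, theta):
    """Certainty equivalent under u(x) = 1 - exp(-lam * x**theta), for lam, theta > 0."""
    one_minus_eu = sum(p * math.exp(-lam * x ** theta) for x, p in lottery)
    return (-math.log(one_minus_eu) / lam) ** (1.0 / theta)

# Hypothetical observations: pairs of (lottery, observed certainty equivalent x_p).
observations = [
    ([(4000, 0.8), (0, 0.2)], 2900.0),
    ([(3000, 0.25), (0, 0.75)], 550.0),
]

# Placeholder grid of candidate (lam, theta) parameters.
grid = itertools.product([1e-4, 2e-4, 5e-4, 1e-3], [0.6, 0.8, 1.0, 1.2])

# A candidate with c(p, v) < x_p for some observation cannot belong to W,
# since then V(p) <= c(p, v) < x_p; keep only the candidates that survive.
surviving = [
    (lam, theta)
    for lam, theta in grid
    if all(expo_power_ce(lot, lam, theta) >= ce_obs for lot, ce_obs in observations)
]
print(surviving)
```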
3 Cautious Expected Utility and Risk Attitudes
In this section we explore the connection between Theorem 1 and standard definitions
of risk attitude, and characterize the comparative notion of “more risk averse than”.
Throughout this section, we mainly focus on a ‘minimal’ representation Ŵ as in Theorem 2.
Remark. If W is a Continuous Cautious Expected Utility representation of a preference relation <, we denote by Ŵ a set of utilities as identified in Theorem 2 (which is unique up to the closed convex hull). More formally, we can define a correspondence T that maps each set W that is a Continuous Cautious Expected Utility representation of some < to a class of subsets of Unor, T(W), each element of which satisfies the properties of points (i)–(iii) of Theorem 2 and is denoted by Ŵ.
3.1 Characterization of Risk Attitudes
We adopt the following standard definition of risk aversion/risk seeking.
Definition 2. We say that < is risk averse if p < q whenever q is a mean preserving
spread of p. Similarly, < is risk seeking if q < p whenever q is a mean preserving
spread of p.
Theorem 3. Let < be a binary relation that satisfies Weak Order, Continuity, Weak
Monotonicity, and Negative Certainty Independence. The following statements are
true:
(i) The relation < is risk averse if and only if each v ∈ Ŵ is concave.

(ii) The relation < is risk seeking if and only if each v ∈ Ŵ is convex.
Theorem 3 shows that the relation found under Expected Utility between the
concavity/convexity of the utility function and the risk attitude of the DM holds also
for the more general Continuous Cautious Expected Utility model – although it now
involves all utilities in the set Ŵ. In turn, this shows that our model is compatible
with many types of risk attitudes. For example, despite the presence of the Certainty
Effect, when all utilities are convex the DM would be risk seeking.
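
As a small numerical illustration of point (i) of Theorem 3 (ours; the two concave utilities are placeholders), a mean preserving spread of a sure prize receives a weakly lower value when every utility in the set is concave:

```python
import math

# Two placeholder concave utilities with their inverses.
utilities = [
    (math.sqrt, lambda e: e ** 2),
    (lambda x: math.log(1.0 + x), lambda e: math.exp(e) - 1.0),
]

def V(lottery):
    return min(inv(sum(p * u(x) for x, p in lottery)) for u, inv in utilities)

p = [(100.0, 1.0)]                  # $100 for sure
q = [(50.0, 0.5), (150.0, 0.5)]     # a mean preserving spread of p
assert V(q) <= V(p)                 # risk aversion: the spread is not strictly preferred
print(V(p), V(q))
```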
3.2 Comparative Risk Aversion
We now proceed to compare the risk attitudes of two individuals.
Definition 3. Let <1 and <2 be two binary relations on ∆. We say that <1 is more
risk averse than <2 if and only if for each p ∈ ∆ and for each x ∈ [w, b] ,
p <1 δx =⇒ p <2 δx .
Theorem 4. Let <1 and <2 be two binary relations with Continuous Cautious Expected Utility representations, W1 and W2 , respectively. The following statements are
equivalent:
(i) <1 is more risk averse than <2 ;
(ii) Both W1 ∪W2 and W1 are Continuous Cautious Expected Utility representations
of <1 ;
(iii) co(\widehat{W1 ∪ W2}) = co(Ŵ1).
Theorem 4 states that DM1 is more risk averse than DM2 if and only if all the
utilities in W2 are redundant when added to W1 .13,14 This result compounds two
conceptually different channels that in a Cautious Expected Utility representation
lead one decision maker to be more risk averse than another. The first channel
is related to the curvatures of the functions in each set of utilities. For example,
if each v ∈ W2 is a strictly increasing and strictly convex transformation of some
v̂ ∈ W1 , then DM2 assigns a strictly higher certainty equivalent than DM1 to any
nondegenerate lottery p ∈ ∆ (while the certain outcomes are, by construction, treated
similarly in both). In particular, as we discussed in Section 2.5, no member of W2
will be used in the representation corresponding to the union of the two sets. The
second channel corresponds to comparing the size of the two sets of utilities. Indeed,
if W2 ⊆ W1 then for each p ∈ ∆ the certainty equivalent under W2 is weakly greater
than that under W1 , implying that <1 is more risk averse than <2 .
13 We thank Todd Sarver for suggesting point (iii) in Theorem 4.
14 Note that if both <1 and <2 are Expected Utility preferences, then there are v1 and v2 such that {v1} = W1 = Ŵ1, {v2} = W2 = Ŵ2, and points (ii) and (iii) in Theorem 4 are equivalent to v1 being an increasing concave transformation of v2.
We can distinguish between these two different channels, and characterize the
behavioral underpinning of the second one. To do so, we focus on the notion of
Linear Core and its representation as in Theorem 2.
Definition 4. Let <1 and <2 be two binary relations on ∆ with corresponding Linear
Cores <01 and <02 . We say that <1 is more indecisive than <2 if and only if for each
p, q ∈ ∆
p <01 q =⇒ p <02 q.
Since we interpret the derived binary relation <0 as capturing the comparisons
that the DM is confident in making, Definition 4 implies that DM1 is more indecisive
than DM2 if whenever DM1 can confidently declare p weakly better than q, so does
DM2. The following result characterizes this comparative relation and links it to the
comparative notion of risk aversion.
Proposition 2. Let <1 and <2 be two binary relations that satisfy Weak Order,
Continuity, Weak Monotonicity, and Negative Certainty Independence. The following
statements are true:
(i) <1 is more indecisive than <2 if and only if co(Ŵ2) ⊆ co(Ŵ1);
(ii) If <1 is more indecisive than <2 , then <1 is more risk averse than <2 .
Proposition 2 establishes the relationship between indecisiveness, which is akin
to incompleteness of the Linear Core, and risk aversion. The more indecisive the
DM is, the more possible evaluations of each lottery she considers; and since she is
cautious and uses only the lowest of such evaluations, a more indecisive DM has a
lower certainty equivalent for each lottery and thus is also more risk averse.
4 Cautious Completions of Incomplete Preferences
While our analysis thus far has focused on the characterization of a complete preference relation that satisfies NCI (in addition to the other basic axioms), we will
now show that this analysis is deeply related to that of a ‘cautious’ completion of an
incomplete preference relation over lotteries.
Consider a DM who has an incomplete preference relation over the set of lotteries. We can see this relation as representing the comparisons that the DM feels
comfortable making. There might be occasions, however, in which the DM is asked
to choose among lotteries she cannot compare, and to do this she has to complete
her preferences. Suppose that the DM wants to do so applying caution, i.e., when in
doubt between a sure outcome and a lottery, she opts for the sure outcome. Which
preferences will she obtain after the completion?
This analysis parallels the one of Gilboa et al. (2010), who consider an environment
with ambiguity instead of risk, although with one minor formal difference: while in
Gilboa et al. (2010) both the incomplete relation and its completion are a primitive
of the analysis, in our case the primitive is simply the incomplete preference relation
over lotteries, and we study the properties of all possible completions of this kind.15
Since we analyze an incomplete preference relation, the analysis in this section
requires a slightly stronger notion of continuity, called Sequential Continuity.16
Axiom 5 (Sequential Continuity). Let {pn }n∈N and {qn }n∈N be two sequences in ∆.
If pn → p, qn → q, and pn < qn for all n ∈ N then p < q.
In the rest of the section, we assume that <0 is a reflexive and transitive (though
potentially incomplete) binary relation over ∆, which satisfies Sequential Continuity,
Weak Monotonicity, and Independence. We look for a Cautious Completion of <0 , as
formalized in the following definition.
Definition 5. Let <0 be a binary relation on ∆. We say that the relation <̂ is a Cautious Completion of <0 if and only if the following hold:

1. The relation <̂ satisfies Weak Order, Weak Monotonicity, and for each p ∈ ∆ there exists x ∈ [w, b] such that p ∼̂ δx ;

2. For each p, q ∈ ∆, if p <0 q then p <̂ q;

3. For each p ∈ ∆ and x ∈ [w, b], if p ≮0 δx then δx ≻̂ p.

Point 1 imposes a few minimal requirements of rationality on <̂, most notably the existence of a certainty equivalent for each lottery p. Weak Monotonicity will imply
15 Riella (2013) develops a more general treatment that encompasses the result in this section and the one in Gilboa et al. (2010); he shows that a combined model could be obtained starting from a preference relation over acts that admits a Multi-Prior Expected Multi-Utility representation, as in Ok et al. (2012) and Galaabaatar and Karni (2013), and constructing a Cautious Completion.
16 This notion coincides with our Continuity axiom if the binary relation is complete and transitive.
that this certainty equivalent is unique. In point 2, we assume that the relation
<̂ extends <0 . Finally, point 3 requires that such a completion of <0 is done with caution.
Theorem 5. If <0 is a reflexive and transitive binary relation on ∆ that satisfies Sequential Continuity, Weak Monotonicity, and Independence, then <0 admits a unique
Cautious Completion <̂, and there exists a set W ⊆ U such that for all p, q ∈ ∆

p <0 q ⇐⇒ Ep(v) ≥ Eq(v)   ∀v ∈ W

and

p <̂ q ⇐⇒ inf_{v∈W} c(p, v) ≥ inf_{v∈W} c(q, v).
Moreover, W is unique up to the closed convex hull.
Theorem 5 shows that, given a binary relation <0 which satisfies all the tenets
of Expected Utility except completeness, not only is a Cautious Completion <̂ always possible, but it is also unique. Most importantly, such a completion <̂ admits a
Cautious Expected Utility representation, using the same set of utilities as in the Expected Multi-Utility Representation of the original preference <0 . This result shows
that our model could also represent the behavior of a subject who might be unable to
compare some of the available options and, when asked to extend her ranking, does
so by being cautious. Together with Theorem 1, Theorem 5 shows that this behavior is indistinguishable from that of a subject who starts with a complete preference
relation and satisfies Axioms 1–4. In turn, this shows that incomplete preferences
followed by a Cautious Completion could generate the Certainty Effect.
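
A minimal sketch (ours) of the construction behind Theorem 5: starting from a set W of utilities that gives an Expected Multi-Utility representation of <0 (here two placeholder utilities), the completion ranks lotteries by the lowest certainty equivalent, and point 3 of Definition 5 can be checked on examples — whenever Ep(v) < v(x) for some v ∈ W, so that p ≮0 δx, the completion puts δx strictly above p.

```python
import math

# Placeholder multi-utility set W representing the incomplete relation.
W = [
    (math.sqrt, lambda e: e ** 2),
    (lambda x: math.log(1.0 + x), lambda e: math.exp(e) - 1.0),
]

def eu(lottery, u):
    return sum(p * u(x) for x, p in lottery)

def incomplete_geq(p, q):
    """p is ranked weakly above q by the incomplete relation iff E_p(v) >= E_q(v) for all v in W."""
    return all(eu(p, u) >= eu(q, u) for u, _ in W)

def completion_value(p):
    """The Cautious Completion ranks lotteries by the lowest certainty equivalent over W."""
    return min(inv(eu(p, u)) for u, inv in W)

p = [(0.0, 0.5), (10_000.0, 0.5)]
x = 200.0
delta_x = [(x, 1.0)]

if not incomplete_geq(p, delta_x):   # the incomplete relation does not rank p above delta_x
    # Point 3 of Definition 5: the completion then puts the sure outcome strictly above p.
    assert completion_value(delta_x) > completion_value(p)
    print(completion_value(delta_x), ">", completion_value(p))
```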
Finally, Theorem 5 strengthens the link between the Cautious Expected Utility
model and the Maxmin Expected Utility model of Gilboa and Schmeidler (1989).
Gilboa et al. (2010) show that the latter could be derived as a completion of an
incomplete preference relation over Anscombe-Aumann acts that satisfies the same
normative assumptions as <0 (adapted to their domain), by applying a form of caution
according to which, when in doubt, the DM chooses a constant act. Similarly, here we
derive the Cautious Expected Utility model by extending an incomplete preference
over lotteries using a form of caution according to which, when in doubt, the DM
chooses a risk-free lottery.17
17 The condition in Gilboa et al. (2010) is termed Default to Certainty; Point 3 of Definition 5 is the translation of this condition to the context of choice under risk.
5 Related Literature
Dillenberger (2010) introduces the NCI axiom and discusses its implication in dynamic
settings. Under some specific assumptions on preferences over two-stage lotteries, he
shows that NCI is a necessary and sufficient condition for a property called “preference
for one-shot resolution of uncertainty”. Dillenberger, however, does not provide a
utility representation as in Theorem 1. Dillenberger and Erol (2013) provide an
example of a continuous, monotone, and complete preference relation that satisfies
NCI but not a property called Betweenness (see below), suggesting that indeed these
are different properties. The present paper provides a complete characterization of
all binary relations that satisfy NCI (in addition to the other three basic postulates)
and clarifies the precise relationship with Betweenness.
Cerreia-Vioglio (2009) characterizes the class of continuous and complete preference relations that satisfy Convexity, that is, p ∼ q implies λp + (1 − λ) q < q for
all λ ∈ (0, 1). Loosely speaking, Cerreia-Vioglio shows that there exists a set V of
normalized Bernoulli utility functions, and a real function U on R × V, such that
preferences are represented by
V(p) = inf_{v∈V} U(Ep(v), v).
Using this representation, Cerreia-Vioglio interprets Convexity as a behavioral property that captures a preference for hedging; such preferences may arise in the face
of uncertainty about the value of outcomes, future tastes, and/or the degree of risk
aversion. He suggests the choice of the minimal certainty equivalent as a criterion
to resolve uncertainty about risk attitudes and as a completion procedure. (See also
Cerreia-Vioglio et al. (2011b) for a risk measurement perspective.) As we discussed
in Section 2, NCI implies Convexity, which means that the preferences we study in
this paper are a subset of those studied by Cerreia-Vioglio. Indeed, this is apparent also from Theorem 1: our preferences correspond to the special case in which
U(Ep(v), v) = v^{-1}(Ep(v)) = c(p, v). Furthermore, our representation theorem establishes that NCI is the exact strengthening of Convexity needed to characterize the
minimum certainty equivalent criterion.
A popular generalization of Expected Utility is the Rank Dependent Utility (RDU)
model of Quiggin (1982), also used within Cumulative Prospect Theory (Tversky
and Kahneman, 1992). According to this model, individuals weight probability in a
nonlinear way. Specifically, if we order the prizes in the support of the lottery p, with
x1 < x2 < ... < xn , then the functional form for RDU is:
V(p) = u(xn) f(p(xn)) + Σ_{i=1}^{n−1} u(xi) [ f( Σ_{j=i}^{n} p(xj) ) − f( Σ_{j=i+1}^{n} p(xj) ) ],
where f : [0, 1] → [0, 1] is strictly increasing and onto, and u : [w, b] → R is increasing.
If f (p) = p then RDU reduces to Expected Utility. If f is convex, then larger weight
is given to inferior outcomes; this corresponds to a pessimistic probability distortion suitable to explain the Allais paradoxes. Apart from the different interpretation
of RDU compared to our Cautious Expected Utility representation, as discussed in
Section 2.3, the two models have completely different behavioral implications: Dillenberger (2010) demonstrates that the only RDU preference relations that satisfy NCI
are Expected Utility. That is, RDU is generically incompatible with NCI.18
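
For concreteness, a direct Python transcription of the RDU functional form above (ours; the convex distortion and the linear utility below are placeholders):

```python
def rdu_value(lottery, u, f):
    """Rank Dependent Utility of a finite lottery given as (prize, probability) pairs."""
    outcomes = sorted(lottery)                 # order the prizes x_1 < x_2 < ... < x_n
    xs = [x for x, _ in outcomes]
    ps = [p for _, p in outcomes]
    total = 0.0
    for i in range(len(xs)):
        tail_i = sum(ps[i:])                   # sum of p(x_j) for j >= i
        tail_next = sum(ps[i + 1:])            # sum of p(x_j) for j >= i + 1 (0 at the top prize)
        total += u(xs[i]) * (f(tail_i) - f(tail_next))
    return total

f = lambda q: q ** 2        # placeholder convex (pessimistic) distortion, f(0) = 0, f(1) = 1
u = lambda x: x             # placeholder linear utility

p = [(0, 0.2), (4000, 0.8)]
print(rdu_value(p, u, f))               # 0.8**2 * 4000 = 2560 < 3000 = RDU value of delta_3000
print(rdu_value(p, u, lambda q: q))     # with f(q) = q this is the expected value, 3200
```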
Another popular and broad class of continuous and monotone preferences is the
Betweenness class introduced by Dekel (1986) and Chew (1989). The central axiom in
this class is a weakening of the Independence axiom which implies neutrality toward
randomization among equally-good lotteries.19 That is, if < satisfies Betweenness,
then its indifference curves in the Marschak-Machina triangle are linear, but not
necessarily parallel as under Expected Utility. One of the most prominent examples of
preference relations that satisfy Betweenness is Gul (1991)’s model of Disappointment
Aversion (denoted DA in Figure 2). For some parameter β ∈ (−1, ∞) and a strictly
increasing function u : [w, b] → R, the disappointment aversion value of a simple
lottery p is the unique v that solves
v = [ Σ_{xi : u(xi) ≥ v} p(xi) u(xi) + (1 + β) Σ_{xi : u(xi) < v} p(xi) u(xi) ] / [ 1 + β Σ_{xi : u(xi) < v} p(xi) ].
In most applications, attention is confined to the case where β > 0, which corresponds to “Disappointment Aversion” (the case of β ∈ (−1, 0) is referred to as “Elation Seeking”, and the model reduces to Expected Utility when β = 0).
18 Bell and Fishburn (2003) showed that Expected Utility is the only RDU with the property that for each binary lottery p and x ∈ [w, b], p ∼ δx implies αp + (1 − α)δx ∼ δx . This property is implied by NCI (see Section 2.2). Geometrically, it corresponds to the linear indifference curve through the origin in any Marschak-Machina triangle (Figure 1 in Section 2.2).
19 More precisely, the Betweenness axiom states that for each p, q ∈ ∆ and λ ∈ (0, 1), p ≻ q (resp., p ∼ q) implies p ≻ λp + (1 − λ)q ≻ q (resp., p ∼ λp + (1 − λ)q ∼ q).
[Figure 2: Cautious Expected Utility and other models — a diagram relating Expected Utility, Cautious EU, Convex preferences, Rank Dependent Utility, Betweenness, and Disappointment Aversion (DA) with β > 0 and with β < 0.]
Artstein-Avidan
and Dillenberger (2011) show that Gul’s preferences satisfy NCI if and only if β ≥ 0.
Combining with the example given in Dillenberger and Erol (2013) mentioned above,
we conclude that preferences in our class neither nest, nor are nested in, those that
satisfy Betweenness.
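
As a sketch (ours) of how the disappointment aversion value defined above can be computed: the defining equation is a fixed point in v, which for finite lotteries can be found by simple iteration (the linear utility and the lottery are placeholders).

```python
def da_value(lottery, u, beta, iterations=200):
    """Disappointment aversion value: iterate the defining fixed-point equation."""
    v = sum(p * u(x) for x, p in lottery)      # start from the expected utility
    for _ in range(iterations):
        elation = sum(p * u(x) for x, p in lottery if u(x) >= v)
        disappointment = sum(p * u(x) for x, p in lottery if u(x) < v)
        prob_disappointment = sum(p for x, p in lottery if u(x) < v)
        v = (elation + (1.0 + beta) * disappointment) / (1.0 + beta * prob_disappointment)
    return v

u = lambda x: x                                 # placeholder linear utility
p = [(0, 0.2), (4000, 0.8)]
print(da_value(p, u, beta=1.0))    # about 2666.7: below the expected value when beta > 0
print(da_value(p, u, beta=0.0))    # beta = 0 reduces to expected utility: 3200.0
```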
Figure 2 summarizes our discussion thus far about the relationship between the
various models.20
Maccheroni (2002) (see also Chatterjee and Krishna, 2011) derives a utility function over lotteries of the following form: there exists a set T of utilities over outcomes,
such that the value of every lottery p is the lowest Expected Utility, calculated with
20 Chew and Epstein (1989) show that there is no intersection between RDU and Betweenness other than Expected Utility (see also Bell and Fishburn, 2003). Whether or not RDU satisfies Convexity depends on the curvature of the distortion function f; in particular, concave f implies Convexity. In addition to Disappointment Aversion with negative β, an example of preferences that satisfy Betweenness but do not satisfy NCI is Chew (1983)’s model of Weighted Utility.
respect to members of T , that is,
V(p) = min_{v∈T} Ep(v).
Maccheroni’s interpretation of this functional form, according to which “the most
pessimist of her selves gets the upper hand over the others” is closely related to our
idea that “the DM acts conservatively and uses the most cautious criterion at hand.”
In addition, both models satisfy Convexity. Despite these similarities, the two models
are very different. First, Maccheroni’s model cannot (and it was not meant to) address
the Certainty Effect: since certainty equivalents are not used, degenerate lotteries also
have multiple evaluations (see Section 2.3). Second, one of Maccheroni (2002)’s key
axioms is Best Outcome Independence, which states that for each p, q ∈ ∆ and each
λ ∈ (0, 1), p ≻ q if and only if λp + (1 − λ)δb ≻ λq + (1 − λ)δb . This axiom is
conceptually and behaviorally distinct from NCI.
Schmidt (1998) develops a model in which the value of any nondegenerate lottery
p is Ep (u), whereas the value of the degenerate lottery δx is v(x). The Certainty
Effect is captured by requiring v(x) > u(x) for all x. Schmidt thus captures the
Certainty Effect while maintaining Expected Utility if no certain outcomes are involved. The key difference with Cautious Expected Utility is that Schmidt’s model
violates both Continuity and Monotonicity, while in this paper we confine our attention to preferences that satisfy both of these basic properties. In addition, his
model also violates NCI: For example, take u (x) = x and v (x) = 2x, and note
that V (δ3 ) = 6 > 4 = V (δ2 ), but V (δ2 ) = 4 > 2.5 = V (0.5δ3 + 0.5δ2 ) .21 Other
discontinuous specifications of the Certainty Effect include Gilboa (1988) and Jaffray
(1988). Both are models which may be dubbed ‘expected utility with a security level’.
Roughly speaking, the security level of a lottery is a function of the worst outcome in
its support. These models are also derived from variants of the Independence axiom,
but will generically violate NCI.
Dean and Ortoleva (2012a) present a model which, when restricted to preferences
over lotteries, generalizes pessimistic RDU. In their model, the DM has a single utility
function and a set of pessimistic probability distortions; she then evaluates each
lottery using the most pessimistic of these distortions. This property is derived by an
axiom, Hedging, that captures the intuition of preference for hedging of Schmeidler
21 The statement in Dillenberger and Erol (2013) that Schmidt’s model satisfies NCI is incorrect.
(1989), but applies it to preferences over lotteries. Dean and Ortoleva (2012a) show
how this axiom captures the behavior in the Allais paradoxes. The exact relation
between NCI and Hedging remains an open question.
Machina (1982) studies a model with minimal restrictions imposed on preferences
apart from requiring them to be smooth (in the sense of Fréchet differentiability).
One of the main behavioral assumptions proposed by Machina is Hypothesis II, which
implies that indifference curves in the Marschak-Machina triangle Fan Out, that is,
they become steeper as one moves in the north-west direction. The steepest middle
slope property (Section 2.2) implies that our model can accommodate Fanning Out in
the lower-right part of the triangle (from where most evidence on Allais-type behavior
had come), while global Fanning Out is ruled out.
6 Experimental Evidence
Over the last decades, a large amount of experimental evidence has documented not
only the existence of violations of Expected Utility, as in the Allais paradoxes, but also
other regularities of preferences over lotteries. Based on the comprehensive surveys
of Camerer (1995) and Starmer (2000), the following could be considered the three
most established stylized empirical findings:22
1. Indifference curves in the Marschak-Machina triangle exhibit Mixed Fanning:
indifference curves become first steeper (Fanning Out) and then flatter (Fanning
In) as we move towards the north-west direction;23
2. Violations of Expected Utility are much less frequent when all options are nondegenerate lotteries and have similar support, i.e., inside the triangle;24
3. Indifference curves are typically nonlinear, meaning that Betweenness is often
violated.25
22 The following are documented for lotteries involving only positive outcomes. As is well known, behavior may be very different when losses are involved (Camerer, 1995).
23 Chew and Waller (1986); Camerer (1989); Conlisk (1989); Starmer and Sugden (1989); Battalio et al. (1990); Prelec (1990); Sopher and Gigliotti (1993); Wu (1994).
24 Conlisk (1989); Camerer (1992); Harless (1992); Sopher and Gigliotti (1993); Harless and Camerer (1994); Andreoni and Sprenger (2012).
25 Chew and Waller (1986); Bernasconi (1994); Camerer and Ho (1994); Prelec (1990).
To these, Camerer (1995) and more recent experimental studies add the following
robust finding:
4. Allais-type behavior is significantly less frequent when stakes are small rather
than large.26
These stylized facts, taken together, pose difficulties for most models of non-Expected Utility, including those discussed in Section 5. In the discussion below,
we confine our attention to the two most popular alternatives to Expected Utility,
namely Rank Dependent Utility (RDU) and Betweenness.
RDU is compatible with Mixed Fanning in the interior of the triangle (fact #1),27
with the vast reduction of violations of Expected Utility inside the triangle (fact #2),
and, as is evident from Figure 2 of Section 5, with non-Betweenness (fact #3). The
compatibility of RDU with facts 1-3 has led some authors to consider it the most
empirically supported of existing generalizations of Expected Utility. However, as its
name suggests, one of the key defining features of RDU is that the only thing that
matters for probability distortion is the rank of an outcome within the support of a
lottery, not its size: thus the presence of Allais-type behavior should be fully independent of the stakes. This is in contrast with the evidence that Allais-type behavior
tends to disappear as stakes become lower (fact #4).28 In addition, there is evidence
of frequent violations of RDU’s main behavioral underpinning, Comonotonic/Ordinal
Independence.29 Finally, empirical works that have focused on RDU and estimated
the shape of the probability distortion function have found strong empirical support for an S-shape (Wu and Gonzalez, 1996). However, RDU with an
S-shaped probability distortion has further behavioral implications that are rejected
by numerous studies.30
26 Conlisk (1989); Camerer (1989); Burke et al. (1996); Fan (2002); Huck and Müller (2012); Agranov and Ortoleva (2013).
27 According to RDU, indifference curves along the hypotenuse of the triangle should be parallel to each other – thus it is not compatible with Mixed Fanning in a strict sense. At the same time, depending on the parameters of the model, RDU can generate indifference curves with the Mixed Fanning property in the interior of the triangle.
28 RDU implies that if we detect an Allais-type violation of Expected Utility in some range of prizes, e.g., with x1 < x2 < x3, then similar violations of Expected Utility can be produced in any range of prizes. That is, for any y1 < y3 there exist y2 ∈ (y1, y3) and a, b ∈ (0, 1) such that δy2 ≻ aδy3 + (1 − a)δy1 but bδy2 + (1 − b)δy1 ≺ abδy3 + (1 − ab)δy1.
29 Wu (1994); Wakker et al. (1994).
30 Battalio et al. (1990); Harless and Camerer (1994); Starmer and Sugden (1989); Andreoni and Sprenger (2012).
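To illustrate the stake-invariance point made in the text (and, in a special case, the claim in footnote 28), the following minimal sketch evaluates the common-ratio choices under RDU; the linear utility and the Tversky and Kahneman (1992) weighting function with γ = 0.61 are illustrative assumptions of ours, not estimates from this paper.

```python
def tk_weight(p, gamma=0.61):
    # Inverse-S probability weighting of Tversky and Kahneman (1992);
    # the parameter value is an illustrative choice.
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def rdu(p, x, u=lambda z: z):
    # RDU value of the lottery "x with probability p, 0 otherwise" (with u(0) = 0):
    # the decision weight depends only on p, i.e., on the rank of the outcome.
    return tk_weight(p) * u(x)

for scale in (1000, 1):
    prefers_safe = rdu(1.0, 3 * scale) > rdu(0.8, 4 * scale)      # A vs B
    prefers_risky = rdu(0.2, 4 * scale) > rdu(0.25, 3 * scale)    # D vs C
    print(scale, prefers_safe, prefers_risky)
# With a linear u every comparison scales with the stakes, so the Allais-type
# pattern (True, True) is predicted at large and small stakes alike -- at odds
# with fact #4.
```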
Models based on Betweenness may be consistent with the less frequent presence of Allais-type behavior at smaller stakes (fact #4). They can also exhibit Mixed
Fanning (fact #1), with the most prominent example being Gul’s model of Disappointment Aversion. The fact that the Betweenness axiom is often violated (fact
#3), however, poses the greatest challenge to the empirical validity of this class. The
linearity of indifference curves also means that fact #2 is violated: behavior should
be invariant to translation of the corresponding prospects into the interior of the
triangle.
By contrast, the Cautious Expected Utility model is compatible with all four
stylized facts above. First, the steepest middle slope property (Section 2.2) implies
that the model is compatible with Mixed Fanning (fact #1) and rules out global
Fanning Out. It is also compatible with fact #2 since indifference curves could be
linear and parallel to each other as we move inside the triangle, but also convex –
thus allowing for non-Expected Utility behavior – as we approach the boundaries.
As we mentioned in Section 5, Cautious Expected Utility does not imply, and is not implied by, Betweenness, and hence is consistent with fact #3.
But perhaps the most distinctive feature of the Cautious Expected Utility model
is its compatibility with the presence of Allais-type behavior with large stakes but not
small ones (fact #4): this would be the case if, for example, all utility functions in the
representation agree on an initial smaller range and then start disagreeing as stakes
become larger; or if, as in Example 1, one of the utility functions in W is the most
risk averse for a range of outcomes below a threshold. More generally, this property,
to some degree, will be implied whenever the set W is finite.
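To make this mechanism concrete, the following minimal sketch (ours, not the paper’s Example 1) evaluates the common-ratio choices with a two-element set W: one (normalized) linear utility and one illustrative piecewise-linear utility that yields the same certainty equivalents on small outcomes but flattens out at large ones. The knot values are assumptions chosen purely for illustration.

```python
import numpy as np

# Each utility is a strictly increasing piecewise-linear function given by its knots.
# v2 rises slowly up to 1000, steeply up to 3000, and flattens between 3000 and 4000.
UTILS = {
    "v1": (np.array([0.0, 4000.0]), np.array([0.0, 1.0])),
    "v2": (np.array([0.0, 1000.0, 3000.0, 4000.0]),
           np.array([0.0, 0.10, 0.85, 1.00])),
}

def cert_equiv(lottery, knots):
    # c(p, v) = v^{-1}(E_p[v]) for a finite-support lottery [(prob, outcome), ...].
    xs, vs = knots
    eu = sum(prob * np.interp(x, xs, vs) for prob, x in lottery)
    return float(np.interp(eu, vs, xs))      # invert the increasing function v

def V(lottery):
    # Cautious Expected Utility value: the smallest certainty equivalent over W.
    return min(cert_equiv(lottery, knots) for knots in UTILS.values())

def common_ratio(scale):
    A = [(1.0, 3 * scale)]
    B = [(0.8, 4 * scale), (0.2, 0.0)]
    C = [(0.25, 3 * scale), (0.75, 0.0)]
    D = [(0.2, 4 * scale), (0.8, 0.0)]
    return V(A) > V(B), V(D) > V(C)

print(common_ratio(1000))  # (True, True): the Allais-type reversal at large stakes
print(common_ratio(1))     # (False, True): the risky option is chosen in both
                           # problems at small stakes, consistent with Expected Utility
```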
This leads us to discuss a more general point. Camerer (1995), Tversky and
Kahneman (1992) and Wakker (2010) all mention how the RDU model is “unlikely
to be accurate in detail,” mostly for its full separability of probability weights and
outcomes. At the same time, they were skeptical about possible generalizations, as
the benefit of achieving a better fit may be outweighed by the loss of parsimony
and predictive power. The Cautious Expected Utility model takes a different route:
instead of generalizing RDU, it suggests a different approach to capture violations of
Expected Utility based on multiple utilities rather than probability distortions. While maintaining tractability, this different approach can, as we have seen above, accommodate the most prominently observed behavioral patterns.
We conclude by noting that the Cautious Expected Utility model has additional behavioral implications which may or may not find empirical support, and which have not yet been subject to similar scrutiny. For example, while consistent with many of the findings on the Certainty Effect, to our knowledge no direct tests of NCI have been conducted thus far. Indeed, the simplicity of the axiom should make such direct tests easy to implement. In addition, as we have already discussed, our model implies Convexity of preferences, a property that has been tested experimentally, albeit less extensively. The existing evidence is mixed: while the experimental papers that document violations of Betweenness find deviations in both directions (that is, either preference for or aversion to mixing), both Sopher and Narramore (2000) and Dwenger et al. (2013) find explicit evidence in favor of Convexity.
7 Concluding Remarks
This paper characterizes a new class of preferences over lotteries that capture the Certainty Effect while satisfying very basic rationality postulates. We show that this
type of violation of Expected Utility, which is one of the most prominently observed
behavioral patterns, can be interpreted as reflecting cautious behavior in the evaluation of lotteries. We also demonstrate that the Certainty Effect can be generated by
a Cautious Completion of incomplete preferences.
We conclude the paper by assessing our model in light of the goals it aimed to
achieve and its relationship with the existing literature. There are three dimensions
in which models are typically assessed and compared (the relative merits of which
are often debated). First, in terms of their underlying assumptions: models derived
from more transparent and well-grounded assumptions are often preferred. Second,
in terms of implications: better empirical performance is typically valued. Third, and
more loosely, in terms of their plausibility: does the model provide a sound story?
In terms of assumptions, our model imposes, together with three very basic postulates, only a single axiom, NCI, that is designed precisely to capture the Certainty
Effect.31 Thus, the model is constructed to study the consequences of the Certainty
Effect and basic rationality alone. This is in contrast with most other prominent
models that, while consistent with Allais-type behavior, are derived either by imposing additional properties that are not directly related to the Certainty Effect (e.g., Comonotonic/Ordinal Independence of RDU, or Betweenness) or by violating one or more of the basic assumptions (e.g., Monotonicity or Continuity).
31 As we have mentioned in Section 2.2, NCI can be thought of as a natural extension of the definition of the Certainty Effect in Kahneman and Tversky (1979), which permits the same type of violations but applies only to binary lotteries.
In terms of testable implications, in Section 6 we show that our model is consistent
with many well-established empirical findings on preferences under risk. In addition,
it allows the Certainty Effect to be substantially less prominent when stakes are
small rather than large – a robust finding in the experimental literature that is hard
to reconcile with most popular alternative models. We believe this shows how the different way of thinking about the Certainty Effect and other violations of Expected Utility suggested by our model could be used to organize, and reinterpret, a broad class of existing empirical evidence.
Lastly, in terms of interpretation, our model differs from prominent alternatives as
it involves a decision maker who does not distort probabilities, but rather is unsure
about how to evaluate lotteries and applies a criterion of caution. Whether such an interpretation is to be considered more plausible is of course a subjective matter – but it offers a novel way of reading existing evidence and its implications.
Appendix A: Preliminary Results
We begin by proving some preliminary results that will be useful for the proofs of
the main results in the text. In the sequel, we denote by C ([w, b]) the set of all real
valued continuous functions on [w, b]. Unless otherwise specified, we endow C ([w, b])
with the topology induced by the supnorm. We denote by ∆ = ∆ ([w, b]) the set of
all Borel probability measures endowed with the topology of weak convergence. We
denote by ∆0 the subset of ∆ which contains only the elements with finite support.
Since [w, b] is closed and bounded, ∆ is compact with respect to this topology and ∆0
is dense in ∆. Given a binary relation < on ∆, we define an auxiliary binary relation
<0 on ∆ by
p <0 q ⇐⇒ λp + (1 − λ) r < λq + (1 − λ) r   ∀λ ∈ (0, 1] , ∀r ∈ ∆.
Lemma 1. Let < be a binary relation on ∆ that satisfies Weak Order. The following
statements are true:
1. The relation < satisfies Negative Certainty Independence if and only if for each
p ∈ ∆ and for each x ∈ [w, b]
p < δx =⇒ p <0 δx .
(Equivalently, p 6<0 δx =⇒ δx ≻ p.)
2. If < also satisfies Negative Certainty Independence then < satisfies Weak Monotonicity if and only if for each x, y ∈ [w, b]
x ≥ y ⇐⇒ δx <0 δy ,
that is, <0 satisfies Weak Monotonicity.
Proof. It follows from the definition of <0 .
We define
Vin = {v ∈ C ([w, b]) : v is increasing} ,
Vinco = {v ∈ C ([w, b]) : v is increasing and concave} ,
U = Vs−in = {v ∈ C ([w, b]) : v is strictly increasing} ,
Unor = {v ∈ C ([w, b]) : v (b) − 1 = 0 = v (w)} ∩ Vs−in .
Consider a binary relation <∗ on ∆ such that
p <∗ q ⇐⇒ Ep (v) ≥ Eq (v)   ∀v ∈ W   (2)
where W is a subset of C ([w, b]). Define Wmax as the set of all functions v ∈ C ([w, b])
such that p <∗ q implies Ep (v) ≥ Eq (v). Define also Wmax − nor as the set of all
functions v ∈ Unor such that p <∗ q implies Ep (v) ≥ Eq (v). Clearly, we have that
Wmax − nor = Wmax ∩ Unor and Wmax − nor , W ⊆ Wmax .
Proposition 3. Let <∗ be a binary relation represented as in (2) and such that x ≥ y
if and only if δx <∗ δy . The following statements are true:
1. Wmax and Wmax − nor are convex and Wmax is closed;
2. ∅ ≠ Wmax − nor ;
3. Wmax ⊆ Vin , ∅ ≠ Wmax ∩ Vs−in , and cl (Wmax ∩ Vs−in ) = Wmax ;
4. p <∗ q if and only if Ep (v) ≥ Eq (v) for each v ∈ Wmax − nor ;
5. If W is a convex subset of Unor that satisfies (2) then cl (W) = cl (Wmax − nor ).
Proof. 1. Consider v1 , v2 ∈ Wmax − nor (resp., v1 , v2 ∈ Wmax ) and λ ∈ (0, 1). Since
both functions are continuous, strictly increasing, and normalized (resp., continuous),
it follows that λv1 +(1 − λ) v2 is continuous, strictly increasing, and normalized (resp.,
continuous). Since v1 , v2 ∈ Wmax − nor (resp., v1 , v2 ∈ Wmax ), if p <∗ q then Ep (v1 ) ≥
Eq (v1 ) and Ep (v2 ) ≥ Eq (v2 ). This implies that
Ep (λv1 + (1 − λ) v2 ) = λEp (v1 ) + (1 − λ) Ep (v2 )
≥ λEq (v1 ) + (1 − λ) Eq (v2 ) = Eq (λv1 + (1 − λ) v2 ) ,
proving that Wmax − nor (resp., Wmax ) is convex. Next, consider {vn }n∈N ⊆ Wmax such
that vn → v. It is immediate to see that v is continuous. Moreover, if p <∗ q then
Ep (vn ) ≥ Eq (vn ) for all n ∈ N. By passing to the limit, we obtain that Ep (v) ≥ Eq (v),
that is, that v ∈ Wmax , hence Wmax is closed.
2. By Dubra et al. (2004, Proposition 3), it follows that there exists v̂ ∈ C ([w, b])
such that
p ∼∗ q =⇒ Ep (v̂) = Eq (v̂)   and   p ≻∗ q =⇒ Ep (v̂) > Eq (v̂) .
By assumption, we have that x ≥ y if and only if δx <∗ δy . This implies that x ≥ y
if and only if v̂ (x) ≥ v̂ (y), proving that v̂ is strictly increasing. Since v̂ is strictly
increasing, by taking a positive and affine transformation, v̂ can be chosen to be such
that v̂ (w) = 0 = 1 − v̂ (b). It is immediate to see that v̂ ∈ Wmax − nor .
3. By definition of Wmax , we have that if p <∗ q then Ep (v) ≥ Eq (v) for all
v ∈ Wmax . On the other hand, by assumption and since W ⊆ Wmax , we have that if
Ep (v) ≥ Eq (v) for all v ∈ Wmax then Ep (v) ≥ Eq (v) for all v ∈ W which, in turn,
implies that p <∗ q. In other words, Wmax satisfies (2) for <∗ . By assumption, we
can thus conclude that
x ≥ y =⇒ δx <∗ δy =⇒ Eδx (v) ≥ Eδy (v) ∀v ∈ Wmax =⇒ v (x) ≥ v (y) ∀v ∈ Wmax ,
proving that Wmax ⊆ Vin . By point 2 and since Wmax − nor ⊆ Wmax , we have that
∅ ≠ Wmax ∩ Vs−in . Since Wmax ∩ Vs−in ⊆ Wmax and the latter is closed, we have that cl (Wmax ∩ Vs−in ) ⊆ Wmax . On the other hand, consider v̇ ∈ Wmax ∩ Vs−in and v ∈ Wmax . Define {vn }n∈N by vn = (1/n) v̇ + (1 − 1/n) v for all n ∈ N. Since v, v̇ ∈ Wmax
and the latter set is convex, we have that {vn }n∈N ⊆ Wmax . Since v̇ is strictly
increasing and v is increasing, vn is strictly increasing for all n ∈ N, proving that
{vn }n∈N ⊆ Wmax ∩ Vs−in . Since vn → v, it follows that v ∈ cl (Wmax ∩ Vs−in ), proving
that Wmax ⊆ cl (Wmax ∩ Vs−in ) and thus the opposite inclusion.
4. By assumption, we have that there exists a subset W of C ([w, b]) such that
p <∗ q if and only if Ep (v) ≥ Eq (v) for all v ∈ W. By point 3 and its proof, we can
replace first W with Wmax and then with Wmax ∩ Vs−in . Consider v ∈ Wmax ∩ Vs−in .
Since v is strictly increasing, there exist (unique) γ1 > 0 and γ2 ∈ R such that
v̄ = γ1 v + γ2 is continuous, strictly increasing, and satisfies v̄ (w) = 0 = 1 − v̄ (b).
For each v ∈ Wmax ∩ Vs−in , it is immediate to see that Ep (v) ≥ Eq (v) if and only if
Ep (v̄) ≥ Eq (v̄). Define W̄ = {v̄ : v ∈ Wmax ∩ Vs−in }. Notice that W̄ ⊆ Unor . From
the previous part, we can conclude that p <∗ q if and only if Ep (v̄) ≥ Eq (v̄) for
all v̄ ∈ W̄. It is also immediate to see that W̄ ⊆ Wmax − nor . By construction of
Wmax − nor , notice that
p <∗ q =⇒ Ep (v) ≥ Eq (v)   ∀v ∈ Wmax − nor .
On the other hand, since W̄ ⊆ Wmax − nor , we have that
Ep (v) ≥ Eq (v) ∀v ∈ Wmax − nor =⇒ Ep (v̄) ≥ Eq (v̄) ∀v̄ ∈ W̄ =⇒ p <∗ q.
We can conclude that Wmax − nor represents <∗ .
5. Consider v ∈ W. By assumption, v is a strictly increasing and continuous
function on [w, b] such that v (w) = 0 = 1 − v (b). Moreover, since W satisfies (2),
it follows that p <∗ q implies that Ep (v) ≥ Eq (v). This implies that v ∈ Wmax − nor .
We can conclude that W ⊆ Wmax − nor , hence, cl (W) ⊆ cl (Wmax − nor ). In order to
prove the opposite inclusion, we argue by contradiction. Assume that there exists v ∈
cl (Wmax − nor ) \cl (W). Since v ∈ cl (Wmax − nor ), we have that v (w) = 0 = 1 − v (b).
By Dubra et al. (2004, p. 123–124) and since both W and Wmax − nor satisfy (2), we
also have that
cl ( cone (W) + {θ1[w,b] }θ∈R ) = cl ( cone (Wmax − nor ) + {θ1[w,b] }θ∈R ) .
We can conclude that v ∈ cl ( cone (W) + {θ1[w,b] }θ∈R ). Observe that there exists {v̂n }n∈N ⊆ cone (W) + {θ1[w,b] }θ∈R such that v̂n → v. By construction and since W is convex, there exist {λn }n∈N ⊆ [0, ∞), {vn }n∈N ⊆ W, and {θn }n∈N ⊆ R such that v̂n = λn vn + θn 1[w,b] for all n ∈ N. It follows that
0 = v (w) = limn v̂n (w) = limn [λn vn (w) + θn 1[w,b] (w)] = limn θn
and
1 = v (b) = limn v̂n (b) = limn [λn vn (b) + θn 1[w,b] (b)] = limn {λn + θn } .
This implies that limn θn = 0 = 1 − limn λn . Without loss of generality, we can thus
assume that {λn }n∈N is bounded away from zero, that is, that there exists ε > 0 such
that λn ≥ ε > 0 for all n ∈ N. Since {θn }n∈N and {v̂n }n∈N are both convergent, both
sequences are bounded, that is, there exists k > 0 such that
‖v̂n ‖ ≤ k and |θn | ≤ k   ∀n ∈ N.
It follows that
ε ‖vn ‖ ≤ λn ‖vn ‖ = ‖λn vn ‖ = ‖λn vn + θn 1[w,b] − θn 1[w,b] ‖ ≤ ‖λn vn + θn 1[w,b] ‖ + ‖−θn 1[w,b] ‖ ≤ ‖λn vn + θn 1[w,b] ‖ + |θn | ≤ ‖v̂n ‖ + |θn | ≤ 2k   ∀n ∈ N,
that is, ‖vn ‖ ≤ 2k/ε for all n ∈ N. We can conclude that
‖v − vn ‖ = ‖v − v̂n + v̂n − vn ‖ ≤ ‖v − v̂n ‖ + ‖v̂n − vn ‖ = ‖v − v̂n ‖ + ‖λn vn + θn 1[w,b] − vn ‖ ≤ ‖v − v̂n ‖ + |λn − 1| ‖vn ‖ + |θn | ≤ ‖v − v̂n ‖ + |λn − 1| (2k/ε) + |θn |   ∀n ∈ N.
Passing to the limit, it follows that vn → v, that is, v ∈ cl (W), a contradiction.
We next provide a characterization of <0 which is due to Cerreia-Vioglio (2009).
Here, it is further specialized to the particular case where < satisfies Weak Monotonicity and NCI in addition to Weak Order and Continuity. Before proving the
statement, we need to introduce a piece of terminology. We will say that <00 is an
integral stochastic order if and only if there exists a set W 00 ⊆ C ([w, b]) such that
p <00 q ⇐⇒ Ep (v) ≥ Eq (v)   ∀v ∈ W 00 .
Proposition 4. Let < be a binary relation on ∆ that satisfies Weak Order, Continuity, Weak Monotonicity, and Negative Certainty Independence. The following
statements are true:
(a) There exists a set W ⊆ Unor such that p <0 q if and only if Ep (v) ≥ Eq (v) for
all v ∈ W.
(b) For each p, q ∈ ∆ if p <0 q then p < q.
(c) If <00 is an integral stochastic order that satisfies (b) then p <00 q implies p <0 q.
(d) If <00 is an integral stochastic order that satisfies (b) and such that W 00 can be
chosen to be a subset of Unor then co (W) ⊆ co (W 00 ).
Proof. (a). By Cerreia-Vioglio (2009, Proposition 22), there exists a set W ⊆
C ([w, b]) such that p <0 q if and only if Ep (v) ≥ Eq (v) for all v ∈ W. By Lemma 1,
we also have that x ≥ y if and only if δx <0 δy . By point 4 of Proposition 3, if <∗ =<0
then W can be chosen to be Wmax − nor .
(b), (c), and (d). The statements follow from Cerreia-Vioglio (2009, Proposition
22 and Lemma 35).
The next proposition clarifies the relation between our assumption of
Weak Monotonicity and First Order Stochastic Dominance. Given p, q ∈ ∆, we
write p <F SD q if and only if p dominates q with respect to First Order Stochastic
Dominance.
Proposition 5. If < is a binary relation on ∆ that satisfies Weak Order, Continuity,
Weak Monotonicity, and Negative Certainty Independence then
p <F SD q =⇒ p < q.
Proof. By Proposition 4, there exists W ⊆ Unor such that
p <0 q ⇐⇒ Ep (v) ≥ Eq (v)   ∀v ∈ W.
By Proposition 4 and since W ⊆ Unor ⊆ Vin , it follows that
p <F SD q =⇒ Ep (v) ≥ Eq (v) ∀v ∈ Vin =⇒ Ep (v) ≥ Eq (v) ∀v ∈ W =⇒ p <0 q =⇒ p < q,
proving the statement.
Appendix B: Proof of the Results in the Text
Proof of Theorem 1. Before starting, we point out that in proving (i) implies (ii),
we will prove the existence of a Continuous Cautious Expected Utility representation
W which is convex and normalized, that is, a subset of Unor . This will turn out to be
useful in the proofs of other results in this section. The normalization of W will play
no role in proving (ii) implies (i).
(i) implies (ii). We proceed by steps.
Step 1 . There exists a continuous certainty equivalent utility function V : ∆ → R.
Proof of the Step.
Since < satisfies Weak Order and Continuity, there exists a continuous function
V̄ : ∆ → R such that V̄ (p) ≥ V̄ (q) if and only if p < q. By Weak Monotonicity, we
have that
x ≥ y ⇐⇒ δx < δy ⇐⇒ V̄ (δx ) ≥ V̄ (δy ) .   (3)
Next, observe that δb <F SD q <F SD δw for all q ∈ ∆. By Proposition 5 and since < satisfies Weak Order, Continuity, Weak Monotonicity, and NCI, this implies that
δb < q < δw   ∀q ∈ ∆.   (4)
Consider a generic q ∈ ∆ and the sets
{δx : δx < q} = {p ∈ ∆ : p < q} ∩ {δx }x∈[w,b]
and
{δx : q < δx } = {p ∈ ∆ : q < p} ∩ {δx }x∈[w,b] .
By (4), Continuity, and Aliprantis and Border (2005, Theorem 15.8), both sets are
nonempty and closed. Since < satisfies Weak Order, it follows that the sets
{x ∈ [w, b] : δx < q} and {x ∈ [w, b] : q < δx }
are nonempty, closed, and their union coincides with [w, b]. Since [w, b] is connected,
there exists an element xq in their intersection. In other words, there exists xq ∈ [w, b]
such that δxq ∼ q. Since q was chosen to be generic and by (3) and (4), such element
is unique and we further have that
V̄ (δb ) ≥ V̄ (q) = V̄ (δxq ) ≥ V̄ (δw )   ∀q ∈ ∆.   (5)
Next, define f : [w, b] → R by f (x) = V̄ (δx ) for all x ∈ [w, b]. By (3), Aliprantis
and Border (2005, Theorem 15.8), and (5), f is strictly increasing, continuous, and
such that f ([w, b]) = V̄ (∆). It follows that V : ∆ → R defined by V = f −1 ◦ V̄ is a well-defined continuous function such that p < q if and only if V (p) ≥ V (q) and
V (δx ) = x for all x ∈ [w, b], proving the statement.
Step 2. <0 is represented by a set W ⊆ Unor , that is,
p <0 q ⇐⇒ c (p, v) ≥ c (q, v)   ∀v ∈ W.   (6)
Proof of the Step.
It follows by point (a) of Proposition 4. Recall that W can be chosen to be
Wmax − nor for <0 .
Step 3. For each p ∈ ∆ we have that inf v∈W c (p, v) ∈ [w, b].
Proof of the Step.
Fix p ∈ ∆. By construction, we have that b ≥ c (p, v) ≥ w for all v ∈ W. It
follows that c = inf v∈W c (p, v) is a real number in [w, b].
Step 4. For each p ∈ ∆ we have that
V (p) ≤ inf v∈W c (p, v) .
Proof of the Step.
Fix p ∈ ∆. By Step 3, c = inf v∈W c (p, v) is a real number in [w, b]. Since
V (∆) = [w, b], if c = b then we have that V (p) ≤ b = c. Otherwise, pick d such that
b > d > c. Since d > c, we have that there exists ṽ ∈ W such that
c (p, ṽ) < d = c (δd , ṽ) .
By Step 2, it follows that p 6<0 δd . By Lemma 1, this implies that δd ≻ p, that is,
V (p) < V (δd ) = d. Since d was chosen to be generic and strictly greater than c, we
have that V (p) ≤ c, proving the statement.
Step 5. For each p ∈ ∆ we have that
V (p) ≥ inf v∈W c (p, v) .
Proof of the Step.
Fix p ∈ ∆. By Step 3, c = inf v∈W c (p, v) is a real number in [w, b]. By construction, we have that
c (p, v) ≥ c = c (δc , v)   ∀v ∈ W.
By Step 2, it follows that p <0 δc . By Proposition 4 point (b), this implies that p < δc ,
that is, V (p) ≥ V (δc ) = c, proving the statement.
The implication follows from Steps 1, 2, 4, and 5.
(ii) implies (i). Assume there exists a set W ⊆ U such that V : ∆ → R, defined
by
V (p) = inf v∈W c (p, v)   ∀p ∈ ∆,
is a continuous utility function for <. Since < is represented by a continuous utility
function, it follows that it satisfies Weak Order and Continuity. By construction, it
is also immediate to see that V (δx ) = x for all x ∈ [w, b]. In light of this fact, Weak
Monotonicity follows immediately. Finally, consider p ∈ ∆ and x ∈ [w, b]. Assume
that p < δx . It follows that for each λ ∈ [0, 1] and for each q ∈ ∆
c (p, v) ≥ V (p) ≥ V (δx ) = x = c (δx , v)   ∀v ∈ W
=⇒ Ep (v) ≥ Eδx (v)   ∀v ∈ W
=⇒ Eλp+(1−λ)q (v) ≥ Eλδx +(1−λ)q (v)   ∀v ∈ W
=⇒ c (λp + (1 − λ) q, v) ≥ c (λδx + (1 − λ) q, v)   ∀v ∈ W
=⇒ V (λp + (1 − λ) q) ≥ V (λδx + (1 − λ) q)
=⇒ λp + (1 − λ) q < λδx + (1 − λ) q,
proving that < satisfies NCI.
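The implication chain above can also be checked numerically. The sketch below (ours; the three utilities on [w, b] = [0, 1] and the random draws are illustrative assumptions) builds V as the smallest certainty equivalent over a finite W ⊆ U and verifies the NCI conclusion on randomly drawn lotteries, sure outcomes, and mixing weights.

```python
import random

# Three strictly increasing utilities on [0, 1], paired with their exact inverses.
W = [
    (lambda x: x,        lambda y: y),
    (lambda x: x ** 2,   lambda y: y ** 0.5),
    (lambda x: x ** 0.5, lambda y: y ** 2),
]

def cert_equiv(lottery, v, v_inv):
    # c(p, v) = v^{-1}(E_p[v]) for a finite-support lottery {outcome: probability}.
    return v_inv(sum(prob * v(x) for x, prob in lottery.items()))

def V(lottery):
    # Cautious Expected Utility value: the smallest certainty equivalent over W.
    return min(cert_equiv(lottery, v, v_inv) for v, v_inv in W)

def mix(lam, p, q):
    # The mixture lam*p + (1 - lam)*q of two finite-support lotteries.
    out = {x: lam * prob for x, prob in p.items()}
    for x, prob in q.items():
        out[x] = out.get(x, 0.0) + (1.0 - lam) * prob
    return out

random.seed(0)
for _ in range(1000):
    p = {random.random(): 0.5, random.random(): 0.5}
    q = {random.random(): 0.5, random.random(): 0.5}
    x, lam = random.random(), random.random()
    if V(p) >= x:  # p is weakly preferred to the sure outcome x (i.e., to delta_x) ...
        # ... so, by NCI, mixing both options with q must preserve the ranking.
        assert V(mix(lam, p, q)) >= V(mix(lam, {x: 1.0}, q)) - 1e-9
print("NCI held on all sampled mixtures")
```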
Proof of Proposition 1. Consider W and W 0 in Unor such that co(W) = co(W 0 ).
Notice first that if both W and W 0 are convex, the proposition follows trivially. To
prove the proposition, it will therefore suffice to show that for each W ⊆ Unor we have
that
inf v∈W c (p, v) = inf v∈co(W) c (p, v)   ∀p ∈ ∆.
Consider p ∈ ∆. It is immediate to see that
inf v∈W c (p, v) ≥ inf v∈co(W) c (p, v) .
Conversely, consider v̄ ∈ co (W). It follows that there exist {vi }ni=1 ⊆ W and {λi }ni=1 ⊆ [0, 1] such that ∑ni=1 λi = 1 and ∑ni=1 λi vi = v̄. Define x ∈ [w, b] and {xi }ni=1 ⊆ [w, b] by x = c (p, v̄) and xi = c (p, vi ) for all i ∈ {1, ..., n}. By contradiction, assume that x < mini∈{1,...,n} xi . Since {vi }ni=1 ⊆ W ⊆ Vs−in , we have that
Ep (v̄) = Ep (∑ni=1 λi vi ) = ∑ni=1 λi Ep (vi ) = ∑ni=1 λi vi (xi ) > ∑ni=1 λi vi (x) = v̄ (x) ,
that is, x = c (p, v̄) > x, a contradiction. This implies that
c (p, v̄) = x ≥ mini∈{1,...,n} xi = mini∈{1,...,n} c (p, vi ) ≥ inf v∈W c (p, v) .
Since v̄ was chosen to be generic in co (W), we can conclude that
c (p, v̄) ≥ inf v∈W c (p, v)   ∀v̄ ∈ co (W) ,
proving that inf v∈W c (p, v) ≤ inf v∈co(W) c (p, v) and thus the statement.
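As a numerical sanity check of the inequality just established, the short sketch below (ours; the two normalized utilities on [w, b] = [0, 1] and the single 50/50 lottery are illustrative assumptions) confirms that the certainty equivalent under any mixture of two utilities never falls below the smaller of the two individual certainty equivalents, so passing from W to co(W) cannot lower the infimum.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 100_001)        # fine grid on [w, b] = [0, 1]
v1 = xs.copy()                             # v1(x) = x
v2 = np.sqrt(xs)                           # v2(x) = sqrt(x); both normalized on [0, 1]

def cert_equiv(lottery, values):
    # c(p, v) = v^{-1}(E_p[v]), with v given by its values on the grid xs.
    eu = sum(prob * np.interp(x, xs, values) for x, prob in lottery.items())
    return float(np.interp(eu, values, xs))  # invert the increasing function

p = {0.0: 0.5, 1.0: 0.5}                         # a 50/50 lottery over the endpoints
c1, c2 = cert_equiv(p, v1), cert_equiv(p, v2)    # approximately 0.5 and 0.25
for lam in np.linspace(0.0, 1.0, 11):
    c_mix = cert_equiv(p, lam * v1 + (1.0 - lam) * v2)
    assert c_mix >= min(c1, c2) - 1e-6           # mixing utilities never lowers the minimum
print("smallest certainty equivalent:", round(min(c1, c2), 4))
```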
Proof of Theorem 2. By the proof of Theorem 1 (Steps 1, 2, 4, and 5), we have that there exists a set Ŵ ⊆ Unor such that
p <0 q ⇐⇒ Ep (v) ≥ Eq (v)   ∀v ∈ Ŵ   (7)
and such that V : ∆ → R, defined by
V (p) = inf v∈Ŵ c (p, v)   ∀p ∈ ∆,   (8)
is a continuous utility function for <. This proves points (i) and (iii). Next consider a subset W of Unor such that the function V : ∆ → R defined by V (p) = inf v∈W c (p, v) for all p ∈ ∆ represents <. Define <00 by
p <00 q ⇐⇒ Ep (v) ≥ Eq (v)   ∀v ∈ W.
It is immediate to see that if p <00 q then p < q. By point (d) of Proposition 4, this implies that co(Ŵ) ⊆ co (W), proving point (ii).
Finally, consider two sets Ŵ1 and Ŵ2 in Unor that satisfy (7) and (8). By point 5 of Proposition 3, it follows that co(Ŵ1 ) = co(Ŵ2 ).
Proof of Theorem 3. We just prove point (i) since point (ii) follows by an analogous
argument. Given p ∈ ∆, we denote by e(p) its expected value. We say that p <M P S q if and only if q is a mean preserving spread of p.32 Recall that < is risk averse if and only if p <M P S q implies p < q. Assume that < is risk averse. Let p, q ∈ ∆0 .33 Since ∆0 is dense in ∆ and < satisfies Weak Order and Continuity, we have that
p <M P S q =⇒ λp + (1 − λ) r <M P S λq + (1 − λ) r   ∀λ ∈ (0, 1] , ∀r ∈ ∆0
=⇒ λp + (1 − λ) r < λq + (1 − λ) r   ∀λ ∈ (0, 1] , ∀r ∈ ∆0
=⇒ λp + (1 − λ) r < λq + (1 − λ) r   ∀λ ∈ (0, 1] , ∀r ∈ ∆
=⇒ p <0 q.
This implies that
p <M P S q =⇒ p <0 q =⇒ Ep (v) ≥ Eq (v)   ∀v ∈ Ŵ.
We can conclude that each v in Ŵ is concave. For the other direction, assume that each v in Ŵ is concave. Since Ŵ ⊆ Vinco , we have that
p <M P S q =⇒ e (p) = e (q) and Ep (v) ≥ Eq (v) ∀v ∈ Vinco =⇒ Ep (v) ≥ Eq (v) ∀v ∈ Ŵ =⇒ p <0 q =⇒ p < q,
proving that < is risk averse.
32 Recall that, by Rothschild and Stiglitz (1970), if p and q are elements of ∆0 and q is a mean preserving spread of p, then p and q have the same mean and they give the same probability to each point in their support with the exception of four ordered points x1 < x2 < x3 < x4 . There the following relations hold: q (x1 ) − p (x1 ) = p (x2 ) − q (x2 ) ≥ 0 and q (x4 ) − p (x4 ) = p (x3 ) − q (x3 ) ≥ 0.
33 Recall that ∆0 is the subset of ∆ which contains just the elements with finite support.
Proof of Theorem 4. Before proceeding, we make a few remarks. Fix i ∈ {1, 2}.
By the proof of Theorem 1 and since <i satisfies Weak Order, Continuity, Weak Monotonicity, and NCI, it follows that W^i_{max−nor} for <0i constitutes a Continuous Cautious Expected Utility representation of <i . Since W^i_{max−nor} is convex, if Ŵi is chosen as in Theorem 2 then we have that co(Ŵi ) coincides with the closure of W^i_{max−nor} . Also recall that for each p ∈ ∆, we denote by x^i_p the element in [w, b] such that p ∼i δ_{x^i_p} . We also have that Vi : ∆ → R, defined by
Vi (p) = inf v∈W^i_{max−nor} c (p, v) = inf v∈Ŵi c (p, v) = inf v∈Wi c (p, v)   ∀p ∈ ∆,
represents <i , yielding that x^i_p = Vi (p).
(i) implies (ii) and (i) implies (iii). Since <1 is more risk averse than <2 , we have that p ∼1 δ_{x^1_p} implies p <2 δ_{x^1_p} . Since <2 satisfies Weak Order and Weak Monotonicity, it follows that x^2_p ≥ x^1_p for all p ∈ ∆. This implies that
V1 (p) = min {V1 (p) , V2 (p)} = inf v∈W1 ∪W2 c (p, v)   ∀p ∈ ∆,
that is, W1 ∪ W2 is a Continuous Cautious Expected Utility representation of <1 . By the remark in Section 3, it follows that \widehat{W1 ∪ W2} is also a Continuous Cautious Expected Utility representation of <1 . By the initial part of the proof, we can conclude that co(Ŵ1 ) = cl W^1_{max−nor} = co(\widehat{W1 ∪ W2}).
(iii) implies (i). Since co(Ŵ1 ) = cl W^1_{max−nor} = co(\widehat{W1 ∪ W2}), it follows that
V1 (p) = inf v∈Ŵ1 c (p, v) = inf v∈\widehat{W1 ∪W2} c (p, v) = inf v∈W1 ∪W2 c (p, v) = min {V1 (p) , V2 (p)} ≤ V2 (p)   ∀p ∈ ∆,
proving that x^2_p ≥ x^1_p for all p ∈ ∆. It follows that <1 is more risk averse than <2 .
(ii) implies (i). Since W1 ∪ W2 is a Continuous Cautious Expected Utility representation of <1 , it follows that
V1 (p) = inf v∈W1 ∪W2 c (p, v) ≤ inf v∈W2 c (p, v) ≤ V2 (p)   ∀p ∈ ∆,
proving that x^2_p ≥ x^1_p for all p ∈ ∆. It follows that <1 is more risk averse than <2 .
Proof of Proposition 2. (i) We first prove necessity. By Theorem 2 and since <1
and <2 satisfy Weak Order, Continuity, Weak Monotonicity, and NCI, we have that
p <0i q ⇐⇒ Ep (v) ≥ Eq (v) ∀v ∈ Ŵi ⇐⇒ Ep (v) ≥ Eq (v) ∀v ∈ co(Ŵi ).   (9)
By Proposition 4 point (b) and since <1 is more indecisive than <2 , we have that
p <01 q =⇒ p <02 q =⇒ p <2 q.
By Proposition 4 point (d) and (9), we can conclude that co(Ŵ2 ) ⊆ co(Ŵ1 ). We next prove sufficiency. By (9) and since co(Ŵ2 ) ⊆ co(Ŵ1 ), we have that
p <01 q =⇒ Ep (v) ≥ Eq (v) ∀v ∈ co(Ŵ1 ) =⇒ Ep (v) ≥ Eq (v) ∀v ∈ co(Ŵ2 ) =⇒ p <02 q,
proving point (i).
(ii) By (9) and Proposition 3, we have that co(Ŵi ) = cl W^i_{max−nor} for i ∈ {1, 2}. Since <1 is more indecisive than <2 , it follows that cl W^2_{max−nor} ⊆ cl W^1_{max−nor} . By definition of W^1_{max−nor} and W^2_{max−nor} , it follows that W^2_{max−nor} ⊆ W^1_{max−nor} . By the proof of Theorem 1, this implies that
V1 (p) = inf v∈W^1_{max−nor} c (p, v) ≤ inf v∈W^2_{max−nor} c (p, v) = V2 (p)   ∀p ∈ ∆.
Since each Vi is a continuous certainty equivalent utility function, it follows that <1
is more risk averse than <2 .
Proof of Theorem 5. Let <0 be a reflexive and transitive binary relation on ∆ that
satisfies Sequential Continuity, Weak Monotonicity, and Independence. We first prove
the existence of a Cautious Completion. In doing this, we show that this completion
has a Cautious Expected Utility representation. By Dubra et al. (2004), there exists
a set W ⊆ C([w, b]) such that p <0 q if and only if Ep (v) ≥ Eq (v) for all v ∈ W. By
Proposition 3, without loss of generality, we can assume that W ⊆ Unor ⊆ U.
Next define the binary relation <ˆ as
p <ˆ q ⇐⇒ inf v∈W c (p, v) ≥ inf v∈W c (q, v) .   (10)
Notice that <ˆ is well defined and it satisfies Weak Order, Weak Monotonicity, and clearly for each p ∈ ∆ there exists x ∈ [w, b] such that p ∼ˆ δx . Next, we show <ˆ is a completion of <0 . Since each v ∈ W is strictly increasing, we have that
p <0 q ⇐⇒ c (p, v) ≥ c (q, v)   ∀v ∈ W.   (11)
This implies that if p <0 q then inf v∈W c (p, v) ≥ inf v∈W c (q, v), that is, if p <0 q then p <ˆ q. Finally, let x be an element of [w, b] and p ∈ ∆ such that p 6<0 δx . By (11), it follows that there exists ṽ ∈ W such that c (δx , ṽ) = x > c (p, ṽ). By (10), this implies that inf v∈W c (δx , v) = x > inf v∈W c (p, v), hence δx ≻ˆ p. This concludes the proof of the existence of a Cautious Completion.
We are left with proving uniqueness. Let <◦ be a Cautious Completion of <0 .
By point 1 of Definition 5, <◦ satisfies Weak Order, Weak Monotonicity, and for
each p ∈ ∆ there exists x ∈ [w, b] such that p ∼◦ δx . This implies that there exists
V : ∆ → R such that V represents <◦ and V (δx ) = x for all x ∈ [w, b]. Moreover, we
have that V (∆) = [w, b]. Let p ∈ ∆. Define c = inf v∈W c (p, v) ∈ [w, b]. If c = b then
V (p) ≤ b = c = inf v∈W c (p, v). If c < b then for each d ∈ (c, b) there exists ṽ ∈ W
such that c (δd , ṽ) = d > c (p, ṽ), yielding that p 6<0 δd . By point 3 of Definition 5, we can conclude that δd ≻◦ p, that is, d = V (δd ) > V (p). Since d was arbitrarily
chosen in (c, b), it follows that V (p) ≤ c = inf v∈W c (p, v). Finally, by definition of c
and (11), we have that c (p, v) ≥ c = c (δc , v) for all v ∈ W, that is, p <0 δc . By point
2 of Definition 5, it follows that p <◦ δc , that is, V (p) ≥ V (δc ) = c = inf v∈W c (p, v).
In other words, we have shown that V (p) = inf v∈W c (p, v) for all p ∈ ∆. By (10) and
since V represents <◦ , we can conclude that <◦ = <ˆ, proving the statement.
References
Agranov, M. and P. Ortoleva (2013): “Stochastic Choice and Hedging,” Mimeo California
Institute of Technology.
Aliprantis, C. D. and K. C. Border (2005): Infinite Dimensional Analysis, Springer, Berlin
Heidelberg New York.
Andreoni, J. and C. Sprenger (2012): “Uncertainty Equivalents: Linear Tests of the Independence Axiom,” Mimeo, University of California San Diego.
Artstein-Avidan, S. and D. Dillenberger (2011): “Dynamic Disappointment Aversion,”
Mimeo, University of Pennsylvania.
Battalio, R. C., J. H. Kagel, and K. Jiranyakul (1990): “Testing between alternative models
of choice under uncertainty: Some initial results,” Journal of risk and uncertainty, 3, 25–50.
Bell, D. E. and P. C. Fishburn (2003): “Probability weights in rank-dependent utility with
binary even-chance independence,” Journal of Mathematical Psychology, 47, 244–258.
Bernasconi, M. (1994): “Nonlinear preferences and two-stage lotteries: Theories and evidence,”
The Economic Journal, 54–70.
Burke, M. S., J. R. Carter, R. D. Gominiak, and D. F. Ohl (1996): “An experimental note
on the Allais paradox and monetary incentives,” Empirical Economics, 21, 617–632.
Camerer, C. F. (1989): “An experimental test of several generalized utility theories,” Journal of
Risk and uncertainty, 2, 61–104.
——— (1992): “Recent tests of generalizations of expected utility theory,” Utility theories: Measurements and applications, 207–251.
——— (1995): “Individual decision making,” in Handbook of Experimental Economics, ed. by J. H. Kagel and A. E. Roth, Princeton University Press, Princeton.
Camerer, C. F. and T.-H. Ho (1994): “Violations of the betweenness axiom and nonlinearity in
probability,” Journal of risk and uncertainty, 8, 167–196.
Cerreia-Vioglio, S. (2009): “Maxmin Expected Utility on a Subjective State Space: Convex
Preferences under Risk,” Mimeo, Bocconi University.
Cerreia-Vioglio, S., P. Ghirardato, F. Maccheroni, M. Marinacci, and M. Siniscalchi
(2011a): “Rational Preferences under Ambiguity,” Economic Theory, 48, 341–375.
Cerreia-Vioglio, S., F. Maccheroni, M. Marinacci, and L. Montrucchio (2011b): “Risk
measures: rationality and diversification,” Mathematical Finance, 21, 743–774.
Chatterjee, K. and V. R. Krishna (2011): “A nonsmooth approach to nonexpected utility
theory under risk,” Mathematical Social Sciences, 62, 166–175.
Chew, S. H. (1983): “A Generalization of the Quasilinear Mean with Applications to the Measurement of Income Inequality and Decision Theory Resolving the Allais Paradox,” Econometrica,
51, 1065–1092.
——— (1989): “Axiomatic utility theories with the betweenness property,” Annals of Operations
Research, 19, 273–298.
Chew, S. H. and L. G. Epstein (1989): “A unifying approach to axiomatic non-expected utility
theories,” Journal of Economic Theory, 49, 207–240.
Chew, S. H. and W. S. Waller (1986): “Empirical tests of weighted utility theory,” Journal of
Mathematical Psychology, 30, 55–72.
Cohen, M. and J. Jaffray (1988): “Certainty effect versus probability distortion: An experimental analysis of decision making under risk.” Journal of Experimental Psychology: Human
Perception and Performance, 14, 554.
Conlisk, J. (1989): “Three variants on the Allais example,” The American Economic Review, 79,
392–407.
Dean, M. and P. Ortoleva (2012a): “Allais, Ellsberg, and Preferences for Hedging,” Mimeo,
California Institute of Technology.
——— (2012b): “Estimating the Relationship between Economic Preferences: A Testing Ground
for Unified Theories of Behavior,” Mimeo, California Institute of Technology.
Dekel, E. (1986): “An axiomatic characterization of preferences under uncertainty: Weakening
the independence axiom,” Journal of Economic Theory, 40, 304–318.
Dillenberger, D. (2010): “Preferences for One-Shot Resolution of Uncertainty and Allais-Type
Behavior,” Econometrica, 78, 1973–2004.
Dillenberger, D. and S. Erol (2013): “Negative certainty independence without Betweenness,”
Economics Letters (forthcoming).
Dubra, J., F. Maccheroni, and E. A. Ok (2004): “Expected utility theory without the completeness axiom,” Journal of Economic Theory, 115, 118–133.
Dwenger, N., D. Kübler, and G. Weizsäcker (2013): “Flipping a Coin: Theory and Evidence,” Mimeo Humboldt-Universität zu Berlin.
Fan, C.-P. (2002): “Allais paradox in the small,” Journal of Economic Behavior & Organization,
49, 411–421.
Galaabaatar, T. and E. Karni (2013): “Subjective Expected Utility with Incomplete Preferences,” Econometrica, 81, 255–284.
Ghirardato, P., F. Maccheroni, and M. Marinacci (2004): “Differentiating ambiguity and
ambiguity attitude,” Journal of Economic Theory, 118, 133–173.
Gilboa, I. (1988): “A combination of expected utility and maxmin decision criteria,” Journal of
Mathematical Psychology, 32, 405–420.
Gilboa, I. and D. Schmeidler (1989): “Maxmin expected utility with non-unique prior,” Journal
of Mathematical Economics, 18, 141–153.
Gilboa, I., D. Schmeidler, F. Maccheroni, and M. Marinacci (2010): “Objective and
subjective rationality in a multi-prior model,” Econometrica, 78, 755–770.
Gul, F. (1991): “A theory of disappointment aversion,” Econometrica, 59, 667–686.
Harless, D. W. (1992): “Predictions about indifference curves inside the unit triangle: A test of
variants of expected utility theory,” Journal of Economic Behavior & Organization, 18, 391–414.
Harless, D. W. and C. F. Camerer (1994): “The predictive utility of generalized expected
utility theories,” Econometrica, 1251–1289.
Holt, C. and S. Laury (2002): “Risk aversion and incentive effects,” American Economic Review,
92, 1644–1655.
Huck, S. and W. Müller (2012): “Allais for all: Revisiting the paradox in a large representative
sample,” Journal of Risk and Uncertainty, 44, 261–293.
Ikefuji, M., R. J. Laeven, J. R. Magnus, and C. Muris (2012): “Pareto utility,” Theory and
Decision, 1–15.
Jaffray, J. Y. (1988): “Choice under risk and the security factor: an axiomatic model,” Theory
and Decision, 24, 169–200.
Kahneman, D. and A. Tversky (1979): “Prospect theory: an analysis of choice under risk,”
Econometrica, 47, 263–291.
Maccheroni, F. (2002): “Maxmin under risk,” Economic Theory, 19, 823–831.
Machina, M. J. (1982): “‘Expected Utility’ Analysis without the Independence Axiom,” Econometrica, 50, 277–323.
Ok, E., P. Ortoleva, and G. Riella (2012): “Incomplete Preferences under Uncertainty: Indecisiveness in Beliefs vs. Tastes,” Econometrica, 80, 1791–1808.
Prelec, D. (1990): “A Pseudo-endowment effect, and its implications for some recent nonexpected
utility models,” Journal of Risk and Uncertainty, 3, 247–259.
Quiggin, J. (1982): “A theory of anticipated utility,” Journal of Economic Behavior & Organization, 3, 323–343.
Riella, G. (2013): “On the Representation of Incomplete Preferences under Uncertainty with
Indecisiveness in Tastes,” Mimeo, Universidade de Brasília.
Rothschild, M. and J. E. Stiglitz (1970): “Increasing risk: I. A definition,” Journal of Economic Theory, 2, 225–243.
Saha, A. (1993): “Expo-power utility: A ‘flexible’ form for absolute and relative risk aversion,”
American Journal of Agricultural Economics, 75, 905–913.
Schmeidler, D. (1989): “Subjective probability and expected utility without additivity,” Econometrica, 57, 571–587.
Schmidt, U. (1998): “A Measurement of the Certainty Effect,” Journal of Mathematical Psychology,
42, 32–47.
Sopher, B. and G. Gigliotti (1993): “A test of generalized expected utility theory,” Theory and
Decision, 35, 75–106.
Sopher, B. and J. M. Narramore (2000): “Stochastic choice and consistency in decision making
under risk: An experimental study,” Theory and Decision, 48, 323–350.
Starmer, C. (2000): “Developments in non-expected utility theory: The hunt for a descriptive
theory of choice under risk,” Journal of Economic Literature, 38, 332–382.
Starmer, C. and R. Sugden (1989): “Violations of the independence axiom in common ratio
problems: An experimental test of some competing hypotheses,” Annals of Operations Research,
19, 79–102.
Tversky, A. and D. Kahneman (1992): “Advances in prospect theory: cumulative representation
of uncertainty,” Journal of Risk and Uncertainty, 5, 297–323.
Wakker, P., I. Erev, and E. U. Weber (1994): “Comonotonic independence: The critical
test between classical and rank-dependent utility theories,” Journal of Risk and Uncertainty, 9,
195–230.
Wakker, P. P. (2010): Prospect theory: For risk and ambiguity, Cambridge Univ Press.
Wu, G. (1994): “An empirical test of ordinal independence,” Journal of Risk and Uncertainty, 9,
39–60.
Wu, G. and R. Gonzalez (1996): “Curvature of the probability weighting function,” Management
science, 42, 1676–1690.