A theory of fair compensation
The Hebrew University of Jerusalem

Abstract
1. Introduction
2. The normative basis: a measure of riskiness
2.1 Introduction to the operational measure of riskiness
2.2 A special class of strategies
3. Approximating riskiness
3.1 A Taylor-based approximation of R
3.2 From approximating riskiness to defining a decision rule
3.3 A final refinement of the decision rule
4. Fair compensation theory
4.1 Following the decision rule
4.2 Alternative interpretations of the decision rule
4.2.1 Gain and riskiness
4.2.2 The mechanism interpretation
4.3 Compensation as an expected payoff
4.3.1 The p-root transformation
4.3.2 From riskiness measure to attractiveness measure
5. Data analysis
5.1 Analysis of the results
5.1.1 Loss probabilities up to 0.8
5.1.2 The 50-50 case
5.1.3 High loss probabilities (above 0.8)
6. Properties of the fair compensation rate
6.1 Homogeneity
6.2 Globality
6.3 Stochastic dominance
6.4 Decomposing compensation into its components
7. Discussion
8. Conclusion
Appendix A: Wealth considerations in prospect theory results
Appendix B: Proofs
Appendix C: Expansion to “non-simple” gambles
References
Table 1: Compensation rate as a function of loss probability
Table 2: A cross-theory comparison of indifference points
Table 3: A test of loss aversion with mixed prospects
Abstract
Prospect Theory, and more specifically Cumulative Prospect Theory, has challenged the
assumption that expected utility theory describes observed decision behavior in the real
world. Moreover, prospect theory has challenged the assumption, common in economics,
that the decision maker is rational. However, maximizing expected utility may not be the
one and only measure of rationality, and other normative theories of decision under risk
may be considered. Recently there has been progress in the development of the concept
of riskiness, which opens routes to a stricter definition of rational decision making.
In this paper we suggest that decision making under risk, as reported and analyzed in
(cumulative) prospect theory, can be explained as an approximation of a normative set of
rules based on the concept of riskiness measures. The approximation of fully rational
behavior can be thought of as a form of bounded rationality, where the suboptimality of
cognitive procedures finds expression in the substitution of complex formulas with their
simplified approximations. We then calculate an indifference threshold for the median
decision maker1 which is consistent with both riskiness considerations and evidence from
the lab, and which captures loss aversion and other descriptive attributes of common
decision making. This indifference threshold is interpreted as a fair compensation level
for accepting risky gambles, and its characteristics and implications are examined.
Finally, using the indifference threshold we build a new measure for the attractiveness of
gambles.
1. Introduction
Loss aversion, diminishing sensitivity, and overweighting of small probabilities have
been three essential characteristics widely regarded as explaining observed human
decision making ever since Tversky and Kahneman first articulated them in their seminal
work (Kahneman & Tversky, 1979; Tversky & Kahneman, 1991 and 1992). Prospect
Theory, and in particular Cumulative Prospect Theory, strives to describe these properties
by suggesting a two-stage formulation, whose first stage translates probabilities into
1 Throughout this paper, we use the term "median decision maker" to indicate a decision maker whose attitude towards risk places him at the median of the population.
decision weights, and whose second stage translates payoffs into utilities using a value
function that is concave for gains and convex for losses. The nonlinear transformation of
probabilities contradicts the axioms of von Neumann and Morgenstern’s Expected Utility
Theory (von Neumann & Morgenstern, 1944), the most firmly established and widely
used decision theory in economics, and the dependency on a reference point contradicts
the common assumption in economics that utility is a function of total wealth. As a result
of this contradiction, there ensued a long and lasting debate on the extent to which
expected utility theory can explain observed behavior, and although the normative
character of expected utility theory is usually agreed upon, its positive (or descriptive)
character remains in doubt. A route seldom taken toward resolving the contradiction is to
look for an alternative normative theory of decision making under risk that would explain
real-world behavior better than expected utility theory and thus reaffirm the rationality of
the decision maker. We suggest that such an alternative theory can be found in the
growing body of literature on riskiness and measures of riskiness.
In chapter 2 we present the alternative normative theory to be discussed here—the
operational measure of riskiness suggested by Foster and Hart (2007)—and discuss a
class of strategies of a potentially descriptive nature that follow from this riskiness
measure. Chapter 3 raises some bounded rationality considerations that lead to a new
approximate measure of riskiness, and based on this new measure a decision rule is
suggested. Chapter 4 discusses the attributes of the decision rule, such as how it is used
and what interpretations can be given to it, which form the theoretical basis for our Fair
Compensation Theory. Chapter 5 checks the validity of the predictions of this fair
compensation theory against the results of cumulative prospect theory. Chapter 6
investigates the mathematical characteristics of the approximate measure of riskiness
used in this theory. In section 7 we discuss possible extensions of the fair compensation
theory. Section 8 concludes.
2. The normative basis: a measure of riskiness
2.1 Introduction to the operational measure of riskiness
Decision theory analyzes human decision making under risk, while sometimes presenting
such decisions as gambles. The statistical moments of a gamble’s outcomes (e.g., the
mean and the variance) can imply its level of risk, but there is no agreed-on measure of
riskiness, despite various attempts to define one. For a survey of measures of riskiness,
see Foster & Hart, 2007, Section 6.4; Aumann & Serrano, 2007, Section 7; Machina &
Rothschild, 2007.
There is an ongoing literature on measures of riskiness for gambles. Two fruitful
measures were recently suggested by Foster and Hart (2007) and Aumann and Serrano
(2007). Although these two measures originate from totally distinct hypotheses, and are
quite different in their agendas, both reach very similar results for a large class of
gambles,2 since their mathematical formulas are similar. Foster and Hart’s model is of
particular interest, since a decision rule that guarantees no bankruptcy can be based upon
their operational measure of riskiness. Their model suggests that in order to avoid
bankruptcy, one should compare one's “reserve wealth” (“wealth” for short, i.e., the
whole wealth or the fraction thereof dedicated to gambles) to the riskiness of the gamble
(measured in monetary terms), and reject the gamble if wealth is lower than riskiness
(henceforth the F&H rejection rule). On the other hand, accepting a gamble when wealth
is greater than riskiness is not mandatory.
The following notations and definitions are based on Foster and Hart (2007). A gamble g is a real-valued random variable having some negative values (losses are possible) and positive expectation, i.e., $P(g < 0) > 0$ and $E[g] > 0$. The probability of loss is strictly positive because otherwise the gamble would be degenerate: any gamble whose outcomes are all positive must be accepted by a rational decision maker, and is assumed to be accepted in practice by practically everyone.3 The expected payoff of the gamble is restricted to positive values because every gamble with a negative expectation
2 The class of gambles with payoffs that are relatively small compared to the riskiness measure.
3 Rationality cannot be a requirement in a positive theory of decision making (as opposed to a normative one), but this extremely weak sense of rationality (a preference for a positive amount of money over none) is a reasonable characteristic for the ordinary decision maker.
is assumed to be rejected by the common decision maker, as most people are risk-averse.
We can assign a measure of riskiness $R_{FH}(g)$ to each gamble g, such that the greater the risk involved in the gamble, the greater the value of $R_{FH}(g)$. $R_{FH}(g)$ is homogeneous of degree 1, i.e., $R_{FH}(\lambda g) = \lambda R_{FH}(g)$ for every $\lambda > 0$. For each gamble g and wealth W, a strategy s either accepts g at W, or rejects g at W. Hence we can write the F&H rejection rule as follows: A strategy s guarantees no bankruptcy if and only if for every gamble g and wealth W>0, whenever $W < R_{FH}(g)$, s rejects g at W. The value of $R_{FH}(g)$ (hereafter simply R) is obtained by solving the equation
$$E\left[\ln\left(1 + \frac{g}{R}\right)\right] = 0$$
(R has no explicit expression).
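To make the implicit definition concrete, the value of R for a “simple” gamble can be found by a one-dimensional root search. The following sketch is ours, not part of Foster and Hart's exposition; the example gamble (lose 100 or win 200 with equal probabilities) and the bisection tolerance are illustrative choices.

import math

def foster_hart_riskiness(gain, loss, p_loss, tol=1e-9):
    """Numerically solve E[ln(1 + g/R)] = 0 for a gamble that loses `loss`
    with probability `p_loss` and wins `gain` otherwise (the root lies above `loss`)."""
    def expected_log(R):
        return ((1.0 - p_loss) * math.log(1.0 + gain / R)
                + p_loss * math.log(1.0 - loss / R))

    lo = loss * (1.0 + 1e-12)        # expected_log tends to -infinity as R approaches loss
    hi = 2.0 * loss
    while expected_log(hi) <= 0.0:   # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol * hi:        # plain bisection
        mid = 0.5 * (lo + hi)
        if expected_log(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Losing 100 or winning 200 with equal probabilities gives R = 200, so the
# F&H rejection rule rejects this gamble at any wealth below 200.
print(foster_hart_riskiness(gain=200.0, loss=100.0, p_loss=0.5))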
2.2 A special class of strategies
Definition: A “simple” gamble g is a two-outcome gamble yielding a strictly negative
payoff -L with positive probability p, and a positive payoff G otherwise:
Figure 1: the “simple” gamble g, yielding -L with probability p and G with probability 1-p.
For the sake of simplicity we will restrict our discussion in this paper to “simple”
gambles (i.e., two-outcome gambles). This restriction should not weaken our analysis, as
prospect theory and cumulative prospect theory are completely based on evaluations of
two-outcome gambles (see Kahneman & Tversky, 1979, 1992). We refer to “non-simple”
gambles in Appendix C.
Next, we define $v_s(g) := \inf\{W > 0 : s \text{ accepts } g \text{ at } W\}$. Thus $v_s(g)$ indicates the minimal wealth needed in order to accept g when using s. Let us now look at a class of strategies S* defined over the set of all “simple” gambles, that satisfy the F&H rejection rule in the following manner: each $s^* \in S^*$ corresponds to a unique probability-dependent threshold Th(p), such that
$$v_{s^*}(g) = \begin{cases} \infty, & \text{if } G/L \le Th(p) \\ R + \varepsilon, & \text{if } G/L > Th(p) \end{cases}$$
where ε is some positive number, arbitrarily small. In words, a strategy s* accepts a gamble g if and only if the gain-to-loss ratio is above some threshold Th(p) and the wealth is greater than the riskiness measure R. This decision rule can be written in a simpler form. Accept a gamble g if and only if:
(1) G/L > Th(p), AND
(2) W > R.
Th(p) is a non-decreasing function of the loss probability p; this implies that the greater the probability of losing (p), the higher is the threshold for acceptance (the gain-to-loss ratio (G/L) needed to transfer a gamble from the upper group of gambles (whose $v_{s^*}(g) = \infty$) to the lower group (whose $v_{s^*}(g) = R + \varepsilon$)). Clearly, s* guarantees no bankruptcy, as $v_{s^*}(g) \ge R + \varepsilon > R$ for every gamble g. A strategy s* separates the space of gambles into two clusters: one contains gambles that are rejected at all wealth levels, and the other contains gambles that are accepted whenever $W > R$. This clustering of gambles suggests that the class S* can be described as a family of “compensation” strategies, where each strategy s* is characterized by a compensation level, indicated by the threshold Th(p), such that a gamble that offers a gain-to-loss ratio (G/L) below this level is always rejected (the reward does not compensate for the risk taken), while a gamble that offers a gain-to-loss ratio above this level is accepted as long as no bankruptcy is guaranteed (i.e., as long as $W > R$). It is immediately evident that “compensation” strategies incorporate two plausible attributes. First, they capture a naïve but powerful “rule of thumb” according to which probabilities translate into risk and the gain-to-loss ratio translates into compensation, while Th(p) functions as a “price tag” of the required compensation for each level of risk. Second, risk aversion becomes more dominant as the stakes grow, as reflected by the fact that it is possible to scale up any gamble with a “compensating” G/L (multiply both G and L by λ>1) until its riskiness R surpasses the wealth W, and as a result the gamble is rejected. The first property is a characteristic of a CARA utility function,4 while the second is supported both theoretically and empirically by numerous studies, such as Holt & Laury (2002), Smith and Walker (1993), Palacios-Huerta and Serrano (2006), and Fullenkamp et al. (2003, p. 219). The focus on the class of strategies S* is motivated by their high descriptive ability, which will become much more apparent in Chapter 5, where the predictions of an s*-based decision rule are compared to the results of cumulative prospect theory.
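To make the structure of a “compensation” strategy concrete, the short sketch below (ours, not taken from the paper) codes the two-part test, with the F&H riskiness obtained numerically and an arbitrary illustrative threshold function standing in for Th(p); any non-decreasing Th(p) would define a member of S* in the same way.

import math

def riskiness_fh(gain, loss, p_loss):
    """Bisection solution of E[ln(1 + g/R)] = 0 for a two-outcome gamble."""
    f = lambda R: (1 - p_loss) * math.log(1 + gain / R) + p_loss * math.log(1 - loss / R)
    lo, hi = loss * (1 + 1e-12), 2.0 * loss
    while f(hi) <= 0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def illustrative_threshold(p_loss):
    """A hypothetical non-decreasing Th(p): here, the reciprocal of the gain
    probability (a rule of thumb the paper itself mentions later on)."""
    return 1.0 / (1.0 - p_loss)

def s_star_accepts(wealth, gain, loss, p_loss, threshold=illustrative_threshold):
    """An s* strategy: accept iff G/L exceeds Th(p) and wealth exceeds the riskiness."""
    return gain / loss > threshold(p_loss) and wealth > riskiness_fh(gain, loss, p_loss)

print(s_star_accepts(wealth=1000, gain=250, loss=100, p_loss=0.5))  # True
print(s_star_accepts(wealth=150, gain=250, loss=100, p_loss=0.5))   # False: wealth below riskiness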
3. Approximating riskiness
3.1 A Taylor-based approximation of R
In this section we take the first step from normative to positive decision theory. The
motivation for seeking riskiness-based decision making in real life stems partly from
evolutionary considerations: people whose behavior violates the F&H rejection rule are
prone to bankruptcy and therefore perhaps at an evolutionary disadvantage as bankruptcy
may lead to no living offspring. The adoption of decision rules (such as the F&H
rejection rule) through an evolutionary process is guided partly by their direct fitness-enhancing character and partly by their lack of complexity. The decision maker may not
be aware of using these rules, but he is nevertheless guided by them if his mind is capable
of being “programmed” to follow them. Such rules may be regarded as instances of rule
rationality, a term coined by Robert Aumann to describe unconscious “rules of thumb”
that work in general and evolve like genes: if they work well, they are fruitful and
multiply; if they work poorly, they become rare and eventually extinct (Aumann, 1997).
We just saw that “compensation level”-based strategies may guarantee no bankruptcy,
thus enhance fitness, but in order to be regarded rule rational they must not be too
complex. Let us examine the feasibility of an evolutionary process leading to adoption of
a compensation level-based strategy s* for decision making. Finding an arbitrary
monotonic threshold function Th(p) with which to compare the gain-to-loss ratio is an
easy mental task (e.g., one can demand, as one’s compensation level, a ratio G/L equal to
4 In the sense that a person with CARA utility evaluates the gamble regardless of his own wealth: he either accepts it at all wealth levels or rejects it at all wealth levels. We describe this behavior as putting a “price tag” on the risk involved in a gamble, and judging the payoffs relative to this “price tag,” while ignoring wealth considerations.
the reciprocal of the gain probability, so that G:L equals 2:1 if the chance to win is 1/2,
G:L equals 4:1 if the chance to win is 1/4, and so on). However, how can one compare
his wealth W to $R_{FH}(g)$, when the evaluation of the latter involves an implicit
function whose treatment is beyond the ability of an ordinary person, even if this
treatment does not involve solving equations but rather following an “evolutionarily
programmed” rule? It may prove a task too complicated for the human mind.
Nevertheless, it is reasonable to assume that people who used decision rules that were
based on a more accurate estimation of R had an evolutionary advantage over their
peers.5 Thus, there may have been an evolutionary pressure to approximate R, the implicit
expression of riskiness given by Foster and Hart. In a way, approximating R may be
regarded as a form of bounded rationality, that is to say, a method used by a brain with
limited calculation abilities to reach what Herbert Simon coined “satisficing” results
(Herbert Simon, 1972, 1982). In order to find a simple explicit expression that
approximates the F&H measure of riskiness $R_{FH}(g)$ and can be used as its substitute, we
turn now to examine the mathematical aspects of the implicit formulation of R.
For the F&H measure of riskiness R we have $E\left[\ln\left(1+\frac{g}{R}\right)\right]=0$. A Taylor series expansion of the expression inside the expectation is
$$\ln\left(1+\frac{g}{R}\right) = \frac{g}{R} - \frac{1}{2}\left(\frac{g}{R}\right)^{2} + \frac{1}{3}\left(\frac{g}{R}\right)^{3} - \frac{1}{4}\left(\frac{g}{R}\right)^{4} + \cdots$$
Approximating this infinite series using the first two terms yields $E\left[\frac{g}{R} - \frac{g^{2}}{2R^{2}}\right] = 0$, from which a new approximated measure of riskiness can be derived, namely
$$R(g) := \frac{E[g^{2}]}{2E[g]}.$$
R(g) is a good approximation as long as g/R is small enough (uniformly). From a bounded rationality perspective, R(g) has an explicit expression that may substitute for the implicit F&H measure of riskiness, while giving the requested “satisficing” results. In addition, since R(g) is the ratio of the second moment $E[g^{2}]$ to twice the mean $E[g]$ (a dispersion-to-expectation ratio), it reflects the intuition-driven convention that riskiness should rise with increased dispersion and fall with increased expectation (this may recall the logic behind the Sharpe ratio). Therefore, if there was an evolutionary pressure to base decision rules on $R_{FH}(g)$, it is possible that R(g) was used instead, thanks to its relative simplicity.

5 Accurate estimation of R is required in order to avoid bankruptcy.

Another interesting feature of R(g) is that it is also the Taylor-based approximation to the measure of riskiness of Aumann and Serrano (henceforth the A&S measure of riskiness): the A&S measure of riskiness solves the equation $E\left[1 - e^{-g/R}\right] = 0$, while
$$1 - e^{-x} = x - \frac{1}{2}x^{2} + \frac{1}{6}x^{3} - \frac{1}{24}x^{4} + \cdots$$
Substituting g/R for x and again taking only the first two terms of the Taylor series, we get R(g), just as was shown for the F&H measure of riskiness. Thus, even if the A&S measure has certain advantages over the F&H measure as a normative guideline, R(g) nevertheless approximates it just as well.
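As a quick numerical check (ours; the gamble is an arbitrary example), the single explicit expression R(g) = E[g²]/2E[g] can be placed next to the two exact measures it approximates, each obtained by bisection on its defining equation.

import math

def bisect(f, lo, hi, iters=200):
    """Root of f on [lo, hi], assuming f(lo) < 0 < f(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def exact_fh(G, L, p):
    # Foster-Hart: (1-p) ln(1 + G/R) + p ln(1 - L/R) = 0, root above L.
    f = lambda R: (1 - p) * math.log(1 + G / R) + p * math.log(1 - L / R)
    hi = 2.0 * L
    while f(hi) <= 0:
        hi *= 2.0
    return bisect(f, L * (1 + 1e-12), hi)

def exact_as(G, L, p):
    # Aumann-Serrano: (1-p)(1 - exp(-G/R)) + p(1 - exp(L/R)) = 0.
    f = lambda R: (1 - p) * (1 - math.exp(-G / R)) + p * (1 - math.exp(L / R))
    lo, hi = L, 2.0 * L
    while f(lo) >= 0:
        lo *= 0.5
    while f(hi) <= 0:
        hi *= 2.0
    return bisect(f, lo, hi)

def taylor_r(G, L, p):
    # R(g) = E[g^2] / (2 E[g]), the shared second-order approximation.
    return ((1 - p) * G**2 + p * L**2) / (2 * ((1 - p) * G - p * L))

# For the gamble losing 100 or winning 200 with equal probabilities this prints
# approximately 200 (F&H), 208 (A&S), and 250 (the approximation R(g)).
print(exact_fh(200.0, 100.0, 0.5), exact_as(200.0, 100.0, 0.5), taylor_r(200.0, 100.0, 0.5))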
Holding L and p constant, $R_{FH}(g)$ declines monotonically toward the loss L as the gain G grows to infinity. For small values of G the ratio g/R is also small, and therefore R(g) behaves very similarly to $R_{FH}(g)$. However, as G keeps growing, g/R grows too, and R(g) gradually draws away from $R_{FH}(g)$. Unlike $R_{FH}(g)$, its approximation R(g) does have a minimum point as a function of G (while L and p are fixed). Define the gain at this minimum point as G'. Then obviously R(g) at the gain G' is min R(g). Since beyond G' R(g) gradually rises, while $R_{FH}(g)$ keeps decreasing, we get that for values of G above G', min R(g) is a better approximation to $R_{FH}(g)$ than R(g) itself. This fact is demonstrated in Figure 2, and will be used now for a refinement of the decision rule.
Figure 2: Foster-Hart riskiness measure vs. its approximation R(g).
The graph illustrates the relations between $R_{FH}(g)$, R(g), min R(g), and G, for L=100 and p=0.5. The X-axis is G. The blue line indicates $R_{FH}(g)$, which decreases monotonically with G. The orange line is R(g), which reaches its minimum at the point G', where G (green line) crosses it. The purple line shows min R(g). It can be noticed that beyond the crossing point G', min R(g) approximates $R_{FH}(g)$ better than R(g) does.
3.2 From approximating riskiness to defining a decision rule
The fact that min R(g) approximates $R_{FH}(g)$ better than R(g) does for G>G' indicates that R(g) can be replaced with an improved approximation to $R_{FH}(g)$ for all G. Define R* as follows:
$$R^{*} = \begin{cases} R(g), & \text{if } G \le G' \\ \min R(g), & \text{if } G > G' \end{cases}$$
R* is therefore preferred to R(g) as an approximation of $R_{FH}(g)$ for all values of G. This is demonstrated graphically in Figure 3.
Figure 3: Foster-Hart riskiness measure vs. its approximation R*.
The graph illustrates the relations between $R_{FH}(g)$, R*, and G, for L=100 and p=0.5. The orange line is R*, which is composed of R(g) up to the point where it reaches its minimum, and a horizontal continuation from this point on (indicating the minimal value of R(g)).
In addition to reducing computational complexity, the substitution of the F&H riskiness
measure with its approximation R* achieves two other goals. First, R* smoothens the
behavior of the riskiness measure for small loss probabilities. In the original F&H
riskiness measure, the loss L is the lower bound on $R_{FH}(g)$. Thus, for example, the
gamble A, which yields -100 with 1% probability and 10,000 with 99% probability, has
the same riskiness as gamble B, which yields -100 with 5% probability and 100 with 95%
probability; $R_{FH}(g)$ for both is 100 (which is the absolute value of the loss L), though
the former is definitely more attractive than the latter, regardless of one’s attitude to risk.
Allowing the riskiness measure to be lower than L captures the fact that people may be
willing to accept such low-risk gambles as gamble A even when their own capital is
lower than the possible loss (i.e., when W<L). This relaxation of the lower bound
restriction is achieved by substituting $R_{FH}(g)$ with R*. Second, and more important for
this paper, the substitution of $R_{FH}(g)$ with R* enables us to set a non-arbitrary compensation level Th(p) for the strategy s*, based on R* itself. As mentioned, the compensation level for a strategy s* is a non-decreasing function of the loss probability, Th(p), which represents the threshold to which the gain-to-loss ratio of the gamble is compared. In order to show how R* can dictate a specific Th(p) we need to examine the properties of G', the point where R(g) reaches its minimum.

R(g) can be written explicitly for a “simple” gamble in the following way:
$$R(g) = \frac{E[g^{2}]}{2E[g]} = \frac{1}{2}\cdot\frac{(1-p)G^{2} + pL^{2}}{(1-p)G - pL}.$$
By differentiating this expression of R(g) with respect to G, we find that the gain that minimizes R(g) is
$$G' = L\cdot\frac{\sqrt{p}}{1-\sqrt{p}}.$$
Since at this point the gain G' equals the riskiness R(g) (see Appendix B for a proof), we get
$$(*)\qquad G' = L\cdot\frac{\sqrt{p}}{1-\sqrt{p}} = R(g)\big|_{G=G'} = \min R(g).$$
Now define $Th(p) := \frac{\sqrt{p}}{1-\sqrt{p}}$ (a non-decreasing monotonic function of p). Hence, G' can be written as $L\cdot Th(p)$, and a compensation-level strategy s* with $Th(p) = \frac{\sqrt{p}}{1-\sqrt{p}}$ accepts a gamble if and only if the gain G is greater than G' (equivalent to G/L > Th(p)) and the wealth W is greater than the riskiness.
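The two claimed properties of G' are easy to verify numerically; the sketch below (ours, with L and p chosen arbitrarily) scans R(g) over a grid of gains and compares the grid minimizer with the closed-form point L·√p/(1−√p), at which R(g) equals the gain itself.

import math

def r_approx(G, L, p):
    """R(g) = E[g^2] / (2 E[g]) for the simple gamble (-L, G; p, 1-p)."""
    return ((1 - p) * G**2 + p * L**2) / (2 * ((1 - p) * G - p * L))

L, p = 100.0, 0.5
g_prime = L * math.sqrt(p) / (1 - math.sqrt(p))      # closed-form minimizer, about 241.4

# Scan gains above the break-even point p*L/(1-p), where the expectation is positive.
grid = [L * p / (1 - p) + 0.5 * k for k in range(1, 2000)]
grid_minimizer = min(grid, key=lambda G: r_approx(G, L, p))

print(g_prime, r_approx(g_prime, L, p))   # both about 241.4: at G' the riskiness equals the gain
print(grid_minimizer)                     # the grid minimizer lands next to G'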
The requirement that the gain be greater than G' follows from the significance of G' in the definition of R*: if, by using R*, $R_{FH}(g)$ can be assessed only approximately, risk aversion might take the form of minimizing R* by accepting gambles only in the horizontal part of R*, where riskiness is constant and at its minimum (this can also be described as minimizing a second-order approximation of the accurate riskiness $R_{FH}(g)$). Since the value of G for which R(g) reaches its minimum (and the horizontal part of R* begins) is G', minimizing R* is equivalent to demanding that the gain G be greater than G'. In conclusion, the compensation level can be based upon min R*, such that by setting $Th(p) = \frac{\sqrt{p}}{1-\sqrt{p}}$ as a lower threshold on G/L, only gambles whose gain-to-loss ratio minimizes R* are considered.
3.3 A final refinement of the decision rule
In Section 2.2, we defined the class of strategies S*, such that each $s^* \in S^*$ has a unique probability-dependent threshold Th(p), for which it accepts a gamble g if and only if two conditions are satisfied:
$$\text{(I)}\quad W > R_{FH}(g) \qquad\qquad \text{(II)}\quad \frac{G}{L} > Th(p)$$
To adjust such a strategy to the bounded capability of the human mind, “evolution” had to approximate $R_{FH}(g)$. As we showed, a natural candidate for that approximation is R*. We were also led to concentrating on one specific s* that followed from using R* instead of $R_{FH}(g)$. When we update the general characteristic conditions of S* strategies to those of the specific R*-based strategy s*, we see that acceptance of g is dependent on satisfying two conditions:
$$\text{(I)}\quad W > R^{*} \qquad\qquad \text{(II)}\quad \frac{G}{L} > \frac{\sqrt{p}}{1-\sqrt{p}}$$
where R* is defined as
$$R^{*} = \begin{cases} R(g), & \text{if } G \le G' \\ \min R(g), & \text{if } G > G' \end{cases}$$
Bearing in mind that $G > G'$ and $G/L > \frac{\sqrt{p}}{1-\sqrt{p}}$ are equivalent conditions, we notice that condition (II) restricts R* to the region where it equals min R(g). This allows for a slight manipulation in the writing of the decision rule, such that
$$\text{(I)}\quad W > \min R(g) \qquad\qquad \text{(II)}\quad \frac{G}{L} > \frac{\sqrt{p}}{1-\sqrt{p}}$$
And finally, recalling from (*) that $G' = L\cdot\frac{\sqrt{p}}{1-\sqrt{p}} = \min R(g)$, we can write the decision rule in its final form. Define $Th(p) = \frac{\sqrt{p}}{1-\sqrt{p}}$. The compensation-based decision rule is: accept g if and only if $W > Th(p)\cdot L$ AND $G > Th(p)\cdot L$, that is, $Th(p)\cdot L < \min(W, G)$. Moreover, this decision rule is homogeneous; i.e., if g is accepted at wealth W, then λg is accepted at λW for every λ > 0.⁶
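The rule just stated is simple enough to code in a few lines; the sketch below (ours) implements it and checks the homogeneity property on an arbitrary example.

import math

def threshold(p_loss):
    """Th(p) = sqrt(p) / (1 - sqrt(p)), the required compensation rate."""
    root_p = math.sqrt(p_loss)
    return root_p / (1.0 - root_p)

def accept(wealth, gain, loss, p_loss):
    """Compensation-based decision rule: accept iff Th(p) * L < min(W, G)."""
    return threshold(p_loss) * loss < min(wealth, gain)

# A 50-50 gamble requires both the gain and the wealth to exceed about 2.414 times the loss.
print(accept(wealth=500, gain=250, loss=100, p_loss=0.5))   # True
print(accept(wealth=500, gain=230, loss=100, p_loss=0.5))   # False: the gain is too small

# Homogeneity: scaling the gamble and the wealth by the same factor leaves the decision unchanged.
lam = 7.0
print(accept(500, 250, 100, 0.5) == accept(lam * 500, lam * 250, lam * 100, 0.5))  # True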
Figure 4: F&H riskiness measure, its approximation R*, and the gain condition in the decision rule (L=100 and p=0.5).
The condition on the gain in the decision rule can be written equivalently as G > G' ≡ G > R(g) ≡ G > min R(g) ≡ G > R* ≡ G > L·Th(p) ≡ {R* | R* = min R(g)}. The graph illustrates the area of acceptance according to this condition, while presenting its varied formulations.
4. Fair compensation theory
Fair compensation theory (compensation theory for short) is the name we suggest for the decision theory described in this paper, according to which people follow the two-part decision rule stated above, while doing their best in terms of bounded rationality to avoid bankruptcy and guarantee growth of wealth.

6 If g is accepted at wealth W, then it follows that:
1) $W > R(g) = \frac{E[g^{2}]}{2E[g]}$ implies $\lambda W > \lambda\cdot\frac{E[g^{2}]}{2E[g]} = \frac{E[(\lambda g)^{2}]}{2E[\lambda g]} = R(\lambda g)$;
2) $G > R(g) = \frac{E[g^{2}]}{2E[g]}$ implies $\lambda G > \lambda\cdot\frac{E[g^{2}]}{2E[g]} = \frac{E[(\lambda g)^{2}]}{2E[\lambda g]} = R(\lambda g)$;
and therefore λg is accepted at wealth λW.
The critical gain G' is the fair compensation for making a decision at the risk of
losing L with probability p, such that the decision maker is indifferent between accepting
and rejecting the gamble (L,G';p,1-p). Our reason for using the term “fair compensation”
has to do with the way we often interpret risky (as well as non-risky) offers. Once we
have a notion of the gain needed to induce us to accept a certain risk (even if the notion is
merely an “evolutionarily programmed” hunch), one may interpret offers of lower gains
as unfair or as uncompensating for the risk involved (unfairness is a common motive in
the literature for rejecting otherwise attractive offers, e.g., in ultimatum games; see Fehr
& Schmidt (1999), Fehr & Gächter (2000), Rabin (1993), Kahneman, Knetsch & Thaler
(1986)).
In the following sections we describe the logic behind the theory, and in particular how
the decision rule can be followed by boundedly rational agents, how this rule can be
interpreted, and how it relates to the demand for a positive expected payoff.
4.1 Following the decision rule
The whole procedure of approximating the original riskiness measure suggested by
Foster and Hart was driven by the motivation to find how riskiness can be assessed by
everyone, and to make sure that the s*-based decision rule can be followed (before
claiming that empirically it is followed). Was this goal achieved? Let us start with the
condition $G > Th(p)\cdot L$, where $Th(p) = \frac{\sqrt{p}}{1-\sqrt{p}}$. Note that the condition on G is linear
with respect to the loss L. This means that we expect the decision maker to be able to
assess the gamble’s compensation rate G/L (clearly an easy task) and the required
compensation level Th(p), as a function of p. Assessing Th(p) does not require solving
root equations, but merely acquiring a sense of the size of Th(p) as a function of p. For
example, the compensation level Th(p) for a 50-50 gamble over L and some G is 2.41,
and so “feeling” that a fair compensation rate in a coin flip is a bit below 2.5 is
enough to guarantee that the decision maker follows the rule quite closely: every gamble
offering a smaller compensation rate G/L will be rejected, and every gamble offering a
higher compensation rate may be accepted.
The descriptive capabilities of the decision rule are analyzed in Chapter 5, but we
will note here that this number 2.41 is inside the range of the loss aversion coefficient
estimations of Tversky and Kahneman: between 2 and 2.5 (Tversky & Kahneman, 1991),
and is backed up empirically by the ratio that characterizes the asymmetric effects of
price increases and decreases (Putler, 1988; Kalwani, Yim, Rinne, and Sugita, 1990). In
a similar manner, by having merely the sense of the magnitude of compensation rate
needed for only a few loss probabilities, one can follow the decision rule quite closely.
Like the value function in prospect theory, Th(p) is a function that emphasizes losses
more than gains. This “loss aversion” attribute of the fair compensation rate is illustrated
in Table 1 for a few loss probabilities:
p(Loss) G'/L
0.5
2.4
0.9
18.5
0.75
6.5
0.25
1
0.1
0.46
Table 1: Compensation rate as a function of loss probability
With such a table or a slightly more detailed one encoded in our brain, we are guaranteed
a small margin of error when following the condition $G > Th(p)\cdot L$.
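The entries of Table 1 follow directly from Th(p); the short sketch below (ours) reproduces them up to the rounding used in the table.

from math import sqrt

# Fair compensation rate G'/L = sqrt(p) / (1 - sqrt(p)) for the probabilities of Table 1.
for p in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(p, round(sqrt(p) / (1 - sqrt(p)), 2))
# Prints 0.46, 1.0, 2.41, 6.46 and 18.49, matching Table 1 after rounding.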
The second condition, $W > Th(p)\cdot L$, is very similar in structure, and so can be followed in a very similar way, namely, by replacing G with W and demanding a “wealth-to-loss” ratio W/L greater than the compensation level Th(p) (which is evaluated in any
case). Though less intuitive, this “wealth” condition is as easy to follow as the “gain”
condition.
4.2 Alternative interpretations of the decision rule
In Chapter 5 we will use Cumulative Prospect Theory (Tversky & Kahneman, 1992) as a
reference for observed decision making under risk to show that decision making can be
described quite accurately with the decision rule suggested above (accept a gamble if and
only if $Th(p)\cdot L < \min(W, G)$). As mentioned, this decision rule is based on a strategy s*
that satisfies the necessary condition for no bankruptcy (W>R for every accepted
gamble), except that R* is used instead of the original $R_{FH}(g)$ to reflect bounded rationality. Therefore, if the accuracy of $R_{FH}(g)$ is not too crucial for this condition, then
the principal goal of this paper—finding a normative rule that fits observed behavior and
thus can serve as an alternative explanation to the heuristics-based explanation of
Prospect Theory—is already achieved. Nevertheless, though one element in the decision
rule is directly connected to Foster and Hart’s normative assertions (which require W>R),
the other element (which requires $G > Th(p)\cdot L$) may appear out of place. Therefore, we
devote the next section to suggesting some interpretations of this part of the rule, in the
hopes that at least one of them will shed light on the true reason for its evolution.
4.2.1 Gain and riskiness
To better understand the “gain” condition $G > Th(p)\cdot L$, we need to return to two earlier
and equivalent definitions of this condition: (I) R* = min R(g) and (II) G>R* (Figure 4).
The fact that condition (I) requires that only gambles with a riskiness R* that is constant
and at a minimum be accepted suggests a very simplistic and straightforward form of risk
aversion. We shall refer to it as the minimum riskiness condition. On the other hand,
condition (II) is based on the fact that G' is the only point where G=R* (Figure 3), and for
greater values of G the gain remains above the riskiness, i.e., G>R*. Viewed this way,
condition (II) implies that the gain is compared to the riskiness, and that acceptance
follows from the fact that the gain is greater than the riskiness measure. We can think of
condition (II) as a simplistic compensation-based rule, where a gamble’s riskiness R*
should be compensated by a gain greater than R*. We shall refer to it as the gain greater
than riskiness condition.
Recall that Foster and Hart state that it is possible but not necessary to accept a gamble
when W>R, whereas it is detrimental to accept a gamble when W<R, since it guarantees
bankruptcy in the long run. As a result, for each level of wealth W, gambles may be
partitioned into affordable (i.e., acceptable) and unaffordable risks (i.e., leading to
bankruptcy if accepted). We will now try to demonstrate how an intelligent use of the
riskiness measure may lessen the danger of bankruptcy and even lead to enhanced
growth. To do so, we will suggest an evolutionarily based intuition to the proposed
decision rule. Evolutionary considerations are often used to explain heuristics of
judgment and decision making. This kind of reasoning is based on the assertion that
human behavior was shaped, in the long process of evolution, for the most part while
human beings lived in nomadic hunting and gathering tribes, under conditions that
merely enabled subsistence. Thus, a behavior that makes no apparent sense today can
sometimes be understood by the advantage in fitness it may have conferred to hunting
and gathering societies. In these societies, consumption reduces to its narrow conception
of consumption of calories, and the payoff currency that sets preferences is calories rather
than money. The “reserve wealth” from the Foster–Hart model may then be interpreted as
the calories at your disposal (in your stomach or stored in your cave), and the gamble
may be the decision whether to exert energy on a risky venture, e.g., going on a hunting
expedition (or wandering to a new location, etc.). One interesting observation can already
be made: while in the Foster–Hart model having a large initial wealth translates to the
possible acceptance of gambles with very small positive expectations (since both W and
R are large), from an evolutionary point of view it nonetheless makes more sense to avoid
hunting expeditions with only small positive expectation if you already have many
calories at your disposal. So the fact that you can afford to take the risk does not mean
that you should go ahead and take it. But what affordable risks are worth taking? Here
we wish to demonstrate the logic of the “minimum riskiness” and “gain greater than
riskiness” conditions as guidelines for decision making. Under the “minimum riskiness”
condition, one goes on a hunting expedition only if the possible gain is at least the gain
that minimizes the riskiness. Minimizing the riskiness is therefore a heuristic that
dispenses with the affordable but unnecessary ventures, including opportunities to
marginally improve an already satisfying nutrition. Under the “gain greater than
riskiness” condition, one goes hunting only if the possible gain is greater than the
riskiness. Remember that the rejection rule (reject if R>W) implies that one should reject
gambles one cannot afford, where the measure for affordability is the riskiness R. Hence,
the rejection rule implies that riskiness serves as a measure of the amount of calories that
the hunter stands to lose in a hunting expedition (so that R>W means that the amount of
calories that the hunter stands to lose on an expedition is greater than the amount at his
disposal). Therefore, “gain greater than riskiness” can be understood to mean that the
hunter should go on the hunting expedition only if the hunt holds the promise of gaining
more calories than those he will lose by going out (and the same is true of wandering to a
new location). More generally, the no-bankruptcy condition W>R instructs us to keep the
level of riskiness below the initial wealth, and the “gain greater than riskiness” condition
instructs us to keep the level of riskiness below the possible gain. Taken together, these
conditions tell us to avoid taking actions whose riskiness is either greater than what we
already have or greater than what we can expect to achieve by taking them.
4.2.2 The mechanism interpretation
Unlike the previous two interpretations, the mechanism interpretation to be presented
here seeks to explain the plausibility of the condition not through its direct consequences,
but rather as an indirect mechanism for avoiding bankruptcy, without any need to
compare wealth to riskiness. For this interpretation, the condition may be written simply
as G>G', where the significance of the point G' lies in the fact that it separates R* into
two parts as follows:
$$R^{*} = \begin{cases} R(g), & \text{if } G \le G' \\ \min R(g), & \text{if } G > G' \end{cases}$$
While assessing
$$R(g) = \frac{E[g^{2}]}{2E[g]} = \frac{1}{2}\cdot\frac{(1-p)G^{2} + pL^{2}}{(1-p)G - pL}$$
is too complicated a task for a boundedly rational agent, we showed in Section 4.1 that assessing $\min R(g) = L\cdot\frac{\sqrt{p}}{1-\sqrt{p}}$ (see (*) in Section 3.2) is not. Therefore, assessing riskiness using R* is almost impossible when G≤G', while quite feasible when G>G'. Since a good assessment of R* is crucial to avoid bankruptcy (by following the “wealth” condition W>R*), a
psychological mechanism that rejects “uncompensating” gambles (i.e., gambles with
G≤G') can guarantee that only gambles whose riskiness measure is estimable will be
considered, which enables the decision maker to follow the “wealth” condition required
for avoiding bankruptcy.
A psychological mechanism can take the form of emotions or expectations, while a
physical mechanism takes the form of a chemically based stimulation. Two principal
physical mechanisms are hunger and sexual attraction: hunger is a mechanism for
guaranteeing a supply of energy to our body through food consumption; sexual attraction
is a mechanism for the genes’ reproduction through intercourse. Feeling insulted, or demanding fairness (as in the Ultimatum Game), are psychological mechanisms for safeguarding
against a reputation that one is a “sucker” (Aumann, 1997). In a similar way, we suggest
that seeking compensation for risk is a mechanism for preventing bankruptcy: demanding
a compensation rate G/L greater than the threshold Th(p) leads to considering only
gambles with G>G', whose riskiness R* is estimable; thus wealth can be compared to
riskiness and bankruptcy can be easily avoided. This mechanism of “seeking
compensation” is well known to investors: every investment bears a risk, but only
investments where the risk of failure is believed to be well compensated for by the
expected revenues are eventually taken.
4.3 Compensation as an expected payoff
4.3.1 The p-root transformation
Let us now consider only the “gain” condition of the decision rule, namely, $G > L\cdot\frac{\sqrt{p}}{1-\sqrt{p}}$. G', the value of G that makes the decision maker indifferent to the gamble g, solves the equation $(1-\sqrt{p})\,G' - \sqrt{p}\,L = 0$. But this is exactly the point of zero expected payoff of a transformed gamble, in which $p \mapsto \sqrt{p}$ and $1-p \mapsto 1-\sqrt{p}$. When we define this transformation as a “p-root transformation,” it follows that the decision maker’s behavior can be interpreted as demanding a positive expected payoff in the p-root transformed gamble.
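To see the equivalence concretely, the small sketch below (ours, with arbitrary example values) applies the p-root transformation to a simple gamble and checks that its expected payoff changes sign exactly at G = G' = L·√p/(1−√p).

import math

def p_root_expected_payoff(G, L, p):
    """Expected payoff of the p-root transformed gamble:
    lose L with weight sqrt(p), gain G with weight 1 - sqrt(p)."""
    return (1 - math.sqrt(p)) * G - math.sqrt(p) * L

L, p = 100.0, 0.5
g_prime = L * math.sqrt(p) / (1 - math.sqrt(p))            # about 241.4

print(abs(p_root_expected_payoff(g_prime, L, p)) < 1e-9)   # True: zero expected payoff at G'
print(p_root_expected_payoff(g_prime + 10, L, p) > 0)      # True: positive above G'
print(p_root_expected_payoff(g_prime - 10, L, p) < 0)      # True: negative below G'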
Two main attributes of judgment in prospect theory are the emphasis on losses and
the non-linear weighting of probabilities. In prospect theory these two are modeled
separately: the emphasis on losses is modeled with the parameter λ in the value function;
the non-linear weighting of probabilities, i.e., the overweighting of low probabilities and
the underweighting of high probabilities, is modeled with an inverse S-shaped weighting
function. Analyzed together, these two effects accumulate in a gamble with low
probability of loss, since both magnify the effect of this possible loss, one through
overweighting its size (compared to the gain), and the other through overweighting its
low probability of being realized. On the other hand, these two effects tend to cancel each
other out in a gamble with a high probability of loss, since the size of the loss is
overweighted while the probability of its realization is underweighted. However, since
the loss-emphasizing parameter λ is greater than 2, its effect suppresses the diminishing
effect of the high probability, i.e., a possible loss is always overemphasized, no matter
how high its probability.
The p-root transformation incorporates both effects solely through its transformation
of the probabilities. The superiority of the effect of the loss-emphasizing parameter λ
over the effect of the magnitude of the probability is reflected by the fact that the p-root
transformation always increases the loss probability and always decreases the gain
probability. The second-order effect—overweighting of low probabilities and
underweighting of high probabilities—is reflected in the intensity of its emphasis on
losses. As demonstrated in Figure 5, the p-root transformation always overweights the
probability of loss, but it does so appreciably for low probabilities and only slightly for
high ones. Similarly, it always underweights the probabilities of gain, but it does
so appreciably for high probabilities (since both the gain and the high probability are
underweighted) and only slightly for low ones.
Figure 5: p-root transformation of probabilities. The transformed probability is plotted against the original probability, with separate curves for the loss probability, the gain probability, and the non-transformed (45-degree) reference line.
This is a graphic illustration of the p-root transformation. Loss probabilities are always magnified, but the extent of magnification diminishes as the probability increases. Gain probabilities are always reduced, but the extent of reduction increases as the probability increases. This is a superposition of two effects known in Prospect Theory: loss aversion and non-linear weighting of probabilities.
As a result, the p-root transformation may substitute prospect theory’s compound
structure of a weighting function multiplied by a value function, while embodying the
same implications for decision making under risk.
4.3.2 From riskiness measure to attractiveness measure
We now suggest a characterization of an attractiveness scale of gambles based on the
expectations of their p-root transformations. Define the attractiveness of a gamble g to be
the expected payoff of the p-root transformation of g. By definition, we get zero
attractiveness for gambles toward which the median decision maker is indifferent.
Moreover, a gamble with G>G' always has a positive index of attractiveness, and a
gamble with G<G' has a negative index of attractiveness. Therefore, the “gain” condition
of the decision rule may be substituted by a demand for a positive index of attractiveness.
Incorporating attractiveness into the model of decision making, we can now talk about
each gamble’s level of attractiveness and compare all kinds of gambles, rather than
settling for a rule that can only distinguish between participation and avoidance. In so
doing, we render attractiveness an objective measure that substitutes the subjective
measure of utility. The empirical question of whether a more attractive gamble indeed has
a higher chance of being accepted by the median decision maker is left for future
investigation, as is the theoretical appeal of its properties (clearly it violates the expected
utility hypothesis, and therefore new axiomatization needs to be done to support it as an
index). Therefore, the index of attractiveness will not be used for analyzing the
predictions of compensation theory.
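As a toy illustration (ours; the gambles are arbitrary), the index is simply the p-root expected payoff, so any set of simple gambles can be ranked by it:

from math import sqrt

def attractiveness(G, L, p):
    """Index of attractiveness: expected payoff of the p-root transformed gamble."""
    return (1 - sqrt(p)) * G - sqrt(p) * L

gambles = [(300, 100, 0.5), (120, 100, 0.25), (1000, 100, 0.9)]
for G, L, p in sorted(gambles, key=lambda g: attractiveness(*g), reverse=True):
    print(G, L, p, round(attractiveness(G, L, p), 1))
# A positive index corresponds to G > G' (acceptable under the "gain" condition),
# a negative index to G < G'.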
5. Data analysis
Tversky and Kahneman present cumulative prospect theory as “a new version of prospect
theory that… allows different weighting functions for gains and for losses.” They further
state that “the key elements of this theory are 1) a value function that is concave for
gains, convex for losses, and steeper for losses than for gains, and 2) a nonlinear
transformation of the probability scale, which overweights small probabilities and
underweights moderate and high probabilities” (Tversky & Kahneman, 1992). The value
function and weighting functions are the following:
$$v(x) = \begin{cases} x^{\alpha}, & x \ge 0 \\ -\lambda(-x)^{\beta}, & x < 0 \end{cases}
\qquad
w^{+}(p) = \frac{p^{\gamma}}{\left(p^{\gamma}+(1-p)^{\gamma}\right)^{1/\gamma}},
\qquad
w^{-}(p) = \frac{p^{\delta}}{\left(p^{\delta}+(1-p)^{\delta}\right)^{1/\delta}}$$
where w⁺ is the weighting function for gains and w⁻ the weighting function for losses.
Their theory is backed up by experimental data gathered in the lab. From the
experimental results they compute the numerical values for the parameters of the
functions: α = β = 0.88, λ = 2.25, γ = 0.61, and δ = 0.69 (Tversky & Kahneman, 1992).
The fact that the parameters α and β of the value function are found to be equal is
significant to this paper. In fair compensation theory, the equation G/L=Th(p), which
describes the indifference point of the “gain” condition in the decision rule, is
characterized by a property of “scale independence”; i.e., for given probabilities to gain
or to lose, only the ratio G/L matters, so that multiplying both G and L by the same
constant has no effect. In cumulative prospect theory, the indifference point can be
vG   w (1  p)  v L   w  p   0 . Substituting the
derived from the equation
expressions for the value function and weighting functions in this equation, and replacing
β with α (since they are equal), we get the following equation for the indifference point:




G 
p
p   1  p 
  

L  1  p 

p  1  p 



1



.
1 
 

1
We shall refer to it as the “indifference point” representation of prospect theory. As can
be easily seen, the indifference point in prospect theory bears the same “scale
independence” property as the indifference point of the decision rule in fair compensation
theory. This makes them easy to compare, since the comparison is independent of the
scale of gains and losses: for each pair of win-lose probabilities (characterized solely
by the value of p), the G/L ratio that makes the decision maker indifferent to the gamble
according to one theory can be compared to the G/L ratio that makes him indifferent
according to the other.
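The comparison summarized in Table 2 below can be reproduced directly from the two indifference conditions; the sketch below (ours) plugs in the parameter values quoted above and prints the gain required for indifference to a $100 loss under both theories.

def cpt_indifference_ratio(p, alpha=0.88, lam=2.25, gamma=0.61, delta=0.69):
    """G/L at indifference under cumulative prospect theory (with alpha = beta)."""
    w_minus = p**delta / (p**delta + (1 - p)**delta) ** (1 / delta)
    w_plus = (1 - p)**gamma / ((1 - p)**gamma + p**gamma) ** (1 / gamma)
    return (lam * w_minus / w_plus) ** (1 / alpha)

def fair_compensation_ratio(p):
    """G/L at indifference under fair compensation theory: sqrt(p) / (1 - sqrt(p))."""
    return p**0.5 / (1 - p**0.5)

loss = 100.0
for p in (0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99):
    print(p, round(loss * cpt_indifference_ratio(p), 1), round(loss * fair_compensation_ratio(p), 1))
# The two columns reproduce the "Prospect Theory" and "Fair Compensation Theory"
# rows of Table 2 (for example, about 274 and 241 at p = 0.5).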
The “wealth” condition of the decision rule, i.e., W/L>Th(p), cannot be compared to
the results of prospect theory since the wealth of the subjects in the experiments is
unknown. Nevertheless, as analyzed in Appendix A, it can be assumed that the wealth
condition was not violated by the vast majority of the subjects (because the condition was
satisfied for all gambles for most subjects), and so the wealth condition is redundant for
the comparative analysis presented here. A broader discussion of wealth considerations in
analyzing prospect theory results appears in the Conclusion.
We are using the “indifference point” representation with the explicit numerical
values of the parameters to get prospect theory’s estimations on decision making, and
comparing them to the “gain” condition of the decision rule in fair compensation theory.
Both theories are further compared to estimates based on the F&H and the A&S measure
of riskiness. These two measures are both monotonic w.r.t. first-order stochastic
dominance (Section 6.3) and therefore have no point of minimum riskiness for a given
possible loss (which is a desirable property for a measure of riskiness). However, each of
them has a unique point where riskiness equals the gain (for a “simple” gamble), and this equality point is used for the comparison. The purpose of including estimates based on these riskiness measures, despite the fact that none of them attempts to suggest a wealth-independent indifference point as a function of the ratio G/L, is to create a scale that
enables us to notice just how close the predictions of fair compensation theory are to
those of prospect theory.
The usual graphic representation of prospect theory is separated into a value function that
is concave for gains and convex for losses, and two inverse S-shaped weighting
functions. However, for the sake of comparison with other alternatives, we represent the
function as assigning a gain value to a vector (p,L) (a possible loss and its probability),
which indicates the gain needed to make the median decision maker indifferent to the
gamble (G,1-p;L,p). This is a graphic illustration of the “indifference point” defined
above. Furthermore, asking for the gain that makes one indifferent to a suggested gamble
is a method used by Tversky & Kahneman to derive the parameters of the functions of
prospect theory.7 In a similar manner, each of the other theories and measures provides an
estimation of the gain G that makes the median decision maker indifferent to the same
vector (p,L).8
Table 2 summarizes the results for a loss of $100, with loss probabilities in the range [0.01, 0.99].

Prob. of Loss (p)          0.01    0.05    0.1     0.25    0.5     0.75    0.9     0.95    0.99
Prob. of Gain (1-p)        0.99    0.95    0.9     0.75    0.5     0.25    0.1     0.05    0.01
The gain (G) required for indifference:
Prospect Theory            7.13    27.02   49.43   118.6   274.1   601.2   1270    2093    6329
Fair Compensation Theory   11.11   28.80   46.25   100.0   241.4   646.4   1849    3849    19850
F&H measure of riskiness   100.0   100.0   100.2   114.3   200.0   484.7   1349    2791    14333
A&S measure of riskiness   24.08   38.98   52.62   94.0    204.2   523.1   1473    3056    15715

Table 2: A cross-theory comparison of indifference points

7 “[T]he subjects made choices regarding the acceptability of a set of mixed prospects (e.g., 50% chance to lose $100 and 50% chance to win x) in which x was systematically varied” (Tversky & Kahneman, 1992).
8 As mentioned, the “estimate” of the F&H and the A&S index of riskiness is the point where R=G.
5.1 Analysis of the results
Since the ratio G/L is not dependent on the size of L for any of the theories and measures
presented, one can simply divide all values by 100 (the size of loss in the numerical
example) to obtain the compensation rate suggested by each of them. Figure 6 shows for
each of the theories and riskiness measures defined above the compensation rate (i.e., the
G/L ratio) required to make a decision maker indifferent to a gamble g as a function of
the loss probability. In addition, the compensation rate needed for a risk-neutral agent is
drawn as a reference (a risk-neutral agent will accept any gamble with a positive expected
payoff and reject any gamble with a negative expected payoff). Hence, he is indifferent to
g whenever $(1-p)\cdot G - p\cdot L = 0$, i.e., when $G/L = p/(1-p)$.⁹ It can be observed from
the graphs that a risk-neutral agent is the least sensitive of all to the risk of losing (as we
would have expected), while compensation theory describes the most risk-averse decision
maker, with very similar results to prospect theory for most of the range (see detailed
analysis below).
9 Similarly, in fair compensation theory one is indifferent to a gamble whenever the expected payoff of its p-root transformation is zero (as illustrated in Section 4.3).
Figure 6: Comparative graph of compensation rates. The compensation rate required for indifference is plotted against the loss probability p for Compensation Theory, Prospect Theory, Risk-Neutrality, Foster-Hart, and Aumann-Serrano.
5.1.1 Loss probabilities up to 0.8
For loss probabilities up to roughly 0.8, the compensation rates predicted by Compensation Theory (C-T) are similar to those predicted by Prospect Theory (P-T), outperforming even the F&H and A&S riskiness measures that it approximates (Figure 7). This is true even for very small probabilities, such as p=0.05 (0.27 in P-T, 0.29 in C-T) and p=0.01 (0.07 in P-T, 0.11 in C-T). Therefore, at least in this range of loss probabilities, the simple “fair compensation rate,” $G/L = \frac{\sqrt{p}}{1-\sqrt{p}}$, manages to capture all three
attributes of decision making in prospect theory: loss aversion, diminishing sensitivity,
and the non-linear weighting of probabilities (since all three are embedded in prospect
theory’s predictions). The fact that a decision rule based on R*, which is only an
approximation to the riskiness measures of Foster and Hart and of Aumann and Serrano,
predicts observed behavior better than the equivalent decision rules that are based on
them directly (at least according to the prospect theory standard), can be ascribed to its
greater conformity to the real-life behavior of boundedly rational agents.
Figure 7: Comparative graph of compensation rates for loss probabilities up to 0.8 (the same series as in Figure 6: Compensation Theory, Prospect Theory, Risk-Neutrality, Foster-Hart, and Aumann-Serrano).
5.1.2 The 50-50 case
Of particular interest is the point where winning and losing are equally likely. The
behavior at this point can be used to infer “pure” loss aversion, i.e., the exact size of the
gain needed to cancel out an equally likely loss. The equal likelihood case has been
investigated more than other cases, and it enabled Tversky & Kahneman to conclude that
the loss aversion coefficient is 2–2.5¹⁰: “Such estimates [of the coefficient of loss
aversion] can be obtained by observing the ratio G/L that makes an even chance to gain G
or lose L just acceptable. We have observed a ratio of just over 2:1 in several
experiments. In one gambling experiment with real payoffs, for example, a 50–50 bet to
win $25 or lose $10 was barely acceptable, yielding a ratio of 2.5:1. Similar values were
obtained from hypothetical choices regarding the acceptability of larger gambles, over a
range of several hundred dollars” (Tversky & Kahneman, 1991). In their next, 1992
article Tversky and Kahneman present cumulative prospect theory, and supply
10 Tversky & Kahneman indicate that the loss aversion coefficient could vary across dimensions, e.g., when moving from money to safety issues. Here I am only concerned with monetary payoffs, and leave these extensions to other disciplines outside the scope of this paper.
experimental results that enable them to estimate the loss aversion coefficient directly
from subjects’ indifference to participation in an equal likelihood gamble. Faced with a
50% chance to lose a predetermined amount (-$25, -$50, -$100, or -$150), the median
subject demanded a 50% chance to win $61, $101, $202, or $280, respectively, in order
to be indifferent to taking the gamble. These results suggest a loss aversion coefficient of
1.87–2.44, which matches the 2:1 to 2.5:1 ratio Tversky & Kahneman indicated in their
previous paper.
Compensation theory suggests a loss aversion coefficient of 2.41, which not only
falls within the range of the results just mentioned, but is also a better fit for these
experimental results than the value of 2.74¹¹ obtained from multiplying the value function
and weighting functions as suggested by cumulative prospect theory. It is worth noting
that the extent of loss aversion is believed to be testable in real-life markets: “The
response to changes is expected to be more intense when the changes are unfavorable
(losses) than when they are for the better” (Tversky and Kahneman, 1991). An empirical
study aimed to assess loss aversion in real markets is Putler (1988), which found that the
estimated elasticity of price increases was 2.44¹² times stronger than the estimated
elasticity of price decreases. This value is very close to the value of 2.41 estimated by
compensation theory.
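The 2.41 coefficient quoted above is simply the fair compensation rate of Section 6 evaluated at p = 0.5; a one-line check (illustrative only):

import math

# "Pure" loss aversion under compensation theory at equal likelihood: sqrt(0.5) / (1 - sqrt(0.5))
print(math.sqrt(0.5) / (1 - math.sqrt(0.5)))   # 2.4142...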
5.1.3 High loss probabilities (above 0.8)
As the probability of loss grows and approaches 1, the prediction of prospect theory
draws away from that of compensation theory, until it even crosses that of a risk-neutral
agent, when prospect theory predicts that the median decision maker will accept a gamble
with a negative expected payoff if the probability of winning is very small and the
potential gain is very large relative to the potential loss (Figure 8). In prospect theory this
phenomenon is attributed to the underweighting of high probabilities and the
overweighting of small probabilities, and is known as risk-seeking in the domain of small
11 Tversky & Kahneman estimate the loss aversion parameter λ of the value function as 2.25, but after translating probabilities to weights the loss aversion coefficient obtained is 2.74, larger than the 2–2.5 coefficient widely quoted.
12 Putler tested his model by estimating separately demand elasticities for increases and for decreases in the retail price of shell eggs, relative to a reference price estimated from a series of earlier prices. The estimated elasticities were −1.10 for price increases and −0.45 for price decreases, implying a 2.44 ratio of elasticities.
probabilities of winning large gains. Yet it would be more precise to say that the behavior
observed in the laboratory is a preference for a very risky but high-yield gamble (a small
probability of winning a large amount or nothing) over its expected monetary payoff.
This cannot be easily extrapolated to a preference for a gamble with negative expected
payoff over avoidance. In fact, these attributes describe most real-life casino-style
gambles, and the median decision maker avoids them more regularly than he participates
in them. It is unlikely that a subject in a laboratory experiment will ever be asked to take
a 99% chance of losing his own $100 for a 1% chance of winning a lot more, but it is
reasonable to expect that were he asked to do so, the median subject would be willing to
accept only a gamble with positive expectation, for a gain at least as large as required by
the risk-neutral agent (and according to compensation theory the gain would have to be
about twice as large as that required by a risk-neutral agent). Therefore, if one accepts the
claim that risk-seeking in the sense of taking gambles with negative expectation is not
characteristic of the median decision maker, then one should favor the predictions of
compensation theory over those of prospect theory for risky decisions with high loss
probabilities.¹³
13 This is not to say that there are no people who would risk $100 for a 1% chance of winning $6,300 (as predicted by prospect theory), while ignoring the negative expectation of such a gamble. However, compensation theory predicts that the median person will be willing to engage in such a risky prospect only for prizes of $20,000 and up.
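The claim that at high loss probabilities the fair compensation is about twice the risk-neutral requirement follows from the ratio of the two rates, [√p/(1 − √p)] / [p/(1 − p)] = (1 + √p)/√p, which tends to 2 as p approaches 1. A two-line check (added for illustration):

p = 0.99
risk_neutral = p / (1 - p)              # ~99
compensation = p**0.5 / (1 - p**0.5)    # ~198.5, about twice the risk-neutral rate
print(round(risk_neutral, 1), round(compensation, 1))
# For L = $100 the compensation-theory gain is ~$19,850, roughly the $20,000 figure of footnote 13.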
Figure 8: Comparative graph of compensation rates for loss probabilities above 0.8 (compensation rate as a function of p) for compensation theory, risk-neutrality, prospect theory, Foster–Hart, and Aumann–Serrano.
6. Properties of the fair compensation rate
A fair compensation is a positive value function G'(p,L), which describes the gain G' that
makes the median decision maker indifferent to the gamble g = (G', -L ; 1-p,p) for given
values of p and L. Since the fair compensation G' is linear with respect to L, it can be
replaced by the fair compensation rate G'/L, which is a function of the loss probability p
alone, such that G'/L = √p/(1 − √p).
6.1 Homogeneity
The fair compensation needed to compensate for a risk of losing L with probability p (in a
“simple” gamble) is homogeneous of degree 1 with respect to the loss L, i.e.,
G'(p, tL) = tG'(p, L). It can be shown that R* is also homogeneous of degree 1 (proved in
Appendix B). Homogeneity of degree 1 is a property shared by the riskiness measures of
Foster–Hart and Aumann–Serrano as well. Aumann and Serrano claim that homogeneity
of degree 1 embodies the cardinal nature of riskiness, since it captures the intuitive logic
that riskiness must grow at the same rate as the stakes (Aumann & Serrano, 2007). In a
similar way, homogeneity of degree 1 embodies the fundamental nature of compensation,
in the sense that doubling the possible loss requires doubling the gain needed to
compensate for it.
6.2 Globality
Some riskiness measures, such as Pratt (1964) and Arrow (1965), are criticized for their
inapplicability to gambles of large amounts; their “local” character arises from their being
based on derivatives of the utility function (Aumann & Serrano, 2007; Eisenhauer, 2006).
By contrast, compensation theory is not based on derivatives, and is therefore applicable
to all two-outcome gambles, regardless of the size of their payoffs. This makes the
theory globally applicable.
6.3 Stochastic dominance
One of the most important concepts of riskiness is stochastic dominance (see, e.g.,
Rothschild and Stiglitz, 1970; Hadar and Russell, 1969). Say that a gamble g first-order
(stochastically) dominates (FOD) g', if g ≥ g' for sure, and g > g' with some positive
probability. A measure of riskiness R is called first-order monotonic if R(g)<R(g')
whenever g FOD g'. Some of the most prominent measures of riskiness violate
monotonicity with respect to FOD, e.g., standard deviation/mean (Sharpe ratio),
variance/mean, and “value at risk” (VaR, common in portfolio risk management). R*
violates FOD in the strict sense, though it can be said to satisfy “quasi-FOD,” i.e.,
R*(g)≤R*(g') whenever g FOD g'. In addition, the fair compensation G'(p,L) is strictly
monotonic with respect to p and L, so that strict compliance of R* with stochastic
dominance is not obligatory.
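A small numerical illustration of the quasi-FOD claim (a sketch added here; the probability and payoffs are arbitrary): raising the gain of a simple gamble never increases R*.

import math

def r_star(p, G, L):
    # Piecewise R* as reconstructed in Appendix B: the Taylor-based riskiness R(g) below the
    # indifference gain G' = sqrt(p)/(1 - sqrt(p)) * L, held constant at that minimum above G'
    g_prime = math.sqrt(p) / (1 - math.sqrt(p)) * L
    if G >= g_prime:
        return g_prime
    return 0.5 * ((1 - p) * G**2 + p * L**2) / ((1 - p) * G - p * L)

p, L = 0.3, 100.0
values = [r_star(p, G, L) for G in (60, 80, 100, 120, 150, 200)]
assert all(a >= b for a, b in zip(values, values[1:]))   # weakly decreasing in G, i.e., quasi-FOD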
6.4 Decomposing compensation into its components
Let us decompose the decision rule suggested by compensation theory into its two
original components:
W > R*  and  G/L ≥ √p/(1 − √p)
Consider now a hypothetical decision process that yields this decision rule in the
following way: When faced with a gamble, or more generally, with the need to decide
under risk, two distinct decision processes, one for each condition in the decision rule, are
running in parallel in two separate parts of the brain. One process evaluates the properties
of the gamble alone (regardless of current wealth considerations) and sends a positive
signal to the decision center of the brain if and only if G/L ≥ √p/(1 − √p). The other process
evaluates the level of risk imposed on the gambler due to his current “liquidity
constraints” and sends a positive signal to the decision center of the brain if and only if
W>R*. Finally, the decision center of the brain rules in favor of the risky decision if both
signals are positive, and against it otherwise. Now examine the characteristics of these
two separate decision processes. The first one embodies the main characteristic of a
CARA decision maker: it is indifferent to wealth and cares only about the properties of
the gamble. The second process, were it using the original RFH(g) instead of its
approximation R*, would have imitated a logarithmic utility decision maker, a person
with a CRRA utility function u(x) = log x¹⁴ (since R* is used instead of RFH(g), it
may be regarded as “boundedly rational” CRRA decision making). Described in this way,
the decision rule is an intersection of the outputs of CARA-like and CRRA-like decision
processes.¹⁵
The description of two distinct decision processes captures another aspect of
“bounded rationality,” namely, the aspect of computational limitations, reflected in the
decomposition of a complex task into simple modules that can be analyzed more easily,
though not yet optimally.
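Read this way, the rule is easy to write down explicitly; the sketch below is illustrative only (the function names and the example wealth figure are mine), following the reconstruction of the rule given at the start of this subsection.

import math

def gain_signal(p, G, L):
    # "CARA-like" process: looks only at the gamble; fires iff G/L >= sqrt(p) / (1 - sqrt(p))
    return G / L >= math.sqrt(p) / (1 - math.sqrt(p))

def wealth_signal(p, G, L, W):
    # "CRRA-like" process: fires iff reserve wealth exceeds the approximated riskiness R*
    g_prime = math.sqrt(p) / (1 - math.sqrt(p)) * L
    r_star = g_prime if G >= g_prime else 0.5 * ((1 - p) * G**2 + p * L**2) / ((1 - p) * G - p * L)
    return W > r_star

def accept(p, G, L, W):
    # The decision center rules in favor of the gamble only if both signals are positive
    return gain_signal(p, G, L) and wealth_signal(p, G, L, W)

print(accept(p=0.5, G=250, L=100, W=500))   # True: G/L = 2.5 > 2.41 and W > R* = 241.4
print(accept(p=0.5, G=250, L=100, W=200))   # False: the wealth signal is negative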
14 Foster and Hart (2007) contains an extensive discussion of this characteristic.
15 RFH(g) and RAS(g) share the first two terms in their Taylor development (on which R* as an approximated measure is based). Comparing RFH(g) and RAS(g), Foster and Hart write: “RAS(g) looks for the critical risk aversion coefficient regardless of wealth, whereas RFH(g) looks for the critical wealth regardless of risk aversion” (Foster and Hart, 2007). By incorporating the characteristics of both riskiness measures, the R*-based decision rule may be seen as a heuristic that considers both the critical risk aversion coefficient regardless of wealth and the critical wealth regardless of risk aversion.
7. Discussion
It has become common in recent years to try and trace the evolutionary reasoning for a
range of typical human practices (see, e.g., Gintis, 2007; Jones, 2001; Robson, 2002).
According to this school of thought, practices that violate the normative theory of
judgment and decision making are analyzed in a prehistorical context, and are often
claimed to have conferred survival advantage to small nomadic groups. However, we find
it more appealing to forgo the vague past, which is open to too many hypotheses, and
instead to seek support for normative theories in contemporary human behavior. To
this end, we interpret the decision behavior reported in prospect theory as stemming from
a “boundedly rational” usage of a normative measure of riskiness. While Foster and Hart
base their guideline for avoiding bankruptcy on an implicit measure of riskiness, a
boundedly rational human mind may use a simple threshold-based bankruptcy-proof
strategy, which replaces the implicit measure of riskiness with an estimable explicit
approximation.
Three conceptual layers of explanations for human behavior can be distinguished:
the bottom layer is in fact more descriptive than explanatory, since it models observed
behavior patterns and suggests their formulation; the middle layer adds a principal
explanation to observed patterns, but lacks insight into the prevalence of this explanation;
the top layer supplies the reasoning for the existence, and in the case of behavior the
fitness-enhancing character, of the behavior described by the bottom layer and explained
by the middle layer. In the light of this multilayered conception, we suggest that prospect
theory be viewed as the bottom layer of modeling decision making behavior. The theory
describes observed patterns, such as loss aversion and overweighting of small
probabilities, and suggests their formulation in the form of a characteristic value function
and inverse S-shaped weighting functions. Compensation theory seeks to provide a
principal explanation for the patterns observed by prospect theory: people estimate the
riskiness measure of a suggested gamble and accept it if and only if the gain and their
wealth are both higher than the estimated riskiness. The top layer of this argument is
supplied by Foster and Hart (2007), where they prove that using their measure of
riskiness for decision making is the only way to avoid bankruptcy and guarantee
sustained growth.
The main emphasis in Foster and Hart (2007) is put on the necessity of rejecting
gambles whose riskiness measure RFH(g) exceeds W. We incorporated this condition
into the decision rule of compensation theory (with R* instead of RFH(g)), but kept it
subordinate to the condition that guarantees a gain-to-loss ratio greater than the fair
compensation rate. In fact, we do not claim that this paper gives any supporting evidence,
whether direct, implicit, or approximate, that people actually compare their reserve
wealth to the riskiness of gambles. What we do claim is that the fair compensation-based
decision rule introduced in this paper (with its wealth-dependent property) is an
approximation to a strategy that guarantees “infinite wealth” in the long run, and is as
capable of explaining decision making as cumulative prospect theory is. Thus, our
decision rule provides (an approximation to) a normative one that suits allegedly
irrational decision making.
The inability to give a descriptive proof of the wealth part of the decision rule stems
from the fact that the gambles used by cumulative prospect theory to evaluate
loss aversion had R* values of no more than a few hundred dollars, far less than the
wealth that students from industrialized societies assume they possess when relating to a
hypothetical gamble (see a detailed calculation in Appendix A). As a result, it is hard to
identify cases where subjects reject otherwise attractive gambles due to their wealth
limitations. However, unlike cumulative prospect theory, compensation theory assumes
that each decision maker has a certain level of riskiness—the level equal to his wealth—
above which he will refuse to participate in the gamble. The intuition is that people tend
to avoid risking their entire savings to the same extent that they are willing to risk small
amounts of money.¹⁶ For example, it is reasonable to assume that most people will reject
a real-life gamble of two equally likely alternatives where one alternative is to lose $5M,
no matter how much they stand to gain by the other alternative. Indeed, Foster and Hart
(2007) imply that people who ignored their wealth constraints were prone to bankruptcy
(a similar logic is suggested in footnote 23 in Foster and Hart, 2007).
Tversky and Kahneman empirically demonstrated the irrelevance of wealth
considerations in gambles whose riskiness measure is below the W=R* threshold by
showing that it is possible to calculate one loss aversion coefficient that holds for a wide
16 I.e., wealthier people are less risk-averse.
range of payoffs, thereby implying that the reference point for decision making is current
wealth rather than zero wealth.¹⁷ Others have corroborated this claim with the high extent
of risk aversion found in low-payoff gambles, which suggests that wealth considerations
are widely ignored in such decision situations. For example, Rabin (2000) demonstrates
the exaggerated risk aversion needed to justify wealth considerations in these situations.
8. Conclusion
Prospect theory is based on experiments that became famous for demonstrating
irrationality in decision making. In this context, rationality is interpreted as following the
set of axioms that form the basis of expected utility theory, the most prominent normative
theory of decision making. The unbridgeable gap between these two competing theories
has led me to a quest for a different normative decision theory that may explain the
observed decision behavior better, while endorsing the rationality of the decision makers.
In fact, all that is needed is to show that real observed decision making behavior does not
contradict a decision rule that is rational.
One of the active fields of research in normative decision theory deals with defining
measures of riskiness in order to find one coherent measure for assessing the risk inherent
in different lotteries (the “gambles” of this paper). In a recent paper in this field Foster
and Hart (2007) introduce “an operational measure of riskiness” RFH(g), which is
“operational” in the sense that in addition to ranking all gambles on one scale according
to their riskiness (as all measures of riskiness do), it enables us to derive operational
recommendations about the advisability of accepting or rejecting a gamble. In fact, Foster
and Hart base a normative guideline for decision making on their measure of riskiness by
showing that a decision maker who rejects all gambles whose measure of riskiness
exceeds his reserve wealth guarantees himself no-bankruptcy. Based on these concepts,
fair compensation theory, introduced in this paper, raises the hypothesis that decision
making as observed in the experiments of prospect theory may be explained as the result
of following a bankruptcy-proof strategy, while using R* instead of RFH(g) as the
17 In other words, there is no concave utility function of wealth, U(W), such that U(W) = 0.5·U(W + tG) + 0.5·U(W − tL) for all values of t in a certain range, as required by a wealth-dependent function with a constant loss-aversion coefficient.
riskiness measure (where R* is an approximation to RFH(g) based on its Taylor
development). The substitution of RFH(g) with R* is justified as a form of bounded
rationality, as R* is a much simpler measure than RFH(g). This specific strategy is
chosen from among other potential strategies because it is assumed to be simple enough
to be used by everyone, it fits the main patterns of observed decision behavior (as
described in Tversky and Kahneman, 1992), and it has several appealing properties.
A “simple” gamble g is a two-outcome gamble yielding a strictly negative payoff -L with
probability p, and a positive payoff G otherwise. According to compensation theory,
whenever the median decision maker is faced with a “simple” gamble whose riskiness
measure R* is below his reserve wealth W, the gamble is accepted if and only if the gain
G exceeds R* too (we show that this is equivalent to accepting a gamble only if the gain
is greater than G', the gain that minimizes R*). We then demonstrate that G' is linear with
respect to L, and rephrase the decision rule as accepting a gamble whose riskiness
measure R* is below W if and only if the ratio of G to L exceeds the fair compensation rate,
which is a function of the loss probability p alone, and equals √p/(1 − √p). An index of
attractiveness based on this fair compensation rate is then suggested for ranking the
attractiveness of gambles.
Appendix A: Wealth considerations in prospect theory results
The experimental data of cumulative prospect theory naturally does not contain
information on the wealth of the subjects. However, in order to test the plausibility of
assumptions on the relationship between the financial state and the willingness to take
risks, one must overcome this obstacle. The decision rule presented in this paper contains
an assumption of that kind, the assumption that people reject gambles when their wealth
W is below L·√p/(1 − √p) = min R(g). By calculating the maximal value of L·√p/(1 − √p)
for each of the accepted gambles in the data, we can retrieve a lower limit on the wealth
that a median subject must have possessed in order not to violate this rule. If it is
reasonable enough to assume that the median subject did possess such an amount, then
the data does not refute the decision rule suggested.
Since most of the data in prospect theory consists of prospects that are all positive or all
negative (where the risk refers to the exact amount to be won or lost), it cannot be
analyzed directly by compensation theory.¹⁸ The raw data from prospect theory that can
be analyzed directly by compensation theory consists of four problems that included
mixed prospects. In these problems subjects were asked to specify the positive amount G
that would make them indifferent to an equal chance of obtaining gain G or loss L.
Multiplying L by √p/(1 − √p) we get the minimal wealth W needed in order for the subject
not to reject the gamble due to wealth constraints (according to the decision rule). The
results, taken from Tversky and Kahneman (1992), are presented in Table 3.¹⁹
18 This data is nevertheless integrated into the comparative analysis in Section 5 through its influence on the values of the parameters of the value function and the weighting functions.
19 The values in column c indicate the possible losses presented to subjects, the values in column x indicate the gains required to make the median subject indifferent to an equal chance of obtaining gain x or loss c, and the values in column θ represent the resulting gain-to-loss ratio, representing the degree of loss aversion.
Table 3: A test of loss aversion with mixed prospects
In all four problems the probability of losing is the same (p=0.5); hence the value of the
expression L·√p/(1 − √p) grows linearly with L. The maximal value of L in these questions
is 150 (problem 4 in the table), and the wealth needed to make it acceptable is:
W ≥ L·√p/(1 − √p) = 150·√0.5/(1 − √0.5) ≈ 362. Was the wealth of the median subject in the
experiments conducted by Tversky and Kahneman (1992) greater than $362? One would
assume so, considering that the subjects were students from top U.S. universities, where
the cost of one week’s tuition exceeds $362. Moreover, since the problems were
hypothetical, the subjects knew that they could not lose real money in these gambles, so
that even if they did not really possess the whole amount needed to take the gamble in
real life, they could still accept it in the lab. However, by accepting a gamble in the lab,
subjects indicate that they find the gamble’s ratio of gain-to-loss attractive, and that they
would accept it in real life if they had enough money in their possession. The possibility
that when dealing with hypothetical problems in the lab subjects ignore their real-life
limitations raises the supposition that people refer to the properties of the gamble itself
without considering their own financial state. In such a case, researchers' attempts to
improve the validity of their experiments by raising significantly the payoffs of the
gambles may still fail to reveal the effect of current wealth on the willingness to accept
gambles.
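A short sketch of the lower-bound calculation described above (added for illustration; the four loss amounts are those reported for the mixed prospects in Section 5.1.2):

import math

def minimal_wealth(L, p=0.5):
    # Minimal reserve wealth implied by the decision rule: W >= L * sqrt(p) / (1 - sqrt(p))
    return L * math.sqrt(p) / (1 - math.sqrt(p))

for L in (25, 50, 100, 150):
    print(f"loss {L:3d}: minimal wealth ~ {minimal_wealth(L):5.0f}")
# The largest loss, 150, gives the ~362 bound discussed in the text.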
Appendix B: Proofs
(1) G = R(g) ⟺ min R(g)
We prove here that in “simple” gambles, R(g) gets its minimum at the only value of G
that satisfies G = R(g).
R(g) is expressed as follows:

R(g) = E[g²] / (2·E[g]) = (1/2) · [(1 − p)G² + pL²] / [(1 − p)G − pL]

Solving G = R(g):

G = (1/2) · [(1 − p)G² + pL²] / [(1 − p)G − pL]
⟹ 2(1 − p)G² − 2pLG = (1 − p)G² + pL²
⟹ (1 − p)G² − 2pLG − pL² = 0

G₁,₂ = [pL ± √(p²L² + (1 − p)pL²)] / (1 − p) = (p ± √p)·L / (1 − p) = √p(√p ± 1)·L / [(1 − √p)(1 + √p)]

G must be positive; therefore we get:

G = √p(√p + 1)·L / [(1 − √p)(1 + √p)] = [√p / (1 − √p)]·L

F.O.C. for R(g) with regard to G:

R(g) = E[g²] / (2·E[g]) = (1/2) · [(1 − p)G² + pL²] / [(1 − p)G − pL]

dR/dG = [2(1 − p)G·((1 − p)G − pL) − (1 − p)·((1 − p)G² + pL²)] / [2·((1 − p)G − pL)²] = 0
⟹ (1 − p)G² − 2pLG − pL² = 0

This is the same equation that was used to solve G = R(g); therefore it leads to the same value of G: G = [√p / (1 − √p)]·L

S.O.C. for R(g) with regard to G:

d/dG [(1 − p)G² − 2pLG − pL²] = 2(1 − p)G − 2pL,  evaluated at G = [√p / (1 − √p)]·L:
2(1 − p)·[√p / (1 − √p)]·L − 2pL = 2(1 + √p)·√p·L − 2pL = 2√p·L > 0

Therefore, this is a minimum point.
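As a numerical sanity check of this result (added for this edition; the probability and loss are arbitrary), one can scan R(g) over a grid of gains and confirm that the minimum sits at G = √p/(1 − √p)·L, where R(g) also equals G:

import math

def taylor_riskiness(p, G, L):
    # Taylor-based approximation of riskiness for a simple gamble: R(g) = E[g^2] / (2 E[g])
    return 0.5 * ((1 - p) * G**2 + p * L**2) / ((1 - p) * G - p * L)

p, L = 0.2, 100.0
g_prime = math.sqrt(p) / (1 - math.sqrt(p)) * L          # claimed minimizer, ~80.90 here
grid = [g_prime * k / 1000 for k in range(500, 2001)]    # gains from 0.5*G' to 2*G'
g_min = min(grid, key=lambda G: taylor_riskiness(p, G, L))
print(round(g_prime, 2), round(g_min, 2))                # both ~80.90
print(round(taylor_riskiness(p, g_prime, L), 2))         # R at G' equals G' itself (~80.90)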
(2) Homogeneity of degree 1 of the approximated riskiness measure R*
Rg 
R*  
min Rg 
, if G  G '
, if G  G '
Or in an explicit representation:
 1 1  p G 2  pL 2
 
 2 1  p G  pL
R*  
 p
1  p L

, if G 
p
1 p
, if G 
L
p
1 p
L
Homogeneity of degree 1 of R* implies that R*(tg) = tR*(g) for every t>0 and all possible
gambles. To prove it, we need to examine each part of R* separately:
(i) G 
p
1 p
L in the gamble g. This implies that tG 
p
1
p
tL in the gamble tg.
1 1  p t 2 G 2  pt 2 L
1 1  p G 2  pL
 t = tR*(g).
Therefore we get R*(tg) = 
= 
1  p tG  ptL
2
2 1  p G  pL
2
(ii) G 
p
1
p
L in the gamble g. This implies that tG 
Therefore we get R*(tg) =
p
1
p
tL =
p
1
p
2
p
1
p
tL in the gamble tg.
L  t = tR*(g).
Appendix C: Expansion to “non-simple” gambles
The theory presented in this paper can be generalized in various ways to include
“general” gambles, i.e., gambles with no restrictions on the outcomes and with any
number of payoffs (i.e., more than two). We will discuss here the plausibility and the
shortcomings of several possible expansions. A gamble g is a real-valued random
variable having some negative values (losses are possible) and positive expectation, i.e.,
Pg  0  0 and Eg   0 (Foster & Hart, 2007). Foster and Hart start with these two
restrictions (P(g < 0) > 0 and E[g] > 0), but later expand their riskiness measure to
include gambles with no possibility of loss (leading to zero riskiness) and to gambles
with negative expectation (leading to infinite riskiness). Maintaining these two
restrictions does not limit in any way a descriptive decision theory such as fair
compensation theory. First, the expectation of an accepted gamble cannot be negative
since the gain required to compensate a risk-averse person for any loss induces a positive
expectation, by the definition of risk aversion (note that risk-seeking was reported by
Tversky and Kahneman (1979, 1992) only when the non-risky alternative was sure loss,
i.e., never when a gamble with negative expectation can be simply rejected with no loss).
This is reflected in the condition G ≥ L·√p/(1 − √p), which cannot be satisfied by a gamble
with negative expectation. Second, the probability of loss must be strictly positive,
because when there is no possibility of loss there is no positive compensation that can
force indifference: the gamble is always favorable.
The generalization of the decision rule to include gambles of multiple outcomes is
done in the following manner: first the point G' should be redefined; then the “gain”
condition for acceptance is a redefinition of G>G', while the “wealth” condition becomes
W>R(G'), where R(G') is the value of the measure of riskiness R(g) at the point G=G'
(this condition was stated originally as W>R*, but for G>G' the riskiness R* is constant
and equals R*(G'); moreover, it is simpler to use here the Taylor-based approximation
R(g) than to use R*, and the result is not affected, since at the point G' the
riskiness measures R(g) and R* are equal).
When moving from discussing “simple” gambles to discussing “general” gambles,
the indifference point of the decision rule can continue to fill its dual role as the minimum
R(g) and the point where riskiness equals the gain (see Section 3.2), if the appropriate
adjustments are made. Define the set {Gi} to be the set of all positive payoffs with
probability greater than 0 in the “general” gamble. Then the equality R(g)=G has no
unequivocal meaning when there are two or more positive outcomes. Nevertheless,
minimizing R(g) with respect to any of the positive payoffs Gi of the gamble is
equivalent to forcing that payoff to be equal to R(g) while fixing all other payoffs
Gj, j ≠ i. Therefore, all the Gi's (and maybe even all linear combinations thereof) are
equally viable candidates on which to base the rule, and so there are too many degrees of
freedom for generalizing the decision rule. Two possible generalizations seem non-arbitrary
and mark the boundaries for discussion: one is to accept the gamble g if
max{Gi} is larger than R(g), and the other is to accept g if the weighted average of {Gi}
is larger than R(g). The first generalization is based on the same logic as Regret Theory
(Loomes & Sugden, 1982): just as in regret theory receiving any payoff other than the
maximal payoff bears a sense of regret, the risk in fair compensation theory of receiving
any payoff other than the maximal must be “compensated for.” In fact, under this
generalization, both regret theory and compensation theory recognize the same
psychological effect, namely, that the disappointment from missing the best reward
overpowers the satisfaction or relief from being spared the worst outcome. The
shortcoming of this generalization is that it turns an otherwise rejected gamble into an
accepted one if a gain with a very low probability of being won is added to the gamble’s
set of outcomes, a transition in preferences that is hardly likely to occur in reality. While
this max Gi  -based generalization is the most discriminating between gains (treating all
other gains as losses), the second generalization, based on the weighted average of Gi ,
i.e.,
p
i
Gi
Gi
p
Gi
, does not discriminate between gains at all. It suggests that people
i
use the expectation of gains as their target gain for comparison, a practice that can be
described as risk-neutral in the domain of gains. Between these two generalizations lies a
range of alternative generalizations that balance between these extreme attitudes.
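As an illustration only (the three-outcome gamble below and the function names are mine, and neither rule is endorsed by the text as the correct generalization), the two boundary rules can be written down directly:

def taylor_riskiness_general(outcomes):
    # Taylor-based approximation R(g) = E[g^2] / (2 E[g]) for a general gamble
    # outcomes: list of (payoff, probability) pairs with positive expectation
    e1 = sum(x * q for x, q in outcomes)
    e2 = sum(x * x * q for x, q in outcomes)
    return e2 / (2 * e1)

def max_gain_rule(outcomes):
    # Accept if the largest gain exceeds R(g)
    return max(x for x, _ in outcomes if x > 0) > taylor_riskiness_general(outcomes)

def expected_gain_rule(outcomes):
    # Accept if the probability-weighted average of the gains exceeds R(g)
    gains = [(x, q) for x, q in outcomes if x > 0]
    avg = sum(x * q for x, q in gains) / sum(q for _, q in gains)
    return avg > taylor_riskiness_general(outcomes)

g = [(300, 0.05), (40, 0.55), (-50, 0.40)]
print(max_gain_rule(g), expected_gain_rule(g))   # True False: the two rules can disagree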
Another approach might be to try to generalize the p-root transformation and base
the decision rule upon it. Here again there is more than one way of doing it. One way is
to relate to all possible outcomes other than max{Gi} as losses (as in regret theory), and
then transform each probability pj of losing Lj and each probability pi of gaining
Gi ≠ max{Gi} to √pj and √pi respectively, while transforming the probability of
gaining max{Gi} to 1 − Σj √pj − Σi √pi. Another way is to transform the loss
probabilities pj to √pj, and the gain probabilities pi to 1 − √(1 − pi).²⁰ Since the point
of indifference is based on the point of zero expectation, there is no need to normalize the
sum of transformed probabilities to 1, and there is no need to transform the probability to
get a zero outcome.
Clearly, since our theory is a descriptive one, the generalized decision rule must suit real
human behavior; ideally, it would be based on prospect theory data. Indeed,
prospect theory has no restrictions in terms of the number of payoffs, and provides
predictions of decision making in complex gambles. However, in prospect theory the
experimental data used for the evaluation of the numerical values of the parameters of the
value function and the weighting functions do not include gambles with more than two
outcomes. Therefore, generalizing the decision rule of compensation theory according to
the functions of prospect theory cannot guarantee valid predictions.
Nevertheless, the “intuitive” sense of fair compensation, i.e., the idea that people
sense the magnitude of compensation they “deserve,” can no longer comply with any of
the suggested generalizations: while estimating the fair compensation for a “simple”
gamble is a rather simple task (since only the probability matters; see Table 1), the
complexity of estimating the compensation for a gamble with two possible losses is
higher by orders of magnitude.²¹ This renders our idea of a simple table encoded in the
brain (see Section 4.1) inapplicable when dealing with a gamble with as few as three
outcomes. Obviously, people do make choices in “non-simple” gambles. A more
convincing answer to the question of how people decide in such circumstances is left for
future research.
20 This transformation follows the original one: transforming 1 − p to 1 − √p can be written as transforming q = 1 − p to 1 − √(1 − q).
21 The complexity of storing the compensation rate G/L for N different probabilities is O(N), since the size of L is not relevant, while the complexity of storing the compensation needed for L1, L2 with probabilities P1, P2 is O(N⁴), since all four parameters are needed for calculating the threshold of indifference.
References
Arrow, K.J., 1965. Aspects of the Theory of Risk Bearing, Helsinki: Yrjo Jahnsson.
Aumann, R.J., 1997. “Rationality and Bounded Rationality,” Games and Economic
Behavior 21, 2–14.
Aumann, R.J. and R. Serrano, 2007. “An Economic Index of Riskiness,” The Hebrew
University of Jerusalem, Center for the Study of Rationality DP–446 (mimeo).
Eisenhauer, J.G., 2006. “Risk Aversion and Prudence in the Large,” Research in
Economics 60, 179–187.
Fehr, E. and Gächter S., 2000. “Fairness and Retaliation: The Economics of Reciprocity,”
The Journal of Economic Perspectives 14, 159–181.
Fehr, E. and K.M. Schmidt, 1999. “A Theory of Fairness, Competition and Cooperation,”
Quarterly Journal of Economics 114, 817–868.
Foster, D. and S. Hart, 2007. “An Operational Measure of Riskiness,” The Hebrew
University of Jerusalem, Center for the Study of Rationality DP–454 (mimeo).
Fullenkamp, C., R. Tenorio, and R. Battalio, 2003. “Assessing Individual Risk Attitudes
Using Field Data from Lottery Games,” Review of Economics and Statistics 85, 218–225.
Gintis, H., 2007. “The Evolution of Private Property,” Journal of Economic Behavior
& Organization 64, 1–16.
Holt, C.A. and S.K. Laury, 2002. “Risk Aversion and Incentive Effects.” American
Economic Review 92, 1644–1655.
Jones, O.D., 2001. “Time-shifted Rationality and the Law of Law’s Leverage: Behavioral
Economics Meets Behavioral Biology,” Northwestern University Law Review 95,
1141–1206.
Kahneman, D., J.L. Knetsch, and R. Thaler, 1986. “Fairness as a Constraint on Profit
Seeking: Entitlements in the Market,” American Economic Review 76, 728–741.
Kahneman, D., and A. Tversky, 1979. “Prospect Theory: An Analysis of Decision under
Risk,” Econometrica 47, 263–292.
Kalwani, M.U., C.K. Yim, H.J. Rinne, and Y. Sugita, 1990. “A Price Expectations
Model of Customer Brand Choice,” Journal of Marketing Research 27, 251–262.
Loomes, G. and R. Sugden, 1982. “Regret Theory: An Alternative Theory of Rational
Choice under Uncertainty,” The Economic Journal 92, 805–824.
Machina, M.J. and M. Rothschild, 2007. “Risk,” in The New Palgrave Dictionary of
Economics, 2nd Edition, S.N. Durlauf and L.E. Blume (editors), London: Macmillan,
forthcoming.
Palacios-Huerta, I., and R. Serrano, 2006. “Rejecting Small Gambles under Expected
Utility,” Economics Letters 91, 250–259.
Pratt, J.W., 1964. “Risk Aversion in the Small and in the Large,” Econometrica 32,
122–136.
Putler, D.S., 1988. “Reference Point Effects and Consumer Behavior,” Economic
Research Service, U.S. Department of Agriculture, Washington, DC (unpublished
manuscript).
Rabin, M., 1993. “Incorporating Fairness into Game Theory and Economics,”
American Economic Review 83, 1281–1302.
Rabin, M., 2000. “Risk Aversion and Expected Utility Theory: A Calibration Theorem,”
Econometrica 68, 1281–1292.
Robson, A.J., 2002. “Evolution and Human Nature,” The Journal of Economic
Perspectives 16, 89–106.
Rothschild, M. and J.E. Stiglitz, 1970. “Increasing Risk: I. A Definition,” Journal of
Economic Theory 2, 225–243.
Simon, H., 1972. “Theories of Bounded Rationality,” in Decision and Organization, C.
McGuire and R. Radner (editors), Amsterdam: North Holland.
Simon, H., 1982. “Models of Bounded Rationality”. Cambridge, MA: MIT Press.
Smith, V.L. and J.M. Walker, 1993. “Rewards, Experience, and Decision: Costs in First
Price Auctions.” Economic Inquiry 31, 237–244.
Tversky, A., and D. Kahneman, 1991. “Loss Aversion in Riskless Choice: A Reference-Dependent Model,” The Quarterly Journal of Economics 106, 1039–1061.
Tversky, A., and D. Kahneman, 1992. “Advances in Prospect Theory: Cumulative
Representation of Uncertainty,” Journal of Risk and Uncertainty 5, 297–323.
Von Neumann, J. and O. Morgenstern, 1944. Theory of Games and Economic Behavior,
Princeton: Princeton University Press.