The Power of Shame
and the Rationality of Trust
Steve Tadelis∗
UC Berkeley
Haas School of Business
PRELIMINARY AND INCOMPLETE!
May 14, 2007
Abstract
Experimental evidence and a host of recent theoretical ideas take aim
at the common economic assumption that individuals are selfish. The
arguments made suggest that intrinsic “social preferences” of one kind
or another are at the heart of unselfish, pro-social behavior that is often
observed. I suggest an alternative motive based on “shame” that is imposed by the beliefs of others, which is distinct from the more common
approaches to social preferences such as altruism, a taste for fairness, reciprocity, or self-identity perception. The motives from shame may explain
observed behavior in some previously studied experiments, and more importantly, they imply new testable predictions. Experiments confirm both
that shame is a motivator, and that trusting players are strategically rational in that they anticipate the power of shame. Some implications for
policy and strategy are discussed. JEL classifications: C72, C91, D82
∗ This work was inspired during the Stanford Institute for Theoretical Economics workshop
in August 2004. I am grateful to Gary Charness for sharing his experimental design and
his experience. I also thank Yossi Feinberg, Shachar Kariv, John Morgan, Muriel Niederle
and Larry Samuelson for helpful discussions. Victor Bennett and Constança Esteves provided outstanding research assistance. This work has been supported by the National Science
Foundation and by UC Berkeley’s X-Lab at the Haas School of Business.
1 Introduction
At the market level, the selfish rational choice model that has been the workhorse of economic analysis works surprisingly well given its simple structure
and extreme assumptions. However, when human subjects are put to tests of
individual decision making in simple strategic situations, their behavior departs
from theoretical predictions in one systematic way. Namely, individuals seem
to exhibit some form of “pro-social” behavior, in which they sacrifice some of
their own monetary payoff to increase (and sometimes decrease) the payoff of
others. (See Camerer 2003 for an excellent summary of these results.)
Such pro-social behavior is manifested in daily life by a variety of so called
acts of kindness (offering donations, helping strangers), as well as contributions
to the public good (voting, volunteering). This casual empiricism and the experimental results suggest that individuals have “other-regarding” preferences, for which many mechanisms have been proposed. These preferences include altruism (direct and indirect (Andreoni, 1990)); inequity aversion (Fehr and Schmidt (1999)); preferences for fairness (Rabin (1993)); preferences for reciprocity (Falk and Fischbacher (2006)); and even more involved preferences such as regards for self-identity (Akerlof and Kranton (2000)), or combinations of intrinsic and extrinsic concerns (Benabou and Tirole, 2006).
These preferences are primarily driven by intrinsic considerations–they are
an expression of one’s preferences over outcomes that include the payoffs of
others, or how these payoffs were reached. As such, I take liberty in loosely
combining these mechanisms for pro-social preferences under the title of “guilt”:
being selfish causes a player to experience some intrinsic loss of utility. This
broad notion is somewhat cavalier in its breadth, and is different from the recent description of preferences with guilt as modelled by Battigalli and Dufwenberg (2006, 2007). They define guilt as a loss of utility for a player who believes that he disappointed another player. Hence, players have preferences over monetary outcomes and over the beliefs of their opponents, the latter mostly playing a role through disappointment: I lose utility if you are disappointed, and your disappointment is a function of your beliefs.
I refer to shame as a motivator distinct from guilt, or from the broad host of intrinsic preferences. As noted by a teen self-help web site1,
“Guilt and Shame are closely connected emotions, we tend to
feel guilty when we have violated rules or not lived up to expectations and standards that we set for ourselves. If we believe that we
“should” have behaved differently or we “ought” to have done better, we likely feel guilty. Shame involves the sense that we have done
something wrong that means we are “flawed,” “no good,” “inadequate,” or “bad” and is usually connected to the reactions of others.
Anytime you catch yourself thinking ‘if they knew ______ then
they would not like me or would think less of me,’ you are feeling
shameful.”
1 http://www.teenhealthcentre.com/articles/publish/article_93.shtml
This quote (and similar ones that appear on a variety of websites) summarizes quite well the accepted distinction between these emotions. As Tangney
(1995) writes, “there is a long-standing notion that shame is a more “public”
emotion than guilt, arising from public exposure and disapproval, whereas guilt
represents a more “private” experience arising from self-generated pangs of conscience.” As such, preferences that incorporate shame must include preferences over the perceptions, or beliefs, of others, and not just intrinsic preferences over physical outcomes.
Three questions then arise, which are the focus of this paper. First, if both
shame and guilt, as referred to above, can motivate pro-social behavior, can
these two distinct motivators be empirically distinguished? Second, if they can,
and if shame is confirmed as a motivator, do players act as if they perceive
the power of shame to direct their own behavior? And finally, what are the
public policy and strategy implications for the design of public and private
institutions? This paper offers an affirmative answer to the first two questions,
and some insights into the third.
I begin the analysis in Section 2 with a simple game of trust to model these kinds of preferences and to apply notions of equilibrium behavior, which is illustrative of the kind of forces that would identify preferences for shame. I
resort to the well established structure of ex post beliefs being formed given a
prior distribution of “types,” and given beliefs over behavior and the environment. I endow some players with preferences over the beliefs of others, as in
Geanakoplos, Pearce and Stacchetti (1989), and adapt a rather common notion
of sequential equilibrium, similar to the recent work of Battigalli and Dufwenberg (2007).
In particular, the game I analyze and explore is a simpler variant of the well-known “Trust Game” (in the spirit of Berg, Dickhaut and McCabe, 1995).
Player 1 can trust player 2 or exit with a safe outside option. If trusted, player
2 can return in kind by cooperating, but at a sacrifice of an additional monetary
payoff that he receives from defecting. A selfish player 2 will therefore never
cooperate, and anticipating this player 1 should never trust.
Two sources of noise are then introduced. First, the game has an element
of “imperfect monitoring” as introduced by Charness and Dufwenberg (2006):
following trust, player 1 cannot always be certain whether a low payoff is the
result of player 2 defecting, or the result of bad luck.2 Second, the game has
some incomplete information: player 2 can be either selfish or thoughtful. A thoughtful type of player 2 cares about the ex post beliefs of player 1; in particular, a thoughtful player 2 would prefer that player 1 not think he is selfish.
In equilibrium, given the prior beliefs of player 1 over the types and strategies
of player 2, player 1 will form a posterior belief over the type of player 2, and
this belief is what player 2 cares about.
Testing whether shame, as defined above, has an impact on the choices of
2 This captures examples such as a principal who hires an agent for a fixed wage in a hidden-action setting, a trader who relies on a carrier to deliver his goods, constituents electing a politician to increase their welfare, etc., where the outcome can be poor despite the adequate behavior of all players.
players is done by manipulating the ability of player 1 to decipher whether a
bad outcome is a result of bad luck or of selfish behavior of player 2. Using
this idea, Section 3 shows that it is possible to manipulate the ex post beliefs of
player 1 without changing other parameters of the game, which in turn implies
that if preferences over shame are not present, equilibrium behavior will not
change. As such, if player 2 does care about shame (the beliefs of player 1), the
theory suggests a testable implication on the behavior of player 2: if player 1
can more easily establish whether bad outcomes correspond to selfish behavior
then player 2 is more likely to act cooperatively. This offers an answer to the
first question posed: preferences driven by shame can be distinguished from
those driven by guilt (i.e., intrinsic motivations).
Answering the second question is also a consequence of simple equilibrium
analysis: a rational player 1 will anticipate the change in shame-based incentives
for player 2 following different informational environments. If the informational
environment implies that a thoughtful player 2 is more likely to cooperate, then
player 1 should be more inclined to trust player 2. Thus, trust can be explained
as a rational response to the incentives provided by shame, and finding this behavior would imply that rational players act as if they perceive the power of
shame, offering an affirmative answer to the second question.
This simple theoretical exercise is then applied to a series of laboratory experiments as described in Section 5, which manipulate the information along the
lines described above. The experimental results described in Section 6 strongly
support the hypotheses that are derived from the theoretical analysis. Thus,
without ruling out that some intrinsic motivations explored in previous studies
are drivers of pro-social behavior, the motivation implied by shame is strong and
robust, as is the equilibrium response of rational players. This is in line with a
complementary paper by Andreoni and Bernheim (2007) in which players care
about how others perceive them.
Following the experimental analysis, Section 7 offers a discussion that focuses on three main issues. First, shame-based preferences can be considered as
a reduced form of more complex reputational preferences that players would exhibit in repeated games with incomplete information (e.g., Kreps et al., 1982).
Despite the fact that players are playing a one-shot game, they may perceive
the situation as part of a repeated game, either rationally or as a rule of thumb,
which can explain the experimental results. To what extent the players can
rationally consider the experimental interaction as part of a repeated game is
a judgement call, but I will argue that it is not very convincing and that the
rule of thumb, which may be a result of evolutionary pressures, may be more
convincing.
A second issue raised in the discussion is the ability of shame-based preferences to explain the behavior observed in other experimental settings, and what kinds of new experiments the theory would suggest as good testing grounds.
Finally, I offer some thoughts on the implications of shame-based preferences for both private-sector strategy (contractual relationships) and public policy.
2 Modelling Shame: A Noisy Trust Game
As described in the introduction, shame is defined as an emotion that induces
behavior due to the fear of what others will think about the player in question.
Hence, it is natural to assume that the beliefs of others must include some “bad” type of person that they can attribute to the player in question, and the incentive mechanism follows from the player’s aversion to being thought of as bad.
It would be rather straightforward to introduce a general form of preferences that would capture this idea. Indeed, in a recent paper, Battigalli and
Dufwenberg (2007) introduce preferences that depend on the beliefs of others.
However, instead of having bad types and incomplete information, they endow
players with expectations over monetary outcomes, and these beliefs form the
basis for disappointment (i.e., you get less than you expect). Thus, the preferences of players in Battigalli and Dufwenberg (2007) are over money and over the ex post beliefs of others, the latter beliefs being related to expectations about outcomes instead of types of other players.
To offer a general model that introduces such a “bad” type, however, may be context dependent. This is particularly true if one views preferences over the beliefs of others as a reduced-form rule of thumb that replaces more complex behavior in repeated games where reputational concerns are present. I defer this discussion to Section 7.1, and turn to a simple trust game to illustrate the
idea of preferences for shame, and the behavioral consequences and predictions
if such preferences are present. Section 7.2 expands on how to consider further
variants in a variety of different games.
2.1 Perfect Information and Monetary Payoffs
Imagine a simple trust game in which player 1 (the trustor) can trust (T ) or
not-trust (N ) player 2 (the trustee), and if trusted, player 2 can cooperate (C)
or defect (D). Trust followed by cooperation is Pareto superior to not-trust,
but following trust, player 2 must incur a cost to cooperate. Defection, however, imposes a cost on player 1. Cooperation succeeds only imperfectly: with probability p ∈ (0, 1) cooperation succeeds and player 1 receives a high payoff, while with probability 1 − p cooperation fails, and player 1 receives a payoff of zero, identical to defection. Conditional on cooperating, the pecuniary payoffs to player 2 do not depend on success. This trust game has the structure used in the experiments of Charness and Dufwenberg (2006), and with perfect information can be described by the game in Figure 1.
Figure 1: A Simple Trust Game
In this game, the payoff from player 1 choosing N is v > 0 to both players,
while if player 1 chooses T and player 2 chooses C then both get an expected
payoff of c > v. Following trust, the cost of cooperating for player 2 is d − c > 0.
These payoffs imply that the game has a unique equilibrium: player 2 will never
cooperate if he is trusted since d > c, and in turn player 1 will never trust since
v > 0.
Unlike most trust games (see, e.g., Camerer 2003), there is added noise by
having Nature move after player 2’s cooperative action. The effect is that even
when player 2 acts cooperatively, there is some chance that player 1 will receive
the same low payment he gets from player 2’s choice of defection. Hence, if
player 1 were only to observe his own payoffs, a payoff of 0 can be attributed to
either selfish behavior of player 2, or just bad luck.3
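The payoff structure of this noisy trust game is easy to sketch in code. The following is an illustrative simulation, not part of the paper: the function name and the parameter values (chosen to satisfy v < c < d) are my own.

```python
import random

# Monetary payoffs from Figure 1; illustrative values satisfying v < c < d.
v, c, d, p = 1.0, 2.0, 3.0, 0.5

def play(trust, cooperate, rng):
    """Return (payoff to player 1, payoff to player 2) for one play of the game."""
    if not trust:
        return (v, v)              # player 1 exits with the safe outside option
    if not cooperate:
        return (0.0, d)            # defection: player 1 gets 0 for sure
    if rng.random() < p:
        return (c / p, c)          # cooperation succeeds (probability p)
    return (0.0, c)                # cooperation fails: same 0 as defection

rng = random.Random(0)
draws = [play(True, True, rng)[0] for _ in range(100000)]
# Player 1's expected payoff under trust and cooperation is p*(c/p) = c.
print(sum(draws) / len(draws))  # ≈ c = 2.0
```

The key feature for what follows is visible in the last two branches: a payoff of 0 to player 1 is consistent with both defection and unlucky cooperation.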
2.2 Incomplete Information and Preferences over Shame
To add a dimension of pro-social behavior, I add a simple layer of incomplete
information. Player 1 is assumed to be selfish in the sense that only pecuniary
payoffs matter to him. Player 2, in contrast, can either be selfish, or he can
have some form of pro-social tendencies. As argued in the introduction, one
can think of either guilt or shame as being separate motivators, and as such
a distinction ought to be made. I formalize the difference between guilt and
shame as follows: guilt is associated with the intrinsic cost of “cheating” player 1 after he chose to trust player 2. Shame, in contrast, is associated with the intrinsic cost of having player 1 believe that player 2’s behavior was inadequate.
3 This “technology” is present in Charness and Dufwenberg (2006), but they do not make explicit use of the noise in their theoretical or experimental analysis. Instead, they use it as a form of “hidden action” in that it relates to the standard principal-agent model.
Formally, player 2 is “bad” with probability β, and “good” with probability 1 − β. I abuse the standard terminology by using “bad” for what we usually
refer to as selfish, while “good” is an individual who potentially suffers from
feelings of guilt and shame. This asymmetric introduction of types for player 2
and not for player 1 will be enough to generate some testable hypotheses.4
As an intrinsic motivator, guilt would be a hard dimension to manipulate
externally.5 Therefore, we introduce a type of player that is motivated solely by
shame. The cost of shame is modelled as a positive cost which is proportional to
what player 2 believes is the ex post belief of player 1 about the type of player.6
Namely, if player 2 believes that player 1’s ex post belief about 2 being bad is
given by β 0 ∈ [0, 1], then the cost of shame is β 0 s where s > 0 is the intensity of
shame. In a richer model with a more continuous variation of shame sensitivity,
different people can have different levels of s, where s = 0 would coincide with
pure selfish preferences.
The full game is depicted in Figure 2. Notice that there is a departure
from the standard definition of a game since payoffs are not functions of actions
and types alone, but also of the beliefs that some players have about others. In
particular, player 1’s belief about the type of player 2 directly affects the payoffs
of player 2, which is the way in which the model captures shame. This is related
to the small literature on “Psychological Games,” pioneered by Geanakoplos,
Pearce and Stacchetti (1989).7
4 Of course, one can argue that by not trusting player 2, player 1 may be acting in an
offensive way, which should potentially result in feelings of guilt or shame. I ignore this
possibility for simplicity, though at a later stage this can be incorporated into a richer set of
experiments. In particular, noise can be added after player 1 chooses “trust” in a way that
may cause termination of the game, so that player 2 who is not called upon to move will not
necessarily know whether player 1 was not-trusting, or whether trust was followed by a noisy
exogenous termination.
5 Psychologists have used the method of “priming” to try and modify the intrinsic feelings
of individuals, so as to increase their feeling of guilt. (see XXX.) It is unclear whether priming
can differentially change the marginal “guilt” cost of selfish behavior, which would be necessary
to evaluate guilt as an incentive for pro-social behavior.
6 The cost of guilt can be given by some g > 0 that is incurred when a good player chooses to defect. As mentioned above, I cannot reliably manipulate guilt, and as such, will ignore this potential motivator.
7 In Charness and Dufwenberg (2006) the beliefs of player 1 also enter into the preferences
of player 2, but their notion of “guilt aversion” is based on 2’s incentives not to disappoint
player 1, which is different from the notion of “shame” I introduce here.
Figure 2: A Trust Game with Incomplete Information about Shame
2.3 Exposure and Inference
To complete the structure of the game we endow player 1 with an exogenous
“exposure technology.” This basically refers to player 1’s ability to decipher the
noisy outcome of a payoff of 0, and attribute it either to bad luck, or to an
uncooperative action by player 2. For convenience, I refer to “success” (S) as
the outcome in which player 1 gets a payoff of c/p, which is a consequence of
player 1 choosing to trust, 2 choosing to cooperate, and nature being favorable.
I refer to “failure” (F ) as the outcome of player 1 receiving a payoff of 0, which
is either a consequence (following trust of player 1) of player 2 defecting, or of
player 2 cooperating and nature being unfavorable.
I assume that with probability π ∈ [0, 1] exposure occurs (e = 1) and player
1 will observe the action of player 2. That is, if player 1 chooses to trust player
2, and the outcome is a failure, then with probability π player 1 will learn
whether the payoff of 0 is due to player 2 choosing D, or whether it was because
of nature choosing the payoffs (u1 , u2 ) = (0, c) following player 2’s choice of C.
With probability (1 − π), however, exposure does not occur (e = 0) and player
1 learns nothing about the reason for failure.
To foreshadow the experimental design, it is useful to emphasize two extreme
forms of detection technology, namely π = 0 and π = 1. When π = 0 exposure
never happens and player 1 never learns the source of a payoff of zero, whereas
when π = 1 exposure is certain to happen. Clearly, if player 2 is exposed then he
cannot “hide” behind the noise of bad luck as a source of low payoffs to player
1. As a result, a “good” player 2 will be more easily deterred from choosing
D. This simple intuition is formalized and explored more carefully in the next
section.
3 Equilibrium Analysis
I consider a natural adaptation of sequential equilibrium to the setting of this game where in each subgame, players are playing a best response to their beliefs, and these beliefs are consistent with Bayes’ rule.8 Solving backward, we know that a “bad” player 2 will always choose to defect. The best response of a
“good” player 2 depends on his payoffs, which in turn depend on his actions
and on the beliefs of player 1 (more precisely, the beliefs of player 2 about the
beliefs of player 1). Equilibrium analysis will require the beliefs of player 1
about the type and actions of player 2 to be correct, and that player 2’s beliefs
about the beliefs of player 1 are correct as well.9
Clearly, a “bad” player 2 will never choose to cooperate. Let σ ∈ [0, 1] be the
probability that a good player 2 chooses to cooperate. If player 1 correctly anticipates σ, and if he trusts player 2, then following trust, his ex post conditional
beliefs depend on success (S) or failure (F ), as well as on whether he learns the
action of player 2 through the exposure technology. Let β′(F, e = 0) ∈ [0, 1] be the posterior of player 1 about player 2 being a bad type following a failure, and no exposure of the reason for failure. It follows from Bayes’ rule that,

β′(F, e = 0) ≡ Pr{bad | F, e = 0} = β / (β + (1 − β)[(1 − σ) + σ(1 − p)]) = β / (1 − pσ(1 − β)) ≥ β.
The expression for β′(F, e = 0) is easy to interpret: the beliefs of player 1
following failed trust are worse if the ex ante likelihood of bad types is larger (β
is greater), if the good player 2 is more likely to cooperate (σ is greater), or if
the Nature’s noise is reduced (p is greater).
If, instead, exposure happens, then updating depends on what player 1 learns. If player 1 learns that player 2 chose D, then the Bayesian posterior is given by,

β′(F, e = 1, D) ≡ Pr{bad | F, e = 1, D} = β / (β + (1 − β)(1 − σ)) = β / (1 − σ(1 − β)) ≥ β′(F, e = 0).

Finally, if player 1 observes a success, or if he observes a failure but through exposure learns that player 2 chose to cooperate, then he knows that he is facing a good type, that is, β′(S) = β′(F, e = 1, C) = 0.10
8 The analysis in Battigalli and Dufwenberg (2005, 2007) suggests that adopting the notions of sequential equilibrium to these kinds of games is quite valid.
9 Unlike Battigalli and Dufwenberg (2005, 2007), higher order beliefs will not play a role in the game analyzed.
10 If defection has a probability of success that is positive, then as long as this probability is lower than the probability of success from cooperation, it must be the case that β′(S) = β′(F, e = 1, C) < β and β′(F, e = 1, D) > β′(F, e = 0) > β.
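These posterior expressions are straightforward to verify numerically. The sketch below is my own (with illustrative parameter values); it computes the two non-trivial posteriors and confirms the orderings derived above.

```python
# Ex post beliefs of player 1, following the Bayes-rule expressions above.
# beta: prior probability of the bad type; sigma: probability a good type
# cooperates; p: probability that cooperation succeeds.

def posterior_no_exposure(beta, sigma, p):
    # Pr{bad | F, e = 0} = beta / (1 - p*sigma*(1 - beta))
    return beta / (1 - p * sigma * (1 - beta))

def posterior_exposed_defect(beta, sigma):
    # Pr{bad | F, e = 1, D} = beta / (1 - sigma*(1 - beta))
    return beta / (1 - sigma * (1 - beta))

beta, sigma, p = 0.3, 0.8, 0.6        # illustrative values
b_f = posterior_no_exposure(beta, sigma, p)
b_d = posterior_exposed_defect(beta, sigma)
# Ordering from the text: beta <= beta'(F, e=0) <= beta'(F, e=1, D).
print(beta, round(b_f, 4), round(b_d, 4))
```

Note that with σ = 1, exposed defection is fully revealing (the posterior equals 1), while with σ = 0 an unexposed failure is uninformative (the posterior equals the prior β), exactly as used in the equilibrium analysis below.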
I impose the following belief restriction:
R1: Ex Post Beliefs. The ex post beliefs of player 1 must satisfy Bayes rule
as described above.
This restriction imposes beliefs for player 1 at the end of the game that are
consistent with Bayes rule. The restriction is satisfied in a sequential equilibrium
except for the case where player 1 chooses not to trust player 2, or if σ = 0.
In either of these two cases, reaching a success is a zero probability event and
Bayes rule does not apply. It is for these two cases that R1 imposes a restriction.
Remark 1 A variety of refinements would imply the belief restriction R1.
3.1 No Trust/Defection Equilibrium
I first consider the possibility of an equilibrium with no trust that is supported
by the correct belief that any type of player 2 would defect following trust.
Given R1, if player 1 believes that σ = 0 then a good player’s utility following defection is

u_2^G(D, β′) = d − s(πβ′(F, e = 1, D) + (1 − π)β′(F, e = 0)) = d − sβ

where the second equality follows from R1 and from σ = 0.
If instead a good player 2 would choose to cooperate (and depart from the proposed equilibrium), then his utility following cooperation is

u_2^G(C, β′) = c − spβ′(S) − s(1 − p)(πβ′(F, e = 1, C) + (1 − π)β′(F, e = 0)) = c − s(1 − p)(1 − π)β

where the second equality follows from R1 together with σ = 0.11 Therefore, defection is a best response if

d − sβ > c − s(1 − p)(1 − π)β,

or,

d > c + sβ(π + p − πp)    (1)
This is, of course, straightforward. If shame were not a motivator, then all we would need for defection to be a best response is d > c. However, since the good player is motivated by shame, there has to be a “cost of shame” premium over the pure monetary amounts. In particular, d has to exceed c by sβ(π + p − πp) > 0 for defect to be part of an equilibrium.
Clearly, a higher sensitivity to shame (a higher s) will cause this premium to
increase. It is interesting to note two other intuitive comparative statics about
this premium. First, this premium increases in π, the likelihood of exposure.
11 Recall that σ = 0 is used to determine the beliefs of player 1, despite the fact that player 2 departed from the proposed equilibrium.
This intuitively follows because more exposure carries more shame from defection. Second, this premium increases in p, the likelihood that nature does not
cause bad luck. This intuitively follows because if nature is more “reliable”, then
it appears that player 2 would bear more responsibility for failed outcomes.
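Condition (1) and these comparative statics can be illustrated with a short numerical sketch. The helper names and parameter values below are my own, not from the paper.

```python
def shame_premium(s, beta, p, pi):
    """The 'cost of shame' premium sβ(π + p − πp) from condition (1)."""
    return s * beta * (pi + p - pi * p)

def defect_is_best_response(d, c, s, beta, p, pi):
    # Condition (1): d > c + sβ(π + p − πp)
    return d > c + shame_premium(s, beta, p, pi)

s, beta, c, d = 2.0, 0.3, 2.0, 3.0    # illustrative values
# The premium increases in π (more exposure carries more shame) and in p
# (a more reliable nature makes player 2 more responsible for failures):
print(shame_premium(s, beta, p=0.5, pi=0.0))  # 0.3
print(shame_premium(s, beta, p=0.5, pi=1.0))  # 0.6
```

With these numbers, d − c = 1.0 exceeds the premium in both cases, so defection remains a best response; raising s or β would eventually overturn it.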
I continue the analysis with the parametric assumption that guarantees some trust in equilibrium, namely that (1) is violated:
A1:
d < c + sβ(π + p − πp)
The following claim is therefore immediate:
Claim 2 “No trust” followed by “Defect” of all types cannot be an equilibrium
if and only if A1 is satisfied.
3.2 Trust/Cooperation Equilibrium
Continuing with A1, player 1’s utility from trusting player 2 is obviously increasing in σ, the probability that a good player 2 will choose to cooperate.
Therefore, player 1’s willingness to trust is highest when he believes ex ante
that σ = 1, in which case he would be willing to trust player 2 if β is not too
high (the likelihood of a bad type is not too high). In particular, if σ = 1 then
player 1’s strict best response is to trust if and only if
β × 0 + (1 − β)c > v
or β < (c − v)/c. This of course is obvious: if the likelihood of a bad player 2 is too
high, then even impeccable behavior by a good player 2 will not induce player
1 to trust player 2. The following claim is immediate:
Thus, to allow for some trust in equilibrium I must continue with the following assumption:

A2: β < (c − v)/c

Claim 3 Trust by player 1 can be part of an equilibrium if and only if A2 is satisfied.
If player 1’s ex ante belief is that σ = 1, then his ex post beliefs must satisfy,

β′(F, e = 0) = β / (1 − p(1 − β)) > β,
β′(F, e = 1, C) = β′(S) = 0,
β′(F, e = 1, D) = 1
Hence, player 2’s utility from cooperating is,

u_2^G(C, β′) = c − spβ′(S) − s(1 − p)(πβ′(F, e = 1, C) + (1 − π)β′(F, e = 0)) = c − s(1 − p)(1 − π) · β / (1 − p(1 − β))
If instead player 2 departs from the proposed equilibrium and chooses to defect, then the posterior beliefs imply

u_2^G(D, β′) = d − s(πβ′(F, e = 1, D) + (1 − π)β′(F, e = 0)) = d − s(π + (1 − π) · β / (1 − p(1 − β)))
Cooperation is therefore a best response if and only if

c − s(1 − p)(1 − π) · β / (1 − p(1 − β)) > d − s(π + (1 − π) · β / (1 − p(1 − β))),

or,

d < c + s · (π(1 − p) + pβ) / (1 − p(1 − β))    (2)
As for the no-trust analysis above, this inequality is straightforward. If the expected shame costs of defection, s(π(1 − p) + pβ)/(1 − p(1 − β)), are large enough then they outweigh the monetary benefit of defection, d − c.
The comparative statics on the expected shame costs of defection are similar to those discussed above for the shame premium in a defection equilibrium. A higher sensitivity to shame (a higher s) will cause these costs to increase. Also, these costs increase in π, the likelihood of exposure, and in p, the likelihood that nature does not cause bad luck.12
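The cooperation condition (2) can likewise be checked numerically. The sketch below is my own (illustrative values); it computes the expected shame cost of defection and verifies that its slope in p is positive, consistent with the derivative given in footnote 12.

```python
def shame_cost_of_defection(s, beta, p, pi):
    """s(π(1 − p) + pβ) / (1 − p(1 − β)): expected shame cost in condition (2)."""
    return s * (pi * (1 - p) + p * beta) / (1 - p * (1 - beta))

def cooperate_is_best_response(d, c, s, beta, p, pi):
    # Condition (2): d < c + s(π(1 − p) + pβ)/(1 − p(1 − β))
    return d < c + shame_cost_of_defection(s, beta, p, pi)

s, beta, pi = 2.0, 0.3, 0.5           # illustrative values
# Numerical slope in p should be positive, matching (β − πβ)/(1 − p(1 − β))² > 0:
eps = 1e-6
slope = (shame_cost_of_defection(s, beta, 0.5 + eps, pi)
         - shame_cost_of_defection(s, beta, 0.5, pi)) / eps
print(slope > 0)  # True
```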
It turns out that our previous analysis of conditions under which a no-trust equilibrium will not exist is useful vis-à-vis conditions for a trust equilibrium to exist:

Claim 4 A1 implies (2).

Proof. It suffices to show that β(π + p − πp) < (π(1 − p) + pβ)/(1 − p(1 − β)). We have,

(π(1 − p) + pβ)/(1 − p(1 − β)) − β(π + p − πp)
= [π(1 − p) + pβ − β(π + p − πp)(1 − p(1 − β))] / (1 − p(1 − β))
= [π − πp + pβ − βπ − βp + βπp + βπp + βp² − βπp² − β²πp − β²p² + β²πp²] / (1 − p(1 − β))
= [π(1 − β)(1 − p) + βπp(1 − β)(1 − p) + p²β(1 − β)] / (1 − p(1 − β)) > 0
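The algebra in the proof can be double-checked numerically. This sketch (my own) verifies over a grid of interior parameter values that the A1 premium never exceeds the shame cost appearing in (2).

```python
# Grid check of Claim 4: A1 (d < c + sβ(π + p − πp)) implies
# condition (2) (d < c + s(π(1 − p) + pβ)/(1 − p(1 − β))).
# The common factor s cancels, so it is omitted from both sides.

def premium_A1(beta, p, pi):
    return beta * (pi + p - pi * p)

def cost_2(beta, p, pi):
    return (pi * (1 - p) + p * beta) / (1 - p * (1 - beta))

grid = [i / 10 for i in range(1, 10)]        # interior values in (0, 1)
ok = all(premium_A1(b, p, q) <= cost_2(b, p, q)
         for b in grid for p in grid for q in grid)
print(ok)  # True: the A1 premium is bounded by the shame cost in (2)
```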
Intuitively, (2) is implied by A1 because when beliefs about the action of good types are better (higher σ), the Bayes updating after receiving a payoff of zero is more severe (higher β′), which creates stronger incentives for a good type to cooperate. Thus, if A1 is satisfied, incentives are provided for a good player 2 even under the worst beliefs about σ, which means that incentives are even stronger for higher values of σ.
12 Taking the derivative of (π(1 − p) + pβ)/(1 − p(1 − β)) with respect to p yields (β − πβ)/(1 − p(1 − β))² > 0.
Remark 5 If A1 is violated but (2) is satisfied, then there will be multiple equilibria (both pure strategy equilibria discussed above and a mixed strategy equilibrium). In that case the comparative statics discussed below will still be valid.
4 Empirical Implications
To fix ideas, let β satisfy A2 and imagine that there may be a distribution of
good types with variation over s. Then, given any fixed monetary payoffs of v,
c and d, it is possible to consider comparative statics on the effect of changes in
π, the exposure technology, on the equilibrium play of the game.
4.1 The Power of Shame
Recall from the analysis above that the “cost of shame” premium that was
identified in (1) and (2) is increasing in the likelihood of exposure, π. Hence, it
immediately follows that,
Corollary 6 As π increases, the set of parameters for which cooperate is part
of the unique equilibrium increases, and the set of parameters for which defect
is part of any equilibrium decreases.
This can be best seen by considering condition A1. For any given set of
parameters, an increase in π will make it more likely that A1 is satisfied. This
implies the following testable hypothesis:
Hypothesis 1: The Power of Shame. If shame plays a role in the decision
to cooperate, an increase in π should cause (weakly) more cooperation by
players 2.
I coin this hypothesis the “power of shame” since it identifies a manipulable
intervention that affects the mechanism of shame that has been demonstrated in
the model. Notice that if people are motivated by other social preferences that
are not dependent on the beliefs of others, e.g., altruism, fairness, reciprocity
and self identity, then changes in π should not have an effect on the outcomes
observed. This is the first prediction that differentiates the theory of shame
from other social preferences.
4.2 The Rationality of Trust
The “power of shame” conclusion in Corollary 6 has implications about the behavior of player 2 in equilibrium: a higher exposure to shame will create stronger incentives to cooperate. As a result, equilibrium analysis implies that player 1 should anticipate the “power of shame” with a response of, loosely speaking, more trusting behavior. More precisely, for any given set of parameters, an increase in π will make it more likely that A1 is satisfied. As a result, a rational player 1 will trust player 2. It immediately follows that,
Corollary 7 As π increases, the set of parameters for which trust is part of
the unique equilibrium increases, and the set of parameters for which no-trust
is part of any equilibrium decreases.
This implies the following testable hypothesis:
Hypothesis 2: The Rationality of Trust. If shame plays a role in the decision to cooperate, an increase in π should cause (weakly) more trust by
players 1.
I coin this hypothesis the “rationality of trust” since it identifies a manipulable intervention that affects the incentives to trust through the mechanism
of shame. Thus, if shame is a motivator for player 2 then changes in π should
not only have an effect on the choice of player 2, but they should also, through
rational expectations, have an effect on the choice of player 1. This is the second
prediction that differentiates the theory of shame from other social preferences.
5 Experimental Design
The experimental design follows the theoretical analysis described above, and is
based very closely on the experiment used in Charness and Dufwenberg (2006).
Sessions were conducted at UC Berkeley’s X-Lab, in a large classroom divided
into two sides by a center aisle. Participants were seated at private tables with
dividers between them. Twelve sessions were conducted: two pilot sessions
with one treatment each, four sessions with four treatments, and six sessions
with six treatments. There were 10-30 participants per session. No one could
participate in more than one session. Average earnings were $x (including a $7
show-up fee), and each session took about 45 minutes to one hour.
In each session, participants were referred to as “A” or “B” (for players 1
and 2, respectively). A coin was tossed to determine which side of the room
would be A players and which side B players. Personal identification numbers were
assigned to participants who were informed that these numbers would be used
to determine pairings (one A with one B), to track decisions and to determine
payoffs.
The game played in all treatments had the same structure as in Figure 1, with
the parameters v = 5, c = 10, d = 14 and p = 5/6. In each treatment, A
players received a sheet with two options, "In" (equivalent to "trust" in Figure
1) and "Out" (equivalent to "no-trust"). B players received a sheet with two
options, "Roll" (equivalent to "cooperate") and "Don't Roll" (equivalent to
"defect").13
In each treatment, first A’s recorded their choices and their sheets were
collected. Next, B's chose between Roll and Don't Roll for a 6-sided die. B made
this choice without knowing A's actual choice of In or Out, but the instructions
explained that B's choice would be irrelevant if A chose Out. This guarantees
an observation for every B player. After the decisions of B’s were recorded
and collected, a 6-sided die was rolled (by me) for each B. This was carefully
explained to the participants in advance, to allow for anonymity of B’s who chose
Don’t Roll. This roll was relevant if and only if (In, Roll) had been chosen. The
outcome corresponding to a success occurred only if the die came up 2, 3, 4, 5,
or 6 after a Roll choice (hence, p = 5/6).
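To make the payoff structure concrete, the expected payoffs of each action profile can be tabulated. A minimal sketch, assuming that A's payoff after a successful (In, Roll) outcome is 12, as in the Charness and Dufwenberg (2006) design this experiment follows; the text above pins down only v, c, d and p:

```python
from fractions import Fraction

# Parameters from the experimental design: outside option v, B's payoff
# from Roll (c) and from Don't Roll (d), and the success probability p.
v, c, d = 5, 10, 14
p = Fraction(5, 6)

# Assumption: A's payoff after (In, Roll, success) is 12, as in
# Charness and Dufwenberg (2006); the text above does not restate it.
a_success = 12

payoffs = {
    ("Out", None): (v, v),                # A keeps the outside option
    ("In", "Don't Roll"): (0, d),         # B defects: A gets nothing
    ("In", "Roll"): (p * a_success, c),   # expected payoffs: die may come up 1
}

exp_a, exp_b = payoffs[("In", "Roll")]
print(exp_a, exp_b)   # A's expected payoff from (In, Roll) is p * 12 = 10
```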
The first four sessions had four treatments each as follows:
1. Stranger-Noise (SN): In this treatment participants did not know who
they were matched with, nor did A-players learn why they received a
payoff of zero if they did (π = 0).
2. Stranger-No-Noise (SNN): In this treatment participants did not know
who they were matched with, but A-players did learn why they received
a payoff of zero if they did (π = 1).
3. Partner-Noise (PN): In this treatment participants did know who they
were matched with (pairs of ID numbers were announced before the decisions and each pair acknowledged each other by standing). A-players did
not learn why they received a payoff of zero if they did (π = 0).
4. Partner-No-Noise (PNN): In this treatment participants knew who they
were matched with (as in PN) and A-players did learn why they received
a payoff of zero if they did (π = 1).
The next six sessions included the following two treatments:
5. Stranger-Public (SP): In this treatment participants did not know who
they were matched with. After the sheets were collected for each player B,
I publicly announced what that player B had chosen. (This is like π = 1
for all player A’s, without knowing who they were matched with).
6. Partner-Public (PP): In this treatment participants did know who they
were matched with. After the sheets were collected for each player B, I
publicly announced what that player B had chosen.
Starting with the first four treatments, there is a clear informational ranking
between treatments PN and PNN: this is exactly the game described in the
theoretical analysis. Hence, the theory predicts that there will be more cooperation
(the power of shame) and more trust (the rationality of trust) in treatment
PNN.

13 It is customary not to use "loaded" words such as "trust" or "defect" for the actual
experiment, and these are the exact terms used in Charness and Dufwenberg (2006).
The same comparison applies for SN versus SNN, but now the shame is
“shared” among all the B players, implying a kind of “free rider” problem in
which the power of shame is not as severe as it would be if identities were
known. To see this, imagine there were two players of each role, with priors β
as in Figure 2. If players are matched randomly then a player 1 who receives
a payoff of zero believes that he was matched with each of the two players 2
with equal probability, and as such the Bayes updating is "diluted" across the
two players. The theory therefore implies that both cooperation and trust will
be weaker in the stranger settings than in the partner settings, other things equal.
However, since other things are not equal it is not possible to infer the ranking
of cooperation and trust between the SNN and PN treatments. What is known
is that these will both have higher levels of cooperation and trust compared to
SN, and lower levels compared to PNN.
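The dilution of the Bayes update can be illustrated with a small sketch. It assumes, purely for illustration, that good types always cooperate and bad types always defect, so that in the partner setting a zero payoff fully reveals the partner's type, while in the stranger setting the inference is split between two candidate partners:

```python
from fractions import Fraction

def posterior_good(beta, matched_known):
    """Posterior that a specific B player is the good (cooperating) type
    after A observes a defection.

    Simplifying assumption: good types always cooperate and bad types
    always defect, so a defection fully reveals the matched partner's
    type.  With two B players and random matching, the update on any one
    of them is diluted.
    """
    if matched_known:        # partner treatment: the defector is identified
        return Fraction(0)
    # Stranger treatment: the defector is one of two B's, each matched
    # with probability 1/2:
    #   P(B1 good | defect) = beta * (1 - beta)/2 / (1 - beta) = beta/2
    return beta / 2

beta = Fraction(2, 3)
print(posterior_good(beta, matched_known=True))    # full revelation: 0
print(posterior_good(beta, matched_known=False))   # diluted update: 1/3
```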
The last two treatments are an attempt to disentangle two effects: a possible
“partner-effect” of having an identity of a partner revealed, and a “public shame
effect”, of having the action of player 2 announced to all the participants in the
room. If shame is the primary motivator, then it is the announcement and
not the partnering that should matter, implying that the results in SP and PP
should be identical. Intuitively, one might expect the behavior in the public
announcement treatments to be somewhat more cooperative since then shame
is exposed to all the players in the room. However, if the beliefs of players
are that once the “word is out” then it percolates throughout society, then the
results of SP and PP will be similar to PNN.
6 Experimental Results

6.1 The Power of Shame
Focusing on the "trustee," or player B, the experimental results corroborate the
predictions of the theory. Table 1 presents the exact means and confidence
intervals for each of the 6 treatments (SN, SNN, PN, PNN, SP, PP), based on
the binomial distribution.
Variable   Obs   Mean   Std. err.   Low 95%   High 95%
SN          85   .200        .043      .121       .301
SNN         85   .376        .053      .274       .488
PN          85   .400        .053      .295       .512
PNN         85   .753        .047      .647       .840
SP          51   .725        .062      .583       .841
PP          51   .784        .058      .647       .887
Table 1: Percent of B players who chose Roll (Cooperate)
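The exact (Clopper-Pearson) intervals reported in Table 1 can be reproduced from the binomial counts. A sketch using SciPy, with the SN cell (17 of 85 B players choosing Roll) as input:

```python
from scipy.stats import beta as beta_dist

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for k
    successes in n trials, the kind of interval reported in Table 1."""
    lo = 0.0 if k == 0 else beta_dist.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta_dist.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# SN treatment: 17 of 85 B players chose Roll (mean .200)
lo, hi = clopper_pearson(17, 85)
print(round(lo, 3), round(hi, 3))   # close to the .121 / .301 of Table 1
```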
The results of Table 1 are also shown in the figure below. Interestingly, the
behavior of B players in the two public treatments PP and SP is statistically
the same, and it is also the same as in PNN. This suggests that partnering per
se does not affect behavior, and that exposure to one player seems to have the
same shame effects as exposure to the whole group.
[Figure: Percentage of B players who chose Cooperate in the six treatments; means and 95% confidence intervals]
Table 2 shows the results from a linear regression for the behavior of the B
players. With mutually exclusive treatments, the predicted values of yi will be
the expected value of Y conditional on the treatments, which is just the
proportion of B players that chose Roll. Standard errors are robust Huber-White
standard errors to account for the heteroskedastic variance of the yi values.14
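The equivalence between the dummy regression and the cell proportions, and the heteroskedasticity noted in the footnote, can be checked directly. A sketch on simulated data (not the experimental data), with cooperation rates loosely matching Table 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data: binary Roll choices under four mutually
# exclusive treatment dummies, with cooperation rates near Table 1's.
treatments = ["SN", "SNN", "PN", "PNN"]
probs = {"SN": 0.20, "SNN": 0.38, "PN": 0.40, "PNN": 0.75}
n = 85

blocks_y, blocks_X = [], []
for j, t in enumerate(treatments):
    y_t = (rng.random(n) < probs[t]).astype(float)
    X_t = np.zeros((n, len(treatments)))
    X_t[:, j] = 1.0                      # dummy for treatment t
    blocks_y.append(y_t)
    blocks_X.append(X_t)
y = np.concatenate(blocks_y)
X = np.vstack(blocks_X)

# OLS without an intercept on mutually exclusive dummies: each
# coefficient is the sample proportion of Roll in that treatment.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Huber-White (HC0) sandwich standard errors; with exclusive dummies
# these coincide with the Bernoulli variance p(1-p)/n per cell.
resid = y - X @ coefs
XtX_inv = np.linalg.inv(X.T @ X)
V = XtX_inv @ X.T @ np.diag(resid ** 2) @ X @ XtX_inv
robust_se = np.sqrt(np.diag(V))

print(dict(zip(treatments, coefs.round(3))))
print(dict(zip(treatments, robust_se.round(3))))
```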
Column (1) contains the analysis for the first four treatments for all sessions.
Column (2) contains treatments five and six (SP and PP, respectively) for the
last six sessions. Both (1) and (2) yield almost identical results to the binomial
analysis above. Column (3) pools all six treatments for the subjects in the last
six sessions.15
The F-tests, assessing whether the dummies for the treatments are jointly zero,
appear at the bottom of the table. The dummies are jointly significant in the
analyses in columns (1) through (3); more precisely, given p-values of almost
zero, we can reject the hypothesis that the dummies are jointly equal to zero.

14 More precisely, Yi ∼ Bernoulli(p) with p = δjTj for j = 1, ..., 6, so that
E[Yi | Tj, j = 1, ..., 6] = p = δjTj, and Var(Yi | Tj, j = 1, ..., 6) = p(1 − p),
which is therefore heteroskedastic.

15 The subjects who underwent only the four treatments without SP and PP drop out of
the regression analysis when all treatments are pooled.
Dependent variable: Player B chose Roll (Cooperated)
                 (1)         (2)         (3)
SN             0.200          -        0.196
            (0.044)**               (0.056)**
SNN            0.377          -        0.411
            (0.052)**               (0.070)**
PN             0.400          -        0.431
            (0.053)**               (0.070)**
PNN            0.753          -        0.804
            (0.047)**               (0.056)**
SP               -         0.725       0.725
                        (0.063)**   (0.063)**
PP               -         0.784       0.784
                        (0.058)**   (0.058)**
Test         F(4,336)    F(2,100)    F(6,300)
F-stat value    95.93      156.98      100.68
Significance   0.0000       0.000       0.000
N                 340         102         306
Robust standard errors (which equal WLS errors in this case)
** significant at 5% level
Table 2: Behavior of B players
***TO BE COMPLETED***
6.2 The Rationality of Trust
Turning to the "trustor," or player A, the experimental results again corroborate
the predictions of the theory. Table 3 presents the exact means and confidence
intervals for each of the 6 treatments (SN, SNN, PN, PNN, SP, PP), based on
the binomial distribution.
[Figure 1: Percentage of A players who chose Trust in the six treatments; means and 95% confidence intervals]
Treatment   Obs   Mean   Std. err.   Low 95%   High 95%
SN           86   .302        .050      .208       .411
SNN          86   .558        .054      .447       .665
PN           86   .547        .054      .435       .654
PNN          86   .744        .047      .639       .832
SP           53   .660        .065      .517       .785
PP           53   .736        .061      .597       .847
Table 3: Percent of A players who chose In (Trust)
The results of Table 3 are also shown in Figure 1.
Dependent variable: Player A chose In (Trust)
                 (1)         (2)         (3)
SN             0.302          -        0.340
             (0.05)**               (0.066)**
SNN            0.558          -        0.585
            (0.054)**               (0.068)**
PN             0.547          -        0.604
            (0.054)**               (0.068)**
PNN            0.741          -        0.660
            (0.048)**               (0.066)**
SP               -         0.660       0.660
                        (0.066)**   (0.066)**
PP               -         0.736       0.736
                        (0.061)**   (0.061)**
Test         F(4,339)    F(2,106)    F(6,312)
F-test value    121.8      122.98       87.72
Significance   0.0000       0.000       0.000
N                 343         106         318
Robust standard errors (which equal WLS errors in this case)
** significant at 5% level
***TO BE COMPLETED***
6.3 Monotonicity and Reciprocity
The comparative statics derived from the theory go beyond the average statistics
of the proportion of cooperators and trusting players. Namely, behavior
should be monotonic: the more shame there is (the higher π, or knowing one's
opponent), the higher is the likelihood that a player B will choose cooperate and
that a player A will choose trust. This goes a step beyond comparing means as
in the tables above, since it predicts monotonicity at the individual level, which
can be tested using the fact that there are at least 4 observations for each
individual. The figure below offers a simple description of the individual-level
monotonicity vis-a-vis the experimental treatments.
[Figure: Monotonic behavior across treatments (axes: more cooperation, more trust)]
To keep the environment constant, consider the subset of players that includes those who have gone through four treatments, and those who have gone
through six but with the public announcements as the last two treatments. This
offers us a set of individuals who were exposed to the same conditions, but with
the order of these conditions varying.
To investigate whether monotonicity is violated, we first need to define what a
violation of monotonicity for an individual would be. For players B, cooperating
in SN and not in SNN, PN or PNN would constitute a violation. The reason is
that there is more informational content in SNN, PN or PNN as compared
with SN, so if the power of shame is strong enough for the SN treatment, it must
be so for the other three treatments. Similarly, cooperating in SNN or PN and
not in PNN would constitute a violation.
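The violation check just described can be stated as a short procedure. A sketch for B players, with the informational ranking taken from the text (SN below SNN and PN, all three below PNN); the `choices` mapping is an illustrative encoding of one subject's play, not the experimental data format:

```python
# Informational ranking used in the text: SN is dominated by SNN, PN
# and PNN, and SNN and PN are each dominated by PNN.
DOMINATES = {"SN": ["SNN", "PN", "PNN"], "SNN": ["PNN"], "PN": ["PNN"]}

def b_violations(choices):
    """Return the monotonicity violations in one B player's choices.

    `choices` maps a treatment name to True (Roll/cooperate) or False
    (Don't Roll/defect).  A violation is cooperating in a treatment but
    not in one with strictly more exposure to shame.
    """
    return [(lo, hi) for lo, his in DOMINATES.items() for hi in his
            if choices.get(lo) and choices.get(hi) is False]

# Cooperating in SN, SNN and PN but defecting in PNN violates
# monotonicity with respect to PNN three times:
print(b_violations({"SN": True, "SNN": True, "PN": True, "PNN": False}))
```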
For players A a violation of monotonicity is more subtle: they may learn
something in the middle of the experiment when the treatment is SNN or PNN.
That is, if a player trusted in the SNN treatment and learned that he was defected
against, then his posterior over the distribution of types is worse, and he may
then not trust in a treatment with more information. This in turn implies
that a violation of monotonicity by an A player can be rationalized by Bayes
updating: if player A trusted in the SNN treatment and observed a defection,
Bayes updating can imply that he will not trust a player B in the PN treatment.
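The belief-reversal logic can be made concrete with a two-state sketch. The parameterization (a population share of good types that is either high or low) is purely illustrative; the point is only that one observed defection lowers the expected share of cooperators among future partners:

```python
from fractions import Fraction

def updated_share(q, high, low):
    """Posterior expected share of cooperating types after observing one
    defection, when the population share itself is uncertain.

    Illustrative assumption: the share of good types is `high` with
    prior probability q and `low` otherwise.  A defection makes the
    low-share state more likely, so A becomes more pessimistic about
    *any* future partner, which can rationalize a reversal from trust
    to no-trust.
    """
    p_defect = q * (1 - high) + (1 - q) * (1 - low)
    q_post = q * (1 - high) / p_defect        # Bayes: P(high share | defect)
    return q_post * high + (1 - q_post) * low

q, high, low = Fraction(1, 2), Fraction(3, 4), Fraction(1, 4)
prior = q * high + (1 - q) * low              # expected share before play
post = updated_share(q, high, low)
print(prior, post)    # the expected share of good types falls: 1/2 -> 3/8
```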
Turning to the actual play, in the four treatments SN, SNN, PN and PNN,
players B exhibit a high level of monotonicity: only 3 out of 51 individuals
violate monotonicity. In these same four treatments, players A have
significantly more violations: 20 out of 53. It turns out that there is a positive
and significant correlation between A players whose trust was abused and the
same players exhibiting a non-monotonicity after learning about their partner's
defection. More important, out of the 20 violations, 19 are consistent with
belief reversal, i.e., they occur after trust in the SNN treatment.
An obvious question is whether a taste for reciprocity can explain this reversal
behavior (e.g., Falk and Fischbacher (2006)). It is possible, but this is,
in my opinion, less convincing because of the random pairings. For reciprocity
to be a motive, the players would have to believe that the probability of
re-matching is high enough, which is at the least questionable with group sizes
of 6 to 10 pairs.
7 Discussion

7.1 Shame and Reputational Concerns
TO BE COMPLETED
7.2 Beyond Trust Games
TO BE COMPLETED
7.2.1 Dictator Games
• Dictator games and exit options (Dana, Cain and Dawes (2005), Lazear,
Malmendier and Weber (2006)). (Also consistent with reference groups
like in Fehr and Schmidt (1999))
7.2.2 Ultimatum Games
Ultimatum games are more complex than dictator games in that explaining the
commonly observed experimental results suggests that some form of reciprocity
may be present. Recall that in such games player 1, the proposer, offers some
split of some pot of money x, say x1 for himself and x2 = x − x1 for player
2. Player 2 then responds by either accepting the offer, after which the sums
are distributed, or rejecting it, after which both players receive no money. It
is well known from many experiments that proposers typically leave significant
amounts of x2 for the responder, and the latter typically rejects offers of x2 that
are low (usually less than a third). See Camerer (2003) for a detailed survey.
It is possible that the interpretation of preferences for shame as a reduced
form “shortcut” for reputational concerns would be consistent with both first
players leaving money for responders, and responders rejecting low payments.
The challenge is to devise an experiment that would distinguish between reciprocity and preferences to appear strong, i.e., a shame not to be perceived as
weak.
A similar use of monitoring noise is possible. In particular, imagine that
after a responder rejects an offer, with some probability Nature will reverse
that decision and make it appear as if the responder accepted the offer. Now
one can manipulate whether or not the proposer observes the choices of the
responder and of Nature, or whether he only observes the payoffs. If rejecting
low offers is a consequence of some kind of shame-reputational concern, then
when noise about actions is introduced, responders would be more likely to
accept low offers.
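The proposed noise treatment can be quantified with a one-line Bayes calculation: how often would an apparent "accept" mask an actual rejection? The rejection rate and noise level below are hypothetical, since the design above leaves them unspecified:

```python
from fractions import Fraction

def p_reject_given_shown_accept(p_reject, noise):
    """Probability that an offer displayed as 'accepted' was in fact
    rejected, when Nature reverses a rejection with probability `noise`
    and the proposer only observes payoffs.

    Both arguments are hypothetical parameters of the proposed design.
    """
    p_shown_accept = (1 - p_reject) + p_reject * noise
    return p_reject * noise / p_shown_accept

# With half of low offers rejected and noise 1/3, a quarter of the
# "accepts" the proposer sees actually conceal a rejection.
print(p_reject_given_shown_accept(Fraction(1, 2), Fraction(1, 3)))
```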
7.2.3 Voting and Public Goods
Since the description of the "Voting Paradox" by Downs (1957), there have
been many attempts to offer a rational-choice explanation for the fact that
many people turn out to vote despite the fact that their vote does not really
matter, and that voting itself is costly. Common explanations have been similar
to the conventional ideas behind pro-social behavior: people care about the
"public good" through some sense of responsibility, and hence they show up at
the ballot box. This implies that if access to voting becomes easier, turnout
should clearly not decrease, and most likely would increase.
In an interesting recent paper, Funk (2006) investigates a unique natural
experiment that offers facts that fly in the face of this conventional wisdom. She
collected data from Swiss elections before and after mail-in voting was
introduced. Clearly, the possibility of mail-in voting must have reduced voting
costs substantially. However, not only did overall turnout not increase, but voter
turnout decreased more in the smaller communities. She also shows that in
communities where poll-station operating windows were smaller (interpreted
as an increase in the cost of voting due to the lack of flexible hours), turnout
decreased more.
Funk (2006) resorts to a signalling model of voters who wish to be seen as
public minded. The kind of preferences for shame of not wanting to look selfish
(not caring to vote) would also do the trick and offer similar comparative statics.
Namely, in smaller communities, or in those with short poll operating windows,
it is more likely that someone you know will be at the station at any given time.
As such, not voting and offering a lie as to when you were there is more likely
to be detected as compared with large communities and flexible hours. By
offering mail-in voting you can credibly say that you sent it in, and this noise
will be the perfect cover for the selfish, cost-minimizing voter.
Generally, it is easy to see how one can manipulate experimental games of
public good contributions in a similar way to test whether preferences for shame
appear to play a role in these games as well.
7.3 Implications for Strategy and Policy
TO BE COMPLETED
• Policy:
— shame as a deterrent (“Johns” on billboards);
— voting
• Strategy:
— shame as an incentive mechanism
— Not-for-profits: fund-raising in churches and synagogues
8 References
Andreoni, James (1990) "Impure Altruism and Donations to Public Goods: A
Theory of Warm Glow Giving," Economic Journal 100:464-77.

Andreoni, James and B. Douglas Bernheim (2007) "Social Image and the 50-50
Norm," mimeo.

Battigalli, Pierpaolo and Martin Dufwenberg (2005) "Dynamic Psychological
Games," mimeo.

Battigalli, Pierpaolo and Martin Dufwenberg (2007) "Guilt in Games," mimeo,
University of Arizona.

Benabou, Roland and Jean Tirole (2006) "Incentives and Prosocial Behavior,"
American Economic Review 96(5):1652-1678.

Berg, Joyce, John Dickhaut, and Kevin McCabe (1995) "Trust, Reciprocity,
and Social History," Games and Economic Behavior 10:122-142.

Camerer, Colin (2003) Behavioral Game Theory: Experiments in Strategic
Interaction, Princeton University Press, Princeton, NJ.

Charness, Gary and Martin Dufwenberg (2006) "Promises and Partnership,"
Econometrica 74(6):1579-1601.

Dana, J., D. M. Cain and R. Dawes (2005) "What You Don't Know Won't Hurt
Me: Costly (but Quiet) Exit in Dictator Games," forthcoming,
Organizational Behavior and Human Decision Processes.

Downs, Anthony (1957) An Economic Theory of Democracy, Harper and Row,
New York.

Falk, Armin and Urs Fischbacher (2006) "A Theory of Reciprocity," Games
and Economic Behavior 54(2):293-315.

Fehr, Ernst and Klaus M. Schmidt (1999) "A Theory of Fairness, Competition
and Cooperation," Quarterly Journal of Economics 114:817-68.

Funk, Patricia (2006) "Modern Voting Tools, Social Incentives and Voter
Turnout: Theory and Evidence," mimeo, Universitat Pompeu Fabra.

Geanakoplos, John, David Pearce and Ennio Stacchetti (1989) "Psychological
Games and Sequential Rationality," Games and Economic Behavior 1:60-79.

Kreps, David M. and Robert B. Wilson (1982) "Sequential Equilibrium,"
Econometrica 50(4):863-894.

Lazear, Edward, Ulrike Malmendier and Roberto Weber (2006) "Sorting in
Experiments with Application to Social Preferences," mimeo.

Rabin, Matthew (1993) "Incorporating Fairness into Game Theory and
Economics," American Economic Review 83(5):1281-1302.

Tangney, June (1995) "Recent Advances in the Empirical Study of Shame and
Guilt," American Behavioral Scientist 38(8):1132-45.
9 Appendix

9.1 Experiment Instructions