
STAG HUNT
by
John Bryant*
Rice University
© 2013
Since the Financial Crisis of 2007 and the Great Recession there has been a natural
heightening of interest in coordination-failure. This has focused interest on the seminal Stag
Hunt experiments. My analysis starts with specifications of the related conundrums Prisoner’s
Dilemma, Chicken, and Stag Hunt itself, in that most spare, yet most interesting, class of games,
the 2x2 ordinal games. With the modern experimental coordination game results, an insightful
new 2x2 taxonomy is enabled. Having these fundamentals in hand, attention turns to the
perplexingly complex behavior and dynamics in the greater richness of the multiple player,
multiple strategy Stag Hunt.
Lucas Cranach the Elder,
Stag Hunt of Elector Friedrich III the Wise (1529)
[Photo in the Public Domain]
*John Bryant, Dept. of Economics-MS 22, Rice University, P. O. Box 1892, Houston, Texas
77251-1892, USA, jbb@rice.edu, phone 713-348-3341, fax 713-348-5278 (kindly clearly mark
“for John Bryant”)
“… post the 2008-09 crisis, the world economy is pregnant with multiple equilibria—self-fulfilling outcomes of pessimism or optimism, with major macroeconomic implications.
Multiple equilibria are not new. We have known for a long time about self-fulfilling bank runs;
this is why deposit insurance was created.” (Olivier Blanchard, Economic Counsellor and Chief
Economist at the International Monetary Fund, 2011, emphasis his)
Introduction
Since the advent of the Financial Crisis of 2007 and the Great Recession there has been a
natural heightening of interest in examining possible explanatory factors for such shattering
events. In Game Theory the great conundrums of the Prisoner’s Dilemma, Chicken, and Stag
Hunt games seem to have the potential to at least provide valuable insights into this crisis. The
Prisoner’s Dilemma is characterized by a Nash equilibrium Pareto dominated by a non-equilibrium
outcome. This has long been recognized as at least a problematic characterization of social
interaction. Chicken is characterized by Pareto non-comparable Nash equilibria; such games are conflict
games, as famously studied by Thomas Schelling (1960, 1980). Few doubt the relevance of this
insight to many situations, but it too does not specify a clear mechanism for occasional
collapse. The Stag Hunt, however, is characterized by Pareto ranked Nash equilibria. This
suggests switching between unambiguously good and unambiguously bad equilibria due to
switches between the related equilibrium beliefs. This approach was motivated by Robert
Lucas’s microfoundations and rational expectations revolution, and Keynes’s unemployment
equilibria and animal spirits (Bryant, 1983, 1987). Thus Stag Hunt seems a natural source for
insights on crises, but consideration of the three conundrums together sharpens those insights.
All three of these conundrums have canonical representations in two player two strategy
per player games, easily revealing an essence of the problems they present. Further, these
canonical representations belong to the class of completely ordered (no indifference) 2x2 ordinal
games, a fascinating class of games initially introduced by Rapoport and Guyer in 1966. One
advantage of this class is that it is small enough, 78 games, to permit a manageably sized
complete taxonomy, while including many interesting games. Organized around the Prisoner’s
Dilemma, Chicken and Stag Hunt, then, a new taxonomy is provided. As a side benefit, beyond
insights directly involving the Stag Hunt itself, this new taxonomy provides additional valuable
insights. Of particular note, dominance criteria, motivated by the Prisoner’s Dilemma, serve to
lower information requirements on players. And information requirements can be particularly
important in crisis situations. Hereafter, where it will not lead to confusion, “2x2 games” refers
to Rapoport and Guyer’s class of games.
Robinson and Goforth’s (2005) volume provides an interesting, and relatively intricate,
topological development of the structure of the 2x2 games as a class, which reflects the vitality
and variety of this area of research. This paper, nevertheless, stays with the standard diagram, or
payoff matrix, form of Rapoport and Guyer as it is “directly useful for analysing
behaviour.” (Robinson and Goforth, 2005, p. 25) Indeed it is the analysis of behavior which
motivates the development of the structure of the new, relatively simple and intuitive, taxonomy
of this paper. Further, with their topological direction, Robinson and Goforth do not address
many of the issues confronted in this paper, including the role of the Stag Hunt in explaining
collapse, the particular pivotal role of the Prisoner’s Dilemma, Chicken and Stag Hunt, taken
together, in the structure of the 2x2 games, or set dominance and the way the dominance
criteria economize on information. They do, however, emphasize the central role of the Prisoner’s
Dilemma, and appropriately so. Robinson and Goforth (2005) also includes a helpful glossary
and bibliography.
Indeed, since the Financial Crisis and Great Recession, much of the heightened interest in
Game Theoretic insights has particularly been centered on the Stag Hunt coordination game,
although the potential for coordination-failure had not gone unnoticed earlier, particularly
in Banking Theory and Macroeconomics (Allen and Gale, 2007; Blanchard, 2011; Bryant, 1980;
Diamond and Dybvig, 1983). In general the Stag Hunt presents the possibility of players getting
“stuck” in a bad equilibrium, as emphasized by Goeree and Holt (2005, p. 349). The Stag Hunt
also suggests the possibility of self-fulfilling waves of optimism and pessimism, of periods of
instability, or, with the possibility of the Pareto dominant Nash equilibrium being the norm,
fragility (Bryant, 2002). At the same time, it presents the situation of strategic uncertainty, a
model of “all you have to fear is fear itself.” In short, the Stag Hunt, and its associated strategic
uncertainty, introduce a whole new dimension to the interdependency of decisions.
Still, the potential of the Stag Hunt had largely gone unnoticed outside the
realm of Banking Theory and Macroeconomics, due to Schelling’s compelling concept of the
focal Nash equilibrium. Clearly in a game with Pareto ranked Nash equilibria the Pareto
dominating Nash equilibrium (and unique Pareto optimal outcome in the Stag Hunt itself) is
focal. If best for all does not strike a chord, rational actor modeling itself seems fruitless.
However, the seminal strategic uncertainty experiments with the Stag Hunt coordination game,
originating with the work of Cooper, DeJong, Forsythe and Ross (1990) in small group experiments
and Van Huyck, Battalio and Beil (1990) [VHBB] in large, resoundingly reject this focal
argument, at least as being definitive. This rejection is particularly stark in VHBB, and not
surprisingly, strategic uncertainty is the key. This experimental coordination game literature as a
whole, in which coordination-failure is widely observed, is now extensive, but a fine “entry
point” is Goeree and Holt (2005) and Holt (c. 2007). This realization of the problematic nature
of the Stag Hunt, not widely recognized at the time of Rapoport and Guyer’s work, plays a
critical role in the new taxonomy of their 2x2 ordinal games, and the important insights it
provides.
Indeed, the fact that the Prisoner’s Dilemma and Chicken, when joined together with the
Stag Hunt, play a uniquely pivotal role in the structure of the new taxonomy is itself an
instructive insight.
Prisoner’s Dilemma and Dominance
“Simplicity is the ultimate sophistication” --brochure for Apple II [and Da Vinci?] (Isaacson,
Steve Jobs, p. 80)
The most famous, or infamous, type of game is the Prisoner’s Dilemma, which can be
represented by the following canonical symmetric standard 2x2 game diagram.
(In the standard, if not fully standardized, diagram, for 2 player 2 strategy games
generally, the row and column players’ strategies are written outside the “box,” and the outcomes
for the two players are listed in the corresponding cells, or quadrants, inside the box, with the
row player’s payoffs listed first; this will be clear when viewing the diagrams below. Indeed the
cleverness, and clarity, of this diagrammatic device may, in part, itself help account for the major
role such games have played in both the development and teaching of Game Theory.)
Canonical Prisoner’s Dilemma

                      STRATEGY 1      STRATEGY 2
    STRATEGY 1        (2,2)           (4,1)
    STRATEGY 2        (1,4)           (3,3)
Here (2,2) is the unique (standard vector) non dominated outcome (and hence a unique
strict Nash equilibrium), corresponding to both playing strategy one, as 2>1 and 4>3, yet it is
Pareto dominated by the (3,3) outcome, that is, both players would be better off with both
playing strategy two. Thus the well deserved preeminence of the Prisoner’s Dilemma. Notice
that here the player does not need to know the other player’s payoffs to determine dominance,
just which of the other’s strategies corresponds to which of her own payoffs. So one might
naturally think of this as a situation not involving strategic interaction. But the predicted
outcome of (2,2) being Pareto dominated suggests that this may be jumping to a conclusion too
quickly.
Consider the following game, and compare it to the Prisoner’s Dilemma:
Canonical “Strategy Safe”

                      PRODUCE         DON'T PRODUCE
    PRODUCE           (4,4)           (3,2)
    DON'T PRODUCE     (2,3)           (1,1)
with suggestive names for strategies one and two added.
Here strategy one, “produce,” vector dominates strategy two, “don’t produce,” as 4>2 and
3>1 (for both players). But strategy one, “produce,” also has the feature that the worst
outcome from strategy one, “produce,” 3, is better than the best outcome from strategy two,
“don’t produce,” 2. This is a case of set dominance, and the best possible outcome is predicted
by both of these forms of dominance criteria. (Bryant, 1987) Notice that here the player does not
need to know anything about the other player’s payoffs to determine dominance. Set dominance
is the least heroic extension of maximization to strategic interaction of all, imposing the least
information requirements. One might refer to this as a “strategy safe” situation.
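To make the two dominance criteria concrete, the following is a minimal Python sketch of my own (an illustration, not part of the original analysis). It tests, from a single player's own 2x2 ordinal payoffs alone, whether her strategy one is vector dominant and whether it is set dominant, and applies the tests to the row player's payoffs in the canonical Prisoner's Dilemma and in the canonical "Strategy Safe" game above.

    # Minimal sketch: vector and set dominance of strategy one for a single player,
    # given only that player's own 2x2 ordinal payoffs (rows = own strategies,
    # columns = the other player's strategies).

    def vector_dominant(own):
        # Strategy one beats strategy two outcome by outcome.
        return all(a > b for a, b in zip(own[0], own[1]))

    def set_dominant(own):
        # The worst outcome under strategy one beats the best under strategy two,
        # so nothing at all need be known about the other player.
        return min(own[0]) > max(own[1])

    pd_row   = [[2, 4], [1, 3]]   # row player's payoffs in the canonical Prisoner's Dilemma
    safe_row = [[4, 3], [2, 1]]   # row player's payoffs in the canonical "Strategy Safe" game

    print(vector_dominant(pd_row), set_dominant(pd_row))      # True False
    print(vector_dominant(safe_row), set_dominant(safe_row))  # True True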
Simple Choice
Game Theory presented a novel problem, how to extend maximization to strategic
interaction. (Von Neumann and Morgenstern, c. 1944, Chapter I, 2. -2.2.4) Rapoport and
Guyer’s 2x2 games, in the strict ranking assumption, can be thought of as providing the most
“primitive,” the least restrictive, possible solution to this problem.
Consider first simple choice, but where the choice is unambiguous. All one needs to do in
this case is determine which of the alternatives is best, assuming just one is best, which is what must
hold for simple choice to be unambiguous: a maximization problem. However, in a game the
individual player does not choose, determine, the outcome as the other players influence that
outcome. Consider simple choice, then, but where one does not know which of a possible set of
alternatives one will be choosing between. Then a strict ranking of the possible alternatives
suffices to determine the choice no matter what grouping of the possible alternatives will be
available.
So we will, in the first instance, assume that the players each have a strict ranking, know
each other’s ranking, and know nothing more or less about each other. Then we take the players
to Rapoport and Guyer’s 2x2 games, which isolate the essence of the problem. That is, hopefully
thereby we will be capturing something of the essence of the effect of introducing strategic
interaction, and strategic uncertainty, as it works out, and of the structure of strategic uncertainty.
We will be observing what happens to unambiguous simple choice when confronting strategic
interaction in this setting.
Making “taking players to games” precise requires some further development.
Fortunately there are choice criteria, dominance criteria that depend only upon ranking, and no
more, that extend maximization to games, and that are moreover the least heroic extensions of
maximization to games, namely set and vector dominance themselves. Depending only upon
ranking as they do, they are also the natural complements of Rapoport and Guyer’s 2x2 ordinal
games.
An important side effect of being the least heroic extensions of maximization is that these
dominance criteria impose the minimal possible information requirements on the players. In the
fog of economic crises, reduced access to information, or greater information needs, and as well
reduced understanding, or sharpened disagreement, play a substantial role. Simple games with
limited information requirements come to the fore perhaps.
In the presentation of the new taxonomy of 2x2 games, all of those games exhibiting set
dominance, and all of those exhibiting vector dominance but not set dominance, like the
Prisoner’s Dilemma itself, are treated.
Multiple Nash Equilibria
Chicken, games with multiple Pareto non-comparable Nash equilibria, here two, that is,
conflict games, can be represented by the following canonical symmetric standard 2x2 game
diagram:
Canonical Chicken

                      STRATEGY 1      STRATEGY 2
    STRATEGY 1        (1,1)           (4,2)
    STRATEGY 2        (2,4)           (3,3)
As with the Prisoner’s Dilemma, each player would be best off playing strategy one when
the other plays strategy two. In the Prisoner’s Dilemma, however, this competitive nature of the
game is “masked” by strategy one being vector dominant for both players. In “Chicken” payoffs
1 and 2 are switched, and there is no prediction from the dominance criteria, all strategy pairs are
possible. Now, if the row player is absolutely convinced, for some reason, that the column
player will be “stubborn” and play strategy one, then the row player will play strategy two, and
receive his next to worst outcome, while the “stubborn” player gets his best possible outcome;
and, symmetrically, if the column player is absolutely convinced that the row will be “stubborn”
and play strategy one, then the column player will play strategy two, and receive his next to
worst outcome, while the “stubborn” player gets his best possible outcome. These are both, then,
strict Nash equilibria. But if both play “stubborn,” not knowing that the other will do so as well,
both get their worst outcome of all. The name “chicken” derives from a, truly stupid from the
adult perspective, teenage “game” where two drivers drive their cars at each other at high speed,
and, if one swerves, that one is “chicken,” cowardly. Clearly, to predict a particular outcome one
would have to go beyond the strategic structure itself as represented in the game.
In the presentation of the new taxonomy of 2x2 games, all of those games exhibiting two
Pareto non-comparable, competitive, Nash equilibria, like canonical Chicken itself, are treated.
The remaining one of our conundrums is the Stag Hunt coordination game, games with
multiple, here two, Nash equilibria that are Pareto ranked. Stag Hunt can be represented by the
following canonical symmetric standard 2x2 game diagram:
Canonical Stag Hunt

                      STRATEGY 1      STRATEGY 2
    STRATEGY 1        (4,4)           (1,3)
    STRATEGY 2        (3,1)           (2,2)
This is a, small and structurally simple, “team” game, a game of strategic complementarity.
As compared to the Prisoner’s Dilemma, the (2,2) Nash equilibrium outcome is now Pareto
dominated by a Nash equilibrium outcome, (4,4). Thus, indeed, the best possible outcome for
both is physically feasible, but is not predicted (or ruled out) by the dominance criteria, which
provide no prediction. While there is no consensus on the theory of the dynamics of repeated
play in such a game (Romer, 2012, p. 290), this strategic structure captures the notion of (joint)
self-fulfilling prophecy: with strategy one being “produce” and strategy two “don’t produce,” for
both of us, if we believe the other will produce, we will produce; only if we think (fear) the other
will (may?) not produce, out of the same fear on his part, will we not produce. This is one
reason, in addition to the surprising experimental results, that the Stag Hunt has been receiving
more attention than Chicken of late.
In the presentation of the new taxonomy of 2x2 games, all of those games exhibiting two
Pareto ranked Nash equilibria, like canonical Stag Hunt itself, are treated.
With the three great conundrums in hand, further insight into their relationship can be
provided. Indeed, in the same venerable volume of General Systems as Rapoport and Guyer’s
paper, Morehous (1966) provides a valuable insight into the canonical representation of that
greatest conundrum, the Prisoner’s Dilemma, which insight can be extended to Chicken and Stag
Hunt. Take the “cooperative” outcome in the canonical Prisoner’s Dilemma, (3,3), to be the
norm. One problem with this norm is that a player who deviates from the norm when the other
does not is better off doing so. A second problem is that if the other player deviates from the
norm, a player is better off having deviated as well. Morehous suggestively labels these “greed”
and “fear.” Similarly, for the other two conundrums, in their canonical representations, take the
“cooperative” outcome to be the norm. In canonical Chicken this norm, (3,3), has the problem
that a player who deviates from the norm when the other does not is better off doing so. But it
does not have the second problem. If the other player deviates from the norm, a player is better
off not having deviated. In contrast, in the canonical Stag Hunt this norm, (4,4), does not have
the first problem as a player who deviates from the norm, when the other does not deviate, is
worse off doing so. However, it does have the second problem that if the other player deviates
from the norm, a player is better off having deviated as well. In the stag hunt all “I” have to fear
is that the other will deviate, who would do so only out of his fear that “I” would do so, for the
same reason. “All we have to fear is fear itself,” the pure form of strategic uncertainty, is indeed
isolated in the “Stag Hunt.” It has often been noted that playing “cooperative” in the Stag Hunt
is “risky,” but Morehous’s insight characterizes the specific nature of that risk.
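Morehous's two problems can be stated mechanically. The following sketch is my own bookkeeping, not Morehous's notation: it takes the "cooperative" norm cell of each canonical game and checks, for the row player, whether "greed" (a unilateral deviator from the norm gains) and "fear" (given the other's deviation, one is better off having deviated too) hold; by symmetry the same answers hold for the column player.

    # Sketch: Morehous-style "greed" and "fear" checks at a designated norm cell.
    # A game is a dict of cells {(row_strategy, col_strategy): (row_payoff, col_payoff)}.

    def greed_and_fear(game, norm):
        r, c = norm
        other_r, other_c = 1 - r, 1 - c
        # Greed: the row player gains by unilaterally deviating from the norm.
        greed = game[(other_r, c)][0] > game[norm][0]
        # Fear: if the column player deviates, the row player is better off
        # having deviated as well.
        fear = game[(other_r, other_c)][0] > game[(r, other_c)][0]
        return greed, fear

    prisoners_dilemma = {(0, 0): (2, 2), (0, 1): (4, 1), (1, 0): (1, 4), (1, 1): (3, 3)}
    chicken           = {(0, 0): (1, 1), (0, 1): (4, 2), (1, 0): (2, 4), (1, 1): (3, 3)}
    stag_hunt         = {(0, 0): (4, 4), (0, 1): (1, 3), (1, 0): (3, 1), (1, 1): (2, 2)}

    print(greed_and_fear(prisoners_dilemma, (1, 1)))  # (True, True): greed and fear
    print(greed_and_fear(chicken, (1, 1)))            # (True, False): greed only
    print(greed_and_fear(stag_hunt, (0, 0)))          # (False, True): fear only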
With these relationships between the great conundrums of Game Theory, the Prisoner’s
Dilemma, Chicken and Stag Hunt in hand, we are ready to turn to the new taxonomy in which
they, or, equivalently, the dominance criteria and multiple Nash equilibria, play a pivotal role.
Taxonomy
To present the new taxonomy of 2x2 games there is some minor but necessary technical
detail that needs to be specified.
As mentioned above, herein, where it will not lead to confusion, “2x2 games” refers to the
complete set of completely ordered (no indifference, or “strict”) 2x2 (two player two strategies
each) ordinal games of Rapoport and Guyer. In 2x2 games the two players can order from best
to worst the four possible “outcomes,” with no cases of indifference, but their evaluations can go
no further than this. For expositional convenience these ordinal payoffs are referred to as 4,3,2
or 1, with the understanding that the “outcome” receiving payoff 4 just means that it is strictly
preferred to the “outcome” receiving payoff 3, and so on. That is 4 means the best outcome, 3
means the next to best outcome, 2 means the next to worst outcome and 1 means the worst
outcome. This is the simplest specification of utility, and yet it admits the canonical
representations of the Prisoner’s Dilemma, Chicken, and Stag Hunt.
To assume the existence of a utility, payoff, function can be viewed as just a convenient
mechanism for specifying certain behavioral regularities. The actor acts as if it is maximizing a
utility function if obeying those regularities. As noted above, in the case of completely ordered
ordinal preferences this implies that, if given the simple choice among the possible outcomes, the
mechanism or decision process will certainly pick one particular one, and similarly for all
subsets of the possible outcomes. Moreover, these choices satisfy transitivity. For example, 4 is
picked over 3 and 3 over 2, and 4 is picked over 2, and so on. The question for game theory is,
then, what the players’ particular orderings in simple choices, or equivalently the implied
regularities in behavior in such simple choices, imply for behavior, strategy choices, in strategic
interaction, as modeled by the game itself.
There are some basic features of game theoretic analysis that bear repeating here, as
they are freely exploited below, as needed. The initial assignment of a player as row or column
player, in the standard diagram, is arbitrary. In the analysis itself, the strategies are only
identified by their payoffs to the “outcomes”; they have no independent identities of their own;
and, for that matter, in the analysis itself, the payoffs are attached to the cross product of
strategies; “outcomes” is really just a suggestion of a sort of situation to which the analysis
applies. Finally, in this context it is arbitrary whether a player’s strategy is labeled “strategy
one” or “strategy two.” (In application, however, sometimes there are symmetries in the actual
strategies available to the players, “run” or “stay” say, so that one may want to adjust for this in
using the taxonomy as presented.)
As the numbering of the strategies is, at this level of abstraction, arbitrary, to avoid
duplication the convention is now adopted that 4 appears in the first row of the row player’s
payoffs, and in the first column of the column player’s payoffs. (This convention has the
sometimes convenient feature that if both players getting the best possible outcome is possible,
(4,4) appears in the upper left quadrant of the standard game diagram. However, in experiments,
with subjects who use left to right, top to bottom writing presumably (?), one has to be concerned
about an “upper left” bias.)
These “Minor But Necessary Technical Details” help explain the structure of the “map” of
the full taxonomy, which is used for reference below. The taxonomy is generated by a structured
pairwise matching of the row and column player four element payoff matrices.
[Figure: map of the full taxonomy. Row player payoff patterns are indexed along the Row Axis and column player payoff patterns along the Column Axis, each running from 0 to 12. The pure classes I, II, III and IV occupy blocks along the diagonal, and the mixed classes IxII, IxIII, IxIV, IIxIII, IIxIV and IIIxIV occupy the corresponding off-diagonal blocks.]
Roughly speaking, “I” refers to pure set dominance games, pairwise matching of payoff matrices
exhibiting set dominance, “II” to pure (standard) vector dominance games, both suggested by the
Prisoner’s Dilemma, “III” to pure Chicken games, and “IV” to pure Stag Hunt games, as
they, and the other categories, are described below. (The reader may also observe below that “I”
could be referred to as “IxI,” and so on.)
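As a check on the bookkeeping, the count of 78 games can be reproduced directly. The sketch below is mine, not Rapoport and Guyer's procedure: it generates every strict ordinal payoff pattern satisfying the convention above, pairs row and column patterns, and identifies games that differ only by exchanging the two players.

    from itertools import permutations

    # Each player's pattern is a 2x2 arrangement of the ranks 1-4. Convention, as in
    # the text: the row player's 4 lies in the first row, the column player's 4 lies
    # in the first column.
    def patterns(four_in_first_row):
        out = []
        for p in permutations((1, 2, 3, 4)):
            m = ((p[0], p[1]), (p[2], p[3]))
            if four_in_first_row and 4 in m[0]:
                out.append(m)
            if not four_in_first_row and 4 in (m[0][0], m[1][0]):
                out.append(m)
        return out

    def transpose(m):
        return ((m[0][0], m[1][0]), (m[0][1], m[1][1]))

    row_patterns = patterns(True)    # 12 admissible row player patterns
    col_patterns = patterns(False)   # 12 admissible column player patterns

    games = set()
    for r in row_patterns:
        for c in col_patterns:
            # Exchanging the players maps (r, c) to (transpose(c), transpose(r));
            # keep one representative of each such pair.
            games.add(min((r, c), (transpose(c), transpose(r))))

    print(len(row_patterns), len(col_patterns), len(games))  # 12 12 78

Under the convention each player has twelve admissible payoff patterns, giving 144 ordered pairs; identifying the two orderings of the players leaves the 78 distinct games.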
“I” is the pure set dominance games. Thus the set of games exhibiting set dominance for
both players is the first component of the new taxonomy to be examined. In setting up a
taxonomy one wants to avoid overlapping classes of games. For example, consider the class of
games in which it is physically possible for both players to get the best possible outcome and the
class of games in which it is physically possible for both players to get the worst possible
outcome. These classes overlap. However, set dominance, holding for both players, is just such
a non-overlapping class of games in an intuitively revealing taxonomy made up only of selected
non-overlapping classes of games bearing interesting relations. With the taxonomy in hand,
then, for example, finding and characterizing those games in which it is physically possible for
both players to get the best possible outcome, and those games in which it is physically possible
for both players to get the worst possible outcome, is trivial: look for “(4,4)” and “(1,1)” games
respectively; and similarly for other common characteristics.
Thus we turn to a first segment, or module, of the taxonomy. It is convenient to list just the
four payoffs to the column player in a qualifying game, leaving out the rest of the game’s
diagram. The possible set dominant payoffs for the column player are:
    4 1      4 2      3 1      3 2
    3 2      3 1      4 2      4 1

Now imagine the left side of the page has the respective row player payoffs running down it, in
the same order (4 3 / 1 2, 4 3 / 2 1, 3 4 / 1 2, 3 4 / 2 1). Matching them up then results in a
matrix of both players’ payoffs of the games. Leaving out the rest of each game’s diagram again,
we have:

    (4,4) (3,1)    (4,4) (3,2)    (4,3) (3,1)    (4,3) (3,2)
    (1,3) (2,2)    (1,3) (2,1)    (1,4) (2,2)    (1,4) (2,1)

                   (4,4) (3,2)    (4,3) (3,1)    (4,3) (3,2)
                   (2,3) (1,1)    (2,4) (1,2)    (2,4) (1,1)
                   (the first game in this second row, row two column two, is the canonical “Strategy Safe”)

                                  (3,3) (4,1)    (3,3) (4,2)
                                  (1,4) (2,2)    (1,4) (2,1)

                                                 (3,3) (4,2)
                                                 (2,4) (1,1)
(That’s ten games “down.”)
The payoffs below the principal diagonal are left out because of the symmetry in the row and
column players’ payoffs, and that the initial assignment of a player as row or column player is
arbitrary. For example, the missing payoffs in row three column two are the payoffs in row two
column three, but with row and column player reversed, while the games on the principal
diagonal are themselves symmetric, by construction.
Here the player does not need to know anything about the other player’s payoffs to
determine dominance. For all of these games in this class, call it Class I, the set dominance
prediction is a payoff for each player of either 4 or 3, including all combinations, and where (4,4)
is physically possible it is predicted. By the way, row two column two is an overlap in the class
of games in which it is physically feasible for both players to get the best possible outcome and
the class of games in which it is physically feasible for both players to get the worst possible
outcome.
In this context, of the least heroic extension of maximization to social interaction, it is
useful to first introduce a different information structure and related prediction mechanism,
contraction by set dominance. Suppose the column player, say, realizes that for the row player
strategy one is set dominant, and that the row player behaves accordingly and will choose
strategy one. Then to know which strategy to play the column player need only be able to rank
her outcomes associated with the row player’s strategy one, and not with the row player’s
strategy two, for it is now, with this realization, a simple maximization problem. In further
reaches of the taxonomy, contraction by dominance (if feasible) can be the only applicable
mechanism of prediction by dominance.
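Contraction by set dominance is a two step procedure, and a minimal sketch of the idea (under the information structure just described, with labels of my own) is:

    # Sketch of contraction by set dominance. A game is a dict
    # {(row_strategy, column_strategy): (row_payoff, column_payoff)}, strategies 0 and 1.

    def set_dominant_strategy(game, player):
        # Return the player's set dominant strategy (0 or 1), if any.
        def own_payoffs(s):
            return [game[(s, t)][0] if player == 0 else game[(t, s)][1] for t in (0, 1)]
        for s in (0, 1):
            if min(own_payoffs(s)) > max(own_payoffs(1 - s)):
                return s
        return None

    def contract_and_maximize(game):
        # The column player supposes the row player plays her set dominant strategy,
        # deletes the other row, and then simply maximizes over her own payoffs.
        r = set_dominant_strategy(game, 0)
        if r is None:
            return None
        c = max((0, 1), key=lambda t: game[(r, t)][1])
        return (r, c), game[(r, c)]

    # Canonical "Strategy Safe" game from the text.
    strategy_safe = {(0, 0): (4, 4), (0, 1): (3, 2), (1, 0): (2, 3), (1, 1): (1, 1)}
    print(contract_and_maximize(strategy_safe))   # ((0, 0), (4, 4))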
“II” is the pure vector dominance games. This is a mercifully small class as the
possible payoffs for the column player, and respective game matrix are:
    4 3      2 1
    2 1      4 3

    (4,4) (2,3)    (4,2) (2,1)
    (3,2) (1,1)    (3,4) (1,3)

                   (2,2) (4,1)
                   (1,4) (3,3)
                   (canonical Prisoner’s Dilemma)
(That’s thirteen games “down.”)
The only game here, in this class, exhibiting the problematic Pareto dominated unique
(standard vector) non dominated outcome (and hence Pareto dominated Nash equilibrium) of the
Prisoner’s Dilemma is the canonical version of the Prisoner’s Dilemma already exhibited;
although, in the asymmetric game, in the vector dominance prediction, one of the players, the
one with the Prisoner’s Dilemma payoff pattern itself, receives his next to worst outcome.
Where it is physically feasible, the best outcome for both is predicted by vector dominance and
the worst outcome for both eliminated. Here the player does not need to know the other player’s
payoffs to determine dominance, just which of the other’s strategies corresponds to which of her
own payoffs.
In this context, of a somewhat heroic extension of maximization to social interaction, it is
useful once again to first introduce a different information structure and the related prediction
mechanism contraction by (vector) dominance. Suppose the column player, say, realizes that for
the row player strategy one is vector dominant, and that the row player behaves as prescribed by
vector dominance, and will choose strategy one. Then to know which strategy to play the
column player need only be able to rank her outcomes associated with the row player’s strategy
one, and not with the row player’s strategy two, for it is now, with this realization, once again, a
simple maximization problem. In further reaches of the taxonomy, contraction by (vector)
dominance (if feasible) can be the only applicable mechanism of prediction by dominance.
The remaining conundrums are the Chicken and Stag Hunt coordination games,
that is games with multiple (here two) Nash equilibria. (Note that some use the term
“coordination game” for Stag Hunt games only.) As Nash equilibria cannot occur in the same
row or column, the pairs of Nash equilibria either have the players playing differently numbered
strategies, represented by Chicken, or both playing the same numbered strategies, strategy one or
strategy two, as in the Stag Hunt game. The associated two classes of games are those of the new
taxonomy that satisfy these conditions respectively.
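Since the remaining classes are organized by their pure strategy Nash equilibria, a small helper sketch (my own labels and checks, not part of the taxonomy itself) finds the pure Nash equilibria of a 2x2 game and distinguishes a Pareto non-comparable pair, as in Chicken, from a Pareto ranked pair, as in the Stag Hunt:

    # Sketch: pure Nash equilibria of a 2x2 game and the Chicken / Stag Hunt split.
    # A game is a dict {(row_strategy, col_strategy): (row_payoff, col_payoff)}.

    def pure_nash(game):
        eq = []
        for (r, c), (u, v) in game.items():
            if u >= game[(1 - r, c)][0] and v >= game[(r, 1 - c)][1]:
                eq.append((r, c))
        return eq

    def classify_pair(game):
        eq = pure_nash(game)
        if len(eq) != 2:
            return eq, "not a two-equilibrium game"
        (u1, v1), (u2, v2) = game[eq[0]], game[eq[1]]
        if (u1 - u2) * (v1 - v2) > 0:
            return eq, "Pareto ranked (Stag Hunt type)"
        return eq, "Pareto non-comparable (Chicken type)"

    chicken   = {(0, 0): (1, 1), (0, 1): (4, 2), (1, 0): (2, 4), (1, 1): (3, 3)}
    stag_hunt = {(0, 0): (4, 4), (0, 1): (1, 3), (1, 0): (3, 1), (1, 1): (2, 2)}

    print(classify_pair(chicken))    # ([(0, 1), (1, 0)], 'Pareto non-comparable (Chicken type)')
    print(classify_pair(stag_hunt))  # ([(0, 0), (1, 1)], 'Pareto ranked (Stag Hunt type)')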
“III” is the pure Chicken games. For games with conflicting Nash equilibria like
Chicken, Class III, the possible payoffs for the column player, and respective game matrix are:
    2 3      1 2      1 3
    4 1      4 3      4 2

    (2,2) (4,3)    (2,1) (4,2)    (2,1) (4,3)
    (3,4) (1,1)    (3,4) (1,3)    (3,4) (1,2)

                   (1,1) (4,2)    (1,1) (4,3)
                   (2,4) (3,3)    (2,4) (3,2)
                   (the first game in this second row is the canonical Chicken)

                                  (1,1) (4,3)
                                  (3,4) (2,2)
(That’s nineteen games “down.”)
All of these games, Class III, are conflict coordination games like canonical Chicken,
and, hence, dominance criteria provide no predictions. For three of them both playing
“stubborn” yields the worst possible outcome, with the symmetric canonical Chicken itself
having the largest “gap” between both playing stubborn and both playing “conciliatory.” In one
of the games, the first, upper left, symmetric one, both playing “stubborn” Pareto dominates both
playing “conciliatory.” In the remaining two games both playing “stubborn” and both playing
“conciliatory” are Pareto dominated. So not only are these not zero sum games, which is
meaningless in the ordinal context, they also have elements of common interest; as, indeed,
Schelling argues is typical in general in situations involving conflicts of interests. (Schelling,
1960, 1980, Chapter I) Here the best possible outcome for both is not physically feasible.
“IV” is the pure Stag Hunt games. For games with Pareto ranked Nash equilibria, Class
IV, the possible payoffs for the column player, and respective game matrix are:
    4 1      4 2      4 3
    2 3      1 3      1 2

    (4,4) (2,1)    (4,4) (2,2)    (4,4) (2,3)
    (1,2) (3,3)    (1,1) (3,3)    (1,1) (3,2)

                   (4,4) (1,2)    (4,4) (1,3)
                   (2,1) (3,3)    (2,1) (3,2)

                                  (4,4) (1,3)
                                  (3,1) (2,2)
                                  (canonical Stag Hunt)
(That’s twenty-five games “down.”)
None of these games are solved by dominance. Indeed, all of these games, Class IV, have
Pareto ranked Nash equilibria like the canonical Stag Hunt, with the canonical Stag Hunt having
the largest gap between the good and bad Nash equilibrium. In all of them the best possible
outcome for both is physically feasible, but not, of course, predicted by dominance, and where
the worst possible outcome for both is physically feasible, it is not ruled out of course. In two of
the games both players getting the best possible outcome and both players getting the worst
possible outcome are both feasible, with neither ruled out.
With these classes of games, motivated by the Prisoner’s Dilemma and multiple Nash
equilibria, the basic structure of the taxonomy is determined. That is, the remainder of the
taxonomy is generated by “mixing” the four “Pure” classes. That is, we pairwise match row and
column players from different classes. In doing so, the advantage of the parsimonious 2x2
structure becomes abundantly clear. For Game Theory has the “curse of dimensions,” or
richness, in the extreme.
“IxII” is Class I x Class II of Set Dominance x Vector Dominance games.
The Class I row player payoffs are:

    4 3      4 3      3 4      3 4
    1 2      2 1      1 2      2 1

and the Class II column player payoffs are:

    4 3      2 1
    2 1      4 3

and the resultant game matrix is:

    (4,4) (3,3)    (4,2) (3,1)
    (1,2) (2,1)    (1,4) (2,3)

    (4,4) (3,3)    (4,2) (3,1)
    (2,2) (1,1)    (2,4) (1,3)

    (3,4) (4,3)    (3,2) (4,1)
    (1,2) (2,1)    (1,4) (2,3)

    (3,4) (4,3)    (3,2) (4,1)
    (2,2) (1,1)    (2,4) (1,3)
(That’s thirty-three games “down.”)
As all of these “mixed” games are inherently asymmetric, these sub matrices are rectangular.
All of these are solved by dominance, with the upper left respective payoffs being the
outcome. But here, once again, there are two ways to view these solutions, which have different
minimum requirements on the players’ information. The row player, with strategy one set
dominant, only needs to know that strategy one guarantees one of her two best outcomes. In one
case the vector dominant column player needs to know her own payoffs and how they “line up”
with the row player’s strategies (and be guided by the weaker vector dominance criterion).
Alternatively, if the column player knows that strategy one is set dominant for the row player,
and that the row player knows this and acts accordingly, and she knows her payoffs from her
strategies when the row player plays strategy one, then hers is just a simple maximization
problem. This alternative information requirement provides a simple example of the, earlier
introduced, contraction by dominance (here set, and hence also vector dominance), an important
approach to many games (although with multiple strategies it need not yield a single predicted
outcome as here). Naturally, when the best outcome for both is physically possible it occurs, but
a mediocre predicted outcome of (3, 2) is possible, and the vector dominant column player can
do better, in ordinal terms that is, than the set dominant row player; (3, 4) can be the predicted
outcome.
“IxIII” is Class I x Class III of Set Dominance x Chicken games.
The Class I row player payoffs are again:

    4 3      4 3      3 4      3 4
    1 2      2 1      1 2      2 1

and the Class III column player payoffs are:

    2 3      1 2      1 3
    4 1      4 3      4 2

and the resultant game matrix is:

    (4,2) (3,3)    (4,1) (3,2)    (4,1) (3,3)
    (1,4) (2,1)    (1,4) (2,3)    (1,4) (2,2)

    (4,2) (3,3)    (4,1) (3,2)    (4,1) (3,3)
    (2,4) (1,1)    (2,4) (1,3)    (2,4) (1,2)

    (3,2) (4,3)    (3,1) (4,2)    (3,1) (4,3)
    (1,4) (2,1)    (1,4) (2,3)    (1,4) (2,2)

    (3,2) (4,3)    (3,1) (4,2)    (3,1) (4,3)
    (2,4) (1,1)    (2,4) (1,3)    (2,4) (1,2)
(That’s forty-five games “down.”)
All of these are solvable by contraction by set dominance, but the “competitive” column
player never does better than the set dominant row player, and can do as poorly as 2. Here the
best outcome for both is physically infeasible but where the worst outcome for both is physically
feasible it is predicted not to occur.
“IxIV” is Class I x Class IV of Set Dominance x Stag Hunt games.
The Class I row player payoffs are again:

    4 3      4 3      3 4      3 4
    1 2      2 1      1 2      2 1

and the Class IV column player payoffs are now:

    4 1      4 2      4 3
    2 3      1 3      1 2

and the resultant game matrix is:

    (4,4) (3,1)    (4,4) (3,2)    (4,4) (3,3)
    (1,2) (2,3)    (1,1) (2,3)    (1,1) (2,2)

    (4,4) (3,1)    (4,4) (3,2)    (4,4) (3,3)
    (2,2) (1,3)    (2,1) (1,3)    (2,1) (1,2)

    (3,4) (4,1)    (3,4) (4,2)    (3,4) (4,3)
    (1,2) (2,3)    (1,1) (2,3)    (1,1) (2,2)

    (3,4) (4,1)    (3,4) (4,2)    (3,4) (4,3)
    (2,2) (1,3)    (2,1) (1,3)    (2,1) (1,2)
(That’s fifty-seven games “down.”)
All of these are solvable by contraction by set dominance, and all the predicted outcomes
are good, 3 or 4.
Interestingly, the stag hunt column player always receives 4, and thus sometimes does
better, in his ordinal ranking, than does the set dominant row player. One might think that this
should get the attention of analysts who believe that utility is endogenous or passed on somehow.
However, a “converted” Stag Hunt player’s best outcome may be worse for him than the second
best outcome he could have gotten in his original set dominant state. The rankings can be
situation specific ordinal rankings. While behavior is altered, we do not in general know whether
or not there is an associated utility level that falls with the alteration.
Where the best outcome for both is physically possible it occurs.
Remembering the problematic aspect of the canonical Prisoner’s Dilemma (one of the
Class II games), the plot now thickens.
“IIxIII” is Class II x Class III of Vector Dominance (only) x Chicken games.
The Class II row player payoffs are:

    4 2      2 4
    3 1      1 3

and the Class III column player payoffs are:

    2 3      1 2      1 3
    4 1      4 3      4 2

and the resultant game matrix is:

    (4,2) (2,3)    (4,1) (2,2)    (4,1) (2,3)
    (3,4) (1,1)    (3,4) (1,3)    (3,4) (1,2)

    (2,2) (4,3)    (2,1) (4,2)    (2,1) (4,3)
    (1,4) (3,1)    (1,4) (3,3)    (1,4) (3,2)
(That’s sixty-three games “down.”)
All of these are solvable by contraction by vector dominance, with some poor outcomes
predicted, that is, some 2’s predicted, including 2’s for both, and in the whole first row a Pareto
dominated outcome is predicted, as in the canonical Prisoner’s Dilemma itself. In these
Prisoner’s Dilemma games the vector dominant player exhibits both Morehous’s “Greed” and
“Fear” problems, as in the, symmetric, canonical Prisoner’s Dilemma. The Prisoner’s Dilemma
is relatively flourishing in this asymmetric case. Here the best outcome for both is never
physically possible.
“IIxIV” is Class II x Class IV of Vector Dominance (only) x Stag Hunt games.
The Class II row player payoffs are:

    4 2      2 4
    3 1      1 3

and the Class IV column player payoffs are:

    4 1      4 2      4 3
    2 3      1 3      1 2

and the resultant game matrix is:

    (4,4) (2,1)    (4,4) (2,2)    (4,4) (2,3)
    (3,2) (1,3)    (3,1) (1,3)    (3,1) (1,2)

    (2,4) (4,1)    (2,4) (4,2)    (2,4) (4,3)
    (1,2) (3,3)    (1,1) (3,3)    (1,1) (3,2)
(That’s sixty-nine games “down.”)
All of these are solvable by contraction by vector dominance, with some poor outcomes
predicted, that is in the second row 2’s are predicted for the vector dominant player, but, once
again, the stag hunt player receives only 4’s, and, thus no Pareto dominated outcomes are
predicted. But when the best outcome for both is physically feasible it is predicted, and when the
worst outcome for both is physically feasible it is predicted not to occur.
Finally we have the intriguing “IIIxIV” which is Class III x Class IV or Chicken x Stag
Hunt.
The Class III row player payoffs are:

    2 4      1 4      1 4
    3 1      2 3      3 2

and the Class IV column player payoffs are:

    4 1      4 2      4 3
    2 3      1 3      1 2

and the resultant game matrix is:

    (2,4) (4,1)    (2,4) (4,2)    (2,4) (4,3)
    (3,2) (1,3)    (3,1) (1,3)    (3,1) (1,2)

    (1,4) (4,1)    (1,4) (4,2)    (1,4) (4,3)
    (2,2) (3,3)    (2,1) (3,3)    (2,1) (3,2)

    (1,4) (4,1)    (1,4) (4,2)    (1,4) (4,3)
    (3,2) (2,3)    (3,1) (2,3)    (3,1) (2,2)
(Now that’s all seventy-eight games “down.”)
Dominance makes no prediction in any of these games. Further, none of these nine games
has any Nash equilibrium at all. And these are the only ones of our games with no Nash
equilibria. This is, then, a proof by inspection that multiple Nash equilibria is not a necessary
condition for dominance criteria to fail to predict an outcome, perhaps an obvious theorem, but it
also demonstrates that this stems from mixing payoffs associated with the two opposing types of
multiple Nash equilibria. The payoffs are “pulling” for pairs of Nash equilibria in opposite
directions, as it were. The best outcome for both and the worst outcome for both are physically
infeasible in these games.
For the purposes here, the very fact of no Nash Equilibria suffices. That is, the strict
ranking of alternatives does not yield the prediction of a unique outcome, else it would be a pure
strategy Nash equilibrium. Naturally, if the payoffs are cardinal, and not just rankings, and if the
players are in a mixed strategy equilibrium, that probabilistic uncertainty remains. Still, one well
might imagine that in a crisis of the magnitude of the Financial Crisis of 2007 and the Great
Recession the crisis would disrupt such (non strict) equilibrium beliefs. These games belong
with the Conundrums generated by Pareto dominated Nash equilibria and multiple strict Nash
equilibria.
Having described all the classes of 2x2 games in the new taxonomy, a summing up of this
taxonomy’s features is called for.
Taxonomy: A Summing Up
In broad strokes:
(A) All games involving set dominance are determinate using set dominance or
contraction by set dominance. That is classes (I), (IxII), (IxIII), (IxIV).
(B) All games involving vector dominance but not set dominance are determinate using
vector dominance or contraction by vector dominance. That is classes (II), (IIxIII), (IIxIV).
(C) Both set and vector dominance economize on information requirements, with set
dominance economizing the most.
(D) Despite dominance criteria economizing on information requirements, all the
remaining games are recognized conundrums involving either two or no Nash equilibria.
(E) In terms of our leading concern, Stag Hunt games comprise a class of games of their
own, all of the games with Pareto ranked Nash equilibria. That is class (IV). Moreover, when
paired with Chicken players the result is the pathology of no Nash equilibria. That is class
(IIIxIV).
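These broad strokes can also be checked mechanically. The sketch below is my own bookkeeping, not part of the taxonomy's construction: it enumerates the 78 games as before, types each player's payoff pattern by its best response structure, and tallies the classes, reproducing the running "games down" increments above (10, 3, 6, 6, 8, 12, 12, 6, 6 and 9).

    from itertools import permutations
    from collections import Counter

    # Sketch: enumerate the 78 strict ordinal 2x2 games and tally the taxonomy's classes.
    # A "pattern" is one player's own payoffs, rows indexed by her own strategies,
    # columns by the other player's strategies, with 4 in the first row.

    def row_patterns():
        return [((p[0], p[1]), (p[2], p[3]))
                for p in permutations((1, 2, 3, 4)) if 4 in (p[0], p[1])]

    def transpose(m):
        return ((m[0][0], m[1][0]), (m[0][1], m[1][1]))

    def pattern_type(m):
        # Best response (own strategy) against each of the other player's strategies.
        br = [0 if m[0][j] > m[1][j] else 1 for j in (0, 1)]
        if br == [0, 0]:                                   # strategy one dominant
            return "I" if min(m[0]) > max(m[1]) else "II"  # set vs. vector dominance
        return "IV" if br == [0, 1] else "III"             # Stag Hunt vs. Chicken pull

    rows = row_patterns()                    # 12 row player patterns
    cols = [transpose(m) for m in rows]      # 12 column player patterns (4 in first column)

    tally, seen = Counter(), set()
    for r in rows:
        for c in cols:
            key = min((r, c), (transpose(c), transpose(r)))   # identify the two player orderings
            if key not in seen:
                seen.add(key)
                pair = sorted((pattern_type(r), pattern_type(transpose(c))))
                tally[pair[0] + "x" + pair[1]] += 1

    print(sum(tally.values()))   # 78
    print(dict(tally))
    # counts: IxI 10, IIxII 3, IIIxIII 6, IVxIV 6,
    #         IxII 8, IxIII 12, IxIV 12, IIxIII 6, IIxIV 6, IIIxIV 9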
How, then, does the new taxonomy present a significantly different perspective, a
different view of the world? One aspect is how the structure provided by the two dominance
criteria motivated by the Prisoner’s Dilemma, and by the conflicting or Pareto ranked Nash
equilibria of Chicken and Stag Hunt, plays a unique pivotal role in the taxonomy. Given the
prominent role that two player two strategy games have played in the development and teaching
of Game Theory, this is a significant insight. A second notable aspect is the presentation of
rational decision making under conditions of low information, particularly relevant in crises,
which was examined in detail in the development of the taxonomy. A third notable aspect is
how unambiguous simple choice, when confronted with strategic interaction, through the
medium of the least heroic extensions of maximization, yields an unambiguous social outcome,
except in recognized conundrums of game theory.
Having these fundamentals of 2x2 games in hand, we can fruitfully confront and
appreciate the perplexingly more complex behavior and dynamics enabled in the greater richness
of VHBB’s own multiple player multiple strategy specification of the Stag Hunt. Here strategic
uncertainty unveils the whole new dimension to the interdependency of decisions.
VHBB and Strategic Uncertainty
“Van Huyck, Battalio and Beil’s results [VHBB] suggest that predictions from deductive theories
of behavior should be treated with caution: even though Bryant’s [1983] game is fairly simple,
actual behavior does not correspond well with the predictions of any standard theory. The results
also suggest that coordination-failure models can give rise to complicated behavior and
dynamics.” (Romer, David, Advanced Macroeconomics, 2012, p. 290)
We now face stark and perplexing results. Fortunately, Rapoport and Guyer’s 2x2
approach provides insight into the yet more perplexing and complex behavior and dynamics
revealed in VHBB’s repeated game experiments. These complexities require the greater richness
of the specification of their large Stag Hunt game, a game motivated by Macroeconomics, as
noted by Romer (2012, p. 290) and by VHBB (1990, p. 235) themselves. Further, the taxonomy
is an abstraction, applying to any interaction equally as well, in theory at least. VHBB puts some
economic meat on the strategy theory bones.
The striking, robust and widely replicated “downward cascades” or “self-fulfilling [and
intensifying] panics” observed in the repeated game experiments of John Van Huyck, Raymond
Battalio and Richard Beil in “Tacit Coordination Games, Strategic Uncertainty, and Coordination
Failure,” (1990) [VHBB] are experimental results that, I think it is fair to say, are now
considered classic, at least in their original Macroeconomics context. Moreover, the preceding
paragraph notwithstanding, they may ultimately have importance for issues not touched upon
herein, or even imagined. For Game Theory of itself is no modest enterprise, involving as it does
all interaction between two or more people; if not, indeed, involving more, for example
competing portions of the nervous system with competing access to sensory input and competing
control of the body, or involving or treating non human entities or organizations as actors. That
is, it involves interactive systems generally, it being no accident that Rapoport and Guyer (1966)
and Morehous (1966) published in a journal named General Systems.
Indeed, with respect to this lack of modesty, and getting down to basics, in its “pristine”
form, Game Theory makes a very powerful, but also very strong, in the sense of stringent,
assumption. Namely, the “one-shot” game characterization provides a complete description of
the relevant environment. Furthermore, therefore, in every environment that has the same “oneshot” game characterization the same behavior occurs. The VHBB repeated game experiments
provide one particularly stark example where this assumption is most unambiguously violated.
The perplexingly complex behavior and dynamics generated in these experiments is a reflection
of this stark violation of this complete description assumption.
Needless to say, the violation of complete description has received a great deal of
attention in this context, as well as others, and with varying degrees of success in the resulting
analysis. (Holt, 2007, Chapter 26) So what may be most perplexing is that the dynamics in
VHBB has remained perplexing! (Romer, 2012, p. 290) VHBB’s strategic uncertainty has
proven refractory.
The ultimate goal of the paper, then, is to provide insights into the perplexing
complicated behavior and dynamics of the VHBB experiments based on this structure:
Faithfulness and Trust are important, and in some situations critical:
“To focus the analysis consider the following tacit coordination game, which is a strategic form
representation of John Bryant’s (1983) Keynesian coordination game.” (VHBB, p. 235)
    U(i) = A { minimum [e(1), ..., e(N)] } − B { e(i) },
    i = 1, ..., N;   A > B > 0;   0 ≤ e(i) ≤ E.
    (Symmetric, Summary Statistic)
VHBB’s specification is of a multiple player multiple strategy team game. Numerous
individual’s efforts are complements. In the strategic form representation there are N>1
individuals, labeled as individual 1 through N, and individual i is one particular one of those N
individuals. Each individual i chooses an “effort level” e(i) that is between 0 and E, E>0. The
payoff to individual i is called U(i) and is increasing in the minimum effort level chosen in the
whole group of N individuals, and decreasing, but less strongly, in the individual’s own effort.
Intuitively, then, this can be thought of as similar to a simple model of the individual payoffs to
members of a “team.”
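As a concrete sketch, with parameter values of my own choosing purely for illustration (not VHBB's dollar payoffs), the following checks that every common effort level is a pure strategy Nash equilibrium of the min rule game and that these equilibria are Pareto ranked:

    # Sketch of the VHBB minimum-effort ("min rule") payoff. Parameter values are
    # illustrative only; effort levels here are the integers 1..E.
    A, B, E = 0.2, 0.1, 7          # requires A > B > 0

    def payoff(i, efforts):
        return A * min(efforts) - B * efforts[i]

    def is_nash(efforts):
        # No player can gain by unilaterally changing her effort level.
        for i in range(len(efforts)):
            for e in range(1, E + 1):
                trial = list(efforts)
                trial[i] = e
                if payoff(i, trial) > payoff(i, efforts):
                    return False
        return True

    n = 5
    common = [[e] * n for e in range(1, E + 1)]
    print(all(is_nash(x) for x in common))            # True: every common effort level is Nash
    print([round(payoff(0, x), 2) for x in common])   # payoffs rise strictly with the common level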
More technically, it is worth noting that if you are going to model team production, where
“moving together” beats “going it alone,” and if you are going to assume constant returns to
scale and also want the technical convenience of continuity in “effort” (hence the continuum of
equilibria) then, by Euler’s theorem, the payoff function must be non-differentiable, as with the
“min rule.” That is, the no surplus condition (a condition of Walrasian equilibrium not met in
team production) is not a mere technicality; consider also the more familiar Leontief production
technology specification. Notice further that as the number of players grows the “min rule”
provides a “severe test of payoff dominance” (VHBB, p. 236), a feature completely missed by
standard theory.
As for experimental suitability, the equality of some payoffs (with the Nash equilibria
still Pareto ranked), the symmetry, and the summary statistic (the “min rule” here)
make possible the compact and clear formulation, and this clarity is important for experiments. In
addition, as a practical matter, the symmetry means one observes that many more individuals
facing the same problem.
In their experimental form VHBB used seven levels of “effort,” and payoffs were in
dollars. In the first experiment there were seven groups of 14-16 subjects playing ten repeated
games with the same group. In the first play almost 70% did not play the high, best for all, if all
do so, effort level, but only 2% played the lowest effort level. Only 10% predicted there would
be an equilibrium outcome, and none of the seven groups played an equilibrium. In subsequent
repetitions, many of the players played below the minimum effort level of the previous
repetition, which the authors referred to as overshooting, or, more descriptively, undercutting.
By the tenth period 72% of the subjects played the minimum effort level, although in all 70
repetitions Nash equilibrium never occurred. Of course, VHBB provides much more detail, but
this is the crux of the matter for the purposes of this paper. As is typical in economic
experiments, the payoffs are in cash, but this does not imply that the players have, or behave as if
they have, cardinal utility functions and the requisite knowledge of each other’s behavior.
This undercutting is the mechanism behind the downward cascade, the strong, and also
extremely robust, as it turns out, downward dynamic. To my knowledge, VHBB is the first
explicit recognition of this downward dynamic, an entirely “new” dynamic. While a great deal
of interesting work has been done on related specifications of coordination games, it is this
critical and perplexing downward dynamic that is examined here.
The case made here for explaining the perplexing complicated behavior and dynamics of
VHBB’s results is not to deny the “draw” of Schelling’s focal Nash equilibrium, the “draw” of
Pareto dominance and Pareto optimality. But neither is it to deny the impact of Morehous’s fear
motive that characterizes the Stag Hunt game. Rather it is to recognize these forces, but to
observe that the player’s reaction to these different forces can well be idiosyncratic, be expected
to be idiosyncratic, and particularly so when the “draw” of rationality itself does not bear on such
idiosyncratic reactions, when the game theoretic outcome is indeterminate.
In particular, with respect to the undercutting mechanism itself, it does not seem
unreasonable that some players would in repeated play find the minimum play of the previous
game focal, rather than, or more than, the Pareto dominant equilibrium, but would also be
“pulled down on” by Morehous’s fear motive. Then, as the play continued, the “draw” of the
focal would fall for many with their observation of the “downward cascades.” But the “draw” of
the focal Pareto dominant equilibrium still remains, with varying degrees of intensity, and
equilibrium does not obtain. Further, it seems perfectly believable that in a real world crisis, like
the Financial Crisis and Great Recession, Morehous’s “fear” motive might overpower the draw
from focal equilibria, as it does in VHBB’s pristine tacit coordination game!
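To illustrate, and only to illustrate, how such idiosyncratic weights on the focal previous minimum and on Morehous's fear motive can generate a downward cascade, here is a deliberately crude toy simulation of my own; it is not VHBB's design and not a formal model proposed in this paper, just a mechanical rendering of the undercutting story:

    import random

    # Toy illustration only: a few idiosyncratically fearful players undercut the
    # previous period's minimum, dragging the focal point down period by period.
    random.seed(1)

    def simulate(n_players=15, levels=7, periods=10, fear_prob=0.15):
        history = []
        previous_min = levels                          # start from the payoff dominant level
        for _ in range(periods):
            choices = []
            for _ in range(n_players):
                if random.random() < fear_prob and previous_min > 1:
                    choices.append(previous_min - 1)   # undercut the focal minimum
                else:
                    choices.append(previous_min)       # treat the previous minimum as focal
            previous_min = min(choices)
            history.append(previous_min)
        return history

    print(simulate())   # a weakly falling sequence of period minimums, cascading downward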
Conclusion
In this scenario, it is the very dispersion of individual choices that is an ingredient in the
predictability of the joint behavior, the strong and extremely robust downward cascade. For this
effect there remains an important loose end however, namely the observation that as the number
of players grows the “min rule” provides a “severe test of payoff dominance” (VHBB, p. 236).
This is a feature which is completely missed by standard theory, as consideration of the strategic
form reveals. In the case of the min rule a downward deviation by a single individual at the
minimum causes the same loss to every other individual, no matter how large the team. But the
larger the team, the more likely a “low ball” player, a player particularly sensitive to Morehous’s
“fear” motive; an aspect of team size the other players likely recognize themselves. Team
growth increases the “pull” of Morehous’s fear motive if you will. The complementarity remains
very tight in this “weakest link” model.
If, on the other hand, as the team grows the effect of a single deviation lessens, this would
have an offsetting effect. The nature of the complementarity is, then, an important consideration,
as experiments verify, starting with Van Huyck, Battalio and Beil’s multiple player multiple
strategy “average opinion” experiments (1991). As an example, suppose the team product is
banking services. If the matter at hand involves possible bottlenecks in critical components in
delivering those services then the “min rule” would be appropriate. But if the matter is the team
aspect of, that is the positive externalities between the banks within, the banking system as a
whole, then the complementarity “loosens” as team, system, size grows. In general the nature of
the complementarity does significantly influence behavior, as Goeree and Holt’s (2005) repeated
experiments neatly verify, by varying the costs of own “effort” in the “min rule,” with two
strategies per player in pairwise matchings of players, that is, 2 player 2 strategy games. In these
repeated experiments, with low effort cost, the average of effort rises over time, and, with high
effort cost, the average of effort falls over time, but the dispersion of individual choices remains.
There is a caveat concerning this effect of a “loosening” of the complementarity as team
size grows, however. In this scenario of ours, lacking a generally applicable theory of the focal,
as described above, one can imagine that, if it is discernible, even a single individual’s decision
could be focal for some. A fear of “copy cats” itself could be self-fulfilling, to a degree. One
might reasonably conclude, then, that the risk of coordination-failure is higher the larger the
number of players involved and the tighter the group’s complementarity. But the very possibility
of coordination-failure cannot be ruled out in any case. Moreover, when economies advance the
tightness of complementarities grows, financial and real, within and between countries. The
frequency and scale of crises may be expected to grow as well, all else equal.
And, finally, why the term “Stag Hunt?”
Professor Vincent Crawford noted the parallel of the VHBB “team” game to Rousseau’s
1754 “Stag Hunt” parable from the “Discourse on Inequality Among Men.” (VHBB, p. 235,
footnote 1)
“That is how [primitive] men may have gradually acquired a crude idea of mutual commitments
and the advantage of fulfilling them, but only insofar as their present and obvious interest
required it, because they knew nothing of foresight, and far from concerning themselves with the
distant future, they did not even think of the next day. If a group of them set out to take a deer,
they were fully aware that they would all have to remain faithfully at their posts in order to
succeed; but if a hare happened to pass near one of them, there can be no doubt that he pursued it
without a qualm, and that once he had caught his prey, he cared little whether or not he had made
his companions miss theirs.” (Bair, trans., The Essential Rousseau, p. 175)
ACKNOWLEDGEMENTS
I am very much indebted to Marc Dudey for bringing Rapoport and Guyer’s prior work to my
attention, and to Marcia Brennan, Richard Grandy, Simon Grant, John Harsanyi, Herve Moulin,
Martin Shubik, and Robert Solow for very insightful observations, prior to this paper, concerning
the application of game theory. Errors and oversights are my responsibility alone.
BIBLIOGRAPHY
Allen, Franklin and Douglas Gale, Understanding Financial Crises, Oxford, Oxford University
Press, 2007.
_______ (eds.), Financial Crises, The International Library of Critical Writings in Economics,
London, Edward Elgar, 2008.
Blanchard, Olivier, “2011 in Review: Four Hard Truths” (http://blog-imfdirect.imf.org/
2011/12/21/2011-in-review-four-hard-truths/), IMF Blog, December 21, 2011.
Bryant, John, “A Model of Reserves, Bank Runs, and Deposit Insurance,” Journal of Banking
and Finance 4 (1980) pp. 335-344.
_____, “A Simple Rational Expectations Keynes-Type Model,” The Quarterly Journal of
Economics 98 #3 (Aug. 1983), pp. 525-528.
_____, “The Paradox of Thrift, Liquidity Preference and Animal Spirits,” Econometrica 55
(Sept. 1987), pp. 1231-1235.
_____, “Trade, credit and systemic fragility,” Journal of Banking & Finance 26 (2002), pp.
475-489.
Cooper, Russell W., Douglas V. DeJong, Robert Forsythe and Thomas W. Ross, “Selection
Criteria in Coordination Games: Some Experimental Results,” American Economic Review 80
#1 (March 1990), pp. 218-233.
Cooper, Russell W., Coordination Games: Complementarities and Macroeconomics, Cambridge,
Cambridge University Press, 1999.
De Roover, R., Money, Banking and Credit in Medieval Bruges: Italian Merchant-Bankers,
Lombards and Money-Changers: A Study of the Origin of Banking, Cambridge Mass., The
Medieval Academy of America, 1948.
_____, L’Evolution de la Lettre de Change XIV-XVIII siecles, Paris, Libraire Armand Colin,
1953.
Diamond, D. W., and P. H. Dybvig, “Bank Runs, Deposit Insurance and Liquidity,” Journal of
Political Economy 91 (1983), pp. 401-419.
Goeree, Jacob K. and Charles A. Holt, “An Experimental Study of Costly Coordination,” Games
and Economic Behavior, (special issue in honor of Richard McKelvey) 2005 #2, pp. 349-364.
Holt, Charles A., Markets, Games, & Strategic Behavior, New York, Pearson Addison Wesley, c.
2007.
Isaacson, Walter, Steve Jobs, New York, Simon & Schuster, 2011.
Morehous, L. G., “Two Motivations for Defection in Prisoner’s Dilemma Games,” General
Systems 11 (1966), pp. 225-228.
Rapoport, Anatol and Melvin Guyer, “A Taxonomy of 2x2 Games,” General Systems 11 (1966),
pp. 203-214, reprinted in General Systems 23 (1978), pp. 125-136.
Robinson, David and David Goforth, The Topology of the 2x2 Games: A New Periodic Table,
New York, Routledge, 2005.
Romer, David, Advanced Macroeconomics, New York, McGraw-Hill, 1996, 2012.
Rousseau, Jean-Jacques, [the Stag Hunt Parable], “Discourse on Inequality Among Men,” in The
Essential Rousseau, Lowell Blair (trans.), New York, Penguin, 1983, p. 175.
Schelling, Thomas C., The Strategy of Conflict, Cambridge, Mass., Harvard University Press,
1960, 1980.
Van Huyck, John B., Raymond C. Battalio and Richard O. Beil, “Tacit Coordination Games,
Strategic Uncertainty, and Coordination Failure,” American Economic Review 80 #1 (March
1990), pp. 234-248. [VHBB]
_____, “Strategic Uncertainty, Equilibrium Selection Principles, and Coordination Failure in
Average Opinion Games,” The Quarterly Journal of Economics 106 #3 (Aug. 1991), pp.
885-911.
Von Neumann, John and Oskar Morgenstern, Theory of Games and Economic Behavior,
Princeton, Princeton University Press, c. 1944.