Review of Martin Nowak,
SuperCooperators:
Altruism, Evolution, and Why We Need Each Other to Succeed
Free Press (2011)
Herbert Gintis
Martin Nowak is a mathematician and scientist of great stature, expansive ambition, and all-embracing energy. He avoids being hog-tied to any particular discipline, but rather follows his
fancy where it takes him in the behavioral sciences. Nowak is especially admirable because he is
an iconoclast who comes out guns a-blazing to confront any Received Wisdom with which he
fundamentally (or even marginally) disagrees. He apparently does not mind being wrong (he
sometimes is) because when he is right, it more than makes up for his goof-ups. Nowak was a
student of Peter Schuster, the great mathematical chemist, and a student and coauthor of the
mathematical biologist Karl Sigmund, at the University of Vienna in the late 1980s.
Nowak is currently the head of the Program for Evolutionary Dynamics at Harvard University.
He was hired by Harvard after an admirer offered to contribute a Very Large Sum to the Harvard
endowment as part of a deal to install him as Professor of Mathematics and Biology at the august
institution. Nowak steps on lots of people's toes, and there is little chance he would have been
hired at the venerable school without the huge monetary emolument. This is no criticism of
Nowak---Harvard does not cotton to anti-Establishment, transdisciplinary, iconoclastic
intellectuals. To round out Nowak's multidimensional cerebral life, he is an observant and pious
Catholic, and his work at Harvard is co-sponsored by the Templeton Foundation, famous for its
support of projects that link science and spirituality.
The main theme of SuperCooperators is both important and ubiquitous in modern behavioral
science: we humans are who we are because we evolved that way biologically, and cooperation
has been the key to our success as a species. "Our breathtaking ability to cooperate is one of the
main reasons we have managed to survive in every ecosystem on Earth, from scorched, sun-baked
deserts to the frozen wastes of Antarctica to the dark, crushing ocean depths" (the hyperbole is
typical of Nowak's literary tricks; in fact we have not survived in any of the three ecosystems he
mentions, but we have in most of those he does not). If the last half of the Twentieth century was
the Homage to Selfishness, the first half of the current century is the Homage to Cooperation.
Perhaps the most important contribution to this new theme is John Maynard Smith and Eors
Szathmary's The Origins of Life, which depicts each stage in the development of biological
complexity as the synergistic synthesis of hitherto competing biological entities, from single-cell
to multicellular organisms, from prokaryotic to eukaryotic cells, all the way up to social species,
of which our species is among the most successful. As for humans, Samuel Bowles and I have
argued this position for some fifteen years; our book "A Cooperative Species" (Princeton, 2011) is
coming out in a few months, and our colleagues Robert Boyd and Peter Richerson predated our
effort by at least a decade.
SuperCooperators is extremely accessible to non-scientists. It dispenses with footnotes and
highly accurate but complex phraseology, in favor of explaining concepts in a witty,
conversational, and light-handed manner. One is struck throughout with Nowak's love of
mathematics and science. He repeatedly conveys to the reader his love of knowledge for its
own sake (he regularly quotes from Albert Einstein, a man whose passion for truth approached
theological levels). He also conveys his commitment to and fondness for his co-researchers, both
teachers and mentors (the book is dedicated to Bob and Karl---presumably his mentor Robert
May and his teacher and coauthor Karl Sigmund). In fact, SuperCooperators is much more a
book about the work of Nowak and his Harvard Program for Evolutionary Dynamics than it is an
even-handed overview of the field and its development. This was a good choice on Nowak's part,
because it permits him to convey the degree of collegiality and personal commitment that is one
of the great comforts of scientific research, and is often ignored in favor of maintaining the
emotional flatness that is (falsely) reputed to be the hallmark of good science.
How do we account for human cooperation? Nowak offers five very broad mechanisms: direct
reciprocity, indirect reciprocity, spatial selection, multilevel selection, and kin-selection. He
devotes a chapter to each of these mechanisms, in each case highlighting his own contributions.
This may sound self-aggrandizing, and it is. But I do not think this is a mistake. The benefit for
readers is a sense of intimacy with the writer, who becomes the protagonist in a scientific
melodrama, a knight in shining armor slaying falsity and promoting truth wherever his
wanderings take him. However, this approach does not allow the reader to appreciate the
substantive scientific issues involved. For instance, Nowak is a strong supporter of kin selection,
but a bitter critic, along with Harvard colleague Edward O. Wilson and co-worker Corina Tarnita,
of William Hamilton's explanation of kin selection in terms of "inclusive fitness." Now you must
understand, dear reader, that the inclusive fitness concept has become the standard explanation of
cooperation in population biology and animal behavior theory, so Nowak's critique, presented in
Nature in 2010, produced a major shake-up in the profession. Nowak presents his critique here,
but does not attempt to relate the counterarguments, which are at least extremely interesting, and
probably indicate serious problems with the Nowak-Tarnita-Wilson argument. For one thing, the
authors pose inclusive fitness theory as an "alternative to natural selection," which is pretty
ludicrous. Right or wrong, inclusive fitness theory is a form of natural selection, not an
alternative to natural selection.
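For readers who have not met the term, the heart of inclusive fitness theory is usually summarized by Hamilton's rule; what follows is the standard textbook statement, not a formula quoted from Nowak's book or from the Nature exchange:

\[
  r\,b > c,
\]

where b is the fitness benefit an altruistic act confers on its recipient, c is the fitness cost to the actor, and r is the genetic relatedness between the two. Whatever the limits of this condition, it describes how natural selection acts on social traits; it is not a rival to natural selection.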
To illustrate his light-handed treatment of scientific themes, consider Nowak's treatment of
multilevel selection. After two short paragraphs going over the history of group selection (focusing
on Wynne-Edwards' latter years studying red grouse with binoculars from the window of a room
in a rest home in the Dee Valley), Nowak confesses that earlier attempts to model multilevel
selection were "too complicated for my taste" and that he and Karl Sigmund did not find them
"particularly convincing." This manner of presenting objections to a scientific theory bypasses all
substantive issues (what were the earlier models and what were their successes and deficiencies)
and moves directly to the subjective emotions of Nowak and Sigmund. How does Nowak
describe his and Sigmund's passage into the theory of multilevel selection? "In the heart of the
Rauriser Urwald," he writes, "Karl [Sigmund] and I came across a little wooden board bearing a
poem by Goethe (1749-1832)...Goethe's poem begins as follows: "Müsset beim Naturbetrachten
immer eins wie alles achten." This translates as "When looking at nature, you must always
consider the detail and the whole." I could not think of a better way to express the idea of
multilevel selection and, though I realized the field had a long, troubled, and vexatious history, I
began to think about whether it might work in the real world." Now, of course, the Goethe quote
can be interpreted in many ways, but none of them has anything to do with multilevel or any
other form of selection. Moreover, I frankly do not believe that this story is accurate. It sounds
like an apocryphal yarn, like the one about Newton and the apple.
Nowak goes on to explain the model of multilevel selection he developed with Arne Traulsen, which is no
different from a dozen previous models of the within-group cost and between-group benefit of
altruistic behavior, and with no mention of Price's equation (reproduced below), which was the inspiration for all
previous mathematical models of multilevel selection (Nowak belittles Price's equation later in
the book as being "tautological", which is incorrect, as it depends on an empirically meaningful
analytical representation of biological fitness). The paper Traulsen and Nowak wrote (PNAS
2006) was indeed simple and elegant, but hardly earthshaking. The elegance derives from its
simplifying assumptions (recall Nowak's distaste for complexity), which include (a) no migration;
(b) fixed maximum group size; (c) groups of maximum size consist of all cooperators or all
defectors. None of these conditions is biologically relevant, so the Nowak-Traulsen contribution
is likely only of passing interest. To reiterate, I do not consider Nowak's self-aggrandizement a
mistake, given his goal of presenting the non-scientist reader with a highly human treatment of
behavioral and mathematical theory. The historians will sort out who actually did what, and the
average reader does not really care at all. However, this approach does lead to the suppression of
scientific content in favor of personal feelings and memorable anecdotes.
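Since Price's equation is belittled in the book and never written out, it is worth setting down the group-structured form that inspired earlier multilevel models; this is the standard textbook decomposition (ignoring transmission bias), not anything drawn from SuperCooperators:

\[
  \bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}_k\!\bigl(\bar{w}_k,\bar{z}_k\bigr) \;+\; \operatorname{E}_k\!\bigl[\operatorname{Cov}_i\bigl(w_{ik}, z_{ik}\bigr)\bigr],
\]

where z is the frequency of the altruistic trait, w is fitness, k indexes groups, and i indexes individuals within a group. The first term is selection between groups (groups with more altruists out-reproduce other groups), and the second is selection within groups (altruists are out-reproduced by the defectors beside them); this is exactly the within-group cost, between-group benefit tension that the Traulsen-Nowak model and its predecessors formalize.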
Given his strong personal commitments to his co-workers, it is not surprising that Nowak rarely
has praise for the sustained and seminal contributions of contemporary researchers who work
outside his research group. Luca Cavalli-Sforza, Robert Boyd, Peter Richerson, Robin Dunbar,
Alan Grafen, Amos Zahavi, and Joseph Henrich are not mentioned at all, and others, including
Samuel Bowles and Marcus Feldman, are mentioned in passing for a particular study or two, but
nothing revealing the depth of their thought or nature of their contributions to modern behavioral
science. Similarly, he has little room for contributions from other fields, such as economics,
sociology, and psychology, or evidence from archeology, history, anthropology, or paleontology.
This I also consider a shortcoming. Weaving the contributions of others with his own in a more
substantive way would give the reader a better appreciation for the contemporary state of
research on cooperation and conflict in the animal and human world. Perhaps this is just tit-for-tat, as I find that few authors from the behavioral sciences outside of Nowak's circle pay much
attention to the work of Nowak and his students. My problem with Nowak's models is that they
tend to remove important aspects of the subject being modeled in the interest of obtaining
elegant formulas. What, for instance, is the value of a model of group selection without
migration and variable group size? What, indeed, are we supposed to make of the Prisoner's
Dilemma played on a grid, when in real life no species plays any games at all on a grid?
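For readers who have never seen one of these lattice models, here is a minimal sketch of the kind of grid-based Prisoner's Dilemma at issue, in the spirit of the spatial games Nowak pioneered with Robert May; the grid size, payoff values, and imitate-the-best update rule are illustrative choices of mine, not parameters taken from his papers.

```python
import random

# Sketch of a spatial Prisoner's Dilemma on a torus-shaped grid.
# Each site holds a cooperator (1) or a defector (0). In every generation a
# site plays the one-shot game with its four neighbors, totals its payoff,
# and then copies the strategy of the highest-scoring site in its
# neighborhood (keeping its own strategy if it did best).

N = 20                              # illustrative grid width and height
R, S, T, P = 3.0, 0.0, 3.5, 0.5     # illustrative payoffs with T > R > P > S

def payoff(me, other):
    if me and other:
        return R        # both cooperate
    if me and not other:
        return S        # I cooperate, you defect
    if other:
        return T        # I defect, you cooperate
    return P            # both defect

def neighbors(i, j):
    return [((i - 1) % N, j), ((i + 1) % N, j), (i, (j - 1) % N), (i, (j + 1) % N)]

def step(grid):
    # total score of each site against its four neighbors
    score = {(i, j): sum(payoff(grid[i][j], grid[x][y]) for x, y in neighbors(i, j))
             for i in range(N) for j in range(N)}
    # each site adopts the strategy of the best performer in its neighborhood
    new = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            bi, bj = max([(i, j)] + neighbors(i, j), key=lambda s: score[s])
            new[i][j] = grid[bi][bj]
    return new

random.seed(1)
grid = [[1 if random.random() < 0.5 else 0 for _ in range(N)] for _ in range(N)]
for _ in range(50):
    grid = step(grid)
print("cooperators after 50 generations:", sum(map(sum, grid)))
```

Clusters of cooperators can persist on such a lattice because they interact mostly with one another; that is the point of spatial selection, and it is also the source of my complaint, since no real population sits motionless on a fixed grid.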
I like the fact that Nowak does not shy from headlong attack on ideas that he thinks are wrong. I
am not so happy, however, when the ideas of people with whom I work are being attacked.
Nowak's Chapter 12, "Punish and Perish," is just such an attack. In various works, my colleagues
and I have proposed that humans exhibit a behavioral syndrome we call "strong reciprocity,"
according to which most humans have a predisposition to cooperate in collective endeavors that
depend on voluntary participation, the success of which depends on a high rate of cooperation
and a low rate of defection, and also have a predisposition to punish those who free-ride on the
cooperation of others, without requiring that their efforts be repaid in the future. The latter
predisposition has been called "altruistic punishment," and it is this concept to which Nowak
objects. I think Nowak has correct objections to assertions we do not make but that he attributes
to us, and incorrect objections concerning statements we do make.
Nowak begins Chapter 12 by saying that "Punishment is not, as some have claimed, a
mechanism for the evolution of cooperation." Rather, he claims, "punishment fits neatly into the
framework of the Prisoner's Dilemma." Here Nowak's claim is incorrect. The Prisoner's
Dilemma, and direct reciprocity as well, are only relevant in dyadic interactions, and it is quite
true that altruistic punishment is irrelevant in such interactions, or in fact in even broader
interactions where reputation effects are powerful and ubiquitous. However, human cooperation
occurs in large groups in which reputation effects have little or no power. It is in these situations
that altruistic punishment becomes relevant for sustaining cooperation.
For instance, consider voting in a democratic society. It is costly to vote, and because one vote
cannot change the outcome of an election except in the smallest of communities, a purely self-regarding person would never vote. Voting is thus mostly an example of the positive side of
strong reciprocity, that of altruistic cooperation. However, many people vote because their
family, friends, and associates would disapprove of their lack of community spirit if they did not
vote. If A does not vote and B expresses disapproval, then B incurs the cost of disrupting the
normal relationship between A and B. Disapproving of A is thus an act of costly altruistic
punishment, supporting democratic norms at personal expense without the likelihood of any
personal gain therefrom. Without such altruistic punishment, voting might be considerably less
common than it is, a situation that could delegitimize political democracy.
Consider also ostracism, a punishment commonly imposed by a group on a member who violates the
norms associated with group membership. An individual is ostracized when all (or at least most)
group members rupture normal relations with the individual and/or actively prevent the
individual from participating in group activities or sharing the benefits of these activities. We
tend to ignore the altruistic punishment side of the ostracism phenomenon, but it is always
present, unless there is an external third party who has an incentive to enforce the ostracism
decision.
Nowak does recognize the existence of multi-player games, as when he says "If I punish you for
defecting in games with other players, that is indirect reciprocity." This statement, however, is
incorrect. Indirect reciprocity depends on the ability of the group to establish strong reputational
information, but such information is generally not available in large groups. If I express to you
my disapproval of your not voting, who, other than yourself, will see this action and use it to
increase my stock of "good reputation"? The answer is generally no one; nor do I express my
disapproval because I hope to gain reputation-wise.
Of course altruistic punishment is not a "mechanism of evolution" in the sense of direct
reciprocity, indirect reciprocity, and the others among Nowak's "big five." The big five are social
mechanisms, whereas altruistic punishment is an evolved form of behavior exhibited by humans
that contributes to human cooperation, and hence to the emergence of our species as a world-scale key player. It operates in direct interactions and in large groups, and explains important aspects of
multilevel selection in humans, but like big teeth, wings, or an advanced immune system, it is not
a "mechanism" of evolution.
Nowak also stresses that in the many experiments that have shown the efficacy of altruistic
punishment in fostering cooperation, the cost of punishing often completely offsets the
gains from cooperation. He does not, however, present the important study by Gächter,
Renner, and Sefton (2008) in which subjects are allowed to interact over 50 periods rather than
just 10. They found that after the initial rounds, the net benefits to the group with the punishment
option significantly exceeded those of the no-punishment group, with the difference in net
payoffs growing over time, except for the final round in which the hapless end-game free-riders
were heavily punished.
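To fix ideas, the experiments under discussion are built on a linear public goods game with a punishment stage; the following is the generic form (the particular endowments, multipliers, and punishment technology vary across the studies cited here, so the symbols are placeholders rather than the parameters of any one experiment):

\[
  \pi_i \;=\; e \;-\; c_i \;+\; \frac{m}{n}\sum_{j=1}^{n} c_j \;-\; \kappa\!\sum_{j \neq i} p_{ij} \;-\; \sigma\!\sum_{j \neq i} p_{ji},
\]

where e is the endowment, c_i the contribution of player i, n the group size, m the multiplier (with 1 < m < n, so contributing raises group income but lowers one's own), p_{ij} the punishment points i assigns to j, κ the cost per point to the punisher, and σ the loss per point to the target. The worry Nowak raises concerns the last two terms: in short experiments the resources burned on punishing can swallow the gains produced by the third term, whereas over 50 periods the deterrent effect of punishment raises contributions enough to more than cover its cost.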
Given that most social dilemma interactions in neighborhoods, work teams, and the like extend
over far more than 10 periods, the concern that altruistic punishment lowers group benefits appears to be
misplaced. The experiment to which Nowak refers (Dreber et al. 2008) does not constitute
evidence for the counterproductive punishment hypothesis for the additional reason that their
two-person game made punishment irrelevant, for one could always retaliate against a defector
simply by withdrawing cooperation, thus obviating the need for any special kind of punishment.
But while the 50-period design of the Gächter et al. experiment corrects one of the
design biases that suggested counterproductive punishment in the earlier experiment, their design
still misses something essential to altruistic punishment in the real world: it is effective only if it
is regarded as legitimate according to widely held social norms.
Ertan, Page, and Putterman (2009) designed an ingenious experiment to explore this possibility.
They allowed experimental subjects, prior to playing the public goods game, to vote on whether
punishment should be allowed and, if so, whether it should be restricted in any manner. From their first
opportunity to vote, no group ever allowed punishment of high contributors, most groups
eventually voted to allow punishment of low contributors in the baseline treatments, and the
result was both high contributions and high efficiency levels. In the laboratory, groups solved
their free-rider problems by allowing low contributors alone to be punished. Apparently the
determination of the punishment system by majority rule made the punishment not only an
incentive but also a signal of group norms.
Nowak also argues that what we call altruistic punishment is often costly but not motivated at all
by a prosocial desire to punish norm violators. Rather, it is motivated by spite and a taste for
revenge. Punishing those who have hurt you is normally motivated by a taste for retribution
rather than by a selfless expression of support for public morality. Indeed, in our research we tend to
term this costly punishment, and it is altruistic in the sense of being other-regarding. However, we
fully understand that retribution is generally not motivated by prosocial intentions. Much of the
punishment that occurs under the rubric of strong reciprocity is motivated by the urge to retaliate
simply because hurting those who hurt you is a selfish, however socially useful, pleasure.
Nowak's critique of altruistic punishment in Dreber et al. probably should not have been
published because it is well known that cooperation can be sustained in the repeated prisoner's
dilemma by self-regarding players without punishment (it is called tit-for-tat), and interpreting
cooperation as "reward" in this setting is just a semantic ploy with no substance. However, in
2009, in Science, Nowak published a study of a true public goods game with punishment (groups of size
four), titled "Positive Interactions Promote Public Cooperation," with coauthors David G. Rand,
Anna Dreber, Tore Ellingsen, and Drew Fudenberg. The paper contends that experimental
evidence with human subjects in the laboratory shows that reward is more effective than
punishment in eliciting cooperation in a public goods game.
The paper has two serious and obvious flaws. First, the game is indefinitely repeated, so there are no
end-game results. This design contrasts with virtually all previous studies of the public goods game,
which assume that players know exactly how many periods will be played, and much can be
learned from the course of play from early to late rounds. It is well known, for instance, that the
rate of decay of cooperation depends on the length of the game, and even when cooperation is
sustained for most of the game, it can break down towards the final periods. In the
Supplementary Online Material (SOM) the authors justify this deviation from the standard
protocol by saying that indefinite repetition is more realistic than a fixed number of periods. This
is true, but irrelevant. Having a fixed number of periods is an experimental control condition that
is introduced precisely to avoid the ambiguity of indefinite repetition: there are an infinite
number of equilibria to the indefinitely repeated game, and which, if any, is chosen is likely to
depend on extraneous framing effects.
Much more important, however, the "reward" in the experiment costs 4 points for the rewarding
subject while the recipient gains 12 points. Of course reward will outperform punishment if the
experimenter pays sufficiently for the reward, while the players pay for the punishment! It would
be a surprising result if the authors came to the same conclusion with the recipient getting 4
points, not 12.
In fact, it is clear that the punishment/reward stages of the game in this paper form another public
goods game with a multiplier of 3, in which individuals cooperate with no end-game effects. The
fact that rewarding players can choose to whom to direct the benefits they generate is interesting,
but we still have two overlapping public goods games in which the second includes a signal
(previous contribution) that partially recreates the conditions of a two-player repeated prisoner's
dilemma. This is a worthy subject of study, but in no way supports the authors' conclusions.
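The arithmetic behind the multiplier-of-3 claim is worth making explicit; the 4-point cost and 12-point benefit are the figures cited above, and the rest is simple bookkeeping:

\[
  \text{cost to the rewarder} = 4, \qquad \text{gain to the recipient} = 12, \qquad \frac{b}{c} = \frac{12}{4} = 3 .
\]

A player who spends 4 points to hand a fellow group member 12 points is down 4 while the group as a whole is up 8, exactly the structure of a contribution to a public goods game with multiplier 3, except that the benefit is targeted at a chosen recipient. High levels of "rewarding" therefore reproduce the cooperation problem one stage up; they do not show that reward outperforms punishment.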
By the way, the reader would never suspect the existence of any of the egregious flaws in this
paper by reading the published material. It is all available only in the Supplementary Online
Material, which is read only by experts. I doubt that the reviewers of the paper even glanced at
the Supplementary Online Material. If I were a Science editor, I would compel reviewers to
review, separately, the main contribution and the supplementary material. The quality of
reviewing might improve considerably, at least in areas with which I am acquainted.
Nowak's hostility to altruistic punishment appears to flow from his preference for rewarding
good behavior rather than punishing bad behavior. This is of course very enlightened thinking
when it comes to raising children and managing employees. But here he confuses the sorts of
social dilemmas dealt with in the human cooperation literature with reward and punishment in the
sort of principal-agent models that economists commonly deal with, such as the employer-employee relationship.
In an employer-employee setting, despite the predictions of standard economic theory, punishing
bad behavior often has severely efficiency-reducing effects, whereas trusting an employee to "do
the right thing" is often the most cost-effective strategy an employer can use. In effect, by
trusting, the employer sets up a setting where the strong reciprocity predispositions of employees
operate to great effect. This was shown by our colleagues Ernst Fehr, Simon Gächter, and
Georg Kirchsteiger (1997) in one of the most famous experiments in behavioral game theory, in
which they show that trust is a potent contract enforcement device. More recently, Ernst Fehr
and Bettina Rockenbach, in their paper "Detrimental Effects of Sanctions on Human Altruism,"
Nature 422 (13 March 2003): 137-140, showed clearly that when employers have an available
punishment device but choose not to use it, they elicit the best performance from workers.
Sanctions are important, but they surely backfire when deployed in inappropriate circumstances.
Social dilemmas where cooperation is voluntary and regulated by peer relationships are not
principal-agent models, which are dyadic hierarchical interactions. However, it is usually true
even in principal-agent models that the principal has some means of punishing miscreant agents
(e.g. firing them). Nowak's idea of a world in which reward reigns supreme and the threat of
punishment is absent is not our world.
Nowak's iconoclasm is a wonderful gift to scientists, because he stirs up the muddy waters in
fruitful directions even when he is ultimately wrong in his critiques. In the long run, however,
Nowak's modus operandi may be costly to an assessment of his scientific contributions. Nowak
often jumps into a field he barely knows, makes arguments that experts merely dismiss as half-baked, and is taken seriously only by scientists and non-scientists who are even more ignorant of
the field than Nowak. Human cooperation is a case in point.