Review of Jon Elster, Explaining Social Behavior,
Cambridge University Press (2007)
Herbert Gintis
Jon Elster's strength is his deep understanding of behavioral science as well as the classical
writers on human nature and human society. In the past several years, his goal has been to join
the two, throwing in the natural sciences, to explain more fully the nature of society. He says (p.
246) "In a common view, the scientific enterprise has three distinct parts or branches: the
humanities, the social sciences, and the natural sciences...but...a rigid distinction may prevent
cross-fertilization...the social sciences can benefit from the biological study of human beings and
other animals...interpretation of works of art and explanation are closely related enterprises." I
think he is largely successful, and that this is a very useful approach for humanists and social
scientists (although not for natural scientists). The number of insights per page in this book is
prodigious, and it should be widely read.
I have several criticisms of Elster's exposition. In part, our differences may have narrowed or
disappeared, as this book was published in 2007 and doubtless written a few years before that.
Elster's treatment of altruism is very Kantian. An act is altruistic if it benefits another at a cost to
oneself, and one was motivated to undertake the act in order to benefit the other person. This, I
believe, is absurd. If I really care about another person's welfare, then it pleases me to help this
person. It is in my self-interest to behave altruistically. Very often I use the term self-regarding rather than self-interested, precisely because a truly moral person has a self-interest in being other-regarding. Part of my satisfaction in performing the altruistic act is that doing so is morally right, and I get satisfaction from behaving in a morally correct manner. But it need not be so. I may
think there is nothing especially moral about being helpful or considerate or loyal--it just gives
me satisfaction. Similarly, I may punish bad acts of others not because I want to change society
for the better, but because I am personally very angry at the behavior. If I scream at a bad driver
on the road, I am not trying to improve his driving behavior. I am trying to make him feel bad,
and I might not care a whit whether it affects his behavior.
Elster's treatment of rational choice is quite knowledgeable and sophisticated. But he presents the
theory in a manner that renders it empirically incorrect, and gives no way to improve upon it,
except to talk about emotions and irrationality. Rational choice theory assumes agents have a
subjective prior over the effect of their choices on outcomes (beliefs), a set of transitive, consistent preferences over outcomes (preferences), and that they face constraints in making their choices (such as limited information and resources). A rather strong additional assumption, though one I think is generally acceptable, is that rational agents update their subjective prior using something equivalent to Bayes' rule. That is all. Elster insists that "rational choice theorists want to explain
behavior on the bare assumption that agents are rational." (p. 191) This I call the fallacy of
methodological individualism, which is rampant in economics, and is empirically false (see my
book Bounds of Reason, Princeton 2009).
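A minimal sketch of this stripped-down model may help fix ideas (the states, signals, actions, and payoffs below are hypothetical, chosen purely for illustration): the agent holds a subjective prior over states of the world, revises it by Bayes' rule on observing a signal, and then picks the feasible action with the highest expected utility under its preferences.

```python
# A minimal sketch of the stripped-down rational choice model: beliefs
# (a subjective prior), preferences (a utility function), constraints
# (a feasible set of actions), and Bayesian updating. All states,
# signals, actions, and payoffs here are hypothetical.

def bayes_update(prior, likelihood, observation):
    """Revise a subjective prior over states after observing a signal."""
    unnormalized = {s: prior[s] * likelihood[s][observation] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

def best_feasible_action(posterior, utility, feasible_actions):
    """Choose the feasible action with the highest expected utility."""
    def expected_utility(action):
        return sum(posterior[s] * utility(action, s) for s in posterior)
    return max(feasible_actions, key=expected_utility)

# Beliefs: a subjective prior over two states of the world.
prior = {"good": 0.5, "bad": 0.5}
# How likely each state is to generate each signal.
likelihood = {"good": {"up": 0.8, "down": 0.2},
              "bad":  {"up": 0.3, "down": 0.7}}

# Preferences: payoffs over (action, state) pairs.
payoffs = {("invest", "good"): 10, ("invest", "bad"): -5,
           ("wait", "good"): 0, ("wait", "bad"): 0}

def utility(action, state):
    return payoffs[(action, state)]

posterior = bayes_update(prior, likelihood, "up")                      # belief revision
choice = best_feasible_action(posterior, utility, ["invest", "wait"])  # constrained choice
print(posterior)   # roughly {'good': 0.73, 'bad': 0.27}
print(choice)      # 'invest'
```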
A stripped-down version of rational choice theory is both compatible with the facts, and
extremely useful, as much of applied economics attests. The main weakness of the theory, I
believe, is the assumption that beliefs are personal (subjective prior), when in fact beliefs are
generally the product of social linkages among complexly networked minds, and probabilities
are resident in distributed cognition over this network. Elster's discussion of beliefs is again very
rich, but he does not address the networked character of minds or the relationship of such networks to the formation and transformation of beliefs.
Elster's description of game theory is very useful and his suggested readings are excellent.
However, he criticizes game theory for "predictive failures" (p. 337), including behavior in
finitely repeated games that are subject to analysis using backward induction. These games
include the repeated prisoner's dilemma, the centipede game, and the traveler's dilemma. In all
cases, backward induction gives a result that is very far from how people play the game. For
instance, in the repeated prisoner's dilemma and the centipede game, backward induction says to
defect on the first round, whereas in fact in a long game, people generally cooperate until near
the very end of the game. However, the use of backward induction, while very common in game
theory, cannot be justified by rationality alone. Rather, one needs common knowledge of rationality (CKR), which I believe is a very suspect epistemological condition (see Bounds of Reason).
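To make concrete why backward induction predicts immediate defection, here is a minimal sketch with an illustrative parameterization of the centipede game (not the exact experimental games Elster cites): the pot grows each time a player passes, yet the backward-induction plan is to take at every node, so play ends at the very first node with payoffs far below the cooperative path.

```python
# A minimal backward-induction sketch for a centipede game, with an
# illustrative parameterization (the pot starts at 4 and doubles each node;
# the taker gets 80% of the current pot). Not the exact games Elster cites.

def centipede_game(n_nodes, start=4, growth=2):
    """Return (take, terminal) payoffs for an illustrative centipede game.

    take[t]  = (payoff to the mover, payoff to the other) if the mover
               takes at node t.
    terminal = (player 0, player 1) payoffs if every node is passed;
               the final passer ends up with the small share.
    """
    take, pot = [], start
    for _ in range(n_nodes):
        take.append((0.8 * pot, 0.2 * pot))
        pot *= growth
    last_mover = (n_nodes - 1) % 2
    terminal = [0.0, 0.0]
    terminal[last_mover] = 0.2 * pot
    terminal[1 - last_mover] = 0.8 * pot
    return take, tuple(terminal)

def backward_induction(n_nodes, take, terminal):
    """Solve the game from the last node back to the first."""
    value = terminal          # payoffs (player 0, player 1) from optimal play downstream
    plan = []
    for t in reversed(range(n_nodes)):
        mover = t % 2         # players alternate moves
        take_payoff = [0.0, 0.0]
        take_payoff[mover], take_payoff[1 - mover] = take[t]
        # A "rational" mover takes whenever taking beats the continuation value.
        if take_payoff[mover] > value[mover]:
            value = tuple(take_payoff)
            plan.append((t, "take"))
        else:
            plan.append((t, "pass"))
    plan.reverse()
    return value, plan

take, terminal = centipede_game(6)
value, plan = backward_induction(6, take, terminal)
print(plan)    # "take" at every node: play ends immediately at node 0
print(value)   # (3.2, 0.8): far below what players who cooperate deep into the game obtain
```

The point of the sketch is that the "take everywhere" plan follows only if each player is certain, at every node, that the other will reason the same way at every later node, which is precisely the common knowledge of rationality assumption rather than rationality alone.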