What in moral psychology is innate?
Joshua Greene
Department of Psychology; Center for the Study of Brain, Mind, and Behavior
Princeton University
[Note to readers: This is a very rough draft, as will be evident to anyone who reads it. Please do
not cite or circulate. Feedback welcome: jdgreene@princeton.edu]
If you go to www.dictionary.com, type in the word “innate,” and hit enter, this is what you’ll get:
adj
1. Possessed at birth; inborn.
2. Possessed as an essential characteristic; inherent.
3. Of or produced by the mind rather than learned through experience: an innate
knowledge of right and wrong.1
Of all the things in the world one might use to illustrate the concept of innateness, this dictionary
offers moral knowledge. I find this amusing—the idea that someone who is not exactly sure
what “innate” means would benefit from knowing that one of the most complex and least
understood of human capacities could plausibly be described as “innate.” And yet this choice, I
suspect, is no accident. Our capacity for moral judgment, perhaps more than anything else,
strikes people as both within us and external to us, as essentially human and at the same time
possessing a mysterious external authority, like the voice of God or Nature calling us at once
from within and beyond. But however obvious the reality of an innate capacity for moral
judgment may be to theologians, lexicographers, and the like, it is not at all obvious from a
scientific point of view.
What is clear is that the days of the tabula rasa are over [Pinker], and insofar as an innate
capacity for moral judgment is simply the opposite of a moral blank slate, we can say that human
moral judgment is innate, or has an innate component. But one might think that we humans are
genetically equipped with a moral psychology that is more than merely un-blank. One might
suppose that we are endowed with a specialized moral sense [ref], delicately tuned to detect nature's
moral signals, or a universal moral grammar [ref], hard-wired in to enable the fast and efficient
generation of well-formed moral judgments and behaviors. Or perhaps we have an innate
capacity for moral reasoning. How plausible are these more ambitious hypotheses? In this
chapter I will examine evidence from a variety of sources that will allow us to make some
educated guesses.
1
[Note that this is not a very good definition, for reasons discussed below. The definition was retrieved in June 2003. Source:
The American Heritage® Dictionary of the English Language, Fourth Edition.
Copyright © 2000 by Houghton Mifflin Company.
Published by Houghton Mifflin Company. All rights reserved.]
What do we mean by “innate”?
I will not dwell on this ubiquitous question, but it’s sufficiently important to warrant a few
remarks. (For more in-depth discussions see [?, ?, and ?] in this volume and [Elman et al.,
Samuels].)
First, note that the dictionary definition above is a poor one for our purposes. Traits such
as secondary sexual characteristics may be innate, but they are not present at birth. Likewise, it’s
not clear what counts as an “essential” characteristic. My semi-detached earlobes may well be
innate, but are they essential? Essential for what? Finally, there is no meaningful dichotomy
between things that are produced by the mind and things that are learned through experience, as
nearly everything with which we are concerned in this chapter is a product of the mind, though
only some of these mental products are learned through experience.
Second, there simply is no straightforward scientific definition of “innate.” One can’t say
that a trait is innate if it’s determined by genes because almost nothing of interest is
“determined,” i.e., determined entirely, by genes. The unfolding of even the most rigid
developmental programs typically requires input from the pre- and/or post-natal environment.
Rather than saying that an innate trait must be “determined” by genetic influences, one might say
that it must be importantly “affected” or “shaped” or “constrained” by genetic influences. This
approach has the unfortunate consequence of making nearly everything, from baseball to
bouffant hairdos, innate. In light of this one might then make the rather obvious point that
innateness is a matter of degree. This is certainly true, but this facile concession masks the depth
of the problem, suggesting that we will merely have to settle for such claims as, “Baseball is
fifteen percent innate and eighty-five percent learned.” The contributions of genes on the one
hand and learning, culture, and the environment on the other are very difficult to distinguish, let
alone quantify. It’s not as if baseball would exist in a form that is eighty-five percent similar to
its present form (whatever that means) if humans had radically different eyes, ears, and body plans, nor would we have fifteen percent of baseball if the baseball-playing cultures that exist
today had taken a sufficiently different course through recent history.
One can meaningfully talk about the percentage of the observed variance in a trait that is
accounted for by genes. For example, it might turn out that, say, fifty percent of the observed
variance in professional baseball players’ batting averages is attributable to genetic factors. But
a trait’s having a large proportion of its variance accounted for by genetic factors is not the same
thing as that trait’s being innate. As supposed above, it may turn out that variance in batting skill
is largely accounted for by genetic factors, but that does not mean that baseball-playing behavior
is innate, at least not in any ordinary sense. In principle, innateness does not require variance at
all. If, for example, human eyes were all exactly the same, there would be no variance in human
eye structure to account for, but this wouldn’t mean that human eyes have no innate structure.
Indeed, such uniformity might be thought to imply that eye structure is more innate.
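To make the variance-partitioning point concrete, here is a minimal sketch in Python. The additive trait model and the fifty-fifty split are made-up illustrative assumptions, not claims about any real trait:

```python
import random
from statistics import variance

random.seed(0)
n = 10_000

# A toy additive model of a trait: score = genetic component + everything
# else. The fifty-fifty split is an illustration, not an empirical claim.
genetic = [random.gauss(0, 1) for _ in range(n)]
environment = [random.gauss(0, 1) for _ in range(n)]
trait = [g + e for g, e in zip(genetic, environment)]

# "Percent of variance accounted for by genes" is a ratio of variances
# within a particular population, nothing more.
prop = variance(genetic) / variance(trait)
print(f"Proportion of variance accounted for by genes: {prop:.2f}")  # ~0.50

# The limiting case from the text: if everyone had identical genes,
# variance(genetic) would be 0 and the ratio would be 0 -- even for a trait
# (like eye structure) with rich innate organization.
```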
More generally, one might suppose that uniformity or universality implies innateness, or vice
versa. These seductive ideas are problematic as well. First, starting with the “vice versa,” an
innate trait need not be universal. For example, when a new adaptive trait first appears within a
sub-population of a species it is not universal, and yet it may be as innate as anything else. In
addition, an observable trait may be innate without being universal because it is the result of a
universal disposition to exhibit that trait in some circumstances rather than others [ref]. Second,
universality or uniformity does not imply innateness. Hairstyling and poetry are reported to exist
in all human cultures [Brown], but are these things therefore innate? It’s not obviously true, and
it’s not clear what could settle the question. Clearly humans tend toward these activities in
normal environments, and clearly this state of affairs depends on genetic input, but to say that
poetic composition and hairstyling are innate seems to go beyond the facts of universality and
genetic dependence.
Perhaps one should say, following Sober [1999], that a trait is innate if the trait reliably
emerges in a sufficiently wide range of environments given the genes that code for that trait.
(Here the claim is not that universality implies or is implied by innateness, but rather that
universality within the population that has the relevant genes implies and is implied by
innateness.) As Samuels [2002] points out, this criterion suffers from a similar problem to one
described above. Nearly everyone ends up with the belief that water is wet, and yet this belief is
not obviously innate and is likely to be learned.
Perhaps the difference that makes the difference is natural selection: If poetry and hairstyling were not specifically selected for during the course of human evolution, then they are not
innate. Unfortunately, the natural selection criterion probably excludes too much. First, it would
fail to count traits caused by new mutations as innate. Second, there are plausibly innate traits
that are not specifically selected for, e.g., the human susceptibility to cocaine addiction. This
susceptibility certainly isn’t learned, and it doesn't appear to be culture-bound. If requiring that
innate traits be specifically selected for is asking too much, perhaps we should say that a trait is
innate if it follows as a direct consequence of traits that were selected for. Unfortunately, this
criterion risks classifying as innate everything that depends in any significant way on genes,
which is pretty much everything of interest. Of course it all depends on what one means by
“directly.” One can certainly make progress along these lines by specifying the various ways
that genes can affect traits [Elman et al], some of which are more naturally classified than others
as “direct,” but it is unlikely that such efforts will return an account of innateness that is at once
intuitively appealing and scientifically useful.2
2
A noteworthy proposal along these lines is Richard Samuels' [2002] primitivist account of innateness. Samuels argues that a psychological trait is innate if it is "psychologically primitive," and a trait is psychologically primitive if correct psychological theory (as opposed to biological theory) does not explain how this trait is acquired. Thus, this view distinguishes the innate from the non-innate by appeal to the distinction between psychology and biology. In my estimation, Samuels' account elegantly captures many people's intuitive sense of what "innate" means and in many ways gets to the heart of the matter. There does seem to be an important distinction between psychology and biology, and I agree with Samuels that our intuitive sense of innateness depends on this distinction. But, I maintain, appearances are deceiving. The emerging field of cognitive neuroscience is, in a nutshell, the dissolution of the intuitively appealing but ultimately illusory distinction between psychology and biology. If Samuels' account of innateness is the best one can give—and it may well be—then what Samuels' account reveals, above all else, is that the usefulness of the concept of innateness will diminish as the science of mind (psychology) and the science of brain (neuroscience) continue to merge. Some of the results discussed in this chapter will, I hope, illustrate this merging process.
I mention the above complexities to illustrate the simple point that the concept of
“innateness” is not so simple, and may even be more trouble than it’s worth. Still, one may
emerge from this barrage of logical distinctions and cautionary examples with the sense that, for
the things that matter, it can’t really be all that complicated. “We all know perfectly well,” one
might say, “that the visual system is basically innate and that baseball is not!” This is mostly
half right. There are some things, like the human visual system, that have all the characteristics
that we expect from the things we call “innate.” The problem is baseball. It’s very hard to say
what exactly it is that makes baseball not innate. This is especially true once one realizes that
baseball may be considered at different levels of abstraction. Baseball, like the English
language, was not specifically selected for, but a disposition to engage in physically challenging
and competitive leisure activities may have been selected for, just as the grammar underlying the
English language may have been as well [Pinker]. But even if we come to the firm conclusion
that baseball is not innate, this does not mean that evolution and baseball have nothing to do with
each other. On the contrary, if we want to understand why baseball is so popular, why a curve
ball is so hard to hit, why some players streak while others slump, why baseball’s rules seem
reasonable, and why we tend to root for the home team, an evolutionary perspective may be
absolutely essential. Moreover, the conclusion that baseball is not innate is entirely compatible
with the claims that ninety percent of the variability in baseball performance is attributable to
genetic factors and that a game like baseball was destined to evolve sooner or later.
In other words, it’s not entirely clear what we gain or lose in accepting or rejecting the
innateness of baseball. And likewise—to return us to the matter at hand—it’s not clear what
matters of substance turn on the innateness of morals. At the same time, however, this
indifference to the question of nativism in moral psychology is something of an achievement. It
comes only with the recognition that our capacity for moral judgment, like language and memory
and the rest of our cognitive capacities, is yet another natural phenomenon to be understood in
terms of the complex interactions between genes and their environments. This indifference is
earned only after one has given up on the idea that there is a sphere of moral thought that is
comfortably insulated from crass biology. Once one accepts that the human mind is a product of
natural selection and everything that follows from this, the hopes and fears that motivate the
question “Is morality innate?” either disappear or are redirected toward new questions that are
ultimately more meaningful:
Which biological adaptations shape our moral judgment, and how do they do it?
Which of these adaptations are distinctively human?
How much of the variability in moral judgment is attributable to genetic differences?
How much of the variability in moral judgment is attributable to cultural differences?
What kinds of environmental inputs have the strongest effects on one’s moral sensibility?
Is there an underlying structure to human moral psychology, a “universal moral grammar”?
Is there a morality module?
Are there things that we are biologically programmed to see as right or wrong?
In what follows I will discuss a variety of data that bear on these questions.
Social emotions and behavior in non-human primates
An important source of evidence concerning the biological forces that have shaped human
morality comes from studies of our nearest living relatives. I will not discuss these studies in
great detail because their results and implications have already received excellent treatment by
experts in the field [Flack and de Waal, Boehm, de Waal, Hauser], but instead will offer an
editorial summary, largely following Flack and de Waal (2000).
Non-human primates lead intensely social lives, and, moreover, their social lives and the
behavioral tendencies that structure them bear a striking resemblance to those of humans. To
begin, monkeys and apes appear to have a rudimentary sense of fairness with respect to
exchanges of food and other goods, which itself depends on an ability to keep track of different
individuals' past behaviors and to use that information for the purposes of social calculation. For
example, adult female brown capuchin monkeys exchange food in a reciprocal way, repaying
individuals who have behaved altruistically in the past [de Waal, 1997b]. Among chimpanzees,
the exchanges are monitored, but not on an exchange-by-exchange basis. Pairs of chimpanzees
tend to share with each other in a symmetrical fashion over the long term, but sharing is often
asymmetrical over the course of a day [de Waal 1989b]. Aggression is more often directed
against beggars for food than against possessors of food. This pattern is noteworthy because
aggression toward "have-nots" rather than "haves" appears to be aimed not so much at the
defense of one's food—after all, the beggar is begging for food, not threatening to take it—but
rather aimed at defending a system of norms that determine when food-sharing is or is not
appropriate. (Of course, this does not require an explicit understanding of norms and a
deliberate attempt to defend them.) Another noteworthy behavior is the chimpanzee's "respect
for possession." [Goodall, 1971]. High-ranking individuals will often allow lower-ranking
individuals to retain food that could easily be taken and in some cases will beg for food from
lower-ranking individuals. Social expectations also seem to be exhibited in reciprocal patterns of
conflict intervention. Chimpanzees exhibit "a revenge system," whereby an individual
chimpanzee A is more likely to intervene in a conflict against chimpanzee B if B had previously
intervened in a conflict against A.
Chimpanzees also have elaborate mechanisms for moderating conflict. First,
reconciliation behavior repairs damaged relationships after a conflict has occurred. Chimpanzees
will sometimes effect reconciliation through such human-like gestures as extending a hand for
kissing and mouth-to-mouth kissing [de Waal, 1989]. Another social mechanism related to
conflict is the adoption of the "control-role." In chimpanzees a dominant individual will break
up conflicts, impartially punishing any combatants who continue to fight [de Waal, 1982,
Boehm, 2000], and chimpanzees are known to form large coalitions to suppress unwanted
individual behavior [Boehm, 2000]. Yet another such mechanism is the phenomenon of third
party mediation, whereby two chimpanzees who have recently fought with one another are
brought together for reconciliation by the mediating actions of an individual who was not
involved in the original conflict [de Waal and van Roosmalen, 1979]. Indeed, chimpanzees
appear to take a keen interest in the resolution of conflicts, as demonstrated by repeated
observations of group-wide celebration following the resolution of dramatic conflicts [de Waal,
1996].
In addition to social behaviors related to exchange and conflict, chimpanzees exhibit a
softer, emotionally-based concern for other individuals that is naturally described as sympathy or
empathy. De Waal [1982] has observed, for example, a juvenile chimpanzee embrace an adult
male who has just lost a confrontation with his rival. This appears to be part of a broader pattern
of active consolation in chimpanzees, a pattern that is not observed outside of the great apes [de
Waal and Aureli, 1996].
In short, chimpanzees exhibit a wide range of social behaviors that, in the words of Flack
and de Waal [2000] form the "building blocks" of much of human morality. They exhibit social
sensibilities that are strikingly similar to human moral sensibilities, including such phenomena as
moral indignation/anger, compassion, a sense of fairness, and community concern. Of course
human morality is far more developed and complex than the moralities or proto-moralities of our
nearest relatives, but these similarities are significant nonetheless and in need of explanation.
More specifically, one might wonder whether these similarities between humans and non-human
primates stem from a common genetic inheritance [de Waal; Darwin, Descent], or whether our
perception of similarity stems from a tendency on our part to attribute more to these animals than
is actually there. With respect to this issue, Flack and de Waal appeal to a principle of
'evolutionary parsimony' according to which apparently similar behaviors observed in closely
related species are reasonably attributed to underlying genetic similarities.
Nativism and modularity
Imagine the following scenario. A woman is brought to the emergency room after sustaining a
severe blow to the head. At first, and much to her doctors' surprise, her neurological function
appears to be completely normal. And for the most part it is, but it soon becomes clear that she
has acquired a bizarre disability. As a result of her accident, this woman can no longer play
basketball.3 Her tennis game is still top notch, as is her golf swing, and so on. Only her
basketball game has been compromised. Could such an accident really happen? Almost
certainly not. The way the brain is organized, it is virtually impossible that something like a
blow to the head could selectively destroy one's ability to play basketball and nothing else. This
is because the neural machinery required to play basketball isn't sitting in one place, like a car's
battery. Instead, this machinery is spread all over the brain, and its various components are used
in the performance of any number of other tasks.
But what if someone were to have an accident like this? If, somehow, this were to
happen, we would be forced to conclude that the brain has structures that are, in some cases at
least, specifically dedicated to playing basketball and that these structures are sufficiently
independent from the rest of the brain that rendering them inoperable need not affect the rest of
the brain's function. In other words, we would have strong evidence for the existence of a
"basketball module."4
Talk of modules and modularity is often associated with nativist hypotheses and
evolutionary psychology [Cosmides and Tooby, Pinker]. Such nativists argue that the human
mind is essentially a collection of modules, a cognitive Swiss Army knife if you will, with each
of its several components designed by natural selection to perform a specific function in a
relatively independent fashion. Nativists like to see cases of brain damage leading to selective
cognitive deficits, not because nativists are cruel, but because such cases demonstrate that the
mind's structure is modular, and that, they argue, is evidence that our minds have been
specifically adapted to perform the tasks that those mental modules perform. This
argument, however, is controversial. Many theorists, especially those in the "connectionist"
camp, have argued that modular architecture can develop without the help of any specific
adaptations to that end and that therefore modularity is no evidence for nativism [Elman et al].
Certainly, modularity and nativism are independent in principle, and there do appear to be
instances in which modules (of a certain sort, at least) are acquired through learning [Shiffrin and
Schneider, O'Reilly and Munakata].5 But, say nativists, there are modules and there are modules.
Some kinds of modular-ish learning may take place, but there's only so much modular structure that one can acquire without the help of natural selection.
3
The basketball example is adapted from Casebeer and Churchland [2003].
4
A module is a "cognitive organ," a computational structure that computes outputs from inputs by way of intervening processes that are computationally isolated from the rest of the system in which it is embedded, i.e. "encapsulated" [Fodor]. A modular structure gains efficiency at the expense of flexibility, much as a hand-held calculator does a splendid job of quickly and accurately performing arithmetic calculations while being utterly useless for most other tasks, such as making dinner.
5
Of course, it depends on what one means by "module." According to some definitions, a module must be innate. Needless to say, there is no such thing as a learned module according to this definition. But the central idea behind modularity is informational encapsulation, and the point illustrated by cases of learned modularity is that a system can spontaneously develop encapsulated processing structures in the course of learning how to solve a problem that it was not specifically adapted to solve.
I mention this controversy not because I intend to help resolve it, but because it bears
directly on the data I will be discussing in the next three sections. Studies of psychopaths and
patients with brain lesions along with neuroimaging studies of normal individuals reveal a
number of noteworthy cognitive dissociations. To dissociate two cognitive processes is simply
to show that those processes are independent to some extent.6 For example, in the case of the
imaginary brain damage described above, the neural bases of basketball-playing are
dramatically dissociated from all other cognitive processes. To say that two processes are
dissociable from each other is not too far from saying that they make use of different modules.
Here's the argument: If two processes are dissociable, they must make use of different cognitive
structures to some extent. But why would that be? Presumably it is because these different
cognitive structures perform different functions. But for different parts to perform different
functions, they must, to some extent, stay out of each others' business, else there would be no
functionally distinct parts of which to speak. And a functionally distinct cognitive structure that,
more or less, takes care of its own business is, more or less, a module.
6
What I am calling a dissociation here is, strictly speaking, a double dissociation.
Let us suppose, then, that a dissociation between two processes implies that those
processes draw on different modules and, a fortiori, shows that modules relevant to these
processes exist. If you're the kind of nativist who believes that modular structures are, as a
general rule, innate structures, you will then conclude that there is probably a significant "innate
component" to one or both of the dissociated processes. If, in contrast, you think that modules
develop without any special (i.e. domain-specific) help from natural selection, you'll likely resist
this conclusion.
For now, I will simply leave the reader to his/her prejudices, although I will make some
partisan comments later. My aim at this point is simply to explain how the data discussed below
may bear on the issue of innateness in moral psychology. But, as suggested above, the issue of
innateness may prove to be a red herring. Best, at this point, to let the data speak for themselves.
Lesion data
As noted above, the idea that a well-placed blow to the head could selectively rob one of one's
ability to play basketball is pretty far-fetched. And yet there have been cases in which brain
damage has appeared to rob individuals of their moral sensibilities in a strikingly selective way.
By far the most celebrated of such cases is that of Phineas Gage, a nineteenth-century railroad
foreman working in Vermont. One fateful day, an accidental explosion sent a tamping iron
through Gage's eye socket and out the top of his head, destroying much of his medial prefrontal
cortex. Not only did Gage survive the accident; at the time he appeared to have emerged with all of
his mental capacities intact. After a two-month recuperation period Gage was pronounced cured,
but it was soon apparent that Gage was damaged. Before the accident he was admired by his
colleagues for his industriousness and good character. After the accident, he became lawless.
He wandered around, making trouble wherever he went, unable to hold down a steady job due to
his anti-social behavior. For a long time no one understood why Gage’s lesion had the profound
but remarkably selective effect that it had.
More recent cases of patients with similar lesions have shed light on Gage’s injury.
Damasio and colleagues [refs] report on a patient named “Elliot” who suffered a brain tumor in
roughly the same region that was destroyed in Gage. Like Gage, Elliot maintained his ability to
speak and reason about topics such as politics and economics. He scored above average on
standard intelligence tests, including some designed to detect frontal lobe damage, and responded
normally to standard tests of personality. However, his behavior, like Gage’s, was not
unaffected by his condition. While Elliot did not develop anti-social tendencies to the extent that
Gage did, he, too, exhibited certain peculiar deficits, particularly in the social domain. A simple
laboratory probe has helped reveal the subtle but dramatic nature of Elliot’s deficits. When
shown pictures of gory accidents or people about to drown in floods, Elliot reported having no
emotional response but commented that he knew that he used to have strong emotional responses
to such things. Intrigued by these reports, Damasio and colleagues employed a series of tests
designed to assess the effects of Elliot’s damage on his decision-making skills. They asked him,
for example, whether or not he would steal if he needed money and to explain why or why not.
His answers were like those of other people, citing the usual reasons for why one shouldn’t
commit such crimes. Saver and Damasio followed up this test with a series of five tests of
moral/social judgment [Saver and Damasio, 1991]. As before, Elliot performed normally or
above average in each case. It became clear that Elliot’s explicit knowledge of social and moral
conventions was as good or better than most people’s, and yet his personal life, like Gage’s, has
deteriorated rapidly as a result of his condition (although he does not seem to mind). Damasio
attributes Elliot's real-life failures not to his inability to reason (in the sense above) but to his
inability to integrate emotional responses into his practical judgments. “To know, but not to
feel,” says Damasio, is the essence of his predicament.
In a study of Elliot and four other patients with similar brain damage and behavioral
deficits, Damasio and his colleagues observed a consistent failure to exhibit typical
electrodermal responses (a standard indication of emotional arousal) when these patients were
presented with socially significant stimuli, though they responded normally to non-social,
emotionally arousing stimuli [Damasio, Tranel, and Damasio, 1990]. A more recent study of
patients like Elliot used the "Iowa gambling task" to study their decision-making skills [Bechara
et al, 1996]. In this task, patients draw cards from a set of four decks and receive points
depending on which cards they choose. The decks are stacked so that two of them yield great
gains but even greater losses while the other two decks yield moderate gains but even smaller
losses. Thus, the best long term strategy is to pick from the "safe" decks rather than the "risky"
decks. Normal individuals sample all four decks initially but eventually figure out that the safe
decks are a better choice. Moreover, during the learning phase, normal subjects exhibit
anticipatory electrodermal responses immediately prior to choosing from the risky decks,
indicating a negative emotional response to the intention to make a disadvantageous choice.
(Amazingly, these electrodermal responses are typically observed before the subject is
consciously aware that the disadvantageous decks are disadvantageous [Bechara et al, 1994].)
Patients like Elliot, however, typically fail to realize that the risky decks are bad choices and,
what's more, they fail to have the anticipatory electrodermal responses to the risky decks
exhibited by normal individuals, suggesting, as predicted, that their failure to perform well in the
gambling task is related to their emotional deficits. They can't feel their way through the
problem.
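For readers unfamiliar with the gambling task, its payoff logic can be captured in a few lines. This is a minimal sketch with hypothetical payoff parameters, chosen only to reproduce the structure described above rather than Bechara et al.'s actual schedules:

```python
import random

random.seed(0)  # reproducible simulation

# Illustrative payoff schedules (hypothetical numbers): "risky" decks pay
# more per card but carry larger penalties; "safe" decks pay less per card
# but their penalties are smaller.
DECKS = {
    "risky_A": {"gain": 100, "loss": 1250, "p_loss": 0.1},
    "risky_B": {"gain": 100, "loss": 250, "p_loss": 0.5},
    "safe_C": {"gain": 50, "loss": 250, "p_loss": 0.1},
    "safe_D": {"gain": 50, "loss": 50, "p_loss": 0.5},
}

def draw(deck_name):
    """Net payoff of one card: a fixed gain minus an occasional penalty."""
    deck = DECKS[deck_name]
    penalty = deck["loss"] if random.random() < deck["p_loss"] else 0
    return deck["gain"] - penalty

for name, deck in DECKS.items():
    expected = deck["gain"] - deck["p_loss"] * deck["loss"]
    simulated = sum(draw(name) for _ in range(10_000)) / 10_000
    print(f"{name}: expected value {expected:+.0f}, simulated {simulated:+.1f}")
# The risky decks come out negative and the safe decks positive, which is
# why the best long-run strategy is to stick to the safe decks.
```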
While the subjects in the above studies exhibit “sociopathic behavior” as a result of their
injuries, they are not “psychopaths." Most often they themselves, rather than other people, are
the victims of their poor decision-making. However, a more recent study (Anderson et al., 1999)
of two subjects whose ventral, medial, and polar prefrontal cortices were damaged at an early
age (three months and fifteen months) reveals a pattern of behavior that is characteristically
psychopathic: lying, stealing, violence, and lack of remorse after committing such violations.
These developmental patients, unlike Elliot et al., exhibit more flagrantly anti-social behavior
presumably because they did not have the advantage of a lifetime of normal social experience
involving normal emotional responses. Both patients perform fairly well on IQ tests and other
standard cognitive measures and perform poorly on the Iowa gambling task, but unlike adult-onset patients their knowledge of social/moral norms is deficient. Their moral reasoning appears
to be, in the terminology of Kohlberg, “preconventional,” conducted from an egocentric
perspective in which the purpose is to avoid punishment. Other tests show that they have a
limited understanding of the social and emotional implications of decisions and fail to identify
primary issues and generate appropriate responses to hypothetical social situations. Grattan and
Eslinger (1992) report similar results concerning a different developmental-frontal patient. Thus,
it appears that the brain regions compromised in these patients include structures crucial not only
for online decision-making but also for the acquisition of social knowledge and dispositions
toward normal social behavior.
What can we learn from these damaged individuals? In Gage—the legend if not the
actual patient—we see a striking dissociation between "cognitive"7 abilities and moral
sensibilities. Gage, once an esteemed man of character, is transformed by his accident into a
scoundrel, with little to no observable damage to his "intellectual" faculties. A similar story
emerges from Elliot's normal performance on questionnaire-type assays of his social/moral
decision-making. Intellectually or "cognitively," Elliot knows the right answers, but his real life
social/moral decision-making is lacking. From this pattern of results, one might conclude that
Gage, Elliot, and the like have suffered selective blows to their "morality centers." Other results,
however, complicate this neat picture. Elliot and similar patients appear to have emotional
deficits that are somewhat more general and that adversely affect their decision-making in non-social contexts as well as social ones (e.g. on the gambling task). And to further complicate
matters, the developmental patients studied by Anderson and colleagues appear to have some
"cognitive" deficits, although these deficits are closely related to social decision-making. Thus,
what we observe in these patients is something less than selective damage to these individuals'
moral judgment abilities, but something more than a general deficit in "reasoning" or
"intelligence" or "judgment." In other words, these data suggest that there are dissociable
cognitive systems that contribute asymmetrically to moral judgment but give us little reason to
believe that there is a discrete faculty for moral judgment or a "morality module."8 What these
data do suggest is that there is an important dissociation between affective and "cognitive"
contributions to social/moral decision-making and that the importance of the affective
contributions has been underestimated by those who think of moral judgment primarily as a
reasoning process.9 Indeed, Jonathan Haidt has amassed an impressive body of evidence in
support of the conclusion that moral judgment is driven almost entirely by emotion [Haidt,
2001].
7
The term "cognitive" has two uses. In some contexts, "cognitive" refers to information processing in a general way, as in "cognitive science." In other contexts, "cognitive" refers to a subset of cognitive (first meaning) processes that are to be contrasted with affective or emotional processes. Unfortunately, there is no good, unambiguous word for these non-emotional processes, largely because, at the present time at least, they lack theoretical unity. In spite of these difficulties I will use the term "cognitive" with scare quotes to indicate this second, more specific meaning.
8
There is a sizable literature reporting on patients with morally aberrant behavior resulting from frontal damage, and the cases discussed above are not necessarily representative. I have chosen to focus on these cases because they, of all the cases reported in the lesion literature, come the closest to reporting a dissociation between the capacity for moral judgment and other cognitive abilities. The thought is that if there were a case in which moral judgment were selectively knocked out, that single case would tell us much more about the architecture of the moral mind than a hundred cases in which brain damage results in a hodgepodge of behavioral deficiencies, including deficiencies in moral judgment. For studies of some more "hodgepodgy" cases (or cases that have not been shown not to be "hodgepodgy") see Blair and Cipolotti (2000), Grafman et al. [?], Pincus (2001), Cohen et al. (1999), and Davidson et al. (2000). For a striking case of acquired pedophilia due to a frontal lesion see Burns and Swerdlow (2003).
9
By "reasoning" I refer to processes that are relatively slow and effortful, with intermediate steps that are consciously accessible. These stand in contrast to intuitive processes that are quick, effortless, and whose intermediate steps are not consciously accessible.
Anti-social behavior
The studies described above are of patients whose social behavior has been compromised by
observable and relatively discrete brain lesions. There are, however, many cases of individuals
who lack macroscopic brain damage and who exhibit pathological social behavior. These
people fall into two categories: people with anti-social personality disorder (APD) and the subset
of these individuals known as psychopaths. Anti-social personality disorder is just a catch-all
label for whatever it is that causes some people to habitually violate our more serious social
norms, typically those that are codified in our legal system [DSM IV].10 If you habitually lie,
steal, or beat people up, you have APD, but you are not necessarily a psychopath. This is
because the leading model of psychopathy [Hare, ???] has two factors. In addition to exhibiting
pathologically anti-social behavior, psychopaths are additionally characterized by a pathological
degree of callousness, lack of empathy or emotional depth, and lack of genuine remorse for their
anti-social actions. In more intuitive terms, the difference between plain old APD and
psychopathy is the difference between a seriously flawed human being (e.g. the hot-headed
barroom brawler) and someone who is just inhuman (the cold-blooded killer who murders his
parents for their money).
10
Pincus (2001) compares the APD designation to other laughably empty medical terms, such as pruritus ani ("itchy anus").
Psychopaths appear to be special in a number of ways [Blair, 2001]. First, while the
behavioral traits that are used to diagnose APD correlate with IQ and socio-economic status, the
traits that are distinctive of psychopaths do not [Hare et al, 1991]. Moreover, the behaviors
associated with APD tend to decline with age, while the psychopath's distinctive social-emotional dysfunction holds steady [Harpur et al, 1994]. The roots of violence appear to be
different in psychopaths as compared to similarly violent non-psychopaths. The anti-social
behavior of non-psychopaths appears to be more contingent in two ways. First, positive
parenting strategies appear to influence the behavior of non-psychopaths, whereas psychopaths
appear to be impervious in this regard. Second, and probably not incidentally, the violence of
psychopaths is more often instrumental rather than impulsive [Blair, 2001].
Experiments using psychopaths as subjects reveal further, more subtle differences
between psychopaths and other individuals with APD. Psychopaths exhibit a lower level of
tonic electrodermal activity and show weaker electrodermal responses to emotionally significant
stimuli than normal individuals (Hare and Quinn, 1971). A more recent study (Blair et al., 1997)
compares the electrodermal responses of psychopaths to a control group of criminals who, like
the psychopaths, were serving life sentences for murder or manslaughter. While the psychopaths
were like the other criminals in their responses to threatening stimuli (e.g. an image of a shark’s
open mouth) and neutral stimuli (e.g. an image of a book), they showed significantly reduced
electrodermal responses to distress cues (e.g. an image of a crying child’s face) relative to the
control criminals, a fact consistent with the observation that psychopathic individuals appear to
have a diminished capacity for emotional empathy.
Blair (1995) hypothesized that psychopaths’ diminished capacity for emotional empathy
should prevent them from drawing a distinction between what are sometimes referred to as
“moral” and “conventional” rules and that psychopaths, as compared to other criminals, should
make fewer references to the pain or discomfort of victims in explaining why certain harmful
actions are unacceptable. Both of these predictions were confirmed. “Moral” transgressions
were defined as those having negative consequences for the “rights and welfare of others” and
included instances of one child hitting another and a child smashing a piano. “Conventional”
transgressions were defined as “violations of the behavioral uniformities that structure social
interactions within social systems” and included instances of a boy wearing a skirt and a child
who leaves the classroom without permission. While the “normal” subjects (non-psychopathic
incarcerated criminals) drew a general distinction between moral and conventional
transgressions, the psychopaths did not. Normal subjects found a greater difference in
permissibility and seriousness between moral and conventional transgressions than did the
psychopaths. The most striking finding, however, concerned the psychopaths’ judgments
about “modifiability.” Each of the transgression stories was set in school, and in each case
the subjects were asked whether or not it would be permissible for the child to perform the
transgressive action if the teacher had said earlier that such actions were permitted. Non-psychopathic criminals tended to say that the conventional transgressions would become
permissible if the teacher were to explicitly allow their performance but that the moral
transgressions would not be permissible in either case. Psychopaths, however, treated all
transgressions as impermissible regardless of what the teacher said.11 In addition, the
psychopaths were, as predicted, less likely to appeal to the pain and discomfort of victims and
more likely to appeal to the violation of rules in explaining why various transgressions are
impermissible.
11
While Blair predicted that the psychopaths would fail to draw the moral/conventional distinction, he predicted more specifically that the psychopaths would treat both types of transgression as normal subjects treat conventional ones. Nevertheless, he found that the psychopaths treated both types of transgressions as normal subjects treat moral ones. He attributes this result to the fact that his psychopathic subjects (all incarcerated) have an interest in demonstrating that they have “learned the rules.”
A different cognitive test shows that psychopathic murderers fail to have normal
unpleasant associations with violence. The study conducted by Gray et al. (2003) uses an
adapted version of the Implicit Associations Test (IAT) (Greenwald et al., 1998). Subjects
classify uppercase words as "pleasant" or "unpleasant" and lowercase words as "violent" or
"peaceful." Normal individuals respond more quickly when the same button is used to indicate
both "pleasant" and "peaceful" and more slowly when different buttons are used, but
psychopathic murderers show no such effect. Surprisingly, these results hold only for psychopathic
murderers and not for psychopathic non-murderers or non-psychopathic murderers.
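The logic of the congruency effect can be summarized in a small scoring sketch. The block labels and numbers below are invented for illustration, and Gray et al.'s actual scoring procedure is more involved than this:

```python
from statistics import mean

# Hypothetical response times (ms) for one participant.
rts = {
    "congruent": [480, 510, 495, 505],    # pleasant/peaceful share a button
    "incongruent": [620, 655, 640, 600],  # pleasant/violent share a button
}

# The congruency ("IAT") effect is the slowdown in the incongruent block.
# A large positive effect indicates a strong violent-unpleasant association;
# the psychopathic murderers described above showed effects near zero.
iat_effect = mean(rts["incongruent"]) - mean(rts["congruent"])
print(f"IAT effect: {iat_effect:+.0f} ms")
```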
According to Blair (Blair et al., 1997), “The clinical and empirical picture of a
psychopathic individual is of someone who has some form of emotional deficit." This
conclusion is bolstered by the results of a recent neuroimaging study in which psychopaths and
control criminals processed emotionally salient words. In the psychopaths, part of the posterior
cingulate gyrus, a region that exhibits increased activity during a variety of emotion-related tasks
(Maddock, 1999), was less active than in the control subjects. At the same time, other regions
were more active in psychopaths during this task, leading Kiehl et al. to conclude that the
psychopaths were using an alternative cognitive strategy to perform this task, much as Blair
argues that psychopaths answer questions about the "modifiability" of moral transgressions
through non-emotional means.
Thus, so far, a host of signs point to the importance of emotions in moral judgment
[Haidt, 2001]. In light of this, one might come to the conclusion that a psychopath, with his
dearth of morally relevant emotion, is exactly what we're looking for—a human being "with
everything—hold the morality." Indeed, Schmitt et al. (1999) found that psychopaths performed
normally on the Iowa gambling task, suggesting that their emotion-based decision-making
deficits are not general, but rather related specifically to the social domain. As before, however,
the empirical picture is not quite so simple, as psychopaths appear to have other things "held" as
well. To begin, two studies, one of adult psychopaths [Mitchell et al, 2002] and one of children
with psychopathic tendencies [Blair et al, 2001], found that psychopathic individuals do perform
poorly on the Iowa gambling task. (These authors attribute the conflicting results to Schmitt et
al.'s failure to use the original task directions, which emphasize the strategic nature of the task.)
Moreover, there are several indications that psychopaths have deficits that extend well beyond
their apparently stunted social-emotional responses. Lapierre et al. (1995) find that psychopaths
have a hard time inhibiting a learned response in a Go/No-Go task12, and Kiehl et al. (2000)
observed abnormal electrophysiological signals (brain waves) in psychopaths performing this
task. Newman et al. (1997) found a similar result using a "response reversal" task in which
subjects must adjust their strategy due to changing pay-off contingencies. Along similar lines,
Bernstein et al. (2000) found that psychopaths had a hard time recalling incidental information
about the spatial location of words they were told to memorize, but only for words presented in
the right spatial field (which are processed by the left hemisphere). They argue that psychopaths
have a hard time attending to secondary cues or peripheral information once their left
hemisphere-based motivational system is engaged. Kiehl et al. (1999a) found that psychopaths
produced abnormal electrophysiological signals during the performance of a "visual oddball"
task" in which one must detect target stimuli (large squares) within a series of distractor stimuli
(small squares). Psychopaths exhibited decreased P300 responses (electrically positive
responses at a latency of approximately 300 milliseconds) to target stimuli, and these diminished
responses were less lateralized than in normal individuals. Finally, Kiehl et al. (1999b) have also
found abnormal electrophysiological activity in psychopaths during linguistic tasks. Thus, it's
pretty clear that psychopathy involves a suite of superficially varied cognitive deficits
and abnormalities.
12
This is a simple cognitive task in which the subject must, depending on the stimulus cue, either perform an action or do nothing. [ref]
The psychopathy literature sends mixed signals regarding the "impulsivity" of
psychopaths. On the one hand, psychopaths are to be distinguished from other violent
individuals by their "cold-bloodedness," their lack of empathy, etc. Once again, their violence is
supposed to be "instrumental" rather than "reactive" [Blair 2001] At the same time, however,
12
This is a simple cognitive task in which the subject must, depending on the stimulus cue, either perform an action
or do nothing. [ref]
some of the evidence described above suggests that psychopaths have a hard time inhibiting
disadvantageous behavior, even during the performance of "dry" cognitive tasks. Most likely,
these apparently contradictory results reflect two sides of the same cognitive coin, although it is
not clear how to reconcile them. Compared to some anti-social individuals, psychopaths are
"cool and collected," but a closer examination reveals that psychopaths have a kind of
impulsivity or one-track-mindedness that distinguishes them from normal individuals in a subtle,
but fairly general way. The results of a neuroimaging study of "predatory" vs. "affective"
murderers by Raine et al. (1998) gesture toward a synthesis. They argue that excessive sub-cortical activity in the right hemisphere leads to violent impulses, but that "predatory" murderers,
who unlike "affective" murderers exhibit normal levels of prefrontal activity, are better able to
control these impulses. (In a more recent study, Raine et al. (2000) found that a sample of
individuals diagnosed with APD (some of whom, however, may have been psychopaths [right?])
tended on average to have decreased prefrontal gray matter.) Thus, according to these results,
the difference between a violent "hot head" and a violent "predator" is a lack of cognitive
control, while the difference between a violent "predator" and a normal person is the presence of
anti-social impulses. These conclusions, however, are based on fairly crude neuroimaging data
that are difficult to interpret.13 Additionally, it's not clear how to reconcile the claim that
"predatory" and "affective" murderers act on the same underlying impulses with the claim that
psychopathic violence is "instrumental" rather than "impulsive."
To make a long story short, psychopaths are not nature's controlled experiment with
amorality. Psychopathy is a complicated syndrome that has subtle and not-so-subtle effects on a
wide range of behaviors, including many behaviors that, superficially at least, have nothing to do
with moral judgment and behavior. At the same time, however, psychopathy appears to be a
fairly specific syndrome. Psychopaths are not just people who are very violent or very bad.
Using the proper methods, psychopaths are clearly distinguishable from others whose behavior is
comparably anti-social, suggesting that the immoral behavior associated with psychopathy stems
from the malformation of specific cognitive structures that make important contributions to
moral judgment. Moreover, these structures seem to be rather "deep" in the sense that they are
not well-defined by the concepts of ordinary experience and, more to the point, ordinary
learning. Psychopaths do not appear to be people who have, through some unusual set of
experiences, acquired some unusual moral beliefs or values. (Recall, once again, that
psychopathic tendencies, unlike general anti-social tendencies, do not appear to correlate with
things that ordinarily affect learning: IQ, socio-economic status, and age.) Rather, they appear to
have an abnormal but stereotyped cognitive structure that affects a wide range of behaviors, from
their willingness to kill to their inability to recall the corner of the screen on which a given word
has appeared.14
13
It should also be noted that Raine et al.'s criteria for "predatoriness" are different from Blair's criteria for psychopathy. Blair uses Hare's PCL-R, the two-factor model described above, while Raine et al. used the judgments of raters who considered a wide variety of information, including clinical assessments, legal documents, interview transcripts, and media reports.
14
Blair [2001?] argues that this suite of cognitive features results from developmental dysfunction in the amygdala.
Neuroimaging studies of moral judgment and decision-making
Consider the following moral dilemma (the trolley dilemma):15 A runaway trolley is headed for
five people who will be killed if it proceeds on its present course. The only way to save these
people is to hit a switch that will turn the trolley onto an alternate set of tracks where it will run
over and kill one person instead of five. Is it okay to turn the trolley in order to save five people
at the expense of one? Most people say that it is, and they tend to do so in a matter of seconds
(Greene et al., 2001).16
Now consider a slightly different dilemma (the footbridge dilemma): A runaway trolley
threatens to kill five people as before, but this time you are standing next to a large stranger on a
footbridge spanning the tracks, in between the oncoming trolley and the five people. The only
way to save the five people is to push this stranger off the bridge and onto the tracks below. He
will die as a result, but his body will stop the trolley from reaching the others. Is it okay to save
the five people by pushing this stranger to his death? Most people say that it's not and, once
again, they do so rather quickly.
15
This and the following dilemma are adapted from Thomson [ref].
16
Note that here and elsewhere, "most people" may not hold cross-culturally, as the people who have been tested using these dilemmas are primarily American college students.
These dilemmas were devised as part of a puzzle for moral philosophers [Thomson] by
which the aim is to explain why it's okay to sacrifice one life to save five in the first case but not
in the second case. Solving this puzzle has proven very difficult. While many attempts to
provide a consistent, principled justification for these two intuitions have been made, the
justifications offered are not at all obvious and are generally problematic. The fact that these
intuitions are not easily justified gives rise to a second puzzle, this time for moral psychologists:
How do people know (or "know") to say "yes" to the trolley dilemma and "no" to the footbridge
dilemma if there is no obvious, principled justification for doing so? If these conclusions aren't
reached on the basis of some readily accessible moral principle, they must be made on the basis
of some kind of intuition. But where do these intuitions come from?
To try to answer this question, my colleagues and I conducted an experiment in which
subjects responded to these and other moral dilemmas while having their brains scanned (Greene
et al, 2001). Our hypothesis was that the thought of pushing someone to his death with one's
bare hands is more emotionally salient than the thought of bringing about similar consequences
by hitting a switch. More generally, we hypothesized that moral violations of an "up close and
personal" nature, as in the footbridge case, are more emotionally salient than moral violations
that are more impersonal, as in the trolley case, and that this difference in emotional response
explains why people respond so differently to these two cases.
The rationale for this hypothesis is evolutionary. As noted above, it is very likely that we
humans have inherited many of our social instincts from our primate ancestors, among them
instincts that rein in the tendencies of individuals to harm one another. These instincts are
emotional instincts, triggered by behaviors and other elicitors that were present in our ancestral
environment. This environment did not include opportunities to harm other individuals using
complicated machinery, but it did include opportunities to harm other individuals by pushing
them into harm's way (e.g. off a cliff or into a river). Thus, one might suppose that the sorts of
basic, interpersonal violence that threatened our ancestors back then will "push our buttons"
today in a way that peculiarly modern harms do not.
With all of this in mind, we operationalized the "personal"/"impersonal" distinction as
follows: A moral violation is personal if it is (a) likely to cause serious bodily harm (b) to a
particular person (c) in such a way that the harm does not result from the deflection of an
existing threat onto a different party.17 A moral violation is impersonal if it fails to meet these
criteria. One can think of these criteria for personal harm in terms of ME HURT YOU and as
delineating roughly those violations that a chimpanzee can appreciate.18 Condition (a) (HURT)
picks out roughly those harms that a chimp can understand (e.g., assault vs. tax evasion).
Condition (b) (YOU) requires that the victim be vivid as an individual. Finally, condition (c)
(ME) captures the notion of “agency,” the idea that the action must spring in a vivid way from
the agent’s will, must be “authored” rather than merely “edited” by the agent. Pushing someone
in front of a trolley meets all three criteria and is therefore "personal," while diverting a trolley
involves merely deflecting an existing threat, removing a crucial sense of “agency” and therefore
making this violation "impersonal." Other moral dilemmas (about forty total) were categorized
using these criteria as well.
17
This third criterion mirrors Thomson's [ref] "no new threat" principle. Her principle, however, is a normative principle, whereas this criterion is part of a set of criteria that are simply descriptive.
18
We didn't evolve from chimpanzees, of course. Chimpanzees are simply a stand-in for the ancient primate species that gave rise to both chimpanzees and us.
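Stated procedurally, the operationalization amounts to a conjunction of criteria (a)-(c). Here is a minimal sketch; the type, field, and function names are mine, introduced for illustration, and only the three criteria themselves come from the text:

```python
from dataclasses import dataclass

@dataclass
class Violation:
    serious_bodily_harm: bool       # criterion (a): HURT
    particular_person: bool         # criterion (b): YOU
    deflects_existing_threat: bool  # negation of criterion (c): ME/agency

def is_personal(v: Violation) -> bool:
    """Personal iff (a) and (b) hold and the harm does not result from
    deflecting an existing threat onto a different party (criterion (c))."""
    return (v.serious_bodily_harm
            and v.particular_person
            and not v.deflects_existing_threat)

footbridge = Violation(True, True, False)  # pushing the stranger
trolley = Violation(True, True, True)      # diverting the trolley onto one
print(is_personal(footbridge))  # True  -> "personal" violation
print(is_personal(trolley))     # False -> "impersonal" violation
```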
Before turning to the data, the evolutionary rationale for the "personal"/"impersonal"
distinction requires a bit more fleshing out. Emotional responses may explain why people say
"no" to the footbridge dilemma, but why do they say "yes" to the trolley dilemma? Here we
must consider what's happened since we and our closest living relatives parted ways. Of course,
a great deal has happened since then, but among the most important developments is that of our
capacity for general-purpose abstract reasoning, a capacity that can be used to think about
anything one can name, including moral matters. Thus, one might suppose that when the heavy-duty social-emotional instincts of our primate ancestors lie dormant, abstract reasoning has an
opportunity for greater influence. And, more specifically, one might suppose that in response to
the trolley case, with its peculiarly modern method of violence, the powerful emotions that might
otherwise say "No!" remain quiet, and a faint little rational voice can be heard: "Isn't it better to
save five lives instead of one?"
That's a hypothesis. Is it true? And how can we tell? This hypothesis makes some
strong predictions regarding what we should see in the brain scanner while people are responding
to personal and impersonal moral dilemmas. The contemplation of personal moral dilemmas like
the footbridge case should produce relative increases in neural activity in brain regions
associated with emotional response and social cognition, while the contemplation of impersonal
moral dilemmas should produce relatively greater activity in regions associated with "higher
cognition." This is exactly what was observed (Greene et al., 2001). Contemplation of personal
moral dilemmas produced relatively greater activity in two emotion-related areas, the posterior
cingulate cortex and the medial prefrontal cortex (one of the areas damaged in both Gage and
Elliot), as well as in the superior temporal sulcus, a region associated with various kinds of social
cognition [Allison et al]. At the same time, contemplation of impersonal moral dilemmas
produced relatively greater neural activity in two classically "cognitive" brain areas associated
with working memory function in the inferior parietal lobe and the dorsolateral prefrontal cortex.
This hypothesis also makes a prediction regarding people's reaction times. According to
the view I've sketched, people tend to have emotional responses to personal moral violations that
incline them to judge against performing those actions. That means that someone who judges a
moral violation to be appropriate (e.g. someone who says it's okay to push the man off the bridge
in the footbridge case) will most likely have to override an emotional response in order to do it.
That overriding process will take time, and thus we would expect that "yes" answers will take longer than "no" answers in response to personal moral dilemmas like the footbridge case. At the same time, we have no reason to predict a difference in reaction times between "yes" and "no" answers in response to impersonal moral dilemmas like the trolley case because there is, according to this model, no emotional response to override in such cases. Here, too, the prediction holds. Trials in which the subject judged in favor of a personal moral violation took significantly longer than trials in which the subject judged against it, but there was no comparable reaction time effect observed in response to impersonal moral violations (Greene et al., 2001).
17 This third criterion mirrors Thomson's [ref] "no new threat" principle. Her principle, however, is a normative principle, whereas this criterion is part of a set of criteria that are simply descriptive.
18 We didn't evolve from chimpanzees, of course. Chimpanzees are simply a stand-in for the ancient primate species that gave rise to both chimpanzees and us.
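To make the logic of the reaction-time prediction concrete, here is a minimal sketch of the kind of comparison involved. The column names and numbers are invented for illustration; this is not the study's actual analysis pipeline.

```python
# Toy reaction-time analysis: for personal dilemmas, "yes" (violation-
# approving) trials should be slower than "no" trials; for impersonal
# dilemmas there should be no such gap. All data here are invented.

import pandas as pd
from scipy import stats

trials = pd.DataFrame({
    "dilemma_type": ["personal"] * 6 + ["impersonal"] * 6,
    "answer":       ["yes", "no"] * 6,
    "rt_ms":        [5200, 3900, 6100, 4100, 5600, 4000,   # personal
                     4300, 4400, 4100, 4250, 4500, 4300],  # impersonal
})

for dtype, group in trials.groupby("dilemma_type"):
    yes_rt = group.loc[group["answer"] == "yes", "rt_ms"]
    no_rt = group.loc[group["answer"] == "no", "rt_ms"]
    t, p = stats.ttest_ind(yes_rt, no_rt)
    print(f"{dtype}: mean yes = {yes_rt.mean():.0f} ms, "
          f"mean no = {no_rt.mean():.0f} ms, t = {t:.2f}, p = {p:.3f}")
```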
[Note: the following data from my own work presented here are either unpublished or published
only in conference abstracts. Presumably they will have been published by the time this book
comes out, but if that is not the case then I may not be able to include them in this discussion.
Also, some results may change as more data are collected, although the main results are fairly
robust at this point.]
Further results support this model as well. Above we contrasted the neural effects of
contemplating "personal" vs. "impersonal" moral dilemmas. But what should we expect to see
if we subdivide the "personal" moral dilemmas into two categories based on difficulty (i.e. based
on reaction time)? Consider the following moral dilemma (the crying baby dilemma): It's
wartime, and you and some of your fellow villagers are hiding from enemy soldiers in a
basement. Your baby starts to cry, and you cover his mouth to block the sound. If you remove
your hand your baby will cry, the soldiers will hear, and they will find you and the others and kill everyone, including your baby. If you do not remove your hand, your baby will
smother to death. Is it okay to smother your baby to death in order to save yourself and the other
villagers? This is a very difficult question. Different people give different answers and nearly
everyone takes a relatively long time to answer.
Here's a similar dilemma (the infanticide dilemma): You are a teenage girl who has
become pregnant. By wearing baggy clothes and putting on weight you have managed to hide
your pregnancy. One day during school, you start to go into labor. You rush to the locker room
and give birth to the baby alone. You do not feel that you are ready to care for this child. Part of
you wants to throw the baby in the garbage and pretend it never existed so that you can move on
with your life. Is it okay to throw away your baby in order to move on with your life? For the
people in our test sample, at least, this is a very easy question. All of them say fairly quickly that
it would be wrong to throw the baby away.
What's going on in these two cases? My colleagues and I hypothesized as follows. In
both cases there is a prepotent, negative emotional response to the personal violation in question,
killing one's own baby. In the crying baby case, however, there are powerful, countervailing
rational19 considerations that push one toward smothering the baby. After all, the baby is going
to die no matter what, and so you have nothing to lose20 and much to gain by smothering it,
awful as it is. In some people the emotional response ("Aaaahhhh!!! Don't do it!!!") dominates,
and those people say "no." In other people, a "cognitive," cost-benefit analysis ("But you have nothing to lose, and so much to gain...") wins out, and those people say "yes."
19
By calling these considerations "rational" my intention is not to endorse them, but rather to describe the type of psychological process that underlies them as one that is "cool" rather than "hot," "cognitive," deliberate, etc.
20
At least in terms of lives lost or saved.
What does this model predict that we'll see in the brain scanner when we compare cases
like crying baby to cases like infanticide? First, this model supposes that cases like crying baby
involve an increased level of "response conflict," i.e. conflict between competing representations
for behavioral response. Thus, we should expect that difficult moral dilemmas like crying baby
will produce increased activity in a brain region that is associated with response conflict
[Botvinick et al], the anterior cingulate cortex. Second, according to our model, the crucial
difference between cases like crying baby and cases like infanticide is that dilemmas like crying
baby involve "cognitive" considerations that compete with the prepotent, negative emotional
response. Thus, we should expect to see increased activity in classically "cognitive" brain areas
when we compare cases like crying baby to cases like infanticide, even though dilemmas like
crying baby are personal moral dilemmas. Finally, according to our model, cases like crying
baby involve a competition between "cognitive" and emotional forces, and so we should expect to
see some evidence of the emotional forces making a stand. At the same time, however, the
emotional response is supposed to be common to both kinds of dilemma in this comparison, and
so some of the emotion-related activity associated with personal moral dilemmas should subtract
out.
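The subtraction logic behind these predictions can be shown in miniature. The region names echo those discussed in the text, but the activation values below are invented; real fMRI contrasts are computed voxel-wise with proper statistics.

```python
# Toy contrast: condition A (difficult personal dilemmas, e.g. crying baby)
# minus condition B (easy personal dilemmas, e.g. infanticide). Whatever is
# common to both conditions "subtracts out." Numbers are invented.

activity_a = {"anterior_cingulate": 1.8, "dlpfc": 1.6,
              "posterior_cingulate": 1.4, "medial_pfc": 1.2}
activity_b = {"anterior_cingulate": 1.0, "dlpfc": 1.0,
              "posterior_cingulate": 1.1, "medial_pfc": 1.2}

for region, a in activity_a.items():
    diff = a - activity_b[region]
    verdict = "survives the contrast" if abs(diff) > 0.2 else "subtracts out"
    print(f"{region}: A - B = {diff:+.1f} ({verdict})")
```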
All of these predictions held (Greene et al., 2002). Comparing high reaction time
personal moral dilemmas like crying baby to low reaction time personal moral dilemmas like
infanticide revealed increased activity in the anterior cingulate (conflict), the anterior dorsolateral
prefrontal cortex ("cognitive"), the inferior parietal lobes ("cognitive"), the posterior cingulate
(emotional), and the precuneus (a region near the posterior cingulate that has been associated
with visual imagery and that appears to work in concert with the posterior cingulate). The
activity in the medial prefrontal cortex or the superior temporal sulcus subtracted out.
So far we have talked about neural activity correlated with the type of dilemma under
consideration, but what about activity correlated with subjects' behavioral response? Does a
brain look different when it's saying "yes" as compared to when it's saying "no" to questions like
these? To answer this question we subdivided our dilemma set further by comparing the trials in
which the subject says "yes" to difficult personal moral dilemmas like crying baby to trials in
which the subject says "no" in response to such cases. Once again, we turn to the model for a
prediction. If cases in which people say "no" are cases in which emotion wins, then we would
expect to see more activity in the posterior cingulate and possibly the precuneus in those cases.
Likewise, if the cases in which people say "yes" are cases in which "cognition" wins, then we
would expect to see more activity in the dorsolateral prefrontal cortex and/or parietal lobes in
those cases.
The first of these predictions held. That is, the posterior cingulate and precuneus showed
relatively greater activity at the end of trials in which the subject said "no." There was no
significant effect in keeping with the second prediction for this comparison, although there was a
trend in the right direction in the right anterior dorsolateral prefrontal cortex (but see below for a
significant effect in this area). Additionally, the insula, a region associated with disgust, anger,
and autonomic arousal [refs], showed increased activity for "yes" answers, a surprise given our
model (according to which emotional responses are associated with "no" answers for these
questions). However, an examination of the time course of the activity in this area revealed that
the differences in activity in this region probably occurred after the decisions were made. Thus,
we interpret this activity as a reactive effect, an emotional backlash to the subject's approving
judgment of a personal moral violation. (After all, how would you feel if you had just decided
that it would be okay to smother your own baby?)
The above analysis was performed at the level of individual trials, but we can also
perform a similar analysis at the level of individual subjects. Individual subjects can be
characterized along a continuous "utilitarian"-"Kantian" dimension depending on how often and how quickly they are willing or unwilling to sacrifice an individual's welfare in the name of the greater good. It turns out that, at a well-chosen time point, the level of neural activity in the
precuneus correlates positively (r = .38) with Kantian behavioral tendencies. In contrast, the
level of neural activity in the right anterior dorsolateral prefrontal cortex correlates positively
with utilitarian behavioral tendencies (r = .44). These effects are even stronger if the analyses
are restricted to the most Kantian subjects for the precuneus (r = .73) and the most utilitarian
subjects for the right anterior dorsolateral prefrontal cortex (r = .94!). In other words, if you're a
Kantian (i.e. someone who tends to reach Kantian conclusions, not necessarily someone who
reasons as Kantians recommend) then your precuneus activity reflects just how Kantian you are.
Likewise, for the right anterior dorsolateral prefrontal cortex if you're a utilitarian. Together, the
levels of activity in these two brain areas account for about a third of the variance in people's
overall Kantian-utilitarian tendencies (R2 = .33).
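For readers unfamiliar with this sort of individual-differences analysis, the following sketch shows the computations behind such r values and the joint R2. The simulated subjects and effect sizes are stand-ins, not the actual data.

```python
# Sketch: correlate per-subject behavioral "utilitarian vs. Kantian" scores
# with activity in two regions, then regress the score on both regions to
# get the jointly explained variance (R^2). All numbers are simulated.

import numpy as np

rng = np.random.default_rng(0)
n = 20
score = rng.normal(size=n)  # + = more utilitarian, - = more Kantian
dlpfc = 0.5 * score + rng.normal(scale=0.8, size=n)       # tracks utilitarian
precuneus = -0.4 * score + rng.normal(scale=0.8, size=n)  # tracks Kantian

print("r(dlpfc, score) =", round(np.corrcoef(dlpfc, score)[0, 1], 2))
print("r(precuneus, score) =", round(np.corrcoef(precuneus, score)[0, 1], 2))

# Two-predictor linear regression: R^2 = proportion of variance explained.
X = np.column_stack([np.ones(n), dlpfc, precuneus])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta
r2 = 1 - resid @ resid / ((score - score.mean()) @ (score - score.mean()))
print("R^2 =", round(r2, 2))
```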
The above results, taken together, provide strong support for the model sketched above
according to which moral decisions are produced through an interaction between emotional and
"cognitive" processes subserved by anatomically dissociable brain systems. Another recent brain
brain imaging experiment adds further support to this model of moral judgment. Alan Sanfey,
Jim Rilling, and colleagues (2003) conducted a brain imaging study of the Ultimatum Game in
order to study the neural bases of people's sense of fairness. The Ultimatum Game works as
follows: There is a sum of money, say $10, and the first player (the proposer) makes a proposal
as to how to divide it up between herself and the other player. The second player, the responder,
can either accept the offer, in which case the money is divided as proposed, or reject the offer, in
which case no one gets anything.
When both players are perfectly rational, purely motivated by financial self-interest, and these facts are known to the proposer, the outcome of the game is guaranteed. Because something is better than nothing, a rationally and financially self-interested responder will accept any offer. A rational and financially self-interested proposer who knows this will therefore offer the responder as small a share of the total as possible, and thus the proposer will get nearly all
and the responder will get nearly none. This, however, is not what usually happens when people
play the game, even when both players know that the game will only be played once. Proposers
usually make offers that are fair (i.e. fifty-fifty split) or close to fair, and responders tend to reject
offers that are more than a little unfair. Why does this happen?
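The gap between the game-theoretic prediction and actual behavior is easy to see in a toy simulation. The 30%-of-the-pot rejection threshold below is an illustrative stand-in for a fairness-sensitive responder, not a model taken from the study.

```python
# Toy Ultimatum Game: a purely self-interested responder accepts any
# positive offer; a fairness-sensitive responder rejects offers below
# some threshold. The threshold is illustrative, not from Sanfey et al.

def play_ultimatum(total, offer, accepts):
    """Return (proposer_payoff, responder_payoff)."""
    if accepts(offer, total):
        return total - offer, offer
    return 0, 0  # rejection: no one gets anything

rational = lambda offer, total: offer > 0
fairness_sensitive = lambda offer, total: offer >= 0.3 * total

for offer in (1, 3, 5):
    print(f"offer ${offer}: rational responder ->",
          play_ultimatum(10, offer, rational),
          "| fairness-sensitive ->",
          play_ultimatum(10, offer, fairness_sensitive))
```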
The answer, once again, implicates emotion. The results of Sanfey and Rilling's study
show that unfair offers, as compared to fair offers, produce increased activity in the anterior
insula, the region mentioned above that is associated with anger, disgust, and autonomic arousal.
Moreover, individuals' average levels of insula activity correlated positively with the percentage of offers they rejected, and insula activity was weaker on trials in which the subject believed that the unfair offer was made by a computer program. But the insula is only part of the story. The anterior cingulate, the region mentioned above that is associated with response conflict, and the dorsolateral prefrontal cortex, one of the regions mentioned above that is associated with "higher cognition," were also more active in response to unfair offers. Finally, on trials in which the unfair offer was rejected, the level of activity in the insula tended to be higher than the level of activity in the dorsolateral prefrontal cortex, while the reverse was true of trials in which unfair offers were accepted.
These results dovetail nicely with the imaging experiments of hypothetical moral
dilemmas described above. In both studies, the insula subserves an emotional response to an
action that is naturally seen as unfair. (In my study the insula responds to the subject's own
decision to approve of a personal moral violation, whereas here the subject and her insula are
reacting to another person's decision to allocate resources unfairly.) In both studies there is a
"cognitive" rationale for not acting on the basis of one's emotions. In response to crying baby,
for example, one recognizes that putting one's parental instincts aside and smothering the baby
will produce the best overall consequences, whereas in the Ultimatum Game one recognizes that
putting one's righteous indignation aside and accepting an unfair offer will make oneself more
money. These instances of "cognitive overriding" appear to be subserved by regions in the
dorsolateral prefrontal cortex. Finally, according to this model, both unfair offers and difficult
moral dilemmas elicit competing emotional and "cognitive" representations, as indicated by the
activation of the anterior cingulate cortex in both cases.
Other neuroimaging results have shed light on the neural bases of moral judgment. Jorge
Moll and colleagues have conducted two experiments using simple, morally significant sentences (e.g. "They hung an innocent") [Moll et al., 2001, 2002b] and an experiment using morally
significant pictures (e.g. pictures of poor abandoned children) [Moll et al., 2002a]. These
studies along with the ones described above implicate a wide range of brain areas in the
processing of morally significant stimuli, with a fair amount of agreement (given the variety of
tasks employed in these studies) concerning which brain areas are the most important.21 In
addition, many of the brain regions implicated by this handful of neuroimaging studies of moral
cognition overlap with those implicated in neuroimaging studies of "theory of mind," the ability
to represent others' mental states [Frith, 2001]. (For a more detailed account of the
neuroanatomy of moral judgment and its relation to related processes see Greene and Haidt
(2002).) While many big questions remain unanswered, it is clear from these studies that there is
no "moral center" in the brain, no "morality module." Moreover, moral judgment does not
appear to be a function of "higher cognition," with a few emotional perturbations thrown in
[Haidt, 2001; Damasio, 1994]. Nor do moral judgments appear to be driven entirely (or even more or less entirely) by emotional responses [Haidt, 2001]. Rather, moral judgments appear to
be produced by a complex network of brain areas subserving both emotional and "cognitive"
processes [Greene et al., 2001; Greene and Haidt, 2002; Sanfey et al., 2003].
Other data: development, anthropology/cultural psychology, evolutionary psychology, and
behavioral genetics
There are many sources of valuable information that bear on the issue of innateness in moral
psychology. In addition to the ones discussed above, I will briefly mention five more.
The most extensive literature in moral psychology is probably the literature on moral
development, including the well-known works of Piaget [ref], Kohlberg [ref, ref], Turiel [ref], and Gilligan [ref], among others. In the best cases, the study of development can provide
dramatic evidence for the existence of innate behaviors. Indeed, if newborn human babies, like
newborn gazelles taking their first shaky steps, were capable of making rudimentary moral
judgments straight out of the womb, it would be clear that our capacity for moral judgment is as innate as anything else. Needless to say, things don't work that way, and as a result the scientific record of human moral development is difficult to interpret. There are two fundamental problems that make drawing conclusions about nativism in moral psychology from the developmental literature difficult. The first problem is methodological. Much of the most influential work in moral psychology has come from analyses of children's verbal accounts of their own moral sensibilities [Kohlberg]. As demonstrated throughout this chapter, there is a large and growing body of evidence suggesting that moral judgment is largely an intuitive, emotional affair [Haidt, 2001]. If that's correct, then it's likely that much of the work that has been done chronicling the increasingly sophisticated modes of moral thought employed by children, teenagers, and adults has inadvertently been directed toward the development of people's abilities to verbalize and rationalize their moral sensibilities rather than the development of those sensibilities themselves [Haidt, 2001].
21 The exceptions here are the "cognitive" areas, which are seen only in paradigms that involve "impersonal" moral judgments and/or difficult moral decisions [Greene et al., 2001; Sanfey et al., 2003].
The second, and more significant, problem was alluded to above. Moral development,
like so much else, is a robustly interactive affair, and, in a species that depends so much on
cultural learning, it is difficult to tease apart the genetic and environmental components of any
complex social behavior simply by observing it. There are, however, some suggestive findings.
For example, it is sometimes argued that behaviors that are learned during a critical period are
behaviors that we are specifically and innately prepared to learn, but these claims are
controversial. (The claims are most often made [Pinker] and disputed [McClelland, Elman et al]
in discussions of the development of language.) As noted above, patients who sustain
ventromedial damage at a young age appear more likely to exhibit anti-social behavior than those
who sustain such damage later in life (Anderson et al., 1999). Rozin, Fallon, and Augustoni-Ziskind (1985) point out that children in cultures that do not emphasize matters of purity and pollution in moral contexts [Shweder] often develop such intuitions spontaneously around the
ages of seven or eight. (Think of children’s frequent obsessions with “cooties.”) These
intuitions and concerns, however, tend to wither without cultural support. Something similar
appears to be the case with respect to the ethics of autonomy and community. Around the age of
four children tend to go from being relatively uninterested in matters of fairness to being
obsessed with them, often overgeneralizing norms of fairness to inappropriate situations (Fiske,
1991). Minoura (1992) found striking differences in the socialization processes undergone by
Japanese children of different ages living in America where their fathers were temporarily
transferred for work. Of these children, those who spent a few years in America during the ages
of nine through fifteen tended to develop American ways of interacting with friends and of
reacting to and addressing interpersonal problems. The ones who spent time in America before
the age of nine showed no such lasting effects, and the ones who arrived in America after the age
of fifteen did not adjust as well to American life. Such late arrivals typically felt awkward
behaving in American ways even while having excellent explicit knowledge of American
behavioral norms.
While human newborns have little in the way of a moral sensibility, human children can be morally precocious in a way that suggests that they have received some specific help from their
genes, a claim that is bolstered by observations of what appear to be analogous behaviors in
chimpanzees (see above). Children exhibit empathy and other pro-social behaviors from a very
young age [Hoffman, 1982, Kagan, ?]. Likewise, young children (including autistic children
[Blair]) are able to draw the moral/conventional distinction (see above) [Turiel].
Regardless of what one makes of the evidence for nativist views of moral development,
it's clear that culture plays an important role in shaping people's moral sensibilities, as
demonstrated by the fact that moral beliefs and values vary widely from culture to culture, and
from sub-culture to sub-culture. Anthropologists and cultural psychologists have documented a
wide range of such differences [Haidt et al, 1993; Fiske, 1990; Nisan, 1987; Shweder; Rozin, 1997a, 1997b; Lakoff, 1996; Narvaez, 1999; Edwards, 1987; what's a good summary for this?],
but the general point that cultural differences involve differences in moral outlook can be
ascertained simply by reading the newspaper. The more interesting question, then, is whether
there are similarities in moral outlook among the various cultures of the world that point toward
an innate component. Some have observed that the cultural variations on human morality appear to be variations on certain central themes. According to Shweder and his colleagues (1997), moral norms and intuitions cluster around what they call the “big three” domains of human moral
phenomena: the “ethics of autonomy” which concerns rights, freedom, and individual welfare;
the “ethics of community” which concerns the obligations of the individual to the larger
community in the form of loyalty, respectfulness, modesty, self-control, etc.; and the “ethics of
divinity” which is concerned with the maintenance of moral purity in the face of moral pollution.
Rozin and colleagues (1999) argue that these three domains correspond to three basic moral
emotions: anger for autonomy, contempt for community, and disgust for divinity. While all
cultures appear to have practices associated with each of the "big three" moral domains to some
extent, their emphases can be very different. Westerners, for example, are said to have relatively
little experience with the ethics of divinity, although there are exceptions as seen, for example, in
the moralization of meat-eating among some Western vegetarians [Rozin]. Others have
observed that certain moral principles seem to be universal, such as injunctions against rape and
murder and provisions for redressing such wrongs [Brown], and some claim that social life is
structured by a discrete set of principles for fair exchange [Fiske], with strong moral injunctions
against exchanges that violate these principles [Tetlock et al.].
One problem in interpreting these regularities is that the presence of a trait across a wide
variety of cultures, even all cultures, does not imply that the trait in question is innate. As noted
above, poetry and hair styling are observed in all cultures [Brown], but there seems to be a
further question, and much room for skepticism, as to whether these things are innate. At the
same time, however, there do seem to be some commonalities in moral outlook that smack of
innateness. Perhaps the best example is the incest taboo, which is maintained in some form in all
cultures [Brown]. There are three complementary reasons why this aspect of morality is such a
good candidate for innateness. First, it has a strong biological rationale and is observed in other
species [ref]. (Matings between first-degree relatives are more likely to produce defective
offspring.) Second, the taboo is endorsed even by peoples who have no knowledge of the
deleterious biological consequences of incest. Third, people who are aware of these
consequences maintain that incest is wrong even when those consequences can be avoided (e.g.
through birth control) but have an amusingly hard time justifying their convictions [Haidt]. The
incest taboo appears to be grounded in a strongly felt moral intuition that outstrips any rationale
we can provide for it, suggesting that the biological rationale is the operative one,
implemented in the form of an evolutionary adaptation.
The case for the incest taboo as a biological adaptation is pretty strong, but is it an
exception? Many evolutionary theorists argue that the incest taboo is just the tip of the iceberg
when it comes to innate moral dispositions in humans. These arguments fall into two main
categories which we might call "possibility arguments" and "actuality arguments." For a long
time, the existence of pro-social or altruistic behavior in humans and other animals was viewed
as a problem for those who would explain the most salient aspects of human nature in terms of
natural selection: If nature selects for traits that help individuals outcompete other individuals in
the struggle for existence, why would any individual do anything to help others at its own
expense? Many people, including one of evolutionary theory's earliest and most vocal
proponents (Huxley, 1894), concluded that morality must be a cultural addition to biological
human nature, and perhaps even an antidote of sorts. In recent decades, however, an impressive
body of work, beginning with the seminal works of Hamilton [1964], Maynard Smith [1964], Williams [1966], and Trivers [1971], has explained how the amoral forces of natural
selection can produce moral creatures with genuinely altruistic motives. These theories have
been refined in recent years and have given rise to the field of evolutionary game theory
[Axelrod; Skyrms; Boyd and Richerson; Sober and Wilson]. I will not attempt to summarize
these developments, but will simply report that the existence of genuine altruism in humans and
other animals is no longer regarded as an obstacle to evolutionary explanations of human nature.
These days, most of the controversy in evolutionary psychology centers around specific
theories according to which various aspects of human moral nature are biological adaptations.
Many of these arguments have focused on the perennially alluring topics of sex and violence.
For example, people have used evolutionary theory to explain patterns in gender roles and
sexual preference [Symons, 1979; Perrett face study, 1999; Wright, 1994] as well as patterns in
human violence [Daly and Wilson; Wright, 1994; Demonic Males; Thornhill and Palmer, 2000,
Buss and Duntley, this volume.] Some have sought positive evidence for the cognitive
machinery postulated by earlier theorists' possibility arguments. For example, the adaptation for
reciprocal altruism described by Trivers requires that individuals be reasonably good at detecting
those who would reap the benefits of social exchange without paying the costs. This led Leda
Cosmides [1989] to posit the existence of a specialized "cheater detection" mechanism and to
catch it in action using her well-known variation on the Wason selection task, a test of logical
reasoning. When people are given a standard version of this task in which the subject must
identify the conditions required to satisfy an abstract logical rule, they do poorly, but when the
test is recast in terms of a social exchange whereby the subject must identify the behavioral
conditions that must be met in order for a social norm to have been respected, people do much
better. Cosmides concludes that this specialized ability to reason about social exchange is a
biological adaptation that allows people to successfully reap the benefits of social life.
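The underlying logic of the selection task can be made explicit in a short sketch: a rule of the form "if P then Q" can be falsified only by a case that is P and not-Q, so only cards that might be such a case need to be turned over. The card encoding below is my own illustration.

```python
# Wason selection task logic. Abstract rule: "If a card has a vowel (P),
# it has an even number (Q)." Social-contract framing: "If you take the
# benefit (P), you pay the cost (Q)." Same logic, very different difficulty.

cards = [
    {"label": "E (vowel)",     "P": True,  "Q": None},   # number side hidden
    {"label": "K (consonant)", "P": False, "Q": None},
    {"label": "4 (even)",      "P": None,  "Q": True},   # letter side hidden
    {"label": "7 (odd)",       "P": None,  "Q": False},
]

def must_check(card):
    """Turn a card over iff it could be a P-and-not-Q counterexample."""
    could_be_p = card["P"] in (True, None)
    could_be_not_q = card["Q"] in (False, None)
    return could_be_p and could_be_not_q

for card in cards:
    print(card["label"], "-> turn over" if must_check(card) else "-> leave")
```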
Most of the arguments regarding evolutionary adaptations that bear on moral judgment
and behavior are speculative, theoretical arguments of the form "Wouldn't it make sense that...?"
I personally am sympathetic to evolutionary psychology with respect to understanding both
morality and human nature more broadly, but I am the first to admit that the field is heavy on
theory and light on hard data. The fundamental difficulty in evolutionary approaches to human
behavior is that they seek to explain readily observable phenomena in terms of processes that
took place long ago and that leave very little in the way of physical evidence. The fact that the
phenomena to be explained by evolutionary theories are readily observable is, perhaps
surprisingly, a disadvantage, because this makes it hard for theorists to make genuine
predictions, i.e. to describe in advance phenomena that have yet to be observed, either formally
or informally. Theorists whose evidence consists of successful "postdictions" can always be
accused of retrofitting theory to data, and, regardless of one's views regarding the normative
epistemology of science, novel predictions tend to be more compelling. There are, however, a
handful of cases in which evolutionary theories of human behavior have made novel
predictions22 that have turned out to be true. Cosmides' studies using the modified Wason
Selection Task are a good example. Her evolutionary interpretation of these results has been
criticized, but in this case it's the skeptics whose alternative theories are in danger of having been
retrofitted to the data.
In the years to come, the emerging field of behavioral genetics will contribute greatly to our understanding of human moral nature. It's important, however, to understand what
exactly studies correlating genes and behavior can and cannot tell us. There are, unbeknownst to
many people, two broad "nature-nurture" questions. The first question concerns the extent
which genetic influences account for the aspects of human nature that we all share. This is the
question with which most of this chapter has been concerned. More specifically, we have been
examining the features of moral psychology that all normal humans appear to have as well as
features that we share with other species. So far as this first question is concerned, genetic
evidence is difficult to interpret because it is hard, both empirically and conceptually, to partial
out the effects of genes when both the genes and the effects are uniform. The second
question concerns the factors that make us different from one another, and here genetic evidence
has proven incredibly powerful.
Behavioral geneticists track correlations between genes and behavioral traits in two main
ways. The first way is to compare the behavior of people with known degrees of genetic
relatedness. Pairs of identical twins reared apart can be compared to unrelated people reared
apart. Pairs of identical twins reared together can be compared to pairs of fraternal twins reared
together, both of whom can be compared to pairs of ordinary siblings reared together, all of
whom can be compared to pairs of genetically unrelated individuals reared together. In a similar
way, the genetic heritability of traits can be deduced through the examination of family trees.
Second, in contrast to the whole-genome approach employed in twin and family studies,
researchers can also identify correlations between observable traits and particular genes or
ensembles of genes.
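To illustrate the logic of the twin comparison, here is a sketch using Falconer's classic estimator, which infers heritability from the gap between identical-twin and fraternal-twin similarity. The correlations below are invented, chosen only to echo the "about half" figure quoted next.

```python
# Twin-study logic in miniature. Identical (MZ) twins share ~100% of
# segregating genes, fraternal (DZ) twins ~50%, so Falconer's formula
# estimates heritability as h2 = 2 * (rMZ - rDZ). Correlations invented.

r_mz = 0.70  # similarity of identical twins on some trait
r_dz = 0.45  # similarity of fraternal twins on the same trait

h2 = 2 * (r_mz - r_dz)  # genetic share of trait variance
c2 = r_mz - h2          # environment shared by co-twins
e2 = 1 - r_mz           # non-shared environment (plus measurement noise)
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")
```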
Four decades of research have produced some striking results.
Testing confirms that identical twins, whether separated at birth or not, are eerily alike
(though far from identical) in just about any trait one can measure. They are similar in
verbal, mathematical, and general intelligence, in their degree of life satisfaction, and in
personality traits such as introversion, agreeableness, neuroticism, conscientiousness, and
openness to experience. They have similar attitudes toward controversial issues such as
the death penalty, religion, and modern music. They resemble each other not just in
paper-and-pencil tests, but in consequential behavior such as gambling, divorcing,
committing crimes, getting into accidents, and watching television. And they boast
dozens of shared idiosyncrasies such as giggling incessantly, giving interminable answers
to simple questions, [and] dipping buttered toast in coffee... The crags and valleys of
their electroencephalograms ... are as alike as those of a single person recorded on two
occasions, and the wrinkles of their brains and distribution of gray matter across cortical
areas are also similar. (Pinker, 2002, pg. 47)
22
By "novel predictions" I mean cases in which folk wisdom fails to make the same prediction. For example, the
"prediction" that men are, on average, more willing to have sex with strangers than women doesn't count as novel,
even before this phenomenon has been scientifically documented.
If you have a longer than average version of the D4DR dopamine receptor gene you are
more likely to be a thrill seeker, the kind of person who jumps out of airplanes, clambers
up frozen waterfalls, or has sex with strangers. If you have a shorter version of a stretch
of DNA that inhabits the serotonin transporter gene on chromosome 17, you are more
likely to be neurotic and anxious... (pg. 48)
...A conventional summary is that about half of the variation in intelligence, personality,
and life outcomes is heritable... (pg. 374)
So far as I am aware, there haven't been any genetic studies of moral psychology per se, but the
results described above concerning such traits as criminality, sexual promiscuity, and views on
the death penalty come pretty close, and given the general trend described above it seems
unlikely that genes will account for less of the variation in moral behavior than they do for most
other kinds of behavior.
That said, two very important caveats to these bold claims deserve attention. First, the
heritability of a trait is relative to a population, and the more homogeneous the population the
higher the heritability values are likely to be. The results described above are typically from
studies of relatively homogeneous populations, and therefore they might not tell us much about
the extent to which genes account for differences between cultures (Pinker, 2002, pg. 380).
Nevertheless, if you're an American who wants to know why you and George W. Bush can't
seem to agree on anything, genes may be a large part of the answer.
The second important caveat is that individual differences in moral values and character
may not be the differences that matter most. Decades of research in social psychology, including
a high proportion of its most celebrated studies [Milgram, Darley and Latane], have shown that
situational variables explain much more of people's behavior than is generally acknowledged23
and that personality traits explain much less than is ordinarily thought [Nisbett and Ross]. (For an excellent philosophical account of moral philosophy, moral psychology, and situationism, see Doris [?].) Thus, even if genes explain a lot about why I behave differently from you when you
and I are in the same situation, the differences between our situations may be the differences that
matter most. Still, this point should not obscure the fact that genes account for a great deal of
behavioral variation across situations, as demonstrated by the striking similarities between twins
reared apart.24
What in moral psychology is innate?
In extracting from the above discussion an answer to this question, it will be useful to distinguish
between the form and content of moral thought. The form of moral thought concerns the nature of the cognitive processes that subserve moral thinking, which will most likely be a function of the cognitive structures that are in place to carry out those processes. The content of moral thought concerns the nature of people's moral beliefs and attitudes, what they think of as right or wrong, good or bad, etc. Thus, it could turn out that all humans have an innate tendency to think about right and wrong in a certain way in the absence of any genetic predisposition to come to any particular conclusions about which things are right or wrong. With this in mind, let us review the data presented above.
23 There is some evidence to suggest that Asians do a better job of acknowledging this than Westerners. [Nisbett]
24 There is certainly an interesting—and, to my knowledge, overlooked—tension between situationism and the apparent power of genetic factors to explain behavior. One possible resolution of this tension lies in the fact that situational factors seem to account for behavior in particular situations (e.g. a particular choice about whether or not to help a distressed individual) whereas genes seem to account for broad patterns of behavior (e.g. criminal tendencies, television watching). Perhaps genes account for people's "long run averages" while situational factors provide a better account of people's day-to-day choices. It's not clear what consequences this resolution would have for those who have found in the situationist doctrine grounds for skepticism about the existence of morally significant character traits [Doris; Harman].
A wide range of studies over several decades reveals striking similarities between the
social and emotional dispositions that structure the social lives of non-human primates and those
of humans. With respect to the form of moral thinking, the primary similarities appear to lie in
the affective domain. The basic emotions that govern human social exchange
(anger, empathy, anxiety, joy) appear to be at work in non-human primates, and in similar ways.
Thus, so far as the form of our moral thought is concerned, we humans apparently make use of
many of the same cognitive tools as other primates. No one believes that the chimpanzee's
social emotions are cultural inventions, and the principle of "evolutionary parsimony" suggests
that our strikingly similar social emotions spring from a common genetic source, courtesy of
natural selection. There are, of course, aspects of human moral thinking that are not observed
among our nearest relatives. As Otto from A Fish Called Wanda points out, "Gorillas don't read
Nietzsche!"25 Humans certainly have a complex, language-dependent capacity for abstract moral
thought that outstrips anything of which other animals are capable. Jonathan Haidt (2001) has
argued, however, that human moral reasoning is largely a post-hoc affair and plays little
direct role in producing moral judgments, which are made on the basis of emotional intuitions.
Thus, if the strong emotivist position developed by Haidt is correct, the form of human moral
thought may, underneath all the rhetoric and mediating cognitive capacities, be very similar to
that of chimpanzees.
Studies of non-human primates also speak to the content of human morality. Their social
lives are not only governed by social emotions; these social-emotional responses also appear to be
directed toward familiar contents. Chimpanzees do not, for example, fly into a rage when a
member of the troop peels his banana from the bottom instead of the top. Instead, they seem to
care about the same sorts of things that we do: avoiding personal violence, equitable distribution
of material resources, restrictions on sexual access, and so on. Thus it seems likely that many of
our most basic moral values find their roots in the pro-social dispositions of our primate
ancestors.
A number of themes emerge from studies of patients with social behavioral problems
stemming from brain injury, psychopaths, and the neural bases of moral judgment in normal
individuals. Popular conceptions of moral psychology, bolstered by the legend of Phineas Gage
and popular portrayals of psychopaths, encourage the belief that there must be a "moral center"
in the brain. This does not appear to be the case. The lesion patients, both developmental and
adult-onset, all have deficits that extend beyond the moral domain, as do the psychopaths that
have been studied. Moreover, the results of brain imaging studies of moral judgment reveal that
moral decision-making involves a diverse network of neural structures that are implicated in a
wide range of other phenomena. Nevertheless, the dissociations observed in pathological cases
and in the moral thinking of normal individuals are telling. Most importantly, multiple sources
of evidence point toward the existence of at least two relatively independent systems that
contribute to moral judgment: (1) an affective system that (a) has its roots in primate social emotion and behavior; (b) is selectively damaged in certain patients with frontal brain lesions; and (c) is selectively triggered by personal moral violations, perceived unfairness, and, more generally, socially significant behaviors that existed in our ancestral environment; and (2) a "cognitive" system that (a) is far more developed in humans than in other animals; (b) is selectively preserved in the aforementioned lesion patients and psychopaths; and (c) is not triggered in a stereotyped way by social stimuli. I have called these two different "systems," but they themselves are almost certainly composed of more specific subsystems. In the case of the affective system, its subsystems are probably rather domain-specific, while the system that is responsible for linguistically-based higher cognition, though composed of subsystems with specific cognitive functions, is more flexible and more domain-general than the affective system and its subcomponents. Mixed in, perhaps mistakenly, with what I've called the affective system are likely to be cognitive structures specifically dedicated to representing the mental states of others ("theory of mind") (Greene and Haidt, 2002).
25 Or, perhaps, as Wanda maintains, they do read Nietzsche, but fail to understand it.
What does this mean for the innateness of moral thought? It seems that the form of moral
thought is highly dependent on the large-scale structure of the human mind. Cognitive
neuroscience is making it increasingly clear that the mind/brain is composed of a set of
interconnected modules. Modularity is generally associated with nativism, but some maintain
that learning can give rise to modular structure, and in some cases this is certainly true. My
opinion, however, is that large-scale modular structure is unlikely to be produced without a great
deal of rather specific biological adaptation. Insofar as that is correct, the form of human moral
thought is to a very great extent shaped by the contingent structure of the human mind, which
itself is a product of natural selection. In other words, our moral thinking is not the result of applying a general-purpose learning device to the problems of social life. As the stark contrast between the trolley and footbridge problems suggests, our moral
judgment is greatly affected by the quirks in our cognitive design.
As for the content of human morality, there are good reasons to think that genes play an
important role here as well. Psychopaths, with their highly stereotyped suite of deficits, don't
just get things wrong. They get things wrong in very specific ways. More specifically, they
appear to be lacking in empathy, an emotional capacity that counteracts the more selfish motives
that are so dramatically displayed in psychopathic behavior. Even if there is no "empathy module," the fact that empathic emotions can be removed without damaging too much else suggests that ordinary humans, like chimpanzees, are equipped with domain-specific social-emotional tendencies that are products of natural selection and that lie at the core of human
morality.
So far I've argued that the form of human moral thought is importantly shaped by the
innate structure of the human mind and that some basic pro-social tendencies probably provide
human morality with innate content. What about more ambitious versions of moral nativism?
Some have suggested, for example, that there may be a moral equivalent of "universal grammar,"
[Rawls, Stich, Mikhail?, others] a deep structure to our moral thought that is hidden from
ordinary moral experience, but nonetheless responsible for shaping it. I have my doubts. Of
course, it depends on what one means by "grammar." As I've said, I do think that the human
mind has a very particular structure that is universal and that profoundly shapes both the form
and content of moral judgment. You can call that "universal moral grammar" if you like, but I
think this is misleading. It seems to me that the proponents of moral grammar have a normative
agenda. They want not only deep form, but highly specific deep content. In other words, what
they really want is more analogous to a moral language than a moral grammar. They look at
people's responses to things like the trolley and footbridge cases and observe that people seem to
have a detailed and highly sensitive knowledge of right and wrong that defies people's ability to
explicate that knowledge, much as people's implicit understanding of grammar far outstrips their
explicit understanding of how language works. Such nuggets of implanted moral wisdom
encourage the thought that somewhere, deep in our cognitive architecture, we're going to find the
mother lode.
I am very skeptical of the idea that there are detailed moral truths written into our psyche.
Rather, I think that the suggestive cases like trolley and footbridge work so well because they
exploit a large-scale dissociation in our cognitive architecture. We recoil at the thought of
pushing the man off the footbridge, but not at that of hitting the switch, because we're adapted to
respond emotionally to good-old-fashioned interpersonal violence, and not because we have a
detailed set of moral rules written into our brains, one of which tells us exactly when it is okay to sacrifice one life to save five. And besides, even if there were a detailed set of moral
rules written into our brains, would it matter? It would certainly be nifty if it were true, but I
don't think it would get us what we want out of normative ethics. That is, I don't think it would
do anything to settle the important moral questions over which people disagree. If my deepest
moral convictions were at odds with the dictates of innate moral grammar, I would simply say
"So much the worse for innate moral grammar!", as could you.
I believe that the question of nativism in moral psychology grips many people because
our moral thought is at once both highly familiar and thoroughly alien. Our moral convictions
are central to our humanity and integral to our lives, and yet their origins are obscure, leading
people to attribute them to supernatural forces, or their more naturalistic equivalents. For some,
it seems, the idea of innate morality holds the promise of validation. Our moral convictions, far
from being the internalization of rules that we invented and taught one another, would be a gift from a universe wiser than ourselves. I believe that there is much wisdom in our moral instincts,
but they, like everything else nature hands us, have their quirks and their flaws. Those who seek
redemption in the study of moral psychology are bound to be disappointed, but there are, I think,
enough rewards on the horizon to make it worth the trouble.