ICR pamphlet - The Brain and the Mind

Consciousness, Will & Responsibility:
What studying the brain tells us about the mind.
Chris D Frith
Wellcome Trust Centre for Neuroimaging at University College London
& Interacting Minds Group, University of Aarhus
The Problem
I can choose to have a glass of red wine or a glass of white wine and I experience
this choice as a clear act of free will. How could I doubt that I have free
will? But consider the following argument. We can all agree that we have no
control over events in the past. And we can also agree that we have no control
over the laws of nature. But aren’t our present choices and acts entirely
determined by the past and by the laws of nature? If this is the case then my
choice of wine will follow inexorably from some hidden chain of events. It
follows that I have no control over my actions and my experience of having free
will, however strong, is an illusion (Hobart, 1998). Furthermore it follows that all
our conscious experience is just an unnecessary luxury that has no effect on what
we do (Gallagher, 2006). Some recent scientific studies, which I shall discuss
shortly, appear to support this position (Libet, Gleason, Wright, & Pearl, 1983;
Wegner, 2003). Nevertheless, I believe that our conscious experience does have a
function and does affect our behaviour. I believe that conscious experience has
evolved: We humans have more conscious experience than other animals and
this provides us with an advantage.
The problem is that it has proved remarkably difficult to demonstrate what this
advantage might be. Our behaviour is continuously modified by signals of which
we are unaware. There are also many things we can do without needing to be
aware of what we are doing, so that, in these cases, conscious experience is
entirely unnecessary. Nevertheless there are many things about which we have a
vivid conscious experience. Why is this so? What is the purpose of this vivid
experience? In this essay I shall discuss the new experiments that are beginning
to reveal what our conscious experience might be for. I shall conclude that its
most critical role is in social interactions.
Introduction – Our reticent brain
Our brain contains about 90 billion neurons that are continuously active. But
very little of this activity impinges on our awareness. This is usually to our
advantage. We all know, for example, that if someone asks you to think about
how you are controlling your bicycle, you will probably fall off.
Of course, we are not directly aware of the activity in our neurons. But without
this activity we would have no awareness. And it seemed natural to assume that
the main function of this activity was to create our awareness. Only recently have
we been able to show how much neural activity never creates any awareness. This
mistaken assumption was inevitable: why should we search for neural activity
associated with processes that do not impinge on our awareness?
By definition we have no insight into these unconscious processes that are not
available to introspection. In the first part of this essay I will give examples to
show just how complex and powerful these unconscious processes can be.
It was only when it became possible to study brain activity directly that inklings
of these hidden processes emerged. When, in 1852, Helmholtz first measured the
speed with which signals travel through neurons, and was also the first to
measure reaction time, he found that nerve conduction time was much slower
than expected and that reaction time was even slower. The implication is that the
brain is doing a great deal of work in between the time when, for example, the
light strikes our eye and the perception of an object emerges into consciousness.
Helmholtz suggested that this work involved the brain making unconscious
inferences about what is out there in the world on the basis of the crude signals
available to the senses. This idea was taken up again 100 years later by cognitive
psychologists, notably Richard Gregory, in books such as Eye & Brain (see also
Gregory, 1997). At around the same time advances in single cell recording began
to reveal the neural mechanisms by which these unconscious inferences were
made (Hubel & Wiesel, 1977). In the last 20 years, with the development of brain
imaging, it has been possible to measure activity in the human brain and show
that our brain responds to sensory signals of which we are unaware.
Our brain doesn’t just alter its activity in response to these signals. It can also use
these signals to alter our behaviour. We may be aware of deciding to pick up an
object, but the fine control of this reaching and grasping movement happens
without awareness. I discussed the unconscious basis of perception and action in
much more detail in my book Making up the Mind (Frith, 2007). In this essay I
shall take these ideas further by considering the role of unconscious and
conscious processes in decision-making and in social interactions.
Actions & Decision
An important component of consciousness is the vivid experience of being in
control of our actions and decisions. However, there are now many experiments
suggesting that this experience is an illusion. The most famous is the study by
Benjamin Libet, published in 1983. In this study brain activity (EEG) was
measured while volunteers raised their index finger ‘whenever they had the urge
to do so’. When you raise your finger spontaneously (rather than in response to a
signal) a gradual change in brain activity can be observed that starts almost 1
second before the lifting action occurs. Libet’s novel idea was to measure the
time at which his volunteers reported having the urge to lift their finger. He did
this by providing a clock face from which they could note the time at which the
urge occurred. His striking finding was that awareness of the urge to act
occurred about 300 msec after the start of the brain activity that precedes the act
(Libet, et al., 1983). While the details of Libet’s experiment have been criticised,
there have been many subsequent studies using various different techniques
that confirm this result. For example, John-Dylan Haynes and colleagues recently
used state-of-the-art functional magnetic resonance imaging (fMRI) techniques
and were able to predict which finger their volunteers would lift from brain
activity occurring about 10 seconds before the act (Soon, Brass, Heinze, &
Haynes, 2008).
The implication of this result is that we have little conscious control over the
initiation of these simple actions. Our conscious experience emerges after the
brain event¹. However, these studies involve very simple acts where conscious
control may not be important. Perhaps we need deliberate conscious control
when we have to make more difficult and important decisions? Recent
experiments suggest that this is not the case. Dijksterhuis and colleagues asked
their volunteers to make a series of complex and difficult decisions. For
example, they had to decide which, among a number of expensive cars, would be
the best buy. In one condition they were given 4 features of each car on which to
base their decision. In a more complex condition they were given 12 features for
each car. Having been given all this information the volunteers had 4 minutes in
which to decide on their choice. The key manipulation was that some of the
volunteers were given another task to do during this 4 minutes so that they
could not think about their decision. The striking result was that, for the complex
decision (12 features), the people who had not had the chance to think about it
made better decisions (Dijksterhuis, Bos, Nordgren, & van Baaren, 2006).
How is this possible? A plausible idea concerns the different ways in which the
unconscious and the conscious parts of our brain operate. Our unconscious brain
can handle many things in parallel. The 12 features of the cars in the complex
decision define a 12 dimensional space in which the best car can be located. This
is just the kind of problem our unconscious brain can handle. Our conscious
brain, in contrast, has a very limited capacity and can only handle a few features
at a time. For conscious decisions complex problems must be simplified into a
small number of dimensions, but the danger is that the wrong simplification will
be applied.
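The contrast between parallel integration of many features and low-capacity conscious simplification can be put in code. This is an illustrative toy of my own, not the experimental materials: the car names and feature scores are invented, and "first three features" stands in for whatever simplification a deliberating chooser might apply.

```python
# Toy illustration (invented data): each car is rated 0-10 on 12 features.
# The "unconscious" evaluation integrates all 12 dimensions in parallel;
# the "conscious" evaluation simplifies the problem to just 3 of them.
cars = {
    "car_A": [9, 9, 9] + [2] * 9,  # superb on three salient features, poor elsewhere
    "car_B": [6] * 12,             # solidly good on every feature
}

def full_evaluation(features):
    # Integrate every dimension at once.
    return sum(features)

def simplified_evaluation(features, k=3):
    # Limited-capacity deliberation: attend to only the first k features.
    return sum(features[:k])

best_full = max(cars, key=lambda name: full_evaluation(cars[name]))
best_simple = max(cars, key=lambda name: simplified_evaluation(cars[name]))

print(best_full)    # car_B: the better car overall
print(best_simple)  # car_A: the wrong simplification picks the wrong car
```

The danger the text describes is visible in the output: whenever the few attended dimensions are not representative of the whole, the simplified choice diverges from the fully integrated one.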
If the brain can achieve so much with processes that never impinge on
consciousness, we are left to wonder what consciousness is good for. I shall come
back to this problem later. But first we shall explore the many unconscious
processes that play such an important role in our social interactions.

¹ This observation has been taken by some to show that we do not have free will. Of course, free
will is not ruled out if the choice occurs at an unconscious level. The question then raised is
whether people can be held responsible for choices that are made without conscious
deliberation.
Social Contagion and Empathy
We still cling to the romantic notion that each of us is essentially alone in the
world and that we can never have any real access to the mind of another. This is
another of the illusions created by our conscious brain. At the unconscious level
we are deeply embedded in the social world of other people.
The extraordinary extent of this embedding can be observed in behaviour and in
brain activity. The most striking behavioural example is the chameleon effect.
When two people are interacting they show non-conscious mimicry of each
other’s postures, mannerisms and facial expressions (Chartrand & Bargh, 1999).
This is an example of social contagion. At around the same time that the chameleon
effect was discovered, social contagion was also observed at the level of neural
activity. Mirror Neurons, discovered by Giacomo Rizzolatti and colleagues, are
active when a monkey performs a particular action such as picking up a peanut.
However, they are also active when the monkey observes the experimenter
performing the same action. It is as if the monkey’s motor system mirrors the
actions observed in others. Similar mirroring of action can also be observed in
the human brain (Rizzolatti & Craighero, 2004). Presumably it is the mirror
system in the brain that underpins the chameleon effect.
There are now many experiments demonstrating contagious effects when we
observe the actions of others. An example I particularly like comes from the lab
of Roman Liepelt and his colleagues. Volunteers had to move their first or second
finger as quickly as possible in response to a signal. Their fingers were not
restrained in any way. However, they were looking at a picture of someone else’s
hand while they made their responses. The remarkable observation was that
simply seeing a hand in which the first and second fingers were clamped down
was sufficient to slow down the responses of the volunteers (Liepelt, et al.,
2009).
There are also contagious effects from the experiences of people we are
observing. Sarah-Jayne Blakemore and colleagues scanned volunteers while they
watched a video of someone being touched on the head or neck. They found that
watching someone being touched causes activity in the same region of the brain
(somatosensory cortex) as is activated when you are being touched yourself
(Blakemore, Bristow, Bird, Frith, & Ward, 2005). For one of the volunteers in this
experiment this is a conscious experience. She reports that when she sees
someone else being touched she feels it on her own body. This is a rare form of
synaesthesia. But it seems that for all of us there is a neural mirroring of this
feeling of touch in the brain. However, this does not usually break through into
our consciousness. Here again the conscious experience does seem to be an
unnecessary side effect.
There are also now several studies showing that the emotions of others are
contagious. This is a form of empathy. We feel the pain of others. If we see
someone having a needle stuck into their hand then activity is elicited in the
brain in many of the regions that would be activated if we were in pain ourselves
(Avenanti, Paluello, Bufalari, & Aglioti, 2006). This corresponds to our subjective
feeling of wincing at the sight of someone in pain. Tania Singer and her
colleagues showed that even a signal that your friend was about to get a painful
shock was sufficient to elicit activity in regions of the brain concerned with pain
(Singer, Seymour, et al., 2004).
I believe that these contagious effects of the actions, experiences and emotions of
others occur largely unconsciously and automatically. In other words, we are
unaware that our behaviour is being altered by these social signals and we have
little control over our responses to these signals. There is considerable evidence
in favour of this belief. For example, if we are shown a fearful face we tend to
imitate the expression with our own face and show activity in brain regions
concerned with fear. These effects occur even when the face is presented so
rapidly (the fearful face is presented for 30 msec and then immediately replaced
by a neutral face) that we are unaware of having seen it (Dimberg, Thunberg, &
Elmehed, 2000; Whalen, et al., 1998).
What are the advantages of social contagion?
We must gain some advantages from these widespread and powerful contagious
effects. But what might these be? I mentioned that our responses to these signals
are largely unconscious and automatic. In many situations this is also the case for
the production of these signals. We do not deliberately change the shape of our
face when we are afraid. Of course, we can deliberately smile and laugh for the
purpose of communication, but many of the social signals that elicit contagion
are not deliberately communicative. They are examples of what has been called
public information. This is useful information that we can acquire simply by
watching the behaviour of others. Many animals take advantage of such signals.
For example, foraging starlings can learn about new locations of food by
watching the behaviour of other starlings (Templeton & Giraldeau, 1995). The
facial expressions associated with fear and disgust are examples of public
information. These expressions have direct value for the person who expresses
them. When we express fear we open our eyes wide and increase our nasal
volume (Susskind, et al., 2008). This enhances our responses to sensory stimuli.
Our visual field becomes larger and we are more sensitive to smells. As a result
we are more able to detect the source of danger. The opposite effects are created
by the expression of disgust. Our eyes are narrowed and our nasal volume is
decreased making us less affected by noxious stimuli. As a result expressions of
fear and disgust are signals of different kinds of danger. By imitating them we
automatically prepare ourselves to cope with these different kinds of danger.
Our responses to these signals of fear and disgust are directly advantageous to us
as individuals. However, our responses to many of the other social signals I have
mentioned have a less direct advantage to us as individuals. Our responses to
these signals are advantageous to us through improving the performance of the
group of which we are members.
Alignment: The first kind of advantage occurs because our automatic imitation of
others increases alignment. When two people interact through imitation they
become more similar to each other and this is likely to make joint action and
communication easier. A very simple example of this is joint attention. We all
have a very strong tendency to follow the eye gaze of others in order to see what
they are looking at. Again this seems to be an automatic tendency that we cannot
suppress (Bayliss & Tipper, 2006). The effect of this is that we will align our
attention with the person we are interacting with. We will have a shared focus of
attention.
Another example of motor contagion relevant to this idea is the demonstration
that simply being primed to think about elderly people causes students to slow
down their speed of walking (Bargh, Chen, & Burrows, 1996). This alignment of
movement speed would clearly be an advantage when performing some joint
action with an elderly person.
Alignment occurs in speech as well as with movements. During a dialogue people
imitate each other at a number of different levels. The grammatical forms they
use, the words they use and even the way they pronounce the words become
more similar (Pickering & Garrod, 2004). This alignment makes communication
more efficient. For example, alignment enables people rapidly to agree on how to
name the various objects that they are manipulating together (Clark & Krych,
2004).
These effects generate better performance for the group, which is, of course,
good for the individuals within that group.
Prosocial behaviour: There are, however, other effects of social contagion that
create advantages for the group even more directly. The chameleon effect has
been explored by pairing volunteers with stooges who have been instructed by
the experimenter to covertly mimic their partners. As long as the volunteers
don’t notice the imitation then they report that they like the partners who mimic
them better than those that don’t (Chartrand & Bargh, 1999). Furthermore,
people who have been mimicked are more helpful and generous to other people
than those who have not been mimicked. This behaviour is not restricted to the
person who has been doing the mimicking, but seems to reflect a general
increase in prosocial behaviour, including, for example, increased donations to
charity (van Baaren, Holland, Kawakami, & van Knippenberg, 2004).
A striking example of a social cue generating prosocial behaviour comes from a
study conducted by Melissa Bateson and colleagues in the Psychology
Department at Newcastle University. As in many such departments there is a
common room where coffee and tea are always available. The staff are supposed
to put the money for the coffee and tea that they drink in a box, but the money
put in the box never quite covers the amount consumed. A very simple
experiment was performed in which a photo was placed above the box for the
money. On alternate weeks this was a photo of flowers or a pair of human eyes.
The results were striking: significantly more money was put in the box on the
weeks when there was a pair of eyes above it (Bateson, Nettle, & Roberts, 2006).
Remember, there was no real surveillance involved, only a photo of a pair of eyes.
The group benefits from this prosocial behaviour, and, in the long term so do the
individuals within the group.
Consciousness & Reason
In my review so far I have talked about complex processes relating to perception
and action that are entirely unconscious. Such unconscious processes are also
very important for our social behaviour where they help us to align ourselves
with others and to behave in a more prosocial manner. So what possible
advantages can conscious experience add?
Moral Dilemmas: There is a popular notion associated with enlightenment
philosophy that emotion is the enemy of reason (see Damasio, 1994). In this
notion emotion can be roughly equated with ‘primitive’ unconscious processes,
while reason is a conscious process of deliberation that is unique to humans. Of
course, we are conscious of the emotion, but, in the presence of an emotion, we
no longer deliberate about the action we are going to take. And, the idea is that, if
you let your decision be generated by emotion in this way, it will be a bad
decision. Brain imaging studies have investigated this idea by presenting people
with moral dilemmas. One well-studied dilemma is the ‘trolley problem’. A
railway wagon (or trolley in the USA) is running out of control along the track. If
it continues along this track it will hit 5 people working on the track and they
will all be killed. The only way to prevent this happening is to divert the wagon
onto a side track, but this diversion will kill the one person who is working on
that track. Should you pull the lever to divert the wagon? Most people think it
right to pull the lever since it is less bad that one person die than that 5 people die.
However, a slight change in the scenario will cause people to make a different
decision. In this scenario there is no side track. However, you are standing on a
bridge above the track and you can stop the wagon by pushing the fat man who
happens to be beside you onto the track. Here again this action results in one
person being killed instead of 5. Nevertheless, most people think it wrong to
push the fat man onto the track.
The question is, why should there be a difference between these two scenarios,
when from a rational, utilitarian point of view the dilemma is the same: is it
better that one should die rather than 5? Joshua Greene and colleagues
attempted to answer this question by measuring brain activity while volunteers
were presented with various versions of the trolley problem. They found that
rejection of the strict utilitarian approach, as, for example, when deciding it
would be wrong to push the fat man onto the track to stop the wagon, was
associated with greater activity in brain regions concerned with emotion and
response conflict (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001). This
result suggests that the emotional response to the thought of pushing the fat man
onto the track interferes with our ability to make a rational decision about the
dilemma.
This interpretation of our responses to moral dilemmas remains controversial.
One problem is that, in these experiments, the participants are not directly
involved in the dilemma. The (very artificial) situation is described and they say
what ought to be done. But what would they actually do?
The Ultimatum Game: Moral dilemmas present us with particularly difficult
decisions. Simpler kinds of decision-making can be studied in the lab by asking
people to play economic games. Participants in these games make decisions
about money. These are real decisions in the sense that the participants actually
get more or less money as a result of their decisions. These games can also reveal
to us something about the conflict between reason and emotion.
The ultimatum game involves two players. One player, the proposer, is given
some money (e.g. $100). He can offer any proportion of this he likes to the other
player, the responder. If the responder accepts the offer then both keep the
money. But if the responder rejects the offer, then neither gets any money.
According to classic economic theory, the rational decision for the responder is to
accept any offer, however small. This is on the grounds that some money is
better than no money. However, the majority of responders reject offers that are
less than a third of the total (Camerer & Thaler, 1995). Again this seems to be a
case of emotion leading to irrational decisions. Low offers are felt to be unfair
and therefore emotionally upsetting. Brain imaging studies confirm that low
offers and their rejection are associated with activity in regions including the
insula and anterior cingulate cortex which are also activated by stimuli eliciting
negative emotional states such as pain and disgust (Sanfey, Rilling, Aronson,
Nystrom, & Cohen, 2003).
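The payoff structure of the game is simple enough to write down directly. This is a minimal sketch of my own: the $100 stake and the roughly one-third rejection threshold come from the text, but the function names and the example offer are mine.

```python
def ultimatum(total, offer, accepted):
    """Payoffs (proposer, responder): the split if accepted, nothing otherwise."""
    return (total - offer, offer) if accepted else (0, 0)

def rational_responder(total, offer):
    # Classic economic theory: any positive amount beats nothing.
    return offer > 0

def typical_responder(total, offer):
    # Empirically, offers below about a third of the pot are usually rejected.
    return offer >= total / 3

low_offer = 20
print(ultimatum(100, low_offer, rational_responder(100, low_offer)))  # (80, 20)
print(ultimatum(100, low_offer, typical_responder(100, low_offer)))   # (0, 0)
```

The "irrationality" is plain in the second line of output: the typical responder walks away from $20 in order to deprive the unfair proposer of $80.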
Why is the decision to accept a small amount of money more rational than to
reject it? In this case the decision to accept is considered rational because it
maximises the individual monetary gain. But while such decisions may be good
for the individual are they necessarily good for the group?
Trust & Reciprocity: A slightly more complex economic game involves a group of
players who are each given an equal amount of money at the start of the game. A
player can invest this money in the group. The money invested is tripled by the
banker and then shared equally among the members of the group. This means
that an individual investor finishes up with slightly less, but the group as a whole
has more money than before. However, if everyone invests then everyone gains.
The problem with such games is that some players adopt a selfish strategy.
These free riders realise that, if they don’t invest themselves, they will still
benefit from the investments of others. And, since they keep their original
endowment, they will be even better off. The appearance of free riders in the
game is a disaster for the group. Players stop investing because they don’t see
why they should support the free riders. With this reduction in cooperation the
group benefits come to an end.
How can we overcome the problem created by the free riders? Fehr & Gächter
added a new feature to the game: altruistic punishment. A player is allowed to
punish other players by having them fined, i.e. money is taken away from them.
However, the player applying the punishment has to pay for the privilege. It is in
this sense that the punishment is altruistic. We lose money when we apply the
punishment. The addition of altruistic punishment to the game reduces free
riding and the group, and all the individuals within it, continue to benefit from
investment (Fehr & Gachter, 2002).
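The arithmetic of the game, and of why punishment restores cooperation, can be sketched as follows. The numbers here are illustrative assumptions of mine: a 10-unit endowment, four players, a tripled pot, and a punishment costing the punisher 1 unit for every 3 units of fine (roughly the cost-to-fine ratio Fehr & Gächter used).

```python
def public_goods_payoffs(contributions, endowment=10, multiplier=3):
    # Each player keeps what they did not invest, plus an equal share
    # of the multiplied common pot.
    share = multiplier * sum(contributions) / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone invests everything: every player triples their money.
print(public_goods_payoffs([10, 10, 10, 10]))  # [30.0, 30.0, 30.0, 30.0]

# One free rider does better than the cooperators...
payoffs = public_goods_payoffs([10, 10, 10, 0])
print(payoffs)                                 # [22.5, 22.5, 22.5, 32.5]

# ...until altruistic punishment: each cooperator pays 2 units to
# fine the free rider 6 (a 1:3 cost-to-fine ratio).
cost, fine = 2, 6
punished = [p - cost for p in payoffs[:3]] + [payoffs[3] - 3 * fine]
print(punished)                                # [20.5, 20.5, 20.5, 14.5]
# Free riding no longer pays, so investment -- and the group benefit -- survives.
```

Note that the punishers end the round poorer than they would have been had they not punished; the punishment is altruistic precisely because its benefit accrues to the group in later rounds, not to the punisher now.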
This result casts a new light on people's behaviour in the ultimatum game. Why
do we turn down low offers? The unfair proposer is behaving like a free rider:
keeping the money for himself rather than investing it in the group. By rejecting
the offer we are applying altruistic punishment. It may cost us money, but, by
taking money away from this proposer, he may learn to behave more fairly in the
future, bringing benefits to everyone in the group.
If we accept this analysis, then, from the point of view of interactions within a
group, this emotional behaviour is not irrational since it increases reciprocity
and maximises monetary gain for the group. Here is another example of a
primitive emotional and largely unconscious process guiding us to behave in a
less selfish and more group oriented manner, which, in the long run, is beneficial
to the individuals within the group.
So what is consciousness for?
As we have seen, consciousness seems to play no role in the control of many of
those altruistic actions and decisions that we consider to be such a special
feature of our humanity. Furthermore, much of our social behaviour seems to be
controlled by unconscious processes driven by our emotions rather than by
conscious deliberation. So why should a key component of our consciousness be
the vivid experience of controlling our actions and making our decisions?
This experience is partly about causation (I am causing my hand to move; I am
causing the light to come on by pressing the switch) and partly about choice (I
could have done something else if I had wanted to). But what is this experience
good for?
The feeling of responsibility: It is important for us to make a distinction between
deliberate actions and accidents. If my arm movement accidentally spills the
wine on you then everyone is very compassionate about my embarrassment. But
if, with much the same arm movement, I do it deliberately, the action is meant,
and taken, as a severe insult. Infants as young as 9 months can distinguish
between deliberate and accidental actions made by other people, for example,
whether the toy was withheld deliberately or dropped accidentally (Behne,
Carpenter, Call, & Tomasello, 2005).
We researchers can create unintended movements in the lab by holding a coil
over the head of a volunteer and stimulating the motor cortex with a brief, but
powerful magnetic pulse (transcranial magnetic stimulation, TMS). The pulse
causes the fingers of one hand to move. We perceive this movement as
unintended and not under our control. Patrick Haggard and his colleagues used
this technique to compare the conscious experience of accidental and intended
movements. In both cases the movement was followed after ¼ of a second by a
sound. In the case of the intended movement the volunteer caused the tone to
occur by pressing a key. In the case of the accidental movement the tone
followed after the magnetic pulse that caused the finger to move. Haggard used
the technique introduced by Libet to measure the time at which volunteers
experienced the two events: the time of the movement and the time of the tone.
This experiment revealed the phenomenon of intentional binding. When a
deliberate movement causes the tone, then the movement is experienced as
occurring later in mental time than in physical time, while the tone appears to
occur earlier. The opposite effect occurs when the unintended movement is
followed by the tone. The unintended movement appears to occur earlier and the
following tone later (Haggard, Clark, & Kalogeras, 2002). In other words when
we feel that we are causing something to happen, our action and its consequence
appear to be closer together in time. They are bound together in mental time.
When the action is not considered to cause an event, then the action and the
event are pushed apart in time. Our brain is creating a sense of responsibility for
some events and not others.
This feeling of responsibility for our actions also applies to the actions of others. I
have already mentioned the behaviour of mirror neurons that are active when
we act and also when we see other people performing the same actions. Activity
in these neurons relates to actions, but does not indicate who is performing the
action. There is also a characteristic pattern of brain activity that appears after
making an error, such as pressing the wrong key in a laboratory experiment. The
same activity also occurs when we see someone else making an error (van Schie,
Mars, Coles, & Bekkering, 2004). Here again the activity relates to an error, but
does not indicate who is making the error.
These results indicate the extent to which our brain treats our own actions and
those of others in a similar fashion. Indeed, with suitable manipulations, it is easy
to create illusions of responsibility for actions. Daniel Wegner has caused
volunteers to be convinced that they performed an action when it was actually
performed by someone else (Wegner & Wheatley, 1999) or to be convinced that
someone else performed an action when it was actually themselves (see also
Wegner, 2003; Wegner, Fuller, & Sparrow, 2003).
Whatever the nature of our ability to consciously control our actions, the feeling
of responsibility for our actions has a very important role in social cooperation. I
have already mentioned the importance of altruistic punishment in maintaining
cooperation in groups. We have a strong drive to punish people who behave
unfairly. However, this feeling of unfair treatment only arises when we also feel
that the bad behaviour is deliberate. Tania Singer asked her volunteers to play a
trust game with a number of other players. Some of these players collaborated
while others played unfairly. As you would expect the volunteers punished the
unfair players and rewarded the cooperative players. However, this only
happened for players who were perceived as behaving deliberately, of their own
free will. The volunteers were told that another group of players were not
choosing their responses, but were simply reading them off a sheet of
instructions that had been given to them before the experiment. The volunteers
did not punish or reward these players, even though the monetary gains and
losses were exactly the same as with the group of deliberate players (Singer,
Kiebel, Winston, Dolan, & Frith, 2004).
I conclude from these results that our conscious sense that we, and others, are
responsible for our actions has a critical role in fostering cooperation. This
ensures that group interactions are of benefit to everyone.
The Feeling of Regret: The other aspect of our conscious experience of being in
control of our actions is the feeling that we could have chosen a different action.
It is this feeling that leads to regret.
We must distinguish between regret and disappointment. Both feelings occur
when we discover the outcome of our actions. We are disappointed if the
outcome is worse than we expected. We experience regret if we discover that the
outcome would have been better if only we had chosen a different action. This
feeling of regret depends upon our belief that we could have chosen this other
action. Since we only experience regret after we have performed an action, it
might seem that we have another example where our conscious experience of
action has no role in determining what actions we perform. However, we can
also experience anticipated regret; that is, we can anticipate the regret we might
feel if it turns out that we should have chosen another action.
We can study the effects of anticipated regret on our decisions experimentally.
We only experience regret when we discover that the action we rejected has
been more successful. An experiment can be set up so that, in one condition, we
know that we will find out what would have happened. Imagine an auction
scenario in which participants know they will be told what the winning bid was
when they lose. In this case, if the winning bid is very close to their own, they
may regret not having made a higher bid. This is contrasted with an auction
in which losers are not told the winning bid. The anticipated regret that is
possible in the first scenario causes participants to bid higher (Filiz-Ozbay &
Ozbay, 2007).
Explaining our Actions: Our experience of choosing one action rather than
another carries with it the belief that we can explain and justify why we made
that particular choice. By thinking about the choices we could have made when
the action we chose turns out to be bad, we might be able to avoid the mistake in
the future. However, we can also explain to ourselves and to others why we
made a particular choice and attempt to justify that decision.
There are, however, problems with these processes through which we try to
explain and justify our actions after the event. First, our attempts to understand
and explain our actions depend upon introspection, and introspection about our
decisions is very fragile. This fragility is demonstrated most elegantly in an
experiment by Petter Johansson and colleagues (2005). The participants in this
experiment were shown a sequence of pairs of faces. On each trial they were
shown a pair and had to choose which one they preferred. The experimenter
then handed them the face they had just chosen and asked them to give their
reasons for making that choice. However, on a small number of trials, by using
sleight of hand, the experimenter gave the participant the other face, the one
they had not chosen. The majority of volunteers did not notice this deception and
went on to justify a choice they had not made.
This observation suggests that our experience of making a choice, just like our
experience of performing an act, is not the result of a direct perception, but is an
inference made on the basis of limited evidence. In this sense, understanding our
own motivations may be as indirect as our understanding of the motivations of
others.
However, this is not how it feels to us. We have the impression that we have
good insight into the basis of our own decisions and motivations. Since we have
no direct knowledge of other people's motivations, we tend to judge them on the
basis of their behaviour, rather than their intentions. This illusion can lead to the
perception that we do not suffer from various biases, while other people do
suffer from these biases. We judge our own actions on the basis of introspection.
However, since our biases are largely unconscious, introspection falsely suggests
to us that we are unbiased. In contrast, we judge other people's biases on the
basis of their behaviour, where indeed their biases are directly revealed (Pronin,
Berger, & Molouki, 2007).
So what is the value of this fragile ability to reflect upon our decisions after we
have made them? I suggest that such reflection enables us to discuss our
decisions with others and thereby to get better at making inferences about our
own motivations. We can learn about our own biases by discussing our
behaviour with others. But without the ability to introspect, however fragile this
ability may be, such discussion would not be possible. But does increased
understanding through communication give us better control over our actions
and choices? I have some ideas on this point, but I must emphasise that they are
extremely speculative. It remains remarkably difficult to find empirical evidence
that tells us to what extent the conscious experience of action and decision has
any value.
I think there are at least two ways in which reflecting on our actions might give
us some control over our decisions. When confronted with a number of different
possible actions, our choice is determined by our brain’s estimation of the likely
value associated with the outcome of each action. Given this set of values our
choice is completely determined. We choose the action with the highest value.
However, there are ways in which we could alter this decision process. For
example, we might be able to change the values associated with each possible
course of action. This might be how anticipated regret alters our decisions. The
anticipated regret associated with just missing an item in an auction makes us
give a higher value to choosing a higher bid. However, what I am most interested
in is the possibility that, through communication, we might be able to alter the
set of actions from which we are choosing. For example, by discussing with
others the various possible actions that we were considering, we might discover
that there were additional possibilities that we had not thought of.
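The decision process described here can be sketched as a toy model (the action names and values below are invented for illustration, not taken from any experiment): the brain picks the action with the highest estimated value, anticipated regret can adjust those values, and discussion with others can enlarge the set of actions itself.

```python
# Toy sketch of value-based action selection (all names and values invented).

def choose(action_values):
    """Pick the action with the highest estimated outcome value (argmax)."""
    return max(action_values, key=action_values.get)

values = {"bid_low": 0.6, "bid_high": 0.5}
assert choose(values) == "bid_low"

# Anticipated regret can alter the values attached to existing actions:
# e.g. the prospect of just missing the item raises the value of bidding higher.
values["bid_high"] += 0.3
assert choose(values) == "bid_high"

# Communication can change the option set itself: discussion reveals an
# action we had not considered at all.
values["wait_for_next_auction"] = 1.0
assert choose(values) == "wait_for_next_auction"
```

Given a fixed set of values, the choice is fully determined; the two levers illustrated are changing the values and changing the set over which the argmax runs.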
The dark side of social behaviour
However, there may be another value for the conscious control of action. This
relates to the dark side of social behaviour.
In the first part of this essay I presented the evidence that our automatic,
primitive responses, far from being selfish, frequently cause us to work for the
benefit of the group. In the long term the survival of the group benefits all the
individuals within it. However, this is only part of the story. It does not take into
account that there are other groups. There is our group, the in-group, the nice
people and there are other groups, the out-groups, the strange people. There is
evidence that the altruistic behaviour that we show towards our own group can
only arise through competition with other groups.
Altruism has long been a problem for evolutionary theory, but is now largely
explained by models that demonstrate how altruistic behaviour is sustained by
reciprocal, long term or indirect benefit to individuals or their offspring (Sachs,
Mueller, Wilcox, & Bull, 2004; Trivers, 1971). Altruistic people sacrifice
themselves for the good of others. At first sight, then, it would seem that
altruistic people should be less likely to survive than others and so this altruistic
trait should die out. And yet many people clearly are altruistic. One solution to
the problem is to consider the effect of competition between groups. For
example, Sober & Wilson (1998) suggest that, within a group, the most selfish
individual will do best, while, when groups compete, it is the group with the
fewest selfish individuals that does best. Thus group competition will favour the
survival of altruism within the group. Individuals in groups where there is
altruistic behaviour are more likely to survive than those in groups full of
selfish individuals. In a sense cooperative behaviour is also selfish since it leads
directly to individual advantage. Thus, what has come to be called altruistic
behaviour is only truly altruistic in the short term (West, Griffin, & Gardner,
2007).
The dark side of this account of altruism is that it only applies within groups and
not between groups. Evidence is beginning to appear that this is indeed the case.
Ernst Fehr and colleagues studied the development of sharing. Children were
asked to choose between a selfish and an altruistic option: for example, keep 2
euros for yourself and give none to the other child, or keep 1 for yourself and give
one to the other. Between the ages of 4 and 8 years children become more likely
to share with members of their in-group, but less likely to share with members of
an out-group. Sharing and discrimination seemed to develop hand-in-hand.
Evidence is also appearing to suggest that automatic contagion and empathy
do not occur for out-group members. Xiaojing Xu and colleagues looked at
empathy for pain by scanning participants while they watched people apparently
having needles pushed into their faces. When Caucasian participants saw
needles penetrating the faces of fellow Caucasians, activity was elicited in brain
regions associated with pain (anterior insula and anterior cingulate cortex), a
sign of empathy. However, no such empathic responses were elicited when they
saw needles penetrating Chinese faces. The reverse happened with Chinese
participants who only showed empathy for fellow Chinese (Xu, Zuo, Wang, &
Han, 2009).
Studies of race prejudice in New Yorkers reveal a similar pattern. When
participants were shown the faces of unknown black people, activity was
elicited in the amygdala. Such
activity is also seen when people see something they are afraid of (LeDoux,
2000) or someone who is untrustworthy (Winston, Strange, O'Doherty, & Dolan,
2002). This activity is seen even in people for whom there is no evidence that
they have any overt race prejudice. However, the activity in the amygdala elicited
by black faces does correlate with a behavioural measure of unconscious race
prejudice (Phelps, et al., 2000).
Overcoming our prejudices
We probably all have a degree of unconscious prejudice against various out-groups. However, if we have time to think about it, these unconscious prejudices
can be overcome. And perhaps here is where we can find another role for
consciousness.
In another study of race prejudice the experimenters looked at the effect of
presenting the black faces for a very short time (30 msec) or a much longer time
(525 msec). With the short presentation the black faces elicited more activity in
the amygdala than white faces. This is evidence of the unconscious prejudice that
I have already mentioned. But with the long presentations this difference
between black and white faces was much reduced. This effect was accompanied
by increased activity in other regions including the dorsolateral prefrontal
cortex. The authors interpret this as evidence that the unconscious prejudice is
over-ridden by high level reasoning processes that have their origin in prefrontal
cortex (Cunningham, et al., 2004). There is indeed much evidence in favour of the
idea that a major role for prefrontal cortex is to over-ride habitual responses. For
example, after damage to prefrontal cortex, patients can become slaves to their
immediate environment. In one case, when being shown round someone else’s
house and seeing a room in which there was a bed with the cover turned down,
one patient took off his clothes and got into bed (Lhermitte, 1983).
We can use deliberate reasoning to overcome our unconscious biases and this
ability is probably enhanced if we become aware of, or at least if we know about,
our unconscious biases. However, this kind of control is always slow and
effortful and, with practice, will tend to be taken over by an unconscious,
automatic process.
Rational behaviour need not be good behaviour
The characterisation of the thoughtful and deliberate reflection on our behaviour
that depends upon the integrity of prefrontal cortex as ‘rational’ is probably
misleading. It would be more accurate to say that we can use this high level
ability to justify our behaviour, to rationalise. And in many circumstances we are
just as skilled at justifying selfish behaviour as we are at justifying altruistic
behaviour.
An experimental demonstration of this concerns moral hypocrisy (Valdesolo &
DeSteno, 2008). In this experiment participants were told that they had to
perform two tasks in conjunction with an unseen partner. The participant could
choose which task he would perform and typically chose the easier of the two
tasks. This is clearly selfish behaviour. Participants were also asked to rate how
bad their own behaviour was and also how they would rate the same behaviour
if someone else did it. They rated their own behaviour as less bad than someone
else doing the same thing. This is moral hypocrisy.
We can now ask the question, is this moral hypocrisy the result of an automatic,
self-serving bias, or a result of a deliberate process of self-justification? The
experimenters addressed this question by applying a cognitive load. The
participants had to remember a string of random numbers at the same time that
they rated how bad the behaviour was. It is well established that a cognitive load
of this kind engages conscious processes of deliberation and therefore prevents
these processes being used for other purposes such as, in this case, self-justification. The results were clear. When a cognitive load was applied, the moral
hypocrisy disappeared. The participants rated their own self-serving behaviour
and that of others as equally bad.
This is an experimental demonstration that ‘rational’, deliberate thought can be
used to justify selfish behaviour as well as unselfish behaviour. But of course we
have all had experiences in real life when we have heard such self-serving
justifications (although usually from other people, rather than ourselves). Thus,
both our automatic, unconscious processes and our deliberate, conscious
processes will support immoral as well as moral behaviour. We are
automatically prosocial with the in-group, but antisocial with the out-group. We
can use our powers of reason to justify good behaviour and we can also use them
to justify bad behaviour.
So what is the value of our vivid conscious experience of being able to
choose one course of action rather than another?
Although we often justify our behaviour to ourselves, a very important use of
this ability is to justify our behaviour to others. In other words we can discuss
with other people whether the outcome of our action was intended or
unintended. We can explain our motives for doing something and we can explain
why we chose one option rather than another. Such discussions are valuable for
at least three reasons. First, through such discussion we can learn to introspect
better about our intentions and perhaps acquire some conscious knowledge
about our unconscious biases. Second, we can develop commonly agreed
strategies for making decisions and commonly agreed accounts as to why one
kind of behaviour is morally better than another. So I am suggesting that a major
value of our vivid experience of being in control of our actions is that we can
share these experiences with others. For this purpose it is hardly relevant
whether the experience is illusory or not. The important point is that the
experience enables cultural norms about decision making to develop and spread
through society.
Discussing how decisions are made can affect behaviour
Our behaviour can be affected by our beliefs about how we make decisions. This
has been demonstrated by giving students texts to read about the nature of free
will (Vohs & Schooler, 2008). One passage came from The Astonishing Hypothesis
by Francis Crick. This passage includes the statement ‘most rational people now
recognise that free will is an illusion.’ Another passage came from the same book,
but was about the nature of consciousness, rather than free will. After reading
one of these passages the students were given a computerised arithmetic test on
which it was rather easy to cheat without, apparently, any possibility of being
detected. The students were told there was a fault in the testing program such
that, if they pressed the space bar the correct answer would be revealed. The
students were asked not to do this. The results of this and a series of similar
experiments were clear. Those students who had been told that free will was an
illusion were more likely to cheat.
Why should this be? Further research is required to answer this question, but my
speculation is as follows. Our beliefs about free will are essentially beliefs about
how we make decisions, in particular why we choose one option rather than
another. The students had two options in the arithmetic test: to work out the
answer or to cheat. Cheating is selfish because it requires less effort and gives an
advantage over others. It is also wrong because it goes against the explicit
instructions given by the experimenter. As we have seen in my earlier discussion,
there is a widespread belief that we are all fundamentally selfish and have to
overcome this through deliberate effort: the exertion of free will. Furthermore, in
the absence of free will, we are no longer responsible for our actions and
therefore escape blame. So, if I have no free will, I will cheat. I can’t control my
selfish impulses and I can’t be blamed either. But what if people believed that
they were not creatures fundamentally driven by short-term gain, but believed
rather that they were strongly embedded within a complex, interactive and
supportive group? Such people, on being told that they had no free will, would
not be more likely to cheat. This experiment demonstrates how our beliefs about
how we make decisions, created in this case through reading the ideas of others,
can directly affect our decision-making process.
Optimising group behaviour
However, there is a third and, I believe, even more important advantage that can
be gained from sharing our experiences. This relates to the gains that we can
make from acting with others, rather than on our own. There are many situations
in which a group can achieve outcomes that could not be achieved by one person
acting alone. Obvious examples would include carrying a heavy object. In these
situations the outcome achieved by the group is simply the sum of the outcomes
possible for each individual in the group.
I believe, however, that our ability to share experiences allows groups to achieve
more than the sum of the individuals. This is because what we share by this
means is knowledge and information, rather than a physical entity like force. My
research group has been exploring this idea, but in the realm of perception
rather than action. In a very simple perceptual task, you might be asked to detect
whether a signal was present or not; for example, is there an incoming aeroplane
on the radar screen? Your decision to say yes or no will depend upon two factors:
the change in activity in your visual cortex and your confidence about whether
this change in activity was caused by a real signal (the plane) or was just a
random fluctuation in brain activity. You clearly cannot share with another
person what the change in brain activity was, but, through introspection, you can
tell her about your confidence in what you saw. This form of confidence is an
example of meta-cognition: knowing about what you know. We have been
examining what happens when two people are shown the same signal and then
make a joint decision about that signal. Obviously a problem only arises when
they disagree about the signal. In this case we ask them to discuss the problem
and come up with a joint decision.
It is likely, of course, that one of the two participants will be slightly better at the
task than the other, through better eyesight or more practice, for example. In this
case we would get better performance from the pair if we simply went with the
decision of the better participant and ignored the other. However, we found that,
for most pairs, the joint decisions they made generated better performance even
than the better member of the pair.
How is this possible? In addition to differences in skill between participants,
there will also be fluctuations in performance within participants over time.
Inevitably in this rather boring task our attention will wander so that on some
trials we will have a much better idea of what happened than on others. This
means that, on some trials, participant A will have seen the signal better, while,
on other trials, participant B will have seen the signal better. Through discussing
what they have just seen on each trial, the two participants can decide who had a
better view of the signal on each trial. This requires that they can tell each other
how confident they were in what they saw. They must also arrive at a common
metric for their confidence, so that their confidences can properly be compared.
By choosing the response of the person with the better view on each trial they
can effectively eliminate most of the ‘bad’ trials for each individual (except those
rare occasions when both participants had a ‘bad’ trial at the same time) and
produce joint performance that is better than the better participant acting on his
own.
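This confidence-sharing rule can be illustrated with a minimal simulation (a sketch under invented parameters, not the actual paradigm or data of the study): each observer gets noisy evidence about a yes/no signal, reports a decision plus a confidence on a common metric, and the pair goes with whoever is more confident on each trial.

```python
# Toy simulation of joint decisions via shared confidence.
# Noise levels and trial counts are invented for illustration.
import random

random.seed(1)

def observe(signal, noise_sd):
    """Noisy internal evidence for a signal of +1 or -1."""
    evidence = signal + random.gauss(0.0, noise_sd)
    decision = 1 if evidence > 0 else -1
    confidence = abs(evidence)  # introspective confidence, on a common metric
    return decision, confidence

def accuracy(decisions, signals):
    return sum(d == s for d, s in zip(decisions, signals)) / len(signals)

signals = [random.choice([1, -1]) for _ in range(10000)]
a, b, joint = [], [], []
for s in signals:
    da, ca = observe(s, noise_sd=1.0)  # participant A (slightly better)
    db, cb = observe(s, noise_sd=1.2)  # participant B
    a.append(da)
    b.append(db)
    joint.append(da if ca >= cb else db)  # defer to the more confident observer

# The pair's joint decisions beat even the better individual.
assert accuracy(b, signals) < accuracy(a, signals) < accuracy(joint, signals)
```

The joint advantage arises because each observer's confidence tracks how good their evidence was on that particular trial, so deferring to the more confident partner filters out most of each individual's 'bad' trials.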
I believe that, in this as yet unpublished experiment (Bahrami et al.), we have
found a situation where introspection, that is our conscious experience, can
generate better performance. This is made possible because our participants can
share their introspections about their decision-making processes.
Conclusions
Unconscious decision processes.
The survival of all creatures depends upon making the right choices, so it is no
surprise that human brains are exquisitely well designed for making decisions.
Our brain builds up a store of possible actions and the most likely outcomes
associated with these actions. In any given situation our brain will weigh up
these different possibilities and choose the action that maximises the value of the
anticipated outcome. All this happens outside our conscious awareness.
But what is it that our brain is maximising when it makes these decisions?
Essentially it is maximising our access to resources (gains) and minimising the
cost incurred (losses) when we perform the chosen action. For example, the
physical effort required to perform the action will be taken into account and
minimised. This means that, if we are tired, the outcome will have to have a
higher value before we choose to perform the associated action.
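As a hedged sketch of this cost-benefit weighting (all numbers and the weighting scheme are invented for illustration), the effect of tiredness can be modelled as increasing the weight given to effort costs:

```python
# Toy sketch: anticipated gain minus effort cost, where effort
# weighs more heavily when we are tired. Values are invented.

def net_value(gain, effort, tiredness):
    """Net value of an action: gain minus tiredness-weighted effort."""
    return gain - effort * tiredness

# When rested, the outcome outweighs the effort and the action is worth taking;
# when tired, the same outcome no longer does.
rested = net_value(gain=5.0, effort=2.0, tiredness=1.0)
tired = net_value(gain=5.0, effort=2.0, tiredness=3.0)
assert rested > 0 > tired
```

On this sketch, a tired decision-maker only acts when the gain is large enough to compensate for the inflated effort cost, which is the point made in the text.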
What our brain chooses to maximise has a major role in social behaviour. Should
I maximise the gains for myself now or the gains for the group? If I maximise the
gains for the group it may be to my advantage in the future. As we have seen, in
many situations, it is the gains for the group (or at least for the in-group) that are
maximised, rather than the gains for the self. The mere presence of others
primes prosocial, group oriented choices. Once again all this happens outside
conscious awareness.
Conscious decision processes
The special feature of these unconscious decision making processes is that they
can handle many different factors at the same time and optimise the way in
which these factors are weighted against one another. As soon as we start
deliberately thinking about the decision we are trying to make we alter this
exquisite adjustment of the many factors. Remember what happens when we
think about how to ride a bicycle. This is because we can only consciously
consider a small number of factors at one time. We give these factors much more
weight than the majority that are outside consciousness. This change in the
weighting of the factors will often lead to a worse decision.
But in spite of this exquisite weighting of factors there is an inevitable
disadvantage associated with our unconscious decision making processes. The
weights are based on our past history of personal experiences. Confronted with
an entirely novel situation we have no basis for making our choices. Do we have
no option but to learn a new set of decision weights by trial and error? Not
always, because we have the, possibly unique, human ability to benefit from the
experience of others. This is where an inestimable value of consciousness can be
found.
The value of conscious experience
We have a vivid conscious awareness of making decisions and controlling our
actions. And yet, as we have seen, there is good evidence that this awareness
occurs after the decision has been made and the action performed. This result
implies that our consciousness has no role in the immediate choice and initiation
of actions. However, this delay in the awareness of our actions is vital for linking
our actions with their consequences. It is only when we know about the outcome
of an action that we can recognise whether this outcome was what we intended
or something accidental and unexpected. It is only after we know about the
outcome of our action that we can recognise whether we might have done better
to select another action. But, even though our ability to introspect upon our
actions and decisions is limited, this ability is sufficient to enable us to develop
explicit theories of how good decisions can be made.
And, very importantly, we can discuss our conscious experience of decision
making with others. This has many advantages. Our introspection about our
decision-making processes is fragile and insecure. Through discussions with
others we can improve our ability to introspect about our actions. We learn
about ourselves by talking to others.
An even greater advantage, however, is that we can learn from other people who
have had different experiences how to make decisions in novel situations. We
can be told what factors to take into account and how to weight them. This
knowledge can never lead to the same exquisitely tuned system as our
unconscious decision processes, but it provides a vital stopgap while our
unconscious processes develop through our personal experience of the new
situation.
Consciousness is for other people
I conclude, then, that the vivid experience of controlling our actions and making
our decisions, which is such a salient feature of conscious experience, has limited
benefit for each of us as an individual. Enormous benefit derives from this
experience, but it is not for ourselves. It is for others.
Acknowledgements: I am grateful to Uta Frith & Rosalind Ridley for their
comments on earlier versions of this essay.
References
Avenanti, A., Paluello, I. M., Bufalari, I., & Aglioti, S. M. (2006). Stimulus-driven
modulation of motor-evoked potentials during observation of others'
pain. Neuroimage, 32(1), 316-324.
Bahrami, B., Olsen, K., Latham, P., Roepstorff, A., Rees, G., & Frith, C. D. Optimally
interacting minds. in submission.
Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior:
direct effects of trait construct and stereotype-activation on action. J Pers
Soc Psychol, 71(2), 230-244.
Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of being watched enhance
cooperation in a real-world setting. Biol Lett, 2(3), 412-414.
Bayliss, A. P., & Tipper, S. P. (2006). Predictive gaze cues and personality
judgments: Should eye trust you? Psychol Sci, 17(6), 514-520.
Behne, T., Carpenter, M., Call, J., & Tomasello, M. (2005). Unwilling versus unable:
infants' understanding of intentional action. Dev Psychol, 41(2), 328-337.
Blakemore, S. J., Bristow, D., Bird, G., Frith, C., & Ward, J. (2005). Somatosensory
activations during the observation of touch and a case of vision-touch
synaesthesia. Brain, 128(Pt 7), 1571-1583.
Camerer, C., & Thaler, R. H. (1995). Ultimatums, Dictators and manners. Journal
of Economic Perspectives, 9(2), 209-219.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: the perception-behavior link and social interaction. J Pers Soc Psychol, 76(6), 893-910.
Clark, H. H., & Krych, M. A. (2004). Speaking while monitoring addressees for
understanding. Journal of Memory and Language, 50(1), 62-81.
Cunningham, W. A., Johnson, M. K., Raye, C. L., Chris Gatenby, J., Gore, J. C., &
Banaji, M. R. (2004). Separable neural components in the processing of
black and white faces. Psychol Sci, 15(12), 806-813.
Damasio, A. R. (1994). Descartes' error: emotion, reason, and the human brain.
New York: G.P.Putnam's Sons.
Dijksterhuis, A., Bos, M. W., Nordgren, L. F., & van Baaren, R. B. (2006). On
making the right choice: the deliberation-without-attention effect. Science,
311(5763), 1005-1007.
Dimberg, U., Thunberg, M., & Elmehed, K. (2000). Unconscious facial reactions to
emotional facial expressions. Psychol Sci, 11(1), 86-89.
Fehr, E., & Gachter, S. (2002). Altruistic punishment in humans. Nature,
415(6868), 137-140.
Filiz-Ozbay, E., & Ozbay, E. Y. (2007). Auctions with anticipated regret: Theory
and experiment. [Article]. American Economic Review, 97(4), 1407-1418.
Frith, C. D. (2007). Making up the Mind; How the Brain Creates our Mental World.
Oxford: Blackwell.
Gallagher, S. (2006). Where's the action?: Epiphenomenalism and the problem of
free will. In W. Banks, S. Pockett & S. Gallagher (Eds.), Does Consciousness
Cause Behavior? An Investigation of the Nature of Intuition (pp. 109-124).
Cambridge, MA: MIT Press.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001).
An fMRI investigation of emotional engagement in moral judgment.
Science, 293(5537), 2105-2108.
Gregory, R. L. (1997). Knowledge in perception and illusion. Philos Trans R Soc
Lond B Biol Sci, 352(1358), 1121-1127.
Haggard, P., Clark, S., & Kalogeras, J. (2002). Voluntary action and conscious
awareness. Nature Neuroscience, 5(4), 382-385.
Hobart, R. E. (1998). Free Will as Involving Determination and Inconceivable
without It. In P. Van Inwagen & D. Zimmerman (Eds.), Metaphysics: The
Big Questions (pp. 343-355). Oxford: Blackwell.
Hubel, D. H., & Wiesel, T. N. (1977). Ferrier Lecture: Functional Architecture of
Macaque Monkey Visual Cortex. Proceedings of the Royal Society of
London. Series B. Biological Sciences, 198(1130), 1-59.
LeDoux, J. E. (2000). Emotion circuits in the brain. Annu Rev Neurosci, 23, 155-184.
Lhermitte, F. (1983). 'Utilization behaviour' and its relation to lesions of the
frontal lobes. Brain, 106 (Pt 2), 237-255.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious
intention to act in relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely voluntary act. Brain, 106
(Pt 3), 623-642.
Liepelt, R., Ullsperger, M., Obst, K., Spengler, S., von Cramon, D. Y., & Brass, M.
(2009). Contextual movement constraints of others modulate motor
preparation in the observer. Neuropsychologia, 47(1), 268-275.
Phelps, E. A., O'Connor, K. J., Cunningham, W. A., Funayama, E. S., Gatenby, J. C.,
Gore, J. C., et al. (2000). Performance on indirect measures of race
evaluation predicts amygdala activation. J Cogn Neurosci, 12(5), 729-738.
Pickering, M. J., & Garrod, S. (2004). Toward a mechanistic psychology of
dialogue. Behav Brain Sci, 27(2), 169-190; discussion 190-226.
Pronin, E., Berger, J., & Molouki, S. (2007). Alone in a crowd of sheep: asymmetric
perceptions of conformity and their roots in an introspection illusion. J
Pers Soc Psychol, 92(4), 585-595.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annu Rev
Neurosci, 27, 169-192.
Sachs, J. L., Mueller, U. G., Wilcox, T. P., & Bull, J. J. (2004). The evolution of
cooperation. Q Rev Biol, 79(2), 135-160.
Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E., & Cohen, J. D. (2003). The
neural basis of economic decision-making in the Ultimatum Game.
Science, 300(5626), 1755-1758.
Singer, T., Kiebel, S. J., Winston, J. S., Dolan, R. J., & Frith, C. D. (2004). Brain
responses to the acquired moral status of faces. Neuron, 41(4), 653-662.
Singer, T., Seymour, B., O'Doherty, J., Kaube, H., Dolan, R. J., & Frith, C. D. (2004).
Empathy for pain involves the affective but not sensory components of
pain. Science, 303(5661), 1157-1162.
Sober, E., & Wilson, D. S. (1998). Unto Others. Cambridge, MA: Harvard University
Press.
Soon, C. S., Brass, M., Heinze, H. J., & Haynes, J. D. (2008). Unconscious
determinants of free decisions in the human brain. Nat Neurosci, 11(5),
543-545.
Susskind, J. M., Lee, D. H., Cusi, A., Feiman, R., Grabski, W., & Anderson, A. K.
(2008). Expressing fear enhances sensory acquisition. Nat Neurosci,
11(7), 843-850.
Templeton, J. J., & Giraldeau, L.-A. (1995). Patch assessment in foraging flocks of
European starlings: evidence for the use of public information. Behav.
Ecol., 6(1), 65-72.
Trivers, R. L. (1971). The Evolution of Reciprocal Altruism. The Quarterly Review
of Biology, 46(1), 35-57.
Valdesolo, P., & DeSteno, D. (2008). The Duality of Virtue: Deconstructing the
Moral Hypocrite. Journal of Experimental Social Psychology, 44(5), 1334-1338.
van Baaren, R. B., Holland, R. W., Kawakami, K., & van Knippenberg, A. (2004).
Mimicry and prosocial behavior. Psychol Sci, 15(1), 71-74.
van Schie, H. T., Mars, R. B., Coles, M. G., & Bekkering, H. (2004). Modulation of
activity in medial frontal and motor cortices during error observation.
Nat Neurosci, 7(5), 549-554.
Vohs, K. D., & Schooler, J. W. (2008). The value of believing in free will:
encouraging a belief in determinism increases cheating. Psychol Sci, 19(1),
49-54.
Wegner, D. M. (2003). The Illusion of Conscious Will. Cambridge, Mass: The MIT
Press.
Wegner, D. M., Fuller, V. A., & Sparrow, B. (2003). Clever hands: uncontrolled
intelligence in facilitated communication. J Pers Soc Psychol, 85(1), 5-19.
Wegner, D. M., & Wheatley, T. (1999). Apparent mental causation - Sources of the
experience of will. American Psychologist, 54(7), 480-492.
West, S. A., Griffin, A. S., & Gardner, A. (2007). Social semantics: altruism,
cooperation, mutualism, strong reciprocity and group selection. J Evol
Biol, 20(2), 415-432.
Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., & Jenike, M. A.
(1998). Masked presentations of emotional facial expressions modulate
amygdala activity without explicit knowledge. J Neurosci, 18(1), 411-418.
Winston, J. S., Strange, B. A., O'Doherty, J., & Dolan, R. J. (2002). Automatic and
intentional brain responses during evaluation of trustworthiness of faces.
Nat Neurosci, 5(3), 277-283.
Xu, X., Zuo, X., Wang, X., & Han, S. (2009). Do you feel my pain? Racial group
membership modulates empathic neural responses. J Neurosci, 29(26),
8525-8529.
Neural Hermeneutics
Entry: Encyclopedia of Philosophy and the Social Sciences, SAGE publications
The term hermeneutics originally referred to the art of interpreting complex
written texts, in particular holy scriptures, which demand considerable skill to
reveal their meaning. Hermeneutics is especially relevant to the problem of
translation, which, of necessity, requires interpretation of the original text. The
major problem for hermeneutics concerns how to develop criteria for deciding
when an interpretation is correct.
Modern practitioners of hermeneutics recognised that this problem applies,
more generally, to any situation in which messages have to be interpreted.
Schleiermacher, among others, suggested that the problem for the interpreter is
to reveal the message intended by the speaker, i.e. what the author had in mind
when she wrote the text (mens auctoris). However, this criterion, that the
interpretation matches the intention, is problematic since the author might be
dead, or otherwise unavailable. So all we typically have is the text and some
knowledge of the context in which it was written.
This difficulty is not confined to the interpretation of ancient texts. Even if I am
talking with you face-to-face, I cannot access your mind to check whether my
interpretation of what you have just said corresponds to what you intended me
to understand. I can create a coherent story, but I can never get independent
evidence about the correctness of my interpretations. Nevertheless, in spite of
this apparently insurmountable difficulty, most of the time people seem to be
able to understand each other very adequately. How is this achieved?
Neural Hermeneutics is concerned with the mechanisms, instantiated in the
brain, through which people are able to understand one another. Such
mechanisms, although they might now be specialised for understanding, will
have evolved from earlier mechanisms with other purposes. Two such extant
mechanisms seem relevant to the problem of understanding. The first is
predictive coding (or Bayesian inference) which explains our perception of the
physical world. The second is simulation and alignment, which aids our
perception of the social world.
We perceive an object, such as a tree, on the basis of signals from our senses.
This process cannot be a simple one-way mapping, since no sensation can
unambiguously indicate the presence of a tree. Rather, a computational loop is
required, circling from
sensation to inference and back again. Our brain infers the most likely cause of
the sensations and then tests this inference by collecting more sensory evidence
(e.g. by moving the eyes, or touching the object). If the evidence is not what was
expected on the basis of the inferred cause of the sensations (a prediction error)
then the inference has to be updated. Only when the fit between sensations and
inferred cause is sufficiently good is the object unambiguously perceived.
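The inference loop just described can be sketched as a toy computation. Everything below is an illustrative assumption rather than a detail from the text: the inferred cause is reduced to a single number, and each new sensory sample is compared with the prediction that inference generates.

```python
# A minimal sketch of the predictive coding loop described above.
# The scalar "estimate" stands in for the brain's inferred cause;
# each sensation is compared against the prediction it generates.

def perceive(samples, learning_rate=0.5):
    """Iteratively refine an inferred cause from noisy sensory samples."""
    estimate = 0.0  # initial guess about the cause of the sensations
    for sensation in samples:
        prediction_error = sensation - estimate   # evidence vs. expectation
        estimate += learning_rate * prediction_error  # update the inference
    return estimate

# Repeated sampling (e.g. moving the eyes over the object) drives the
# prediction error down until the inferred cause fits the sensations.
readings = [2.1, 1.9, 2.0, 2.2, 1.8, 2.0, 2.1, 1.9]
print(perceive(readings))
```

The loop structure, not the particular update rule, is the point: inference predicts evidence, mismatches (prediction errors) revise the inference, and perception settles only when the two agree.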
In principle the same mechanism can be applied when trying to understand the
mental world of others. The major difference is that, unlike with trees, the
process goes in both directions: while I am trying to understand you, you are
trying to understand me. Here the sensory evidence might be the words I hear
from which I infer the idea you are trying to convey. I can test my inference, not
only by predicting what else you are likely to say, but also by saying something
myself and predicting how you will respond. Meanwhile you will be applying the
same strategy to what I say. When our prediction errors become sufficiently low,
then we have probably understood one another. In this account, the error we are
minimising is not the difference between my idea and your idea, since we have
no direct access to each other’s ideas. Rather, it is the difference between my
idea and my representation of your idea (see figure 1).
Figure 1 about here
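The two-way loop can be sketched in the same toy style. The scalar "ideas", the update rule and the number of exchanges are all illustrative assumptions, not mechanisms given in the text: each partner holds an idea and a running model of the other's idea, and revises that model from the messages received.

```python
# A toy sketch of mutual predictive coding in conversation.
# Each partner reduces the error between what the other says
# and what their model of the other predicted.

def converse(my_idea, your_idea, rounds=20, rate=0.5):
    """Return each partner's final model of the other's idea."""
    my_model_of_you = 0.0   # my representation of your idea
    your_model_of_me = 0.0  # your representation of my idea
    for _ in range(rounds):
        # You speak; I compare what I heard with what I predicted
        # and reduce my prediction error.
        my_model_of_you += rate * (your_idea - my_model_of_you)
        # I speak; you do the same on your side.
        your_model_of_me += rate * (my_idea - your_model_of_me)
    return my_model_of_you, your_model_of_me

mine, yours = converse(my_idea=3.0, your_idea=7.0)
# The quantity each side minimises is the gap between its model and the
# partner's actual idea, accessed only through the exchanged messages.
```

As in the text, neither side ever reads the other's idea directly; understanding is declared when the prediction errors on both sides become sufficiently small.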
One advantage of the formulation in terms of predictive coding is that it elegantly
captures the concept of the Hermeneutic Circle, whereby the whole cannot be
understood without reference to the parts, while, at the same time, the parts
cannot be understood without reference to the whole. In the same way, in the
predictive coding loop, the inferred cause (the idea, the whole) predicts the
evidence, while, at the same time, the evidence (the words, the parts) modifies
the inferred cause.
The predictive coding model outlined here does not explain how the link is made
between the words and the idea, that is, how the initial inference is made. One
possibility is to use simulation: that is, I can predict what words you will use on
the basis of what I myself would say in the same situation. This is by analogy
with motor simulation, in which I predict the movements of others on the basis
of my own motor system, a mechanism for which there is now considerable
evidence. A necessary consequence of the application of simulation to
understanding others is that understanding will be more difficult to achieve if
you are in some way different to the person you are trying to understand. This
problem may be mitigated through alignment (or mirroring).
We all have a strong and automatic tendency to imitate each other: the
chameleon effect. This imitation or mirroring occurs in many domains, including
gestures, emotions and aspects of speech such as intonation, grammar and
vocabulary. Such mirroring makes us more similar to the person we are
interacting with and thereby makes motor and mental simulation more efficient.
Direct evidence that understanding is improved by alignment comes from a
study showing that communication was improved when participants
deliberately imitated the accent of the person they were talking to.
The interactive mechanism I have described above implies that understanding is,
in part, a collaboration between the partners engaged in the discourse. Thus I
learn more about my own ideas through interacting with someone else. This
relates to Schleiermacher’s suggestion that, by taking the context into account,
the translator can achieve a better understanding of the text even than the
original author. By the same argument, a listener can have a better understanding
of the speaker than the speaker herself. This is because the listener will not only
understand the message that the speaker intends to convey, but can also take
account of signs, such as body language, indicating aspects of the message that
the speaker was unaware of. This better understanding will be fed back to the
speaker in the course of the conversation. Thus, through interactions with others
we can achieve a better understanding of ourselves.
Chris Frith
Thomas Schwarz Wentzer
Cross References
Communication studies, Cooperation (Coordination), Empathy, Hermeneutics,
Mirror Neurons and Motor Cognition in Action Explanation.
Further reading
Adank, P., Hagoort, P., & Bekkering, H. (2010). Imitation improves language
comprehension. Psychol Sci, 21(12), 1903-1909.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: the perception-behavior link and social interaction. J Pers Soc Psychol, 76(6), 893-910.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annu Rev
Neurosci, 27, 169-192.
Garrod, S., & Pickering, M. J. (2009). Joint Action, Interactive Alignment, and
Dialog. Topics in Cognitive Science, 1(2), 292-304.
Wilson, M., & Knoblich, G. (2005). The case for motor involvement in perceiving
conspecifics. Psychol Bull, 131(3), 460-473.
Yuille, A., & Kersten, D. (2006). Vision as Bayesian inference: analysis by synthesis?
Trends Cogn Sci, 10(7), 301-308.
Relevant publications by the authors
Frith, C.D. (2002) How can we share experiences? Comment from Chris Frith. Trends
Cogn Sci. 6(9), 374.
Frith, C.D. (2010). What is consciousness for? Pragmatics & Cognition 18(3), 497-551.
Chris Frith – Biosketch
Chris Frith trained as a neuropsychologist and has devoted his career to the
study of the relationship between the mind and the brain. He was a pioneer in
the use of brain imaging to study higher cognitive functions, including
consciousness, agency and theory of mind. He currently works on the neural and
computational basis of human social interactions.