Theory of Mind and Game Theory
Robbert Jansen
314489
Abstract
This paper examines the interaction between two fields of research that are of great
interest to economics: Theory of Mind and Game Theory. It looks at the link between the two,
at the role played by the assumption of rationality and the common knowledge assumption
inherent in it, and at the implications this link has for those assumptions. The results are
mixed: some studies point towards a clear link between Theory of Mind and Game Theory,
while other studies have more difficulty establishing this connection, or do so in a way
different from our expectations. Based on these results a research idea is proposed to shed
more light on our ability to mentalize while participating in strategic interactions.
1. Theory of Mind
Introduction to Theory of Mind
The term ‘Theory of Mind’ was first coined by Premack and Woodruff in 1978 (Premack and
Woodruff 1978). They were interested in answering the question: “Does the chimpanzee have
a theory of mind?” Premack and Woodruff define Theory of Mind (ToM) rather broadly as the
ability of an individual to impute mental states to both himself and to others. This paper
inspired a vast amount of research on the concept of ToM, sometimes also called mentalizing,
although later research often uses a slightly different, more narrow, definition of ToM.
This paper will provide an overview of the research that has been done on mentalizing since
Premack and Woodruffs seminal paper in 1978 and will look at its interaction with a field of
economic research called Game Theory.
First and foremost, let us try to answer the question posed by the authors of the original
paper: does the chimpanzee have a theory of mind? It seems the chimpanzee does, but only to
a certain extent. After an extensive learning period chimpanzees are able to understand
actions and intentions, as shown by Premack and Woodruff and by other research since
(Yamakoshi and Matsuzawa 2000) (Warneken and Tomasello 2006). However, a full grasp of
a system of beliefs and desires, like that in humans, has not yet been demonstrated in apes.
Additionally, chimpanzees have not been able to pass the false belief test, which we will
often see used as a test for ToM in the remainder of this paper. Concluding, the chimpanzee
does have a theory of mind in a very rudimentary form: it is able to impute some mental
states to others, in line with the definition of ToM used by Premack and Woodruff. However,
the great apes would fail to meet the stricter criteria used later in this paper, and humans
seem to be the only species that naturally learns this ability (Wimmer and Perner 1983). So
the question ultimately becomes a matter of semantics. Since the presence of mentalizing
ability in apes is not the main focus of this paper I will not go into the subject any deeper,
and I refer to (Call and Tomasello 2008) for an overview of research done over the past 30
years in an attempt to answer this question.
In their 1983 paper Wimmer and Perner provide the next important step in research in the
area of Theory of Mind (Wimmer and Perner 1983). Rather than using the broad definition
proposed by Premack and Woodruff, they opt for a more narrow definition of ToM borrowed
from Pylyshyn (Pylyshyn 1978). He describes it in the following way: someone who has a
theory of mind does not only recognize a certain state of the world and stand in a certain
relationship towards this state of the world; these relationships are also represented
explicitly. This full grasp of multiple states of the world is often referred to as a
meta-representation. This view of Theory of Mind as a meta-representation is important since
it led the authors to an important insight: that of the more difficult meta-representational
case where there is a difference between one’s own view of reality and that of another agent.
In this case the representation of the other agent’s view of the world is more complicated,
since it no longer corresponds to the person’s own view of reality. This so-called difference
of propositional content had, in a slightly different context, been created by Chandler and
Greenspan, and by Flavell, through a late-arriving bystander (Chandler and Greenspan 1972)
(Flavell et al. 1968). Here the difference in propositional content was created by the
observation that an agent arrived late to the scene, and had therefore missed a crucial piece
of information. Additionally, a difference of meta-representation had previously been created
by Marvin and by Mossler through the use of sensory-specific information (Marvin et al.
1976) (Mossler et al. 1976). Here the difference of knowledge was created by excluding an
agent from visual and auditory information respectively. All four of these studies focused on
perspective taking rather than the representation of mental states, an important distinction
from the intent of Wimmer and Perner.
The False-Belief Task
Based on the meta-representation definition of Pylyshyn, the four papers on perspective
taking and Dennett’s proposition for a test of false belief (Dennett 1978), Wimmer and Perner
devised their test for Theory of Mind. Because the 1983 paper has proven to be a cornerstone
of research on ToM, and because Wimmer and Perner’s method has been much reused and
expanded upon, I will give a quick overview of their method here.
In experiment 1 the authors use a story about a boy (Maxi) who places a chocolate in a box;
subsequently the chocolate gets moved to another box without the boy’s knowledge. This
creates a false belief situation where the child the story is told to (the test subject) knows
the chocolate is in location x, but this subject should know Maxi thinks it is in location y.
Subjects are tested on their beliefs about the thoughts and intentions of the boy. Subjects
will only succeed when they can correctly represent the mind (and false beliefs) of Maxi,
instead of their own correct beliefs. The test was presented to children aged 4-5, 6-7 and
8-9 in a way that made sure (and was checked) they understood it. It was found that most 4-5
year olds failed to form a correct meta-representation, while most children of 6 and older
succeeded.
In a series of follow-up experiments the authors tested possible explanations for the poor
performance of younger children. They found that the complexity of the task itself was not to
blame. Additionally, recognizing deceptive plans proved to be much easier for young children
than constructing deceptive plans (this is in line with earlier research by Schultz and
Cloghesy 1981). Encouraging the subjects to think carefully about the task only helped
older children. And finally a different version of the task was performed, where the object
was not placed elsewhere, but instead was removed from the scene entirely. This new version
of the task did prove to be helpful for both 4-5 and 5-6 year olds, but 3-4 year olds did not
perform any better. Similarly, Marvin et al. and Mossler et al. found children between the
ages of 3 and 4 did not perform any better in situations where they were aware of another
agent’s lack of knowledge, instead of being aware of this agent’s wrong knowledge (Marvin et
al. 1976) (Mossler et al. 1976). This might be of importance for a discussion further on in
this paper about the origin of the Theory of Mind ‘skill’. Wimmer and Perner conclude that
what they call the ‘ToM skill’ (understanding of false belief, inference and construction of
deceptive plans) is learned as a novel skill between the ages of 4 and 6.
Higher Order Perspective Taking
Building on their idea of meta-representation and on their earlier paper, Wimmer and Perner
recognize a more difficult form of this meta-representation (Perner and Wimmer 1985). They
test the so-called second-order belief structure (person A thinks that person B thinks
thought X), as opposed to their previous first-order belief structure (person A thinks
thought X). This task is interesting not only because it provides a more difficult challenge,
but also because any reasoning short of second-order reasoning will lead to the wrong answer,
whereas previously some reasoning short of a full understanding of the concept could still
lead to a correct answer (although this was controlled for with follow-up questions about
understanding).
The setup they use is the following. The story is about two characters, John and Mary, who
share the common knowledge of an ice-cream van being in location 1. They are then told,
independently of each other, that the van moves to location 2. But since they are told this
fact independently of each other, it is not common knowledge. Therefore the test subject
has to understand that John thinks that Mary thinks the ice-cream van is still in location 1
(second-order belief). Previously something along the lines of ‘Where will Mary go for
ice-cream?’ was tested (first-order belief). Now any reasoning short of 2nd order (1st order
reasoning ‘Where does John think the van is?’ or ‘Where does Mary think the van is?’, or 0th
order reasoning ‘Where is the van?’) will lead to the wrong answer.
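The three orders of reasoning in this task can be made concrete with a small sketch. This is illustrative code only; the variable names and the encoding are my own, not part of the original study.

```python
# Illustrative encoding of the ice-cream-van story. Locations are
# numbered as in the text: the van starts at 1 and moves to 2.

actual_location = 2              # 0th order: where the van really is
john_believes = 2                # John was told about the move
mary_believes = 2                # Mary was told independently

# John does NOT know that Mary was also informed, so his belief
# about Mary's belief is still the original common-knowledge spot.
john_believes_mary_believes = 1  # 2nd order belief

# The test question is "Where does John think Mary will go for
# ice-cream?". Any lower-order belief gives the wrong answer (2);
# only the 2nd order belief gives the correct answer (1).
assert actual_location == 2              # 0th order answer: wrong
assert mary_believes == 2                # 1st order answer: wrong
assert john_believes_mary_believes == 1  # 2nd order answer: correct
```

The point of the sketch is that the second-order belief cannot be derived from the lower-order ones; it has to be represented separately, which is exactly what the younger children in the study fail to do.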
The second-order false belief task, with and without a memory aid (John does not know Mary
was informed about the new location), was tested against a version that allows first-order
reasoning to lead to the correct answer (Mary is not told about the van moving, so John’s
belief about Mary’s belief is actually correct) among 7-10 year olds. The memory aid helped
participants significantly as a reminder to include this fact in their reasoning, not because
the information was unknown (this was checked with a control question). Not surprisingly,
older children performed better, and performance in tasks that could be solved using just
first-order reasoning was better than in those that required 2nd order reasoning. Overall, a
few 7 year olds, about half of the 8-9 year olds and most 10 year olds understood second-order
reasoning. And not only did the memory aid prove to be significantly helpful; an additional
experiment where subjects were urged to think carefully before answering the test question
yielded positive results as well.
A possible explanation the authors give for some of the failure among younger children in
this experiment is that younger children use a heuristic-based system to arrive at a
conclusion, while older children, with more mental computing power at their disposal, use a
more thorough approach based on analytical reasoning. For instance, younger children might
use the 0th order belief (the actual location they know the van is in) and perform a check
against Mary’s or John’s 1st order belief. Since this check will confirm the ice-cream van
being in location 2, this type of reasoning might lead to the wrong answer. Perner and Wimmer
check this hypothesis by moving the van to a 3rd location. Since performance was not any
better under this condition, it seems safe to conclude this hypothesis is unsupported.
Additional follow-up experiments tested for a learning effect, or conversely attention span
problems (two consecutive stories were presented), for the wrong assumption of restored
mutual knowledge (since both John and Mary know the new location of the van, subjects might
be fooled into believing common knowledge to hold), and for a first-order belief
wrongness-matching approach to the question (John’s belief about Mary is false, therefore I
answer the opposite of what John thinks). Learning effects did play a significant role, but
only among older children. The other two factors examined, restoration of the mutual
knowledge assumption and a wrongness-matching approach, were not confirmed.
Overall, using their methods and enhancements, most 6 year olds and almost all 7-9 year olds
were able to represent second-order belief states. The performance with this method is much
better than that found by other researchers. Other research suggests representation of
epistemic states (ignorance) is achievable by children on average 1 year younger (Hogrefe,
Wimmer and Perner 1984), suggesting second-order epistemic states might be understood even
earlier than age 6. With their 1983 and 1985 papers Wimmer and Perner established a good
overview of Theory of Mind in children, including where the difficulties lie in
understanding the concept of other minds, and what factors help in understanding it.
Learning the Theory of Mind Skill
One of the main criticisms or points of debate after the Wimmer and Perner papers was their
claim about the ‘ToM skill’ as a novel skill learned at a certain age. This view stems from
the conjectures of, among others, Piaget, Skinner and Freud, where the newborn is described
as cut off from the rest of the world, often illustrated as a bird in an egg where its food
supply and everything it needs is enclosed in its shell. Fodor (1992) challenges this view;
he makes the case for a view called nativism. According to nativism the newborn is already in
possession of these skills (in this case the ToM skill). The central assumption is that it is
not a child’s understanding of ToM-like phenomena that changes between the ages of 3 and 4,
but rather the ability to access the computational resources required for solving the task
that increases. The brain’s computational strength would in this case act as a performance
barrier. Looking at this debate a little more closely is interesting, since it not only
provides us with a more thorough view of Theory of Mind, it also uncovers some of the
elements that are related to, or perhaps fundamental building blocks for, the mentalizing
skill.
Fodor argues that many children at age 3 generally fail the false belief task, while slightly
older children pass it. He thinks it is implausible that children learn this supposedly
completely new skill in such a short and sudden transition. Furthermore, Leslie (1987) argues
children under the age of 3 have no problem at all participating in pretend play. For pretend
play, just as with the representation of other minds, it is important to understand the
existence of a state of being that is separate from reality (or at least from the reality
experienced by the child itself). Therefore the ability of very young children to participate
in pretend play seems like a good argument for nativism.
Many tasks designed to test this decoupling of beliefs or states from the state of reality
confirm this view:
-The false belief explanation task (Bartsch and Wellman 1989). Here children are asked why
the protagonist of the story looks for an object in the wrong place. This task shows that at
least some of the children who failed the false belief task succeed in explaining the
reasoning behind looking in the wrong place. This makes the explanation that children do not
grasp the concept of false belief unlikely.
-The deception task (Chandler, Fritz and Hala 1989). Here children from the age of 2.5 years
are able to manipulate the environment so that the protagonist of the story is tricked into a
false belief. Again it seems unlikely the subjects would be able to do this without at least
a grasp of the concept of false belief.
-The disparate belief task (Wellman 1990). An object is either in container A or B; Maxi
believes the object is in container B, where will he look? Correct answers here suggest the
test subject understands that agents act on their own beliefs.
-The enhanced false-belief task (Siegal and Beattie 1991). Siegal and Beattie enhance the
false-belief task by changing the question to: “Where will Maxi FIRST look for the object?”.
It turns out that performance among three year olds is dramatically increased under this
condition. Originally believed to be an experimental artifact, their result was later
replicated and vindicated (Leslie 1994). A result difficult to explain under the classical
view of learning the ToM skill.
Based partly on these findings Fodor proposes a framework where children simplify the task
and take a mental shortcut. This framework is constructed with the use of two hypotheses.
Hypothesis 1 (H1): predict in a way that will satisfy an agent’s desires. Hypothesis 2 (H2):
predict in a way that would satisfy an agent’s desires if his beliefs were true. According to
Fodor, children of 3 and under use H1 if it yields a unique prediction. Children of age 4 and
older only use it when they think the beliefs of the agent are true. This system of
heuristics is somewhat similar to the one Wimmer and Perner hypothesized earlier (a simple
solution with a simple check), but could not confirm empirically. Some of the tasks described
below also shed doubt on the correctness of this hypothesized framework.
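Fodor’s two-hypothesis framework can be sketched as a simple decision rule. The encoding below is hypothetical and my own; it only serves to make the H1/H2 distinction concrete.

```python
# Sketch of Fodor's framework. H1: predict from the agent's desire
# alone. H2: predict what would satisfy the desire IF the agent's
# (possibly false) beliefs were true.

def predict_h1(desire_satisfying_actions):
    """H1 applies only when desire alone pins down a unique action."""
    if len(desire_satisfying_actions) == 1:
        return desire_satisfying_actions[0]
    return None  # no unique prediction; H1 is silent

def predict_h2(action_under_agents_belief):
    """H2 predicts from the agent's own beliefs, true or false."""
    return action_under_agents_belief

def predict(age, desire_satisfying_actions, action_under_agents_belief):
    if age <= 3:
        # Younger children take H1 whenever it yields a unique answer.
        h1 = predict_h1(desire_satisfying_actions)
        if h1 is not None:
            return h1
    # Older children (or when H1 is silent) reason via H2.
    return predict_h2(action_under_agents_belief)

# Maxi wants his chocolate; it is really in box B, but he believes it
# is in box A. Desire alone uniquely suggests "look in B", so the
# 3-year-old predicts B (fails the task) while the 5-year-old
# predicts A (passes).
print(predict(3, ["look in B"], "look in A"))  # -> look in B
print(predict(5, ["look in B"], "look in A"))  # -> look in A
```

Note how this reproduces the observed pattern: the younger child’s shortcut only produces the wrong answer when the agent’s belief happens to be false, which is precisely the condition the false belief task creates.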
These results are, however, all better explained under a nativist view than under the more
classical view subscribed to by Wimmer and Perner. At this point it is very interesting to
note that young children also have problems with tasks similar in difficulty and structure to
ToM tasks, or false belief tests, that do not require them to understand other minds (or only
in a less direct way).
-The belief prediction task (Perner, Leekam and Wimmer 1987). A child is shown a crayon box
and is asked what is inside. He naturally thinks crayons are inside, after which he is shown
there are actually pencils inside the box. He is then asked what Maxi will think is inside
the box. Younger children answer pencils; older children answer crayons. This still requires
some ToM skill, but in a more indirect way than a standard false belief task does.
-The appearance/reality task (Flavell, Green and Flavell 1986). This task is similar to the
belief prediction task except that it does not require a child to understand other minds at
all. A child is shown an object that looks like a rock and is asked what the object is: a
rock. He is then invited to examine it and finds out it is a sponge. He is then asked again
what the object looks like: older children respond rock, younger children respond sponge.
The result is very interesting since it no longer involves other minds, yet we still observe
results similar to those in mentalizing tests. In some sense the test question can be
interpreted as ‘What does this object look like to you?’; therefore one mind still has to be
understood by the test subjects to some extent (their own).
-The Polaroid task (Zaitchik 1990). The workings of a Polaroid camera are explained to a
child. He then sees a picture taken of a doll in location A. After the picture is taken the
doll is moved to location B. Where is the doll in the picture? Older children get the test
question correct; younger ones do not. Here similar results are found as in the false-belief
tasks, but no minds are involved at all (unless children ascribe minds to physical objects).
Merely an understanding of the physical world is required to answer the test question
correctly.
These results raise the question whether false belief tasks on young children do in fact test
their ability to mentalize. If children run into similar problems on tasks where no other
minds are involved, this might be a serious issue. In a broader sense the difficulty could
stem from a problem similar to the problem of other minds. It is not unlikely that the
broader issue is the concept of different states of reality, or different states of the
world. This discussion however moves beyond the scope of this paper, and I will therefore
limit myself to the effects this has on Theory of Mind exclusively. I did not want to leave
it unmentioned though, since I think it is an important criticism of the research done in
this field.
On the topic of nativism: more recent research does confirm Fodor’s suspicion, but in a less
extreme form than he proposed (‘starting-state nativism’ as opposed to Fodor’s ‘final-state
nativism’). This research has mainly focused on imitation in very young infants, which seems
to be an important building block for developing ToM and social cognition in general. But
since this is of less interest for our discussion and beyond the scope of this paper, I refer
to Meltzoff (2002) for an overview.
Theory of Mind and Autism
At this point it becomes interesting to look at the interaction between Theory of Mind and
autism. Baron-Cohen, Leslie and Frith (1985) were the first to link the ToM proposed by
Premack and Woodruff to autism. Autism is a developmental disorder in which social,
communication and imagination skills (pretend play) are severely impaired (Wing 1991). The
link to mentalizing ability seems obvious. The authors used a version of the false-belief
task very similar to the one introduced by Wimmer and Perner in their 1983 paper, this time
with Sally and Anne as the main characters instead of Maxi (mentioned here because this
version is often used in later research and is referred to as the Sally-Anne task). They
found that 80% of the subjects diagnosed with autism failed the test, while 85% and 86% of
normal and Down syndrome children respectively passed the test. Here Down syndrome children
were used as a control group with a low IQ score, together with healthy (but much younger)
children. These results were later replicated by a number of studies, and therefore the link
between autism and an underdeveloped Theory of Mind seems well established. For an overview
of these results see Frith, Morton and Leslie (1991). At first glance it might not seem like
the connection between developing mentalizing ability and childhood autism is relevant.
However, later on in this paper, when comparing Theory of Mind with Game Theory, it will
become clear this link is indeed very interesting.
Neuroscience and Theory of Mind
Neuroscience is a field that is generally ignored in economics, apart from the recently
developing field of neuro-economics. And normally the subject would be beyond the scope of
a paper on economics. For our current subject insights from neuroscience will prove to be
very helpful however. Since we can link neuroscientific studies on Theory of Mind to
neuroscientific studies on Game Theory. Furthermore many studies in the field of neuroeconomics involve the strategic games of Game Theory. Therefore I will provide an overview
here of the brain areas involved in mentalizing. There are three main brain areas that are often
brought into contact with ToM. Those areas are the superior temporal sulcus (STS), the
temporal poles (TP), the anterior paracingulate cortex (APCC, much overlap with medial prefrontal cortex or mPFC) and occasionally the amygdala and the orbitofrontal cortex (OFC).
(Gallagher and Firth 2003)(Singer 2008).
The superior temporal sulcus seems to be related to mentalizing, but not in a direct way.
This brain region is linked to understanding stories involving people (Gallagher et al.
2000), but also to explaining intention and causality (Brunet et al. 2000). The STS has also
been connected with tracking body movement (Grossman and Blake 2001) and facial recognition
of emotions (Narumoto et al. 2001). Allison et al. (2000) speculate on the underlying causal
link between these factors; they argue the area is activated when intentions or actions of
other individuals are signaled. (Visual) detection of intention is of course strongly linked
to mentalizing, although it does not require any thinking about other minds.
In various studies the temporal poles have been associated with recollection of familiar
faces and scenes (Nakamura et al. 2000), voices (Nakamura et al. 2001), and with emotional
(Dolan et al. 2000) and autobiographical (Fink et al. 1996) memory retrieval. Gallagher and
Frith believe that, based on these studies, the function of the temporal poles can be
summarized as storage for different types of memories (Gallagher and Frith 2003). This can
be useful for engaging in Theory of Mind in two different ways. First of all, when engaging
in deceptive activities it is useful to remember our own lies. Second, we can potentially
draw on our past experiences, providing a semantic-based prediction of future outcomes.
Frith and Frith’s (2003) examination of the temporal poles, based on 10 neuro-imaging
studies, leads them to the following conclusion: the temporal poles are used to create an
emotional and semantic background for the issue at hand, based on past experience. They
conclude the temporal poles are used when analyzing a picture or a story, whether this
includes mentalizing or not.
The most promising area associated with Theory of Mind is the anterior paracingulate cortex.
Gallagher et al. (2002) constructed a test where a subject under a PET scan would be told he
was playing a strategic game (rock-paper-scissors) against a human opponent or against a
computer (as control), to see what differences in activation there were between the two
situations. It turned out the only significantly activated area was the APCC. McCabe et al.
(2001) found similar results using a functional magnetic resonance imaging (fMRI) study on
subjects performing a trust and reciprocity game against humans, compared to a version where
they played against the computer; this study also highlighted the APCC. These experiments
were both set up very ‘cleanly’, where the only difference between the treatment and the
control condition was whether the game was played against a human or a computer. It is
possible other brain regions (STS and temporal poles) activated because of a difference in
cues between the two conditions. The APCC is part of the medial frontal cortex, and lesion
studies confirm an intact medial frontal cortex seems necessary for mentalizing abilities
(Rowe et al. 2001 and Stuss et al. 2001).
There are some alternative explanations for activations in the anterior paracingulate
cortex. Autonomic arousal is one such possible explanation. Other studies have found parts
of the ACC respond to autonomic arousal, in particular cognitive uncertainty and
anticipatory arousal (Critchley et al. 2000) (Critchley et al. 2001). However, the area
found in those studies is more posterior than the APCC. Increasing task difficulty could
also explain the activation in this area. However, other studies seem to contradict this
hypothesis, as other regions of the ACC seem to be correlated with task difficulty rather
than the APCC (Duncan and Owen 2000) (Barch et al. 2001). There are several other studies
that found activation in the APCC that is not as directly connected to thinking about other
minds. These activations are often found in self-monitoring tasks like reporting currently
experienced emotions (Lane et al. 1997), or visual self-recognition (Kircher et al. 2000),
which can be thought of as related to ToM. Based on these observations we can conclude a
very strong link between this brain region and the concept of Theory of Mind seems almost
certain. Frith and Frith (2003) agree. They believe the mPFC activates when people are
thinking about their own mental states or those of others, especially when these mental
states are decoupled from reality.
The amygdala and orbitofrontal cortex have also been associated with ToM, although the
involvement of these areas rests on very little evidence ((Baron-Cohen et al. 1999) and
(Baron-Cohen et al. 1994) respectively). Aside from these studies no other neuro-imaging
studies have found increased activity in these areas. Therefore little evidence exists of a
direct link between these areas and mentalizing ability, and I will not discuss them any
further.
Imitation and Mirror Neurons
In 1996 mirror neurons were discovered in the macaque brain (Rizzolatti et al. 1996). These
neurons fire both when the primate performs an action himself and when this primate observes
the same action being performed by someone else: literally monkey see, monkey do.
Immediately the imitating system in newborns, which we talked about earlier, comes to mind.
There it was already believed this system of imitation was a fundamental building block for
developing a theory of mind. The discovery of the mirror neuron made this system an even
more obvious candidate. Gallese and Goldman (1998), for instance, propose our ability to
share the mental states of others might be dependent on this system. I do not want to delve
too deep into the subject of mirror neurons, but I did want to mention it for the link it
provides to Theory of Mind. For a further review on the topic of mirror neurons see Grezes
and Decety (2001), and see Williams et al. (2001) for a comparison of mirror neurons with
imitation and autism.
Empathy
Völlm et al. (2005) point out that all the brain regions generally associated with Theory of
Mind also activate when examining empathy (using fMRI). One might suspect the two subjects
to be related, or more precisely, some basic understanding of other minds to be required for
empathy, yet this finding is still surprising. Blair et al. (1996) do however realize these
two systems cannot be completely identical. For, as they mention, inhibitions in empathy do
not necessarily imply inhibitions in mentalizing ability and vice versa, as demonstrated by
the disorders of psychopathy and autism respectively. Preston and de Waal (2002) suggest the
basis for empathy is a system that allows us to simulate the state of another person in
ourselves and therefore to imagine what the other person feels. This sounds very similar to
both Theory of Mind and the mirror neuron system described above. Preston and de Waal indeed
share this opinion. They believe the system that allows people to share the mental states of
others has to be extended to encompass the ability to also experience the feelings and
sensations of others. Singer and Fehr (2005) not only agree with this conclusion; they also
add that the process of empathy is automatic. Automaticity in this sense implies that no
conscious thought is required for the empathic feelings to take place. They believe this is
important since one’s emotions and feelings impact the decision making process, and
automatically sharing another’s feelings would thus be impactful in this area.
2. Game Theory
Introduction to Game Theory
Although the absolute beginning of Game Theory is often said to lie with Zermelo in 1913
with his article on backwards induction in the game of chess (Zermelo 1913), or even with
Cournot’s view on oligopolistic firms in 1838 (Cournot 1838), the concept of Game Theory
(GT) became relevant for the field of economics with the collaboration between the
mathematician von Neumann and the economist Morgenstern in their book “Theory of Games and
Economic Behavior” (1944). Their work was immediately well received and has been expanded
upon ever since. It is no exaggeration to state it has become one of the cornerstones of
modern economics, especially in the fields of decision making under uncertainty and
strategic interactions. Since GT is a much more familiar sight in the field of economics I
will not spend as much time elaborating on it as I did for Theory of Mind. I will shortly
summarize what Game Theory provides for economics, discuss some of the games often used
throughout many of the articles mentioned in this paper, spend some time elaborating on
‘solutions’ to game theoretic situations, and finally comment on the presumed rational
‘homo economicus’ and on the merit of this assumption.
Perhaps one of the greatest accomplishments of Game Theory is its role in the advancement
of economics as a science. The scientific method is about creating testable hypothesis.
Economics however, as one of the social sciences, can sometimes struggle in this regard. It is
difficult to quantify social or strategic interactions. Making it even more difficult to examine
them, even more challenging to recreate decision making tasks in a lab, and therefore very
demanding to create testable hypothesis. Game theory changes this. It creates a framework in
which decision making tasks can be examined, compared and quantified. Economists are
therefore able to create and re-create lab experiments on social or strategic choice under
controlled conditions. In this way GT helps economics in a descriptive capacity: it allows us to observe and understand behavior better. Many of the experiments discussed in this paper and in
general many important economic findings would not have been possible without the
methods of Game Theory.
The second contribution to the field of economics automatically comes to mind when talking
about another pioneer in the field of Game Theory: John Nash. Nash (1950), in his original paper and with his contributions since, built on the work of von Neumann and Morgenstern by providing a mathematical proof of optimal solutions for both cooperative and non-cooperative games, beyond the solutions for zero-sum games the original authors had already provided themselves. This has proven to be a monumental step in the development of GT and
indeed of economics in general. Before this breakthrough economics was predominantly applied at a broader macro-economic scale, analyzing supply and demand curves and Keynesian
multiple country models (Myerson 1996). Nash’s work opened up a new door, and now
modern economics can be seen as analysis of incentives in all social situations (Myerson
1996). All of this is possible because Nash proposed a way to find solutions to game theoretic
problems given the right assumptions (often rationality). With the aid of these solutions, later called 'Nash equilibria', economists were actually able to draw normative conclusions based on the models of von Neumann and Morgenstern. The Nash equilibrium is a position where no player can improve his outcome by changing only his own choice (Myerson 1978).
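Stated formally (a standard textbook formalization, not a formula from the sources cited here): a strategy profile s* = (s1*, ..., sn*) is a Nash equilibrium if, for every player i,

```latex
u_i(s_i^*, s_{-i}^*) \;\ge\; u_i(s_i, s_{-i}^*) \qquad \text{for all } s_i \in S_i,
```

where u_i is player i's payoff function, S_i his strategy set, and s_{-i}* the equilibrium choices of all players other than i: no player can improve his outcome by deviating alone.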
The Prisoner's Dilemma
The most well-known and probably most used game within the realm of Game Theory is the Prisoner's Dilemma (PD). In story form the PD is described as follows: two escaped prisoners have been caught, and both are questioned at the same time. Each has the option to Confess or to Lie. If both confess, they both receive a payoff of -8; if both lie, they both receive a payoff of -1. In the case where one of the two confesses and the other one lies, the payoffs are 0 and -10 respectively. This is often shown schematically as below.

                      Player2: Confess    Player2: Lie
  Player1: Confess        -8, -8             0, -10
  Player1: Lie           -10, 0              -1, -1
Here the dominant strategy is for both players to Confess, since irrespective of what the other player does this results in the higher payoff. Therefore the predicted outcome (and the only Nash equilibrium) is (-8, -8), even though this is not the socially optimal solution: both players would be better off in the (-1, -1) scenario. Many variations of the Prisoner's Dilemma exist, maybe most notably the sequential Prisoner's Dilemma. This version can be solved by the use of backwards induction, discussed later in this paper (as long as the number of rounds is finite).
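The equilibrium reasoning above can be checked mechanically. The sketch below is my own illustration, using the payoffs from the text: it brute-forces all four pure strategy profiles and keeps those where neither player can gain by unilaterally switching.

```python
# Payoff table from the text:
# (row action, column action) -> (row payoff, column payoff)
payoffs = {
    ("Confess", "Confess"): (-8, -8),
    ("Confess", "Lie"):     (0, -10),
    ("Lie",     "Confess"): (-10, 0),
    ("Lie",     "Lie"):     (-1, -1),
}
actions = ["Confess", "Lie"]

def is_nash(row, col):
    """True if neither player can improve by changing only his own choice."""
    row_pay, col_pay = payoffs[(row, col)]
    row_ok = all(payoffs[(dev, col)][0] <= row_pay for dev in actions)
    col_ok = all(payoffs[(row, dev)][1] <= col_pay for dev in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # -> [('Confess', 'Confess')]
```

As expected, the search returns only (Confess, Confess), even though (Lie, Lie) would leave both players better off.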
The Ultimatum Game
Another very popular strategic game is the Ultimatum Game (UG). In this game the players do not act at the same time. Player1 receives an initial endowment of 10; he can then propose to split this amount between the two players in any way he likes. Player2 subsequently gets the option to either accept or reject the proposal. In case of acceptance both players get paid according to the split; in case of rejection both players receive 0. This game is presented schematically below.

  Player1 proposes a split (10 - x, x), keeping 10 - x and offering x to Player2.
  Player2 accepts: payoffs (10 - x, x). Player2 rejects: payoffs (0, 0).
The rational solution to this game is again found with the process of backwards induction. Player2, if he is a rational agent maximizing his payoff, should accept any offer above 0. Knowing this, Player1 should offer Player2 the smallest possible amount above 0 in order to maximize his own payoff. Both the concept of rational agents and the concept of backwards induction will be discussed in more detail later on. Notable variations on the UG are the Trust Game (where Player1 can choose to allocate part of his endowment to Player2, after which it gets multiplied by some amount, and Player2 can then choose whether to give part or all of this amount back to Player1) and the Dictator Game (explained in more detail below).
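The backwards-induction argument can be written out for a discrete version of the game. The sketch below is my own, assuming whole-unit offers from the endowment of 10: a strictly payoff-maximizing Player2 accepts any positive offer, so Player1 offers the smallest positive amount.

```python
ENDOWMENT = 10

def responder_accepts(offer):
    # Last mover first: Player2 compares the offer with the 0 from rejecting.
    return offer > 0

def best_proposal():
    # Player1 anticipates Player2's acceptance rule and maximizes his own share.
    best_offer, best_keep = 0, 0
    for offer in range(ENDOWMENT + 1):
        keep = ENDOWMENT - offer if responder_accepts(offer) else 0
        if keep > best_keep:
            best_offer, best_keep = offer, keep
    return best_offer, best_keep

print(best_proposal())  # -> (1, 9): Player1 offers 1 and keeps 9
```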
The Dictator Game
The Dictator Game (DG), as mentioned above, is a variation of the Ultimatum Game. While it is technically not a strategic game, since Player2's actions do not in any way impact the outcome of the game, it is still a very useful tool. The DG is often used to measure varying degrees of empathy or altruism when its outcomes are compared with those of the UG. In this 'game' Player1 receives an initial endowment and decides alone how it is split between the two players. It is a very straightforward game with an equally simple rational solution: Player1 should keep the entire endowment in order to maximize his income.
Solutions and Assumptions of Game Theory
As seen above these games can be 'solved', that is, an optimal strategy or solution can be found for all players given the right assumptions. I will give a short explanation of each of the solutions, or of the methods used to achieve these solutions. We have already explained the Nash equilibrium before. We also came across backwards induction but have not fully explained the concept yet. The idea behind backwards induction is to consider the last person that has to make a choice. We assume this person makes the optimal choice and then work backwards from there, continuing to the next to last person and so forth until every choice is made. Another concept worth explaining is the iterated elimination of dominated strategies. We already came across dominated strategies in the Prisoner's Dilemma. A dominated strategy is a choice that yields a worse payoff than some other available choice, irrespective of what the other players do. Iterated elimination of dominated strategies simply removes dominated strategies as options from the game one by one.
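The procedure can be sketched in code. The 2x3 payoff matrices below are a made-up, dominance-solvable example of my own (they do not come from any of the cited papers); repeatedly deleting strictly dominated strategies leaves a single strategy pair.

```python
def strictly_dominated(strats, opp_strats, payoff):
    """Strategies beaten by some other strategy for every opponent choice."""
    return {s for s in strats
            if any(all(payoff(t, o) > payoff(s, o) for o in opp_strats)
                   for t in strats if t != s)}

def iterated_elimination(rows, cols, row_pay, col_pay):
    rows, cols = set(rows), set(cols)
    while True:
        dead_rows = strictly_dominated(rows, cols, lambda s, o: row_pay[(s, o)])
        dead_cols = strictly_dominated(cols, rows, lambda s, o: col_pay[(o, s)])
        if not dead_rows and not dead_cols:
            return rows, cols
        rows -= dead_rows
        cols -= dead_cols

# Hypothetical example: the row player chooses U/D, the column player L/M/R.
row_pay = {("U", "L"): 1, ("U", "M"): 1, ("U", "R"): 0,
           ("D", "L"): 0, ("D", "M"): 0, ("D", "R"): 2}
col_pay = {("U", "L"): 0, ("U", "M"): 2, ("U", "R"): 1,
           ("D", "L"): 3, ("D", "M"): 1, ("D", "R"): 0}

print(iterated_elimination({"U", "D"}, {"L", "M", "R"}, row_pay, col_pay))
# R is dominated by M; then D by U; then L by M, leaving ({'U'}, {'M'})
```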
All the strategies suggested above rely on the same critical assumption, namely that all agents are rational and income (or utility) maximizing. Or, even more strictly speaking, that this rationality and selfishness are common knowledge: I know that you know that I know (et cetera) that you are rational. These assumptions, already made by John Nash in 1950, are essential in solving our strategic games. They have, however, received a lot of criticism. Empirical evidence suggests people are not the rational utility maximizing agents, or 'homo economicus', as they are often called. People often behave irrationally or altruistically (the field of behavioral economics is based on these observations). Conceding the possibility that some agents are not fully rational already poses big problems. Not only does this possibility make it
impossible to solve some games, but it also introduces a new problem. If I know all other
agents are not necessarily rational, is it still rational for me to act as if they were? (van
Damme 1989 discusses something similar) For an overview of the criticism of this view of
rationality see (Doucouliagos 1994), (Binmore 1987), (Camerer 1997) and (Kahneman 2003).
At this point the link between Game Theory and Theory of Mind should already be intuitively clear. The assumption of common knowledge mentioned above, for instance, is a critical component of the proposed rational solutions to strategic interactions. For me to understand that you know that I know (et cetera), I have to be able to grasp the concept of other minds. And most importantly I have to understand that the beliefs inherent in other minds can differ from both my own views and from reality. The common knowledge assumption even rests on higher-order perspective taking abilities beyond those we have seen in second-order belief tasks, and on a meta-representational view beyond those tasks, since inherent in it is a claim I make not only about your beliefs, but also about your beliefs about me (and further iterations).
On top of that, the consideration of whether we should assume our opponent to be a rational agent (mentioned above) is of great interest for our link with ToM as well. More concretely, if we consider all the assumptions of GT to be valid, then every agent is a fully rational utility maximizer, and this fact is common knowledge. In that case optimal solutions could be reached by simply following a set of rules, or logical constructs, as we have seen above, and playing against a well-programmed computer or against a human being should not differ. So under the strict assumptions often found in economics the link between Theory of Mind and Game Theory is entirely absent. This is of course no surprise to us, because that is the appeal of the solutions proposed by Nash and colleagues: to rid strategic interactions of unpredictable human interference, and make the exercise merely one of deduction. We
have, however, seen from literature (Doucouliagos 1994, Binmore 1987, Camerer 1997 and
Kahneman 2003) this assumption of rationality is much too optimistic. This means the
connection between Theory of Mind and Game Theory might still be there.
In the following chapter we will look at this connection in more detail. Based on the observations made above we can say that, to some extent, the link between ToM and GT is also an indication of the failure of the assumption of rationality. For, as we have seen
previously, under perfect rationality there is no reason for this link to exist. Any evidence
indicating people use Theory of Mind during Game Theory tasks lends credence to the claim
that people at least violate the common knowledge assumption of rationality. This
observation makes our quest to find a link between the two subjects all the more interesting.
3. Interaction Between Theory of
Mind and Game Theory
The concept of Theory of Mind seems like a very simple skill at first glance. After all, as we have seen earlier, children of age 3 or 4 are already able to make decisions based on the notion of the existence of other minds (depending on the specific task). And the problems posed in many game theoretic games do not seem to be very challenging either, especially to those who are familiar with them. So how could this seemingly easy task elude adults, especially in strategic tasks where they are encouraged to think critically, or in similar real-world situations? This conclusion might be a bit too hasty though. For instance, people often wrongly assume their strategic opponent will play dominated strategies (Camerer 2003). Additionally, Johnson-Laird and Steedman (1978) found evidence that adults had trouble reaching the right conclusions from syllogisms in cases where these required at least two mental models. Game theorists Schelling (1960), Rapoport (1967) and Goffman (1970) all recognized the importance of the ability to engage in higher order perspective taking for success in game theoretic tasks. Based on these papers the relation between Game Theory and Theory of Mind does not appear to be as straightforward as it might seem, and both areas and their interaction are definitely worth examining.
Studies Combining Both Fields
Some of the studies discussed so far already link the field of Theory of Mind and that of Game Theory together. In this paragraph I will provide an overview of the research that has been done combining both fields. First of all, the article by McCabe et al. (2001) combines these two areas. They let participants play the Trust Game described earlier against both a human and a computer opponent. The purpose of this study was to identify a brain area associated with ToM, which they were indeed able to do (the anterior paracingulate cortex). Interestingly enough, this brain region was only activated more strongly when participants were willing to cooperate with their human counterpart. McCabe et al. use a game theoretic game played against a human opponent assuming it will activate the Theory of Mind network and, more importantly, they are successful in this attempt. This neatly presents us with the connection we are interested in.
Two other studies, Rilling et al. (2002) and King-Casas et al. (2005), looked more closely at the cooperation McCabe et al. found to be an integral part of the involvement of Theory of Mind related regions, this time with different results. King-Casas et al. (2005) studied cooperation in the sequential trust game and asked whether they could find a neural basis for the building of trust between two participants. They were indeed able to find such a basis, but not in areas associated with ToM, which I find a little surprising. I would have thought that building trust is an achievement that would require use of the mentalizing system. And even if the trust building exercise is based more on pattern recognition or on a semantic system built on past experience, we could have predicted the Temporal Poles to play a role here. In line with these findings are the results of Rilling et al. (2002). In this article subjects were scanned using fMRI while they were playing a sequential Prisoner's Dilemma game. The authors were interested in which brain regions would activate in cases where mutual cooperation was achieved. One might expect the brain areas associated with ToM to activate here (the TP, the STS and most importantly the anterior paracingulate cortex). The results were different, however: the only significant activation found was in areas connected to the reward system. It could be that the ToM mechanisms activated earlier, when the decision whether or not to cooperate was being made. One would still expect whether the opponent cooperated last turn to factor into the decision in the next turn of a sequential game, though. A disappointing result for our connection between ToM and GT.
Related to this research is the work of Sanfey et al. (2003). In a way their work focuses on the opposite of cooperation in strategic games: they examine people's reactions to unfair offers made in an Ultimatum Game. Brain regions connected to cognition are triggered, as are regions connected to emotions. For unfair offers that are rejected by Player2, the activation in the emotional areas is especially strong. This suggests the decision to reject is mostly emotional, which is not surprising since the rational decision would be to accept any offer. And even though some cognitive areas are activated, they are not the brain regions commonly associated with ToM, suggesting perspective taking does not play much of a role in deciding whether to accept or reject an unfair offer.
Also investigating the UG, but this time more directly linked to Theory of Mind, is Takagishi et al. (2010). They examined how an understanding of Theory of Mind (measured with the Sally-Anne false-belief task) would impact fairness in Ultimatum Game offers. This turns out to be a positive effect: having access to this 'skill' does indeed improve offers in the UG. The authors theorize this is because being able to place yourself in the state of mind of the other person increases fairness related behavior. An initial response here might be that this is closely related to the connection between ToM and empathy proposed by Völlm et al. (2005). However, the work done by Artinger et al. (2010) suggests otherwise. Like Takagishi et al., they also examine the UG, among other games. Artinger et al. additionally examine the Dictator Game, to be able to compare its results with those of the Ultimatum Game, and the authors use a personality test to find a disposition for both empathy and ToM in their test subjects. They use these measures to determine their impact on behavior in the strategic games used here. As it turns out, empathy has no effect on behavior in either game, contrary to what we thought previously. A disposition for ToM does impact behavior, but not as we would have expected: it increases offers in the DG, but not in the UG, while the opposite would conform to our expectations. The authors mention a possible explanation based on further findings of theirs: there seems to be a strong correlation between how other people act and how people act themselves. The authors propose that a social norm system might be an important factor in this type of decision making.
An even stronger case for the connection between ToM and GT is provided by Rilling et al. (2004). They study whether they can find activation in the brain regions associated with ToM while subjects play the Ultimatum Game and the Prisoner's Dilemma and receive signals that could be used to deduce the intent of their (human and computer) opponents. They are indeed able to activate all the associated brain regions, and this activation is stronger versus human opponents. This article provides an excellent case for us: it shows that areas associated with mentalizing abilities in other studies are activated when subjects are motivated to think about their opponents' mental states in strategic games.
Rather than just looking at people's behavior in strategic games, Bhatt and Camerer (2005) investigate the interaction between Theory of Mind and Game Theory by asking people to think about their opponent as well. They examine players using a game that is solvable through the process of eliminating dominated strategies. Players are asked to play the game, to think about what opponents will do, and to think about what opponents think they will do (2nd order beliefs). Not surprisingly, in all of these tasks the ToM regions are activated. What is more surprising is that, according to this study, there seems to be little difference between belief formation and choice. The authors believe this indicates a 'state of equilibrium' in which people are not able to play the game correctly without strategically thinking about the opponent's actions and beliefs. This is what we would expect, and it is in line with the rest of our paper.
Autism in Game Theory and Theory of Mind
From previous studies we have already seen that children with autism are severely impaired in ToM tasks (Baron-Cohen, Leslie and Frith 1985; Frith, Morton and Leslie 1991). Sally and Hill (2006) seek to expand upon this research by looking at how well children with autism perform in strategic games. A strong idea, since the link between reduced mentalizing abilities and autism is so evident. They examined healthy (but younger) children and children diagnosed with autism spectrum disorder (ASD) in the Prisoner's Dilemma, the Ultimatum Game and the Dictator Game. Some differences between ASD subjects and healthy subjects were observed, but not nearly as many as the authors had anticipated. One possible explanation the authors give is some small experimental design flaws stemming from the difficulty of dealing with the test subjects. Another, more interesting, possibility Sally and Hill submit is that a Theory of Mind is sufficient but not required to engage in the strategic games played by the subjects. The tasks in the laboratory might be abstract enough for the children with ASD to complete them relatively well without engaging in the process of thinking about other minds. Nevertheless these results are quite surprising, as children with autism often encounter larger difficulties with these kinds of ToM related tasks in real life. The differences Sally and Hill were able to find between subjects with an ASD diagnosis and healthy subjects were the following: autistic children had difficulty making a coherent and intentional choice in the sequential PD game; the ability to pass the 2nd order false-belief task was positively correlated with cooperation in all the PD tasks and positively correlated with cooperation in response to fair offers in the UG; and participants with autism spectrum disorder were more likely to accept small offers.
Conclusion
In conclusion, the results seem to be mixed. Some studies point to a strong connection between our two areas of interest, while others find no such link, or find it in a way different from what we would have expected. Based on the positive research results we can conclude that the interaction between Theory of Mind and Game Theory is definitely there, but we have to explain why other research fails to find this connection. In the autism related studies mentioned above the link was found, but it was much weaker than anticipated, to which we responded that the abstractness of lab experiments might have been a factor. The other main area where we would have expected stronger results was the research involving sequential games (King-Casas et al. 2005 and Rilling et al. 2002). Based on these results we propose a study to shed some light on the seeming absence of the Theory of Mind related brain regions in the decision making process during these studies.
4.
Research Proposal
Motivations and Setup of the Research
The most surprising outcome of this paper, I found, were the results of the articles examining sequential games, mainly the articles by King-Casas et al. (2005) and Rilling et al. (2002) and their inability to find activation in brain areas related to Theory of Mind. I had originally expected sequential games to be more related to mentalizing: after all, the more information you have about your opponent, the better informed your judgement about that person can be. If anything, the process of mentalizing therefore seems more valuable here. This view is validated to some extent by the difficulty autistic children had with the sequential PD game. Since children with autism have great difficulty understanding other minds, this finding supports my initial hypothesis.
A possible explanation is that people decouple sequential games from their opponent and see them more as a pattern seeking task. Or possibly an underlying system of semantics based on past experiences takes over the decision making process. But in either case I would at least expect the Temporal Poles to be activated, since they are related to memory based semantic belief formation. Based on these observations I propose the following research:
A task involving three versions of the Trust Game, so that they can be compared with each other. Each version will be played against both a computer and a human opponent, to single out the ToM component. The first is a standard TG, the second is a sequential TG with a set number of rounds, and the third is a sequential TG with a randomized number of rounds. The second and third versions differ from each other in that they should require a different approach when being solved: the sequential Trust Game with a set number of rounds can be 'solved' using backwards induction, while the one with a randomized number of rounds cannot be solved this way. During all of these games I want to scan the subjects with fMRI to see whether the ToM related areas are activated. Keeping all other factors constant, it will be very interesting to see whether the fact that a game is sequential does indeed diminish mentalizing brain region activation.
My Hypotheses
H1: In the standard Trust Game ToM related areas will be highlighted when comparing tasks
with human opponents and tasks with computer opponents.
This task will be very similar to the one studied by McCabe et al. (2001). The reason we want to perform this study is twofold. First of all, we hope to replicate and validate the results found by McCabe et al. Secondly, in order for the comparison between one-shot games and sequential games to be more powerful, we want to perform them with the fewest possible confounding effects, to establish maximum control. So we will need to perform the tasks under circumstances that are as similar as possible. Based on the findings of McCabe et al. (2001) we can formulate our hypothesis to match their results: anterior paracingulate cortex activation will be found when comparing tasks against human opponents with tasks against computer opponents.
H2: In the sequential Trust Game with a set number of rounds ToM related areas will not be
highlighted when comparing tasks with human opponents and tasks with computer
opponents.
H3: In the sequential Trust Game with a randomized number of rounds ToM related areas
will not be highlighted when comparing tasks with human opponents and tasks with computer
opponents.
These experimental setups are similar to that of King-Casas et al. (2005), where subjects played a 10 round sequential Trust Game, and somewhat similar to that of Rilling et al. (2002), where subjects played at least 20 rounds of an iterated Prisoner's Dilemma against a computer and a (to their knowledge) human opponent. Based on the results of these two studies we formulated our hypotheses to find no additional activation of ToM related brain regions when comparing the tasks with human and non-human opponents in both versions of the sequential Trust Game.
It has to be pointed out that the study of King-Casas et al. (2005) had a different research goal than our proposed study: they were interested in the factors underlying cooperation in the TG. Additionally, they did not compare cases with human and computer opponents. This might be why they failed to find activation in brain areas related to mentalizing, or why they might not have mentioned it if it was present in equal strength under cooperation and non-cooperation outcomes. It is, however, not unreasonable to think that mentalizing ability has an impact on the cooperation decision. Therefore we would have expected increased activation in these regions (as with McCabe et al. 2001) when comparing cooperative and non-cooperative outcomes, which King-Casas et al. apparently did not find. Additionally, Fischbacher and Gächter (2010) found people were willing to cooperate mostly when they knew their counterpart was willing to cooperate as well (a criterion they call conditional cooperation). In order to find out whether your opponent is willing to cooperate, one would expect ToM to be able to play a role.
Rilling et al. (2002) did compare tasks with human opponents and tasks with non-human opponents. And although, again, the focus of their research was on cooperation rather than ToM, we would have expected to see a positive outcome here. One possible explanation could be that the Theory of Mind related areas were involved at a different point in the decision making process than the one the authors studied. However, one would expect beliefs about your opponent to be updated when new information is received, and we would expect the mentalizing brain regions to play a role there. In any case the findings of Rilling et al. provide an argument in favor of our proposed hypotheses.
The difference between the 2nd and 3rd task is relatively small, but very relevant. In iterated games with a known, fixed horizon the solution can be found by solving the last game of the series as if it were a one-shot game; the previous iterations are then solved using the process of backwards induction. At least theoretically, this game can be solved much more simply than the randomized version (where the optimal solution is less obvious). The version of the sequential TG with a fixed number of rounds can therefore be 'solved' without the use of ToM altogether under the strict assumptions of rationality (as mentioned earlier). Therefore, if we would expect just one of these two sequential games to involve Theory of Mind, we would expect to find it in the sequential Trust Game with a randomized number of rounds, where a purely logic based solution is less obvious.
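The unravelling argument for the fixed-horizon version can be made concrete. The sketch below is my own illustration with a hypothetical parameterization (endowment 10, transfers tripled, whole units): with purely selfish rational players the trustee of the last round returns nothing, so the investor sends nothing, and backwards induction carries this one-shot outcome to every earlier round.

```python
def trustee_best_return(pot):
    # A purely selfish trustee keeps the whole pot, whatever its size.
    return 0

def investor_best_send(endowment=10, multiplier=3):
    # The investor anticipates the trustee's best response to every transfer.
    payoff = {send: endowment - send + trustee_best_return(multiplier * send)
              for send in range(endowment + 1)}
    return max(payoff, key=payoff.get)

def solve_fixed_horizon(rounds):
    """Backward induction: solve the final round as a one-shot game, then work
    backwards; with a unique stage-game solution every round looks identical."""
    stage = (investor_best_send(), trustee_best_return(0))
    return [stage] * rounds

print(solve_fixed_horizon(3))  # -> [(0, 0), (0, 0), (0, 0)]: no trust in any round
```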
Implications
Even though our hypotheses are well grounded in the literature, the outcome we hypothesize would still be quite surprising. This expected result would mean people use a different neural system in sequential strategic games than the system they use in the one-shot version of the same game. On top of that, people would apparently not use their ability to understand other minds in iterated versions of these games, while there is actually more knowledge to be gained there. In sequential versions of strategic games more information is present about the mind of the opponent, so more conclusions could theoretically be drawn from thinking about the opponent's beliefs and motivations. If the results are as we hypothesize, it would of course be very interesting to find out what system replaces the ToM related system as a basis for decision making. Further studies would be able to uncover this neural system. It would also be interesting to find out why we would change to this other system of belief formation. It could be that it is simply too complex to use our Theory of Mind system in sequential games. People may find it difficult to update their beliefs about other minds retrospectively, for instance. This would make the mentalizing system much less useful in those scenarios and might therefore be the key behind our 'change of mind'. This, of course, needs to be validated by further research as well. But if our hypotheses are indeed correct it would, in my opinion, prove to be a very interesting research area.
References
Premack, D. & Woodruff, G. (1978). Does the chimpanzee have a theory of mind?
Behavioral and Brain Sciences, Volume 1, Issue 04, December 1978, pp 515- 526.
Myowa-Yamakoshi, M. & Matsuzawa, T. (2000). Imitation of intentional manipulatory
actions in chimpanzees (Pan troglodytes). J. Comp. Psychol. 114, 381–391.
Warneken, F. & Tomasello, M. (2006). Altruistic helping in human infants and young
chimpanzees. Science 31, 1301–1303.
Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: Representation and constraining
function of wrong beliefs in young children’s understanding of deception. Cognition,
13, 103-128.
Call, J. & Tomasello, M. (2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, Volume 12, Issue 5, May 2008, Pages 187-192.
Pylyshyn, Z. W. (1978). When is attribution of beliefs justified? Behavioral and Brain
Sciences , Volume 1, Issue 04, December 1978, pp 592- 593.
Chandler, M. J. & Greenspan, S. (1972). Ersatz egocentrism: A reply to H. Borke.
Developmental Psychology, 7,104-106.
Flavell, J. H., Botkin, P. T., Fry, C. L., Wright, J. W. & Jarvis, P. E. (1968). The
Development of Role-taking and Communication Skills in Children. New York, Wiley.
Marvin, R. S., Greenberg, M. T. & Mossler, D. G. (1976). The early development of conceptual perspective taking: Distinguishing among multiple perspectives. Child Development, 47, 511-514.
Mossler, D. G., Marvin, R. S. & Greenberg, M. T. (1976). Conceptual perspective taking in 2- to 6-year-old children. Developmental Psychology, 12, 85-86.
Dennett, D. C. (1978). Beliefs about beliefs. Behavioral Brain Sciences 1, 568–570.
Shultz, T. R., & Cloghesy, K. (1981). Development of recursive awareness of intention.
Developmental Psychology, 17, 465-471.
Perner, J. & Wimmer, H. (1985). “John Thinks That Mary Thinks That. . .” Attribution of
Second-Order Beliefs by 5- to 10-Year-Old Children. Journal of experimental child
psychology 39 (3), 437-471.
Hogrefe, G. J., Wimmer, H. & Perner, J. (1984). Ignorance vs. false belief: A developmental lag in epistemic state attribution. Child Development, 567-582.
Fodor, J. A. (1992). A theory of the child's theory of mind. Cognition, 44(3), 283-296.
Leslie, A. M. (1987). Pretense and representation: The origins of ‘theory of mind’.
Psychological review 94, 412–426.
Bartsch, K., & Wellman, H. M. (1989). Young children’s attribution of action to beliefs and
desires. Child Development, 60, 946-964.
Chandler, M. J., Fritz, A. S., Hala, S. M. (1989). Small scale deceit: Deception as a marker of
2-, 3- and 4-year-old’s early theories of mind. Child Development, 60, 1263-1277.
Wellman, H. M. (1990). The child’s theory of mind. Cambridge MA: MIT Press.
Siegal, M. & Beattie, K. (1991). Where to look first for children’s knowledge of false beliefs.
Cognition, Volume 38, Issue 1, January 1991, Pages 1–12.
Leslie, A. M. (1994). Pretending and believing: issues in the theory of ToMM. Cognition.
1994 Apr-Jun;50(1-3):211-38.
Perner, J., Leekam, S. R., & Wimmer, H. (1987). Three-year-olds’ difficulty with false
belief. British Journal of Developmental Psychology, 5: 125–137.
Flavell, J. H., Green, F. L., & Flavell, E. R. (1986). Development of knowledge about the
appearance-reality distinction. Monographs of the Society for Research in Child
Development,51, serial no. 212.
Zaitchik, D. (1990). When representations conflict with reality: The preschooler’s problem
with false beliefs and “false” photographs. Cognition, 35, 41-68.
Meltzoff, A. N. (2002). Imitation as a Mechanism of Social Cognition: Origins of Empathy,
Theory of Mind, and the Representation of Action, in Blackwell Handbook of
Childhood Cognitive Development.
Baron-Cohen, S., Leslie, A. M. & Frith, U. (1985). Does the autistic child have a “theory of
mind”? Cognition, 21(1), 37–46.
Wing, L. (1991). The relationship between Asperger's syndrome and Kanner's autism.
Cambridge University Press, Cambridge, England.
Frith, U., Morton, J. & Leslie, A. M. (1991). The cognitive basis of a biological disorder:
Autism. Trends in Neurosciences. 14, 433-438.
Gallagher, H. L. & Frith, C. D. (2003). Functional imaging of ‘theory of mind’. Trends in
Cognitive Sciences, 7(2), 77–83.
Singer, T. (2008). Understanding Others: Brain Mechanisms of Theory of Mind and
Empathy. In Neuroeconomics: Decision Making and the Brain (pp. 251–268).
Gallagher, H. L., Happé, F., Brunswick, N., Fletcher, P. C., Frith, U. & Frith, C. D. (2000).
Reading the mind in cartoons and stories: an fMRI study of ‘theory of mind’ in verbal
and nonverbal tasks. Neuropsychologia 38, 11–21.
Brunet, E., Sarfati, Y., Hardy-Baylé, M. C. & Decety, J. (2000). A PET investigation of the
attribution of intentions with a nonverbal task. NeuroImage 11, 157–166.
Grossman, E. D. and Blake, R. (2001). Brain activity evoked by inverted and imagined
biological motion. Vision Research 41, 1475–1482.
Narumoto, J., Okada, T., Sadato, N., Fukui, K. & Yonekura, Y. (2001). Attention to emotion
modulates fMRI activity in human right superior temporal sulcus. Cognitive Brain
Research 12, 225–231.
Allison, T., Puce, A. & McCarthy, G. (2000). Social perception from visual cues: role of the
STS region. Trends in Cognitive Sciences, 4, 267–278.
Nakamura, K. et al. (2000). Functional delineation of the human occipito-temporal areas
related to face and scene processing: a PET study. Brain, 123, 1903–1912.
Nakamura, K., Kawashima, R., Sato, N., Nakamura, A., Sugiura, M., Kato, T., Hatano, K.,
Ito, K., Fukuda, H., Schormann, T. & Zilles, K. (2001). Neural substrates of familiar
voices: a PET study. Neuropsychologia 39, 1047–1054.
Dolan, R. J., Lane, R., Chua, P. & Fletcher, P. (2000). Dissociable temporal lobe activations
during emotional episodic memory retrieval. NeuroImage 11, 203–209.
Fink, G. R., Markowitsch, H. J., Reinkemeier, M., Bruckbauer, T., Kessler, J. & Heiss, W.
(1996). Cerebral representation of one’s own past: neural networks involved in
autobiographical memory. The Journal of Neuroscience, 16, 4275–4282.
Frith, U. & Frith, C. D. (2003). Development and neurophysiology of mentalizing.
Philosophical Transactions of the Royal Society of London B: Biological Sciences,
358(1431), 459–473.
Gallagher, H. L., Jack, A. I., Roepstorff, A. & Frith, C. D. (2002). Imaging the intentional
stance. NeuroImage, 16, 814–821.
McCabe, K., Houser, D., Ryan, L., Smith, V. & Trouard, T. (2001). A functional imaging
study of cooperation in two-person reciprocal exchange. Proc. Natl. Acad. Sci. U. S.
A. 98, 11832–11835.
Rowe, A. D., Bullock, P. R., Polkey, C. E. & Morris, R. G. (2001). ‘Theory of mind’
impairments and their relationship to executive functioning following frontal lobe
excisions. Brain 124, 600–616.
Stuss, D. T., Gallup, G. G. Jr. & Alexander, M. P. (2001). The frontal lobes are necessary for
‘theory of mind’. Brain 124, 279–286.
Critchley, H. D., Corfield, D. R., Chandler, M. P., Mathias, C. J. & Dolan, R. J. (2000).
Cerebral correlates of autonomic cardiovascular arousal: a functional neuroimaging
investigation in humans. J. Physiol. London 523, 259–270.
Critchley, H. D., Mathias, C. J. & Dolan, R. J. (2001). Neural activity in the human brain
relating to uncertainty and arousal during anticipation. Neuron 29, 537–545.
Duncan, J. & Owen, A. M. (2000). Common regions of the human frontal lobe recruited by
diverse cognitive demands. Trends in Neurosciences, 23, 475–483.
Barch, D. M., Braver, T. S., Akbudak, E., Conturo, T., Ollinger, J. & Snyder, A. (2001).
Anterior cingulate cortex and response conflict: effects of response modality and
processing domain. Cereb. Cortex 11, 837–848.
Lane, R. D., Fink, G. R., Chau, P. M. & Dolan, R. J. (1997). Neural activation during
selective attention to subjective emotional responses. NeuroReport 8, 3969–3972.
Kircher, T. T., Senior, C., Phillips, M. L., Rabe-Hesketh, S., Benson, P. J., Bullmore, E. T.,
Brammer, M., Simmons, A., Bartels, M. & David, A.S. (2000). Recognizing one’s
own face. Cognition 78, B1–B15.
Baron-Cohen, S., Ring, H. A., Wheelwright, S., Bullmore, E. T., Brammer, M. J., Simmons,
A. & Williams, S. C. (1999). Social intelligence in the normal and autistic brain: an
fMRI study. Eur. J. Neurosci. 11, 1891–1898.
Baron-Cohen, S., Ring, H., Moriarty, J., Schmitz, B., Costa, D. & Ell, P. (1994). The brain
basis of theory of mind: the role of the orbitofrontal region. British Journal of
Psychiatry, 165, pp. 640–649.
Rizzolatti, G., Fadiga, L., Gallese, V. & Fogassi, L. (1996). Premotor cortex and the
recognition of motor actions. Cognitive Brain Research, 3(2), 131–141.
Gallese, V. & Goldman, A. (1998). Mirror Neurons and the Simulation Theory of Mind-
Reading. Trends in Cognitive Sciences, 2(12), 493–501.
Grezes, J. & Decety, J. (2001). Functional Anatomy of Execution, Mental Simulation,
Observation, and Verb Generation of Actions: A Meta-analysis. Human Brain
Mapping, 12(1), 1–19.
Williams, J. H., Whiten, A., Suddendorf, T. & Perrett, D. I. (2001). Imitation, mirror neurons
and autism. Neurosci. Biobehav. Rev. 25, 287– 295.
Völlm, B. A., Taylor, A. N., Richardson, P., Corcoran, R., Stirling, J., McKie, S., Deakin, J.
F. & Elliott, R. (2005). Neuronal correlates of theory of mind and empathy: A
functional magnetic resonance imaging study in a nonverbal task. NeuroImage,
29(1), 90–98.
Blair, J., Sellars, C., Strickland, I., Clark, F., Williams, A., Smith, M. & Jones, L. (1996).
Theory of mind in the psychopath. Journal of Forensic Psychiatry, 7, 15–25.
Preston, S. D. & de Waal, F. B. (2002). Empathy: Its Ultimate and Proximate Bases.
Behavioral and Brain Sciences, 25(1), 1–72.
Singer, T. & Fehr, E. (2005). The Neuroeconomics of Mind Reading and Empathy. American
Economic Review, 95(2): 340-345.
Zermelo, E. (1913). Über eine Anwendung der Mengenlehre auf die Theorie des
Schachspiels. Proc. Fifth Congress of Mathematicians (Cambridge 1912),
Cambridge University Press, 501–504.
Cournot, A. (1838). Recherches sur les Principes Mathematiques de la Theorie des Richesses.
Paris: Hachette.
Von Neumann, J. & Morgenstern, O. (1944). Theory of Games and Economic Behavior. 2nd
ed. Princeton: Princeton University Press.
Myerson, R. B. (1996). Nash equilibrium and the history of economic theory. Journal of
Economic Literature, 37(3), 1067–1082.
Myerson, R. B. (1978). Refinements of the Nash equilibrium concept. International Journal
of Game Theory, 7(2), 73–80.
Van Damme, E. (1989). Stable equilibria and forward induction. Journal of Economic
Theory, 48(2), 476–496.
Doucouliagos, C. (1994). A Note on the Evolution of Homo Economicus. Journal of
Economic Issues, 28(3), 877–883.
Binmore, K. (1987). Modeling Rational Players Part I. Economics and Philosophy, 3,
179–214.
Camerer, C. F. (1997). Progress in Behavioral Game Theory. The Journal of Economic
Perspectives, 11(4), 167–188.
Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality.
American Psychologist, 58(9), 697–720.
Schelling, T. C. (1960). The strategy of conflict. Cambridge, MA: Harvard Univ. Press.
Rapoport, A. (1967). Escape from paradox. Scientific American, 217, 50-56.
Goffman, E. (1970). Strategic interaction. Oxford: Blackwell.
Rilling, J. K., Sanfey, A. G., Aronson, J. A., Nystrom, L. E. & Cohen, J. D. (2004). The
neural correlates of theory of mind within interpersonal interactions. NeuroImage,
22(4), 1694–1703.
Artinger, F., Exadaktylos, F., Koppel, H. & Sääksvuori, L. (2010). Unravelling fairness in
simple games? The role of empathy and theory of mind. Jena economic research
papers, No. 2010,037.
Rilling, J., Gutman, D., Zeh, T., Pagnoni, G., Berns, G. & Kilts, C. (2002). A Neural Basis
for Social Cooperation. Neuron, 35, 395–405.
Sanfey, A. G., Rilling, J. K., Aronson, J. A., Nystrom, L. E. & Cohen, J. D. (2003). The
neural basis of economic decision-making in the ultimatum game. Science, 300,
1755–1758.
King-Casas, B., Tomlin, D., Anen, C., Camerer, C. F., Quartz, S. & Montague, P. R. (2005).
Getting to Know You: Reputation and Trust in a Two-Person Economic Exchange.
Science, 308(5718), 78–83.
Takagishi, H., Kameshima, S., Schug, J., Koizumi, M. & Yamagishi, T. (2010). Theory of
Mind enhances the preference for fairness. Journal of Experimental Child
Psychology, 105(1-2), 130–137.
Sally, D. & Hill, E. (2006). The development of interpersonal strategy: Autism, theory-of-
mind, cooperation and fairness. Journal of Economic Psychology, 27, 73–97.
Fischbacher, U. & Gächter, S. (2010). Social Preferences, Beliefs, and the Dynamics of
Free Riding in Public Goods Experiments. American Economic Review, 100(1),
541–556.