Goel
October 27, 2007
1 of 26
Fractionating the System of Deductive Reasoning
Vinod Goel
Department Of Psychology
York University, Toronto, Ontario M3J 1P3 Canada
Keywords: reasoning, frontal lobes, neuropsychology, neuroimaging, focal lesions
Address for Correspondence:
Dr. Vinod Goel
Dept. of Psychology
York University
Toronto, Ont.
Canada, M3J 1P3
Fax: 416-736-5814
Email: vgoel@yorku.ca
Goel, V. (in press). Fractionating the System of Deductive Reasoning. In Neural Correlates of
Thinking, Eds. E. Kraft, B. Guylas, & E. Poppel. Springer Press.
Draft, October 27, 2007
Introduction
Reasoning is the cognitive activity of drawing inferences from given information. All
reasoning involves the claim that one or more propositions (the premises) provide some
grounds for accepting another proposition (the conclusion). A subset of arguments are called
deductive arguments. Such arguments can be evaluated for validity, a relationship between
premises and conclusion involving the claim that the premises provide absolute grounds for
accepting the conclusion (i.e. if the premises are true, then the conclusion must be true). A
key feature of deduction is that conclusions are contained within the premises and are
logically independent of the content of the propositions. As such, deductive reasoning is a
good candidate for a self-contained cognitive module. In fact, it is probably the best and only
candidate among higher-level reasoning/thinking processes for a cognitive module...
Two theories of reasoning dominate the cognitive literature and provide different
characterizations of the nature of this "module". They differ with respect to the competence
knowledge they draw upon, the mental representations they postulate, and the mechanisms
they invoke. Mental logic theories (Braine, 1978; Henle, 1962; Rips, 1994) postulate that
reasoners have an underlying competence knowledge of the inferential role of the closed-form,
or logical, terms of the language (e.g. ‘all’, ‘some’, ‘none’, ‘and’, etc.). The internal
representation of arguments preserves the structural properties of the propositional strings in
which the premises are stated. A mechanism of inference is applied to these representations
to draw conclusions from premises. Essentially, the claim is that deductive reasoning is a
rule-governed process defined over syntactic strings.
By contrast, mental model theory (Johnson-Laird, 1983; Johnson-Laird & Byrne, 1991)
postulates that reasoners have an underlying competence knowledge of the meaning of the
closed-form, or logical, terms of the language (e.g. ‘all’, ‘some’, ‘none’, ‘and’, etc.)1 and use
this knowledge to construct and search alternative scenarios.2 The internal representation of
arguments preserves the structural properties of the world (e.g. spatial relations) that the
propositional strings are about, rather than the structural properties of the propositional strings
themselves. The basic claim is that deductive reasoning is a process requiring spatial
manipulation and search.
When studies investigating the neural basis of reasoning began a decade ago, a natural,
inevitable starting point was to search for a "reasoning module" in the context of one of these
two theories (Goel, Gold, Kapur, & Houle, 1997, 1998; Osherson et al., 1998). A system built
upon visuospatial processes would be consistent with mental model theory while a system built
upon linguistic/syntactic processes would be consistent with mental logic theory (Goel, 2005).
After a decade of neuroimaging and patient studies of logical reasoning, the data do not
support this picture. I will return to this issue later in the chapter.
Our approach, informed as much by neuropsychology as cognitive theory, has been
somewhat different. We have looked for double dissociations/breakdowns (i.e. causal joints) in
the neural machinery underlying reasoning by examining behavioral data from neurological
patients with focal lesions, and neuroimaging data from normal, healthy controls. In terms of the
former, we are unaware of a single report of neurological patients with a selective reasoning
deficit. In terms of the latter, we find some commonality and some variability in neural networks
engaged by reasoning tasks across various studies (see Table 1) (Goel, 2007). In the context of
this experience, we find very little evidence for a "logic module" in the brain. However,
examining the data in the absence of a commitment to a specific cognitive theory of reasoning
suggests that human reasoning is underwritten by a fractionated system that is dynamically
configured in response to specific task and environmental cues. Three of these lines of
fractionation are reviewed below. Existing cognitive theories are assessed in light of these data
and an alternative explanatory framework is provided.

1 Whether there is any substantive difference between "knowing the inferential role" and "knowing the meaning" of
the closed-form terms, and thus between the two theories, is a moot point debated in the literature.

2 See Newell (1980) for a discussion of the relationship between search and inference.
Systems for Dealing with Familiar and Unfamiliar Material
As mentioned in the introduction, deductive arguments are valid as a function of their
logical form. The content of the argument is irrelevant for the determination of validity. Despite
this logical truism, one of the most robust, and problematic, findings in the psychology of
reasoning is that content has a significant effect on the reasoning process. To explore this issue,
we have carried out a series of studies, using syllogisms and transitive inferences, in which we
held logical form constant and systematically manipulated the content of the arguments. These
studies indicate that two distinct systems are involved in reasoning about familiar and unfamiliar
material. More specifically, a left lateralized frontal-temporal conceptual/language system
(Figure 1a) processes familiar, conceptually coherent material while a bilateral parietal
visuospatial system, with some dorsal frontal involvement (Figure 1b), processes unfamiliar,
nonconceptual or conceptually incoherent material.
The involvement of the left frontal-temporal system in reasoning about familiar or
meaningful content has also been demonstrated in neurological patients with focal unilateral
lesions to prefrontal cortex (parietal lobes intact), using the Wason card selection task (Goel,
Shuren, Sheesley, & Grafman, 2004). These patients performed as well as normal controls on
the arbitrary version of the task, but unlike the normal controls they failed to benefit from the
presentation of familiar content in the meaningful version of the task. In fact, the latter result was
driven by the exceptionally poor performance of patients with left frontal lobe lesions. Patients
with lesions to right prefrontal cortex performed as well as normal controls.
There is even some evidence to suggest that the response of the frontal-temporal system
to familiar situations may be content specific to some degree (in keeping with some content
specificity in the organization of temporal lobes) (McCarthy & Warrington, 1990). For example,
while middle temporal lobe regions are activated when reasoning about categorical statements
such as "All dogs are pets", in the case of making transitive inferences about familiar spatial
environments reasoning is mediated by the posterior hippocampus and parahippocampal gyrus,
the same structures that underwrite spatial memory and navigation tasks (Goel, Makale, &
Grafman, 2004). Perhaps the most robust example of content specificity in the organization of
the heuristic system is the "Theory of Mind" reasoning system identified by a number of studies
(Fletcher et al., 1995; Goel, Grafman, Sadato, & Hallet, 1995).
Systems for Dealing with Conflict and Belief-Bias
A robust consequence of the content effect is that subjects perform better on reasoning
tasks when the logical conclusion is consistent with their beliefs about the world (arguments 4-6
in Table 2) than when it is inconsistent with their beliefs (arguments 7-9 in Table 2) (Evans,
Barston, & Pollard, 1983; Wilkins, 1928). In the former case, subjects' beliefs facilitate the task
while in the latter case they inhibit it. Within inhibitory belief trials the prepotent response is the
incorrect response associated with belief-bias. Incorrect responses in such trials indicate that
subjects failed to detect the conflict between their beliefs and the logical inference and/or inhibit
the prepotent response associated with the belief-bias. These belief-biased responses activate
VMPFC (BA 11, 32), highlighting its role in non-logical, belief-based responses. The correct
response indicates that subjects detected the conflict between their beliefs and the logical
inference, inhibited the prepotent response associated with the belief-bias, and engaged the
formal reasoning mechanism. The detection of this conflict requires engagement of right
lateral/dorsal lateral prefrontal cortex (BA 45, 46) (see Figure 2) (Goel, Buchel, Frith, & Dolan,
2000; Goel & Dolan, 2003; Prado & Noveck, 2007). This conflict detection role of right
lateral/dorsal PFC is a generalized phenomenon that has been documented in a wide range of
paradigms in the cognitive neuroscience literature (Caramazza, Gordon, Zurif, & DeLuca, 1976;
Fink et al., 1999; Stavy, Goel, Critchley, & Dolan, 2006).
One particularly striking demonstration of this system using lesion data was carried out
by Caramazza et al. (1976) using simple two-term reasoning problems such as
the following: "Mike is taller than George; who is taller?" They reported that left hemisphere
patients were impaired in all forms of the problem but – consistent with imaging data (Goel et
al., 2000; Goel & Dolan, 2003) – right hemisphere patients were only impaired when the form of
the question was incongruent with the premise (e.g. who is shorter?).
Systems for Dealing with Certain and Uncertain Information
Cognitive theories of reasoning do not typically postulate different mechanisms for
reasoning with complete and incomplete information. Consider the arguments 1-3 in Table 2.
All major cognitive theories of reasoning assume that the same cognitive system would deal with
each of these inferences.
However, patient and neuroimaging data suggest that different neural systems underwrite
these inferences. Goel et al. (2006) tested neurological patients with focal unilateral frontal lobe
lesions (see Figure 3) on a transitive inference task while systematically manipulating
completeness of information regarding the status of the conclusion (i.e. determinate and
indeterminate trials). The results demonstrated a double dissociation such that patients with left
PFC lesions were selectively impaired in trials with complete information (i.e. determinate
trials), while patients with right PFC lesions were selectively impaired in trials with incomplete
information (i.e. indeterminate trials). At the cortical level, this strongly indicates hemispheric
asymmetry in systems dealing with reasoning about determinate and indeterminate situations. At
the cognitive level, it suggests that different mechanisms may be involved.
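The logical distinction between determinate and indeterminate trials can be made concrete with a small sketch. This is an illustration of the inference problem itself, not of the experimental stimuli; the helper name `classify` and the enumeration strategy are invented for the example:

```python
from itertools import permutations

def classify(premises, conclusion):
    """Classify a transitive conclusion as 'valid', 'invalid', or
    'indeterminate' by enumerating all orderings consistent with the premises.
    A premise (a, b) is read as 'a is taller than b'."""
    items = {x for pair in premises for x in pair} | set(conclusion)
    truths = []
    for order in permutations(items):
        rank = {x: i for i, x in enumerate(order)}  # lower rank = taller
        if all(rank[a] < rank[b] for a, b in premises):
            truths.append(rank[conclusion[0]] < rank[conclusion[1]])
    if all(truths):
        return "valid"          # true in every consistent ordering (determinate)
    if not any(truths):
        return "invalid"        # false in every consistent ordering (determinate)
    return "indeterminate"      # the premises leave the conclusion open

# Determinate: A taller than B, B taller than C -> A taller than C
print(classify([("A", "B"), ("B", "C")], ("A", "C")))  # valid
# Indeterminate: A taller than B, A taller than C -> is B taller than C?
print(classify([("A", "B"), ("A", "C")], ("B", "C")))  # indeterminate
```

The second call is the kind of trial on which, in the study above, patients with right PFC lesions were selectively impaired: no single ordering of the terms is forced by the premises, so the uncertainty must be actively maintained.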
Implications for Cognitive Theories of Reasoning
These types of data are difficult to accommodate within "unitary" theories such as mental
model theory and mental logic theory, unless one (1) adopts a position whereby one of the
identified networks constitutes "real" reasoning and all the others are not really reasoning; (2) is
prepared to cherry pick the results (Knauff, Fangmeier, Ruff, & Johnson-Laird, 2003); or (3)
question the competence of existing studies (Monti, Osherson, Martinez, & Parsons, 2007).
An intuitively palatable alternative to these theories -- to explain at least part of the
results -- is provided by dual mechanism accounts. At a very crude level, dual mechanism
theories make a distinction between formal, deliberate, rule-based processes and implicit,
unschooled, automatic processes. However, dual mechanism theories come in various flavours
that differ on the exact nature and properties of these two systems. Theories differentially
emphasize explicit and implicit processes (Evans & Over, 1996), conscious and preconscious
processes (Stanovich & West, 2000), formal and heuristic processes (Newell & Simon, 1972),
and associative and rule based processes (Goel, 1995; Sloman, 1996). The relationship among
these various proposals has yet to be fully clarified.
In previous writings I have suggested that the dissociation between systems dealing with
familiar or meaningful information that we have beliefs about and unfamiliar or nonmeaningful
information that we have no beliefs about is broadly consistent with the above set of ideas from
dual mechanism theories (Goel, 2005).
However, as these theories are developing, some
discrepancies are emerging between their predictions and neuropsychological data. Furthermore,
the development/clarification of the theories is exposing critical features that in some cases do
violence to our commonsense notions of rationality. Here I will discuss the dual mechanism
ideas as developed by Evans and Over (Evans, 2003; Evans & Over, 1996) and further expanded
by Stanovich (2004).
The two systems postulated by these researchers have come to be widely known as
System 1 and System 2. System 1 constitutes a collection of processes whose application is fast,
automatic (we have little or no conscious control over them), and mandatory, once triggered by
relevant stimuli (i.e. the trigger is causally sufficient for the response). They generally have an
evolutionary origin, but some automatization through practice is allowed for. The classic
example of a System 1 mechanism is the reflex arc. Like the reflex arc, all System 1 processes
provide a causal link between trigger and response, belong to the old part of the brain, and are
driven by mechanisms that drive other automatic behaviors across the evolutionary spectrum,
like foraging and mating (Stanovich, 2004). This hypothesis comes in a moderate and a more
extreme version. On
the "moderate" account we still have a formal reasoning system that is augmented by these
evolutionarily useful modules (Over, 2002). On the more extreme version (massive modularity)
there is no formal system, just a collection of numerous modules evolved to solve specific
evolutionary problems (Duchaine, Cosmides, & Tooby, 2001). It is the former view that I'm
discussing here. A discussion of the latter is beyond the scope of this review.
I want to suggest that this particular division and subsequent characterization of System 1
is inconsistent with both traditional views of rationality and the neuropsychological data. In
terms of the neuropsychological data, the neural correlates of System 1 (insofar as belief-bias is
part of the system) are underwritten by high level language/conceptual systems, arguably unique
to humans. They are not "reflex arc" type mechanisms belonging to the "old part of the brain", as
proposed by the above accounts.
Conceptually, there are two problems. First, just because a process is innate,
automatic, and mandatory does not imply that it is part of the "old brain" system. Second, the
description of reasoning processes as "innate, automatic, and mandatory" is an unfortunate
mischaracterization of the phenomenon. The System 1 mechanisms described above cannot
even be candidates for rational systems, as this term is generally understood. In the Western
literature, characterizations of rationality have normally included the following features:

- rationality is about means to an end
- in rational behavior there exists a "gap" between stimulus and response; the antecedent condition is never sufficient for action
- rationality is ascribed to individual behavior
- a rational choice is not simply a selection, it is a selection for a reason (Bermudez, 2002)
I take these to be widely accepted, unproblematic constraints on rationality. But the
above characterization of System 1 is inconsistent with the existence of a "gap" between stimulus
and response. Insofar as System 1 processes are such that the environmental trigger is causally
sufficient for the response, all behavior mediated by these processes is removed from the realm
of rationality.
An excellent example of such a behavior is the eye blink reflex arc. If I suddenly snap my
fingers close to your eyes, you will blink. If I prewarn you of my intention prior to snapping my
fingers close to your eyes, you will still blink. Even if I prewarn you, and assure you that I will
not touch your eyes (and you trust me and believe me), you will still blink when I snap my
fingers. Even if I prewarn you, assure you, and offer you large monetary rewards for not
blinking, you will still blink when I snap my fingers close to rise. The snapping of my fingers in
the proximity of your eyes is causally sufficient for you to blink.
Compare this to an example from the belief-bias phenomenon. Suppose a subject
evaluates the following (valid) argument as invalid: "apples are red fruit; all red fruit are
poisonous; therefore all apples are poisonous".
There are several important dissimilarities
between such a response and a reflex like an eye blink. First, subjects can give sensible reasons
for their response, for example, "I was focusing on x instead of y". Second, when the correct
answer is pointed out to them they can acknowledge that they've made a mistake and apply the
analytic system next time around and generate the normative response. The phenomenology of
the belief-bias effect is very different from that of an eye blink. Reasoning fallacies are simply
not automatic and mandatory in the same sense as reflex arcs. To think otherwise is to invite
conceptual confusion.
I suspect that these researchers are being misled by their behavioral individuation of
systems: "if it is slow and deliberate it belongs to one category; if it is fast and "automatic" it
belongs to the other category". Behavioral categories are superficial and largely uninteresting
for scientific purposes. For example, one can make a category of "all things that move fast" and
another category of "all things that move slow". In the first category one might put things such
as cars, planes, comets, and electromagnetic waves, while in the second category one might
include bikes, bears, fish, rocks rolling down hills, etc. Notice the problems here. First, it is
unclear how fast or slow one has to move for membership in the respective category. Do cars
move fast enough to be in the "fast" category? They move fast when compared to bikes and
bears, but not very fast when compared to electromagnetic waves. Which category do birds
belong to? Second, while these categories may be of interest for some purpose, they are of little
interest in terms of understanding locomotion because members of the category do not share
underlying causal principles of locomotion. Scientifically interesting categories individuate
along causal lines. In light of these concerns, I would say that this particular development of
dual mechanism theory is not a candidate to explain the neuropsychological data, and in fact, is
conceptually confused.
An alternative formulation of dual processing theory stems from Simon's notion of
Bounded Rationality (Simon, 1983) and the incorporation of this idea into Newell & Simon's
(1972) models of human problem solving. The key idea was the introduction of the notion of the
problem space, a computational modeling space shaped by the constraints imposed by the
structures of a time- and memory-bound serial information processing system and the task
environment. The built-in strategies for searching this problem space include such content-free
universal methods as Means Ends Analysis, Breadth First Search, Depth First Search, etc. But
the universal applicability of these methods comes at the cost of enormous computational
resources. Given that the cognitive agent is a time- and memory-bound serial processor, it
would often not be able to respond in real time if it had to rely on formal, context-independent
processes. So the first line of defense for such a system is the deployment of task-specific
knowledge to circumvent formal search procedures.
Consider the following example. I arrive at the airport in Paris and need to make a
telephone call before catching my connecting flight in an hour. I notice that the public
telephones require a special calling card. The airport is a multistory building with shops on
several floors. If I know nothing about France I could start on the top floor at one end of the
building, enter a store and ask for a telephone card. If I find one, I can terminate my search and
make my phone call. If I don't, I can proceed to the next store, until I have visited every store on
the floor (or found a telephone card). I could then go down to the next floor and continue in the
same fashion. Following this breadth first (British Museum) search strategy, I will systematically
visit each store and find a telephone card if one of them sells it. The search will terminate when I
have found the telephone card, or visited the last store. This may take several hours, and I may
miss my connecting flight. If however, I am knowledgeable about France, I may have a specific
piece of knowledge that may help me circumvent this search: telephone cards are sold by the
Tabac shop. If I know this, I merely have to search the directory of shops, find the Tabac shop,
and go directly there, circumventing the search procedure. Notice this knowledge is very
powerful, but very situation specific. It will not help me find a pair of socks in Paris or make a
telephone call in Delhi.
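The contrast between the two strategies can be sketched in code. This is a toy illustration only; the airport layout, shop names, and function names are invented for the example:

```python
# Toy illustration of a universal (exhaustive) search strategy versus a
# situation-specific heuristic. The layout and shop names are invented.

airport = {
    "floor 2": ["bookstore", "cafe", "souvenir shop"],
    "floor 1": ["pharmacy", "tabac", "bakery"],
}
sells_phone_cards = {"tabac"}  # the task-specific fact a knowledgeable agent has

def exhaustive_search(airport):
    """Content-free strategy: visit every store in turn until one succeeds."""
    visits = 0
    for floor, shops in airport.items():
        for shop in shops:
            visits += 1                      # each store visit costs time
            if shop in sells_phone_cards:
                return shop, visits
    return None, visits

def heuristic_lookup(airport, known_vendor="tabac"):
    """Situation-specific strategy: consult the directory for the known vendor."""
    for floor, shops in airport.items():
        if known_vendor in shops:
            return known_vendor, 1           # one directory lookup, no store visits
    return None, 1

print(exhaustive_search(airport))   # ('tabac', 5)
print(heuristic_lookup(airport))    # ('tabac', 1)
```

The heuristic succeeds in one step but is useless outside its situation (it will not find socks in Paris), while the exhaustive method works anywhere at a much higher cost: the trade-off the text describes.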
On this account, heuristics are situation specific, learned and consciously applied
procedures. If present, they are given priority. If they are not available we fall back upon more
general/universal (but computationally intensive) strategies. There are some important
differences between the Newell and Simon account and the previous account. The most
important have to do with ontological commitments. Newell and Simon are not necessarily
talking about two distinct systems/modules with different evolutionary history, but simply two
different strategies for processing information. In this sense it is a much weaker account, but it is
more consistent with our intuitions about rationality and the neuropsychological data.
The dual processing approaches, whatever their particular features, only account for one
of several sets of dissociations that we are finding in the neuropsychological data on reasoning.
Therefore, we need to start talking about multiple reasoning systems rather than simply dual
reasoning systems. More generally, in terms of viewing reasoning as a dynamically configured,
fractionated system, I would like to propose the following type of view.
We explain neuropsychological data in terms of an interplay between Gazzaniga's "left
hemisphere interpreter" (Gazzaniga, 2000) and right PFC systems for conflict detection and
uncertainty maintenance. The function of this interpreter is to make sense of the environment by
completing patterns, filling in the gaps in the available information. I don't think the system is
specific to particular types of patterns. It doesn't care whether the pattern is logical, causal,
social, statistical, etc. It simply abhors uncertainty and will complete any pattern, often
prematurely, to the detriment of the organism. The roles of the conflict detection and uncertainty
maintenance systems are respectively, to detect conflicts in patterns and actively maintain
representations of indeterminate/ambiguous situations and bring them to the attention of the
interpreter. While there is considerable evidence for the existence of these systems, their time
course of processing and their interaction are largely unknown. One speculative account of how
processing of arguments might proceed through these systems is presented in Figure 4.
Consider the nine possible types of three-term transitive arguments reproduced in Table
2. Arguments (1-3), which subjects can have no beliefs about, are relegated to the formal/universal
methods processing system. This system is continually monitored by a conflict detector (right
DLPFC) and uncertainty maintenance system (right VLPFC). In the case of argument (2) an
inconsistency will be detected between premises and the conclusion and an "invalid"
determination made. In the case of arguments (1) & (3) there is no conflict. Further pattern
completion should validate the consistency of (1) resulting in a "valid" response. In the case of
(3) the uncertainty maintenance system will highlight the uncertainty inherent in the premises
and inhibit the left hemisphere interpreter from making unwarranted assumptions, eventually
allowing an "invalid" response to be generated.
Arguments (4-9), containing propositions that subjects have beliefs about, are initially
passed on to the left frontal-temporal system for heuristic processing. However, if a conflict is
detected between the believability of the conclusion and the logical response (7-9) the processing
is rerouted to, or at least shared with, the formal pattern matcher in the parietal system. In the
formal system these arguments are dealt with in a similar manner as arguments (1-3), except for
the following important differences: (i) the conflict detection system has to continually monitor
for belief-logic conflict while also monitoring for logical inconsistency; and (ii) the fact that
subjects have beliefs about the content will also make the task of the uncertainty maintenance
system much more difficult. Often it will fail to inhibit the left hemisphere interpreter. Both
these situations place greater demands on the cognitive system, resulting in higher reaction times
and lower accuracy scores in these types of trials.
Arguments (4-6) are passed to the left frontal-temporal heuristic/conceptual system and
are largely (though not necessarily exclusively) processed by this system. The response driven by
the believability of the conclusion is the same as the logical response, facilitating conflict
detection in (5) and pattern completion in (4). Even the "invalid" response in (6) is facilitated, but for the
wrong reason. As above, the unbelievability of the conclusion makes it difficult for the
uncertainty maintenance system to maintain uncertainty of the conclusion, but in this case failure
facilitates the correct response.
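The speculative routing just described can be caricatured as a dispatch procedure. Everything below is an illustrative assumption, a restatement of the verbal account in control-flow form, not a published model; the function and labels are invented:

```python
# Caricature of the speculative routing through the fractionated system.
# All names and the control flow are illustrative assumptions only.

def evaluate_argument(has_beliefs_about_content, belief_logic_conflict,
                      premises_determinate, logically_consistent):
    """Route an argument through heuristic vs. formal systems and
    return (system_used, response)."""
    if has_beliefs_about_content and not belief_logic_conflict:
        # Arguments (4-6): the left frontal-temporal heuristic system suffices.
        system = "left frontal-temporal (heuristic)"
    else:
        # Arguments (1-3), and (7-9) once the right-DLPFC conflict detector
        # fires: processing is (re)routed to the formal parietal system.
        system = "bilateral parietal (formal)"

    if not logically_consistent:
        response = "invalid"   # detector flags premise-conclusion inconsistency
    elif premises_determinate:
        response = "valid"     # pattern completion validates consistency
    else:
        # Right-VLPFC uncertainty maintenance blocks premature pattern
        # completion by the left-hemisphere interpreter.
        response = "invalid"
    return system, response

# Argument type (1): unfamiliar, determinate, consistent -> formal, "valid"
print(evaluate_argument(False, False, True, True))
```

On this caricature, the performance costs in inhibitory belief trials (7-9) correspond to taking the rerouted branch while the conflict detector and uncertainty maintenance systems carry extra monitoring load.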
Conclusion and Summary
Considerable progress has been made over the past decade in our understanding of the
neural basis of logical reasoning. In broad terms, these data are telling us that the brain is
organized in ways not anticipated by cognitive theories of reasoning. We should be receptive to
this possibility. In particular, we need to confront the possibility that there may be no unitary
reasoning system in the brain. Rather, the evidence points to a fractionated system that is
dynamically configured in response to certain task and environmental cues. We have reviewed
three lines of fractionation of the system of reasoning, discussed their implications for theories of
reasoning and speculated on how they may interact during the processing of various types of
logical arguments. There is of course no claim that the systems are exhaustive. The main point
is that dissociation data provide important information regarding the causal joints of the
cognitive system. Sensitivity to these data will move us beyond the sterility of the mental models
vs. mental logic debate and further the development of cognitive theories of reasoning.
References
Acuna, B. D., Eliassen, J. C., Donoghue, J. P., & Sanes, J. N. (2002). Frontal and Parietal Lobe
Activation during Transitive Inference in Humans. Cereb Cortex, 12(12), 1312-1321.
Bermudez, J. L. (2002). Rationality and Psychological Explanation without Language. In J. L.
Bermudez & A. Millar (Eds.), Reason and Nature: Essays in the Theory of Rationality
(pp. 233-264). New York: Oxford University Press.
Braine, M. D. S. (1978). On the Relation Between the Natural Logic of Reasoning and Standard
Logic. Psychological Review, 85(1), 1-21.
Canessa, N., Gorini, A., Cappa, S. F., Piattelli-Palmarini, M., Danna, M., Fazio, F., & Perani, D.
(2005). The effect of social content on deductive reasoning: an fMRI study. Hum Brain
Mapp, 26(1), 30-43.
Caramazza, A., Gordon, J., Zurif, E. B., & DeLuca, D. (1976). Right-hemispheric damage and
verbal problem solving behavior. Brain Lang, 3(1), 41-46.
Duchaine, B., Cosmides, L., & Tooby, J. (2001). Evolutionary psychology and the brain. Curr
Opin Neurobiol, 11(2), 225-230.
Evans, J. S. (2003). In two minds: dual-process accounts of reasoning. Trends Cogn Sci, 7(10),
454-459.
Evans, J. S. B. T., Barston, J., & Pollard, P. (1983). On the conflict between logic and belief in
syllogistic reasoning. Memory and Cognition, 11, 295-306.
Evans, J. S. B. T., & Over, D. E. (1996). Rationality and Reasoning. NY: Psychology Press.
Fangmeier, T., Knauff, M., Ruff, C. C., & Sloutsky, V. (2006). FMRI evidence for a three-stage
model of deductive reasoning. J Cogn Neurosci, 18(3), 320-334.
Fink, G. R., Marshall, J. C., Halligan, P. W., Frith, C. D., Driver, J., Frackowiak, R. S., & Dolan,
R. J. (1999). The neural consequences of conflict between intention and the senses.
Brain, 122(Pt 3), 497-512.
Fletcher, P. C., Happe, F., Frith, U., Baker, S. C., Dolan, R. J., Frackowiak, R. S. J., & Frith, C.
D. (1995). Other Minds In The Brain: A Functional Imaging Study Of "Theory Of Mind"
In Story Comprehension. Cognition, 57, 109-128.
Gazzaniga, M. S. (2000). Cerebral specialization and interhemispheric communication: does the
corpus callosum enable the human condition? Brain, 123(Pt 7), 1293-1326.
Goel, V. (1995). Sketches of Thought. Cambridge, MA: MIT Press.
Goel, V. (2005). Cognitive Neuroscience of Deductive Reasoning. In K. Holyoak & R. Morrison
(Eds.), Cambridge Handbook of Thinking and Reasoning (pp. 475-492). Cambridge:
Cambridge University Press.
Goel, V. (2007). Anatomy of deductive reasoning. Trends Cogn Sci, 11(10), 435-441.
Goel, V., Buchel, C., Frith, C., & Dolan, R. J. (2000). Dissociation of Mechanisms Underlying
Syllogistic Reasoning. NeuroImage, 12(5), 504-514.
Goel, V., & Dolan, R. J. (2001). Functional Neuroanatomy of Three-Term Relational Reasoning.
Neuropsychologia, 39(9), 901-909.
Goel, V., & Dolan, R. J. (2003). Explaining modulation of reasoning by belief. Cognition, 87(1),
B11-22.
Goel, V., & Dolan, R. J. (2004). Differential involvement of left prefrontal cortex in inductive
and deductive reasoning. Cognition, 93(3), B109-121.
Goel, V., Gold, B., Kapur, S., & Houle, S. (1997). The Seats of Reason: A Localization Study of
Deductive & Inductive Reasoning using PET (O15) Blood Flow Technique.
NeuroReport, 8(5), 1305-1310.
Goel, V., Gold, B., Kapur, S., & Houle, S. (1998). Neuroanatomical Correlates of Human
Reasoning. Journal of Cognitive Neuroscience, 10(3), 293-302.
Goel, V., Grafman, J., Sadato, N., & Hallet, M. (1995). Modelling Other Minds. NeuroReport,
6(13), 1741-1746.
Goel, V., Makale, M., & Grafman, J. (2004). The hippocampal system mediates logical
reasoning about familiar spatial environments. J Cogn Neurosci, 16(4), 654-664.
Goel, V., Shuren, J., Sheesley, L., & Grafman, J. (2004). Asymmetrical involvement of frontal
lobes in social reasoning. Brain, 127(Pt 4), 783-790.
Goel, V., Tierney, M., Sheesley, L., Bartolo, A., Vartanian, O., & Grafman, J. (2006).
Hemispheric Specialization in Human Prefrontal Cortex for Resolving Certain and
Uncertain Inferences. Cereb Cortex.
Heckers, S., Zalesak, M., Weiss, A. P., Ditman, T., & Titone, D. (2004). Hippocampal activation
during transitive inference in humans. Hippocampus, 14(2), 153-162.
Henle, M. (1962). On the Relation Between Logic and Thinking. Psychological Review, 69(4),
366-378.
Houde, O., Zago, L., Mellet, E., Moutier, S., Pineau, A., Mazoyer, B., & Tzourio-Mazoyer, N.
(2000). Shifting from the perceptual brain to the logical brain: the neural impact of
cognitive inhibition training. J Cogn Neurosci, 12(5), 721-728.
Johnson-Laird, P. N. (1983). Mental Models: Towards a Cognitive Science of Language,
Inference, and Consciousness. Cambridge, Mass.: Harvard University Press.
Johnson-Laird, P. N., & Byrne, R. M. J. (1991). Deduction. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Knauff, M., Fangmeier, T., Ruff, C. C., & Johnson-Laird, P. N. (2003). Reasoning, models, and
images: behavioral measures and cortical activity. J Cogn Neurosci, 15(4), 559-573.
Knauff, M., Mulack, T., Kassubek, J., Salih, H. R., & Greenlee, M. W. (2002). Spatial imagery
in deductive reasoning: a functional MRI study. Brain Res Cogn Brain Res, 13(2), 203-212.
McCarthy, R. A., & Warrington, E. K. (1990). Cognitive Neuropsychology: A Clinical
Introduction. NY: Academic Press.
Monti, M. M., Osherson, D. N., Martinez, M. J., & Parsons, L. M. (2007). Functional
neuroanatomy of deductive inference: A language-independent distributed network.
Neuroimage, 37(3), 1005-1016.
Newell, A. (1980). Reasoning, Problem Solving, and Decision Processes: The Problem Space as
a Fundamental Category. In R. S. Nickerson (Ed.), Attention and Performance VIII.
Hillsdale, N.J.: Lawrence Erlbaum.
Newell, A., & Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, N.J.: Prentice-Hall.
Noveck, I. A., Goel, V., & Smith, K. W. (2004). The neural basis of conditional reasoning with
arbitrary content. Cortex, 40(4-5), 613-622.
Osherson, D., Perani, D., Cappa, S., Schnur, T., Grassi, F., & Fazio, F. (1998). Distinct brain loci
in deductive versus probabilistic reasoning. Neuropsychologia, 36(4), 369-376.
Over, D. (2002). The Rationality of Evolutionary Psychology. In J. L. Bermudez & A. Millar
(Eds.), Reason and Nature: Essays in the Theory of Rationality (pp. 187-207). New York:
Oxford University Press.
Parsons, L. M., & Osherson, D. (2001). New Evidence for Distinct Right and Left Brain Systems
for Deductive versus Probabilistic Reasoning. Cereb Cortex, 11(10), 954-965.
Prado, J., & Noveck, I. A. (2007). Overcoming perceptual features in logical reasoning: a
parametric functional magnetic resonance imaging study. J Cogn Neurosci, 19(4), 642-657.
Rips, L. J. (1994). The Psychology of Proof: Deductive Reasoning in Human Thinking.
Cambridge, MA: MIT Press.
Simon, H. A. (1983). Reason in Human Affairs. Stanford, California: Stanford University Press.
Sloman, S. A. (1996). The Empirical Case For Two Systems Of Reasoning. Psychological
Bulletin, 119(1), 3-22.
Stanovich, K. (2004). The Robot's Rebellion: Finding Meaning in the Age of Darwin. Chicago:
University of Chicago Press.
Stanovich, K. E., & West, R. F. (2000). Individual Differences in Reasoning: Implications for the
Rationality Debate. Behavioral & Brain Sciences, 22, 645-665.
Stavy, R., Goel, V., Critchley, H., & Dolan, R. (2006). Intuitive interference in quantitative
reasoning. Brain Res, 1073-1074, 383-388.
Wilkins, M. C. (1928). The effect of changed material on the ability to do formal syllogistic
reasoning. Archives of Psychology, 16(102), 5-83.
Table 1. Summary of particulars of 19 neuroimaging studies of deductive reasoning and reported regions of activation corresponding most closely to the main effect of reasoning. Numbers denote Brodmann Areas; RH = Right Hemisphere; LH = Left Hemisphere; Hi = Hippocampus; PSMA = Pre-Sensory-Motor Area. Blank cells indicate absence of activation in a region. "Stimuli modality" refers to the form and manner of presentation of the stimuli. Cerebellum activations are not noted in the table. Reproduced from Goel (2007).

Studies (organized by task)  | Scanning method | Stimuli modality
Transitivity (Explicit)
  Goel et al. (1998)         | PET             | visual, linguistic
  Goel & Dolan (2001)        | fMRI            | visual, linguistic
  Knauff et al. (2003)       | fMRI            | auditory, linguistic
  Goel et al. (2004)         | fMRI            | visual, linguistic
  Fangmeier et al. (2006)    | fMRI            | visual, nonlinguistic
Transitivity (Implicit)
  Acuna et al. (2002)        | fMRI            | visual, nonlinguistic
  Heckers et al. (2004)      | fMRI            | visual, nonlinguistic
Categorical Syllogisms
  Goel et al. (1998)         | PET             | visual, linguistic
  Osherson et al. (1998)     | PET             | visual, linguistic
  Goel et al. (2000)         | fMRI            | visual, linguistic
  Goel & Dolan (2003)        | fMRI            | visual, linguistic
  Goel & Dolan (2004)        | fMRI            | visual, linguistic
Conditionals (Simple)
  Noveck et al. (2004)       | fMRI            | visual, linguistic
  Prado & Noveck (2006)      | fMRI            | visual, linguistic
Conditionals (Complex)
  Houde et al. (2000)*       | PET             | visual, nonlinguistic
  Parsons et al. (2002)      | PET             | visual, linguistic
  Canessa et al. (2005)      | fMRI            | visual, linguistic
Mixed Stimuli
  Goel et al. (1997)         | PET             | visual, linguistic
  Knauff et al. (2001)       | fMRI            | auditory, linguistic

Notes: *Brodmann Areas not provided by authors.
[The per-region activation columns (occipital, parietal, and temporal lobes, basal ganglia, cingulate, and frontal lobes, each split RH/LH, with Brodmann areas) were scrambled in text extraction and cannot be reliably reconstructed here; see Goel (2007) for the complete table.]
Table 2. Three-term transitive arguments sorted into 9 categories. Rows give the content type; columns give the argument type (determinate valid, determinate invalid, indeterminate invalid). See text.

No meaningful content (content has no effect on task):
(1) Determinate, valid: City-A is north of City-B; City-B is north of City-C; City-A is north of City-C.
(2) Determinate, invalid: City-A is north of City-B; City-B is north of City-C; City-C is north of City-A.
(3) Indeterminate, invalid: City-A is north of City-B; City-A is north of City-C; City-B is north of City-C.

Congruent (content facilitates task):
(4) Determinate, valid: London is north of Paris; Paris is north of Cairo; London is north of Cairo.
(5) Determinate, invalid: London is north of Paris; Paris is north of Cairo; Cairo is north of London.
(6) Indeterminate, invalid: London is north of Paris; London is north of Cairo; Cairo is north of Paris.

Incongruent (content inhibits task):
(7) Determinate, valid: London is north of Paris; Cairo is north of London; Cairo is north of Paris.
(8) Determinate, invalid: London is north of Paris; Cairo is north of London; Paris is north of Cairo.
(9) Indeterminate, invalid: London is north of Paris; London is north of Cairo; Paris is north of London.
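The determinate/indeterminate distinction in Table 2 is purely formal, so it can be checked mechanically. The sketch below is illustrative only (not from the chapter): it enumerates every north-to-south ordering of the cities consistent with the premises. A conclusion true in all such orderings is determinately valid, true in none is determinately invalid, and true in only some is indeterminate.

```python
from itertools import permutations

def classify(premises, conclusion):
    """Classify a three-term transitive argument by model enumeration.
    Premises and conclusion are pairs (a, b) read as 'a is north of b'."""
    cities = {c for pair in premises + [conclusion] for c in pair}
    # All total orderings (north to south) consistent with the premises.
    models = [order for order in permutations(cities)
              if all(order.index(a) < order.index(b) for a, b in premises)]
    holds = [order.index(conclusion[0]) < order.index(conclusion[1])
             for order in models]
    if all(holds):
        return "determinately valid"    # e.g. arguments (1), (4), (7)
    if not any(holds):
        return "determinately invalid"  # e.g. arguments (2), (5), (8)
    return "indeterminate"              # e.g. arguments (3), (6), (9)
```

On argument (3), for instance, both the orderings A-B-C and A-C-B satisfy the premises, and the conclusion holds in only one of them, so no determinate answer exists.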
Figure Captions
1. (A) Reasoning about familiar material (All apples are red fruit; All red fruit are nutritious; All apples are nutritious) activates a left frontal (BA 47) and temporal (BA 21/22) system. (B) Reasoning about unfamiliar material (All A are B; All B are C; All A are C) activates bilateral parietal lobes (BA 7, 40) and dorsal prefrontal cortex (BA 6). (Reproduced from Goel et al., 2000.)
2. The right ventrolateral/dorsolateral prefrontal cortex (BA 45, 46) is activated during conflict detection. For example, in the argument "All apples are red fruit; All red fruit are poisonous; All apples are poisonous," the logically correct response is "valid," but the conclusion is inconsistent with our world knowledge, resulting in a belief-logic conflict. (Reproduced from Goel & Dolan, 2003.)
3. Lesion overlay maps showing the location of lesions in patients tested on reasoning with determinate and indeterminate argument forms. Patients with lesions to right prefrontal cortex were selectively impaired in reasoning about indeterminate forms (Mary is taller than Mike; Mary is taller than George; Mike is taller than George). Patients with lesions to left prefrontal cortex showed an overall impairment in reasoning. (Reproduced from Goel et al., 2006.)
4. A speculative account of how the systems identified by the neuropsychological data may interact in the processing of logical arguments. See text.
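Caption 4 describes only a speculative interaction between the systems, but purely as an illustration that division of labour can be caricatured as a dispatch rule. Everything below (the function names, the belief lookup, the routing conditions) is a hypothetical sketch built from the box labels in that figure, not the author's model: a formal route checks the transitive closure of the premises, a heuristic route responds from belief, and a conflict check falls back on the formal answer when the two disagree.

```python
def formal_system(premises, conclusion):
    """Formal/universal system (parietal cortex and left PFC in Figure 4):
    accept a conclusion only if it lies in the transitive closure of the
    premises, each premise being a pair (a, b) read as 'a is north of b'."""
    facts = set(premises)
    changed = True
    while changed:  # compute the transitive closure
        changed = False
        for a, b in list(facts):
            for c, d in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))
                    changed = True
    return "valid" if conclusion in facts else "invalid"

def evaluate(premises, conclusion, beliefs):
    """Hypothetical dispatch. `beliefs` maps conclusions to True/False for
    familiar material; unfamiliar conclusions are simply absent from it."""
    logical = formal_system(premises, conclusion)
    if conclusion not in beliefs:
        return logical  # unfamiliar content: formal route only
    # Heuristic system (left frontal/temporal cortex): respond from belief.
    heuristic = "valid" if beliefs[conclusion] else "invalid"
    if heuristic != logical:
        # Conflict detector (right lateral PFC): belief-logic conflict;
        # in this sketch the formal response wins.
        return logical
    return heuristic
```

For the incongruent argument (7) of Table 2, the believed answer ("invalid," since Cairo is not in fact north of Paris) conflicts with the logical one, and the sketch returns the formal verdict "valid."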
Figure 1 [image: panel (A), reasoning with familiar material; panel (B), reasoning with unfamiliar material]
Figure 2 [image: conflict detection system]
Figure 3 [image: lesion overlay maps]
Figure 4 [diagram: boxes labeled Argument; Heuristic System (L FL/TL); Formal/Universal System, General Pattern Completer (Parietal, L PFC); Conflict Detector (R VLPFC, R DLPFC); Uncertainty Maintenance (R DLPFC); with "valid"/"invalid" outputs]