Mathematical Thinking and Human Nature:
Consonance and Conflict
by
Uri Leron
Department of Science Education
Technion – Israel Institute of Technology
Email: uril@tx.technion.ac.il
Revised March 22, 2006
Running Head: Mathematical Thinking and Human Nature
Key Words: evolutionary psychology, functions, human nature, logical reasoning,
origins of mathematical thinking, social exchange, Wason card selection task
CONTENTS
Abstract
1. Introduction
2. Some ideas from evolutionary psychology
2.1 Universal Human Nature
2.2 Evolutionary origins
2.3 Modern humans with ancient brains
2.4 Nature vs. nurture
2.5 How flexible is human nature?
2.6 Do we really need human nature? Why isn’t “intuition” enough?
2.7 How can we tell what is part of human nature?
3. Origins of mathematical thinking
3.1 Level 1: Rudimentary Arithmetic
3.2 Level 2: Informal Mathematics
3.3 Level 3: Formal Mathematics
4. Mathematical case studies
4.1 Mathematical Logic vs. the Logic of Social Exchange
4.2 Do functions make a difference?
5. Conclusion
References
Abstract
Human nature, which had traditionally been the realm of philosophers, novelists and
theologians, has recently become the subject of scientific study by cognitive science,
neuroscience, research on babies and on animals, anthropology, and evolutionary
psychology. The goal of this paper is to show—by surveying relevant research and by
analyzing some mathematical case studies—how different parts of mathematical
thinking can be either enabled or hindered by aspects of human nature. This
preliminary theoretical framework can add an evolutionary and ecological level of
interpretation to empirical findings of mathematics education research, as well as
illuminate some classroom issues.
1. Introduction
In this paper I’d like to bring together two recent interdisciplinary research strands
and examine their possible influence on research and practice in mathematics
education. One strand deals with the origins of mathematical thinking. It comes
mainly from research in cognitive science and its goal is to identify general cognitive
mechanisms that enable mathematical thinking and learning (Dehaene, 1997;
Butterworth, 1999; Lakoff & Núñez, 2000; Devlin, 2000; cf. also Tall, 2001; Kaput
& Shaffer, 2002; Shaffer & Kaput, 1999). This strand will be reviewed in Section 3,
where I propose to organize these cognitive mechanisms in three levels, depending on
the kind of mathematical thinking involved. The other strand, reviewed in Section 2,
is evolutionary psychology (EP)—the study of universal human nature and its
evolutionary origins (Cosmides & Tooby 1992, 1997, 2000; Pinker, 1997, 2002;
Plotkin 1998, 2004), with special attention to its developmental and educational
aspects (Geary, 2002; Bjorklund & Pellegrini, 2002). Together, these two strands can
hopefully give us a plausible picture of some of the cognitive mechanisms that enable
or inhibit mathematical thinking, along with an account of how and why they came
about.
The paper is organized in two main parts. The first part (Sections 2 and 3) is a
personal review and synthesis of some of the main ideas of the above two strands,
especially those of EP, which are still largely unknown in the mathematics education
community. While the literature on the cognitive origins seeks to explain how
everybody can do mathematics, the EP strand may give us a new perspective on why,
in contrast, so many people find mathematics—even some seemingly very simple
mathematics—strange and unnatural. A synthesis of the two strands can set the
foundations for a comprehensive theoretical framework for discussing the
relationship between mathematical thinking and human nature. In the second part
(Section 4), I will apply this framework to analyzing some specific mathematical
topics. Laying firm foundations for such a theoretical framework and showing its
feasibility in analyzing mathematical case studies is (in the words of the old Chinese
adage) “a journey of a thousand miles”; it is hoped that the present paper might be a
first step on this journey.
Before plunging into the theoretical discussion, let me anticipate the gist of the
general argument with a simple example, taken from Dehaene (1997).
Please take a few minutes to try to memorize the following three pieces of
information:
Charlie David lives on Albert Bruno Avenue
Charlie George lives on Bruno Albert Avenue
George Ernie lives on Charlie Ernie Avenue
…………………………………………
Hard and irritating, isn’t it? But is it really a problem of memory?
Notice that—under the mapping Albert → 1, Bruno → 2, Charlie → 3, David → 4,
Ernie → 5, and George → 7—these three statements are “isomorphic” to the following
three multiplication facts:
3 x 4 = 12
3 x 7 = 21
7 x 5 = 35.
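The correspondence can be checked mechanically. Here is a minimal sketch in Python (the names, mapping, and decoding convention are those given in the text; the helper `decode` is, of course, only illustrative):

```python
# Dehaene's "Charlie David" puzzle: under the stated name-to-digit mapping,
# each address sentence encodes one multiplication fact.
digit = {"Albert": 1, "Bruno": 2, "Charlie": 3, "David": 4, "Ernie": 5, "George": 7}

sentences = [
    ("Charlie David", "Albert Bruno"),   # (person, avenue)
    ("Charlie George", "Bruno Albert"),
    ("George Ernie", "Charlie Ernie"),
]

def decode(person, avenue):
    """Map names to digits: the person gives the factors, the avenue the product."""
    a, b = (digit[name] for name in person.split())
    c, d = (digit[name] for name in avenue.split())
    return a, b, 10 * c + d

facts = [decode(person, avenue) for person, avenue in sentences]
for a, b, product in facts:
    assert a * b == product              # 3 x 4 = 12, 3 x 7 = 21, 7 x 5 = 35
print(facts)                             # [(3, 4, 12), (3, 7, 21), (7, 5, 35)]
```

Seen this way, the three addresses are exactly as structured as the multiplication table; what differs is only the surface encoding.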
Now, there is plenty of research evidence by developmental and cognitive
psychologists that many people find memorizing the multiplication table (especially a
few facts around the middle area of the table) considerably hard: they take a long time
to answer and they make many errors. On the other hand, there is much evidence that
all children (under normal development) demonstrate prodigious learning and
memory capabilities in learning the vocabulary of their mother tongue. How is it then
that many have such difficulty remembering a few (about 20) multiplication facts?
How can human memory, which in other contexts performs astonishing feats, falter
on such a simple-looking task?
This example demonstrates in a nutshell, and in an elementary setting, the typical
paradox that will occupy us in this article. And the answer points to the general thesis
of this paper: people fail this task not because of a weakness in their mental
apparatus, but because of its strength! This seeming paradox is resolved once we take
into account the evolutionary origins of our brain and mind, and the selection
pressures that influenced its “design” over millions of years. The point is, what may
have been adaptive in the ancient ecology in which our stone-age ancestors lived (and
is still adaptive today under similar conditions), may often be maladaptive in modern
contexts.
Returning to the present example, the particular strength of our memory that gets in
the way of memorizing the tables is, according to Dehaene, its automatic and
insuppressible associative character: The number facts cannot be separated out and
remain hopelessly entangled with each other, as the “Charlie David” memory puzzle
clearly demonstrates. The problem, then, is not memorizing three addresses, but
disentangling the interference created by the dense net of similarities and
interconnections. Or, in Dehaene’s (1997, p. 127) words: “Arithmetic facts are not
arbitrary and independent of each other. On the contrary, they are closely intertwined
and teeming with false regularities, misleading rhymes and confusing puns.” Dehaene
attributes this clash, as we will do in our case studies (section 4), to the evolutionary
origins of our brain:
Associative memory is a strength as well as a weakness. […] When faced with a tiger, we
must quickly activate our related memories of lions. But when trying to retrieve the result
of 7 x 6, we court disaster by activating our knowledge of 7 + 6 or of 7 x 5. Unfortunately for
mathematicians, our brain evolved for millions of years in an environment where the
advantage of associative memory largely compensated for its drawbacks in domains like
arithmetic. We are now condemned to live with inappropriate arithmetic associations that our
memory recalls automatically, with little regard for our efforts to suppress them. (p. 128)
2. Some ideas from evolutionary psychology
Two notes of caution are in order. First, evolutionary thought in psychology has a
rocky and controversial history, which I cannot survey here (see Plotkin, 2004). In
this paper the term evolutionary psychology (and the abbreviation EP) will be used
exclusively for contemporary evolutionary psychology, whose beginnings can be
seen in the research in the late 1980s by Daly & Wilson on murder within families,
Buss on sexual preferences in many cultures, and Cosmides on social contract and
cheater detection. The official public debut of EP is often attributed to the appearance
of the influential volume The Adapted Mind (Barkow et al, 1992), and an
authoritative recent overview is The Handbook of Evolutionary Psychology (Buss,
2005).
Second, the literature on evolutionary psychology is vast, and deals with many subtle,
complex and controversial issues, such as the modularity of mind, nature vs. nurture
and brain vs. mind. Moreover, some of its claims are quite revolutionary and run
against prevalent emotions and biases, hence are easily misunderstood and
misinterpreted. I thus cannot presume to give a fair representation of this complex
discipline in such a brief sketch. In the space available I can only give a much
simplified account of the relevant points, skipping most of the subtleties and
controversies. The review here is provided as a convenience for readers who want to
get a quick and easy idea of the relevant background for this article. Readers who
really want to know what EP is about, should at least read the online EP Primer
(Cosmides & Tooby, 1997). Additional online resources can be found in the EP FAQ
(Hagen, 2004) and the EP Index (Steen, 2001). A good textbook presentation can be
found in Bjorklund & Pellegrini (2002). For the controversy surrounding EP, cf.
Fodor, 2000; Gilovich et al, 2002; Laland & Brown, 2002, Chapter 5; Over, 2003;
Pinker, 1997, 2002; Plotkin, 2004, Chapter 7; and Stanovich & West, 2003.
A good place to begin is a quotation from two of the founders of contemporary
evolutionary psychology:
Evolutionary psychology is an approach to the psychological sciences in
which principles and results drawn from evolutionary biology, cognitive
science, anthropology, and neuroscience are integrated with the rest of
psychology in order to map human nature. By human nature, evolutionary
psychologists mean the evolved, reliably developing, species-typical
computational and neural architecture of the human mind and brain.
According to this view, the functional components that comprise this
architecture were designed by natural selection to solve adaptive problems
faced by our hunter-gatherer ancestors, and to regulate behavior so that these
adaptive problems were successfully addressed […] Evolutionary psychology
is not a specific subfield of psychology, such as the study of vision, reasoning,
or social behavior. It is a way of thinking about psychology that can be applied
to any topic within it […]. (Cosmides & Tooby, 2000)
2.1 Universal Human Nature
According to the first part of the Cosmides & Tooby quotation above, human nature
is comprised of universal (“species-typical”) traits and behaviors1 that are attained
automatically and spontaneously by all human beings during normal development
(“reliably developing”), independently of geography, race, culture, civilization or
direct instruction. These are the skills in which all humans are natural experts.
To illustrate, here are two out of the many possible examples of universal human
nature, each with a corresponding non-example:
- acquiring one’s mother tongue (but not a programming “tongue”);
- walking on two feet (but not on two hands).
These examples also demonstrate that what is part of human nature need not be
innate—we are not born talking or walking; and that what is not part of human nature
need not be unlearnable—highly motivated people with prolonged and specialized
instruction or practice do learn to program computers or to walk on their hands. The
skills mentioned in both these examples and non-examples are all acquired through
intense interaction with the physical and social environment; the difference lies in the
kind of learning required for their acquisition. Every child with normal development
within a “species-typical environment” (in this case, language-using community) will
acquire natural language proficiency. Programming language proficiency, in contrast,
is only acquired by a few, and only under very special learning conditions.
2.2 Evolutionary origins
What I have described so far of human nature, i.e., the existence of a set of universal
core abilities that all normal mature humans are “naturally good at”, seems to be
accepted by the vast majority of researchers in the natural and cognitive sciences, and
is in fact sufficient for presenting and defending my central thesis concerning
mathematical thinking. However, it leaves open the crucial question as to how and
why universal human nature came about. The other parts of the Cosmides & Tooby
quotation—evolved architecture of the human mind and brain, functional
components, natural selection and solving adaptive problems—are meant to fill in this
gap. These additional parts are at the same time what gives evolutionary psychology
its great depth, but also what makes it more controversial among some biologists and
cognitive scientists.2
According to EP, the human brain is a complex biological organ, showing “design
features” for processing information (stimulus-response behavior, learning and
memory, complex social behavior, comprehending and producing language, and so
forth). Like other complex biological organs which show design-for-a-purpose
features—eyes for seeing, hands for grasping, lungs for oxygenating blood—the brain
has evolved through natural selection to address recurring problems of information
processing faced by our ancestors, which were crucial for their successful survival
and reproduction. Psychologists and cognitive scientists have always asked, “How
does the mind work?” Evolutionary psychology makes it possible for the first time
to address in a scientifically meaningful way the question that had previously been
the exclusive realm of philosophers and novelists: “Why does it work that way?”
Although the processes of goal-oriented design by humans and “blind design” by
natural selection are crucially different (the teleology/teleonomy distinction), there
are certain commonalities about their well-adapted products, and the analogy is often
used by biologists and evolutionary psychologists: If you are trying to understand
how a complex machine works (such as an airplane or a watch), your efforts would be
greatly facilitated if you took into consideration for what purpose this machine was
designed.3
2.3 Modern humans with ancient brains
The hominid brain evolved over millions of years, reaching its present state
somewhere between 100,000 and 20,000 years ago.4 Due to the slow rate of
biological evolution, modern civilization (dating back roughly 10,000 years, starting
with the invention of agriculture and the beginning of permanent settlements) hasn’t
had nearly enough time to affect the evolution of the human brain in any major way.
Thus, it is important to realize that our brain is adapted to the ecology of the stone age
and the hunter-gatherer society of that time, and may be un-adapted, or even
maladapted, for many aspects of modern civilization. Because of the considerable
plasticity of our brain, it is able to learn and adapt to modern conditions by
“co-opting” ancient mechanisms for modern purposes, such as writing, driving and
mathematics (Geary, 2002). A prediction of this theory is that the ease or difficulty of
learning a certain skill will be not only a function of that skill’s complexity, but also
(or even mainly) of the extent to which such co-optation is available for that skill.
David Geary (2002), who has been studying evolutionary educational psychology,
introduced in this connection the terms biologically primary and secondary abilities:
The academic competencies that can emerge with schooling are termed
biologically secondary abilities and are built from the primary and evolved
cognitive systems that comprise folk psychology, folk biology, and folk
physics, as well as other evolved domains (e.g., the number-counting-arithmetic
system; […] Secondary activities, such as reading and writing, thus
involve co-opting primary folk-psychological systems: Cooptation is defined
as the adaptation (typically through instruction) of evolved cognitive systems
for culturally specific uses […] (p. 330)
[…] folk knowledge and inferential biases may run counter to related
scientific concepts. The instructional prediction is that in such cases, folk
knowledge will impede the learning and adoption of related scientific
concepts or procedures. (p. 338)
2.4 Nature vs. nurture
This ancient dichotomy concerning human nature has been argued endlessly through
the ages. However, as many had observed, such controversies are not really solved
but rather fade away, as it becomes clear that the problem lies in the formulation of
the question itself, and in the false dichotomies it presumes. Or, as John Dewey
(1989/1909) said: “Intellectual progress usually occurs through sheer
abandonment of questions together with both the alternatives they assume, an
abandonment that results from their decreasing vitality and a change of urgent
interest. We do not solve them: we get over them.”
It is now almost universally agreed that “nature or nurture” is simply the wrong
question to ask, since the influence of genes (nature) and environment (nurture) on
the organism are inextricably intertwined. Briefly, genes do not determine (except in
very rare cases) strict properties of the organism, but only predispositions and biases
as to the kind of interaction the organism will have with the environment, and the
range of behaviors it can acquire through these interactions. Furthermore, it is now
known that the environment influences what (and how, and for how long) genes will
be expressed (Geary, 2002; Bjorklund & Pellegrini, 2002; Ridley, 2003). To take a
famous example, most researchers believe (and there is strong theoretical and
empirical case for this belief; cf. Pinker, 1994) that we are all born with some innate
structure for learning language. But we are not born with language, nor are we
predisposed to learn a particular language. Rather, according to one prevailing view,
infants are born equipped with some innate structure that makes them attuned to
language and to fundamental aspects of “universal grammar”, as well as with strong
motivation to engage in the kind of interactions that will lead them to acquire
language. But the particularities of a specific language and dialect are picked up
entirely from the environment. Without the coordinated action of both these
components—the innate structure and the interaction with a language-rich
environment—language acquisition would not be possible.
An excellent synthesis of current thinking on the nature vs. nurture debate is Ridley’s
(2003) Nature via Nurture. Here is one quotation from the many relevant ones from
his book:
The environment is not some real, inflexible thing: it is a unique set of
influences actively chosen by the actor him- or herself. Having a certain set of
genes predisposes a person to experience a certain environment. Having
“athletic” genes makes you want to practice sport; having “intellectual” genes
makes you seek out intellectual activities. The genes are agents of nurture.
[Here Ridley draws a parallel with how genes affect weight.] The genes are
likely to affect appetite rather than aptitude. They do not make you intelligent;
they make you more likely to enjoy learning. Because you enjoy it, you spend
more time doing it and you grow more clever. Nature can only act via nurture.
It can act only by nudging people to seek out the environmental influences
that will satisfy their appetites. The environment acts as a multiplier of small
genetic differences, pushing athletic children towards the sports that reward
them and pushing bright children towards the books that reward them.
(Ridley, 2003, p. 92; also see Geary’s (2002) discussion of predisposition to
engage in specific activities that will lead to learning certain skills.)
2.5 How flexible is human nature?
It follows from the preceding discussion that belief in the influence of evolution and
the genes on behavior does not imply “genetic determinism” (as some opponents have
charged). We become what we are through learning, education, culture and, in
general, interaction with the environment. The role of evolution and the genes is
indeed crucial, but only by regulating (both constraining and enabling) the kind of
interactions we will engage in, and the range of potential learning that may result
from these interactions, but not its actual outcome (for a thorough discussion of these
important and subtle issues cf. Geary, 2002; Bjorklund & Pellegrini, 2002; Ridley,
2003). This perspective can explain why all humans have so much in common
(universal human nature), yet each possesses so much unique personality of his or her
own.
In short, human nature is not something we are born with: it develops. Furthermore,
universal human nature develops under the combined influence of species-typical
parts of the genome on the one hand, and species-typical parts of the environment,
such as the force of gravity, 3-dimensional space and social groups on the other. On
top of universal human nature, individuals and particular subcultures may have
various “natural” traits that arise from their individual genome or special conditions
in their local environment, such as climate, flora and fauna, and particular cultural
heritage. For more on the varieties of human universals, cf. Brown (1991).
2.6 Do we really need human nature? Why isn’t “intuition” enough?
Some readers may object to my evolutionary framework, claiming that it doesn’t add
much to the simpler and more conventional argument, that people fail in certain (e.g.
mathematical) tasks because they are “counter-intuitive”. The framework proposed
here does not negate this conventional argument—it strengthens it. But it does go
deeper in trying to elaborate what we mean by intuition and, mainly, where it came
from. Why do all people have such intuitions, and what are their origins? In fact,
if by intuitions we only mean “what people do naturally and easily” then the above
explanation becomes almost vacuous: people succeed in tasks that are easy and fail in
those that are difficult. By invoking the biologically-based human nature, we avoid
this trap. People are good at certain tasks not because the tasks are simple,5 but
because our brains have evolved by natural selection over millions of years
special-purpose circuitry for those tasks that conferred a survival and reproduction
advantage. EP is
laying the foundations for a scientifically testable theory, explaining how universal
human nature is formed in each individual via an intricate collaboration of inheritance
(species-typical genome) and development (species-typical environment).
2.7 How can we tell what is part of human nature?
It is an important but non-trivial question, how we can determine that a certain
observed trait or behavior is an adaptation, brought about by natural selection to
address problems of survival or reproduction faced by our ancient ancestors
(Cosmides & Tooby, 1997; Pinker 1997). We start from evidence of complex
design—features which are specialized for solving an adaptive problem, such as eyes
for seeing. We can then conjecture that this organ or module has been designed by
natural selection to solve that adaptive problem. Research from many disciplines is
then used in attempting to corroborate this conjecture. Anthropologists conduct
comparative studies in many cultures in all corners of the earth, to determine whether
such traits or behaviors are universal or culturally and locally determined. (The
appendix in Pinker (2002) lists some 400 such human universals, compiled by Donald
Brown (1991)). Developmental psychologists study infants to determine how early in
their development hints of such behaviors can be detected; this may indicate how
much nurture nature needs for the development of the behavior in question.
Ethologists study our non-human relatives (such as chimpanzees and other primates
and mammals) in search of early versions of such behaviors. Evolutionary
psychologists conduct experiments to check predictions about human behavior that
are made from evolutionary theorizing. The results of studies by archeologists and
paleontologists are used to make inferences on the kind of survival problems our
ancestors might have faced. And theoretical work is done to analyze complex design
features in such behaviors and perform “reverse engineering” on them (Pinker, 1997).
With a combination of these methods one can build a strong case for such
adaptations, though this is not an exact science and such claims are still likely to
remain hotly debated, just as they have been even within biological evolution (e.g.,
Sterelny, 2001).
3. Origins of mathematical thinking6
In this section I consider the following (admittedly vague) question:
Is mathematical thinking a natural extension of common sense, or is it an
altogether different kind of thinking?
The possible answers to this question are of great interest and importance for both
theoretical and practical reasons. Theoretically, this is an important special case of the
general question of how our mind works. In practice, the answers to this question
clearly have important educational implications.
Recently, several books and research articles bearing on this question have appeared,
so that the possible answers, though still far from conclusive, are less purely
conjectural than they previously were. These new studies have inquired
into the cognitive and biological origins of mathematical thinking and have come
from research disciplines as varied as neuroscience, cognitive science, cognitive
psychology, evolutionary psychology, anthropology, linguistics and ethology; their
subjects were normal adults, infants, animals, and patients with brain damage.
The conclusions of the various researchers seem at first almost contradictory: Aspects
of mathematical cognition are described as anything from being embodied to being
based on general cognitive mechanisms to clashing head-on with what our mind has
been “designed” to do by natural selection over millions of years.
However, these seeming contradictions all but fade away once we realize that
“mathematics” (and with it “mathematical cognition”) may mean different things to
different people, sometimes even to the same person on different occasions. In fact,
the main goal of this section is to show that all this multifaceted research by different
researchers coming from different disciplines, may be neatly organized into a
coherent scheme once we exercise a bit more care with our distinctions and
terminology.
To this end, I will distinguish three levels of mathematics, called here rudimentary
arithmetic,7 informal mathematics and formal mathematics,8 each with its own
different thinking mechanisms.
When interpreted within this framework, the research results show that while certain
elements of mathematical thinking are innate and others are easily learned, certain
more advanced (and, significantly, historically recent) aspects of mathematics—
formal language, de-contextualization, abstraction and proof—may be in direct
conflict with aspects of human nature.
For a fuller account of these studies, the reader is referred to Hauser & Spelke (2004),
Dehaene (1997) and Butterworth (1999) for the first level; Lakoff & Núñez (2000)
and Devlin (2000) for the first and second levels; Cosmides & Tooby (1992, 1997),
Geary (2002) and Bjorklund & Pellegrini (2002) for the third level.
3.1 Level 1: Rudimentary arithmetic
Rudimentary arithmetic consists of the simple operations of subitizing, estimating,
comparing, adding and subtracting, performed on very small collections (usually not
more than 4) of concrete objects. Research on infants and on animals, as well as brain
research, indicates that some ability to do mathematics at this level is hard-wired in
the brain and is processed by a ‘number sense’, just as colors are processed by a
‘color sense’. Excellent syntheses of this research are Dehaene (1997) and
Butterworth (1999).
It is not easy to prove that some feature is an “adaptation”, brought about by
evolution via natural selection, but a strong case can be made by showing that four
conditions are fulfilled: One, the feature in question could have conferred a clear
survival advantage on our stone-age hunter-gatherer ancestors; two, some version of
this feature exists in our non-human relatives; three, babies already exhibit this
feature even before they had a chance to learn it from their physical or social
environment; four, there is evidence for complex design, that is, it is highly
improbable that the feature could have arisen by chance.
Indeed, it is easy to imagine how rudimentary arithmetic could have helped our
ancestors survive, e.g., in keeping count of possessions and in estimating amounts of
food (going for the tree with more fruit) and numbers of enemies.
There are many experiments showing that some animals (such as chimpanzees, rats
and pigeons) have ‘number sense’. A striking example is an experiment by Karen
McComb and her colleagues (cf. Butterworth, 1999, pp 141-2) showing that when a
female lion at the Serengeti National Park in Tanzania detects the roar of unfamiliar
lions invading her territory, she will decide to attack only if the number of her sisters
nearby on the territory is greater than the number of invaders. This is all the more
remarkable because she seems to compare the two numbers across sense modalities:
she hears the intruders but sees (or memorizes) her sisters. “Thus she has to abstract
the numerosity of the two collections—intruders and defenders—away from the sense
in which they were experienced and then compare these abstracted numerosities.”
(ibid)
It seems at first all but impossible to establish what mathematical facts a very young
baby knows, but developmental psychologists using ingenious research methods have
nonetheless managed to establish a body of firm results. See Dehaene (1997) for a
comprehensive survey and reference to the original research literature. The following
brief sample is taken (with some omissions) from Lakoff and Núñez (2000, pp. 15-16).
1. At three or four days, a baby can discriminate between a collection of two and
three items. […]
2. By four and a half months, a baby “can tell” that one plus one is two and that
two minus one is one. […]
3. These abilities are not restricted to visual arrays. Babies can also discriminate
numbers of sounds. At three or four days, a baby can discriminate between
sounds of two or three syllables. […]
4. And at about seven months, babies can recognize the numerical equivalence
between arrays of objects and drumbeats of the same number. […]
There are too many details and variations to do justice to this intricate research here,
but the reader can get some idea from a brief description of one of the main methods
used: timing the baby’s gaze and the violation-of-expectation research paradigm.
When a baby looks for a while at a repeating or highly expected scene, it will get
bored and will look at the scene for shorter and shorter periods (a phenomenon called
habituation). When the scene suddenly changes, or something unexpected happens,
the baby’s gaze duration (called fixation time) will become measurably longer.
Researchers moved one puppet, and then another, behind a screen in front of the
baby’s eyes, and then lifted the screen to reveal what was behind it. Babies typically looked
significantly longer (i.e., were surprised) when they saw one puppet (or three) behind
the screen, as compared to two. This experiment was repeated with many variations
and controls, with the inevitable conclusion that, in a sense, babies are born with the
innate knowledge that one and one makes two.
3.2 Level 2: Informal Mathematics
This is the kind of mathematics, familiar to every experienced teacher of advanced
mathematics, which is presented to students in situations where mathematics in its
most formal and rigorous form would be inappropriate. It may include topics from all
mathematical areas and all age levels, but will consist mainly of “thought
experiments” (Cf. Lakatos, 1978; Tall, 2001; Reiner & Leron, 2001), carried out with
the help of figures, diagrams, analogies from everyday life, “typical” examples, and
students’ previous experience. For example, when teaching group theory, many
instructors preface the formal presentation of the proposition (x ∘ y)⁻¹ = y⁻¹ ∘ x⁻¹
by the following intuitive analogy: Suppose you put on your socks and then your
shoes. If you now want to undo this operation, you need to first take off your shoes
and then your socks. Thus to find the inverse of a combined operation you need to
combine the individual inverses in reverse order. (It is less well-known that trying to
refine this example into a rigorous mathematical one poses unexpected difficulties,
even for an experienced mathematician;9 the gap between the intuitive and the formal
versions of this proposition may thus be wider than normally suspected.)
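The socks-and-shoes rule is easy to check concretely with function composition. The following sketch (my own illustration in Python, using small permutations represented as dicts; none of the names come from the text) verifies that the inverse of a composite is the composite of the inverses in reverse order:

```python
# Check (f ∘ g)⁻¹ = g⁻¹ ∘ f⁻¹ on small permutations of {1, 2, 3},
# each represented as a dict from inputs to outputs.

def compose(f, g):
    """Return the permutation 'first apply g, then f'."""
    return {x: f[g[x]] for x in g}

def inverse(f):
    """Invert a permutation by swapping keys and values."""
    return {v: k for k, v in f.items()}

f = {1: 2, 2: 3, 3: 1}   # a 3-cycle
g = {1: 2, 2: 1, 3: 3}   # a transposition

lhs = inverse(compose(f, g))
rhs = compose(inverse(g), inverse(f))
assert lhs == rhs        # (f ∘ g)⁻¹ equals g⁻¹ ∘ f⁻¹
```

Reversing the order on the right-hand side is essential: composing the inverses in the original order gives a different permutation in general, just as putting shoes on before socks does.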
Some recent research, as well as classroom experience, indicates that informal
mathematics is an extension of common sense, and is in fact processed by the
same mechanisms that make up our everyday cognition, such as imagery, natural
language, thought experiment, social cognition and metaphor. That mathematical
thinking has “co-opted” older and more general cognitive mechanisms is in fact only
to be expected (Geary, 2002), taking into account that mathematics in its modern
sense has been around for only about 2500 years—a mere eye blink in evolutionary
terms.
Two recent books—by Lakoff & Núñez (2000) and by Devlin (2000)—present
elaborate theories to show how our ability to do mathematics is based on other (more
basic and more ancient) mechanisms of human cognition. Significantly for the thesis
presented here, both theories mainly seek to explain the thinking processes involved
in Level 2 mathematics, so that their conclusions need not apply to Level 3. In fact, as
I explain in the next section, there are reasons to believe that their conclusions do not
apply to Level 3 mathematical thinking.
Note: The authors are not always explicit on the scope of mathematics they discuss,
but see e.g., “I am not talking about becoming a great mathematician or venturing
into the heady heights of advanced mathematics. I am speaking solely about being
able to cope with the mathematics found in most high school curricula.” (Devlin,
2000, p. 271); and “Our enterprise here is to study everyday mathematical
understanding of this automatic unconscious sort […]” (Lakoff & Núñez, 2000, p.
28).
Lakoff and his colleagues have for many years argued convincingly the case for
metaphor as a central mechanism in human cognition. Recently, Lakoff & Núñez
(2000) have extended this argument to a detailed account of how mathematical
cognition is first rooted in our body via embodied metaphors, then extended to more
abstract realms via “conceptual metaphors”, i.e., inference-preserving mappings
between a source domain and a target domain, where the former is presumably more
concrete and better-known than the latter. In their account they thus show (more
convincingly in some places than in others) how mathematical cognition builds on the
same mechanisms of our general linguistic and cognitive system.
According to this theory, our conceptual system is mostly built “from the bottom up”,
starting from our embodied knowledge and gradually building up to ever more
abstract concepts. However, an interesting twist to this picture has been suggested by
Tall (2001). Since many parts of modern mathematics (especially those dealing with
the various facets of infinity) go strongly against our “natural” intuitions, it is hard to
build appropriate understandings of them solely via metaphorical extensions of the
learner’s existing cognitive structures. (The research literature abounds with examples
of students’ “misconceptions” arising from such clashes between natural intuitions
and the formal theory.) As Tall shows, we need also to take into account a process
going in the opposite direction. Some of the results of the formal axiomatic theory—
called “structure theorems”—may feed back to develop more refined intuitions of the
concepts involved.
Devlin (2000) gives a different account from that of Lakoff & Núñez, but again one
attempting to show how mathematical thinking has “co-opted” existing cognitive
mechanisms. His claim is that the metaphorical “math gene”—our innate ability to
learn and to do mathematics—comes from the same source as our linguistic ability,
namely our ability for “off-line thinking” (basically, performing thought experiments,
whose outcome will often be valid in the external world). Devlin in addition gives a
detailed evolutionary account of how all these abilities might have evolved.10 Devlin’s
account, however speculative, appears plausible, provided you limit it to informal
mathematics. In other words, his account fits well situations in which people do
mathematics by constructing mental structures and then navigating within those
structures,11 but not situations where such structures are not available to the learner.
For example, it is hard to imagine any “concrete” structure that will form an honest
model of a uniformly continuous function or a compact topological space.
3.3 Level 3: Formal Mathematics
The term “formal mathematics” refers here not to the contents but to the form of
advanced mathematical presentations in classroom lectures and in college-level
textbooks, with their full apparatus of abstraction, formal language, decontextualization, rigor and deduction. The fact that understanding formal
mathematics is hard for most students is well-known, but my question goes farther: is
it an extension (no matter how elaborate) of common sense or an altogether different
kind of thinking? Put differently, is it a “biologically secondary ability” (Geary,
2002), or an altogether new kind of thinking that ought perhaps to be termed
“biologically tertiary ability”? This issue will be our focus in the next section. The
mathematical case studies, as well as the persistent failure of many bright college
students to master formal mathematics, suggest that the thinking involved in formal
mathematics is not an extension of common sense; that it may in fact sometimes clash
head-on with human “natural” thinking.
So, is mathematical thinking an extension of common sense?
We can now answer a little more precisely the question posed in the beginning of this
section. According to contemporary thinking in cognitive science and in evolutionary
psychology (Lakoff & Núñez, 2000; Pinker, 1997, 2002; Cosmides & Tooby, 1997),
we may consider common sense as a folk version of what we have called here
universal human nature. It is a set of procedures—such as learning one’s mother tongue,
recognizing faces, negotiating everyday physical and social situations, and using
rudimentary arithmetic—that have evolved by natural selection because they had
conferred survival and reproductive advantage on our stone-age hunter-gatherer
ancestors. As previously indicated, modern mathematics is too young in evolutionary
terms for us to have evolved cognitive mechanisms specifically for mathematical
thinking. To the extent that we at all can do mathematics, it must be based on older
mechanisms that have been co-opted by our brain for this new purpose (Geary, 2002).
The research surveyed in this paper shows that this is indeed the case for what I have
called Informal Mathematics: It is processed by the common sense mechanisms of
language, social cognition, mental imagery, thought experiment and metaphor.
Classroom experience, too, indicates that students have little trouble making sense of
mathematics as long as it is presented through familiar examples, diagrams and
analogies. The same classroom experience, however, indicates that students do have a
lot of trouble with the switch to the Formal Mathematics level. It seems as though our
mind contains no cognitive mechanism that could be readily co-opted for this
purpose. This doesn’t mean it can’t be done: after all, people do achieve such
unnatural feats as juggling 10 balls while riding a bicycle or playing a Beethoven
piano sonata. It does mean that the huge amount of effort and practice needed to get
there requires an equally huge amount of specific motivation from the learner, and
therein lies the trouble.
The research from evolutionary psychology and related disciplines (see the
mathematical case studies below) hints that the situation with some parts of Formal
Mathematics may be even worse than that. Not only do we not have cognitive
modules that can be easily marshaled for this kind of thinking; it may even be in
direct clash with the thinking we find most natural, i.e., with parts of our universal
human nature.
4. Mathematical case studies
The theoretical framework outlined in the previous sections will now be applied to
some mathematical “case studies” which, without that framework, would have
remained unexplained paradoxes. Each of the case studies deals with a well-defined
mathematical topic or task, which on the one hand appears very simple, but on the
other hand is known to cause serious difficulty for many people. We have already
seen one such example—the multiplication table and the surprising difficulties many
people have in memorizing it. This example also points the way to dissolving the
apparent paradox in general: we look for a particular trait of human nature (i.e.
something all people are naturally good at) that undermines the mathematical
thinking required. The seeming paradox would then be explained as a special case of
the clash between Darwinian adaptations to ancient ecologies and the requirements of
modern civilization. Once again, on this view, the cause for the failure of most people
in the particular mathematical task lies not in a cognitive weakness, but rather in a
cognitive strength—one of the core abilities people are naturally good at, which
unfortunately happens to clash with the required mathematical behavior.
The complete elaboration of each case study calls for three components of empirical
research: one, to establish the widespread difficulty of the topic under discussion;
two, to find a particular trait that undermines the mathematical thinking required; and
three, to establish that the undermining trait is indeed part of human nature. In this
preliminary paper, whose main purpose is to lay out the theoretical framework and
demonstrate its usefulness, I will limit myself to presenting two more case studies for
which empirical research already exists: elementary mathematical logic (“if…
then…” statements) and “Do functions make a difference?”. One more topic that
exhibits similar phenomena is statistical thinking (e.g. Cosmides & Tooby, 1996;
Gigerenzer, 2002; Gilovich, Griffin & Kahneman, 2002). This is a vast, complex and
controversial area and will be treated in a separate paper.
4.1 Mathematical Logic (ML) vs. the Logic of Social Exchange (LSE)
In the introduction we met our first paradox: How is it that people easily and
naturally perform enormous memory feats, such as learning a 20,000-word
vocabulary, but at the same time have great difficulty memorizing 20 multiplication
facts? Our answer was that memorizing the tables is undermined by one of the great
strengths of our cognitive apparatus, namely associative memory. In the same vein,
this subsection will present research showing that people do not naturally think in
logical terms, so they fail on simple logical tests. In contrast, they do reason naturally
about social situations (no matter how complex), so they do well on tasks involving
complex “logic of social exchange”. Furthermore, when conflict arises, people mostly
choose the “logic of social exchange” (LSE) over mathematical logic (ML), so their
answers would be judged erroneous by the norms of ML.
“Social exchange appears to be an ancient, pervasive and central part of
human social life. […] As a behavioral phenotype, social exchange is as
ubiquitous as the human heartbeat. The heartbeat is universal because the
organ that generates it is everywhere the same. This is a parsimonious
explanation for the universality of social exchange as well: the cognitive
phenotype of the organ that generates it is everywhere the same. Like the
heart, its development does not seem to require environmental conditions
(social or otherwise) that are idiosyncratic or culturally contingent.”
(Cosmides & Tooby, 1997)
Cosmides and Tooby (1992, 1997) have used the Wason card selection task (which
tests people’s understanding of “if P then Q” statements; cf. Wason, 1966; Wason &
Johnson-Laird, 1972) to uncover what they refer to as people’s evolved reasoning
“algorithms”. In a typical example of the card selection task, subjects are shown four
cards, say A, T, 6, 3, and are told that each card has a letter on one side and a number
on the other. The subjects are then presented with the rule, “if a card has a vowel on
one side, then it has an even number on the other side”, and are asked the following
question: What card(s) do you need to turn over to see if any of them violate this
rule? The notorious result is that about 90% of the subjects, including science majors
in college, give an incorrect answer. The actual percentage may vary somewhat
depending on the content of P and Q and on the background story.
The motivation behind the original Wason experiment was partly to see if people
would naturally behave in accordance with the Popperian paradigm that science
advances through refutation (rather than confirmation) of held beliefs. In the card
selection task, it is necessary to consider what the negation of the given rule is: what
would refute it? The answer is that the rule is violated if and only if a card has a vowel
on one side but an odd number on the other. Thus, according to mathematical logic,
the cards you need to turn are A (to see if it has an odd number on the other side) and
3 (to see if it has a vowel on the other side). Most people who take the task neglect
the 3 card, often choosing 6 instead.
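The ML-correct choice can be stated mechanically: turn exactly those cards whose visible face could belong to the falsifying combination “vowel and odd number”. A minimal sketch (my own illustration, not part of the cited studies):

```python
def cards_to_turn(cards):
    """Cards to turn for the rule 'if vowel on one side, then even number on the other'.

    A card must be turned iff its visible face could belong to a violating
    card (vowel + odd number): a visible vowel (P) or a visible odd number (not-Q).
    """
    turn = []
    for c in cards:
        if isinstance(c, str) and c in "AEIOU":   # visible P: hidden side may be odd
            turn.append(c)
        elif isinstance(c, int) and c % 2 != 0:   # visible not-Q: hidden side may be a vowel
            turn.append(c)
    return turn

print(cards_to_turn(["A", "T", 6, 3]))   # ['A', 3]
```

The commonly chosen 6 card is irrelevant: whatever letter is on its other side, the rule cannot be violated, since the rule says nothing about cards whose number side is even.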
Like many cognitive psychologists before them, Cosmides and Tooby have presented
their subjects with many versions of the task, all having the same logical form “if P
then Q”, but varying widely in the contents of P and Q and in the background story.
While the classical results of the Wason Task show that most people perform very
poorly on it, Cosmides and Tooby demonstrated that their subjects performed
strikingly well on tasks involving conditions of social exchange. In social exchange
situations the individual receives some benefit and is expected to pay some cost. In
the Wason experiment they are represented by statements of the form “if you get the
benefit, then you pay the cost” (e.g., “If a man eats cassava root, then he must have a
tattoo on his chest” or, more contemporarily, “if you get your car washed, then you
pay $5”). A cheater is someone who takes the benefit but does not pay the cost.
Cosmides and Tooby explain that when the Wason task concerns social exchange, a
correct answer amounts to detecting a cheater. Since subjects performed correctly and
effortlessly in such situations, Cosmides and Tooby have theorized that our mind
contains evolved “cheater detection algorithms”. This is also supported by the
extensive research literature on “reciprocal altruism”, which shows that this form of
altruism can only evolve in species that have mechanisms to detect and punish
cheaters.12
Significantly for mathematics education, Cosmides and Tooby (1992, pp. 187-193;
1997) have also tested their subjects on the so-called “switched social contract”
(mathematically, the converse statement “if Q then P”), in which the correct answer
by the logic of social exchange is different from that of mathematical logic. Here is a
more detailed explanation of the two versions of the social contract tasks (adapted
from Cosmides & Tooby, 1997).
Standard version (if P then Q): if you take the benefit, then you pay the cost
(e.g., “if you get your car washed, then you pay $5”)
Switched version (if Q then P): if you pay the cost, then you take the benefit
(“if you pay $5, then you get your car washed”)
The four cards in the two versions, and their logical status (P and Q here denote the
antecedent and consequent of the rule as stated in each version):

              Benefit      Benefit          Cost      Cost
              Accepted     Not Accepted     Paid      Not Paid
  Standard    P            not-P            Q         not-Q
  Switched    Q            not-Q            P         not-P
The general form of the proposition in the Wason card selection task is:
If P then Q.
Its violation is its ML negation: P & not-Q.
The standard form of a task expressing social contract (SC) is:
If a person receives a benefit (P), then she pays the price (Q)
(e.g., “if you get your car washed, then you pay $5”).
A cheater is someone who violates LSE, that is, he takes the benefit but doesn’t pay
the price: P & not-Q. Thus, a Wason task involving standard social contract cannot
distinguish between ML and LSE: The falsifying statements, hence the correct
answers, are the same.
The switched social contract (SSC) is:
If a person pays a price (Q), then he or she receives a benefit (P)
(“if you pay $5, then you get your car washed”).
Its ML negation is Q & not-P, but its LSE violation (i.e., cheater detection) is P &
not-Q, since a cheater remains a person who takes the benefit but doesn’t pay the
price.
Thus, a Wason task involving SSC can serve to distinguish between LSE and ML.
The results obtained by Cosmides & Tooby (1992, 1997) were that their subjects
overwhelmingly chose the LSE answer over the ML answer. As might be expected,
when conflict arises, the logic of social exchange overrides mathematical logic.
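The divergence between the two “logics” can be made explicit: the ML falsifier depends on the direction of the rule, whereas the LSE cheater is always “benefit taken and cost not paid”. The following sketch (my own, in Python; not from Cosmides & Tooby) checks that the two criteria coincide on the standard contract and come apart on the switched one:

```python
# Each card records whether the benefit was taken and the cost paid.
# ML violation depends on the rule's direction; LSE cheating does not.

def ml_violates(card, rule):
    """Material-logic violation: antecedent true, consequent false."""
    if rule == "standard":          # if benefit then cost
        return card["benefit"] and not card["cost"]
    else:                           # switched: if cost then benefit
        return card["cost"] and not card["benefit"]

def lse_cheats(card):
    """Cheater detection: took the benefit, did not pay the cost."""
    return card["benefit"] and not card["cost"]

cards = [{"benefit": b, "cost": c} for b in (True, False) for c in (True, False)]

# Standard contract: the two criteria agree on every card.
assert all(ml_violates(k, "standard") == lse_cheats(k) for k in cards)

# Switched contract: the criteria come apart, so the task can tell them apart.
assert any(ml_violates(k, "switched") != lse_cheats(k) for k in cards)
```

On the switched contract the ML falsifier is “cost paid, benefit withheld”, while the LSE cheater remains “benefit taken, cost unpaid”; the two pick out different cards, which is exactly what makes the switched task diagnostic.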
One way to interpret this result is that according to LSE, an “if… then…” statement is
automatically interpreted symmetrically, as if it meant “if and only if”. For a vivid
illustration of this interpretation, consider the following thought experiment. Imagine
you are confronted by a thug in a dark back alley. The thug: “If you don’t give me the
money, I will beat you up!” You give him the money and he nevertheless beats you.
You complain (having managed to get back on your feet and to dust your clothes):
“But I did give you the money, so how come you still beat me?” You feel cheated, as
everyone else in this situation probably would. But the fact is that the thug didn’t
even break his promise. According to ML (but not LSE), his statement “If you don’t
give me the money (P), I will beat you up (Q)” implies nothing whatever about the
case where you do give him the money (not-P).
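In truth-table terms, the thug’s statement is a material conditional, which is vacuously true whenever its antecedent is false. A one-line check (my own illustration):

```python
implies = lambda p, q: (not p) or q  # material conditional "if P then Q"

# P = "you don't give me the money", Q = "I beat you up".
# You gave the money (P is False), he beat you anyway (Q is True):
print(implies(False, True))   # True: the thug's statement is not violated
# The only violating case is P true and Q false:
print(implies(True, False))   # False
```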
This body of theoretical and experimental work by evolutionary psychologists adds a
new level of support, prediction and explanation to the well-documented phenomenon
in math education (e.g. Hazzan & Leron, 1996) that students are prone to confusing
mathematical propositions with their converses. Once again, the new EP
perspective offered here is that this confusion arises not because our minds are too
feeble for making this modern distinction, but because it runs against some of the
mind’s ancient and most powerful capabilities.
4.2 Functions and variables in mathematics vs. operations and objects in the
real world.
The phenomenon I present here came up in the context of research on learning
computer science (specifically, functional programming), but has turned out to be, at
bottom, an observation about mathematical thinking. Interestingly, it is hard to see how
this kind of data could have been elicited by a purely mathematical task. The
empirical research reported here is taken from Tamar Paz’s (2003) doctoral
dissertation, carried out under the supervision of the present author. No knowledge of
computer science or programming is assumed in the following analysis.
The functional programming paradigm originated with the LISP programming
language and its various offspring, including later dialects such as Scheme and Logo.
In these languages the basic data objects are lists,13 i.e., an ordered sequence of objects
of the language; for example, the following is a list with 4 elements (the last one being
itself a list): [ALL WE NEED [IS LOVE]]. We can create variables in the language by
assigning a name to an object. For example, suppose we assign the name L to the
above list. Then we have created a variable whose name is L and whose value is the list
[ALL WE NEED [IS LOVE]]. Formally, the variable consists of the pair of linked
objects (name and value), but it is customary to refer to it simply as “the variable L”.
In addition to lists, functional programming provides operations
(functions) on lists, for example First and Rest. The operation First inputs a list and
outputs its first element; thus, for the list L defined above, First L will output the
word ALL. Rest inputs a list and outputs the list without its first element; thus Rest L
will output the list [WE NEED [IS LOVE]]. Two functions can be composed, as in
mathematics, by taking the output of one as the input to the other; thus, First Rest L
will output WE. (It would be appropriate to assign the name Second to the composed
function First Rest, since in general it will output the second element of a list.)
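The operations just described can be sketched directly in a list-based language; the following Python version (the names First, Rest and Second come from the text, the implementation is my own) reproduces the examples:

```python
def First(lst):
    """Return the first element of a list."""
    return lst[0]

def Rest(lst):
    """Return a new list containing all but the first element."""
    return lst[1:]          # slicing builds a fresh list; lst is untouched

def Second(lst):
    """Composition of First and Rest, as in the text."""
    return First(Rest(lst))

L = ["ALL", "WE", "NEED", ["IS", "LOVE"]]
print(First(L))    # ALL
print(Rest(L))     # ['WE', 'NEED', ['IS', 'LOVE']]
print(Second(L))   # WE
```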
Now consider the following question, which haunted the students in Paz’s (2003)
research:
When we perform an operation (a function) in functional programming, what
happens to the input variable? For example, after we have executed Rest L, what is
the (new) value of L?14
When dealing with a particular programming language, the answer may of course
depend on the decisions taken by the designers of the language. However, the
mathematical answer—and the one adopted by most functional languages—is that the
input variable remains the same: functions do not change their input. But this is not
what many students in the research thought: they worked under the assumption that
the input variable had changed, in fact, that it had received the value of the output of
the function.15 Thus if L is [ALL WE NEED [IS LOVE]], then (on the students'
view) the operation Rest will actually remove the first element of L, so that L will
now become [WE NEED [IS LOVE]]. This view, while very natural (as we argue
below), nonetheless leads to programming errors. In fact, it is through the analysis of
such errors that Paz first came across this phenomenon.
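The clash can be made concrete by comparing the functional (non-mutating) behavior with the mutating behavior the students expected; in the sketch below (my own, in Python), Python's destructive pop plays the role of the operation the students imagined:

```python
def Rest(lst):
    """Functional Rest: returns a new list, leaves the input unchanged."""
    return lst[1:]

L = ["ALL", "WE", "NEED", ["IS", "LOVE"]]
Rest(L)                        # the output is deliberately discarded here
assert L[0] == "ALL"           # L is unchanged: functions do not change their input

# The behavior many students expected is that of a mutating operation:
M = ["ALL", "WE", "NEED", ["IS", "LOVE"]]
M.pop(0)                       # destructively removes the first element
assert M[0] == "WE"            # M itself has changed
```

Code written on the students' (mutating) assumption silently loses the original list, which is precisely the class of programming errors through which Paz first observed the phenomenon.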
The following discussion is admittedly speculative and more research is needed to
substantiate its claims; however, I believe it does offer as convincing an explanation
for this observed behavior as any we can derive from other theoretical frameworks
employed in current math education research. For example, we could say (as has
often been said before) that the modern concept of functions is “counter-intuitive”.
While this explanation is certainly true, it merely raises the further question: What is
the nature and source of this intuition, and why the clash?
I propose to view this empirical finding as an example of the clash between the
modern mathematical (or computational) view of functions, and their origin in human
nature. To do this, we need to look for the roots (mainly cognitive and developmental,
but also historical) of the function concept—a synthesis of Freudenthal’s (1983)
“didactical phenomenology” and Geary’s (2002) “biologically primary abilities”:
What in the child’s natural experience during development may have given rise to the
basic intuitions on which the function concept is built? (Freudenthal, 1983; Kleiner,
1989; Lakoff & Núñez, 2000.)
I propose two candidates for such sources, which historically would lead to what I
call the algebraic and the analytic images of functions. These two branches of the
function concept share the same concept definition, but differ significantly in their
concept image (Vinner & Tall, 1981). The algebraic image is invoked when the
operation acts on objects of an arbitrary character, as is mostly the case in abstract
algebra (transformations, permutations, symmetries, isomorphisms, homomorphisms
and the like). It is particularly relevant to the kind of arbitrary objects found in
functional programming, and will be the main focus of the subsequent discussion.
The analytic image is invoked when dealing with functions of a real variable—where
the concepts of slope, continuity, graph, monotonicity and the like are meaningful—
and will be considered here only briefly.
According to the algebraic image, an operation acts on an object. The agent who
is performing the operation takes an object and does something to it. For example, a
child playing with a toy may move it, squeeze it or paint it. The object before the
action is the input and the object after the action is the output. The operation is thus
transforming the input into the output. This image is traditionally captured by the
“function machine” metaphor. The proposed origin is the child’s experience of acting
on objects in the physical world. This is part of the basic mechanism by which the
child comes to know the world around it, and I believe it is part of what I have called
universal human nature. Part of this mechanism is perceiving the world via objects,
categories and operations on them (Piaget, Rosch, etc.). Inherent to this image is the
experience that an operation changes its input—after all, that’s why we engage in it in
the first place: we move something to change its place, squeeze it to change its shape,
paint it to change its color.
But this is not what happens in modern mathematics or in functional programming. In
the modern formalism of functions, nothing is really changing! The function is a
“mapping between two fixed sets” or even, in its most extreme form, a set of ordered
pairs. As is the universal trend in modern mathematics, an algebraic formalism has
been adopted that completely suppresses the images of process, time and change.16
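The static view can itself be rendered literally: a function as a set of ordered pairs, which can only be looked up, never altered. A small illustration (my own; the helper apply is hypothetical):

```python
# A function as a set of ordered pairs: here, squaring on {1, 2, 3}.
square = frozenset({(1, 1), (2, 4), (3, 9)})

def apply(f, x):
    """Look up the unique pair whose first component is x."""
    return next(b for (a, b) in f if a == x)

print(apply(square, 3))   # 9 -- nothing happens 'in time'; a pair is merely looked up
```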
The existence of the clash between the natural and formal views of functions receives
additional support from watching the intuitive behavior of experienced programming
instructors when introducing such functions to students. Many prefer to use a
dynamic, “more intuitive” definition such as “Rest removes the first element of a
list”. Compared to a formal definition, this formulation is indeed more “user friendly”
and is more easily understood and memorized by the students; but it also promotes
the (non-normative) impression of the input changing into the output: if Rest really
removed the first element of L (its input) then after performing Rest L, L would be
left without its first element.
Incidentally, since I do not believe that what we say as teachers has the power to
implant (or uproot) intuitions in our students, I still happily use such intuitive (and
formally imprecise) formulations in my classes. The best way to introduce complex
concepts (continuous functions in calculus is an even better example) is, I believe, by
working synergistically with the students’ intuitions. Then we can attempt to
gradually help the students refine their intuition, along with the mathematical
definition itself, towards more professionally acceptable standards.
The analytic image has a different intuitive source for functions: co-variation, two
quantities that are changing together (Freudenthal, 1983; Kleiner, 1989). Although
this image is not directly relevant to the list processing functions considered here, we
may note in passing that here too the basic image—that of the two variables actually
varying together—goes contrary to the static modern definition.
Note: In normal mathematical discourse, this phenomenon is hard to spot since we
give different names, often x and y, to the input and output, and students are used to
assigning the input to x and the output to y. The same holds for Pascal (or C)
programming, since the input and output values are assigned different names. In
functional programming, in contrast, since the name of the game is composition of
functions, the output of one function is used directly as input for the other, and is not
assigned a name. But this is exactly the point of this research: by changing the
familiar (protective) context, the students’ “natural” assumptions are revealed.
5. Conclusion
In this paper, I have used an evolutionary framework to explain some difficulties in
mathematical thinking—difficulties that might otherwise have seemed puzzling. How is it
that we can naturally and effortlessly memorize thousands of “language facts” when
learning our mother tongue, and at the same time have great difficulties memorizing
some 20 multiplication facts? By the same token, how is it that we can naturally and
effortlessly learn to carry out highly complex “social reasoning”, and at the same time
have great difficulties carrying out an elementary logical task, such as the Wason card
selection task? The answer I have given, drawing on research in evolutionary
psychology, stems from the mismatch between ancient adaptations and the
requirements of modern civilization. On the one hand, some of the things we are
“naturally” (i.e., universally and spontaneously) good at are the result of adaptations
to the ancient ecologies in which our species has evolved over millions of years. On
the other hand, the things we are particularly weak at usually represent the
requirements of modern civilization to which there was not nearly enough time for
any significant adaptations to evolve by biological natural selection. To the extent
that we are at all reasonably good at any modern task (such as driving cars), this
usually happens through co-optation of some more ancient abilities to the needs of the
modern task. The tasks presented above, however, seem to run head-on against our
mind’s natural strengths.
In mathematics in particular, there seems to be a historical trend towards increasing
formalism and rigor, and away from the original intuitions that gave rise to the
various mathematical concepts. More specifically, our basic intuitions are usually
rooted in acting on the (physical and social) environment, and are thus inherently tied
to process, to doing things, to change over time. But the trend in modern mathematics
has been away from process (which is hard to formalize), hence away from the
original intuitions. Thus, for example, we have seen that the modern formalism no
longer supports the intuitions of functions as actually changing things.
The educational implications of these insights are
certainly not to change modern mathematics—after all, it has performed admirably
well—but rather, it seems to me, to make a sharper distinction between the
mathematics needed for the professional minority, and that which is digestible by the
great majority of non-professionals. Learning formal mathematics, even the parts that
clash with human nature, is possible, but it requires a prolonged and sustained effort,
which in turn requires an intense motivation by the learner. Though these conditions
are achieved by some teachers and by some students, they are not likely to be met by
the general population. There is a large body of genuine and significant mathematics
that can be learned in a more natural setting, by compromising (at least temporarily)
on some of the formalism and precision dictated by the needs of professional
mathematicians.
Acknowledgement: I gratefully acknowledge helpful comments from Lisser Rye
Ejersbo, Ilan Eshel, Orit Hazzan, Hanna Lifson, Domingo Paola, Miriam Reiner,
and David Tall.
References
Barkow, J.H., Cosmides, L., and Tooby, J. (Eds.): 1992, The Adapted Mind:
Evolutionary Psychology and the Generation of Culture, Oxford University Press.
Bickerton, D.: 1995, Language and Human Behaviour, University of Washington
Press.
Bjorklund, D.F., and Pellegrini, A.D.: 2002, The Origins of Human Nature:
Evolutionary Developmental Psychology, American Psychological Association
Press.
Brown, D.E.: 1991, Human Universals, McGraw-Hill, New York.
Buss, D.M. (Ed.): 2005, The Handbook of Evolutionary Psychology, Wiley.
Butterworth, B.: 1999, What Counts: How Every Brain Is Hardwired for Math, Free
Press.
Cosmides, L. and Tooby, J.: 1992, ‘Cognitive Adaptations for Social Exchange’, In
Barkow, J., Cosmides, L., and Tooby, J. (Eds.), The Adapted Mind: Evolutionary
Psychology and the Generation of Culture, Oxford University Press, 163-228.
Cosmides, L. and Tooby, J.: 1996, ‘Are Humans Good Intuitive Statisticians After
All? Rethinking Some Conclusions from the Literature on Judgment Under
Uncertainty’, Cognition 58, 1-73.
Cosmides, L. and Tooby, J.: 1997, Evolutionary Psychology: A Primer, retrieved 8
March 2016, from http://www.psych.ucsb.edu/research/cep/primer.html .
Cosmides, L. and Tooby, J.: 2000, 'Evolutionary Psychology and the Emotions', in
Lewis, M. and Haviland-Jones, J. (Eds.), Handbook of Emotions (2nd Edition),
retrieved 8 March 2016, from
http://www.psych.ucsb.edu/research/cep/emotion.html
Deacon, T.W.: 1997, The Symbolic Species: The Co-evolution of Language and the
Brain, Norton.
Dehaene, S.: 1997, The Number Sense: How the Mind Creates Mathematics, Oxford
University Press.
Devlin, K.: 2000, The Math Gene: How Mathematical Thinking Evolved and Why
Numbers Are Like Gossip, Basic Books.
Dewey, J.: (1989/1909), ‘The Influence of Darwinism on Philosophy’, In
Hollinger, D.A. and Capper, C. (Eds.), The American Intellectual Tradition,
128-34. Oxford University Press.
Fodor J.: 2000, The Mind Doesn’t Work That Way: The Scope and Limits of
Computational Psychology, MIT Press.
Freudenthal, H.: 1983, Didactical Phenomenology of Mathematical Structures,
Kluwer.
Geary, D.: 2002, ‘Principles of Evolutionary Educational Psychology’, Learning and
individual differences, 12, 317-345.
Gigerenzer, G.: 2002, Calculated Risks: How to Know When Numbers Deceive You,
Simon and Schuster.
Gilovich, T., Griffin, D., and Kahneman D. (Eds.): 2002, Heuristics and Biases: The
Psychology of Intuitive Judgement, Cambridge University Press.
Hagen, E.: 2004, The Evolutionary Psychology FAQ, retrieved 08 March 2016, from
http://www.anth.ucsb.edu/projects/human/evpsychfaq.html .
Hauser, M.D. and Spelke, E.: 2004, 'Evolutionary and developmental foundations of
human knowledge', in Gazzaniga, M. (Ed.), The Cognitive Neurosciences, III,
MIT Press.
Hazzan, O. and Leron, U.: 1996, ‘Students' Use and Misuse of Mathematical
Theorems: The Case of Lagrange's Theorem’, For the Learning of Mathematics,
16, 23-26.
Hull, D.: 1982, ‘Biology and Philosophy’, in Floistad, G. (ed.) Contemporary
Philosophy: A New Survey, pp. 281-316, Kluwer.
Kaput, J. and Shaffer, D.: 2002, 'On the Development of Human Representational
Competence from an Evolutionary Point of View: From Episodic to Virtual
Culture', in Gravemeijer, K., Lehrer, R., Oers, B. van and Verschaffel, L. (Eds.),
Symbolizing, Modelling and Tool Use in Mathematics Education, Kluwer, 269-286.
Kleiner, I.: 1989, ‘Evolution of the Function Concept: A Brief Survey’, The College
Mathematics Journal, 20, 282–300.
Lakatos, I.: 1978, Mathematics, Science and Epistemology, Philosophical Papers Vol.
2, edited by J. Worrall and G. Currie, Cambridge University Press.
Lakoff, G. and Núñez, R.: 2000, Where Mathematics Comes From: How the
Embodied Mind Brings Mathematics Into Being, Basic Books.
Laland, K.N. and Brown, G.R.: 2002, Sense and Nonsense: Evolutionary Perspectives
on Human Behaviour, Oxford University Press.
Leron, U.: 2003, Origins of Mathematical Thinking: A Synthesis, Proceedings
CERME3, Bellaria, Italy, March, 2003, retrieved 08 March 2016, from
http://www.dm.unipi.it/~didattica/CERME3/WG1/papers_doc/TG1-leron.doc
Over, D.E. (Ed.): 2003, Evolution and the Psychology of Thinking: The Debate.
Psychology Press.
Paz, T.: 2003, Natural Thinking vs. Formal Thinking: The Case of Functional
Programming, doctoral dissertation (Hebrew). Technion – Israel Institute of
Technology.
Pinker, S.: 1994, The Language Instinct, Harper Collins.
Pinker, S.: 1997, How the Mind Works, Norton.
Pinker, S.: 2002, The Blank Slate: The Modern Denial of Human Nature, Viking.
Plotkin, H.: 1998. Evolution in the Mind: an Introduction to Evolutionary Psychology,
Harvard University Press.
Plotkin, H.: 2004, Evolutionary Thought in Psychology: A Brief History, Blackwell.
Reiner, M. and Leron, U.: 2001, Physical Experiments, Thought Experiments,
Mathematical Proofs, Model-Based Reasoning Conference (MBR’01), Pavia,
Italy.
Ridley, M.: 2003, Nature via Nurture: Genes, Experience, and What Makes Us
Human, Harper Collins.
Shaffer, D. and Kaput, J.: 1999, ‘Mathematics and Virtual Culture: An Evolutionary
Perspective on Technology and Mathematics Education’, Educational Studies in
Mathematics 37, 97–119.
Stanovich, K.E. and West, R.F.: 2003, 'Evolutionary versus instrumental goals: How
evolutionary psychology misconceives human rationality', in Over, D.E. (Ed.),
Evolution and the Psychology of Thinking: The Debate, Psychology Press, pp. 171-230.
Steen, F. (Web Editor): 2001, Evolutionary Psychology Index. Retrieved 08 March
2016, from http://cogweb.ucla.edu/ep/index.html .
Sterelny, K.: 2001, Dawkins vs. Gould: Survival of the Fittest, Icon Books.
Tall, D.: 2001, 'Conceptual and Formal Infinities', Educational Studies in
Mathematics, 48, 199-238.
Vinner, S. and Tall, D.: 1981, ‘Concept Image and Concept Definition in
Mathematics with Particular Reference to Limits and Continuity’, Educational
Studies in Mathematics, 12, 151–169.
Wason, P.: 1966, ‘Reasoning’, in Foss, B.M. (Ed.), New horizons in psychology,
Penguin.
Wason, P. and Johnson-Laird, P.: 1972, The Psychology of Reasoning: Structure and
Content, Harvard University Press.
Notes
1. More precisely, the "computational and neural architecture of the human mind and brain" that
regulates that behavior.
2. I am only interested here in scientific controversy. There is also emotional and ideological
opposition, the discussion of which is outside the scope of this article. See Pinker (2002) for a
thorough discussion of this "modern denial of human nature".
3. "Haldane can be found remarking, 'Teleology is like a mistress to a biologist: he cannot live
without her but he's unwilling to be seen with her in public.' Today the mistress has become a
lawfully wedded wife. Biologists no longer feel obligated to apologize for their use of
teleological language; they flaunt it. The only concession which they make to its disreputable
past is to rename it 'teleonomy'." (David Hull, 1982, p. 298)
4. Since brains do not fossilize and researchers must rely on indirect evidence, the jury is still out
on the precise figures; those differences, however, do not affect our main thesis.
5. On the contrary, language and social interaction are so enormously complex that no software
system can even approach the performance of a four-year-old.
6. This section is adapted from Leron (2003).
7. In a more comprehensive discussion of the innate roots of rudimentary mathematics, one
ought to consider, besides arithmetic, also topics such as rudimentary geometry, rudimentary
topology, rudimentary logic and others. These topics, however, are not nearly as well-researched
as rudimentary arithmetic, and the whole issue of rudimentary mathematics will
not affect the main thesis of the paper, which is wholly concerned with more advanced
mathematical thinking. Cf. Hauser and Spelke (2004), who say: "In fact, number may
represent the best worked out system of core knowledge to date, with well developed
theoretical models, and detailed empirical work in humans and animals that cuts across the
levels of behavior, mind, and brain."
8. Strictly speaking, 'formal' and 'informal' ought to refer not to the mathematical subject matter
itself but to its presentation and re-presentation. In many cases these may in fact describe two
facets of the same piece of mathematics, such as informal and formal treatments of continuity
in calculus.
9. For example, what exactly will be the set of the elements of the group? The element
corresponding to "putting on one's shoes"? The square of this element?
10. Devlin is relying here substantially on Bickerton's (1995) account of the evolution of
language.
11. See in this connection his "mathematical house" metaphor on p. 125.
12. For recent neuropsychological and cross-cultural evidence, published in the Proceedings of
the National Academy of Sciences (PNAS), cf.
http://www.psych.ucsb.edu/research/cep/socex/sugiyama.html#Lawrence%20S.%20Sugiyama .
For some of the controversy surrounding the Cosmides and Tooby interpretation of the card
selection research, especially concerning the so-called Massive Modularity Hypothesis, cf.
Over, 2003.
13. Hence the name LISP—an abbreviation for LISt Processing.
14. This is not a question the students were asked directly. Rather, the question—and its
somewhat surprising answer—came up through the students' use (and discussion during
interviews) of variables, while they were working on more meaningful programming tasks.
15. By saying that the students "thought" or "assumed" this, I don't necessarily mean that they
consciously thought so, or that they would give this answer if asked directly. All I mean is that
their programming behavior is consistent with this belief.
16. Professional mathematicians are still able to maintain these images despite the formalism, but
for novices the connection is hard to come by.