Representation - School of Life Sciences

Representation
I. Background.
The concept of representation is relevant in this context as "mental representation", which was
introduced by Cognitive Science to render the mysteries of the mind compatible with the laws of basic
science. A representation is anything that is about something. Philosophers call the property of being
about something Intentionality (see this volume). Intentionality and intentional are technical terms in
philosophy, not to be confused with the everyday use of the word ‘intentional’. When we think, we
think about things, e.g., about yesterday’s party. The party plays a role in my thought in two distinct
ways. On the one hand, there is the actual party I was at yesterday which my current thought is directed
at or refers to. This is called the target, referent or intentional object of the thought (these terms are not
all quite synonymous; see Cummins, 1996). On the other hand, there is how the party is presented as
being in my thought; for example, I may think of the party that it happened yesterday. “The party
happened yesterday” is the content of the thought. In this case, the thought is true; but if the thought
had content “the party happened a week ago”, the thought would be false. It is because target and
content are distinct that thoughts can be true or false (target and content match or mismatch); and that
deception, illusions and hallucinations are thus possible. Similarly, the distinction is what allows my
boss and my spouse’s lover to be thought of as different people (same referent person, different
contents); and thus more generally allows people to be at cross-purposes and for life to be interesting.
Representation is a central concept in Cognitive Science. The fact that some representations are
not mental (e.g. photos) has motivated the idea that understanding simple physical representations
might enable understanding the mind in a naturalistic way. Some feel that understanding representation
might give us a handle on consciousness itself. Perhaps the goal of psychology in general is precisely to
understand to what extent and in what way the mind represents. According to Perner (1991), how
children come to understand the mind as representational is central to child development. Further,
understanding representation is central to the philosophy of language, of science and of rationality.
Here we will focus just on issues relevant to the nature of mental representation.
II. The Representational Theory of Mind.
In order to explain how this non-physical relation of aboutness could be implemented in a
physical system (our brains or a computer), the representational theory of mind (proposed by Fodor,
amongst others; see Crane, 2005) sees mental states as representational states: if a student thinks of
the party, then the student has some neural activity (representational vehicle/medium) that represents
the party, and the attitude of thinking about (rather than desiring, or hating, etc.) the party is captured
by the functional role this representation of the party plays in relation to the student's other mental
states. This approach promised to be a satisfying solution to understanding the mind, in particular as
computer programmes seemed good instances of mental representations, i.e., internal electrical
processes that represent things outside the machine. Also, the notion of representation is familiar from
other physical entities, such as linguistic inscriptions or pictures, that capture aboutness. For instance, a
photo of the party is a physical object (representational vehicle: piece of paper with patterns of colour)
that shows a scene of the party (target) in a certain way (content). Moreover, it seems clear what each
of these elements is. However, when one asks why the photo shows the scene of the party, the answer
defers to the beholders of the photo: they see that scene in it; similarly for language: the readers
understand the linguistic inscriptions in the book, etc. If it is us human users of these representations
that determine content, then this makes them useless to explain the origin of Intentionality of the mind,
because it is only derived from their users' Intentionality (derived intentionality, see Searle 2004;
Chinese Room, this volume) and there is no user endowed with Intentionality reading our nervous
activity. Philosophers have since made various proposals of how Intentionality can be naturalized by
explaining how mental representations can have content without the help of an agent imbued with
Intentionality. This quest for naturalization has also been made pressing by the shift from classical
computer programming, where the programmer starts with meaningful expressions, to Connectionist
(see this volume) and robotic systems, where the system starts with nodes and connections that carry no
meaning. Meaning has to be built up by interacting with the environment. By what criteria does an
element in such a network represent something?
III. Foundational Problems of Naturalizing Representation:
III.1 Representational content.
The three main approaches to how the content of a representation is determined are the
following:
III.1.a Causal/informational theories (Dretske, 1981) assume that a neural activity "cow" becomes a
representation meaning COW if it is reliably caused by the presence of a cow, hence reliably carries the
information that (indicates) a cow is present. A recognized problem with this approach is that on this
theory representations cannot misrepresent. If "cow" is sometimes caused by the presence of a horse on
a dark night, then it simply represents the disjunction COW-OR-HORSE-ON-A-DARK-NIGHT
(disjunction problem) instead of misrepresenting the horse as a cow as one would intuitively expect. A
possible solution to this problem is to distinguish an error-free learning period, which establishes
meaning, and an application period in which misrepresentation can occur. But this distinction is
arbitrary and has no basis in reality.
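The disjunction problem can be made vivid with a toy detector (a hypothetical sketch with made-up numbers, not a model from the literature): a mechanism that tokens "cow" whenever the visible shape is cow-like enough will also be triggered by a horse whose outline is blurred by darkness, and on a pure causal/informational theory whatever reliably causes the token belongs to its content.

```python
def cow_token_fires(cow_likeness: float, light_level: float) -> bool:
    """A crude "cow" detector: fires when the *visible* shape is cow-like enough.

    cow_likeness: how cow-like the animal's actual outline is (0..1).
    light_level: 1.0 is daylight; darkness blurs shape differences.
    The threshold and the darkness term are illustrative assumptions.
    """
    visible_cow_likeness = cow_likeness + (1.0 - light_level) * 0.4
    return visible_cow_likeness > 0.7

# A cow in daylight and a horse on a dark night both reliably cause the token,
# so on a purely causal theory it indicates COW-OR-HORSE-ON-A-DARK-NIGHT:
assert cow_token_fires(cow_likeness=0.9, light_level=1.0)      # cow, daylight
assert cow_token_fires(cow_likeness=0.4, light_level=0.1)      # horse, dark night
assert not cow_token_fires(cow_likeness=0.4, light_level=1.0)  # horse, daylight
```

Nothing in the mechanism itself marks the dark-night horse case as an error, which is exactly the problem.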
III.1.b Functional role semantics (e.g. Block, Harman; see Greenberg and Harman, 2006) are
conceived in the spirit of Sellars' theory theory of concepts. Here the meaning of a representation is
determined by the permissible inferences that can be drawn from it or, in more liberal versions, how the
representation is used more generally. "cow" means COW because it licenses inferences that are
appropriate about cows. Problems with this approach include how learning anything new, by changing
the inferences one is disposed to draw, may change the meaning of all one’s representations (holism).
It is also not clear how a convincing story concerning misrepresentation is possible; if the use of a
representation includes drawing the inference that a horse on a dark night is a cow, how can a horse
going for her evening constitutional ever come to be misrepresented as a cow? A possible solution is to
restrict which uses determine meaning, but once again this seems arbitrary if not question-begging.
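The holism worry can be sketched in a few lines (a deliberately crude toy; the particular inferences are invented for illustration): if a token's meaning just is the set of inferences it licenses, then learning any new fact changes that set and thereby the meaning.

```python
# Toy functional-role "meaning": the set of inferences a token licenses.
licensed_inferences = {
    "cow": {"is an animal", "gives milk", "eats grass"},
}

meaning_before = frozenset(licensed_inferences["cow"])

# Learning something new adds a licensed inference ...
licensed_inferences["cow"].add("is a ruminant")

meaning_after = frozenset(licensed_inferences["cow"])

# ... and so, on a strict functional-role view, the meaning of "cow" has shifted:
assert meaning_before != meaning_after
```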
III.1.c Teleosemantics (e.g. Millikan, 1993). Whereas causal/informational theories focus on the input
relationship of how external conditions cause representations, and functional role semantics focuses on
the output side of what inferences a representation enables, teleosemantics sees representations as
midway between input and output and focuses on function as determined by history. Representations
do not represent because they reliably indicate, or simply because they are used, but because of
functions brought about by an evolutionary or learning history. The basic assumption is that there must
be enough covariation between the presence of a cow and the tokening of "cow" that the token can
guide cow-appropriate behaviour often enough to confer a selective advantage, without which the
cow-"cow" causal link would not have evolved. The “cow” token acquires the function of indicating
cows. Just as historically defined function allows a heart to both function and malfunction, a
representation can malfunction, making misrepresentation possible. If a horse happens to cause a "cow"
token, then the token misrepresents the horse as a cow. The token still means COW (disjunction problem
solved) because the causing of "cow" by horses was not what established this causal link. Although this
theory seems to be the current leading contender, it is far from uncontested (Neander, 2004). For
example, intuitions are divided over whether history can be relevant to Intentionality; if a cosmic accident
resulted in an atom-by-atom replica of you in a swamp, would not swamp-you have intentional states
despite having no evolutionary or learning history?
III.2. Wide versus narrow content:
If I think “that water looks cool”, the target, the water, is in the world external to me. The
representational vehicle is in my head. Does the content depend on what is in my head or does it
depend on the world? Putnam first pointed out that causal theories imply that content depends on the
world, because what the content is depends on the nature of what caused the representation. Similarly,
on teleosemantic theories, content depends on a selectional history and thus depends on things external
to my head. According to another argument (by Burge), as the meaning of words depends on their
accepted use in a linguistic community, if I express my thoughts in words, the content of my thoughts
depends on facts external to me, namely, accepted use in a linguistic community. Conversely, on a
strict functional role semantics, content depends on the inferences I would draw, so content depends
only on what goes on inside my head. Content that depends on the world is called broad or wide
content; content that depends only on what goes on in the head is called narrow content. The thesis that
all content is broad is called content externalism. A curious consequence of content externalism is that
it seems there is no guarantee I will ever know what I am thinking, because the contents of my thoughts
depend on more than me. See Kim (2006, chapter 9) for further discussion.
III.3. Thoughts about the non-existent.
Attempts to naturalize representational content have focused almost exclusively on the use of
representations in detecting the state of the environment and acting on it. In other words,
the analysis—in particular of causal and informational approaches—is restricted to cases of real,
existing targets. The teleosemantic approach can be extended to the content of desires since these
representations can also contribute to the organism's fitness in a similar way to beliefs. Papineau (1993)
made desires the primary source of representation. However, little has been said about how natural processes
can build representations of hypothetical, counterfactual assumptions about the world or even outright
fiction. One intuitively appealing step is to copy informative representations (caused by external states)
into fictional contexts, thereby decoupling or quarantining them from their normal indicative functions.
Although intuitively appealing, this idea presupposes the standard assumption of symbolic computer
languages that there are symbols (albeit with only derived intentionality) that can be copied while
keeping their meaning. It remains to be shown how such copying can be sustained on a naturalist theory.
III.4 Implicit vs explicit representation.
A representation does not make all of its content explicit (see Implicit vs Explicit Knowledge,
this volume). According to Dienes and Perner (1999) something is implicitly represented if it is part of
the representational content but the representational medium does not articulate that aspect. This can be
illustrated with bee dances. Through the placement in space and the shape of the dance, bees convey to
their fellow bees the direction and distance of nectar. The
parameters of the dance, however, only change with the nectar’s direction and distance. Nothing in the
dance indicates specifically that it is about nectar. Yet nectar is part of the meaning. It is represented
implicitly within the explicit representation of its direction and distance. Just as a bee dance is about
nectar without explicitly representing that fact, arguably a person ignorant of chemistry means H2O
when she says “water” (an argument famously proposed by Putnam) even though such a person does
not explicitly represent the distinction between H2O and some other structure.
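The bee-dance example can be put schematically (an illustrative sketch; real dances are more complex, and the scaling of waggle runs to metres is an invented number): only direction and distance vary with the world and so are explicit in the message, while "nectar" appears nowhere in the dance and is supplied by the fixed interpretation convention.

```python
from dataclasses import dataclass

@dataclass
class BeeDance:
    """Only these parameters vary with the world; the message never says "nectar"."""
    angle_deg: float   # direction of the waggle run relative to the sun
    waggle_runs: int   # more runs ~ greater distance

def interpret(dance: BeeDance) -> dict:
    """The fixed convention supplies the implicit content: every dance is
    about nectar, although nothing in the dance encodes that fact."""
    return {
        "about": "nectar",                      # implicit: constant, never encoded
        "direction_deg": dance.angle_deg,       # explicit: varies with the world
        "distance_m": dance.waggle_runs * 75,   # explicit (scale factor is illustrative)
    }

message = interpret(BeeDance(angle_deg=40.0, waggle_runs=4))
assert message["about"] == "nectar" and message["distance_m"] == 300
```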
Bee dances only make the properties of distance and direction explicit. The English sentence
“the nectar is 100 yards south” makes explicit the full proposition that the location is predicated of a
certain individual, the nectar. The sentence “I see the nectar is 100 yards south” makes still more
explicit: the fact that one knows the proposition. Explicit representation is fundamental to
consciousness as we discuss below.
IV. Application in Sciences:
Despite foundational problems, research in psychology, functional neuroscience and computer science
has proceeded using the notion of representation. The tacit working definition is in most cases
covariation between stimuli and mental activity inferred from behaviour or neural processes.
IV.1. Cognitive Neuroscience:
Investigates the representational vehicle (neural structures) directly. The intuitive notion of
representation relies heavily on covariation. If a stimulus (including task instructions for humans)
reliably evokes activity in a brain region it is inferred that this region contains the mental
representations required to carry out the given task. Particularly pertinent was the success of finding
single cortical cells responsive to particular stimuli (single cell recording), which led to the probably
erroneous assumption that particular features are represented by single cells, parodied as "grandmother
cells". In the research programme investigating the Neural Correlates of Consciousness (this volume)
conscious contents of neural vehicles are determined by correlating verbal reports with neural
activities.
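The covariation notion at work here can be sketched as a simple correlation over trials (toy numbers, not real recordings; in practice one would apply statistical tests to measured activity):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Trial-by-trial stimulus presence and the recorded activity of one unit:
stimulus_present = [1, 1, 0, 0, 1, 0, 1, 0]
unit_activity = [0.9, 0.8, 0.2, 0.1, 0.7, 0.3, 0.9, 0.2]

# High covariation is taken as evidence that the unit (or region)
# carries a representation of the stimulus:
assert pearson_r(stimulus_present, unit_activity) > 0.9
```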
IV.2. Cognitive Psychology:
Assumes that experimental stimuli and task instructions are mentally represented. These
representations are modelled and task performance (errors or reaction times) predicted. The quality of
predictions provides the evidential basis of mental representations. For the investigation of longer
reasoning processes, introspective verbal reports are sometimes taken as evidence for the underlying
mental representations. One extensively investigated issue is whether thinking takes a language-like
form (propositions) or proceeds in an analogue form akin to perception, and whether it is symbolic or
connectionist (subsymbolic). Cognitive approaches to perception can be
contrasted with ecological and Sensorimotor Approaches (this volume); these latter attempt to do
without a notion of representation at all. The anti-representationalist style of argument often goes like
this: We need less rich representations than we thought to solve a task; therefore representations are not
necessary at all. The premise is a valuable scientific insight, but the conclusion is a non sequitur. To take
a case in ecological psychology: while people running to catch a ball do not need to represent the
trajectory of the ball, as might first be supposed, they do represent the angle of elevation of their gaze
to the ball. The representation is brutally minimal, but the postulation of a content-bearing state about
something (the angle) is necessary to explain how people solve the task. There is a similar conflict
between cognitive (representational) and Dynamical Systems (this volume) (anti-representational)
approaches to the mind, e.g. development.
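The ball-catching case above can be sketched as a minimal controller (a simplification in the spirit of "optical acceleration cancellation"; the gain, units, and numbers are illustrative assumptions): the fielder's only content-bearing state is the elevation angle of gaze to the ball, and running speed is adjusted so that the tangent of that angle rises at a constant rate.

```python
import math

def elevation_angle(ball_height: float, horizontal_gap: float) -> float:
    """The fielder's single represented quantity: gaze elevation, in radians."""
    return math.atan2(ball_height, horizontal_gap)

def adjust_speed(tan_angle_history: list, speed: float, gain: float = 1.0) -> float:
    """If tan(angle) is accelerating, the ball will land behind: speed up
    (run back); if it is decelerating, it will land in front: slow down."""
    a, b, c = tan_angle_history[-3:]
    optical_acceleration = (c - b) - (b - a)
    return speed + gain * optical_acceleration

# Ball high and closing: the angle rises ever faster, so the fielder backs up.
history = [math.tan(elevation_angle(h, d)) for h, d in [(5, 30), (9, 25), (12, 20)]]
new_speed = adjust_speed(history, speed=2.0)
assert new_speed > 2.0
```

Brutally minimal, but still a content-bearing state about the angle: drop that state and the control rule has nothing to operate on.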
IV.3. Artificial Intelligence:
The traditional approach (symbolic AI, or Good Old Fashioned AI, GOFAI) assumed that the
mind can be captured by a program, which is designed to have building blocks with representational
content (derived intentionality). The New Robotics is a reaction to trying to program explicitly all
required information into an AI, a strategy that largely failed. The new strategy is to start from the bottom
up, trying to construct the simplest sort of device that can actually interact with an environment.
Simplicity is achieved by having the device presuppose as much as possible about its environment by
being embedded in it; the aim is to minimise explicit representation, so that a crucial feature need only
be implicitly represented. In fact, the proclaimed aim of New Roboticists is sometimes to do without
representation altogether (see Clark, 1997).
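The bottom-up strategy can be illustrated with a Braitenberg-style light follower (an illustrative sketch, not a model from Clark, 1997): there is no map and no explicit goal; the light's bearing is represented, if at all, only implicitly in the sensor-to-motor wiring.

```python
def motor_speeds(left_light: float, right_light: float) -> tuple:
    """Crossed excitatory wiring: each sensor drives the opposite wheel,
    so the vehicle turns toward the stronger light. The base speed and
    unit gains are illustrative assumptions."""
    base = 0.2
    left_wheel = base + right_light    # right sensor drives left wheel
    right_wheel = base + left_light    # left sensor drives right wheel
    return left_wheel, right_wheel

# Light ahead-left: the right wheel spins faster, turning the vehicle left,
# toward the light -- with no explicit representation of "light over there".
left_wheel, right_wheel = motor_speeds(left_light=0.8, right_light=0.3)
assert left_wheel < right_wheel
```

Whether such a device represents at all, or merely exploits its embedding, is exactly the point of contention between the two camps.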
V. Representation and Consciousness:
V.1 Content of Consciousness
According to the strong representationalist position all the Contents of Consciousness (see this
volume) are just the contents of suitable representations (e.g. Tye, 1995). This is an attempt to
naturalise consciousness by explaining it in terms of what may be a more tractable problem,
representations. Others believe that conscious states can be qualitative in ways that are not
representational (Block, 1996); for example, Orgasm (this volume), moods, and pain are just states, not
states about anything. A strong representationalist may specify representational contents of these states
(e.g. generalised anxiety is both about a certain state of the body and cognitive system, and a
representation that the world is dangerous). The sceptic replies there is still something it is like to be in
these states that is left out (Qualia, this volume). A further position is that there is nothing left out;
qualia are identical to certain intentional contents, but those intentional contents cannot be reduced or
naturalised in other terms.
V.2 Access consciousness.
Access consciousness (this volume) sees consciousness as a matter of whether mental
representations are accessible for general use by the system. The contents of access consciousness are
commonly taken to be just the contents of certain representations. Access consciousness is frequently
tied to the distinction between implicit and explicit knowledge. Dienes & Perner (1999) explain why.
To return to the bee dance, the fact that direction and distance are predicates of nectar is not made
explicit in the bee dance. However, for referential communication and directing attention, explicit
predication becomes crucial: in order to direct one's attention to an object one needs to identify it under
one aspect and then learn more about its other aspects.
Whether predication and the ability to refer are sufficient is another question. When one is
conscious of some state of affairs one is also conscious of one's own mental state with which one
beholds that state of affairs, which suggests that one needs explicit representation of one's attitude
(explicit attitude). This criterion for consciousness goes hand in hand with the Higher Order Thought
theory of consciousness (this volume), according to which a mental state is a conscious state if one has
a higher-order thought representing it. Higher Order Thought theory motivates testing the conscious or
unconscious status of knowledge in perception and learning (Learning, Explicit vs Implicit this
volume) by asking subjects to report or discriminate the mental state they are in (guessing vs knowing)
rather than simply discriminating the state of the world (see Objective vs Subjective Measures, this
volume).
V.3 Phenomenal consciousness.
Phenomenal consciousness (Block, 1995) concerns the subjective feel of conscious
experiences. Whether phenomenal consciousness can be reduced to the contents of any representation
is controversial. Some argue it can be. First-order representationalists (e.g. Tye, 1995) argue that the
relevant representations just need to have certain properties, in addition to being poised for general
access, like being fine-grained (i.e. containing Nonconceptual Content, this volume). Higher Order
Representationalists, like Carruthers (2000), argue that subjective feel requires higher order
representations about the subjective nature of one’s first-order mental state. This is required—according
to Carruthers—because first order representations only give a subjective view of the world but no
appreciation that the view is subjective, i.e., no feeling of subjectivity. For this feeling the higher order
state has to metarepresent the difference between what is represented by the lower level (the objective
world, albeit from the perceiver’s subjective point of view) and how it is represented as being
(subjective content). Since such metarepresentation is thought to develop over the first few years
(Perner, 1991) this carries the implication that children up to 4 years may not have phenomenal
consciousness, an implication used either to discredit higher-order thought theories of consciousness
(Dretske, 1993) or as an inducement to weaken the requirements for entertaining higher-order thoughts
(Rosenthal, 2002; see Philosophy, Study of Consciousness, this volume).
Josef Perner & Zoltan Dienes
References
Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences,
18, 227-247.
Block, N. (1996). Mental paint and mental latex. Philosophical Issues, 7, 1-17.
Carruthers, P. (2000). Phenomenal consciousness: A naturalistic theory. Cambridge University Press.
Clark, A. (1997). Being there: Putting brain, body and world together again. MIT Press.
Crane, T. (2005). The mechanical mind, 2nd Edition. Routledge.
Cummins, R. (1996). Representations, targets, and attitudes. MIT Press.
Dienes, Z., & Perner, J. (1999). A theory of implicit and explicit knowledge. Behavioral and Brain
Sciences, 22, 735-755.
Dretske, F. (1981). Knowledge and the Flow of Information. MIT Press.
Dretske, F. (1993). Conscious Experience. Mind, 102, 263-83.
Greenberg, M. & Harman, G. (2006). Conceptual Role Semantics. In E. Lepore & B. Smith (eds) The
Oxford Handbook of Philosophy of Language. Oxford University Press.
Kim, J. (2006). The philosophy of mind. 2nd edition. Westview Press.
Millikan, R. G. (1993). White queen psychology and other essays for Alice. MIT Press.
Neander, K. (2004). Teleological theories of mental content. In the Stanford Encyclopedia of
Philosophy.
Papineau, D. (1993). Philosophical naturalism. Blackwell.
Perner, J. (1991). Understanding the representational mind. MIT Press.
Rosenthal, D. M. (2002). Consciousness and higher order thought. Encyclopedia of Cognitive Science.
Macmillan Publishers Ltd (pp. 717-726).
Searle, J. (2004). Mind: A brief introduction. Oxford University Press.
Tye, M. (1995). Ten problems of consciousness: A representational theory of the phenomenal mind.
MIT Press.