The Computational Model of the Mind

The Computational-Representational Understanding of Mind
Thagard (p. 10): “Here is the central hypothesis of cognitive science: Thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures” (emph. added; Thagard calls this CRUM, for Computational-Representational Understanding of Mind).
Representations
Representations are arrays of objects (symbols) that stand in
certain relations to each other and that are understood to
represent certain objects and their relations to each other.
Examples: maps and landscapes, family trees and family
structures, portraits and persons, cave paintings and animals
and hunting events, stories and events (discourse referents,
temporal sequencing), film/TV music and the events it narrates, chess notation and chess games, airplane models in
wind-tunnels and real airplanes. What else?
Representing abstract objects: Allegorical painting (e.g., the seven cardinal virtues and the seven deadly sins in medieval paintings), notation of numbers (Roman, Arabic, binary, bundled strokes). What else?
Elements of representations (Peirce): Representations are
realized in a medium (e.g. a map, by marks on paper), their
elements are related to something else (their content), the
relations between the elements of the representation are
grounded in the relations between the elements of their
content, and the relation between representations and content is interpretable by some interpreter. (→ mental representation)
Natural signs (e.g., smoke for fire, a fingerprint for a person) are not necessarily representations. The sign and what
it indicates are related by natural laws; it is not necessary to
have an interpreter to make this connection.
Rules concerning the symbols and their significant relations to each other: Syntax. Rules concerning the way the symbols and their relations to each other are interpreted: Semantics.
Computations
Computations are rule-governed manipulations of syntactic
objects. Examples:
• Addition, for numbers:
 1 1 1
  78925
+ 40858
-------
 119783
• Proof of syllogisms with the help of Venn diagrams:
All elephants are mammals.
Some vertebrates are not mammals.
∴ Some vertebrates are not elephants.
• Derivations in propositional logic (Boolean algebras); a machine-checked version appears after this list:
¬[p ∧ ¬q]    not: p and not q
¬p ∨ ¬¬q     not p or not not q    (de Morgan)
¬p ∨ q       not p or q            (double negation)
p → q        if p then q           (definition of implication)
• Determining the distance to be traveled on a map.
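The derivation above can also be checked semantically: since every step preserves truth values, the first and the last formula must agree on all four assignments of truth values to p and q. A minimal Python sketch (the code and its names are mine, not part of the handout):

    from itertools import product

    # The four lines of the derivation as Boolean functions of p and q.
    step1 = lambda p, q: not (p and not q)       # ¬[p ∧ ¬q]
    step2 = lambda p, q: (not p) or (not not q)  # ¬p ∨ ¬¬q  (de Morgan)
    step3 = lambda p, q: (not p) or q            # ¬p ∨ q    (double negation)
    step4 = lambda p, q: (not p) or q            # p → q, read as ¬p ∨ q

    # A purely mechanical check over all truth-value assignments.
    for p, q in product([True, False], repeat=2):
        assert step1(p, q) == step2(p, q) == step3(p, q) == step4(p, q)
    print("every step preserves truth values")

Note that the check itself is again a purely syntactic manipulation of symbols that licenses a semantic conclusion.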
In general, computations allow us to derive insights about
the content (the semantics), using purely syntactic manipulations of symbols.
Notice: Many representations are not used for computations
(e.g., a portrait of a person, a story about events).
Not every representation is equally well suited for computation. Try the following addition with Roman numerals:
  MCLXII
+ MMCCIX
--------
     ???
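To make the contrast concrete, here is a Python sketch (the helper names are mine): column addition operates digit by digit on Arabic numerals, whereas for Roman numerals the most practical route is to translate into positional notation, compute there, and translate back.

    ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

    def roman_to_int(s):
        """A symbol counts negatively in subtractive pairs (IV, IX, ...)."""
        values = {"I": 1, "V": 5, "X": 10, "L": 50,
                  "C": 100, "D": 500, "M": 1000}
        total = 0
        for ch, nxt in zip(s, s[1:] + " "):
            v = values[ch]
            total += -v if v < values.get(nxt, 0) else v
        return total

    def int_to_roman(n):
        out = []
        for value, numeral in ROMAN:
            while n >= value:
                out.append(numeral)
                n -= value
        return "".join(out)

    def add_columns(a, b):
        """The school algorithm: digit-wise addition with carries."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, digits = 0, []
        for x, y in zip(reversed(a), reversed(b)):
            carry, d = divmod(int(x) + int(y) + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(add_columns("78925", "40858"))  # 119783, as in the example above
    total = roman_to_int("MCLXII") + roman_to_int("MMCCIX")
    print(int_to_roman(total))            # MMMCCCLXXI

Roman notation supports no comparably simple digit-wise routine, which is exactly the point: representations differ in how well they support computation.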
The classical notion of computation involves a person who does the computation. But we also trust machines to do computations for us (calculators, computers).
The concepts of representation and computation are central to the understanding of computer programs:
data structures (representations)
+ algorithms (computations)
→ running program
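A toy Python illustration of this schema (the example is mine, chosen to echo the family trees mentioned below): a dictionary serves as the data structure, a recursive procedure as the algorithm.

    # Data structure (representation): each person mapped to their
    # known parents; the names are invented for illustration.
    parents = {"Mary": ["Ann", "Tom"],
               "Ann":  ["Sue", "Joe"]}

    # Algorithm (computation): recursively collect all known ancestors.
    def ancestors(person):
        result = set()
        for parent in parents.get(person, []):
            result |= {parent} | ancestors(parent)
        return result

    print(ancestors("Mary"))  # {'Ann', 'Tom', 'Sue', 'Joe'}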
Mental Representations
Ordinarily, representations are concrete objects (e.g., marks
on paper, sound waves). CRUM assumes that there are also
mental representations. It is commonly assumed that
mental representations are concrete as well (as physical
states of the brain), but this is not necessary — we can remain agnostic about the medium of mental representations.
But in order even to talk about mental representations, we
will have to use some way to describe them. We will have
to represent (in the ordinary sense) what we assume to be
mental representations.
Examples of (representations of) mental representations:
• The representation of syntactic structures of language by bracketing or trees (see the sketch after this list):
Mary [[saw [the man]] [with the binoculars]]
Mary [saw [the man [with the binoculars]]]
• Representation of speech sounds by letters (cat) or phonetic notation (/kæt/); left-to-right order represents temporal before-after order.
• Mental family trees (my mother’s uncle’s daughter).
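One concrete way to write such structures down is sketched below in Python (the tuple encoding is mine and makes no claim about how the mind actually stores trees): the two bracketings become distinct nested tuples, so the ambiguity is detectable by purely syntactic comparison.

    # "with the binoculars" attaches to the seeing event:
    parse1 = ("Mary", (("saw", ("the", "man")),
                       ("with", ("the", "binoculars"))))

    # "with the binoculars" attaches to "the man":
    parse2 = ("Mary", ("saw", (("the", "man"),
                               ("with", ("the", "binoculars")))))

    print(parse1 == parse2)  # False: same words, different structures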
Mental Computations
Once we assume mental representations, we can imagine thinking as consisting of rule-governed manipulations of mental representations, i.e. computations running on mental representations. This is quite similar to the way computation works on a computer:
mental representations
+ computational procedures
→ thinking
Three levels of description of representations and computations
As mentioned, we can remain agnostic about the physical nature of mental representations and still follow the CRUM model.
Marr (1982) justified this by showing that we can distinguish three levels in the description of representational and computational systems (→ computation and the brain).
• The level of abstract problem analysis (the task is decomposed into subtasks; e.g., a flowchart of a computer program)
• The level of the algorithm and data structures (specifying a formal procedure in a well-understood representational system; e.g., a computer program written in C++)
• The level of the implementation (constructing a device that is able to carry out the algorithm; e.g., compiling and running a computer program on a Macintosh)
Marr argued that higher-level questions are, to a large degree, independent of the levels below them. Therefore, questions at the higher levels can be addressed independently of the details of the implementation. We can study many aspects of the mind, following the CRUM model, without necessarily studying aspects of the brain.
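A small Python sketch of Marr’s point (the example is mine): the abstract problem, summing the numbers 1 to n, stays fixed while the algorithmic level varies freely underneath it; which physical machine runs either version is a further, independent choice at the implementation level.

    # Algorithmic level, version 1: accumulate step by step.
    def sum_by_iteration(n):
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    # Algorithmic level, version 2: Gauss's closed form n(n+1)/2.
    def sum_by_formula(n):
        return n * (n + 1) // 2

    # Both solve the same problem at the abstract level.
    assert sum_by_iteration(1000) == sum_by_formula(1000)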
However, there are good arguments that the “implementation” of the mind in the brain imposes powerful constraints
on the possible algorithms and general flowcharts of the
mind that should not be ignored. If this is the case, then we
have to take as representation the physical activation patterns of the brain, and as computation the physically induced changes of those patterns. According to what we
know about the brain, the nature of those representations
would be quite different from the symbolic representations
we are familiar with (distributed representations; we will return to this under → connectionist computation).
Arguments for CRUM
It’s the only game in town
Fodor: Representational/computational theories of vision,
language, reasoning etc. are the only viable theories of these
areas that we possess.
The Turing Test
Turing (1950): The only way to ascribe human intelligence
to a creature (an animal or a robot) is to engage it in humanlike interaction (e.g., a discourse with questions and answers). If the behavior of the creature is not distinguishable
from the behavior of a human, there is no reason not to
ascribe human intelligence to it.
If a human-made object (a robot) constructed based on
principles of representation and computation passes the test,
then this would show that representation and computation
are sufficient to create human intelligence. (No one has built
such a robot yet, so this argument is hypothetical).
Arguments against CRUM
The Chinese Room
One crucial property of representations we mentioned was
that the relation between representations and content is
interpretable by some interpreter. Mental representations
lack this property. (Naïve conception of vision: The eye is a
camera, the retina is a screen, the mind watches the screen.
Question: What is the “eye” of the mind? Another mind?
What’s the eye of that mind?) Hence, talking of “mental
representations” is, to a certain degree, metaphorical.
Searle: “Imagine a native English speaker who knows no
Chinese locked in a room full of boxes of Chinese symbols
(a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people
outside the room send in other Chinese symbols which,
unknown to the person in the room, are questions in Chinese
(the input). And imagine that by following the instructions
in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions
(the output). The program enables the person in the room to
pass the Turing Test for understanding Chinese but he does
not understand a word of Chinese. (…) If the man in the
room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese
then neither does any other digital computer solely on that
basis because no computer, qua computer, has anything the
man does not have. (…) Implemented programs are by
definition purely formal or syntactical. (…) Minds have
mental or semantic contents. (…) Syntax is not by itself
sufficient for, nor constitutive of, semantics. Conclusion:
Implemented programs are not constitutive of minds. Strong
AI is false. (…) The Turing test fails to distinguish real
mental capacities from simulations of those capacities.”
Patricia & Paul Churchland: This is appealing to common
sense. It is as if arguing against the electromagnetic nature
of light by saying that one cannot produce light by waving a
magnet. Pinker: It’s just about the semantics of the word
‘understand’.
Qualia, Consciousness and Free Will
According to CRUM, minds are fully determined by representations and the algorithms operating on them. But this leaves unexplained how a mind feels “from the inside”. We can perhaps describe the computational processes or brain states of a person seeing a red object or feeling pain, but actually seeing a red object or feeling pain is a totally different matter (qualia). We cannot explain how the phenomenon of consciousness arises, or the nature of the impression that we can choose whether or not to act in a certain way (free will).
A brain teaser by Pinker (p. 146): “Surgeons replace one of your neurons with a microchip that duplicates its input-output functions. You feel and behave exactly as before. Then they replace a second one, a third one, until more and more of your brain becomes silicon. (…) Do you ever notice the difference? (…) Is some other conscious entity moving in with you?”
CGS 360, Introduction to Cognitive Science, Steve Wechsler, Dept. of Linguistics, UT Austin, wechsler@mail.utexas.edu, 2/13/2016