What is Cognitive Science?
Zenon Pylyshyn, Rutgers Center for Cognitive Science
What’s in the mind that we may know it?
http://ruccs.rutgers.edu/faculty/pylyshyn.html
Cognitive science is a delicate mixture
of the obvious and the incredible
Granny was almost right:
Behavior really is governed by what we know and
what we want (together with the mechanisms for
representing and for drawing inferences from these)
It’s emic, not etic properties that matter
Kenneth Pike
What determines our behavior is not how
the world is, but how we represent it
• As Chomsky pointed out in his review of Skinner, if we describe behavior in relation to the objective properties of the world, we would have to conclude that behavior is essentially stimulus-independent
• Every behavioral regularity (other than physical ones like falling) is cognitively penetrable
It’s emic states that matter!
The central role of representation presents
some serious problems for a natural science
What representations are about is what matters
But how can the fact that a belief is about some
particular thing have an observable consequence?
• e.g. How can the presence of “holy grail” in a belief
determine behavior when the holy grail does not exist?
In a natural science if “X causes Y” then X must
exist and be causally connected to Y!
• It’s even worse than that; even when X exists, it is not
X’s physical properties that are relevant!
e.g., the North Star & navigation
This dilemma is sometimes referred
to as Brentano’s problem or the
problem of intentionality
• What determines what we do is what our mental states are about, but aboutness is not a category of natural science.
• That is why Brentano concluded that psychology was beyond the grasp of natural science.
There are other properties that are special
to cognitively determined behavior
1. The Semantic determinants of most cognitive behavior. To capture regularities in cognitively-caused behavior we must use semantic terms – terms referring to what things mean. Same-meaning stimuli are equivalent for many generalizations of cognitive science.
2. The Cognitive Penetrability of most cognitive processes. Almost any regularity can be systematically altered in a quasi-rational way by imparting new information.
Is it hopeless to think we can have
a natural science of cognition?
Along comes The computational theory of mind
“the only straw afloat”
The major historical milestones
• Brentano’s recognition of the problem of
intentionality
• The formalist movement in the foundations
of mathematics: Hilbert, Goedel, Russell &
Whitehead, Turing, Church, …
• Representational/Computational theory of
mind: Newell & Simon, Chomsky, Fodor
How to make a purely mechanical system reason about things it does
not understand or know about? The discovery of symbolic logic.
(1) Married(John, Mary) or Married(John, Susan)
and the equation or “statement”,
(2) not[Married(John, Susan)].
From these two statements you can conclude:
(3) Married(John, Mary)
But notice that (3) follows from (1) and (2) regardless of what is in the parts of the equations not occupied by the terms “or” and “not”, so you could write down the equations without mentioning marriage or John or Mary or, for that matter, anything having to do with the world. Try replacing these expressions with the meaningless letters P and Q. The inference still holds:
(1') P or Q
(2') not Q
therefore,
(3') P
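To make the purely formal character of this step concrete, here is a minimal Python sketch (added for illustration; the function name disjunctive_syllogism and the example strings are hypothetical, not part of the lecture) of a rule that draws the conclusion by matching symbols alone:

def disjunctive_syllogism(disjuncts, negated):
    # Given a disjunction (p, q) and the symbol asserted to be false,
    # return the symbol that must be true, purely by matching symbols.
    p, q = disjuncts
    if negated == q:
        return p   # from (P or Q) and (not Q), conclude P
    if negated == p:
        return q   # from (P or Q) and (not P), conclude Q
    return None    # the rule does not apply

# The same rule works whether the symbols concern marriage or nothing at all:
print(disjunctive_syllogism(("Married(John, Mary)", "Married(John, Susan)"),
                            "Married(John, Susan)"))  # -> Married(John, Mary)
print(disjunctive_syllogism(("P", "Q"), "Q"))          # -> P

The rule never consults what the symbols stand for; substituting the marriage statements or the bare letters P and Q makes no difference to the inference it draws.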
Intelligent systems behave the way
they do because of what they represent
• But in order to function under physical
principles, the representations must be
encoded in physical properties
• Knowledge is encoded in physical properties by first encoding it in symbolic form (Proof Theory tells us how) and then instantiating those symbolic codes physically (computer science tells us how), as sketched below
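As a toy illustration of that two-step story (an added sketch, not from the lecture; the proposition string is a hypothetical example), a piece of knowledge can be written down as a symbolic expression, and that expression can in turn be instantiated as a pattern of bits for physical hardware to manipulate:

# A symbolic encoding of a piece of knowledge (hypothetical example):
proposition = "Married(John, Mary) or Married(John, Susan)"

# One physical-level instantiation of that symbol string: a pattern of bits.
bits = "".join(format(byte, "08b") for byte in proposition.encode("ascii"))
print(bits[:64], "...")   # the first 64 bits of the pattern

Whatever medium carries those bits (voltages, magnetization, ink) can then obey ordinary physical law while still counting as an encoding of the original proposition.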
Cognitive Science and the Tri-Level Hypothesis
Intelligent systems are organized at three (or
more) distinct levels:
1. The physical or biological level
2. The symbolic or syntactic level
3. The knowledge or semantic level
This means that different regularities may
require appeal to different levels
Calculator example
• Why is the calculator’s printing faint and irregular? Why are parts of
numbers missing in the LED display?
• Why does it take longer to multiply large numbers than small ones, whereas it takes the same length of time to add large numbers as small numbers? (See the sketch after this list.)
• Why does it take longer to calculate trigonometrical functions than
sums?
• Why is it especially fast at calculating the logarithm of 1?
• Why is it that when one of the keys (labeled √) is pressed after a number is entered, the calculator prints what appears to be the square root of that number? Will it always do so?
• When the answer to an arithmetic problem is too long to fit in the
display window, why are some of the digits left off?
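One of these questions, the multiplication-versus-addition timing contrast, calls for an answer at the symbolic (algorithmic) level rather than the physical level. The following minimal Python sketch assumes, purely for illustration, that multiplication is carried out by repeated addition; this is not a claim about how any real calculator works, and the function names are hypothetical:

def addition_steps(a, b):
    # Treat one addition as a single primitive step, regardless of operand size.
    return 1

def multiplication_steps(a, b):
    # Under the repeated-addition assumption, a * b is computed by adding the
    # larger operand to itself once per unit of the smaller operand, so the
    # step count equals the smaller operand.
    return min(a, b)

print(addition_steps(7, 9), addition_steps(70000, 90000))          # 1 1   (sums take the same number of steps)
print(multiplication_steps(7, 9), multiplication_steps(700, 900))  # 7 700 (larger products take more steps)

The timing regularity tracks the algorithm being executed, not the physics of the circuitry or the display.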
Does intentionality (and the trilevel
hypothesis) only apply to high-level
processes such as reasoning?
• Examples from vision.
Does intentionality (and the trilevel
hypothesis) only apply to high-level
processes such as reasoning?
• Examples from color vision.
“Red light and yellow light mix to produce orange light”
This remains true for any way of getting red light and
yellow light:
e.g. yellow may be light of 580 nanometer wavelength, or
it may be a mixture of light of 530 nm and 650 nm
wavelengths.
So long as one light looks yellow and the other looks red
the “law” will hold.
Does intentionality (and the trilevel
hypothesis) only apply to high-level
processes such as reasoning?
• Examples from language.
• John gave the book to Fred because he finished it
• John gave the book to Fred because he wanted it
• The city council refused to give the workers a permit for a
demonstration because they feared violence
• The city council refused to give the workers a permit for a
demonstration because they were communists
Methodological aside:
On the difference between
explanations that appeal to mental
architecture and those that appeal
to tacit knowledge
Suppose we observe some robust
behavioral regularity. What does it tell
us about the nature of the mind or
about its intrinsic properties?
An illustrative example: Mystery Code Box
What does this behavior pattern tell us about the nature of the box?
The Moral:
Regularities in behavior
may be due to either:
1. The inherent nature of the
system (to its structure), or
2. The nature of what the system
represents (what it “knows”).
Where it matters:
Application of the architecture vs knowledge
distinction to understanding what goes on when
we reason using mental images
Examples of behavior regularities
attributable to tacit knowledge
• Colour mixing, conservation of volume
• The effect of image size ?
• Scanning mental images ?
Color mixing example
Conservation of volume example
Our studies of mental scanning
[Graph: Latency (secs) as a function of relative distance on image (1 to 4), for three conditions: scan image, imagine lights, show direction.]
(Pylyshyn & Bannon. See Pylyshyn, 1981)
There is even reason to doubt that one can imagine scanning
continuously (Pylyshyn & Cohen, 1998)
If cognition is at a different level of
organization than the physical level, how
can we ever tell what it is?
• We are limited only by the imagination of the experimenter, e.g.:
  • Relative complexity evidence (RT, error rates…)
  • Intermediate state evidence
  • Eye tracking
  • Stage analysis (additive factors method)
  • Event Related Potentials (EEG)
  • fMRI
  • Clinical observations of brain damage
  • Psychophysical methods (SDT)
  • Etc…
Example of one methodology:
Sternberg memory search paradigm
Of course we can’t always be sure we
have the right method or instrument
If all else fails there is always parsimony and generality… (they worked well in physics and linguistics!)