Minds, Brains & Programs

Cognitive Computing 2012
The computer and the mind
SEARLE
THE CHINESE ROOM ARGUMENT
Professor Mark Bishop
Background

Although there is still tremendous controversy over its success – with perhaps
the majority of Cognitive Scientists and Philosophers holding that the
argument fails - there is a great deal of consensus over the importance of
John Searle’s Chinese Room argument:

In a recent article (Minds & Machines, volume 7, 1997), Larry Hauser has called it,
“perhaps the most influential and widely-cited argument against the claims of
Artificial Intelligence”.

Stevan Harnad, editor of Behavioral and Brain Sciences (possibly damning with
faint praise), asserted that it has, “already reached the status of a minor classic”.

And Anatol Rapaport claims that it, “rivals the Turing test as a touchstone of
philosophical inquiries into the foundations of AI”.
Central claims

The central claims of Searle’s argument - first presented in his 1980
paper ‘Minds, Brains & Programs’ - are that computations alone cannot,
in principle, give rise to cognitive states, and that they therefore cannot
explain human cognition.

Searle uses the distinction between syntax and semantics to argue that
while computers can follow purely formal rules, they cannot be said to
know the meaning of the symbols they are manipulating, and therefore
cannot be credited with an understanding of the programs those symbols
compose.

They can simulate intelligent performances, but not duplicate them.
Searle: two fallacies of computing

The ‘information processing’ fallacy:

• In the CRA Searle famously asserts that a computer is “all syntax, no semantics”.

• I.e. the computer does not do ‘information processing’, as it does not know exactly what it is processing: a ‘bit’ of information might represent the day of the week, the weight of a body or the speed of a bullet.

• NB. In later writings Searle goes further, denying that a computer has even [observer-independent] syntax.
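This point can be made concrete in a few lines of illustrative Python; the encodings below are invented for the example:

```python
# The same bit pattern has no intrinsic meaning: its 'interpretation'
# lies entirely with the outside observer, not with the machine.
raw = 0b011  # three bits, exactly as the machine 'sees' them

days = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

print(days[raw])     # 'Thursday' - read as a day of the week
print(f"{raw} kg")   # '3 kg'     - read as the weight of a body
print(f"{raw} m/s")  # '3 m/s'    - read as the speed of a (slow!) bullet
```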
The ‘simulation’ fallacy:

• Given that a computer simulation of a fire doesn’t burn the neighbourhood down, and a computer simulation of gold doesn’t make you rich, why should a computer simulation of understanding actually understand?
Weak and Strong AI

John Searle’s classic [and ubiquitous] taxonomy of AI systems:

• Weak AI: “the principal value of the computer in the study of the mind is that it gives us a very powerful tool”.

• Strong AI: “the appropriately programmed computer really is a mind”; it is conscious and has intentional states (i.e. the internal states have an ‘about-ness’ and ‘direction’ to them with respect to the entities to which they refer);
  i.e. the distinguishing property of mental phenomena is that of being necessarily directed upon an object, whether real or imaginary.
Implications

Searle’s argument - if sound - would clearly undermine the very foundation of
much cognitive science grounded, as it is, on a ‘Strong AI’ supposition.

The importance of Searle’s article is thus both:

• Philosophical: can a computational theory of mind explain human understanding?

• Practical (in Computing/Cybernetics/Cognitive Science): is the Turing Test an adequate test for machine intelligence? Are computational approaches to AI a good idea?

… and this has ensured that it has been widely attacked - and defended - with almost religious fervour by exponents of both disciplines.
Schank and Abelson’s research using ‘scripts’

In 1977 Schank and Abelson described a program they had created which could accept a story and then answer questions about it, using a large set of rules, heuristics and scripts (Schank, R.C. & Abelson, R.P. (1977). Scripts, Plans, Goals & Understanding. Hillsdale, NJ: Erlbaum).

• A script is a detailed description of a stereotypical event unfolding through time.

• For example, a system dealing with restaurant stories would have a set of scripts about typical events that happen in a restaurant: entering; choosing a table; ordering food; paying etc.

• And the script for a lecture on “The computer and the mind”?
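A minimal sketch of how such a restaurant script might be represented as a data structure (the field names are illustrative, not Schank and Abelson’s actual notation):

```python
# An illustrative restaurant script: a stereotypical event sequence with
# roles, props, and an ordered list of scenes unfolding through time.
restaurant_script = {
    "name": "RESTAURANT",
    "roles": ["customer", "waitress", "cook"],
    "props": ["table", "menu", "food", "bill", "tip"],
    "scenes": [
        "enter restaurant",
        "choose a table",
        "read menu",
        "order food",
        "eat food",
        "pay bill",
        "leave tip",
        "exit restaurant",
    ],
}
```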
SAM – the Script Applier Mechanism

A more recent application of scripts is found in a program called SAM.

To answer questions about stories, SAM needs to be able to expand them.

• e.g. “John went to a restaurant. He asked a waitress for coq au vin. He left a very large tip.”

• SAM INFERS: “John went to a French restaurant. He sat down at a table. He read a menu. He ordered coq au vin from the waitress. He ate the coq au vin. He enjoyed the meal. He left a large tip. He didn’t pay cash. He exited from the restaurant.”
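A toy sketch of script-driven expansion in the spirit of SAM (SAM’s real mechanism is far richer): events explicitly mentioned in the story anchor positions in the script, and the unstated scenes are filled in as default inferences.

```python
# Scenes of a simplified restaurant script, in stereotypical order.
SCENES = ["enter restaurant", "sit at table", "read menu", "order food",
          "eat food", "enjoy meal", "leave tip", "exit restaurant"]

def expand(story_events):
    """Infer the full scene sequence implied by the explicitly stated events."""
    last = max(SCENES.index(event) for event in story_events)
    # Every scene up to the last one mentioned is assumed to have happened.
    return SCENES[: last + 1]

print(expand(["enter restaurant", "order food", "leave tip"]))
# -> the seven scenes up to 'leave tip', including the unstated
#    'sit at table', 'read menu', 'eat food' and 'enjoy meal'
```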
FRUMP: the Fast Reading Understanding and Memory Program

FRUMP takes as input a live news feed (from the United Press International wire service) and automatically produces news summaries.

Proponents of ‘Strong AI’ have claimed that:

• programs like SAM and FRUMP actually understand the stories they operate upon;

• the (scripting) mechanisms that SAM and FRUMP embody actually explain human comprehension.
The Chinese room

A modern critic of Strong AI is John Searle.

Searle’s most famous work on machine understanding is the Chinese Room gedankenexperiment (Searle, J. “Minds, Brains & Programs”, Behavioral and Brain Sciences, 3 (1980): 417-24).

In the paper Searle describes a situation in which he is locked in a room and presented with a batch of paper on which are Chinese symbols that he does not understand.

Searle is then given a second batch of Chinese symbols, together with a set of rules in English that describe an algorithm for correlating the second batch with the first.

Finally he is given a third batch of Chinese symbols, together with another set of rules in English that enable him to correlate the third batch with the first two; these rules instruct him how to return certain sets of Chinese symbols in response to the symbols given in the third batch.
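The procedure Searle describes can be caricatured in a few lines; the ‘symbols’ below are placeholders rather than real Chinese, and the point is that the rule-book relates shapes to shapes, nothing more:

```python
# The rule-book: answers correlated with questions purely by the shape of
# the symbols. The operator applying it needs no access to what they mean.
RULE_BOOK = {
    "SQUIGGLE-1 SQUIGGLE-7": "SQUOGGLE-4",
    "SQUIGGLE-2 SQUIGGLE-9": "SQUOGGLE-8 SQUOGGLE-1",
}

def man_in_the_room(question_symbols):
    """Return whatever string of symbols the rules dictate."""
    return RULE_BOOK[question_symbols]

print(man_in_the_room("SQUIGGLE-1 SQUIGGLE-7"))  # 'SQUOGGLE-4'
```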
Scripts, stories and questions

Unknown to Searle, the people outside the room call:

• the first batch of Chinese symbols, the script;
• the second set, the story;
• and the third, questions about the story.

The symbols he returns they call:

• answers to the questions about the story;
• and the set of rules Searle is obeying, the program.

To complicate matters further, the people outside also give him stories in English and ask him questions about them in English, to which he can reply in English.
Understanding Chinese?

After a while Searle gets so good at manipulating the symbols, and the ‘outsiders’ get so good at supplying the rules which he follows, that the answers he gives to the questions in Chinese symbols become indistinguishable from those a native Chinese speaker might give.

“From the external point of view, the answers to the two sets of questions, one in English the other in Chinese, are equally good”; but in the Chinese case he behaves ‘like a computer’, as an instantiation of a computer program (Searle).

Searle trenchantly points out that he has passed the Turing test - and hence that the claim of Strong AI is that he can ‘think in’ and ‘understand’ Chinese - whereas he still does not understand a word of the language!
The Chinese room [illustration]
From ‘low-level grammars’ …
Historically, real A.I. practitioners have been incredulous at the extreme simplicity of the symbol correlation rules described by Searle; rules that simply …

… “correlate one set of formal symbols with another set of formal symbols … merely by their shape”;

… such that [typically very trivial combinations of] un-interpreted symbols - ‘squiggles’ - map simply onto others - ‘squoggles’.

It has always seemed likely to A.I. engineers that any machine understanding program with a claim to real-world generality would require a very large and complex rule-base (program), typically applying very high-level rules (functions).
… to high-level programs

However it is also clear from ‘Minds, Brains & Programs’ that Searle intended the CRA to be general, applicable to any possible A.I. program (grammar-based; rule-based; neural network; Bayesian etc.):

[JS] “I can have any formal program you like, but I still understand nothing”.

So if the CRA succeeds, it must succeed against even the most complex high-level systems.

So, in a spirit of cooperation (between A.I. practitioners and Searle), let us consider a more complex formal program/rule-book system which has (as one high-level rule) a call to, say, Google Translate.

Hence the internal representations used by monoglot Searle (as the ‘man in the room’), scribbled on bits of paper, could now perhaps maintain partial interpretations of the [unknown] Chinese input symbols as “symbol-strings-in-English”.
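A sketch of such an ‘extended’ rule-book, under two loudly-flagged assumptions: the google_translate function below is a hypothetical stand-in (stubbed with canned data, not a real service call), and the rules shown are invented for illustration.

```python
def google_translate(text, source="zh", target="en"):
    """Hypothetical stand-in for a call to a real translation service."""
    canned = {"你好吗": "How are you?"}  # stubbed data for the example
    return canned.get(text, "<unknown>")

def apply_high_level_rules(chinese_input, scraps_of_paper):
    # Rule 1 (high-level): obtain an English rendering of the input.
    english = google_translate(chinese_input)
    scraps_of_paper.append(english)  # an internal representation - in English
    # Rule 2 (high-level): compose the reply at the level of English meaning.
    return "Fine, thank you." if english == "How are you?" else "<pass>"

scraps = []
print(apply_high_level_rules("你好吗", scraps))  # 'Fine, thank you.'
print(scraps)  # ['How are you?'] - a partially interpreted representation
```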
High-level English: the English [language] reply revisited
Because Searle speaks English, by default he brings to the room an understanding of:

(A) the high-level rules [as defined in English] in the rule-book;

(B) any English-language internal representations that the rule-book requires - Searle’s scribbles on paper.

Hence it is likely that the repeated application of such [externally-grounded] high-level rules to Chinese text could foster the emergence of semantics and understanding in even a monoglot English speaker like Searle …

… via a process analogous to one’s gradual understanding of a Chinese text via the repeated use of a Chinese-English dictionary.
“Escaping from the Chinese room”
in Boden, M. (ed.) ‘The Philosophy of A.I.’ (1988)
But does a computer CPU ‘understand’ its program and variables (internal representations), as coded as raw binary signals?

In her 1988 paper Maggie Boden suggests that, unlike say [the human-driven manipulations of] formal logic, it does ...

... because, unlike the rules of logic, the execution of a computer program ‘actually causes events to happen (e.g. data read/writes to memory and peripherals)’;

... and such ‘causal semantics’ enable Boden (contra Searle) to suggest that a CPU processing Chinese symbols actually does have a ‘toe-hold on semantics’ …

… “It is a mistake to regard [executing] computer programs as pure syntax and no semantics”.

The analogy Boden draws is to Searle’s understanding of the English rule-book; if correct, this suggests the [extended; high-level] ‘English reply’ holds.
On the execution of a computer program
However it is not Searle’s command of the [extended] English rule-book alone that enables the [extended] English reply to have force; it is the combination of (a) Searle’s understanding of the [extended] English rules and (b) the dynamic [English language] data on which they operate.

But in contrast to Searle [reading ‘rules-in-English’ on ‘data-in-English’]:

(a) the computer CPU does not understand its rule-book [program] any more than water in a stream understands its flow downhill; both processes are strictly entailed by their current state and that of their environment (data);

(b) and for the CPU, the strings of binary data that it processes exactly constitute un-interpreted ‘formal symbols’ of the type Searle described in his original exposition of the CRA.

Hence even the extended English reply fails: “all syntax and no semantics”, any physical computer (as it executes its machine-code program on un-interpreted binary data) is merely analogous to [monoglot] Searle’s attempts to understand an unknown Chinese text using only a Chinese-Chinese dictionary.
The ‘Systems Reply’ (Berkeley)
Over the 30+ years since its inception, perhaps the most widely deployed response to the CRA has been ‘The Systems Reply’.

This states that “it is a mistake to limit understanding to the animate simulator; rather, it belongs to the system {JS; rule-book; Google Translate; scraps-of-paper etc.} as a whole”.

JS: “Internalise all the components [of the high-level system] so that they are all in Searle’s mind” …

… “Now there isn’t anything at all to the system that [Searle] does not encompass”; and yet Searle continues to trenchantly insist that he doesn’t understand a word of Chinese, even though his answers to questions posed to him in Chinese continue to be indistinguishable from those a native Chinese speaker might give.

In response to this ‘internalisation’ move some (e.g. John Haugeland) insist that such internalisation produces two distinct cognitive systems - (a) Searle and (b) Searle-as-CRA - with only the latter genuinely ‘understanding Chinese’.
John Haugeland’s response to Searle’s ‘systems reply’
“What we are to imagine in the internalisation fantasy is something like a patient with a multiple personality disorder:

• one ‘personality’, Searle, is fluent in English (both written and spoken), doesn’t know a word of Chinese, and is otherwise perfectly normal (except that he has the calculative powers of a mega idiot savant);

• the other ostensible personality - let’s call him Hao - is fluent in Chinese (though only written, not spoken), has no English, and, moreover, apart from seeming to be able to read and write, is deaf, dumb, blind, and paralysed.

Why, exactly, should we conclude that Hao doesn’t understand the Chinese that he appears to be reading and writing ‘automatically’, as it were?”
A tale of two jokes ...
In response to Haugeland, contrast Searle responding to a joke in English and Hao responding to a joke in Chinese.

Although, ex hypothesi, the external responses will be the same in both cases ...

• “Ha ha”

... there is a clear ontological distinction between the state of being John Searle and that of being Hao,

• with only the former actually ‘understanding’ the joke.

In other words, merely generating appropriate [external] behaviour is no guarantee of the presence of relevant underlying cognitive state(s).
Ontological and epistemological considerations
It is this basic, fundamental distinction …

• between epistemic concerns: the knowledge we deploy to check for the presence of a cognitive state in a known cognitive system (e.g. in a human);

• and ontological concerns: on being in [instantiating] that state (e.g. in a machine);

... that continues to lie at the root of much confusion in the ongoing Chinese room debate.
The robot reply

“But what if we built a robot, controlled by a computer, that could directly interact with the world? By directly interacting with the world such a robot would have genuine understanding and mental states.”

• Cf. Harnad’s ‘Total Turing Test’: the TTT involves demonstrating successful performance in both ‘linguistic’ and ‘robotic’ behaviours, and incorporates the same range of empirical data that is available in the human case [i.e. over a lifetime].

JS: It is interesting to note that now the cognitive scientists are retreating from their original claim that only appropriate computations are necessary for understanding.

• Nonetheless, if we rebuild the Chinese room inside the ‘head’ of the [TTT] robot, with symbolic [sense] inputs and symbolic [motor] outputs, the original scenario is unchanged.
The connectionist reply

“Build a computer that simulates the firing of real brain neurons when it communicates in Chinese.”

JS: Again, it is interesting to note that now the cognitive scientists are retreating from their original symbolic theory of mind to a sub-symbolic, connectionist theory of mind, where knowledge is represented as a distributed pattern of weights.

• We could design a network of water pipes to mimic the neural network, emulating it by applying rules to control the opening and closing of the pipe valves.

• However the operation of such a system is still formally defined by a set of rules and hence remains vulnerable to the CRA;

• Searle imagines himself manically running around the network of pipes, opening and closing the appropriate valves [and hence producing ‘appropriate’ Chinese responses], but still not understanding a word of Chinese.
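A minimal sketch of why the water-pipe system remains within the CRA’s scope: however the ‘neuron’ is realised, its operation is exhausted by formal rules, here ordinary arithmetic and a threshold (the weights are arbitrary illustrative values):

```python
# A single two-input 'neuron', realisable equally as silicon, water pipes,
# or a man opening valves: its behaviour is fixed entirely by formal rules.
WEIGHTS = [0.6, 0.9]
THRESHOLD = 1.0

def neuron(inputs):
    """Open the output 'valve' iff the weighted inflow exceeds the threshold."""
    inflow = sum(w * x for w, x in zip(WEIGHTS, inputs))
    return 1 if inflow > THRESHOLD else 0

print(neuron([1, 1]))  # 1 - valve opens (0.6 + 0.9 > 1.0)
print(neuron([1, 0]))  # 0 - valve stays shut (0.6 <= 1.0)
```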
The combination reply

“Link all of the above together, forming a bi-pedal robot, controlled by neural networks, that interacts directly with the world. Surely this forms a convincing and decisive argument…”

JS: If we knew nothing more then perhaps we would attribute intentionality, understanding and intelligence to such a machine, just as we do for other humans, apes etc.

• However, given that we know it is controlled by a computer program, we must still continue to think of the robot as nothing more than a sophisticated dummy.
The other minds reply

“How do you know that people understand Chinese, or anything else? Only by their behaviour…”

JS: It is a base axiom of cognitive science that humans do possess cognitive states, genuine understanding and intentionality …

• … in much the same way that a physicist knows the reality (and the fundamental knowability) of physical matter.
The many mansions reply

“John Searle only considers today’s analogue & digital computers; would his argument apply to the computers of tomorrow?”

• E.g. biological or quantum computers ...

JS: Yes, assuming such systems remain bound by the Church/Turing thesis...
Conclusions
The CRA suggests that merely generating appropriate [external] behaviour is no guarantee of the presence of relevant underlying cognitive state(s).

• And it is this basic, fundamental distinction between epistemic concerns AND ontological ones …

• … that continues to lie at the root of much confusion in the on-going Chinese room debate.