
A very famous challenge: Searle and The Chinese Room
In the 1970s, many AI researchers believed that computers would one day be intelligent. Computers are just ‘symbol-shufflers’, but we see today just how advanced they have become. Newell and Simon argued in 1976 not only that computers would be intelligent, but that anything intelligent is a computer: the only way to create a mind is to create a super symbol-processor. They argued that only a symbol-processor would have the flexibility needed to represent the world systematically and to manipulate the symbols – to think – in the ways needed to be clever and creative.
In 1980, John Searle argued that this was wrong. According to Searle, the brain is a
computer – it processes symbols in some sense – but it is a special type of
computer. It is made out of brain-stuff. It is a biological organ just like a heart or a
liver. The job of the heart is to pump blood and the job of the brain is to think. Searle
argues that only brains can support minds. He says that the relation between minds
and brains is like the relation between liquidity and water, or between pressure and
gas molecules. Liquidity is a feature of a large collection of water molecules that can’t
be reduced in any simple way to the properties of the individual molecules. No single molecule is a liquid, nor does one behave like one. This position is known as Biological Naturalism, on
which there is more below.
Now for the Chinese Room. Bernard is a monolingual speaker of English in a room
surrounded by Chinese speakers. They think that Bernard is a very shy Chinese
person. They communicate with him by writing sequences of characters on slips of
paper and feeding them through a slot in the wall. Bernard looks up the sequence in
a book of all possible sequences. Next to the sequence in the book are a variety of
other sequences that are (in Chinese) meaningful responses. Bernard copies out the squiggles onto a fresh slip and pushes it back through the slot.
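To make vivid how mechanical this procedure is, notice that Bernard’s book is in effect nothing more than a lookup table. Here is a minimal illustrative sketch in Python (the table entries and the function name chinese_room are invented for this example, not anything from Searle): the program pairs input strings with output strings and at no point represents what any symbol means.

```python
# Bernard's rule book as a lookup table: input sequences paired with
# canned responses. The entries here are invented placeholders.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
    "你叫什么名字？": "我姓陈。",      # "What is your name?" -> "My surname is Chen."
}

def chinese_room(slip: str) -> str:
    """Return whatever response the book pairs with the incoming slip.

    This is string matching all the way down: the shapes of the
    symbols are compared, but nothing represents their meanings.
    """
    return RULE_BOOK.get(slip, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # a fluent reply, produced with zero understanding
```

Making the table larger makes the replies more fluent, but it adds nothing semantic – which is the point of the thought experiment.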
Bernard is a symbol-processor who appears to understand Chinese. But Bernard
doesn’t understand Chinese. In the same way, a computer can process symbols to an arbitrarily high level of ‘cleverness’, but it will not understand them. What would it take to understand them? Searle thinks that only conscious beings have the capacity to grasp how symbols represent and therefore have meaning. So, while it is possible that a computer could be functionally equivalent to a minded entity, a computer will lack genuine intentionality. Functionalism is not, then, a complete account of the mind. Since only brains can support conscious minds, silicon
computers will never be intelligent.
Searle’s argument has generated a huge literature over the last twenty-five years. Here are some famous responses, with Searle’s counter-responses.
The Systems Reply: Searle mistakes the part for the whole. No one clump of
neurons understands English. It takes the whole brain. Similarly, it should be no surprise that Bernard doesn’t understand Chinese. It’s the whole system that does.
Searle’s Reply: Put all the machinery into Bernard’s head. There’s just Bernard now
and he still doesn’t understand Chinese.
The Robot Reply: Allow Bernard to move as a robot rather than be stuck in a room.
Allow him to interact with the world.
Searle’s Reply: So long as he is perceiving and issuing symbols, whether on bits of paper or by moving his arms around and making sounds, he doesn’t understand.
The Brain Simulator Reply: (or Block’s Chinese Nation). Suppose we copied the
‘neural program’ to a computer, or indeed had it implemented by the population of China. It would have to understand, since it is in effect just a big brain. So we shouldn’t rule out that the Chinese
system understands.
Searle’s Reply: Make it out of people or water pipes. It’s brain stuff that matters –
only brains can give symbols meaning.
The Combination Reply: All three together?
Searle’s Reply: Why should three bad ideas make a good one?
The Many Mansions Reply: You’re just targeting today’s technology. In the future,
we’ll discover technologies powerful enough to run minds.
Searle’s Reply: It’s an empirical question what stuffs can support semantics. Brains
can, perhaps machines and Martians too. But it won’t (just) be because they can run
a program. It will be because they have the right kind of stuff.
The preferred response these days is an extension of the Systems Reply called the
Virtual Persons Reply. Let’s go back to the Systems Reply first. When in the room,
Bernard doesn’t understand Chinese but the room does. Bernard is merely a part of
the system. When a computer is running Word, its CPU (the processor) doesn’t ‘understand’ all that is going on, because it is just one part of a larger machine: there are sub-processors and memory chips, for example. If we put the
room on legs, we’d have a ‘virtual person’ with Bernard as a dumb CPU controller.
So, we can consider Bernard memorising the book. (This arguably stretches things too far, but anyway…) There would then be two people residing in Bernard:
the English-speaking Bernard that he is aware of and this other person whose life he
acts out without fully understanding it. Of course, this sounds very bizarre, but then
so too is the whole experiment. We get confused because we think: how can
someone ‘be’ someone else without realising it? Of course, as a curious human being, Bernard would start to interpret and understand what he was saying in Chinese.
But put these thoughts aside. Bernard would be like Dr. Jekyll and Mr. Hyde.