Philosophy of Mind II: Minds and Machines.

1.
Turing. Can a Machine Think? Turing is worried about the vagueness of this
question. Just what do we mean by ‘machine’? (If we ourselves are (in some sense)
machines, then the answer is obviously ‘yes’.) And what do we mean by ‘think’?
Turing proposes to replace this question with a clearer alternative, the simulation
question: Can a machine fool us into thinking that it is human? If it can, Turing
suggests, we might as well admit that it can think. Think about this from the point
of view of an observer who has interacted with a machine and a person over the web.
If the simulation is successful, she has exactly the same evidence for the
claim that the machine can think as she has for the claim that someone she has only
ever communicated with over the net can think.
2.
But most of Turing’s discussion aims at removing obstacles to his position. As he
says, there is a very long list of things people say a ‘machine’ could never do: feel
emotions (consider Mr. Data, but also Lore!); write a poem; do something creative or
surprising or unexpected; do something other than what it has been told to do (a silly
one, really: I often tell my computer to do things that it fails to do, such as save this
file, open that web site, or print this document! It often does something odd or
peculiar instead…)
3.
So what do you think? Is Turing right? Should we dispense with the ‘can a machine
think?’ question and focus on successful simulation instead? And what do you think of
some of the particular arguments and ideas Turing presents: the idea of a learning
machine; the unpredictability of many things that complex machines do; the gap he
sees between the claim that there is no complete set of rules telling us what we should
do in all circumstances (the traffic light example) and the claim that we are not
ourselves complex machines doing what we do because of the circumstances and the
laws of nature; and the difficulty of inferring, even in a very simple case, from
observation of the ‘behaviour’ (input-output relation) of a machine to a correct
account of the program the machine is actually running (see the sketch just below)?
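On that last point, here is a minimal sketch (my illustration, not Turing’s) of how
observed behaviour underdetermines the underlying program: the two Python functions
below agree on every input we test, yet internally they are quite different programs.
The function names and the toy doubling task are invented for the example.

```python
# A toy demonstration that input-output behaviour underdetermines the
# program: both functions agree on every tested input, yet they are
# internally very different programs.

def double_by_arithmetic(n: int) -> int:
    """Compute 2*n by direct multiplication."""
    return 2 * n

def double_by_table(n: int) -> int:
    """Compute 2*n by table lookup, falling back to repeated addition."""
    table = {0: 0, 1: 2, 2: 4, 3: 6}
    if n in table:
        return table[n]
    total = 0
    for _ in range(abs(n)):
        total += 2
    return total if n >= 0 else -total

# An observer who only sees inputs and outputs cannot tell which
# 'machine' is running:
for n in range(-5, 6):
    assert double_by_arithmetic(n) == double_by_table(n)
print("Same behaviour, different programs.")
```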
4.
Suppose that you found yourself in a rich and complex conversation, ranging over
many topics, telling stories, expressing opinions and preferences and so on. Suppose
you then found that your partner in the conversation was a sophisticated computer
(like Hal in 2001: A Space Odyssey, or the great computer designed to calculate the
answer to the question of life, the universe and everything in The Hitchhiker’s Guide
to the Galaxy). What would you conclude: that a computer can think (in whatever
sense matters)? Or that you had been fooled, and no thinking was really going on
despite the evidence of your conversation? Think about Turing’s ‘skin of an onion’
example here…
5.
Searle. John Searle proposed a famous argument against Turing’s position, called the
‘Chinese Room’ argument. Searle distinguishes sharply between what he calls ‘weak’
AI and ‘strong’ AI. Weak AI treats computers as tools in our study of the mind: with
their help we can formulate hypotheses of various kinds and test them. But strong AI
holds that a sufficiently powerful, properly programmed computer is a mind, and can
literally be said to understand, believe, and perhaps even know certain things. For
strong AI, programs running on such a machine don’t just test ideas about thinking;
they constitute or display or explain what thinking is.
6.
Searle’s picture of a supposedly successful strong AI program is modeled on Schank’s
story-interpreting program, which takes stories as input and then answers questions
about the events described in the story, including events the story never explicitly
states (a toy sketch follows). For a strong AI proponent, a program that does this well
enough can correctly be said to understand the story, and even (here I think some care
is needed) to explain human understanding of such stories.
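Here is a bare-bones sketch, loosely in the spirit of Schank-style script processing
rather than Schank’s actual program: a ‘restaurant script’ supplies default events, so
the program can answer a question (did John eat?) that the story itself never answers
explicitly. All names and rules are invented for the illustration.

```python
# A toy script-processor: defaults from a 'restaurant script' are
# assumed, then overridden by what the story explicitly says.
# (Invented illustration; not Schank's actual program.)

RESTAURANT_SCRIPT = {"ordered": True, "ate": True, "paid": True}

def answer(story: str, question: str) -> str:
    # Adopt the script's default events if the story fits the script.
    facts = dict(RESTAURANT_SCRIPT) if "restaurant" in story else {}
    # Explicit statements in the story override the defaults.
    if "stormed out" in story:
        facts["ate"] = False
        facts["paid"] = False
    if "eat" in question.lower():
        return "yes" if facts.get("ate") else "no"
    return "unknown"

story = "John went to a restaurant, ordered a hamburger, and stormed out."
print(answer(story, "Did John eat the hamburger?"))  # -> no
```

The strong AI claim Searle targets is that a vastly more sophisticated program of this
kind literally understands the story.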
7.
The Chinese Room is a thought experiment Searle proposes to test this strong AI
position. Searle imagines himself in a room with a book of ‘squiggles’ and another
book containing instructions in English along with more squiggles. These squiggles
are actually strings of Chinese writing, and the instructions guide Searle in producing
more squiggles in response to the further batches of squiggles that are sent in to him.
Next Searle proposes that he becomes very adept at following these rules, and the
resulting output, to a Chinese speaker outside Searle’s room, is indistinguishable
from the output of a second room in which a real Chinese speaker sits, answering
questions in Chinese about a story in Chinese that the speaker in the second room has
a copy of with her. Roughly, Searle’s idea is that the first room, with Searle inside,
is passing the Turing test in Chinese (a toy version of the rule-following appears
below).
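A toy sketch (my illustration, not Searle’s) of the purely formal rule-following
involved: the rule book below pairs incoming symbol strings with outgoing ones, and
the operator applies the rules without attaching any meaning to the symbols. The
particular rules are invented placeholders.

```python
# A toy 'Chinese Room' rule book (invented for illustration): it pairs
# incoming symbol strings with outgoing ones. Following the rules never
# requires knowing what any symbol means.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "故事里有几个人？": "故事里有两个人。",  # "How many people are in the story?" -> "Two."
}

def operator(squiggles: str) -> str:
    """Match the incoming string and copy out the paired string."""
    return RULE_BOOK.get(squiggles, "对不起，我不明白。")  # "Sorry, I don't understand."

print(operator("你好吗？"))  # fluent-looking output, no understanding required
```

Searle’s claim, in effect, is that enlarging this rule book never turns rule-following
into understanding.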
8.
Searle turns next to evaluate the claims of strong AI as applied to this case. His
strong AI proponent claims both that the room (including Searle) understands Chinese,
and that what is going on in the room somehow explains what Chinese speakers do
when they understand Chinese. Searle claims, first, that he doesn’t understand a word
of the Chinese stories placed in the room. Second, Searle claims there is no
explanation of understanding here, since there is no understanding going on at all. He
also considers the possibility that some further ‘symbol processing’ is all that
distinguishes his grasp of English from the merely apparent understanding of Chinese
he displays as he carries out his instructions in the Chinese room. Searle doesn’t
reject this claim outright, but he regards it as unsupported by the evidence in such a
case.
9.
Searle’s subsequent discussion focuses sharply on the ‘purely formal’ nature of the
rules/program being carried out in the Chinese room. ‘Computational operations on
purely formally defined elements’, he suggests, ‘have no interesting connection with
understanding’. They are not sufficient conditions (as his two conclusions above
suggest) and he sees no reason to suppose that they are necessary either. The evidence
for this (again) is that a person can follow such formal rules while ‘not understanding
anything’.
10.
Meaning: Searle claims that the difference between his English and his Chinese is the
difference between knowing and not knowing the meanings of the symbols. This, he
urges, is completely clear-cut here, not a matter of degree or somehow a fuzzy line
(though we do recognize matters of degree and fuzzy lines in other cases).
11.
Finally, Searle considers a range of responses, all of which he finds inadequate: the
systems reply, the robot reply, the brain simulator reply, the combination reply, the
other minds reply, and the many mansions reply.
12.
Problems: A. Searle is wrong to say that the Chinese room (or a computer passing the
Turing test) merely runs a formally characterized computer program. This issue
returns in Searle’s closing discussion, where he re-states the question he aims to be
answering as ‘ “But could something think, … , solely by virtue of being a computer
with the right sort of program? Could instantiating a program… by itself be a
sufficient condition of understanding?”’ (315). But the Chinese room does a lot more
than just instantiate an abstract algorithm. In particular, it does so in such a way that
putting in actual written questions in Chinese leads it to output perfectly reasonable,
sensible answers in Chinese. (Many computers carrying out the same algorithm, and
so abstractly simulating this process, would be unable to accept such input and
produce such output.) B. Searle’s invocation of a homunculus points to another
problem: the only way to meet his criteria is to put something he accepts as
understanding (i.e. a human) at the centre of the entire ‘understanding’ system.
C. Finally, what would Searle say about apparently intelligent aliens with very
different ‘brains’? How would he decide whether they were really understanding, or
just simulating? This relates closely to the ‘other minds’ and ‘many mansions’
objections.
13.
Gaps: What is understanding, according to Searle? Does he have a positive theory, or
is it only a ‘gap’ in his account?