Can machines think? - 100D Philosophy of Mind

ANNOUNCEMENTS
Read: Searle, “Minds, Brains, and Programs”
I will be holding extra office hours on Wednesday after lecture.
THE BIG QUESTION
Before proceeding to look at an instance of a computational theory of mind (CTM), we
need to address an important question: could a machine ever think?
The worry is that CTM is on the wrong track to begin with, and has no hope of ever
being correct.
We need to distinguish two questions:
(1) Can a machine think?
(2) How would we ever know if one were?
THE BIG QUESTION
As we will see, these get run together a bit in this discussion, but our main focus is on
question #1: could a machine ever have mental states?
Many are inclined to say “no”:
Moreover, it must be confessed that perception and that which depends
upon it are inexplicable on mechanical grounds, that is to say, by means
of figures and motions. And supposing there were a machine, so
constructed as to think, feel, and have perception, it might be conceived
as increased in size, while keeping the same proportions, so that one
might go into it as into a mill. That being so, we should, on examining
its interior, find only parts which work one upon another, and never
anything by which to explain a perception. Thus it is in a simple
substance, and not in a compound or in a machine, that perception
must be sought for. (Leibniz, Monadology)
THE BIG QUESTION
People are often very skeptical that a machine could ever be built with the capacity
for certain special kinds of mental states:
• Sensations (tickles, itches, etc.)
• Pain
• Emotions like anger, happiness, etc.
• Qualia like the look of red, or the taste of raspberry yogurt
• Love
• Etc.
THE BIG QUESTION
On the other hand, we are willing to attribute these states to robots when they act a
certain way.
THE TURING TEST
It is fair to say our base intuitions are a bit inconsistent on this matter.
So how are we to decide whether or not it is possible for a machine to think?
Alan Turing famously proposed a test that one could give a machine (or anything else)
to determine whether or not it is thinking.
He called it “The Imitation Game” but today it is mostly called “The Turing Test.”
THE TURING TEST
Turing proposed that the way to determine whether something is thinking is to have
a conversation with it.
Turing set up his test as follows (a schematic sketch in code follows the list):
1. The tester is in room A, the computer is in room B, and another person is in room C.
2. They communicate via a chat interface so that none of the inhabitants can see each other.
3. A does not know ahead of time which room contains the computer.
4. A carries out a typed conversation with the entities in rooms B and C.
5. The computer passes if A cannot tell which of the two rooms contains the computer.
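To make the setup concrete, here is a minimal schematic in Python. Everything in it is illustrative: the tester, machine, and human objects and their ask/reply/receive/guess methods are hypothetical interfaces, not anything from Turing’s paper.

```python
import random

def imitation_game(tester, machine, human, rounds=10):
    """One run of the imitation game (hypothetical interfaces)."""
    # Randomly assign the machine and the human to rooms B and C,
    # so the tester cannot know in advance which room holds which.
    occupants = [machine, human]
    random.shuffle(occupants)
    rooms = dict(zip(["B", "C"], occupants))

    for _ in range(rounds):
        for label, occupant in rooms.items():
            question = tester.ask(label)        # typed question to that room
            answer = occupant.reply(question)   # typed answer comes back
            tester.receive(label, answer)       # tester sees only text + label

    # The computer passes if the tester cannot pick it out; here,
    # simply: the tester's final guess about the computer's room is wrong.
    return rooms[tester.guess()] is not machine
```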
THE TURING TEST
Turing’s idea is that we should conclude that the machine is thinking for the same
reasons that we conclude that other people are thinking.
The outward behavior of the machine can (from the standpoint of the tester) be best
explained by positing internal mental states.
Turing thinks that if we encounter a machine that can pass this test, then we should
conclude (barring further evidence) that it is thinking.
Turing also claims that it is possible for a machine to pass this test.
THE TURING TEST
Turing’s general argument is as follows:
1. If something can pass the Turing Test, then we should conclude that it is thinking.
2. It is possible for a computer to pass the Turing Test.
3. Therefore, we should conclude that it is possible for a computer to think.
Turing discusses a series of objections to his position that machines could pass his test
(and thus be thinking).
• It is sort of a mixed bag.
• Some of them address his test and what it could show.
• Most just try to establish that a machine could never be thinking (in the way that we
do).
THE SILLY OBJECTIONS FIRST
It Would Be Bad if Machines Could Think
Lots of movies and books depict things going horribly wrong if machines ever develop
minds of their own.
• Terminator
• War Games
• 2001
Elon Musk has recently called A.I. “our biggest existential threat.”
He and Stephen Hawking published an open letter urgently warning of the dangers of A.I.
THE SILLY OBJECTIONS FIRST
Of course this isn’t an objection against the possibility of A.I.
In fact, the complaints seem to entail that the authors think it is possible, and
perhaps imminent.
THE SILLY OBJECTIONS FIRST
Machines can’t have ESP.
At the time, a series of experiments had been done that seemed to indicate the
existence of E.S.P.
The studies were later discredited due to poor experiment design and the failure to
replicate the results.
Still:
Even if E.S.P. were possible and machines couldn’t do it, it wouldn’t show that they couldn’t
think.
• Not every person has E.S.P.
• Machines could have other mental capacities besides this. Lacking one mental capacity
does not entail that something lacks any.
THE SONNET ARGUMENT
“Not until a machine can write a sonnet or compose a concerto because of thoughts
and emotions felt, and not by the chance fall of symbols, could we agree that
machine equals brain—that is, not only write it but know that it had written it.”
(Geoffrey Jefferson, quoted in Turing, p. 174)
The question here is not whether or not a machine could output a sonnet but whether
it could do so because of its thoughts and emotions.
THE SONNET ARGUMENT
Turing responds by imagining an exchange going on in one of his tests:
Interrogator: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,”
would not “a spring day” do as well or better?
Witness: It wouldn’t scan.
Interrogator: How about “a winter’s day”? That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter’s day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the
comparison.
Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather
than a special one like Christmas.
THE POWER OF THE TURING TEST
The sonnet argument underestimates the power of Turing’s Test.
Carrying out a conversation is a non-trivial accomplishment.
• Turing thought that we would have machines that could pass his test within 50 years
or so (i.e., by around 2000).
• But we are not even close to getting a machine that can do this.
THE POWER OF THE TURING TEST
The distorted-word puzzles you encounter online are called CAPTCHAs.
This is an acronym for “Completely Automated Public Turing test to tell Computers and
Humans Apart.”
They typically rely on linguistic and perceptual capacities that computers have a hard
time duplicating.
THE POWER OF THE TURING TEST
Every year the Loebner Prize is given to the program that performs the best in a Turing
Test.
You can go online and chat with many of the previous winners yourself.
They are (without exception) pretty awful.
ANNOUNCEMENTS
I will be holding extra office hours after lecture today in SH 5720.
Read: Searle, “Minds, Brains, and Programs”
Start: Pylyshyn, “What’s in Your Mind?”
A GOOD TEST?
The way Turing describes his test raises questions about its usefulness.
Should we really think that one conversation alone is enough to establish that
something is thinking?
Clearly not.
A GOOD TEST?
Turing does not mean to suggest that one conversation by one person should decide
things.
What he has in mind is a machine that displays general linguistic competence, one
able to carry out conversations as many and as varied as those of a normal
adult human.
It is this very general ability that Turing thinks is indicative of intelligence.
We can imagine a much longer conversation, with many different people, perhaps
even in a non-experimental environment.
LIMITATIONS ON THE TEST
One should not think of the Turing Test as imposing either necessary or sufficient
conditions on thinking things.
That is, it is both possible for something to pass the Turing Test and not have a mind,
and also possible for something to have a mind but fail the Turing Test.
LIMITATIONS ON THE TEST: NOT NECESSARY
Kim points out that the test seems both too tough and too narrow.
Too Tough: Animals and children have minds and mental states but cannot pass the
Turing test.
Too Narrow: Since the test is explicitly formulated as to whether or not a computer can
pass for a human mind, it can only test whether something has (or can fake) a
mind like ours.
LIMITATIONS ON THE TEST: NOT NECESSARY
[Video clip shown in lecture: HAL 9000 in a scene from 2001: A Space Odyssey.]
It definitely seems to be true that passing the Turing Test is not necessary for having a
mind.
But is it really the case that you could only use it to test for human-like minds?
Claim: If a machine behaved as HAL 9000 does in that scene, we would conclude that
it was intelligent.
LIMITATIONS ON THE TEST: NOT SUFFICIENT
Perhaps more troubling for Turing is that the test is not even sufficient to show that
something is thinking.
Blockhead (Due to Ned Block)
Suppose we build a computer like this (a toy version is sketched below):
• It has a huge (finite) number of whole, reasonable English conversations encoded
in its memory: say, all possible English conversations under 1 million words in
length.
• When you type something into the computer, it scans its memory, eliminating all
conversations that don’t start that way, and randomly selects one of the next
responses from its pool of conversations.
• You type the next thing in, and it repeats the process.
LIMITATIONS ON THE TEST: NOT SUFFICIENT
It is clear that Blockhead will pass the Turing Test.
But it is also clear that Blockhead is not thinking.
Therefore, passing the Turing Test is not sufficient for thinking.
STILL…
Even if passing the Turing Test does not logically or metaphysically entail that
something is thinking, it would still give us good reason to believe that
something was thinking.
Of course, we could turn out to be fooled by Blockhead or some other such thing.
Still, we could turn out to be fooled in this way about other people as well.
STILL…
Unless we want to endorse a radical skepticism of other minds we should conclude
that:
• Passing the Turing Test provides very strong (knowledge supporting) evidence
that something is thinking.
• Of course, we could be tricked.
• All this shows is that we can revise our beliefs on finding out exactly how the
machine functions.
AGAINST THE POSSIBILITY OF THINKING MACHINES
Turing confronts a series of arguments/claims/intuitions that purport to establish that
a machine could never be thinking.
ARGUMENT FROM VARIOUS DISABILITIES
“Machines could never X”:
• Be kind
• Have a sense of humor
• Tell right from wrong
• Make mistakes
• Fall in love
• Enjoy strawberries and cream
• Learn from experience
• Think about themselves
• Be creative
ARGUMENT FROM VARIOUS DISABILITIES
Let’s look at a few of the more interesting ones.
Can’t make mistakes
The idea is that machines are hard-wired, perfect logic-engines and could never screw
up.
• Obviously false! Machines mess up all the time.
• Chess programs.
• My router.
ARGUMENT FROM VARIOUS DISABILITIES
Let’s look at a few of the more interesting ones.
Can’t make mistakes
This seems related to Ada Lovelace’s objection: she suggests that a machine
could never take us by surprise.
The same sorts of examples indicate why this claim is false.
ARGUMENT FROM VARIOUS DISABILITIES
Let’s look at a few of the more interesting ones.
Have a sense of humor
This seems to be just the kind of thing the Turing Test would be very good at
establishing.
Does the computer tell jokes and respond appropriately to jokes and humorous
situations?
ARGUMENT FROM VARIOUS DISABILITIES
Let’s look at a few of the more interesting ones.
Think about itself
This is a bit tricky. A computer could certainly refer to itself or represent itself. We
have machines that do this already.
• Self-driving cars have to calculate their position relative to other objects on the
road, their destination and their overall global position.
Maybe what is meant is that it couldn’t have a concept of the self.
• Needs an argument, or is otherwise question-begging.
ARGUMENT FROM VARIOUS DISABILITIES
Let’s look at a few of the more interesting ones.
Can’t enjoy the taste of strawberries and cream
Turing just dismisses this one, but he probably shouldn’t.
Whoever raised this to him probably had in mind some sort of qualia-based objection.
• You may think that a computer could never have qualia; that at best it would be a
Chalmers-style zombie.
• This doesn’t show that a computer could never have a mind, just that it couldn’t have a
qualia-having mind.
Furthermore, since qualia are pretty mysterious and we don’t really know why we have
them, perhaps we shouldn’t pre-judge the issue.
INFORMALITY OF BEHAVIOR
Objection from the Informality of Behavior
1. Computers always follow rules.
2. You can’t write enough rules to cover every possible situation.
3. Therefore, at some point the computer will be in a situation which none of its
rules address.
4. It will “short out,” go into error mode, or some such thing.
5. Humans do not always need to follow the rules, and so can avoid these types of
scenarios and figure out what to do next.
6. So humans, at least, are not computers.
INFORMALITY OF BEHAVIOR
The objection is interesting.
• It doesn’t really address the test, but rather gives reasons to suppose that,
whatever the outcome of the test, a computer could never have a mind like ours.
Turing responds by distinguishing two kinds of rules:
• Rules of Conduct: Rules that the subject is consciously aware of. They govern her
behavior, but she is free to ignore them in certain contexts. (e.g., stop at the red
light)
• Laws of Behavior: Rules that are hard-wired in. They are purely causal and
deterministic rules governing what a system will do given certain external
conditions. (e.g., the laws of planetary motion)
INFORMALITY OF BEHAVIOR
Humans often follow rules of conduct:
• Traffic laws
• Ethical rules
• Rules of propriety or politeness
It is certainly true that we can break these rules.
Is it really true that you couldn’t program a computer to break these kinds of rules?
• The computer would have a rule like “Do X unless such-and-such overriding
conditions obtain,” as in the sketch below.
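A minimal sketch of what such an overridable rule might look like in code; the function and the particular overriding conditions are invented for illustration:

```python
# Illustrative sketch of a "rule of conduct" with built-in overriding
# conditions: the default is to stop at a red light, but the rule itself
# specifies circumstances under which the agent breaks it.
def should_stop_at_red(light_is_red, emergency=False, officer_waves_through=False):
    if not light_is_red:
        return False
    # Overriding conditions: follow the rule unless one of these obtains.
    if emergency or officer_waves_through:
        return False
    return True

print(should_stop_at_red(True))                  # True: follow the rule
print(should_stop_at_red(True, emergency=True))  # False: rule overridden
```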
INFORMALITY OF BEHAVIOR
Machines are definitely bound by laws of behavior.
• These are the purely causal, physical regularities that constrain its behavior.
• Its basic physical structure and the way its physical parts interact causally with
each other.
• These will be purely causal and not the kind of thing a machine can “break.”
But do we have any reason to believe that there are unbreakable laws of behavior
that apply to us as well?
• The functioning of our brains
• The basic causal laws governing transitions between neural states
BACK TO TURING MACHINES
Searle gives a much more interesting and much deeper sort of objection against the
possibility of a machine like a computer thinking.
Searle grants that it is possible for a machine to be thinking.
He thinks that is basically what we are!
What he denies is that a machine like a Turing Machine or a computer could ever be
thinking.
To see why, we need to go back and look at Turing Machines again.
BACK TO TURING MACHINES
I called the Turing Machine described below an “adding machine” (a failed one).
BACK TO TURING MACHINES
I know what the machine is because I designed it with a certain goal in mind.
When describing it, we can say that three “1”s in a row on the tape mean “3,” a space
means “+,” etc.
But this is entirely irrelevant to how the machine works.
• The scanner head only looks at the physical shapes on its tape.
• It doesn’t “think” of three of a certain mark as the number 3.
• It does what it does solely based on the syntactic/formal properties of what is
written on its tape (a toy illustration follows).
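The machine table from lecture is not reproduced here, so the following is a stand-in: a standard unary adding machine, sketched in Python, with “111 111” as the input that we read as 3 + 3. Notice that the program only ever matches and rewrites marks; the arithmetic interpretation is entirely ours.

```python
def run_adder(tape):
    """Simulate a tiny unary-adding Turing machine (a stand-in, not the
    lecture's machine table) on a tape of "1"s and blanks."""
    tape = list(tape) + [" "] * 2   # pad the tape with blanks on the right
    state, pos = "seek_gap", 0
    while state != "halt":
        symbol = tape[pos]          # the head sees only a shape: "1" or " "
        if state == "seek_gap":     # scan right for the blank between blocks
            if symbol == "1":
                pos += 1
            else:
                tape[pos] = "1"     # fill the gap with a mark
                state, pos = "seek_end", pos + 1
        elif state == "seek_end":   # scan right to the end of the marks
            if symbol == "1":
                pos += 1
            else:
                state, pos = "erase_one", pos - 1
        elif state == "erase_one":  # erase one mark to fix the total count
            tape[pos] = " "
            state = "halt"
    return "".join(tape).strip()

print(run_adder("111 111"))   # "111111" -- which WE interpret as 6
```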
BACK TO TURING MACHINES
For instance, this little Turing machine could be a small part of a much more
complicated chess-playing machine.
There, this exact machine table wouldn’t have anything to do with adding numbers. It
would play a completely different role.
In no sense would the three “1s” on its tape mean “3” there.
READING
Read: Pylyshyn, “What’s in Your Mind?”
TURN YOUR PAPERS IN TO AUGI!
BACK TO TURING MACHINES
The distinction between the syntax and semantics of a language is useful here.
Syntax: Concerned with purely formal features of language. Formation rules of
expressions. Grammar. Rules about the very general structure of a language
independent of the meanings of the words.
Semantics: Concerned with the meaning of words and phrases.
BACK TO TURING MACHINES
Digital computers and Turing Machines operate on a purely syntactic level. They are
just symbol manipulators.
The meanings of the symbols (if they have any) in their processing language are
entirely irrelevant to what the machine does.
It only looks at the shapes of the inputs it gets, and follows a purely causal process to
produce other shapes.
Searle argues that you can never get semantics (that is meaning) from purely
syntactic symbol manipulation.
THE CHINESE ROOM
The Chinese Room
Imagine that John is locked in a room with a bunch of books and a slot in one of
the walls.
• John does not understand any Chinese. He doesn’t even recognize Chinese
writing as writing. To him Chinese writing appears to be a bunch of meaningless
shapes.
• The books are filled with instructions written in English. They are of the form: “If
you see this shape… write down this shape….” (A toy version appears after this list.)
• These instructions are written in such a way that they will produce a perfectly
sensible set of responses in conversation with a Chinese speaker.
• Outside the room are native Chinese speakers writing sentences in Chinese on
slips of paper. They feed them into the slot and get sensible responses back.
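A toy rendering of John’s procedure in Python. The two-entry rule book is invented for illustration (Searle imagines rules rich enough to sustain whole conversations); the point is only that the lookup never touches meaning:

```python
# Toy rule book: input shapes mapped to output shapes. To John (and to the
# program) these are meaningless glyphs; any meaning lives entirely with
# the Chinese speakers outside the room.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气好。": "是的，很好。",
}

def johns_procedure(slip):
    # Match the incoming shapes against the book and copy out the
    # prescribed shapes. Pure syntax: no translation, no understanding.
    return RULE_BOOK.get(slip, "请再说一遍。")  # default shape to write back

print(johns_procedure("你好吗？"))  # sensible to a Chinese reader, opaque to John
```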
THE CHINESE ROOM
By following these instructions John is able to produce responses that fool a native
Chinese speaker into thinking that they are conversing with another person who
understands Chinese.
Does John understand Chinese?
No!
THE CHINESE ROOM
Searle’s Diagnosis:
• John does not understand Chinese because he only has access to the syntactic
features of Chinese symbols.
• All his instructions involve only the shapes of the symbols that come through the
slot and what shapes to write down on the paper.
• Simply moving symbols around does not suffice for understanding a language.
But this is all computers ever do!
• A digital computer or Turing Machine is just like John.
• It operates purely at a syntactic level.
Searle concludes that a computer could never be thinking because it could never
have meaningful thoughts.
THE SYSTEMS REPLY
John does not understand Chinese.
But John is just part of a larger system including: the books, the room, the slots, etc.
That John does not understand Chinese does not entail that the system as a whole
doesn’t understand Chinese.
Searle: Have John memorize all the books. Now he is the whole system, and still
doesn’t understand Chinese.
• (Also, is it supposed to be that the room understands Chinese?)
ROBOT REPLY
John doesn’t understand Chinese, but that doesn’t show that a suitably situated
computer couldn’t.
At bottom, the causal processes of the computer are just syntax manipulation. BUT, if
you put the computer in a suitable environment, those symbols could come to
stand for things, that is, acquire meaning.
Put a suitable computer program in a robot and have it bump around the world
interacting with things and people.
It doesn’t “get semantics out of syntax” but it is a computer that ends up genuinely
understanding Chinese!
ROBOT REPLY
Searle’s Response:
This reply concedes that minds are more than just symbol manipulators. (True!)
But this still won’t help. Change the case so that:
• Some of the input symbols are fed in from a video camera attached to a robot.
• Some of the things that John is instructed to write down are instructions that
cause the robot to move around Beijing.
Now does John understand Chinese?
BRAIN SIMULATOR RESPONSE
Searle claims that the Chinese Room shows that no computer program could ever
produce thought.
What about the most extreme sort of case:
• Suppose we construct a program that is a precise functional model of the human
brain.
• Couldn’t such a computer model of a Chinese speaker’s brain understand Chinese?
BRAIN SIMULATOR RESPONSE
Searle’s Response
Suppose now we have John functionally implement this program with a series of
water tubes and levers. Does John understand Chinese?
Note: This is a bit weak. Take John out of the picture entirely and ask the question
again.
• This would be a very weird brain, no doubt, but is it so obvious that it could
never be thinking?
• It is a perfect functional duplicate of a human brain! The only difference is
that it is made out of water pipes instead of neurons.
• But what is so special about neurons? What are their magical properties that
water tubes don’t have?
RESPONSES TO THE CHINESE ROOM SO FAR
Last time we looked at three responses to Searle’s Chinese Room argument against
the Computational Theory of Mind.
(1) The Systems Reply
(2) The Robot Reply
(3) The Brain Simulator Reply
THE COMBINATION REPLY
Each of the preceding in isolation may look weak, but what if we combine them?
• A functional duplicate of a human brain (in program form)
• It is put in a robot.
• The robot is allowed to bump around the world, interact with Chinese speakers,
carry out activities, and so on.
• After doing this for a while it is able to respond sensibly to other Chinese
speakers in a Turing Test passing way.
Could such a robot understand Chinese?
Even Searle admits that we would say it does.
THE COMBINATION REPLY
Searle’s First Response
The proponent of the Computational Theory of Mind can’t say this: they are
committed to mentality being solely a matter of functional organization and
meaningless symbol manipulation.
• Who says? Maybe some people held this view initially, but very few did. Fodor,
who was one of the pioneers of this approach, certainly does not think this.
• Certainly almost no one believes it now.
THE COMBINATION REPLY
Searle’s Second Response
We would attribute understanding to this thing only until we learned how it worked.
When we found out it was just a symbol manipulator at bottom, we would change
our beliefs.
• Certainly sometimes we would. If we found out it was Blockhead, we would likely
conclude that it didn’t understand after all.
• But what if we found out that its mind-like behavior was due to a functional
duplicate of a human brain or some other suitable sort of program?
THE COMBINATION REPLY
Searle’s Third Response (or 2.5)
When and why do we attribute contentful mental states like propositional attitudes to
non-humans like animals?
1. When their behavior is of the right kind to seem to require the positing of mental
states. (Yes!)
2. Because “they are made out of similar stuff.” (218)
• WTF?
THE MAGICAL NEURON THEORY
“It is because I am a certain sort of organism with a certain biological structure that I
am able to understand English.” (219)
Searle concedes that maybe some alien could understand English without having
neurons. He thinks it is an empirical matter (go find one or shut up).
But he does seem to think the neurons themselves are very important. He thinks you
could build a thinking machine if:
“It is possible to produce artificially a machine with a nervous system, neurons with
axons and dendrites, and all the rest of it sufficiently like ours.” (220)
THE MAGICAL NEURON THEORY
But what gives neurons these peculiar properties that many other physical things
don’t have?
If aliens without neurons can have understanding (as he concedes is possible) what
property is shared between their “thinking stuff” and ours?