Philosophy 4610
Philosophy of Mind
Week 9: Computer Thinking
(continued)
Blade Runner:
(Do Androids Dream of Electric Sheep?)
► The year is 2019
► Deckard (Harrison Ford) is a “Blade Runner” – an elite cop trained to find and hunt down human-like androids or “replicants”
► Six replicants have escaped from a prison colony and are causing problems
Blade Runner:
The Voight-Kampff Test
► In order to tell whether a subject is human or a replicant, investigators use a complex test called the “Voight-Kampff” test to evaluate their responses and reactions.
► Some of the newest generation of replicants have been designed to give emotional responses and have even been implanted with “false memories,” so that they themselves do not know they are not human.
Blade Runner
► If you were Deckard and were confronted with a tricky subject who might be a Replicant, what questions would you want to ask him?
► How could you know for sure whether your subject was human or not? Could you know for sure?
The Turing Test: Questions and Objections
► Is there anything essential that a human being can do that a computer could never do? Why?
► Even if a computer can pass a Turing test, how do we know it is really thinking as opposed to imitating or simulating thought?
► If the Turing test is not a good test for actual thinking, is there any better test?
Computer Thinking: Objections
1) The Theological Objection:
“Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.” (p. 5)
Response:
1) If God can create bodies and attach souls to them, he could also attach souls to computers.
2) Theological arguments are unsatisfactory for establishing scientific conclusions.
Computer Thinking: Objections
2) The “Heads in the Sand” Objection:
“The consequences of machines thinking would be too dreadful. Let’s hope and believe that they cannot do so.” (p. 6)
Response: This is not really an argument at all, but just an appeal for consolation.
Computer Thinking: Objections
4) The Argument from Consciousness:
“No machine could feel (and not merely artificially signal ...) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.” (Geoffrey Jefferson, 1949 (p. 6))
Response: If it is impossible to tell from a machine’s responses that it is really conscious, then it is equally impossible to know whether any other person is really conscious. If the Turing test could not show that a computer is really thinking, then I could never show that anyone else (other than myself) is really thinking.
Computer Thinking: Objections
5) Arguments from Various Disabilities:
No computer could ever do X (where X is, e.g., “Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new”). (p. 8)
Response: Various, but all of these seem to be based on a bad extrapolation from what we have seen before. Some of the computers we have seen cannot do these things, but that is no reason to think we could not eventually build a computer that can.
Computer Thinking: Objections
6) Lady Lovelace’s Objection:
Computers only do what they are programmed to do, so it is impossible for a computer ever to learn something new or do something unexpected.
Response: Computers do “new” and surprising things all the time. It is also easy to set up a mechanism whereby a computer can modify its own program, and thereby can be said to have “learned.”
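Purely as an illustration of the idea in this response (it is not from Turing’s paper), here is a minimal Python sketch of a program whose behaviour is not fixed in advance: it keeps its response rules in a table and rewrites that table from feedback, so its later outputs go beyond what the programmer originally wrote. The class and example strings are invented for this sketch.

```python
# Illustrative sketch only: a trivial "self-modifying" responder.
# Its rule table starts nearly empty and is rewritten from feedback,
# so its later behaviour was not spelled out in the original program.

class LearningResponder:
    def __init__(self):
        # The "program" the machine follows: a modifiable rule table.
        self.rules = {"hello": "hi"}

    def respond(self, prompt: str) -> str:
        # Follow the current rules; admit ignorance otherwise.
        return self.rules.get(prompt, "I don't know yet.")

    def learn(self, prompt: str, better_reply: str) -> None:
        # "Modify its own program": overwrite or add a rule.
        self.rules[prompt] = better_reply


if __name__ == "__main__":
    machine = LearningResponder()
    print(machine.respond("how are you?"))   # -> I don't know yet.
    machine.learn("how are you?", "Quite well, thank you.")
    print(machine.respond("how are you?"))   # -> Quite well, thank you.
```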
Computer Thinking: Minds and Machines
► “The ‘skin-of-an-onion’ analogy is also helpful. In considering the functions of the mind or brain we find certain operations which we can explain in purely mechanical terms. This we say does not correspond to the real mind: it is a sort of skin which we must strip off to find the real mind. But then in what remains we find a further skin to be stripped off, and so on. Proceeding in this way do we ever come to the ‘real’ mind, or do we eventually come to the skin which has nothing in it? In the latter case the mind is mechanical.” (Turing, p. 12)
Artificial Intelligence: Identifying the Positions
► Can a computer think?
► Is passing the Turing test a sufficient criterion for a computer thinking?
► What do you think each of the positions (dualism, logical behaviorism, identity theory, functionalism) we studied in the first half of the course would say to each question?
John Searle and the ‘Chinese Room’
► Searle argues against both functionalism (the computer model of mind) and the claim that a computer that passes the Turing test would actually be thinking.
► He does so by using a counter-example wherein a system passes the Turing test, but is not at all thinking or understanding.
Searle and “Strong AI”
► “Strong AI” can be defined as the position that:
 I) A computer that is programmed with rules for the manipulation of symbols can actually think
 II) We can tell that such a system is actually thinking if it can pass the Turing test.
► Searle’s “Chinese Room” example is meant to refute both claims.
The Chinese Room
► In the Chinese Room, there is a rule book for manipulating symbols and an operator who does not understand any Chinese
► The Room produces perfectly good Chinese answers and could pass a Turing Test conducted in Chinese
► But nothing in the room actually understands Chinese
The Chinese Room
► According to Searle, in the Chinese Room there is intelligent-seeming behavior but no actual intelligence or understanding. There is syntax (rules for the manipulation of meaningless signs), but the semantics, or meaning, of the signs is missing. This shows, Searle argues, that rule-governed behavior is not enough to give real understanding or thinking.
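To make the “syntax without semantics” point concrete, here is a small, purely illustrative Python sketch (not Searle’s own example): the “room” is just a lookup table pairing incoming symbol strings with outgoing ones, so it can return appropriate-looking replies while nothing in the program represents what the symbols mean. The rule-book entries and phrases are invented for this sketch.

```python
# Illustrative sketch only: a "Chinese Room" as pure symbol manipulation.
# The rule book pairs input strings with output strings; nothing in the
# program encodes what any of the symbols mean.

RULE_BOOK = {
    # Invented entries standing in for the room's rule book:
    # "if you receive this squiggle, send back that squiggle."
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(incoming_symbols: str) -> str:
    # The operator blindly matches shapes against the rule book.
    # No translation, parsing, or understanding happens anywhere.
    return RULE_BOOK.get(incoming_symbols, "对不起，我不明白。")

if __name__ == "__main__":
    print(chinese_room("你好吗？"))          # looks like a sensible reply
    print(chinese_room("今天天气怎么样？"))  # looks like a sensible reply
```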
The Chinese Room: The “Systems” Reply
► Even if there is no single element in the Chinese Room that understands Chinese, perhaps the understanding of Chinese really is in the whole system itself.
► What are the criteria for “really understanding” as opposed to just seeming to understand? What role (if any) does experience, consciousness, or self-awareness play? How might we test for these qualities?
Computer Thinking: Summary
► Turing suggested that computers could think and proposed the Turing test to determine whether they can.
► If we accept the test, it will be difficult to hold onto a dualist or theological view of human consciousness.
► On the other hand, it is not obvious how to explain consciousness, or how a physical organism could give rise to experience at all.