
The Turing Test
Surface and depth in cybernetic dialogue
Staffan Larsson
Department of Linguistics
Humanistdagarna 2005
• Posthumanism
• The Turing Test
• Artificial intelligence
• The Wittgenstein Test?
• Mediation and categorization
Posthumanism
• N. K. Hayles: How We Became Posthuman
• The "posthuman" human, the cyborg, is a human extended by and in symbiosis with technology; our technologies function as prostheses.
• The posthuman perspective constitutes a creative critique of traditional humanism,
– which ignores embodiment and assumes that the human "inner life" is independent of the body (a liberal, rational, disembodied subject)
– but which also rests on the assumption that the biological body, in an uncomplicated way, constitutes the boundary of the human
• Instead, the body is only "the first prosthesis"; the human being is inseparably integrated with her aids, her language, her technologies, her surroundings and her fellow human beings
• Seminar series: Tes-Antites-Protes (TAP)
– www.ling.gu.se/projekt/tap
Alan Turing (1912-1954)
• Born in London, PhD at Princeton 1938
• The world’s first computer scientist
– proved the existence of uncomputable functions
– invented the first formal model of computation: the Turing Machine (sketched below)
– helped break the German Enigma code using a
simple nonprogrammable computer
– laid the groundwork for Artificial Intelligence (AI)
• Arrested in 1952 under British laws criminalizing homosexuality
– forced to undergo hormone treatment
– died from cyanide poisoning
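• The Turing Machine mentioned above can be made concrete with a minimal sketch (my illustration, not part of the original slides): a tape, a read/write head, a state, and a transition table. The toy table here just flips the bits of its input and halts on a blank.
```python
# Minimal Turing machine sketch: a tape, a head, a state, and a transition table.
# The example table below is a toy machine that inverts a binary string.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move in {'L', 'R'})."""
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:   # no applicable rule: halt
            break
        state, new_symbol, move = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    # read the tape back in position order
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Toy machine: walk right, flipping 0 <-> 1, halt on blank.
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing_machine(flip_bits, "0110"))  # -> 1001
```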
“Computing Machinery and
Intelligence” (1950)
• “Can machines think?”
– Turing thought this question “too meaningless
to deserve discussion”
– wanted to replace it with something more concrete
The imitation game
• 3 participants:
– man (A)
– woman (B)
– interrogator (C)
• Interrogator stays in a separate room, and
communicates via a messenger or a text
terminal (”teleprinter”)
• C’s goal is to determine who is the man and who
is the woman; he knows them as X and Y
• A tries to get C to make the wrong identification
• B tries to help C make the correct identification
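• To make the setup concrete, here is a small illustrative sketch (mine, not from the lecture) of the game's structure: the interrogator only ever sees text under the anonymous labels X and Y, so everything but verbal behaviour is hidden. The canned answer for A is the example Turing himself gives in the paper; the participant functions and names are otherwise placeholders.
```python
import random

# Hidden participants: A (the man, who tries to mislead C) and B (the woman, who tries to help C).
# As far as the interrogator C is concerned, each is just "something that answers in text".
def participant_A(question):
    # Turing's own example of an answer A might give about hair length.
    return "My hair is shingled, and the longest strands are about nine inches long."

def participant_B(question):
    return "I am the woman, don't listen to him!"  # placeholder answer

def imitation_game(question, guess):
    # The text-only channel: C sees the answers only under the anonymous labels X and Y.
    participants = [participant_A, participant_B]
    random.shuffle(participants)                   # hide who is behind which label
    labels = dict(zip(["X", "Y"], participants))
    transcript = {label: respond(question) for label, respond in labels.items()}
    woman_label = next(label for label, respond in labels.items() if respond is participant_B)
    return transcript, guess == woman_label        # did C identify the woman correctly?

transcript, correct = imitation_game(
    "Will X please tell me the length of his or her hair?", guess="X")
print(transcript)
print("C guessed correctly:", correct)
```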
The new question
• ”What will happen when a machine takes the
part of A in this game, the part of B being taken
by a man”?
• Will C decide wrongly as often as in the original
game?
• This question replaces ”Can machines think?”
• This is an operational / behavioural definition of
thinking and intelligence
• Focus on verbal behaviour abstracts from
incidental properties like visual appearance
Turing’s prediction
“In about fifty years time it will be possible to programme computers with a storage capacity of about 10^9 to make them play the game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.”
“At the end of the century the use of words and general
educated opinion will have altered so much that one will
be able to speak of machines thinking without expecting
to be contradicted”
Loebner Prize
Two dialogues from the Loebner
Prize competition 2005
X: Hello, my name is Alice and I am the human
Judge 1: Hi Alice, if you are indeed human, you should be able to tell me
which is bigger, my big toe or a Boeing 747
X: Hi there Judge 1. Um, I think I sometimes am.I am a real person. As you
are sitting in front of your computer screen, you may be wondering if I am a
computer or a person talking.
Judge 1: Can you answer the question about the toe and the 747?
X: Please rephrase the question and try asking me again. As you hear the
sound of keys typing you may begin to feel more comfortable chatting with
me.
Judge 1: Which is bigger, a 747 or my big toe. Come on, answer it.
X: The obvious one. Please rephrase your question, Judge 1. The feel of
the keys on the tips of your fingers may remind you of some questions you
wanted to ask me about.
Judge 1: Is a 747 smaller than my big toe or your big toe or anyone's big
toe?
X: Not if I can help it. You may be seated alone in a room, or together with
a group of other people using computers. You may be judging me.
Y: Hello, my name is Sam and I am the human.
Judge 3: what is your profession?
Y:i am a professor.
Judge 3: do you like windsurfing?
Y: think I would. Do you?
Judge 3: tell me what you know about it.
Y: Not much, to be honest. Sort of sailing on a surfboard?
Judge 3: do you know more about sailing?
Y: A little more. I'm not too big on boats.
Judge 3: ell me some sport you are big on.
Y: I like to swim.
Judge 3: which stroke is the most tiring to you? and to most people?
Y: Crawl. Or butterfly.
Judge 3: how far can you butterfly?
Y: Not very far. It's tiring! As you say.
Judge 3: what makes it so tiring?
Y: I suppose that it provides less floating momentum than backstroke, for
instance.
Judge 3: could you describe the motion you make when you do butterfly?
Y: Both sets of limbs work simultaneously.
• X = ALICE, winner of the 2005 Loebner
Prize
• Y = A human confederate
• Judge 3: i think you're real.
• Y: Thanks. I think you're real too.
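• ALICE is driven by AIML-style pattern-response rules, and when nothing matches it falls back on canned deflections, much as in the toe-and-747 exchange above. Below is a minimal sketch of that general mechanism (my illustration; not ALICE's actual rule base or the AIML format).
```python
import re

# A tiny pattern -> response rule base in the spirit of ELIZA/AIML-style chatbots.
# Real systems have tens of thousands of rules; the mechanism is the same.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bhow are you\b", re.I),    "I am fine, thank you."),
    (re.compile(r"\bare you (a )?(human|computer|robot)\b", re.I),
     "I am a real person, as far as you can tell."),
]
FALLBACKS = [
    "Please rephrase the question and try asking me again.",
    "That is an interesting question. Tell me more about yourself.",
]

def reply(utterance, turn=0):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # Nothing matched: deflect. This is where grounding in the world is missing.
    return FALLBACKS[turn % len(FALLBACKS)]

print(reply("Hi, my name is Judge 1"))
print(reply("Which is bigger, my big toe or a Boeing 747?"))  # no rule -> deflection
```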
• But the Turing Test has a role in ...
“How I failed the Turing Test”
http://www.blogcadre.com/blog/jason_striegel/how_i_failed_the_turing_test_2005_09_04_13_26_29
• ...the more I tried proving my "actual" intelligence, the
more my "artificial" intelligence would get called into
question...
• jmstriegel: no, really. I'm quite human.
jmstriegel: test me if you want
shymuffin32: ok
shymuffin32: why do you like music?
jmstriegel: hmm. i've never really considered that.
jmstriegel: hell, i'm not going to be able to contrive a
good answer for that one. ask me something else.
shymuffin32: jeesus, you're worse than eliza
Artificial Intelligence
• Goal
– simulate human/intelligent behaviour/thinking
• Weak AI
– Machines can be made to act as if they were
intelligent, i.e., pass the Turing Test
• Strong AI
– Agents that act intelligently have real, conscious
minds
– cf. “Is passing a TT criterial for intelligence?”
• It is possible to believe in strong AI but not in weak AI
– the claims are logically independent: one might hold that any agent that genuinely acted intelligently would have a mind, while doubting that machines can ever be made to act intelligently
Criticism of AI
• Drawing on Heidegger and Wittgenstein, a number of philosophers and (former) AI researchers (Winograd & Flores 1987, Weizenbaum 1976, Dreyfus 1992) have formulated a critique of AI (primarily GOFAI)
• Human "common sense" / "background" is grounded in the body (embodied) and is a product of human embodied socialization processes; it is not clear that there are any "shortcuts" to common sense
• This background is central to language understanding and language use
• Therefore no computer (of the kind we know today) will be able to pass the Turing Test
• In this sense the Turing Test can be said to constitute a boundary between human and machine: that which only the human can manage
The interface actualizes the boundary
• We can instead choose to direct research towards the development of language-technology interfaces, with the aim of making technologies more accessible
• But language-technology interfaces often take the form of an (apparent) subject;
– a dialogue system such as SJ's timetable information service comes across as an individual, albeit a slow, one-track one with severe hearing problems.
– To some extent this is an identification we humans cannot help making, at least to some degree: if it talks like a human, it is a human
• Through their linguistic ability, language-technology interfaces tend to actualize the boundary between human and machine, whether we want it to or not
Hayles on the Turing test
• ”Your job is to pose questions that can
distinguish verbal performance from
embodied reality”
• Turing makes a crucial distinction:
– the enacted body, present in the flesh on one side of the computer screen
– the represented body, produced by verbal and semiotic markers constituting it in an electronic environment
• The subject is contingent and mediated by
technology...
– ”...that has become so entwined with the
production of identity that it can no longer be
meaningfully separated from the human
subject”
– a cyborg
The new question again
• 3 participants:
– man (A)
– woman (B)
– interrogator (C)
• What will happen when a machine takes
the part of A in this game, the role of B
being taken by a man?
• Will C decide wrongly as often as in the
original game?
• Not completely clear whether
– A should try to pretend that A is a woman,
– or that A is a human
• Not completely clear whether
– B should be a woman or a man
• Not completely clear whether C should be told
that
– one of X and Y is a machine and the other a woman
– or that one of X and Y is a man and the other a
woman
Gender imitation
• Why has the gender aspect of Turing’s original
formulation been downplayed? Perhaps it
suggests that gender, as well as thinking, is
– a property projected on the subject by others
– not emanating from some essence within the subject,
but rather working ”from the outside in”
– cf. social constructivism, feminism, queer theory
• Sherry Turkle: Life on the Screen – Identity in the Age of the Internet
Language games and communities
• Compare the Turing test to Wittgenstein’s view of
language communities
– Kripke (1982): Wittgenstein on Rules and Private Language
– Instead of building complex theories of meaning and
understanding, see how these concepts are used
• When we say that someone understands an utterance, or means something by an utterance, we are performatively including them in, or excluding them from, a community of language users
– “I understand what you mean”: inclusion
– “What A really means (when he says X) is Y”: exclusion
• To be part of a community is to be accepted as someone
able to participate in the language games specific to that
community
Communities on several levels
• All speakers of some human language
• All speakers of some Indo-European language
• All speakers of some Scandinavian language
• All speakers of Swedish
• All speakers of the Gothenburg dialect of Swedish
• All speakers of some sublanguage specific to an occupation (doctor, bus driver, bartender...)
The Turing Test and the human
language community
• The Turing test is based on language use
– it implies (at least) that human-level language
use requires human-level intelligence
• We could instead simply regard it as a test of the ability to use human language as a human does
• From this perspective, the force of the
Turing test is not that it shows
intelligence...
• ... but that it is a (very strict) test to
determine whether to take a computer into
a community of human language users
• When we meet someone for the first time, we
automatically make a first judgement as to
whether this is a person we can talk to
– We may extrapolate from physical appearance
– After a brief and perhaps very formulaic conversation we automatically assume that this person is a member of our language community
• We do not apply the Turing test to new
acquaintances...
– but if someone clearly breaks a rule, we get
suspicious
The Wittgenstein Test?
• A more ecologically valid version of the
Turing test would be to sneak a computer
into a conversation and see if anyone
notices
Categorization
• We constantly assign categories to one another on the basis of what we can directly observe (the surface)
– man / woman
– Swede / immigrant
– doctor / tram driver / ...
– (human / machine)
• These categories are often assumed to be associated with some kind of essence (a depth)
– From observations of the surface we make assumptions about what lies behind it
• These assignments are to a high degree intuitive and automatic
Mediation
• Long ago, communication was not (as clearly and as often) mediated
– Conversation face to face
• New communication technology has meant that a larger share of our communication is mediated by technology
• In mediated communication, the other is present only through his or her representation
• To the extent that the representation is linguistic (a text or a voice), linguistic behaviour will form the basis of our categorization
The Turing Test as a prototype of mediated categorization
• The Turing Test makes explicit the conditions of technologically mediated communication
• Mediated communication can be seen as dialogue between cybernetic systems
– (human + communication technology) talks with (human + communication technology)
• What does this shift mean for our habitual ways of categorizing one another?
Turing’s list of possible objections
• The theological objection
– Thinking is a function of man’s immortal soul, given by God to
humans but not to machines
• The “heads in the sand” objection
– “The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.”
• The mathematical objection
– based on Gödel’s incompleteness theorem & limitations of
discrete-state machines
• The argument from consciousness
– The only way one could be sure that a machine thinks is to be
that machine and to feel oneself thinking
• Argument from disability
– claims (usually unsupported) of the form ”a machine
can never do X”
• Lady Lovelace’s objection
– computers can only do what we tell them to
• The argument from continuity in the nervous system
– The human nervous system is not a discrete-state
machine, and a computer can thus not mimic its
behaviour
• The argument from informality of behaviour
• (Searle’s Chinese Room
– argument concerns strong AI
– purports to show that producing intelligent behaviour is not a sufficient condition for being a mind)
• An argument about whether a “Chinese room” (i.e. a computer) can understand language
• Displaying verbal behaviour is not a sufficient
condition for understanding a language
The Turing test and cultural change
(Collins)
• Now suppose, for argument's sake, that the test is long enough for the Chinese language to change while the questions are being asked, or that it is repeated again and again over a long period, and that the interrogators do not conspire to keep their questions within the bounds of the linguistic cross-section encapsulated in the original look-up tables.
• If the stock of look-up tables, etc., remains the same, The Room will become outdated; it will begin to fail to answer questions convincingly.
• Suppose instead that the look-up tables are continually updated by attendants.
• Some of the attendants will have to be in day-to-day contact with changing fashions in Chinese; they will have to share Chinese culture.
– Thus, somewhere in the mechanism there have to be people who do understand Chinese sufficiently well to know the difference between the Chinese equivalents of "to be or not to be" and "what will it be, my droogies" at the time that The Room is in operation.
– Note that the two types of room, synchronic and diachronic, are distinguishable given the right protocol.
– It is true that the person using the look-up tables in the diachronic room still does not understand Chinese, but among the attendants there must be some who do.
• Under the extended protocol, any Chinese Room that
passed the test would have to contain type 4 knowledge
[EXPLAIN] and I have argued that it is to be found in
those who update the look-up tables.
• It is these people who link the diachronic room into society, who make it a social entity.
• Under the extended protocol, the Turing Test becomes a
test of membership of social groups. It does this by
comparing the abilities of experimental object and control
in miniature social interactions with the interrogator.
Under this protocol, passing the test signifies social
intelligence or the possession of encultured knowledge.
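• Collins's point about look-up tables going stale can be illustrated with a toy sketch (mine, under the deliberately crude assumption that the Room is literally a dictionary of canned question-answer pairs compiled at one moment in time): the table answers fluently at that moment, and once the culture moves on it can only deflect.
```python
# A "synchronic room": a frozen table of canned answers compiled at time T (here, 2005).
room_2005 = {
    "what do you think of the new nokia phones?": "They are very popular this year.",
    "have you read the latest harry potter?": "Yes, I stayed up all night finishing it.",
}

def the_room(question):
    # No attendants updating the tables: whatever was compiled at T is all there is.
    return room_2005.get(question.lower(), "I am sorry, could you rephrase that?")

# Questions asked at time T look convincing...
print(the_room("Have you read the latest Harry Potter?"))
# ...but once the culture has moved on, the Room can only deflect.
print(the_room("What do you think of the new iPhone?"))
```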
A simplified Turing Test (Collins)
• Once one sees this point, it is possible to simplify the Turing Test greatly while still using it to check for embeddedness in society.
• The new test requires a determined judge, an intelligent and literate control who shares the broad cultural background of the judge, and the machine with which the control is to be compared.
• The judge provides both "Control" and "Machine" with copies of a few typed paragraphs (in a clear, machine-readable font) of somewhat mis-spelled and otherwise mucked-about English, which neither has seen before.
– It is important that the paragraphs are previously unseen, for it is easy to devise a program to transliterate an example once it has been thought through.
• Once presented, Control and Machine have, say, an hour to transliterate the passages into normal English.
– Machine will have the text presented to its scanner and its output will be a second text.
– Control will type his/her transliteration into a word processor to be printed out by the same printer as is used by Machine.
– The judge will then be given the printed texts and will have to work out which has been transliterated by Control and which by Machine.
• Here is a specimen of the sort of paragraph the
judge would present.
– mary: The next thing I want you to do is spell a word
that means a religious ceremony.
– john: You mean rite. Do you want me to spell it out
loud?
– mary: No, I want you to write it.
– john: I'm tired. All you ever want me to do is write,
write, write.
– mary: That's unfair, I just want you to write, write,
write.
– john: OK, I'll write, write.
– mary: Write.
• The same passage in its intended, repaired form:
• mary: The next thing I want you to do is spell a word that means a religious ceremony.
• john: You mean “rite”. Do you want me to spell it
out loud?
• mary: No, I want you to write it.
• john: I'm tired. All you ever want me to do is
write, write, write.
• mary: That's unfair, I just want you to write “rite”
right.
• john: OK, I'll write “rite”.
• mary: Right.
• The point of this simplified test
– the hard thing for a machine to do in a Turing Test is to demonstrate the skill of repairing typed English conversation
– the interactional stuff is mostly icing on the cake.
• The simplified test is designed to draw on all the culture-bound common sense needed to navigate the domain of error correction in printed English (see the sketch below).
• This is the only kind of skill that can be tested through the medium of the typed word, but it is quite sufficient, if the test is carefully designed, to enable us to tell the socialized from the unsocialized.
• It seems to me that if a machine could pass a carefully designed version of this little test, all the significant problems of artificial intelligence would have been solved; the rest would be research and development.
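• A small illustration (mine, not Collins's) of why this repair resists mechanization: every word in the mucked-about specimen is a correctly spelled English word, so a dictionary-based checker finds nothing to fix; deciding that the final "write, write, write" should read "write 'rite' right" takes the conversational context, not a word list.
```python
# Naive dictionary-based "repair": flag any token not found in the word list.
# Assumed toy word list; a real checker would load a full dictionary.
DICTIONARY = {
    "the", "next", "thing", "i", "want", "you", "to", "do", "is", "spell", "a",
    "word", "that", "means", "religious", "ceremony", "mean", "rite", "me", "it",
    "out", "loud", "no", "write", "i'm", "tired", "all", "ever", "that's",
    "unfair", "just", "ok", "i'll", "right", "mary", "john",
}

def flag_misspellings(text):
    tokens = [t.strip(".,:?!\"'").lower() for t in text.split()]
    return [t for t in tokens if t and t not in DICTIONARY]

specimen = "mary: That's unfair, I just want you to write, write, write."
print(flag_misspellings(specimen))  # -> [] : nothing to flag, yet the line needs repair
```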
Hayles again
If my nightmare is a culture inhabited by
posthumans who regard their bodies as fashion
accessories rather than the ground of being,
my dream is a version of the posthuman that
embraces the possibilities of information
technologies without being seduced by fantasies
of unlimited power and disembodied immortality,
that recognizes and celebrates finitude as a
condition of human being, and that understands
human life is embedded in a material world of
great complexity, one on which we depend for
our continued survival.