Daniel Iverson - Westminster College

Iverson 1
Daniel Iverson
Professor Goldman
English 110
October 28, 2004
Essay #4, Rough Draft
Several decades ago, scientists began constructing rudimentary computers, and with this
development came the debate over artificial intelligence. Technology has advanced considerably
since then, but the basic arguments remain the same. Those who debate the matter subscribe to
many contrasting definitions of the terms involved, but most agree that artificial intelligence
itself is simply a machine’s ability to perform tasks normally thought to require intelligence. The
questions many philosophers have raised relate to the nature of this technology. Does it indicate
consciousness or self-awareness? Is it indeed artificial, or could it be actual intelligence?
This discussion draws on articles by four qualified authors chosen to represent different
viewpoints. The information they provide is somewhat dated, as none of it was published more
recently than 1982, but since many of the authors spoke hypothetically about the issue,
their fundamental claims are still valid. It may be impossible ever to determine with certainty
who was correct, but the skeptical perspective ultimately seems to be the sounder one.
Alan Turing, though his perspective has grown dated since his death more than fifty years
ago, is still one of the most respected authorities on artificial intelligence. He
is best known for what he titled “the imitation game,” which, when applied to computers, came
to be known as the “Turing Test.” The imitation game is an exercise in which three individuals
are separated and one is given the challenge of identifying, through interrogation, which of the
other two is male and which is female. The male’s purpose is to mislead this person into
choosing incorrectly, while the female answers truthfully to help the interrogator decide. The question Turing raises is
whether or not the interrogator would reach the wrong decision as frequently if a computer
assumed the role of the male. Even back in the 1950s, he believed this would one day be
possible. No computer program has been able to pass the test thus far, however.
Since it was first proposed, the Turing Test has become the definitive indicator of a
computer’s intelligence. Furthermore, Turing’s argument depends entirely upon the test. If it is
flawed, then his argument is flawed as well. Few have questioned its validity, but does a
computer’s ability to mislead a human really show it is intelligent, or does it merely demonstrate
that humans have programmed enough possible responses to allow it to pass the test?
William Lycan is a philosophy instructor at the University of North Carolina at Chapel
Hill, and he endorses a similar belief about the nature of artificial intelligence by supporting the
notion of machine consciousness. He defines intelligence as a combination of flexibility and
responsiveness to contingencies, or more simply, a sensitivity to information. He believes that
by this definition, there should be no question as to whether computers are capable of
intelligence because they are constructed specifically to receive, store, and process information.
Lycan’s most convincing claims come from the hypothetical examples he provides. In
the Henrietta example, Lycan asks his readers to imagine a process by which a regular woman
has her human components systematically exchanged with mechanical replacements, eventually
including a computerized brain capable of retaining her personality, perceptual acuity, and so
forth. He then asks the question, “Did she lose consciousness at some point during the sequence
of operations, despite her continuing to behave and respond normally? When?” (Lycan 99).
The claims Lycan makes are credible in the sense that they function within their author’s
predefined parameters, but a person who creates his or her own definition and then forms an
argument around it has not actually proven much at all. In other words, his claims would not
necessarily be valid if he had used any other definition of intelligence to present his case.
Like Turing, John Searle created a test to prove his point and based the majority of his
argument upon it. This experiment has come to be known as “the Chinese room experiment.” It
is an exercise in which a person is locked in a room and presented with three sets of Chinese
writing along with instructions written in English that explain the correlations among the
characters in the writing. It is assumed, of course, that the person participating in the study
knows no Chinese but understands English well. With the instructions, the person is able to
answer questions about the Chinese writing simply by following the rules on how the characters
relate rather than actually understanding what they say.
The experiment is intended to prove that while a computer may be able to receive information
and a program allows it to answer questions asked about it, this does not necessarily constitute
understanding. If it did understand, the program would not even be necessary.
Searle also distinguishes between strong and weak artificial
intelligence. According to Searle, weak AI characterizes the computer as a powerful tool used to
study the human mind. It allows scientists to test hypotheses more rigorously and precisely, but
is still just a tool. In contrast, strong AI identifies a computer as a mind itself, not just a tool used
to study one. Proponents of strong AI claim their programs understand information presented to
them because they are able to answer questions about it even when the answers are implicit.
They say observing a computer’s ability to do this explains the human capacity to do it as well.
Again, Searle uses his Chinese room example to attempt to disprove these claims.
Morton Hunt is a prolific writer on the subject of psychology as it relates to science. His
stand on artificial intelligence is that there are certain qualities a human mind has that a machine
simply cannot replicate, primarily consciousness and self-awareness. He is concerned with being
conscious of consciousness, which he best summarizes with the statement “there is no evidence
that any computer has ever realized it is itself” (Hunt 104). He says that until a computer is
capable of possessing such a quality, or at least demonstrating it, machines cannot be considered
conscious. Along with the aforementioned claim, he discusses the issue of cognitive history. He
argues that a computer is not aware of its own history or the fact it represents anything external.
Hunt states that self-awareness is, to humans, the essence of what it means to be alive.
Given the incredible developments that can occur in only a short time, it certainly seems
reasonable that technology will one day elevate the capabilities of computers to match or even
surpass human abilities. The debate is still hypothetical, though: because this technology does
not presently exist, any conclusions drawn about it remain speculative. Right now, though
computers are becoming increasingly complex, the fact that they are still constructed by humans
and programmed with human inclinations seems to override any argument to the contrary. No
computer has reproduced itself,
nor has any shown evidence of intentions of its own. Even if a computer responds to questions in what
humans would deem an intelligent way, its answers are still based upon conditions determined
by humans. Though the arguments that machines are conscious, self-aware, or intelligent are
well constructed, they still rest on assumptions and are ultimately insufficient to prove
much of anything. Even if a person were to ask a computer whether it is conscious and it
responded that it was, it would only be doing so because a human told it to.
Works Cited
Hunt, Morton. “What the Human Mind Can Do that the Computer Can’t.” The Canon and Its
Critics. Eds. Todd M. Furman and Mitchell Avila. Mountain View, CA: Mayfield,
2000. 102-107.
Lycan, William. “Machine Consciousness.” The Canon and Its Critics. Eds. Todd M. Furman
and Mitchell Avila. Mountain View, CA: Mayfield, 2000. 97-102.
Searle, John. “Minds, Brains, and Programs.” The Canon and Its Critics. Eds. Todd M. Furman
and Mitchell Avila. Mountain View, CA: Mayfield, 2000. 108-114.
Turing, A. M. “Computing Machinery and Intelligence.” The Canon and Its Critics. Eds. Todd
M. Furman and Mitchell Avila. Mountain View, CA: Mayfield, 2000. 89-96.