John Raub
November 22, 2005
PHIL 201
Syntax and Semantics in A.I.
Artificial intelligence and computational theories of mind are hotly debated topics. They
require a certain degree of abstract as well as technical thought, as they relate and compare
two very different subjects: humans and computers. John Searle uses his Chinese Room
argument as a means of showing that computational systems cannot possess semantics. In both
the case of the man in the room and that of the man in the robot, he exhibits basic computations
that affect the environment outside of the room, while showing that the environment outside of
the room does not affect the computations within. Those outside are tricked into believing that
inside the room is a person who actually understands the input that he is fed and can act with
intentionality, when in reality the person is simply following instructions with no idea of the
consequences or reasons for doing so. Margaret Boden discusses many of the flaws that she
finds in Searle's case. Boden's arguments rely heavily on logical fallacies and on the differing
beliefs of various schools of psychology and philosophy. Her critiques fail to recognize
several technical aspects of John Searle's Chinese Room argument, and therefore fail to fully
refute the main points behind it, namely that semantics are irrelevant to computational
systems and that intentionality is relevant.
Boden's first critique concerns Searle's claims against the use of semantics in
computations. Searle believes that computations, as understood in computer science, are
based solely on syntax. A computation depends on formal symbols and a formal rule set
to process, but the semantics of a particular computation are independent of the syntax and
have no bearing on whether the process succeeds or fails. Because of this semantic
independence, the syntax of a computer process is unable to grant meaning or intentionality
(Boden 379). Searle's viewpoints are generally correct. The purpose of a function in most
programs is to provide a general layout for a calculation that can be used regardless of the
semantic values. For example, a function, or computation, could hold the form “add number
one to number two and return the result”, but the values passed in do not give the function any
more or less meaning to the computer. The programmer could send 1 and 2, or 5 and 6, into the
function, but the statements “add 1 and 2 and return 3” and “add 5 and 6 and return 11” are
exactly the same from the computer's perspective.
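A minimal sketch in Python (the function name and values are purely illustrative, not drawn from either author) makes the point concrete: the same formal rule fires no matter what the numbers are supposed to mean.

    def add(number_one, number_two):
        # The rule is purely formal: combine the two symbols handed in and
        # return the result. Whether the numbers count apples, dollars, or
        # nothing at all never enters into the computation.
        return number_one + number_two

    add(1, 2)   # evaluates to 3
    add(5, 6)   # evaluates to 11

To the machine, both calls are the same sequence of operations; any meaning attached to the values lives entirely with the programmer.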
Searle uses the Chinese Room argument to show that semantics play no role in the
following of instructions. The Chinese Room simulates the ordinary use of a programming
function. First, a symbol is passed into the room, then a set of instructions is processed based
on the input, and finally a result is output. The book that defines the instructions may sometimes
require additional symbol manipulation, and other times it may not. So, depending on the
semantics of the input, more or less processing may be required, but at no point does the
computer, or Searle-in-the-room, realize or understand what the computations are about
(Boden 380). To the Chinese speakers outside of the room it does appear that Searle-in-the-room
understands Chinese, but this is not the case.
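A hypothetical sketch of the room's procedure, written in Python purely for illustration (the rule table and symbol pairings are invented placeholders, not anything Searle or Boden specify), shows how a sensible answer can be produced from a rule book alone.

    # A stand-in "rule book": each input symbol is paired with an output symbol.
    RULE_BOOK = {
        "你好": "你好吗",     # placeholder pairings; the operator never
        "谢谢": "不客气",     # needs to know what either side means
    }

    def chinese_room(symbol):
        # Searle-in-the-room only matches shapes against the book and copies
        # out the result; no step requires knowing what the symbols are about.
        return RULE_BOOK.get(symbol, "请再说一遍")

The output may convince the people outside, yet the procedure itself consults nothing but the table.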
The second argument that Boden critiques is in regards to intentionality and
understanding. Searle believes that the biochemical properties of the brain are critical to
providing causal powers. He also deems simple symbol manipulation and computation far from
enough to provide an environment for interpreting semantics. Further, Searle states
that genuinely interpreting the symbols it manipulates is impossible for a computer (Boden 380).
At the very basic level of the inner workings of computers this is true. Computation is carried out
simply by electronic signals passing through transistors that are set to a state of “on” or
“off”. On top of this basic layer, however, are many abstracted layers. When a programmer
writes an instruction to add two numbers, the code is translated and broken down into the
basic layer that the computer can execute. In this sense, no computer is ever aware
of the semantics of a program in the way that a human is. Using the rhetorical question,
“can a computer made out of old beer cans possibly understand?”, Searle shows that simple
intuition can provide insight into the possibility of mental functions being performed by inorganic
machines (Boden 381). While this example simplifies the matter, it also trivializes
the design of computers. Computers are intricately designed pieces of machinery meant to do
complex computations; beer cans are meant to do nothing but hold beer. Searle's example
seems to draw a parallel between computers and beer cans themselves, not a computer built
from, or housed inside of, beer cans.
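One hedged illustration of the layering described above, using Python's standard dis module (the add function itself is an invented example), shows a high-level instruction being broken down into the lower-level steps the machine actually performs, none of which carry the programmer's meaning.

    import dis

    def add(a, b):
        return a + b

    # Disassemble the function into the lower-level instructions the
    # interpreter steps through (loads, an add, a return); at this layer
    # there is only symbol shuffling, nothing resembling "knowing what 3 is".
    dis.dis(add)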
In regards to the biological necessity for intentionality, Boden analyzes Searle's
analogy between intentionality and photosynthesis. The main argument here is that “we not only
know that chlorophyll supports photosynthesis, we also understand how it does so (and why
various other chemical cannot)” (Boden 381). Searle's analogy is rather weak, but so is Boden's
argument against it. The architecture and design of computers are known, but the biological
architecture and states of the brain are not, yet the two are often compared in discussions of
artificial intelligence. Another of Boden's weak arguments comes from the definition she
provides for Searle's intentionality. She describes his definition as a psychological one that
relates the brain to the world through a relational proposition. She counters this merely by
stating that other definitions are logical ones (Boden 381). She leaves these logical definitions
undefined, simply implying that because the definition is debated, it is useless.
The relevance of biological necessity is further debated by questioning the relevance of
the specific chemicals that may produce intentionality in brains. Returning to Searle's
beer-can computer example, Boden explains that whether or not the materials proposed as
supports for intentionality, such as chemicals in the brain, beer cans, or silicon, are
scientifically sound, intuition plays no part in disproving anything. Furthermore, human
intuitions change as science progresses (Boden 382). These arguments are correct in their
accusations, but again the focus is misdirected. Searle uses intuition and the beer-can
computer example to supplement his arguments, not as their foundation.
Boden attempts to refute Searle's Chinese Room example as well as the background
information he uses, but she seems to miss some traits that humans have over robots.
Searle-in-the-room has no understanding of Chinese even if the Chinese speakers outside of the
room seem to think so. Boden gives the example of a robot in a restaurant equipped with a
camera for visual input that can interact through both language and movement and
appears to demonstrate human understanding (Boden 382). However, the perception that the
robot understands anything at all exists only through the eyes of the humans it interacts with.
The robot, even if equipped with visual input, is only receiving data and processing it as a set of
rules describes. Searle's Chinese Room is expanded through the use of a man inside of a robot,
instead of a room. This robot receives input through its visual sense and acts on the input
accordingly. Boden states that the robot would be able to “recognize raw bean sprouts and, if
the recipe requires it, toss them into a wok as well as the rest of us” (Boden 383). It is entirely
feasible to build a robot that could perform this action. All it involves is reading a
recipe, processing the data, recognizing the necessary ingredients, and acting on them. This
does not constitute human understanding or intentionality, however. There is no explanation for
what makes the robot start cooking in the first place. That trigger would have to be
programmed into the robot, so it acts not when humans need the food but rather at
an arbitrary time that has little to do with when humans are hungry. If a human were cooking
the food, they might decide to add a variable amount of bean sprouts to the recipe based on
their particular preference at that moment. A cook may be daring or conservative and try to
add a little more or less flavor. A robot could add more or fewer ingredients to a recipe
based on random and arbitrary data, but in no way can a robot predict the preference of a
human, since this is a task that most humans have trouble with. Since a robot cannot taste
the way a human can, it could not possibly understand human preferences or act on them with
intentionality.
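A speculative sketch of such a robot's control loop (the function, the recipe data, and the timing parameter are all invented for illustration) underscores the point: every "decision" the robot makes, including when to start cooking and how many bean sprouts to add, is either hard-coded or drawn from arbitrary data, never from hunger or taste.

    import random

    RECIPE = ["heat the wok", "add bean sprouts", "stir", "serve"]  # placeholder recipe

    def cook(programmed_hour, clock_hour):
        # The robot starts at a time written into its program, not because
        # anyone is hungry; outside that hour it simply does nothing.
        if clock_hour != programmed_hour:
            return []
        actions = []
        for step in RECIPE:
            if "bean sprouts" in step:
                # Any "daring" variation is just arbitrary data, not a
                # preference the robot holds or a taste it can predict.
                step = step + " (handfuls: " + str(random.randint(1, 3)) + ")"
            actions.append(step)
        return actions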
The gripe that Boden has with this view is that it does not follow what computational
psychologists believe. She argues that Searle gives credit to Searle-in-the-robot for carrying
out the functions that take place. According to her, most computational psychologists do not
believe the brain is responsible for intentionality; rather, they hold that functions like seeing and
understanding are intrinsic to a person as a whole, not just the brain (Boden 383). In other
words, if a person acts on something, credit is given to the person, not the person's brain.
Searle, in her eyes, creates a false connection between his views and those of computational
psychologists. It is impossible, however, to think of a robot's or a computer's casing as part of
the whole, much less an important part. Computer parts can be replaced, and an entire
computer or robot can be rebuilt inside of another casing. This is something that humans
cannot do. Humans are stuck with their bodies whereas robots are not. So it is impossible to
think of a robot in terms of its physical traits when what really makes it a robot, and gives it
“personality”, is the silicon innards and the programming applied to it.
Because Searle-in-the-robot must be able to read the English rule book, Boden
attempts to show that some form of understanding must exist for the robot and Searle-in-the-robot
to interact (Boden 384). A key step is omitted here, which heavily impacts the validity of
Boden's argument. All computers must operate on a basic language: machine or assembly
language. This is the basic language of “on” and “off”, and it is the basis of all computation. At
this level the language is as simple as following one instruction at a time, much like the rule
book. The robot acts as the program that is perceived, and the instruction book acts as a
compiler, translating the input into a language that Searle-in-the-robot can understand. A
computer does not understand what the word “go” means, just as Searle-in-the-robot does
not understand the Chinese symbols that the robot sees. Searle-in-the-robot does understand
another language, and the instructions are the intermediary between the outside world and
Searle-in-the-robot. Boden shows that Searle-in-the-robot must know and understand certain
English words to be able to act like a program, but since a programming language must exist
for any program to operate, Searle's focus remains on the understanding of Chinese (Boden 384).
No matter what language Searle-in-the-robot understands, the semantics of the language outside
are irrelevant as long as the instructions, the compiler, can translate them into something that
can be understood.
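A hedged sketch of that intermediary layer (the symbol-to-instruction mappings are invented placeholders) shows why the outside language is irrelevant: as long as a translation table exists between the external symbols and the internal instructions, the rule-follower never needs the outside semantics at all.

    # Placeholder "compiler": external symbols map to internal instructions
    # that Searle-in-the-robot (or the machine) can carry out blindly.
    TRANSLATION = {
        "左": "TURN_LEFT",
        "右": "TURN_RIGHT",
        "停": "STOP",
    }

    def follow(external_symbols):
        # Only the existence of the mapping matters; what the outside
        # symbols mean is never consulted.
        return [TRANSLATION[s] for s in external_symbols if s in TRANSLATION]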
In conclusion, Boden's arguments overlook a technical aspect that cannot be
ignored when discussing subjects like artificial intelligence and computational theories of
mind. Searle's Chinese Room argument requires some abstract thinking about the
architecture of computers and robots, an architecture that cannot be directly compared to that
of humans. This is not a flaw on Searle's part, however, as humans and robots are not nearly the
same things by nature. Searle uses the argument to illustrate how computer architecture relates
to human brain functions. Boden takes a more linguistic and logical approach in forming her
arguments, which leaves the technical side of the debate underdeveloped. In some regards her
arguments are clearly formed, but they miss some of the major points of the Chinese Room
argument.