Final Essay (Consciousness as Functionalism)

Clay Chastain
Dr. Brown
PHIL 3331
23 April 2008
Functionalism in Computer Consciousness, and the “Hard Problem”
FUNCTIONAL EQUIVALENTS OF COMPUTER CONSCIOUSNESS
Many contemporary philosophers of mind have offered theories that suggest
consciousness is a property unique to humans. While other species of animals
have consciousness (Nagel 219), the experience of these creatures is often seen as less
advanced and less aware than human experience, and as inherently different from human
consciousness. Because consciousness is commonly conceived as a distinctly higher-level
animal property divided into distinct physically objective and experientially
subjective parts, philosophers often overlook equivalents that could possibly be found
in man-made mechanical inventions, namely computers. As technology continues to
advance at a doubling rate through the years, it is foreseeable that computing devices
will become so powerful that they will have some form of consciousness. It is my belief
that the consciousness of a computer will be obviously different in its construction while
still producing an end result that is directly comparable to actual human
consciousness. Although there are many arguments against the ability of Artificial
Intelligence to become a reality on mankind's current path, these claims are non-essential
to the nature of consciousness in computers.
From a functionalist perspective, it seems completely feasible that a man-made,
hand-programmed "intelligence" of sorts can produce the likeness of having consciousness;
there is a functional equivalence between our own conscious behavior and the appearance
of conscious behavior displayed by a computer. It is reasonable to say that the means of
achieving the end will not be the same, but the end result will be. Moreover, the
behavior overtly shown in the physical world will be the closest emulation of
human behavior that humans can possibly achieve. In this sense, the outward behavior of
the computer system represents a strong tie to the functional equivalents of mental states,
which are a large part of conscious experience.
In Andy Clark’s Mindware, the author paraphrases Putnam’s functionalist views as an
“organization… [that is] a web of links between inputs, inner computation states, and
outputs … [that] fixes the shape and content of mental life… [regardless of] the building
material …" (14). While simplified, this definition points out the importance of how
something is organized and how it operates rather than what it is made of, instead of
treating each physical realization of a result as unique. As an example, there are many
ways to render characters into text: handwriting, typewriters, and computers (Class Notes).
In each of these examples, the means of accomplishing the goal are completely different
and have wide gaps in complexity. At the top of this ordering, the human's ability to think
and render characters into text using advanced muscle memory and experience is the most
complex in the series. Following this, the computer produces text through some sort of
interaction, be it live human interaction or indirect human action (i.e. programming).
Lastly, the typewriter is a simplistic machine which uses only very basic state changes
and inputs to render text on paper, produced solely through direct human interaction.
Each one of these examples is different in its method and complexity; there is seemingly
little similarity between a typewriter and a human being or a computer. However, their
function in accomplishing a task remains the same: to produce text. Each one of these
methods utilizes a variety of states that change one another to make the text, but
regardless of this, the overall function of each is the same. In this sense, functionalism
does not rely on construction, but rather on the visible end.
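To make this point more concrete, the short Python sketch below is a hypothetical illustration of my own (it is not drawn from Clark or the class notes, and the class names Typewriter and WordProcessor are invented): two "machines" whose inner construction is entirely different, yet whose web of inputs, inner states, and outputs produces the same visible end.

    class Typewriter:
        """A simple device: each keystroke directly stamps a character onto the page."""
        def __init__(self):
            self.page = []                 # inner state: characters already stamped

        def press(self, key):
            self.page.append(key)          # input leads directly to a state change

        def output(self):
            return "".join(self.page)      # visible end: the text on the page


    class WordProcessor:
        """A more complex device: keystrokes pass through a buffer and a separate
        rendering step, so the inner organization is entirely different."""
        def __init__(self):
            self.buffer = []

        def press(self, key):
            self.buffer.append(key)        # keystrokes are staged, not stamped

        def render(self):
            return "".join(self.buffer)    # visible end: the same text


    # Different construction, same function: both machines end with "hello".
    typewriter, word_processor = Typewriter(), WordProcessor()
    for character in "hello":
        typewriter.press(character)
        word_processor.press(character)
    assert typewriter.output() == word_processor.render() == "hello"

The final check passes precisely because, judged from the outside, nothing distinguishes the two devices; only their construction differs.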
Yet, it should be noted that the visible end in each of these examples is different in some
minor ways. The handwriting could be pencil on paper, the typewriter could be ink on
paper, and the computer could just be text on a monitor (i.e. data stored in binary).
However, we obviously regard all of these as the same thing, albeit slightly different in
presentation. Yet it would seem that people would not regard consciousness created by a
computer as the same as consciousness inherent in humans. For example, tests such as
the Turing Test, "a test . . . that involved a human interrogator trying to spot … whether a
hidden conversant was a human or a machine" (Clark 21), would probably not be accepted
as evidence of actual consciousness. Any machine that could consistently trick the
interrogator into thinking it was human would, as Turing felt, nevertheless be considered
an artificial intelligence. It is reasonable to say that these attempts at creating a machine
complex enough to have deep-level relationships with information are not conscious at all. However, just
as handwriting is one way of producing text, it would seem that the "consciousness" of
computers is likewise a way of producing thoughts similar to those of
humans. Although it is safe to assume the human mind is organized in a different way
than computers are, it is not feasible to rule out that the approach of high-level programming
to create an emulation of conscious experience and awareness could end up producing
a functional equivalent to the abilities of the brain. But whether or not this
can be considered conscious experience depends largely upon the approach used to
define it.
DEFINING CONSCIOUSNESS
Philosophers such as David Chalmers, in his article "Consciousness and Its Place in
Nature," question what consciousness really may be. He divides
consciousness into two distinct sets of problems. The first part of consciousness, he says,
is “the ability to discriminate stimuli, or to report information, or to monitor internal
states, or to control behavior” (247). This problem is described as the “easy” problem
because it is an objective portion of life which is measurable and testable (also known as
a-consciousness), and the behavioral results are usually clear products of these features of
consciousness. Yet, he says that the “hard problem” is one of subjective experience, or
the raw feels (or qualia, or p-consciousness) that humans have when interacting in the
physical world. It is essentially impossible (or very "hard") to explain why and how
conscious experience interacts with the functions in the "easy problem" category.
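To see why this first set of problems is called "easy," the following Python sketch is a hypothetical illustration of my own (the Agent class and its temperature threshold are invented): each of the functions Chalmers lists can be written down as an objective, testable procedure, while nothing in the code says anything about what the discrimination feels like.

    class Agent:
        """A toy system exhibiting the 'easy problem' functions listed above."""
        def __init__(self):
            self.internal_temperature = 37.0          # an internal state to monitor

        def discriminate(self, stimulus):
            """Discriminate a stimulus: classify a reading as hot or cold."""
            return "hot" if stimulus > 30 else "cold"

        def report(self):
            """Report information about an internal state."""
            return f"internal temperature is {self.internal_temperature}"

        def control(self, stimulus):
            """Control behavior based on the discrimination."""
            return "withdraw" if self.discriminate(stimulus) == "hot" else "stay"


    agent = Agent()
    print(agent.discriminate(45.0))   # hot
    print(agent.report())             # internal temperature is 37.0
    print(agent.control(45.0))        # withdraw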
ADDRESSING THE “HARD PROBLEM”
One of the more striking examples is his use of zombies, beings which are physically
identical to humans and perform the same functions, but which lack conscious
experience (249). There would be no difference between the behaviors of either
group of beings, and only some omniscient third party would be able to know the
difference. He says that "there is little reason for zombies to exist in the natural
world … [zombies are] at least conceivable" (249) in the sense that humans can imagine
such a world; and if it can be imagined, it is metaphysically possible to have a universe in
which this situation is actual.
Unfortunately, this view of consciousness seems to be lacking a clear basis in reality.
While it is obviously conceivable that zombies exist, humans do not know the underlying
reasons that cause conscious experience to occur. Perhaps it is a fundamental law that
all physical traits produce mental states and conscious experience, in which case it
logically follows that all mental traits come from the physical world. In this sense, there
would be no "hard problem" because the physical is simply the reason for consciousness.
In my opinion, zombies are indeed conceivable, but the possibility of their actually
existing in any universe is low. Similarly, it is conceivable that there
could be actual zombies of the shambling, flesh-eating variation who possess identical
physical traits but lack consciousness. It is also conceivable that there are beings billions
of galaxies away who are exactly identical to us (i.e. Stargate). Yet, these are science
fiction possibilities and seem completely unlikely. To me, both of these
examples I have given are conceivable but ridiculously far-fetched. And, even if such a
universe existed where beings had no consciousness, we would never be able to know. In
fact, it is possible that we could be the zombies that Chalmers mentions! In that case, our
interpretation of conscious experience would not be conscious experience at all. The problem
of determining whether conscious experience must involve this hard problem is largely
unclear because of the impossibility of knowing for certain whether raw feels are actually
subjective experiences or calculated objective experiences.
Chalmers also makes other claims to support the idea behind raw feels, most importantly
the knowledge argument. This essentially states that “there are facts about consciousness
that are not deducible from physical facts” (249). The argument goes on to say that even
with all of the knowledge of the physical world, the conscious world would never be
completely known. No matter how advanced our abilities at reconstructing a system, it
would be impossible to reconstruct consciousness. In an elaboration, Thomas Nagel's
article "What Is It Like to Be a Bat?" examines the idea that it is impossible to truly know
what it is like to be a bat despite our similarities as mammals (220). Even with all of the
information about how a bat is constructed and how they live, “[we] are restricted to the
resources of [our] own mind, and those resources are inadequate to the task" (220). Nagel
continues to argue this point throughout his article, emphasizing the subjective nature of
experience.
Despite this seemingly conclusive example of the incompatibilities between the human
mind and the mind of a bat, I do not believe that, even with all the facts about bats, we
could never know what it is like to be a bat. Hypothetically, there could exist a computer
system which emulates the world of a bat to a nearly perfect level. As an example from
existing computers, x86-based chips (Intel processors, for example) are able, after being
programmed with the appropriate translation code, to interpret and display data from a
variety of other formats. While these emulations are often slower and less stable, in theory
it is possible to create a perfect emulation, which would result in the ability to show what
it is like to be this other form of computing. Similarly, it is possible that in the future there
will be a system that can properly interpret the raw physical data of a bat and display
what it is like to be a bat by interfacing the brain with the computer. Though this day may
never come, it is more than conceivable, as it is backed up by existing trends on a much
smaller scale. So, there is reasonable support for the claim that such an emulation system
is possible. As of yet, the human race has not produced any conclusive evidence one way
or the other, leaving me to side with the open-ended chance that it could be possible.
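As a deliberately tiny analogue of this emulation idea, the following Python sketch is my own invention (the three-instruction "foreign" machine is hypothetical and nothing like a real processor): a host program reproduces, step by step, the behavior of a machine built on completely different principles.

    def emulate(program):
        """Interpret instructions written for an imaginary machine on the host machine."""
        accumulator = 0
        for opcode, argument in program:
            if opcode == "LOAD":
                accumulator = argument
            elif opcode == "ADD":
                accumulator += argument
            elif opcode == "MUL":
                accumulator *= argument
            else:
                raise ValueError(f"unknown opcode: {opcode}")
        return accumulator


    # A "foreign" program the host never runs natively, yet whose result the host
    # reproduces exactly by translating each foreign operation into its own.
    foreign_program = [("LOAD", 2), ("ADD", 3), ("MUL", 4)]
    print(emulate(foreign_program))   # 20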
Ned Block, in his article "Concepts of Consciousness," goes on to claim that the entire
debate between objective data and subjective experience is simply "confusing
P-consciousness [raw feels] with something else" (208). He reasons that most of the time
that people reference something as P-consciousness, they are actually attempting to
describe something that is more A-conscious (the objective data). As a more extreme
example of our supposed confusion between these two realms, Daniel Dennett's
argument put forth in "Quining Qualia" states that "conscious experience has no properties
that are special in any of the ways qualia have been supposed to be special” (227).
Essentially, this argument means that raw feels are not “special” in the sense that they are
merely products of misinterpretations; p-consciousness can be more adequately described
by actual objective descriptors instead of our subjective experience. He reasons that these
qualia are only the “last ditch defense of the inwardness and elusiveness of our minds”
(229) and are created to attempt to explain our inability to connect what we perceive as
experience to the physical features that create it.
Due to these objections, I find that the physical world is the only world we can
know on an objective level; in this way, the easy problem is the only problem
philosophers need to tackle to explain consciousness. Similarly, Clark’s conclusion to the
consciousness section in Mindware is more concerned with figuring out if there actually
is a “hard problem” to address. He aptly names this the “Meta-hard problem” because he
finds it to be the “hardest and most important of them all” (187). Yet, his own views are
close to my own: “[Clark] is not persuaded that explaining [p-consciousness] presents
any fundamentally different kind of problem" (187). Until we can reach a firmer
conclusion, it would probably be more beneficial for consciousness to be defined in terms
of something more easily explained by the physical world.
ADDRESSING THE “EASY PROBLEM”
With this “hard problem” stance, the easy problem of consciousness, especially in terms
of functional equivalents, seems (and rightly so) easy. If the easy problem encompasses
all objective standards, such as reactions to stimuli, it would seem clear that Artificial
Intelligence would be able to fulfill the criteria with ease. Even decades ago, Clark notes,
computers were able to make decent attempts at problem solving, such as
Schank's 1975 test of using a script to draw logically sound conclusions (31).
Of course, while the system outwardly displays logical behavior, the question arises of
whether or not the computer is actually able to think and understand what it is processing.
In the classic example by John Searle called the "Chinese Room," he argues that if
a person were trapped in a room alone and fed symbols, the
person's response symbols would be based on a codebook rather than on understanding the
meaning of the symbols. The person would be given a Chinese character, look up
the correct response, and send the correct symbol back. However, the person would not
actually have to know Chinese in order to complete the objectives. Clark summarizes this
by saying that “[r]eal understanding requires … certain actual (though still largely
unknown) physical properties, instantiated in biological brains” (34). Similarly, Hubert
Dreyfus’ theories on the nature of Artificial Intelligence state that there is a
misconception in the way we assume computers work; instead, he says that no amount of
symbol and pattern processing can lead to any sort of actual understanding. The mind is a
completely different system of organization, one that learns rather than merely processes
symbols (C. Chastain 3).
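The heart of Searle's thought experiment can be pictured with a very small Python sketch of my own (the symbols and codebook entries are invented placeholders, not anything from Searle or Clark): correct responses are produced by pure lookup, with no understanding anywhere in the process.

    # The symbols and the rulebook entries below are invented placeholders.
    codebook = {
        "symbol_A": "symbol_X",    # rule: if handed A, pass back X
        "symbol_B": "symbol_Y",    # rule: if handed B, pass back Y
    }

    def room_reply(incoming_symbol):
        """Produce the 'correct' response purely by looking it up in the codebook."""
        return codebook.get(incoming_symbol, "symbol_unknown")

    # The reply is correct by the rules, yet nothing in this process knows what
    # any of the symbols mean.
    print(room_reply("symbol_A"))   # symbol_X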
Even though the seemingly obvious answer to whether or not computers are able to
understand what they process is that (as of now) they do not, their apparent
behavior is such that they appear to understand. By adding more and more code (as
Clark describes it on page 37), it would seem that programmers are only building systems
which do more complex symbol processing jobs. However, this causes a new question to
arise: what if the symbol processing became so complex that the computers had the
ability to show perfect human behaviors? At this point, would this be enough to claim the
computers were conscious? My personal opinion is that this would be considered
consciousness because of the level of complexity of the A-conscious links (i.e. the
ability to solve problems based on a significant range of previous experiences). Perhaps a
system would even be able to interpret these physical features in a more subjective way
similar to how humans attempt to describe experiences as raw feels. This idea does not
seem far-fetched, considering the conceivability and high likelihood that some computer
will be made to emulate human functions exactly (as evidenced by modern advances in
robotics and software). It would be logical to call these behaviors consciousness if the
system were so complex that it could formulate ideas based on past experiences in a way
similar, if not equivalent, to human mind processing. Even though the system was
designed and programmed by humans, the emulation would be so accurate that it would
be a practical necessity to give its ability to process as we do a human name:
consciousness. Again, this concept relies on the functional equivalents that
consciousness may have.
HUMAN RELUCTANCE TO DEFINE FUNCTIONAL EQUIVALENTS
As an anecdote, Clark's inclusion of the "A Diversion" skit (25) highlights the humorous
possibility that machine aliens who visit Earth are skeptical that human (or meat)
consciousness is functionally equivalent to their own. In the end, the aliens leave without
addressing the subject. While this is just a skit, it serves as an analogy of how humans
will likely perceive computer consciousness when we first encounter a convincing model
of it. Instead of embracing these computers as actual conscious intelligence, we will
probably claim that they are merely mimicking human behavior. But, at the point where
computers are making rational decisions and learning from experience in everyday life, it
seems important that humans accept the functional equivalence of what we have created,
even if the systems that created the function are inherently different. Of course, because
humans were responsible for the creation of these computers, it could be argued that our
knowledge and insight into their “consciousness” makes an Artificial Intelligence less
natural (and indeed artificial regardless of its advancement). Ironically, humans have
little knowledge about their own consciousness; it is possible that we really are nothing
more than highly advanced meat machines, despite our inclination to think otherwise. As
of now, no one would argue that computers are capable of higher-level thoughts and
connections on par with humans, but the future may lead to a situation in which there is no
perceivable difference between the two.
SUMMATION
Overall, it would seem that there is no "hard problem" of subjective experience on which
consciousness is supposed to be pinned. There is a high likelihood that the physical
world represents the entire world that is, and as a result, the physical world produces the
mental world that is responsible for developing the idea of raw feels or qualia. In this
sense, the possibility of computer consciousness is real and viable. For now, consciousness
for computers is more of a theory to be realized in the future. Yet, it is important to look into
the matter as computers may eventually have such a degree of power and complexity that
they are functional equivalents to the conscious behavior of humans. While the means of
reaching the ends of consciousness may be very different, it is likely that the end result
will be closely modeled after human consciousness.
Works Cited
Block, Ned. “Concepts of Consciousness”. Philosophy of Mind: Classical and
Contemporary Readings. Ed. David J. Chalmers. New York: Oxford University
Press, 2001. 206-218.
Chalmers, David J. “Consciousness and Its Place in Nature”. Philosophy of Mind:
Classical and Contemporary Readings. Ed. David J. Chalmers. New York: Oxford
University Press, 2001. 247-272.
Chastain, Clay. “Literary Review”. Unpublished Paper at Trinity University, 2008.
Clark, Andy. Mindware: An Introduction to the Philosophy of Cognitive Science. New
York: Oxford University Press, 2001.
Dennett, Daniel C. “Quining Qualia”. Philosophy of Mind: Classical and Contemporary
Readings. Ed. David J. Chalmers. New York: Oxford University Press, 2001.
226-246.
Nagel, Thomas. “What Is It Like to Be a Bat?”. Philosophy of Mind: Classical and
Contemporary Readings. Ed. David J. Chalmers. New York: Oxford University
Press, 2001. 219-225.
Zalta, Edward N., Ed. “Consciousness”. 16 Aug. 2004. Stanford Encyclopedia of
Philosophy. 18 Apr. 2008. Stanford University.
<http://plato.stanford.edu/entries/consciousness/>.