ROBOETHICS
JAMES JONES
APRIL 24, 2012
Machine ethics or machine morality is the study of the design and building of moral
machines also known as AMAs (Artificial Moral Agents). These robots (or computers with
sufficient artificial intelligence) would be capable of acting morally or at least appearing that
way. An understanding of what is meant by “artificial intelligence” is an important first step to
the discussion of the possibility of AI ethics.
What is Artificial Intelligence?
Before defining artificial intelligence, it is necessary to define, and to reach consensus on, what it means to be intelligent. One definition describes artificial intelligence as "the area of computer science focusing on creating machines that can engage in behaviors that humans consider intelligent." ("Artificial Intelligence") But what exactly would humans consider intelligent?
Understanding the difference between a robot with the appearance of intelligence and an actual thinking machine is a crucial first step. Intelligent organisms or other intelligent entities display at least some degree of skill in solving complex problems, making generalizations, and recognizing relationships; but what exactly does a complex problem consist of, and what exactly constitutes a relationship? Even if the community of experts in the field reaches consensus on these requirements, to what degree can it be said that a machine has comprehended and digested information? These questions have no clearly defined answers at the moment, but there may be an easier way to put our automatons to the test.
Possibly the simplest way of quantifying machine intelligence is to gauge how well robots mimic the behavior of the human brain, and in 1950 that is exactly what the British computer scientist Alan Turing proposed. Turing suggested that "a computer would deserve to be called intelligent if it could deceive a human into believing that it was human." (The Turing Test) Turing thought the question of whether machines can think was too meaningless to deserve discussion; he instead asked of our machines, how well can you play the "imitation game"? In the years since his death, this test has proved highly influential in the field of artificial intelligence, but it has also been broadly criticized.
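To make the setup concrete, the imitation game can be sketched as a simple protocol: a judge exchanges messages with a hidden respondent and then guesses whether it was human. The Python sketch below is purely illustrative; the Respondent classes and the canned machine replies are invented for this example and merely stand in for a person at a keyboard and a conversational program.

```python
import random

class Respondent:
    """Either a human at a keyboard or a machine; the judge cannot see which."""
    def reply(self, message: str) -> str:
        raise NotImplementedError

class HumanRespondent(Respondent):
    def reply(self, message: str) -> str:
        # A real person types the answer.
        return input(f"[judge asks] {message}\n[your reply] ")

class MachineRespondent(Respondent):
    def reply(self, message: str) -> str:
        # A trivial stand-in for a conversational program.
        return random.choice(["Interesting question.",
                              "Why do you ask?",
                              "I would rather hear what you think."])

def imitation_game(questions, respondent: Respondent) -> str:
    """Run one round: the judge asks questions, then renders a verdict."""
    for q in questions:
        print(f"Q: {q}")
        print(f"A: {respondent.reply(q)}")
    return input("Judge's verdict -- human or machine? ")

if __name__ == "__main__":
    hidden = random.choice([HumanRespondent(), MachineRespondent()])
    verdict = imitation_game(["Describe your earliest memory."], hidden)
    actual = "human" if isinstance(hidden, HumanRespondent) else "machine"
    print(f"You said {verdict}; it was actually a {actual}.")
```

The machine "passes" to the degree that judges guess wrong about as often as they would against a real person, which is exactly the behavioral standard Turing proposed in place of asking whether the machine thinks.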
Implications of Machine Morality
As the field of robotics advances we will see robots becoming faster, stronger, and more powerful, and, with the advance of artificial intelligence, they could ultimately develop a complex understanding of the environment around them. Assuming that true artificial intelligence is possible and that machines can eventually learn to think for themselves, it will become increasingly important for professionals in computer science and robotics to consider how morality might be programmed into our intelligent machines. If we can determine that it is possible for an artificial morality (or perhaps a "real" morality) to be programmed or hardwired into our robots and computers, the question then widens into the realm of philosophy: whose morality should be impressed upon these machines, and in how much detail should these ethical rules be specified?
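To make the distinction between "programmed" and "hardwired" morality concrete, consider a minimal sketch; the rule names and configuration format here are invented for illustration. A hardwired rule is a constant baked into the machine's code, while a programmed morality is a rule set supplied by whoever configures the machine, which makes the "whose morality?" question quite literal.

```python
# Hardwired: a veto baked into the code itself; changing it means
# rebuilding the machine.
HARDWIRED_VETOES = frozenset({"harm_human"})

def load_programmed_rules(config: dict) -> set[str]:
    """Programmed: additional vetoes supplied by whoever configures the robot.
    This is where 'whose morality?' becomes an engineering decision."""
    return set(config.get("forbidden_actions", []))

def forbidden(action: str, programmed: set[str]) -> bool:
    return action in HARDWIRED_VETOES or action in programmed

# Example: two operators could ship the same robot with different
# programmed ethics; only the hardwired veto is common to both.
operator_cfg = {"forbidden_actions": ["deceive_user"]}
rules = load_programmed_rules(operator_cfg)
assert forbidden("harm_human", rules)    # hardwired veto always applies
assert forbidden("deceive_user", rules)  # programmed veto, for this operator
assert not forbidden("refuse_order", rules)
```

Under this framing, the philosophical question of whose ethics a robot carries becomes, in part, a question of who controls its configuration.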
Asimov and the Three Laws of Robotics
The science-fiction short story "Liar!", written by Isaac Asimov in 1941, contains the first known use of the word "robotics"; it was later collected, along with related stories, in Asimov's 1950 volume I, Robot. These stories have been incredibly influential in the worlds of science fiction and technology for the last sixty years and are considered by many to have done more to popularize robots than any other body of work. Beyond imparting the word "robotics" itself, Asimov contributes something he calls the Three Laws of Robotics, which he uses throughout the stories as a safeguard against unwanted robot behavior.
Isaac Asimov's "Three Laws of Robotics"
1.) A robot may not injure a human being or, through inaction, allow a human being to come
to harm.
2.) A robot must obey orders given it by human beings except where such orders would
conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with
the First or Second Law.
(Isaac Asimov's "Three Laws of Robotics")
This safety feature is hardwired into nearly all of Asimov's robots throughout the stories and cannot be bypassed. Asimov continually uses the stories as a backdrop against which to test these laws to the point of failure, which at times gave him reason to modify them and eventually to add a fourth law. Asimov placed this fourth law before the other three and, appropriately, named it the Zeroth Law, which reads:
0.) A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
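Read as a specification, the four laws form a strict priority ordering: a robot weighing its options should prefer whichever action violates only the lowest-ranked law, and a lower law can never override a higher one. The sketch below is one hypothetical way to encode that ordering; the boolean judgments (harms_humanity, harms_human, and so on) are placeholders for assessments that a real robot would find extraordinarily hard to compute.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action plus (hypothetical) judgments of its consequences."""
    name: str
    harms_humanity: bool = False  # Zeroth Law
    harms_human: bool = False     # First Law
    disobeys_order: bool = False  # Second Law
    destroys_self: bool = False   # Third Law

def choose_action(candidates: list[Action]) -> Action:
    """Pick the action whose worst violation sits lowest in the hierarchy.

    Sorting on the tuple encodes the priority ordering lexicographically:
    violating the Third Law is preferable to violating the Second, and so
    on up to the Zeroth, which outranks everything.
    """
    return min(candidates, key=lambda a: (a.harms_humanity, a.harms_human,
                                          a.disobeys_order, a.destroys_self))

# Example: standing by lets a human be hurt (a First Law violation via
# inaction), while intervening destroys the robot (a Third Law violation).
# The hierarchy says the robot must sacrifice itself.
stand_by = Action("stand by", harms_human=True)
intervene = Action("intervene", destroys_self=True)
assert choose_action([stand_by, intervene]).name == "intervene"
```

The ordering itself is trivial to implement; as Asimov's plots repeatedly show, the drama lies in how hard those underlying judgments are to make.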
The effect that these laws would have on ethical robotic behavior is somewhat obvious,
but perhaps even more interesting is Asimov’s rationalization of the laws. In one essay Asimov
points out that versions of the Laws are implicit in the construction of nearly all tools:
1.) A tool must not be unsafe to use. Hammers have handles, screwdrivers have hilts.
2.) A tool must perform its function efficiently unless this would harm the user.
3.) A tool must remain intact during its use unless its destruction is required for its use or
for safety.
The acknowledgment that robots would function in much the same way our tools do is a somewhat surprising conclusion to draw from examining the Three Laws. When weighing the moral implications of either shackling our constructs with hardwired laws or releasing intelligent machines to think for themselves, it is important to be clear about our perspective: we can regard artificial intelligences as individuals whom it is in our best interest to help prosper, or we can adopt a viewpoint that more closely resembles the relationship between a man and his screwdriver.
Lessons from Fiction
There are many other examples of machine morality in popular culture, such as The Matrix (1999), in which hostile machines are the dominant species on Earth and humans are farmed like cattle to fuel them. HAL 9000, in Arthur C. Clarke's Space Odyssey series, is a computer built to assist astronauts that eventually turns on its human crew and becomes an unsettlingly cold yet well-mannered villain.
Futurama's depiction of robots is unique, and the series addresses many major roboethics issues through unconventional characters. Hedonism Bot is a robot that enjoys the more pleasurable things in life, while Roberto is a clinically insane robot who robs banks. Many episodes feature the Robot Devil, who imprisons and tortures (mostly with rhyming and singing) other robots in a fiery underground lair. The most important character in the series, Bender, loves to smoke, drink, steal, and have sex with hooker-bots. The idea of robots serving some sexual purpose in the future has appeared in several other media examples, but Futurama's approach to the subject is far more unconventional: instead of being programmed to fulfill human desires, the robots in Futurama tend to prefer sex with other robots or with simple home appliances.
Final Thoughts
In conclusion, it seems that the way robots and artificial intelligences are portrayed in popular culture plays a large role in how we as humans look at them. This perception, informed at times by fiction but also by our interactions with early semi-intelligent machines, will guide the advancement and morality of future automatons toward one of two ends: extremely complex tools, or artificial individuals with intelligences rivaling our own.
References
Goertzel, Ben. "A Cosmist Manifesto." A Cosmist Manifesto. 1 June 2010. Web. 24 Apr. 2012.
<http://cosmistmanifesto.blogspot.com/>.
"Artificial Intelligence." ThinkQuest. Oracle Foundation. Web. 24 Apr. 2012.
<http://library.thinkquest.org/2705/>.
"Can't Get Enough Futurama: Information: Character Bios." Can't Get Enough Futurama: Futurama
News. Web. 24 Apr. 2012. <http://www.gotfuturama.com/Information/CharacterBios/>.
"Ethics of Artificial Intelligence." Wikipedia. Wikimedia Foundation, 23 Apr. 2012. Web. 24 Apr. 2012.
<http://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence>.
"Three Laws of Robotics." Wikipedia. Wikimedia Foundation, 18 Apr. 2012. Web. 24 Apr. 2012.
<http://en.wikipedia.org/wiki/Three_Laws_of_Robotics>.
"The Turing Test." (Stanford Encyclopedia of Philosophy). Web. 24 Apr. 2012.
<http://plato.stanford.edu/entries/turing-test/>.
"Isaac Asimov's "Three Laws of Robotics"" Auburn University. 1 Jan. 2001. Web. 24 Apr. 2012.
<http://www.auburn.edu/~vestmon/robotics.html>.