
Machine Ethics
In the World of Tomorrow
Ben Gooding
4/3/2012
Summary: Should the focus of machine ethics be on the development of full ethical
agents? Isaac Asimov believed the focus should be on the ethics themselves. James
Moor believes the focus should be on the implementation of ethics rather than on the
ethics themselves. Michael Anderson and Susan Anderson believe the focus should be
on the ethical decision making of the machines. Bruce McLaren believes the focus
should be on limited implementation, with the final ethical decision left in the hands
of the human user. I mostly agree with the fundamentals set forth by McLaren, with
somewhat more focus on the overall implementation of limited systems.
Before beginning to discuss the opinions of Michael Anderson, Susan Anderson,
James Moor, Bruce McLaren, and myself, a clear understanding of what machine ethics
is must be established. Isn't machine ethics the same as computer ethics? Well, no, not
exactly. Computer ethics for software engineers, as set forth by the Association for
Computing Machinery (ACM) and the IEEE, pertains to how computer professionals
should conduct themselves within the field in ways that affect those around them. There
are eight points to the software engineering code of ethics: (1) public – software
engineers shall act consistently with the public interest, (2) client and employer –
software engineers shall act in a manner that is in the best interests of their client and
employer, consistent with the public interest, (3) product – software engineers shall
ensure that their products and related modifications meet the highest professional
standards possible, (4) judgment – software engineers shall maintain integrity and
independence in their professional judgment, (5) management – software engineering
managers and leaders shall subscribe to and promote an ethical approach to the
management of software development and maintenance, (6) profession – software
engineers shall advance the integrity and reputation of the profession consistent with the
public interest, (7) colleagues – software engineers shall be fair to and supportive of their
colleagues, and (8) self – software engineers shall participate in lifelong learning
regarding the practice of their profession and shall promote an ethical approach to the
practice of their profession (ACM/IEEE).
Based on the Software Engineering Code of Ethics, we now have a clearer idea of what
computer/software ethics are. So what exactly is machine ethics, and why is it important?
Typical issues surrounding computer ethics deal with hacking, piracy, privacy, and other
topics normally associated with computers; the issues surrounding machine ethics,
however, deal with the behavior of machines in relation to humans (Anderson &
Anderson, The Status of Machine Ethics: A Report from the AAAI Symposium, 2007, p.
1). Now why is this important? With the world of computing and machinery changing at
a fast pace, we need to invest in applying ethics to the world of machines. Moor states
that there are three reasons for the importance of machine ethics: "(1) Ethics is important.
We want machines to treat us well. (2) Because machines are becoming more
sophisticated and make our lives more enjoyable, future machines will likely have
increased control and autonomy to do this. More powerful machines need more powerful
ethics. (3) Programming or teaching a machine to act ethically will help us better
understand ethics" (Moor, 2006, p. 21).
Expanding on Moor's views, which are shared by Anderson, Anderson, and
McLaren, is important to the overall understanding of creating an ethical agent. In Moor's
first reason he states that we want machines to treat us well. Furthering that statement, we
come to the realization that, just like humans, machines have a responsibility to do what
is right. When a human commits a crime, he is punished in a court of law. A robot must
be able to understand these ramifications if it breaks an ethical code of conduct. Is the
creator of the robot responsible for the robot's actions? Most likely the creator would not
want that, especially if the robot is mass produced and causes mass chaos. With this in
mind, the implementation of ethics in robots is extremely important.
Humans have almost always feared the thought of autonomous, sentient robots, and
with Asimov wanting to grant intelligent machines freedom, we have a right to be afraid.
Pop culture is full of machines rising against their human creators. One of the more
recent examples is the movie The Matrix, in which humans are subjugated by sentient
machines that harvest the internal energy of human bodies for power. With movies like
The Matrix, Terminator, and I, Robot all showing an uprising by our robotic creations,
the human fear of robots is genuine. What does this mean for machine ethics and those
who develop the implementation of such principles? Who is held responsible should an
unethical decision cause harm to humans? How can we prevent such actions from
occurring? How do we decide on the best ethical decision to be made by a sentient
robot? These are all questions being asked by machine ethicists, and as of now they have
no clear answer. In the future, as our understanding of machine ethics expands, we will
be able to answer them.
With a general understanding of reason two and why it is important, we must step
back to the 1970s and science fiction writer Isaac Asimov. Most people may not be
directly familiar with Asimov, but they are familiar with some of his works, such as "The
Bicentennial Man" and "I, Robot." In "The Bicentennial Man," Asimov presents the
"Three Laws of Robotics," which state: (1) a robot may not injure a human being or,
through inaction, allow a human being to come to harm, (2) a robot must obey the orders
given it by human beings except where such orders would conflict with the first law, and
(3) a robot must protect its own existence as long as such protection does not conflict
with the first or second law (Asimov, 1976, p. 135). The "Three Laws of Robotics"
would serve as the starting point of machine ethics (Anderson S. L., 2008, p. 1).
However, later in the story Asimov comes to the stance that "There is no right to deny
freedom to any object with a mind advanced enough to grasp the concept and desire the
state" (Asimov, 1976, pp. 142-144), which only adds to the fear of human subjugation by
our robotic creations; if we ignore Asimov's change of heart, though, subjugation is
harder to imagine.
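
To make the strict ordering of the laws concrete, here is a minimal sketch, in Python, of the three laws as prioritized filters over candidate actions. The Action fields, and the idea of reducing "harm" to boolean flags, are my own simplifications for illustration, not anything Asimov or the machine ethics literature specifies; deciding what actually counts as harm is the unsolved part.

    # A minimal sketch of Asimov's Three Laws as strictly ordered constraints.
    # The Action fields are hypothetical placeholders; real harm judgments
    # cannot be reduced to boolean flags.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool     # taking this action injures a human (Law 1)
        allows_harm: bool     # taking it lets a human come to harm (Law 1's inaction clause)
        obeys_order: bool     # consistent with orders from humans (Law 2)
        preserves_self: bool  # protects the robot's own existence (Law 3)

    def permitted(actions):
        # Law 1 is absolute: discard anything that injures a human, by act or inaction.
        survivors = [a for a in actions if not (a.harms_human or a.allows_harm)]
        # Law 2: among what remains, prefer actions that obey human orders.
        survivors = [a for a in survivors if a.obeys_order] or survivors
        # Law 3: prefer self-preservation, but never at the cost of Laws 1 and 2.
        return [a for a in survivors if a.preserves_self] or survivors

The point of the ordering is that a lower law can break ties but can never overrule a higher one; that rigidity is exactly what causes trouble in the conflict scenarios discussed later.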
As for reason three, performing research in machine ethics gives us the potential
to discover flaws in current ethical theories. With the discovery of these flaws, ethical
theory can be changed and refined. How would machine ethics lead to these discoveries?
Imagine you are walking down the street and happen upon a dollar. You pocket the
dollar and continue on your way, happy as can be now that you can afford that ice cream
cone at McDonald's. Several blocks later you see a beggar asking for money. When
presented with the beggar, would you keep the dollar for yourself, or would you give it to
the beggar, who may end up spending it on something that is not best for his well-being?
Most people would tell someone else, "Of course I would give the beggar the dollar I
found," when in truth most would continue on their way. A machine, however, does not
favor itself over others. In this way, ethical theories can be refined with the help of an
impartial and consistent advisor.
James Moor declares that there are four types of ethical machines, and nearly
everyone involved in the area of machine ethics agrees with his classification. Moor
declares that a machine is either an ethical impact agent, an implicit ethical agent, an
explicit ethical agent, or a full ethical agent (Moor, 2006, pp. 19-20). Before delving
further into machine ethics, we will look at what each type of machine is and the impact
it has on society. Once we have this understanding, we will be able to look into the
opinions of Anderson, Anderson, Moor, and McLaren.
So what is an ethical impact agent? An ethical impact agent can best be described
as a machine whose use leaves an ethically significant impact on someone or something.
Qatar is steeped in Islamic culture but is open to influence from Western countries.
However, traditions still remain in Qatar that date back hundreds of years. One of those
traditions is camel racing. In camel racing the weight of the rider plays a major role (just
as in horse racing in the United States): the lighter the jockey, the faster the camel. Using
slaves as jockeys was a common practice until the United States and the United Nations
threatened economic sanctions against Qatar. These slave jockeys were typically young
boys who were starved to keep their weight down. As an inexpensive and lightweight
solution to the problem, a robotic jockey was created for use in races. One part of the
machine whips the camel while the other controls the reins. The robotic jockey runs on a
Linux-based computer with a 2.4 GHz processor and a GPS-enabled chip that allows it to
be controlled remotely. As Wired explained it, "Every robot camel jockey bopping along
on its improbable mount means one Sudanese boy freed from slavery and sent home."
These robot jockeys are ethical impact agents (Lewis, 2005, pp. 188-195).
With a clear understanding of ethical impact agents, we must look into implicit,
explicit, and full ethical agents. What is an implicit ethical agent? The best way to
describe an implicit ethical agent is to break the phrase down. The word implicit means
implied rather than directly stated. When thinking about implicit ethical agents, you
might not even assume they need ethics; however, that is not the case. The most common
implicit ethical agent is the Automated Teller Machine (ATM). Why is an ATM an
implicit ethical agent? The answer lies in the coding of the ATM. The ATM is
programmed in such a way that it avoids unethical behavior by properly calculating all
transactions performed on your account. Other implicit ethical agents are your tax
software, anti-virus software, and home security system. You wouldn't come home from
work one day expecting to find that your anti-virus software had suddenly become a
virus, would you? Implicit ethical agents are basically pieces of software created
following the software engineering code of ethics stated earlier, or as Moor puts it,
"Computers are implicit ethical agents when the machine's construction addresses safety
or critical reliability concerns… A line of code telling the computer to be honest won't
accomplish this [in reference to an ATM]" (Moor, 2006, p. 19). The implicit ethical
agent is the most common type of agent and, in my opinion, will remain so for the next
ten years.
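
To illustrate what Moor means, here is a minimal sketch of an ATM-style withdrawal routine in which the "ethics" is nothing more than the careful construction of the code. The function and its cents-based representation are my own illustrative inventions, not code from any real ATM.

    def withdraw(balance_cents: int, amount_cents: int) -> int:
        """Return the new balance; refuse any transaction that would misstate the account."""
        if amount_cents <= 0:
            raise ValueError("withdrawal must be a positive amount")
        if amount_cents > balance_cents:
            raise ValueError("insufficient funds")
        # Integer cents avoid floating-point rounding error, so the customer is
        # never short-changed: here, correctness of construction *is* the ethics.
        return balance_cents - amount_cents

Nothing in this code mentions honesty; the machine behaves ethically because dishonest outcomes were engineered out of it, which is exactly the implicit case.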
What is an explicit ethical agent? The word explicit means that a definite answer
is provided. Creating an explicit ethical agent is a difficult task, and a clear example is
hard to come by; as of this point in time there are no true explicit ethical agents.
However, Michael Anderson, Susan Anderson, and Chris Armen have implemented two
ethical theories in machines called Jeremy (after Jeremy Bentham) and W.D. (after
William D. Ross). Bentham followed the principles of act utilitarianism, while Ross
followed the principle of prima facie duties (a prima facie duty is something one ought to
do unless it conflicts with a stronger duty, so there can be exceptions, unlike an absolute
duty, for which there are none). Because Ross's theory gives no clear way to resolve
conflicts among duties, they created an algorithm that compares the duties at stake in a
new case with similar ethical cases involving those duties. According to Moor, explicit
ethical agents would be the best types of agents to have in the case of disaster relief. A
machine with the ability to process all incoming information would be able to judge who
needs help the most and where relief would be most effective. Now, you may be thinking
that this has nothing to do with ethics, but it clearly does: deciding where to send relief
can directly affect who lives and who dies and, as such, is an ethical issue (Moor, 2006,
pp. 19-20).
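
As a rough picture of how a system in the spirit of Jeremy could weigh an act-utilitarian decision, here is a Bentham-style sketch that scores each candidate action by summed intensity × duration × probability across everyone affected. The field names, scales, and the disaster-relief scenario are my assumptions for illustration, not the Andersons' published implementation.

    # A sketch of an act-utilitarian decision procedure in the spirit of Jeremy.
    # Scales and field names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Effect:
        person: str
        intensity: float     # pleasure (+) or displeasure (-), e.g. -2 to +2
        duration: float      # how long the effect lasts
        probability: float   # likelihood the effect occurs, 0 to 1

    def net_pleasure(effects):
        """Bentham-style total: sum of intensity * duration * probability."""
        return sum(e.intensity * e.duration * e.probability for e in effects)

    def best_action(options):
        """options: list of (action_name, [Effect, ...]) pairs; pick the highest total."""
        name, _ = max(options, key=lambda pair: net_pleasure(pair[1]))
        return name

    # Hypothetical triage between two relief sites.
    options = [
        ("send supplies to shelter A", [Effect("group A", +2, 5, 0.9),
                                        Effect("group B", -1, 5, 0.8)]),
        ("send supplies to shelter B", [Effect("group A", -1, 5, 0.8),
                                        Effect("group B", +2, 5, 0.6)]),
    ]
    print(best_action(options))  # -> "send supplies to shelter A"

The appeal of such a procedure is that the machine applies the same calculus to every case; its weakness, as discussed below, is that everything hinges on the numbers fed into it.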
Full ethical agents stir up the most controversy of any ethical agent. Explicit
ethical agents are basically computer applications that analyze a situation and come up
with an ethical decision, leaving the act of carrying it out to a human. A full ethical
agent, on the other hand, has the ability to come up with an explicit ethical outcome and
act on it. A human adult is a prime example of a full ethical agent, as we have the ability
to derive an answer, justify it, and then act on it. Full ethical agency is hard to grasp,
especially for a programmer such as myself. As of this point in time many, including
myself, believe there is a line or ceiling separating explicit and full ethical agents. The
more common form of this argument is that machines cannot become full ethical agents,
meaning that they will never have consciousness or free will of the kind Andrew from
"The Bicentennial Man" had (Moor, 2006, pp. 20-21). The counterargument is that
humans are machines, and since humans are full ethical agents, machines created by
humans can be full ethical agents. My opinion falls between the two camps. As of now I
do not believe it is within our abilities to create a full ethical agent. We will, however, be
able to make advanced explicit ethical agents that border on full ethical agency.
With an understanding of all the types of ethical machines and the opinion set
forth by James Moor, we must look into Michael Anderson and Susan Anderson, who
share a view similar to Moor's, albeit a little different. Anderson and Anderson are
primarily concerned with the ethical decision-making skills of machines, meaning the
creation of explicit ethical agents with the ultimate goal of creating a full ethical agent.
Creating an ethical agent is difficult, as there is no complete standard for ethics; the field
is constantly evolving. The Andersons believe that their full ethical agents will be able to
make a judgment when given an ethical dilemma they have never been presented with
before (Anderson & Anderson, The Status of Machine Ethics: A Report from the AAAI
Symposium, 2007, pp. 4-5). In another paper Anderson mentions virtue ethics, which is
ethics based on what kind of person you want to be rather than on what you ought to do.
A relevant pop culture reference is the video game Mass Effect, in which the player is
prompted with ethical decisions. Rather than choosing what one ought to do, the player
makes a decision based on the kind of person they want their character to be: if you want
your character to be a tough guy, you choose the options that give off the appearance of
being tough. This type of ethics puts actions on the back burner. However, Anderson
believes that when it comes to machines we should focus on their actions (Anderson &
Anderson, Machine Ethics: Creating an Ethical Intelligent Agent, 2007, p. 15). Based on
this, Anderson strongly believes in the creation of machines that can perform actions
based on the ethical values instilled in them.
At the 2005 AAAI symposium on machine ethics, McLaren disagreed with the
statements made by Anderson and Anderson. McLaren is reluctant to give machines the
power to make ethical decisions; he would rather have machines perform the
computations and prompt humans for answers to ethical dilemmas (Anderson &
Anderson, The Status of Machine Ethics: A Report from the AAAI Symposium, 2007, p.
5). According to McLaren, a human should have the ultimate say in the ethical decision
making of machines. Even if a machine can autonomously reason its way through an
ethical problem, the final decision should still be left to humans; machines should help
humans come to decisions, not make decisions for them (McLaren, 2005, p. 1). Based on
this view of the role of machines in helping people make ethical decisions, McLaren has
focused on the creation of explicit ethical agents. He believes that we should not make
full ethical agents, disagreeing with Anderson and Anderson.
At the symposium, people pointed out that McLaren's view would cause problems
for those working in the field of artificial intelligence (Anderson & Anderson, The Status
of Machine Ethics: A Report from the AAAI Symposium, 2007, p. 5). Why would this
cause a problem? Leaving every final decision to humans would make it impossible to
create fully autonomous machines, specifically machines whose behavior affects the
welfare of human beings, because such machines would have to make ethical decisions
on their own. A machine created to provide support to hurricane victims, as stated earlier,
would not be possible if we followed McLaren's view.
I see some validity in the point that McLaren makes and in the point made by
some of the symposium attendees. I believe that we should be extremely careful about
which machines we give the ability to make ethical decisions. Machines in the fields of
business and law, I believe, would be wholly appropriate, because the ethics of business
and law are clearer than those related to the welfare of human beings. Some would say
that business and law do affect the welfare of human beings; while true, I believe the
impact of such machines would not be as great as that of a machine in law enforcement
or of a robotic butler, which is directly responsible for those in its care. Even once we
have the ability to create a fully autonomous robot, how do we go about punishing it if it
makes a bad decision that it believed would help and not cause harm? Do we just turn it
off? Disassemble it? This is where problems arise.
In response to my statements above, Isaac Asimov would point to his "Three
Laws of Robotics" as a way of resolving both my doubts and McLaren's. With the
application of Asimov's laws, together with ethical decision making as described by
Anderson, we could create ethical robots that behave in a way that does not cause harm
to humans. However, there are several problems with this if we apply it once again to the
hurricane situation. What if the ethical decision would cause some people harm while
saving others? Under Asimov's laws the robot would be unable to come to a conclusion
and would attempt to save all of the people. But this may not be possible, trapping the
computer in an infinite loop and effectively causing it to destroy itself. We need to find a
way to give robots morals, not just ethics, and this is an issue that will eventually have to
be solved.
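
Returning to the three-laws sketch from earlier (and assuming its Action class and permitted() function are in scope), the hurricane dilemma can be expressed as a hypothetical scenario in which every option allows someone to come to harm, so the first law filters out every action and the robot is left with nothing it may do:

    # Hypothetical triage dilemma, reusing Action and permitted() from the
    # earlier Three Laws sketch: whichever group is rescued first, the other
    # comes to harm, so every option trips Law 1's inaction clause.
    options = [
        Action("rescue group A first", harms_human=False, allows_harm=True,
               obeys_order=True, preserves_self=True),
        Action("rescue group B first", harms_human=False, allows_harm=True,
               obeys_order=True, preserves_self=True),
    ]
    print(permitted(options))  # -> []  no permissible action: the deadlock described above

An empty result here is the programmatic form of the paralysis described above: absolute rules with no way to weigh harms leave the machine with no decision at all.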
Having looked at all of the viewpoints provided, I have come to the conclusion
that at this point in time we should focus on the creation of explicit ethical agents and
avoid creating full ethical agents. Some may view this as a drastic position, but I offer it
as a cautionary measure. It took nearly 40 years for us to create a chess program that
could beat a world champion (Moor, 2006, p. 21); perfecting the creation of a full ethical
agent will take even longer. Imagine someone creates a piece of software that can make
ethical decisions on its own, but the software is not perfected. An incomplete,
unperfected piece of software that can behave unethically would be terrible for
humanity. Humans are already imperfect as it is; adding another imperfect agent to this
world could lead to disaster. The creation of full ethical agents in the form of robots or
androids would be hard for humanity to accept.
To add to this, as of this point in time there is no clear standard set forth for
machine ethics, just as there is no single standard for human ethics. What would a
standard for machine ethics look like? Is it even possible to set one? We have only a
limited understanding of ethical theory, and people do not even agree on it: is Bentham
correct, is Mill correct, or is Kant correct? Nobody has come to a clear consensus.
Programming a computer to follow a set of ethical standards will be challenging. How
can the computer determine what a human deems harmful? Harm differs from one
person to the next: moving a cup in the house of someone with OCD can send them into
a fit, so how does the robot know how to react in that situation? Given the current
limitations of human programming capabilities, creating a full ethical agent is out of our
scope. Instead, as I stated above and as Moor believes, we should focus on creating
limited explicit ethical agents. Only once those have been perfected can we move on to
creating a full ethical agent.
Works Cited
ACM/IEEE. (n.d.). Software Engineering Code of Ethics and Professional Practice.
Retrieved March 29, 2012, from Association for Computing Machinery:
http://www.acm.org/about/se-code
Anderson, M., & Anderson, S. L. (2007, Winter). Machine Ethics: Creating an Ethical
Intelligent Agent. AI Magazine, 15-26.
Anderson, M., & Anderson, S. L. (2007, Spring). The Status of Machine Ethics: A Report
from the AAAI Symposium. Minds and Machines: Journal for Artificial Intelligence,
Philosophy, and Cognitive Science, 1-10.
Anderson, S. L. (2008, April). Asimov's "three laws of robotics" and machine metaethics.
AI & Society, 477-493.
Asimov, I. (1976). The Bicentennial Man. In The Bicentennial Man and Other Stories.
Doubleday.
Lewis, J. (2005, November). Robots of Arabia. Wired, 188-195.
McLaren, B. M. (2005, November). Lessons in Machine Ethics from the Perspective of
Two Computational Models of Ethical Reasoning. Papers from the AAAI Fall Symposium,
70-77.
Moor, J. (2006, July-August). The Nature, Importance, and Difficulty of Machine Ethics.
IEEE Intelligent Systems, 18-21.