PHI 350 Technology and Ethics
Ethics and A.I.
Artificial Ethics for Artificial Intelligence
Michael Schultz
3/23/2009
When I use the word artificial, I use it to describe something synthesized: something modeled after an item that is real or was real at one point. The word intelligence is a concept we humans use to bring some truth to what we are and to separate ourselves from the rest of the organisms on earth, as described by Nietzsche (1873). My paper's purpose is to investigate one question: what benefit is it to humanity to develop Artificial Intelligence? To answer this question properly, if it can be answered, I start by proposing a few simple questions and answering them through careful analysis and example. I feel that society has a strong sense that it wants Artificial Intelligence, yet the same people who want it may not really know why they want it. Yes, it may solve some problem at hand now or in the near future, but an opposing argument is that these same problems could be solved by human intelligence if we placed the same focus on solving them rather than on creating something else to solve them. A.I. has no doubt brought many researchers together and given them purpose, and as Mr. Moore points out, it will allow us to study our own ethics in detail. Is A.I. an end or a means? Why do we need Artificial Intelligence? Humanity has brought itself this far; for what purposes is Artificial Intelligence to be used, today and in the future?
Defining Artificial Intelligence
One definition of artificial intelligence, from the Institute of Telecommunication Sciences, is "The capability of a device to perform functions that are normally associated with human intelligence, such as reasoning and optimization through experience." Further, they classify Artificial Intelligence as a branch of Computer Science. I agree with their definition; however, I think that placing Artificial Intelligence under Computer Science exclusively is not entirely appropriate. I will attempt to explain why I think it is not appropriate, but first I will give another
definition that also puts A.I. under computer science. This definition, I feel, is more to the point: "The branch of computer science concerned with making computers behave like humans" (Artificial intelligence, 2004). It is simple, and simplicity is something that does not happen very often when talking about A.I. and ethics: a computer that acts like a human. These definitions are acceptable, but they limit what A.I. can be by defining it in terms of what a computer can do. After reading a book unrelated to A.I. specifically, but on the subject of knowledge unification, it makes sense that if at some level our intellect were made up of many transistors, then A.I. could be modeled with a computer. As of yet I have not found or read any evidence of this, so I think it is safe to say that A.I. will never be able to behave like a human in thought or action until it is composed of something that resembles what it is attempting to model: the brain. This brings to mind consilience, as put forth in Edward Wilson's book titled Consilience: the unification of knowledge, or the combining of specialties of science and physics, can help answer bigger questions of science and, more specifically in this
paper, Artificial Intelligence. As of now what we call A.I. is ‘housed’ in hardware and software
code of a computer, the core of which is basically a bunch of transistors, electronic switches.
By that I do not mean to simplify things to the point that a reader should think I find the modern computer insignificant; I just do not think that the modern computer can truly do the human intellect justice in modeling any similar intellectual process. However, I assume that the future device that houses A.I. will resemble the current computer paradigm about as much as an abacus resembles the computer of today. After doing some simple research into the topic of this paper, I feel that if A.I. is ever truly going to resemble what it intends to mimic, human intelligence, it should be more encompassing than just a branch of computer science. Other disciplines will obviously have to be involved, such as neurobiology, psychology, and sociology, on top of electrical engineering and computer science, to name a few. By examining where A.I. stands now, I hope to give a better feel for how and what it will be in the future.
A.I. Currently in Use by Researchers or Industry
From motor vehicles to robots to software programs that advise doctors as if they were the expert, Artificial Intelligence is currently used throughout many industries, and I will touch on a few. The purpose is to give the reader a feel for what we currently consider artificial intelligence and how we use it to help. In medicine and medical practice, many systems have been developed to help practitioners give better diagnoses to patients.
According to Pandey and Mishra (2006), these software programs are referred to as expert systems (ES). Any one expert system may target a specific disease to help diagnose, or it may exist to provide ethical advice on dealing with patients, especially in sensitive areas, like the prototype Medical Ethics Advisor (MedEthEx) developed by Michael Anderson, Susan Leigh Anderson, and Chris Armen. Like any other expert system, MedEthEx is based on a set of rules. According to Anderson, Anderson, and Armen (2006), MedEthEx implements the biomedical principles of Beauchamp and Childress. From the brief paper that describes the MedEthEx system, it is apparent that the purpose was to demonstrate that a moral model could be implemented, not that the one chosen was or should be the end-all. If one moral model could
be modeled into an Artificial Intelligence, then other moral models can be. An example of the type of dilemma MedEthEx could help resolve, as proposed by Anderson, Anderson, and Armen, is this: if a patient refuses treatment recommended by the physician, should the physician continue trying to persuade the patient, or accept the patient's refusal? Based on the rules programmed into the system, the training cases the system has learned, and the physician's inputs on the specific new case, the ES would advise the most preferred action to take, based on inductive logic. They also stated that the data they got back from this experiment could be used for an actual medical-advisement robot for elderly people.
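The advisement process described above can be sketched as a toy program. This is a hypothetical illustration, not the actual MedEthEx implementation: I assume each candidate action is scored on how well it satisfies the duties of autonomy, beneficence, and nonmaleficence, and that weights over those duties are adjusted from training cases in a simple perceptron-like fashion. The duty names come from Beauchamp and Childress; the numeric scores, the training cases, and the learning rule are my own invention for the sake of the sketch.

```python
# Toy sketch of a MedEthEx-style ethical advisor (hypothetical, for
# illustration only). Each action in a dilemma is scored on how well it
# satisfies three biomedical duties (autonomy, beneficence,
# nonmaleficence), from -2 (violated) to +2 (satisfied).

def preference(weights, action):
    """Weighted sum of duty satisfactions for one candidate action."""
    return sum(w * s for w, s in zip(weights, action))

def learn_weights(training_cases, epochs=100, lr=0.1):
    """Perceptron-style learning: nudge the duty weights until the
    ethically preferred action of every training case scores higher."""
    weights = [1.0, 1.0, 1.0]
    for _ in range(epochs):
        for preferred, rejected in training_cases:
            if preference(weights, preferred) <= preference(weights, rejected):
                for i in range(3):
                    weights[i] += lr * (preferred[i] - rejected[i])
    return weights

# Hypothetical training cases: (preferred action, rejected action), each
# given as (autonomy, beneficence, nonmaleficence) scores.
training = [
    # Accept an informed refusal rather than keep pressuring the patient.
    ((+2, -1, 0), (-2, +1, 0)),
    # Try again to persuade when the refusal risks serious harm.
    ((-1, +2, +2), (+1, -2, -2)),
]

w = learn_weights(training)

# New case: the patient refuses treatment and the stakes are low.
accept_refusal = (+2, -1, 0)
persuade_again = (-2, +1, 0)
advice = ("accept the refusal"
          if preference(w, accept_refusal) > preference(w, persuade_again)
          else "persuade again")
print(advice)  # -> accept the refusal
```

Given a new case, the sketch compares the weighted scores of the two candidate actions and advises the higher-scoring one, which loosely mirrors the inductive advisement process described in the paper.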
In another area, A.I. is used to drive a real vehicle around town without the aid of humans. The DARPA Urban Challenge had 11 final teams compete for a two-million-dollar prize for the best autonomous vehicle able to safely maneuver in a controlled, simulated urban town. The winner was Tartan Racing, and I will use them as an example for my paper. Driving a vehicle on urban roads may not be that difficult for a seasoned driver, especially in a town the driver lives in and has experience driving around. For a computer to control a vehicle, however, the task seems daunting. The Tartan Racing team broke the problem down into five main parts: mission planning, motion planning, behavior generation, perception (world modeling), and mechatronics.
When you consider all the decisions made while driving from home to a destination, it is a lot to think about. I must say, on a personal level, that the act of driving a vehicle seems difficult for humans to learn at first, so for a machine to drive autonomously must be a very complex undertaking for any team of scientists and engineers. Hopefully they don't model the driving habits of New Jersey drivers! I say that to be funny; however, it does raise a point about the behavior of drivers in different parts of the country. Between the traffic laws of different states and the behaviors of different drivers, it may be easier to predict the weather! Seriously, though: in their paper, the Tartan team describes how the mission-planning algorithm creates a cost graph of where the vehicle is and where it needs to go, taking the current environment into consideration and creating new graphs when the environment changes. From the way they describe its operation, I imagined the GPS device in my friend's car that gives him a route to his destination; when he makes a wrong turn, the device re-computes the route to the same destination from the new current location. We may not see the next car commercial advertise a new autonomous feature anytime soon, but the point is that it is possible with today's technology.
Another use of Artificial Intelligence is on the battlefield: robots that have the capability to fire a weapon already exist. As of right now, these systems are remote-operated vehicles. According to a New York Times article from 2005, however, the Pentagon's end goal is to create an autonomous robot that could replace a soldier. What happened to the end goal of ending war altogether? In the same article, Gordon Johnson of the Pentagon was quoted as saying, "They're not afraid. They don't forget their orders. They don't care if the guy next to them has just been shot. Will they do a better job than humans? Yes." Not to get carried away: the soldier-replacement robot is not here yet. The technology in 2005 allowed for what is called the special weapons observation remote direct-action system, or SWORDS, pictured in figure 2. The small tank-treaded robot has already had an upgrade, called MAARS (modular advanced armed robotic system), pictured in figure 1. From the New York Times article by Tim Weiner:
Military planners say robot soldiers will think, see and react increasingly like humans. In the beginning, they will be remote-controlled, looking and acting like lethal toy trucks. As the technology develops, they may take many shapes. And as their intelligence grows, so will their autonomy.
Figure 1: MAARS ROV
Figure 2: SWORDS ROV
As of now, both SWORDS and MAARS are remote-operated, so any ethical issues lie with the operator. Hopefully, with the last three examples of current technology, the need for ethical concern is evident. These are only three examples of current technology that either use Artificial Intelligence techniques or plan to use A.I. as soon as it is feasible. To further the concern, consider something René Descartes said in his famous Discourse on Method:
Thus I imagined that peoples who, having once been half savages and having been civilized only little by
little, have made their laws only to the extent that the inconvenience due to crimes and quarrels have forced
them to do so, could not be as well ordered as those who, from the very beginning of their coming together,
have followed the fundamental precepts of some prudent legislator.
Without reading between the lines, Descartes makes a point that can be extended to our current dilemma: if we do not start to plan for true artificial intelligence, then we may end up making ethics laws and programming rules for A.I. as things happen, and they may not be all that well thought out. This brings me to the point in this paper where we shall move from the current technology of A.I. to the future possibilities of A.I., and raise the question of what could happen if we do not have a plan for implementing some sort of ethics or morals into these future machines.
Ethical Dilemmas of Future Possibility
I think it is inevitable that any intelligence either will have some form of morality to follow or will form its own morality. For the sake of a worst-case scenario, in this next argument I assume that some day in the future some device(s) will have the capability science fiction predicts: the capability of human-like thought and behavior. Yes, some say it is not possible for a machine to have human thoughts and cognition. With that argument I fully agree, with one stipulation: machines of today will never have human thoughts or cognition. With a brief look at human history, how many times have scientists and philosophers such as Galileo, Newton, the Wright Brothers, and Einstein, to name a few, proven previous theories wrong and shown the world that a previous paradigm of thought was incorrect or flawed? So I say, for once, give in to the possibility that impossible does not exist, and plan for the future.
The plan for machine morality is slowly coming about. It is still vaporware, a term Wallach uses in his book Moral Machines to describe "a promise that no one knows how to fulfill." The combination of today's philosophers and scientists has started to pave the way and will eventually fulfill this promise. The problems that occur are not simple to solve. From what point of view should ethics come: religious doctrine, philosophical ideas, or just human laws? If religious, which doctrine should be followed? These questions of whose ethics to use must be answered, and then comes the huge undertaking of codifying them to be programmed into a computer. This is the point where I simply refer you back to earlier in my paper, where I said that today's machines will never really think like a human, at least until another architecture is invented that can learn ethics or pick up on human behavior. I do not say that to imply that all attempts to create A.I. should be stopped because they will fail. On the contrary, I feel that we should try to make it work, and in the process I think we will come up with another architecture that better fits what we as a society want A.I. to be.
I say that I am a product of my upbringing: the way I was mentored by family and my social interactions with friends dictate how I react to situations today. I cannot think that a future A.I. will be very different in this process. The human brain is very complex and recognizes patterns in everything it senses. Colors, smells, touch, sounds, and combinations of all of these have patterns that we like or dislike. In the words of Kenneth Wesson, an advocate for the neurosciences, "The brain is a 'pattern-detecting device,' aggressively searching for those patterns which will help give meaning to new or incoming stimuli." Just like the pattern of black shapes on this white paper: you, the reader, recognize them as letters, characters, words, and hopefully a well-structured paper. There are many issues to overcome if a machine were created that could do and think as a human, and going into unknown territory is very scary for some people. Employment is already a problem: as technology advances and the human population grows, not only are there more people, but technology will take more of the jobs from human workers. This dilemma alone has raised concern for many. According to Arnold Brown, "Cisco is using software programs to replace humans in human resources, finance, customer service, and other staff areas." This leads to further problems, and pressure on political leaders to try to solve them.
In conclusion, let us revisit the first questions. Does society really want A.I.? With all the hype about it in the media, I answer this question with a definite yes. Will the invention of A.I. be a benefit to society? No one can truly answer that question with pre-analysis; post-analysis of what happens once A.I. is invented and used will answer it. One thing is certain: we can make sure the outcome of having A.I. in our society is more positive than negative if we start incorporating ethics into our designs.
References
Anderson, M., Anderson, S. L., & Armen, C. (2006, July). MedEthEx: A Prototype Medical Ethics Advisor. Retrieved March 18, 2009, from http://www.aaai.org/
Anderson, M. & Anderson, S. L. (2007, February 14). The Status of Machine Ethics: A Report
From the AAAI Symposium. Minds & Machines, 17, 1-10. Retrieved March 11, 2009,
from Academic Search Premier
Artificial intelligence. (2004, February 10). Retrieved March 18, 2009, from http://webopedia.com/TERM/A/artificial_intelligence.html
Brown, A. (2007). The Impact of Robots on Employment. Contemporary Issues Companion: Artificial Intelligence. Retrieved March 19, 2009, from Opposing Viewpoints Resource Center. Gale. Institute of Technology at Utica-SUNY.
TALON family of Military, Tactical, EOD, MAARS, Hazmat, SWAT, and Dragon Runner Robots. (n.d.). Retrieved March 21, 2009, from http://www.foster-miller.com/lemming.htm
Urmson, C., et al. (2007, April 13). A Multi-Modal Approach to the DARPA Urban Challenge. Retrieved March 17, 2009, from http://www.darpa.mil/grandchallenge/TechPapers/Tartan_Racing.pdf
Wallach, W., & Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong. New York, NY: Oxford University Press, Inc.
Weiner, T. (2005, February 16). New Model Army Soldier Rolls Closer to Battle. New York Times. Retrieved March 19, 2009, from http://www.nytimes.com
Wesson, K. A. (2003, August). What Everyone Should Know About the Latest Brain Research. Science Master. Retrieved March 19, 2009, from http://www.sciencemaster.com