Writing Assignment 3 - University of Pittsburgh

Bursic 2:00
ENGINEERING ETHICS OF AUTONOMOUS SOLDIERS
Scott Smith (sms249@pitt.edu)
As an engineer who deals with computers and robotics, there are scenarios that could arise with ethical implications. One such scenario is that the United States government contracts my company to produce autonomous soldiers for the army. To determine whether this raises an ethical issue, it is imperative to look at the code of ethics for engineers, reference the codes of ethics for both software and robotics engineers, and use other scholarly sources to help make the best decision on how to handle the issue at hand.

The National Society of Professional Engineers Code of Ethics for Engineers is the best place to start in determining whether producing autonomous soldiers for warfare is ethical. The main canon that bears on this sort of issue is canon number 1, which states: "Hold paramount the safety, health, and welfare of the public" [1]. This canon is fairly vague, which makes it hard to determine whether what the contract asks for is an ethical problem; it all depends on whether warfare in general holds paramount the safety, health, and welfare of the public. It is therefore more useful to look at the codes of more specific engineering disciplines. The Software Engineering Code of Ethics has two canons that relate well to the topic at hand. The first: "Software engineers shall act consistently with public interest" [2]. This is still vague, as it requires deciding whether war is in the public interest, which is not always the case, though autonomous soldiers might be preferable to human ones. The second: "Software engineers shall maintain integrity and independence in their professional judgment" [2]. This canon is also not as helpful as it could be, but one of its sub-points says to "temper all technical judgments by the need to support and maintain human values" [2]. Since human values differ from one individual to the next, this cannot provide enough evidence to settle whether producing autonomous soldiers is ethical.

The last code to check is the Code of Ethics for Robotics Engineers, which has two canons relevant to the scenario. The first states that robotics engineers must keep in mind "the good of as many people as possible and known environmental concerns" [3]. This canon raises the heaviest debate over whether producing autonomous soldiers is ethical. Each autonomous soldier theoretically saves the life of the soldier it replaces, which coincides with the canon's concern for the good of as many people as possible. On the other hand, the same robots used to save lives on one side would be used to take them on the other. Autonomous soldiers might even prove more effective at killing enemy soldiers than humans would be, leading to higher total death tolls; that would conflict with the canon, since fewer people overall would suffer as casualties if the robots were never made.

The second canon that relates to this scenario is for the engineer to keep in mind "the good of the people and government of my nation and its allies" [3]. This canon supports the view that producing autonomous soldiers is ethical, since it serves the good of a nation that has the welfare of its people in mind. That line of thought, though, can be seen as mostly wishful thinking, as the government does not always seem to have the best interests of its people in mind. The best way to determine whether it would in this case is to use scholarly resources to predict how the robots might be used and then compare those actions with the various codes of ethics. United States military lawyers and judge advocates take the legal and ethical evaluation of new weapon systems very seriously [4], so even if there is a desire to develop this technology, it might be halted because those parties judge it unethical. There is also the chance that they deem it ethical while the actions taken with the robotic soldiers still fail to line up with the engineering code of ethics. Robots do not have the capacity to reliably discriminate between combatants and non-combatants, so if they were left to make their own decisions about which targets to eliminate, many unnecessary casualties could result [5]. That clearly does not coincide with the engineering code of ethics, as it fails to consider the well-being of as many people as possible [3]. If that were the predicted outcome, then as an engineer the project to design autonomous robots should not be carried to fruition, since it clearly goes against the code of ethics. The programmers themselves also have to consider how the robot will function, since the values given to it derive from the programmer and the code used to program it [6]. The assumption here is that the programmer follows the code of ethics, and that if he is supervised by an engineer, one of the two will make sure the code is upheld for the sake of the greater good.

If the robots are controlled by a human, then they are not autonomous at all; they are simply like the somewhat controversial military robots of today, such as the Predator and Reaper [7]. There is, however, the option of a human merely controlling a "kill switch": the robots go about their mission, but a built-in safety feature allows a command to eliminate a target to be overridden if the target is an innocent civilian. This would be a safer approach overall and would not conflict, at least not to a large extent, with the engineering code of ethics. Support robots have already been shown to help save lives [7]. By fielding robot soldiers in place of human ones, some sources claim, war yields a "moral saving": some value preserved for each life that is not lost or damaged [8]. Another source states that since robots cannot have advanced mental states, the consequences of the actions they take must be the moral responsibility of someone else [9]. The responsibility could fall to a number of people. The first could be the programmer himself, for making a mistake in the code. It could also be the person who assembled the robot and made a slight mistake in the design, or left out a part vital for the robot to work properly. There is also the possibility that the commander in charge of the robots made a mistake in their control and handling, or did not expect the results to be what they were.

University of Pittsburgh, Swanson School of Engineering 2013-10-01
After looking at various sources and the different codes of ethics for engineers, it is not fully possible to determine whether the production of autonomous soldiers conflicts with the engineering code of ethics. It would then be best for the engineer to contact a trusted colleague for an opinion on the matter and to take some time to reflect on his own thoughts and values. Since the issue could potentially conflict with the code of ethics as described above, the best way to reach a decision is to weigh the positives and the negatives of going through with producing autonomous soldiers.

The largest positive of having autonomous soldiers is that they would save human lives on the side using them. Soldiers would no longer have to risk their lives nearly as much, since a large number of less valuable, more disposable robotic soldiers would be available. The value of a human life is almost immeasurable; even if a number were put on it, that figure would be controversial at best, if not an outrage that someone would try such a thing.

The negatives, on the other hand, may well outweigh the positives. The first is that more human life could be lost on the opposing side of the war; if those losses exceed the lives saved, using the robots defeats the purpose of saving lives. Engineers are supposed to consider the well-being of all people according to the code of ethics, and if more lives are lost than saved, the product conflicts with that code. Robots are also susceptible to electromagnetic pulses, as is most technology in general; if reliance on robots grows too high, a nation could be crippled by a single EMP. There is also the possibility that whoever commands the robot soldiers uses them for unethical purposes, such as the killing of innocents. This happens in the military even today, but it could be brought to a greater scale, and robot soldiers will not question the orders they are given, since they lack the capacity to do so. They also lack any morals or ethics of their own, so they will keep following orders, no matter what happens to them or their targets, until those orders change. This could be very dangerous, though a safety feature coded or built into the robot for such an event could help avoid it. The officer in charge would also face a military trial for his actions and matters would be dealt with, but the damage from the incident would already have been done. The negatives can potentially outweigh the positives, which would put the production of autonomous soldiers in conflict with the engineering code of ethics, but this does not have to be the case. If the robots are used properly and ethically, they can prove to be valuable tools for the "greater good."

Of course, other ethical issues may arise during the research and production of these autonomous soldiers. The engineer working on the project might be bribed to create something more dangerous and destructive than originally intended, or to remove a safety feature meant to prevent unnecessary loss of civilian lives. The same engineer could be pressured to take some kind of "shortcut" so the product ships sooner. The risk there is that something malfunctions critically at an important moment and things go terribly wrong, whether the robot starts killing everything in sight or breaks down beyond repair in the middle of battle. The robots could also be easy to hack and reprogram, letting other countries turn the robotic might that was supposed to help into a cause of massive damage and death. Finally, a bug missed during testing could cause another critical malfunction, or a simple lack of responsiveness, costing the client a great deal of money to fix and ruining the reputation of the person who coded the robots in the first place.
The surest way to avoid the ethical issues raised above would be not to produce the robotic soldiers the government wants, but there are compromises that would still be in line with the code of ethics. The biggest compromise would be a safety feature that makes the robots hold any lethal action until a human determines whether they are attacking the correct target or innocent civilians. Another possibility is a kill switch that lets the person monitoring the robots cancel the last command and have them fall back. This would help prevent the robots from taking innocent lives because of a command that was not specific enough, or a failure to correctly identify a civilian as a non-target. Other ethical issues that can occur during production, such as taking shortcuts, are more easily avoided: the engineer need only take a firm stance, ensuring that every part of the robot is completed to a high standard, that all parts work as they should, and that the robots are no more destructive than they need to be. By following the code of ethics, autonomous soldiers could prove very useful in reducing the casualties of war, but it is imperative for the engineer in charge of production to make sure things are completed correctly and to a standard that does not conflict with the engineering code of ethics.
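The human-in-the-loop compromise described above, in which a robot holds any lethal action until an operator approves it or trips the kill switch, can be sketched in code. This is purely an illustration under stated assumptions: every class, function, and name here is hypothetical, not part of any real weapons system or the sources cited.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List


class Verdict(Enum):
    APPROVED = "approved"      # operator confirms a lawful combatant target
    OVERRIDDEN = "overridden"  # operator cancels: target may be a civilian


@dataclass
class Engagement:
    """A single lethal action proposed by a robot (hypothetical)."""
    robot_id: str
    target_desc: str


@dataclass
class HumanInTheLoopController:
    """Every lethal action is held until a human reviews it; an override
    cancels the pending command and orders the robot to fall back."""
    review: Callable[[Engagement], Verdict]  # the human operator's decision
    log: List[str] = field(default_factory=list)

    def propose(self, e: Engagement) -> bool:
        # The robot blocks here: it has no autonomy over lethal force.
        verdict = self.review(e)
        if verdict is Verdict.APPROVED:
            self.log.append(f"{e.robot_id}: engage {e.target_desc}")
            return True
        self.log.append(f"{e.robot_id}: override -> fall back")
        return False


# Usage: an operator policy that overrides anything flagged as possibly civilian.
def cautious_operator(e: Engagement) -> Verdict:
    return Verdict.OVERRIDDEN if "civilian" in e.target_desc else Verdict.APPROVED


ctrl = HumanInTheLoopController(review=cautious_operator)
ctrl.propose(Engagement("unit-1", "armed combatant"))    # approved, engages
ctrl.propose(Engagement("unit-1", "possible civilian"))  # overridden, falls back
```

The design choice that matters ethically is that `propose` cannot complete without a human verdict, so moral responsibility for each engagement rests with the operator rather than the machine, which is the arrangement source [9] argues for.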
WORKS CITED
[1] “NSPE Code of Ethics for Engineers.” (online code of ethics)
http://www.nspe.org/Ethics/CodeofEthics/index.html
[2] “Software Engineering Code of Ethics and Professional Practice.” (online code of ethics)
http://www.computer.org/portal/web/certification/resources/code_of_ethics
[3] “A Code of Ethics For Robotics Engineers.” (online code of ethics)
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5453245
[4] K. Anderson, M. Waxman. (Dec. 2012). "Law and ethics for robot soldiers." Policy Review (online article)
http://go.galegroup.com/ps/i.do?action=interpret&id=GALE%7CA312292087&v=2.1&u=upitt_main&it=r&p=AO
NE&sw=w&authCount=1
[5] N. Sharkey. (March 2013). “The Evitability of Autonomous Robot Warfare.” International Review of the Red Cross
(online article) http://rt4rf9qn2y.search.serialssolutions.com/?ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF8&rfr_id=info:sid/summon.serialssolutions.com&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=The
+evitability+of+autonomous+robot+warfare&rft.jtitle=International+Review+of+the+Red+Cross&rft.au=Noel+E+Sharkey&
rft.date=2012-06-01&rft.pub=Cambridge+University+Press&rft.issn=1816-3831&rft.eissn=16075889&rft.volume=94&rft.issue=886&rft.spage=787&rft_id=info:doi/10.1017%2FS1816383112000732&rft.externalDocID=
3002796561&paramdict=en-US
[6] W. Wallach, C. Allen. (October 2012). “Framing Robot Arms Control.” Springer Science+Business Media Dordrecht
(online article) http://link.springer.com/article/10.1007%2Fs10676-012-9303-0/fulltext.html
[7] M. Schulzke. (April 2011) “Robots as Weapons in Just Wars.” Springer-Verlag (online article)
http://link.springer.com/article/10.1007%2Fs13347-011-0028-5/fulltext.html
[8] T. Simpson. (May 2011). “Robots, Trust and War.” Springer-Verlag (online article)
http://link.springer.com/article/10.1007%2Fs13347-011-0030-y/fulltext.html
[9] T. Hellstrom. (September 2012). “On the Moral Responsibility of Military Robots.” Springer Science+Business Media
(online article) http://link.springer.com/article/10.1007%2Fs10676-012-9301-2/fulltext.html