Mahboobin, 10:00-11:50
Davis Herchko ([email protected])
“Everything that civilization has to offer is a product of
human intelligence; we cannot predict what we might achieve
when this intelligence is magnified by the tools that AI may
provide, but the eradication of war, disease, and poverty
would be high on anyone's list. Success in creating AI would
be the biggest event in human history. Unfortunately, it might
also be the last” [1]. Stephen Hawking is known worldwide
for his outstanding accomplishments in the fields of physics,
cosmology, and computation. From his theorems on black holes to his work on the origin of the universe, his knowledge of quantum physics is unquestionably immense. This understanding of the quantum realm has helped humanity innovate and create countless technologies, most notably quantum computers. Current quantum computers can handle only narrow mathematical calculations, but the more sophisticated quantum computers of the future are expected to achieve remarkable things.
A modern digital computer owes its functionality to the microprocessor, which is built from billions of transistors. Each transistor can switch either on or off, with on translating in binary code to 1 and off to 0. Current microprocessors keep getting smaller yet more powerful with each commercial release, but their abilities appear meek when compared to the capabilities of a quantum computer. Instead of using transistors to process information, a quantum computer uses manipulated particles called qubits. Qubits differ from transistors in that, thanks to the principle of superposition, they can be "0" and "1" at the same time, while quantum entanglement allows the states of multiple qubits to be linked to one another. Together, these phenomena allow qubits to achieve a form of parallel processing. This type of computation is leaps beyond current
technology. For example, a quantum computer with just 30
qubits is more powerful than the world’s most powerful
supercomputer, and a 300 qubit quantum computer would be
more powerful than every computer in the world connected
together [2]. This power will be extremely useful for
countless endeavors in the future, the main one being true
human-level artificial intelligence. Researchers have
attempted to create a human-like AI on contemporary
computers in the past, but there isn’t enough processing
power to recreate and simulate such complex thought and
emotion. Conversely, quantum computers could provide all the processing power an artificial intelligence program could ever need.
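The scaling behind these comparisons can be sketched in code. The short Python example below is not from the source; it is a minimal illustration (the function name is my own) of why simulating qubits classically is so demanding: an n-qubit register in superposition is described by 2^n complex amplitudes, so every added qubit doubles the bookkeeping a classical machine must do.

```python
import math

def uniform_superposition(n_qubits):
    """State vector of n qubits in an equal superposition:
    2**n complex amplitudes, one per classical basis state."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)  # equal amplitude so probabilities sum to 1
    return [complex(amp, 0.0)] * dim

state = uniform_superposition(10)
print(len(state))  # 1024 amplitudes for just 10 qubits

# Total probability across all basis states is 1.
total = sum(abs(a) ** 2 for a in state)
print(round(total, 6))  # 1.0

# A 30-qubit register needs 2**30 (over a billion) amplitudes, which
# hints at why a modest qubit count can outrun classical simulation.
print(2 ** 30)  # 1073741824
```

This exponential growth in state, not raw clock speed, is what the 30-qubit and 300-qubit comparisons above are pointing at.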
Coming to Fruition
The concept of artificial intelligence has been discussed for thousands of years, going back to the ancient Greeks, but the mechanical, computerized AI most people are familiar with has only been pursued for the past few decades. The computer science community long predicted that we would achieve artificial intelligence around this point in history. Even
though items such as Apple’s Siri and IBM’s Watson have
been created, these intelligent applications aren’t considered
“true” artificial intelligence. True artificial intelligence, or
strong AI, is the effort to "develop artificial intelligence to
the point where the machine’s intellectual capability is
functionally equal to a human’s” [3]. This type of computer
creation would completely alter modern life as we know it.
Instead of giving a computer known variables in order to solve a complex problem, you would simply give it an issue, and it would resolve the problem using its own knowledge.
Humanity would now have a computer and software that
could learn from its mistakes and use common sense to make
decisions. Global issues such as disease, cancer, economic fluctuations, aging, and sustainability could be solved by the super-intellect of a strong AI computer. Although that
may seem implausible, scientists and engineers are already on
the path to creating this revolutionary technology. Multiple
governments have already funneled billions of dollars into
research on artificial intelligence with the purpose of solving
world issues, but also to control warfare. The inclusion of
drones and robotic assistance in combat has already changed
the game, but these systems are still controlled by and dependent on
humans. However, a strong AI weapon does not have to be
under human control; it would be completely autonomous and
use its own discretion. This robotic independence would
forever alter the ways of battle, and researchers have already
deemed it to be “the third revolution in warfare, after
gunpowder and nuclear arms” [4].
Over the past few years, Google Inc. has acquired over
twenty robotics and artificial intelligence companies. In
addition, they have been assisting the quantum computing company D-Wave Systems in developing the world's most advanced quantum computer, which is already processing
with over 1000 qubits. As a result of its commitment to and interest in these areas of technology, Google will be at the
forefront of artificial intelligence for the next few decades.
This new research and development opportunity would
certainly interest me after graduating from the University of
Pittsburgh with a degree in computer engineering, so I would
be eager to pursue a job at Google. After over twenty years
with the company, I have a senior level computer engineering
position at their headquarters in Mountain View, California.
My current role is overseeing multiple engineering teams in
the creation of the next generation of technology: autonomous
artificial intelligence. After decades of studies and research,
true human-like artificial intelligence has nearly been created.
The US government and the Department of Defense have kept
a close eye on the AI project at Google, and they are now
requesting that we collaborate with Lockheed Martin in order
to build fully autonomous robots and drones for warfare.
Google’s administration board has accepted their terms and
has also asked me to complete my project as soon as
possible in order to begin work with Lockheed. I am not
entirely sure that I am comfortable with speeding up the
project and diverting it from its original purpose. I understand
that Google will make billions of dollars as a result of this
defense contract with the United States, but the short time
period I have been instructed to abide by seems a bit too
haphazard. It will take more than a year simply to test the artificial intelligence unit by itself to discover whether it has any thought malfunctions, and that is before even allowing it into the field. Today's semiautonomous military robots and drones still have issues discerning an innocent civilian from a threat, and even though a fully autonomous computer would have more success at eliminating dangers, errors would still be made because of the lack of human awareness. Furthermore, the aspiration for the
AI project was to have a self-thinking supercomputer that
could assist in solving innumerable global problems and
scientific conundrums, not control weaponry and assist in
careless decisions when addressing international conflicts.
The sheer power that a government possesses with artificial
intelligence robots would likely cause many other countries
to panic and begin developing technology of their own, much
like the nuclear arms race of the twentieth century. This new
type of fight for power could start a chain of events that might
cause another major global conflict.
The National Society of Professional Engineers has an
extensive code of ethics that all engineers should abide by. In
regards to this specific scenario, canons #1 and #6 should be
closely considered and followed. The first canon states that
all engineers shall “hold paramount the safety, health and
welfare of the public” [5]. By assisting the government in creating AI soldiers, I would be saving countless human lives, since those soldiers would never go into battle.
On the other hand, I would be assisting in the killing of other
humans around the world by completing this defense contract
and allowing the US government to use an endless amount of
robots to complete military operations. The sixth canon
demands that engineers shall “conduct themselves honorably,
responsibly, ethically, and lawfully so as to enhance the
honor, reputation, and usefulness of the profession” [5]. The
original goals of my project were certainly in line with this
canon, but I am not comfortable with the accelerated timeline the government has given me. Creating this style of AI would still allow me to act lawfully, but it would not allow me to conduct myself honorably, responsibly, or ethically.
The completion of this project would have multiple
conflicting consequences. A major positive is the fact that
using AI soldiers would drastically decrease the number of
killed military personnel, and the number of cases of Post-Traumatic Stress Disorder (PTSD) and suicide would drop
significantly. Robots and drones are able to carry a much
heavier workload than humans, and they do not require a
salary, food, or clothing during their military tenure. Even
though the AI units would be expensive to build and maintain,
they would be cheaper than human soldiers over their
lifetime. Moreover, the artificial intelligence would be much
more effective at analyzing and utilizing information during
defense tactics. While AI provides effective solutions to many
problems we face in modern times, there are still numerous
drawbacks that cannot be overlooked. A central moral dilemma is who to blame in the event that something goes awry with the machine. No human would be at the controls, and the AI would be incapable of taking a moral stand on the issue. Likewise, the robot would be unable to make a human moral decision when confronted with killing another person, which raises the question of whether a human should ever be killed by an autonomous weapon. Another major concern is whether using AI would actually increase the number of conflicts in the world rather than decrease them. The government would no longer have to worry about putting human lives on the line when deploying troops, making it more prone to careless decisions about entering conflicts.
The Institute of Electrical and Electronics Engineers’ code
of ethics, much like the National Society of Professional
Engineers’ code, sets clear guidelines based on the values of
humanity and the world. This specific engineering code is
helpful to my dilemma because of the significant amount of
electrical engineering that would have to be done in order to
complete the autonomous artificial intelligence military units.
The articles which apply to this scenario state that engineers
must agree “to accept responsibility in making decisions
consistent with the safety, health, and welfare of the public,
and to disclose promptly factors that might endanger the
public or the environment” and “to improve the understanding
of technology, its appropriate application, and potential
consequences” [6]. The future of humanity is in my hands
with such a dangerous and costly project. An autonomous
warfare system unquestionably has upside, but it is doubtful that those benefits outweigh the risks. It can be argued that creating this AI will improve public safety and well-being, but I do not believe the same can be said when
years pass and other countries respond to the United States' overwhelming advantage by developing the same technology. I also do not feel as though AI is being used appropriately in armed robots. There are so many other possibilities for artificial intelligence beyond being programmed as killing machines, all of which avoid the chance of killing innocent civilians.

In 2007, the South African National Defense Force was experimenting with a new semiautonomous military technology, the Oerlikon GDF-005. This piece of equipment was used as an anti-aircraft weapon and could automatically lock on to and destroy low-flying aircraft, helicopters, drones, and cruise missiles using radar and laser range finders. During testing, the anti-aircraft gun suddenly began firing wildly in all directions and unloaded both of its 250-round auto-loader magazines. The apparent computer malfunction killed nine individuals and wounded another fourteen. Upon further investigation, it was discovered that a computer failure wasn't the culprit, but rather an engineering failure at the manufacturer, Oerlikon. Because the technology was created hastily, small computer and mechanical errors were made in production, and these contributed to the horrific events with the defense force [7].

Even though there are positive outcomes from creating autonomous robotic weapons, the potential dangers associated with such a monumental technology would cause me to confidently decline taking on the project. Despite pressure from the executives at Google, as well as the United States government, I do not want to create an artificial intelligence that is not fully committed to the overall safety of everyone in the world, not just American citizens. My oldest brother is a safety manager for a large airline company, and he stated that "it is reassuring knowing that my job means something and stands for something larger than myself. Peoples' lives are in my hands, and I need to be sure every single one of them gets from point A to point B, safe and sound. If even one of them is injured or killed, then I am not doing my job correctly" [8]. While I am not an aeronautics safety manager in my scenario, the ethical issues are the same. As an engineer, I need to ensure safety and security for all individuals, and causing even one person harm means I have failed in my occupation. Furthermore, an open letter warning humanity of a "military artificial intelligence arms race" was recently signed by Tesla/SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and famed professor Stephen Hawking [9]. These influential minds have already determined that military AI is not the correct course of action, and allowing humanity to create such a weapon could spell the end of civilization. The creation of artificial intelligence is inevitable, but that doesn't mean its use in warfare has to be.

To All Future Engineers in this Position

It is important to take time away from everything and truly determine how you feel about the situation. After that, an excellent next step is to discuss the circumstances with people you trust and care about. These individuals have your best interests in mind and can also give feedback from an outsider's perspective. If you still feel unsure about what to do, the best course of action is simply to refrain from continuing the project. Always remember that an engineer should bear in mind the effects their work can have on humanity, and no amount of money or power should affect their ability to tell right from wrong. Use the codes of ethics as a guide to achieve great things, not only for yourself but for the world around you.

References

[1] S. Hawking. (2014, May 22). "Stephen Hawking: Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously?" Independent. (Online article).
[2] H. Bachor. (2015). "Quantum Computing." Nova. (Online article).
[3] "A Holistic Approach to AI." (2011). Open Computing Facility at University of California, Berkeley. (Online article).
[4] S. Russell. (2015, May 27). "Take a Stand on AI Weapons." Nature. (Online article).
[5] "NSPE Code of Ethics for Engineers." (2015). NSPE. (Online article).
[6] "IEEE Code of Ethics." (2015). IEEE. (Online article).
[7] N. Shachtman. (2007, October 18). "Robot Cannon Kills 9, Wounds 14." Wired. (Online article).
[8] D. Herchko. (2015). Interview.
[9] S. Gibbs. (2015, July 27). "Musk, Wozniak and Hawking urge ban on warfare AI and autonomous weapons." The Guardian. (Online article).

Acknowledgements

I would like to thank my family and friends for always supporting me and being there whenever I needed assistance. I would also like to thank my Engineering Analysis professor, Dr. Arash Mahboobin, and my writing instructor, Keely Bowers, for giving me great feedback on all of my work and writing. Finally, I want to recognize Jordan Walk, Laurel Murray, and my parents for taking time out of their busy schedules to read and review my paper and provide helpful feedback. Without any one of these individuals, this paper would not have been as well constructed and thought-provoking as it is. The people I have mentioned have been a tremendous influence on me, and I am more than thankful for everything they have done for me.