
State of US Robotics JSPG

In Order to Form a More Perfect Union: The State of Robotics and Artificial Intelligence Policy
in the United States
By Bryan McMahon
Robots, artificial intelligence, and automation dominate today’s headlines but have a long
history in the 20th century. The word robot, first used in the 1920 Czech play R.U.R. (Rossum’s
Universal Robots), described human-like, plasma-filled beings designed for drudgery:
‘automatons,’ efficient but emotionless and incapable of original thinking.1 These
robots were twice as efficient as a human worker but “lacked a human soul.”2 The fictional
robots reflected anxieties of a growing, mass-produced culture and fears of a future bereft of
human inspiration and ingenuity.
Today, robot and artificial intelligence (AI) technology looks very different, but many of
the concerns reflected in fiction remain the same. AI systems work on Wall Street, independently
executing millions of trades a day without a banker’s say.3 They are being developed to drive
cars, work in industrial settings, diagnose disease from x-rays, conduct the discovery process for
lawyers, and review the financial details of commercial-loan contract agreements for banks.4,5,6
It is difficult to precisely define what a robot or AI is, but specific features of a robot or AI can
already be seen working today. Automation, whether by machine, computer program, or data-crunching algorithm, is the dominant feature of the robotic and artificially intelligent technology that
interacts with us on a daily basis. Automation began as an outgrowth of Henry Ford’s assembly
line production method in which each worker repeated his own specialized task as part of
building an entire car. Today, aided by advances in computing power and access to vast
quantities of data, automation is poised to affect almost every level of employment with the
potential to completely eliminate some sectors.7 This Second Machine Age, as it is now called,
portends to shape every aspect of our economic, social, and political lives.8 Many reports have
predicted automation will decimate jobs, with some estimating up to 47% of jobs are within its
reach.9 Other studies, however, emphasize that while automation may take over specific parts of
the job, many of those tasks are undesirable due to their rote nature (in robotics research, the
responsibilities best suited for robots are often called “dirty, dull and dangerous”).10 In this line
of thinking, automation may free us from the repetitive parts of jobs and create positions to
oversee automated systems.11,12,13 Automation will reach beyond physically intensive labor to
affect expert fields, too. In fact, a report by McKinsey’s Global Institute estimates that for 60%
of all jobs, 30% of tasks can be automated with today’s robotics and AI technology.14 It appears inevitable
that automation will reshape the job markets, the skills needed to stay employed, and the nature
of the jobs themselves.
Given the possible scope of automation and its potential to reshape our future, it is vital
to know what legislation, regulations, and policy frameworks concerning robotics and AI
technologies exist today, how they came about, and how these policies govern robotics and AI.
This paper summarizes the current state of robotics and AI policy in the United States, examines
how such policies arose, and analyzes future directions for the regulation of robotics and AI. To
do so, I first trace the historical development of robotics and AI, to identify early government
influences. Then, I examine the regulations made by federal agencies that currently make up the
bulk of robotics and AI policy.
Importantly, no broad, unified policy for robotics or AI exists. Instead, several federal
agencies have responded to the growth of various robotic or AI technologies with regulations
within specific domains, like safety or a well-regulated public infrastructure. However, the scale
and cost of implementing the regulations, technical complexity of objects being regulated, and
number of technological objects are increasing over time, reflecting a growth in robotic or
artificially intelligent technologies. Parallel to this growth is an expansion in regulatory scope, as
individual agencies are being tasked with regulating layers of issues entangled by an increasing
technical complexity. Finally, I evaluate the future implications of robotics and AI policy and
recommend some regulatory steps to take now to help steer us through the coming waves of
disruptive technologies.
History of Robotics and Artificial Intelligence
Before investigating the history of robotics and AI, clear definitions of robots and AI are
needed. For the purpose of this paper, a robot is, “primarily a machine with manipulators that can
easily be programmed to do a variety of manual tasks automatically.”15 Arguably, a robot
defined this way does not yet exist in the world, despite the many modern products referred to as
robots. Essential to this definition of a robot are two factors: its ability to affect and interact with
the physical world around it and its programmable automation. Here, robotics will refer to
certain technologies that would be a part of a robot or allow a machine to act in some ways like a
robot. Computer vision, for example, allows a machine to ‘see’ and interact with objects around
it without human intervention.
Similarly, AI is loosely defined, but here refers to “the branch of computer science
devoted to programming computers to carry out tasks that if carried out by human beings would
require intelligence,” including, “perception, reasoning, learning, understanding and similar
cognitive abilities.”16 Here, AI is specifically not embodied and is unable to affect its physical
surroundings. It exists as an expansion of computer capabilities, usually in the form of a
computer program, that represents and acts with intelligence. Throughout this paper, I refer to AI
as any technologies not physically embodied that act with something like a human intelligence.
For example, machine learning, a type of AI, is a technique in which a computer learns to carry
out a task without being explicitly programmed on how to do so.17 Instead, the computer
program is given a general goal, trained on known data relating to the task, and, “then unleashed
to solve similar problems with new information.”18 Machine learning programs have become so
complex that how a program reaches a decision is often unknown. This presents a “black box”
problem for researchers and policymakers, which I address in the final section of this paper.
Robotics and AI, as referred to throughout the paper, are specific technologies that represent
facets or subtypes of a robot or AI.
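The machine-learning loop described above, in which a program is trained on known data and then "unleashed to solve similar problems with new information," can be sketched in a few lines. The following centroid classifier is a toy illustration of my own (the function names and data are invented for this sketch), not a method discussed in this paper:

```python
# Toy illustration of machine learning: the program is never told the
# classification rule; it infers one from labeled examples (training data)
# and then applies it to new, unseen inputs.

def train_centroids(examples):
    """Learn one 'average' point (centroid) per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Classify a new point by whichever learned centroid is closest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], features))

# Known (training) data: two clusters the program is never explicitly told about.
training = [([1.0, 1.0], "low"), ([1.2, 0.8], "low"),
            ([5.0, 5.0], "high"), ([4.8, 5.2], "high")]
model = train_centroids(training)
print(predict(model, [1.1, 0.9]))   # -> low
print(predict(model, [5.1, 4.9]))   # -> high
```

Real systems such as Watson use vastly larger models and datasets, but the train-then-generalize pattern is the same.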
A useful guide to tracing the recent developments of AI and robotics technology is to
follow International Business Machines Corporation’s (IBM) research and computer product
development. IBM’s Watson made history in 2011 by beating 74-game champion Ken
Jennings on Jeopardy!. Watson’s victory signaled that impressive new ground had been broken
in the development of AI, as Jeopardy!’s puns, word games, and flipped answer-question format
had previously stymied AI’s natural language processing. In 2013, Watson diagnosed lung
cancer from interpreting x-rays, and, in 2016, was hired under the name ROSS by the law firm
BakerHostetler to help read bankruptcy documents.19,20 How did we get to Watson?
Computers are a necessity for both robotics and AI. Both a robot and AI are partly
defined by the execution of automated actions without human intervention, and this is achieved
with the programmability of a computer. The history of robotics and AI, then, begins with the
invention of computers, specifically ENIAC (Electronic Numerical Integrator and Computer).21
The history of robotics and AI waxes and wanes along lines of progress and failure, yet a
consistent theme emerges: the development of robotics and AI technologies was the result
of strong public-private partnerships, with one side picking up slack in funding, interest, or
research for the other over time. It is also important to note that federal funding started much of
the relevant research and is therefore among the earliest examples of policy decisions that
impacted the field.
ENIAC was a military project of the United States Army originally developed to
calculate artillery firing tables (projections of projectile flight paths) during World War
II.22 ENIAC, brought online in 1945, was the world’s first operational, general-purpose,
electronic digital computer and, importantly, was programmable.23 Manhattan Project scientists
Stanislaw Ulam and Edward Teller used ENIAC to model the feasibility of their hypothetical
hydrogen bomb, and the calculations in fact pointed toward what later became the successful
Teller-Ulam design for a hydrogen bomb.24
The next step forward was made by Alan Turing, a British computer scientist and
mathematician, when he described the conditions necessary for a ‘thinking machine’ in a 1950
paper titled “Computing Machinery and Intelligence.”25 Turing’s paper also formulates a way to
decide whether a machine can be said to be intelligent, since the nature of an intelligent machine,
and of thinking itself, was not agreed upon. To make this determination, Turing formulated a
test, which he referred to as the “Imitation Game” but which is now
more broadly known as the “Turing Test.”26 The paper formally founded the field of AI as a
branch of computer science, and inspired other mathematicians, computer scientists, and
engineers to begin working on the design of an artificially intelligent machine. While ENIAC
provided the necessary technology for AI and robotics to begin, Turing’s formulation of the
thinking machine gave the field an intellectual framing and direction.
Following Turing’s formulation of an intelligent machine, AI researchers Marvin
Minsky and John McCarthy, along with senior scientists Claude Shannon and Nathaniel Rochester
of IBM, organized the first gathering of AI researchers at Dartmouth College in 1956.27 The
conference formally named their field ‘AI’ and enumerated its mission, aiming to categorize and
catalogue “every aspect of learning or any other feature of intelligence [that] can be so precisely
described that a machine can be made to simulate it.”28 This began the general project of
attempting to simulate, usually through formal logic systems, a person’s thinking and then
program that process into a computer. Wild assertions and predictions—like two researchers
claiming to have solved the philosophical mind/body problem29—abounded during this period
following the enthusiasm borne out of the Dartmouth Conference. In particular, researchers
predicted the invention of AI able to solve any problem a human could—called general AI—
within 20 years.30 Eager promises such as these would set the stage for boom and bust cycles of
interest and funding for AI and robotics over the years. In 1963, seven years after the conference,
the Defense Advanced Research Projects Agency (DARPA) funded research at three major AI
labs, including Stanford (Stanford Artificial Intelligence Lab, or SAIL), Carnegie Mellon, and
the Massachusetts Institute of Technology (MIT).31 DARPA, perhaps swayed by the promises
and enthusiasm of the emerging field, gave each lab three million dollars per year.32 Two designs
for AI motivated the field. First, logic systems research focused on rigid, step-by-step problem
solving, and required the programs be fed simplified versions of the problems they aimed to
solve. The other approach, called perceptrons or neural networks, featured a network of decision
making agents within a larger system, modeled on the human brain.33 However, the projects in
the 1960s overpromised and raised unrealistically high expectations, which ultimately could not
be met. Falling short of the wild predictions posited by both the private and public sectors,
government leaders lost interest in the field and reduced associated funding.
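The perceptron approach described above can be illustrated with a single artificial "neuron" that adjusts its weights whenever it misclassifies a training example. This is a modern toy reconstruction in Python, not code from the period (historical perceptrons were often hardware, and full systems networked many such units):

```python
# A single perceptron: a weighted sum of inputs followed by a threshold.
# Training nudges the weights toward correct answers on labeled examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias mapping binary inputs to 0/1 labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                 # +1, 0, or -1
            w[0] += lr * err * x1              # strengthen/weaken each input
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it logical AND, a linearly separable function a perceptron can learn.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
for (x1, x2), _ in and_data:
    print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

A single unit like this cannot represent functions such as XOR, a limitation that contributed to the period's disillusionment with the approach; today's neural networks overcome it by stacking many such units in layers.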
The resulting shortfall in money and attention to AI, beginning with the cancellation of
DARPA project funding in 1974, inaugurated the First AI Winter.34 In addition to returning
underwhelming results, AI researchers ran into technical limitations; at that time, computing
power was insufficient to allow programming with the capacity to sift through the large volume
of data necessary for AI. The field was not revived again with federal funding until the early
1980s when the Strategic Computing Initiative (SCI) began, in response to other countries’
public announcements of AI projects. In particular, in 1981 the Japanese Ministry of
International Trade and Industry announced the “Fifth Generation Computer”, an $850 million
project aiming to build a machine with general AI in ten years.35 In response, SCI, funded by
DARPA, began funding AI research to create a machine to, “see, hear, speak, and think like a
human” in 1983.36 This time the research focused on expert systems and software. The
honeymoon was short lived, however, due to the stock market crash in 1987. This event,
combined with a shift in DARPA priorities, resulted in a slash to SCI’s funding. In total, the SCI
funneled over $1 billion into university and government labs for AI research, but by 1987
the field again fell short of achieving general AI.37 DARPA ended the SCI and, without a clear
way forward for federally funded AI research, the Second AI Winter began.
The Second AI Winter signaled an end to government funded research projects until
DARPA’s Grand Challenges of the 2000s, when advances in computing power allowed private
companies to lead their own AI projects. IBM took particular interest in marketable
demonstrations of rapid advances in computer processing power. In 1997, IBM’s chess-playing
computer, Deep Blue, faced then-world champion Garry Kasparov.38 Over six
games, Deep Blue won 3½–2½. The highly publicized “Man vs. Machine” match brought AI to the
general public, although Deep Blue represented advances in computer processing more than
advances in AI. The computer used a brute-force method of calculating the iterations of each
move to select the best option.39 However, the advances in computing power that propelled
Deep Blue to victory would be vitally important to the field of AI following the birth of the
internet and would revive a method for designing AI.
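The brute-force search behind systems like Deep Blue can be illustrated with minimax: enumerate every legal move, recursively evaluate the resulting positions, and pick the move with the best guaranteed outcome. This sketch plays the much smaller game of Nim (take 1-3 sticks; whoever takes the last stick wins); chess-scale search adds depth limits, evaluation functions, and pruning, all omitted here:

```python
# Exhaustive minimax search for Nim: a position is winning for the player
# to move if some move leaves the opponent in a losing position.

def best_move(sticks):
    """Return (move, wins): wins is True if the mover can force a win."""
    for take in (1, 2, 3):
        if take == sticks:
            return take, True            # taking the last stick wins outright
        if take < sticks:
            _, opponent_wins = best_move(sticks - take)
            if not opponent_wins:        # leave the opponent a losing position
                return take, True
    return 1, False                      # every reply loses; play on regardless

move, winning = best_move(7)
print(move, winning)   # from 7 sticks, taking 3 leaves 4, a losing position
```

The entire game tree for small Nim positions fits in memory; chess does not, which is why Deep Blue's victory was above all a triumph of raw processing power.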
Today’s AI is a revival of early neural-network architectures, made possible by the
marriage of increased computing power and a deluge of data supplied by the internet to feed the
programs.40 Watson learned to diagnose lung cancer by being fed millions of x-rays and then
checking those images against the billions of online radiology journal articles and textbooks.41
The AI revolution has come full circle, now powered by leaps in computing power and a flood of
data connected through the internet and cloud computing.
Federal Agency Action
To date, Congress has not passed comprehensive laws regulating the broad field of
robotics and AI. Without legislation passed by Congress uniformly addressing robotics and AI,
regulatory responsibility has been distributed piecemeal across various federal agencies. For
example, the Occupational Safety and Health Administration (OSHA), Federal Railroad
Administration (FRA), Federal Aviation Administration (FAA), National Highway
Traffic Safety Administration (NHTSA), and the Food and Drug Administration (FDA)
have regulations and guidelines that fall within their agencies’ respective domains. As discussed
in the introduction, robotics and AI, as they exist today, are multifaceted groupings of specific
technologies and areas of research, like automation, machine learning, or computer vision.
Current regulations of robotics and AI target specific facets of robotics or AI as they are used in
real-world applications and products. Thus automation as a feature of robotics can lead to
regulation by agencies as diverse as OSHA, when workplace safety is affected, or the FDA,
when the safety and efficacy of medical devices must be
determined. Given the variation of technologies and potential uses that could be included as
robotics or AI technologies—from drones to medical diagnostic tools to automated industrial
machines—the resulting policy actions regulate robotics not as a broad class of objects but in
specific domains of use.42 This framework is intended to balance the need to support innovative
development of new technologies with the need to protect the public. This section summarizes the state
of robotics and AI and details the resulting policy frameworks, with an emphasis on the growth
of technologies that spurred federal agency regulation.
Automation in Industrial Manufacturing
The first widespread example of both the implementation and subsequent regulation of
robots or AI occurred in manufacturing. Robotic automation was an outgrowth of Henry Ford’s
assembly line mass production in the 1920s to build Ford cars.43 Along the assembly line, each
factory worker repeated a highly specific part of the overall production of the car, and, by
limiting the labor of each worker to one step of building the whole car, the time it took to make a
Model T was cut from twelve and a half hours to six.44 Automated industrial robots take this
iterative, specialized production one step further, replacing humans altogether in parts of the
manufacturing process that are easily repeatable. For example, by 1984, automated robotic arms
welded the Ford Fiesta’s frame entirely without human intervention.45 Today’s advances in
robotics are replacing factory workers with machines that are increasingly capable of automating
the entire manufacturing process, leading to fears of an approaching Second Machine Age in
which entire factories may become robotic.
Cost is a major motivation driving adoption of factory robots. The initial, nonrecurring
capital investment to buy the robot can be markedly cheaper than paying an employee wages and
benefits over his or her lifetime.46 This trend has continued, with more facets of human labor in
manufacturing continuing to be automated. For example, in one Tesla car manufacturing plant,
opened in 2013, robots are employed to do multiple tasks. Working in coordination, these robots
can, “autonomously swap the tools wielded by their robotic arms in order to complete a variety
of tasks. The same robot installs the seats, retools itself, and then applies adhesive and puts the
windshield into place.”47,48 Improvements in industrial robots, like increasing the total range of
motion, improving fine motor control of the machine’s “hand,” increasing the computing power
and overall programmability, synchronizing multiple robots, or augmenting their autonomous
capacities by installing visual systems or wireless communication systems, have greatly
expanded the tasks and types of labor these machines can reliably automate, contributing to
their broader adoption.
Robotics and automation have a long history in industrial manufacturing, and their rapid
and widespread adoption has resulted partly from a ‘hands-off’ regulatory approach, particularly
at the federal level. Government activity in industrial robotics began with federally funded
robotics and artificial intelligence projects in the 1950s and 1960s, notably at SAIL, where Victor
Scheinman designed the first Stanford Arm. The Stanford Arm allowed for the “arm solution,” a robotics
term that describes the complete programmability of the arm’s movements in three dimensions
with an attached mini-computer. The development of the new technology leveraged federal
funding from DARPA, which had supported the lab since its founding in 1963 by AI researcher John McCarthy.49
Robots automating individual manufacturing processes rapidly spread to factories in the
1970s and early 1980s. In response to this growth, in 1982 the federal government took the first
step towards regulating these advances when the Office of Technology Assessment (OTA) began
studying the effects of robots and AI. The OTA, which existed from 1972 until 1995, provided
objective and authoritative analysis and policy recommendations of complex scientific and
technological issues to members of Congress.50 In 1982, the OTA released a report, “Exploratory
Workshop on the Social Impacts of Robotics,” following the widespread adoption of robots in
factories. This report was the first in a series on robotics and AI, focused primarily on the larger
social and economic effects of automated industrial robots.51 This report grappled with the
potential economic (e.g., increases in factory productivity, job loss, and increased capital
accumulation by the owner of the business) and social effects that could follow widespread
automation to enable policy makers to craft suitable regulations.52 Such reports served only to
inform and recommend policy objectives; no government regulations on automated industrial
robots existed at the time of the report.
While robots were first installed to take over the most dangerous parts of the
manufacturing process, their presence posed its own risk. By 1983, 66,000 automated industrial
robots had been installed, and robot-related workplace accidents began to be reported to OSHA.
An agency within the Department of Labor, OSHA, sometimes in partnership with state
government, oversees workplace safety standards and regulations to “assure safe and healthful
working conditions for working men and women by setting and enforcing standards and by
providing training, outreach, education and assistance.”53 OSHA’s authority to oversee the health
and safety of workplaces is mainly accomplished through compliance officers who visit and
inspect workplace environments to verify that they are in compliance with OSHA’s safety
standards and workplace regulations, levying fines on facilities that violate OSHA standards. The
first accident occurring between a human worker and an automated machine was reported on
November 12th, 1984, noted as a worker’s “right leg caught in a welding apparatus”.54 The
machine, a Hitachi robot welding the frames of General Motors vans, could be either automated
or manually controlled and was in its automated mode when it caught the leg of a worker who
had been working with the robot for nearly two months. The other workers failed to stop the
machine, as no override switch had been installed.55 More
accidents followed, with increasing consequences. In 1987, two industrial robots crushed two
workers to death. The first was caused by a robotic arm, which had automated the grabbing and
welding of metal plates and crushed the worker between two plates.56 The second worker was
killed by an automated lathe.57
In response to these accidents, the OTA produced a set of recommended safety standards
and workplace regulation requirements for manufacturers using automated industrial robots. The
guidelines, adopted into safety standards by OSHA in 1987, described necessary safety
precautions to help prevent further accidents between workers and automated machines as robots
continued to be adopted and grew increasingly complex in shape and function.58 They emphasize
the dangers different robots may pose and describe what physical and procedural precautions are
needed to ensure their safe operation.
At present, robots struggle to recognize the intentions and body language of humans, and
fail to recognize whether a worker is in danger unless their interactions are clearly defined. The
risk a robot presents to the worker varies greatly with the robot itself, its degree of automation,
the level, frequency, and kind of interaction with a human, and the type of work performed.59
Given the rapid adoption and evolution of industrial robots, OSHA regularly updates the OSHA
Technical Manual. Section IV, Chapter 4 of the manual describes robot types, robot system
types, and the variety of operations carried out by industrial robots.60,61 In addition, OSHA’s
training manuals and educational materials have been expanded to reflect the increasingly
technical nature of the industrial workplace.
The pace of industrial robotics adoption has only increased as robots have become
increasingly adept, networked, and, more recently, capable of learning from past mistakes.62
Today’s factories feature networked robots working and communicating in groups. Interestingly,
OSHA’s regulation of the health and safety of its workers has come full circle and is now
investigating the need to protect robot workers. In 2015, OSHA Administrator David Michaels,
recognizing the future growth of industrial robotics and automation, promised the agency was
developing a framework to give robot workers an equal footing to their human counterparts.63
“Robots are people, too,” remarked Michaels to local administrators as he unveiled the plan,
called APRIL (Awareness and Protection of Robotics Industry Layout).64 APRIL calls for policy
development of Personal Protection Equipment (PPE) to protect robots from humans. PPE for
industrial robotics, such as cages separating robots and humans, could help eliminate workers’
injuries and deaths as well as damage to machines. However, even as industrial robots automate
more tasks within the factory, this area of policy has remained focused on safety regulations
within the factory and for its workers. In keeping with the government’s reactive approach to regulating
robotics and AI, policies largely remain silent on groundbreaking technology until a
demonstrated danger makes apparent the need for standardized regulation of a specific domain of
robotics.
Robotically-Assisted Surgery Systems
Medicine, like many other professional fields, is poised to be reshaped by robotics and
AI. Created in 1906, the FDA is responsible for the protection and promotion of public health by
ensuring the safety and efficacy of drugs, medical devices, the food supply, cosmetics, and
biological products.65 It was originally designed to punish fraudulently produced or labeled
drugs, food or cosmetics by seizing a company’s goods if proven to be “adulterated”. However,
following a mass poisoning in which at least 100 people died after ingesting toxic ‘Elixir
Sulfanilamide’ that was being used as a medicine, the FDA was further empowered by
Congress’s Food, Drug, and Cosmetic Act of 1938. The act expanded FDA’s authority to
include a pre-market evaluation of the safety and efficacy of any drugs, medical devices, or
biological products. Under this authority, robotically-assisted surgical (RAS) devices are
regulated as medical devices.
Modern RAS systems trace back to 1985, when a robotic arm designed for welding car chassis
was repurposed to assist in a delicate, non-laparoscopic brain biopsy.66 In 1994, Computer Motion’s
AESOP system became the first system approved for endoscopic procedures.67 New ground
was broken in 2000 when the FDA approved Intuitive Surgical’s da Vinci surgical system for
general surgery.68 This was the first general purpose RAS cleared by the FDA. A RAS system is
an interface between the surgeon and the patient that, “enables surgeons to use computer
technology to control and move surgical instruments through tiny incisions in patient’s body”.69
Despite incorporating the term, “robotic,” RAS are teleoperated; a surgeon is always in control
of the surgical tools. The surgeon performs the procedure with machine-steadied tools while
looking at a screen that displays the surgery site. A RAS is composed of a console and two
carts.70 The console is the controls and display through which the surgeon views the surgery site
using a 3D endoscope. The bedside cart consists of the robotic arms, a camera, and surgical
instruments the surgeon uses to conduct the surgery. Finally, a separate cart is used for hardware
and software support that varies according to the surgery performed. RAS systems are used
almost exclusively for laparoscopic procedures as the robotically steadied surgical instruments
may provide greater precision during laparoscopic procedures than the surgeon’s hands alone.71
The FDA regulates the safety and efficacy of medical devices through pre-market
evaluations and post-market monitoring. Medical devices are classified into three categories
based on their expected levels of risk. RAS systems are regulated as Class II, 510(k) devices due
to their “moderate risks and moderate regulatory controls.”72 510(k) pre-market notifications
require manufacturers to submit a letter showing the, “device to be marketed is at least as safe
and effective, that is, substantially equivalent, to a legally marketed device,”73 in lieu of the FDA
itself testing the device. In order to demonstrate substantial equivalence and qualify for a 510(k)
notification, the RAS must have the same intended use, technological characteristics and safety
and efficacy as the predicate device already on the market.74 To date, the FDA has made no
independent standards specific to RAS devices, instead relying on predicate devices to guide
RAS’s safety and effectiveness.
In addition to pre-market regulation, the FDA also carries out its mission of ensuring
public health by conducting post-market safety studies. Incidents in which a device causes or
contributes to a death or malfunction require a medical report to be submitted to the FDA.75
Through the MedSun reporting system, a network of 250 hospitals nationwide works with the
FDA to understand device use. The FDA also conducts its own post-approval studies to track and
analyze the safety, effectiveness, and reliability of devices cleared through the 510(k) process. Finally,
the FDA uses discretionary studies to assess device performance and clinical outcomes and to
describe benefits and risks for patient sub-populations. Like manufacturing robotics, RAS systems are
regulated within the FDA’s policy directive to protect and promote the public health by ensuring
the safety and effectiveness of drugs, medical devices, and biological products. The practice of
medicine, training, and the implementation of RAS systems and AI tools like Watson, is left to
medical practitioners.
Positive Train Control
Historically, trains, like factories, have played an important role in spurring technological
development and subsequent regulation. As with OSHA, accidents resulting in deaths and
injuries spurred agency action. For trains and their operators, however,
automation and networked intelligence may be replacing humans in the name of safety. Created
by the Department of Transportation in 1966, the Federal Railroad Administration (FRA) writes
and enforces railroad safety rules and conducts research and development in support of improved
railroad safety and national rail transportation policy.76 The FRA oversees a technology that
displays features of both automation and a basic networked intelligence called Positive Train
Control (PTC). First pioneered and tested in a partnership between General Electric and Union
Pacific Railroad in the 1990s, PTC is a system of monitoring devices that track a train’s speed,
location, angle entering a turn, and position in relation to other trains, bridges, and train stations.
PTC can also automatically control train speed and movement in the event the train’s operator
fails to maintain the correct speed given track conditions.77 The location of the train along the
track, in relation to other trains and upcoming turns, stops, bridges, or highway crossings, is
continuously monitored by GPS and relayed, along with speed, turning angle and
communications with the train’s operator, to the network operation center. If a stop or slow down
signal is given, the train operator has eight seconds to respond before the network automatically
takes action and slows the train.78
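The supervision logic described above can be sketched as a simple decision rule evaluated on each monitoring cycle: the network issues a restriction, and if the operator does not respond within the window (eight seconds in the account above), the system intervenes automatically. The function and its return values are illustrative assumptions of mine, not FRA specifications:

```python
# Simplified sketch of a PTC-style overlay: compare the train's state against
# the active restriction and decide whether to warn, enforce, or stand by.

RESPONSE_WINDOW_S = 8   # operator response window described in the text

def supervise(speed_mph, limit_mph, seconds_since_signal, operator_braking):
    """Decide what the PTC overlay should do on one monitoring cycle."""
    if speed_mph <= limit_mph:
        return "ok"                       # train complies with the restriction
    if operator_braking:
        return "monitor"                  # operator is responding; keep watching
    if seconds_since_signal < RESPONSE_WINDOW_S:
        return "warn"                     # alert the operator; window still open
    return "enforce"                      # window expired: apply brakes automatically

print(supervise(60, 40, 3, False))   # -> warn
print(supervise(60, 40, 9, False))   # -> enforce
print(supervise(60, 40, 9, True))    # -> monitor
print(supervise(35, 40, 9, False))   # -> ok
```

A deployed PTC system layers this kind of rule over continuous GPS tracking, braking-distance models, and secure radio links to the network operation center, which is where much of the implementation cost and complexity discussed below arises.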
More than twenty major train accidents occurred between 2000 and 2008 in the United States
alone; however, the 2008 Metrolink crash in Chatworth, California in particular spurred
Congress into action.79 The passenger train hit a freight train head on after the train’s engineer,
distracted by texting, failed to hear and heed the train dispatcher’s warning about an upcoming
freight train, and ran a red signal.80 The crash, an unfortunate confluence of human distraction
and an inability to control the train’s speed remotely, killed 25 passengers and injured 135.
Given that PTC was developed to prevent this type of crash, Congress passed the Rail Safety
Improvement Act of 2008, which directed the FRA to require the installation of PTC systems on most
rail lines by 2015.81 To implement this law, the FRA, after two years of research and outreach to
railroad companies, published its final regulations for PTC systems on January 15, 2010.82 The
project has proved to be more complex and costly than previously expected, and the delays
forced the deadline to be pushed back. The Positive Train Control Enforcement Act of 2015
extends the deadline to 2020 and sets aside more money to help address many of the technical
difficulties that have slowed the implementation of PTC. In particular, finding and funding a
uniform broadband spectrum to use for the PTC system, which must span the entire rail network
and be secure from hacking or interference, has been slow. Still, according to FRA data, 38% of
freight and 29% of passenger locomotives had been equipped with PTC technology as of early 2017.
The FRA, acting on its duty to write and enforce railroad safety policies, mandated the
implementation of PTC to stop preventable train crashes.
The scale of PTC implementation is vast, and the FRA must assist rail companies
installing PTC to monitor an estimated 60,000 miles of track.83 To accomplish this, the FRA has
so far provided $650 million in funding for PTC implementation by rail companies and a $1 billion
loan to the Metropolitan Transportation Authority to implement PTC on the Long Island
Rail Road and Metro-North Railroad.84 Implementing the technology spans rail
infrastructure across the country and requires layers of city, state, and federal regulators working
together to put it into place. The FRA’s regulatory reach has expanded by the simple fact of PTC
being installed along 60,000 miles of railroad track.
Drones
Unmanned aircraft systems (UAS), more widely known as drones, highlight a growing
need for regulation following the explosion of robot-like technologies in recent years. Unmanned
aircraft systems have been at the forefront of flying technology since the dawn of flight. Pilotless
aircraft were first developed during World War I to drop bombs or act as flying torpedoes but
were not remotely controlled.85 The term “drone” was first coined in 1936 by the head of the
Navy’s research group, referring to the radio controlled aerial targets that had been developed for
pilot and anti-aircraft training.86 In the years since, UAS have been developed and deployed for
reconnaissance and surveillance purposes across the globe; today, drones are used for remote
strikes as an extension of air power without risking a pilot in the field.87 Outside of the
military, UASs enjoy a long hobbyist history, beginning with radio-controlled model airplanes
first sold in the 1950s.88
Beginning in the 2000s, UAS grew more advanced as microchips and software were
installed, driving a widespread adoption of drones for personal and commercial purposes.
Combined with steadily declining costs to purchase, equip, and use drones, an estimated 2.5
million commercial or personal drones were expected to fly in 2016, with an estimated 7 million
by 2020, according to the FAA report, “FAA Aerospace Forecast: Fiscal Years 2016-2036”.89 As
highlighted in the report, advances in drone sensors, high-resolution cameras, and the ease of
remote piloting by smartphone have produced a boom in commercial and personal
drone sales. The FAA predicts a commercial drone boom, with the total number of UAS used in
commerce rising from 600,000 in 2016 to 2.5 million by 2017.90, 91 The five largest sectors predicted to
incorporate drones are real-estate/aerial photography, insurance, agriculture (crop inspection and
pesticide spraying), industrial inspection (bridge, powerline, and antennae inspections), and
government (primarily law enforcement). Along with the expected growth in the commercial
sphere, personal or hobbyist drones were expected to rise to 4.3 million by 2017.92
In an attempt to keep up with the rising tide of drones, several key policies have been
enacted in the last five years. Importantly, in contrast to industrial robot and PTC policies, which were
responses to several accidents, UAS policies grew out of conversation between multiple
stakeholders and the FAA. Private companies, university researchers, hobbyists, and safety and
privacy advocates all contributed to a proactive public-private approach. Two reasons made the
public-private approach possible. First, drone hobbyists had a prior history of self-regulation
under their umbrella organization, the Academy of Model Aeronautics (AMA), created in
1940.93 Second, the widespread adoption of drones brought together the numerous and diverse
stakeholder interests whose vision for drone uses intersected and overlapped, triggering the effort
to develop a framework of policies that could best accommodate all parties.
In 2012, facing a groundswell of drone technology and stakeholder interest, Congress
passed the FAA Modernization and Reform Act to allow the agency to develop a framework and
infrastructure to integrate new technologies, like UASs.94 The basic requirements for a
commercial aircraft—a certified and registered craft, a licensed pilot, and operation approval—
continued to apply to UAS. But, as the framework to integrate drones was developed, the act also
created the Section 333 exemption, which granted the FAA the authority to determine the need
for a commercial drone to obtain an airworthiness certificate on a case-by-case basis. However,
approvals under the Section 333 exemption were frustratingly slow, and it proved to be too narrow
and restrictive a pathway for companies. In fact, the difficulties caused by the Section 333
exemption laid the groundwork for a revised drone rule to be issued later. The FAA
Modernization and Reform Act also brought personal or hobbyist drones under formal FAA regulation, in
place of the self-regulation framework previously drawn up in coordination with the AMA.95
Drone use had expanded to the point that the FAA decided a unified and codified set of federal
regulations needed to replace the system of self-regulation. Section 336 of the FAA
Modernization and Reform Act codified existing operating procedures and included language
that UASs for personal use must still stay within visual-line-of-sight (VLOS) of the operator at
all times. This language in particular underscored the rationale for federal regulation of hobbyist
drones: UAS pose serious risks to privacy, safety, and an integrated national airspace if
left unrestricted by VLOS rules. Section 336 also exempted personal drones from future FAA
rules, setting that piece of the larger framework in place. As the larger framework and
infrastructure were developed, the FAA worked closely with stakeholders to best accommodate the
diverse and growing uses of drones. Drone delivery services, represented by Amazon and Google,
along with researchers, farmers, and many types of industry, communicated their priorities and
concerns to FAA regulators as the regulations took shape. Issues beyond private stakeholders’
interests were also part of the conversation, including public privacy concerns over surveillance
drones raised by the Electronic Privacy Information Center.
The resulting policy framework was the FAA’s final rule published on June 28th, 2016,
known as the “Department of Transportation (DOT) Operation and Certification of Small
Unmanned Aircraft Systems”.96 The Rule put into place firm regulations for the type of drone
allowed in commercial operations. With Section 333 in mind, the Small UAS Rule clarified and
enabled drone use to ensure commercial drone use could expand without endangering the public.
First, the drone must be in constant VLOS of either its operator or another person in
communication with the operator. This restriction, while burdensome for many of the drones’
uses, like flying over large areas of crops, simplifies and lowers the risks of UAS operation
beyond VLOS in the National Airspace. Second, much to the chagrin of companies developing
drone delivery systems, each UAS must be controlled by one person; this one-to-one system
prevents large scale automation of a drone’s flying, surveilling, or delivering. Third, the drone’s
pilot must have a remote pilot certification. Finally, the drone and its payload cannot exceed 55
pounds, nor can it be operated more than 400 feet above the ground or faster than 100 miles per hour.
In this final rule, the FAA balanced the need for a well-regulated National Airspace and risks
associated with drones operating outside of VLOS of their operators against the opportunity for the
rapid introduction of commercial drones across a range of industries. The rule, while prohibiting
drone delivery services like Amazon’s and Google’s for the time being, will allow commercial
drones to create an estimated 100,000 jobs and $82 billion in new services over the next
decade while preserving the integrity of the National Airspace.97 The FAA Modernization and
Reform Act and FAA “Operation and Certification of Small Unmanned Aircraft Systems” Final
Rule respond to and unify diverse and consequential uses for UAS, particularly in commerce and
research, with a framework of policies moving beyond self-regulation to ensure a well-integrated
National Airspace.
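The rule's operating limits lend themselves to a simple validation sketch. The function and field names below are hypothetical assumptions for illustration; only the numeric limits (55 pounds, 400 feet, 100 miles per hour), the VLOS requirement, the one-operator rule, and the pilot certification requirement come from the rule as summarized above.

```python
# Illustrative check of the Small UAS Rule's operating limits as described
# in the text. This is a sketch, not the FAA's rule text; names are invented.
MAX_WEIGHT_LB = 55       # drone plus payload
MAX_ALTITUDE_FT = 400    # ceiling above ground level
MAX_SPEED_MPH = 100      # maximum speed in flight

def operation_allowed(weight_lb, altitude_ft, speed_mph,
                      within_vlos, operators_per_uas, pilot_certified):
    """Check a planned small-UAS operation against the rule's limits."""
    return all([
        weight_lb <= MAX_WEIGHT_LB,      # craft plus payload limit
        altitude_ft <= MAX_ALTITUDE_FT,  # altitude ceiling
        speed_mph <= MAX_SPEED_MPH,      # speed limit
        within_vlos,                     # operator or observer keeps VLOS
        operators_per_uas == 1,          # one person controls one UAS
        pilot_certified,                 # remote pilot certification
    ])
```

The one-to-one operator check illustrates why large-scale automated delivery fleets fall outside the rule: any plan assigning one person to several drones fails the check.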
Self-Driving Cars
Much like the FAA’s response to the spread of drones, the onset of self-driving cars
catalyzed the need for government action. Stakeholders recognized the need to create safety
guidelines, an upgraded infrastructure to integrate self-driving technology, and ongoing public-private collaboration. However, in many respects, the self-driving car market and its associated
parties are larger than those surrounding drones. For example, the market for fully autonomous
vehicles is estimated at $87 billion, compared to $3.3 billion for the commercial drone market.
Self-driving cars serve the public interest by providing greater access to transportation for
disabled or elderly drivers, supporting cutting edge research, and increasing safety along our
roads and highways.98
Self-driving technology has been tested in cars as far back as the 1920s and 1930s, when
radio-controlled cars were driven as a show on city streets.99 In the 1950s, General Electric (GE)
tested “self-driving” cars with a series of electrified strips set into the road at vital points
that could control switches in the vehicle, instructing the car when and at what angle to turn, to
change speeds, and brake.100 In 1958, GE, in coordination with the Nebraska Department of
Transportation, successfully demonstrated such a system on a 400-foot stretch of public highway
just outside of Lincoln, Nebraska. Interestingly, such a system would have demanded early and
intensive government funding and regulation but would have also pushed the auto industry to
create cars as a mix of public and private transportation. However, the government balked at the
estimated $100,000 per mile cost of putting such a system in place.101
A new approach emerged in the 1980s from the resurgence of research on neural
networks, which had begun decades earlier: the Autonomous Land Vehicle (ALV), an
offshoot of DARPA’s Strategic Computing Initiative. Initiated in 1983, the Strategic Computing
Initiative funded research into advanced computer hardware and AI to modernize the military. 102
The research, led by Carnegie Mellon, SRI International, and Martin Marietta (later to merge
with Lockheed Corporation to become Lockheed Martin in 1995) culminated in 1987 when the
ALV successfully navigated over 2,000 miles on a rocky test course.103 The ALV drove without
the use of an internal map or GPS system, relying instead on sensors to complete the course. This
system, pioneered in ALV, formed the basis for contemporary self-driving cars.
In 1991, following these successes, Congress passed the Intermodal Surface
Transportation Efficiency Act, a transportation authorization bill authorizing and instructing the
United States Department of Transportation (DOT) to demonstrate “an automated vehicle and
highway transportation system by 1997”. 104 The National Highway Traffic Safety
Administration (NHTSA) headed the project, which, in conjunction with several universities, put
on a demonstration of 20 self-driving cars along a section of I-15 highway outside of San Diego,
mixing in with non-automated cars and driving in traffic. However, further projects were
canceled due to cuts in DOT’s funding. Federal funding again returned with DARPA’s Grand
Challenges. First offered in 2004, the DARPA Grand Challenge was a series of prize competitions
designed to spur autonomous vehicle development of both hardware (the design of the car) and
software (autonomous navigation).105 A mix of universities and private companies competed for
the one million dollar prize; the success of the first Grand Challenge led to five in total, with
later Challenges focusing on developing robotics and AI, rather than self-driving cars alone.
Many competitors in the DARPA Grand Challenges went on to work for the companies that are
currently developing self-driving car technologies.
Not all companies are interested in fully automated (i.e., completely without human
intervention) vehicles. Tesla, for example, combined automatic braking, rearview cameras, and
distance sensors that other car companies introduced to create an ‘Autopilot’ mode.106 Through
Autopilot, Tesla’s cars maintain appropriate speed and distance from other cars and stay between
lane markings; however, the driver’s attention and intervention are still required.107 Tesla is
working toward a fully autonomous car through successive updates to its Autopilot. Traditional
car companies take another approach to automated driving,
incrementally introducing features that assist or fully automate some parts of driving. Many
vehicles now use radar sensors to warn the driver if he or she is too close to another car, and
some engage the brakes automatically if the driver fails to respond. Mirroring the various
approaches to a self-driving car, the current laws governing them are a complicated patchwork
that often differs according to the type of self-driving car.
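The warn-then-brake assistance described above can be illustrated with a time-to-collision sketch. The thresholds, names, and structure here are invented assumptions for illustration and do not reflect any manufacturer's actual system.

```python
# Illustrative sketch of warn-then-brake driver assistance based on
# time-to-collision (TTC). Thresholds are invented for illustration.
def assist_action(gap_m, closing_speed_mps, warn_ttc_s=2.5, brake_ttc_s=1.0):
    """Decide whether to warn the driver or engage the brakes.

    gap_m: radar-measured distance to the vehicle ahead, in meters.
    closing_speed_mps: rate at which that gap is shrinking, in m/s
        (zero or negative means the gap is steady or growing).
    """
    if closing_speed_mps <= 0:
        return "none"                    # not closing on the car ahead
    ttc = gap_m / closing_speed_mps      # seconds until contact at current rate
    if ttc <= brake_ttc_s:
        return "brake"                   # driver failed to respond in time
    if ttc <= warn_ttc_s:
        return "warn"                    # alert the driver first
    return "none"
```

A 10-meter gap closing at 5 m/s gives a two-second TTC and a warning; a 4-meter gap at the same closing speed falls inside the braking threshold.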
Much of the policy so far has been a binary question of whether an autonomous vehicle is allowed on
the road or not, as city and state regulators are in charge of issuing vehicle licenses and
registrations. There is also an ongoing competition at the state level between car
companies and tech companies to be the leader in setting policy. Tennessee, Georgia, Illinois, and Maryland
have crafted bills to allow only traditional automakers access to test and develop their
self-driving cars,108 while Nevada’s Department of Motor Vehicles granted Google’s fully
autonomous vehicles the first license for a self-driving car in 2012. By comparison, Uber’s fleet
of self-driving cars left San Francisco after the California Department of Motor Vehicles revoked
their registrations, as no fully automated vehicles had been approved by the state; in
Pittsburgh, self-driving Ubers have been offering rides since 2016.109 The current piecemeal
approach will lead to more confusion because no fewer than twenty companies, ranging from
Silicon Valley startups to traditional car manufacturers like Ford and BMW, have entered the
self-driving car market, and the number continues to grow.110
Facing a groundswell of self-driving technologies that redefine what ‘driving’ means and
who does it, a wider framework of policies has been called for by many of the companies
themselves, safety and consumer advocates, and regulators. Prototype cars and trucks, automated
or not, now regularly cross in and out of municipalities and state lines and drive on roads
maintained by state and local infrastructure funds. Given the number of cars that could become
autonomous and the need for a seamless integration of self-driving technology into the existing
infrastructure for drivers and pedestrians alike, a larger and coordinated framework of policies
is needed to satisfy the public and private interests touched by automated driving. Like in the
case of automated industrial robots, existing safety regulations were amended to accommodate
the arrival of robotics and AI to the roads. The National Highway Traffic Safety Administration
(NHTSA) oversees the national safety regulations for motor vehicles, setting the safety standards
to which all motor vehicles must conform. In September 2016, NHTSA released a set of
voluntary guidelines for automated driving technology called, “Federal Automated Vehicles
Policy”.111 As guidelines, the “Federal Automated Vehicles Policy” is a set of policy
recommendations to expand the agency’s mission of enhancing safety on the roads. The
complexity of regulating autonomous vehicles, enacting the necessary infrastructure,
and balancing private and public interests led NHTSA to issue a set of guidelines first.
Guidelines, while increasing the uncertainty of the final framework in the short-term, allow for
an ongoing conversation between stakeholders and a review of proposed regulations before the
final policies are put into place. This document breaks regulation of self-driving cars, called
Highly Automated Vehicles (HAVs), into four broad parts: vehicle performance guidelines for
automated vehicles, model state policy, NHTSA’s current regulatory tools, and recommended
new tools and authorities.112 These regulations apply to systems that are considered fully
automated, i.e., that can complete the entire driving task without human intervention, or
semi-automated, like Tesla’s Autopilot. The guidance covers all HAVs driven on public roads, so any organization
testing or developing automated vehicles needs to be covered under NHTSA’s guidance as well.
Importantly, regulation of automated vehicles, despite their expected disruption of industries and
transformation of infrastructure, cities, and personal mobility, is motivated first by the federal
government’s safety rules, according to the harm the technology may pose to the public.
NHTSA’s guidelines outline the performance standards of HAVs to meet its safety
regulations for vehicle manufacturers, which may, in this case, be traditional car manufacturers,
researchers, or software companies involved in testing or deploying autonomous vehicles.113
NHTSA’s standards outline how well a car must protect the passengers during a crash in addition
to how it should function when driven. For HAVs, the safety standards must cover automated
driving and all the technologies that contribute, making it a highly technical review process. A
new fifteen-point standard would amend NHTSA’s current vehicle manufacturer safety
standards, the “Federal Motor Vehicle Safety Standards and Regulations”, as authorized by the
National Traffic and Motor Vehicle Safety Act of 1966, to include safety regulations explicitly
for HAVs.
Although the amended safety standards are currently just guidelines, NHTSA has
requested that vehicle safety reports be voluntarily submitted so that the effectiveness of the
proposed regulations may be tracked and used in later rule-making by the agency. These
recommended safety amendments reflect the unique challenges automated cars may pose to
regulators, as ‘safety’ has expanded into the domains of data sharing between customers,
companies, and third parties, the resulting tension between data sharing and personal privacy,
cybersecurity, and complex ethical situations revolving around questions like whom a vehicle will
choose to hit during a crash that would inevitably involve multiple people. These are complex
questions to address, let alone regulate, without impeding the development of autonomous
vehicles, and this complexity is reflected in the diversity of the stakeholders participating in the
regulatory process and the multifaceted framework that has so far developed.
The proposed guidelines would follow traditional regulatory jurisdictional
responsibilities: states are in charge of vehicle licensing and registration, traffic laws, and motor
vehicle insurance and liability structures, while federal agencies oversee aspects of motor vehicle
regulation like safety. The Department of Transportation would be in charge of licensing HAVs as
the ‘driving’ in fully automated vehicles is not done by individuals but by the vehicle itself;
therefore, separate registration and licensing processes to accommodate HAVs need to be
added.114 The guidelines also propose restructuring insurance and liability rules to spread risk
among owners, operators, passengers, and manufacturers. Emphasized throughout the
recommendations for states is the need to avoid a patchwork of laws from state to state, or laws
different enough that they encourage competing interests among car makers. Public roads exist within
and across state lines and therefore require state and federal policy to mesh and leave no
regulatory gaps.
The final two sections of NHTSA’s guidelines review its current authorities and
regulatory tools and recommend where they will need to be expanded. The tools would support
the expanded authority of NHTSA to regulate autonomous vehicles, in particular by allowing
NHTSA to require pre-market testing and affirmative approval for all HAVs, a change from its
current purview.115 New proposed statutory amendments would grant further authority to
NHTSA to regulate HAVs in three main areas. First, given the potential scope of an autonomous
system, NHTSA would be able to require manufacturers to “cease and desist” their operations in
emergency situations. A single city or manufacturer may be responsible for a live network of
HAVs, and so immediate intervention in the network in case of a systemic error or in times of
an emergency is necessary. Second, the number of vehicles exempted from some federal
regulations for research purposes would be expanded. Research into total integration of the
autonomous vehicles into the larger infrastructure needs to be expanded, and the exemptions
allow testing to progress. Finally, ‘over the air,’ or wireless, software updates would be first
checked for safety and errors by NHTSA.116 As outlined above, part of the recommended
expansion of federal authority is in response to the highly technical nature of carrying out basic
regulations about cars, and cities or states may not have the money or ability to support the
necessary highly technical staff. Another reason an expansion of federal regulatory authority
may be needed is the sheer scope of implementing a new, potentially disruptive technology
across the United States.
Autonomous vehicle technology is not yet ready for the leap to fully automated driving.
More time and better synchronized testing and development between public and private
stakeholders is needed. Policies on the books now may miss what HAVs and their infrastructure
will look like and cost; worse, regulations could limit or altogether prevent HAVs from hitting
the road by regulating before the technology is mature. NHTSA must ensure the public interest is
served by increasing safety associated with driving and expanding access to transportation.
Given the scope of implementation and potentially transformative nature of HAVs, NHTSA’s
guidelines allow for an open and private-public approach to policy-making that sets the stage for
well-targeted regulations in the near future.
Conclusion and Future Implications
The rise of robotics and AI and government policy-making have been intertwined since
the birth of the computer during World War II. At first, the government privileged the development of
these technologies through decades of funding research projects and universities and holding prize
competitions to spur innovation. As the technologies spread and matured, federal agencies
established regulatory frameworks in response to safety needs. Recently, however, robotics and
AI have progressed to a point that is straining the patchwork of federal agency regulations due to
the technology’s ubiquity, technical complexity, and potential capabilities. Automation and the
continuing evolution of robotic and AI technologies will undoubtedly reshape our economic,
social, and political lives.
Part of what this summary of the current state of robotics and AI
policy traces is an increase in the scale, cost, and complexity of enacting policy frameworks for the
evolving robotic and AI technologies. OSHA was the first agency to intersect with and regulate
robotics by amending and updating its existing safety standards. The FDA was able to
accommodate RAS systems into its current regulatory structure for medical devices, and, given it
does not regulate the practice of medicine, currently has no authority over AI diagnosis support
tools. The FRA faces a scale and cost to implement PTC that is substantially higher than what
either the FDA or OSHA faces. Finally, both the FAA and NHTSA require expanded authorities
and regulatory tools to put into place the necessary policy frameworks for drones and HAVs. The
policy frameworks to regulate robotic or AI technologies within an agency’s domain have
necessarily become more complex over time.
The increasing scale, cost, and complexity of enacting policy frameworks reflects the
evolving nature of robotic and AI technologies. These technologies are increasingly combining many
capabilities in a single object of regulation, multiplying the policy concerns and compounding the
regulatory tools needed by any single agency.117 For example, to carry out this mission with HAVs,
NHTSA must have the technical expertise to determine the crashworthiness of both the physical
car and the safety of the self-driving technologies. Implicit in the range of technologies an HAV
uses to drive, however, are concerns beyond NHTSA’s traditional ones of safety and
crashworthiness. NHTSA will also need to regulate the data collection, sharing, and storage inherent
to each car, the privacy of the driver, and the cybersecurity of the HAV. The issue extends well
beyond self-driving cars, however. Smart homes connected to the internet of things, networked
factories, networked trains, and even future medical devices will all be connected to the internet
and susceptible to being hacked. Data sharing, personal privacy, and ethical frameworks to direct
robots or AI that coexist with humans all result from the evolving technology and demand a unified
approach to regulation.
The current patchwork of federal agencies that is reacting independently to developments
in robotics and AI in each specific domain will not be able to adequately address the changes and
challenges brought by the Second Machine Age.118 Instead, a unified approach that puts robotic
and AI technology first is needed. There are two steps Congress can quickly take to accomplish
this. The first step is to bring back the Office of Technology Assessment (OTA). The OTA produced
nonpartisan analysis by experts in science and technology fields, lending their expertise to
legislation and regulations and producing reports on science and technology topics. At its time of
closure in 1995, the OTA had produced a
total of 750 reports on topics ranging from acid rain and global climate change to health care and
polygraphs. Its closing prompted Republican House Representative Amo Houghton to say, “We
are cutting off one of the most important arms of Congress when we cut off unbiased knowledge
about science and technology.”119 Reinstating the OTA would provide members of Congress
with objective and authoritative scientific and technological analysis on complex issues for a bill,
report, or set of regulations being considered. And reviving the Office of Technology
Assessment is relatively simple to do because the Office was never abolished. It was defunded
and stripped of its experts, but no new legislation recreating the office is needed to bring it back.
Its annual budget at its closing in 1995 was $20 million, a drop in the bucket of Congress’s $3.2 billion budget
for itself.120 Robotics and AI will soon reshape our economy, our social lives, and our political
institutions, and Congress needs scientific and technological expertise to craft the right
legislation for our future.
In addition to bringing rigorous scientific and technological analysis back to Congress, a
new federal agency dedicated to robotics and AI is needed. When radio was invented, the
technology proved to be transformative for our society and required infrastructure that spanned
the country, so the Federal Radio Commission was created. Now the FCC, the agency has smoothed the
integration of radio, cell phones, and the internet into society in ways that aimed to serve the
public interest. Robotics and AI stand as transformative technologies, but their incorporation has
been far from smooth. Economic dislocation has already begun with little policy oversight. And
creating an agency to effectively govern a complex set of regulatory needs is historically the rule,
not the exception. Furthermore, as robotic and AI technologies continue to evolve, the policy
frameworks needed to balance public and private interests will grow increasingly complex and
depend on highly technical knowledge. Relying on individual agencies to regulate these
technologies in their specific domains blocks the necessary accrual of policy expertise, a problem a
consolidated federal agency could solve. Furthermore, the US government’s historical approach
of recruiting the “best and brightest” has failed to bring in talented computer scientists,
engineers, and technology policymakers.121 Instead, the best have gone to work for tech
companies like Google, Amazon, and Facebook, where they work on cutting-edge technologies
and are rewarded with high salaries. In addition to being better positioned to recruit the best
talent, a unified federal agency would be better suited to deal with the issues, like cybersecurity,
that cut across multiple technology domains. Finally, a federal agency is needed to give the
public a hand in this new world being forged. Currently, the policy frameworks are being
hammered out among a distributed collection of stakeholders, but because the agencies are
constrained in their regulatory authority, the public interest and the voice of citizens are
similarly constrained. For example, self-driving cars have the potential to reshape our
relationships to our commutes, our intuitions about harm and risk, the design of our cities,
and our foundational ideas about property.122 However, NHTSA’s authority to regulate, and
therefore capacity to hear and be influenced by the public, is limited to that of safety of the
vehicle itself. Who will speak for the public when our cities and sense of property are changed
by the forces of robotics and AI? A federal robotics and AI agency could help mitigate and steer
the United States through the disruptive waves to ensure we are all best served as we enter the
Second Machine Age.
1
Roberts, Adam. The History of Science Fiction. (New York: Palgrave MacMillan, 2006), 168.
Ibid.
3
Popper, Nathaniel. “The Robots are Coming for Wall Street.” New York Times.
https://www.nytimes.com/2016/02/28/magazine/the-robots-are-coming-for-wall-street.html. Accessed 4/16/17
4
Ford, Martin. The Rise of the Robots. (London: OneWorld Publications, 2015), 1-12.
5
Sills, Anthony. “IBM Ross and Watson Tackle the Law.” IBM (blog).
https://www.ibm.com/blogs/watson/2016/01/ross-and-watson-tackle-the-law/. Accessed 3/21/17.
6
Son, Hugh. “JP Morgan Does in Seconds What Took Lawyers 360,000 Hours.” Bloomberg News.
https://www.bloomberg.com/news/articles/2017-02-28/jpmorgan-marshals-an-army-of-developers-to-automate-high-finance. Accessed 4/15/17.
7
McKinsey Global Institute. “A Future that Works: Automation, Employment and Productivity”. January 2017.
8
Brynjolfsson, Erik and McAfee, Andrew. The Second Machine Age: Work, Progress, and Prosperity in a Time of
Brilliant Technologies. (New York: W. W. Norton Company, 2014), 2-12.
9
McKinsey Global Institute. “A Future that Works: Automation, Employment and Productivity”.
10
Leila Takayama, Wendy Ju, and Clifford Nass. “Beyond dirty, dangerous and dull: what everyday people think
robots should do.” Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction
(2008), 25-32.
11
Association for Advancing Automation. “Robots Fuel the Next Wave of US Productivity”. White Paper. October
2015.
12
A Roadmap for Robotics: From Internet to Robotics. 2016 Edition.
13
Ibid.
14
McKinsey Global Institute. “A Future that Works: Automation, Employment and Productivity”.
15
Gevarter, William. Intelligent Machines. (Upper Saddle River: Prentice-Hall Publications, 1985), 8.
16
Gevarter, William. Intelligent Machines. 150.
17
Ford, Martin. The Rise of the Robots. 1-12.
18
Ibid.
19
Cha, Arianna. “IBM Supercomputer Watson’s Next Feat? Taking on Cancer.” Washington Post.
http://www.washingtonpost.com/sf/national/2015/06/27/watsons-next-feat-taking-on-cancer/. Accessed
4/18/17.
20
Jesus, Cecille. “IBM Lawyer Ross Has Been Hired By Its First Official Law Firm.” Futurism.
https://futurism.com/artificially-intelligent-lawyer-ross-hired-first-official-law-firm/. Accessed 4/7/17.
21
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial
Intelligence. (Natick: AK Peters Ltd, 2004), 111-136.
22
McCartney, Scott. ENIAC: The Triumphs and Tragedies of the World's First Computer. (New York: Walker & Co., 1999).
23
William T. Moye. “The Army Sponsored Revolution.” U.S. Army Archives. January 1996.
http://ftp.arl.mil/~mike/comphist/96summary/index.html. Accessed 4/01/17.
24
McCartney, Scott. ENIAC: The Triumphs and Tragedies of the World's First Computer. (New York: Walker & Co., 1999).
25
Turing, Alan. “Computing Machinery and Intelligence.” Mind 59 (1950): 433-460.
26
The Turing Test is a functional test of machine intelligence: if a machine can hold a conversation with a person,
and that person cannot distinguish the machine’s dialogue from that of a human, then the machine can be
considered intelligent.
27
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial
Intelligence. 111-136.
28
McCarthy, John, et al. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. 31
August 1955. Retrieved April 17, 2017.
29
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial
Intelligence. 111-136.
30
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. (New York: Basic Book, 1993),
52-107.
31
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial
Intelligence. 111-136.
32
Adjusted for inflation, that means each lab received approximately 24 million dollars in today’s money per year.
33
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial
Intelligence. 111-136.
34
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. 52-107.
35
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial
Intelligence. 111-136.
36
Roland, Alex; Shiman, Philip. Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993.
(Cambridge: MIT Press, 2002), 84-112.
37
Ibid.
38
Hsu, Feng-Hsiung. Behind Deep Blue: Building the Computer that Defeated the World Chess Champion.
(Princeton: Princeton University Press, 2002), 92-97.
39
Hsu, Feng-Hsiung. Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. 107.
40
Kaplan, Jerry. Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. (New
Haven: Yale University Press, 2015), 23-28.
41
Cha, Arianna. “IBM Supercomputer Watson’s Next Feat? Taking on Cancer.” Washington Post.
http://www.washingtonpost.com/sf/national/2015/06/27/watsons-next-feat-taking-on-cancer/. Accessed
4/18/17.
42
Calo, Ryan. “Why We Need a Federal Agency on Robotics.” Scientific American.
https://www.scientificamerican.com/article/why-we-need-a-federal-agency-on-robotics. Accessed 3/2/17.
43
McKenzie, Sheena. “Rise of the Robots: The Evolution of Henry Ford’s Assembly Line.” CNN Money.
http://money.cnn.com/gallery/technology/2015/04/29/ford-factory-assembly-line-robots/6.html. Accessed
3/15/17.
44
Ibid.
45
Ibid.
46
Association for Advancing Automation. “Robots Fuel the Next Wave of US Productivity”.
47
Ford, Martin. The Rise of the Robots. 32-45.
48
Lavrinc, Damon. “A Peek Inside Tesla’s Robotic Factory.” Wired. July 16, 2013.
https://www.wired.com/autopia/2013/07/tesla-plant-video/. Accessed 2/13/17.
49
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. 52-107.
50
O’Keefe, Ed. “When Congress Wiped an Agency off the Map.” Washington Post.
https://www.washingtonpost.com/blogs/federal-eye/post/when-congress-wiped-an-agency-off-the-map/2011/11/29/gIQAIt0J9N_blog.html. Accessed 4/12/17.
51
Office of Technology Assessment. “Exploratory Workshop on the Social Impacts of Robotics”. 1982. NTIS #PB82-184862.
52
McKinsey Global Institute. “A Future that Works: Automation, Employment and Productivity”.
53
https://www.osha.gov/about.html
54
Occupational Safety and Health Administration Accidents Database. Keyword “Robot”.
55
Ibid.
56
A lathe is a machine used to shape wood, metal, or other materials by rotating the workpiece against a cutting tool.
57
Occupational Safety and Health Administration Accidents Database. Keyword “Robot”.
58
Occupational Safety and Health Administration, Safety Standards. 01-12-002 [PUB 8-1.3].
59
Occupational Safety and Health Administration. Technical Manual.
https://www.osha.gov/dts/osta/otm/otm_iv/otm_iv_4.html
60
Ibid.
61
To date, 38 incidents involving a robot have been reported to OSHA, 25 of which resulted in deaths.
62
Ford, Martin. The Rise of the Robots. 23-24.
63
Christ, Ginger. “OSHA Unveils New Robot Safety Plan.” EHS Today. http://ehstoday.com/osha/osha-unveils-new-robot-safety-plan. Accessed 3/24/17.
64
Ibid.
65
Food & Drug Administration. https://www.fda.gov/
66
Samadi, David. “History of Robotic Surgery.” Blog. http://www.roboticoncology.com/history/. Accessed 4/14/17.
67
Ibid.
68
Ibid.
69
FDA. “Products and Medical Procedures.”
https://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/SurgeryandLifeSupport/ComputerAssistedS
urgicalSystems/default.htm
70
Ibid.
71
Whether the advertised greater precision of surgical robots translates into better health outcomes is still being
investigated. See: Payne TN, Dauterive FR. "A Comparison of Total Laparoscopic Hysterectomy to Robotically
Assisted Hysterectomy: Surgical Outcomes in a Community Practice." (2008) and Barbash, Gabriel, and Glied,
Sherry. “New Technology and Health Care Costs—The Case of Robotically Assisted Surgery.” (2013) for contrasting
views.
72
Food and Drug Administration. “Device Regulation and Guidance.”
https://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/Overview/
73
21 CFR 807.92(a)(3).
74
Food and Drug Administration. “Device Regulation and Guidance.”
https://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/HowtoMarketYourDevice/PremarketSubmiss
ions/PremarketNotification510k/default.htm
75
Ibid.
76
United States. Department of Transportation Act. 49 U.S.C. §103, section 3 (e)(1).
77
https://www.fra.dot.gov/Page/P0358
78
Regional Transportation District of Denver. “RTD Integrates PTC into Commuter Rail Lines.” http://www.rtd-fastracks.com/media/uploads/main/PTC_Fact_Sheet_2016_FINAL.pdf. Accessed 2/22/17.
79
The Times Editorial Board. “Positive Train Control Should Wait, but the Wait Shouldn’t be too Long.” LA Times.
http://www.latimes.com/opinion/editorials/la-ed-1001-positive-train-control-20151001-story.html. Accessed
1/18/17.
80
Ibid.
81
U.S. Rail Safety Improvement Act of 2008, Public Law 110–432, 122 2008), 49 U.S.C. § 20101.
82
Federal Railroad Association. “Positive Train Control.” https://www.fra.dot.gov/ptc
83
Ibid.
84
Ibid.
85
“The History of Drone Technology.” RedOrbit. http://www.redorbit.com/reference/the-history-of-drone-technology. Accessed 2/12/17.
86
Ibid.
87
Weiss, Lora. “Autonomous Robots in the Fog of War.” IEEE Spectrum.
http://spectrum.ieee.org/robotics/military-robots/autonomous-robots-in-the-fog-of-war. Accessed 3/02/17.
88
Model Aircraft Museum. “Aeromodeling History.” http://www.modelaircraft.org/museum/aerohistory.aspx.
Accessed 3/04/17.
89
Federal Aviation Administration, FAA Aerospace Forecast, FAA-16-36.
https://www.faa.gov/data_research/aviation/aerospace_forecasts/media/FY2016-36_FAA_Aerospace_Forecast.pdf. Accessed 3/06/17.
90
Yadron, Danny. “Amazon and Google’s Drone Delivery Plans Hit Snag with New US Regulations.”
91
Federal Aviation Administration, FAA Aerospace Forecast, FAA-16-36.
92
Ibid.
93
Model Aircraft Museum. “Aeromodeling History.”
94
FAA Modernization and Reform Act, Public Law 112-95, US Statutes at Large 126 (2012).
95
Ibid.
96
Federal Aviation Administration. “Operation of Small Unmanned Aircraft Systems, Final Rule.” Federal Register 14 CFR
21. 2016.
97
Ibid.
98
http://spectrum.ieee.org/cars-that-think/transportation/self-driving/self-driving-cars-market-2030
99
"'Phantom Auto' will tour city". The Milwaukee Sentinel. Google News Archive. 8 December 1926. Retrieved 23
July 2013.
100
Wetmore, Jameson M. "Driving the Dream – The History and Motivations Behind 60 Years of Automated
Highway Systems in America." Automotive History Review (2003): 4-19.
101
Bartz, Daniel. “Autonomous Cars Will Make Us Safer.” Wired. https://www.wired.com/2009/11/autonomous-cars/. Accessed 3/27/17.
102
Dr. Robert Leighty. “DARPA ALV Summary”. U.S. Army Engineers Topographic Laboratories. Report #ETL-R-085.
1986.
103
Pomerleau, Dean A. “An Autonomous Land Vehicle in a Neural Network”. Carnegie Mellon University, 1989.
104
Bishop, Richard. Intelligent Vehicle Technologies and Trends. (Boston: Artech House, 2005), 1-25.
105
http://archive.darpa.mil/grandchallenge/
106
https://www.tesla.com/autopilot
107
https://www.tesla.com/blog/all-tesla-cars-being-produced-now-have-full-self-driving-hardware
108
Bhuiyan, Johana. “A Series of US State Laws Could Prevent Google and Uber from Operating Self-Driving Cars”.
Recode. https://www.recode.net/2017/2/25/14738966/self-driving-laws-states-gm-car-makers. Accessed 4/27/17.
109
Mele, Christopher. “In a Retreat, Uber Ends Its Self-Driving Car Experiment in San Francisco.” New York Times.
https://www.nytimes.com/2016/12/21/technology/san-francisco-california-uber-driverless-car.html. Accessed
3/21/17.
110
Muoio, Danielle. “These 19 Companies are Racing to Build Self-Driving Cars in the Next 5 Years.”
111
NHTSA. “Federal Automated Vehicles Policy.” Policy Guidance. September 2016.
112
Ibid.
113
Ibid.
114
Ibid.
115
Ibid.
116
Ibid.
117
Kaplan, Jerry. Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. 24-26.
118
Brynjolfsson, Erik and McAfee, Andrew. The Second Machine Age: Work, Progress, and Prosperity in a Time of
Brilliant Technologies. 2-12.
119
Nader, Ralph. “Nader Proposes Reviving Office of Technology Assessment”. Independent Political Report.
http://independentpoliticalreport.com/2010/06/nader-proposes-reviving-congressional-office-of-technology-assessment/. Accessed 4/26/17.
120
Ibid.
121
Calo, Ryan. “Why We Need a Federal Agency on Robotics.”
122
Lemley, Mark. “IP in a World Without Scarcity.” N.Y.U. Law Review 90 (2015): 460-515.