
LEGAL QUESTIONS AND CONCERNS SURROUNDING ARTIFICIAL INTELLIGENCE

STUDENT ID: 4984628
NEW RIGHTS, CYBERSPACE AND LAW
DECEMBER 20, 2018
1. INTRODUCTION
Technology has become highly disruptive in recent times and now shapes, to a large extent, what we do. For instance, Siri can search the internet from a mere voice prompt by the operator of an Apple device, and cutting-edge technologies that improve on existing artificial intelligence systems are constantly being developed. Interestingly, these systems are also beginning to assume a life of their own: they are now autonomous, with the capacity to be self-taught1 and self-driven (an example of the latter being self-driving automobiles), and with the ability to coordinate and make decisions that affect not only human life but the general trajectory of human civilization and society.
This has raised a number of challenges within the field of law. The aim of this paper is to highlight those concerns, examine the legal challenges they pose, and consider what reforms might be needed in law to overcome them.
Because of the broad scope of the above, this paper shall focus on questions of tort in order to address the following questions:
a. Can liability be imposed on artificial intelligence (AI) systems?
b. Under what circumstances can we impose liability on the designer/developer/end user
of AI systems?
c. What reforms are necessary to effectively impose liability?
1 Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari & Andrej Zwitter, Will Democracy Survive Big Data and Artificial Intelligence?, Scientific American (2017), https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/
In this paper, this author argues that, for a number of reasons, particularly the nature of AI systems, legal liability cannot be imposed on AI systems themselves. The author then highlights these reasons and proposes the circumstances under which the developer or end user will be liable, and the extent of that liability. The author concludes by suggesting areas of law that require reform in order to adequately determine and impose liability for harm caused by AI systems.
2A. NATURE OF AI
There is currently no generally accepted legal definition of artificial intelligence; rather, various attempts have been made at defining it. AI is said to cover "a gamut of technologies from simple software to sentient robots, and everything in between, and unavoidably includes both algorithms and data."2 As discussed further below, the absence of a legal definition of AI has an impact on the effectiveness of imposing liability for harm caused by AI.
2B. CAN LIABILITY BE IMPOSED ON AI SYSTEMS?
Isaac Asimov set down three laws of robotics, namely: (i) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (ii) a robot must obey orders given to it by human beings unless doing so would conflict with the first law; and (iii) a robot must protect its own existence, as long as such protection does not conflict with the first or second laws.3
There have been questions regarding whether the above laws can be applied to AI systems
as they appear vague and incomplete.4 However, if these laws were followed by AI
2 Iria Giuffrida, Legal Perspective on the Trials and Tribulations of AI: How Artificial Intelligence, the Internet of Things, Smart Contracts, and Other Technologies Will Affect the Law, Case Western Reserve Law Review, Vol. 68 (2018).
3 Isaac Asimov, I, Robot (1950).
4 Jack M. Balkin, The Three Laws of Robotics in the Age of Big Data, Ohio State Law Journal, Vol. 78 (2017).
systems, there would be no need to discuss further whether liability can be imposed on them, as they would cause no harm.
Intelligent entities are expected to have the following attributes: the ability to communicate effectively with humans; internal knowledge (knowledge about themselves); external knowledge (knowledge about their environment); goal-driven behavior (taking action to achieve a goal); and creativity (taking alternate action when the initial action fails).5 Some researchers are of the opinion that AI systems meet all of these attributes, implying that AI systems possess intelligence.6
Unfortunately, we do not live in a perfect world: just as humans, though deemed intelligent, are capable of behaving irrationally and unpredictably, so are robots.
This author is of the opinion that AI systems, being man-made, cannot be said to possess the same level of intelligence as humans and, consequently, cannot be held to the same standards.
AI systems have been known to fail, with consequences ranging from accidental injuries to fatalities.7 The pressing question is: can we hold AI systems legally liable for these mishaps?
2C. PROBLEMS WITH IMPOSING LIABILITY ON AI SYSTEMS
One of the most significant problems posed by autonomous AIs is the question of legal
personality.
5 Roger C. Schank, What is AI, Anyway?, in The Foundations of Artificial Intelligence 3 (Derek Partridge and Yorick Wilks eds., 2006).
6 Ibid, pp. 4-6.
7 Between 1984 and 2014, AI systems claimed 33 lives in the United States. See <https://www.news.com.au/technology/factory-worker-killed-by-rogue-robot-says-widowed-husband-in-lawsuit/news-story/13242f7372f9c4614bcc2b90162bd749>
According to law, only a legal person can be held legally liable.8 It follows that in order to impose legal liability on artificial intelligence, it must possess legal personality. Presently, AI systems, being neither natural persons nor conferred with legal personality, are considered products,9 and as such do not have legal personality. AI systems are, at best, "agents or instruments of other entities that have legal capacity."10 This makes it impossible to hold such systems legally liable for their actions or for harm resulting therefrom.11
Another significant obstacle to imposing liability on AI systems is the difficulty associated with proving intent. With humans, there are generally accepted standards of behavior against which the conduct of an individual can be judged. By contrast, as AI is a constantly evolving field comprising a wide range of hardware and software products, no set standards of conduct have been established.
For the above reasons, we can conclude that AI systems cannot be held legally liable for their conduct.
Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts (the "Convention") provides that a person (whether a natural person or a legal entity) on whose behalf a computer was programmed should ultimately be responsible for any message generated by the machine.
The Convention thus places liability for machine-generated messages on the person on whose behalf the computer was programmed. Should we apply the same rule to harm
8 Y. Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, Harvard Journal of Law & Technology, Vol. 31 (2018).
9 Andrea Bertolini, Robots as Products: The Case for a Realistic Analysis of Robotic Applications and Liability Rules, Law, Innovation and Technology, 5(2), 2013, 214-247. Available at SSRN: https://ssrn.com/abstract=2410754
10 David C. Vladeck, Machines without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117, 150 (2014).
11 Id.
caused by AI systems? How do we determine the natural person or legal entity on whose behalf a computer is programmed? Since we cannot state this with certainty, the most effective approach will be to consider those who operate these systems – the developer and the end user.
The next section will deal with the range of liabilities that can be imposed on developers
and end users of AI systems.
2D. NATURE AND STANDARDS OF LIABILITY THAT CAN BE IMPOSED
The types of liability that can arise from the operation of AI systems are criminal and civil liability. Criminal liability requires the presence of both actus reus and mens rea: the former is the commission of an act or an omission,12 while the latter is the intention to commit said act.13 The legal consequence of criminal liability is punishment.14 Civil liability, on the other hand, refers to legal obligations arising from private wrongs or a non-criminal breach of contract.
With the above in mind, what is the nature of the liability that can be imposed, and on whom can it be imposed?
This author argues that civil liability, particularly tortious liability, is the appropriate form of legal liability to be imposed in relation to harm caused by AI systems. This argument is premised on the conclusion that it would be almost impossible to prove
12 Walter Harrison Hitchler, The Physical Element of Crime, 39 Dick. L. Rev. 95 (1934); Michael Moore, Act and Crime: The Philosophy of Action and Its Implications for Criminal Law (1993).
13 J. Ll. J. Edwards, The Criminal Degrees of Knowledge, 17 Mod. L. Rev. 294 (1954).
14 Henry M. Hart Jr., "The Aims of the Criminal Law," Law and Contemporary Problems, Vol. 23, p. 405 (1958).
both the actus reus and mens rea elements needed to impose criminal liability on the manufacturer, developer or end user of AI systems for harm occasioned by such systems.
Before we discuss on whom liability should lie, we will consider the appropriate form of tortious liability to be imposed.
There are two schools of thought on this issue. Some researchers are of the opinion that strict liability is the appropriate standard.15 This school of thought believes that strict liability will deter manufacturers and developers from launching their products without conducting the safety tests required for new technologies under American law. Others believe that negligence is the appropriate standard.16 We will consider both.
In order to apply the negligence standard, we must establish the existence of a duty of care, a breach of said duty, and a causal link between the breach and the resulting injury.17
In relation to AI systems, liability for negligence is said to arise, in the case of computer programs, when the software is defective and a party is injured as a result of using it.18 Gerstner (1993) argues that the vendor of AI software owes a duty of care to the customer, but the standard of care to be applied will depend on whether the AI system is regarded as an expert system, in which case the appropriate standard will be that of an expert or professional.19 A professional is described as "one who possesses a standard minimum of special knowledge and ability and who undertakes work requiring special skill."20
In determining breach of said duty of care, Gerstner (1993) believes that errors in the functions of AI systems that could easily have been detected by the developer, failure to
15 Kristopher-Kent Harris, Drones: Proposed Standards of Liability, 35 Santa Clara High Tech. L.J. 65 (2018).
16 Marguerite E. Gerstner, Liability Issues with Artificial Intelligence Software, 33 Santa Clara L. Rev. 239 (1993).
17 Id.
18 Id.
19 Id.
20 W. Page Keeton et al., Prosser and Keeton on the Law of Torts § 30, at 185-188 (5th ed. 1984).
update the system's knowledge, the user supplying faulty input or unduly relying on the output, and using the program for an incorrect purpose are ways in which this duty of care might be breached.
This author disagrees with Gerstner on the last three examples. To hold a developer liable for an end user supplying faulty input, unduly relying on the output or using the program for an incorrect purpose would be akin to holding the developers of a mobile phone responsible for damage caused by smashing the phone – using the phone for an improper purpose.
Also, courts have been unwilling to apply the negligence standard to developers, probably due to the lack of uniform minimal standards applicable to software developers or programmers.21
Regarding the appropriate standard of care to be imposed, a United States District Court has held that the standard of care expected of professionals does not extend to Information Technology (IT) professionals because, unlike in other professions such as medicine and law, entry into the technology and programming field is not regulated or restricted by state or federal licensing laws.22 Another basis for the decision was that there is no industry-wide standard by which to adjudge the conduct of IT professionals.23
It can be argued that these reasons are insufficient to justify the court's decision that IT professionals cannot be regarded as professionals for the purpose of imposing liability. Certain commentators emphasize that the professional standard, as provided under the Restatement of Torts, is not limited to professions but also extends to trades.24 Trade is defined broadly25 to include "any person who undertakes to
21 Susan Nycum, Liability for Malfunction of a Computer Program, 7 Rutgers J. Computers, Tech. & L. 1, 9 (1979).
22 Superior Edge, Inc. v. Monsanto Co., 44 F. Supp. 3d 890, 912 (D. Minn. 2014).
23 Ibid.
24 Restatement (Second) of Torts § 299A (1965).
25 Danny Tobey, Software Malpractice in the Age of AI: A Guide for the Wary Tech Company, available at http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_43.pdf
render services to others in the practice of a skilled trade, such as that of airplane pilot, precision machinist, electrician, carpenter, blacksmith, or plumber."
One can argue that if certain skilled laborers such as plumbers and blacksmiths are held to
the standard of professionals, players in a highly complex field such as AI systems
development should not be left out.
Notwithstanding the above, the author believes negligence is not the appropriate standard
to be applied to developers for the reasons that will be explained below.
i. Standard of liability to be imposed on the developer
Since the developers possess more knowledge than the user, the author believes that the
standard of strict liability should be applied.
Strict liability is distinguishable from negligence as it does not require proof of intent but
proof that “the product was defective and unreasonably dangerous when used in a normal,
intended, or reasonably foreseeable manner, and that the defect caused plaintiff's injury.”26
Since strict liability is applied to products, AI systems must be classified as products and
not services,27 in order for the standard to be applied to the developers. Strict liability
requires individuals to exercise “the utmost care to prevent the harm” and includes
activities such as the operation of dangerous instruments.28
In determining what constitutes dangerous instruments, courts apply a balancing test,
taking the following factors into consideration:
a. The existence of a high degree of risk of some harm to the person;
26 Restatement (Second) of Torts § 402A.
27 Todd M. Turley, Expert Software Systems: The Legal Implications, 8 Computer L.J. 455, 457.
28 Restatement (Second) of Torts § 519.
b. The likelihood that the harm that results from it will be great;
c. The inability to eliminate the risk by the exercise of reasonable care;
d. The extent to which the activity is not a matter of common usage;
e. The inappropriateness of the activity to the place where it is carried on; and
f. The extent to which its value to the community is outweighed by its dangerous
attributes.29
In assessing these factors, it is evident that the nature of the AI system in question will play a great role in determining whether such a system is a dangerous instrument. For example, few would dispute that military drones are dangerous, since by their very nature they are intended to cause harm. However, should less obvious systems such as autonomous vehicles be regarded as dangerous instruments? This shall be considered in greater detail, after a brief illustrative sketch of the balancing test below.
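Purely as an illustration of how the six factors interact – and not as a statement of how courts actually reason, since the weighing is qualitative rather than arithmetic – the balancing test can be sketched in Python as a simple checklist. All names and the numeric threshold below are hypothetical simplifications:

    from dataclasses import dataclass

    @dataclass
    class ActivityProfile:
        """One flag per Restatement (Second) of Torts § 520 factor."""
        high_degree_of_risk: bool            # (a) high degree of risk of some harm
        resulting_harm_likely_great: bool    # (b) likelihood that resulting harm will be great
        risk_survives_reasonable_care: bool  # (c) risk not eliminable by reasonable care
        not_common_usage: bool               # (d) activity is not a matter of common usage
        inappropriate_to_location: bool      # (e) activity inappropriate to the place
        danger_outweighs_value: bool         # (f) community value outweighed by danger

    def leans_dangerous(profile: ActivityProfile, threshold: int = 4) -> bool:
        # Hypothetical simplification: count how many factors point toward
        # "dangerous instrument". A real court weighs the factors holistically.
        factors = [
            profile.high_degree_of_risk,
            profile.resulting_harm_likely_great,
            profile.risk_survives_reasonable_care,
            profile.not_common_usage,
            profile.inappropriate_to_location,
            profile.danger_outweighs_value,
        ]
        return sum(factors) >= threshold

    # The paper's later autonomous-vehicle analysis, encoded under these assumptions:
    autonomous_vehicle = ActivityProfile(
        high_degree_of_risk=True,            # some risk of harm to user or third party
        resulting_harm_likely_great=True,    # autonomous vehicles crash, as manual cars do
        risk_survives_reasonable_care=True,  # reasonable care cannot eliminate the risk
        not_common_usage=True,               # not yet a matter of common usage
        inappropriate_to_location=False,     # roads are the intended place of use
        danger_outweighs_value=False,        # "too early to determine", per the text
    )
    print(leans_dangerous(autonomous_vehicle))  # True under this sketch

The point of the sketch is only that factor (c) – risk that survives the exercise of reasonable care – does much of the work in the argument for strict liability developed below.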
Vladeck (2014) considers autonomous vehicles, not as tools used by humans, but
“machines deployed by humans that will act independently of direct human instruction,
based on information acquired and analyzed and will often make consequential decisions
in circumstances that may not be anticipated by, let alone directly addressed by, the
machine’s creator.”30
The inability to fully predict the rationale behind the decisions of AI systems highlights a problem with the strict liability standard. Should a developer be held liable for unforeseen actions of the AI system? This appears to be a high threshold. However, it seems to be the most appropriate standard when balancing the interests of the developer, the end user and third parties.
I intend to analyze the nature of autonomous vehicles as AI systems against the established factors for determining whether an instrument is so dangerous as to necessitate the imposition of the strict liability standard.
29 Id.
30 Vladeck, supra note 10.
If we concede that autonomous vehicles can act independently of humans, it means that there is some risk of such systems causing harm, either to the user or to a third party. The degree of such harm might not easily be determinable; however, there is a likelihood that the resulting harm will be great because, just as manually driven cars crash, so too will autonomous vehicles.31 Furthermore, if we concede that autonomous vehicles can operate independently of their owners, it is highly probable that the risk of harm arising from their operation cannot adequately be eliminated by the exercise of reasonable care and skill. It also appears too early to determine whether the value of having autonomous vehicles on the roads outweighs their dangerous attributes.
Bearing these factors in mind, the unpredictability of AI systems plays a great role in my suggestion that the strict liability standard be imposed. A cursory look at the factors used to determine what constitutes a dangerous instrument demonstrates that the inability to mitigate risks even when exercising reasonable care can cause a product to be regarded as dangerous.
Since certain AI systems are not wholly dependent on input, it may not be possible to truly
understand how a trained AI program is arriving at its decisions or predictions.32 It is for
this same reason that I argue in favor of strict liability.
To set a lower standard would most likely encourage developers to hide behind the unpredictability of the very systems they created when such systems operate in a manner that causes harm. Particularly because of the nature of the AI field and the level of sophisticated knowledge required to develop such systems, it is only fair that the developer is not given a soft landing.
31 Matthew Michaels Moore & Beverly Lu, Autonomous Vehicles for Personal Transport: A Technology Assessment (Social Science Research Network Working Paper, 2011).
32 Bathaee, supra note 8.
Furthermore, to accept negligence as the standard would require an inquiry into the conduct of the developer in order to assess whether such conduct falls below what is expected of him. The challenge this poses is that AI currently has no regulatory regime, and it is clear that effective regulation of AI systems will require, among other things, legally defining what AI means and what exactly the term covers.33
In addition, since the only clear-cut mechanism for establishing intent on the part of the developer lies in the purpose for which the system was developed in the first place, it is preferable for strict liability to be the standard. For instance, drones were developed to kill. If drones are deployed to murder innocent civilians, the developer cannot be held responsible simply because the drone caused harm. However, if, despite the end user following all required instructions for the proper operation of the drone, it malfunctions and kills those it was not intended for, the developer has to be held responsible.
The Restatement (Third) of Torts provides that for product liability, one of the following three categories of defect must be present: a manufacturing defect, a failure to provide adequate instructions or warnings, or a design defect.34
Manufacturing defect refers to a situation where the product departs from its intended
design even though all possible care was exercised in preparing and marketing the
product.35
Product liability will also be established in situations where the instructions provided were inadequate and adequate instructions would have reduced the foreseeable risk of harm posed by the product.36
33 Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J. L. & Tech. 353, 400 (2016).
34 Restatement (Third) of Torts: Products Liability § 2 (1997).
35 Id. § 2(a).
36 Id. § 2(c).
Lastly, for product liability based on design defects, it has to be demonstrated that the
foreseeable risks of harm could have been avoided by the adoption of an alternative product
design and the omission of said alternative design renders the product not reasonably safe.37
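To gather the three § 2 defect categories in one place – purely as an organizational sketch of the definitions above, with hypothetical names rather than legal terms of art – they can be summarized as follows:

    from enum import Enum

    class DefectCategory(Enum):
        # § 2(a): product departs from its intended design even though all
        # possible care was exercised in preparing and marketing it.
        MANUFACTURING = "manufacturing defect"
        # § 2(b): foreseeable risks could have been avoided by a reasonable
        # alternative design, whose omission makes the product not reasonably safe.
        DESIGN = "design defect"
        # § 2(c): adequate instructions or warnings would have reduced the
        # foreseeable risk of harm posed by the product.
        INADEQUATE_WARNINGS = "failure to provide adequate instructions or warnings"

    # The drone hypothetical above - a malfunction despite proper use - would
    # point toward a manufacturing or design defect rather than a warnings defect.
    for category in DefectCategory:
        print(category.name, "->", category.value)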
Regarding the above requirements for establishing product liability in relation to AI systems, it is clear that a high level of technical knowledge, particularly of trade secrets, will be required in order to adequately establish that the product departed from its intended design, that adequate instructions were not provided, or that an alternative product design would have eliminated or mitigated the risk of harm caused by the AI system.
For the above reasons, this author is of the view that a more effective method will be to
impose strict liability, distinct from product liability, on the developer for harm caused by
AI systems during their normal and intended operations. It is assumed that the developer
will at least provide the end user with a manual, stating the acceptable uses of the AI
system.
When users do not operate such systems as intended, developers cannot be held liable, as such harm was not caused by a defect in or malfunction of the system.
Imposing strict liability will reduce the difficulties associated with finding intent on the
part of the developer when the system was used in the normal course of its operations.
ii. Standard of liability to be imposed on the end user
This author argues that negligence is the appropriate standard of liability to be imposed on
the end user. This argument is based on the premise that it is a lot easier to assess intent on
the part of the end user than it is on the part of the developer.
Bertolini (2013) states that if an owner or third party misuses a product and harm results, he may be called upon to bear the consequences derived from such misuse.
37 Id. § 2(b).
Bertolini (2013) believes that a normal negligence standard should be applied both in cases
of misuse and cases where it was clear that the AI system was used to cause harm.
However, there are several problems associated with applying the negligence standard. The first has to do with intent. Because of the nature of the algorithms on which AI systems are built,38 Bathaee (2018) concludes that "computers are no longer merely executing detailed pre-written instructions but are capable of arriving at dynamic solutions to problems based on patterns in data that humans may not even be able to perceive."39 The implication is that "it may not be possible to truly understand how a trained AI program is arriving at its decisions or predictions."40
If there is no established procedure for determining how AI systems interpret the data input to them in arriving at the conclusions on which certain activities are carried out, there will be no reasonable standard of conduct that can be attributed to the user of such a system. Impliedly, the AI system may not be achieving the desired aim of the user. If this is the case, it becomes a lot more difficult to prove intent, as users could argue that the conduct of the system is not proportionate to their input. For instance, according to Vladeck (2014), autonomous vehicles are machine-driven, with the anticipation of very minimal human control. Vladeck (2014) contrasts the AI systems in autonomous vehicles with those of airplanes: the latter were designed to enable planes to fly on autopilot but under the direction and vigilance of pilots, while the goal of the former seems to be to require very minimal human interference. If this is the case, it might pose a challenge to establishing intent on the part of the end user.
Notwithstanding the above limitation, this author argues that intent on the part of the end
user can be inferred from the manner in which said user uses the system.
In support of the above position, Marchant and Lindor (2012) hold the view that if the operator of an autonomous vehicle was specifically instructed not to operate it in certain weather
38 Some of these machines are said to have the ability to self-learn.
39 Bathaee, supra note 8.
40 Id.
conditions, or on certain types of traffic patterns, but chooses to do so, or where the user fails to utilize an override mechanism to regain control of the system, the user will most likely be apportioned some or all of the blame for a resulting accident.41
Thus, negligence should be the appropriate standard to be applied to end users for harm
caused by AI systems.
3. IMPACT OF IMPOSITION OF LIABILITY ON LAW
Regardless of whether we establish a strict liability or negligence standard, there is little
doubt that imposing liability on developers or end users will have legal consequences. It
has the potential to redefine the entire area of law, particularly third-party liability. If the
strict liability standard is adopted, this will necessitate the amendment of existing laws to legally recognize AI systems as products. If the negligence standard is adopted, this will lead to the recognition of AI systems as services, and it will be necessary to establish what standard of care should be imposed on the developers of such systems. If the standard of care remains as it currently is, it would be very difficult to hold a developer liable: the loosened standard of care would be inadequate to cover even situations where developers have held themselves out to be professionals and where, but for the present position of the law regarding IT professionals, such developers would have been held legally liable.
41 Gary E. Marchant & Rachel A. Lindor, The Coming Collision between Autonomous Vehicles and the Liability System, 52 Santa Clara L. Rev. 1321 (2012).
4. RECOMMENDATIONS
From the issues discussed above, it is clear that reforms are long overdue in this ever-evolving field, which appears to be moving faster than existing legislation can adequately cater for.
The importance of knowing how best to deal with these issues cannot be over-emphasized, as we are in an age where some of these systems can be deployed in war situations. How do they determine what constitutes a civilian population? Will drones be trained on the rules of war? When bombs are deployed in wars, they are deployed by humans who are intelligent and are often familiar with the applicable rules of warfare. However, AI systems are now being used to remotely control drones. To the extent that there is even a slight probability that AI systems might act independently of the instructions input to them, appropriate and adequate legal measures must be taken to fully understand the nature and extent of these systems in an attempt to sufficiently mitigate the potential risks associated with them.
To this end, the author proposes the following recommendations.
There is a pressing need for both a legal definition and classification of artificial
intelligence systems. AI systems will need to be legally classified as either a product or a
service. This will guide the courts in the determination of disputes arising out of the
operation of AI systems.
Also, seeing as existing legislation appears to be inadequate to cater for the technical challenges posed by AI systems, there is a need to develop legislation and regulations specifically for the effective regulation and licensing of AI systems. Such legislation needs to set appropriate standards of liability to be applied to developers and users of AI systems, and provision should also be made for appropriate modes of enforcement. Should States be allowed to develop separate standards of legal
liability, or should there be a minimum standard upon which the various States can build? These are some of the issues that the relevant legislation should address.
In relation to enforcement, due to the technical nature of the AI field, this author holds the view that technical expertise will be needed. Professional bodies made up of experts in the AI field might be better positioned to address the ever-evolving issues raised by the ownership and operation of AI systems. For instance, it would be more effective to set up a Technology Practitioners Disciplinary Council whose members have the capacity to understand, in real time, the problems posed by any AI system and to apportion blame where necessary. Members of such a body would be better placed than the courts to determine whether harm caused by an AI system resulted from a design error, incapacity of the user, inappropriate use, and so on.
Lastly, there needs to be increased awareness of AI systems and the legal challenges they pose, as this might lead to a speedier resolution of the above issues.
5. CONCLUSION
In this paper, I have attempted to take a broad look at AI and law, particularly as it affects
tortious liabilities arising from harm caused by AI systems.
I hold the view that due to an absence of legal personality, no liability can be imposed on
AI systems for harm caused during their operations. For this reason, liability has to be
imposed on either the developer or the end user, depending on the circumstances.
In relation to the developers of such systems, due to the inability to determine intent, the technical knowledge possessed by developers, the non-recognition of developers as professionals for the purpose of establishing a standard of care, and the dearth of legal guidance regarding the standard of care owed by developers to end users and third parties, I argue that strict liability is the appropriate standard for imposing liability.
However, in relation to end users, due to the presence of user guides and manuals to
determine the acceptable uses of AI systems, I am of the view that the negligence standard
is a more appropriate standard of liability to be imposed on the end user.
TABLE OF LEGISLATION
Restatement (Second) of Torts
Restatement (Third) of Torts: Products Liability
United Nations Convention on the Use of Electronic Communications in International
Contracts.
TABLE OF CASES
Superior Edge, Inc. v. Monsanto Co., 44 F. Supp. 3d 890, 912 (D. Minn. 2014)
BIBLIOGRAPHY OF SECONDARY SOURCES
Asimov, Isaac, I, Robot (1950).
Balkin, Jack M., The Three Laws of Robotics in the Age of Big Data, Ohio State Law Journal, Vol. 78 (2017).
Bathaee, Y., The Artificial Intelligence Black Box and the Failure of Intent and Causation, Harvard Journal of Law & Technology, Vol. 31 (2018).
Bertolini, Andrea, Robots as Products: The Case for a Realistic Analysis of Robotic Applications and Liability Rules, Law, Innovation and Technology, 5(2), 2013, 214-247.
Edwards, J. Ll. J., The Criminal Degrees of Knowledge, 17 Mod. L. Rev. 294 (1954).
Giuffrida, Iria, Legal Perspective on the Trials and Tribulations of AI: How Artificial Intelligence, the Internet of Things, Smart Contracts, and Other Technologies Will Affect the Law, Case Western Reserve Law Review, Vol. 68 (2018).
Harris, Kristopher-Kent, Drones: Proposed Standards of Liability, 35 Santa Clara High Tech. L.J. 65 (2018).
Hart, Henry M. Jr., "The Aims of the Criminal Law," Law and Contemporary Problems, Vol. 23, p. 405 (1958).
Helbing, Dirk, Frey, Bruno S., Gigerenzer, Gerd, Hafen, Ernst, Hagner, Michael, Hofstetter, Yvonne, van den Hoven, Jeroen, Zicari, Roberto V. & Zwitter, Andrej, Will Democracy Survive Big Data and Artificial Intelligence?, Scientific American (2017).
Hitchler, Walter Harrison, The Physical Element of Crime, 39 Dick. L. Rev. 95 (1934).
Keeton, W. Page et al., Prosser and Keeton on the Law of Torts § 30, at 185-188 (5th ed. 1984).
Marchant, Gary E. & Lindor, Rachel A., The Coming Collision between Autonomous Vehicles and the Liability System, 52 Santa Clara L. Rev. 1321 (2012).
Moore, Matthew Michaels & Lu, Beverly, Autonomous Vehicles for Personal Transport: A Technology Assessment (Social Science Research Network Working Paper, 2011).
Moore, Michael, Act and Crime: The Philosophy of Action and Its Implications for Criminal Law (1993).
Nycum, Susan, Liability for Malfunction of a Computer Program, 7 Rutgers J. Computers, Tech. & L. 1, 9 (1979).
Schank, Roger C., What is AI, Anyway?, in The Foundations of Artificial Intelligence 3 (Derek Partridge and Yorick Wilks eds., 2006).
Scherer, Matthew U., Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J. L. & Tech. 353, 400 (2016).
Tobey, Danny, Software Malpractice in the Age of AI: A Guide for the Wary Tech Company (2018).
Turley, Todd M., Expert Software Systems: The Legal Implications, 8 Computer L.J. 455, 457.
Vladeck, David C., Machines without Principals: Liability Rules and Artificial Intelligence, 89 Wash. L. Rev. 117, 150 (2014).