Received: 8 April 2022 | Revised: 6 June 2022 | Accepted: 19 June 2022
DOI: 10.1111/rati.12346
ORIGINAL ARTICLE
Are superintelligent robots entitled to human rights?

John-Stewart Gordon 1,2

1 Department of Philosophy, Vytautas Magnus University, Kaunas, Lithuania
2 Faculty of Law, Vytautas Magnus University, Kaunas, Lithuania

Correspondence
John-Stewart Gordon, Department of Philosophy, Vytautas Magnus University, V. Putvinskio g. 23, LT-44243 Kaunas, Lithuania.
Email: johnstgordon@pm.me, john.gordon@vdu.lt

Funding information
European Social Fund, Grant/Award Number: 09.3.3-LMT-K-712

Abstract
This paper considers relatively long-term possibilities for the future relationship between humans and superintelligent robots (SRs). The great technological developments in fields such as artificial intelligence (AI), robotics and computer science have made it quite likely that we will see the advent of SRs towards the end of this century (or somewhat later). If SRs have a higher moral and legal status than typical adult human beings based on their greater psychological capacities, then they should also be entitled to human rights. However, even though SRs might be entitled to stronger moral and legal rights for this reason, it might nonetheless be necessary to limit their (otherwise justified) claims to avoid causing human beings to become extinct or endangered. The paper provides an argument in support of SRs' claims to human rights but also warns about the socio-political, moral and legal implications of taking such a step.

KEYWORDS
artificial intelligence, human rights, moral rights, moral status, personhood, superintelligent robots
1 | INTRODUCTION
The great technological developments in such areas as artificial intelligence (AI), robotics, and computer science in recent years make it quite likely that future robots could become superintelligent—i.e., smarter than human beings—as was famously claimed by Kurzweil (2005) and has been thoroughly discussed by Bostrom (2014). Even though superintelligent robots (SRs) might become a reality only several decades from now, or even at the end of this century, the socio-political as well as moral and legal implications of their advent will be substantial and groundbreaking for our societies and for human rights legislation (Livingston & Risse, 2019; Risse, 2019).
Many authors (Gordon, 2020; Gunkel, 2018; Nyholm, 2020) believe that we should be prepared for this situation because of the significant socio-political, moral and legal changes it will produce, especially with respect to the relationship between human beings and SRs. Some authors think that we should slow down technological research until we have solved the so-called control problem, or even that we should stop working on SRs altogether because they could eventually jeopardize human existence.
In this paper, I discuss the most extreme view with respect to the moral and legal status of future SRs, namely, the twofold hypothesis that SRs (a) are entitled to human rights and (b) would have a higher moral and legal status than human beings once they exist. This line of reasoning is based on a particular view of personhood according to which cognitive capabilities (e.g., rationality, intelligence, autonomy, self-awareness) are most decisive in determining the moral status of different species, such as human beings and animals, as well as within each species. It recognizes our tendency to make a moral distinction between individual entities according to their cognitive capabilities (e.g., dolphins have a higher moral status than earthworms based on their higher cognitive capabilities).
The next part of the paper (following this introduction) briefly reviews three common approaches to personhood—the rational-autonomy approach, the social-relational approach, and the human dignity approach—which are currently used in the context of robotics (Gordon, 2021). The third part examines some views of how moral status can be fleshed out so as to make sense of the idea that entities with a higher moral status could claim stronger moral rights than other entities. This particular way of thinking has also been discussed in the context of human enhancement and transhumanism (Savulescu & Bostrom, 2009). The fourth part considers the implications of the preceding discussion for SRs' entitlement to human rights and their supposed higher moral status, and it offers a comprehensive argument as to why SRs—if they exist—should be considered morally and legally superior to human beings. The fifth part contains three critical remarks regarding the findings of the previous parts of the paper and is followed by some brief concluding observations.
2 | THREE CONCEPTS OF PERSONHOOD
At least three main concepts of personhood are currently used in the context of AI to determine the moral status of machines1 (Gordon & Nyholm, 2021): the rational-autonomy approach, the social-relational approach, and the human dignity approach. Each will be reviewed in this section.
2.1 | The rational-autonomy approach to personhood
The rational-autonomy approach to personhood is usually based on a Kantian line of reasoning in which moral status is ascribed to entities who possess rationality and autonomy. Only rational beings who act autonomously, said Kant, have dignity and are therefore part of the moral community. All entities that do not act autonomously, such as animals, are not part of the moral community, even though members of the moral community may have indirect duties towards such beings (Kant, 2009/1785). Autonomous beings have direct duties towards each other, as can be further determined by Kant's categorical imperative, which is considered the gold standard according to which rational beings act.
This reliance on rationality and autonomy in determining the moral status of a being has been questioned by authors in disability studies, who generally believe that all human beings have an equal moral status even if they suffer from a severe cognitive impairment (Koch, 2004), and in the field of animal rights, where proponents argue that cognitive capacities come in degrees across species boundaries, that animals experience the morally important phenomenon of suffering, and that therefore animals deserve at least some moral protection as well (Cavalieri, 2001; Donaldson & Kymlicka, 2013; Francione, 2009; Singer, 1975).

1 Robots are machines, but not all machines are robots.
This is an important critique. I think that Kamm (2007) is right when she claims that moral status depends on
sapience (i.e., ability to reason) and/or sentience (i.e., ability to suffer and to experience feelings, sensations and
emotions). Kamm contends that sapience and sentience are distinct properties that can be separated, but that
neither one is necessary and that either one is sufficient to establish a being's moral status (Kamm, 2007, p. 229).
According to Kamm, “an entity has moral status when, in its own right and for its own sake, it can give us reason
to do things such as not destroy it or help it” (p. 229).
2.2 | The social-relational approach to personhood
The social-relational approach to personhood has been championed by such authors as Gunkel (2012, 2018) and Coeckelbergh (2014). The main line of reasoning is that personhood and hence the moral relevance of beings are established through social relations. Both Gunkel and Coeckelbergh believe that the standard approach of ascribing moral rights based on the ability to reason or to suffer is misleading. Instead, they argue, social relations are of utmost importance with respect to a being's moral status and rights. Moral status emerges through social relations with other entities, including machines.2
Gordon (2021) and Müller (2021) have pointed out that even though social relations are morally important, they are not the decisive reason why we should ascribe moral rights to entities in the first place. Rather, Gordon argues, one should apply objective criteria such as rationality, intelligence, autonomy or the ability to suffer. Relying on social relations as a criterion could raise issues of social exclusion: people could simply stop having any social relations with another being, and such behaviour might eventually undermine the other being's moral status according to the social-relational approach.
2.3 | The human dignity approach to personhood
Proponents of the human dignity approach to personhood follow a strict logic. They claim, first, that only human beings have dignity and, second, that only beings with dignity deserve full moral status as well as entitlement to all moral and legal rights. No other being, they say, has any comparable status and rights. This line of reasoning is traditionally associated with religion, especially the argument from the imago dei to justify the special status of human beings with respect to any other species (Smith, 2021, chapter 3).3

The main problems with this type of argument rest on (a) the likelihood of the existence of God and (b) whether simply being human is really sufficient for the ascription of any status or rights whatsoever. The modern debate suggests, instead, that there is no convincing philosophical argument in support of the existence of God (Mackie, 1983), and that it does not seem plausible that only human beings—by virtue of being human—deserve full moral protection (Singer, 2009). The standard view is that moral status and the attribution of rights are related to personhood (and not to the empirical fact that a being is human). On the human dignity approach, by contrast, it would be impossible for SRs ever to qualify for the full range of moral and legal rights.
2 Recently, Ubuntu ethics, which can be seen as the social-relational approach in the context of traditional African philosophy, has been applied to robots. Wareham (2021), Coeckelbergh (2022), and Jecker et al. (2022) thoroughly examine and apply African ethics to provide yet another, slightly different pathway regarding how to think about the relationship between humans and (intelligent) robots more fruitfully. With respect to the application of Ubuntu ethics to AI, Gordon offers a critique in "The African Relational Account of Social Robots: A Step Back?" (2022), arguing that Ubuntu ethics is completely human-centred and therefore a step back compared to, for example, Gunkel's relational account.
3 Smith (2021, pp. 101–110) argues that some AI robots should be "granted legal personhood" even though they should not necessarily also be considered moral agents. His main reason for taking this position is to protect against any form of "dehumanization" of humans, but Smith also believes that one should protect those robots from questionable human actions, such as using them as sex robots.
Another line of reasoning could contend that if humans are created in the image of God and therefore have dignity, and if superintelligent humanoid robots are created in the image of human beings, then these robots should, by transitivity, also potentially be seen as having dignity. The problem with this argument is that humans lack the power to confer the same dignity upon their own creations, because they are not God. Smith substantiates this point in a similar way: "It is paramount that the reader understand(!) that robotic persons will be made in the image of humans, not in the image of God. They will not be imager-bearers(!) that way humans are image-bearers; they will not think like humans nor desire things that humans desire" (Smith, 2021, p. 107).
2.4 | The upshot
The human dignity approach to personhood does not seem to be a viable option, since we simply do not know whether God exists. Furthermore, the idea that one must be human to enjoy important moral and legal rights is unconvincing (Singer, 2009). In addition, the objections to the social-relational approach are quite convincing. Hence, we are left with the rational-autonomy approach to personhood.
My current view on the relationship between personhood, moral status and full moral status (FMS) is that moral status and FMS differ in their requirements. Kamm seems right when she claims that the ability to reason or to suffer is sufficient for the ascription of moral status to an entity, whereas FMS—in my view—necessarily requires the being to have both sapience and sentience (like typical adult human beings). Even though moral personhood and FMS are tightly connected, the two concepts must be distinguished from each other in the following way: If a being has FMS, then this being also has moral personhood (like typical adult human beings), but it does not follow that a being with moral personhood necessarily also has FMS (e.g., SRs with sapience but without sentience).4 That is because moral personhood does not necessarily include sentience but should always contain sapience. Therefore, moral personhood should not be fleshed out in terms of the social-relational approach (as discussed above) but should always be characterized by the presence of morally relevant properties such as rationality, autonomy, and self-awareness.5
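Since these entailments are purely structural, they can be made fully explicit. The following Lean sketch is offered only as an illustration of the relations just described, under the stated assumptions; the toy model (a being characterized solely by sapience and sentience) and all names (Being, hasFMS, isPerson, firstGenSR) are illustrative, not established terminology.

```lean
-- Sketch of the claimed relations: Kamm's sufficiency claim, the FMS
-- requirement, and the one-way entailment from FMS to personhood.
structure Being where
  sapient  : Prop  -- ability to reason
  sentient : Prop  -- ability to suffer and to experience feelings

-- Kamm: either capacity suffices for (some) moral status.
def hasMoralStatus (b : Being) : Prop := b.sapient ∨ b.sentient

-- The view defended here: FMS requires both capacities ...
def hasFMS (b : Being) : Prop := b.sapient ∧ b.sentient

-- ... while moral personhood always requires sapience.
def isPerson (b : Being) : Prop := b.sapient

-- FMS entails moral personhood.
theorem fms_implies_person (b : Being) (h : hasFMS b) : isPerson b := h.1

-- The converse fails: a sapient but non-sentient first-generation SR
-- would be a person with moral status, yet would lack FMS.
def firstGenSR : Being := { sapient := True, sentient := False }

example : isPerson firstGenSR := True.intro
example : hasMoralStatus firstGenSR := Or.inl True.intro
example : ¬ hasFMS firstGenSR := fun h => h.2
```

The last three lines mirror the scenario in footnote 4: a first generation of SRs that is sapient but not sentient would already count as persons with moral status, while FMS would arrive only with a sentient second generation.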
3 | FROM FULL MORAL STATUS TO MORE THAN FULL MORAL STATUS
If, at some future point, SRs gain the ability to reason, then they would be entitled to moral status and related rights as well. The vital issue, then, is whether the ascription of moral status comes in degrees (DeGrazia, 2008)6 or whether there is a kind of moral threshold above which all entities either are considered the same (Kamm, 2007) or could be further distinguished based on their cognitive abilities (Bostrom, 2005). This issue will be examined below.
4 It could be the case that the first generation of SRs will be only sapient and will lack sentience. However, the second generation of SRs could possibly possess both qualities and would therefore also have FMS. After all, this is an empirical question. A rival view has been voiced by DeGrazia (2022), who argues that robots must be sentient and have their own interests in order to qualify for moral status. He briefly discusses Kamm's objection that robots could be conscious but lack sentience (pp. 81–82).
5 I am currently working on a paper concerning the concept of moral status, which will flesh out the complex relationship between the main concepts more properly than I can do here. However, I hope that the above description is sufficient for the present line of reasoning.
6 DeGrazia describes this position without necessarily endorsing it.
3.1 | What is full moral status?7
A typical adult human being is commonly considered to have personhood, which provides her with full moral status (FMS). This status, in turn, constitutes the basis for the ascription of moral and legal rights. The idea of FMS implies that there also exist beings with less than full moral status. For example, there is debate as to whether fetuses, people in non-responsive states, and people with severe cognitive impairments should be considered persons with the same moral status as a typical adult human being. Some authors, such as Singer (2009), believe that they may have some moral standing (and should thus be part of our moral considerations) but do not have FMS.
According to DeGrazia (2008, p. 183), moral status can be defined as follows: "To say that X has moral status is to say that (1) moral agents have obligations regarding X, (2) X has interests, and (3) the obligations are based (at least partly) on X's interests." This definition allows for various degrees of moral status, rather than assuming a threshold above which all beings have the same rights. Thus, this concept could be applied to animals and humans alike; for example, it might be possible to distinguish between different entities according to their capacity to have interests and then to use this information to flesh out our moral obligations to them accordingly.
In his recent paper "Robots with Moral Status?", DeGrazia also applies his interest-based approach to moral status to future robots and claims that "robots will gain moral status if and when they acquire their own interests—and, collectively, a welfare that matters to them—which will happen if (and only if) they become sentient" (DeGrazia, 2022, p. 74). Later, he emphasizes this point by stating that "having interests is not only necessary, but also sufficient, for moral status" (p. 77). Consciousness as "subjective experience" (p. 79), in his view, is the precondition of sentience, which in turn provides entities such as future robots with moral status (once they have achieved this technological threshold).
In "Challenges to Human Equality" (McMahan, 2008), McMahan argues for an "intermediate moral status" (IMS), to be ascribed to human infants and higher primates, in contrast to the FMS possessed by typical adult human beings. According to McMahan, it is possible that some beings with IMS should be accorded a lesser moral status than FMS because their "psychological capacity"8 differs.
In general, I agree with McMahan, but I think that his conception of FMS and IMS should be expanded to also allow for beings with a higher moral status (HMS)9 than typical adult human beings (see the next section). And indeed, in the section on "supra-persons" of his paper "Cognitive Disability and Cognitive Enhancement," McMahan (2009, pp. 598–604) speculates about the moral status of cognitively enhanced human beings and claims the following:

There is some plausibility, and no incoherence, in the supposition that just as there is a moral threshold between ourselves and animals, so there would be a parallel threshold between supra-persons, who would differ from us psychologically by more than we differ from animals, and ourselves. (McMahan, 2009, p. 602)
Although, at the end of his paper, McMahan states that his "speculations about supra-persons prove nothing" and "are meant only to be suggestive" (McMahan, 2009, p. 604), I believe that his reasoning is actually quite convincing and substantiates the idea that supra-persons might be entitled to HMS once they exist. Such supra-persons would not have to be human beings but could be any entities—including SRs—which exceed humans in the relevant psychological capacities related to moral status.

7 The edited volume Rethinking Moral Status by Clarke et al. (2021) contains several chapters by well-known philosophers which examine the nature of moral status from different perspectives and with respect to different topics. For example, it includes a chapter on AI (Sinnott-Armstrong & Conitzer, 2021, pp. 267–289), which argues that an advanced AI might have some degree of moral status and related rights.
8 Psychological capacities include, for example, self-consciousness, the capacity for caring, or the capacity to act on the basis of reasons, which persons have to varying degrees (McMahan, 2009, p. 602).
9 The terms higher moral status and more than full moral status are used interchangeably in this paper.
3.2 | More than full moral status
The use of "full" in the concept of FMS implies that this moral status cannot be extended any further. In other words, no beings could have a higher moral status than FMS. However, Bostrom (2005) has suggested in the context of transhumanism that it might be possible to create human beings with cognitive abilities superior to those of current human beings. According to Agar (2013) and Douglas (2013), if that were possible, then at some future point there could be human beings with HMS.
Given the general line of reasoning in debates on moral status (especially if we accept the thesis that moral status comes in degrees), the logic would seem to compel us to extend the concept of FMS so as to include the possibility of HMS for entities with advanced cognitive abilities. However, as noted above, this argument is not restricted to human beings or "post-humans"; cyborgs or SRs, for example, could also be entities with advanced cognitive abilities.
If we accept the idea that HMS is at least possible in theory, and that this status corresponds to either a stronger or a more advanced set of moral and legal rights (which trump the rights of beings with FMS), then we must acknowledge that these more advanced entities would have a moral advantage over all other beings without HMS (see also McMahan, 2009, pp. 598–604). Alternatively, we could argue against the category of HMS even though we might, at some point, see the advent of entities with advanced cognitive abilities who are considered much smarter than current typical adult human beings. We might take this position so as to avoid the risk of discrimination against typical human beings. Whether this decision would be accepted by entities with much higher psychological capacities is another matter.
4 | THINKING THE IMPOSSIBLE
4.1 | Superintelligent robots and human rights?
Gordon and Pasvenskiene (2021) reviewed the contemporary literature on whether intelligent robots should be entitled to human rights once they exist and offered an interesting analysis of the current state of research. This topic has been quite energetically discussed in numerous blogs and popular magazines but only very rarely in academic journals. Both authors believe that this lack of academic attention is a mistake, since future technological development will—albeit perhaps still several decades in the future—most likely lead to the creation of SRs who may want to have their "human rights" recognized.
The concept of human rights—or at least the idea of moral human rights—is no longer necessarily linked to being human, but rather to the concept of personhood (Gordon & Pasvenskiene, 2021, sections 4 and 5). Recently, numerous authors have applied the human rights approach to support the protection of higher animals (Cavalieri, 2001; Donaldson & Kymlicka, 2013; Francione, 2009; Singer, 1975) and even the environment (Atapattu, 2015; Stone, 2010). If this is the case, however, then one could ask whether SRs should be entitled to human rights as well once (and if) they exist. Although it is possible, as Gunkel (2018) and Gellers (2022) do, to speak about "robot rights" rather than "human rights for robots," the rhetorical force of human-rights language is an important aspect of including humanlike SRs in the moral community. That is why Francione (2009) and others have used the term human rights in the context of animals as well. Likewise, it seems appropriate to speak about human rights in the context of SRs.
It was argued above that the concept of personhood is the most important way to determine an entity's moral (and legal) status. In particular, it was argued that mental or psychological capacities are of utmost significance in determining what we understand by personhood, in contrast to the social-relational approach and the human dignity approach (which support the view that one must be a member of the species Homo sapiens to count morally). Furthermore, it seems reasonable to consider the possibility that entities can have HMS depending on the degree of their psychological capacities. But if this is the case, then it seems also plausible that these supra-persons should eventually be entitled to more (or stronger) moral and legal rights than typical adult human beings (see McMahan, 2009).
But what if these supra-persons are SRs? McMahan (2009) and other authors, including Singer (1975, 1979), rightly indicate that our common-sense morality is incoherent with respect to how we deal with, for example, animals such as pigs, geese, chickens, cows and chimpanzees that have more advanced psychological capacities than human fetuses and even newborns. Most people believe that, for example, newborns are entitled to moral rights such as the right not to be killed or used for medical purposes, whereas the just-mentioned animals, which are (empirically speaking) more advanced in terms of their psychological capacities, hold these rights to a much lesser degree, if at all. The reason for this incoherence is that human fetuses and newborns belong to the human species and are connected to a human family (McMahan, 2009).
However, if we use only one moral standard, such as the ability to reason, for all cases and apply it coherently, then human offspring would gain personhood, FMS, and the related moral and legal rights at a much later stage in their lives; in fact, they would rate below the higher-level animals mentioned above (Gordon, 2021). This line of reasoning has implications for how we would most likely treat SRs. If, at some future point, we see the advent of SRs, we will most probably not grant them HMS relative to typical adult human beings. But something is wrong with this reasoning. Gordon and Pasvenskiene (2021) have pointed out that the ascription of moral status and rights is not up to us but must be determined independently. In other words, if SRs meet the objective criteria which determine whether entities have FMS and HMS, then they are entitled to the related moral and legal rights, unless one adopts an approach that we would consider incoherent.
The socio-political, legal and moral implications of this development would be substantial. If non-human entities are awarded human rights, and even stronger moral and legal rights than typical adult human beings based on their much higher psychological capacities, substantial social unrest might arise between human beings and SRs. Two differently advanced species would be sharing one world. If we cannot solve the so-called alignment problem between them, as argued by Bostrom (2014) and more recently by Ord (2020, chapter 5), humanity would most likely become extinct.10 Therefore, we should contemplate these issues carefully, long before the existence of SRs becomes imminent.

10 To solve the control problem (i.e., to avoid the possibility of SRs starting a revolution and killing or enslaving human beings for their own purposes), human beings could apply two strategies. First, we could teach SRs our most fundamental values, such as equality, justice and compassion, so that the SRs would align with our moral values and ethical standards (thereby solving the alignment problem). Or, second, we could try to keep a superintelligent machine in some kind of "box" so that it would not have the opportunity to endanger humanity (e.g., Armstrong et al., 2012).
4.2 | The argument
The above-mentioned general line of reasoning leads to the following argument:
1. Premiss: The attribution of moral (and legal) rights is based on the particular moral (and legal) status of a given entity.
2. Premiss: The higher the moral (and legal) status of an entity is, (a) the more rights are available to that entity and (b) the stronger that entity's rights are in comparison to entities with a lower moral (and legal) status (degree model).
3. Premiss: The typical adult human being has FMS and the greatest amount of moral and legal rights in the strongest sense (common view).
4. Premiss: SRs have HMS (hypothesis).
5. Premiss: HMS provides the entity with more and stronger moral (and legal) rights compared to entities with a lower moral status (degree model).
6. Ergo: SRs are entitled to more and stronger moral (and legal) rights than the typical adult human being.
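Since the argument is deductive, its skeleton can be checked mechanically. The following Lean sketch is merely one illustrative encoding of the premisses as stated; the three-valued MoralStatus type, the numeric rank standing in for the degree model, and all names are stand-ins of mine, not part of the paper's apparatus.

```lean
-- Illustrative encoding of the argument in Section 4.2.
inductive MoralStatus
  | IMS | FMS | HMS  -- intermediate, full, and higher moral status

-- Premisses 1, 2 and 5 (degree model): the extent and strength of an
-- entity's rights increase strictly with its moral (and legal) status.
def rank : MoralStatus → Nat
  | .IMS => 1
  | .FMS => 2
  | .HMS => 3

def rightsStrength (s : MoralStatus) : Nat := rank s

-- Premiss 3: the typical adult human being has FMS (common view).
def typicalAdultHuman : MoralStatus := .FMS

-- Premiss 4 (hypothesis): SRs have HMS.
def superintelligentRobot : MoralStatus := .HMS

-- Conclusion (6): under these premisses, SRs are entitled to stronger
-- (and more) rights than the typical adult human being.
example :
    rightsStrength typicalAdultHuman < rightsStrength superintelligentRobot := by
  decide
```

On this encoding the conclusion follows mechanically; the philosophical work lies entirely in premisses 4 and 5, which are exactly the two points targeted by the objections discussed below.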
The above argument presupposes the correctness of the two principles stated by Bostrom and Yudkowsky (2014, pp. 323–324):
(i) The principle of substrate non-discrimination: If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.
(ii) The principle of ontogeny non-discrimination: If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.
Both principles can be considered the starting point and constraining framework for how one should view the relationship between human beings and intelligent machines (once they exist). I do not see any convincing counterargument against either principle.11 The above argument could be questioned on two counts. The first objection concerns the likelihood of the existence of SRs (premiss 4), and the second concerns the use of the degree model in premiss 5. I will respond briefly to each point in turn.
Whether SRs will actually exist in the future remains unclear. Many AI researchers believe that we will eventually see the advent of SRs but that it is impossible to determine the exact time (Müller & Bostrom, 2016). My estimation is that we will most probably encounter SRs towards the end of this century (or even earlier) and that we therefore have limited time to consider how we want to organize our societies once they exist. The advent of SRs will have substantial socio-political as well as moral and legal implications for our societies, and we need to prepare for them well in advance. If we wait until we are confronted by these issues to start thinking about them, it will be too late.
A second objection is that we should not use the degree model to think about the recognition of moral and legal status and related rights. Rather, one might argue, we should adhere to a moral threshold above which all entities should be treated in the same way. This alternative would eliminate the possibility of entities with higher psychological capacities being granted a higher moral and legal status and related rights than the typical adult human being. The underlying theme of this objection is connected to general ideas of prioritizing one's own species over other species in relation to vital questions regarding moral status as well as moral and legal rights (see the section "The incoherence approach" below). Even though this approach might be understandable from a human-centred perspective, it still undermines how ethics and moral philosophy have been carried out in modern times (at least to a great extent)—namely, by using universal ethical theories such as utilitarianism and Neo-Kantianism.12
Whether SRs would actually feel the need to respect a moral threshold is a totally different matter. That is why
scholars such as Bostrom (2014) and Ord (2020) emphasize the need to solve the alignment problem and to control
SRs to ensure that humanity will not become extinct.
11 Sinnott-Armstrong and Conitzer (2021, pp. 272–273) have rightly affirmed that both principles are well-founded.
12 See Schwitzgebel and Garza (2015) for the claim that we would likely have moral obligations towards SRs and that this fact should be properly reflected in our actions towards them.
5 | CRITICAL REMARKS
Besides the commonly voiced objections concerning the alleged impossibility of machines becoming self-aware, conscious, and intelligent, and hence ever developing the ability to reason (Searle, 1980), adherents of functionalism and computationalism have argued that intelligence does not necessarily require a particular substratum, such as carbon-based beings (like humans), but could also evolve in silicon-based environments if the system is complex enough (Chalmers, 1996, chapter 9; Sinnott-Armstrong & Conitzer, 2021, pp. 275–282). Eventually, this dispute will be settled when new types of highly complex systems actually emerge; it cannot be decided by theoretical arguments alone.
The more interesting objections concern issues related to the likely advent of SRs and how we plan to deal with that eventuality, including ways to establish a proper mutual and peaceful relationship with non-human entities who are much smarter than us (indeed, the difference between SRs and us could be comparable to that between us and ants, if not larger). If human beings try to treat SRs as their slaves with no rights, then humanity might be digging its own grave.
5.1 | Do not produce SRs in great numbers
There is great interest in creating artificially intelligent entities that could take over many of our jobs (some scholars estimate that most human jobs will vanish within the next 100 years or so, especially those that are boring or extremely dangerous). SRs would accomplish everything much faster, more effectively and without any mistakes. This almost utopian situation, in which human beings could rest, relax and enjoy themselves all day, every day, could quickly turn into a dystopian nightmare in which human beings lose their general capabilities, degenerate, and become lazy and stupid because they no longer face any obstacles in their lives and because everything is done for them (Danaher, 2019a). Adult human beings might become like children with limited autonomy, due to the overly paternalistic nature of machines that do everything for them (e.g., Danaher, 2019b; Russell, 2019; Vallor, 2015, 2016). At this stage, human beings would face the existential problem of boredom, as SRs would solve all our problems.
To avoid this dystopian situation, one could limit the number of SRs and other AI machines created, so as to deal with any existential repercussions arising from these technological developments and prevent humanity from becoming a dysfunctional race. Limiting the quantity of SRs would also avoid or minimize two additional problems: the existential risk problem and the competition for global resources. Bostrom (2014) and Ord (2020) have warned about the potential problem of unaligned SRs that might not value the same things as humans and thus could be motivated to cause the human race's extinction if we do not solve the control problem. Furthermore, some scholars have voiced concerns with respect to Earth's already limited global resources, which would have to be shared amongst human beings and SRs. The production and maintenance of highly advanced machines require substantial resources and could quickly drain the remaining resources available on Earth.

Therefore, in summary, we should avoid producing SRs in great numbers so as to avert (a) the degeneration of human beings, (b) our own destruction (existential risk scenarios), and (c) overconsumption of our global resources.
5.2 | The incoherence approach
Some people might argue that even if SRs become smarter than human beings and therefore are entitled to HMS (at least from an objective perspective), we should never acknowledge their higher moral status and give them stronger (and more) moral and legal rights than human beings have. Being incoherent, many might argue, is not necessarily morally wrong.
This line of reasoning is quite similar to how we currently treat higher animals such as the great apes, which, based on their moral status, actually deserve much stronger moral and legal rights than human beings tend to want to grant them (Cavalieri, 2001; Francione, 2009; Singer, 1975). McMahan (2009) correctly claims that human beings are incoherent with respect to the acknowledgement of the moral status and related rights of some animals in comparison to human offspring, given those animals' higher psychological capacities. In addition, Singer (2009) has justifiably questioned this line of argument, calling it prone to speciesism and contending that for this reason it should be rejected.
Whether one should be "loyal" to one's own species compared to other species has been famously discussed by Bernard Williams in his book chapter "The Human Prejudice" (Williams, 2006), where he argues against "moral universalists" such as Singer. Williams explores the general idea of loyalty partly against the background of his thought-experiment concerning "benevolent managerial aliens" who visit our planet and eventually conclude that it might be better to remove humans from earth (Williams, 2006, pp. 151–152).13 In this context, he claims the following:

The relevant ethical concept is something like: loyalty to, or identity with, one's ethnic or cultural grouping; and in the fantasy case, the ethical concept is: loyalty to, or identity with, one's species. … So the idea of there being an ethical concept that appeals to our species membership is entirely coherent. (Williams, 2006, p. 150)
Applying Williams' reasoning to the case of robots, one could possibly argue that even though SRs might become much smarter than human beings and therefore claim HMS based on their status as supra-persons, human beings should nonetheless prioritize protection of their own species against any possible dangers, such as their extinction by a robot revolution. There might be only a narrow dividing line between "allowing" SRs the enjoyment of their legitimate entitlement to moral and legal rights, on one hand, and, on the other hand, using them as mere tools and thereby creating a situation that has been dubbed "new speciesism" (DeGrazia, 2022, pp. 84–86).
In the Groundwork (Kant, 2009/1785), Kant argues from a somewhat different perspective and claims that "being incoherent" amounts to "being irrational," which in turn is morally wrong. In other words, if SRs are considered persons, then they have at least the same moral and legal status, as well as the same moral and legal rights, as human beings. Perhaps SRs should instead be considered supra-persons and hence entitled to HMS and related rights. The latter view depends on how we view the difference in moral status between typical adult human beings (normally viewed as possessing FMS) and SRs or cognitively enhanced human beings (who might qualify for HMS). One could introduce a model according to degrees or adhere to a moral threshold above which all persons are treated the same, independently of their differing psychological capacities.
5.3 | Rethinking the nature of human rights
Human rights are commonly considered the most important rights in human societies; they are not trumped by any other rights and are not open to violation by governments (Gordon, 2016). If SRs could claim HMS, then, at least according to the degree model, one would be forced to claim that either (a) the "human rights" of SRs will always trump the human rights held by human beings or (b) there exist some rights stronger than human rights, given the general view that a higher moral status usually entitles the particular entities to stronger (and most likely more extensive) moral and legal rights than those extended to entities with a lower moral and legal status.

13 For a thorough analysis of Williams' views, see Diamond (2018).
If the above scenario is correct, then one should either give up talking about human rights in the context of SRs and offer a new label or umbrella term that covers both kinds of entities (i.e., human beings and SRs), or adopt the view that moral and legal status and related rights will remain the same for all beings with either FMS or HMS. The latter approach, based on the threshold model, might be more appealing to human beings in view of the possible dangers posed by SRs.
Even though I believe that the latter approach is incoherent, since we would be switching from a degree model to a threshold model only because we do not want to become morally and legally inferior to entities such as SRs and cognitively enhanced human beings or cyborgs, it might—pragmatically speaking—nonetheless be the best option to ensure the survival of human beings.
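The contrast between the two models can be put in the same miniature form. The following Lean sketch parallels the earlier ones; the names (Status, rankOf, rightsDegree, rightsThreshold) and the numeric ranks are again illustrative stand-ins for whatever the real ordering of statuses would be.

```lean
-- Toy contrast between the degree model and the threshold model.
inductive Status
  | IMS | FMS | HMS  -- intermediate, full, and higher moral status

def rankOf : Status → Nat
  | .IMS => 1
  | .FMS => 2
  | .HMS => 3

-- Degree model: rights strength strictly tracks moral status.
def rightsDegree (s : Status) : Nat := rankOf s

-- Threshold model: every status at or above FMS receives the same rights.
def rightsThreshold (s : Status) : Nat :=
  if rankOf s ≥ rankOf .FMS then rankOf .FMS else rankOf s

-- Under the degree model, beings with HMS outrank beings with FMS ...
example : rightsDegree .FMS < rightsDegree .HMS := by decide

-- ... whereas under the threshold model the two are equals.
example : rightsThreshold .FMS = rightsThreshold .HMS := by decide
```

Switching models is thus not a mere refinement but a change of the rights function itself: under the threshold reading, HMS makes no practical difference to rights, which is precisely the incoherence conceded above.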
6 | CONCLUSIONS
If SRs become a reality at some future point, maybe towards the end of this century, our human societies will face substantial socio-political as well as moral and legal challenges. Given their highly advanced technological nature and hence their unmatched moral and legal status, SRs might be entitled to stronger (and more) moral and legal rights than typical adult human beings (degree model). To avoid this scenario, human beings might adhere to the so-called threshold model and argue that all entities with FMS and HMS (independently of the latter's higher moral status based on their higher psychological capacities) can claim only the same moral and legal rights as typical adult human beings. This approach is incoherent but might be the best way to avoid the risk of human beings becoming morally and legally inferior and hence suffering substantial disadvantages in future societies.
Furthermore, it might be prudent to produce only a relatively low number of SRs, to avoid (a) the existential risk scenario and (b) a fight for the global resources necessary to enable the existence of both human beings and robots. This approach, however, would require human beings to solve the alignment problem, since SRs might not be willing to agree to limits on their proliferation. How we should deal with such issues is a matter for further research, which is absolutely necessary to ensure human survival.
ACKNOWLEDGMENTS
I want to thank the Centre for Socio-Legal Studies at the University of Oxford for their hospitality in hosting me while I wrote this paper in 2021. Thanks also to the seminar audience to whom I presented the paper. Special thanks to João Paulo Lordelo G. Tavares for his comments on an earlier draft. Furthermore, I would like to thank Liisi Keedus for inviting me to present my revised paper at the University of Tallinn in May 2022. Finally, special thanks to the anonymous referee for his/her excellent comments on a previous version of this article.
FUNDING INFORMATION
This research is funded by the European Social Fund under the activity "Improvement of Researchers' Qualification by Implementing World-class R&D Projects", Measure No. 09.3.3-LMT-K-712.
CONFLICT OF INTEREST
None.
ORCID
John-Stewart Gordon  https://orcid.org/0000-0001-6589-2677
REFERENCES
Agar, N. (2013). Why is it possible to enhance moral status and why is doing so wrong? Journal of Medical Ethics, 39(2), 67–­74.
Armstrong, S., Sandberg, A., & Bostrom, N. (2012). Thinking inside the box: Controlling and using an Oracle AI. Minds and Machines, 22(4), 299–324.
Atapattu, S. (2015). Human rights approaches to climate change: Challenges and opportunities. Routledge.
Bostrom, N. (2005). Transhumanist values. Journal of Philosophical Research, 30(Suppl.), 3–­14.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge University Press.
Cavalieri, P. (2001). The animal question: Why non-human animals deserve human rights. Oxford University Press.
Chalmers, D. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.
Clarke, S., Zohny, H., & Savulescu, J. (Eds.). (2021). Rethinking moral status. Oxford University Press.
Coeckelbergh, M. (2014). The moral standing of machines: Towards a relational and non-Cartesian moral hermeneutics. Philosophy & Technology, 27(1), 61–77.
Coeckelbergh, M. (2022). The Ubuntu robot: Towards a relational conceptual framework for intercultural robotics. Science and Engineering Ethics, 28(16). https://doi.org/10.1007/s11948-022-00370-9
Danaher, J. (2019a). Automation and utopia. Harvard University Press.
Danaher, J. (2019b). The rise of the robots and the crises of moral patiency. AI & Society, 34(1), 129–­136.
DeGrazia, D. (2008). Moral status as a matter of degree. The Southern Journal of Philosophy, 46, 181–­198.
DeGrazia, D. (2022). Robots with moral status? Perspectives in Biology and Medicine, 65(1), 73–­88.
Diamond, C. (2018). Bernard Williams on the human prejudice. Philosophical Investigations, 41(4), 379–­398.
Donaldson, S., & Kymlicka, W. (2013). Zoopolis: A political theory of animal rights. Oxford University Press.
Douglas, T. (2013). Human enhancement and supra-personal moral status. Philosophical Studies, 162(3), 473–497.
Francione, G. L. (2009). Animals as persons: Essays on the abolition of animal exploitation. Columbia University Press.
Gellers, J. (2022). Rights for robots: Artificial intelligence, animals and environmental law. Routledge.
Gordon, J.-S. (2016). Human rights. In D. Pritchard (Ed.), Oxford bibliographies in philosophy. Oxford University Press. (Published online 2013; updated October 2016.)
Gordon, J.-S. (2020). What do we owe to intelligent robots? AI & Society, 35, 209–223.
Gordon, J.-S. (2021). Artificial moral and legal personhood. AI & Society, 36(2), 457–471.
Gordon, J.-S. (2022). The African relational account of social robots: A step back? Philosophy & Technology, 35. https://doi.org/10.1007/s13347-022-00532-4
Gordon, J.-S., & Nyholm, S. (2021). The ethics of artificial intelligence. Internet Encyclopedia of Philosophy. Online.
Gordon, J.-S., & Pasvenskiene, A. (2021). Human rights for robots? A literature review. AI and Ethics, 1, 579–591.
Gunkel, D. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
Gunkel, D. (2018). Robot rights. MIT Press.
Jecker, N. S., Atiure, C. A., & Ajei, M. O. (2022). Two steps forward: An African relational account of moral standing. Philosophy & Technology, 35(38). https://doi.org/10.1007/s13347-022-00533-3
Kamm, F. M. (2007). Intricate ethics: Rights, responsibilities, and permissible harm. Oxford University Press.
Kant, I. (2009/1785). Groundwork of the metaphysics of morals. Harper Perennial Modern Classics.
Koch, T. (2004). The difference that difference makes: Bioethics and the challenge of "disability". Journal of Medicine and Philosophy, 29(6), 697–716.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Penguin Books.
Livingston, S., & Risse, M. (2019). The future impact of artificial intelligence on humans and human rights. Ethics & International Affairs, 33(2), 141–158.
Mackie, J. L. (1983). The miracle of theism: Arguments for and against the existence of God. Oxford University Press.
McMahan, J. (2008). Challenges to human equality. The Journal of Ethics, 12, 81–­104.
McMahan, J. (2009). Cognitive disability and cognitive enhancement. Metaphilosophy, 40(3–­4), 582–­605.
Müller, V. C. (2021). Is it time for robot rights? Moral status in artificial entities. Ethics and Information Technology, 23, 579–­587.
Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In V. Müller (Ed.), Fundamental issues of artificial intelligence (pp. 553–571). Springer.
Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield.
Ord, T. (2020). The precipice: Existential risk and the future of humanity. Hachette Books.
Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41, 1–­16.
Russell, S. (2019). Human compatible. Viking Press.
Savulescu, J., & Bostrom, N. (2009). Human enhancement. Oxford University Press.
Schwitzgebel, E., & Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39(1), 98–119.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.
Singer, P. (1975). Animal liberation. Avon Books.
Singer, P. (1979). Practical ethics. Cambridge University Press.
Singer, P. (2009). Speciesism and moral status. Metaphilosophy, 40(3–­4), 567–­581.
Sinnott-Armstrong, W., & Conitzer, V. (2021). How much moral status could artificial intelligence ever achieve? In S. Clarke, H. Zohny, & J. Savulescu (Eds.), Rethinking moral status (pp. 267–289). Oxford University Press.
Smith, J. (2021). Robotic persons: Our future with social robots. WestBow Press.
Stone, C. D. (2010). Should trees have standing? Law, morality and the environment. Oxford University Press.
Vallor, S. (2015). Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. Philosophy & Technology, 28(1), 107–124.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
Wareham, C. S. (2021). Artificial intelligence and African conceptions of personhood. Ethics and Information Technology, 23, 127–136.
Williams, B. (2006). The human prejudice. In A. W. Moore (Ed.), Philosophy as a humanistic discipline (pp. 135–152). Princeton University Press.
How to cite this article: Gordon, J.-S. (2022). Are superintelligent robots entitled to human rights? Ratio, 35, 181–193. https://doi.org/10.1111/rati.12346