The Ethics of Simulated Reality: Suggestions for Guiding Principles

Matthew Kuan Johnson
Yale University
Correspondence to:
Matthew Kuan Johnson
Yale University
matthew.johnson@aya.yale.edu
Abstract:
In the first part of the paper, I review recent neuroscientific technologies and advances,
particularly those that allow for certain types of sensory experiences to be systematically induced
through electrical stimulation of the appropriate neural pathways, and suggest how such
technologies indicate the possibility of having robust and vivid experiences inside of a simulated
reality. After discussing the types of simulated realities that currently exist, I consider the types of highly robust simulated realities that will likely be possible in the
future, such as “experience machines” and “brains (or bodies) in vats.” This section considers the
reasons for, and objections to, plugging into simulated realities of this type. One exciting
possibility with simulated realities is that they provide us with the possibility of creating virtual
worlds in which terrible cases of natural and moral evil will not be possible, and in which
enhanced opportunities for experiencing joy and flourishing will be possible. Furthermore, I
suggest that if meaningful human connection and autonomy are still possible in such simulated realities, they would be preferable to living in the "real" world. Next, I consider how
simulated realities should be structured, borrowing concepts and arguments from the “problem of
evil” literature in philosophy, in exploring whether or not all kinds of suffering and natural and
moral evil should be kept absent from the simulated reality, or just the worst cases of each, and
which goods will be lost by excluding evil and suffering from simulated realities. In the final
section, I explore potential issues that could arise with respect to the regulation of simulated
realities, and suggest some potential solutions. Although the full and robust simulated realities of
“experience machines” and “brains (or bodies) in vats” may be a long way away, the technology
already exists for lesser types of simulated reality. Consequently, issues of regulation of
simulated reality (even of the lesser kind) will likely become some of the most pressing
neuroethical problems of the future. This paper, therefore, seeks to propose some general
principles of, and to identify potential opportunities and issues with, simulated reality in order to
begin the discussion of how to regulate this technology. Countless examples come to mind of
instances in which ethical frameworks lagged behind the technological advances they needed to accommodate, and problems arose as a result. This paper serves to begin the process
of debate concerning simulated realities, so that the appropriate ethical framework will already
be in place once the technology finally starts to come into its own.
Keywords: Neuroethics; Experience Machine; Brains in Vats; Simulated Reality; Soul-Making
Theodicy; Experimental Philosophy
I. Introduction
One of the thought experiments that has generated the most interest, both among
philosophers and in popular culture, is “The Experience Machine.” It was first proposed by
Robert Nozick in his book, Anarchy, State, and Utopia, and explores whether or not one would
be willing to hook oneself up to a machine that would insert one into a virtual reality (Nozick, 1974). This virtual reality would seem completely real to the individual inside the machine, as it would have been extremely well programmed. In addition, the
machine would inhibit the recall of certain of the individual’s memories, such that while the
individual is inside of the machine, he will not recall having chosen to be inside of the machine.
Furthermore, the machine would provide a plethora of highly pleasurable experiences for those
who hook themselves up to it. Hooking oneself up to the experience machine, however, requires
a commitment to remaining inside of the machine and not being “awakened” from it for a long
period of time.
II. Why the Experience Machine is Theoretically Possible (and Already Exists)
While at first blush it may seem that the technology necessary to manufacture such a
machine is, at best, decades or centuries beyond our grasp (and at worst, unattainable), the
technology that makes such a machine theoretically possible has already been developed and is in use. Indeed, one kind of "experience machine" first came into existence around the time that
Anarchy, State, and Utopia was published: cochlear implants. Cochlear implants work by having
electric signals directly stimulate the auditory nerve, which the brain transcodes and interprets as
sounds (Eshraghi et al., 2012). In other words, while hearing aids work by amplifying sounds for
individuals, cochlear implants allow deaf people to hear, by converting sounds into electrical
signals, and then directly sending these signals into the appropriate neural pathways in order to
induce a particular auditory experience. It may not be immediately obvious how cochlear
implants resemble an experience machine, and this is because cochlear implants are typically
used to send the electrical signals that represent the sounds that are actually going on outside of
the individual (through utilizing a microphone placed on the outside of the ear). Consider a case,
however, in which the implant, rather than receiving its information from a microphone placed
on the outside of the ear, instead received information from a computer. That individual would
then have auditory experiences of whatever the computer gave him (for instance Glenn Gould
playing the Goldberg Variations), rather than having auditory experiences that bore a relation to
the sounds going on outside of him. Such a case begins to resemble the experience machine.
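To make the contrast concrete, here is a minimal Python sketch of the two configurations. The function names are hypothetical and do not correspond to any real implant's interface; the point is only that the stimulation pipeline stays the same while the source of the audio samples changes from a microphone to a computer.

```python
# Illustrative sketch only: these names are hypothetical and do not describe
# any real cochlear-implant API.
import math
from typing import Callable, Iterable


def encode_to_stimulation(samples: Iterable[float]) -> list[float]:
    """Stand-in for the speech processor: map audio samples to electrode levels
    (here, just a trivial clamping)."""
    return [max(-1.0, min(1.0, s)) for s in samples]


def microphone_source(n: int) -> list[float]:
    """Placeholder for samples captured by the external microphone."""
    return [0.0] * n  # silence, for the sake of the sketch


def computer_source(n: int, freq_hz: float = 440.0, rate_hz: float = 16000.0) -> list[float]:
    """Placeholder for samples supplied by a computer instead of a microphone,
    e.g. a synthesized tone standing in for a recording of the Goldberg Variations."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]


def run_implant(source: Callable[[int], list[float]], n_samples: int) -> list[float]:
    """The same stimulation pipeline serves both cases; only the source differs."""
    return encode_to_stimulation(source(n_samples))


ordinary = run_implant(microphone_source, 160)   # stimulation driven by ambient sound
simulated = run_implant(computer_source, 160)    # stimulation driven by computer-generated audio
```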
Even more amazingly, a technology that works in the same way that cochlear implants do
(i.e. through electrical stimulation of neural pathways) has allowed blind individuals to have
visual experiences of the world around them. One blind individual’s visual experiences from this
technology were of such robustness that he was capable of successfully driving around a parking
lot (Naumann, 2012). Furthermore, technologies also exist that can use electrical stimulation to
induce olfactory experiences (Kumar et al., 2012), gustatory experiences (Nakatsu, Hideaki, &
Gopalakrishnakone, 2012), and tactile experiences (Olausson et al., 2002). In sum, the
technology exists to induce experiences of our five senses through electrical stimulation.
Consequently, an experience machine that could faithfully replicate the kind of full sensory experiences we enjoy in the real world may well be on the horizon.
Sensory experiences alone are insufficient, as the experience machine also involves the
ability for the individual within it to experience some sense of autonomy. Electrodes
(such as those used in EEG and EMG) that record electrical activity in neuromuscular networks
are able to record how signals are being conveyed along efferent nerve fibers, in order to cause
various parts of the body to move. If these networks were sufficiently mapped, a very complex
computer would be able to discern the types of movements that the individual inside of the
experience machine was intending to perform, from measuring the electrical activity in the nerve
fibers. The computer could then portray the individual as performing that action within the
simulated reality. This technology of recording intentions from action potentials, and turning this
information into representations of the types of movements that the individual is intending to
perform, has existed for a number of years, and allows "locked-in" patients, who are unable to move, to communicate in various ways (typically by controlling a cursor on
a screen through the computer’s interpretation of their action potentials, which convey some
intention for how they want the cursor to move) (Kennedy & Bakay, 1998). Consequently, it is
theoretically possible that there could be an experience machine developed in which individuals
were able to perceive themselves as having the same degree of causal efficacy within the
experience machine as they have in reality.
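As a toy illustration of the decoding step just described (and not a description of Kennedy and Bakay's actual method), the sketch below maps hypothetical firing rates recorded from two efferent channels onto an intended cursor velocity with a simple linear decoder, which is then integrated into a cursor position inside the simulation.

```python
# Toy sketch of intention decoding: the channel weights are hypothetical and
# would, in practice, have to be calibrated for each patient.

def decode_cursor_velocity(firing_rates, weights_x, weights_y):
    """Linear decoder: intended (vx, vy) as weighted sums of channel firing rates."""
    vx = sum(r * w for r, w in zip(firing_rates, weights_x))
    vy = sum(r * w for r, w in zip(firing_rates, weights_y))
    return vx, vy


weights_x = [0.8, -0.2]   # hypothetical calibration weights for two recorded channels
weights_y = [0.1, 0.9]

rates = [12.0, 3.0]                      # measured firing rates (spikes/s) on each channel
vx, vy = decode_cursor_velocity(rates, weights_x, weights_y)
cursor = (vx * 0.05, vy * 0.05)          # position change over a 50 ms update step
```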
While the physicalist should experience no uneasiness with the theoretical possibility of
the type of experience machine that I have suggested (financial concerns and other such
considerations aside), the dualist may experience some uneasiness with what we have discussed
so far. Nevertheless, it strikes me that this type of experience machine should provide no
problems for the dualist. Dualists typically do not consider “mind uploading” or “brains in vats”
to be possible, because they do not think that minds are realizable in just any type of physical
substrate. Indeed, dualists believe that each person involves physical elements and nonphysical
elements that causally affect each other. On the dualist view, therefore, even if very clever
scientists carefully mapped the location, properties, and connections of every neuron in my body,
and reproduced these properties and relationships perfectly in a computer or in a robot made out
of silicon, the computer or robot would not have any conscious experiences, or qualia, because it
would lack the nonphysical element that is essential to human beings. Similarly, some dualists do
not think that brains in vats are possible; in other words, they would deny the possibility that
very clever scientists would be able to remove my brain and to hook it up to a computer, or to a
new robot body made of silicon, and still have me in this new state (i.e. many dualists would
deny that I would continue to enjoy conscious existence and qualia in such a state). The
experience machine that I have suggested is a type of "full body envatment" that should pose no
problems to the dualist for two reasons. Firstly, as I have just explained, the technology already
exists with which we can systematically induce sensory experiences, and also record and interpret the electrical activity in motor pathways that represents intended motor actions. Secondly,
by leaving the entire body intact in this type of "full body envatment," the mind and its
nonphysical substance will, almost assuredly, also be left intact. In other words, the experience
machine that I have suggested only concerns itself with affecting and interpreting, respectively,
the physical systems that regulate input and output of information. In this way, the mind and its
nonphysical substance are still free to process this information as they always have.
III. Objections (and Replies) to Entering the Experience Machine
Nozick thought that most people would not choose to plug into the experience machine if
given the choice, and copious anecdotal evidence along with more formal types of empirical
evidence reveals that he seems to be right about this (De Brigard, 2010). Nozick believed that
such a reaction to his thought experiment indicated that human beings care about more than
merely their own happiness (which life inside the experience machine would provide in abundance) (Nozick, 1974). It strikes me that one of the primary obstacles to these individuals
entering the experience machine is their concern that doing so would sever their contact with real
human beings, and that this would be a bad thing. While the virtual characters that they
encounter in the experience machine would be indistinguishable from human beings, these
characters would lack the kinds of experiences and qualia that many take to be essential to human life. We can avoid this concern if we consider some point in the future at which it is
possible for everyone to enter the experience machine at the same time, and to share life together
inside of the experience machine. Here, one would be able to have the same relationships that
one could have “in the real world,” since everyone would be in the experience machine. I will
term this variation “the shared experience machine” (or SEM), and here, the concern about
violating some kind of relational obligation to other human beings is avoided.
One may reply, however, that it is highly unlikely that more than a relatively small number of people would plug into the SEM. Additionally, one might object that it would be deeply troubling if some of her loved ones plug in and others do not, as she
would find herself having lost some of her most intimate relationships. This objection, however,
fails to account for the tremendous possible benefits that life in the SEM could accord. For
example, we currently live in a world in which tremendous moral and natural evils are not only
possible, but are actualized with horrifying frequency. Even those who are lucky enough to
escape suffering these things often experience deep sorrow over the state of the world, and
anxiety that catastrophe could befall them or their loved ones at any moment. In a simulated
reality, however, all events (those brought about through human volition and otherwise) are under the control of whoever programmed that particular world. In SEM, therefore, natural evils, such as hurricanes and droughts, could be excluded. Similarly, acts of moral evil could be made
impossible to commit in SEM. Analogously, consider how many war-based video games include
a setting which allows the players to “turn off friendly fire.” This means that whenever one tries
to kill one of their teammates, the game does not allow it. Similarly, SEM could be created such
that whenever one tries to injure someone else, cheat, steal, etc., they will be unable to do so. As
a result, I find it unlikely that individuals would choose to avoid plugging in to SEM, since doing
so would amount to turning down the possibility of living in a world in which there were no
horrendous evils such as rape, brutal murder, and child abuse.
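By analogy with the "friendly fire off" setting, one can picture SEM's rule engine as intercepting attempted actions and vetoing those classified as harmful before they take effect. The following is only a schematic sketch, with hypothetical action categories, of the shape such a filter might take.

```python
# Minimal sketch of a "no harm" rule: attempted actions classified as harmful
# are vetoed before they take effect in the simulated world. All names are
# hypothetical.

HARMFUL_KINDS = {"injure", "steal", "cheat"}


def attempt_action(world_log, actor, kind, target):
    """Apply the action only if it is not classified as harmful."""
    if kind in HARMFUL_KINDS:
        world_log.append(f"blocked: {actor} tried to {kind} {target}")
        return False
    world_log.append(f"applied: {actor} performed {kind} toward {target}")
    return True


log = []
attempt_action(log, "A", "greet", "B")    # allowed
attempt_action(log, "A", "injure", "B")   # vetoed by the rule engine
```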
The possibility of SEM’s being such a utopia, in which it is not only the case that moral
and natural evils do not occur, but in which it is metaphysically impossible for them to occur,
provides the resources with which to answer some other possible objections. For instance, others
might insist that the experience machine could function well as something like a “vacation
center,” or that individuals might like to be able to alternate freely between life in the experience
machine and life in the real world. Others may suggest that there could be degrees of simulated
reality, in which we would live partially in the real world and partially in a simulated reality. The
problem is that any contact with the real world that we would have under such systems would
leave us vulnerable to horrendous evils. Perhaps one might be able to preserve the attractiveness of degrees of simulated reality if everyone were to wear some kind of brain headset that prevented them from performing horrendous acts of evil. It still strikes me
that, arguments regarding the prevention of horrendous evils aside, living in a fully simulated
reality would provide us with such a high level of control over the properties of that world, that
in opting out of life in SEM, we would miss out on many potential goods. For example, we could
go beyond removing natural and moral evils in SEM, and remove all annoyances. Furthermore,
we could even enhance the quality of our experiences of, and the frequency of our experiences
of, joy. Thus, SEM would not only be a world devoid of the discomforts of mosquitos and of toe-stubbing, but one in which every evening would also feature more vivid sunsets, the grass would smell sweeter,
and so on.
One of Nozick’s suggestions for why many object to life inside the experience machine,
is that it would limit us to a man-made reality, thereby precluding our contact with a deeper
reality (Nozick, 1974). I counter that SEM, in being designed by man, could be more conducive
to his flourishing than would be a world brought about by the impersonal forces of the universe
(such as the one in which we live). I can see no reason to adopt a principle that maintains that life
inside of “deeper reality” is better than life inside of man-made simulated realities. Furthermore,
endorsing such a principle seems to me to commit a kind of naturalistic fallacy, or to fall prey to the is-ought problem, as such a principle seems to come from the idea that because "deeper reality"
exists naturally, that it is therefore better, or the one we “ought” to live in, or is the way that
things “ought to be.” What occurs naturally is not always best, or what is morally preferable.
Indeed, we have already discussed a great number of reasons to think that the creation of a "man-made reality" could confer serious and important improvements upon "deeper reality."
Nozick’s other suggestions for why individuals object to the experience machine are that
human beings are concerned with what they are (and would not, therefore, want to be bodies
floating around in tanks while hooked up to machines), and that we want to actually perform our actions, rather than merely experience doing them (Nozick, 1974). Regarding the first
point, the body of the envatted individual is still causally efficacious. It may be the case that we
are uncomfortable with the image of ourselves in tanks, because it brings to mind images of
causally inefficacious bodies such as corpses and individuals in comas. It strikes me, however,
that if we remind ourselves that the envatted individual is not causally inert, but, on the contrary,
is using his body to convey his intentions to the computer (which represents his intended
behavior in SEM), this discomfort will likely subside. Nozick's second point here is solved
by the fact that in SEM, there can still be relationships with other individuals (since everyone is
envatted). This is because, while I am unsure exactly what Nozick means by “actually doing
actions,” it seems that he is referring to the human desire to accomplish certain things, because of
the effect that it will have on others or upon oneself. Because everyone is envatted in SEM, such
meaningful and intentional action, that has an effect on oneself and on others, is still possible.
To sum up the discussion thus far, it strikes me as obvious and axiomatic that there must
be some moral principle that states that we ought to promote the good of conscious beings. It
does not strike me as obvious what could possibly ground the principle that it is better to remain
in “deeper reality” instead of joining a simulated reality, and this is a principle that Nozick seems
to suggest that many objectors to the experience machine endorse. Even if there were such a
principle, it seems that the principle to promote the good of conscious beings would always
override it. If everyone were to join SEM, they would be able to maximize their own good (on
account of the absence of horrendous evil, etc.), and they would remain in relationships with
other conscious beings, and thereby continue to have opportunities to promote the good of other
conscious beings, as well. It is important to note that simulated realities of the SEM type cannot
be denigrated as “escapism” or as “temporary holidays.” Life in SEM involves the same types of
autonomy, relationships, and possibilities for self-actualization that exist in the real world.
Recent empirical work from the field of Experimental Philosophy has found that the
unwillingness of most people to plug in to the experience machine may not stem from the types
of rational principles that Nozick had suggested (De Brigard, 2010). Philosophy has traditionally
been characterized by a pervasive rationalism, in which intuitions are often believed to track
deeper truths or facts of the matter, and human reasoning capacities are typically thought to be
highly reliable. By contrast, Experimental Philosophy sees our reasoning faculties as much more
fallible, and so Experimental Philosophy concerns itself with discovering and explaining the
factors that give rise to and affect our intuitions, as well as the factors that can affect and bias our
reasoning processes. Empirical evidence from experimental philosophy has found that what is
likely driving people’s responses to Nozick’s experience machine is a bias toward favoring the
status quo; indeed, when the scenario was reversed, so that participants were told that they are currently living inside of an experience machine and asked whether they would like to unplug, most of them said they would not (De Brigard, 2010). De Brigard interprets this to mean that what is really going on
in people’s responses to the experience machine is a desire to maintain the status quo: when
situated in reality, people want to stay in reality; conversely, when situated inside the experience
machine, people want to stay inside of it. In conclusion, in this section, I have provided a variety
of reasons for plugging into the experience machine. De Brigard’s work may suggest that much
of the resistance to accepting the reasons for plugging in to the machine may stem, in large part,
from a desire to maintain the status quo, rather than from a well-reasoned position.
IV. Life Inside the Experience Machine
How should we set up the experience machine for life inside of it? At first blush, the
possibility of creating a world in which there was no suffering or adversity is highly tempting.
Whether suffering is necessary for a good human life is partly an empirical point. While much
work in cognitive science has been done relating to adversity and development (Sternberg, 2007; Tedeschi & Calhoun, 2004; Elder, 1974), I do not want to pursue it here. This is because
the literature has concerned itself with showing ways in which adversity and suffering can be
formative, and very little of the literature is helpful in answering the question of whether the
existence of adversity and suffering is required for human flourishing. Consequently, in this
section we will engage with a literature that explores this very question of whether or not the
existence of suffering is necessary for the good human life: the problem of evil literature in
philosophy.
Firstly, in deciding how to structure life inside the experience machine, we can use as a
guiding principle the one that William Rowe applied toward benevolent creators of worlds: in
creating the rules of operation of a world, a “…good being would prevent the occurrence of any
intense suffering it could, unless it could not do so without thereby losing some greater good or
permitting some evil equally bad or worse.” (Rowe, 1979). Initially, it seems as if all evil should
be banished from SEM, for what “greater good” could possibly be lost by so doing?
The class of goods that would be lost is referred to as second-order goods: goods that can
only exist if evil and suffering also exist (Mackie, 1955). Indeed, certain goods such as empathy,
sacrifice for others, and forgiveness are only possible if suffering and evil also exist; for example, forgiveness is impossible in a world in which individuals cannot harm one another. It is possible that one may object that virtues such as empathy, sacrifice, and forgiveness are second-order goods that are worth giving up for the possibility of having a world in which no suffering
exists. The problem with this suggestion is that we take love to be one of the most fundamental
goods for human lives, and some of the most important ways in which humans love and are
bound together are through empathy, sacrifice, and forgiveness. These second-order goods, then,
may be necessary in order to make meaningful and deep relationships possible.
Secondly, it may be the case that human action can only be meaningful if it is truly
autonomous. Indeed, that a course of action has been freely chosen by an individual seems to be
an essential component of meaningful human action, and this seems to require the possibility that
the human could have chosen wrongly. This would be impossible in the “no harm” world that
was mentioned previously (the world in which individuals are simply physically unable to
engage in actions that harm others). One could argue that such a world still could contain
meaningful human action and autonomy, as individuals would still be able to choose whether or
not to endorse their evil desires and intentions (even if they cannot act on them). The problem is
that the dissonance between the desires and intentions that the individual will have to act
wrongly, and their inability to act upon them, will likely create intense experiences of negative
affect. Individuals will then have no choice except to surrender to their fate and rid themselves of
these intentions and desires, in order to avoid the negative affect. The individual’s ridding herself
of her wrong desires and intentions, then, will have been due to entirely self-interested
motivations (i.e. drive reduction). Because acting well and thinking good thoughts will always be in
one’s best interest, this seems to preclude the possibility of what Kant called “the good will,”
which “does the right simply and solely because it is right, and of which he said that this is the
only intrinsically good thing in the world or out of it.” (Hick, 1966). In other words, when all
right action is only capable of being endorsed as a result of self-interested reasons, it seems to
deny the possibility of meaningful choice and moral development itself. Finally, many of the meta-ethical theories that currently enjoy the most popular support suggest that harm is the base
upon which “wrongness” supervenes (Hare, 2001). Thus, when we say an action is “wrong,”
what we really mean to convey is our concern that it will cause some sort of harm. If harm is no
longer possible (i.e. the “no harm world”), such meta-ethical theories would suggest that the
category of “wrongness” would collapse, and morality itself would likely collapse, as a result.
Some may contend that this would not be a bad thing, as the tradeoff would be a world in which
no evil or suffering occurs; however, I have attempted to show why such a world may be devoid
of the possibility of meaningful human action, which is likely a loss that outweighs the good of
having a world in which suffering and evil do not exist.
What if we tried to keep the category of “wrong” by permitting the possibility of wrong
actions, yet we simply made the world one of (near) perfect justice? In other words, instead of
allowing injustice, we would set up SEM such that anyone who attempted to perform a
good action was immediately paid their due, and the consequences of an evil action would fall
upon the enactor (a kind of "Wile E. Coyote" or "perfect karma" world in which all attempted
harm resulted only in harm being inflicted upon oneself). In such a world, good human action
would always involve the expectation of reward, and bad human action would always involve
the expectation of punishment. Such a world would be wholly unconducive to human beings
doing things solely out of respect for the fact that it is the right thing to do (Hick, 1966). Human
action, again, would be reduced to a kind of drive reduction, and human autonomy, and the
possibility of meaningful human action, would be greatly reduced.
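The "perfect karma" variant just described can be stated as a small change to the same kind of rule: rather than vetoing harmful actions, the world redirects their consequences back onto the enactor. A minimal, purely illustrative sketch, with hypothetical names:

```python
# Sketch of the "perfect karma" alternative: harmful actions are not blocked,
# but their consequences are redirected onto the actor rather than the target.

def resolve_harm(ledger, actor, target, damage):
    """Attempted harm lands on the attacker; the intended target is untouched."""
    ledger[actor] = ledger.get(actor, 0) + damage


ledger = {}
resolve_harm(ledger, actor="A", target="B", damage=5)
assert ledger == {"A": 5} and "B" not in ledger
```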
We turn now to the question of what types of suffering should be excluded from our
world. I maintain that we should exclude "horrendous evils," which are "evils the participation in which (that is, the doing or suffering of which) constitutes prima facie reason to doubt whether the participant's life could (given their inclusion in it) be a great good to him/her on the whole" (Adams, 1999). The extreme natural and moral evils (murder, rape, grave injury, etc.) that, in the previous section, we said should be left out of our world seem to be the type of evils whose exclusion could not possibly result in the loss of some greater good. John Hick
argues that there is actually a greater good that is served by the existence of horrendous evils: the
world must be truly dangerous and unpredictable in order for us to take it seriously (Hick, 1966).
In other words, if the world we lived in were one in which the good always prevailed (no matter
how little effort was put in to protect it) and there were never any truly great dangers that faced
us, humans would become complacent, lacking the drive toward development and personal
progress. Only if the world is truly dangerous and unpredictable (in short, only if we do not know ahead of time that good will always prevail) will we be lifted from our natural tendencies toward complacency, and toward improving ourselves and our world. Unfortunately, we
lack the resources here to address this objection thoroughly, since whether or not the absence of
horrendous evils will result in complacency is an empirical point (that also has not been
addressed yet by social science research). It is also worth mentioning that Hick’s argument was
applied in a completely different context: he was interested primarily in “soul-making,” or in
exploring the conditions most conducive for individuals to work out their salvation before God.
Horrendous evils may be required for this journey of self-purification. By contrast, in SEM, such
an extreme process of development is not required and, consequently, horrendous evils are likely
also not required. I take it, however, that some degree of suffering and evil would be of benefit
for SEM, as these may be necessary in order to nudge human beings to search for new heights of
knowledge, innovation, progress, and development, all of which seem to be necessary for human
flourishing.
I now address the question of whether natural evil should be included in SEM. Natural
evil contributes to a more unpredictable world, which creates a context that encourages human
progress and innovation as a response to it. Furthermore, as mentioned above, a Kantian "good
will” is possible only in a world in which the consequences of moral action are unpredictable.
This is because, in an unpredictable world, a morally upright person could still get hit by an
“undeserved” hurricane or drought, and experience suffering as a result of it. In such a world,
this individual’s choices to become a morally upright person will have come from a Kantian
“good will” and not from self-interested concerns: he will have chosen morality because it is
good, in itself, and not because being morally upright would protect him from natural evil. One
may object that it is still possible to choose morality as an intrinsic good, and from non-self-interested motivations, even in worlds without the unpredictability that comes from the existence
of natural evil. While I agree with this, I respond that the Kantian “good will,” as I understand it,
requires some sort of testing in order to achieve its most robust form. If this is correct, an individual could not be said to have a truly "good will" unless there had been
moments in which the attractiveness of the moral life had been challenged. For example, the
strongest “good will” would have been developed from not merely any old instance of having
chosen morality for its own sake, but from instances in which morality was chosen under
circumstances in which the moral life appeared useless, unattractive, or difficult (for example, if one still chooses the moral life despite having been "unjustly" hit by a drought that has caused him much suffering, suffering that could be alleviated by stealing water from his neighbor). Consequently, unpredictability, of the kind provided by the
existence of natural evil, is probably necessary in order for there to be the possibility of having a
robust type of “good will.”
One could object, however, that this unpredictability could be supplied by moral evil,
making the existence of natural evil unnecessary. I respond that Hick suggests that natural evil
provides an instructive function, since individuals see the harm caused by natural evil, and
realize that they, themselves, can also cause harm (Hick, 1966). Whether the amount of moral
evil that will exist in a world without natural evil will be sufficient to provide the needed unpredictability, and whether or not natural evil is necessary in order to make moral evil
a live option for individuals, is an empirical problem, and we currently have few tools with
which to explore it. Only after the appropriate empirical work has been done, will this be a
fruitful line of inquiry. Finally, one could object that all instances of “natural evil” in SEM are
really cases of moral evil, since it was an individual, or group of individuals (i.e. the computer
programmers of SEM) who, in a sense, caused the possibility of the natural evil. I find it highly
unlikely that an individual who experiences “natural evil” inside of SEM will perceive it as being
a case of moral evil. Even if they do see it as an instance of moral evil, it strikes me as unlikely
that their framing it in this way will make any practical difference to their responses to it.
In conclusion, in this section we argued that horrendous evils should be excluded from SEM, since in doing so we will likely not also lose some comparable or greater good. We also explored how some level of suffering and moral evil is necessary in order to make second-order goods possible (such as the virtues of empathy, sacrifice for another, and forgiveness,
which seem to be necessary for deep relationships). Additionally, suffering, moral evil, and some
degree of unpredictability and injustice, are necessary in order to encourage human progress, and
to allow for SEM to be an arena in which meaningful human action can possibly occur. The
levels and types of each will need to be worked out (likely through empirical means), at some
point.
V. Regulation of the Experience Machine
A great many difficulties will arise regarding the task of the regulation of the experience
machine. Many issues about regulation will be best addressed once simulated reality becomes a
possibility, since the ways in which these issues will be able to be resolved will depend heavily
on the circumstances of that time and environment (I am thinking here of such issues as who will
have to stay behind and tend to the machines, who has a right to be inside of them in the event
that they are not readily available to all, etc.). In this section, I explore the remaining issues that
we, in the present day, are able to consider.
Firstly, one great advantage of SEM is that there could be multiple possible simulated
realities, each with their own unique characteristics, and one could choose which simulated
reality she wanted to enter. This could be beneficial in, for example, keeping certain violently
opposed groups apart. If individuals were also free to travel between these worlds as they wished, enemies could be kept apart while friends would not be.
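Such a travel policy amounts to a simple access check layered over the set of worlds: visits are permitted by default and denied only where an explicit conflict between groups has been registered. A minimal sketch, with hypothetical group names:

```python
# Sketch of a travel policy across multiple simulated realities: visits are
# allowed unless the visitor's group is in registered conflict with a group
# resident in the destination world. All names are hypothetical.

def may_visit(visitor_group, destination_groups, conflicts):
    """Allow travel unless the visitor is in conflict with any resident group."""
    return all(frozenset({visitor_group, g}) not in conflicts
               for g in destination_groups)


conflicts = {frozenset({"group_a", "group_b"})}            # registered enmity
assert may_visit("group_a", {"group_c"}, conflicts)        # friends remain reachable
assert not may_visit("group_a", {"group_b"}, conflicts)    # enemies stay apart
```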
One problem with the possibility of having multiple simulated realities involves the
question of how much freedom should be allowed in establishing the rules of operation for that
particular simulated reality. Indeed, it could be the case that many unvirtuous worlds are created,
in which individuals live out profoundly perverse fantasies. For instance, perhaps granting
unregulated freedom to create worlds for oneself would result in individuals creating worlds
inhabited by virtual characters upon whom they would carry out their sick and perverse fantasies. One's initial reaction may be to endorse such a system, since it would keep the more
unsavory members of society effectively “quarantined” to their own worlds. On the other hand,
one might object that society still has a responsibility to not hand those individuals over to such a
life. One response is that a system of "nudges" could be used, in which individuals are
encouraged toward particular patterns of behavior, while not completely regulating individuals’
personal lives (Thaler & Sunstein, 2008). This principle is at work, for example, in raising taxes
on cigarettes, in order to encourage citizens to quit smoking. Additionally, one may object that
allowing individuals to create their own, deeply perverse simulated realities could serve to
encourage such practices to become widespread, such that many individuals would leave the
SEM that contained the majority of people for their own personal worlds. I reply that most individuals will likely realize, after a short amount of time in those worlds, that there is more to
life than simple drive reduction, and they will eventually grow tired of living out their perverse
fantasies, and desire to rejoin the rest of society.
VI. Conclusion
I have suggested reasons why simulated reality, on the level of an experience machine,
may become a real possibility in the future. If it does, questions of how to design, structure, and
regulate it will be some of the most important that face society. For that reason, it is crucial to have some guiding principles ready ahead of time, such that the ethical framework is in place
well before the technology is. This paper has been an attempt to initiate dialogue on this topic,
and in it I have provided arguments for why living in a simulated reality may be preferable to living in our own, engaged with objections to plugging into simulated reality, and suggested ways of thinking about how to include the appropriate types and amounts of suffering and injustice in such a world. Finally, I considered certain issues and potential avenues of resolution regarding the regulation of simulated reality.
VII. References
Adams, M. M. (1999). Horrendous Evils and the Goodness of God. Ithaca, NY: Cornell University Press.
De Brigard, F. (2010). If you like it, does it matter if it's real? Philosophical Psychology, 23(1), 43–57.
Elder, G. H. (1974). Children of the Great Depression: Social Change in Life Experience. Chicago: University of Chicago Press.
Eshraghi, A. A., Nazarian, R., Telischi, F. F., Rajguru, S. M., Truy, E., & Gupta, C. (2012). The cochlear implant: Historical aspects and future prospects. Anatomical Record, 295(11), 1967–1980.
Hare, J. E. (2001). God's Call: Moral Realism, God's Commands, and Human Autonomy. Grand Rapids, MI: W. B. Eerdmans.
Hick, J. (1966). Evil and the God of Love. London: Macmillan.
Kennedy, P. R., & Bakay, R. A. (1998). Restoration of neural output from a paralyzed patient by a direct brain connection. NeuroReport, 9, 1707–1711.
Kumar, G., Juhász, C., Sood, S., & Asano, E. (2012). Olfactory hallucinations elicited by electrical stimulation via subdural electrodes: Effects of direct stimulation of olfactory bulb and tract. Epilepsy & Behavior, 24(2), 264–268.
Mackie, J. L. (1955). Evil and omnipotence. Mind, 64(254), 200–212.
Nakatsu, R., Hideaki, N., & Gopalakrishnakone, P. (2012). Tongue mounted interface for digitally actuating the sense of taste. In Proceedings of the 16th IEEE International Symposium on Wearable Computers (ISWC) (pp. 80–87).
Naumann, J. (2012). Search for Paradise: A Patient's Account of the Artificial Vision Experiment. Xlibris.
Nozick, R. (1974). Anarchy, State, and Utopia. New York: Basic Books.
Olausson, H., Lamarre, Y., Backlund, H., Morin, C., Wallin, B. G., Starck, G., et al. (2002). Unmyelinated tactile afferents signal touch and project to insular cortex. Nature Neuroscience, 5, 900–904.
Putnam, H. (1981). Reason, Truth, and History. Cambridge: Cambridge University Press.
Rowe, W. L. (1979). The problem of evil and some varieties of atheism. American Philosophical Quarterly, 16, 335–341.
Sternberg, R. J. (2007). Wisdom, Intelligence, and Creativity Synthesized. New York: Cambridge University Press.
Tedeschi, R. G., & Calhoun, L. G. (2004). Posttraumatic Growth: Conceptual Foundation and Empirical Evidence. Philadelphia, PA: Lawrence Erlbaum Associates.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.