Apocalyptic AI
Religion and the Promise of Artificial Intelligence
Robert M. Geraci
Department of Religious Studies
Manhattan College
I. Introduction
One of mankind’s most cherished dreams—in religious, scientific, and artistic
circles—has been the creation of humanoid life (Bilski 1988; Cohen 1966; Newman
2004). The Pygmalion myth in ancient Greece and the golem myth in medieval Judaism,
for example, reflect this drive within artistic and religious spheres respectively. In
science, a long tradition of automatons includes ancient Greek water mechanisms, 17th
century Japanese tea-serving dolls, and 18th century European automata (such as
Vaucanson’s wing-flapping, eating, and defecating duck or Jacquet-Droz’s piano player).
Contemporary robotic technology continues this trend. 1 Often, the creation of intelligent
life is simultaneously religious, scientific, and artistic. Neat divisions among these
categories cannot be easily drawn even today, when robotics and artificial intelligence
(AI) hold the most promise for realizing this longstanding dream.
In the late 20th century and early 21st century, a number of influential roboticists
and AI pioneers wrote popular science books that show the close connections between
religion and science in contemporary life. Major figures in this movement include awardwinning AI inventor Ray Kurzweil, seminal roboticist Hans Moravec, neural net building
Hugo de Garis, and UK roboticist Kevin Warwick. 2 The world they so eloquently
describe conjures a fantastic paradise in which robotics and AI improve humankind and
the world. In doing so, these AI advocates lead a scientific movement that never strays
far from the apocalyptic traditions of Western culture.
Apocalyptic AI is a popular science movement which has absorbed the
apocalyptic categories of the Jewish and Christian traditions (this despite the atheistic
convictions of most advocates). Second Temple Jews and early Christians maintained a
dualistic worldview and anticipated a radical divide between the present and the
immediate future. They expected that the world and the human person would be
transformed, with the latter given new, improved bodies in order to enjoy the heavenly
Kingdom which would resolve the problematic dualism of earthly life. Apocalyptic AI
looks forward to a mechanical future in which human beings will upload their minds into
machines and enjoy a virtual reality Kingdom. Thanks to these soteriological hopes, the
historical structure of Apocalyptic AI offers fruitful comparison to Jewish and Christian
views.
II. God’s Kingdom Come: Apocalyptic Theology
Properly speaking, “apocalypse” is a (generally eschatological) literary genre
wherein a prophet receives divine revelation through a vision of a transcendent reality
distinct from the everyday world. Apocalypticism, however, occurs in a wide variety of
narrative structures; it can be “broadly described as the belief that God has revealed the
imminent end of the ongoing struggle between good and evil in history” (Collins 2000a,
vii). In its fullest elaborations, the breakdown of a “proper” social order promotes
apocalypticism, in which believers expect that God will reconfigure the world and
transform humanity so they can enjoy the Kingdom eternally. 3 Although the popular
science works of Apocalyptic AI are not members of the apocalypse genre, they contain
what Wayne Meeks calls “apocalyptic discourse” (2000, 463). 4
Jewish and Christian apocalyptic traditions drew upon contributions from ancient
Israel’s prophetic tradition (Russell 1964, Hanson 1975) and wisdom tradition (von Rad
1965), combat myths of ancient Mesopotamia (Clifford 2000), and writings from Greek
and Persian culture (Collins 2000b). Collectively, these sources found a home in the
social landscape of Second Temple Judaism and early Christianity. 5 Because the 20th
century robotics and AI authors had access to the full tradition of Jewish and Christian
apocalypticisms, there will be little effort to offer a chronological assessment of
apocalyptic development or situate the ideas of one group relative to those of others.
Rather, the worldview of Apocalyptic AI will be analyzed after several basic
characteristics of Jewish and Christian traditions have been described. 6
Alienation: What Has Rome To Do with Jerusalem?
Troubling political conditions provide fertile ground for apocalyptic writings and
beliefs. 7 The political alienation experienced by Jews and Christians in the ancient world
made apocalypticism a favorable religious alternative to cultural submission. This was
clearly the case during Roman rule of Palestine. Political power and the right to worship
were often stripped away from Jews and Christians, who had little hope but to wait for
the Messiah (or his return) to rectify the world.
Elements of apocalypticism arose in Judaism during times of tribulation,
culminating in the Second Temple period. The conflict with Assyria (Isaiah), the
Babylonian Captivity and post-exilic period (Ezekiel, 3 Isaiah), Greek rule and the
Maccabean Revolt (Daniel, 2 Maccabees, the early elements of 1 Enoch) and Roman
Rule (2 Baruch, 4 Ezra, Apocalypse of Abraham, 1 Enoch 37-71) all contributed to
Jewish faith in the apocalyptic redemption of Israel. While early texts such as Isaiah and
Ezekiel were not themselves apocalypses, they provided a starter culture for some of that
genre’s key ideas. Prophetic oracles and hopes for redemption led to full blown
apocalypticism in which alienation would be overcome with the establishment of a new
world.
During Roman rule, Jews confronted a basic conflict in their covenantal view of
history. Why did God withhold control of the promised land? 8 The followers of Jesus
inherited this pre-existing Jewish alienation and established upon it a new religion based
on their presumed Messiah. As their political and religious split with the Jews widened,
Christians were as uncomfortable in the Temple as they were in the agora, where they
refused to participate in Roman public religion. 9 Early Christians desperately awaited the
return of Jesus, which would eliminate their twofold political troubles and bring about an
eternal end to alienation. 10 Apocalyptics hope that God, as arbiter of absolute justice and
rectifier of a corrupt world, will radically reconstruct the world. Apocalyptics, despite
their criticism of the present world, are not pessimistic in their outlook. They are
“passionately concerned, even obsessed, with the possibility of goodness” (Meeks 2000,
462). As this world is so miserably sinful, apocalyptics look forward to the time when
God creates a new, infinitely good world.
The New Jerusalem
Apocalypticism devalues the present world, which, as 4 Ezra states, is but clay
compared to the gold of the next (8:1-3). 11 God will establish a new world where all of
the misguided values of the old world will give way to meaningful life. The boundary
between the old world and the new, between evil and good, reflects the fundamental
dualism of apocalyptic life.
Meaningful activity—in the form of prayer and heavenly praise—abounds in the
new kingdom. Apocalyptic visionaries often traveled to Heaven, which provides a
glimpse into their expectations of a radically reconstructed world. When apocalyptics
ascend to heaven, they typically understand it as a glorious temple (Himmelfarb 1993).
According to Himmelfarb, priestly investiture allows visionaries to join the angels in the heavenly temple; once in heaven, they observe the angels and righteous human beings engaged in prayer to and praise of God (2 Enoch 8-9, Apocalypse of Zephaniah 3:3-4, 1
Enoch 39:9-14). From most religious perspectives, praise of God amounts to the highest
human endeavor; the entire cosmos supposedly devotes itself to this task. As the
ultimately meaningful activity, it occupies the highest levels of heaven. Apocalyptics
point to the difference between our world, where material desires overshadow the
spiritual, and the heavenly world, where angels and the righteous saints maintain proper
values. The coming kingdom will appropriate the activity and faith of heaven.
Because misaligned values characterize the present world, apocalyptics anticipate
a new world, one built up by God to replace the old. This hope was common to ancient
Jews and Christians, who expressed it in their writings. “I am about to create a new
heavens and a new earth,” declares God in the Hebrew Bible (Isa. 65:17). This event is
fully realized in 1 Enoch’s “Apocalypse of Weeks” (1 Enoch 94:16) 12 around the time of
the Maccabees and revisited in John’s apocalyptic vision (Rev. 21:1) in the late 1st
century C.E. John sees a New Jerusalem descending from heaven; in the New Jerusalem,
death and sadness will be wiped away (21:2-4). In the New Kingdom, no one need worry
whether he or she should pay taxes to Caesar! 13
The new world will dissolve sadness and bring humanity into contact with the
divine. The current world acquires meaning through its historical progression toward the
new but is otherwise devoid of real value. 14 The new world is fully eschatological,
however: it leads nowhere and it never evolves. Flush with the eternal presence of God, it
is ultimately meaningful with nothing more to be sought. Whatever it is that happens in
the New Jerusalem, it certainly will not advance history in any conventional sense. God,
of course, plans to include the righteous in this wondrous future. Thanks to their
resurrection in transformed bodies, the saved will enter the Kingdom of God.
Angels from the Ashes
Apocalyptic Jews and Christians expected resurrection in glorified bodies.
Although alternate views of resurrection existed, 15 the dominant positions among both
Jews and Christians maintained that it would be bodily. 16 Indeed, it was the bodily
element of resurrection that functioned as the lynchpin of communal self-definition,
uniting the disparate elements of Jewish and Christian theologies (Setzer 2004). Although
apocalyptic Jews and Christians expected bodily resurrection, they did not expect to have
precisely the same bodies as those which they possessed in life. 17 God will glorify the
apocalyptic dead, raising them up in purified bodies; made immortal, these glorious new
bodies enable the righteous to join the angels in the Kingdom of God.
This resurrection stands opposite the common Greek notion of escape from the
body but still recognizes that earthly bodies lack something essential to life in the future
world. For example, Paul’s assertion that “flesh and blood cannot inherit the Kingdom”
(1 Cor 15:50) refers to the impossibility of saving human bodies as they are in this world
(Setzer 2004, 65). At the moment of the kingdom come, the human person must take on a
new form. “We will not all die,” says Paul, “but we will all be changed” (1 Cor 15: 51;
see also 2 Cor 5:1-4). Although each person will require a body in order to take part in
the life to come, the mortal bodies of the present cannot inhabit the perfect world of the
future. God must, therefore, act to transform human bodies into angelic bodies. 18
The glorious new body will be immortal. Death marks the ultimate degradation of
humanity, so resurrection in a heavenly body will eliminate it. 19 The impure bodies of the
world are mortal. God promises a new body, one that belongs in the New Jerusalem.
Reconfigured bodies will combine humanity with the divine glory of the celestial realm.
These bodies will be eternal, perfect, and immortal…just like the world to which they go.
The Shifting Nature of Transcendent Guarantees
Early Jewish and Christian apocalypticisms share several basic characteristics,
which, as we will see in Section III, appear in 20th century popular science books on
robotics and artificial intelligence. Ancient Jews and Christians, caught in alienating
circumstances, eagerly anticipated God’s intervention in history. After the end of history,
God will create a new world and resurrect humanity in new bodies to eternally enjoy that
world.
Jewish and Christian apocalypticisms require that God intervene in history but
Apocalyptic AI advocates cannot rely upon divine forces. Instead, they offer evolution as
a transcendent guarantee for the new world. 20 Apocalyptic AI advocates link Moore’s Law, which describes the exponential growth of computer processing power, to biological evolution (Moravec 1999, 165; Kurzweil 1999, 255) as a means of assuring
that the movement’s predictions will come to fruition. Even without God, evolution
guarantees the coming of the Kingdom. The nature of this Kingdom and its inhabitants
bears striking resemblance to that of Jewish and Christian apocalyptic salvation.
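To make the arithmetic of this guarantee concrete, consider a minimal sketch (my illustration, not a calculation drawn from Moravec or Kurzweil) that assumes computing power doubles roughly every eighteen months and that "human-equivalent" capacity lies about eight orders of magnitude beyond a late-1990s desktop machine. Under those assumed figures, steady doubling closes the gap in roughly four decades, which is precisely the kind of near-term horizon the popular science books announce.

```python
from math import log2

def years_until_parity(current_power=1.0,
                       human_equivalent=1e8,      # assumed gap: 10**8 times today's machine
                       doubling_period_years=1.5):
    """Years of steady doubling needed to close an assumed capability gap.

    All figures here are illustrative stand-ins, not values taken from
    the Apocalyptic AI authors themselves.
    """
    doublings = log2(human_equivalent / current_power)
    return doublings * doubling_period_years

print(round(years_until_parity()))  # roughly 40 years under these assumptions
```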
III. Virtual Kingdom Come: Apocalyptic AI
Several pioneers in robotics and AI speak the language of apocalypticism.
Advocates anticipate a radical divide in history that resolves their present state of
alienation. This resolution requires the establishment of a new world in which machine
life succeeds biological life. Human beings will cast off the limitations of their bodies for
mechanical and virtual bodies that will live forever in eternal bliss.
Apocalyptic visions include either an account of historical progress or an
otherworldly journey interested in cosmological speculation (Collins 1984, 5);
Apocalyptic AI includes both. 21 Apocalyptic AI authors extrapolate from current
technological trends to predict the course of history over the next fifty years (a course
which inevitably revolves around robotic and AI technology) but they also explore the
transcendent realm of cyberspace. Ray Kurzweil even relies upon an angelic figure from
the transcendent future realm to offer advice and interpretation.
The apocalyptic tradition of robotics and AI pioneers results from the political
struggles of late 20th century science (Geraci 2007) and the dangerous world in which the
authors were raised. 22 Stephen O’Leary of the University of Southern California points
toward the “generational sensibility” of baby boomers, who grew up in the decades of
nuclear proliferation and the Cold War, following the trauma of World War II, the Shoah,
and the atomic destruction of Hiroshima and Nagasaki (2000, 393). O’Leary argues that
the barrage of apocalypticism in 1990s pop culture, from Heaven’s Gate to The Terminator, was a result of this sensibility. 23 The apocalyptic tradition provided an outlet for the distressed, whether that distress arose from the threat of Soviet invasion or the
fear of a more “natural” death.
Alienation: The Body Embattled
Though Apocalyptic AI advocates fight political battles for funding and respect,
the greater share of alienation in Apocalyptic AI derives from the physical nature of
humanity. 24 The human body has a number of significant restrictions, chief of which is,
of course, its rather limited shelf life. In addition, complain the major figures in
Apocalyptic AI, a mind trapped inside a human body learns only with difficulty, thinks
slowly, and has difficulty passing on its knowledge.
Protein-based life forms, say Apocalyptic AI advocates, will never think as well
as machines will in the future (de Garis 2005, 103; Kurzweil 1999, 4; Moravec 1988, 55-56; Moravec 1999, 55; Warwick 1997, 178). The speeds possible in human brains cannot
equal those of a computer, whose silicon substrate is far more efficient in the
transmission of information. Limited memory and inadequate accuracy further trouble
human minds; these problems will be wiped out in the transition to mechanical life.
Descriptions of death in Apocalyptic AI further show the alienation felt by its
advocates. A living person’s value, in Apocalyptic AI, stems from the knowledge he or
she possesses, rather than being intrinsic to life or grounded in social relations of one sort
or another. The AI apocalypse will end the “wanton loss of knowledge and function that
is the worst aspect of personal death” (Moravec 1988, 121). It would appear that the
death of knowledge counts for more than the death of persons. This is the case because
the aspects of personhood supposedly divorced from rational thought are considered
fetters. 25
Fear of death is widespread, but Apocalyptic AI advocates see traditional and
widely-held belief in souls and spirits as a feeble psychological ploy. According to AI
pioneer Marvin Minsky, they are “all insinuations that we’re helpless to improve
ourselves” (1985, 41, emphasis original). The loss of knowledge that cannot be overcome
in religion, according to the Apocalyptic AI crowd, can be addressed through technology.
While Minsky deplores religious failures to rectify the perceived failures of human life,
his enthusiasm for AI’s possibilities shows how Apocalyptic AI shares the passionate
excitement that Meeks describes in ancient apocalypticism (see above).
Distrust of the world and its values undergirds the alienation of Apocalyptic AI.
The movement’s advocates worry that the world might turn its back on scientific
understanding in favor of ‘useless’ religious faiths and, as a consequence, they believe that
the world is a place devoid of intentional intellectual work, the only thing that produces
meaning. The world is a bad place not because it is evil but because it is ignorant and
inadequate. Even where people strive to compute, they find themselves hampered on all
sides.
The mind will never be at home until it sheds the body that inhibits its
rational processes. Slow computation, limited recall, insufficient ability to share one’s
insights, and death all restrict the mind from realizing its full potential, which can be
unlocked only by a radical change in life itself. Before we turn to the new lives predicted
for our future, however, we must first examine the world in which they will live. The AI
apocalypse will not come about because people fear death, loss of knowledge, et cetera.
Rather, these things are supposedly incidental. The apocalypse must come about because
evolutionary history moves inexorably in that direction.
A Virtual Jerusalem
Just as many early Jews and Christians believed that God’s intervention in history
was right around the corner, contemporary figures in Apocalyptic AI believe that a
moment of cataclysmic change approaches. Gradual change has little place in apocalyptic
visions. Instead, apocalyptic believers anticipate a sudden revolution (Schoepflin 2000,
428). In Apocalyptic AI, this momentous event is called the “singularity”; it marks a radical divide between this world and the next, a mechanical world culminating in the onset of the age of mind, a Virtual Kingdom in cyberspace. Technological evolution (i.e., the processing speed of computers) currently experiences exponential growth. The
singularity is the point on the graph of progress where explosive growth occurs in a blink
of an eye; it is the end of history and the beginning of the new world and it is closer than
you think.
Architectural software pioneer Michael Benedikt argues that cyberspace opens the
doors to the Heavenly City of Revelation (1994, 14). In its utopian manifestations,
architecture blends art and technology to envision how the Kingdom might appear. 26 Of
course, architecture seeks to bring about the very conditions that it illustrates. Benedikt’s
cyberspace would allow human beings unfettered joy through idyllic environs and
limitless personal experience. The eschatological kingdom of Benedikt’s architectural
fantasy shows the deep connections between virtual reality and Christian salvation. In
cyberspace, we will find the good life: an egalitarian society (see Stone 1991),
vanquishment of need (Kurzweil 1999, 248; Moravec 1999, 137), happiness (Kurzweil
1999, 236), even better sex lives (ibid., 148, 206). As though these things were not
enough, we will see in the next section that the Virtual Kingdom will virtually guarantee
our immortality.
Benedikt believes that human creativity (through cyberspace architecture) is the
key to the Heavenly City but most members of the Apocalyptic AI community believe
that machine creativity will lead to salvation. Humanity’s chief role, its most important
task, is the creation of artificial life. From that point, the robots will do what biological
life cannot.
The singularity is the point at which machines become sufficiently intelligent to
start teaching themselves. When that happens, their growth rate will be exponential,
meaning that in very short time, the world will irrevocably shift from the biological to the
mechanical (Kurzweil 1999, 3-5; de Garis 2005, 158; Moravec 1998, 1-2). The
mechanical age will permit the establishment of a new Kingdom, one not based upon
God but otherwise making the same promises as more traditional apocalypses. 27
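The logic of the singularity can likewise be rendered as a toy model (again my own illustration under stated assumptions, not a formalism taken from these authors): if each doubling of machine capability is assumed to halve the time required for the next doubling, the intervals form a convergent geometric series, so an effectively unbounded number of doublings crowds into a finite span of years.

```python
def toy_singularity(first_interval_years=2.0, shrink_factor=0.5,
                    min_interval=1e-6):
    """Toy model of recursive self-improvement (my illustration only).

    Each doubling of capability is assumed to shrink the time to the next
    doubling by `shrink_factor`; the intervals form a convergent series,
    so growth concentrates near a finite point in time.
    """
    elapsed, doublings, interval = 0.0, 0, first_interval_years
    while interval > min_interval:
        elapsed += interval
        doublings += 1
        interval *= shrink_factor  # self-improvement speeds itself up
    return elapsed, doublings

when, n = toy_singularity()
print(f"{n} doublings crowd into the first {when:.4f} years")
```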
The march toward the Virtual Kingdom will proceed through an Edenic earthly
life before the final transcendence of mind over matter. AI and robotics will relieve
humanity of its burdens, forcing human beings up the social ladder into a universal class
of wealthy owners (Moravec 1999, 128). This future will be a “garden of earthly
delights…reserved for the meek” (ibid., 143). Like all earthly gardens, it must eventually
wither away, but it will leave us not with a fallow field but with a much greater
world, a paradise never to be lost.
Moravec’s paradise on earth is the most recent extrapolation of the two-stage
apocalypse tradition of Judaism and Christianity. The Book of Revelation predicts that the righteous will enjoy a 1,000-year reign of peace prior to the final end of the world (20:4-7). 28 Likewise, in 4 Ezra, God says that the Messiah will come and “in mercy he will set
free the remnant of my people, those who have been saved throughout my borders, and
he will make them joyful until the end comes” (12:34). A two-stage apocalypse allows believers to expect an earthly resolution to their alienation as a reward for faithful
service while still promising the ultimate fulfillment of immortal salvation. Before the
final transition to the Virtual Kingdom, Moravec holds out a fantastic vision of human
joy.
With robots earning wealth, humanity will lose its sense of material need.
Kurzweil envisions a future in which need is a “quaint idea” (1999, 249). No one will
work for his daily bread, but will quite literally have it fall from heaven. Outer space
corporations of intelligent robots, Moravec imagines, will provide for human beings.
Their diligent work will allow us the leisure to pursue intellectual discoveries (though we
will be rather less efficient at this than the machines).
Unlike the millennial land use technologies of early America (e.g. surveying, the
axe, and the plow) described by David Nye (2003), artificial intelligence does not stop
with the establishment of an earthly Kingdom. Eventually, the machines will tire of
caring for humanity and decide to spread throughout the universe in the interests of
discovering all the secrets of the cosmos (though perhaps some will remain behind). They
will convert the entire universe into an “extended thinking entity” (Moravec 1988, 116).
Utopian conditions brought about by advances in robotics and AI will merely presage the
wondrous Virtual Kingdom to come. Moravec’s “Age of Robots” is supplanted by an
“Age of Mind” in which the expansion of machines creates space for a “subtler world”
(1999, 163). The earthly paradise is but a brief stopping point on the historical march to a
transcendent Virtual Kingdom.
Real, meaningful activity will cease to take place in the physical world, shifting
instead to cyberspace. Just as meaningful prayer characterizes heaven in Judeo-Christian
apocalypticism, meaningful computation occupies all individuals in Apocalyptic AI.
Heaven allows for only one ultimately meaningful activity, which is perceived as absent
on earth. Just as the apocalyptic visionaries despair over the faithlessness of their
generations, Apocalyptic AI advocates bemoan the lack of computation that they (and
even more so, their contemporaries) are capable of producing. The Virtual Kingdom will
rectify such sorrows.
Where the machines go, “meaningful computation” will follow (Moravec 1992a,
15). They will gradually fill up space with increasingly intricate computations until, in
the end, “every tiniest mote will be part of a relevant computation or storing a significant
datum” (Moravec 1999, 166). Machine computation will extend throughout the universe
in what Moravec terms a Mind Fire. The Mind Fire will reject the useless, meaningless
existence of earthly life in favor of ubiquitous computation. “Boring old Earth also will
suddenly be swallowed by cyberspace. Afterwards its transformed substance will host
astronomically more meaningful activity than before” (Moravec 1999, 167).
Virtual reality (realities) will accompany the spread of intelligent machines. The
importance of the Mind Fire does not lie in the material presence of intelligent machines
but in the cyberspace created by their computations. Moravec is a modern day alchemist,
seeking to make gold out of lead, life out of death. All “[p]hysical activity will gradually
transform itself into a web of increasingly pure thought” (1999, 164). The purification of
the cosmos will make it a meaningful world. 29 Alchemy seeks to improve upon the
world, to transmute the lesser into the greater; just so, Moravec takes what appears
debased in his eyes and seeks to make it illustrious. 30 Before the AI apocalypse, very
little meaningful activity takes place (it does so only in the isolated pockets of rational
intellect on earth, particularly where AI work is done). Soon there will be nothing but
meaningfulness.
Like many apocalypses, the popular science books of Apocalyptic AI promise an
escape from time. Supremely intelligent machines will use cyberspace to escape the
historical reality of the present. In the Mind Fire, entire universes will be created through
vast computer simulations. All of history will be held captive, played and replayed for the
interest and amusement of intelligent machines (Moravec 1992a). All of history will
coexist in one eternal moment.
The Virtual Kingdom is unquestionably apocalyptic and not merely utopian
eschatology. While the latter takes place on earth, more or less in keeping with the
traditional rules of everyday life (Collins 2000b), the Virtual Kingdom transcends earthly
life. The new world of Apocalyptic AI is transcendentally other: it surpasses human life
and replaces it with something that—while perhaps connected to the physical reality of
our current lives—exists on another plane altogether. The Virtual Kingdom is a
transcendent plane of cyberspace where history ends, pain disappears, and truly
meaningful life becomes possible.
Cyborgs, Robots, and Software
Fortunately for us, there will be room to join our mechanical progeny as they
spread their Virtual Kingdom throughout the cosmos. Just as the old flesh cannot inherit
the Kingdom of God, however, it cannot inherit the Virtual Kingdom. Human beings will
augment or replace their weak human bodies in order to participate in the wondrous
future to come. Participation in the kingdom come may depend upon integrating
mechanical parts and human beings in the creation of cyborgs; or perhaps human beings
will slough off their bodies altogether, allowing their minds free rein in the virtual
reality of the Mind Fire. In order to experience the heavenly world of hyper-intelligent
machines, these scientists advocate building new bodies that can successfully operate in
the Virtual Kingdom.
Supplementing our natural abilities with computer hardware implanted directly
into our bodies could help human beings join ever-smarter machines in the Kingdom.
University of Reading roboticist Kevin Warwick believes that adding mechanical
hardware to human brains will enable them to compete mentally with artificial
intelligences. Cyborg minds, he believes, will be far superior to natural human minds
(2003, 136). Thanks to enhanced memory, additional senses (such as infrared or
ultraviolet vision), internal networking to the Internet, rapid powers of computation, and
more, cyborgs will quickly begin to ignore the “trivial utterances” of ordinary human
beings (ibid., 136). While Warwick assumes that we will soon transcend the limitations
of our current existence, his goals are slightly more modest than those of most
Apocalyptic AI thinkers.
Moravec and Kurzweil believe that our technology will become so powerful that
we will download our consciousness into machines, thus freeing ourselves from human
bodies altogether. 31 Midway in our transition from human beings to disembodied
superminds we will house ourselves in machine bodies (without any of the old human
biological parts) but eventually our need to walk and talk will fade into oblivion; as
software, we will leap from computer to computer.
Successful downloading of our consciousness depends upon our ability to
represent the pattern of neuron firing in our brains. Moravec argues that the essential
human person is a pattern, not a material instantiation. Pattern-identity, he says, “defines
the essence of a person, say myself, as the pattern and process going on in my head and
body, not the machinery supporting that process. If the process is preserved, I am
preserved. The rest is mere jelly.” (1988, 117, emphasis original). Any material housing
for that pattern will do and the machine bodies he imagines will do quite nicely indeed.
Moravec paints a lovely picture of the enormous powers that robot bodies will
provide us (1999, 150-154). His “robot bush” takes after the branching leaf and root
structures of plants, with each layer of branches smaller than the one from which it
springs. The branches get smaller and smaller until they can operate on a nanoscale.
Massive computation will enable the mind to control the many tiny digits at the ends. For
the robot bush, “the laws of physics will seem to melt in the face of intention and will. As
with no magician that ever was, impossible things will simply happen around a robot
bush” (1988, 107-108, emphasis original).
Kurzweil echoes Moravec’s belief that in the near future, mechanical bodies will
greatly enhance the powers of human minds. He believes that nanotechnology will enable
human beings to build far superior bodies (1999, 141). Maintaining all the advantages of
a biological body (which he describes as suppleness, self-healing, and cuddliness), nanobodies
will live longer, be more malleable to environmental changes, and allow faster
computation times for the mind.
But once we have successfully ported consciousness out of a human being and
into a machine, why stop at the meager scale of our present lives? Eventually, human
minds will eliminate their need for bodily existence; after all, “[w]e don’t always need
real bodies. If we happen to be in a virtual environment, then a virtual body will do just
fine” (Kurzweil 1999, 142). Once we overcome the limited aspiration for a better
physical body, we can expand our horizons immensely. We will cease living in the
physical world; our lives will play out in a virtual world and our minds will not,
therefore, require any particular kind of embodiment.
Our virtual selves will accomplish exactly what Isaiah prophesied millennia ago.
“[T]here won’t be mortality by the end of the twenty-first century…Up until now, our
mortality was tied to the longevity of our hardware…As we cross the divide to instantiate
ourselves into our computational technology, our identity will be based on our evolving
mind file. We will be software, not hardware.” (ibid., 128-129, emphasis original). We
will become, in Moravec’s terms, “disembodied superminds” (1992a, 20). Of course, all
software needs hardware, so we will not be truly disembodied. We will, however, cease
identifying with our material bodies. We will replace living bodies with virtual bodies
capable of transferal and duplication across the cosmic spread of machines. 32
In the future, human beings will reconfigure their bodies in order to participate in
the Kingdom come. Whether as cyborgs, robots, or software, they will live forever, cast
aside pain and want, and participate in a truly universal network of knowledge. This quest
can be undertaken only after we replace our human bodies and join our mechanical
children in the Virtual Kingdom.
Tribulation
Twentieth century apocalyptic worldviews, established through the terrors of the
World Wars and the Cold War, infiltrated a broad spectrum of baby boomer culture. Just
as artists, politicians, and everyday citizens lived with apocalyptic expectations, so did many scientists. In the robotics and AI community, apocalyptic expectations of a new world
inhabited by the elect in new bodies freed from their previous alienation took on a life of
their own. In pop science books, these ideas attempt to guide scientific research in the authors’ fields.
IV. Armageddon Realized: Lions among the Sheep
A loud bang accompanies both the beginning and the end of the world. In the final
days, there will be war and pain and sorrow and false prophets and much gnashing of
teeth. Just as it follows Jewish and Christian apocalypses in so many other ways, the AI
apocalypse will be a difficult time for those still living. The promise of a better future to
come makes the apocalypse worthwhile but those who suffer through it must overcome
conflict and war.
The prospect of violence, maintained by many members of the Euro-American AI
and robotics communities, threatens human survival past the singularity. 33 While
Moravec and Kurzweil anticipate a joyful merger of human and machine, other popular
science authors believe the coming of intelligent machines heralds violent
confrontation—either between human beings and machines or among human beings
themselves.
Moravec does not believe that war and strife will characterize the future. He says
that in the future, “antisocial robot software would sell poorly” (1999, 77). Although he
acknowledges that such software exists currently, 34 it will “soon cease being
manufactured” (1992b, 52). Moravec wants to preserve the Edenic vision of his
apocalypse, so he rejects the violence encoded in current funding priorities.
Although he hopes the future will be peaceful, Moravec fails to secure that vision.
He admits, for example, that each individual in the future will try to “subvert others to its
purposes” (1999, 164). Any world in which individuals struggle for power over one
another is a world in which violence will continue. Not all violence requires the use of
atomic weaponry and self-piloting aircraft. Given that struggle remains vital to post-apocalyptic life, we are hard pressed to believe his more idyllic promises.
Daniel Crevier, author of an influential history of AI, believes that Moravec’s
account of immortality is convincing (1993, 339) and looks forward to the possibility of
spiritual evolution in an age of intelligent machines (341), but he fears that intelligent
machines will have a ways to go before they fulfill their fantastic potential. He believes
that early generations of human-equivalent AI will be psychotic and paranoid (318). Such
machines would surely be a stumbling block on the way to peaceful paradise but Crevier
believes, like Moravec, that such impediments will be overcome.
Worse still, it is possible that robotic behavior might necessarily be what human
beings would call psychotic. As machines get smarter and smarter, they might lose
interest in taking care of human beings. According to Warwick, intelligent machines will
have no need for human values or social skills (1997, 179). Naturally, he believes, they
will desire domination over the human beings they come to outsmart (ibid., 274). 35
Warwick paints an ugly picture of the year 2050. By that time, he says, machines might
rule the planet, using human beings as slaves to perform jobs that are inconvenient for
robots. 36
Despite his pessimism, Warwick also hopes to overcome the dangers of AI
through the “birth” of cyborgs. Although wars between human beings and machines, long
a staple of science fiction, seem inevitable to Warwick, he takes no military funding and
hopes that humanity will avoid becoming a slave species for more intelligent machines.
Warwick hopes that by becoming cyborgs we will alleviate this concern. As human
beings mesh with machines, they will acquire the same set of interests and motivations
that the intelligent machines will have. Of course, this means that their values, desires,
and needs may be drastically different from those of “natural” humans (2003, 136). At
least something human may live to see the promised land.
Whether machines alone become super-intelligent or whether they will be joined
by cyborgs, a problematic split in values will appear on earth. Many human beings may
disapprove of building machines or people that no longer share the basic assumptions of
human nature. 37 Divergent values could lead to human beings fighting one another or
they could lead to human beings fighting machines.
Hugo de Garis appreciates strife because it is the harbinger of his technological
dreams. Although he professes to lose sleep over the death of humanity, he looks forward
to wars and rumors of wars. De Garis believes that human beings will fight one another
over whether or not to build the machines, which he calls “artilects” (i.e. “artificial
intellects”). As these wars will inevitably arise over the new technology, the wars must
ultimately be considered good: their absence would signal the absence of massively
intelligent machines.
De Garis believes the rise of AI will lead to a (seemingly morally good) war
between the people he calls Terrans (those who oppose such technologies) and those he
calls Cosmists (those who favor the technology). Even should humanity avoid warring
over the building of intelligent machines, the machines themselves may decide that
human beings are pests, in need of elimination (2005, 12). He does not mind that a
pitched war may lead to the destruction of the human race because he believes that
“godlike” machines will survive afterward. His cheerful acceptance of the doom of
humankind recalls the early 20th century words of Frederick Grant:
one must not be unwilling to pay any cost, however great; for the Kingdom is
worth more than anything in this world, even one’s life. Life, earthly happiness,
the otherwise legitimate satisfactions of human desire, all may need to be let go;
one must not hesitate at any sacrifice for the sake of entrance into the Kingdom.
The Kingdom must be one’s absolute highest good, whole aim, completely
satisfying and compensating gain (Grant 1917, 157).
De Garis shares Grant’s faith in the overriding value of the Apocalypse. He even
argues that the building of artilects who will eventually replace humankind is an act of
religious worship (2005, 104). 38 The coming Kingdom is the seat of all meaning and
value—any sacrifice that brings it about will be a worthy sacrifice. He describes the
Cosmist leaders as “‘big picture’ types, former industrial giants, visionary scientists,
philosophers, dreamers, individuals with powerful egos, who will be cold hearted enough
and logical enough to be willing to trade a billion human lives for the sake of building
one godlike artilect. To them, one godlike artilect is equivalent to trillions of trillions of
trillions of humans anyway” (2005, 174, emphasis mine). No sacrifice of human lives
will be too great. 39
Apocalyptic AI, taken as a whole, fails to offer absolute security to humanity but
then again, so does religion. While evangelical Christians may adorn their cars with
bumper stickers proclaiming “In case of Rapture, this car will be unoccupied,” they
cannot be absolutely certain that they shall be among that number. Perhaps the Rapture
will come and they will find themselves left behind. Apocalyptic AI raises the stakes of
this fear—it suggests that all of humanity might miss out on the future paradise. This
concern occupies some of the field’s leading figures, who desire to lead humanity into the
bright new world. De Garis writes, he claims, in order that humanity can address the
“artilect war” before it happens. Perhaps if we do so, he suggests, we will avoid
catastrophe and find ourselves wedded permanently to our technology. 40
V. Conclusion
The eschatological and soteriological strategies of Jewish and Christian
apocalyptic groups, which remained—sometimes overt, sometimes submerged—within
Western theology even during the medieval and early modern periods, surface in
Apocalyptic AI. Apocalyptic AI is the heir to these religious promises, not a bastardized
version of them. We commonly speak of science and religion as though they are two
separate endeavors but, while they do have important distinctions that make such
everyday usage possible, 41 they are neither clearly nor permanently demarcated; the line
separating the two changes from era to era and from individual to individual. It is not,
precisely, unscientific to make religious promises nor is it entirely irreligious to assert
atheistic convictions. In Apocalyptic AI, technological research and religious categories
come together in a stirringly well-integrated unit.
The integration of religion and science in Apocalyptic AI demonstrates the
continued need for historical analysis. Just as Apocalyptic AI sheds light on Biblical
apocalypticism, the latter helps us understand Apocalyptic AI. Careful reading of ancient
apocalyptic traditions allows us to understand what is at stake in pop science robotics and
AI. Without that knowledge, Apocalyptic AI would be incoherent or invisible to us.
In my analysis, there remains one glaring exception to the comparison between
popular science AI and ancient apocalypticisms. From it stem other, less startling
differences. In ancient apocalyptic texts and movements, a god operates to guarantee the
victory of good over the forces of evil. In Apocalyptic AI, evolution operates to guarantee
the victory of intelligent computation over the forces of ignorance and inefficiency.
While both God and evolution offer transcendent guarantees about the future, their own
moral status affects their historical goals. The death of God alters the apocalyptic
morality play by changing the oppositions that are fundamental to the world.
Both Apocalyptic AI and Judeo-Christian apocalyptic traditions share a dualistic
view of the world. While the question of moral evil rarely, if ever, appears in Apocalyptic
AI, 42 value judgments about right/wrong and good/bad do; these value judgments are
attached to the dichotomies of virtual/physical and machine/human. The unpleasant half
of such dichotomies is grounded in ignorance and inefficiency, over which evolution
triumphs in the creation of intelligent, immortal machines capable of colonizing the entire
cosmos and thereafter establishing a transcendent virtual realm.
Contrary to Collins’s claim in The Encyclopedia of Apocalypticism, the victory of
good over evil is not necessary in apocalypticism. Rather, the dualist view of the cosmos,
tied to questions of alienation and the victory over it through the creation of a
transcendent realm inhabited by purified beings, offers a more inclusive and more fruitful
definition. That the struggle between good and evil is one possible interpretation of
apocalyptic dualism cannot be denied. That the struggle between good and evil is the
only interpretation of apocalyptic dualism must be denied.
We are often tempted to conclude that some basic dissatisfaction leads to dualism
but the opposite appears closer to the truth. Dualism is not arrived at inductively after we
feel alienated by our bodies, politics, etc. Rather, dualism is the presumption by which we
subsequently align the empirical world into the good and the bad. Apocalyptic AI’s
rejection of the body, of finitude, and of human thoughts and emotions reflects a deeper
expectation that the world is already divided into the good and the bad, rather than vice
versa. Given the dualist worldview of Apocalyptic AI, such distaste is inevitable.
Longstanding religious dreams of purity, perfection and immortality can be realized, say
the Apocalyptic AI advocates, as long as we see them through scientific and
technological lenses.
The Virtual Kingdom rejects both traditional religion and traditional humanity. It
enthusiastically endorses mechanical life and approves of human beings only insofar as
we are able to step beyond the boundaries that make us human. The ultimate value of
human life (rational computation) should be liberated from the body (and quite possibly
the emotions) that characterize life as we know it. Our current reality is separated out into
what is good and bad; only by eliminating the physical and embracing the virtual, say
Apocalyptic AI authors, can we return to the undifferentiated wholeness of the good.
This comparison of ancient apocalyptic traditions to modern science shows
the significance of religion-science studies to the wider field of religious studies and,
from there, to the social goal of our discipline. In his analysis of the Jonestown massacre
in the last chapter of Imagining Religion, J.Z. Smith argues that religious studies must be
relevant to society (1982). Smith believes that when the academy reneges on its
obligation to interpret modern events it destroys its own raison d’être. If we in the study
of religion are unwilling or incapable of interpreting the ways in which the sacred
responds to and helps shape scientific culture, then who will be? 43
The rapid development of computers and worldwide deployment of robots
remain well within the radar of the sacred: the promises of and strategies employed by
Apocalyptic AI stem from their religious environment. As “transhumanists” and
“extropians” acquire increasing public attention, the significance of Apocalyptic AI will
continue to grow. Analyzing Apocalyptic AI allows us to think through the modern world
and, at the same time, throws light upon apocalypticism in the ancient world. Comparison
with Apocalyptic AI clarifies the significance and nature of dualism, alienation, and
transcendence of the world and the body in ancient apocalypticisms.
Apocalypticism thrives in modern robotics and AI. Though many practitioners
operate on a daily basis without regard for the fantastic predictions of the Apocalyptic AI
community, the advocates of Apocalyptic AI are powerful voices in their fields and,
through their pop science books, in wider culture. Apocalyptic AI has absorbed the
categories of Jewish and Christian apocalyptic theologies and utilizes them for scientific
and supposedly secular aims. Scholars of religion have as much obligation as anyone, and
more obligation than most, to help explore the characteristics of this movement and its
ramifications upon wider culture.
Bibliography
Barbour, Ian. 1997. Religion and Science: Historical and Contemporary Issues. San
Francisco: HarperSanFrancisco.
Benedikt, Michael. 1994. “Introduction.” Cyberspace: First Steps (ed. Benedikt).
Cambridge, MA: M.I.T. Press.
Benson, Timothy O. 1993. “Fantasy and Functionality: The Fate of Utopia.”
Expressionist Utopias: Paradise, Metropolis, Architectural Fantasy (ed. Benson).
Los Angeles: Los Angeles County Museum of Art. 12-51.
Bilski, Emily D. (ed.). 1988. Golem! Danger, Deliverance and Art. New York: The
Jewish Museum.
Brooke, John Hedley and Cantor, Geoffrey. 1998. Reconstructing Nature: The
Engagement of Science and Religion. New York: Oxford University Press.
Bull, Malcolm. 1999. Seeing Things Hidden: Apocalypse, Vision and Totality. New York:
Verso.
-------- 2000. “The End of Apocalypticism?” The Journal of Religion 80:4 (October).
658-662.
Cantor, Geoffrey and Kenny, Chris. 2001. “Barbour’s Fourfold Way: Problems With His
Taxonomy of Science-Religion Relationships.” Zygon: Journal of Religion and
Science 36:4 (December). 765-781.
Choset, Howie. 2007. Personal conversation (June 8).
Clifford, Richard J. 2000. “The Roots of Apocalypticism in Near Eastern Myth.” The
Encyclopedia of Apocalypticism Volume I: The Origins of Apocalypticism in
Judaism and Christianity (ed. John J. Collins). New York: Continuum Press. 3-38.
Cohen, John. 1966. Human Robots in Myth and Science. London: Allen & Unwin, Ltd.
Collins, John J. 2000a. “General Introduction.” The Encyclopedia of Apocalypticism
Volume I: The Origins of Apocalypticism in Judaism and Christianity (ed. John J.
Collins). New York: Continuum Press. vi-xii.
-------- 2000b. “From Prophecy to Apocalypticism: The Expectation of the End.” The
Encyclopedia of Apocalypticism Volume I: The Origins of Apocalypticism in
Judaism and Christianity (ed. John J. Collins). New York: Continuum Press. 129-161.
Cook, Stephen L. 1995. Prophecy and Apocalypticism: The Postexilic Social Setting. Minneapolis: Fortress Press.
Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial
Intelligence. New York: Basic Books.
Damasio, Antonio. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Penguin.
De Boer, Martin C. 2000. “Paul and Apocalyptic Eschatology.” The Encyclopedia of
Apocalypticism Volume I: The Origins of Apocalypticism in Judaism and
Christianity (ed. John J. Collins). New York: Continuum Press. 345-383.
De Garis, Hugo. 2005. The Artilect War: Cosmists vs. Terrans: A Bitter Controversy
Concerning Whether Humanity Should Build Godlike Massively Intelligent
Machines. Palm Springs, CA: ETC Publications.
Foerst, Anne. 2004. God in the Machine: What Robots Teach Us About Humanity and
God. New York: Dutton.
García Martínez, Florentino. 2000. “Apocalypticism in the Dead Sea Scrolls.” The
Encyclopedia of Apocalypticism Volume I: The Origins of Apocalypticism in
Judaism and Christianity (ed. John J. Collins). New York: Continuum Press. 162-192.
Geraci, Robert M. 2005. The Cultural History of Religions and the Ethics of Progress:
Building the Human in 20th Century Religion, Science, and Art. Doctoral
Dissertation. University of California at Santa Barbara.
-------- 2006. “Spiritual Robots: Religion and Our Scientific View of the Natural World.”
Theology and Science 4:3. 229-246.
-------- 2007. “Cultural Prestige: Popular Science Robotics as Religion-Science Hybrid.”
Reconfigurations: Interdisciplinary Perspectives on Religion in a Post-Secular
Age (eds. Alexander Ornella and Stefanie Knauss). Münster: LIT. 2007. 43-58.
-------- Forthcoming. “Theological Implications of Artificial Intelligence: What Science
Fiction Tells Us about Robotic Technology and Religion.” Zygon: Journal of
Religion and Science.
Grant, Frederick. 1917. “The Gospel Kingdom.” The Biblical World 50:3. 129-191.
Hanson, Paul D. [1975] 1979. The Dawn of Apocalyptic: The Historical and Sociological
Roots of Jewish Apocalyptic Eschatology. Philadelphia: Fortress Press.
The New Oxford Annotated Bible With the Apocrypha. New Revised Standard Version.
New York: Oxford University Press. 1994.
Herzfeld, Noreen. 2002. “Creating in Our Own Image: Artificial Intelligence and the Image of God.” Zygon: Journal of Religion and Science 37:2. 303-316.
Horsley, Richard A. 1993. Jesus and the Spiral of Violence: Popular Jewish Resistance
in Roman Palestine. Minneapolis: Fortress Press.
-------- 2000. “The Kingdom of God and the Renewal of Israel: Synoptic Gospels, Jesus
Movements, and Apocalypticism.” The Encyclopedia of Apocalypticism Volume
I: The Origins of Apocalypticism in Judaism and Christianity (ed. John J.
Collins). New York: Continuum Press. 303-344.
Joy, Bill. 2000. “Why the Future Doesn’t Need Us.” Wired 8.04 (April 2000).
Kurzweil, Ray. 1999. The Age of Spiritual Machines: When Computers Exceed Human
Intelligence. New York: Viking.
Marcus, Joel. 1996. “Modern and Ancient Jewish Apocalypticism.” The Journal of
Religion 76:1 (January). 1-27.
Meeks, Wayne. 2000. “Apocalyptic Discourse and Strategies of Goodness.” The Journal
of Religion 80:3 (July). 461-475.
Minsky, Marvin. 1985. Society of Mind. New York: Simon and Schuster.
Moravec, Hans. 1988. Mind Children: The Future of Robot and Human Intelligence.
Cambridge, MA: Harvard University Press.
-------- 1992a. “Letter from Moravec to Penrose.” Electronic-mail correspondence
published in Thinking Robots, An Aware Internet, and Cyberpunk Librarians: The
1992 LITA President’s Program (ed. R. Bruce Miller and Milton T. Wolf).
Chicago: Library and Information Technology Association. 51-58.
-------- 1992b. “Pigs in Cyberspace.” Thinking Robots, An Aware Internet, and
Cyberpunk Librarians: The 1992 LITA President’s Program (ed. R. Bruce Miller
and Milton T. Wolf). Chicago: Library and Information Technology Association.
15-21.
-------- 1999. Robot: Mere Machine to Transcendent Mind. New York: Oxford University
Press.
Newman, William R. 2004. Promethean Ambitions: Alchemy and the Quest to Perfect
Nature. Chicago: University of Chicago Press.
Noble, David F. 1997. The Religion of Technology: The Divinity of Man and the Spirit of
Invention. New York: Penguin.
Nye, David E. 2003. America as Second Creation: Technology and Narratives of a New
Beginning. Cambridge, MA: M.I.T. Press.
O’Leary, Stephen D. 2000. “Apocalypticism in American Popular Culture.” The
Encyclopedia of Apocalypticism Volume 3: Apocalypticism in the Modern World
and the Contemporary Age. New York: Continuum. 392-426.
Peterson, David L. 1997. “Review: Prophecy and Apocalypticism: The Postexilic Social
Setting.” The Journal of Religion 77:4 (October). 655-656.
Russell, D.S. 1964. The Method and Message of Jewish Apocalyptic: 200 BC—100 AD.
Philadelphia: Westminster Press.
-------- 1978. Apocalyptic: Ancient and Modern. Philadelphia: Fortress Press.
Schodt, Frederik L. 1988. Inside the Robot Kingdom: Japan, Mechatronics, and the
Coming Robotopia. New York: Kodansha International.
Schoepflin, Rennie B. 2000. “Apocalypticism in an Age of Science.” The Encyclopedia
of Apocalypticism Volume 3: Apocalypticism in the Modern World and the
Contemporary Age. New York: Continuum. 427-441.
Setzer, Claudia. 2004. Resurrection of the Body in Early Judaism and Early Christianity:
Doctrine, Community, and Self-Definition. Boston: Brill Academic Publishers.
Smith, J.Z. 1982. Imagining Religion: From Babylon to Jonestown. Chicago: University
of Chicago Press.
Stone, Allucquere Rosanne. 1991. “Will the Real Body Please Stand Up?: Boundary
Stories about Virtual Cultures.” Cyberspace: First Steps. 81-118.
Torry, Robert. 1991. “Apocalypse Then: Benefits of the Bomb in Fifties Science Fiction
Films.” Cinema Journal 31:1 (Autumn). 7-21.
Vermes, Geza. 1962. The Complete Dead Sea Scrolls in English. New York: Allen Lane.
Von Rad, G. 1965. Theologie des Alten Testaments. Munich: Kaiser.
Warwick, Kevin. 1997. March of the Machines: The Breakthrough in Artificial
Intelligence. Chicago: University of Illinois Press.
-------- 2003. “Cyborg Morals, Cyborg Values, Cyborg Ethics.” Ethics and Information
Technology 5:3. 131-137.
Webb, Robert L. 1990. “‘Apocalyptic’: Observations on a Slippery Term.” Journal of
Near Eastern Studies 49:2 (April). 115-126.
1. While the term “robot” includes machines with a wide variety of capabilities, which they usually carry out
automatically, I am concerned with the robots that resemble human beings and carry out many or most
human tasks. Such robots do not, as yet, exist; they remain the speculation of science fiction and future
progress in the dual fields of robotics and artificial intelligence. Scientists design robots to take over human
tasks, both physical and mental. The ultimate promise of robotics/AI always includes the hope that robots
will provide us with unlimited recreation. For a summary of the difficulties in defining what counts as a
robot and what does not, see Schodt 1988, 29-52.
2. Ray Kurzweil is a pioneer in optical character recognition, speech recognition, and other AI fields. He is a
member of the National Inventors Hall of Fame and has been awarded both the Lemelson-MIT Prize (a
$500,000 prize for invention and innovation) and the 1999 National Medal of Technology. Moravec is
known for his contributions to the scientific literature on robot vision and navigation, including the
influential Stanford Cart, the first computer controlled, autonomous vehicle. Howie Choset, one of his
colleagues at the CMU Robotics Lab before Moravec retired, credits Moravec as the single most important
figure in the study of mobile robots (2007). Moravec is also known for his oft-cited Mind Children and
Robot, which will be the principal sources for this essay. Like Kurzweil’s The Age of Spiritual Machines,
Mind Children has more than 100 academic citations and is widely accepted in discussions of
transhumanism and futurism. Hugo de Garis is known for his work in evolvable hardware (the attempt to
build artificial neurons through cellular automata theory). He has been rather less successful in his
scientific work than Kurzweil, Moravec, and Warwick, but his recent advocacy of a war over
whether to build an artificially intelligent machine has received significant attention in the transhumanist
community. Kevin Warwick is noted for both his research in robotics and his research in and advocacy of
cyborg technologies. His studies of self-organization in robots significantly advanced cybernetics theory,
and although his cyborgian efforts have been criticized as being more publicity stunts than effective
research, they have contributed to the study of machine-neuron connectivity.
3
Robert Webb argues that ‘apocalypticism’ is not a useful term with regard to sociological movements and
recommends the use of “millenarian movement” to refer to social groups and their ideologies (1990). He
believes that ‘apocalypticism’ properly characterizes only the ideology of apocalypses. While Webb makes
a strong point in directing us toward the common elision of the differences between a literary ideology and
a social ideology, his use of ‘millenarian movement’ is also problematic. As he notes, Collins argues that
there is little overlap between Jewish apocalyptic literature and millenarian movements (1984, 205).
Moreover, the terminological clarity he seeks (and perhaps fails to achieve if Collins is correct) is
overshadowed by the analytical confusion that arises in the progressive narrowing of apocalypticism’s
meaning. If a group can no longer be apocalyptic, then we must refer to the characteristics of millenarian
groups when addressing the social world of apocalyptic literature and apocalyptic eschatology. If we do
this, we risk forcing an unwelcome mix of apocalyptic and millenarian characteristics. For this reason,
while I applaud Webb’s call for terminological care, I think applying the adjective ‘apocalyptic’ to a
social movement is a reasonable approach and I will take it here (while trying to be careful not to
confuse or conflate apocalyptic ideologies and apocalyptic movements).
4
Meeks considers three characteristics sufficient (though not necessary) to classify apocalyptic discourse.
He argues that apocalyptic discourse is: revelatory, interpretive, and dualistic. Apocalyptic AI is certainly
dualistic in its dichotomies of good/bad, knowledge/ignorance, machine/human being, virtual/physical.
Likewise, it is interpretive in its approach to history, which supposedly justifies its conclusions about the
future. It differs, however, in that it does not appear revelatory in the sense that Meeks illustrates; it is not
based on “inaccessible” knowledge. Indeed, Apocalyptic AI advocates would argue that while their
conclusions are in some sense invisible to the average person, this is only because the average person has
not properly studied the signs.
5
I do not presume an identity between apocalyptic Judaism and apocalyptic Christianity. Indeed, as Joel
Marcus has pointed out, there is vast difference amongst the apocalyptic views of contemporary Jews, so to
presume the identity of all ancient Jewish apocalypticisms would be presumptuous indeed
(1996, 2). But, as studies of apocalypticism have shown, Jewish and Christian apocalyptic traditions are
sufficiently similar to allow fruitful comparison. Moreover, the entire cultural legacy of the Judeo-Christian
tradition is available to modern writers, thus justifying the somewhat totalizing approach taken. In this
essay, a continuity of apocalypticism from Judaism through Christianity into modern technoscience is
presumed. This presumption stems from the presence of the features of apocalypticism outlined in the text:
a dualist perspective on the cosmos; a new world that resolves the dualism, to be inaugurated after a
radical divide in history; the necessity of new bodies so that human beings can share in the new world
to come; and the preference for apocalypticism among alienated communities. Contradicting this thesis,
Cook argues that apocalypticism is not tied to alienation or deprivation; he believes that apocalyptic
writings in Ezekiel, Zechariah, and Joel stem from ruling priestly groups (1995). Likewise, M.C. de Boer
points out that for Paul, alienation was a consequence, not a cause, of his conversion to Jesus’ mission (2000,
348). De Boer’s position presumes, however, that Jewish political life was stable and comfortable for Paul.
While he may not have been subject to political persecution prior to his conversion, large segments of the
Jewish community were uncomfortable precisely because political alienation was a constant fact under
Roman rule. Horsley suggests that colonialism (in this case imperial domination from Rome) can lead to
cultural retreat and, therefore, zealous persecution of sinners, a sequence he considers likely in the case
of Saul/Paul (1993, 128-129). He believes that by Roman times, prolonged subjugation of Judea meant that
society was “almost continually in circumstances of crisis” (ibid., 4), a position previously held by Russell
(1978). Further, de Boer’s claim assumes that all alienation equals political alienation, an assumption disputed by
Webb (1990) and destabilized if we grant that Apocalyptic AI advocates are alienated. The principal
form of alienation in Apocalyptic AI is distaste for human bodily finitude. Apocalyptic AI advocates
are, however, politically alienated, as seen in their desire to establish cultural authority to protect their
research funding from perceived cultural threats (Geraci 2007). A tie between apocalypticism and
alienation does not indicate that apocalypticism flourished only among conventicles, or small religious
groups who have lost power struggles. Rather, just as Marx argued in economics, even the elite can suffer
from alienation. Horsley and Russell rightly demonstrate that the apocalyptic imagination can arise from
powerful groups who remain, nevertheless, alienated. Cook’s dispute with the term alienation stems from
an overly strict interpretation thereof; he seems to think that political and economic alienation is the only
kind and distinguishes ‘cognitive dissonance’ from this (1995, 16). Even Marx’s use of the term exceeds
Cook’s. There is no reason to run from the word alienation when it so clearly evokes dissatisfaction and a
feeling of ‘not being at home’ in a way that ‘cognitive dissonance’ does not. Moreover, Cook assumes that
Temple priestly imagery constitutes priestly authorship and never details the psychological and social
outlook of Temple priests. Indeed, in his review of Cook’s work, Peterson suggests post-exilic Temple
priests may have been subordinate to the power of bet ‘abot, or ‘ancestral houses’ (1997). Cook is
right, however, in pointing out that alienation does not cause apocalypticism (1995, 40). Alienation is a
characteristic of, not a cause of, apocalypticism.
6
All of the apocalyptic characteristics described in this paper can be directly related to a dualism in
apocalyptic worldviews. There is always a struggle between a right way of thinking/living/seeing and a
wrong way of thinking/living/seeing. Malcolm Bull focuses on the dualistic nature of apocalyptic beliefs in
his definition of apocalypticism as “the revelation of excluded undifferentiation” (1999, 83). For Bull, the
centrality of undifferentiation in the apocalypse (92) marks its chief characteristic and the appropriate line
of demarcation for defining the apocalyptic. Bull’s undifferentiation is visible in Apocalyptic AI, where the
machine and the human being blur.
7
“Apocalyptic,” notes D.S. Russell, “is a language of crisis” (1978, 6).
8
This is not, of course, the first time that Jews had such difficulties. The history of Judaism after the
conquest of Jerusalem by Babylon all but presumes the political alienation of the Jews. As Collins
points out, “even those who wielded power in post-exilic Judah experienced relative deprivation in the
broader context of the Persian empire” (2000b, 133), as did those of the Hellenistic period (ibid., 147).
9
On opposition to the Jewish elite and Roman rule, see Horsley 2000. According to Horsley, early
Christian writings (e.g. Q and the Gospel of Mark) opposed earthly rulers and looked forward to a renewed
Israel.
10
Jews and Christians expected an imminent end to history (4Q247, 4 Ezra 5:4, Mark 9:1, 13:30, 1 Cor.
7:25-31, Rev. 22:7, 10, 12). Although most Jewish and Christian groups reevaluated eschatological time
frames and seemingly ceased believing that the world would end in their lifetimes, in the apocalyptic
communities under Greek and Roman rule both Jews and Christians expected the world to end soon.
Identification of the precise timing of the apocalypse is always challenging, especially when so many
documents, as has been the case in the Qumran findings, are damaged. The fragmented Qumran
Apocalypse of Weeks, however, appears to indicate that the final stage of history will be the rule of the
Kittim—presumably the Romans. Collins maintains a similar position in his interpretation of the Hodayot.
He believes that the “eschatological drama is already under way” for the author of 1QH3 (1984, 138).
11
Historical events, especially those of disaster and evil, have meaning only insofar as they proceed
towards the end of the world and the establishment of the Kingdom. That fulfillment of God’s promise to
remake the world is the cause of every event, and thus the source of the event’s meaningfulness.
12
1 Enoch contains, according to Collins, the first literal expectation of the world’s recreation in Jewish
apocalypticism (as opposed to a presumably metaphorical expectation in Isaiah) (2000b, 141).
13
God is always just about to create a new world in apocalyptic imaginings. The imminent end of the world
is predicted among Jews (4 Ezra 4:26, 2 Bar. 85:10) and Christians (Mk. 13:30, 1 Thess. 4:13-18, Rev.
22:7). To know that one’s alienation might someday be resolved in the distant future of subsequent
generations millennia from now would do little to improve one’s mood. Knowing that God plans on rectifying
the world in the very near future gives hope to the downtrodden. There is no reason to believe that all
apocalyptic alienation serves the same purpose. For example, the apocalyptic writings of first-century
Judaism before the Temple was destroyed may have been calls to war, but those written after the Temple’s
destruction brought relief without necessarily leading to revolution (Collins 2000b, 159).
14
This kind of apocalypticism is often called post-millenarian in Christianity. Because the return of Jesus is
expected after the millennium of peace (rather than as a necessary precursor to it), we can expect the world to
slowly improve over the course of history before culminating in the Second Coming. Early modern
scientists often reflected this attitude in their promises of technological progress. They argued that scientific
and, more importantly, technological progress improved humanity’s life on earth and was, therefore, part of
the divine promise. When integrated into apocalyptic thought, however, such learning was not “for its own
sake” but rather for the sake of God or of the Kingdom come. Thus, meaning is inextricable from the
salvific future.
15
E.g., see 4 Ezra 7:88-99.
16
Bodily resurrection first occurs in Ezekiel 37: “I am going to open your graves, and bring you up from
your graves, O my people; and I will bring you back to the land of Israel” (Ezek 37:12; see also Isa 25:8
and 26:19). For Ezekiel, however, resurrection referred to a restored nation of Israel, not to a literal
resurrection of the faithful.
17
It may well have been the metaphorical Ezekiel who set this paradigm. Ezekiel’s resurrection apparently
refers to the revived nation of Israel but his language affirmed the need for physical, if perhaps divinely
transformed, bodies when later apocalyptics began to reinterpret the notion of resurrection in Jewish belief.
In the perfect world to come, death will be annihilated. The bodies of the saved will be incorruptible,
imperishable. This tradition traces back at least as far as the apocalyptic portions of Isaiah. “No more shall
there be in [the new world] an infant that lives but a few days, or an old person who does not live out a
lifetime; for one who dies at a hundred years will be considered a youth” (Isa 65:20). Subsequently,
Biblical authors expected the bodily resurrection to do more than just raise bodies from the ground. The
early Jews and Christians believed that bodily resurrection would include a transformation of the body into
something superior (2 Bar 51:3-10, Mark 12:25, Lk 20: 35-36).
18
In ancient Jewish and Christian apocalyptic traditions, the saved resemble celestial bodies (equated with
angels) in their glory. Comparing the resurrected body to the sun, the moon, and the stars, Paul says,
“What is sown is perishable, what is raised is imperishable. It is sown in dishonor, it is raised in glory. It is
sown in weakness, it is raised in power. It is sown a physical body, it is raised a spiritual body” (1 Cor
15:42-44; see also Phil 3:21). Likewise, 2 Baruch declares that the saved “shall be glorified in changes, and
the form of their face shall be turned into the light of their beauty, that they may be able to acquire and
receive the world which does not die, which is promised to them” (51:3-4) and “they shall be made equal to
the stars” (51:9).
19
For example, the “perishable body must put on imperishability, and this mortal body must put on
immortality” (1 Cor 15:53-54).
20
Although Moravec recognizes that natural evolution is “blind” (1988, 158-159), he believes that
“competitive diversity will allow a Darwinian evolution to continue, weeding out ineffective ways of
thought” (1999, 165). The shift from competition for resources to competition among ways of thought shows that
Moravec believes evolution is teleological. More open in his faith, de Garis asserts that perhaps “the rise of
the artilect is inherent in the laws of physics” (2005, 175). Artilect is de Garis’s term for a machine with
“godlike” intelligence.
21
The combination of historical progress and an otherworldly journey appears to differentiate
Apocalyptic AI from ancient apocalypses. Generally, apocalypses contain either narratives of historical
progress or otherworldly journeys, but the Apocalypse of Abraham is a notable exception. While it may be
rare for an apocalypse to contain both elements, it is not unheard of.
22
Even the academic study of apocalypticism reflects this kind of attitude. Concern with ancient
apocalyptic beliefs arose out of the apparent risk of worldwide destruction in the mid-20th century (Russell
1978, Bull 2000).
23
That Apocalyptic AI is ultimately optimistic can be traced in part to the optimism inherent in apocalyptic
worldviews (Meeks 2000) but also to specifically 20th-century interpretations of apocalyptic possibility.
In “Apocalypse Then,” Robert Torry argues that 1950s science-fiction films portray the dangers of atomic
war as ultimately improving the world (1991). Torry shows how When Worlds Collide, The Day the Earth Stood Still,
and War of the Worlds were all metaphors for atomic conflict and how each ultimately affirms the
possibility of apocalyptic salvation. The apocalyptic imagination of late 20th century AI and robotics no
doubt reflects the “beneficial apocalypse” of mid-century science fiction.
24
Apocalyptic AI advocates are primarily concerned with their bodily alienation but there is a degree to
which political concerns over funding and public authority demonstrate perceived political alienation. The
desire for cultural authority clearly plays a role in researchers’ willingness to write pop science books, as
the books inevitably “prove” both the enormous significance of robotics/AI research and the superiority of
their authors as social commentators/directors (see Geraci 2007). It need not be maintained, however, that
Apocalyptic AI requires political alienation in order to qualify as apocalyptic. Webb points out that
apocalyptic alienation (crisis, in his terms) does not have any exclusive definition; he recognizes
psychological, social, religious, economic, and political factors in his approach to the social world of
apocalyptic imagination (1990, 125).
25
In separating rational thought and problem solving from the body, the emotions, and the like, Apocalyptic AI
advocates miss how important these are to rational thinking, which cannot function properly
without them (Damasio 1994).
26
For another example, see Benson 1993.
27
Western technology has a long association with God’s Kingdom. For example, David F. Noble has
argued that medieval millenarianism has been integrated into a wide array of 20th century technologies
(1997) and David E. Nye has demonstrated the relevance of millenarian thinking to the development of
land use technologies in early American history (2003). The promises of Apocalyptic AI are the latest in
this historical trend.
28
Christians have divided in their interpretation of this passage, either believing that Jesus will personally
inaugurate the 1000 years or that they will occur prior to his arrival. These two beliefs are called pre-millennialism and post-millennialism, respectively. Many of the founders of modern science and technology
were post-millennialist, arguing that technology would produce a wonderful world, a return to Eden,
which would subsequently be destroyed upon the coming of Jesus. Apocalyptic AI shares in this tradition,
with the obvious exception of the role of Jesus/God.
29
Daniel Crevier argues that building machines smarter than human beings will effectively purify
intelligence, removing the stain of its prior life (1993, 307). Like most Apocalyptic AI advocates, he
believes that information exists in a transcendent realm (1993, 48). It is to this transcendent world that
Apocalyptic AI hopes to bring human minds.
30
Newman offers a particularly effective portrayal of our ambivalence toward alchemical creation (2004).
He shows that since Greek times we have been both admiring and suspicious of artistic mimesis, in some
cases unsure whether art or nature is the greater but tending toward faith in the natural (unless, of course, we are
alchemists!). Even the creation of gold was a potentially mortal sin, since in it the alchemist presumed to
rival God (ibid., 222). Already in the medieval period, debate raged over whether human beings could
produce an artificial man and, if so, whether it would be superior to human beings (ibid., 35-36).
31
Warwick is outright suspicious of the feasibility of such a program (1997, 180-181).
32
Even Warwick’s cyborgs, though they will not be disembodied superminds, will operate in a cosmic
internet, having shifted computation/thinking to hardware linked to the global network.
33
While Westerners often decry the destructive power of robotics and AI, this fear is largely absent in
Japan, where other highly influential research continues. Religious differences between Japan and the West
promote many differences between their respective approaches to robotics and AI (Geraci 2006). One
important difference lies in the dynamic of power between intelligent machines and human beings. As is
apparent in this paper, human beings serve the interests of intelligent machines in the West. In Japan,
however, intelligent machines are expected to serve the interests of human beings. Obviously, when human
beings become subservient to machines, those machines will appear threatening to many people.
34
The United States military is the world’s largest source of funding for robotics and AI. The military, of
course, has a vested interest in “antisocial” software, and one is hard pressed to think why that would
change. For a review of military values and robotics/AI see Geraci 2005, 104-112.
35
Of all social values we might hope to instill in machines, Warwick expresses confidence in instilling only
our will to power. Although he opposes military uses for AI and robotics (1997, 210), he subconsciously
accepts that the machines of the future will be military machines. He never attempts to explain why smarter
machines will necessarily desire dominance; he simply assumes it to be the case. Given the role of the
military in the wider robotics and AI world, however, his faith may be well-founded.
36
Fears that machines will replace us, from Warwick and Bill Joy (who oppose the possibility) to Moravec,
Kurzweil, and de Garis (who approve of it, though in different ways), are based upon the presumption that
the essential “humanness” to be imparted to them is substantial or functional. That is, if human beings are
defined as possessing certain kinds of substances (e.g. reason) or functional abilities (e.g. running or world
domination), then surely we have much reason to fear the rise of the machines. If, however, the ability to
form relationships with one another is the essentially human attribute that we hope to instill in machines
then we have little to fear from them (Hertzfeld 2002, 312).
37
Kurzweil also recognizes that many human beings might oppose advanced AI technology, especially
those economically disenfranchised by the machines. He supposes, however, that in the future “the
underclass [will be] politically neutralized” (1999, 196). Kurzweil never bats an eyelash at this statement or
its wider implications for the poor and for other groups who may oppose the disenfranchisement the
machines threaten (which far exceeds the economic sphere).
38
This position has an interesting parallel in the claims of the Lutheran theologian Anne Foerst (2004, 32-41). Foerst also asserts that building intelligent machines is an act of religious worship, but for her it is to
participate in the co-creation of the world, not in the creation of a god. Engineering becomes prayer,
approved and encouraged by the Christian God. We should build companions in her account, rather than
successors. Some difficulties in this position appear in Geraci (Forthcoming).
39
This has a parallel in ancient writings. In the Apocalypse of Abraham, for example, sacrifice is required
for Abraham to enter heaven. Of course, Abraham doesn’t intend to sacrifice the human race!
40
Warwick and Kurzweil echo de Garis in this. Both raise possible threats to humanity but remain
hopeful—Kurzweil is always more positive than de Garis, Warwick is so on his happier days—that
humanity will merge with machines and join in the joyful life everlasting.
41
The most important such demarcation is that science makes no recourse to the supernatural. While there
are specifically religious kinds of promises in Apocalyptic AI, their transcendent guarantee comes directly
from within nature: evolution is held to be the key to our future salvation.
42
When morality does appear in Apocalyptic AI, such as in de Garis’s artilect war, the values are inverted.
What we once thought to be evil (such as total human genocide) is, he tells us, actually good.
43
We have, moreover, an obligation to promote an historical, philosophical, and sociological analysis of
religion and science that goes beyond the current paradigms in such study. Most late 20th century
approaches to religion and science (Brooke and Cantor [1998] is the best-known exception) stray little from
Ian Barbour’s four-fold typology of religion-science interactions: conflict, separation, dialogue, integration
(Barbour 1997). That typology, which has a semi-theological aim (Cantor and Kenny 2001), is exploded in
a description of Apocalyptic AI. Clearly, the competition for cultural authority between AI advocates and
theologians could be seen as a form of conflict, but the basis for AI claims to authority is, itself, grounded
in the sacred traditions of Western religion! Therefore, both conflict and integration appear simultaneously,
with the latter reconfigured, denying that integration necessarily means the cheerful merger of truths
envisioned by Barbour and others.