Extinction Outweighs - Open Evidence Project

Extinction Outweighs
Extinction OW – US-Russia War (2AC)
US-Russia war is an existential risk – four reasons
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
The US and Russia still have huge stockpiles of nuclear weapons. But would an all-out nuclear war really
exterminate humankind? Note that: (i) For there to be an existential risk it suffices that we can’t be sure that it
wouldn’t. (ii) The climatic effects of a large nuclear war are not well known (there is the possibility of a nuclear winter).
(iii) Future arms races between other nations cannot be ruled out and these could lead to even greater arsenals
than those present at the height of the Cold War. The world’s supply of plutonium has been increasing steadily to about two thousand
tons, some ten times as much as remains tied up in warheads ([9], p. 26). (iv) Even if some humans survive the short-term
effects of a nuclear war, it could lead to the collapse of civilization. A human race living under stone-age
conditions may or may not be more resilient to extinction than other animal species.
Existential risks outweigh
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Existential risks have a cluster of features that make it useful to identify them as a special category: the extreme
magnitude of the harm that would come from an existential disaster; the futility of the trial-and-error approach;
the lack of evolved biological and cultural coping methods; the fact that existential risk dilution is a global public
good; the shared stakeholdership of all future generations; the international nature of many of the required
countermeasures; the necessarily highly speculative and multidisciplinary nature of the topic; the subtle and
diverse methodological problems involved in assessing the probability of existential risks; and the comparative
neglect of the whole area. From our survey of the most important existential risks and their key attributes, we can extract tentative
recommendations for ethics and policy: 9.1 Raise the profile of existential risks. We need more research into existential risks – detailed
studies of particular aspects of specific risks as well as more general investigations of associated ethical, methodological, security and policy
issues. Public awareness should also be built up so that constructive political debate about possible countermeasures becomes possible. Now,
it’s a commonplace that researchers always conclude that more research needs to be done in their field. But in this instance it is really true.
There is more scholarly work on the life-habits of the dung fly than on existential risks.
Probability is at least 25% and even a 1% risk is terminal
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
In combination, these indirect arguments add important constraints to those we can glean from the
direct consideration of various technological risks, although there is not room here to elaborate on the details. But
the balance of evidence is such that it would appear unreasonable not to assign a substantial probability
to the hypothesis that an existential disaster will do us in. My subjective opinion is that setting this
probability lower than 25% would be misguided, and the best estimate may be considerably
higher. But even if the probability were much smaller (say, ~1%) the subject matter would
still merit very serious attention because of how much is at stake. In general, the greatest
existential risks on the time-scale of a couple of centuries or less appear to be those that
derive from the activities of advanced technological civilizations. We see this by looking at the various
existential risks we have listed. In each of the four categories, the top risks are engendered by our activities. The only significant existential risks
for which this isn’t true are “simulation gets shut down” (although on some versions of this hypothesis the shutdown would be prompted by
our activities [27]); the catch-all hypotheses (which include both types of scenarios); asteroid or comet impact (which is a very low probability
risk); and getting killed by an extraterrestrial civilization (which would be highly unlikely in the near future).[19]
Extinction OW – US-Russia War (Long)
US-Russia war threatens extinction – this outweighs other risks
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have
not evolved mechanisms, either biologically or culturally, for managing such risks. Our intuitions and coping strategies
have been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile
accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, droughts, World War I, World War II, epidemics of influenza, smallpox, black
plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by
trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things – from the
perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They
haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species. With the
exception of a species-destroying comet or asteroid impact (an extremely rare occurrence), there were probably no significant existential risks
in human history until the mid-twentieth century, and certainly none that it was within our power to do something about. The first man-made
existential risk was the inaugural detonation of an atomic bomb. At the time, there was some concern that the explosion might start a runaway
chain-reaction by “igniting” the atmosphere. Although we now know that such an outcome was physically impossible, it qualifies as an
existential risk that was present at the time. For there to be a risk, given the knowledge and understanding available, it suffices that there
is some subjective probability of an adverse outcome, even if it later turns out that objectively there was no
chance of something bad happening. If we don’t know whether something is objectively risky or not, then it is
risky in the subjective sense. The subjective sense is of course what we must base our decisions on.[2] At any given time we must
use our best current subjective estimate of what the objective risk factors are.[3] A much greater existential risk emerged with the
build-up of nuclear arsenals in the US and the USSR. An all-out nuclear war was a possibility with both a
substantial probability and with consequences that might have been persistent enough to qualify as global and terminal.
There was a real worry among those best acquainted with the information available at the time that a nuclear Armageddon would occur and
that it might annihilate our species or permanently destroy human civilization.[4] Russia and the US retain large nuclear arsenals that could be
used in a future confrontation, either accidentally or deliberately. There is also a risk that other states may one day build up large nuclear
arsenals. Note however that a smaller nuclear exchange, between India and Pakistan for instance, is not an existential risk, since it would not
destroy or thwart humankind’s potential permanently. Such a war might however be a local terminal risk for the cities most likely to be
targeted. Unfortunately, we shall see that nuclear Armageddon and comet or asteroid strikes are mere preludes to the existential risks that we
will encounter in the 21st century. The special nature of the challenges posed by existential risks is illustrated by the following points:
Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The
reactive approach – see what happens, limit damages, and learn from experience – is unworkable. Rather, we must take a
proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive
preventive action and to bear the costs (moral and economic) of such actions. We cannot necessarily rely on the
institutions, moral norms, social attitudes or national security policies that developed from our experience with
managing other sorts of risks. Existential risks are a different kind of beast. We might
find it hard to take them as seriously as we should simply because we have never yet witnessed such disasters.[5] Our collective fear-response
is likely ill calibrated to the magnitude of threat.
Reductions in existential risks are global public goods [13] and may therefore be
undersupplied by the market [14]. Existential risks are a menace for everybody and may require acting on the international plane. Respect for
national sovereignty is not a legitimate excuse for failing to take countermeasures against a major existential risk.
If we take into
account the welfare of future generations, the harm done by existential risks is multiplied by another factor, the
size of which depends on whether and how much we discount future benefits [15,16]. In view of its undeniable
importance, it is surprising how little systematic work has been done in this area. Part of the explanation may be that many of the gravest risks
stem (as we shall see) from anticipated future technologies that we have only recently begun to understand. Another part of the explanation
may be the unavoidably interdisciplinary and speculative nature of the subject. And in part the neglect may also be attributable to
an aversion against thinking seriously about a depressing topic. The point, however, is not to wallow in gloom and
doom but simply to take a sober look at what could go wrong so we can create responsible strategies for
improving our chances of survival. In order to do that, we need to know where to focus our efforts.
Extinction OW – US-Russia War – AT: D
Even if everyone doesn’t *immediately* die, an existential risk still exists because
human survival is imperiled
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
We shall use the following four categories to classify existential risks[6]: Bangs – Earth-originating intelligent life
goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction. Crunches – The
potential of humankind to develop into posthumanity[7] is permanently thwarted although human life continues in some
form. Shrieks – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and
desirable. Whimpers – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to
either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule
degree of what could have been achieved. Armed with this taxonomy, we can begin to analyze the most likely scenarios in each category. The
definitions will also be clarified as we proceed.
Extinction OW – D-Rule
Preventing extinction is a d-rule – below is the decision calculus
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Previous sections have argued that the combined probability of the existential risks is very substantial. Although there is still a fairly broad
range of differing estimates that responsible thinkers could make, it is nonetheless arguable that because the negative utility of an
existential disaster is so enormous, the objective of reducing existential risks should be a dominant
consideration when acting out of concern for humankind as a whole. It may be useful to adopt the following rule
of thumb for moral action; we can call it Maxipok: Maximize the probability of an okay outcome, where an
“okay outcome” is any outcome that avoids existential disaster. At best, this is a rule of thumb, a prima facie suggestion,
rather than a principle of absolute validity, since there clearly are other moral objectives than preventing terminal global disaster. Its usefulness
consists in helping us to get our priorities straight. Moral action is always at risk to diffuse its efficacy on feel-good projects[24] rather than on
serious work that has the best chance of fixing the worst ills. The cleft between the feel-good projects and what really has the greatest
potential for good is likely to be especially great in regard to existential risk. Since the goal is somewhat abstract and since
existential risks don’t currently cause suffering in any living creature[25], there is less of a feel-good dividend to
be derived from efforts that seek to reduce them. This suggests an offshoot moral project, namely to reshape the popular moral
perception so as to give more credit and social approbation to those who devote their time and resources to benefiting humankind via global
safety compared to other philanthropies. Maxipok, a kind of satisficing rule, is different from Maximin (“Choose the action that has the best
worst-case outcome.”)[26]. Since we cannot completely eliminate existential risks (at any moment we could be sent into the dustbin of cosmic
history by the advancing front of a vacuum phase transition triggered in a remote galaxy a billion years ago) using maximin in the
present context has the consequence that we should choose the act that has the greatest benefits under the
assumption of impending extinction. In other words, maximin implies that we should all start partying as if there
were no tomorrow. While that option is indisputably attractive, it seems best to acknowledge that there just
might be a tomorrow, especially if we play our cards right.
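Bostrom's contrast between Maxipok and Maximin can be made concrete with a toy decision table. Everything below is an illustrative sketch: the action names, probabilities, and utilities are invented numbers of our own, not figures from the card.

```python
# Toy decision problem: three actions, each with an "extinction" outcome and
# an "okay" outcome (one that avoids existential disaster).
# Each outcome is (probability, utility, okay_flag). All numbers are invented.
actions = {
    "party_like_no_tomorrow":  [(0.60, 3.0, False), (0.40, 4.0, True)],
    "business_as_usual":       [(0.30, 1.0, False), (0.70, 6.0, True)],
    "reduce_existential_risk": [(0.10, 0.5, False), (0.90, 5.0, True)],
}

def maximin(acts):
    """Pick the action whose worst-case utility is highest."""
    return max(acts, key=lambda a: min(u for _, u, _ in acts[a]))

def maxipok(acts):
    """Pick the action that maximizes the probability of an okay outcome."""
    return max(acts, key=lambda a: sum(p for p, _, ok in acts[a] if ok))

# maximin(actions) -> "party_like_no_tomorrow"
# maxipok(actions) -> "reduce_existential_risk"
```

Because some extinction probability survives every action, Maximin ends up ranking actions by how pleasant the doomed branch is (Bostrom's "partying" result), while Maxipok ranks them by the probability of avoiding existential disaster.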
Extinction OW – Outweighs VTL
Human extinction is the greatest act of suffering imaginable – using scientific methods
to forestall extinction is crucial
Richard J. Epstein and Y. Zhao ’09 – Laboratory of Computational Oncology, Department of Medicine, University of Hong Kong, “The
Threat That Dare Not Speak Its Name: Human Extinction,” Perspectives in Biology and Medicine, Volume 52, Number 1, Winter 2009, Muse.
Human extinction is 100% certain—the only uncertainties are when and how. Like the men and women of Shakespeare’s
As You Like It, our species is but one of many players making entrances and exits on the evolutionary stage. That we generally deny that such
exits for our own species are possible is to be expected, given the brutish selection pressures on our biology. Death, which is merely a biological
description of evolutionary selection, is fundamental to life as we know it. Similarly, death occurring at the level of a species—extinction—is as
basic to biology as is the death of individual organisms or cells. Hence, to regard extinction as catastrophic—which implies that
it may somehow never occur, provided that we are all well behaved—is not only specious, but self-defeating. Man is both
blessed and cursed by the highest level of self-awareness of any life-form on Earth. This suggests that the process of human
extinction is likely to be accompanied by more suffering than that associated with any previous species
extinction event. Such suffering may only be eased by the getting of wisdom: the same kind of wisdom that could, if
applied sufficiently early, postpone extinction. But the tragedy of our species is that evolution does not select for such foresight. Man’s
dreams of being an immortal species in an eternal paradise are unachievable not because of original sin—the doomsday scenario for which we
choose to blame our “free will,” thereby perpetuating our creationist illusion of being at the center of the universe—but rather, in reductionist
terms, because paradise is incompatible with evolution. More scientific effort in propounding this central truth of our
species’ mortality, rather than seeking spiritual comfort in escapist fantasies, could pay dividends in minimizing
the eventual cumulative burden of human suffering.
Extinction OW – Turns the Alt
Even if we don’t get to extinction, a limited nuclear war turns and outweighs
Martin, Professor of Social Sciences in the School of Social Sciences, Media and
Communication at the University of Wollongong, 1982
(Brian, “How the Peace Movement Should be Preparing for Nuclear War,” Bulletin of Peace
Proposals, Vol. 13, No. 2, 1982, pp. 149-159)
In addition to the important physical effects of nuclear war there would be important indirect political effects. It seems very likely that there
would be strong moves to maintain or establish authoritarian rule as a response to crises
preceding or following nuclear war. Ever since Hiroshima, the threat of nuclear destruction has been
used to prop up repressive institutions, under the pretext of defending against the 'enemy'.
The actuality of nuclear war could easily result in the culmination of this trend. Large segments of the population could be
manipulated to support a repressive regime under the necessity to defend against further threats or to obtain revenge.
A limited nuclear war might kill some hundreds of thousands or tens of millions of people, surely a major tragedy. But another tragedy
could also result: the establishment, possibly for decades, of repressive civilian or military rule in
countries such as Italy, Australia and the US, even if they were not directly involved in the war. The possibility of grassroots mobilisation for
disarmament and peace would be greatly reduced even from its present levels. For such developments the people and the peace movements
of the world are largely unprepared.
Extinction OW – Cummiskey
Lives that are lost as a result of not performing the plan are your first ethical
duty
David Cummiskey, Associate Professor of Philosophy @ Bates College & a Ph.D. from UM,
1996, Kantian Consequentialism, Pg. 145-146
In the next section, I will defend this interpretation of the duty of beneficence. For the sake of argument, however, let us first simply assume
that beneficence does not require significant self-sacrifice and see what follows. Although Kant is unclear on this point, we will assume that
significant self-sacrifices are supererogatory. Thus, if I must harm one in order to save many, the individual whom I will harm by my action is not
morally required to affirm the action. On the other hand, I have a duty to do all that I can for those in need. As a consequence I am faced
with a dilemma: If I act, I harm a person in a way that a rational being need not consent to; if I
fail to act, then I do not do my duty to those in need and thereby fail to promote an objective
end. Faced with such a choice, which horn of the dilemma is more consistent with the formula of the end-in-itself? We must not
obscure the issue by characterizing this type of case as the sacrifice of individuals for some
abstract “social entity.” It is not a question of some persons having to bear the cost for some
elusive “overall social good.” Instead, the question is whether some persons must bear the
inescapable cost for the sake of other persons. Robert Nozick, for example, argues that “to use a person in
this way does not sufficiently respect and take account of the fact that he [or she] is a
separate person, that his is the only life he [or she] has.” But why is this not equally true of all
those whom we do not save through our failure to act? By emphasizing solely the one who
must bear the cost if we act, we fail to sufficiently respect and take account of the many
other separate persons, each with only one life, who will bear the cost of our inaction. In such a
situation, what would a conscientious Kantian agent, an agent motivated by the unconditional value of rational beings, choose? A morally
good agent recognizes that the basis of all particular duties is the principle that “rational
nature exists as an end in itself.” Rational nature as such is the supreme objective end of all conduct. If one truly
believes that all rational beings have an equal value then the rational solution to such a dilemma
involves maximally promoting the lives and liberties of as many rational beings as possible. In
order to avoid this conclusion, the non-consequentialist Kantian needs to justify agent-centered
constraints. As we saw in chapter 1, however, even most Kantian deontologists recognize that agent-centered
constraints require a non-value based rationale. But we have seen that Kant’s normative theory is based on an
unconditionally valuable end. How can a concern for the value of rational beings lead to a refusal to sacrifice rational beings even when this
would prevent other more extensive losses of rational beings? If the moral law is based on the value of rational beings and their ends, then
what is the rationale for prohibiting a moral agent from maximally promoting these two tiers of value? If
I sacrifice some for the
sake of others, I do not use them arbitrarily, and I do not deny the unconditional value of
rational beings. Persons may have “dignity, that is, an unconditional and incomparable worth”
that transcends any market value, but persons also have a fundamental equality that dictates
that some must sometimes give way for the sake of others. The concept of the end-in-itself
does not support the view that we may never force another to bear some cost in order to
benefit others. If one focuses on the equal value of all rational beings, then equal consideration suggests that one
may have to sacrifice some to save many.
Extinction OW – Sandberg
Extinction outweighs – it’s the only irreversible impact
Sandberg, 8 – Anders Sandberg, James Martin Research Fellow at the Future of Humanity
Institute at Oxford University, et al., with Jason G. Matheny, Ph.D. candidate in Health Policy
and Management at the Johns Hopkins Bloomberg School of Public Health and Special
Consultant to the Center for Biosecurity at the University of Pittsburgh Medical Center, and
Milan M. Ćirković, Senior Research Associate at the Astronomical Observatory of Belgrade and
Assistant Professor of Physics at the University of Novi Sad in Serbia and Montenegro, 2008
(“How can we reduce the risk of human extinction?,” Bulletin of the Atomic Scientists,
September 8th, Available Online at http://thebulletin.org/web-edition/features/how-can-we-reduce-the-risk-of-human-extinction)
The facts are sobering. More than 99.9 percent of species that have ever existed on Earth have gone
extinct. Over the long run, it seems likely that humanity will meet the same fate. In less than a billion years, the increased intensity of the Sun will initiate a wet
greenhouse effect, even without any human interference, making Earth inhospitable to life. A couple of billion years later Earth will be destroyed, when it's engulfed
by our Sun as it expands into a red-giant star. If we colonize space, we could survive longer than our planet, but as mammalian species survive, on average, only two
million years, we should consider ourselves very lucky if we make it to one billion. Humanity could be extinguished as early as this century by succumbing to natural
hazards, such as an extinction-level asteroid or comet impact, supervolcanic eruption, global methane-hydrate release, or nearby supernova or gamma-ray burst.
(Perhaps the most probable of these hazards, supervolcanism, was discovered only in the last 25 years, suggesting that other natural hazards may remain
unrecognized.) Fortunately the probability of any one of these events killing off our species is very low—less than one in 100 million per year,
given what we know about their past frequency. But as improbable as these events are, measures to reduce their probability can still be
worthwhile. For instance, investments in asteroid detection and deflection technologies cost less, per life saved, than most investments in
medicine. While an extinction-level asteroid impact is very unlikely, its improbability is outweighed by its potential death toll. The risks from
anthropogenic hazards appear at present larger than those from natural ones. Although great progress has been made in
reducing the number of nuclear weapons in the world, humanity is still threatened by the possibility of a global
thermonuclear war and a resulting nuclear winter. We may face even greater risks from emerging technologies. Advances in synthetic
biology might make it possible to engineer pathogens capable of extinction-level pandemics. The knowledge, equipment, and materials needed to engineer
pathogens are more accessible than those needed to build nuclear weapons. And unlike other weapons, pathogens are self-replicating, allowing a small arsenal to
become exponentially destructive. Pathogens have been implicated in the extinctions of many wild species. Although most pandemics "fade out" by reducing the
density of susceptible populations, pathogens with wide host ranges in multiple species can reach even isolated individuals. The intentional or unintentional release
of engineered pathogens with high transmissibility, latency, and lethality might be capable of causing human extinction. While such an event seems unlikely today,
the likelihood may increase as biotechnologies continue to improve at a rate rivaling Moore's Law. Farther out in time are technologies that remain theoretical but
might be developed this century. Molecular nanotechnology could allow the creation of self-replicating machines capable of destroying the ecosystem. And
advances in neuroscience and computation might enable improvements in cognition that accelerate the invention of new weapons. A survey at the Oxford
conference found that concerns about human extinction were dominated by fears that new technologies would be misused. These emerging threats are especially
challenging as they could become dangerous more quickly than past technologies, outpacing society's ability to control them. As H.G. Wells noted, "Human history
becomes more and more a race between education and catastrophe." Such remote risks may seem academic in a world plagued
by immediate problems, such as global poverty, HIV, and climate change. But as intimidating as these
problems are, they do not threaten human existence. In discussing the risk of nuclear winter, Carl Sagan
emphasized the astronomical toll of human extinction: A nuclear war imperils all of our descendants, for as long
as there will be humans. Even if the population remains static, with an average lifetime of the order of 100 years, over a
typical time period for the biological evolution of a successful species (roughly ten million years), we are talking about
some 500 trillion people yet to come. By this criterion, the stakes are one million times greater for
extinction than for the more modest nuclear wars that kill "only" hundreds of millions of people.
There are many other possible measures of the potential loss—including culture and science, the
evolutionary history of the planet, and the significance of the lives of all of our ancestors who
contributed to the future of their descendants. Extinction is the undoing of the human enterprise.
There is a discontinuity between risks that threaten 10 percent or even 99 percent of humanity and
those that threaten 100 percent. For disasters killing less than all humanity, there is a good chance
that the species could recover. If we value future human generations, then reducing extinction risks
should dominate our considerations. Fortunately, most measures to reduce these risks also improve global security against a range of lesser
catastrophes, and thus deserve support regardless of how much one worries about extinction. These measures include: Removing nuclear weapons from
hair-trigger alert and further reducing their numbers; Placing safeguards on gene synthesis equipment to prevent synthesis of select pathogens; Improving our ability to
respond to infectious diseases, including rapid disease surveillance, diagnosis, and control, as well as accelerated drug development; Funding research on asteroid
detection and deflection, "hot spot" eruptions, methane hydrate deposits, and other catastrophic natural hazards; Monitoring developments in key disruptive
technologies, such as nanotechnology and computational neuroscience, and developing international policies to reduce the risk of catastrophic accidents.
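Sagan's 500-trillion figure quoted in the card above follows from simple arithmetic. The sketch below reproduces it; the five-billion population is our assumption (roughly the world population when Sagan wrote), since the quoted passage leaves that input implicit.

```python
# Reproduce Sagan's back-of-the-envelope count of future people.
population = 5e9            # static population (assumed ~5 billion)
lifetime_years = 100        # average lifetime, per Sagan
species_span_years = 10e6   # ~10 million years, typical for a successful species

generations = species_span_years / lifetime_years   # 100,000 population turnovers
future_people = population * generations            # 5e14 = 500 trillion

# Stakes ratio versus a war killing "only" hundreds of millions:
modest_war_deaths = 5e8
ratio = future_people / modest_war_deaths           # 1e6: "one million times greater"
```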
Extinction OW – AT: K
Existential risks outweigh their impact and justify our scholarship
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Existential risks have a cluster of features that make it useful to identify them as a special category: the extreme
magnitude of the harm that would come from an existential disaster; the futility of the trial-and-error approach;
the lack of evolved biological and cultural coping methods; the fact that existential risk dilution is a global public
good; the shared stakeholdership of all future generations; the international nature of many of the required
countermeasures; the necessarily highly speculative and multidisciplinary nature of the topic; the subtle and
diverse methodological problems involved in assessing the probability of existential risks; and the comparative
neglect of the whole area. From our survey of the most important existential risks and their key attributes, we can extract
tentative recommendations for ethics and policy: 9.1 Raise the profile of existential risks. We need more research into
existential risks – detailed studies of particular aspects of specific risks as well as more general investigations of associated
ethical, methodological, security and policy issues. Public awareness should also be built up so that constructive political
debate about possible countermeasures becomes possible. Now, it’s a commonplace that researchers always conclude that
more research needs to be done in their field. But in this instance it is really true. There is more scholarly work on the
life-habits of the dung fly than on existential risks.
Extinction OW – AT: AIDS
AIDS and pandemics aren’t existential risks
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have
not evolved mechanisms, either biologically or culturally, for managing such risks. Our intuitions and coping strategies have
been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile
accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, droughts, World War I, World War II, epidemics of influenza, smallpox,
black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards
risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people
immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the
worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly
affected the total amount of human suffering or happiness or determined the long-term fate of our species.
Extinction OW – AT: Bio-D
Biodiversity isn't an existential risk
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Existential risks are distinct from global endurable risks. Examples of the latter kind include: threats to the
biodiversity of Earth’s ecosphere, moderate global warming, global economic recessions (even major ones), and possibly stifling
cultural or religious eras such as the “dark ages”, even if they encompass the whole global community, provided they are transitory (though see
the section on “Shrieks” below). To say that a particular global risk is endurable is evidently not to say that it is
acceptable or not very serious. A world war fought with conventional weapons or a Nazi-style Reich lasting for a decade would be
extremely horrible events even though they would fall under the rubric of endurable global risks since humanity could
eventually recover. (On the other hand, they could be a local terminal risk for many individuals and for persecuted ethnic groups.) I shall
use the following definition of existential risks: Existential risk – One where an adverse outcome would either annihilate Earth-originating
intelligent life or permanently and drastically curtail its potential. An existential risk is one where humankind as a whole is
imperiled. Existential disasters have major adverse consequences for the course of human civilization for all
time to come.
Extinction OW – AT: Bioweapons
Biological super diseases aren’t an existential risk
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have
not evolved mechanisms, either biologically or culturally, for managing such risks. Our intuitions and coping strategies have
been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile
accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, droughts, World War I, World War II, epidemics of influenza, smallpox,
black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards
risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people
immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the
worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly
affected the total amount of human suffering or happiness or determined the long-term fate of our species.
Extinction OW – AT: Cycle of Violence/Tech
Technology *may* usher in risks but it *certainly* solves extinction
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
In combination, these indirect arguments add important constraints to those we can glean from the direct
consideration of various technological risks, although there is not room here to elaborate on the details. But the balance of
evidence is such that it would appear unreasonable not to assign a substantial probability to the hypothesis that an
existential disaster will do us in. My subjective opinion is that setting this probability lower than 25% would be
misguided, and the best estimate may be considerably higher. But even if the probability were much smaller
(say, ~1%) the subject matter would still merit very serious attention because of how much is at stake. In general,
the greatest existential risks on the time-scale of a couple of centuries or less appear to be those that derive
from the activities of advanced technological civilizations. We see this by looking at the various existential risks we have listed.
In each of the four categories, the top risks are engendered by our activities. The only significant existential risks for which this isn’t true are
“simulation gets shut down” (although on some versions of this hypothesis the shutdown would be prompted by our activities [27]); the catchall hypotheses (which include both types of scenarios); asteroid or comet impact (which is a very low probability risk); and getting killed by an
extraterrestrial civilization (which would be highly unlikely in the near future).[19] It may not be surprising that existential risks created by
modern civilization get the lion’s share of the probability. After all, we are now doing some things that have never been done on Earth before,
and we are developing capacities to do many more such things. If non-anthropogenic factors have failed to annihilate the human species for
hundreds of thousands of years, it could seem unlikely that such factors will strike us down in the next century or two. By contrast, we have no
reason whatever not to think that the products of advanced civilization will be our bane. We shouldn’t be too quick to dismiss the existential
risks that aren’t human-generated as insignificant, however. It’s true that our species has survived for a long time in spite of whatever such risks
are present. But there may be an observation selection effect in play here. The question to ask is, on the theory that natural disasters sterilize
Earth-like planets with a high frequency, what should we expect to observe? Clearly not that we are living on a sterilized planet. But maybe that
we should be more primitive humans than we are? In order to answer this question, we need a solution to the problem of the reference class in
observer selection theory [76]. Yet that is a part of the methodology that doesn’t yet exist. So at the moment we can state that the most
serious existential risks are generated by advanced human civilization, but we base this assertion on direct considerations. Whether there is
additional support for it based on indirect considerations is an open question. We should not blame civilization or technology for
imposing big existential risks. Because of the way we have defined existential risks, a failure to develop
technological civilization would imply that we had fallen victims of an existential disaster (namely a crunch,
“technological arrest”). Without technology, our chances of avoiding existential risks would therefore be nil.
With technology, we have some chance, although the greatest risks now turn out to be those generated by
technology itself.
Extinction OW – AT: Death Drive/Genocide
Even if the plan somehow leads to erasure of a population, global terminal threats
outweigh local personal threats
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
We can distinguish six qualitatively distinct types of risks based on their scope and intensity (figure 1). The third
dimension, probability, can be superimposed on the two dimensions plotted in the figure. Other things equal, a risk is more serious
if it has a substantial probability and if our actions can make that probability significantly greater or
smaller. “Personal”, “local”, or “global” refer to the size of the population that is directly affected; a global risk
is one that affects the whole of humankind (and our successors). “Endurable” vs. “terminal” indicates how intensely
the target population would be affected. An endurable risk may cause great destruction, but one can either
recover from the damage or find ways of coping with the fallout. In contrast, a terminal risk is one where the
targets are either annihilated or irreversibly crippled in ways that radically reduce their potential to live the sort
of life they aspire to. In the case of personal risks, for instance, a terminal outcome could for example be death,
permanent severe brain injury, or a lifetime prison sentence. An example of a local terminal risk would be genocide leading to
the annihilation of a people (this happened to several Indian nations). Permanent enslavement is another example.
Extinction OW – AT: Discourse First
The 1AC is necessary discourse – combating complacency is crucial to halting certain
and inevitable extinction
Richard J. Epstein and Y. Zhao ‘9 – Laboratory of Computational Oncology, Department of Medicine, University of Hong Kong, The
Threat That Dare Not Speak Its Name; Human Extinction, Perspectives in Biology and Medicine Volume 52, Number 1, Winter 2009, Muse.
We shall not speculate here as to the “how and when” of human extinction; rather, we ask why there remains
so little discussion of this important topic. We hypothesise that a lethal mix of ignorance and denial is blinding
humans from the realization that our own species could soon (a relative concept, admittedly) be as endangered as
many other large mammals (Cardillo et al. 2004). For notwithstanding the “overgrown Petri dish” model of human decline now
confronting us, the most sinister menace that we face may not be extrinsic selection pressures but complacency.
Entrenched in our culture is a knee-jerk “boy who cried wolf” skepticism aimed at any person who voices
concerns about the future—a skepticism fed by a traditionally bullish, growth-addicted economy that eschews
caution (Table 1). But the facts of extinction are less exciting and newsworthy than the roller-coaster booms and busts of stock markets.
Extinction OW – AT: Economy
Economic collapse isn't an existential threat – it's endurable
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Existential risks are distinct from global endurable risks. Examples of the latter kind include: threats to the
biodiversity of Earth’s ecosphere, moderate global warming, global economic recessions (even major ones), and possibly
stifling cultural or religious eras such as the “dark ages”, even if they encompass the whole global community, provided they are transitory
(though see the section on “Shrieks” below). To say that a particular global risk is endurable is evidently not to say that it
is acceptable or not very serious. A world war fought with conventional weapons or a Nazi-style Reich lasting for a decade would be
extremely horrible events even though they would fall under the rubric of endurable global risks since humanity could
eventually recover. (On the other hand, they could be a local terminal risk for many individuals and for persecuted ethnic groups.) I shall
use the following definition of existential risks: Existential risk – One where an adverse outcome would either annihilate Earth-originating
intelligent life or permanently and drastically curtail its potential. An existential risk is one where humankind as a whole is
imperiled. Existential disasters have major adverse consequences for the course of human civilization for all
time to come.
Extinction OW – AT: Global Warming
Global warming isn’t an existential risk
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Existential risks are distinct from global endurable risks. Examples of the latter kind include: threats to the
biodiversity of Earth’s ecosphere, moderate global warming, global economic recessions (even major ones), and possibly stifling
cultural or religious eras such as the “dark ages”, even if they encompass the whole global community, provided they are transitory (though see
the section on “Shrieks” below). To say that a particular global risk is endurable is evidently not to say that it is
acceptable or not very serious. A world war fought with conventional weapons or a Nazi-style Reich lasting for a decade would be
extremely horrible events even though they would fall under the rubric of endurable global risks since humanity could
eventually recover. (On the other hand, they could be a local terminal risk for many individuals and for persecuted ethnic groups.) I shall
use the following definition of existential risks: Existential risk – One where an adverse outcome would either annihilate Earth-originating
intelligent life or permanently and drastically curtail its potential. An existential risk is one where humankind as a whole is
imperiled. Existential disasters have major adverse consequences for the course of human civilization for all
time to come.
Extinction OW – AT: Influenza
Influenza pandemics aren’t existential risks
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
Risks in this sixth category are a recent phenomenon. This is part of the reason why it is useful to distinguish them from other risks. We have
not evolved mechanisms, either biologically or culturally, for managing such risks. Our intuitions and coping strategies have
been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile
accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, droughts, World War I, World War II, epidemics of influenza, smallpox,
black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards
risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people
immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the
worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly
affected the total amount of human suffering or happiness or determined the long-term fate of our species.
Extinction OW – AT: Social Death
Social death isn’t equivalent to physical death, which outweighs
Jonas ’96 (Hans, Former Alvin Johnson Prof. Phil. – New School for Social Research and Former Eric Voegelin Visiting Prof. – U. Munich,
“Morality and Mortality: A Search for the Good After Auschwitz”, p. 111-112)
With this look ahead at an ethics for the future, we are touching at the same time upon the question of the future of freedom. The unavoidable
discussion of this question seems to give rise to misunderstandings. My dire prognosis that not only our material standard of living but also our
democratic freedoms would fall victim to the growing pressure of a worldwide ecological crisis, until finally there would remain only some form
of tyranny that would try to save the situation, has led to the accusation that I am defending dictatorship as a solution to our problems. I shall
ignore here what is a confusion between warning and recommendation. But I have indeed said that such a tyranny
would still be
better than total ruin; thus, I have ethically accepted it as an alternative. I must now defend this standpoint, which I continue to
support, before the court that I myself have created with the main argument of this essay. For are we not contradicting
ourselves in prizing physical survival at the price of freedom? Did we not say that freedom was the condition of
our capacity for responsibility—and that this capacity was a reason for the survival of humankind?; By tolerating tyranny as an
alternative to physical annihilation are we not violating the principle we established: that the
How of existence must not take precedence over its Why? Yet we can make a terrible
concession to the primacy of physical survival in the conviction that the ontological capacity
for freedom, inseparable as it is from man's being, cannot really be extinguished, only
temporarily banished from the public realm. This conviction can be supported by experience
we are all familiar with. We have seen that even in the most totalitarian societies the urge for
freedom on the part of some individuals cannot be extinguished, and this renews our faith in
human beings. Given this faith, we have reason to hope that, as long as there are human beings who survive,
the image of God will continue to exist along with them and will wait in concealment for its
new hour. With that hope—which in this particular case takes precedence over fear—it is permissible, for the sake
of physical survival, to accept if need be a temporary absence of freedom in the external
affairs of humanity. This is, I want to emphasize, a worst-case scenario, and it is the foremost task of responsibility at this particular
moment in world history to prevent it from happening. This is in fact one of the noblest of duties (and at the same time one concerning self-preservation), on the part of the imperative of responsibility to avert future coercion that would lead to lack of freedom by acting freely in the
present, thus preserving as much as possible the ability of future generations to assume responsibility. But more than that is involved. At
stake is the preservation of Earth's entire miracle of creation, of which our human existence is
a part and before which man reverently bows, even without philosophical "grounding." Here too
faith may precede and reason follow; it is faith that longs for this preservation of the Earth (fides quaerens intellectum), and reason comes as
best it can to faith's aid with arguments, not knowing or even asking how much depends on its success or failure in determining what action to
take. With this confession of faith we come to the end of our essay on ontology.
Extinction OW – AT: Structural Violence
Portraying eco-damage as ‘extinction-level’ is a crucial communication act that
forestalls complete extinction – it solves their turn because it sparks a new social ethic
Richard J. Epstein and Y. Zhao ‘9 – Laboratory of Computational Oncology, Department of Medicine, University of Hong Kong, The
Threat That Dare Not Speak Its Name; Human Extinction, Perspectives in Biology and Medicine Volume 52, Number 1, Winter 2009, Muse.
Final ends for all species are the same, but the journeys will be different. If we cannot influence the end of our
species, can we influence the journey? To do so—even in a small way—would be a crowning achievement for
human evolution and give new meaning to the term civilization. Only by elevating the topic [End Page 121] of human
extinction to the level of serious professional discourse can we begin to prepare ourselves for the challenges
that lie ahead. Table 3. Human Thinking Modes Relevant to Extinction: from Ego-Think to Eco-Think The difficulty of the required
transition should not be underestimated. This is depicted in Table 3 as a painful multistep progression from the 20th-century
philosophical norm of Ego-Think—defined therein as a short-term state of mind valuing individual material self-interest above all other
considerations—to Eco-Think, in which humans come to adopt a broader Gaia-like outlook on themselves as but
one part of an infinitely larger reality. Making this change must involve communicating the non-sensationalist
message to all global citizens that “things are serious” and “we are in this together”—or, in blunter language, that the road to
extinction and its related agonies does indeed lie ahead. Consistent with this prospect, the risks of human extinction—and the
cost-benefit of attempting to reduce these risks—have been quantified in a recent sobering analysis (Matheny 2007). Once complacency
has been shaken off and a sense of collective purpose created, the battle against self-seeking anthropocentric human instincts will
have only just begun. It is often said that human beings suffer from the ability to appreciate their own mortality—an existential agony
that has given rise to the great religions— but in the present age of religious decline, we must begin to bear the added burden of anticipating
the demise of our species. Indeed, as argued here, there are compelling reasons for encouraging this collective mind-shift. For in the best of
all possible worlds, the realization that our species has long-term survival criteria distinct from our short-term
tribal priorities could spark a new social ethic to upgrade what we now all too often dismiss as “human nature”
(Tudge 1989). [End Page 122]
Extinction OW – AT: VTL
Existential risks threaten everything about life we value
Bostrom, ‘2.
Nick, PhD, Faculty in Philosophy at Oxford, “Existential Risks: Analyzing Human Extinction
Scenarios and Related Hazards,” http://www.nickbostrom.com/existential/risks.html.
We shall use the following four categories to classify existential risks [6]: Bangs – Earth-originating intelligent life
goes extinct in relatively sudden disaster resulting from either an accident or a deliberate act of destruction. Crunches – The
potential of humankind to develop into posthumanity[7] is permanently thwarted although human life continues in some
form. Shrieks – Some form of posthumanity is attained but it is an extremely narrow band of what is possible and
desirable. Whimpers – A posthuman civilization arises but evolves in a direction that leads gradually but irrevocably to
either the complete disappearance of the things we value or to a state where those things are realized to only a minuscule
degree of what could have been achieved. Armed with this taxonomy, we can begin to analyze the most likely scenarios in each category. The
definitions will also be clarified as we proceed.