Case

1AC
Plan
Text: The United States should legalize physician-assisted suicide by granting all those
with terminal illnesses the constitutional right to undergo pre-mortem
cryopreservation.
Reanimation Adv
Rapidly advancing cryopreservation technology can be a genuine solution to the suffering of millions; unfortunately, restrictions on euthanasia obstruct physicians' ability to adequately vitrify the body and brain. Legalizing cryothanasia would optimize the chances of successful reanimation
Zoltan Istvan 08/21/2014 "Should Cryonics, Cryothanasia, and Transhumanism Be Part of the
Euthanasia Debate?" http://www.huffingtonpost.com/zoltan-istvan/should-cryonics-cryocide_b_5518684.html Zoltan Istvan is a bestselling author and graduate of Columbia University
An elderly man named Bill sits in a lonely Nevada nursing home, staring out the window. The sun is
fading from the sky, and night will soon cover the surrounding windswept desert. Bill has late-onset
Alzheimer's disease, and the plethora of medications he's on is losing the war to keep his mind intact.
Soon, he will lose control of many of his cognitive functions, will forget many of his memories, and will
no longer recognize friends and family. Approximately 40 million people around the world have some
form of dementia, according to a World Health Organization report. About 70 percent of those suffer
from Alzheimer's. With average lifespans increasing due to rapidly improving longevity science, what are
people with these maladies to do? Do those with severe cases want to be kept alive for years or even
decades in a debilitated mental state just because modern medicine can do it? In parts of Europe and a
few states in America where assisted suicide--sometimes referred to as euthanasia or physician aid in
dying--is allowed, some mental illness sufferers decide to end their lives while they're still cognitively
sound and can recognize their memories and personality. However, most people around the world with
dementia are forced to watch their minds deteriorate. Families and caretakers of dementia patients are
often dramatically affected too. Watching a loved one slowly lose their cognitive functions and
memories is one of the most challenging and painful predicaments anyone can ever go through.
Exorbitant finances further complicate the matter because it's expensive to provide proper care for the
mentally ill. In the 21st Century--the age of transhumanism and brilliant scientific achievement--the
question should be asked: Are there other ways to approach this sensitive issue? The transhumanist
field of cryonics--using ultra-cold temperatures to preserve a dead body in hopes of future revival--has come a long way since the first person was frozen in 1967. Various organizations and companies
around the world have since preserved a few hundred people. Over a thousand people are signed up to
be frozen in the future, and many millions of people are aware of the procedure. Some may say cryonics
is crackpot science. However, those accusations are unfounded. Already, human beings can be revived
and go on to live normal lives after being frozen in water for over an hour. Additionally, suspended
animation is now occurring in a university hospital in Pittsburgh, where a saline-cooling solution has
recently been approved by the FDA to preserve the clinically dead for hours before resuscitating them.
In a decade's time, this procedure may be used to keep people suspended for a week or a month before
waking them. Clearly, the medical field of preserving the dead for possible future life is quickly
improving every year. The trick with cryonics is preserving someone immediately after they've died.
Otherwise, critical organs, especially the brain and its billions of neurons, have a far higher chance of
being damaged in the freezing. However, it's almost impossible to cryonically freeze someone right
after death. Circumstances usually get in the way of an ideal suspension. Bodies must first be brought
to a cryonics facility. Most municipalities require technicians, doctors, and a funeral director to legally
sign off on a body before it can be cryonically preserved. All this takes time, and minutes are precious
once the last heartbeat and breath of air have been made by a cryonics candidate. Recently, some
transhumanists have advocated for cryothanasia, where a patient undergoes physician- or self-administered euthanasia with the intent of being cryonically suspended during the death process or
immediately afterward. This creates the optimum environment since all persons involved are on hand
and ready to do their part so that an ideal freeze can occur. Cryothanasia could be utilized for a number
of people and situations: the atheist Alzheimer's sufferer who doesn't believe in an afterlife and wants
science to give him another chance in the future; the suicidal schizophrenic who doesn't want to exist in
the current world, but isn't ready to give up altogether on existence; the terminally ill transhumanist
cancer patient who doesn't want to lose half their body weight and undergo painful chemotherapy
before being cryonically frozen; or the extreme special needs or disabled person who wants to come
back in an age where their disabilities can be fixed. There might even be spiritual, religious, or
philosophical reasons for pursuing an impermanent death, as in my novel The Transhumanist Wager,
where protagonist Jethro Knights undergoes cryothanasia in search of a lost loved one. There are many
sound reasons why someone might choose cryothanasia. Whoever the person and whatever the reason,
there is a belief that life can be better for them in some future time. Some experts believe we will begin
reanimating cryonically frozen patients in 25 to 50 years. Technologies via bioengineering,
nanomedicine, and mind uploading will likely lead the way. Hundreds of millions of dollars are being
spent on developing these technologies that will also create breakthroughs for the field of cryonics and
other areas of suspended animation. Another advantage about cryonics and cryothanasia is their
affordability. It costs about $1,000 to painlessly euthanize oneself and an average of $80,000 to
cryonically freeze one's body. It costs many times more than that to keep someone alive who is suffering
from a serious mental disorder and needs constant 24-hour a day care over many years. Despite some
of the positive possibilities, cryothanasia is virtually unknown to people and is often technically illegal in many places around the world. Of course, much discussion would have to take place in private,
public, and political circles in order to determine if cryothanasia has a valid place in society.
Nevertheless, cryothanasia represents an original way for dementia sufferers and others to consider
now that they are living far longer than ever before.
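The card's cost comparison is simple arithmetic. As an illustrative check only, the sketch below restates it in Python; the long-term-care figures are assumptions added for scale and do not come from the evidence.

```python
# Illustrative cost comparison using the card's figures; the long-term-care numbers
# are assumptions added for scale, not sourced from the evidence.
euthanasia_cost = 1_000            # card: ~$1,000 to painlessly euthanize oneself
cryopreservation_cost = 80_000     # card: ~$80,000 average to cryonically freeze one's body
assumed_annual_care_cost = 100_000 # assumption: 24-hour dementia care per year
assumed_years_of_care = 5          # assumption: duration of late-stage care

care_total = assumed_annual_care_cost * assumed_years_of_care
cryothanasia_total = euthanasia_cost + cryopreservation_cost

print(f"Cryothanasia (euthanasia + preservation): ${cryothanasia_total:,}")
print(f"Assumed {assumed_years_of_care} years of round-the-clock care: ${care_total:,}")
print(f"Ratio (care / cryothanasia): {care_total / cryothanasia_total:.1f}x")
```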
Technological avenues for immortality are forthcoming, and cryonic reanimation is the key ingredient; brain uploading could be feasible as early as 2025
Marcus Edwards 2012 "The Path to Immortality"
http://wiki.eyewire.org/en/The_Path_to_Immortality Edwards is a contributor for the EyeWire project, a game developed to help Sebastian Seung's lab at MIT map the brain. Eyewire has grown to over
150,000 players from 140+ countries.
It often seems that man is on an unending journey to escape death. From the immortality potions of
ancient China to modern scientific endeavors, it may appear as if we are fighting a losing battle. Even
with the current developments in medicine and biology, we have been unable to achieve any sense of
immortality. Though the life expectancy of people living in developed countries has risen dramatically in
the last two hundred years, none have yet eluded death. But imagine for a moment a process for
prolonging the human life expectancy from age 76 to eternity. Would this be possible? The answer may
lie in the field of cryonics. Cryonics is dedicated to freezing the human body to extremely low
temperatures with the hopes that they may one day be revived by some future technology. What
interests me in particular is not the cryopreservation of the body, but of the brain. Not surprisingly, the
preservation of the brain is known as neuropreservation. After reading Connectome, I asked myself if
merely preserving the brain would similarly preserve the connectome— the theoretical home of
memories, consciousness and personality. Like many before me, I am enthralled with the possibility of
achieving immortality, though I still have some reservations about achieving this through
neuropreservation. The technologies to freeze the brain have existed for decades, but how to revive the
brain from the inevitable damage that results from freezing it for long periods of time has never been
achieved. The world’s leading cryopreservation company, Alcor, may have an answer. Alcor was
founded in 1977, and since then has become the foremost authority on cryonics. The company is
currently researching new methods of freezing the bodies of clients so that they may withstand eternity,
but many of their scientists believe full-body storage is not necessary. Like Dr. Seung, these researchers
believe that the brain is the home of human identity. This mindset has created a new division of
cryonics: “The Neuropreservation Option.” This has become the most popular option since its inception,
as many clients have come to believe that the preservation of the brain preserves the personality and
the soul. Not surprisingly, it is also much more affordable to preserve a brain than an entire body. Since
the cryopreservation process lasts long after the legal death of the clients, they are required to pay the
full amount in advance. Alcor currently charges $200,000 for a full-body preservation while
neuropreservation costs less than half that; still a pricey $80,000. But is the cost worth it? If the human
brain can be revived, there are still many unknown factors. How will the brain be repaired? What body
will the brain occupy? Will the society that greets you even want you back? As for repairing the brain,
one solution has already been proposed. Nanotechnology has grown substantially in recent years, and
many proponents of neuropreservation argue that nanobots, minuscule robots designed for a specific
function, may provide a solution. Nanobots are not yet advanced enough to be able to repair such
delicate brain tissue. However, advancements in the future may eventually lead to just that. While many
forms of nanotechnology are currently in the research and development phase, progress in the field will
likely allow for more advanced procedures. Scientists at Harvard and MIT have been able to specifically
target cells with nanobots filled with chemotherapy chemicals. This drastically reduced the damage
done by the chemotherapy chemicals typically caused by imprecise delivery methods. In spite of these
seemingly miraculous possibilities, I often wonder if there is any point. In Connectome, Dr. Seung posed
the question: Does cryogenically preserving a brain similarly preserve the connectome? If not, then it
may be futile to maintain frozen brains. Assuming his hypothesis that the neural connections are where
consciousness resides, damage to the connectome would result in an irreversible loss of personality. But
if the connectome is preserved by the extreme cold, neuropreservation could one day be a viable
means of achieving an undetermined life span. Even if the connectome endures, another problem
exists. The brain may physically house identity, but it has little purpose without some body to aid its
function. Researchers have proposed several solutions to this problem. Recently, media sources have
advertised cloning as a method for giving the brain a body. There is no fear of rejection because the cells
and DNA of the clone would be identical to the brain in storage, but this raises many ethical questions:
Suppose a client walks into Alcor asking for a neuropreservation. He pays the fee, lives for several more
years and then dies. His brain is sent to an Alcor facility where it is frozen. Over a century from now, his
brain is revitalized using nanobots then placed inside a clone that was created specifically for that
purpose. His brain is successfully implanted and he lives a successful life until he dies and the process
begins again. I have several problems with this proposal. What would happen to the clone that was
raised simply for the purpose of becoming a vessel? The line between what legal rights a clone would
have compared to a “normal” human is far too blurry to become practical across the world. Indeed,
members of Alcor claimed that cloning is a “crude” method of providing the brain with a body and that a
“more elegant means” must be achieved. It may not seem elegant, but far less crude than the cloning
option is growing an entire body around the rejuvenated brain. Much like a zygote rapidly multiplies into
an infant, a body could be grown around the brain. Imagine using bioengineered catalysts and reactions
to facilitate the growth of an entire body around the actual brain. A person could be “born” much like a
child, the only difference being that an adult body grows instead. The brain—and presumably the
identity—of the patient would already be preserved, so a spine would have to be created around the
brain, forming the intricate neural connections that make life possible. In spite of what many
researchers at Alcor may think, the path to immortality may not be so complex (from a biological
perspective). In Connectome, Dr. Seung also proposed the idea of mapping connectomes, uploading that
information to a microprocessor and using that information to achieve digital immortality . Many of the
technologies to achieve this extraordinary feat are already in place . Microchips are relatively cheap
and are expected to be able to process the amount of information a human brain contains by the year
2025. This could mean that mapping connectomes with increasingly advanced imaging techniques could
automate the process, leaving it entirely up to computers. Connectomes could eventually be mapped
with unprecedented speed, meaning that any human could have his identity recorded on a microchip
when they die, then have it transferred to a computer program prolonging his life indefinitely. While
many of the ideas I proposed are only in their beginning stages, the technologies involved are advancing
rapidly. Perhaps in the future scientists will look back at us, wondering why we ever believed it was
impossible to live forever, just as we question those pessimists who believed that a journey to the moon
would never be within our grasp. Dr. Ralph Merkle, inventor of public key cryptography, once said
“Cryonics is an experiment. So far the control group isn’t doing that well.” He knew as well as any other
that no one has a chance to escape death without trying, and optimism is necessary to keep the spirit of
such dreams alive. One day, not too far off, we may scoff at death, knowing that we are not bound to
our current biological forms. Whether it be through microchips, cryonics, or some distant technology of
the future, we may all hope to walk the path of immortality, carefully treading in a realm once thought
to be reserved for the gods alone.
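The card's claim that microchips will be able to "process the amount of information a human brain contains by the year 2025" rests on an order-of-magnitude estimate of connectome size. As a hedged back-of-envelope sketch, the Python below works that estimate from commonly assumed figures (roughly 10^11 neurons, 10^4 synapses per neuron, a few bytes per synapse); none of these numbers appear in the card itself.

```python
# Back-of-envelope estimate of raw connectome storage, under assumed parameters.
neurons = 1e11             # assumption: ~100 billion neurons
synapses_per_neuron = 1e4  # assumption: ~10,000 synapses per neuron
bytes_per_synapse = 4      # assumption: connection target + weight, a few bytes

total_synapses = neurons * synapses_per_neuron
raw_bytes = total_synapses * bytes_per_synapse

print(f"Estimated synapses: {total_synapses:.1e}")
print(f"Raw storage estimate: {raw_bytes / 1e15:.0f} petabytes")
```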
Immortality extends life's quantitative potential to infinity; brain uploading independently elevates its intrinsic value to unknown heights
Nick Bostrom 2003 “Transhumanism FAQ”
http://www.transhumanism.org/index.php/WTA/faq21/63/ Nick Bostrom is a Swedish philosopher at
St. Cross College, University of Oxford known for his work on existential risk, the anthropic principle,
human enhancement ethics, the reversal test, and consequentialism. He holds a PhD from the London
School of Economics (2000). He is the founding director of both The Future of Humanity Institute and
the Oxford Martin Programme on the Impacts of Future Technology as part of the Oxford Martin School
at Oxford University.
Uploading (sometimes called “downloading”, “mind uploading” or “brain reconstruction”) is the process
of transferring an intellect from a biological brain to a computer. One way of doing this might be by first
scanning the synaptic structure of a particular brain and then implementing the same computations in
an electronic medium. A brain scan of sufficient resolution could be produced by disassembling the
brain atom for atom by means of nanotechnology. Other approaches, such as analyzing pieces of the
brain slice by slice in an electron microscope with automatic image processing have also been proposed.
In addition to mapping the connection pattern among the 100 billion-or-so neurons, the scan would
probably also have to register some of the functional properties of each of the synaptic
interconnections, such as the efficacy of the connection and how stable it is over time (e.g. whether it is
short-term or long-term potentiated). Non-local modulators such as neurotransmitter concentrations
and hormone balances may also need to be represented, although such parameters likely contain much
less data than the neuronal network itself. In addition to a good three-dimensional map of a brain,
uploading will require progress in neuroscience to develop functional models of each species of neuron
(how they map input stimuli to outgoing action potentials, and how their properties change in response
to activity in learning). It will also require a powerful computer to run the upload, and some way for the
upload to interact with the external world or with a virtual reality. (Providing input/output or a virtual
reality for the upload appears easy in comparison to the other challenges.) An alternative hypothetical
uploading method would proceed more gradually: one neuron could be replaced by an implant or by a
simulation in a computer outside of the body. Then another neuron, and so on, until eventually the
whole cortex has been replaced and the person’s thinking is implemented on entirely artificial hardware.
(To do this for the whole brain would almost certainly require nanotechnology.) A distinction is
sometimes made between destructive uploading, in which the original brain is destroyed in the process,
and non-destructive uploading, in which the original brain is preserved intact alongside the uploaded
copy. It is a matter of debate under what conditions personal identity would be preserved in destructive
uploading. Many philosophers who have studied the problem think that at least under some conditions,
an upload of your brain would be you. A widely accepted position is that you survive so long as certain
information patterns are conserved, such as your memories, values, attitudes, and emotional
dispositions, and so long as there is causal continuity so that earlier stages of yourself help determine
later stages of yourself. Views differ on the relative importance of these two criteria, but they can both
be satisfied in the case of uploading. For the continuation of personhood, on this view, it matters little
whether you are implemented on a silicon chip inside a computer or in that gray, cheesy lump inside
your skull, assuming both implementations are conscious. Tricky cases arise, however, if we imagine
that several similar copies are made of your uploaded mind. Which one of them is you? Are they all you,
or are none of them you? Who owns your property? Who is married to your spouse? Philosophical,
legal, and ethical challenges abound. Maybe these will become hotly debated political issues later in this
century. A common misunderstanding about uploads is that they would necessarily be “disembodied”
and that this would mean that their experiences would be impoverished. Uploading according to this
view would be the ultimate escapism, one that only neurotic body-loathers could possibly feel tempted
by. But an upload’s experience could in principle be identical to that of a biological human. An upload
could have a virtual (simulated) body giving the same sensations and the same possibilities for
interaction as a non-simulated body. With advanced virtual reality, uploads could enjoy food and drink,
and upload sex could be as gloriously messy as one could wish. And uploads wouldn’t have to be
confined to virtual reality: they could interact with people on the outside and even rent robot bodies in
order to work in or explore physical reality. Personal inclinations regarding uploading differ. Many
transhumanists have a pragmatic attitude: whether they would like to upload or not depends on the
precise conditions in which they would live as uploads and what the alternatives are. (Some
transhumanists may also doubt whether uploading will be possible.) Advantages of being an upload
would include: Uploads would not be subject to biological senescence. Back-up copies of uploads could
be created regularly so that you could be re-booted if something bad happened. (Thus your lifespan
would potentially be as long as the universe’s.) You could potentially live much more economically as an
upload since you wouldn’t need physical food, housing, transportation, etc. If you were running on a
fast computer, you would think faster than in a biological implementation. For instance, if you were running on a computer a thousand times more powerful than a human brain, then you would think a thousand times faster (and the external world would appear to you as if it were slowed down by a
factor of a thousand). You would thus get to experience more subjective time, and live more, during any
given day. You could travel at the speed of light as an information pattern, which could be convenient in
a future age of large-scale space settlements. Radical cognitive enhancements would likely be easier to
implement in an upload than in an organic brain.
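Bostrom's thousand-fold speedup point is straightforward arithmetic; the short sketch below makes the subjective-time claim explicit. The speedup factor is the card's own hypothetical, and the rest is trivially derived.

```python
# Subjective time experienced by an upload running faster than biological real time.
speedup = 1_000                  # card's hypothetical: hardware 1,000x a biological brain
wall_clock_hours_per_day = 24

subjective_hours = wall_clock_hours_per_day * speedup
subjective_years_per_day = subjective_hours / (24 * 365)

print(f"Subjective hours per wall-clock day: {subjective_hours:,}")
print(f"Roughly {subjective_years_per_day:.1f} subjective years per calendar day")
```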
Biostatic time travel solves grief, death anxiety, and philosophical inquiry
Charles Tandy 2009 "Entropy and Immortality" Journal of Futures Studies, August 2009, 14(1): 39-50
http://www.jfs.tku.edu.tw/wp-content/uploads/2014/01/141-A03.pdf He is a Senior Faculty Research
Fellow in Bioethics
at Fooyin University's Research Center for Medical Humanities in Taiwan
Perfection of future-directed time travel in the form of suspended-animation (biostasis) seems feasible
in the 21st century.13 I believe it even seems feasible to eventually offer it freely to all who want it.
Jared Diamond has pointed out that: " If most of the world's 6 billion people today were in cryogenic
storage and neither eating, breathing, nor metabolizing, that large population would cause no
environmental problems. "14 This might allow them to travel to an improved world in which they
would be immortal. Since aging and all other diseases would have been conquered, they might not have
to use time travel again unless they had an accident requiring future medical technology. But the onto-resurrection imperative demands more than immortality for those currently alive. In extraterrestrial space we can experiment (e.g. via Einsteinian or Gödelian past-directed time travel-viewing) with immortality for all persons no longer alive. Seg-communities (Self-sufficient Extra-terrestrial Green-habitats, or O'Neill communities – e.g., see O'Neill, 2000) can assist us with our ordinary and terrestrial
problems as well as assist us in completion of the onto-resurrection project. Indeed, in Al Gore's
account of the global warming of our water planet, his parable of the frog is a central metaphor.
Because the frog in the pot of water experiences only a gradual warming, the frog does not jump out. I
add: Jumping off the water planet is now historically imperative. Indeed, it seems unwise to put all of
our eggs (futures) into one basket (biosphere). I close with these words from Jacques Choron: "Only
pleasant and personal immortality provides what still appears to many as the only effective defense
against...death. But it is able to accomplish much more. It appeases the sorrow following the death of a
loved one by opening up the possibility of a joyful reunion...It satisfies the sense of justice outraged by
the premature deaths of people of great promise and talent, because only this kind of immortality
offers the hope of fulfillment in another life. Finally, it offers an answer to the question of the ultimate
meaning of life, particularly when death prompts the agonizing query [of Tolstoy], 'What is the purpose
of this strife and struggle if, in the end, I shall disappear like a soap bubble?'" Above it was shown that
mental-reality and all-reality are dimensions of reality which are not altogether reducible to any strictly
physical-scientific paradigm. A more believable (general-ontological) paradigm was presented. Within
this framework, the issue of personal immortality was considered. It was concluded that the
immortality project, as a physical-scientific common-task to resurrect all dead persons , is ethically
imperative. The imperative includes as first steps the development of suspended-animation ,
superfast-rocketry, and seg-communities.
Restricting an individual's right to life extension is akin to manslaughter
Zoltan Istvan 01/31/2014 "When Does Hindering Life Extension Science Become a Crime?"
http://www.psychologytoday.com/blog/the-transhumanist-philosopher/201401/when-does-hindering-life-extension-science-become-crime Zoltan Istvan is a bestselling author and graduate of Columbia
University
Every human being has both a minimum and a maximum amount of life hours left to live. If you add
together the possible maximum life hours of every living person on the planet, you arrive at a special
number: the optimum amount of time for our species to evolve, find happiness, and become the most
that it can be. Many reasonable people feel we should attempt to achieve this maximum number of
life hours for humankind. After all, very few people actually wish to prematurely die or wish for their
fellow humans' premature deaths. In a free and functioning democratic society, it's the duty of our
leaders and government to implement laws and social strategies to maximize these life hours that we
want to safeguard. Regardless of ideological, political, religious, or cultural beliefs, we expect our leaders
and government to protect our lives and ensure the maximum length of our lifespans. Any other
behavior cuts short the time human beings have left to live. Anything else becomes a crime of
prematurely ending human lives. Anything else fits the common legal term we have for that type of
reprehensible behavior: criminal manslaughter.
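Istvan's aggregate "life hours" number is never actually computed in the card. As an illustration of the shape of that calculation, the sketch below uses assumed inputs (a rough world population and an assumed average remaining lifespan), not figures from the evidence.

```python
# Order-of-magnitude estimate of aggregate remaining life hours, under assumed inputs.
population = 7.2e9           # assumption: rough world population at time of writing
avg_remaining_years = 40     # assumption: crude average remaining lifespan
hours_per_year = 24 * 365

aggregate_life_hours = population * avg_remaining_years * hours_per_year
print(f"Aggregate remaining life hours (rough): {aggregate_life_hours:.2e}")
```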
Prefer expert consensus on cryonics
Gregory Benford et al. 2005 "Scientists’ Open Letter on Cryonics"
http://www.evidencebasedcryonics.org/scientists-open-letter-on-cryonics/, Benford has a Ph.D. in Physics from UC San Diego. He is also Professor of Physics at the University of California, Irvine, and 62 other Ph.D. signatories
To whom it may concern, Cryonics is a legitimate science-based endeavor that seeks to preserve
human beings, especially the human brain, by the best technology available. Future technologies for
resuscitation can be envisioned that involve molecular repair by nanomedicine, highly advanced
computation, detailed control of cell growth, and tissue regeneration. With a view toward these
developments, there is a credible possibility that cryonics performed under the best conditions
achievable today can preserve sufficient neurological information to permit eventual restoration of a
person to full health. The rights of people who choose cryonics are important, and should be respected. Sincerely (63 Signatories)
Transhumanism Adv
Pre-mortem cryopreservation is inevitably going to be a question of PAS; drawing the legal 'death for life' distinction now will set a standard that endorses the discipline
Ryan Sullivan 2011 "Pre-Mortem Cryopreservation: Recognizing a Patient's Right to Die in Order to Live"
http://www.quinnipiac.edu/prebuilt/pdf/SchoolLaw/HealthLawJournalLibrary/04_14QuinnipiacHealthLJ49%282010-2011%29.pdf Sullivan holds a J.D. from the University of Nebraska; he received his master's from California University of Pennsylvania and his bachelor's from Colorado State University.
A brief look at the massive compilation of scientific, biological and medical advancements of the last
two centuries demonstrates that human ingenuity is limitless. Given enough time, it seems anything is
possible, even the prospect of immortality . The science of low-temperature preservation has already
become mainstream - used today to preserve blood, organs, and even human embryos. Recent
developments in cryobiology and nanotechnology have converted cryonicists' once-abstract faith in
future science into a tangible, achievable aspiration. Although the technology required for successful
human reanimation may still be many years away, the right to pre-mortem cryopreservation should
be made available now, so that those future advancements may be reached. The terminally-ill
patient's interest in achieving cryonic preservation before ailments destroy all hope of reanimation
is legitimate and substantial. When balancing the interests of the individual against the countervailing
interests of the state, the court should consider the particular circumstances surrounding the patient's
request for assistance in achieving clinical death. The right to pre-mortem cryopreservation should be
distinguished from the right to assisted suicide. The terminally-ill cryonicist fervently seeks to extend his
life - there is no suicidal intent. Thus, the state's interests in preserving life and preventing suicide are
not offended. Further, the ethical integrity of the medical profession will not be tarnished by allowing
medical professionals to assist terminally-ill patients in protecting their only chance for survival, no
matter how remote this chance may appear to be. The physician assisting in pre-mortem preservation
should be treated no differently than the brain surgeon who clinically suspends the life of his patient in
order to save his life.236 That doctor knows it is his patient's only chance of survival; the same is true for
the terminally-ill brain cancer patient seeking pre-mortem cryopreservation. Additionally, the concerns
of abuse and manipulation of the elderly that some courts have asserted when denying the right to
assisted suicide are not present here. Rather, the patient is a competent individual, fully aware that his
only option for future survival is immediate preservation. For the above reasons, a patient suffering
from a degenerative brain disease should be granted a constitutional right to assistance in achieving
pre-mortem cryogenic preservation. Denying this right ensures either a prolonged, agonizing death
with absolutely no hope of future life, or a cruder, unassisted form of suicide. No state interest is served
by either of these outcomes. If the state's interest in preserving life is truly compelling, states should
support patients who seek assistance in realizing their only conceivable chance of future life.
Our advocacy establishes a personhood contingency standard for the reanimated that transcends normative conceptions of identity; this facilitates a consciousness shift towards collective transhumanism
James J. Hughes 2001 "The Future of Death: Cryonics and the Telos of Liberal Individualism"
http://www.transhumanist.com/volume6/death.htm Hughes holds a doctorate in sociology from the
University of Chicago, where he served as the assistant director of research for the MacLean Center for
Clinical Medical Ethics.
The current definitions of death, worked out twenty years ago to address the technology of the
respirator, are already falling apart. Some are suggesting we dispense with “death” as a unitary marker
of human status, while others are pushing for the recognition of a neocortical standard. The twenty first
century will begin to see a shift toward consciousness and personhood-centered ethics as a means of
dealing not only with brain death, but also with extra-uterine feti, intelligent chimeras, human-machine
cyborgs, and the other new forms of life that we will create with technology. The struggle between
anthropocentrists and biofundamentalists, on the one hand, and transhumanists on the other, will
be fierce . Each proposal for a means of extending human capabilities beyond our “natural” and “Godgiven” limitations, or blurring the boundaries of humanness, will be fought politically and in the courts.
But in the end, because of increasing secularization, the tangible advantages of the new technologies,
and the internal logic of Enlightenment values, I believe we will begin to develop a bioethics that accords
meaning and rights to gradations of self-awareness, regardless of platform. This transformation is
unlikely to cause the cryonically suspended to be automatically reclassified as living however. For
pragmatic reasons, and due to the uncertainty of information loss, the cryonically frozen are likely to
remain dead until proven living. They will be in the status of the soldier missing in action, who has been
thought dead, his wife remarried, his estate settled, who is suddenly rescued by some future nano-Rambo. Once there has been tangible proof that the prisoners are still in their camp, there will be a re-evaluation of the status of the frozen. Getting frozen will then come to be seen as a plausible alternative
to death, rather than a bizarre way to preserve a corpse. By this point, however, few people will
presumably need to make use of this option. Since this change in the public perception of the status of
the frozen is many decades off, and the frozen will be seen as “dead” in the meantime, cryonics
organizations should focus more attention on collaborating with choice-in-dying organizations. Most proposed assisted suicide statutes would not allow cryonic suspension as a method. But with secular
trends that support further liberalization, and the growing organization of the majority in support of
assisted suicide, it seems likely that the coming decades will see laws that allow cryonicists to choose
suspension as a part of their “suicide” method. The suggested shift toward a personhood standard for
social policy would dramatically affect the reanimated. A personhood standard would open the
possibility that the legal identity of a reanimated person would be contingent on their recovery of some
threshold of their prior memory and personality. Advance directives of the suspended should address
the question of whether they are interested in repairing and reanimating their brain, even if nanoprobes
or other diagnostic methods suggest that the resulting person will not be them, but some new person.
Finally, I have touched on the truly unpredictable, the equivalent of a bioethical, moral and legal
Singularity: the fundamental problematizing of the self. Once technology has fully teased out the
constituent processes and structures of memory, cognition and personality, and given us control over
them; once we are able to share or sell our skills, personality traits and memories; once some individuals
begin to abandon individuality for new forms of collective identity; then the edifice of Western ethical
thought since the Enlightenment will be in terminal crisis. The political and ethical trends that are
predictable now, as the Enlightenment works towards its telos, will become unpredictable. As
transhumanists work to complete the project of the Enlightenment, the shift to a consciousness-based
standard of law and ethics, we must also prepare political values and social ethics for the era beyond
the discrete, autonomous individual.
Evolving notions of personhood and death via legal reform catalyzes transhumanist
progress through the state
Martine Rothblatt, J.D., Ph.D. 2006 "Forms of Transhuman Persons and the Importance of Prior
Resolution of Relevant Law" Volume 1, Issue 1, 1st Quarter
http://www.terasemjournals.org/PCJournal/PC0101/rothblatt_02e.html Rothblatt started the satellite
vehicle tracking and satellite radio industries and is the Chairman of United Therapeutics, a
biotechnology company. She is also the founder of Terasem Movement, Inc.
Since the time of Darwin’s contemporaries, many people have assumed that evolution always went in
an upward path of increasing complexity. This idea persists even though Darwin himself was not of that
assumption. Most evolutionary biologists emphasize that evolution occurs as much sideways as anything
else. So when we consider other versions of humans that may not be more advanced intellectually or
physically, would they also be transhuman? What about artificial intelligence that is not patterned on
human thoughts? Peter Voss explores Artificial General Intelligence and how it may not be patterned on
human thoughts in his article, "AGI" in this issue. These exceptions illustrate that the term "transhuman"
is an evolving term, which is actually a good thing. It ties in with the theme of this article, which is a
comparison between the law of outer space and the law of transhuman persons, because outer space
itself has never been a well-defined concept. Outer space has been a continuously evolving concept.
When we talk about the Law of Transhuman Persons, that gives rise to some questions about how we
define persons. The United States Code defines a person as a human or organization with legal rights
and duties. This gives rise to several questions, such as the following: Are transhumanized US citizens
still citizens? If there is no renunciation or death, are you still a citizen even if you have chosen bit by bit
to replace yourself, or to just change your attitudes and become transhumanized as an individual
physically or attitudinally? What about a revived person? How about somebody who has experienced
legal death, even perhaps heart death, but not information-theory death? In other words, their brain
is vitrified or cryonicized as within an organization such as ALCOR, and subsequently becomes
revived, and is then living, autonomous and conscious. Is that individual a citizen or not? We also
need to ask whether non-citizens can be organized as a trust or other business entity. The Terasem
Movement decided to organize the Colloquia on the Law of Transhuman Persons because we were
inspired by the ongoing Colloquia on the Law of Outer Space. In 1958, a group of about thirty
technologists and lawyers gathered together to hold the first Colloquia on the Law of Outer Space. This
happened at the very dawning of the space age. This was the era of the Khrushchev-Nixon kitchen
debates over such seeming trivialities as which political and economic system would produce a better
washing machine. It was the time of forced desegregation in Little Rock and the first launches of the US
and Soviet satellites. Image 2 depicts this era. If anyone asks whether we are starting too early to think
about transhuman law, I refer them to the environment in which the first Colloquia on the Law of Outer
Space[1] met. At that time, no animal had even been to orbit. It was just twelve years after Arthur C.
Clarke had published his first article proposing that a satellite in geostationary orbit would be able to
broadcast continuously over a portion of the earth's surface. No one had ever thought of that before. He
was the first to publish the idea of a wireless world. In his article, he included a picture of a little person
inside the satellite, because they could not yet conceive that electronics technology would be
sophisticated enough to handle the switching of calls in an unmanned communication satellite. The
colloquia met twenty years before any spacecraft had caused any earthly damage (the first space object
to crash to earth occurred in 1978), so it met well before any real legal issue arose from occupying outer
space. Similarly, it may be twenty years into the future before the first artificial intelligence agent causes
damage. Nonetheless, one would be hard-pressed to say that we are starting too soon with a Colloquia
on the Law of Transhuman Persons. Image 3 shows a comparison of where we were with outer space
technology and where we are with transhuman technology. In each category, we are at comparable
point today in transhuman technology to where outer space technology was in 1958. Raymond Kurzweil
provided the analysis for Image 4. In it, he shows that we are within twenty years from the point in time
when computers will have human-level intelligence. Image 5 is also by Kurzweil and makes the same
point; that is, because of the accelerating rate of technology in general, miniaturization in size, speed of
processing, and advances in medical technology, we will have even some of the more aggressive
concepts of transhuman technology - such as transhuman persons walking around, curious about things
- within twenty years. What did the experts conclude about space law in 1958? First, they came to the
conclusion that the age-old concept of national sovereignty over air space had to give way to the
technological reality of orbital over-flight. Up until the time of the space age, it was thought that a
country’s sovereignty went from the core of the earth in a cone out to the cosmos. You did not have the
right to fly a balloon or a plane over another country’s space without their permission. Yet when Sputnik
orbited the world, the Russians didn’t ask for anybody’s permission. Thus it became clear that it would
be ludicrous to ask for permission for orbital over-flight. Technological advancement therefore
abolished a fundamental principle of international law and national sovereignty. The colloquia also
concluded that a designated entity had to be legally responsible for every object launched into outer
space. They realized that these objects could cause damage and if nobody was responsible and there
was no rule of law, conflict and possibly even war might result. So how do they fare? Image 6 contains
pictures of two of the founding members of the Colloquia on the Law of Outer Space - Andrew Haley
from Washington D.C. and Stephen Gorove from the University of Mississippi. Nine years after they
began, they had an international treaty that banned sovereignty over space. Six years later, an
international treaty on liability caused by space objects was adopted worldwide. These treaties were
based on the findings and developments that came out of each yearly meeting of the colloquia. Each
year, the colloquia would develop and draft treaties, and papers would be presented on the pros and
cons of different propositions. Last year, in 2005, the Colloquia on the Law of Outer Space held its 47th
meeting. It has never missed a year since 1958. Thus, the Colloquia on the Law of Outer Space is
certainly a great role model for those of us working on the Law of Transhuman Persons. What might we
conclude analogous to our legal forbearers? Perhaps transhumanist technology renders age-old
concepts of citizenship and death as obsolete as the age-old legal concept of national sovereignty. We
will have to come up with new concepts to transcend death or citizenship because of our own "Sputnik-izing" of technology in our own time. And perhaps we will agree that responsibility for transhuman
persons needs to be regularized in some fashion so that newly created individuals have a train of
responsibility whether to themselves or the non-transhuman people who created them. A possible
analytic framework for a transhuman person law is laid out in Image 7. We may need to evolve to an
information theory definition of death instead of heart death or brain death, which have been the
prevailing definitions. If an individual’s mind information is still organized, we have to ask if they are
really dead under our concept of information theory death. We then must question whether that entity
is conscious. Consciousness is a complex subject. My favorite definition of consciousness is borrowed
from Justice Potter Steward’s definition of pornography - that he can’t define it, but he knows it when
he sees it. When he said he knew it when he saw it, he said finally that we will have to revert to
community standards of what pornography is to a particular community. Perhaps we will need
community standards with regard to whether or not an entity is conscious. Finally, if an entity is not
dead and they are conscious, what type of legal rights do they have? Does the Equal Protection clause of
the Constitution apply so that they have the same rights as people who have been biologically born in
the United States? We have a number of years to explore these decisions. We certainly don’t have to
solve them at the first colloquia. But if we could accomplish what the first Colloquia on the Law of Outer
Space did - create an agenda of legal issues to be addressed - we will be on a good track. Finally, if we do
agree that transhuman individuals should be granted transhuman citizenship, it would certainly be a
huge leap to grant citizenship based on an individual’s desire for citizenship, human rights, and
organization of mind information rather than based on a genome or a phenotype.
Nanotech is inevitable – transhumanism allows safe stewardship that prevents grey
goo
Treder and Phoenix 3 [PUBLISHED JANUARY 2003 — REVISED DECEMBER 2003, “Safe Utilization of
Advanced Nanotechnology”, Chris Phoenix and Mike Treder, Mike Treder, Executive Director of CRN, BS
Biology, University of Washington, Research Fellow with the Institute for Ethics and Emerging
Technologies, a consultant to the Millennium Project of the American Council for the United Nations
University and to the Future Technologies Advisory Group, serves on the Nanotech Briefs Editorial
Advisory Board, is a member of the New York Academy of Sciences and a member of the World Future
Society. AND Chris Phoenix, CRN’s Director of Research, has studied nanotechnology for more than 15
years. BS, Symbolic Systems, MS, Computer Science, Stanford University]
Many words have been written about the dangers of advanced nanotechnology. Most of the
threatening scenarios involve tiny manufacturing systems that run amok, or are used to create
destructive products. A manufacturing infrastructure built around a centrally controlled, relatively large,
self-contained manufacturing system would avoid these problems. A controlled nanofactory would pose
no inherent danger, and it could be deployed and used widely. Cheap, clean, convenient, on-site
manufacturing would be possible without the risks associated with uncontrolled nanotech fabrication
or excessive regulation. Control of the products could be administered by a central authority;
intellectual property rights could be respected. In addition, restricted design software could allow
unrestricted innovation while limiting the capabilities of the final products. The proposed solution
appears to preserve the benefits of advanced nanotechnology while minimizing the most serious risks.
Advanced Nanotechnology And Its Risks As early as 1959, Richard Feynman proposed building devices
with each atom precisely placed1. In 1986, Eric Drexler published an influential book, Engines of
Creation2, in which he described some of the benefits and risks of such a capability. If molecules and
devices can be manufactured by joining individual atoms under computer control, it will be possible to
build structures out of diamond, 100 times as strong as steel; to build computers smaller than a
bacterium; and to build assemblers and mini-factories of various sizes, capable of making complex
products and even of duplicating themselves. Drexler's subsequent book, Nanosystems3, substantiated
these remarkable claims, and added still more. A self-contained tabletop factory could produce its
duplicate in one hour. Devices with moving parts could be incredibly efficient. Molecular manufacturing
operations could be carried out with failure rates less than one in a quadrillion. A computer would
require a miniscule fraction of a watt and one trillion of them could fit into a cubic centimeter.
Nanotechnology-built fractal plumbing would be able to cool the resulting 10,000 watts of waste heat. It
seems clear that if advanced nanotechnology is ever developed, its products will be incredibly powerful.
As soon as molecular manufacturing was proposed, risks associated with it began to be identified.
Engines of Creation2 described one hazard now considered unlikely, but still possible: grey goo. A small
nanomachine capable of replication could in theory copy itself too many times4. If it were capable of
surviving outdoors, and of using biomass as raw material, it could severely damage the environment5.
Others have analyzed the likelihood of an unstable arms race6, and many have suggested economic
upheaval resulting from the widespread use of free manufacturing7. Some have even suggested that the
entire basis of the economy would change, and money would become obsolete8. Sufficiently powerful
products would allow malevolent people, either hostile governments or angry individuals, to wreak
havoc. Destructive nanomachines could do immense damage to unprotected people and objects. If the
wrong people gained the ability to manufacture any desired product, they could rule the world, or cause
massive destruction in the attempt9. Certain products, such as vast surveillance networks, powerful
aerospace weapons, and microscopic antipersonnel devices, provide special cause for concern. Grey goo
is relevant here as well: an effective means of sabotage would be to release a hard-to-detect robot that
continued to manufacture copies of itself by destroying its surroundings. Clearly, the unrestricted
availability of advanced nanotechnology poses grave risks, which may well outweigh the benefits of
clean, cheap, convenient, self-contained manufacturing. As analyzed in Forward to the Future:
Nanotechnology and Regulatory Policy10, some restriction is likely to be necessary. However, as was
also pointed out in that study, an excess of restriction will enable the same problems by increasing the
incentive for covert development of advanced nanotechnology. That paper considered regulation on a
one-dimensional spectrum, from full relinquishment to complete lack of restriction. As will be shown
below, a two-dimensional understanding of the problem—taking into account both control of nanotech
manufacturing capability and control of its products—allows targeted restrictions to be applied,
minimizing the most serious risks while preserving the potential benefits.
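The Nanosystems figures quoted in the card (a trillion computers per cubic centimeter, each drawing a minuscule fraction of a watt, about 10,000 watts of waste heat) admit a quick consistency check. The sketch below back-calculates the implied per-computer draw from the card's own numbers; it is illustrative arithmetic, not a sourced specification.

```python
# Consistency check on the Nanosystems figures quoted in the card.
computers_per_cm3 = 1e12   # card: one trillion computers per cubic centimeter
waste_heat_watts = 1e4     # card: ~10,000 watts of waste heat to be cooled

watts_per_computer = waste_heat_watts / computers_per_cm3
print(f"Implied draw per nanocomputer: {watts_per_computer:.0e} W "
      f"({watts_per_computer * 1e9:.0f} nW)")
```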
Grey goo development causes extinction
Robert Freitas, April 2000, “Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy
Recommendations,” Foresight Institute, Senior Research Fellow at the Institute for Molecular
Manufacturing, J.D. from Santa Clara University, authoring the multi-volume text Nanomedicine, the
first book-length technical discussion of the potential medical applications of hypothetical molecular
nanotechnology and medical nanorobotics, 2009 recipient of the Feynman Prize in Nanotechnology for
Theory
Perhaps the earliest-recognized and best-known danger of molecular nanotechnology is the risk that
self-replicating nanorobots capable of functioning autonomously in the natural environment could
quickly convert that natural environment (e.g., "biomass") into replicas of themselves (e.g., "nanomass")
on a global basis, a scenario usually referred to as the "gray goo problem" but perhaps more properly
termed "global ecophagy." As Drexler first warned in Engines of Creation [2]: "Plants" with "leaves" no
more efficient than today's solar cells could out-compete real plants, crowding the biosphere with an
inedible foliage. Tough omnivorous "bacteria" could out-compete real bacteria: They could spread like
blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous
replicators could easily be too tough, small, and rapidly spreading to stop - at least if we make no
preparation. We have trouble enough controlling viruses and fruit flies. Among the cognoscenti of
nanotechnology, this threat has become known as the "gray goo problem." Though masses of
uncontrolled replicators need not be gray or gooey, the term "gray goo" emphasizes that replicators
able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in
an evolutionary sense, but this need not make them valuable. The gray goo threat makes one thing
perfectly clear: We cannot afford certain kinds of accidents with replicating assemblers. Gray goo
would surely be a depressing ending to our human adventure on Earth, far worse than mere fire or
ice, and one that could stem from a simple laboratory accident. Lederberg [3] notes that the microbial
world is evolving at a fast pace, and suggests that our survival may depend upon embracing a "more
microbial point of view." The emergence of new infectious agents such as HIV and Ebola demonstrates
that we have as yet little knowledge of how natural or technological disruptions to the environment
might trigger mutations in known organisms or unknown extant organisms [81], producing a limited
form of "green goo" [92].
Extinction in 72 hours
Mark Pesce, BS Candidate at MIT, October, 1999, “Thinking Small,” FEED Magazine,
http://hyperreal.org/~mpesce/ThinkingSmall.html
The nanoassembler is the Holy Grail of nanotechnology; once a perfected nanoassembler is available,
almost anything becomes possible – which is both the greatest hope and biggest fear of the
nanotechnology community. Sixty years ago, John Von Neumann – who, along with Alan Turing founded
the field of computer science – surmised that it would someday be possible to create machines that
could copy themselves, a sort of auto-duplication which could lead from a single instance to a whole
society of perfect copies. Although such a Von Neumann machine is relatively simple in theory, such a
device has never been made – because it’s far easier, at the macroscopic scale, to build a copy of a
machine than it is to get the machine to copy itself. At the molecular level, this balance is reversed; it’s
far easier to get a nanomachine to copy itself than it is to create another one from scratch. This is an
enormous boon – once you have a single nanoassembler you can make as many as you might need – but
it also means that a nanoassembler is the perfect plague. If – either intentionally or through accident
– a nanoassembler were released into the environment, with only the instruction to be fruitful and
multiply, the entire surface of the planet – plants, animals and even rocks - would be reduced to a
“gray goo” of such nanites in little more than 72 hours. This “gray goo problem”, well known in
nanotechnology acts as a check against the unbounded optimism which permeates scientific
developments in atomic-scale devices. Drexler believes the gray goo problem mostly imaginary, but
does admit the possibility of a “gray dust” scenario, in which replicating nanites “smother” the Earth in a
blanket of sub-microscopic forms. In either scenario, the outcome is much the same. And here we
encounter a technological danger unprecedented in history: If we had stupidly blown ourselves to
kingdom come in a nuclear apocalypse, at least the cockroaches would have survived. But in a gray
goo scenario, nothing – not even the bacteria deep underneath the ground – would be untouched.
Everything would become one thing: a monoculture of nanites.
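The 72-hour figure is an exponential-replication claim. As a hedged illustration of what it implies, the sketch below solves for the doubling time such a scenario would require, using an assumed replicator mass and an assumed global biomass, neither of which appears in the card.

```python
import math

# What doubling time does a 72-hour global ecophagy scenario implicitly require?
# All physical inputs here are assumptions for illustration, not figures from the card.
nanite_mass_kg = 1e-15     # assumption: picogram-scale replicator (1e-15 kg)
target_biomass_kg = 1e15   # assumption: order of magnitude of Earth's biomass
hours_available = 72       # card's claim

doublings_needed = math.log2(target_biomass_kg / nanite_mass_kg)
doubling_time_hours = hours_available / doublings_needed

print(f"Doublings needed: {doublings_needed:.0f}")
print(f"Required doubling time: {doubling_time_hours * 60:.0f} minutes")
```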
Transhumanism solves the human condition
Nick Bostrom 2009 “IN DEFENSE OF POSTHUMAN DIGNITY"
http://www.psy.vanderbilt.edu/courses/hon182/Posthuman_dignity_Bostrom.pdf Nick Bostrom is a
Swedish philosopher at St. Cross College, University of Oxford known for his work on existential risk, the
anthropic principle, human enhancement ethics, the reversal test, and consequentialism. He holds a
PhD from the London School of Economics (2000). He is the founding director of both The Future of
Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology as part of
the Oxford Martin School at Oxford University.
The prospect of posthumanity is feared for at least two reasons. One is that the state of being
posthuman might in itself be degrading, so that by becoming posthuman we might be harming
ourselves. Another is that posthumans might pose a threat to “ordinary” humans. (I shall set aside a
third possible reason, that the development of posthumans might offend some supernatural being.) The
most prominent bioethicist to focus on the first fear is Leon Kass: Most of the given bestowals of nature
have their given species-specified natures: they are each and all of a given sort. Cockroaches and
humans are equally bestowed but differently natured. To turn a man into a cockroach—as we don’t
need Kafka to show us—would be dehumanizing. To try to turn a man into more than a man might be so
as well. We need more than generalized appreciation for nature’s gifts. We need a particular regard and
respect for the special gift that is our own given nature.5 Transhumanists counter that nature’s gifts
are sometimes poisoned and should not always be accepted. Cancer, malaria, dementia, aging,
starvation, unnecessary suffering, cognitive shortcomings are all among the presents that we wisely
refuse. Our own species-specified natures are a rich source of much of the thoroughly unrespectable
and unacceptable—susceptibility for disease, murder, rape, genocide, cheating, torture, racism. The
horrors of nature in general and of our own nature in particular are so well documented6 that it is
astonishing that somebody as distinguished as Leon Kass should still in this day and age be tempted to
rely on the natural as a guide to what is desirable or normatively right. We should be grateful that our
ancestors were not swept away by the Kassian sentiment, or we would still be picking lice off each
other’s backs. Rather than deferring to the natural order, transhumanists maintain that we can
legitimately reform ourselves and our natures in accordance with humane values and personal
aspirations.
Meta-analysis of behavioral studies indicates that evolutionary tendencies lie at the root of
all human violence
Mohammed Tadesse 2006, “The Fundamental Causes of Armed Conflict in Human History:
Reinterpretation of Available Sources,” Organization for Social Science Research in Eastern and Southern
Africa, Ph.D, Harayama University
Through a long process of cultural development, human beings are able to score remarkable
achievements in their life. However, people are still unable to avoid conflicts of violent /armed
character, which are destructive in their nature. Archaeological findings, anthropological interpretations
and historical records indicate that people have been engaged in armed conflicts since the prehistoric
period. Naturally, the following questions may ensue: What is the nature of this phenomenon? What are
the roots and responsible causes of waging limitless destructive wars without interruption? Why are
people not in a position to overcome conflicts of armed nature for the last time? Although it seems too
ambitious, the paper tries to deal with this crucial problem, which indiscriminately affects all. In all
periods of human history, armed conflict has been an important issue of intellectual debate. Great
thinkers, politicians, historians, theologians, military theoreticians, and behavioural scientists have
exerted maximum efforts to examine and explain the nature of the problem from different perspectives.
However, their findings are diversified and influenced by different factors. Some of the conclusions
made by these experts have also led their audiences to a muddle. Therefore, it is essential to re-examine
the problem for three major reasons: i) Curiosity to learn about the nature and causes of the problem;
ii) Misdiagnosis of the nature, sources and/or causes of armed conflict by experts and non-experts; and
iii) Unwillingness on the part of the world to learn from its tragic history. The study tries to analyse the
following questions: Is violent / armed conflict an eternal phenomenon that cannot be controlled or a
social phenomenon that can be controlled? What are the fundamental causes of armed conflicts in the
history of humankind? Hence, attempt is made to: a) Re-examine different approaches and theories of
scholars in explaining the nature and course of armed conflict; b) Reinterpret the nature of armed
conflict in human history, whether it is an innate genetic characteristic of human beings, a social
construct or determined and moulded by both; and c) Enrich the existing knowledge on the matter and
probably provide some valuable conceptual explanations to the problem. The problem is mainly
conceptual in nature, which dictates the method of collecting and analysing the data. Thus, the paper
uses a body of concepts from behavioural sciences to apply a thematic approach and scientific methods
and techniques, which enable to look for evidences, describe the nature and causes of the problem, and
formulate broad statements. The paper uses secondary sources of multidisciplinary character (findings
of biology, psychology, anthropology, archaeology, relevant historical and other social science
theoretical books, thesis, articles, religious books, etc.). Based on the available materials, the researcher
has reviewed and classified different views of scholars regarding the nature of aggressive behaviour in
general and armed conflict in particular. Finally, the data is analysed using a descriptive method of
study. The findings are as follows: 1. On the nature of armed conflicts: i) the evolutionary
development of human intelligence is the primary responsible factor (under conditions) for the origin
of aggressiveness in human behaviour, which gradually planted the culture of war in the history of
humankind. ii) The present state of human warrior culture is inevitable and a continuous process of
evolutionary development and it remains part of human life for a long period. 2. On the causes of
armed conflict: Conflicts of violent/armed character are not products of a single factor. Conflicts result
from the denial or ignoring or suppression of human biological as well as socio-psychological
(ontological) needs. Just for the sake of simplicity, the paper classifies the responsible motives, needs, or
causes of armed conflict into fundamental and specific causes: i) The fundamental causes, which are
common for all violent conflicts, are grouped into primary and secondary sources. a) Under primary
source of fundamental character come: - Human nature - Socio-psychological needs - Economic factors
b) Under secondary source of fundamental character come: - Politics and - Culture (the presence of
warrior tradition) ii) Specific causes. Each war that had taken place in different periods of human history
has its own specific causes of functional character. The specific causes of certain wars may not be the
responsible causes for the other and/or all spoils of wars. In one way or the other, specific causes also
belong to the fundamental causes. Let us see some of the conflicting events of historical character that
can be marked as specific causes, which were used to: - Adopt strangers (assimilation); - Enslave
others; - Enlarge territory; - Colonize; - Achieve unification; - Establish sphere of influence; - Settle
border conflict; - Separate from the main historical nation-state; - Achieve irredentism, etc. The
following initiatives can be taken as possible options to maintain relative security before the outbreak of
armed conflict, and if not, to minimize the destruction: 1. Human beings, by their nature of evolutionary
development, do not possess the ability to avoid conflicts forever and to maintain peaceful life for the
last time. But the findings of this research confirm the possibility of either delaying the development of
the responsible factors for the origin of armed conflict and/ or minimizing its all round destruction. This
is viable only if the concerned bodies are able to diagnose the sources of armed conflict and take all
preventive measures, which also include maintaining reasonable force of defence and balance of power
in their respective areas. Hence, there should not be any magnanimity to disarm the nation unilaterally;
2. Although the paper needs further investigation, it can be used for: - Enriching the theoretical basis
and help others to study related topics of specific character - Differentiating the "rational" from the
"accidental" causes of violent conflicts
No risk of a turn because technology to destroy the world already exists- means
there’s only a chance that transhumanism solves extinction by eliminating the drive
for violence
Mark Walker 2009 “Ship of Fools: Why Transhumanism is the Best Bet to Prevent the Extinction of
Civilization,” The Global Spiral, Feb 5,
http://www.metanexus.net/magazine/tabid/68/id/10682/Default.aspx Walker is an assistant professor
at New Mexico State University and holds the Richard L. Hedden Chair of Advanced Philosophical Studies
This line of thinking is further reinforced when we consider that there is a limit to the downside of
creating posthumans, at least relatively speaking. That is, one of the traditional concerns about
increasing knowledge is that it seems to always imply an associated risk for greater destructive capacity.
One way this point is made is in terms of ‘killing capacity’: muskets are a more powerful technology than
a bow and arrow, and tanks more powerful than muskets, and atomic bombs even more destructive
than tanks. The knowledge that made possible these technical advancements brought a concomitant
increase in capacity for evil. Interestingly, we have almost hit the wall in our capacity for evil: once you
have civilization destroying weapons there is not much worse you can do. There is a point in which the
one-upmanship for evil comes to an end—when everyone is dead. If you will forgive the somewhat
graphic analogy, it hardly matters to Kennedy if his head is blown off with a rifle or a cannon. Likewise, if
A has a weapon that can kill every last person there is little difference between that and B’s weapon
which is twice as powerful. Posthumans probably won’t have much more capacity for evil than we have,
or are likely to have shortly. So, at least in terms of how many persons can be killed, posthumans will not
outstrip us in this capacity. This is not to say that there are no new worries with the creation of
posthumans, but the greatest evil, the destruction of civilization, is something which we now, or will
soon, have. In other words, the most significant aspect that we should focus on with contemplating the
creation of posthumans is their upside. They are not likely to distinguish themselves in their capacity
for evil, since we have already pretty much hit the wall on that, but for their capacity for good.
Debating the issues surrounding transhumanism is essential because the coming
technology can and will redefine our very nature
Liz Klimas 2013 "‘Transhumanist Movement’ Is Coming: The Ethical Dilemma Posed by Rapidly
Advancing Technology" http://www.theblaze.com/stories/2013/02/06/transhumanist-movement-iscoming-the-ethical-dilemma-posed-by-rapidly-advancing-technology/ Klimas graduated from Hillsdale
College with a Bachelor of Science; she also has interned for the National Oceanic and Atmospheric
Administration (NOAA) and the Association for Women in Science.
“Technology is fire. If you can control it, it’s great. If it controls you, you’re in trouble.” This was the
theme of the latest episode of the Glenn Beck program on TheBlaze TV. Sure, some technology and
ideas Beck showcased on his show Wednesday night might sound crazy and remind you of a science
fiction flick — Beck himself acknowledges this — but they’re not, he said. Take drones for example.
There was a time when people couldn’t imagine the capabilities of an unmanned aerial vehicle. Now
such drones are being used in strikes against hostile adversaries, which Beck said is good. But just this
week a memo from the Obama administration said U.S. citizens could be subject to drone strikes if they
are a “senior operational” leader of al-Qaeda or “an associated force.” “The president is using drones
right now and nobody is really talking about the ethics of this,” Beck said on the show. “We should have
seen this day coming but we didn’t. At least as a society, we didn’t.” It is the conversation about the
ethics of use of what might now seem like science fiction that Beck says has to happen. “It’s not the app,
it’s not the gun, it’s not the drone, it’s what you do with it,” he said. Part of the ethical issue regarding
the use of technology that Beck has focused upon at length is the transhumanist movement. The
groundwork for merging the human body with machines to the point where the concept of the
Singularity would be reached, an idea strongly supported by futurist Ray Kurzweil, is well on its way.
Beck pointed to the recent breakthrough of 3D-printed human embryonic stem cells that scientists hope
will someday allow for 3D-printed organs. He noted a “million dollar bionic man” named Rex that is
outfitted with technology to hear, speak, move and even has artificial organs. So if humans are fixing
their physical beings to live longer, as they already are today in many respects, how will this affect
society? As an example, Beck called up how soldiers are being mended and returning home physically
fixed to an extent, but their internal scars are not being addressed. The recent shooting of acclaimed
sniper Chris Kyle and his friend Chad Littlefield by a Marine reservist who reportedly had PTSD is an
example. Another question if humans are to live longer is if the Earth will be able to support that. In
another example, Beck highlighted a water purification system called Slingshot by Deka that can make
clean drinking water from any, and we mean, any liquid. If this product were to be made smaller and
more cost-effective, millions of people would be saved from conditions that result from a lack of potable
drinking water. But some have been saying for years that the Earth is reaching a “tipping point” with
regard to its growing population and that more growth would cause “severe impacts” on quality of life.
Beck posed the moral dilemma regarding whether this machine would even get to the people who need
it based on this argument of finite resources and global warming. And what of the human mind?
Kurzweil’s ideas are that humans will not only augment their organs and other physical features with
technology but their minds as well. Technology is on its way for computers to begin reading our minds.
Take the soon-to-be-released MindMeld app. “MindMeld” from Expect Labs is described by San
Francisco-based founders as an “always on Siri,” according to Technology Review. Here’s more
about how the app works: Users can sign up or log in through Facebook and hold free video or voice
calls with up to eight people through the app. If a participant taps on a button in the app during a call,
MindMeld will review the previous 15 to 30 seconds of conversation by relying on Nuance’s voice
recognition technology. It will identify key terms in context—in a discussion to find a sushi restaurant,
for example, or one about a big news story that day—and then search Google News, Facebook, Yelp,
YouTube, and a few other sources for relevant results. Images and links are displayed in a stream for the
person who tapped the button to review. With a finger swipe, he or she can choose to share a result
with others on the call. Furthermore, where will the line between “what is life and what isn’t?” be
drawn, Beck asked. “We’re in trouble, but the future is bright if we go in with open eyes,” he said. “If
we lose the concept of the soul and become the creator at the same time, what does the phrase ‘we’re
all endowed by our creator with certain unalienable rights’ mean?” Have we passed the point of no return?
Yale computer science professor David Gelernter, who was a guest on Wednesday’s show, used paint as
an example to illustrate his point. He said paint has changed over centuries and has improved, but has
the human artist exponentially gotten better? No, because human nature itself hasn’t changed,
Gelernter said. Still, the ethical discussion about technology coming down the pike is important now
nonetheless, because it could reach a point where it might be used to alter human nature itself.
“We have to talk about technology — the good side and the bad side,” Beck continued. “We need to
have the moral and ethical debate.”
2AC
T
W/M: Pre-mortem cryopreservation challenges all of the legal, ethical, and social
issues associated with PAS
Robert W. Pommer, III 1993 "DONALDSON v. VAN DE KAMP: CRYONICS, ASSISTED SUICIDE, AND THE
CHALLENGES OF MEDICAL SCIENCE" Journal of Contemporary Health Law & Policy, 9 J. Contemp. Health
L. & Pol'y 589, Lexis, Pommer is a partner in the Government & Internal Investigations Practice Group in
the Washington, D.C. office of Kirkland & Ellis LLP.
In recent years, advances in medical science have left the legal community with a wide array of social,
ethical, and legal problems previously unimaginable. n1 Historically, legislative and judicial responses to
these advances lagged behind the rapid pace of such developments. n2 The gap between the scientist's
question, "Can we do it?," and the lawyer's question, "Should/may we do it?," is most evident in the
field of cryonics, with its technique of cryonic, or cryogenic, suspension.¶ In cryonic suspension, a legally
dead but biologically viable person is preserved at an extremely low temperature until advances in
medical science make it possible to revive the person and implement an effective cure. n3 The
terminally ill patient who wishes to benefit from such treatment is faced with the dilemma that present
life must be ceased with hope of future recovery. As a result, the process challenges our traditional
notions of death and the prospects of immortality while raising a host of concomitant legal dilemmas.
C/I: The plan legalizes “nearly all” relative to one of the topic areas.
Corpus Juris 1932 Volume 45, p. 579
NEARLY. A term purely relative in its meaning [Cogswell v. Bull, 29 Cal. 320, 325] defined as almost,
within a little [Webster D.]
We don’t underlimit- PAS statutes are notoriously broad, so specific legal reform is
necessary and inevitable- make them prove what abusive PAS affs we justify
Lit checks predictability
Their standard produces 1 topical case- we allow for creativity in case construction
which is better for education.
Transhumanism is a pre-requisite to productive discussions – our understanding of the
world is so limited that we can’t even know what we should want to know without
altering humanity
Nick Bostrom 2005 “Transhumanist Values” http://www.nickbostrom.com/ethics/values.html Nick
Bostrom is a Swedish philosopher at St. Cross College, University of Oxford known for his work on
existential risk, the anthropic principle, human enhancement ethics, the reversal test, and
consequentialism. He holds a PhD from the London School of Economics (2000). He is the founding
director of both The Future of Humanity Institute and the Oxford Martin Programme on the Impacts of
Future Technology as part of the Oxford Martin School at Oxford University.
The conjecture that there are greater values than we can currently fathom does not imply that values
are not defined in terms of our current dispositions. Take, for example, a dispositional theory of value
such as the one described by David Lewis.[5] According to Lewis’s theory, something is a value for you if
and only if you would want to want it if you were perfectly acquainted with it and you were thinking
and deliberating as clearly as possible about it. On this view, there may be values that we do not
currently want, and that we do not even currently want to want, because we may not be perfectly
acquainted with them or because we are not ideal deliberators. Some values pertaining to certain forms
of posthuman existence may well be of this sort; they may be values for us now, and they may be so in
virtue of our current dispositions, and yet we may not be able to fully appreciate them with our current
limited deliberative capacities and our lack of the receptive faculties required for full acquaintance with
them. This point is important because it shows that the transhumanist view that we ought to explore
the realm of posthuman values does not entail that we should forego our current values. The
posthuman values can be our current values, albeit ones that we have not yet clearly comprehended.
Transhumanism does not require us to say that we should favor posthuman beings over human beings,
but that the right way of favoring human beings is by enabling us to realize our ideals better and that
some of our ideals may well be located outside the space of modes of being that are accessible to us
with our current biological constitution.
No guaranteed negative ground for topic areas- the aff gets to choose the area for
debate under “one or more” anyway, so the neg has to adapt their strategy to our
choice. We are not abusing the area.
Arbitrary bright line- they don’t define what “all” of our area means, so their quantitative
standard lacks any specificity.
Default to reasonability- replacing reasonable defs with competing interps is judge
intervention- race to the bottom moves the goalposts- arbitrary interps link turn any
reasons to prefer.
“Nearly all” is vague
Jacinto Zavala 2003 “Teachers Beware! You May Be Liable Under Proposition 227: California Teachers
Association v. State Board of Education” 37 U.S.F. L. Rev. 493, Winter, lexis, Zavala has a JD from the
University of San Francisco.
Every United States citizen has the right to know when he or she is exposed to liability. Unfortunately,
the Ninth Circuit has denied this right to California schoolteachers. The court's refusal to recognize that
teachers have First Amendment rights heralds a dark day for teachers, especially since the United States
Supreme Court explicitly recognized such a right in both Hazelwood and Pickering. Moreover, the terms
"nearly all" and "overwhelmingly" are not words of common understanding. Most people would fail to
arrive at the same number when asked to define in percentages the terms "nearly all" and
"overwhelmingly." The Ninth Circuit, however, did not seem to think so. Most troubling, however, is
how clearly section 320 raises all three concerns that underlie the vagueness doctrine. First, the terms
"nearly all" and "overwhelmingly" back innocent teachers into a corner by not providing them with fair
warning as to exactly what amount of non-English will expose them to liability. Second, when this is
coupled with the fact that school districts have wide latitude in designing programs to meet the
mandate of Proposition 227, it becomes virtually impossible for a teacher to protect herself against
liability. Third, there is a high probability of arbitrary and discriminatory application of the initiative,
because a teacher from a particular district can easily be singled out by a parent who believes that the
law is being violated. In essence, Proposition 227 forces a teacher to curtail her exercise of free speech
by preventing her from speaking to a student in his native language, even if she feels it is in the student's
best interest to do so.
Err aff on topicality- we can’t win on being topical alone, so if there’s doubt, don’t
vote negative.
PAS K
RoB
Role of the ballot is to evaluate effects of the plan- other interps arbitrarily exclude 9
min of aff offense- judge should choose justifications that best test plan desirability- debate dialectic is a sufficient filter for knowledge production and epistemology- prefer
specific warrants over vague buzzwords
Prior questions will never be fully settled—must take action even under conditions of
uncertainty
Molly Cochran 99, Assistant Professor of International Affairs at Georgia Institute for Technology,
“Normative Theory in International Relations”, 1999, pg. 272
To conclude this chapter, while modernist and postmodernist debates continue, while we are still
unsure as to what we can legitimately identify as a feminist ethical/political concern, while we still are
unclear about the relationship between discourse and experience, it is particularly important for
feminists that we proceed with analysis of both the material (institutional and structural) as well as
the discursive. This holds not only for feminists, but for all theorists oriented towards the goal of
extending further moral inclusion in the present social sciences climate of epistemological uncertainty.
Important ethical/political concerns hang in the balance. We cannot afford to wait for the metatheoretical questions to be conclusively answered. Those answers may be unavailable. Nor can we
wait for a credible vision of an alternative institutional order to appear before an emancipatory agenda
can be kicked into gear. Nor do we have before us a chicken and egg question of which comes first:
sorting out the metatheoretical issues or working out which practices contribute to a credible
institutional vision. The two questions can and should be pursued together, and can be via moral
imagination. Imagination can help us think beyond discursive and material conditions which limit us, by
pushing the boundaries of those limitations in thought and examining what yields. In this respect, I
believe international ethics as pragmatic critique can be a useful ally to feminist and normative theorists
generally.
A2 Biopolitics Short
Dickinson, associate professor of history – UC Davis, ‘4
(Edward, Central European History, 37.1)
In short, the continuities between early twentieth-century biopolitical discourse and the practices of
the welfare state in our own time are unmistakable. Both are instances of the “disciplinary society” and
of biopolitical, regulatory, social-engineering modernity, and they share that genealogy with more
authoritarian states, including the National Socialist state, but also fascist Italy, for example. And it is
certainly fruitful to view them from this very broad perspective. But that analysis can easily become
superficial and misleading, because it obfuscates the profoundly different strategic and local dynamics
of power in the two kinds of regimes. Clearly the democratic welfare state is not only formally but also
substantively quite different from totalitarianism. Above all, again, it has nowhere developed the
fateful, radicalizing dynamic that characterized National Socialism (or for that matter Stalinism), the
psychotic logic that leads from economistic population management to mass murder. Again, there is
always the potential for such a discursive regime to generate coercive policies. In those cases in which
the regime of rights does not successfully produce “health,” such a system can —and historically does—
create compulsory programs to enforce it. But again, there are political and policy potentials and
constraints in such a structuring of biopolitics that are very different from those of National Socialist
Germany. Democratic biopolitical regimes require, enable, and incite a degree of self-direction and
participation that is functionally incompatible with authoritarian or totalitarian structures. And this
pursuit of biopolitical ends through a regime of democratic citizenship does appear, historically, to
have imposed increasingly narrow limits on coercive policies, and to have generated a “logic” or
imperative of increasing liberalization. Despite limitations imposed by political context and the slow
pace of discursive change, I think this is the unmistakable message of the really very impressive waves of
legislative and welfare reforms in the 1920s or the 1970s in Germany.90¶ Of course it is not yet clear
whether this is an irreversible dynamic of such systems. Nevertheless, such regimes are characterized
by sufficient degrees of autonomy (and of the potential for its expansion) for sufficient numbers of
people that I think it becomes useful to conceive of them as productive of a strategic configuration of
power relations that might fruitfully be analyzed as a condition of “liberty,” just as much as they are
productive of constraint, oppression, or manipulation. At the very least, totalitarianism cannot be the
sole orientation point for our understanding of biopolitics, the only end point of the logic of social
engineering.¶ This notion is not at all at odds with the core of Foucauldian (and Peukertian) theory.
Democratic welfare states are regimes of power/knowledge no less than early twentieth-century
totalitarian states; these systems are not “opposites,” in the sense that they are two alternative ways
of organizing the same thing. But they are two very different ways of organizing it. The concept
“power” should not be read as a universal stifling night of oppression, manipulation, and entrapment,
in which all political and social orders are grey, are essentially or effectively “the same.” Power is a set
of social relations, in which individuals and groups have varying degrees of autonomy and effective
subjectivity. And discourse is, as Foucault argued, “tactically polyvalent.” Discursive elements (like the
various elements of biopolitics) can be combined in different ways to form parts of quite different
strategies (like totalitarianism or the democratic welfare state); they cannot be assigned to one place
in a structure, but rather circulate. The varying possible constellations of power in modern societies
create “multiple modernities,” modern societies with quite radically differing potentials.91
Posthumanism K Turn
Cryonics paves the way for new formulations of identity- reviving those in suspended
animation would shatter the current spatio-temporal picture of existence
Kim Lacey 2011 "Viva Whenever: Suspended and Expanded Bodies in Time" Journal of Evolution and
Technology - Vol. 22 Issue 1 http://jetpress.org/v22/lacey.pdf Kim Lacey Ph.D. is an Assistant Professor
of English at Saginaw Valley State University.
Although media philosopher Vilém Flusser insists that external memory – and I will extend his
argument to SPEs, too – are solely simulations of bodily functions, performance artist Stelarc views this
argument quite differently (Flusser 1990). As Stelarc acknowledges in “Prosthetics, Robotics and
Remote Existence: Postevolutionary Strategies,” “evolution ends when technology invades the body”
(Stelarc 1991, 591-95). Arguing for the need to begin thinking about our future selves, Stelarc suggests
that we should replace parts of the body as they fail, rather than temporarily repairing the body with
modern medicine. Through his proposed method, the body will eventually become obsolete and
ultimately will be composed of interchangeable and upgrade-able parts. When the memory fails to
perform, one has externalized memory devices; when flaccidity becomes an issue, one can turn to
Viagra®; when monthly menstruation becomes annoying, there’s Lybrel®: The body need no longer be
repaired but simply have parts replaced. Extending life no longer means “existing” but rather being
“operational.” Bodies need not age or deteriorate; they would not run down or even fatigue; they
would stall then start – possessing both the potential for renewal and reactivation. (Stelarc 1991, 593)
When a suspended body is restarted, it functions the same as before it stalled; because the body is
becoming replaceable, one would never lose the power of memory, say, or the ability to have an
erection. With digital and pharmaceutical suspension, these corporeal capabilities are forever
repeatable. Richard Doyle reminds us that, “The cryonics patient is promised a self that will persist
even through the sudden avalanche of identity called ‘awakening.’ I am still I. […] If identity is a set of
becomings, it is only in becoming-frozen that becoming itself is frozen.” If we view cryonics as an action
to replace death, we can look to digital and pharmaceutical suspension to provide a similar guarantee
(Doyle 2003, 66). Just as “cryonics is a promise,” (65), the body in suspension, too, risks the possibility
of never being reanimated. After all, the return to the suspended patient is the pivotal moment for
suspension, digital and bodily. We must revive the dead in order to move forward.
Redefining notions of death and existence by way of transhumanism uniquely
constitutes a posthuman ontology- casting old dualisms aside for a de-centered yet
technologically progressive framework overcomes biological restrictions on agency,
imagination, and solidarity
Francesca Ferrando 2013 "Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New
Materialisms: Differences and Relations" http://www.bu.edu/paideia/existenz/volumes/Vol.82Ferrando.html Francesca is a posthuman philosopher and feminist theorist. She has earned a European
PhD in Philosophy from the University of Roma Tre (Italy), and M.A. in Gender Studies from the
University of Utrecht (Holland). She is currently a Visiting Scholar at Columbia University.
There are significant differences within the posthuman scenario, each leading to a specialized forum of
discourse. If modern rationality, progress and free will are at the core of the transhumanist debate, a
radical critique of these same presuppositions is the kernel of antihumanism,32 a philosophical position
which shares with posthumanism its roots in postmodernity, but differs in other aspects.33 The
deconstruction of the notion of the human is central to antihumanism: this is one of its main points in
common with posthumanism. However, a major distinction between the two movements is already
embedded in their morphologies, specifically in their denotation of "post-" and "anti-." Antihumanism
fully acknowledges the consequences of the "death of Man," as already asserted by some poststructuralist theorists, in particular by Michel Foucault.34 In contrast, posthumanism does not rely on
any symbolic death: such an assumption would be based on the dualism dead/alive, while any strict
form of dualism has been already challenged by posthumanism, in its post-dualistic process-ontological
perspective. Posthumanism, after all, is aware of the fact that hierarchical humanistic presumptions
cannot be easily dismissed or erased. In this respect, it is more in tune with Derrida's deconstructive
approach rather than with Foucault's death of Man.35 To complete a presentation of the posthuman
scenario, metahumanism is a recent approach closely related to a Deleuzian legacy;36 it emphasizes the
body as a locus for amorphic re-significations, extended in kinetic relations as a body-network. It should
not be confused with metahumanity, a term which appeared in the 1980s within comics narratives and
role-playing games,37 referring to superheros and mutants, and it has since been employed specifically
in the context of cultural studies. Lastly, the notion of posthumanities has been welcomed in academia
to emphasize an internal shift (from the humanities to the posthumanities), extending the study of the
human condition to the posthuman; furthermore, it may also refer to future generations of beings
evolutionarily related to the human species. Conclusion The posthuman discourse is an ongoing process
of different standpoints and movements, which has flourished as a result of the contemporary attempt
to redefine the human condition. Posthumanism, transhumanism, new materialisms, antihumanism,
metahumanism, metahumanity and posthumanities offer significant ways to rethink possible existential
outcomes. This essay clarifies some of the differences between these movements, and emphasizes the
similarities and discrepancies between transhumanism and posthumanism, two areas of reflection that
are often confused with each other. Transhumanism offers a very rich debate on the impact of
technological and scientific developments in the evolution of the human species; and still, it holds a
humanistic and human-centric perspective which weakens its standpoint: it is a "Humanity Plus"
movement, whose aim is to "elevate the human condition."38 On the contrary, speciesism has become
an integral part of the posthumanist approach, formulated on a post-anthropocentric and posthumanistic episteme based on decentralized and non-hierarchical modes. Although posthumanism
investigates the realms of science and technology, it does not recognize them as its main axes of
reflection, nor does it limit itself to their technical endeavors, but it expands its reflection to the
technologies of existence. Posthumanism (here understood as critical, cultural, and philosophical
posthumanism, as well as new materialisms) seems appropriate to investigate the geological time of the
anthropocene. As the anthropocene marks the extent of the impact of human activities on a planetary
level, the posthuman focuses on de-centering the human from the primary focus of the discourse. In
tune with antihumanism, posthumanism stresses the urgency for humans to become aware of
pertaining to an ecosystem which, when damaged, negatively affects the human condition as well. In
such a framework, the human is not approached as an autonomous agent, but is located within an
extensive system of relations. Humans are perceived as material nodes of becoming; such becomings
operate as technologies of existence. The way humans inhabit this planet, what they eat, how they
behave, what relations they entertain, creates the network of who and what they are: it is not a
disembodied network, but (also) a material one, whose agency exceeds the political, social, and
biological human realms, as new materialist thinkers sharply point out. In this expanded horizon, it
becomes clear that any types of essentialism, reductionism, or intrinsic biases are limiting factors in
approaching such multidimensional networks. Posthumanism keeps a critical and deconstructive
standpoint informed by the acknowledgement of the past, while setting a comprehensive and
generative perspective to sustain and nurture alternatives for the present and for the futures. Within
the current philosophical environment, posthumanism offers a unique balance between agency,
memory, and imagination, aiming to achieve harmonic legacies in the evolving ecology of
interconnected existence.
Wilderson K
Anti-blackness isn’t inherent or ontological—it’s historically contingent and hence able to
change
Hudson, professor of political studies – University of the Witwatersrand, ‘13
(Peter, “The state and the colonial unconscious,” Social Dynamics: A journal of African studies
Vol. 39, Issue 2, p. 263-277)
¶ Thus the self-same/other distinction is necessary for the possibility of identity itself. There always has
to exist an outside, which is also inside, to the extent it is designated as the impossibility from which the
possibility of the existence of the subject derives its rule (Badiou 2009, 220). But although the excluded
place which isn’t excluded insofar as it is necessary for the very possibility of inclusion and identity may
be universal (may be considered “ontological”), its content (what fills it) – as well as the mode of this
filling and its reproduction – are contingent. In other words, the meaning of the signifier of exclusion is
not determined once and for all: the place of the place of exclusion, of death is itself over-determined,
i.e. the very framework for deciding the other and the same, exclusion and inclusion, is nowhere
engraved in ontological stone but is political and never terminally settled. Put differently, the
“curvature of intersubjective space” (Critchley 2007, 61) and thus, the specific modes of the “othering”
of “otherness” are nowhere decided in advance (as a certain ontological fatalism might have it) (see
Wilderson 2008). The social does not have to be divided into white and black, and the meaning of
these signifiers is never necessary – because they are signifiers. To be sure, colonialism institutes an
ontological division, in that whites exist in a way barred to blacks – who are not. But this ontological
relation is really on the side of the ontic – that is, of all contingently constructed identities, rather than
the ontology of the social which refers to the ultimate unfixity, the indeterminacy or lack of the social. In
this sense, then, the white man doesn’t exist, the black man doesn’t exist (Fanon 1968, 165); and
neither does the colonial symbolic itself, including its most intimate structuring relations – division is
constitutive of the social, not the colonial division. “Whiteness” may well be very deeply sedimented in
modernity itself, but respect for the “ontological difference” (see Heidegger 1962, 26; Watts 2011, 279)
shows up its ontological status as ontic. It may be so deeply sedimented that it becomes difficult even to
identify the very possibility of the separation of whiteness from the very possibility of order, but from
this it does not follow that the “void” of “black being” functions as the ultimate substance, the
transcendental signified on which all possible forms of sociality are said to rest. What gets lost here,
then, is the specificity of colonialism, of its constitutive axis, its “ontological” differential. A crucial
feature of the colonial symbolic is that the real is not screened off by the imaginary in the way it is under
capitalism. At the place of the colonised, the symbolic and the imaginary give way because non-identity
(the real of the social) is immediately inscribed in the “lived experience” (vécu) of the colonised subject.
The colonised is “traversing the fantasy” (Zizek 2006a, 40–60) all the time; the void of the verb “to be” is
the very content of his interpellation. The colonised is, in other words, the subject of anxiety for whom
the symbolic and the imaginary never work, who is left stranded by his very interpellation.4 “Fixed” into
“non-fixity,” he is eternally suspended between “element” and “moment”5 – he is where the colonial
symbolic falters in the production of meaning and is thus the point of entry of the real into the texture
itself of colonialism. Be this as it may, whiteness and blackness are (sustained by) determinate and
contingent practices of signification; the “structuring relation” of colonialism thus itself comprises a
knot of significations which, no matter how tight, can always be undone. Anti-colonial – i.e., anti-
“white” – modes of struggle are not (just) “psychic” 6 but involve the “reactivation” (or “desedimentation”)7 of colonial objectivity itself. No matter how sedimented (or global), colonial
objectivity is not ontologically immune to antagonism. Differentiality, as Zizek insists (see Zizek 2012,
chapter 11, 771 n48), immanently entails antagonism in that differentiality both makes possible the
existence of any identity whatsoever and at the same time – because it is the presence of one object in
another – undermines any identity ever being (fully) itself. Each element in a differential relation is the
condition of possibility and the condition of impossibility of each other. It is this dimension of
antagonism that the Master Signifier covers over transforming its outside (Other) into an element of
itself, reducing it to a condition of its possibility.8 All symbolisation produces an ineradicable excess over
itself, something it can’t totalise or make sense of, where its production of meaning falters. This is its
internal limit point, its real:9 an errant “object” that has no place of its own, isn’t recognised in the
categories of the system but is produced by it – its “part of no part” or “object small a.”10 Correlative to
this object “a” is the subject “stricto sensu” – i.e., as the empty subject of the signifier without an
identity that pins it down.11 That is the subject of antagonism in confrontation with the real of the
social, as distinct from “subject” position based on a determinate identity.
Support for the claim that blackness is ontological is founded on psychoanalysis-
psychoanalytic theories are bankrupt because they’re unfalsifiable.
Bellelli, 2006
(Andrea Bellelli, M.D., Graham MacDonald and Philip Catton, editors, Routledge, Critical Appraisals,
“Review - Karl Popper” January 3, Volume 10, Issue 1,
metapsychology.mentalhelp.net/poc/view_doc.php?type=book&id=2963)
Popper's negative epistemology can be used to distinguish scientific theories, which make risky
predictions and can be falsified, from non-scientific or pseudo-scientific theories, which cannot be
falsified. Indeed Popper weighted on his balance several theories, such as Marxism and
psychoanalysis, which pretended to be scientific and found them incapable of any testable prediction
and non-falsifiable. Karl Popper: Critical Appraisals is a collection of eleven essays that evaluate the
most controversial aspects of Popper's philosophy of science and society. It is an excellent book and
all contributors are highly qualified; the intended audience is however almost as qualified as the
authors themselves, and it is expected that the reader is quite familiar with most, if not all, the
writings of Popper. It is neither an introduction to Popper, nor a global analysis of his contributions.
Actually, some of the essays focus on quite specific and problematic aspects of his thought, and his
most important hypotheses are discussed to a lesser extent, as is typical of specialized analyses. Two
crucial and related points of Popper's epistemology are the refusal of induction and the role of
observation and experiment. These are analyzed and criticized in several essays of this book. Popper
had two objections to induction, clearly but not ordinately formulated: we cannot completely trust
observation; and we cannot legitimately generalize from observation. The former objection is not
strictly against induction, and is usually formulated by conventionalist theoreticians; the latter is
Hume's classical argument against induction. Popper's solution is radical: the experiment is not a
reason of the theory, it is only a reason to trust (or not to trust) the theory. Denying the observation a
status in the content of the theory is a bold move, more easily defensible when considering
theoretical physics than anatomy, and it is difficult to believe that it is adopted by many researchers.
If you feel stimulated (rather than bored) by this type of enquiry, then "Karl Popper: critical
appraisals" is your book, and you will find there a thorough analysis of Popper's ambivalent feeling
about experiment and observation, even better than the one you find in Popper's writings. Take the
simplest empirical description you can imagine, something like "I am reading from a computer
screen". You can substantiate your statement with other statements, but this leads to infinite
regression; to stop regression you may either establish non-questionable postulates (this being
conventionalism or dogmatism) or accept as a proof a statement describing your sensorial perception
(this being psychologism). Most scientists are implicitly psychologists: they trust experience and
observation, and Popper concedes that sensory perceptions are in general remarkably accurate, for
this grants the organism a selective evolutionary advantage. However, Popper thinks that hypothesis
and theories cannot be based upon undemonstrable sensory perceptions and thus he only assigns to experience a role in
justifying our belief in a theory: if we see a theoretical prediction fulfilled, then we trust the theory. Popper's path between the precipices of empiricism on the one side and
conventionalism on the other is narrow indeed, and the essays in these critical appraisals are a useful guide. Three essays, by Alan Musgrave, Semiha Akinci and Philip Catton describe
the relationships between Popper's theory and conventionalism on the one hand and with induction on the other hand. I found them very interesting as they clarify some points of
Popper's theory that I had always found quite obscure. Popper opposed the verificationist theory of the logical positivists of his time, who assumed that describing an experience is nonproblematic; he pointed out that between the observed fact and its description there stands the logical barrier of psychologism that introduces in the logically demonstrable structure of
the theory the empirical and non-demonstrable step that uses the hard-wired circuitry of our brain to convert a fact into a description. Refusing psychologism entails the paradoxical
consequence that Popper's theory may appear a refined version of conventionalism. To state it more clearly, we may ask ourselves which proof we would accept of a scientific
hypothesis or prediction. If the proof we demand is an observation or an experiment, then we are positivists and Popper accuses us of psychologism, i.e. of relying upon the poorly
known functioning of our brain for the judgment of consistence between a fact and a statement. If the proof we demand is logical coherence with other parts of the theory, then we are
conventionalists, and Popper accuses us of neglecting the world we try to describe. We may then ask which proof Popper would accept, and his answer is none: a hypothesis can be
falsified but cannot be verified. However, we can provisionally trust our experience as the judge trusts the eyewitness: we weight favorable and contrary empirical evidence and come to
a decision that is neither conventional nor arbitrary. We notice that the problem of psychologism is particularly relevant to Popper for he conceives objectivity as inter-subjectivity (i.e.
an observation is objective if it can be repeated by every subject); Jacques Monod defined objective an observation that could be made by an instrument (i.e. minimizing the subject's
contribution) and confined the problem of psychologism to a less relevant and more controlled position. I may add that Popper in his analysis did not consider some well established
means of controlling psychologism, e.g. blind methods, as employed in medical research. Akinci's conclusion is that Popper's conventionalism is epistemological, i.e. conventions are
made about the proper methods of scientific investigation, not epistemical, i.e. related to the content of scientific theories. Epistemical conventionalism was formalized by the French
epistemologist and mathematician Henry Poincare', but has a long standing tradition in philosophy, even though I cannot believe that it has been espoused by many scientists. I think that
it was best explained by Andreas Osiander in his preface to the Copernicus' De Revolutionibus Orbium Coelestium: Neque enim necesse est eas hypotheses esse veras, imo ne
verisimiles quidem, sed sufficit hoc unum, si calculum observationibus congruentem exhibeant. (Indeed it is not necessary that these hypotheses are true, nor verisimilar, but it is enough
if the calculus we base on them is congruent with the observation.) Osiander was not a conventionalist: he skillfully constructed the argument to protect his friend Copernicus from
possible retaliations by the Inquisition. Copernicus thought (as Osiander) that his hypothesis was true, i.e. that it described the real relationships between the apparent movements of the
stars and the actual movements of the Earth; and Popper no doubt concurs with this view and confines conventionalism to methodology. Philip Catton in his essay criticizes Popper's
view of science as an eminently theoretical enterprise (indeed Popper himself wrote in The Logic of Scientific Discovery that sciences are systems of theories, thus leaving aside
descriptive sciences like anatomy or geography). Data from these sciences can be used to build up theories, as Catton demonstrates, but their intrinsic theoretical content is minimal: he
points out that Newton used this method, that he called deducing from experiments. Catton's essay demonstrates that scientists do not think and behave as Popper; however they neither
behave in ways Popper would forbid, and surely they would concur on Popper main point, i.e. that their hypotheses should be falsifiable, and should be rejected or modified if falsified.
Catton's point is that experiments have not only the negative function of testing the hypothesis, they also positively suggest and shape the hypothesis. Why was Popper so adamant in
denying the positive role of the experiment? Again, the reason is that admitting the experiment positive role would grant some status to induction, Popper's bete noire, as Alan Musgrave
thoroughly discusses in his contribution. Essentially, Popper failed to recognize that Hume's argument against induction only works if we assign absolute rather than probabilistic
validity to induction. Popper dealt at length with probability in The Logic of Scientific Discovery, and distinguished between two meanings of the term in common usage: indeed,
probable may be properly used to mean that an event has measurable chances of happening, as when we say that it is likely that any day of august is warmer than any day of october; or
we may improperly use probable to indicate that we believe that an assertion is true, but we want not to commit ourselves too strongly, as when we say that it is likely that Copernicus
was right and Ptolemy was wrong. Popper strongly opposed the latter use of the term, but not the former; however, he never explicitly admitted the obvious consequence that induction
may be reformulated probabilistically. This mode of reasoning clearly shows up in the writings of the scientists quoted by Catton who creatively formulated deterministic hypothesis
compatible with inductively inferred regularities and probabilities; all of them were perfectly aware that the process is not infallible. Later in his life, Popper turned his attention to social
sciences, and to what he called historicism, the idea that some deterministic rationale exists in social events, scientifically investigable. In Conjectures and Confutations Popper discusses
the links between scientific and social theories in reference to his acquaintance with the heretical psychoanalyst Alfred Adler, and his dislike of Marxist philosophy is expressed in The
Open Society and its Enemies. Popper's position with respect to social sciences is somewhat different from the one he takes for natural sciences. Popper thought that man is free to choose among socially acceptable alternatives and therefore no specific prediction can be
free to choose among socially acceptable alternatives and therefore no specific prediction can be
made on his behavior, even though general regularities in social phenomena may be recognized.
Thus, he fiercely opposed two deterministic hypotheses, Marxism and psychoanalysis; time proved
him right. The essays by Gonzalez, Shearmur, List and Pettit, Macdonald, Ryan, O' Hear and
Waldron discuss Popper's political philosophy and its position in the philosophy of the twentieth
century. Moreover, some of these essays critically analyze the logical relationships between Popper's
philosophy of science and of society. Here the questions are subtler than in the case of the
philosophy of science, and any summary is bound to be incomplete. A crucial difference between
philosophy of science and political philosophy is that the former analyzes hypotheses, the latter
opinions. Opinions can be based on logic, but ultimately they do not compete with each other in the
same way as hypotheses do, and two contrasting opinions may be both (subjectively) true, whereas
two contrasting hypotheses cannot be both (objectively) true. Often we misrepresent our subjective
opinions as objective hypotheses in order to discredit the opinions of our adversaries. Popper tried to
fight these misrepresentations, and probably went a bit too far: indeed he judged Marxism and
psychoanalysis as false scientific hypotheses rather than as plausible but subjective opinions
improperly presented as hypotheses.
Case
A2 State Bad
The nation-state is a malleable tool we can use to advance ethical goals – it’s not a
Platonic entity the effects of which are always pre-determined
Rogers Brubaker 2004 "In the Name of the Nation: Reflections on Nationalism and Patriotism"
Citizenship Studies, Vol. 8, No. 2, www.sailorstraining.eu/admin/download/b28.pdf Brubaker is an
American sociologist, and professor at University of California, Los Angeles.
This, then, is the basic work done by the category ‘nation’ in the context of nationalist movements—
movements to create a polity for a putative nation. In other contexts, the category ‘nation’ is used in a
very different way. It is used not to challenge the existing territorial and political order, but to create a
sense of national unity for a given polity. This is the sort of work that is often called nation-building, of
which we have heard much of late. It is this sort of work that was evoked by the Italian statesman
Massimo D’Azeglio, when he famously said, ‘we have made Italy, now we have to make Italians’. It is this
sort of work that was (and still is) undertaken—with varying but on the whole not particularly
impressive degrees of success—by leaders of post-colonial states, who had won independence, but
whose populations were and remain deeply divided along regional, ethnic, linguistic, and religious lines.
It is this sort of work that the category ‘nation’ could, in principle, be mobilized to do in contemporary
Iraq—to cultivate solidarity and appeal to loyalty in a way that cuts across divisions between Shi’ites and
Sunnis, Kurds and Arabs, North and South.2 In contexts like this, the category ‘nation’ can also be used
in another way, not to appeal to a ‘national’ identity transcending ethnolinguistic, ethnoreligious, or
ethnoregional distinctions, but rather to assert ‘ownership’ of the polity on behalf of a ‘core’
ethnocultural ‘nation’ distinct from the citizenry of the state as a whole, and thereby to define or
redefine the state as the state of and for that core ‘nation’ (Brubaker, 1996, p. 83ff). This is the way
‘nation’ is used, for example, by Hindu nationalists in India, who seek to redefine India as a state
founded on Hindutva or Hinduness, a state of and for the Hindu ethnoreligious ‘nation’ (Van der Veer,
1994). Needless to say, this use of ‘nation’ excludes Muslims from membership of the nation, just as
similar claims to ‘ownership’ of the state in the name of an ethnocultural core nation exclude other
ethnoreligious, ethnolinguistic, or ethnoracial groups in other settings. In the United States and other
relatively settled, longstanding nation-states, ‘nation’ can work in this exclusionary way, as in nativist
movements in America or in the rhetoric of the contemporary European far right ('la France aux Français', 'Deutschland den Deutschen'). Yet it can also work in a very different and fundamentally
inclusive way.3 It can work to mobilize mutual solidarity among members of ‘the nation’, inclusively
defined to include all citizens—and perhaps all long-term residents—of the state. To invoke nationhood,
in this sense, is to attempt to transcend or at least relativize internal differences and distinctions. It is an
attempt to get people to think of themselves— to formulate their identities and their interests—as
members of that nation, rather than as members of some other collectivity. To appeal to the nation can
be a powerful rhetorical resource, though it is not automatically so. Academics in the social sciences and
humanities in the United States are generally skeptical of or even hostile to such invocations of
nationhood. They are often seen as dépassé, parochial, naive, regressive, or even dangerous. For many scholars in the social sciences and humanities, 'nation' is a suspect category. Few American scholars wave flags, and many of us are suspicious of those who do. And often with good reason, since flag-waving has been associated with intolerance, xenophobia, and militarism, with exaggerated national
pride and aggressive foreign policy. Unspeakable horrors—and a wide range of lesser evils—have been
perpetrated in the name of the nation, and not just in the name of ‘ethnic’ nations, but in the name of
putatively ‘civic’ nations as well (Mann, 2004). But this is not sufficient to account for the prevailingly
negative stance towards the nation. Unspeakable horrors, and an equally wide range of lesser evils,
have been committed in the name of many other sorts of imagined communities as well—in the name
of the state, the race, the ethnic group, the class, the party, the faith. In addition to the sense that
nationalism is dangerous, and closely connected to some of the great evils of our time—the sense that,
as John Dunn (1979, p. 55) put it, nationalism is ‘the starkest political shame of the 20th-century’—
there is a much broader suspicion of invocations of nationhood. This derives from the widespread
diagnosis that we live in a post-national age. It comes from the sense that, however well fitted the
category ‘nation’ was to economic, political, and cultural realities in the nineteenth century, it is
increasingly ill-fitted to those realities today. On this account, nation is fundamentally an anachronistic
category, and invocations of nationhood, even if not dangerous, are out of sync with the basic principles
that structure social life today.4 The post-nationalist stance combines an empirical claim, a
methodological critique, and a normative argument. I will say a few words about each in turn. The
empirical claim asserts the declining capacity and diminishing relevance of the nation-state. Buffeted by
the unprecedented circulation of people, goods, messages, images, ideas, and cultural products, the
nation-state is said to have progressively lost its ability to ‘cage’ (Mann, 1993, p. 61), frame, and govern
social, economic, cultural, and political life. It is said to have lost its ability to control its borders, regulate
its economy, shape its culture, address a variety of border-spanning problems, and engage the hearts
and minds of its citizens. I believe this thesis is greatly overstated, and not just because the September
11 attacks have prompted an aggressively resurgent statism.5 Even the European Union, central to a
good deal of writing on post-nationalism, does not represent a linear or unambiguous move ‘beyond the
nation-state’. As Milward (1992) has argued, the initially limited moves toward supranational authority
in Europe worked—and were intended—to restore and strengthen the authority of the nation-state.
And the massive reconfiguration of political space along national lines in Central and Eastern Europe at
the end of the Cold War suggests that far from moving beyond the nation-state, large parts of Europe
were moving back to the nation-state.6 The ‘short twentieth century’ concluded much as it had begun,
with Central and Eastern Europe entering not a post-national but a post-multinational era through the
large-scale nationalization of previously multinational political space. Certainly nationhood remains the
universal formula for legitimating statehood. Can one speak of an ‘unprecedented porosity’ of borders,
as one recent book has put it (Sheffer, 2003, p. 22)? In some respects, perhaps; but in other respects—
especially with regard to the movement of people—social technologies of border control have
continued to develop. One cannot speak of a generalized loss of control by states over their borders; in
fact, during the last century, the opposite trend has prevailed, as states have deployed increasingly
sophisticated technologies of identification, surveillance, and control, from passports and visas through
integrated databases and biometric devices. The world’s poor who seek to better their estate through
international migration face a tighter mesh of state regulation than they did a century ago (Hirst and
Thompson, 1999, pp. 30–1, 267). Is migration today unprecedented in volume and velocity, as is often
asserted? Actually, it is not: on a per capita basis, the overseas flows of a century ago to the United
States were considerably larger than those of recent decades, while global migration flows are today ‘on
balance slightly less intensive’ than those of the later nineteenth and early twentieth century (Held et
al., 1999, p. 326). Do migrants today sustain ties with their countries of origin? Of course they do; but
they managed to do so without e-mail and inexpensive telephone connections a century ago, and it is
not clear—contrary to what theorists of post-nationalism suggest—that the manner in which they do so
today represents a basic transcendence of the nation-state.7 Has a globalizing capitalism reduced the
capacity of the state to regulate the economy? Undoubtedly. Yet in other domains—such as the
regulation of what had previously been considered private behavior—the regulatory grip of the state
has become tighter rather than looser (Mann, 1997, pp. 491–2). The methodological critique is that the
social sciences have long suffered from ‘methodological nationalism’ (Centre for the Study of Global
Governance, 2002; Wimmer and Glick-Schiller, 2002)—the tendency to take the ‘nation-state’ as
equivalent to ‘society’, and to focus on internal structures and processes at the expense of global or
otherwise border-transcending processes and structures. There is obviously a good deal of truth in this
critique, even if it tends to be overstated, and neglects the work that some historians and social
scientists have long been doing on border-spanning flows and networks. But what follows from this
critique? If it serves to encourage the study of social processes organized on multiple levels in addition
to the level of the nation-state, so much the better. But if the methodological critique is coupled— as it
often is—with the empirical claim about the diminishing relevance of the nation-state, and if it serves
therefore to channel attention away from state-level processes and structures, there is a risk that
academic fashion will lead us to neglect what remains, for better or worse, a fundamental level of
organization and fundamental locus of power. The normative critique of the nation-state comes from
two directions. From above, the cosmopolitan argument is that humanity as a whole, not the nation-state, should define the primary horizon of our moral imagination and political engagement (Nussbaum, 1996). From below, multiculturalism and identity politics celebrate group identities and privilege them
over wider, more encompassing affiliations. One can distinguish stronger and weaker versions of the
cosmopolitan argument. The strong cosmopolitan argument is that there is no good reason to privilege
the nation-state as a focus of solidarity, a domain of mutual responsibility, and a locus of citizenship.8
The nation-state is a morally arbitrary community, since membership in it is determined, for the most
part, by the lottery of birth, by morally arbitrary facts of birthplace or parentage. The weaker version of
the cosmopolitan argument is that the boundaries of the nation-state should not set limits to our moral
responsibility and political commitments. It is hard to disagree with this point. No matter how open and
‘joinable’ a nation is—a point to which I will return below—it is always imagined, as Benedict Anderson
(1991) observed, as a limited community. It is intrinsically parochial and irredeemably particular. Even
the most adamant critics of universalism will surely agree that those beyond the boundaries of the
nation-state have some claim, as fellow human beings, on our moral imagination, our political energy,
even perhaps our economic resources.9 The second strand of the normative critique of the nation-state—the multiculturalist critique—itself takes various forms. Some criticize the nation-state for a
homogenizing logic that inexorably suppresses cultural differences. Others claim that most putative
nation-states (including the United States) are not in fact nation-states at all, but multinational states
whose citizens may share a common loyalty to the state, but not a common national identity (Kymlicka,
1995, p. 11). But the main challenge to the nation-state from multiculturalism and identity politics
comes less from specific arguments than from a general disposition to cultivate and celebrate group
identities and loyalties at the expense of state-wide identities and loyalties. In the face of this twofold
cosmopolitan and multiculturalist critique, I would like to sketch a qualified defense of nationalism and
patriotism in the contemporary American context.10 Observers have long noted the Janus-faced
character of nationalism and patriotism, and I am well aware of their dark side. As someone who has
studied nationalism in Eastern Europe, I am perhaps especially aware of that dark side, and I am aware
that nationalism and patriotism have a dark side not only there but here. Yet the prevailing anti-national, post-national, and trans-national stances in the social sciences and humanities risk obscuring
the good reasons—at least in the American context—for cultivating solidarity, mutual responsibility, and
citizenship at the level of the nation-state. Some of those who defend patriotism do so by distinguishing
it from nationalism.11 I do not want to take this tack, for I think that attempts to distinguish good
patriotism from bad nationalism neglect the intrinsic ambivalence and polymorphism of both. Patriotism
and nationalism are not things with fixed natures; they are highly flexible political languages, ways of
framing political arguments by appealing to the patria, the fatherland, the country, the nation. These
terms have somewhat different connotations and resonances, and the political languages of patriotism
and nationalism are therefore not fully overlapping. But they do overlap a great deal, and an enormous
variety of work can be done with both languages. I therefore want to consider them together here. I
want to suggest that patriotism and nationalism can be valuable in four respects. They can help develop
more robust forms of citizenship, provide support for redistributive social policies, foster the integration
of immigrants, and even serve as a check on the development of an aggressively unilateralist foreign
policy. First, nationalism and patriotism can motivate and sustain civic engagement. It is sometimes
argued that liberal democratic states need committed and active citizens, and therefore need patriotism
to generate and motivate such citizens. This argument shares the general weakness of functionalist
arguments about what states or societies allegedly ‘need’; in fact, liberal democratic states seem to be
able to muddle through with largely passive and uncommitted citizenries. But the argument need not be
cast in functionalist form. A committed and engaged citizenry may not be necessary, but that does not
make it any less desirable. And patriotism can help nourish civic engagement. It can help generate
feelings of solidarity and mutual responsibility across the boundaries of identity groups. As Benedict
Anderson (1991, p. 7) put it, the nation is conceived as a ‘deep horizontal comradeship’. Identification
with fellow members of this imagined community can nourish the sense that their problems are on
some level my problems, for which I have a special responsibility.12 Patriotic identification with one’s
country—the feeling that this is my country, and my government—can help ground a sense of
responsibility for, rather than disengagement from, actions taken by the national government. A feeling
of responsibility for such actions does not, of course, imply agreement with them; it may even generate
powerful emotions such as shame, outrage, and anger that underlie and motivate opposition to
government policies. Patriotic commitments are likely to intensify rather than attenuate such emotions.
As Richard Rorty (1994) observed, ‘you can feel shame over your country’s behavior only to the extent
to which you feel it is your country’.13 Patriotic commitments can furnish the energies and passions that
motivate and sustain civic engagement.
A2 Impartiality
Our analysis isn’t a view from nowhere—situated impartiality isn’t neutral or
objective, but it does allow contestation
Lisa J Disch 1993 “MORE TRUTH THAN FACT: Storytelling as Critical Understanding in the Writings of
Hannah Arendt,” Political Theory Vol. 21 No. 4, p. 665-694 Disch is a Professor of Political Science at the
University of Michigan, Ph.D. in Political Science from Rutgers University, B.A. in Political Science from
Kenyon College
Arendt seems to have viewed Thucydides as she did herself, as a political theorist from whom the
question of historical objectivity is an irrelevant methodological debate. The task of the political theorist
is not to report objectively but to tell a story that engages the critical faculties of the audience. Euben
makes a similar claim, crediting Thucydides with "offering a new standard of accuracy" to his readers. He
writes that "however personal or Athenian his work, however much he may have had ties to the
aristocratic class at Athens, there is a sense in which he is absent from his discourse. Or to put it more
accurately, he is trying to sustain conditions within the text that makes discourse outside it possible."87
This is no conventional model of objective reporting, as it consists neither in a bloodlessly neutral
writing style nor in an attempt to avoid selectivity but, rather, in the fact that Thucydides leaves the
reader with the task of interpreting the various conflicts he represents. To Euben and Arendt then, who
are political theorists, Thucydides' work achieves something more important than objectivity: political
impartiality. Political impartiality is not secured by means of detachment from politics but by fostering
public deliberation that depends on the ability "to look upon the same world from one another's
standpoint."88 Arendt credits the practice of political impartiality to the polis, which she idealizes as a
realm of "incessant talk" and plurality, in which "the Greeks discovered that the world we have in
common is usually regarded from an infinite number of different standpoints, to which correspond the
most diverse points of view."89 Thucydides' work fosters political impartiality by an artistic (though not
fictional) creation of plurality by his representation of speeches from the multiple, divergent
perspectives that constitute the public realm. Euben writes that Thucydides gives us "a form of political
knowledge that respects, even recapitulates, the paradoxes and 'perspectivism' of political life."90 This
account of political impartiality, characterized not by abstraction but by the interplay among a plurality
of perspectives, anticipates the conception of impartiality that Arendt will discern in Kant's description
of the "enlarged mentality" in Third Critique. She admires Thucydides because his imaginative history
makes it possible for the reader to think as if engaged in the debates of his time. This section bears out
the claim that there is an "untold story" about storytelling in the discrepancies among the various
statements of method, published and unpublished, that Arendt formulated over the course of writing
Origins. This story documents her "unusual approach" to political theory and historical writing, in the
shift she makes from abstract, neutral reporting to explicitly moral storytelling from the personal
experience of the author. She adopts this approach to demonstrate and teach a kind of critical
understanding that, in Nussbaum's words, "consists in the keen responsiveness of intellect, imagination,
and feeling to the particulars of a situation."91 This early work begins to describe how to make a
judgment from experience, arguing that one proceeds not by applying principles from a transcendent
framework but by considered attention to one's immediate response to an event. It does not yet explain
what makes this contingent judgment critical. The answer to this question lies in her attempt to discern
a political philosophy in Kant's Critique of Judgment. SITUATED IMPARTIALITY In her lectures on Third
Critique, Arendt explains that she is drawn to Kant's conception of taste as a model for political thinking
because she finds in it a formulation of impartiality that accords with plurality. Its subject, she claims, is
"men in the plural, as they really are and live in societies."92 Where practical reason is individual and
abstract, imagining the principle of one's act as a universal rule, Kant defines the impartiality necessary
for aesthetic judgment in terms of intersubjectivity, which he calls "enlarged thought."93 Arendt
creatively appropriates Kant's description of taste as "enlarged thought" to explain how one gets from
experience to criticism: the critical move entails a shift from thinking from a private perspective to
thinking from a public vantage point. Her version of enlarged thought makes a bridge between
storytelling and situated impartial critical understanding. Arendt foreshadows her turn to Kant's Third
Critique as early as the preface to Origins where she uses the term "crystallization." As Seyla Benhabib
argues, this term is an attempt to explain the unconventional structure and organization of the book-the
structure that Arendt explained to Mary Underwood as writing "against" history-by alluding to
Benjamin's "Theses on the Philosophy of History." Benjamin argues that the critical historian who
refuses to write from the perspective of the victor must "brush history against the grain."94 According
to Benhabib, Arendt uses the peculiar language of "elements" and "crystallization" because she, like
Benjamin, wants "to break the chain of narrative continuity, to shatter chronology as the natural
structure of narrative, to stress fragmentariness, historical dead ends, failures and ruptures."9 The
crystallization metaphor is unquestionably an attempt by Arendt to bring Benjamin to mind, but it is also
an allusion to Kant's account of taste. The reference to Kant affirms the claim of Arendt's early writings
that political events are contingent and so cannot be named or known in terms of existing conceptual
categories. In Third Critique, Kant introduces "crystallization" as a metaphor for contingency, which he calls "the form of the purposiveness of an object, so far as this is perceived in it without any representation of a purpose."96 Crystallization describes the formation of objects that come into being not by a gradual, evolutionary process but suddenly and unpredictably "by a shooting together, i.e. by a sudden solidification, not by a gradual transition . . . but all at once by a saltus, which transition is also
called crystallization."97 In describing a kind of being that is contingent but susceptible to critical
evaluation nonetheless, crystallization justifies the possibility of a kind of judgment that is both
spontaneous and principled.98 In calling totalitarianism "the final crystallizing catastrophe" that constitutes its various "elements" into a historical crisis, Arendt makes an analogy between contingent beauty
and unprecedented evil. This analogy turns on the claim that totalitarianism, a phenomenon to which no
abstract categorical framework is adequate, poses a problem of understanding that is similar to that
posed by beauty. Political events, like aesthetic objects, can neither be explained in evolutionary terms
nor judged with reference to an external purpose or principle. Even so, we are bound to discern their
meaning or else to relinquish our freedom by reacting without thinking against forces we do not
understand. Arendt is drawn to Third Critique because she wants to argue that political judgment is not
a kind of practical reason or moral judgment but a kind of taste. Moral judgment, according to Kant, is
"determinant," which means that it functions by subsuming a particular instance under a general rule
that is rationally derived prior to that instance.99 Taste, on the other hand, is reflective. It operates in a
contingent situation, meaning one for which there can be no predetermined principle, so that a thinker
takes her bearings not from the universal but from the particular (p. 15). Leaving technical language
behind, the implication of reflective judgment is that it is primarily concerned with questions of
meaning. Arendt's turn to Third Critique for a model for political judgment is utterly consistent with her
early essays, then, because aesthetic judgment confronts the world from the start as a problem of
understanding. Kant's problem in Third Critique is to account for the possibility of aesthetic judgment by
distinguishing judgments about beauty from idiosyncratic preferences, on one hand, and from
categorical values, on the other. He claims that an expression of taste in the beautiful differs from our
interest in the pleasant, to which we are drawn by the desire for gratification, and from our regard for
the good, which we are compelled to esteem by its objective worth according to the categorical
imperative. Taste is unique in that it is spontaneous but principled. He calls it "a disinterested and free
satisfaction; for no interest, either of sense or of reason, here forces our assent" (p. 44). To account for
the possibility of aesthetic judgment, Kant must explain how an expression of taste can be more than
"groundless and vain fancy," without arguing that it is objectively necessary (p. 191). Kant answers this
problem by proposing that aesthetic judgment is intersubjective. A statement of preference is
subjective, in that when I affirm that something is pleasing I mean that it is pleasing to me; in stating
that something is beautiful, however, I am expressing a preference that I attribute to everyone else.
Aesthetic judgment differs from pure and practical reason in that this claim to intersubjective validity is
not justified with reference to an abstract universal concept of beauty but rests on a purportedly
common sense of pleasure in the beautiful. This common sense is, according to Kant, what makes taste
"strange and irregular" because "it is not an empirical concept, but a feeling of pleasure (consequently
not a concept at all) which, by the judgment of taste, is attributed to everyone" (p. 27). He explains
further that taste speaks "with a universal voice . . . [but] does not postulate the agreement of
everyone.... It only imputes this agreement to everyone, as a case of the rule in respect of which it
expects, not confirmation by concepts, but assent from others" (pp. 50-51). That is, although a judgment
of taste cannot be proved, its validity turns on the presumption that others would assent to it. The
paradox that Kant sustains in defining taste as a judgment that takes its bearings not from
transcendental concepts but from feeling is analogous to Arendt's attempt to define political judgment
as critical understanding that does not withdraw to an abstract vantage point but takes its bearings from
experience. Paul Guyer has noted that Kant's account is deeply ambiguous because Kant proposes to
defend the possibility of taste both on the grounds of intersubjectivity, that a judgment about beauty is
imputed to everyone else, and on the grounds of communicability, that it actually secures the assent of
others in public exchange. Although Kant appears to suggest that intersubjectivity is both necessary
and sufficient to communicability, one could impute a judgment to others without communicating it to
them or defending it to their satisfaction. Guyer claims that intersubjectivity takes precedence over
communicability in Kant's argument, writing that although Kant "is at pains to show that pleasure in the
beautiful may be imputed to others, he is not at equal pains to show how such pleasure may be
conveyed from one who feels it to one who, in particular circumstances, does not."100 What is interesting
about this ambiguity for the purposes of this essay is that Arendt makes a creative appropriation of taste
by suggesting a significantly different ground of validity. Arendt politicizes Kant's concept of taste by
arguing that its validity turns on "publicity."101 Publicity means openness to contestation, which she describes as "the testing that arises from contact with other people's thinking."102 This claim that critical
thinking involves contestation suggests that neither intersubjectivity nor communicability adequately
accounts for the possibility of reflective judgment. In contrast to intersubjectivity, publicity requires that
a judgment come into "contact" with others' perspectives; it cannot simply be imputed to them. But
"contact" and "testing" in no way imply that validity depends on actually securing general assent to
one's own beliefs. On the contrary, given Arendt's claim that the public realm is constituted by a
plurality of divergent perspectives, general assent would be not just an unlikely outcome of public
debate but an undesirable one. Thus Arendt politicizes Kant's "taste" by eschewing its tendency toward
consensus in favor of contestation. Even though "publicity" makes a significant departure from Kant's defense of taste, Arendt attributes it to him nonetheless, claiming that she learned it from his concept
"common sense." Kant argues that aesthetic judgment presupposes common sense, which he defines as
a capacity to practice "enlarged thought." This practice involves "comparing your judgment with the possible rather than the actual judgments of others, and by putting ourselves in the place of any other man, by abstracting from the limitations which contingently attach to our own judgment."103 Thus Kant
argues that one raises one's idiosyncratic preference for an object to a critical judgment by abstracting
from one's own contingent situation to arrive at the standpoint of any observer. Hannah Arendt
appropriates "enlarged thought" from Kant's Third Critique but with a creative departure from the
original that she does not acknowledge. Arendt writes that the general validity of taste is "closely
connected with particulars, with the particular conditions of the standpoints one has to go through in
order to arrive at one's own 'general standpoint.'"104 Where enlarged thinking, as Kant describes it, involves abstracting from the limitations of a contingent situation to think in the place of any other man,"105 Arendt speaks explicitly of a general standpoint that is achieved not by abstraction but by considered attention to particularity.106 Thus enlarged thought, in her terms, is situated rather than abstract. She calls it training "one's imagination to go visiting,"107 which involves evoking or telling
yourself the multiple stories of a situation from the plurality of conflicting perspectives that constitute
it.108 Enlarged thought is Arendt's answer to the question of how one moves from experience to critical
understanding, but it is not the Kantian "enlarged thought" that she has in mind. In her creative
appropriation of Third Critique, Arendt redefines enlarged thought from abstract reasoning to what I call
"situated impartiality." She credits Kant with breaking from the customary assumption that abstraction
is requisite to impartiality, writing that Kantian impartiality "is not the result of some higher standpoint
that would then actually settle [a] dispute by being altogether above the melee"; instead, it "is obtained
by taking the viewpoints of others into account."109 Curiously, Arendt conceals her innovation by failing
to mark the distinction between situated impartial thinking and Kant's "enlarged mentality." Where
enlarged thinking is a consequence of either securing assent to one's judgment or simply imputing it to
others, situated impartial thinking involves taking divergent opinions into account in the process of
making up one's mind and, ultimately, locating one's judgment in relation to those views. Although she
conceals it, Arendt makes a significant break with the universalizing assumptions of Kant's thought. The
departure from Kant's "taste" is even more pronounced, as Arendt argues that it is not the philosopher
but the storyteller who possesses an extraordinary talent for enlarged thinking.110 Arendt describes
storytelling as an art that needs "a certain detachment from the heady, intoxicating business of sheer
living that, perhaps, only the born artist can manage in the midst of living."111 Although this description
comes from her essay on Isak Dinesen, the conceptualization of storytelling on which it relies brings to
mind Walter Benjamin's essay, "The Storyteller." Not only does Benjamin credit storytellers with the
ability to think critically "in the midst of living," but he also implies that storytellers inspire enlarged
thinking in others: "the storyteller takes what he tells from experience-his own or that reported by
others. And he in turn makes it the experience of those who are listening to his tale."112 As Benjamin
describes it, the capacity for situated impartial thinking is not the storyteller's exclusive privilege, and
the storyteller is not the kind of teacher who imparts a lesson to her listeners. Rather, the storyteller's
gift is, in his words, the ability to craft an account that is "free from explanation," thereby teaching the
practice of situated impartial vision.113 A skillful storyteller teaches her readers to see as she does, not
what she does, affording them the "intoxicating" experience of seeing from multiple perspectives but
leaving them with the responsibility to undertake the critical task of interpretation for themselves. This
capacity of storytelling to invite situated impartial thinking can be understood only if the distinctions
among storytelling, testimonial, and illustration are clearly demarcated. A testimonial is self-expressive:
it asserts "this is the way I see the world." It is fully determined by the experience of the speaker and, as
such, can inspire refutation or empathy but not critical engagement as Arendt defines it. In contrast,
illustration is not at all expressive. Its purpose is to give anecdotal "proof" of a theory; consequently, it
is determined not by experience but by the abstract framework it is meant to exemplify. The kind of
story that Arendt and Benjamin have in mind invites the reader to "go visiting," asking "how would the
world look to you if you saw it from this position?" The critical perspective that one achieves by visiting
is neither disinterested, like Kant's taste, nor empathic. Arendt writes that "this process of
representation does not blindly adopt the actual views of those who stand somewhere else, and hence
look upon the world from a different perspective; this is not a question of . . . empathy, as though I tried to be or to feel like something else . . . but of being and thinking in my own identity where I am not."114
Visiting means imagining what the world would look like to me from another position, imagining how I
would look to myself from within a different world, and coming to understand that I might define my
principles differently if I did not stand where I am accustomed to.115 Where visiting promotes
understanding, empathy obstructs it. By empathizing with another, I erase all difference. But when I visit
another place, I experience the disorientation that lets me understand just how different the world
looks from different perspectives. The relationship between storytelling and situated impartiality is
multiple and complex. Storytelling is a means by which one "visits" different perspectives. It is also a
narrative form that lends itself to giving a multiperspectival account of a situation, that, in turn, invites
others to "visit" those perspectives. Relative to abstract argument, testimonial, and illustration, the
advantage of a story is that it can be both ambiguous and meaningful at once. An ambiguous argument,
testimony, or example is less effective for its indeterminacy, because the purpose of such modes of
discourse is to distill the plural meanings of an incident into definitive conclusions. Ambiguity in a story
encourages the permanent contestation and multiple reinterpretation of meanings that make situated
impartiality possible. In Arendt's unfinished lectures on judgment, then, there is an implicit answer to
the question of how thinking from experience can be critical. This answer turns on a creative
appropriation of Kant's enlarged thinking by means of storytelling and situated impartiality. For Arendt,
critical understanding involves telling or hearing multiple stories of an event from the plurality of perspectives that it engages. One purpose of testing one's perspective against the perspectives of
others is to take a stand in full recognition of the complexity and ambiguity of the real situations in
which judgments are made. One further purpose is to hold oneself responsible to argue with and speak
not only to those with whom one agrees but to those with whom one disagrees. This means not simply
acknowledging the inevitable partiality of any individual perspective but insisting that perspectival
differences be raised, contested, and situated in reference to each other. The point is not consensus or
accuracy but plurality and accountability.
Solves Nanotech
Shifting to an information theory of death solves the legal dysfunctions that lead to
cybernetic war
Martine Rothblatt, J.D., Ph.D. 2006 "Forms of Transhuman Persons and the Importance of Prior
Resolution of Relevant Law" Volume 1, Issue 1, 1st Quarter
http://www.terasemjournals.org/PCJournal/PC0101/rothblatt_02e.html Rothblatt started the satellite
vehicle tracking and satellite radio industries and is the Chairman of United Therapeutics, a
biotechnology company. She is also the founder of Terasem Movement, Inc.
If it seems as though making the leap to believe in the possibilities of transhuman persons is too great, remember that in 1958, it was just as big a leap to cast aside the concept of national sovereignty being based from the core of the earth and reaching in a cone out into space and replace it with the idea that national sovereignty ends at some point. Law must evolve with evolving technology. Copernicus'
theory of the earth’s rotation numbered the days of old-school sovereignty. The notion of sovereignty
sweeping out to the cosmos in a fixed cone is rendered irrelevant when we accept that the earth is
rotating on an axis because everybody’s cone would sweep the same sectors of cosmic space. Going all
the way back to Copernicus, the legal artifice of national sovereignty was already becoming illogical. In
the very same way, Turing's theory of machine consciousness has begun to number the days of old-school citizenship. Turing asked, what if you could converse with a machine and you couldn't tell the difference between conversing with a machine and conversing with a person? Is not that machine as conscious as the person? If we don't evolve law with evolving technology, we will face conflicts of
dysfunctional law. The founders of space law did their best to avoid space conflict (between the US
and the Soviet Union in particular) over conflicts of law. Today, we are not at risk for a war with Russia
over transhuman rights, but could there be a war between humans and transhumans, between flesh
and electronic substrate? That’s certainly a common theme of dystopic[1] science fiction plots and it is
something that we can avoid with prior legal development. How might we do in ten, twenty or fifty
years? Image 8 depicts some possibilities. Certainly, the bigger challenge we undertake, the longer it will
take. A shift to an information theory basis of death is not that big of a change. We just recently made a
big leap in the past century from heart death to brain death. So perhaps this is not that big of a leap. It
may take a relatively short period of time. At the other end of the spectrum is unifying artificial
intelligence and citizenship, which might be a pretty big leap for society to take and may take quite a bit
longer. The time to start the dialogue is now.
A2 Manslaughter
Backlash – Censoring certain words transforms politics into a fight over language
rather than the institutions that generate true violence.
Brown 1 [Wendy Brown, professor at UC-Berkeley, 2001 Politics Out of History, p. 35-36]JFS
“Speech codes kill critique,” Henry Louis Gates remarked in a 1993 essay on hate speech. Although
Gates was referring to what happens when hate speech regulations, and the debates about them, usurp
the discursive space in which one might have offered a substantive political response to bigoted
epithets, his point also applies to prohibitions against questioning from within selected political
practices or institutions. But turning political questions into moralistic ones—as speech codes of any
sort do—not only prohibits certain questions and mandates certain genuflections, it also expresses a
profound hostility toward political life insofar as it seeks to preempt argument with a legislative and
enforced truth. And the realization of that patently undemocratic desire can only and always convert
emancipatory aspirations into reactionary ones. Indeed, it insulates those aspirations from questioning
at the very moment that Weberian forces of rationality and bureaucratization are quite likely to be
domesticating them from another direction. Here we greet a persistent political paradox: the
moralistic defense of critical practices, or of any besieged identity, weakens what it strives to fortify
precisely by sequestering those practices from the kind of critical inquiry out of which they were born.
Thus Gates might have said, “Speech codes, born of social critique, kill critique.” And, we might add,
contemporary identity-based institutions, born of social critique, invariably become conservative as they
are forced to essentialize the identity and naturalize the boundaries of what they once grasped as a
contingent effect of historically specific social powers. But moralistic reproaches to certain kinds of
speech or argument kill critique not only by displacing it with arguments about abstract rights versus
identity-bound injuries, but also by configuring political injustice and political righteousness as a
problem of remarks, attitude, and speech rather than as a matter of historical, political-economic, and
cultural formations of power. Rather than offering analytically substantive accounts of the forces of
injustice or injury, they condemn the manifestation of these forces in particular remarks or events.
There is, in the inclination to ban (formally or informally) certain utterances and to mandate others, a
politics of rhetoric and gesture that itself symptomizes despair over effecting change at more
significant levels. As vast quantities of left and liberal attention go to determining what socially
marked individuals say, how they are represented, and how many of each kind appear in certain
institutions or are appointed to various commissions, the sources that generate racism, poverty,
violence against women, and other elements of social injustice remain relatively unarticulated and
unaddressed. We are lost as to how to address those sources; but rather than examine this loss or
disorientation, rather than bear the humiliation of our impotence, we posture as if we were still
fighting the big and good fight in our clamor over words and names. Don't mourn, moralize.