
Michigan-DoPh-Neg-NDT-Round-8

1NC---Round 8---NDT
Off
1NC
우리는 미국놈들이 핵무기를 먼저 사용하는 제안을 반대한다 [We oppose the proposal that the American bastards use nuclear weapons first]
The CP solves the case and is net beneficial---linguistic standardization authorizes
genocide and extinguishes Korean racial identity. Actively celebrating Asian linguistics
is a critical tool in anti-imperial struggle.
Kwak ’20 [Joy and Laurence; January 2020; Authors of a comparative project analyzing linguistic
imperialism in Korea and indigenous Canada; Catalyst McGill, “Language as a Weapon of Imperialism: A
Comparative Case Study Between Canada and Korea,” https://catalystmcgill.com/language-as-a-weapon-of-imperialism-a-comparative-case-study-between-canada-and-korea/]
The Korean Colonial Experience under Japanese Imperialism
History
From 1910 to 1945, Korea underwent colonization by imperialistic Japanese forces and became a Japanese colony. Due to its geographic proximity to Japan, as well as its underlying political potential as an annex to the growing Japanese occupation of East and Southeast Asia, Korea was targeted as an extension of Japan. It underwent thirty-five years of “Japanization”: attempts to make Koreans Japanese, notably through both subtle and obvious acts of linguistic imperialism. Despite the fact that more than a century has passed since the beginning of this colonial relationship, the legacies of linguistic imperialism still continue to create rifts in the economic, political and social ties between the two countries to this day.
The Korean language has a unique history: while the spoken language has been circulated for thousands of years, its written counterpart was
created by King Sejong (세종대왕) in the 15th century. Nonetheless, between 1450 and 1910, written Chinese (한자) and an adapted form of spoken Chinese were popular thanks to the influence of the nobility, who considered them an indicator of elite status and higher education.
It wasn’t until the colonization of Korea in 1910 that Koreans began promoting Kugo (국어) as the Korean language in order to create a unified
identity, as a part of the “rise of nationalism in response to foreign aggression”. The ‘Joseon Eohakhwe’ (Korean Language Society) was a group
of scholars who initiated the nation-wide struggle for the promotion of the Korean language. However, as Japanese imperialist tactics
diversified and intensified in order to eradicate the use of Korean, the Society shifted its focus from the promotion to the preservation of the
Korean language. As the Society faced mounting opposition from the colonial government which made learning Japanese a mandatory part of
the curriculum, the study of Korean became ‘voluntary’ in schools in 1938 per the newly revised Korean Education Statute (which was reestablished by the colonial government).
By 1941, the imperial government had succeeded in removing Korean completely from the curriculum and banning its instruction, thereby
implementing a fully Japanese language education system. Soon after, the Korean Language Society was forcefully disbanded in 1942, with
many of its members becoming incarcerated or killed by the Japanese government.
Process of Linguistic Imperialism
Japan’s imperial government made a crescendoing effort to achieve complete linguistic dominance in Korea in a matter of a few decades. This process entailed the following three strategies: first, the native legislature was modified to accommodate the demands of the colonial government. The continuous revision of the Korean Education Statute, which eventually banned the teaching of Korean in schools by 1943, is one example of the colonial government editing the legislature to fit its own interests. Indoctrination also played a very significant role in Japan’s movement to achieve complete “Japanization”, in that propaganda represented a crucial element of the colonial curriculum’s goal to render Koreans subservient and make them loyal subjects of Japan, in particular of its Emperor Ten’no.
The second procedure was a result of the first: linguistic stratification, or the categorisation, division and distinction between the indigenous language and the colonizer’s. The colonizer’s language was attributed high status and power, thereby disempowering the native language and speakers and establishing an even more unbalanced power dynamic between the colonizer and colonized.
Finally, the third procedure, “linguicism,” consisted of the complete elimination of the local tongue. In order to achieve this linguistic homogeneity, the dominating force employed methods of linguistic oppression such as the “carrot and stick” approach, where the use of the native language invoked punishment from the colonial power while becoming fluent in the colonizer’s language was rewarded with the promise of higher economic status, more employment opportunities, and equal treatment. For Korea, this even included the forceful change of Korean last names to Japanese names of relatively similar meaning.
Ultimately, this process of linguistic surveillance has had serious implications in countless colonized regions around the world, including native language decline, the creation of pidgin or creole languages, as well as language genocide.
So what sparked this frenzied race to colonization? Imperial Japan, having come to the realization that, to be a modern power was to be a
colonial power in the 20th century, began its conquest of colonizing Asia, mimicking similar Western scrambles at that time. Especially in East
and Southeast Asia, it attempted to establish a stronger Asian identity (Japanese Pan-Asianism) and a more significant global presence in the
increasingly Western-dominated world, thus “protecting” the colonized countries from “Westernization”. Therefore, it was necessary for Japan
to make its colonies “Japanized”: a supportive colony which embraced its new identity would prove helpful to the imperial government’s
colonial efforts by providing labor, intellectual support, and economic benefits through the exploitation of its land and peoples.
Historical and Current Implications
Language is often a gleaming indicator of social issues and social change, and is a crucial part of one’s cultural identity. That being said, the implications of establishing linguistic dominance by taking one’s language away and enforcing another in its place can be negative and long-lasting.
To demonstrate this point, Hiroyuki uses the example of Micronesian islanders, who also experienced linguistic imperialism under colonial
Japan. She explains that some of the older Micronesian islanders still have a positive view of Japan, especially of the former Emperor Ten’no.
However, she also discusses the negative legacies of such colonial policies amongst the Micronesians: the native Micronesians suffered racial
prejudice and were labeled as inferior, lower-class citizens despite many having achieved fluency in Japanese. They experienced an identity
crisis, with many saying that they “felt ashamed of identifying themselves”.
The elimination of the existing tongue was crucial to Japan’s colonial efforts because language is a direct representation of the social aspects of a community: a country’s culture, traditions, mindset and even spirituality are argued to be tied to its language. Therefore, changing the language of a community by enforcing the language of the colonizer would facilitate the transition into eventually adopting the colonizer’s traditions, cultures and beliefs. And even for those colonial subjects who accepted the new changes and sought to achieve an elevated status by “becoming Japanese” and becoming fluent in Japanese, they soon discovered that the promises of inclusion made by the colonial government would never be fulfilled.
This offer of inclusivity and the subsequent lack of fulfillment was one of the propellants of active Korean opposition against Japanese
imperialism. Many scholars have also made the connection between the Korean language and identity, as language preservation and promotion movements were a crucial part of the independence movement. To many Koreans today, the “soul and spirit of the nation” is embodied in Korean, which had been targeted by colonial Japan to achieve its imperialist agenda, but soon became a tool of resistance and a symbol of independence.
Their 2AC permutation or theory arguments naturalize the linguistic binary by treating
English as the baseline around which other languages are positioned---this enacts
psychic violence on Korean subjects and results in internalized oppression.
Cho ’21 [Jinhyun; February 16; Senior Lecturer in the Translation and Interpreting Program of the
Department of Linguistics at Macquarie University; International Journal of the Sociology of Language,
“Constructing a white mask through English: the misrecognized self in Orientalism,” vol. 271]
2 Coloniality and modernity of language and race
In trying to understand the impact of Orientalism on beliefs about language and race, it is important to examine colonial discourse on linguistic dominance and racial othering. Coloniality is inseparable from modernity, which justified and was justified by colonialism led by Europe during much of the second half of the previous millennium (Escobar 2007). The project of modernity was rationalized by the presupposed inferiority of indigenous populations, who were stigmatized as racially and linguistically “less-than-human beings” (Veronelli 2015: 112) incapable of achieving modernity without the help of the colonizers. The superior-inferior distinction between the colonizers and the colonized and, by extension, Europeanness and non-Europeanness, and whiteness and non-whiteness served to shape the co-naturalization of language and race, by which language and race were constructed as naturally bounded and inseparable (Rosa 2017). As with race, languages spoken by the colonized were reduced to delivering only “simple communication”, which refers to “infantile, primitive meaning expression” (Veronelli 2015: 118), according to linguistic hierarchies which positioned European languages as superior to any other language (Veronelli 2015). Through the Euro-centered colonial enterprises spanning from the 16th to the 20th centuries, racial and linguistic classification spread significantly, and
coloniality became a global phenomenon in which power, hierarchies and status were racialized and
the superior-inferior dichotomy was established as a normalcy (Quijano 2000). The relation between
language and racialization was further strengthened with the development of the academic fields of
philology and ethnology in Europe, through which race emerged as a subject of physiological discrimination and language as a marker of
a degree of civilization (Ashcroft 2001). In the rigidity of the geographical hierarchy, there was presumably only a single legitimate way of
knowing the world and all other ways were downgraded to the sphere of doxa (Bourdieu 1990), in which the realization of the colonial norm
was so complete that the norm became unquestioned truths (see Castro-Gómez 2007). The unquestioned acceptance of the colonizers’
superiority by the colonized essentially represents the absence of recognition or “misrecognition”, which refers to “the process by which power
relations come to be perceived not for what they objectively are, but in a form which renders them legitimate in the eyes of those subject to
the power” (Bourdieu, translated by Terdiman [1987: 813]; see also Bourdieu 1989). This induced misunderstanding of colonial power
structures was necessary for the reproduction of macro orders in which the dominated were led to misrecognize their subordinate and inferior
status as natural (Balaton-Chrimes 2017). The conspiratorial view of Self-Orientalism fails to account for this induced aspect of misrecognition, which is obtained not by conspiracy but by structural means (Terdiman 1987).
In the rigid binary structures, it may not be possible to find “self”, and any effort of identity formation is inevitably subject to the recognizing gaze of the dominant as models of civilization (Balaton-Chrimes 2017; see also Fanon 2008). As an important marker of civilization, language was often misrecognized by the colonized as a medium through which to acquire a new identity, as their sense of worthlessness pushed them to attempt to construct a white mask by behaving and sounding like the dominant (see Fanon 2008). Since Fanon’s ground-breaking work in 1952, however, there has been a significant
lack of scholarly inquiry to grasp the complex role of language in racialized identity formation of racial Others in the contexts of coloniality.
The gap is particularly notable in linguistic research in and about Asia (Rosa 2017). The proposed study which
follows the racialized identity formation processes of Yun Chi-Ho, the sojourner and interpreter on whom we will now focus, is, therefore,
expected to fill the lacuna by shedding light on the complicated interactions between race, language and power relations experienced and
narrated from the perspective of the historically marginalized.
Before moving on to analyze the English diaries of Yun, it is important to examine the language ideologies that Yun developed with regard to English in presojourning times. This will help us to
account for the relationships between English and identity that he experienced later during his stay in the US. Yun’s encounter with English began in 1882 at the age of eighteen while studying
in Japan. As China, which had traditionally acted as Korea’s protector, declined in power and the US emerged as a new Elder Brother for Korea, Yun was urged to learn English by his fellow-countryman Kim Ok-Kyun, a leader of the failed 1884 coup Kap Shin Chung Byun, a reform movement led by young and progressive government officials against the royal family and powerful
conservative politicians. Similar to other progressive Korean elites at the turn of the 19th century, Yun was critical of the powerlessness of the Korean government and was keen to achieve
modernization for his nation’s future (Lee 2010). As American civilization was considered as a model for Korea’s later development by progressive elites (Talley 2016), Yun was inspired to learn
English for the future of Korea. After learning English from a Dutch secretary at the Dutch consulate in Japan for four months, Yun returned to Korea in 1883 and became the first English-Korean interpreter for the first American government minister to Korea, Lucius Howard Foote. As such, needs for modernity constructed in the colonial binary in which the US was established
as a model of civilization significantly influenced Yun’s decision to learn English.
While Yun initially saw English as an important instrument to modernize Korea, the strong connections that he had with American missionaries in Korea as one of the first Korean Christian
(Methodist) converts led Yun to form positive views on the US, and he longed to study abroad there (Lee 2010). Yun’s connections with American missionaries later enabled him to study in the
U.S. through religious sponsorship between 1888 and 1893, during which period Yun’s view on English experienced a significant shift, as he struggled with the racism that was pervasive in
American society. It is important to note that the clear boundaries present in American society along the racial-linguistic lines, founded on beliefs that English belonged to the West and
whiteness, misrepresented an opportunity for Yun to reconstruct a white identity by making English belong to him. We will now follow his diarized records of this identity reconstruction
attempt, with a particular focus on Yun’s reports of his own racialized ideologies of English. Contrary to the idea of Orientals conspiratorially conforming to Orientalist thought, the process
through which Yun eventually subscribed to Orientalism was fraught with internal conflicts and even resistance to the dominant ideologies of the day. This article, therefore, approaches Self-Orientalism as a process that intersects in complex ways with racial and linguistic desires through the primary lens of misrecognition.
3 Methodology
In order to carry out the research, I collected data in 2018 from an online Korean history database (Korean History Database 2018), which stores Yun’s diaries. I focus on the diary entries
written during his sojourn in the US (from October 1888 to October 1893) as a key period. The period under investigation represents “contact zones” (Pratt 2007), in which the dominant and
the dominated come into contact with each other in a colonial space marked by unequal power relations. Distance is a key to understanding the concept of contact zone, because the Self is
not only physically away from his or her homeland but also symbolically distant from the Others, who objectify the Oriental homeland with suspicion. Focusing on the sojourning period shall,
therefore, provide us with a glimpse into the processes of Orientalist thought being encountered and internalized by this minority individual, and reveal how an associated inferiority complex
eventually led the individual to desire a racialized identity expressed through language.
As mentioned above, Yun’s American sojourn diaries were kept in Korean only during the first year of the sojourn (from 16 October 1888 to 6 December 1889). From 7 December 1889,
however, he started keeping diaries exclusively in English. While the analysis focuses on the English diaries, I also analyze the first year Korean diaries to see if there is any comparative shift in
terms of his racial and language ideologies. Based on content analysis (Hsieh and Shannon 2005), I systematically identified and classified themes through the process of coding with a focus on
English and race from a chronological perspective. The data analysis identifies three distinctive patterns of racial and language ideologies mediated by the psychological distance that Yun felt
from the superior others. Firstly, the initial period of sojourn (from October 1888 to December 1889) marks the beginning of an inferiority complex embedded in Orientalism, as Yun was
excluded and objectified as the Other in the United States. The second period of sojourn (from January 1890 to December 1892) sees an intensification of the inferiority complex held by Yun
and his simultaneous efforts to distance himself from what he saw as the “Other” – Africans/African-Americans. The analysis of the last stage of the sojourn (from January 1893 to October
1893) highlights his attempt to construct a “white mask” by completely denying his Korean identity and replicating white prejudice against his own ethnic group. Throughout the process of
distancing and distinction, English is misrecognized as a key to Yun’s identity reconstruction project. In what follows, I present the findings of the analysis in the aforementioned chronological
manner.
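For readers who want the chronological coding step above made concrete, here is a minimal sketch in Python. The period boundaries come from the paragraph above; the labels, data structures, and function names are illustrative assumptions, not Cho's actual analysis pipeline:

```python
from datetime import date

# Cho's three analytic periods of Yun's US sojourn (boundary dates from the
# article); the labels and this bucketing function are illustrative assumptions.
PERIODS = [
    ("becoming the Other",          date(1888, 10, 1), date(1889, 12, 31)),
    ("intensified inferiority",     date(1890, 1, 1),  date(1892, 12, 31)),
    ("constructing the white mask", date(1893, 1, 1),  date(1893, 10, 31)),
]

def code_period(entry_date: date) -> str:
    """Assign a diary entry to one of the three sojourn periods."""
    for label, start, end in PERIODS:
        if start <= entry_date <= end:
            return label
    return "outside sojourn"

# Example: the 7 December 1889 entry, where Yun switches to English-only
# diaries, falls in the first period.
print(code_period(date(1889, 12, 7)))  # -> "becoming the Other"
```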
4 Becoming the “Other” away from the homeland (1888–1889)
Yun first arrived in the US on 16 October 1888 in order to study theology at Vanderbilt University in Tennessee. He later went on to Emory College in 1891, to pursue studies in humanities. As
noted in Yun’s initial motivation behind English language learning, Yun was keen to achieve modernization for the future of Korea, and had a belief in its potential for transformation. Yun’s
dedication to the modernization mission is expressed in his 29 December 1888 diary entry, which says “내 마음껏 내 나라를 섬기는 것이 내 직분인 것이다” [It is my calling to serve my
country to the best of my ability].
The professed nation-building mission was, however, not entirely out of his own volition but was significantly influenced by American missionaries, who saw Yun as an effective tool to promote an evangelical mission
in Korea (Urban 2014). The issue of race and language in the case of Christian missionaries has been complicated, due primarily to the
conflict between humanistic religious principles and the missionaries’ own biases, influenced by the racial and cultural categories they knew.
Although missionaries were educated to believe in the fundamental spirits of universal equality, they were, at the same time, not completely
free from categorizations of people embedded in Orientalist ideas (Oddie 1994). Similar to other Western missionaries, American missionaries
in Korea also worked under the influence of a superior-inferior binary worldview, and rescuing poor Oriental brothers and sisters out of the
deplorable states was their key modernizing mission (Cho 2017). For colonial missionaries keen on modernization projects, Yun’s presence in
America as one of very few Asian Christian students was important as a proof of the success of missionary work abroad (Urban 2014).
Thus, Yun was initially welcomed into white social circles made up of theology students and missionaries, but this form of inclusion soon proved difficult for Yun to bear, especially because many still treated him as an inferior Oriental, rather than an equal Christian (Urban 2014). Well aware of his public role as a model Oriental convert and
simultaneously feeling weary of the “distance” from the mainstream society, Yun saw a diary as a private space in which he could express his
personal feelings and opinions, without needing to worry about white censorship (Urban 2014).
On 7 December 1889, about a year after his arrival in the US, Yun decided to write diaries in English only. The rationale he gave for this
language choice was the richness of English vocabulary, which he believed would enable him to express himself better than in Korean:
Up at 5a.m. Cloudy. My Diary has hitherto been kept in Corean. But its vocabulary is not as yet rich enough to express all what I want
to say. Have therefore determined to keep the Diary in English. (7 December 1889)
While the decision to write in English can be seen as part of his efforts to improve English language proficiency (Kim 2011), the expressed view
of English as “richer” in its expressive resources than Korean merits attention. His belief in English being superior to Korean in terms of linguistic richness indicates his burgeoning subscription to the superior-inferior linguistic binary, in which the languages of the dominated were regarded as too simple to deliver “human communication”, an act believed to be realized only through the sophisticated languages of the dominant (Veronelli 2015). The awareness of this linguistic hierarchy was accompanied by a growing recognition of the marginalized position that Yun was experiencing at the point of the language switch, including the limitations he had started experiencing in his inclusion in white Christian society.
The analysis of his Korean diaries written up to the point of adopting English as a medium indicates a growing inferiority complex. Whereas in Korea, Yun had been regarded as a man of high intelligence from a respectable class, in America, he
was suddenly reduced to an inferior Oriental from a poor country. As expressed in the 1889 Korean diary entries, Yun felt “업신여김”
[despised] (25 April), “도처에서 멸시를 받으니” [belittled everywhere] (7 May) and “수치” [humiliated] (24 May) for being Korean. The low
treatment to which Yun was subject led him to seriously question the worth of his ethnic origin, and he even
lamented “조선 사람으로 태어나 무슨 세상 영광을 바라겠는가” [What good in the world can I expect as a Korean person] (25 April 1889).
Considering Yun’s expressed frustration at his inferior status in American society, it can be argued that the language switch from Korean to English marks the beginning of his subjugation to the colonial linguistic hierarchy in which English was positioned as superior to any other language. Yun’s growing awareness of the relationship between the
hierarchic binary structures of peoples and nations and the prevailing beliefs about language was accompanied by an increasing racial
awareness as well, as Yun was exposed to the ideas of Social Darwinism in the US (i.e. ideas that accept and legitimize the relative
disempowerment of certain peoples as having been biologically determined by their relative lack of racial fitness). The 19th century saw the rise
of race theory as an explicit belief system or “scientific racism” (Rutledge 1995), which was used to justify and rationalize the theory of Social
Darwinism. After first encountering Social Darwinism during the sojourn, Yun became an ardent believer in the ideology (Tikhonov 2012), and the degree to which he subscribed to Social Darwinism is well exemplified below:
1NC
Rescission CP
The President of the United States should propose that the budget authority for
[nuclear first use] be rescinded in perpetuity and refrain from spending unauthorized
appropriations. The United States Congress should unanimously pass legislation
rescinding the budget authority for [nuclear first use] and issue a Congressional
resolution expressing its support.
The CP competes and solves the case. Instead of restricting nuclear use, it rescinds the
budget authority to fund the arsenal---that’s absolutely solvent.
Chappell ’22 [John; 2022; J.D. and M.S. in Foreign Service Candidate at Georgetown University;
American University National Security Law Brief, “President of the United States, Destroyer of Worlds:
Considering Congress’s Authority to Enact a Nuclear No-First-Use Law,” vol. 12]
Past no-first-use debates have included questions about whether passing a no-first-use law would exceed Congress's authority. As the Biden administration reviews the U.S. nuclear posture and considers adopting a no-first-use policy, evaluating the constitutionality of Congress enacting a no-first-use law helps determine whether Congress could enshrine a no-first-use policy in law, regardless of President Biden's decision.
Using the proposed Restricting First Use of Nuclear Weapons Act as a model, this article argues that Congress can constitutionally enact a law restricting the President's use of nuclear weapons. Section II outlines the history of no-first-use policy debates. Section III discusses how the Constitution allocates war powers between Congress and the President in general. Section IV then considers how war powers apply to the first use of nuclear weapons and how Congress could constrain first use. Section V analyzes two situations that raise constitutional and practical issues for a no-first-use law. Finally, Section VI discusses the article's findings and their implications for U.S. nuclear policy.
II. Background and History of No-First-Use Policy Debates
Policymakers have debated whether to declare a no-first-use policy for at least seventy years, beginning soon after the dawn of the nuclear age. Both Congress and presidential administrations have considered implementing a no-first-use policy. However, the United States has elected to keep first use on the table time and again. This Section outlines past no-first-use debates.
A. The Policy Debate Around No First Use
The 2018 Nuclear Posture Review describes the current U.S. declaratory policy, ruling out a no-first-use pledge in order to maintain deterrence against non-nuclear attacks. U.S. declaratory policy precludes using nuclear weapons against "states that are party to the [Non-Proliferation Treaty] and in compliance with their nuclear non-proliferation obligations." The Nuclear Posture Review states that the United States would consider using nuclear weapons in response to "attacks on the U.S., allied, or partner civilian population or infrastructure, and attacks on U.S. or allied nuclear forces, their command and control, or warning and attack assessment capabilities."
Proponents of a no-first-use policy argue that the United States should never need to use nuclear weapons first because the United States can accomplish any necessary objective that the first use of nuclear weapons could advance with conventional force instead. No-first-use supporters also claim that prohibiting the first use of nuclear weapons would decrease the likelihood of a mistaken nuclear launch by ensuring that the United States would not respond to a false alarm with a nuclear strike. No-first-use supporters further argue that the current policy of leaving first use on the table undermines stability in a crisis by incentivizing other states to launch a preemptive strike, increasing the risk of miscommunication and brinkmanship, and prompting opponents to take measures to increase the survivability of their forces that would increase the risk of unauthorized use.
Opponents of a no-first-use policy, on the other hand, argue that the policy would undermine deterrence. They claim the United States needs a nuclear deterrent against both nuclear attacks and significant conventional, chemical, biological, or cyber threats. Conventional threats were especially salient during the Cold War, when policymakers feared the Warsaw Pact might invade NATO and that the Pact's conventional superiority required a nuclear deterrent. Furthermore, they posit that a no-first-use policy would undermine extended deterrence over U.S. allies, incentivizing them to develop their own nuclear arms amid eroding U.S. assurances.
B. Presidential Considerations
Several administrations have weighed the possibility of a no-first-use policy. George Kennan, the diplomat and strategist who first crafted the Cold War's containment policy, recommended a no-first-use policy to the Truman administration in 1950, but President Truman declined to implement the proposal and kept the first use of nuclear weapons under "active consideration."
During the Clinton administration, Defense Secretary Les Aspin considered a no-first-use policy as part of a post-Cold War nuclear posture. However, Aspin elected not to incorporate no first use into the Nuclear Posture Review after allies expressed concern that the policy would undermine their security. Although the details of those concerns are not publicly available, allies tend to approach the prospect of a no-first-use policy with caution if they depend on U.S. security assurances to deter conventional attacks. A no-first-use policy would retain the option of retaliating with nuclear weapons against a nuclear attack on an ally, but it would rule out the use of nuclear weapons in retaliation against a non-nuclear attack, whether on an ally or the United States. Therefore, allies and partners that depend on the United States for extended deterrence against non-nuclear attacks tend to oppose a no-first-use policy because it would make a conventional attack against them appear less risky.
The election of President Barack Obama elicited hopes for policies to reduce the role of nuclear weapons in U.S. security, including a no-first-use policy. In a 2009 speech in Prague, President Obama affirmed "America's commitment to seek the peace and security of a world without nuclear weapons." That vision contributed to President Obama's selection for the Nobel Peace Prize that year. However, after consultations with allies, the Obama administration chose not to include no first use in the 2010 Nuclear Posture Review. President Obama revisited no first use at the end of his second term but again encountered resistance from advisors and allies, including Japan, South Korea, France, and the United Kingdom.
The Trump administration resurfaced concerns about U.S. nuclear policy, making no first use an important policy issue in the 2020 presidential election. No-first-use policy even appeared in a Democratic presidential debate in 2019, when Senator Elizabeth Warren supported the idea because "[i]t reduces the likelihood that someone miscalculates, [or] someone misunderstands." Montana governor Steve Bullock countered that he "wouldn't want to take [first use] off the table."
The Biden administration has begun its Nuclear Posture Review, which was expected for release in early 2022. The process has already sparked controversy within the Department of Defense, where senior officials requested the resignation of a political appointee overseeing the review. The ouster drew concern from Senator Ed Markey, who worried the appointee's reassignment may have been motivated by a desire to disadvantage no first use in the review process. In late 2021, hundreds of top scientists urged President Biden to adopt a no-first-use policy in a letter. Dozens of members of Congress followed suit in early 2022, urging the President to "[d]eclare that the sole purpose of nuclear weapons is to deter a nuclear attack on the United States and its allies, and that the United States will never use nuclear weapons first." President Biden's past support for a no-first-use policy has also elicited concern from allies. As Vice President in January 2017, Joe Biden said, "Given our non-nuclear capabilities, and today's threats—it's hard to envision a plausible scenario in which the first use of nuclear weapons would be necessary. Or make sense." Biden reaffirmed his position as a presidential candidate. However, in March 2022, administration officials reportedly indicated that President Biden's Nuclear Posture Review will not adopt a no-first-use policy.
In sum, presidential administrations have considered a no-first-use policy and elected to keep their options open while assuring the public that they would only use nuclear weapons in extreme circumstances.
C. Congressional Proposals
Congress, in turn, has explored passing a no-first-use law. Unlike presidential policy considerations, a no-first-use law would bind the Executive Branch across administrations. However, efforts to pass a no-first-use law have thus far fallen flat, partially due to constitutional concerns.
During the Vietnam War, legislators considered how to best reclaim congressional authority over war powers and foreign policy. The 1970s saw framework legislation like the War Powers Resolution, the National Emergencies Act, and the Arms Export Control Act. As Congress considered how to reassert control over foreign policy and national security issues, two no-first-use proposals emerged.
In 1971, the Federation of American Scientists (FAS) drafted a bill requiring the assent of a committee of congressional leaders before a President could use nuclear weapons first without a declaration of war. FAS renewed its call for a no-first-use law in 1984 with an essay in Foreign Policy. The essay sparked debate among constitutional scholars and elicited criticism from those who questioned the constitutionality of a leadership committee authorizing nuclear first use. In particular, some concluded the mechanism amounted to an unconstitutional legislative veto and that Congress could not delegate its war powers to a leadership committee.
In 1972, Senator William Fulbright (D-Ark.) proposed an amendment to a draft of the War Powers Resolution that would prohibit the President from using "nuclear weapons without the prior, explicit authorization of Congress" except "in response to a nuclear attack or to an irrevocable launch of nuclear weapons." Senator Jacob Javits (R-N.Y.) opposed the amendment on constitutional grounds, stating that, after Congress places a nuclear weapon in the U.S. arsenal, the President has the prerogative as commander in chief to decide "whether, when, or how to use it or not to use it." The Senate overwhelmingly voted down the Fulbright Amendment with a vote of 68-10. As of 2021, that instance remains Congress's only vote on a no-first-use law.
With renewed concerns about the first use of nuclear weapons during the Trump administration, Congress has again considered a no-first-use law. Senator Elizabeth Warren (D-Mass.) and Congressman Adam Smith (D-Wash.) introduced a 2019 bill that simply read, "It is the policy of the United States to not use nuclear weapons first." They reintroduced the bill in 2021. Both Warren-Smith bills attracted cosponsors, but neither has come to a vote.
Senator Ed Markey and Congressman Ted Lieu have introduced the Restricting First Use of Nuclear Weapons Act in every Congress since 2017, aiming to provide checks and balances on presidential sole authority to use nuclear weapons. The Markey-Lieu proposal is the leading no-first-use bill since the Fulbright Amendment.
The Markey-Lieu bill argues nuclear weapons are distinct from conventional weapons as a constitutional matter. The proposal's findings include recognition that "nuclear weapons are uniquely powerful" and "a first-use nuclear strike carried out by the United States would constitute a major act of war." Therefore, the bill stipulates "[n]o Federal funds may be obligated or expended to conduct a first-use nuclear strike unless such strike is conducted pursuant to a war declared by Congress that expressly authorizes such strike." The Restricting First Use of Nuclear Weapons Act defines a first use of nuclear weapons as an "attack using nuclear weapons against an enemy that is conducted without the Secretary of Defense and the Chairman of the Joint Chiefs of Staff first confirming to the President that there has been a nuclear strike against the United States, its territories, or its allies."
By finding the first use of nuclear weapons a major act of war and establishing that "[a] first-use nuclear strike conducted absent a declaration of war by Congress would violate the Constitution," the bill interprets Congress's war power as inclusive of regulating nuclear first use. The bill also recognizes the President's role as commander in chief, noting that the President currently has sole operational authority to authorize the use of nuclear weapons and that U.S. military officers must comply with the President's order in accordance with their obligations under the Uniform Code of Military Justice.
Although the Markey-Lieu bill has garnered dozens of cosponsors in each of the three Congresses in which it has been introduced, it has never left the originating committee for a floor vote in either chamber of Congress. However, previous no-first-use proposals have sparked considerable constitutional debates, inviting the question of whether constitutional concerns may hinder the passage of the Restricting First Use of Nuclear Weapons Act of 2021.
III. Nuclear Weapons and the War Powers of Congress and the President
As previous no-first-use proposals demonstrate, the first use of nuclear weapons raises questions about the respective roles of Congress and the President in waging war. This Section discusses
the war powers of Congress and the President and analyzes the interaction between their respective authorities.
A. Congressional War Powers
The Framers of the Constitution recognized the gravity of decisions to enter into war and allocated certain war powers to Congress. Writing as Publius in Federalist 69, Alexander Hamilton stated that "the declaring of war... [and] the raising and regulating of fleets and armies... would appertain to the legislature." In a 1793 essay, James Madison wrote, "In no part of the constitution is more wisdom to be found than in the clause which confides the question of war or peace to the legislature, and not to the executive department."
Congress's power to declare war includes authority over those decisions to enter into war. The Constitution expressly vests in Congress the exclusive power to declare war. The declare war authority is more than a formalistic authority to issue a declaration. In Talbot v. Seeman, Chief Justice John Marshall observed that "The whole powers of war...by the constitution of the United States [are] vested in Congress." A formal declaration is not required to conduct a war. Rather, Congress may decide to enter into war in a variety of ways, including with authorizations for the use of military force or appropriations.
As mentioned, the Markey-Lieu no-first-use bill observes that "[t]he Constitution gives Congress the sole power to declare war" and asserts that nuclear first use "would constitute a major act of war." Senator Markey emphasized Congress's war powers, saying, "Our Constitution affords Congress, not the President, the exclusive power to declare war and that extends, clearly, to the most catastrophic type of war, nuclear war. No Commander-in-Chief [sic] should be able to act alone to start a nuclear war."
Under the Constitution, Congress is authorized to "make Rules for the Government and Regulation of the land and naval Forces." The Land and Naval Forces Clause establishes Congress's authority over internal regulation of the armed forces. Under that authority, Congress established the Uniform Code of Military Justice and enacts defense authorization acts that shape the military's internal bureaucracy. Senator Fulbright asserted that his no-first-use amendment to the War Powers Resolution was authorized under the Land and Naval Forces Clause, but he mostly appealed to the declare war power during debate.
The Constitution also states that "No Money shall be drawn from the Treasury, but in Consequence of Appropriations made by Law," granting Congress the authority to make appropriations. The Constitution further authorizes Congress to "lay and collect Taxes...to...provide for the common Defense... of the United States." The Constitution prohibits an appropriation for the army extending beyond two years, providing Congress with periodic opportunities to control the conduct of war by reducing or eliminating funding to the military. Congress can use its power of the purse to limit military action. As the U.S. Court of Claims observed in Swaim v. United States, "Congress may increase the Army, or reduce the Army, or abolish it altogether." Similarly, Congress could remove nuclear weapons from the U.S. arsenal, choose to modernize existing nuclear forces, or halt the development of particular delivery systems.
Senator Markey and Congressman Lieu appeal to Congress's appropriations power in their no-first-use proposal. As mentioned, their bill stipulates, "No Federal funds may be obligated or expended to conduct a first-use nuclear strike unless such strike is conducted pursuant to a war declared by Congress that expressly authorizes such strike."
The precedent of clawing back funding via rescission reins in uncontrolled spending.
Feulner ’18 [Edwin; April 18; Ph.D. from the University of Edinburgh, M.B.A. from the Wharton School
of Business at the University of Pennsylvania, graduate of Georgetown University and the London School
of Economics; The Heritage Foundation, “Rolling Back the Tide of Overspending,”
https://www.heritage.org/budget-and-spending/commentary/rolling-back-the-tide-overspending]
The implication, of course, was that nothing could be done about this latest round of massive overspending. Like it or lump it, there it is.
But that’s not exactly true. The president can, in fact, do something. He can pursue what’s known as a rescissions package.
Don’t let the wonky word cause your eyes to glaze over. “Rescission” simply means to revoke, cancel or repeal a law, or at least part of it. A rescissions package would basically rescind part of the spending that Congress recently passed.
But wait, you may say. The Constitution gives the “power of the purse” to Congress, not the president. What’s he got to do with initiating such
an action?
Article I does stipulate that “no money shall be drawn from the Treasury but in consequence of appropriations made by law.” But Article II says
the president “shall take care that the laws be faithfully executed.”
So Congress provides the funding, but the president is responsible for how the appropriations are executed.
That doesn’t mean he can decide how it’s spent unilaterally, or do so in a vacuum. Congress still has an important role to play.
Presidents from Thomas Jefferson on down had been submitting rescissions for years with relatively little trouble. But things came to a head
during Richard Nixon’s presidency. He broke with previous presidents by impounding larger amounts (nearly $15 billion in 1973, out of a total
budget of $245 billion) and ignoring Congress’s intent that the funds be spent.
The legislative branch responded to this challenge with the Congressional Budget and Impoundment Control Act of 1974. “Title X of the act
limited the power of the president to withhold funding and put into place a formal procedure for when the president tried to do so,” writes
budget expert Justin Bogie.
This didn’t mean the end of the road for rescissions. From 1974 to 2000, presidents proposed about 1,200 rescissions, totaling more than $77
billion. Congress approved 461 of them, which resulted in a savings of $25 billion.
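As a quick arithmetic check on those figures (computed from the numbers in the card itself, not additional data):

```latex
\frac{461}{1200} \approx 38\% \text{ of proposals approved}, \qquad
\frac{\$25\text{B}}{\$77\text{B}} \approx 32\% \text{ of proposed dollars actually saved}
```

In other words, roughly a third of requested rescission dollars historically survived congressional review.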
But there have been no requests for rescissions since 2000. President Clinton was the last one to even try. That needs to change.
Yes, the amounts we’re talking about aren’t exactly huge. The most significant budgetary savings occurred under President Reagan: 1.3 percent.
And that, of course, refers to cuts from discretionary spending — not the increasingly massive amount considered mandatory.
So no, as Mr. Bogie points out, “rescissions will not fix the country’s current fiscal mess. Rescissions are not a significant deficit-reduction
tool, nor are they meant to be.”
However, they are still an important first step toward what should be a top goal for Washington policymakers: getting their out-of-control spending problem under control.
Just because the amount we can save through rescissions is a relatively small one doesn’t mean we shouldn’t
try. If nothing else, it sends a message that cuts need to be made. The least Congress could do is provide a package
of $13 billion in rescissions, which is just 1 percent of the total omnibus.
We can’t just throw up our hands because the task before us is so huge. Yes, the federal debt is higher than it’s been at any time in the post-World War II era. The need to get spending under control is more important than ever. But if we can’t claw back even a small amount through rescissions, does that mean we’re just supposed to give up entirely?
You have to start somewhere. And a rescissions package is the place to do it.
Global catastrophe.
Taylor ’20 [Kenneth; September 1; Professor of Economics at Villanova University, Ph.D. from Stony
Brook University, M.Sc. from the University of Wyoming; Futures, “The Passing of Western Civilization,”
vol. 122]
Another relevant issue facing us is the contexts in which humans naturally engage in altruistic behavior. Paleolithic humans’ expression of familial or nonfamilial altruism endures for only a couple of generations, with concern for future generations subsequently deteriorating (Dawkins, 1989).
1989). The fact is there used to be no compelling need for longer-term concern. However, such limiting of intergenerational interest leads to a
shorter-term policy horizon than that needed to address today’s global problems. For instance, over the past half-century, western citizens
have endorsed rising public indebtedness. Citizens thereby enjoy greater consumption than could be provided out of current income. This
boosts positive hedonic sensations (more goods and services consumed today) while reducing negative hedonic sensations (lower taxes paid
today) and is reinforced by the basic human predisposition for immediate gratification. However, at some point, any debt, even a public one, must be repaid: public debt is delayed taxation. At that future point we face a set of conundrums; for as soon as we must repay, we necessarily experience a significant increase in negative hedonic sensations. Given that we dislike losses more than twice as much as we like equal gains, we resist, which merely entrenches the unsustainable trends that eventually lead to crisis (Tversky & Kahneman, 1991). This insight further helps explain the lack of general concern
for damage to the biosphere or humankind’s enduring faith in a “technofix” solution—it is easier to brush off emergent
problems if you believe that a necessary fix will surely be found when needed by some future generation that you are
not too concerned about in the first place.
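The loss-aversion claim cited to Tversky and Kahneman can be made precise. As a hedged illustration (standard prospect-theory notation with commonly cited parameter estimates, not a formula appearing in Taylor's text):

```latex
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0 \\
-\lambda(-x)^{\beta}, & x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25
```

With λ ≈ 2.25, a loss is weighted more than twice as heavily as an equal-sized gain, which is exactly the asymmetry the card invokes to explain why voters resist repayment until crisis.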
Finally, research reveals that individuals often behave differently in group contexts than they would if acting alone (i.e. crowd psychology). Within a group setting individuals often take up the
group’s identity, ignoring their own conscience, suspending judgment and accountability. The result is that individuals within groups participate in acts they would never commit separately.
Social psychologists say that a participant enters a lower state of self-awareness called “deindividuation”. The resulting anonymity can have horrifically destructive effects on innocent lives and
property, as witnessed during riots, genocides and wars (Cantril, 2002). Extending these insights, the anonymity associated with group behaviors coupled with limited future concern helps us
to understand why individuals in developed nations can be insensitive to poverty, emigration, environmental destruction, water shortages and excessive debt on the global level. When one
adds human flaws to an increasingly crowded planet, effects become amplified—there are over 7 billion of us all behaving much the same way.
2. Population
Population growth during the past 200 years has had a positive impact on global economic growth and standard of living. Without a doubt, the powerful, pervasive, and positive effects
brought about by Enlightenment thinking and Industrial Revolution propelled civilization to higher states of wealth and welfare. Advances in public health, medicine and agricultural
productivity set off exponential growth in population, reinforcing economic progress within a series of positive feedback loops. The historical events represent a transformative tsunami, with
all dimensions of civilization and Earth changed forevermore. As the 19th century began Earth was still pristine for humans with their newly minted ideas. There were more unknown places to
explore and exploit while human population was low, estimated at being about one billion globally in 1800. Abundant resources were readily available to support industrialization and upsurge
in living standards. New technologies arising from scientific research, innovation and commercialization drove the entire enterprise forward. Economists would say the potential for both
extensive and intensive economic growth on the global stage was at a maximum back then. Capitalist industries and markets were freer—within the formidable command of enabling, imperial
power—to spread across the planet, bringing more and more places, people and resources into the western economic paradigm.
The United Nations projects a further 45% increase in human population, a forecast that might well have significant consequences. Specifically, the UN projects that global population will grow from 7.7 billion today to about 11.2 billion by 2100 with 95% certainty (2017). Once human population peaks, fertility may begin a gradual fall, leading to the population growth pattern moving from its current exponential path to one that is logistic. This pattern has already appeared in developed nations, with demographers projecting it to occur in time throughout the less developed world. Human population is estimated to level off between 11–12 billion, with every reason to believe that population decline commences in the 22nd century as world-wide fertility rates fall below replacement level. Before we breathe a sigh of collective relief, there are three relevant details to consider. First, most additional population growth will occur in the less
developed parts of the world, already struggling to pull themselves out of poverty. Second, before human population begins to decline, we and our planet must get through the next 80 years—
or more like 150 years or so before our numbers return to even today’s level. Third, the projected 3–4 billion surge in global population is forecast to be the largest increase ever in absolute
number during any 80-year period in history. While the average annual rate of population growth is decreasing, the number of people born increases in total—because each year the base is larger, so a decrease in the growth rate can still result in more people being born. Between these considerations lie many risks, all to the downside. Already unprecedented numbers of
immigrants are trying to move to the richer, more politically stable parts of the world while sectarian (i.e. tribal) strife within poorer countries becomes more common, with technologically
empowered militants and autocrats amplifying their power.
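The exponential-to-logistic transition described above has a standard mathematical form. As an illustrative sketch (the population values below are taken from the figures quoted in this card; the growth rate r is left symbolic):

```latex
\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right)
\quad\Longrightarrow\quad
P(t) = \frac{K}{1 + \frac{K - P_0}{P_0}\,e^{-rt}}
```

With P₀ ≈ 7.7 billion and a carrying-capacity ceiling K ≈ 11–12 billion, growth slows as P approaches K rather than compounding indefinitely, matching the UN's projected leveling-off.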
Earth may be able to feed, clothe and house the 7.7 billion people who are presently here—although several billion marginally so. As the next 3–4 billion people arrive the question of whether
this represents unsustainable overpopulation becomes significant. Overpopulation is generally defined as a situation where an organism's numbers exceed the carrying capacity of its habitat.
The problem is that the 21st century will witness, and some would say is already witnessing, a time when Earth—clearly a closed environment—experiences a set of innate and/or human-imposed restraints. As limits are reached, poorer nations will find themselves in a condition of "demographic entrapment"—a condition in which a nation has a population greater than its carrying capacity without the option of migration, and too little in export earnings to pay for critical imports. The net effect can be localized Malthusian crises with characteristics of mass starvation
and sociopolitical instability. Climate change is already cited as making this tendency more probable in Sub-Saharan Africa. In other words, the low-hanging fruit sustaining population growth
and prosperity these past 200 years has been plucked—or is being hoarded—by those fortunate to have industrialized and prospered earlier. The decline of global, westernized civilization as we know it may well be heralded by poorer nations fracturing before the crisis spreads to the rest of our global village (Homer-Dixon, 2006).
One final facet is that we are not only a biological force but have become a geological one as well. It is suggested that our numbers are now so vast, our industry so extensive, that a new
geological age has begun: the Anthropocene (Waters et al., 2016). This conjecture is rooted not simply in our numbers but in our nature as well: We are 7.7 billion individuals with insatiable
collective wants on our way to becoming up to 11–13 billion strong, transforming myriad dimensions of Earth’s hydrosphere, atmosphere, lithosphere and biosphere. One group of scientists
has identified limits beyond which we should not push our planet (Stockholm Resilience Centre, 2015). This research suggests we are nearing tipping points into radically different planetary
states with unknown ecosystemic features. Earth systems are frustratingly complex, so the results are tentative yet worrying. The bottom line is that planetary boundaries exist and will
restrain the current trajectory of civilization. Given the UN’s population growth forecast, we may begin hitting some of these fuzzy planetary boundaries soon. Bányai states that environmental regulation has failed, for human behavior is “psychopathological” (2019). Her analysis supports the conclusion made here that civilization’s decline is inevitable. Even if passing
the tipping points move Earth into a still hospitable environment for humans, the transitions involved will magnify the tensions associated within the climatic stage of civilization. All this is
symptomatic of human behavior and numbers, representing distinctive features during this phase of civilization’s climax.
3. Fall of empires
Earlier civilizations followed a similar pattern of development characterized by what Theodor Mommsen defined long ago as genesis, growth,
senescence, collapse and decay (1854-1856). Since Edward Gibbon's extensive work, The Decline and Fall of the Roman Empire, scholars have
taken an active interest in what causes the eventual decline of all empires (1776-1788). In the case of Rome, Gibbon suggested decay of the
elite was brought on by the “natural and inevitable effect of immoderate greatness”. Arnold Toynbee refined Gibbon’s ideas by adding that the
political elite became increasingly parasitic, leading to an increasingly marginalized majority who undermine the integrity of empire in
numerous ways (1939). Other macrohistorians, such as Oswald Spengler, argue for a world view based on the cyclical rise and decline of
civilizations, suggesting we have begun a centuries-long process of decline mirroring that witnessed in antiquity (1926). Joseph Tainter’s study
of Rome identified increased sociopolitical complexity—causing rigidity and fragility while drawing off scarce resources—as a major cause of its
decline, with many suggesting his insights remain relevant today (1988). For many ancient, albeit smaller, civilizations, Jared Diamond suggests a
quintet of external factors led to decline: environmental degradation, climate change, dependency upon external trade, intensifying levels of
internal and external violence and, finally, societal responses—or lack of response—to all these factors (2005). For modern civilization—and this
was likely true for many ancient ones as well—Mancur Olson argued that special interest groups accumulate around the central power structure, drawing off resources, impeding the ability of central authorities to respond appropriately to the growing threats to the integrity of empire (1982). One final point found in all these studies is that leaders essentially failed to deal with developing macro-problems, both internal and external, before reaching the threshold of crisis and impending collapse.
Galtung and Inayatullah’s ambitious book, Macrohistory and Macrohistorians: A Theoretical Framework, examines the contributions made by
twenty macrohistorians in understanding multiple facets of the cycles of civilization (1997). A further ambition of this work was to produce a
comparative and integrative history of the patterns and causes of change throughout time. They begin deep in the past with the premodern
insights of Ssu-Ma Ch'ien, Augustine and Ibn Khaldun; progressing to 19th century contributions of such dialectical thinkers as Friedrich Hegel
and Karl Marx; ending with the more recent thinking of Pitirim Sorokin, Prabhat Sarkar and contributors to the Gaia hypothesis. This sweeping
set of transhistorical and cross-cultural perspectives on social change is then treated comparatively, resulting in the definition of twelve
different "sciences" addressing change in the human condition. These “sciences” reflect diverse pedagogical perspectives on the study of
civilizational change, each focusing on distinct forces, patterns and units of analysis (i.e. vectors of change). Reflecting what has been previously
noted, Galtung and Inayatullah identify “stages and patterns” in the cyclical development of civilizations as a common theme among the
macrohistorians reviewed. Inclusion of non-Western thinkers infuses the work with a rich set of historical experiences and perspectives while
providing us with analytical tools to help understand on multiple levels what is happening within western civilization today.
There are further works suggesting that the present course of humanity has refocused the process of decline of western civilization onto distinctive (i.e., planetary) factors. From Paul R. Ehrlich's neo-Malthusian, life-long work since he published The Population Bomb in 1968, to the ongoing work of Meadows et al. since introducing the "Limits to Growth" hypothesis in 1972, to Edward O. Wilson's 2002 concept of HIPPO (Habitat destruction, Invasive species, Pollution, Human Over-Population, and Overharvesting), many intellectuals have warned that trends associated with human expansion are unsustainable, pushing today's civilization into its climactic stage. More recently, the Ehrlichs wrote a
piece entitled “Can a collapse of global civilization be avoided?” (Ehrlich & Ehrlich, 2013). They begin by stating that “global collapse appears
likely” due to overpopulation and overconsumption, with dramatic cultural change necessary to avert catastrophe. Laura Spinney published a summary of additional corroborating research, all pointing to socioeconomic disintegration, concluding that “almost nobody thinks the outlook for the West is good” (New Scientist, 2018). Lastly, Luke Kemp, from the Centre for the Study of Existential Risk at the University of Cambridge,
published a BBC report noting that “collapse may be a normal phenomenon for civilizations, regardless of their size and technological stage”,
and that “our tightly-coupled, globalized economic system is, if anything, more likely to make crisis spread” (Kemp, 2019).
4. Questions to ask before climbing into the lifeboat
What is the Earth’s carrying capacity for humanity? Is it 11 billion or some larger or smaller number? Also, what needs to be done to balance the human desire for personal opportunity, physical comfort and liberty with a sustainable, habitable planet? Further, what have we learned and what do we cherish about our current civilization that we wish to preserve for the future? Finally, how will we carry our treasures into the next civilization? These are not easy questions, but they must be asked and answered in the next several decades.
The first question requires that we define what a “sustainable” population size would entail. To many, it requires a maintainable level of the physical components providing a healthy standard of living for all, consistent with viable ecosystemic balance. Such a standard of living would require ready access to the basics of nutrition, clothing and shelter. In addition, it would require equal entry to the higher reaches of the Maslow hierarchy through provision of a stable environment, basic health care and education, as well as minimal socioeconomic barriers to advancement within a strong legal system. Providing such would guarantee equal opportunity for everyone to achieve aspirations compatible with innate, or acquired, abilities and drive. In other words, the sustainable population size requires something more than mere survival of our species, since a healthy civilization requires dynamic engagement of, and opportunities for, its members. If the researchers at the Stockholm Resilience Centre are correct in stating that Earth will be moving into a profoundly altered state in coming decades, then Earth’s carrying capacity for humanity at any future time is indeterminable today. This has not stopped prognosticators from making estimates based on our planet’s current ecosystemic state. Paul Ehrlich places the optimal population of the planet between 1.5 and 2 billion people (The Guardian, 2012). Unfortunately, most research on the subject varies so much as to be currently useless. The reason for such disparate conclusions distills down to the underlying assumptions made by those doing the research—and this too can become victim of partisan, tribal truth. There are those who believe that human adaptability and ingenuity place no limit on human population size, while others derive a number less than that suggested by Ehrlich. Further research is needed and, in the end, determining the sustainable population range requires balancing the carrying capacity of Earth, derived from ecological footprint analysis, against some minimum scale required to maintain humanity’s diversity within a transformed, vibrant civilization design.
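To see why footprint-based estimates land where they do, consider a rough, back-of-the-envelope balance; the figures below are approximate, commonly cited estimates added here for illustration, not numbers from this essay:

\[ P_{\text{sustainable}} \approx \frac{B}{f} \approx \frac{12 \times 10^{9}\ \text{gha}}{2.7\ \text{gha/person}} \approx 4.4\ \text{billion people} \]

where \(B\) is Earth's total biocapacity in global hectares (gha) and \(f\) is the assumed average per-capita ecological footprint. Assuming a higher-consumption footprint of, say, 6 gha/person, nearer affluent-world levels, drops the estimate toward Ehrlich's 2 billion, which is precisely why published figures diverge so widely on differing assumptions.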
Many would point to China’s lapsed one-child policy and say it was a failure since it was illiberal and resulted in an inverted demographic pyramid. The first part is true and, as to the second part, there are implications arising from this multi-decade policy that will negatively impact China’s
future economic growth, which is construed as a bad outcome. This last conclusion arises from the questionable assumption that aggregate economic growth is something we should always strive for. What really matters is a nation’s “human development” level over time. The fact is that
it’s not bad to experience stagnant or negative GDP growth if real per capita human development, as defined by the United Nations’ Human Development Index, remains positive (United Nations, 2019). This is the trick public policy makers need to investigate and then achieve. Population decline in the context of robust, technologically driven productivity gains is one avenue toward achieving this goal (Frey, 2019). Büchs and Koch have investigated the degrowth transition, finding that wellbeing need not suffer (2019). However, they note that the psychological transition in expectations will not be easy. Echoing this concern, Fergnani underscores the difficulty of disentangling the instilled psychological pleasure individuals gain as participants in capitalism (2019). The coordinated cadence of human effort born of the Industrial Revolution created a psychosocial dynamic that will prove deeply resistant to paradigmatic change. This too requires careful research, but an inescapable conclusion is that it is crucial to modify our inculcated beliefs concerning economic growth.
What do we want to carry into our future from our current sociocultural fabric beyond fine arts, literature and accumulated STEM knowledge? Thomas Jefferson said in the opening lines of the Declaration of Independence that, “We hold these truths to be self-evident, that all men are
created equal, that they are endowed, by their Creator, with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness”. Protection of your person and your family, the freedom to live your life without excessive social hindrance and the right to pursue
happiness, all resonate with basic human nature. James Q. Wilson, among others, has argued that morality has a strong genetic component, so these new concepts, embodied within the then-new democracy, had a strong appeal to humanity’s complex, evolved sense of fairness and justice. While from an objective perspective these “rights” were anything but “self-evident”, claiming so brought them into the social contract, creating a powerful motive among citizens to defend and preserve the young nation. The mid-20th century innovation of connecting individual with social rights, enshrined within a system based on the preeminence of the rule of law, became a potent unifying force and, in many ways, the final socioeconomic achievement the Liberal Tradition made at the end of western civilization’s colonization stage. There is much here worth preserving, for it nurtures collective intelligence networks while preserving social cohesion. These points need to be reflected upon and expanded through further investigation.
Will these issues and questions be addressed in an anticipatory or reactive manner? With a reactive approach, the path we’re on, we risk the ongoing decline, collapse and subsequent resurrection being hijacked by those using tribal truth to create something partisan and potentially devolutionary. While we can imagine governments constructively rising to and addressing the stresses of civilization’s demise, such thinking has proven misplaced and fruitless. The Ehrlichs, as many others, suggest that “widely based cultural change is required”, which is also improbable (Ehrlich & Ehrlich, 2013). Even where we see awareness of such issues as climate change sculpted into active social policy, the protective hand of human biases and special interests lies in the shadows. Further, if the problem is rooted in our numbers and behavior, such policies add up to band-aids—they serve to delay the inevitable. Barring some techno-fix set of miracles, which is possible yet not probable, we must conclude that humanity’s response will continue to be reactive until crisis and collapse are upon us: We are captive to the cycles of history.
At some point before collapse, as awareness grows, an overdue but hopeful reaction is possible, entailing a focused collective intelligence effort attending to these issues and questions. For the sake of a name, let’s call this the “Human Foundation Project” (Taylor, 2012). Parallel to this project, many in the top socioeconomic groups will be building their walls, underwriting their private militias and fortifying to survive collapse. Their long-term objective will be to create a forthcoming civilization shaped in their image—the one that is now proving unsustainable. This is a dead-end vision for a viable future, so we must look elsewhere for humanity’s salvation.
Ironically, as stated at the beginning of this essay, it’s the global rich and powerful that will make sure they’re the first to climb into the lifeboats. Most of these people are self-centered yet gained their status through being intelligent, industrious and adaptive. Among them are a few concerned visionaries—such as Bill Gates—known for his, and his wife’s, generosity through The Bill and Melinda Gates Foundation, supporting public health and education initiatives around the world. There are numerous others that could be mentioned—many well-known entrepreneurs, actors and financiers—but the point is that not all the rich are narrowly self-interested: Many put their wealth behind honorable causes. J. Pierpont Morgan is often cited as single-handedly saving the United States from financial collapse during the Panic of 1907 (Chernow, 2010). It has happened before and can happen again. If we don’t want the self-serving rich and powerful with their private militias emerging from their gated communities turned personal bunkers to reclaim the storyline of tomorrow’s civilization, we need to make a contingency plan to set the stage for something better, encapsulating the philosophically and socially noble features that emerged from the Enlightenment and evolved Liberal Tradition. Once the financing is in hand, the work of the Human Foundation will commence.
The Human Foundation will design a blueprint of a new civilization to emerge at an appropriate time and place possessing the most fertile environment for development. Emergence will eventually take place in phases with the ultimate objective of dominating the storyline of civilization’s
reconstruction. It will have a charter and mission statement built upon the following core principles and goals:
1. Preserve: Conserve accumulated knowledge related to STEAM fields.
2. Delineate: Define what is worth using from the past in designing our next civilization. This includes study of cultural, political, legal, economic and social practices.
3. Create: Design a new, robust civilization based on the objective of establishing a world order that nurtures ongoing evolution of humanity and life on Earth.
4. Sustainability: Emphasize sustainability in all dimensions of institutional design.
5. Outreach: Communicate findings to as broad an audience as possible.
6. Community: Build a network of individuals and organizations sympathetic with the objectives of the Human Foundation Project.
7. Endure and Protect: Create the physical and socioeconomic mechanisms to sustain the Human Foundation through the collapse of western civilization. This includes providing for and protecting those associated with the foundation.
8. Phases: Lay out the necessary steps during and after initiation to make the design a reality.
9. Monitor and Assess: Observe and recalculate prospects and limitations presented to the new design as the collapse of Western civilization progresses.
10. Timing: Be prepared to move decisively when and where opportunity is presented.
Soon after its formation the Human Foundation will hold a series of symposiums, bringing visionaries and specialists from across the spectrum of human knowledge together. Each symposium will be centered on critical themes, such as “Sustainable sociocultural practices best serving
human needs” or “Beginning institutions, rules and laws” or “Balancing planetary and human systems”, etc. Early on, a permanent staff will be necessary to focus collective intelligence on the objectives of the project. Along with gathering and storing that which is to be preserved, while
applying foresight intelligence to design a nascent civilization to come, a vital concern will be to find means to endure civilization’s collapse such that formulated plans remain actionable.
Some comments are required on items #8 to #10 in the mission statement principles and goals. Anticipated escalation of armed
conflict would destroy the functionality of national sociopolitical institutions in many regions of the world. Besides
mass emigration of displaced citizens, other geopolitical consequences are difficult to predict. Rising seas,
more severe weather events and increasing temperatures might drive those from densely populated coastal regions inland, increasing
competition for resources, social tension and, in some cases, the likelihood of famine. Some nations will disappear (e.g. the Maldives) while
others may be relatively untouched (e.g. New Zealand). Global supply chains could easily be disrupted and trade impaired, with the overhang of debt and defaults causing financial market disruption and global GDP to stagnate or turn negative. Public policy initiatives will be constrained by past excesses.
Unmet expectations of citizens in the developed world will probably amplify idiosyncratic social and political instability. What if nations or terrorist groups turn to the use of weapons of mass destruction to achieve their
objectives? Perhaps there will be nations welcoming the Human Foundation early in the course of collapse, willing to alter their institutions,
public policies and legal structures to accommodate the foundation’s vision. All these possibilities must be monitored and factored into the Human Foundation’s decisions about when, where and how to initiate the first stages of its plan for establishing the next civilization. Researchers at
the Human Foundation will need to engage in continuous scenario analysis and planning. The precise nature of Western civilization’s collapse is
unknowable, while the nature and extent of ongoing environmental damage, with its impact on regional habitability, is a further unknown. The
key will be to design a robust, adaptable plan within a wide range of hypothetical scenarios as the changing state of the world is revealed during
the final phases of collapse and decay.
5. Next steps on humanity’s journey
All so far is premised on the assumption that western civilization is in decline. To many it certainly doesn’t feel like this is happening: “…people are carrying on as usual, shopping for their next holiday or posing on social media” (Spinney, 2018). There are two relatively recent
developments at work, each having played a role in temporarily counteracting degeneration, sustaining the stage of senescence. First, the
global financial system shifted from a debit to a credit basis in the early 1970s. This has permitted an historic increase of debts relative to
assets. Debt is essentially borrowing against future income , boosting growth and consumption
temporarily in the current region of time. At some point the bills from the past must be repaid ,
shifting decline onto an accelerated path in a shorter period than would have otherwise occurred.
The Covid -19 epidemic ha s quickly revealed the fragility of this credit expansion cycle. Twenty-five
central banks had announced q uantitative e asing initiatives by mid-April 2020 to mitigate the economic harm inflicted by viral
suppression policies. Further, governments around the world had announced a total of $8 trillion in additional fiscal spending to soften the
blow. These efforts entail pumping liquidity into markets through central bank purchases of various private
and public financial instruments. The net effect is to transfer more of the accelerating debt burden
from the private to public sectors. Along with this, some predict that the United States may run annual fiscal deficits in both 2020 and 2021 of over 3 trillion dollars, at a rate that may approach 18% of GDP. This raises the question of the limits of this approach to economic stabilization. BCA Research, the organization that coined the term “debt super-cycle” back in the 1970s to describe this phenomenon,
has declared that the end of the super-cycle began in 2014 and is currently accelerating, ushering in a dangerous period of
insidious developments that will fundamentally alter the global economy and civilization as we
know it (MarketWatch, 2020).
1NC
Topicality
The aff must be topical.
“Resolved” refers to policies.
Words and Phrases ’64 [Permanent Edition, an English Language Dictionary]
Definition of the word “resolve,” given by Webster, is “to express an opinion or determination by resolution or vote; as ‘it was resolved by the legislature;’” It is of similar force to the word “enact,” which is defined by Bouvier as meaning “to establish by law”.
Vote negative for FAIRNESS and CLASH.
Unbounded topics deny a role for negation---debate is innately competitive, which creates an incentive for teams to retreat from controversy and forces the neg to first characterize the aff and then debate it, evacuating the benefits of detailed prep and research. Any impact intrinsic to debate, as opposed to discussion, comes from
negation. Topical stasis produces iterative and in-depth strategies that are a
prerequisite to debate’s pedagogical value. Independently, turns case---well-prepared
opponents create sparring partners. Failure to iterate produces false positives---neg
on presumption.
1NC
Assurance/Deterrence DA
The United States should preserve nuclear first use solely against military centers.
Doyle 10 [Thomas E. Doyle; “Kantian nonideal theory and nuclear proliferation,” International Theory, 2:1, 87–112; Cambridge University Press, 2010; Scarsdale CC]
The same analysis applies to any policy of carrying out deterrent threats solely against population/government centers. However, for Aspirant
to carry out deterrent
threats solely against military centers seems prima facie consistent with Kant’s view on the right
of national defense, and it parallels some applications of just war theory on the problem of limited nuclear warfighting (Ramsey, 1962; Orend, 2000). Once acquired, a low-yield nuclear device might annihilate one or more of Rival’s army divisions, naval task forces, or air-force bases, severely crippling its capacity to continue to aggress. More importantly, a
maxim that corresponds to this intention appears to pass the universality test. Aspirant could in principle
assent to a rule that permits all nuclear-armed states to threaten and carry out exclusively counterforce nuclear
reprisals, much in the same way that nationalist morality permits all states to use conventional force in self-defense.23 This isn’t to say that Rival can read off Aspirant’s intentions from
its nuclear procurement behavior. And this is not to say that, in the process of nuclear miniaturization required to produce these weapons, Aspirant might not retain its larger nuclear devices. It is to say, though, that
Aspirant’s maxim on this point can be imagined without formal contradiction. Moreover,
were Aspirant to miniaturize its arsenal and then verifiably decommission or destroy its larger devices, Rival might come to believe that Aspirant had abandoned any policy of mutually assured
destruction in favor of a policy of severely limited counterforce warfare.
The plan devastates assurance of East Asian allies---reversal is perceived as downgrading extended deterrence, exposing allies to non-nuclear aggression.
Costlow ’21 [Matthew; 2021; Senior Analyst at the National Institute for Public Policy, M.S. in Defense
and Strategic Studies from the University of Missouri, Ph.D. candidate in Political Science at George
Mason University; Occasional Paper, “A Net Assessment of ‘No First Use’ and ‘Sole Purpose’ Nuclear
Policies,” https://nipp.org/wp-content/uploads/2021/07/OP-7-for-web-final.pdf]
Calculated Ambiguity and the Assurance of Allies
The policy of calculated ambiguity remains popular among U.S. allies and partners for many of the same
reasons it has remained U.S. policy across Democratic and Republican administrations. First, allies and partners value the
flexibility provided by a policy of calculated ambiguity, which does not commit them to any particular course of
action before the dynamics of a crisis or conflict are fully known—ultimately providing another option to allow diplomacy and deterrence to
have an effect. Second, allies and partners are the closest geographically to many of the nuclear and strategic non-nuclear threats that may compel the United States to consider threatening nuclear first use for deterrence. During the Cold War, such a scenario typically involved a massive Soviet conventional attack that NATO could
only hope to stop with the early first use of nuclear weapons to deny the Soviet Union a victory. Today, such a scenario could
hypothetically involve the United States consulting with South Korea and Japan on nuclear employment to
prevent or terminate large-scale North Korean chemical weapons use.
It is important to note in this regard the role that geography plays in U.S. nuclear declaratory policy—a factor that few government or nongovernment analysts have fully examined. Blessed with two large oceans on the east and west, and friendly neighbors to the north and south,
the United States has leveraged its political and economic ties to form alliances and partnerships around the world, mainly through its
formidable naval capabilities. States in Europe and Asia have found friendship with the United States to be mutually beneficial both
economically and militarily as they face hostile hegemonic powers on their respective continents. To reinforce its commitments to the security
of its allies and partners, the United States developed a system of military bases overseas where it could land troops and weapons in the event
of a crisis or conflict.
Thus, given its relative isolation from its allies and partners, the United States developed military declaratory policies that essentially promised to come to their aid in case of an adversary’s aggression, but mobilizing and transporting immense numbers of U.S. conventional forces, both personnel and weapons, would take a great deal of time—time that may not necessarily be available if the situation was severe for an ally
or partner.
Therefore, the United States, allies, and partners have valued keeping open the option of nuclear first
use as one way to minimize the problem of the prolonged time it takes to mobilize and transport overwhelming conventional forces from the
U.S. homeland to the spot of a crisis or conflict. As the late strategist Colin Gray wrote towards the end of the Cold War, “Because of the
geographical asymmetry between the superpowers and given the interests most likely to be at immediate stake in a conflict, the principal
burden of decision regarding nuclear escalation is likely to be borne by NATO and the United States.”16 In other words, given the
aggressive nature of states like China, Russia, and North Korea, and given the geographic proximity of U.S. allies to those states, and the large distance between the United States and those allies, the decision about the need to employ nuclear weapons first will likely weigh more heavily on the United States and its allies. U.S. officials view all of these factors as increasing the importance of calculated ambiguity and the
weapon systems and policies that make it credible to allies, partners, and potential adversaries.
U.S. officials have only rarely ever considered dropping the policy of calculated ambiguity in favor of a policy of nuclear no first use or sole
purpose, in no small part because allies and partners have consistently favored the status quo. The Obama
administration reportedly considered adopting a no first use or sole purpose policy twice, once at the beginning of the
administration around 2010, when it was writing its Nuclear Posture Review, and once toward the end of the administration in 2016, as
President Obama was close to leaving office.17 Multiple senior Obama administration officials have recounted how allied officials expressed their profound opposition to such a change in U.S. nuclear declaratory policy. For example, Gary
Samore, White House Coordinator for Arms Control and Weapons of Mass Destruction, Proliferation, and Terrorism, stated, “So we wanted to
make sure that our allies knew that our new negative security assurance would not jeopardize our commitment to their security. And for the
same reason, we are obviously not prepared to do ‘no first use’ or ‘sole purpose’ because that could raise questions about our commitment to
use the full range of our military forces to protect friends.”18 Or, as Robert Einhorn, Special Advisor for Nonproliferation and Arms Control at
the Department of State, said at a rollout event for the 2010 NPR, “In our discussions with allies and friends around the world—and we had
many frequent contacts with those friends—they indicated to us that such a radical shift [sole purpose] in [sic] U.S. approach could be
unsettling to them.”19 Indeed, subsequent investigation by academic researchers and extensive interviews with Japanese officials revealed that
Japanese leaders were “relieved” that the 2010 NPR did not issue a nuclear no first use or sole purpose
policy.20
Later, in 2016, the Obama administration revisited the issue of perhaps issuing a nuclear no first use policy. Jon Wolfsthal, who at the time was
the Senior Director for Arms Control and Nonproliferation at the National Security Council, recounts how leaders in the U.S. Department of
Defense were opposed to such a change because of concerns about its effects on U.S. allies Japan and South Korea.21 When news about how
the Obama administration was considering a nuclear no first use policy leaked to the press, Wolfsthal recalls:
… we got a call from [Japanese] Prime Minister Abe’s office objecting to no-first-use adoption… We had visits from Japanese officials.
And it had almost nothing to do with North Korea and it had almost everything to do with China, the idea that somehow if we
were to adopt no-first-use, it would be seen by China as reducing our commitment to Japan, and therefore it would reduce Japanese security. And when we made the argument that it is not credible for the United States to threaten the use of nuclear weapons first against China and that eliminating that would make our retaliatory threat much more credible, that was not an argument that was convincing to the
Japanese government.22
Contemporaneous reporting indicates that it was not only Japanese officials who objected to changing U.S. nuclear declaratory policy; officials from the United Kingdom, France, Germany, and South Korea also voiced their concerns to U.S. officials.23
East Asian prolif triggers inadvertent AND deliberate nuclear war. Independently, acquisition alone sets off a nuclear domino effect that spreads to the Middle East.
Cimbala ’23 [Stephen and Adam Lowther; June 2023; Ph.D. in Political Science from the University of
Wisconsin, Distinguished Professor of Political Science at Penn State University; Ph.D. from the
University of Alabama, Director of Research and Education at the Louisiana Tech Research Institute;
Springer, “Nuclear Danger in Asia: Arms Races or Stability?” p. 266-280]
Unstable command and control over nuclear forces could be joined at the hip to nationalist or
religious hostility in Asia. During the Cold War, the Americans and Soviets competed on the basis of political ideology: communism
versus capitalism. Both of these ideologies were rooted in Western philosophy and history and neither, with the exception of lunatic fringes on
both sides, anticipated an inevitable final day of judgment between the two systems. Despite significant differences in military strategic
doctrine, the USA and the Soviet Union established an ongoing process of nuclear arms control that helped to stabilize their political
relationship and to avoid conflicts based on misinformation and mistaken assumptions about one another’s intentions. In a sense, more than
four decades of “nuclear learning” occurred as between the two nuclear superpowers that lasted to the very end of the Cold War and even
beyond the demise of the USSR.7
In Asia, the next decade or two may witness the combination of absolute weapons in the hands of
leaders with apocalyptic motivations or regionally hegemonic objectives. To be clear: it is not asserted here that the leaders
of nuclear powers in Asia will be less “rational” than their European counterparts. Rationality is a loaded, and a subjective, term. In politics, it
implies a logical or logically intended connection between political ends and means.8 Leaders in Asia will have political objectives that differ from those of the Cold War Americans and Soviets: not illogical in their own terms, but less road-tested against accidental, inadvertent, or deliberate escalation to nuclear war under the stress of political crisis and ambiguous intelligence.
Military strategy is the realm of logical paradox and oxymoronic truths. Strategies and policies intended as defensive can, in this paradoxical
world, appear provocative and offensive to other states. For example, leaders in Asia might misconstrue or mistakenly apply the strategy of preemption. It is not an offensive strategy but a defensive one. Preemption is motivated by the expectation that the opponent has already launched an attack or is about to. It is dangerous on account of its “defensiveness”: leaders misperceive that they are already under attack based on deficient and misleading indicators of warning, or on mistaken assumptions about enemy intentions and capabilities.9
Another possible path to war, based on twenty-first century military realities, is the deliberate use, or threatened use, of weapons of mass destruction, including nuclear weapons. We noted already that this use might occur as part of “anti-access” or “area denial” strategies in Asia. Hostile powers could employ the threat of nuclear first use against American allies or forward-deployed US forces in order to deter American intervention in the region against their interests.
This is an obvious stratagem for China to use in case it decides to forcibly disarm Taiwan or otherwise create a military fait accompli in the
Western Pacific contrary to the US interests.10 China would not need to use nuclear or other weapons of mass destruction in order to
accomplish a number of its possible objectives in the region, absent the US military intervention. The challenge for China would be to deter or
defeat US military intervention in favor of Taiwan or another threatened interest. More accurate ballistic missiles of various ranges, improved
air defenses and C4ISR (command, control, communications, computers, intelligence, surveillance and reconnaissance) and expertise in
cyberwar could augment Chinese access denial capabilities—with or without nuclear weapons.11 In addition, the combination of
nuclear and cyber capabilities in a USA–China confrontation could create a scenario of escalation
from conventional war into a nuclear crisis. As Andrew Futter has noted:
This is a particular concern given the fact that China is thought to share some parts of its C2 system for both nuclear and conventional forces.
This risk is, in turn, likely to have implications for China’s ‘No First Use’ nuclear posture, particularly when cyber is combined with US ballistic
missile defence plans and conventional global strike capabilities.12
Nor is this all—some Chinese policy and strategy discussions suggest a potential for seamless transition from conventional to nuclear war once an enemy has attacked China’s vital interests. China’s expectation is that the most important
regional wars of the future will be conventional conflicts under the shadow of nuclear deterrence. Therefore China may adhere to a notion of
“double deterrence” based on the combined or sequential use of conventional and nuclear missile brigades, with nuclear weapons as “a
backstop to support conventional operations.”13
North Korea, whose small nuclear arsenal might also play an “access denial” role, is the best current example of an otherwise strategically
insignificant state whose nukes have placed it in a pivotal position for regional stability. Without an agreement to verifiably dismantle its
declared nuclear weapons capability, North Korea threatens serial production of nuclear weapons for access denial to the Americans, for
political intimidation of South Korea and Japan, and for possible sale to third parties, including terrorists. The USA and South Korea could defeat
North Korea in a war if it came to that, but such a war on any scale would be devastating for South Korean civilians. Therefore, Seoul prefers
détente and an open door for eventual unification of the two Koreas, as opposed to coercive diplomatic or military pressure against Pyongyang.
It may turn out that North Korea eventually abandons its apparent commitment to membership in the ranks of nuclear powers, although the
nuclear saber rattling of Supreme Leader Kim Jong-un is disconcerting in this respect. It is also too early to tell whether the P-5 plus one (the
USA, Britain, France, Russia, China, and Germany) agreement with Iran in 2015 to freeze its nuclear ambitions short of weaponization will have
transitory or enduring effects. If these two cases slip the leash of nonproliferation, others are almost certain to follow, and the geostrategic logic of political rivalry in Asia and in the Middle East will be tightly bound up with a high probability of WMD, including nuclear, use. The USA and allied diplomacy
with regard to actual and potential nuclear states in Asia will have to combine carrots with sticks in
order to induce, or dissuade, a repeat performance of July and August 1914, but on a grander scale.14 A worst-case
scenario is not inevitable; there is no deterministic relationship between more weapons and a greater likelihood of bad decisions.15
The next section provides some operational definitions for variables and models one possible, although not necessarily inevitable, Asian nuclear
arms race if proliferation is not contained.
Proliferation in Asia
The Cast of Characters
In this section, a model of eight potential nuclear states in Asia, circa 2025–2030, is posited for heuristic purposes. It is not a point prediction,
but a device for generating hypotheses and insights. The states in question include the acknowledged and de facto nuclear weapons states with
strong military presence in Asia: the USA, Russia, China, India, Pakistan, and North Korea. In addition, nuclear weapons have also spread to the
following nuclear-threatened or nuclear-aspiring states in Asia: Japan and South Korea. These assumptions might be falsified in the future:
more, or fewer, states in Asia might acquire nuclear weapons in the next fifteen years or so. However, for ascertaining the effects of
interactions among Asian powers with respect to nuclear deterrence, these selections appear to be substantively appropriate.
<<FIGURE OMITTED>>
The USA behaves strategically in Asia, to be sure, and deploys significant forces there. And some might reasonably argue that globalization will drive more US security challenges into Asia, now that post-Cold War Europe is presumably debellicized. But the USA is still a unique actor, capable of marshaling conventional as well as nuclear forces with global reach and precision strike. This uniqueness in capability and in an
capable of marshaling conventional as well as nuclear forces with global reach and precision strike. This uniqueness in capability and in an
expansive definition of its international security interests puts the USA in a singular category: but not necessarily, as some have argued, that of a global hegemon.16 Truly international hegemony is beyond the scope of any single power today, because power includes both “hard”
economic and military power and “soft” power of cultural values, diplomacy, and moral suasion.17
Figure 1, below, is a matrix of interactions among eight existing and possible future nuclear states in Asia.
In theory, each of the eight states above might have a deterrence problem or potential problem with any one, or more than one, of the others.
Deterrent situations could be dyadic, involving two powers, or more complicated. The size and shape of opposed political coalitions would determine the military requirements for each state. Suppose, for example, that Japan and South Korea allied against North Korea. North Korea might then receive support from China, and perhaps Pakistan. India would probably balance against China and Pakistan. Russia would have conflicting priorities: political and economic commitments to India and China; concern about instability on the Korean peninsula; suspicions about PRC intentions but a preference for détente between Moscow and Beijing; and opposition to Japan’s going nuclear.
<<FIGURE OMITTED>>
Each cell in the matrix of possible deterrent threats or actions might be filled in by P (generally positive expectations about the threat of an
immediate deterrence situation, although all states exist in a general deterrence condition requiring at least passive vigilance); or N (generally
negative expectations about the threat of an immediate deterrence situation); or A (ambivalent expectations). The resulting matrix might
appear as it does in Fig. 2, below.
Expert analysts could disagree about the values assigned to cells in the matrix, and today’s values might not be the same as tomorrow’s. The
matrix is illustrative, not definitive. The three-option ordinal scale offered here could be refined into an interval scale of trust or suspicion with
regard to the expectation of involvement in an immediate deterrence situation. For example, each cell might be scored from one to ten with
one representing the most positive affect (least fearful of a challenge from the designated state) and ten representing the most negative threat
assessment (most pessimistic about a challenge from the state in question). A panel of expert judges could assign various values for each cell in
the matrix and over long periods of time, providing a longitudinal analysis based on pooled judgments.
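As a minimal sketch of the pooled-judgment idea the passage describes (the state list aside, the function name, panel structure, and any scores are invented for illustration; this is not the authors' procedure):

from statistics import mean

STATES = ["USA", "Russia", "China", "India", "Pakistan", "North Korea", "Japan", "South Korea"]

def pooled_threat_matrix(panel):
    """Average each ordered (assessor, assessed) cell across a panel of judges.

    Each judge is a dict mapping ordered state pairs to a 1-10 score:
    1 = least fearful of a challenge from the assessed state,
    10 = most pessimistic about such a challenge.
    """
    return {
        (i, j): mean(judge[(i, j)] for judge in panel)
        for i in STATES
        for j in STATES
        if i != j
    }

Re-running the pooling at regular intervals would yield the longitudinal series of matrices the passage envisions.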
Judgments like those discussed above are not merely exercises for analysts. Political leaders and military planners make these kinds of
judgments every day. Threat assessments, even against nominal allies, are accompanied by espionage against allies, adversaries, and
nonaligned states. Note how alliances, both soft and hard, complicate assessments as simplified in the preceding matrices. If, for example,
many of the “ambivalent” states (positive or negative) with respect to various partners were to shift suddenly into the positive (P) or negative
(N) column, dramatic adjustments in the definition of adversaries and in the requirements for deterrence would result. Something like this
happened in the years immediately preceding World War I, when alliances that had previously been inchoate and flexible began to harden and
reify into antagonistic blocs. As a result, the flexibility of alignment necessary for stability in a multipolar system disappeared into the rigidity of
dichotomous thinking about friends and enemies.
Threat assessments pertinent to nuclear deterrence assume reliable intelligence about the actual intentions and capabilities of adversaries.
Intelligence about states’ intentions is notoriously suspect, on the evidence of history. Intentions of policy
makers are often deliberately veiled, especially if those intentions include plans for preemptive attack. In theory, military capabilities should be
more “objective” to enumerate and to assess than are the intentions of states. But capabilities can also lend themselves to misestimation.
Many states have gone to war based on faulty net assessments that underestimated enemy
capabilities and overestimated their own. The capabilities of military forces are bound up with the skills of the commanders who
are using them and the political leaders to whom those commanders are accountable. As to the latter: Napoleon and Hitler, despite the
excellence of their militaries, managed to drive their armed forces into the ground and their states into political defeat by insisting upon
objectives and war plans that were beyond the reach of their military capabilities.
In the case of nuclear forces, capabilities are measured not only by their destructiveness in war but also by their credibility in deterrence. Since proving a negative is difficult, the absence of
war in a given case may, or may not, suffice to prove that deterrence “worked.” Nevertheless, most political leaders and military planners would prefer to achieve their objectives while
avoiding an outbreak of nuclear war: unless those leaders and planners are highly risk acceptant or motivated by absolutist goals not amenable to compromise. Since nuclear deterrence is
preferable for most to nuclear war, capabilities for nuclear deterrence matter more than capabilities for nuclear war fighting. However, the matter is complicated by the fact that a convincing
nuclear deterrent must be capable of performing its retaliatory missions when called upon to do so.
Measuring the relationship between nuclear capabilities and their contribution to deterrence involves several steps, carried out in the next section. Hypothetical nuclear forces are posited for
these eight possible nuclear-armed states in 2025–2030 and are subjected to force exchange modeling in order to see how well they would be expected to perform under various conditions.
From these findings, some hypotheses and generalizations about stability in a future multipolar nuclear Asia can be inferred.
Forces and Outcomes
For purposes of analysis, a notional force mix of land-based missiles, sea-based missiles, and bomber-delivered weapons is assigned to each of the eight states. The US and Russian forces are assumed to be constrained by New START limits on deployed warheads and intercontinental launchers and New START counting rules (at least through 2026).18 Over-specification and excessive detail for each arsenal would be mistaken, for several reasons. First, the precise weapons systems deployed by each power in the years ahead will be determined by their future threat perceptions, economic and technological capabilities, and political ideologies and affinities. Second, for states with intra-regional rivalries, land-based missiles and bombers with short or
medium ranges suffice to deliver “strategic” blows as well as the longer range intermediate and intercontinental launchers. Third, not every state may prefer a “triad” of land and sea-based
missiles and bombers, but we have assigned each state in the analysis a “triad” of sorts in order to “level the playing field” of force survivability. For these and other reasons, our notional
forces are broad-gauged composites and not point predictions about detailed capabilities.
Table 1, below, summarizes the sizes of the operationally deployed strategic (i.e., capable of strategic or decisive effect) weapons for each state. The sizes of their total forces and their mixes
of launchers vary with our estimates of their future capabilities and strategic settings. For example, states with large national territory (Russia, China) can deploy land-based missiles more
survivably than states with less territory (Japan). And states with advanced technology can deploy with more confidence a fleet of ballistic missile submarines than states with a less developed
research and development capability.
In addition, Table 1 also calculates the numbers of surviving and second strike retaliating warheads for each state. Each of the notional prewar forces is subjected to counterforce first strikes
from state or states “X” (unknown). The surviving and retaliating forces left to each state after absorbing a first strike are calculated using standard methodology for estimating how many land-based missiles (ICBMs or missiles of shorter range), submarine-launched missiles (SLBMs), or bomber-delivered weapons remain. Although we do not know the exact identity of future attackers against any one or more of these states, analysts can estimate with reasonable confidence (based on historical studies and knowledge of weapons and command-control system performances) the probable percentages of surviving weapons systems in each case. The methodology used here is extrapolated from road-tested force exchange models used in other
studies.19
As might be expected, the larger forces have more total survivable and retaliating warheads, compared to the smaller prewar forces. However, not all weapons systems survive equally. Much
depends on each state’s mix of launch platforms, as well as the conditions under which retaliatory launch takes place. Four possible conditions are examined in Table 1: (1) forces are on
generated alert and launched on warning (so-called maximum retaliation); (2) forces are on generated alert, but are launched only after riding out an attack (intermediate retaliation); (3)
forces are on day to day or normal peacetime alert, but are launched on warning of attack (also intermediate retaliation); (4) forces are on day to day alert and ride out the attack (minimum
retaliation).
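As a minimal sketch of the bookkeeping such a force-exchange calculation involves (the survivability fractions and the notional triad below are invented placeholders, not coefficients from the authors' model, which works dyad by dyad):

PLATFORMS = ("icbm", "slbm", "bomber")

# Assumed fractions of each platform's warheads that survive a counterforce
# first strike and retaliate, under the four conditions listed above.
# These values are illustrative placeholders only.
SURVIVABILITY = {
    "generated_launch_on_warning":  {"icbm": 0.95, "slbm": 0.90, "bomber": 0.85},  # (1) maximum
    "generated_ride_out":           {"icbm": 0.25, "slbm": 0.80, "bomber": 0.60},  # (2) intermediate
    "day_to_day_launch_on_warning": {"icbm": 0.90, "slbm": 0.60, "bomber": 0.30},  # (3) intermediate
    "day_to_day_ride_out":          {"icbm": 0.15, "slbm": 0.55, "bomber": 0.10},  # (4) minimum
}

def retaliating_warheads(force, condition):
    """Second-strike warheads remaining for a notional force mix."""
    fractions = SURVIVABILITY[condition]
    return sum(force[platform] * fractions[platform] for platform in PLATFORMS)

# Example: a notional triad of 400 ICBM, 600 SLBM and 200 bomber warheads.
triad = {"icbm": 400, "slbm": 600, "bomber": 200}
for condition in SURVIVABILITY:
    print(condition, round(retaliating_warheads(triad, condition)))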
<<TABLE OMITTED>>
The summaries in Table 1 provide important indicators about static stability, but are not necessarily informative with respect to dynamic nuclear stability. To estimate the latter, we need two
measures of dynamic stability as between the performances of nuclear retaliatory forces. The first measure is generation stability: the ratio of the number of weapons surviving and retaliating
after riding out the attack, compared to the decision for launch on tactical warning of attack, under each of two conditions: (1) forces are on day to day alert; or, (2), forces are on generated
alert. The difference between the two conditions is expressed as a percentage: the higher the percentage, the more stable the situation. The second measure of dynamic stability is “prompt
launch stability,” or the numbers of weapons surviving and retaliating on day to day alert, compared to generated alert, under each of two conditions: (1) forces ride out the attack and then
retaliate; or, (2), forces are launched on tactical warning of attack.
The measures of dynamic stability enable us to go beyond the mere calculation of second strike survivability and to ask about the relational expectations of potential adversaries. In particular,
we can see the relationship between alternative force sizes and force structures, in addition to the options available to decision makers with respect to their dependency on prompt compared
to delayed launch, or on generated alert compared to day to day alert.
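Taken literally, the two measures reduce to ratios of surviving-and-retaliating warheads, here denoted \(W\); the following formalization is one plausible reading of the definitions as stated, in our notation rather than the authors':

\[ \text{generation stability}(a) = 100 \times \frac{W_{\text{ride out}}(a)}{W_{\text{launch on warning}}(a)}, \qquad a \in \{\text{day-to-day alert},\ \text{generated alert}\} \]

\[ \text{prompt launch stability}(d) = 100 \times \frac{W_{\text{day-to-day}}(d)}{W_{\text{generated}}(d)}, \qquad d \in \{\text{ride out},\ \text{launch on warning}\} \]

On either measure, the nearer the percentage is to 100, the less the state in question depends on early alerting or hair-trigger launch to preserve its deterrent, and hence the more stable the posture.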
Estimation of dynamic stability will, of course, require more fine-grained analysis of dyadic relationships as between potential adversaries. For example, in 2017 the US relationship with North
Korea deteriorated in the wake of repeated North Korean missile and nuclear tests and predictably negative USA and United Nations responses. Was nuclear deterrence stability as between
the USA (and its Asian allies) and North Korea at risk of inevitable failure, given a continuation of North Korean brinkmanship and military adventurism? Part of the problem was the lack of
intelligence about North Korean decision-making. North Korean leader Kim Jong-un might understand the US logic of nuclear deterrence rationality very incompletely, if at all. On the other
hand, Kim Jong-un might understand American deterrence rationality but not accept it: his version of brinkmanship was self-taught, without the nuclear learning that the USA, the Soviet
Union, and China had gone through during the Cold War.
We might use the simple matrix below to illustrate some of the challenges facing the US leaders in dealing with North Korea on issues of nuclear crisis stability. On one axis, we have the basic
assumption about the decision rationality of the North Korean leadership. On the other axis, we note whether the USA would prefer to rely on prompt or delayed launch in the face of credible,
but not entirely certain, warning of nuclear attack.
From the perspective of the preceding matrix, it might appear that cells #1 and #4 produced “consistent” or “symmetrical” outcomes. That is: other things being equal, we might expect the US
authorization for prompt instead of delayed launch, under plausible conditions of North Korean attack, if there were no assumption that North Korea understood and accepted the US
deterrence rationality. By similar reasoning, we might expect a US preference for delayed launch if the USA assumed that North Korea understood correctly and accepted the US deterrence
rationality. On the other hand, cells #2 and #3 are “inconsistent” or “asymmetrical” in raising the possibility of the US preference for delayed launch despite American doubts that North Korea
understood correctly or accepted the US deterrence rationality, or the possibility of the US preference for prompt retaliatory launch even if American leaders believed that North Korea
understood and accepted US rationality. (In the preceding discussion, “accepted” does not connote approval: it simply means that North Korea accepted the fact of US deterrence being what it is, regardless of North Korea’s opinion of it.)
<<FIGURE OMITTED>>
The matrix depicted in Fig. 3 is merely illustrative of the challenges that nuclear-armed states in Asia might face in their efforts to manage a nuclear crisis. Additional dimensions might be
added to the matrix: for example, about the quality of US intelligence with respect to North Korean intentions and capabilities. A comprehensive matrix of this sort would be as multidimensional as the ingenuity of policymakers, analysts, or scholars wants to make it. However, a smaller matrix is more parsimonious in getting at the variables that matter for the
discussion at hand.
Preliminary Findings and Indications
Several conclusions follow from the preceding analysis. First, strategies matter. States can guarantee larger numbers of surviving and retaliating warheads against plausible first strikes by
alerting more forces sooner or by launching “on warning.” However, these operational predilections may be destabilizing from the perspective of secure crisis management. On the other hand,
military planners will press for alerted and rapidly launched forces as an alternative to losing their deterrents. For example: although, apart from the USA, Russia is and will probably remain the largest fish in the Asian nuclear pond, its long-range air forces are at risk under any conditions of day to day alert. So, too, are those of its partners and rivals in Asia.
Second, force structures also matter. Sea-based missiles are more survivable than land-based missiles. Where states can afford to build, deploy, and control a ballistic missile submarine force,
it improves survivability and reduces their dependency on hair trigger alert or launch postures. An alternative to SSBN forces for smaller or less developed economies is additional reliance on
cruise missiles. Cruise missiles can be based at sea on surface ships or on submarines, on aircraft or on land. They are “slow fliers” compared to ballistic missile “fast fliers” and highly survivable
because of their flexible deployments and small signatures. Deployed in sufficient numbers and on diverse platforms, cruise missiles could preclude first strike vulnerability for even the
smallest nuclear states. And cruise missiles are highly accurate.
Third, nuclear war is unlikely to occur by means of a surprise attack “out of the blue” absent prior political confrontation. Therefore, states’ forces will, in most instances of nuclear crisis
management, already be highly alerted. The most probable decision among operational postures as listed above will be that between riding out the attack and retaliating or launching on
warning of attack. The results summarized in Tables 1 and 2, above, show that all states suffer a considerable penalty by waiting to ride out the attack as opposed to launching on warning of
attack. However, the same tables also show that, even after riding out an attack on generated alert, each state retains numerous surviving and retaliating weapons. So: in the most likely “real
world” situation of riding out an attack (with alerted forces), the smaller as well as the larger nuclear powers can guarantee unacceptable damage to any rational attacker.
Fourth, the quality of national military command and control systems matters a great deal in maintaining nuclear stability with eight, or fewer, nuclear powers in Asia. Command and control
systems must provide for negative control against accidents or usurpation of authority, and for secure positive control to guarantee at least minimum or assured retaliation. Nuclear command
and control systems are based on the soft power of knowledge-intensive technologies instead of the hard power of metal and mass. Disorganized or rigid command-control systems may push
decision makers into unnecessary reliance on simplified options and fast triggers. Political accountability, especially during a nuclear crisis, matters just as much. Whose fingers are on the
button in, for example, North Korea or Pakistan? Who is in charge? Who has the authority and control to start a war—or to end one? The answers to these questions may determine the
prospects for peace in Asia.
<<TABLE OMITTED>>
Conclusions
The spread of nuclear weapons in Asia poses two kinds of threats to international peace and
security. The first is that of a deliberate decision taken for nuclear first strike, either in mistaken
fear of imminent attack, or as a preventive war to disable a rising and presumably threatening opponent. The second
nuclear danger in Asia is that of inadvertent escalation growing out of a conventional war, and related to this,
the possibility of accidental or inadvertent use of nuclear forces due to military usurpation of civil
authority or technical malfunction.20 However, there is no reliable metric for relating the numbers of nuclear weapons states
to the probability of nuclear first use. States’ internal decision-making processes will drive these decisions, for better or worse. Although the
international system imposes certain constraints on the behaviors of current and aspiring nuclear weapons states, the system is also the
derivative of their respective national priorities and threat perceptions.
In addition, the possibility of regional arms races in Asia and elsewhere increases the significance of
the nuclear paradox in American defense planning. From one perspective, the US nuclear modernization is
required, not only to deter nuclear attack or blackmail against the USA, but also to prevent coercion or war against its
regional non-nuclear allies. Withdrawal of the American nuclear umbrella and the extended deterrence provided
by superior US nuclear forces could increase the risk of war in strategic Asia. On the other hand, the USA must also pursue
with Russia (and perhaps others) nuclear arms limitation and reduction agreements: otherwise, the spread of nuclear weapons to new state
and possibly non-state actors will be encouraged.
NFU greenlights territorial conquests by revisionist powers---goes nuclear AND cracks
the world order.
Blanc ’19 [Alexis A. and Lisa Saum-Manning; Summer/Fall; PhD, Political Science, George Washington
University, Political Scientist at RAND Corporation, former Senior Program Manager at the Department
of Energy’s National Nuclear Security Administration; PhD, Political Science, UCLA, Political Scientist at
RAND Corporation; SAIS Review of International Affairs, “The No-First-Use Debate: Arguments,
Assumptions, and an Assessment,” vol. 39 no. 2]
The debate over NFU has resurfaced, this time driven by the US legislative branch and in the context of a remarkably different security
environment.19 The 2018 National Defense Strategy presented in starkest terms the return to great power competition. The report expanded
the threat beyond the two superpowers of the past century to also include China, North Korea, and Iran as challengers of the
international order.20 In recognition of these concerns we offer three scenarios that explore the utility of retaining the
option to use nuclear weapons first in a conflict: (1) a Russian ground invasion of one or more Baltic states; (2)
a Chinese amphibious assault on Taiwan; and (3) a North Korean provocation that escalates into a full-scale
war on the peninsula. These cases are all low-probability, yet high-consequence, scenarios. The stakes would likely be nothing
less than the potential for nuclear war and the reordering of the US alliance-structure that has underpinned the post-World War II international order.21 These three are also arguably the most stressing scenarios, because, as we will
show, each reveals the shortfalls of existing US and allied conventional capabilities to prevail. Finally, these
cases provide the most compelling scenarios where nuclear weapons might credibly be employed.
Thus, these vignettes allow us to posit the potential role of nuclear weapons in each case and broadly consider what the outcome might be if a
conventional conflict escalated to nuclear use.
The first scenario postulates a Russian invasion of Latvia, Lithuania, Estonia, or some combination of the three. Defense
experts note that since Russia’s annexation of Crimea and Donbas, all three states have felt an existential threat from Russia, and
Russia has asserted a right to “protect” the Russian ethnics resident in neighboring states.22 Russia enjoys
local conventional superiority in terms of manpower and firepower vis-à-vis all three states.23 In a scenario
where the twenty-two highly mechanized battalions from Russia’s Western Military District advance on local forces in the Baltic states—
defended by roughly twelve lightly armored NATO battalions—experts assess that Russian forces could come within striking distance
of each state’s capital within three days.24 Russia also benefits from proximity and shorter logistics
lines than NATO, which enable Russia to initiate an offensive and flow reinforcements to the theater,
thus generating and sustaining combat power.25 Whereas estimates suggest the US would require months to
deploy a comparable number of battalions to the battlespace, Russia could redeploy as many as 150,000 personnel in weeks.26
Given the conventional disparity and confronted with a fait accompli seizure of a NATO member state, the Alliance could face clear pressures to
escalate beyond the conventional options executed during contemporary multinational exercises, very possibly including limited nuclear use. A
brief examination of the nuclear capabilities in the theater as well as potential targets, however, does not provide confidence in the utility of
this option. The B61 bomb is the only nuclear weapon deployed in Europe.27 The bomb has five different yields, potentially providing more
flexibility in use options. However, the putative target set—given the difficulties delivering the weapon—is limited.28 It seems logical to
conclude that Russia would have incorporated the possibility of NATO’s launching at least a demonstrative nuclear strike, perhaps over the
ocean, into its risk calculus before launching such an invasion. Thus, it is difficult to see how such a tactic would materially change Russia’s
calculus and coerce the state into reversing course. Targeting the invading Russian force itself would essentially result in a Pyrrhic victory for
whichever Baltic state were occupied. Alternatively, if the US targeted Russia’s integrated air defense systems, or Russia’s homeland, the
escalatory potential is high, given Russia’s declaratory statements that such attacks would elicit nuclear retaliation.29 Striking dispersed Russian
brigades as they maneuvered into the area would require hundreds of nuclear weapons to have a military effect and would also likely result in
significant civilian casualties.30 Thus, if Russia does not use nuclear weapons first, and limits its aims in the conflict, nuclear first-use by the
United States seems largely implausible at present.31
The second scenario postulates a case where China initiates an amphibious assault against Taiwan, again perhaps an escalatory action in
response to domestic pressures or an external insult. Sophisticated Chinese anti-access area denial capabilities have garnered the most
attention in terms of the challenge for US force projection capabilities.32 Recent analyses suggest that China’s current generation of
missiles and aircraft would be able to achieve air superiority against Taiwanese and forward-deployed
US forces almost immediately, though China’s ability to launch the requisite amphibious assault force is much less robust.33 But,
China recently launched the first ship of a line capable of credibly executing such an operation and additional ships are well into the
construction phase.34 This new capability aside, analyses suggest massive salvos from China’s large arsenal of precision
ballistic missiles would, “seriously degrade Taiwan’s self-defense capabilities...[leaving] Taiwan with a
profoundly reduced ability to defend itself, leaving itself open to a range of follow-on actions intended to
coerce or conquer it and its people.”35
Similar to Russia, China’s military modernization program has enabled it to acquire local conventional superiority.36
Even if China cannot yet achieve a successful fait accompli assault on the island, such a capability is not in the too-distant future.37 In these and
other scenarios, the US quickly confronts the question of whether to employ nuclear weapons.38 Again, though, the targets for such strikes and
how to manage the risks for escalation are daunting. Even US conventional strikes on the Chinese homeland, let alone nuclear strikes, could
intentionally or inadvertently lead to a nuclear retaliation.39 Again similar to Russia, using nuclear weapons to target Chinese amphibious
invasion forces would require hundreds of nuclear weapons, which would certainly not constitute a limited strike. Further, it seems doubtful
that the military utility of nuclear strikes would be greater than targeting these assets with conventional munitions. Threatening Chinese forces
that had reached Taiwan itself would also be fraught with political risk due to the potential for massive civilian casualties. As in the Baltics
scenario, in a case where China refrains from using nuclear weapons first, it is difficult to identify the targets to strike with nuclear weapons
that would have military utility while constraining the risk of escalation.
The final scenario also begins with a provocative action. In this case, North Korea might again bombard South Korea’s
Yeonpyeong Island. Strikes could escalate into a larger conflict as South Korea implements its
“disproportionate response” doctrine.40 North Korea’s nuclear arsenal has grown to between fifteen and
sixty deployed weapons; by all accounts, it will soon acquire a survivable nuclear force capable of holding the continental US at
risk.41 Eliding North Korea’s nuclear and chemical weapons, the arsenal of conventional missiles and artillery fielded by
North Korea in hardened facilities is capable of inflicting civilian casualties in the hundreds of
thousands in a matter of minutes. Fifty percent of the South’s population centers and economic activity
are within range of these fires, which are capable of launching 500,000 shells per hour for several hours.42 North Korea also
holds a 2:1 advantage in troops vis-à-vis the US and South Korea. Estimates suggest militarily
meaningful US reinforcements will be unable to reach the Peninsula for weeks or months.43 A limited
ground invasion to seize the Kaesong Heights on which this artillery and associated forces are dug in would require at least two infantry and
mechanized corps, with the estimated loss of an entire corps in the offensive.44
Again, confronting an adversary possessing local conventional superiority, the US could face pressure to escalate to nuclear use. However, the
military utility of this option is not particularly compelling, and the prospects for such strikes to avoid escalating the conflict are daunting.
Whereas North Korea can hold at risk numerous military, political and economic assets, the options are less apparent for the US and its allies.
Preemptively striking North Korea’s nuclear arsenal would likely require at least 50 bombs or warheads.45 Not only would such a strike not be
“limited,” but the fallout from it would impact both South Korea and Japan. Striking North Korean conventional positions in the Kaesong
Heights is equally problematic, given the challenge of distinguishing between a limited offensive and an attempted regime change.46 Thus,
the strategic utility of nuclear weapons seems marginal, at a minimum because such a strike carries significant potential for North Korea to
escalate with nuclear strikes on US allies and perhaps even the United States.
These cases illustrate a problem that has plagued US policymakers and military planners since the time the US could no longer claim a
monopoly on nuclear weapons: how to make nuclear deterrence threats credible. Indeed, the seeming intractability of developing operational
plans of targets and deploying the requisite nuclear forces to execute these plans was a key driver for the original “Gang of Four” supporting
moving to an NFU policy.47 So what is to be done? Should the US adopt a declaratory no-first-use policy? The vignettes
suggest that, for the US at least, using nuclear weapons first has dubious military utility. They also suggest that using nuclear weapons first to
punish aggression (rather than deny it) may well result in a Pyrrhic victory. Yet, a distinguishing characteristic of the last
seven decades is the absence of great power war. Has the threat of first-use plausibly played a role in this absence? It
is possible that a no-first-use policy would be destabilizing, opening the door for subsequently
emboldened adversaries to pursue their political aims through conventional war. Not only would such wars be
highly destructive, but they would almost certainly increase the risk of nuclear war. A better question to raise
thus might be, is it worth changing the status quo policy to find out?
Case
They Violate---1NC
The NPR rejected NFU.
NPR 22 (DEP’T OF DEF., NATIONAL DEFENSE STRATEGY: 2022 NUCLEAR POSTURE REVIEW, at 9 (2022))
Declaratory Policy. United States declaratory policy reflects a sensible and stabilizing approach to deterring a range of attacks in a dynamic security environment.
This balanced policy maintains a very high bar for nuclear employment, while also complicating adversary decision calculus, and assuring Allies and partners. As long
as nuclear weapons exist, the fundamental role of nuclear weapons is to deter nuclear attack on the United States, our Allies, and partners. The United
States would only consider the use of nuclear weapons in extreme circumstances to defend the vital
interests of the United States or its Allies and partners.
The United States will not use or threaten to use nuclear weapons against non-nuclear weapon states that are party to the NPT and in compliance with their nuclear
non-proliferation obligations. For all other states, there remains a narrow range of contingencies in which U.S.
nuclear weapons may still play a role in deterring attacks that have strategic effect against the United
States or its Allies and partners.
Declaratory policy is informed by the threat, assessed adversary perceptions, Ally and partner perspectives, and our strategic risk reduction objectives. We
conducted a thorough review of a broad range of options for nuclear declaratory policy - including both
No First Use and Sole Purpose policies - and concluded that those approaches would result in an
unacceptable level of risk in light of the range of non-nuclear capabilities being developed and fielded
by competitors that could inflict strategic-level damage to the United States and its Allies and partners.
Some Allies and partners are particularly vulnerable to attacks with non-nuclear means that could produce devastating effects. We retain the goal of moving
toward a sole purpose declaration and we will work with our Allies and partners to identify concrete steps that would allow us to do so.
That’s a promise.
Mount 22 (Adam Mount, director of the Defense Posture Project and a senior fellow at the Federation
of American Scientists, “The Biden Nuclear Posture Review: Obstacles to Reducing Reliance on Nuclear
Weapons,” Arms Control, January/February 2022, https://www.armscontrol.org/act/202201/features/biden-nuclear-posture-review-obstacles-reducing-reliance-nuclear-weapons)
Whether or not Biden, confronted with political resistance or additional information, changed his mind
on a sole purpose policy, the 2022 NPR demonstrates that the existing process for developing nuclear
weapons policy is deeply flawed. Deputy National Security Advisor Jon Finer optimistically promised
that “this is going to be the president’s posture review and the president’s posture.”19 It is also
possible that, in his final days in office, Biden may find himself delivering another wistful speech lamenting that yet another administration has
failed to establish a sole purpose policy as a guiding principle of U.S. nuclear policy or to significantly reduce reliance on nuclear weapons.
Consequentialism---1NC
Hedonistic act utilitarianism is true.
1. Empiricism---resolving a priori conflicts is impossible because doing so requires credible
moral judgment---only empirical reflection provides sufficient reason for beliefs and
intentions.
Gertler 18. Brie Gertler [Provost for Academic Affairs and Commonwealth Professor in the Corcoran
Department of Philosophy at the University of Virginia]. “Self-Knowledge and Rational Agency: A
Defense of Empiricism.” Philosophy and Phenomenological Research, Vol. XCVI No. 1, 2018.
https://doi.org/10.1111/phpr.12288
This brings us to the second kind of self-knowledge, concerning the outcome of critical self-reflection. This knowledge is arguably more salient for the agentialist’s
case, since it is only the outcome of critical self-reflection that is avowable.
For the deliberative juror, the recognition that my evidence for the defendant’s guilt is weak is closely
linked with the belief I do not believe that the defendant is guilty. What is this link? On Moran’s view, the epistemic right to
believe that there is a link between one’s evidence regarding X and one’s belief about X, is based in the “Transcendental assumption of Rational Thought”: the
assumption “that what I actually believe about X can be determined, made true by, my reflection on X itself” (Moran 2003, 406). But no transcendental
reasoning is required to explain how the close link between the deliberative juror’s belief and (what she
regards as) her evidence puts her at an epistemic advantage, vis-a-vis self-knowledge, relative to the
detached juror. For example, the empiricist can say that the deliberative juror infers that she doesn’t
believe that the defendant is guilty, from reflection on the weakness of her evidence. This inference is
truth-preserving precisely because she occupies the agential position on the suspension of that belief:
she can suspend it directly on the basis of her reason to do so. Alternatively, the empiricist could posit a
monitoring mechanism, which takes as input one’s (assessment of the) reasons for believing or intending, and
delivers as output a belief about what one believes or intends. Such a mechanism will be reliable if the
thinker occupies an agential position on regulating her attitudes, relative to her reasons.
The empiricist alternatives just mentioned are epistemically externalist. The empiricist also has internalist options for explaining
the deliberative juror’s superior epistemic position. The deliberative juror may have evidence to the
effect that, at least when it comes to issues in which she has little emotional stake, her beliefs tend to
conform to her evidence . (Her evidence for this conformance could be empirical: e.g., it could derive
from introspection.) The detached juror would presumably have weaker evidence for the conformance
between his beliefs and his evidence. In that case, the deliberative juror would be more strongly justified in inferring, from the fact that her
evidence regarding the defendant’s guilt is weak, that she does not believe that the defendant is guilty.
Moreover, the empiricist can explain the appeal of the transparency method, as a way of determining
what one believes or intends. This method is appealing because we generally believe that our attitudes
are sensitive to our reasons. If we didn’t believe this, we wouldn’t engage in practical and theoretical
deliberations as a way of shaping our intentions and beliefs. Of course, this belief is sometimes false: attitudes sometimes resist
the force of reasons. But Moran is surely correct that we generally make what he calls the Transcendental assumption of Rational Thought. Depending on the
epistemological views she favors, the empiricist could claim that this assumption is justified or warranted. Or she could maintain that it is an ungrounded
assumption, but that use of the transparency method yields self-knowledge nevertheless, because that method is sufficiently reliable or the beliefs it generates are
“safe”.
It may seem that these empiricist alternatives miss the point. They posit differences in epistemic resources, whereas what is
fundamentally special about critical self-reflection, avowals, and the deliberative stance seems to be
agential rather than epistemic. This latter idea may well be correct. As I said above, the fundamental difference
between the two jurors seems to lie not in how they know their attitudes but, rather, in how each
juror’s reasons affect their attitudes. I have proposed ways of understanding this difference in how
reasons affect attitudes that are amenable to empiricism about self-knowledge. If empiricists can
explain the relevant agential phenomena, they can answer the agentialist challenge. That challenge is further
diminished by the fact that empiricists can explain how occupying the agential position could improve the prospects for self-knowledge.
2. Extinction outweighs---respect for persons as ends-in-themselves normatively
demands sacrifice of the innocent for the greater good.
Cummiskey 90. David Cummiskey [Professor of Philosophy at Bates]. “Kantian Consequentialism.”
Ethics, Vol. 100, No. 3, 1990. https://www.jstor.org/stable/2381810
I. Kant, Consequentialism, and the Sacrifice of the Innocent
In principle, if not in practice, a consequentialist may be required to sacrifice an innocent person for the sake of some greater good. Does the Kantian injunction to
respect the autonomy of persons rule out the sacrifice of the innocent? Contemporary neo-Kantians, and perhaps Kant himself,
seem to think it obvious that Kant has provided a justification for agent-centered constraints—forbidding
the killing of one, for example, to save two others. I shall argue that, despite current philosophical opinion, Kantian respect for
persons, treating persons as ends-in-themselves, does not generate agent-centered constraints on the
maximization of the good; and, thus, in principle, if not in practice, Kantian normative theory does not
rule out the sacrifice of the innocent.
In criticizing consequentialism most contemporary Kantians appeal to Kant's second and third formulations of the categorical imperative: for Kant,
autonomy was tied to the notion of free and equal rational beings pursuing their legitimate ends in what
he called a Kingdom of Ends. To respect the autonomy of persons is to "act in such a way that you treat
humanity whether in your own person or in the person of any other never simply as a means but always
at the same time as an end" (GMM, p. 429; CPR, pp. 87, 131).1 The moral law cannot require us to use persons as a means only: it would not treat them as free and equal members of a Kingdom of Ends. Since consequentialism may sometimes require us to aggress against some
and equal member of a Kingdom of Ends. Since consequentialism may sometimes require us to aggress against some
persons in order to aid others, it does not respect persons and is thus unfit for the supreme principle of
morality. In short, consequentialism does not respect the autonomy of persons because it may allow
sacrifices which fail to treat persons as ends-in-themselves.
Despite its Kantian tone and its intuitive appeal, there is no defensible Kantian pedigree for this type of
objection. Indeed, we shall see that the most natural Kantian interpretation of the demand to respect
persons generates a form of consequentialism. It follows that a conscientious Kantian moral agent
may be required to sacrifice the innocent because it will promote the good.
This claim is in some part familiar. Utilitarians from Mill through Hare have maintained that universalizability, which
they take to be the essence of Kantianism, is a purely formal principle which is compatible with virtually
any normative principle, including principles which require the sacrifice of the innocent. Thus, they say, their
theories satisfy the Kantian requirement of universalizability and so are not open to criticism by Kantians. My arguments are distinct in three ways from this time-honored approach, however, for the good reason that this time-honored approach does not do the work it sets out to do.
First, I follow most modern Kantians and focus on the formula of the end-in-itself, not the often criticized formula of universalizability. Even if the formula of
universalizability is trivial,2 it is now recognized that the formula of the end-in-itself, which expresses the matter or objective end of moral action, need not be. This
formula requires independent consideration and evaluation by anyone wishing to refute Kantian conclusions. This article considers this perhaps more fruitful
formulation of Kant's categorical imperative and finds that even it cannot justify the rejection of consequentialism.
Second, critics, like proponents, must consider the overall development of Kant's normative theory, not
just the formulations of the categorical imperative. Thus, one needs to consider Kant's later development of his theory in the
Metaphysics of Morals.3 To this end, this article will consider not just the formula of the end-in-itself, but the
relevance of Kant's distinctions between duties of justice and duties of virtue, between external and
internal legislation, between maxims of action and maxims of ends, and between perfect and
imperfect duties. The point, of course, is not to provide a survey of Kant's distinctions, but to see whether Kant's later articulation of his normative
theory provides any reason for rejecting consequentialism.
Third, this article will show that Kant not only fails to refute consequentialism but actually provides support for a
form of normative consequentialism. The formula of the end-in-itself, probably the most influential
formulation today among Kantians, most naturally leads to just the sort of conclusions about action that
many neo-Kantians wish to avoid: that sacrifice of the innocent may be morally necessary.
Given that so much of Kant's moral theory is clearly non-consequentialist, this last, most radical, contention needs clarification. Of course, I am aware of the
deontological emphasis of some of Kant's specific examples and, thus, I am not suggesting that Kant defended consequentialism. The point, though, is
that Kant's explicit rejection of consequentialism rests simply on intuitive reliance on commonsense
morality, rather than on any argument he provides.5
I do not deny that these deontological intuitions have their appeal. Surely, however, when neo-Kantians appeal to Kant, in arguments against normative
consequentialism, they do so in the belief that Kant has provided some normative justification for specific deontological intuitions. They appeal to the force of
Kant's arguments, not just the authority of Kant's intuitions. Whether those intuitions are supported by explicit or even
implicit argument of truly justificatory force is, thus, a crucial issue. Just as one cannot assume that
utilitarianism generates a practically undefeasible right to liberty simply because Mill argues that it does,
one cannot take it for granted that Kant's theory generates agent-centered constraints. Indeed, in Kant's case
there is a significant gap between Kant's basic normative theory and his endorsement of commonsense deontological morality. We shall see that Kant's normative
theory does not provide the material to fill this gap.
Familiar arguments from consequentialists have failed to show why this is so, and have thus failed to convince contemporary Kantians and those sympathetic to
them. The reason for this is that the flaw lies much deeper than has been seen—so deep that it is at the heart of Kant's normative theory itself. Kant's
normative theory logically cannot provide a refutation of all forms of consequentialism because it is
actually a form of consequentialism: namely, Kantian consequentialism.
3. Moral Substitutability---deontological ethics cannot provide the necessary enablers
required for ethics to guide action.
Sinnott-Armstrong 92. Walter Sinnott-Armstrong [Chauncey Stillman Professor of Practical Ethics in
the Department of Philosophy and the Kenan Institute for Ethics at Duke]. “An Argument for
Consequentialism.” Philosophical Perspectives, Vol. 6, 1992. https://www.jstor.org/stable/2214254
5. Against Deontology
So defined, the class of deontological moral theories is very large and diverse. This makes it hard to say anything in general about it. Nonetheless, I will
argue that no deontological moral theory can explain why moral substitutability holds. My argument
applies to all deontological theories because it depends only on what is common to them all, namely,
the claim that some basic moral reasons are not consequential. Some deontological theories allow very many weighty moral
reasons that are consequential, and these theories might be able to explain why moral substitutability holds for some of their moral reasons: the consequential
ones. But even these theories cannot explain why moral substitutability holds for all moral reasons, including the non-consequential reasons that make the theory
deontological. The failure of deontological moral theories to explain moral substitutability in the very cases
that make them deontological is a reason to reject all deontological moral theories.
I cannot discuss every deontological moral theory, so I will discuss only a few paradigm examples and show why they cannot explain moral substitutability. After
this, I will argue that similar problems are bound to arise for all other deontological theories by their
very nature.
The simplest deontological theory is the pluralistic intuitionism of Prichard and Ross. Ross writes that, when someone promises to do
something, 'This we consider obligatory in its own nature, just because it is a fulfillment of a promise,
and not because of its consequences.'12 Such deontologists claim in effect that, if I promise to mow the
grass, there is a moral reason for me to mow the grass, and this moral reason is constituted by the fact
that mowing the grass fulfills my promise. This reason exists regardless of the consequences of mowing
the grass, even though it might be overridden by certain bad consequences. However, if this is why I
have a moral reason to mow the grass, then, even if I cannot mow the grass without starting my mower,
and starting the mower would enable me to mow the grass, it still would not follow that I have any
moral reason to start my mower, since I did not promise to start my mower, and starting my mower
does not fulfill my promise. Thus, a moral theory cannot explain moral substitutability if it claims that
properties like this provide moral reasons.
Of course, this argument is too simple to be conclusive by itself, since deontologists will have many responses. The question is whether any response is adequate. I
will argue that no response can meet the basic challenge.
A deontologist might respond that his moral theory includes not only the principle that there is a moral
reason to keep one's promises but also another principle that there is a moral reason to do whatever is
a necessary enabler for what there is a moral reason to do. This other principle just is the principle of moral substitutability, so, of
course, I agree that it is true. However, the question is why it is true. This new principle is very different from the substantive
principles in a deontological theory, so it cries out for an explanation. If a deontologist simply adds this new principle to the
substantive principles in his theory, he has done nothing to explain why the new principle is true. It would be ad hoc to tack it on solely in
order to yield moral reasons like the moral reason to start the mower. In order to explain or justify
moral substitutability, a deontologist needs to show how this principle coheres in some deeper way with
the substantive principles of the theory. That is what deontologists cannot do.
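For clarity, the principle whose explanation is being demanded can be stated compactly; the notation is ours, not Sinnott-Armstrong's:

```latex
% Moral substitutability: a moral reason to do A transmits to any act B
% that is a necessary enabler of A. MR(X) abbreviates "there is a moral
% reason to do X."
\[
\big( MR(A) \,\wedge\, \mathrm{NecEnabler}(B, A) \big) \;\rightarrow\; MR(B)
\]
```

The card's claim is that deontological theories can assert this conditional but cannot derive it from the principles that make them deontological.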
A second response is that I misdescribed the property that provides the moral reason. Deontologists might admit
that the reason to mow the lawn is not that this fulfills a promise, but they can claim instead that the moral reason to mow the lawn is that this is a necessary
enabler for keeping a promise. They can then claim that there is a moral reason to start the mower, because starting the mower is also a necessary enabler for
keeping my promise. Again, I agree that these reasons exist. But the question is why. This deontologist needs to explain why the moral reason has to be that the act
is a necessary enabler for fulfilling a promise instead of just that the act does fulfill a promise. If there is no moral reason to keep a
promise, it is hard to understand why there is any moral reason to do what is a necessary enabler for
keeping a promise. Furthermore, deontologists claim that the crucial fact is not about consequences but
directly about promises. My moral reason is supposed to arise from what I said before my act and not from consequences after my act. However,
what I said was 'I promise to mow the grass'. I did not say, 'I promise to do what is a necessary enabler for mowing the grass.' Thus, I did not promise to do what is a
necessary enabler for keeping the promise. What I promised was only to keep the promise. Because of this, deontologists who base moral reasons directly on
promises cannot explain why there is not only a moral reason to do what I promised to do (mow the grass) but also a moral reason to do what I did not promise to
do (start the mower).
Deontologists might try to defend the claim that moral reasons are based on promises by claiming that
promise keeping is intrinsically good and there is a moral reason to do what is a necessary enabler of
what is intrinsically good. However, this response runs into two problems. First, on this theory, the
reason to keep a promise is a reason to do what is itself intrinsically good, but the reason to start the
mower is not a reason to do what is intrinsically good. Since these reasons are so different, they are derived in different ways. This
creates an incoherence or lack of unity which is avoided in other theories. Second, this response conflicts with a basic theme in deontological theories. If my
promise keeping is intrinsically good, your promise keeping is just as intrinsically good. But then, if what
gives me a moral reason to keep my promise is that I have a moral reason to do whatever is intrinsically
good, I have just as much moral reason to do what is a necessary enabler for you to keep your promise.
And, if my breaking my promise is a necessary enabler for two other people to keep their promises,
then my moral reason to break my promise is stronger than my moral reason to keep it (other things
being equal). This undermines the basic deontological claim that my reasons derive in a special way from my promises.13 So this response explains moral
substitutability at the expense of giving up deontology.
A fourth possible response is that any reason to mow the grass is also a reason to start my mower
because starting my mower is part of mowing the grass. However, starting my mower is not part of
mowing the grass, because I can start my mower without cutting any grass. I might start my mower hours in advance and
never get around to cutting any grass. Suppose I start the mower then go inside and watch television. My wife comes in and asks, 'Have you
started to mow the lawn?', so I answer, 'Yes. I've done part of it. I'll finish it later.' This is not only
misleading but false. Furthermore, mowing the grass can have other necessary conditions, such as
buying a mower or leaving my chair, which are not parts of mowing the grass by any stretch of the
imagination.
Finally, deontologists might charge that my argument begs the question. It would beg the question to assume moral substitutability if this principle were
inconsistent with deontological theories. However, my point is not that moral substitutability is inconsistent with
deontology. It is not. Deontologists can consistently tack moral substitutability onto their theories. My
point is only that deontologists cannot explain why moral substitutability holds. It would still beg the
question to assert moral substitutability without argument. However, I did argue for moral substitutability, and my argument was
independent of its implications for deontology. I even used examples of moral reasons that are typical of deontological theories. Deontologists still might complain
that the failure of so many theories to explain moral substitutability casts new doubt on this principle. However, we normally should not reject a scientific
observation just because our theory cannot explain it. Similarly, we normally should not reject an otherwise plausible moral judgment just because our favorite
theory cannot explain why it is true. Otherwise, no inference to the best explanation could work. My argument simply extends this general explanatory burden to
principles of moral reasoning and shows that deontological theories cannot carry that burden.
Even though this simple kind of deontological theory cannot explain moral substitutability, more complex deontological theories might seem to do better. One
candidate is Kant, who accepts something like substitutability when he writes, 'Whoever wills the end, so far as reason has decisive influence on his action, wills also
the indispensably necessary means to it that lie in his power.'14 Despite this claim, however, Kant fails to explain moral substitutability. Kant says in effect
that there is a moral reason to do an act when the maxim of not doing that act cannot be willed as a
universal law without contradiction. My moral reason to keep my promise to mow the grass is then supposed to be that not keeping promises
cannot be willed universally without contradiction. However, not starting my mower can be willed universally without contradiction. I can even consistently and
universally will not to start my mower when this is a necessary enabler for keeping a promise. The basic problem is that Kant repeatedly
claims that his theory is purely a priori, but moral substitutability makes moral reasons depend on
what is empirically possible. Kantians might try to avoid this problem by interpreting universalizability
in terms of a less pure kind of possibility and 'contradiction'. On one such interpretation, Kant claims it is contradictory to will
universal promise breaking, because, if everyone always broke their promises, no promises would be trusted, so no promises could be made or, therefore, broken.
There are several problems here, but the most relevant one is that people could still trust each other's promises, including their promises to mow a lawn, even if
nobody ever starts his mower when this is a necessary enabler for keeping a promise. This might happen, for example, if it is common practice to keep mowers
running for long periods, so those to whom promises are made assume that it is not necessary to start one's mower in order to mow the lawn. This shows that there
is no contradiction of this kind in a universal will not to start my mower when this is a necessary enabler for keeping a promise. Thus, this interpretation of Kant also
fails to explain why there is a moral reason to start the mower. Some defenders of Kant will insist that both of these interpretations fail to recognize that, for Kant,
certain ends are required by reason, so rational people cannot universally will anything that conflicts with these ends. One problem here is to
specify which particular ends have this special status and why. It is also not clear how these rational ends
would conflict with universally not starting mowers. Thus, Kant can do no better than other
deontologists at explaining why there is a moral reason to start my mower or why moral substitutability
holds.
Of course, there are many other versions of deontology. I cannot discuss them all. Nonetheless, these examples suggest that it is the very nature of deontological
reasons that makes deontological theories unable to explain moral substitutability. This comes out clearly if we start from the other side and ask which properties
create the moral reasons that are derived by moral substitutability. What gives me a moral reason to start the mower is the consequences of starting the mower.
Specifically, it has the consequence that I am able to mow the grass. This reason cannot derive from the same property as my moral reason to mow the lawn unless
what gives me a moral reason to mow the lawn is its consequences. Thus, any non-consequentialist moral theory will have to
posit two distinct kinds of moral reasons: one for starting the mower and another for mowing the grass.
Once these kinds of reasons are separated, we need to understand the connection between them. But
this connection cannot be explained by the substantive principles of the theory. That is why all
deontological theories must lack the explanatory coherence which is a general test of adequacy for all
theories.
I conclude that no deontological theory can adequately explain moral substitutability. I have not proven this,
but I do challenge deontologists to give a better explanation of moral substitutability. Deontologists are very inventive, but I doubt that they can meet this challenge.
Consequential considerations are inevitable---respect for persons as ends in
themselves must involve practical considerations of states of affairs.
Cummiskey 90. David Cummiskey [Professor of Philosophy at Bates]. “Kantian Consequentialism.”
Ethics, Vol. 100, No. 3, 1990. https://www.jstor.org/stable/2381810
V. Respect for Persons
It might be argued that the formula of the end-in-itself essentially involves the concept of respect for
persons, not the consequentialist concept of promoting the good. For persons to exist as an objective end is for them to exist
as objects of respect, not as a value to be promoted. Respect for persons involves respecting the rights of persons; that is,
respect essentially involves honoring agent-centered constraints on actions. Although neither explicitly argues that the
concept of respect entails an agent-centered approach, both Donagan's assumption that the formula of the end-in-itself generates "prohibitory concepts" and
Murphy's assumption that respecting persons involves the noninterference with the freedom of rational beings seem to presuppose such an entailment. In order to
respond to this objection we must look more closely at the concept of respect. What is it to respect something or someone?
Stephen Darwall has argued that there are two kinds of respect: "recognition respect" and "appraisal
respect." Recognition respect "consists in giving appropriate consideration or recognition to some
feature of its object in deliberating about what to do." Appraisal respect consists in a positive appraisal
of its object as a consequence of some intrinsic features of the object. Appraisal respect does not essentially involve any
conception of how one's behavior toward that object is appropriately restricted. Since respect for persons is supposed to play a role in determining our conduct, the
notion of respect involved is recognition respect.25
There is a narrow notion of recognition respect, which is limited to moral recognition respect, and a more general notion of recognition respect. Moral
recognition respect involves giving appropriate moral weight to the features of the object of respect in
one's deliberations about what to do. To morally respect some object is to regulate one's behavior, that
is, to constrain or conform one's actions, in accordance with the moral requirements generated by the
object. On the most general notion of recognition respect, any fact which one takes into account in deliberation is an object of respect. This notion is so broad
that it covers all uses of respect. (Indeed, it may be too broad.)
The demand that we respect persons is a moral demand that we regulate our conduct according to the
moral requirements generated by the existence of persons . Now, as Darwall points out, recognition respect for persons is
"identical with recognition respect for the moral requirements that are placed on one by the existence of persons." But, of course, what we want
to know is simply the moral requirements placed on us by the existence of persons. The concept of
moral recognition respect is thus such that it does not help discover the particular moral requirements
generated by the existence of persons.
Kantian normative theories, and commonsense morality, often assume that respect for persons fundamentally involves agent-centered constraints rather than
consequentialist considerations. The concept of respect, however, does not support this assumption. A consequentialist approach is prima
facie as appropriate as an agent-centered approach. To assume otherwise is simply question begging.
Indeed, if one insists that, as a conceptual matter, respecting persons logically involves honoring agent-centered constraints, then one must provide a rationale for interpreting the formula of the end-in-itself
as essentially involving the notion of respect rather than some consequentialist notion.
Respect for persons involves giving appropriate practical consideration to the fact that there are
persons. The meaning of 'respect' cannot settle the issue of what counts as appropriate practical
consideration. Let us thus return to the issue of conflicting grounds of obligation and the nature and
extent of the positive duty of beneficence.
Acts are good if they tend towards the good---probabilistic reasoning distinguishes
actual from expected consequences.
Cummiskey 21. David Cummiskey. [Professor of Philosophy at Bates, Ph.D., M.A., University of
Michigan.] “Consequentialism.” International Encyclopedia of Ethics. 2021.
10.1002/9781444367072.wbiee428.pub2
Actual or expected consequences
The actual, long-term, total consequences of our decisions and actions are uncertain. Good
intentions do not always lead to good results. For example, if one sees an infant fall into a pond, one should jump to the rescue. Saving a
life clearly seems like it promotes the good and is thus the right thing to do. Yet some might object that, for all we really know, the infant could grow up to be the
next Hitler. If I save a baby that grows up to be the next Hitler, my action actually causes great harm. Would the
consequentialist conclude that my act was wrong?
In deciding what to do, clearly the best action that a person can do is to choose the option that seems
most likely to maximize the good. Some consequentialists thus distinguish the actual
consequences (objective rightness) and the expected consequences of actions (subjective
rightness). The best actual outcome is the goal, and choosing the best expected outcome is the means to
this goal. As a theoretical matter, we could define rightness in terms of objective rightness. It would follow that an agent acts wrongly when they blamelessly
and unknowingly save baby Hitler. However, it is clearly counterintuitive to say that saving a little baby is
wrong. To call an action wrong implies that it is blameworthy and thus subjectively wrong (Mill 2002b
[1861]). Although the objectively best action actually leads to the best consequences, we can only judge
ourselves and others from the subjective perspective of what someone can know and foresee.
Therefore, most consequentialists focus on rightness from the agent’s subjective perspective. It is
the tendency of actions to advance the good that really matters: the right action is the available
option that, as far as the agent can see, tends to promote the most overall good.
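Cummiskey's distinction can be written as a simple expected-value formula. The notation and the illustrative numbers below are ours, added for clarity, not Cummiskey's:

```latex
% Subjective rightness: choose the act a that maximizes expected value
% over possible outcomes o_i.
\[
\mathrm{EV}(a) \;=\; \sum_{i} P(o_i \mid a)\, V(o_i)
\]
% Illustration with invented numbers: rescuing the infant given a
% vanishingly small chance (p = 10^{-9}) of a catastrophic outcome
% (value -10^6) against a near-certain good outcome (value +10^2):
\[
\mathrm{EV}(\mathrm{save}) \approx (1 - 10^{-9}) \cdot 10^{2} + 10^{-9} \cdot (-10^{6}) \approx 99.999 > 0
\]
```

On these invented numbers the rescue is subjectively right even if, in the baby-Hitler case, it happens to turn out objectively worst.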
4. Specificity---government actions must use utilitarianism.
A. Aggregation---governments have to aggregate since all collective actions incur
tradeoffs that help some and hurt others; means-based side constraints freeze action.
B. Act/Omission---there’s no distinction for governments since policies create
permissions and prohibitions, so authorizing action cannot be an omission; the
state assumes culpability in regulating the public domain, i.e., voting against something
is still acting.
C. Intent/Foresight---there’s no distinction; governments can’t have intent since
they’re made up of multiple actors with separate motivations, i.e., some members of
Congress might vote for a bill to gain votes while others actually think the bill is
good.
This takes out and turns their calculation indicts: consequentialism might be hard, but it’s
not impossible, and the alternative is no action, which is worse. Actor specificity
outweighs since different actors have different ethical standings.
That means death outweighs---the aff’s maxim includes universalizing extinction---that’s a contradiction.
Intuitions outweigh---everyone here thinks pleasure is likely good and pain is likely
bad---burden of proof is on them.
5. Fission proves personal identity is reductionist---psychological continuity doesn’t
exist.
Olson 10, Eric. [Professor of Philosophy at the University of Sheffield] Oct 28, 2010 “Personal Identity” Stanford Encyclopedia of
Philosophy. http://plato.stanford.edu/entries/identity-personal/#PsyApp
Whatever psychological continuity may amount to, a more serious worry for the Psychological Approach is that you could be psychologically
continuous with two past or future people at once. If your cerebrum—the upper part of the brain largely responsible for mental
features—were transplanted, the recipient would be psychologically continuous with you by anyone's lights (even
if there would also be important psychological differences). The Psychological Approach implies that she would be you. If we destroyed one of
your cerebral hemispheres, the resulting being would also be psychologically continuous with you. (Hemispherectomy—even the removal of
the left hemisphere, which controls speech—is considered a drastic but acceptable treatment for otherwise-inoperable brain tumors: see
Rigterink 1980.) What if we did both at once, destroying one hemisphere and transplanting the other? Then too, the
one who got the transplanted hemisphere would be psychologically continuous with you, and according to the Psychological
Approach would be you. But now suppose that both hemispheres are transplanted, each into a different empty
head. (We needn't pretend, as some authors do, that the hemispheres are exactly alike.) The two recipients—call them Lefty and
Righty—will each be psychologically continuous with you. The Psychological Approach as I have stated it implies that any future
being who is psychologically continuous with you must be you. It follows that you are Lefty and also that you are Righty. But that cannot
be: Lefty and Righty are two, and one thing cannot be numerically identical with two things. Suppose Lefty is hungry
at a time when Righty isn't. If you are Lefty, you are hungry at that time. If you are Righty, you aren't. If you are Lefty and Righty,
you are both hungry and not hungry at once: a contradiction.
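The contradiction Olson draws turns only on the formal properties of identity; in notation not in the original:

```latex
% If you (y) are Lefty (l) and you are Righty (r), then by symmetry and
% transitivity of identity l = r, contradicting the fact that Lefty and
% Righty are two.
\[
(y = l) \,\wedge\, (y = r) \;\Rightarrow\; l = r, \qquad l \neq r \;\Rightarrow\; \bot
\]
```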
That proves util---if persons are not a continuous unit, then distribution among them is
irrelevant---we just maximize good experiences since only experiences are morally
evaluable---other theories err by presuming the person is a separate entity.
6. Weighability---only consequentialism explains degrees of wrongness---you can
only explain why breaking a promise to take a dying person to the hospital is worse
than breaking a promise to meet for lunch by appealing to consequences.
7. Use epistemic modesty---that’s the probability of the framework being true times
the magnitude of an impact under it (a worked version of the calculation follows below).
a. substantively true: it maximizes the probability of securing the most moral value; arguments
against a framework mitigate offense under it, but that mitigation is contingent---half
the debate shouldn’t be thrown out just because someone’s 1% ahead on framework.
b. clash: it discourages debaters from ignoring the contention-level debate, which means we
get education about both phil and the topic---topical education outweighs since we only have
two months for each topic; this is drop the argument.
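As flagged in item 7, here is a worked version of the epistemic-modesty calculation. The credences and magnitudes are invented purely for illustration:

```latex
% Weight an impact I by summing, over frameworks F_k, the credence in
% F_k times the impact's magnitude under F_k.
\[
W(I) \;=\; \sum_{k} P(F_k)\, M(I \mid F_k)
\]
% Illustration: if util gets credence 0.6 and rates extinction at 100,
% while a competing framework gets credence 0.4 and rates it at 10:
\[
W(\text{extinction}) = 0.6 \cdot 100 + 0.4 \cdot 10 = 64
\]
```

So even a side that is behind on the framework debate retains weighted offense, which is the point of item 7a.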
8. Universalizability collapses to consequentialism.
Singer 93, Peter. 1993. “Practical Ethics.” Cambridge University Press. http://www.stafforini.com/txt/Singer%20%20Practical%20ethics.pdf
Can we use this universal aspect of ethics to derive an ethical theory that will give us guidance about right and wrong? Philosophers from the Stoics to Hare and
Rawls have attempted this. No attempt has met with general acceptance. The problem is that if we describe the universal aspect of ethics in bare, formal terms, a
wide range of ethical theories, including quite irreconcilable ones, are compatible with this notion of universality; if, on the other hand, we build up our description
of the universal aspect of ethics so that it leads us ineluctably to one particular ethical theory, we shall be accused of smuggling our own ethical beliefs into our
definition of the ethical - and this definition was supposed to be broad enough, and neutral enough, to encompass all serious candidates for the status of 'ethical
theory'. Since so many others have failed to overcome this obstacle to deducing an ethical theory from the universal aspect of ethics, it would be foolhardy to
attempt to do so in a brief introduction to a work with a quite different aim. Nevertheless, I shall propose something only a little less ambitious. The
universal aspect of ethics, I suggest, does provide a persuasive, although not conclusive, reason for taking a
broadly utilitarian position. My reason for suggesting this is as follows. In accepting that ethical judgments must be
made from a universal point of view, I am accepting that my own interests cannot, simply because they are my
interests, count more than the interests of anyone else. Thus my very natural concern that my own interests
be looked after must, when I think ethically, be extended to the interests of others. Now, imagine that I am trying to decide
between two possible courses of action - perhaps whether to eat all the fruits I have collected myself, or to share them with others. Imagine, too, that I am deciding
in a complete ethical vacuum, that I know nothing of any ethical considerations - I am, we might say, in a pre-ethical stage of thinking. How would I make up my
mind? One thing that would be still relevant would be how the possible courses of action will affect my interests. Indeed, if we define 'interests' broadly enough, so
that we count anything people desire as in their interests (unless it is incompatible with another desire or desires), then it would seem that at this pre-ethical stage,
only one's own interests can be relevant to the decision. Suppose I then begin to think ethically, to the extent of recognising that my own interests cannot count for
more, simply because they are my own, than the interests of others. In place of my own interests, I now have to take into account the interests of all those affected
by my decision. This requires me to weigh up all these interests and adopt the course of action most likely to
maximize the interests of those affected. Thus at least at some level in my moral reasoning I must choose the course of action that has the
best consequences, on balance, for all affected. (I say 'at some level in my moral reasoning' because, as we shall see later, there are utilitarian reasons for believing
that we ought not to try to calculate these consequences for every ethical decision we make in our daily lives, but only in very unusual circumstances, or perhaps
when we are reflecting on our choice of general principles to guide us in future. In other words, in the specific example given, at first glance one might think it
obvious that sharing the fruit that I have gathered has better consequences for all affected than not sharing them. This may in the end also be the best general
principle for us all to adopt, but before we can have grounds for believing this to be the case, we must also consider whether the effect of a general practice of
sharing gathered fruits will benefit all those affected, by bringing about a more equal distribution, or whether it will reduce the amount of food gathered, because
some will cease to gather anything if they know that they will get sufficient from their share of what others gather.)
1NC---Catastrophe
Most famous Kantian goes neg.
Korsgaard PhD 02 [Christine, PhD in Philosophy, works at Harvard] “Internalism and the Sources of
Normativity” RE
But actions are also events in the world (or correspond to events in the world, at least), and they too
have consequences. There are a number of different ways in which one can deal with worries about
what happens to the consequences in Kant’s ethical theory. It is worth pointing out that Kant himself
not only did not ignore the consequences, but took the fact that good actions can have bad effects as
the starting point for his religious philosophy. In his religious thought, Kant was concerned with the
question how the moral agent has to envision the world, how he has to think of its metaphysics in order
to cope with the fact that the actions morality demands may have terrible effects that we never
intended, or may simply fail to have good ones. I myself see the development of what Rawls has called
“nonideal theory” to be the right way of taking care of a certain class of cases, in which the
consequences of doing the right thing just seem too appalling for us to simply wash our hands of. But I
do not want to say that just having bad consequences is enough to put an action into the realm of
nonideal theory. I think there is a range of bad consequences that a decent person has to be prepared to
live with, out of respect for other people’s right to manage their own lives and actions, and to contribute
to shared decisions. But I also think that there are cases where our actions go wrong in such a way that
they turn out in a sense not to be the actions we intended to do, or to instantiate the values we meant
them to instantiate. I think that some of these cases can be dealt with by introducing the kind of double-level structure into moral philosophy that I have described in the essay on “The Right to Lie: Kant on
Dealing with Evil.”3 But I also think there are cases that cannot be domesticated even in this way, cases
in which, to put it paradoxically, the good person will do something “wrong.” I have written about that
sort of case too, in “Taking the Law into Our Own Hands: Kant on the Right to Revolution.”4
1NC---Predictions
Predictions are accurate because of modernization in data and technique.
Ward ’13 [Michael D. Ward received his B.S. degree in Chemistry from the William Paterson College of
New Jersey in 1977 and his Ph.D. degree at Princeton University in 1981. He was a Welch postdoctoral
fellow at the University of Texas, Austin, between 1981 and 1982. Dr Nils W. Metternich is an Associate
Professor in International Relations at the School of Public Policy. He joined the Department in 2013 and
holds a PhD in political science from the University of Essex. Prior to joining UCL he was a postdoctoral
research fellow at Duke University (2011-12). "Learning from the Past and Stepping into the Future:
Toward a New Generation of Conflict Prediction." https://experts.syr.edu/en/publications/learningfrom-the-past-and-stepping-into-the-future-toward-a-new-]
Political events are frequently framed as unpredictable . Who could have predicted the Arab Spring ,
9/11 , or the end of the cold war ? This skepticism about prediction reflects an underlying desire to forecast. Predicting
political events is difficult because they result from complex social processes . However, in recent years,
our capacity to collect information on social behavior and our ability to process large data have
increased to degrees only foreseen in science fiction . This new ability to analyze and predict behavior
confronts a demand for better political forecasts that may serve to inform and even help to structure
effective policies in a world in which prediction in everyday life has become commonplace .
Only a decade ago, scholars interested in civil wars undertook their research with constrained resources ,
limited data , and statistical estimation capabilities that seem underdeveloped by current
standards. Still , major advances did result from these efforts. Consider “Ethnicity, Insurgency and Civil War” by
Fearon and Laitin (2003), one of the most venerated and cited articles about the onset of civil wars. Published in 2003, it has over
3,000 citations in scholar.google.com and almost 900 citations in the Web of Science (as of April 2013). It has been cited
prominently in virtually every social science discipline in journals ranging from Acta Sociologica to World Politics; and
it is the most downloaded article from the American Political Science Review. 2 This article is rightly regarded as an
important , foundational piece of scholarship . However, in the summer of 2012, it was used by Jacqueline Stevens in a
New York Times Op-Ed as evidence that political scientists are bad forecasters . That claim was wildly off
the mark in that Fearon and Laitin do not focus on forecasting, and Stevens ignored other, actual forecasting efforts in political science.
Stevens’ point—which was taken up by the US Congress—was that government funding of quantitative
approaches was being wasted on efforts that did not provide accurate policy advice. In contrast to Stevens,
we argue that conflict research in political science can be substantially improved by more , not less,
attention to predictions through quantitative approaches.
We argue that the increasing availability of disaggregated data and advanced estimation techniques are
making forecasts of conflict more accurate and precise , thereby helping to evaluate the utility of
different models and winnow the good from the bad. Forecasting also helps to prevent overfitting and reduces
confirmation bias . As such, forecasting efforts can be used to help validate models , to gain greater
confidence in the resulting estimates , and to ultimately present robust models that may allow us to
improve the interaction with decision makers seeking greater clarity about the implications of
potential actions.
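Ward and Metternich's point about out-of-sample evaluation can be illustrated with a toy sketch. The data below are synthetic stand-ins (not real conflict data), and scikit-learn is assumed available; the held-out set plays the role of "the future" that an overfit model would fail to predict.

# Illustrative out-of-sample check of a conflict-onset model, in the spirit of
# the card's argument that forecasting guards against overfitting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # hypothetical country-year covariates
logit = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # hypothetical onset indicator

# A model judged only in-sample can look strong while generalizing poorly;
# scoring on held-out data is the simple discipline the authors recommend.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("out-of-sample AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))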
2NC---Round 8---NDT
CP---Rescission
Solvency---2NC
It solves better! It’s the strongest possible restraint---comparative AND specific
evidence.
Campbell ’19 [Clark; 2019; J.D. from the J. Reuben Clark School of Law at Brigham Young University,
Captain in the United States Air Force Judge Advocate General’s Corps; American University National
Security Law Brief, “Congress-in-Chief: Congressional Options to Compel Presidential Warmaking,” vol.
9]
IV. The Power of the Purse
A. Constitutional Authority for the Power of the Purse
The Legislative branch of the United States Government has one weapon that has been extremely effective in
Legislative-Executive battles, the power of the purse. "This power over the purse may, in fact, be
regarded as the most complete and effectual weapon with which any constitution can arm the immediate
representatives of the people . . . ." 127 This power is especially pertinent as it applies to the power to make war. 128
Congress' power to control the budget and spending of the President may be the best chance at
controlling presidential action, especially in the realm of war-making.
This area of Congress' power has been extensively explored by scholars and repeatedly tested by
the Judiciary. 129The power of the purse has even been explored in its application to Congress' ability to compel war-making in an
excellent article by Charles Tiefer. 130 Tiefer focused on the ability of Congress to use spending riders to force the President to increase the
tempo and intensity of an ongoing war. 131 Drawing from the work of Tiefer and others allows greater insight into the ability and limitations of
Congress not only to step up a war, but also to compel the President to enter a war. 132
B. Appropriations Riders
The main route for Congress to exercise the power of the purse is through riders to appropriation bills. 133 Riders force a
President to follow certain conditions or become subject to certain limitations to receive the
funding appropriated. 134 While the President has discretion to reject the bill and the included rider, the funding would also be lost.
Congress has much more solid ground on which to stand for limitation riders than riders compelling action, but both are available to
Congress. 135 While many scholars and pundits argue against the power of Congress to compel
presidential action, "even supporters of presidential power would concede that Congress could
plainly and simply cut off funds " and end a war. 136 Past appropriation riders have sought to use limitations to prohibit
the President from acting in various areas, for example continuing war in Afghanistan. 137
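The rider mechanism Campbell describes can be stated as a toy conditional. This is a minimal Python sketch under invented names and amounts (none come from the card): funds flow only if the rider's condition is met, and vetoing the bill forfeits the appropriation altogether.

# Toy model of an appropriations rider as the card describes it. The function
# name, amounts, and flags are invented for illustration; this is a sketch of
# the mechanism, not a claim about how any real appropriation is executed.
def disburse(amount: float, bill_signed: bool, rider_satisfied: bool) -> float:
    if not bill_signed:
        return 0.0  # rejecting the bill to escape the rider loses the funding too
    return amount if rider_satisfied else 0.0  # a limitation rider withholds funds

print(disburse(1_000_000.0, bill_signed=True, rider_satisfied=True))    # 1000000.0
print(disburse(1_000_000.0, bill_signed=True, rider_satisfied=False))   # 0.0
print(disburse(1_000_000.0, bill_signed=False, rider_satisfied=True))   # 0.0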
Perm: Do Both---2NC
‘Rescission’ is an either/or procedure---it excludes restrictions that incidentally reduce
spending AND can’t be combined.
Heniff ’12 [Bill, Megan Lynch, and Jessica Tollestrup; December 3; Coordinator and Analyst on
Congress and the Legislative Process; Analyst on Congress and the Legislative Process; Analyst on
Congress and the Legislative Process; Congressional Research Service, “Introduction to the Federal
Budget Process,” https://sgp.fas.org/crs/misc/98-721.pdf]
Authorizing Measures
The rules of the House and (to a lesser extent) the Senate require that agencies and programs be
authorized in law before an appropriation is made for them. An authorizing act is a law that (1)
establishes a program or agency and the terms and conditions under which it operates; and (2) authorizes the
enactment of appropriations for that program or agency. Authorizing legislation may originate in either the House or the
Senate and may be considered any time during the year. Many agencies and programs have temporary authorizations that have to be renewed
annually or every few years.
<<TEXT CONDENSED, NONE OMITTED>>
Action on appropriations measures sometimes is delayed by the failure of Congress to enact necessary authorizing legislation. The House and Senate often waive or disregard their rules against unauthorized appropriations for ongoing programs that have not yet been reauthorized. The budgetary impact of authorizing legislation depends on whether it contains only discretionary authorizations (for which funding is provided in annual appropriations acts) or direct spending, which itself enables an agency to enter into obligations. The Annual Appropriations Process An
appropriations act is a law passed by Congress that provides federal agencies legal authority to incur obligations and the Treasury Department authority to make payments for designated purposes. The power of appropriation derives from the Constitution, which in Article I, Section 9, provides that “[n]o money shall be drawn from the Treasury but in consequence of appropriations made by law.” The power to appropriate is exclusively a legislative power; it functions as a limitation on the executive branch. An agency may not spend more than the amount appropriated to
it, and it may use available funds only for the purposes and according to the conditions provided by Congress. The Constitution does not require annual appropriations, but since the First Congress the practice has been to make appropriations for a single fiscal year. Appropriations must be used (obligated) in the fiscal year for which they are provided, unless the law provides that they shall be available for a longer period of time. All provisions in an appropriations act, such as limitations on the use of funds, expire at the end of the fiscal year, unless the language of the act
extends their period of effectiveness. The President requests annual appropriations in his budget submitted each year. In support of the President’s appropriations requests, agencies submit justification materials to the House and Senate Appropriations Committees. These materials provide considerably more detail than is contained in the President’s budget and are used in support of agency testimony during Appropriations subcommittee hearings on the President’s budget. Congress passes three main types of appropriations measures. Regular appropriations acts
provide budget authority to agencies for the next fiscal year. Supplemental appropriations acts provide additional budget authority during the current fiscal year when the regular appropriation is insufficient or to finance activities not provided for in the regular appropriation. Continuing appropriations acts provide stop-gap (or full-year) funding for agencies that have not received a regular appropriation. In a typical session, Congress acts on 12 regular appropriations bills and at least two supplemental appropriations measures. Because of recurring delays in the
appropriations process, Congress also typically passes one or more continuing appropriations each year. The scope and duration of these measures depend on the status of the regular appropriations bills and the degree of budgetary conflict between the President and Congress. In recent years, Congress has merged two or more of the regular appropriations acts for a fiscal year into a single, omnibus appropriations act. By precedent, appropriations originate in the House of Representatives. In the House, appropriations measures are originated by the Appropriations
Committee (when it marks up or reports the measure) rather than being introduced by a member beforehand. Before the full Committee acts on the bill, it is considered in the relevant Appropriations subcommittee (the House and Senate Appropriations Committees have 12 parallel subcommittees). The House subcommittees typically hold extensive hearings on appropriations requests shortly after the President’s budget is submitted. In marking up their appropriations bills, the various subcommittees are guided by the discretionary spending limits and the allocations
made to them under Section 302 of the 1974 Congressional Budget Act. The Senate usually considers appropriations measures after they have been passed by the House. When House action on appropriations bills is delayed, however, the Senate sometimes expedites its actions by considering a Senate-numbered bill up to the stage of final passage. Upon receipt of the House-passed bill in the Senate, it is amended with the text that the Senate already has agreed to (as a single amendment) and then passed by the Senate. Hearings in the Senate Appropriations
subcommittees generally are not as extensive as those held by counterpart subcommittees in the House. The basic unit of an appropriation is an account. A single unnumbered paragraph in an appropriations act comprises one account and all provisions of that paragraph pertain to that account and to no other, unless the text expressly gives them broader scope. Any provision limiting the use of funds enacted in that paragraph is a restriction on that account alone. Over the years, appropriations have been consolidated into a relatively small number of accounts. It is typical
for a federal agency to have a single account for all its expenses of operation and additional accounts for other purposes such as construction. Accordingly, most appropriation accounts encompass a number of activities or projects. The appropriation sometimes earmarks specific amounts to particular activities within the account, but the more common practice is to provide detailed information on the amounts intended for each activity in other sources (principally, the committee reports accompanying the measures). In addition to the substantive limitations (and other
provisions) associated with each account, each appropriations act has “general provisions” that apply to all of the accounts in a title or in the whole act. These general provisions appear as numbered sections, usually at the end of the title or the act. The standard appropriation is for a single fiscal year—the funds have to be obligated during the fiscal year for which they are provided; they lapse if not obligated by the end of that year. An appropriation that does not mention the period during which the funds are to be available is a one-year appropriation. Congress also
makes no-year appropriations by specifying that the funds shall remain available until expended. No-year funds are carried over to future years, even if they have not been obligated. Congress sometimes makes multiyear appropriations, which provide for funds to be available for two or more fiscal years. Appropriations measures also contain other types of provisions that serve specialized purposes. These include provisions that liquidate (pay off) obligations made pursuant to certain contract authority; reappropriate funds provided in previous years; transfer funds from
one account to another; rescind funds (or release deferred funds); or set ceilings on the amount of obligations that can be made under permanent appropriations, on the amount of direct or guaranteed loans that can be made, or on the amount of administrative expenses that can be incurred during the fiscal year. In addition to providing funds, appropriations acts often contain substantive limitations on government agencies. Detailed information on how funds are to be spent, along with other directives or guidance, is provided in the reports accompanying the various
appropriations measures. Agencies ordinarily abide by report language in spending the funds appropriated by Congress. The appropriations reports do not comment on every item of expenditure. Report language is most likely when the Appropriations Committee prefers to spend more or less on a particular item than the President has requested or when the committee wants to earmark funds for a particular project or activity. When a particular item is mentioned by the committee, there is a strong expectation that the agency will adhere to the instructions. Revenue
Legislation Article I, Section 8 of the Constitution gives Congress the power to levy “taxes, duties, imposts, and excises.” Section 7 of this article requires that all revenue measures originate in the House of Representatives. In the House, revenue legislation is under the jurisdiction of the Ways and Means Committee; in the Senate, jurisdiction is held by the Finance Committee. While House rules bar other committees from reporting revenue legislation, sometimes another committee will report legislation levying user fees on a class that benefits from a particular service or
program or that is being regulated by a federal agency. In many of these cases, the user fee legislation is referred subsequently to the Ways and Means Committee. Most revenues derive from existing provisions of the tax code or Social Security law, which continue in effect from year to year unless changed by Congress. This tax structure can be expected to produce increasing amounts of revenue in future years as the economy expands and incomes rise. Nevertheless, Congress usually makes some changes in the tax laws each year, either to raise or lower revenues or to
redistribute the tax burden. Congress typically acts on revenue legislation pursuant to proposals in the President’s budget. An early step in congressional work on revenue legislation is publication by CBO of its own estimates (developed in consultation with the Joint Tax Committee) of the revenue impact of the President’s budget proposals. The congressional estimates often differ significantly from those presented in the President’s budget. The revenue totals in the budget resolution establish the framework for subsequent action on revenue measures. The budget
resolution contains only revenue totals and total recommended changes; it does not allocate these totals among revenue sources (although it does set out Medicare receipts separately), nor does it specify which provisions of the tax code are to be changed. The House and Senate often consider major revenue measures, such as the Tax Reform Act of 1986, under their regular legislative procedures. However, as has been the case with direct spending programs, many of the most significant changes in revenue policy in recent years have been made in the context of the
reconciliation process. Although revenue changes are usually incorporated into omnibus budget reconciliation measures, along with spending changes (and sometimes debt-limit increases), revenue reconciliation legislation may be considered on a separate legislative track (e.g., the Tax Equity and Fiscal Responsibility Act of 1982). When the reconciliation process is used to advance revenue reductions (or spending increases) that would lead to a deficit, or would enlarge an existing deficit, Section 313 of the 1974 Congressional Budget Act (referred to as the Senate’s “Byrd
rule”) limits the legislative changes to the period covered by the reconciliation directives. Accordingly, some recent tax cuts have been subject to sunset dates. In enacting revenue legislation, Congress often establishes or alters tax expenditures. The term “tax expenditures” is defined in the 1974 Congressional Budget Act to include revenue losses due to deductions, exemptions, credits, and other exceptions to the basic tax structure. Tax expenditures are a means by which the federal government pursues public policy objectives and can be regarded as alternatives to
other policy instruments such as grants or loans. The Joint Tax Committee estimates the revenue effects of legislation changing tax expenditures, and it also publishes five-year projections of these provisions as an annual committee print. Debt-Limit Legislation When the revenues collected by the federal government are not sufficient to cover its expenditures, it must finance the shortfall through borrowing. Federal borrowing is subject to a public debt limit established by statute. When the federal government operates with a budget deficit, the public debt limit must be
increased periodically. The frequency of congressional action to raise the debt limit has ranged in the past from several times in one year to once in several years. When the federal government incurred large and growing surpluses in recent years, Congress did not have to increase the debt limit, but the enactment of increases in the debt limit has again become necessary with the recurrence of deficits. Legislation to raise the public debt limit falls under the jurisdiction of the House Ways and Means Committee and the Senate Finance Committee. Although consideration of
such measures in the House usually is constrained through the use of special rules, Senate action sometimes is far-ranging with regard to the issues covered. In the past, the Senate has added many non-germane provisions to debt-limit measures, such as the 1985 Balanced Budget Act. In 1979, the House amended its rules to provide for the automatic engrossment of a measure increasing the debt limit upon final adoption of the conference report on the budget resolution. The rule, House Rule XLIX (commonly referred to as the Gephardt rule), was intended to facilitate
quick action on debt increases. However, the Senate had no comparable rule. For years, the House and Senate could enact debt-limit legislation originating under the Gephardt rule or arising under conventional legislative procedures. During the past decade, Congress has enacted debt-limit increases as part of omnibus budget reconciliation measures, continuing appropriations acts, and other legislation. The House recodified the Gephardt rule as House Rule XXIII at the beginning of the 106th Congress, repealed it at the beginning of the 107th Congress, and reinstated it,
as new Rule XXVII, at the beginning of the 108th Congress. At the beginning of the 112th Congress, the House once again repealed the rule, thereby requiring the House to vote directly on any legislation that changes the statutory limit on the public debt. Reconciliation Legislation Beginning in 1980, Congress has used reconciliation legislation to implement many of its most significant budget policies. Section 310 of the 1974 Congressional Budget Act sets forth a special procedure for the development and consideration of reconciliation legislation. Reconciliation legislation
is used by Congress to bring existing revenue and spending law into conformity with the policies in the budget resolution. Reconciliation is an optional process, but Congress has used it more years than not; during the period covering 1980 through 2010, 20 reconciliation measures were enacted into law and three were vetoed. The reconciliation process has two stages—the adoption of reconciliation instructions in the budget resolution and the enactment of reconciliation legislation that implements changes in revenue or spending laws. Although reconciliation has been
used since 1980, specific procedures tend to vary from year to year. Reconciliation is used to change the amount of revenues, budget authority, or outlays generated by existing law. In a few instances, reconciliation has been used to adjust the public debt limit. On the spending side, the process focuses on entitlement laws; it may not be used, however, to impel changes in Social Security law. Reconciliation sometimes has been applied to discretionary authorizations (which are funded in annual appropriations acts), but this is not the usual practice. Reconciliation was used
in the 1980s and into the 1990s as a deficit-reduction tool. Beginning in the latter part of the 1990s, some reconciliation measures were used principally to reduce revenues, thereby increasing the deficit. At the beginning of the 110th Congress, both chambers adopted rules requiring that reconciliation be used solely for deficit reduction. Reconciliation Directives Reconciliation begins with a directive in a budget resolution instructing designated committees to report legislation changing existing law or pending legislation. These instructions have three components: (1) they
name the committee (or committees) that are directed to report legislation; (2) they specify the amounts by which existing laws are to be changed (but do not identify how these changes are to be made, which laws are to be altered, or the programs to be affected); and (3) they usually set a deadline by which the designated committees are to recommend the changes in law. The instructions typically cover the same fiscal years covered by the budget resolution. Sometimes, budget resolutions have provided for more than one reconciliation measure to be considered
during a session. The dollar amounts are computed with reference to the CBO baseline. Thus, a change represents the amount by which revenues or spending would decrease or increase from baseline levels as a result of changes made in existing law. This computation is itself based on assumptions about the future level of revenues or spending under current law (or policy) and about the dollar changes that would ensue from new legislation. Hence, the savings associated with the reconciliation process are assumed savings. The actual changes in revenues or spending
may differ from those estimated when the reconciliation instructions are formulated. Although the instructions do not mention the programs to be changed, they are based on assumptions as to the savings or deficit reduction (or, in some cases, increases) that would result from particular changes in revenue provisions or spending programs. These program assumptions are sometimes printed in the reports on the budget resolution. Even when the assumptions are not published, committees and members usually have a good idea of the specific program changes
contemplated by the reconciliation instructions. A committee has discretion to decide on the legislative changes to be recommended. It is not bound by the program changes recommended or assumed by the Budget Committees in the reports accompanying the budget resolution. Further, a committee has to recommend legislation estimated to produce dollar changes for each category delineated in the instructions to it. When a budget resolution containing a reconciliation instruction has been approved by Congress, the instruction has the status of an order by the House
and Senate to designated committees to recommend legislation, usually by a date certain. It is expected that committees will carry out the instructions of their parent chamber, but the 1974 Congressional Budget Act does not provide any sanctions against committees that fail to do so. Development and Consideration of Reconciliation Measures When more than one committee in the House and Senate is subject to reconciliation directives, the proposed legislative changes usually are consolidated by the Budget Committees into an omnibus bill. The 1974 Congressional
Budget Act does not permit the Budget Committees to revise substantively the legislation recommended by the committees of jurisdiction. This restriction pertains even when the Budget Committees estimate that the proposed legislation will fall short of the dollar changes called for in the instructions. Sometimes, the Budget Committees, working with the leadership, develop alternatives to the committee recommendations, to be offered as floor amendments, so as to achieve greater compliance with the reconciliation directives. The 1974 act requires that amendments
offered to reconciliation legislation in either the House or the Senate be deficit neutral. To meet this requirement, an amendment reducing revenues or increasing spending must offset these deficit increases by equivalent revenue increases or spending cuts. During the first several years’ experience with reconciliation, the legislation contained many provisions that were extraneous to the purpose of reducing the deficit. The reconciliation submissions of committees included such things as provisions that had no budgetary effect, that increased spending or reduced
revenues, or that violated another committee’s jurisdiction. In 1985, the Senate adopted a rule (commonly referred to as the Byrd rule) on a temporary basis as a means of curbing these practices. The Byrd rule has been extended and modified several times over the years. In 1990, the Byrd rule was incorporated into the 1974 Congressional Budget Act as Section 313 and made permanent. Although the House has no rule comparable to the Senate’s Byrd rule, it may use other devices to control the inclusion of extraneous matter in reconciliation legislation. In particular,
the House may use special rules to make in order amendments that strike such matter. House and Senate Earmark Disclosure Rules In 2007, both the House and Senate adopted rules intended to bring more transparency to the process surrounding earmarks. Although the definitions vary, an earmark generally is considered to be an allocation of resources to specifically targeted beneficiaries, either through discretionary or direct spending, limited tax benefits, or limited tariff benefits.5 Concern about earmarking practices arose over such provisions being inserted into
legislation or accompanying reports without any identification of the sponsor, and the belief that many earmarks were not subject to proper scrutiny and diverted resources to lesser-priority items or items without sufficient justification, thereby contributing to wasteful spending or revenue loss. In response to this concern, earmark rules were adopted that vary by chamber, but include three main features. The first feature is a requirement that members requesting a congressional earmark provide a written statement to the chair and ranking minority member of the
committee of jurisdiction that includes the member’s name, the name and address of the intended earmark recipient, the purpose of the earmark, and a certification that the member or member’s spouse has no financial interest in such an earmark. (The Senate rule applies not only to the spouse but the entire immediate family.) The second feature is a general requirement that committees provide a list of all earmarks included in reported legislation. The third feature is a point of order against legislation that is not accompanied by a list of included earmarks. These vary
by chamber. House of Representatives House Rule XXI, clause 9, generally requires that certain types of measures be accompanied by a list of earmarks or a statement that the measure contains no earmarks.6 If the list of earmarks or the statement that no earmark exists in the measure is absent, a point of order may lie against the measure’s floor consideration. The point of order applies to the absence of such a list or statement, and does not speak to the completeness or the accuracy of such document. House earmark disclosure rules apply to any congressional earmark
included in either the text of the measure or the committee report accompanying the measure, as well as the conference report and joint explanatory statement. The disclosure requirements apply to items in authorizing, appropriations, and revenue legislation. Furthermore, they apply not only to measures reported by committees, but also to unreported measures, “manager’s amendments,” Senate measures, and conference reports. These earmark disclosure requirements, however, do not apply to all legislation at all times. Not subject to the rule are floor amendments
(except a “manager’s amendment”), amendments between the Houses, or amendments considered as adopted under a self-executing special rule, including a committee amendment in the nature of a substitute made in order as original text. The earmark rule, as with most House rules, is not self enforcing and relies instead on a member raising a point of order if the rule is violated. When a measure is considered under suspension of the rules, House rules are laid aside and earmark disclosure rules are, therefore, waived. It is not in order to consider a special rule that
waives earmark requirements under the House rule. The Senate Senate Rule XLIV creates a point of order against a motion to proceed to consider a measure or a vote on adoption of a conference report, unless the chair of the committee or the Majority Leader (or designee) certifies that a complete list of earmarks and the name of each Senator requesting each earmark is available on a publicly accessible congressional website in a searchable form at least 48 hours before the vote. If a Senator proposes a floor amendment containing an earmark, those items must be
printed in the Congressional Record as soon as “practicable.”7 If the earmark certification requirements have not been met, a point of order may lie against consideration of the measure or a vote on the conference report. The point of order applies only to the absence of such certification, and does not speak to its accuracy. Senate earmark disclosure rules apply to any congressional earmark included in either the text of the bill or a committee report accompanying the bill, as well as a conference report and joint explanatory statement. The disclosure requirements apply
to items in authorizing, appropriations, and revenue legislation. Furthermore, they apply not only to measures reported by committees, but also to unreported measures, amendments, House bills, and conference reports. The earmark rule may be waived either by unanimous consent or by motion, which requires the affirmative vote of three-fifths of all Senators (60, if there are no vacancies).8 The earmark rule, as with most Senate rules, is not self enforcing and relies instead on a Senator raising a point of order if the rule is violated. While not embodied in either
chamber’s rules, an earmark “ban” or “moratorium” is currently in effect in both the House and Senate, enforced by committee and chamber leadership.9 Impoundment and Line-Item Veto
<<PARAGRAPH BREAKS RESUME>>
Impoundment
Although an appropriation limits the amounts that can be spent, it also establishes the expectation that the available funds will be used to carry
out authorized activities. Therefore, when an agency fails to use all or part of an appropriation, it deviates from the intentions of
Congress. The Impoundment Control Act of 1974 prescribes rules and procedures for instances in which available funds are impounded.
An impoundment is an action or inaction by the President or a federal agency that delays or withholds the obligation or
expenditure of budget authority provided in law. The 1974 Impoundment Control Act divides
impoundments into two categories and establishes distinct procedures for each. A deferral delays the use
of funds; a rescission is a presidential request that Congress rescind (cancel) an appropriation or other form of
budget authority. Deferral and rescission are exclusive and comprehensive categories; an impoundment is
either a rescission or a deferral—it cannot be both or something else.
Although impoundments are defined broadly by the 1974 act, in practice they are limited to major actions that affect the level or rate of
expenditure. As a general practice, only deliberate curtailments of expenditure are reported as
impoundments; actions having other purposes that incidentally affect the rate of spending are not
recorded as impoundments. For example, if an agency were to delay the award of a contract because of a dispute with a vendor, the
delay would not be an impoundment; if the delay were for the purpose of reducing an expenditure, it would be an impoundment. The line
between routine administrative actions and impoundments is not clear and controversy occasionally arises as to whether a particular action
constitutes an impoundment.
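The either/or taxonomy in the CRS card can be expressed as a small classification sketch. The deliberate-curtailment test below paraphrases the card's contract-delay example; it is illustrative, not statutory text.

# Sketch of the Impoundment Control Act taxonomy as the card describes it:
# every impoundment is exactly one of {rescission, deferral}, and actions that
# only incidentally slow spending are not impoundments at all.
from enum import Enum

class Impoundment(Enum):
    RESCISSION = "presidential request that Congress cancel budget authority"
    DEFERRAL = "temporary delay in the use of funds"

def classify(deliberate_curtailment: bool, seeks_cancellation: bool):
    if not deliberate_curtailment:
        return None  # e.g., a contract delayed only by a vendor dispute
    return Impoundment.RESCISSION if seeks_cancellation else Impoundment.DEFERRAL

print(classify(deliberate_curtailment=True, seeks_cancellation=True))    # RESCISSION
print(classify(deliberate_curtailment=True, seeks_cancellation=False))   # DEFERRAL
print(classify(deliberate_curtailment=False, seeks_cancellation=False))  # None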
The absence of nuclear use nullifies the rescission.
Yarbrough ’17 [Steven; January 20; Magistrate Judge on the United States District Court of New
Mexico; Lexis, “Eli v. United States Bank Nat'l Ass'n, 2017 U.S. Dist.,” 22044]
b. Analysis
Count 1 of Plaintiff's Complaint
The loan at issue in this case was recorded on July 25, 2005. ECF No. 1 at 6. Defendants do not dispute that the provisions of 15 U.S.C. §
1635 and 15 U.S.C. § 1640 apply to this consumer loan. 15 U.S.C. § 1635 provides two avenues under which a borrower/obligor has a right to
rescind a loan covered by the statute. First, "the obligor shall have the right to rescind the transaction until midnight of the
third business day following the consummation of the transaction or the delivery of the information and the rescission forms required under
this section . . . ." 15 U.S.C. § 1635(a). Alternatively, if a lender fails to provide a borrower the required information and forms, "[a]n obligor's
right of rescission shall expire three years after the date of consummation of the transaction or upon the sale of the property . . . ." 15 U.S.C. §
1635(f). Defendants assert that they provided Plaintiff all required TILA notices and, therefore, the three day, rather than the three year,
limitation period applies. ECF No. 7 at 4-5. Nonetheless, they also argue that, even if the three day limitation period did not apply, the three
year limitation would bar Plaintiff's lawsuit. Id. Because a factual dispute exists with regard to whether Defendants provided Plaintiff with all
required TILA notices, I will only address Defendants' alternative argument - that the three year limitation period operates to bar Plaintiff's
lawsuit.
In Beach v. Ocwen Federal Bank, 523 U.S. 410, 419, 118 S. Ct. 1408, 140 L. Ed. 2d 566 (1998), the United States Supreme Court unequivocally
stated, "[w]e respect Congress's manifest intent by concluding that the Act permits no federal right to rescind, defensively or otherwise, after
the 3-year period of § 1635(f) has run." Thus, Plaintiff's November 2015 attempt to rescind a loan recorded in July 2005 comes more than seven
years too late. None of the arguments Plaintiff makes in an attempt to avoid this outcome are persuasive.
First, Plaintiff argues that the loan was never consummated. ECF No. 11 at 2-3. However, he provides no facts or arguments to support his
contention that the 2005 loan was never consummated. Indeed, he acknowledges in his complaint that "Plaintiff has made certain payments of
monies in regards to this loan" and requests return of those monies. ECF No. 1 at 21. Similarly, during oral argument, Plaintiff stated, "I acted on
the contract, I sent them money. But it's my belief that it was not consummated fully, your Honor." Motions Hearing Tr. at 16, Nov. 17, 2016
(ECF No. 36). When asked what that belief was based on, Plaintiff responded, "I'm not prepared to adjudicate that fully, your Honor." Id.
The applicable regulations define consummation as "the time that a consumer becomes contractually obligated on a credit transaction." 12
C.F.R. § 1026.2(a)(13). Plaintiff does not dispute that the loan in question was recorded in 2005 and that he thereafter made payments on the
loan. Plaintiff's assertion that the loan was never consummated does not constitute a disputed material fact. Instead, it constitutes a legal
conclusion that is unsupported by either disputed or undisputed factual allegations. It is well-established that the Court is "not bound to accept
as true a legal conclusion couched as a factual allegation" for purposes of deciding a motion to dismiss. See Ashcroft, 556 U.S. at 678 (internal
quotation marks and citation omitted). Based on Plaintiff's own representations, I conclude that the loan in question was consummated no
later than July 25, 2005. Therefore, pursuant to 15 U.S.C. § 1635(f), Plaintiff was required to bring any claim for rescission no later than July 25,
2008.
Even if the Court were to accept Plaintiff's argument with regard to consummation, however, Plaintiff would still not prevail. Plaintiff argued at
the hearing that "as far as consummation, I'm saying it was never consummated." Motions Hearing Tr. at 14. In response to this contention, the
Court asked Plaintiff "if there was never a consummation, then there could be no legally binding contract ,
and if there's no contract , then what is it that you're seeking to rescind ? In other words, you're
saying you want to rescind something that you're also saying doesn't exist because it was never consummated, and
I'm having a hard time following that." Id. at 15. Plaintiff responded with the above quoted statement, "I acted on the contract,
I sent them money. But it's my belief that it was not consummated fully, your Honor." Id. This response fails to address the inescapable
logical conclusion that, if the loan had never been consummated, there would be nothing to
rescind .
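The court's timeline can be checked with a short date computation. The dates come from the opinion; the helper function is an illustrative sketch of § 1635(f)'s three-year outer limit, not legal advice.

# Deadline arithmetic from Eli v. United States Bank, as a quick check.
from datetime import date

def rescission_deadline(consummation: date, years: int = 3) -> date:
    """Outer limit for rescission under 15 U.S.C. § 1635(f):
    three years from consummation of the loan."""
    return consummation.replace(year=consummation.year + years)

consummated = date(2005, 7, 25)              # loan recorded July 25, 2005
deadline = rescission_deadline(consummated)  # July 25, 2008
attempt = date(2015, 11, 1)                  # the November 2015 attempt

print(deadline)            # 2008-07-25
print(attempt > deadline)  # True: the attempt comes more than seven years late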
DA---Deterrence
Case
Case---2NC
This connection between pain and pleasure and phenomenal conceptions of intrinsic
value and disvalue is irrefutable – everything else regresses – robust neuroscience
proves.
Blum et al. 18 Kenneth Blum, 1Department of Psychiatry, Boonshoft School of Medicine, Dayton VA
Medical Center, Wright State University, Dayton, OH, USA 2Department of Psychiatry, McKnight Brain
Institute, University of Florida College of Medicine, Gainesville, FL, USA 3Department of Psychiatry and
Behavioral Sciences, Keck Medicine University of Southern California, Los Angeles, CA, USA 4Division of
Applied Clinical Research & Education, Dominion Diagnostics, LLC, North Kingstown, RI, USA
5Department of Precision Medicine, Geneus Health LLC, San Antonio, TX, USA 6Department of Addiction
Research & Therapy, Nupathways Inc., Innsbrook, MO, USA 7Department of Clinical Neurology, Path
Foundation, New York, NY, USA 8Division of Neuroscience-Based Addiction Therapy, The Shores
Treatment & Recovery Center, Port Saint Lucie, FL, USA 9Institute of Psychology, Eötvös Loránd
University, Budapest, Hungary 10Division of Addiction Research, Dominion Diagnostics, LLC. North
Kingston, RI, USA 11Victory Nutrition International, Lederach, PA., USA 12National Human Genome
Center at Howard University, Washington, DC., USA, Marjorie Gondré-Lewis, 12National Human
Genome Center at Howard University, Washington, DC., USA 13Departments of Anatomy and
Psychiatry, Howard University College of Medicine, Washington, DC US, Bruce Steinberg, 4Division of
Applied Clinical Research & Education, Dominion Diagnostics, LLC, North Kingstown, RI, USA, Igor Elman,
15Department Psychiatry, Cooper University School of Medicine, Camden, NJ, USA, David Baron,
3Department of Psychiatry and Behavioral Sciences, Keck Medicine University of Southern California, Los
Angeles, CA, USA, Edward J Modestino, 14Department of Psychology, Curry College, Milton, MA, USA,
Rajendra D Badgaiyan, 15Department Psychiatry, Cooper University School of Medicine, Camden, NJ,
USA, Mark S Gold 16Department of Psychiatry, Washington University, St. Louis, MO, USA, “Our evolved
unique pleasure circuit makes humans different from apes: Reconsideration of data derived from animal
studies”, U.S. Department of Veterans Affairs, 28 February 2018, accessed: 19 August 2020,
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6446569/, R.S.
Pleasure is not only one of the three primary reward functions but it also defines reward. As homeostasis explains the
functions of only a limited number of rewards, the principal reason why particular stimuli, objects, events,
situations, and activities are rewarding may be due to pleasure. This applies first of all to sex and to the primary
homeostatic rewards of food and liquid and extends to money, taste, beauty, social encounters and nonmaterial, internally set, and intrinsic
rewards. Pleasure, as the primary effect of rewards, drives the prime reward functions of learning, approach behavior, and
decision making and provides the basis for hedonic theories of reward function. We are attracted by most
rewards and exert intense efforts to obtain them, just because they are enjoyable [10]. Pleasure is a passive
reaction that derives from the experience or prediction of reward and may lead to a long-lasting state of happiness. The word happiness is
difficult to define. In fact, just obtaining physical pleasure may not be enough. One key to happiness involves a network of good friends.
However, it is not obvious how the higher forms of satisfaction and pleasure are related to an ice cream cone, or to your team winning a
sporting event. Recent multidisciplinary research, using both humans and detailed invasive brain analysis of animals
has discovered some critical ways that the brain processes pleasure [14]. Pleasure as a hallmark of reward
is sufficient for defining a reward, but it may not be necessary. A reward may generate positive learning and approach
behavior simply because it contains substances that are essential for body function. When we are hungry, we may
eat bad and unpleasant meals. A monkey who receives hundreds of small drops of water every morning in the laboratory is unlikely to feel a
rush of pleasure every time it gets the 0.1 ml. Nevertheless, with these precautions in mind, we may define any stimulus, object, event, activity,
or situation that has the potential to produce pleasure as a reward. In the context of reward deficiency or for disorders of addiction,
homeostasis pursues pharmacological treatments: drugs to treat drug addiction, obesity, and other compulsive behaviors. The theory of
allostasis suggests broader approaches - such as re-expanding the range of possible pleasures and providing opportunities to expend effort in
their pursuit. [15]. It is noteworthy, the first animal studies eliciting approach behavior by electrical brain stimulation interpreted their findings
as a discovery of the brain’s pleasure centers [16] which were later partly associated with midbrain dopamine neurons [17–19] despite the
notorious difficulties of identifying emotions in animals. Evolutionary theories of pleasure: The love connection Charles Darwin and other
biological scientists that have examined the biological evolution and its basic principles found various mechanisms that
steer behavior and biological development. Besides their theory on natural selection, it was particularly the sexual selection
process that gained significance in the latter context over the last century, especially when it comes to the question of what makes us “what we
are,” i.e., human. However, the capacity to sexually select and evolve is not at all a human accomplishment alone or a sign of our uniqueness;
yet, we humans, as it seems, are ingenious in fooling ourselves and others–when we are in love or desperately search for it. It is well
established that modern biological theory conjectures that organisms are the result of evolutionary competition. In fact,
Richard Dawkins stresses gene survival and propagation as the basic mechanism of life [20]. Only genes that lead
to the fittest phenotype will make it. It is noteworthy that the phenotype is selected based on behavior that maximizes gene
propagation. To do so, the phenotype must survive and generate offspring, and be better at it than its competitors. Thus, the ultimate,
distal function of rewards is to increase evolutionary fitness by ensuring the survival of the organism and reproduction. It
is agreed that learning, approach, economic decisions, and positive emotions are the proximal functions through which phenotypes obtain
other necessary nutrients for survival, mating, and care for offspring. Behavioral reward functions have evolved to help
individuals to survive and propagate their genes. Apparently, people need to live well and long enough to
reproduce. Most would agree that Homo sapiens do so by ingesting the substances that make their bodies function properly. For this
reason, foods and drinks are rewards. Additional rewards, including those used for economic exchanges, ensure sufficient palatable food and
drink supply. Mating and gene propagation is supported by powerful sexual attraction. Additional properties, like body form, augment the
chance to mate and nourish and defend offspring and are therefore also rewards. Care for offspring until they can reproduce themselves helps
gene propagation and is rewarding; otherwise, many believe mating is useless. According to David E Comings, as any small edge will
ultimately result in evolutionary advantage [21], additional reward mechanisms like novelty seeking and exploration widen the
spectrum of available rewards and thus enhance the chance for survival, reproduction, and ultimate gene propagation. These functions may
help us to obtain the benefits of distant rewards that are determined by our own interests and not immediately available in the environment.
Thus the distal reward function in gene propagation and evolutionary fitness defines the proximal
reward functions that we see in everyday behavior. That is why foods, drinks, mates, and offspring are
rewarding. There have been theories linking pleasure as a required component of health benefits, i.e., salutogenesis (salugenesis). In essence,
under these terms, pleasure is described as a state or feeling of happiness and satisfaction resulting from an
experience that one enjoys. Regarding pleasure, it is a double-edged sword, on the one hand, it promotes positive feelings (like
mindfulness) and even better cognition, possibly through the release of dopamine [22]. But on the other hand, pleasure simultaneously
encourages addiction and other negative behaviors, i.e., motivational toxicity. It is a complex neurobiological phenomenon, relying on reward
circuitry or limbic activity. It is important to realize that through the “Brain Reward Cascade” (BRC) endorphin and endogenous morphinergic
mechanisms may play a role [23]. While natural rewards are essential for survival and appetitive motivation leading to beneficial biological
behaviors like eating, sex, and reproduction, crucial social interactions seem to further facilitate the positive effects exerted by pleasurable
experiences. Indeed, experimentation with addictive drugs is capable of directly acting on reward pathways and causing deterioration of these
systems promoting hypodopaminergia [24]. Most would agree that pleasurable activities can stimulate personal growth and may help to induce
healthy behavioral changes, including stress management [25]. The work of Esch and Stefano [26] concerning the link between compassion and
love implicate the brain reward system, and pleasure induction suggests that social contact in general, i.e., love, attachment, and compassion,
can be highly effective in stress reduction, survival, and overall health. Understanding the role of neurotransmission and pleasurable states
both positive and negative have been adequately studied over many decades [26–37], but comparative anatomical and neurobiological
function between animals and Homo sapiens appears to be required and seems to be in an infancy stage. Finding happiness is different between
apes and humans. As stated earlier in this expert opinion, one key to happiness involves a network of good friends [38]. However, it is not
entirely clear exactly how the higher forms of satisfaction and pleasure are related to a sugar rush, winning a sports event or even sky diving, all
of which augment dopamine release at the reward brain site. Recent multidisciplinary research, using both humans and detailed invasive brain
analysis of animals has discovered some critical ways that the brain processes pleasure. Remarkably, there are pathways for ordinary
liking and pleasure, which are limited in scope as described above in this commentary. However, there are many brain
regions, often termed hot and cold spots, that significantly modulate (increase or decrease) our pleasure or even
produce the opposite of pleasure— that is disgust and fear [39]. One specific region of the nucleus accumbens is
organized like a computer keyboard, with particular stimulus triggers in rows— producing an increase and
decrease of pleasure and disgust. Moreover, the cortex has unique roles in the cognitive evaluation of our feelings of
pleasure [40]. Importantly, the interplay of these multiple triggers and the higher brain centers in the prefrontal cortex are very intricate and
are just being uncovered. Desire and reward centers It is surprising that many different sources of pleasure activate the same circuits between
the mesocorticolimbic regions (Figure 1). Reward and desire are two aspects of pleasure induction and have a very widespread, large circuit. Some
part of this circuit distinguishes between desire and dread. The so-called pleasure circuitry called “REWARD” involves a well-known dopamine
pathway in the mesolimbic system that can influence both pleasure and motivation. In simplest terms, the well-established mesolimbic system
is a dopamine circuit for reward. It starts in the ventral tegmental area (VTA) of the midbrain and travels to the nucleus accumbens (Figure 2). It
is the cornerstone target to all addictions. The VTA is encompassed with neurons using glutamate, GABA, and dopamine. The nucleus
accumbens (NAc) is located within the ventral striatum and is divided into two sub-regions—the motor and limbic regions associated with its
core and shell, respectively. The NAc has spiny neurons that receive dopamine from the VTA and glutamate (a dopamine driver) from the
hippocampus, amygdala and medial prefrontal cortex. Subsequently, the NAc projects GABA signals to an area termed the ventral pallidum
(VP). The region is a relay station in the limbic loop of the basal ganglia, critical for motivation, behavior, emotions and the “Feel Good”
response. This defined system of the brain is involved in all addictions – substance and non-substance related. In 1995, our laboratory coined
the term “Reward Deficiency Syndrome” (RDS) to describe genetic and epigenetic induced hypodopaminergia in the “Brain Reward Cascade”
that contribute to addiction and compulsive behaviors [3,6,41]. Furthermore, ordinary “liking” of something, or pure pleasure, is
represented by small regions mainly in the limbic system (old reptilian part of the brain). These may be part of larger
neural circuits. In Latin, hedus is the term for “sweet”; and in Greek, hodone is the term for “pleasure.” Thus, the word Hedonic is now
referring to various subcomponents of pleasure: some associated with purely sensory and others with more complex emotions involving
morals, aesthetics, and social interactions. The capacity to have pleasure is part of being healthy and may even extend life, especially if linked to
optimism as a dopaminergic response [42]. Psychiatric illness often includes symptoms of an abnormal inability to experience pleasure, referred
to as anhedonia. A negative feeling state is called dysphoria, which can consist of many emotions such as pain, depression, anxiety, fear, and
disgust. Previously many scientists used animal research to uncover the complex mechanisms of pleasure, liking, motivation and even emotions
like panic and fear, as discussed above [43]. However, as a significant amount of related research about the specific brain regions of
pleasure/reward circuitry has been derived from invasive studies of animals, these cannot be directly compared with subjective states
experienced by humans. In an attempt to resolve the controversy regarding the causal contributions of mesolimbic dopamine systems to
reward, we have previously evaluated the three-main competing explanatory categories: “liking,” “learning,” and “wanting” [3]. That is,
dopamine may mediate (a) liking: the hedonic impact of reward, (b) learning: learned predictions about rewarding effects, or (c) wanting: the
pursuit of rewards by attributing incentive salience to reward-related stimuli [44]. We have evaluated these hypotheses, especially as they
relate to the RDS, and we find that the incentive salience or “wanting” hypothesis of dopaminergic functioning is supported by a majority of the
scientific evidence. Various neuroimaging studies have shown that anticipated rewards such as sex and gaming, delicious foods, and drugs of abuse all affect brain regions associated with reward networks, and these effects may not be unidirectional. Drugs of abuse enhance dopamine signaling,
which sensitizes mesolimbic brain mechanisms that apparently evolved explicitly to attribute incentive salience to various rewards [45].
Addictive substances are voluntarily self-administered, and they enhance (directly or indirectly) dopaminergic synaptic function in the NAc. This activates the brain's reward networks, producing the ecstatic "high" that users seek. Although these circuits were initially thought to encode
a set point of hedonic tone, they are now considered far more complicated in function, also encoding attention, reward expectancy, disconfirmation of reward expectancy, and incentive motivation [46]. The argument over whether addiction is a disease is easily confused with the question of predisposition to substance and non-substance rewards, as distinct from the extreme effects of drugs of abuse on brain neurochemistry. The former sets up an individual to be at high risk through both genetic polymorphisms in reward genes and harmful epigenetic insults. Some psychologists, even with all the data, still infer that addiction is not a disease [47]. Elevated stress levels, together with polymorphisms (genetic variations) of various dopaminergic genes and of the genes related to other neurotransmitters, may have an additive effect on vulnerability to various addictions [48]. In this regard, Vanyukov et al. [48], based on a review, suggested that whereas the gateway hypothesis does not specify mechanistic connections between "stages" and does not extend to the risks for addictions, the concept of common liability to addictions may be more parsimonious. The latter theory is grounded in genetic theory and supported by data identifying
common sources of variation in the risk for specific addictions (e.g., RDS). This commonality has an identifiable neurobiological substrate and plausible evolutionary explanations. Over many years, the controversy over dopamine's involvement in "pleasure" especially has led to confusion about separating motivation from actual pleasure (wanting versus liking) [49]. We take the position that animal studies cannot provide the kind of real clinical information that self-reports capture in humans. As mentioned earlier and in the abstract, on November 23rd, 2017, evidence for our concerns was reported [50]. In essence, although nonhuman primate brains are similar to our own, the disparity between the cognitive abilities of humans and those of other primates tells us that surface similarity is not the whole story. In a small case study, Sousa et al. [50] found various differentially expressed genes associated with pleasure-related systems. Furthermore, the
dopaminergic interneurons located in the human neocortex were absent from the neocortex of nonhuman African apes. Such differences in
neuronal transcriptional programs may underlie a variety of neurodevelopmental disorders. In simpler terms, the system controls the
production of dopamine, a chemical messenger that plays a significant role in pleasure and rewards. The senior author, Dr. Nenad Sestan from
Yale, stated: "Humans have evolved a dopamine system that is different than the one in chimpanzees." This may explain why the behavior of humans is so different from that of non-human primates, even though our brains are so surprisingly similar. Sestan said: "It might also shed light
on why people are vulnerable to mental disorders such as autism (possibly even addiction).” Remarkably, this research finding emerged from
an extensive, multicenter collaboration to compare the brains across several species. These researchers examined 247 specimens
of neural tissue from six humans, five chimpanzees, and five macaque monkeys. Moreover, these
investigators analyzed which genes were turned on or off in 16 regions of the brain. While the differences
among species were subtle, there was a remarkable contrast in the neocortices, specifically in an area of
the brain that is much more developed in humans than in chimpanzees. In fact, these researchers found that a gene
called tyrosine hydroxylase (TH), which encodes the enzyme responsible for the production of dopamine, was
expressed in the neocortex of humans, but not chimpanzees. As discussed earlier, dopamine is best known for
its essential role within the brain’s reward system; the very system that responds to everything from sex, to
gambling, to food, and to addictive drugs. However, dopamine also assists in regulating emotional responses, memory, and
movement. Notably, abnormal dopamine levels have been linked to disorders including Parkinson’s, schizophrenia and spectrum disorders such
as autism and addiction or RDS. Nora Volkow, the director of NIDA, pointed out that one alluring possibility is that the neurotransmitter
dopamine plays a substantial role in humans’ ability to pursue various rewards that are perhaps months
or even years away in the future. This same idea has been suggested by Dr. Robert Sapolsky, a professor of biology and neurology at
Stanford University. Dr. Sapolsky cited evidence that dopamine levels rise dramatically in humans when we anticipate potential rewards that
are uncertain and even far off in our futures, such as retirement or even a possible afterlife. This may explain what often
motivates people to work for things that have no apparent short-term benefit [51]. In similar work, Volkow and
Baler [52] proposed a model in which dopamine can favor NOW processes through phasic signaling in reward circuits or LATER processes
through tonic signaling in control circuits. Specifically, they suggest that through its modulation of the orbitofrontal cortex, which processes
salience attribution, dopamine also enables shifting from NOW to LATER, while its modulation of the insula, which processes interoceptive
information, influences the probability of selecting NOW versus LATER actions based on an individual’s physiological state. This hypothesis
further supports the concept that disruptions along these circuits contribute to diverse pathologies, including obesity and addiction or RDS.
2. Regress---It’s impartial, specific to public actors, and resolves the infinite regress of justification that grounds all value.
Greene 15 — (Joshua Greene, Professor of Psychology @ Harvard, being interviewed by Russ Roberts, “Joshua Greene on Moral Tribes,
Moral Dilemmas, and Utilitarianism”, The Library of Economics and Liberty, 1-5-15, Available Online at https://www.econtalk.org/joshuagreene-on-moral-tribes-moral-dilemmas-and-utilitarianism/#audio-highlights, accessed 5-17-20, HKR-AM) **NB: Guest = Greene, and only his
lines are highlighted/underlined
Guest: Okay. So, I think utilitarianism is very much misunderstood. And this is part of the reason why we shouldn't even call it utilitarianism
at all. We should call it what I call 'deep pragmatism', which I think better captures what I think utilitarianism is really like, if you really
apply it in real life, in light of an understanding of human nature. But, we can come back to that. The idea, going back to the tragedy of
common-sense morality, is you've got all these different tribes with all of these different values based on
their different ways of life. What can they do to get along? And I think that the best answer that we have is--well, let's back up. In
order to resolve any kind of tradeoff, you have to have some kind of common metric. You have to have
some kind of common currency. And I think that what utilitarianism does, whether it's the moral truth or not, is provide a kind of common currency. So, what is utilitarianism? It's basically the idea that--it's really two ideas put
together. One is the idea of impartiality. That is, at least as social decision makers, we should regard
everybody's interests as of equal worth. Everybody counts the same. And then you might say, 'Well, but okay, what
does it mean to count everybody the same? What is it that really matters for you and for me and for
everybody else?' And there the utilitarian's answer is what is sometimes called, somewhat accurately and somewhat
misleadingly, happiness. But it's not really happiness in the sense of cherries on sundaes, things that make you
smile. It's really the quality of conscious experience. So, the idea is that if you start with anything that you
value, and say, 'Why do you care about that?' and keep asking, 'Why do you care about that?' or 'Why do you care about that?'
you ultimately come down to the quality of someone's conscious experience. So if I were to say, 'Why did
you go to work today?' you'd say, 'Well, I need to make money; and I also enjoy my work.' 'Well, what
do you need your money for?' 'Well, I need to have a place to live; it costs money.' 'Well, why can't you
just live outside?' 'Well, I need a place to sleep; it's cold at night.' 'Well, what's wrong with being cold?'
'Well, it's uncomfortable.' 'What's wrong with being uncomfortable?' 'It's just bad.' Right? At some point
if you keep asking why, why, why, it's going to come down to the conscious experience--in Bentham's
terms, again somewhat misleading, the pleasure and pain of either you or somebody else that you care about. So
the utilitarian idea is to say, Okay, we all have our pleasures and pains, and as a moral philosophy we
should all count equally. And so a good standard for resolving public disagreements is to say we should go
with whatever option is going to produce the best overall experience for the people who are affected.
Which you can think of as shorthand for maximizing happiness--although I think that that's somewhat misleading. And the solution has a lot of merit to it. But it also has endured a couple of centuries of legitimate criticism. And one of the biggest criticisms--and now we're getting back to the Trolley cases--is that utilitarianism doesn't adequately account for people's rights. So, take the footbridge
case. It seems that it's wrong to push that guy off the footbridge. Even if you stipulate that you can save more people's lives. And so anyone who is going to defend
utilitarianism as a meta-morality--that is, a solution to the tragedy of common sense morality, as a moral system to adjudicate among competing tribal moral
systems--if you are going to defend it in that way, as I do, you have to face up to these philosophical challenges: is it okay to kill one person to save five people in this
kind of situation? So I spend a lot of the book trying to understand the psychology of cases like the footbridge
case. And you mention these being kind of unrealistic and weird cases. That's actually part of my
defense.
Reducing existential risks is the top priority in any coherent moral theory.
Pummer 15 (Theron, Philosophy @St. Andrews http://blog.practicalethics.ox.ac.uk/2015/05/moral-agreement-on-saving-the-world/)
There appears to be a lot of disagreement in moral philosophy. Whether or not these many apparent disagreements are deep and irresolvable, I believe
there is at least one thing it is reasonable to agree on right now, whatever general moral view we adopt : that
it is very important to reduce the risk that all intelligent beings on this planet are eliminated by an enormous
catastrophe, such as a nuclear war. How we might in fact try to reduce such existential risks is discussed elsewhere. My claim here is only
that we – whether we’re consequentialists, deontologists, or virtue ethicists – should all agree that we
should try to save the world. According to consequentialism, we should maximize the good, where this is taken to be the
goodness, from an impartial perspective, of outcomes. Clearly one thing that makes an outcome good is that the people in it are doing well.
There is little disagreement here. If the happiness or well-being of possible future people is just as important as that of people who already
exist, and if they would have good lives, it is not hard to see how reducing existential risk is easily the most important thing in the whole world.
This is for the familiar reason that there are so many people who could exist in the future – there are trillions upon trillions… upon trillions.
There are so many possible future people that reducing existential risk is arguably the most important
thing in the world, even if the well-being of these possible people were given only 0.001% as much weight as that of existing people.
Even on a wholly person-affecting view – according to which there’s nothing (apart from effects on existing people) to be said in
favor of creating happy people – the case for reducing existential risk is very strong. As noted in this seminal paper, this case
is strengthened by the fact that there’s a good chance that many existing people will, with the aid of life-extension technology, live very long
and very high quality lives. You might think what I have just argued applies to consequentialists only. There is a
tendency to assume that, if an argument appeals to consequentialist considerations (the goodness of
outcomes), it is irrelevant to non-consequentialists. But that is a huge mistake. Non-consequentialism is the view that there's more that determines rightness than the goodness of consequences or outcomes; it is not the view that the latter don't matter. Even John Rawls wrote, “All
ethical doctrines worth our attention take consequences into account in judging rightness. One which
did not would simply be irrational, crazy.” Minimally plausible versions of deontology and virtue
ethics must be concerned in part with promoting the good, from an impartial point of view.
They’d thus imply very strong reasons to reduce existential risk, at least when this doesn’t significantly involve doing
harm to others or damaging one’s character. What’s even more surprising, perhaps, is that even if our own good (or that of those near and dear
to us) has much greater weight than goodness from the impartial “point of view of the universe,” indeed even if the latter is entirely morally
irrelevant, we may nonetheless have very strong reasons to reduce existential risk. Even egoism, the view that each agent should maximize
her own good, might imply strong reasons to reduce existential risk. It will depend, among other things, on what one’s own
good consists in. If well-being consisted in pleasure only, it is somewhat harder to argue that egoism would imply strong reasons to reduce
existential risk – perhaps we could argue that one would maximize her expected hedonic well-being by funding life extension technology or by
having herself cryogenically frozen at the time of her bodily death as well as giving money to reduce existential risk (so that there is a world for
her to live in!). I am not sure, however, how strong the reasons to do this would be. But views which imply that, if I don’t care about other
people, I have no or very little reason to help them are not even minimally plausible views (in addition to hedonistic egoism, I here have in mind
views that imply that one has no reason to perform an act unless one actually desires to do that act). To be minimally plausible, egoism will
need to be paired with a more sophisticated account of well-being. To see this, it is enough to consider, as Plato did, the possibility of a ring of
invisibility – suppose that, while wearing it, Ayn could derive some pleasure by helping the poor, but instead could derive just a bit more by
severely harming them. Hedonistic egoism would absurdly imply she should do the latter. To avoid this implication, egoists would need to build
something like the meaningfulness of a life into well-being, in some robust way, where this would to a significant extent be a function of other-regarding concerns (see chapter 12 of this classic intro to ethics). But once these elements are included, we can (roughly, as above) argue that
this sort of egoism will imply strong reasons to reduce existential risk. Add to all of this Samuel Scheffler’s recent intriguing arguments (quick
podcast version available here) that most of what makes our lives go well would be undermined if there
were no future generations of intelligent persons. On his view, my life would contain vastly less well-being if (say) a year after
my death the world came to an end. So obviously if Scheffler were right I’d have very strong reason to reduce existential
risk. We should also take into account moral uncertainty. What is it reasonable for one to do, when
one is uncertain not (only) about the empirical facts, but also about the moral facts? I’ve just argued that there’s agreement
among minimally plausible ethical views that we have strong reason to reduce existential risk – not only consequentialists, but also
deontologists, virtue ethicists, and sophisticated egoists should agree. But even those (hedonistic egoists) who disagree should
have a significant level of confidence that they are mistaken, and that one of the above views is correct. Even
if they were 90% sure that their view is the correct one (and 10% sure that one of these other ones is correct), they
would have pretty strong reason, from the standpoint of moral uncertainty, to reduce
existential risk. Perhaps most disturbingly still, even if we are only 1% sure that the well-being of possible future people matters, it is at least arguable that, from the standpoint of moral uncertainty, reducing existential risk is the most important thing in the world. Again, this is largely for the reason that there are so many people
who could exist in the future – there are trillions upon trillions… upon trillions. (For more on this and other related issues, see this
excellent dissertation). Of course, it is uncertain whether these untold trillions would, in general, have good
lives. It’s possible they’ll be miserable. It is enough for my claim that there is moral agreement in the relevant sense
if, at least given certain empirical claims about what future lives would most likely be like, all minimally
plausible moral views would converge on the conclusion that we should try to save the world. While there
are some non-crazy views that place significantly greater moral weight on avoiding suffering than on promoting happiness, for reasons others
have offered (and for independent reasons I won’t get into here unless requested to), they nonetheless seem to be fairly implausible views.
And even if things did not go well for our ancestors, I am optimistic that they will overall go fantastically
well for our descendants, if we allow them to. I suspect that most of us alive today – at least those of us not suffering
from extreme illness or poverty – have lives that are well worth living, and that things will continue to improve.
Derek Parfit, whose work has emphasized future generations as well as agreement in ethics, described our situation clearly and accurately: “We
live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast.
We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next
few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading
through this galaxy…. Our descendants might, I believe, make the further future very good. But that good future may also depend in part on us.
If our selfish recklessness ends human history, we would be acting very wrongly.” (From chapter 36 of On What Matters)
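The expected-value arithmetic behind Pummer's "trillions upon trillions" claim can be made explicit. A minimal sketch, using illustrative numbers that are assumptions rather than figures from the card (N = possible future people, w = weight given to their well-being, P = the existing population):

\[ N = 10^{18}, \qquad w = 0.001\% = 10^{-5}, \qquad P \approx 10^{10} \]
\[ w \cdot N = 10^{-5} \times 10^{18} = 10^{13} \gg P \]

Even after discounting future well-being by a factor of one hundred thousand, the weighted stakes of extinction exceed the interests of everyone now alive by roughly a thousandfold; the moral-uncertainty step has the same structure, since multiplying again by a mere 1% credence still leaves the product above P.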
We’re responsible for intervening actors in virtue of our choice not to intervene.
Uniacke (University of Wollongong, NSW, Australia) ’99 (Suzanne, Jun99, International Journal of
Philosophical Studies, “Absolutely Clean Hands? Responsibility for What’s Allowed in Refraining from
What’s Not Allowed,” Vol. 7 Issue 2, p189, 21p)
We bear responsibility for the outcome of another’s actions, for instance, when we provoke these
actions (Iago); or when we supply the means (Kevorkian), identification (Judas), or incentive (Eve); or
where we encourage another to act as he [or she] does (Lady Macbeth). Despite his disclaimer, Pilate
cannot acquit himself entirely of the outcome of what others decide simply by ceding the judgment to
them. In these examples agents are indirectly, partly responsible for the outcomes of what others do in
virtue of something they themselves have done. But indirect, partial responsibility for what another
person does can also arise through an agent’s non-intervention and be grounded in intention or fault; for
example, when Arthur does not prevent Brian killing Catherine, because Arthur wants Catherine dead, or because Arthur simply cannot be
bothered to warn her or call the police. Of course attributions of indirect, partial responsibility can be difficult. And as far as absolutism is
concerned, the relevant sense of ‘brings about’, outlined earlier, will sometimes be quite stretched where an agent is attributed with
responsibility for what someone else does. All the same, by our non-intervention we can help bring about some things
that are directly and voluntarily caused by others.29
Saying anything else is self-contradictory AND violates their so-called categorical
obligation to prevent harm.
Suzanne Uniacke, University of Wollongong at Australia, June 1999, International Journal of
Philosophical Studies, “Absolutely Clean Hands? Responsibility for What’s Allowed in Refraining from
What’s Not Allowed,” Vol. 7 Issue 2, p189, 21p
The principle of intervening action is difficult to reconcile with Gewirth’s earlier comment that if
‘someone threatens . . . to kill innocent hostages if we do not break a promise . . . breaking the
promise would be the obviously right course, by the criterion of degrees of necessity for action’.34 By
the principle of intervening action, the person issuing the threat is responsible for the hostages’
deaths; and by parity of reasoning in Abrams’s case, since we are not responsible for these deaths,
the fact that we can prevent them does not affect our moral duty to keep our promise. Gewirth may
well intend to confine application of the principle of intervening action to cases where ‘the
conflicting rights are of the same supreme degree of importance’.35 But this restriction is ad
hoc. According to the principle, A’s responsibility for C’s incurring a certain harm Z is removed by B’s
intervention. The nature of what A needs to do in order to prevent C’s incurring Z is irrelevant
to the principle itself. But surely it is precisely the nature of what A would have to do in order to
prevent Z that is crucial to the absolutist claim that A is not responsible for Z in acute cases.
Agents are morally responsible for non-prevention.
Suzanne Uniacke, University of Wollongong at Australia, June 1999, International Journal of
Philosophical Studies, “Absolutely Clean Hands? Responsibility for What’s Allowed in Refraining from
What’s Not Allowed,” Vol. 7 Issue 2, p189, 21p
In this paper I address a common absolutist response to the charge of ‘culpably clean hands’. Specifically, I examine the absolutist grounds
for denying an agent’s responsibility for what he allows to happen in ‘[keeping his] hands clean’ in acute circumstances. In defending the
non-prevention of what is, viewed impersonally, the greater harm in such cases, absolutists insist on a
difference in responsibility between what an agent brings about as opposed to what he allows to
happen. Typically, the absolutist defence of non-intervention in acute cases pivots on this alleged difference: the agent’s obligation not
to do harm is said to be more stringent than his obligation to prevent (comparable) harm, since as agents we are principally responsible for
what we ourselves do. My central point is that this representation of the absolutist response to acute cases – as
grounded in a difference in responsibility for what we do as opposed to what we allow – involves a misleading theoretical
inversion. I argue that the absolutist justification of an agent’s non-intervention in acute cases will depend
on a direct defence of the nature and the stringency of the moral norm with which the
agent’s non-intervention complies. The nature and stringency of this norm are basic to attribution
of agent responsibility in acute cases, and not the other way around. I focus on absolutism, but my
argument is relevant to any nonconsequentialist justification of non-intervention which invokes the claim
that we are principally responsible for what we do as opposed to what we allow to happen.
1NR---Round 8---NDT
Case
Case---1NR
Calc indicts are wrong.
Hardin 90 Hardin, Russell (Helen Gould Shepard Professor in the Social Sciences @ NYU). May 1990. Morality within the Limits of Reason.
University Of Chicago Press. pp. 4. ISBN 978-0226316208. JDN.
One of the cuter charges against utilitarianism is that it is irrational in the following sense. If I take the time to calculate
the consequences of various courses of action before me, then I will ipso facto have chosen the course of action to take, namely, to sit and
calculate, because while I am calculating the other courses of action will cease to be open to me. It should embarrass
philosophers that they have ever taken this objection seriously. Parallel considerations in other realms
are dismissed with eminently good sense. Lord Devlin notes, “If the reasonable man ‘worked to rule’ by perusing to the point of
comprehension every form he was handed, the commercial and administrative life of the country would creep to a standstill.” James March
and Herbert Simon escape the quandary of unending calculation by noting that often we satisfice, we do not maximize: we
stop calculating and considering when we find a merely adequate choice of action. When, in principle, one cannot
know what is the best choice, one can nevertheless be sure that sitting and calculating is not the best choice.
But, one may ask, How do you know that another ten minutes of calculation would not have produced a better choice? And one can only
answer, You do not. At some point the quarrel begins to sound adolescent. It is ironic that the point of the quarrel is almost never at issue in practice (as Devlin implies, we are almost all too reasonable in practice to bring the world to a standstill) but only in the principled discussions of academics.
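Hardin's satisficing point can be put in procedural terms. A minimal sketch, assuming hypothetical helper names and toy values that are not drawn from the card, of March and Simon's stopping rule versus exhaustive maximization:

def satisfice(options, evaluate, threshold):
    """Return the first option whose value clears an adequacy threshold."""
    for option in options:
        if evaluate(option) >= threshold:
            return option  # adequate choice found; stop calculating
    return None  # nothing cleared the bar

def maximize(options, evaluate):
    """Exhaustively score every option (the regress Hardin dismisses)."""
    return max(options, key=evaluate)

values = {"sit and calculate": 0.0, "decent plan": 0.7, "best plan": 0.9}
print(satisfice(values, values.get, threshold=0.6))  # -> "decent plan"
print(maximize(values, values.get))                  # -> "best plan"

Iterating over the dict yields its keys in insertion order, so the satisficer halts at "decent plan" without ever scoring "best plan"; that early exit is exactly what blocks the charge of unending calculation.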
Large, short-term consequences outweigh, precisely because of epistemic uncertainty.
Cowen 6 --- Professor at George Mason University.
Tyler, “The Epistemic Problem Does Not Refute Consequentialism,” George Mason University, Utilitas
Vol. 18, No. 4, December 2006
Let us start with a simple example, namely a suicide bomber who seeks to detonate a nuclear device
in midtown Manhattan. Obviously, we would seek to stop the bomber, or at least try to reduce the
probability of a detonation. We can think of this example as standing in more generally for choices,
decisions, and policies that affect the long-term prospects of our civilization.
If we stop the bomber, we know that in the short run we will save millions of lives, avoid a
massive tragedy, and protect the long-term strength, prosperity, and freedom of the United States.
Reasonable moral people, regardless of the details of their metaethical stances, should not argue
against stopping the bomber.
No matter how hard we try to stop the bomber, we are not, a priori, committed to a very definite view
of how the long run will play out. After
all, stopping the bomber will reshuffle future genetic identities, and may bring about the birth of
a future Hitler. We can of course imagine possible scenarios where such destruction works out for the
better ex post. Perhaps, for instance, the explosion leads to subsequent disarmament or anti-proliferation advances. But we would not breathe a sigh of relief on hearing the news of the destruction
for the first time. Stopping the bomber brings a significant net welfare improvement in the short run,
while we face radical generic uncertainty about the future in any case.
Furthermore, if we can stop the bomber, our long-run welfare estimates will likely show some
improvement as well. The bomb going off could lead to subsequent attacks on other major cities, the
emboldening of terrorists, or perhaps broader panics. There would be a new and very real doorway
toward the general collapse of the world. While the more distant future is remixed radically, we should
not rationally believe that some new positive option has been created to counterbalance the current
destruction and its radically negative potential implications. To put it simply, it is difficult to
see the violent destruction of Manhattan as on net – in ex ante terms – favoring either the short-term or long-term prospects of the world.
Even if the long-run expected value is impossible to estimate, we need only some probability that the
relevant time horizon is indeed short (perhaps a destructive asteroid will strike the earth). This will tip
the consequentialist balance against a nuclear attack on Manhattan. Now it is not a legitimate response
simply to assume away the epistemic problem by considering only the short time
horizon. But if the future is truly radically uncertain, as the epistemic argument suggests, we cannot rule
out some chance of a short time horizon. And if everything else were truly incalculable and impossible
to estimate, we should be led to assign decisive weight to this short time horizon scenario. We again
should stop the bomber.
If the Manhattan example does not convince you, consider the value of stopping a terrorist attack that
would decimate the entire United States. Or consider an attack that would devastate all of Western
civilization, or the entire world. At some point we can find a set of consequences so significant that we would be spurred to action, again in open recognition of broader long-run uncertainties.
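Cowen's tie-breaking move can be stated in one line. A minimal formalization, with notation introduced here rather than taken from the card (p, V_s, V_l):

\[ \mathbb{E}[V_{\text{stop}}] = p\,V_s + (1-p)\,\mathbb{E}[V_\ell] \]

Here p is the probability that the relevant time horizon is short, V_s is the large, well-estimated short-run value of stopping the attack, and V_l is the radically uncertain long-run value. If radical uncertainty gives no reason to expect destruction to help rather than hurt, then E[V_l] is approximately zero, so E[V_stop] ≈ p·V_s > 0 for any nonzero p, and the consequentialist balance tips toward stopping the bomber.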
All omissions are actions: they are caused by mental states that lead to non-action.
Sartorio 09. CAROLINA SARTORIO, Omissions and Causalism, University of Arizona Volume 43, Issue 3,
Article first published online: 3 AUG 2009
Second, a causalist could claim that other things besides events can be causes and effects but causal talk involving events is still the most basic kind of causal talk. In particular, causal talk involving omissions and other absences can be true, but it is made true, ultimately, by causal talk involving events. This is Vermazen’s suggestion in his (1985), which Davidson explicitly embraces in his reply to Vermazen (Davidson 1985). How can a causalist do this? Roughly, Vermazen’s idea is the following. Imagine that I am tempted to eat some fattening morsels, but I refrain. Then my passing on the morsels is an intentional omission because the relevant mental states/events (pro-attitudes, intentions, etc.) cause my not eating the morsels, and this is, in turn, because, had those mental states been absent, then some other mental states/events (competing pro-attitudes, intentions, etc.) would have caused my eating the morsels. In other words, actual causal talk involving omissions is made true by counterfactual causal talk involving positive occurrences or events.