Rd 2 Neg vs Weber Shoell/Shackleford - openCaselist 2012-2013

1AC
Thorium
Natives/uranium mining adv
Disarm adv
1NC
T
1. Interpretation – the aff must remove direct restrictions on nuclear power
Restrictions are ON production
Dictionary.com no date. “On” (n.d.), Dictionary.com Unabridged. Retrieved September 26, 2012, from Dictionary.com website:
http://dictionary.reference.com/browse/on
16. (used to indicate a source or a person or thing that serves as a source or agent): a duty on imported goods; She depends on her friends for encouragement.
That’s contextual
Heritage Foundation 12 http://www.askheritage.org/what-should-the-government-do-to-createjobs/ Cited by Dave Arnett and Andrea Reed in “Reduce Restrictions and Increase Incentives Mechanism
Wording Paper” which came exceedingly close to the final resolution.
Congress should instead promote entrepreneurship and investment with policies that would create jobs without adding to the deficit.
Congress can do this through a combination of explicit actions and by eliminating specific, Washington-based threats to the economy: Freezing
all proposed tax hikes and costly regulations at least until unemployment falls below 7 percent, including health care and energy taxes; Freezing
spending and rescinding unspent stimulus funds; Reforming regulations to reduce unnecessary business costs, such as repealing Section 404 of
the Sarbanes-Oxley Act; Reforming the tort system to lower costs and uncertainty facing businesses; Removing
federal law and
regulations that restrict domestic energy production; Suspending the job-killing Davis-Bacon Act (DBA) that requires
federal construction contractors to pay prevailing wage rates that average 22 percent above market rates; Passing pending free-trade
agreements with South Korea, Colombia, and Panama, which nearly all economists agree will help the economy; and Reducing taxes on
companies’ foreign earnings so they will transfer that money back into America’s economy. Congress must recognize that a strong recovery and
new hiring depends on the confidence businesses have in the future. Uncertainty is a fact of life for all businesses, but when Washington adds
materially to that uncertainty, businesses invest less and hire less. This is especially true following a deep recession, with so many producers still
struggling with excess capacity. The most powerful, no-cost strategy Congress can adopt is to stop threatening those in a position to hire–no
more taxes, no cap-and-trade legislation, no government takeover of private health care, and no more massive increases in the public debt.
Nuclear power production is electrical energy produced by nuclear reaction
West's Encyclopedia of American Law, edition 2. S.v. "Nuclear Power." Retrieved August 15,
2012, from http://legal-dictionary.thefreedictionary.com/Nuclear+Power
A form of energy produced by an atomic reaction, capable of producing an alternative source of
electrical power to that supplied by coal, gas, or oil.
2. Violation – the aff removes restrictions on a fuel bank, not nuclear power
3. Standards
a. Limits—anything can indirectly affect production including tons of bidirectional
mechanisms—only direct actions predicated on production are predictable
EIA ’92 Office of Energy Markets and End Use, Energy Information Administration, US Department of
Energy, “Federal Energy Subsidies: Direct and Indirect Interventions in Energy Markets,” 1992,
ftp://tonto.eia.doe.gov/service/emeu9202.pdf
In some sense, most Federal policies have the potential to affect energy markets. Policies supporting
economic stability or economic growth have energy market consequences; so also do Government policies
supporting highway development or affordable housing. The interaction between any of these policies
and energy market outcomes may be worthy of study. However, energy impacts of such policies
would be incidental to their primary purpose and are not examined here. Instead, this report focuses on
Government actions whose prima facie purpose is to affect energy market outcomes, whether through
financial incentives, regulation, public enterprise, or research and development.
b. Even if they’re fx topical, that wrecks neg ground because it lets them generate
advantages from a limitless number of unpredictable mechanisms which deviate from
the focal point of neg research
And we provide ample aff ground – they can remove NRC restrictions like waste
confidence
Default to competing interpretations – most objective way to evaluate T without
resulting in judge intervention
K
Nuclear power is a capitalist shell game to trick massive subsidies out of the
government and sustain imperial domination
ICC ’11 “Nuclear energy, capitalism and communism,” International Communist Current, 8/16/2011,
http://en.internationalism.org/wr/347/nuclear
The potential to use nuclear fission or fusion to produce power has been known about for around a century but it was only after the Second
World War that it was actually realised. Thus, while its general context is that outlined above, the
specific context is the post-war
situation dominated by the rivalry between the USA and USSR and the nuclear arms race that
resulted. The development of nuclear power is thus not only inextricably linked to that of nuclear
weapons but was arguably a smokescreen for the latter. In the early 1950s the American government
was concerned about the public’s response to the danger of the nuclear arsenal it was assembling and
the strategy of first strike that was being propounded. Its response was to organise a campaign
known as Operation Candor to win the public over through adverts across the media (including comic
books) and a series of speeches by President Eisenhower that culminated in the announcement at the UN
General Assembly of the ‘Atoms for Peace’ programme to “encourage world-wide investigation into the most
effective peacetime uses of fissionable materials.”[19] The plan included sharing information and resources, and the US and
USSR jointly creating an international stockpile of fissionable material. In the years that followed the arms race went on unabated
and nuclear weapons spread to other powers, often under the guise of a civilian nuclear power
programme, as in Israel and India. The initial reactors produced large quantities of material for nuclear weapons and small
amounts of very expensive electricity. The sharing of nuclear knowledge became part of global imperialist struggles;
thus in the late 1950s Britain secretly supplied Israel with heavy water for the reactor it was building with French assistance.[20] Despite
talk about energy too cheap to meter, nuclear power has never fulfilled this promise and has relied on
state support to cover its real cost. Even where private companies build and run plants there are
usually large open or hidden subsidies. For example privatisation of the nuclear industry in Britain failed
when Thatcher attempted it in the 1980s because private capital identified there were unquantifiable
costs and risks. It was only in 1996, when the ageing Magnox reactors that would soon need decommissioning were excluded from the
deal that private investors were prepared to buy British Energy at a knockdown price of £2bn. Six years later the company had to be bailed out
with a £10bn government loan.[21] While advocates of nuclear energy today argue that it is cheaper than other sources this remains a
questionable assertion. In 2005 the World Nuclear Association stated that “In most industrialized countries today, new nuclear power plants
offer the most economical way to generate base-load electricity even without consideration of the geopolitical and environmental advantages
that nuclear energy confers” and published a range of data to support the claim that construction, financing, operating and waste and
decommissioning costs have all reduced.[22] Between 1973 and 2008 the proportion of energy from nuclear reactors grew from 0.9% of the
global total to 5.8%.[23] A report published in 2009, commissioned by the German Federal Government,[24]
makes a far more critical evaluation of the economics of nuclear power and questions the idea that there is a nuclear renaissance underway.
The report points
out that the number of reactors has fallen over the last few years in contrast to the
widespread forecasts of increases in both reactors and the power produced. The increase in the
amount of power generated that has taken place during this period is the result of upgrading the
existing reactors and extending their operational life. It goes on to argue that there is a lot of uncertainty
about the reactors currently described as being ‘under construction’, with a number having been in
this position for over 20 years. The number under construction has fallen from the peak of over 200 in 1980 to below 50 in 2006. As
regards the economics of nuclear power, the report points to the high level of uncertainty in all areas including financing, construction,
operation and decommissioning. It shows that the
state remains central to all nuclear projects, regardless of who
they are formally owned and operated by. One aspect of this is the various forms of subsidy provided
by the state to support capital costs, waste management and plant closure and price support. Another
has been the necessity for the state to limit the liability of the industry in order for the private sector
to accept the risks. Thus in 1957 the US government stepped in when insurance companies refused to agree insurance because they
were unable to quantify the risk.[25] Today it is estimated that “In general national limits are in the order of a few hundred million Euro, less
than 10% of the cost of building a plant and far less than the cost of the Chernobyl accident.”[26] The dangers
of nuclear energy are
as fiercely debated as the costs and the scientific evidence seems to be very variable. This is particularly
the case with the Chernobyl disaster where the estimates of the deaths that resulted vary widely. A World Health
Organisation Report found that 47 of the 134 emergency workers initially involved had died as a result of contamination by 2004[27] and
estimated that there would be just under 9,000 excess deaths from cancer as a result of the disaster.[28] A
report by Russian
scientists published in the Annals of the New York Academy of Sciences estimated that from the date of the accident until
2006 some 985,000 additional deaths had resulted from the accident from cancer and a range of other
diseases.[29] For those without specialist medical and scientific knowledge this is difficult to unravel, but what is less questionable is the
massive level of secrecy and falsification that runs from the decision by the British government to withhold publication of the report into one of
the first accidents in the industry at Windscale in 1957 to Fukushima today where the true scale of the disaster only emerged slowly. Returning
to Chernobyl, the Russian government did not report the accident for several days, leaving the local population to continue living and working
amidst the radiation. But it was not only Russia. The French government minimised the radiation levels reaching the country[30] and told its
population that the radiation cloud that spread across the whole of Europe had not passed over France![31] Meanwhile the British government
reassured the country that there was no risk to health, reporting levels of radiation that were forty times lower than they actually were[32],
and then quarantined hundreds of farms. As late as 2007, 374 farms in Britain still remained under the special control scheme.[33] Nuclear
energy is being pushed by various governments as a ‘green’ solution to the problems associated with
fossil fuels. This is largely a smokescreen to hide the real motives, which are concerns about the
possible exhaustion of oil, the increasing price of oil and the risks associated with a dependence on
energy resources outside the state’s control. This green facade is slipping as the economic crisis leads states to return to
coal[34] and to push down the costs of exploiting new sources of oil, much of which is physically hard to access, or requires processes that
pollute and despoil the environment, such as coal-tar sands. Energy
supplies have also been a factor in the imperialist
struggles over recent years and it seems likely that this may increase in the period ahead. Nuclear
energy then comes back to where it started as a source of fissile material and a cover for weapons
programmes.
Increases in production efficiency and effectiveness only magnify energy shortages
and crises
Foster et al. ’10 John Bellamy Foster, professor of sociology at University of Oregon, Brett Clark,
assistant professor of sociology at North Carolina State University, and Richard York, associate professor
of sociology at University of Oregon, “Capitalism and the Curse of Energy Efficiency,” Monthly Review,
November 2010, Vol. 62, Issue 6, pp. 1-12
The Jevons Paradox is the product of a capitalist economic system that is unable to conserve on a
macro scale, geared, as it is, to maximizing the throughput of energy and materials from resource tap to
final waste sink. Energy savings in such a system tend to be used as a means for further development of
the economic order, generating what Alfred Lotka called the “maximum energy flux,” rather than minimum
energy production.34 The deemphasis on absolute (as opposed to relative) energy conservation is built
into the nature and logic of capitalism as a system unreservedly devoted to the gods of production
and profit. As Marx put it: “Accumulate, accumulate! That is Moses and the prophets!”35 Seen in the context of a capitalist society, the
Jevons Paradox therefore demonstrates the fallacy of current notions that the environmental
problems facing society can be solved by purely technological means. Mainstream environmental
economists often refer to “dematerialization,” or the “decoupling” of economic growth, from
consumption of greater energy and resources. Growth in energy efficiency is often taken as a concrete
indication that the environmental problem is being solved. Yet savings in materials and energy, in the
context of a given process of production, as we have seen, are nothing new; they are part of the
everyday history of capitalist development.36 Each new steam engine, as Jevons emphasized, was more
efficient than the one before. “Raw materials-savings processes,” environmental sociologist Stephen Bunker noted,
“are older than the Industrial Revolution, and they have been dynamic throughout the history of
capitalism.” Any notion that reduction in material throughput, per unit of national income, is a new
phenomenon is therefore “profoundly ahistorical.”37 What is neglected, then, in simplistic notions that increased
energy efficiency normally leads to increased energy savings overall, is the reality of the Jevons Paradox relationship—
through which energy savings are used to promote new capital formation and the proliferation of
commodities, demanding ever greater resources. Rather than an anomaly, the rule that efficiency
increases energy and material use is integral to the “regime of capital” itself.38 As stated in The Weight of
Nations, an important empirical study of material outflows in recent decades in five industrial nations (Austria, Germany, the Netherlands, the
United States, and Japan): “Efficiency
gains brought by technology and new management practices have been
offset by [increases in] the scale of economic growth.”39
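To see the card's mechanism in numbers, here is a minimal arithmetic sketch (the symbols and figures below are illustrative assumptions of ours, not from the evidence): write total energy use as output divided by efficiency; if efficiency doubles while cheaper energy services let output triple, total energy use still rises by half.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy rebound arithmetic (illustrative numbers, not from the card).
% E = total energy use, Q = output of energy services, \eta = efficiency.
\[
E = \frac{Q}{\eta}, \qquad
\eta \to 2\eta, \quad Q \to 3Q
\quad\Longrightarrow\quad
E \to \frac{3Q}{2\eta} = \tfrac{3}{2}\,E .
\]
% Efficiency doubled, yet total energy use rose 50 percent: the Jevons Paradox.
\end{document}
```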
Capital reduces staggeringly large populations to servility at the whim of multinational
corporations, fomenting widespread social instability—the only hope for the world is
fundamental economic change
Foster and McChesney ’12 John Bellamy Foster, professor of sociology at University of Oregon, and
Robert W. McChesney, Gutgsell Endowed Professor of Communication, University of Illinois-Urbana-Champaign, “The Endless Crisis,” Monthly Review, May 2012, vol. 64, issue 1, pp. 1-28
The biggest question mark generated by this new phase of accumulation today is the rapid growth of
a few large emerging economies, particularly China and India. The vagaries of an accumulation system
in these countries based on the exploitation of massive reserve armies of workers (in China a “floating
population” of peasants) in the hundreds of millions, which cannot be absorbed internally through
the standard industrialization process, makes the future of the new Asia uncertain. The imperial rent
exacted by multinationals, who also control the global supply chains, means that emerging economies
face what may appear to be an open door to the world market, but must proceed along paths
controlled from outside.74 The vast inequality built into a model of export-oriented development based on low-wage labor creates
internal fault lines for emerging economies. China is now the site of continual mass protests, occurring on a scale
of hundreds of thousands annually. In an article entitled “Is China Ripe for Revolution?” in the February 12, 2012 New York
Times, Stephen R. Platt wrote that the Taiping Rebellion of the nineteenth century might stand as a historical
reminder of the possibility of another major “revolution from within” in that country (in which case, he
notes, Washington would most likely find itself “hoping for that revolution to fail”).75 In many ways the world situation, with minor
modifications, conforms to the diagnosis provided by Che Guevara at the Afro-Asian Conference in Algeria in 1965: “Ever
since
monopoly capital took over the world, it has kept the greater part of humanity in poverty, dividing all
the profits among the group of the most powerful countries…. There should be no more talk about
developing mutually beneficial trade based on prices forced on the backward countries by the law of
value and the international relations of unequal exchange that result from the law of value.”76 If some
emerging economies are now developing rapidly, the dominant reality is the global labor arbitrage
that is increasing the level of exploitation worldwide, the greatest burden of which is falling on the
global South. An underlying premise throughout our analysis is that imperialist divisions within the world remain and
are even deepening, enforcing wide disparities in living conditions. Still, in the age of global monopoly-finance capital working people everywhere are increasingly suffering—a phenomenon that Michael Yates has
referred to as “The Great Inequality.”77 Entrenched and expanding monopolies of wealth, income, and power are
aimed at serving the interests of a minuscule portion of the world population, now known as the 1%—or
the global ruling classes of contemporary monopoly-finance capital. The world is being subjected to a
process of monopolistic capital accumulation so extreme and distorted that not only has it produced
the Great Inequality and conditions of stagnation and financial instability, but also the entire planet as
a place of human habitation is being put in peril in order to sustain this very system.78 Hence, the future
of humanity—if there is to be one at all—now lies with the 99%. “If the system itself is at fault,” Gar
Alperovitz observes in his America Beyond Capitalism, “then self-evidently—indeed, by definition—a solution would
ultimately require the development of a new system.”79
Overexpansion triggers multiple scenarios for extinction
Foster ‘5 John Bellamy Foster, professor of sociology at the University of Oregon, "Naked Imperialism,"
Monthly Review, Vol. 57 No. 4, 2005
From the longer view offered by a historical-materialist critique of capitalism, the direction that would be taken by U.S. imperialism following
the fall of the Soviet Union was never in doubt. Capitalism
by its very logic is a globally expansive system. The
contradiction between its transnational economic aspirations and the fact that politically it remains
rooted in particular nation states is insurmountable for the system. Yet, ill-fated attempts by individual
states to overcome this contradiction are just as much a part of its fundamental logic. In present world
circumstances, when one capitalist state has a virtual monopoly of the means of destruction, the
temptation for that state to attempt to seize full-spectrum dominance and to transform itself into the
de facto global state governing the world economy is irresistible. As the noted Marxian philosopher István Mészáros
observed in Socialism or Barbarism? (2001)—written, significantly, before George W. Bush became president: “[W]hat is at stake
today is not the control of a particular part of the planet—no matter how large—putting at a disadvantage but still
tolerating the independent actions of some rivals, but the control of its totality by one hegemonic economic and
military superpower, with all means—even the most extreme authoritarian and, if needed, violent
military ones—at its disposal.” The unprecedented dangers of this new global disorder are revealed in
the twin cataclysms to which the world is heading at present: nuclear proliferation and hence
increased chances of the outbreak of nuclear war, and planetary ecological destruction. These are
symbolized by the Bush administration’s refusal to sign the Comprehensive Test Ban Treaty to limit nuclear weapons development and by its
failure to sign the Kyoto Protocol as a first step in controlling global warming. As former U.S. Secretary of Defense (in the Kennedy and Johnson
administrations) Robert McNamara stated in an article entitled “Apocalypse Soon” in the May–June 2005 issue of Foreign Policy: “The
United States has never endorsed the policy of ‘no first use,’ not during my seven years as secretary or since. We have been and remain
prepared to initiate the use of nuclear weapons—by the decision of one person, the president—against either a nuclear or
nonnuclear enemy whenever we believe it is in our interest to do so.” The nation with the greatest conventional military
force and the willingness to use it unilaterally to enlarge its global power is also the nation with the
greatest nuclear force and the readiness to use it whenever it sees fit—setting the whole world on
edge. The nation that contributes more to carbon dioxide emissions leading to global warming than
any other (representing approximately a quarter of the world’s total) has become the greatest obstacle to addressing
global warming and the world’s growing environmental problems—raising the possibility of the
collapse of civilization itself if present trends continue.
No turns—capitalism tends toward stagnation—market interventions and innovative
spurts continually magnify structural contradictions
Foster and McChesney ’12 John Bellamy Foster, professor of sociology at University of Oregon, and
Robert W. McChesney, Gutgsell Endowed Professor of Communication, University of Illinois-Urbana-Champaign, “The Endless Crisis,” Monthly Review, May 2012, vol. 64, issue 1, pp. 1-28
Nearly twenty years later, Sweezy, writing with Paul Baran, published their now classic study, Monopoly Capital, which was to have a strong
influence on New Left economics in the 1970s. “The
normal state of the monopoly capitalist economy,” they declared,
“is stagnation.”28 According to this argument, the rise of the giant monopolistic (or oligopolistic) corporations had
led to a tendency for the actual and potential investment-seeking surplus in society to rise. The very
conditions of exploitation (or high price markups on unit labor costs) meant both that inequality in
society increased and that more and more surplus capital tended to accumulate actually and
potentially within the giant firms and in the hands of wealthy investors, who were unable to find
profitable investment outlets sufficient to absorb all of the investment-seeking surplus. Hence, the
economy became increasingly dependent on external stimuli such as higher government spending
(particularly on the military), a rising sales effort, and financial expansion to maintain growth.29 Such external
stimuli, as Sweezy was later to explain, were “not part of the internal logic of the economy itself,” falling
“outside the scope of mainstream economics from which historical, political, and sociological
considerations are carefully excluded.”30 All of these external stimuli were self-limiting, and/or
generated further long-run contradictions, leading to the resumption of stagnation tendencies.
Sending capital investment abroad did little to abate the problem since the return flow of profits and
other business returns, under conditions of unequal exchange between global North and South and
U.S. hegemony in general, tended to overwhelm the outward flow. A truly epoch-making innovation, playing the
role of the steam engine, the railroad, or the automobile in the nineteenth and early-to-mid-twentieth centuries, might alter the situation. But
such history-changing
innovations of the kind that would alter the entire geography and scale of
accumulation were not to be counted on and were probably less likely under mature monopoly-capitalist conditions. The result was that the economy, despite its ordinary ups and downs, tended to
sink into a normal state of long-run slow growth, rather than the robust growth assumed by orthodox
economics. In essence an economy in which decisions on savings and investment are made privately
tends to fall into a stagnation trap: existing demand is insufficient to absorb all of the actual and
potential savings (or surplus) available, output falls, and there is no automatic mechanism that generates
full recovery.31 Stagnation theory, in this sense, did not mean that strong economic growth for a time was impossible in mature capitalist
economies—simply that stagnation was the normal case and that robust growth had to be explained as the
result of special historical factors. This reversed the logic characteristic of neoclassical economics, which assumed
that rapid growth was natural under capitalism, except when outside forces, such as the state or trade unions, interfered
with the smooth operation of the market. Stagnation also did not necessarily mean deep downturns with negative growth, but rather a slowing
down of the trend-rate of growth due to overaccumulation. Net
investment (i.e., investment beyond that covered by depreciation funds)
atrophied, since with rising productivity what little investment was called for could be met through
depreciation funds alone. Stagnation thus assumed steady technological progress and rising
productivity as its basis. It was not that the economy was not productive enough; rather it was too
productive to absorb the entire investment-seeking surplus generated within production.
Vote negative to affirm classist politics.
Increasing contradictions of capital necessitate a new approach. Focus on material
production is key to social praxis.
Tumino ’12 Stephen Tumino, more marxist than Marx himself, “Is Occupy Wall Street Communist,”
Red Critique 14, Winter/Spring 2012,
http://www.redcritique.org/WinterSpring2012/isoccupywallstreetcommunist.htm
Leaving aside that the purpose of Wolff's speech was to popularize a messianic vision of a more just society based on workplace democracy, he
is right about one thing: Marx's
original contribution to the idea of communism is that it is an historical and
material movement produced by the failure of capitalism, not a moral crusade to reform it. Today we are
confronted with the fact that capitalism has failed in exactly the way that Marx explained was inevitable.[4] It
has "simplified the class antagonism" (The Communist Manifesto); by concentrating wealth and centralizing
power in the hands of a few it has succeeded in dispossessing the masses of people of everything
except their labor power. As a result it has revealed that the ruling class "is unfit to rule," as The Communist
Manifesto concludes, "because it is incompetent to assure an existence to its slave within his slavery,
because it cannot help letting him sink into such a state, that it has to feed him, instead of being fed
by him." And the slaves are thus compelled to fight back. Capitalism makes communism necessary
because it has brought into being an international working class whose common conditions of life give
them not only the need but also the economic power to establish a society in which the rule is "from
each according to their ability, to each according to their need" (Marx, Critique of the Gotha Programme). Until
and unless we confront the fact that capitalism has once again brought the world to the point of
taking sides for or against the system as a whole, communism will continue to be just a bogey-man or a
nursery-tale to frighten and soothe the conscience of the owners rather than what it is—the materialist theory that is an
absolute requirement for our emancipation from exploitation and a new society freed from necessity!
As Lenin said, "Without
revolutionary theory there can be no revolutionary movement" (What Is To Be Done?).
We are confronted with an historic crisis of global proportions that demands of us that we take
Marxism seriously as something that needs to be studied to find solutions to the problems of today.
Perhaps then we can even begin to understand communism in the way that The Communist Manifesto presents it as
"the self-conscious, independent movement of the immense majority, in the interest of the immense
majority" to end inequality forever.
Totalizing analysis is key to environmentalism—only critique offers hope of escape
from ecological devastation
Magdoff ’12 Fred Magdoff, Professor emeritus of plant and soil science at the University of Vermont,
“Harmony and Ecological Civilization,” Monthly Review, June 2012, Vol. 64, Issue 2, p. 1-9
Nevertheless, for many the role that capitalism plays in ecological destruction is invisible. Thus the
ecological and social antagonisms and contradictions of capitalism are frequently misdiagnosed. Some
observers suggest that many of these problems are caused by the rise of industrial society. Here, so the thinking goes,
any society based on or using industrial production will necessarily have the same resource and environmental problems. Others blame
the thoughtless exploitation of natural resources and the great damage done to the environment on the existence of too many
people. The large population, exceeding the carrying capacity of the planet, they maintain, is the culprit and the solution is therefore to
reduce the population of the earth as quickly as possible. (Not easy to do of course by humane means.) Some ahistorical
commentators say the problem is endemic to humans because we are inherently greedy and
acquisitive. With a few important exceptions, non-Marxist discussions of the problems neglect to even look at
the characteristics and workings of capitalism, let alone examine them at any depth. They are so
embedded in the system, that they assume that capitalism, which many mislabel “the market
economy,” will go on and on forever—even, it is illogically assumed, if we destroy the earth itself as a
place of human habitation—while any other type of economic system is absolutely inconceivable.
Economic, societal, and historical contexts are completely ignored. Rational and useful alternative
solutions to any problem depend upon a realistic analysis and diagnosis as to what is causing it to
occur. When such analysis is lacking substance the proposed “solutions” will most likely be useless. For
example, there are people fixated on nonrenewable resource depletion that is caused, in their opinion, by “overpopulation.” Thus, they
propose, as the one and only “solution,” a rapid “degrowth” of the world’s population. Programs that provide contraceptives to women in poor
countries are therefore offered as an important tool to solving the global ecological problem. However, those concerned with there being too
many people generally do not discuss the economic system that is so destructive to the environment and people or the critical moral and
practical issue of the vast inequalities created by capitalism. Even the way that capitalism itself requires population growth as part of its overall
expansion is ignored. Thus, a critical aspect almost always missing from discussions by those concerned with population as it affects resource use and pollution is that the overwhelming majority of the earth’s environmental problems are caused by the wealthy and their lifestyles—and by a system of capital accumulation that predominantly serves their interests. The World Bank staff estimates that the wealthiest 10 percent of humanity are responsible for approximately 60 percent of all resource use and therefore 60 percent of the pollution (most probably an underestimate). Commentators fixated on nonrenewable resources and pollution as the overriding issues cannot see that one of their main “solutions”—promoting birth control in poor countries—gets nowhere near to even
beginning to address the real problem. It should go without saying that poor people should have access to medical services, including those
involving family planning. This should be considered a basic human right. The rights of women in this respect are one of the key indicators of
democratic and human development. But how
can people fixated on the mere population numbers ignore the fact
that it is the world’s affluent classes that account for the great bulk of those problems—whether one
is looking at resource use, consumption, waste, or environmental pollution—that are considered so
important to the survival of society and even humanity? In addition to the vast quantity of resources
used and pollution caused by wealthy individuals, governments are also responsible. The U.S. military
is one of the world’s prime users of resources—from oil to copper, zinc, tin, and rare earths. The
military is also the single largest consumer of energy in the United States.5 While capitalism creates many of
the features and relationships discussed above, we must keep in mind that long before capitalism existed there were
negative societal aspects such as warfare, exploitation of people and resources, and ecological
damage. However, capitalism solidifies and makes these problems systemic while at the same time
creating other negative aspects.
Evaluate the debate as a dialectical materialist—you are a historian inquiring into the
determinant factors behind the 1AC—Marx’s labor theory of value is the best possible
description
Tumino ‘1 Stephen Tumino, professor of English at the University of Pittsburgh, “What is Orthodox
Marxism and Why it Matters Now More Than Ever Before,” Red Critique, Spring 2001,
http://redcritique.org/spring2001/whatisorthodoxmarxism.htm
Any effective political theory will have to do at least two things: it will have to offer an integrated understanding
of social practices and, based on such an interrelated knowledge, offer a guideline for praxis. My main
argument here is that among all contesting social theories now, only Orthodox Marxism has been able to produce an
integrated knowledge of the existing social totality and provide lines of praxis that will lead to
building a society free from necessity. But first I must clarify what I mean by Orthodox Marxism. Like all other modes and forms
of political theory, the very theoretical identity of Orthodox Marxism is itself contested—not just from non- and anti-Marxists who question the
very "real" (by which they mean the "practical" as under free-market criteria) existence of any kind of Marxism now but, perhaps more tellingly,
from within the Marxist tradition itself. I will, therefore, first say what I regard to be the distinguishing marks of Orthodox Marxism and then
outline a short polemical map of contestation over Orthodox Marxism within the Marxist theories now. I will end by arguing for its effectivity in
bringing about a new society based not on human rights but on freedom from necessity. I will argue that to
know contemporary
society—and to be able to act on such knowledge—one has to first of all know what makes the
existing social totality. I will argue that the dominant social totality is based on inequality—not just
inequality of power but inequality of economic access (which then determines access to health care,
education, housing, diet, transportation, . . . ). This systematic inequality cannot be explained by
gender, race, sexuality, disability, ethnicity, or nationality. These are all secondary contradictions and
are all determined by the fundamental contradiction of capitalism which is inscribed in the relation of
capital and labor. All modes of Marxism now explain social inequalities primarily on the basis of these
secondary contradictions and in doing so—and this is my main argument—legitimate capitalism. Why? Because
such arguments authorize capitalism without gender, race, discrimination and thus accept economic
inequality as an integral part of human societies. They accept a sunny capitalism—a capitalism beyond
capitalism. Such a society, based on cultural equality but economic inequality, has always been the
not-so-hidden agenda of the bourgeois left—whether it has been called "new left," "postmarxism," or "radical democracy."
This is, by the way, the main reason for its popularity in the culture industry—from the academy (Jameson, Harvey, Haraway, Butler,. . . ) to
daily politics (Michael Harrington, Ralph Nader, Jesse Jackson,. . . ) to. . . . For
all, capitalism is here to stay and the best that
can be done is to make its cruelties more tolerable, more humane. This humanization (not
eradication) of capitalism is the sole goal of ALL contemporary lefts (marxism, feminism, anti-racism, queeries, . . . ).
Such an understanding of social inequality is based on the fundamental understanding that the source
of wealth is human knowledge and not human labor. That is, wealth is produced by the human mind and
is thus free from the actual objective conditions that shape the historical relations of labor and
capital. Only Orthodox Marxism recognizes the historicity of labor and its primacy as the source of all
human wealth. In this paper I argue that any emancipatory theory has to be founded on recognition of the
priority of Marx's labor theory of value and not repeat the technological determinism of corporate
theory ("knowledge work") that masquerades as social theory. Finally, it is only Orthodox Marxism that
recognizes the inevitability and also the necessity of communism—the necessity, that is, of a society
in which "from each according to their ability to each according to their needs" (Marx) is the rule.
Particular facts are irrelevant without totalizing historical theory—the aff makes
universal what is particular to capitalism—methodological inquiry is prior to action
Lukács ’67 György Lukács, History and Class Consciousness: Studies in Marxist Dialectics, trans. Rodney
Livingstone, MIT Press: Cambridge, 1967, p. 7-10
Thus we perceive that there is something highly problematic in the fact that capitalist society is predisposed to harmonise with scientific
method, to constitute indeed the social premises of its exactness. If the internal structure of the 'facts' or their
the internal structure of the 'facts' of their
interconnections is essentially historical, if, that is to say, they are caught up in a process of continuous
transformation, then we may indeed question when the greater scientific inaccuracy occurs. It is when I
conceive of the 'facts' as existing in a form and as subject to laws concerning which I have a
methodological certainty (or at least probability) that they no longer apply to these facts? Or is it when I
consciously take this situation into account, cast a critical eye at the 'exactitude' attainable by such a
method and concentrate instead on those points where this historical aspect, this decisive fact of
change really manifests itself? The historical character of the 'facts' which science seems to have
grasped with such 'purity' makes itself felt in an even more devastating manner. As the products of
historical evolution they are involved in continuous change. But in addition they are also precisely in
their objective structure the products of a definite historical epoch, namely capitalism. Thus when 'science'
maintains that the manner in which data immediately present themselves is an adequate foundation
of scientific conceptualisation and that the actual form of these data is the appropriate starting point for
the formation of scientific concepts, it thereby takes its stand simply and dogmatically on the basis of
capitalist society. It uncritically accepts the nature of the object as it is given and the laws of that
society as the unalterable foundation of 'science'. In order to progress from these 'facts' to facts in the
true meaning of the word it is necessary to perceive their historical conditioning as such and to
abandon the point of view that would see them as immediately given: they must themselves be subjected
to a historical and dialectical examination. For as Marx says:8 "The finished pattern of economic relations as
seen on the surface in their real existence and consequently in the ideas with which the agents and
bearers of these relations seek to understand them, is very different from, and indeed quite the
reverse of and antagonistic to their inner, essential but concealed core and the concepts
corresponding to it." If the facts are to be understood, this distinction between their real existence
and their inner core must be grasped clearly and precisely. This distinction is the first premise of a truly scientific study
which in Marx's words, "would be superfluous if the outward appearance of things coincided with their essence".10 Thus we must
detach the phenomena from the form in which they are immediately given and discover the
intervening links which connect them to their core, their essence. In so doing, we shall arrive at an understanding of their
apparent form and see it as the form in which the inner core necessarily appears. It is necessary because of the historical character of the facts,
because they have grown in the soil of capitalist society. This twofold character, the simultaneous recognition and transcendence of immediate
appearances is precisely the dialectical nexus. In this respect, superficial readers imprisoned in the modes of thought created by capitalism,
experienced the gravest difficulties in comprehending the structure of thought in Capital. For on the one hand, Marx's account pushes the
capitalist nature of all economic forms to their furthest limits, he creates an intellectual milieu where they can exist in their purest form by
positing a society 'corresponding to the theory', i.e. capitalist through and through, consisting of none but capitalists and proletarians. But
conversely, no sooner does this strategy produce results, no sooner does this world of phenomena seem to be on the point of crystallising out
into theory than it dissolves into a mere illusion, a distorted situation appears as in a distorting mirror which is, however, "only the conscious
expression of an imaginary movement". Only
in this context which sees the isolated facts of social life as aspects
of the historical process and integrates them in a totality, can knowledge of the facts hope to become
knowledge of reality. This knowledge starts from the simple (and to the capitalist world), pure, immediate,
natural determinants described above. It progresses from them to the knowledge of the concrete
totality, i.e. to the conceptual reproduction of reality. This concrete totality is by no means an unmediated datum for thought. "The
concrete is concrete," Marx says,11 "because it is a synthesis of many particular determinants, i.e. a unity of
diverse elements." Idealism succumbs here to the delusion of confusing the intellectual reproduction of
reality with the actual structure of reality itself. For "in thought, reality appears as the process of
synthesis, not as starting-point, but as outcome, although it is the real starting-point and hence the starting-point for
perception and ideas." Conversely, the vulgar materialists, even in the modem guise donned by Bernstein and others, do not go beyond
the reproduction of the immediate, simple determinants of social life. They imagine that they are being quite extraordinarily 'exact' when they
simply take over these determinants without either analysing them further or welding them into a concrete totality. They take
the facts
in abstract isolation, explaining them only in terms of abstract laws unrelated to the concrete totality.
As Marx observes: "Crudeness and conceptual nullity consist in the tendency to forge arbitrary unmediated connections between things that
belong together in an organic union." 12 The crudeness and conceptual nullity of such thought lies primarily in the fact that it
obscures
the historical, transitory nature of capitalist society. Its determinants take on the appearance of
timeless, eternal categories valid for all social formations. This could be seen at its crassest in the vulgar bourgeois
economists, but the vulgar Marxists soon followed in their footsteps. The dialectical method was overthrown and with it the methodological
supremacy of the totality over the individual aspects; the parts were prevented from finding their definition within the whole and, instead, the
whole was dismissed as unscientific or else it degenerated into the mere 'idea' or 'sum' of the parts. With the totality
out of the
way, the fetishistic relations of the isolated parts appeared as a timeless law valid for every human
society. Marx's dictum: "The relations of production of every society form a whole" 13 is the
methodological point of departure and the key to the historical understanding of social relations. All the
isolated partial categories can be thought of and treated—in isolation—as something that is always present in every society. (If it cannot be found
in a given society this is put down to 'chance' as the exception that proves the rule.) But the
changes to which these individual
aspects are subject give no clear and unambiguous picture of the real differences in the various stages
of the evolution of society. These can really only be discerned in the context of the total historical
process of their relation to society as a whole.
Natives adv
Squo solves--Nuclear is declining—their ev is just industry hype that’s always proven
wrong
Maize 8-6 Kennedy Maize, “A Bumpy Road for Nukes,” POWER Magazine, 8/6/2012,
http://www.powermag.com/nuclear/4859.html
Washington, D.C., 6 August 2012 — It’s been a rough road for nuclear advocates in the U.S. of late, although nothing seems
to dent the Pollyanna armor of the nuclear crowd, always appearing to believe a revival is just over
the horizon and headed into view. Here are a few fraught developments for the nuclear business that suggest the
positive vision just might be a mirage. * GE CEO Jeff Immelt in a recent interview with the Financial Times revealed a
surprising and somewhat uncharacteristic realism with regard to the company’s nuclear future and that of its partner in radioactivity, Hitachi. In London
for the Summer Olympics, Immelt told a reporter for the FT, “It’s really a gas and wind world today. When I talk to the guys who run the oil
companies, they say look, they’re finding more gas all the time. It’s just hard to justify nuclear, really hard. Gas is
so cheap, and at some point, really, economics rule.” For the nuclear industry, economics has always been the
fundamental enemy – not the green-tinged, hairy anti-nuke activists, but the folks with the green eye shades, sharp pencils and, today, even sharper
spreadsheets. The nuclear execs long have pursued governments as their bulwark against markets, and that has often worked. Today, as Immelt notes, gas has
made the market forces so overwhelming, at least in those places such as the U.S. where gas is astonishingly abundant, that even
government likely can’t come to the rescue of nuclear power. Could that have something to do with the
abject failure of the 2005 Energy Policy Act’s loan guarantee provisions, which have not worked for renewables any
better than they have worked for nukes? Indeed, the threat of gas is at least as potentially toxic for many wind and solar projects as it is for
nuclear and coal new build. * In Georgia, the Southern Company is facing what looks like growing problems with its
Vogtle project, which aims for two new nuclear units using the unproven but promising Westinghouse AP1000 reactor design. With its federal
loan in jeopardy (Southern says it can go ahead without taxpayer funds) and the project running behind schedule and over
budget, the Atlanta-based utility now faces lawsuits brought by the reactor vendor and the construction contractor Shaw Group. The amount in
dispute, some $29 million, is tiny compared to the multi-billion-dollar price tag for the project. But it may be revealing of ruptures in the deal.
Robert Marritz, an energy lawyer and veteran industry observer, publisher of ElectricityPolicy.com, commented that “the very filing of a lawsuit at
this stage of the first nuclear plant construction in decades is stunning, reflecting stresses in a relationship that should, one would
think, be contained and resolved rather than boiling over into public view.” Indeed, the parties are also engaged in a
larger, perhaps nastier, dispute involving $800 million that has not gotten much public exposure. And that’s real money. * Moving to California, the
long-running saga of Edison International’s San Onofre Nuclear Generating Station (SONGS, how’s that for an inept
acronym?) continues, with little clarity in sight. The plant has been out of service since January as a result
of unexpected and still unexplained tube wear in the plant’s steam generators. According to Bloomberg New Energy
Finance, the outage is costing the utility about $1.5 million a day just in lost revenue. The cost to the state in jeopardized reliability hasn’t
been calculated, although Edison has started up mothballed gas capacity to fill the supply gap. There is no firm date for restart at the
nuclear plant. In the meantime, the California Public Utilities Commission is planning a formal investigation of the outage and Edison’s response, but
recently decided to delay that until the utility files a legally-required report with the CPUC November 1. CPUC President Mike Peevey is a former executive with the
Los Angeles-based utility.
Native American exploitation is structured through capitalism’s imperative to destroy
competition
Hedges ’12 Chris Hedges, “Welcome to the Asylum,” Truthdig, 4/30/2012,
http://www.truthdig.com/report/item/welcome_to_the_asylum_20120430/
When the most basic elements that sustain life are reduced to a cash product, life has no intrinsic
value. The extinguishing of “primitive” societies, those that were defined by animism and mysticism, those that celebrated
ambiguity and mystery, those that respected the centrality of the human imagination, removed the only ideological
counterweight to a self-devouring capitalist ideology. Those who held on to pre-modern beliefs, such as Native
Americans, who structured themselves around a communal life and self-sacrifice rather than hoarding
and wage exploitation, could not be accommodated within the ethic of capitalist exploitation, the cult
of the self and the lust for imperial expansion. The prosaic was pitted against the allegorical. And as we
race toward the collapse of the planet’s ecosystem we must restore this older vision of life if we are to survive. The war on the Native
Americans, like the wars waged by colonialists around the globe, was waged to eradicate not only a
people but a competing ethic. The older form of human community was antithetical and hostile to
capitalism, the primacy of the technological state and the demands of empire. This struggle between belief
systems was not lost on Marx. “The Ethnological Notebooks of Karl Marx” is a series of observations derived from Marx’s reading of works by
historians and anthropologists. He took notes about the traditions, practices, social structure, economic systems and beliefs of numerous
indigenous cultures targeted for destruction. Marx noted arcane details about the formation of Native American society, but also that “lands
were owned by the tribes in common, while tenement-houses were owned jointly by their occupants.” He wrote of the Aztecs, “Commune
tenure of lands; Life in large households composed of a number of related families.” He went on, “… reasons for believing they practiced
communism in living in the household.” Native Americans, especially the Iroquois, provided the governing model for the union of the American
colonies, and also proved vital to Marx and Engels’s vision of communism.
No spillover and doesn’t solve—public opposition to reprocessing
Acton 2009 -- James M. Acton [Ph.D. in theoretical physics at Cambridge University, senior associate in
the Nuclear Policy Program at the Carnegie Endowment] “The Myth of Proliferation-Resistant
Technology,” BULLETIN OF THE ATOMIC SCIENTISTS, VOL. 65, NO. 6, NOVEMBER/DECEMBER 2009 EBSCO
Reprocessing and take back. Another advertised benefit of reprocessing, particularly advanced methods such as UREX+, is
its potential to contribute to spent fuel management by reducing the volume, radiotoxicity, and heat
of high-level waste. 14 Advocates of reprocessing, particularly in the Energy Department, argue that by simplifying waste
management, reprocessing could facilitate the nonproliferation holy grail of spent fuel “take back” or “take away”—that is, the removal of
spent fuel by the state that supplied it or by a third party. 15 Take-back provisions would undoubtedly benefit nonproliferation. They would
reduce the domestic pressures on states to develop reprocessing as a waste management strategy, thereby avoiding “plutonium rivers” as
well as preventing the buildup of large volumes of spent fuel that could become “plutonium mines.” The
problem is that
reprocessing is unlikely to decrease public opposition to importing spent fuel and may actually
increase it. At issue, once again, is whether a technical solution can solve what is essentially a political
problem. 16 Public opposition to importing spent fuel operates at various levels. 17 Many people have
a visceral objection to turning their state into a nuclear “dump.” Some also believe that countries should deal with
their own waste—a form of the “polluter pays” principle. These concerns are simultaneously exacerbated by a lack of
trust in nuclear regulators. Advanced reprocessing technologies solve none of these objections.
Reprocessing is additionally a controversial technology in itself. The planning process for any kind of reprocessing
facility in the United States (and many other countries) would unquestionably be met with intense opposition on environmental grounds and
probably with numerous legal challenges. This
could slow the development of a credible waste-management
strategy, making take back even less likely.
Stewardship adv
Non-uq--no widespread proliferation
Hymans ’12 Jacques Hymans, Associate Professor of International Relations at USC, “North Korea's
Lessons for (Not) Building an Atomic Bomb,” Foreign Affairs, 4/16/2012,
http://www.foreignaffairs.com/articles/137408/jacques-e-c-hymans/north-koreas-lessons-for-notbuilding-an-atomic-bomb
Washington's miscalculation is not just a product of the difficulties of seeing inside the Hermit Kingdom. It is also a result of the broader
tendency to overestimate the pace of global proliferation. For decades, Very Serious People have
predicted that strategic weapons are about to spread to every corner of the earth. Such warnings
have routinely proved wrong - for instance, the intelligence assessments that led to the 2003 invasion of Iraq - but they continue to be issued. In reality, despite the diffusion of the relevant technology and the
knowledge for building nuclear weapons, the world has been experiencing a great proliferation
slowdown. Nuclear weapons programs around the world are taking much longer to get off the ground - and their failure rate is much higher - than they did during the first 25 years of the nuclear age. As I explain
in my article "Botching the Bomb" in the upcoming issue of Foreign Affairs, the key reason for the great proliferation
slowdown is the absence of strong cultures of scientific professionalism in most of the recent crop of would-be
nuclear states, which in turn is a consequence of their poorly built political institutions. In such dysfunctional
states, the quality of technical workmanship is low, there is little coordination across different technical
teams, and technical mistakes lead not to productive learning but instead to finger-pointing and recrimination.
These problems are debilitating, and they cannot be fixed simply by bringing in more imported parts
through illicit supply networks. In short, as a struggling proliferator, North Korea has a lot of company.
Thorium reprocessing functions uniquely enable prolif
Makhijani ’12 Arjun Makhijani, electrical and nuclear engineer, President of the Institute for Energy
and Environmental Research, has served as an expert witness in Nuclear Regulatory Commission
Proceedings, “Is Thorium A Magic Bullet For Our Energy Problems?” interviewed by Ira Flatow, host of
Science Friday, NPR, 5/4/2012, http://www.npr.org/2012/05/04/152026805/is-thorium-a-magic-bullet-for-our-energy-problems
So what are the problems? The problem is that with this particular reactor, most people will want reprocessing, that is,
separating the fissile material on-site. So you have a continuous flow of molten salt out of the reactor.
You take out the protactinium-233, which is a precursor of uranium, and then you put the uranium
back in the reactor, and then you keep it going. But if you look at the Princeton University paper on
thorium reactors from a few years ago, you'll see that this onsite reprocessing allows you to separate protactinium
altogether. Now, the U.S. wouldn't do it, but if you were a country without nuclear materials and had a
reprocessing plant right there, you'd separate the protactinium-233, you'd get pure uranium-233,
which is easier to make bombs with than plutonium.
Fuel bank doesn’t solve
Hibbs ‘12 Mark Hibbs, senior associate in the Nuclear Policy Program at the Carnegie Endowment,
“Negotiating Nuclear Cooperation Agreements,” Carnegie Endowment for International Peace,”
8/7/2012, http://carnegieendowment.org/2012/08/07/negotiating-nuclear-cooperation-agreements/d98z
U.S. resolve to include a no-ENR pledge in the body of new bilateral agreements will be seen by some countries as arrogant and unacceptable.
Incorporating ENR terms into side-letters or preambles may be less offensive. That approach would also more easily facilitate including
reciprocal commitments by the United States into its 123 bargains with foreign countries. These might include guaranteeing nuclear fuel supply
through participation in the U.S. fuel bank, facilitating the country’s access to other back-up sources of nuclear fuel, and, in the future, perhaps
even taking back U.S.-origin spent fuel. The outcome of any negotiation for a bilateral nuclear cooperation agreement will depend on the
leverage both sides bring to the table. When
the United States negotiated most of the 22 such agreements in force today,
it was the world’s leading provider of nuclear technology, equipment, and fuel. As the examples of Jordan and Vietnam show, unlike half a
century ago, nuclear newcomers today don’t need to buy American. The vendor field is populated by
firms in Argentina, Australia, Canada, the European Union, Japan, Kazakhstan, Namibia, Niger, Russia, and South Korea, and in the future
they will be joined by others in China and India. Governments in these countries do not seek to establish a no-ENR
requirement as a condition for foreign nuclear cooperation. Some of them, Australia and Canada for example, have
strong nonproliferation track records. Countries now seeking to form foreign industrial partnerships to set up nuclear
power programs have numerous options and they will favor arrangements that provide them the
most freedom and flexibility.
The aff is anti-nuclear nuclearism—rhetoric of disarm only fuels imperial domination
Darwin Bondgraham, writer, historian, and ethnographer, and Will Parrish, edited by Mariam
Pemberton, "Anti-nuclear Nuclearism," Foreign Policy In Focus, 12 January 2009, accessed 7/2/10
http://www.fpif.org/articles/anti-nuclear_nuclearism
The Obama administration is likely to continue a policy that we call “anti-nuclear nuclearism.” Anti-nuclear
nuclearism is a foreign
policy that relies upon overwhelming U.S. power, including the nuclear arsenal, but makes
rhetorical and even some substantive commitments to disarmament, however vaguely defined. Anti-nuclear
nuclearism thrives as a school of thought in several think tanks that have long influenced foreign policy choices related to global nuclear and military forces.
Even the national nuclear weapons development labs in New Mexico and California have been avid supporters and crafters of it. As a policy,
anti-nuclear nuclearism is designed to ensure U.S. nuclear and military dominance by rhetorically
calling for what has long been derided as a naïve ideal: global nuclear disarmament. Unlike past forms of
nuclearism, it de-emphasizes the offensive nature of the U.S. arsenal. Instead of promoting the U.S. stockpile as a
strategic deterrent or umbrella for U.S. and allied forces, it prioritizes an aggressive diplomatic and military campaign of nonproliferation.
Nonproliferation efforts are aimed entirely at other states, especially non-nuclear nations with
suspected weapons programs, or states that can be coerced and attacked under the pretense that
they possess nuclear weapons or a development program (e.g. Iraq in 2003). Effectively pursuing this kind of
belligerent nonproliferation regime requires half-steps toward cutting the U.S. arsenal further, and at
least rhetorically recommitting the United States to international treaties such as the Nuclear Non-Proliferation Treaty (NPT). It requires a fig
leaf that the United States isn’t developing new nuclear weapons, and that it is slowly disarming and de-emphasizing its nuclear arsenal. By
these means the United States has tried to avoid the charge of hypocrisy, even though it has designed and built newly modified weapons with
qualitatively new capacities over the last decade and a half. Meanwhile, U.S. leaders have allowed for and even promoted a mass proliferation
of nuclear energy and material, albeit under the firm control of the nuclear weapons states, with the United States at the top of this pile. Many
disarmament proponents were elated last year when four extremely prominent cold warriors — George P. Shultz, William Perry, Henry
Kissinger, and Sam Nunn — announced in a series of op-eds their commitment to "a world free of nuclear weapons." Strange bedfellows indeed
for the cause. Yet the fine print of their plan, published by the Hoover Institute and others since then, represents the anti-nuclear nuclearist
platform to a tee. It’s a conspicuous yet merely rhetorical commitment to a world without nuclear weapons. These four elder statesmen have
said what many U.S. elites have rarely uttered: that abolition is both possible and desirable. However, the anti-nuclear
posture in their
policy proposal comes to bear only on preventing non-nuclear states from going nuclear, or else preventing
international criminal conspiracies from proliferating weapons technologies and nuclear materials for use as instruments of non-state terror. In
other words, it’s
about other people's nuclear weapons, not the 99% of materials and arms possessed by
the United States and other established nuclear powers. This position emphasizes an anti-nuclear politics entirely for
what it means for the rest of the world — securing nuclear materials and preventing other states from going nuclear or further developing their
existing arsenals. U.S. responsibility to disarm remains in the distant future, unaddressed as a present imperative.
Concerns about the nuclear programs of other states — mostly Islamic, East and South Asian nations (i.e., Iran,
North Korea, etc.) — conveniently work to reinforce existing power relations embodied in U.S. military
supremacy and neocolonial relationships of technological inequality and dependence. By invoking
their commitment to a "world free of nuclear weapons," the ideologues behind the anti-nuclear
nuclearist platform justify invasions, military strikes, economic sanctions, and perhaps even the use of
nuclear weapons themselves against the "rogue states" and "terrorists" whose possession of weapons technologies vastly less
advanced than those perpetually stockpiled by the United States is deemed by the anti-nuclear nuclearists the first and foremost problem of
the nuclear age.
Solvency
Necessary tech advances are difficult and costly
Tickell 10-31 Oliver Tickell, author, journalist, and campaigner specializing in environment, energy,
and health issues, “The Promise and Peril of Thorium,” James Martin Center for Nonproliferation
Studies, 10/31/2012, http://wmdjunction.com/121031_thorium_reactors.htm
However, four major challenges complicate efforts to get LFTRs operational on a production scale: Producing a
reactor containment material that can withstand both intense radiation and high-temperature
chemical assault from a wide spectrum of fission products and their daughters over a fifty year time
scale; Developing the continuous on-site reprocessing technology, using currently experimental
pyroprocessing/electrorefining processes, so that it can operate reliably without accident or releases,
also over a fifty year time scale; Scaling up the experimental MSRE design used at Oak Ridge, with its 7
megawatt (MW) thermal power rating, into a production LFTR with a thermal output of 1,000 MW or more,
capable of reliable continuous operation; and Achieving all the above at modest cost, so that the
resulting electricity can be sold in competitive energy markets.
Thorium worsens the waste issue—produces hazardous U-233 and other radioactive
materials that can’t be reprocessed
Rees ’11 Eifion Rees, “Don't believe the spin on thorium being a ‘greener’ nuclear option,” The
Ecologist, 6/23/2011,
http://www.theecologist.org/News/news_analysis/952238/dont_believe_the_spin_on_thorium_being_
a_greener_nuclear_option.html
All other issues aside, thorium is still nuclear energy, say environmentalists, its reactors disgorging the same toxic byproducts and fissile waste
with the same millennial half-lives. Oliver Tickell, author of Kyoto2, says the
fission materials produced from thorium are of
a different spectrum to those from uranium-235, but ‘include many dangerous-to-health alpha and beta
emitters’. Tickell says thorium reactors would not reduce the volume of waste from uranium reactors. ‘It
will create a whole new volume of radioactive waste, on top of the waste from uranium reactors.
Looked at in these terms, it’s a way of multiplying the volume of radioactive waste humanity can
create several times over.’ Putative waste benefits – such as the impressive claims made by former Nasa scientist Kirk Sorensen, one
of thorium’s staunchest advocates – have the potential to be outweighed by a proliferating number of MSRs. There are already 442 traditional
reactors in operation globally, according to the International Atomic Energy Agency. The
by-products of thousands of
smaller, ostensibly less wasteful reactors would soon add up. Anti-nuclear campaigner Peter Karamoskos goes further,
dismissing a ‘dishonest fantasy’ perpetuated by the pro-nuclear lobby. Thorium cannot in itself power a reactor; unlike
natural uranium, it does not contain enough fissile material to initiate a nuclear chain reaction. As a
result it must first be bombarded with neutrons to produce the highly radioactive isotope uranium-233 – ‘so these are
really U-233 reactors,’ says Karamoskos. This isotope is more hazardous than the U-235 used in conventional
reactors, he adds, because it produces U-232 as a side effect (half life: 160,000 years), on top of familiar
fission by-products such as technetium-99 (half life: up to 300,000 years) and iodine-129 (half life: 15.7
million years).
Add in actinides such as protactinium-231 (half life: 33,000 years) and it soon
becomes apparent that thorium’s superficial cleanliness will still depend on digging some pretty deep
holes to bury the highly radioactive waste.
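To make the half-life arithmetic in the card above concrete, here is a minimal sketch using the standard decay law N(t) = N0 · 2^(-t/T). The half-lives come from the card; the 1,000- and 1,000,000-year horizons are assumptions chosen purely for illustration.

```python
# Minimal sketch: exponential decay, N(t) = N0 * 2**(-t / half_life).
# Half-lives are taken from the card above; the two time horizons are
# illustrative assumptions, not figures from the card.

HALF_LIVES_YEARS = {
    "protactinium-231": 33_000,
    "uranium-232": 160_000,
    "technetium-99": 300_000,     # upper figure cited in the card
    "iodine-129": 15_700_000,
}

def fraction_remaining(half_life_years: float, t_years: float) -> float:
    """Fraction of the original inventory still present after t_years."""
    return 2 ** (-t_years / half_life_years)

for isotope, half_life in HALF_LIVES_YEARS.items():
    for horizon in (1_000, 1_000_000):
        share = fraction_remaining(half_life, horizon)
        print(f"{isotope}: {share:.1%} remains after {horizon:,} years")
```

Run as-is, it shows, for example, that roughly 96 percent of the iodine-129 would still remain after a full million years — the point behind the card's "pretty deep holes" conclusion.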
Fuel cycle fabrication is massively expensive
Katusa ’12 Marin Katusa, “The Thing About Thorium: Why The Better Nuclear Fuel May Not Get A
Chance,” Forbes, 2/16/2012, http://www.forbes.com/sites/energysource/2012/02/16/the-thing-about-thorium-why-the-better-nuclear-fuel-may-not-get-a-chance/2/
Well, maybe quite a bit of support. One
of the biggest challenges in developing a thorium reactor is finding a
way to fabricate the fuel economically. Making thorium dioxide is expensive, in part because its
melting point is the highest of all oxides, at 3,300° C. The options for generating the barrage of
neutrons needed to kick-start the reaction regularly come down to uranium or plutonium, bringing at
least part of the problem full circle.
Takes a uselessly long time
Tickell ’12 Oliver Tickell, “Thorium: Not ‘green’, not ‘viable’, and not likely,” Nuclear Pledge, June 2012,
http://www.nuclearpledge.com/reports/thorium_briefing_2012.pdf
3.8 Timescale Claim: Thorium and the LFTR offer a solution to current and medium-term energy supply deficits. Response: The
thorium
fuel cycle is immature. Estimates from the UK’s National Nuclear Laboratory and the Chinese Academy
of Sciences (see 4.2 below) suggest that 10-15 years of research will be needed before thorium fuels are
ready to be deployed in existing reactor designs. Production LFTRs will not be deployable on any
significant scale for 40-70 years.
No investment
Knowledge@Wharton 11 (Knowledge@Wharton, the online research and analysis journal of the
Wharton School of the University of Pennsylvania, Mar 30,
[knowledge.wharton.upenn.edu/article.cfm?articleid=2743], jam)
A Decline in Public Support Yet Fukushima has no doubt had an impact. Italy put a one-year moratorium on its plans to re-establish a
nuclear energy program. Germany idled seven of its 17 nuclear reactors for safety checks as protesters clamor for an end to nuclear power.
China, which had planned to quadruple its nuclear energy capacity by 2020, temporarily suspended all project approvals. For
projects in
the United States, an uphill climb has become even steeper. According to a poll released March 21 from Pew
Research Center, public support for the increased use of nuclear power has declined since the earthquake in Japan. More
than half (52%) now oppose the increased use of nuclear power, while 39% favor it. That compares to 47% in favor and 47%
opposed in October. "As for the long-term prospects for the industry, I think the implications of Japan will be long-lasting,"
says Chris Lafakis, an energy economist at Moody's Analytics. It will be more difficult to get approval for a plant and
more difficult to obtain financing. Although the federal government is pushing for loan guarantees, projects would still
need support from a financial institution to get financed, he points out. "And the large banks are in no hurry
to extend credit for a plant knowing the regulatory environment" and current public sentiment, he says. There may not be a
"formal moratorium" against new nuclear power plants, "but I think there's an effective one." Even before the Japanese
earthquake, the nuclear industry was struggling to expand in the U.S. because of a sluggish economy and a sudden abundance of cheap natural
gas. Based on recent shale discoveries, the U.S. Energy Information Administration estimates the country's recoverable shale gas resources are
more than double the volume it assumed just one year ago. "Cheap natural gas makes it difficult to pull the trigger on nuclear investment,"
Chris Hansen, director of strategy and initiatives at IHS Cambridge Energy Research Associates, noted during a panel discussion at the Wharton
Energy Conference in October 2010. "The outlook is that the 'Shale Gale' will really increase the chance for natural gas to grow its market
share." Japan's nuclear crisis will not create that much more additional delay, Hansen told Knowledge@Wharton in a follow-up interview. Low
gas prices had already slowed down proposed projects because investors were hesitant to commit billions of dollars that might not pay off.
Safety reviews will simply be added to an already delayed process. "I see market share in the U.S. probably eroding for the next 10 years,"
Hansen says. "All of the new build will be gas, so nuclear will slip." Economics prevented the much-touted "nuclear renaissance" from ever
taking hold in the United States, adds Debra K. Decker, a research associate with the Belfer Center for Science and International Affairs at the
John F. Kennedy School of Government at Harvard. "Like all business, the nuclear business is about risks and returns," says Decker, who studies
nuclear proliferation and proposals for reform. "The
risk of getting approvals has been high enough in the United
States, and the electricity business has been deregulated enough, that the risk-return ratio has not really
supported new builds. When you factor in the high upfront capital costs of nuclear and the still relatively
inexpensive gas and coal options, the economics are not there. Nuclear does not come out as an
attractive option without supports."
Can’t build new reactors
Mez September ‘12—Lutz Mez [Department of Political and Social Sciences, Freie Universitat Berlin]
“Nuclear energy–Any solution for sustainability and climate protection” Energy Policy 48 (2012) 56–63
The nuclear industry has been battling a host of problems for three decades. A global construction
boom can be ruled out at present if only due to the lack of production capacities and shortages of
technicians; nor will this situation change much over the short and medium term. Only one single company in the world,
Japan Steel Works Ltd., is able to forge the large pressure vessels in reactors the size of EPR. Not only the
pressure vessel, but also the steam generators in the new Finnish plant come from Japan. In the USA, on the other hand, there is not a
single manufacturing plant capable of producing such large components. The sole facility in Europe, the
AREVA forge in the French city of Le Creusot, is only able to produce components of a limited size and in limited numbers. Beyond this, the
nuclear industry is busy with retrofitting projects, such as the replacement of steam generators for power
plants whose operating lifetimes are to be extended. Because such large production plants cannot be
built overnight, this situation will not improve quickly. New nuclear power plants moreover have to be
operated by new personnel—but the nuclear industry and operators are scarcely even able to replace
staff who retire. An entire generation of engineers, nuclear physicists and experts on protection
against radiation is missing as the industry is challenged twofold: at the same time as new plants are being constructed, plants
which have been closed must be torn down and solutions finally found for nuclear waste.
2NC
Capitalism
Saying it’s a good idea not to question is ludicrous. Your argument is question-begging—if we win that your ideology is problematic, then the K already implicates your framework.
Meszaros ’89 Istvan Meszaros, Chair of philosophy @ U. of Sussex, The Power of Ideology, 1989 p.
232-234
Nowhere is the myth of ideological neutrality – the self-proclaimed Wertfreiheit or value neutrality of so-called ‘rigorous social science’ –
stronger than in the field of methodology. Indeed, we
are often presented with the claim that the adoption of the
advocated methodological framework would automatically exempt one from all controversy about values,
since they are systematically excluded (or suitably ‘bracketed out’) by the scientifically adequate method itself,
thereby saving one from unnecessary complication and securing the desired objectivity and uncontestable outcome. Claims and procedures of
this kind are, of course, extremely problematical. For they circularly assume that their enthusiasm for the virtues of ‘methodological neutrality’
is bound to yield ‘value neutral’ solutions with regard to highly contested issues, without first examining the all-important question as to the
conditions of possibility – or otherwise – of the postulated systematic neutrality at the plane of methodology itself. The unchallengeable validity
of the recommended procedure is supposed to be self-evident on account of its purely methodological character. In reality, of course, this
approach to methodology is heavily loaded with a conservative ideological substance. Since, however, the plane of
methodology (and ‘meta-theory’) is said to be in principle separated from that of the substantive issues, the methodological circle can be
conveniently closed. Whereupon the mere insistence on the purely methodological character of the criteria laid down is supposed to establish
the claim according to which the approach in question is neutral because everybody can adopt it as the common frame of reference of ‘rational
discourse’. Yet, curiously enough, the
proposed methodological tenets are so defined that vast areas of vital social
concern are a priori excluded from this rational discourse as ‘metaphysical’, ‘ideological’, etc. The effect of
circumscribing in this way the scope of the one and only admissible approach is that it automatically disqualifies, in the name of
methodology itself, all those who do not fit into the stipulated framework of discourse. As a result, the propounders
of the ‘right method’ are spared the difficulties that go with acknowledging the real divisions and incompatibilities as they necessarily arise
from the contending social interests at the roots of alternative approaches and the rival sets of values associated with them. This
is where
we can see more clearly the social orientation implicit in the whole procedure. For – far from offering an adequate
scope for critical enquiry – the advocated general adoption of the allegedly neutral methodological framework is
equivalent, in fact, to consenting not even to raise the issues that really matter. Instead, the stipulated ‘common’
methodological procedure succeeds in transforming the enterprise of ‘rational discourse’ into the dubious practice of producing methodology
for the sake of methodology: a tendency more pronounced in the twentieth century than ever before. This practice consists in sharpening the
recommended methodological knife until nothing but the bare handle is left, at which point a new knife is adopted for the same purpose. For
the ideal methodological knife is not meant for cutting, only for sharpening, thereby interposing itself between the critical intent and the real
objects of criticism which it can obliterate for as long as the pseudo-critical activity of knife-sharpening for its own sake continues to be
pursued. And that happens to be precisely its inherent ideological purpose. 6.1.2 Naturally, to speak of a ‘common’ methodological framework
in which one can resolve the problems of a society torn by irreconcilable social interest and ensuing antagonistic confrontations is delusory, at
best, notwithstanding all talk about ‘ideal communication communities’. But to define the methodological tenets of all rational discourse by
way of transubstantiating into ‘ideal types’ (or by putting
into methodological ‘brackets’) the discussion of
contending social values reveals the ideological colour as well as the extreme fallaciousness of the
claimed rationality. For such treatment of the major areas of conflict, under a great variety of forms – from the Viennese version of
‘logical positivism’ to Wittgenstein’s famous ladder that must be ‘thrown away’ at the point of confronting the question of values, and from the
advocacy of the Popperian principle of ‘little by little’ to the ‘emotivist’ theory of value – inevitably
always favours the
established order. And it does so by declaring the fundamental structural parameters of the given society
‘out of bounds’ to the potential contestants, on the authority of the ideally ‘common’ methodology. However, even on a
cursory inspection of the issues at stake it ought to be fairly obvious that to consent not to question the fundamental
structural framework of the established order is radically different according to whether one does so
as the beneficiary of that order or from the standpoint of those who find themselves at the receiving
end, exploited and oppressed by the overall determinations (and not just by some limited and more or less easily
corrigible detail) of that order. Consequently, to establish the ‘common’ identity of the two, opposed sides of a structurally safeguarded
hierarchical order – by means of the reduction of the people who belong to the contending social forces into fictitious ‘rational interlocutors’,
extracted from their divided real world and transplanted into a beneficially shared universe of ideal discourse – would be nothing short of a
methodological miracle. Contrary to the wishful thinking hypostatized as a timeless and socially unspecified rational communality, the
elementary condition of a truly rational discourse would be to acknowledge the legitimacy of
contesting the given order of society in substantive terms. This would imply the articulation of the
relevant problems not on the plan of self-referential theory and methodology, but as inherently practical issues whose
conditions of solution point towards the necessity of radical structural changes. In other words, it would
require the explicit rejection of all fiction of methodological and meta-theoretical neutrality. But, of course, this would be far too much to
expect precisely because the society in which we live is a deeply divided society. This is why through the dichotomies of ‘fact and value’, ‘theory
and practice’, ‘formal and substantive rationality’, etc., the conflict-transcending methodological miracle is constantly stipulated as the
necessary regulative framework of ‘rational discourse’ in the humanities and social sciences, in the interest of the ruling ideology. What makes
this approach particularly difficult to challenge is that its value-commitments
are mediated by methodological
precepts to such a degree that it is virtually impossible to bring them into the focus of the discussion
without openly contesting the framework as a whole. For the conservative sets of values at the roots of such orientation
remain several steps removed from the ostensible subject of dispute as defined in logico/methodological, formal/structural, and
semantic/analytical terms. And
who would suspect of ideological bias the impeccable – methodologically sanctioned –
credentials of ‘procedural rules’, ‘models’ and ‘paradigms’? Once, though, such rules and paradigms are
adopted as the common frame of reference of what may or may not be allowed to be considered the
legitimate subject of debate, everything that enters into the accepted parameters is necessarily
constrained not only by the scope of the overall framework, but simultaneously also by the inexplicit
ideological assumptions on the basis of which the methodological principles themselves were in the
first place constituted. This is why the allegedly ‘non-ideological’ ideologies which so successfully conceal and exercise their apologetic
function in the guise of neutral methodology are doubly mystifying. Twentieth-century currents of thought are dominated by approaches that
tend to articulate the social interests and values of the ruling order through complicated – at times completely bewildering – mediations, on the
methodological plane. Thus, more than ever before, the task of ideological demystification is inseparable from the investigation of the complex
dialectical interrelationship between methods and values which no social theory or philosophy can escape.
Alt’s try or die—perm is coopted
Parr ’13 Adrian Parr, The Wrath of Capital, 2013, p. 2-5
The fable provides an intriguing perspective on freedom… the free market is left to negotiate our future
for us.
Perm fractures movements
Parr ’13 Adrian Parr, The Wrath of Capital, 2013, p. 5-6
The contradiction of capitalism is that… political ghost emptied of its collective aspirations.
Rd 4 Neg vs Whitman Menzies/Barsky
1AC
Nuclear loan guarantees
- Natty bad
- Prolif
1NC
1
Financial incentives must target energy production
Yusof ’12 Nor’Aini Yusof, “THE EFFECTIVENESS OF GOVERNMENT INCENTIVES TO FACILITATE AN
INNOVATIVE HOUSING DELIVERY SYSTEM: THE PERSPECTIVE OF HOUSING DEVELOPERS,” Theoretical
and Empirical Researches in Urban Management 7.1 (February 2012), pp. 55-68
In general, incentives can be defined as a mechanism that motivates organisations to become involved in the
targeted activities (Lucas and Ogilvie, 2006). As defined by Berrone (2008), incentives are tangible or intangible rewards
used to stimulate a person or organisation to take a particular course of action. Incentives also serve as encouragement so
that the involved organisations are more receptive and willing to consider the changes that they are expected to adopt (Kam and Tang, 1997).
The rationale for incentives is that while innovation generates positive externalities to other organisations in the industry, the organisations
that adopt innovations incur the costs and bear the risk for the benefit of others (Hyytinen and Toivanen, 2005). Incentives to promote
activities to implement energy savings, for example, provide benefits to home owners in terms of less energy consumption, cost-savings and
improved building efficiency, while at the same time, the inventor of energy-saving technologies incurs substantial costs associated with
innovation (Zhong et al., 2009). Different countries vary considerably regarding the nature of the incentives systems upon which they rely in
order to promote certain actions as a tool to achieve particular goals. Incentives can be divided into three broad classes, namely, financial
incentives, moral incentives and coercive incentives (Johnson, 2004). Financial incentives exist when an agent can expect
some form of reward in exchange for a particular course of action. Moral incentives exist when a particular choice is
socially regarded as particularly admirable. Coercive incentives occur when a failure to act in a particular way is known to result in punishment.
Requate (2005) specifically defines the term incentive for innovation as the advantages that an organisation obtains from creating and
implementing a new technology. Tax incentives can be another way of encouraging housing developers to get involved in innovation. Tax
incentives have been used by countries to achieve a variety of different objectives, not all of which are equally compelling on conceptual
grounds. Some commonly cited objectives of tax incentives include reducing unemployment, prompting specific economic sectors or types of
activities; as a matter of either economic or social policy, and addressing regional development needs (Howell et al., 2002). In short, since
encouraging organisations to adopt innovations is difficult, incentives play a motivational role by serving as the basis for organisations to
change the way in which they do business.
Distinct from loan guarantees
Rosner & Goldberg 11 (Robert, William E. Wrather Distinguished Service Professor, Departments of
Astronomy and Astrophysics, and Physics, and the College at the U of Chicago, and Stephen, Energy
Policy Institute at Chicago, The Harris School of Public Policy Studies, "Small Modular Reactors - Key to
Future Nuclear Power Generation in the U.S.," November 2011,
[https://epic.sites.uchicago.edu/sites/epic.uchicago.edu/files/uploads/EPICSMRWhitePaperFinalcopy.pdf], jam)
x Production Cost Incentive: A production cost incentive is a performance-based incentive. With a production cost incentive, the government
incentive would be triggered only when the project successfully operates. The project sponsors would assume full responsibility for
the upfront capital cost and would assume the full risk for project construction. The production cost incentive would establish a target price, a so-called “market-based benchmark.” Any
savings in energy generation costs over the target price would accrue to the generator. Thus, a production cost incentive would provide a strong motivation for cost control and learning
improvements, since any gains greater than target levels would enhance project net cash flow. Initial SMR deployments, without the benefits of learning, will have significantly higher costs
than fully commercialized SMR plants and thus would benefit from production cost incentives. Because any production cost differential would decline rapidly due to the combined effect of
module manufacturing rates and learning experience, the financial incentive could be set at a declining rate, and the level would be determined on a plant-by-plant basis, based on the
achievement of cost reduction targets. 43 The key design parameters for the incentive include the following: 1. The magnitude of the deployment incentive should decline with the number of
SMR modules and should phase out after the fleet of LEAD and FOAK plants has been deployed. 2. The incentive should be market-based rather than cost-based; the incentive should take into
account not only the cost of SMRs but also the cost of competing technologies and be set accordingly. 3. The deployment incentive could take several forms, including a direct payment to
offset a portion of production costs or a production tax credit. The Energy Policy Act of 2005 authorized a production tax credit of $18/MWh (1.8¢/kWh) for up to 6,000 MW of new nuclear
power plant capacity. To qualify, a project must commence operations by 2021. Treasury Department guidelines further required that a qualifying project initiate construction, defined as the
pouring of safety-related concrete, by 2014. Currently, two GW-scale projects totaling 4,600 MW are in early construction; consequently, as much as 1,400 MW in credits is available for other
nuclear projects, including SMRs. The budgetary cost of providing the production cost incentive depends on the learning rate and the market price of electricity generated from the SMR
project. Higher learning rates and higher market prices would decrease the magnitude of the incentive; lower rates and lower market prices would increase the need for production incentives.
Using two scenarios (with market prices based on the cost of natural gas combined-cycle generation) yields the following range of estimates of the size of production incentives required for
the FOAK plants described earlier. For a 10% learning rate, ƒ Based on a market price of $60/MWh44 (6¢/kWh), the LEAD plant and the subsequent eight FOAK plants would need, on average,
a production credit of $13.60/MWh (1.4¢/kWh), 24% less than the $18 credit currently available to renewable and GW-scale nuclear technologies. (The actual credit would be on a sliding
scale, with the credit for the LEAD plant at approximately $31/MWh, or 3.1¢/kWh, declining to a credit of about $6/MWh, or 0.6¢/kWh, by the time of deployment of FOAK-8). The total cost of
the credit would be about $600 million per year (once all plants were built and operating). ƒ If the market price were about $70/MWh (7¢/kWh), the LEAD and only four subsequent FOAK
plants would require a production incentive. In this case, the average incentive would be $8.40/MWh (0.8¢/kWh), with a total cost of about $200 million per year. Higher learning rates would
drive down the size of the production incentive. For example, at a 12% learning rate, ƒ At a market price of $60/MWh (6¢/kWh), the LEAD and the subsequent five FOAK plants would require a
production incentive, with an average incentive level of about $15/MWh (1.5¢/kWh). Total annual cost (after all plants are in full operation) would be about $450 million per year. ƒ At a
market price of $70/MWh (7¢/kWh), the LEAD and three FOAK plants would require a production incentive averaging $9.00/MWh (0.9¢/kWh, half of the current statutory incentive), with a
total annual cost of about $170 million per year. The range of costs for the production incentive illustrates the sensitivity of the incentive level to the learning rate and the market price of
electricity. Thus, efforts to achieve higher learning rates, including fully optimized engineering designs for the SMRs and the manufacturing plant, as well as specially targeted market
introduction opportunities that enable SMRs to sell electricity for higher priced and higher value applications, can have a critical impact on the requirements for production incentives. The potential size of the incentive should be subject to further analysis as higher quality cost estimates become available. x Loan Guarantees: Loan guarantees do not directly impact project capital costs, but guarantees facilitate the ability of the project sponsors to access capital at lower cost. The effect of the guarantee is to broaden the pool of potential equity and debt investors, and thus to lower the WACC of the project. The lower WACC is then reflected in a lower LCOE. Loan guarantees can be particularly effective in mitigating the risk premium typically associated with the financing of FOAK technology deployments. For example, federal loan guarantees are viewed as having a key role in mitigating the risk premium and lowering the WACC of early-mover, GW-scale nuclear plants. As discussed earlier, the smaller investment requirements for the first-of-a-kind SMR plant (both the LEAD and one or more early
FOAK plants) significantly reduce the risk premium that may otherwise be sought by private equity and debt holders; this reduced risk premium would obviate the need for loan guarantees.
Appendix F discusses the relationship between size of investment relative to the size of the sponsor and its potential effect on risk premium. The business case analysis assumes that a robust
SMR DD&E effort will mitigate the risk premium sufficiently so that loan guarantees will not be part of the incentive program. However, it is possible that a federal loan guarantee may be
appropriate for the LEAD and the FOAK-1 plant.45
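For readers checking the card's arithmetic, the sliding-scale credit can be reproduced with a short sketch, assuming a standard Wright learning curve. The $91/MWh starting LCOE is an inference, not a figure from the card: it is the $60/MWh market price plus the roughly $31/MWh credit cited for the LEAD plant.

```python
# Minimal sketch of the sliding-scale production credit described above,
# assuming a standard Wright learning curve (cost falls by the learning
# rate with each doubling of cumulative plants). The $91/MWh starting
# LCOE is an inference, not a figure from the card: $60/MWh market price
# plus the roughly $31/MWh credit cited for the LEAD plant.
import math

MARKET_PRICE = 60.0    # $/MWh, the card's first scenario
LEARNING_RATE = 0.10   # 10% cost reduction per doubling of plants built
LEAD_LCOE = 91.0       # $/MWh for the first (LEAD) plant -- assumed
N_PLANTS = 9           # LEAD plant plus eight FOAK plants

exponent = math.log2(1 - LEARNING_RATE)  # Wright's-law exponent (negative)

credits = []
for n in range(1, N_PLANTS + 1):
    lcoe = LEAD_LCOE * n ** exponent          # cost of the n-th plant
    credit = max(lcoe - MARKET_PRICE, 0.0)    # credit covers the gap
    credits.append(credit)
    print(f"plant {n}: LCOE ${lcoe:5.1f}/MWh, credit ${credit:4.1f}/MWh")

print(f"average credit: ${sum(credits) / len(credits):.2f}/MWh")
```

Under those assumptions the nine credits decline from about $31/MWh to about $5/MWh and average out close to the $13.60/MWh figure quoted in the card.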
Voting issue:
Aff destroys limits
Dyson et al. 3 Megan Dyson, “Flow: The Essentials of Environmental Flows,” International Union for
Conservation of Nature and Natural Resources, 2003, p. 67-68
Understanding of the term ‘incentives’ varies and economists have produced numerous typologies. A brief characterization of incentives is
therefore warranted. First, the term is understood by economists as incorporating both positive and negative aspects, for example a tax that
leads a consumer to give up an activity is an incentive, not a disincentive or negative incentive. Second, although incentives are often
construed purely in economic terms, incentives refer to more than just financial rewards and penalties. They are the “positive and negative
changes in outcomes that individuals perceive as likely to result from particular actions taken within a set of rules in a particular physical and
social context.”80 Third, it
is possible to distinguish between direct and indirect incentives, with direct
incentives referring to financial or other inducements and indirect incentives referring to both variable
and enabling incentives.81 Finally, incentives of any kind may be called ‘perverse’ where they work against their purported aims or
have significant adverse side effects. Direct incentives lead people, groups and organisations to take particular action or
inaction. In the case of environmental flows these are the same as the net gains and losses that different stakeholders experience. The key
challenge is to ensure that the incentives are consistent with the achievement of environmental flows. This implies the need to compensate
those that incur additional costs by providing them with the appropriate payment or other compensation. Thus,
farmers asked to
give up irrigation water to which they have an established property or use right are likely to require a
payment for ceding this right. The question, of course, is how to obtain the financing necessary to cover the costs of developing such
transactions and the transaction itself. Variable incentives are policy instruments that affect the relative costs and
benefits of different economic activities. As such, they can be manipulated to affect the behaviour of the
producer or consumer. For example, a government subsidy on farm inputs will increase the relative
profitability of agricultural products, hence probably increasing the demand for irrigation water. Variable
incentives therefore have the ability to greatly increase or reduce the demand for out-of-stream, as well as in-stream, uses of water. The
number of these incentives within the realm of economic and fiscal policy is practically limitless.
Takes away neg ground
Camm et al ‘8 Frank Camm, James T. Bartis, Charles J. Bushman, “Federal Financial Incentives to
Induce Early Experience Producing Unconventional Liquid Fuels,” RAND Corporation, prepared for the
United States Air Force and the National Energy Technology Laboratory of the United States Department
of Energy, 2008, http://www.rand.org/pubs/technical_reports/TR586.html
Production Incentives When a specific investor’s discount rate exceeds the government’s, investment incentives are more cost-effective than
are production incentives. As the cash-flow analysis demonstrates, at project start-up, it costs the government substantially less to reduce a
project’s real after-tax private IRR by one point with an investment incentive than with a production incentive. But after
investment is
complete, investment incentives are no longer available. In some projects, a production incentive can
help the government ensure that, after investment costs are sunk, an investor still has an incentive to
operate the plant it has built. This is the primary role any production incentive is likely to play in an
incentive package that promotes private production of unconventional fuels. In a secondary role, an incentive could
also be designed to induce more production each year to accelerate the learning process. The government’s goals should dictate which form of
production incentive to use. Like investment incentives, production incentives come in many varieties—e.g., lump sum versus cost sharing,
direct subsidy versus tax subsidy.6 Their relative costs and benefits mirror those of investment incentives. As noted, production incentives are
most likely to be useful if a plant does not generate taxable net income without government support. As a result, the same concerns raised
about the value of tax subsidies to an investor without taxable income arise here. One new wrinkle here is the choice between production
incentives rewarding years of production and those rewarding production during any year. The distinction can be important if the incentive
package does not effectively dictate, through purchasing and pricing agreements, how much the investor will produce in a year.
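The card's core claim — that an upfront investment incentive costs the government less than a production incentive of equal value to the investor whenever the investor's discount rate exceeds the government's — can be illustrated with a short present-value sketch. The 12% investor rate, 3% government rate, and 20-year plant life are assumptions chosen for illustration; none of these numbers appear in the card.

```python
# Minimal sketch of the card's claim: when the investor's discount rate
# exceeds the government's, an upfront investment incentive is cheaper
# for the government than a production incentive of equal value to the
# investor. The 12%/3% rates and 20-year life are assumed for
# illustration; none of these numbers appear in the card.

def annuity_factor(rate: float, years: int) -> float:
    """Present value of $1 received each year for `years` years."""
    return (1 - (1 + rate) ** -years) / rate

INVESTOR_RATE = 0.12   # private after-tax discount rate (assumed)
GOVT_RATE = 0.03       # government discount rate (assumed)
YEARS = 20             # plant operating life (assumed)

# Annual production payment whose present value to the investor is $1:
annual_payment = 1 / annuity_factor(INVESTOR_RATE, YEARS)

# Present-value cost of that same payment stream to the government:
govt_cost = annual_payment * annuity_factor(GOVT_RATE, YEARS)

print(f"annual payment: ${annual_payment:.3f} per $1 of investor value")
print(f"government PV cost: ${govt_cost:.2f} vs $1.00 for an upfront grant")
```

With these rates, a production-payment stream worth $1 to the investor costs the government roughly $2 in present value — the cash-flow asymmetry the card describes.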
2
Their justification of energy production in totalizing apocalyptic terms drains energy
from the workers who produce it. The machine saves lives at the expense of all
livelihood.
The Invisible Committee ‘9 French anarcho-communist grad students, The Coming Insurrection,
2009, Semiotext(e) http://libcom.org/files/thecominsur_booklet[1].pdf
The order of work was the order of a world. The evidence of its ruin is paralyzing to those who dread what will come after.
Today work is tied less to the economic necessity of producing goods than to the political necessity of
producing producers and consumers, and of preserving by any means necessary the order of work.
Producing oneself is becoming the dominant occupation of a society where production no longer has an
object: like a carpenter who’s been evicted from his shop and in desperation sets about hammering and
sawing himself. All these young people smiling for their job interviews, who have their teeth whitened
to give them an edge, who go to nightclubs to boost the company spirit, who learn English to advance their careers, who get divorced
or married to move up the ladder, who take courses in leadership or practice “self-improvement” in order to
better “manage conflicts” – “the most intimate ‘self-improvement’”, says one guru, “will lead to increased emotional stability, to
smoother and more open relationships, to sharper intellectual focus, and therefore to a better economic performance.” This swarming
little crowd that waits impatiently to be hired while doing whatever it can to seem natural is the result
of an attempt to rescue the order of work through an ethos of mobility. To be mobilized is to relate to
work not as an activity but as a possibility. If the unemployed person removes his piercings, goes to the
barber and keeps himself busy with “projects,” if he really works on his “employability,” as they say, it’s
because this is how he demonstrates his mobility. Mobility is this slight detachment from the self, this
minimal disconnection from what constitutes us, this condition of strangeness whereby the self can now
be taken up as an object of work, and it now becomes possible to sell oneself rather than one’s labor
power, to be remunerated not for what one does but for what one is, for our exquisite mastery of social codes, for
our relational talents, for our smile and our way of presenting ourselves. This is the new standard of socialization. Mobility brings
about a fusion of the two contradictory poles of work: here we participate in our own exploitation, and all participation
is exploited. Ideally, you are yourself a little business, your own boss, your own product. Whether one is working or not, it’s a
question of generating contacts, abilities, networking, in short: “human capital.” The planetary injunction to mobilize
at the slightest pretext – cancer, “terrorism,” an earthquake, the homeless – sums up the reigning powers’ determination to maintain the reign
of work beyond its physical disappearance. The
present production apparatus is therefore, on the one hand, a
gigantic machine for psychic and physical mobilization, for sucking the energy of humans that have
become superfluous, and, on the other hand, it is a sorting machine that allocates survival to conformed
subjectivities and rejects all “problem individuals,” all those who embody another use of life and, in this
way, resist it. On the one hand, ghosts are brought to life, and on the other, the living are left to die. This is the
properly political function of the contemporary production apparatus. To organize beyond and against work, to collectively
desert the regime of mobility, to demonstrate the existence of a vitality and a discipline precisely in
demobilization, is a crime for which a civilization on its knees is not about to forgive us. In fact, it’s the
only way to survive it.
Alt text: embrace the coming insurrection. Immersing ourselves in catastrophe
reclaims life from the control of the system.
The Invisible Committee ‘9 French anarcho-communist grad students, The Coming Insurrection,
2009, Semiotext(e) http://libcom.org/files/thecominsur_booklet[1].pdf
Everything about the environmentalist’s discourse must be turned upside-down. Where they talk of
“catastrophes” to label the present system’s mismanagement of beings and things, we only see the
catastrophe of its all too perfect operation. The greatest wave of famine ever known in the tropics (1876-1879) coincided with a global drought, but more significantly, it also coincided with the apogee of
colonization. The destruction of the peasant’s world and of local alimentary practices meant the
disappearance of the means for dealing with scarcity. More than the lack of water, it was the effect of
the rapidly expanding colonial economy that littered the Tropics with millions of emaciated corpses.
What presents itself everywhere as an ecological catastrophe has never stopped being, above all, the
manifestation of a disastrous relationship to the world. Inhabiting a nowhere makes us vulnerable to the
slightest jolt in the system, to the slightest climatic risk. As the latest tsunami approached and the
tourists continued to frolic in the waves, the islands’ hunter-gatherers hastened to flee the coast,
following the birds. Environmentalism’s present paradox is that under the pretext of saving the planet
from desolation it merely saves the causes of its desolation. The normal functioning of the world usually
serves to hide our state of truly catastrophic dispossession. What is called “catastrophe” is no more than
the forced suspension of this state, one of those rare moments when we regain some sort of presence in
the world. Let the petroleum reserves run out earlier than expected; let the international flows that
regulate the tempo of the metropolis be interrupted, let us suffer some great social disruption and some
great “return to savagery of the population,” a “planetary threat,” the “end of civilization!” Either way,
any loss of control would be preferable to all the crisis management scenarios they envision. When this
comes, the specialists in sustainable development won’t be the ones with the best advice. It’s within the
malfunction and short-circuits of the system that we find the elements of a response whose logic would
be to abolish the problems themselves. Among the signatory nations to the Kyoto Protocol, the only countries that have fulfilled
their commitments, in spite of themselves, are the Ukraine and Romania. Guess why. The most advanced experimentation with “organic”
agriculture on a global level has taken place since 1989 on the island of Cuba. Guess why. And it’s along the African highways, and nowhere
else, that auto mechanics has been elevated to a form of popular art. Guess how. What makes
the crisis desirable is that in the
crisis the environment ceases to be the environment. We are forced to reestablish contact, albeit a
potentially fatal one, with what’s there, to rediscover the rhythms of reality. What surrounds us is no
longer a landscape, a panorama, a theater, but something to inhabit, something we need to come to
terms with, something we can learn from. We won’t let ourselves be led astray by the ones who’ve
brought about the contents of the “catastrophe.” Where the managers platonically discuss among
themselves how they might decrease emissions “without breaking the bank,” the only realistic option
we can see is to “break the bank” as soon as possible and, in the meantime, take advantage of every
collapse in the system to increase our own strength.
3
CIR passes—Obama pushing
Nakamura 2/21 (David, WaPo, “Labor, business leaders declare progress in immigration talks”
http://www.washingtonpost.com/blogs/post-politics/wp/2013/02/21/labor-business-leaders-declare-progress-in-immigration-talks/) will
Labor and business leaders on Thursday said they have made progress toward a pact over how to
implement reforms of immigration laws in the workplace, but they stopped short of agreeing on a new guest worker
program for foreigners.¶ ¶ In a joint statement, AFL-CIO President Richard Trumka and U.S. Chamber of Commerce President
Thomas Donohue expressed optimism over their negotiations and emphasized they are committed to finding a solution that
would allow companies to more quickly and easily hire foreigners when Americans are not available.¶ “Over the last months, representatives of
business and labor have been engaged in serious discussions about how to fix the system in a way that benefits both workers and employers,
with a focus on lesser-skilled occupations,” the two leaders said. “We
have found common ground in several important
areas, and have committed to continue to work together and with Members of Congress to enact
legislation that will solve our current problems in a lasting manner.Ӧ A bipartisan Senate group that is developing
immigration reform legislation had asked the AFL-CIO and Chamber to come up with an agreement over
a potential guest worker program, a controversial provision that has helped sink previous attempts to overhaul immigration laws.¶
Donohue has called for a new guest worker program that would allow companies to hire more foreigners in low-skilled occupations such as
farming where there have been shortages of U.S. workers, and to allow foreign workers increased mobility to change jobs when necessary.
Trumka has said the labor union would agree only if the number of visas is reduced during times of high unemployment and if foreign
workers are provided a path to citizenship to help protect wages and benefits to all workers.¶ In the joint statement, the two sides said they
have agreed to three principles. The first is that American workers should have the first crack at all jobs, and the second would provide a new
visa that “does not keep all workers in a permanent temporary status, provides labor mobility in a way that still gives American workers a first
shot at available jobs, and that automatically adjusts as the American economy expands and contracts.” ¶ The third principle is a call for a new,
quasi-independent federal bureau that would monitor employment statistics and trends to inform Congress about where to set visa caps for
foreign workers each year.¶ “We are now in the middle – not the end – of this process, and we pledge to continue to work together and with
our allies and our representatives on Capitol Hill to finalize a solution that is in the interest of this country we all love,” Donohue and Trumka
said in the statement.¶ The
Senate working group, comprised of four Democrats and four Republicans, is aiming to develop
legislative proposals by next month, and President Obama has affirmed his support of the group’s general
principles.¶ Obama’s own legislative proposals, contained in a draft bill that the White House says is a backup plan if the Senate effort fails,
does not include a guest worker provision. As a senator in 2007, Obama voted in favor of an amendment to a comprehensive immigration bill
that would have sunset a guest worker program after five years; that immigration bill ultimately failed in the Senate, and some Republicans cite
the amendment as a reason why.¶ White
House press secretary Jay Carney said the joint statement represented “yet
another sign of progress, of bipartisanship and we are encouraged by it. At the same time, it is an agreement on
principles. We remain focused on encouraging the Senate to develop a comprehensive bill.”
Plan’s highly controversial – spending and safety
Wharton 11 (Knowledge@Wharton, the online research and analysis journal of the Wharton School of
the University of Pennsylvania, Mar 30, [knowledge.wharton.upenn.edu/article.cfm?articleid=2743],
jam)
Yet while the Administration continues to voice its support, Fukushima may have stalled the expansion of
nuclear power in the U.S. for the near future, say Wharton professors and nuclear experts. Despite calls for a "nuclear
renaissance," the industry was already struggling to move forward in the midst of an economic downturn and competition from
cheap natural gas. Now events in Japan have reignited fears about nuclear's safety, which could cause further delays. The
lingering question for U.S. energy policy: If not nuclear, then what? Nuclear as part of U.S. energy policy "depends on what leadership we
have," says Erwann Michel-Kerjan, managing director of the Wharton Risk Management and Decision Processes Center. "Where do we want
our country to be? We have been talking about energy independence for a long time. The question is, what do we do about that?" Before the
earthquake in Japan, a growing number of people were saying nuclear. Not only would it allow the United States to become more energy
independent, but it would also lower greenhouse gas emissions, the industry argued. When measured by carbon footprint, nuclear is on par
with solar, hydro, wind, biomass and geothermal, and in terms of the land use required, nuclear comes out ahead of other green energy
sources, they say. For a 1,000 megawatt power plant, nuclear requires about one square mile of space, compared with 50 square miles for
solar, 250 for wind and 2,600 for biomass. But nuclear
power plants are enormously expensive, costing as much as $2 billion
to $6 billion to build, according to "Nuclear Energy Policy," a report from the Congressional Research Service. Financing new reactors is heavily dependent on loan guarantees from the federal government, which are highly controversial. To expand, the industry says it needs more than the $18.5 billion in loan guarantees the government currently allocates, which is enough for three
or four reactors. Opponents argue that loan guarantees unfairly subsidize a mature industry and would be better
spent elsewhere. The debate in some other countries is less heated. Unlike the United States, which has significant natural
resources, many other countries have fewer energy options and have made a strong commitment to nuclear power. They are unlikely to
abandon it now, says Michel-Kerjan. France, for example, which turned to nuclear energy decades ago after suffering through an oil crisis, relies
on nuclear for 80% of its power. Emerging economies such as China and India are also investing heavily in nuclear to cope with increasing
energy demand.
CIR boosts science diplomacy – key to solve numerous existential threats
Pickering and Agre 10 (Thomas Pickering, former undersecretary of State and US Ambassador to the
UN, Russia, India, Israel, Jordan, El Salvador, and Nigeria. Peter Agre, Nobel Prize winning chemist at
Johns Hopkins. “More opportunities needed for U.S. researchers to work with foreign counterparts”
Baltimore Sun 2-9-10 http://articles.baltimoresun.com/2010-02-09/news/bal-op.northkorea0209_1_science-and-technology-north-korea-scientists-and-engineers)
In particular, the U.S. government should quickly and significantly increase the number of H1-B visas being
approved for specialized foreign workers such as doctors, scientists and engineers. Their contributions are
critical to improving human welfare as well as our economy. Foreign scientists working or studying in U.S. universities also
become informal goodwill ambassadors for America globally -- an important benefit in the developing
world, where senior scientists and engineers often enter national politics.¶ More broadly, we urgently need
to expand and deepen links between the U.S. and foreign scientific communities to advance solutions to
common challenges. Climate change, sustainable development, pandemic disease, malnutrition,
protection for oceans and wildlife, national security and innovative energy technologies all demand
solutions that draw on science and technology.¶ Fortunately, U.S. technological leadership is admired
worldwide, suggesting a way to promote dialogue with countries where we otherwise lack access and
leverage. A June 2004 Zogby International poll commissioned by the Arab American Institute found that only 11 percent of Moroccans
surveyed had a favorable overall view of the United States -- but 90 percent had a positive view of U.S. science and technology. Only 15 percent
of Jordanians had a positive overall view, but 83 percent registered admiration for U.S. science and technology. Similarly, Pew
polling
data from 43 countries show that favorable views of U.S. science and technology exceed overall views of
the United States by an average of 23 points.¶ The recent mission to North Korea exemplified the vast potential of science for
U.S. diplomacy. Within the scientific community, after all, journals routinely publish articles co-written by scientists from different nations, and
scholars convene frequent conferences to extend those ties. Science demands an intellectually honest atmosphere, peer review and a common
language for professional discourse. Basic values of transparency, vigorous inquiry and respectful debate are all inherent to science.¶ Nations
that cooperate on science strengthen the same values that support peaceful conflict resolution and
improved public safety. U.S. and Soviet nongovernmental organizations contributed to a thaw in the
Cold War through scientific exchanges, with little government support other than travel visas. The U.S. government is off to a
good start in leveraging science diplomacy, with 43 bilateral umbrella science and technology agreements now in force. The Obama
administration further elevated science engagement, beginning with the president's June speech in Cairo. Then, in November, Secretary of
State Hillary Clinton appointed three science envoys to foster new partnerships and address common challenges, especially within Muslim-majority countries. She also announced the Global Technology and Innovation Fund, through which the Overseas Private Investment
Corporation will spur private-sector investments in science and technology industries abroad.¶ These steps are commendable, but the
White House and the State Department need to exercise even greater leadership to build government capacity
and partnerships that advance U.S. science diplomacy globally. Congress should lead as well, with greater recognition of
science engagement and increased funding for science capacity-building. Both chambers must work together to give the executive branch the
resources it needs. In
an era of complex global challenges, science diplomacy is a critical tool for U.S. foreign
policy. The opportunity to strengthen that tool and advance our diplomatic goals should not be missed.
1NC—Natty Adv
Nuclear hikes warming
ScienceDaily (July 13, 2009) “Trapping Carbon Dioxide Or Switching To Nuclear Power Not Enough To
Solve Global Warming Problem, Experts Say”
http://www.sciencedaily.com/releases/2009/07/090713085248.htm
Bo Nordell and Bruno Gervet of the Department of Civil and Environmental Engineering at Luleå University of Technology in Sweden have
calculated the total energy emissions from the start of the industrial revolution in the 1880s to the modern day. They
have worked out
that using the increase in average global air temperature as a measure of global warming is an
inadequate measure of climate change. They suggest that scientists must also take into account the
total energy of the ground, ice masses and the seas if they are to model climate change accurately. The
researchers have calculated that the heat energy accumulated in the atmosphere corresponds to a mere 6.6% of
global warming, while the remaining heat is stored in the ground (31.5%), melting ice (33.4%) and sea water (28.5%). They point out that
net heat emissions between the industrial revolution circa 1880 and the modern era at 2000 correspond to almost three quarters of the
accumulated heat, i.e., global warming, during that period. Their calculations suggest that most measures to combat global
warming, such as reducing our reliance on burning fossil fuels and switching to renewables like wind power and solar energy, will ultimately
help in preventing catastrophic climate change in the long term. But the same calculations also show that trapping
carbon dioxide,
so-called carbon dioxide sequestration, and storing it deep underground or on the sea floor will have
very little effect on global warming. "Since net heat emissions accounts for most of the global warming
there is no or little reason for carbon dioxide sequestration," Nordell explains, "The increasing carbon dioxide emissions
merely show how most net heat is produced." The "missing" heat, 26%, is due to the greenhouse effect, natural variations in climate and/or an
underestimation of net heat emissions, the researchers say. These calculations are actually rather conservative, the researchers say, and the
missing heat may be much less. The researchers
also point out a flaw in the nuclear energy argument. Although
nuclear power does not produce carbon dioxide emissions in the same way as burning fossil fuels, it does
produce heat emissions equivalent to three times the energy of the electricity it generates and so
contributes to global warming significantly, Nordell adds.
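A quick check of the card's heat-budget arithmetic (an editorial annotation, not part of the Nordell and Gervet evidence): the four storage shares exhaust the warming total, and the 26 percent of "missing" heat is simply the complement of the roughly three quarters attributed to net heat emissions.
\[ 6.6\% + 31.5\% + 33.4\% + 28.5\% = 100\% \qquad\text{and}\qquad 100\% - 26\% = 74\% \approx \tfrac{3}{4} \]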
Methane leaks reducing now
Everley 2-7 Steve Everley, Spokesman at Energy in Depth, “*UPDATE* EPA Data Show 66 Percent Drop
in Methane Emissions from Oil and Gas,” Energy in Depth, 2/7/2013,
http://www.energyindepth.org/epa-data-show-66-percent-drop-in-methane-emissions-from-oil-and-gas/
The U.S. Environmental Protection Agency’s latest report on greenhouse gases (GHGs) shows a significant drop in
methane emissions from natural gas development, as compared to EPA’s prior data. The latest reporting from EPA
suggests methane emissions from petroleum and natural gas systems were 82.6 million metric tons of CO2 equivalent in 2011. Last year, EPA’s
GHG Inventory – which assessed data for 2010 – estimated that natural gas systems alone emitted more than 215 million metric tons, while
petroleum systems added another 31 million metric tons. Taken together, EPA’s latest
data on petroleum and natural gas suggest a 66
percent decline in methane emissions from the agency’s prior estimates. Here are some other noteworthy findings from
EPA:
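The headline 66 percent figure follows from the card's own numbers (an editorial annotation, not part of the Everley evidence): prior EPA estimates totaled 215 + 31 = 246 million metric tons of CO2 equivalent, against 82.6 in the latest data.
\[ 1 - \frac{82.6}{215 + 31} = 1 - \frac{82.6}{246} \approx 0.66 \]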
Can’t solve warming—emission patterns are locked in
Dye 10-26 Lee Dye, “It May Be Too Late to Stop Global Warming,” ABC News, 10/26/2012,
http://abcnews.go.com/Technology/late-stop-global-warming/story?id=17557814&singlePage=true#.UI58icXR5DA
Here's a dark secret about the earth's changing climate that many scientists believe, but few seem eager to discuss: It's
too late to stop global
warming. Greenhouse gasses pumped into the planet's atmosphere will continue to grow even if the
industrialized nations cut their emissions down to the bone. Furthermore, the severe measures that would
have to be taken to make those reductions stand about the same chance as that proverbial snowball in
hell. Two scientists who believe we are on the wrong track argue in the current issue of the journal Nature Climate Change that global warming is inevitable and
it's time to switch our focus from trying to stop it to figuring out how we are going to deal with its consequences. "At present, governments' attempts
to limit greenhouse-gas emissions through carbon cap-and-trade schemes and to promote renewable and sustainable energy sources are
probably too late to arrest the inevitable trend of global warming," Jasper Knight of Wits University in Johannesburg, South Africa, and Stephan
Harrison of the University of Exeter in England argue in their study. Those efforts, they continue, "have little relationship to the real world." What is clear,
they contend, is a profound lack of understanding about how we are going to deal with the loss of huge land
areas, including some entire island nations, and massive migrations as humans flee areas no longer
suitable for sustaining life, the inundation of coastal properties around the world, and so on ... and on ... and on.
That doesn't mean nations should stop trying to reduce their carbon emissions, because any reduction could lessen the consequences. But the cold fact is no matter
what Europe and the United States and other "developed" nations do, it's not going to curb global climate change, according to one scientist who was once highly
skeptical of the entire issue of global warming. "Call me a converted skeptic," physicist Richard A. Muller says in an op-ed piece published in the New York Times last
July. Muller's latest book, "Energy for Future Presidents," attempts to poke holes in nearly everything we've been told about energy and climate change, except the
fact that "humans are almost entirely the cause" of global warming. Those of us who live in the "developed" world initiated it. Those who live in the "developing"
world will sustain it as they strive for a standard of living equal to ours. "As far as global warming is concerned, the
developed world is becoming
irrelevant," Muller insists in his book. We could set an example by curbing our emissions, and thus claim in the future that "it wasn't our fault," but about the
only thing that could stop it would be a complete economic collapse in China and the rest of the world's developing countries. As they race
forward, their industrial growth -- and their greenhouse gas emissions -- will outpace any efforts by the
West to reduce their carbon footprints, Muller contends. "China has been installing a new gigawatt of coal
power each week," he says in his Times piece, and each plant pumps an additional ton of gases into the
atmosphere "every second." "By the time you read this, China's yearly greenhouse gas emissions will be
double those of the United States, perhaps higher," he contends. And that's not likely to change. "China is fighting
poverty, malnutrition, hunger, poor health, inadequate education and limited opportunity. If you were
the president of China, would you endanger progress to avoid a few degrees of temperature change?" he
asks.
Natty Gas k2 econ and manufacturing – solves price spikes
Gibson 12 Thomas is a writer at the Hill, 5/2/12, “Natural gas: game changer for American
manufacturing” http://thehill.com/special-reports-archive/1335-ohio-energy-summit/225145-natural-gas-game-changer-for-american-manufacturing
Domestic natural gas holds the potential to yield revolutionary economic and energy benefits for the
United States. Fully developing our natural gas resources from the shale formations across the country
will provide low-cost and reliable sources of energy that will boost our competitiveness, and spur an
American manufacturing renaissance which the steel industry, with its ripple effect throughout the
supply chain, is helping to lead. According to a recent report released by Professor Timothy Considine, an energy economist at the
University of Wyoming, the U.S. steel industry supports more than one million jobs in the U.S. economy. For every job formed in the
steel industry, seven additional jobs are created in other economic sectors. For that reason, the steel
sector has played an outsized role in driving manufacturing’s post-recession resurgence. Despite this
encouraging analysis, the U.S. manufacturing sector still faces significant challenges including energy
cost uncertainty. As an energy-intensive industry, the domestic steel industry’s international
competitiveness depends on our ability to capitalize on the discovery and development of North
America’s shale resources. Our industry consumes large amounts of natural gas, and will benefit from
the increased supply resulting from shale production, which keeps gas both reliable and available at a
low cost. A second positive dimension of shale resource development for our industry is that steel pipe and tube products that U.S.
steelmakers produce are integral to the exploration, production and transmission of natural gas and oil. Producing more natural gas
will help maximize industry market opportunities due to the increased demand for steel. In addition,
these developments will create high-value jobs, stimulate economic activity in North America and help
provide energy security and independence for our nation. The facts are crystal clear. A recent independent study found
that the Marcellus Shale accounted for the creation of more than 60,000 jobs in Pennsylvania in 2009, with more than 200,000 new jobs
anticipated by 2015. In addition, a study by the Ohio Oil & Gas Energy Education Program notes that development of Ohio’s Utica Shale could
support more than 204,000 jobs in just four years (IHS Global Insight).
In the steel industry, which supports over 115,000
jobs in Ohio, companies are already making substantial new capital investments and creating high-value
jobs in the state as a result of shale natural gas production. And developing our natural shale resources
is not just creating jobs within the steel industry. Because of our industry’s jobs multiplier effect, it is
creating thousands of jobs in the manufacturing sector. Because of its massive potential for job creation
and capability to provide a reliable, affordable source of energy, it is essential that public policies on
natural gas production be developed and implemented carefully, so as to balance economic, energy, and
environmental goals. Most of the states in which shale production is taking place have already had robust regulatory programs in place
for decades; others are currently in the process of updating regulations. This is working, and regulations on shale gas drilling are best developed
and put in place at the state rather than the federal level. A one-size-fits-all approach from the federal government will almost certainly hinder
the economic and energy benefits that we are already seeing from shale gas production, and could also limit the potential for an American
manufacturing renaissance. By
embracing our shale reserves as the tremendous domestic resource that they
are, while developing them in a careful manner, our nation can lessen its dependency on foreign energy
supplies, create thousands of jobs and spur economic growth for years to come.
Correlation with stability doesn’t imply causality—Brooks et al oversell benefits and
ignore drawbacks
Walt 1-2 Stephen M. Walt, professor of international affairs at Harvard’s Kennedy School of
Government, “More or less: The debate on U.S. grand strategy,” 1/2/2013,
http://walt.foreignpolicy.com/posts/2013/01/02/more_or_less_the_debate_on_us_grand_strategy
Third, B, I, & W give "deep engagement" full credit for nearly all the good things that have occurred
internationally since 1945 (great power peace, globalization, non-proliferation, expansion of trade, etc.), even though the
direct connection between the strategy and these developments remains contested. More importantly, they
absolve the strategy from most if not all of the negative developments that also took place during this
period. They recognize that events like the Indochina War and the 2003 war in Iraq were costly blunders,
but they regard them as deviations from "deep engagement" rather than as a likely consequence of a
strategy that sees the entire world as of critical importance and the remaking of other societies along
liberal lines as highly desirable if not strategically essential.
Overstretch and blowback mitigate coercive benefits
Maher 11 (Richard, Max Weber postdoctoral fellow at the European University Institute, Ph.D in
Political Science from Brown University, Orbis, 55(1), Winter, jam)
Since the disintegration of the Soviet Union and the end of the Cold War, world politics has been unipolar, defined by American
preponderance in each of the core components of state power-military, economic, and technological. Such an imbalanced distribution of
power in favor of a single country is unprecedented in the modern state system. This material advantage does not automatically
translate into America's preferred political and diplomatic outcomes, however. Other states, if now only at the margins, are challenging
U.S. power and authority. Additionally, on a range of issues, the United States is finding it increasingly difficult to realize its goals and
ambitions. The
even bigger challenge for policymakers in Washington is how to respond to signs that
America's unquestioned preeminence in international politics is waning. This decline in the United States'
relative position is in part a consequence of the burdens and susceptibilities produced by unipolarity.
Contrary to the conventional wisdom, the U.S. position both internationally and domestically may actually be
strengthened once this period of unipolarity has passed. On pure material terms, the gap between the United States and
the rest of the world is indeed vast. The U.S. economy, with a GDP of over $14 trillion, is nearly three times the size of China's, now the
world's second-largest national economy. The United States today accounts for approximately 25 percent of global economic output, a
figure that has held relatively stable despite steadily increasing economic growth in China, India, Brazil, and other countries. Among the
group of six or seven great powers, this figure approaches 50 percent. When one takes discretionary spending into account, the United
States today spends more on its military than the rest of the world combined. This imbalance is even further magnified by the fact that five
of the next seven biggest spenders are close U.S. allies. China, the country often seen as America's next great geopolitical rival, has a
defense budget that is one-seventh of what the United States spends on its military. There is also a vast gap in terms of the reach and
sophistication of advanced weapons systems. By some measures, the United States spends more on research and development for its
military than the rest of the world combined. What is remarkable is that the United States can do all of this without completely breaking
the bank. The United States today devotes approximately 4 percent of GDP to defense. As a percentage of GDP, the United States today
spends far less on its military than it did during the Cold War, when defense spending hovered around 10 percent of gross economic output.
As one would expect, the United States today enjoys unquestioned preeminence in the military realm. No other state comes close to having the capability to project military power like the United States. And yet, despite this material preeminence, the United
States sees its political and strategic influence diminishing around the world. It is involved in two costly and
destructive wars, in Iraq and Afghanistan, where success has been elusive and the end remains out of sight. China
has adopted a new assertiveness recently, on everything from U.S. arms sales to Taiwan, currency convertibility, and
America's growing debt (which China largely finances). Pakistan, one of America's closest strategic allies, is
facing the threat of social and political collapse. Russia is using its vast energy resources to reassert its dominance in what it views
as its historical sphere of influence. Negotiations with North Korea and Iran have gone nowhere in dismantling
their nuclear programs. Brazil's growing economic and political influence offer another option for partnership and
investment for countries in the Western Hemisphere. And relations with Japan, following the election that brought the
opposition Democratic Party into power, are at their frostiest in decades. To many observers, it seems that America's vast
power is not translating into America's preferred outcome. As the United States has come to learn, raw power does not automatically translate into the realization of one's preferences, nor is it necessarily easy to maintain one's predominant position in world politics. There are many costs that come with predominance - material, political, and reputational. Vast imbalances of power create apprehension and anxiety in others, in one's friends just as much as in one's rivals. In this view, it is not necessarily American policies that produce unease but rather American predominance itself. Predominance also makes one a tempting target, and a scapegoat for other countries' own problems and unrealized ambitions. Many a Third World autocrat has
blamed his country's economic and social woes on an ostensible U.S. conspiracy to keep the country fractured, underdeveloped, and
subservient to America's own interests.
Predominant power
likewise
breeds
envy, resentment, and alienation. How is it
possible for one country to be so rich and powerful when so many others are weak, divided, and poor? Legitimacy-the perception that
one's role and purpose is acceptable and one's power is used justly-is
indispensable for maintaining
power and influence in
world politics. As we witness the emergence (or re-emergence) of great powers in other parts of the world, we realize that American
predominance cannot last forever. It is inevitable that the distribution of power and influence will become more balanced in
the future, and that the United States will necessarily see its relative power decline. While the United States
naturally should avoid hastening the end of this current period of American predominance, it should not look upon the next
period of global politics and international history with dread or foreboding. It certainly should not seek to maintain its predominance
at any cost, devoting unlimited ambition, resources, and prestige to the cause. In fact, contrary to what many have argued about the
importance of maintaining its predominance, America's position in the world - both at home and internationally - could very well be strengthened once its era of preeminence is over. It is, therefore, necessary for the United States to start thinking about how best to position itself in the "post-unipolar" world.
Heg collapse inevitable—structural economic weakness
Layne ’12 Christopher Layne, Robert M. Gates Chair in Intelligence and National Security at the George
Bush School of Government and Public Service at Texas A&M University, noted neorealist, “This Time It’s
Real: The End of Unipolarity and the Pax Americana,” International Studies Quarterly (2012) 56, 203-213
Contrary to the way their argument was portrayed by many of their critics, the 1980s declinists did not claim either that the United States
already had declined steeply, or that it soon would undergo a rapid, catastrophic decline. Rather, they pointed to domestic and
economic drivers that were in play and which, over time, would cause American economic power to
decline relatively and produce a shift in global distribution of power. The declinists contended that the United
States was afflicted by a slow—’’termite’’—decline caused by fundamental structural weaknesses in the
American economy.7 Kennedy himself was explicitly looking ahead to the effects this termite decline would have on United States’
world role in the early twenty-first century. As he wrote, ‘‘The task facing American statesmen over the next decades ... is to recognize that
broad trends are under way, and that there is a need to ‘manage’ affairs so that the relative erosion of the United States’ position takes place
slowly and smoothly, and is not accelerated by policies which bring merely short-term advantage but longer-term disadvantage’’ (Kennedy
1987:534; my emphasis). When one goes back and re-reads what the 1980s declinists pinpointed as the drivers of American decline, their
analyses look farsighted because the
same drivers of economic decline are at the center of debate today: too
much consumption and not enough savings; persistent trade and current account deficits; chronic
federal budget deficits and a mounting national debt; and de-industrialization. Over time, 1980s declinists
said, the United States’ goals of geopolitical dominance and economic prosperity would collide. Today, their
warnings seem eerily prescient. Robert Gilpin’s 1987 description of America’s economic and grand strategic plight could just as
easily describe the United States after the Great Recession: With
a decreased rate of economic growth and a low rate of
national savings, the United States was living and defending commitments far beyond its means. In
order to bring its commitments and power back into balance once again, the United States would one
day have to cut back further on its overseas commitments, reduce the American standard of living, or
decrease domestic productive investment even more than it already had. In the meantime, American hegemony
was threatened by a potentially devastating fiscal crisis. (Gilpin 1987:347–348) In the Great Recession’s wake—doubly so
since it is far from clear that either the United States or global economies are out of the woods—the United States now is facing the dilemmas
that Gilpin and the other declinists warned about.
1NC—Prolif Adv
Squo solves nuke leadership
Stulberg and Fuhrmann 13 (Adam Stulberg, associate professor in the Sam Nunn School of
International affairs at Georgia Tech and Matthew Fuhrmann, assistant professor of political science at
Texas A&M. “Introduction: Understanding the Nuclear Renaissance.” In The Nuclear Renaissance and
International Security, ed. By Adam Stulberg and Matthew Fuhrmann,. Stanford UP, 2013.) will
A second scenario reflects a resurgence of nuclear energy, marked by deepening reliance on the sector by states that currently
possess nuclear fuel cycle capabilities. This too finds support from contemporary trends. According to IAEA statistics, over two-thirds of the sixty plants under construction as of March 2012 (WNA 2012) are being built in just four countries that currently embrace nuclear
power – China, India, South Korea, and Russia (IAEA 2010b). Notwithstanding temporary moratoria on construction and delays owing to safety
checks precipitated by the Fukushima accident, China is projected to be home to more than one-third of the world’s new reactors, doubling
power generation from the sector by 2020. Russia plans to build two to three new reactors per year and for the sector to meet 25 percent of domestic demand for electricity by 2050, and India intends to triple nuclear power production by 2020. The
United States is
poised to regain its global stature in the industry, with twenty-one new construction and operating
license applications on file as of March 2012 (Blake 2012). Similarly, the growth in power production is
projected to be met primarily by the expansion of natural uranium mining and nuclear fuel production among existing
suppliers. These states are expected to cover the global demand that exceeds the capacity of secondary supply sources (e.g., stockpiles, fuel
blended down from destroyed weapons), leveraging respective national fuel cycle capabilities to compete intensely for greater market shares.
Civilian nuclear energy leads to proliferation
Fuhrmann ‘9 -- Matthew Fuhrmann [Assistant Professor of Political Science at the University of South
Carolina] “Spreading Temptation” International Security Summer 2009, Vol. 34, No. 1, Pages 7-41,
Posted Online July 7, 2009. http://www.mitpressjournals.org/doi/pdf/10.1162/isec.2009.34.1.7
Civilian Nuclear Cooperation and the Bomb Decades ago scholars offered a “technological momentum” hypothesis,
suggesting that countries are more likely to pursue nuclear weapons once they obtain civilian nuclear
technology and expertise. 21 The logic driving this hypothesis is that the accumulation of nuclear technology and knowledge leads to
incremental advances in the field of nuclear engineering that ultimately makes progress toward developing a nuclear weapons capability before
a formal decision to build the bomb is made. 22 John Holdren illustrates this argument well when he states that the
proliferation of
nuclear power represents the spread of an “attractive nuisance.” 23 This logic highlights the relationship between the
peaceful and military uses of the atom, but it underplays the political dimensions of proliferation. 24 Peaceful nuclear cooperation
and nuclear weapons are related in two key respects. First, all technology and materials linked to a
nuclear weapons program have legitimate civilian applications. For example, uranium enrichment and plutonium
reprocessing facilities have dual uses because they can produce fuel for power reactors or fissile material for nuclear weapons.
Second, civilian nuclear cooperation increases knowledge in nuclear-related matters. This knowledge can
then be applied to weapons-related endeavors. Civilian nuclear programs necessitate familiarity with
the handling of radioactive materials, processes for fuel fabrication and materials having chemical or
nuclear properties, and the operation and function of reactors and electronic control systems. They also
provide experience in other crucial fields, such as metallurgy and neutronics. 25 These experiences offer
“a technology base upon which a nuclear weapon program could draw.” 26 These linkages suggest that peaceful
nuclear assistance reduces the expected costs of a weapons program, making it more likely that a decision to begin such a program will be
made. Considerable
political and economic costs—such as international sanctions, diplomatic isolation,
and strained relationships with allies—can accompany nuclear weapons programs. 27 Leaders may be
reluctant to take on these burdens unless they believe that a weapons campaign could succeed
relatively quickly. 28 As Stephen Meyer argues, “When the financial and resource demands of [beginning a weapons
program] become less burdensome, states might opt to proceed . . . under a balance of incentives and disincentives that
traditionally might have been perceived as insufficient for a proliferation decision.” 29 Sometimes, nuclear assistance can cause
leaders to initiate nuclear weapons programs in the absence of a compelling security threat. This usually
happens when scientists and other members of atomic energy commissions convince the political
leadership that producing a nuclear weapon is technologically possible and can be done with relatively
limited costs. 30 Scientists do not always push leaders down the nuclear path, but in many cases they do. 31 Leaders are
persuaded by this lobbying because they are keenly aware that the quicker the bomb can be developed,
the less likely other national priorities will suffer. Although nuclear assistance occasionally produces bomb programs in the
absence of a security threat, the relationship between such assistance and proliferation is usually more nuanced. Countries that have received
considerable assistance are especially likely to initiate bomb programs when threats arise because they have greater demand for the strategic
advantages that nuclear weapons offer. 32 In other words, peaceful nuclear assistance typically conditions the effect that a security
environment has on a state’s political decision to begin a weapons program. A state that suffers a defeat in war or feels threatened for another
reason is unlikely to initiate a program if it lacks a developed civilian nuclear program. Without
the technical base in place, it is
too costly to venture down the weapons path. This explains, in part, why Saudi Arabia has yet to begin a
nuclear weapons program even though it faces considerable security threats. 33 Likewise, countries are unlikely to
nuclearize—even if they have accumulated significant amounts of assistance—if they do not face security threats. On the other hand, initiation
of a weapons program is more likely in states that operate in dangerous security environments and possess peaceful nuclear facilities and a
cadre of trained scientists and technicians. There
are also strong theoretical reasons to suggest the existence of a
relationship between civilian nuclear cooperation and the acquisition of nuclear weapons. Given the links
described above, civilian nuclear energy cooperation can aid nuclear weapons production by providing the
technology and items necessary to produce fissile material. 34 This is noteworthy because fissile material
production is the most difficult step in building the bomb. 35 Cooperation also establishes a technical knowledge base that
permits advances in nuclear explosives and related fields, ultimately facilitating bomb production. Occasionally, technical capacity
alone causes states to produce the bomb. But just as all states receiving nuclear aid do not begin weapons programs, every
country that acquires assistance does not assemble bombs. Security threats, which provide the political motivation to
build the bomb, coupled with atomic aid are a recipe for the acquisition of nuclear weapons. Four hypotheses
follow from this logic: Hypothesis 1: Countries receiving peaceful nuclear assistance are more likely to begin nuclear weapons programs.
Hypothesis 2: Countries receiving peaceful nuclear assistance are more likely to begin nuclear weapons programs when a security threat arises.
Hypothesis 3: Countries receiving peaceful nuclear assistance are more likely to acquire nuclear weapons. Hypothesis 4: Countries facing
security threats and receiving peaceful nuclear assistance are more likely to acquire weapons. Below I apply these hypotheses to several cases
to show how peaceful nuclear cooperation can lead to proliferation.
Prolif’s unlikely, slow, and safe
Mueller ’10 John Mueller, professor of political science at Ohio State University, “Calming Our Nuclear
Jitters,” Issues in Science & Technology, Vol. 26, Issue 2, Winter 2010,
http://www.issues.org/26.2/mueller.html
The fearsome destructive power of nuclear weapons provokes understandable dread, but in crafting public policy we must move beyond this
initial reaction to soberly assess the risks and consider appropriate actions. Out of awe over and anxiety about nuclear weapons, the world’s
super-powers accumulated enormous arsenals of them for nearly 50 years. But then, in the wake of the Cold War, fears that the bombs would
be used vanished almost entirely. At the same time, concerns that terrorists and rogue nations could acquire nuclear weapons have sparked a
new surge of fear and speculation. In
the past, excessive fear about nuclear weapons led to many policies that
turned out to be wasteful and unnecessary. We should take the time to assess these new risks to avoid
an overreaction that will take resources and attention away from other problems. Indeed, a more
thoughtful analysis will reveal that the new perceived danger is far less likely than it might at first
appear. Albert Einstein memorably proclaimed that nuclear weapons “have changed everything except our way of thinking.” But the
weapons actually seem to have changed little except our way of thinking, as well as our ways of
declaiming, gesticulating, deploying military forces, and spending lots of money. To begin with, the bomb’s
impact on substantive historical developments has turned out to be minimal. Nuclear weapons are, of course,
routinely given credit for preventing or deterring a major war during the Cold War era. However, it is increasingly clear that the Soviet
Union never had the slightest interest in engaging in any kind of conflict that would remotely resemble
World War II, whether nuclear or not. Its agenda emphasized revolution, class rebellion, and civil war,
conflict areas in which nuclear weapons are irrelevant. Thus, there was no threat of direct military
aggression to deter. Moreover, the possessors of nuclear weapons have never been able to find much
military reason to use them, even in principle, in actual armed conflicts. Although they may have failed to alter
substantive history, nuclear weapons have inspired legions of strategists to spend whole careers agonizing over what one analyst has called
“nuclear metaphysics,” arguing, for example, over how many MIRVs (multiple independently targetable reentry vehicles) could dance on the
head of an ICBM (intercontinental ballistic missile). The result was a colossal expenditure of funds. Most important for current policy is the fact
that contrary
to decades of hand-wringing about the inherent appeal of nuclear weapons, most countries
have actually found them to be a substantial and even ridiculous misdirection of funds, effort, and
scientific talent. This is a major if much-underappreciated reason why nuclear proliferation has been so
much slower than predicted over the decades. In addition, the proliferation that has taken place has been
substantially inconsequential. When the quintessential rogue state, Communist China, obtained nuclear weapons in 1964, Central
Intelligence Agency Director John McCone sternly proclaimed that nuclear war was “almost inevitable.” But far from engaging in the
nuclear blackmail expected at the time by almost everyone, China built its weapons quietly and has
never made a real nuclear threat. Despite this experience, proliferation anxiety continues to flourish. For more than a
decade, U.S. policymakers obsessed about the possibility that Saddam Hussein’s pathetic and
technologically dysfunctional regime in Iraq could in time obtain nuclear weapons, even though it took
the far more advanced Pakistan 28 years. To prevent this imagined and highly unlikely calamity,
damaging and destructive economic sanctions were imposed and then a war was waged, and each
venture has probably resulted in more deaths than were suffered at Hiroshima and Nagasaki combined.
(At Hiroshima and Nagasaki, about 67,000 people died immediately and 36,000 more died over the next four months. Most estimates of the
Iraq war have put total deaths there at about the Hiroshima-Nagasaki levels, or higher.)
No nuclear terror—deterrence and prevention kill acquisition
Mearsheimer ’10 John J. Mearsheimer, R. Wendell Harrison Distinguished Professor of Political
Science at the University of Chicago, “Imperial by Design,” The National Interest, 12/16/2010,
http://nationalinterest.org/print/article/imperial-by-design-4576
This assessment of America’s terrorism problem was flawed on every count. It was threat inflation of the highest order. It made no sense to
declare war against groups that were not trying to harm the United States. They were not our enemies; and going after all terrorist
organizations would greatly complicate the daunting task of eliminating those groups that did have us in their crosshairs. In addition, there was
no alliance between the so-called rogue states and al-Qaeda. In fact, Iran and Syria cooperated with Washington after 9/11 to help quash
Osama bin Laden and his cohorts. Although the Bush administration and the neoconservatives repeatedly asserted that there was a genuine
connection between Saddam Hussein and al-Qaeda, they never produced evidence to back up their claim for the simple reason that it did not
exist. The fact is that states have strong incentives to distrust terrorist groups, in part because they might turn on them
someday, but also because countries cannot control what terrorist organizations do, and they may do something that gets their patrons into
serious trouble. This
is why there is hardly any chance that a rogue state will give a nuclear weapon to
terrorists. That regime’s leaders could never be sure that they would not be blamed and punished for a
terrorist group’s actions. Nor could they be certain that the United States or Israel would not incinerate
them if either country merely suspected that they had provided terrorists with the ability to carry out a
WMD attack. A nuclear handoff, therefore, is not a serious threat. When you get down to it, there is only a remote possibility
that terrorists will get hold of an atomic bomb. The most likely way it would happen is if there were
political chaos in a nuclear-armed state, and terrorists or their friends were able to take advantage of the ensuing confusion to
snatch a loose nuclear weapon. But even then, there are additional obstacles to overcome: some countries keep their
weapons disassembled, detonating one is not easy and it would be difficult to transport the device
without being detected. Moreover, other countries would have powerful incentives to work with
Washington to find the weapon before it could be used. The obvious implication is that we should work with other states
to improve nuclear security, so as to make this slim possibility even more unlikely. Finally, the ability of terrorists to strike the
American homeland has been blown out of all proportion. In the nine years since 9/11, government
officials and terrorist experts have issued countless warnings that another major attack on American soil
is probable—even imminent. But this is simply not the case.3 The only attempts we have seen are a few
failed solo attacks by individuals with links to al-Qaeda like the “shoe bomber,” who attempted to blow up an
American Airlines flight from Paris to Miami in December 2001, and the “underwear bomber,” who tried to blow up a Northwest
Airlines flight from Amsterdam to Detroit in December 2009. So, we do have a terrorism problem, but it is hardly an
existential threat. In fact, it is a minor threat. Perhaps the scope of the challenge is best captured by Ohio State political scientist John
Mueller’s telling comment that “the number of Americans killed by international terrorism since the late 1960s . .
. is about the same as the number killed over the same period by lightning, or by accident-causing deer,
or by severe allergic reactions to peanuts.”
Leadership isn’t exerted
Cleary 8-13 Richard Cleary, Research Assistant at the American Enterprise Institute, “Persuading
Countries to Forgo Nuclear Fuel-Making,” Nonproliferation Policy Education Center, 8/13/2012,
http://npolicy.org/article.php?aid=1192&tid=30
The cases above offer a common lesson: The U.S., though constrained or empowered by circumstance, can exert
considerable sway in nonproliferation matters, but often elects not to apply the most powerful tools at
its disposal for fear of jeopardizing other objectives. The persistent dilemma of how much to emphasize
nonproliferation goals, and at what cost, has contributed to cases of nonproliferation failure. The
inconsistent or incomplete application of U.S. power in nonproliferation cases is most harmful when it gives the
impression to a nation that either sharing sensitive technology or developing it is, or will become, acceptable
to Washington. U.S. reticence historically, with some exceptions, to prioritize nonproliferation—and in so doing reduce the
chance of success in these cases—does not leave room for great optimism about future U.S. efforts at persuading
countries to forgo nuclear fuel-making.
States develop nuclear power for prestige, not cost—tech transfer doesn’t solve
Lewis ‘12 Jeffrey Lewis, director of the East Asia Nonproliferation Program at the James Martin Center
for Nonproliferation, “It's Not as Easy as 1-2-3,” Foreign Policy, 8/1/2012,
http://www.foreignpolicy.com/articles/2012/08/01/it_s_not_as_easy_as_1_2_3
Creating market incentives to discourage the spread of enrichment and reprocessing seems like a reasonable thing to do - except that most
states make nuclear decisions on something other than a cost basis. Nuclear power enthusiasts have been no
strangers to wishful thinking, starting with claims that nuclear energy would be "too cheap to meter." Government decisions about
nuclear power tend to prioritize concerns about sovereignty and keeping technological pace with
neighbors. It is not hard to see national nuclear programs as something akin to national airlines - money-losing prestige projects that barely take market forces into account. Often, aspiring nuclear states look
to countries like the United States and Japan as models. If such countries invest heavily in fuel-cycle
services, developing states might try to copy them rather than simply become their customers.
1NC—Solvency
Can’t build new reactors—lack of industrial production facilities and trained workers
Mez September 2012—Lutz Mez [Department of Political and Social Sciences, Freie Universitat Berlin]
“Nuclear energy–Any solution for sustainability and climate protection” Energy Policy 48 (2012) 56–63
The nuclear industry has been battling a host of problems for three decades. A global construction boom
can be ruled out at present if only due to the lack of production capacities and shortages of technicians; nor
will this situation change much over the short and medium term. Only one single company in the world, Japan Steel Works
Ltd., is able to forge the large pressure vessels in reactors the size of EPR. Not only the pressure vessel, but also the
steam generators in the new Finnish plant come from Japan. In the USA, on the other hand, there is not a single manufacturing
plant capable of producing such large components. The sole facility in Europe, the AREVA forge in the French city of Le Creusot, is
only able to produce components of a limited size and in limited numbers. Beyond this, the nuclear industry is busy with
retrofitting projects, such as replacement of steam generators for power plants whose operating lifetimes are
to be extended. Because such large production plants cannot be built overnight, this situation will not
improve quickly. New nuclear power plants moreover have to be operated by new personnel—but the
nuclear industry and operators are scarcely even able to replace staff who retire. An entire generation
of engineers, nuclear physicists and experts on protection against radiation are missing as the industry is
challenged twofold: at the same time as new plants are being constructed, plants which have been closed must be torn down and solutions
finally found for nuclear waste.
Takes too long to solve
Mariotte ‘7 Michael Mariotte, executive director of Nuclear Info and Resource Service, “Nuclear
Power in Response to Climate Change,” Council On Foreign Relations, 11/9/2007,
http://www.cfr.org/energy/nuclear-power-response-climate-change/p14718
Environmental advocates considering “reconsidering” nuclear power in light of climate change are too
late. The accelerating pace of the climate crisis and the dawning realization that we no longer have the
luxury of a few decades to address the crisis already have made nuclear power an irrelevant technology
in terms of climate. Even if the nuclear industry had solved the safety, radioactive waste, proliferation,
cost, and other issues that ended its first generation—and it hasn’t solved any of those problems—it
wouldn’t matter. What nuclear power can offer for climate is simply too little, too late. The major
studies that have looked at the issue—MIT, the National Commission on Energy Policy, etc.—generally agree that for
nuclear to make a meaningful contribution to carbon emissions reduction would require reactor
construction on a massive scale: 1,200 to 2,000 new reactors worldwide, 200 to 400 in the United States
alone. And that would have to be done over the next 40 to 50 years. Pity poor Japan Steel Works, the world’s major
facility for forging reactor pressure vessels (there is one other, small-capacity facility in Russia): working overtime it can produce twelve
pressure vessels per year. Do the math: That’s less than half of what is needed. Even
if someone put in the billions of dollars
and years necessary to build a new forging facility, it’s still not enough, not fast enough. There are 104
operable reactors in the United States today. In November 2017, no matter how much taxpayer money is thrown at the nuclear industry, there
will be 104—or fewer. Even with streamlined licensing procedures and certified reactor designs, it will take ten, twelve years or more to license,
build and bring a single new reactor online. And since most of the reactor designs being considered are first or second of a kind, count on them
taking even longer. Our energy future ultimately will be carbon-free and nuclear-free, based primarily on solar and wind power, energy
efficiency, and distributed generation. What is perhaps less obvious is that the future is now. In the years we’d be waiting for that first new
reactor to come online, we can install ten times or more solar and wind capacity, and save twenty times or more that much power through
increased efficiency while building the mass production that reduces costs, especially for photovoltaics. By the time that first reactor could come online, solar could already be cost-competitive, while wind and efficiency already are cheaper than nuclear. We no longer have ten years to begin reducing carbon emissions. Waiting around for a few new reactors won't
help our climate, but it would waste the funds needed to implement our real energy future.
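Mariotte's "do the math" invitation can be taken literally (an editorial annotation, not part of the card): at twelve pressure vessels per year over the 40- to 50-year window the studies assume, forging capacity covers at most half of the low-end requirement and well under a third of the high end.
\[ 12 \ \tfrac{\text{vessels}}{\text{yr}} \times 40\text{--}50 \ \text{yr} = 480\text{--}600 \ \text{vessels} \ \ll \ 1{,}200\text{--}2{,}000 \ \text{reactors needed} \]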
Loan guarantees fail—numerous constraining factors
Maize ’12 Kennedy Maize, “Fukushima Disaster Continues to Cloud Nuclear Outlook,” POWER
Magazine, 7/1/2012, http://www.powermag.com/nuclear/4761.html
J. Frank Russell, senior vice president at Concentric Energy Advisors, described the ambiguous status of nuclear power
today from a U.S. perspective. By many counts, he said, “this should be a year of celebration for ‘new nuclear’ in the U.S.”
because Southern Co. is building Vogtle Units 3 and 4, and Scana Corp. has a green light from the Nuclear
Regulatory Commission (NRC) for the two new units at its V.C. Summer station. In contrast to what could be justified optimism, “the
reality is different,” Russell said. “The pipeline is empty, with other proposed units stalled or delayed by the
sponsors.” The promise of “up to a dozen” new units that was common in the industry a few years ago “has mostly
gone away,” and the industry has awakened to a less-friendly environment. Many reasons account for
faded nuclear dreams in the U.S., Russell said. The 2008 recession lowered demand for power and reduced
financial markets’ appetite for risk. The collapse of natural gas prices as a result of the shale gas revolution
undercut the economics. So did the federal government’s failure to put a price on carbon emissions.
Fukushima also played a role. But the key factor dogging the U.S. nuclear sector has been the high and growing cost
of nuclear power plants. “While many of these issues may be considered temporary,” said Russell, “the sheer total cost of large-scale
new nuclear units is just too large for many companies to bear.” Few companies have the capitalization and appetite
for risk to take on a project that could cost $10 billion, the current estimate for a new nuclear unit in the U.S. For a merchant generator,
finding the equity capital for such an undertaking is problematic. “Even with a loan guarantee,” he said, “the
equity may be impossible to raise.” What will it take for a real U.S. nuclear turnaround? Russell offered a
list, with each item necessary to achieving rebirth but none sufficient in itself. He said that demand growth will have
to return and that the current generating capacity surplus must decline. Natural gas prices will have to double
to at least $4/million cubic feet. A carbon price also must be put in place. The Vogtle and Summer units
must come in on schedule and must meet budget targets (an outcome already put in doubt by cost
increases recently announced at Vogtle). And policy makers and the public must be positive and supportive.
Nuclear is shockingly expensive, has no economies of scale, and cannot become cost
competitive with other technologies
Mez September 2012—Lutz Mez [Department of Political and Social Sciences, Freie Universitat Berlin]
“Nuclear energy–Any solution for sustainability and climate protection” Energy Policy 48 (2012) 56–63
In contrast to other energy technologies, there are no positive economies of scale in the construction of
nuclear power plants. On the contrary, specific investment costs have become ever more expensive.
Moreover, plants have had considerable cost overruns— and not only in the USA. In the early phase from 1966 to
1967, estimated overnight costs were $ 560/kW, but actual overnight costs turned out to be $ 1,170/kilowatt, i.e. 209 percent more. In the
years from 1974 to 1975, estimated overnight costs of $ 1,156/kilowatt were assumed, but actual overnight costs turned out to be $ 4,410/
kilowatt—i.e. 381 percent more (Gielecki & Hewlett 1994). On top of this, current data on construction costs are only available in Western
Europe and North America. The costs of construction projects in China, India and Russia are either not available or not comparable. Because
construction costs for power plants have risen considerably over the last few years, especially due to
the major expansion in conventional coal-fired power plants in China and India, specific construction
costs for nuclear power plant projects have risen many times over. The nuclear power industry estimated construction costs at $ 1,000/kilowatt for new generation III+ nuclear power plants by 2002. However, this cost level has turned out to be
completely unrealistic. The contractual price for the
European Pressurized Reactor ordered from AREVA NP in 2004 for the Finnish site in Olkiluoto was already € 2,000/kW - at the time this was $ 3,000/kW. ‘‘The project is four years behind schedule and at least 90 percent over budget, reaching a total cost estimate of € 5.7 billion ($ 8.3 billion) or close to € 3,500 ($ 5,000) per kilowatt’’ (Schneider, Froggatt, Thomas 2011: 8). As a result of this trend, estimates in the USA for 2007/2008 have soared to $
5,000/kW: Asked about challenges facing construction of new nuclear and coal power plants, the US
Federal Energy Regulatory
Commission (FERC) Chairman, Jon Wellinghoff, allowed that ‘‘we may not need any, ever. That’s a
‘theoretical question’ because I don’t see anybody building these things until costs get to a reasonable
level’’ (Platts 22 April 2009). He characterized the projected costs of new nuclear plants as prohibitive, citing
estimates of roughly $ 7,000/kW. These estimates were confirmed in 2009 as well by the detailed offers tendered for the construction of a nuclear power plant in Ontario, the Ontario Nuclear Procurement Project: between $ 6,700/kW and $ 10,000/kW, which of course
killed the project—especially as it did not even take into account the fact that cost estimates in the past were always below actual construction
costs. The estimates of power plant capital and operating cost, updated by the U.S. Energy Information Administration in November 2010, are
shown in table 3: In
comparison, nuclear power plants, even with the low estimate of overnight costs of 5,335
$/kW, are more expensive than traditional fossil fired power stations. Onshore wind, solar PV and hydro
have much lower fixed operating & maintenance costs and zero variable costs. The leading rating agencies
Standard & Poor’s and Moody’s also voiced misgivings over the last few years regarding the economic
viability of new nuclear power plants: the leading credit-rating company, Standard & Poor’s, warned as far back as 2007: ‘‘In the
past, engineering, procurement, and construction contracts were easy to secure. However, with increasing raw material costs, a
depleted nuclear-specialist workforce, and strong demand for capital projects worldwide, construction
costs are increasing rapidly’’ (Schneider 2008). Moody’s also revealed its skepticism in an analysis of possible new construction
projects in the USA: ‘‘Moody’s does not believe the sector will bring more than one or two new nuclear plants online by 2015.’’ (ibid.) It based
its assessment on the year 2015 because this is the date which most companies trumpeting their nuclear ambitions at present use. Moody’s
affirmed that many of the current expectations for nuclear power were ‘‘overly ambitious.’’ It had more bad
news for the industry when its June Global Credit Research paper concluded that ‘‘the cost and complexity of building a new
nuclear power plant could weaken the credit metrics of an electric utility and potentially pressure its
credit ratings several years into the project.’’(ibid.) Even the Nuclear Energy Institute (2008), the nuclear industry’s
trade organization, stated in August 2008 that ‘‘there is considerable uncertainty about the capital cost of
new nuclear generating capacity.’’ (ibid.). In conclusion, these would not appear to be very rosy prospects for
a technology which was developed in the 1950s and 1960s and which could have scarcely survived down
to the present without massive government subsidies in Western and democratic industrialized
countries.
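The overrun arithmetic in the Mez card can be reproduced with two divisions; a minimal sketch in Python is below. The dollar figures are the card's own, and the check shows the quoted percentages are actual cost as a percentage of the original estimate (a 109 and 281 percent overrun, respectively):

```python
# Overnight-cost overruns cited in the Mez card (Gielecki & Hewlett 1994).
# Illustrative arithmetic only; the $/kW values are the card's own.

cases = {
    "1966-67 cohort": (560, 1_170),    # (estimated, actual) overnight cost, $/kW
    "1974-75 cohort": (1_156, 4_410),
}

for label, (estimated, actual) in cases.items():
    ratio = actual / estimated
    print(f"{label}: actual is {ratio:.0%} of the estimate "
          f"({ratio - 1:.0%} over)")

# 1966-67 cohort: actual is 209% of the estimate (109% over)
# 1974-75 cohort: actual is 381% of the estimate (281% over)
```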
Competition, bureaucracy, and delays hamper exports
NEI ’12 “U.S. Nuclear Export Rules Hurt Global Competitiveness,” Nuclear Energy Institute, Winter
2012, http://www.nei.org/resourcesandstats/publicationsandmedia/insight/insightwinter2012/usnuclear-export-rules-hurt-global-competitiveness/
Today, U.S. dominance of the global nuclear power market has eroded as suppliers from other countries
compete aggressively against American exporters. U.S. suppliers confront competitors that benefit from
various forms of state promotion and also must contend with a U.S. government that has not adapted to new
commercial realities. The potential is tremendous—$500 billion to $740 billion in international orders over the next decade, representing tens of
thousands of potential American jobs, according to the U.S. Department of Commerce. With America suffering a large trade deficit, nuclear goods and services
represent a market worth aggressive action. However, antiquated
U.S. government approaches to nuclear exports are
challenging U.S. competitiveness in the nuclear energy market. New federal support is needed if the United States wants to reclaim dominance in
commercial nuclear goods and services—and create the jobs that go with them. “The U.S. used to be a monopoly supplier of nuclear materials and technology back
in the ’50s and ’60s,” said Fred McGoldrick, former director of the Office of Nonproliferation and Export Policy at the State Department. “That position has eroded
to the point where we’re
a minor player compared to other countries.” America continues to lead the world in technology innovation
and know-how. So what are the issues? And where is the trade? Effective coordination among the many government agencies
involved in nuclear exports would provide a boost to U.S. suppliers. “Multiple U.S. agencies are engaged with
countries abroad that are developing nuclear power, from early assistance to export controls to trade
finance and more,” said Ted Jones, director for supplier international relations at NEI. The challenge is to create a framework that allows commercial
nuclear trade to grow while ensuring against the proliferation of nuclear materials. “To compete in such a situation, an ongoing
dialogue between U.S. suppliers and government needs to be conducted and U.S. trade promotion must
be coordinated at the highest levels,” Jones said. Licensing U.S. Exports Jurisdiction for commercial nuclear export
controls is divided among the Departments of Energy and Commerce and the Nuclear Regulatory Commission and
has not been comprehensively updated to coordinate among the agencies or to reflect economic and
technological changes over the decades. The State Department also is involved in international nuclear commerce. It negotiates
and implements so-called “123 agreements” that allow for nuclear goods and services to be traded with a foreign country. The federal agencies often
have different, conflicting priorities, leading to a lack of clarity for exporters and longer processing times
for export licenses. “The U.S. nuclear export regime is the most complex and restrictive in the world
and the least efficient,” said Jones. “Furthermore, it is poorly focused on items and technologies that pose little or
no proliferation concern. By trying to protect too much, we risk diminishing the focus on sensitive technologies and
handicapping U.S. exports.” A case in point is the Energy Department’s Part 810 regulations. While 123 agreements open trade
between the United States and other countries, Part 810 regulates what the United States can trade with another country. For
certain countries, it can take more than a year to obtain “specific authorizations” to export nuclear items. Because other
supplier countries authorize exports to the same countries with fewer requirements and delays, the Part 810 rules
translate into a significant competitive disadvantage for U.S. suppliers.
Accidents are inevitable and unavoidable.
Jackson 11 — David Jackson [PhD. Professor (Adjunct) of Engineering Physics at McMaster University]
Fukushima and Reactor Safety – Unknown Unknowns ORIGINALLY POSTED June 5, 2011
http://reactorscanada.com/2012/05/31/fukushima-and-reactor-safety-unknown-unknowns/
No technology including nuclear power can be made absolutely safe. One of the best and well-known
historical illustrations of this is the design of the Titanic. The plan incorporated a system of sixteen
watertight compartments that allowed the ship to float if two of the first four were flooded. Thus, the
ship should survive any conceivable collision scenario. The media of the time and perhaps the designers
talked of an “unsinkable” ship but who could imagine an accident in which the vessel would scrape
along an iceberg slitting open six of the forward compartments? The root of the problem is the
“unknown unknowns”. This bit of Pentagon-speak nicely sums up the idea that there are accident scenarios that one can’t imagine in
advance. I also like the terminology “Black Swans” which I take to mean the same thing and will use that way. Who could imagine that
the operators at Three Mile Island would manually turn off the reactor’s Emergency Core Cooling
System? Who could imagine that the Chernobyl operators would drive a reactor with no containment
into a parameter region of unstable operation? Who could imagine that the design of the Fukushima
power reactors would not withstand the largest possible earthquake and that the emergency power
system was vulnerable to inundation? Unfortunately, the answer is that no one imagined these
possibilities and so the last two of these accidents became catastrophes. It’s not just the big ones. Many
other unanticipated nuclear accidents have occurred over the sixty years of nuclear power that avoided
disasters with public consequences through the prompt operation of safety systems, skilled operator intervention and plain good luck.
Preliminary accounts of the Fukushima accidents show the importance of two well-known issues of nuclear safety. The first was inadequate
redundancy in the fuel cooling systems because the reactor station design was deficient in protecting against earthquakes and tsunamis. Who
could imagine that in seismically active Japan, a country with 120 active volcanoes? The second was the failure to vent in time the hydrogen
generated by fuel exposure although the operators had many hours to do so. The hydrogen mixed with the oxygen from ambient air caused
explosions that wrecked buildings and much of the reactor equipment in them further compounding the difficulty of restoring the necessary
fuel cooling. Ironically, the same “hydrogen bubble” was a big concern at Three Mile Island some thirty years before and hydrogen explosions
no doubt occurred at Chernobyl. Who could imagine that the operators would be so indecisive as to bungle hydrogen venting? It’s easy to
second guess and criticize others in hindsight. However, my intention here is not to assign blame for
Fukushima but to show that nuclear accidents are fundamentally unavoidable. As history now shows, every
once in a while there will be a big nuclear accident when a Black Swan comes to roost. Even the most
sanguine unsinkable ship fans can no longer deny that. When nuclear authorities in Canada and elsewhere say
“our reactors are safe” what they really mean is that they have gone through all of their event trees,
probabilistic calculations and voluminous safety documents of numerous accident scenarios and found
no fault with them. In effect they are saying: “we have solutions for every accident scenario we have been able to imagine.” They
are sincere in their exhaustive analysis and certainly want no accidents to occur. However, they can make no plausible claim for
a process for identifying unknown unknowns. I wish they were more honest by adding these qualifiers to their declarations of safety. To me
this leads to two important conclusions. First, the nuclear
industry should recognize publically that serious nuclear
accidents will occur from time to time in spite of their best efforts to prevent them. Therefore, nuclear
accidents are a price that society has to pay for increasing population growth, high energy consumption and perhaps low-carbon energy as
argued in my first posting on Fukushima. Secondly, the public must acknowledge that its expectation of perfect nuclear safety is unrealistic and
like any other industrial technology accidents will continue to be a feature of nuclear energy in the future. I can
detect some of that feeling of public acceptance already emerging. Lake Ontario isn’t Walden Pond and never will be.
Kills solvency
Bonass et al, 12 (Matt Bonass, Michael Rudd and Richard Lucas are partners in Bird and Bird’s energy
and utilities sector group. Mr Bonass and Mr Rudd are co-editors of Renewables: A Practical Handbook.
Mr Rudd is head of Bird and Bird’s nuclear practice. Author interviews, Renewable energy: the fallout
from Fukushima, http://www.globelawandbusiness.com/Interviews/Detail.aspx?g=910fcfa6-9a2d-4312b034-9bbcb2dad9e0)
There is no doubt that the events in Fukushima have cast a shadow over the nuclear industry. Previous
nuclear disasters, such as those at Three Mile Island in the United States and Chernobyl in the former
Soviet Union, have caused significant delays to the development of the nuclear industry. For example, the
United States embargoed the building of new nuclear plants for nearly 25 years following the Three Mile
Island crisis and many European countries ceased their nuclear development programmes following
events at Chernobyl. Our colleagues across our network of European offices are telling us that their clients are approaching them to
discuss the implications of the Fukushima events. For example, what will be the effect on the discussions currently taking place in Italy on the
referendum to recommence nuclear development activities? What will be the effect of the proposals in Germany to place its plants offline?
Will Europe adopt the proposals for a comprehensive risk and safety assessment of nuclear plants (the so-called ‘stress tests’)? Public perception is vital to the nuclear industry. Over the past few years, nuclear has
gradually become an accepted part of the low carbon economy, but it may now have become politically
unacceptable to many.
Investors still have to pay massive up-front costs—deters interested groups
Ben-Moshe et al 9 (Sony, Jason Crewell, Breton Pearce, Brett Rosenblatt, J.Ds, Kelley Gale, Finance
Chair at Latham and Watkins, and Kelley Thomason, Project Finance Practice Group, Lathan and
Watkins, “FINANCING THE NUCLEAR RENAISSANCE: THE BENEFITS AND POTENTIAL PITFALLS OF
FEDERAL & STATE GOVERNMENT SUBSIDIES AND THE FUTURE OF NUCLEAR POWER IN CALIFORNIA”
web Energy Law Journal Vol 30 No 2 2009) CJQ
Much has been written on the DOE's loan guarantee program under the EPAct 2005, particularly in light of the changes to that program for renewable projects
under the American Recovery and Reinvestment Act of 2009, and as such we will not cover its "nuts and bolts" in great detail. But generally speaking, the
federal loan guarantee program applicable to nuclear projects authorizes the DOE to make guarantees
of debt service under construction loans for up to eighty percent of the construction costs of new nuclear
projects that will (1) avoid or reduce air pollutants and emissions of greenhouse gases, and (2) employ new or significantly improved technology to do so. 61
Several requirements must be met before the DOE can enter into a loan guarantee agreement. First, either an
appropriation for the cost of the guarantee must have been made or the DOE must receive full payment for the cost of the guarantee from the developer. 62
Because no money has been appropriated to cover these costs and the DOE has stated it does not
intend to seek appropriations to pay these costs for any nuclear projects, 63 it appears that project
developers may be responsible for pre-paying the full costs of the loan guarantees, 64 unless the Bingaman
legislation discussed below is passed as proposed or similar legislation is enacted. Two components currently make up the cost of the guarantee. The first
part is an "Administrative Cost": the DOE must receive fees sufficient to cover applicable administrative
expenses for the loan guarantee including the costs of evaluating applications, negotiating and closing loan guarantees, and monitoring the
progress of projects. 65 These administrative expenses passed on to the developer include an application fee of $800,000, a facility fee of one half of one percent of
the amount guaranteed by the loan guarantee, and a maintenance fee of $200,000–$400,000 per year. 66 Second,
the DOE must receive a
"Subsidy Cost" for the loan guarantee, which is defined as the net present value of the government's
expected liability from issuing the guarantee. 67 The Subsidy Cost must be estimated by a developer in an application, but cannot be
officially determined until the time the loan guarantee agreement is signed. 68 The administrative costs associated with the program have been criticized as overly
burdensome, 69 and the Subsidy Cost remains unquantifiable but decisively enormous. In
fact, Standard & Poor's recently estimated
that the Subsidy Cost for a typical nuclear reactor could be as high as several hundred million dollars. 70
The lack of clarity around how to quantify these costs up front and, as discussed below, the position of the DOE that the Subsidy Cost is not an eligible
project cost under the loan guarantee program, make it difficult for developers to arrange investment or interim financing
to get them through the development process. 71 Additionally, before entering a loan guarantee, the DOE must determine that (1)
"there is reasonable prospect of repayment of the principal and interest on [the guaranteed debt] by the borrower," (2) the amount guaranteed by the
government under the loan guarantee, when combined with other available financing sources, is sufficient to carry out the nuclear construction project, and (3) the
DOE possesses a first lien on the assets of the project and other assets pledged as security and its security interest in the project is not subordinate to any other
financing for the project. 72 Finally, the loan guarantee obligation must bear interest at a rate determined by the Secretary to be reasonable, taking into account the
range of interest rates prevailing in the private sector for similar Federal government guaranteed obligations of comparable risk and the term of the guarantee
cannot exceed the lesser of thirty years or ninety percent of the useful life of the nuclear reactor. 73 These
requirements create uncertainties
for developers and financiers seeking to understand how the program will work to support the financing
of a new nuclear power plant. For instance, it is unclear how government approval of interest rates will
work in the context of a deal with multiple debt instruments that each may have different pricing. Setting
interest rates in these types of deals is an iterative process of modeling interest rates and testing markets. Further, it is unclear how interest rates will be compared.
To our knowledge, there are no "similar Federal government guaranteed obligations of comparable risk" to debt issued for the construction of a nuclear power
project. 74
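To make the scale of these fees concrete, the card's fee schedule can be applied to a hypothetical project. In the Python sketch below, the $8 billion construction cost and 30-year term are assumptions for illustration, not figures from the card; the Subsidy Cost is omitted because, as the card notes, it cannot be quantified until the guarantee agreement is signed:

```python
# Back-of-envelope administrative costs under the EPAct 2005 loan guarantee
# program, using the fee schedule cited in the card. Project size and term
# are hypothetical; the (potentially enormous) Subsidy Cost is excluded.

construction_cost = 8_000_000_000  # assumed $8 billion new-build project
guaranteed_share = 0.80            # statutory cap: 80% of construction costs
term_years = 30                    # cap: lesser of 30 years or 90% of useful life

amount_guaranteed = construction_cost * guaranteed_share
application_fee = 800_000                 # per the card
facility_fee = 0.005 * amount_guaranteed  # one half of one percent
maintenance_range = (200_000, 400_000)    # per year

low = application_fee + facility_fee + maintenance_range[0] * term_years
high = application_fee + facility_fee + maintenance_range[1] * term_years
print(f"Amount guaranteed:    ${amount_guaranteed:,.0f}")
print(f"Administrative costs: ${low:,.0f} to ${high:,.0f} over {term_years} years")
```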
Doesn’t lower construction costs
Brumfiel ‘7 Geoff Brumfiel, senior news reporter for Nature Business, “Powerful incentives,” Nature,
vol. 448, 8/16/2007
Safety net The guarantees would provide a major boost for plant construction, says Marilyn Kray, vice-president for project development at
Exelon, a utility based in Chicago, Illinois, and the largest nuclear generator in the nation. They would reassure lenders, and allow utilities to
borrow at lower rates. Given the enormous capital costs, she says, “a single interest percentage point is quite significant.” “It would be a very
useful incentive to have,” agrees Dimitri Nikas, an energy analyst with Standard & Poor’s, a financial services company in New York. But it
might still fail to drive down the costs of construction to a competitive level. The expert labour and
technology needed to build such plants is expensive, as is the meticulous regulatory process. The
bottom line, Nikas says, is that the incentives may get one or two plants built — but they won’t herald a
building boom in nuclear power stations.
2NC
Topicality
Barrier-busting entry support isn’t topical. Otherwise the aff would be able to introduce tons
of new and unproven energy tech that the neg could never predict
O’Brien ‘8 Mike O’Brien, minister of state, Department for Energy and Climate Change in Parliament,
“Clause 20 — Terms and conditions,” 11/18/2008, http://www.theyworkforyou.com/debate/?id=200811-18b.159.3
I have quite a lot still to say, so I shall try to give as full a reply, and as brief, as possible. Amendment (b) to Lords amendment No. 42 suggests
we replace the term "financial incentives" in proposed new subsection (2)(a) with "payment". The
use of the term "financial
incentives" clarifies that the general purpose of the scheme is to incentivise low-carbon electricity
generation through financial incentives, as opposed to other means such as a regulatory obligation or
barrier-busting support, such as help with the planning system. We believe that such clarity is helpful in
setting out beyond any doubt the primary purpose of the scheme. However, to give additional reassurances about
our intentions, I would point to the powers under proposed new subsection (3) that specifies the term "payment" in all the key provisions that
will establish the scheme. In other words, it is explicit that we are dealing with payments to small-scale generators. What is proposed will be a
real feed-in tariff scheme.
Makes affs like upping defense expenditures for the Middle East or affs from the high
school topic topical
EIA ’92 Office of Energy Markets and End Use, Energy Information Administration, US Department of
Energy, “Federal Energy Subsidies: Direct and Indirect Interventions in Energy Markets,” 1992,
ftp://tonto.eia.doe.gov/service/emeu9202.pdf
The issue of subsidy in energy policy analysis extends beyond consideration of actions involving some form
of financial commitment by the Federal Government. Subsidy-like effects flow from the imposition of
a range of regulations imposed by Government on energy markets. Regulations may directly subsidize a
fuel by mandating a specified level of consumption, thereby creating a market which might not otherwise
exist. The imposition of oxygenate requirements for gasoline in the winter of 1992, which stimulates demand for alcohol-based additives, is a
recent example. Regulations more often explicitly penalize rather than subsidize the targeted fuel. To the extent that
regulations on coal emissions raise costs of coal use, the competitive opportunities for alternatives, including renewables, natural gas, and
conservation, are enhanced. The additional costs that influence the consumption of coal versus other fuels do not require any exchange of
money between the Government and buyers and sellers of energy. However, this in no way diminishes the policy’s potential impact on
resource allocation and relative prices of energy products. Much
current debate on energy policy focuses on
externalities associated with energy use. Many believe there is a large implicit subsidy to energy production and consumption insofar as
pollution results in environmental costs not fully charged to those responsible. Failure to internalize “recognized” externalities in the context of
current fuel use may result in conventional energy being underpriced compared to other energy sources. Advocates of increased use of
renewable energy claim this form of “subsidy” to be central to the continued dominance of fossil fuels as a component of energy supply. In fact,
the effort to deal with environmental concerns has become a central feature of Federal energy policy.
Substantial costs which were formerly outside the market mechanism have, through the implementation of a series of taxes and
regulations, been internalized to energy markets. This report examines these developments as components of the current energy debate
regarding the significance of direct and indirect energy subsidies. In that context, a variety of environmental trust funds and components of the
Clean Air Act are examined. The report does not address the question of how much and what kind of externalities remain to be addressed
through further revision of policy. Such considerations are far beyond the scope of this effort. There
could be legitimate debate
over whether some of the programs described in this report are primarily directed towards energy or
towards
some broader objective, or alternatively whether programs excluded from this report ought to have been included.
Programs that provide incentives for broad classes of economic activity, such as investment in fixed
capital or investment in basic research, have been excluded , because they affect neither the choice
between energy and nonenergy investment, nor the choice among particular forms of energy. Some
may consider the Strategic Petroleum Reserve (SPR) to be a subsidy to energy consumers, while others may
consider it to be a program to protect the vital national interests of the United States. The SPR is not included in
this report. Some of the more expansive definitions of energy subsidies have included defense expenditures
related to contingencies in the Persian Gulf. U.S. defense expenditures are designed to provide security, and
the level of oil prices is not functionally related to the level of defense activity. Therefore defense expenditures
are not considered here. Some may consider Federal transportation programs to be forms of energy subsidy,
while others may think the energy impact of transportation programs is incidental to their intended purpose.
Transportation programs are not included. State and local programs (which are significant in a number of cases) have been
excluded by definition, since this report is about Federal subsidies.
Prolif adv
We excerpt this article’s massive method section that shows that Civilian Nuclear
Cooperation significantly increases prolif. The effect is huge and robust against
anything you can throw at it
Fuhrmann ‘9 -- Matthew Fuhrmann [Assistant Professor of Political Science at the University of South
Carolina] “Spreading Temptation” International Security Summer 2009, Vol. 34, No. 1, Pages 7-41,
Posted Online July 7, 2009. http://www.mitpressjournals.org/doi/pdf/10.1162/isec.2009.34.1.7
Given that every empirical approach has drawbacks, a multimethod assessment of my theory can inspire greater confidence in the findings presented in this article. 89 The case study analysis above provides rich descriptions of my
argument and illustrates that the causal processes operate as expected in actual instances of proliferation. 90 Statistical analysis allows me to minimize the risks of
selection bias and determine the average effect of independent variables on proliferation aims and
outcomes. 91 Additionally, it permits me to control for confounding variables and to show that peaceful nuclear
cooperation—and not some other factor—explains nuclear proliferation. This is especially important because proliferation is a complicated
process, and there is rarely only one factor that explains why nuclear weapons spread. 92 For the statistical analysis, I use a data set compiled by Sonali Singh and Christopher Way to identify the determinants of nuclear
proliferation. 93 I adopt a standard time-series cross-sectional data structure for the period 1945 to 2000, and the unit of analysis is the country (monad) year. For my analysis of nuclear weapons program onset, a country exits the
data set once it initiates a weapons acquisition campaign. Similarly, for my analysis of nuclear weapons acquisition, a country exits the data set once it obtains at least one nuclear bomb. dependent variables To analyze nuclear
proliferation, I coded two dependent variables, both of which are dichotomous. The first is coded 1 if the country initiated a nuclear weapons program in year t and 0 otherwise. The second is coded 1 if the country acquired
nuclear weapons in year t and 0 otherwise. To create these variables, I consulted a list of nuclear proliferation dates compiled by Singh and Way. 94 explanatory variables I hypothesized above that the accumulation of civilian
nuclear assistance makes states more likely both to begin nuclear weapons programs and to acquire such weapons—especially when security threats are also present. To operationalize civilian nuclear assistance, I collected and
coded new data on NCAs signed from 1945 to 2000. NCAs are an appropriate independent variable for this analysis because they must be in place in virtually all cases before the exchange of nuclear technology, materials, or
knowledge can take place. These agreements typically lead to the construction of a nuclear power or research reactor, the supply of fissile materials (e.g., plutonium or enriched uranium), the export of fissile material production
facilities, or the training of scientists and technicians. Related agreements that are not classified as NCAs include: (1) agreements that are explicitly defense related; (2) financial agreements; (3) agricultural or industrial agreements
unrelated to nuclear power; (4) agreements dealing with the leasing of nuclear material; and (5) liability agreements. To produce these data, I consulted a list compiled by James Keeley of more than 2,000 NCAs. 95 Figure 1 plots
the number of NCAs signed from 1950 to 2000. The figure shows a general increase in the number of NCAs over time, which is explained by the emergence of a greater number of capable nuclear suppliers. The number has
fluctuated slightly, with peaks in the late 1980s and the mid-1990s. The first NCA was signed in 1952, after which the average number of agreements signed each year was 58. I created an independent variable that measures the
aggregate number of NCAs that a state signed in a given year entitling it to nuclear technology, materials, or knowledge from another country. 96 If a state signed an NCA but only supplied—and did not receive—nuclear assistance
as part of the terms of the deal, then this would not be captured by the nuclear cooperation agreements variable. Table 1 lists the thirty countries that received the most nuclear assistance via these agreements from 1945 to 2000.
97 To operationalize security threats, I created a variable measuring the five-year moving average of the number of militarized interstate disputes (MIDs) per year in which a country was involved. This variable is based on version
3.0 of the Correlates of War’s MID data set. 98 I coded a third variable that interacts these two measures to test for the conditional effect of nuclear cooperation on proliferation. control variables I controlled for other factors
thought to affect proliferation. 99 To control for technological capacity, I included a variable measuring a country’s GDP per capita and a squared term of this measure to allow for the possible curvilinear relationship between
economic development and the pursuit of nuclear weapons. 100 To measure a state’s industrial capacity, I included a dichotomous variable that is coded 1 if it produced steel domestically and had an electricity-generating capacity
greater than 5,000 megawatts and 0 otherwise. I included a dichotomous variable that is coded 1 if the state was involved in at least one enduring rivalry as an additional proxy for a state’s security environment. 101 A dichotomous
variable that is coded 1 if a state shared a defense pact with one of the nuclear-capable great powers and 0 otherwise is also included because security guarantees of this magnitude could reduce states’ incentives to develop their
own nuclear weapons. 102 There are a number of “internal determinants” that could affect incentives to proliferate. I included two variables related to democracy. The first measures the country’s score on the Polity IV scale. 103
The second variable, which measures whether a state is democratizing, calculates movement toward democracy over a five-year span by subtracting a state’s Polity score in year t-5 from its Polity score in year t. To control for a
state’s exposure to the global economy, I included a variable measuring the ratio of exports plus imports as a share of GDP. 104 I also included a measure of trade liberalization that mirrors the democratization measure described
above. For the sake of robustness, I included one variable that Singh and Way excluded from their model. 105 I created a dichotomous variable and coded it 1 if the state signed the NPT in year t and 0 otherwise. NPT membership
could be salient in explaining decisions to proliferate because states make legal pledges not to pursue nuclear weapons when they sign this treaty. methods of analysis I used probit regression analysis to estimate the effect of
independent variables on nuclear weapons program onset and bomb acquisition. Given that the proliferation outcomes analyzed here occurred relatively infrequently, I also used rare events logit to estimate the effect of
independent variables on nuclear weapons program onset and nuclear weapons acquisition. 106 This estimator is appropriate when the dependent variable has thousands of times fewer 1’s than 0’s. I used clustering over states to
control for heteroskedastic error variance. To control for possible temporal dependence in the data, I also included a variable to count the number of years that passed without a country pursuing nuclear weapons or acquiring the
bomb. 107 Finally, I lagged all independent variables one year behind the dependent variable to control for possible simultaneity bias. Results of the Statistical Tests Before moving to the multivariate analysis, I considered cross
tabulations of nuclear cooperation agreements against nuclear weapons program onset and nuclear weapons acquisition. The results are presented in tables 2 and 3. These simple cross tabulations underscore that proliferation is a
relatively rare event. Decisions to begin weapons programs occur in fifteen of the observations in the sample (0.22 percent), and bomb acquisition occurs in nine observations in the sample (0.13 percent). Even though proliferation
occurs infrequently, these cross tabulations show that nuclear cooperation strongly influences whether countries will go down the nuclear path. Participation in at least one nuclear cooperation agreement increases the likelihood
of beginning a bomb program by about 500 percent. The combination of militarized conflict and nuclear assistance has an even larger substantive effect on program onset. Experiencing both of these phenomena increases the
probability of initiating a weapons program by about 638 percent. This simple analysis emphasizes that
these relationships are not deterministic. Although countries that receive
peaceful assistance were more likely to begin weapons programs, the majority of countries that benefit from such aid do not proliferate. It is also noteworthy that
80 percent of the countries
that began programs did so after receiving civilian aid. The four countries that initiated nuclear weapon
programs without receiving such assistance—France, the Soviet Union, the United Kingdom, and the
United States—did so in the 1940s and early 1950s when peaceful nuclear cooperation was not an option. From 1955 to 2000, no
country began a nuclear weapons program without first receiving civilian assistance. This suggests that after the early days
of the atomic age, nuclear aid became a necessary condition for launching a nuclear weapons program. Similar patterns
emerged between nuclear assistance and weapons acquisition. Nuclear aid increases the likelihood of acquiring the bomb by about 360 percent; the combination of atomic assistance and militarized disputes increases the
probability of building nuclear weapons by 750 percent. The relationship between nuclear assistance and weapons acquisition is also probabilistic—not deterministic—because not all countries that receive aid cross the nuclear
threshold. Table 3 indicates that atomic assistance was not always a necessary condition for bomb acquisition, although the vast majority of all proliferators did receive help. Seventy-eight percent of the countries that produced
the bomb received some assistance, and no country acquired weapons without receiving aid from 1953 to 2000. To explore the role of possible confounding variables, I turn now to the multivariate analysis. Table 4 presents the
initial results from the multivariate statistical analysis. The odd-numbered models were estimated using probit, and the even-numbered models were estimated using rare events logit. In models 1–4, the dependent variable is
weapons program onset. Models 1 and 2 exclude the interaction term and allow me to evaluate whether peaceful nuclear assistance affects decisions to begin bomb programs independent of the security environment. Models 3
and 4 include the interaction term and enable me to evaluate the conditional effect of atomic assistance on the initiation of nuclear weapons campaigns. In models 5–8 the dependent variable is acquisition. Models 5–6 exclude the
interaction term, allowing me to evaluate the unconditional effect of nuclear aid on bomb development. Models 7 and 8 include the interaction term, so I can assess the conditional effect of atomic assistance on a country
successfully building nuclear weapons. The results show that peaceful nuclear assistance continues to contribute to both nuclear weapons program onset and bomb acquisition, even when accounting for confounding variables. In
models 1–2 the coefficient on the variable measuring the cumulative amount of atomic assistance a country has received is positive and highly statistically significant. 108 This indicates that,
on average,
countries receiving nuclear aid are more likely to initiate bomb programs. The substantive effect of this variable is also strong. Raising the
value of the NCA variable from its mean (6.69) to one standard deviation above the mean (22.72) increases the likelihood of beginning a weapons program by 185 percent. 109 The findings in table 4 reveal a similar relationship between atomic assistance and bomb acquisition. As shown in models 5–6, the coefficient on the variable measuring the number of NCAs a country has signed is positive and highly significant, indicating that countries receiving
peaceful nuclear aid are more likely to build the bomb. Increasing the NCA variable from its mean to one standard deviation above the mean raises the probability that a country will build nuclear weapons by 243 percent. 110 Does
peaceful nuclear assistance have an especially strong effect on proliferation when countries also face security threats? Because I use an interaction term to test this part of my argument, it is not possible to evaluate this effect
based solely on the information presented in table 4. The appropriate way to interpret interaction terms is to graph the marginal effect of atomic assistance and the corresponding standard errors across the full range of the
militarized interstate dispute variable. 111 If zero is included in the confidence interval, then atomic assistance does not have a statistically significant effect on proliferation at that particular level of conflict. Figures 2 and 3 allow
me to evaluate how the combination of atomic aid and militarized conflict affect proliferation. Figure 2 plots the marginal effect of nuclear aid on weapons program onset as the number of militarized disputes rises. 112 It is difficult
to see in the figure, but
atomic assistance has a statistically significant effect on weapons programs across all levels
of conflict because zero is never included in the confidence interval. At low levels of conflict, increases in peaceful nuclear assistance have relatively small substantive effects on the likelihood of bomb program onset.
But as the security environment worsens, the substantive effect of atomic assistance on initiating a
bomb program is magnified. The probability that an average country experiencing six militarized
disputes will develop the bomb rises from 0.000936 to 0.0902 when the country receives increases in
atomic aid. 113 This indicates that countries are highly unlikely to begin weapons programs in the absence of such
assistance—even if they face security threats. But if threats are present and states receive additional atomic assistance, the likelihood of beginning a bomb program spikes
dramatically. If that same country were to be involved in twelve militarized disputes in one year, increases in
nuclear assistance would raise the probability of program initiation from 0.0625 to 0.737, an increase
of 1,078 percent. If an average country that experiences eighteen militarized disputes in a year receives additional atomic assistance, the likelihood that it will begin a weapons program rises from 0.426 to
0.933, an increase of 119 percent. Note that at high levels of conflict, the probability of weapons program onset approaches 1 with increases in peaceful aid, but countries that face numerous security threats are also likely to
proliferate in the absence of assistance. Consequently, increases in nuclear assistance yield smaller rises in the probability of proliferation at high levels of conflict. This is why the marginal effect displayed in figure 2 declines slightly
after about thirteen disputes. Figure 3 illustrates the conditional effect of nuclear aid on weapons acquisition as the number of disputes rises. 114 Nuclear assistance does not have a statistically significant effect on acquisition
when countries experience an average of zero militarized disputes, because zero is included in the confidence interval. For all other levels of conflict, atomic assistance has a statistically significant effect. If countries experience an
average of one militarized dispute, the substantive effect of atomic aid is modest. Increases in peaceful assistance raise the probability of bomb acquisition from 0.0000165 to 0.0000122, an increase of 43 percent. For an average
state experiencing six disputes, receiving nuclear aid raises the probability it will acquire nuclear weapons more substantially, from 0.000504 to 0.00202. If that same state were to experience twelve disputes in a year, the
probability of acquisition would rise from 0.0144 to 0.306, an increase of 2,031 percent. Likewise, receiving atomic assistance and experiencing eighteen conflicts increases the probability of bomb development by 511 percent,
from 0.110 to 0.671. These results indicate that, on average, countries that receive atomic assistance are more likely to proliferate— especially when security threats arise. Turning to the control variables, the coefficient on the
variable measuring whether a state shares a military alliance with a nuclear-armed power is statistically insignificant in all eight models, suggesting that nuclear protection has no effect on whether a country pursues the bomb or
successfully builds it. The coefficients on the variables measuring whether a country is democratic or is democratizing are also statistically insignificant, indicating that regime type has little effect on proliferation aims and
outcomes. Many policymakers assume that proliferation is a problem caused by “rogue” or undemocratic states. Although some autocratic states such as North Korea have proliferated in recent years, on average democracy is less
salient in explaining the spread of nuclear weapons than the conventional wisdom suggests. The results also fail to support the argument that trade dependence inºuences nuclear proliferation. The coefficients on the variables
measuring trade dependence and liberalization are not statistically significant in models 1–4, meaning that these factors have no effect on states’ decisions to build the bomb. Economic openness also has an insignificant effect on
bomb acquisition. But interestingly, liberalization has a positive and statistically significant effect in models 6 and 8, indicating that liberalizing states are more likely to cross the nuclear threshold. Future research should explore
whether these results may be due to imperfect measurement of these concepts. Some of the control variables do behave as expected. The coefficient on the variable measuring whether a country has signed the NPT is negative and
statistically significant in models 1–4, indicating that countries making nonproliferation commitments are less likely to initiate bomb programs. Substantively, NPT membership reduces the likelihood that a country will begin a
nuclear weapons program by more than 90 percent. For statistical reasons, it was necessary to exclude the NPT variable from models 5–8. 115 The coefficient on the variable measuring whether a country is involved in a rivalry is
positive and statistically significant across models 1–4, but it is insignificant in models 6 and 8. Likewise, the GDP variables have statistically significant effects in models 1 and 3, but these results are sensitive to model specification.
Industrial capacity has a positive and statistically significant effect in all eight models, indicating that countries with high industrial capabilities are more likely to begin weapons programs and successfully build the bomb. This is the
only variable other than the factors operationalizing my argument that has a statistically significant effect across model specifications. Having adequate industrial capacity increases the probability of program initiation from
0.00000226 to 0.000105 and the probability of acquisition from 0.000487 to 0.000804. To further assess the robustness of my findings, I conducted a sensitivity analysis. I used a new estimator to account for possible endogeneity
and an alternate coding for the dependent variable. In addition, I excluded “sensitive” nuclear cooperation agreements from the coding of my key independent variable. For space considerations, I discuss only briefly the results of
these robustness checks. Detailed discussions of the results and procedures, as well as all of the tables displaying the statistical results, are available in an online appendix. 116 endogeneity My argument is that the accumulation of
nuclear cooperation agreements encourages states to begin nuclear weapons programs and that receiving atomic aid ultimately enables states to acquire nuclear weapons. But it is also possible that states seek nuclear assistance
when they are pursuing nuclear weapons. 117 Thus, nuclear cooperation may be endogenous to nuclear weapons pursuit. One standard approach to address the
endogeneity issue is to lag the independent variables one year behind the dependent variable. 118 I adopted this
approach in the analysis presented above. As an additional way to address this issue, I estimated two endogenous equations simultaneously. The first equation represents the total number of nuclear cooperation agreements a
state has made in a particular year, and the second estimates the likelihood that it is pursuing nuclear weapons. As was the case above, the proliferation equation parallels the work of Singh and Way. 119 The nuclear cooperation
equation that I employed is based on a recent study of the causes of atomic assistance. 120 To estimate these equations simultaneously, I used a technique originally developed by G.S. Maddala and practically implemented by
Omar Keshk. 121 This method is designed for simultaneous equation models where one of the endogenous variables is continuous and the other is dichotomous, which is precisely the nature of the variables in this analysis. The
two-stage estimation technique generates instruments for each of the endogenous variables and then substitutes them in the respective structural equations. The first equation (with the continuous variable) is estimated using
ordinary least squares, and the second (with the dichotomous variable) is estimated using probit. 122 The results of the two-stage probit least squares model that addresses the simultaneity issue are generally consistent with the
findings presented above. 123 Most important, nuclear cooperation has a positive and statistically significant effect on nuclear
weapons pursuit. This result is robust to alternate model specifications. 124 dependent variable coding It is often difficult to determine the year
that a country begins a nuclear weapons program or acquires the bomb, given the secrecy that surrounds such military endeavors. As a result, there is some disagreement among scholars on the dates that certain countries began
to proliferate. To explore whether my results are sensitive to proliferation codings, I used an alternate set of proliferators and dates compiled by Jo and Gartzke. 125 Estimating the same models
displayed above but with the alternate proliferation dates also does not affect the results relating to my
argument. removal of sensitive agreements Recent research finds that countries receiving certain “sensitive” nuclear assistance are more likely to acquire nuclear weapons. 126 For the reasons I argued above, the
relationship between nuclear assistance and proliferation is broader. Training in nuclear engineering, the supply of research or power reactors, and the transfer of certain nuclear materials also affect proliferation. To test
whether my results may be driven by a few sensitive deals, I excluded them from the coding of my
independent variable. This type of sensitive agreement is extremely rare, so this change resulted in the
removal of a small number of agreements. I then estimated all models displayed in table 4 with this alternate coding of the independent variable. The findings
relevant to my argument are generally unaltered when sensitive agreements are excluded from my
coding of atomic assistance.
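For readers who want to see the mechanics behind the reported marginal effects, here is a minimal Python/statsmodels sketch of the same estimation strategy run on simulated data: a probit of program onset on NCAs, militarized disputes (MIDs), and their interaction, followed by the marginal effect of additional aid at each conflict level. Every number and variable below is invented for illustration; nothing here reproduces the actual Singh and Way data set.

```python
# Probit with an NCA x MID interaction on simulated country-year data,
# then the marginal effect of nuclear aid across conflict levels,
# mirroring the estimation strategy the card describes.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5_000
ncas = rng.poisson(7, n).astype(float)  # cumulative NCAs received (simulated)
mids = rng.poisson(2, n).astype(float)  # 5-yr avg militarized disputes (simulated)

# Simulated latent propensity: aid matters more as the threat level rises.
latent = -2.5 + 0.02 * ncas + 0.15 * mids + 0.01 * ncas * mids
onset = (latent + rng.standard_normal(n) > 0).astype(int)

X = sm.add_constant(np.column_stack([ncas, mids, ncas * mids]))
b0, b1, b2, b3 = sm.Probit(onset, X).fit(disp=False).params

# Marginal effect of one more NCA, at mean NCAs, by conflict level:
# dP/dNCA = phi(x'b) * (b1 + b3 * MIDs)
for m in range(0, 13, 3):
    xb = b0 + (b1 + b3 * m) * ncas.mean() + b2 * m
    print(f"MIDs={m:2d}: marginal effect of an additional NCA = "
          f"{norm.pdf(xb) * (b1 + b3 * m):.4f}")
```

As in the card's figure 2, the printed effect grows with the dispute count because the interaction term scales the slope on aid by the threat level.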
They’ll find sellers
Ford ’12 Christopher Ford, served until September 2008 as United States Special Representative for
Nuclear Nonproliferation, Director, Center for Technology and Global Security and Senior Fellow at the
Hudson Institute, “Perilous Precedents: Proliferation as Policy in Alternative Nuclear Futures,” Hudson
Institute, 6/28/2012, http://www.hudson.org/index.cfm?fuseaction=publication_details&id=9026
I sometimes wonder, however, whether the seeming irresistibility of the case for nonproliferation may sometimes get in the way
of our analytical acuity as we look at the geopolitical environment. It is not uncommon in our diplomatic relations, for instance, to hear
it declared with great assurance that "Country Such-and-Such shares our interest in preventing nuclear weapons proliferation" – and to have it
be assumed, in effect, that if we just remind its leaders of this shared interest, they will see the light and come around to our point of view. If
there is a problem in obtaining someone's cooperation on nonproliferation matters, we tend to see this as
being merely
due to disagreements over "tactics," or perhaps just some lack of capacity to be helpful
despite genuinely good intentions. At worst, we suspect merely that others are holding out in order to
bargain for as high a price as possible in return for giving us the cooperation they really do, in their hearts,
agree is important anyway. This may be unwise on their part – or perhaps on ours for commoditizing such cooperation
by trying to purchase it through concessionary inducements – but we assume that such bargaining doesn't
really bespeak a significant difference of opinion about the value of nonproliferation. Only the would-be proliferator
regimes themselves, we might think, actually want nuclear weapons to spread – and even then one usually doesn't have to look far to find
some analyst who feels their pursuit of such devices is an unfortunate but understandable choice, taken only grudgingly in the face of real or
perceived foreign threats. Almost
no one, we sometimes seem to assume, really supports proliferation. But perhaps we
should take a step back from the obviousness of such conclusions, and consider the possibility that "proliferation
as policy" is not
always felt to be an inherently irrational strategy. It is a strategy that it remains powerfully in our interest to prevent others
from adopting, of course. We probably miss something important, however, if we see proliferation as no more than
some kind of aberrance or confusion. "Proliferation-as-policy" is actually something that seems to have appealed to a
number of real-world decision-makers in the past. Many of you doubtless know these stories at least as well as I do, but let me
offer some examples: The Soviets gave Beijing a great deal of help in its weapons development, though by all
accounts Khrushchev balked just before fulfilling the final clauses of his 1957 cooperation agreement with the Chinese, and stopped before
providing Mao Zedong with an actual weapon prototype. French scientists provided a great deal of sensitive dual-use technology to Israel,
including the reactor at Dimona and a plutonium-separation capability that many observers believe to form the core of Israel's weapons
program even today. China
helped Pakistan develop its nuclear weapons program, to the point – it has been reported – that Beijing
provided actual weapons designs. Even in the midst of the ongoing nuclear crisis over Iran's previously-secret nuclear program, Russia
provided Tehran with the nuclear reactor at Bushehr. Russian entities apparently also designed the plutonium-production
reactor that Iran is constructing at Arak. China is reported to have provided uranium hexafluoride (UF6) feedstock to Iran's secret enrichment
program in the early 1990s, and to have provided the blueprints for Iran's uranium conversion facility. North Korea is reported to have provided
Libya with UF6, and – more infamously – to have constructed a plutonium-production reactor for Bashar al-Assad in Syria, though the Israelis
bombed it in 2007. The official story about Abdul Qadeer Khan's notorious nuclear smuggling ring is that it was some kind of a rogue operation,
without official support or encouragement, but few analysts today seem really to believe this. It is widely believed that the Pakistani
government, or a significant portion of it, was indeed complicit in Khan's operations. Finally, Saudi
Arabia has long been rumored
to have helped finance Pakistan's nuclear weapons program. The evidence thus suggests that proliferation isn't just
about those who want to acquire nuclear weapons: at one point or another, a number of countries have apparently concluded
that supporting proliferation is in their interest.
US credibility has no bite
McGoldrick ’10 Fred McGoldrick, CSIS, spent 30 years at the U.S. State and Energy Departments and
at the U.S. mission to the IAEA, negotiated peaceful nuclear cooperation agreements with a number of
countries and helped shape the policy of the United States to prevent the spread of nuclear weapons,
“The U.S.-UAE Peaceful Nuclear Cooperation Agreement: A Gold Standard or Fool’s Gold?” Center for
Strategic and International Studies, 11/30/10,
http://csis.org/files/publication/101130_McGoldrick_USUAENuclear.pdf
In sum, the United States is facing an uphill battle to compete in the international nuclear market and
cannot dictate nonproliferation conditions that others will find unacceptable. Nations embarking on
new nuclear programs do not need to rely on the United States for their nuclear fuel, equipment,
components, or technology. They have alternatives and lots of them, as other states with nuclear programs have
steadily built up their nuclear export capacities, which in some cases are state run or state supported.
Warming adv
Emissions from ionizing radiation and uranium mining
Mez September 2012—Lutz Mez [Department of Political and Social Sciences, Freie Universität Berlin]
“Nuclear energy–Any solution for sustainability and climate protection?” Energy Policy 48 (2012) 56–63
The sector of electrical power production accounts for about 27 percent of global anthropogenic CO2 emissions and constitutes by far the biggest and fastest
growing source of greenhouse gas emissions. That is why supposedly
CO2-free nuclear power plants have frequently been
praised as a panacea against climate change. As an argument in favor of civil use of nuclear energy, advocates such as RWE manager Fritz
Vahrenholt are fond of pointing out that the operation of nuclear power plant does not cause any CO2 emissions (Vahrenholt in ‘‘Welt online’’, 23 September 2010).
And, underscoring the advantages of German nuclear power plants, he added: ‘‘if their output was replaced by power plants using fossil fuels, this would cause
additional emissions of 120,000,000 t of CO2 per year.’’ Here Vahrenholt assumed that total nuclear power would be replaced by lignite coal plants. But
a
turnaround in energy policy would make greater use of renewable energy and decentralized gas- fired
combined heat and power plants which do not cause any more CO2 emissions than nuclear power plants
(Fritsche et al., 2007). On top of this, viewed from a systemic perspective, nuclear power plants are by no means free of CO2 emissions. Already today, they produce
up to 1/3 as much greenhouse gases as large modern gas power plants. CO2 emissions of nuclear energy in connection with its production—depending on where
the raw material uranium is mined and enriched—amounts to between 7 and 126 g CO2equ per kilowatt hour (GEMIS 4.7). Öko-Institut has estimated a specific
emission of 28 g CO2equ per kilowatt hour for a typical nuclear power plant in Germany—including emissions caused by the construction of the plant—with
enriched uranium from different supplier countries. An initial estimate of global CO2 emissions through the production of nuclear power for 2010 has produced an
amount of more than 117,000,000 t CO2equ (see Table 4)—this is roughly as much as the entire CO2 emissions produced by Greece this year. And
this data
does not even include the emissions caused by storage of nuclear waste. Storm van Leeuwen & Smith calculated the ratio
of CO2 emissions from nuclear energy and from a gas fired plant of the same net (electrical) capacity. Electricity generated from atomic energy emits 90–140 g CO2
per kWh. The range is due to the different grade of ores used. Only
if the uranium consumed by the nuclear energy system has
been extracted from rich ores, does nuclear generation emit less CO2 than gas based generation, giving the
impression that the application of nuclear energy would solve the global warming problem. However, uranium is a limited resource; it is
estimated to last for 50 to 70 years with current generation technology and demand. And when rich ores
become exhausted this ratio increases until it finally becomes larger than one, making the use of nuclear
energy unfavorable compared to simply burning the (remaining) fossil fuels directly. In the coming decades, indirect
CO2 emissions from nuclear power plants will moreover increase considerably because more fossil
energy will have to be used to mine uranium. In view of this trend, nuclear power plants will no longer have any
advantage over modern gas-fired power plants, let alone in comparison to the advantages offered by
increased energy efficiency or greater use of renewable energies, especially when the latter are used in cogeneration plants. ‘‘In
the long term the use of nuclear energy provides us with no solution to the problem’’ (Storm van Leeuwen 2007). And Robert
Rosner (2009), Director of Argonne National Laboratory, added: ‘‘Nuclear power is unlikely to play a critical
role in limiting CO2 equivalent concentrations in the atmosphere (…) No realistic plan foresees a reactor build rate that allows
nuclear power to stay below 550 ppme CO2 within the next 30–40 years.’’ Nuclear power plants also contribute to climate change
by emitting radioactive isotopes such as tritium or carbon 14. And the radioactive noble gas krypton 85, a
product of nuclear fission, ionizes the air more than any other radioactive substance. Krypton 85 is
produced in nuclear power plants and is released on a massive scale in reprocessing. The concentration of krypton
85 in the earth’s atmosphere has soared over the last few years as a result of nuclear fission, reaching record levels. Even though krypton 85 may have an impact on
the climate (Kollert & Donderer 1994), these emissions have not received any attention in international climate-protection negotiations to this very day. As for the
assertion that nuclear power is needed to promote climate protection, exactly the opposite would appear to be the case: nuclear
power plants must
be closed down quickly in order to exert pressure on operators and the power plant industry to redouble
efforts at innovation in the development of sustainable and socially compatible energy technologies—
especially the use of smart energy services.
Mainstream accounts overestimate atmospheric heat, which means mitigation will
miss the extra warming caused by the aff
Nordell and Gervet 09 – Bo Nordell* and Bruno Gervet [Department of Civil and Environmental
Engineering, Luleå University of Technology] “Global energy accumulation and net heat emission” Int. J.
Global Warming, Vol. 1, Nos. 1/2/3, 2009, http://www.zo.utexas.edu/courses/THOC/NordellGervet2009ijgw.pdf
Independent of what causes global warming, it should be considered in terms of accumulated energy.
The performed estimations of global heat accumulation in the air, ground and global net heat emissions only include the years from 1880 to
2000. The data used in estimating global heat accumulation in the air, ground and melting of land ice are relatively reliable, while the melting
of sea ice might be overestimated. The main uncertainty concerns sea temperature increase, which means that sensible heat accumulation in
sea water might be underestimated.
It was found that the air only contains 6.6% of the globally accumulated
heat, of which 45% is distributed over the land area, though it accounts for about 30% of Earth’s total area. The remaining
heat is accumulated in the ground (31.5%), sea water (28.5%), melting of sea ice (11.2%) and melting of land ice (22.2%) (Figure 4). The
melting of ice has absorbed 33.4% of the total global warming. The heat stored by sea water, the melting of sea ice and in the air over oceans
accounts for almost 43% of the global heat accumulation, while the corresponding value for the land area is about 35% if ~22% of land ice,
which is mainly in Greenland, is treated separately. It is concluded that net heat emissions contribute to 74% of global warming. The missing
heat (26%) must have other causes, e.g., the greenhouse effect, natural variations in the climate and/or the underestimation of net heat
emissions. The performed calculations are conservative, i.e., both the global heat accumulation and heat emissions are underestimated. Each
of these means that the missing heat in Figure 4 is too high. These uncertainties are discussed in more detail in Appendix B. Most
measures that already have been taken to combat global warming are also beneficial for the current
explanation. Renewable energy should be promoted instead of using fossil fuels. However, CO2 sequestration and subsequent storage
will have very little effect on global warming. It is also concluded that nuclear power is not a solution to (but part
of) the problem. The underestimated heat accumulation means that the missing heat in Figure 4 is too
high. Also, the underestimation of net heat emission gives the same result.
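To check that the card's heat-budget numbers hang together, here is a minimal arithmetic sketch in Python; every figure in it is taken from the evidence above, and nothing new is assumed:

```python
# Sanity check of the Nordell & Gervet (2009) heat-budget shares quoted above.
# All figures come from the card itself; this only verifies that they are
# internally consistent.

shares = {
    "air": 6.6,            # % of globally accumulated heat held in the air
    "ground": 31.5,        # % held in the ground
    "sea water": 28.5,     # % held as sensible heat in sea water
    "sea ice melt": 11.2,  # % absorbed by melting sea ice
    "land ice melt": 22.2, # % absorbed by melting land ice
}

print(f"total: {sum(shares.values()):.1f}%")  # -> 100.0%

# Ice melting absorbs the sea-ice and land-ice shares combined:
ice = shares["sea ice melt"] + shares["land ice melt"]
print(f"ice melt: {ice:.1f}%")  # -> 33.4%, matching the card

# The card attributes 74% of the accumulation to net heat emissions, leaving
# the 26% residual ("missing heat") for other causes:
print(f"missing heat: {100.0 - 74.0:.1f}%")  # -> 26.0%
```

The shares sum to exactly 100%, and the 26% "missing heat" is simply the residual after the 74% attributed to net heat emissions.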
Minimal methane leakage—flaring, capture, and green completions solve
Cathles ’12 L. M. Cathles, PhD in geological sciences at Princeton, co-leader of the oil and gas thrust of
the Cornell KAUST program and Director of the Cornell Institute for the Study of the Continents, Fellow
of the American Association for the Advancement of Science – 1987, Extractive Metallurgy Science Award,
Metallurgical Society of AIME – 1985, “Assessing the greenhouse impact of natural gas,” Geochemistry,
Geophysics, Geosystems, Volume 13, Issue 6, June 2012, DOI: 10.1029/2012GC004032
[33] The most extensive synthesis of data on fugitive gases associated with unconventional gas recovery
is an industry report to the EPA commissioned by The Devon Energy Corporation [Harrison, 2012]. It
documents gas leakage during the completion of 1578 unconventional (shale gas or tight sand) gas wells by 8
different companies with a reasonable representation across the major unconventional gas
development regions of the U.S. Three percent of the wells in the study vented methane to the
atmosphere. Of the 1578 unconventional (shale gas or tight sand) gas wells in the Devon study, 1475 (93.5%) were green
completed; that is, they were connected to a pipeline in the pre-initial production stage so there was no need
for them to be either vented or flared. Of the 6.5% of all wells that were not green completed, 54% were flared.
Thus 3% of the 1578 wells studied vented methane into the atmosphere. [34] The wells that vented methane to the
atmosphere did so at the rate of 765 Mscf/completion. The maximum gas that could be vented from the non-green
completed wells was estimated by calculating the sonic venting rate from the choke (orifice) size and source gas temperature of the well, using
a formula recommended by the EPA. Since
many wells might vent at sub-sonic rates, which would be less, this is
an upper bound on the venting rate. The total vented volume was obtained by multiplying this venting rate by the known duration
of venting during well completion. These vented volumes ranged from 340 to 1160 Mscf, with an average of 765 Mscf. The venting from an
average unconventional shale gas well indicated by the Devon study is thus ∼23 Mscf ( = 0.03 × 765 Mscf), which is similar to the 18.33
Mscf that EPA [2010] estimates is vented during well completion of a conventional gas well (half vented and half flared). Since
venting during
well completion and workover of conventional gas wells is estimated at 0.01% of production [e.g., Howarth et
al., 2011], this kind of venting is insignificant for both unconventional and conventional wells.
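The ~23 Mscf figure follows directly from the study's own numbers; a minimal sketch, using only values quoted in the card:

```python
# Reproduces the Devon-study venting arithmetic quoted above (Cathles 2012).
# Every number below comes from the card itself.

wells_total = 1578             # unconventional wells in the study
green_completed_share = 0.935  # 93.5% piped in the pre-initial production stage
vented_share = 0.03            # 3% of all wells vented methane
vent_per_venting_well = 765.0  # average Mscf vented per venting completion

# Wells not green completed (the card reports 1578 - 1475 = 103):
print(f"not green completed: {wells_total * (1 - green_completed_share):.0f}")

# Venting attributable to an average well in the sample:
avg_vent = vented_share * vent_per_venting_well
print(f"per average well: {avg_vent:.0f} Mscf")  # -> ~23 Mscf, as stated
```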
Methane leak rates are tiny—consensus of independent and EPA research
Cathles ’12 L. M. Cathles, PhD in geological sciences at Princeton, co-leader of the oil and gas thrust of
the Cornell KAUST program and Director of the Cornell Institute for the Study of the Continents, Fellow
of the American Association for the Advancement of Science – 1987, Extractive Metallurgy Science Award,
Metallurgical Society of AIME – 1985, “Assessing the greenhouse impact of natural gas,” Geochemistry,
Geophysics, Geosystems, Volume 13, Issue 6, June 2012, DOI: 10.1029/2012GC004032
[36] Once a well is in place, the leakage involved in routine operation of the well site and in transporting
the gas from the well to the customer is the same for an unconventional well as it is from a conventional
well. What we know about this leakage is summarized in Table 2. Routine site leaks occur when valves are opened and closed, and leakage
occurs when the gas is processed to remove water and inert components, during transportation and storage, and in the process of
distribution to customers. The
first major assessment of these leaks was carried out by the Gas Research Institute
(GRI) and the EPA in 1997 and the results are shown in the second column of Table 2. Appendix A of EPA [2010] gives a detailed and
very specific accounting of leaks of many different kinds. These numbers are summed into the same categories and displayed in column 3 of
Table 2. EPA
[2011] found similar leakage rates (column 4). Skone [2011] assessed leakage from 6 classes of gas
wells. We show his results for unconventional gas wells in the Barnett Shale in column 5 of Table 2. His other well classes are similar.
Venkatesh et al. [2011] carried out an independent assessment that is given in column 6. There are variations in
these assessments, but overall a leakage of ∼1.5% of production is suggested. Additional discussion of this data
and its compilation can be found in Cathles et al. [2012; see also online press release, 2012] and L. M. Cathles (Perspectives on the Marcellus
gas resource: What benefits and risks are associated with Marcellus gas development?, online blog,
http://blogs.cornell.edu/naturalgaswarming/, 2012).
Reserves are underestimated—major oil suppliers also have undiscovered gas
reserves
Channell et al. ’12 Jason Channell, Director, Citi Investment Research & Analysis, previously:
Executive Director at Goldman Sachs, Executive Director at Cheuvreux, Executive Director at ABN AMRO
Hoare Govett, and Analyst at Fidelity Investments; Timothy Lam, Vice President, Utilities & Alternative
Energy, Citi Research, HK-based research analyst covering the utilities, solar and alternative
energy sectors; Shahriar (Shar) Pourreza, CFA, Advisor at Citigroup Global Markets Inc, “Shale &
renewables: a symbiotic relationship,” Citi Research, 12 September 2012,
https://ir.citi.com/586mD%2BJRxPXd2OOZC6jt0ZhijqcxXiPTw4Ha0Q9dAjUW0gFnCIUTTA%3D%3D
Since shale assets are often found underneath conventional gas fields, large conventional gas producing
countries that were not included in the EIA report, such as Russia and Saudi Arabia, are also likely to
possess significant shale gas reserves; for example Baker Hughes, an oil-field services company, has suggested
that Saudi Arabia possesses the fifth-largest shale gas reserves in the world.
Current developments in natural gas prevent price spikes
-New storage sites
-Long term contracts
Kirkland 11 Joel Kirkland is an E&E reporter and writer for ClimateWire, 3/23/11. “Stable price remains the key
to shifting power sector from coal to natural gas”
http://www.eenews.net/public/climatewire/2011/03/23/3
Producers and utilities both are signing longer-term supply deals for gas. McMahon said changes in
contracts to help stabilize gas supply and demand will help electric utilities as they spend billions to
upgrade infrastructure. "The capital expansion of the industry on the generation side happens in big,
lumpy tranches," he said. Utilities are shifting their emphasis to gas-fired generation and toward more nuclear power in the Southeast
instead of building new coal-fired generation or retrofitting the oldest coal plants with expensive technology to cut dangerous emissions.
Potential climate change regulations requiring power plants to emit less carbon dioxide and federal regulations requiring utilities to comply
with the Clean Air Act are pushing utilities in that direction. Low
natural gas prices have helped also. "Realizing and
maximizing these benefits, however, will require that investors have confidence in the mid- to long-term
stability of natural gas prices," says the report. Further, the report says "rules that unnecessarily restrict the use of or raise the cost
of long-term contract and hedging tools for managing supply risk should be avoided.” Producers and utilities could sign fixed-price, long-term contracts that build in price adjustments, the groups suggested. Today, gas is often
bought through a combination of short- to medium-term supply contracts and a spot market.
Increasingly, producers looking for customer guarantees are pressing electric utilities and other buyers
to sign longer-term contracts. Gas storage will play a bigger role in balancing supply and demand as gas
consumption continues expanding, according to the report. Nearly 700 billion cubic feet of new natural
gas storage capacity has been built in the past decade. Increasingly, gas storage is a critical piece of the
puzzle for managing price volatility. As more gas flows through trading points in the Southeast and
Northeast, notes the report, there's a greater need for gas storage facilities near the major markets.
"Thus, growth in the amount of storage available to the market -- now at a historic high and still growing
-- is an important contributor to more stable and competitive gas markets," says the report.
Their threats are exaggerated or invented for political purposes—magnifies instability
Walt 1-2 Stephen M. Walt, professor of international affairs at Harvard’s Kennedy School of
Government, “More or less: The debate on U.S. grand strategy,” 1/2/2013,
http://walt.foreignpolicy.com/posts/2013/01/02/more_or_less_the_debate_on_us_grand_strategy
The problem, of course, is that U.S. leaders can only sell deep engagement by convincing Americans that the
nation's security will be fatally compromised if they do not get busy managing the entire globe. Because
the United States is in fact quite secure from direct attack and/or conquest, the only way to do that is by
ceaseless threat-mongering, as has been done in the United States ever since the Truman Doctrine, the
first Committee on the Present Danger and the alarmist rhetoric of NSC-68. Unfortunately, threat-mongering requires people in
the national security establishment to exaggerate U.S. interests more-or-less constantly and to conjure
up half-baked ideas like the domino theory to keep people nervous. And once a country has talked itself
into a properly paranoid frame of mind, it inevitably stumbles into various quagmires, as the United
States did in Vietnam, Iraq, and Afghanistan. Again, such debacles are not deviations from "deep engagement"; they are a
nearly inevitable consequence of it.
Naval force comparisons are political shell games with no relevance to grand
strategy—US dominance remains
Stigler ’12 Andrew L. Stigler, associate professor in the National Security Affairs Department of the
Naval War College, “Obama Wise Not to Play Battleship,” New Atlanticist, 11/19/2012,
http://www.acus.org/new_atlanticist/obama-wise-not-play-battleship?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+new_atlanticist+%28New+Atlanticist%29
There was one exception, however. In the final debate, Mitt Romney asserted that the US military was drifting into a
state of diminished strength. “Our navy is smaller now than at any time since 1917,” he warned. “Our air force is older and smaller
than at any time since it was founded in 1947.” The exchange was not unusual for presidential campaigns, and the comparisons Romney made
are not uncommon. The challenger seeks to portray the incumbent as having weakened America, and the incumbent counters. Republicans in
particular are partial to charging that Democrats have undercut the military, created a “hollow force,” and the like. The usual Democratic
response is to argue that the US military is still the strongest in the world, and offer contrasting facts to counter. But Obama didn’t offer the
usual response. “I think Governor Romney hasn’t spent enough time looking at how our military works,” Obama said. Regarding
Romney’s naval comparisons, Obama argued “Well, governor, we also have fewer horses and bayonets
… we have these things called aircraft carriers, where planes land on them. We have ships that go underwater, nuclear submarines. And so,
the question is not a game of Battleship, where we’re counting ships. It’s what are our capabilities.” Here,
Obama cut the heart out of the casual historical comparisons that are so often offered in presidential campaigns. Ships and aircraft are
not the same today as they were nearly a century ago. Nor is the international environment.
Capabilities, not raw numbers, are what matter. If the size of our enemies’ militaries determined how
much attention they received from the United States, we would be ignoring al Qaeda. Regarding the Navy,
Secretary of Defense Robert Gates was fond of making a far more salient comparison. In a 2009 speech at the Naval War College, he
pointed out that, in terms of tonnage, the United States Navy “is still larger than the next thirteen navies
combined – and eleven of those thirteen navies are US allies or partners.” Similarly, comparing the US Air Force of
2012 with that of the year it was “created” in 1947 is an even more dubious criticism. Those making the comparison imply the USAF was thrust
into being with a fleet even larger than what we have today. But the “birth” of the Air Force in 1947 would be far more accurately called a
name change than the de novo creation of a new service. It was the National Security Act of 1947 that created the US Air Force as a separate
service, from what had been the US Army Air Corps for many decades. One might instead trace the origin of the USAF to the Signal Corps’
Aeronautical Division, commonly credited as the world’s first military aviation organization in 1907. Given that it was the WWII Army Air Corps
that had received the thousands of mass-produced planes used to defeat Nazi Germany and Imperial Japan two years earlier, it’s no small
wonder that this “new” service had more planes in 1947 than the fleet of hugely expensive technological marvels that the United States flies
into combat today. The abuse
of numerical comparisons relating to national security assets – whether real or fictitious
– is not uncommon in American history. Reagan’s Secretary of the Navy, John F. Lehman, was a strong
proponent of a 600-ship Navy. The genesis of this goal was not a cold analysis of the need for naval assets, but rather a desire to
have more ships than the Soviet Union. The US Navy never reached the goal, due in part to defense budget cuts by
Congress. At its peak in 1987, the US had 594 ships in service. But this number was artificially inflated, many
argue, as some of these were WWII-era ships whose sunset had been postponed to fill out the force in
hopes of achieving the goal. Democratic candidates have not been non-participants in this game. As he ran for president in 1960,
John F. Kennedy spoke ominously of a “missile gap” between the US and the Soviet Union, claiming that the US was falling dangerously behind
in the arms race. Kennedy’s claims were based on the most pessimistic assessments available, however, and were disconfirmed by the CIA’s
own estimate of Soviet capabilities. Casual
numerical comparisons will always be appealing for political
candidates, because they are easily digested by an electorate. They immediately put one’s opponent on
the defensive. Who wouldn’t want to rather have more than less, after all? Suddenly, reasonable cuts to the world’s most expensive
military can be twisted into images of hollowness, weakness, and the like.
1NR
Politics
Warming causes extinction
Sify ’10 citing Ove Hoegh-Guldberg, professor at University of Queensland and Director of the Global
Change Institute, and John Bruno, associate professor of Marine Science at UNC, “Could unbridled
climate changes lead to human extinction?” Sify News, 6/19/10, http://www.sify.com/news/could-unbridled-climate-changes-lead-to-human-extinction-news-international-kgtrOhdaahc.html
The findings of the comprehensive report: 'The impact of climate change on the world's marine ecosystems'
emerged from a synthesis of recent research on the world's oceans, carried out by two of the world's
leading marine scientists. One of the authors of the report is Ove Hoegh-Guldberg, professor at The University of Queensland and the
director of its Global Change Institute (GCI). 'We may see sudden, unexpected changes that have serious
ramifications for the overall well-being of humans, including the capacity of the planet to support
people. This is further evidence that we are well on the way to the next great extinction event,' says Hoegh-Guldberg. 'The findings have enormous implications for mankind, particularly if the trend continues. The
earth's ocean, which produces half of the oxygen we breathe and absorbs 30 per cent of human-generated carbon dioxide, is equivalent to its heart and lungs. This study shows worrying signs of ill-health. It's as if the earth has been smoking two packs of cigarettes a day!' he added. 'We are entering a
period in which the ocean services upon which humanity depends are undergoing massive change and
in some cases beginning to fail', he added. The 'fundamental and comprehensive' changes to marine life
identified in the report include rapidly warming and acidifying oceans, changes in water circulation and
expansion of dead zones within the ocean depths. These are driving major changes in marine
ecosystems: less abundant coral reefs, sea grasses and mangroves (important fish nurseries); fewer,
smaller fish; a breakdown in food chains; changes in the distribution of marine life; and more frequent
diseases and pests among marine organisms. Study co-author John F Bruno, associate professor in marine science at The
University of North Carolina, says greenhouse gas emissions are modifying many physical and geochemical
aspects of the planet's oceans, in ways 'unprecedented in nearly a million years'. 'This is causing
fundamental and comprehensive changes to the way marine ecosystems function,' Bruno warned, according to a
GCI release. These findings were published in Science.
And framing issue – sci dip creates diplomatic linkages that facilitate communication
and increase U.S. influence – prevents crisis escalation and war globally
Krasnodebska 12 (Molly, Former Contributing Researcher for the USC Center for Conflict Prevention.
Master of Public Diplomacy, USC, Conflict Prevention,
http://uscpublicdiplomacy.org/index.php/newswire/media_monitor_reports_detail/science_diplomacy/)
Science diplomacy can function as a tool for conflict prevention and be understood as fostering
cooperation between the scientific communities of hostile countries. Cooperation in the field of scientific research
can help bridge the gap between the countries by creating a forum of mutual support and common interests. ¶ In recent
years, there have been numerous examples of scientific cooperation between countries that otherwise have no official diplomatic
relations. One such example is the "inter-Korean cooperation" in the chemistry, biotech and nano-science arenas, which
was first proposed in March 2010. Science diplomacy between the two Koreas is also exemplified by the foundation of the first privately funded
university in communist North Korea, the Pyongyang University of Science and Technology by Dr. Kim Chin-Kyung, a former war prisoner in
North Korea. "Educating people is a way to share what they love, and share their values," said the South Korean in an interview.¶ Another
example is the earthquake research cooperation between China and Taiwan initiated in January 2010. Chen Cheng-hung, vice chairman
of the National Science Council in Taipei calls the initiative the “biggest scientific cooperation program between the two sides of the Taiwan
Straits so far.”¶ The United States launched a science diplomacy campaign toward Iran. The two
countries, which have had no formal relations
since 1980, have re-launched their “broken dialogue” through science. In the summer of 2009, the
American Association for the Advancement of Science established a new Center for Science Diplomacy in Iran. According to Miller-McCune, this
“scientist-to-scientist exchange” is more effective than governmental public diplomacy initiatives. The two countries, instead of trying to
“influence each other’s behavior…will learn something in the process.” ¶ In addition, science
diplomacy for conflict prevention
can also be understood as the use of science and technology to enhance global or regional security. Solving regional
problems and advancing peoples’ well-being though technology by providing them with access to water, clean energy, food, and
information can prevent the rise of conflicts. ¶ The United States had been the leading country in the use of science and
technology diplomacy for the purpose of advancing security. This kind of public diplomacy is particularly directed towards the
Muslim world. One example of this is "vaccine diplomacy." In an interview for SciDevNet in March 2010, Peter J. Hotez, president of the
Sabin Vaccine Institute in Washington D.C. stated: “the United States could help reduce the burden of neglected diseases and
promote peace by engaging Islamic nations in collaborative vaccine research and development.” This would “improve vaccine
development for neglected diseases” in countries such as Indonesia, Malaysia and Pakistan where vaccine diplomacy is currently being
implemented.
CIR key to heg
Nye 12 (Joseph, Professor at Harvard’s Kennedy School of Government and former asst. SecDef.
“Immigration and American Power” Project Syndicate 12/10/12 http://www.project-syndicate.org/commentary/obama-needs-immigration-reform-to-maintain-america-s-strength-by-joseph-s--nye)
The United States is a nation of immigrants. Except for a small number of Native Americans, everyone is originally from
somewhere else, and even recent immigrants can rise to top economic and political roles. President Franklin Roosevelt once famously
addressed the Daughters of the American Revolution – a group that prided itself on the early arrival of its ancestors – as “fellow immigrants.”¶
In recent years, however, US politics has had a strong anti-immigration slant, and the issue played an important role in the Republican Party’s
presidential nomination battle in 2012. But Barack Obama’s re-election demonstrated the electoral power of Latino voters, who rejected
Republican presidential candidate Mitt Romney by a 3-1 majority, as did Asian-Americans.¶ As a result, several prominent Republican politicians
are now urging their party to reconsider its anti-immigration policies, and plans
for immigration reform will be on the
agenda at the beginning of Obama’s second term. Successful reform will be an important step in
preventing the decline of American power.¶ Fears about the impact of immigration on national values and on a coherent sense
of American identity are not new. The nineteenth-century “Know Nothing” movement was built on opposition to immigrants, particularly the
Irish. Chinese were singled out for exclusion from 1882 onward, and, with the more restrictive Immigration Act of 1924, immigration in general
slowed for the next four decades.¶ During the twentieth century, the US recorded its highest percentage of foreign-born residents, 14.7%, in
1910. A century later, according to the 2010 census, 13% of the American population is foreign born. But, despite being a nation of immigrants,
more Americans are skeptical about immigration than are sympathetic to it. Various opinion polls show either a plurality or a majority favoring
less immigration. The recession exacerbated such views: in 2009, one-half of the US public favored allowing fewer immigrants, up from 39% in
2008.¶ Both the number of immigrants and their origin have caused concerns about immigration’s effects on American culture. Demographers
portray a country in 2050 in which non-Hispanic whites will be only a slim majority. Hispanics will comprise 25% of the population, with African- and Asian-Americans making up 14% and 8%, respectively.¶ But mass communications and market forces produce powerful incentives to
master the English language and accept a degree of assimilation. Modern media help new immigrants to learn more about their new country
beforehand than immigrants did a century ago. Indeed, most of the evidence suggests that the latest immigrants are assimilating at least as
quickly as their predecessors.¶ While too rapid a rate of immigration can cause social problems, over the
long term, immigration
strengthens US power. It is estimated that at least 83 countries and territories currently have fertility rates that
are below the level needed to keep their population constant. Whereas most developed countries will
experience a shortage of people as the century progresses, America is one of the few that may avoid
demographic decline and maintain its share of world population.¶ For example, to maintain its current population size,
Japan would have to accept 350,000 newcomers annually for the next 50 years, which is difficult for a culture that has historically been hostile
to immigration. In contrast, the Census Bureau projects that the US population will grow by 49% over the next four decades. ¶ Today, the US is
the world’s third most populous country; 50 years from now it is still likely to be third (after only China and India). This
is highly relevant
to economic power: whereas nearly all other developed countries will face a growing burden of
providing for the older generation, immigration could help to attenuate the policy problem for the US.¶ In
addition, though studies suggest that the short-term economic benefits of immigration are relatively small, and that unskilled workers may
suffer from competition, skilled
immigrants can be important to particular sectors – and to long-term growth.
There is a strong correlation between the number of visas for skilled applicants and patents filed in the
US. At the beginning of this century, Chinese- and Indian-born engineers were running one-quarter of Silicon Valley’s technology businesses,
which accounted for $17.8 billion in sales; and, in 2005, immigrants had helped to start one-quarter of all US
technology start-ups during the previous decade. Immigrants or children of immigrants founded roughly 40% of the 2010
Fortune 500 companies.¶ Equally
important are immigration’s benefits for America’s soft power. The fact that
people want to come to the US enhances its appeal, and immigrants’ upward mobility is attractive to
people in other countries. The US is a magnet, and many people can envisage themselves as Americans,
in part because so many successful Americans look like them. Moreover, connections between
immigrants and their families and friends back home help to convey accurate and positive information
about the US.¶ Likewise, because the presence of many cultures creates avenues of connection with other
countries, it helps to broaden Americans’ attitudes and views of the world in an era of globalization.
Rather than diluting hard and soft power, immigration enhances both.¶ Singapore’s former leader, Lee Kuan Yew, an
astute observer of both the US and China, argues that China will not surpass the US as the leading power of the twenty-first century, precisely because the US attracts the best and brightest from the rest of the world and
melds them into a diverse culture of creativity. China has a larger population to recruit from domestically, but, in Lee’s view,
its Sino-centric culture will make it less creative than the US.¶ That is a view that Americans should take to heart. If Obama succeeds in
enacting immigration reform in his second term, he will have gone a long way toward fulfilling his
promise to maintain the strength of the US.
Empirically causes fights – kills bipart
Restuccia 12 (Andrew, The Hill, Feb 13, [thehill.com/blogs/e2-wire/e2-wire/210335-energy-dept-nears-approval-of-83b-nuclear-loan-setting-up-capitol-hill-fight], jam)
Energy Secretary Steven Chu said Monday he expects to finalize an $8.3 billion taxpayer-backed loan for two new
nuclear reactors in Georgia, setting up another battle on Capitol Hill over the government’s investments in
energy projects. The Energy Department offered the loan guarantee to Southern Co. in February 2010, but finalization of the deal was
conditioned on a number of key regulatory approvals. Chu signaled Monday that the loan guarantee is nearing final approval, less than a week
after the Nuclear Regulatory Commission (NRC) greenlighted the license for the project. “We expect that one to close and go forward,” Chu told
reporters Monday afternoon, but cautioned that “there are a number of other milestones” the project must achieve before getting the final OK
from the Energy Department. NRC’s decision last week to approve the license for the project — allowing construction and conditional operation
of two reactors at the existing Vogtle power plant near Waynesboro, Ga. — marked a major milestone for the nuclear industry. It’s the first
time the commission has approved construction of new reactors since 1978. But the
project faces major resistance from some
Democrats in Congress, who are hoping to use Republicans’ eagerness to probe the $535 million loan
guarantee to failed solar firm Solyndra against them. Rep. Edward Markey (D-Mass.), a senior member of the House
Energy and Commerce Committee, criticized Republicans Monday for not objecting to the pending nuclear loan
guarantee, which is more than 15 times larger than the one given to Solyndra in 2009. “The Republican push for a loan guarantee for a
nuclear reactor project exponentially riskier than Solyndra proves that their interests are not in financial stewardship but in political game
playing,” Markey said. Markey, in a letter Monday, pressed Chu not to finalize the nuclear loan guarantee until the Energy Department makes
improvements to its loan program recommended in a White House-mandated report released last week. The report — conducted by Herb
Allison, the former Treasury Department official who oversaw the Troubled Asset Relief Program — calls for several steps to improve oversight
of the loan program, but also provides lower estimates of taxpayer risk than an earlier federal forecast. “Given the massive taxpayer debt to be
assumed and the extraordinary risk associated with the Vogtle project, we should not act on final approval of the Southern Company loan
guarantee unless all of the improvements recommended in the Allison report have been put in place to reduce the likelihood of a multi-billion
dollar taxpayer bailout,” Markey said. Republicans
have hit the administration hard on Solyndra’s September
bankruptcy, alleging that the decision to grant the company a loan guarantee was influenced by politics and raising broader questions
about the administration’s green energy investments.
Fukushima killed support
YPCC ’12 “The Climate Note,” Yale Project on Climate Change Communication, 3/11/2012,
http://environment.yale.edu/climate/the-climate-note/nuclear-power-in-the-american-mind/
How did American images of nuclear power change in response to the Fukushima disaster? Using two
separate nationally representative surveys – one conducted in June of 2005 and the second in May of 2011 – we asked Americans to provide
the first word or phrase that came to mind when they thought of “nuclear power.” We then categorized these free associations to identify the
most common images of nuclear power in the American mind. Compared
to 2005, Americans in May of 2011 were
significantly more likely to associate nuclear power with images of “disaster” (including many direct
references to Fukushima) or “bad” (including bad, dangerous, and scary). And as described in our report: Public Support for
Climate & Energy Policies, only 47 percent of Americans in May 2011 supported building more nuclear power
plants, down 6 points from the prior year (June 2010), while only 33 percent supported building a nuclear power plant in their own local area.
Fukushima was a “focusing event” – a crisis that generates massive media and public attention and ripple
effects well beyond the disaster itself. The meltdown and release of radioactive materials at Fukushima directly impacted the air,
water, soil, people, and biota in the immediate vicinity of the facility, but the ripple effects of the disaster cascaded through broader Japanese
society, causing, among other things, the prime minister to pledge the end of nuclear power in Japan. Further, the ripples, like the tsunami that
triggered the crisis, ricocheted across the world, leading the German government to pledge the phase-out of nuclear power, reviews of nuclear
plant safety in other countries, and shifts in global public opinion about nuclear energy, including a shift in the meaning of "nuclear power" in
the American mind.
Obama will get tied to the plan
USA TODAY 11 (Mar 3, "Nuclear power industry ups outreach to Congress,"
[www.usatoday.com/news/washington/2011-03-29-nuke-support_N.htm], jam)
President Obama has supported the development of nuclear plants as a way to reduce greenhouse gas emissions in producing electricity. In
the 2012 budget he submitted to Congress last month, Obama proposed another $36 billion in federal loan
guarantees for new nuclear plants.
Eberly 1/21 Todd Eberly, coordinator of Public Policy Studies and assistant professor in the
Department of Political Science at St. Mary's College of Maryland, “The presidential power trap; Barack
Obama is discovering that modern presidents have difficulty amassing political capital, which hinders
their ability to enact a robust agenda,” 1/21/2013, http://articles.baltimoresun.com/2013-01-21/news/bs-ed-political-capital-20130121_1_political-system-party-support-public-opinion
Barack Obama's election in 2008 seemed to signal a change. Mr. Obama's popular vote majority was the largest for any president since 1988,
and he was the first Democrat to clear the 50 percent mark since Lyndon Johnson. The president initially enjoyed strong public
approval and, with a Democratic Congress, was able to produce an impressive string of legislative
accomplishments during his first year and early into his second, capped by enactment of the Patient Protection and Affordable Care Act.
But with each legislative battle and success, his political capital waned. His impressive successes with
Congress in 2009 and 2010 were accompanied by a shift in the public mood against him, evident in the rise of
the tea party movement, the collapse in his approval rating, and the large GOP gains in the 2010 elections, which brought a return to divided
government.
Specifically for upcoming term
Walsh 12 Ken covers the White House and politics for U.S. News. “Setting Clear Priorities Will Be Key
for Obama,” 12/20, http://www.usnews.com/news/blogs/Ken-Walshs-Washington/2012/12/20/setting-clear-priorities-will-be-key-for-obama
And there is an axiom in Washington: Congress, the bureaucracy, the media, and other power centers can do justice to only one or two
issues at a time. Phil Schiliro, Obama's former liaison to Congress, said Obama has "always had a personal commitment" to gun control, for example. ¶ But
Schiliro told the New York Times, "Given the crisis he faced when he first took office, there's only so much capacity
in the system to move his agenda.” So Obama might be wise to limit his goals now and avoid overburdening the system, or he could face major setbacks that would limit his power and credibility for the remainder of his presidency.
Bipart bill in Squo—congress confident
The Hill 2/22 “Bipartisan House immigration group reports ‘incredible progress’” By Russell Berman
- 02/22/13 06:00 AM ET http://thehill.com/homenews/house/284409-house-immigration-group-reports-incredible-progress
A bipartisan House group is making “really good progress” on immigration reform legislation despite missing a
target date for an agreement, a top Republican participant said.¶ “I am now more sure than ever that we’re going to have
a bipartisan bill,” a longtime advocate of comprehensive reform, Rep. Mario Diaz-Balart (R-Fla.), said in an interview. “We’re
making incredible progress.”¶ Diaz-Balart
is a member of a House group that includes more than a half dozen
liberal and conservative lawmakers who have been working for years behind closed doors on an
immigration overhaul. As talks accelerated in recent months, people involved in the effort said the group had hoped to announce an
agreement around President Obama’s State of the Union address.¶ That date came and went, and now aides say that while the talks are
ongoing, participants are not setting a deadline or target date for releasing legislation.
Broad agreement
SILive 2/23 Staten Island Live, “Try for real immigration reform,” By Staten Island Advance Editorial on
February 23, 2013 at 5:31 AM, updated February 23, 2013 at 8:58 AM
http://www.silive.com/opinion/editorials/index.ssf/2013/02/for_real_immigration_reform_se.html
"Real reform means strong border security, and we can build on the progress my administration has already made — putting more boots on
the southern border than at any time in our history and reducing illegal crossings to their lowest levels in 40 years," said President Obama.¶
Both President Obama
and a majority of members of Congress support immigration reform.
And they agree there is a growing need to decide the fate of the 11 million illegal immigrants in
America.¶ Then, too, there is agreement in Washington that a cornerstone of any new immigration-reform
legislation must be securing our borders once and for all; otherwise, the flood of illegals won't stop.
Momentum now—continual key to compromise
Karst 2/21 (Tom, The Packer, “ Immigration reform hangs in the balance ”
http://www.thepacker.com/fruit-vegetable-news/Immigration-reform-hangs-in-the-balance192315371.html) will
Guenther, senior vice president of public policy for the United Fresh Produce Association, Washington, D.C., said the immigration issue
has the most energy and momentum of any time since 2006.¶ But the fate of the issue may be settled by
how greedy immigration reform advocates become, he said.¶ “Part of this is going to be how interested
stakeholders on all sides of this very complicated issue — agriculture, the broader business community, unions, farm
worker advocates — are these people going to start overreaching?” Guenther said Feb. 20.¶ Guenther said that while
there is bipartisan support for reform in the House and Senate, a comprehensive immigration deal isn't
a slam dunk.¶ Immigration reform was supported by President Bush in 2006, along with both Democratic
and Republican members of Congress, Guenther said. However, posturing and overreaching by opposing
factions of the debate led to the breakdown in the process.
CIR before everything else
Kowalski 2/20 (Alex, Bloomberg News, “All-the-Right-Skills Immigrants Ride U.S. Hiring Wave:
Economy” http://www.sfgate.com/business/bloomberg/article/All-the-Right-Skills-Immigrants-Ride-U-S-Hiring-4293469.php#ixzz2LalpH9Qx)
Working immigrants, who are more likely than native-born Americans to either lack a high school diploma or to hold an advanced
degree, have gained from a decades-long divergence in the labor market that has swelled demand for jobs paying
above- and below-average wages. Amid this dynamic, the battle over comprehensive changes in immigration
law is coming to the forefront in Congress.¶
Foreign-born workers “increase efficiency in the economy, and by increasing
efficiency, they eliminate bottlenecks,” said Pia Orrenius, a senior economist at the Federal Reserve Bank of Dallas who has studied
immigration. Their availability “lowers overall unemployment, and increases economic growth.”
Top of the docket
Papich 2/6 (Michael, The Pendulum, “Immigration reform returns to legislative forefront”
http://www.elonpendulum.com/2013/02/immigration-reform-returns-to-legislative-forefront/)
Four years ago, it was the stimulus package and the health care bill. Now, it’s immigration reform. Recent proposals from the Senate and the
president may make immigration reform the first big legislative push of Barack Obama’s next four
years.¶ A bipartisan committee of eight senators put out a framework for an immigration reform bill Jan. 28. Among other things, the
proposal includes a system to provide undocumented immigrants currently in the United States a way to obtain “probationary legal status”
after completing a background check and paying various fines and taxes. To receive a green card, these individuals would complete mandatory
English and civics courses, show a history of employment and undergo further background checks.
Rd 6 Neg vs Wyoming Marcum/Pauli
1AC
Local wind
1NC
1NC—DA
CIR passes—Obama pushing—momentum key
Nakamura 2/21 (David, WaPo, “Labor, business leaders declare progress in immigration talks”
http://www.washingtonpost.com/blogs/post-politics/wp/2013/02/21/labor-business-leaders-declare-progress-in-immigration-talks/) will
Labor and business leaders on Thursday said they have made progress toward a pact over how to
implement reforms of immigration laws in the workplace, but they stopped short of agreeing on a new guest worker
program for foreigners.¶ ¶ In a joint statement, AFL-CIO President Richard Trumka and U.S. Chamber of Commerce President
Thomas Donohue expressed optimism over their negotiations and emphasized they are committed to finding a solution that
would allow companies to more quickly and easily hire foreigners when Americans are not available.¶ “Over the last months, representatives of
business and labor have been engaged in serious discussions about how to fix the system in a way that benefits both workers and employers,
with a focus on lesser-skilled occupations,” the two leaders said. “We
have found common ground in several important
areas, and have committed to continue to work together and with Members of Congress to enact
legislation that will solve our current problems in a lasting manner.Ӧ A bipartisan Senate group that is developing
immigration reform legislation had asked the AFL-CIO and Chamber to come up with an agreement over
a potential guest worker program, a controversial provision that has helped sink previous attempts to overhaul immigration laws.¶
Donohue has called for a new guest worker program that would allow companies to hire more foreigners in low-skilled occupations such as
farming where there have been shortages of U.S. workers, and to allow foreign workers increased mobility to change jobs when necessary.
Trumka has said the labor union would agree only if the number of visas is reduced during times of high unemployment and if foreign
workers are provided a path to citizenship to help protect wages and benefits to all workers.¶ In the joint statement, the two sides said they
have agreed to three principles. The first is that American workers should have the first crack at all jobs, and the second would provide a new
visa that “does not keep all workers in a permanent temporary status, provides labor mobility in a way that still gives American workers a first
shot at available jobs, and that automatically adjusts as the American economy expands and contracts.” ¶ The third principle is a call for a new,
quasi-independent federal bureau that would monitor employment statistics and trends to inform Congress about where to set visa caps for
foreign workers each year.¶ “We are now in the middle – not the end – of this process, and we pledge to continue to work together and with
our allies and our representatives on Capitol Hill to finalize a solution that is in the interest of this country we all love,” Donohue and Trumka
said in the statement.¶ The
Senate working group, comprised of four Democrats and four Republicans, is aiming to develop
legislative proposals by next month, and President Obama has affirmed his support of the group’s general
principles.¶ Obama’s own legislative proposals, contained in a draft bill that the White House says is a backup plan if the Senate effort fails,
does not include a guest worker provision. As a senator in 2007, Obama voted in favor of an amendment to a comprehensive immigration bill
that would have sunset a guest worker program after five years; that immigration bill ultimately failed in the Senate, and some Republicans cite
the amendment as a reason why.¶ White
House press secretary Jay Carney said the joint statement represented “yet
another sign of progress, of bipartisanship and we are encouraged by it. At the same time, it is an agreement on
principles. We remain focused on encouraging the Senate to develop a comprehensive bill.”
Wind incites massive Congressional controversy—scuttles ideological bridging
Carney ‘12 (Timothy P., senior political columnist at the Washington Examiner, "Wind lobby strives to
adapt to Tea Party era," 2012, [washingtonexaminer.com/article/1273946], jam)
The Tea Party has weakened the clout of the wind lobby, and imperiled the industry's prized political possession - a billion-dollar "Production Tax Credit." Leaked documents show how the lobby is trying to adapt to Republican
power. The American Wind Energy Association's 2011 annual report and related documents were quietly posted on the Internet last week by a
wind power opponent upset by windmills' negative impact on birds. The documents show the lobby's efforts to frame its opponents as tax
hikers, and to use opposition research against subsidy critics, some of whom it classifies as "libertarian free-market fundamentalists." One of
AWEA's strongest responses to the 2010 GOP takeover of the House was to hire Republican strategist Mike Murphy, whose past clients include
John McCain, Mitt Romney and Jeb Bush. Murphy's Revolution Agency came on to help with AWEA's top legislative priority: extending the
federal Production Tax Credit for renewable energy. The PTC reduces a windmill owner's corporate income tax by 2.2 cents for every kilowatt-hour generated. "AWEA's message and champions have largely resided on the left," the Revolution Agency stated in a strategy memo included
in AWEA's 2011 annual report. So the 2010 elections required AWEA to "pivot" from "green energy and Obama to jobs, manufacturing,
business investment, and Conservative Republicans," while still "taking care not to erode base support from the left." One
core problem,
the memo explained: The "debt-strapped, partisan, and Tea Party-infused Congress is reflexively skeptical of
subsidies and many outside the windy red states have an inherently negative sentiment toward
renewable energy." So how do you convince fiscally conservative Republicans to preserve a subsidy for green energy? "The campaign
must define the failure to reauthorize the PTC as a tax hike with resulting negative implications for American jobs," Murphy's agency explained.
For instance, Revolution agency suggested AWEA give out a "Taxpayer Protector Award" to congressmen backing the credit. Murphy's group
also suggested "reaching out to a credible conservative tax advocate voice like Grover Norquist," president of Americans for Tax Reform.
Norquist has recently opposed the abolition of some special energy tax credits. But he and ATR endorsed legislation last year
that would end all energy tax credits, including the PTC, because it would also lower rates. Norquist told me Thursday
that he hadn't heard from AWEA on the PTC. To persuade the grassroots, Murphy and his colleagues suggested advertising on sites such as
RushLimbaugh.com, Hannity.com, GlennBeck.com, and FoxNews.com. Winning over Republican staff would not be easy, Revolution warned. "Among the GOP staffers, there is a strong ideological tilt and they're inclined to view renewable energy with skepticism." Specifically, Murphy's agency instructed AWEA not to pit wind energy against fossil fuels. One reason: "Fossil fuels and coal have spent so much time and money programming these Staffers."
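For scale, a back-of-the-envelope sketch of what the 2.2-cent-per-kilowatt-hour credit means for a single project; only the credit rate comes from the card, while the project size and capacity factor below are illustrative assumptions:

```python
# Rough illustration of the Production Tax Credit described above.
# Only the 2.2 cents/kWh rate is from the card; the nameplate capacity
# and capacity factor are hypothetical, assumed values.

ptc_rate = 0.022        # $ per kWh generated (from the card)
capacity_mw = 100       # assumed wind farm nameplate capacity
capacity_factor = 0.35  # assumed fraction of nameplate output realized
hours_per_year = 8760

annual_kwh = capacity_mw * 1000 * hours_per_year * capacity_factor
print(f"annual generation: {annual_kwh / 1e6:.0f} GWh")           # ~307 GWh
print(f"annual tax credit: ${annual_kwh * ptc_rate / 1e6:.2f}M")  # ~$6.75M
```

On those assumptions, a single mid-sized wind farm claims several million dollars a year, which is why the credit is the lobby's "prized political possession."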
Now is a key moment to affirm cultural plurality over anti-immigrant racism. Only
forming a society welcoming to people from other cultures creates the potentiality for
democratic praxis.
Palumbo-Liu 2/23 David Palumbo-Liu [Professor of Comparative Literature, Stanford] “The Failure of
Multiculturalism and the Necessity of Democracy” Truthout | Op-Ed Saturday, 23 February 2013 00:00
http://www.truth-out.org/opinion/item/14716-the-failure-of-multiculturalism-and-the-necessity-of-democracy
¶ Social
practices that best deliver multiculturalism are those that at the same time may be regarded as
enactments of democratic enfranchisement, the very thing that should inform the heart of the
multicultural project.¶ This past summer, I spoke on a panel on the question of multiculturalism in Europe. This topic seemed
especially urgent, not only because of Anders Behring Breivik's horrific killing spree in Norway - which he
stated was in large part motivated by his hatred of multiculturalism - but also because the leaders of
three major western nations had recently declared the "failure of multiculturalism."¶ Trying to make sense of
the declaration of multiculturalism's "failure," I turned to A. O. Hirschman's The Rhetoric of Reaction, which concisely puts forward
three theses used by reactionary rhetoric: "According to the perversity thesis, any purposive action to
improve some feature of the political, social or economic order only serves to exacerbate the condition
one wishes to remedy.... The futility thesis holds that attempts at social transformation will be
unavailing, that they will simply fail to make a dent." Finally, the jeopardy thesis argues that "the cost of
the proposed change or reform is too high as it endangers some previous, precious accomplishment."¶
Each of these three theses is evident in the pronouncements on multiculturalism's "failure": Policies
undertaken to advance a multicultural spirit have perversely only made things worse; multiculturalism is futile - it will not and cannot reach its
goals, and in the effort to make something out of this doomed project, the harm to the thing that we most cherish - our national identity - will
be enormous. And it was in the spirit of this sort of rhetoric that we heard Merkel, Sarkozy and Cameron all proclaim the "failure of
multiculturalism."¶ No matter that each had their specific, local, reasons for uttering this judgment; each of them then could evoke the need to
exercise what Cameron called "muscular liberalism" to step into the field of failure and inject some needed realism into their national agendas.
For each, this new activist "liberalism" would stop the state's pampering of minorities, its forced indulgence of their jealous hold onto their
particularities and forcibly integrate them into the national social, political and cultural body. It was not really even about "their own good"
anymore - it was now legitimate to focus rather on the common good of "the Nation."¶ Let's
look at two issues hidden behind
this idea of multiculturalism's reputed failure. By bringing them to the foreground, we can add a crucial dimension to our
understanding of possible routes to "success." First - race. Clearly, not all "cultures" are in need of being integrated in
a "muscular" way. It is those cultures whose members are most conspicuously marked by racial
difference or notably different religious or cultural practices that are considered to be most reticent to
join the national body, and in need of extra help. In the EU, this had to specifically target Muslims. The fact that their
marginalization was brought about in large part by longstanding practices enabled by racial and religious
distinctions should not be lost on us. Second - democracy. The very existence of a national form
demands practices of segmentation and exclusion, for creating particular channels and locations for the
integration of diverse peoples, and differentially shaping their ability to participate in the democratic life
of the nation. Multi-"culturalism" can never erase the racist segmentation of some populations off from the core; it
is doomed to fail because it cannot alone overcome a much broader and deeper set of issues that exceed the "cultural." But this does
not mean we should abandon its ethos - rather it means that we need to get a firmer grip on reality.
Most important is the issue of democratic citizenship: "Integration" is not a mechanical process, simply
"presence" in the core society means little if there is not actual political enfranchisement and the ability
and resources to represent and advocate one's point of view.¶ In order to grasp the larger picture and understand better
the relation between multiculturalism and democracy, we need to scale up to a more global level, and a deeper historical frame, to recover the
traces of the long histories that have maintained the racial divide, and to see as well how new demands for enfranchisement have been
rebutted or rechanneled for an equally long time. In other words, today's "failure" should be placed at the feet of something other than
recalcitrant minorities or "flabby" liberalism.¶ The conference convened at the Paris UNESCO center. As we were at that particular locale, I
decided that it would be more than appropriate to revisit one of its foundational documents: the 1950 Statement on Race. In 1948 the UN
Social and Economic Council [assigned] UNESCO the task of "defining 'race' to support the assertion that racism was morally unacceptable, and
unsupported by science."1¶ The 1950 Statement begins simply and emphatically: "Scientists have reached general agreement in recognizing
that mankind is one: that all men belong to the same species."2 It argues that "the unity of mankind from both the biological and social
viewpoints is the main thing. To recognize this and to act accordingly is the first requirement of modern man." The statement's final point is
equally crucial: "All normal human beings are capable of learning to share a common life, to understand the nature of mutual service and
reciprocity, and to respect social obligations and contracts." The 1950 Statement can thus be credited with propounding - in an international
document presented on the world stage - a strong statement on equality.¶ The 1967 Statement on Racism takes these elements, but
transforms the context from post-war to postcolonial, and emphatically draws the connection between race and democracy. The first sentence
begins, "'All men are born free and equal both in dignity and in rights.' This universally proclaimed democratic principle stands in jeopardy
whenever political, economic, social and cultural inequalities affect human group relations. A particularly striking obstacle to the recognition of
equal dignity for all is racism [emphasis added]."¶ The panel looks out into the contemporary world to see the effects of historical racism (sadly,
more than 40 years later, things seem the same): "The social and economic causes of racial prejudice are particularly observed in settler
societies wherein are found conditions of great disparity of power and property, in certain urban areas where there have emerged ghettoes in
which individuals are deprived of equal access to employment, housing, political participation, education and the administration of justice, and
in many societies where social and economic tasks which are deemed contrary to the ethics or beneath the dignity of its members are assigned
to a group of different origins who are derided, blamed and punished for taking on these tasks."¶ We
can trace the trajectory
mapped out by the 1950 and the 1967 statements - the emphasis shifting from an optimistic argument
for equal dignity and mankind's "plasticity" in learning new ways of perceiving racial identity and
difference, to an acknowledgment of the obdurate, concrete manifestations of racism in the political,
social and economic lives of people - the abiding results of conquest, enslavement and colonialism. But one more piece of the
historical puzzle needs to be put in place before we can see why and how today's "multiculturalism" has failed to fulfill its role of primary
"integrator" for the nation. It is because, as I see it, for multiculturalism to work, we have to move beyond "tolerance" to practical integration
into democratic participation. It was in the late '60s and early '70s that the latter was thought to be problematic precisely because the influx of
"others" was too great.¶ This
becomes patently clear as we move from the broad international body of the
United Nations and the mandate of UNESCO to foster education, science and culture, to a specific
consortium that emerged in the 1970s, just a few years after the 1967 Statement on Racism. The Trilateral
Commission was originally created in 1973 "to bring together experienced leaders within the private sector to discuss issues of global concern
at a time when communication and cooperation between Europe, North America and Asia were lacking." One of its very first publications was
entitled, "The Crisis of Democracy: Report on the Governability of Democracies to the Trilateral Commission."3 Whereas the
US
contributor to this report is best known for his "clash of civilizations" thesis and bizarre anti-immigrant writings, Samuel Huntington in this earlier document reveals a deep skepticism toward the basic notion of democracy, which we had assumed was the thing to be guarded against civilizational antagonisms:¶ The essence
of the democratic surge of the 1960s was a general challenge to existing systems of authority, public and private. In one form or another, the
challenge manifested itself in the family, the university, business, public and private institutions, politics, the government bureaucracy and the
military service. People no longer felt the same obligation to obey those whom they had previously considered superior to themselves in age,
rank, status, expertise, character or talents.... Each group claimed its right to participate equally - in the decision which affected itself....¶ This
increase in political participation is "primarily the result of the increased salience which citizens perceive politics to have for their own
immediate concerns."¶ So what's wrong with that? Isn't this precisely the picture of a
robust democratic society? Not exactly, for this
vigor is largely made up of minority voices and viewpoints demanding attention to their particular needs.
This put pressure on the political institutions of the state: "In the United States, the strength of democracy poses a problem for the
governability of democracy.... We have come to recognize that there are potentially desirable limits to the indefinite extension of political
democracy. Democracy will have a longer life if it has a more balanced existence."¶ Indeed, turning to the essay by the French representative
and chief author for this volume, we find Michel Crozier making the same points, albeit from a western European perspective:¶ The
vague
and persistent feeling that democracies have become ungovernable has been growing steadily in
Western Europe.... The European political systems are overloaded with participants and demands, and they have increasing difficulty in
mastering the very complexity which is the natural result of their economic growth and political development.... There are a number of
interrelated reasons for this situation. First of all, social and economic developments have made it possible for a great many more groups and
interests to coalesce. Second, the information explosion has made it difficult, if not impossible, to maintain the traditional distance that was
deemed necessary to govern. Third, the democratic ethos makes it difficult to prevent access and restrict information, while the persistence of
the bureaucratic processes, which had been associated with the traditional governing systems, makes it impossible to handle them at a low
enough level.... Citizens make incompatible claims. Because they press for more action to meet the problems they have to face, they require
more social control.¶ I recognize the difference in our historical situations; nevertheless the ideological contradictions abide, and directly inform
the "failure" of multiculturalism. For if new sources of labor were required in Europe after the Second World War, and if that demand, along
with the effects of decolonization, the fall of socialist states - and, since 9/11 - the specter of international terrorism have called forth the need
to mobilize notions of republicanism and integration anew, then at one and the same time, the issues of overcapacity, the conceptual (if not
real) threat of excessive equality, and the notion that some peoples from some cultures could not, or would not, be integrated, and that the
"Information Age" threatened to create the possibility of more people not only knowing more things, but using that knowledge to exercise their
democratic rights, all had to be revisited. Indeed, Cameron, Merkel and Sarkozy all argued that it was the intolerant, non-compliant "others"
that held too jealously to their "customs" or, much worse, tried to impose them violently on others in one virulent form or another, that made
any attempt at multiculturalism wrong-headed and destructive.¶ However, such claims simply went against fact. Not only were actually existing
state policies significantly working, but equally important, ordinary people were actually working out "integrative" measures themselves at the
local level. Consider the Searching for Neighbors project taking place in Hungary, Germany, Italy and Austria, which investigated the "tension
between the top-down and bottom-up, between policy and practice," and examined not only the failures but also the successes of state and
local policies in the internal borderlands of cities. Likewise, UNESCO's "International Coalition of Cities Against Racism" moves from the nation-state level to connections across the world between cities, each with their own particular situations but each also committed to a global
initiative against racism. This sort of project gives the idea of a "global" city a new valence as it recognizes precisely the historical issues already
embedded in the 1950 and 1967 statements and others, but also the new conditions of the contemporary world. Finally, a Dec. 4, 2008 article
in The Economist, "When Town halls turn to Mecca," reports on municipal measures to create spaces for Muslim practices: "In the Brussels
suburb of Molenbeek, where the dominant culture is that of Morocco, a circular from the district authorities reminds residents not to kill
animals at home. It invites them to a 'temporary abattoir' that will function for 48 hours in a council garage. Molenbeek is one of four areas in
Brussels which have set up makeshift slaughterhouses.... In places like Molenbeek, a few miles away from the European Union's main
institutions, talk of the continent's transformation into Eurabia doesn't sound absurd.... Yet talk of civilizational war in Europe's cobblestoned
streets is out of line in one respect: It understates the ability of democratic politics, especially local politics, to adapt to new social phenomena."
Considering these cases, and many more, it becomes clear that "multiculturalism" has not failed entirely; it is practiced in all sorts of formal and
informal ways to good effect. These successes thus refute both the declarations of its wholesale "failure" and the alibi that declaration provides
to shut down the spirit and practice of multiculturalism. We would do better to strategically recognize exactly what works and why.¶ Now one
obvious and important criticism of my promotion of such practices is that it might take pressure off the state to enact such things. But I don't
think it's a zero-sum game. Rather, such initiatives at the local, informal and semi-formal level can be exercises in democracy with a small "d,"
but with immediate and local effects. This
does not mean we give up political action on a national level, but that
we come to understand better the capacities and will at that level, and respond accordingly. Whether
multiculturalism succeeds or fails is directly linked to the achievability of democracy.
1NC
The United States Department of the Treasury should clarify through a revenue ruling that income from locally-owned wind power equipment produced for on-site demand is Real Estate Investment Trust-eligible.
REITs solve investment restructuring—generates large scale energy development—
empirics
Mormann and Reicher ’12 Felix Mormann and Dan Reicher, Steyer-Taylor Center for Energy Policy
and Finance at Stanford Law and Business, “Smarter Finance for Cleaner Energy: Open Up Master
Limited Partnerships (MLPs) and Real Estate Investment Trusts (REITs) to Renewable Energy
Investment,” Brookings Institute, November 2012,
http://www.brookings.edu/~/media/Research/Files/Papers/2012/11/13%20federalism/13%20clean%20
energy%20investment.pdf
REITs, meanwhile, have a total market capitalization of over $440 billion, with recent Internal Revenue
Service “private letter rulings” blazing a trail for investment in gas pipelines, power transmission, and
other energy-related projects. With average shareholder dividends of approximately 9 percent, REITs
raised over $50 billion in 2011 alone, including some for traditional energy and infrastructure projects,
and overall more capital than all U.S. clean energy investment for that year.
CP avoids congressional action
Mormann and Reicher ’12 Felix Mormann, and Dan Reicher, “Smarter Finance for Cleaner Energy:
Open Up Master Limited Partnerships (MLPs) and Real Estate Investment Trusts (REITs) to Renewable
Energy Investment,” Brookings Institute, November 2012,
http://www.brookings.edu/~/media/Research/Files/Papers/2012/11/13%20federalism/13%20clean%20
energy%20investment.pdf
REITs could be opened for renewable energy investment through executive or legislative action. Executive
action would require the Department of Treasury to clarify—through project-specific private letter rulings or,
preferably, a broadly applicable “revenue ruling”—that renewable power generation equipment
qualifies as real property under the tax code and that income from these assets, including from the
sale of electricity, is considered REIT-eligible income. The same clarification could also be made through a legislative
amendment to the federal tax code. Use of MLPs for renewable energy projects would require legislative action,
such as the amendments proposed by the Master Limited Partnerships Parity Act. Current law specifically prohibits MLP
investment in “inexhaustible” natural resources, leaving little room for statutory interpretation that
would make renewables MLP-eligible.
1NC—K
The opposition of local autonomy versus global corporatization leaves unchallenged
formative assumptions of scale that bolster economic stratification—local
productionism is still implicated in global domination
Brown and Purcell ’5 J. Christopher Brown, Mark Purcell, “There’s nothing inherent about scale:
political ecology, the local trap, and the politics of development in the Brazilian Amazon,” Geoforum 36
(2005) 607–624, doi:10.1016/j.geoforum.2004.09.001
If we were to unify the three theoretical principles above into a single directive for research on scale, we might say that the analysis of scale
should examine how the relationships among scales are continually socially produced, dismantled, and re-produced through political struggle.
The analysis should always see scales and scalar relationships as the outcome of particular political projects. It should therefore address
which political interests pursue which scalar arrangements. Furthermore, it should analyze the agenda of those political
interests. Political and economic geographers have incorporated these principles of scale into their examination of their principal subject, the
global political economy. Their main argument with respect to scale is that the most recent (post-1970) round of global restructuring has
involved a scalar shift in the organization of the global political economy. In
the Fordist era, capitalist interests involved in
large-scale manufacturing allied with national governments to produce a stable national-scale
compromise between capital and labor that suited the interests of capital, labor, and the state. In the
contemporary, post-Fordist era, this national-scale hegemony is being dismantled; capital has expanded
the scale of its operations in an effort to overcome the crises of the early 1970s and establish a new
international regime of accumulation studded with particularly strong regional clusters of economic
activity (Peck and Tickell, 1994; Storper, 1997; Dicken, 1998; Scott, 1998). Though the national-scale state remains powerful in this new
arrangement, its hegemony has slipped and new scales of state organization have become increasingly important (Jessop, 1998; MacLeod and
Goodwin, 1999; Jessop, 2000; Peck, 2001). The relationship
among the various scales of organization is therefore presently in
relative flux. The clear trend so far in the post-Fordist era has been a shift away from the dominance of national scale
arrangements and toward organization at both local/regional scales and international/global scales. Thus we see, for example,
analyses of ‘‘state devolution’’ whereby the national-scale state cedes authority and responsibility to sub-national states
and non-state institutions (Staeheli et al., 1997). We see also analyses of ‘‘the internationalization of production’’ whereby firms whose
organization and operations were formerly limited mostly to the national scale have expanded their operations to international scales in a
search for a new ‘‘spatial fix’’ for accumulation (Harvey, 1982; Dicken, 1998). This dual scalar shift away from the national and toward the global
and local has been termed ‘‘glocalization’’ (Swyngedouw, 1992; Courchene, 1995; Robertson, 1995), and it represents a central component of
the argument in recent political economy. Most observers argue that it remains to be seen if this ‘‘glocalization’’ trend will result in a re-fixing
of hegemony such that local and the global scales share an enduring scalar preeminence at the expense of the national. But the thrust of the
glocalization thesis has been to understand how particular interests are struggling to unfix, rearrange, and re-fix the relationships among
socially produced scales in the global political economy. 3. Scale in political ecology To an extent, the roots of the political ecology tradition lie
in an effort to transcend the methodological limitations of cultural ecology, which too often examined only the local scale and treated it as a
closed system. Political ecology's rallying cry has been for analyses that examine the ‘‘wider political economy’’ so that the local scale can be
analyzed in its wider scalar context (Ite, 1997; Mehta, 2001; Filer, 1997; Sharpe, 1998; Vayda, 1983; Tanner, 2000; Bebbington, 1999). Arguing
that cultural ecology's view of local places was based on outdated notions of closed, stable ecological systems, the early political ecologists
called for more attention to how local human ecologies were embedded in a set of wider political-economic processes that greatly influenced
local outcomes (Blaikie, 1985; Blaikie and Brookfield, 1987; Peet and Watts, 1996; Zimmerer, 1996). Political ecologists in the 1980s were
influenced by the political-economic literature in general and Marxist thought in particular. Bryant (1998, p. 3) notes that ‘‘neo-Marxism offered
a means to link local social oppression and environmental degradation to wider political and economic concerns relating to production
questions.’’ Despite this early engagement, over the past ten years or so political ecologists have for various reasons moved away from their
engagement with work in political economy. 8 Evidence of this disengagement is that the work on the politics of scale, one of the most dynamic
bodies of work in current political economy, has scarcely attracted attention from political ecologists. Citations of the politics of scale literature
are rarely found even in political ecological work that is explicit in its attention to scale (e.g. Bassett and Zueli, 2000; Adger et al., 2001;
Schroeder, 1999a,b; Gezon, 1999; Raffles, 1999; Awanyo, 2001; Becker, 2001; Freidberg, 2001; Young, 2001). Erik Swyngedouw's work is
particularly instructive. He is both a leading scale theorist and an insightful political ecologist working primarily on water provision. Oddly, much
of his political ecology work does not draw explicitly on his scale work, even when the potential links are quite clear (Swyngedouw, 1997c,
1999). Although it does not explain Swyngedouw's work, part of the move away from political economy is traceable to a turn in political ecology
toward post-structural, post-colonial, and postmodern perspectives. These approaches are often critical of the overarching meta-narratives
advanced by some political economists (Mohan and Stokke, 2000). To be clear, we do not suggest that the post-structural turn has been a bad
thing for political ecology; we are not calling for a rejection of post-structural approaches or for a wholesale return to political economy.
Neither are we calling for a whole-cloth acceptance of the scale literature's analysis of the global political economy. Instead, we argue that
current research in political economy has something specific to offer political ecology: its theoretical conception of scale. Further, we argue
political ecology is not now taking full advantage of this potential. In many respects the
tendency to essentialize local scale
arrangements also stems from political ecology’s early development. As we have seen, in the early 1980s political
ecologists were concerned to go beyond the local perspective of cultural ecology, to study how ‘‘the wider political economy’’ structured the
local cultures and ecologies that so occupied the attention of cultural ecologists. Pronouncements at the beginning of many political ecology
articles, for example, offer now-familiar phrases concerning how human–environmental relationships must be examined at ‘‘multiple scales’’,
‘‘across scales,’’ and ‘‘among scales,’’ because what happens in local places is impacted by human–environmental factors at different scales
(e.g. Vayda, 1983). As Zimmerer (1994, p. 117) states, ‘‘attention to multiple scales is now de rigueur’’ in political ecology. We suggest that
political ecologists’ simultaneous stress on both wider scales and political-economic processes has led them to conflate the two: ‘‘wider
scales’’ have become discursively attached to political economic processes, while the ‘‘local scale’’ has
become the scale of culture and ecology. Most political ecologists do not explicitly theorize scale as a social
construction, and this leaves open the possibility of entering the trap of assuming that certain scales can be
inherently tied to particular processes. The tendency in political ecology to think about a ‘‘wider’’ political economy (at larger
scales) distinct from a local culture and a local ecology has persisted (e.g. Escobar, 2001). As a result, contemporary political ecologists often
lament how the global political economy dictates local cultural and ecological processes, assuming that more decision-making authority
transferred to the local scale would allow the forces of culture and ecology to resist those of political economy. There are numerous examples
of this assumption in political ecology and development studies, both inside and outside of geography (e.g. Peluso, 1992; Adams and
Rietbergen-McCracken, 1994; Fairhead and Leach, 1994; Hall, 1997; Horowitz, 1998; Michener, 1998; Thorburn, 2000; Twyman, 2000; Brand,
2001; Campbell et al., 2001; Platteau and Abraham, 2002). We argue that the
local trap assumption is embedded in calls for greater attention to local indigenous knowledge, community-based natural resource management, and greater local participation in development. Much of this work seeks to highlight the positive qualities of local resistance to marginalization by
oppressive political economic processes at wider scales (Vickers, 1991; Herlihy, 1992; Posey, 1992; Miller, 1993; Brosius, 1997; Hall, 1997;
Stevens, 1997; Metz, 1998; Sillitoe, 1998; Pichon et al., 1999; Brodt, 2001; Stone and D'Andrea, 2001). This line of thinking
misses the fundamental fact that political economy, culture, and ecology all exist and operate simultaneously at a range of scales. Local scales are always embedded in and part of the global scale, and the global scale is constituted by the various local scales. Local and global cannot be thought of as separate or independent entities; they are inextricably tied together as
different aspects of the same set of social and ecological processes (Brenner, 2001b). Moreover, a given social
process cannot be
thought of as inherently attached to, or operating primarily at, a particular scale. Political economy is not inherently ‘‘wider,’’ and
culture and ecology are not inherently ‘‘local.’’ We cannot train our attention on one particular scale and hope to capture the essence of any of
these processes. Rather, we must examine together the various scales at which each operates, to understand how their scalar
interrelationships are socially produced through political struggle. Moreover, in order to understand what outcomes a particular scalar
arrangement will have, research must analyze the political agendas of the specific actors who pursue and are empowered by that scalar
arrangement. We contend that a more explicit and sustained engagement with the political economy literature on scale offers political ecology
a systematic theoretical way out of locally trapped thinking.
Financial incentives are behaviorist tools of control—local owners are remapped as
more efficient cogs in economic machinery
Adaman and Madra ’12 Fikret Adaman & Yahya M. Madra, “Understanding Neoliberalism as
Economization: The Case of the Ecology,” April 2012,
http://www.econ.boun.edu.tr/content/wp/EC2012_04.pdf
Neoliberal reason is therefore not simply about market expansion and the withdrawal of the welfare
state, but more broadly about reconfiguring the state and its functions so that the state governs its
subjects through a filter of economic incentives rather than direct coercion. In other words, supposed
subjects of the neoliberal state are not citizen-subjects with political and social rights, but rather
economic subjects who are supposed to comprehend (hence, calculative) and respond predictably (hence,
calculable) to economic incentives (and disincentives). There are mainly two ways in which states under the sway of neoliberal
reason aim to manipulate the conduct of their subjects. The first is through markets, or market-like incentive-compatible institutional mechanisms that economic experts design based on the behaviorist assumption that
economic agents respond predictably to economic (but not necessarily pecuniary) incentives, to achieve certain
discrete objectives. The second involves a revision of the way the bureaucracy functions. Here, the neoliberal reason functions as an
internal critique of the way bureaucratic dispositifs organize themselves: The typical modus operandi of this critique is to submit the
bureaucracy to efficiency audits and subsequently advocate the subcontracting of various functions of the state to the private sector either by
full-blown privatization or by public-private partnerships. While in the first case citizen-subjects are treated solely as economic beings, in the
second case the
state is conceived as an enterprise, i.e., a production unit, an economic agency whose functions are
persistently submitted
to various forms of economic auditing, thereby suppressing all other (social, political, ecological)
priorities through a permanent economic criticism. Subcontracting, public-private partnerships, and privatization are all
different mechanisms through which contemporary governments embrace the discourses and practices of contemporary multinational
corporations. In either case, however, economic
policy decisions (whether they involve macroeconomic or microeconomic matters)
are isolated from public debate and deliberation, and treated as matters of technocratic design and
implementation, while regulation, to the extent it is warranted, is mostly conducted by experts outside
political life—the so-called independent regulatory agencies. In the process, democratic participation in
decision-making is either limited to an already highly-commodified, spectacularized, mediatized
electoral politics, or to the calculus of opinion polls where consumer discontent can be managed
through public relations experts. As a result, a highly reductionist notion of economic efficiency ends up
being the only criteria with which to measure the success or failure of such decisions. Meanwhile, individuals
with financial means are free to provide support to those in need through charity organizations or corporations via their social responsibility
channels. Here, two related caveats should be noted to sharpen the central thrust of the argument proposed in this chapter. First, the
separation of the economic sphere from the social-ecological whole is not an ontological given, but
rather a political project. By treating social subjectivity solely in economic terms and deliberately
trying to insulate policy-making from popular politics and democratic participation, the neoliberal
project of economization makes a political choice. Since there are no economic decisions without a
multitude of complex and over-determined social consequences, the attempt to block (through
economization) all political modes of dissent, objection and negotiation available (e.g., “voice”) to those who
are affected from the said economic decisions is itself a political choice. In short, economization is itself a political
project. Yet, this drive towards technocratization and economization—which constitutes the second caveat—does not mean that the dirty and
messy distortions of politics are gradually being removed from policy-making. On the contrary, to the extent that policy making is being insulated from popular and democratic control, it becomes exposed to the “distortions” of a politics of rent-seeking and speculation—ironically, as predicted by the representatives of the Virginia School. Most public-private partnerships are hammered behind closed doors of a bureaucracy where states and multinational corporations divide the economic rent among themselves. The growing concentration of capital at the global scale gives various industries (armament, chemical, health care, petroleum, etc.—see, e.g., Klein, 2008) an enormous amount of leverage over the governments (especially the developing ones). It is extremely important, however, to note that this tendency toward rent-seeking is not a perversion of the neoliberal reason. For much of neoliberal theory (in particular, for the Austrian and the Chicago
schools), private monopolies and other forms of concentration of capital are preferred to government control and ownership. And furthermore,
for some (such as the Virginia and the Chicago schools), rent-seeking is a natural implication of the “opportunism” of human beings, even
though neoliberal thinkers disagree whether rent-seeking is essentially economically efficient (as in “capture” theories of the Chicago school
imply) or inefficient (as in rent-seeking theories of the Virginia school imply) (Madra and Adaman, 2010). This reconfiguration of the way
modern states in advanced capitalist social formations govern the social manifests itself in all domains of public and social policy-making.
From education to health, and employment to insurance, there is an observable shift from rights-based policy-making forged through public deliberation and participation, to policy-making based
solely on economic viability where policy issues are treated as matters of technocratic calculation. In
this regard, as noted above, the treatment of subjectivity solely in behaviorist terms of economic incentives
functions as the key conceptual choice that makes the technocratization of public policy possible.
Neoliberal thinking and practices certainly have a significant impact on the ecology. The next section will focus on the different means through
which various forms of neoliberal governmentality propose and actualize the economization of the ecology.
Their faith in market topologies and technical innovation makes wind expansion
doomed to failure—this tech-optimism conceals significant social externalities
Hildyard et al. ’12 Nicholas Hildyard, Larry Lohmann and Sarah Sexton, “Energy Security For What?
For Whom?” The Corner House, February 2012,
http://www.thecornerhouse.org.uk/sites/thecornerhouse.org.uk/files/Energy%20Security%20For%20W
hom%20For%20What.pdf
The same (over) optimistic faith in the ultimate ability of markets, technological innovation, energy
substitution and economic growth to overcome all scarcities is reflected in the supposition that
renewable energies will be able to power a continuously-expanding global economy. But to put in place
the necessary generating plant powered by solar collectors, wind turbines and tidal systems would
require large areas of land and large quantities of aluminium, chromium, copper, zinc, manganese,
nickel, lead and a host of additional metals, most of which are already being used for other purposes;
their increased supply is “problematic if not impossible”.34 Water, too, is likely to prove a major
constraint. Already, the energy system is the largest consumer of water in the industrialised world (in
the US half of all water withdrawals are for energy, to cool power stations, for example). The
development of alternative fuels, including non-renewable “alternatives” such as electricity derived
from nuclear power, and shale oil and gas, is likely to increase water use still further. 35 A 2006 Report
from the US Department of Energy calculates that to meet US energy needs by the year 2030, total US
water consumption might have to increase by 10 to 15 per cent – and that such extra supply may not be
available. 36 Unsurprisingly, US Secretary of State Hillary Clinton announced in March 2010 that global freshwater scarcity was now a
national security concern for US foreign policy makers. 37 This is not even to mention one of the biggest threats of a
runaway “renewables”-based economy, which is to ordinary people’s access to land, as it faces further
enclosure for giant wind farms or solar parks. The problems do not end there. Whilst significant and
wholly welcome gains have been made in improving energy efficiencies, these gains are soon overtaken
by continued economic expansion (see Box: “Energy Efficiency”).
Alt text: Affirm geography without scale.
The fiction of scale shields political hierarchies from criticism—the role of the ballot is
to interrogate spatial assumptions—critique of Cartesian domination is key to
resisting technocracy
Springer ’12 Simon Springer, “Human geography without hierarchy,” currently in review but
conveniently posted on his academia.edu account, 2012
Scale is what Jacques Lacan (1977/2001) would refer to as a ‘master-signifier’, through which the subject
maintains the artifice of being identical with its own signifier. A master-signifier is devoid of value, but
provides a point de capiton, or anchoring point around which other signifiers can stabilize. The point
de capiton is thus the point in the signifying chain–in the case of scale the Archimedean point–at
which “the signifier stops the otherwise endless movement (glissement) of the signification” and
generates the essential(ist) illusion of a fixed meaning (Lacan 1977/2001: 231). There is only one exit. To
envision a human geography without hierarchy we must ultimately reject the Archimedean point by
leaping out of the Cartesian map, and into the world: the map is an abstraction; it cannot cover Earth
with 1:1 accuracy. Within the fractal complexities of actual geography the map can see only
dimensional grids. Hidden enfolded immensities escape the measuring rod. The map is not accurate;
the map cannot be accurate (Bey 1991: 101). The map is scale, and scale the map,5 sustained on the
exclusion of unconsciousness–the knowledge that is not known (here be dragons!)–and a failure to see the
immensities that are enfolded within space through things like ‘hidden transcripts’ (Scott 1990), which
empower us in the everyday through their quiet declaration of alternative, counter discourses that
challenge, disrupt, and flatten the assumed ‘mastery’ of those who govern, that is, those who control
the map (Anderson 1991; Winichakul 1994).6 Newman (2012: 349) recognizes the potential of radical politics within spaces that are
liberated from the discourse of the master and the particular relation of authority to knowledge that it erects in seeking to exclude the
knowledge of the unconscious: The Master’s position of authority over knowledge also instantiates a position of political authority: political
discourses are, for instance, based on the idea of being able to grasp the totality of society, something that is, from a Lacanian point of view,
impossible. Implicated in this discourse, then, is the attempt to use knowledge to gain mastery over the whole social field; it is a discourse of
governing… The
terra incognita that scale, hierarchy, and governance inevitably produce becomes a
powerful weapon in the hands of the oppressed, which is why the anarchist tactic of direct action–in
contrast to civil disobedience–often proceeds outside of the view of authority (Graeber 2009). Direct action is not
fundamentally about a grand gesture of defiance, but is instead the active prefiguration of alternative
worlds, played out through the eternal process of becoming and a politics of infinitely demanding
possibilities (Critchley 2008). As a decentralized practice direct action does not appeal to authority in the hopes that it can be reasoned
with or temporarily bridled by a proletarian vanguard, for anarchism recognizes its corruption and conceit all too well and has nothing left to
say. Instead, the
geography of direct action is the spontaneous liberation of a particular area (of land, of time, of imagination) that produces a temporary autonomous zone (TAZ), which emerges outside of the gaze of authority only to dissolve and reform elsewhere before capitalism and sovereignty can crush or co-opt it (Bey 1991). A TAZ is driven by the confluence of action, being, and rebellion and its greatest strength is its invisibility. There is no separation of theory and practice here, and organizational choice is not subordinated to future events as in the
Marxist frame of understanding, it is continually negotiated as an unfolding rhizome that arises through the actual course of process.
Renewables → tech cronyism
Parry ’13 Adrian Parr, The Wrath of Capital, 2013, p. 15-17
A rising powerful transnational industry is… ecological cycles, and future lives.
1NC—DA
Wind power kills unique predators
Ritter 5 [Ritter – staff writer – 1/4/2005 (John, “Wind turbines taking toll on birds of prey,” USA Today,
http://www.usatoday.com/news/nation/2005-01-04-windmills-usat_x.htm)]
ALTAMONT PASS, Calif. — The big turbines that stretch for miles along these rolling, grassy hills have churned out clean, renewable electricity
for two decades in one of the nation's first big wind-power projects. But for just as long,
massive fiberglass blades on the more
than 4,000 windmills have been chopping up tens of thousands of birds that fly into them, including
golden eagles, red-tailed hawks, burrowing owls and other raptors. After years of study but little progress reducing
bird kills, environmentalists have sued to force turbine owners to take tough corrective measures. The companies, at risk of federal
prosecution, say they see the need to protect birds. "Once we finally realized that this issue was really serious, that we had to solve it to move
forward, we got religion," says George Hardie, president of G3 Energy. The
size of the annual body count — conservatively
put at 4,700 birds — is unique to this sprawling, 50-square-mile site in the Diablo Mountains between
San Francisco and the agricultural Central Valley because it spans an international migratory bird
route regulated by the federal government. The low mountains are home to the world's highest
density of nesting golden eagles. Scientists don't know whether the kills reduce overall bird
populations but worry that turbines, added to other factors, could tip a species into decline. "They didn't
realize it at the time, but it was just a really bad place to build a wind farm," says Grainger Hunt, an ecologist with the Peregrine Fund who has
studied eagles at Altamont.
Top level predators key to ecological health
Carey 6 [LiveScience staff writer – 7/19/2006 (Bjorn, “Top predators key to ecosystem survival,”
MSNBC, http://www.msnbc.msn.com/id/13939039)]
Top-level predators strike fear in the hearts of the animals they stalk. But when a deer is being mauled by a wolf, at least it can know that it's
giving its life for the greater good. A
new study reveals how ecosystems crumble without the presence of top
predators to keep populations of key species from growing too large. It also provides a cautionary
lesson to humans, who often remove top predators from the food chain, setting off an eventual
collapse. The study is detailed in the July 20 issue of the journal Nature. The researchers studied eight natural food
webs, each with distinct energy channels, or food chains, leading from the bottom of the web to the
top. For example, the Cantabrian Sea shelf off the coast of Spain has two distinct energy channels. One starts with the phytoplankton in the
water, which are eaten by zooplankton and fish, and so on up to what are called top consumer fish. The second channel starts with detritus that
sinks to the sea floor, where it's consumed by crabs and bottom-dwelling fish, which are consumed by higher-up animals until the food energy
reaches top-level consumers.
The top predators play their role by happily munching away at each channel's
top consumers, explained study leader Neil Rooney of the University of Guelph in Canada. "Top predators are kind of like the regulators
of the food web—they keep each energy channel in check," Rooney told LiveScience. "The top predator goes back and forth
between the channels like a game of Whac-a-Mole," a popular arcade game in which constantly appearing moles are
smacked down with a mallet. Constant predation of the top consumers prevents a population from growing larger than the system can support.
Boom or bust Removing
a top predator can often alter the gentle balance of an entire ecosystem. Here's an example of what can happen: When an area floods permanently and creates a series of islands, not all the islands have enough resources to support top predators. Top consumers are left to gobble up nutrients and experience a reproductive boom. The boom is felt throughout the system, though, as the booming species out-competes others, potentially driving the lesser species to extinction and reducing biodiversity. Rooney refers to this type of ecosystem change as a "boom-and-bust cycle," when one species' population boom
ultimately means another will bust. Bigger booms mean increased chances of a bust. "With each bust, the population gets very close to zero, and it's difficult getting back," he said. Human role in 'boom-and-bust' Humans often play a role in initiating boom-and-bust cycles by wiping out the top predators. For example, after gray wolves were hunted to near extinction in the
United States, deer, elk, and other wolf-fearing forest critters had free reign and reproduced willy-nilly, gobbling up the vegetation that other
consumers also relied on for food. Or, more recently, researchers found that when fish stocks in the Atlantic Ocean are overfished, jellyfish
populations boom. While jellyfish have few predators, removing the fish frees up an abundance of nutrients for the jellyfish to feast on.
Ecosystems provide us with the food we eat and help produce breathable air and clean water. But
they're generally fragile and operate best when at a stable equilibrium, scientists say. "These are our
life support systems," Rooney said. "We're relying on them. This study points to the importance of
top predators and that we need to be careful with how we deal with them."
D-Rule—the alterity of the animal is our foremost ethical responsibility—outweighs
utilitarian calculus of ‘inevitability’
Lawlor ‘7 Leonard Lawlor [is Faudree-Hardin Professor of Philosophy at The University of Memphis,
author of a handful of books, and editor of a couple journals], "Animals Have No Hand” An Essay on
Animality in Derrida, CR: The New Centennial Review 7.2 (2007) 43-69,
http://muse.jhu.edu/journals/new_centennial_review/v007/7.2lawlor.html
Let us strip the demonstration down one more time to its essential structure. If what most properly defines human existence is
the fault or defect of being mortal (or, more precisely, if understanding the possibility of mortality as possibility is what most
properly defines us), then we are able to say that we truly understand that possibility only if we have access to
death as such in the presence of a moment—in the blink of the eye, in indivisible and silent sovereignty,
secretly. But, since we only ever have access to the possibility of death as something other than
possibility—that is, as impossibility, as something blinding, as something shared across countless others—
we cannot say that we understand the possibility of death truly, naked even. Then, the being of us, our fault,
resembles the fault of animals.22 The fault now has been generalized, and therefore so has evil. The resemblance
between us and them in regard to the fault or evil, however, does not mean that we have
anthropomorphized the animals; it does not mean that we have succumbed to the risk of biological
continuism. With this resemblance, we have what Derrida, in Of Spirit, calls "une analogie décalée": "a staggered analogy" (1987d, 81;
1989b). There is a nonsimultaneity between us and them, between us and the other. This nonsimultaneity comes
with time or, rather, is "from time" ("depuis le temps"), as Derrida says in L'animal que donc je suis (2006, 40; 2002a, 390), "from always"
("depuis toujours"), as he says in "The Ends of Man" (1972b, 147; 1982, 123). What these phrases mean is clear: there is a fault, and yet there is
The nonsimultaneity is always there, in all of us, in the Geschlecht or genus or genre or gender or
race or family or generation that we are. The Geschlecht is always verwesende, "de-essenced" (1987d, 143; 1989b, 91).
The fault that divides, being there in us, means that all of us are not quite there, not quite Da, not quite dwelling, or
rather all of us are living out of place, in a sort of nonplace, in the indeterminate place called khōra, about which we
can say that it is neither animal nor divine—nor human, or that it is both animal and divine—and human.
Indeterminate, the nonplace contains countless divisions, countless faults. All of us living together in this
nonplace, we see now, is based in the fact that all living beings can end ("Finis," as in the title of the first essay found in
Aporias) (see also 1996a, 76; 1993a, 39). All the living beings are mortal (2006, 206), and that means we can speak of "the
ends of animal" (113; translation mine). All the living beings share in this weakness, in this lack of power, in "this impotence
[impuissance] at the heart of power" (2006, 49; 2002a, 396). All of us have this fault. Therefore we can return to a question we raised
earlier: are not all of us "poor in world"? This "poverty" ("Ar-mut," in German) implies a "feeling oneself poor," a kind of passion, a kind of
suffering (2006, 213; translation mine).23 Therefore, when
an animal looks at me, does it not address itself to me,
mutely, with its eyes? Does it not implore, just as Derrida's cat looks at him, imploring him to set it free (from
the bathroom)? And does not this look imply that, like us, animals suffer? The suffering of animals is undeniable (2006, 49;
2002a, 396). As Derrida always says in relation to these sorts of formulas, this undeniability means that we can only deny it,
and deny it in countless ways. Yet, none of these denials of the suffering of animals will have been
sufficient!
1NC—Case
Their “main street over wall street” critique of corporations is propaganda for wider
structures of inequality, restricting effective activism
English ’10 Michael D. English, “MAIN STREET IS WALL STREET OR AN INTERFACE WITH SLAVOJ ZIZEK’S
FIRST AS TRAGEDY, THEN AS FARCE,” 5/1/2010, http://www.unrestmag.com/main-street-is-wall-street-or-an-interface-with-slavoj-zizeks-first-as-tragedy-then-as-farce/
Žižek’s (2009), First as Tragedy, Then as Farce, is yet another salvo at proponents of the liberal world order. It is by no means a rigorous academic study or a
philosophical treatise, but an unflinching challenge to the status quo of the (un)critical Left. Using the financial collapse of 2008 as his anchor, Žižek launches his
characteristically unflattering assault on global capitalism and liberal democracy. Francis Fukuyama’s utopian vision of the “end of history” serves as the running
joke disparaged over and over throughout to remind readers that the West does not solely determine the delineation of history. History struck Fukuyama’s dream
first as tragedy on the morning of September 11, 2001 and it returned to mock it September of 2008. While
the world panders to the very
system and people responsible for the mess, begging that they should clean up after themselves, Žižek
asserts capitalism is running on fumes and assails the Left for its failure to pose a suitable
challenge. The work is comprised of two main essays: “It’s Ideology, Stupid!” and “The Communist Hypothesis.” “It’s Ideology, Stupid” is a hybrid of Žižek’s
previous work peppered with insights from Naomi Klein’s (2007) The Shock Doctrine. To function properly, capitalist economies
require a significant amount of state socialism (welfare) to keep them going. At no time is this more
apparent than during a period of financial shock or crisis. When those too big to fail actually do fail,
government bailouts are the only way to keep the captains of industry on life support. What this
exposes are the inconsistencies inherent within arguments postulated by liberal economists and
defenders of the free market. The market is only free to run its course because of a socialist safety net
provided through state intervention (an argument Rosa Luxemburg made roughly a century ago). While this critique is anything but new, the
failure within the United States to critically analyze its current predicament has led to the resurgence
of what at best could be considered misguided populism and at worst xenophobic nationalism. How did
we get to this moment? The paternalistic relationship between the champions of the market and the rest (read those in the West who now want a return to the days of 1773) was fine so long as the market provided. History, it seems, is a cruel judge of ideology. The events of 9-11 and the
financial crisis betrayed the hypocritical underpinnings of the capitalist ideology. One cannot save
Main Street without saving Wall Street; the American way of life is inextricably tied to exploitation
and those being exploited (read the global Rest) might just not like the relationship as it stands. The
irrationality of global capitalism is bound up in the willful ignorance of superstructure, as proven by the
continued failure of the advocates of Main Street to see a link between the two. The response from average Americans to losing
their home and life savings highlights this cycle of dependency. One loses faith in the priests of Wall
Street, but not in the flawed doctrine at the very heart of their suffering. We have arrived at a point
where we can imagine nothing beyond global capitalism. God may have hurt us, but god dammit he’s
still our god! Yet ideology is a far more insidious beast and each turn of the screw brings more of the same. We are not just blindly marching down the aisle to the altar of sacrifice. Instead, we, the critical left, are active participants through our silence and our failure to generate real
alternatives. What are we to make of the billionaire investor who wants to end malaria or the coffee company promising to give a share of its profits to a country in
need? Is this not an alternative to Wall Street’s greed? While this is moral pandering at its very best, we
have come to accept that the only
answer to capitalism is better, more socially responsible capitalism. It is ok to exploit so long as your
exploitation is lined with a heart of gold and biodegradable. The solution? For Žižek the Left has repeatedly failed to offer
convincing alternatives to global capitalism and communism is still the correct alternative. The second part of his text, “The Communist Hypothesis,” is the closest
Žižek has ever come to offering a plan of action. Lenin’s
dictum that “we must begin again from the beginning only do it
better this time” is Žižek’s call for another Hegelian split. This time from within the Left to separate
out the sympathizers of liberalism who in their own right profit from misery only to turn around
and repackage it as humanitarianism. The communist hypothesis is not a return to an ideal, but a
reaction to antagonisms (capitalism) that generate communism’s necessity. It is a necessity driven by
historical conditions. Capitalism is fundamentally flawed no matter how hip and eco-friendly. Žižek asserts
that no one is going to solve these problems for the Left except the Left, so why does the Left keep
waiting for someone to do their work for them? What is required is a return to serious thought and
perhaps what can only be termed as “non-action.” This can be read as both a plea to let the system fail and a call to the Left to stop
fooling around; “You’ve had your anti-communist fun, and you were pardoned for it – time to get serious once again!”
They don’t solve democracy or corruption—their ev cites broken judicial systems,
telecom monopolies, reliance on oil for mobility as alt causes
Participatory democracy is a myth to entrap resisters within the iron cage of expertist
post-politics
Swyngedouw ‘8 (Erik, Dept. Geography and Development at Manchester, “Where is the political?”
http://www.socialsciences.manchester.ac.uk/disciplines/politics/research/hmrg/activities/documents/S
wyngedouw.pdf) CJQ
Politics is hereby reduced to the sphere of policy-making, to the domain of governing and polic(y)ing through
allegedly (and often imposed) participatory deliberative procedures, with a given distribution of places and functions.
Consensual policy-making in which the stakeholders (i.e. those with recognized speech) are known in advance and
where disruption or dissent is reduced to debates over the institutional modalities of governing and the
technologies of expert administration or management, announces the end of politics, annuls dissent
from the consultative spaces of policy making and evacuates the proper political from the public sphere.
In this post-democratic post-political constitution, adversarial politics (of the left/right variety or of radically divergent struggles over
imagining and naming different socio-environmental futures for example) are considered hopelessly out of date. Although
disagreement and debate are of course still possible, they operate within an overall model of elite consensus and
agreement (Crouch 2004), subordinated to a managerial-technocratic regime (see also (Jörke 2005) (Blühdorn 2006)), sustained by a series
of populist tactics. What is at stake then, is the practice of genuine democracy, of a return to the polis, the
public space for the encounter and negotiation of disagreement, where those who have no place, are
not counted or named, can acquire, or better still, appropriate voice, become part of the police. But before
we can consider this, we need to return to the possibilities of ‘thinking the political’.
Complexity theory is arbitrary and counterproductive
Hendrick ‘9 Diane Hendrick, “Complexity Theory and Conflict Transformation: An Exploration of
Potential and Implications,” University of Bradford Department of Peace Studies, June 2009,
http://143.53.238.22/acad/confres/papers/pdfs/CCR17.pdf
It is still relatively early days in the application of complexity theory to social sciences and there are
doubts and criticisms, either about the applicability of the ideas or about the expectations generated
for them. It is true that the translation of terms from natural science to social science is sometimes contested due to the significant
differences in these domains, and that there are concerns that the meanings of terms may be distorted, thus making
their use arbitrary or even misleading. Developing new, relevant definitions for the new domain applications, where the terms
indicate a new idea or a new synthesis that takes our understanding forward, are required. In some cases, particular aspects of
complexity theory are seen as of only limited applicability, for example, self-organisation (see Rosenau's argument above
that it is only relevant in systems in which authority does not play a role). There are those who argue that much that is being touted
as new is actually already known, whether from systems theory or from experience, and so complexity
theory cannot be seen as adding value in that way. There are also concerns that the theory has not been
worked out in sufficient detail, or with sufficient rigour, to make itself useful yet. Even that it encourages
woolly thinking and imprecision. In terms of application in the field, it could be argued that it may lead to
paralysis, in fear of all the unexpected things that could happen, and all the unintended consequences
that could result, from a particular intervention. The proposed adaptability and sensitivity to emerging new situations may
lead to difficulties in planning or, better expressed, must lead to a different conception of what constitutes planning, which is, in itself,
challenging (or even threatening) for many fields. The
criteria for funding projects or research may not fit
comfortably with a complexity approach, and evaluation, already difficult especially in the field of
conflict transformation, would require a re-conceptualisation. Pressure for results could act as a disincentive to change project design
in the light of emergent processes. There may be the desire to maintain the illusion of control in order to retain the confidence of funders. On
the other hand, there are fears that complexity may be used as an excuse for poor planning, and
implementation, which is a valid concern for funders. In addition, there may be scepticism that the co-operation and coordination between different researchers or interveners (let alone transdisciplinary undertakings) appropriate to working on complex problem domains will work, due to differing mental models, competing
interests and aims, competition for funding, prestige, etc. Such attempts appear, therefore, unrealistic
or unfeasible.
Epistemology isn’t an advantage to the 1AC—nothing about the plan changes whether
people think linearly. The internal link chain of corporate ownership to democratic
corruption to social inequality is a double turn with this.
Wind’s expensive and inefficient—their numbers are industry-biased understatements
Helman 12-21 Christopher Helman, “Why It's The End Of The Line For Wind Power,” Forbes,
12/21/2012 aka the apocalypse, http://www.forbes.com/sites/christopherhelman/2012/12/21/why-its-the-end-of-the-line-for-wind-power/
First off — the windiest places are more often far away from where electricity is needed most, so the
costs of building transmission lines are high. So far, many wind projects have been able to patch into
existing grid interconnections. But, says Taylor, those opportunities are shrinking, and material expansion
of wind would require big power line investments. Second, the wind doesn’t blow all the time, so power
utilities have found that in order to balance out the variable load from wind they have to invest in
keeping fossil-fuel-burning plants on standby. When those plants are not running at full capacity they
are not as efficient. Most calculations of the cost of wind power do not take into account the costs per
kWh of keeping fossil plants on standby or running at reduced loads. But they should, because it is a
real cost of adding clean, green, wind power to the grid. Taylor has crunched the numbers and determined that these
elements mean the true cost of wind power is more like double the advertised numbers.
DG crashes the grid, restricts solvency
Newman et al. ’11 (Newman is part of the Physics Department at the University of Alaska, Carreras is a professor at Universidad Carlos III in Madrid, Kirchner is part of the Physics Department at the University of Alaska, and Dobson is a member of the ECE department at the University of Wisconsin; the paper was presented at the Hawaii International Conference on System Sciences)
[Newman, D. E., B. A. Carreras, M. Kirchner, and I. Dobson. "The Impact of Distributed Generation on
Power Transmission Grid Dynamics." (2011): n. pag. Print.] AMB
If one were able to build a power transmission system with highly reliable distributed generation,
these results suggest that system would be very robust and reliable. This makes sense from the point of
view of the dynamic reorganization that can occur. When an element of the system fails, there are many other
routes and generators that can take up the slack. The problem of course is that distributed
generation, particularly from renewables like wind and solar, is much less reliable than central generation facilities. As mentioned in the last section, distributed generation is not in general as reliable as the central generation. Wind, and therefore wind energy, in a given location can vary greatly, as, to a lesser degree, can solar power. When added to the generation mix, this less reliable generation source can impact the
transmission grid. To investigate this, we use the same distributed generation model as before, but now allow the distributed
generators to have a daily random fluctuation in their capacity. For this study, we have set a fraction (Pg) of the distributed generation
nodes to probabilistically be reduced to a fixed fraction (fg) of their nominal generation limits. In the cases shown here we have used Pg =
0.5 and fg = 0.3. This means that half of the distributed generators can have their capacity reduced to 0.3 of their nominal capacity.
While these added fluctuations in the system increase the probability of failure, the most pronounced
effect comes when the total amplitude of these distributed generation variations start to become
close to the total system generation margin. At that point a huge change in the dynamics occurs and
increasing the distributed generation seriously degrades the system behaviour. Figure 11 shows the
blackout frequency as a function of the degree of distribution for 2 uniform distribution cases, one without any variability of the
distributed generators and one with variability. At 0.1 degree of distribution the frequencies are the same but after 0.3 they diverge
sharply with the improvement seen in the reliable distribution cases reversed in the variable distribution case and
becoming much worse.
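The parameterization in this card (Pg = 0.5, fg = 0.3) is concrete enough to illustrate. The following minimal Python sketch is ours, not the authors': the node count, demand figure, and the assumption that each susceptible generator is knocked down on any given day with 50 per cent probability are hypothetical stand-ins, and the sketch models only aggregate shortfall, not the paper's cascading-failure dynamics. It shows the mechanism the card identifies: shortfalls spike once the aggregate swing of the variable generators approaches the system's generation margin.

import random

P_G = 0.5        # card's Pg: fraction of DG nodes subject to capacity drops
F_G = 0.3        # card's fg: reduced capacity as a fraction of nominal
N_NODES = 100    # hypothetical number of distributed generators
NOMINAL = 1.0    # nominal capacity per node (arbitrary units)
DEMAND = 80.0    # hypothetical demand; margin = 100 - 80 = 20 units
DAYS = 10_000

variable_nodes = int(P_G * N_NODES)   # 50 nodes may fluctuate
# Worst-case aggregate swing: 50 * (1.0 - 0.3) = 35 units > 20-unit margin,
# which is the regime where the card says blackout frequency diverges.
shortfall_days = 0
for _ in range(DAYS):
    total = (N_NODES - variable_nodes) * NOMINAL
    for _ in range(variable_nodes):
        # Each susceptible node independently drops to fg of nominal with
        # 50% probability (one simple reading of "probabilistically reduced").
        total += NOMINAL * (F_G if random.random() < 0.5 else 1.0)
    if total < DEMAND:
        shortfall_days += 1

print(f"Generation fell short of demand on {shortfall_days / DAYS:.1%} of days")

Shrinking DEMAND (widening the margin) drives the shortfall rate toward zero, which tracks the card's claim that trouble begins only when fluctuation amplitude nears the margin.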
Natty gas swamps solvency
Yergin ’12 Daniel Yergin is chairman and founder of IHS Cambridge Energy Research Associates, is on
the U.S. Secretary of Energy advisory board, and chaired the U.S. Department of Energy's Task Force on
Strategic Energy Research and Development, interviewed by Brian Dumaine, “Will gas crowd out wind
and solar?” CNN, 4/17/2012, http://tech.fortune.cnn.com/2012/04/17/yergin-gas-solar-wind/?iid=HP_LN
Fracking technology has given the U.S. a 100-year supply of cheap natural gas. What's its impact on coal, nuclear, wind, and solar power?
Inexpensive natural gas is transforming the competitive economics of electric power generation in the
U.S. Coal plants today generate more than 40% of our electricity. Yet coal plant construction is grinding to a halt: first,
because of environmental reasons and second, because the economics of natural gas are so compelling. It is being
championed by many environmentalists as a good substitute for coal because it is cleaner and emits about 50% less carbon dioxide. Nuclear
power now generates 20% of our electricity, but the plants are getting old and will need to be replaced. What will replace them? Only a few
nuclear plants are being built in the U.S. right now. The economics of building nuclear are challenging -- it's much more expensive than natural
gas. Isn't the worry now that cheap natural gas might also crowd out wind and solar? Yes. The debate is over whether natural gas is a bridge
fuel to buy time while renewables develop or whether it will itself be a permanent, major source of electricity. What do you think? Over the
past year the debate has moved beyond the idea of gas as a bridge fuel to what gas means to U.S. manufacturing and job creation and how it
will make the U.S. more globally competitive as an energy exporter. The President's State of the Union speech was remarkable in the way it
wrapped the shale gas boom into his economic policies and job creation. I believe natural
gas in the years ahead is going to be
the default fuel for new electrical generation. Power demand is going to go up 15% to 20% in the U.S.
over this decade because of the increasing electrification of our society -- everything from iPads to electric Nissan Leafs. Utilities will
need a predictable source of fuel in volume to meet that demand, and natural gas best fits that
description. And that won't make the environmental community happy? Well, natural gas may be a relatively clean hydrocarbon, but it's
still a hydrocarbon. So wind and solar will have a hard time competing? Remember that wind and solar account for only 3% of
our electric power, whereas natural gas is 23%, and its share will go up fast. Most of that 3% is wind. Natural gas
has a new role as the partner of renewables, providing power when the wind is not blowing and the sun is not shining. Will solar scale? Solar is
still under 1% of U.S. electric generation, and even though its costs have come down dramatically, they must come down a lot more. Solar is
generally much more expensive than coal and natural gas. You have to remember that energy is a huge, capital-intensive business, and it takes
a very long time for new technologies to scale. The euphoria that comes out of Silicon Valley when you see how quickly a Twitter or a YouTube
can emerge doesn't apply to the energy industry.
Neodymium shortage means no turbine construction
Chandler ‘12 David, MIT News Office, 4/9/12, “Clean energy could lead to scarce materials”
http://web.mit.edu/newsoffice/2012/rare-earth-alternative-energy-0409.html
As the world moves toward greater use of low-carbon and zero-carbon energy sources, a possible
bottleneck looms, according to a new MIT study: the supply of certain metals needed for key clean-energy technologies. Wind turbines, one of the fastest-growing sources of emissions-free electricity, rely
on magnets that use the rare earth element neodymium. And the element dysprosium is an essential
ingredient in some electric vehicles’ motors. The supply of both elements — currently imported almost
exclusively from China — could face significant shortages in coming years, the research found. The study, led
by a team of researchers at MIT’s Materials Systems Laboratory — postdoc Elisa Alonso PhD ’10, research scientist Richard Roth PhD ’92, senior
research scientist Frank R. Field PhD ’85 and principal research scientist Randolph Kirchain PhD ’99 — has been published online in the journal
Environmental Science & Technology, and will appear in print in a forthcoming issue. Three researchers from Ford Motor Company are coauthors. The study looked at 10 so-called “rare earth metals,” a group of 17 elements that have similar properties and which — despite their
name — are not particularly rare at all. All
10 elements studied have some uses in high-tech equipment, in many
cases in technology related to low-carbon energy. Of those 10, two are likely to face serious supply
challenges in the coming years. The biggest challenge is likely to be for dysprosium: Demand could increase by 2,600 percent over
the next 25 years, according to the study. Neodymium demand could increase by as much as 700 percent. Both materials
have exceptional magnetic properties that make them especially well-suited to use in highly efficient, lightweight motors and batteries. A single
large wind turbine (rated at about 3.5 megawatts) typically contains 600 kilograms, or about 1,300 pounds, of rare earth metals. A conventional
car uses a little more than one pound of rare earth materials — mostly in small motors, such as those that run the windshield wipers — but an
electric car might use nearly 10 times as much of the material in its lightweight batteries and motors. Currently,
China produces 98
percent of the world’s rare earth metals, making those metals “the most geographically concentrated of
any commercial-scale resource,” Kirchain says. Historically, production of these metals has increased by only a few percent each
year, with the greatest spurts reaching about 12 percent annually. But much higher increases in production will be needed to meet the
expected new demand, the study shows. China has about 50 percent of known reserves of rare earth metals; the United States also has
significant deposits. Mining
of these materials in the United States had ceased almost entirely — mostly
because of environmental regulations that have increased the cost of production — but improved
mining methods are making these sources usable again. Rare earth elements are never found in
isolation; instead, they’re mixed together in certain natural ores, and must be separated out through
chemical processing. “They’re bundled together in these deposits,” Kirchain says, “and the ratio in the deposits doesn’t
necessarily align with what we would desire” for the current manufacturing needs. Neodymium and dysprosium are not the most
widely used rare earth elements, but they are the ones expected to see the biggest “pinch” in supplies, Alonso explains, due to projected rapid
growth in demand for high-performance permanent magnets. Kirchain says that when they talk about a pinch in the supply, that doesn’t
necessarily mean the materials are not available. Rather, it’s a matter of whether the price goes up to a point where certain uses are no longer
economically viable. The researchers stress that their study does not mean there will necessarily be a problem meeting demand, but say that it
does mean that it will be important to investigate and develop new sources of these materials; to improve the efficiency of their use in devices;
to identify substitute materials; or to develop the infrastructure to recycle the metals once devices reach the end of their useful life. The
purpose of studies such as this one is to identify those resources for which these developments are most
pressing. While the raw materials exist in the ground in amounts that could meet many decades of
increased demand, Kirchain says the challenge comes in scaling up supply at a rate that matches
expected increases in demand. Developing a new mine, including prospecting, siting, permitting and
construction, can take a decade or more. “The bottom line is not that we’re going to ‘run out,’” Kirchain says, “but it’s an issue
on which we need focus, to build the supply base and to improve those technologies which use and reuse these materials. It needs to be a
focus of research and development.” Barbara Reck, a senior research scientist at Yale University who was not involved in this work, says “the
results highlight the serious supply challenges that some of the rare earths may face in a low-carbon
society.” The study is “a reminder to material scientists to continue their search for substitutes,” she says, and “also a vivid reminder that
the current practice of not recycling any rare earths at end-of-life is unsustainable and needs to be reversed.”
2NC
Anarchist Geographies
Entire industries are salivating at the thought of the plan – making consumption more environmentally friendly is an attractive label for consumers, allowing corporations to further their control over the global environment by painting themselves as responsible stewards.
Timothy W. Luke, Professor of Political Science at Virginia Polytechnic Institute and State
University, 1999, Discourses of the Environment, p. 141-42
In some sectors or at a few sites, ecologically
more rational participation in some global commodity chains may
well occur as a by-product of sustainable development. Over-logged tropical forests might be saved for biodiversity-seeking
genetic engineers; over-fished reefs could be shifted over to eco-tourist hotel destinations; over-grazed prairies may
see bison return as a meat industry animal. In the time—space compression of postmodern informational
capitalism, many businesses are more than willing to feed these delusions with images of
environmentally responsible trade, green industrialization, or ecologically sustainable commerce, in order to
create fresh markets for new products. None the less, do these policies contribute to ecologically sustainable
development? or do they simply shift commodity production from one fast track to another slower one,
while increasing opportunities for more local people to gain additional income to buy more commodities that
further accelerate transnational environmental degradation? or do they empower a new group of outside
experts following doctrines of engagement to intervene in local communities and cultures so that their
geo-power may serve Global Marshall Plans, not unlike how it occurred over and over again during Cold War-era experiments
at inducing agricultural development, industrial development, community development, social development and technological development?
Now that the Cold War is over, as the Clinton/Gore green geopolitics suggests, does
the environment simply substitute for
Communism as a source and site of strategic contestation, justifying rich/powerful/industrial states’
intervention in poor/weak/agricultural regions to serve the interests of outsiders who want to control
how forests, rivers, farms or wildlife are used?
Reclaiming public space is a prior question—perm is co-opted
Springer ’11 Simon Springer, “Public Space as Emancipation: Meditations on Anarchism, Radical
Democracy, Neoliberalism and Violence,” Antipode, Volume 43, Issue 2, pages 525–562, March 2011,
DOI: 10.1111/j.1467-8330.2010.00827.x
Since democracy is meant to be inclusive, it is specifically those public spaces and places that are of
primary importance. Thus, public space can be understood as the very practice of radical democracy, including
agonism, which might rid civil societies of hierarchy, technocracy, international patrons, government
appropriation, and co-optation by the modern aristocracy. Only a conception of civil society rooted in
public space is sufficient for a radical vision of democracy. This relationship between radical democracy and
space is crucial, because democracy requires not only spaces where people can gather to discuss the
issues of the day (Staeheli and Thompson 1997), but also places where ideas can be contested. A democratic
society must value public space as a forum for all social groups, where there should be no structural
deterrents prohibiting the ability of individuals to participate in public affairs. Public space as such is the
site where collective performance, speech, and agonism must be entrenched (Goheen 1998), thus allowing
it to become the primary medium through which identities are created and disputed (Ruddick 1996).
More energy production bolsters processes of exploitation—prior critique is the only
way to think energy more responsibly
Hildyard et al. ’12 Nicholas Hildyard, Larry Lohmann and Sarah Sexton, “Energy Security For What?
For Whom?” The Corner House, February 2012,
http://www.thecornerhouse.org.uk/sites/thecornerhouse.org.uk/files/Energy%20Security%20For%20Whom%20For%20What.pdf
Energy efficiency plays a significant role in the European Commission’s energy security plans. 40 But while higher energy efficiencies
of processes and appliances could reduce the amount of energy used by an individual, household or
business (at least initially), they would not necessarily result in reduced consumption overall, especially if the
price stayed the same or fell. In fact, energy efficiencies could lead to increased energy consumption. 41
This paradox was first noted by British economist Stanley Jevons, who observed that increased energy
efficiency in coal-fired steam engines resulted in more coal being used to power more steam engines in
more applications. In 1865, he concluded that greater efficiency in extracting and burning coal reduced
coal prices, thereby leading to greater overall coal consumption. He said: “It is wholly a confusion of ideas
to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary
is the truth.” 42 During the 1970s, another British economist, Len Brookes, argued likewise that devising ways to
produce goods with less oil – an obvious response to the sudden leap in oil prices – would merely accommodate the
new prices, causing oil consumption to be higher than it would have been if efforts had not been made
to increase efficiencies. Several examples illustrate this paradox. In 2005, for instance, the average US passenger
car consumed about 40 per cent less fuel per kilometre than it had done in 1960. But more widespread
ownership of automobiles (an average of two people per vehicle in 2005 compared to nearly three in 1970) and higher
average distances driven, particularly as out-of-town shopping malls and suburban housing proliferated
while public transport declined, resulted in average per capita automobile fuel consumption being 30
per cent higher in 2005 than in 1960. Despite increased energy efficiencies of refrigerators, light bulbs
and buildings, US electricity consumption in 2008 was double that of 1975 while overall energy
consumption was up by 38 per cent, even though manufacturing had been outsourced to Asia over this
period. During the 20th century, the efficiency of British public street lighting rose some 20-fold, but the intensity of the illumination
increased about 25 times, more than eliminating the efficiency gains. 43 Between 1980 and 2000, China halved the energy
intensity of its economy, but more than doubled its per capita energy consumption. Refrigeration has enabled
fresh fruit and vegetables to be transported ever greater distances and kept in shops and homes for longer, but has also contributed to more
food being thrown away. These examples suggest that under
an economic logic of “permanent” growth, reining in
energy use simply provides more energy to drive the whole system on. Efforts to mitigate the excesses
may only worsen them. More efficient energy transformations lower the per unit cost of captured
energy, which then stimulates increased consumption of the resources. Sociologist Bruce Podobnik who has studied
energy transitions of the past concludes: “True reductions in energy consumption require political and social
transformations; they are not caused by energy technologies alone.” 44
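The automobile example turns on a simple identity: per-capita fuel use equals kilometres driven per capita times fuel burned per kilometre. A minimal back-of-the-envelope sketch in Python, using only the two figures quoted in the card (the driving increase is derived by us, not taken from the source), makes the rebound arithmetic explicit:

# Rebound-effect arithmetic implied by the card's US automobile example.
# per-capita fuel use = (km per capita) * (fuel per km)
fuel_per_km_2005 = 0.60    # "about 40 per cent less fuel per kilometre" than 1960
consumption_ratio = 1.30   # per-capita use "30 per cent higher in 2005 than in 1960"

# Driving increase needed for both statements to hold at once:
km_ratio = consumption_ratio / fuel_per_km_2005
print(f"Per-capita driving rose roughly {km_ratio:.2f}x ({km_ratio - 1:.0%} more km)")

The efficiency gain is more than swallowed: distance driven must have slightly better than doubled, which is consistent with the card's account of more cars per person and longer average trips.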
All the tropes of scale, from the global/local binary to the critique of corporations shielding individual agency to the redemptive myth of an uncorrupt government, are terrible for social theory
Latour ‘5 Bruno Latour, Reassembling the Social: An Introduction to Actor-Network Theory, Oxford
University Press: New York, 2005, p. 185-186
Scale is the actor’s own achievement. Although this is the oldest and, in my view, the most decisive proposition made by ANT,
I have never encountered anyone who could accept to even glance at the landscape thus revealed—no more, if I dare the parallel, than Galileo
could tempt his ‘dear and respected colleagues’ to have a look through his makeshift telescope. The reason is that we
tend to think of
scale— macro, meso, micro—as a well-ordered zoom. It is a bit like the marvelous but perversely misleading book The Powers of
Ten, where each page offers a picture one order of magnitude closer than the preceding one all the way from the Milky Way to the DNA fibers,
with a photo somewhere in the middle range that shows two young picnickers on a lawn near Lake Superior. A microsecond of reflection is
enough to realize that this montage is misleading—where
would a camera be positioned to show the galaxy as a
whole? Where is the microscope able to pin down this cell DNA instead of that one? What ruler could
order pictures along such a regular trail? Nice assemblage, but perversely wrong. The same is true of the
zooming effect in the social realm, except that, in this case, it is taken not as a clever artistic trick, but as a most natural injunction
springing from the sturdiest common sense. Is it not obvious that IBM is ‘bigger’ than its sales force? That France is ‘wider’ than the School of
Mines that is much ‘bigger’ than me? And if we imagine IBM and France as having the same star-like shape as the command and control war
room I mentioned earlier, what would we make of the organizational charts of IBM’s corporate structure, of the map of France, of the picture
of the whole Earth? Are they not obviously providing the vastly wider ‘framework’ into which ‘smaller things’ have to be ‘situated’? Does it not
make perfect sense to say that Europe is bigger than France, which is bigger than Paris that is bigger than rue Danton and which is bigger than
my flat? Or to say that the 20th century provides the frame ‘in which’ the Second World War has ‘taken place’? That the battle of Waterloo, in
Stendhal’s The Charterhouse of Parma, is a vastly more important event than Fabrizio del Dongo’s experience of it? While readers might be
ready to listen patiently to the claims of ANT for a new topography, they won’t take it any further if it goes too much against every
commonsensical reaction. How could ‘putting things into a frame’ not be the most reasonable thing to do? I agree that the point is to follow
common sense. I also agree that framing things into some context is what actors constantly do. I am simply arguing that it is this very
framing activity, this very activity of contextualizing, that should be brought into the foreground and that it cannot be
done as long as the zoom effect is taken for granted. To settle scale in advance would be sticking to one
measure and one absolute frame of reference only when it is measuring that we are after; when it is
traveling from one frame to the next that we want to achieve. Once again, sociologists of the social are not abstract
enough. They believe that they have to stick to common sense, although what demonstrates, on the contrary, a complete lack of reason is
imagining a ‘social zoom’ without a camera, a set of rails, a wheeled vehicle, and all the complex teamwork which has to be assembled to carry
out something as simple as a dolly shot. Any
zoom of any sort that attempts to order matters smoothly like the set
of Russian dolls is always the result of a script carefully planned by some stage manager. If you doubt it, then
go visit Universal Studios. ‘Ups’ and ‘downs’, ‘local’ and ‘global’ have to be made, they are never given. We all know
this pretty well, since we have witnessed many cases where relative size has been instantaneously reversed—
by strikes, revolutions, coups, crises, innovations, discoveries. Events are not like tidy racks of clothes in
a store. S, M, L, XL labels seem rather confusingly distributed; they wane and wax pretty fast; they shrink or enlarge at lightning speed. But
we never seem ready to draw the consequences of our daily observations, so obsessed are we by the gesture of ‘placing things into their wider
context’. And yet this gesture should also be carefully documented! Have
you ever noticed, at sociological conferences, political
meetings, and bar palavers, the hand gestures people make when they invoke the ‘Big Picture’ into which
they offer to replace what you have just said so that it ‘fits’ into such easy-to-grasp entities as ‘Late
Capitalism’, ‘the ascent of civilization’, ‘the West’, ‘modernity’, ‘human history’, ‘Postcolonialism’, or ‘globalization’? Their hand
gesture is never bigger than if they were stroking a pumpkin! I am at last going to show you the real size
of the ‘social’ in all its grandeur: well, it is not that big. It is only made so by the grand gesture and by the
professorial tone in which the ‘Big Picture’ is alluded to. If there is one thing that is not common sense, it
would be to take even a reasonably sized pumpkin for the ‘whole of society’. Midnight has struck for
that sort of social theory and the beautiful carriage has been transformed back into what it should always have remained: a member
of the family Cucurbitaceae.
Replacing scalar hierarchy with flat ontology is key to seeing through the
homogenizing and totalizing logics of neoliberalism which trigger serial policy failure
and systemic oversight
Springer ’12 Simon Springer, “Human geography without hierarchy,” currently in review but
conveniently posted on his academia.edu account, 2012
To return to the epigraph that opened this essay, let me ask again what happened to ethics along the way? What has become of radical
geography when it seems to willingly shed its heterodox skin to embrace the orthodoxy of hierarchy? Marston et al.’s (2005) flat ontology is a
bold envisioning that yearns to think beyond distraction, and yet as they stormed the citadel of entrenched geographical thought, tearing its
vertical scaffolding to the ground, an ‘old guard’ came out in legion to disavow their project (Leitner and Miller 2007; Jessop et al. 2008; Jonas
2006). By
marching to the drum of a horizontal politics, where all participants to the dance that we call
‘life’ become agonistically defined as legitimate and welcome claimants to the global commons, we
invoke a very different kind of geography, one that is thoroughly radical in its orientation because it
recognizes, much like Élisée Reclus (1876-94) did many years ago, that the earth is an integral system. It is not
geography itself that summons hierarchy and authority, as is the assumption of Harvey by positing scale as inextricable to
geography, but the delusion of scale itself , a concept that peels off conceptual layers of an onion and
assumes that the individual strata that it isolates can somehow represent the complexity of the
whole. This is a theoretical movement that literally takes geography out of context, and partitions our
thinking about space in rigid Cartesian snapshots. Hierarchy and authority are invoked precisely
because of the Archimedean ontology and supposed mastery that the concept of scale assumes, and
the resultant unconsciousness that it implies. So is it a case of hierarchy being necessary for politics to function, as is Harvey’s
contention, or is this actually a dysfunction in the mode of thinking that underpins scale as a means to conceptualize and order our world?
Politics don’t require authority any more than we require the concept of scale in human geography.
Scale is an abstraction of visioning, an ocular objectification of geography that encourages hierarchical
thinking, even if unintentionally, or more accurately, unconsciously. As an ontological precept, the
detached gaze of scale invokes Donna Haraway’s (1991: 189) ‘god-trick’, “and like the god-trick, this eye
fucks the world” through its point de capiton and the unconsciousness it maintains with respect to
situated knowledges and rhizomic spaces. By ‘jumping scale’ and moving away from the grounded
particularities that are woven through multiple sites of activity and resistance, we problematically
“delimit entry into politics–and the openness of the political–by pre-assigning it to a cordoned register
for resistance” (Marston et al. 2005: 427). A human geography without hierarchy does not mean that we should ignore
hierarchy and pretend it doesn’t exist; rather it is a contention that our ontological premise should be attuned to a
horizontal politics. Similarly, we cannot ignore scale in the hope that it will simply disappear (Moore 2008). The performative
epistemology of scale ensures its perpetuation beyond ontology (Herod 2011), so even if we desire to abandon it as
part of the geographical lexicon, the relationship of the ban, which functions on an inclusive exclusion (Agamben 1998), ensures that it will
continue to haunt the way we think about the world. What
is needed then is a project that is both reflexive about the
limitations of scale and employs a language that subverts and rescinds hierarchy at every possible
opportunity so that a flat ontology becomes possible. A flattening of the spatial register is thus a
commitment to offering a new discursive formation so that our scholarship becomes oriented towards
the reinforcement of a rhizomic politics, rather than making excuses as to why we supposedly require
hierarchy. Once we start to accept hierarchy as a worthwhile or unavoidable component of our
political thought and practice–through particular scalar precepts or otherwise–this licenses other
forms of hierarchy. Should we now expect a ‘radical’ geography to start understanding hierarchies in other incarnations as acceptable
means to an end? Why are hierarchies as a function of scale considered acceptable, but hierarchies as a function of class and capital
accumulation rejected? On what basis can we say that certain hierarchies are appropriate, while others are not? We
can either
indicate that all hierarchy is wrong and let that guide a new rhizomic ethics, or we can build
cathedrals of knowledge that proclaim a sense of mastery, a fabrication that is always a mere façade
because it ignores the ‘hidden enfolded immensities’, the ‘sheer physical messiness’, the ‘sticky
materiality of practical encounters’ that can never be captured, pinned down, or fully understood.
Conceptualizing our being in communities of shared proximity leads to extermination
of difference
Dorota Glowacka [ASSOCIATE PROFESSOR OF HUMANITIES at University of King’s College Halifax; PhD
in Comparative Literature from SUNY Buffalo; Director, Contemporary Studies Programme; Faculty
Member, Contemporary Studies Programme] “Community and the Work of Death: Thanato-Ontology in
Hannah Arendt and Jean-Luc Nancy” Culture Machine, Vol 8 (2006),
http://www.culturemachine.net/index.php/cm/article/viewArticle/38/46
'Why is the idea of community so powerful that it is possible for its members to willingly die for such limited imaginings?'
(Anderson, 1983: 7) The anthropologist's answer is that the Western conception of community has been
founded on the mythical bond of death between its members, who identify themselves as subjects
through the apology of the dead heroes. Yet is not this endless recitation of prosopopeia, which serves
as the self-identificatory apparatus par excellence, also the most deadly mechanism of exclusion? Whose
voices have been foreclosed in the self-addressed movement of the epitaph? Indeed, who, in turn, will have to suffer a death that is absolute,
whose negativity will not be sublated into the good of communal belonging, so that community can perpetuate itself? 'Two different deaths': it
is the 'they' who will perish, without memory and without a remainder, so that the 'we' can be endlessly resurrected and blood can continue to
flow in the veins of the communal body, the veins now distended by the pathos of this recitation. The question I would like to ask in this paper
is whether there can be the thinking of community that interrupts this sanguinary logic. A collectivity that projects itself as unified presence has
been the predominant figure of community in the West. Such
community reveals itself in the splendor of full presence,
'presence to self, without flaw and without any outside' (Nancy, 2001:15; 2003a: 24), through the re-telling of its
foundational myth. By infinitely (self)communicating the story of its inauguration, community ensures
its own transcendence and immortality. For Jean-Luc Nancy, this immanent figure of community has
impeded the 'true' thinking of community as being-together of humans. Twelve years after writing his
seminal essay 'The Inoperative Community', Nancy contends that 'this earth is anything but a sharing of
humanity -- it is a world lacking in world' (2000: xiii). In Being Singular Plural (1996), Nancy returns to Heidegger's discussion of
Mitsein (Being-with) in Being and Time, in order to articulate an ontological foundation of being-together or being-in-common and thus to
move away from the homogenizing idiom of community. Departing from Heidegger's habit of separating the political and the philosophical,
however, Nancy situates his analysis in the context of global ethnic conflicts, the list of which he enumerates in the
'Preface',3 and to which he returns, toward the end of the book, in 'Eulogy for the Mêlée (for Sarajevo, March 1993)'. The fact that Nancy has
extended his reflection on the modes of being-together to include different global areas of conflict indicates that
he is now seeking to
re-think 'community' in a perspective that is no longer confined to the problematic of specifically
Western subjectivity. This allows me to add to Nancy's 'necessarily incomplete' list the name of another community-in-conflict: the
Polish-Jewish community, and to consider, very briefly, the tragic fact of the disappearance of that community during the events of the
Holocaust and in its aftermath. Within a Nancean problematic, it is possible to argue that the history of this community in Poland, which has
been disastrous to the extent that it is now virtually extinct, is related, as in Sarajevo, to a failure of thinking community as Being-with. What I
would like to bring out of Nancy's discussion, drawing on the Polish example in particular, is that rethinking
community as being-in-common necessitates the interruption of the myth of communal death by death understood as what I
would refer to, contra Heidegger, as 'dying-with' or 'Being-in-common-towards-death'. Although Nancy himself
is reluctant to step outside the ontological horizon as delineated by Dasein's encounter with death and would thus refrain from such
formulations, it is when he reflects on death (in the closing section of his essay 'Of Being Singular Plural' in Being Singular Plural), as well as in
his analysis of the 'forbidden' representations of Holocaust death in Au fond des images (2003b), that he finds Heidegger's project to be lacking
(en souffrance). This leads me to a hypothesis, partly inspired by Maurice Blanchot's response to Nancy in The Unavowable Community (1983),
that the
failure of experiencing the meaning of death as 'dying-with' is tantamount to the impossibility of
'Being-with'. In the past and in the present, this failure has culminated in acts of murderous, genocidal
hatred, that is, in attempts to erase a collectivity's proper name, and it is significant that many of the
proper names on Nancy's list fall under the 1948 United Nations' definition of the genocide as 'acts
committed with intent to destroy, in whole or in part, a national, ethnic, racial or religious group'.4 The
Polish national narrative has been forcefully structured by communal identification in terms of the work of death, resulting in a mythical
construction from which the death of those who are perceived as other must be excluded. It is important to underscore that the history of
Polish-Jewish relations has never been marred by violence of genocidal proportions on the part of the ethnic Poles. I will argue nevertheless
that what this history discloses is a fundamental failure to produce modes of co-habitation grounded in ontological being-in-common. As
became tragically apparent during the Holocaust and in its aftermath, Poles' disidentification with their
Jewish neighbors led to an overall posture of indifference toward (and in some cases direct complicity in) their
murder. Again, I will contend that this failure of 'Being-with' in turn reveals a foreclosure of 'dying-with' in the
Polish mode of communal belonging, that is, a violent expropriation of the Jewish death. At this fraught
historical juncture of ontology and politics, I find it fruitful to engage Nancy's forays into the thinking of death and the community with Hannah
Arendt's reflection on the political and social space. In 'The Nazi Myth' (1989), which Nancy co-authored with Lacoue-Labarthe, Arendt's
definition of ideology as a self-fulfilling logic 'by which a movement of history is explained as one consistent process' (The Origins of
Totalitarianism, qtd in Lacoue-Labarthe and Nancy, 1989: 293) is the starting point for the analysis of the myth. Nancy and Lacoue-Labarthe
elaborate Aredn't analysis in order to argue that the
will to mythical identification, which saw its perverse culmination
in the extermination of European Jews during the Nazi era, is inextricable from the general problematic
of the Western metaphysical subject. It is also in that essay that Nancy and Lacoue-Labarthe condemn 'the thought that puts itself
deliberately (or confusedly, emotionally) at the service of an ideology behind which it hides, or from whose struggle it profits', citing
Heidegger's ten month-involvement with National Socialism as an example par excellence.
Little steps are insufficient—we need radical systemic critique—anything less is a
surveillance strategy which makes flagrant adventurism inevitable
Smith ’11 Mick Smith, Against Ecological Sovereignty, University of Minnesota Press: Minneapolis,
2011, p. 90-91
This talk of the complicity between absolute innocence and systematic guilt may seem a little abstract. But think, for example, of the ways in
which we are frequently told by authority that the ecological crisis is everyone’s and/or no one’s fault and that it is in all of our own (selfish)
interests to do our bit to ensure the world’s future. Think too of
the tokenistic apolitical “solutions” this perspective engenders—let’s all drive a few miles less, all use energy-efficient light-bulbs, and so on. These actions
may offer ways of further absolving an already systematically dispersed guilt, but they hardly touch
the systemic nature of the problem, and they certainly do not identify those who profit most from this
situation. This pattern is repeated in almost every aspect of modern existence. Think of modern
cityscapes or of the shopping mall as expressions of modern civilization’s social, economic, and
(im)moral orders. These are far from being places of free association. They are constantly and
continuously monitored by technology’s eyes in the service of states and corporations (Lyon 2001). Of course, those who police the populace argue that the innocent have nothing to fear from even the most
intrusive forms of public surveillance, that it is only the guilty who should be concerned. But this is simply not true, nor, as Foucault (1991)
argued in a different context, is this the rationale behind such panopticism. Everyone
is captured on closed-circuit
television, and it is precisely any semblance of innocence that is lost through this incessant
observation of the quotidian. All are deemed (potentially) guilty and expected to internalize the moral
norms such surveillance imposes, to police themselves to ensure security for private property, the
circulation of capital, and fictitious (anti)social contracts—“NO LOITERING ALLOWED,” “FREE PARKING FOR CUSTOMERS ONLY.” This egalitarian dispersal of guilt contaminates everyone, placing them on interminable
trial, since no one can ever be proven innocent by observations that will proceed into an indefinite
future. Thus, as Camus (1984, 12) remarked, it is indeed innocence that is called upon to justify itself, to justify why
it should (but will not) be allowed to survive in any aspect of everyday lives. This surveillance is also (and
by no means accidentally), as Agamben argues, a key aspect of the biopolitical reduction of politics to the policing
of disciplined subjects, redefined not as individuals but as “bare life.” Again, this refers to people
stripped of their political and ethical possibilities and now primarily identified in terms of their bodily
inscription of transferable information. People are not quite reduced to animality, to just their
biology, but their biological being is made increasingly subject to observation, management, and
control as the key mode of operation of contemporary authority. Video cameras; facial, gait, and voice pattern
recognition technologies, fingerprints; retinal scans; DNA analysis; electronic tagging; the collection of consumer information; data mining and
tracking; global positioning systems; communications intelligence; and so on, concern themselves with every aspect of people’s lives but are in
no sense concerned for the individuals (in their singularity). Rather, they
measure and evaluate that person’s every move
as a potential risk to the security of property, the security of capital(ism), the security of the moral
order, and the security of the state. The entire populace comes to occupy an increasingly pervasive
state of exception, a “zone of anomie” (Agamben 2005, 50) that is certainly not a state of nature but a
technologically mediated political (and ethical) void. And again, if this authoritarian monitoring and
control is questioned, the answer is that it is everyone’s fault and nobody’s, that it is actually our
desires and our ultimate personal security (the security of people and state) that determined that the relevant
authorities had no choice but to take this path, that it is a small cost to pay for the protection of
civilization, that, in effect, we are all guilty of our own impending technologically mediated reduction
to bare life.
1NR
Case
For example, local ownership doesn’t change who builds it—aff’s incentives line the
pockets of tech MNCs
Harris 11 (Jerry. "Going Green to Stay in the Black: Transnational Capitalism and Renewable
Energy." Perspectives on Global Development and Technology 10.1 (2011): 41-59. Print.)
The wind power industry is already dominated by large TNCs. The top eleven corporations that
produce and install wind turbines held 95 percent of the market in 2008. These TNCs are a combination of
relatively new players that established themselves over the last twenty some years and older corporate giants. Vestas, the Danish TNC, held the
number one spot with 19 percent of the market, down from 24.6 percent in 2007. The only significant US TNC, GE Energy, was second with 18
percent. Three German transnationals, Enercon, Siemens and Nordex occupied 20 percent of the market;
two Spanish corporations, Gamesa and Acciona held 15 percent; Sinovel, Dongfang and Goldwind, all from China controlled 13 percent; and the
India giant, Suzlon, acquiring REpower of Germany, now has 8 percent (ekopolitan 2010). Although wind power is only one percent of global
energy, it continues to experience rapid growth and receives the largest share of renewable energy investments—$48.9 billion in 2009.
Because of the size and complexity of wind turbines the industry tends towards large manufacturers
that often also install, service and maintain wind farms. Politically, building a green energy base is
presented with nationalist rhetoric about oil independence and local jobs. And government subsidies and
incentives have been key to creating a market in all countries moving in this direction. But as pointed out in a study by the Peterson Institute
and World Resources Institute, “Cross-border
investment rather than trade is the dominant mode of global
integration. Standard international trade in wind energy equipment is relatively small and declining.
Instead, foreign direct investment (FDI) flows dominate the global integration of the wind sector”
(Kirkegaard, Thilo and Weischer 2009). With $50 billion in total sales in 2008 only about 10 percent were in exports. An indicator of growing
integration is FDI in newly-built wind turbine factories. World totals were just $200 million in 2003 but grew to about $2.3 billion by 2008. (fDi Markets Project Database 2010) Consequently, although the industry is young, it is already following transnational lines of development. At the
end of 2009 there were 130,000 wind turbines installed or under construction. Europe has half the world’s wind turbine capacity and the
industry employs 155,000 workers, but China and the US are its fastest-growing markets.
This disavowal shuts down effective interrogation. Reframing the problem is a prior
question.
Žižek ’12 Slavoj Žižek, interviewed by debaters Brad Bolman (hipster) and Tara Raghuveer (went to
debate camp with Jake Novack), “A Conversation with Slavoj Zizek,” Harvard Crimson, 2/10/2012,
http://www.thecrimson.com/article/2012/2/10/theres-this-slovenian-saying/
Fifteen Minutes: What is the role of academia at an institution like Harvard in the current global crisis? Slavoj Žižek: What is crucial and also I
think—especially today, when we have some kind of re-emergence of at least some kind of practical spirit, protest, and so on—one of the
dangers I see amongst some radical academia circles is this mistrust in theory, you know, saying, “Who needs fat books on Hegel and logic? My
god, they have to act!” No, I think quite on the contrary. More than ever, today it’s crucial to emphasize that on the one hand, yes, every
empirical example undermines theory. There are no full examples. But, point two, this does not mean that we should turn the examples against
theory. At the same time, there is no exception. There are no examples outside theories. Every example of a theory is an indication of the inner
split dynamics of the theory itself, and here dialectics begins, and so on.... Don’t fall into the trap of feeling guilty, especially if you have the luck
of studying in such a rich place. All this bullshit like, “Somalian children are starving....” No! Somalian children are not starving because you have
a good time here. There are others who are much more guilty. Rather, use the opportunity. Society will need more and more intellectual work.
It’s this topic of intellectuals being privileged—this is typical petty-bourgeois manipulation to make you feel guilty. You know who told me the
best story? The British Marxist, Terry Eagleton. He told me that 20 or 30 years ago he saw a big British Marxist figure, Eric Hobsbawm, the
historian, giving a talk to ordinary workers in a factory. Hobsbawm wanted to appear popular, not elitist, so he started by saying to the workers,
“Listen, I’m not here to teach you. I am here to exchange experiences. I will probably learn more from you than you will from me.” Then he got
the answer of a lifetime. One ordinary worker interrupted him and said, “Fuck off! You are privileged to study, to know. You are here to teach
us! Yes, we should learn from you! Don’t give us this bullshit, ‘We all know the same.’ You are elite in the sense that you were privileged to
learn and to know a lot. So of course we should learn from you. Don’t play this false egalitarianism.” Again, I think there is a certain strategy
today even more, and I speak so bitterly about it because in Europe they are approaching it. I think Europe
is approaching some
kind of intellectual suicide in the sense that higher education is becoming more and more streamlined.
They are talking the same way communists were talking 40 years ago when they wanted to crush
intellectual life. They claimed that intellectuals are too abstract in their ivory towers; they are not
dealing with real problems; we need education so that it will help real people—real societies’
problems. And then, again, in a debate I had in France, some high politician made it clear what he thinks and he said...in that
time in France there were those demonstrations in Paris, the car burnings. He said, “Look, cars are
burning in the suburbs of Paris: We don’t need your abstract Marxist theories. We need psychologists
to tell us how to control the mob. We need urban planners to tell us how to organize the suburbs to
make demonstrations difficult.” But this is a job for experts, and the whole point of being intellectual
today is to be more than an expert. Experts are doing what? They are solving problems formulated by
others. You know, if a politician comes to you, “Fuck it! Cars are burning! Tell me what’s the psychological
mechanism, how do we dominate it?” No, an intellectual asks a totally different question: “What are
the roots? Is the system guilty?” An intellectual, before answering a question, changes the question.
He starts with, “But is this the right way to formulate the question?” FM: You spoke at Occupy Wall Street a few
months ago. What is your personal involvement with the Occupy Wall Street movement, and what do you think the protests signify? SZ: None.
My personal involvement was some guy who was connected with it, and he told me, “Would you go there, come there?” And I said, “Okay.
Why not?” Then the same guy told me,“Be careful, because microphones are prohibited, you know, it’s this echoing, repeating.” So my friend
told me, frankly, to be demagogic: “Just try to be as much as possible effective, short, slow,” and so on, and that was it. I didn’t even drop my
work. What does [Occupy] mean? Then they
tell you, “Oh, Wall Street should work for the Main Street, not the
opposite,” but the problem is not this. The problem is that the system stated that there is no Main
Street without Wall Street. That is to say that banking and credits are absolutely crucial for the system to
function today. That is why I understand Obama when—two years ago you know when the first, I think it was, $750 billion and a bit
more—it was simply blackmail and it was not possible to say no because that’s how the system functions. If Wall Street were to break down,
everything would break. We
should think more radically. So again, the formula “Give money to Main Street and not to Wall Street” is ruined. That is to say, all these honest, hardworking people who do their jobs cannot find work now. Think how to change that. Think how to change [the] mechanisms of that. We are no longer dealing with short-term crises like in 2008.
Don’t buy their “cede the political” blackmail or reformist neurosis—we get nothing
from their third-way pandering
Smith ’11 Mick Smith, Against Ecological Sovereignty, University of Minnesota Press: Minneapolis,
2011, p. 219-221
THE PURPOSE OF THIS BOOK is to open possibilities for rethinking and constituting ecological ethics and politics—so should
one need to apologize
for a lack of specific environmental policies? Should the book declaim on the necessity of using low-energy light bulbs or of increasing the
price of gasoline? Alternatively, should one point out that such limited measures are an apology (a poor substitute) for the
absence of any real ecological ethics and politics? Is it possible to read this book and still think that I believe the problem is one of the
incompleteness of the current policy agenda and that the solution to our environmental ills is an ever-more complex and complete legislative program to represent
and regulate the world? It may be possible, but I hope not. For this
desire for legislative completeness, this institutional lack (in
a Lacanian sense), the desire for policy after policy, is clearly the regulative counterpart to the global metastasis
of those free-market approaches that effectively reduce the world’s diversity to a common currency, a
universal, abstract, monetary exchange value. They are, almost literally, two sides of the same coin, the currency
of modernism and capitalism, and their presence is a tangible indicator of the spread of biopolitics. Meanwhile, the
restricted economy of debates around policy priorities and cost-benefit analyses almost always excludes
more profound questions and concerns about the singular denizens of a more-than-human world. The purpose of
this book is to provide philosophical grounds on which such questions and concerns can be raised, to challenge the myths and metaphysics that would regard them
as inconsequential. In this, no doubt, it is already overambitious. In any case, unlike the majority of people writing on the environment, I do not have a recipe for
saving the natural world, a set of rules to follow, a list of guiding principles, or a favorite ideology or institutional form to promote as a solution. For before all this,
we need to ask what “saving the natural world” might mean. And this requires, as I have argued, sustaining
ethics, and politics, and ecology over and against sovereign power—the exercise of which reduces people to
bare life and the more-than-human world to standing reserve. This is not a politically or ethically neutral position in the way
that liberalism, for example, would like to present itself; it reenvisages ecological communities in very different ways. Of course, I have opinions on what is to be
done, although not in any Leninist fashion. My sympathies might find some expression in, for example, ecologically revisioning Kropotkin’s mutual aid and
Proudhon’s mutualism. But expanding on these ideas here would just provide an excuse for many not to take the broader argument about sovereignty seriously.
What we need are plural ways to imagine a world without sovereign power, without human dominion. And so, instead
of an apology or an apologia for the lack of policy recommendations (and who exactly would implement them), I offer an apologue, a “moral fable, esp. one having
animals or inanimate things as its characters” (New Shorter Oxford Dictionary). I have in mind a recent image that momentarily broke through the self-referential
shells that accrete around so many of us, cutting us off from the world as it really is. Not an ancient painting on a rock wall, but a still photograph of a living polar
bear standing, apparently stranded, on a small iceberg—a remainder of the ice floes melting under the onslaught of global warming. Now only a wishful thinker (or
a bureaucrat) would contend that such bears will be saved by more stringent hunting permits (deeply as I abhor sport hunting), by a policy to increase ecotourism,
or by a captive breeding program in a zoo. These measures are clearly inadequate for the task. Despite conforming to a policy model and taking account of
realpolitik, they are far from being realistic. For the
bear, in its essence, is ecologically inseparable from the ice-clad
north; it lives and breathes as an inhabitant, a denizen, of such apparently inhospitable places. It is an
opening on an ursine world that we can only actually imagine, but its image still flashes before us
relating “what-has-been to the now” (Benjamin 1999, 462). This image is one of many that, through affecting
us, have the potential to inspire new ethical and political constellations like those of radical ecology. For
a radical ecologist, the bear is not a resource (not even an ecotourist sight) but a being of ethical concern, albeit in so many respects alien, pointless, independent
from us—and, for the most part, indifferent to us. It can become close to our hearts (which is not to expect to hug it but to say that it elicits an ethical response that
inspires a politics). And this politics argues that only
a hypocrite could claim that the solution to the plight of such polar
bears lies in the resolution of questions of arctic sovereignty, in an agreement to take account of the
rightful territorial claims of states over these portions of the earth. Once defined as sovereign
territories, there can be no doubt that the minerals and oil beneath the arctic ocean will be exploited
in the "name of the people and the state” and to make money for capitalists. And this will add
immeasurably to the same atmospheric carbon dioxide that already causes the melting ice. After all, there
is no other purpose in declaring sovereignty except to be able to make such decisions. And once this
power of decision is ceded, all the nature reserves in the world will not save the bears or the ecology
they inhabit. They will (in Nancy’s and Agamben’s terms) have been "abandoned," “sacrificed,” for nothing and by
nothing. So what could be done? When those seeking a policy solution to the Arctic’s melting ice ask this
question, they are not looking to institute policies that would abandon or radically transform
capitalism, abolish big oil, or even close down the Athabasca tar sands. They expect to legislate for a
mode of gradual amelioration that in no way threatens the current economic and political forms they
(wrongly) assume to be ultimate realities. In short, they want a solution that maintains the claims of
ecological sovereignty.
Or predictive debate solves 100% of the impact
Garrett ’12 Banning Garrett, director of the Asia Program and Strategic Foresight Initiative at the
Atlantic Council, “In Search of Sand Piles and Butterflies,” Strategic Foresight Initiative Blog, Atlantic
Council, 1/23/2012, http://www.acus.org/disruptive_change/search-sand-piles-and-butterflies
“Disruptive change” that produces “strategic shocks” has become an increasing concern for
policymakers, shaken by momentous events of the last couple of decades that were not on their radar
screens – from the fall of the Berlin Wall and the 9/11 terrorist attacks to the 2008 financial crisis and
the “Arab Spring.” These were all shocks to the international system, predictable perhaps in retrospect
but predicted by very few experts or officials on the eve of their occurrence. This “failure” to predict
specific strategic shocks does not mean we should abandon efforts to foresee disruptive change or look
at all possible shocks as equally plausible. Most strategic shocks do not “come out of the blue.” We can
understand and project long-term global trends and foresee at least some of their potential effects,
including potential shocks and disruptive change. We can construct alternative futures scenarios to envision
potential change, including strategic shocks. Based on trends and scenarios, we can take actions to avert possible undesirable
outcomes or limit the damage should they occur. We can also identify potential opportunities or at least
more desirable futures that we seek to seize through policy course corrections. We should distinguish
“strategic shocks” that are developments that could happen at any time and yet may never occur. This
would include such plausible possibilities as use of a nuclear device by terrorists or the emergence of an
airborne human-to-human virus that could kill millions. Such possible but not inevitable developments
would not necessarily be the result of worsening long-term trends. Like possible terrorist attacks, governments
need to try to prepare for such possible catastrophes though they may never happen. But there are other potential
disruptive changes, including those that create strategic shocks to the international system, that can result from identifiable trends that make them more likely in
the future—for example, growing demand for food, water, energy and other resources with supplies failing to keep pace.
We need to look for the
“sand piles” that the trends are building and are subject to collapse at some point with an additional but indeterminable “grain of sand” and identify the potential for the sudden appearance of
“butterflies” that might flap their wings and set off hurricanes. Mohamed Bouazizi, who immolated himself December 17, 2010
in Sidi Bouzid, Tunisia, was the butterfly who flapped his wings and (with the “force multiplier” of social media) set off a hurricane that is still blowing throughout
the Middle East. Perhaps the
metaphors are mixed, but the butterfly’s delicate flapping destabilized the sand
piles (of rising food prices, unemployed students, corrupt government, etc.) that had been building in Tunisia, Egypt, and much of
the region. The result was a sudden collapse and disruptive change that has created a strategic shock that is still producing tremors throughout the region.
But the collapse was due to cumulative effects of identifiable and converging trends. When and what form change will take may be difficult if not impossible to
foresee, but the
likelihood of a tipping point being reached—that linear continuation of the present into the
future is increasingly unlikely—can be foreseen. Foreseeing the direction of change and the likelihood of discontinuities, both sudden
and protracted, is thus not beyond our capabilities. While efforts to understand and project long-term global trends cannot provide accurate predictions, for
example, of the GDPs of China, India, and the United States in 2030, looking at economic and GDP growth trends can provide insights into a wide range of possible
outcomes. For
example, it is useful to assess the implications if the GDPs of these three countries each
grew at currently projected average rates – even if one understands that there are many factors that
can and likely will alter their trajectories. The projected growth trends of the three countries suggest that at some point in the next few
decades, perhaps between 2015 and 2030, China’s GDP will surpass that of the United States. And by adding consideration of the economic impact of demographic
trends (China’s aging and India’s youth bulge), there is a possibility that India will surpass both China and the US, perhaps by 2040 or 2050, to become the world’s
largest economy. These potential shifts of economic power from the United States to China then to India would likely prove strategically disruptive on a global scale.
Although slowly developing, such disruptive change would likely have an even greater strategic impact than the Arab Spring. The “rise” of China has already proved
strategically disruptive, creating a potential China-United States regional rivalry in Asia two decades after Americans fretted about an emerging US conflict with a
then-rising Japan challenging American economic supremacy. Despite uncertainty surrounding projections, foreseeing the possibility (some would say high
likelihood) that China and then India will replace the United States as the largest global economy has near-term policy implications for the US and Europe. The
potential long-term shift in economic clout and concomitant shift in political power and strategic position away from the US and the West and toward the East has
implications for near-term policy choices. Policymakers could conclude, for example, that the West should make greater efforts to bring the emerging (or reemerging) great powers into close consultation on the “rules of the game” and global governance as the West’s influence in shaping institutions and behavior is
likely to significantly diminish over the next few decades. The alternative to finding such a near-term accommodation could be increasing mutual suspicions and
hostility rather than trust and growing cooperation between rising and established powers—especially between China and the United States—leading to a
fragmented, zero-sum world in which major global challenges like climate change and resource scarcities are not addressed and conflict over dwindling resources
and markets intensifies and even bleeds into the military realm among the major actors. Neither of these scenarios may play out, of course. Other global trends
suggest that sometime in the next several decades, the world could encounter a “hard ceiling” on resource availability and that climate change
could throw the global economy into a tailspin, harming China and India even more than the United States. In this case, perhaps India and China would falter
economically, leading to internal instability and crises of governance, significantly reducing their rates of economic growth and their ability to project power and play as significant an international role as might otherwise have been expected. But this scenario has other implications for policymakers, including dangers posed to
Western interests from “failure” of China and/or India, which could produce huge strategic shocks to the global system, including a prolonged economic downturn
in the West as well as the East. Thus, looking
at relatively slowly developing trends can provide foresight for necessary
course corrections now to avert catastrophic disruptive change or prepare to be more resilient if
foreseeable but unavoidable shocks occur. Policymakers and the public will press for predictions and
criticize government officials and intelligence agencies when momentous events “catch us by surprise.”
But unfortunately, as both Yogi Berra and Niels Bohr are credited with saying, “prediction is very hard,
especially about the future.” One can predict with great accuracy many natural events such as sunrise and the boiling point of water at sea level. We can rely on the
infallible predictability of the laws of physics to build airplanes and automobiles and iPhones. And we can calculate with great precision the destruction footprint of
a given nuclear weapon. Yet
even physical systems like the weather, as they become more complex, become
increasingly difficult and even inherently impossible to predict with precision. With human behavior,
specific predictions are not just hard, but impossible as uncertainty is inherent in the human universe. As
futurist Paul Saffo wrote in the Harvard Business Review in 2007, “prediction is possible only in a world in which events are preordained and no amount of actions in
the present can influence the future outcome.” One cannot know for certain what actions he or she will take in the future much less the actions of another person,
a group of people or a nation state. This obvious point is made to dismiss any idea of trying to “predict” what will occur in the future with accuracy, especially the
outcomes of the interplay of many complex factors, including the interaction of human and natural systems. More broadly, the human future is not predetermined
but rather depends on human choices at every turning point, cumulatively leading to different alternative outcomes. This
uncertainty about the
future also means the future is amenable to human choice and leadership. Trends analyses—including
foreseeing trends leading to disruptive change—are thus essential to provide individuals, organizations and political leaders
with the strategic foresight to take steps to mitigate the dangers ahead and seize the opportunities for
shaping the human destiny. Peter Schwartz nearly a decade ago characterized the convergence of trends and disruptive change as “inevitable
surprises.” He wrote in Inevitable Surprises that “in the coming decades we face many more inevitable surprises: major
discontinuities in the economic, political and social spheres of our world, each one changing the ‘rules of
the game’ as it’s played today. If anything, there will be more, not fewer, surprises in the future, and they will
all be interconnected. Together, they will lead us into a world, ten to fifteen years hence, that is
fundamentally different from the one we know today. Understanding these inevitable surprises in our future is
critical for the decisions we have to make today …. We may not be able to prevent catastrophe
(although sometimes we can), but we can certainly increase our ability to respond, and our ability to see
opportunities that we would otherwise miss.”
DA
Speciesism makes unspeakable violence banal and invisible
Kochi and Ordan ‘8 Tarik Kochi & Noam Ordan, “An Argument for the Global Suicide of Humanity,”
borderlands, vol. 7 no. 3, 2008,
http://www.borderlands.net.au/vol7no3_2008/kochiordan_argument.pdf
Within the picture many paint of humanity, events such as the Holocaust are considered as an
exception, an aberration. The Holocaust is often portrayed as an example of ‘evil’, a moment of hatred, madness and cruelty (cf. the
differing accounts of ‘evil’ given in Neiman, 2004). The event is also treated as one through which humanity might comprehend its own
weakness and draw strength, via the resolve that such actions will never happen again. However, if we
take seriously the differing
ways in which the Holocaust was ‘evil’, then one must surely include alongside it the almost
uncountable numbers of genocides that have occurred throughout human history. Hence, if we are to
think of the content of the ‘human heritage’, then this must include the annihilation of indigenous
peoples and their cultures across the globe and the manner in which their beliefs, behaviours and
social practices have been erased from what the people of the ‘West’ generally consider to be the
content of a human heritage. Again the history of colonialism is telling here. It reminds us exactly how
normal, regular and mundane acts of annihilation of different forms of human life and culture have
been throughout human history. Indeed the history of colonialism, in its various guises, points to the fact that so many of our
legal institutions and forms of ethical life (i.e. nation-states which pride themselves on protecting human rights through the
rule of law) have been founded upon colonial violence, war and the appropriation of other peoples’ land
(Schmitt, 2003; Benjamin, 1986). Further, the history of colonialism highlights the central function of ‘race war’
that often underlies human social organisation and many of its legal and ethical systems of thought
(Foucault, 2003). This history of modern colonialism thus presents a key to understanding that events such as the Holocaust are
not an aberration and exception but are closer to the norm, and sadly, lie at the heart of any heritage
of humanity. After all, all too often the European colonisation of the globe was justified by arguments
that indigenous inhabitants were racially ‘inferior’ and in some instances that they were closer to
‘apes’ than to humans (Diamond, 2006). Such violence justified by an erroneous view of ‘race’ is in many
ways merely an extension of an underlying attitude of speciesism involving a long history of killing
and enslavement of non-human species by humans. Such a connection between the two histories of
inter-human violence (via the mythical notion of differing human ‘races’) and interspecies violence, is well expressed
in Isaac Bashevis Singer’s comment that whereas humans consider themselves “the crown of
creation”, for animals “all people are Nazis” and animal life is “an eternal Treblinka” (Singer, 1968, p.750).
Certainly many organisms use ‘force’ to survive and thrive at the expense of their others. Humans are not special in this regard. However
humans, due to a particular form of self-awareness and ability to plan for the future, have the capacity
to carry out highly organised forms of violence and destruction (i.e. the Holocaust; the massacre and enslavement of
indigenous peoples by Europeans) and the capacity to develop forms of social organisation and communal life in
which harm and violence are organised and regulated. It is perhaps this capacity for reflection upon the merits of harm
and violence (the moral reflection upon the good and bad of violence) which gives humans a ‘special’ place within the food chain. Nonetheless,
with these capacities comes responsibility, and our proposal of global suicide is directed at bringing into full view the issue of human moral
responsibility. When taking a wider view of history, one which focuses on the relationship of humans towards other species, it becomes clear
that the human
heritage – and the propagation of itself as a thing of value – has occurred on the back of seemingly
endless acts of violence, destruction, killing and genocide. While this cannot be verified, perhaps ‘human’
history and progress begins with the genocide of the Neanderthals and never loses a step thereafter.
It only takes a short glimpse at the list of all the sufferings caused by humanity for one to begin to
question whether this species deserves to continue into the future. The list of human-made disasters
is ever-growing after all: suffering caused to animals in the name of science or human health, not to
mention the cosmetic, food and textile industries; damage to the environment by polluting the earth
and its stratosphere; deforesting and overuse of natural resources; and of course, inflicting suffering
on fellow human beings all over the globe, from killing to economic exploitation to abusing minorities,
individually and collectively. In light of such a list it becomes difficult to hold onto any assumption
that the human species possesses any special or higher value over other species. Indeed, if humans at any
point did possess such a value, because of higher cognitive powers, or even because of a special status granted by God, then humanity has
surely devalued itself through its actions and has forfeited its claim to any special place within the cosmos. In our
development from
higher predator to semi-conscious destroyer we have perhaps undermined all that is good in
ourselves and have left behind a heritage best exemplified by the images of the gas chamber and the
incinerator. We draw attention to this darker and pessimistic view of the human heritage not for
dramatic reasons but to throw into question the stability of a modern humanism which sees itself as
inherently ‘good’ and which presents the action of cosmic colonisation as a solution to environmental
catastrophe. Rather than presenting a solution it would seem that an ideology of modern humanism is
itself a greater part of the problem, and as part of the problem it cannot overcome itself purely with
itself. If this is so, what perhaps needs to occur is the attempt to let go of any one-sided and privileged
value of the ‘human’ as it relates to moral activity. That is, perhaps it is modern humanism itself that must
be negated and supplemented by a utopian anti-humanism and moral action re-conceived through
this relational or dialectical standpoint in thought.